The world is governed by scores, ranks and league tables. UK education certainly carries a heavy burden in this respect. Everything that every single person in a school, college or university does seems subject to several KPIs and satisfaction surveys, and eligible for some industry award. Later today the Times Higher World University Rankings will be published, and I am sure there will be much ado about who’s gained or lost, how many UK universities are in the top 10 and all that.
It’s easy to just dismiss it all. But I do think there is value in the idea of some accountability and transparency about how our institutions are performing. Picking a school for yourself or your child, choosing an R&D partner or assigning a research grant is complex. Universities are incredibly complex, and the idea that someone tries to make that complexity a bit more intelligible is very useful.
There are also risks. The most important risk isn’t really that a ranking might be ‘wrong’. What is ‘wrong’ anyway in this context? These are all approximations from a certain perspective, and I don’t have a problem with that in itself. The biggest risk, I think, is that poorly chosen indicators can decrease performance and kill innovation. I’ll try to illustrate by taking an indicator from our Key Information Set (KIS): contact time. I understand the idea behind that indicator, but let’s look at the feedback effect:
To improve their score, an institution wants to maximise contact hours. There is no more money, so the easiest thing to do is to reduce tutorials and seminars and deliver most contact via large-scale lectures. Teaching 20 hours to a single group of 400 is far easier than teaching 10 hours to 4 groups of 100: it takes half the staff time yet reports double the contact hours per student. However, that is not an improvement in the learning experience.
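To make that incentive concrete, here is a rough back-of-the-envelope sketch (plain Python, using only the illustrative numbers above; the scenario labels and function names are mine) comparing staff effort with what a contact-time indicator would actually report:

```python
# Two ways of teaching the same 400 students, as in the example above.
def staff_hours(hours_per_group, groups):
    """Total teaching hours the institution has to resource."""
    return hours_per_group * groups

def reported_contact_hours(hours_per_group):
    """Contact hours per student, as a KIS-style indicator would count them."""
    return hours_per_group

scenarios = {
    "lectures (1 group of 400)": (20, 1),
    "seminars (4 groups of 100)": (10, 4),
}

for name, (hours, groups) in scenarios.items():
    print(f"{name}: {staff_hours(hours, groups)} staff hours, "
          f"{reported_contact_hours(hours)} contact hours per student reported")

# lectures (1 group of 400): 20 staff hours, 20 contact hours per student reported
# seminars (4 groups of 100): 40 staff hours, 10 contact hours per student reported
```

The lecture-only model looks twice as good on the indicator while costing half the teaching effort, which is exactly the gaming pressure the rest of this post is about.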
Now let’s say that an institution is genuinely looking to improve learner engagement and achievement. It might explore the flipped classroom: lectures, generally not a transformational experience, are replaced with quality recordings and resources online, and face-to-face time focuses on interactive workshops, seminars and tutorials. While these sessions are much more valuable, they may add up to slightly fewer hours because the groups are smaller. Teaching might have improved, but the institution will look worse in its key data set.
So I think making a league table, key data set or satisfaction survey comes with some responsibility: rankings are not just snapshots of performance, they become targets for performance. These measurements have a very strong observer effect, I suppose akin to the Hawthorne effect. Choosing a public performance indicator should be done with an understanding of the target that is implicitly being set, and the way the target is defined should not unfairly discriminate against unconventional approaches and innovations. Generally this means rankings should focus on outcomes, not on process. While I am not a capitalism apologist by any stretch of the imagination, what it does very well is measure purely (monetary) outcomes, rewarding performance and innovation irrespective of process. We might want to define our desired outcomes differently, but the principle should be the same.