The psychology of rules

I came across this interesting article from New Zealand, about a school that has gotten rid of all the little rules regulating its playground. The unexpected result: a safer school with more confident children.

Follow the rules

It’s a pattern I’ve often observed. In the Information Commons we try to minimise rules and give students ownership of and responsibility for their space (have a look at our code of conduct). It isn’t a perfect solution, but largely it works well. Taking into account the volume of visitors (1.3M per year, and growing), we have surprisingly few behaviour problems, thefts and the like.

I’ve thought about the dynamics of this a lot. A side effect of a rule, it seems, is that it moves responsibility away from an internally motivated judgement to something external, generic and anonymous. There are two main risks linked to this. The first is that human judgement is far more versatile than a rule. It is almost impossible to define a rule that covers all possible circumstances adequately. The second is that we have removed the development of an internal motivation to moderate behaviour. People tend to be far more casual about breaking an externally imposed rule than an internally motivated behaviour.

When we find people are no longer internally motivated to obey our rules, the response is often to introduce sanctions, an external motivation to comply. Sanctions and punishments, however, introduce a new problematic side effect: the sanction now becomes the most ‘visible’ consequence of breaking the rule. Instead of weighing up the effects of their actions, people might now just consider whether the risk of incurring a sanction is a price worth paying for breaking a rule.

It’s a funny dynamic. Rules seem a great way to solve a problem on paper, but in my experience they seldom work as expected. That is not to say that all rules are bad, of course, but in my view we are often far too casual about introducing them to solve a problem. This school seems to have found an interesting solution, where the group as a whole discusses and commits to a possible rule or guideline that is deemed necessary to solve a problem. This means people are still internally motivated and engaged with moderating their behaviour, which might circumvent the problem. I’m glad some research is being done into this, and that it is producing results as encouraging as the outcome at this New Zealand school.

Responsible university rankings

The world is governed by scores, ranks and league tables. UK education certainly carries a heavy burden in this respect. Everything that every single person in a school, college or university does seems subject to several KPIs and satisfaction surveys, and eligible for some industry award. Later today the Times Higher World University Rankings will be published, and I am sure there will be much ado about who has gained or lost, how many UK universities are in the top 10 and all that.

It’s easy to just dismiss it all. But I do think there is value in the idea of some accountability and transparency about how our institutions are performing. It is complex to understand how to pick a school for yourself or your child, choose an R&D partner or assign a research grant. Universities are incredibly complex, and the idea that someone tries to make that complexity a bit more intelligible is very useful.

There are also risks. The most important risk isn’t really that a ranking might be ‘wrong’. What is ‘wrong’ anyway in this context? These are all approximations from a certain perspective and I don’t have a problem with that in itself. What I think is the biggest risk is that poorly chosen indicators can decrease performance and kill innovation. I’ll try to illustrate by taking an indicator from our Key Information Set (KIS): Contact time. I understand the idea behind that indicator, but let’s look at the feedback effect:

To improve their score, an institution wants to maximise contact hours. There is no more money, so the easiest thing to do is to reduce tutorials and seminars and deliver most contact via large-scale lectures. Teaching 20 hours to a group of 400 is far easier than teaching 10 hours to 4 groups of 100. However, that is not an improvement in the learning experience.

Now let’s say that an institution is genuinely looking to improve learner engagement and achievement. They might explore the flipped classroom. They replace lectures, generally not a transformational experience, with quality recordings and resources online. Face-to-face time focuses on interactive workshops, seminars and tutorials. While these are much more valuable, they might be slightly fewer in number as the groups are smaller. Teaching might have improved, but the institution will look worse in their key dataset.
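To make the feedback effect concrete, here is a toy sketch in Python. The numbers come from the two scenarios above; the indicator definition (simply counting scheduled hours per student, regardless of group size or format) is my simplified assumption for illustration, not the actual KIS methodology.

```python
# A toy "contact time" indicator, to show how it rewards mass lectures.
# Assumption (mine, not KIS): the indicator counts scheduled contact hours
# per student, irrespective of group size or teaching format.

def contact_hours_per_student(hours_per_group: float) -> float:
    """Each student attends one group, so reported contact = hours per group."""
    return hours_per_group

def staff_hours(hours_per_group: float, n_groups: int) -> float:
    """Cost to the institution: total hours taught across all groups."""
    return hours_per_group * n_groups

# Scenario A: 20 lecture hours to one group of 400
a_reported = contact_hours_per_student(20)   # indicator: 20 hours
a_cost = staff_hours(20, 1)                  # cost: 20 staff hours

# Scenario B: 10 seminar hours to each of 4 groups of 100
b_reported = contact_hours_per_student(10)   # indicator: 10 hours
b_cost = staff_hours(10, 4)                  # cost: 40 staff hours

print(a_reported, a_cost)  # 20 20
print(b_reported, b_cost)  # 10 40
# Scenario B doubles the staff effort yet halves the reported indicator:
# the metric penalises exactly the small-group teaching it should encourage.
```

The point of the sketch is only that the reported number and the underlying effort move in opposite directions once group size is ignored.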

So I think making a league table, key dataset or satisfaction survey comes with some responsibility: rankings are not just snapshots of performance, they become targets for performance. These measurements have a very strong observer effect, I suppose akin to the Hawthorne effect. Choosing a public performance indicator should be done with an understanding of the target that is implicitly being set. And the way the target is defined should not unfairly discriminate against unconventional approaches and innovations. Generally this means rankings should focus on outcomes, not on process. While I am not a capitalism apologist by any stretch of the imagination, what it does very well is purely measure (monetary) outcomes, rewarding performance and innovation irrespective of process. We might want to define our desired outcomes differently, but the principle should be the same.

The Learning Space Rating System

This is the first in a series of posts reviewing frameworks supporting learning space design.

System: Learning Space Rating System (LSRS)

Source: EDUCAUSE Learning Initiative

LSRS aims to provide a means of measuring the potential performance of a learning space. This is done through scoring a learning space against a set of 51 criteria covering a broad range of topics ranging from policy alignment to technological infrastructure and support. The current version of LSRS focuses on formal learning spaces, but the intent is to broaden this scope in future versions.

LSRS was inspired by green building rating systems, such as BREEAM, and has clearly been set up with the intent of enabling a method of ranking and accrediting spaces in an objective way. To do this, LSRS has a broad scope of criteria categorised into six sections:

  1. Integration with Campus Context
  2. Planning and Design Process
  3. Support and Operations
  4. Environmental Quality
  5. Layout and Furnishings
  6. Tools and Technology

The scope of the categories has clearly been designed with an understanding of the complexity of learning space design. While the physical space is certainly the most extensive and detailed section, there is also attention to factors such as timetabling and alignment with pedagogic strategies.
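As an illustration of how a BREEAM-style credit system like this might aggregate into a single rating, here is a minimal sketch. The six section names are LSRS's own; the split of the 51 criteria across sections and the example scores are invented for illustration, not the real LSRS weightings.

```python
# Hypothetical credit allocation: 51 criteria in total is from the post,
# but the per-section split below is invented for illustration.
MAX_CREDITS = {
    "Integration with Campus Context": 8,
    "Planning and Design Process": 8,
    "Support and Operations": 8,
    "Environmental Quality": 9,
    "Layout and Furnishings": 9,
    "Tools and Technology": 9,
}

def rate(earned: dict) -> float:
    """Fraction of available credits earned, summed across all sections."""
    total_max = sum(MAX_CREDITS.values())  # 51 in this sketch
    total_earned = sum(min(earned.get(s, 0), cap) for s, cap in MAX_CREDITS.items())
    return total_earned / total_max

# An invented assessment of one space:
example = {
    "Integration with Campus Context": 5,
    "Planning and Design Process": 6,
    "Support and Operations": 4,
    "Environmental Quality": 7,
    "Layout and Furnishings": 8,
    "Tools and Technology": 6,
}
print(f"{rate(example):.0%}")  # 71%
```

Aggregating like this is exactly where the methodological tension lies: a single percentage treats a ticked 'stakeholder involvement' box the same as a genuinely earned one.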

The idea of accreditation is interesting, I think, and might be a way to articulate values other than students per square metre to senior administrators when decisions are being made. There are, however, also some methodological tensions. Capturing technical criteria such as environmental performance in an objective score is relatively straightforward. The value of a pedagogic space, by contrast, lies very much in how it mediates social processes. This mediation depends on many factors, such as a sense of agency in users, and other qualities that don’t lend themselves to easy objectification. Meaningful engagement in design processes, for instance, is crucial, but not easy to capture in a score that requires ‘stakeholder involvement’. Ticking the box is one thing; actually creating a sense of agency with users is potentially quite another.

Another weakness in the current version lies in the lack of tangible guidance. While the list of topics is broad, there is very little practically usable advice in many of the categories, aside from the layout and furniture section. I hope and expect this first version is a structure that will be filled out in more detail in future iterations. In its current state it reminds me a bit of my favourite illustrated guide to drawing owls (see picture).

That being said, some detail has been added, in particular to the section on space design itself. Details on recommended density and sizes of working surfaces for different kinds of spaces are tangible enough. There isn’t much information about the underpinnings of these values, or support for the categories of space advocated. I would personally be very interested to see a few more references and some context for those specific choices in future versions.

All in all this seems like a promising idea, albeit with some inherent risks. I’ll be watching its further development with interest. If you have any thoughts on LSRS, there is a form for feeding them into the future development of the system. Please take some time to fill it out and help the authors.

There is no teaching.

One of the benefits of my current job is that I get to talk to lots of academics about how people learn. I haven’t yet had a conversation where I’ve not gained some interesting new insight or idea, what more could you ask for?

Yesterday I had one such conversation. It had been rescheduled a few times, but it was worth the wait. I knew it was when I was told off for using the word ‘teaching’. ‘There is no such thing as teaching’, I was told. And of course that is true. We can’t make someone learn any more than we can make a seed sprout. We can create the conditions that are conducive to learning, but the learning will have to be done by the student.

Learning is like a seed sprouting

It is a view well supported by research, but at odds with the prevalent practices and views in academia. Lecturing is fundamentally about ‘teaching’, not about learning. It is about the idea of transferring knowledge to a receiving vessel. Which brings us to another issue: that knowledge also isn’t really of much value anymore. The half-life of most knowledge is becoming increasingly short, sometimes shorter than the length of a degree.

So this raises the almost existential question: what are academics for? What is the value of being taught by an experienced active researcher, if not his or her incredible and up-to-date knowledge? I would have struggled for a good answer to this question, until I was given one yesterday:

The value of a researcher facilitating learning is that he or she has experience of the creative process of generating new ideas and knowledge. What the researcher has to offer is not the knowledge they possess, but their understanding and experience of dealing with a lack of knowledge, with uncertainty, and of how to go about finding or creating the knowledge that you need to be successful. Researchers are expert collaborative learners, and that is exactly what we need in the classroom.

What the library could learn about search

I’m not a librarian, or an expert on search. I’m a (re-)searcher who just happens to work in a library. I’ve tried to benefit from having such expert colleagues by developing my information literacy skills, but I have always had one dirty little secret: I really struggle to get sensible results out of your catalogue and discovery tools. I use the catalogue, but only to find access to a resource I’ve already found elsewhere.

I will give an example. I’m currently exploring some ideas that link pattern libraries (a design method originating in architecture) with communicative action (an idea from philosophy and the social sciences). I start my research with the search “pattern libraries” “communicative action”. When I enter this into the library catalogue, I get 0 results. Nothing in the collection, nothing in articles and databases. I then enter the exact same phrase into Google and I get 2,750 results. Sure, the vast majority of links are probably rubbish, but on the first page alone there are several links to academic work that has touched on the topic. The link at the very top is a book. Google has actually scanned the book and found my search terms in the content, neatly highlighted for me on page 29: exactly what I was looking for. I then copy the title into our library catalogue, and it turns out we even have a copy of the book; it’s in my bag right now. This is how I search (and find), and I reckon I am not alone.
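The zero-hit result is easy to reproduce in miniature: a catalogue that matches phrases only against metadata will miss a book whose full text contains both phrases. Everything in this sketch (the record, its title, the helper functions) is hypothetical; it only illustrates the gap between metadata search and full-text search.

```python
# A hypothetical catalogue record; title and subjects are invented.
record = {
    "title": "Designing for Communication",
    "subjects": ["architecture", "design methods"],
    "full_text": "... drawing on pattern libraries and on the "
                 "notion of communicative action ...",
}

def metadata_search(rec, *phrases):
    """Catalogue-style search: phrases matched against title and subjects only."""
    haystack = " ".join([rec["title"], *rec["subjects"]]).lower()
    return all(p.lower() in haystack for p in phrases)

def fulltext_search(rec, *phrases):
    """Google-style search: phrases matched against the scanned full text too."""
    haystack = " ".join([rec["title"], rec["full_text"]]).lower()
    return all(p.lower() in haystack for p in phrases)

print(metadata_search(record, "pattern libraries", "communicative action"))  # False
print(fulltext_search(record, "pattern libraries", "communicative action"))  # True
```

The book is in the collection either way; only the full-text index can connect it to a search that never appears in its metadata.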

The reality is that there simply is no way to compete with the resources and expertise of a company like Google. This isn’t just about dealing with student habits and preferences; it is the realisation that the very idea that we could make or buy a system that is going to out-search Google is really just quite naive.

The University of Utrecht Library is one of the first libraries to take this seriously and ditch their discovery layer. Rather than trying to compete with Google (or other search providers), they are enhancing it. Resources not spent on the purchase and maintenance of a discovery layer are used to ensure that their resources are easy to find and access, by working with Google on exposing their resources and by providing detailed guides for users. It’s a bold step, and I think a sensible and visionary one.