When I start an experimental school, I want to randomly select students from the applicant pool and ask those not selected to serve as a control group, so that we can see whether our students are doing better or worse than students at other schools. This is obvious to me, but judging by the number of times I have to explain this to other people, it does not seem to be obvious to most.
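As a toy sketch of the random selection described above -- not anything the school actually runs, and with invented applicant identifiers -- the assignment could look like this in Python:

```python
import random

def assign_groups(applicants, n_admitted, seed=None):
    """Randomly split an applicant pool into an admitted (treatment)
    group and a control group made of those not selected."""
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)  # random order, so the first n_admitted are a random sample
    return pool[:n_admitted], pool[n_admitted:]

# Hypothetical applicant pool; any unique identifiers would do.
applicants = [f"applicant_{i}" for i in range(20)]
admitted, control = assign_groups(applicants, n_admitted=8, seed=42)
```

The point of the seed is only reproducibility while testing; for a real lottery you would omit it. What matters is that admission is decided by chance alone, so the two groups differ only randomly at the start.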
Generalizing from this anecdotal evidence, I suspect that "wanting to test whether what you're doing actually works" does not come naturally to most people. For many, this is easily explained by trust in superiors, institutions and common practices, all of which is reasonable under most circumstances: if everybody in your line of work does things a certain way, your prior should be high that it's a good thing to do, until you encounter good evidence to the contrary. Beyond that, the standard explanation is that we don't see how we're biased. For example, many teachers at alternative schools will tell me success stories that, to their mind, "prove" that their approach works (confirmation bias), or cite statistics on how many of their students go on to lead successful lives, without controlling for socioeconomic status or family background (selection effects: even among those who can afford it, only parents with a particular interest in education will put their kids in alternative schools).

This explains, in part, why many people do things they simply assume are working, without bothering to check whether they actually do. It does not explain why many people who come up with plans or new ways of doing things also fail to ask themselves, "How can I tell whether this new thing will actually work?" This, and a large part of the former cases, is better explained by people mistaking explanations for evidence. If I want to do something, I usually have some idea of why I think it will work: I imagine a mechanism, implicitly or explicitly, that leads from my actions to the desired result. Say I want my students to take happiness classes at school (that was a popular idea a while ago). Teaching them about mindfulness, mental hygiene and so on will make my students happier, I figure, so it seems like a good idea. But do I know that it will?
The most common way to go wrong here is to wait until someone comes up with an alternative mechanism that "shows" how happiness classes might lead to no improvement, or even make things worse. Say someone tells me, "Look, learning about how other people keep gratitude journals will only make things worse for the worst-off students, if they feel they have nothing to be grateful for." Then I might feel compelled to "prove", in some way, that my happiness classes don't do that -- though usually that "proof" will take the form of another counter-explanation. But that's not the point. The point is that I shouldn't be confident that a new thing will work, no matter how plausible my explanations are for why and how it ought to work. At least, I shouldn't be so confident in my explanations that I don't feel the need to test it.

("But won't I see if the kids are happier afterwards? All the other happiness teachers I've read about say so!" -- THIS is where the bias and conformity problems come into play; but the first step, the one that usually goes unchallenged, is taking the explanation as evidence. "So what if we're not sure it works; it can't hurt, right?" -- That tends to be the last line of defense, and it shows insensitivity to tradeoffs and limited resources.)

Mistaking explanations for evidence is one of many problems of taking the inside view, but I think it is worth highlighting as an issue in its own right. "A mechanism is not a substitute for measurement" would be another way to put it. There is no alternative to measuring actual outcomes, no matter in how much detail you can imagine the mechanism leading from your actions to the desired result.

I've put together a plan for a short lesson, "Beware of explanations", that I've taught to students aged 11-18. It's intended only as an introduction and first eye-opener; learning to override this tendency will obviously take a lot more than that.
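To make "measuring actual outcomes" concrete: once you have scores from a treatment group and a control group, even a simple permutation test tells you how surprising the observed difference would be if group membership didn't matter. This is only an illustrative sketch -- the group names and scores below are invented, not data from any real class:

```python
import random

def permutation_test(treatment, control, n_permutations=10000, seed=0):
    """Estimate how often a difference in group means at least as large
    as the observed one would arise if group labels were assigned at random."""
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign group labels at random
        diff = sum(pooled[:n_t]) / n_t - sum(pooled[n_t:]) / (len(pooled) - n_t)
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_permutations  # two-sided p-value estimate

# Made-up wellbeing scores, purely for illustration.
happiness_class = [6.1, 7.3, 5.8, 6.9, 7.0, 6.4]
control_group = [5.9, 6.2, 6.0, 6.8, 5.7, 6.1]
p = permutation_test(happiness_class, control_group)
```

A small p would mean the gap between the groups is hard to attribute to chance; a large one would mean the data so far can't distinguish the happiness classes from doing nothing -- which is exactly the question a mechanism alone cannot answer.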
This is a follow-up exercise (one I've only been able to run with a small fraction of the classes I taught, unfortunately), intended as a weak first attempt at testing, some weeks later, whether the lesson had any lasting effect on students' scepticism of explanations without evidence. Results so far look slightly encouraging, but there is too little data to support any conclusions at this point.
August 2017