I came across this fascinating article from The New Yorker that speaks volumes about how careful we have to be when using and visualizing data. Whenever you try to force the real world into something that can be counted, unintended consequences abound. The COVID pandemic demonstrated just how vulnerable the world can be without good statistics, and the US Presidential election filled our newspapers with polls and projections, all meant to slake our thirst for insight. The same applies to education, where knowing what to measure, and why you want to measure it, is the primary hurdle to tackle.

We have a natural tendency to trust data because it aims to represent something we are observing. There are times, however, when even solid data is not enough for decision making. That’s why the context, the aim, and the balance between quantitative and qualitative data are so important. As the article states:

“The great psychologist Daniel Kahneman, in his book “Thinking, Fast and Slow,” explained that, when faced with a difficult question, we have a habit of swapping it for an easy one, often without noticing that we’ve done so. There are echoes of this in the questions that society aims to answer using data, with a well-known example concerning schools. We might be interested in whether our children are getting a good education, but it’s very hard to pin down exactly what we mean by “good.” Instead, we tend to ask a related and easier question: How well do students perform when examined on some corpus of fact? And so we get the much lamented “teach to the test” syndrome.”

You can read this fascinating article in full here.
(This post is by Megan Brazil, Elementary School Principal, United Nations International School, Hanoi. The post was first published online in 2016.)

In a ‘silo schools’ approach, teachers have generally been left to work independently, collecting, understanding and using their own classroom data to make decisions about instruction, planning and assessment. Many schools have not yet made the transition from individual to collaborative: enabling teams of teachers to collectively analyze learning data in order to improve learning outcomes for all students. What we know to be true in many schools is that teachers still spend a disproportionate amount of time planning instruction, but don’t place the same emphasis or effort on finding out whether the instruction really worked. Perhaps, then, less importance has been placed on finding time for teams of teachers, coaches and administrators to look at the ‘back end’ — the learning that has taken place as a result of the planning.