Policymakers should use caution when drawing lessons from the OECD’s education report
- Written by The Conversation Contributor
Each year, the Organisation for Economic Co-operation and Development (OECD) publishes a report called Education at a Glance.
The title is ironic, because the report can hardly be taken in “at a glance”. At 550 pages, with nearly 250 tables, 140 charts and more than 100,000 figures, it provides a comparative account of data on learning outcomes, educational attainment, investment, participation and learning environments across the education systems of 34 OECD nations and 12 non-OECD nations.
The OECD explicitly intends for this publication to inform policy.
But, paradoxically, the very features that make the report so impressive – its complexity and scale – also make drawing inferences and deriving useful policy lessons rather complex and prone to oversimplification.
When making comparisons across a vast range of contexts, many aspects are left out of the story. This makes comparisons rather treacherous.
For example, in some Asian countries such as Korea and Singapore, parents spend large sums of money on private tutors and cram schools – schools that drill students to pass exams – but this spending does not show up in the report as part of the “investment in education”.
By relying on the reported data alone, one could be misled into emulating policies that appear to deliver more educational bang for the buck.
Another major issue is the reporting of data by national averages.
Aggregating data at the national level masks the vast variations within countries.
In Australia, the Australian Capital Territory (ACT) performs as well as many of the highest-performing nations on international assessments, while the Northern Territory (NT) and Tasmania perform below the OECD average.
Talking about Australian trends and focusing attention on national averages encourages national reforms, such as the introduction of a national curriculum and the National Assessment Program – Literacy and Numeracy (NAPLAN). These have come at enormous cost, but have resulted in little improvement.
Robert Randall, the head of the Australian Curriculum, Assessment and Reporting Authority (ACARA), acknowledged that “at a national level we are seeing little change in student achievement”.
That money might have been better redistributed and spent on resources for Australia’s low-scoring states.
Part of the difficulty with making such comparisons is that education systems are extremely complex. Often there are no clear answers even after decades of research.
A good example is class size.
Often, students who are struggling or disadvantaged are placed in smaller classes, so smaller classes can appear to be correlated with poorer outcomes.
The report shows that in Australia private schools – which perform better – tend to have larger classes than public schools (although this “advantage” is sometimes attributed to students’ social and economic capital rather than to class size).
If we were to draw policy lessons from this data alone, we might imagine that the way to improve education is to increase class sizes.
But calculating class size is not easy. In the report, class size is calculated simply by dividing the number of students enrolled by the number of classes.
But the reality of class size is somewhat more complex. Class size does not remain the same all day for students – it changes as they move through the school day or attend different subjects.
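To see why the headline figure can mislead, here is a small illustration with made-up numbers (none of these figures come from the report): the published class size is just a simple ratio of enrolments to classes, which can differ from the class sizes a student actually experiences across a school day.

```python
# Illustrative only: hypothetical enrolment figures, not OECD data.
enrolled = 250          # students enrolled in a year level
num_classes = 10        # number of timetabled classes

# The report's method: a simple ratio of students to classes.
reported_class_size = enrolled / num_classes
print(reported_class_size)  # 25.0

# But a student's day might mix large core classes with small
# electives or pull-out support groups.
periods = [30, 30, 28, 12, 6]   # class sizes one student sits in
experienced = sum(periods) / len(periods)
print(experienced)  # 21.2
```

The same school can therefore report one "class size" while individual students experience something quite different, depending on how their timetable is composed.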
Many schools also have special needs programs, which may involve in-class or pull-out support for students, or elective options that result in smaller classes.
The report acknowledges these complexities. It also cites evidence from the OECD’s Teaching and Learning International Survey (TALIS) that larger classes are correlated with more time spent on behaviour management and administrative tasks, and less on teaching and learning.
Nevertheless, it concludes that the evidence of the effect of differences in class size on student performance is weak. The report goes on to suggest, based on data from the Program for International Student Assessment (PISA), that countries should prioritise policies to improve teacher quality. An example is raising salaries to attract good candidates and retain effective teachers – even if the trade-off is larger classes.
The report is meticulous in declaring the limitations of its methodology and in cautioning against over-interpretation. However, these finer points and cautions tend to be disregarded in the narratives that the media – and the OECD itself – construct for policymakers.
We should therefore use these comparisons cautiously. The report is a good way of alerting us to the presence of a problem, but more focused research should be done before arriving at any policy actions or interventions.
Radhika Gorur received funding from Collaborative Research Network for the research that informed this article.