The news that more than half of children and adolescents of primary and lower secondary school age worldwide – 617 million – are not reaching minimum proficiency levels in reading and mathematics is a wake-up call for educators and for statisticians. Without such data, we would be unaware of the learning crisis that threatens progress towards the Sustainable Development Goals (SDGs) – a crisis that demands an urgent response from education ministries, most certainly, but also from the world’s data gatherers.
In an ideal world, progress towards SDG 4 would be monitored through reliable, high-quality data, comparable across countries, and compiled efficiently at regular intervals. But in the real world, there is no framework to pull together the many different types of learning assessments and to produce internationally comparable data on the core skills that are so critical for children’s learning and for their future prospects.
A pragmatic approach to gathering data on learning
It will take time to develop the methodological tools and political consensus needed to produce globally comparable indicators on learning. We need a more immediate approach to producing the cross-nationally comparable data needed to report progress towards SDG 4, and a reality check on what is possible within the timeframe. The new statistics on reading and mathematics have emerged from a groundbreaking approach tested by the UNESCO Institute for Statistics, captured in a new database and a paper, Mind the Gap.
The approach is grounded in a methodology that draws on all available results from international and regional student achievement tests to estimate, in a comparable way, the proportion of students reaching minimum proficiency levels in both primary and secondary education.
The methodology exploits the fact that some countries took part in different assessments at the same time, linking those assessments through the results of these “doubloon countries” – countries that participate in both regional and international assessments.
This approach is efficient because it does not re-invent the wheel, nor does it require additional instruments or, importantly for cash-strapped statistical offices, additional costs. In effect, we have found a pragmatic way to recognize the education contexts within regions while still producing data that can be compared internationally. Our overall approach to linking is very similar to the “Ring Comparison” methodology, which is widely used to improve the quality of purchasing power parity indices.
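To give a flavour of the general idea, here is a toy sketch of how two assessments can be linked through countries that took both. The figures are invented and the simple linear link is purely illustrative; it is not the exact procedure documented in Mind the Gap.

```python
# Illustrative sketch only: linking a regional and an international assessment
# through countries that took part in both ("doubloon countries").
# The numbers and the linear form are assumptions for illustration.

import numpy as np

# Hypothetical % of students reaching the minimum level, by country
regional = {"A": 42.0, "B": 55.0, "C": 61.0, "D": 35.0, "E": 48.0}
international = {"B": 50.0, "C": 58.0, "E": 44.0}   # only the doubloon countries

# Fit a simple linking line on the doubloon countries
doubloons = sorted(set(regional) & set(international))
x = np.array([regional[c] for c in doubloons])
y = np.array([international[c] for c in doubloons])
slope, intercept = np.polyfit(x, y, 1)

# Re-express the regional-only results on the international benchmark scale
linked = {c: slope * v + intercept for c, v in regional.items()
          if c not in international}
print(linked)
```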
For the first time, data on learning are comparable
The result is a much-needed international dataset covering more than 160 countries and territories between 1995 and 2015. To the best of our knowledge, the database provides the largest body of internationally comparable information available for monitoring progress towards SDG 4. What’s more, it provides the level of disaggregation needed to report on SDG 4 and to assess both quality and equity in education. It may well be the first study to provide comparable data on learning.
So how does it work? After populating the database with all the available assessment results, the methodology applies two alternative benchmarks to assess proficiency across countries.
The first benchmark is the basic proficiency level used by the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ) in its regional survey of students at the end of primary school. The second benchmark is the minimum proficiency level defined by the International Association for the Evaluation of Educational Achievement (IEA) for the Progress in International Reading Literacy Study (PIRLS) and the Trends in International Mathematics and Science Study (TIMSS) – international assessments with global coverage involving, for the most part, middle- and high-income countries. At the secondary level, we used the benchmarks set by the OECD’s Programme for International Student Assessment (PISA), which includes about 70 countries facing very different education contexts.
Because these benchmarks apply only to children who are in school, we also developed a methodology to estimate the situation of those who are out of school. As explained in our paper, Counting the Number of Children Not Learning, it is essential to recognize that children who never enrol or who drop out have little chance of reaching any benchmark, and that those who start school later than they should will struggle.
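In its most stripped-down form, the logic of that adjustment looks something like the sketch below. It uses invented figures and the simplifying assumption that out-of-school children do not reach the benchmark; the full treatment in Counting the Number of Children Not Learning is more nuanced.

```python
# A minimal sketch of the out-of-school adjustment, under the simplifying
# assumption that out-of-school children do not reach the benchmark.
# The numbers are invented for illustration.

def share_of_all_children_proficient(in_school_share, proficiency_in_school):
    """Both arguments are proportions in [0, 1]."""
    # Children in school who reach the benchmark; out-of-school children
    # are counted as not reaching it.
    return in_school_share * proficiency_in_school

# Example: 85% of children are in school, 60% of those reach minimum proficiency
print(share_of_all_children_proficient(0.85, 0.60))  # roughly 0.51, i.e. 51%
```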
Three reasons why the new approach works
The new database has three key advantages over previous research approaches.
First, while previous research focused mainly on mean scores, this approach establishes a new international benchmark for tracking the share of students who achieve minimum proficiency. The threshold is carefully designed to offer an accurate yet fair basis for comparing the results of countries in different regions and at different stages of education development.
Second, even though our methodology is based mainly on countries that participate in both regional and international assessments, it generates estimates that span countries, education levels and skills.
While previous research used only a single methodology, we are using these “ring” or “doubloon” countries to produce alternative estimates of the results. In other words, we can provide confidence intervals for estimates of the proportion of students reaching a minimum level of proficiency in reading and mathematics around the world.
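A rough illustration of what that means in practice: if several linking routes yield several estimates for the same country, their spread can be reported as a range rather than a single point. The figures below are hypothetical.

```python
# Illustrative sketch: alternative linking routes (via different doubloon
# countries or benchmarks) yield several estimates for one country; their
# spread gives an interval rather than a single point. Figures are invented.

import statistics

alternative_estimates = [46.2, 44.8, 48.1, 45.5]   # % reaching minimum level

point = statistics.mean(alternative_estimates)
low, high = min(alternative_estimates), max(alternative_estimates)
print(f"estimate: {point:.1f}% (range {low:.1f}% to {high:.1f}%)")
```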
Third – and perhaps most importantly – this approach makes it possible to report the data needed for SDG 4, tracking the proportion of students achieving minimum proficiency levels over time and distinguishing between different sub-samples for equity purposes.
Our international dataset provides comparable results across time (between 1995 and 2015) and across different groups within each country, amounting to more than 16,000 combinations of results for students reaching the minimum levels.
The sub-samples included in our dataset mainly distinguish students by gender, location, socio-economic level, language spoken at home and immigration status. This is a crucial leap forward, given the emphasis within the SDGs on equitable progress.
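As a simple illustration of what such disaggregation looks like in practice, the sketch below computes the same minimum-proficiency indicator for different sub-groups. The data and group labels are invented.

```python
# Equity-oriented disaggregation: the same minimum-proficiency indicator
# computed for sub-groups within a country. Data and labels are invented.

import pandas as pd

students = pd.DataFrame({
    "sex":             ["girl", "boy", "girl", "boy", "girl", "boy"],
    "location":        ["urban", "urban", "rural", "rural", "urban", "rural"],
    "reaches_minimum": [1, 1, 0, 0, 1, 0],
})

# Share reaching the minimum level, by sex and by location
print(students.groupby("sex")["reaches_minimum"].mean())
print(students.groupby("location")["reaches_minimum"].mean())
```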
Next step: continuing to improve the comparability of data
Although the new dataset has its limitations, it is a much-needed barometer that provides an initial reading of the performance of national education systems while offering a set of common parameters for SDG reporting.
Now we want to broaden the debate on measurement and improve the comparability and coverage of the data. The good news is that we have the support of the regional and international assessment agencies, which are actively pursuing options to link their assessment results and share items across tests.
Together, we are finding concrete ways to improve the comparability of student achievement tests and reduce estimation bias in reporting on progress towards SDG 4.