Benchmarks: Using Data to Set Evidence-based Targets to Improve Learning Proficiency

By Martin Gustafsson, Research on Socio-Economic Policy (ReSEP), University of Stellenbosch

In a recent blog, the UNESCO Institute for Statistics (UIS) and the Global Education Monitoring Report explored how benchmarks can be used to accelerate progress towards SDG 4. To further these discussions, the UIS has released a new report, entitled Evidence-based Projections and Benchmarks for SDG Indicator 4.1.1, which focuses on how countries can use existing data to develop projections and benchmarks for the percentage of children reaching minimum proficiency levels in reading and mathematics.

The report builds on a previous report, entitled How Fast Can Levels of Proficiency Improve? Examining Historical Trends to Inform SDG 4.1.1 Scenarios. It found, for instance, that the most successful developing countries have improved the percentage of primary pupils who are proficient by 2.0 percentage points a year. We can therefore say that it is not unrealistic to expect, say, the percentage of Grade 4 pupils who are proficient in reading to improve from 40% to 60% in ten years – which amounts to 2.0 percentage points a year. But should countries not aim for faster progress than this? The World Bank has proposed that ‘learning poverty’ – meaning the percentage of children who are not proficient – be halved by 2030. Using our example, that would mean moving from 40% to 70% proficient by 2030 in order to halve learning poverty from 60% to 30%. The World Bank proposal means that countries with the greatest learning poverty would need to raise the percentage of proficient children by about 4.0 percentage points a year. And this is still less ambitious than the SDG vision of zero non-proficient children by 2030.
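The arithmetic behind these figures can be sketched in a few lines. This is an illustrative calculation only, not part of the UIS tool; the function name and the ten-year 2020 baseline are assumptions made here for the example.

```python
def required_annual_gain(pct_proficient_now, target_year, base_year=2020):
    """Annual percentage-point gain needed to halve 'learning poverty'
    (the share of non-proficient children) by the target year."""
    learning_poverty = 100 - pct_proficient_now
    target_proficient = 100 - learning_poverty / 2  # halve the non-proficient share
    return (target_proficient - pct_proficient_now) / (target_year - base_year)

# The example in the text: 40% proficient means learning poverty of 60%;
# halving it to 30% requires reaching 70% proficient.
print(required_annual_gain(40, 2030))  # 3.0 percentage points a year

# A country with greater learning poverty, say 20% proficient (80% poverty),
# needs a faster pace to halve poverty by 2030.
print(required_annual_gain(20, 2030))  # 4.0 percentage points a year
```

The second call illustrates why the report says countries with the greatest learning poverty would need gains of about 4.0 percentage points a year under the World Bank proposal.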

Can we exceed the ‘speed limits’ of recent history?

How ambitious should benchmarks, or targets, be? Should we be limited by what history suggests is possible? There are of course no easy answers to these questions, as highlighted in the initial blog. One thing that we can say with certainty, however, is that we need to understand the improvements that have actually occurred around the world when we set targets. Whether we can progress faster than the most successful countries have progressed up to now is a question education stakeholders clearly need to ask. But one should remember that even replicating the trends seen in relatively successful improvers such as Indonesia, Qatar and Trinidad and Tobago is ambitious. Yet maybe this should not be the limit. Maybe with more focused governments and civil society, and by employing new technologies, we can exceed the ‘speed limits’ we see from recent history.

It should be remembered that even modest gains in proficiency, if sustained over time, can make a real difference to the development of society and the economy. To illustrate, the Netherlands, Canada and Finland maintained small but continual annual improvements in proficiency from 1975 to 2000 and, according to one study, this translated into better economic growth for these countries relative to Germany and Italy, which saw virtually no educational improvement. If even small educational quality improvements are to be valued, this has implications for national and international testing programmes. These systems need to be designed to pick up, with a sufficient level of certainty, even relatively small changes.

The new report, dealing with forward projections, explains how long it would take the world, and different world regions, to reach specific benchmarks with respect to learning proficiency. Different scenarios, based on different assumptions, are presented.  

How to construct evidence-based targets

At least as important as the findings are the methods and the Excel tool that accompany this report. The report explains how one can use the typically normal distribution of pupil scores in an assessment dataset to construct evidence-based targets. Importantly, the average score and the percentage of proficient pupils do not improve at the same rate. In general, a country with a high level of ‘learning poverty’, meaning low levels of proficiency, can expect to make relatively fast progress initially, while a country that has already reached a fairly high level of proficiency will see relatively slow progress. These statistical principles can be applied fairly easily at a national level to produce future targets.
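A minimal sketch of this principle, assuming scores are standard-normally distributed and using an illustrative proficiency cutoff (both assumptions made here, not taken from the report): a constant improvement in the average score translates into non-constant gains in the percentage proficient, and the gains are fastest when the peak of the distribution sits near the cutoff.

```python
from statistics import NormalDist

def pct_proficient(mean, cutoff=0.0, sd=1.0):
    """Share of pupils scoring at or above the proficiency cutoff,
    assuming normally distributed scores."""
    return 100 * (1 - NormalDist(mean, sd).cdf(cutoff))

# Raise the mean by a fixed 0.5 standard deviations at each step and
# observe how the percentage proficient responds.
for mean in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    print(f"mean {mean:+.1f} SD -> {pct_proficient(mean):5.1f}% proficient")
```

The step from a mean of -0.5 to 0.0 adds about 19 percentage points of proficiency, while the step from 0.5 to 1.0 adds only about 15, even though the underlying improvement in average scores is identical: progress in the headline indicator slows as a country approaches high proficiency.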

The following graph illustrates where countries are with respect to reading proficiency. In Nigeria, for example, over half of lower primary pupils are not proficient in reading; they are distributed to the left of Nigeria’s point on the curve. The peak of Nigeria’s distribution lies below the proficiency threshold. This means that an increase in average performance will result in particularly steep improvements in the percentage of children who are proficient. Countries on the right of the curve are well placed to achieve relatively rapid reductions in learning poverty, if the right education strategies are pursued.

Figure 1: Countries with a large potential for contributing to global change

Of course, targets set using evidence and tools of this nature are not necessarily the targets that politicians will want to use. Politically set targets are often extremely ambitious. They are often set to inspire hope and stimulate action. This is not necessarily wrong. Yet statisticians and education planners at the national level need tools and methods to produce evidence-based targets. These are likely to be useful when highly ambitious targets are not met, and one must assess whether the education system has failed. Despite not hitting the ambitious targets, the system may in fact be doing relatively well.

A tool for countries

Though the Excel tool has details on individual countries, the tool is not intended as a reliable source of what the baselines or best future benchmarks are for individual countries. Instead, the tool is intended to produce sufficiently reliable scenarios for world regions, and the world. Each country’s statistics tend to display peculiarities which only national planners would know about. It is vital for national planners to interrogate and know their data well. How comparable are the country’s data to data in other countries? How comparable are statistics over time within the country? Are there sampling and test administration issues which need to be taken into account when statistics are interpreted? Moreover, there may be issues with the population projections and out-of-school statistics used in the tool, which only national planners are aware of. As long as these issues are given due consideration, the tool can be replicated at the national level, for instance to model future improvements with respect to the individual regions or provinces within the country. The tool is deliberately free of any locked or hidden elements, meaning there are no restrictions to adapting it. 

To conclude, the two reports discussed above should take us forward in terms of the knowledge we need to plan educational quality improvements. Yet many fascinating questions remain unanswered. National planners and researchers have an important role to play in proposing answers to these questions. For instance, what lies behind the steep improvements seen in certain developing countries? There are no strictly scientific ways of answering this question. Answers need to draw from data, for instance on classroom practices, but also analysis of how institutions have evolved over time. Findings based on scientific ‘experiments’, where education interventions are tested in ‘treatment groups’, have become more available in recent years. This has helped to bring about a more informed policy debate. But apart from this, we need more informed speculation about why whole countries appear to have made significant progress with respect to reducing learning poverty. And we need to attempt to explain why some other countries, despite impressive-looking interventions, have apparently not succeeded in this task.
