06/03/2019

Digital Literacy Skills: From a Framework to a Measure

By Manos Antoninis, Director of the Global Education Monitoring Report

The latest SDG 4 Data Digest 2018, Data to Nurture Learning, summarized, among other things, the progress made by a range of partners working with the UNESCO Institute for Statistics (UIS) towards establishing a framework and identifying assessment tools to monitor digital literacy skills.

Against the backdrop of Mobile Learning Week, this blog highlights the challenges of operationalizing the framework in a way that is formative, informative and cost-efficient.

The idea that it is possible to set a global target on “relevant skills, including technical and vocational skills, for employment, decent jobs and entrepreneurship” that youth and adults should possess by 2030 has been open to many criticisms.

Decent jobs are not just a matter of the skills people have but also of the jobs on offer; if the more educated get decent jobs, this may simply reflect that decent jobs are scarce. Defining the skills for entrepreneurship can feel like a search for the holy grail: the skill set one needs differs from one job to another, and having more of a particular skill may be ideal in one job but detrimental in another.

Responding to social and economic needs

As with many of the targets, SDG target 4.4 is above all aspirational. It aims to show government policymakers the importance of ensuring that education systems are responsive to social and economic needs. But understanding whether progress is being made requires these broad concepts to be pinned down.

As a first step, the SDG 4 monitoring framework narrowed the concept of relevant skills down to skills related to the digital world. This reflected an intuitive belief that these skills are more likely to be relevant across a very wide range of labour markets and other daily-life contexts. They are also more likely to be measurable and comparable at a reasonably low cost.

This was a significant step: at the time, it was unusual for the international community to set targets related to learning outcomes. It was also unusual to set targets that involve not just the public education system but also, if not more so, the entire network of national skills providers and the mechanisms that support skill acquisition.

But at the same time, even deciding which digital literacy skills will be relevant for employment between now and 2030 risks being no less ambitious. Ten years ago, the skills we thought we would need today turned out to be quite far from what we actually need. With the ever more rapid pace of change in both technology and job requirements, why would we think we can forecast any better today? Are we not opening ourselves even more to the criticism that we are engaging in wasteful forecasting and crystal-ball gazing?

The answer is to be humble. The purpose of the monitoring framework is not to place all countries on a league table. Nor is it a signal for countries to invest millions in monitoring digital literacy skills, which in some cases would cost more than their total budget for digital skills training. Rather, the purpose is to keep reminding governments that these skills are critical; that a policy for their development is needed; and that they may be able to get ideas about how their peers are approaching the issue. Not least, it is to get them thinking of ways to understand how well they are doing.

Digital Literacy Global Framework

To get us started on this last point, as part of the Global Alliance to Monitor Learning (GAML), a Digital Literacy Global Framework was developed, building on DigComp, a research project of the European Union, where the issue has featured prominently on the policy agenda. The framework features seven competence areas: fundamentals of hardware and software; information and data literacy; communication and collaboration; digital content creation; safety; problem solving; and career-related competences.

The next step has been to catalogue existing assessment tools and map them to the framework. The Centre for Educational Technology at Tallinn University looked at different types of digital literacy assessments that vary by focus, application domain, purpose (e.g. admission, certification, training needs assessment and employment), target population, scale, item development, reliability and validity, mode of delivery, cost, scalability and accreditation.

As shown in a new UIS information paper, the range of skills covered in digital literacy assessments is much wider than in assessments of reading and mathematics, which tend to follow a clearly defined curriculum. In addition, digital literacy assessments vary in terms of the responsible authority: non-government providers are more often involved in administering them. As a result, these assessments tend to be proprietary and less transparent.

Dual-purpose assessment options

Potentially interesting models that explicitly assess competences in the Global Framework are appearing, as also noted in the 2019 Global Education Monitoring Report. While their purpose is formative, they could potentially also be used for monitoring.

One, SELFIE, has been developed with reference to an extension of DigComp for education institutions. It targets school leaders, teachers and students to help schools identify their digital literacy strengths and weaknesses and build a school improvement strategy. It went through a pilot phase in 2017 with 67,000 users, and its aim, set out in the European Union’s Digital Education Action Plan, is to reach 1 million users by the end of 2019.

Another example, Pix, is an online platform for the assessment and certification of DigComp skills, managed by the French Ministry of National Education, Higher Education and Research and developed as a state‑sponsored and state-managed start-up. Citizens have free access to a digital skills assessment, a diagnosis of strengths and weaknesses, and recommendations for learning resources. In 2019/20, it will be administered to every student in grades 8 to 12.

Both models point to a good way forward: take tools that serve another purpose, such as self-assessments of digital literacy skills through which individuals receive advice on how to improve their skills and learn about relevant training opportunities and jobs. At the same time, policymakers can use the resulting information to assess the distribution of skills in the population. This is a cost-effective approach, as developing assessments purely for monitoring purposes is unlikely to represent good value for money.

Thinking in innovative ways is what the SDG 4 monitoring framework calls on us to do – all the more so when it comes to monitoring ever-evolving digital literacy skills.

 
