Why the EdTech evidence reform needs global quality benchmarks

By Professor Natalia Kucirkova, University of Stavanger and The Open University

The use of educational technology (EdTech) during the pandemic revealed structural weaknesses in the EdTech system, from the way it is designed to the way it is funded, selected and implemented by schools. To address these weaknesses, an EdTech evidence reform has been proposed. The reform can only succeed if diverse national efforts are unified under a global strategy on what counts as “evidence” in educational technology.

In the aftermath of the pandemic, a number of EdTech advocates proposed extensive changes to the sector. Reports by national governments (e.g. England), funders (e.g. the Jacobs Foundation) and scientists’ consortia (e.g. EdTech Exchange) proposed an EdTech reform. At the heart of the reform is a global consensus that schools should only select technologies that have evidence of positive impact on children’s learning. However, there are major differences in how EdTech evidence is defined, measured and mandated across countries.

The United States follows the ESSA Standards of Evidence, with randomized controlled trials as the highest form of evidence. The US government has defined standardized measures of evidence, with efficacy requirements at four levels. Supporting non-regulatory guidance on how to measure the individual levels, together with a list of recommended resources, is included in the What Works Clearinghouse catalogue.

In Europe, countries follow different EdTech evidence mandates and enforcement mechanisms. Some have funded the development of EdTech for national use (e.g. the Octavo Digital Library in Malta). Others leave decision-making to teachers and local municipalities (e.g. Norway). The United Kingdom has a number of evidence frameworks provided by various university teams, think-tanks and commercial entities (e.g. Educate Ventures or What Worked). Outside the Global North, countries follow a mixture of recommendations, most of which are less stringent and broader than the ESSA standards.

The 2023 GEM Report on technology and education aims to provide an overview of education technology policies based on national experiences. A key question in this process is how to ensure that national efforts for greater EdTech evidence are aligned with work underway at the global level. Most EdTech is designed for the international market. While the content of individual platforms can be tailored to national curricula, the evidential basis should therefore rest on international standards of evidence.

There is a clear academic consensus on what counts as evidence: an independent study published in a peer-reviewed journal. When it comes to EdTech, however, an alternative definition of evidence has been in use for the past ten years: teachers’ reports and reviews. EdTech solutions top-rated by teachers on platforms like EdTech Impact or the Educational App Store dominate the lists of school procurement teams.

Teachers’ views of what works in their classrooms are not in opposition to scientific measurement of evidence. Indeed, teachers’ experiences should be combined with scientific evaluations of EdTech’s efficacy and effectiveness in promoting children’s learning. So far, however, neither teachers nor scientists have been able to combine their evidence ratings in a coordinated way. The gap is currently being filled by various EdTech evidence providers, some of which use combined ratings to certify or approve specific EdTech products. Examples include the ISTE and ASD EdTech certification organisations or LearnPlatform with Instructure, both of which have recently merged in major deals.

Building a solid evidence base requires much trial and error, and many tests with many children from many schools. It therefore makes sense to consolidate evidence-testing efforts under a joint framework of efficacy, such as the one proposed by ESSA. It also makes sense to incentivize EdTech companies to be more evidence-led through federal grants and venture capital investments (e.g. as modelled by the Vital Prize). The problem with defining evidence only in efficacy terms is that RCTs become the gold standard for EdTech. This goes against the broader definitions of evidence proposed by individual states. Furthermore, efficacy standards have been criticised for undermining smaller start-ups, and thereby innovation, in the market.

EdTech is a capital-intensive industry, sensitive to the business conditions set by international policies. The EU has pledged to become a counterweight to US ‘dominance’ in EdTech in relation to privacy, but it is lagging behind in the EdTech evidence race. Evidence frameworks and market mechanisms are exactly the forces that propelled US EdTech to its dominance in the educational market; they are the same forces that threaten our global commitment to diverse and open spaces in EdTech. The GEM Report needs to address this reality with a multipronged approach that aligns the need for EdTech evidence with a clear set of international standards.


Natalia Kucirkova is Professor of Early Childhood Education and Development at the University of Stavanger, Norway, and Professor of Reading and Children’s Development at The Open University, UK. Natalia’s work is concerned with social justice in children’s literacy and use of technologies. Her research takes place collaboratively across the academic, commercial and third sectors. She is the founder of the university spin-out Wikit AS, which integrates science with the children’s EdTech industry.

Twitter: @NKucirkova


The post Why the EdTech evidence reform needs global quality benchmarks appeared first on World Education Blog.
