A learning curve?
A landscape review of AI and education in the UK
30 January 2025
Executive summary
Identifying the technology needed to support teaching, learning and school management is something every school leader has to grapple with. In school administration, technology use is often non-negotiable, but in teaching and learning the decision to use technology raises questions that can be difficult, spanning concerns from value for money to teaching practice, pupil attainment and data governance. Today, schools have to consider not only what technical hardware they can afford but also what software, connectivity and AI tools staff and pupils will need access to in order to support them in their day-to-day work.
The education technology (EdTech) market continues to grow and, since the launch of OpenAI’s ChatGPT in 2022, there has been a surge of interest in how a new powerful type of general-purpose AI (GPAI) might interact with and augment these technologies.
Some educators and policymakers see AI as presenting opportunities to improve education and teaching in potentially seismic ways. These include:
- supporting pupils’ development through the use of AI for personalised learning, whereby pupils can learn at their own speed and level and be given instantaneous feedback
- supporting teachers by automating aspects of time-consuming tasks such as lesson planning, marking, assessment and report writing, freeing them to focus on teaching
- supporting administrators and teachers with faster and more nuanced data insights, predictions and information about pupils.
But are these aspirations in line with the reality of what the technology can and cannot do? Will the benefits be realised, or are the risks posed by AI too great for it to be used safely in schools to teach developing minds?
The Nuffield Foundation, with its deep expertise in the field of education, and the Ada Lovelace Institute, with its sociotechnical understanding of data-driven systems and AI, have undertaken a year-long investigation into this emerging field. Our aims were to bring greater clarity on the role of AI in schools, to support policy and educational experts to navigate these issues, and to highlight priorities for further research and policy.
In support of these aims, in this report we:
- explain what AI is and the various systems and uses that sit under the AI definition umbrella, particularly focusing on the development from ‘narrow’ systems to ‘general-purpose systems’ and where ‘generative AI’ models sit in the AI landscape
- detail key issues raised by the use of AI EdTech products, including issues around data, privacy, transparency and regulation, as well as more wide-ranging challenges and questions posed by emergent GPAI systems and the models built on top of them
- contextualise these issues, looking at the available information on current use of AI in schools in relation to:
- Pupils: The use of existing narrow AI products for personalised and adaptive learning, such as diagnosing a pupil’s knowledge and directing them to individual learning pathways. We also consider the development of emergent products such as Khan Academy’s Khanmigo, which uses GPAI to act as a personal AI tutor to engage with and provide feedback to pupils.
- Special educational needs and disabilities (SEND): The use of assistive technology to help and support disabled pupils and others with additional needs, as well as other AI uses for SEND, such as technologies that track attention, and risks arising in relation to pupils’ rights.
- Teachers: The role of AI in preparing lessons, marking and assessment and teacher training, examining current, informal use by some teachers of non-education specific AI products such as ChatGPT, and the development of more specific AI EdTech.
- Administration and safeguarding: The well-established use of algorithms and data analytics and particular issues arising from data protection and automated decisions about pupils.
- explore the evidence and guidance around the efficacy, security and pedagogical basis of AI in EdTech, particularly current gaps in oversight and evaluation
- highlight areas for future research.
This paper seeks to inform conversations around EdTech in UK schools. It draws on existing and emerging evidence, highlights where gaps in evidence require urgent attention, and supports understanding of the types of AI, the tools and products currently used in the UK, and the governance landscape in education.
This report does not aim to be definitive – it faces limitations and complexities associated with navigating the emerging and often opaque field of AI, with shortcomings in the available evidence and in transparency. It is the first stage in a collaboration between the Nuffield Foundation and the Ada Lovelace Institute to develop the evidence base in this area, working with external experts.
From this initial review, there are a number of key findings:
- There are perceived opportunities for AI in education. These focus predominantly on its use by teachers to support planning, marking and administration, and personalised learning for pupils.
- Despite optimism, the current use of AI tools in teaching, learning and education consists mainly of informal use of generic AI products such as ChatGPT by teachers and pupils. Education-specific AI tools are barely emergent.
- There are barriers to understanding not just the impacts but the data and models used in some EdTech:
- The evidence base is limited on the pedagogical efficacy of using AI in EdTech, whether for general learning and teaching, for SEND or for administration.
- The social impact of both existing and emerging technologies also needs to be evaluated. More transparency from EdTech companies may be needed to enable this evaluation.
- Better evidence is needed on the outcomes of pupils using AI EdTech or general-purpose AI in their education. Longitudinal studies may pose methodological challenges, in particular with general-purpose AI as it continues to evolve.
- The regulation and governance of AI in EdTech has not kept pace with the evolution of the products, leaving pupils and schools exposed to potentially risky technologies being deployed.
- Current support for schools in procuring EdTech looks only at administration technology. This leaves a significant gap in support for decision-making around the use of EdTech products in teaching and learning. Lack of expert oversight and independent guidance leaves schools overly reliant on marketing materials and hype, rather than support for procuring and using EdTech that is fit for purpose and proven to be effective. Based on these findings, we see a case for an expert body to undertake evaluation, audits and analysis of AI EdTech and offer guidance on the use of general-purpose AI and generative AI.
This review also identified a number of important areas for future research. These include:
- The relationship between the use of EdTech (including AI) and pupils’ learning and attainment, including variations between pupils with different background characteristics.
- The pedagogical theory and practice that underpins different AI and other EdTech tools for teaching and learning and informs their purposes.
- Opportunities for establishing a standardised evaluation framework that could be used to test the effectiveness of teaching and learning tools before they hit the market.
- Improving oversight and access to the data that AI EdTech and general-purpose AI for education products are trained on, for example for learning content and diagnostic tests that drive personalised or self-directed learning.
- How AI personalised learning models make decisions about a pupil’s knowledge base, and how schools use them.
- What rigorous evaluation of marking and exam assessment tools should look like and the accuracy, fairness and transparency of the algorithms used, to ensure they are unbiased, appropriate for use and can be made subject to redress.
- How teachers are incorporating AI EdTech or general-purpose generative AI models such as ChatGPT into their pedagogical practice.
We will continue the conversations, develop thinking and research, and support evidence gathering on the opportunities and the challenges. This work aims to develop evidence about AI and education that works for everyone – pupils, teachers, administrators, schools, parents and education technology innovators – so that risks are mitigated and benefits are shared.
Glossary
Adaptive learning: when learning material and teaching are adapted to an individual pupil’s learning needs and abilities, providing them with immediate assistance, targeted resources and relevant feedback.[1]
AI (artificial intelligence): an umbrella term for a range of algorithm-based technologies designed to carry out tasks previously considered to require human behaviour, intervention or oversight.
Assistive technology: products or systems that assist disabled people and those with restricted mobility or other impairments to perform tasks that might otherwise be difficult or impossible.[2]
Automated decision-making: a function of technology that uses data and algorithms to make decisions, predictions or outputs without human input.
Cloud-based technologies (or ‘the cloud’): computing services and resources accessed over the internet, allowing users to use software, store data and perform tasks without the need for local hardware or software installation.[3]
Data-driven systems: a range of technologies including advanced data analytics, predictive analytics and algorithms.
EdTech: an abbreviation of ‘education technology’. EdTech is ‘the use of technology to support teaching and the effective day-to-day management of education institutions’.[4] It can also refer to the companies that build and sell this technology.
General-purpose AI (GPAI): an emerging type of AI capable of a range of general tasks (such as text synthesis, image manipulation and audio generation). Notable examples are OpenAI’s GPT-3 and GPT-4 models which underpin ChatGPT and many other applications via OpenAI’s application programming interface (API).
GPAI can work across many complex tasks and domains and can exhibit unpredictable and contradictory behaviour when prompted by human users. It can also be built ‘on top of’ to develop applications for many different purposes. It contrasts with ‘narrow AI’ (see below), which focuses on a specific or limited task, such as predictive text or image recognition.
GPAI systems are sometimes referred to as ‘foundation models’.
Generative AI (GenAI): refers to AI systems usually but not always built on top of general-purpose AI models that can generate new content based on user inputs or prompts. This includes generating new content such as images, video, text or audio.
Hosted chat interface: ‘interface’ refers to how a user interacts with an AI system, such as a chat box that allows users to engage in dialogue with the system (as with ChatGPT). ‘Hosted’ refers to an application being run on a provider’s infrastructure (such as their cloud service) rather than on a device (such as the user’s phone). An example is when ChatGPT is accessed via its hosted website, chatgpt.com.[5]
Large language models (LLMs): an example of general-purpose AI (GPAI). LLMs are trained on significant amounts of text data, enabling them to generate natural language responses to a wide range of inputs. LLMs are used to perform a wide range of text-based tasks, such as answering questions, autocompleting text, translating and summarising, in response to a wide range of inputs and prompts.
Narrow AI: refers to AI systems that have been designed and trained using relevant, specific data to complete a specific or limited set of tasks. Unlike general-purpose AI, a narrow AI system is not designed to be used beyond its original purpose. This makes it easier to predict its risks and benefits.
This type of AI has been used for longer than general-purpose AI.
SEND: an abbreviation of ‘special educational needs and disabilities’, used to describe learning difficulties or disabilities that make it harder for a child or young person to learn compared with others.[6] This includes difficulties in communicating and interacting and in cognition and learning; social, emotional and mental health difficulties; and sensory and physical needs.[7]
A note on this paper’s focus
The Department for Education (DfE) is currently examining the benefits and risks of using generative AI in schools. As the DfE notes in its definition, generative AI is a technology that can be used to create new content based on large volumes of data.[8]
This definition is accurate. However, it is important to note that generative AI is not the only type of AI that has an impact on schools and education in the UK.
We believe that focusing solely on generative AI, rather than also considering the use of general-purpose AI and narrow AI in the education sector, may lead to other relevant and important AI issues and opportunities being missed.
In the ‘Understanding AI’ section of this report, we explain the difference between general-purpose AI and narrow AI and where generative AI sits in this landscape. As these technologies develop, it will become increasingly important to recognise that the full spectrum of AI, not just generative AI, will have an impact on schools and education in the UK.
Introduction
Information technology has had a place in UK schools for the past four decades.[9] Successive governments have encouraged schools to introduce computing hardware, software and internet connectivity into classrooms to improve the quality of education and children’s readiness for their future.[10]
A market has grown that is specifically focused on technology for education. EdTech, as it is commonly referred to, is the umbrella term for a broad range of education-specific digital tools and resources used by:
- teachers, to support them in the classroom and with their professional development
- pupils, to support and develop their learning
- special educational needs and disability (SEND) schools and pupils, as an assistive technology
- school management, to support administrative duties.[11]
In recent years, EdTech providers have begun to add AI to products and services. AI is also an umbrella term, referring to a range of data‑ and algorithm-based technologies designed to carry out complex tasks previously considered to require human behaviour, intervention or oversight.
Traditionally, AI systems have been designed and trained using relevant, specific data to complete a limited set of tasks. This model of AI, known as narrow AI, has existed for decades.
Narrow AI is an often invisible component of many day-to-day technologies we use, from mobile phones, to online entertainment and shopping, to automated decision-making tools used to show us adverts, make financial decisions about us or support our healthcare. Narrow AI is used in a range of EdTech products, particularly those which use data to support decision-making, make a diagnosis of knowledge or define pupil learning pathways.
More recently, with the launch of OpenAI’s ChatGPT in November 2022, a new kind of AI technology has emerged: general-purpose AI (GPAI).
GPAI systems are different from narrow AI systems in several ways. First, they are trained on much larger amounts of data. Second, they are customisable in that they can respond to feedback loops and be fine-tuned to improve on a range of specific tasks. Most notably, unlike narrow AI, a single GPAI system may be capable of performing many tasks across many domains. It can be used to:
- generate new outputs such as text, images or code
- make decisions based on data
- summarise documents or knowledge
- perform tasks such as answering questions, solving problems or developing actions and plans.
The promise of GPAI is that models built using these systems can act as an ‘engine’ for a wide range of tasks in diverse sectors. However, the fact that a model is capable of these tasks does not mean it can achieve them reliably, accurately or effectively.
The launch of these more powerful models has had a significant impact on society. The ability to interact with GPAI via large language models (LLMs) and generative AI models such as ChatGPT, Dall-E or Anthropic’s Claude, or via assistants such as Microsoft Copilot or Google’s Gemini, has made AI feel visible and useable. Many people and organisations have begun to consider how AI can be adopted or embedded into every facet of life to improve efficiency or generate new ways of working, learning and living. This includes within the education sector.
Within six months of the launch of ChatGPT, the World Economic Forum (WEF) published an article outlining how AI could transform education systems and make them more equitable by providing:[12]
- time-saving opportunities for teachers
- tools for teacher training, including acting as a teaching assistant or teacher learning buddy[13]
- streamlined processes for administrative staff
- personalised and self-directed learning for pupils
- improved engagement for pupils with SEND.[14] [15]
Not surprisingly, some teachers and pupils began to experiment informally with ChatGPT. In response, the Department for Education (DfE) put out a call for evidence[16] asking how generative AI was being used and how generative AI systems could support teachers with time-saving activities and pupils with personalised learning.[17] The DfE published the responses[18] and has continued to develop its thinking. Most recently, it led a ‘hackathon’[19] to test a proof-of-concept model (i.e. one not for the market) with teachers to understand further the benefits and risks that teachers perceive the models to have.
To date, most use of GPAI by teachers and pupils is through easily accessible models that are not specific to education, such as ChatGPT, Claude, Microsoft Copilot or Google Gemini. Specific GPAI EdTech products are only just starting to be developed, deployed and used, with very few on the market at the time of writing. Khan Academy’s Khanmigo[20] and Oak National Academy’s Aila[21] are two.
But in time, it is likely we will see an increase in specific GPAI for education coming to market, or the integration of GPAI into existing EdTech products: for example, to help diagnose a pupil’s knowledge, improve accessibility for SEND pupils, make administration for teachers more efficient, or support school administrators with communications and reporting.
If these AI systems and AI EdTech are to realise the transformational opportunities suggested by the WEF and fulfil the daily needs of teachers and pupils, research will be needed across policy, business, academia, and education practice and leadership, to interrogate the technology and its appropriateness to the sector and pupils’ learning outcomes.
Currently there are gaps in knowledge and expertise between all of these parties: in understanding what AI is, how it works, how it could be used and how it should be controlled; in understanding what AI-based products and services can and cannot do; and in providing clear evidence of their benefit to the educational experience.
The Nuffield Foundation, with its deep expertise in the field of education, and the Ada Lovelace Institute, with its sociotechnical understanding of data-driven systems and AI, have undertaken a year-long investigation into this emerging field. Our aims were to bring greater clarity on the role of AI in schools, to support policy and educational experts to navigate these issues, and to highlight priorities for further research and policy.
In support of that, this report:
- explains what AI is and the various systems and uses that sit under the AI definition umbrella
- details key issues raised by the use of AI EdTech products and what we know about current AI use in schools
- explores how oversight and evaluation can help to ensure that AI EdTech is safe, effective and beneficial throughout the education system
- highlights areas for future research.
This report does not aim to be definitive – it faces limitations and complexities associated with navigating the emerging and often opaque field of AI, with shortcomings in the available evidence and in transparency. It is the first stage in a collaboration between the Nuffield Foundation and the Ada Lovelace Institute to develop the evidence base in this area, working with external experts.
On 22 January 2025, the Education Secretary Bridget Phillipson gave a keynote address at Bett UK that outlined plans to ‘modernise education through the power of technology’.[22] The address took place after this report was finalised.
The speech announced that two organisations have been appointed to undertake DfE projects previously outlined in a tender from May 2024. We advise readers that references to the tender and the relevant projects appear in this report in the paragraphs relating to footnotes 155 and 183, as well as in the content of the references.
The speech also referred to £1 million of funding awarded to 16 developers for developing marking and feedback tools. This is an update to an earlier announcement of a £4 million investment for a data content store to support AI marking tools, which we refer to in the paragraph relating to footnote 133.
Project aims and methodology
This report focuses on exploring and identifying the use of AI in EdTech in primary and secondary education in the UK, to highlight areas of research needed to demonstrate the technological and pedagogical efficacy, safety and effectiveness of EdTech and AI in education. Education policy is devolved across the four nations of the UK, but as policy discussions about AI will impact all nations we have chosen to refer to the UK as a whole.
This initial review was based on desk research. The literature we reviewed includes research reports, EdTech product descriptions, policy documents, academic literature, and grey literature such as blogs and news articles.
Relevant literature was identified through word of mouth from stakeholders, keyword searches of academic and grey literature databases, internet searches, and the websites of EdTech companies.
The report also incorporates policy analysis of existing and draft legislation and regulations relating to AI and data protection, and of evaluation frameworks that apply to the development, deployment and use of technologies in education across the UK, Europe, USA and Australia.
We have not undertaken any technical testing of the products we refer to. We do not always know for certain whether a product can reliably be described as AI, or whether it would be more accurately described as a data-driven or data analytics system.
The challenge of accurately identifying whether AI is in a product is part of a broader transparency issue. There is a lack of systematic information about where and how AI is used in the education sector and very little in the public domain about the data, models or evaluation of tools. This is not unique to EdTech but an issue across the whole public sector.
How to read this report
This report is designed to offer a foundation for those engaged in questions around EdTech and AI. We therefore suggest reading the report in its entirety to gain a comprehensive understanding of AI and the role it plays now and may play in the future within the UK education sector and education technologies. However, we have suggested sections to prioritise.
If you are new to AI or interested in learning what it is:
- Read the ‘Introduction’, the ‘Understanding AI’ section, and Appendixes 2 and 3 to gain a grounding in narrow AI, general-purpose AI (GPAI) and generative AI.
If you are a policymaker or regulator concerned with AI in education:
- Read the section ‘AI in EdTech: Key issues’ to understand some of the impacts and risks associated with the use of data-driven technologies and AI, and specifically GPAI, that could impact education and education technology.
- Read the section ‘Contextualising AI in EdTech’ for an overview of how AI is currently deployed or being imagined for learning, teaching, SEND and administration.
- Read the ‘Oversight and evaluation’ section for a discussion of the gaps that exist in how AI and EdTech products are assessed and evaluated and a view on what oversight is needed to ensure that GPAI products, EdTech and AI EdTech are appropriate, necessary and fit for purpose.
If you are a developer or designer building AI in EdTech:
- Read the section ‘AI in EdTech: Key issues’ for a consideration of transparency and accountability, and how to understand if a system is unbiased, accurate and performing reliably.
- Read the ‘Oversight and evaluation’ section to gain a sense of the evaluation, assessment and oversight needed for EdTech and AI to be safe and fit for purpose.
If you are a researcher, civil society organisation, school leader, teacher, parent or member of the public interested in education technology and AI:
- For a quick overview of the report, read the ‘Executive summary’ and ‘Introduction’.
- The ‘What next? Areas for future research’ section provides some questions that warrant further consideration and represent opportunities for future research.
Understanding AI
AI tools are increasingly integrated into the digital platforms, products, applications and services many of us use every day. AI autocorrects our words as we type, identifies our preferences and uses them to shape what content we see online, analyses data for us or about us, and can automate decisions about us, based on the data it is trained on.
To understand the introduction of AI in EdTech and the impact this may have, we need a clear sense of what AI is, not least as the term is often used as a shorthand or an umbrella term to describe a spectrum of concepts and components that enable or support very different functions.
There is no universally accepted definition of AI. Broadly, it refers to the science of creating computer systems designed to carry out tasks previously considered to require human behaviour, intervention or oversight. The Information Commissioner’s Office (ICO) defines AI as ‘an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking’.[23]
In this section we cover narrow AI, general-purpose AI (GPAI) and generative AI.
While there are many different types of AI systems and techniques, they share a common set of components, including data, algorithms, compute and the models they produce. A minimal illustrative sketch of how these components fit together follows the list below.
Common components of AI
- Data: Data is at the core of AI systems and can be used in several ways. Some AI systems, such as machine learning models, are trained using data to infer patterns and relationships between the data’s different features. Data is also used as an input to an AI system that produces an output. Any data or information that can be captured and uploaded into a system can be analysed or processed by AI. This includes numerical data, text (including handwritten notes), visual and audio data, and more specialist types of data such as human genome or geospatial data.
- Algorithms: An algorithm is a sequence of instructions for completing a task using data. In some kinds of AI systems (such as symbolic systems; see Appendix 2), an algorithm functions like a recipe, listing steps for how to use certain ingredients. In others (such as machine learning), the rules are inferred from the data rather than being hard-coded in advance.
- Compute: This refers to the computational resources and processing power required to train, develop and run AI systems. These can include processing chips, memory and storage, and cloud computing resources.
- Model: An AI model is the final result of training an algorithm on data. It represents the learned patterns, relationships and features from training data, which can be used to make predictions or decisions when fed new data.
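To make these components concrete, here is a deliberately minimal, illustrative sketch in Python. The data values and the revision-hours scenario are invented for illustration; the ‘algorithm’ is a simple least-squares line fit, the ‘model’ is the pair of parameters it learns, and the compute involved is trivial here, whereas training a general-purpose model requires vast datasets and data-centre-scale compute.

```python
# Minimal illustrative sketch of how data, an algorithm and a model relate.
# The algorithm is deliberately simple: fit a straight line (score = a * hours + b)
# to past data by least squares. The fitted parameters are the 'model', which can
# then be applied to new data.
hours_revised = [1, 2, 3, 4, 5]        # data: input feature (invented values)
test_scores = [52, 58, 61, 68, 74]     # data: known outcomes (invented values)


def fit_line(xs, ys):
    """Algorithm: ordinary least squares for a single input feature."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept            # model: the learned parameters


def predict(model, hours):
    """Use the trained model to make a prediction about new data."""
    slope, intercept = model
    return slope * hours + intercept


model = fit_line(hours_revised, test_scores)
print(round(predict(model, 6), 1))     # predicted score after 6 hours of revision
```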
Narrow AI systems
The AI landscape now includes many GPAI systems, trained on large datasets and with a wide range of applications. Before this, AI systems were designed and trained to complete a more specific, limited set of tasks.[24] This ‘narrow AI’ model has existed for decades and has the following features:
- Specific: narrow AI systems are designed for a particular task or a set of closely related tasks.
- Limited in scope: they do not possess the ability to achieve a wide range of tasks or transfer learning from one domain to another.
- Data-driven: they typically rely on large amounts of domain-specific data for training and operation.
- Task optimised: they may outperform humans in specific tasks they have been trained for.
An example of narrow AI is a facial recognition system trained to detect a person’s face and match it to their passport when they pass through border control. In this case the system is trained specifically with a dataset of facial images curated for this purpose. If this system were to be deployed in another context (for example in schools to support payment for school lunches)[25] it would need to be tailored for that purpose.
Despite being task-specific, narrow AI systems can be varied and incredibly powerful. They can analyse large datasets at speed, make predictions and automate decisions. Some systems can learn patterns from datasets and use this to adapt their analysis or decision making.
Different approaches to narrow AI can be used to achieve different tasks. AI systems that may be found already in EdTech products include:
- Natural language processing (NLP) systems: NLP systems process and analyse human language data. They enable machines to interpret and generate human-like text.
- Examples: personalised or adaptive learning products; vocabulary, translation or language apps; screen readers for pupils with SEND; automated marking tools.
- Computer vision systems: These systems interpret and analyse visual information. They are used to recognise objects, faces, text, and patterns in images and videos.
- Examples: a self-checkout system in a supermarket that uses computer vision to identify fruits and vegetables without barcodes; SEND learning products that monitor or detect a pupil’s engagement or learning behaviours; automated invigilation systems that support online examinations.
- Speech recognition and generation systems: These systems convert spoken language into text or generate spoken language from text. They enable voice-based interactions with machines.
- Examples: voice assistants such as Apple’s Siri or Amazon’s Alexa that can understand spoken commands and respond with synthesised speech; digital technologies and EdTech products supporting pupils learning to read or with speech and language development.
- Predictive analytics systems: These systems use historical data to predict future outcomes or behaviours, supporting data-driven decision-making and trend forecasting. A minimal illustrative sketch of this kind of system follows this list.
- Examples: learning, teaching, administration or safeguarding AI EdTech; technologies used to identify pupils’ behaviours, attainment, absenteeism, learning state or classroom activity.
- Recommender systems: These systems suggest items or content to users based on their preferences and behaviour. They are used to enhance user experience and increase engagement. The accuracy and effectiveness of these systems in EdTech products is yet to be determined.[26]
- Examples: Netflix’s movie recommendation engine; personalised or adaptive learning products for pupils, or planning tools for teachers.
- Biometric recognition systems: These systems identify individuals based on unique physical characteristics and are used for security and authentication purposes. Their use in schools is not without controversy and concern.[27]
- Examples: biometric recognition systems to support registration, school lunch payments or library loans.
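As a concrete illustration of the ‘predictive analytics’ entry above, the sketch below shows the general pattern such systems follow: learn a simple rule from historical data, then apply it to new cases. The attendance figures, threshold rule and flagging function are all invented for illustration and do not describe any real product; real systems use far richer data and models, and raise the fairness and transparency questions discussed later in this report.

```python
# Hypothetical sketch of a narrow, predictive-analytics style EdTech feature:
# learn from historical data which attendance rate best separated pupils who later
# became persistently absent, then use that threshold to flag current pupils.
# All data here is invented for illustration.
historical = [
    # (autumn-term attendance %, became persistently absent later that year?)
    (98, False), (95, False), (93, False), (90, False), (88, True),
    (85, True), (80, True), (96, False), (91, True), (99, False),
]


def learn_threshold(records):
    """Pick the attendance threshold that misclassifies the fewest historical pupils."""
    candidates = sorted({attendance for attendance, _ in records})

    def errors(threshold):
        # Count pupils where the rule (attendance < threshold) disagrees with what happened.
        return sum((attendance < threshold) != absent for attendance, absent in records)

    return min(candidates, key=errors)


THRESHOLD = learn_threshold(historical)


def flag_at_risk(current_attendance):
    """Flag a pupil for follow-up - a suggestion for staff to review, not act on blindly."""
    return current_attendance < THRESHOLD


print(THRESHOLD, flag_at_risk(87), flag_at_risk(97))
```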
General-purpose AI (GPAI)
Recently, advances in AI research have created more powerful kinds of AI systems, capable of achieving many tasks that traditionally required separate narrow AI systems. These new general-purpose AI systems can be built ‘on top of’, enabling them to be used to develop applications for many different tasks, purposes and domains.
While GPAI may be capable of performing some of the same tasks as narrow AI, GPAI systems are trained on huge datasets, meaning that they are capable of a wide range of general tasks – including tasks that they may not have been explicitly or exclusively trained for.
For example, GPAI models can produce individualised responses to prompts from a user and can vary their responses based on a user’s question or query. GPAI systems of this kind – large language models (LLMs) – can predict or generate the next word or sequence of words far more accurately, quickly and efficiently than earlier AI models, using sophisticated machine learning and deep learning methods to identify and encode patterns and relationships in order to determine a response or output.
When a user inputs a prompt into a GPAI model, it produces a response by identifying or predicting what word comes next, based on the sophisticated patterns it has learned from the data it has been trained on.
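The toy sketch below illustrates the principle of next-word prediction in its simplest possible form, by counting which word most often follows each word in a tiny, invented training text. Real LLMs learn far subtler statistical patterns with deep neural networks trained on vast corpora, but the underlying idea of predicting the next word from patterns in the training data is the same.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction. Real LLMs learn patterns with deep
# neural networks trained on vast corpora; here we simply count which word most
# often follows each word in a tiny, invented training text.
training_text = (
    "the pupil reads the book the pupil writes the essay "
    "the teacher marks the essay the teacher plans the lesson"
)

following_words = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    following_words[current_word][next_word] += 1


def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    candidates = following_words.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]


print(predict_next("pupil"))    # 'reads' or 'writes' (tied in this tiny corpus)
print(predict_next("teacher"))  # 'marks' or 'plans' (also tied)
```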
GPAI systems are characterised by:
- Versatility and adaptability: They are capable of many tasks they were not specifically trained for and can adjust to new tasks without requiring extensive reprogramming.
- Improved task performance: They can perform much better at some tasks, such as language translation, than previous models.
- Dialogic: They tend to be more capable than other AI systems of advanced and customisable interaction with users.
- Ability to customise: They can be fine-tuned with more data to improve performance on specific kinds of tasks.
- Range of modalities: They are capable of generating content in different modalities (text to text, text to image, audio to text).
- Size: The models require vastly larger datasets than many other systems, and larger amounts of compute resources (including cloud computing and data centres) are needed to train and run them.
These systems may seem capable of many tasks, but whether they perform them well, reliably or effectively should be central to any consideration of their adoption in EdTech and their use in education more generally. For example, they may perform poorly in languages other than English, and they cannot answer questions about information that is absent from the data they were trained on or have access to. Some GPAI products constrain the kinds of tasks they can be used for by restricting what a user can prompt the system to do, or what kinds of outputs it may generate.
Prominent examples of GPAI models include Google’s Gemini,[28] Meta’s Llama-3, Anthropic’s Claude 3.5 Sonnet[29] and OpenAI’s GPT-4.
These models have been used to build hosted chat interfaces, making them much easier for everyday users to interact with. Users do not need technical expertise to chat and engage: they can simply head to a website or app, write questions or requests, and get a response. Popular examples include OpenAI’s ChatGPT, Dall-E (for images), Microsoft’s Copilot[30] and Anthropic’s Claude.
These products specialise in generating new content based on a text prompt inputted by the user. They can generate text, summarise documents, write code and create images and audio, which is why they are often referred to as generative AI.
The value chain of GPAI models
A key feature of GPAI models is they can be used by other companies to build bespoke AI products and services. Companies developing GPAI systems may draw on proprietary data and compute resources they have collected, or they may use data and resources from another company. They may build products from their own model, or make their model available for other companies to build a product from.
For example, the developer of a general-purpose model can sell access to it to an EdTech developer, who can further train that system on education-related or education-specific data (a process known as ‘fine-tuning’) for a specific educational task. An EdTech developer could also procure an open-source general-purpose model like Meta’s Llama-3 and build an EdTech product around that model by adding in specific safeguards, and fine-tuning data and design features.
Companies building AI models, systems and products can sell or deploy them in different ways:
- A company might sell a model they developed as a service to other developers to integrate into an AI product (a hypothetical sketch of this pattern follows this list). For example, Faculty bought access to OpenAI’s GPT-4 to use as a back-end engine to power a proof-of-concept generative AI tool in a recent user research study.[31]
- A company might release a model it develops as open source, enabling anyone to use and integrate it into their product. For example, Meta released its Llama-3 general-purpose model via an open-source licence that allows developers to use it for a variety of purposes.
- A company might develop an AI product to sell to a school, or direct to consumers, doing all of the model and product development itself, for example Khan Academy’s Khanmigo AI tutor.
- A company might produce components of a product, such as the dataset needed to train a model. For example, Scale.AI offers a service of creating datasets used to train AI models.
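The hypothetical sketch below illustrates the first of these patterns: an EdTech developer wrapping a hosted general-purpose model in an education-specific product. The endpoint URL, payload fields, model name and response format are illustrative assumptions rather than any real vendor’s API; the point is simply that the developer adds the education-specific framing while the general-purpose model supplies the text generation.

```python
import os
import requests

# Hypothetical sketch of an EdTech product built on top of a hosted GPAI model.
# The endpoint, payload fields and model name below are illustrative assumptions,
# not any specific vendor's real API.
GPAI_API_URL = "https://api.example-gpai-provider.com/v1/chat"
API_KEY = os.environ.get("GPAI_API_KEY", "demo-key")


def generate_feedback(pupil_answer, mark_scheme):
    """Ask the hosted general-purpose model to draft feedback on a pupil's answer.

    The EdTech developer supplies the education-specific framing (the system
    instructions and the mark scheme); the general-purpose model supplies the
    text generation.
    """
    payload = {
        "model": "example-gpai-model",
        "messages": [
            {"role": "system",
             "content": "You are a feedback assistant for teachers. "
                        "Comment only on the mark scheme criteria provided."},
            {"role": "user",
             "content": f"Mark scheme:\n{mark_scheme}\n\nPupil answer:\n{pupil_answer}"},
        ],
    }
    response = requests.post(
        GPAI_API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    # A teacher should always review the draft before it reaches a pupil.
    return response.json()["output_text"]
```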
AI in EdTech: Key issues
We are in the early stages of general-purpose AI (GPAI). On the Gartner ‘hype cycle’, we are still at the ‘peak of inflated expectations’[32] – we are identifying the benefits and risks, and what constitutes safe, effective and beneficial use.
If we consider the internet as an example of another technology, it is clear that its adoption has not been without challenge. Uses and harms that were not imagined or considered have over time come to the fore. The passing of the Online Safety Act 2023[33] has highlighted the broad spectrum of risks, harms and sociotechnical challenges that the internet has brought, particularly where children and young people are concerned.
This section of the paper highlights some issues that are particularly pertinent when considering the adoption of AI in EdTech. These issues may be experienced in the use of non-education specific GPAI products for teaching, learning and administration; in the use of AI embedded within a specific EdTech product; or in using specific GPAI or generative AI EdTech products.
Given the novelty of the tools, and in light of some of the issues we discuss below, we suggest that care needs to be taken to avoid developing binary narratives that see the technology as either good or bad, or that see scepticism as an expression of luddism or negativity. Being circumspect about a new and disruptive technology such as GPAI may prove valuable, particularly in relation to uses by or for children and young people in education settings.
Training data
The quality of data in a system determines the quality of data out. Datasets are never a perfect reflection of society, and there are often issues with completeness, curation and relevance.
GPAI systems in particular are highly opaque, drawing on datasets so large that there is no clear account of exactly what data is included. There is a lack of transparency about exactly what data these systems are trained on, other than that it is ‘internet based’,[34] includes web pages, books, research articles and social chatter,[35] and is majority English-language data.[36]
One of the major datasets used to train and build some of the most cutting-edge general-purpose models was recently found to contain thousands of child sexual abuse images and examples of misogynistic and racist content.[37] The online social media site Reddit – which contains content that may be unsuitable for children and young people – has been and will be used to train ChatGPT.[38]
The selection of sources of data (and lack of transparency about the full spectrum of sources) raises a number of serious issues, particularly for systems used in education settings. Much of the content used to train models may be age inappropriate for children and young people. While there may be attempts to manage issues with training data through restrictions on what content they can produce, the safety classifiers that prevent, for example, racist or sexist content from appearing can be easily overridden by a user.[39]
Bias and diversity of data
Data used to train AI systems can be susceptible to risks and harms relating to size, characteristics, encoded bias and diversity.[40] A paper in the International Journal of Information Management identifies that issues relating to the data used to train models and to the reliability of outputs might include ‘discrimination and biases, vulgarity, copyright infringement, plagiarism and “fabricated unauthentic textual content”’.[41]
Biased or unrepresentative data can propagate existing biases and inequalities. This may be due to issues with data collection (a lack of demographic information or lower representation from certain groups, for example). A still more complex problem arises when data accurately captures inequalities that exist in the world, such as representing CEOs as male.
Developers of AI for EdTech products may be better able to monitor and address some of these issues if they are training their own AI systems. However, they may face challenges if they are using GPAI models developed by another organisation, where they do not have access to the underlying data to determine if it is unrepresentative or otherwise unsuitable for their use case.
Privacy
The collection of personal data about pupils is required by schools for administration purposes, and schools must adhere to data protection laws in their capacity as data controllers.
Online services designed for use by children and young people – including EdTech products and services – are subject to the Age Appropriate Design Code.[42] This statutory code of practice came into force in 2020. Information society service (ISS) providers must show that they are conforming to the 15 standards set out in the code, to ensure that children’s personal data is safeguarded when using their products or services.
The code’s relevance to EdTech is nuanced: it does not apply across the board.[43] Schools are not subject to the code. EdTech companies that provide a service that a pupil can access as a consumer are required to comply, as are service providers who process children’s personal information through an EdTech service used by a school.
As more advanced AI technologies are introduced into schools, there will be a need to review the protections for children and young people’s data to ensure that their personal data and privacy are not compromised by data-intensive technologies using their data for commercial purposes.
Any AI system, product or app that is connected to the internet collects data: indeed, it is data that makes the system or product function. The data collected can be personal identifiable data, sensitive data or behavioural data. The collection, retention, sharing or use of this data has the potential to impact a person or a group’s privacy.
When we share data with GPAI systems, it appears that rather than being used to personalise a system or service for us alone, it is used to train and fine-tune the models for the benefit of every user. But the full extent to which our data is being used by these systems is at present unclear. Researchers at the Allen Institute for AI have found that data shared by users in chats with ChatGPT has appeared in outputs generated by the system for other users.[44] [45] If personal data about pupils or staff – for example data for the generation of a pupil report or for generating personal feedback – is shared with these generally available systems, there is a risk that some or all of the data shared could appear in other outputs.
Transparency
Knowing how AI systems work, what data they are trained on, and what data they collect, create, retain or share, is necessary – particularly when using products that assist or automate decision making.
The use of AI – including predictive analytics embedded in EdTech – to shape decisions can be controversial, as it can be difficult, if not impossible, to determine or explain how a decision has been made. This is often referred to as the ‘black-box problem’: the technical challenge of understanding what is happening within a complex model or system.
The black-box problem can cause challenges for auditing or assessing how an AI system has reached a decision. If a specific algorithm or a GPAI system has been used to determine a pupil’s learning pathway, recommend a future career or university degree, or determine a grade or qualification, understanding how the decision was made, what data was used and in what way is absolutely necessary – and yet if AI has been used, it may be difficult if not impossible.
Under the UK General Data Protection Regulation (UK GDPR), people have the right to query a fully automated decision made about them (i.e. when there has been no human intervention in the decision-making process at all). In practice, however, we very often do not know that a fully or partially automated decision has been made, or that AI has been involved.
When it comes to AI, there is also a non-technical challenge related to transparency. Much of the details, data and research are held by private companies and not easily able to be scrutinised. This limits research into the efficacy and impact of tools.
Even within the public sector, there is little transparency about where AI is being used. Despite the launch of the Algorithmic Transparency Recording Standard several years ago, there are few entries available and these only cover a subset of uses of AI.[46]
Case study: Ofqual exam results algorithm
The potential for harm in using narrow AI for decisions that can impact pupils’ lives was demonstrated clearly during the COVID-19 pandemic, when Ofqual used an algorithm to grade A-levels. The algorithm used by Ofqual was not highly complex (from a technical perspective) or opaque:[47] Ofqual had held a public consultation on its design. The algorithm combined pupils’ previous attainment data with teacher assessment and the ranking of the school in order to assign grades to students who were due to sit their A-levels in 2020 but could not because of the pandemic.
However, the decisions made by the algorithm were seen to be unfair, unjust and untrustworthy, leading to protests from students, lobbying by parents, threats of legal action and backpedalling by Ofqual. By the time the A-level results were explained, many qualified students had been rejected by their universities of choice,[48] because the bias towards school performance within the algorithm had resulted in high-performing students from low-performing schools getting disproportionately low grades compared with students from higher-ranking schools.[49]
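The purely illustrative sketch below is not Ofqual’s actual model, which was considerably more detailed; it simply shows, with invented numbers, why anchoring pupils’ grades to their school’s historical results and the teachers’ rank order can cap what an outstanding pupil in a historically low-performing school is able to achieve.

```python
# Purely illustrative sketch: NOT Ofqual's actual model. It shows, with invented
# numbers, how distributing a school's historical grade profile across pupils by
# rank order can penalise outstanding pupils at historically low-performing schools.
# Grades are simplified to numbers (higher is better).
school_historical_grades = [4, 4, 5, 5, 6]      # this school's results in previous years
teacher_rank_order = ["Aisha", "Ben", "Chloe", "Dan", "Ella"]  # strongest to weakest, per teachers

# Hand the school's historical grade profile out to this year's pupils by rank,
# regardless of how strong any individual pupil actually is.
assigned_grades = dict(zip(teacher_rank_order, sorted(school_historical_grades, reverse=True)))

print(assigned_grades)
# {'Aisha': 6, 'Ben': 5, 'Chloe': 5, 'Dan': 4, 'Ella': 4}
# Even if Aisha's own work merited the top grade nationally, the school's past
# results cap what she can be awarded - the kind of unfairness described above.
```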
Lack of comprehensive governance and legislation
There are a number of laws and regulations that affect the use of AI – including the Data Protection Act 2018,[50] the UK General Data Protection Regulation (GDPR),[51] relevant parts of the Human Rights Act 1998, the Equality Act 2010[52] and, where relevant, the Age Appropriate Design Code (AADC).[53] However, there is no AI-specific legislation in the UK.
There is acknowledgement that GPAI is putting existing regulation under pressure. The government has committed to bringing forward ‘binding regulation’ on the developers of ‘the most powerful’ AI models. The Information Commissioner’s Office (ICO) has undertaken a series of consultations on the application of existing data protection law in relation to generative AI,[54] with a view to updating draft guidance.
This means that there is currently a lack of clarity about school leaders’ duties as data controllers in relation to procuring and using new GPAI systems or AI EdTech products.
Issues specific to GPAI
‘Hallucination’, falsehoods and inaccuracies in AI-generated outputs: A fundamental problem with GPAI is its ability to ‘hallucinate’.
This refers to when these systems produce convincing but factually incorrect information, although a recent academic paper has suggested that ‘hallucinate’ is an ‘inapt metaphor’,[55] as the persistent inaccuracies it describes are not about the system misrepresenting the world. Rather, the systems simply have no concern for the truth, as their role is to produce text that looks like truth.[56]
General-purpose AI systems lack any inherent understanding of the concepts they describe in their outputs and have no ability to reason or plan. Rather, they convincingly replicate human speech or writing by estimating the likeliest next word based on the text that has gone before.[57] This is why they can fail simple logic tests that a human might understand and can generate factually incorrect outputs.
Experts disagree on whether the problem of ‘hallucination’ will ever be resolved,[58] so any AI-generated output might need to be checked to ensure accuracy, or only be used for tasks where accuracy does not matter.
This is of acute concern if GPAI systems are to be used in formal education. Pupils are still developing their skills, knowledge and critical thinking and so may not be adept at identifying errors, inaccuracies or falsehoods that an AI-generated output could produce, and this leaves them vulnerable.
Further research is needed to understand how users identify and consider potential falsehoods, and how they can be protected against them – for example by using GPAI only for questions with no ‘correct’ answers, or as an aid for further work.
The National Education Policy Center has raised concerns that teacher use of GPAI could ‘flood’ classrooms with ‘misleading inaccuracies or false information’,[59] and that AI-generated text might make it ‘impossible to ascertain the authority or authenticity of any online source’.[60] Yet unlike with pupil use of GPAI, there is an expectation that teachers will be better equipped to identify inaccuracies in an output.
Of course, teachers may find that double-checking an AI-generated output could add to their working time rather than decrease it. Further research is needed to understand how teachers use tools and check information in practice, and how that compares with time spent doing ‘manual’ research.
Impact on autonomy and creativity: There is concern in some quarters that the use of GPAI could impact autonomy and creativity, for pupils and teachers. Wayne Holmes, professor of critical studies of AI and education at University College London (UCL), has outlined fears about the disempowerment that teachers may experience as a result of having their role ‘relegated to switching on the technology … while the AI-enabled system – or rather the commercial organisation behind the AI-enabled system – decides what the students should be learning, in what order and how’.[61]
Some AI EdTech products and GPAI systems have the potential to reduce teachers to facilitators as described here. Others might seek to train teachers to behave in certain ways that could undermine or challenge a teacher’s natural style.
We need to be wary of suggesting that the introduction of technology will inevitably reduce teachers’ autonomy or undermine their role. As with all technologies, it is not just what they can do but how they are used that determines how harmful they might be. Furthermore, their value is not always apparent across every sector or aspect of life.
Ensuring that pupils’ creativity and autonomy are enhanced rather than diminished by general-purpose AI will need to be central to any curriculum development. Educators will need to determine how to bring AI into the curriculum and classroom in a way that supports pupils’ learning, enquiry, critical thinking and creativity, rather than displacing or relegating those essential skills.
Impact on relationships: Concerns exist about how the adoption of technology could undermine the pupil–teacher relationship. AI marking and feedback may be beneficial in removing teacher subjectivity,[62] or preventing teacher bias or mood from impacting a pupil’s mark.[63] But the overarching importance of keeping a human in the loop – even if that human demonstrates bias – relates to the value of human interaction, navigating emotions and being able to query or challenge an outcome.
Teachers who were invited by the DfE to test a proof-of-concept generative AI marking tool echoed these worries: they liked the idea of being able to reduce subjectivity but raised concern about the impact that automated feedback would have on pupils and how it could undermine the pupil–teacher relationship.[64]
With regard to the use of chatbots for learning, educators and policymakers should be alert to research in non-educational contexts[65] into the impact that AI chatbots are having on young people. Some benefits have been suggested, for example facilitating learning, alleviating issues around loneliness and enabling the development of social skills. However, concerns are also emerging about negative impacts on emotions and trust, the risk of reliance, and encouragement of risky behaviour. The more ‘humanlike’ an AI chatbot appears to a user, the more blurred reality and technology can become; this is a risk to all age groups but of particular concern to those whose emotional development is still in progress.
Contextualising AI in EdTech – the evolution of technology in schools
Schools in the UK have been investing in technology for over four decades. In the 1970s schools began to adopt computing hardware such as microcomputers, transitioning to portable computers in the 1980s. The 1997 general election and subsequent New Labour government led to significant change in educational technologies, more in line with equivalent advancements in many other aspects of wider society.
Schools today feature a range of hardware for pupils to use, from desktop computers to laptops and tablets. Pupils with SEND have access to assistive technology such as screen readers, Braille translators and voice assistants to assist their learning and development. The introduction of the internet in schools was followed by technology to monitor pupils’ use of technology. Safeguarding and filtering software are used to monitor and block online/internet activity, so that pupils are protected from seeing or sharing harmful content such as terrorist or extremist material.[66] Pupils use digital technologies, EdTech and, now, AI EdTech to learn on, learn about and increasingly learn from.
Access to software or cloud-based applications is widespread across all facets of the school environment. The MISs (management information systems) that administration teams use have become more sophisticated and increasingly reliant on vast quantities of data to function. Teachers’ use of technology has moved beyond interactive whiteboards to an array of software products. Classroom management software provides teachers with a suite of offerings to support their teaching and administrative duties, from monitoring and measuring pupils’ behaviour and attention to seating planning, lesson planning, curriculum support, report writing, marking and remote teaching/learning tools, as well as a means by which teachers can directly communicate with parents and pupils.
Where is AI in use in schools today?
This section of the paper discusses what we know about how AI is currently being used across education, based on information available in the public domain (noting our early comments about the general opacity of AI). More specifically, we review the use or development of AI systems as a feature of EdTech products, the development of specific GPAI EdTech products and the use of non-education specific GPAI products for:
- personalised and adaptive learning
- assistive technology and augmentative and alternative communication tools for SEND pupils
- lesson planning
- marking and assessment
- teacher training
- administration and safeguarding.
Current use of AI systems and products in education
AI for pupils
One of the most prevalent use cases cited for AI in education is the perceived opportunity for it to support a more personalised learning experience for pupils. The concept of personalisation in education is complex and multifaceted.[67] There is no agreed definition for ‘personalised’ or ‘adaptive learning’. At the core of the concept, though, sits the pupil and their individual learning experiences through which they are able to expand their knowledge, perspective, skills and understanding.[68]
Conventional teaching – whereby a teacher teaches a class of pupils as a group – is the standard approach taken in most UK primary and secondary schools. Pupils learn together and are tested and assessed on the same content, at the same or similar pace. Pupils’ one-to-one experience with the teacher comes in the form of individual feedback, either verbal or written.[69]
EdTech products that are marketed as intelligent tutors or adaptive or personalised learning tools[70] have been on the market for some years. These products aim to work with the pupil directly and provide insights for the teacher. They are seen by educators, technologists and policymakers as opening up possibilities for teaching and learning that have to date been difficult to achieve.[71]
Some use algorithms to run automated diagnostic assessments of a pupil’s knowledge and learning levels.[72] [73] Others use narrow AI to tailor the content shared with a pupil, which is modified as the pupil’s skills and knowledge evolve.[74] [75]
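To make this general pattern concrete, the sketch below shows, in very simplified form, how a mastery-style adaptive learning loop might work: a running estimate of the pupil’s level is updated after each answer and used to choose the next item. It is purely illustrative and does not describe any particular product; the class names, thresholds and scoring are invented for this example.

```python
# Illustrative sketch only: a simplified mastery-based adaptive learning loop.
# Not a description of any commercial product; names and thresholds are hypothetical.

from dataclasses import dataclass
import random

@dataclass
class Item:
    prompt: str
    difficulty: float  # 0.0 (easy) to 1.0 (hard)

class AdaptiveTutor:
    def __init__(self, items):
        self.items = items
        self.ability = 0.5        # crude estimate of the pupil's current level
        self.learning_rate = 0.1  # how quickly the estimate moves after each answer

    def next_item(self) -> Item:
        # Pick the remaining item whose difficulty is closest to the ability estimate,
        # so the pupil is neither bored nor overwhelmed.
        return min(self.items, key=lambda i: abs(i.difficulty - self.ability))

    def record_answer(self, item: Item, correct: bool) -> None:
        # Move the ability estimate up after a correct answer, down after an incorrect one.
        direction = 1 if correct else -1
        self.ability = min(1.0, max(0.0, self.ability + direction * self.learning_rate))
        self.items.remove(item)

if __name__ == "__main__":
    bank = [Item(f"Question {n}", n / 10) for n in range(1, 10)]
    tutor = AdaptiveTutor(bank)
    for _ in range(5):
        item = tutor.next_item()
        # In a real product the pupil would answer; here the outcome is simulated.
        correct = random.random() < 0.7
        tutor.record_answer(item, correct)
        print(item.prompt, "correct" if correct else "incorrect", f"ability~{tutor.ability:.2f}")
```

Commercial products are far more sophisticated than this, but the underlying loop – diagnose, select, update – is broadly the shape such tools take.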
Policymakers and AI experts have identified GPAI as potentially having a ‘transformative impact on education’ including in ‘improving teacher effectiveness by personalising learning for students’.[76] [77] Specifically, they see the potential for real-time communication between AI and pupils, and the opportunities to give instant personalised feedback and recommend adjustments to support pupils with their learning journey.
Some pupils may already be using ChatGPT and other GPAI or generative AI systems such as Dall-E and Midjourney[78] to support them in their homework and assignments. But none of these systems are specifically designed for educational purposes and so outputs may not be fully age appropriate, UK focused or specifically curriculum based. OpenAI has launched ChatGPT Edu[79] but currently this is solely for use in universities and not tailored to primary or secondary schools.
Education-specific GPAI systems are only beginning to emerge. Khan Academy’s Khanmigo[80] is the most prominent and advanced example. The product is described on its website as a ‘debate partner, essay reviewer, tutor, writing coach, homework helper and study buddy’.[81] The New York Times has described it as a ‘simulated tutor’.[82] It is yet to be launched in the UK.
The company says that the product has been ‘incorporated with’ the Khan Academy content library,[83] indicating that the underlying GPAI model has been trained on large quantities of pre-existing educational content – such as Khan Academy practice exercises – that the organisation has developed over the past two decades.
In a 2023 TED Talk,[84] founder and CEO Sal Khan demonstrated how a pupil can engage in a real-time conversation with Khanmigo about a specific Khan Academy exercise or any question that they have. The product’s responses appeared to offer the pupil real-time support and discussion regarding the exercise they were working on or the problem they needed to solve. The product was demonstrated as identifying errors, offering praise and posing Socratic questions.
No academic or technical evaluation of the product appears to be publicly available, so it is unclear whether it is technically proficient and pedagogically effective. The New York Times raises concerns regarding the accuracy of responses as something to be wary of.[85] Khanmigo makes clear in its user guidelines that the product ‘may make errors or “hallucinate”’.[86]
Other AI companies are taking a different approach. Singapore-based NoodleFactory[87] is marketed as an AI platform that schools can use to create a bespoke GPAI product based on their own learning materials, but whether this would be for the benefit of pupils as personalised learning EdTech or for a school to build its own AI teaching and learning support system is unclear.
A 2024 DfE report about generative AI[88] refers to the company, outlining that the product is built using an LLM (potentially an open-source one such as Meta’s Llama 3, or one made available via an application programming interface such as OpenAI’s GPT-4) that has been fine-tuned using a school’s data. Their model ‘prioritises’ the data that the school has uploaded, so the content is curated to align with the school data and not with the wider data that the LLM is trained on – though, as recent research has shown,[89] many safeguards that prevent harmful content from being generated can be broken by this fine-tuning process.[90]
According to NoodleFactory, the school has the option to turn on or offer access to the LLM data ‘if the tool is unable to answer the student questions with the educator-curated content’,[91] but it is unclear how this works in practice.
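The description above suggests a ‘curated content first, general model as optional fallback’ pattern. The sketch below illustrates one plausible way such a pattern could be wired together; it is our own illustration rather than a description of NoodleFactory’s actual implementation, and the retrieval and model-call functions are hypothetical stubs.

```python
# Minimal sketch of 'curated content first, general model as optional fallback'.
# One plausible design, NOT NoodleFactory's actual implementation;
# retrieve_from_school_materials() and call_general_llm() are hypothetical stubs.

from typing import Optional

def retrieve_from_school_materials(question: str, materials: dict) -> Optional[str]:
    """Very crude keyword retrieval over educator-curated content."""
    question_words = set(question.lower().split())
    best_topic, best_overlap = None, 0
    for topic, text in materials.items():
        overlap = len(question_words & set(text.lower().split()))
        if overlap > best_overlap:
            best_topic, best_overlap = topic, overlap
    return materials[best_topic] if best_topic else None

def call_general_llm(question: str) -> str:
    """Placeholder for a call to a general-purpose model (e.g. via an API)."""
    return f"[general model answer to: {question}]"

def answer(question: str, materials: dict, allow_fallback: bool) -> str:
    curated = retrieve_from_school_materials(question, materials)
    if curated is not None:
        return f"Based on your school's materials: {curated}"
    if allow_fallback:  # the school chooses whether to enable this
        return call_general_llm(question)
    return "Sorry, I can't answer that from the materials your teachers have provided."

if __name__ == "__main__":
    school_materials = {
        "photosynthesis": "Photosynthesis converts light energy into chemical energy in plants."
    }
    print(answer("How does photosynthesis work?", school_materials, allow_fallback=False))
    print(answer("Who wrote Hamlet?", school_materials, allow_fallback=False))
```

In a real product the retrieval step would be far more capable, but the governance question is the same as in the text: who decides when the fallback to the general model is switched on, and how is that made visible to teachers and pupils?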
For any of these AI systems to work as hoped, appropriate data will be critical. Khanmigo has years of educational data that can be used to train its GPAI model. Whether schools have access to the data needed to support a bespoke AI system that can support a truly personalised tutor – rather than a chatbot trained to answer a finite number of questions – is yet to be determined.
Any school wanting to invest in building a bespoke GPAI product will want to consider whether it is good value for money, fit for purpose and adaptive to pedagogical needs over time. Independent evaluation of the technical and pedagogical efficacy of the product is recommended, as well as evidence gathering on the accuracy of its outputs and the impact – positive or negative – on pupils’ learning, critical thinking and personal agency.
Similarly, schools that want to invest in existing AI EdTech reliant on algorithms to assess a pupil’s knowledge, identify learning pathways, auto-generate feedback, or predict outcomes for pupils need to be sure that the algorithms and AI are fit for purpose and the outcomes are accurate and unbiased.
Academic research can make an important contribution to understanding the impact and effectiveness of these tools. Arguably there is a need for a consistent and long-term approach to testing, auditing, assessing and evaluating the technology, and the pedagogy underpinning the technology, to ensure that products are beneficial to pupils. This role could be fulfilled by an independent body with technical and pedagogical expertise; no such body currently exists, although the government appears to be investigating what the creation of such a body would look like.
Furthermore, deliberative work, qualitative research and user testing may be needed if personalised learning is to be developed for individual learners’ benefit. Pupils who participated in the DfE’s generative AI hackathon in 2023-2024 were not keen on AI tutors, deeming the idea to be impersonal, error prone and not as helpful as a real teacher.[92] The pupil–teacher relationship matters as much to the pupil as it does to the teacher.
AI for pupils with SEND
Assistive technology is the branch of EdTech designed and used to support accessibility and inclusivity for pupils with special educational needs and disabilities (SEND), and other learning needs such as English as a second language.
The DfE has identified clear benefits in widely available products with built-in accessibility features that support pupils in overcoming reading and writing difficulties, make assessment questions or instructions more accessible, or help them to proofread.[93]
The use of assistive technology in UK schools includes:
- Computer accessibility settings: Users can change these settings on a computer or device to support vision, hearing, mobility and focus.[94]
- Screen readers or text‑ and image-to-speech products: These are a form of assistive technology that turn text and images into speech or braille to support people who are blind or visually impaired, who have a learning disability or who are illiterate. They can either be built into a pre-existing product, such as Microsoft Narrator and Google’s voice typing feature,[95] or be a stand-alone product such as Speechify.[96]
- Voice assistants: These use a range of AI, including natural language processing, machine learning and speech recognition, to interpret and act on commands given by a user in real time.
- Braille translators: These are used to convert text into braille notation or braille into text.
None of these technologies are uniquely or specifically EdTech; they are widely available, general-purpose assistive technologies which can all be used within schools and educational settings.
Augmentative and alternative communication (AAC) tools or devices are another branch of assistive technology. AAC devices incorporate various communication methods which supplement or replace speech or writing. Unaided AAC, for example, uses facial expression, vocalisations, gestures and sign language to replace spoken language.[97]
AAC and assistive technology EdTech can support pupils with communication difficulties, learning difficulties and developmental disabilities to develop communication skills and learn through sensory play, engaging various parts of the brain, with the aim of helping to better absorb and retain information.[98]
Whether in turning text and images into speech or in translating text into different languages, AI systems feature in AAC and assistive technology used by pupils in SEND schools.
Algorithms and machine learning feature in these tools, supporting data analytics and predictive analytics that analyse and interpret users’ behaviour in products which track or monitor unaided AAC signals – for example, eye tracking technology used to support communication for pupils with motor impairments.[99] Narrow AI natural language processing is used to support voice assistants and technologies that respond to verbal commands.
Assistive technology and AAC, whether designed specifically for SEND environments or for wider use, can support learning and assessment and open up opportunities for greater inclusivity and equity. Indeed, access to these technologies is considered by the United Nations to be a human right.[100] But concerns have been raised by the World Economic Forum (WEF), among others, that using AI in products to support disabled people could discriminate, manipulate or undermine a person’s rights.[101]
For example, academic review of the use of eye tracking technology in some products used in SEND education indicates that it can be helpful and can improve the learning environment.[102] But technologies that track a facial expression or head movement in order to monitor concentration levels, or to undertake emotion or sentiment analysis, face challenges in relation to their accuracy and reliability.[103] [104]
The risks of biometrics and technologies that seek to analyse emotions have been a point of concern for the ICO. It found in 2022[105] that emotion analysis technologies which ‘process data such as gaze tracking, sentiment analysis, facial movements … facial expressions’ were ‘immature’, noting that ‘they may not work yet, or indeed ever’. The report went on to say that ‘the only sustainable biometric deployments will be those that are fully functional, accountable and backed by science’.
While the emphasis of the ICO statement may have been intentionally focused on the breadth of uses of biometric technology for emotion analysis, the need for scientific evidence to support use within AAC EdTech may be of even greater importance to ensure the safety and protection of SEND pupils or those considered to be vulnerable.
It is important that any technologies designed and marketed to support pupils with SEND are rigorous and do not risk undermining the rights and freedoms of users. That might require, for example, greater clarity and transparency around the processing and use of data about pupils’ disabilities, their behaviours while using the products and their learning progress.
The use of biometrics remains controversial, and the lack of clear and meaningful regulation and legislation around the technology continues to be a point of significant concern.[106]
For assistive technology to be truly inclusive and designed with users at the core, development should involve the people who will be using the technology, whether as pupils, teachers or parents/guardians or carers.
Such an approach is also advisable for the development of SEND-specific EdTech products and of non-education-specific technologies, such as generative AI, used assistively. Direct and ongoing engagement with disabled people[107] can help to ensure that accessibility and inclusivity are minimum criteria for products going forward.
AI for teachers
Research published by the Office for National Statistics has shown that student management tasks take teachers on average 4 hours and 46 minutes per day to complete.[108] It is unsurprising, then, that 66% of teachers say they spend less than half their working hours teaching lessons,[109] and 72% report feeling that their workload is too high.[110]
Within a year of the launch of ChatGPT, 42% of primary and secondary school teachers in England reported to the DfE that they had used ‘generative AI’ in their role.[111] A 2023 survey conducted by Teacher Tapp nine months after the launch of ChatGPT similarly found that 34% of teachers had used ChatGPT, Bard, Dall-E or Midjourney to help with schoolwork.[112]
Teachers report using GPAI to support them with:
- lesson planning
- creating and tailoring learning resources
- assessment, marking and feedback[113]
- researching a topic or concept
- summarising articles, books or videos
- transcribing or translating content
- proofreading and editing
- supporting pupils with SEND.[114]
The interest in ChatGPT and similar tools has led both teachers and policymakers[115] to consider how GPAI products could be used to reduce teachers’ workload, in particular time spent on lesson planning and marking and assessment.
Planning and teaching: While widely available non-education specific GPAI systems such as ChatGPT have been the go-to for teachers looking for lesson planning support, evidence on the benefits or risks of the technology as used for this purpose is only beginning to emerge.
The Education Endowment Foundation (EEF) has undertaken a trial testing the use and impact of ChatGPT for lesson planning and resource preparation for Key Stage 3 science.[116] Its findings may provide further useful insights.
A UK-specific GPAI product for teaching has recently been launched. Oak National Academy,[117] the body funded by the DfE to collate and provide online resources to schools and families during the pandemic,[118] was given investment of ‘up to £2 million’[119] in 2023 to establish an AI-powered lesson planner and quiz builder called Aila.[120]
Aila was launched in September 2024. Described in its promotional material as a ‘lesson assistant’, it aims to ‘lighten the load’ of lesson planning for teachers while keeping them as the ‘expert in the driving seat’. The emphasis is on Aila supporting the teacher to plan lessons, not on teaching lessons for them.[121]
Oak has stated that the product is curriculum-aligned. It has been trained on Oak’s own pedagogy and its extensive content library[122] of over 40,000 teaching resources[123] that teachers, subject and education experts have been developing since the organisation’s inception in 2020. Oak is using this data to fine-tune a version of OpenAI’s GPT-4 model.
Despite the product being trained on Oak’s own data, the issue of accuracy of outputs has come up in the development of the product. In a November 2023 blog, its product and engineering director outlined some of the challenges of working with the GPT-4 model, stating that ‘the results are not always accurate enough and the content they generate is not always safe enough for classroom use’.[124]
Determining what content is curriculum appropriate and who has responsibility for decisions about it, teachers or tech companies,[125] will likely be an ongoing discussion. It was acknowledged during the DfE hackathon in 2024 that models need to be trained in subject disciplines for each individual use case scenario, even down to the specificity of each individual school or academy, as ChatGPT might not produce relevant enough outputs for teachers to use.[126]
As Aila is so new, assessment of the technology’s efficacy and its benefit to teachers has yet to be planned, and monitoring its impact will be the critical next step. The launch of Aila, coming at the same time as the government announced a ‘national conversation’ on the curriculum,[127] indicates that a change in how lessons are planned and taught has begun.
Meanwhile, teachers who continue to use non-education-specific GPAI systems such as ChatGPT to support them with their workload should exercise caution about the accuracy and suitability of the outputs they are given. ChatGPT and similar tools may be suited to particular types of task (for example, tasks where there is no single right answer or where accuracy is easy to check) and not to others.
Classroom marking and assessment: There are also hopes that GPAI could reduce the time that teachers spend on marking and assessment – something that 46% of teachers told the DfE they felt they spent too much time on.[128]
Commercial startups have begun to offer products described as using AI to automate marking, essay grading and scoring.[129] [130] [131] These products tend to use algorithms trained on predefined criteria (such as the National Curriculum) and exam standards in order to automate marking. Products such as this are marketed predominantly as tools to save teachers time. Some are also promoted as having the potential to protect pupils from teacher bias or ‘variations in mood’ by ensuring increased consistency and objectivity.[132]
The government has recently announced an investment of £4 million to develop a data content store which will support technology companies to develop AI for marking tools.[133]
Feedback from a small cohort of teachers and pupils invited by the DfE to test a proof-of-concept generative AI marking tool suggested that teachers saw opportunities in the use of the tool for saving time and reducing subjectivity. But they identified risks associated with the accuracy of the assessment and feedback, the impact on teachers – particularly newly qualified teachers who may become reliant on the tool, thereby undermining their development or autonomy – and the effect on the teacher–pupil relationship, with concerns about rapport and an understanding of pupils’ quirks potentially being removed.[134]
Exam assessment and grading: AI has similarly been identified as having the potential to serve as a tool for assessing and grading exam papers, or as a tool for the invigilation of online exams to prevent cheating.[135]
These uses have, to date, been seen as controversial. The Assessment and Qualifications Alliance (AQA) has suggested that AI ‘could be used to automate simple marking processes’[136] to help ease teachers’ workload but has warned against the use of AI in high-stakes standardised assessment, noting that the proliferation of AI marking tools could reduce trust in the system.[137]
Similarly, Ofqual has outlined its approach to regulating the use of AI in the qualifications sector.[138] While noting that there are opportunities for AI to ‘complement and quality assure human marking’, it also stresses that the potential for ‘bias, inaccuracies and a lack of transparency … could introduce unfairness into the system’.
It is unsurprising that this concern has been raised. The high-profile Ofqual A-level scandal during the COVID-19 pandemic in 2020 demonstrated how the use of an algorithm to determine exam grades can result in public backlash.[139] Despite the algorithm having been subject to public consultation prior to its deployment, when the results came out it was the focal point for complaints about results which were deemed unjust.
The nuance of when a person trusts AI over a human being or vice versa is complex. When critical life-changing decisions are made, the ability to ask a human for an explanation rather than face the intractable challenge of understanding how AI made a decision can be hugely important for trust and accountability. When there are low levels of transparency or understanding as to how models function, this can exacerbate distrust of automated decision-making systems.
For any use of AI that could negatively impact outcomes for pupils – such as exam results – it is important to have supporting evidence that the products do what they claim and will provide teachers and pupils with accurate and fair outcomes. Clear explanations of the model and ways to challenge the results should be considered, as well as ways to determine the framework for assessing the accuracy and fairness of tools using GPAI for marking. Ensuring teachers know how to use such tools, and are aware of any limitations or risks, should also be considered.
Training teachers and continuing professional development: The development of products that use AI to support teachers’ development and training has begun but it is far from well established. It is vital that tools for delivering teacher training and professional development via AI EdTech are fit for purpose.
The University of Central Florida’s Center for Research in Education Simulation Technology has, for example, developed a mixed-reality learning environment platform called TeachLivE™.[140] This online virtual classroom enables trainees and practising teachers to interact with avatar pupils in a variety of scenarios, giving them opportunities to ‘learn the instruction and management skills needed to become effective teachers’.[141]
More than 80 universities worldwide have used TeachLivE™.[142] The platform uses ‘devices and means to detect emotional and behavioural responses of participants’ and it describes itself as using ‘artificial intelligence informed by multimodal sensors to determine the emotional states of participants in a simulation’.
As with the emotion recognition products discussed in the context of assistive technology, scientific evidence is needed to support the use of technology of this nature in order to protect teachers’ human rights and guard against manipulation by the technology or discrimination in the assessments it might make about a teacher’s progress, development or skills.
In the UK, the Teacher Development Trust[143] has been working with the company Salesforce to develop an LLM approach to simulate difficult situations that teachers may experience in the classroom.[144] The product is a chatbot that enables a teacher to experience different teacher/pupil scenarios and practise their response. It provides real-time feedback both from the bot and from human coaches and peers who are monitoring the learning process.
TeachFX,[145] based in the USA, takes this approach a step further. The product is used by practising teachers to listen in on their own lessons as they teach a class. It provides real-time insights and feedback for teachers on how they engage the class and communicate with pupils. The product uses AI to measure the sentiment[146] of the voices it hears.
Use of a product such as TeachFX in classrooms, as opposed to within teacher training environments, potentially raises some ethical questions, including whether it is appropriate for a trained teacher to record interactions with their class in order to improve their teaching approach.
While this product is only available in the USA at the time of writing, if it became available for use in UK schools, adherence to UK data protection law relating to schools[147] would be required. Ensuring that pupils are learning in a safe environment, where monitoring of lessons is visible and communicated clearly to them, is important for transparency and trust.
The impact on teachers of a ‘datafication’ approach to teaching also warrants consideration. Insights from TeachFX could lead to a teacher changing their natural style based on the guidance or automated decision of a product rather than the support of a trained human expert. The efficacy and impact on teachers of any GPAI system or AI EdTech product used for feedback or guidance need to be evidenced, to ensure that teachers being trained and guided by these products are getting appropriate and accurate support.
Teaching teachers how to use AI: Every iteration of technology that has come into classrooms has been supported by calls for teachers to be trained and confident in their ability to teach with or about it.[148] There is much work to be done to ensure that current and future teachers understand AI and the AI EdTech they may be required to use.
Early adopters of ChatGPT and other LLMs told the DfE[149] that they had experimented with the products in their spare time, using LinkedIn and Instagram to provide tips and guidance. Since those early days there has been a noticeable shift towards more specific online guidance for teachers from the AI companies themselves. OpenAI has published a ‘How to use ChatGPT for teaching’ page[150] and an FAQ page[151] outlining issues around bias, safety and accuracy. Google has launched a Generative AI for Educators[152] course designed to help teachers learn how to use generative AI. The grant-giving charity the HG Foundation has – with the support of teachers and tutors across England – created a free online guide detailing what ChatGPT is, how to make an account and what to ‘watch out for’ when using generative AI tools.[153]
While these resources may help teachers to get to grips with LLMs and other GPAI products, the reliance on ad-hoc online guidance, videos and blogs or documents written by the companies and platforms developing GPAI tools themselves is not ideal in the long term. If GPAI is to become a feature of teaching, training will be necessary to ensure that teachers are confident in using it and can do so appropriately and effectively. According to press reports,[154] the government is in the early stages of planning an online resource for teachers to train and embed effective AI practice into their teaching. A tender[155] was published in May 2024 outlining four elements of a training package for teachers, covering ideas for a training website, AI case studies, advanced skills training and a toolkit for safe practice.
The development and deployment of any teacher training product will warrant as much scrutiny regarding the impact on teachers’ professional development as the development and deployment of products used in classrooms for pupil learning. Efficacy will need to be established, as will the accuracy of the generated scenarios, feedback and educational support.
AI for school administration and safeguarding technologies
A range of AI models and systems are integrated into MISs used by schools to support them in their statutory duties around reporting,[156] safeguarding and pupil welfare.
The use of these products is not a requirement but, as noted by the DfE in its ‘Choosing a school management information system (MIS)’ guidance page,[157] an MIS has the potential to:
- reduce costs and free up funds for teaching and learning
- help to prevent cyber-attacks and safeguard school data by storing data in the cloud
- speed up the production of internal and external reports
- help schools collect and analyse data efficiently
- improve communications.
The use of safeguarding systems to monitor pupils’ online behaviours[158] and filtering systems that identify and block access to online terrorist and extremist material is not a requirement for schools. But schools are advised[159] to ensure that they have the technical support they need to meet their duty to keep children safe.
Some schools collect fingerprints or facial biometrics not for the purpose of safeguarding but for the administration of library books, logging into computers or monitoring attendance, or as part of a cashless payment system for school lunches.[160] While it is legal for schools to collect and use biometric data, it is not without controversy. Academics and campaigners[161] have warned that its use is disproportionate. Schools that collect pupils’ biometric data are required to adhere to data protection legislation, as biometric data is classed in data protection law as ‘special category’ – i.e. sensitive – data, and its use is closely monitored by the ICO.
In 2021 the ICO investigated the use of facial recognition technology in nine schools in North Ayrshire and found they had all infringed data protection law.[162] One of the failings related to how transparent the collection and use of the data was to the pupils.
This is an important point. Under data protection law children (as well as adults) have the right to know what data is being collected about them, what it is being used for and whether an AI system is being used to profile their behaviours in order to make predictions or decisions about them – for example about their attendance, attainment or performance.
Many products designed to support administration or safeguarding tasks use AI to provide specific functionality. Two of the MIS market leaders, Bromcom[163] and Arbor,[164] use AI systems to generate data-driven insights and analyse trends in attendance, student performance and resource allocation.
Administrators use these products to support day-to-day tasks such as data analysis, data insights, pupil profiling or prediction – whether in relation to assessment outcomes, attendance or pupil behaviours. One of the challenges that administrators might encounter is ensuring that the data they input into systems is representative and clean – i.e. that it has been checked for inaccuracies, errors, or unintentional or malign biases.[165]
Representative and clean data is vital in any data-driven system but particularly one used for management or decision making in relation to children and young people’s data.
If an MIS uses AI to make a decision, or support a human decision, in a way that leads to a pupil’s safety, wellbeing or educational opportunities being undermined or negatively impacted, pupils can exercise their rights under the UK GDPR to seek redress. However, identifying that AI has been used in a decision is not always easy. How an AI system reached a decision is often similarly opaque. MIS providers will need to address these challenges before embedding GPAI in existing systems or developing new GPAI MISs for the market.
Administrators participating in the DfE hackathon expressed an interest in and tested the use of GPAI systems to enable ‘whole pupil data analysis’, including analysing specific assessment data to ‘identify personal and group-level capability gaps’.
This use of GPAI aligns with product ideas that Bromcom has said it is interested in developing. In a blog from 2023,[166] the company provided an example of how a product using GPAI could be used by a school. The blog suggested that the product could be used to answer questions such as ‘Who are my lowest attenders in year 6 who have special educational needs?’[167] to which it would provide an answer based on the school’s unique datasets.
Uploading the personal data of children and young people into a GPAI system, however, raises serious data protection and ethical questions. In a later blog, Bromcom made it clear that personal data was not being sent to ChatGPT.[168]
Similarly, the DfE and Faculty AI made clear to the administrators participating in the hackathon that ‘GPT models are not currently conducive to direct analysis of student data, due to accuracy and ethical concerns’.[169]
While that is still the case, the integration of GPAI systems into MISs is currently restricted to less obviously high-risk administration tasks. Arbor has launched the tool Ask Arbor,[170] which offers support with drafting letters and creating student reports, while Bromcom has integrated GPAI into its product to support the creation of emails, letters, lesson plans, quizzes and homework and the identification of trends. Bromcom’s intention to integrate AI to support analysis of student attendance, behaviours and interventions is outlined on its website, but these uses appear still to be aspirations, and ‘coming soon’.[171]
Future iterations of GPAI built into MIS products will need to be monitored closely to ensure that necessity, proportionality and ethical use have been established and that privacy implications have, where necessary, been mitigated. In particular, the risk of GPAI products creating inaccurate or false outputs, and of personal data being unintentionally shared with these platforms, should be treated seriously.
School leaders need to be alert to how AI-enabled administration products are using data and be wary of any untested or unregulated use of GPAI in any products and technologies that they may be encouraged to use or procure.
Using GPAI for basic administrative tasks, such as the drafting of emails and letters or the translation of text into multiple languages, may offer benefits for administration teams in a relatively low-risk way, provided it does not involve the inputting of personal information. The use of GPAI for safeguarding, or for analysis and reporting based on schools’ own data relating to pupils, however, may place both schools and pupils at risk.
Oversight and evaluation
Having looked at the use of AI in EdTech and having outlined some of the issues related to data-driven systems, and general-purpose AI (GPAI) systems in particular, this section of the paper considers how we can use oversight and evaluation to ensure that AI EdTech is safe, effective and beneficial for pupils, teachers and administrators.
We explore:
- earlier approaches to oversight and evaluation
- oversight and support for schools’ procurement of EdTech
- oversight and evaluation of pedagogical efficacy and effectiveness
- developing a holistic approach to oversight and evaluation.
Earlier approaches to oversight and evaluation
From 1998 to 2011, the integration of information and communication technologies and e-learning undertaken by schools was overseen and monitored by the British Educational Communications and Technology Agency (BECTA).[172]
BECTA’s role was to ‘lead the effective and innovative use of technology’. It worked closely with schools, Ofsted and academic researchers to evaluate the use and impact of technology in schools, to support schools with procurement decisions, and to provide specialist knowledge and experience that could support policy development.
BECTA was abolished in 2010 and closed in 2011. The government cited financial reasons for the abolition but also stated that schools were in a position to manage the support they needed themselves.[173]
Since the closure of BECTA, the oversight and evaluation of technology in education has been fragmented. The DfE picked up the baton of guidance on procurement, but the role that BECTA played in evaluating use and impact of technology in schools has not been assigned to a specific body or organisation. We explore both of these areas further.
Oversight and support for schools’ procurement of EdTech
The DfE supports schools, multi-academy trusts and local authorities with procurement.[174] The department’s guidance covers everything from buying books to support for facilities and estate management, purchasing food, hiring contractors, energy suppliers and many other procurement needs a school might have.
With regard to EdTech, emphasis is given to the products and services that can support schools in their statutory duties, including the function of running a school, such as procuring hardware, software, connectivity, cloud services, audiovisual equipment and telephony services,[175] and fulfilling their duties relating to administration and safeguarding products, which are defined as ‘education management systems’.[176] The DfE uses this term to cover a broad range of applications, including data analytics, lesson monitoring, communicating with parents, progress tracking, special educational needs, safeguarding, registration and admissions, and managing student assessment.[177]
The DfE’s Everything ICT website[178] enables schools to access a one-stop shop of approved providers and suppliers from which they can identify which products and platforms might suit their needs. The site states that all suppliers have been through a ‘rigorous evaluation process’, against criteria ‘including cost, quality and service’.[179] Suppliers may choose to demonstrate that their products are compliant with the voluntary Cyber Essentials scheme,[180] open standards for technology[181] and the government’s Cloud First policy.[182]
It is unclear from the DfE’s Everything ICT website which organisations or bodies are responsible for evaluating the technology for efficacy, accuracy or effectiveness, or how rigorous the evaluation is. The opacity of the approach and the lack of a standardised pre-approved procurement framework for evaluation is a point of concern.
The DfE published a tender in May 2024 seeking a provider to establish a ‘project team’ to run an ‘EdTech evidence board’ to assess product efficacy against set criteria.[183] This may indicate plans to establish an evaluation process that begins to address calls for a standardised approach to evaluating EdTech.
Such calls were made in a 2023 report[184] by the 5Rights Foundation and the Digital Futures Commission which recommended standardised certification criteria be established for schools similar to those for digital technologies in health. This approach would require demonstration of compliance with ‘relevant legislation, regulation for data protection and security, and good practices of interoperability and risk-benefit calculation’.
It is not just in the UK that standardising evaluation of EdTech is a concern. Internationally there have been calls from academics for the EdTech Global Education Security Standard (GESS)[185] framework to be adopted as a globally agreed standard for cybersecurity. Academics in Norway and Germany have, along with academics in the UK, proposed a scientific approach to evaluation called the EdTech Evidence Evaluation Routine (EVER).[186] This approach is presented as a guide for evaluating the evidence base of EdTech so that schools can be sure the technology they are buying and using has been proven to be beneficial.
Standardised evaluation schemes are also being proposed in the USA and Australia (see Appendix 1). Both frameworks focus on supporting schools to evaluate the evidence of EdTech’s effectiveness themselves.
Oversight and evaluation of pedagogical efficacy and effectiveness
Oversight of EdTech should not focus solely on the technology. Evaluation and oversight of the pedagogy underpinning the EdTech is also needed, not least to determine if the pedagogical approaches the technologies are built on are best served using EdTech or not.
Since the closure of BECTA there has been a vacuum in this space that needs to be addressed if EdTech and AI are to become a more embedded feature in education.
Some independent research into the use of technology for teaching and learning has happened over the past decade. The DfE provided specific grants to the EEF to evidence the use and impact of digital technology in relation to learning.[187] Two subsequent evidence reviews were published in 2012[188] and 2019.[189]
The 2012 review evidenced how technology can ‘enable, or make more efficient, effective teaching and learning practice’, while the 2019 report sought to evidence the impact of technology on learning and attainment. As part of this review, the EEF undertook randomised controlled trials of a range of ‘technologies or interventions’ used in schools to determine impact, including a mix of EdTech products such as computer games, online programmes and applications.
The findings across these reports showed that there are benefits in using EdTech to support learning and teaching but that these benefits are nuanced and are reliant on identifying the specific conditions, context and uses where technology can improve learning and attainment.
The EEF continues to undertake projects to assess the impact of EdTech on educational outcomes. Maths-Whizz[190] and Reading Plus[191] are two specific trials announced in 2024. A wider review of evidence on EdTech interventions for disadvantaged pupils has begun,[192] while a trial of teachers’ use of ChatGPT for lesson planning and resource preparation took place at the end of 2024.[193]
These projects, together with the DfE’s ‘Areas of Research Interest’ document published in January 2024,[194] represent important acknowledgement of – and action towards – the research needed to ‘robustly measure’ the impact of digital technology use within education.
This is welcome, but more can be done. If GPAI systems and AI EdTech are to become more of a feature in schools and the education system, there needs to be a holistic approach to oversight and evaluation of the products.
A holistic approach to oversight and evaluation
The oversight needed to ensure GPAI products, EdTech and AI EdTech are appropriate, necessary and fit for purpose is multifaceted and should cover both the technology itself, i.e. the hardware and software, and the impact of its use. In the case of EdTech it also needs to include evaluating the pedagogy the technology has been designed on, and any improvements in pedagogical practice that a product claims it can bring about.
Oversight and evaluation of the technology itself could include substantiating that:
- products can demonstrate they have been tested for accuracy, precision and recall, and for any potential biases based on protected characteristics, and that they adhere to technical and cybersecurity standards (an illustrative sketch of this kind of testing follows this list)
- products can demonstrate robust safeguards for protecting pupils from harmful content
- personal and behavioural data of pupils is held, shared and used safely and securely, and in accordance with data protection legislation, including consideration of whether data should be used to train AI systems and models
- products have been subjected to the necessary and appropriate tests, evaluation and assessment to determine that they are technically suitable and appropriate for use in schools, to a standard agreed by DfE or an appropriate body
- use of automated decision-making capabilities is transparent to those whose data is being inputted into the system, and the decisions made by an AI component can be explained sufficiently to anyone impacted.
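To illustrate the first point in the list above, the sketch below computes accuracy, precision and recall for a hypothetical marking tool against teacher judgements, and compares accuracy across two pupil groups as a crude check for disparity linked to a protected characteristic. It is a simplified illustration of the kind of testing an evaluator might run, not a complete audit methodology; the marks and group labels are invented.

```python
# Simplified illustration of testing a hypothetical marking tool for accuracy,
# precision, recall and a crude between-group disparity check.
# The predictions, labels and group labels below are invented for illustration.

def confusion_counts(predicted, actual):
    tp = sum(p == 1 and a == 1 for p, a in zip(predicted, actual))
    fp = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))
    fn = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))
    tn = sum(p == 0 and a == 0 for p, a in zip(predicted, actual))
    return tp, fp, fn, tn

def report(predicted, actual):
    tp, fp, fn, tn = confusion_counts(predicted, actual)
    accuracy = (tp + tn) / len(actual)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

if __name__ == "__main__":
    # 1 = the answer deserves the mark, 0 = it does not.
    tool_marks    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    teacher_marks = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
    group         = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    acc, prec, rec = report(tool_marks, teacher_marks)
    print(f"overall  accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f}")

    # Crude disparity check: does accuracy differ between pupil groups?
    for g in ("A", "B"):
        idx = [i for i, x in enumerate(group) if x == g]
        acc_g, _, _ = report([tool_marks[i] for i in idx], [teacher_marks[i] for i in idx])
        print(f"group {g} accuracy={acc_g:.2f}")
```

A genuine evaluation would use far larger samples, established fairness metrics and statistical tests, but even this toy example shows why group-level breakdowns matter: an acceptable headline accuracy can conceal markedly worse performance for one group of pupils.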
Oversight and evaluation of impact could include evidencing that:
- products have been assessed and determined to be appropriate for and able to support pupils’ learning and their development and knowledge
- products meet the curriculum and educational standards expected and are evidenced to be necessary for use
- products do not undermine a pupil’s ability to learn or a teacher’s ability to teach, and their use has been proven not to create or perpetuate educational inequalities or discrimination.
This approach to oversight and evaluation aligns with the five ‘AI principles’ published in the previous government’s A Pro-Innovation Approach to AI Regulation white paper in March 2023 – namely safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.[195] It also aligns with the approach outlined in the USA, in an Executive Order signed by President Biden and published in October 2023 which stipulated that AI used in education must be ‘safe, responsible, and non-discriminatory’.[196]
Developing a holistic approach to look at the intersection between the technology, the pedagogy and the societal outcomes requires collaboration across a range of stakeholders, including school leaders, technologists, regulators, government, teachers, data protection experts and academics to undertake the tasks detailed below.
Randomised controlled trials and independent evaluation: Products that are marketed for learning and marking would benefit from being tested and evaluated using a mix of research and evaluation methods, including, where relevant, testing by randomised controlled trial (RCT) for efficacy and effectiveness (though RCTs come with their own challenges and these should also be acknowledged[197]). Schools need much more support in knowing which products will bring proven benefit. Identifying who – for example, which organisation – should conduct these evaluations is the first necessary step. Consideration needs to be given to how longitudinal research can be undertaken when dealing with a technology that is constantly changing and adapting.
Transparency and access to evaluate: To minimise risks and harms, it is important to ensure that systems are transparent and, wherever possible, that they and their outputs are explainable. Products that are used to produce outputs are recommended to be tested for accuracy and bias[198] and should be shown not to be encoding or exacerbating bias or inequality.
Companies building and selling AI EdTech could support this approach by providing access to their training data and AI system in order to allow independent research and academic study into their impacts. Concerns about proprietary data or intellectual property may need to be addressed in order for approved researchers to be able to ‘look under the hood’ of these technologies to determine if they are accountable and fit for use in an educational environment.
Data protection: All products – including general-purpose AI products – must demonstrate compliance with the Data Protection Act 2018, the UK GDPR, the Human Rights Act 1998 and the Equality Act 2010 and, where relevant, the Age Appropriate Design Code (AADC). Schools, in their role as data controllers, must be clear on what data is collected by these products and applications, where it is stored, who it is shared with, and for what purpose. They must also be clear on the wider use of personal and behavioural data captured or shared by these products.
What next? Areas for future research
This first review has highlighted a number of evidence gaps around EdTech, including in its use and effectiveness. Ensuring that EdTech, AI EdTech and general-purpose AI (GPAI) are well implemented in schools may require new guidance, institutions, policy and technologies.
The Ada Lovelace Institute and the Nuffield Foundation will continue our collaboration over 2025. We plan to develop and supplement our collective understanding of this field through a series of roundtables, using this paper as a provocation to focus on:
- evidence gaps and research priorities (funders and academics)
- policy, governance and procurement
- evaluation and evidence standards – what is needed/realistic?
- history of technology – is this a continuation or step change?
- what overarching questions AI poses for the future of education.
Academics and other researchers will play a vital part in understanding AI in EdTech: in interrogating the underpinning pedagogy, testing the commercially built EdTech and AI that is being deployed, evidencing the impact on learning and teaching, and influencing policy to ensure it is evidence based and benefits from the best expertise.
In this initial review we have identified some areas of interest which researchers might consider exploring further. Some align with the areas of research interest the DfE has outlined;[199] others go further:
- the relationship between the use of AI and other EdTech tools and pupils’ learning and attainment, including variations between pupils with different background characteristics
- understanding the pedagogic theory and practice underpinning AI and other EdTech tools for teaching and learning, so that schools understand what the technology is seeking to do
- opportunities for establishing a standardised evaluation framework that could be used to test the effectiveness of teaching and learning tools before they hit the market
- improving oversight and access to the data that AI EdTech and GPAI for education products are trained on, for example for learning content and diagnostic tests that drive personalised or self-directed learning
- how AI personalised learning models make decisions about a pupil’s knowledge base and how schools use this information
- what rigorous evaluation of marking and exam assessment tools should look like, and how to ensure the accuracy, fairness and transparency of the algorithms used so that they are unbiased, appropriate for use and can be made subject to redress
- how teachers are incorporating AI EdTech or GPAI systems such as ChatGPT or education-specific AI products into their pedagogical practice.
There are many other areas of significant research interest. In this rapidly evolving field, we welcome discussion, debates and suggestions for additions and improvements arising from this paper.
Acknowledgements
This paper was lead authored by Renate Samson and Dr Kruakae Pothong, with substantive input from Andrew Strait.
The authors would also like to thank Josh Hilman and Imogen Parker.
An early draft of the paper was externally reviewed. The authors would like to thank the reviewers for their time and valuable feedback, which provided a breadth of insight and areas for consideration.
Appendix 1: Evaluation frameworks
US EdTech evaluation framework
The US EdTech evaluation framework has its roots in the Elementary and Secondary Education Act (ESEA), which ‘encourages state and local educational agencies to prioritise’ using and developing their own evidence of EdTech products to inform schools’ EdTech adoption.[200]
The approach classifies evidence of EdTech’s effectiveness and impact into four tiers and offers step-by-step activities in support of schools’ evidence-building to inform their EdTech selection.
The four tiers are:
- ‘demonstrating rationale’ (Tier 4),[201] which relies on a literature review of the research and findings on the EdTech product, given its features, functionalities and underpinning technologies (for example, AI)
- ‘promising evidence’ (Tier 3),[202] based on the findings of ‘at least one well-designed correlated study’ to explore the relationship between the EdTech and a claimed outcome
- ‘moderate evidence’ (Tier 2),[203] backed by ‘at least one well-designed quasi-experimental non-randomised study’ to examine the causal relationships between the EdTech features or functionalities and the claimed outcomes
- ‘strong evidence’ (Tier 1),[204] supported by a randomised controlled trial establishing that the causal relationship between the EdTech features or functionalities and the claimed outcomes is unlikely to have arisen by chance.
In determining the relevant level of evidence, schools are encouraged to consider their needs (or the purposes that the EdTech would serve), the context of use (for example, classrooms or school administration) and the population the EdTech is meant to serve (for example, primary or secondary pupils).[205] In general, schools are encouraged to seek the highest level of evidence possible.
Australian Standards of Evidence
These were developed by the Australian Education Research Organisation (AERO) to ensure consistent and transparent judgements about the effectiveness of a particular education policy, practice or programme.[206] This includes a technology intervention through the use of EdTech products.
Similarly to the US EdTech evidence toolkit, the Australian Standards of Evidence ranks the credibility of the evidence of the educational approaches, including EdTech, into four tiers, according to the evaluation methods used. It differs from the US model in that it outlines guidance on how to choose the appropriate level of evidence to inform the choices of educational approaches, including EdTech use. The four tiers are:
- Level 1: plausible hypotheses, based on secondary research producing findings indicating that the approach ‘should have’ a positive impact on intended outcomes
- Level 2: association with positive effects, based on small-scale research that demonstrates a correlation (not causal relation) between the approach and the intended outcomes
- Level 3: evidence of causes of positive effects (in general), supported by qualitative, quantitative or mixed-methods research that measures the changes (outcomes) resulting from the application of the approach in general
- Level 4: evidence of causes of positive effect in the specific or relevant contexts of use, backed by qualitative, quantitative or mixed-methods research to measure the effectiveness of the approach in the specific or relevant contexts
Appendix 2: Subfields of AI – how AI uses different approaches to solve a task
There are various subfields of AI that use varying approaches to solve a task, including:
- Symbolic AI: This refers to AI systems that use reasoning and symbolic representation. Examples include decision trees or hard-coded rules to categorise data. A simple example would be an AI algorithm that uses ‘if/then’ rules to categorise a dataset of vegetables – for example, the algorithm could be coded such that vegetables that have the data attributes of ‘green’ and ‘can be eaten on the cob’ would be labelled as ‘corn’ (see the toy sketch after this list, which contrasts hand-coded rules with supervised learning).
- Machine learning: This branch of AI focuses on developing algorithms that enable computers to perform tasks without being explicitly programmed to do so, instead learning from data, identifying patterns and making decisions based on them. Two common types of machine learning are:
- Supervised learning: Algorithms learn from labelled data to make predictions on new data. For example, an algorithm trained on a dataset that contains images of apples vs images of bananas can identify if a new image is an apple or a banana.
- Unsupervised learning: Algorithms find patterns and relationships in unlabelled data. For example, an algorithm might be given a dataset of images of random fruits and identify which images appear to have common statistical patterns that suggest they may be the same fruit.
- Deep learning: This subset of machine learning involves algorithms inspired by the structure and function of the brain’s neural networks. Deep learning uses artificial neural networks – a complex network of neurons that identify features of data and classify them. Artificial neural networks contain one or more ‘hidden’ layers of neurons in addition to the ‘input’ and ‘output’ layers. These multiple layers allow the building of more abstract knowledge and relationships in a dataset and are designed to recognise patterns and learn from large amounts of data.
- Reinforcement learning systems: This approach involves using an AI system to make sequences of decisions by interacting with an environment. The system improves its performance through trial and error, and through receiving ‘rewards’ or feedback, sometimes from a human – for example, an AI system learning to play a complex video game by repeatedly attempting levels and improving its strategy based on rewards and penalties.
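The contrast between hand-coded symbolic rules and learning from data can be made concrete with the toy sketch below, which echoes the vegetable and fruit examples above. The rule-based classifier is written explicitly by a person, while the ‘learned’ classifier simply predicts the label of the most similar labelled example; both are deliberately simplistic illustrations, not production techniques.

```python
# Toy contrast between symbolic AI (hand-coded rules) and supervised machine learning,
# echoing the vegetable and fruit examples above. Deliberately simplistic.

# Symbolic AI: the decision logic is written explicitly by a person.
def classify_vegetable(colour: str, eaten_on_cob: bool) -> str:
    if eaten_on_cob:
        return "corn"
    if colour == "green":
        return "broccoli"
    return "unknown"

# Supervised learning (nearest-neighbour flavour): the logic is induced from labelled data.
training_data = [
    # (length_cm, weight_g) -> label
    ((18.0, 120.0), "banana"),
    ((17.0, 115.0), "banana"),
    ((8.0, 180.0), "apple"),
    ((7.5, 170.0), "apple"),
]

def classify_fruit(length_cm: float, weight_g: float) -> str:
    def distance(example):
        (length, weight), _label = example
        return ((length - length_cm) ** 2 + (weight - weight_g) ** 2) ** 0.5
    # Predict the label of the most similar training example.
    return min(training_data, key=distance)[1]

if __name__ == "__main__":
    print(classify_vegetable("yellow", eaten_on_cob=True))  # corn: decided by the rule
    print(classify_fruit(16.5, 118.0))                      # banana: decided from the data
```

The practical difference is where the knowledge lives: in symbolic AI it is written down by the developer and can be inspected directly, whereas in machine learning it is encoded in the trained model and has to be evaluated empirically.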
Appendix 3: Understanding the technology stack
The term ‘AI’ can be used to refer to a scientific field of study (with subfields), a model, a product that uses AI methods, a feature of a product that uses an AI method, or a service for providing AI products to an organisation. This challenge of differing uses of the terminology can obscure important aspects of how an AI system is created and maintained.
For example, an AI model is the output of training an algorithm on data to perform a particular task or set of tasks. By contrast, an AI system is a complete, user-facing solution that incorporates one or more AI models along with other components such as safety controls and a user interface.
Every AI system has a ‘technology stack’, which is the term used to describe the collection and combination of technologies, tools and programming languages used to develop, deploy and sustain the product. This includes front-end features (for example, the design of the product and its user interface), the infrastructure it runs on (such as cloud computing) and back-end features (such as the AI models it uses to evaluate new data from users and produce new outputs).
For example, OpenAI’s GPT-4 is an LLM that can, among other things, process natural language. ChatGPT, which uses GPT-4 as its core engine, is an AI system that provides a chat interface for users to interact with the underlying AI models. But ChatGPT could also use other AI models in its technology stack, such as a model trained to identify and block potentially racist or offensive content.
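To illustrate how a user-facing AI system combines several components in its technology stack, the sketch below composes a stubbed core language model with a separate safety filter and a simple interface function, in the spirit of the ChatGPT example above and the hypothetical homework assessment product in Figure 1. Every component here is a placeholder stub rather than a real API call.

```python
# Illustrative sketch of an AI *system* composed of several stack components:
# a core model, a separate safety-filter model and a user-facing interface.
# All components are placeholder stubs, not real API calls.

def core_language_model(prompt: str) -> str:
    """Stand-in for the underlying model (e.g. an LLM accessed via an API)."""
    return f"[model response to: {prompt}]"

def safety_filter(text: str) -> bool:
    """Stand-in for a second model or rule set that blocks unsuitable content."""
    blocked_terms = {"offensive-term-1", "offensive-term-2"}  # placeholder list
    return not any(term in text.lower() for term in blocked_terms)

def homework_feedback_system(pupil_answer: str) -> str:
    """The user-facing system: input checks, model call, output checks, formatting."""
    if not safety_filter(pupil_answer):
        return "This submission could not be processed."
    draft = core_language_model(f"Give feedback on this homework answer: {pupil_answer}")
    if not safety_filter(draft):
        return "Feedback could not be generated for this submission."
    return f"Feedback for the teacher to review: {draft}"

if __name__ == "__main__":
    print(homework_feedback_system("Photosynthesis happens in the roots of a plant."))
```

The point of the illustration is that each layer – the core model, the safety filter, the interface – can be swapped or upgraded independently, which is why questions about what sits in a product’s stack, and how it changes over time, matter for oversight.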
A technology stack for a system can change. This means that an EdTech product that does not currently use AI could incorporate it in future. It also means that an AI EdTech product could swap out an older AI model in its technology stack for a newer, more advanced model at a later date.
Figure 1: A technology stack for a hypothetical AI-powered homework assessment product, in which pupils submit their homework for automated assessment and the results are reported back to the teacher
About the Nuffield Foundation
The Nuffield Foundation is an independent charitable trust with a mission to advance social well-being. It funds research that informs social policy, primarily in Education, Welfare, and Justice. The Nuffield Foundation is the founder and co-funder of the Nuffield Council on Bioethics, the Ada Lovelace Institute and the Nuffield Family Justice Observatory.
Bluesky: @nuffieldfoundation.org
LinkedIn: Nuffield Foundation
Website: www.nuffieldfoundation.org
Footnotes
[1] Dr Serhat Kurt, ‘Adaptive Learning: What Is It, What Are Its Benefits and How Does It Work?’ (Educational Technology, 1 April 2021) <https://educationaltechnology.net/adaptive-learning-what-is-it-what-are-its-benefits-and-how-does-it-work/> accessed 6 December 2024.
[2] ‘Assistive Technology: Definition and Safe Use’ (gov.uk) <www.gov.uk/government/publications/assistive-technology-definition-and-safe-use/assistive-technology-definition-and-safe-use> accessed 6 December 2024.
[3] Peter M Mell and Timothy Grance, ‘The NIST Definition of Cloud Computing’ (NIST, 2011) <www.nist.gov/publications/nist-definition-cloud-computing> accessed 8 January 2025.
[4] M Walker and others, ‘Education Technology for Remote Teaching: Research Report’ (gov.uk, 2022) <www.gov.uk/government/publications/education-technology-for-remote-teaching> accessed 6 December 2024.
[5] UXPin, ‘Chat User Interface Design – A Quick Introduction to Chat UI’ (Studio by UXPin, 12 April 2023) <www.uxpin.com/studio/blog/chat-user-interface-design/> accessed 6 December 2024.
[6] ‘What Are Special Educational Needs and Disabilities?’ (Sense) <https://www.sense.org.uk/information-and-advice/life-with-complex-disabilities/childhood-and-school/send-education-special-education-needs-disabilities/what-are-special-educational-needs-and-disabilities-send/> accessed 6 December 2024.
[7] ‘SEND: guide for parents and carers’ (gov.uk, 15 August 2014) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/417435/Special_educational_needs_and_disabilites_guide_for_parents_and_carers.pdf> accessed 6 December 2024.
[8] ‘Generative Artificial Intelligence (AI) in Education’ (gov.uk) <www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education> accessed 6 December 2024.
[9] Ben Rossi, ‘30 Years of Technology in Education: BESA Report Advises Government on Lessons Learned’ (Information Age, 21 January 2015) <www.information-age.com/30-years-technology-education-besa-report-advises-government-lessons-learned-30904> accessed 6 December 2024.
[10] Don Passey, ‘Early Uses of Computers in Schools in the United Kingdom: Shaping Factors and Influencing Directions’ in Arthur Tatnall and Bill Davey (eds), Reflections on the History of Computers in Education: Early Use of Computers and Teaching about Computing in Schools (Springer, 2014) <https://doi.org/10.1007/978-3-642-55119-2_9> accessed 6 December 2024.
[11] ‘The Education Technology Market in England’ (gov.uk, 24 November 2022) <www.gov.uk/government/publications/the-education-technology-market-in-england> accessed 6 December 2024.
[12] Wendy Kopp and Bo Stjerne Thomsen, ‘How AI can accelerate students’ holistic development and make teaching more fulfilling’ (World Economic Forum, 1 May 2023) <www.weforum.org/stories/2023/05/ai-accelerate-students-holistic-development-teaching-fulfilling> accessed 6 December 2024.
[13] Matthew Nyaaba and others, ‘Generative AI as a Learning Buddy and Teaching Assistant: Pre-service Teachers’ Uses and Attitudes’ (ResearchGate, 24 September 2024) <www.researchgate.net/publication/382331194_Generative_AI_as_a_Learning_Buddy_and_Teaching_Assistant_Pre-service_Teachers’_Uses_and_Attitudes> accessed 6 December 2024.
[14] ‘Generative Artificial Intelligence (AI) in Education’ (gov.uk) <www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education> accessed 6 December 2024.
[15] ‘Generative AI in Education: Educator and Expert Views’ (gov.uk, January 2024) <https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_education_-_Educator_and_expert_views_report.pdf> accessed 6 December 2024.
[16] ‘Generative Artificial Intelligence in Education Call for Evidence’ (gov.uk, 14 June 2023) <www.gov.uk/government/calls-for-evidence/generative-artificial-intelligence-in-education-call-for-evidence> accessed 6 December 2024.
[17] As noted in ‘A note on this paper’s focus’, the DfE use the term generative AI to refer to the use of AI in education. In this report, we use the term general-purpose AI, which can include generative AI, but we see AI more broadly than its capability to generate new content.
[18] ‘Generative AI in Education Call for Evidence: summary of responses’ (gov.uk, November 2023) <https://assets.publishing.service.gov.uk/media/65609be50c7ec8000d95bddd/Generative_AI_call_for_evidence_summary_of_responses.pdf> accessed 6 December 2024.
[19] ‘Generative AI in Education: User Research and Technical Report’ (gov.uk, 17 October 2024) <www.gov.uk/government/publications/generative-ai-in-education-user-research-and-technical-report> accessed 6 December 2024.
[20] Khanmigo, ‘Meet Khanmigo: Khan Academy’s AI-Powered Teaching Assistant & Tutor’ <https://khanmigo.ai/> accessed 20 November 2024.
[21] Oak National Academy, ‘Introducing Aila’ <https://labs.thenational.academy/> accessed 6 December 2024.
[22] ‘Education Secretary gives Bett Show 2025 keynote address’ (gov.uk, 22 January 2025) <www.gov.uk/government/speeches/education-secretary-gives-bett-show-2025-keynote-address> accessed 22 January 2025.
[23] ‘Definitions’ (ico.org.uk, 19 November 2024) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/part-1-the-basics-of-explaining-ai/definitions/> accessed 6 December 2024.
[24] DeepAI, ‘Narrow AI’ (deepai.org, 17 May 2019) <https://deepai.org/machine-learning-glossary-and-terms/narrow-ai> accessed 6 December 2024.
[25] Nicole Winchester, ‘Facial Recognition Technology in Schools’ (House of Lords Library, 1 November 2021) <https://lordslibrary.parliament.uk/facial-recognition-technology-in-schools/> accessed 6 December 2024.
[26] Felipe Leite da Silva and others, ‘A systematic literature review on educational recommender systems for teaching and learning: research trends, limitations and opportunities’ (2022) Education and Information Technologies 1(2):40 <https://link.springer.com/content/pdf/10.1007%2Fs10639-022-11341-9.pdf> accessed 6 December 2024.
[27] ‘ICO Highlights Rules for Facial Recognition in Schools’ (UK Authority, 1 February 2023) <www.ukauthority.com/articles/ico-highlights-rules-for-facial-recognition-in-schools> accessed 7 December 2024.
[28] Gemini, ‘What Gemini Apps Can Do and Other Frequently Asked Questions’ <https://gemini.google.com/faq> accessed 6 December 2024.
[29] Note that Claude is both a model, and a product. Anthropic, ‘Claude 2’ <https://www.anthropic.com/news/claude-2> accessed 6 December 2024.
[30] Dawn Hollingsworth, ‘Overview of Microsoft Search in Bing’ (30 January 2023) <https://learn.microsoft.com/en-us/microsoftsearch/overview-microsoft-search-bing> accessed 22 January 2024.
[31] ‘Generative AI in Education: User Research and Technical Report’ (gov.uk, 17 October 2024) <https://www.gov.uk/government/publications/generative-ai-in-education-user-research-and-technical-report> accessed 6 December 2024.
[32] Gartner, ‘Gartner Hype Cycle Research Methodology’ <www.gartner.com/en/research/methodologies/gartner-hype-cycle> accessed 6 December 2024.
[33] ‘Online Safety Act 2023’ (legislation.gov.uk, 2023) <www.legislation.gov.uk/ukpga/2023/50/contents?view=plain> accessed 6 December 2024.
[34] ‘OpenAI’s GPT-3 Language Model: A Technical Overview’ (3 June 2020) <https://lambdalabs.com/blog/demystifying-gpt-3> accessed 6 December 2024.
[35] Yogesh K Dwivedi and others, ‘“So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy’ (2023) 71 International Journal of Information Management 102642 <https://qspace.qu.edu.qa/bitstream/10576/42799/1/1-s2.0-S0268401223000233-main.pdf> accessed 6 December 2024.
[36] Rebecca L Johnson and others, ‘The Ghost in the Machine Has an American Accent: Value Conflict in GPT-3’ (ResearchGate, 15 March 2022) <https://www.researchgate.net/publication/359256884_The_Ghost_in_the_Machine_has_an_American_accent_value_conflict_in_GPT-3> accessed 6 December 2024.
[37] David Thiel, ‘Investigation Finds AI Image Generation Models Trained on Child Abuse’ (Stanford University Cyber Policy Center, 20 December 2023) <https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse> accessed 6 December 2024.
[38] Scharon Harding, ‘OpenAI Will Use Reddit Posts to Train ChatGPT under New Deal’ (Ars Technica, 17 May 2024) <https://arstechnica.com/ai/2024/05/openai-will-use-reddit-posts-to-train-chatgpt-under-new-deal/> accessed 6 December 2024.
[39] AI Safety Institute, ‘Advanced AI Evaluations at AISI: May Update’ (aisi.gov.uk, 20 May 2024) <www.aisi.gov.uk/work/advanced-ai-evaluations-may-update> accessed 6 December 2024.
[40] Emily M Bender and others, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM, 1 March 2021) <https://dl.acm.org/doi/10.1145/3442188.3445922> accessed 6 December 2024.
[41] Yogesh K Dwivedi and others, ‘“So What If ChatGPT Wrote It?” Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy’ (2023) 71 International Journal of Information Management 102642.
[42] ‘The Children’s Code and Education Technologies (Edtech)’ (ico.org.uk, 22 October 2024) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/the-children-s-code-and-education-technologies-edtech/> accessed 27 January 2025.
[43] ‘The Children’s Code and Education Technologies (Edtech)’ (ico.org.uk, 22 October 2024) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/the-children-s-code-and-education-technologies-edtech/> accessed 6 December 2024.
[44] Lila Shroff, ‘Shh, ChatGPT. That’s a Secret.’ (The Atlantic, 2 October 2024) <www.theatlantic.com/technology/archive/2024/10/chatbot-transcript-data-advertising/680112/> accessed 6 December 2024.
[45] Wenting Zhao and others, ‘WildChat: 1M ChatGPT Interaction Logs in the Wild’ (arXiv, 2 May 2024) <http://arxiv.org/abs/2405.01470> accessed 6 December 2024.
[46] CDDO and CDEI, ‘Algorithmic Transparency Recording Standard – Guidance for Public Sector Bodies’ (gov.uk, 5 January 2023) <www.gov.uk/government/publications/guidance-for-organisations-using-the-algorithmic-transparency-recording-standard/algorithmic-transparency-recording-standard-guidance-for-public-sector-bodies> accessed 9 February 2023.
[47] Elliot Jones and Cansu Safak, ‘Can Algorithms Ever Make the Grade?’ (Ada Lovelace Institute, 18 August 2020) <www.adalovelaceinstitute.org/blog/can-algorithms-ever-make-the-grade> accessed 6 December 2024.
[48] Anna Fazackerley, ‘Top Pupils Rejected by Universities in A-Levels Fiasco Fallout’ (The Observer, 22 May 2021) <www.theguardian.com/education/2021/may/22/top-pupils-rejected-by-universities-in-a-levels-fiasco-fallout> accessed 6 December 2024.
[49] Elliot Jones and Cansu Safak, ‘Can Algorithms Ever Make the Grade?’ (Ada Lovelace Institute, 18 August 2020) <www.adalovelaceinstitute.org/blog/can-algorithms-ever-make-the-grade> accessed 6 December 2024.
[50] ‘Data Protection Act 2018’ <www.legislation.gov.uk/ukpga/2018/12/contents> accessed 6 December 2024.
[51] ‘Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (United Kingdom General Data Protection Regulation)’ <www.legislation.gov.uk/eur/2016/679/contents> accessed 6 December 2024.
[52] Emma Day and others, ‘Who controls children’s education data? A socio-legal analysis of the UK governance regimes for schools and EdTech’ (2022) Learning Media and Technology 49(1):1-15 <https://eprints.lse.ac.uk/119548/1/Who_controls_children_s_education_data_5Rights_Digital_Futures_Commission.pdf> accessed 6 December 2024.
[53] ‘Introduction to the Children’s Code’ (ico.org.uk, 22 October 2024) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/introduction-to-the-childrens-code/> accessed 6 December 2024.
[54] ‘ICO Consultation Series on Generative AI and Data Protection’ (ico.org.uk, 19 September 2024) <https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-series-on-generative-ai-and-data-protection/> accessed 6 December 2024.
[55] Michael Townsen Hicks, James Humphries and Joe Slater, ‘ChatGPT Is Bullshit’ (2024) 26 Ethics and Information Technology 38.
[56] ibid.
[57] ibid.
[58] Ziwei Xu, Sanjay Jain and Mohan Kankanhalli, ‘Hallucination Is Inevitable: An Innate Limitation of Large Language Models’ (arXiv, 22 January 2024) <http://arxiv.org/abs/2401.11817> accessed 6 December 2024.
[59] Ben Williamson, Alex Molnar and Faith Boninger, ‘Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good’ (NEPC, 5 March 2024) <https://nepc.colorado.edu/publication/ai> accessed 6 December 2024.
[60] ibid.
[61] Wayne Holmes, ‘The Unintended Consequences of Artificial Intelligence and Education’ (Education International, 18 October 2023) <www.ei-ie.org/en/item/28115:the-unintended-consequences-of-artificial-intelligence-and-education> accessed 6 December 2024.
[62] ‘Use Cases for Generative AI in Education: User Research Report’ (gov.uk, August 2024) <https://assets.publishing.service.gov.uk/media/66cdb078f04c14b05511b322/Use_cases_for_generative_AI_in_education_user_research_report.pdf> accessed 6 December 2024.
[63] ‘A Whole New World: AI Grading for Teachers’ (Marking.ai, 17 June 2024) <https://marking.ai/blog/a-whole-new-world-ai-grading-for-teachers> accessed 6 December 2024.
[64] ‘Use Cases for Generative AI in Education: User Research Report’ (gov.uk, August 2024) <https://assets.publishing.service.gov.uk/media/66cdb078f04c14b05511b322/Use_cases_for_generative_AI_in_education_user_research_report.pdf> accessed 6 December 2024.
[65] ‘Coded Companions: Young People’s Relationships With AI Chatbots’ (VoiceBox, 12 October 2023) <https://voicebox.site/article/coded-companions-young-peoples-relationships-ai-chatbots> accessed 6 December 2024.
[66] ‘Appropriate Filtering’ (UK Safer Internet Centre) <https://saferinternet.org.uk/guide-and-resource/teachers-and-school-staff/appropriate-filtering-and-monitoring/appropriate-filtering> accessed 20 January 2025.
[67] Antonio Bartolomé, Linda Castañeda and Jordi Adell, ‘Personalisation in Educational Technology: The Absence of Underlying Pedagogies’ (2018) 15 International Journal of Educational Technology in Higher Education 14.
[68] Atikah Shemshack and Jonathan Michael Spector, ‘A Systematic Literature Review of Personalized Learning Terms’ (2020) 7 Smart Learning Environments 33.
[69] Benjamin S. Bloom, ‘The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring’ (1984) Educational Researcher Vol.13 No.6 <https://web.mit.edu/5.95/www/readings/bloom-two-sigma.pdf> accessed 6 December 2024.
[70] Wayne Holmes, Maya Bialik and Charles Fadel, ‘Artificial Intelligence in Education’ <https://discovery.ucl.ac.uk/id/eprint/10168357> accessed 6 December 2024.
[71] ibid.
[72] Eedi, ‘What Is Eedi?’ <https://help.eedi.co.uk/en/articles/4364845-what-is-eedi> accessed 6 December 2024.
[73] ibid.
[74] Atikah Shemshack and Jonathan Michael Spector, ‘A Systematic Literature Review of Personalized Learning Terms’ (2020) 7 Smart Learning Environments 33.
[75] ‘DoodleLearning: Best Learning Apps for EYFS, KS1 & KS2’ (DoodleLearning) <https://doodlelearning.com/> accessed 6 December 2024.
[76] Times Education Commission, ‘Bringing Out The Best: How to transform education and unleash the potential of every child’ (Times Education Commission, June 2022) <https://s3.documentcloud.org/documents/22056664/times-education-commission-final-report.pdf> accessed 6 December 2024.
[77] ‘Generative AI in Education: Educator and Expert Views’ (gov.uk, January 2024) <https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_education_-_Educator_and_expert_views_report.pdf> accessed 6 December 2024.
[78] ‘Online Nation 2023 Report’ (Ofcom, 28 November 2023) <www.ofcom.org.uk/siteassets/resources/documents/research-and-data/online-research/online-nation/2023/online-nation-2023-report.pdf?v=368355> accessed 6 December 2024.
[79] OpenAI, ‘Introducing ChatGPT Edu’ (OpenAI, 30 May 2024) <https://openai.com/index/introducing-chatgpt-edu> accessed 6 December 2024.
[80] ‘Meet Khanmigo: Khan Academy’s AI-Powered Teaching Assistant & Tutor’ <https://khanmigo.ai/> accessed 6 December 2024.
[81] ibid.
[82] Natasha Singer, ‘New A.I. Chatbot Tutors Could Upend Student Learning’ The New York Times (8 June 2023) <https://www.nytimes.com/2023/06/08/business/khan-ai-gpt-tutoring-bot.html> accessed 6 December 2024.
[83] Khanmigo, ‘Meet Khanmigo: Khan Academy’s AI-Powered Teaching Assistant & Tutor’ <https://khanmigo.ai/> accessed 6 December 2024.
[84] Sal Khan, ‘How AI Could Save (Not Destroy) Education’ (TED Talks, April 2023) <https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education> accessed 6 December 2024.
[85] Natasha Singer, ‘New A.I. Chatbot Tutors Could Upend Student Learning’ The New York Times (8 June 2023) <https://www.nytimes.com/2023/06/08/business/khan-ai-gpt-tutoring-bot.html> accessed 6 December 2024.
[86] ‘Khanmigo Usage Guidelines’ (Khan Academy Help Center, 18 April 2024) <https://support.khanacademy.org/hc/en-us/articles/25358718125837-Khanmigo-Usage-Guidelines> accessed 6 December 2024.
[87] NoodleFactory, ‘Welcome to Next-Gen Education with Walter, Your AI Teaching Assistant’ <https://www.noodlefactory.ai/ai-teaching-assistant-edtech> accessed 6 December 2024.
[88] ‘Generative AI in Education: Educator and Expert Views’ (gov.uk, January 2024) <https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_education_-_Educator_and_expert_views_report.pdf> accessed 6 December 2024.
[89] Xiangyu Qi and others, ‘Fine-Tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!’ (arXiv, 5 October 2023) <http://arxiv.org/abs/2310.03693> accessed 6 December 2024.
[90] ibid.
[91] ‘Generative AI in Education: Educator and Expert Views’ (gov.uk, January 2024) <https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_education_-_Educator_and_expert_views_report.pdf> accessed 6 December 2024.
[92] ‘Use Cases for Generative AI in Education: User Research Report’ (gov.uk, August 2024) <https://assets.publishing.service.gov.uk/media/66cdb078f04c14b05511b322/Use_cases_for_generative_AI_in_education_user_research_report.pdf> accessed 6 December 2024.
[93] ‘Realising the Potential of Technology in Education’ (gov.uk, 3 April 2019) <www.gov.uk/government/publications/realising-the-potential-of-technology-in-education> accessed 6 December 2024.
[94] ‘Discover Windows Accessibility Features – Microsoft Support’ <https://support.microsoft.com/en-gb/windows/discover-windows-accessibility-features-8b1068e6-d3b8-4ba8-b027-133dd8911df9> accessed 6 December 2024.
[95] ‘Type & Edit with Your Voice – Google Accessibility Help’ <https://support.google.com/accessibility/answer/4492226?hl=en> accessed 6 December 2024.
[96] ‘Turn Any Image to Speech with Speechify’ (Speechify, 27 June 2022) <https://speechify.com/blog/turn-image-to-speech-with-speechify/> accessed 6 December 2024.
[97] Katerina Zdravkova, ‘The Potential of Artificial Intelligence for Assistive Technology in Education’ in Mirjana Ivanović, Aleksandra Klašnja-Milićević and Lakhmi C Jain (eds), Handbook on Intelligent Techniques in the Educational Process: Vol 1 Recent Advances and Case Studies (Springer International Publishing, 2022) <https://doi.org/10.1007/978-3-031-04662-9_4> accessed 9 October 2023.
[98] Highfurlong School, ‘Sensory Rooms’ <https://highfurlong.org/sensory-room/> accessed 19 January 2024.
[99] ‘The Use of Assistive Technologies for Assessment’ (gov.uk, 10 June 2021) <www.gov.uk/government/publications/the-use-of-assistive-technologies-for-assessment/the-use-of-assistive-technologies-for-assessment> accessed 6 December 2024.
[100] United Nations, ‘Access to Assistive Technologies “Is a Human Right”, Deputy Secretary-General Stresses in Message for Launch of Global Report’ (UN Meetings Coverage and Press Release, 17 May 2022) <https://press.un.org/en/2022/dsgsm1743.doc.htm> accessed 6 December 2024.
[101] Yonah Welker, ‘Generative AI Holds Great Potential For Those With Disabilities – But It Needs Policy To Shape It’ (World Economic Forum, 3 November 2023) <www.weforum.org/stories/2023/11/generative-ai-holds-potential-disabilities/> accessed 6 December 2024.
[102] Mehmet Donmez, ‘A Systematic Literature Review for the Use of Eye-Tracking in Special Education’ (2022) 28 Education and Information Technologies 1.
[103] Md Shofiqul Islam and others, ‘Challenges and Future in Deep Learning for Sentiment Analysis: A Comprehensive Review and a Proposed Novel Hybrid Approach’ (2024) 57 Artificial Intelligence Review 62.
[104] Jia Zheng Lim, James Mountstephens and Jason Teo, ‘Emotion Recognition Using Eye-Tracking: Taxonomy, Review and Current Challenges’ (2020) 20 Sensors (Basel, Switzerland) 2384.
[105] ‘“Immature Biometric Technologies Could Be Discriminating against People” Says ICO in Warning to Organisations’ (ICO, 27 October 2022) <https://web.archive.org/web/20241208025750/https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/10/immature-biometric-technologies-could-be-discriminating-against-people-says-ico-in-warning-to-organisations/> accessed 6 December 2024.
[106] Madeleine Chang, ‘Countermeasures: The Need for New Legislation to Govern Biometric Technologies in the UK’ (Ada Lovelace Institute, 29 June 2022) <www.adalovelaceinstitute.org/report/countermeasures-biometric-technologies> accessed 21 March 2023.
[107] Laurie Henneborn, ‘Designing Generative AI to Work for People with Disabilities’ (Harvard Business Review, 18 August 2023) <https://hbr.org/2023/08/designing-generative-ai-to-work-for-people-with-disabilities> accessed 6 December 2024.
[108] ‘Time Use in the Public Sector, Great Britain’ (Office for National Statistics, 21 October 2024) <www.ons.gov.uk/economy/economicoutputandproductivity/publicservicesproductivity/bulletins/timeuseinthepublicsectorgreatbritain/latest> accessed 6 December 2024.
[109] Lorna Adams and others, ‘Working Lives of Teachers and Leaders – Year 1’ (gov.uk, April 2023) <https://assets.publishing.service.gov.uk/media/66f673e03b919067bb482842/Working_Lives_of_Teachers_and_Leaders_-_Year_1_Core_Research_Report.pdf> accessed 6 December 2024.
[110] ibid.
[111] ‘Generative AI in Education: Educator and Expert Views’ (gov.uk, January 2024) <https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_education_-_Educator_and_expert_views_report.pdf> accessed 6 December 2024.
[112] Freddie Whittaker, ‘ChatGPT: 1 in 3 Teachers Use AI to Help with School Work’ (School Week, 14 September 2023) <https://schoolsweek.co.uk/chatgpt-one-in-three-teachers-use-ai-to-help-with-school-work> accessed 6 December 2024.
[113] ‘Generative AI in Education Call for Evidence: Summary of Responses’ (gov.uk, November 2023) <https://assets.publishing.service.gov.uk/media/65609be50c7ec8000d95bddd/Generative_AI_call_for_evidence_summary_of_responses.pdf> accessed 28 November 2023.
[114] ‘Generative AI in Education: Educator and Expert Views’ (gov.uk, January 2024) <https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_education_-_Educator_and_expert_views_report.pdf> accessed 6 December 2024.
[115] ‘Workload Reduction Taskforce’ (gov.uk) <www.gov.uk/government/groups/workload-reduction-taskforce> accessed 6 December 2024.
[116] ‘ChatGPT in Lesson Preparation – Teacher Choices Trial’ (EEF, 15 February 2024) <https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/choices-in-edtech-using-generative-ai-chatgpt-for-ks3-science-lesson-preparation-2024-teacher-choices-trial> accessed 6 December 2024.
[117] Oak National Academy, ‘Who We Are’ <www.thenational.academy/about-us/who-we-are> accessed 6 December 2024.
[118] ‘Opportunity for All – Strong Schools with Great Teachers for Your Child’ (gov.uk, 28 March 2022) <www.gov.uk/government/publications/opportunity-for-all-strong-schools-with-great-teachers-for-your-child> accessed 6 December 2024.
[119] ‘New Support for Teachers Powered by Artificial Intelligence’ (gov.uk, 30 October 2023) <www.gov.uk/government/news/new-support-for-teachers-powered-by-artificial-intelligence> accessed 6 December 2024.
[120] Oak National Academy, ‘Introducing Aila’ <https://labs.thenational.academy/> accessed 6 December 2024.
[121] Rachel Strom, ‘Introducing Aila, Our AI-Powered Lesson Assistant’ (Oak National Academy, 6 September 2024) <www.thenational.academy/blog/introducing-aila-for-ai-lesson-planning> accessed 6 December 2024.
[122] ibid.
[123] Oak National Academy, ‘Who We Are’ <www.thenational.academy/about-us/who-we-are> accessed 6 December 2024.
[124] Oak National Academy, ‘Our New AI Tools for Teachers Are Just the Start’ <www.thenational.academy/blog/ai-tools-for-teachers-are-just-the-start> accessed 6 December 2024.
[125] Wayne Holmes, ‘The Unintended Consequences of Artificial Intelligence and Education’ (Education International, 18 October 2023) <www.ei-ie.org/en/item/28115:the-unintended-consequences-of-artificial-intelligence-and-education> accessed 6 December 2024.
[126] ‘Use Cases for Generative AI in Education: User Research Report’ (gov.uk, August 2024) <https://assets.publishing.service.gov.uk/media/66cdb078f04c14b05511b322/Use_cases_for_generative_AI_in_education_user_research_report.pdf> accessed 6 December 2024.
[127] ‘“National Conversation” on Curriculum Begins’ (gov.uk, 25 September 2024) <www.gov.uk/government/news/national-conversation-on-curriculum-begins> accessed 6 December 2024.
[128] ‘Working Lives of Teachers and Leaders: Wave 2 Summary Report’ (gov.uk) <www.gov.uk/government/publications/working-lives-of-teachers-and-leaders-wave-2/working-lives-of-teachers-and-leaders-wave-2-summary-report> accessed 6 December 2024.
[129] Top Marks, ‘Top Marks AI UK – Automated Essay Marking for Schools & Teachers’ <www.topmarks.ai/uk> accessed 6 December 2024.
[130] Marking.ai, ‘Marking.ai – Saving Time for High School Teachers’ <https://marking.ai/> accessed 6 December 2024.
[131] Mark Mate, ‘Mark Mate – AI & Speech Teacher Marking Tool’ <www.markmate.co.uk> accessed 6 December 2024.
[132] ‘A Whole New World: AI Grading for Teachers’ (Marking.ai, 17 June 2024) <https://marking.ai/blog/a-whole-new-world-ai-grading-for-teachers> accessed 6 December 2024.
[133] ‘Teachers to Get More Trustworthy AI Tech, Helping Them Mark Homework and Save Time’ (gov.uk, 28 August 2024) <www.gov.uk/government/news/teachers-to-get-more-trustworthy-ai-tech-as-generative-tools-learn-from-new-bank-of-lesson-plans-and-curriculums-helping-them-mark-homework-and-save> accessed 6 December 2024.
[134] ‘Use Cases for Generative AI in Education: User Research Report’ (gov.uk, August 2024) <https://assets.publishing.service.gov.uk/media/66cdb078f04c14b05511b322/Use_cases_for_generative_AI_in_education_user_research_report.pdf> accessed 6 December 2024.
[135] ‘AI Proctoring Software – Online Solution for Proctored Exam’ <https://proctoredu.com/solutions/ai-proctoring> accessed 6 December 2024.
[136] ‘Hallucinations Do Not Limit AI’s Power to Transform Education’ <www-forms.aqa.org.uk/news/hallucinations-do-not-limit-ais-power-to-transform-education> accessed 7 December 2024.
[137] Cesare Aloisi, ‘The Future of Standardised Assessment: Validity and Trust in Algorithms for Assessment and Scoring’ (2023) 58 European Journal of Education 98.
[138] ‘Ofqual’s Approach to Regulating the Use of Artificial Intelligence in the Qualifications Sector’ (gov.uk) <www.gov.uk/government/publications/ofquals-approach-to-regulating-the-use-of-artificial-intelligence-in-the-qualifications-sector/ofquals-approach-to-regulating-the-use-of-artificial-intelligence-in-the-qualifications-sector> accessed 7 December 2024.
[139] Jane Wakefield, ‘A-Levels: Ofqual’s “cheating” Algorithm under Review’ BBC News (London, 20 August 2020) <www.bbc.com/news/technology-53836453>.
[140] ‘CREST/TeachLivE’ <https://sites.google.com/view/teachlive/home?authuser=0> accessed 6 December 2024.
[141] ‘CREST/TeachLivE – History’ <https://sites.google.com/view/teachlive/history> accessed 6 December 2024.
[142] Zara Ersozlu and others, ‘Mixed-Reality Learning Environments in Teacher Education: An Analysis of TeachLivETM Research’ (2021) 11 SAGE Open 21582440211032155.
[143] ‘TDT Home’ (Teacher Development Trust) <https://tdtrust.org/> accessed 6 December 2024.
[144] ‘Teacherverse.AI – Teacher Development Trust’ <https://tdtrust.org/teacherverse-ai/> accessed 6 December 2024.
[145] ‘TeachFX’ (TeachFX) <https://teachfx.com> accessed 6 December 2024.
[146] ‘Mission’ (TeachFX) <https://teachfx.com/mission> accessed 6 December 2024.
[147] ‘Data Protection in Schools’ (gov.uk, 3 February 2023) <https://www.gov.uk/guidance/data-protection-in-schools>.
[148] ‘Realising the Potential of Technology in Education: A Strategy for Education Providers and the Technology Industry’ (gov.uk, 3 April 2019) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/791931/DfE-Education_Technology_Strategy.pdf>.
[149] ‘Generative AI in Education: Educator and Expert Views’ (gov.uk, January 2024) <https://assets.publishing.service.gov.uk/media/65b8cd41b5cb6e000d8bb74e/DfE_GenAI_in_education_-_Educator_and_expert_views_report.pdf> accessed 6 December 2024.
[150] ‘Teaching with AI’ <https://openai.com/index/teaching-with-ai> accessed 7 December 2024.
[151] ‘Educator FAQ | OpenAI Help Center’ <https://help.openai.com/en/collections/5929286-educator-faq> accessed 7 December 2024.
[152] ‘Generative AI for Educators – Grow with Google’ <https://grow.google/ai-for-educators> accessed 7 December 2024.
[153] ‘Teaching with ChatGPT’ (Teaching with ChatGPT) <https://teachingwithchatgpt.org.uk> accessed 7 December 2024.
[154] Lucas Cumiskey, ‘Ministers Plan to Appoint Edtech Evidence Checkers’ (Schools Week, 22 May 2024) <https://schoolsweek.co.uk/ministers-plan-to-appoint-edtech-evidence-checkers> accessed 7 December 2024.
[155] ‘Tenderlake | Sector Fund’ <https://app.tenderlake.com/Notice/Tender/f4ac4630-c1a2-4edc-b319-f2c86c489e47/sector-fund/3d937dd3-fe74-4b94-8abd-42a768711bf3?lg=EN> accessed 7 December 2024.
[156] Schools’ ‘official reporting duties’ refer to schools’ statutory duty under Section 537A of the Education Act 1996 to submit a school census of individual pupil records to the Department for Education; see Education Act 1996.
[157] ‘Benefits of a Fit for Purpose Management Information System (MIS)’ (gov.uk) <https://www.gov.uk/government/publications/choosing-a-school-management-information-system-mis/benefits-of-a-fit-for-purpose-management-information-system-mis> accessed 30 October 2023.
[158] Smoothwall, ‘Smoothwall Monitor’ <www.smoothwall.com/education/monitor/> accessed 9 October 2023.
[159] ‘Keeping Children Safe in Education’ (Department of Education, July 2015) <https://dera.ioe.ac.uk/id/eprint/23624/1/KCSIE_July_2015.pdf> accessed 7 December 2024.
[160] ‘Protection of Biometric Data of Children in Schools and Colleges’ (gov.uk, July 2022) <https://assets.publishing.service.gov.uk/media/62d7d76c8fa8f50c012d14df/Biometrics_Guidance_July_2022.pdf> accessed 7 December 2024.
[161] ‘The State of Biometrics 2022 | Defend Digital Me’ <https://defenddigitalme.org/research/state-biometrics-2022> accessed 7 December 2024.
[162] ‘ICO Highlights Rules for Facial Recognition in Schools’ (UK Authority, 1 February 2023) <www.ukauthority.com/articles/ico-highlights-rules-for-facial-recognition-in-schools> accessed 7 December 2024.
[163] ‘Bromcom | Cloud Based School MIS Provider’ <https://bromcom.com/> accessed 7 December 2024.
[164] ‘Arbor – the UK’s Most Popular Cloud MIS’ (Arbor) <https://arbor-education.com/> accessed 7 December 2024.
[165] Cristina Goldfain, ‘Sources of Unintended Bias in Training Data’ (Medium, 21 August 2020) <https://towardsdatascience.com/sources-of-unintended-bias-in-training-data-be5b7f3347d0> accessed 7 December 2024.
[166] ‘Bromcom AI: Introspective and Extrospective | Bromcom Cloud MIS’ (Bromcom, 16 August 2023) <https://bromcom.com/news/bromcom-ai-introspective> accessed 7 December 2024.
[167] ibid.
[168] ibid.
[169] ‘Use Cases for Generative AI in Education: User Research Report’ (gov.uk, August 2024) <https://assets.publishing.service.gov.uk/media/66cdb078f04c14b05511b322/Use_cases_for_generative_AI_in_education_user_research_report.pdf> accessed 6 December 2024.
[170] Underdown A, ‘Introducing: Ask Arbor – Powered by OpenAI’ (Arbor, 8 June 2023) <https://arbor-education.com/blog-ask-arbor-openai/> accessed 20 January 2025.
[171] ‘Bromcom AI: The UK’s First AI Powered MIS | Bromcom School MIS’ (Bromcom) <https://bromcom.com/bromcom-ai> accessed 7 December 2024.
[172] Becta, ‘Becta’s Role’ (2009) <https://webarchive.nationalarchives.gov.uk/ukgwa/20101007151210/http://about.becta.org.uk/display.cfm?page=2085> accessed 9 April 2024.
[173] ‘Equality Impact Assessment: Becta Closure’ (gov.uk, 1 September 2012) <www.gov.uk/government/publications/equality-impact-assessment-becta-closure> accessed 7 December 2024.
[174] ‘Buying for Schools: How to Buy What You Need’ (gov.uk, 1 July 2019) <www.gov.uk/guidance/buying-procedures-and-procurement-law-for-schools/find-the-right-way-to-buy> accessed 7 December 2024.
[175] ‘Buyers Guide and FAQs: The ICT Procurement Framework for Education’ (Everything ICT) <www.everythingict.org/_files/ugd/f72357_3d069dc9324e4c3499dc5a35e0db8020.pdf> accessed 7 December 2024.
[176] ‘Education Management Systems – Find a DfE Approved Framework for Your School’ <https://find-dfe-approved-framework.service.gov.uk/list/education-management-systems> accessed 7 December 2024.
[177] ‘Education Management Systems’ <www.procurementservices.co.uk/our-solutions/frameworks/education/education-management-systems> accessed 7 December 2024.
[178] ‘Everything ICT Public Sector Procurement Framework – DfE Recommended’ <www.everythingict.org/> accessed 7 December 2024.
[179] ‘Trust & Compliance Centre – Everything ICT – DfE Approved Procurement’ (Everything ICT) <https://www.everythingict.org/trust-compliance-centre> accessed 7 December 2024.
[180] ‘Procurement Policy Note 09/14: Cyber Essentials Scheme Certification’ (gov.uk, 26 May 2016) <www.gov.uk/government/publications/procurement-policy-note-0914-cyber-essentials-scheme-certification> accessed 7 December 2024.
[181] ‘Procurement Policy Note 07/15: Open Standards for Technology’ (gov.uk, 31 January 2023) <www.gov.uk/government/publications/procurement-policy-note-0715-open-standards-for-technology> accessed 7 December 2024.
[182] ‘Government Adopts “Cloud First” Policy for Public Sector IT’ (gov.uk) <www.gov.uk/government/news/government-adopts-cloud-first-policy-for-public-sector-it> accessed 7 December 2024.
[183] ‘Tenderlake | Sector Fund’ (n 155).
[184] Beeban Kidron and others, ‘A Blueprint for Education Data: Realising Children’s Best Interests in Digitised Education’ (Digital Futures Commission – 5Rights Foundation, March 2023) <https://digitalfuturescommission.org.uk/wp-content/uploads/2023/03/A-Blueprint-for-Education-Data-FINAL-Online.pdf>.
[185] ‘What Is The Global Education Security Standard (GESS) And How Can Schools Use It? | Coro Cybersecurity’ (26 February 2024) <www.coro.net/blog/what-is-the-global-education-security-standard-gess-and-how-can-schools-use-it> accessed 7 December 2024.
[186] Natalia Kucirkova, Garvin Brod and Nadine Gaab, ‘Applying the Science of Learning to EdTech Evidence Evaluations Using the EdTech Evidence Evaluation Routine (EVER)’ (2023) 8 npj Science of Learning 1.
[187] EEF, ‘DfE Confirms Funding to Enable the EEF to Continue Its Work Evaluating and Spreading Best Practice for at Least Another Decade’ EEF (2 September 2022) <https://educationendowmentfoundation.org.uk/news/dfe-confirms-funding-to-enable-the-eef-to-continue-its-work-evaluating-and-spreading-best-practice-for-at-least-another-decade> accessed 3 January 2024.
[188] ‘Digital Technology (2012)’ (EEF, 16 August 2021) <https://educationendowmentfoundation.org.uk/education-evidence/evidence-reviews/digital-technology-2012> accessed 7 December 2024.
[189] ‘Digital Technology (2019)’ (EEF, 18 December 2019) <https://educationendowmentfoundation.org.uk/education-evidence/evidence-reviews/digital-technology-2019> accessed 7 December 2024.
[190] ‘Maths-Whizz Intelligent Tutoring Programme – Trial’ (EEF, 18 July 2024) <https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/maths-whizz-23-24-trial> accessed 7 December 2024.
[191] ‘Reading Plus – Trial’ (EEF, 18 July 2024) <https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/reading-plus-2024-25-trial> accessed 7 December 2024.
[192] ‘EdTech Interventions for Disadvantaged Pupils’ (EEF, 29 May 2024) <https://educationendowmentfoundation.org.uk/education-evidence/evidence-reviews/edtech-interventions-for-disadvantaged-pupils> accessed 7 December 2024.
[193] ‘ChatGPT in Lesson Preparation – Teacher Choices Trial’ (n 116).
[194] ‘Areas of Research Interest’ (gov.uk, January 2024) <https://assets.publishing.service.gov.uk/media/65cc9eba13054900118679b4/DfE_areas_of_research_interest.pdf> accessed 7 December 2024.
[195] ‘A Pro-Innovation Approach to AI Regulation’ (gov.uk) <https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper> accessed 7 December 2024.
[196] The White House, ‘FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence’ (2023) <www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/> accessed 16 November 2023.
[197] Angus Deaton and Nancy Cartwright, ‘Understanding and Misunderstanding Randomized Controlled Trials’ (2018) 210 Social Science & Medicine 2.
[198] ‘What Do We Need to Know about Accuracy and Statistical Accuracy?’ (ICO, 19 November 2024) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/what-do-we-need-to-know-about-accuracy-and-statistical-accuracy> accessed 7 December 2024.
[199] ‘Areas of Research Interest’ (gov.uk, January 2024) <https://assets.publishing.service.gov.uk/media/65cc9eba13054900118679b4/DfE_areas_of_research_interest.pdf> accessed 7 December 2024.
[200] Office of Educational Technology, ‘Using Evidence to Support EdTech Adoption in Schools’ (Office of Educational Technology) <https://tech.ed.gov/evidence/> accessed 10 October 2023.
[201] Office of Educational Technology, ‘Tier 4: Using Evidence to Demonstrate a Rationale for Educational Technology Use’.
[202] Office of Educational Technology, ‘Tier 3: Using Promising Evidence to Inform Educational Technology Use’ <https://tech.ed.gov/files/2023/04/EdTech-Evidence_Tier-3.pdf>.
[203] Office of Educational Technology, ‘Tier 2: Using Moderate Evidence to Inform Educational Technology Use’ <https://tech.ed.gov/files/2023/04/EdTech-Evidence_Tier-2.pdf>.
[204] Office of Educational Technology, ‘Tier 1: Using Strong Evidence to Inform Educational Technology Use’ <https://tech.ed.gov/files/2023/04/EdTech-Evidence_Tier-1.pdf>.
[205] US Department of Education, ‘Non-Regulatory Guidance: Using Evidence to Strengthen Education Investments’ <www2.ed.gov/fund/grant/about/discretionary/2023-non-regulatory-guidance-evidence.pdf>.
[206] Australian Education Research Organisation, ‘Standards of Evidence’ <www.edresearch.edu.au/using-evidence/standards-evidence> accessed 16 November 2023.