Learning and teaching with AI
A call for a rights-respecting approach
20 February 2025
Reading time: 9 minutes
The views expressed in this piece are of the author and do not necessarily reflect those of the Ada Lovelace Institute.
AI-driven educational technologies (AI in EdTech) can enable students to learn at their own pace, by catering to specific needs, as well as help teachers prepare engaging activities and complete administrative tasks.
However, specific challenges emerge at the intersection of AI and the children’s education sector, including risks to the fundamental rights of learners and teachers, such as fair treatment, privacy, freedom of expression and equal access to education.
The education ecosystem involves significant power imbalances – for instance, between students and teachers and between individuals and institutions – yet education is both mandatory and crucial for a person’s development. If we continue to use AI in this context, it is of the utmost importance that institutions adopt a children’s rights-respecting approach to AI governance, conceived for and with children, to establish the necessary safeguards, and to ensure AI is used in the best interest of students and educators.
While there are ongoing regulatory and policymaking efforts, AI deployment in school settings is fast outpacing policy debates and the implementation of safe frameworks.
In this blog, I provide an overview of the benefits and risks associated with the use of AI in children’s education, in schools and elsewhere, and argue for the importance of adopting a child-centred and rights-respecting approach.
Potential benefits of AI in EdTech
AI systems can provide insights into students’ learning patterns by analysing their interactions with digital tools, identifying the areas in which they struggle, and making suggestions for improvement. Based on such analysis, AI systems can support teachers in curriculum design and assessment, and help them personalise content and learning experiences. In a similar vein, AI in EdTech can contribute to creative activities in classrooms, provide fun and playful experiences, and support students’ engagement with different topics.
These tools may change the learning experience of students with special educational needs and disabilities, promote different forms of learning and make teaching more inclusive. For example, AI-powered speech recognition and text-to-speech tools can help students with visual or hearing impairments, and generative AI tools (which can augment or create new and original content like images, videos, text and audio) can facilitate the participation of children with additional needs in classroom activities and provide them with a suitable way to express themselves.
AI can also support teachers and administrators in completing administrative work, which is part of the learning and teaching experience. This has become a topic of interest for the UK Government, which is currently discussing the use of responsible AI for teaching and assessing coursework. Policymakers hope to be able to automate tasks like grading and attendance tracking to free up time for lesson planning and student interaction. These developments are gaining traction in the context of the UK policy landscape on AI in EdTech and the recent announcement of the UK’s AI Opportunities Action Plan.
Potential risks of AI in EdTech
These potentially beneficial uses, however, can quickly become problematic, with AI tools failing to achieve what the companies producing them promise to deliver.
AI can make learning more accessible, but it can also exacerbate inequalities. And AI systems are only as good as the data on which they are trained. If this data reflects existing societal biases, such as racial or gender discrimination, the system will perpetuate them with its recommendations and decisions. In schools, this could lead to the unfair treatment of certain groups of students, potentially worsening existing inequalities in education.
Beyond biased systems, the use of AI can widen inequalities, as not all schools and pupils have the resources to access expensive technologies.
Privacy breaches and heightened surveillance constitute another substantial risk. AI in EdTech requires large amounts of personal data to function effectively, including sensitive information such as students’ academic performance and behaviour. If not adequately protected, a student’s data could be exploited or misused. For example, schools might use data from monitoring tools to track students’ behaviour and online activities beyond academic purposes, such as how often they use certain websites or interact on social media. This type of surveillance leaves students feeling constantly watched, negatively affecting their wellbeing, confidence and trust.
Moreover, while AI can help automate routine tasks and offer insights, it cannot replace the human interaction that characterises teaching and learning. When an app provides quick answers to questions, it can discourage students from reflecting on issues and solving problems independently. Instead of stimulating students, AI could limit their creativity and imagination, and promote a standardised and passive approach to learning.
AI’s potentially negative impact on children’s rights, including wellbeing, freedom of expression, privacy, equal treatment and access to education, raises huge concerns. The stakes of EdTech could not be higher, as its use affects children’s futures, and the absence of adequate legal protections only amplifies the risks.
To realise the benefits that AI has to offer the education sector, it is necessary to establish a user-centric and ethical approach as well as technical standards. These must ensure that tools are pedagogically beneficial and aligned with children’s rights, as put forward by the Digital Futures Commission’s work on the blueprint for education data.
Policy and regulatory landscape
Efforts are underway in various jurisdictions to develop policy and regulatory frameworks ensuring responsible AI innovation and protecting learners’ rights. However, except for the EU AI Act, which is legally binding, such efforts remain at the level of research, recommendations, guidance under non-AI-specific laws, and codes of conduct. Policymakers should instead focus on specifically regulating AI use in education, taking into account young people’s attitudes towards it.
The US Department of Education published Designing for Education with Artificial Intelligence: An Essential Guide for Developers in July 2024. If formalised into actual regulations, many of these guidelines could contribute in practice to responsible AI use in education. Similarly, the UK Government has addressed the rapidly changing AI landscape in education and issued its position on using large language models like ChatGPT in schools. At the same time, the ICO has been designing an Age appropriate design code to support the development of, among other tools, AI-powered technology for schools.
International initiatives so far include frameworks like the Beijing Consensus on Artificial Intelligence and Education, which is part of the general UNESCO guidance on AI and education, and the Council of Europe’s framework on AI and Education. These two documents highlight that we need rigorous legal and ethical standards and effective oversight to ensure AI benefits all students equally and protects their rights. However, such legal requirements are still rare to non-existent in most national and international jurisdictions.
Four key areas of concern
Four critical issues that are specific to the use of AI in education require the urgent attention of policymakers.
First, the rights of vulnerable children, like those with special educational needs and disabilities (SEND), require particular consideration, especially in relation to the use of emotion recognition tools. The hype around the deployment of AI in EdTech, and emotion recognition systems in particular, hinges on its use to support children with SEND. However, research has consistently highlighted serious concerns about lack of representation and biased tools. This is not a new dilemma: the implications of emotion recognition systems in any domain have long been debated. Emotion recognition lacks scientific validation, and the tools based on it risk harming vulnerable groups. While these systems are often marketed as beneficial for children with learning disabilities and those in need of additional mental health support, they pose significant risks, including their potential infringement on freedom of thought. Recent regulatory developments in the EU, especially concerning AI in education, acknowledge these risks. As awareness grows, it is becoming clear that the very children these technologies are claimed to support may in fact be the most impacted by their harms.
Second, there are no clear standards for how and what technologies can contribute to the public task of education. Developing specific standards for AI in EdTech that prioritise students’ best interests and their education is crucial to promoting equity in society and supporting responsible innovation. Research by the Digital Futures Commission has stressed the need for formal certification for all EdTech products to avoid disparities in the quality of education delivered through digital platforms.
For example, the proliferating use of biometric technologies in UK schools, such as facial recognition, raises significant privacy and security concerns. The Defend Digital Me report, ‘The State of Biometrics 2022: A Review of Policy and Practice in UK Education’, analyses the current UK legal framework and questions whether students are protected from harms, recommending areas where robust norms are necessary to defend their rights.
Third, regulations related to AI in EdTech and technical standards will have to meet the expectations of young people themselves and embrace an intersectional and inclusive approach, as the use of AI in education can affect students in different parts of the world in different ways. To ensure AI empowers all students, its deployment has to take into account existing racial, socio-economic and gendered injustices and be carefully considered in stakeholder consultations.
Finally, as the private sector plays a critical role in the design, development and deployment of AI in EdTech, public-private sector collaborations in this context are necessary but also raise ethical questions. The interests of private companies might compete with those of students and must be carefully managed to ensure that innovation efforts align with broader educational, individual and societal interests, and certainly with children’s rights.
Crucially, we must move beyond the false dichotomy between innovation and regulation. Regulation and compliance are not barriers to innovation but essential pillars that ensure responsible and ethical development. True innovation thrives when guided by well-structured regulation, fostering trust, accountability and long-term sustainability. This is not a matter of balancing between the two: innovation and regulation must evolve together.
The way forward
These challenges require a combination of initiatives. Robust regulatory and policy frameworks that address these unique challenges and risks are necessary to ensure responsible use.
Regulations should require that AI developers adhere to ethical guidelines and comply with standards that centre transparency, accountability and fairness, and protect students’ privacy and autonomy. These policy objectives can only be achieved through meaningful public participation and evidence-based research. Engaging with students, parents, guardians, educators, policymakers and computer scientists will help identify key concerns and needs, build trust, and enable the use of AI systems only when it is in the best interest of students.
In all of this, AI tools that offer pedagogical value and respect the rights of children must always be the baseline.