What is a foundation model?

This explainer is for anyone who wants to learn more about foundation models, also known as 'general-purpose artificial intelligence' or 'GPAI'.

Elliot Jones

17 July 2023

Reading time: 5 minutes


This resource was updated in July 2024 with a refreshed supply chain diagram.
The new diagram now includes ‘affected persons’, and highlights the development, pre-deployment and deployment stages.

Artificial intelligence (AI) technologies have a significant impact on our day-to-day lives. AI is embedded in many systems and processes that affect us.

People have mixed views about the use of AI technologies in our lives,[1] recognising both the benefits and the risks. While many members of the public believe these technologies can make aspects of their lives cheaper, faster and more efficient, they also express worries that they might replace human judgement or harm certain members of society.

An emerging type of AI system is a ‘foundation model’, sometimes called a ‘general-purpose AI’ or ‘GPAI’ system. These are capable of a range of general tasks (such as text synthesis, image manipulation and audio generation). Notable examples include OpenAI’s GPT-3 and GPT-4, the foundation models that underpin the conversational chat agent ChatGPT.

Because foundation models can be built ‘on top of’ to develop different applications for many purposes, they are difficult – but important – to regulate. When a foundation model acts as a base for a range of applications, any errors or issues at the foundation-model level may affect any applications built on top of (or ‘fine-tuned’ from) that model.

As these technologies are capable of a wide range of general tasks, they differ from narrow AI systems (those that focus on a specific or limited task, for example, predictive text or image recognition) in two important respects: it can be harder to identify and foresee the ways they can benefit people and society, and it is also harder to predict when they can cause harm.

As policymakers begin to regulate AI, it will become increasingly necessary to distinguish clearly between types of models and their capabilities, and to recognise the unique features of foundation models that may require additional regulatory attention.

For these reasons, it is important for the public, policymakers, industry and the media to have a shared understanding of terminology, to enable effective communication and decision-making.

We have developed this explainer to cut through some of the confusion around these terms and support shared understanding. It will be particularly useful for people working in technology policy and regulation.

In this explainer we use the term ‘foundation models’ – which are also known as ‘general-purpose AI’ or ‘GPAI’. Definitions of foundation models and GPAI are similar and sometimes overlap. We have chosen to use ‘foundation models’ as the core term to describe these technologies. We use the term ‘GPAI’ in quoted material, and where it’s necessary in the context of a particular explanation.


We also explain other related terminology and concepts, to help distinguish what is and isn’t a foundation model.

We recognise that the terminology in this area is contested. This is a fast-moving topic, and we expect that language will evolve quickly. This explainer should therefore be viewed as a snapshot in time.


Rather than claiming to have solved the terminology issue, this explainer will help those working in this area to understand current norms in the use of terminology, and their social and political contexts.


Terminology is socially constructed and needs to be understood in context – where possible we have included the origins and uses of terms, to help explain the motivations behind their use.

Why are foundation models hard to define?

It is hard to define and explain these technologies, partly because people don’t always agree on the meaning of AI itself. There is no single definition of AI, and the term is applied in a wide variety of settings. Even what is meant by ‘intelligence’ is a contested concept.[2]

In this explainer, we refer to the UK Data Ethics Framework’s definition of AI: ‘AI can be defined as the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.’[3] For a full definition, see Table 3: Glossary.

We recognise that this definition is not definitive and is UK-centric, but note that it is similar to other definitions adopted by national and international governments – including the EU High Level Expert Group on AI’s definition of AI as ‘systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.’[4]

There are several similar, related terms used in policymaking, academia, industry and in the media (see Table 3: Glossary). Many of these terms stem from computer science but are now being used in different ways by other sectors. Not everyone agrees on the meaning of these terms, particularly as their use evolves over time.

The terms can be difficult to define, are subject to multiple interpretations and are often poorly understood. Some of them refer to components of AI systems, or to related fields or subdisciplines of AI. We hope the definitions provided here offer a base level of shared understanding for members of the public, policymakers, industry and the media.

Foundation model supply chain diagram
