Context Engineering for AI Agents Explained - a simple overview of the basics
Why is everyone suddenly talking about “context engineering”? What it is, how it helps AI agents and why it matters to product teams.
🔒The Knowledge Series is available for paid subscribers. Get full ongoing access to 75+ explainers and AI tutorials to grow your technical knowledge at work. New guides added every month.
Barely a week goes by without some new technology emerging that gets everyone talking. It’s not always wise to lurch from one topic to another based on a couple of tweets - particularly with something as volatile as AI - but “context engineering” might just be a concept that warrants a little further exploration.
In the past week or so, it has gained some notable new backers: the CEO of Shopify and a co-founder of OpenAI.
Shopify’s Tobi Lütke describes it as “the art of providing all the context for the task to be plausibly solvable by the LLM”, and OpenAI co-founder Andrej Karpathy gave the concept a “+1” over prompt engineering. One supporter even went so far as to say that context engineering is “the new vibe coding”. But let’s not go there.
As AI agents become more important, so does the information we put into their limited working memory. Whether an agent’s task ultimately succeeds or fails comes down in part to the quality of the context you give it.
In this Knowledge Series, we’ll take a closer look at the emerging concept of context engineering. We’ll explore what it is, the different types of context you can give AI agents and why this matters to product teams building AI features. We’ll also work through an end-to-end example to bring everything together.
Coming up:
What is Context Engineering?
Why does it matter to product teams?
What types of features might you build using context engineering?
What’s the difference between prompt engineering and context engineering?
A real-world example to bring the core concepts to life
Further reading and resources to get up to speed quickly
What is context engineering?
Context engineering describes the process of building a system that provides an LLM or AI agent with all of the relevant information and tools, in the right format, so that it can complete a task.
It is most relevant in the development of AI agents, which can draw context from many different sources: the engineer building the agent, the user, previous interactions and external data. Pulling all of this together ultimately involves a complex system.
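To make that a little more concrete, here’s a rough sketch in Python of what “pulling all of this together” can look like for a single model call. The function and message shapes below are hypothetical and purely illustrative - they don’t belong to any particular framework or API.

```python
# A minimal, hypothetical sketch of a context-assembly step for an AI agent.
# It combines the main sources of context mentioned above into one request.

def build_context(system_instructions: str,
                  user_message: str,
                  conversation_history: list[dict],
                  retrieved_documents: list[str],
                  tool_definitions: list[dict]) -> dict:
    """Assemble the different sources of context into a single model request."""
    # Instructions from the engineer building the agent.
    messages = [{"role": "system", "content": system_instructions}]

    # Previous interactions: earlier turns of the conversation.
    messages.extend(conversation_history)

    # External data: documents retrieved from a knowledge base,
    # included as grounding for the current task.
    if retrieved_documents:
        grounding = "\n\n".join(retrieved_documents)
        messages.append({"role": "system",
                         "content": f"Relevant reference material:\n{grounding}"})

    # The user's current request.
    messages.append({"role": "user", "content": user_message})

    # Tools the agent is allowed to call, described alongside the messages.
    return {"messages": messages, "tools": tool_definitions}
```

The interesting design work sits around a function like this: deciding which documents are worth retrieving, how much history to keep and which tools to expose for the task at hand.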
AI agent systems can sometimes mess up and fail at their tasks. Cursor’s CEO recently explained that this is in part caused by a failure of agentic systems to fully understand the context in which they operate.
Here’s a snapshot of the different types of context typically available to engineering teams developing AI-powered features or agents:
“Context engineering is the delicate art and science of filling the context window with just the right information for the next step” - OpenAI co-founder Andrej Karpathy
The different types of context explored - and why they matter to product teams
In LLMs and AI agents, the context window refers to the maximum number of tokens (chunks of text, roughly words or parts of words) that the model can “see” and use as input at any one time. Managing the context window well is crucial for building reliable and capable AI agents.
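As a rough illustration of what “managing the context window” can mean in practice, here’s a simple sketch that keeps a conversation inside a fixed token budget by dropping the oldest turns first. It uses a crude characters-per-token estimate rather than a real tokenizer, and the names are made up for the example.

```python
# A minimal sketch of fitting a conversation into a fixed context window.
# Assumes a rough estimate of ~4 characters per token; real systems would
# use the model's own tokenizer.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fit_to_window(messages: list[dict], max_tokens: int) -> list[dict]:
    """Drop the oldest non-system messages until the conversation fits."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs: list[dict]) -> int:
        return sum(estimate_tokens(m["content"]) for m in msgs)

    # Remove the oldest turns first; always keep the system instructions
    # and the most recent turns.
    while rest and total(system + rest) > max_tokens:
        rest.pop(0)

    return system + rest
```

Dropping old turns is only one strategy - teams also summarise earlier conversation, or retrieve only the most relevant snippets, to make better use of the same budget.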
AI agents can be given access to multiple different types of context to improve their performance - and potentially reduce the number of tokens required to achieve their goals.
Here’s a closer look at some of those types of context - and why they matter to product teams: