Prompting is Dead. Context Engineering is What L&D Actually Needs.
Context engineering sounds technical. It's not. It's what you already do.
Scroll LinkedIn or Reddit and you'll find two camps of ID folks battling it out in the comments. You've seen the post: "I asked ChatGPT to write a course / create a graphic / etc. and look at what it created," followed by either breathless amazement confirming that AI has become our overlord, or performative disappointment confirming AI is ruining humanity.
The comments are inevitably split. Believers share their own magic prompts. Skeptics declare AI useless for “real” instructional design. Both miss the point entirely.
We shouldn’t be thinking of AI as primarily a tool that generates output. The real questions are: What’s your input/output mechanism? Where does the information come from? How easily can you pull it in? Where does the output need to go? How easily can you push it there?
When I talk to L&D folks about their AI use, I see people land at one of four levels, most often Level 0 or 1.
Level 0 is “Prompt and Pray”: every conversation starts from zero, you’re typing your audience and context fresh each time, hoping the right phrasing unlocks a usable response.
Level 1 is Static Context: you’ve set up persistent information the AI can access (your audience, your voice, your constraints), so you’re not repeating yourself constantly. Think Claude Projects or ChatGPT projects.
Level 2 is Layered Context: you’ve built a system. There are global rules for how you work, plus project-specific context for each initiative. The AI knows the stakes and the standards before you say a word.
Level 3 is Connected Context: you’ve got input mechanisms pulling from where information lives (Slack, Google Drive, your content library) and outputs push to where the work happens. Institutional memory begins to compound.
Most of the “prompt engineering” content out there is teaching people to be better at Level 0. And the L&D-specific advice? It barely reaches Level 1. It’s prompt templates, generation tricks, and AI-for-speed — not setting up the context that makes prompts matter less.
The Shift: From Prompting to Context
In the engineering world, this shift already happened. Developers stopped obsessing over the perfect prompt and started building the context around it — the instructions, the reference code, the constraints that make AI actually useful for their work. They call it context engineering.
I work alongside engineers every day. Watching how they set up AI for coding changed everything for me. They don't obsess over prompt wording—they obsess over what the AI knows before they prompt it. The setup is the work. The prompt is just the trigger.
Compare that to prompt engineering, which treats each interaction as a standalone event: the right incantation produces the right output.
Think of it this way: prompt engineering is wordsmithing. Context engineering is curriculum design.
I hear a lot of “but you can’t control what the AI produces.” And I get it. But that’s exactly what context engineering solves. The prompt is maybe the last 10%. The other 90%? It’s everything you set up before you type a single word.
But here’s the thing — most AI tools have built-in memory now, and it’s terrible. It’s like that friend who’s always on their phone — technically listening, barely remembering anything. Context engineering is solving this yourself: explicitly providing what the AI needs to know so you’re not re-explaining every time.
When you perfect a prompt, you get one good output. When you set up your context well, every conversation is better. Even lazy prompts produce useful results because the AI already knows what it needs to know.
Think of it like onboarding. You can explain the company culture, the brand voice, and the goals to a new hire every single time they ask a question—an exhausting cycle of ad-hoc “prompting.” Or, you can onboard them once. You give them the handbook, the past examples, and the “how we work” guide. After that, they don’t need magic instructions; they just need a task.
Ad-hoc prompting is the tax you pay for poor onboarding. Context engineering is onboarding the AI once so it can finally do its job.
What does context engineering look like for L&D?
IDs and learning designers should be front and center using AI in their companies. Of all the roles, we have the skill set that makes this natural.
Think about what you do throughout the program proposal and build phases:
You interview SMEs to extract what they know but can’t articulate
You document how a process actually works (not just how it’s supposed to)
You identify what’s important vs. what’s just noise
You capture edge cases that trip people up
You organize information so it’s findable and usable
This is context engineering. You’ve been doing it your whole career.
The difference is the output format. Instead of a design document for humans, you’re building a dossier for an AI partner.
The Levels: From Prompt-and-Pray to Context Systems
Level 1: Static Context
You are likely already doing this. You’ve fed the LLM a bit about your role or your company voice. You’re using Custom Instructions or the basic Projects feature in Claude or ChatGPT to keep a few reference documents handy.
The Workflow: You open a specific project folder, and the AI knows your basic style guide.
The Gain: You stop re-explaining every week that you work at “ZED company.”
The Ceiling: It’s a flat system. If you change projects or audiences, you’re back to dumping new info into the chat and hoping the AI doesn’t get “context drift” and forget your earlier rules. Better than Level 0, but you’re still repeating the same setup work with every switch.
Level 2: Layered Context
This is where it stops feeling like a tool and starts feeling like a collaborator. At this level, the AI doesn’t just remember things — it’s organized.
The Workflow: You can achieve this in Claude today by stacking two distinct features that most people use in isolation:
The Global Layer (personal preferences): This is your baseline — it lives in your account settings and applies to every single chat. Your communication style, your strengths, your “never-dos,” your preferred frameworks. You write it once.
The Project Layer (project instructions): Each project gets its own brief: audience, constraints, what good looks like, reference docs.
When you start a conversation, the AI reads both layers before you say a word. The global layer knows how you communicate. The project layer knows what you're building.
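If you like seeing the mechanics, here’s a minimal sketch of what that layering amounts to under the hood. The file contents and headings are made-up stand-ins for your own rules and briefs — Claude’s Projects feature assembles this for you; the point is just that the two layers stack into one briefing before your first word:

```python
# Sketch: stacking a global layer and a project layer into one system
# prompt. The rule text below is illustrative, not a prescribed format.

GLOBAL_RULES = """\
Voice: clear, direct, a bit of fun.
Never: corporate jargon, filler phrases.
Frameworks: backward design, action mapping.
"""

PROJECT_BRIEF = """\
Audience: first-year sales reps, mobile-first.
Constraint: modules under 10 minutes.
Good looks like: one scenario per objective, feedback on every choice.
"""

def build_system_prompt(global_layer: str, project_layer: str) -> str:
    """Layer context so every conversation starts fully briefed."""
    return (
        "## How I work (applies everywhere)\n" + global_layer
        + "\n## This project\n" + project_layer
    )

prompt = build_system_prompt(GLOBAL_RULES, PROJECT_BRIEF)
print(prompt)
```

Notice the order: the global layer comes first because it applies everywhere; the project layer narrows it. Swap the project brief and the global rules travel with you untouched.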
The Gain: Like compound interest, each piece of context makes the rest more valuable. Your decisions and constraints travel with the project. Your standards stay consistent across everything.
Level 2 is where I’m asking you to go. The tools exist now. The barrier is knowing they’re there and setting them up.
Level 3 is where this is all headed - and where I think the real potential for L&D lives.
Level 3: Connected Context
This is where context becomes dynamic — and where institutional memory starts to compound.
On the input side, the AI isn’t waiting for you to upload a PDF; it is querying your company’s Slack, Google Drive, or intranet directly. On the systems side, multiple “agents” can hand off to each other—where the context from a Tuesday SME interview automatically informs the Wednesday storyboard and the Thursday assessment.
Here’s a real example. I recently needed to update the resources for a company-wide event.
The usual approach: analyze post-survey results. But surveys only capture the extremes—people who loved it or hated it. I wanted to know what tripped people up in the middle, while they were actually working.
This time, I used a connected approach to get this information and update the content:
Slack Integration → I had the AI analyze the real-time “help” channel from the last event to see exactly where people got stuck while they were working.
Intranet connection → searched our company docs for the resources that already existed — what was there, what was outdated, what had gaps.
My Global Context → my principles for knowledge management were already loaded: clear, direct, just enough information, with a bit of fun baked in.
From there, the AI mapped out the gap areas and flagged which pages needed updates. The result wasn’t just a “faster” update; it was a smarter one. I didn’t copy-paste a single thing. I didn’t re-explain my audience. I just steered the ship.
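Stripped of the AI, the “connected” step is really just data plumbing. Here’s a toy sketch of the gap-mapping logic: the help-channel messages and doc inventory below are invented stand-ins for what a real Slack or intranet integration would pull in, and the keyword matching is deliberately crude.

```python
# Sketch: map help-channel pain points against an existing doc inventory
# and flag topics that have no doc yet. All data here is hypothetical.
from collections import Counter

help_messages = [  # stand-in for pulls from the event's #help channel
    "Can't find the VPN setup guide",
    "VPN keeps dropping during the workshop",
    "Where do I upload my project file?",
    "Upload page errors out for me too",
]

existing_docs = {"vpn-setup", "event-schedule"}  # stand-in intranet inventory

TOPIC_KEYWORDS = {"vpn": "vpn-setup", "upload": "file-upload-howto"}

def find_gaps(messages, docs):
    """Count pain points per topic, then keep topics with no matching doc."""
    hits = Counter()
    for msg in messages:
        for keyword, doc_slug in TOPIC_KEYWORDS.items():
            if keyword in msg.lower():
                hits[doc_slug] += 1
    return {slug: n for slug, n in hits.items() if slug not in docs}

gaps = find_gaps(help_messages, existing_docs)
print(gaps)  # -> {'file-upload-howto': 2}
```

VPN questions came up twice, but a doc already exists, so they drop out; the upload complaints surface as the real gap. The AI version of this is fuzzier and smarter, but the shape of the work is the same: inputs flow in, gaps flow out.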
Word of Caution: Pulling from Slack or your intranet requires security sign-off — that's not a solo decision. The context files and layered setup? You can start building those today.
Why This Matters Beyond Efficiency
Context doesn’t just serve memory. It enables thinking.
When you write your principles into a context file, you’re doing something most people never do — making your implicit standards explicit. And then the AI holds you to them. It can push back. It can notice patterns you missed. It can tell you when you’re drifting from your own rules. That’s reflective practice — except now you have a partner that never forgets what you said you cared about.
That’s not just an “output generator.” That is a thinking partner (I wrote about this partnership with AI last year).
That’s the real return on context — not just better outputs, but better thinking about your own practice.
Want to get to Level 2+?
I’m building a step-by-step tutorial on how to set up a “Context Dossier,” including the exact Global Rules I use to keep my AI partner from sounding like a corpo stooge. It will include:
Global rules — The AI knows how I work before I say a word. Communication style, strengths, gaps, guardrails. Written once, applied everywhere.
Project-specific context — Each project carries its own brief. Open one up, and the AI already knows the audience, constraints, and what good looks like.
Specialized agents — Different roles for different tasks. An editor that knows my voice. An ID agent that pushes back when I over-engineer. Each loaded with its own context.
Examples over style guides — Not descriptions of my work — actual examples of it. Show, don’t tell applies to AI context too.
I’m curious: where are you right now? Level 0, copying and pasting the same background info every conversation? Level 1 with a Claude Project? Already layering? If you want the tutorial when it’s ready, let me know.
So the next time you see that LinkedIn post — and the comments split into believers and skeptics — notice what neither camp is talking about: the context underneath that makes everything else work.
When you move from prompting to context engineering, you stop being a “user” of a tool and start being the architect of a system. A partner that understands your standards as well as you do.
You’ve been doing this work your entire career. Stop perfecting your prompts. Start building your context.
Have a great week ahead - if you’re in the path of the arctic blast moving through North America, stay warm and dream of sunnier times! I know I am.
- Maria