Context Engineering: Between Prompts and RAG

I've written before about the reported death of Prompt Engineering (spoiler: it's alive and well). Now there's apparently Context Engineering to wrestle with, thanks to Shopify's Tobi Lutke coining the term.

Good. This needed a name.

Context Engineering sits in the critical zone between prompt engineering and RAG, and it's where most AI implementations actually succeed or fail. Getting it right was my biggest breakthrough when building AI-driven content systems at my previous company.

The Evolution of Getting Context Right

Stage one was the obvious realization: inject specific, targeted data rather than relying solely on the model's training data. (Yes, this sounds elementary now, but we're talking early days.)

Stage two was more nuanced: choosing which data to inject matters enormously. For earnings coverage, the press release helped. Adding the conference call transcript helped more. Incorporating sections of the 10-K helped even more.
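To make the layering concrete, here is a minimal sketch of that stage-two idea: stacking labeled sources into one context block, ordered by how much each one helped. The source names, placeholder text, and the `assemble_context` helper are all illustrative assumptions, not anything from a real pipeline.

```python
# Illustrative sketch: layering context sources in order of observed value.
# The source labels and text snippets below are made up for the example.

SOURCES = [
    ("press_release", "Q3 revenue rose year over year, driven by subscriptions."),
    ("call_transcript", "CEO: we saw unusual strength in the enterprise segment."),
    ("10k_excerpt", "Risk factors include customer concentration in large accounts."),
]

def assemble_context(sources):
    """Concatenate labeled source documents into one context block."""
    parts = [f"### {label}\n{text}" for label, text in sources]
    return "\n\n".join(parts)

context = assemble_context(SOURCES)
```

The point isn't the string formatting; it's that each added source is a deliberate choice, and the ordering encodes which evidence you trust most.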

Stage three—where I'm still learning—revealed the real art: curating how that context gets constructed. Even with million-token context windows, cramming every available piece of information rarely produces the best results.
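One way to picture stage three is context curation as a budgeting problem: score candidate chunks for relevance and keep only the best ones that fit, instead of cramming the window. This is a hypothetical sketch; the relevance scores are assumed to come from elsewhere, and the four-characters-per-token estimate is a rough stand-in for a real tokenizer.

```python
# Hypothetical sketch: keep the highest-relevance chunks under a token budget
# rather than stuffing everything into the context window.

def estimate_tokens(text):
    """Crude heuristic (~4 chars per token); a real system would tokenize."""
    return max(1, len(text) // 4)

def curate(chunks, budget):
    """chunks: list of (relevance_score, text) pairs.
    Greedily keep the highest-scoring chunks that fit the token budget."""
    selected, used = [], 0
    for score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            selected.append(text)
            used += cost
    return selected
```

Even this naive greedy version captures the core insight: the act of leaving things out is part of the engineering.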

Why Context Engineering Matters

Think about human context for a moment. Walk into any situation and you're immediately processing: Where am I? What's happening? What do I already know about these people and circumstances?

That context drives everything—how you speak, what you notice, how you respond to questions. You don't absorb every detail, but you instinctively focus on information that seems relevant. Focus on the wrong signals, and your responses will be off.

LLMs work similarly. The contextual environment you create fundamentally shapes how the model responds. Whether you emphasize recent earnings trends versus long-term strategic shifts will produce entirely different analyses of the same company.
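A tiny sketch of that framing effect: the same facts, wrapped with two different emphasis instructions, produce two different prompts and, in practice, two different analyses. The prompt wording and the facts string are illustrative assumptions.

```python
# Illustrative sketch: identical facts, two framings. The emphasis line
# steers what the model treats as salient; all strings here are made up.

FACTS = "Revenue grew this quarter; management announced a multi-year platform rebuild."

def build_prompt(facts, emphasis):
    return (
        "Analyze the following company update.\n"
        f"Emphasis: {emphasis}\n"
        f"Facts: {facts}"
    )

near_term = build_prompt(FACTS, "recent earnings trends")
long_term = build_prompt(FACTS, "long-term strategic shifts")
```

Nothing about the underlying data changed; only the contextual frame did, and that frame is the product decision.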

The Strategic Imperative

Context Engineering isn't just a technical consideration—it's a product strategy decision. The context you provide doesn't just inform the AI's response; it defines the AI's perspective on the problem you're trying to solve.

Master this, and you're not just building better AI products. You're building AI products that understand their purpose.

Have thoughts on this?

I'd love to hear your perspective. Feel free to reach out.