Prompt Engineering Is Dead. Long Live Prompt Engineering.

I've read quite a bit lately claiming that prompt engineering is no longer relevant. Or that it soon won't be. Or that it shouldn't be necessary.

I don't buy it.

Or, at least, not entirely.

The Name vs. The Skill

The very title "prompt engineering" - particularly the "engineering" part - is a bit overwrought. Maybe a lot overwrought. It makes it sound complicated. Like you need a degree or a calculator or a knighting by Queen Anne of the AI.

It's not really any of those things, but it's nonetheless important. It's just that calling it "communicating effectively with a large language model" doesn't have the same ring to it as "prompt engineer."

In other words, I think we could all do without the perceived pomp of the Prompt Engineer title. But the essence of what prompt engineering is - communicating effectively with large language models - is something that's still extremely relevant.

Why Communication Always Matters

In some sense, it's simple: because communicating effectively in general is important. But when you combine that truism with the sort of naive brilliance of an LLM, it does become even more important.

At least, if you want effective results.

The Human Parallel: How We Already Do This

Let's consider some practical examples.

If I come to you and start asking about that game that you play with the round ball and the wooden stick where people hit it and run around on grass and dirt, how would you respond? If you're reasonable, you'd assume I don't know anything about baseball, and so you'd respond by tip-toeing around baseball terminology, trying to figure out what I wanted to know without losing me in something I clearly don't understand.

Here's another.

Imagine going to a financial advisor hoping to get holistic financial advice. But you want to sound smart, so you start the conversation asking about whether the current macroeconomic picture means that people should be selling right now. What does that do to the conversation? Even if the advisor realizes that you don't fully know what you're talking about, they may tailor what they say to the assumption that you are interested in short-term trading based on macroeconomic factors.

And one more.

Was, wenn du auf Deutsch anfängst und hoffst, etwas zu lernen? (What if you started in German, hoping to learn something?) Hopefully you understand German, because the answer will likely come auf Deutsch. That, or the confused non-German speaker you're talking to will have no idea what you're asking.

The point I'm belaboring here is that it matters how you frame any communication exchange. That's true of interactions with other humans, and it's true of interacting with an LLM. How your words and framing tee up the model to understand what you're after - and, realistically, cue its complex pattern matching - is critical. Ask in a certain way or with certain words, and you'll get a different answer than if you approach it another way.

Getting Practical: Simple vs. Complex Needs

If you're using a web chat interface to work through fairly simple needs - think, macaroni and cheese recipe - you don't have to stress as much. A back and forth with an LLM can typically get you to the recipe you're after, tailored to your exact needs.

But, if you are after something more complex, or if you're creating LLM prompts within an automated system, you don't have the same luxury. Getting to the answer/output that you're after relies on precise use of language and clarity of thought.

Real-World Applications Matter

For example, when building an AI assistant that needs to analyze customer feedback at scale, your prompt architecture becomes critical. The difference between "summarize this feedback" and "identify the top three product issues mentioned in this feedback, categorize them by severity, and suggest potential solutions based on previous customer interactions" will yield dramatically different results.
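To make that concrete, here's a minimal sketch of what that kind of structured prompt might look like when it's built programmatically inside a pipeline. The function name, template wording, and severity scale are all illustrative assumptions, not a real API:

```python
def build_feedback_prompt(feedback: str, max_issues: int = 3) -> str:
    """Build a structured analysis prompt instead of a vague 'summarize this'.

    Spells out the task, the number of issues wanted, a severity scale,
    and the expected follow-up, so the model isn't left guessing.
    """
    return (
        f"Identify the top {max_issues} product issues mentioned in the "
        "feedback below, categorize each by severity (low/medium/high), "
        "and suggest one potential solution per issue.\n\n"
        f"Feedback:\n{feedback}"
    )


prompt = build_feedback_prompt("The app crashes when I upload photos.")
print(prompt)
```

The point isn't the particular template; it's that in an automated system the framing is fixed up front, so the precision has to be designed in rather than negotiated over a back-and-forth.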

The Meta-Prompt Approach

There's sort of a shortcut to this.

You can use the LLM itself to help you to get to a good starting prompt. You do need an understanding of what you're after in the first place, but if you have that, you can prompt the LLM to help you craft the prompt that gets you to the output you want. A bit circular, yes, but effective.
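As a sketch, a meta-prompt just packages what you already know you want - goal, audience, output format - into a request for the model to draft the real prompt. Everything here (the function name, the fields, the wording) is a hypothetical illustration of that pattern:

```python
def build_meta_prompt(goal: str, audience: str, output_format: str) -> str:
    """Ask the LLM to draft a prompt for us, given what we already know we want."""
    return (
        "You are helping me write a prompt for a large language model.\n"
        f"Goal: {goal}\n"
        f"Audience: {audience}\n"
        f"Desired output format: {output_format}\n"
        "Draft a clear, self-contained prompt that will produce this output. "
        "Ask me clarifying questions first if anything is ambiguous."
    )


meta = build_meta_prompt(
    goal="Explain our refund policy changes",
    audience="non-technical customers",
    output_format="a short FAQ with 5 questions and answers",
)
```

Note that the meta-prompt still requires you to articulate the goal, audience, and format yourself - which is exactly the "circular but effective" part.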

The Fundamental Challenge Remains

But this doesn't let you off the hook completely. You still need to take some time to really understand what you want and how you want that information delivered. And if you're after something truly complex and domain-specific, you may need to do some learning first, just to have the words and concepts to ask for what you actually need.

In large part it's one of those "simple, not easy" kind of things. Anybody who can communicate well can do this well. With effort.

The Future of Effective AI Communication

Do you want to call this "prompt engineering"? I, for one, don't care. But I do know that whatever it is, it's important, and because what we're talking about here is effectively communicating what you want, I don't see that importance diminishing even as LLMs get smarter. In fact, as these models become more capable, the difference between mediocre and excellent prompting will likely become even more pronounced.

As someone who builds AI products, I've found that the most valuable skill isn't knowing the latest prompt "hacks" - it's developing a deep understanding of how to frame problems in ways that machines can effectively process. And that's a skill that will remain valuable no matter what we call it.

Have thoughts on this?

I'd love to hear your perspective. Feel free to reach out.