Researchers don’t need to be scared of AI. We were prompt engineers long before it was cool; we just didn’t realize it.

If you’re like us at Wiley, you’ve probably seen no shortage of posts on LinkedIn from people claiming to have unlocked the secrets to writing the most effective AI prompts. They give examples that imply finding the exact right combination of words can reveal almost mystical knowledge. 

And who knows…maybe it can. We’re not AI engineers. 

But being able to ask clear, intuitively answerable questions — which is all “prompt engineering” ultimately is — isn’t some new esoteric skill that became relevant only when ChatGPT was released. It’s what primary researchers have been doing their entire careers. 

Whether we were moderating a focus group and realizing that a respondent’s puzzled look meant our “perfectly crafted” question was missing the mark, or cognitively pretesting a survey and discovering an ambiguity in the response options that could produce unreliable field results, the work has always been about the same thing: refining messy questions into clear ones. 

 

Why the resemblance isn’t accidental 

Every time we ran a cognitive pretest or in-depth interview, we weren’t just collecting opinions. We were probing how people interpreted our questions. 

  • Asking: “Read this question aloud. What do you think it’s asking you? How would you answer?” 

  • Spotting unintended meaning. 

  • Iterating phrasing until the signal matched the intent. 

  • Controlling priming, ordering effects, anchoring, and context. 

In other words, we were engineering prompts before the term existed. 

 

A thought experiment 

Think of two different prompt writers. 

  • Prompt A: written by someone who’s never done a qualitative interview or focus group. They ask: 
    “What are the top trends in consumer behavior in 2025?”  

Clear enough at first glance, but wide open to misinterpretation — and almost guaranteed to return vague, generic answers. 

  • Prompt B: written by a researcher who’s been in way too many dingy focus group facilities in cities they had never even heard of before the project that landed them there. They ask: 
    “List three emerging consumer behavior trends for 2025 that are not yet mainstream. Provide a one-sentence rationale for each, and classify whether it’s driven by technology, culture, or economics.” 

Prompt B narrows scope, sets expectations, adds structure, and establishes categories. Exactly the way we’d refine a survey question or an interview probe. The difference isn’t technical — it’s a matter of question design. 
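For readers who do end up working with LLM tooling, the structure of Prompt B can be captured as a reusable template. This is only an illustrative sketch — the function name and parameters are our own invention, not part of any library:

```python
def build_trend_prompt(topic: str, year: int, n: int = 3,
                       categories=("technology", "culture", "economics")) -> str:
    """Compose a structured prompt in the style of Prompt B.

    Narrows scope (emerging, not yet mainstream), sets expectations
    (n items), adds structure (a one-sentence rationale for each),
    and establishes classification categories.
    """
    # "technology, culture" + ", or economics"
    cats = ", ".join(categories[:-1]) + f", or {categories[-1]}"
    return (
        f"List {n} emerging {topic} trends for {year} that are not yet mainstream. "
        f"Provide a one-sentence rationale for each, and classify whether it's "
        f"driven by {cats}."
    )

prompt = build_trend_prompt("consumer behavior", 2025)
```

The point of the template is the same as the point of a pretested survey item: the constraints live in one place, so every run of the question carries the same scope, structure, and categories.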

 

Why this matters now 

So while LinkedIn fills with a new breed of consultants presenting themselves as “prompt whisperers,” researchers shouldn’t feel like outsiders to this conversation. We’ve been whispering prompts into human ears for decades, refining phrasing until clarity emerged. 

The tools are new. The black box is bigger. But the discipline is familiar. Knowing how to ask questions that yield clarity and insight is not a skill AI diminishes — it’s a skill AI amplifies. 

 

So don’t despair  

The research industry may feel unsettled right now. But if you’ve built your career on carefully designing questions, iterating language, and watching for how people interpret nuance — you’re not starting from scratch. 

You’re already fluent in the language AI needs most. 

And that means researchers may be in a stronger position to thrive in this new environment than we think.