Paintou
2026-05-17

AI Model Now Interrogates Humans to Gather Context, Replacing Traditional Documentation

New technique uses LLMs to interview humans one question at a time, replacing manual context writing. Experts say it speeds up complex tasks and helps non-writers.

Breaking: Developers are turning the tables on large language models (LLMs) by having them interview humans instead of relying on human-written context documents. This novel technique, known as an 'interrogatory LLM,' could dramatically speed up complex tasks like feature design and software specification reviews, according to Martin Fowler, chief scientist at ThoughtWorks.

Instead of a human writing pages of markdown to feed an LLM, the model asks questions—one at a time—to extract the necessary information. 'The LLM can ask all the questions it needs to create appropriate context, and I tell it other sources to consult,' Fowler explained in a recent blog post. The approach was inspired by Harper Reed, who insisted that the LLM ask only one question at a time—a rule Fowler found required frequent reminders.

Background

Traditional methods for generating context require humans to manually draft descriptions, guidelines, and references — a multi-page process that is time-consuming and error-prone. The interrogatory LLM flips this by having the AI drive the conversation, soliciting details such as how a feature should appear to users, which implementation guidelines apply, and which external systems to consult.
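The core loop is simple to sketch. Below is a minimal, illustrative Python example of how such an interview might be wired up; the chat-message format mirrors common LLM APIs, but the system prompt wording and the `build_messages` helper are assumptions for illustration, not code from Fowler's post. Note the reminder re-stated every turn, reflecting Fowler's observation that the one-question rule needs frequent reinforcement.

```python
# Sketch of an interrogatory-LLM conversation builder (hypothetical
# prompts; plug the resulting messages into whatever chat API you use).
SYSTEM_PROMPT = (
    "You are gathering context for a software feature. "
    "Interview the user to collect the details you need. "
    "Ask exactly ONE question per turn, then wait for the answer."
)

def build_messages(transcript):
    """Assemble the chat history from (question, answer) pairs,
    re-stating the one-question rule each turn, since the model
    tends to drift back into asking several questions at once."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for question, answer in transcript:
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": answer})
    messages.append({
        "role": "user",
        "content": "Reminder: ask only ONE question this turn.",
    })
    return messages
```

Each call to the model then receives the full transcript plus the standing reminder, and the model's reply becomes the next question in the interview.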

Source: martinfowler.com

The technique is not limited to creating new documents. It can also be used for review: an LLM can interview a human expert to verify the accuracy of an existing software specification, rather than requiring the expert to read through a poorly written document. 'People often find reviewing hard,' Fowler noted, adding that a conversation with an LLM may be more fruitful.
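The review variant only changes the opening instruction: the model reads the existing specification, then quizzes the expert about one claim at a time rather than asking them to proofread the whole document. A hedged sketch of that prompt construction (the exact wording is an assumption, not Fowler's):

```python
def review_prompt(spec_text):
    """Build the opening instruction for a spec-review interview.
    The model is given the draft spec, then interviews a human
    expert about one claim per turn to verify its accuracy."""
    return (
        "Here is a draft software specification:\n\n"
        f"{spec_text}\n\n"
        "Interview me to verify its accuracy. Pick ONE claim from "
        "the spec per turn, ask whether it is correct, and wait "
        "for my answer before moving on."
    )
```

The expert answers conversationally, and the model's follow-up questions surface the inaccuracies that a tired reviewer skimming the document would miss.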

What This Means

This shift could democratize knowledge capture for people who struggle with writing. 'Many folks find writing very hard,' Fowler said. 'Being interviewed by an LLM might be easier than writing a document from scratch.' The result may carry an 'AI-writing tang,' but Fowler argues that is preferable to rushed or nonexistent documentation.

The approach has broader applications beyond LLM use cases—any situation where expertise needs to be extracted and codified. For instance, an interrogatory LLM could build a document in one session, then additional LLMs could review it with other experts in subsequent sessions.
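Chaining sessions like that requires turning the first interview into an artifact the next session can read. A simple, assumed approach is to fold the Q&A transcript into a draft document; the Markdown layout below is an illustrative choice, not a prescribed format:

```python
def transcript_to_document(title, transcript):
    """Fold an interview transcript of (question, answer) pairs into
    a draft context document that later review sessions, whether
    human or LLM-driven, can start from."""
    lines = [f"# {title}", ""]
    for question, answer in transcript:
        lines.append(f"**Q:** {question}")
        lines.append(f"**A:** {answer}")
        lines.append("")
    return "\n".join(lines)
```

The resulting document then becomes the `spec_text` input to a review-style interview with a different expert.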

However, Fowler cautions that the technique requires careful prompting. 'You need to repeatedly remind the LLM to ask one question at a time,' he emphasized. Without that discipline, the model may overwhelm the human with batches of questions, reducing the effectiveness of the conversation.

Future Implications

As LLMs become more conversational, interrogatory techniques could become standard in software development, technical writing, and knowledge management. Combining multiple interrogatory LLMs—one to build context, others to review it—could create a self-improving documentation pipeline.

For now, early adopters are testing the method on internal projects. Fowler recommends starting with small, well-defined tasks. 'Give the LLM a document and ask it to interview an expert to check accuracy,' he suggested. The outcomes so far indicate that even imperfect AI-generated context is better than none at all.