People who are deeply experienced with technology — software engineers, researchers, systems designers — often find that working with LLM agents is harder than they expected. Not because the technology is complicated, but because it demands a way of thinking that nothing in their prior experience has prepared them for. They misunderstand what an agent can see, over-specify tasks that need loose direction, or abdicate decisions they don't have the domain knowledge to evaluate. The problem isn't technical depth. It's the mental model.
When experienced technologists first encounter agent-based work, they typically fall into one of two patterns.
The chatbot model treats the agent as a search engine. You ask it a question, you get an answer. This leads to under-specified requests and disappointment. The person hasn't thought about what context the agent needs, what materials it should work from, or how the task should be structured. They're treating a composition process like a lookup.
The junior developer model treats the agent as an inexperienced employee who needs step-by-step instructions. This leads to brittle, over-specified prompts that the agent follows to the letter while missing the point entirely. The person is trying to script execution rather than describe outcomes.
Both of these are failures of perspective-taking by people who have the knowledge to do better. The chatbot user assumes too much shared context. The junior developer user specifies the how rather than the what. In both cases, the person isn't modelling what the task looks like from the agent's side.
There is a third pattern, but it comes from a different population. The vibe coding model is what happens when someone without deep domain knowledge asks a capable model to build something for them. They describe goals in natural language and accept the generated code without understanding the tradeoffs involved. It appears to work — sometimes it genuinely does — until it doesn't, and the person has no way to diagnose why. They aren't delegating; they're abdicating. The agent made a hundred small technical decisions and the person accepted all of them by default, not by judgement.
The vibe coding failure is instructive because it shows the problem in its starkest form. These users can't take the agent's perspective on the technical work because they don't have the knowledge to model what the agent is doing. They can't provide the right context because they don't know what context matters. But the underlying deficit — failing to account for the agent's information state — is the same one that trips up experienced technologists. The difference is that experienced people take the wrong perspective; non-technical people can't take one at all.
What makes someone good at agent-based delegation? It's perspective-taking: the ability to see the task from the agent's point of view. What's in its context window? What does it know, and what doesn't it? What does the task look like from inside the information state it actually has?
This is functionally the same cognitive operation as empathy. But acknowledging that makes people uncomfortable. There's a correct and important taboo against anthropomorphising AI systems. The model isn't conscious, doesn't have experiences, and treating it as if it does leads to confused thinking. But the taboo, while correct about the ontology, creates a practical problem: the skill you actually need—modelling another agent's information state—requires you to think in perspective-taking terms.
There's a useful distinction here between empathy (attributing inner experience) and perspective-taking (modelling information state). You don't need to believe the agent has feelings to ask: what does it have access to right now? What would I need to understand if I were starting fresh with only these documents and this prompt? That's an engineering question, not a sentimental one.
There's a second difficulty stacked on top. With conventional software, execution is traceable. Same input, same output. Engineers develop mental models by simulating state machines—thinking like the computer. This works because computers are deterministic.
Language models are not. The same input can produce different outputs. Small changes in context cause large shifts in behaviour. The relationship between input and output isn't traceable through any sequence of steps you can follow. You can't "think like the computer" because there's no execution path to trace.
What people actually need is the intuition of a good experimentalist—a feel for the space of likely outcomes, which variables matter, and how to set up conditions that make the desired outcome more probable. Comfort with distributions rather than deterministic paths. People trained primarily in deterministic systems often find this mode of thinking unfamiliar. Probabilistic reasoning comes more naturally to statisticians and experimental scientists, but it's learnable—and it's essential for working effectively with agents.
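The experimentalist's stance can be made concrete with a toy simulation. Nothing below calls a real model; `flaky_model` is a hypothetical stand-in whose only purpose is to show that the useful question concerns distributions of outcomes, not single runs:

```python
import random
from collections import Counter

# Toy stand-in for a stochastic generator: the same prompt yields
# different outputs, with probabilities shifted by a "temperature" knob.
# Purely illustrative; real model behaviour is far richer than this.
def flaky_model(prompt: str, temperature: float, rng: random.Random) -> str:
    outcomes = ["on-target", "partial", "off-target"]
    weights = [1.0, temperature, temperature ** 2]  # low temp concentrates mass
    return rng.choices(outcomes, weights=weights)[0]

rng = random.Random(0)  # seeded so the experiment is repeatable
# The experimentalist asks not "what will it output?" but "what does the
# distribution of outputs look like, and which knob shifts it?"
counts_low = Counter(flaky_model("same prompt", 0.2, rng) for _ in range(1000))
counts_high = Counter(flaky_model("same prompt", 1.0, rng) for _ in range(1000))
print("temperature 0.2:", counts_low)
print("temperature 1.0:", counts_high)
```

Run it and the low-temperature condition lands on-target far more often—the same prompt, a different distribution. That shift, not any single output, is the thing worth reasoning about.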
So you have two uncommon skills stacked: perspective-taking with a non-human process, and reasoning about stochastic behaviour. The combination feels like wizardry to people who are otherwise highly capable.
But there's a way through. People already reason about one class of stochastic system intuitively: organisations made of people. You give a team a brief and you don't know exactly what you'll get back. You learn which teams need tight direction and which work better with loose framing. You develop a feel for which context matters and what can be left implicit. Managing a team is probabilistic reasoning in practice, though nobody calls it that.
This isn't just a teaching metaphor—it's a design principle. Systems structured around agent-based composition tend to mirror human organisations: defined roles, separation of concerns, adversarial review, resource budgets, audit trails. Quality comes from the structure, not from telling any individual agent to be careful.
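As a sketch of that design principle, here is a minimal author/reviewer loop with a revision budget. `call_model` is a hypothetical stub, not any real API; the point is that the approval gate and the budget live in the structure, not in an instruction to "be careful":

```python
# Hypothetical stand-in for a real agent invocation, with canned
# behaviour so the sketch runs: the author omits error handling on the
# first pass; the reviewer insists on it.
def call_model(role: str, prompt: str) -> str:
    if role == "reviewer":
        return "approve" if "error handling" in prompt else "add error handling"
    if "error handling" in prompt:
        return "parser module with error handling"
    return "parser module"

def review_loop(brief: str, max_revisions: int = 3) -> tuple[str, int]:
    """Author drafts, reviewer critiques, repeat within a fixed budget.

    Quality control is structural: no draft ships without passing an
    adversarial reviewer, and the budget bounds the cost of the loop.
    """
    draft = call_model("author", brief)
    for attempt in range(max_revisions):
        verdict = call_model("reviewer", draft)
        if verdict == "approve":
            return draft, attempt
        draft = call_model("author", f"{brief}\n\nReviewer feedback: {verdict}")
    return draft, max_revisions

result, revisions = review_loop("Write a parser module.")
print(result, "after", revisions, "revision(s)")
```

Notice that neither role is asked to be diligent; the separation of author from reviewer, plus the budget, is what produces the quality.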
This suggests that learning to work with agents may be less about acquiring a new technical skill and more about transferring an existing one—managing collaborative work—into a new domain. The challenge is that the surface presentation obscures the transfer. A chat interface looks nothing like management. It looks like typing into a search box.
Chat interfaces import all the conventions of human conversation: shared context, implicature, the expectation that the other party will fill in gaps from common knowledge. But what's actually happening is closer to composing an input for a batch process. The agent has exactly the context you gave it, plus its system prompt. Nothing else. Every time.
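The batch-process framing can be made concrete. The sketch below assumes a chat-style API shaped like the common messages-list convention (role and content strings); `compose_request` is illustrative, not any particular vendor's client:

```python
# Each call to a model is a self-contained batch input: the model sees
# exactly what this function returns and nothing else. "Memory" across
# turns exists only because the caller re-sends the earlier turns.
def compose_request(system_prompt: str, documents: list[str],
                    history: list[dict], user_message: str) -> list[dict]:
    messages = [{"role": "system", "content": system_prompt}]
    for doc in documents:
        messages.append({"role": "user", "content": "Reference material:\n" + doc})
    messages.extend(history)  # prior turns must be replayed explicitly
    messages.append({"role": "user", "content": user_message})
    return messages

# A "follow-up" question is really a brand-new, larger input.
turn_one = compose_request("You review code.", ["<diff contents>"], [],
                           "Review this diff.")
turn_two = compose_request("You review code.", ["<diff contents>"],
                           [{"role": "user", "content": "Review this diff."},
                            {"role": "assistant", "content": "<first review>"}],
                           "Now focus on error handling.")
print(len(turn_one), "messages, then", len(turn_two), "messages")
```

Seen this way, a conversation is a sequence of ever-larger composed inputs, which is exactly why thinking about what the process receives matters more than what you intend.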
An older interaction model—the teletype, the command line—is in some ways more honest about this. When you sit at a terminal, you don't assume the process on the other end knows what you're thinking. You compose a complete, self-contained input. You think about what the process needs to receive. The chat UI obscures this by making interaction feel conversational, when the underlying mechanics are anything but.
People who internalise the teletype framing—"I am composing an input to be processed" rather than "I am having a conversation"—tend to write better prompts almost immediately. Not because the framing is more technically accurate in every detail, but because it puts them in the right cognitive mode: thinking about what the process receives rather than what they intend.
The deepest issue is simpler than it appears. Vibe coding fails because the person can't articulate intent at the level where decisions are made. The fix isn't acquiring a technical skill—it's being able to communicate technical intent clearly enough for the agent to work from it.
In practice this means the engineering work starts before the agent does. Design discussions, technical notes, documentation of decisions and rationale—these come first. The agents work from those documents rather than trying to reverse-engineer intent from vague requests or from source code alone. Agent-ready documentation needs to specify not how to do the work but what good work looks like: the goals, the constraints, the tradeoffs that matter, the rationale for the approach. It's the context that allows an agent to make sound decisions in the spaces you haven't specified.
Here again the organisational metaphor works as a design principle, not just a teaching aid. In a well-run organisation, you don't hand someone a task and hope they figure out what you meant. You provide a brief, reference materials, and enough context about the goals and constraints that they can exercise judgement. The people who are best at working with agents are not necessarily the best programmers. They're the best communicators of technical intent.