We have been building a small language for describing how AI agents work together. It is called plumbing, in the tradition of Plan 9's plumber and Solaris's STREAMS architecture — wiring processes together through typed channels. The command is plumb. The file extension is .plumb.
The language is not publicly available yet. This is a sketch of what we are working on and why.
Every agent framework in production today wires agents together ad hoc. An orchestrator calls an agent, gets a result, decides what to do next, calls another agent. The topology is implicit in the control flow. There are no types on the boundaries between agents. There is no way to verify, before spending money on API calls, that the pipeline is well-formed.
This works because the models are smart enough to muddle through. But it does not compose. You cannot take two working pipelines and wire them together with any confidence that the result works. You cannot look at a pipeline and know what types flow through it. You cannot budget-scope a sub-pipeline independently of its parent. You cannot have an agent design a pipeline and check it before running it.
A pipeline is a composition of typed processes. Here is a document review pipeline:
type Sources = { topic: string, instructions: string, documents?: [string] }
type Draft = { text: string, title: string }
type Verdict = { substantiated: bool, explanation: string }
type Review = { score: int, review: string }
let composer : !Sources → !Draft = agent { model: "claude-sonnet-4-5" }
let corroborator : !Draft → !Verdict = agent { model: "claude-haiku-4-5" }
let critic : !Draft → !Review = agent { model: "claude-sonnet-4-5" }
Three agents, each with a typed signature. The ! means "stream of" — a channel that carries a sequence of messages, not a single value. The composer consumes sources and produces drafts. The corroborator consumes drafts and produces verdicts. The critic consumes drafts and produces reviews.
Wiring them together:
input ; composer ; corroborator
The ; is sequential composition: the output of one feeds the input of the next. This is nose-to-tail wiring, like Unix pipes. composer ; corroborator means "the composer's output stream feeds the corroborator's input stream." The type checker verifies that the types match at every connection.
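As a sketch of what static checking buys (the surface syntax above is pre-release and may change), consider reversing two stages:

```
input ; corroborator ; composer
```

This is rejected at the first connection: input produces a !Sources stream but the corroborator expects !Draft, so the pipeline never runs and no API call is ever made. The error wording here is ours, not the tool's.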
Side-by-side independence uses ⊗ (tensor product):
(composer ⊗ critic)
Two processes run in parallel with no shared channels. Their types combine componentwise: if composer : A → B and critic : C → D, then composer ⊗ critic : A ⊗ C → B ⊗ D.
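Instantiated with the agents defined earlier (a pre-release sketch; the exact spelling of tensor types in annotations is a guess), the rule reads:

```
let both : !Sources ⊗ !Draft → !Draft ⊗ !Review = composer ⊗ critic
```

Messages on the left wire reach only the composer, messages on the right only the critic; the two sides never interact.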
A pipeline has type A → B. A single agent has type A → B. A pipeline is a process. When a pipeline appears as a node inside another pipeline, the outer pipeline sees only the type signature. Whether the node is one LLM call or an entire organisation of agents with adversarial review is invisible at the boundary.
This is closure under composition. It means you can build a complex pipeline, give it a name and a type, and use it exactly as you would use a single agent. The person (or agent) wiring the outer pipeline does not need to know what is inside.
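Concretely, in the pre-release sketch syntax, a named pipeline drops into a larger wiring exactly where an agent would:

```
let fact_check : !Sources → !Verdict = composer ; corroborator

input ; fact_check
```

From the outside, fact_check is indistinguishable from a single agent with the signature !Sources → !Verdict. It could later be replaced by an entire reviewed sub-organisation without touching the line that uses it.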
Every object has an identity morphism (id : A → A — the wire that passes messages through unchanged). Beyond identity, the language has four structural morphisms that handle the plumbing:
- copy : A → A ⊗ A. Fan-out. Duplicates a stream to two consumers.
- discard : A → I. Consumes a stream, produces nothing.
- merge : A ⊗ A → A. Nondeterministic interleaving. Two input streams become one output stream. Provenance is preserved via routing metadata (topic tags), not type-level coproduct injection — if the consumer needs to discriminate, coproduct injection is a separate step before the merge point.
- barrier : A ⊗ B → (A, B). Synchronised pairing. Waits for a message from both inputs, then produces their product. Where merge interleaves, barrier synchronises.

Copy and discard form a comonoid (one-to-many and one-to-none). Merge interleaves two streams of the same type; barrier synchronises two streams of different types into their product. Together with composition (;) and tensor (⊗), these are sufficient to describe any agent topology — linear pipelines, fan-out to parallel workers, fan-in from multiple sources, review loops, observer patterns, scatter-gather, and A/B tests of alternative topologies.
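Putting the pieces together, here is one sketch of a fan-out/fan-in topology in the pre-release syntax (the morphism names follow the prose above; exact spellings are a guess):

```
composer ; copy ; (corroborator ⊗ critic) ; barrier
```

The composer's drafts are duplicated by copy, reviewed independently in parallel, and barrier pairs each verdict with its review, so the consumer sees one synchronised stream of (Verdict, Review) products. An observer pattern is the same shape with discard on one branch.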
The language is small — about 200 lines of grammar, including a module system — small enough that an LLM can write a valid program in a single tool call. That lets agents design their own pipelines at runtime.
An agent receives a complex task, reasons about what sub-processes it needs, writes a plumbing program, and submits it for type checking. The type checker is exposed as a tool. If the program is well-typed and within budget, the agent spawns it. If not, it gets back type errors, fixes them, and resubmits. No money is spent on execution until the program passes validation.
This is the key idea: topology is a first-class value. Agents do not just execute pipelines — they program them. The structure of an organisation of agents becomes something that agents can reason about, construct, and verify.
The dominant pattern in the industry right now is two-level delegation: a main agent spawns a sub-agent with the same tools and a sub-task. No types on the boundary. No budget scoping. No verification. Hope the model is smart enough.
What we have is arbitrarily deep, typed, budget-scoped, and composable. A monoidal category with four structural morphisms, two combining forms, and typed channels. Static checking means you can verify — before spending a penny — that types line up, budgets balance, and every path through the graph produces a valid result.
The ad hoc approach scales with model capability. Ours scales with composition. The difference matters most when your agents fail, when your budget is tight, or when you need to understand what will happen before it happens. That is always.
Leith Document Company, Edinburgh — March 2026