Tools

Tools are base-category morphisms — single-shot, one message in, one message out. They use bare types (A → B), not stream types (!A → !B).

Lift and lower

The relationship between tools and stream processes is an adjunction between the base category and the Kleisli category:

  • lift = map(expr) : (A → B) → (!A → !B) — the functorial action that embeds a base morphism into the Kleisli category (apply the function to each message in the stream)
  • lower = tool { process: f } : (!A → !B) → (A → B) — project a Kleisli morphism back to the base category via η_A ; f ; ε_B, where η is the adjunction unit (inject a single value into a stream) and ε is the counit (extract the single result)

lower(lift(f)) = f — lowering a lifted function recovers the original. But lift(lower(g)) ≠ g — lifting a lowered stream process does not recover stream behaviour (state, multi-message context). This is not an equivalence; it's an adjunction.
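The round trip can be sketched with the forms defined later in this document (map and tool { process }). Here the same map expression is first bound at stream type (the lift), then lowered back to a bare function that, by the identity above, behaves exactly like the bare map(score * 2):

```
let doubled : !int -> !int = map(score * 2)
let double : int -> int = tool { process: doubled }
```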

Totality

Not every stream morphism can be lowered. The discriminant is totality: a base-category morphism must produce exactly one output for each input.

Morphism   Bare A → B   Kleisli !A → !B   Lowerable   Reason
id         yes          yes               yes         identity, total
map(e)     yes          yes               yes         pure function, total
copy       yes          yes               yes         diagonal A → (A,A), total
discard    yes          yes               yes         terminal A → Unit, total
tool       yes          no                n/a         already base category
agent      no           yes               yes         subprocess, one output per input
plumb      no           yes               yes         composite subprocess
filter     no           yes               no          not total (may produce 0 outputs)
merge      no           yes               no          requires interleaving
barrier    no           yes               no          requires synchronisation
project    no           yes               no          requires product stream

Two forms of tool

1. Lowered process. tool { process: name } lowers a stream morphism:

let searcher : !Query -> !Results = agent {
  prompt: "Search for relevant results."
}

let search : Query -> Results = tool { process: searcher }

The process config key names the stream morphism to lower. Any lowerable process works — agents, plumbs, or structural morphisms. Non-lowerable processes (filter, merge, barrier) are rejected at load time: cannot lower filter to a tool (not total).

Type compatibility is checked at load time: strip_stream(process.input) must equal tool.input, and likewise for output.
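The shape of this check can be sketched as follows. The string-based type representation and the function names are illustrative only; the real checker presumably operates on parsed type ASTs:

```python
def strip_stream(t: str) -> str:
    """Strip the stream constructor: "!A" -> "A"; bare types pass through."""
    return t[1:] if t.startswith("!") else t

def check_tool_types(process_in: str, process_out: str,
                     tool_in: str, tool_out: str) -> None:
    # Load-time check: the tool's bare types must match the process's
    # stream types with the ! stripped, on both input and output.
    if strip_stream(process_in) != tool_in or strip_stream(process_out) != tool_out:
        raise TypeError(f"cannot lower {process_in} -> {process_out} "
                        f"to tool {tool_in} -> {tool_out}")

check_tool_types("!Query", "!Results", "Query", "Results")  # passes
```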

2. Annotated bare morphism. The @tool annotation on a bare-typed binding:

@tool true
@description "Double the input score"
let double : int -> int = map(score * 2)

@tool true
let echo : string -> string = id

The @tool annotation is the programmer's declaration of intent. Not all bare morphisms are tools — a bare map used internally should not be exposed to agents. The annotation makes tool status explicit.

Tool invocation

Tools are invoked by agents during LLM conversation, not wired into pipelines. An agent declares its available tools in its config:

let writer : !string -> !string = agent {
  prompt: "Write about the topic."
  tools: [search, double]
}

When the LLM decides to call a tool, the agent sends a request over the tool dispatch channel (fd 4), the runtime executes the tool and returns the result on fd 5, and the agent feeds the result back to the LLM.

Inline tools (id, map, copy, discard) execute directly — no subprocess. Process-backed tools (agent, plumb) fork a subprocess for isolated execution.

Config options (for tool { ... } form):

Key           Type     Description
process       ident    Name of the stream morphism to lower
description   string   Tool description for the LLM

Bare-typed bindings cannot appear in wiring chains or spawn statements: 'search' has bare types (base category); only stream morphisms can be wired into pipelines.

Tool dispatch protocol

When an agent has tools, the plumb runtime creates two internal pipes and routes them through the multiplexed I/O envelope protocol:

  • __port: "tool_req" (agent → runtime) — tool request channel
  • __port: "tool_resp" (runtime → agent) — tool response channel

Tool availability is determined by the presence of tool bindings in the agent's spec (not by an environment variable).

The protocol is synchronous request-response, one JSON line per message:

Request (agent → runtime via tool_req):

{"id": "toolu_abc123", "name": "search", "input": {"text": "climate change"}}

Response (runtime → agent via tool_resp):

{"id": "toolu_abc123", "content": "[{\"title\":\"...\",\"url\":\"...\"}]", "is_error": false}

The id field is the LLM's tool_use ID, passed through for correlation. The content field is a JSON string (the serialised result). On error, is_error is true and content contains the error message.
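A sketch of both ends of this exchange; the function names are illustrative, not the runtime's API:

```python
import json

def encode_request(tool_use_id: str, name: str, input_obj: dict) -> str:
    # One JSON object per line, newline-terminated.
    return json.dumps({"id": tool_use_id, "name": name, "input": input_obj}) + "\n"

def decode_response(line: str):
    msg = json.loads(line)
    if msg["is_error"]:
        raise RuntimeError(msg["content"])
    # content is itself a JSON string: decode once more to get the result value
    return json.loads(msg["content"])

req = encode_request("toolu_abc123", "search", {"text": "climate change"})
resp = '{"id": "toolu_abc123", "content": "[{\\"title\\":\\"...\\"}]", "is_error": false}'
results = decode_response(resp)  # -> [{"title": "..."}]
```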

The runtime validates tool inputs and outputs against the declared types. Type validation failures return is_error: true — the LLM decides what to do.

On error, content contains a structured error JSON object. See errors.md for the wire format and error codes.

MCP tools

Agents can use tools from external MCP servers. The mcp config key takes an array of server configurations — either inline objects or references to named value bindings.

Reusable config bindings (stdio):

let ldc = { command: "plumb-mcp", args: ["--docroot", "."] }
let github = { command: "npx", args: ["-y", "@anthropic/mcp-github"], tools: ["search_repositories"] }

let composer : !Draft -> !Review = agent {
  model: "claude-sonnet-4-5",
  tools: [read, write],
  mcp: [ldc, github]
}

let critic : !Review -> !Verdict = agent {
  model: "claude-haiku-4-5",
  mcp: [ldc]
}

HTTP transport with bearer tokens:

let ldc = { url: "https://leithdocs.com/ldc/mcp", token_env: "LDC_TOKEN" }

let writer : !string -> !string = agent {
  mcp: [ldc]
}

Value bindings (let ldc = { ... }) are resolved at parse time — the parser substitutes Ident references in config arrays with their bound values. This avoids repetition when multiple agents share the same MCP servers.
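The substitution step can be sketched as follows, with identifier references modelled as strings and inline objects as dicts (a simplification of the parser's actual representation):

```python
def resolve_mcp_refs(config_array: list, bindings: dict) -> list:
    # Parse-time substitution: replace Ident references with their bound
    # value objects; inline config objects pass through untouched.
    resolved = []
    for entry in config_array:
        if isinstance(entry, str):  # an Ident reference like `ldc`
            if entry not in bindings:
                raise NameError(f"unbound value binding: {entry}")
            resolved.append(bindings[entry])
        else:                       # an inline config object
            resolved.append(entry)
    return resolved
```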

MCP server config keys:

Key         Type              Default   Description
command     string                      Subprocess command. Selects stdio transport. Mutually exclusive with url.
url         string                      HTTP(S) endpoint. Selects HTTP transport. Mutually exclusive with command.
args        [string]          []        Command-line arguments (stdio only)
env         {string: string}  {}        Extra environment variables (stdio only)
token       string                      Bearer token (inline). HTTP only.
token_env   string                      Env var name containing the bearer token. HTTP only.
tools       [string]                    Tool whitelist. If present, a missing tool at spawn is an error.
prefix      string            auto      Tool name prefix. Default: derived from serverInfo.name, the command basename, or the URL hostname.

Each entry requires exactly one of command or url — providing both is an error, providing neither is an error.
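The transport-selection rule can be sketched as a small validator (the function name and error strings are illustrative):

```python
def select_transport(entry: dict) -> str:
    # Exactly one of 'command' (stdio) or 'url' (HTTP) must be present.
    has_command = "command" in entry
    has_url = "url" in entry
    if has_command and has_url:
        raise ValueError("MCP server config: 'command' and 'url' are mutually exclusive")
    if not (has_command or has_url):
        raise ValueError("MCP server config requires one of 'command' or 'url'")
    return "stdio" if has_command else "http"
```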

Tool naming. MCP tool names are colon-prefixed: ldc:search, github:create_issue. The prefix defaults to the binding name (e.g. let ldc = { ... } gives prefix ldc). When no binding name is available, it falls back to serverInfo.name or the command basename. Override with prefix: "docs".

Dots are reserved for the module system (Fs.read). MCP tool names use colons, which are unambiguous — split on the first : to recover prefix and raw name.

Provider rewriting. Anthropic rejects : in tool names. The Anthropic client rewrites ldc:search to ldc__search on the wire and reverses on response. This is transparent to the pipeline — the canonical form is always prefix:name.
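Both the colon split and the Anthropic wire rewriting can be sketched as below. The reverse mapping assumes a wire name contains at most one __ (which holds when only the prefix separator is rewritten); the function names are illustrative:

```python
def split_name(name: str):
    # MCP tools are prefix:raw; plumbing tool names have no colon.
    prefix, sep, raw = name.partition(":")
    return (prefix, raw) if sep else (None, name)

def to_anthropic_wire(name: str) -> str:
    # Anthropic rejects ':' in tool names: ldc:search -> ldc__search.
    return name.replace(":", "__", 1)

def from_anthropic_wire(name: str) -> str:
    # Reverse the first __, restoring the canonical prefix:name form.
    return name.replace("__", ":", 1)
```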

Dispatch priority. When an agent has both plumbing tools and MCP tools, plumbing bindings are checked first. If a plumbing tool and an MCP tool share the same name, the plumbing tool wins (with a warning at spawn time).

Lifecycle. MCP connections are per-agent-spawn. Each agent instance gets its own MCP server connections with independent state. Connections are established when the agent spawns and torn down when the tool dispatch loop exits (agent EOF). Stdio server processes are registered for cleanup alongside child process PIDs; HTTP connections send a DELETE with the session ID on shutdown.

Transport. Two transports are supported, both implementing JSON-RPC 2.0 over the MCP 2025-03-26 protocol:

  • Stdio — spawns a subprocess, communicates over pipes. Selected by command key.
  • Streamable HTTP — connects to an HTTP(S) endpoint. Selected by url key. Handles both application/json and text/event-stream (SSE) response types per the MCP spec. Tracks Mcp-Session-Id automatically. The HTTP client runs in a background thread with its own Eio scheduler — see networking.md.

Reconnection policy. No automatic reconnection for either transport. If the connection dies (server crash, HTTP error, timeout), it is marked dead and all subsequent call_tool requests return an error immediately. The LLM decides whether to retry or fall back.

Per-tool MCP declarations

Individual tools can be backed by MCP servers using tool { mcp: ..., name: "..." }:

let search : Query -> Results = tool { mcp: "https://leithdocs.com/ldc/mcp", name: "search" }

Or with auth via a reusable config:

let ldc = { url: "https://leithdocs.com/ldc/mcp", token_env: "LDC_TOKEN" }
let search : Query -> Results = tool { mcp: ldc, name: "search" }

The mcp key in tool { } accepts a string URL (HTTP, no auth), an object (full server config), or a reference to a value binding. The name key specifies which MCP tool to call on that server.

Per-tool MCP connections are cached by normalised URL or command string within a single tool dispatch loop — two tools pointing at the same server share one connection.
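The caching rule can be sketched as follows; connect is a stand-in for the real connection setup, and trailing-slash stripping is an assumed normalisation:

```python
_cache: dict = {}

def connection_key(server: dict) -> str:
    # Normalise: HTTP servers key on the URL, stdio servers on the full command line.
    if "url" in server:
        return server["url"].rstrip("/")
    return " ".join([server["command"], *server.get("args", [])])

def get_connection(server: dict, connect):
    key = connection_key(server)
    if key not in _cache:
        _cache[key] = connect(server)  # first tool to reach this server connects
    return _cache[key]                 # later tools share the same connection
```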

Schema transport. MCP tool schemas are written to a temporary JSON file and passed to the agent binary via the PLUMB_MCP_TOOLS environment variable. The agent binary reads this file at startup and merges MCP schemas with plumbing tool schemas. MCP schemas use input_schema (snake_case, Anthropic format) and are already object-typed — they bypass the non-record input wrapping that plumbing tool schemas require.

Error handling:

Failure                      Response
Server fails to spawn        Warning, skip (error if tools are whitelisted)
Handshake timeout (30s)      Warning, skip (error if tools are whitelisted)
Whitelisted tool not found   Error: the pipeline author explicitly required it
Server dies mid-session      call_tool returns an error, connection marked dead
Tool call timeout (60s)      Returns an error, connection marked dead
isError: true from server    Passed through as (content, true)

Inline execution

The structural morphisms (id, copy, merge, discard, map, filter, barrier) execute inline as forwarding threads within the plumb process. They do not spawn subprocesses, synthesise spec files, or require binary resolution. Only agents and nested plumbs spawn subprocesses — they need isolation for API credentials, file permissions, and failure domains.

Tools are not spawned as pipeline nodes. They are executed by the plumb runtime on demand, in response to requests from agents over the tool dispatch channel. Inline tools (id, map, copy, discard) execute directly within the dispatch thread. Process-backed tools (agent, plumb) fork a subprocess per invocation.