NAME

plumb-agent - LLM conversation process for plumbing pipelines

SYNOPSIS

plumb-agent spec.plumb

DESCRIPTION

plumb-agent reads JSON Lines from standard input, runs one LLM conversation loop per line with the configured provider and model, validates the output against the declared type, and writes JSON Lines to standard output.

Each input line becomes a user message. The model's response is validated against the output type. If validation fails, the error is fed back to the model for retry (up to max_retries).

By default, conversation context accumulates: each input builds on the full history of prior exchanges. Set amnesiac: true to start a fresh conversation for each input line.
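The per-line loop can be sketched as follows. This is an illustrative sketch, not the actual implementation: call_model and validate stand in for the provider client and the spec's output-type validator, which plumb-agent supplies internally.

```python
import json

def run_line(line, call_model, validate, history, max_retries=3, amnesiac=False):
    """One iteration of the plumb-agent loop (sketch). call_model and
    validate are placeholders for the provider client and type validator."""
    if amnesiac:
        history.clear()                      # fresh conversation per input line
    history.append({"role": "user", "content": json.loads(line)})
    for attempt in range(max_retries + 1):
        reply = call_model(history)
        history.append({"role": "assistant", "content": reply})
        ok, err = validate(reply)
        if ok:
            return json.dumps(reply)         # one validated value per output line
        # feed the validation error back to the model for a retry
        history.append({"role": "user", "content": "validation error: %s" % err})
    raise RuntimeError("output failed validation after retries")
```

Note that the retry messages themselves become part of the accumulated history unless amnesiac is set.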

The spec file must contain exactly one agent binding. Multiple agent bindings are rejected with a diagnostic listing the names.

CONFIGURATION

The agent binding in the spec file accepts these configuration keys:

provider
LLM provider. "anthropic", "openai", or "eliza". Required — must be set in config or via the PLUMB_PROVIDER environment variable.
model
Model identifier (e.g. "claude-sonnet-4-5-20250514", "gpt-4o", "doctor"). Required — must be set in config or via the PLUMB_MODEL environment variable.
prompt
Inline system prompt (string).
prompts
Array of file paths. Each file is read at startup and its contents are wrapped in tags in the system prompt.
max_retries
Validation retry limit. Default: 3.
max_tokens
Maximum tokens per response (includes thinking + output). Default: 8192.
max_tool_calls
Maximum total tool calls the agent may make while processing a single input message. If the limit is exceeded, an error is raised after the current batch of tool calls completes. Default: no limit.
thinking_budget
Token budget for extended thinking. Must be less than max_tokens. Anthropic only — ignored for OpenAI.
endpoint
API endpoint override. Falls back to PLUMB_ENDPOINT env var.
amnesiac
When true, each input starts a fresh conversation. Default: false.
max_messages
Maximum number of input messages to process before exiting cleanly.
runtime_context
When true, inject a runtime context block into the system prompt before each API call. The block includes wall clock time, cumulative token counts, API call count, history length, pinned document count, and compaction cycle count. Not cached — updates every turn. Default: false.
tools
Array of tool binding names available to the agent.
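A hypothetical binding combining several of these keys (the key names are from the list above; the binding name, prompt file, and tool names are placeholders):

```
let summarize : !string -> !string = agent {
  provider: "anthropic",
  model: "claude-sonnet-4-5-20250514",
  prompts: ["system.md"],
  max_retries: 3,
  max_tokens: 8192,
  amnesiac: true,
  tools: ["search", "fetch"]
}
```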

PROVIDERS

Anthropic

Uses the Anthropic Messages API. Requires ANTHROPIC_API_KEY. Supports extended thinking via thinking_budget.

OpenAI

Uses the OpenAI Chat Completions API. Requires OPENAI_API_KEY. The thinking_budget configuration key is ignored.

ELIZA

A faithful recreation of Weizenbaum's 1966 DOCTOR script — keyword matching, decomposition rules, pronoun reflection, and reassembly templates. No network, no API key, no external dependencies.

The only supported model is "doctor".

ELIZA speaks the full agent protocol: multiplexed I/O, control channels (pause/resume, memory get/set, stop), and usage telemetry. Token counts are estimated from word counts.

Primary uses:

  • Integration testing of the agent protocol stack (mux/demux, control, pause/resume, memory, compaction) without burning API tokens.
  • Development — iterate on pipeline wiring and protocol logic with instant responses.
  • Demonstration — a self-contained example that runs anywhere.

Example spec:

let bot : !string -> !string = agent {
  provider: "eliza",
  model: "doctor"
}

Example invocation:

echo '"I feel sad today"' | plumb-agent eliza-bot.plumb

FILE DESCRIPTORS

fd 0 (stdin)
Data input — JSON Lines, one value per line.
fd 1 (stdout)
Data output — JSON Lines, one validated value per line.
fd 2 (stderr)
Errors and debug logs (structured JSON).

All logical channels beyond data I/O (telemetry, tool dispatch, control) are multiplexed onto stdin/stdout using tagged JSON envelopes: {"__port": "name", "msg": <json>}. See plumb(1).
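The envelope convention can be demonstrated in a few lines. This is a sketch of the framing described above, not plumb's own code; the helper names are invented for illustration.

```python
import json

def wrap(port, msg):
    """Wrap a logical-channel message in a tagged envelope for stdout."""
    return json.dumps({"__port": port, "msg": msg})

def unwrap(line):
    """Classify one stdin line: returns (port, msg) for envelopes,
    or ('data', value) for a plain data line."""
    value = json.loads(line)
    if isinstance(value, dict) and "__port" in value:
        return value["__port"], value["msg"]
    return "data", value
```

Any line that is not an object with a __port key is treated as ordinary data, so plain JSON Lines pipelines work unchanged.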

PORT NAMES

The plumbing runtime maps file descriptors to named ports for wiring:

Configuration                 Ports
No control, no telemetry      [input, output]
No control, telemetry         [input, output, telemetry]
Control, no telemetry         [input, ctrl_in, output, ctrl_out]
Control and telemetry         [input, ctrl_in, output, ctrl_out, telemetry]

ctrl_in and ctrl_out map to __port envelope names. In spawn statements:

spawn worker(input=ch, ctrl_in=ctl, output=out, ctrl_out=ctl_out)

The ctrl_out port is composable — it can be wired to downstream processes. When unwired, it is silently drained.

CONTROL CHANNEL

A background thread reads control messages from the internal ctrl pipe (fed by the stdin demux from __port: "ctrl_in" envelopes). The agent writes responses as __port: "ctrl_out" envelopes on stdout. Recognised fields:

set_temp (float | null)
Set or clear temperature override.
set_model (string | null)
Set or clear model override.
stop (bool)
Signal clean exit after the current message.
pause (bool)
When true, stop reading data from stdin. Control and telemetry remain live. The agent handles memory requests while paused.
resume (bool)
When true, resume reading data from stdin.
get_memory (bool)
When true, emit the current conversation history on the ctrl_out channel as: {"kind":"memory","messages":[...]}. Each message has role (string) and content (JSON).
set_memory (array)
Replace the conversation history with the provided messages. Each element must have role (string) and content (JSON). If any message is malformed, the entire payload is rejected. Emits acknowledgement on the ctrl_out channel: {"kind":"memory_set","old_messages":N,"new_messages":M}.

Changes to temperature and model take effect on the next API call, including retries and tool-use rounds within a single input. Sending null reverts to the spec default.

Memory operations are handled at message boundaries (between input lines). When paused, they are handled immediately. The set_memory operation modifies only the conversation history — the system prompt is never affected.
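Control messages are ordinary ctrl_in envelopes, one per line. A few illustrative payloads (a sketch of the fields listed above; the ctrl helper is invented for brevity):

```python
import json

def ctrl(fields):
    """Build one control-channel envelope as a JSON Lines string."""
    return json.dumps({"__port": "ctrl_in", "msg": fields})

pause    = ctrl({"pause": True})        # stop reading data from stdin
set_temp = ctrl({"set_temp": 0.2})      # override temperature on the next call
revert   = ctrl({"set_temp": None})     # null reverts to the spec default
snapshot = ctrl({"get_memory": True})   # request the conversation history
```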

ENVIRONMENT

PLUMB_PROVIDER
Default LLM provider when provider is not set in config. Supported values: "anthropic", "openai", "eliza".
PLUMB_MODEL
Default model when model is not set in config.
ANTHROPIC_API_KEY
Required when provider is "anthropic".
OPENAI_API_KEY
Required when provider is "openai".

The "eliza" provider requires no API key or environment variable.

PIPELINE_DEBUG
Set to 1 to enable debug logging. Conversation messages and API metadata are written as JSON Lines to stderr.
PLUMB_RESOURCES
Colon-separated list of directories to search when resolving bare prompt file names (e.g. "system.md", "grammar.peg"). Searched before the installed resource directories.
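The search order can be sketched as follows. This is an assumption-laden sketch of the behaviour described above (the function name is invented, and the exact resolution rules may differ):

```python
import os

def resolve_prompt(name, resource_path, installed_dirs):
    """Resolve a bare prompt file name: PLUMB_RESOURCES directories
    first, then the installed resource directories."""
    if os.path.sep in name:
        return name                       # explicit path: use as-is
    for d in resource_path.split(":") + installed_dirs:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            return candidate
    raise FileNotFoundError(name)
```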
PLUMB_MCP_TOOLS
Path to temp JSON file containing MCP tool schemas. Set by the plumb runtime when MCP servers are configured. Internal — not user-facing.

EXIT STATUS

0
Success (EOF on stdin or max_messages reached).
1
Data or runtime error.
2
Configuration error.

EXAMPLES

Run an agent with a simple spec:

echo '"hello"' | plumb-agent my-agent.plumb

Enable debug logging:

echo '"hello"' | PIPELINE_DEBUG=1 plumb-agent my-agent.plumb

SEE ALSO

plumb(1), plumb-chat(1), plumbing(5)