coaxer -- Complete Reference (LLM-friendly)
Online: https://thekevinscott.github.io/coaxer/llms-full/
Full coaxer documentation on a single page, designed for consumption by language models.
What is coaxer?
Coaxer turns labeled examples into prompts. You label the behavior you want, coaxer compiles it into a prompt artifact, and you consume the rendered template as a string at runtime.
The library provides:
- Label folder format — one directory per record; record.json + sibling files for text/binary.
- coax CLI — reads the folder, optionally runs DSPy 3 + GEPA optimization, writes a prompt artifact.
- CoaxedPrompt — a str subclass that loads the compiled artifact and renders it at call time.
- AgentLM / OpenAILM — DSPy BaseLM backends for the optional compile-time optimizer (Claude via Agent SDK, or any OpenAI-compatible endpoint).
The prompt is a build artifact. Labeled examples are the source of truth.
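The whole loop, end to end (a minimal sketch; folder and field names are illustrative and match the examples below):
# 1. label examples under labels/repo-classification/<record>/record.json
# 2. compile: coax labels/repo-classification --out prompts/repo-classification
# 3. consume the artifact at runtime:
from coaxer import CoaxedPrompt

p = CoaxedPrompt("prompts/repo-classification")
print(p(readme="# demo", stars=1200))  # rendered prompt string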
Installation
Requirements: Python >= 3.14, DSPy >= 3.0. AgentLM additionally requires the Claude Code CLI installed and authenticated.
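Assuming the distribution is published under the import name (an assumption, not confirmed here):
pip install coaxer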
Label folder format
labels/<name>/
_schema.json # optional
0001/
record.json
readme.md # sibling file referenced from record.json
0002/
record.json
logo.png # binary is fine
record.json maps field names to values. When a value names a file that exists in the record folder, the file's contents are substituted at compile time (UTF-8 text if decodable, raw bytes otherwise). Other values pass through unchanged.
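For the tree above, 0001/record.json might look like this (an illustrative sketch; "readme.md" names the sibling file, so its contents are inlined at compile time, while the other values pass through):
{
  "readme": "readme.md",
  "stars": 1200,
  "output": "true"
}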
_schema.json is optional. It adds field descriptions, types, and enum values:
{
"inputs": {
"readme": {"desc": "Project README markdown"},
"stars": {"desc": "GitHub star count", "type": "int"}
},
"output": {
"desc": "Curated collection vs organic project",
"type": "enum",
"values": ["true", "false"]
}
}
Supported types: str, int, float, bool, bytes, enum (with values). Without _schema.json, types are inferred from the first record.
CLI
coax
- <labels-dir> — path to the label folder.
- --out — output folder (created if missing).
- --optimizer — none (default) emits a schema-derived template with no network; gepa runs DSPy 3 GEPA and requires an LLM credential.
- --output-name — name of the predicted output field in the rendered template (default: output).
The output folder is the prompt artifact — load it with CoaxedPrompt; don't reach into it directly.
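A typical invocation, using the folder names from the examples on this page (a sketch; only the flags documented above are used):
# schema-derived template, no network:
coax labels/repo-classification --out prompts/repo-classification
# with GEPA optimization (needs an LLM credential):
coax labels/repo-classification --out prompts/repo-classification --optimizer gepa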
CoaxedPrompt
from coaxer import CoaxedPrompt
p = CoaxedPrompt("prompts/repo-classification", role="classifier")
filled = p(readme=new_readme, stars=1200)
- CoaxedPrompt(path, **bound) — str subclass. Reads the template at construction. **bound sets default variables.
- str(p) — raw template.
- p(**vars) — render with merged variables; missing vars raise MissingVariableError. Call-time vars override bound defaults.
- p.fields — input variables the template expects (parsed from the template, cached).
- p.response_format — Pydantic model class for the compiled output schema. Cached after first access.
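The binding and override rules in a short sketch (path and values are illustrative):
from coaxer import CoaxedPrompt

p = CoaxedPrompt("prompts/repo-classification", stars=0)  # bind a default
print(p.fields)                         # e.g. ['readme', 'stars']
print(p(readme="# demo"))               # renders with the bound stars=0
print(p(readme="# demo", stars=1200))   # call-time value overrides the default
# p() would raise MissingVariableError, since readme is unbound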
Because CoaxedPrompt is a str, it drops into any API that accepts a string.
from openai import OpenAI
client = OpenAI()
resp = client.chat.completions.parse(
model="gpt-4o",
messages=[{"role": "user", "content": p(readme=..., stars=...)}],
response_format=p.response_format,
)
parsed = resp.choices[0].message.parsed
AgentLM
DSPy BaseLM subclass. Each forward() call spawns a Claude Code subprocess via claude_agent_sdk.query().
AgentLM(
model: str = "claude-agent-sdk",
model_type: str = "chat",
max_tokens: int = 4096,
**kwargs, # forwarded to ClaudeAgentOptions
)
Common ClaudeAgentOptions kwargs:
- tools: list — pass [] for structured-output tasks.
- allowed_tools: list[str], disallowed_tools: list[str].
- max_turns: int.
- env: dict[str, str] — subprocess environment.
Methods: forward, aforward, copy(**kwargs), inspect_history(n).
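A minimal wiring sketch for the compile-time optimizer (dspy.configure is the standard DSPy 3 entry point; the option values here are illustrative):
import dspy
from coaxer import AgentLM

lm = AgentLM(tools=[], max_turns=1)  # no tools: structured output only
dspy.configure(lm=lm)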
OpenAILM
DSPy BaseLM subclass that hits any OpenAI-compatible chat endpoint.
from coaxer import OpenAILM
lm = OpenAILM(model="llama3") # Ollama default
lm = OpenAILM(model="meta-llama/Llama-3-8B", base_url="http://localhost:8000/v1")
lm = OpenAILM(model="gpt-4o", base_url="https://api.openai.com/v1", api_key="sk-...")
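As with any DSPy BaseLM, the instance is directly callable (standard DSPy behavior; the prompt is illustrative):
completions = lm("Reply with OK.")  # returns a list of completion strings
print(completions[0])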