hello-multi-agent · the router skeleton
12-line router + 2 specialists. Drop-in shape for any internal NL-input service.
snap/multi-agent:hello · Repo: multi-agent-hello
$ git clone https://github.com/snap-dev/multi-agent-hello.git
docker-compose.yml
services:
  router:
    image: python:3.11-slim
    working_dir: /app
    volumes: ["./src:/app/src:ro", "./requirements.txt:/app/requirements.txt:ro"]
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY:?}
      ROUTER_MODEL: gpt-4o-mini
      CODER_MODEL: gpt-4o
      MAX_STEPS: "6"
    command: >-
      bash -c "pip install -q -r requirements.txt && python -m src.team"
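The compose file only wires environment variables into `python -m src.team`; the repo's actual `src/team.py` is not shown here, so the loop below is a hypothetical sketch of the router shape. The `classify` function stands in for an LLM call against `ROUTER_MODEL`, and the names `route` and `SPECIALISTS` are illustrative, not the repo's real API.

```python
import os

def classify(task: str) -> str:
    """Stub for the router LLM call (a real version would hit ROUTER_MODEL).

    Returns a verdict string that selects a specialist.
    """
    keywords = ("function", "bug", "script", "compile", "code")
    return "code" if any(k in task.lower() for k in keywords) else "general"

def coder(task: str) -> str:
    # A real specialist would call CODER_MODEL here.
    return f"FINAL: (coder answer for: {task})"

def generalist(task: str) -> str:
    return f"FINAL: (general answer for: {task})"

# Verdict -> specialist mapping; adding a specialist is one entry here.
SPECIALISTS = {"code": coder, "general": generalist}

def route(task: str) -> str:
    verdict = classify(task)
    print(f"[route] verdict={verdict}")
    return SPECIALISTS[verdict](task)

if __name__ == "__main__":
    print(route(os.environ.get("TASK", "Write a script that parses CSV files")))
```

Because `classify` is deterministic for a fixed prompt, re-running the same task always yields the same verdict, which is the idempotency the run output demonstrates.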
Run
~/multi-agent-hello · zsh
$ docker compose up --abort-on-container-exit
[route] verdict=code → [coder] FINAL: ...
What you'll observe
Container exits with code 0 within 30 seconds
Verdict line followed by specialist FINAL line
Total token usage logged in stderr
Router uses ROUTER_MODEL only; specialists use their respective env vars
Re-running the same task is idempotent: same verdict
Lift this to your work
Drop in front of any internal NL-input service: a Slack `/ask`, an internal helpdesk first responder, a CLI 'data vs eng question' splitter. Replace router prompt + specialists with your own — the compose shape stays.
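Adapting the skeleton is mostly an environment change. A hypothetical override for a helpdesk first responder might look like this (the `SUPPORT_MODEL` variable name is illustrative; only the values change, the service shape above is reused as-is):

```yaml
# docker-compose.override.yml (hypothetical adaptation)
services:
  router:
    environment:
      ROUTER_MODEL: gpt-4o-mini
      SUPPORT_MODEL: gpt-4o   # replaces CODER_MODEL for a helpdesk specialist
      MAX_STEPS: "4"
```

`docker compose up` merges the override automatically, so the base file never needs editing.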