Watch: Complete SDK setup walkthrough—install, configure, and see real performance gains in 2 minutes.
Beta notice: The Atlas SDK runtime is in beta. APIs and configuration keys may evolve—check release notes before upgrading.
Prerequisites
Store credentials in `.env` to avoid shell history exposure. Atlas defaults to Anthropic (Claude Haiku 4.5 for student, Claude Sonnet 4.5 for teacher) with Gemini for rewards. See Configuration for alternatives.
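A minimal `.env` sketch. The variable names follow common provider conventions and are assumptions, not Atlas requirements; check your provider and adapter docs:

```bash
# .env is loaded by `atlas env init`; keep it out of version control.
# Variable names below are common provider conventions, not Atlas-specific.
ANTHROPIC_API_KEY=sk-ant-...   # student/teacher models (Claude)
GEMINI_API_KEY=...             # reward system judges (Gemini)
```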
Step 1 – Run the Quickstart Task

Working directory: The SDK installs globally via `pip install arc-atlas`. You can run `atlas` commands from any directory. Config files can live in your project root (for CLI autodiscovery) or in the ATLAS Core repository (for example configs).
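For reference, installation is a single command:

```bash
pip install arc-atlas
```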
The adaptive runtime probes capability and routes every task into the right lane before the dual-agent loop (student + verifying teacher) executes.
Option A – CLI Autodiscovery (recommended for new stacks)
- `atlas env init` scans for `@atlas.environment`/`@atlas.agent` decorators or factory functions, loads `.env`, writes `.atlas/discover.json`, `.atlas/generated_factories.py`, and `.atlas/generated_config.yaml`, and automatically sets up storage (integrating `atlas init` functionality).
- Agent Selection: `atlas env init` uses Claude Haiku 4.5 (`claude-haiku-4-5-20251001`) as an LLM-powered agent selector that analyzes your codebase and automatically detects the best agent integration points. This intelligent discovery helps bootstrap configuration for existing codebases.
- Learning Features: Few-shot prompting and playbook injection are enabled by default, so the system learns from past interactions immediately.
- `atlas run --config` loads the generated config, verifies module hashes, streams telemetry into `.atlas/runs/`, and injects learning playbooks when available.
- Need to exercise the full orchestrator? Point the CLI directly at a config file, e.g. `atlas run --config configs/examples/sdk_quickstart.yaml --task "..."`, to bypass discovery entirely (see the session sketch below).
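Put together, a typical autodiscovery session looks like this; both commands and paths come from the steps above, and the task string is a placeholder:

```bash
# 1. Discover agents/environments, generate .atlas/ artifacts, set up storage
atlas env init

# 2. Run against the generated config; telemetry streams into .atlas/runs/
atlas run --config .atlas/generated_config.yaml --task "Summarize this repo"
```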
Option B – Python API (direct invocation)
If you want to use pre-built example configs, point `atlas.core.run` at a file such as `configs/examples/sdk_quickstart.yaml`. If you prefer to avoid creating a file, you can also invoke the runtime inline, as sketched below.
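A minimal sketch of the inline invocation, assuming `atlas.core.run` accepts the config path and task as keyword arguments (the keyword names and the task string are illustrative; `stream_progress` comes from the telemetry note below):

```python
import atlas.core

# config_path/task keyword names are assumptions to verify against the
# SDK reference; stream_progress toggles console telemetry (see below).
result = atlas.core.run(
    config_path="configs/examples/sdk_quickstart.yaml",
    task="Summarize the key findings in the quarterly report",
    stream_progress=True,
)
print(result)
```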
`atlas.runtime.telemetry.ConsoleTelemetryStreamer` auto-enables when stdout is a TTY; override with `stream_progress=True/False`.
Want to see adaptive learning in action? Check out the Adaptive Tool Use example, which shows a LangGraph agent learning efficient MCP tool usage across 25 tasks and demonstrates a 30-40% reduction in tool calls.
Bring Your Own Agent
Atlas wraps any agent that exposes an OpenAI-compatible API, HTTP endpoint, or Python callable. Three adapter types are available:
- OpenAI adapter – For GPT, or Claude via OpenAI-compatible APIs
- HTTP adapter – For microservices and serverless functions
- Python adapter – For LangGraph, local callables, and custom agents (see the sketch below)
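As a sketch of the Python adapter path: the `@atlas.agent` decorator that `atlas env init` scans for can mark a local callable. The signature below is an illustrative assumption, not a documented contract:

```python
import atlas

# `@atlas.agent` is the decorator `atlas env init` scans for; the
# (prompt: str) -> str shape is an illustrative assumption.
@atlas.agent
def echo_agent(prompt: str) -> str:
    # Replace this stub with a call into your own model, service, or
    # framework (e.g. a LangGraph graph).
    return f"Echo: {prompt}"
```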
What Just Happened?
Think of `atlas.core.run` as a project manager who never gets tired—now fronted by an adaptive controller (a conceptual sketch follows after this list):
- Triage & probe – a triage adapter builds context, the capability probe scores confidence, and the runtime picks a lane.
- Configure – the YAML tells the orchestrator which agent to call and how the dual-agent reasoning loop (student + verifying teacher) should behave.
- Plan – the Student drafts a step-by-step approach when a stepwise lane is chosen; in single-shot lanes the plan collapses to one step.
- Review – the Teacher approves or tweaks the plan (or just inspects the final answer in `paired` mode).
- Execute – each step runs with lane-specific guidance, validation, and retries.
- Evaluate – the Reward System scores the work, deciding whether to reuse guidance and how to update persona memories.
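For intuition, here is a self-contained conceptual sketch of those six stages; every name in it is an illustrative stand-in, not a real SDK API:

```python
# Conceptual sketch of the six stages above -- stand-ins, not SDK APIs.
from dataclasses import dataclass

@dataclass
class Step:
    description: str

def probe_confidence(task: str) -> float:
    """Stand-in for the capability probe's confidence score."""
    return 0.4

def adaptive_run(task: str) -> list[str]:
    # 1. Triage & probe: the confidence score picks a lane.
    lane = "single_shot" if probe_confidence(task) > 0.8 else "stepwise"
    # 2.-3. Configure + Plan: stepwise lanes get a multi-step plan;
    # single-shot lanes collapse the plan to one step.
    if lane == "single_shot":
        plan = [Step(task)]
    else:
        plan = [Step(f"{task} (part {i})") for i in (1, 2)]
    # 4. Review: the teacher would approve or tweak the plan here.
    # 5. Execute: each step runs with lane-specific guidance and retries.
    results = [f"done: {step.description}" for step in plan]
    # 6. Evaluate: the reward system scores results and updates memories.
    return results

print(adaptive_run("Summarize the quarterly report"))
```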
Configuration Breakdown
Key sections in `sdk_quickstart.yaml` (sketched below):
- `agent`: Adapter settings (litellm/http/python) and model choice
- `teacher`: Verification model, typically stronger than the student
- `rim`: Reward system judges (Gemini 2.5 Flash/Pro by default)
- `adaptive_teaching.probe`: Capability assessment (xAI Grok-2-mini)
- `storage`: Optional Postgres persistence
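A minimal sketch of how those sections fit together. The top-level keys come from the breakdown above; the field names and values inside each section are assumptions, so defer to `configs/examples/sdk_quickstart.yaml` itself:

```yaml
# Illustrative shape only: top-level keys match the breakdown above,
# but the fields and values inside each section are assumptions.
agent:
  adapter: litellm                  # litellm / http / python
  model: claude-haiku-4-5-20251001  # student model
teacher:
  model: claude-sonnet-4-5          # typically stronger than the student
rim:
  judges: [gemini-2.5-flash, gemini-2.5-pro]
adaptive_teaching:
  probe:
    model: grok-2-mini              # capability assessment
storage:
  enabled: false                    # optional Postgres persistence
```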
Troubleshooting Checklist
- Missing API key – ensure `OPENAI_API_KEY` (or the Azure equivalents) is exported in the same shell.
- Time spent downloading dependencies – editable installs pull in `litellm`, `httpx`, and friends on the first run; subsequent runs are instant.
- Model limits – bump `max_output_tokens` in the config if your summaries get truncated.