
This guide provides the fastest path to running the Atlas SDK. Install the packaged runtime, point it at your agent, and execute your first task in a few commands—all while the adaptive runtime decides how much supervision each request needs.
Beta notice: The Atlas SDK runtime is in beta. APIs and configuration keys may evolve—check release notes before upgrading.

Prerequisites

Install the SDK:
python -m pip install --upgrade arc-atlas
Set your API keys (see Installation for details):
export ANTHROPIC_API_KEY="sk-ant-your-key"
export GEMINI_API_KEY="your-gemini-key"  # Optional for rewards
Store credentials in .env to avoid shell history exposure. Atlas defaults to Anthropic (Claude Haiku 4.5 for student, Claude Sonnet 4.5 for teacher) with Gemini for rewards. See Configuration for alternatives.
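If you keep the keys in a .env file, you can load them yourself before invoking Atlas from Python. A minimal sketch, assuming the python-dotenv package is installed (the atlas CLI loads .env on its own during atlas env init):

# Load API keys from a local .env file instead of exporting them in the shell.
# Assumes python-dotenv is installed: python -m pip install python-dotenv
import os
from dotenv import load_dotenv

load_dotenv()  # reads ANTHROPIC_API_KEY / GEMINI_API_KEY from .env into the environment
if not os.getenv("ANTHROPIC_API_KEY"):
    raise SystemExit("ANTHROPIC_API_KEY is not set")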

Step 1 – Run the Quickstart Task

Working directory: The SDK installs into your active Python environment via pip install arc-atlas, and you can run atlas commands from any directory. Config files can live in your project root (for CLI autodiscovery) or in the ATLAS Core repository (for example configs).
Atlas now ships with an autodiscovery CLI so you can validate your environment before touching Python.
Diagram: Atlas SDK adaptive runtime flow showing triage, probe, and lane routing.

The adaptive runtime probes capability and routes every task into the right lane before the dual-agent loop (student + verifying teacher) executes.

Option A – CLI Autodiscovery

pip install arc-atlas
atlas env init --task "Summarize the latest AI news"
atlas run --config .atlas/generated_config.yaml --task "Summarize the latest AI news"
  • atlas env init scans for @atlas.environment / @atlas.agent decorators or factory functions, loads .env, writes .atlas/discover.json, .atlas/generated_factories.py, and .atlas/generated_config.yaml, and automatically sets up storage, so a separate atlas init is no longer required.
  • Agent Selection: atlas env init uses Claude Haiku 4.5 (claude-haiku-4-5-20251001) as an LLM-powered agent selector to analyze your codebase and automatically detect the best agent integration points. This intelligent discovery helps bootstrap configuration for existing codebases.
  • Learning Features: Few-shot prompting and playbook injection are enabled by default, allowing the system to learn from past interactions immediately.
  • atlas run --config loads the generated config, verifies module hashes, streams telemetry into .atlas/runs/, and injects learning playbooks when available.
  • Need to exercise the full orchestrator? Run atlas run --config configs/examples/sdk_quickstart.yaml --task "..." to use a prebuilt example config and bypass discovery entirely.
Customizing agent discovery: Set ATLAS_DISCOVERY_MODEL to override the default Claude Haiku 4.5 model used for agent selection. Any Anthropic model is supported via the ANTHROPIC_API_KEY environment variable. Storage setup is now automatic—no need to run atlas init separately.
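If you prefer to stay in Python after discovery, the generated config works with the same atlas.core.run call shown in Option B. A minimal sketch, assuming atlas env init has already written .atlas/generated_config.yaml and your API keys are exported:

# Drive the run from Python instead of the CLI, reusing the generated config.
from atlas.core import run

result = run(
    task="Summarize the latest AI news",
    config_path=".atlas/generated_config.yaml",  # written by `atlas env init`
    stream_progress=True,
)
print(result.final_answer)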

Option B – Python API (direct invocation)

This option uses example configs from the Atlas Core repository. If you only installed the SDK (pip install arc-atlas), use Option A or create your own config file.
If you want to use pre-built example configs:
# Clone Atlas Core for example configs
git clone https://github.com/Arc-Computer/ATLAS.git
cd ATLAS
Then use the Python API with the example config:
from atlas.core import run

result = run(
    task="Summarize the latest AI news",
    config_path="configs/examples/sdk_quickstart.yaml",  # Path relative to ATLAS repo root
    stream_progress=True,
)
print(result.final_answer)
Run it inline if you prefer to avoid creating a file:
python -c "from atlas.core import run; result = run(task='Summarize the latest AI news', config_path='configs/examples/sdk_quickstart.yaml', stream_progress=True); print(result.final_answer)"
Expected output:
=== Atlas task started: Summarize the latest AI news (2025-01-11 10:30:45) ===
Plan ready (3 steps):
  1. Search for recent AI news articles
  2. Extract key points from top articles
  3. Synthesize findings into concise summary
Adaptive: mode=coach confidence=0.58
STEP 1: Search for recent AI news articles | actor=student | attempt=1 | validation=PASS (found relevant sources) | duration=1200.5ms
STEP 1: Search for recent AI news articles | actor=teacher | attempt=1 | guidance=Focus on authoritative sources
STEP 1: retry 1 | Reward score=0.82 | Judge scores: helpfulness:0.85, accuracy:0.80
STEP 2: Extract key points from top articles | actor=student | attempt=1 | validation=PASS (extracted main themes) | duration=850.3ms
STEP 2: retry 1 | Reward evaluation deferred to session-level judge
STEP 3: Synthesize findings into concise summary | actor=student | attempt=1 | validation=PASS (summary complete) | duration=950.7ms
STEP 3: retry 1 | Reward evaluation deferred to session-level judge
Final Answer:
  Recent AI developments include...
Summary | execution_mode=stepwise | total_runtime=15.2s | judge_calls=1 | adaptive_mode=coach | adaptive_confidence=0.58
  attempts: 1=1, 2=1, 3=1
  Reward score=0.85 (All steps completed successfully)
=== Atlas task completed in 15.2s ===
The console streamer shows the plan, adaptive lane selection, step-by-step execution with validation status, teacher guidance when provided, and reward scores. atlas.runtime.telemetry.ConsoleTelemetryStreamer auto-enables when stdout is a TTY; override with stream_progress=True/False.
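If you want to silence the streamer (for example in CI or a cron job, where stdout is not a TTY and the log is just noise), pass the flag explicitly. A minimal sketch using only the run() signature shown above:

# Force console streaming off and print just the final answer.
from atlas.core import run

result = run(
    task="Summarize the latest AI news",
    config_path="configs/examples/sdk_quickstart.yaml",
    stream_progress=False,  # suppress ConsoleTelemetryStreamer output
)
print(result.final_answer)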
Want to see adaptive learning in action? Check out the Adaptive Tool Use example showing a LangGraph agent learning efficient MCP tool usage across 25 tasks, demonstrating 30-40% reduction in tool calls.

Bring Your Own Agent

Atlas wraps any agent that exposes an OpenAI-compatible API, HTTP endpoint, or Python callable. Three adapter types are available:
  • OpenAI adapter - For GPT, Claude via OpenAI-compatible APIs
  • HTTP adapter - For microservices, serverless functions
  • Python adapter - For LangGraph, local callables, custom agents
See the Agent Adapters guide for complete configuration options and examples.
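For the Python adapter, the callable you expose can be an ordinary function. The sketch below is illustrative only; the exact registration, signature, and return contract Atlas expects are documented in the Agent Adapters guide, so treat the names here as assumptions:

# Illustrative stand-in for a local Python agent; not the SDK's required signature.
def summarize_agent(prompt: str) -> str:
    """Pretend agent: return a canned response for the given prompt."""
    return f"Summary for: {prompt[:60]}"

if __name__ == "__main__":
    print(summarize_agent("Summarize the latest AI news"))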

What Just Happened?

Think of atlas.core.run as a project manager who never gets tired—now fronted by an adaptive controller:
  • Triage & probe – a triage adapter builds context, the capability probe scores confidence, and the runtime picks a lane.
  • Configure – the YAML tells the orchestrator which agent to call and how the dual-agent reasoning loop (student + verifying teacher) should behave.
  • Plan – the Student drafts a step-by-step approach when a stepwise lane is chosen; in single-shot lanes the plan collapses to one step.
  • Review – the Teacher approves or tweaks the plan (or just inspects the final answer in paired mode).
  • Execute – each step runs with lane-specific guidance, validation, and retries.
  • Evaluate – the Reward System scores the work, deciding whether to reuse guidance and how to update persona memories.

Configuration Breakdown

Key sections in sdk_quickstart.yaml:
  • agent: Adapter settings (litellm/http/python) and model choice
  • teacher: Verification model, typically stronger than student
  • rim: Reward system judges (Gemini 2.5 Flash/Pro by default)
  • adaptive_teaching.probe: Capability assessment (xAI Grok-2-mini)
  • storage: Optional Postgres persistence
See Configuration Reference for complete details and preset templates.
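Before editing the quickstart config, it can help to confirm which of these sections it actually defines. A minimal sketch, assuming PyYAML is installed and you have an ATLAS Core checkout containing configs/examples/sdk_quickstart.yaml:

# List which top-level sections the quickstart config sets.
# Assumes PyYAML is installed: python -m pip install pyyaml
import yaml

with open("configs/examples/sdk_quickstart.yaml") as fh:
    config = yaml.safe_load(fh) or {}

for section in ("agent", "teacher", "rim", "adaptive_teaching", "storage"):
    status = "present" if section in config else "not set"
    print(f"{section}: {status}")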

Troubleshooting Checklist

  • Missing API key – make sure the keys your configuration needs (ANTHROPIC_API_KEY and GEMINI_API_KEY for the defaults, or OPENAI_API_KEY/Azure equivalents for other adapters) are exported in the same shell.
  • Time spent downloading dependencies – the first install pulls in litellm, httpx, and friends; subsequent runs skip the download.
  • Model limits – bump max_output_tokens in the config if your summaries get truncated.

Next Steps