Understand every block in an Atlas SDK YAML file so you can tailor the orchestrator to your agents.
Atlas SDK configs are the control tower for runtime orchestration. Every key is validated by a Pydantic schema (atlas-sdk/atlas/config/models.py), so mistakes surface before the adaptive dual-agent reasoning loop (your agent paired with a verifying teacher) spins up. Atlas uses LiteLLM as its primary adapter backend, making the system model-agnostic and compatible with 100+ LLM providers, including OpenAI, Anthropic Claude, Google Gemini, xAI Grok, Azure OpenAI, AWS Bedrock, local models (Ollama, vLLM), and custom endpoints.
Keep atlas.core.run(..., stream_progress=True) enabled while tuning configs—the live event stream mirrors exactly what persists to storage and makes it easy to spot misconfigured blocks.
The `agent` block wires the orchestrator to your agent. Its schema is defined by AdapterConfig and its subclasses (atlas-sdk/atlas/config/models.py:67-176); extra keys are rejected.
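For orientation, here is a minimal `agent` block using the recommended litellm adapter. The name and model values are placeholders; the field names mirror the full example at the end of this page:

```yaml
agent:
  type: litellm                        # recommended adapter backend
  name: my-litellm-agent               # placeholder agent name
  system_prompt: |
    You are an AI model acting as the Atlas Student.
  tools: []                            # register tool schemas here if the agent uses any
  llm:
    provider: openai                   # any LiteLLM-supported provider
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY        # env var that holds the API key
    temperature: 0.2
    max_output_tokens: 2048
```

Because requests route through LiteLLM, targeting a different provider is typically just a change to the `llm` block's `provider` and `model` values.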
The `student` block tunes prompt assembly and token budgets. Its optional `prompts` mapping supplies reusable chunks merged into prompts per run (see the override example below).

| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| `max_plan_tokens` | int (default 2048) | No | Raise when plans are truncated. |
| `max_step_tokens` | int (default 2048) | No | Increase for verbose tool output. |
| `max_synthesis_tokens` | int (default 2048) | No | Allow longer final answers. |
| `tool_choice` | Literal `auto` \| `required` (default `auto`) | No | Force tool invocation on every step when governance demands it. |
Example override (add explicit prompts to your config):
```yaml
student:
  prompts:
    planner: |
      {base_prompt}
      Break the user's task into a short numbered plan.
    executor: |
      {base_prompt}
      Execute the current plan step. Show the work that led to your answer.
    synthesizer: |
      {base_prompt}
      Summarize the important findings from every step and deliver the final answer.
  max_plan_tokens: 1024
  max_step_tokens: 1024
  max_synthesis_tokens: 1024
  tool_choice: auto
```
Reward System Block (RIM - Reward Integration Module)
The RIM (Reward Integration Module) evaluates each trajectory to decide whether to retry or accept the outcome. Configure the reward system using the rim block in your runtime config.
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| `small_model` | LLMParameters | Yes | Fast-path judge; keep it lightweight for latency-sensitive checks. Supports all LiteLLM providers, including local models. |
| `large_model` | LLMParameters | Yes | Escalation judge invoked on disagreement. Supports all LiteLLM providers, including local models. |
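A minimal sketch of the `rim` block, reusing field names from the full example at the bottom of this page and assuming you want a heavier escalation judge than the fast path (the specific Gemini model IDs are illustrative, not recommendations):

```yaml
rim:
  small_model:                         # fast-path judge: keep it lightweight
    provider: gemini
    model: gemini/gemini-2.5-flash
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 8096
  large_model:                         # escalation judge invoked on disagreement
    provider: gemini
    model: gemini/gemini-2.5-pro       # illustrative: a heavier model than the fast path
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 8096
```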
This block configures triage, probing, and lane routing for the adaptive dual-agent pair (your agent plus the verifying teacher); the schema lives in atlas-sdk/atlas/config/models.py:185-227. A sketch of the block follows the table.
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| `enabled` | bool (default true) | No | Disable to bypass adaptive routing entirely. |
| `certify_first_run` | bool (default true) | No | Force first-time personas through paired certification. |
| `mode_override` | Literal \| null | No | Pin execution to `auto`, `paired`, or `coach`. |
| `triage_adapter` | str \| null | No | Reference a custom dossier builder. |
| `default_tags` | List[str] (default []) | No | Apply default metadata to persona memories. |
| `probe.llm` | LLMParameters \| null | No | Override the capability probe model. |
| `probe.thresholds` | auto=0.85, paired=0.65, coach=0.35 | No | Adjust lane cut-offs; order must satisfy auto ≥ paired ≥ coach. |
| `probe.fallback_mode` | Literal (default `paired`) | No | Lane chosen when the probe cannot decide. |
| `probe.evidence_limit` | int (default 6, range 1..32) | No | Limit how many supporting reasons the probe collects. |
| `probe.timeout_seconds` | float (default 15.0) | No | Extend for slower models. |
| `reward.type` | Literal `rim` (default) \| `python` | No | Switch to a custom reward objective. |
| `reward.import_path` / `reward.attribute` | str / str | Required when `type="python"` | Point at your custom scorer. |
| `reward.focus_prompt` | str \| null | No | Give the reward model an extra steer for this deployment. |
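The shape below is a sketch assembled from the table above. The top-level key (shown here as `adaptive`) and the exact nesting of `probe` and `reward` are assumptions to verify against atlas-sdk/atlas/config/models.py:185-227; the module path and attribute are placeholders:

```yaml
adaptive:                              # assumed key name; verify against models.py:185-227
  enabled: true
  certify_first_run: true
  mode_override: null                  # or pin to auto | paired | coach
  default_tags: []
  probe:
    thresholds:                        # order must satisfy auto >= paired >= coach
      auto: 0.85
      paired: 0.65
      coach: 0.35
    fallback_mode: paired              # lane used when the probe cannot decide
    evidence_limit: 6
    timeout_seconds: 15.0
  reward:
    type: python                       # default is rim; python selects a custom scorer
    import_path: my_project.rewards    # placeholder module path
    attribute: score_trajectory        # placeholder callable name
    focus_prompt: null
```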
Legacy configs may still include a prompt_rewrite block, but the runtime now rejects it (atlas-sdk/atlas/core/__init__.py raises a ValueError). Remove the block and rely on explicit student.prompts / teacher.prompts instead.
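A minimal migration sketch; the prompt text is illustrative, and `teacher.prompts` accepts overrides the same way:

```yaml
# Remove any legacy block like this; the runtime now raises a ValueError for it:
# prompt_rewrite: { ... }

# Declare the overrides you need explicitly instead:
student:
  prompts:
    planner: |
      {base_prompt}
      Break the user's task into a short numbered plan.
```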
This minimal config demonstrates the recommended litellm adapter with OpenAI models:
```yaml
agent:
  type: litellm
  name: example-litellm-agent
  system_prompt: |
    You are an AI model acting as the Atlas Student.
    Follow instructions carefully and respond with JSON when asked.
  tools: []
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.2
    max_output_tokens: 2048

teacher:
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.1
    max_output_tokens: 2048

rim:
  small_model:
    provider: gemini
    model: gemini/gemini-2.5-flash
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 8096
  large_model:
    provider: gemini
    model: gemini/gemini-2.5-flash
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 8096
  judge_prompt: 'reward the agent for attending to the issues mentioned in the task'
  variance_threshold: 0.15
  uncertainty_threshold: 0.3

storage:
  database_url: postgresql://atlas:atlas@localhost:5433/atlas
  min_connections: 1
  max_connections: 5
  statement_timeout_seconds: 30
```
Legacy configs: If you have existing configs using type: openai, they will continue to work but emit deprecation warnings. Migrate to type: litellm at your convenience.