Atlas SDK configs are the control tower for runtime orchestration. Every key is validated by a Pydantic schema (atlas-sdk/atlas/config/models.py), so mistakes surface before the adaptive dual-agent reasoning loop—your agent paired with a verifying teacher—spins up. Atlas uses LiteLLM as its primary adapter backend, making the system model-agnostic and compatible with 100+ LLM providers including OpenAI, Anthropic Claude, Google Gemini, XAI Grok, Azure OpenAI, AWS Bedrock, local models (Ollama, vLLM), and custom endpoints.
Keep atlas.core.run(..., stream_progress=True) enabled while tuning configs—the live event stream mirrors exactly what persists to storage and makes it easy to spot misconfigured blocks.
Root Config Overview
| Field | Type / Default | Required? | Why it matters |
|---|---|---|---|
| agent | Adapter union (litellm, http_api, python, openai) | Yes | Connects the orchestrator to your underlying agent transport. |
| teacher | TeacherConfig | Yes | Defines the verifying teacher persona, LLM, and feedback limits. |
| rim | RIMConfig | Yes | Configures the RIM ensemble that drives retries and adaptive feedback. |
| student | StudentConfig (token caps default to 2048) | No | Controls your agent’s (student) prompts, tool usage, and token budgets. |
| orchestration | OrchestrationConfig (max_retries=1, step_timeout_seconds=900, rim_guidance_tag="rim_feedback", emit_intermediate_steps=true) | No | Governs retries, timeouts, and telemetry emission. |
| adaptive_teaching | AdaptiveTeachingConfig (enabled=true) | No | Triage, probe, and lane-selection policy. |
| storage | StorageConfig or null (default null) | No | Enables Postgres persistence for traces and learning memory. |
| metadata | Dict[str, Any] (default {}) | No | Free-form tags for analytics and logging. |
Agent Block (agent)
This block wires the orchestrator to your agent. The schema is defined by AdapterConfig and its subclasses in atlas-sdk/atlas/config/models.py:67-176; extra keys are rejected.
Common fields
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| type | Enum: litellm, http_api, python, openai | Yes | Selects which adapter subclass will validate the rest of the block. Use litellm for new projects. |
| name | str | Yes | Appears in telemetry and logs; use a descriptive identifier per deployment. |
| system_prompt | str | Yes | Baseline persona text passed to your agent (the student). |
| tools | List[ToolDefinition] (default []) | No | Register JSON-schema tool signatures; validation ensures required keys exist. |
HTTP adapter (type: http_api)
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| transport.base_url | str | Yes | Base endpoint for your service. |
| transport.headers | Dict[str, str] (default {}) | No | Inject auth or custom headers. |
| transport.timeout_seconds | float (default 60.0) | No | Increase when downstream APIs are slow. |
| transport.retry.attempts | int (default 1, bounded 1..5) | No | Add resilience for flaky endpoints. |
| transport.retry.backoff_seconds | float (default 1.0) | No | Control backoff between retry attempts. |
| payload_template | Dict[str, Any] (default {}) | No | Provide a skeleton payload with placeholders the runtime will fill. |
| result_path | Sequence[str] or null | No | Extract a nested field from the response JSON. |
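Putting those fields together, a hypothetical http_api block might look like the sketch below. The endpoint, header, and placeholder/result paths are illustrative only; check your service's actual contract and the runtime's placeholder syntax before copying:

```yaml
agent:
  type: http_api
  name: example-http-agent            # illustrative name
  system_prompt: You are the Atlas Student.
  transport:
    base_url: https://agent.example.internal/v1/run   # hypothetical endpoint
    headers:
      Authorization: Bearer ${AGENT_TOKEN}            # inject auth header
    timeout_seconds: 120.0                            # widened for a slow downstream API
    retry:
      attempts: 3                                     # bounded 1..5
      backoff_seconds: 2.0
  payload_template:
    task: "{{prompt}}"        # placeholder syntax is an assumption, not confirmed by the schema
  result_path: ["data", "answer"]     # pull a nested field out of the response JSON
```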
Python adapter (type: python)
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| import_path | str | Yes | Python module or package that exposes your callable. |
| attribute | str or null | No | Specify the function/class name when the module exports multiple callables. |
| working_directory | str or null | No | Run relative imports against a specific path. |
| allow_generator | bool (default false) | No | Enable when the callable yields streaming results. |
| llm | LLMParameters or null | No | Supply metadata when the callable proxies an LLM (e.g., for telemetry). |
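For reference, a minimal python adapter block could look like this. The module path and attribute name are hypothetical; point them at your own callable:

```yaml
agent:
  type: python
  name: example-python-agent
  system_prompt: You are the Atlas Student.
  import_path: my_agents.research     # hypothetical module exposing the callable
  attribute: run_agent                # needed when the module exports several callables
  working_directory: ./services/agent # resolve relative imports from here
  allow_generator: true               # the callable yields streaming chunks
```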
LiteLLM adapter (type: litellm)
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| llm.provider | str | Yes | Choose from 100+ providers: openai, anthropic, gemini, xai, azure-openai, bedrock, etc. |
| llm.model | str | Yes | Choose the underlying chat model. |
| llm.api_key_env | str | Yes | Environment variable containing the API key. |
| llm.api_base | str or null | No | Override the base URL for local models or custom endpoints. |
| llm.temperature | float (default 0.0, range 0..2) | Yes | Increase for more exploratory generations. |
| llm.top_p | float or null | No | Apply nucleus sampling if desired. |
| llm.max_output_tokens | int | Yes | Cap response length. |
| llm.timeout_seconds | float (default 60.0) | No | Widen for long-running completions. |
| llm.retry.attempts | int (default 1, bounded 1..5) | No | Increase for transient API failures. |
| response_format | Dict[str, Any] or null | No | Request JSON schema enforcement when the provider supports it. |
Using local models: The litellm adapter makes local model integration seamless.

Ollama:

```yaml
agent:
  type: litellm
  llm:
    provider: openai              # Ollama is OpenAI-compatible
    model: llama3.1
    api_base: http://localhost:11434
    api_key_env: DUMMY            # Ollama doesn't need auth
    temperature: 0.2
    max_output_tokens: 2048
```
vLLM:

```yaml
agent:
  type: litellm
  llm:
    provider: openai
    model: meta-llama/Llama-3.1-8B-Instruct
    api_base: http://localhost:8000/v1
    api_key_env: DUMMY
    temperature: 0.2
    max_output_tokens: 2048
```
Both Ollama and vLLM are OpenAI-compatible, so use provider: openai with the correct api_base.
Provider Examples
Common LiteLLM provider configurations:
| Provider | Model Example | API Key Env |
|---|---|---|
| OpenAI | gpt-4o-mini | OPENAI_API_KEY |
| Anthropic | claude-sonnet-4-5 | ANTHROPIC_API_KEY |
| Gemini | gemini/gemini-2.5-flash | GEMINI_API_KEY |
| XAI Grok | xai/grok-4-fast | XAI_API_KEY |
| Azure OpenAI | gpt-4o-mini | AZURE_OPENAI_API_KEY + api_base |
| AWS Bedrock | anthropic.claude-3-5-sonnet-* | AWS_ACCESS_KEY_ID + region/secret |
All use temperature: 0.2 and max_output_tokens: 2048 by default. See LiteLLM docs for full provider list.
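As one concrete instance, swapping the agent to Anthropic follows the same shape as the OpenAI example, with the model name and key variable taken from the table above (adjust to a model you actually have access to):

```yaml
agent:
  type: litellm
  name: claude-agent                # illustrative name
  system_prompt: You are the Atlas Student.
  llm:
    provider: anthropic
    model: claude-sonnet-4-5
    api_key_env: ANTHROPIC_API_KEY
    temperature: 0.2
    max_output_tokens: 2048
```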
Student Block (student)
Guides the student agent’s prompts and token budgets. When prompts is omitted, the runtime builds defaults from the agent system_prompt.
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| prompts | StudentPrompts or null | No | Override planner/executor/synthesizer prompt templates explicitly. |
| prompt_guidance | Dict[str, str] (default {}) | No | Supply reusable chunks merged into prompts per run. |
| max_plan_tokens | int (default 2048) | No | Raise when plans are truncated. |
| max_step_tokens | int (default 2048) | No | Increase for verbose tool output. |
| max_synthesis_tokens | int (default 2048) | No | Allow longer final answers. |
| tool_choice | Literal: auto or required (default auto) | No | Force tool invocation on every step when governance demands it. |
Override example:

```yaml
student:
  max_plan_tokens: 1024
  max_step_tokens: 1024
  tool_choice: auto
```
Teacher Block (teacher)
Defines the verifying teacher persona that validates plans, emits guidance, and certifies results.
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| llm | LLMParameters | Yes | Choose the verifying teacher model (often stronger than the student agent). Supports all LiteLLM providers; use api_base for local models. |
| max_review_tokens | int or null (default null) | No | Cap plan-review responses. |
| plan_cache_seconds | int (default 300) | No | Reuse approved plans for repeated task IDs. |
| guidance_max_tokens | int or null | No | Limit per-step feedback length. |
| validation_max_tokens | int or null | No | Cap the validation verdict. |
| prompts | TeacherPrompts or null | No | Replace default reviewer prompts. |
| prompt_guidance | Dict[str, str] (default {}) | No | Inject reusable guidance fragments. |
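A teacher block that pairs a stronger model with tighter feedback budgets might look like this sketch (model choice and token caps are illustrative):

```yaml
teacher:
  llm:
    provider: openai
    model: gpt-4o               # often a stronger model than the student
    api_key_env: OPENAI_API_KEY
    temperature: 0.1
    max_output_tokens: 2048
  plan_cache_seconds: 600       # reuse approved plans longer for repeated task IDs
  guidance_max_tokens: 512      # keep per-step feedback short
  validation_max_tokens: 512    # cap the validation verdict
```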
Orchestration Block (orchestration)
Controls retry semantics and telemetry.
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| max_retries | int (default 1, hard ceiling) | No | Set to 0 to disable retries entirely. |
| step_timeout_seconds | float (default 900.0) | No | Lengthen for slow tools or external APIs. |
| rim_guidance_tag | str (default "rim_feedback") | No | Change when your prompts expect a different insertion tag. |
| emit_intermediate_steps | bool (default true) | No | Toggle console/storage streaming of intermediate events. |
| forced_mode | AdaptiveMode or null | No | Lock the runtime to auto, paired, or coach (useful for deterministic evaluation). |
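For example, a deployment with slow external tools and deterministic evaluation runs might tune the block like this (values are illustrative):

```yaml
orchestration:
  max_retries: 2
  step_timeout_seconds: 1800.0   # slow external APIs need more headroom
  emit_intermediate_steps: true  # keep live telemetry for debugging
  forced_mode: paired            # pin the lane for reproducible evaluations
```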
Reward System Block (RIM - Reward Integration Module)
The RIM (Reward Integration Module) evaluates each trajectory to decide whether to retry or accept the outcome. Configure the reward system using the rim block in your runtime config.
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| small_model | LLMParameters | Yes | Fast-path judge; keep lightweight for latency-sensitive checks. Supports all LiteLLM providers including local models. |
| large_model | LLMParameters | Yes | Escalation judge invoked on disagreement. Supports all LiteLLM providers including local models. |
| active_judges | Dict[str, bool] (default {"process": true, "helpfulness": true}) | No | Toggle built-in dimensions or add custom judges. |
| variance_threshold | float (default 0.15) | No | Lower to escalate disagreements sooner. |
| uncertainty_threshold | float (default 0.3) | No | Raise to reduce escalations on ambiguous scores. |
| parallel_workers | int (default 4, range 1..32) | No | Tune concurrency to match judge model throughput. |
| judge_prompt | str or null | No | Provide a rubric that defines success for your domain. |
See Reward Design for judge composition examples.
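A rim block that uses a cheap small-model judge and escalates to a stronger model could be sketched as follows (model names, thresholds, and rubric are illustrative):

```yaml
rim:
  small_model:
    provider: gemini
    model: gemini/gemini-2.5-flash   # lightweight fast-path judge
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 4096
  large_model:
    provider: openai
    model: gpt-4o                    # escalation judge on disagreement
    api_key_env: OPENAI_API_KEY
    max_output_tokens: 4096
  variance_threshold: 0.10           # escalate disagreements sooner than the 0.15 default
  parallel_workers: 8                # match judge-model throughput
  judge_prompt: Reward answers that resolve every issue named in the task.
```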
Adaptive Teaching Block (adaptive_teaching)
Configures triage, probing, and lane routing for the adaptive dual-agent pair—your agent plus the verifying teacher (atlas-sdk/atlas/config/models.py:185-227).
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| enabled | bool (default true) | No | Disable to bypass adaptive routing entirely. |
| certify_first_run | bool (default true) | No | Force first-time personas through paired certification. |
| mode_override | Literal or null | No | Pin execution to auto, paired, or coach. |
| triage_adapter | str or null | No | Reference a custom dossier builder. |
| default_tags | List[str] (default []) | No | Apply default metadata to persona memories. |
| probe.llm | LLMParameters or null | No | Override the capability probe model. |
| probe.thresholds | auto=0.85, paired=0.65, coach=0.35 | No | Adjust lane cut-offs; order must satisfy auto ≥ paired ≥ coach. |
| probe.fallback_mode | Literal (default paired) | No | Lane chosen when the probe cannot decide. |
| probe.evidence_limit | int (default 6, range 1..32) | No | Limit how many supporting reasons the probe collects. |
| probe.timeout_seconds | float (default 15.0) | No | Extend for slower models. |
| reward.type | Literal: rim (default) or python | No | Switch to a custom reward objective. |
| reward.import_path / attribute | str / str | Required when type="python" | Point at your custom scorer. |
| reward.focus_prompt | str or null | No | Give the reward model an extra steer for this deployment. |
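An adaptive_teaching block that tightens the auto lane might be sketched as below. The YAML nesting is inferred from the dotted parameter paths in the table above and the tag values are illustrative; verify the exact structure against the Pydantic schema:

```yaml
adaptive_teaching:
  enabled: true
  certify_first_run: true
  default_tags: [prod, billing]   # hypothetical deployment tags
  probe:
    thresholds:
      auto: 0.90                  # require stronger evidence before unsupervised runs
      paired: 0.65
      coach: 0.35                 # must keep auto >= paired >= coach
    fallback_mode: paired         # lane when the probe cannot decide
```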
Storage Block (storage)
Controls Postgres persistence (atlas-sdk/atlas/config/models.py:299-307). Omit the block or set storage: null for ephemeral runs.
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| database_url | str | Yes (when block present) | Point at your managed or local Postgres instance. |
| min_connections | int (default 1) | No | Increase for burstier workloads. |
| max_connections | int (default 5) | No | Upper bound for connection pool size. |
| statement_timeout_seconds | float (default 30.0) | No | Abort long-running queries sooner. |
Tip: atlas init scaffolds a Docker Compose file with sensible defaults and exposes Postgres on localhost:5433.
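A storage block pointed at that scaffolded instance, with a slightly larger pool for burstier workloads, could look like this (credentials match the atlas init defaults; pool sizes are illustrative):

```yaml
storage:
  database_url: postgresql://atlas:atlas@localhost:5433/atlas  # atlas init default
  min_connections: 2
  max_connections: 10               # raise the pool ceiling for bursty traffic
  statement_timeout_seconds: 15.0   # abort long-running queries sooner
```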
Learning Block (learning)
Controls the runtime synthesizer that generates and applies student/teacher playbooks.
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| enabled | bool (default true) | No | Disable to run without loading or updating playbooks. |
| update_enabled | bool (default true) | No | Freeze updates while keeping existing playbooks active. |
| llm | LLMParameters or null | No | Override the synthesizer model; falls back to runtime defaults otherwise. |
| prompts | LearningPrompts or null | No | Supply custom prompts for the synthesizer LLM. |
| history_limit | int (default 10) | No | Cap historical sessions fed into each update. |
| session_note_enabled | bool (default true) | No | Persist per-session learning notes alongside the registry. |
| apply_to_prompts | bool (default true) | No | Toggle playbook injection into persona prompts and validation payloads. |
| playbook_injection_mode | "prefix" or "suffix" (default "prefix") | No | Inject the playbook before (prefix) or after (suffix) the system prompt; suffix mode enables KV cache reuse. |
| inject_few_shot_examples | bool (default true) | No | Append captured examples to playbook entries for in-context learning. |
| max_few_shot_token_budget | int (default 500) | No | Maximum tokens allocated for few-shot examples in playbook injection. |
| token_budget_chars_per_token | float (default 3.5) | No | Character-to-token ratio for estimating few-shot example token usage. |
| max_entries_to_process | int (default 10) | No | Maximum number of historical entries to process when extracting few-shot examples. |
| max_examples_per_block | int (default 2) | No | Maximum few-shot examples to include per playbook block. |
| usage_tracking.redaction_patterns | List[str] (default []) | No | Regex patterns for redacting sensitive data from usage-tracking logs. |
Pair this section with Learning System Architecture for deeper context.
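For instance, a deployment that wants cache-friendly injection while temporarily freezing playbook updates could set (values illustrative):

```yaml
learning:
  enabled: true
  update_enabled: false            # freeze updates during a regression investigation
  playbook_injection_mode: suffix  # inject after the system prompt to enable KV cache reuse
  max_few_shot_token_budget: 300   # trim the few-shot budget below the 500 default
```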
Runtime Safety Block (runtime_safety)
Defines production guardrails for drift detection and export review policies.
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| drift.enabled | bool (default true) | No | Disable statistical drift alerts (rarely recommended). |
| drift.window | int (default 50) | No | Increase for noisy telemetry; decrease for faster alerts. |
| drift.z_threshold | float (default 3.0) | No | Lower to make alerts more sensitive. |
| drift.min_baseline | int (default 5) | No | Require more samples before alerts fire. |
| review.require_approval | bool (default true) | No | Keep true in production to gate exports on human review. |
| review.default_export_statuses | List[str] (default ["approved"]) | No | Adjust when automation needs additional review states. |
See Runtime Safety & Review for operational guidance.
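A stricter-than-default guardrail configuration might be sketched as follows; the nesting mirrors the dotted paths in the table above, and the threshold values are illustrative:

```yaml
runtime_safety:
  drift:
    window: 100          # smooth out noisy telemetry
    z_threshold: 2.5     # more sensitive drift alerts than the 3.0 default
    min_baseline: 10     # require more samples before alerts fire
  review:
    require_approval: true
    default_export_statuses: [approved]
```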
Metadata Block (metadata)
Free-form tags attached to runs for analytics and logging.
| Parameter | Type / Default | Required? | Why adjust |
|---|---|---|---|
| metadata | Dict[str, Any] (default {}) | No | Attach labels consumed by your monitoring stack. |
Legacy configs may still include a prompt_rewrite block, but the runtime now rejects it (atlas-sdk/atlas/core/__init__.py raises a ValueError). Remove the block and rely on explicit student.prompts / teacher.prompts instead.
Cheat Sheet
| Goal | Section to edit | Pointer |
|---|---|---|
| Swap to Anthropic or Gemini | agent | Use type: litellm with provider: anthropic or provider: gemini. |
| Use local models (Ollama/vLLM) | agent | Use type: litellm, provider: openai, and set api_base to your local server. |
| Tighten or loosen retries | orchestration + rim | Adjust max_retries, variance_threshold, and uncertainty_threshold. |
| Persist adaptive memories | storage | Add a Postgres URL or run atlas init. |
| Force a supervision lane | adaptive_teaching | Set mode_override to auto, paired, or coach. |
| Personalise prompts | student.prompts / teacher.prompts | Override templates or reuse prompt_guidance. |
| Enforce JSON output | agent (response_format) | Provide OpenAI-compatible schemas or swap to http_api with custom validation. |
| Freeze playbook updates | learning.update_enabled | Pause runtime learning while investigating regressions. |
| Require approvals for exports | runtime_safety.review | Keep require_approval=true and document review notes. |
Validated Example (Quickstart)
This minimal config demonstrates the recommended litellm adapter with OpenAI models:
```yaml
agent:
  type: litellm
  name: example-litellm-agent
  system_prompt: |
    You are an AI model acting as the Atlas Student. Follow instructions carefully and respond with JSON when asked.
  tools: []
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.2
    max_output_tokens: 2048
teacher:
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.1
    max_output_tokens: 2048
rim:
  small_model:
    provider: gemini
    model: gemini/gemini-2.5-flash
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 8096
  large_model:
    provider: gemini
    model: gemini/gemini-2.5-flash
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 8096
  judge_prompt: 'Reward the agent for addressing the issues mentioned in the task.'
  variance_threshold: 0.15
  uncertainty_threshold: 0.3
storage:
  database_url: postgresql://atlas:atlas@localhost:5433/atlas
  min_connections: 1
  max_connections: 5
  statement_timeout_seconds: 30
```
Legacy configs: If you have existing configs using type: openai, they will continue to work but emit deprecation warnings. Migrate to type: litellm at your convenience.
Parameter Index (Alphabetical)
adaptive_teaching.default_tags – Tag sessions and learning updates with deployment metadata.
adaptive_teaching.mode_override – Force the runtime into a specific lane for deterministic evaluation.
agent.response_format – Request JSON-mode enforcement from OpenAI-compatible providers.
learning.apply_to_prompts – Enable/disable playbook injection into persona prompts.
learning.update_enabled – Gate persistence of new playbooks after each session.
orchestration.forced_mode – Hard-set the execution mode regardless of probe results.
runtime_safety.drift.z_threshold – Sensitivity of automatic drift alerts.
runtime_safety.review.default_export_statuses – Review states included when tooling omits filters.
storage.database_url – Connection string for the Postgres telemetry store.
student.tool_choice – Force tool invocation on each step when governance demands it.
teacher.plan_cache_seconds – Duration to reuse previously approved plans.