Atlas SDK configs are the control tower for runtime orchestration. Every key is validated by a Pydantic schema (atlas-sdk/atlas/config/models.py), so mistakes surface before the adaptive dual-agent reasoning loop (your agent paired with a verifying teacher) spins up. Atlas uses LiteLLM as its primary adapter backend, making the system model-agnostic and compatible with 100+ LLM providers, including OpenAI, Anthropic Claude, Google Gemini, XAI Grok, Azure OpenAI, AWS Bedrock, local models (Ollama, vLLM), and custom endpoints.
This page is a configuration reference. For adapter walkthroughs and orchestration concepts, see Bring Your Own Agent and How Orchestration Works.
Keep atlas.core.run(..., stream_progress=True) enabled while tuning configs: the live event stream mirrors exactly what persists to storage, which makes misconfigured blocks easy to spot.

Root Config Overview

| Field | Type / Default | Required? | Why it matters |
| --- | --- | --- | --- |
| agent | Adapter union (litellm \| http_api \| python \| openai) | Yes | Connects the orchestrator to your underlying agent transport. |
| teacher | TeacherConfig | Yes | Defines the verifying teacher persona, LLM, and feedback limits. |
| rim | RIMConfig | Yes | Configures the RIM ensemble that drives retries and adaptive feedback. |
| student | StudentConfig (token caps default to 2048) | No | Controls your agent’s (student) prompts, tool usage, and token budgets. |
| orchestration | OrchestrationConfig (max_retries=1, step_timeout_seconds=900, rim_guidance_tag="rim_feedback", emit_intermediate_steps=true) | No | Governs retries, timeouts, and telemetry emission. |
| adaptive_teaching | AdaptiveTeachingConfig (enabled=true) | No | Triage, probe, and lane-selection policy. |
| storage | StorageConfig \| null (default null) | No | Enables Postgres persistence for traces and learning memory. |
| metadata | Dict[str, Any] (default {}) | No | Free-form tags for analytics and logging. |
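The skeleton below sketches how these blocks fit together. Model names and tags are placeholders, and it is not meant to replace the Validated Example at the end of this page, which is the complete working file:
```yaml
agent:                       # required: adapter wiring (see Agent Block)
  type: litellm
  name: my-agent             # placeholder name
  system_prompt: You are the Atlas Student.
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.2
    max_output_tokens: 2048
teacher:                     # required: verifying teacher (see Teacher Block)
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.1
    max_output_tokens: 2048
rim:                         # required: reward ensemble (see Reward System Block)
  small_model:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    max_output_tokens: 2048
  large_model:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    max_output_tokens: 2048
storage: null                # optional: null keeps runs ephemeral
metadata: {}                 # optional: free-form tags
```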

Agent Block (agent)

This block wires the orchestrator to your agent. The schema is defined by AdapterConfig and its subclasses in atlas-sdk/atlas/config/models.py:67-176; extra keys are rejected.

Common fields

| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| type | Enum: litellm, http_api, python, openai | Yes | Selects which adapter subclass will validate the rest of the block. Use litellm for new projects. |
| name | str | Yes | Appears in telemetry and logs; use a descriptive identifier per deployment. |
| system_prompt | str | Yes | Baseline persona text passed to your agent (the student). |
| tools | List[ToolDefinition] (default []) | No | Register JSON-schema tool signatures; validation ensures required keys exist. |
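A sketch of the common fields with one registered tool. The tool entry assumes the conventional name/description/parameters JSON-schema shape, and the tool itself is hypothetical; check ToolDefinition in atlas-sdk/atlas/config/models.py for the exact required keys:
```yaml
agent:
  type: litellm
  name: support-triage-agent        # shows up in telemetry and logs
  system_prompt: You are the Atlas Student for support triage.
  tools:
    - name: search_tickets          # hypothetical tool
      description: Search the ticket index by keyword.
      parameters:                   # JSON-schema signature (assumed shape)
        type: object
        properties:
          query: {type: string}
        required: [query]
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.2
    max_output_tokens: 2048
```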

HTTP adapter (type: http_api)

| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| transport.base_url | str | Yes | Base endpoint for your service. |
| transport.headers | Dict[str, str] (default {}) | No | Inject auth or custom headers. |
| transport.timeout_seconds | float (default 60.0) | No | Increase when downstream APIs are slow. |
| transport.retry.attempts | int (default 1, bounded 1..5) | No | Add resilience for flaky endpoints. |
| transport.retry.backoff_seconds | float (default 1.0) | No | Control backoff between retry attempts. |
| payload_template | Dict[str, Any] (default {}) | No | Provide a skeleton payload with placeholders the runtime will fill. |
| result_path | Sequence[str] \| null | No | Extract a nested field from the response JSON. |
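A hedged http_api sketch. The endpoint, header value, payload keys, and response shape are all placeholders; see Bring Your Own Agent for the transport contract:
```yaml
agent:
  type: http_api
  name: internal-agent-service
  system_prompt: You are the Atlas Student.
  transport:
    base_url: https://agents.internal.example.com/v1/chat  # placeholder URL
    headers:
      Authorization: Bearer REPLACE_ME   # static header; source from your secret manager
    timeout_seconds: 120                 # slow downstream API
    retry:
      attempts: 3                        # bounded 1..5
      backoff_seconds: 2.0
  payload_template:
    session_id: null                     # skeleton slots the runtime fills (key names assumed)
    metadata:
      source: atlas
  result_path: [data, message, content]  # drill into nested response JSON
```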

Python adapter (type: python)

| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| import_path | str | Yes | Python module or package that exposes your callable. |
| attribute | str \| null | No | Specify the function/class name when the module exports multiple callables. |
| working_directory | str \| null | No | Run relative imports against a specific path. |
| allow_generator | bool (default false) | No | Enable when the callable yields streaming results. |
| llm | LLMParameters \| null | No | Supply metadata when the callable proxies an LLM (e.g., for telemetry). |
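A sketch of a python adapter pointing at a local callable; the module and function names are hypothetical:
```yaml
agent:
  type: python
  name: local-python-agent
  system_prompt: You are the Atlas Student.
  import_path: my_project.agents       # hypothetical module
  attribute: run_agent                 # hypothetical callable in that module
  working_directory: ./services/agent  # resolve relative imports from here
  allow_generator: true                # the callable yields streaming chunks
  llm:                                 # optional metadata when the callable proxies an LLM
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.0
    max_output_tokens: 2048
```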

LiteLLM adapter (type: litellm)

| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| llm.provider | str | Yes | Choose from 100+ providers: openai, anthropic, gemini, xai, azure-openai, bedrock, etc. |
| llm.model | str | Yes | Choose the underlying chat model. |
| llm.api_key_env | str | Yes | Environment variable containing the API key. |
| llm.api_base | str \| null | No | Override the base URL for local models or custom endpoints. |
| llm.temperature | float (default 0.0, range 0..2) | Yes | Increase for more exploratory generations. |
| llm.top_p | float \| null | No | Apply nucleus sampling if desired. |
| llm.max_output_tokens | int | Yes | Cap response length. |
| llm.timeout_seconds | float (default 60.0) | No | Widen for long-running completions. |
| llm.retry.attempts | int (default 1, bounded 1..5) | No | Increase for transient API failures. |
| response_format | Dict[str, Any] \| null | No | Request JSON schema enforcement when the provider supports it. |
Using local models: the litellm adapter makes local model integration seamless.
Ollama:
```yaml
agent:
  type: litellm
  llm:
    provider: openai  # Ollama is OpenAI-compatible
    model: llama3.1
    api_base: http://localhost:11434
    api_key_env: DUMMY  # Ollama doesn't need auth
    temperature: 0.2
    max_output_tokens: 2048
```
vLLM:
```yaml
agent:
  type: litellm
  llm:
    provider: openai
    model: meta-llama/Llama-3.1-8B-Instruct
    api_base: http://localhost:8000/v1
    api_key_env: DUMMY
    temperature: 0.2
    max_output_tokens: 2048
```
Both Ollama and vLLM are OpenAI-compatible, so use provider: openai with the correct api_base.
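To enforce structured output, set response_format on the agent block. The payload below follows the OpenAI-style json_schema convention; whether a given provider honors it is provider-dependent, and the schema shown is illustrative:
```yaml
agent:
  type: litellm
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.0
    max_output_tokens: 2048
  response_format:
    type: json_schema
    json_schema:
      name: triage_result            # hypothetical schema name
      schema:
        type: object
        properties:
          severity: {type: string}
          summary: {type: string}
        required: [severity, summary]
```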

Provider Examples

Common LiteLLM provider configurations:
| Provider | Model Example | API Key Env |
| --- | --- | --- |
| OpenAI | gpt-4o-mini | OPENAI_API_KEY |
| Anthropic | claude-sonnet-4-5 | ANTHROPIC_API_KEY |
| Gemini | gemini/gemini-2.5-flash | GEMINI_API_KEY |
| XAI Grok | xai/grok-4-fast | XAI_API_KEY |
| Azure OpenAI | gpt-4o-mini | AZURE_OPENAI_API_KEY + api_base |
| AWS Bedrock | anthropic.claude-3-5-sonnet-* | AWS_ACCESS_KEY_ID + region/secret |
All examples use temperature: 0.2 and max_output_tokens: 2048 by default. See the LiteLLM docs for the full provider list.
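For instance, a minimal Anthropic block assembled from the table above (other values mirror the quickstart defaults):
```yaml
agent:
  type: litellm
  name: claude-agent                 # placeholder name
  system_prompt: You are the Atlas Student.
  llm:
    provider: anthropic
    model: claude-sonnet-4-5
    api_key_env: ANTHROPIC_API_KEY
    temperature: 0.2
    max_output_tokens: 2048
```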

Student Block (student)

Guides the student agent’s prompts and token budgets. When prompts is omitted, the runtime builds defaults from the agent system_prompt.
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| prompts | StudentPrompts \| null | No | Override planner/executor/synthesizer prompt templates explicitly. |
| prompt_guidance | Dict[str, str] (default {}) | No | Supply reusable chunks merged into prompts per run. |
| max_plan_tokens | int (default 2048) | No | Raise when plans are truncated. |
| max_step_tokens | int (default 2048) | No | Increase for verbose tool output. |
| max_synthesis_tokens | int (default 2048) | No | Allow longer final answers. |
| tool_choice | Literal auto \| required (default auto) | No | Force tool invocation on every step when governance demands it. |
Override example:
```yaml
student:
  max_plan_tokens: 1024
  max_step_tokens: 1024
  tool_choice: auto
```

Teacher Block (teacher)

Defines the verifying teacher persona that validates plans, emits guidance, and certifies results.
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| llm | LLMParameters | Yes | Choose the verifying teacher model (often stronger than the student agent). Supports all LiteLLM providers; use api_base for local models. |
| max_review_tokens | int \| null (default null) | No | Cap plan-review responses. |
| plan_cache_seconds | int (default 300) | No | Reuse approved plans for repeated task IDs. |
| guidance_max_tokens | int \| null | No | Limit per-step feedback length. |
| validation_max_tokens | int \| null | No | Cap the validation verdict. |
| prompts | TeacherPrompts \| null | No | Replace default reviewer prompts. |
| prompt_guidance | Dict[str, str] (default {}) | No | Inject reusable guidance fragments. |
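A hedged teacher sketch pairing a stronger model with explicit token caps; the model choice is a placeholder:
```yaml
teacher:
  llm:
    provider: openai
    model: gpt-4o                 # often a stronger model than the student (placeholder)
    api_key_env: OPENAI_API_KEY
    temperature: 0.1
    max_output_tokens: 2048
  max_review_tokens: 1024         # cap plan reviews
  guidance_max_tokens: 512        # cap per-step feedback
  validation_max_tokens: 512      # cap the final verdict
  plan_cache_seconds: 600         # reuse approved plans longer
```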

Orchestration Block (orchestration)

Controls retry semantics and telemetry.
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| max_retries | int (default 1, hard ceiling) | No | Set to 0 to disable retries entirely. |
| step_timeout_seconds | float (default 900.0) | No | Lengthen for slow tools or external APIs. |
| rim_guidance_tag | str (default "rim_feedback") | No | Change when your prompts expect a different insertion tag. |
| emit_intermediate_steps | bool (default true) | No | Toggle console/storage streaming of intermediate events. |
| forced_mode | AdaptiveMode \| null | No | Lock the runtime to auto, paired, or coach (useful for deterministic evaluation). |
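A sketch for deterministic evaluation runs, with retries disabled and the lane pinned:
```yaml
orchestration:
  max_retries: 0                  # disable retries entirely
  step_timeout_seconds: 1800      # allow for slow external tools
  rim_guidance_tag: rim_feedback  # default insertion tag
  emit_intermediate_steps: true   # keep streaming events
  forced_mode: paired             # one of auto | paired | coach
```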

Reward System Block (rim)

The RIM (Reward Integration Module) evaluates each trajectory to decide whether to retry or accept the outcome. Configure the reward system using the rim block in your runtime config.
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| small_model | LLMParameters | Yes | Fast-path judge; keep lightweight for latency-sensitive checks. Supports all LiteLLM providers, including local models. |
| large_model | LLMParameters | Yes | Escalation judge invoked on disagreement. Supports all LiteLLM providers, including local models. |
| active_judges | Dict[str, bool] (default {"process": true, "helpfulness": true}) | No | Toggle built-in dimensions or add custom judges. |
| variance_threshold | float (default 0.15) | No | Lower to escalate disagreements sooner. |
| uncertainty_threshold | float (default 0.3) | No | Raise to reduce escalations on ambiguous scores. |
| parallel_workers | int (default 4, range 1..32) | No | Tune concurrency to match judge model throughput. |
| judge_prompt | str \| null | No | Provide a rubric that defines success for your domain. |
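A hedged rim sketch that tightens escalation and raises judge concurrency; model choices are placeholders:
```yaml
rim:
  small_model:
    provider: gemini
    model: gemini/gemini-2.5-flash   # fast-path judge
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 4096
  large_model:
    provider: openai
    model: gpt-4o                    # escalation judge (placeholder)
    api_key_env: OPENAI_API_KEY
    max_output_tokens: 4096
  active_judges:                     # the two built-in dimensions
    process: true
    helpfulness: true
  variance_threshold: 0.10           # escalate disagreements sooner
  uncertainty_threshold: 0.3
  parallel_workers: 8                # match judge throughput
  judge_prompt: Reward answers that resolve the user's stated issue.
```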
See Reward Design for judge composition examples.

Adaptive Teaching Block (adaptive_teaching)

Configures triage, probing, and lane routing for the adaptive dual-agent pair—your agent plus the verifying teacher (atlas-sdk/atlas/config/models.py:185-227).
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| enabled | bool (default true) | No | Disable to bypass adaptive routing entirely. |
| certify_first_run | bool (default true) | No | Force first-time personas through paired certification. |
| mode_override | Literal \| null | No | Pin execution to auto, paired, or coach. |
| triage_adapter | str \| null | No | Reference a custom dossier builder. |
| default_tags | List[str] (default []) | No | Apply default metadata to persona memories. |
| probe.llm | LLMParameters \| null | No | Override the capability probe model. |
| probe.thresholds | auto=0.85, paired=0.65, coach=0.35 | No | Adjust lane cut-offs; order must satisfy auto ≥ paired ≥ coach. |
| probe.fallback_mode | Literal (default "paired") | No | Lane chosen when the probe cannot decide. |
| probe.evidence_limit | int (default 6, range 1..32) | No | Limit how many supporting reasons the probe collects. |
| probe.timeout_seconds | float (default 15.0) | No | Extend for slower models. |
| reward.type | Literal rim (default) \| python | No | Switch to a custom reward objective. |
| reward.import_path / attribute | str / str | Required when type="python" | Point at your custom scorer. |
| reward.focus_prompt | str \| null | No | Give the reward model an extra steer for this deployment. |
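A hedged adaptive_teaching sketch routing reward scoring to custom Python. The nested shape of probe.thresholds is inferred from the table above, and the scorer module is hypothetical:
```yaml
adaptive_teaching:
  enabled: true
  certify_first_run: true
  default_tags: [prod, billing]      # tag persona memories
  probe:
    thresholds:                      # must satisfy auto >= paired >= coach
      auto: 0.9
      paired: 0.6
      coach: 0.3
    fallback_mode: paired            # lane when the probe cannot decide
    evidence_limit: 8
    timeout_seconds: 30.0
  reward:
    type: python
    import_path: my_project.rewards  # hypothetical module
    attribute: score_trajectory      # hypothetical callable
```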

Storage Block (storage)

Controls Postgres persistence (atlas-sdk/atlas/config/models.py:299-307). Omit the block or set storage: null for ephemeral runs.
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| database_url | str | Yes (when block present) | Point at your managed or local Postgres instance. |
| min_connections | int (default 1) | No | Increase for burstier workloads. |
| max_connections | int (default 5) | No | Upper bound for connection pool size. |
| statement_timeout_seconds | float (default 30.0) | No | Abort long-running queries sooner. |
Tip: atlas init scaffolds a Docker Compose file with sensible defaults and exposes Postgres on localhost:5433.

Learning Block (learning)

Controls the runtime synthesizer that generates and applies student/teacher playbooks.
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| enabled | bool (default true) | No | Disable to run without loading or updating playbooks. |
| update_enabled | bool (default true) | No | Freeze updates while keeping existing playbooks active. |
| llm | LLMParameters \| null | No | Override the synthesizer model; falls back to runtime defaults otherwise. |
| prompts | LearningPrompts \| null | No | Supply custom prompts for the synthesizer LLM. |
| history_limit | int (default 10) | No | Cap historical sessions fed into each update. |
| session_note_enabled | bool (default true) | No | Persist per-session learning notes alongside the registry. |
| apply_to_prompts | bool (default true) | No | Toggle playbook injection into persona prompts and validation payloads. |
| playbook_injection_mode | "prefix" or "suffix" (default "prefix") | No | Inject the playbook before (prefix) or after (suffix) the system prompt. Suffix mode enables KV cache reuse. |
| inject_few_shot_examples | bool (default true) | No | Append captured examples to playbook entries for in-context learning. |
| max_few_shot_token_budget | int (default 500) | No | Maximum tokens allocated for few-shot examples in playbook injection. |
| token_budget_chars_per_token | float (default 3.5) | No | Character-to-token ratio for estimating few-shot example token usage. |
| max_entries_to_process | int (default 10) | No | Maximum number of historical entries to process when extracting few-shot examples. |
| max_examples_per_block | int (default 2) | No | Maximum few-shot examples to include per playbook block. |
| usage_tracking.redaction_patterns | List[str] (default []) | No | Regex patterns for redacting sensitive data from usage-tracking logs. |
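A sketch that keeps existing playbooks active while freezing updates and switching to suffix injection for KV cache reuse:
```yaml
learning:
  enabled: true
  update_enabled: false              # freeze updates, keep existing playbooks
  history_limit: 20                  # feed more sessions into each update
  apply_to_prompts: true
  playbook_injection_mode: suffix    # inject after the system prompt for KV cache reuse
  inject_few_shot_examples: true
  max_few_shot_token_budget: 800
  max_examples_per_block: 3
```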
Pair this section with Learning System Architecture for deeper context.

Runtime Safety Block (runtime_safety)

Defines production guardrails for drift detection and export review policies.
| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| drift.enabled | bool (default true) | No | Disable statistical drift alerts (rarely recommended). |
| drift.window | int (default 50) | No | Increase for noisy telemetry; decrease for faster alerts. |
| drift.z_threshold | float (default 3.0) | No | Lower to make alerts more sensitive. |
| drift.min_baseline | int (default 5) | No | Require more samples before alerts fire. |
| review.require_approval | bool (default true) | No | Keep true in production to gate exports on human review. |
| review.default_export_statuses | List[str] (default ["approved"]) | No | Adjust when automation needs additional review states. |
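A sketch tuned for faster drift alerts while keeping exports gated on review; the extra export status is illustrative:
```yaml
runtime_safety:
  drift:
    enabled: true
    window: 25            # smaller window, faster alerts
    z_threshold: 2.5      # more sensitive than the 3.0 default
    min_baseline: 10      # require more baseline samples first
  review:
    require_approval: true
    default_export_statuses: [approved, provisional]  # second status is illustrative
```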
See Runtime Safety & Review for operational guidance.

Metadata

| Parameter | Type / Default | Required? | Why adjust |
| --- | --- | --- | --- |
| metadata | Dict[str, Any] (default {}) | No | Attach labels consumed by your monitoring stack. |
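For example (keys and values are entirely free-form):
```yaml
metadata:
  team: support-automation
  environment: staging
  cost_center: cc-1234
```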
Legacy configs may still include a prompt_rewrite block, but the runtime now rejects it (atlas-sdk/atlas/core/__init__.py raises a ValueError). Remove the block and rely on explicit student.prompts / teacher.prompts instead.

Cheat Sheet

| Goal | Section to edit | Pointer |
| --- | --- | --- |
| Swap to Anthropic or Gemini | agent | Use type: litellm with provider: anthropic or provider: gemini. |
| Use local models (Ollama/vLLM) | agent | Use type: litellm, provider: openai, and set api_base to your local server. |
| Tighten or loosen retries | orchestration + rim | Adjust max_retries, variance_threshold, and uncertainty_threshold. |
| Persist adaptive memories | storage | Add a Postgres URL or run atlas init. |
| Force a supervision lane | adaptive_teaching | Set mode_override to auto, paired, or coach. |
| Personalise prompts | student.prompts / teacher.prompts | Override templates or reuse prompt_guidance. |
| Enforce JSON output | agent (response_format) | Provide OpenAI-compatible schemas or swap to http_api with custom validation. |
| Freeze playbook updates | learning.update_enabled | Pause runtime learning while investigating regressions. |
| Require approvals for exports | runtime_safety.review | Keep require_approval=true and document review notes. |

Validated Example (Quickstart)

This minimal config demonstrates the recommended litellm adapter with OpenAI models:
```yaml
agent:
  type: litellm
  name: example-litellm-agent
  system_prompt: |
    You are an AI model acting as the Atlas Student. Follow instructions carefully and respond with JSON when asked.
  tools: []
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.2
    max_output_tokens: 2048

teacher:
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.1
    max_output_tokens: 2048

rim:
  small_model:
    provider: gemini
    model: gemini/gemini-2.5-flash
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 8096
  large_model:
    provider: gemini
    model: gemini/gemini-2.5-flash
    api_key_env: GEMINI_API_KEY
    max_output_tokens: 8096
  judge_prompt: 'reward the agent for attending to the issues mentioned in the task'
  variance_threshold: 0.15
  uncertainty_threshold: 0.3

storage:
  database_url: postgresql://atlas:atlas@localhost:5433/atlas
  min_connections: 1
  max_connections: 5
  statement_timeout_seconds: 30
```
Legacy configs: If you have existing configs using type: openai, they will continue to work but emit deprecation warnings. Migrate to type: litellm at your convenience.

Parameter Index (Alphabetical)

- adaptive_teaching.default_tags – Tag sessions and learning updates with deployment metadata.
- adaptive_teaching.mode_override – Force the runtime into a specific lane for deterministic evaluation.
- agent.response_format – Request JSON-mode enforcement from OpenAI-compatible providers.
- learning.apply_to_prompts – Enable/disable playbook injection into persona prompts.
- learning.update_enabled – Gate persistence of new playbooks after each session.
- orchestration.forced_mode – Hard-set the execution mode regardless of probe results.
- runtime_safety.drift.z_threshold – Sensitivity of automatic drift alerts.
- runtime_safety.review.default_export_statuses – Review states included when tooling omits filters.
- storage.database_url – Connection string for the Postgres telemetry store.
- student.tool_choice – Force tool invocation on each step when governance demands it.
- teacher.plan_cache_seconds – Duration to reuse previously approved plans.