Simulation Examples

This page provides examples of how to use Arc Memory's simulation feature to predict the impact of code changes before they are merged. Simulations help you understand potential risks and make more informed decisions.

Basic Simulation

The simplest way to run a simulation is to use the arc sim run command with no options:

arc sim run

This will:

  1. Analyze your current uncommitted changes
  2. Compare them against the main branch
  3. Run all available simulation scenarios
  4. Provide a risk assessment and detailed analysis
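
If you want machine-readable results for scripts or automation, the same command can emit JSON (this is the output format used in the CI/CD example later on this page):

arc sim run --output json > simulation_results.json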

Simulation Scenarios

Arc Memory supports various simulation scenarios to analyze different aspects of your changes:

Network Latency Simulation

arc sim run --scenario network_latency

This scenario analyzes how your changes might affect network latency, including:

  • API response times
  • Service-to-service communication
  • Database query performance

Memory Usage Simulation

arc sim run --scenario memory_usage

This scenario analyzes how your changes might affect memory usage, including:

  • Memory allocation patterns
  • Potential memory leaks
  • Garbage collection impact

CPU Usage Simulation

arc sim run --scenario cpu_usage

This scenario analyzes how your changes might affect CPU usage, including:

  • Computational complexity
  • Concurrency patterns
  • Resource contention

Error Rates Simulation

arc sim run --scenario error_rates

This scenario analyzes how your changes might affect error rates, including:

  • Exception handling
  • Error propagation
  • Failure modes

Security Simulation

arc sim run --scenario security

This scenario analyzes potential security implications of your changes, including:

  • Input validation
  • Authentication/authorization
  • Data exposure risks

Advanced Simulation Options

Using a Specific Diff

You can run a simulation on a specific diff file instead of your current changes:

arc sim run --diff path/to/changes.diff

This is useful for:

  • Analyzing changes from a colleague
  • Evaluating historical changes
  • Testing hypothetical changes
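
For example, assuming a standard Git workflow, you can capture your current changes into a diff file and feed that file to the simulation:

# Capture changes relative to main into a diff file
git diff main > path/to/changes.diff

# Simulate the impact of that diff
arc sim run --diff path/to/changes.diff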

Comparing Against a Different Branch

By default, simulations compare against the main branch. You can specify a different branch:

arc sim run --branch develop

This is useful for:

  • Feature branch development
  • Release branch validation
  • Experimental changes

Using Different Sandbox Environments

Arc Memory supports different sandbox environments for running simulations:

# Local sandbox (default)
arc sim run --sandbox local

# Docker sandbox
arc sim run --sandbox docker

# E2B cloud sandbox
arc sim run --sandbox e2b

Each environment offers different levels of isolation and reproducibility:

  • local: Fastest but least isolated
  • docker: Good balance of speed and isolation
  • e2b: Most isolated and reproducible, but requires an internet connection

Using Memory to Improve Results

Arc Memory can use insights from previous simulations to improve results:

arc sim run --memory

This enables:

  • More accurate risk assessments based on historical patterns
  • Contextual recommendations based on similar past changes
  • Improved explanation quality with relevant historical context
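
The flag can be combined with the other options on this page; for example (assuming the flags compose in your CLI version):

# Use insights from past simulations while analyzing error rates
arc sim run --scenario error_rates --memory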

Working with Simulation History

Viewing Simulation History

To view your recent simulations:

arc sim history

This shows a table of recent simulations with their IDs, dates, scenarios, risk scores, and affected services.

Filtering Simulation History

You can filter the history by various criteria:

# Filter by service
arc sim history --service auth-service

# Filter by scenario
arc sim history --scenario network_latency

# Filter by risk score range
arc sim history --risk 50..100

# Combine filters
arc sim history --service api-gateway --scenario memory_usage --risk 30..70

Viewing Detailed Simulation Results

To view detailed results for a specific simulation:

arc sim show sim_abc123

This shows the complete analysis, including:

  • Summary information
  • Detailed impact analysis
  • Metrics and predictions
  • Recommendations

Programmatic Simulation

You can also run simulations programmatically using the SDK:

from arc_memory import ArcMemory

# Initialize Arc Memory
arc = ArcMemory()

# Run a simulation
simulation = arc.simulate(
    scenario="network_latency",
    branch="main",
    sandbox="local",
    use_memory=True
)

# Process the results
print(f"Simulation ID: {simulation.id}")
print(f"Risk Score: {simulation.risk_score}/100")
print(f"Affected Services: {', '.join(simulation.affected_services)}")
print(f"Analysis: {simulation.analysis}")

# Get recommendations
for recommendation in simulation.recommendations:
    print(f"- {recommendation}")

# Get metrics
for metric in simulation.metrics:
    print(f"{metric.name}: {metric.value} {metric.unit}")

CI/CD Integration

You can integrate simulations into your CI/CD pipeline:

# Example GitHub Actions workflow
name: Arc Memory Simulation

on:
  pull_request:
    branches: [ main ]

jobs:
  simulate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      
      - name: Install Arc Memory
        run: pip install arc-memory
      
      - name: Run simulation
        run: |
          arc auth gh --token ${{ secrets.GITHUB_TOKEN }}
          arc build --incremental
          arc sim run --output json > simulation_results.json
      
      - name: Post simulation results
        uses: actions/github-script@v6
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const fs = require('fs');
            const results = JSON.parse(fs.readFileSync('simulation_results.json', 'utf8'));
            
            const comment = `## Arc Memory Simulation Results
            
            Risk Score: ${results.risk_score}/100 (${results.risk_level})
            
            Affected Services: ${results.affected_services.join(', ')}
            
            ### Analysis
            ${results.analysis}
            
            ### Recommendations
            ${results.recommendations.map(r => `- ${r}`).join('\n')}
            `;
            
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: comment
            });
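
You can also make the job fail when the predicted risk is too high. A minimal sketch of an extra step, assuming the JSON output contains the risk_score field used in the comment script above and a hypothetical threshold of 70:

      - name: Fail on high risk
        run: |
          python -c "import json, sys; r = json.load(open('simulation_results.json')); sys.exit(1 if r.get('risk_score', 0) > 70 else 0)"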

Best Practices

  1. Run simulations early and often during development to catch issues before they reach review
  2. Use the appropriate sandbox for your needs (local for speed, e2b for isolation)
  3. Enable memory to improve simulation accuracy over time
  4. Integrate with CI/CD to automate simulation on pull requests
  5. Review simulation history to identify patterns and trends
  6. Use multiple scenarios to get a comprehensive view of potential impacts

Troubleshooting

Simulation Fails to Start

If the simulation fails to start:

# Run with debug logging
arc sim run --debug

# Check if sandbox environment is properly configured
arc doctor

# Try a different sandbox
arc sim run --sandbox local

Inaccurate Results

If you believe the simulation results are inaccurate:

# Try with memory enabled
arc sim run --memory

# Use a more isolated sandbox
arc sim run --sandbox e2b

# Run multiple scenarios
arc sim run --scenario network_latency,memory_usage,cpu_usage

See Also