Configuration

This guide covers the setup steps required before running the simulator.

1. Configure LLM API Keys

Edit the config/llm_env.yml file with your API credentials:

openai:
  OPENAI_API_KEY: "your-api-key-here"
  OPENAI_API_BASE: ''
  OPENAI_ORGANIZATION: ''

azure:
  AZURE_OPENAI_API_KEY: "your-api-key-here"
  AZURE_OPENAI_ENDPOINT: "your-endpoint"
  OPENAI_API_VERSION: "your-api-version"
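These credentials are typically consumed as environment variables by the underlying LLM clients. As a minimal sketch of how the file's keys map onto the process environment (the simulator does its own loading; the tiny parser below is only for illustration and handles just the flat two-level layout shown above — with PyYAML installed, `yaml.safe_load` would do the same job):

```python
import os

def parse_flat_yaml(text):
    """Parse the simple provider -> {KEY: value} layout of llm_env.yml."""
    config, provider = {}, None
    for line in text.splitlines():
        if not line.strip() or line.strip().startswith('#'):
            continue  # skip blanks and comments
        key, _, value = line.partition(':')
        value = value.strip().strip("'\"")
        if not line.startswith((' ', '\t')):   # top-level provider name
            provider = key.strip()
            config[provider] = {}
        else:                                  # indented credential line
            config[provider][key.strip()] = value
    return config

def export_provider(config, provider):
    """Copy one provider's credentials into the process environment."""
    for key, value in config.get(provider, {}).items():
        if value:                              # skip empty placeholders
            os.environ[key] = value

sample = """openai:
  OPENAI_API_KEY: "your-api-key-here"
  OPENAI_API_BASE: ''
  OPENAI_ORGANIZATION: ''
"""
export_provider(parse_flat_yaml(sample), 'openai')
```

Empty values (like the `OPENAI_API_BASE: ''` placeholder) are left unset so the client libraries fall back to their defaults.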

2. Configure Simulator Parameters

Before running the simulator, configure the config/config_default.yml file. Key configuration sections include:

  • environment: Paths for prompts, tools, and data-scheme folders
  • description_generator: Settings for flow extraction and policies
  • event_generator: LLM and worker configurations
  • dialog_manager: Dialog system and LLM settings
  • dataset: Parameters for dataset generation

Key points to configure:

  • Update file paths in the environment section to match your project structure:

    prompt_path: 'examples/airline/wiki.md'  # Path to your agent's wiki/documentation
    tools_folder: 'examples/airline/tools'   # Path to your agent's tools
    database_folder: 'examples/airline/data' # Path to your data schema
  • Adjust LLM configurations throughout the file:

    llm:
      type: 'openai'  # or 'azure'
      name: 'gpt-4o'  # or another model name, such as 'gpt-4o-mini'
  • Configure worker settings based on your system's capabilities. These settings matter especially if you run into rate-limit responses:

    num_workers: 3    # Number of parallel workers
    timeout: 10       # Timeout in seconds
  • Set appropriate cost limits to control API usage:

    cost_limit: 30    # In dollars, for the simulator's dialog manager
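Putting the pieces together, the relevant parts of config_default.yml might look like the following sketch. The individual values are copied from the points above, but the exact nesting (which sections contain the llm, worker, and cost_limit keys) varies, so check it against your copy of the file:

```yaml
environment:
  prompt_path: 'examples/airline/wiki.md'
  tools_folder: 'examples/airline/tools'
  database_folder: 'examples/airline/data'

event_generator:
  llm:
    type: 'openai'   # or 'azure'
    name: 'gpt-4o'
  num_workers: 3     # parallel workers; lower this on rate-limit errors
  timeout: 10        # seconds

dialog_manager:
  llm:
    type: 'openai'
    name: 'gpt-4o'
  cost_limit: 30     # in dollars
```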