configuration - MadBomber/aia GitHub Wiki

Configuration

AIA offers flexible configuration through command-line options, environment variables, configuration files, and prompt file directives.

Configuration Precedence

AIA determines configuration settings using this order (highest to lowest priority):

  1. Embedded config directives (in prompt files): //config model = gpt-4
  2. Command-line arguments: --model gpt-4
  3. Environment variables: export AIA_MODEL=gpt-4
  4. Configuration files: ~/.aia/config.yml
  5. Default values
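The lookup order above amounts to checking each layer from highest to lowest priority and taking the first value found. A minimal sketch of that resolution logic (the layer hashes and `resolve` helper are illustrative, not AIA internals):

```ruby
# Resolve one setting by walking the layers from highest to lowest
# priority and returning the first layer that defines the key.
def resolve(key, directives:, cli:, env:, file:, defaults:)
  [directives, cli, env, file, defaults].each do |layer|
    return layer[key] if layer.key?(key)
  end
  nil
end

setting = resolve(:model,
  directives: {},                    # no //config directive in the prompt
  cli:        { model: "gpt-4" },    # --model gpt-4
  env:        { model: "gpt-4o" },   # AIA_MODEL=gpt-4o
  file:       { model: "gpt-3.5" },  # ~/.aia/config.yml
  defaults:   { model: "gpt-4o-mini" })
# => "gpt-4" (the CLI flag wins when no directive is present)
```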

Configuration Methods

1. Command-Line Arguments

```shell
aia --model gpt-4 --temperature 0.8 --chat my_prompt
```

2. Environment Variables

```shell
export AIA_MODEL=gpt-4
export AIA_TEMPERATURE=0.8
export AIA_PROMPTS_DIR=~/my-prompts
```

3. Configuration File

Create ~/.aia/config.yml:

```yaml
model: gpt-4
temperature: 0.8
prompts_dir: ~/my-prompts
chat: false
verbose: true
```
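Because the file is plain YAML, loading it in Ruby is straightforward. A minimal sketch, assuming a file shaped like the example above (the tilde expansion is illustrative, not AIA's actual loader):

```ruby
require "yaml"

# Sample config matching the example above.
yaml = <<~YAML
  model: gpt-4
  temperature: 0.8
  prompts_dir: ~/my-prompts
  chat: false
  verbose: true
YAML

config = YAML.safe_load(yaml)
# YAML gives back native types: strings, floats, booleans.
config["model"]        # => "gpt-4"
config["temperature"]  # => 0.8

# Shell-style "~" is not expanded by YAML; expand it explicitly.
config["prompts_dir"] = File.expand_path(config["prompts_dir"])
```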

4. Embedded Directives (in prompt files)

```
//config model = gpt-4
//config temperature = 0.8
//config chat = true

Your prompt content here...
```
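A `//config` directive is just a `key = value` line at the start of a prompt file. Separating directives from prompt text can be sketched as a line-by-line scan (the `extract_config` helper and its parsing details are illustrative, not AIA's code):

```ruby
# Split a prompt file's text into its //config settings and its body.
def extract_config(prompt_text)
  config = {}
  body   = []
  prompt_text.each_line do |line|
    if line.chomp =~ %r{\A//config\s+(\w+)\s*=\s*(.+)\z}
      config[$1] = $2.strip   # directive line: record the setting
    else
      body << line            # everything else is prompt content
    end
  end
  [config, body.join]
end

prompt = <<~PROMPT
  //config model = gpt-4
  //config temperature = 0.8
  Your prompt content here...
PROMPT

config, body = extract_config(prompt)
config  # => {"model" => "gpt-4", "temperature" => "0.8"}
```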

Essential Configuration Options

The most commonly used configuration options:

| Option | Default | Description |
|--------|---------|-------------|
| model | gpt-4o-mini | AI model to use |
| prompts_dir | ~/.prompts | Directory containing prompts |
| out_file | temp.md | Default output file |
| temperature | 0.7 | Model creativity (0.0-1.0) |
| chat | false | Start in chat mode |
| role | (none) | Role/system prompt to use |
| verbose | false | Show detailed output |

Complete Configuration Reference

| Config Item Name | CLI Options | Default Value | Environment Variable |
|------------------|-------------|---------------|----------------------|
| adapter | --adapter | ruby_llm | AIA_ADAPTER |
| aia_dir | | ~/.aia | AIA_DIR |
| append | -a, --append | false | AIA_APPEND |
| chat | --chat | false | AIA_CHAT |
| clear | --clear | false | AIA_CLEAR |
| config_file | -c, --config_file | ~/.aia/config.yml | AIA_CONFIG_FILE |
| debug | -d, --debug | false | AIA_DEBUG |
| embedding_model | --em, --embedding_model | text-embedding-ada-002 | AIA_EMBEDDING_MODEL |
| frequency_penalty | --frequency_penalty | 0.0 | AIA_FREQUENCY_PENALTY |
| fuzzy | -f, --fuzzy | false | AIA_FUZZY |
| image_quality | --iq, --image_quality | standard | AIA_IMAGE_QUALITY |
| image_size | --is, --image_size | 1024x1024 | AIA_IMAGE_SIZE |
| image_style | --style, --image_style | vivid | AIA_IMAGE_STYLE |
| log_file | -l, --log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
| markdown | --md, --markdown | true | AIA_MARKDOWN |
| max_tokens | --max_tokens | 2048 | AIA_MAX_TOKENS |
| model | -m, --model | gpt-4o-mini | AIA_MODEL |
| next | -n, --next | nil | AIA_NEXT |
| out_file | -o, --out_file | temp.md | AIA_OUT_FILE |
| parameter_regex | --regex | `'(?-mix:(\[[A-Z _\|]+\]))'` | AIA_PARAMETER_REGEX |
| pipeline | --pipeline | [] | AIA_PIPELINE |
| presence_penalty | --presence_penalty | 0.0 | AIA_PRESENCE_PENALTY |
| prompt_extname | | .txt | AIA_PROMPT_EXTNAME |
| prompts_dir | -p, --prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
| refresh | --refresh | 7 (days) | AIA_REFRESH |
| require_libs | --rq, --require | [] | AIA_REQUIRE_LIBS |
| role | -r, --role | | AIA_ROLE |
| roles_dir | | ~/.prompts/roles | AIA_ROLES_DIR |
| roles_prefix | --roles_prefix | roles | AIA_ROLES_PREFIX |
| speak | --speak | false | AIA_SPEAK |
| speak_command | | afplay | AIA_SPEAK_COMMAND |
| speech_model | --sm, --speech_model | tts-1 | AIA_SPEECH_MODEL |
| system_prompt | --system_prompt | | AIA_SYSTEM_PROMPT |
| temperature | -t, --temperature | 0.7 | AIA_TEMPERATURE |
| terse | --terse | false | AIA_TERSE |
| tool_paths | --tools | [] | AIA_TOOL_PATHS |
| allowed_tools | --at, --allowed_tools | nil | AIA_ALLOWED_TOOLS |
| rejected_tools | --rt, --rejected_tools | nil | AIA_REJECTED_TOOLS |
| top_p | --top_p | 1.0 | AIA_TOP_P |
| transcription_model | --tm, --transcription_model | whisper-1 | AIA_TRANSCRIPTION_MODEL |
| verbose | -v, --verbose | false | AIA_VERBOSE |
| voice | --voice | alloy | AIA_VOICE |

Configuration Examples

Basic Setup

```shell
# ~/.bashrc or ~/.zshrc
export AIA_MODEL=gpt-4o-mini
export AIA_PROMPTS_DIR=~/my-prompts
export AIA_VERBOSE=true
export AIA_OUT_FILE=./temp.md
```
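Each environment variable is simply the option name upper-cased with an `AIA_` prefix, so the mapping between the two forms is mechanical. A small sketch (the helper names are hypothetical):

```ruby
# Convert between option names and their AIA_* environment variables.
def env_key(option)
  "AIA_#{option.to_s.upcase}"
end

def option_for(env_var)
  env_var.sub(/\AAIA_/, "").downcase
end

env_key(:prompts_dir)      # => "AIA_PROMPTS_DIR"
option_for("AIA_VERBOSE")  # => "verbose"
```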

Advanced Configuration File

```yaml
# ~/.aia/config.yml
model: gpt-4
temperature: 0.8
max_tokens: 4000
prompts_dir: ~/my-prompts
verbose: true
markdown: true
shell: true
erb: true

# Tools configuration
tool_paths:
  - ~/my-tools
  - shared
allowed_tools:
  - read_file
  - write_file
  - run_shell_command

# Audio settings
speak: false
voice: alloy
speak_command: afplay
```
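The `allowed_tools` and `rejected_tools` lists act as an allow/deny filter over whatever tools are discovered on `tool_paths`. A minimal sketch of that filtering idea (the `filter_tools` helper is illustrative, not AIA's implementation):

```ruby
# Gate a discovered tool list with optional allow and deny lists.
# allowed: nil means "allow everything"; rejected: nil means "deny nothing".
def filter_tools(tools, allowed: nil, rejected: nil)
  tools = tools.select { |t| allowed.include?(t) }  if allowed
  tools = tools.reject { |t| rejected.include?(t) } if rejected
  tools
end

filter_tools(%w[read_file write_file run_shell_command delete_file],
             allowed: %w[read_file write_file run_shell_command])
# => ["read_file", "write_file", "run_shell_command"]
```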

Dynamic Configuration in Prompts

```
//config model = gpt-4
//config temperature = 0.9
//config max_tokens = 4000
//config out_file = analysis_result.md

Analyze the following data and provide insights...
```

For advanced configuration, see Prompt Directives and Shell Integration.