# Troubleshooting
This guide helps you resolve common issues when installing and using Coda.
If you encounter errors like:
- `dns error: failed to lookup address information`
- `client error (Connect)`
- Failed package downloads
Common causes and solutions:
- Some VPNs interfere with package downloads; disable the VPN and try again.
- Check for lingering proxy settings with `env | grep -i proxy`. Clear them if they are not needed (`unset HTTP_PROXY HTTPS_PROXY NO_PROXY`) and restart your terminal session.
- If you are behind a corporate firewall, configure the proxy explicitly: `export HTTP_PROXY=http://your-proxy:port` and `export HTTPS_PROXY=http://your-proxy:port`.
- Flush the DNS cache:
  - macOS: `sudo dscacheutil -flushcache`
  - Linux: `sudo systemctl restart systemd-resolved`
  - Windows: `ipconfig /flushdns`
- Clear the uv cache and reinstall: `uv cache clean`, then `rm -rf .venv && uv sync`.
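If the failures persist after these steps, it can help to confirm that DNS resolution and HTTPS access work at all from your shell before re-running the install. A minimal sketch, using `pypi.org` as a stand-in for whatever index or proxy your setup actually talks to:

```bash
# Check DNS resolution and HTTPS reachability from the current shell.
# pypi.org is only an example endpoint; substitute your package index or proxy host.
nslookup pypi.org
curl -sSf -o /dev/null https://pypi.org/simple/ && echo "HTTPS OK" || echo "HTTPS failed"
```

If the `curl` check fails while a browser on the same machine works, a proxy or VPN setting in the shell environment is the usual culprit.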
Error: `ImportError` or compatibility issues

Solution: ensure you have Python 3.11 or higher; `python --version` should show 3.11.x or higher.
If you have multiple Python versions installed, explicitly use the correct one: `python3.11 -m pip install uv`, then `uv sync`.
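If uv itself keeps resolving to the wrong interpreter, you can also recreate the environment against an explicit Python version. A minimal sketch, assuming a uv release recent enough to support the `--python` flag on `uv venv`:

```bash
# Recreate the project environment against an explicit interpreter version.
rm -rf .venv
uv venv --python 3.11   # assumes your uv release supports the --python flag
uv sync
uv run python --version   # should now report 3.11.x or newer
```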
Error: permission denied during installation

Solution: do not use `sudo` with uv. Instead, make sure your user has the proper permissions on its data directories, for example `chmod -R 755 ~/.local/share/`.
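If a previous `sudo` run left root-owned files behind, fixing ownership is usually more effective than loosening modes. A hedged sketch (the `~/.local/share/uv` path is an assumption; use whichever directory the error message actually names):

```bash
# Inspect ownership of the user-level data directories first.
ls -ld ~/.local/share ~/.local/share/uv 2>/dev/null
# Reclaim ownership if an earlier sudo run left root-owned files behind.
sudo chown -R "$USER" ~/.local/share/uv   # path is an assumption; use the directory named in the error
```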
Error: `ValueError: compartment_id is required`

Solution: set the compartment ID in one of two ways:
- Environment variable: `export OCI_COMPARTMENT_ID="ocid1.compartment.oc1..xxxxxxxx"`
- Config file (`~/.config/coda/config.toml`): add `compartment_id = "ocid1.compartment.oc1..xxxxxxxx"` under the `[providers.oci_genai]` section.
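To see which of the two sources is actually populated in your current shell, a quick check like the following can help (the config path is the one shown above):

```bash
# Show whether the environment variable is set and whether the config file has the key.
echo "OCI_COMPARTMENT_ID=${OCI_COMPARTMENT_ID:-<not set>}"
grep -n "compartment_id" ~/.config/coda/config.toml 2>/dev/null || echo "no compartment_id found in the config file"
```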
Error: `ConfigFileNotFound: OCI config file not found`

Solution:
- Ensure `~/.oci/config` exists
- Run `oci setup config` to create it
- Verify the OCI CLI installation: `oci --version`
Error: `ServiceError: Authentication failed`

Solution:
- Check the OCI config file format
- Verify the key file exists and has the correct permissions: `chmod 600 ~/.oci/private_key.pem`
- Test the OCI CLI: `oci iam region list`
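As an end-to-end check of the key setup, something like the following can help; the key filename is just the example used above, so substitute the `key_file` path from your `~/.oci/config`:

```bash
# Restrict permissions on the config and key (loose permissions trigger SDK/CLI warnings),
# confirm the key parses as a valid RSA private key, then test authentication.
chmod 600 ~/.oci/config ~/.oci/private_key.pem   # key filename is the example used above
openssl rsa -in ~/.oci/private_key.pem -check -noout
oci iam region list
```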
Error: `No models available for provider oci-genai`
Solution:
- Verify compartment has access to GenAI service
- Check region configuration
- Ensure models are enabled in your tenancy
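You can also ask Coda directly which models it can see for the provider, using the `--provider` and `--list-models` flags shown in the diagnostics section below. The provider name used here is an assumption based on the `[providers.oci_genai]` config section above:

```bash
# List the models Coda can reach for the OCI GenAI provider.
# The provider name is an assumption; the config section above uses oci_genai.
uv run coda --provider oci_genai --list-models
```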
Error: `httpx.ConnectTimeout: timed out`
Solution:
- Check internet connection
- Verify provider endpoints are accessible
- Try different models or providers
- Enable debug mode: `uv run coda --debug`
Error: `AuthenticationError: Invalid API key`
Solution (for OpenAI, Anthropic, etc.):
- Verify API key is correct
- Check the environment variable name: `echo $OPENAI_API_KEY`, `echo $ANTHROPIC_API_KEY`
- Ensure no extra spaces or characters
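Stray whitespace is easy to miss when eyeballing a key. A small sketch that makes trailing spaces or newlines visible (shown for `OPENAI_API_KEY`; the same works for any of the variables):

```bash
# Print the key between visible delimiters so leading/trailing whitespace stands out.
printf '[%s]\n' "$OPENAI_API_KEY"
# A stray space or newline pasted into the value also shows up as an unexpected length.
printf '%s' "$OPENAI_API_KEY" | wc -c
```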
Error: `bash: coda: command not found`

Solution: run Coda through uv, or activate the virtual environment first:
- `uv run coda`
- or `source .venv/bin/activate`, then `coda`
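To confirm where (or whether) the `coda` entry point is installed before reaching for either workaround, a quick check:

```bash
# Is coda on PATH at all?
command -v coda || echo "coda is not on PATH"
# Is the entry point installed inside the project virtual environment?
ls .venv/bin/coda 2>/dev/null && echo "found in .venv; activate it or use 'uv run coda'"
```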
Problem: Text appears in chunks or seems frozen
Solution:
- Try different models
- Check network stability
- Enable debug mode to see detailed logs
- Restart Coda
Problem: Broken text formatting or colors
Solution:
- Check terminal compatibility
- Try different themes: `/theme`
- Update terminal emulator
- Set the terminal type explicitly: `export TERM=xterm-256color`
Error: config file not loading

Solution:
- Check the file location: `~/.config/coda/config.toml`
- Verify the TOML syntax (note that `~` is not expanded automatically inside Python, so expand it explicitly): `python -c "import tomlkit, os; tomlkit.load(open(os.path.expanduser('~/.config/coda/config.toml')))"`
- Check file permissions: `chmod 644 ~/.config/coda/config.toml`
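If the file is missing or hopelessly mangled, you can recreate a minimal one from scratch. This is a sketch only, using the keys that appear elsewhere in this guide with placeholder values, and it overwrites any existing file:

```bash
# Recreate a minimal config with placeholder values (this overwrites any existing file).
mkdir -p ~/.config/coda
cat > ~/.config/coda/config.toml <<'EOF'
[providers.oci_genai]
compartment_id = "ocid1.compartment.oc1..xxxxxxxx"   # placeholder OCID

[tools]
require_confirmation = true

[agent]
max_iterations = 10   # default mentioned in this guide
EOF
```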
Problem: Theme not loading or broken display
Solution:
- Reset to the default theme: `/theme default`
- List available themes: `/theme list`
- Check terminal color support
- Try a different theme variant (e.g., `monokai_dark` vs `monokai_light`)
- Ensure the terminal supports 256 colors: `echo $TERM`
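Beyond `echo $TERM`, `tput` reports the number of colors the terminal actually advertises, which is a more direct check:

```bash
# Report the terminal type and the number of colors it advertises.
echo "$TERM"
tput colors   # 256 (or more) means 256-color themes should render correctly
```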
Problem: AI can't use tools or tools fail
Solution:
- Check that agent mode is enabled: `/agent status`
- Enable agent mode: `/agent on`
- Verify the provider supports tools: some models do not support function calling
- Check tool status: `/tools status`
- Enable debug mode to see tool calls: `uv run coda --debug`
Problem: Tools fail with permission errors
Solution:
- Check file permissions in working directory
- Verify the tool is enabled: `/tools info <tool_name>`
- Enable confirmation mode for safety: set `require_confirmation = true` under `[tools]` in the config file
Problem: Tools execute but return errors
Solution:
- Check working directory is correct
- Verify required files/programs exist
- Review tool output in debug mode
- Disable problematic tools: `/tools disable <tool_name>`
Problem: AI responses take too long
Solution:
- Try smaller/faster models
- Check network latency
- Switch providers temporarily
- Use one-shot mode for simple queries: `uv run coda --one-shot "question"`
- Disable unnecessary tools to reduce processing
Problem: Coda using too much CPU/memory
Solution:
- Close unused sessions: `/session delete old-sessions`
- Restart Coda periodically
- Check for memory leaks in debug mode
- Reduce the agent's `max_iterations` in the config: set `max_iterations = 5` under `[agent]` (lower than the default of 10)
Get detailed logging and run basic diagnostics:
- Enable debug logging: `uv run coda --debug`
- Check the installed version: `uv run coda --version`
- Test basic functionality: `uv run coda --one-shot "Hello"`
- List available models: `uv run coda --list-models`
- Check provider status: `uv run coda --provider ollama --list-models`

Check the logs in `~/.local/share/coda/logs/`, or use debug mode for real-time output.
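When preparing an issue report, it is handy to capture a debug run to a file in one go. A minimal sketch, assuming the `--debug` and `--one-shot` flags documented above can be combined:

```bash
# Run a one-shot query with debug logging and keep a copy of the output for the report.
uv run coda --debug --one-shot "Hello" 2>&1 | tee coda-debug.log
```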
If issues persist:
- Check existing issues: GitHub Issues
- Search the wiki: use the search function above
- Create a new issue with:
  - Complete error message
  - System information (OS, Python version)
  - Steps to reproduce
  - Debug output (use the `--debug` flag)
- Join discussions: GitHub Discussions
When reporting issues, please include:
**Environment:**
- OS: [e.g., macOS 14.0, Ubuntu 22.04]
- Python version: [output of `python --version`]
- Coda version: [output of `uv run coda --version`]
**Problem:**
[Describe what you expected vs what happened]
**Error message:**
[Paste full error message here]
**Steps to reproduce:**
1. [First step]
2. [Second step]
3. [And so on...]
**Debug output:**
[Output from `uv run coda --debug`]
See also: Configuration, Getting Started, Development Guide