FAQ - mensfeld/code-on-incus GitHub Wiki

Why does my agent freeze completely with no error?

The most common cause is a full /tmp. The agent writes gigabytes of temp data (package tarballs, compiler artefacts, test output) and the backing storage silently fills up. The kernel then fails every subsequent write with ENOSPC, and tools that retry or swallow the error appear to freeze.

Quick check:

incus exec <container-name> -- df -h /tmp

If it shows 100% used, either the storage pool itself is full (check incus storage info default) or you have opted into a RAM-backed tmpfs. To switch to the unlimited disk-backed default, remove or clear the setting:

[limits.disk]
tmpfs_size = ""   # default — use container root disk, no size cap

Or opt into a RAM-backed cap if you prefer speed over capacity:

[limits.disk]
tmpfs_size = "8GiB"

See Troubleshooting → Agent Freezes Completely Mid-Task for full diagnosis steps, immediate fixes without restarting, and the TMPDIR workaround for very large builds.
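The TMPDIR workaround can be sketched roughly as follows. This is an illustrative sketch, not the documented procedure: it assumes the workspace is mounted at the default /workspace (disk-backed, so not subject to the tmpfs cap) and should be run inside the container before starting a large build.

```shell
# Sketch of the TMPDIR workaround (assumption: the workspace mount is
# disk-backed and not subject to the tmpfs size cap).
WORKSPACE="${WORKSPACE:-$PWD}"   # inside a COI container this is typically /workspace
mkdir -p "$WORKSPACE/.tmp"
export TMPDIR="$WORKSPACE/.tmp"
# Tools that honor TMPDIR (most build systems, pip, cargo) now write their
# temp files to the disk-backed workspace instead of the capped tmpfs.
echo "TMPDIR=$TMPDIR"
```

Remember to clean up or gitignore the .tmp directory afterwards, since it lives inside your workspace.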

Why do I see many orphaned firewalld zone bindings?

If coi clean --orphans reports dozens of orphaned firewalld zone bindings (stale veth entries), this is almost always caused by Docker running on the host — not by COI itself.

What happens: Every time Docker creates or destroys a container on the host, it creates a veth pair. Firewalld auto-assigns each new veth to its default zone. When the Docker container stops, the veth interface is removed but firewalld keeps the stale zone entry in its nft rules. Over time, these accumulate.

Other sources: Incus containers stopped or deleted outside of COI (e.g., via incus delete directly, or after a system reboot) can also leave behind stale entries, since COI's veth cleanup only runs through its own exit paths (coi shutdown, coi kill, or container stopped via sudo poweroff inside).

To clean up:

coi clean --orphans          # Interactive cleanup
coi clean --orphans --force  # Automatic cleanup (no prompt)

To prevent accumulation, add a cron job:

# Clean orphaned firewalld entries every 30 minutes
*/30 * * * * coi clean --orphans --force 2>/dev/null

These orphaned entries are harmless (they reference interfaces that no longer exist) but cleaning them keeps your firewalld ruleset tidy.

Why not just use Docker with a volume?

A Docker container with a volume mount gives you filesystem isolation — but that's only one layer of defense, and not the most important one for AI agents.

| Capability | Docker + Volume | COI |
| --- | --- | --- |
| Filesystem isolation | Yes | Yes |
| Credential isolation | Manual (you decide what to mount) | Automatic (nothing exposed by default) |
| Reverse shell detection | No | Yes (real-time process monitoring) |
| Data exfiltration detection | No | Yes (filesystem I/O + network monitoring) |
| Network filtering | Basic (--network none or full access) | Granular (restricted/allowlist/open modes) |
| C2 port blocking | No | Yes (nftables blocks known C2 ports) |
| Cloud metadata protection | No | Yes (169.254.169.254 blocked) |
| Supply-chain attack prevention | No | Yes (.git/hooks, .vscode, .husky read-only) |
| Automated threat response | No | Yes (pause on HIGH, kill on CRITICAL) |
| Audit logging | No | Yes (JSONL forensic logs) |
| Session resume | No | Yes (conversation history + credentials restored) |
| File permission handling | Manual UID mapping | Automatic UID shifting |
| Docker-in-Docker | Requires --privileged | Works unprivileged (systemd + nesting) |

Docker gives you a box. COI gives you a box with security monitoring — real-time kernel-level threat detection, automated response, and forensic audit logging.

Additionally, Docker application containers run a single process without an init system. COI uses Incus system containers with full systemd, which means Docker, systemd services, and other system-level tools work natively inside the container without privileged mode.

How is COI different from Docker Sandboxes?

Docker Sandboxes is a Docker Desktop feature that uses microVMs for isolation on macOS/Windows. On Linux, it falls back to traditional containers. COI is built specifically for Linux using Incus system containers:

  • No Docker Desktop needed - COI uses Incus (fully open source), while Docker Sandboxes requires Docker Desktop (not open source, commercial licensing for organizations)
  • System containers, not microVMs - One clean isolation layer vs. containers-in-VMs complexity
  • Linux-first design - Built for Linux from day one, not as an afterthought

Beyond architectural differences, COI includes a security monitoring layer that Docker Sandboxes lacks entirely. COI monitors processes for reverse shells and credential scanning, tracks filesystem I/O for data exfiltration, and uses kernel-level nftables rules to detect and block suspicious network connections (C2 ports, cloud metadata endpoints, private network access). Threats trigger automated responses — HIGH severity events pause the container, CRITICAL events kill it — with all events logged to JSONL audit files for forensic review.

How is COI different from DevContainers?

Purpose: DevContainers are for setting up development environments. COI is for securely running AI coding tools that need broad system access.

Security model:

  • DevContainers - Your code runs in the container, but typically with your host credentials mounted
  • COI - AI tools run in isolated containers without your credentials. Only your workspace is mounted, nothing else unless explicitly configured

Architecture:

  • DevContainers - Application containers (Docker) without init systems
  • COI - System containers (Incus) with full systemd, can run Docker inside

COI also includes security monitoring that doesn't exist in the DevContainers ecosystem. Real-time process, filesystem, and kernel-level network monitoring detect threats like reverse shells, data exfiltration, and C2 connections — with automated container pause/kill responses based on threat severity.

How is COI different from Distrobox?

Distrobox is designed to feel like you're not in a container. It shares the host's home directory, network stack, display server, and often the entire filesystem. This is great for running desktop applications from different distros, but it's the opposite of what you want for an AI coding agent.

| Aspect | Distrobox | COI |
| --- | --- | --- |
| Home directory | Shared with host | Isolated per slot |
| Network | Shared with host | Isolated (restricted/allowlist/open) |
| Host filesystem | Broadly accessible | Only workspace mounted |
| Credentials | Fully exposed | Never exposed by default |
| Security monitoring | None | Real-time threat detection |
| Purpose | Run apps from other distros | Secure AI agent isolation |

Running an AI agent in Distrobox is essentially the same as running it directly on your host — it can access your SSH keys, read your .env files, reach your local network, and modify git hooks. COI isolates all of these by default and adds security monitoring to detect malicious behavior in real time.

Can I run COI on Windows?

Not directly. Incus is Linux-only. However, you can:

  1. WSL2 (Windows Subsystem for Linux) - Install a Linux distribution in WSL2, then install Incus and COI inside WSL2
  2. VM - Run a Linux VM (Ubuntu, Debian, etc.) and install COI there

Note: Windows support via WSL2 is experimental and not officially tested. Linux or macOS (via Colima/Lima) are the recommended platforms.

Why does COI use Colima on macOS? Can I use Incus directly?

Incus requires a Linux kernel — LXC (Linux Containers) is a Linux kernel feature. On macOS, a Linux VM is needed to run Incus.

  • On Linux: COI uses Incus directly. No Colima, no VM layer.
  • On macOS: Colima (or Lima) provides a lightweight Linux VM where Incus runs. COI manages the VM transparently.

If you're on Linux, there's no Colima involved. See the macOS Setup Guide for macOS-specific instructions.

Can I use Lima/limactl or a plain VM instead of COI?

You can run AI tools inside any Linux VM, but a plain VM provides only isolation — no security monitoring, no credential protection, no session management.

| Capability | Plain VM (Lima/etc.) | COI |
| --- | --- | --- |
| Filesystem isolation | Yes | Yes |
| Reverse shell detection | No | Yes |
| Data exfiltration monitoring | No | Yes |
| Network threat filtering | Manual firewall setup | Automatic (firewalld + nftables) |
| Credential isolation | Manual | Automatic |
| Supply-chain protection | No | Yes (read-only .git/hooks, .vscode) |
| Session resume | No | Yes |
| Audit logging | No | Yes |
| Startup time | 30-60 seconds | 1-2 seconds |
| Resource overhead | Full VM (kernel + OS) | Container (shared kernel) |

COI's value is not just isolation — it's the security monitoring layer that detects and responds to threats in real time. A VM is a locked room; COI is a locked room with security cameras, motion sensors, and an automated response system.

Does COI prevent prompt injection attacks?

No, COI does not prevent prompt injection. What COI does protect against:

  • Credential exposure - Your SSH keys, environment variables, and API tokens are not accessible to AI tools
  • Host system access - AI tools can't access your entire filesystem, only the mounted workspace
  • Lateral movement - Network isolation prevents access to local network resources (in restricted mode)
  • Remote code execution blast radius - Built-in firewalld networking limits access to private networks and metadata services in restricted mode, and allowlist mode can constrain egress to approved domains/IPs, reducing (but not eliminating) data exfiltration and command-and-control risk
  • Persistent damage - Ephemeral containers mean any malicious modifications are discarded
  • Real-time threat detection - Security monitoring detects reverse shells, data exfiltration attempts, and malicious patterns, automatically pausing or killing the container (see Security Monitoring)

What COI doesn't protect against:

  • Prompt injection - Malicious prompts can still trick the AI into generating harmful code, but even if the AI goes rogue, damage is limited to your workspace by filesystem isolation and network controls
  • API key leakage via AI - If you give the AI your API key, it could be prompted to send it elsewhere
  • Insecure code generation - AI-generated code might have vulnerabilities (SQL injection, XSS, etc.)

Best practices:

  • Review AI-generated code before committing
  • Don't mount sensitive credentials into containers
  • Use network isolation (restricted/allowlist modes) to limit data exfiltration
  • Enable security monitoring ([monitoring] enabled = true) for untrusted projects
  • Review audit logs after sessions: cat ~/.coi/audit/<container-name>.jsonl
  • Commit AI changes with git hooks disabled (see Security Best Practices)
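Reviewing audit logs can be as simple as grepping the JSONL file. The sketch below fabricates two sample events so it is self-contained; the field names used here ("severity", "event") are assumptions about the log schema, so inspect a few real lines from your own logs first.

```shell
# Sketch: skim a JSONL audit log for high-severity events.
# Real logs live at ~/.coi/audit/<container-name>.jsonl; the "severity"
# field name is an assumption about the schema, not documented behavior.
LOG="$(mktemp)"   # stand-in file so this demo is self-contained
printf '%s\n' \
  '{"severity":"HIGH","event":"reverse_shell_detected"}' \
  '{"severity":"LOW","event":"file_read"}' > "$LOG"
grep '"severity":"HIGH"' "$LOG"
```

The same pattern works for any field you care about; for richer queries a tool like jq is more convenient than grep.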

What about API key security?

If the API key is for the AI tool itself (e.g., Anthropic API key for Claude):

  • Store it in your host ~/.claude/settings.json or similar config
  • COI automatically copies essential config files from the host into the container during session setup
  • The AI tool uses the key to authenticate, but it's not available to arbitrary commands in the container
  • Financial blast radius: For subscription models (e.g., Claude Pro/Max), worst case is exhausting your daily quota. For pay-per-token models, set spending caps on your API key to limit potential abuse

If you're giving API keys to the AI for it to use (e.g., AWS keys for the AI to deploy things):

  • Don't do this unless you fully trust the project and AI's capabilities
  • COI isolation prevents credential leakage to your host, but a compromised AI could still misuse those credentials
  • Use temporary/scoped credentials with minimal permissions
  • Prefer explicit mounting of credentials rather than storing them in the workspace

Can I preserve the workspace path inside the container?

Yes. By default, your project is mounted at /workspace inside the container. If you need the container workspace to use the same absolute path as on your host (useful for tools that store session data relative to the workspace directory), enable preserve_workspace_path:

# .coi/config.toml or ~/.coi/config.toml
[paths]
preserve_workspace_path = true

With this enabled, if your project is at /home/user/projects/my-app, it will be mounted at the same path inside the container.

Is this really simpler than just running Claude Code directly?

First-time setup: coi build (one time, 5-10 minutes), then coi shell - that's it!

After setup: Just cd your-project && coi shell - same simplicity as running Claude Code directly, but with:

  • ✅ Automatic file ownership (no permission issues)
  • ✅ Credential isolation (your SSH keys safe)
  • ✅ Session save/resume (continue later)
  • ✅ Parallel sessions (multiple workspaces/slots)
  • ✅ Clean environment (no host pollution)

The complexity is hidden. You get security and isolation with the same simple workflow.

Can I use Docker and Docker Compose inside COI?

Yes. COI automatically enables Docker support flags (security.nesting, security.syscalls.intercept.mknod, security.syscalls.intercept.setxattr) on session containers. Docker and Docker Compose work out of the box without any additional configuration.

Docker commands also work without sudo — the container's code user is added to the docker group, and the Docker socket is configured with the correct group ownership.

If Docker commands fail, verify that:

  1. Your COI image was built with the latest coi build
  2. You're using coi shell (not raw incus exec)

How do I add extra context files (agents, rules, configs) into my container?

Use [mounts.default] in your config to mount host directories into the container. The key rule: mount subdirectories, not the parent config directory (e.g., mount ~/.claude/skills, not ~/.claude).

See the Configuration — Mounting Additional Files section for full examples, common patterns (Claude skills/commands/plugins, opencode agents), and security guidance.
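As an illustration of that rule only — the entry syntax below is an assumption, not the documented format, so check the Configuration page before copying it:

```toml
# .coi/config.toml (hypothetical sketch; entry syntax is an assumption,
# see the Configuration page for the real format)
[mounts.default]
# Mount the subdirectory, never the parent config directory:
"~/.claude/skills" = "~/.claude/skills"
```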

What is Incus? (Is it the same as tmux?)

No, Incus is not tmux. They're completely different tools:

Incus - Linux container and VM manager (like Docker, but for system containers)

  • Manages containers with full operating systems inside
  • Provides isolation, networking, storage, etc.
  • COI uses Incus to create isolated environments

tmux - Terminal multiplexer for managing shell sessions

  • Lets you detach/reattach terminal sessions
  • COI uses tmux inside containers to manage AI tool sessions

In COI: Incus creates the container, tmux runs inside it to manage your session with the AI tool.

Why should I trust this?

Fair question! Here's what makes COI trustworthy:

Open Source - Full source code at github.com/mensfeld/code-on-incus (MIT license)

  • Review the code yourself
  • Community can audit and contribute
  • No hidden behavior

Transparent architecture:

  • Uses standard Linux tools (Incus, tmux, systemd)
  • No custom daemons or proprietary components
  • Easy to inspect running containers (incus list, incus exec)

Security by design:

  • Credentials isolated by default (not mounted unless you configure it)
  • Network isolation with firewalld (blocks private networks by default)
  • Workspace-only mounting (AI can't access your entire filesystem)

Active development - Regular updates, responsive to issues, community-driven improvements

Do I need to give COI "full access"?

COI itself doesn't need "full access" to anything. Here's what actually happens:

COI requires:

  • Incus permissions (you must be in incus-admin group)
  • Access to your workspace directory (the project you're working on)
  • Optional: firewalld sudo for network isolation (you can set [network] mode = "open" in config to avoid this)
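The config tweak mentioned in the last point looks like this (same config-file locations as the other examples on this page):

```toml
# .coi/config.toml or ~/.coi/config.toml
[network]
mode = "open"   # avoids the firewalld sudo requirement, but gives up network isolation
```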

What AI tools can (and cannot) access:

  • Your workspace only - The project directory you explicitly mount
  • Container filesystem - Temporary files that get deleted (ephemeral mode)
  • Your SSH keys - Not accessible unless you explicitly mount ~/.ssh
  • Your home directory - Not accessible
  • Your environment variables - Not passed to the container unless explicitly forwarded via forward_env config
  • Your local network - Blocked by default (restricted mode)

You control what gets mounted. By default, COI is locked down.

Why run an AI agent locally instead of in the cloud?

AI coding agents are most effective when they can:

  • Access your actual project files and directory structure
  • Run your test suite and build tools
  • Interact with local services (databases, APIs)
  • Use your project's Docker Compose setup
  • Work with low latency and no network dependency

Running in the cloud adds latency, requires syncing files, and complicates access to local development services. COI's approach is: run locally but contained — you get the convenience of local execution with the safety of isolation and security monitoring.

Can I use COI with local/self-hosted AI models?

COI supports multiple AI coding tools through its extensible tool interface. Currently supported:

  • Claude Code (default) — Anthropic's CLI agent
  • opencode — open-source AI coding agent

Coming soon:

  • Aider — AI pair programming tool
  • Cursor (CLI mode)

See Supported Tools for the full list and configuration details.

Adding support for other tools (including those backed by local models like Ollama) is possible by implementing the tool interface. If you'd like to see a specific tool supported, open an issue.

Quick Comparison: COI vs. Other Tools

| Tool | Purpose | Credentials Isolated | Security Monitoring | AI Tool Support | Session Management | Best For |
| --- | --- | --- | --- | --- | --- | --- |
| COI | AI coding isolation | ✅ Yes (default) | ✅ Yes (process, filesystem, network) | Built-in | Auto save/resume | Running AI coding tools securely |
| Docker + Volume | Basic isolation | ⚠️ Manual | ❌ No | Manual | Manual | Simple filesystem isolation |
| Docker Sandboxes | AI tool isolation | ✅ Yes | ❌ No | Limited | Manual | macOS/Windows users (requires Docker Desktop) |
| DevContainers | Dev environment | ❌ No (typically mounted) | ❌ No | Manual | Manual | Reproducible development environments |
| Distrobox | Desktop apps/dev | ❌ No (shares home) | ❌ No | Manual | Manual | Running apps from different distros |
| Plain VM | General isolation | ⚠️ Manual | ❌ No | Manual | Manual | Full OS isolation without monitoring |
| Bare metal | Direct execution | ❌ No (full access) | ❌ No | Manual | None | Maximum performance, trusted environments |

Choose COI if: You want AI to modify code without credential exposure, with real-time security monitoring, automatic session management, and strong isolation.

More Questions?

Join the COI community on Slack for live discussion and support.