Security

Supercog is designed with privacy and security best practices as priorities.

Agent security

Agents run in a secure environment that protects them from malicious access. Only authorized users can run or view your agents.

Data security

Data you make available to your agent via authenticated tools is only available to agents that you run. By default, Connections are private to your account.

You can elect to make a Connection a Shared Connection, which will make it usable by other users and agents in your Organization. For example, you can configure a JIRA Connection and let other people in your organization use it for JIRA access. This can be useful for shared systems in a company.

Connection security

Connection credentials (passwords, API tokens, etc.) are readable only by the service that executes agents. Once a Connection is created, the Supercog web interface cannot read its credentials. If you elect to share a Connection, other users still cannot see the credentials even though their agents can use the Connection.
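
One way to picture this split is a secrets store where the web tier can write credentials but never read them back, while only the agent-execution service can fetch them. The sketch below is purely illustrative and assumes AWS Secrets Manager and the helper names shown; it is not a description of Supercog's actual implementation.

```python
# Illustrative only: a write-only credential flow. The use of AWS Secrets
# Manager, the secret naming scheme, and the permission split described in
# the docstrings are assumptions made for this sketch.
import json
import boto3

def store_connection_credentials(connection_id: str, credentials: dict) -> None:
    """Called by the web tier when a Connection is created.

    The web tier's role is assumed to allow secretsmanager:CreateSecret
    but not secretsmanager:GetSecretValue, so credentials can be written
    through the web interface but never read back from it.
    """
    client = boto3.client("secretsmanager")
    client.create_secret(
        Name=f"connections/{connection_id}",
        SecretString=json.dumps(credentials),
    )

def load_connection_credentials(connection_id: str) -> dict:
    """Called only by the agent-execution service, whose role is assumed
    to allow secretsmanager:GetSecretValue on connections/* secrets.
    """
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=f"connections/{connection_id}")
    return json.loads(response["SecretString"])
```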

Similarly, although Connection credentials are readable by tool functions, the Agents themselves cannot see them. Because Agents have "real world" access, such as sending email, this prevents Agent actions from disclosing credentials.
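
In practice this means secrets are resolved inside the tool function at call time, and only the tool's results ever enter the Agent's (LLM's) context. The minimal sketch below assumes a hypothetical `JiraTool` class and the `load_connection_credentials` helper from the sketch above; it is not Supercog's real tool API.

```python
# Illustrative sketch: credentials stay inside the tool function and only
# non-sensitive results are returned to the Agent. JiraTool and
# load_connection_credentials (defined in the previous sketch) are
# hypothetical names used for this sketch.
import requests

class JiraTool:
    def __init__(self, connection_id: str):
        # The Agent only ever sees this opaque id and the tool's outputs.
        self.connection_id = connection_id

    def search_issues(self, jql: str) -> list[dict]:
        # Credentials are fetched at call time and discarded when the
        # HTTP request completes; they are never part of the return value.
        creds = load_connection_credentials(self.connection_id)
        resp = requests.get(
            f"{creds['base_url']}/rest/api/2/search",
            params={"jql": jql},
            auth=(creds["email"], creds["api_token"]),
            timeout=30,
        )
        resp.raise_for_status()
        # Only issue keys and summaries flow back into the Agent's context.
        return [
            {"key": issue["key"], "summary": issue["fields"]["summary"]}
            for issue in resp.json().get("issues", [])
        ]
```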

Files

Any files that you upload to Supercog are stored in a space that is private to your account (files are stored with on-disk encryption in Amazon S3). Note that files ARE readable by your agents, and the file system is shared by all agents that you run. This is a useful mechanism for sharing data between agents. Although Agents generally need a tool to access files, you should assume that any agent you run could access any file.
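
For illustration, the sketch below shows how a per-account, encrypted file space of this sort might look on top of Amazon S3 server-side encryption. The bucket name, key layout, and helper functions are assumptions for the sketch, not Supercog's actual storage code.

```python
# Illustrative sketch: every object is encrypted at rest (SSE-S3) and
# scoped to one user's prefix, while all of that user's agents read and
# write the same prefix. Bucket name and key layout are assumptions.
import boto3

BUCKET = "supercog-user-files"  # hypothetical bucket name

def upload_user_file(user_id: str, filename: str, data: bytes) -> str:
    s3 = boto3.client("s3")
    key = f"{user_id}/{filename}"  # private, per-account prefix
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=data,
        ServerSideEncryption="AES256",  # on-disk encryption
    )
    return key

def list_user_files(user_id: str) -> list[str]:
    # Any agent the user runs lists the same prefix, which is what makes
    # the file space a convenient (but shared) channel between agents.
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"{user_id}/")
    return [obj["Key"] for obj in resp.get("Contents", [])]
```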

Because of this design, be careful about what data you upload to Supercog and how long you leave it in storage. Since Agents have real-world access, an agent could read sensitive data from a file and, for example, send it over email. This is very unlikely to happen "by accident", but you should be aware of the risk.

In the future we will add more fine-grained control over which files are accessible to which agents in order to minimize these risks.

Data privacy

Supercog has a number of features to protect data privacy:

  1. Your data is never used to train any of our AI models.
  2. Most of our LLM partners (OpenAI, Anthropic) have similar terms stating that API data is not used for training.
  3. You can elect to use only open-source models, which never share your data with a third-party LLM provider.
  4. Supercog employs a data plane architecture where the LLM is used mostly for orchestration purposes, and actual operational data is confined to a data pipeline that is not exposed to the LLM. (This isn't a 100% firewall, as the LLM will often examine selected records to determine schemas and data types.)
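
The data plane split in item 4 can be pictured with a minimal sketch, assuming a pandas pipeline and hypothetical function names: the LLM receives only a schema preview plus a few sample rows, and the full dataset is transformed entirely inside the pipeline.

```python
# Illustrative sketch of the data-plane split: only schema_preview() output
# is ever shown to the LLM; run_pipeline() applies the LLM's plan to the
# full dataset without sending further rows to the LLM. Names are hypothetical.
import pandas as pd

def schema_preview(df: pd.DataFrame, sample_rows: int = 3) -> str:
    """The only content that reaches the LLM: column names, dtypes, and a
    handful of sample records used to plan the transformation."""
    cols = ", ".join(f"{c} ({t})" for c, t in df.dtypes.astype(str).items())
    return f"columns: {cols}\nsample:\n{df.head(sample_rows).to_string()}"

def run_pipeline(df: pd.DataFrame, plan: dict) -> pd.DataFrame:
    """Executes the LLM's plan (e.g. a filter plus column selection) over
    the full dataset inside the pipeline."""
    out = df
    if "filter_column" in plan:
        out = out[out[plan["filter_column"]] == plan["filter_value"]]
    if "columns" in plan:
        out = out[plan["columns"]]
    return out

# Example flow: the orchestrating LLM is shown schema_preview(df), replies
# with a plan such as {"filter_column": "status", "filter_value": "open",
# "columns": ["id", "status"]}, and run_pipeline applies it to all rows.
```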