Building Agents - supercog-ai/community GitHub Wiki

The following are some useful techniques for building your agents.

The overview described the basic components of an agent: LLM, instructions, and tools. But agents have a few other components:

User Instructions

Agents typically work from two different types of instructions. The first is the agent instructions: these construct the system prompt sent to the LLM, which is the primary instruction that the LLM follows. The agent instructions are written by the creator of the agent.

The second type is prompts, or user instructions. These are the session-specific instructions given by the user who is running the agent. When you type into the chat box, you are sending the agent "user instructions".

So if you have created an agent with instructions to summarize files, then the "user instruction" would be a specific request to operate on a single named file.
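The split described above maps naturally onto the system/user message roles used by chat-style LLM APIs. Here is a minimal sketch of that mapping; the variable names and the file name are illustrative, not part of the Supercog API:

```python
# Agent instructions: written once by the agent's creator.
agent_instructions = (
    "You are a file-summarizing agent. "
    "Summarize any file the user names."
)

# User instruction: typed by the person running the agent, per session.
user_instruction = "Summarize the file quarterly_report.pdf"

# How the two would typically be combined in a chat-completion request.
messages = [
    {"role": "system", "content": agent_instructions},
    {"role": "user", "content": user_instruction},
]
```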

To make agents easier to test and operate, Supercog lets you save a list of User Instructions as part of the agent:

Just click "New" and type instructions in the text box. You can label each instruction if you like. This is a convenient way to save "prompts" that you may want to give repeatedly to your agent.

To send the instruction to the agent, just click the ▶ button.

Developing Agents

When I am building agents, I typically work out the instructions interactively via the Supercog agent, then I save pieces of those instructions into User Instructions. This makes it easy to test operations in isolation and work out the best wording for each.

Once I have the instructions worked out, then I will consolidate the pieces into a single block in Agent Instructions.

Formatting Agent Instructions

I often use list style when formatting agent instructions:

1. Do this first
2. Then do this
3. Then finally do this

This is easy to read and easy for the agent to understand. One little trick is that you can use '#' to "comment out" lines and they will be skipped when the agent runs:

### This agent operates from a simple list of instructions.
1. Do this first
# 2. Then do this (this step will be omitted)
3. Then finally do this
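The comment trick above amounts to dropping any line that starts with '#' before the instructions reach the LLM. A small sketch of that preprocessing step (this is an assumption about the behavior, not Supercog's actual implementation):

```python
def strip_comments(instructions: str) -> str:
    """Drop lines starting with '#' so commented-out steps are skipped."""
    kept = [
        line
        for line in instructions.splitlines()
        if not line.lstrip().startswith("#")
    ]
    return "\n".join(kept)


raw = """### This agent operates from a simple list of instructions.
1. Do this first
# 2. Then do this (this step will be omitted)
3. Then finally do this"""

print(strip_comments(raw))
```

Running this prints only steps 1 and 3, matching the behavior described above.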

Agent Memory and Reflection

Agents will often encounter and solve problems when they are running. If a tool function call returns an error, the agent can often determine how to modify the input parameters to avoid the error.
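The error-recovery behavior described above is typically a feedback loop: the tool's error message is returned to the LLM, which then retries with adjusted arguments. A toy sketch of that loop, where `call_llm` is a stand-in for a real model call and `read_file` is a hypothetical tool:

```python
def read_file(path):
    """Toy tool: only accepts absolute paths."""
    if not path.startswith("/"):
        raise ValueError("path must be absolute")
    return f"contents of {path}"


def call_llm(messages):
    # Stand-in for a real LLM: when shown the error text,
    # it "decides" to retry with a corrected absolute path.
    last = messages[-1]
    if last["role"] == "tool" and "must be absolute" in last["content"]:
        return {"tool": "read_file", "args": {"path": "/tmp/report.txt"}}
    return {"tool": "read_file", "args": {"path": "report.txt"}}


def run_agent(user_request, max_turns=3):
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_turns):
        action = call_llm(messages)
        try:
            return read_file(**action["args"])
        except ValueError as err:
            # Feed the error back so the model can adjust its arguments.
            messages.append({"role": "tool", "content": str(err)})
    return None
```

The first tool call fails, the error is appended to the conversation, and the second attempt succeeds: this is the same adjust-and-retry pattern agents use at run time.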

To help agents learn and improve over time, they can record facts into long-term memory. These facts are made available to the agent whenever it runs.

You can manually add facts to the agent by adding them on the Memory tab:

You can also generate learning facts by having the Agent reflect on its own performance. Whenever a chat session is complete, you can click the Reflect button:

This asks the agent to reflect on its performance and synthesize any facts that it should learn for the future. You can review these facts and choose which ones to save.

Note that memory items are injected into the Agent LLM context, and so they will consume tokens when the agent runs.
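One plausible way to picture this injection (an assumption about the mechanism, not Supercog's actual internals) is memory facts being appended to the system prompt, where every fact adds to the token count:

```python
def build_system_prompt(agent_instructions, memory_facts):
    """Append saved memory facts to the agent's system prompt."""
    if not memory_facts:
        return agent_instructions
    facts = "\n".join(f"- {fact}" for fact in memory_facts)
    return f"{agent_instructions}\n\nKnown facts from memory:\n{facts}"


prompt = build_system_prompt(
    "Summarize files the user names.",
    [
        "The user prefers bullet-point summaries.",
        "PDF files should be converted to text first.",
    ],
)
print(prompt)
```

Because the facts travel inside the prompt on every run, a large memory list means a larger context and higher token usage, which is why it pays to review and prune saved facts.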