# Interacting with Your Ollama Agents
The Crew.ai Ollama Multi-Agent System is designed to operate locally, ensuring privacy and security. Using tools like Ollama and Open WebUI, you can easily interact with the system either via the command line or through a web-based interface. This guide walks you through both options step-by-step.
## Option 1: Command Line Interface (CLI)

The Command Line Interface (CLI) is a lightweight and efficient way to interact with your system. It’s great for:
- Quickly testing queries.
- Sending commands directly to agents.
- Monitoring responses without extra interfaces.
Follow the official [Ollama Installation Guide](https://www.ollama.ai) to set up Ollama locally.
For macOS:

```bash
brew install ollama
```

For Linux (using Docker):

```bash
docker pull ollama/ollama
```

Then start the Ollama server. On macOS:

```bash
ollama serve
```

With Docker:

```bash
docker run --name ollama -d -p 11434:11434 ollama/ollama
```
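Before sending queries, you can confirm the server is reachable. A minimal check in Python, assuming the default port (11434) and Ollama's `/api/tags` endpoint, which lists locally installed models:

```python
import requests

# Ask Ollama's /api/tags endpoint for the list of locally installed models.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Ollama is up. Installed models:", models or "none yet (try `ollama pull llama2`)")
```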
Once Ollama is running, use the CLI to send queries:
curl -X POST http://localhost:11434/api/generate \
-H "Content-Type: application/json" \
-d '{"model": "llama2", "prompt": "What is the weather today?"}'
You should see a response from the AI model; note that by default the `/api/generate` endpoint streams the reply back as newline-delimited JSON objects.
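The same request works from Python. A minimal sketch, assuming the default endpoint and setting `"stream": false` so Ollama returns the full completion as a single JSON object:

```python
import requests

# One-shot generation request; "stream": False makes Ollama return the
# whole completion in a single JSON object instead of streaming it.
payload = {
    "model": "llama2",
    "prompt": "What is the weather today?",
    "stream": False,
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```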
## Option 2: Open WebUI

Open WebUI provides a user-friendly, graphical way to interact with Crew.ai. It’s perfect for:
- Users unfamiliar with the command line.
- Visualizing agent responses.
- Adjusting system settings and monitoring performance.
Clone the Open WebUI repository:

```bash
git clone https://github.com/open-webui/open-webui.git
cd open-webui
```

Install the required packages:

```bash
pip install -r requirements.txt
```

Run the web application:

```bash
python app.py
```

Access the UI in your browser at `http://localhost:7860`.
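To confirm the UI is up, you can probe the port from Python (a plain reachability check, nothing Open WebUI-specific):

```python
import requests

# Simple reachability check against the Open WebUI port.
resp = requests.get("http://localhost:7860", timeout=5)
print("Open WebUI reachable:", resp.ok)
```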
Next, add a `webui_agent` to your Crew.ai system:
File: `multi_agent_config.yaml`

```yaml
agents:
  - name: webui_agent
    enabled: true
```
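For reference, here is a hypothetical sketch of how such a config might be read; the actual Crew.ai loader may work differently:

```python
import yaml  # pip install pyyaml

# Hypothetical loader: read multi_agent_config.yaml and report which
# agents are enabled. The real Crew.ai loader may differ.
with open("multi_agent_config.yaml") as f:
    config = yaml.safe_load(f)

enabled = [a["name"] for a in config.get("agents", []) if a.get("enabled")]
print("Enabled agents:", enabled)
```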
File: `src/agents/webui_agent.py`

```python
import requests


class WebUIAgent:
    def __init__(self):
        # Base URL of the local Open WebUI instance.
        self.base_url = "http://localhost:7860"

    def can_handle(self, query):
        # Claim any query that mentions "webui".
        return "webui" in query.lower()

    def handle_query(self, query, session):
        # Forward the query to Open WebUI and return its result along
        # with the (unchanged) session.
        payload = {"query": query}
        response = requests.post(f"{self.base_url}/api", json=payload)
        return response.json().get("result", "No response from WebUI"), session
```
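A quick usage sketch, assuming the dispatcher simply calls `can_handle` before routing and that `session` is a plain dict (both assumptions; check the Crew.ai dispatcher for the real contract):

```python
from src.agents.webui_agent import WebUIAgent

agent = WebUIAgent()
session = {}  # placeholder session object

query = "Ask the webui for the current status"
if agent.can_handle(query):
    result, session = agent.handle_query(query, session)
    print(result)
```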
Start the Crew.ai service:

```bash
uvicorn src.main:app --host 0.0.0.0 --port 8000
```
Send a query through Open WebUI and watch the response from your agents appear in the browser interface.
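You can also exercise the service directly from Python. A minimal sketch, assuming a hypothetical `/query` route on the FastAPI app (substitute the actual route defined in `src/main.py`):

```python
import requests

# "/query" is a hypothetical route; replace it with the one in src/main.py.
resp = requests.post("http://localhost:8000/query", json={"query": "webui status"})
resp.raise_for_status()
print(resp.json())
```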
## Quick Comparison

| Feature | Command Line Interface | Open WebUI |
|---|---|---|
| Ease of Use | Requires basic CLI knowledge | User-friendly |
| Resource Usage | Minimal | Slightly higher |
| Customization | Limited | Advanced options |
| Accessibility | Local machine only | Cross-device on network |
## Tips

- Use the CLI if you want fast, direct control and minimal resource use.
- Use Open WebUI if you prefer a graphical interface or need to monitor multiple agents visually.
- Experiment with both methods to find what works best for your workflow.
- Customize agents in `multi_agent_config.yaml` to expand functionality.
- Share your experience and enhancements with the community!