Interacting with your ollama agents - curlyphries/Crew.AI-Ollama-Multi-Agent-System GitHub Wiki

Interacting with Crew.ai Using Ollama and Open WebUI

The Crew.ai Ollama Multi-Agent System is designed to operate locally, ensuring privacy and security. Using tools like Ollama and Open WebUI, you can easily interact with the system either via the command line or through a web-based interface. This guide walks you through both options step-by-step.


Option 1: Command Line Interaction (CLI)

Why Use the CLI?

The Command Line Interface (CLI) is a lightweight and efficient way to interact with your system. It’s great for:

  • Quickly testing queries.
  • Sending commands directly to agents.
  • Monitoring responses without extra interfaces.

Step-by-Step Guide

1. Install Ollama

Follow the official [Ollama Installation Guide](https://www.ollama.ai) to set up Ollama locally.

For macOS:

brew install ollama

For Linux (using Docker):

docker pull ollama/ollama

2. Start Ollama

For macOS:

ollama serve

For Docker:

docker run --name ollama -d -p 11434:11434 ollama/ollama

3. Send a Query via CLI

Once Ollama is running (and you have pulled a model, e.g. `ollama pull llama2`), use the CLI to send queries:

curl -X POST http://localhost:11434/api/generate \
    -H "Content-Type: application/json" \
    -d '{"model": "llama2", "prompt": "What is the weather today?"}'

You should see a response from the AI model.
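The same request can be sent from Python. Here is a minimal sketch using the `requests` library, assuming Ollama is listening on its default port 11434 and the `llama2` model has already been pulled; the helper names (`build_payload`, `ask_ollama`) are illustrative, not part of the repository:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_payload(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama to return one JSON object instead of a
    # stream of newline-delimited chunks, which is simpler to parse here.
    return {"model": model, "prompt": prompt, "stream": False}


def ask_ollama(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    response = requests.post(OLLAMA_URL, json=build_payload(model, prompt), timeout=120)
    response.raise_for_status()
    return response.json()["response"]  # the generated text field


# Example (requires a running Ollama instance):
# print(ask_ollama("llama2", "What is the weather today?"))
```

Setting `"stream": False` is optional; by default the endpoint streams partial responses, which is useful for interactive UIs but harder to handle in a short script.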


Option 2: Open WebUI Interaction

Why Use Open WebUI?

Open WebUI provides a user-friendly, graphical way to interact with Crew.ai. It’s perfect for:

  • Users unfamiliar with the command line.
  • Visualizing agent responses.
  • Adjusting system settings and monitoring performance.

Step-by-Step Guide

1. Set Up Open WebUI

Clone the Open WebUI repository:

git clone https://github.com/open-webui/open-webui.git
cd open-webui

2. Install Dependencies

Install the required packages:

pip install -r requirements.txt

3. Start Open WebUI

Run the web application:

python app.py

Access the UI in your browser at:

http://localhost:7860

4. Connect Open WebUI to Crew.ai

Add a webui_agent to your Crew.ai system:

File: multi_agent_config.yaml

agents:
  - name: webui_agent
    enabled: true

File: src/agents/webui_agent.py

import requests

class WebUIAgent:
    def __init__(self):
        # Open WebUI's local address (port 7860, as configured above)
        self.base_url = "http://localhost:7860"

    def can_handle(self, query):
        # Claim any query that mentions the web UI
        return "webui" in query.lower()

    def handle_query(self, query, session):
        payload = {"query": query}
        response = requests.post(f"{self.base_url}/api", json=payload)
        return response.json().get("result", "No response from WebUI"), session
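To see how an agent like this slots into the system, here is a hedged sketch of a dispatch loop that a coordinator might use: each agent is asked in turn whether it can handle the query, and the first that claims it responds. The `route` helper and `EchoAgent` fallback are illustrative assumptions, not code from the repository:

```python
class EchoAgent:
    """Illustrative fallback agent: claims every query and echoes it back."""

    def can_handle(self, query):
        return True

    def handle_query(self, query, session):
        return f"echo: {query}", session


def route(query, agents, session=None):
    # Ask each agent in registration order; the first that claims
    # the query handles it. Returns (result, session).
    for agent in agents:
        if agent.can_handle(query):
            return agent.handle_query(query, session)
    return "no agent available", session


# Example with the fallback only (WebUIAgent would need a running server):
result, _ = route("hello", [EchoAgent()])
```

In a real deployment, `WebUIAgent` would sit earlier in the list than the fallback so that queries mentioning "webui" are routed to Open WebUI first.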

5. Test Open WebUI

Start the Crew.ai service:

uvicorn src.main:app --host 0.0.0.0 --port 8000

Send a query through Open WebUI and watch the response from your agents appear in the browser interface.


Comparison of CLI and WebUI

| Feature | Command Line Interface | Open WebUI |
| --- | --- | --- |
| Ease of Use | Requires basic CLI knowledge | User-friendly |
| Resource Usage | Minimal | Slightly higher |
| Customization | Limited | Advanced options |
| Accessibility | Local machine only | Cross-device on network |

Which Should You Use?

  • Use CLI if you want fast, direct control and minimal resource use.
  • Use Open WebUI if you prefer a graphical interface or need to monitor multiple agents visually.

Next Steps

  1. Experiment with both methods to find what works best for your workflow.
  2. Customize agents in multi_agent_config.yaml to expand functionality.
  3. Share your experience and enhancements with the community!