Ollama OpenWebUI

You can run a local LLM chat setup, with no usage limits, using Ollama and Open WebUI, pointing the UI at any model running locally or at the OpenAI API.

How-to video: https://www.youtube.com/watch?v=8J6OJzseYuo

Here is the step-by-step process:

Step 1: Install Ollama

Download and install Ollama from https://ollama.com/download. If you have an NVIDIA GPU, make sure current NVIDIA drivers are installed; Ollama will use a supported GPU automatically for acceleration.
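
To confirm the install (and, on NVIDIA machines, that the driver sees the GPU), a couple of quick checks from a terminal:

```bash
# Confirm the Ollama CLI is installed and on the PATH
ollama --version

# On machines with an NVIDIA GPU, confirm the driver sees the card
nvidia-smi
```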

Step 2: Copy and paste the Llama 3.2 run command into a terminal

https://github.com/ollama/ollama?tab=readme-ov-file#quickstart

```bash
ollama run llama3.2
```
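
Once a model is running, Ollama also serves a REST API on its default port, 11434. A quick sanity check from another terminal:

```bash
# Ask the local Ollama server for a single, non-streamed completion
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```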

Step 3: Add other LLM models (optional)
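
You can pull any model from the Ollama library (https://ollama.com/library) the same way; the model names below are just examples, so swap in whatever you want to try:

```bash
# Download additional models without running them
ollama pull mistral
ollama pull codellama

# List everything downloaded locally
ollama list
```

Every model you pull will show up in Open WebUI's model selector.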

Step 4: Install Docker
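
Install Docker Desktop (or Docker Engine on Linux) from https://docs.docker.com/get-docker/, then confirm the daemon is running:

```bash
# Print the client version and run Docker's self-test image
docker --version
docker run hello-world
```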

Step 5: Install Open WebUI by running its Docker container

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
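
A quick breakdown of the flags: `-d` runs the container in the background; `-p 3000:8080` maps host port 3000 to the container's internal port 8080 (avoid host ports such as 6000, which Chrome blocks as unsafe); `--add-host=host.docker.internal:host-gateway` lets the container reach the Ollama server on the host; `-v open-webui:/app/backend/data` persists settings and chat history in a named volume; and `--restart always` brings the container back up after reboots.

Once the container is up, open http://localhost:3000 and create the first account, which becomes the admin. Open WebUI detects the local Ollama models automatically; to use the OpenAI API as well, add your API key under the connection settings.

Ollama itself also exposes an OpenAI-compatible endpoint, so any OpenAI client can talk to the local model. A minimal sketch, assuming the default Ollama port 11434 and the llama3.2 model pulled above:

```bash
# Call the local model through Ollama's OpenAI-compatible endpoint
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'
```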