Open WebUI
:rocket: Introduction
Open WebUI is an extensible, self-hosted AI interface that adapts to your workflow, all while operating entirely offline.
It is designed to be a simple and efficient way to interact with LLMs, allowing you to easily create and manage your own AI models and workflows.
This interface is also completely open-source, meaning you can customize it to fit your needs and contribute to its development.
:sparkles: Features
Open WebUI offers a variety of features to enhance your LLM applications. For our alternative analysis, we will focus on the following features, which are the most relevant for our use case:
- Connection to both local and remote LLMs
- Tools and functions integration
- Workspace management
- Customization and Adaptability
:computer: Installation
Open WebUI can be installed using pip. To install it, run the following commands:

1. Install the package using pip:

   ```bash
   pip install open-webui
   ```

2. Run the application:

   ```bash
   open-webui serve
   ```

   By default, the application will then be available at http://localhost:8080.
Alternatively, you can run Open WebUI in a Docker container, which makes it easier to connect it directly to local Ollama models:

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
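If your Ollama instance is not running on the same host, the `OLLAMA_BASE_URL` environment variable can point the container at it. Here is a sketch of the same command with that variable set; the address is a placeholder to replace with your own Ollama server:

```bash
# Run Open WebUI and point it at an Ollama server on another machine
# (http://192.168.1.50:11434 is a placeholder address)
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://192.168.1.50:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```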
For more information about installing Open WebUI, you can refer to the official GitHub repository: [open-webui/open-webui](https://github.com/open-webui/open-webui)
:gear: Usage
The following is a quick start guide to help you get started with Open WebUI. It will show you how to use the main features of the application and how to navigate through it.
:footprints: First Steps
After installing Open WebUI, you can start using it by opening your web browser and navigating to http://localhost:3000.
At first launch, you will be prompted to create a new account. Follow the instructions to set up your account. This account is used to manage your LLMs and workflows and is stored locally inside the Docker container.
- Workspace: Workspaces are the main way to organize your LLMs, knowledge bases, and other resources. You can create multiple workspaces to separate different projects or tasks.
- Model selection: Here you can choose the model you want to use, either a local model (e.g., Ollama) or a remote one (e.g., OpenAI, Anthropic).
- Model controls: The Controls page is where you can manage your model parameters, such as temperature, max tokens, and other settings (a request-level example is sketched after this list).
- Function selection: Function selection allows you to choose the functions you want to use with your LLMs. You may use provided functions or create your own.
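Model parameters such as temperature can also be set per request when calling Open WebUI's OpenAI-compatible API. The following is a minimal sketch, assuming you have generated an API key in your account settings and that the endpoint accepts the standard OpenAI parameters; the key and model id are placeholders:

```bash
# Send a chat request through Open WebUI's OpenAI-compatible endpoint,
# overriding temperature and max_tokens for this request only.
# YOUR_API_KEY and the model id are placeholders.
curl http://localhost:3000/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.2,
    "max_tokens": 256
  }'
```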
:key: Add OpenAI API Key
To add your OpenAI API key, go to the **Admin Settings** page under **Settings** and enter it in the corresponding field.
First, click on your account name in the bottom-left corner of the page, then click on the **Settings** button.
Next, click on the **Admin Settings** button.
Then click on the OpenAI API key field and add your OpenAI API key.
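If you run Open WebUI in Docker, the key can also be supplied when the container starts through the `OPENAI_API_KEY` environment variable, so it does not have to be entered in the UI. A sketch (`your_secret_key` is a placeholder):

```bash
# Provide the OpenAI API key at container startup instead of via the UI
# (your_secret_key is a placeholder)
docker run -d -p 3000:8080 \
  -e OPENAI_API_KEY=your_secret_key \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```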
:toolbox: Tools and Functions
Tools and functions are the main way to enhance the capabilities of your models.
To add tools and functions to your workspace, go to the **Tools** and **Functions** pages.
From these pages you can add, edit, or delete tools and functions.
To add a new tool or function, click on the **Discover a tool** button to go to the Open WebUI tool store.
From there you can choose the tool you want to add to your workspace.
> [!NOTE]
> All of the tools are entirely made by community members and are not official Open WebUI tools.
> [!NOTE]
> Some tools may not work, as Open WebUI has undergone significant changes in recent months and some tools have not been updated yet.
> [!TIP]
> When you install a tool, you have to enter the URL of your local server. By default, it is http://localhost:3000.
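You can also write your own tool from scratch. A tool is a single Python file exposing a `Tools` class, whose typed and documented methods become callable by the model. Below is a minimal sketch of that structure; the metadata header fields and the method itself are illustrative:

```python
"""
title: String Reverser
description: A minimal example tool that reverses a string.
version: 0.1.0
"""


class Tools:
    def __init__(self):
        pass

    def reverse_string(self, text: str) -> str:
        """
        Reverse the given string and return the result.
        :param text: The string to reverse.
        """
        return text[::-1]
```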
:briefcase: Workspace management and usage
With Open WebUI, you can create and manage multiple knowledge bases. This allows you to organize your files so that they can easily be queried later.
To create a new knowledge base, go to the **Knowledge** page. To do this, click on the workspace button on the left side of the page.
Once inside a knowledge base, you can add files to it by clicking on the **Add file** button.
Open WebUI will automatically parse and index the file so it is ready to be queried.
> [!NOTE]
> You cannot add every type of file, but many extensions are supported.
If you need to configure the knowledge base in more detail, go to the **Admin Settings** page and open the **Documents** page.
For example, you can configure the chunk size or choose the embedding model you want to use.
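A knowledge base can also be queried programmatically by attaching its collection to a chat completion request. The following is a sketch based on Open WebUI's API; the key, model id, and collection id are placeholders (the collection id can be retrieved from the knowledge base page):

```bash
# Query a model against an existing knowledge base (RAG) via the API.
# YOUR_API_KEY, the model id, and YOUR_COLLECTION_ID are placeholders.
curl http://localhost:3000/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "What does the wiki say about installation?"}],
    "files": [{"type": "collection", "id": "YOUR_COLLECTION_ID"}]
  }'
```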
:art: Customization and Adaptability
Open WebUI is designed to be adaptable and customizable. You can create your own tools and functions to enhance the functionality of your LLMs. You can also customize the look of the interface, activate or deactivate features, and, overall, reconfigure and redesign the platform to your needs.
To configure the look of the interface, go to the **Settings** page and click on the **Interface** section.
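Some of this customization can also be done at deployment time through environment variables. For example, `WEBUI_NAME` changes the name displayed in the interface; a sketch, assuming the Docker setup from the installation section (the name is a placeholder):

```bash
# Rebrand the interface by setting WEBUI_NAME at container startup
# ("llog assistant" is a placeholder name)
docker run -d -p 3000:8080 \
  -e WEBUI_NAME="llog assistant" \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```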
:eyes: Actual usage of Open WebUI
To demonstrate Open WebUI in practice, we created a simple example that uses it as a web interface for querying a PDF document.
For this first example, we are using the OpenAI model `gpt-4o-mini` alongside the web search tool configured with the Tavily search engine. We have also activated the `getWeather` tool.
The prompt we provided is relevant because it exercises both the web search and the weather tool.
Below is the result of the query:
For our second example, we are using the Ollama model `deepseek-r1:8b` alongside a knowledge base, `llog-wiki`, that contains this wiki.
The prompt we provided is relevant because it exercises both the knowledge base and the local model.
Below is the result of the query:
This result is a bit disappointing, as the model is less capable and the answer is not really relevant. Still, it demonstrates Open WebUI's ability to query a local model and use local storage. We can clearly see that the response is based on the knowledge base and not on the model alone; however, the model's understanding of the knowledge base is rather poor (once again, likely due to the model chosen).
Below is the thought process of the model. We can see that it started well but then lost track of the question and its context.
> [!NOTE]
> Using local models is often much slower than using remote models, but local models are free and do not have restrictions or censorship (apart from hardware constraints).
:left_right_arrow: Comparison with LlamaIndex
Open WebUI and LlamaIndex have two completely different goals. Open WebUI is a web interface for LLMs, while LlamaIndex is a framework for contextualizing LLMs with data sources.
However, both frameworks have some similarities and useful features.
For example, for simple tasks like querying a PDF or a Notion page, Open WebUI can be used to create a simple web interface to query the data. It provides an easy way to manage your models and workflows, and the interface is designed to be used by everyone, not only by developers.
LlamaIndex, on the other hand, is focused on contextualizing LLMs with data sources, allowing you to build applications that can interact with different systems. It is more of a framework for building LLM-powered applications, but it is more complex to set up and use than Open WebUI.
Lastly, Open WebUI lacks robustness in its multi-agent and workflow implementations. It is not possible to create complex workflows with Open WebUI, while LlamaIndex provides a framework for building them.
:checkered_flag: Conclusions
Open WebUI is a powerful and flexible web interface for LLMs that allows you to easily create and manage your own AI models and workflows. It is designed to be simple and efficient, making it a great choice for anyone looking to get started with LLMs.
LlamaIndex, on the other hand, is a powerful framework for contextualizing LLMs with data sources, allowing you to build applications that can interact with different systems.
For example, during our project we built an integration and a caching system on top of the Notion API to work around its slow performance. This would not have been possible with Open WebUI, as it does not provide a framework for building such complex workflows.