Ollama setup

Download and Install Ollama

Download the current version for your platform from the Ollama download page: https://ollama.com/download

There's a standard installer package for each platform.
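To confirm the installation succeeded, you can ask the Ollama CLI for its version from a terminal (the exact version number will differ on your machine):

ollama --version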

After the installation

Once the installation is complete, it's time to pull the LLM you are considering using. Most LLMs are several gigabytes in size, so the download will take some time, depending on your internet connection.

Open a Terminal and run at least one of the following commands.

A model needs to be pulled at least once; after that, the plugin will show you the models available in your Ollama installation.

ollama pull gemma3 

(This pulls gemma3:4b by default; depending on your hardware, you can instead try gemma3:12b, which gives better results.)

ollama pull llama3.2-vision
ollama pull deepseek-r1
ollama pull llava
ollama pull mistral-small3.1
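
After pulling, you can check which models are actually present on your machine; ollama list prints each local model together with its size and when it was last modified:

ollama list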

Select Model

Once you have pulled your vision model, you will be able to select it from the dropdown menu, and the model will be loaded by Ollama.

(Screenshot: model selection dropdown)
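
If you want to sanity-check a vision model outside of Lightroom first, the Ollama CLI accepts an image path directly in the prompt for multimodal models; the file path below is just a placeholder for one of your own photos:

ollama run llava "Describe this photo: ./sample.jpg"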

Change Ollama source

Depending on your setup, you can use either a local or a remote instance of Ollama.

The default is localhost (Ollama listens on http://localhost:11434 out of the box):

(Screenshot: default server setting, localhost)
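
If the plugin does not list any models with the default setting, a quick sanity check is to query the local Ollama server directly; it answers on port 11434, and /api/tags returns the locally installed models as JSON:

curl http://localhost:11434/api/tags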

If your Ollama instance runs on another computer, change the setting to the desired hostname or IP address:

(Screenshots: hostname/IP settings)

You can also use a remote instance:

(Screenshot: remote instance setting)
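
Note that Ollama only listens on 127.0.0.1 by default, so a remote instance will usually refuse connections from the Lightroom computer until it is told to bind to all interfaces. One way to do this when starting the server manually on the remote machine (the exact mechanism differs if Ollama runs as the desktop app or as a system service; see the Ollama FAQ) is:

OLLAMA_HOST=0.0.0.0 ollama serve

You can then check reachability from the Lightroom machine, replacing the example hostname with your own:

curl http://your-ollama-host:11434/api/tags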