# Setting Up Vision Recall with Ollama
Ollama provides free, local AI models that can be used with Vision Recall. This setup allows you to process your screenshots without sending data to external services.
## Prerequisites
1. **Install Ollama**
   - Download and install Ollama from ollama.com
   - Ollama is available for macOS, Windows, and Linux

2. **Install a Vision-Capable Model**
   - Browse the available Ollama vision models
   - Open a terminal or command prompt
   - Run one of the following commands to download a vision-capable model:
     - `ollama pull llama3.2-vision`
     - `ollama pull llama3.2-vision:11b`
     - `ollama pull granite3.2-vision`
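
Once a model has finished downloading, it's worth confirming it is actually installed before moving on. This uses the same `ollama list` command referenced later in the Troubleshooting section:

```
# List locally installed models – the vision model you just pulled should appear here
ollama list
```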
## Configuration in Vision Recall
1. **Open Vision Recall Settings**
   - Go to Obsidian Settings → Vision Recall

2. **Configure LLM Provider**
   - Set **LLM Provider** to `Ollama`
   - Set **API Base URL** to `http://localhost:11434/v1`, or whatever works for your current setup (e.g. `192.168.x.x`, `127.0.0.1`, etc.)
     - Note: Don't forget to add the `/v1` to the end!
   - Set the **Endpoint** and **Vision model** parameters to the vision model you decided on (e.g. `llama3.2-vision:11b`)
     - Note: For Ollama (or any local) setups, it is often best to make the endpoint and vision models the same so that you do not have to keep two models loaded on your local setup at the same time

3. **Test Connection**
   - Click the **Test config** button in the main view to verify that Vision Recall can communicate with Ollama (you can test your connection by clicking the model retrieval button in the modal that opens)
   - If successful, you'll see a list of the models available to you
   - You can also check the endpoint from the command line, as shown below
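
If you want to double-check the base URL outside of Obsidian, Ollama's OpenAI-compatible API can be queried directly. A minimal check, assuming the default local address used above:

```
# Should return a JSON list of your installed models if the /v1 endpoint is reachable
curl http://localhost:11434/v1/models
```

A connection error here usually means Ollama is not running, or the address/port is wrong.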
## Advanced Ollama Configuration

### Custom Ollama Server
If you're running Ollama on a different machine or port:
- Set **API Base URL** to `http://your-server-address:port/v1` (don't forget the `/v1`)
- Ensure that the Ollama server is accessible from your Obsidian machine
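
By default, Ollama only listens on localhost, so a remote setup usually also requires binding it to an external interface. A minimal sketch, assuming the standard `OLLAMA_HOST` environment variable and the default port 11434 (adjust the address to match your network):

```
# On the machine running Ollama: listen on all interfaces instead of localhost only
OLLAMA_HOST=0.0.0.0 ollama serve

# From the machine running Obsidian: confirm the server is reachable
curl http://your-server-address:11434/v1/models
```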
### Model Parameters
You can adjust the model parameters for better results:
- In Ollama, create a custom Modelfile to adjust parameters like temperature, top_p, etc.
- Build the custom model with `ollama create` (see the sketch below)
- Select your custom model in Vision Recall settings
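
As a rough illustration (the model name `my-vision-model` and the parameter values are placeholders, not recommendations), a Modelfile and the command to build it could look like this:

```
# Modelfile – base it on the vision model you pulled earlier
FROM llama3.2-vision:11b

# Lower temperature for more consistent screenshot descriptions
PARAMETER temperature 0.2
PARAMETER top_p 0.9
```

```
# Build the custom model so it appears in `ollama list`
ollama create my-vision-model -f Modelfile
```

You can then select `my-vision-model` as both the endpoint and vision model in Vision Recall settings.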
## Troubleshooting Ollama Connection
If you encounter issues connecting to Ollama:
1. **Verify Ollama is Running**
   - Open a terminal and run `ollama list` to check if Ollama is running
   - If not, start Ollama with `ollama serve` (a compact set of command-line checks is shown after this list)

2. **Check Firewall Settings**
   - Ensure that your firewall allows connections to port 11434

3. **Verify Model Installation**
   - Run `ollama list` to verify that you have a vision-capable model installed
   - If not, install one using the commands in the Prerequisites section

4. **Restart Ollama**
   - Sometimes restarting the Ollama service can resolve connection issues
   - Close and reopen Ollama, then try connecting again
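
As a quick sanity check, the following commands (assuming the default local address) cover most of the steps above from a terminal:

```
# Confirm the server is reachable – Ollama replies with "Ollama is running"
curl http://localhost:11434/

# Confirm a vision-capable model is installed
ollama list

# Start the server if it is not already running
ollama serve
```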
## Performance Considerations
- Local AI models require significant system resources
- For best performance, we recommend:
  - At least 16 GB of RAM
  - A modern CPU or GPU
  - Sufficient free disk space for model storage
If you experience slow processing times, consider:
- Closing other resource-intensive applications
- Using a smaller model (if available)
- Upgrading your hardware or switching to a cloud-based provider like OpenAI
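
If you want to see which models are currently loaded and how much memory they use, recent Ollama versions include a `ps` command (availability may vary with your Ollama version):

```
# Show loaded models, their memory footprint, and whether they run on CPU or GPU
ollama ps
```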