Setting Up Vision Recall with Ollama

Ollama provides free, local AI models that can be used with Vision Recall. This setup allows you to process your screenshots without sending data to external services.

Prerequisites

  1. Install Ollama:

    • Download and install Ollama from ollama.com
    • Ollama is available for macOS, Windows, and Linux
  2. Install a Vision-Capable Model:

    • Browse Ollama vision models
    • Open a terminal or command prompt
    • Run one of the following commands to download a vision-capable model:
      ollama pull llama3.2-vision
      ollama pull llama3.2-vision:11b
      ollama pull granite3.2-vision
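After installing Ollama and pulling a model, you can confirm from a terminal that everything is in place before configuring the plugin. A quick sanity check (the model you pulled above should appear in the list):

  # Confirm the Ollama CLI is installed and report its version
  ollama --version
  # List locally installed models; your vision-capable model should appear here
  ollama list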

Configuration in Vision Recall

  1. Open Vision Recall Settings:

    • Go to Obsidian Settings → Vision Recall
  2. Configure LLM Provider:

    • Set LLM Provider to Ollama

    • Set API Base URL to http://localhost:11434/v1, or whatever address works for your setup (e.g. 192.168.x.x or 127.0.0.1 in place of localhost)

      • Note: Don't forget to add the /v1 to the end!
    • Set the Endpoint and Vision model parameters to the vision model you chose (e.g. llama3.2-vision:11b)

      • Note: For Ollama (or any local setup), it's usually best to set the endpoint and vision models to the same model so that you don't have to keep two models loaded in memory at once
  3. Test Connection:

    • Click the Test config button in the main view to verify that Vision Recall can communicate with Ollama
      • (in the modal that opens, you can test the connection by clicking the model retrieval button)
    • If successful, you'll see a list of the models available to you; if not, you can also check the endpoint directly from a terminal, as shown below
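If the in-app test fails, it can help to query the endpoint directly from a terminal. A minimal sketch using curl, assuming Ollama is running locally on the default port and serving its OpenAI-compatible API under /v1:

  # Should return a JSON list of your installed models if the base URL is correct
  curl http://localhost:11434/v1/models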

Advanced Ollama Configuration

Custom Ollama Server

If you're running Ollama on a different machine or port:

  1. Set API Base URL to http://your-server-address:port/v1 (the /v1 suffix is still required)
  2. Ensure that the Ollama server is accessible from your Obsidian machine
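Note that Ollama listens only on localhost by default, so a remote machine may not be able to reach it out of the box. A rough sketch of exposing it on the network (adjust the address and port for your environment, and keep in mind this makes the server reachable by anyone on that network):

  # On the Ollama machine: listen on all network interfaces instead of just localhost
  OLLAMA_HOST=0.0.0.0:11434 ollama serve
  # From the Obsidian machine: verify the server is reachable (use your server's address)
  curl http://your-server-address:11434/v1/models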

Model Parameters

You can adjust the model parameters for better results:

  1. In Ollama, create a custom Modelfile to adjust parameters like temperature, top_p, etc. (see the sketch after this list)
  2. Build the custom model with ollama create
  3. Select your custom model in Vision Recall settings
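A minimal Modelfile sketch, assuming llama3.2-vision:11b is the base model; the parameter values are illustrative, not recommendations:

  # Modelfile: derive a custom model from an installed vision model
  FROM llama3.2-vision:11b
  PARAMETER temperature 0.2
  PARAMETER top_p 0.9

Then build it from the directory containing the Modelfile and pick the new name in Vision Recall settings:

  # Registers the model under the (hypothetical) name my-vision-custom
  ollama create my-vision-custom -f Modelfile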

Troubleshooting Ollama Connection

If you encounter issues connecting to Ollama:

  1. Verify Ollama is Running:

    • Open a terminal and run ollama list to check if Ollama is running
    • If not, start Ollama with ollama serve
  2. Check Firewall Settings:

    • Ensure that your firewall allows connections to port 11434
  3. Verify Model Installation:

    • Run ollama list to verify that you have a vision-capable model installed
    • If not, install one using the commands in the Prerequisites section
  4. Restart Ollama:

    • Sometimes restarting the Ollama service can resolve connection issues
    • Close and reopen Ollama, then try connecting again
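If the steps above don't resolve the issue, a couple of terminal checks (assuming the default port) can help narrow down where the connection is failing:

  # Is the Ollama server answering at all on the default port?
  curl http://localhost:11434/api/tags
  # Is a vision-capable model actually installed?
  ollama list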

Performance Considerations

  • Local AI models require significant system resources
  • For best performance, we recommend:
    • At least 16GB of RAM
    • A modern CPU or GPU
    • Sufficient free disk space for model storage

If you experience slow processing times, consider:

  • Closing other resource-intensive applications
  • Using a smaller model (if available)
  • Upgrading your hardware or switching to a cloud-based provider like OpenAI
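If processing is slow, it can also be worth checking whether the model is actually running on your GPU or has fallen back to the CPU. A quick check, assuming a reasonably recent Ollama version:

  # Shows currently loaded models and whether they are running on CPU or GPU
  ollama ps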