Install Ollama Local LLM Windows - stone-alex/EliteIntel GitHub Wiki
🪟 Local LLM - Windows Setup (Ollama)
Running a local LLM keeps everything private, offline, and free (beyond electricity and hardware). Think of it as the difference between running a game on your own rig vs. streaming it from the cloud - lower latency, no subscriptions, no one snooping on your loadout.
It requires Ollama and a capable GPU.
Minimum Hardware
To run Elite Dangerous and the LLM on the same machine, you need at minimum an NVIDIA RTX 3060 with 12 GB VRAM. That's the floor - it'll run, but don't expect headroom to spare.
Tip: You can point Elite Intel at an Ollama instance running on a separate PC on your network. If you have a home lab or a spare box with a good GPU, that's a great option - the game PC doesn't carry the load at all.
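If you go the remote route, the server box just needs to listen beyond localhost. A minimal sketch for a Windows server, run in PowerShell (`192.168.1.50` stands in for your server's actual IP; `OLLAMA_HOST` is Ollama's documented bind-address variable):

```shell
# On the server box: make Ollama listen on all interfaces, not just localhost.
# setx writes a per-user environment variable; quit and relaunch Ollama afterwards.
setx OLLAMA_HOST "0.0.0.0:11434"

# From the game PC: verify the remote instance is reachable.
# A JSON list of installed models means you're good.
curl http://192.168.1.50:11434/api/tags
```

Keep in mind this exposes the Ollama port to your whole LAN, so only do it on a network you trust.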
Recommended Model
| Model | VRAM Required | Notes |
|---|---|---|
| `tulu3:8b` (Q4_K_M) | ~5 GB | ✅ Recommended. Reliable for commands and queries. |
| `qwen3:8b` | ~8 GB | Experimental. Expect occasional missed commands and hallucinations. |
Note: If you want the fastest local inference, consider LM Studio with
matrixportalx/tulu-3.1-8b-supernova. In testing, it's noticeably faster than Ollama on the same hardware with the same model class.
Step 1 - Install Ollama
- Go to 👉 https://ollama.com/download 👈
- Download and run `OllamaSetup.exe`. No admin rights required.
- Ollama installs and runs quietly in the system tray. It auto-starts on login.
Step 2 - Pull a Model
Open Command Prompt or PowerShell and run:

    ollama pull tulu3:8b

Or the experimental alternative:

    ollama pull qwen3:8b
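After the pull finishes, it's worth confirming the model is actually visible before wiring anything up (the prompt text here is just an example):

```shell
# List locally available models; tulu3:8b should appear in the output
ollama list

# One-shot smoke test; the first run also loads the model into VRAM
ollama run tulu3:8b "Reply with the single word: ready"
```

If `ollama list` comes back empty, the pull didn't complete; run it again.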
Step 3 - (Optional) Tune the Configuration
Out of the box Ollama works fine, but if you want to share VRAM more carefully with Elite Dangerous, this is where you do it.
On Windows, Ollama reads config from user environment variables.
- Right-click the Ollama tray icon → Quit.
- Open Settings → search "environment variables".
- Click "Edit environment variables for your account".
- Add each variable below using New:
| Variable | Value | Notes |
|---|---|---|
| `OLLAMA_MAX_VRAM` | `14000000000` | 14 GB cap - adjust based on your GPU and game needs |
| `OLLAMA_NUM_PARALLEL` | `3` | Covers Elite Intel's async call patterns without over-allocating |
| `OLLAMA_MAX_LOADED_MODELS` | `1` | One model at a time |
| `OLLAMA_FLASH_ATTENTION` | `1` | Faster inference |
| `OLLAMA_KEEP_ALIVE` | `-1` | Keep model loaded permanently |
- Click OK. Relaunch Ollama from the Start Menu.
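If you'd rather script it than click through the GUI, the same per-user variables can be set from a terminal with `setx` (values mirror the table above; quit Ollama first, then relaunch after running these):

```shell
# Each setx writes a user-level environment variable on Windows.
# New values only apply to processes started after this, hence the Ollama restart.
setx OLLAMA_MAX_VRAM "14000000000"
setx OLLAMA_NUM_PARALLEL "3"
setx OLLAMA_MAX_LOADED_MODELS "1"
setx OLLAMA_FLASH_ATTENTION "1"
setx OLLAMA_KEEP_ALIVE "-1"
```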
What these settings do
- `OLLAMA_MAX_VRAM` - hard cap on the VRAM Ollama can use, in bytes. Leaves the rest for Elite Dangerous. Adjust based on your GPU and how much the game needs.
- `OLLAMA_NUM_PARALLEL` - how many requests Ollama handles simultaneously. Elite Intel makes async calls, so setting this too low causes failures. 3 covers the typical command + query overlap without over-allocating.
- `OLLAMA_MAX_LOADED_MODELS` - keeps only one model in VRAM at a time. There's no reason to keep stale models loaded.
- `OLLAMA_FLASH_ATTENTION` - enables Flash Attention, which reduces memory bandwidth usage during inference. Generally faster, especially for repeated requests.
- `OLLAMA_KEEP_ALIVE=-1` - keeps the model loaded in VRAM indefinitely. Without it, Ollama may unload the model after a period of inactivity, and you pay a reload penalty on the next request.
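Since `OLLAMA_MAX_VRAM` takes a raw byte count, it's easy to be off by ~7% by mixing up decimal GB and binary GiB. A quick sanity check in Python (the helper name is mine, not part of Ollama):

```python
def vram_cap_bytes(gigabytes: int) -> int:
    """Convert a decimal-GB cap into the raw byte value OLLAMA_MAX_VRAM expects."""
    return gigabytes * 1_000_000_000

# The value from the table above: a 14 GB cap
print(vram_cap_bytes(14))  # 14000000000

# For comparison, 14 GiB (binary) is a noticeably larger number:
print(14 * 2**30)  # 15032385536
```

Either convention works in practice as long as the cap leaves the game enough headroom; just be consistent about which one you meant.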
Step 4 - Wire It Up in Elite Intel
Head to the Settings tab in Elite Intel:
- Leave the LLM Key field blank (local Ollama doesn't need one).
- LLM Address defaults to `http://localhost:11434/api/chat`. If Ollama is on another machine, replace `localhost` with that machine's IP.
- LLM Model - set to `tulu3:8b`.
- Command LLM - set to `tulu3:8b`.
- Query LLM - set to `tulu3:8b`.
- Hit Stop → Start on the AI tab to apply changes.
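If the restart doesn't seem to pick things up, you can talk to the endpoint directly and rule out Elite Intel. A minimal sketch of the request shape Ollama's `/api/chat` endpoint accepts (this is the standard Ollama API; the prompt text is just an example):

```python
import json

# The same endpoint the "LLM Address" field points at
url = "http://localhost:11434/api/chat"

payload = {
    "model": "tulu3:8b",
    "messages": [{"role": "user", "content": "Say hello, Commander."}],
    "stream": False,  # request a single JSON response instead of a stream
}

body = json.dumps(payload)
print(body)

# Sending it requires a running Ollama instance, e.g.:
# import urllib.request
# req = urllib.request.Request(url, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

A valid JSON reply from that request means Ollama is fine and any remaining trouble is on the Elite Intel side.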
Community 👉Matrix👈