Install LM Studio Windows - stone-alex/EliteIntel GitHub Wiki
🪟 Local LLM - Windows Setup (LM Studio)
Running a local LLM keeps everything private, offline, and free (beyond electricity and hardware). Think of it as the difference between running a game on your own rig vs. streaming it from the cloud - lower latency, no subscriptions, no one snooping on your loadout.
LM Studio is an alternative to Ollama. It uses the same models and serves the same OpenAI-compatible API that Elite Intel talks to. Pick whichever you prefer - you can even switch between them in settings.
It requires LM Studio and a capable GPU.
Minimum Hardware
To run Elite Dangerous and the LLM on the same machine, you need at minimum an NVIDIA RTX 3060 (12 GB VRAM).
Tip: You can point Elite Intel at an LM Studio instance running on a separate PC on your network. If you have a home lab or a spare box with a good GPU, that's a great option - the game PC doesn't carry the load at all.
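If you go the remote-server route, it's worth confirming the box is reachable before touching Elite Intel's settings. A minimal check from the game PC, assuming LM Studio's default port 1234 and an example IP of 192.168.1.50:

```shell
# PowerShell: confirm a remote LM Studio server is reachable.
# 192.168.1.50 is an example - substitute your LLM box's actual IP.
Invoke-RestMethod http://192.168.1.50:1234/v1/models
```

If this times out, you may need to allow inbound TCP 1234 through Windows Firewall on the LLM box and make sure LM Studio is serving on the network interface rather than localhost only.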
Recommended Model
| Model | VRAM Required | Notes |
|---|---|---|
| tulu-3.1-8b-supernova Q4_K_M | ~5 GB | ✅ Recommended. Fast, accurate, works great for commands and queries. |
| tulu-3.1-8b-supernova Q8_0 | ~8.5 GB | Higher quality, if you have the VRAM headroom. |
| qwen3 8B | ~8 GB | Experimental. Expect occasional missed commands and hallucinations. |
Step 1 - Install LM Studio
Open PowerShell and run:
irm https://lmstudio.ai/install.ps1 | iex
This installs the lms CLI and the LM Studio runtime. Open a new PowerShell window after it finishes for the changes to take effect.
Verify it worked:
lms --help
Note: If you already have the LM Studio desktop app installed, the `lms` CLI comes bundled with it - you may already have it. Try `lms --help` first before running the install script.
Step 2 - Download the Model
lms get matrixportalx/Tulu-3.1-8B-SuperNova-Q4_K_M-GGUF
or
lms get Tulu-3.1
and choose the matrixportalx/Tulu-3.1-8B-SuperNova-Q4_K_M-GGUF variant (it may be listed as Tulu-3.1-8B-SuperNova-Q4_K_M-GGUF).
To see all models you've downloaded:
lms ls
Step 3 - Start the Server
Load the model and start the inference server:
lms load tulu-3.1-8b-supernova --context-length 8192 --gpu max
lms server start
NOTE: the --context-length 8192 flag is important. Without it, you may end up with a small context window that cuts off part of the prompt, which leads to failures and hallucinations!
Verify it's running - open a browser or another PowerShell window and hit:
http://localhost:1234/v1/models
You should get back a JSON list of loaded models. The model ID string in that response is what you'll put in Elite Intel's LLM Model field.
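You can pull just the model IDs out of that response with a quick PowerShell one-liner, which is handy for copy-pasting into Elite Intel:

```shell
# PowerShell: print the IDs of all models the server currently has loaded.
# Assumes the server is running on the default port 1234.
(Invoke-RestMethod http://localhost:1234/v1/models).data.id
```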
To stop the server:
lms server stop
⚠️ Important: The LM Studio server does not survive reboots. You need to run `lms server start` again after each restart, or use one of the auto-start options below.
Step 4 - (Optional) Auto-Start on Boot
There are two ways to keep the server running across reboots.
Option A - Desktop App (Easiest)
If you have the LM Studio desktop app installed, this is the simplest path:
- Open LM Studio and press Ctrl + , to open Settings.
- Check "Run LLM server on login".
- From now on, closing the app minimizes it to the system tray and keeps the server running. It restores automatically on next login.
Option B - Task Scheduler (Headless / No GUI)
- Press Win + R, type `taskschd.msc`, press Enter.
- Click Create Task in the right panel.
- General tab: Name it `LM Studio Server`. Check "Run with highest privileges".
- Triggers tab: Click New → "At log on" → OK.
- Actions tab: Click New → "Start a program".
  - Program/script: `%USERPROFILE%\.lmstudio\bin\lms.exe`
  - Add arguments: `server start`
To also load the model automatically, create a batch file instead:
@echo off
%USERPROFILE%\.lmstudio\bin\lms.exe daemon up
%USERPROFILE%\.lmstudio\bin\lms.exe load tulu-3.1-8b-supernova --yes --context-length 8192 --gpu max
%USERPROFILE%\.lmstudio\bin\lms.exe server start
Save it as start-lmstudio.bat somewhere permanent (e.g. C:\Scripts\) and point the Task Scheduler Action at that file.
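If you'd rather skip the Task Scheduler GUI, the built-in schtasks tool can register the same task from an elevated PowerShell window - a sketch, assuming you saved the batch file to C:\Scripts\ as above:

```shell
# Register a logon task that runs the startup script with highest privileges.
# Run from an elevated (Administrator) prompt; /RL HIGHEST requires it.
schtasks /Create /TN "LM Studio Server" /TR "C:\Scripts\start-lmstudio.bat" /SC ONLOGON /RL HIGHEST /F
```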
Step 5 - Wire It Up in Elite Intel
Head to the Settings tab in Elite Intel:
- Leave the LLM Key field blank (local LM Studio doesn't need one).
- LLM Address - set to `http://localhost:1234/v1/chat/completions`. If LM Studio is on another machine, replace `localhost` with that machine's IP.
- LLM Model - paste in the model ID string from `http://localhost:1234/v1/models`.
- Command LLM - set to the same model ID.
- Query LLM - set to the same model ID.
- Hit Stop → Start on the AI tab to apply changes.
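Before jumping into the game, you can verify the full chat path with a single request from PowerShell - a sketch assuming the default port; substitute the exact model ID string returned by `/v1/models`:

```shell
# PowerShell: send one chat message to the server and print the reply.
# "tulu-3.1-8b-supernova" is a placeholder - use your actual model ID.
$body = @{
  model    = "tulu-3.1-8b-supernova"
  messages = @(@{ role = "user"; content = "Say hello, Commander." })
} | ConvertTo-Json -Depth 5
$resp = Invoke-RestMethod -Uri http://localhost:1234/v1/chat/completions `
  -Method Post -ContentType "application/json" -Body $body
$resp.choices[0].message.content
```

If this prints a sensible reply, Elite Intel will be able to talk to the same endpoint.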
Community 👉Matrix👈