Install LM Studio Linux - stone-alex/EliteIntel GitHub Wiki
# 🐧 Local LLM - Linux Setup (LM Studio)
Running a local LLM keeps everything private, offline, and free (beyond electricity and hardware). Think of it as the difference between running a game on your own rig vs. streaming it from the cloud - lower latency, no subscriptions, no one snooping on your loadout.
LM Studio is an alternative to Ollama. It uses the same models and serves the same OpenAI-compatible API that Elite Intel talks to. Pick whichever you prefer - you can even switch between them in settings.
It requires LM Studio and a capable GPU.
## Minimum Hardware
To run Elite Dangerous and the LLM on the same machine, you need at minimum an NVIDIA RTX 3060 with 12 GB VRAM. That's the floor - it'll run, but don't expect headroom to spare.
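To see which GPU and how much VRAM your system actually has, a quick check (assumes the NVIDIA driver is installed; falls back gracefully if not):

```shell
# Print GPU model and total VRAM via the NVIDIA driver's query interface
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "nvidia-smi not found - is the NVIDIA driver installed?"
fi
```

A 12 GB card reports something like `NVIDIA GeForce RTX 3060, 12288 MiB`.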
Tip: You can point Elite Intel at an LM Studio instance running on a separate PC on your network. If you have a home lab or a spare box with a good GPU, that's a great option - the game PC doesn't carry the load at all.
## Recommended Model
| Model | VRAM Required | Notes |
|---|---|---|
| tulu-3.1-8b-supernova Q4_K_M | ~5 GB | ✅ Recommended. Fast, accurate, works great for commands and queries. |
| tulu-3.1-8b-supernova Q8_0 | ~8.5 GB | Higher quality, if you have the VRAM headroom. |
| qwen3 8B | ~8 GB | Experimental. Expect occasional missed commands and hallucinations. |
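The VRAM figures above follow a simple rule of thumb: parameter count times bits per weight, divided by 8, plus a gigabyte or two for the KV cache and runtime overhead. A rough sketch (the bits-per-weight values are approximations for these quant formats, not exact figures):

```shell
# Rough model-weights footprint: params (billions) * bits per weight / 8 = GB.
# Add ~1-2 GB on top for KV cache and overhead at 8k context.
estimate_gb() {
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f GB\n", p * b / 8 }'
}
estimate_gb 8 4.8   # Q4_K_M at roughly 4.8 bits/weight
estimate_gb 8 8.5   # Q8_0 at roughly 8.5 bits/weight
```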
## Step 1 - Install LM Studio
```shell
curl -fsSL https://lmstudio.ai/install.sh | bash
```
The installer drops everything into `~/.lmstudio/` and adds the `lms` CLI tool. After it finishes, add the CLI to your PATH:

```shell
# Add this to your ~/.bashrc
export PATH="$HOME/.lmstudio/bin:$PATH"
```

Then reload your shell:

```shell
source ~/.bashrc
```

Verify it worked:

```shell
lms --help
```
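If you ever re-run these steps, a small guard keeps `~/.bashrc` from collecting duplicate PATH lines - a sketch:

```shell
# Append the export line only if it is not already present (safe to re-run)
LINE='export PATH="$HOME/.lmstudio/bin:$PATH"'
grep -qxF "$LINE" ~/.bashrc 2>/dev/null || echo "$LINE" >> ~/.bashrc
```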
## Step 2 - Download the Model
```shell
lms get tulu3.1
```

```text
Searching for models with the term tulu3.1
No exact match found. Please choose a model from the list below.
? Select a model to download
❯ QuantFactory/Tulu-3.1-8B-SuperNova-GGUF
  mradermacher/Tulu-3.1-8B-SuperNova-i1-GGUF
  QuantFactory/Tulu-3.1-8B-SuperNova-Smart-GGUF
  mradermacher/Tulu-3.1-8B-SuperNova-GGUF
  bunnycore/Tulu-3.1-8B-SuperNova-Smart-IQ4_XS-GGUF
  mradermacher/Tulu-3.1-8B-SuperNova-Smart-GGUF
  mradermacher/Tulu-3.1-8B-SuperNova-Smart-i1-GGUF
  matrixportalx/Tulu-3.1-8B-SuperNova-Q4_0-GGUF
  matrixportalx/Tulu-3.1-8B-SuperNova-Q4_K_M-GGUF
↑↓ navigate • ⏎ select
```
Use the arrow keys to navigate and Enter to select. You want `matrixportalx/Tulu-3.1-8B-SuperNova-Q4_K_M-GGUF`.
To see all models you've downloaded:

```shell
lms ls
```
That's the happy path, but LM Studio has a bug: `lms get` may fail to download the model this way and instead show:

```text
Error: No staff picks found with the specified search criteria.
```

If that happens, you can fetch the model manually. First, list the `.gguf` files in the repository:
```shell
curl -s "https://huggingface.co/api/models/matrixportalx/Tulu-3.1-8B-SuperNova-Q4_K_M-GGUF" \
  | grep -o '"rfilename":"[^"]*\.gguf"'
```
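Hugging Face serves raw files from a `resolve/main/` path in the repo, so the listed file can be fetched directly. The filename below is an assumption - use whatever the API query actually returns:

```shell
# Build the direct-download URL (resolve/main is Hugging Face's raw-file path)
REPO="matrixportalx/Tulu-3.1-8B-SuperNova-Q4_K_M-GGUF"
FILE="tulu-3.1-8b-supernova-q4_k_m.gguf"   # hypothetical name; check the API output
URL="https://huggingface.co/$REPO/resolve/main/$FILE"
echo "$URL"
# Then download it (several GB): curl -L -o "$FILE" "$URL"
```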
Download the listed file, then import it into LM Studio:

```shell
lms import /path/to/tulu-3.1-8b-supernova-q4_k_m.gguf
```
## Step 3 - Start the Server
Load the model and start the inference server:
```shell
lms load tulu-3.1-8b-supernova --context-length 8192 --gpu max
lms server start
```
`--gpu max` tells LM Studio to offload as much as possible to your GPU, which is what you want.
Verify it's running:
```shell
curl http://localhost:1234/v1/models
```
You should get back a JSON list of loaded models. The model ID string in that response is what you'll put in Elite Intel's LLM Model field.
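The response is JSON shaped roughly like the sample below (an assumption based on the OpenAI-compatible API; your model ID will differ). The `id` field can be pulled out with standard tools:

```shell
# Sample /v1/models response (shape assumed from the OpenAI-compatible API)
RESPONSE='{"data":[{"id":"tulu-3.1-8b-supernova","object":"model"}],"object":"list"}'
# Extract the first model ID - this string goes in Elite Intel's LLM Model field
echo "$RESPONSE" | grep -o '"id":"[^"]*"' | head -n1 | cut -d'"' -f4
```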
To stop the server:
```shell
lms server stop
```
⚠️ Important: The LM Studio server does not survive reboots. You need to run `lms server start` again after each restart, or set up the optional auto-start below.
## Step 4 - (Optional) Auto-Start on Boot
If you want LM Studio to start automatically, set it up as a user systemd service. This runs under your own session rather than as a system service, which means it starts after your desktop environment is fully up - no root required.
Create the user systemd directory if it doesn't exist:
```shell
mkdir -p ~/.config/systemd/user
```
Create the service file:
```shell
nano ~/.config/systemd/user/lmstudio.service
```
Paste this in:
```ini
[Unit]
Description=LM Studio Server
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
Environment="HOME=/home/YOUR_USERNAME"
Environment="PATH=/home/YOUR_USERNAME/.lmstudio/bin:/usr/local/bin:/usr/bin:/bin"
Environment="XDG_RUNTIME_DIR=/run/user/YOUR_UID"
ExecStartPre=/home/YOUR_USERNAME/.lmstudio/bin/lms daemon up
ExecStartPre=/home/YOUR_USERNAME/.lmstudio/bin/lms load matrixportalx/tulu-3.1-8b-supernova --yes --context-length 8192
ExecStart=/home/YOUR_USERNAME/.lmstudio/bin/lms server start
ExecStop=/home/YOUR_USERNAME/.lmstudio/bin/lms server stop
ExecStopPost=/home/YOUR_USERNAME/.lmstudio/bin/lms daemon down

[Install]
WantedBy=default.target
```
Replace YOUR_USERNAME with your Linux username and YOUR_UID with your user ID. To find your UID:
```shell
id -u
```
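Rather than editing by hand, the placeholders can be filled in with `sed` - a sketch that only touches the unit file if it exists:

```shell
# Substitute YOUR_USERNAME and YOUR_UID in place in the unit file
UNIT="$HOME/.config/systemd/user/lmstudio.service"
if [ -f "$UNIT" ]; then
  sed -i "s|YOUR_USERNAME|$(id -un)|g; s|YOUR_UID|$(id -u)|g" "$UNIT"
fi
```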
⚠️ Why `XDG_RUNTIME_DIR`? Even user services run in a stripped-down environment that may not include your session variables. LM Studio uses `XDG_RUNTIME_DIR` for IPC - without it, the service can fail silently even though `lms` works fine from your terminal. This is the most common reason the service fails when everything works manually.
Enable and start it:
```shell
systemctl --user daemon-reload
systemctl --user enable lmstudio.service
systemctl --user start lmstudio.service
```
Verify it's running:
```shell
systemctl --user status lmstudio.service
curl http://localhost:1234/v1/models
```
Troubleshooting: If the service fails, check the journal:

```shell
journalctl --user -xeu lmstudio.service --no-pager | tail -40
```

If it says "Failed to load model", run `lms ls` and confirm the model name matches exactly what you put in the service file.
## Step 4b - (Optional) Fix Slow Inference After Boot
Some users experience slow inference responses when LM Studio starts at boot, which clears up immediately after a manual service restart. This appears to be a quirk in LM Studio's daemon initialization - the first cold start can leave the inference runtime in a degraded state.
If you notice slow responses after a reboot that go away the moment you restart the service, this automated fix handles it for you.
Create a companion service:
```shell
nano ~/.config/systemd/user/lmstudio-restart.service
```

```ini
[Unit]
Description=LM Studio post-boot restart
After=lmstudio.service

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl --user restart lmstudio.service
```
Create the timer that fires it:
```shell
nano ~/.config/systemd/user/lmstudio-restart.timer
```
```ini
[Unit]
Description=Restart LM Studio 2 minutes after login

[Timer]
OnStartupSec=2min
Unit=lmstudio-restart.service

[Install]
WantedBy=timers.target
```

Note: `OnStartupSec` is used rather than `OnBootSec` because, for user units, it counts from when your user session manager starts - i.e. from login, matching the behavior described below.
Enable the timer:
```shell
systemctl --user daemon-reload
systemctl --user enable --now lmstudio-restart.timer
```
The timer waits 2 minutes after login, restarts the LM Studio service once, and then stays out of the way. If you don't experience the slow inference issue, you don't need this.
## Disable Ollama Auto-Start (if installed)
Ollama installs itself as an enabled systemd service by default. If you want to run LM Studio instead and start Ollama only on demand:
```shell
sudo systemctl disable ollama.service
sudo systemctl stop ollama.service
```
## Step 5 - Wire It Up in Elite Intel
Head to the Settings tab in Elite Intel:
- Leave the LLM Key field blank (local LM Studio doesn't need one).
- LLM Address - set to `http://localhost:1234/v1/chat/completions`. If LM Studio is on another machine, replace `localhost` with that machine's IP.
- LLM Model - paste in the model ID string from `curl http://localhost:1234/v1/models`.
- Command LLM - set to the same model ID.
- Query LLM - set to the same model ID.
- Hit Stop → Start on the AI tab to apply changes.
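To sanity-check the exact endpoint Elite Intel will call, here is a hypothetical chat-completions request; the model ID is an assumption and must match your `/v1/models` output:

```shell
# Hypothetical request body for the OpenAI-compatible chat endpoint;
# swap in your real model ID from /v1/models
PAYLOAD='{"model":"tulu-3.1-8b-supernova","messages":[{"role":"user","content":"Say hello, Commander."}]}'
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"
# With the server running, send it:
# curl -s http://localhost:1234/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$PAYLOAD"
```

A JSON reply with a `choices` array means the endpoint is ready for Elite Intel.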
Community 👉Matrix👈