UI and Configuration Options - stone-alex/EliteIntel GitHub Wiki

Meet the UI

AI Tab

This is the main/default tab.

  • Start / Stop Services - Toggles the AI stack on/off. Any time you change service settings, stop and restart here - services don't reload automatically.
  • Calibrate - Calibrates the app to your room noise level, microphone, and audio setup. Do this on first run and any time your hardware changes. Worth it.
  • OBS Overlay - When ON, the app displays an animated black window showing live text of what you say to the AI and the AI's responses. Add that window to OBS as a screen capture, add a "Chroma Key" filter, set the color to black, and set "Similarity", "Smoothness", and "Key Color Spill Reduction" to 0. NOTE: if you re-launch the window, you will have to re-add it to OBS.
  • Listen to me / Ignore me - Ask the ship to ignore you and it will not react unless it hears a trigger word like 'ship' or 'listen'. Say 'stop ignoring me' to resume.
  • Detailed log on/off - Nerd mode on/off.

Player Tab


  • Commander Name - If your in-game name is unpronounceable for Text-to-Speech, put something easier here.
  • Journal and Bindings Directory - On Windows this defaults to the standard location automatically. On Linux, set up symlinks as described in the installation guide.
  • Fleet Management - This tab lets you assign voices, personalities, and cadence to your ships. Personality and cadence only work with cloud LLMs. You can also change these settings via voice command, e.g. 'Change name to George'.
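If the app can't see your journals, nothing else works. As a quick sanity check (a sketch, assuming the standard Windows journal location - on Linux, point it at your symlinked directory instead), you can confirm journal files are where the app expects them:

```python
from pathlib import Path

# Standard Elite Dangerous journal location on Windows (an assumption;
# substitute your symlinked directory on Linux).
journal_dir = (Path.home() / "Saved Games" / "Frontier Developments"
               / "Elite Dangerous")

def newest_journal(directory: Path):
    """Return the most recently modified Journal*.log file, or None."""
    logs = sorted(directory.glob("Journal*.log"),
                  key=lambda p: p.stat().st_mtime)
    return logs[-1] if logs else None

print(newest_journal(journal_dir))
```

If this prints None, fix the directory (or the symlink) before blaming the app.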

Settings / Local LLM Tab


  • Set the address of your inference server. Defaults to localhost and the standard Ollama URL.
  • Provide the names of the models you want to use. See the Local LLM guide.
  • LLM host radio buttons let you choose between Ollama and LM Studio.
  • Use check box - Tick this to use the local model instead of the cloud.
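Before starting services, it's worth confirming your inference server is up and actually has the models you named. A minimal sketch, assuming a default Ollama install (port 11434) and its `GET /api/tags` endpoint, which lists pulled models:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address

def models_in_tags(tags: dict) -> list[str]:
    """Pull model names out of an Ollama GET /api/tags response body."""
    return [m["name"] for m in tags.get("models", [])]

def list_local_models(url: str = OLLAMA_URL) -> list[str]:
    """Ask a running Ollama server which models it has pulled."""
    with urllib.request.urlopen(f"{url}/api/tags") as resp:
        return models_in_tags(json.load(resp))

# Offline demonstration with a response shaped like Ollama's:
sample = {"models": [{"name": "tulu3:8b"}, {"name": "llama3.1:8b"}]}
assert "tulu3:8b" in models_in_tags(sample)
```

If the model name you typed into the settings isn't in `list_local_models()`, pull it first (`ollama pull tulu3:8b`).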

Settings / Audio


  • Speech Volume - Controls the volume of the speech synthesis.
  • TTS Voice Speed - Controls the speed of the speech synthesis.
  • Beep Volume - Controls the volume of the beep indicator, which signals that STT finished processing and the LLM received your input.
  • STT Threads - How many threads to request for STT processing. This is a minimum: the app asks for at least this many threads but uses whatever the processor actually grants. Threads are released once processing is complete.
  • Use Local Text To Speech - Overrides the cloud TTS key and uses local TTS.
  • Audio Wave Visualizer - This graph changes dynamically with the audio input, showing the noise floor, audio signal, gate zones, and any clipping.
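The STT Threads behavior described above is the standard worker-pool pattern: you cap the number of workers, the scheduler decides what you actually get, and the threads go away when the work is done. An illustrative Python sketch (not the app's actual implementation; `transcribe_chunk` is a stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def transcribe_chunk(chunk: bytes) -> str:
    """Stand-in for a real STT call on one audio chunk."""
    return f"{len(chunk)} bytes"

chunks = [b"\x00" * n for n in (160, 320, 480)]

# Request at most 4 worker threads; the pool creates them lazily,
# and they are released when the with-block exits.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(transcribe_chunk, chunks))

print(results)  # ['160 bytes', '320 bytes', '480 bytes']
```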

Settings / Cloud LLM Tab


  • Cloud LLM Key - Enter your API key. Supported cloud providers are Gemini, OpenAI, Grok, Deepseek, and Anthropic/Claude.
  • Cloud TTS Key - Enter your API key. The supported cloud provider is Google.
  • NOTE: Uncheck the "Use" check box in the Local LLM settings - it overrides the cloud LLM key.

🧠 LLM (AI Brain)

Cloud option: Enter your API key for xAI, OpenAI, Gemini, or Anthropic/Claude. You cannot select specific models - the app uses a fixed model per provider:

  • xAI: grok-4-1-fast-non-reasoning
  • OpenAI: gpt-4.1-mini (commands) / gpt-5.2 (queries)
  • Gemini (Generative Language API): gemini-3.1-flash-lite-preview (commands and queries)
  • Anthropic/Claude: see cost guide for notes
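The fixed provider-to-model mapping above can be summarized as a simple lookup (an illustrative sketch of the list, not the app's internal table; Anthropic is omitted since the wiki defers its details to the cost guide):

```python
# Fixed model per provider, per the list above.
MODELS = {
    "xai":    {"command": "grok-4-1-fast-non-reasoning",
               "query":   "grok-4-1-fast-non-reasoning"},
    "openai": {"command": "gpt-4.1-mini",
               "query":   "gpt-5.2"},
    "gemini": {"command": "gemini-3.1-flash-lite-preview",
               "query":   "gemini-3.1-flash-lite-preview"},
}

def model_for(provider: str, task: str) -> str:
    """Look up the fixed model for a provider and task ('command'/'query')."""
    return MODELS[provider][task]

print(model_for("openai", "query"))  # gpt-5.2
```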

Local option: Leave the key blank, fill in the local LLM fields below, and check ☑ Use next to the local LLM. See the Local LLM guide (Linux) / Local LLM guide (Windows).

  • LLM Address - defaults to localhost. Replace with the IP of another PC if Ollama runs on a separate box.
  • Command LLM - handles voice command interpretation. tulu3:8b works well.
  • Query LLM - handles data analysis. tulu3:8b is the minimum; bigger models give better results.
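To check that your chosen Command/Query model responds before wiring it into the app, you can hit Ollama's `POST /api/generate` endpoint directly. A minimal sketch, assuming a default Ollama install on localhost:11434:

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's POST /api/generate (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str,
             url: str = "http://localhost:11434") -> str:
    """Send one prompt to a local Ollama server and return the reply text."""
    req = urllib.request.Request(
        f"{url}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# e.g. generate("tulu3:8b", "Say hello in one word.")
```

If this call hangs or errors, fix your Ollama setup first - the app will hit the same wall.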

No hardware to use local AI?

Choice A) Getting Your xAI API Key (more roguelike)

  1. Head to 👉xAI Console👈
  2. Sign up or log in.
  3. Navigate to the API section and generate a new API key.
  4. Add some tokens to your account. ($50 will last a very long time - see cost breakdown.)
  5. Paste the key into the LLM field and check the lock box.

Choice B) Getting Your OpenAI API Key (more clinical)

  1. Go to 👉OpenAI Platform👈
  2. Sign up or log in.
  3. Navigate to the API section and generate a new API key.
  4. Paste the key into the LLM field and check the lock box.
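To confirm a freshly created key actually works before pasting it into the app, you can hit the provider's model-listing endpoint. A minimal sketch, assuming both OpenAI (api.openai.com) and xAI (api.x.ai) expose the OpenAI-style `GET /v1/models` with Bearer authentication:

```python
import urllib.error
import urllib.request

def auth_header(api_key: str) -> dict:
    """Bearer authorization header used by OpenAI-compatible APIs."""
    return {"Authorization": f"Bearer {api_key}"}

def key_works(base_url: str, api_key: str) -> bool:
    """Return True if GET {base_url}/v1/models accepts the key (HTTP 200)."""
    req = urllib.request.Request(f"{base_url}/v1/models",
                                 headers=auth_header(api_key))
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

# key_works("https://api.openai.com", "sk-...")  # OpenAI
# key_works("https://api.x.ai", "xai-...")       # xAI
```

A 401 here means a bad key; a 200 with an empty wallet can still fail on real requests, so add tokens too.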

Choice C) Getting Your Anthropic/Claude API Key (works well with Friendly or Rogue)

  1. Go to 👉Claude Platform👈
  2. Sign in with email or Google. Note: Passwordless login - they'll email you a magic link every time.
  3. Go to Settings → Billing and add budget before creating a key. A key created on an unfunded account won't work even if you add credits later.
  4. Go to API Keys and create a key.
  5. Paste it into the LLM field, check the lock box, and start or restart services on the AI tab.

Getting Your Google TTS Key (14 hand picked voices)

  1. Go to 👉Google Cloud Console👈
  2. Sign in or create an account.
  3. Create a new project.
  4. Enable the Generative Language API for LLM and/or Cloud Text-to-Speech API for TTS.
  5. Go to Credentials, create an API key, and copy it.
  6. Important - restrict the key: Click on the key you just created. On the key detail page, click the Restrict key radio button. A dropdown appears - tick the checkbox next to each API you enabled (LLM and/or TTS), then hit Save.
  7. Paste the key into the Speech to Text and/or Text to Speech fields in the app. Check the lock boxes.
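To verify the Google TTS key outside the app, you can call the Cloud Text-to-Speech `text:synthesize` endpoint directly. A minimal sketch, assuming the v1 REST API with key-based auth; the voice name is just an example, not necessarily one of the app's 14 picks:

```python
import base64
import json
import urllib.request

TTS_URL = "https://texttospeech.googleapis.com/v1/text:synthesize"

def build_request(text: str, voice: str = "en-US-Neural2-C") -> dict:
    """Request body for Cloud Text-to-Speech (voice name is an example)."""
    return {
        "input": {"text": text},
        "voice": {"languageCode": "en-US", "name": voice},
        "audioConfig": {"audioEncoding": "MP3"},
    }

def synthesize(text: str, api_key: str) -> bytes:
    """Call the TTS API and return decoded MP3 bytes."""
    req = urllib.request.Request(
        f"{TTS_URL}?key={api_key}",
        data=json.dumps(build_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return base64.b64decode(json.load(resp)["audioContent"])

# e.g. open("o7.mp3", "wb").write(synthesize("o7, commander", "YOUR_KEY"))
```

A 403 here usually means the key restriction from step 6 doesn't include the Cloud Text-to-Speech API.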

App settings and data directory

The app settings and data are stored in an SQLite database located in the following directory:

  • Linux: ~/.local/share/elite-intel/elite-intel/db/
  • Windows: %APPDATA%\elite-intel\db\
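If you want to poke at that database (for backup or debugging), open it read-only so you can't corrupt live settings. A minimal sketch, assuming a `.db` file inside that directory - the exact filename isn't documented here, so check the folder:

```python
import sqlite3

def list_tables(db_path) -> list[str]:
    """Open an SQLite file read-only and list its tables."""
    uri = f"file:{db_path}?mode=ro"  # mode=ro prevents accidental writes
    with sqlite3.connect(uri, uri=True) as con:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return [r[0] for r in rows]

# e.g. list_tables(r"%APPDATA%\elite-intel\db\<your-db-file>")
```

Stop the app's services before copying the database file, so you don't snapshot it mid-write.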

Run into issues? Drop by Matrix and let's sort it out. Bug reports and pull requests always welcome! o7, commander!

Community 👉Matrix👈