Node-Red and local LLM - nygma2004/km GitHub Wiki

This page contains the background information for my Node-Red LLM-related videos.

Setup

  1. Install the dependencies required by Whisper (the audio transcription model):
sudo apt-get install python3.13-venv
sudo apt-get install ffmpeg
  2. Install this node from the Palette Manager in Node-Red: @background404/node-red-contrib-whisper. This installs the Whisper node and also downloads the model behind it.

  3. Install Ollama to run the large language model: use the command under Linux/Install

  4. Test the LLM and download the llama3.2 model that we will use. Run this on your server's command line:

ollama run llama3.2
  5. Install my example flow: Node-Red flow

  6. To use the audio recording function in Chrome, add your dashboard URL to the "Insecure origins treated as secure" section in Chrome flags
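
Once the steps above are done, Node-Red can talk to the local model over Ollama's HTTP API, which listens on port 11434 by default. As a minimal sketch (the function name and prompt text are illustrative, not part of the example flow), a function node could prepare a message for an http request node like this:

```javascript
// Sketch of a Node-RED function node body that prepares a request for
// Ollama's local HTTP API (default port 11434). Wire the output to an
// "http request" node configured to use msg.method and msg.url.
// Assumes the llama3.2 model pulled in step 4 above.
function buildOllamaRequest(promptText) {
    return {
        url: "http://localhost:11434/api/generate",
        method: "POST",
        headers: { "Content-Type": "application/json" },
        payload: {
            model: "llama3.2",   // model downloaded via `ollama run llama3.2`
            prompt: promptText,
            stream: false        // ask for a single JSON reply, not a stream
        }
    };
}

// Example: build the message object a flow would pass downstream.
const msg = buildOllamaRequest("Why is the sky blue?");
```

The generated text then arrives in the response body's `response` field, which a later function node can route to the dashboard.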

Video