Node-Red and local LLM - nygma2004/km GitHub Wiki
This page contains the background information for my Node-Red LLM-related videos.
Setup
- Install the following dependencies, which are required by Whisper (the audio transcription model):

  ```
  sudo apt-get install python3.13-venv
  sudo apt-get install ffmpeg
  ```
- Install this node from the Palette Manager in Node-Red: `@background404/node-red-contrib-whisper`. This will install the Whisper node and also the model behind it.
- Install Ollama to run the large language model: use the command listed under Linux/Install.
- Test the LLM and download the llama3.2 model that we will use. Run this in the command line on your server:

  ```
  ollama run llama3.2
  ```
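Beyond the interactive prompt, the model can also be exercised over Ollama's local REST API, which is what a Node-Red flow would typically call (by default Ollama listens on `http://localhost:11434`). The sketch below builds a request body for the `/api/generate` endpoint; the prompt text is purely illustrative.

```python
import json

# Build a request payload for Ollama's /api/generate endpoint.
payload = {
    "model": "llama3.2",   # the model pulled with `ollama run llama3.2`
    "prompt": "Say hello in one short sentence.",  # illustrative prompt
    "stream": False,       # request a single JSON response, not a stream
}
body = json.dumps(payload)

# To actually send it (requires the Ollama server to be running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

In a Node-Red flow the same request can be issued with an `http request` node pointed at the endpoint, with this JSON as the message payload.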
- Install my example flow: Node-Red flow
- To use the audio recording function in Chrome, add your dashboard URL to the "Insecure origins treated as secure" section in Chrome flags (`chrome://flags`).
