# 21 Whisper Server WSL
The recommended configuration for running the OpenAI Whisper sample on Windows is WSL 2 with an NVIDIA GPU. This configuration is popular and provides the best performance: OpenAI Whisper speech-to-text transcription runs consistently faster on WSL 2 than natively on Windows.
Ideally, your system should have:
- Windows 11 with WSL 2 and Ubuntu 20.04 LTS.
- A modern CPU with 16 GB of RAM.
- An NVIDIA GPU with 10 to 12 GB of VRAM, though smaller Whisper models will run on GPUs with less VRAM.
Ensure the NVIDIA drivers are up to date. The NVIDIA drivers are installed in Windows; WSL includes a GPU driver that allows WSL to access the GPU, so don't install the NVIDIA drivers inside WSL.
- Follow the instructions to install WSL.
- This sample was tested with Ubuntu 20.04 LTS running in WSL 2. You can download Ubuntu 20.04 LTS from the Microsoft Store.
- Update the Ubuntu system. From a WSL terminal, run:

  ```shell
  sudo apt update && sudo apt upgrade
  ```

- Restart WSL if necessary. From PowerShell, run:

  ```shell
  wsl --shutdown
  ```
- Install FFmpeg and pip3. From a WSL terminal, run:

  ```shell
  sudo apt install ffmpeg python3-pip
  ```

- Test FFmpeg by running `ffmpeg -version`. The command should return the FFmpeg version.
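Whisper resamples input audio to 16 kHz mono internally, so any format FFmpeg can decode will work as-is. If you want to pre-convert recordings yourself, here is a minimal sketch; the helper names are illustrative, not part of the sample:

```python
import shutil
import subprocess

def build_ffmpeg_cmd(src: str, dst: str) -> list:
    """ffmpeg arguments converting src to 16 kHz mono WAV (the rate Whisper uses internally)."""
    return ["ffmpeg", "-y", "-i", src, "-ar", "16000", "-ac", "1", dst]

def convert_to_wav(src: str, dst: str) -> None:
    """Run the conversion; raises if ffmpeg is missing or the conversion fails."""
    if shutil.which("ffmpeg") is None:
        raise RuntimeError("ffmpeg not found on PATH - install it with apt as shown above")
    subprocess.run(build_ffmpeg_cmd(src, dst), check=True)
```

The `-ar 16000` and `-ac 1` flags set the sample rate and channel count; `-y` overwrites an existing output file.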
- From a WSL terminal, clone the Whisper Transcriber Sample to your preferred repo folder:

  ```shell
  git clone https://github.com/gloveboxes/OpenAI-Whisper-Transcriber-Sample.git
  ```

- Navigate to the `server` folder:

  ```shell
  cd OpenAI-Whisper-Transcriber-Sample/server
  ```
- Install the required Python libraries. From a terminal window, run:

  ```shell
  pip3 install -r requirements.txt
  ```
- Test that CUDA/GPU is available to PyTorch. From a WSL terminal, run the following command; if CUDA is available, it returns `True`:

  ```shell
  python3 -c "import torch; print(torch.cuda.is_available())"
  ```
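If you want the same check inside a script with a CPU fallback (useful on machines without a supported GPU), a small sketch follows; `pick_device` is an illustrative helper, not part of the sample:

```python
def pick_device() -> str:
    """Return "cuda" when PyTorch can see a GPU, otherwise fall back to "cpu"."""
    try:
        import torch  # deferred import so the check degrades gracefully without PyTorch
    except ImportError:
        return "cpu"
    return "cuda" if torch.cuda.is_available() else "cpu"

print(pick_device())
```

Note that the Whisper models run on CPU as well, just considerably slower than on a GPU.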
- Review the following chart, taken from the OpenAI Whisper project description page, and select the model that will fit in the VRAM of your GPU. At the time of writing, Whisper multilingual models include `tiny`, `small`, `medium`, and `large`, and English-only models include `tiny.en`, `small.en`, and `medium.en`.
- Update the `server/config.json` file to set your desired Whisper model. For example, to use the `medium` model, set the `model` property to `medium`:

  ```json
  {
      "model": "medium"
  }
  ```
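As a rough guide to choosing a model, the approximate required-VRAM figures published in the Whisper project README (about 1 GB for `tiny`, 2 GB for `small`, 5 GB for `medium`, 10 GB for `large`) can be folded into a small helper. Both the figures and the helper below are illustrative, not part of the sample:

```python
from typing import Optional

# Approximate required VRAM per multilingual model, in GB, from the
# OpenAI Whisper README (rough guidance, not exact measurements).
VRAM_GB = {"tiny": 1.0, "small": 2.0, "medium": 5.0, "large": 10.0}

def largest_model_for(vram_gb: float) -> Optional[str]:
    """Pick the largest multilingual model that fits in the given VRAM."""
    fitting = [m for m, need in VRAM_GB.items() if need <= vram_gb]
    return max(fitting, key=VRAM_GB.get) if fitting else None
```

For example, `largest_model_for(6)` picks `medium`, matching the 10 to 12 GB recommendation above for running `large`.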
- Start the Whisper Transcriber Service. The first time you run the service, it downloads the selected model. The download can take a few minutes depending on your internet speed, so a timeout interval of 300 seconds is recommended:

  ```shell
  gunicorn --bind 0.0.0.0:5500 wsgi:app -t 300
  ```
- Once the Whisper Transcriber Service starts, you should see output similar to the following:

  ```text
  [2023-06-04 18:53:46.194411] Whisper API Key: 17ce01e9-ac65-49c8-9cc9-18d8deb78197
  [2023-06-04 18:53:50.375244] Model: medium loaded.
  [2023-06-04 18:53:50.375565] Ready to transcribe audio files.
  ```
- Now restart the Whisper Transcriber Service without the timeout interval. The service starts much faster because the model is already downloaded:

  ```shell
  gunicorn --bind 0.0.0.0:5500 wsgi:app
  ```
- The `Whisper API Key` is also displayed. Save the `Whisper API Key` somewhere safe; you'll need the key to configure the Whisper client.

  ```text
  Whisper API Key: <key>
  ```