Whisper Server no GPU - gloveboxes/OpenAI-Whisper-Transcriber-Sample GitHub Wiki

Systems without an NVidia GPU

The Whisper Transcriber Service runs on Windows, macOS, and Linux systems without an NVidia GPU; it just runs slower because the Whisper model runs on the CPU.

From limited testing, the multilingual and English-only OpenAI Whisper tiny(.en), small(.en), and medium(.en) models ran with acceptable performance on Windows 11 with a modern CPU and on a MacBook Air M2 with 16 GB of RAM.
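
To get a feel for CPU-only performance yourself, you can load and run a model directly from Python. The snippet below is a minimal sketch, assuming the openai-whisper package is installed; the model size and the sample.mp3 file name are placeholders.

    import whisper

    # Load a model onto the CPU; on systems without an NVidia GPU this is
    # where the model runs anyway. "small" is just an example size.
    model = whisper.load_model("small", device="cpu")

    # "sample.mp3" is a placeholder for any audio file FFmpeg can decode.
    result = model.transcribe("sample.mp3")
    print(result["text"])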

Install system dependencies

Follow the instructions for your operating system.

Install Windows 11 dependencies

  1. Install the latest version of Python 3. At the time of writing (June 2023), the latest release was Python 3.11.3.

  2. Install FFmpeg.

    1. You can download the latest release from FFmpeg-Builds.
    2. Unzip the downloaded FFmpeg archive and move it to your preferred app folder.
    3. From System Properties, select Environment Variables, and add the path to the FFmpeg bin folder to the Path environment variable.
    4. Test FFmpeg. From a new terminal window, run ffmpeg -version.
  3. From a terminal window, install the required Python libraries.

    pip install openai-whisper flask requests chardet
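
To confirm the libraries installed correctly, you can try importing them from a one-line command. This quick check is a convenience suggestion rather than part of the original instructions.

    python -c "import whisper, flask, requests, chardet; print('imports ok')"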

Install macOS dependencies

  1. Install FFmpeg
    1. Open a terminal window.

    2. Install Homebrew.

    3. Install FFmpeg. Run

      brew install ffmpeg

Install Ubuntu dependencies

  1. Install FFmpeg and pip3
    1. Open a terminal window.

    2. Install FFmpeg and pip3. Run

      sudo apt install ffmpeg python3-pip

Run the Whisper Transcriber Server

  1. Install the git client if it's not already installed.

  2. From a terminal window, clone the Whisper Transcriber Sample to your preferred repo folder.

    git clone https://github.com/gloveboxes/OpenAI-Whisper-Transcriber-Sample.git
  3. Navigate to the server folder.

    cd OpenAI-Whisper-Transcriber-Sample/server
  4. Install the required Python libraries.

    On Windows:

    pip install -r requirements.txt

    On macOS or Ubuntu:

    pip3 install -r requirements.txt
  5. Review the model chart on the OpenAI Whisper project description page and select a model that fits in your available memory; on a system without a GPU, the model is loaded into system RAM. At the time of writing, Whisper multilingual models include tiny, small, medium, and large, and English-only models include tiny.en, small.en, and medium.en.

  6. Update the server/config.json file to set your desired Whisper model. For example, to use the medium model, set the model property to medium.

    { "model": "medium" }
  7. Start the Whisper Transcriber Service. The first time you run the service, it downloads the selected model. The download can take several minutes depending on your internet speed, so a timeout interval of 300 seconds is recommended. An illustrative sketch of the kind of wsgi module this command expects follows this list.

    gunicorn --bind 0.0.0.0:5500 wsgi:app -t 300
  8. Once the Whisper Transcriber Service starts, you should see output similar to the following.

    [2023-06-04 18:53:46.194411] Whisper API Key: 17ce01e9-ac65-49c8-9cc9-18d8deb78197
    [2023-06-04 18:53:50.375244] Model: medium loaded.
    [2023-06-04 18:53:50.375565] Ready to transcribe audio files.
    
  9. Now, restart the Whisper Transcriber Service. The service starts much faster because the model is already downloaded, and the timeout interval is no longer needed.

    gunicorn --bind 0.0.0.0:5500 wsgi:app
  10. The Whisper API Key will also be displayed. Save the Whisper API Key somewhere safe; you'll need the key to configure the Whisper client. An illustrative client request follows this list.

    Whisper API Key: <key>
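
The gunicorn command above loads the repository's own wsgi module, which is not reproduced here. As a rough illustration of the moving parts the steps refer to, the following is a minimal Flask sketch that reads config.json, loads the configured Whisper model at startup, prints a generated API key, and exposes a transcription endpoint. The /transcribe route, the X-API-Key header, and the audio form field are assumptions for illustration only and do not necessarily match the real server.

    # wsgi_sketch.py -- illustrative only, not the repository's wsgi module
    import json
    import tempfile
    import uuid
    from datetime import datetime

    import whisper
    from flask import Flask, jsonify, request

    # Read the model name from config.json, as configured in step 6.
    with open("config.json", encoding="utf-8") as f:
        config = json.load(f)

    # A per-run API key, printed at startup so it can be copied to the client.
    API_KEY = str(uuid.uuid4())
    print(f"[{datetime.now()}] Whisper API Key: {API_KEY}")

    # Loading the model downloads it on first run, which is why a longer
    # gunicorn timeout is recommended for the first start.
    model = whisper.load_model(config["model"])
    print(f"[{datetime.now()}] Model: {config['model']} loaded.")
    print(f"[{datetime.now()}] Ready to transcribe audio files.")

    app = Flask(__name__)

    # The route, header, and form field names below are hypothetical.
    @app.route("/transcribe", methods=["POST"])
    def transcribe():
        if request.headers.get("X-API-Key") != API_KEY:
            return jsonify({"error": "invalid api key"}), 401
        audio = request.files["audio"]
        with tempfile.NamedTemporaryFile(suffix=audio.filename or ".wav") as tmp:
            audio.save(tmp.name)
            result = model.transcribe(tmp.name)
        return jsonify({"text": result["text"]})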
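
With the key saved, a client call might look like the following. This is only an illustrative sketch that matches the hypothetical /transcribe endpoint and X-API-Key header in the server sketch above, not the real Whisper client described in the next steps; the server address, key, and file name are placeholders.

    import requests

    # Placeholders: substitute the server address, your saved Whisper API Key,
    # and a real audio file path.
    with open("sample.mp3", "rb") as audio_file:
        response = requests.post(
            "http://localhost:5500/transcribe",
            headers={"X-API-Key": "<key>"},
            files={"audio": audio_file},
        )
    print(response.json()["text"])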
    

Next steps

Deploy the Whisper client
