Setup AI Summary resource

Introduction

ViralQuest supports LLM models allocated locally or accessed through API keys, with support for Google Gemini, ChatGPT, and Claude. The AI Summary is the junction of the BLAST, Taxonomy, and HMM Conserved Domains results with the corresponding viral families and orders according to the ICTV (International Committee on Taxonomy of Viruses) and ViralZone.

For local LLM support, Ollama is used to create the interface between the model and the code. In our tests, the smallest model that provided acceptable performance was qwen3:4b, so we recommend this model as the minimum requirement for running this type of analysis. Below are the steps to install Ollama on your local machine.

Graphics Card

It's possible to run local Ollama LLM models on a CPU, but the process takes much more time than on a GPU. Running the Ollama qwen3:4b model requires at least 4 GB of VRAM, while DeepSeek-R1:7b requires 8 GB.
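
If you are not sure how much VRAM your graphics card has, you can check it with nvidia-smi (assuming an NVIDIA GPU with the drivers installed):

# print GPU name, total VRAM and VRAM currently in use (NVIDIA GPUs only)
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv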

LLM Summary with Google API

It's possible to use the Google Gemini API free tier to classify and create a summarization. Go to Google AI Studio, click "Get API Key" and then "Create API key". After that, an API key will be provided; the model name for the free tier is normally gemini-2.0-flash. Copy the API key and pass it to the --api-key argument of the ViralQuest pipeline.
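
For illustration only (the viralquest command name and the omitted arguments below are placeholders; only the --api-key argument is described here), passing the key to the pipeline looks something like this:

# illustrative sketch: pass the key copied from Google AI Studio
# other ViralQuest arguments omitted; check the pipeline usage for the full command line
viralquest --api-key YOUR_GEMINI_API_KEY ...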

Install Ollama locally

You can install Ollama with the install script from the Ollama site:

curl -fsSL https://ollama.com/install.sh | sh
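
After the script finishes, you can check that the installation worked by printing the installed version:

# confirm the ollama CLI is available on your PATH
ollama --version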

It's also possible to download the binary release of Ollama from GitHub:

# Download the Ollama binary release
wget https://github.com/ollama/ollama/releases/download/v0.7.0/ollama-linux-amd64.tgz

# extract the archive
tar -xzvf ollama-linux-amd64.tgz

# after that, execute the extracted binary
./bin/ollama

Another option is installing through Anaconda:

conda install conda-forge::ollama

With these two methods, after installing Ollama you need to start the background service on your system yourself (this command is not necessary when installing via the Ollama script).

ollama serve &

⚠️ Sometimes the ollama serve & command stops working after you log out of your user session on a Linux server. Run the command, log out of the server, and sign in again to test whether the service is still online. If it is not, you can run nohup ollama serve >> nohup.out 2>&1 & to create a log file for 'ollama serve' and avoid the disconnection problem after logging out.

After that, Ollama will be running in the background on your system and it's now possible to pull and run local LLM models.
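
To confirm the service is reachable, you can query the local API endpoint (Ollama listens on port 11434 by default); it should answer with "Ollama is running":

# the Ollama server listens on localhost:11434 by default
curl http://localhost:11434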

Pull and Run LLM models

With Ollama installed and running on your machine, download a model from the Ollama library. Here we will pull the qwen3:8b model:

ollama pull qwen3:8b

After the download, run the model:

ollama run qwen3:8b
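
Note that ollama run opens an interactive chat session, which you can leave with /bye. To check which models are available locally, list them with:

# show the models downloaded on this machine
ollama list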

Install Pip modules

The integration between the Ollama service and ViralQuest happens through the LangChain module. The pip packages required to support the LLM providers are:

pip install langchain langchain-core langchain-ollama langchain-openai langchain-anthropic langchain-google-genai ollama

With these requirements installed, it's possible to run LLMs to support the ViralQuest analysis.
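
As a quick sanity check (assuming the Ollama server is running and qwen3:4b has already been pulled), you can test the langchain-ollama integration from the shell before starting a ViralQuest run:

# one-line smoke test of the LangChain <-> Ollama connection
python -c "from langchain_ollama import ChatOllama; print(ChatOllama(model='qwen3:4b').invoke('Reply with OK').content)"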