Documentation about self‐hosting - cshunor02/sponge-attack GitHub Wiki

Self-hosted AI interface

For this project, we self-hosted an AI interface on our home server.

Hardware and Software data:

The AI interface is hosted in a virtual machine on the server.

  • 8 vCPUs (virtual CPUs)
  • 8 GB RAM
  • AMD Ryzen 5 7600 6-Core Processor (host CPU)

Connection

Anyone can use the AI interface if their computer is on our network (either via Wi-Fi or VPN).

VPN connection

The connection can be made with the WireGuard software and a VPN configuration.

Configuration data:

```
[Interface]
PrivateKey = *privatekey*
Address = *address*

[Peer]
PublicKey = *public key*
Endpoint = *endpoint*.org:4501
AllowedIPs = 192.168.*IP*/32
PersistentKeepalive = 25
```
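Before bringing the tunnel up (e.g. with `wg-quick up ./wg0.conf`), it can be worth sanity-checking that the config file contains every required key. A minimal sketch, assuming the config is saved as `wg0.conf` (a hypothetical filename) with the placeholder values above:

```shell
# Write the example config to a file (placeholders stand in for real keys).
cat > wg0.conf <<'EOF'
[Interface]
PrivateKey = *privatekey*
Address = *address*

[Peer]
PublicKey = *public key*
Endpoint = *endpoint*.org:4501
AllowedIPs = 192.168.*IP*/32
PersistentKeepalive = 25
EOF

# Verify that every key wg-quick needs is present in the file.
for key in PrivateKey Address PublicKey Endpoint AllowedIPs; do
  grep -q "^$key" wg0.conf || { echo "missing: $key"; exit 1; }
done
echo "config looks complete"
```

Once the real keys and addresses are filled in, `sudo wg-quick up ./wg0.conf` starts the tunnel and `sudo wg show` confirms the handshake.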

AI Interface

We selected Open WebUI as our solution.

https://openwebui.com/

The self-hosting setup is reproducible from the website's documentation.

We used Open WebUI's GitHub page: https://github.com/open-webui/open-webui

We adapted this compose file: https://github.com/open-webui/open-webui/blob/main/docker-compose.yaml

The exact compose file we used can be found in the Self-host folder as docker-compose.yml.
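As a rough sketch of what such a compose file can look like, the fragment below wires Ollama and Open WebUI together. The image names and the `OLLAMA_BASE_URL` variable follow Open WebUI's published defaults, but the port mapping and volume names here are illustrative assumptions; the authoritative file is the one in the Self-host folder.

```yaml
services:
  ollama:
    image: ollama/ollama               # serves the LLMs on port 11434
    volumes:
      - ollama:/root/.ollama           # persist downloaded models

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                    # web UI reachable on host port 3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    volumes:
      - open-webui:/app/backend/data   # persist users and chat history

volumes:
  ollama:
  open-webui:
```

With a file like this in place, `docker compose up -d` starts both containers in the background.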

Starting page

(screenshot: Open WebUI starting page)

AI model

Open WebUI makes it possible to select any AI model from the Ollama library: https://ollama.com/library

(screenshot: Ollama model library)

Just by searching for the AI model's name on the starting page, we can download and use the model.

(screenshot: model search on the starting page)

The part after the `:` is the model's size tag, i.e. its parameter count. E.g. `10b` means 10 billion parameters.
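The naming convention can be sketched with plain shell string splitting; `llama3:8b` is a hypothetical example tag, not necessarily the model we ran:

```shell
# An Ollama model reference has the form <name>:<size tag>.
model="llama3:8b"       # hypothetical example
name="${model%%:*}"     # part before ":" -> the model family
tag="${model##*:}"      # part after ":"  -> the parameter count, here 8 billion
echo "model: $name, size tag: $tag"
```

Running this prints `model: llama3, size tag: 8b`; the same split applies to any entry in the Ollama library.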

Statistics

Since the AI interface and the LLM models run in our hosted virtual environment, we can monitor the CPU and other statistics of the virtual machine, which we used to measure the performance impact of some of our sponge attacks.

Run

Other information, the exact documentation on how to start Docker, and the docker compose file's contents can be found here