Page Index - mostlygeek/llama-swap GitHub Wiki
20 page(s) in this GitHub Wiki:
- Home
- About
- Model Guides
- Use Case Guides
- aider, QwQ, Qwen‐Coder 2.5
- Configuration
- Docker in Docker with llama‐swap guide
- Examples
- gemma3 27b 100k context
- golang test large context
- Installation
- llama 3.3 70B Q4_K_M with Spec decoding
- llama cpp reranker
- llama server embedding
- llama server spec decoding script
- llama4 scout triple24gb gpu
- mistral small vision 24gb
- mostlygeek llama3.3 70B spec decoding dry
- qwen3 30B 24GB VRAM
- whisper cpp large v3 turbo