AI Models - omer-faruq/assistant.koplugin GitHub Wiki
## Choose a Model with KOReader
Unlike most apps, KOReader primarily runs on e-ink devices, which have limited resources for displaying dynamic content. Currently, it is not feasible to implement AI experiences similar to those available on a new iPhone.
The `assistant.koplugin` plugin uses the AI API in non-streaming mode, meaning no response appears until the model has generated all of its content. The speed of the model is therefore a factor when selecting one for use with KOReader.
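To illustrate what non-streaming means at the API level, here is a minimal sketch of a request body for an OpenAI-compatible chat-completions endpoint (the model name and message are placeholders; the plugin builds this request internally):

```python
import json

# Request body for an OpenAI-compatible /v1/chat/completions endpoint.
# "stream": False tells the server to return one complete response only
# after the model has finished generating -- nothing arrives in between,
# which is why slow models feel unresponsive on an e-ink screen.
payload = {
    "model": "gpt-4.1-nano-2025-04-14",  # a fast, low-cost model
    "messages": [
        {"role": "user", "content": "Summarize this highlighted passage."}
    ],
    "stream": False,  # non-streaming: KOReader waits for the full answer
}

print(json.dumps(payload, indent=2))
```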
Reasoning models are NOT recommended. The reasoning process takes additional time, and there is no indication of its progress on KOReader's screen.
Most AI platforms offer various models with distinct characteristics, and their names often indicate optimization for response speed: `-flash`, `-fast`, `-mini`, and so on.
Select one of the fast models, as they are typically the most cost-efficient; even free tiers are sufficient for personal reading purposes. The following table lists common platforms and recommended models.
| Platform | Recommended Model | Cost | Free to Use | Notes |
|---|---|---|---|---|
| Gemini | `gemini-2.5-flash-lite-preview-06-17` | Most cost-efficient | ✅ | |
| OpenAI | `gpt-4.1-nano-2025-04-14` | | ❌ | |
| Mistral AI | `magistral-small-2506` | FREE | ✅ | |
| Groq | `gemma2-9b-it`, `qwen/qwen3-32b` | FREE | ✅ | Ultra-fast platform. Turn off reasoning for qwen3: `additional_parameters = { reasoning_effort = "none" }` |
| OpenRouter | `deepseek/deepseek-chat-v3-0324:free`, `mistralai/mistral-small-24b-instruct-2501:free` | Various | ✅ | |
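As an example, the Groq note above could translate into a provider entry like the following sketch. This is illustrative only: `additional_parameters = { reasoning_effort = "none" }` comes from the table, but the other field names are assumptions; consult the plugin's sample configuration for the actual schema.

```lua
-- Illustrative provider settings (key names are assumptions, except
-- additional_parameters, which is taken from the table above).
groq = {
    model = "qwen/qwen3-32b",
    api_key = "YOUR_GROQ_API_KEY",
    -- Disable the reasoning phase so the full answer arrives sooner:
    additional_parameters = { reasoning_effort = "none" },
},
```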