Home - amosproj/amos2025ss04-ai-driven-testing GitHub Wiki
Welcome to the amos2025ss04-ai-driven-testing wiki!
## Wiki Pages

### Architecture

- **Architecture**: Overview of the system architecture and component interactions.
- **CI Pipeline**: Overview of the continuous integration pipeline for the project.
- **Docker Runner**: A utility enabling Docker containers to run sequentially from Python.
- **Ollama Setup Information**: Possible ways to use Ollama to run LLMs.
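The Docker Runner's sequential execution can be sketched as follows. This is a minimal illustration only: the helper names and the use of the Docker CLI via `subprocess` are assumptions, not the utility's actual implementation.

```python
import subprocess


def build_run_command(image, command=("echo", "hello")):
    """Assemble a `docker run` invocation for one container.

    `--rm` removes the container after it exits; the image and
    command here are placeholders, not values from the project.
    """
    return ["docker", "run", "--rm", image, *command]


def run_sequentially(images):
    """Run one container per image, strictly one after another.

    subprocess.run blocks until the container exits, so the next
    container only starts once the previous one has finished.
    """
    exit_codes = []
    for image in images:
        proc = subprocess.run(build_run_command(image))
        exit_codes.append(proc.returncode)
    return exit_codes
```

The key point is that `subprocess.run` is blocking, which is what gives strictly sequential container execution without extra synchronization.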
### Metrics

- **Code Complexity**: Description of the code complexity metrics that will be used for evaluation.
- **Language Models Considered**: A list of all language models evaluated for use in the project.
### General LLM Information

- **LLM Research**: Background research and documentation on large language models.
- **LLMs Incompatible With Our Project**: Overview of models that were evaluated but deemed unsuitable.
- **AI-Model Benchmarks**: Standard benchmarks used to evaluate LLMs.
- **Docker Performance**: Performance of different Docker configurations when running LLMs.
### User Guide

Introduction to the project's purpose from a user's perspective.

Requirements: Docker, Python/Conda.

Setup instructions:

- Cloning the repository.
- Setting up the Conda environment.
- Ensuring Docker is running.

Using the command-line interface (`main.py`).
### Developer Guide

Sub-sections:

Backend core components, detailed design:

- `LLMManager` (`llm_manager.py`)
- Execution flow (`execution.py`)
- API design (`api.py`)
- CLI design (`cli.py`)
- Model configuration (`allowed_models.json`, `model_manager.py`): how models are defined and loaded.
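How `allowed_models.json` might be consumed can be sketched like this. The schema (a JSON array of objects with a `name` field) and the function names are assumptions for illustration; the actual format and loading logic live in `model_manager.py`.

```python
import json
from pathlib import Path


def load_allowed_models(path="allowed_models.json"):
    """Read the model allow-list and index it by model name.

    Schema assumed for illustration: a JSON array of objects,
    each carrying at least a "name" key.
    """
    entries = json.loads(Path(path).read_text(encoding="utf-8"))
    return {entry["name"]: entry for entry in entries}


def is_allowed(model_name, allowed):
    """Check whether a requested model appears in the allow-list."""
    return model_name in allowed
```

A caller would load the allow-list once at startup and reject any model name not present in it before handing the request to the `LLMManager`.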
Module system internals:

- `ModuleBase` (`modules/base.py`): the abstract base class contract.
- Module Manager (`module_manager.py`): module loading (dynamic import, naming conventions) and application logic.
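The contract and the naming-convention loading described above might look roughly like this sketch. The method name `apply` and the `<Name>Module` class-naming rule are assumptions, not necessarily the project's actual conventions.

```python
import importlib
from abc import ABC, abstractmethod


class ModuleBase(ABC):
    """Abstract contract every module must fulfil (assumed shape)."""

    @abstractmethod
    def apply(self, text: str) -> str:
        """Transform a prompt or an LLM response."""


def load_module(name, package="modules"):
    """Dynamically import `<package>.<name>` and instantiate the class
    named `<Name>Module` found inside it (hypothetical convention).
    """
    module = importlib.import_module(f"{package}.{name}")
    cls = getattr(module, f"{name.capitalize()}Module")
    if not issubclass(cls, ModuleBase):
        raise TypeError(f"{cls.__name__} does not implement ModuleBase")
    return cls()
```

The `issubclass` check enforces the base-class contract at load time, so a module that does not inherit from `ModuleBase` fails fast instead of erroring later during application.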
### Integration Considerations

- **Robot Framework**: Information about potential integration with the Robot Framework.
### Project Contribution Infos

- **How to be an AMOS Release Manager**: How to do the weekly release.
The wiki is actively maintained. Additional content may be added as the project evolves.