
🏗️ System Architecture & Setup

Technical architecture, installation and setup procedures


System Architecture


Purpose and Scope

This document provides a detailed technical overview of the F1 Strategy Manager's modular architecture, covering the integration of data pipelines, machine learning models, expert systems, and application interfaces. The architecture supports real-time F1 race strategy analysis through AI-powered decision making.

For specific implementation details of individual components, see Machine Learning Models, Expert System, and Streamlit Dashboard. For deployment and development setup, see Installation and Setup.

Overall System Architecture

The F1 Strategy Manager follows a layered, modular architecture that processes multiple data sources through machine learning pipelines and expert systems to generate strategic recommendations.

High-Level System Overview

The system is organized into four layers:

- External Data Sources: the FastF1 API (race telemetry), the OpenF1 API (team radios), Roboflow (annotated videos), and YouTube (race footage)
- Data Processing Layer: notebooks such as `extract_radios.ipynb`, `gap_calculation.ipynb`, and `N01_radio_transcription.ipynb` feed the XGBoost models, TCN/LSTM models, YOLO models, and the NLP pipeline
- Expert System Core: `TelemetryFact`, `DegradationFact`, `GapFact`, `RadioFact`, and `RaceStatusFact` are consumed by the `F1CompleteStrategyEngine`, which combines multiple rule systems with conflict resolution and recommendation ranking
- Application Layer: the `main.py` Streamlit dashboard, including the Strategy Chat with LLM integration
Sources:

README.md 19-68

scripts/IS_agent/N06_rule_merging.ipynb 115-137

Core Component Architecture

The core components break down into three groups:

- Expert System Framework: `F1CompleteStrategyEngine` uses multiple inheritance to combine four rule systems: `F1DegradationRules` (tire strategy), `F1LapTimeRules` (performance analysis), `F1RadioRules` (communication analysis), and `F1GapRules` (position strategy)
- Fact Definitions:
  - `TelemetryFact`: `lap_time`, `tire_age`, `position`, `driver_number`
  - `DegradationFact`: `degradation_rate`, `predicted_rates`
  - `GapFact`: `gap_ahead`, `gap_behind`, `in_undercut_window`
  - `RadioFact`: `sentiment`, `intent`, `entities`
  - `RaceStatusFact`: `lap`, `total_laps`, `race_phase`
- Data Transformation: `transform_tire_predictions()`, `transform_lap_time_predictions()`, `transform_gap_data_with_consistency()`, and `transform_radio_analysis()`, orchestrated by `transform_all_facts()`
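
The fact classes are plain experta `Fact` subclasses. A minimal sketch of two of them, assuming experta's `Field` API; only the field names come from the summaries above, while the types and optionality are assumptions:

```python
from experta import Fact, Field

class TelemetryFact(Fact):
    """Car state for one driver (field types are assumed)."""
    driver_number = Field(int, mandatory=True)
    lap_time = Field(float)
    tire_age = Field(int)
    position = Field(int)

class GapFact(Fact):
    """Gaps to surrounding cars plus the undercut-window flag."""
    gap_ahead = Field(float)
    gap_behind = Field(float)
    in_undercut_window = Field(bool)
```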

Sources: scripts/app/app_modules/agent/N06_rule_merging.py 78-101

scripts/app/app_modules/agent/N01_agent_setup.py 14-103

Data Flow Architecture

The system processes data through a multi-stage pipeline that transforms raw race data into actionable strategic recommendations.

Data Processing Pipeline

1. Raw Data Ingestion: race CSVs (telemetry data), MP3 files (team radio), MP4 files (race footage), and gap CSVs (position data)
2. ML Prediction Layer: `load_tire_predictions()` (N02_model_tire_predictions), `predict_lap_times()` (N00_model_lap_prediction), YOLO inference for car detection, and `analyze_radio_message()` (N06_model_merging)
3. Fact Generation: `transform_all_facts()` converts data into facts, which `engine.declare(fact)` registers in working memory
4. Strategy Generation: `engine.run()` fires the rules, `_resolve_conflicts()` applies priority resolution, and `get_recommendations()` returns sorted results
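
Put together, the four stages reduce to a short driver loop. A sketch under the assumption that the helpers named above are in scope; the exact argument list of `transform_all_facts` is an assumption:

```python
# All names below come from the pipeline stages above; the argument list
# of transform_all_facts is an assumption, not the project's signature.
engine = F1CompleteStrategyEngine()
engine.reset()                                  # initialize working memory

for fact in transform_all_facts(tire_preds, lap_preds, gap_df, radio_json):
    engine.declare(fact)                        # fact registration

engine.run()                                    # rule firing
recommendations = engine.get_recommendations()  # sorted, conflict-resolved
```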

Sources: scripts/app/app_modules/agent/N06_rule_merging.py 238-431

scripts/app/app_modules/agent/N06_rule_merging.py 522-639

Rule Engine Integration Architecture

- Rule System Inheritance: `F1StrategyEngine` extends experta's `KnowledgeEngine`. Four specialized subclasses contribute rules: `F1DegradationRules` (`high_degradation_pit_stop`, `stint_extension_recommendation`), `F1LapTimeRules` (`optimal_performance_window`, `performance_cliff_detection`), `F1RadioRules` (`grip_issue_response`, `weather_information_adjustment`), and `F1GapRules` (`undercut_opportunity`, `defensive_pit_stop`). `F1CompleteStrategyEngine` inherits from all of them to form a unified rule system.
- Conflict Resolution: recommendations are grouped into `action_groups` by action type; `conflicting_pairs` such as `extend_stint` vs `pit_stop` and `perform_undercut` vs `perform_overcut` are settled by `_resolve_conflicts()` using priority plus confidence.
- Recommendation Output: each `StrategyRecommendation` carries `action`, `confidence`, `explanation`, and `priority`; results are sorted by priority and returned conflict-free.
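
A condensed sketch of this inheritance pattern, assuming experta; the rule body and the 0.15 threshold are illustrative, not the project's actual values:

```python
from experta import KnowledgeEngine, Rule, MATCH, TEST
# DegradationFact as sketched in the fact-definition example above.

class F1StrategyEngine(KnowledgeEngine):
    """Shared base: collects recommendations fired by any rule system."""
    def __init__(self):
        super().__init__()
        self.recommendations = []

class F1DegradationRules(F1StrategyEngine):
    @Rule(DegradationFact(degradation_rate=MATCH.rate),
          TEST(lambda rate: rate > 0.15))   # threshold is an assumption
    def high_degradation_pit_stop(self, rate):
        self.recommendations.append({"action": "pit_stop", "rate": rate})

class F1CompleteStrategyEngine(F1DegradationRules):
    """Unified engine; the real class also mixes in F1LapTimeRules,
    F1RadioRules, and F1GapRules via multiple inheritance."""
```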

Sources: scripts/app/app_modules/agent/N06_rule_merging.py 143-215

scripts/IS_agent/N06_rule_merging.ipynb 127-282

Code Organization and Module Structure

The codebase follows a hierarchical organization with clear separation of concerns across different functional domains.

Module Hierarchy

| Module Path | Primary Classes/Functions | Responsibility |
| --- | --- | --- |
| `scripts/app/app_modules/agent/` | `F1CompleteStrategyEngine`, `transform_all_facts` | Expert system integration |
| `scripts/app/app_modules/agent/N01_agent_setup.py` | `TelemetryFact`, `DegradationFact`, `GapFact` | Core fact definitions |
| `scripts/app/app_modules/agent/N02_degradation_time_rules.py` | `F1DegradationRules` | Tire degradation rules |
| `scripts/app/app_modules/agent/N03_lap_time_rules.py` | `F1LapTimeRules` | Performance analysis rules |
| `scripts/app/app_modules/agent/N04_nlp_rules.py` | `F1RadioRules` | Radio communication rules |
| `scripts/app/app_modules/agent/N05_gap_rules.py` | `F1GapRules` | Position strategy rules |
| `scripts/app/main.py` | Streamlit application | Web interface |
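
Assuming standard package imports, the table implies import paths along these lines (hypothetical; the actual import roots depend on how `scripts/app` is packaged):

```python
# Hypothetical imports mirroring the module layout in the table above.
from app_modules.agent.N01_agent_setup import (
    TelemetryFact, DegradationFact, GapFact,
)
from app_modules.agent.N02_degradation_time_rules import F1DegradationRules
from app_modules.agent.N05_gap_rules import F1GapRules
from app_modules.agent.N06_rule_merging import (
    F1CompleteStrategyEngine, transform_all_facts,
)
```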

Sources: scripts/app/app_modules/agent/N06_rule_merging.py 14-48

scripts/app/app_modules/agent/N01_agent_setup.py 1-12

Key Integration Points

- Data Layer: CSV files (race data), JSON files (radio analysis), and PKL/PT files (trained models)
- Processing Layer: `load_tire_predictions()`, `load_lap_time_predictions()`, and `load_gap_data()` read the stored artifacts; `transform_tire_predictions()`, `transform_lap_time_predictions()`, and `transform_radio_analysis()` generate facts from them
- Engine Layer: `engine.declare(fact)` places facts in working memory, `@Rule` decorators perform pattern matching, and `StrategyRecommendation` objects are produced
- Application Layer: `st.sidebar` and `st.columns` build the user interface, which renders the strategy displays and visualization
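
At the application layer, the Streamlit primitives named above might be wired roughly like this (widget labels and layout are illustrative, not the dashboard's actual code):

```python
import streamlit as st

view = st.sidebar.radio(
    "View",
    ["Strategy Recommendations", "Gap Analysis",
     "Radio Analysis", "Time Predictions"],
)
left, right = st.columns(2)   # strategy display | visualization panel
with left:
    st.subheader(view)
with right:
    st.caption("Visualization panel")
```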

Sources: scripts/app/app_modules/agent/N06_rule_merging.py 434-519

scripts/app/app_modules/agent/N01_agent_setup.py 156-498

The architecture ensures modularity through clear interfaces between components, enabling independent development and testing of machine learning models, rule systems, and user interfaces while maintaining a cohesive strategic analysis capability.



Installation and Setup

This page provides comprehensive instructions for installing the F1 Strategy Manager system, setting up the required environment, and running the application. For information about the overall system architecture, see System Architecture.

System Requirements

Before installing the F1 Strategy Manager, ensure your system meets the following requirements:

| Component | Requirement |
| --- | --- |
| Python | 3.10 or higher |
| GPU | CUDA-compatible (for local training and video inference) |
| Storage | At least 5 GB of free space for code, dependencies, and models |
| Memory | Minimum 8 GB RAM (16 GB recommended) |
| OS | Windows, Linux, or macOS |

Sources:

README.md 53-57

Environment Setup

The F1 Strategy Manager uses Python virtual environments to manage dependencies. The setup process runs through the following steps:

1. Clone the repository
2. Create a virtual environment
3. Activate the environment
4. Install dependencies
5. Download models (if needed)
6. Run the application

Creating a Virtual Environment

1. Clone the repository:

```bash
git clone https://github.com/VforVitorio/F1_Strat_Manager.git
cd F1_Strat_Manager
```

2. Create and activate a virtual environment:

```bash
# Create virtual environment
python -m venv venv

# Activate on Windows
venv\Scripts\activate

# Activate on Linux/macOS
source venv/bin/activate
```

Sources:

README.md 75-88

Installing Dependencies

The project requires numerous Python libraries for data processing, machine learning, visualization, and the user interface. All dependencies are listed in the requirements.txt file.

Core Dependencies

- Data Processing: `pandas`, `numpy`, `fastf1`, `openf1`
- Machine Learning: `torch`, `xgboost`, `transformers`, `openai-whisper`, `ultralytics`
- Computer Vision: `opencv-python`, `roboflow`
- Expert System: `experta`
- Visualization: `streamlit`, `plotly`, `matplotlib`, `seaborn`

To install all dependencies:

```bash
pip install -r requirements.txt
```

The requirements.txt file contains all necessary packages, including version specifications for compatibility.

Sources:

README.md 60-72

requirements.txt 1-229

Critical Dependencies

The following dependencies are critical for specific subsystems:

| Dependency | Purpose | Subsystem |
| --- | --- | --- |
| `fastf1`, `openf1` | Formula 1 data access | Data Extraction |
| `torch`, `xgboost` | Machine learning models | Predictive Models |
| `ultralytics` | YOLOv8 object detection | Vision-based Gap Calculation |
| `transformers`, `whisper` | NLP processing | Radio Analysis Pipeline |
| `streamlit` | Web interface | User Interface |
| `experta` | Rule-based expert system | Strategy Engine |

Sources:

README.md 30-42

Running the Application

After installation, you can run the F1 Strategy Manager using Streamlit:

```bash
streamlit run app/main.py
```

The application will start and open in your default web browser. If it doesn't open automatically, access it at http://localhost:8501.

Sources:

README.md 98-100

Application Structure

The key components of the installed system interact as follows:

- Frontend: `app/main.py` launches the Streamlit dashboard, which provides the Strategy Recommendations, Gap Analysis, Radio Analysis, and Time Predictions views plus the Strategy Chat interface
- Backend Components: the machine learning models, computer vision models, NLP pipeline, and expert system behind the dashboard

Sources:

README.md 43-49

Data and Model Management

The F1 Strategy Manager uses several data sources and pre-trained models. Some files are excluded from version control as specified in the gitignore file:

```
data/        # Raw and processed data
models/      # Trained models
outputs/     # Generated outputs
f1_cache/    # FastF1 cache
yolo-files   # YOLOv8 model files
```

You may need to download these files separately or generate them by running the training scripts.

Sources: .gitignore 3-10

Configuration (Optional)

The system doesn't require explicit configuration after installation, but you may want to customize certain aspects:

FastF1 cache location: The system uses FastF1's built-in caching, which defaults to a directory in the project. You can customize this by setting the appropriate environment variables.
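
For example, the cache directory can be set programmatically with FastF1's cache API (the `f1_cache` directory name here mirrors the `.gitignore` entry and is an assumption):

```python
import fastf1

# Create the directory first if it does not already exist.
fastf1.Cache.enable_cache("f1_cache")
```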

GPU settings: If you have multiple GPUs, you might want to specify which one to use by setting the CUDA_VISIBLE_DEVICES environment variable.
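
A minimal example, setting the variable from Python before `torch` is imported:

```python
import os

# Expose only the first GPU to CUDA-aware libraries such as torch.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # 1 if a compatible GPU is visible
```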

Troubleshooting

Common Issues

| Issue | Solution |
| --- | --- |
| Missing dependencies | Run `pip install -r requirements.txt` again |
| CUDA/GPU errors | Ensure you have compatible NVIDIA drivers installed |
| Model file missing | Download required model files or check paths |
| Memory errors | Reduce batch sizes or use a machine with more RAM |

Checking Installation

To verify that all components are correctly installed, run:

python -c "import fastf1; import torch; import streamlit; import experta; print('All dependencies successfully imported')"

If this command executes without errors, the core dependencies are correctly installed.



Documentation Workflow


Purpose and Scope

This document covers the automated documentation synchronization system that maintains the F1 Strategy Manager's GitHub Wiki by fetching content from DeepWiki and processing it into structured markdown files. The system uses GitHub Actions to automatically update the wiki whenever changes are pushed to the main branch.

For information about the overall system architecture, see System Architecture. For details about setting up the development environment, see Installation and Setup.

Workflow Overview

The documentation workflow operates as a fully automated GitHub Actions pipeline that bridges external documentation sources with the project's GitHub Wiki. The system fetches comprehensive documentation from DeepWiki, processes it through multiple transformation stages, and publishes it as organized wiki pages.

- Trigger: a push to the main branch starts the workflow
- DeepWiki Integration: clone and build the DeepWiki MCP, start its HTTP service on port 3000, and fetch all pages from DeepWiki
- Content Processing: process and split the pages with `clean_deepwiki_content()` and `categorize_section()`, then organize them by hierarchy
- Wiki Publishing: check out the `.wiki` repository, copy the markdown files, generate `Home.md`, and commit and push the changes

Sources: .github/workflows/update-wiki.yml 1-616

Component Architecture

The documentation workflow consists of several distinct components that handle different aspects of the synchronization process:

- GitHub Actions Workflow: the `update-wiki` job runs on an `ubuntu-latest` runner
- External Services: the DeepWiki site, the `${{ github.repository }}.wiki` repository, and the `deepwiki-mcp` repository
- DeepWiki MCP Service: an HTTP service on `localhost:3000` that exposes the `/mcp` endpoint and the `deepwiki_fetch` tool
- Content Processing Pipeline: a Python processing script built around `clean_deepwiki_content()`, `get_page_title_from_content()`, `categorize_section()`, and `safe_filename()`
- File Organization: the `docs-md/` directory with individual `.md` files, `f1-strat-manager-complete.md`, and `Home.md`

Sources: .github/workflows/update-wiki.yml 18-65

.github/workflows/update-wiki.yml 170-521

.github/workflows/update-wiki.yml 533-600

Data Processing Pipeline

The content processing pipeline transforms raw DeepWiki content into structured GitHub Wiki pages through multiple stages of data cleaning and organization:

Content Transformation Functions

The system implements several Python functions embedded in the GitHub Actions workflow to process content:

| Function | Purpose | Lines |
| --- | --- | --- |
| `clean_deepwiki_content()` | Removes DeepWiki UI elements and navigation | 180-223 |
| `get_page_title_from_content()` | Extracts clean titles from content | 224-234 |
| `safe_filename()` | Generates safe filenames from titles | 235-251 |
| `categorize_section()` | Organizes content by hierarchical structure | 252-317 |
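
The function bodies live inline in the workflow file. A hypothetical sketch of the first one, assuming regex-based line filtering; the actual UI markers it strips are defined in update-wiki.yml 180-223:

```python
import re

def clean_deepwiki_content(text: str) -> str:
    """Drop DeepWiki UI/navigation lines and collapse blank runs."""
    ui_noise = re.compile(r"^(Edit this page|Share|Menu|Powered by)\b")
    kept = [line for line in text.splitlines() if not ui_noise.match(line)]
    return re.sub(r"\n{3,}", "\n\n", "\n".join(kept)).strip()
```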

Sources: .github/workflows/update-wiki.yml 175-317

Content Categorization System

The workflow implements a sophisticated categorization system that maps content to the project's hierarchical structure:

- Content Classification: raw DeepWiki content passes through title extraction and category mapping into main sections, subsections, or miscellaneous content
- Main Section Types: `01-overview.md`, `02-streamlit-dashboard.md`, `03-machine-learning-models.md`, `04-nlp-pipeline.md`, `05-expert-system.md`, `06-developer-guide.md`
- Subsection Examples: `02-01-strategy-recommendations-view.md`, `03-01-lap-time-prediction.md`, `04-01-radio-transcription.md`, `05-01-degradation-rules.md`

Sources: .github/workflows/update-wiki.yml 252-317

.github/workflows/update-wiki.yml 354-418

DeepWiki Integration

The workflow integrates with DeepWiki through the Model Context Protocol (MCP) service, which provides a standardized interface for accessing documentation content:

Service Configuration

The DeepWiki MCP service is configured and started with specific parameters:

- MCP Service Setup: `git clone` the `deepwiki-mcp` repository, run `npm install`, verify the build, then start the service with `node ./bin/cli.mjs --http --port 3000`
- Service Verification: `curl localhost:3000/mcp`, `ps aux | grep node`, and `cat deepwiki.log`
- Content Fetching: a JSON-RPC 2.0 request with `maxDepth: 1` and `mode: pages`, whose JSON response is then processed

Sources: .github/workflows/update-wiki.yml 18-86

.github/workflows/update-wiki.yml 87-169

API Communication

The workflow communicates with DeepWiki using structured JSON-RPC 2.0 requests:

{ "jsonrpc": "2.0", "method": "tools/call", "params": { "name": "deepwiki_fetch", "arguments": { "url": "https://deepwiki.com/VforVitorio/F1_Strat_Manager", "maxDepth": 1, "mode": "pages" } }, "id": 1 }

Sources: .github/workflows/update-wiki.yml 109-120
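
The same request can be reproduced outside the workflow, for example with Python's `requests` library against the local service (a sketch, assuming the MCP service from the previous section is running):

```python
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
        "name": "deepwiki_fetch",
        "arguments": {
            "url": "https://deepwiki.com/VforVitorio/F1_Strat_Manager",
            "maxDepth": 1,
            "mode": "pages",
        },
    },
    "id": 1,
}
response = requests.post("http://localhost:3000/mcp", json=payload)
pages = response.json()  # JSON-RPC result containing the fetched pages
```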

Wiki Organization

The processed content is organized into a hierarchical wiki structure with consistent naming conventions and cross-references:

File Naming Convention

| Pattern | Purpose | Example |
| --- | --- | --- |
| `01-*.md` | Overview section | `01-overview.md` |
| `02-*.md` | Streamlit Dashboard | `02-streamlit-dashboard.md` |
| `02-01-*.md` | Dashboard subsections | `02-01-strategy-recommendations-view.md` |
| `99-*.md` | Miscellaneous content | `99-section-1.md` |
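
A hypothetical sketch of how `safe_filename()` might produce names fitting this convention; the numeric prefix is assigned by `categorize_section()`, so only the slugging is shown:

```python
import re

def safe_filename(title: str) -> str:
    """Lower-case slug with hyphens; unsafe characters stripped."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{slug}.md"

safe_filename("Strategy Recommendations View")
# -> 'strategy-recommendations-view.md'
```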

Sources: .github/workflows/update-wiki.yml 272-316

Home Page Generation

The workflow automatically generates a comprehensive Home.md file that serves as the wiki's entry point with hierarchical navigation:

- Home Page Structure: a welcome section, a link to the complete documentation, a hierarchical table of contents, and a project overview
- Navigation Structure: a markdown link list per main section
- Cross-References: a DeepWiki link, internal wiki links, and subsection links

Sources: .github/workflows/update-wiki.yml 551-598
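
A minimal sketch of such a generator, assuming the `docs-md/` layout from earlier and GitHub Wiki's `[[Title|page]]` link syntax (subsection nesting omitted):

```python
from pathlib import Path

# Collect the numbered section files produced by the processing stage.
sections = sorted(Path("docs-md").glob("[0-9][0-9]-*.md"))
lines = ["Welcome to the F1 Strategy Manager wiki.", "",
         "Table of Contents", ""]
for page in sections:
    # Derive a display title from the filename slug.
    title = page.stem.split("-", 1)[1].replace("-", " ").title()
    lines.append(f"- [[{title}|{page.stem}]]")
Path("Home.md").write_text("\n".join(lines), encoding="utf-8")
```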

Automation and Deployment

The documentation workflow is fully automated and integrates with GitHub's repository management system:

Trigger Configuration

The workflow activates automatically on pushes to the main branch:

```yaml
on:
  push:
    branches:
      - main
```

Sources: .github/workflows/update-wiki.yml 3-6

Authentication and Permissions

The workflow uses a Personal Access Token (WIKI_PAT) to access the wiki repository:

```yaml
with:
  repository: ${{ github.repository }}.wiki
  token: ${{ secrets.WIKI_PAT }}
  path: wiki
```

Sources: .github/workflows/update-wiki.yml 529-531

Change Detection and Commit Process

The workflow implements intelligent change detection to avoid unnecessary commits:

1. Set the git user configuration
2. `git add` all files
3. Run `git diff --quiet --staged` to check whether anything is staged
4. If no changes are detected, log "No changes to commit"
5. Otherwise, commit with a timestamp, push to the wiki repository, and log "Changes pushed successfully"

Sources: .github/workflows/update-wiki.yml 602-615

The automated commit message includes a timestamp for tracking updates:

```bash
git commit -m "🔄 Update Complete Wiki from DeepWiki - $(date '+%Y-%m-%d %H:%M')"
```

Sources: .github/workflows/update-wiki.yml 612

