# Architecture
tIRC is built with a highly modular, event-driven architecture designed for extensibility, maintainability, and high performance, leveraging Python's `asyncio` framework for all I/O operations.
## Design Philosophy

tIRC adheres to the principle of separation of concerns, with clear boundaries between components. A strong emphasis is placed on asynchronous operations, thread-safe state management, and a robust event-dispatching system to ensure responsiveness and stability.
## High-Level Architecture

```mermaid
graph TD
    subgraph Entry["Application Entry (tirc.py)"]
        A["main()"] --> B{CLI Argument Check}
        B -- headless/UI mode --> C(curses.wrapper)
        B -->|"--send-raw"| D[IPC Client Call]
        C --> E["asyncio.run(IRCClient_Logic.run_main_loop())"]
    end
    subgraph Main["Main tIRC Instance"]
        E --> F{"IRCClient_Logic (Orchestrator)"}
        F -- Manages --> G[ConnectionOrchestrator]
        F -- Manages --> H[ClientShutdownCoordinator]
        F -- Manages --> I[ClientViewManager]
        F -- Manages --> J[UIManager]
        F -- Manages --> K[CommandHandler]
        F -- Manages --> L[StateManager]
        F -- Manages --> M[EventManager]
        F -- Manages --> N[ScriptManager]
        F -- Manages --> O[NetworkHandler]
        F -- Manages --> P[IPCManager]
    end
    subgraph IPC["IPC Flow"]
        D -- Connects to --> Q(Local TCP Socket)
        Q -- Handled by --> P
        P -- Injects Command --> K
    end
    subgraph Conn["Connection Lifecycle"]
        G -- Uses --> O
        G -- Uses --> L
    end
    subgraph Flow["Command & Event Flow"]
        K -- Sends via --> O
        K -- Updates --> L
        M -- Dispatches to --> N
        M -- Dispatches to --> F
        N -- Uses API --> F
    end
    subgraph Views["UI & View Management"]
        I -- Controls --> J
        J -- Renders via --> R(Modular Renderers)
    end
    subgraph Shutdown["Shutdown Flow"]
        H -- Stops --> G
        H -- Stops --> O
        H -- Stops --> Task_Input[Input Task]
        H -- Stops --> Task_Network[Network Task]
    end
    style D fill:#f9f,stroke:#333,stroke-width:2px
    style Q fill:#ccf,stroke:#333,stroke-width:2px
```
## Asynchronous Core

- **Complete Migration to asyncio:** The entire client has been refactored to use Python's `asyncio` framework, eliminating the previous threading-based approach for better performance and simplified concurrency management.
- **Non-blocking I/O:** All network operations, user input handling, and UI updates are handled asynchronously, ensuring a responsive user experience even during heavy network traffic.
- **Efficient Resource Usage:** The single-threaded event loop model reduces context-switching overhead and simplifies synchronization.
- **Modern Python Features:** Leverages Python 3.9+ features such as `asyncio.to_thread` for running blocking operations without stalling the event loop, as sketched below.
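For example, a blocking call such as a DNS lookup can be moved off the event loop with `asyncio.to_thread`. This is a minimal illustration of the pattern, not code from tIRC itself:

```python
import asyncio
import socket

async def resolve_host(hostname: str) -> str:
    # socket.gethostbyname blocks, so run it in a worker thread
    # to keep the single-threaded event loop responsive.
    return await asyncio.to_thread(socket.gethostbyname, hostname)

async def main() -> None:
    print(await resolve_host("irc.libera.chat"))

asyncio.run(main())
```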
## Centralized State Management (StateManager)

- The `StateManager` is the exclusive source of truth for all connection, session, and client-specific runtime state.
- It provides thread-safe, persistent session state that includes:
  - Connection details (server, port, SSL status)
  - Authentication state (SASL/NickServ info)
  - Connection statistics and error history
  - Joined channels and their states
  - User preferences and client settings
  - Message history and scrollback positions
- State is automatically persisted to disk and restored on startup, as in the sketch below.
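The following is a simplified sketch of that idea; tIRC's actual `StateManager` API, locking strategy, and storage format may differ. Mutations are serialized through a lock and mirrored to disk:

```python
import asyncio
import json
from pathlib import Path
from typing import Any

class SimpleStateManager:
    """Illustrative single-source-of-truth store (not tIRC's actual class)."""

    def __init__(self, path: Path) -> None:
        self._path = path
        self._lock = asyncio.Lock()
        # Restore persisted state on startup, or begin with an empty store.
        self._state: dict[str, Any] = (
            json.loads(path.read_text()) if path.exists() else {}
        )

    async def set(self, key: str, value: Any) -> None:
        async with self._lock:  # serialize concurrent mutations
            self._state[key] = value
            self._path.write_text(json.dumps(self._state))  # mirror to disk

    async def get(self, key: str, default: Any = None) -> Any:
        async with self._lock:
            return self._state.get(key, default)
```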
## Modular Command System

- All core client commands are implemented as individual Python modules within a structured `commands/` directory.
- Commands are dynamically discovered using `pkgutil.walk_packages` and registered at startup, making the client easily extensible (see the sketch below).
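A hedged sketch of how such discovery can work; the module layout and the `COMMAND_NAME`/`handle` convention here are assumptions for illustration, not tIRC's exact contract:

```python
import importlib
import pkgutil
from typing import Callable

import commands  # the package holding one module per command

def discover_commands() -> dict[str, Callable]:
    """Import every module under commands/ and collect its handler."""
    registry: dict[str, Callable] = {}
    for mod_info in pkgutil.walk_packages(commands.__path__, prefix="commands."):
        module = importlib.import_module(mod_info.name)
        # Assumed convention: each module exposes COMMAND_NAME and handle().
        name = getattr(module, "COMMAND_NAME", None)
        handler = getattr(module, "handle", None)
        if name and handler:
            registry[name] = handler
    return registry
```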
## Scripting System

- A powerful Python scripting system allows for deep customization.
- Scripts can register commands, subscribe to a wide range of events, and interact with the client through a rich `ScriptAPIHandler` (illustrated below).
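For flavor, a script might look roughly like this. The method names on the API object (`register_command`, `subscribe_to_event`, `send_message`) are assumptions for illustration; consult the actual `ScriptAPIHandler` documentation for the real interface:

```python
# example_script.py -- illustrative only; the real ScriptAPIHandler
# method names and event identifiers may differ.

class GreeterScript:
    def __init__(self, api):
        self.api = api  # ScriptAPIHandler instance supplied by tIRC

    def load(self):
        # Register a /hello command and listen for incoming messages.
        self.api.register_command("hello", self.on_hello)
        self.api.subscribe_to_event("PRIVMSG", self.on_privmsg)

    async def on_hello(self, args, context):
        await self.api.send_message(context, "Hello from a script!")

    async def on_privmsg(self, event):
        pass  # react to channel/query traffic here
```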
## Connection Lifecycle (ConnectionOrchestrator)

- The `ConnectionOrchestrator` is a critical component that centralizes management of the entire lifecycle of a server connection.
- It orchestrates the complex sequence of operations required to establish a connection:
  - Establishing the initial TCP/SSL socket connection (via `NetworkHandler`).
  - Performing IRCv3 capability negotiation (coordinating with `CapNegotiator`) to agree on supported features with the server.
  - Handling SASL (Simple Authentication and Security Layer) authentication (coordinating with `SaslAuthenticator`) for secure logins.
  - Managing the NICK/USER registration process (coordinating with `RegistrationHandler`) to formally identify the client to the server.
- By delegating these responsibilities, it significantly simplifies `IRCClient_Logic`, abstracting away the intricate details of connection state transitions, error handling (e.g., connection refused, authentication failure), and timeout management for each phase.
- This leads to more robust, maintainable, and testable connection handling, as each part of the connection sequence is managed by a specialized handler under the orchestrator's control.
- It implements comprehensive timeout mechanisms for each step (e.g., CAP negotiation timeout, SASL authentication timeout) and can trigger appropriate error recovery or retry logic, as sketched below.
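The per-phase timeout pattern can be sketched with `asyncio.wait_for`; the phase names, handler methods, and timeout values below are illustrative assumptions, not tIRC's actual code:

```python
import asyncio

async def establish_connection(network, cap, sasl, registration) -> bool:
    phases = [
        ("TCP/SSL connect", network.connect, 30),
        ("CAP negotiation", cap.negotiate, 15),
        ("SASL authentication", sasl.authenticate, 15),
        ("NICK/USER registration", registration.register, 30),
    ]
    for name, phase, timeout in phases:
        try:
            # Each phase gets its own deadline, so a stalled server
            # cannot hang the whole connection sequence.
            await asyncio.wait_for(phase(), timeout=timeout)
        except asyncio.TimeoutError:
            # Hand off to error-recovery/retry logic for this phase.
            print(f"{name} timed out after {timeout}s")
            return False
    return True
```

Keeping each deadline local to its phase is what lets the orchestrator report precisely which step failed and retry only that step.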
## Graceful Shutdown (ClientShutdownCoordinator)

- The `ClientShutdownCoordinator` was introduced to provide a dedicated, centralized mechanism for gracefully shutting down all parts of the tIRC client.
- Previously, shutdown logic was dispersed, notably in the `finally` block of `IRCClient_Logic`'s main loop and in other areas; the coordinator now encapsulates all shutdown responsibilities.
- Its primary role is to ensure an orderly and complete termination sequence, which includes:
  - Signaling all active asynchronous tasks (such as network loops and input handlers) to terminate.
  - Disconnecting from the IRC server cleanly, sending a QUIT message where appropriate.
  - Releasing resources held by UI components (e.g., properly closing curses).
  - Ensuring the script manager unloads scripts and releases their resources.
  - Saving any final state via the `StateManager`.
- This focused approach improves the reliability of the client's exit process, preventing resource leaks, lingering tasks, or abrupt terminations that could lead to data loss or an inconsistent state. The sketch below outlines the sequence.
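A minimal sketch of such a shutdown sequence, assuming hypothetical component methods (`send_quit`, `unload_all`, `save`); the real coordinator's ordering and APIs may differ:

```python
import asyncio

async def shutdown(tasks: list[asyncio.Task], network, scripts, state) -> None:
    await network.send_quit("Client exiting")  # clean QUIT to the server
    for task in tasks:                         # signal async loops to stop
        task.cancel()
    # Wait for cancellations to settle without letting them raise here.
    await asyncio.gather(*tasks, return_exceptions=True)
    scripts.unload_all()                       # scripts release their resources
    await state.save()                         # persist final state to disk
```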
## View Management (ClientViewManager)

- The `ClientViewManager` is a new component designed to isolate and manage UI-specific logic related to different views and context switching.
- Responsibilities previously handled by `UIManager` or directly within `IRCClient_Logic`, such as managing split-screen layouts, determining which chat window (context) is currently active, and handling events like `ACTIVE_CONTEXT_CHANGED`, are now delegated to this manager.
- Key functions include:
  - Maintaining the state of available views (e.g., single pane, top/bottom split).
  - Tracking the currently focused context (e.g., a specific channel, a query window, or the status window).
  - Orchestrating UI updates when the active context changes (e.g., ensuring the correct message history and user list are displayed).
  - Handling user commands related to window navigation and view manipulation (e.g., `/next`, `/split`).
- This separation decouples core application logic (such as message processing and connection management) from the specifics of how views are presented and interacted with. It makes the UI system more flexible, allowing existing views to be modified or new view types introduced without impacting other parts of the client. A minimal sketch follows.
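A minimal sketch of the context-tracking core of such a manager (illustrative only; not tIRC's actual `ClientViewManager`):

```python
class ViewManagerSketch:
    """Illustrative context tracking, not tIRC's actual ClientViewManager."""

    def __init__(self, contexts: list[str]) -> None:
        self.contexts = contexts  # e.g. ["Status", "#python", "alice"]
        self.active_index = 0

    @property
    def active_context(self) -> str:
        return self.contexts[self.active_index]

    def next_context(self) -> str:
        # Backs a command like /next: cycle focus, then let the UI
        # layer redraw history and the user list for the new context.
        self.active_index = (self.active_index + 1) % len(self.contexts)
        return self.active_context
```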
## Modular UI (UIManager and Renderers)

- The previously monolithic `UIManager` has been refactored into a set of specialized components: `CursesManager`, `WindowLayoutManager`, `MessagePanelRenderer`, `SidebarPanelRenderer`, `StatusBarRenderer`, `InputLineRenderer`, and `SafeCursesUtils`.
- This decomposition significantly improves separation of concerns, making the UI system more modular, testable, and easier to extend. `UIManager` now acts as an orchestrator for these components, roughly as sketched below.
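Conceptually, the orchestration might look like the following sketch; the component interfaces shown are assumptions, not tIRC's actual classes:

```python
class UIManagerSketch:
    """Illustrative orchestration over the specialized renderers."""

    def __init__(self, layout, message_panel, sidebar, status_bar, input_line):
        self.layout = layout
        self.renderers = [message_panel, sidebar, status_bar, input_line]

    def refresh(self, state) -> None:
        # Compute window geometry once, then let each component
        # draw only its own region of the screen.
        regions = self.layout.compute()
        for renderer in self.renderers:
            renderer.render(regions, state)
```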
## Inter-Process Communication (IPC)

tIRC now includes a lightweight Inter-Process Communication (IPC) mechanism, enabling external command-line instances or scripts to send commands to a running tIRC client. This is achieved through a local TCP socket.

- **Main Instance (Server):** When a primary tIRC instance starts, the `IPCManager` component (initialized within `IRCClient_Logic`) opens a local TCP socket (default port `61234`). This socket acts as a command server, listening for incoming connections.
- **CLI Caller (Client):** When `tirc.py` is executed with the `--send-raw "<command>"` argument, it acts as a temporary IPC client: it connects to the main instance's local socket, sends the raw command string, and then immediately exits.
- **Command Execution:** The main tIRC instance receives the raw command via its IPC server. The `IPCManager` then decodes the command and injects it directly into the `CommandHandler` for processing, just as if the user had typed it interactively. The sketch below illustrates both ends of this flow.
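Both ends map naturally onto asyncio streams. The sketch below is illustrative; the `command_handler.process` injection point is a hypothetical stand-in for the real `CommandHandler` entry point:

```python
import asyncio

async def start_ipc_server(command_handler, port: int = 61234) -> asyncio.Server:
    async def handle_ipc(reader: asyncio.StreamReader,
                         writer: asyncio.StreamWriter) -> None:
        # One short-lived connection per CLI invocation: read the raw
        # command, inject it as if the user had typed it, then close.
        raw = (await reader.read(4096)).decode().strip()
        await command_handler.process(raw)  # hypothetical injection point
        writer.close()
        await writer.wait_closed()

    # Bind to loopback only, so that only local processes can connect.
    return await asyncio.start_server(handle_ipc, host="127.0.0.1", port=port)

async def send_raw(command: str, port: int = 61234) -> None:
    # Roughly what `tirc.py --send-raw` does: connect, send, exit.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(command.encode())
    await writer.drain()
    writer.close()
    await writer.wait_closed()
```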
This architecture provides a robust and efficient way to remotely control a running tIRC client, facilitating advanced scripting and integration workflows without requiring a full interactive session.