Customization and Extensibility - sgajbi/portfolio-analytics-system GitHub Wiki

5. Customization & Extensibility

The Portfolio Analytics System is designed not as a rigid, black-box product, but as a flexible platform that can be adapted to meet the specific needs of our clients. Extensibility is a core architectural principle, not an afterthought.


Configuration-Driven Behavior

Many of the system's core business-logic parameters can be changed through configuration alone, with no code modifications required.

  • Cost-Basis Method: The cost-calculator-service can be configured to use different cost-basis accounting methods, such as First-In, First-Out (FIFO) or Average Cost, to align with a firm's specific accounting or regulatory requirements.
  • Deployment Environment: All system components, such as database connections and Kafka broker details, are configured via environment variables, allowing for seamless deployment across different environments (development, staging, production) and cloud providers.
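As a minimal sketch of this pattern, a service might read its settings from the environment at startup. The variable names below (`COST_BASIS_METHOD`, `DATABASE_URL`, `KAFKA_BOOTSTRAP_SERVERS`) are illustrative assumptions, not the services' documented configuration keys:

```python
import os

# Illustrative settings load; actual variable names may differ per service.
COST_BASIS_METHOD = os.environ.get("COST_BASIS_METHOD", "FIFO")  # e.g. "FIFO" or "AVCO"
DATABASE_URL = os.environ.get(
    "DATABASE_URL", "postgresql://localhost:5432/portfolio"
)
KAFKA_BOOTSTRAP_SERVERS = os.environ.get("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092")

# Fail fast on an unsupported cost-basis method rather than mispricing lots.
if COST_BASIS_METHOD not in {"FIFO", "AVCO"}:
    raise ValueError(f"Unsupported cost-basis method: {COST_BASIS_METHOD}")
```

Because everything is injected via the environment, the same container image can run unchanged in development, staging, and production.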

API-Based Integration

The system is built on an API-first philosophy. All data flows in and out of the platform through well-defined, modern REST APIs.

  • Integrating Upstream Data Sources: Clients can easily integrate their existing data sources (e.g., custodian feeds, order management systems) by writing simple adaptors that push data to the system's ingestion-service API.
  • Powering Downstream Applications: The query-service API allows a firm's developers to pull clean, processed analytics data to power any number of downstream applications, such as internal advisor workstations, client-facing mobile apps, or custom reporting solutions.
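An upstream adaptor's job is mostly to reshape custodian records into the ingestion-service's JSON contract and POST them. The field names below are assumptions for illustration; the ingestion-service API schema is the authoritative contract:

```python
import json
from datetime import date

def build_transaction_payload(portfolio_id: str, security_id: str,
                              quantity: float, price: float,
                              trade_date: date) -> dict:
    """Reshape one custodian record into a JSON-serializable transaction.

    Field names here are illustrative; consult the ingestion-service
    API schema for the real contract.
    """
    return {
        "portfolio_id": portfolio_id,
        "security_id": security_id,
        "quantity": quantity,
        "price": price,
        "trade_date": trade_date.isoformat(),
    }

# An adaptor would serialize this and POST it to the ingestion endpoint.
payload = build_transaction_payload("P-001", "AAPL", 100, 187.50, date(2024, 3, 1))
body = json.dumps(payload)
```

The same shape works in reverse: a downstream application calls the query-service, receives clean JSON, and renders it however it likes.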

Architectural Extensibility via Microservices

For clients with unique business logic, the event-driven architecture provides powerful extension capabilities. Firms can develop and add their own custom microservices that securely "plug in" to the platform's Kafka message bus.

Example Use Case 1: Integrating a Proprietary Risk Model

A bank has its own proprietary model for calculating portfolio risk scores. It can develop a custom risk-calculator-service that:

  1. Subscribes to the position_history_persisted Kafka topic to receive real-time position updates.
  2. Executes its proprietary risk calculation logic against the new position data.
  3. Publishes the results to a new, custom Kafka topic or calls an internal bank API to store the risk score.
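The steps above can be sketched as a pure event handler that the Kafka consumer loop would call per message. Everything here is an assumption for illustration: the event schema (`portfolio_id`, `positions`, `market_value`), and the toy concentration score standing in for the bank's actual proprietary model:

```python
import json

def risk_score(positions: list[dict]) -> float:
    """Toy stand-in for a proprietary model: a concentration score in
    [0, 100] based on the Herfindahl index of position weights."""
    total = sum(abs(p["market_value"]) for p in positions)
    if total == 0:
        return 0.0
    hhi = sum((abs(p["market_value"]) / total) ** 2 for p in positions)
    return round(hhi * 100, 2)

def handle_position_event(raw_message: bytes) -> bytes:
    """Process one position_history_persisted event and return the
    message this service would publish to its custom output topic."""
    event = json.loads(raw_message)
    return json.dumps({
        "portfolio_id": event["portfolio_id"],
        "risk_score": risk_score(event["positions"]),
    }).encode()

# In production, this handler would sit inside a Kafka consumer loop
# subscribed to position_history_persisted, producing each result to a
# custom topic (or forwarding it to an internal bank API).
```

Keeping the model logic in pure functions like these also makes the proprietary part trivially unit-testable, independent of any Kafka infrastructure.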

Example Use Case 2: Adding Custom Ingestion Rules

A wealth management firm needs to enforce a set of complex, firm-specific validation rules on incoming transactions before they are processed. The firm can:

  1. Develop a custom pre-validation-service.
  2. Configure their data adaptors to send raw data to a new, temporary Kafka topic (e.g., pre_validated_transactions).
  3. Deploy the service to consume from this topic, apply the firm's validation rules, and publish only the valid transactions to the standard raw_transactions topic for the rest of the platform to process.
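The core of such a service is the filter between the two topics. The rules below (an allowed transaction-type set and a notional cap) and the field names are hypothetical placeholders for whatever firm-specific checks apply:

```python
import json

# Illustrative firm-specific rules; real rules would be supplied by the firm.
ALLOWED_TYPES = {"BUY", "SELL", "DIVIDEND", "FEE"}
MAX_TRANSACTION_VALUE = 10_000_000

def is_valid(txn: dict) -> bool:
    """Apply firm-specific checks before a transaction enters the platform."""
    if txn.get("transaction_type") not in ALLOWED_TYPES:
        return False
    if abs(txn.get("quantity", 0) * txn.get("price", 0)) > MAX_TRANSACTION_VALUE:
        return False
    return True

def filter_batch(raw_messages: list[bytes]) -> list[bytes]:
    """Keep only valid transactions from a consumed batch.

    In the real service, rejects would typically be routed to a
    dead-letter topic for review rather than dropped silently.
    """
    return [m for m in raw_messages if is_valid(json.loads(m))]
```

The Kafka wiring is symmetrical to the risk-model example: consume from the firm's staging topic, run `filter_batch`, and produce the survivors to raw_transactions.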