Architecture Solutions

The batch processor is the example taken here:

Architecture points

  1. Shared database across microservices.

  2. Hazelcast client-server topology for caching data / MultiMap implementation.

  3. Configurable application via config tables

  4. Asynchronous microservices via the Solace and Kafka message brokers.

  5. Elimination of DB access in the happy flow.

  6. Digital signature for client auth and data security.

  7. Same code base, configurable per country. This acts as a single-tenant application: the same code base is shared across several clients, and the application's behaviour is driven by client-specific configuration.

  8. OpenShift for on-prem, where the non-FAST, country-based RTGS, BT, and IMPS apps all share the same namespace; EKS in the AWS environment, where the product-specific FAST code is deployed under a single namespace.

  9. Two code bases, FAST and non-FAST, maintained by a single team; the batch processor has both FAST and non-FAST code.

  10. The microservices architecture uses a shared-database model where all microservices use the same DB. A separate persistence microservice alone performs DB inserts and updates.

  11. In the happy flow, none of the systems access the database.

  12. Blue-green and canary deployment features are available. Blue-green is implemented via a derivation-rule system that reads the environment value from the payload header and sets it on the topic header; the application then publishes the payload to that topic.

The canary feature routes 10 percent of traffic to blue and the rest to green; this is handled by the batch controller microservice (a routing sketch follows).
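A minimal sketch of how this derivation-rule routing could look; the `X-Env` header key and the topic names are assumptions for illustration, since the real rules are read from the derivation-rule system rather than hardcoded:

```java
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

/** Sketch: derive the target topic from the payload header (blue/green),
 *  falling back to a 10/90 canary split when no env header is present. */
public class TopicDerivation {

    // Hypothetical topic names; the real names come from the derivation-rule tables.
    private static final String BLUE_TOPIC  = "payments.batch.blue";
    private static final String GREEN_TOPIC = "payments.batch.green";

    public String deriveTopic(Map<String, String> headers) {
        String env = headers.get("X-Env");          // assumed header key
        if ("blue".equalsIgnoreCase(env))  return BLUE_TOPIC;
        if ("green".equalsIgnoreCase(env)) return GREEN_TOPIC;
        // Canary: ~10% of traffic to blue, the rest to green.
        return ThreadLocalRandom.current().nextInt(100) < 10 ? BLUE_TOPIC : GREEN_TOPIC;
    }
}
```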

  1. We use a base framework that provides MQ, an ELK logger, and a data client (an API to access the database).

  2. Hazelcast for caching, ELK as the logging framework, S3 for file storage.

  3. All payloads arrive on-premise and are transferred to AWS S3 as files, which are then processed in AWS EKS.

  4. Digital signatures are used for authentication and data security.

  5. Each microservice handles replay based on a replay indicator set to true by other components. For example, if we take the TECX event code from the life-cycle tracker table, other components send it back to us with the replay indicator set to true, and we process it accordingly.

  6. For caching we have a JVM-level HashMap as the first tier; on a miss we check Hazelcast (a sketch of this two-tier lookup follows this list).

  7. Each microservice checks the source-system and event-code values to decide which system the payload came from, for example: ACCOUNTING, SCHEDULAR, ONLINE BULK.

  8. We also have a validation flow: for current- and future-dated transactions we set CREA or VALD and send to the processor. The processor applies validations on VALD payloads alone.

  9. The time taken is calculated by each microservice, which helps during performance tuning.

  10. Payment life-cycle statuses with event codes are defined in a table. Each microservice reads the relevant record, stamps the payload with that value, and sends the payload to the event store each time it sends the payload on to other systems.

Life-cycle statuses are SUCCESS, FAILURE, and TECHNICAL EXCEPTION; each status has an event code defined.
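A minimal sketch of the two-tier cache lookup from point 6, assuming a Hazelcast 3.x client in client-server mode and a hypothetical map name:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

import java.util.concurrent.ConcurrentHashMap;

/** Sketch: JVM-level map as the first cache tier, Hazelcast as the second. */
public class TwoTierCache {

    private final ConcurrentHashMap<String, String> localCache = new ConcurrentHashMap<>();
    private final IMap<String, String> distributedCache;

    public TwoTierCache() {
        // Connects to a Hazelcast cluster in client-server mode (config from defaults).
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        this.distributedCache = client.getMap("batch-config");  // hypothetical map name
    }

    public String get(String key) {
        // Check the JVM-level map first; fall back to Hazelcast on a miss.
        return localCache.computeIfAbsent(key, distributedCache::get);
    }
}
```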

Two readers, namely the batch reader and the bulk reader, read customer payloads from either a message broker, NAS, or S3. The bulk reader reads payloads via the Kafka or Solace message broker, applicable for payload sizes < 5 MB. The batch reader reads a lighter JSON where only file-path details are sent in the JSON header and the payload itself is available on a NAS path or in S3.
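A sketch of how a reader might resolve the actual payload from an incoming message; the field names (`inlinePayload`, `filePath`) are assumptions for illustration:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

/** Sketch: resolving the payload for the two reader styles. */
public class PayloadResolver {

    static class IncomingMessage {
        String inlinePayload;  // bulk reader: body travels on the broker (< 5 MB)
        String filePath;       // batch reader: only the NAS/S3 path travels in the header
    }

    public String resolve(IncomingMessage msg) throws IOException {
        if (msg.inlinePayload != null) {
            return msg.inlinePayload;  // bulk reader case
        }
        // Batch reader case: fetch the body from the NAS mount (S3 would use its SDK).
        return new String(Files.readAllBytes(Paths.get(msg.filePath)), StandardCharsets.UTF_8);
    }
}
```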

Itemised: batch-book indicator false

The batch processor application reads the batch-book indicator; when it is false, each payload is processed as itemised, meaning individual debit and credit legs are generated and sent to the adapter, which converts the payload into canonical format and passes it on to the processor. In this mode only duplicate and integrity checks are performed, based on the batch config table setup.

Consolidated debit: batch-book indicator true

The batch processor processes all payloads and generates a single debit, which it sends to the processor. Here a unique key is generated based on the configuration in the batch-group-keys and batch-group-nodes tables (the values set in these tables are execution date, country, batch reference, and client ID). Sample key: ddmmyyyy-sa-111111-212.
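A sketch of how the grouping key might be assembled; in the real system the field list comes from the batch-group-keys / batch-group-nodes tables rather than being hardcoded:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Sketch: building the grouping key from configured fields,
 *  producing something like "ddmmyyyy-sa-111111-212". */
public class GroupKeyBuilder {

    // Assumed to be loaded from the batch-group-keys table.
    private final List<String> keyFields =
            Arrays.asList("executionDate", "country", "batchReference", "clientId");

    public String buildKey(Map<String, String> payload) {
        return keyFields.stream()
                .map(payload::get)
                .collect(Collectors.joining("-"));
    }
}
```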

Map<uniqueKey, List<credits>>

Single debit, multiple credits: we identify the debtor client ID and creditor client ID, parse the payload, add up all individual credit amounts along with the single debit detail, and store the result in a Map<uniqueKey, List<credits>>. This payload goes to the processor, validation is done, and it gets stored in the batch-file and batch-file-row-info tables.
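A sketch of the single-debit / multiple-credits accumulation; the CreditLeg shape and field names are illustrative only:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch: grouping credit legs under the derived key and accumulating
 *  the single consolidated debit as the sum of the credits. */
public class ConsolidatedDebitBuilder {

    static class CreditLeg {
        String creditorClientId;
        BigDecimal amount;
        CreditLeg(String creditorClientId, BigDecimal amount) {
            this.creditorClientId = creditorClientId;
            this.amount = amount;
        }
    }

    private final Map<String, List<CreditLeg>> creditsByKey = new HashMap<>();
    private final Map<String, BigDecimal> debitTotalByKey = new HashMap<>();

    public void add(String uniqueKey, CreditLeg credit) {
        creditsByKey.computeIfAbsent(uniqueKey, k -> new ArrayList<>()).add(credit);
        // The single debit is the running sum of all individual credit amounts.
        debitTotalByKey.merge(uniqueKey, credit.amount, BigDecimal::add);
    }

    public BigDecimal debitFor(String uniqueKey) {
        return debitTotalByKey.getOrDefault(uniqueKey, BigDecimal.ZERO);
    }
}
```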

We have a scheduler component that picks payloads from the batch-file-row-info tables and sends them to the batch processor, which checks that the scheduler tag value is SCHEDULAR and that the event-code value set in application properties is present, and then calls the appropriate services. For the consolidated flow we get back the status of some of the transactions, which need certain further processing.

The event code is sent across systems, and every position in it has meaning; for example, positions 7 to 9 set to 001 tell the processor that no more processing is to be done.
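As a small sketch, assuming 1-based positions into the event-code string (the layout of the other positions is not documented here):

```java
/** Sketch: reading a positional slice of the event code. Only the
 *  positions 7-9 directive described above is shown. */
public class EventCodeParser {

    /** Positions 7 to 9 (1-based, inclusive) carry the processing directive. */
    public static boolean noFurtherProcessing(String eventCode) {
        String directive = eventCode.substring(6, 9);  // characters 7..9
        return "001".equals(directive);
    }
}
```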

A set of records with all event codes in the life-cycle table is inserted for each microservice. Life-cycle events such as success, reject, and exception each have a specific event code defined, and the query for them is set in application properties. We always take this record and send the info to the event-store microservice whenever we publish to the processor microservice. If we send a technical-exception event code to the event store and processor, those transactions are sent back with the replay indicator set to true/false and we handle them accordingly.

  1. Develop a configurable application

Use the batch_config and batch_config_path tables to hold the mandatory validations the application should perform. Examples:

When a transaction comes through a file, the corresponding data is stored in the batch-file-info and batch-file-rows-info tables.

Duplicate check: based on this config, apply a file-level duplicate check using the file name that comes in the JSON header, checking against the batch-file-info table. For the transaction level, take the unique UETR and check against batch-file-rows-info.
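A sketch of both checks with JdbcTemplate (the Postgres + JdbcTemplate stack noted at the end of this page); the table and column names are guesses based on the table names above:

```java
import org.springframework.jdbc.core.JdbcTemplate;

/** Sketch: file-level and transaction-level duplicate checks. */
public class DuplicateChecker {

    private final JdbcTemplate jdbc;

    public DuplicateChecker(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    /** File-level check: has this file name been processed before? */
    public boolean isDuplicateFile(String fileName) {
        Integer count = jdbc.queryForObject(
            "SELECT COUNT(*) FROM batch_file_info WHERE file_name = ?",
            Integer.class, fileName);
        return count != null && count > 0;
    }

    /** Transaction-level check: has this UETR been seen before? */
    public boolean isDuplicateTransaction(String uetr) {
        Integer count = jdbc.queryForObject(
            "SELECT COUNT(*) FROM batch_file_rows_info WHERE uetr = ?",
            Integer.class, uetr);
        return count != null && count > 0;
    }
}
```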

Workflow: derive the application flow based on customer preference, which is stored in the batch-cust-pref table; keep those values in application.properties and filter DB records based on them. Example: define the country in application properties as $header.txctry, take this value, and apply the filter.

Workflows 1 to 5 are defined.

Workflow 1: itemised rejects, file-level rejects, FX derivation, checksum (the control total from the header must match the sum of all data amounts), and control sum (the number of transactions declared in the header must match the number of data records).
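A sketch of the two header-versus-body checks, with assumed header/row shapes:

```java
import java.math.BigDecimal;
import java.util.List;

/** Sketch: file-integrity checks as described above. */
public class FileIntegrityChecks {

    static class FileHeader { int declaredCount; BigDecimal declaredTotal; }
    static class Row { BigDecimal amount; }

    /** Checksum: sum of all data amounts must equal the header control total. */
    static boolean checksumOk(FileHeader header, List<Row> rows) {
        BigDecimal total = rows.stream()
                .map(r -> r.amount)
                .reduce(BigDecimal.ZERO, BigDecimal::add);
        return total.compareTo(header.declaredTotal) == 0;
    }

    /** Control sum: number of data records must equal the declared count. */
    static boolean controlCountOk(FileHeader header, List<Row> rows) {
        return rows.size() == header.declaredCount;
    }
}
```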

Tasks done by me

  1. Foundation team, where core components are developed and each country team customises them with unique configuration at the DB level. Requirements analysis and development; JUnit tests with Mockito and PowerMock.

  2. Digital signature implementation in the router, where it reads the payload, applies the digital signature, and sends it to the batch reader component for further processing.

http://tutorials.jenkov.com/java-cryptography/signature.html
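A minimal signing/verification sketch with `java.security.Signature`, in the spirit of the linked tutorial; the algorithm choice and key management in the actual router may differ:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

/** Sketch: sign a payload in the router, verify it downstream. */
public class PayloadSigner {

    public static void main(String[] args) throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        byte[] payload = "sample batch payload".getBytes(StandardCharsets.UTF_8);

        // Router side: sign the payload with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(payload);
        byte[] signature = signer.sign();

        // Batch reader side: verify with the public key before processing.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(payload);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```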

  3. Code refactoring: eliminate unnecessary code from loops, minimise the number of statements, extract reusable methods.

  4. Performance tuning: remove unnecessary object creation and use @Bean-managed singletons, use Hazelcast where possible, avoid heavy I/O.

  5. Sonar fixes.

  6. Cloud migration dev support.

Spring Boot 2.0.4, Hazelcast client-server, and Postgres with JdbcTemplate are used.