# Diagrams - kristianrpo/mom-grpc-microservices
This architecture represents a cloud-based distributed system deployed on AWS using EC2 instances inside a Virtual Private Cloud (VPC) with a public subnet. The project is containerized with Docker and orchestrated using Docker Swarm.
A user who wants to interact with the system connects via SSH to the client EC2 instance. Once connected, the client interacts with the API Gateway through a REST API to request a specific operation. The API Gateway, running as the Swarm manager, handles incoming REST requests and internally communicates with the corresponding microservice using gRPC.
If the requested operation is handled by the SUM microservice, the system routes traffic through Docker Swarm's built-in load balancer (the routing mesh), since this service is deployed with multiple replicas for high availability. This ensures that requests are automatically distributed across the replica instances.
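In a Swarm stack, replication like this is declared in the `deploy` section of the service definition. A minimal sketch of what the docker-compose.yml entry for the SUM service might look like (image name and replica count are illustrative, not taken from the repository):

```yaml
services:
  sum:
    image: sum-service:latest   # illustrative image name
    ports:
      - "50052:50052"           # gRPC port for SUM (matches the security-group rules)
    deploy:
      replicas: 3               # Swarm's routing mesh balances requests across replicas
```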
If any microservice is temporarily unavailable, the API Gateway forwards the request via gRPC to the MOM (Message-Oriented Middleware) service. MOM then stores the request in a Redis database, associating it with a specific queue for that microservice. Once the microservice becomes available again, it retrieves the pending tasks from Redis and, upon computing the result, sends it back to MOM, which stores the response for later access.
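The enqueueing step MOM performs can be sketched as follows. This is a hedged illustration of the logic only: an in-memory dict of deques stands in for Redis, and the function and key names (`enqueue_task`, `queue:<service>`) are assumptions, not the repository's actual API. The real MOM would issue Redis commands such as `LPUSH` on a per-service queue.

```python
import json
import time
from collections import defaultdict, deque

# In-memory stand-in for Redis (illustration only).
queues = defaultdict(deque)

def enqueue_task(service_name, payload):
    """Store a request in the point-to-point queue for one microservice."""
    task = {"payload": payload, "enqueued_at": time.time()}
    queues[f"queue:{service_name}"].append(json.dumps(task))
    return True  # MOM confirms to the API Gateway that the task was queued

# Example: SUM is down, so its request is parked in its dedicated queue.
enqueue_task("sum", {"a": 2, "b": 3})
print(len(queues["queue:sum"]))  # → 1
```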
All microservices—including SUM, SUBTRACTION, MULTIPLICATION, and MOM—run as Docker Swarm services distributed across several EC2 instances designated as workers. The API Gateway is the only instance designated as the Swarm manager. Docker Swarm, together with a docker-compose.yml file, orchestrates the deployment and scaling of each service across the infrastructure.
The Redis instance is also deployed via Docker and configured as part of the same Swarm network, ensuring that all components can access it as needed.
To secure access, a Security Group is defined with specific inbound rules allowing only the required ports for service communication:
- 22: SSH
- 443: HTTPS
- 80: HTTP
- 4789: VXLAN (Docker Swarm overlay network)
- 7946: Gossip protocol (node discovery and communication)
- 2377: Swarm manager coordination
- 6379: Redis
- 50051: gRPC MOM
- 50052: gRPC SUM
- 50053: gRPC MULTIPLICATION
- 50054: gRPC SUBTRACTION
This architecture ensures scalability, fault tolerance, service recovery, and efficient request routing through a fully containerized microservices system using modern DevOps practices.
This diagram illustrates a successful initial communication flow between a client and a microservice through an API Gateway using REST and gRPC protocols.
- The process begins with the Client (Instance #1) sending a REST API request to the API Gateway (Instance #2), specifying the microservice it wants to interact with.
- The API Gateway processes the request and determines which microservice is required.
- It then checks whether the corresponding microservice instance is currently active.
- If the microservice is available, the API Gateway initiates a gRPC request to that microservice.
- The Microservice (e.g., Instance #4 or #5) processes the request and sends back a gRPC response to the API Gateway.
- Finally, the API Gateway sends the response back to the Client, completing the communication successfully.

This flow represents the ideal path when the target microservice is active and responsive. It ensures real-time interaction between client and service with proper request routing, service resolution, and response delivery.
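The gateway's routing decision in the steps above can be sketched as a lookup plus an availability check. This is a simplified illustration under assumed names: real gRPC stubs are replaced by plain callables, and `SERVICES`/`handle_rest_request` are hypothetical, not identifiers from the repository.

```python
# Registry mapping each operation to its availability flag and handler.
# In the real system the handler would be a gRPC stub call to the microservice.
SERVICES = {
    "sum": {"active": True, "call": lambda a, b: a + b},
    "subtraction": {"active": True, "call": lambda a, b: a - b},
}

def handle_rest_request(operation, a, b):
    """Resolve the target microservice and forward the request if it is active."""
    service = SERVICES.get(operation)
    if service is None:
        return {"status": "error", "detail": "unknown operation"}
    if not service["active"]:
        # In the real system, the request would be handed to MOM here.
        return {"status": "queued"}
    return {"status": "ok", "result": service["call"](a, b)}

print(handle_rest_request("sum", 2, 3))  # → {'status': 'ok', 'result': 5}
```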
This diagram illustrates the failover handling flow when a client requests a microservice that is currently inactive. The process begins when the Client (Instance #1) sends a request via REST API to the API Gateway (Instance #2). The API Gateway evaluates the availability of the requested microservice and determines that the service is not currently active.
As a result, the API Gateway communicates with the MOM (Message-Oriented Middleware) component (Instance #3) using gRPC. The MOM is responsible for enqueuing the task in a point-to-point queue (P2P) dedicated to that specific microservice. MOM then replies to the API Gateway to confirm that the task has been successfully queued.
Once the microservice becomes active (e.g., Instance #4 or #5), it begins polling its specific Redis queue to retrieve any pending tasks. Upon retrieving one, it processes the request and sends the result back to MOM via gRPC. The MOM then stores the response in a Redis database (Instance #6) along with a TTL (Time-To-Live) value. MOM responds to the microservice indicating whether the result was successfully stored.
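The recovery step above, where a microservice drains its queue and MOM stores each result with a TTL, can be sketched like this. An in-memory deque and dict stand in for Redis, and all names (`poll_and_process`, `RESULT_TTL`, task ids) are illustrative assumptions; the real system would use Redis blocking pops and `SETEX`/`EXPIRE` for the TTL.

```python
import time
from collections import deque

queue = deque([("task-1", (7, 5))])  # pending SUM tasks (illustrative ids/args)
results = {}                         # stand-in for the Redis result store

RESULT_TTL = 30.0  # seconds; Redis would enforce this via SETEX/EXPIRE

def poll_and_process():
    """Drain the queue, compute each result, and store it with an expiry time."""
    while queue:
        task_id, (a, b) = queue.popleft()
        results[task_id] = {"value": a + b, "expires_at": time.time() + RESULT_TTL}

poll_and_process()
print(results["task-1"]["value"])  # → 12
```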
After this, the client begins requesting the result of its task via the REST API. The API Gateway forwards these requests to the MOM via gRPC.
MOM then checks in the Redis database:
- If the result is available and has not expired, MOM retrieves and returns it.
- If the result is not available, MOM informs the API Gateway that the task is still being processed or that the result has expired, depending on its TTL.

This design ensures fault tolerance in the system by handling microservice downtime gracefully and processing tasks asynchronously until the result can be delivered to the client.
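The lookup MOM performs for these two cases can be sketched as follows. As before, a plain dict stands in for Redis, and `fetch_result` and the status strings are assumed names for illustration; in Redis itself, an expired key would simply be evicted and return nothing.

```python
import time

# Stand-in for MOM's Redis lookup; keys and TTL handling are illustrative.
results = {"task-1": {"value": 12, "expires_at": time.time() + 30}}

def fetch_result(task_id):
    """Return the stored result, or report that it is pending or expired."""
    entry = results.get(task_id)
    if entry is None:
        return {"status": "pending_or_expired"}
    if time.time() > entry["expires_at"]:
        del results[task_id]  # Redis would evict the key itself via its TTL
        return {"status": "pending_or_expired"}
    return {"status": "ok", "result": entry["value"]}

print(fetch_result("task-1"))   # → {'status': 'ok', 'result': 12}
print(fetch_result("task-99"))  # → {'status': 'pending_or_expired'}
```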