Microservices framework learning - vidyasekaran/current_learning GitHub Wiki
https://www.youtube.com/watch?v=_MMf2SvNqxo
We can do microservice-to-microservice communication in many ways. RestTemplate is a widely used approach, but there is another approach that is well suited to microservices: the Feign client. Feign by default supports client-side load balancing using Ribbon and fault tolerance using Hystrix.
Fake online REST API: https://jsonplaceholder.typicode.com/
@FeignClient(url = "https://jsonplaceholder.typicode.com", name = "USER-CLIENT")
public interface UserClient {

    @GetMapping("/users")
    List<UserResponse> getUsers();
}
@SpringBootApplication
@RestController
@EnableFeignClients
public class SpringCloudFeignApp {

    @Autowired
    private UserClient client;

    @GetMapping("/users")
    public List<UserResponse> getAllUsers() {
        return client.getUsers();
    }

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudFeignApp.class, args);
    }
}
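The UserResponse type used above is not defined in these notes. A minimal sketch, assuming it maps just a few of the fields that the jsonplaceholder /users endpoint actually returns (id, name, username, email), and assuming spring-cloud-starter-openfeign is on the classpath:

```java
// Hypothetical DTO for the jsonplaceholder /users payload; only a subset of the fields is mapped.
public class UserResponse {

    private Long id;
    private String name;
    private String username;
    private String email;

    // Getters and setters are needed so Jackson can bind the JSON response.
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}
```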
============================
SpringBoot : Spring Cloud Task | Example | Java Techie https://www.youtube.com/watch?v=pagGJwCqxGI
https://docs.spring.io/spring-cloud-task/docs/current/reference/#getting-started
Introducing Spring Cloud Task - Spring Cloud Task makes it easy to create short-lived microservices. It provides capabilities that let short-lived JVM processes be executed on demand in a production environment.
Database Requirements
Spring Cloud Task uses a relational database to store the results of an executed task. While you can begin developing a task without a database (the status of the task is logged as part of the task repository’s updates), for production environments, you want to use a supported database. Spring Cloud Task currently supports the following databases:
DB2, H2, HSQLDB, MySQL, Oracle, PostgreSQL, SQL Server
This section goes into more detail about Spring Cloud Task’s integration with Spring Batch. Tracking the association between a job execution and the task in which it was executed as well as remote partitioning through Spring Cloud Deployer are covered in this section.
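A minimal task, as a rough sketch assuming the spring-cloud-starter-task dependency is on the classpath (the class name and message are made up), is just a Spring Boot application annotated with @EnableTask plus a CommandLineRunner:

```java
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableTask   // records each execution (start/end time, exit code) in the task repository
public class HelloTaskApplication {

    @Bean
    public CommandLineRunner taskRunner() {
        // The task body: a short-lived JVM process that runs once and then exits.
        return args -> System.out.println("Hello from Spring Cloud Task");
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloTaskApplication.class, args);
    }
}
```

Without a configured datasource the execution status is only kept in memory; for production, point the app at one of the supported databases listed above.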
============================
Create Microservices Architecture Spring Boot https://www.dineshonjava.com/microservices-with-spring-boot/
Pro Java™ EE Spring Patterns - a must-read book to understand JEE patterns and the Spring Framework and its patterns.
how Java EE design patterns can be best applied using the Spring Framework. http://index-of.es/Java/Java.EE%20-%20Spring%20Patterns%202008.pdf
Consumer driven Contracts https://spring.io/guides/gs/contract-rest/
Api design https://youtu.be/_YlYuNMTCc8
Api design checklist https://tomyjaya.github.io/2018/02/01/api-design/
For Spring Security, this works fine for me:
Without JPA: https://www.javainuse.com/spring/boot-jwt
Works with JPA and MySQL: https://www.javainuse.com/spring/boot-jwt-mysql
Combine this with an H2 JPA DB: https://o7planning.org/en/11893/integrating-spring-boot-jpa-and-h2-database
And for roles: https://www.roytuts.com/spring-security-authentication-and-role-based-authorization-using-jwt/
Locally we have this working: D:\workspace\spring-boot-jwt-JPA
I had to add the dependencies below to pom.xml to fix java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException on Java 9:
<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
    <version>2.3.2</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jaxb</groupId>
    <artifactId>jaxb-runtime</artifactId>
    <version>2.3.2</version>
</dependency>

Spring Security with roles: https://www.journaldev.com/8748/spring-security-role-based-access-authorization-example
Complete reference for spring boot jpa batch etc https://www.journaldev.com/spring/page/3
Spring Security – Authentication and Role Based Authorization using JWT https://www.roytuts.com/spring-security-authentication-and-role-based-authorization-using-jwt/
Istio and Kubernetes https://www.katacoda.com/courses/istio/deploy-istio-on-kubernetes
Strong understanding of Spring Boot microservices architecture. All the detail below is from this URL alone, with additional URLs added at relevant places: https://medium.com/clover-platform-blog/building-a-microservice-with-spring-boot-and-spring-cloud-1c8275d7d229
Monolith
- Traditionally services were designed as monoliths with all endpoints supported by a single service. A consequence of this approach is that unrelated features of the same product end up in the same server.
- An obvious problem with this architecture is that every minor change in a small part of the code requires deploying the entire code base again.
- To make changes and deploy them, developers have to coordinate with every team that works on the server code. This approach is obviously not scalable or agile enough for fast development cycles.
Microservices
- In a microservice-based architecture, rather than keeping all the modules together in the same monolithic service, you design separate smaller services for different modules. For example, you can have a small service take care of your billing requirements and another independent service for login. When one module requires information from another, it can just make an API call to the corresponding microservice.
- With this decoupling, developers can focus on their particular microservice and deploy it as and when required, rather than waiting for changes from a different module to be finished.
- If something goes wrong with a particular microservice, it only brings down that microservice; the rest of the independent microservices will be up and running.
- Another advantage is the ability to scale microservices separately. https://dzone.com/articles/spring-boot-autoscaler
Eureka
One advantage of deploying a microservice is the ability to horizontally scale individual services as and when required. When new instances of a service are spun up dynamically, the other services that use it need to know about the additional availability. Obviously hardcoding addresses of instances of services is not scalable. This is where a service discovery tool comes into play. Service discovery tools allow individual microservices to register themselves with a registry when they are deployed and running. When a client requires a particular service, it can obtain a list of instances of the service from the discovery tool and then query an instance of the service.
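A rough sketch of how registration looks in code, assuming the spring-cloud-starter-netflix-eureka-server and spring-cloud-starter-netflix-eureka-client starters respectively, and an eureka.client.serviceUrl.defaultZone property on the client pointing at the server (class names are made up):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// --- Application 1 (its own project): the Eureka registry itself ---
@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServerApplication.class, args);
    }
}

// --- Application 2 (a separate project/jar): a service that registers itself with Eureka ---
@SpringBootApplication
@EnableDiscoveryClient
public class DemoServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoServiceApplication.class, args);
    }
}
```

The client shows up in the Eureka dashboard under its spring.application.name, and that logical name is what a @LoadBalanced RestTemplate uses as the hostname (like http://DEMOB in the Ribbon snippet below).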
Ribbon
Once we have Eureka up and running, when your service wants to query another service, it should get a list of instances of the second service from the Eureka server and call one of them.
application.yml:

ribbon:
  https:
    client:
      enabled: true
Also note that we create a RestTemplate bean with the @LoadBalanced and @Bean annotations (autowired into the protected restTemplate field in the first line of the snippet below). This is essential to make RestTemplate use Ribbon for load balancing.
@Autowired
protected RestTemplate restTemplate;

String serviceUrl = "http://DEMOB";

public String callGetService(int id) {
    log.info("Calling get service with id : " + id);
    String response = restTemplate
            .exchange(serviceUrl + "/{id}",
                    HttpMethod.GET,
                    null,
                    new ParameterizedTypeReference<String>() {},
                    id)
            .getBody();
    return response;
}
@LoadBalanced
@Bean
public RestTemplate restTemplate(RestTemplateBuilder builder) {
    return builder.build();
}
Hystrix
After setting up your microservices, they obviously need to communicate with each other over the network. Even after your best efforts, API calls to a microservice could fail due to a variety of reasons.
To use the Hystrix dashboard, add the following to application.yml:

management:
  endpoints:
    web:
      exposure:
        include: hystrix.stream
After this, add annotations @EnableHystrixDashboard and @EnableCircuitBreaker to the Application class. The following code shows how to write a Hystrix wrapper around restTemplate. Inside the callGetService function, if the restTemplate call fails, the control moves to the fallback method mentioned in the @HystrixCommand annotation. (This is the same code used above to illustrate the use of Ribbon.)
@Autowired
protected RestTemplate restTemplate;

String serviceUrl = "http://DEMOB";

@HystrixCommand(fallbackMethod = "callGetServiceFallBack")
public String callGetService(int id) {
    log.info("Calling get service with id : " + id);
    String response = restTemplate
            .exchange(serviceUrl + "/{id}",
                    HttpMethod.GET,
                    null,
                    new ParameterizedTypeReference<String>() {},
                    id)
            .getBody();
    return response;
}

public String callGetServiceFallBack(int id) {
    log.info("Calling get service fallback with id : " + id);
    String response = "Fall back response get : " + id;
    return response;
}
Check the full example at the URL above.
- One approach we tried was to use Spring Boot along with Spring Cloud. This article discusses the basics of creating a microservice using Spring Boot and Spring Cloud. Spring Boot helps you to create a Spring application with minimal effort, while Spring Cloud provides you with a set of tools that makes communication between different microservices easier.
Learnt using Zipkin and Sleuth (x): https://howtodoinjava.com/spring-cloud/spring-cloud-zipkin-sleuth-tutorial/
1. Downloaded and set up 4 microservices and started them - D:\workspace\spring-boot-microservices-series\zipkin
2. Downloaded the Zipkin server jar from the Maven repository (see the (x) link) and started it in a command prompt: D:\workspace\spring-boot-microservices-series\zipkin>java -jar zipkin-server-2.12.9-exec.jar
3. Brought up the Zipkin page - http://localhost:9411/zipkin/
4. Hit service one via http://localhost:8081/zipkin; it internally invokes 2 -> 3 -> 4
5. Check latency in http://localhost:9411/zipkin/
Zipkin quickstart: https://zipkin.io/pages/quickstart.html; check the Zipkin flow: https://zipkin.io/pages/architecture.html
- It manages both the collection and lookup of this data.
- If you are troubleshooting latency problems or errors in the ecosystem, you can filter or sort all traces based on the application, length of trace, annotation, or timestamp. By analyzing these traces, you can decide which components are not performing as expected, and you can fix them.
There are 4 components that make up Zipkin:
- collector
- storage
- search
- web UI
Sleuth is a tool from the Spring Cloud family. It is used to generate the trace ID and span ID and add this information to service calls in the headers and the MDC, so that it can be used by tools like Zipkin and ELK to store, index, and process log files. As it is from the Spring Cloud family, once added to the classpath it automatically integrates with common communication channels like:
- requests made with the RestTemplate, etc.
- requests that pass through a Netflix Zuul microproxy
- HTTP headers received at Spring MVC controllers
- requests over messaging technologies like Apache Kafka or RabbitMQ, etc.
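There is usually nothing to code for this: with spring-cloud-starter-sleuth (plus a Zipkin reporter dependency) on the classpath, ordinary log statements are tagged with the trace and span IDs. A hedged sketch (the controller and endpoint names are my own):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PingController {

    private static final Logger log = LoggerFactory.getLogger(PingController.class);

    @GetMapping("/ping")
    public String ping() {
        // With Sleuth on the classpath the default log pattern includes
        // [app-name,traceId,spanId,exportable], so this line is automatically
        // correlated with any downstream RestTemplate/Zuul/Kafka calls made
        // while serving the same request.
        log.info("ping received");
        return "pong";
    }
}
```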
Learnt API Gateway https://howtodoinjava.com/spring-cloud/spring-cloud-api-gateway-zuul/
D:\workspace\spring-boot-zuulgatwayproxy - Zuul server gets registered on 8080
#Zuul routes. Here, for the /student path, we are routing to localhost:8090 with the extra path after that.
zuul.routes.student.url=http://localhost:8090
#Ribbon is auto-integrated with Zuul; for this exercise we are not using it.
ribbon.eureka.enabled=false
#Will start the gateway server @8080
server.port=8080
Student service is registered on port 8090
D:\workspace\spring-boot-zuulgatwayproxy-student-service

spring.application.name=student
server.port=8090
Hit Student service via API Gateway http://localhost:8080/student/echoStudentName/Yogananda http://localhost:8080/student/getStudentDetails/Yogananda
Hit Student service without API Gateway http://localhost:8090/echoStudentName/Yogananda http://localhost:8090/getStudentDetails/Yogananda
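Beyond plain routing, Zuul also supports filters. A rough sketch of a "pre" filter using the com.netflix.zuul API (the class name and what it logs are my own choices):

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.HttpServletRequest;

public class RequestLoggingFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre";          // runs before the request is routed to the target service
    }

    @Override
    public int filterOrder() {
        return 1;              // relative order among "pre" filters
    }

    @Override
    public boolean shouldFilter() {
        return true;           // apply to every request
    }

    @Override
    public Object run() {
        HttpServletRequest request = RequestContext.getCurrentContext().getRequest();
        System.out.println(request.getMethod() + " " + request.getRequestURL());
        return null;           // the return value of a Zuul filter is ignored
    }
}
```

Register it as a @Bean in the gateway application so Zuul picks it up.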
To access the service via Eureka, try the URL below:
https://www.javainuse.com/spring/spring_eurekaregister2
Deploy a Docker image from Docker Hub to Cloud Foundry https://cloud.gov/docs/apps/docker/
You can also push an image from a private Docker registry by providing the host and authentication information, as in this example:

CF_DOCKER_PASSWORD=YOUR-PASSWORD cf push APP-NAME --docker-image REPO/IMAGE:TAG --docker-username USER
Logging and monitoring
Integrate Spring Boot Actuator with Prometheus and the graphing solution Grafana.
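With spring-boot-starter-actuator plus micrometer-registry-prometheus on the classpath and management.endpoints.web.exposure.include=prometheus set, Actuator exposes /actuator/prometheus for Prometheus to scrape, and Grafana can graph those metrics. A hedged sketch of registering a custom counter through Micrometer (the metric and class names are made up):

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

@Service
public class OrderMetrics {

    private final Counter ordersPlaced;

    public OrderMetrics(MeterRegistry registry) {
        // Shows up as orders_placed_total on the /actuator/prometheus endpoint.
        this.ordersPlaced = Counter.builder("orders.placed")
                .description("Number of orders placed")
                .register(registry);
    }

    public void recordOrder() {
        ordersPlaced.increment();
    }
}
```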
Travis CI/CD GitHub integration https://youtu.be/DloZ2muO7-g
https://dzone.com/articles/applying-cicd-to-java-apps-using-spring-boot
CI/CD for a Spring Boot app on Cloud Foundry using Travis https://medium.com/@ghorpade.a.g/ci-cd-springboot-on-pcf-9b0942a02cd7
Auditing in microservices https://stackoverflow.com/questions/38492639/auditing-microservices
Logging
By generating a random correlation ID and printing it with every log statement.
https://dzone.com/articles/correlation-id-for-logging-in-microservices https://dzone.com/articles/centralized-logging
Using Logback https://www.javaguides.net/2018/09/spring-boot-2-logging-slf4j-logback-and-log4j-example.html
By passing userId, sessionId, and requestId from the URI, intercepting the web request, putting them into the MDC (Mapped Diagnostic Context) supported by the logging framework, and printing them in the log.
https://dzone.com/articles/mdc-better-way-of-logging-1
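A hedged sketch of the MDC idea from the articles above: a servlet filter that reads (or generates) a correlation ID, puts it into the MDC so every log line for that request carries it, and clears it afterwards (the header name and MDC key are my own choices):

```java
import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Component
public class CorrelationIdFilter implements Filter {

    private static final String HEADER = "X-Correlation-Id";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String correlationId = ((HttpServletRequest) request).getHeader(HEADER);
        if (correlationId == null || correlationId.isEmpty()) {
            correlationId = UUID.randomUUID().toString();    // generate one if the caller didn't send it
        }
        MDC.put("correlationId", correlationId);              // available to the log pattern as %X{correlationId}
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("correlationId");                      // avoid leaking the value across pooled threads
        }
    }
}
```

The value can then be printed by adding %X{correlationId} to the Logback pattern.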
https://access.redhat.com/documentation/en-us/reference_architectures/2017/pdf/microservice_architecture/Reference_Architectures-2017-Microservice_Architecture-en-US.pdf
With the emergence of cloud platforms, especially container orchestration technologies like Kubernetes, the service mesh has been gaining attention. A service mesh is the bridge between application services that adds additional capabilities like traffic control, service discovery, load balancing, resilience, observability, security, and so on. It allows applications to offload these capabilities from application-level libraries and allows developers to focus on business logic.
Adoption of Cloud-Native Architecture, Part 1: Architecture Evolution and Maturity (useful read) https://www.infoq.com/articles/cloud-native-architecture-adoption-part1/?itm_source=articles_about_microservices&itm_medium=link&itm_campaign=microservices
MULTI-CLOUD APPLICATION DELIVERY https://www.nginx.com/solutions/cloud/
https://www.nginx.com/solutions/microservices/
Zuul indepth https://cloud.spring.io/spring-cloud-netflix/multi/multi__router_and_filter_zuul.html
How to Make Services Resilient in a Microservices Environment (retry, circuit breaker, rate limiter, bulkhead, correlation ID, and log aggregator). There is no out-of-the-box Spring Boot support for rate limiter and bulkhead; we achieve these with the Resilience4j library. https://dzone.com/articles/libraries-for-microservices-development
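A hedged sketch of the Resilience4j rate limiter mentioned above, using its core API directly (the limits and the decorated call are made up; with Spring Boot, the resilience4j-spring-boot2 starter provides annotation-based support instead):

```java
import java.time.Duration;
import java.util.function.Supplier;
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import io.github.resilience4j.ratelimiter.RateLimiterRegistry;

public class RateLimiterExample {

    public static void main(String[] args) {
        // Allow at most 10 calls per second; callers wait up to 500 ms for a permit.
        RateLimiterConfig config = RateLimiterConfig.custom()
                .limitForPeriod(10)
                .limitRefreshPeriod(Duration.ofSeconds(1))
                .timeoutDuration(Duration.ofMillis(500))
                .build();

        RateLimiter rateLimiter = RateLimiterRegistry.of(config).rateLimiter("backendA");

        // Decorate the remote call; the decorated supplier throws RequestNotPermitted
        // once the limit is exceeded, which the caller can handle as a fallback.
        Supplier<String> restricted = RateLimiter.decorateSupplier(rateLimiter, () -> "call to backendA");
        System.out.println(restricted.get());
    }
}
```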
Almost everyone has a sports watch or a smartwatch. The user synchronizes an activity log after a training session. It could be an ordinary training session or a running session (w/ specific running data added). https://dzone.com/articles/microservices-observability
Chaos Monkey randomly terminates virtual machine instances and containers that run inside of your production environment. Exposing engineers to failures more frequently incentivizes them to build resilient services. https://github.com/Netflix/chaosmonkey
The type of monitoring that provides details about the internal state of your application is called white-box monitoring, and the metrics extraction process is called instrumentation. https://blog.risingstack.com/the-future-of-microservices-monitoring-and-instrumentation/
Correlation ID for Logging in Microservices https://dzone.com/articles/correlation-id-for-logging-in-microservices
Global Exception Handling With @ControllerAdvice https://dzone.com/articles/global-exception-handling-with-controlleradvice
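A hedged sketch of global exception handling with @ControllerAdvice (the exception types handled and the response bodies are just illustrative):

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class GlobalExceptionHandler {

    // Catches this exception from any controller in the application.
    @ExceptionHandler(IllegalArgumentException.class)
    public ResponseEntity<String> handleBadRequest(IllegalArgumentException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
    }

    // Fallback for anything not handled above.
    @ExceptionHandler(Exception.class)
    public ResponseEntity<String> handleGeneric(Exception ex) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Unexpected error");
    }
}
```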
The Eric Evans book, Domain-Driven Design (DDD), transformed how we develop software. Eric promoted the idea of looking at business requirements from a domain perspective rather than from one based on technology.
The book considers microservices to be a derivation of the aggregate pattern. But many software development teams are taking the microservices design concept to the extreme, by attempting to convert all of their existing apps into microservices. This has led to anti-patterns like monolith hell, microliths, and others.
Following are some of the anti-patterns that architecture and dev teams need to be careful about:
- monolithic hell
- microliths
- Jenga tower
- logo slide (also known as Frankenstein)
- square wheel
- Death Star
API management
API management comes with its own concepts and terminology:
Internal APIs, External APIs
Definition and publication - API management solutions provide an intuitive interface to define meaningful APIs, including the base path (URL), resources, and endpoints. https://www.nginx.com/blog/what-is-api-management/
API gateway -
Microgateway – Many solutions have a centralized, tightly coupled data plane (API gateway) and control plane (API management tool). All API calls have to pass through the control plane, which adds latency. The API gateway in this architectural approach is inefficient when handling traffic in distributed environments (for example intraservice traffic in a microservices environment or handling IoT traffic to support real‑time analysis). Hence, to manage traffic where API consumers and providers are in close proximity, vendors of legacy solutions have introduced an additional software component called a microgateway to process API calls.
API analytics
API security (authentication, authorization, role-based access control (RBAC), rate limiting, Spring Boot OAuth) https://spring.io/guides/tutorials/spring-boot-oauth2/
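A hedged sketch of the RBAC part using Spring Security method security (the roles and endpoints are my own; it assumes @EnableGlobalMethodSecurity(prePostEnabled = true) is configured):

```java
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AccountController {

    // Any authenticated user with either role can read.
    @PreAuthorize("hasAnyRole('USER', 'ADMIN')")
    @GetMapping("/accounts")
    public String listAccounts() {
        return "accounts";
    }

    // Only administrators can reach this endpoint.
    @PreAuthorize("hasRole('ADMIN')")
    @GetMapping("/accounts/audit")
    public String auditAccounts() {
        return "audit report";
    }
}
```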
Cloud native https://stackify.com/cloud-native/
Cloud native computing uses an open source software stack to be:
- Containerized. Each part (applications, processes, etc.) is packaged in its own container. This facilitates reproducibility, transparency, and resource isolation.
- Dynamically orchestrated. Containers are actively scheduled and managed to optimize resource utilization.
- Microservices-oriented. Applications are segmented into microservices. This significantly increases the overall agility and maintainability of applications.
Hands-on With Istio Service Mesh: Implementing Canary Deployment https://dzone.com/articles/hands-on-with-istio-service-mesh-implementing-cana https://dzone.com/articles/istio-service-mesh-the-step-by-step-guide-part-2-t
Bookinfo Application - this is a polyglot application (the microservices are written in different languages) https://istio.io/docs/examples/bookinfo/
Install and set up Istio https://istio.io/docs/setup/getting-started/#platform
Canary release https://martinfowler.com/bliki/CanaryRelease.html
BlueGreen Deployment https://martinfowler.com/bliki/BlueGreenDeployment.html
Service mesh data plane vs. control plane https://blog.envoyproxy.io/service-mesh-data-plane-vs-control-plane-2774e720f7fc
Istio Service Mesh Control Plane https://dzone.com/articles/istio-service-mesh-control-plane
Istio Control Plane Components
Galley - centralized configuration management and distribution. Galley insulates the rest of the Istio components from the underlying platform, like Kubernetes. Galley ingests the user supplied configuration and validates it before distributing across pods.
Pilot - configures all the Envoy proxy instances deployed in the Istio service mesh.
Pilot provides service discovery for the Envoy sidecars and is the core component used for traffic management (Canary, Dark, etc.) in Istio.
Pilot fetches the configuration from Galley and lets you specify which rules you want to use to route traffic between Envoy proxies and configure failure recovery features such as timeouts, retries, and circuit breakers. As a result, it helps to keep the configuration in sync between the underlying platform and Istio Data Plane.
Mixer is the Istio component responsible for Policy Enforcement and Telemetry Collection.
Citadel provides strong service-to-service and end-user authentication using mutual TLS, with built-in identity and credential management.
Sidecar Design Pattern in Your Microservices Ecosystem
Segregating the functionalities of an application into a separate process can be viewed as a Sidecar pattern.
Just as a sidecar is attached to a motorcycle, in software architecture a sidecar is attached to a parent application and extends/enhances its functionality. A sidecar is loosely coupled with the main application.
Each microservice needs to have functionalities like observability, monitoring, logging, configuration, circuit breakers, and more. All these functionalities are implemented inside each of these microservices using some industry standard third-party libraries. But, is this not redundant? Does it not increase the overall complexity of your application? What happens if your applications are written in different languages — how do you incorporate the third-party libraries which are generally specific to a language like .NET, Java, Python, etc.?
How Does the Sidecar Pattern Work? The service mesh layer can live in a sidecar container that runs alongside your application. Multiple copies of the same sidecar are attached alongside each of your applications.
All the incoming and outgoing network traffic from an individual service flows through the sidecar proxy. As a result, the sidecar manages the traffic flow between microservices, gathers telemetry data, and enforces policies. In a sense, the service is not aware of the network and knows only about the attached sidecar proxy. This is really the essence of how the sidecar pattern works — it abstracts away the network dependency to the sidecar.
Inside a service mesh, we have the concept of a data plane and a control plane.
The data plane's responsibility is to handle the communication between the services inside the mesh and take care of the functionalities like service discovery, load balancing, traffic management, health checks, etc. The control plane's responsibility is to manage and configure the sidecar proxies to enforce policies and collect telemetry.
Simplifying Microservice Architecture With Envoy and Istio
See how Envoy and Istio can simplify your microservice architecture with service discovery, service-to-service and end-user authentication, and more.
In a microservice architecture, handling service-to-service communication is challenging, and most of the time we depend upon third-party libraries or components to provide functionality like service discovery, load balancing, circuit breakers, metrics, telemetry, and more. Companies like Netflix came up with their own libraries, such as Hystrix for circuit breaking, Eureka for service discovery, and Ribbon for load balancing, which are popular and widely used by organizations.
However these components need to be configured inside your application code and based on the language you are using, the implementation will vary a bit. Anytime these external components are upgraded, you need to update your application, verify it and deploy the changes. This also creates an issue where now your application code is a mixture of business functionality and these additional configurations. Needless to say, this tight coupling increases the overall application complexity - since the developer now needs to also understand how these components are configured so that he/she can troubleshoot in case of any issues.
A service mesh comes to the rescue here. It decouples this complexity from your application and puts it in a service proxy, letting the proxy handle it for you. These service proxies can provide a bunch of functionality like traffic management, circuit breaking, service discovery, authentication, monitoring, security, and much more. Hence, from an application standpoint, all it contains is the implementation of business functionality.
Istio Architecture Istio Service Mesh has two components — Control Plane and Data Plane.
Control Plane - Controls the Data plane and is responsible for configuring the proxies for traffic management, policy enforcement, and telemetry collection.
Data Plane - Comprises the Envoy proxies deployed as sidecars in each of the pods. All the application traffic flows through the Envoy proxies.
Pilot - Provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing and resiliency (timeouts, retries, circuit breakers)
Mixer - Platform independent component which enforces access control and usage policies across the service mesh, and collects telemetry data from the Envoy proxy and other services.
Citadel - Provides strong service-to-service and end-user authentication with built-in identity and credential management.
All traffic entering and leaving the Istio service mesh is routed via the Ingress/Egress Controller. By deploying an Envoy proxy in front of services, you can conduct A/B testing, deploy canary services, etc. for user-facing services. Envoy is injected into the service pods inside the data plane using istioctl kube-inject.
The Envoy sidecar logically calls Mixer before each request to perform precondition checks, and after each request to report telemetry. The sidecar has local caching such that a large percentage of precondition checks can be performed from the cache. Additionally, the sidecar buffers outgoing telemetry such that it only calls Mixer infrequently.
Envoy Proxy
Envoy is a high-performance open-source proxy designed for cloud-native applications.
Istio leverages Envoy's many built-in features like Service Discovery, load Balancing, Circuit Breakers, Fault Injection, Observability, Metrics and more. Envoy is deployed as a sidecar to the relevant service in the same Kubernetes pod. The sidecar proxy model also allows you to add Istio capabilities to an existing deployment with no need to rearchitect or rewrite code.