service mesh - bobbae/gcp GitHub Wiki
https://cloud.google.com/architecture/service-meshes-in-microservices-architecture
A service mesh is a platform layer on top of the infrastructure layer that enables managed, observable, and secure communication between individual services.
Enterprises are adopting microservices and service meshes to enable new levels of IT agility, but a successful microservices implementation is complicated. As the number of services an organization runs grows, complexity and risk can increase rapidly. The microservices need to be exposed as APIs with features such as service discovery, load balancing, failure recovery, metrics, monitoring, A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.
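As a concrete illustration of one of those features, a canary rollout in an Istio-based mesh is typically expressed as a weighted traffic split in a VirtualService. The sketch below is a minimal, hypothetical example: the service name `reviews` and the subsets `v1`/`v2` are assumptions, and the subsets would need to be defined in a matching DestinationRule.

```yaml
# Hypothetical canary rollout: send 90% of traffic to the stable
# subset (v1) and 10% to the canary (v2). Subset definitions live
# in a separate DestinationRule (not shown).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Shifting the weights (e.g. 50/50, then 0/100) completes the rollout without redeploying the application.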
The service mesh is typically implemented as a scalable set of network proxies deployed alongside application code (a pattern sometimes called a sidecar).
The rise of the service mesh is tied to the rise of the cloud native application. In the cloud native world, an application might consist of hundreds of services; each service might have thousands of instances; and each of those instances might be in a constantly changing state as it is dynamically scheduled by an orchestrator like Kubernetes.
A sidecar proxy is an application design pattern that abstracts certain features, such as inter-service communication, monitoring, and security, away from the main application to ease the tracking and maintenance of the application as a whole.
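In Kubernetes terms, the sidecar pattern means the proxy runs as a second container in the same Pod as the application. A minimal sketch, assuming hypothetical names and images (in practice a mesh like Istio injects this container automatically rather than having you write it by hand):

```yaml
# Hypothetical Pod with an Envoy sidecar alongside the app container.
# Both containers share the Pod's network namespace, so the proxy can
# intercept the app's inbound and outbound traffic.
apiVersion: v1
kind: Pod
metadata:
  name: checkout
  labels:
    app: checkout
spec:
  containers:
  - name: app                  # the main application container
    image: example/checkout:1.0    # placeholder image
    ports:
    - containerPort: 8080
  - name: envoy-sidecar        # the proxy handling mesh traffic
    image: envoyproxy/envoy:v1.24-latest
    ports:
    - containerPort: 15001
```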
Nginx and Envoy are common proxies used this way and controlled by service mesh controllers.
L7 proxies maintain two TCP connections: one with the client and one with the server. The packets are reassembled, and the load balancer can then make a routing decision based on the information it finds in the application-layer requests or responses.
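That application-layer routing decision is visible in an Envoy route configuration: because Envoy terminates the client connection and parses the HTTP request, it can match on the URL path before choosing an upstream. A trimmed, hypothetical static config sketch (cluster definitions for `orders_service` and `web_service` are elided):

```yaml
# Hypothetical Envoy listener: route on the HTTP path, which is only
# possible because the proxy reassembles and parses the L7 request.
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/api/orders" }   # L7 routing decision
                route: { cluster: orders_service }
              - match: { prefix: "/" }
                route: { cluster: web_service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```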
https://buoyant.io/service-mesh-manifesto/
https://logz.io/blog/istio-linkerd-consul-comparison-service-meshes/
https://medium.com/google-cloud/when-not-to-use-service-mesh-1a44abdeea31
Istio is an open source service mesh that helps organizations run distributed, microservices-based apps anywhere. Why use Istio? Istio enables organizations to secure, connect, and monitor microservices, so they can modernize their enterprise apps more swiftly and securely.
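As an example of the "secure" part, Istio can enforce mutual TLS between workloads with a PeerAuthentication policy. A minimal sketch: applying this in the `istio-system` root namespace requires mTLS mesh-wide, so sidecars reject plaintext traffic between services.

```yaml
# Mesh-wide strict mTLS: placing this policy in the root namespace
# (istio-system by default) makes all sidecars require mutual TLS.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```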
https://blog.christianposta.com/microservices/istio-as-an-example-of-when-not-to-do-microservices/
Anthos Service Mesh is a suite of tools that helps you monitor and manage a reliable service mesh on-premises or on Google Cloud.
https://cloud.google.com/service-mesh/docs/overview
https://blog.searce.com/anthos-blog-series-part-1-anthos-service-mesh-a258ba621732
https://cloud.google.com/service-mesh/docs/unified-install/install-anthos-service-mesh
https://cloud.google.com/blog/topics/anthos/anthos-service-mesh-deep-dive
https://cloud.google.com/service-mesh/docs/onlineboutique-install-kpt
https://cloud.google.com/service-mesh/docs/unified-install/gke-install-multi-cluster
https://cloud.google.com/service-mesh/docs/unified-install/off-gcp-multi-cluster-setup
https://cloud.google.com/service-mesh/docs/unified-install/options/all-install-options
https://cloud.google.com/service-mesh/docs/managed/configure-managed-anthos-service-mesh
Envoy is an L7 edge and service proxy widely used by service mesh controllers such as Consul, Contour, and Istio. Envoy is also used by API gateways such as Ambassador.
There are many open source projects built on Envoy Proxy.
https://www.envoyproxy.io/docs/envoy/latest/intro/life_of_a_request
Apigee adapter for Envoy.
https://www.youtube.com/watch?v=BNkfoZt-jvU
Consul is a widely used service mesh. You can use Consul with Ambassador Edge Stack.
https://www.youtube.com/watch?v=XW3AXQfAaQc
https://www.youtube.com/watch?v=Bj7gGQUiDuk
https://linkerd.io/2.11/reference/architecture/
https://www.infracloud.io/blogs/service-mesh-comparison-istio-vs-linkerd/
https://konghq.com/blog/envoy-service-mesh/
Multi-cluster Services (MCS) is a cross-cluster Service discovery and invocation mechanism for Google Kubernetes Engine (GKE) that leverages the existing Service object. Services enabled with this feature are discoverable and accessible across clusters with a virtual IP, matching the behavior of a ClusterIP Service accessible in a cluster.
https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-services
The Google Kubernetes Engine (GKE) MCS feature extends the reach of the Kubernetes Service beyond the cluster boundary and lets you discover and invoke Services across multiple GKE clusters.
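To make a Service available to other clusters in the fleet, you create a ServiceExport with the same name and namespace as the Service. A minimal sketch, with hypothetical names:

```yaml
# Exports the existing Service "my-service" in namespace "prod"
# to the other clusters registered to the same fleet.
apiVersion: net.gke.io/v1
kind: ServiceExport
metadata:
  namespace: prod
  name: my-service
```

Other clusters can then reach it at the multi-cluster DNS name `my-service.prod.svc.clusterset.local`.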
https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-services
Multi-cluster Ingress (MCI) is a cloud-hosted multi-cluster Ingress controller for Anthos clusters. It's a Google-hosted service that supports deploying shared load-balancing resources across clusters and across regions.
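An MCI deployment is declared with a MultiClusterIngress resource in the config cluster, which points at a MultiClusterService backend rather than a regular Service. A minimal, hypothetical sketch (the names `foo-ingress` and `foo-mcs` and the port are assumptions):

```yaml
# Hypothetical MultiClusterIngress: Google provisions a shared
# external load balancer whose backend is the MultiClusterService
# "foo-mcs", spanning the member clusters.
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: foo-ingress
  namespace: foo
spec:
  template:
    spec:
      backend:
        serviceName: foo-mcs   # a MultiClusterService, not a Service
        servicePort: 8080
```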
https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-ingress-setup
https://cloud.google.com/service-mesh/docs/security/security-overview
https://cloud.google.com/service-mesh/docs/observability-overview
https://cloud.google.com/service-mesh/docs/by-example/canary-deployment
https://cloud.google.com/service-mesh/docs/by-example/mtls
https://cloud.google.com/service-mesh/docs/automate-tls
https://cloud.google.com/architecture/exposing-service-mesh-apps-through-gke-ingress