Ingress
Ingress exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may also provide HTTP(S) load balancing, SSL termination, and name-based virtual hosting.
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
https://github.com/kubernetes/ingress-gce
https://jaygorrell.medium.com/kubernetes-ingress-82aa960f658e
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-ilb
https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-ingress
In GCP, a Network Endpoint Group (NEG) is a configuration object that specifies a group of backend endpoints or services; in GKE, NEGs can hold the IPs of individual Pods, so load balancers can send traffic directly to containers.
https://cloud.google.com/load-balancing/docs/negs
https://cloud.google.com/load-balancing/docs/negs#neg-types
When NEGs are used with GKE Ingress, the Ingress controller facilitates the creation of all aspects of the L7 load balancer. This includes creating the virtual IP address, forwarding rules, health checks, firewall rules, and more.
https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#ingress_with_negs
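A minimal sketch of how a Service opts in to NEGs for container-native load balancing (the Service name, selector, and ports here are illustrative): annotating the Service with cloud.google.com/neg tells GKE to create NEGs for it, so the load balancer created by the Ingress can reach Pod IPs directly.

apiVersion: v1
kind: Service
metadata:
  name: neg-demo-svc               # illustrative name
  annotations:
    cloud.google.com/neg: '{"ingress": true}'   # create NEGs for use by Ingress
spec:
  type: ClusterIP
  selector:
    app: neg-demo                  # illustrative Pod label
  ports:
  - port: 80
    targetPort: 8080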
A Kubernetes Ingress is not a type of Service. It is a collection of rules. An Ingress Controller in your cluster watches for Ingress resources, and attempts to update the server-side configuration according to the rules specified in the Ingress.
https://thenewstack.io/ingress-controllers-the-swiss-army-knife-of-kubernetes/
Kubernetes ingress is a collection of routing rules that govern how external users access services running in a Kubernetes cluster.
In a typical Kubernetes application, you have pods running inside a cluster and a load balancer running outside. The load balancer takes connections from the internet and routes the traffic to an edge proxy that sits inside your cluster. The edge proxy is then responsible for routing traffic into your pods. The edge proxy is commonly called an ingress controller because it is commonly configured using ingress resources in Kubernetes. The edge proxy can also be configured with custom resource definitions (CRDs) or annotations.
You can do a lot of different things with an Ingress, and there are many types of Ingress controllers that have different capabilities.
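As a minimal sketch of how a cluster distinguishes between controllers (the class name, controller string, and Service name below are illustrative), an IngressClass resource names a controller and an Ingress selects it with ingressClassName:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class
spec:
  controller: example.com/ingress-controller   # identifies the controller that implements this class
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: example-class   # handled by the controller named above
  defaultBackend:
    service:
      name: service1
      port:
        number: 80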
The default GKE ingress controller will spin up an HTTP(S) Load Balancer for you. This will let you do both path-based and subdomain-based routing to backend services. For example, you can send everything on foo.yourdomain.com to the foo service, and everything under the yourdomain.com/bar/ path to the bar service.
https://kubernetes.io/docs/concepts/services-networking/ingress/
An example of an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "*.foo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/foo"
        backend:
          service:
            name: service2
            port:
              number: 80
In GKE, an Ingress object defines rules for routing HTTP(S) traffic to applications running in a cluster. An Ingress object is associated with one or more Service objects, each of which is associated with a set of Pods. To learn more about how Ingress exposes applications using Services, see Service networking overview.
When you create an Ingress object, the GKE Ingress controller creates a Google Cloud HTTP(S) Load Balancer and configures it according to the information in the Ingress and its associated Services.
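A hedged sketch (the Ingress and Service names are illustrative): per the ingress-xlb and ingress-ilb pages linked above, the kubernetes.io/ingress.class annotation selects between the external HTTP(S) load balancer (gce, the default) and the internal one (gce-internal).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"   # omit, or use "gce", for an external HTTP(S) load balancer
spec:
  defaultBackend:
    service:
      name: internal-service
      port:
        number: 80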
To use Ingress, you must have the HTTP load balancing add-on enabled. GKE clusters have HTTP load balancing enabled by default; you must not disable it.
You can also expose a Service in ways that don't directly involve the Ingress resource, for example with a Service of type NodePort or LoadBalancer, as sketched below.
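A minimal sketch (name, selector, and ports are illustrative): a Service of type LoadBalancer gets its own network load balancer on GKE, with no Ingress involved.

apiVersion: v1
kind: Service
metadata:
  name: hello-lb                   # illustrative name
spec:
  type: LoadBalancer               # GKE provisions a passthrough network load balancer for this Service
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080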
https://github.com/haproxytech/kubernetes-ingress
https://www.haproxy.com/blog/announcing-haproxy-kubernetes-ingress-controller-1-7/
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
Multi-cluster Ingress (MCI) is a cloud-hosted multi-cluster Ingress controller for Anthos clusters. It's a Google-hosted service that supports deploying shared load balancing resources across clusters and across regions.
https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-ingress-setup
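A rough sketch of the two MCI resources, applied to the config cluster (resource names, labels, and ports are illustrative; see the setup guide above for the full flow): a MultiClusterService selects Pods across member clusters, and a MultiClusterIngress routes to it.

apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: whereami-mcs
spec:
  template:
    spec:
      selector:
        app: whereami              # illustrative Pod label, matched in each member cluster
      ports:
      - name: web
        protocol: TCP
        port: 8080
        targetPort: 8080
---
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: whereami-ingress
spec:
  template:
    spec:
      backend:
        serviceName: whereami-mcs  # the MultiClusterService above
        servicePort: 8080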
https://github.com/GoogleCloudPlatform/gke-managed-certs
https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
https://faun.pub/multi-cluster-ingress-gke-57be59ced00d
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl#secrets
https://medium.com/@betandr/kubernetes-ingress-with-tls-on-gke-744efd37e49e
https://johnclarke73.medium.com/tls-configuration-in-gke-the-really-simple-way-5af7abb0e8e1
https://gist.github.com/pydevops/dce8bdf1c360f7a913ac48f04b2d39d1
https://cloud.google.com/architecture/exposing-service-mesh-apps-through-gke-ingress