This guide provides an example of deploying applications within a workload cluster. Specifically, it demonstrates how to set up an Nginx application with multiple replicas and expose it through a LoadBalancer Service.

The following YAML defines a Deployment for an Nginx application with two replicas:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
Apply the Deployment using:
```shell
kubectl apply -f deployment.yaml
```
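The commands in this guide assume that `kubectl` is already configured to talk to the workload cluster. If your current context still points at the management cluster, one way to target the workload cluster is to fetch its kubeconfig with `clusterctl` first; the cluster name `one` below is only a placeholder, replace it with the name of your Cluster resource:

```shell
# Fetch the workload cluster's kubeconfig from the management cluster
# (replace "one" with the name of your Cluster resource).
clusterctl get kubeconfig one > one.kubeconfig

# Apply the manifest against the workload cluster explicitly.
kubectl --kubeconfig one.kubeconfig apply -f deployment.yaml
```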
Now, define a LoadBalancer Service to expose the Nginx deployment externally:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
```
Apply the Service using:
```shell
kubectl apply -f service.yaml
```
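Depending on how load balancing is provided in your cluster, it may take a short while before an external IP is assigned to the Service. As a quick check, you can watch the Service until the EXTERNAL-IP column is populated:

```shell
# Watch the Service; the EXTERNAL-IP column shows <pending> until the
# load balancer IP has been assigned.
kubectl get service nginx-service -w
```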
To ensure that the Deployment is running correctly, execute:
```shell
$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           4h19m
```
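If the Deployment does not report all replicas as ready, you can inspect the individual Pods. Since the Deployment labels its Pods with `app: nginx`, a label selector narrows the output; this is a minimal sketch:

```shell
# List only the Pods created by the nginx Deployment and show which
# nodes they were scheduled on.
kubectl get pods -l app=nginx -o wide
```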
Next, verify the status of the LoadBalancer Service:
```shell
$ kubectl get services
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes      ClusterIP      10.96.0.1       <none>          443/TCP        4h39m
nginx-service   LoadBalancer   10.96.243.105   <EXTERNAL-IP>   80:30591/TCP   3h36m
```
Once the LoadBalancer Service is created, it will provide a public-facing IP that directs incoming traffic to the Nginx deployment. Look for the EXTERNAL-IP column in the output and access the application in your browser or via curl:
```shell
curl http://<EXTERNAL-IP>
```
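If you prefer to script this step, the assigned address can also be read from the Service's status with a JSONPath query. This is a minimal sketch that assumes the load balancer reports an IP address (rather than a hostname) in its status:

```shell
# Extract the load balancer IP from the Service status and test the endpoint.
EXTERNAL_IP=$(kubectl get service nginx-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${EXTERNAL_IP}"
```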