
Network question/observation: Why are Pods accessible by their private IP address and port, but not through their K8s Services, from outside the cluster?


Network-related question:

While using a GKE cluster, the Pods can be reached with curl using their Pod IP addresses, but they cannot be reached via their corresponding ClusterIP Services from the VM where the Jenkins master is set up. From a GKE cluster node, both the Pods and the Services are reachable.
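
For reference, the Pod IPs used in the tests below can be listed with the same labels the Service selects on (a minimal sketch; the selector is copied from the Service definition shown further down):

kubectl get pods -o wide -l app.kubernetes.io/instance=qarel,app.kubernetes.io/name=helm
# the IP column shows 10.4.0.10 and 10.4.1.10, the addresses curl'ed below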

This was tried from the Jenkins node:

kubectl describe svc qarel-helm
W0728 09:31:07.675073    4121 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
Name:              qarel-helm
Namespace:         default
Labels:            app.kubernetes.io/instance=qarel
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=helm
                   app.kubernetes.io/version=1.16.0
                   helm.sh/chart=helm-0.1.0
Annotations:       cloud.google.com/neg: {"ingress":true}
                   meta.helm.sh/release-name: qarel
                   meta.helm.sh/release-namespace: default
Selector:          app.kubernetes.io/instance=qarel,app.kubernetes.io/name=helm
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.8.15.43
IPs:               10.8.15.43
Port:              http  6060/TCP
TargetPort:        http/TCP
Endpoints:         10.4.0.10:5050,10.4.1.10:5050
Session Affinity:  None
Events:            <none>
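
The port mapping can be cross-checked: the Service listens on 6060 and forwards to the Pods' named port http, which resolves to 5050. A quick way to confirm this is via the Endpoints object (standard kubectl, nothing setup-specific):

kubectl get endpoints qarel-helm
# NAME         ENDPOINTS                       AGE
# qarel-helm   10.4.0.10:5050,10.4.1.10:5050   ...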

# Accessing the app via an individual Pod IP and port works fine

curl 10.4.0.10:5050/api/appointment
[{"message":"","email":"","mobile":"","image":"","_id":"62e23e4de4ccf1404e087ce4","name":"new chap","fulfilled":false}]



# Accessing via the service is NOT working

curl 10.8.15.43:6060/api/appointment
# (no response)
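
As a side note, one way to reach the Service from the Jenkins VM anyway is kubectl port-forward, which tunnels through the API server instead of routing to the ClusterIP (a sketch, assuming kubectl on the Jenkins node is already configured for this cluster, as it is above):

# forward local port 6060 to the Service via the API server; no ClusterIP routing involved
kubectl port-forward svc/qarel-helm 6060:6060 &
curl localhost:6060/api/appointment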


Trying the same two curl commands from one of the GKE nodes:


curl 10.4.0.10:5050/api/appointment
[{"message":"","email":"","mobile":"","image":"","_id":"62e23e4de4ccf1404e087ce4","name":"new chap","fulfilled":false}]

curl 10.8.15.43:6060/api/appointment
[{"message":"","email":"","mobile":"","image":"","_id":"62e23e4de4ccf1404e087ce4","name":"new chap","fulfilled":false}]

The IP route output on the GKE node:

The Service and Pod IPs are routed through different devices. The "vethXXXXXXXX" device is presumably the host end of a veth pair created per Pod by the CNI plugin.

 # ip route get 10.4.0.10
10.4.0.10 dev veth7c530464 src 10.4.0.1 uid 0 
    cache 

# ip route get 10.8.15.43
10.8.15.43 via 10.128.0.1 dev eth0 src 10.128.0.44 uid 0 
    cache 
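
Note that ip route get only consults the routing table, not iptables. On a cluster node, kube-proxy (in iptables mode) installs DNAT rules that rewrite the ClusterIP 10.8.15.43 to one of the Pod IPs before routing happens, which is why the Service works on the node even though the route points at the VPC gateway. A sketch of how to see this on the node (KUBE-SERVICES is the standard kube-proxy chain; the exact rule layout varies by version and proxy mode):

# on the GKE node: look for the kube-proxy DNAT rules for the Service IP
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.8.15.43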

The IP route output on the Jenkins node:

Both the Service and Pod IPs are handled by the same device (ens4), i.e. both are simply sent to the VPC gateway. Do Kubernetes Services need overlay network interfaces (or other node-local machinery) in order to be served!?


jenkins@jenkinsmaster:~$ ip route get 10.4.0.10
10.4.0.10 via 10.182.0.1 dev ens4 src 10.182.0.18 uid 113 
    cache 
jenkins@jenkinsmaster:~$ ip route get 10.8.15.43
10.8.15.43 via 10.182.0.1 dev ens4 src 10.182.0.18 uid 113 
    cache 
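
A likely explanation: the Jenkins VM has no kube-proxy at all, so nothing there ever translates the ClusterIP, and the packet just goes to the VPC gateway and is dropped; the Pod IPs, on the other hand, come from an alias IP range of the VPC subnet in a VPC-native GKE cluster and are therefore routable from any VM in the same VPC. Two checks consistent with this (a sketch; the cluster name is inferred from the node name above and the zone is a placeholder):

# on the Jenkins VM: the kube-proxy NAT chain does not exist here
sudo iptables -t nat -nL KUBE-SERVICES
# iptables: No chain/target/match by that name.

# is the cluster VPC-native (alias IPs)? "True" would mean Pod IPs are VPC-routable
gcloud container clusters describe ram-cluster2807 --zone <zone> \
    --format="value(ipAllocationPolicy.useIpAliases)"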
