Manual for the Kubernetes node competition

Welcome to the PODS competition for Racing to the Cloud 2023.

With this exercise you will learn how to manage the infrastructure of a Kubernetes cluster using the Oracle CLI package (OCI CLI).

Ready? Let's play and learn!

Let's get started...

Wargame: We currently have a Kubernetes cluster with a public IP deployed on Oracle Cloud Infrastructure. The administrator is on vacation and access to the web pages hosted on this cluster has become slow. Your mission is to connect to the infrastructure with the credentials the administrator sent over WhatsApp and, using OCI CLI commands, deploy 1 additional node pool in this cluster.

Go Go Go Go

  1. Copy and paste the following information into a plain text file on your computer:

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaa5eya2vzrlrqwb5lx3gqcba3bp475lsgnwpjkxrdccl2ksarbb7aq
fingerprint=e8:02:b2:74:38:c8:25:c1:29:49:5e:0a:53:7a:b9:04
tenancy=ocid1.tenancy.oc1..aaaaaaaaoi6b5sxlv4z773boczybqz3h2vspvvru42jysvizl77lky22ijaq
region=us-ashburn-1
key_file= # TODO

  2. Download the following file; it is a private key that we will use to connect to our Kubernetes cluster.

Private key: https://drive.google.com/file/d/16dA_zaK5eYDbnsH6J9di-Sk9bxPMTD_w/view?usp=sharing
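
If you are working on Linux, it is worth restricting the permissions of the downloaded key, since the CLI may warn if the key file is readable by other users. A one-line example, assuming the key was saved to ~/Downloads:

chmod 600 ~/Downloads/Kubernetes_Cluster_key.pem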

  3. Download and install the OCI CLI tool by following the installation steps, using an SSH console (on Linux) or a PowerShell console (on Windows):

Download: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm
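
Once the installer finishes, a quick sanity check is to print the CLI version from the same console; if the command is not found, reopen the console so the PATH changes take effect:

oci --version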

Now let's create the configuration to connect to our Kubernetes cluster.

  4. Once it is installed, run the following:

oci setup config

It will walk you through creating a new profile configuration:

This command provides a walkthrough of creating a valid CLI config file.

The following links explain where to find the information required by this
script:

User API Signing Key, OCID and Tenancy OCID:

    https://docs.cloud.oracle.com/Content/API/Concepts/apisigningkey.htm#Other

Region:

    https://docs.cloud.oracle.com/Content/General/Concepts/regions.htm

General config documentation:

    https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm
  • Enter the following information:

  • Enter a location for your config [C:\Users\Nestor Santos\.oci\config]: **C:\testconfig.txt** This is the location where you want the config file to be saved.

  • Enter a user OCID: ocid1.user.oc1..aaaaaaaa5eya2vzrlrqwb5lx3gqcba3bp475lsgnwpjkxrdccl2ksarbb7aq

  • Enter a tenancy OCID: ocid1.tenancy.oc1..aaaaaaaaoi6b5sxlv4z773boczybqz3h2vspvvru42jysvizl77lky22ijaq

  • Enter a region by index or name (e.g.
    1: af-johannesburg-1, 2: ap-chiyoda-1, 3: ap-chuncheon-1, 4: ap-dcc-canberra-1, 5: ap-hyderabad-1,
    6: ap-ibaraki-1, 7: ap-melbourne-1, 8: ap-mumbai-1, 9: ap-osaka-1, 10: ap-seoul-1,
    11: ap-singapore-1, 12: ap-sydney-1, 13: ap-tokyo-1, 14: ca-montreal-1, 15: ca-toronto-1,
    16: eu-amsterdam-1, 17: eu-dcc-milan-1, 18: eu-frankfurt-1, 19: eu-madrid-1, 20: eu-marseille-1,
    21: eu-milan-1, 22: eu-paris-1, 23: eu-stockholm-1, 24: eu-zurich-1, 25: il-jerusalem-1,
    26: me-abudhabi-1, 27: me-dcc-muscat-1, 28: me-dubai-1, 29: me-jeddah-1, 30: mx-queretaro-1,
    31: sa-santiago-1, 32: sa-saopaulo-1, 33: sa-vinhedo-1, 34: uk-cardiff-1, 35: uk-gov-cardiff-1,
    36: uk-gov-london-1, 37: uk-london-1, 38: us-ashburn-1, 39: us-chicago-1, 40: us-gov-ashburn-1,
    41: us-gov-chicago-1, 42: us-gov-phoenix-1, 43: us-langley-1, 44: us-luke-1, 45: us-phoenix-1,
    46: us-sanjose-1): 38

  • Do you want to generate a new API Signing RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]: n

  • Enter the location of your API Signing private key file: **C:\Users\Nestor Santos\Downloads\Kubernetes_Cluster_key.pem** This is the location of the key file you downloaded in step 2.

Result:

Fingerprint: e8:02:b2:74:38:c8:25:c1:29:49:5e:0a:53:7a:b9:04
Config written to C:\testconfig.txt
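
Note that the config was written to a non-default path (C:\testconfig.txt). If the CLI cannot find your configuration, you can point to it explicitly with the global --config-file option; the same flag can be added to the commands below if needed. A harmless test call, assuming you kept the path from this walkthrough:

oci iam region list --config-file C:\testconfig.txt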

To check that the OCI CLI is set up correctly, let's run the following command, which will show the information for our Kubernetes cluster:

**oci ce cluster list --compartment-id ocid1.compartment.oc1..aaaaaaaatmwo626q7jugpjmnk6crvyzlrzmglzo7sy5egue67noptjrpknea**

The result should be the following:

{
"data": [
{
  "available-kubernetes-upgrades": [],
  "cluster-pod-network-options": [
    {
      "cni-type": "FLANNEL_OVERLAY"
    }
  ],
  "compartment-id": "ocid1.compartment.oc1..aaaaaaaatmwo626q7jugpjmnk6crvyzlrzmglzo7sy5egue67noptjrpknea",
  "defined-tags": {},
  "endpoint-config": {
    "is-public-ip-enabled": true,
    "nsg-ids": [],
    "subnet-id": "ocid1.subnet.oc1.iad.aaaaaaaanshqidujoekxb7nktdhxkt6mg4ezfigg7mqz7tj3jfgxze3qk35a"
  },
  "endpoints": {
    "kubernetes": null,
    "private-endpoint": "10.0.0.2:6443",
    "public-endpoint": "129.213.181.100:6443",
    "vcn-hostname-endpoint": "c4mrg5os2pq.sub05d8cfe20.cluster1.oraclevcn.com:6443"
  },
  "freeform-tags": {
    "OKEclusterName": "cluster1"
  },
  "id": "ocid1.cluster.oc1.iad.aaaaaaaa4o64evfa3xxlijege5fvgpfnybeyevaq46dgqw3jqc4mrg5os2pq",
  "image-policy-config": {
    "is-policy-enabled": false,
    "key-details": []
  },
  "kubernetes-version": "v1.25.4",
  "lifecycle-details": null,
  "lifecycle-state": "ACTIVE",
  "metadata": {
    "created-by-user-id": "ocid1.saml2idp.oc1..aaaaaaaavaffpse6nw7wql6jqgi3qj5le24z3fld56up342v6mvghm7y3vpq/[email protected]",
    "created-by-work-request-id": "ocid1.clustersworkrequest.oc1.iad.aaaaaaaaaiastpqewbv74j6uqburh2mwbvrkunllhw4ztverfweau3v3s7ma",
    "deleted-by-user-id": null,
    "deleted-by-work-request-id": null,
    "time-created": "2023-02-17T13:05:24+00:00",
    "time-deleted": null,
    "time-updated": "2023-02-17T13:33:53+00:00",
    "updated-by-user-id": "ocid1.saml2idp.oc1..aaaaaaaavaffpse6nw7wql6jqgi3qj5le24z3fld56up342v6mvghm7y3vpq/[email protected]",
    "updated-by-work-request-id": "ocid1.clustersworkrequest.oc1.iad.aaaaaaaa2yydwor3rmbhe5y65djdcvahziwtf7syxkyhzwveyw63eybaqlba"
  },
  "name": "DemoFeb-23-Capacityvs-aws",
  "options": {
    "add-ons": {
      "is-kubernetes-dashboard-enabled": false,
      "is-tiller-enabled": false
    },
    "admission-controller-options": {
      "is-pod-security-policy-enabled": false
    },
    "kubernetes-network-config": {
      "pods-cidr": "10.244.0.0/16",
      "services-cidr": "10.96.0.0/16"
    },
    "persistent-volume-config": {
      "defined-tags": {},
      "freeform-tags": {
        "OKEclusterName": "cluster1"
      }
    },
    "service-lb-config": {
      "defined-tags": {},
      "freeform-tags": {
        "OKEclusterName": "cluster1"
      }
    },
    "service-lb-subnet-ids": [
      "ocid1.subnet.oc1.iad.aaaaaaaai6citdrxvnoqtwfqycvkq6t3vthgcvgdbxkpov4yx2qlhhmmv52a"
    ]
  },
  "system-tags": null,
  "vcn-id": "ocid1.vcn.oc1.iad.amaaaaaafioir7iasor7l3u4x6ll47swkyi72oy47ileoj3okcpt2holz62a"
	}
	]
}
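
From this output we mainly need the cluster OCID and the VCN OCID for the following steps. Rather than copying them by hand, you can extract them with the CLI's JMESPath --query option. A sketch in bash syntax; the variable names are only examples, and keys that contain hyphens must be double-quoted inside the query:

export compartment_id=ocid1.compartment.oc1..aaaaaaaatmwo626q7jugpjmnk6crvyzlrzmglzo7sy5egue67noptjrpknea
cluster_id=$(oci ce cluster list --compartment-id $compartment_id --query 'data[0].id' --raw-output)
vcn_id=$(oci ce cluster list --compartment-id $compartment_id --query 'data[0]."vcn-id"' --raw-output)
echo $cluster_id $vcn_id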

Now let's pull the information for the node pool that already exists in the cluster by running this command:

oci ce node-pool list --compartment-id ocid1.compartment.oc1..aaaaaaaatmwo626q7jugpjmnk6crvyzlrzmglzo7sy5egue67noptjrpknea

The result should be the following:

{
"data": [
{
  "cluster-id": "ocid1.cluster.oc1.iad.aaaaaaaa4o64evfa3xxlijege5fvgpfnybeyevaq46dgqw3jqc4mrg5os2pq",
  "compartment-id": "ocid1.compartment.oc1..aaaaaaaatmwo626q7jugpjmnk6crvyzlrzmglzo7sy5egue67noptjrpknea",
  "defined-tags": {},
  "freeform-tags": {
    "OKEnodePoolName": "pool1"
  },
  "id": "ocid1.nodepool.oc1.iad.aaaaaaaa6gnl6ywxpzza6ump6gapa5cgay6e764f6zcgg6vjqnrwvvz7gdsq",
  "initial-node-labels": [
    {
      "key": "name",
      "value": "cluster1"
    }
  ],
  "kubernetes-version": "v1.25.4",
  "lifecycle-details": null,
  "lifecycle-state": "ACTIVE",
  "name": "pool1",
  "node-config-details": {
    "defined-tags": {},
    "freeform-tags": {
      "OKEnodePoolName": "pool1"
    },
    "is-pv-encryption-in-transit-enabled": false,
    "kms-key-id": "",
    "node-pool-pod-network-option-details": {
      "cni-type": "FLANNEL_OVERLAY"
    },
    "nsg-ids": [],
    "placement-configs": [
      {
        "availability-domain": "IAfA:US-ASHBURN-AD-1",
        "capacity-reservation-id": null,
        "fault-domains": null,
        "subnet-id": "ocid1.subnet.oc1.iad.aaaaaaaahm4kys5othuwpc24y6tb6tmrm3f5v7ylzeeb7ud4ai3iiuj4s33a"
      },
      {
        "availability-domain": "IAfA:US-ASHBURN-AD-2",
        "capacity-reservation-id": null,
        "fault-domains": null,
        "subnet-id": "ocid1.subnet.oc1.iad.aaaaaaaahm4kys5othuwpc24y6tb6tmrm3f5v7ylzeeb7ud4ai3iiuj4s33a"
      },
      {
        "availability-domain": "IAfA:US-ASHBURN-AD-3",
        "capacity-reservation-id": null,
        "fault-domains": null,
        "subnet-id": "ocid1.subnet.oc1.iad.aaaaaaaahm4kys5othuwpc24y6tb6tmrm3f5v7ylzeeb7ud4ai3iiuj4s33a"
      }
    ],
    "size": 3
  },
  "node-eviction-node-pool-settings": {
    "eviction-grace-duration": "PT1H",
    "is-force-delete-after-grace-duration": null
  },
  "node-image-id": "ocid1.image.oc1.iad.aaaaaaaanv2ck2dnx6tqapiw5uaaxsppsbruygzn22h5hqteljb5anqflkca",
  "node-image-name": "Oracle-Linux-8.6-2022.12.15-0-OKE-1.25.4-543",
  "node-shape": "VM.Standard.E3.Flex",
  "node-shape-config": {
    "memory-in-gbs": 16.0,
    "ocpus": 1.0
  },
  "node-source": {
    "image-id": "ocid1.image.oc1.iad.aaaaaaaanv2ck2dnx6tqapiw5uaaxsppsbruygzn22h5hqteljb5anqflkca",
    "source-name": "Oracle-Linux-8.6-2022.12.15-0-OKE-1.25.4-543",
    "source-type": "IMAGE"
  },
  "node-source-details": {
    "boot-volume-size-in-gbs": null,
    "image-id": "ocid1.image.oc1.iad.aaaaaaaanv2ck2dnx6tqapiw5uaaxsppsbruygzn22h5hqteljb5anqflkca",
    "source-type": "IMAGE"
  },
  "quantity-per-subnet": 0,
  "ssh-public-key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8wnskF/nOneJblNN7RjDZ36qadOhdOrBoyBbnVRfFdEXP8c1yb9XVWsQzjxv9q6ziKPJJtjJKSFalWvmXGtVmeAjCPKC+4pdmFv+UK7yjZ4lZyHT1eNUdkc8LIknOCjIXCDRRfPOBnvvFIeeN1BtCkTJ85iDTKhuKD3+4pQOjU49NUffUDudTDFdLF3n6xWbgwNi5QouAOvrbyQRmLOKP+WSS89ch0sHqA6o3N0K46guHWet2jes3M2s48okagjcrI6xIB1nyArCcsPNS4XZ4KqHrjrkcBuxjpLj2KdAoTPvIEG6vKkGGG0haEQyHawkfFzrydEUUwzkYyt2sipj/ ssh-key-2023-02-17",
  "subnet-ids": [
    "ocid1.subnet.oc1.iad.aaaaaaaahm4kys5othuwpc24y6tb6tmrm3f5v7ylzeeb7ud4ai3iiuj4s33a"
  ],
  "system-tags": null
}
]
}	
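
The values worth reusing from the existing pool are the Kubernetes version, the node shape, and the worker subnet OCID. They can be pulled out the same way; again a bash sketch with example variable names:

kubernetes_version=$(oci ce node-pool list --compartment-id $compartment_id --query 'data[0]."kubernetes-version"' --raw-output)
node_shape=$(oci ce node-pool list --compartment-id $compartment_id --query 'data[0]."node-shape"' --raw-output)
subnet_id=$(oci ce node-pool list --compartment-id $compartment_id --query 'data[0]."subnet-ids"[0]' --raw-output)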

Now that you have all the information, all that is left is to create the new node pool, using the following documentation template and the values returned by the two previous commands. Keep in mind that for this wargame the cluster already exists, so the cluster creation step in the template is not needed; a possible adaptation is sketched after the snippet:

export compartment_id=<substitute-value-of-compartment_id> 
export kubernetes_version=<substitute-value-of-kubernetes_version> 
export name=<substitute-value-of-name> 
export vcn_id=<substitute-value-of-vcn_id> 
export node_shape=<substitute-value-of-node_shape> 

cluster_id=$(oci ce cluster create --compartment-id $compartment_id --kubernetes-version $kubernetes_version --name $name --vcn-id $vcn_id --query data.id --raw-output)


oci ce node-pool create --cluster-id $cluster_id --compartment-id $compartment_id --name $name --node-shape $node_shape
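
Since the wargame cluster already exists, a reasonable adaptation is to skip the `oci ce cluster create` line and reuse the cluster OCID, Kubernetes version, shape and worker subnet obtained above. The sketch below (bash syntax, reusing the example variables from the previous extraction snippets) is only a starting point: the pool name pool2 and the size of 1 node are arbitrary choices, and the exact flag names and JSON key spellings for --node-shape-config and --placement-configs should be confirmed with `oci ce node-pool create -h` before running it.

# Create one additional node pool on the existing cluster (sketch; verify flag names with -h)
oci ce node-pool create \
  --cluster-id $cluster_id \
  --compartment-id $compartment_id \
  --name pool2 \
  --kubernetes-version $kubernetes_version \
  --node-shape $node_shape \
  --node-shape-config '{"memoryInGBs": 16.0, "ocpus": 1.0}' \
  --size 1 \
  --placement-configs '[{"availabilityDomain": "IAfA:US-ASHBURN-AD-1", "subnetId": "ocid1.subnet.oc1.iad.aaaaaaaahm4kys5othuwpc24y6tb6tmrm3f5v7ylzeeb7ud4ai3iiuj4s33a"}]'

Afterwards, re-running the `oci ce node-pool list` command from earlier should show the new pool alongside pool1 once its lifecycle-state reaches ACTIVE.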