Use Cases - GoogleCloudPlatform/anthos-appconfig GitHub Wiki
The app config custom resource expands high-level configuration options into multiple sub-resources (often istio resources).

Client whitelisting is configured via the `.services[].allowedClients` field. Whitelisting controls which intra-cluster clients (pods with matching `app` labels) can call a given service. This functionality is provided by istio mixer rules when istio is enabled, and by kubernetes network policies otherwise.
When istio is not enabled, a deny-all NetworkPolicy must be applied separately:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```
This network policy denies all inbound traffic to all pods by default. Note that applying it to an existing cluster that lacks allow-policies will most likely break existing application communication. Read more about network policies in the kubernetes documentation.
```yaml
spec:
  services:
  - name: orders-api
    ...
    allowedClients:
    - name: orders-app          # Allow clients from the same namespace with label app=orders-app
    - name: etl/orders-importer # Allow clients from namespace=etl and label app=orders-importer
  ...
```
Istio Instance
https://istio.io/docs/reference/config/policy-and-telemetry/istio.policy.v1beta1/#Instance
```yaml
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: myapp-allowed-istio-applabel
  ...
spec:
  compiledTemplate: listentry
  params:
    value: source.namespace + "/" + source.labels["app"]
```
Istio Handler
https://istio.io/docs/reference/config/policy-and-telemetry/istio.policy.v1beta1/#Handler
```yaml
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: myapp--orders-api
  ...
spec:
  compiledAdapter: listchecker
  params:
    blacklist: false
    cachingInterval: 10s
    overrides:
    - my-namespace/orders-app
    - etl/orders-importer
```
Istio Rule
https://istio.io/docs/reference/config/policy-and-telemetry/istio.policy.v1beta1/#Rule
```yaml
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  ...
spec:
  actions:
  - handler: myapp--orders-api
    instances:
    - myapp-allowed-istio-applabel
  match: destination.labels["app"] == "orders-api"
```
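Conceptually, the instance, handler, and rule above combine into a single membership check. A minimal Python sketch of that logic (hypothetical function, for illustration only):

```python
# Mirrors the example resources above: the handler's overrides list,
# the instance's listentry expression, and the rule's match condition.
ALLOWED = {"my-namespace/orders-app", "etl/orders-importer"}

def is_allowed(source_namespace, source_app, destination_app):
    """Return True if the listchecker would let the request through."""
    if destination_app != "orders-api":       # rule: match expression
        return True                           # rule does not apply
    key = f"{source_namespace}/{source_app}"  # instance: listentry value
    return key in ALLOWED                     # handler: whitelist membership
```

For example, `is_allowed("etl", "orders-importer", "orders-api")` passes, while a pod labeled `app=intruder` in the same namespace would be rejected.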
Kubernetes NetworkPolicy
https://kubernetes.io/docs/concepts/services-networking/network-policies/
```yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  ...
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: orders-app
    - podSelector:
        matchLabels:
          app: orders-importer
  podSelector:
    matchLabels:
      app: orders-api
  policyTypes:
  - Ingress
```
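The `matchLabels` selectors above use standard kubernetes label-selector semantics: every listed key/value pair must be present on the pod. A quick sketch (hypothetical helper name):

```python
def selector_matches(match_labels, pod_labels):
    """True when every key/value pair in matchLabels appears on the pod.
    An empty selector ({}) matches all pods, as in the deny-all policy."""
    return all(pod_labels.get(key) == value for key, value in match_labels.items())
```

Extra labels on the pod (e.g. `version: v1`) do not affect the match.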
The `.auth.jwt` field can be used to configure JWT ingress authentication. This functionality is provided by istio policies. Istio validates tokens before they reach your application, allowing your application to trust the token claims. Currently the `firebase` and `google` JWT issuers are supported.
Validates JWTs signed by firebase for a given project (required).
```yaml
spec:
  auth:
    jwt:
      type: firebase
      params:
        project: my-firebase-project
  ...
```
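The issuer istio will expect follows directly from the `project` parameter (the pattern can be seen in the generated policy's `issuer` field); a one-line sketch (hypothetical helper name):

```python
def firebase_issuer(project):
    # Firebase-signed tokens carry an "iss" claim of this form.
    return "https://securetoken.google.com/" + project
```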
Validates JWTs signed by Google (does not require a project id).
```yaml
spec:
  auth:
    jwt:
      type: google
  ...
```
Istio Policy (firebase)
https://istio.io/docs/reference/config/istio.authentication.v1alpha1/#Policy
```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  ...
spec:
  origins:
  - jwt:
      issuer: https://securetoken.google.com/my-firebase-project
      jwksUri: https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com
  peers:
  - mtls: {}
  principalBinding: USE_ORIGIN
  targets:
  - name: tasks-api
```
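To see which claims a policy like this checks, it can help to decode a token's payload locally. Note this does NOT verify the signature; istio performs the real verification against `jwksUri`. A sketch (hypothetical helper name):

```python
import base64
import json

def unverified_claims(token):
    """Decode a JWT's payload WITHOUT verifying it, to inspect claims
    such as "iss" (which must match the policy's issuer)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```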
The `.allowedEgress` field can be used to control the outbound connections that services can make. Many types of attacks can be avoided by applying the principle of least privilege to outbound access. This functionality is provided by istio service entries.

The following types are currently supported (ports are to-be configurable):

| type  | ports          |
|-------|----------------|
| http  | 80             |
| https | 443            |
| kafka | 9092,8082,2181 |
```yaml
spec:
  allowedEgress:
  - type: https
    hosts:
    - api.github.com
    - "*.googleapis.com"
  ...
```
Istio ServiceEntry
https://istio.io/docs/reference/config/networking/v1alpha3/service-entry/
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  ...
spec:
  hosts:
  - api.github.com
  - "*.googleapis.com"
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: NONE
  exportTo:
  - "."
```
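How a given outbound request maps onto this configuration can be sketched as a port lookup plus wildcard host matching. Here `fnmatch` only approximates istio's `*.` wildcard semantics, and the helper names are hypothetical:

```python
import fnmatch

# Ports per egress type, from the table above.
EGRESS_PORTS = {"http": [80], "https": [443], "kafka": [9092, 8082, 2181]}

def egress_allowed(host, allowed_hosts):
    """True if the host matches any allowedEgress host pattern."""
    return any(fnmatch.fnmatch(host, pattern) for pattern in allowed_hosts)
```

With the example hosts, `pubsub.googleapis.com` is allowed while `evil.example.com` is not.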
The `.auth.gcpAccess` field can be used to inject GCP access tokens into containers. This is accomplished via a mutating webhook that adds init and sidecar containers to pods on creation. Two mechanisms are supported: native kubernetes secrets and vault-issued secrets.

Native kubernetes secrets can be used to store access tokens instead of vault.
```yaml
spec:
  auth:
    gcpAccess:
      accessType: secret
      secretInfo:
        name: my-secret-name # This secret must exist in the appconfigmgrv2-system namespace.
  ...
```
In this case, an admin uploads a service account `key.json` file as a kubernetes secret in the `appconfigmgrv2-system` namespace and points to it via the `.secretInfo.name` field (above). The controller then copies this secret into the app config namespace and injects it into pods in that namespace.
```yaml
apiVersion: v1
kind: Secret
type: Opaque
data:
  key.json: ...
metadata:
  name: my-secret-name
  namespace: appconfigmgrv2-system
```
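The values under `data` are base64-encoded. A sketch of producing the `key.json` value for the manifest above (hypothetical helper name):

```python
import base64

def secret_data_value(raw_bytes):
    """Encode raw file contents the way kubernetes Secret .data expects."""
    return base64.b64encode(raw_bytes).decode()
```

In practice, `kubectl create secret generic ... --from-file=key.json` does this encoding for you.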
Vault can be configured to dynamically provision GCP access tokens.
```yaml
spec:
  auth:
    gcpAccess:
      accessType: vault
      vaultInfo:
        serviceAccount: my-service-account
        path: my-path
        roleset: my-roleset
  ...
```
We leverage vault, its kubernetes auth plugin, and its gcp secrets plugin to retrieve GCP service account keys. Fair warning: this is a complex setup.

The details are below, but the key linkage between the CRD and vault is:
- The `serviceAccount` in `vaultInfo`, together with the namespace from the CRD resource, will access vault, so it needs permission to the role/key path (in `vaultInfo.roleset`).
- The `path` will be the gcp plugin path defined in `vaultInfo`, appended with `-${PROJECT_ID}`, so app configs can be reused in other projects.
- Keys are requested based on the roleset defined in `vaultInfo`.
- Configuration for the kubernetes plugin: define a path that will be associated with each cluster (e.g. named after `kubectl config current-context`):

  ```shell
  vault auth enable --path="${VAULT_PREFIX}" kubernetes
  ```

- Configuration for the gcp plugin: define a path that will be associated with each project:

  ```shell
  vault secrets enable --path="${GCP_VAULT_PREFIX}" gcp
  ```
- Set the vault address and CA certificate, and provide them to the controller:

  ```shell
  export VAULT_ADDR="https://35.245.198.47:8200"
  export VAULT_CACERT="/root/.ssh/ca.pem"

  kubectl create secret generic vault-ca \
    --namespace=appconfigmgrv2-system \
    --from-file=${VAULT_CACERT}

  kubectl create configmap vault \
    --namespace=appconfigmgrv2-system \
    --from-literal vault-addr=${VAULT_ADDR} \
    --from-literal vault-cluster-path=<kubernetes path> \
    --from-literal gcp-vault-path=<gcp path>
  ```
- A service account for vault itself will be used to generate the keys requested by the pods (via the vault plugin). At minimum it needs roles to manage the lifecycle of service accounts and keys:
  - `--role roles/iam.serviceAccountAdmin`
  - `--role roles/iam.serviceAccountKeyAdmin`
  - any additional role that enables the service account to grant the roles needed for the other google APIs (in our pubsub example, we add `--role roles/pubsub.admin`)
- Assign its key to the config for the gcp plugin:

  ```shell
  vault write ${GCP_VAULT_PREFIX}/config project=${PROJECT_NAME} \
      ttl=3600 \
      max_ttl=7200 \
      credentials=@${VAULT_SA_KEY_PATH}
  ```
- Assign access to validate kubernetes service accounts (KSA):

  ```shell
  vault write auth/${VAULT_PREFIX}/config \
      token_reviewer_jwt="${VAULT_SA_JWT_TOKEN}" \
      kubernetes_host="${VAULT_REVIEWER_CLIENT_API_SERVER}" \
      kubernetes_ca_cert="${VAULT_SA_CA_CRT}"
  ```
- Create HCL files: one with the IAM bindings to assign to the requested GSA, and one with the policy granting the KSA read access to the key path:

  ```shell
  cat > ${VAULT_ROLE_NAME}-gcp.hcl <<EOF
  resource "projects/${PROJECT_NAME}/topics/appconfigcrd-demo-topic1" {
    roles = [
      "roles/pubsub.publisher",
    ]
  }
  resource "projects/${PROJECT_NAME}/topics/appconfigcrd-demo-topic2" {
    roles = [
      "roles/pubsub.publisher",
    ]
  }
  EOF

  cat > ${VAULT_ROLE_NAME}-policy.hcl <<EOF
  path "${GCP_VAULT_PREFIX}/key/${VAULT_ROLE_NAME}" {
    capabilities = ["read"]
  }
  EOF
  ```
- Define the roleset from which service account keys will be generated:

  ```shell
  vault write ${GCP_VAULT_PREFIX}/roleset/${VAULT_ROLE_NAME} \
      project="${PROJECT_NAME}" \
      secret_type="service_account_key" \
      bindings=@${VAULT_ROLE_NAME}-gcp.hcl
  ```
- Allow access from the KSA (this is done as needed):

  ```shell
  vault write auth/${VAULT_PREFIX}/role/${VAULT_ROLE_NAME} \
      bound_service_account_names="${VAULT_KSA}" \
      bound_service_account_namespaces="${VAULT_NS}" \
      policies=${VAULT_ROLE_NAME} \
      ttl=3600
  ```
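Two path conventions recur in the steps above and are worth making explicit; a small Python sketch (hypothetical helper names):

```python
def gcp_plugin_path(vault_info_path, project_id):
    """The gcp plugin path: .vaultInfo.path with "-${PROJECT_ID}" appended,
    so the same app config can be reused across projects."""
    return f"{vault_info_path}-{project_id}"

def key_read_path(gcp_vault_prefix, roleset):
    """The path pods read keys from, matching the policy HCL's
    "key/" read capability."""
    return f"{gcp_vault_prefix}/key/{roleset}"
```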
The `.services[].serviceAccount` field can be used to assign a service account to pods on creation. The controller enforces service account assignment via a mutating webhook that acts at the pod level. This means that deployments that specify their own service account will be overridden.

Service account assignment, combined with GCP workload identity, can be used to give pods access to GCP resources. This is done by assigning `.services[].serviceAccount` the same service account referenced in the GCP workload identity docs as `[KSA_NAME]`.
This example shows how an application running on GKE can be given access to a GCP pubsub topic.
Create a GKE cluster with workload identity enabled:
```shell
gcloud beta container clusters create ${CLUSTER_NAME} \
  --cluster-version=1.12 \
  --identity-namespace=${PROJECT_ID}.svc.id.goog
```
Create a pubsub topic and a corresponding service account that can publish to it:
```shell
# Define the following env variables:
export GSA_NAME=my-google-service-account
export K8S_NAMESPACE=my-namespace
export KSA_NAME=my-k8s-service-account
export PROJECT_ID=my-gcp-project
export TOPIC=my-pubsub-topic
export GSA_FULLNAME=${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com

gcloud pubsub topics create $TOPIC
gcloud iam service-accounts create $GSA_NAME

# Associate the service account with GCP workload identity.
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${PROJECT_ID}.svc.id.goog[${K8S_NAMESPACE}/${KSA_NAME}]" \
  $GSA_FULLNAME

# Allow the service account to publish to our topic.
gcloud beta pubsub topics add-iam-policy-binding $TOPIC \
  --member="serviceAccount:${GSA_FULLNAME}" \
  --role=roles/pubsub.publisher

# Create the corresponding service account in kubernetes.
kubectl create serviceaccount \
  --namespace $K8S_NAMESPACE \
  $KSA_NAME

# Connect the GCP service account to the k8s service account.
kubectl annotate serviceaccount \
  --namespace $K8S_NAMESPACE \
  $KSA_NAME \
  iam.gke.io/gcp-service-account=${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
```
Corresponding app config:
```yaml
apiVersion: appconfigmgr.cft.dev/v1alpha1
kind: AppEnvConfigTemplateV2
metadata:
  name: my-app-config
  namespace: my-namespace
spec:
  services:
  - name: my-pubsub-app
    serviceAccount: my-k8s-service-account
  ...
```
Corresponding deployment:
```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-pubsub-deployment
  namespace: my-namespace
  labels:
    app: my-pubsub-app
    version: v1.0.0
  annotations:
    appconfigmgr.cft.dev/application: "workload-identity"
spec:
  template:
    metadata:
      labels:
        app: my-pubsub-app
        version: v1.0.0
    ...
```
The `.services[].ingress` field can be used to allow L7 (http/https) traffic to a service from outside the cluster. Path- and host-based rules can be used. This is currently implemented through the creation of an ingress resource, which requires that an ingress controller is installed (the default on GKE). If at least one service has a `.services[].ingress` field defined, an ingress resource will be created.

An empty ingress config is a catch-all.
```yaml
apiVersion: appconfigmgr.cft.dev/v1alpha1
kind: AppEnvConfigTemplateV2
spec:
  services:
  - name: service-a
    ingress: {}
  ...
```
Host and/or path rules can be added. Note: `.path` is a regex.
```yaml
apiVersion: appconfigmgr.cft.dev/v1alpha1
kind: AppEnvConfigTemplateV2
spec:
  services:
  - name: service-a
    ingress:
      path: "/service-a/.*"
      host: example.com
  ...
```
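Since `.path` is a regex, it is worth checking which request paths a rule like `/service-a/.*` actually covers. Exact matching behavior depends on the ingress controller; this sketch assumes a full-match interpretation:

```python
import re

# The example rule above.
path_rule = re.compile(r"/service-a/.*")

def path_matches(request_path):
    """True if the ingress path regex covers the whole request path."""
    return path_rule.fullmatch(request_path) is not None
```

`/service-a/orders` matches, while `/service-b/orders` does not.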
Certificates can be added via the top-level `.ingress` field in the spec. The certificate is referenced via a kubernetes secret that must be in the same namespace as the app config. If no tls certificates are specified, the service will be available via `http` only.
```shell
# Create a cert for example.com.
openssl genrsa -out ingress.key 2048
openssl req -new -key ingress.key -out ingress.csr \
  -subj "/CN=example.com"
openssl x509 -req -days 365 -in ingress.csr -signkey ingress.key \
  -out ingress.crt

# Create a kubernetes secret from the cert.
kubectl create secret tls my-example-com-cert --namespace my-namespace \
  --cert ingress.crt --key ingress.key
```
```yaml
apiVersion: appconfigmgr.cft.dev/v1alpha1
kind: AppEnvConfigTemplateV2
metadata:
  name: my-app-config
  namespace: my-namespace
spec:
  ingress:
    tls:
      certSecrets:
      - my-example-com-cert
  services:
  - name: service-a
    ingress:
      path: "/service-a/.*"
      host: example.com
  ...
```
To call the ingress we must first get its IP address:

```shell
kubectl get ingress -n my-namespace
```

Then we can curl the endpoint (`-k` skips certificate verification, since the example cert is self-signed):

```shell
curl https://$MY_IP/service-a/ -k -H 'Host: example.com'
```