AWS - EKS Post Exploitation

Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!

Other ways to support HackTricks:

EKS

For more information check:

AWS - EKS Enum

Enumerate the cluster from the AWS Console

If you have the permission eks:AccessKubernetesApi you can view Kubernetes objects via AWS EKS console (Learn more).
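A quick way to check whether a principal has that permission is IAM policy simulation (the ARN is a hypothetical placeholder):

```bash
# Check if the principal is allowed eks:AccessKubernetesApi
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::<acc-id>:user/<user> \
  --action-names eks:AccessKubernetesApi
```

Look for `"EvalDecision": "allowed"` in the output.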

Connect to AWS Kubernetes Cluster

  • Easy way:

# Generate kubeconfig
aws eks update-kubeconfig --name aws-eks-dev
  • Not that easy way:

If you can get a token with aws eks get-token --cluster-name <cluster_name> but you don't have permissions to get cluster info (describeCluster), you could prepare your own ~/.kube/config. However, besides the token, you still need the URL endpoint to connect to and the name of the cluster.
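A sketch of that manual approach, once the endpoint and CA have been recovered separately (cluster name, endpoint and CA file are assumptions here):

```bash
# The token is minted locally from your AWS creds (presigned STS call)
TOKEN=$(aws eks get-token --cluster-name <cluster-name> \
  --query 'status.token' --output text)

# Talk to the API server directly, without any kubeconfig
kubectl --server "$API_SERVER_URL" --certificate-authority ./ca.crt \
  --token "$TOKEN" get pods -A
```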

In my case, I didn't find the info in CloudWatch logs, but I found it in the LaunchTemplates userData and also in the userData of the EC2 machines. You can see this info in userData easily, for example in the next example (the cluster name was cluster-name):

API_SERVER_URL=https://6253F6CA47F81264D8E16FAA7A103A0D.gr7.us-east-1.eks.amazonaws.com

/etc/eks/bootstrap.sh cluster-name --kubelet-extra-args '--node-labels=eks.amazonaws.com/sourceLaunchTemplateVersion=1,alpha.eksctl.io/cluster-name=cluster-name,alpha.eksctl.io/nodegroup-name=prd-ondemand-us-west-2b,role=worker,eks.amazonaws.com/nodegroup-image=ami-002539dd2c532d0a5,eks.amazonaws.com/capacityType=ON_DEMAND,eks.amazonaws.com/nodegroup=prd-ondemand-us-west-2b,type=ondemand,eks.amazonaws.com/sourceLaunchTemplateId=lt-0f0f0ba62bef782e5 --max-pods=58' --b64-cluster-ca $B64_CLUSTER_CA --apiserver-endpoint $API_SERVER_URL --dns-cluster-ip $K8S_CLUSTER_DNS_IP --use-max-pods false
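If you can read instance or launch-template userData, decoding it and grepping for the bootstrap line recovers both the endpoint and the cluster name. A minimal sketch (the instance id placeholder and the sample payload are hypothetical):

```bash
# Dump userData from an instance (it comes back base64-encoded):
# aws ec2 describe-instance-attribute --instance-id <i-...> \
#   --attribute userData --query 'UserData.Value' --output text > userdata.b64

# Local demo with a sample payload:
printf 'API_SERVER_URL=https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com\n/etc/eks/bootstrap.sh cluster-name ...\n' > userdata.txt
base64 < userdata.txt > userdata.b64

# Decode and pull out the interesting lines
base64 -d userdata.b64 | grep -E 'API_SERVER_URL|bootstrap\.sh'
```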
kube config

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1USXlPREUyTWpjek1Wb1hEVE15TVRJeU5URTJNamN6TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDlXCk9OS0ZqeXZoRUxDZGhMNnFwWkMwa1d0UURSRVF1UzVpRDcwK2pjbjFKWXZ4a3FsV1ZpbmtwOUt5N2x2ME5mUW8KYkNqREFLQWZmMEtlNlFUWVVvOC9jQXJ4K0RzWVlKV3dzcEZGbWlsY1lFWFZHMG5RV1VoMVQ3VWhOanc0MllMRQpkcVpzTGg4OTlzTXRLT1JtVE5sN1V6a05pTlUzSytueTZSRysvVzZmbFNYYnRiT2kwcXJSeFVpcDhMdWl4WGRVCnk4QTg3VjRjbllsMXo2MUt3NllIV3hhSm11eWI5enRtbCtBRHQ5RVhOUXhDMExrdWcxSDBqdTl1MDlkU09YYlkKMHJxY2lINjYvSTh0MjlPZ3JwNkY0dit5eUNJUjZFQURRaktHTFVEWUlVSkZ4WXA0Y1pGcVA1aVJteGJ5Nkh3UwpDSE52TWNJZFZRRUNQMlg5R2c4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZQVXFsekhWZmlDd0xqalhPRmJJUUc3L0VxZ1hNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS1o4c0l4aXpsemx0aXRPcGcySgpYV0VUSThoeWxYNWx6cW1mV0dpZkdFVVduUDU3UEVtWW55eWJHbnZ5RlVDbnczTldMRTNrbEVMQVE4d0tLSG8rCnBZdXAzQlNYamdiWFovdWVJc2RhWlNucmVqNU1USlJ3SVFod250ZUtpU0J4MWFRVU01ZGdZc2c4SlpJY3I2WC8KRG5POGlHOGxmMXVxend1dUdHSHM2R1lNR0Mvd1V0czVvcm1GS291SmtSUWhBZElMVkNuaStYNCtmcHUzT21UNwprS3VmR0tyRVlKT09VL1c2YTB3OTRycU9iSS9Mem1GSWxJQnVNcXZWVDBwOGtlcTc1eklpdGNzaUJmYVVidng3Ci9sMGhvS1RqM0IrOGlwbktIWW4wNGZ1R2F2YVJRbEhWcldDVlZ4c3ZyYWpxOUdJNWJUUlJ6TnpTbzFlcTVZNisKRzVBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://6253F6CA47F81264D8E16FAA7A103A0D.gr7.us-west-2.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
    user: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
  name: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
current-context: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:<acc-id>:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-west-2
      - --profile
      - <profile>
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      command: aws
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false

From AWS to Kubernetes

The creator of the EKS cluster is ALWAYS going to be able to get into the Kubernetes cluster as part of the group system:masters (k8s admin). At the time of this writing there is no direct way to find who created the cluster (you can check CloudTrail). And there is no way to remove that privilege.

The way to grant access over K8s to more AWS IAM users or roles is using the configmap aws-auth.

Therefore, anyone with write access over the configmap aws-auth will be able to compromise the whole cluster.

For more information about how to grant extra privileges to IAM roles & users in the same or different account and how to abuse this to privesc check this page.
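As a sketch, an entry like the following in the aws-auth configmap (kube-system namespace) is what makes an IAM principal a cluster admin; the ARN and username here are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::<acc-id>:user/attacker   # hypothetical principal
      username: attacker
      groups:
        - system:masters   # k8s admin
```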

Check also this awesome post to learn how the authentication IAM -> Kubernetes works.

From Kubernetes to AWS

It's possible to allow OpenID authentication for Kubernetes service accounts to allow them to assume roles in AWS. Learn how this works in this page.
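In practice this is wired up with an annotation on the service account pointing at an IAM role trusting the cluster's OIDC provider; a minimal sketch (names and ARN are assumptions):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa            # hypothetical service account
  namespace: default
  annotations:
    # Role that the pod's projected token can assume via the cluster OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::<acc-id>:role/<role-name>
```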

Bypass CloudTrail

If an attacker obtains credentials of an AWS principal with permissions over an EKS cluster and configures their own kubeconfig (without calling update-kubeconfig) as explained previously, get-token doesn't generate logs in CloudTrail because it doesn't interact with the AWS API (it just creates the token locally).

So when the attacker talks to the EKS cluster, CloudTrail won't log anything related to the stolen user accessing it.

Note that the EKS cluster might have control-plane logs enabled that will record this access (although, by default, they are disabled).
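You can check which control-plane log types (api, audit, authenticator...) are enabled for a cluster, assuming you have describeCluster permissions:

```bash
# Show the logging configuration of the cluster
aws eks describe-cluster --name <cluster-name> \
  --query 'cluster.logging.clusterLogging' --output json
```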

EKS Ransom?

By default, the user or role that created a cluster is ALWAYS going to have admin privileges over the cluster. And that's the only "secure" access AWS will have over the Kubernetes cluster.

So, if an attacker compromises a cluster that uses Fargate, removes all the other admins, and deletes the AWS user/role that created the cluster, the attacker could have ransomed the cluster.

Note that if the cluster was using EC2 VMs, it could be possible to get admin privileges from the node and recover the cluster.

Actually, if the cluster is using Fargate, you could add EC2 nodes or move everything to EC2 nodes in the cluster and recover it by accessing the tokens in the node.
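The reason the node path works is that the node's instance profile role is mapped in aws-auth (mapRoles), so its credentials can authenticate to the API server. A hedged sketch from inside an EC2 node, assuming IMDSv1-style access is available:

```bash
# On the node: grab the instance profile credentials from the metadata service
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE"

# Export those credentials, then mint a cluster token as the node role
# (which is mapped in aws-auth):
# aws eks get-token --cluster-name <cluster-name>
```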
