Kubernetes Basics
Learn & practice AWS Hacking:HackTricks Training AWS Red Team Expert (ARTE) Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
The original author of this page is Jorge (read his original post here)
Allows running containers in a container engine.
Schedules containers so resources are used efficiently.
Keeps containers alive (restarting them if they fail).
Allows communication between containers.
Supports deployment techniques (rolling updates, rollbacks...).
Handles volumes of information.
Node: a machine (physical or virtual) running one or more pods.
Pod: Wrapper around one or more containers. A pod should only contain one application (so usually, a pod runs just 1 container). The pod is the abstraction Kubernetes uses over the underlying container technology.
Service: Each pod has 1 internal IP address from the internal range of the node. However, it can also be exposed via a service. The service also has an IP address and its goal is to maintain the communication between pods, so if one dies the new replacement (with a different internal IP) will still be accessible at the same IP of the service. It can be configured as internal or external. The service also acts as a load balancer when 2 or more pods are connected to the same service.
When a service is created you can find the endpoints of each service running kubectl get endpoints
Kubelet: Primary node agent. The component that communicates with the API server and runs the pods scheduled to its node. The kubelet doesn’t manage containers that were not created by Kubernetes.
Kube-proxy: the service in charge of the network communication for Services: it maintains the network rules (iptables by default) on each node so traffic to a service IP reaches the right pods. More experienced users could install other kube-proxies from other vendors.
Sidecar container: a container that runs along with the main container in the pod. This sidecar pattern extends and enhances the functionality of the main container without changing it. Remember that we use container technology to wrap all the dependencies so the application can run anywhere; a container does only one thing and does that thing very well.
Master process:
API Server: the way users and pods communicate with the master process. Only authenticated requests should be allowed.
Scheduler: Scheduling refers to making sure that Pods are matched to Nodes so that Kubelet can run them. It has enough intelligence to decide which node has more available resources and assigns the new pod to it. Note that the scheduler doesn't start new pods, it just communicates with the kubelet process running inside the node, which will launch the new pod.
Kube Controller manager: It checks resources like replica sets or deployments to verify that, for example, the correct number of pods or nodes is running. In case a pod is missing, it will communicate with the scheduler to start a new one. It controls replication, tokens, and account services for the API.
etcd: Data storage, persistent, consistent, and distributed. It's Kubernetes' database and the key-value store where it keeps the complete state of the cluster (each change is logged here). Components like the Scheduler or the Controller manager depend on this data to know which changes have occurred (available resources of the nodes, number of pods running...)
Cloud controller manager: the controller specific to cloud providers, e.g. if your clusters run in AWS or OpenStack, it manages the cloud-specific control loops.
Note that as there might be several nodes (running several pods), there might also be several master processes, with their access to the API server load balanced and their etcd synchronized.
Volumes:
When a pod creates data that shouldn't be lost when the pod disappears, it should be stored in a physical volume. Kubernetes allows attaching a volume to a pod to persist the data. The volume can be on the local machine or on remote storage. If you are running pods on different physical nodes, you should use remote storage so all the pods can access it.
Other configurations:
ConfigMap: You can configure URLs to access services. The pod will obtain data from here to know how to communicate with the rest of the services (pods). Note that this is not the recommended place to save credentials!
Secret: This is the place to store secret data like passwords, API keys... encoded in B64. The pod will be able to access this data to use the required credentials.
Deployments: This is where the components to be run by kubernetes are indicated. A user usually won't work directly with pods, pods are abstracted in ReplicaSets (number of same pods replicated), which are run via deployments. Note that deployments are for stateless applications. The minimum configuration for a deployment is the name and the image to run.
StatefulSet: This component is meant specifically for stateful applications like databases, which need stable identities and access to the same storage.
Ingress: This is the configuration used to expose the application publicly via a URL. Note that this can also be done using external services, but this is the correct way to expose the application.
If you implement an Ingress you will need to create Ingress Controllers. The Ingress Controller is a pod that will be the endpoint receiving the requests, and it will load balance them to the services based on the configured ingress rules. Note that the ingress rules can point different paths or even subdomains to different internal Kubernetes services.
A better security practice would be to use a cloud load balancer or a proxy server as the entry point, so no part of the Kubernetes cluster is exposed directly.
When a request that doesn't match any ingress rule is received, the ingress controller will direct it to the "Default backend". You can describe the ingress controller to get the address of this parameter.
minikube addons enable ingress
CA is the trusted root for all certificates inside the cluster.
Allows components to authenticate each other.
All cluster certificates are signed by the CA.
etcd has its own certificate.
types:
apiserver cert.
kubelet cert.
scheduler cert.
Minikube can be used to perform quick tests on Kubernetes without needing to deploy a whole Kubernetes environment. It runs the master and node processes on a single machine. Minikube uses VirtualBox to run the node. See here how to install it.
Kubectl
is the command line tool for Kubernetes clusters. It communicates with the API server of the master process to perform actions in Kubernetes or to ask for data.
The dashboard makes it easier to see what minikube is running; you can find the URL to access it with:
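A sketch of the usual command (assuming a standard minikube install):

```shell
minikube dashboard --url
```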
Each configuration file has 3 parts: metadata, specification (what needs to be launched), and status (the actual state, automatically filled in by Kubernetes). Inside the specification of the deployment configuration file you can find the template defined with a new configuration structure defining the image to run:
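A minimal sketch of that structure (the name and image are illustrative, not from any original file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:                  # metadata part
  name: nginx-deployment
  labels:
    app: nginx
spec:                      # specification part
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:                # pod template: its own metadata + spec
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25  # the image to run
```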
Example of Deployment + Service declared in the same configuration file (from here)
As a service usually is related to one deployment, it's possible to declare both in the same configuration file (the service declared in this config is only accessible internally):
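A hedged reconstruction (names and image are illustrative): the two objects are separated with `---`, and the service's selector matches the pod labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp        # matches the pod labels above
  ports:
  - protocol: TCP
    port: 80           # service port
    targetPort: 80     # container port
```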
Example of external service config
This service will be accessible externally (check the nodePort and type: LoadBalancer attributes):
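An illustrative sketch of such a service (names are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-external
spec:
  type: LoadBalancer   # asks the cloud provider for an external IP
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30008    # externally reachable port on each node (30000-32767)
```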
This is useful for testing but for production you should have only internal services and an Ingress to expose the application.
Example of Ingress config file
This will expose the application at http://dashboard.com.
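A sketch of such an Ingress (the backend service name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
spec:
  rules:
  - host: dashboard.com          # the URL used to reach the app
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service # internal service to forward to
            port:
              number: 80
```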
Example of secrets config file
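A sketch using the key-value pairs described later in this page:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:                            # values must be base64-encoded
  username: YWRtaW4=             # "admin"
  password: MWYyZDFlMmU2N2Rm     # "1f2d1e2e67df"
```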
Note how the passwords are encoded in B64 (which isn't secure!)
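Base64 is an encoding, not encryption; anyone with access to the manifest can decode the values:

```shell
# encode
echo -n 'admin' | base64     # -> YWRtaW4=
# decode
echo 'YWRtaW4=' | base64 -d  # -> admin
```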
Example of ConfigMap
A ConfigMap is the configuration that is given to the pods so they know how to locate and access other services. In this case, each pod will know that the name mongodb-service is the address of a pod that they can communicate with (this pod will be executing a mongodb):
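A sketch of such a ConfigMap (the ConfigMap name and key are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service  # the service name, resolved via cluster DNS
```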
Then, inside a deployment config this address can be specified in the following way so it's loaded inside the env of the pod:
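An illustrative snippet for the container spec inside the deployment's pod template (the env var, ConfigMap name, and key are assumptions matching the sketch above):

```yaml
        env:
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap  # hypothetical ConfigMap name
              key: database_url
```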
Example of volume config
You can find different examples of storage configuration yaml files in https://gitlab.com/nanuchi/youtube-tutorial-series/-/tree/master/kubernetes-volumes. Note that volumes aren't inside namespaces.
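A minimal sketch of a claim plus a pod that mounts it (names, sizes and image are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-volume
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - mountPath: /data   # where the volume appears inside the container
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
```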
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. They are intended for use in environments with many users spread across multiple teams or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. You should only start using namespaces to have better control and organization of each part of the application deployed in Kubernetes.
Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.
There are 4 namespaces by default if you are using minikube:
kube-system: Not meant for user use and you shouldn't touch it. It's for master and kubectl processes.
kube-public: Publicly accessible data. It contains a configmap with cluster information.
kube-node-lease: Determines the availability of a node.
default: The namespace the user will use to create resources
Note that most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespace. However, other resources like namespaces themselves and low-level resources, such as nodes and persistentVolumes, are not in any namespace. To see which Kubernetes resources are and aren’t in a namespace:
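The commands from the official docs:

```shell
# In a namespace
kubectl api-resources --namespaced=true
# Not in a namespace
kubectl api-resources --namespaced=false
```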
You can save the namespace for all subsequent kubectl commands in that context.
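For example (the namespace name is illustrative):

```shell
kubectl config set-context --current --namespace=my-namespace
# Validate it
kubectl config view --minify | grep namespace:
```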
Helm is the package manager for Kubernetes. It allows packaging YAML files and distributing them in public and private repositories. These packages are called Helm Charts.
Helm is also a template engine that allows to generate config files with variables:
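A sketch of the idea (file names and values are illustrative): a chart template references variables that Helm fills in from values.yaml:

```yaml
# templates/service.yaml - Helm fills {{ .Values.* }} from values.yaml
# (values.yaml here would contain: name: webapp / port: 80)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}
spec:
  selector:
    app: {{ .Values.name }}
  ports:
  - port: {{ .Values.port }}
```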
A Secret is an object that contains sensitive data such as a password, a token or a key. Such information might otherwise be put in a Pod specification or in an image. Users can create Secrets and the system also creates Secrets. The name of a Secret object must be a valid DNS subdomain name. Read here the official documentation.
Secrets might be things like:
API keys, SSH keys.
OAuth tokens.
Credentials, passwords (plain text or b64 + encryption).
Information or comments.
Database connection strings…
There are different types of secrets in Kubernetes
| Builtin Type | Usage |
| --- | --- |
| Opaque | arbitrary user-defined data (default) |
| kubernetes.io/service-account-token | service account token |
| kubernetes.io/dockercfg | serialized ~/.dockercfg file |
| kubernetes.io/dockerconfigjson | serialized ~/.docker/config.json file |
| kubernetes.io/basic-auth | credentials for basic authentication |
| kubernetes.io/ssh-auth | credentials for SSH authentication |
| kubernetes.io/tls | data for a TLS client or server |
| bootstrap.kubernetes.io/token | bootstrap token data |
The Opaque type is the default one, the typical key-value pair defined by users.
How secrets work:
The following configuration file defines a secret called mysecret with 2 key-value pairs, username: YWRtaW4= and password: MWYyZDFlMmU2N2Rm. It also defines a pod called secretpod that will have the username and password defined in mysecret exposed in the environment variables SECRET_USERNAME and SECRET_PASSWORD. It will also mount the username secret inside mysecret in the path /etc/foo/my-group/my-username with 0640 permissions.
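A reconstruction of that file from the description above (only the container image is an assumption):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
---
apiVersion: v1
kind: Pod
metadata:
  name: secretpod
spec:
  containers:
  - name: secretpod
    image: nginx           # illustrative image
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
    volumeMounts:
    - name: foo
      mountPath: /etc/foo
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      items:
      - key: username
        path: my-group/my-username  # mounted at /etc/foo/my-group/my-username
        mode: 0640
```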
etcd is a consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data. Let’s access the secrets stored in etcd:
You will see certs, keys and URLs located in the file system. Once you get them, you will be able to connect to etcd.
Once you establish communication you will be able to get the secrets:
Adding encryption to the ETCD
By default all the secrets are stored in plain text inside etcd unless you apply an encryption layer. The following example is based on https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
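A sketch of the encryption config based on the linked docs (the key is a placeholder, generate your own with e.g. head -c 32 /dev/urandom | base64):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED 32-BYTE KEY>  # placeholder
      - identity: {}   # fallback provider, reads unencrypted data
```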
After that, you need to set the --encryption-provider-config flag on the kube-apiserver to point to the location of the created config file. You can modify /etc/kubernetes/manifest/kube-apiserver.yaml and add the following lines:
Scroll down in the volumeMounts:
Scroll down in the volumeMounts to hostPath:
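Illustrative additions to kube-apiserver.yaml (the mount path and file name are assumptions, adjust to where you saved the config):

```yaml
# under the container's command:
    - --encryption-provider-config=/etc/kubernetes/etcd/encryption.yaml
# under volumeMounts:
    - name: encryption-config
      mountPath: /etc/kubernetes/etcd
      readOnly: true
# under volumes (hostPath):
  - name: encryption-config
    hostPath:
      path: /etc/kubernetes/etcd
      type: DirectoryOrCreate
```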
Verifying that data is encrypted
Data is encrypted when written to etcd. After restarting your kube-apiserver, any newly created or updated secret should be encrypted when stored. To check, you can use the etcdctl command line program to retrieve the contents of your secret.
Create a new secret called secret1 in the default namespace:
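For example, as in the official docs (mykey/mydata match the values verified below):

```shell
kubectl create secret generic secret1 -n default --from-literal=mykey=mydata
```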
Using the etcdctl commandline, read that secret out of etcd:
ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C
where [...] must be the additional arguments for connecting to the etcd server.
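For a kubeadm-style cluster those arguments usually look like this (the paths are illustrative, adjust them to your setup):

```shell
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/secret1 | hexdump -C
```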
Verify the stored secret is prefixed with k8s:enc:aescbc:v1:, which indicates the aescbc provider has encrypted the resulting data.
Verify the secret is correctly decrypted when retrieved via the API:
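For example, as in the official docs:

```shell
kubectl get secret secret1 -n default -o yaml
```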
The output should match mykey: bXlkYXRh (mydata is base64-encoded); check decoding a secret to completely decode the secret.
Since secrets are encrypted on write, performing an update on a secret will encrypt that content:
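The official docs suggest re-writing all existing secrets so they get re-stored (and thus encrypted):

```shell
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
```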
Final tips:
Try not to keep secrets in the file system; get them from other places.
Check out https://www.vaultproject.io/ to add more protection to your secrets.