GCP - Storage Enum

Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!

Other ways to support HackTricks:


Google Cloud Platform (GCP) Storage is a cloud-based storage solution that provides highly durable and available object storage for unstructured data. It offers various storage classes based on performance, availability, and cost, including Standard, Nearline, Coldline, and Archive. GCP Storage also provides advanced features such as lifecycle policies, versioning, and access control to manage and secure data effectively.

A bucket can be stored in a single region, in a dual-region (2 regions), or multi-region (default).

Storage Types

  • Standard Storage: This is the default storage option that offers high-performance, low-latency access to frequently accessed data. It is suitable for a wide range of use cases, including serving website content, streaming media, and hosting data analytics pipelines.

  • Nearline Storage: This storage class offers lower storage costs and slightly higher access costs than Standard Storage. It is optimized for infrequently accessed data, with a minimum storage duration of 30 days. It is ideal for backup and archival purposes.

  • Coldline Storage: This storage class is optimized for long-term storage of infrequently accessed data, with a minimum storage duration of 90 days. It offers lower storage costs than Nearline Storage, but with higher access costs.

  • Archive Storage: This storage class is designed for cold data that is accessed very infrequently, with a minimum storage duration of 365 days. It offers the lowest storage costs of all GCP storage options but with the highest access costs. It is suitable for long-term retention of data that needs to be stored for compliance or regulatory reasons.

  • Autoclass: If you don't know how much you are going to access the data you can select Autoclass and GCP will automatically change the type of storage for you to minimize costs.
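The trade-off between the classes above can be summarized as "the colder the class, the longer the minimum storage duration". A minimal sketch of picking a class from an expected access interval, using the minimum durations from the descriptions above (the selection heuristic itself is illustrative, not GCP logic):

```python
# Minimum storage durations per class, in days (from the class descriptions above)
MIN_DURATION = {
    "STANDARD": 0,
    "NEARLINE": 30,
    "COLDLINE": 90,
    "ARCHIVE": 365,
}

def cheapest_class_for(days_between_accesses):
    """Pick the 'coldest' class whose minimum storage duration is still
    covered by the expected access interval (illustrative heuristic only)."""
    candidates = [c for c, d in MIN_DURATION.items() if d <= days_between_accesses]
    return max(candidates, key=lambda c: MIN_DURATION[c])

print(cheapest_class_for(45))
```

Autoclass essentially automates this kind of decision per object based on observed access patterns.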

Access Control

By default it's recommended to control access via IAM, but it's also possible to enable the use of ACLs. If you choose to use only IAM (the default) and 90 days pass, you won't be able to enable ACLs for the bucket anymore.
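Whether a bucket is in IAM-only (uniform) mode is visible in its metadata: the JSON API bucket resource (GET https://storage.googleapis.com/storage/v1/b/&lt;bucket-name&gt;) exposes an `iamConfiguration.uniformBucketLevelAccess` field. A minimal offline sketch of checking it (the sample dict mimics an API response and is illustrative):

```python
def acls_possible(bucket_metadata: dict) -> bool:
    """Return True if the bucket still allows object ACLs, i.e. uniform
    bucket-level access (IAM-only mode) is NOT enforced."""
    ubla = (bucket_metadata
            .get("iamConfiguration", {})
            .get("uniformBucketLevelAccess", {}))
    return not ubla.get("enabled", False)

# Illustrative metadata fragment shaped like a JSON API response
sample = {"iamConfiguration": {"uniformBucketLevelAccess": {"enabled": True}}}
print(acls_possible(sample))
```

If `enabled` is true, object ACLs are ignored and only IAM bindings matter for access.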


Versioning

It's possible to enable versioning, which keeps old versions of files inside the bucket. You can configure the number of noncurrent versions to keep and how long noncurrent (old) versions should live. The recommendation for the Standard class is 7 days.

The metadata of a noncurrent version is kept. Moreover, ACLs of noncurrent versions are also kept, so older versions might have different ACLs from the current version.

Learn more in the docs.

Retention Policy

A retention policy indicates how long deletion of objects inside the bucket is forbidden (very useful for compliance, at least). Only one of versioning and retention policy can be enabled at the same time.


Encryption

By default objects are encrypted using Google-managed keys, but you can also use a key from KMS.

Public Access

It's possible to give external users (logged in to GCP or not) access to bucket content. By default, when a bucket is created, the option to expose the bucket publicly is disabled, but with enough permissions this can be changed.

The format of a URL to access a bucket is https://storage.googleapis.com/<bucket-name> or https://<bucket_name>.storage.googleapis.com (both are valid).
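Those two URL forms are handy for unauthenticated probing; the status-code interpretation below is the usual heuristic (200 or a listing means public, 403 means the bucket exists but is protected, 404 means no such bucket). The `requests` call is commented out so the snippet stays offline; the bucket name is a placeholder:

```python
def bucket_urls(bucket_name):
    # Both URL styles from above resolve to the same bucket
    return [
        f"https://storage.googleapis.com/{bucket_name}",
        f"https://{bucket_name}.storage.googleapis.com",
    ]

def interpret(status_code):
    # Usual heuristic for unauthenticated probes
    return {200: "public (listable/readable)",
            403: "exists, but access denied",
            404: "bucket does not exist"}.get(status_code, "other")

# import requests
# for url in bucket_urls("target-bucket"):
#     print(url, interpret(requests.get(url).status_code))
print(bucket_urls("example-bucket")[0])
```

A 403 is still a finding: it confirms the bucket name exists, which feeds the brute-force approach shown later.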


HMAC Keys

An HMAC key is a type of credential that can be associated with a service account or a user account in Cloud Storage. You use an HMAC key to create signatures that are then included in requests to Cloud Storage. Signatures show that a given request is authorized by the user or service account.

HMAC keys have two primary pieces: an access ID and a secret.

  • Access ID: An alphanumeric string linked to a specific service or user account. When linked to a service account, the string is 61 characters long; when linked to a user account, it is 24 characters long.


  • Secret: A 40-character Base64-encoded string linked to a specific access ID. A secret is a preshared key that only you and Cloud Storage know. You use your secret to create signatures as part of the authentication process.


Both the access ID and secret uniquely identify an HMAC key, but the secret is much more sensitive information, because it's used to create signatures.
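To see why the secret is so sensitive, here is a minimal sketch of how a signature is computed with it, using the legacy S3-interop (V2-style) scheme the XML API has accepted for interoperability; modern tooling normally uses V4 (GOOG4-HMAC-SHA256) signing instead, and the Authorization scheme identifier in the comment is an assumption. The secret and resource are placeholders:

```python
import base64, hmac, hashlib

def v2_signature(secret, method, resource, date):
    """HMAC-SHA1 over the canonical string of the legacy (V2-style)
    XML API signing scheme: METHOD, Content-MD5, Content-Type, Date, resource."""
    string_to_sign = f"{method}\n\n\n{date}\n{resource}"
    digest = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

date = "Tue, 01 Jan 2030 00:00:00 GMT"  # fixed date -> deterministic signature
sig = v2_signature("0123456789abcdefghijklmnopqrstuvwxyzABCD",  # placeholder 40-char secret
                   "GET", "/bucket-name/object", date)
# The resulting header is typically: Authorization: <scheme> <access-id>:<sig>
# (scheme identifier, e.g. "AWS" vs "GOOG1", depends on endpoint/tooling - assumption)
print(sig)
```

Anyone holding a leaked secret can forge such signatures for the associated account, which is why HMAC key enumeration (shown below with gsutil hmac list) is worthwhile.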


Enumeration

# List all storage buckets in project
gsutil ls

# Get each bucket's configuration (protections, ACLs, times, configs...)
gsutil ls -L

# List contents of a specific bucket
gsutil ls gs://bucket-name/
gsutil ls -r gs://bucket-name/ # Recursive
gsutil ls -a gs://bucket-name/ # Get ALL versions of objects

# Cat the content of a file without copying it locally
gsutil cat 'gs://bucket-name/folder/object'
gsutil cat 'gs://bucket-name/folder/object#<num>' # cat specific version

# Copy an object from the bucket to your local storage for review
gsutil cp gs://bucket-name/folder/object ~/

# List using a raw OAuth token
## Useful because "CLOUDSDK_AUTH_ACCESS_TOKEN" and "gcloud config set auth/access_token_file" don't work with gsutil
curl -H "Authorization: Bearer $TOKEN" "https://storage.googleapis.com/storage/v1/b/<storage-name>/o"
# Download file content from bucket
curl -H "Authorization: Bearer $TOKEN" "https://storage.googleapis.com/storage/v1/b/<storage-name>/o/<object-name>?alt=media" --output -

# Enumerate HMAC keys
gsutil hmac list

# Get permissions
gcloud storage buckets get-iam-policy gs://bucket-name/
gcloud storage objects get-iam-policy gs://bucket-name/folder/object

If you get a permission denied error when listing buckets, you may still have access to their content. Now that you know the naming convention of buckets, you can generate a list of possible names and try to access them:

for i in $(cat wordlist.txt); do gsutil ls -r gs://"$i"; done
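Instead of a static wordlist you can generate candidates from keywords gathered during recon (company name, project ID...). The permutation patterns below are common naming conventions, not an exhaustive list, and the suffixes are illustrative:

```python
def candidate_buckets(keywords, suffixes=("backup", "dev", "prod", "storage", "logs")):
    """Generate likely bucket names from gathered keywords (illustrative patterns)."""
    names = set()
    for kw in keywords:
        names.add(kw)
        for s in suffixes:
            # Common separators: dash, underscore, none
            names.update({f"{kw}-{s}", f"{kw}_{s}", f"{kw}{s}"})
    return sorted(names)

for name in candidate_buckets(["acme"])[:5]:
    print(name)
```

Feed the output into the gsutil loop above, or into the unauthenticated URL probing described in the Public Access section.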

With the permissions storage.objects.list and storage.objects.get, you should be able to enumerate all folders and files in the bucket and then download them. You can list the object names with this Python script:

import requests
import xml.etree.ElementTree as ET

def list_bucket_objects(bucket_name, prefix='', marker=None):
    # List objects via the XML API (unauthenticated; add an auth header if needed)
    url = f"https://storage.googleapis.com/{bucket_name}?prefix={prefix}"
    if marker:
        url += f"&marker={marker}"
    response = requests.get(url)
    root = ET.fromstring(response.content)
    ns = {'ns': 'http://doc.s3.amazonaws.com/2006-03-01'}
    for contents in root.findall('.//ns:Contents', namespaces=ns):
        key = contents.find('ns:Key', namespaces=ns).text
        print(key)  # print each object name
    # Follow pagination until every object has been listed
    next_marker = root.find('ns:NextMarker', namespaces=ns)
    if next_marker is not None:
        list_bucket_objects(bucket_name, prefix, next_marker.text)

list_bucket_objects('bucket-name')
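Once you have object names, downloading goes through the same JSON API endpoint shown in the curl examples (?alt=media). Note that object names must be URL-encoded, including any slashes. A sketch of building the download URL (the requests call is commented out to stay offline; bucket and token are placeholders):

```python
from urllib.parse import quote

def object_download_url(bucket_name, object_name):
    # Object names are URL-encoded, including '/' (hence safe='')
    return (f"https://storage.googleapis.com/storage/v1/b/{bucket_name}"
            f"/o/{quote(object_name, safe='')}?alt=media")

# import requests
# r = requests.get(object_download_url("bucket-name", "folder/object"),
#                  headers={"Authorization": f"Bearer {token}"})
# open("loot", "wb").write(r.content)
print(object_download_url("bucket-name", "folder/flag.txt"))
```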


Privilege Escalation

In the following page you can check how to abuse storage permissions to escalate privileges:

pageGCP - Storage Privesc

Unauthenticated Enum

pageGCP - Storage Unauthenticated Enum

Post Exploitation

pageGCP - Storage Post Exploitation


Persistence

pageGCP - Storage Persistence