Amazon S3 is a service that allows you to store large amounts of data.
Amazon S3 provides multiple options to protect data at rest: permissions (policies), encryption (client- and server-side), bucket versioning and MFA-based delete. The user can enable any of these options to achieve data protection. Data replication is an internal facility of AWS: S3 automatically replicates each object across all the Availability Zones, and the organization doesn't need to enable it in this case.
With resource-based permissions, you can define permissions for sub-directories of your bucket separately.
Bucket Versioning and MFA based delete
When bucket versioning is enabled, any action that tries to alter a file inside a bucket will generate a new version of the file, keeping its previous content as well. Therefore, it won't overwrite the existing content.
Moreover, MFA-based delete will prevent versions of files in the S3 bucket from being deleted, and will also prevent Bucket Versioning from being disabled, so an attacker won't be able to alter these files.
S3 Access logs
It's possible to enable S3 access logging (which is disabled by default) for a bucket and save the logs in a different bucket to know who is accessing it (both buckets must be in the same region).
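The configuration above is applied with `aws s3api put-bucket-logging --bucket <bucket> --bucket-logging-status file://logging.json`. A sketch of what that `logging.json` payload could look like (the bucket name and prefix are placeholders, not real resources):

```python
import json

# Hypothetical BucketLoggingStatus payload for put-bucket-logging;
# "my-log-bucket" and the prefix are illustrative placeholders.
logging_status = {
    "LoggingEnabled": {
        "TargetBucket": "my-log-bucket",   # must be in the same region as the source bucket
        "TargetPrefix": "access-logs/",
    }
}

payload = json.dumps(logging_status, indent=2)
print(payload)
```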
S3 Presigned URLs
It's possible to generate a presigned URL that can usually be used to access the specified file in the bucket. A presigned URL looks like this:
A presigned URL can be created from the CLI using the credentials of a principal with access to the object (if the account you use doesn't have access, a shorter presigned URL will be created, but it will be useless).
The only permission required to generate a presigned URL is the permission being granted by it, so for the previous command the only permission needed by the principal is s3:GetObject.
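Presigned URLs work because the caller's secret key is used to produce a SigV4 signature over the request; anyone holding the URL replays that signature. A minimal stdlib sketch of the SigV4 signing-key derivation (the credentials, date and string-to-sign below are illustrative placeholders, not real values):

```python
import hashlib
import hmac

# Hypothetical inputs for illustration only.
SECRET_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
DATE = "20240101"          # YYYYMMDD portion of X-Amz-Date
REGION = "us-east-1"
SERVICE = "s3"

def _sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: an HMAC chain over date, region and service."""
    k_date = _sign(("AWS4" + secret).encode(), date)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")

key = sigv4_signing_key(SECRET_KEY, DATE, REGION, SERVICE)
# The URL's X-Amz-Signature is HMAC-SHA256(signing_key, string_to_sign), hex-encoded.
signature = hmac.new(key, b"<string-to-sign>", hashlib.sha256).hexdigest()
```

This is why a URL signed with credentials that lack s3:GetObject is still rejected: the signature is valid, but the authorization check on the signing principal fails at request time.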
It's also possible to create presigned URLs with other permissions:
DEK stands for Data Encryption Key and is the key that is always generated and used to encrypt data.
Server-side encryption with S3 managed keys, SSE-S3
This option requires minimal configuration, and all the encryption keys used are managed by AWS. All you need to do is upload your data, and S3 will handle all other aspects. Each bucket in an S3 account is assigned a bucket key.
Encryption:
Object Data + created plaintext DEK --> Encrypted data (stored inside S3)
Created plaintext DEK + S3 Master Key --> Encrypted DEK (stored inside S3) and plain text is deleted from memory
Decryption:
Encrypted DEK + S3 Master Key --> Plaintext DEK
Plaintext DEK + Encrypted data --> Object Data
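The envelope-encryption flow above can be sketched in a few lines. This is a toy model: the XOR keystream stands in for AES-256, and names like `s3_master_key` are illustrative, not AWS internals.

```python
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric XOR keystream derived from the key (NOT real encryption)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

s3_master_key = os.urandom(32)    # held by AWS, never exposed to the user
object_data = b"hello world"

# Encryption: a fresh plaintext DEK per object, then the DEK is wrapped
dek = os.urandom(32)
encrypted_data = toy_cipher(dek, object_data)      # stored inside S3
encrypted_dek = toy_cipher(s3_master_key, dek)     # stored alongside it
del dek                                            # plaintext DEK deleted from memory

# Decryption: unwrap the DEK with the master key, then decrypt the object
recovered_dek = toy_cipher(s3_master_key, encrypted_dek)
recovered = toy_cipher(recovered_dek, encrypted_data)
```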
Please note that in this case the key is managed by AWS (rotation only every 3 years). If you use your own key you will be able to rotate it, disable it and apply access control to it.
Server-side encryption with KMS managed keys, SSE-KMS
This method allows S3 to use the Key Management Service (KMS) to generate your data encryption keys. KMS gives you far greater flexibility in how your keys are managed. For example, you are able to disable, rotate, and apply access controls to the CMK, and audit its usage using AWS CloudTrail.
Encryption:
S3 requests data keys from a KMS CMK
KMS uses the CMK to generate the pair plaintext DEK + encrypted DEK and sends them to S3
S3 uses the plaintext key to encrypt the data, stores the encrypted data and the encrypted key, and deletes the plaintext key from memory
Decryption:
S3 asks KMS to decrypt the encrypted data key of the object
KMS decrypts the data key with the CMK and sends it back to S3
S3 decrypts the object data
Server-side encryption with customer provided keys, SSE-C
This option gives you the opportunity to provide your own master key that you may already be using outside of AWS. Your customer-provided key would then be sent with your data to S3, where S3 would then perform the encryption for you.
Encryption:
The user sends the object data + customer key to S3
The customer key is used to encrypt the data and the encrypted data is stored
A salted HMAC value of the customer key is also stored for future key validation
The customer key is deleted from memory
Decryption:
The user sends the customer key
The key is validated against the HMAC value stored
The customer provided key is then used to decrypt the data
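The salted-HMAC key check described above can be sketched with the stdlib. The salt size and hash choice here are assumptions for illustration, not the actual S3 internals:

```python
import hashlib
import hmac
import os

def store_key_fingerprint(customer_key: bytes) -> tuple[bytes, bytes]:
    """At upload: keep only a salted HMAC of the key, never the key itself."""
    salt = os.urandom(16)
    mac = hmac.new(salt, customer_key, hashlib.sha256).digest()
    return salt, mac

def validate_key(customer_key: bytes, salt: bytes, mac: bytes) -> bool:
    """At download: recompute the HMAC and compare in constant time."""
    candidate = hmac.new(salt, customer_key, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, mac)

customer_key = os.urandom(32)
salt, mac = store_key_fingerprint(customer_key)
```

Storing only the salted HMAC means S3 can reject a wrong key on a GET without ever being able to decrypt the data itself.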
Client-side encryption with KMS, CSE-KMS
Similarly to SSE-KMS, this also uses the Key Management Service to generate your data encryption keys. However, this time KMS is called by the client, not S3. The encryption then takes place client-side and the encrypted data is then sent to S3 to be stored.
Encryption:
The client requests a data key from KMS
KMS returns the plaintext DEK and the encrypted DEK with the CMK
Both keys are sent back
The client then encrypts the data with the plaintext DEK and sends to S3 the encrypted data + the encrypted DEK (which is saved as metadata of the encrypted object inside S3)
Decryption:
The encrypted data with the encrypted DEK is sent to the client
The client asks KMS to decrypt the encrypted key using the CMK and KMS sends back the plaintext DEK
The client can now decrypt the encrypted data
Client-side encryption with customer provided keys, CSE-C
Using this mechanism, you are able to utilize your own provided keys and use an AWS-SDK client to encrypt your data before sending it to S3 for storage.
Encryption:
The client generates a DEK and encrypts the plaintext data
Then, using its own custom CMK, it encrypts the DEK
The encrypted data + encrypted DEK are submitted to S3, where they are stored
Decryption:
S3 sends the encrypted data and DEK
As the client already has the CMK used to encrypt the DEK, it decrypts the DEK and then uses the plaintext DEK to decrypt the data
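In CSE-C, S3 is just a dumb store: it only ever sees ciphertext plus the wrapped DEK as metadata. A toy sketch of that client-side flow (the XOR keystream stands in for a real cipher, and the metadata key name is hypothetical):

```python
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric XOR keystream derived from the key (NOT real encryption)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

client_cmk = os.urandom(32)      # never leaves the client

# Encryption (client side): wrap a fresh DEK under the client's own CMK
dek = os.urandom(32)
s3_object = {
    "body": toy_cipher(dek, b"secret report"),
    "metadata": {"x-enc-dek": toy_cipher(client_cmk, dek)},  # hypothetical metadata key
}

# Decryption (client side): unwrap the DEK from the metadata, then decrypt
recovered_dek = toy_cipher(client_cmk, s3_object["metadata"]["x-enc-dek"])
plaintext = toy_cipher(recovered_dek, s3_object["body"])
```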
Enumeration
One of the traditional main ways of compromising AWS organizations starts by compromising publicly accessible buckets. You can find public bucket enumerators on this page.
# Get buckets ACLs
aws s3api get-bucket-acl --bucket <bucket-name>
aws s3api get-object-acl --bucket <bucket-name> --key flag

# Get policy
aws s3api get-bucket-policy --bucket <bucket-name>
aws s3api get-bucket-policy-status --bucket <bucket-name> # if it's public

# list S3 buckets associated with a profile
aws s3 ls
aws s3api list-buckets

# list content of bucket (no creds)
aws s3 ls s3://bucket-name --no-sign-request
aws s3 ls s3://bucket-name --recursive

# list content of bucket (with creds)
aws s3 ls s3://bucket-name
aws s3api list-objects-v2 --bucket <bucket-name>
aws s3api list-objects --bucket <bucket-name>
aws s3api list-object-versions --bucket <bucket-name>

# copy local folder to S3
aws s3 cp MyFolder s3://bucket-name --recursive

# delete
aws s3 rb s3://bucket-name --force

# download a whole S3 bucket
aws s3 sync s3://<bucket>/ .

# move S3 bucket to different location
aws s3 sync s3://oldbucket s3://newbucket --source-region us-west-1

# list the sizes of an S3 bucket and its contents
aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[].Size), length(Contents[])]"

# Update Bucket policy
aws s3api put-bucket-policy --policy file:///root/policy.json --bucket <bucket-name>
## JSON policy example
{
  "Id": "Policy1568185116930",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1568184932403",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::welcome",
      "Principal": "*"
    },
    {
      "Sid": "Stmt1568185007451",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::welcome/*",
      "Principal": "*"
    }
  ]
}

# Update bucket ACL
aws s3api get-bucket-acl --bucket <bucket-name> # Way 1 to get the ACL
aws s3api put-bucket-acl --bucket <bucket-name> --access-control-policy file://acl.json
aws s3api get-object-acl --bucket <bucket-name> --key flag # Way 2 to get the ACL
aws s3api put-object-acl --bucket <bucket-name> --key flag --access-control-policy file://objacl.json
## JSON ACL example
## Make sure to modify the Owner's DisplayName and ID according to the Object ACL you retrieved.
{
  "Owner": {
    "DisplayName": "<DisplayName>",
    "ID": "<ID>"
  },
  "Grants": [
    {
      "Grantee": {
        "Type": "Group",
        "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
      },
      "Permission": "FULL_CONTROL"
    }
  ]
}
## An ACL should give you the permission WRITE_ACP to be able to put a new ACL
Dual-stack
You can access an S3 bucket through a dual-stack endpoint by using a virtual hosted-style or a path-style endpoint name. These are useful to access S3 through IPv6.
Dual-stack endpoints use the following syntax:
bucketname.s3.dualstack.aws-region.amazonaws.com
s3.dualstack.aws-region.amazonaws.com/bucketname
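Both endpoint styles above can be built from the bucket name and region; the bucket and region here are hypothetical placeholders:

```python
# Construct the two dual-stack endpoint styles for an illustrative bucket/region.
bucket = "my-bucket"
region = "us-west-2"

virtual_hosted = f"{bucket}.s3.dualstack.{region}.amazonaws.com"
path_style = f"s3.dualstack.{region}.amazonaws.com/{bucket}"
```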
Privesc
In the following page you can check how to abuse S3 permissions to escalate privileges:
According to this research, it was possible to cache the response of an arbitrary bucket as if it belonged to a different one. This could have been abused, for example, to change JavaScript file responses and compromise arbitrary pages using S3 to store static code.
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL.
You need to prepare a relational DB table matching the format of the content that is going to appear in the monitored S3 buckets. Then, Amazon Athena will be able to populate the DB from the logs, so you can query it.
Amazon Athena supports the ability to query S3 data that is already encrypted and if configured to do so, Athena can also encrypt the results of the query which can then be stored in S3.
This encryption of results is independent of the underlying queried S3 data, meaning that even if the S3 data is not encrypted, the query results can be encrypted. A couple of points to be aware of: Amazon Athena only supports data that has been encrypted with the following S3 encryption methods: SSE-S3, SSE-KMS, and CSE-KMS.
SSE-C and CSE-C are not supported. In addition to this, it's important to understand that Amazon Athena will only run queries against encrypted objects that are in the same region as the query itself. If you need to query S3 data that's been encrypted using KMS, then the Athena user requires specific permissions to perform the query.
Enumeration
# Get catalogs
aws athena list-data-catalogs

# Get databases inside catalog
aws athena list-databases --catalog-name <catalog-name>
aws athena list-table-metadata --catalog-name <catalog-name> --database-name <db-name>

# Get query executions, queries and results
aws athena list-query-executions
aws athena get-query-execution --query-execution-id <id> # Get query and meta of results
aws athena get-query-results --query-execution-id <id> # This will rerun the query and get the results

# Get workgroups & Prepared statements
aws athena list-work-groups
aws athena list-prepared-statements --work-group <wg-name>
aws athena get-prepared-statement --statement-name <name> --work-group <wg-name>

# Run query
aws athena start-query-execution --query-string <query>