Pentesting Cloud Methodology


Basic Methodology

Each cloud has its own peculiarities, but in general there are a few common things a pentester should check when testing a cloud environment (a minimal CLI sketch illustrating a couple of these checks follows this list):
  • Benchmark checks
    • This will help you understand the size of the environment and the services used
    • It will also allow you to find some quick misconfigurations, as you can perform most of these tests with automated tools
  • Services Enumeration
    • If you performed the benchmark tests correctly, you probably won't find many more misconfigurations here, but you might find some that weren't being looked for in the benchmark tests.
    • This will allow you to know exactly what is being used in the cloud env
    • This will help a lot in the next steps
  • Check exposed assets
    • This can be done during the previous section; you need to find out everything that is potentially exposed to the Internet somehow and how it can be accessed.
      • Here I'm talking about manually exposed infrastructure, like instances with web pages or other exposed ports, and also about cloud-managed services that can be configured to be exposed (such as DBs or buckets)
    • Then you should check whether that resource should be exposed or not (confidential information? vulnerabilities? misconfigurations in the exposed service?)
  • Check permissions
    • Here you should find out all the permissions of each role/user inside the cloud and how they are used
      • Too many highly privileged (control everything) accounts? Generated keys that aren't used?... Most of these checks should have been done in the benchmark tests already
      • If the client is using OpenID, SAML or another federation, you might need to ask them for further information about how each role is assigned (it's not the same if the admin role is assigned to 1 user or to 100)
    • It's not enough to find which users have admin permissions "*:*". There are a lot of other permissions that, depending on the services used, can be very sensitive.
      • Moreover, there are potential privesc paths to follow by abusing permissions. All these things should be taken into account and as many privesc paths as possible should be reported.
  • Check Integrations
    • It's highly probable that integrations with other clouds or SaaS platforms are being used inside the cloud env.
      • For integrations from the cloud you are auditing to other platforms, you should note who has access to (ab)use that integration and ask how sensitive the action being performed is. For example, who can write to an AWS bucket that GCP is getting data from (and how sensitive is the action in GCP treating that data).
      • For integrations into the cloud you are auditing from external platforms, you should ask who has external access to (ab)use that integration and check how that data is being used. For example, if a service is using a Docker image hosted in GCR, you should ask who has access to modify it and what sensitive info and access that image will get when executed inside an AWS cloud.
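
As a quick illustration of the "exposed assets" and "permissions" checks, here is a minimal sketch using the AWS CLI (the profile name "audit" is just an example; the automated tools below cover this far more thoroughly):
# Exposed assets: public EC2 IPs and the list of S3 buckets (then review each bucket's policy/ACL)
aws ec2 describe-instances --profile audit --query 'Reservations[].Instances[].PublicIpAddress' --output text
aws s3api list-buckets --profile audit --query 'Buckets[].Name' --output text
# Permissions: which users/groups/roles have the AdministratorAccess managed policy attached
aws iam list-entities-for-policy --profile audit --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Unused credentials: when was each access key of each user last used
for user in $(aws iam list-users --profile audit --query 'Users[].UserName' --output text); do
  for key in $(aws iam list-access-keys --user-name "$user" --profile audit --query 'AccessKeyMetadata[].AccessKeyId' --output text); do
    echo "$user / $key"; aws iam get-access-key-last-used --access-key-id "$key" --profile audit
  done
done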

Multi-Cloud tools

There are several tools that can be used to test different cloud environments. The installation steps and links are going to be indicated in this section.


PurplePanda is a tool to identify bad configurations and privesc paths in clouds and across clouds/SaaS.
# You need to install and run neo4j also
git clone https://github.com/carlospolop/PurplePanda
cd PurplePanda
python3 -m venv .
source bin/activate
python3 -m pip install -r requirements.txt
export PURPLEPANDA_NEO4J_URL="bolt://neo4j@localhost:7687"
export PURPLEPANDA_PWD="neo4j_pwd_4_purplepanda"
python3 main.py -h # Get help
# The second entry impersonates the indicated service account (email placeholder, redacted in the original)
export GOOGLE_DISCOVERY=$(echo 'google:
- file_path: ""
- file_path: ""
  service_account_id: "<service_account_email>"' | base64)
python3 main.py -a -p google #Get basic info of the account to check it's correctly configured
python3 main.py -e -p google #Enumerate the env


Prowler supports AWS, GCP & Azure. Check how to configure each provider in https://docs.prowler.cloud/en/latest/#aws
# Install
pip install prowler
prowler -v
# Run
prowler <provider>
# Example
prowler aws --profile custom-profile [-M csv json json-asff html]
# Get info about checks & services
prowler <provider> --list-checks
prowler <provider> --list-services


CloudSploit supports AWS, Azure, GitHub, Google, Oracle and Alibaba.
# Install
git clone https://github.com/aquasecurity/cloudsploit.git
cd cloudsploit
npm install
./index.js -h
## Docker instructions in github
## You need to have creds for a service account and set them in config.js file
./index.js --cloud google --config </abs/path/to/config.js>


ScoutSuite supports AWS, Azure, GCP, Alibaba Cloud and Oracle Cloud Infrastructure.
mkdir scout; cd scout
virtualenv -p python3 venv
source venv/bin/activate
pip install scoutsuite
scout --help
## Using Docker: https://github.com/nccgroup/ScoutSuite/wiki/Docker-Image
scout gcp --report-dir /tmp/gcp --user-account --all-projects
## use "--service-account KEY_FILE" instead of "--user-account" to use a service account
## Set SCOUT_FOLDER_REPORT to the folder where the reports should be stored
for pid in $(gcloud projects list --format="value(projectId)"); do
    echo "================================================"
    echo "Checking $pid"
    scout gcp --report-dir "$SCOUT_FOLDER_REPORT/$pid" --no-browser --user-account --project-id "$pid"
done


Download and install Steampipe (https://steampipe.io/downloads). Or use Brew:
brew tap turbot/tap
brew install steampipe
# Install gcp plugin
steampipe plugin install gcp
# Use https://github.com/turbot/steampipe-mod-gcp-compliance.git
git clone https://github.com/turbot/steampipe-mod-gcp-compliance.git
cd steampipe-mod-gcp-compliance
# To run all the checks from the dashboard
steampipe dashboard
# To run all the checks from the cli
steampipe check all
Check all Projects
In order to check all the projects you need to generate a gcp.spc file indicating all the projects to test. You can just follow the instructions of the following script:
rm -rf "$FILEPATH" 2>/dev/null
# Generate a json like object for each project
for pid in $(gcloud projects list --format="value(projectId)"); do
echo "connection \"gcp_$(echo -n $pid | tr "-" "_" )\" {
plugin = \"gcp\"
project = \"$pid\"
}" >> "$FILEPATH"
# Generate the aggragator to call
echo 'connection "gcp_all" {
plugin = "gcp"
type = "aggregator"
connections = ["gcp_*"]
}' >> "$FILEPATH"
echo "Copy $FILEPATH in ~/.steampipe/config/gcp.spc if it was correctly generated"
To check other GCP insights (useful for enumerating services) use: https://github.com/turbot/steampipe-mod-gcp-insights​
More GCP plugins of Steampipe: https://github.com/turbot?q=gcp​
# Install aws plugin
steampipe plugin install aws
# Modify the spc file indicating in "profile" the profile name to use
nano ~/.steampipe/config/aws.spc
# Get some info on how the AWS account is being used
git clone https://github.com/turbot/steampipe-mod-aws-insights.git
cd steampipe-mod-aws-insights
steampipe dashboard
# Get the services exposed to the internet
git clone https://github.com/turbot/steampipe-mod-aws-perimeter.git
cd steampipe-mod-aws-perimeter
steampipe dashboard
# Run the benchmarks
git clone https://github.com/turbot/steampipe-mod-aws-compliance
cd steampipe-mod-aws-compliance
steampipe dashboard # To see results in browser
steampipe check all --export=/tmp/output4.json
More AWS plugins of Steampipe: https://github.com/orgs/turbot/repositories?q=aws​


It supports AWS, GCP, Azure and DigitalOcean. It requires Python 2.7 and looks unmaintained.


Nessus has an Audit Cloud Infrastructure scan supporting: AWS, Azure, Office 365, Rackspace, Salesforce. Some extra configurations in Azure are needed to obtain a Client Id.


Cloudlist is a multi-cloud tool for getting Assets (Hostnames, IP Addresses) from Cloud Providers.
cd /tmp
wget https://github.com/projectdiscovery/cloudlist/releases/latest/download/cloudlist_1.0.1_macOS_arm64.zip
unzip cloudlist_1.0.1_macOS_arm64.zip
chmod +x cloudlist
sudo mv cloudlist /usr/local/bin
## For GCP it requires service account JSON credentials
cloudlist -config </path/to/config>
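
As a reference, here is a minimal provider-config sketch (the default config path and the field names are based on the cloudlist documentation; all values are placeholders):
cat > ~/.config/cloudlist/provider-config.yaml <<'EOF'
- provider: aws
  id: staging
  aws_access_key: <AWS_ACCESS_KEY_ID>
  aws_secret_key: <AWS_SECRET_ACCESS_KEY>
- provider: gcp
  id: audit
  gcp_service_account_key: '<contents of the service account JSON key>'
EOF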


Cartography is a Python tool that consolidates infrastructure assets and the relationships between them in an intuitive graph view powered by a Neo4j database.
# Installation
docker image pull ghcr.io/lyft/cartography
docker run --platform linux/amd64 ghcr.io/lyft/cartography cartography --help
## Install a Neo4j DB version 3.5.*
docker run --platform linux/amd64 \
--volume "$HOME/.config/gcloud/application_default_credentials.json:/application_default_credentials.json" \
-e GOOGLE_APPLICATION_CREDENTIALS="/application_default_credentials.json" \
-e NEO4j_PASSWORD="s3cr3t" \
ghcr.io/lyft/cartography \
--neo4j-uri bolt://host.docker.internal:7687 \
--neo4j-password-env-var NEO4j_PASSWORD \
--neo4j-user neo4j
# It only checks for a few services inside GCP (https://lyft.github.io/cartography/modules/gcp/index.html)
## Cloud Resource Manager
## Compute
## DNS
## Storage
## Google Kubernetes Engine
### If you can run starbase or purplepanda you will get more info


Starbase collects assets and relationships from services and systems including cloud infrastructure, SaaS applications, security controls, and more into an intuitive graph view backed by the Neo4j database.
# You are going to need Node version 14, so install nvm following https://tecadmin.net/install-nvm-macos-with-homebrew/
npm install --global yarn
nvm install 14
git clone https://github.com/JupiterOne/starbase.git
cd starbase
nvm use 14
yarn install
yarn starbase --help
# Configure manually config.yaml depending on the env to analyze
yarn starbase setup
yarn starbase run
# Docker
git clone https://github.com/JupiterOne/starbase.git
cd starbase
cp config.yaml.example config.yaml
# Configure manually config.yaml depending on the env to analyze
docker build --no-cache -t starbase:latest .
docker-compose run starbase setup
docker-compose run starbase run
## Config for GCP
### Check out: https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md
### It requires service account credentials
name: graph-google-cloud
instanceId: testInstanceId
directory: ./.integrations/graph-google-cloud
gitRemoteUrl: https://github.com/JupiterOne/graph-google-cloud.git
SERVICE_ACCOUNT_KEY_FILE: '{Check https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md#service_account_key_file-string}'
engine: neo4j
username: neo4j
password: s3cr3t
uri: bolt://localhost:7687
#Consider using host.docker.internal if from docker


SkyArk discovers the most privileged users in the scanned AWS or Azure environment, including the AWS Shadow Admins. It uses PowerShell.
Import-Module .\SkyArk.ps1 -force
Start-AzureStealth # run the scan (function name from the SkyArk README)
# in the Cloud Console
IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/cyberark/SkyArk/master/AzureStealth/AzureStealth.ps1')
Scan-AzureAdmins # run the AzureStealth scan (function name from the SkyArk README)

​Cloud Brute​

A tool to find a company's (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode).
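
A minimal usage sketch (flags taken from the CloudBrute README; the domain, keyword, mode and wordlist are placeholders to adjust):
# Download a precompiled release from https://github.com/0xsha/CloudBrute/releases
./cloudbrute -d target.com -k target -m storage -t 80 -T 10 -w ./data/storage_small.txt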


  • CloudFox is a tool to find exploitable attack paths in cloud infrastructure (currently only AWS & Azure supported with GCP upcoming).
  • It is an enumeration tool intended to complement manual pentesting (see the usage sketch after this list).
  • It doesn't create or modify any data within the cloud environment.
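
A minimal AWS usage sketch (the profile name is a placeholder; check the CloudFox wiki for the full command list):
# Run all AWS checks using a specific profile
cloudfox aws --profile <profile-name> all-checks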

More lists of cloud security tools






Attack Graph

​Stormspotter creates an “attack graph” of the resources in an Azure subscription. It enables red teams and pentesters to visualize the attack surface and pivot opportunities within a tenant, and supercharges your defenders to quickly orient and prioritize incident response work.
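
A minimal setup sketch (based on the Stormspotter repo; it assumes Docker is available and that the Stormcollector is then run with valid Azure credentials to feed the database):
git clone https://github.com/Azure/Stormspotter
cd Stormspotter
docker-compose up   # starts the Neo4j DB, the backend and the frontend UI
# Then run the Stormcollector (stormcollector/ in the repo) with Azure credentials and upload its output via the UI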


You need Global Admin or at least Global Admin Reader (but note that Global Admin Reader is a bit limited). However, those limitations appear in some PS modules and can be bypassed by accessing the features via the web application.