Each cloud has its own peculiarities but in general there are a few common things a pentester should check when testing a cloud environment:
Benchmark checks
This will help you understand the size of the environment and services used
It will also allow you to find some quick misconfigurations, as you can perform most of these tests with automated tools
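As a minimal sketch of such an automated pass, assuming Prowler (covered later in the Multi-Cloud tools section) is installed and an AWS profile named audit-profile is already configured (the profile name and compliance framework name are illustrative and depend on the Prowler version):
# List the compliance frameworks supported by the installed Prowler version
prowler aws --list-compliance
# Hypothetical example: run the CIS benchmark checks against an example profile and export the results
prowler aws --profile audit-profile --compliance cis_2.0_aws -M csv html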
Services Enumeration
You probably won't find many more misconfigurations here if you performed the benchmark tests correctly, but you might find some that weren't being looked for in those tests.
This will allow you to know exactly what is being used in the cloud env
This will help a lot in the next steps
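For example, a quick first enumeration pass with the native CLIs could look like the following sketch (assuming credentials are already configured; <project-id> is a placeholder):
# AWS: list tagged resources across services in a region (Resource Groups Tagging API)
aws resourcegroupstaggingapi get-resources --region us-east-1
# GCP: list the APIs/services enabled in a project
gcloud services list --enabled --project <project-id>
# Azure: list every resource in the current subscription
az resource list --output table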
Check exposed assets
This can be done during the previous section; you need to find out everything that is potentially exposed to the Internet somehow and how it can be accessed.
Here I'm talking about manually exposed infrastructure, like instances with web pages or other exposed ports, and also about cloud-managed services that can be configured to be exposed (such as DBs or buckets)
Then you should check whether that resource should be exposed or not (confidential information? vulnerabilities? misconfigurations in the exposed service?)
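A minimal sketch of this kind of check with the native CLIs (placeholders like <bucket-name> are illustrative):
# AWS: list instances together with any public IP they have
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,PublicIpAddress]' --output table
# GCP: look for buckets readable by allUsers / allAuthenticatedUsers
gsutil ls
gsutil iam get gs://<bucket-name>
# Azure: check which storage accounts allow public blob access
az storage account list --query '[].[name,allowBlobPublicAccess]' --output table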
Check permissions
Here you should find out all the permissions of each role/user inside the cloud and how they are used
Too many highly privileged (control everything) accounts? Generated keys that aren't used?... Most of these checks should have been done in the benchmark tests already
If the client is using OpenID, SAML or another federation, you might need to ask them for further information about how each role is being assigned (it's not the same if the admin role is assigned to 1 user or to 100)
It's not enough to find which users have admin permissions "*:*". There are a lot of other permissions that, depending on the services used, can be very sensitive.
Moreover, there are potential privesc paths that can be followed by abusing permissions. All these things should be taken into account and as many privesc paths as possible should be reported.
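As a sketch of how to gather this information with the native CLIs (assuming credentials are configured; <project-id> is a placeholder):
# AWS: dump users, groups, roles and policies with their relationships
aws iam get-account-authorization-details > iam-details.json
# AWS: generate and download the credential report to spot unused keys/accounts
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 -d > credential-report.csv
# GCP: see who has which role at the project level
gcloud projects get-iam-policy <project-id>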
Check Integrations
It's highly probable that integrations with other clouds or SaaS are being used inside the cloud env.
For integrations from the cloud you are auditing to other platforms, you should note who has access to (ab)use that integration, and you should ask how sensitive the action being performed is.
For example, who can write to an AWS bucket that GCP is getting data from (and ask how sensitive the action GCP performs with that data is).
For integrations from external platforms into the cloud you are auditing, you should ask who has external access to (ab)use that integration and check how that data is being used.
For example, if a service is using a Docker image hosted in GCR, you should ask who has access to modify it, and which sensitive info and access that image will get when executed inside an AWS cloud.
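As a rough sketch of how to answer those questions for the bucket/GCR examples above (bucket and project names are placeholders; for gcr.io the images are stored in a bucket named artifacts.<project-id>.appspot.com):
# AWS: who is allowed to write to the bucket another cloud consumes data from?
aws s3api get-bucket-policy --bucket <bucket-name> --query Policy --output text
aws s3api get-bucket-acl --bucket <bucket-name>
# GCP: who can modify the images stored in GCR? (GCR keeps them in this bucket)
gsutil iam get gs://artifacts.<project-id>.appspot.com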
Multi-Cloud tools
There are several tools that can be used to test different cloud environments. The installation steps and links are going to be indicated in this section.
A tool to identify bad configurations and privesc paths in clouds and across clouds/SaaS.
# You need to install and run neo4j also
git clone https://github.com/carlospolop/PurplePanda
cd PurplePanda
python3 -m venv .
source bin/activate
python3 -m pip install -r requirements.txt
export PURPLEPANDA_NEO4J_URL="bolt://neo4j@localhost:7687"
export PURPLEPANDA_PWD="neo4j_pwd_4_purplepanda"
python3 main.py -h # Get help
export GOOGLE_DISCOVERY=$(echo 'google:
- file_path: ""
- file_path: ""
  service_account_id: "some-sa-email@sidentifier.iam.gserviceaccount.com"' | base64)

python3 main.py -a -p google #Get basic info of the account to check it's correctly configured
python3 main.py -e -p google #Enumerate the env
# Install
pip install prowler
prowler -v

# Run
prowler <provider>

# Example
prowler aws --profile custom-profile [-M csv json json-asff html]

# Get info about checks & services
prowler <provider> --list-checks
prowler <provider> --list-services
mkdir scout; cd scout
virtualenv -p python3 venv
source venv/bin/activate
pip install scoutsuite
scout --help
## Using Docker: https://github.com/nccgroup/ScoutSuite/wiki/Docker-Image
scout gcp --report-dir /tmp/gcp --user-account --all-projects
## use "--service-account KEY_FILE" instead of "--user-account" to use a service account

SCOUT_FOLDER_REPORT="/tmp"
for pid in $(gcloud projects list --format="value(projectId)"); do
    echo "================================================"
    echo "Checking $pid"
    mkdir "$SCOUT_FOLDER_REPORT/$pid"
    scout gcp --report-dir "$SCOUT_FOLDER_REPORT/$pid" --no-browser --user-account --project-id "$pid"
done
# Install gcp plugin
steampipe plugin install gcp

# Use https://github.com/turbot/steampipe-mod-gcp-compliance.git
git clone https://github.com/turbot/steampipe-mod-gcp-compliance.git
cd steampipe-mod-gcp-compliance
# To run all the checks from the dashboard
steampipe dashboard
# To run all the checks from the cli
steampipe check all
Check all Projects
In order to check all the projects you need to generate the gcp.spc file indicating all the projects to test. You can just follow the indications from the following script
FILEPATH="/tmp/gcp.spc"
rm -rf "$FILEPATH" 2>/dev/null

# Generate a json like object for each project
for pid in $(gcloud projects list --format="value(projectId)"); do
    echo "connection \"gcp_$(echo -n $pid | tr "-" "_" )\" {
    plugin  = \"gcp\"
    project = \"$pid\"
}" >> "$FILEPATH"
done

# Generate the aggregator to call
echo 'connection "gcp_all" {
    plugin      = "gcp"
    type        = "aggregator"
    connections = ["gcp_*"]
}' >> "$FILEPATH"

echo "Copy $FILEPATH in ~/.steampipe/config/gcp.spc if it was correctly generated"
# Install aws plugin
steampipe plugin install aws

# Modify the spec indicating in "profile" the profile name to use
nano ~/.steampipe/config/aws.spc

# Get some info on how the AWS account is being used
git clone https://github.com/turbot/steampipe-mod-aws-insights.git
cd steampipe-mod-aws-insights
steampipe dashboard

# Get the services exposed to the internet
git clone https://github.com/turbot/steampipe-mod-aws-perimeter.git
cd steampipe-mod-aws-perimeter
steampipe dashboard

# Run the benchmarks
git clone https://github.com/turbot/steampipe-mod-aws-compliance
cd steampipe-mod-aws-compliance
steampipe dashboard # To see results in browser
steampipe check all --export=/tmp/output4.json
AWS, GCP, Azure, DigitalOcean.
It requires python2.7 and looks unmaintained.
Nessus
Nessus has an Audit Cloud Infrastructure scan supporting: AWS, Azure, Office 365, Rackspace, Salesforce. Some extra configurations in Azure are needed to obtain a Client Id.
Cartography is a Python tool that consolidates infrastructure assets and the relationships between them in an intuitive graph view powered by a Neo4j database.
# Installation
docker image pull ghcr.io/lyft/cartography
docker run --platform linux/amd64 ghcr.io/lyft/cartography cartography --help
## Install a Neo4j DB version 3.5.*
docker run --platform linux/amd64 \
    --volume "$HOME/.config/gcloud/application_default_credentials.json:/application_default_credentials.json" \
    -e GOOGLE_APPLICATION_CREDENTIALS="/application_default_credentials.json" \
    -e NEO4j_PASSWORD="s3cr3t" \
    ghcr.io/lyft/cartography \
    --neo4j-uri bolt://host.docker.internal:7687 \
    --neo4j-password-env-var NEO4j_PASSWORD \
    --neo4j-user neo4j

# It only checks for a few services inside GCP (https://lyft.github.io/cartography/modules/gcp/index.html)
## Cloud Resource Manager
## Compute
## DNS
## Storage
## Google Kubernetes Engine
### If you can run starbase or purplepanda you will get more info
Starbase collects assets and relationships from services and systems including cloud infrastructure, SaaS applications, security controls, and more into an intuitive graph view backed by the Neo4j database.
# You are going to need Node version 14, so install nvm following https://tecadmin.net/install-nvm-macos-with-homebrew/
npm install --global yarn
nvm install 14
git clone https://github.com/JupiterOne/starbase.git
cd starbase
nvm use 14
yarn install
yarn starbase --help
# Configure manually config.yaml depending on the env to analyze
yarn starbase setup
yarn starbase run

# Docker
git clone https://github.com/JupiterOne/starbase.git
cd starbase
cp config.yaml.example config.yaml
# Configure manually config.yaml depending on the env to analyze
docker build --no-cache -t starbase:latest .
docker-compose run starbase setup
docker-compose run starbase run
## Config for GCP
### Check out: https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md
### It requires service account credentials

integrations:
  -
    name: graph-google-cloud
    instanceId: testInstanceId
    directory: ./.integrations/graph-google-cloud
    gitRemoteUrl: https://github.com/JupiterOne/graph-google-cloud.git
    config:
      SERVICE_ACCOUNT_KEY_FILE: '{Check https://github.com/JupiterOne/graph-google-cloud/blob/main/docs/development.md#service_account_key_file-string}'
      PROJECT_ID: ""
      FOLDER_ID: ""
      ORGANIZATION_ID: ""
      CONFIGURE_ORGANIZATION_PROJECTS: false

storage:
  engine: neo4j
  config:
    username: neo4j
    password: s3cr3t
    uri: bolt://localhost:7687 # Consider using host.docker.internal if from docker
Discover the most privileged users in the scanned AWS or Azure environment, including the AWS Shadow Admins. It uses PowerShell.
Import-Module .\SkyArk.ps1 -force
Start-AzureStealth

# in the Cloud Console
IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/cyberark/SkyArk/master/AzureStealth/AzureStealth.ps1')
Scan-AzureAdmins
A tool to find a company (target) infrastructure, files, and apps on the top cloud providers (Amazon, Google, Microsoft, DigitalOcean, Alibaba, Vultr, Linode).
Stormspotter creates an “attack graph” of the resources in an Azure subscription. It enables red teams and pentesters to visualize the attack surface and pivot opportunities within a tenant, and supercharges your defenders to quickly orient and prioritize incident response work.
Office365
You need Global Admin or at least Global Admin Reader (but note that Global Admin Reader is a little bit limited). However, those limitations appear in some PS modules and can be bypassed by accessing the features via the web application.