Concourse Lab Creation

Learn AWS hacking from zero to hero with htARTE (HackTricks AWS Red Team Expert)!

Other ways to support HackTricks:

Testing Environment

Running Concourse

With Docker-Compose

A docker-compose file simplifies the installation for running some tests with Concourse:
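A minimal compose file, modeled on the official Concourse quickstart, could look like this (the local user `test:test` and the Postgres credentials are illustrative, not required values):

```yaml
version: '3'

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: concourse_user
      POSTGRES_PASSWORD: concourse_pass

  web:
    image: concourse/concourse
    command: quickstart     # runs web + worker in a single container
    privileged: true        # the worker needs privileges to create containers
    depends_on: [db]
    ports: ["8080:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://localhost:8080
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
```

With this running, the web UI is reachable at http://localhost:8080.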

docker-compose up -d

You can download the fly command-line tool for your OS from the Concourse web UI.

You can easily deploy Concourse in Kubernetes (in minikube, for example) using the Helm chart: concourse-chart.

brew install helm
helm repo add concourse https://concourse-charts.storage.googleapis.com/
helm install concourse-release concourse/concourse
# concourse-release will be the prefix name for the concourse elements in k8s
# After the installation you will find the instructions to connect to it in the console

# If you need to delete it
helm delete concourse-release

After generating the Concourse environment, you could create a secret and give the service account running the Concourse web pods access to read K8s secrets:

echo 'apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-secrets
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets-concourse
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-secrets
subjects:
- kind: ServiceAccount
  name: concourse-release-web
  namespace: default

---

apiVersion: v1
kind: Secret
metadata:
  name: super
  namespace: concourse-release-main
type: Opaque
data:
  secret: MWYyZDFlMmU2N2Rm

' | kubectl apply -f -

Create Pipeline

A pipeline is made up of a list of Jobs, each of which contains an ordered list of Steps.


Several different types of steps can be used, such as task, get, put, set_pipeline, load_var, in_parallel, and try.

Each step in a job plan runs in its own container. You can run anything you want inside the container (e.g. run your tests, run a bash script, build an image, etc.). So if you have a job with five steps, Concourse will create five containers, one for each step.

Therefore, it's possible to indicate the type of container each step needs to be run in.

Simple Pipeline Example

jobs:
- name: simple
  plan:
  - task: simple-task
    privileged: true
    config:
      # Tells Concourse which type of worker this task should run on
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: busybox # images are pulled from docker hub by default
      run:
        path: sh
        args:
        - -cx
        - |
          sleep 1000
          echo "$SUPER_SECRET"
      params:
        SUPER_SECRET: ((super.secret))
fly -t tutorial set-pipeline -p pipe-name -c hello-world.yml
# pipelines are paused when first created
fly -t tutorial unpause-pipeline -p pipe-name
# trigger the job and watch it run to completion
fly -t tutorial trigger-job --job pipe-name/simple --watch
# From another console
fly -t tutorial intercept --job pipe-name/simple

Check the Concourse web UI to see the pipeline flow.

Bash script with output/input pipeline

It's possible to save the results of one task in a file, declare it as an output, and then declare the input of the next task as the output of the previous one. What Concourse does is mount the output directory of the previous task into the new task, where you can access the files the previous task created.
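A minimal sketch of such a pipeline (the task and artifact names here are illustrative):

```yaml
jobs:
- name: write-and-read
  plan:
  - task: write
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      outputs:
      - name: the-artifact    # directory Concourse creates and passes along
      run:
        path: sh
        args: ["-c", "echo hello > the-artifact/message"]
  - task: read
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      inputs:
      - name: the-artifact    # same directory, mounted from the previous step
      run:
        path: sh
        args: ["-c", "cat the-artifact/message"]
```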


You don't need to trigger the jobs manually every time you need to run them; you can also schedule them to run automatically:
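For example, a pipeline can be triggered periodically with the time resource type (the resource and job names below are illustrative):

```yaml
resources:
- name: every-10m
  type: time
  source: {interval: 10m}

jobs:
- name: scheduled
  plan:
  - get: every-10m
    trigger: true    # new versions of the time resource trigger the job
  - task: say-hi
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: busybox}
      run:
        path: echo
        args: ["hi"]
```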

Check a YAML pipeline example that triggers on new commits to master in
