GCP - Artifact Registry Enum
Google Cloud Artifact Registry is a fully managed service that allows you to manage, store, and secure your software artifacts. It's essentially a repository for storing build dependencies, such as Docker images, Maven, npm packages, and other types of artifacts. It's commonly used in CI/CD pipelines for storing and versioning the artifacts produced during the software development process.
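Since enumeration is the goal of this page, below is a minimal sketch of the basic gcloud enumeration commands (`<project>`, `<location>`, `<repo>` and `<pkg>` are placeholders):

```bash
# List all Artifact Registry repositories in the project (shows format and mode)
gcloud artifacts repositories list --project=<project>

# Get the details of a specific repository (encryption key, upstreams, cleanup policies...)
gcloud artifacts repositories describe <repo> --location=<location>

# List the packages inside a repository and the versions of a package
gcloud artifacts packages list --repository=<repo> --location=<location>
gcloud artifacts versions list --package=<pkg> --repository=<repo> --location=<location>

# For Docker-format repositories, list the stored container images
gcloud artifacts docker images list <location>-docker.pkg.dev/<project>/<repo>
```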
Key features of Artifact Registry include:
Unified Repository: It supports multiple types of artifacts, allowing you to have a single repository for Docker images, language packages (like Java’s Maven, Node.js’s npm), and other types of artifacts, enabling consistent access controls and a unified view across all your artifacts.
Fully Managed: As a managed service, it takes care of the underlying infrastructure, scaling, and security, reducing the maintenance overhead for users.
Fine-grained Access Control: It integrates with Google Cloud’s Identity and Access Management (IAM), allowing you to define who can access, upload, or download artifacts in your repositories (see the IAM enumeration sketch after this list).
Geo-replication: It supports the replication of artifacts across multiple regions, improving the speed of downloads and ensuring availability.
Integration with Google Cloud Services: It works seamlessly with other GCP services like Cloud Build, Kubernetes Engine, and Compute Engine, making it a convenient choice for teams already working within the Google Cloud ecosystem.
Security: Offers features like vulnerability scanning and container analysis to help ensure that the stored artifacts are secure and free from known security issues.
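To enumerate who can do what on a given repository you can read its IAM policy. A minimal sketch with placeholder names:

```bash
# Dump the IAM policy of a repository to see which principals can read/write artifacts
gcloud artifacts repositories get-iam-policy <repo> --location=<location>

# Permissions of interest:
# artifactregistry.repositories.downloadArtifacts (read artifacts)
# artifactregistry.repositories.uploadArtifacts   (write artifacts)
```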
When creating a new repository it's possible to select the format/type of the repository among several options like Docker, Maven, npm, Python... and the mode, which is usually one of these three:
Standard Repository: Default mode for storing your own artifacts (like Docker images, Maven packages) directly in GCP. It's secure, scalable, and integrates well within the Google Cloud ecosystem.
Remote Repository (if available): Acts as a proxy for caching artifacts from external, public repositories. It helps prevent issues from dependencies changing upstream and reduces latency by caching frequently accessed artifacts.
Virtual Repository (if available): Provides a unified interface to access multiple (standard or remote) repositories through a single endpoint, simplifying client-side configuration and access management for artifacts spread across various repositories.
For a virtual repository you will need to select the upstream repositories and give each one a priority (the repository with the highest priority will be used first).
You can mix remote and standard repositories in a virtual one; if the priority of the remote repository is higher than that of the standard one, packages from the remote source (PyPI for example) will be served instead of the internal ones. This could lead to a Dependency Confusion.
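To spot this misconfiguration during enumeration, you can inspect the upstream policies of a virtual repository and compare their priorities. A sketch with placeholder names, assuming the `virtualRepositoryConfig.upstreamPolicies` field exposed by the API:

```bash
# Show each upstream repository of a virtual repo and its priority
gcloud artifacts repositories describe <virtual-repo> --location=<location> \
  --format="yaml(virtualRepositoryConfig.upstreamPolicies)"

# If a remote (public proxy) upstream has a higher priority than the standard
# (internal) one, a public package with an internal name would be served first
```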
Note that in the Remote version of Docker repositories it's possible to provide a username and token to access Docker Hub. The token is then stored in Secret Manager.
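If you can read both the repository configuration and Secret Manager, you might be able to recover those upstream credentials. A sketch with placeholder names (the exact secret name is whatever the repository config points to):

```bash
# The remote repo config references the Secret Manager secret holding the upstream token
gcloud artifacts repositories describe <remote-repo> --location=<location> \
  --format="yaml(remoteRepositoryConfig)"

# With access to that secret, read the stored Docker Hub token
gcloud secrets versions access latest --secret=<secret-name>
```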
As expected, a Google-managed key is used by default, but a customer-managed encryption key (CMEK) can be specified.
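During enumeration you can check which key encrypts a repository. A sketch (an empty `kmsKeyName` means the Google-managed default key is in use):

```bash
# Print the KMS key used by the repository (empty output => Google-managed key)
gcloud artifacts repositories describe <repo> --location=<location> \
  --format="value(kmsKeyName)"
```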
It's also possible to configure a cleanup policy with one of two modes:
Delete artifacts: Artifacts will be deleted according to the cleanup policy criteria.
Dry run: (Default) Artifacts will not be deleted. Cleanup policies will be evaluated, and test delete events will be sent to Cloud Audit Logging.
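To check which mode a repository uses, you can look at the relevant fields in its description. A sketch, assuming the API field names are `cleanupPolicies` and `cleanupPolicyDryRun`:

```bash
# cleanupPolicyDryRun: true  => policies are only evaluated (default)
# cleanupPolicyDryRun: false => matching artifacts are really deleted
gcloud artifacts repositories describe <repo> --location=<location> \
  --format="yaml(cleanupPolicies,cleanupPolicyDryRun)"
```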
It's possible to enable the vulnerability scanner, which will check for vulnerabilities inside container images.
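If scanning is enabled, the findings can be read back with gcloud. A sketch with a placeholder image path:

```bash
# Show vulnerability findings attached to a container image (requires scanning enabled)
gcloud artifacts docker images describe \
  <location>-docker.pkg.dev/<project>/<repo>/<image>:<tag> \
  --show-package-vulnerability
```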