In a business setting, when consuming public Docker images, you may want to sanitise them by running a few processes over them before putting them to use. This process could cover:

  • Standardising configuration.
  • Installing required software/packages.
  • Checking for vulnerabilities and taking snapshots of all dependency versions.
  • Validating OSS License compliance.
  • Scanning for malware.
  • OS-level patching.

Generally this results in “Golden Images”: base images that are white-listed for internal consumption. I won’t focus on why or how to do any of the above, as it can be quite company-specific. Below I will just cover how to automate the process of re-tagging public images so you can push them into your internal container registry.

0. ACR Log in

In order to push images into a registry, you need to authenticate against it first. For Azure ACR, you can either use the docker login command:

docker login --username USER_NAME --password PASSWORD ACR_NAME.azurecr.io

Or the Azure CLI command (the registry name is globally unique, so no resource group is needed):

az acr login -n ACR_NAME --username USER_NAME --password PASSWORD

1. Pull source images

The re-tagging command operates on local images, so before you can do that, you need to pull the required images.

You can either pull all tags of a given image:

docker pull --all-tags microsoft/dotnet

Or, to be more storage- and time-efficient, find the specific tags you want for that Docker image and pull only those.
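For example, a small loop over a hand-picked tag list works well. The tags below are illustrative, so adjust them to your needs; the echo prefix turns this into a dry run so you can preview the commands first:

```shell
# Pull only a hand-picked set of tags instead of --all-tags.
# The tag list below is illustrative; adjust it to your needs.
image="microsoft/dotnet"
tags="2.1-sdk 2.1-runtime 2.2-sdk 2.2-runtime"

for tag in $tags; do
  # The echo prefix makes this a dry run; remove it to actually pull.
  echo docker pull "$image:$tag"
done
```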

2. Re-tag images and push them up

Once you have the required images locally, you can add new tags to them with docker tag. Here’s a bash script to help with that:

original_image="microsoft/dotnet"
target_acr="myinternalacr.azurecr.io"
minimum_version="2.1"
grep_filter="deps|nanoserver|bionic|latest"


# Download all images
docker pull $original_image --all-tags

# Get all images created after $minimum_version
# format output to be:
#   docker tag ORIGINAL_IMAGE_NAME:VERSION TARGET_IMAGE_NAME:VERSION &&
#   docker push TARGET_IMAGE_NAME:VERSION
# then filter the result, removing any entries matching $grep_filter (i.e. deps, nanoserver, bionic, latest)
# finally, execute those as commands
docker images "$original_image" \
  --filter "since=$original_image:$minimum_version" \
  --format "docker tag {{.Repository}}:{{.Tag}} $target_acr/{{.Repository}}:{{.Tag}} && docker push $target_acr/{{.Repository}}:{{.Tag}}" |
  grep -vE "$grep_filter" |
  bash

Note that I use Go templates in the docker images command to build the commands I will need to execute.

For each image found locally based on the original_image that also passes the filter, the generated line will be the equivalent of running:

docker tag microsoft/dotnet:some_tag myinternalacr.azurecr.io/microsoft/dotnet:some_tag
docker push myinternalacr.azurecr.io/microsoft/dotnet:some_tag

Then, I “grep out” anything matching the grep_filter. For example, I do not want to push the tag latest, nor any tag containing the words bionic, nanoserver or deps.

Finally, I pipe the generated commands into bash to execute them, which re-tags and pushes each one of the images to the private ACR.
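Before letting the pipeline touch a real registry, it can be worth simulating the generation and filtering steps on a hard-coded tag list (the tag values below are made up), with no docker daemon involved:

```shell
# Simulate the command-generation and grep-filtering steps on a
# hard-coded tag list (illustrative values), without needing docker.
original_image="microsoft/dotnet"
target_acr="myinternalacr.azurecr.io"
grep_filter="deps|nanoserver|bionic|latest"

printf '%s\n' 2.1-sdk 2.1-runtime-deps latest 2.2-sdk |
  while read -r tag; do
    echo "docker tag $original_image:$tag $target_acr/$original_image:$tag && docker push $target_acr/$original_image:$tag"
  done |
  grep -vE "$grep_filter"
# Only the 2.1-sdk and 2.2-sdk lines survive the filter; in the real
# script, the surviving lines are piped into bash for execution.
```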

Alternatively, you can use the third-party tool skopeo to copy container images directly between registries:

skopeo copy docker://quay.io/buildah/stable docker://registry.internal.company.com/buildah
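To mirror several tags this way, a small loop over skopeo copy works, and no local Docker daemon is needed. The tag list and internal registry name below are illustrative, and the echo prefix keeps this a dry run:

```shell
# Copy a hand-picked set of tags with skopeo.
# Registry names and tags are illustrative; replace with your own.
src="docker://quay.io/buildah/stable"
dst="docker://registry.internal.company.com/buildah"

for tag in v1.28 v1.29 latest; do
  # echo makes this a dry run; remove it to perform the copies.
  echo skopeo copy "$src:$tag" "$dst:$tag"
done
```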

Wrap Up

This can be especially handy when you are putting an Image Assurance process in place company-wide. Note that alternative approaches exist, for example using the original Dockerfiles (when available) to trigger the process of generating such images. However, I pursued the approach above as it felt easier to automate whilst keeping a direct connection to the publicly available Docker images.
