Pin all docker images to digests (#4047)

This dramatically increases the reproducibility of Docker builds and
avoids painful debugging caused by inconsistent Docker caching. Note
that I was able to delete the entire "Debugging" section and to remove
basically all of the arguments to `manage_images.py`.

The cost is that using the script now always uploads to GCR, which takes
time and requires permissions. Given that this is required to test the
images anyway, that seems like an acceptable cost. Personally, I've spent
far more time dealing with weird Docker edge cases than pushing images.
People can always fall back to standard Docker commands for local
iteration.
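
For example, a plain-Docker fallback for local iteration might look like the
sketch below (a hypothetical example assuming the `cmake` image and its
`build_tools/docker/cmake` directory; the `:prod` pull is the same one
documented in the README):

```shell
# Build and run the image locally, without manage_images.py or any GCR push.
docker build --tag cmake build_tools/docker/cmake
docker run --interactive --tty --rm cmake

# Or pull the pinned production image from GCR and run that instead.
docker pull gcr.io/iree-oss/cmake:prod
docker run --interactive --tty --rm gcr.io/iree-oss/cmake:prod
```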
diff --git a/WORKSPACE b/WORKSPACE
index 381797c..fc53200 100644
--- a/WORKSPACE
+++ b/WORKSPACE
@@ -63,7 +63,7 @@
 rbe_autoconfig(
     name = "rbe_default",
     base_container_digest = "sha256:1a8ed713f40267bb51fe17de012fa631a20c52df818ccb317aaed2ee068dfc61",
-    digest = "sha256:d6d895294076b5289e81489f664656211c41656cffe7c448ecb5c6f54f045974",
+    digest = "sha256:0b44a9ec88bd032f892457e1e23728074d91b7bc00a4167a2862288594f54595",
     registry = "gcr.io",
     repository = "iree-oss/rbe-toolchain",
     use_checked_in_confs = "Force",
diff --git a/build_tools/docker/README.md b/build_tools/docker/README.md
index 3fd42d6..2bb7d4e 100644
--- a/build_tools/docker/README.md
+++ b/build_tools/docker/README.md
@@ -4,6 +4,8 @@
 for IREE. Images are uploaded to
 [Google Container Registry (GCR)](https://cloud.google.com/container-registry).
 
+## Running Images Locally
+
 To build an image, use `docker build`, e.g.:
 
 ```shell
@@ -16,6 +18,12 @@
 docker run --interactive --tty --rm cmake
 ```
 
+Production versions of the images can be downloaded from GCR:
+
+```shell
+docker pull gcr.io/iree-oss/cmake:prod
+```
+
 You can find more information in the
 [official Docker docs](https://docs.docker.com/get-started/overview/).
 
@@ -37,43 +45,31 @@
 
 We use a helper python script to manage the Docker image deployment. It lists
 all images and their dependencies and manages their canonical registry location.
-When creating a new image, add it to this mapping. To build an image and all
-images it depends on:
+This script pushes images to GCR, which requires the `Storage Admin` role in
+the `iree-oss` GCP project.
+
+When creating a new image, add it to the mapping in this script. To build an
+image and all images it depends on, push them to GCR, and update all
+references to the image digests:
 
 ```shell
-python3 build_tools/docker/manage_images.py --build --image cmake
+python3 build_tools/docker/manage_images.py --image cmake
 ```
 
-To build multiple images
+For multiple images:
 
 ```shell
-python3 build_tools/docker/manage_images.py --build --image cmake --image bazel
+python3 build_tools/docker/manage_images.py --image cmake --image bazel
 ```
 
-There is also the special option `--image all` to build all registered images.
-
-Pushing images to GCR requires the `Storage Admin` role in the `iree-oss` GCP
-project. To push these images to GCR with the `latest` tag:
+There is also the special option `--image all` to process all registered
+images.
 
 ```shell
-python3 build_tools/docker/manage_images.py --image cmake --push
+python3 build_tools/docker/manage_images.py --image all
 ```
 
-Kokoro build scripts and RBE configuration refer to images by their repository
-digest. You can update references to the digest:
-
-```shell
-python3 build_tools/docker/manage_images.py --images all --tag latest --update_references
-```
-
-This requires that the tagged image have a repository digest, which means it was
-pushed to or pulled from GCR.
-
 ## Adding or Updating an Image
 
-If you have worked with the `docker` images before, it is prudent to follow the
-steps in the "Debugging" section below before continuing.
-
 ### Part 1. Local Changes
 
 1. Update the `Dockerfile` for the image that you want to modify or add. If
@@ -83,11 +79,7 @@
    with the new GCR digest:
 
     ```shell
-    python3 build_tools/docker/manage_images.py \
-      --image "${IMAGE?}" --build \
-      --tag latest \
-      --push \
-      --update_references
+    python3 build_tools/docker/manage_images.py --image "${IMAGE?}"
     ```
 
 3. Test that the changes behave as expected locally and iterate on the steps
@@ -116,25 +108,3 @@
     ```shell
     python3 build_tools/docker/manage_prod.py
     ```
-
-## Debugging
-
-Sometimes old versions of the `:latest` images can be stored locally and produce
-unexpected behaviors. The following commands will download all of the prod
-images and then update the images tagged with `:latest` on your machine (and on
-GCR).
-
-```shell
-# Pull all images that should have :prod tags. (They won't if someone ignores
-# step 6 above, but the images that this command pulls are correct regardless).
-python3 build_tools/docker/manage_prod.py --pull_only
-
-# Update the :latest images to match the :prod images.
-# If you have a clean workspace this shouldn't require building anything as
-# everything should be cache hits from the :prod images downloaded above.
-python3 build_tools/docker/manage_images.py \
-  --images all --build \
-  --tag latest \
-  --push \
-  --update_references
-```
diff --git a/build_tools/docker/base/Dockerfile b/build_tools/docker/base/Dockerfile
index ea85143..4177601 100644
--- a/build_tools/docker/base/Dockerfile
+++ b/build_tools/docker/base/Dockerfile
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM ubuntu:18.04 AS final
+FROM ubuntu@sha256:fd25e706f3dea2a5ff705dbc3353cf37f08307798f3e360a13e9385840f73fb3 AS final
 
 # Environment variables for IREE.
 ENV CC /usr/bin/clang
diff --git a/build_tools/docker/bazel-python/Dockerfile b/build_tools/docker/bazel-python/Dockerfile
index 98b221b..45e2314 100644
--- a/build_tools/docker/bazel-python/Dockerfile
+++ b/build_tools/docker/bazel-python/Dockerfile
@@ -14,7 +14,7 @@
 
 # An image for building IREE with Python bindings using Bazel.
 
-FROM gcr.io/iree-oss/bazel AS final
+FROM gcr.io/iree-oss/bazel@sha256:066af7fcb39c13284ed47b2d6afe75f944c1d7415a21beaa5afd6319176654e8 AS final
 
 # Install python3 and numpy.
 RUN apt-get update \
diff --git a/build_tools/docker/bazel-tensorflow-swiftshader/Dockerfile b/build_tools/docker/bazel-tensorflow-swiftshader/Dockerfile
index b2c16e3..6b1efcc 100644
--- a/build_tools/docker/bazel-tensorflow-swiftshader/Dockerfile
+++ b/build_tools/docker/bazel-tensorflow-swiftshader/Dockerfile
@@ -17,7 +17,7 @@
 
 FROM gcr.io/iree-oss/bazel-tensorflow-vulkan AS final
 
-COPY --from=gcr.io/iree-oss/swiftshader swiftshader/ swiftshader/
+COPY --from=gcr.io/iree-oss/swiftshader@sha256:82c9ca1d8e31cfd7aaf0f0b7dc19899b4f50adaf00b6fdfbea49851007dd31f4 swiftshader/ swiftshader/
 
 # Set VK_ICD_FILENAMES so Vulkan loader can find the SwiftShader ICD.
 ENV VK_ICD_FILENAMES /swiftshader/vk_swiftshader_icd.json
diff --git a/build_tools/docker/bazel-tensorflow-vulkan/Dockerfile b/build_tools/docker/bazel-tensorflow-vulkan/Dockerfile
index 05b7209..a4e89f1 100644
--- a/build_tools/docker/bazel-tensorflow-vulkan/Dockerfile
+++ b/build_tools/docker/bazel-tensorflow-vulkan/Dockerfile
@@ -20,7 +20,7 @@
 
 ARG VULKAN_SDK_VERSION=1.2.154.0
 
-COPY --from=gcr.io/iree-oss/vulkan /opt/vulkan-sdk/ /opt/vulkan-sdk/
+COPY --from=gcr.io/iree-oss/vulkan@sha256:5812ee64806a7f3df0739ccf0930c27cabce346901488eceb1ee66c9c0a5ae96 /opt/vulkan-sdk/ /opt/vulkan-sdk/
 
 ENV VULKAN_SDK="/opt/vulkan-sdk/${VULKAN_SDK_VERSION}/x86_64"
 
diff --git a/build_tools/docker/bazel/Dockerfile b/build_tools/docker/bazel/Dockerfile
index a7cda8d..1c6982a 100644
--- a/build_tools/docker/bazel/Dockerfile
+++ b/build_tools/docker/bazel/Dockerfile
@@ -18,7 +18,7 @@
 # Change to a new version if upgrading Bazel.
 ARG NEW_BAZEL_VERSION=3.3.1
 
-FROM gcr.io/iree-oss/util AS install-bazel
+FROM gcr.io/iree-oss/util@sha256:40846b4aea5886af3250399d6adfdb3e1195a8b0177706bb0375e812d62dc49c AS install-bazel
 WORKDIR /install-bazel
 ARG BAZEL_VERSION
 ARG NEW_BAZEL_VERSION
@@ -42,7 +42,7 @@
   # is effectively a noop.
   && apt-get install -y "bazel=${BAZEL_VERSION?}" "bazel-${NEW_BAZEL_VERSION?}"
 
-FROM gcr.io/iree-oss/base AS final
+FROM gcr.io/iree-oss/base@sha256:1e57b0957f71cd1aa9d6e4838c51f40bdbb52dd1be0b4b6b14b337b36654cc63 AS final
 ARG BAZEL_VERSION
 ARG NEW_BAZEL_VERSION
 COPY --from=install-bazel \
diff --git a/build_tools/docker/cmake-android/Dockerfile b/build_tools/docker/cmake-android/Dockerfile
index cb7ef82..03e50f9 100644
--- a/build_tools/docker/cmake-android/Dockerfile
+++ b/build_tools/docker/cmake-android/Dockerfile
@@ -16,7 +16,7 @@
 
 ARG NDK_VERSION=r21d
 
-FROM gcr.io/iree-oss/util AS install-ndk
+FROM gcr.io/iree-oss/util@sha256:40846b4aea5886af3250399d6adfdb3e1195a8b0177706bb0375e812d62dc49c AS install-ndk
 ARG NDK_VERSION
 WORKDIR /install-ndk
 
@@ -24,7 +24,7 @@
 
 RUN unzip "android-ndk-${NDK_VERSION?}-linux-x86_64.zip" -d /usr/src/
 
-FROM gcr.io/iree-oss/cmake AS final
+FROM gcr.io/iree-oss/cmake@sha256:644cc10ea5a33bd97be51a8f6fd6ee7e2ab3904f468873be0f71373b0ec48919 AS final
 ARG NDK_VERSION
 COPY --from=install-ndk "/usr/src/android-ndk-${NDK_VERSION}" "/usr/src/android-ndk-${NDK_VERSION}"
 ENV ANDROID_NDK "/usr/src/android-ndk-${NDK_VERSION}"
diff --git a/build_tools/docker/cmake-python-swiftshader/Dockerfile b/build_tools/docker/cmake-python-swiftshader/Dockerfile
index 28af929..fc18dc3 100644
--- a/build_tools/docker/cmake-python-swiftshader/Dockerfile
+++ b/build_tools/docker/cmake-python-swiftshader/Dockerfile
@@ -16,7 +16,7 @@
 # Vulkan implementation.
 
 FROM gcr.io/iree-oss/cmake-python-vulkan AS final
-COPY --from=gcr.io/iree-oss/swiftshader /swiftshader /swiftshader
+COPY --from=gcr.io/iree-oss/swiftshader@sha256:82c9ca1d8e31cfd7aaf0f0b7dc19899b4f50adaf00b6fdfbea49851007dd31f4 /swiftshader /swiftshader
 
 # Set VK_ICD_FILENAMES so Vulkan loader can find the SwiftShader ICD.
 ENV VK_ICD_FILENAMES /swiftshader/vk_swiftshader_icd.json
diff --git a/build_tools/docker/cmake-python-vulkan/Dockerfile b/build_tools/docker/cmake-python-vulkan/Dockerfile
index af95afc..f8aa2d5 100644
--- a/build_tools/docker/cmake-python-vulkan/Dockerfile
+++ b/build_tools/docker/cmake-python-vulkan/Dockerfile
@@ -20,7 +20,7 @@
 
 ARG VULKAN_SDK_VERSION=1.2.154.0
 
-COPY --from=gcr.io/iree-oss/vulkan /opt/vulkan-sdk/ /opt/vulkan-sdk/
+COPY --from=gcr.io/iree-oss/vulkan@sha256:5812ee64806a7f3df0739ccf0930c27cabce346901488eceb1ee66c9c0a5ae96 /opt/vulkan-sdk/ /opt/vulkan-sdk/
 
 ENV VULKAN_SDK="/opt/vulkan-sdk/${VULKAN_SDK_VERSION}/x86_64"
 
diff --git a/build_tools/docker/cmake-python/Dockerfile b/build_tools/docker/cmake-python/Dockerfile
index 0f1e618..5daa252 100644
--- a/build_tools/docker/cmake-python/Dockerfile
+++ b/build_tools/docker/cmake-python/Dockerfile
@@ -14,7 +14,7 @@
 
 # An image for building IREE and its Python bindings using CMake.
 
-FROM gcr.io/iree-oss/cmake AS final
+FROM gcr.io/iree-oss/cmake@sha256:644cc10ea5a33bd97be51a8f6fd6ee7e2ab3904f468873be0f71373b0ec48919 AS final
 # Dependencies for the python bindings tests.
 RUN apt-get update \
   && apt-get install -y \
diff --git a/build_tools/docker/cmake/Dockerfile b/build_tools/docker/cmake/Dockerfile
index 00088ac..a0b4263 100644
--- a/build_tools/docker/cmake/Dockerfile
+++ b/build_tools/docker/cmake/Dockerfile
@@ -21,7 +21,7 @@
 ARG CMAKE_MINOR_VERSION=13
 ARG CMAKE_PATCH_VERSION=5
 
-FROM gcr.io/iree-oss/util AS install-cmake
+FROM gcr.io/iree-oss/util@sha256:40846b4aea5886af3250399d6adfdb3e1195a8b0177706bb0375e812d62dc49c AS install-cmake
 ARG CMAKE_MAJOR_VERSION
 ARG CMAKE_MINOR_VERSION
 ARG CMAKE_PATCH_VERSION
@@ -33,7 +33,7 @@
 RUN chmod +x "./cmake-${CMAKE_VERSION?}-Linux-x86_64.sh"
 RUN "./cmake-${CMAKE_VERSION?}-Linux-x86_64.sh" --skip-license --prefix=/usr/
 
-FROM gcr.io/iree-oss/base AS final
+FROM gcr.io/iree-oss/base@sha256:1e57b0957f71cd1aa9d6e4838c51f40bdbb52dd1be0b4b6b14b337b36654cc63 AS final
 ARG CMAKE_MAJOR_VERSION
 ARG CMAKE_MINOR_VERSION
 
diff --git a/build_tools/docker/manage_images.py b/build_tools/docker/manage_images.py
index 4093c68..835033e 100755
--- a/build_tools/docker/manage_images.py
+++ b/build_tools/docker/manage_images.py
@@ -17,21 +17,20 @@
 
 Includes information on their dependency graph and GCR URL.
 
-See the README for information on how to add and update images.
+See the README for more information on how to add and update images.
 
 Example usage:
 
 Rebuild the cmake image and all images that transitively depend on it,
-tagging them with `latest`:
-  python3 build_tools/docker/manage_images.py --build --image cmake
+tagging them with `latest` and updating all references to their sha digests:
+  python3 build_tools/docker/manage_images.py --image cmake
 
 Print the output for rebuilding the cmake image and all images that
 transitively depend on it, but don't take side-effecting actions:
-  python3 build_tools/docker/manage_images.py --build --image cmake --dry-run
+  python3 build_tools/docker/manage_images.py --image cmake --dry-run
 
 Rebuild and push all images and update references to them in the repository:
-  python3 build_tools/docker/manage_images.py --push --images all
-  --update-references
+  python3 build_tools/docker/manage_images.py --images all
 """
 
 import argparse
@@ -41,7 +40,7 @@
 import re
 import subprocess
 import sys
-from typing import List, Sequence, Union
+from typing import Dict, List, Sequence, Union
 
 import utils
 
@@ -89,22 +88,6 @@
                       required=True,
                       action='append',
                       help=f'Name of the image to build: {IMAGES_HELP}.')
-  parser.add_argument('--pull',
-                      action='store_true',
-                      help='Pull the specified image before building.')
-  parser.add_argument('--build',
-                      action='store_true',
-                      help='Build new images from the current Dockerfiles.')
-  parser.add_argument(
-      '--push',
-      action='store_true',
-      help='Push the built images to GCR. Requires gcloud authorization.')
-  parser.add_argument(
-      '--update_references',
-      '--update-references',
-      action='store_true',
-      help='Update all references to the specified images to point at the new'
-      ' digest.')
   parser.add_argument(
       '--dry_run',
       '--dry-run',
@@ -124,44 +107,31 @@
   return args
 
 
+def _dag_dfs(input_nodes: Sequence[str],
+             node_to_child_nodes: Dict[str, Sequence[str]]) -> List[str]:
+  # Python doesn't have a builtin OrderedSet, but we don't have many images, so
+  # we just use a list.
+  ordered_nodes = []
+
+  def add_children(parent_node: str):
+    if parent_node not in ordered_nodes:
+      for child_node in node_to_child_nodes[parent_node]:
+        add_children(child_node)
+      ordered_nodes.append(parent_node)
+
+  for node in input_nodes:
+    add_children(node)
+  return ordered_nodes
+
+
 def get_ordered_images_to_process(images: Sequence[str]) -> List[str]:
-  # Python doesn't have a builtin OrderedSet, so we mimic one to the extent
-  # that we need by using 'in' before adding any elements.
-  processing_order = []
-
-  def add_dependent_images(image: str):
-    if image not in processing_order:
-      for dependent_image in IMAGES_TO_DEPENDENT_IMAGES[image]:
-        add_dependent_images(dependent_image)
-      processing_order.append(image)
-
-  for image in images:
-    add_dependent_images(image)
-
-  processing_order.reverse()
-  return processing_order
+  dependents = _dag_dfs(images, IMAGES_TO_DEPENDENT_IMAGES)
+  dependents.reverse()
+  return dependents
 
 
-def run_command(command: Sequence[str],
-                dry_run: bool = False,
-                check: bool = True,
-                capture_output: bool = False,
-                universal_newlines: bool = True,
-                **run_kwargs) -> subprocess.CompletedProcess:
-  """Thin wrapper around subprocess.run"""
-  print(f'Running: `{" ".join(command)}`')
-  if dry_run:
-    # Dummy CompletedProess with successful returncode.
-    return subprocess.CompletedProcess(command, returncode=0)
-
-  if capture_output:
-    # Hardcode support for python <= 3.6.
-    run_kwargs['stdout'] = subprocess.PIPE
-    run_kwargs['stderr'] = subprocess.PIPE
-  return subprocess.run(command,
-                        universal_newlines=universal_newlines,
-                        check=check,
-                        **run_kwargs)
+def get_dependencies(images: Sequence[str]) -> List[str]:
+  return _dag_dfs(images, IMAGES_TO_DEPENDENCIES)
 
 
 def get_repo_digest(tagged_image_url: str) -> str:
@@ -189,10 +159,13 @@
 def update_rbe_reference(digest: str, dry_run: bool = False):
   print('Updating WORKSPACE file for rbe-toolchain')
   digest_updates = 0
-  for line in fileinput.input(files=['WORKSPACE'], inplace=(not dry_run)):
+  for line in fileinput.input(files=['WORKSPACE'], inplace=True):
     if line.strip().startswith('digest ='):
       digest_updates += 1
-      print(re.sub(DIGEST_REGEX, digest, line), end='')
+      if dry_run:
+        print(line, end='')
+      else:
+        print(re.sub(DIGEST_REGEX, digest, line), end='')
     else:
       print(line, end='')
 
@@ -209,9 +182,9 @@
 
   grep_command = ['git', 'grep', '-l', f'{image_url}@sha256']
   try:
-    completed_process = run_command(grep_command,
-                                    capture_output=True,
-                                    timeout=5)
+    completed_process = utils.run_command(grep_command,
+                                          capture_output=True,
+                                          timeout=5)
   except subprocess.CalledProcessError as error:
     if error.returncode == 1:
       print(f'Found no references to {image_url}')
@@ -221,51 +194,69 @@
   # Update references in all grepped files.
   files = completed_process.stdout.split()
   print(f'Updating references in {len(files)} files: {files}')
-  for line in fileinput.input(files=files, inplace=(not dry_run)):
-    print(re.sub(f'{image_url}@{DIGEST_REGEX}', f'{image_url}@{digest}', line),
-          end='')
+  if not dry_run:
+    for line in fileinput.input(files=files, inplace=True):
+      print(re.sub(f'{image_url}@{DIGEST_REGEX}', f'{image_url}@{digest}',
+                   line),
+            end='')
+
+
+def parse_prod_digests() -> Dict[str, str]:
+  image_urls_to_prod_digests = {}
+  with open(utils.PROD_DIGESTS_PATH, "r") as f:
+    for line in f:
+      image_url, digest = line.strip().split("@")
+      image_urls_to_prod_digests[image_url] = digest
+  return image_urls_to_prod_digests
 
 
 if __name__ == '__main__':
   args = parse_arguments()
 
-  if args.push:
-    # Ensure the user has the correct authorization if they try to push to GCR.
-    utils.check_gcloud_auth(dry_run=args.dry_run)
+  # Ensure the user has the correct authorization to push to GCR.
+  utils.check_gcloud_auth(dry_run=args.dry_run)
 
   images_to_process = get_ordered_images_to_process(args.images)
   print(f'Also processing dependent images. Will process: {images_to_process}')
 
+  dependencies = get_dependencies(images_to_process)
+  print(f'Pulling image dependencies: {dependencies}')
+  image_urls_to_prod_digests = parse_prod_digests()
+  for dependency in dependencies:
+    dependency_url = posixpath.join(IREE_GCR_URL, dependency)
+    # If `dependency` is a new image then it may not have a prod digest yet.
+    if dependency_url in image_urls_to_prod_digests:
+      digest = image_urls_to_prod_digests[dependency_url]
+      dependency_with_digest = f'{dependency_url}@{digest}'
+      utils.run_command(["docker", "pull", dependency_with_digest],
+                        dry_run=args.dry_run)
+
   for image in images_to_process:
     print(f'Processing image {image}')
     image_url = posixpath.join(IREE_GCR_URL, image)
-    tagged_image_url = f'{image_url}:latest'
+    tagged_image_url = f'{image_url}'
     image_path = os.path.join(DOCKER_DIR, image)
 
-    if args.pull:
-      utils.run_command(['docker', 'pull', tagged_image_url], args.dry_run)
+    utils.run_command(
+        ['docker', 'build', '--tag', tagged_image_url, image_path],
+        dry_run=args.dry_run)
 
-    if args.build:
-      utils.run_command(
-          ['docker', 'build', '--tag', tagged_image_url, image_path],
-          args.dry_run)
+    utils.run_command(['docker', 'push', tagged_image_url],
+                      dry_run=args.dry_run)
 
-    if args.push:
-      utils.run_command(['docker', 'push', tagged_image_url], args.dry_run)
+    digest = get_repo_digest(tagged_image_url)
 
-    if args.update_references:
-      digest = get_repo_digest(tagged_image_url)
-
-      # Check that the image is in 'prod_digests.txt' and append it to the list
-      # in the file if it isn't. We know that the GCR digest exists at this
-      # point because 'get_repo_digest' confirms that the image has been pushed.
-      with open(utils.PROD_DIGESTS_PATH, 'r') as f:
-        in_prod_digests = f'{image_url}@' in f.read()
-      if not in_prod_digests:
+    # Check that the image is in 'prod_digests.txt' and append it to the list
+    # in the file if it isn't.
+    if image_url not in image_urls_to_prod_digests:
+      image_with_digest = f'{image_url}@{digest}'
+      print(
+          f'Adding new image {image_with_digest} to {utils.PROD_DIGESTS_PATH}')
+      if not args.dry_run:
         with open(utils.PROD_DIGESTS_PATH, 'a') as f:
-          f.write(f'{image_url}@{digest}\n')
+          f.write(f'{image_with_digest}\n')
 
-      # Just hardcode this oddity
-      if image == 'rbe-toolchain':
-        update_rbe_reference(digest, dry_run=args.dry_run)
-      update_references(image_url, digest, dry_run=args.dry_run)
+    # Just hardcode this oddity
+    if image == 'rbe-toolchain':
+      update_rbe_reference(digest, dry_run=args.dry_run)
+    update_references(image_url, digest, dry_run=args.dry_run)
diff --git a/build_tools/docker/manage_prod.py b/build_tools/docker/manage_prod.py
old mode 100644
new mode 100755
index 3f26b00..692c99e
--- a/build_tools/docker/manage_prod.py
+++ b/build_tools/docker/manage_prod.py
@@ -20,46 +20,26 @@
   them to GCR. This will make sure that you are at upstream head on the main
   branch before pushing:
     python3 build_tools/docker/manage_prod.py
-
-  Pull all images that should have :prod tags:
-    python3  build_tools/docker/manage_prod.py --pull_only
 """
 
-import argparse
 import os
 import utils
 
-
-def parse_arguments():
-  """Parses command-line options."""
-  parser = argparse.ArgumentParser(
-      description="Pull and push the images in prod_digests.txt to GCR.")
-  parser.add_argument("--pull_only",
-                      "--pull-only",
-                      action="store_true",
-                      help="Pull but do not tag or push the images.")
-  return parser.parse_args()
-
-
 if __name__ == "__main__":
-  args = parse_arguments()
+  # Ensure the user has the correct authorization to push to GCR.
+  utils.check_gcloud_auth()
 
-  if not args.pull_only:
-    # Ensure the user has the correct authorization if they try to push to GCR.
-    utils.check_gcloud_auth()
-
-    # Only allow the :prod tag to be pushed from the version of
-    # `prod_digests.txt` at upstream HEAD on the main branch.
-    utils.run_command([os.path.normpath("scripts/git/git_update.sh"), "main"])
+  # Only allow the :prod tag to be pushed from the version of
+  # `prod_digests.txt` at upstream HEAD on the main branch.
+  utils.run_command([os.path.normpath("scripts/git/git_update.sh"), "main"])
 
   with open(utils.PROD_DIGESTS_PATH, "r") as f:
     images_with_digests = [line.strip() for line in f.readlines()]
 
   for image_with_digest in images_with_digests:
     image_url, _ = image_with_digest.split("@")
-    tagged_image_url = f"{image_url}:prod"
+    prod_image_url = f"{image_url}:prod"
 
     utils.run_command(["docker", "pull", image_with_digest])
-    if not args.pull_only:
-      utils.run_command(["docker", "tag", image_with_digest, tagged_image_url])
-      utils.run_command(["docker", "push", tagged_image_url])
+    utils.run_command(["docker", "tag", image_with_digest, prod_image_url])
+    utils.run_command(["docker", "push", prod_image_url])
diff --git a/build_tools/docker/prod_digests.txt b/build_tools/docker/prod_digests.txt
index 4ee200a..4513340 100644
--- a/build_tools/docker/prod_digests.txt
+++ b/build_tools/docker/prod_digests.txt
@@ -1,17 +1,17 @@
-gcr.io/iree-oss/base@sha256:392b2f865f000c6fb558d01a372446f3ab81120db34249f03efa999669647230
-gcr.io/iree-oss/util@sha256:ec9198493cea4f5d9ac7097e8a64b94b7a43628cb995b91e6e89a95cff4a1982
-gcr.io/iree-oss/cmake@sha256:ceaff365ca0cd3d770daf5fad370e29783e30b654f56780761a6d0a040da45e5
-gcr.io/iree-oss/swiftshader@sha256:3ed32e7c74da71b6db1904b583827e760ea845d5d2876b38c036cf72ca6e5623
-gcr.io/iree-oss/cmake-python@sha256:5fa42743c458a7df680175542269067fd89d2072b776b43e48169a7d0b43ebc3
-gcr.io/iree-oss/cmake-android@sha256:7accda0b84e2ae337740f2ee71801ee30f2155900abf1cf7b73ea47c15dc694f
-gcr.io/iree-oss/bazel@sha256:59da17e5cc8176890a6e1bda369b1f3d398e27af3d47e02e1ffd5b76729c215b
-gcr.io/iree-oss/bazel-python@sha256:473b7e294136bc38abc1941042f0c0404199de5827f141520f0b6757305b7a95
-gcr.io/iree-oss/bazel-tensorflow@sha256:6ec501edcbaaf817941c5be5060cafc47616ca4e2a875bbb62944ffbc396ceb0
-gcr.io/iree-oss/vulkan@sha256:c2e21657a231f3e39c50c01c3cbae3355f5b03ff52033b41ad322a0c792099dd
-gcr.io/iree-oss/rbe-toolchain@sha256:d6d895294076b5289e81489f664656211c41656cffe7c448ecb5c6f54f045974
-gcr.io/iree-oss/cmake-python-vulkan@sha256:63db8f65485e73af8a16603729bf39b4e616b6cb90216a1589ba2919489a6483
-gcr.io/iree-oss/cmake-python-swiftshader@sha256:3e3d3427f3a58b32fa3ed578b610e411e0b81fd0e1984ac9b0fceae8bf8343dc
-gcr.io/iree-oss/cmake-python-nvidia@sha256:310e3b399717905bb2b485f3ebed32222915c7dc4dc075aa4e1b8551101fe607
-gcr.io/iree-oss/bazel-tensorflow-vulkan@sha256:61522fcfcd11cd9c067e991b72419d6decf70dae35b8ee3efa71e55ca31b8866
-gcr.io/iree-oss/bazel-tensorflow-swiftshader@sha256:39c0e43c503bddfacd69758a50f02450ad2322d35324e2f56997aebb33a1b20a
-gcr.io/iree-oss/bazel-tensorflow-nvidia@sha256:e5e96ec1709e83355ee2264c97c26fa5c3d40f749a62734f4787b17a83f2c3b8
+gcr.io/iree-oss/base@sha256:1e57b0957f71cd1aa9d6e4838c51f40bdbb52dd1be0b4b6b14b337b36654cc63
+gcr.io/iree-oss/util@sha256:40846b4aea5886af3250399d6adfdb3e1195a8b0177706bb0375e812d62dc49c
+gcr.io/iree-oss/cmake@sha256:644cc10ea5a33bd97be51a8f6fd6ee7e2ab3904f468873be0f71373b0ec48919
+gcr.io/iree-oss/swiftshader@sha256:82c9ca1d8e31cfd7aaf0f0b7dc19899b4f50adaf00b6fdfbea49851007dd31f4
+gcr.io/iree-oss/cmake-python@sha256:f90e72f8d01c53f462bef56d90a07fed833ff754637d324ad95d81c8699c1309
+gcr.io/iree-oss/cmake-android@sha256:78db00980309a0b52f8c877f8717b3d9ac3c35b619ae704e21f165345409685f
+gcr.io/iree-oss/bazel@sha256:066af7fcb39c13284ed47b2d6afe75f944c1d7415a21beaa5afd6319176654e8
+gcr.io/iree-oss/bazel-python@sha256:b9fc661cedcf3f5f0cce3f207640f79cb92ba72a9f850e1041312ec0ecdefa39
+gcr.io/iree-oss/bazel-tensorflow@sha256:4c2845e20e62f991e34a7cbe973a12ee824e9adc146fb86fdeee1c4e6b35cb12
+gcr.io/iree-oss/vulkan@sha256:5812ee64806a7f3df0739ccf0930c27cabce346901488eceb1ee66c9c0a5ae96
+gcr.io/iree-oss/rbe-toolchain@sha256:0b44a9ec88bd032f892457e1e23728074d91b7bc00a4167a2862288594f54595
+gcr.io/iree-oss/cmake-python-vulkan@sha256:9a764e4944951a8717a4dfbfdcedb0ddd40f63ff681b2e2f24e34fe3e8bb85e7
+gcr.io/iree-oss/cmake-python-swiftshader@sha256:ff5255d6c4d9602d1f4f2a67e00f017a8bac921c4ce796cb7a24763a04f5999c
+gcr.io/iree-oss/cmake-python-nvidia@sha256:bf6ce5a17c44b041d2fcc74018afd30b6ad35cb769d668f49e615085daddf8a7
+gcr.io/iree-oss/bazel-tensorflow-vulkan@sha256:a33217d03c1a1e96056c7ffa2c0c8857634a9cde23f5d346a58f5e266e3c011a
+gcr.io/iree-oss/bazel-tensorflow-swiftshader@sha256:a60d30603e5a2b1be4fd8cf0ebd2dd0d454d1756e13b5360e969d18af1b5b23a
+gcr.io/iree-oss/bazel-tensorflow-nvidia@sha256:575ba235ebbbcee5bc26f20c6362664a62113ac869c8868ad415c175fe9c08b0
diff --git a/build_tools/docker/rbe-toolchain/Dockerfile b/build_tools/docker/rbe-toolchain/Dockerfile
index f3fef55..f31f9f3 100644
--- a/build_tools/docker/rbe-toolchain/Dockerfile
+++ b/build_tools/docker/rbe-toolchain/Dockerfile
@@ -22,7 +22,8 @@
 # own toolchains.
 
 ######################## Install Swiftshader ###################################
-FROM ubuntu:16.04 AS install-swiftshader
+# Ubuntu 16.04
+FROM ubuntu@sha256:3355b6e4ba1b12071ba5fe9742042a2f10b257c908fbdfac81912a16eb463879 AS install-swiftshader
 WORKDIR /install-swiftshader
 
 RUN apt-get update && apt-get install -y wget
@@ -82,7 +83,7 @@
 
 ARG VULKAN_SDK_VERSION=1.2.154.0
 
-COPY --from=gcr.io/iree-oss/vulkan /opt/vulkan-sdk/ /opt/vulkan-sdk/
+COPY --from=gcr.io/iree-oss/vulkan@sha256:5812ee64806a7f3df0739ccf0930c27cabce346901488eceb1ee66c9c0a5ae96 /opt/vulkan-sdk/ /opt/vulkan-sdk/
 
 ENV VULKAN_SDK="/opt/vulkan-sdk/${VULKAN_SDK_VERSION}/x86_64"
 
diff --git a/build_tools/docker/swiftshader/Dockerfile b/build_tools/docker/swiftshader/Dockerfile
index 562874b..8869dc7 100644
--- a/build_tools/docker/swiftshader/Dockerfile
+++ b/build_tools/docker/swiftshader/Dockerfile
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM gcr.io/iree-oss/cmake AS install-swiftshader
+FROM gcr.io/iree-oss/cmake@sha256:644cc10ea5a33bd97be51a8f6fd6ee7e2ab3904f468873be0f71373b0ec48919 AS install-swiftshader
 WORKDIR /install-swiftshader
 
 RUN apt-get update && apt-get install -y git
@@ -40,5 +40,6 @@
 # Keep track of the commit we are using.
 RUN echo "${SWIFTSHADER_COMMIT?}" > /swiftshader/git-commit
 
-FROM ubuntu:18.04 AS final
+# Ubuntu 18.04
+FROM ubuntu@sha256:fd25e706f3dea2a5ff705dbc3353cf37f08307798f3e360a13e9385840f73fb3 AS final
 COPY --from=install-swiftshader /swiftshader /swiftshader
diff --git a/build_tools/docker/util/Dockerfile b/build_tools/docker/util/Dockerfile
index 1b34d25..2010dee 100644
--- a/build_tools/docker/util/Dockerfile
+++ b/build_tools/docker/util/Dockerfile
@@ -16,7 +16,8 @@
 # needed in the final images. Intermediate stages can inherit from this image,
 # but final stages should not.
 
-FROM ubuntu:18.04 AS final
+# Ubuntu 18.04
+FROM ubuntu@sha256:fd25e706f3dea2a5ff705dbc3353cf37f08307798f3e360a13e9385840f73fb3 AS final
 
 RUN apt-get update \
   && apt-get install -y \
diff --git a/build_tools/docker/vulkan/Dockerfile b/build_tools/docker/vulkan/Dockerfile
index 24a8914..0653a13 100644
--- a/build_tools/docker/vulkan/Dockerfile
+++ b/build_tools/docker/vulkan/Dockerfile
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM gcr.io/iree-oss/util AS base
+FROM gcr.io/iree-oss/util@sha256:40846b4aea5886af3250399d6adfdb3e1195a8b0177706bb0375e812d62dc49c AS base
 
 ARG VULKAN_SDK_VERSION=1.2.154.0
 
diff --git a/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/bindings/build_kokoro.sh b/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/bindings/build_kokoro.sh
index 9e18be2..256472e 100755
--- a/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/bindings/build_kokoro.sh
+++ b/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/bindings/build_kokoro.sh
@@ -32,7 +32,7 @@
 docker_setup
 
 docker run "${DOCKER_RUN_ARGS[@]?}" \
-  gcr.io/iree-oss/bazel-python@sha256:473b7e294136bc38abc1941042f0c0404199de5827f141520f0b6757305b7a95 \
+  gcr.io/iree-oss/bazel-python@sha256:b9fc661cedcf3f5f0cce3f207640f79cb92ba72a9f850e1041312ec0ecdefa39 \
   build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/bindings/build.sh
 
 # Kokoro will rsync this entire directory back to the executor orchestrating the
diff --git a/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/core/build_kokoro.sh b/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/core/build_kokoro.sh
index 72f53b1..d910e2c 100755
--- a/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/core/build_kokoro.sh
+++ b/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/core/build_kokoro.sh
@@ -32,7 +32,7 @@
 docker_setup
 
 docker run "${DOCKER_RUN_ARGS[@]?}" \
-  gcr.io/iree-oss/bazel@sha256:59da17e5cc8176890a6e1bda369b1f3d398e27af3d47e02e1ffd5b76729c215b \
+  gcr.io/iree-oss/bazel@sha256:066af7fcb39c13284ed47b2d6afe75f944c1d7415a21beaa5afd6319176654e8 \
   build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/core/build.sh
 
 # Kokoro will rsync this entire directory back to the executor orchestrating the
diff --git a/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/integrations/build_kokoro.sh b/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/integrations/build_kokoro.sh
index 3b9f412..3d89914 100755
--- a/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/integrations/build_kokoro.sh
+++ b/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/integrations/build_kokoro.sh
@@ -32,7 +32,7 @@
 docker_setup
 
 docker run "${DOCKER_RUN_ARGS[@]?}" \
-  gcr.io/iree-oss/bazel-tensorflow-swiftshader@sha256:39c0e43c503bddfacd69758a50f02450ad2322d35324e2f56997aebb33a1b20a \
+  gcr.io/iree-oss/bazel-tensorflow-swiftshader@sha256:a60d30603e5a2b1be4fd8cf0ebd2dd0d454d1756e13b5360e969d18af1b5b23a \
   build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-swiftshader/integrations/build.sh
 
 # Kokoro will rsync this entire directory back to the executor orchestrating the
diff --git a/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-turing/integrations/build_kokoro.sh b/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-turing/integrations/build_kokoro.sh
index dc43124..ff47d21 100755
--- a/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-turing/integrations/build_kokoro.sh
+++ b/build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-turing/integrations/build_kokoro.sh
@@ -36,7 +36,7 @@
 # TODO(#3550): Allow this to follow the checked-in Docker hierarchy.
 docker run "${DOCKER_RUN_ARGS[@]?}" \
   --gpus all \
-  gcr.io/iree-oss/bazel-tensorflow-nvidia@sha256:e5e96ec1709e83355ee2264c97c26fa5c3d40f749a62734f4787b17a83f2c3b8 \
+  gcr.io/iree-oss/bazel-tensorflow-nvidia@sha256:575ba235ebbbcee5bc26f20c6362664a62113ac869c8868ad415c175fe9c08b0 \
   build_tools/kokoro/gcp_ubuntu/bazel/linux/x86-turing/integrations/build.sh
 
 # Kokoro will rsync this entire directory back to the executor orchestrating the
diff --git a/build_tools/kokoro/gcp_ubuntu/cmake/android/arm64-v8a/build_kokoro.sh b/build_tools/kokoro/gcp_ubuntu/cmake/android/arm64-v8a/build_kokoro.sh
index 43f6e0a..68f13a8 100755
--- a/build_tools/kokoro/gcp_ubuntu/cmake/android/arm64-v8a/build_kokoro.sh
+++ b/build_tools/kokoro/gcp_ubuntu/cmake/android/arm64-v8a/build_kokoro.sh
@@ -32,7 +32,7 @@
 docker_setup
 
 docker run "${DOCKER_RUN_ARGS[@]?}" \
-  gcr.io/iree-oss/cmake-android@sha256:7accda0b84e2ae337740f2ee71801ee30f2155900abf1cf7b73ea47c15dc694f \
+  gcr.io/iree-oss/cmake-android@sha256:78db00980309a0b52f8c877f8717b3d9ac3c35b619ae704e21f165345409685f \
   build_tools/kokoro/gcp_ubuntu/cmake/android/build.sh arm64-v8a
 
 # Kokoro will rsync this entire directory back to the executor orchestrating the
diff --git a/build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-swiftshader/build_kokoro.sh b/build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-swiftshader/build_kokoro.sh
index eb80e5a..e8dcac4 100755
--- a/build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-swiftshader/build_kokoro.sh
+++ b/build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-swiftshader/build_kokoro.sh
@@ -32,7 +32,7 @@
 docker_setup
 
 docker run "${DOCKER_RUN_ARGS[@]?}" \
-  gcr.io/iree-oss/cmake-python-swiftshader@sha256:3e3d3427f3a58b32fa3ed578b610e411e0b81fd0e1984ac9b0fceae8bf8343dc \
+  gcr.io/iree-oss/cmake-python-swiftshader@sha256:ff5255d6c4d9602d1f4f2a67e00f017a8bac921c4ce796cb7a24763a04f5999c \
   build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-swiftshader/build.sh
 
 # Kokoro will rsync this entire directory back to the executor orchestrating the
diff --git a/build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-turing/build_kokoro.sh b/build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-turing/build_kokoro.sh
index c77abcd..9452065 100755
--- a/build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-turing/build_kokoro.sh
+++ b/build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-turing/build_kokoro.sh
@@ -33,7 +33,7 @@
 
 docker run "${DOCKER_RUN_ARGS[@]?}" \
   --gpus all \
-  gcr.io/iree-oss/cmake-python-nvidia@sha256:310e3b399717905bb2b485f3ebed32222915c7dc4dc075aa4e1b8551101fe607 \
+  gcr.io/iree-oss/cmake-python-nvidia@sha256:bf6ce5a17c44b041d2fcc74018afd30b6ad35cb769d668f49e615085daddf8a7 \
   build_tools/kokoro/gcp_ubuntu/cmake/linux/x86-turing/build.sh
 
 # Kokoro will rsync this entire directory back to the executor orchestrating the