| commit | 51329bf5fe1aa5bc87b3eece2ce2f9d2978e63a2 | |
|---|---|---|
| author | Scott Todd <scott.todd0@gmail.com> | Mon Sep 23 08:17:53 2024 -0700 |
| committer | GitHub <noreply@github.com> | Mon Sep 23 08:17:53 2024 -0700 |
| tree | fceb57ac3a3f58bfae4f03159a0efb5b57aab06c | |
| parent | b08cf020b3a0b284fe0167a697635a884820eeab | |
Migrate ci_linux_arm64_clang to new dockerfile. (#18569)

Progress on https://github.com/iree-org/iree/issues/15332. This was the last active use of [`build_tools/docker/`](https://github.com/iree-org/iree/tree/main/build_tools/docker), so we can now delete that directory: https://github.com/iree-org/iree/pull/18566.

This uses the same "cpubuilder" dockerfile as the x86_64 builds, which is now built for multiple architectures thanks to https://github.com/iree-org/base-docker-images/pull/11. As before, we install a qemu binary in the dockerfile, this time using the approach in https://github.com/iree-org/base-docker-images/pull/13 instead of a forked dockerfile.

Prior PRs for context:

* https://github.com/iree-org/iree/pull/14372
* https://github.com/iree-org/iree/pull/16331

Build time varies pretty wildly depending on cache hit rate (and the phase of the moon):

| Scenario | Cache hit rate | Time | Logs |
| -- | -- | -- | -- |
| Cold cache | 0% | 1h45m | [Logs](https://github.com/iree-org/iree/actions/runs/10962049593/job/30440393279) |
| Warm (?) cache | 61% | 48m | [Logs](https://github.com/iree-org/iree/actions/runs/10963546631/job/30445257323) |
| Warm (hot?) cache | 98% | 16m | [Logs](https://github.com/iree-org/iree/actions/runs/10964289304/job/30447618503?pr=18569) |

CI history (https://github.com/iree-org/iree/actions/workflows/ci_linux_arm64_clang.yml?query=branch%3Amain) shows that 97% cache hit rates and 17 minute job times are regularly achievable. I'm not sure why one test run only got 61% cache hits; since this job only runs nightly, that is not a high priority to investigate and fix.

If we migrate the arm64 runner off of GCP (https://github.com/iree-org/iree/issues/18238), we can further simplify this workflow by dropping its reliance on `gcloud auth application-default print-access-token` and the `docker_run.sh` script. Other workflows already use `source setup_sccache.sh` and other shared setup code.
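For illustration only, a simplified post-migration build step could look roughly like the sketch below. The runner label, container image name, and the `build_and_test.sh` script are assumptions made for this example; they are not the actual contents of `ci_linux_arm64_clang.yml`.

```yaml
# Hypothetical sketch of a simplified arm64 build job. The runner label,
# image name, and script paths are illustrative assumptions.
jobs:
  linux_arm64_clang:
    runs-on: ubuntu-24.04-arm  # assumed arm64 runner label
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
      - name: Build and test in the cpubuilder container
        env:
          CPUBUILDER_IMAGE: ghcr.io/iree-org/cpubuilder:latest  # placeholder image tag
        run: |
          # Run the build inside the multi-arch cpubuilder image, reusing the
          # shared sccache setup instead of a GCP-specific docker_run.sh wrapper.
          # build_and_test.sh is a hypothetical stand-in for the real build script.
          docker run --rm -v "$PWD:$PWD" -w "$PWD" "$CPUBUILDER_IMAGE" \
            bash -c "source setup_sccache.sh && ./build_and_test.sh"
```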
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled down on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback on any of our communication channels.
| Package | Release status |
|---|---|
| GitHub release (stable) | |
| GitHub release (nightly) | |
| Python iree-compiler | |
| Python iree-runtime | |
| Host platform | Build status |
|---|---|
| Linux | |
| macOS | |
| Windows | |
For the full list of workflows see https://iree.dev/developers/general/github-actions/.
See our website for more information.
Community meeting recordings: IREE YouTube channel
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.