| commit | 900ec67677ccca3eaae79fa02fe78bec30603f3c | |
|---|---|---|
| author | Scott Todd &lt;scott.todd0@gmail.com&gt; | Wed May 22 10:25:40 2024 -0700 |
| committer | GitHub &lt;noreply@github.com&gt; | Wed May 22 17:25:40 2024 +0000 |
| tree | 8e8eb048f2ded614f21ee99983e5e4076c7d7ecf | |
| parent | f7ca45d5d58898a2bf9b27c4c7ac09b3b2b48f8f | |
Split benchmark jobs into their own independent workflow file. (#17400)

Progress on https://github.com/iree-org/iree/issues/17001. This moves the benchmark jobs from `ci.yml` to a standalone `benchmark.yml` workflow file.

Note that there were some experiments on https://github.com/pzread/iree/tree/bench-sep to have a separate workflow file wait on the results of the main workflow file. I opted for a simpler approach here.

Pros:

* Triggering benchmarks will just re-run the `Benchmark / build_all` -> `Benchmark / [others]` jobs, not unrelated builds/tests (e.g. `CI / gcc`, `CI / asan`)
* Test failures will no longer block benchmark runs (there are other ways to achieve this though)
* Benchmarks will now be easier to move into a separate directory or repository (they never should have been this ingrained in the main repo...)
* The `CI` workflow now has a simpler graph view, and `Benchmark` is simpler too

Cons:

* The `CI / build_all` job is now duplicated as `Benchmark / build_all`, resulting in duplicated CI time (8-20 minutes per run) and cloud storage (3GB per run)
* The tests in `CI / build_all` will still block `CI / test_nvidia_gpu` and other test jobs
* There is now yet another page to view all workflow logs (checks are split between Lint, PkgCI, CI, and Benchmark)

Note: the sequencing with this change is tricky - the `benchmark_trigger.yml` and `post_benchmark_comment.yml` workflow files run on already-committed code for security reasons. I can't fully test this, and some outstanding PRs might need to be synced after it lands.
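For reference, here is a minimal sketch of the standalone workflow shape the commit describes. The `Benchmark` workflow name and the `build_all` job name come from the commit message; the trigger events, runner labels, benchmark job name, and step contents are illustrative assumptions, not the actual IREE configuration:

```yaml
# benchmark.yml — hypothetical sketch of the standalone benchmark workflow.
name: Benchmark

on:
  # Assumed triggers; the real workflow may gate runs differently
  # (e.g. via benchmark_trigger.yml as mentioned above).
  pull_request:
  workflow_dispatch:

jobs:
  build_all:
    # Duplicates the CI / build_all job so that benchmarks no longer
    # depend on the main CI workflow (the "Cons" above note this cost).
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build benchmark artifacts (placeholder)
        run: echo "build release artifacts and upload them for benchmark jobs"

  benchmark_x86_64:
    # Hypothetical downstream job; benchmark jobs fan out from build_all
    # via `needs`, forming the Benchmark / build_all -> Benchmark / [others]
    # graph described in the commit message.
    needs: build_all
    runs-on: ubuntu-latest
    steps:
      - name: Run benchmarks (placeholder)
        run: echo "fetch artifacts and run benchmark suites"
```

The key design point is the `needs: build_all` edge: re-running the `Benchmark` workflow re-executes only this graph, leaving `CI / gcc`, `CI / asan`, and other unrelated jobs untouched.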
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving the various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome feedback of any kind through any of our communication channels!
Community meeting recordings: IREE YouTube channel
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.