Rename benchmark git trailer to benchmark-extra (#13623)
As we now have both labels and trailers to enable benchmarks in CI,
rename the trailer `benchmarks` to `benchmark-extra` and change its
purpose to "trigger extra benchmarks that are not available in the
labels". The extra benchmark preset options will be added in follow-up
changes.
benchmark-extra: x86_64
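
For reference, a minimal self-contained Python sketch of the merge logic
touched by this change (the function name `get_benchmark_presets` and the
shapes of its `labels`/`trailers` inputs are illustrative assumptions, not
taken from configure_ci.py; only the union logic mirrors the hunk below):

from typing import Dict, List, Set

BENCHMARK_EXTRA_KEY = "benchmark-extra"

def get_benchmark_presets(labels: List[str],
                          trailers: Dict[str, str]) -> Set[str]:
    # Labels of the form "benchmark-extra:<preset>" contribute the part
    # after the first colon.
    preset_options = set(
        label.split(":", maxsplit=1)[1]
        for label in labels
        if label.startswith(BENCHMARK_EXTRA_KEY + ":"))
    # The trailer value is a comma-separated list, e.g. "x86_64,cuda".
    trailer = trailers.get(BENCHMARK_EXTRA_KEY)
    if trailer is not None:
        preset_options = preset_options.union(
            option.strip() for option in trailer.split(","))
    return preset_options

# Example:
#   get_benchmark_presets(["benchmark-extra:x86_64"],
#                         {"benchmark-extra": "cuda, comp-stats"})
#   -> {"x86_64", "cuda", "comp-stats"}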
diff --git a/build_tools/github_actions/configure_ci.py b/build_tools/github_actions/configure_ci.py
index 7453a5a..cf70d5d 100755
--- a/build_tools/github_actions/configure_ci.py
+++ b/build_tools/github_actions/configure_ci.py
@@ -42,7 +42,7 @@
SKIP_CI_KEY = "skip-ci"
RUNNER_ENV_KEY = "runner-env"
-BENCHMARK_PRESET_KEY = "benchmarks"
+BENCHMARK_EXTRA_KEY = "benchmark-extra"
# Trailer to prevent benchmarks from always running on LLVM integration PRs.
SKIP_LLVM_INTEGRATE_BENCHMARK_KEY = "skip-llvm-integrate-benchmark"
@@ -289,8 +289,8 @@
preset_options = set(
label.split(":", maxsplit=1)[1]
for label in labels
- if label.startswith(BENCHMARK_PRESET_KEY + ":"))
- trailer = trailers.get(BENCHMARK_PRESET_KEY)
+ if label.startswith(BENCHMARK_EXTRA_KEY + ":"))
+ trailer = trailers.get(BENCHMARK_EXTRA_KEY)
if trailer is not None:
preset_options = preset_options.union(
option.strip() for option in trailer.split(","))
diff --git a/docs/developers/developing_iree/benchmark_suites.md b/docs/developers/developing_iree/benchmark_suites.md
index 465eddb..9de8ed4 100644
--- a/docs/developers/developing_iree/benchmark_suites.md
+++ b/docs/developers/developing_iree/benchmark_suites.md
@@ -10,10 +10,9 @@
The benchmark suites are run for each commit on the main branch and the results
are uploaded to https://perf.iree.dev for regression analysis (for the current
-supported targets). On pull requests, users can write `benchmarks:
-x86_64,cuda,comp-stats` (or a subset) at the bottom of the PR descriptions and
-re-run the CI workflow to trigger the benchmark runs. The results will be
-compared with https://perf.iree.dev and post in the comments.
+supported targets). On pull requests, users can add `benchmarks:*` labels to
+trigger the benchmark runs. The results will be compared with
+https://perf.iree.dev and posted in the comments.
Information about the definitions of the benchmark suites can be found in the
[IREE Benchmark Suites Configurations](/build_tools/python/benchmark_suites/iree/README.md).
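
The trailer itself follows the usual git-trailer convention: "key: value"
lines in the last paragraph of the PR description. As a rough illustration
of that convention (an assumption about the format, not configure_ci.py's
actual parser), trailers could be extracted like this:

def parse_trailers(description: str) -> dict:
    # Git-style trailers are "key: value" lines in the final paragraph.
    trailers = {}
    last_paragraph = description.strip().split("\n\n")[-1]
    for line in last_paragraph.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() and " " not in key.strip():
            trailers[key.strip().lower()] = value.strip()
    return trailers

# A description ending with "benchmark-extra: x86_64,cuda" yields
# {"benchmark-extra": "x86_64,cuda"}.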