commit | 933f798046a817dcff48d84df8fd987c5cb9e72b | |
---|---|---|
author | Han-Chung Wang <hanhan0912@gmail.com> | Wed Sep 03 13:16:56 2025 -0700 |
committer | GitHub <noreply@github.com> | Wed Sep 03 13:16:56 2025 -0700 |
tree | 0122c097f6931d9e294a4811f1bc9c4d287543b9 | |
parent | b4da7b202cf21e07c418250f697565734f9e5f19 | |
[DT] Fuse encoding ops more aggressively for multi-use, gather, and slice ops. (#21830)

The fusion constraint on multi-use dispatches is only required by the SetEncoding pass, because that pass has to move consumer dispatches around. It is not required by encoding fusion, which only moves a SetEncoding op into its producer dispatch. The revision also allows the fusion when the dispatch region contains tensor.extract_slice ops and iree_linalg_ext.gather ops.

This reduces the number of dispatches in the llama fp8 model to 644, the same as without data tiling. The latency drops by 25ms, from 378ms to 353ms.

| | No Data Tiling | Data Tiling w/o the revision | Data Tiling w/ the revision |
| --- | --- | --- | --- |
| Benchmark latency | 243ms | 378ms | 353ms |
| Memory usage (HIP unpooled) | 15.9GB | 31.14GB | 31.11GB |
| Number of dispatches | 644 | 741 | 644 |

| Dispatch | No Data Tiling (ms) | Data Tiling w/o the revision (ms) | Data Tiling w/ the revision (ms) |
| --- | --- | --- | --- |
| dispatch_15_attention_4x8x4xDx128xf8 | 62.29 | 55.35 | 59.21 |
| dispatch_20_matmul_like_Dx14336x4096_f8xf8xf32 | 40.13 | 89.14 | 93.72 |
| dispatch_19_matmul_like_Dx14336x4096_f8xf8xf32 | 28.01 | 44.78 | 44.59 |
| dispatch_21_matmul_like_Dx4096x14336_f8xf8xf32 | 27.25 | 40.18 | 39.99 |
| dispatch_643_matmul_like_Dx128256x4096_f16xf16xf32 | 17.1 | 29.76 | 29.21 |
| dispatch_16_matmul_like_Dx4096x4096_f8xf8xf32 | 8.83 | 17.92 | 17.91 |
| dispatch_23_matmul_like_Dx4096x4096_f8xf8xf32 | 9.27 | 16.69 | 16.59 |
| encoding_10_encode_Dx4096xf8_to_Dx4096xf8 | - | 32.15 | - |
| encoding_6_encode_Dx14336xf32_to_Dx14336xf32 | - | 0.318 | - |

Signed-off-by: hanhanW <hanhan0912@gmail.com>
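The constraint change described in the commit message can be sketched as a toy predicate. This is a hypothetical Python model for illustration only, not IREE's actual C++ implementation; the `Dispatch` class and both function names are invented:

```python
# Toy model (not IREE's real API) of the encoding-fusion legality check
# before and after the revision. Before: a SetEncoding op could not be
# fused into a producer dispatch that had multiple uses, or whose region
# contained tensor.extract_slice or iree_linalg_ext.gather ops. After:
# those cases are allowed, since encoding fusion only moves the
# SetEncoding op into the producer dispatch.
from dataclasses import dataclass


@dataclass
class Dispatch:
    ops: list        # names of ops inside the dispatch region
    num_uses: int = 1


def can_fuse_before_revision(producer: Dispatch) -> bool:
    # Old behavior: bail out on multi-use dispatches and on regions
    # containing slice/gather ops.
    if producer.num_uses > 1:
        return False
    blocked = {"tensor.extract_slice", "iree_linalg_ext.gather"}
    return not any(op in blocked for op in producer.ops)


def can_fuse_after_revision(producer: Dispatch) -> bool:
    # New behavior: the multi-use and slice/gather restrictions only
    # matter for the SetEncoding pass, not for encoding fusion, so
    # encoding fusion no longer rejects these producers.
    return True


# A multi-use producer dispatch containing a gather op: previously
# rejected, now fusable.
d = Dispatch(ops=["iree_linalg_ext.gather"], num_uses=2)
print(can_fuse_before_revision(d))  # False
print(can_fuse_after_revision(d))   # True
```

Relaxing the predicate lets more SetEncoding ops sink into their producer dispatches, which is what eliminates the extra encoding dispatches (741 down to 644) reported above.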
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
Release notes are published on GitHub releases.
Package | Release status |
---|---|
GitHub release (stable) | |
GitHub release (nightly) | |
iree-base-compiler | |
iree-base-runtime | |
For more details on the release process, see https://iree.dev/developers/general/release-management/.
Operating system | Build status |
---|---|
Linux | |
macOS | |
For the full list of workflows see https://iree.dev/developers/general/github-actions/.
See our website for more information.
Community meeting recordings: IREE YouTube channel
Date | Title | Recording | Slides |
---|---|---|---|
2025-06-10 | Data-Tiling in IREE: Achieving High Performance Through Compiler Design (AsiaLLVM) | recording | slides |
2025-05-17 | Introduction to GPU architecture and IREE's GPU CodeGen Pipeline | recording | slides |
2025-02-12 | The Long Tail of AI: SPIR-V in IREE and MLIR (Vulkanised) | recording | slides |
2024-10-01 | Unveiling the Inner Workings of IREE: An MLIR-Based Compiler for Diverse Hardware | recording | |
2021-06-09 | IREE Runtime Design Tech Talk | recording | slides |
2020-08-20 | IREE CodeGen (MLIR Open Design Meeting) | recording | slides |
2020-03-18 | Interactive HAL IR Walkthrough | recording | |
2020-01-31 | End-to-end MLIR Workflow in IREE (MLIR Open Design Meeting) | recording | slides |
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.