[VectorDistribute] Lower and distribute `async_dma` (#24299)

Adds a pass to distribute and lower workgroup-level `async_dma` operations to thread-level `amdgpu.gather_to_lds` operations, with the threads in each subgroup collaborating on the transfer.

The pass shares helpers with the existing GPU pass that distributes operations based on layouts, but because the `async_dma` operation has no `vector` operands or results, lowering and distribution are implemented as a separate pass. The changes to `GPUNestedLayoutDistributionPatterns.cpp` are therefore mainly a code move, extracting shared helpers into the new `GPUNestedLayoutUtils.[h|cpp]`.

The basic idea of the distribution is to construct a (nested) layout that describes how the data transfer is split across subgroups and threads so that the full transfer can be performed with direct-to-LDS compatible operations. The layout is constructed in stages:

1. We choose the DMA size for the given target that fulfills the requirements and derive the element tile from the per-thread transfer size implied by the DMA size (`distributeFromInnermost`).
2. The thread tile is given by the number of threads in a subgroup (`distributeFromInnermost`).
3. The outer tile is always all-ones.
4. We distribute the transfer across the configured number of subgroups (`distributeFromOutermost`).
5. Whatever is left after these steps becomes the batch tile of each thread.

Once we have that layout, we can use the shared helpers for the mechanics of distributing the operation.

The distribution fails if any of the requirements is not met. This is mostly a defensive check: the pass that inserts the `async_dma` operations (to be added in a different PR) should only do so if the prerequisites can be met with the available DMA sizes for the transfer shape. Accordingly, the pass also fails if any of the `async_dma` operations could not be distributed and lowered.
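The staged layout construction described above can be sketched roughly as follows. This is a hypothetical illustration in Python, not the actual C++ helpers; the function name `build_transfer_layout` and all parameter names are invented for this sketch, and a real nested layout is multi-dimensional rather than the flat element count used here.

```python
# Hypothetical sketch of the staged nested-layout construction.
# Names are illustrative only; the real pass operates on MLIR layouts.

def build_transfer_layout(transfer_elems, dma_size_elems,
                          subgroup_size, num_subgroups):
    """Factor a transfer of `transfer_elems` elements into
    (subgroup, batch, outer, thread, element) tile sizes."""
    # Stage 1: element tile = elements moved per thread per DMA op,
    # chosen from the target's supported DMA sizes.
    if transfer_elems % dma_size_elems != 0:
        return None  # distribution fails (defensive check)
    element = dma_size_elems
    remaining = transfer_elems // element

    # Stage 2: thread tile = number of threads in a subgroup.
    if remaining % subgroup_size != 0:
        return None
    thread = subgroup_size
    remaining //= thread

    # Stage 3: outer tile is always all-ones.
    outer = 1

    # Stage 4: distribute across the configured number of subgroups.
    if remaining % num_subgroups != 0:
        return None
    subgroup = num_subgroups
    remaining //= subgroup

    # Stage 5: whatever is left becomes the per-thread batch tile.
    batch = remaining
    return {"subgroup": subgroup, "batch": batch, "outer": outer,
            "thread": thread, "element": element}

# e.g. 8192 elements, 4-element DMA, 64 threads/subgroup, 4 subgroups:
layout = build_transfer_layout(8192, 4, 64, 4)
# product of all tiles recovers the full transfer: 4*8*1*64*4 == 8192
```

Returning `None` here mirrors the pass failing when any requirement is not met; in the real pass this surfaces as a distribution failure rather than a sentinel value.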
Swizzling and gather semantics are not part of this PR and will be added in follow-up PRs.

This is part of https://github.com/iree-org/iree/issues/23782.

Assisted-by: Claude Code and Codex

---------

Signed-off-by: Lukas Sommer <lukas.sommer@amd.com>
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
Release notes are published on GitHub releases.
| Package | Release status |
|---|---|
| GitHub release (stable) | |
| GitHub release (nightly) | |
| iree-base-compiler | |
| iree-base-runtime | |
For more details on the release process, see https://iree.dev/developers/general/release-management/.
| Operating system | Build status |
|---|---|
| Linux | |
| macOS | |
For the full list of workflows see https://iree.dev/developers/general/github-actions/.
See our website for more information.
Community meeting recordings: IREE YouTube channel
| Date | Title | Recording | Slides |
|---|---|---|---|
| 2025-06-10 | Data-Tiling in IREE: Achieving High Performance Through Compiler Design (AsiaLLVM) | recording | slides |
| 2025-05-17 | Introduction to GPU architecture and IREE's GPU CodeGen Pipeline | recording | slides |
| 2025-02-12 | The Long Tail of AI: SPIR-V in IREE and MLIR (Vulkanised) | recording | slides |
| 2024-10-01 | Unveiling the Inner Workings of IREE: An MLIR-Based Compiler for Diverse Hardware | recording | |
| 2021-06-09 | IREE Runtime Design Tech Talk | recording | slides |
| 2020-08-20 | IREE CodeGen (MLIR Open Design Meeting) | recording | slides |
| 2020-03-18 | Interactive HAL IR Walkthrough | recording | |
| 2020-01-31 | End-to-end MLIR Workflow in IREE (MLIR Open Design Meeting) | recording | slides |
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.