commit | ce92024f0c16faa4c77cb35c144d4c65f156c3e2 |
---|---|
author | Zhuoran Yin <zhuoryin@amd.com> Thu Aug 28 16:57:21 2025 -0400 |
committer | GitHub <noreply@github.com> Thu Aug 28 20:57:21 2025 +0000 |
tree | bff79caf2e0e990232f56a076f738a57bcc3deef |
parent | 31404c6e0bbf746aa5a79a85a62088f56186b8a3 |
[Codegen][GPU] Adding new heuristics to take all dimensions into account when distributing tiles (#21803)

The motivation of this PR is to address the multi-dimension distribution situation in convolution codegen. A sample convolution config that wouldn't distribute properly looks like the following:

> convbfp16 -n 16 -c 768 -H 48 -W 32 -k 2048 -y 3 -x 3 -p 1 -q 1 -u 1 -v 1 -l 1 -j 1 -m conv -g 1 -F 1 -t 1

There are 3 dimensions in M: [16, 48, 32]. There is one dimension in N: [256]. Since N's last dimension is much larger than M's last dimension, the current algorithm yields an extremely imbalanced tile that allocates all subgroups and tiles to the N dimension, resulting in a small, memory-bound workgroup.

The new tile allocation algorithm prevents this by considering the aggregated M and N dimensions together and finding an optimal, balanced tile for the full scope. It then attempts to allocate that tile to each sub-dimension, which yields much more reasonably distributed tiles (see the sketch below). With the new algorithm, this convolution improves from 5000us to 1500us, and performance improves by about 5% across all 478 convolutions.

I'm pushing this to review as I gather gemm and model perf. Since this has little impact on M/N problems with a single dimension, performance should likely stay flat. I'll be posting perf updates as follow-up comments soon.

---------

Signed-off-by: jerryyin <zhuoryin@amd.com>
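In essence the heuristic described above works in two passes: first pick a balanced tile budget for the aggregated M and N extents, then hand each side's budget out to its individual sub-dimensions, innermost first. The standalone C++ sketch below illustrates that idea under assumed numbers (a 128x128 per-workgroup tile budget and power-of-two growth); it is not IREE's actual implementation, and every name in it is hypothetical.

```c++
// A minimal, self-contained sketch of the idea: balance the tile budget across
// the *aggregated* M and N extents, then spread each side's budget over its
// sub-dimensions (innermost first). Names and numbers are illustrative only.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <functional>
#include <numeric>
#include <vector>

// Spread `budget` tile elements across the sub-dimensions of one problem side
// (e.g. M = [16, 48, 32]), starting from the innermost dimension and never
// exceeding any sub-dimension's extent.
static std::vector<int64_t> distributeToSubDims(const std::vector<int64_t> &sizes,
                                                int64_t budget) {
  std::vector<int64_t> tiles(sizes.size(), 1);
  for (int i = static_cast<int>(sizes.size()) - 1; i >= 0 && budget > 1; --i) {
    int64_t take = std::min(sizes[i], budget);
    tiles[i] = take;
    budget /= take;
  }
  return tiles;
}

int main() {
  // Extents from the example above: M = [16, 48, 32], N = [256].
  std::vector<int64_t> mDims = {16, 48, 32};
  std::vector<int64_t> nDims = {256};
  // Assumed per-workgroup tile budget (128x128 elements); a real heuristic
  // would derive this from the target's shared memory and subgroup layout.
  int64_t totalTileElems = 128 * 128;

  int64_t mTotal = std::accumulate(mDims.begin(), mDims.end(), int64_t{1},
                                   std::multiplies<int64_t>());
  int64_t nTotal = std::accumulate(nDims.begin(), nDims.end(), int64_t{1},
                                   std::multiplies<int64_t>());

  // Grow the smaller side first so M and N stay balanced, instead of letting
  // the side with the larger innermost dimension absorb the whole budget.
  int64_t mBudget = 1, nBudget = 1;
  while (mBudget * nBudget < totalTileElems &&
         (mBudget < mTotal || nBudget < nTotal)) {
    if ((mBudget <= nBudget && mBudget < mTotal) || nBudget >= nTotal)
      mBudget *= 2;
    else
      nBudget *= 2;
  }

  std::vector<int64_t> mTiles = distributeToSubDims(mDims, mBudget);
  std::vector<int64_t> nTiles = distributeToSubDims(nDims, nBudget);
  // Prints M tiles [1, 4, 32] and N tile [128] for this example.
  for (size_t i = 0; i < mTiles.size(); ++i)
    std::printf("M[%zu] tile = %lld\n", i, static_cast<long long>(mTiles[i]));
  for (size_t i = 0; i < nTiles.size(); ++i)
    std::printf("N[%zu] tile = %lld\n", i, static_cast<long long>(nTiles[i]));
  return 0;
}
```

Growing the smaller side first is what keeps an example like the one above from collapsing into a tile that lives almost entirely in N.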
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
Release notes are published on GitHub releases.
Package | Release status |
---|---|
GitHub release (stable) | |
GitHub release (nightly) | |
iree-base-compiler | |
iree-base-runtime | |
For more details on the release process, see https://iree.dev/developers/general/release-management/.
Operating system | Build status |
---|---|
Linux | |
macOS | |
For the full list of workflows see https://iree.dev/developers/general/github-actions/.
See our website for more information.
Community meeting recordings: IREE YouTube channel
Date | Title | Recording | Slides |
---|---|---|---|
2025-06-10 | Data-Tiling in IREE: Achieving High Performance Through Compiler Design (AsiaLLVM) | recording | slides |
2025-05-17 | Introduction to GPU architecture and IREE's GPU CodeGen Pipeline | recording | slides |
2025-02-12 | The Long Tail of AI: SPIR-V in IREE and MLIR (Vulkanised) | recording | slides |
2024-10-01 | Unveiling the Inner Workings of IREE: An MLIR-Based Compiler for Diverse Hardware | recording | |
2021-06-09 | IREE Runtime Design Tech Talk | recording | slides |
2020-08-20 | IREE CodeGen (MLIR Open Design Meeting) | recording | slides |
2020-03-18 | Interactive HAL IR Walkthrough | recording | |
2020-01-31 | End-to-end MLIR Workflow in IREE (MLIR Open Design Meeting) | recording | slides |
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.