IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
Release notes are published on GitHub releases.
| Package | Release status |
|---|---|
| GitHub release (stable) | |
| GitHub release (nightly) | |
| iree-base-compiler | |
| iree-base-runtime | |
For more details on the release process, see https://iree.dev/developers/general/release-management/.
| Operating system | Build status |
|---|---|
| Linux | |
| macOS | |
For the full list of workflows see https://iree.dev/developers/general/github-actions/.
See our website for more information.
Community meeting recordings: IREE YouTube channel
| Date | Title | Recording | Slides |
|---|---|---|---|
| 2025-06-10 | Data-Tiling in IREE: Achieving High Performance Through Compiler Design (AsiaLLVM) | recording | slides |
| 2025-05-17 | Introduction to GPU architecture and IREE's GPU CodeGen Pipeline | recording | slides |
| 2025-02-12 | The Long Tail of AI: SPIR-V in IREE and MLIR (Vulkanised) | recording | slides |
| 2024-10-01 | Unveiling the Inner Workings of IREE: An MLIR-Based Compiler for Diverse Hardware | recording | |
| 2021-06-09 | IREE Runtime Design Tech Talk | recording | slides |
| 2020-08-20 | IREE CodeGen (MLIR Open Design Meeting) | recording | slides |
| 2020-03-18 | Interactive HAL IR Walkthrough | recording | |
| 2020-01-31 | End-to-end MLIR Workflow in IREE (MLIR Open Design Meeting) | recording | slides |
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.