[HAL/AMDGPU] Name prepublished kernarg storage

Reusable AQL command buffers were materializing prepublished kernarg templates by asking the HAL allocator for DEVICE_LOCAL|HOST_VISIBLE memory and trusting the AMDGPU allocator to resolve that to a fine-grained host-coherent device pool. That was the right local outcome, but the contract was anonymous: a future allocator policy change could silently put template bytes behind a non-coherent mapping and make replay correctness depend on a late flush branch.

Introduce a small AQL prepublished-kernarg storage strategy and record the current device-fine host-coherent strategy on each physical device. Logical-device command-buffer creation passes the selected strategy into the command buffer after queue affinity has been normalized to one physical device, and recording only prepublishes reusable static dispatches when the strategy is enabled. Finalization now requests DEVICE_LOCAL|HOST_VISIBLE|HOST_COHERENT explicitly and verifies that the returned AMDGPU buffer actually has those memory-type bits before copying templates. The old non-coherent flush fallback is gone because it was not a real strategy. Missing fine-grained device-local pools also get explicit allocator and physical-device diagnostics instead of an opaque pool lookup failure.

The low-level command-buffer tests now use a real heap allocator instead of constructing command buffers with a null allocator, preserving the production object invariant while keeping prepublish storage disabled for tests that only exercise recording or block processing. Add a host-queue integration check for the recorded device-fine strategy alongside the existing real prepublished-dispatch execution test.
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
Release notes are published on GitHub releases.
| Package | Release status |
|---|---|
| GitHub release (stable) | |
| GitHub release (nightly) | |
| iree-base-compiler | |
| iree-base-runtime | |
For more details on the release process, see https://iree.dev/developers/general/release-management/.
| Operating system | Build status |
|---|---|
| Linux | |
| macOS | |
For the full list of workflows see https://iree.dev/developers/general/github-actions/.
See our website for more information.
Community meeting recordings: IREE YouTube channel
| Date | Title | Recording | Slides |
|---|---|---|---|
| 2025-06-10 | Data-Tiling in IREE: Achieving High Performance Through Compiler Design (AsiaLLVM) | recording | slides |
| 2025-05-17 | Introduction to GPU architecture and IREE's GPU CodeGen Pipeline | recording | slides |
| 2025-02-12 | The Long Tail of AI: SPIR-V in IREE and MLIR (Vulkanised) | recording | slides |
| 2024-10-01 | Unveiling the Inner Workings of IREE: An MLIR-Based Compiler for Diverse Hardware | recording | |
| 2021-06-09 | IREE Runtime Design Tech Talk | recording | slides |
| 2020-08-20 | IREE CodeGen (MLIR Open Design Meeting) | recording | slides |
| 2020-03-18 | Interactive HAL IR Walkthrough | recording | |
| 2020-01-31 | End-to-end MLIR Workflow in IREE (MLIR Open Design Meeting) | recording | slides |
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.