commit    f3d0369dee99d07399ea6f12ed634b6d63f25de7
author    Nicolas Vasilache <nicolasvasilache@users.noreply.github.com>  Fri Jun 09 10:47:16 2023 +0200
committer GitHub <noreply@github.com>  Fri Jun 09 08:47:16 2023 +0000
tree      3b45f5b4a8c90818b1e52f55f98436b06a25e1c3
parent    7f10fe28ddc3cbc1bd8466a751ce2e03dd0da0c5
Add codegen strategy for GPU padding (#14000)

This revision introduces a matcher for a single tensor.pad operation and a simple codegen strategy that reuses a subset of the transformations of the matmul strategy. With 3 strategies now available, some code reorganization occurred: the top-level switch and strategy decisions have been moved out of Common.cpp and into Strategies.cpp; Common.cpp remains the place for building common C++ transformation APIs used to construct TD.

For now, this only targets 2-D pad operations on f32 and should be further tightened to capture only the high paddings for which vector masking is known to apply. This is the common case that can also lower to the `zfill` form of `cp.async`. It will later be generalized along the common axes, following the MO from the reductions work. For now this is sufficient to enable the higher-order `unaligned -> aligned matmul` and `split-K` work to run end-to-end.

This is only activated behind a flag, `--iree-codegen-llvmgpu-enable-transform-dialect-pad-strategy`, that will be retired in due time, following the steps taken in the past for reductions.

A [quick performance evaluation](https://github.com/iree-org/iree-samples/commit/c91f45d4955b8cb16648c776747ead72fd42c085#diff-14aa2df3252ae84a37788ff0a69ee1fa2d1ca92ce31ae8be9cbdc6884524e215R23) shows > 10x improvement over what IREE currently does: ~9.5us vs ~135us. For the particular sizes selected, this amounts to roughly 650GB/s read and 665GB/s write bandwidth on Ampere. Basic early profiling shows performance around 40% of peak memory bandwidth (assuming full duplex; ~85% otherwise). We still have simple means to crank up the performance if needed, in particular in conjunction with multiple `cp.async` operations (future work).

Co-authored-by: Alex Zinenko <zinenko@google.com>
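As a rough illustration of the case the strategy targets — a 2-D f32 pad with zero-filled *high* padding only (the form that can lower to the `zfill` variant of `cp.async`) — here is a NumPy sketch of the semantics; the shapes and padding amounts are hypothetical, not taken from the commit:

```python
import numpy as np

# Hypothetical 2-D f32 source tensor.
src = np.arange(6, dtype=np.float32).reshape(2, 3)

# High-only padding: 1 extra row at the bottom, 2 extra columns on the
# right, filled with 0.0. Low padding stays at 0 on both dimensions --
# the case for which vector masking is known to apply.
padded = np.pad(src, pad_width=((0, 1), (0, 2)), constant_values=0.0)

print(padded.shape)  # (3, 5)
```

The original 2x3 block sits in the top-left corner of the 3x5 result, with the remaining elements zero-filled.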
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback through any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.