| commit | d7b3bb9b73f7faf5ddfd0d76d3eaa6500db47eb6 | |
|---|---|---|
| author | Lei Zhang <antiagainst@google.com> | Fri May 14 09:50:48 2021 -0400 |
| committer | GitHub <noreply@github.com> | Fri May 14 09:50:48 2021 -0400 |
| tree | b6963fb743482d332a7d4a3555fe863dc84cdd9d | |
| parent | 4b24775e1496b89f76a9db7c95e0b8b35b9cb6ee | |
[spirv] Migrate ConvertToGPUPass' invocation tiling logic (#5814)

`ConvertToGPUPass` is a sink for lowering away all Linalg ops in the SPIR-V pipeline: it can distribute a Linalg op to either global invocation IDs (for one-level distribution) or local invocation IDs (for two-level distribution). This is mostly historical; the pipeline now has proper layering, performing the first-level tiling and distribution at the flow level and the second/third level in `TileAndVectorizeInOneWorkgroupPass`. The functionality in `ConvertToGPUPass` overlaps with that, although in a complementary way: the tiling in `TileAndVectorizeInOneWorkgroupPass` does not handle imperfectly tiled cases; it requires perfect tiling, because it assumes the number of processors equals the number of iterations in order to avoid generating `scf.for` loops in the first place. `ConvertToGPUPass` is more generic and can handle all cases. This is all opaque and complicated.

This commit relaxes `TileAndVectorizeInOneWorkgroupPass` to not assume the number of processors equals the number of iterations. Now we just tile and cyclically distribute using `scf.for` loops. This causes issues for perfectly tiled cases, as we need to canonicalize away the `affine.min` ops and one-trip `scf.for` loops to expose static sizes for further vectorization. That can be done by pulling in additional canonicalization patterns.

Also, in order to utilize `TileAndVectorizeInOneWorkgroupPass` for the second/third-level tiling, we need the corresponding scheme in the launch configuration. This commit adds the default second/third-level tiling for all now-supported Linalg ops: no tiling at the subgroup level and tiling to 1 for invocations. This at the same time helps to clean up a bunch of unwieldy templated configurations for different ops.

Together, the above migrates the invocation tiling logic in `ConvertToGPUPass` to its proper places.
This is the first step toward reining in `ConvertToGPUPass` and launch configurations.
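The cyclic distribution described above can be sketched in plain Python (a hypothetical model for illustration, not IREE code): each of `P` processors handles iterations `p, p + P, p + 2P, …`, which is what the generated `scf.for` loop per processor expresses. When the number of processors equals the number of iterations, every such loop runs exactly one trip and can be canonicalized away, recovering the perfectly tiled case.

```python
def cyclic_distribute(num_iterations: int, num_processors: int):
    """Model of cyclic (round-robin) distribution: processor `p` executes
    iterations p, p + P, p + 2P, ... -- the iteration space of the
    per-processor loop."""
    return {p: list(range(p, num_iterations, num_processors))
            for p in range(num_processors)}

# Imperfectly tiled case: 10 iterations over 4 processors needs real loops,
# and processors get unequal trip counts.
print(cyclic_distribute(10, 4))
# {0: [0, 4, 8], 1: [1, 5, 9], 2: [2, 6], 3: [3, 7]}

# Perfectly tiled case: processors == iterations, so every loop is one-trip
# and can be folded away during canonicalization.
print(cyclic_distribute(4, 4))
# {0: [0], 1: [1], 2: [2], 3: [3]}
```

This is why relaxing the processors-equals-iterations assumption is safe: the one-trip loops that remain in the perfectly tiled case carry no dynamic behavior and disappear under canonicalization, exposing the static sizes vectorization needs.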
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler that lowers Machine Learning (ML) models to a unified IR optimized for real-time inference on mobile/edge devices, targeting heterogeneous hardware accelerators. IREE also provides flexible deployment solutions for its compiled ML models.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. That said, we welcome feedback through any of our communication channels!
IREE is licensed under the terms of the Apache license. See LICENSE for more information.