commit bfd507f97a9076fce85021d7b2665630ef4f281e
Author:    Lei Zhang <antiagainst@google.com>  Thu Sep 02 15:00:09 2021 -0400
Committer: GitHub <noreply@github.com>  Thu Sep 02 15:00:09 2021 -0400
Tree:      57c624a993de2beff9868d63d1614a50be0f56c3
Parent:    64e52252fed14db75f0f8f9c15ce5faf1ec590fa
Plumb dynamic shape support through for Vulkan and VMVX (#6917)

This commit plumbs dynamic shape support through for both Vulkan and VMVX. They rely on 1-D MemRefs and running `FlattenMemRefSubspanPass` in advance, instead of on MemRef descriptors.

In order to enable dynamic shape support, we need to carry the SSA values for dynamic dimensions down the CodeGen pipeline so that we can linearize the index calculation in `FlattenMemRefSubspanPass`. We have such information tightly associated with various ops at the Flow level, but when outlining executables and materializing the HAL interface, that association is broken down; instead, `tie_shape` ops are used to carry the information, which is structurally difficult to maintain and convert. So this commit changes `hal.interface.binding.subspan` to carry the dynamic dimension SSA values by itself, like many other ops in Flow/HAL. It is a natural change that simplifies a lot of analysis and transformation. For example, we no longer need to maintain the two-step conversion on the CPU side (first generating an undefined MemRef descriptor when handling the `subspan` op, and then filling in its contents when handling the `tie_shape` op). It also makes the internals of HAL more akin to Flow on this front.

Other changes follow naturally from that:

* `MaterializeInterfaces` picks up the information from `tie_shape` ops and attaches it to `subspan` ops.
* `FlattenBindingSubspan` reads the dynamic dimensions to perform index linearization.
* `ConvertToLLVM` now generates the full MemRef descriptor from `subspan` ops.
* A new pass is added to fold `memref.dim`/`tensor.dim` ops over shape-carrying ops.

This puts IREE CodeGen dynamic shape support for Vulkan/VMVX in a very nice state. Because we run `FoldSubViewOpsPass` in advance, there are no intermediate MemRefs (coming from `subview` ops), so loads/stores directly take in HAL `subspan` ops. By definition IREE has tightly packed buffers, so all MemRefs coming from subspans should have strides equal to the total element count of their inner dimensions; symbolic strides in subspan ops' AffineMaps therefore correspond to SSA values for dimension sizes (or their products). Offsets are attached to subspan ops as SSA values, but are then "transferred" to load/store ops during MemRef flattening by becoming part of the index linearization calculation.
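As a rough mental model of the index linearization described above, the following Python sketch (hypothetical, not IREE's actual implementation; all names are illustrative) flattens an access into a dynamically shaped, tightly packed buffer: each dimension's stride is the product of the inner dimension sizes, and the subspan's offset folds into the flattened index.

```python
# Hypothetical model of the index linearization performed during MemRef
# flattening; names and structure are illustrative, not IREE's code.

def packed_strides(dims):
    """Row-major strides for a tightly packed buffer: the stride of each
    dimension is the total element count of all inner dimensions."""
    strides = [1] * len(dims)
    for k in range(len(dims) - 2, -1, -1):
        strides[k] = strides[k + 1] * dims[k + 1]
    return strides

def linearize(indices, dims, base_offset_elements=0):
    """Flatten an N-D access into a 1-D index. The subspan's offset is
    "transferred" to the load/store by adding it into the calculation."""
    strides = packed_strides(dims)
    return base_offset_elements + sum(i * s for i, s in zip(indices, strides))

# Example: a memref<?x?xf32> whose dynamic dims resolve to (5, 8); element
# (2, 3) of a subspan starting 16 elements into the buffer lands at
# 16 + 2*8 + 3.
assert linearize((2, 3), (5, 8), base_offset_elements=16) == 16 + 2 * 8 + 3
```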
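Similarly, the new dim-folding pass can be pictured as a rewrite that answers a `memref.dim`/`tensor.dim` query from the SSA values a shape-carrying op already holds, instead of inspecting the buffer. This toy sketch is again a hypothetical model, not the pass itself:

```python
# Toy model of folding dim queries over a shape-carrying op: the subspan
# already carries values for its dynamic sizes, so asking for a dimension
# reduces to a lookup. Purely illustrative.

class Subspan:
    def __init__(self, static_shape, dynamic_sizes):
        # static_shape uses None for dynamic dimensions, e.g. (None, 4).
        self.static_shape = static_shape
        # One carried size value per dynamic dimension.
        self.dynamic_sizes = list(dynamic_sizes)

def fold_dim(op, index):
    """Resolve dim(op, index) from the op's own shape information."""
    size = op.static_shape[index]
    if size is not None:
        return size  # Static dimension: fold to a constant.
    # Dynamic dimension: return the carried size value.
    dyn_position = sum(1 for s in op.static_shape[:index] if s is None)
    return op.dynamic_sizes[dyn_position]

sub = Subspan(static_shape=(None, 4), dynamic_sizes=[128])
assert fold_dim(sub, 0) == 128  # folds to the carried dynamic size
assert fold_dim(sub, 1) == 4    # folds to the static constant
```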
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback through any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.