| commit | 1f98cc5fddf05b431d22c791b1d3101547d5a27b | |
|---|---|---|
| author | Guray Ozen <guray.ozen@gmail.com> | Fri Dec 16 16:40:56 2022 +0100 |
| committer | GitHub <noreply@github.com> | Fri Dec 16 16:40:56 2022 +0100 |
| tree | 03b8db440fad12b8504215cad042c5ce7d99bf48 | |
| parent | 2d91b37fe5215d35a53dcc21d8e6b9f8cc7bcb39 | |
Collapse `linalg.generic` (#11295)

This PR implements collapsing of `linalg.generic`. It first identifies the collapsible parallel dimensions and collapses them all. The collapsing is done on the shapes rather than on the loops, so it does not introduce any arithmetic to recompute loop indices. When there are `reduction` iterators it can still collapse the `parallel` loops, but it does not mix the two.

It finds the longest identical sequence of dimensions across the `AffineMap`s; there can be more than one. For the case below, that sequence is `d1, d3, d0`. The `d1, d3, d0` loops are not nested; there are other loops in between. But they are all parallel loops, so it is safe to interchange them, and after interchanging it is also safe to collapse them.

```
indexing_maps = [affine_map<(d0, d1, d2, d3, d4) -> (d1, d3, d0)>,
                 affine_map<(d0, d1, d2, d3, d4) -> (d2, d1, d3, d0, d4)>]
```

After collapsing, IREE can parallelize more dimensions of the `linalg.generic`, which yields a significant performance improvement.

Current limitations:

1. Dynamic tensor shapes: generating `tensor.expand_shape` with dynamic tensors is ambiguous. This is a known limitation of MLIR, and it is possible to solve it. See the RFC: https://discourse.llvm.org/t/rfc-add-explicit-shape-inputs-to-tensor-expand-shape/65952
2. Non-contiguous loops: the current mechanism collapses tensor shapes, not loops. If the loops are non-contiguous (as with a transpose), collapsing the tensor would change behavior. #11385 tackles this problem by linearizing the workgroup id; an alternative idea is to linearize the loops.
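To illustrate the grouping step described above, here is a minimal standalone Python sketch. It is not the MLIR/IREE implementation: `collapsible_sequences` and `is_contiguous` are hypothetical helpers that operate on plain lists of loop indices rather than `AffineMap`s, and the greedy scan is a simplification of what the actual pattern does.

```python
# Standalone sketch (not the IREE/MLIR code): given the result dimensions of
# each indexing map and the iterator kind of every loop, find runs of loops
# that appear as the same contiguous sequence in every map and are all
# parallel. Such runs are safe to interchange together and collapse into a
# single loop / tensor dimension.

def is_contiguous(seq, dims):
    """True if `seq` occurs as a contiguous slice of `dims`."""
    n = len(seq)
    return any(dims[i:i + n] == seq for i in range(len(dims) - n + 1))

def collapsible_sequences(indexing_maps, iterator_types):
    """indexing_maps: list of result-dimension lists, e.g. [[1, 3, 0], [2, 1, 3, 0, 4]].
    iterator_types: iterator kind per loop, e.g. ["parallel", ..., "reduction"].
    Returns maximal runs (length >= 2) of loops that could be collapsed.
    Greedy and simplified: it scans the first map and grows each run while the
    run stays all-parallel and contiguous (in the same order) in every other map."""
    reference = indexing_maps[0]
    sequences = []
    start = 0
    while start < len(reference):
        end = start + 1
        while (end < len(reference)
               and all(iterator_types[d] == "parallel" for d in reference[start:end + 1])
               and all(is_contiguous(reference[start:end + 1], other)
                       for other in indexing_maps[1:])):
            end += 1
        if end - start >= 2:
            sequences.append(reference[start:end])
        start = end
    return sequences

# Example from the commit message:
#   affine_map<(d0, d1, d2, d3, d4) -> (d1, d3, d0)>
#   affine_map<(d0, d1, d2, d3, d4) -> (d2, d1, d3, d0, d4)>
maps = [[1, 3, 0], [2, 1, 3, 0, 4]]
iters = ["parallel"] * 5
print(collapsible_sequences(maps, iters))  # [[1, 3, 0]] -> d1, d3, d0 collapse into one loop
```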
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback through any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.