commit | 6a84459258c76906bb5682416b6d719cdacd50c7
---|---
author | Han-Chung Wang &lt;hanchung@google.com&gt;, Mon Feb 27 14:25:14 2023 -0800
committer | GitHub &lt;noreply@github.com&gt;, Mon Feb 27 14:25:14 2023 -0800
tree | 63861bc770e928d881e0d22562ba9522d3c4ccf8
parent | 9dc8696f9814e6e39a42586af88458cef9a3be27
Fixes tiling sizes for pulled producers in TileAndDistribute (#12399)

The tiling sizes for producers are not always the same as the tiling sizes for the consumers. That information is carried by the extract_slice op, so we should use the sizes and offsets from the extract_slice op when tiling the producer ops. E.g.,

```
%16:2 = linalg.generic { ... } -> (tensor<384x512xf32>, tensor<384x512xf32>)
%18 = iree_linalg_ext.pack %16#0 ... : tensor<384x512xf32> -> tensor<48x512x8x1xf32>
```

Say the tiling sizes are [8, 64]; the result after the tiled implementation is:

```
scf.for ... {
  scf.for ... {
    %28 = affine.min affine_map<(d0, d1) -> (d0 * 8, d1 * -8 + 384)>(%21, %arg0)
    %30 = affine.min affine_map<(d0, d1) -> (d0, -d1 + 512)>(%24, %arg1)
    %extracted_slice = tensor.extract_slice %16#0[%25, %arg1] [%28, %30] [1, 1] : tensor<384x512xf32> to tensor<?x?xf32>
    %extracted_slice_7 = tensor.extract_slice %17[%arg0, %arg1, 0, 0] [%21, %24, 8, 1] [1, 1, 1, 1] : tensor<48x512x8x1xf32> to tensor<?x?x8x1xf32>
    %31 = iree_linalg_ext.pack %extracted_slice ... : tensor<?x?xf32> -> tensor<?x?x8x1xf32>
  }
}
```

The input slice size of the `pack` op is 64x64 in this case. We should propagate that information when tiling the producer; otherwise, it triggers undefined behavior. The generic op produces a 64x64xf32 tensor, which is cast to ?x?xf32 where it is used by the tensor.pack op. The ?x?xf32 tensor is then cast to an 8x64xf32 tensor when it is stored into the destination tensor. The cast ops are not folded away because the shapes do not match.

```
%21:2 = linalg.generic { ... } -> (tensor<64x64xf32>, tensor<64x64xf32>)
%cast = tensor.cast %21#0 : tensor<64x64xf32> to tensor<?x?xf32>
%cast_0 = tensor.cast %21#1 : tensor<64x64xf32> to tensor<?x?xf32>
%cast_1 = tensor.cast %cast : tensor<?x?xf32> to tensor<8x64xf32>
...
flow.dispatch.tensor.store %cast_1, %7, ... sizes = [8, 64]
%cast_2 = tensor.cast %cast_0 : tensor<?x?xf32> to tensor<8x64xf32>
flow.dispatch.tensor.store %cast_2, %8, ... sizes = [8, 64]
```

Fixes https://github.com/openxla/iree/issues/12286
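For illustration, here is a minimal C++ sketch of the idea behind the fix (this is not the actual IREE patch): when pulling a producer into the tiled loop nest, derive its tile offsets and sizes from the tensor.extract_slice through which the consumer reads its result, rather than reusing the consumer's tiling sizes. The helper name `tileProducerForSlice` is hypothetical; the MLIR TilingInterface and tensor-dialect calls follow upstream signatures, which may differ across versions.

```cpp
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/Interfaces/TilingInterface.h"

using namespace mlir;

// Hypothetical helper: tile `producer` so that its result covers exactly the
// region described by `sliceOp`, the tensor.extract_slice through which the
// consumer reads the producer's result.
static FailureOr<TilingResult>
tileProducerForSlice(OpBuilder &builder, tensor::ExtractSliceOp sliceOp,
                     TilingInterface producer) {
  // Take the tile offsets and sizes from the slice, not from the consumer's
  // own tiling sizes: the producer tile can be larger, e.g. a 64x64 input
  // slice feeding an 8x64 packed output tile as in the example above.
  SmallVector<OpFoldResult> offsets = sliceOp.getMixedOffsets();
  SmallVector<OpFoldResult> sizes = sliceOp.getMixedSizes();
  return producer.getTiledImplementation(builder, offsets, sizes);
}
```

With the offsets and sizes taken from the extract_slice, the tiled producer's result type matches what the pack op consumes, so the mismatched tensor.cast ops shown above never arise.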
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback through any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.