[LLVMCPU] Enable shuffle_16x16 for pack codegen. (#13318)

## Pack Codegen

This PR enables the 16x16 shuffle strategy for LHS packing. Here is the ASM dump: https://gist.githubusercontent.com/hanhanW/3b67ca80389383d4c2de6e4b63f2698c/raw
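
To see why shuffles apply here, note that packing a 16x16 tile with `inner_tiles = [16, 1]` moves element `(i, j)` to `(0, j, i, 0)`, i.e. it transposes the tile, and a 16x16 transpose lowers to a sequence of vector shuffles. A minimal sketch of that reduced case (the shape and function name are illustrative, not taken from the PR):

```mlir
// Illustrative reduced example: packing a single 16x16 tile.
// Element (i, j) of %src lands at (0, j, i, 0) of the result, so the
// data movement is a 16x16 transpose that can lower to vector shuffles.
func.func @pack_one_tile(%src: tensor<16x16xf32>) -> tensor<1x16x16x1xf32> {
  %init = tensor.empty() : tensor<1x16x16x1xf32>
  %pack = tensor.pack %src inner_dims_pos = [0, 1] inner_tiles = [16, 1]
      into %init : tensor<16x16xf32> -> tensor<1x16x16x1xf32>
  return %pack : tensor<1x16x16x1xf32>
}
```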

The table below shows benchmark results for packing without padding semantics (on the LHS). These are cases from MobileBert FP32.

Shape | Before | After
-- | -- | --
384x512 | 79.5 us | 62.28 us
384x384 | 53.02 us | 32.81 us
384x128 | 15.37 us | 5.04 us
384x32 | 3.9 us |  1.42 us
384x2 | 0.41 us | 0.5 us

And some results for packing with padding semantics (on the LHS). These are cases from Posenet FP.

Shape | Before | After
-- | -- | --
1485x192 | 111 us | 91 us
22833x48 | 425 us | 350 us
391x384 | 54 us | 31 us
391x1 | 1 us | 1 us
391x17 | 4 us | 4 us
391x32 | 4 us | 2 us
391x34 | 7 us | 8 us
5785x96 | 224 us | 303 us

Only the 5785x96 case regresses, because the 16x16 shuffle is not enabled for it: the decomposition does not work with dynamic shapes at the moment, which needs an upstream fix. Applying masking could fix the regression and give us a further boost, as sketched below.
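
For illustration only, here is a minimal sketch of what a masked tile read could look like once the upstream decomposition supports it. The function name, shapes, and operands are hypothetical, not code from this PR; the idea is that boundary tiles of a dynamically-sized source would be read under a mask, so the 16x16 shuffle path still applies instead of falling back to scalar code.

```mlir
// Hypothetical sketch, not the actual lowering: read a possibly-partial
// 16x16 tile from a dynamically-sized source under a mask.
func.func @masked_tile_read(%src: tensor<?x96xf32>, %i: index, %j: index,
                            %valid_rows: index) -> vector<16x16xf32> {
  %c16 = arith.constant 16 : index
  %pad = arith.constant 0.0 : f32
  // Lanes past the dynamic row boundary are masked off and read as %pad.
  %mask = vector.create_mask %valid_rows, %c16 : vector<16x16xi1>
  %v = vector.transfer_read %src[%i, %j], %pad, %mask
      : tensor<?x96xf32>, vector<16x16xf32>
  return %v : vector<16x16xf32>
}
```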

## Pack Fusion Codegen

We also get good pack fusion codegen with this PR. Here is an example extracted from the MobileBert model:

```mlir
#map = affine_map<(d0, d1) -> (d1)>
#map1 = affine_map<(d0, d1) -> (d0, d1)>
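// Bias add over the 512-wide dimension, clamped to [0, FLT_MAX] (a ReLU),
// then packed into 24x512x16x1 tiles.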
func.func @generic_pack(%arg0: tensor<384x512xf32>, %arg1: tensor<512xf32>) -> tensor<24x512x16x1xf32> {
  %cst = arith.constant 3.40282347E+38 : f32
  %cst_0 = arith.constant 0.000000e+00 : f32
  %0 = tensor.empty() : tensor<384x512xf32>
  %1 = linalg.generic {indexing_maps = [#map, #map1, #map1], iterator_types = ["parallel", "parallel"]} ins(%arg1, %arg0 : tensor<512xf32>, tensor<384x512xf32>) outs(%0 : tensor<384x512xf32>) {
  ^bb0(%in: f32, %in_1: f32, %out: f32):
    %3 = arith.addf %in, %in_1 : f32
    %4 = arith.minf %3, %cst : f32
    %5 = arith.maxf %4, %cst_0 : f32
    linalg.yield %5 : f32
  } -> tensor<384x512xf32>
  %2 = tensor.empty() : tensor<24x512x16x1xf32>
  %pack = tensor.pack %1 inner_dims_pos = [0, 1] inner_tiles = [16, 1] into %2 : tensor<384x512xf32> -> tensor<24x512x16x1xf32>
  return %pack : tensor<24x512x16x1xf32>
}
```

Configuration | Single Generic Op | Fusion with the PR | Fusion without the PR
-- | -- | -- | --
Single-threaded w/o Distribution mode | 58 us | 58 us | 107 us

The packing overhead is hidden in the fusion case, as expected. The generated ASM is: https://gist.githubusercontent.com/hanhanW/cccfae78af4cf24c49ee780396061ec2/raw

## UnPack + Generic + Pack

If we benchmark the actual dispatch from MobileBert, which is the `unpack + generic + pack` case, it still takes only 55 us. This shows again that pack/unpack overheads can be hidden by fusion! The same dispatch without the PR takes 98 us, so the PR gives a ~1.8x improvement on these overheads.

```mlir
#map = affine_map<(d0, d1) -> (d1)>
#map1 = affine_map<(d0, d1) -> (d0, d1)>
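// Unpack 24x32x16x16 tiles back to 384x512, apply the same bias add + clamp,
// then re-pack into 24x512x16x1 tiles.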
func.func @unpack_generic_pack(%0: tensor<24x32x16x16xf32>, %1: tensor<512xf32>) -> tensor<24x512x16x1xf32> {
  %cst = arith.constant 3.40282347E+38 : f32
  %cst_0 = arith.constant 0.000000e+00 : f32
  %unpack_init = tensor.empty() : tensor<384x512xf32>
  %unpack = tensor.unpack %0 inner_dims_pos = [0, 1] inner_tiles = [16, 16] into %unpack_init : tensor<24x32x16x16xf32> -> tensor<384x512xf32>
  %2 = tensor.empty() : tensor<384x512xf32>
  %4 = linalg.generic {indexing_maps = [#map, #map1, #map1], iterator_types = ["parallel", "parallel"]} ins(%1, %unpack : tensor<512xf32>, tensor<384x512xf32>) outs(%2 : tensor<384x512xf32>) {
  ^bb0(%in: f32, %in_1: f32, %out: f32):
    %6 = arith.addf %in, %in_1 : f32
    %7 = arith.minf %6, %cst : f32
    %8 = arith.maxf %7, %cst_0 : f32
    linalg.yield %8 : f32
  } -> tensor<384x512xf32>
  %5 = tensor.empty() : tensor<24x512x16x1xf32>
  %pack = tensor.pack %4 inner_dims_pos = [0, 1] inner_tiles = [16, 1] into %5 : tensor<384x512xf32> -> tensor<24x512x16x1xf32>
  return %pack : tensor<24x512x16x1xf32>
}
```

## MobileBert FP32

Default | Data Tiling without the PR | Data Tiling with the PR
-- | -- | --
152 ms | 253 ms | 232 ms

The PR saves us ~20 ms in the e2e model.

Fixes https://github.com/openxla/iree/issues/13127