[Codegen][GPU] Update greedy tile + fuse pipeline to generate mfma (#17617)

This adds intrinsic packing and reshape propagation patterns to
LLVMGPUTileAndFuse to allow for generating mfma operations. It also adds
a few passes that invoke the patterns needed for the pipeline to
generate (good) code.

1. PropagateReshapesByExpansion to propagate reshapes introduced after
decomposing tensor.pack/unpack towards the edges of the kernel in the
hopes that the destination can line up properly.
2. IREE::GPU::PackToIntrinsics to pack based on the mma kind specified
in the lowering config.
3. IREE::GPU::DistributeMmaToLanes to distribute iree_gpu.multi_mma ops
to lanes, similar to another tiling level.
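Schematically, the three steps compose as follows. This is an illustrative sketch only: the exact op syntax, attribute names (e.g. the mma kind), and shapes are approximations, not verbatim output of the pipeline.

```mlir
// 1. PackToIntrinsics packs the contraction operands to the intrinsic
//    shape (e.g. 16x16x16) based on the mma kind in the lowering config.
//    (inner_dims_pos / inner_tiles elided for brevity.)
%lhs_p = tensor.pack %lhs ... : tensor<?x?xf16> -> tensor<?x?x16x16xf16>
%rhs_p = tensor.pack %rhs ... : tensor<?x?xf16> -> tensor<?x?x16x16xf16>
%acc_p = tensor.pack %acc ... : tensor<?x?xf32> -> tensor<?x?x16x16xf32>

// 2. The packed contraction becomes an iree_gpu.multi_mma op
//    (kind attribute shown approximately):
%0 = iree_gpu.multi_mma %lhs_p, %rhs_p, %acc_p { kind = ... }
  : tensor<?x?x16x16xf16>, tensor<?x?x16x16xf16> into tensor<?x?x16x16xf32>

// 3. DistributeMmaToLanes then distributes the multi_mma to lanes,
//    analogous to another tiling level (conceptually, wrapping it in an
//    scf.forall over lane ids).
```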

There are a few known outstanding issues.

1. We run `ConvertToDestinationPassingStyle` twice to re-link the kernel
destination with the body after decomposing `tensor.unpack`. This is to
work around an issue with EliminateEmptyTensors being unable to analyze
`flow.dispatch.tensor.store` ops with slicing behavior properly. After
workgroup distribution is refactored to generate an scf.forall, this
needs to be revisited.
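A sketch of the problematic case (IR is approximate, with symbolic offsets/sizes): after decomposing `tensor.unpack`, the result reaches the output binding through a store that covers only a slice of the full tensor.

```mlir
// Because the store writes only a slice of the full 128x128 result,
// EliminateEmptyTensors cannot tie the computation's init tensor back
// to this destination, hence the second run of
// ConvertToDestinationPassingStyle.
flow.dispatch.tensor.store %unpacked, %out,
    offsets = [%i, %j], sizes = [%si, %sj], strides = [1, 1]
    : tensor<?x?xf32> ->
      !flow.dispatch.tensor<writeonly:tensor<128x128xf32>>
```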
2. iree_gpu.shuffle_tensor lowering to `tensor.insert_slice` is still
broken. This will need to be reworked to support dynamic shapes.
3. Currently only MFMA_16x16x16 works, because of the way the layout is
handled. Supporting other layouts will require another level of
expansion to the intrinsic's implicit layout, followed by propagation of
those expand_shapes. This will likely need to happen after reduction
tiling unless we want to teach tile + fuse to swap tensor.expand_shape
ops with tensor.extract_slice.
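The swap in question would look roughly like the following rewrite. The reassociation and shapes below are illustrative only, chosen so the two forms are equivalent.

```mlir
// Before: tiling produces an extract_slice of the expanded tensor,
// which blocks further propagation of the expand_shape.
%e = tensor.expand_shape %src [[0, 1], [2]] output_shape [2, 16, 64]
       : tensor<32x64xf32> into tensor<2x16x64xf32>
%s = tensor.extract_slice %e[0, 0, 0] [1, 16, 64] [1, 1, 1]
       : tensor<2x16x64xf32> to tensor<1x16x64xf32>

// After the swap: slice the source first, then expand, so the
// expand_shape can keep moving toward the edges of the kernel.
%s0 = tensor.extract_slice %src[0, 0] [16, 64] [1, 1]
        : tensor<32x64xf32> to tensor<16x64xf32>
%e0 = tensor.expand_shape %s0 [[0, 1], [2]] output_shape [1, 16, 64]
        : tensor<16x64xf32> into tensor<1x16x64xf32>
```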
48 files changed
tree: a8f0c93f9f38b51bd72389038a56c7c98c9347ce
README.md

IREE: Intermediate Representation Execution Environment

IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.

See our website for project details, user guides, and instructions on building from source.


Project Status

IREE is still in its early phase. We have settled down on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback through any of the communication channels!

Communication Channels

Related Project Channels

  • MLIR topic within LLVM Discourse: IREE is enabled by and heavily relies on MLIR. IREE is sometimes referred to in MLIR discussions. Useful if you are also interested in MLIR evolution.

Architecture Overview

IREE Architecture

See our website for more information.

Presentations and Talks

Community meeting recordings: IREE YouTube channel

  • 2021-06-09: IREE Runtime Design Tech Talk (recording and slides)
  • 2020-08-20: IREE CodeGen: MLIR Open Design Meeting Presentation (recording and slides)
  • 2020-03-18: Interactive HAL IR Walkthrough (recording)
  • 2020-01-31: End-to-end MLIR Workflow in IREE: MLIR Open Design Meeting Presentation (recording and slides)

License

IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.