Adds support for mixed precision NVIDIA A100 Tensor Cores (F32 <= F16 * F16 + F32) (#13857)

This PR adds mixed-precision NVIDIA A100 Tensor Core support, addressing issue #13813 from Epic #13812.

Matmuls with F16 inputs for both lhs and rhs, F32 accumulation, and F32 output
are supported on the CUDA backend by lowering to the
`mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32` PTX instruction. Note that
the inputs and outputs have different bit widths (16-bit inputs, 32-bit outputs).
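
For reference, here is a minimal sketch of what this lowering bottoms out to at the PTX level: a hand-written CUDA device function issuing the same `mma.sync` instruction via inline assembly. The register layout follows the PTX ISA for the m16n8k16 f16 shape; the function name and calling convention are illustrative, not IREE's generated code.

```cuda
// Illustrative only: one warp computes a single 16x8 F32 tile as
//   D (16x8, f32) = A (16x16, f16) * B (16x8, f16) + C (16x8, f32).
// Per the PTX ISA, each of the 32 threads in the warp holds 4 x .f16x2
// registers of A, 2 x .f16x2 of B, and 4 x f32 of the accumulator.
// Requires compute capability 8.0+ (e.g. A100): nvcc -arch=sm_80.
__device__ void mma_m16n8k16_f32_f16_f16_f32(float d[4],
                                             const unsigned a[4],
                                             const unsigned b[2],
                                             const float c[4]) {
  asm volatile(
      "mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32 "
      "{%0,%1,%2,%3}, {%4,%5,%6,%7}, {%8,%9}, {%10,%11,%12,%13};\n"
      : "=f"(d[0]), "=f"(d[1]), "=f"(d[2]), "=f"(d[3])
      : "r"(a[0]), "r"(a[1]), "r"(a[2]), "r"(a[3]),
        "r"(b[0]), "r"(b[1]),
        "f"(c[0]), "f"(c[1]), "f"(c[2]), "f"(c[3]));
}
```

Performance of the new path, measured with the repository's dispatch profiler, is shown below: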


```bash
python3 dispatch_profiler/profiler.py --dispatches=matmul_3456x1024x2048_f16t_f16t_f32t
---------------------------------------------------------------- 
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x256_32x3_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_128x256_32x3_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t
                --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.067
GFLOPs        : 216350.96
---------------------------------------------------------------- 
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x128_64x4_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_128x128_64x4_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t
                --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.069
GFLOPs        : 210079.92
---------------------------------------------------------------- 
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x128_32x5_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_128x128_32x5_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t
                --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.066
GFLOPs        : 219629.01
---------------------------------------------------------------- 
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x64_32x5_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_128x64_32x5_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t
                --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.08
GFLOPs        : 181193.93
---------------------------------------------------------------- 
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_64x64_64x5_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_64x64_64x5_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t
                --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.104
GFLOPs        : 139379.95
---------------------------------------------------------------- 
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_64x64_32x10_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_64x64_32x10_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t
                --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.105
GFLOPs        : 138052.52
```

## Top performance

Top performance for mixed-precision `F32 <= F16 * F16 + F32` is `~219.6 TFLOPs`, achieved by
`matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x128_32x5_tensorcore_mmasync`.
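
As a quick sanity check on the reported numbers, the throughput is the standard 2·M·N·K matmul FLOP count divided by the measured runtime:

```math
2 \cdot M \cdot N \cdot K = 2 \cdot 3456 \cdot 1024 \cdot 2048 \approx 1.45 \times 10^{10}\ \text{FLOPs}, \qquad
\frac{1.45 \times 10^{10}\ \text{FLOPs}}{0.066\ \text{ms}} \approx 219.6\ \text{TFLOPs}
```

which matches the `219629.01` GFLOPs reported by the profiler above.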