commit    975ba03c2205a5d2f141422495927a4e3b547162
author    Manish Gupta <manigupta@google.com>    Thu Jun 08 11:07:34 2023 -0700
committer GitHub <noreply@github.com>            Thu Jun 08 11:07:34 2023 -0700
tree      e406146b55da1c65dd834dd818a769ada3bf534f
parent    6a61d9f5e0418fd8322306471cee613c2b31efe7
Adds support for mixed-precision NVIDIA A100 Tensor Cores (F32 <= F16 * F16 + F32) (#13857)

This PR adds mixed-precision NVIDIA A100 Tensor Core support, addressing issue #13813 from Epic #13812. F16 lhs and rhs inputs with F32 accumulation and F32 output are supported on the CUDA backend by lowering to the `mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32` PTX instruction. Inputs and outputs have different bit widths (16-bit inputs, 32-bit outputs).

```bash
python3 dispatch_profiler/profiler.py --dispatches=matmul_3456x1024x2048_f16t_f16t_f32t
----------------------------------------------------------------
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x256_32x3_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_128x256_32x3_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.067
GFLOPs        : 216350.96
----------------------------------------------------------------
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x128_64x4_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_128x128_64x4_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.069
GFLOPs        : 210079.92
----------------------------------------------------------------
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x128_32x5_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_128x128_32x5_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.066
GFLOPs        : 219629.01
----------------------------------------------------------------
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x64_32x5_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation    : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_128x64_32x5_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.08
GFLOPs        : 181193.93
----------------------------------------------------------------
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_64x64_64x5_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_64x64_64x5_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.104
GFLOPs        : 139379.95
----------------------------------------------------------------
Dispatch      : matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_64x64_32x10_tensorcore_mmasync
Provider      : IREE Codegen
OpKind        : OperationKind.Matmul
Operation     : matmul_3456x1024x2048_f16t_f16t_f32t
Configuration : tile_config_64x64_32x10_tensorcore_mmasync
Arguments     : --batch_count=1 --m=3456 --n=1024 --k=2048 --lhs=f16t --rhs=f16t --result=f32t --split_k_mode=N/A --split_k_slices=N/A
Verification  : SUCCESS
Runtime(ms)   : 0.105
GFLOPs        : 138052.52
```

## Top performance

The top performance for mixed-precision `F32 <= F16 * F16 + F32` is `219 TFLOPs`, achieved with `matmul_3456x1024x2048_f16t_f16t_f32t_tile_config_128x128_32x5_tensorcore_mmasync`.
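For context, the sketch below (not part of this PR, and not IREE's generated code) shows roughly what a single warp-level issue of the `mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32` instruction looks like from CUDA via inline PTX. The function name is hypothetical, and it assumes the A/B operand fragments have already been distributed across the warp's registers in the layout the instruction expects (e.g. via `ldmatrix`), which the actual codegen pipeline handles:

```cuda
// Hypothetical sketch of one warp-synchronous m16n8k16 tensor core MMA
// (F32 <= F16 * F16 + F32) issued via inline PTX. Requires sm_80 (A100).
// Per the PTX fragment layout, each thread of the warp holds 4x b32
// registers of A (8 f16 values), 2x b32 registers of B (4 f16 values),
// and 4 f32 registers of the C/D accumulator tile.
__device__ void mma_m16n8k16_f16f16_f32(float d[4], const unsigned a[4],
                                        const unsigned b[2], const float c[4]) {
  asm volatile(
      "mma.sync.aligned.m16n8k16.row.col.f32.f16.f16.f32 "
      "{%0,%1,%2,%3}, {%4,%5,%6,%7}, {%8,%9}, {%10,%11,%12,%13};\n"
      : "=f"(d[0]), "=f"(d[1]), "=f"(d[2]), "=f"(d[3])
      : "r"(a[0]), "r"(a[1]), "r"(a[2]), "r"(a[3]),
        "r"(b[0]), "r"(b[1]),
        "f"(c[0]), "f"(c[1]), "f"(c[2]), "f"(c[3]));
}
```

The configuration names above appear to encode the threadblock tile shape (e.g. `128x128`) plus the K-tile size and software pipeline depth (e.g. `32x5`); each such configuration issues many of these `mma.sync` operations per K-step.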
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. That said, we welcome feedback through any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.