commit    ab0f86a11f142d0cd4a0923d5783038f70170dfc
author    Manish Gupta <manigupta@google.com>  Mon Mar 27 14:01:57 2023 -0700
committer GitHub <noreply@github.com>          Mon Mar 27 14:01:57 2023 -0700
tree      81dfbe30d9331017494541c64a6412ba1cfc492a
parent    deac6b44af7a32fc6cec4c9a2e013fa6902d4f04
Improved fine-grained Instruction Pipelining for F32 Tensor Cores (mma.sync) (#12761)

This PR improves the fine-grained instruction scheduling for F32 Tensor Cores using the native `mma.sync.aligned.m16n8k8.row.col.f32.tf32.tf32.f32` instruction, `ldmatrix` for operand A, and `ld.shared` for operand B. In summary, the PR covers the following bullets:

- **Improves F32 Tensor Core scheduling: 115 TFLOP/s vs. 99 TFLOP/s.**

  New performance run:

  ```
  ----------------------------------------------------------------
  Dispatch      : matmul_3456x1024x2048_f32t_f32t_f32t_tile_config_128x128_16x5_tensorcore_mma_sync
  Provider      : IREE Codegen
  Operation     : matmul_3456x1024x2048_f32t_f32t_f32t
  Configuration : tile_config_128x128_16x5_tensorcore_mma_sync
  Verification  : SUCCESS
  Bytes         : 50855936
  Flops         : 14495514624
  Runtime(ms)   : 0.126
  GFLOP/s       : 115043.77
  ----------------------------------------------------------------
  ```

  Old performance run:

  ```
  ----------------------------------------------------------------
  Dispatch      : matmul_3456x1024x2048_f32t_f32t_f32t_tile_config_128x128_16x5_tensorcore_mma_sync
  Provider      : IREE Codegen
  Operation     : matmul_3456x1024x2048_f32t_f32t_f32t
  Configuration : tile_config_128x128_16x5_tensorcore_mma_sync
  Verification  : SUCCESS
  Bytes         : 50855936
  Flops         : 14495514624
  Runtime(ms)   : 0.145
  GFLOP/s       : 99969.07
  ----------------------------------------------------------------
  ```

- **Fixes the race issue with the previous schedule: compute-sanitizer generates a clean report for synccheck, memcheck, and racecheck.**

  ```
  $ compute-sanitizer --tool=memcheck ./tools/iree-run-module \
      --module=./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/matmul_3456x1024x2048_f32t_f32t_f32t_verify.vmfb \
      --device=cuda \
      --function=matmul_3456x1024x2048_f32t_f32t_f32t_tile_config_128x128_16x5_tensorcore_mma_sync \
      --input=@./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/m3456xk2048_f32t_random_lhs.npy \
      --input=@./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/k2048xn1024_f32t_random_rhs.npy \
      --expected_output=@./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/m3456xn1024_f32t_reference_result.npy
  ========= COMPUTE-SANITIZER
  EXEC @matmul_3456x1024x2048_f32t_f32t_f32t_tile_config_128x128_16x5_tensorcore_mma_sync
  [SUCCESS] all function outputs matched their expected values.
  ========= ERROR SUMMARY: 0 errors

  $ compute-sanitizer --tool=synccheck ./tools/iree-run-module \
      --module=./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/matmul_3456x1024x2048_f32t_f32t_f32t_verify.vmfb \
      --device=cuda \
      --function=matmul_3456x1024x2048_f32t_f32t_f32t_tile_config_128x128_16x5_tensorcore_mma_sync \
      --input=@./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/m3456xk2048_f32t_random_lhs.npy \
      --input=@./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/k2048xn1024_f32t_random_rhs.npy \
      --expected_output=@./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/m3456xn1024_f32t_reference_result.npy
  ========= COMPUTE-SANITIZER
  EXEC @matmul_3456x1024x2048_f32t_f32t_f32t_tile_config_128x128_16x5_tensorcore_mma_sync
  [SUCCESS] all function outputs matched their expected values.
  ========= ERROR SUMMARY: 0 errors

  $ compute-sanitizer --tool=racecheck ./tools/iree-run-module \
      --module=./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/matmul_3456x1024x2048_f32t_f32t_f32t_verify.vmfb \
      --device=cuda \
      --function=matmul_3456x1024x2048_f32t_f32t_f32t_tile_config_128x128_16x5_tensorcore_mma_sync \
      --input=@./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/m3456xk2048_f32t_random_lhs.npy \
      --input=@./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/k2048xn1024_f32t_random_rhs.npy \
      --expected_output=@./generated/linalg/matmul/matmul_3456x1024x2048_f32t_f32t_f32t/m3456xn1024_f32t_reference_result.npy
  ========= COMPUTE-SANITIZER
  EXEC @matmul_3456x1024x2048_f32t_f32t_f32t_tile_config_128x128_16x5_tensorcore_mma_sync
  [SUCCESS] all function outputs matched their expected values.
  ========= RACECHECK SUMMARY: 0 hazards displayed (0 errors, 0 warnings)
  ```

- **Re-enables the F32 Matmul e2e test disabled in [PR#12751](https://github.com/openxla/iree/pull/12751).**
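The reported numbers can be cross-checked from the dispatch shape alone: a 3456x1024x2048 matmul performs 2·M·N·K flops (each multiply-accumulate counted as two), and dividing by the measured runtime yields the GFLOP/s figures above. A small sanity-check script (not part of the PR, purely illustrative):

```python
# Cross-check the "Flops" and "GFLOP/s" fields reported above for the
# matmul_3456x1024x2048 dispatch. Illustrative only; not part of the PR.
M, N, K = 3456, 1024, 2048

flops = 2 * M * N * K  # one multiply-accumulate = 2 flops
print(flops)  # 14495514624, matching the "Flops" field

def gflops_per_sec(flops, runtime_ms):
    """Throughput in GFLOP/s given a runtime in milliseconds."""
    return flops / (runtime_ms * 1e-3) / 1e9

print(round(gflops_per_sec(flops, 0.126), 2))  # 115043.77 (new schedule)
print(round(gflops_per_sec(flops, 0.145), 2))  # 99969.07  (old schedule)
```

This confirms the roughly 15% throughput improvement claimed by the PR comes directly from the runtime drop from 0.145 ms to 0.126 ms on the same problem size.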
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback through any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.