| commit | 67c8d3f27c113ef7b3ec63d3df2b6bbf738689b4 |
|---|---|
| author | bjacob <benoitjacob@google.com>, Wed Oct 27 15:05:10 2021 -0400 |
| committer | GitHub <noreply@github.com>, Wed Oct 27 12:05:10 2021 -0700 |
| tree | 6196882a1d513f9f85f6b8548245b767cc4961e0 |
| parent | 527b7a0b3677d991e8f3e19047ab7d8bb0df87dc |
Trim e2e matmul tests and share MLIR code across testcases (#7475)

When I wrote these tests, I took great care to ensure low test run latency, but I didn't think about test compilation latency. This PR reduces these build times in two ways:

1. By commenting out half of the shapes in `get_test_shapes`. I've retained the ~50% of shapes that I believe provide ~90% of the coverage. The remaining 10% of coverage will only start to matter later, when we make the matmul implementation do more complicated things; we can uncomment those shapes then.
2. By ensuring that testcases exercising the exact same code (differing only in runtime data) share that code at the source level, without relying on CSE, which might kick in too late to recover the best compilation latency. This is done by changing `generate_function_name` to generate the same exact name for such testcases, so that we are sure to emit only one function (see the sketch after the build timings below). There are two sub-cases:
   a. Testcases that differ only in dynamic shape dimensions. Before, we could have functions `foo_2x2(tensor<?x?xf32>)` and `foo_3x3(tensor<?x?xf32>)` doing the same thing, differing only in the dynamic shapes they are called on. Now it's `foo_DYNxDYN(tensor<?x?xf32>)`.
   b. Testcases that differ only in the generator of matrix elements they are called with. Before, we would have `foo_identity(tensor<4x4xf32>)` and `foo_random(tensor<4x4xf32>)`. Now it's just `foo(tensor<4x4xf32>)`.

Before:

```
$ time cmake --build .
[0/2] Re-checking globbed directories...
[28/28] Generating e2e_matmul_direct_i8_small_dylib-llvm-aot_dylib.vmfb

real    0m56.333s
user    10m24.481s
sys     3m46.072s
```

After:

```
$ time cmake --build .
[0/2] Re-checking globbed directories...
[28/28] Generating e2e_matmul_mmt4d_i8_small_dylib-llvm-aot_dylib.vmfb

real    0m22.573s
user    3m34.928s
sys     1m23.833s
```
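As a rough sketch of the name-sharing idea in point 2: the commit message names `generate_function_name`, but the signature, the `DYNAMIC` sentinel, and the `dim_token` helper below are assumptions for illustration, not the actual IREE generator code. The key property is that all dynamic dimensions map to one token and the element generator does not appear in the name at all, so such testcases collapse onto a single function at the source level.

```python
# Hedged sketch, not IREE's real generator: DYNAMIC, dim_token, and this
# generate_function_name signature are hypothetical.

DYNAMIC = None  # a dynamic dimension ("?" in MLIR, "DYN" in function names)

def dim_token(dim):
    # Every dynamic dim renders as the same "DYN" token, so dynamic-shape
    # testcases share one function instead of getting foo_2x2, foo_3x3, ...
    return "DYN" if dim is DYNAMIC else str(dim)

def generate_function_name(m, k, n, elem_type):
    # Note what is deliberately absent from the name: the matrix-element
    # generator (identity/random). Testcases differing only in runtime data
    # thus share one function without relying on CSE.
    return (f"matmul_{dim_token(m)}x{dim_token(k)}"
            f"_{dim_token(k)}x{dim_token(n)}_{elem_type}")

# Both a "2x2" and a "3x3" dynamic-shape testcase now name the same function:
print(generate_function_name(DYNAMIC, DYNAMIC, DYNAMIC, "f32"))
# -> matmul_DYNxDYN_DYNxDYN_f32
```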
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback through any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.