commit | 14d6a17940c6b55f40eaf90701d8078adaafda18
---|---
author | Han-Chung Wang <hanchung@google.com>, Wed Nov 10 12:23:25 2021 -0800
committer | GitHub <noreply@github.com>, Wed Nov 10 12:23:25 2021 -0800
tree | 5158a0d224cb737f1e6fbc787954133f2e12c9cb
parent | 125900a6c6e96dc9be59c1985220201d949426e0
Use tile and fuse on tensors for CPU pipeline (#7533)

The PR adds a new enum value for developing the new codegen strategy, which aims to vectorize matmul + generic cases. Unlike the GPU pipeline, we don't apply the unroll-vectors pass, since unrolling vectors would increase register pressure. For now we keep the existing behavior; we can revisit whether to unroll later.

The remaining regression in `BM_dot_384x512x2` is because it's not vectorized: the shapes are dynamic, and the pass does nothing when any of the ops can't be vectorized.

## Single-threaded performance on x86

Before:

```
----------------------------------------------------------------------------------------
Benchmark                                              Time             CPU   Iterations
----------------------------------------------------------------------------------------
BM_dot_384x384x512/process_time/real_time           3.84 ms         3.84 ms          182
BM_dot_384x128x128/process_time/real_time          0.149 ms        0.149 ms         4704
BM_dot_384x128x512/process_time/real_time          0.733 ms        0.733 ms          959
BM_dot_384x512x128/process_time/real_time          0.651 ms        0.651 ms         1063
BM_dot_384x512x2/process_time/real_time            0.313 ms        0.313 ms         2240
BM_dot_384x384x32/process_time/real_time           0.184 ms        0.184 ms         3785
BM_dot_384x384x512_exp/process_time/real_time       3.85 ms         3.85 ms          182
BM_dot_384x128x128_exp/process_time/real_time      0.175 ms        0.175 ms         4157
BM_dot_384x128x512_exp/process_time/real_time      0.887 ms        0.887 ms          809
BM_dot_384x512x128_exp/process_time/real_time      0.669 ms        0.669 ms          964
BM_dot_384x512x2_exp/process_time/real_time        0.342 ms        0.342 ms         2074
BM_dot_384x384x32_exp/process_time/real_time       0.194 ms        0.193 ms         3592
```

After:

```
----------------------------------------------------------------------------------------
Benchmark                                              Time             CPU   Iterations
----------------------------------------------------------------------------------------
BM_dot_384x384x512/process_time/real_time           2.60 ms         2.60 ms          263
BM_dot_384x128x128/process_time/real_time          0.124 ms        0.124 ms         5680
BM_dot_384x128x512/process_time/real_time          0.500 ms        0.500 ms         1393
BM_dot_384x512x128/process_time/real_time          0.550 ms        0.550 ms         1269
BM_dot_384x512x2/process_time/real_time            0.507 ms        0.507 ms         1393
BM_dot_384x384x32/process_time/real_time           0.124 ms        0.124 ms         5668
BM_dot_384x384x512_exp/process_time/real_time       2.50 ms         2.50 ms          276
BM_dot_384x128x128_exp/process_time/real_time      0.136 ms        0.136 ms         5200
BM_dot_384x128x512_exp/process_time/real_time      0.550 ms        0.550 ms         1263
BM_dot_384x512x128_exp/process_time/real_time      0.561 ms        0.561 ms         1234
BM_dot_384x512x2_exp/process_time/real_time        0.515 ms        0.515 ms         1352
BM_dot_384x384x32_exp/process_time/real_time       0.129 ms        0.129 ms         5439
```
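For reference, the matmul + generic pattern this strategy targets looks roughly like the sketch below. This is a hypothetical example written to match the `_exp` benchmarks above; the function name and exact tensor types are illustrative and not taken from the benchmark suite. The tile-and-fuse-on-tensors strategy tiles the `linalg.matmul`, fuses the trailing elementwise `linalg.generic` into the same tile loop nest, and vectorizes the fused tile when all shapes are static.

```mlir
// Hypothetical matmul followed by an elementwise exp, in the
// linalg-on-tensors form the new CPU strategy operates on.
func @dot_exp(%lhs: tensor<384x512xf32>, %rhs: tensor<512x128xf32>,
              %init: tensor<384x128xf32>) -> tensor<384x128xf32> {
  // Producer: the matmul that gets tiled first.
  %mm = linalg.matmul
      ins(%lhs, %rhs : tensor<384x512xf32>, tensor<512x128xf32>)
      outs(%init : tensor<384x128xf32>) -> tensor<384x128xf32>
  // Consumer: an elementwise generic that gets fused into the matmul's
  // tile loops instead of materializing the full intermediate tensor.
  %res = linalg.generic {
      indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                       affine_map<(d0, d1) -> (d0, d1)>],
      iterator_types = ["parallel", "parallel"]}
      ins(%mm : tensor<384x128xf32>)
      outs(%init : tensor<384x128xf32>) {
  ^bb0(%in: f32, %out: f32):
      %e = math.exp %in : f32
      linalg.yield %e : f32
  } -> tensor<384x128xf32>
  return %res : tensor<384x128xf32>
}
```

If the static dimensions above were dynamic (e.g. `tensor<384x?xf32>`, as in `BM_dot_384x512x2`), the vectorization step would bail out, which matches the regression described above.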
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any feedback through any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.