commit 585e5ca36587a9d74f1cc2f9ef8fc19f535e3a1e
author: Stella Laurenzo <stellaraccident@gmail.com>  Fri Aug 25 18:37:41 2023 -0700
committer: GitHub <noreply@github.com>  Sat Aug 26 01:37:41 2023 +0000
tree: c850b2a4dd036118c2ac166935af65635b24eb9e
parent: 65afae6e7d678269b3ae4697dc19cd661c662a7b
Move demotion passes to GlobalOptimization. (#14815)

The global optimizations really depend on the flow-level demotion passes, so this moves them to the right place. It also restores the ordering of the strip-assertions pass so that it can guide optimizations.

Renames the old `-iree-flow-(demote|promote)-*` flags to `-iree-opt-(demote|promote)-*` now that the passes are just part of the rest of the global optimizations. Internally, renames "HighLevelOptimizations" to "GlobalOptimizations" for coherence.

This lets us enable consteval on llama2 7b qi4/f16 models and drops latency from 14.3ms to 11.9ms. Mostly this covers small scalar-level evaluations, but it also eliminates a 250MiB f16 transpose in both first/second, which I expect is a main culprit.

Fixes #14835.
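As a hedged illustration of the flag rename, here is what an invocation might look like before and after this change. The commit names only the `-iree-(flow|opt)-(demote|promote)-*` flag families; the concrete `demote-f32-to-f16` suffix and file names below are assumed examples, not confirmed by this commit.

```shell
# Before this change, demotion was spelled as a flow-level flag
# (hypothetical concrete suffix for illustration):
#   iree-compile --iree-flow-demote-f32-to-f16 model.mlir -o model.vmfb

# After this change, the same option lives under the -iree-opt- prefix,
# reflecting its new home in the GlobalOptimization pipeline:
iree-compile --iree-opt-demote-f32-to-f16 model.mlir -o model.vmfb
```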
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
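As a minimal sketch of that end-to-end flow, an MLIR program can be compiled to an IREE module and then executed with IREE's command-line tools. The target backend, device, file names, and the function name and input below are illustrative assumptions, not values taken from this page.

```shell
# Compile an MLIR program into a deployable IREE module for a CPU target
# (backend choice and file names are assumptions for illustration).
iree-compile --iree-hal-target-backends=llvm-cpu input.mlir -o module.vmfb

# Run an exported function from the compiled module on the local CPU device
# (function name and input value are hypothetical).
iree-run-module --module=module.vmfb --device=local-task \
  --function=abs --input="f32=-2.0"
```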
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is currently made available without any support. With that said, we welcome any kind of feedback through any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.