commit 7b7ffeb71dead0dd7fdc80f975bae75274749e3b
Author:    Stella Laurenzo <stellaraccident@gmail.com>  Wed Dec 20 19:03:59 2023 -0800
Committer: GitHub <noreply@github.com>  Wed Dec 20 19:03:59 2023 -0800
Tree:      4f26020c0227dc9ea5d9e82a3a2d34ba23fddde6
Parent:    193bc277db88c6d9eb550abf5242f281344212e4
[onnx] Add ONNX importer and iree-import-onnx tool to compiler package. (#15920)

* When building the torch frontend, we now also have access to the upstream ONNX importer and include it here as part of our official API.
* Also added a custom `iree-import-onnx` tool and corresponding test.
* Added extras_require for `onnx` to setup.py (allowing it to be installed as an optional dependency).
* Added a _package_test.py for the compiler package like the runtime has and configured the CI to use it.
* Added a check to the release validation job.
* Includes a bump of torch-mlir to latest.
* May need to tweak some things in the input pipeline to get iree-compile to work on this by default. Will do in a followup.
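As a rough sketch of how the optional dependency and new tool fit together: the `onnx` extra comes from the commit's `extras_require` change, while the package name, flag, and model path below are illustrative assumptions rather than part of this commit.

```shell
# Install the compiler package with the optional 'onnx' extra,
# which pulls in the onnx package via extras_require.
# (Package name assumed; adjust to the published distribution name.)
pip install iree-compiler[onnx]

# Import an ONNX model into MLIR form with the new tool.
# model.onnx is a placeholder path; -o is assumed to select the output file.
iree-import-onnx model.onnx -o model.mlir
```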
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback on any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.