| field | value | date |
|---|---|---|
| commit | 7ec09ce9c21c3193a675f6bc0737f63b8ac32915 | |
| author | Stella Laurenzo <laurenzo@google.com> | Thu Nov 24 11:13:52 2022 -0800 |
| committer | GitHub <noreply@github.com> | Thu Nov 24 11:13:52 2022 -0800 |
| tree | a31ae618a462b65564b80f8cd88e4856665e8d31 | |
| parent | 1814bc50be6662a571b9e7b330944b6533ee1b4b | |
Create explicit compiler C API libraries and enable dynamic linking on all platforms (#11285)

This is the first step of a multi-part sequence that will:

* Rebuild the compiler/API directory into its final form.
* Move the compiler Python API to its correct location (bindings/python).
* Enable dynamic linking project-wide for all compiler tools.

This first step leaves the following undone:

* The Python API stays where it is (there are a lot of fiddly path dependencies there, and I opted to handle that in isolation).
* Sets everything up in an API2 directory, which will eventually be renamed to API.
* Just gets the stub of what is needed for a delay-loaded embedding API -- the full thing will follow and be integrated into PJRT for Jax and TensorFlow serving on the OSS side.
* The project-local lld is still built statically, even though the API includes it in the shared library. This was just to keep the patch size down, not because of a fundamental problem.

In the new state, there is a libIREECompiler.so.0 (or IREECompiler.dll on Windows) which contains both the in-process APIs and the tool entry points. The following tools have been reworked to link against the shared library:

* iree-compile
* iree-lld (Python side only)
* iree-opt
* iree-mlir-lsp-server

In addition, some API tests which were doing expensive static linking now link dynamically. All told, this saves ~4-5 multi-gigabyte compiler tool linking steps and should be a pretty substantial build-iteration improvement.

I wasn't originally going to include iree-opt and iree-mlir-lsp-server in the shared build, but based on an experiment, they contribute negligible size (really just a few extra source files when linked together), and the productivity and usability savings (i.e. we can now distribute those at negligible cost) made me decide it was worth it.

I don't know what to do about iree-run-mlir. I think I can refactor it into something based on the compiler API that then links the runtime. But that is for later.
I took some steps to ensure this will work on Windows and MacOS but haven't finished verifying.
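As a rough illustration of what a delay-loaded embedding of the shared library looks like, the sketch below resolves the platform-specific library name described above and attempts to load it at runtime via `ctypes`. This is a hedged sketch only: the library names come from this commit message, but the loading strategy (and the graceful fallback when the library is absent) is an assumption, not IREE's actual embedding API.

```python
"""Sketch: delay-loading the IREE compiler shared library at runtime.

The library names (libIREECompiler.so.0 / IREECompiler.dll) are taken
from the commit message; the macOS name and the loading approach are
illustrative assumptions.
"""
import ctypes
import platform


def compiler_library_name() -> str:
    """Return the platform-specific shared library name for the compiler."""
    system = platform.system()
    if system == "Windows":
        return "IREECompiler.dll"
    if system == "Darwin":
        # Assumed name on macOS, following common dylib conventions.
        return "libIREECompiler.dylib"
    return "libIREECompiler.so.0"


def try_load_compiler():
    """Attempt to load the compiler library; return None if it is absent.

    A delay-loaded embedder would resolve individual entry points from the
    returned handle only when first needed, instead of linking at build time.
    """
    try:
        return ctypes.CDLL(compiler_library_name())
    except OSError:
        return None


if __name__ == "__main__":
    handle = try_load_compiler()
    if handle is None:
        print(f"{compiler_library_name()} is not available on this system")
    else:
        print(f"loaded {compiler_library_name()}")
```

The point of this pattern for consumers like PJRT is that the host process starts without the multi-gigabyte compiler mapped in, and only pays that cost if and when compilation is actually requested.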
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback on any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.