| commit | deb48059607272ba3a320ec5ccef48783e6b3fa5 | |
|---|---|---|
| author | Stella Laurenzo <laurenzo@google.com> | Fri Nov 25 10:41:46 2022 -0800 |
| committer | GitHub <noreply@github.com> | Fri Nov 25 10:41:46 2022 -0800 |
| tree | d673962e032122ef08f7335115c2076ce8283dab | |
| parent | 7ed278e2d91b1a70187e5414ecb6e736444f9279 | |
Enable split-dwarf and thin archives when possible. (#11292)
Split-dwarf separates debug info into per-object .dwo files, which reduces
IO throughout the build. It works best with the gdb-index linker feature
(of gold and lld), which links against this debug info (instead of embedding
it) and also adds an index that speeds up debugger launch.
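As a rough sketch of what enabling this looks like in a CMake build (illustrative only, not necessarily the exact change in this PR; the `SUPPORTS_SPLIT_DWARF` variable name is made up for the example):
```
# Minimal sketch, assuming Clang or GCC with a gold/lld linker.
include(CheckCXXCompilerFlag)
check_cxx_compiler_flag("-gsplit-dwarf" SUPPORTS_SPLIT_DWARF)  # hypothetical variable name
if(SUPPORTS_SPLIT_DWARF AND CMAKE_BUILD_TYPE MATCHES "Debug|RelWithDebInfo")
  # Emit debug info into per-object .dwo files instead of the .o/.a/.so.
  add_compile_options(-gsplit-dwarf)
  # Ask the linker (gold or lld) to emit a .gdb_index section so gdb starts fast.
  add_link_options(-Wl,--gdb-index)
endif()
```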
Thin archives, a feature of GNU ar and llvm-ar on Linux, produce static
archives that do not embed object files, instead referencing them by path.
While split-dwarf is aimed at debug configurations, thin archives can
help all builds.
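For reference, one common way to get thin archives out of CMake is to add the archiver's `T` modifier to the archive rules; the sketch below shows that general technique and is not necessarily the mechanism this PR uses:
```
# Minimal sketch: ask GNU ar / llvm-ar to create thin archives. The trailing
# "T" modifier records paths to the object files instead of copying them
# into the .a archive.
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
  set(CMAKE_C_ARCHIVE_CREATE   "<CMAKE_AR> qcT <TARGET> <LINK_FLAGS> <OBJECTS>")
  set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> qcT <TARGET> <LINK_FLAGS> <OBJECTS>")
  set(CMAKE_C_ARCHIVE_APPEND   "<CMAKE_AR> qT <TARGET> <LINK_FLAGS> <OBJECTS>")
  set(CMAKE_CXX_ARCHIVE_APPEND "<CMAKE_AR> qT <TARGET> <LINK_FLAGS> <OBJECTS>")
endif()
```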
Results:
```
Before:
Clean build:
real 7m24.609s
user 392m31.930s
sys 18m59.113s
build dir: 11GiB
libIREECompiler.so: 1.4GiB
Trivial relink the compiler:
real 0m5.336s
user 0m12.862s
sys 0m31.818s
After:
Clean build:
real 7m38.461s
user 402m52.180s
sys 18m8.314s
build dir: 5.2GiB
libIREECompiler.so: 490MiB
Trivial relink the compiler:
real 0m4.350s
user 0m8.233s
sys 0m8.104s
gdb of iree-compile starts instantly and sets a breakpoint on ireeCompilerRunMain with no delay (a few seconds to step in)
```
For a RelWithDebInfo build as documented on our website, the change in wall
clock time for a clean build is in the noise on my machine, but with the new
flags:
* Build directory size is reduced by >50%.
* Size of the main compiler shared library is reduced by ~65%.
* Time to incrementally relink the compiler after a trivial change to a
C file is reduced by ~18%.
Ideally there would be a less invasive way to enable these things, but
that isn't coming any time soon, and I think the complexity is worth it. The
trivial relink case suggests that the cycle-time improvement should be even
more pronounced on lower-end machines.

IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled down on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback on any communication channels!
See our website for more information.
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.