| commit | 352da3ffa3f8468178836d865b9e1093e9cd0bfb | |
|---|---|---|
| author | Scott Todd <scotttodd@google.com> | Wed Jul 20 15:25:11 2022 -0700 |
| committer | GitHub <noreply@github.com> | Wed Jul 20 15:25:11 2022 -0700 |
| tree | 876546ae9e3423df94802f197160aa98f947bbd9 | |
| parent | c8bd90646409ca942ba2073194c22882560bc1ea | |
Remove "AOT" from LLVM target names. (#9854)

TLDR: Starting a transition from legacy names like `dylib-llvm-aot` to just `llvm-cpu`.

History: in the early days of the project, we had compilation paths for both LLVM JIT (Just-In-Time) and LLVM AOT (Ahead-Of-Time). As the project matured, we dropped the JIT target and refactored the AOT target a few times. On the runtime side, we went through a few iterations of how "executable loaders" were defined: "dylib" (system .so, .dylib, .dll), "static" (executable code linked into runtime code statically at build time), and "embedded" (platform-agnostic ELF file). The "dylib-llvm-aot" name was descriptive at the time, but it doesn't accurately represent what the project can do today: embedded ELF is the default linking mode, and there is no JIT to contrast AOT against.

Notable renames in this change:

| Component | Old name | New name | Notes |
| --- | --- | --- | --- |
| CMake | `IREE_TARGET_BACKEND_DYLIB_LLVM_AOT` | `IREE_TARGET_BACKEND_LLVM_CPU` | |
| CMake | `IREE_TARGET_BACKEND_WASM_LLVM_AOT` | `IREE_TARGET_BACKEND_LLVM_CPU_WASM` | Adds `WebAssembly` to `LLVM_TARGETS_TO_BUILD` |
| File names | `LLVMAOTTarget.h/.cpp` | `LLVMCPUTarget.h/.cpp` | |
| Compiler targets | (old names will remain for now) | `llvm-cpu` | |
| Flags | `dylib-llvm-aot` | `llvm-cpu` | Only where required to match the CMake option for now |
| Env vars | `IREE_LLVMAOT_SYSTEM_LINKER_PATH` | `IREE_LLVM_SYSTEM_LINKER_PATH` | |

In future changes:

* tests using `# REQUIRES: llvmaot` -> `llvmcpu`?
* target name usage `dylib-llvm-aot`, `llvm`, `cpu` -> `llvm-cpu`
* remove `dylib-llvm-aot` legacy name
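For anyone updating a local checkout, the CMake and environment-variable renames above translate roughly into a configure invocation like the following. This is a sketch, not part of the change itself: the `build/` directory and the `/usr/bin/ld` linker path are illustrative placeholders, and the option/variable names are taken from the rename table.

```shell
# Before this change (legacy option and env var names):
#   cmake -B build/ -DIREE_TARGET_BACKEND_DYLIB_LLVM_AOT=ON
#   export IREE_LLVMAOT_SYSTEM_LINKER_PATH=/usr/bin/ld

# After this change, enable the CPU backend with the new option name:
cmake -B build/ -DIREE_TARGET_BACKEND_LLVM_CPU=ON

# The system-linker override env var drops "AOT" as well:
export IREE_LLVM_SYSTEM_LINKER_PATH=/usr/bin/ld
```

Builds that still pass the old option names should switch at the same time as pulling this change, since the legacy names are slated for removal in follow-ups.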
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback on any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.