commit | 77ff2e862e58cbb2716859f482da9a25aef278a0 | [log] [tgz] |
---|---|---|
author | phoenix-meadowlark <meadowlark@google.com> | Mon May 11 15:34:34 2020 -0700 |
committer | Copybara-Service <copybara-worker@google.com> | Mon May 11 15:37:47 2020 -0700 |
tree | db723a1c7dfc8e12f4c022507d48ca62b175da03 | |
parent | 9ce4a06fe65e0fab132ed6e38f83f89094cdacba | [diff] |
Adds Dockerfiles for building with bazel and updates GitHub CI workflows

- Adds Dockerfiles that define images for our bazel and bazel-with-tensorflow builds.
- Adds a script for updating these images on the IREE-OSS Google Container Registry (GCR).
- These images are hosted as `gcr.io/iree-oss/bazel` and `gcr.io/iree-oss/bazel-tensorflow`.
- Updates the GitHub CI workflows to use these images.

This change adds Dockerfiles specifying the dependencies of IREE's core bazel build and IREE's integrations/bindings builds. This allows us to run CI tests in GitHub Actions without having to install the dependencies for each commit to master.

We are hosting the Docker images that these files build on the IREE-OSS GCP project. Currently, this means that every time one of the Docker images changes, a storage admin for IREE-OSS must run the `build_tools/docker/update_gcr_images.sh` script. A future PR should make it possible to automate this process whenever these files change on GitHub.

This change was tested by creating another branch based off of this one, changing the workflows to run on push to that branch, and then pushing that change. Links to each test:

- [Bazel Build - Bindings](https://github.com/phoenix-meadowlark/iree/actions/runs/100891735)
- [Bazel Build - Fallthrough](https://github.com/phoenix-meadowlark/iree/actions/runs/100891736)
- [Bazel Build - Integrations](https://github.com/phoenix-meadowlark/iree/actions/runs/100891738) (failed because the build was broken at the time of testing)
- [Bazel Build - Core](https://github.com/phoenix-meadowlark/iree/actions/runs/100891741)

Since the integrations build failed due to the state of the repo at the time it was run, I reran the test on my machine against a commit with a working build as a stand-in, and it passed.

Closes https://github.com/google/iree/pull/1874

COPYBARA_INTEGRATE_REVIEW=https://github.com/google/iree/pull/1874 from phoenix-meadowlark:docker-containers b280481b306a1a2ba76758437930a235ef6cf4f7

PiperOrigin-RevId: 311007783
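The commit message does not reproduce the script itself, but a minimal sketch of what a GCR update script like `build_tools/docker/update_gcr_images.sh` typically does might look like the following. The per-image directory layout under `build_tools/docker/` is an assumption for illustration, not something stated in this message:

```shell
#!/bin/bash
# Hypothetical sketch -- not the actual contents of update_gcr_images.sh.
# Assumes each image has a Dockerfile at build_tools/docker/<image-name>/Dockerfile.
set -euo pipefail

# Authenticate Docker against Google Container Registry. Requires gcloud and
# storage-admin permissions on the IREE-OSS project.
gcloud auth configure-docker --quiet

for image in bazel bazel-tensorflow; do
  # Build the image from its Dockerfile and tag it for the IREE-OSS registry.
  docker build -t "gcr.io/iree-oss/${image}" "build_tools/docker/${image}"
  # Push the freshly built image so CI workflows can pull it.
  docker push "gcr.io/iree-oss/${image}"
done
```

Once the images are published, GitHub Actions jobs can run inside them (for example via the workflow `container:` setting), so CI no longer has to install the build dependencies on every run.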
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler that lowers ML models to a unified IR optimized for real-time mobile/edge inference against heterogeneous hardware accelerators. IREE also provides flexible deployment solutions for the compiled ML models.
IREE is still in its early phase. We have settled down on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback on any of our communication channels!
For development, IREE supports both Bazel and CMake on Windows and Linux. We are working on enabling macOS support. For deployment, IREE aims to additionally cover Android and iOS.
Please see the Getting Started pages on IREE's documentation hub to configure, compile, and run IREE in your favorite development environment!
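As a rough illustration only, a CMake-based build on Linux follows the standard out-of-source CMake flow sketched below; the exact configuration options and build targets are documented in the Getting Started pages, so treat this as a generic sketch rather than a verbatim IREE recipe (the submodule step reflects IREE vendoring its dependencies as git submodules):

```shell
# Generic configure-and-build sketch; see the Getting Started docs for the
# IREE-specific CMake options and targets.
git clone https://github.com/google/iree.git
cd iree
git submodule update --init    # fetch vendored third-party dependencies
cmake -B ../iree-build .       # configure into a separate build directory
cmake --build ../iree-build    # compile what was configured
```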
IREE hosts all its documentation and project status dashboards on GitHub Pages. We are still building up the website; please feel free to create issues for the documentation you'd like to see!
We also have some public talks that explain IREE's concepts and architecture:
IREE adopts a holistic approach towards ML model compilation: the IR produced contains both the scheduling logic, required to communicate data dependencies to low-level parallel pipelined hardware/API like Vulkan, and the execution logic, encoding dense computation on the hardware in the form of hardware/API-specific binaries like SPIR-V.
The architecture of IREE is best illustrated by the following picture:
Being compilation-based means IREE does not have a traditional runtime that dispatches “ops” to their fat kernel implementations. What IREE provides is a toolbox for different deployment scenarios. It scales from running generated code on a particular API (such as emitting C code calling external DSP kernels), to a HAL (Hardware Abstraction Layer) that allows the same generated code to target multiple APIs (like Vulkan and Direct3D 12), to a full VM allowing runtime model loading for flexible deployment options and heterogeneous execution.
IREE aims to
IREE is still in its early stages; we have lots of exciting future plans. Please check out the long-term design roadmap and short-term focus areas.
We use GitHub Projects to track various IREE components and GitHub Milestones for major features and quarterly plans. Please check them out for updated information.
CI System | Build System | Platform | Component | Status |
---|---|---|---|---|
GitHub Actions | Bazel | Linux | Core | Workflow History |
GitHub Actions | Bazel | Linux | Bindings | Workflow History |
GitHub Actions | Bazel | Linux | Integrations | Workflow History |
GitHub Actions | Bazel | Linux | Other | Workflow History |
Kokoro | Bazel | Linux | Core | |
Kokoro | CMake | Linux | Core + Bindings | |
IREE is licensed under the terms of the Apache 2.0 License. See LICENSE for more information.