commit | 4b31bd95f2277a8dcc03c74b98765815f46ad35c
author | raikonenfnu <68087699+raikonenfnu@users.noreply.github.com> | Tue May 18 18:20:25 2021 -0700
committer | GitHub <noreply@github.com> | Tue May 18 18:20:25 2021 -0700
tree | 587338c9619523b881d5d8093be22a6a4e069fc5
parent | 3bdcea8dfbd4b4919665621f13c092ab207a0022
Initial Adding ROCM HAL Backend to Experimental (#5943)

Initial pass to integrate ROCm into IREE so that we can codegen and run on AMD GPUs, following steps similar to thomasraoux's CUDA backend. Since ROCm does not have a graph or command buffer API by default, we implement ROCm's command buffer using the stream API against the default stream.

Tested; passes most CTS tests except:
- semaphore_submission_test + semaphore_test -> some functionality not implemented for the ROCm backend yet
- command_buffer_test -> CommandBufferTest.CopySubBuffer

In the next patch:
- Complete semaphore functionality
- Squash CommandBuffer bugs
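To illustrate the stream-backed approach described above, here is a minimal sketch of the idea, not the actual IREE HAL code: since there is no native graph/command-buffer primitive, each "recorded" command is issued directly as a HIP call on a stream. The `rocm_command_buffer_t` type and the two functions are hypothetical names for illustration; `hipMemcpyAsync` and `hipStreamSynchronize` are real HIP runtime APIs.

```c
// Hypothetical sketch: a "command buffer" that forwards each recorded
// command to a HIP stream immediately instead of building a graph.
#include <hip/hip_runtime.h>
#include <stddef.h>

typedef struct {
  hipStream_t stream;  // 0 (NULL) selects HIP's default stream.
} rocm_command_buffer_t;

// "Recording" a buffer copy enqueues it on the stream right away.
static hipError_t rocm_command_buffer_copy_buffer(
    rocm_command_buffer_t* cb, void* dst, const void* src, size_t length) {
  return hipMemcpyAsync(dst, src, length, hipMemcpyDeviceToDevice, cb->stream);
}

// "Submitting" the command buffer reduces to draining the stream, since
// all of its commands were already issued at record time.
static hipError_t rocm_command_buffer_submit(rocm_command_buffer_t* cb) {
  return hipStreamSynchronize(cb->stream);
}
```

One consequence of this design, consistent with the limitations noted above, is that commands execute in record order on a single stream, so semaphore-based cross-queue synchronization needs separate handling.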
IREE (Intermediate Representation Execution Environment, pronounced as "eerie") is an MLIR-based end-to-end compiler that lowers Machine Learning (ML) models to a unified IR optimized for real-time inference on mobile/edge devices across heterogeneous hardware accelerators. IREE also provides flexible deployment solutions for its compiled ML models.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback on any of our communication channels!
IREE is licensed under the terms of the Apache license. See LICENSE for more information.