| commit | 9dc8ae413b4a52495876c5efb86e01186efdc111 | |
|---|---|---|
| author | Lei Zhang <antiagainst@gmail.com> | Mon Feb 26 20:21:15 2024 -0800 |
| committer | GitHub <noreply@github.com> | Tue Feb 27 04:21:15 2024 +0000 |
| tree | 1bae5c92a92da98f1f10ee590c00de096b451783 | |
| parent | baeffa7520cccb1387561dd1763b837cb99498df | |
[cuda][hip] Fix launch host func and worker thread state update (#16568)

This commit fixes a few issues in the pending action queue to resolve driver deadlock issues:

* In the host launch function, which is called from a driver thread, we cannot invoke any GPU API; otherwise we might deadlock. This includes cleaning up actions after execution, which may involve buffer releasing/unregistering and was the cause of the HIP driver hang. That cleanup now moves into the worker thread: each action gains a state field indicating whether it is alive or a zombie, and after execution completes we flip the action's state to zombie and enqueue it again so that the worker thread performs the cleanup.
* The worker thread can have five states: two normal states (idle waiting, workload pending) and three exit states (requested, committed, error). They have increasing priority with respect to overwriting, so a state later in the list must never be overwritten without checking. This guarantees that exit requests are properly respected rather than dropped on the floor, giving a clean exit.
* When the worker thread is woken to process the ready list, we must immediately flip the worker state from workload pending to idle waiting, before any real processing. This makes sure we don't drop new workloads enqueued while we are processing, and that the worker thread can be properly woken again later.

With the above fixes, we can pass all stablehlo/tosa e2e op tests on the HIP driver without hangs or crashes. The same change is mirrored in the CUDA pending action queue.

Fixes https://github.com/openxla/iree/issues/15790
Progress towards https://github.com/openxla/iree/issues/16504
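The "never overwrite a higher-priority state without checking" rule from the second bullet can be sketched as a compare-and-swap loop. This is a minimal illustration, not IREE's actual code: the enum values and `worker_try_set_state` are hypothetical names, and the real queue additionally allows the worker itself to make a guarded downgrade from workload pending back to idle waiting.

```c
#include <stdatomic.h>
#include <stdbool.h>

// Hypothetical names modeled on the commit description.
typedef enum {
  WORKER_STATE_IDLE_WAITING     = 0,  // normal: nothing to do
  WORKER_STATE_WORKLOAD_PENDING = 1,  // normal: ready list has actions
  WORKER_STATE_EXIT_REQUESTED   = 2,  // exit: shutdown asked for
  WORKER_STATE_EXIT_COMMITTED   = 3,  // exit: worker acknowledged
  WORKER_STATE_EXIT_ERROR       = 4,  // exit: failure while exiting
} worker_state_t;

// Only move the state "forward": a lower-priority value never
// overwrites a higher-priority one, so exit requests cannot be lost.
static bool worker_try_set_state(_Atomic int* state,
                                 worker_state_t new_state) {
  int current = atomic_load(state);
  while (current < (int)new_state) {
    if (atomic_compare_exchange_weak(state, &current, (int)new_state)) {
      return true;  // upgraded to the higher-priority state
    }
    // CAS failure reloaded |current|; the loop re-checks the priority.
  }
  return false;  // would have been an unchecked downgrade; unchanged
}
```

For example, once an exit has been requested, a racing attempt to mark the worker as workload pending simply fails instead of silently erasing the exit request.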
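The alive/zombie hand-off from the first bullet can also be sketched in a few lines. Again this is a simplified, hypothetical model (single-threaded, no locking, invented names), not the real pending action queue: the driver-thread callback only flips the action to zombie and re-enqueues it, and the worker thread, where GPU API calls are safe, performs the actual cleanup.

```c
#include <stddef.h>

// Hypothetical sketch of the alive/zombie split described above.
typedef enum { ACTION_STATE_ALIVE, ACTION_STATE_ZOMBIE } action_state_t;

typedef struct action_t {
  action_state_t state;
  struct action_t* next;
  int cleaned_up;  // stand-in for buffer release/unregister work
} action_t;

typedef struct {
  action_t* head;
  action_t* tail;
} action_queue_t;

static void queue_push(action_queue_t* q, action_t* a) {
  a->next = NULL;
  if (q->tail) q->tail->next = a; else q->head = a;
  q->tail = a;
}

static action_t* queue_pop(action_queue_t* q) {
  action_t* a = q->head;
  if (a) { q->head = a->next; if (!q->head) q->tail = NULL; }
  return a;
}

// Driver-thread callback: must NOT call GPU APIs, so instead of
// cleaning up here, flip the action to zombie and hand it back.
static void host_launch_callback(action_queue_t* q, action_t* a) {
  a->state = ACTION_STATE_ZOMBIE;
  queue_push(q, a);
}

// Worker thread: zombies get their cleanup here, where GPU API
// calls are safe; alive actions would be executed (elided).
static void worker_process_one(action_queue_t* q) {
  action_t* a = queue_pop(q);
  if (!a) return;
  if (a->state == ACTION_STATE_ZOMBIE) {
    a->cleaned_up = 1;  // e.g. release/unregister buffers safely here
  } else {
    // ... issue the GPU work, then schedule the completion callback ...
  }
}
```

The design choice mirrors the commit: the only work done on the driver thread is a state flip plus an enqueue, both GPU-API-free, which is what removes the deadlock.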
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback on any of our communication channels!
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.