| commit | eb15493eac2746668e237b7ac1ddaca88fdea75e |
|---|---|
| author | Benoit Jacob <jacob.benoit.1@gmail.com> | Wed Oct 09 10:07:20 2024 -0400 |
| committer | GitHub <noreply@github.com> | Wed Oct 09 14:07:20 2024 +0000 |
| tree | 77856db15e346b21e0227651fca027585b488dcc |
| parent | 5270093401cea05a548352fce312e39d0291024c |
**e2e matmul test improvements (#18725)**

This PR is made of individual commits for review convenience and so we can drop anything that causes problems on CI.

* Add a default shapes set, combining small and large.
  * The need to specify "small" or "large" is a real need in only a minority of cases. That's a difference from when these tests were first added.
* Enable dynamic sizes in large shapes, leaving only gpu_large_aligned out.
  * Who remembered that large shapes weren't tested as dynamic shapes, unlike small shapes... and unlike "gpu_large" shapes?!
* Rename gpu_large_aligned -> easy_large_static.
  * This is only needed in sketchy GPU codegen pipelines that can't deal with sizes that aren't multiples of some internal tile size.
* Fold gpu_large into large and tolerate fuzzy bf16 accumulators.
  * Retaining the evidently more curated set of shapes from "gpu_large". The larger sizes ran into new issues with the mostly artificial case of bf16 accumulators.
* Use default shapes and reenable sanitizers.
  * This simplifies the build, reduces the number of targets, and increases coverage, as "default" combines small and large shapes. It also reenables sanitizers that had been disabled on large sizes due to timeouts. As tests at some point started verifying only a subset of result matrix elements, the timeouts should be avoided now.
* Enable default shapes for most rocm tests.
  * The motivation for this PR. The rest just bubbled up from there.
* Make large shapes more diverse (including odd and rectangular kinds of shapes).

Signed-off-by: Benoit Jacob <jacob.benoit.1@gmail.com>
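The commit message notes that the tests at some point started verifying only a subset of result matrix elements, which keeps large-shape runs (and sanitizer runs) from timing out. A minimal sketch of that idea in plain Python follows; the function names and sampling strategy here are illustrative assumptions, not IREE's actual test-harness code.

```python
import random

def matmul(a, b):
    """Naive reference matmul over lists of lists (a: n*k, b: k*m)."""
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def verify_subset(expected, actual, num_samples=16, tol=1e-3, seed=0):
    """Compare only a random sample of elements instead of the full matrix.

    Sampling trades exhaustiveness for runtime: on large shapes the full
    n*m comparison dominated test time, while a fixed-size sample keeps
    verification O(num_samples) regardless of shape.
    """
    rng = random.Random(seed)
    n, m = len(expected), len(expected[0])
    for _ in range(num_samples):
        i, j = rng.randrange(n), rng.randrange(m)
        if abs(expected[i][j] - actual[i][j]) > tol:
            return False
    return True
```

The nonzero tolerance also reflects the "fuzzy bf16 accumulators" point above: low-precision accumulation cannot be compared exactly against a float reference.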
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback through any of our communication channels.
| Package | Release status |
| --- | --- |
| GitHub release (stable) | |
| GitHub release (nightly) | |
| Python iree-compiler | |
| Python iree-runtime | |
| Host platform | Build status |
| --- | --- |
| Linux | |
| macOS | |
| Windows | |
For the full list of CI workflows, see https://iree.dev/developers/general/github-actions/.
See our website for more information.
Community meeting recordings: IREE YouTube channel
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.