commit    9aaae342bde3a469e5156214669866fd4ba57531
author    Max191 <44243577+Max191@users.noreply.github.com>  Tue Jul 30 08:37:35 2024 -0700
committer GitHub <noreply@github.com>  Tue Jul 30 11:37:35 2024 -0400
tree      e7110f104aeaa9479a21e60231d448fee62f6f25
parent    2e6dbfab0eb9786a9716f017ece893eb42c9ecc9
[GlobalOpt] Transition SetEncoding to use round_dims_to and stop creating tensor.pad (#17931)

This PR removes all padding introduced by SetEncoding, and instead places the `round_dims_to` attribute on encodings. The consequences of this PR are:

1. `iree_encoding.upper_bound_tile_size` is no longer necessary, since the information needed for stream buffer allocation is already encoded in `round_dims_to`, and there is no longer a tensor.pad that uses the upper-bound size.
2. The `original_type` field in encodings is no longer necessary, since the types of the encoded tensors are no longer padded to the upper-bound size. The encoded tensor types therefore carry the shape of the `original_type` directly.
3. With the encoded types no longer having padded shapes, the `matmul_narrow_M` and `matmul_narrow_N` fields can also be removed, since the narrow sizes can be taken directly from the tensor shapes.
4. Since `iree_encoding.upper_bound_tile_size` is no longer generated, the `CPUMaterializeUpperBoundTileSizePass` is no longer necessary and can be removed.

This PR only removes the padding from SetEncoding and updates tests to reflect the new expected form; follow-up PRs to remove the fields mentioned above can now be done easily.

This also turns the `PadFactor` value for `SetEncoding` into a command line flag with a default value of `32`. The previous value was `16`, but some microkernels require a tile size of `32`, so `round_dims_to` needs to support padding up to `32` in those cases.

---------

Signed-off-by: Max Dawkins <max.dawkins@gmail.com>
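As a rough sketch of the shape of this change (illustrative only: the encoding attributes are abbreviated as `#enc` and `#enc_rounded`, and the op syntax is not copied verbatim from the patch), SetEncoding previously materialized a `tensor.pad` up to an upper-bound tile size before encoding an operand, whereas it now encodes the unpadded tensor and records the rounding requirement on the encoding itself:

```mlir
// Before (sketch): pad the operand up to the upper-bound tile size queried via
// iree_encoding.upper_bound_tile_size, then encode the padded tensor.
%padded = tensor.pad %lhs low[0, 0] high[%pad_m, %pad_k] {
^bb0(%i: index, %j: index):
  tensor.yield %zero : f32
} : tensor<?x?xf32> to tensor<?x?xf32>
%lhs_encoded_old = iree_encoding.set_encoding %padded
    : tensor<?x?xf32> -> tensor<?x?xf32, #enc>

// After (sketch): encode the original, unpadded tensor. The padding requirement
// is carried by a round_dims_to field on the encoding attribute (e.g.
// round_dims_to = array<i64: 32, 32, 32>), which stream buffer allocation can
// consult instead of an explicit tensor.pad.
%lhs_encoded = iree_encoding.set_encoding %lhs
    : tensor<?x?xf32> -> tensor<?x?xf32, #enc_rounded>
```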
IREE (Intermediate Representation Execution Environment, pronounced as “eerie”) is an MLIR-based end-to-end compiler and runtime that lowers Machine Learning (ML) models to a unified IR that scales up to meet the needs of the datacenter and down to satisfy the constraints and special considerations of mobile and edge deployments.
See our website for project details, user guides, and instructions on building from source.
IREE is still in its early phase. We have settled on the overarching infrastructure and are actively improving various software components as well as project logistics. It is still quite far from ready for everyday use and is made available without any support at the moment. With that said, we welcome any kind of feedback through any of our communication channels!
See our website for more information. Community meeting recordings are available on the IREE YouTube channel.
IREE is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.