Upgrade to the IdentifyDispatchRegions2 pass.
* Has a nasty workaround for #2050, which was exposed by the different dispatch region formation.
* Removed the special-case output fusion in favor of relying on optimization passes to do the right thing (which is necessary for dynamic shapes anyway). FoldCompatibleDispatchRegions is working reasonably well for this, and giving it smaller atoms should, in theory, make more advanced optimization passes possible (see the first sketch after this list).
* Added special cases for our current list of ops that don't support fusion.
* Added a special case to assign a lower benefit to ops that are known to materialize to a copy, increasing the chance that they fuse with ops that are doing useful work (see the second sketch after this list).
* Enabled the dynamic-shape multi-layer-perceptron tests on VMLA. Tried enabling them on Vulkan but ran into an issue; filed #2057.
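
The "small atoms, fold later" strategy can be pictured with a toy sketch. This is not IREE's actual FoldCompatibleDispatchRegions implementation (which is an MLIR pass); the fold_compatible helper and the always_compatible rule are hypothetical stand-ins used only to illustrate the idea.

# Toy sketch: form one-op dispatch "atoms", then greedily fold neighbors that a
# (hypothetical) compatibility rule says may share a dispatch region.
def fold_compatible(regions, compatible):
  """Merge each region into its predecessor when the two are compatible."""
  folded = []
  for region in regions:
    if folded and compatible(folded[-1], region):
      folded[-1] = folded[-1] + region  # merge the op lists
    else:
      folded.append(list(region))
  return folded

# Single-op atoms as produced by dispatch region identification...
atoms = [["matmul"], ["add"], ["softmax"]]
# ...with a stand-in rule that treats everything as compatible.
always_compatible = lambda a, b: True
print(fold_compatible(atoms, always_compatible))
# -> [['matmul', 'add', 'softmax']]
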
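The lower-benefit rule for copy-like ops can likewise be sketched in isolation. The op names, the COPY_LIKE_OPS set, and the scores below are hypothetical; they only show how a lower benefit biases region roots toward ops doing useful work.

# Hypothetical benefit heuristic: ops known to materialize to a copy get a
# lower score, so they lose to "useful work" ops when picking a fusion root.
COPY_LIKE_OPS = {"reshape", "broadcast", "transpose"}  # illustrative list only

def fusion_benefit(op_name):
  """Relative benefit of rooting a dispatch region at this op."""
  return 1 if op_name in COPY_LIKE_OPS else 10

def pick_root(candidate_ops):
  # Copy-like ops only become a region root when nothing better is around,
  # which increases the chance they end up fused into a real-work region.
  return max(candidate_ops, key=fusion_benefit)

print(pick_root(["reshape", "matmul"]))  # -> matmul
print(pick_root(["broadcast"]))          # -> broadcast (no better option)
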
PiperOrigin-RevId: 313881073
diff --git a/integrations/tensorflow/e2e/dynamic_mlp_test.py b/integrations/tensorflow/e2e/dynamic_mlp_test.py
index d2dc527..1a1ad41 100644
--- a/integrations/tensorflow/e2e/dynamic_mlp_test.py
+++ b/integrations/tensorflow/e2e/dynamic_mlp_test.py
@@ -57,14 +57,11 @@
tf.add(tf.matmul(layer_2, self.out_weights), self.out_bias))
def predict(self, x):
- # return tf.nn.softmax(self.mlp(x))
- # For simplicity at this point, don't do the softmax, as it lets us
- # skip reductions.
- return self.mlp(x)
+ return tf.nn.softmax(self.mlp(x))
-# TODO(silvasean): Get this test working on IREE.
-@tf_test_utils.compile_modules(backends=["tf"], mlp=(Mlp, ["predict"]))
+@tf_test_utils.compile_modules(
+ backends=["tf", "iree_vmla"], mlp=(Mlp, ["predict"]))
class DynamicMlpTest(tf_test_utils.SavedModelTestCase):
def test_dynamic_batch(self):