integrations/tensorflow/e2e/README.md

# TensorFlow e2e tests

This is a collection of e2e tests that, in various fashions, save a TensorFlow model, compile it with IREE, and run/evaluate it on all backends.

## Prerequisites

You will need a TensorFlow 2.0+ nightly installed in your Python environment: the Python binary in `$PYTHON_BIN` should be able to `import tensorflow`, and that TensorFlow should be version 2.0+. This can be checked with `tensorflow.version`.
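
As a rough illustration of the version requirement, the snippet below parses a version string of the form reported by `tensorflow.version.VERSION`; `is_tf2` is a hypothetical helper for this README, not part of the test harness:

```python
# Hypothetical helper: returns True for TensorFlow 2.0+ version strings,
# without requiring TensorFlow itself to be installed.
def is_tf2(version_string):
    # Nightly builds look like "2.1.0-dev20191125"; compare the major number.
    major = int(version_string.split(".")[0])
    return major >= 2

print(is_tf2("2.1.0-dev20191125"))  # True: a 2.x nightly is sufficient
print(is_tf2("1.15.0"))             # False: TF 1.x is too old
```

In practice, `python -c "import tensorflow as tf; print(tf.version.VERSION)"` is the quickest way to check your installed version.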

See [Install TensorFlow with pip](https://www.tensorflow.org/install/pip) for instructions.

## Vulkan setup

By default, tests run on TensorFlow and the IREE CPU interpreter, since these need no additional environment setup. If your environment is set up to use IREE with Vulkan (see the doc), you can enable the Vulkan backend by setting the environment variable `IREE_TEST_BACKENDS=tf,iree_interpreter,iree_vulkan`.
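
For example, to enable all three backends for every test run in the current shell session (assuming your Vulkan environment is already configured):

```shell
# Persist the backend selection for this shell; subsequent test
# invocations will pick it up from the environment.
export IREE_TEST_BACKENDS=tf,iree_interpreter,iree_vulkan
```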

You can also pass this as a command-line argument when running individual tests: `--target_backends=tf,iree_interpreter,iree_vulkan`.

## Running tests

```shell
# Run all tests with defaults and output on failure.
bazel test ... --test_output=errors

# Run an individual test interactively.
bazel test simple_arithmetic_test --test_output=streamed

# Run tests with an altered list of backends.
bazel test ... --test_output=errors -- \
    --target_backends=tf,iree_interpreter,iree_vulkan

# (alternative) Run tests with an altered list of backends.
bazel test ... --test_env=IREE_TEST_BACKENDS=tf,iree_interpreter,iree_vulkan \
    --test_output=errors
```

## Debugging tests

If the compiler fails to compile the program, it will create a crash reproducer (see documentation here), which allows reproducing the bug with an appropriate `opt` tool. Further debugging iteration can then happen in `opt`.

TODO(silvasean): debugging miscompiles

## Test harnesses

### Simple function tests

See `simple_arithmetic_test.py` for some examples of writing a test case that runs on multiple backends.

### Limiting a test to only certain backends

The `@tf_test_utils.compile_modules` decorator on tests takes a `backends=` keyword argument. This argument should be a Python list of backends, which accepts the same keys as the `--target_backends` flag.

Example:

```python
@tf_test_utils.compile_modules(backends=["tf"], mlp=(Mlp, ["predict"]))
class DynamicMlpTest(tf_test_utils.SavedModelTestCase):
  ... the test case ...
```

Limiting backends statically in the code can be useful for tests that are known to fail on certain backends but are still worth having checked in.