At its core, TFLM is a portable library that can be used on a variety of target hardware to run inference on TFLite models.
Integrating TFLM with specific target hardware involves tasks that are outside the scope of the TFLM project, including:
In this guide, we outline our recommended approach for integrating TFLM with new target hardware, assuming that you have already set up a development and debugging environment for your board independent of TFLM.
Use the TFLM project generation script to create a directory tree containing only the sources that are necessary to build the TFLM library.
```
python3 tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py \
  -e hello_world \
  -e micro_speech \
  -e person_detection \
  /tmp/tflm-tree
```
This will create a folder that looks like the following at the top-level:
```
examples  LICENSE  tensorflow  third_party
```
All the code in the `tensorflow` and `third_party` folders can be compiled into a single static library (for example, `libtflm.a`) using your platform-specific build system.
TFLM's third-party dependencies are separated out in case the third-party code needs to be built into shared libraries to avoid symbol collisions.
Note that for IDEs, it might be sufficient to simply include the folder created by the TFLM project generation script into the overall IDE tree.
Replace the following files with a version that is specific to your target platform:
These files can be placed anywhere in your directory tree. The only requirement is that, when linking TFLM into a binary, the linker can find implementations of the functions declared in `debug_log.h`, `micro_time.h`, and `system_setup.h`.
For example, the implementations of these functions for:
Once you have completed step 2, you should be set up to run the `hello_world` example and see the output over the UART.
```
cp -r /tmp/tflm-tree/examples/hello_world <path-to-platform-specific-hello-world>
```
The `hello_world` example should not need any customization, and you should be able to directly build and run it.
We recommend that you fork the TFLM examples and then modify them as needed (to add support for peripherals etc.) to run on your target platform.
TFLM has optimized kernel implementations for a variety of targets in sub-folders of the `kernels` directory.
It is possible to use the project generation script to create a tree with these optimized kernel implementations (and associated third party dependencies).
For example:
```
python3 tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py \
  -e hello_world -e micro_speech -e person_detection \
  --makefile_options="TARGET=cortex_m_generic OPTIMIZED_KERNEL_DIR=cmsis_nn TARGET_ARCH=project_generation" \
  /tmp/tflm-cmsis
```
will create an output tree with all the sources and headers needed to use the optimized `cmsis_nn` kernels for Cortex-M platforms.
In order to have tighter coupling between your platform-specific TFLM integration and the upstream TFLM repository, you might want to consider the following:
For some pointers on how to set this up, we refer you to the GitHub repositories that integrated TFLM for the:
Once you are set up with continuous integration and the ability to integrate newer versions of TFLM with your platform, feel free to add a build badge to TFLM's Community Supported TFLM Examples.
Here are some ways that you can reach out to get help.