Quantization specific registration for CMSIS-NN (#977)

* Quantization specific registration for CMSIS-NN

Adds int8- and int16x8-specific registrations for the conv kernel.

The person detection example is updated to use the int8-specific
registration for conv, which results in a smaller memory footprint
for kernels that support it. For kernels that do not support it, the
default registration will be used.

The 16x8 conv unit tests are likewise updated to use the 16x8-specific
registration. Kernels that do not provide it fall back to the
default registration.

Change-Id: I412cc472e56a0835ed662525dfd7768a920590bb

* Address review comments

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
5 files changed
README.md

TensorFlow Lite for Microcontrollers

TensorFlow Lite for Microcontrollers is a port of TensorFlow Lite designed to run machine learning models on DSPs, microcontrollers and other devices with limited memory.

Additional Links:

Build Status

Official Builds

| Build Type | Status |
| ---------- | ------ |
| CI (Linux) | CI |
| Code Sync | Sync from Upstream TF |

Community Supported TFLM Examples

This table captures platforms that TFLM has been ported to. Please see New Platform Support for additional documentation.

| Platform | Status |
| -------- | ------ |
| Arduino | Arduino Antmicro |
| ESP32 | ESP32 |
| Sparkfun Edge | Sparkfun Edge |
| Texas Instruments Dev Boards | Texas Instruments Dev Boards |

Community Supported Kernels and Unit Tests

This is a list of targets that have optimized kernel implementations and/or run the TFLM unit tests using software emulation or instruction set simulators.

| Build Type | Status |
| ---------- | ------ |
| Cortex-M | Cortex-M |
| Hexagon | Hexagon |
| RISC-V | RISC-V |
| Xtensa | Xtensa Xtensa |

Contributing

See our contribution documentation.

Getting Help

A GitHub issue should be the primary method of getting in touch with the TensorFlow Lite Micro (TFLM) team.

The following resources may also be useful:

  1. SIG Micro email group and monthly meetings.

  2. SIG Micro gitter chat room.

  3. For questions that are not specific to inference with TFLM (for example model conversion and quantization) please use the following resources:

Additional Documentation

RFCs

  1. Pre-allocated tensors
  2. TensorFlow Lite for Microcontrollers Port of 16x8 Quantized Operators