commit | 1a0c74232ef6ec6f513e9026e3c47e46089ccb88 | |
---|---|---|
author | Måns Nilsson <mans.nilsson@arm.com> | Tue Mar 29 10:16:54 2022 +0200 |
committer | GitHub <noreply@github.com> | Tue Mar 29 08:16:54 2022 +0000 |
tree | ce5eca88e4a9f05f6b9d7ac131e0109826fb6189 | |
parent | 94d9316865f822f1bc802578868f52c2a08f32d9 | |
Quantization specific registration for CMSIS-NN (#977)

* Quantization specific registration for CMSIS-NN

  Adds int8 and int16x8 specific registrations for the conv kernel. The person detection example is updated to use the int8 specific registration for conv. This results in a smaller memory footprint for kernels that support specific registration; kernels that do not support it fall back to the default registration. The 16x8 conv unit tests are also updated to use the 16x8 specific registration, again falling back to the default registration for kernels that do not provide one.

  Change-Id: I412cc472e56a0835ed662525dfd7768a920590bb

* Address review comments

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
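The registration change described above can be illustrated with a short sketch. It assumes the helper `tflite::Register_CONV_2D_INT8()` and the `MicroMutableOpResolver::AddConv2D` overload that accepts a registration; the names follow the pattern the commit message describes and current TFLM headers, and are not quoted from the diff itself.

```cpp
// Sketch only: registering a quantization-specific conv kernel instead of the
// default (all data types) registration. Register_CONV_2D_INT8() and the
// AddConv2D(registration) overload are assumptions based on the commit
// message, not an excerpt of this change.
#include "tensorflow/lite/micro/kernels/conv.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

namespace {

// A single op slot is enough for this illustration; real graphs need more.
tflite::MicroMutableOpResolver<1> op_resolver;

TfLiteStatus RegisterOps() {
  // Default registration: pulls in the conv code paths for every supported
  // data type.
  //   return op_resolver.AddConv2D();
  //
  // Int8-specific registration: only the int8 path is linked in, giving a
  // smaller footprint; kernels without such a helper keep using the default
  // registration.
  return op_resolver.AddConv2D(tflite::Register_CONV_2D_INT8());
}

}  // namespace
```

A corresponding `Register_CONV_2D_INT16()` helper covers the 16x8 case mentioned in the commit message.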
TensorFlow Lite for Microcontrollers is a port of TensorFlow Lite designed to run machine learning models on DSPs, microcontrollers and other devices with limited memory.
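As a rough illustration of what running on devices with limited memory looks like, the sketch below shows a typical TFLM inference flow with a statically allocated tensor arena. The model array `g_model_data`, the arena size, and the single FullyConnected op are placeholders, and the calls follow current TFLM headers rather than anything stated in this README.

```cpp
// Minimal TFLM inference sketch. g_model_data, kTensorArenaSize and the op
// list are placeholders chosen for illustration.
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Flatbuffer produced by the TFLite converter (placeholder symbol).
extern const unsigned char g_model_data[];

namespace {
// All working memory comes from this statically allocated arena; there is no
// dynamic allocation at inference time.
constexpr size_t kTensorArenaSize = 8 * 1024;
alignas(16) uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

TfLiteStatus RunOnce() {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the ops the model actually uses.
  tflite::MicroMutableOpResolver<1> resolver;
  if (resolver.AddFullyConnected() != kTfLiteOk) return kTfLiteError;

  tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                       kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return kTfLiteError;

  // Fill the input tensor, run the graph, then read the output tensor.
  TfLiteTensor* input = interpreter.input(0);
  input->data.int8[0] = 0;
  if (interpreter.Invoke() != kTfLiteOk) return kTfLiteError;
  TfLiteTensor* output = interpreter.output(0);
  (void)output;
  return kTfLiteOk;
}
```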
Additional Links:
Build Type | Status |
---|---|
CI (Linux) | |
Code Sync | |
This table captures platforms that TFLM has been ported to. Please see New Platform Support for additional documentation.
Platform | Status |
---|---|
Arduino | |
ESP32 | |
Sparkfun Edge | |
Texas Instruments Dev Boards | |
This is a list of targets that have optimized kernel implementations and/or run the TFLM unit tests using software emulation or instruction set simulators.
Build Type | Status |
---|---|
Cortex-M | |
Hexagon | |
RISC-V | |
Xtensa | |
See our contribution documentation.
A GitHub issue should be the primary method of getting in touch with the TensorFlow Lite Micro (TFLM) team.
The following resources may also be useful:
SIG Micro email group and monthly meetings.
SIG Micro gitter chat room.
For questions that are not specific to inference with TFLM (for example model conversion and quantization) please use the following resources: