build: begin using -std=c++17 and c17 in Makefile builds (#2648)

Begin using -std=c++17 and -std=c17 in Makefile builds on
all platforms. Bazel builds have been using C++17 since 52c9568.

Set `-stdlib=libc++` on xt-clang on Xtensa to add C++17 library
support in addition to compiler support. From the xt-clang docs:

    Starting with the RI-2019.1 release, XT-CLANG has included support
    for C++14 and C++17 language features. The compiler support for
    C++14 and C++17 is accompanied by the C++ library from the LLVM
    project. This library can be selected with the -stdlib=libc++
    option, and this is strongly recommended when compiling with
    -std=c++14 or -std=c++17. Starting with the RI-2021.6 release, two
    additional versions of this C++ library are provided: one that
    excludes support for exception handling, and one that excludes both
    exception handling and run-time type identification. These libraries
    can be selected with the -stdlib=libc++-e and -stdlib=libc++-re options
    respectively.

Based on the `docker run` command in
tflite-micro/.github/workflows/xtensa_presubmit.yml, our CI is
currently using RI-2020.4.
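In Makefile terms, the change described above amounts to something
roughly like the following sketch; the variable and toolchain names
(CCFLAGS, CXXFLAGS, TARGET_ARCH) are assumptions for illustration, not
the actual TFLM Makefile contents:

```make
# Sketch only: variable names are assumptions, not the real TFLM
# Makefile variables.
CCFLAGS  += -std=c17
CXXFLAGS += -std=c++17

# On Xtensa, select LLVM's libc++ so that C++17 *library* support is
# available in addition to the compiler's language support.
ifeq ($(TARGET_ARCH), xtensa)
  CXXFLAGS += -stdlib=libc++
  LDFLAGS  += -stdlib=libc++
endif
```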

Refactor the make/ext_libs/xtensa_download.sh script to make it easier
to patch downloads for all Xtensa platforms. The old script made overly
strict assumptions about the name of the patch.
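A sketch of the refactored shape: derive the patch name from the
download itself rather than hard-coding a single patch name. The
function and path names below are assumptions for illustration, not the
actual contents of xtensa_download.sh:

```shell
# Hypothetical sketch: look for a per-download patch named after the
# archive, instead of assuming one fixed patch name.
download_and_patch() {
  local archive="$1"     # e.g. xi_tflmlib_vision_p6.zip
  local patch_dir="$2"   # directory of optional per-download patches
  local patch="${patch_dir}/${archive%.zip}.patch"
  if [ -f "${patch}" ]; then
    echo "applying ${patch}"
    # patch -d "${unpack_dir}" -p1 < "${patch}"
  else
    echo "no patch for ${archive}"
  fi
}

download_and_patch "xi_tflmlib_vision_p6.zip" "patches"
```

With this shape, adding a patch for any Xtensa platform only requires
dropping a suitably named file into the patch directory.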

Patch the Xtensa vision_p6 platform download xi_tflmlib_vision_p6 for
compatibility with the standard C++ library. Use the header <climits> to
access constants such as INT_MAX.

BUG=#2650
8 files changed
tree: 8a4d27bed2ffa55f0b714b01513e7ece718347a3
README.md

TensorFlow Lite for Microcontrollers

TensorFlow Lite for Microcontrollers is a port of TensorFlow Lite designed to run machine learning models on DSPs, microcontrollers and other devices with limited memory.


Build Status

Official Builds

| Build Type | Status |
|------------|--------|
| CI (Linux) | CI |
| Code Sync  | Sync from Upstream TF |

Community Supported TFLM Examples

This table captures platforms that TFLM has been ported to. Please see New Platform Support for additional documentation.

| Platform | Status |
|----------|--------|
| Arduino | Arduino, Antmicro |
| Coral Dev Board Micro | TFLM + EdgeTPU Examples for Coral Dev Board Micro |
| Espressif Systems Dev Boards | ESP Dev Boards |
| Renesas Boards | TFLM Examples for Renesas Boards |
| Silicon Labs Dev Kits | TFLM Examples for Silicon Labs Dev Kits |
| Sparkfun Edge | Sparkfun Edge |
| Texas Instruments Dev Boards | Texas Instruments Dev Boards |

Community Supported Kernels and Unit Tests

This is a list of targets that have optimized kernel implementations and/or run the TFLM unit tests using software emulation or instruction set simulators.

| Build Type | Status |
|------------|--------|
| Cortex-M | Cortex-M |
| Hexagon | Hexagon |
| RISC-V | RISC-V |
| Xtensa | Xtensa |
| Generate Integration Test | Generate Integration Test |

Contributing

See our contribution documentation.

Getting Help

A GitHub issue should be the primary method of getting in touch with the TensorFlow Lite Micro (TFLM) team.

The following resources may also be useful:

  1. SIG Micro email group and monthly meetings.

  2. SIG Micro gitter chat room.

  3. For questions that are not specific to TFLM, please consult the broader TensorFlow project.

Additional Documentation

RFCs

  1. Pre-allocated tensors
  2. TensorFlow Lite for Microcontrollers Port of 16x8 Quantized Operators