Windows support was removed from the documentation because the TensorFlow Lite Micro build is not compatible with MinGW make.

Signed-off-by: alexander <alexander.efremov@arm.com>
Change-Id: I0980838f659431b18ebc54ec0a1e4371941c36b4
Readme.md

Arm® ML embedded evaluation kit

This repository is for building and deploying Machine Learning (ML) applications targeted at Arm® Cortex®-M CPUs and the Arm® Ethos™-U NPU. To run evaluations using this software, we suggest using an MPS3 board or a Fixed Virtual Platform (FVP) that supports the Ethos-U55 software Fast Model. Both environments run a combination of the new Arm® Cortex®-M55 processor and the Arm® Ethos™-U55 NPU.

Overview of the evaluation kit

The purpose of the evaluation kit is to allow users to develop software and test the performance of the Ethos-U55 NPU and Cortex-M55 CPU. The Ethos-U55 NPU is a new class of machine learning (ML) processor, specifically designed to accelerate ML computation in constrained embedded and IoT devices. The product is optimized to efficiently execute mathematical operations commonly used in ML algorithms, such as convolutions or activation functions.

ML use cases

The evaluation kit adds value by providing ready-to-use ML applications for the embedded stack. As a result, you can experiment with the already developed software use cases and create your own applications for Cortex-M CPUs and the Ethos-U NPU. The example applications at your disposal and the models they use are listed in the table below.

| ML application | Description | Neural Network Model |
| --- | --- | --- |
| Image classification | Recognize the presence of objects in a given image | MobileNet V2 |
| Keyword spotting (KWS) | Recognize the presence of a key word in a recording | DS-CNN-L |
| Automated Speech Recognition (ASR) | Transcribe words in a recording | Wav2Letter |
| KWS and ASR | Utilise Cortex-M and Ethos-U to transcribe words in a recording after a keyword was spotted | DS-CNN-L, Wav2Letter |
| Anomaly Detection | Detect abnormal behavior based on a sound recording of a machine | Coming soon |
| Generic inference runner | Code block allowing you to develop your own use case for the Ethos-U55 NPU | Your custom model |

The above use cases implement an end-to-end ML flow, including data pre-processing and post-processing. They allow you to investigate the embedded software stack and evaluate the performance of the networks running on the Cortex-M55 CPU and Ethos-U55 NPU by displaying performance metrics such as the inference cycle count estimation and the results of the network execution.

Software and hardware overview

The evaluation kit is based on the Arm® Corstone™-300 reference package. Arm® Corstone™-300 helps you build SoCs quickly on the Arm® Cortex®-M55 processor and Arm® Ethos™-U55 NPU designs. The Arm® Corstone™-300 design implementation is publicly available on an Arm MPS3 FPGA board, or as a Fixed Virtual Platform of the MPS3 development board.

The Ethos-U NPU software stack is described here.

All ML use cases, albeit illustrating different applications, share common code, such as the initialization of the Hardware Abstraction Layer (HAL). Thanks to the HAL, the application's common code can run on the x86 or Arm Cortex-M architecture. For the ML application-specific part, the Google® TensorFlow™ Lite for Microcontrollers inference engine is used to schedule the execution of the neural network models. TensorFlow Lite for Microcontrollers is integrated with the Ethos-U55 driver and delegates the execution of supported operators to the NPU or, if a neural network model operator is not supported on the NPU, to the CPU. CMSIS-NN is used to optimise the CPU workload execution with the int8 data type. The common ML application functions help you focus on implementing the logic of your custom ML use case: you can modify only the use case code and leave all other components unchanged. The supplied build system discovers new ML application code and automatically includes it in the compilation flow, as sketched below.
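
As a rough illustration of the last two points, the sketch below builds one of the supplied use cases natively for x86. The option names (TARGET_PLATFORM, USE_CASE_BUILD), the img_class use case and the source/use_case layout are assumptions taken from the repository's build documentation rather than guaranteed values.

```
# A minimal sketch, assuming the CMake options TARGET_PLATFORM and
# USE_CASE_BUILD exist as described in this repository's build
# documentation, and that use case sources are discovered under
# source/use_case/<name>; names here are illustrative.

# Place your own use case alongside the supplied ones so the build
# system can pick it up automatically:
#   source/use_case/my_usecase/...

# Build an application natively (x86); the HAL provides the
# platform-specific pieces, so the common code runs unchanged:
mkdir -p build-native && cd build-native
cmake .. -DTARGET_PLATFORM=native -DUSE_CASE_BUILD=img_class
make -j"$(nproc)"
```

The same sources can then be reconfigured for the Cortex-M55 and Ethos-U55 target, as shown in the steps further below.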

APIs

To run an ML application on the Cortex-M CPU and Ethos-U55 NPU, follow these steps:

  1. Set up your environment by installing the required prerequisites.
  2. Generate an optimized neural network model for the Ethos-U with the Vela compiler by following the instructions here.
  3. Configure the build system.
  4. Compile the project with a make command.
  5. If using an FVP, launch the desired application on the FVP. If using the FPGA option, load the image onto the FPGA and launch the application. An illustrative command sequence for steps 2 to 5 is sketched after this list.
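
The sketch below covers steps 2 to 5 for the image classification use case, assuming the Vela compiler and an Arm Corstone-300 FVP are installed. The CMake option names, use case name and binary paths are assumptions based on the repository's build documentation and may differ in your checkout.

```
# A minimal sketch, not a definitive recipe; option names, use case
# name and output paths are assumptions and may need adjusting.

# Step 2: optimize the model for the Ethos-U55 with the Vela compiler
# (128 MAC configuration shown; pick the variant matching your target).
vela my_model.tflite --accelerator-config ethos-u55-128

# Step 3: configure the build system for the MPS3 / SSE-300 target.
mkdir -p build && cd build
cmake .. -DTARGET_PLATFORM=mps3 -DTARGET_SUBSYSTEM=sse-300 -DUSE_CASE_BUILD=img_class

# Step 4: compile the project.
make -j"$(nproc)"

# Step 5 (FVP option): launch the application on the Corstone-300 FVP.
# For the FPGA option, load the generated image onto the MPS3 board instead.
FVP_Corstone_SSE-300_Ethos-U55 -a bin/ethos-u-img_class.axf
```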

To get familiar with these steps, you can follow the quick start guide.

For more details, see the full documentation: