Before starting the setup process, please make sure that you have:

- A Linux x86_64 based machine; Windows Subsystem for Linux also works. Unfortunately, native Windows is not supported as a build environment yet.
- At least one of the supported toolchains.
- An Arm® MPS3 FPGA prototyping board and components for FPGA evaluation, or a Fixed Virtual Platform binary:
  - An MPS3 FPGA image (AN547), available from: https://developer.arm.com/tools-and-software/development-boards/fpga-prototyping-boards/download-fpga-images. You will also need a USB connection between your machine and the MPS3 board, both for the UART menu and for deploying the application.
  - The Arm Corstone-300 based FVP for MPS3, available from: https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps.
This document contains information that is specific to Arm® Ethos™-U55 products. See the following documents for other relevant information:
ML platform overview: https://mlplatform.org/
Arm® ML processors technical overview: https://developer.arm.com/ip-products/processors/machine-learning
Arm® Cortex-M55® processor: https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55
ML processor, also referred to as a Neural Processing Unit (NPU) - Arm® Ethos™-U55: https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55
Arm® MPS3 FPGA Prototyping Board: https://developer.arm.com/tools-and-software/development-boards/fpga-prototyping-boards/mps3
Arm® ML-Zoo: https://github.com/ARM-software/ML-zoo/
See http://developer.arm.com for access to Arm documentation.
The repository has the following structure:
```
.
├── dependencies
├── docs
├── model_conditioning_examples
├── resources
├── resources_downloaded
├── scripts
│   └── ...
├── source
│   ├── application
│   │   ├── hal
│   │   ├── main
│   │   └── tensorflow-lite-micro
│   └── use_case
│       └── <usecase_name>
│           ├── include
│           ├── src
│           └── usecase.cmake
├── tests
│   └── ...
└── CMakeLists.txt
```
dependencies: contains all the third party dependencies for this project.
docs: contains the documentation for these ML applications.
model_conditioning_examples: contains short example scripts that demonstrate some methods available in TensorFlow to condition your model in preparation for deployment on Arm Ethos NPU.
resources: contains ML use cases applications resources such as input data, label files, etc.
resources_downloaded: created by set_up_default_resources.py, contains downloaded resources for the ML use-case applications, such as models and test data.
scripts: contains build related and source generation scripts.
source: contains C/C++ sources for the platform and ML applications. Common code related to the Ethos-U55 NPU software framework resides in the application sub-folder, with the following structure:
application: contains all the sources that form the core of the application. The use-case part of the sources depends on the sources here.
hal: contains hardware abstraction layer sources providing a platform agnostic API to access hardware platform specific functions.
main: contains the main function and calls to platform initialization logic to set things up before launching the main loop. It also contains sources common to all use case implementations.
tensorflow-lite-micro: contains abstraction around TensorFlow Lite Micro API implementing common functions to initialize a neural network model, run an inference, and access inference results.
use_case: contains the ML use-case specific logic. Keeping this in a separate sub-folder isolates the ML-specific application logic, with the assumption that the application will do all the required set-up for the logic here to run. It also makes it easier to add a new use-case block.
tests: contains the x86 tests for the use case applications.
Hardware abstraction layer has the following structure:
```
hal
├── hal.c
├── include
│   └── ...
└── platforms
    ├── bare-metal
    │   ├── bsp
    │   │   ├── bsp-core
    │   │   │   └── include
    │   │   ├── bsp-packs
    │   │   │   └── mps3
    │   │   ├── cmsis-device
    │   │   ├── include
    │   │   └── mem_layout
    │   ├── data_acquisition
    │   ├── data_presentation
    │   │   ├── data_psn.c
    │   │   └── lcd
    │   │       └── include
    │   ├── images
    │   ├── timer
    │   └── utils
    └── native
```
hal.c: contains the top-level hardware abstraction layer (HAL) platform API and the data acquisition, data presentation, and timer interfaces.
Note: the files here and lower in the hierarchy have been written in C and this layer is a clean C/C++ boundary in the sources.
platforms/bare-metal/utils: contains the bare-metal HAL support layer and platform initialisation helpers. Function calls are routed to platform-specific logic at this level. For example, for data presentation, an lcd module is used; this wraps the LCD driver calls for the actual hardware (for example, MPS3).
platforms/bare-metal/bsp/bsp-packs: contains the core low-level drivers (written in C) for the platform. For the supplied examples this happens to be an MPS3 board, but support could be added here for other platforms too. The functions defined in this space are wired to the higher-level functions under HAL.
platforms/bare-metal/bsp/bsp-packs/mps3: contains the peripheral (LCD, UART and timer) drivers specific to MPS3 board.
platforms/bare-metal/bsp/include: contains the BSP core sources common to all BSPs. These include a UART header (only the implementation of this is platform specific, but the API is common) and "re-targeting" of the standard output and error streams to the UART block.
platforms/bare-metal/bsp/cmsis-device: contains the CMSIS template implementation for the CPU as well as the device initialisation routines. It is also where the system interrupts are set up and handlers are overridden. The main entry point of a bare-metal application will most likely reside in this space. This entry point is responsible for the set-up before calling the user-defined "main" function in the higher-level application sources.
platforms/bare-metal/bsp/mem_layout: contains the platform specific linker scripts.
The models used in the use cases implemented in this project can be downloaded from Arm ML-Zoo.
When using the Ethos-U55 NPU backend, the NN model is assumed to have been optimized by the Vela compiler. However, even if it has not been, the model will fall back to the CPU and execute there, provided it is supported by TensorFlow Lite Micro.
The Vela compiler is a tool that can optimize a neural network model into a version that can run on an embedded system containing an Ethos-U55 NPU.
The optimized model will contain custom operators for the sub-graphs of the model that can be accelerated by the Ethos-U55 NPU. The remaining layers that cannot be accelerated are left unchanged and will run on the CPU using optimized (CMSIS-NN) or reference kernels provided by the inference engine.
For detailed information see Optimize model with Vela compiler.
This section describes how to build the code sample applications from sources - illustrating the build options and the process.
The project can be built for the MPS3 FPGA and for the FVP emulating MPS3. Default values for the configuration parameters will build executables with Ethos-U55 NPU support. See:
This section describes how to deploy the code sample applications on the Fixed Virtual Platform or the MPS3 board. See:
This section describes how to implement a custom Machine Learning application running on a platform supported by the repository (Fixed Virtual Platform or an MPS3 board).
The Cortex-M55 CPU and Ethos-U55 NPU Code Samples software project offers a simple way to incorporate additional use-case code into the existing infrastructure. It provides a build system that automatically picks up added functionality and produces a corresponding executable for each use case.
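Given the per-use-case usecase.cmake file shown in the repository structure, the build system can discover a new use case from its folder layout alone. The fragment below is a hypothetical sketch of that idea; the folder name my_usecase and the variable names are assumptions for illustration, not the kit's actual build interface, so consult the existing use cases for the real pattern.

```cmake
# Hypothetical source/use_case/my_usecase/usecase.cmake -- a sketch of how
# a use case could declare its sources so a build system picks them up.

# Collect this use case's sources relative to this file (illustrative):
file(GLOB_RECURSE MY_USECASE_SRC
    "${CMAKE_CURRENT_LIST_DIR}/src/*.cc"
    "${CMAKE_CURRENT_LIST_DIR}/src/*.cpp")

# Expose the use case's public headers (illustrative):
set(MY_USECASE_INCLUDE_DIR "${CMAKE_CURRENT_LIST_DIR}/include")
```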