Arm® ML embedded evaluation kit

Trademarks

  • Arm® and Cortex® are registered trademarks of Arm® Limited (or its subsidiaries) in the US and/or elsewhere.
  • Arm® and Ethos™ are registered trademarks or trademarks of Arm® Limited (or its subsidiaries) in the US and/or elsewhere.
  • Arm® and Corstone™ are registered trademarks or trademarks of Arm® Limited (or its subsidiaries) in the US and/or elsewhere.
  • TensorFlow™, the TensorFlow logo, and any related marks are trademarks of Google Inc.

Prerequisites

Before starting the setup process, please make sure that you have:

Additional reading

This document contains information that is specific to Arm® Ethos™-U55 and Arm® Ethos™-U65 products. Please refer to the following documents for additional information:

To access Arm documentation online, please visit: http://developer.arm.com

Repository structure

The repository has the following structure:

.
├── dependencies
├── docs
├── model_conditioning_examples
├── resources
├── resources_downloaded
├── scripts
│   └── ...
├── source
│   ├── application
│   │   ├── hal
│   │   ├── main
│   │   └── tensorflow-lite-micro
│   └── use_case
│       └── <usecase_name>
│           ├── include
│           ├── src
│           └── usecase.cmake
├── tests
│   └── ...
└── CMakeLists.txt

What these folders contain:

  • dependencies: All the third-party dependencies for this project.

  • docs: The documentation for this ML application.

  • model_conditioning_examples: Short example scripts that demonstrate some methods available in TensorFlow to condition your model in preparation for deployment on the Arm Ethos-U NPU.

  • resources: Contains resources for the ML use-case applications, such as input data and label files.

  • resources_downloaded: Created by set_up_default_resources.py; contains downloaded resources for the ML use-case applications, such as models and test data.

  • scripts: Build and source generation scripts.

  • source: C/C++ sources for the platform and ML applications.

    Note: Common code related to the Ethos-U NPU software framework resides in the application subfolder.

    The contents of the source folder are as follows:

    • application: All sources that form the core of the application. The use-case sources depend on these core sources, which include:

      • hal: Contains Hardware Abstraction Layer (HAL) sources, providing a platform agnostic API to access hardware platform-specific functions.

      • main: Contains the main function and calls to platform initialization logic to set things up before launching the main loop. Also contains sources common to all use-case implementations.

      • tensorflow-lite-micro: Contains the abstraction around the TensorFlow Lite Micro API. This abstraction implements common functions to initialize a neural network model, run an inference, and access inference results.

    • use_case: Contains the ML use-case specific logic. Keeping it in a separate subfolder isolates the ML-specific application logic, with the assumption that the application performs the required setup for that logic to run. It also makes it easier to add a new use-case block; a sketch of the code a use-case contributes follows this list.

  • tests: Contains the x86 tests for the use-case applications.
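
To make this separation concrete, the sketch below shows the general shape of a handler that a use-case contributes under source/use_case/<usecase_name>/src. All names here (HelloWorldHandler, the context type) are illustrative assumptions, not the repository's actual API; mirror an existing use-case for the real conventions.

/* Hypothetical use-case handler: illustrative names only. */
#include <cstdio>

namespace arm {
namespace app {

/* State owned by the common application code (model, profiler, counters)
 * is assumed to arrive through a context object set up in source/application. */
struct ApplicationContext;

/* Use-case entry point called from the common main loop: one iteration of
 * acquire input -> run inference -> present results. */
bool HelloWorldHandler(ApplicationContext& /* ctx */)
{
    printf("INFO: use-case specific logic runs here\n");
    return true; /* report success back to the caller */
}

} /* namespace app */
} /* namespace arm */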

The HAL has the following structure:

hal
├── hal.c
├── include
│   └── ...
└── platforms
    ├── bare-metal
    │   ├── bsp
    │   │   ├── bsp-core
    │   │   │   └── include
    │   │   ├── bsp-packs
    │   │   │   └── mps3
    │   │   ├── cmsis-device
    │   │   ├── include
    │   │   └── mem_layout
    │   ├── data_acquisition
    │   ├── data_presentation
    │   │   ├── data_psn.c
    │   │   └── lcd
    │   │       └── include
    │   ├── images
    │   ├── timer
    │   └── utils
    └── native

What these folders contain:

  • The folder include and the file hal.c contain the HAL top-level platform API, along with the data acquisition, data presentation, and timer interfaces; a usage sketch follows this list.

    Note: the files here and lower in the hierarchy have been written in C, and this layer is a clean C/C++ boundary in the sources.

  • platforms/bare-metal/data_acquisition
    platforms/bare-metal/data_presentation
    platforms/bare-metal/timer
    platforms/bare-metal/utils:

    These folders contain the bare-metal HAL support layer and platform initialization helpers. Function calls are routed to platform-specific logic at this level. For example, for data presentation, an lcd module has been used. This lcd module wraps the LCD driver calls for the actual hardware (for example, MPS3).

  • platforms/bare-metal/bsp/bsp-packs: The core low-level drivers (written in C) for the platform reside here. For the supplied examples, this happens to be an MPS3 board; however, support for other platforms can be added here. The functions defined in this space are wired to the higher-level functions under HAL, like those at the platforms/bare-metal level.

  • platforms/bare-metal/bsp/bsp-packs/mps3/include
    platforms/bare-metal/bsp/bsp-packs/mps3: Contains the peripheral (LCD, UART, and timer) drivers specific to the MPS3 board.

  • platforms/bare-metal/bsp/bsp-core
    platforms/bare-metal/bsp/include: Contains the BSP core sources common to all BSPs, including a UART header whose API is common while the implementation is platform-specific. This layer also "re-targets" the standard output and error streams to the UART block.

  • platforms/bare-metal/bsp/cmsis-device: Contains the CMSIS template implementation for the CPU, along with the device initialization routines. It is also where the system interrupts are set up and the handlers are overridden. The main entry point of a bare-metal application most likely resides in this space. This entry point is responsible for the set-up before calling the user-defined "main" function in the higher-level application logic.

  • platforms/bare-metal/bsp/mem_layout: Contains the platform-specific linker scripts.
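
As an illustration of how this layering is consumed, the rough sketch below initializes the platform through the top-level HAL API and then uses only the platform-agnostic interfaces. The function and structure names are assumptions based on the layout described above; check hal.h and the include folder for the actual signatures.

/* Sketch only: names assumed from the HAL layout above, not verified
 * against hal.h. */
#include "hal.h" /* top-level platform API */

int main(void)
{
    hal_platform platform;     /* platform handle */
    data_acq_module data_acq;  /* data acquisition interface */
    data_psn_module data_psn;  /* data presentation interface */
    platform_timer timer;      /* timer interface */

    /* Wire the interfaces into the platform handle, then bring the
     * platform up. On bare-metal builds this routes into the BSP
     * (for example, MPS3); the native platform provides x86 stubs. */
    hal_init(&platform, &data_acq, &data_psn, &timer);
    hal_platform_init(&platform);

    /* From here on, the application only talks to the platform-agnostic
     * interfaces, e.g. presenting text via the data presentation module
     * (backed by the LCD driver on MPS3). */
    platform.data_psn->present_data_text("Hello", 5, 10, 10, false);

    hal_platform_release(&platform);
    return 0;
}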

Models and resources

The models used in the use-cases implemented in this project can be downloaded from: Arm ML-Zoo.

When using the Ethos-U NPU backend, the NN model is optimized by the Vela compiler. If a model has not been optimized, but its operators are supported by TensorFlow Lite Micro, it falls back to running on the CPU.

Vela compiler

The Vela compiler is a tool that can optimize a neural network model into a version that can run on an embedded system containing the Ethos-U NPU.

The optimized model contains custom operators for the sub-graphs that the Ethos-U NPU can accelerate. The remaining layers that cannot be accelerated are left unchanged and run on the CPU using optimized CMSIS-NN kernels or reference kernels provided by the inference engine.

For detailed information, see: Optimize model with Vela compiler.
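
At the application level, a Vela-optimized model is loaded and run through TensorFlow Lite Micro like any other .tflite model, with the Ethos-U custom operator registered next to the CPU kernels used for the non-accelerated layers. The sketch below uses the public TFLM API; g_model_data and the two fallback operators are placeholders, and the interpreter constructor takes an extra error-reporter argument in some TFLM versions.

#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

/* Placeholder: the Vela-optimized flatbuffer, typically generated into
 * a C array by the build scripts. */
extern const unsigned char g_model_data[];

constexpr int kArenaSize = 128 * 1024; /* size this per model */
alignas(16) static uint8_t tensor_arena[kArenaSize];

bool RunOptimizedModel()
{
    const tflite::Model* model = tflite::GetModel(g_model_data);

    /* Register the Ethos-U custom operator, which executes the
     * Vela-generated sub-graphs on the NPU, plus CPU kernels for the
     * layers left unchanged (the two listed here are placeholders). */
    tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddEthosU();
    resolver.AddReshape();
    resolver.AddSoftmax();

    tflite::MicroInterpreter interpreter(model, resolver,
                                         tensor_arena, kArenaSize);
    if (interpreter.AllocateTensors() != kTfLiteOk) {
        return false;
    }

    /* Fill interpreter.input(0) with data, then run the whole graph:
     * NPU sub-graphs and CPU layers execute within this single call. */
    return interpreter.Invoke() == kTfLiteOk;
}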

Building

This section describes how to build the code sample applications from sources, illustrating the build options and the process.

The project can be built for the MPS3 FPGA and for the FVP emulating MPS3. Using the default values for the configuration parameters builds executables that support the Ethos-U NPU.

For further information, please see:

Deployment

This section describes how to deploy the code sample applications on the Fixed Virtual Platform (FVP) or the MPS3 board.

For further information, please see:

Implementing custom ML application

This section describes how to implement a custom Machine Learning application running on a platform supported by the repository, either an FVP or an MPS3 board.

The Cortex-M55 CPU and Ethos-U NPU Code Samples software project offers a way to incorporate extra use-case code into the existing infrastructure. It also provides a build system that automatically picks up added functionality and produces a corresponding executable for each use-case.
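
For instance, a new use-case is added as source/use_case/<usecase_name> containing include, src, and a usecase.cmake, and the build system then generates an executable for it automatically. The skeleton below is a hypothetical entry point only; copy the exact signature and file layout from an existing use-case rather than from this sketch.

/* src/MainLoop.cc (hypothetical skeleton for a new use-case). */
#include <cstdio>

/* Entry point handed control by the common application code after
 * platform initialization; the name and signature are assumptions. */
void main_loop()
{
    /* Typical shape: initialize the model once, then loop over
     * acquire input -> run inference -> present results. */
    printf("INFO: <usecase_name> main loop\n");
}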

For further information, please see:

Testing and benchmarking

Please refer to: Testing and benchmarking.

Memory Considerations

Please refer to:

Troubleshooting

For further information, please see:

Appendix

Please refer to:

FAQ

Please refer to: FAQ