MLECO-1913: Documentation update: helper scripts and AD use case model update

Change-Id: I610b720146b520fe8633d25255b97df647b99ef5
Signed-off-by: Isabella Gottardi <isabella.gottardi@arm.com>
diff --git a/Readme.md b/Readme.md
index 472cf54..ddfb14b 100644
--- a/Readme.md
+++ b/Readme.md
@@ -22,11 +22,11 @@
 
 |   ML application                     |  Description | Neural Network Model |
 | :----------------------------------: | :-----------------------------------------------------: | :----: |
-|  [Image classification](./docs/use_cases/img_class.md)              | Recognize the presence of objects in a given image | [Mobilenet V2](https://github.com/ARM-software/ML-zoo/blob/master/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8)   |
+|  [Image classification](./docs/use_cases/img_class.md)        | Recognize the presence of objects in a given image | [Mobilenet V2](https://github.com/ARM-software/ML-zoo/blob/master/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8)   |
 |  [Keyword spotting(KWS)](./docs/use_cases/kws.md)             | Recognize the presence of a key word in a recording | [DS-CNN-L](https://github.com/ARM-software/ML-zoo/blob/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8) |
 |  [Automated Speech Recognition(ASR)](./docs/use_cases/asr.md) | Transcribe words in a recording | [Wav2Letter](https://github.com/ARM-software/ML-zoo/blob/master/models/speech_recognition/wav2letter/tflite_int8) |
 |  [KWS and ASR](./docs/use_cases/kws_asr.md) | Utilise Cortex-M and Ethos-U to transcribe words in a recording after a keyword was spotted | [DS-CNN-L](https://github.com/ARM-software/ML-zoo/blob/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8)  [Wav2Letter](https://github.com/ARM-software/ML-zoo/blob/master/models/speech_recognition/wav2letter/tflite_int8) |
-|  [Anomaly Detection](./docs/use_cases/ad.md)                 | Detecting abnormal behavior based on a sound recording of a machine | Coming soon|
+|  [Anomaly Detection](./docs/use_cases/ad.md)                 | Detecting abnormal behavior based on a sound recording of a machine | [Anomaly Detection](https://github.com/ARM-software/ML-zoo/raw/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8/ad_medium_int8.tflite)|
 | [Generic inference runner](./docs/use_cases/inference_runner.md) | Code block allowing you to develop your own use case for Ethos-U55 NPU | Your custom model |
 
 The above use cases implement end-to-end ML flow including data pre-processing and post-processing. They will allow you
@@ -81,11 +81,95 @@
   - [Implementing custom ML application](./docs/documentation.md#implementing-custom-ml-application)
   - [Testing and benchmarking](./docs/documentation.md#testing-and-benchmarking)
   - [Troubleshooting](./docs/documentation.md#troubleshooting)
-  - [Contribution guidelines](./docs/documentation.md#contribution-guidelines)
-    - [Coding standards and guidelines](./docs/documentation.md#coding-standards-and-guidelines)
-    - [Code Reviews](./docs/documentation.md#code-reviews)
-    - [Testing](./docs/documentation.md#testing)
-  - [Communication](./docs/documentation.md#communication)
-  - [Licenses](./docs/documentation.md#licenses)
   - [Appendix](./docs/documentation.md#appendix)
-  
\ No newline at end of file
+
+## Contribution guidelines
+
+Contributions are only accepted under the following conditions:
+
+- The contribution has a certified origin and you give us your permission. To manage this process we use the
+  [Developer Certificate of Origin (DCO) V1.1](https://developercertificate.org/).
+  To indicate that contributors agree to the terms of the DCO, it is necessary to "sign off" the
+  contribution by adding a line with their name and e-mail address to every git commit message:
+
+  ```log
+  Signed-off-by: John Doe <john.doe@example.org>
+  ```
+
+  This can be done automatically by adding the `-s` option to your `git commit` command.
+  You must use your real name; no pseudonyms or anonymous contributions are accepted.
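+
+  For example, a sign-off line can be added at commit time with the `-s` flag (the commit message below is
+  illustrative only):
+
+  ```commandline
+  git commit -s -m "Fix typo in quick start guide"
+  ```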
+
+- You give permission according to the [Apache License 2.0](../LICENSE_APACHE_2.0.txt).
+
+  In each source file, include the following copyright notice:
+
+  ```copyright
+  /*
+  * Copyright (c) <years additions were made to project> <your name>, Arm Limited. All rights reserved.
+  * SPDX-License-Identifier: Apache-2.0
+  *
+  * Licensed under the Apache License, Version 2.0 (the "License");
+  * you may not use this file except in compliance with the License.
+  * You may obtain a copy of the License at
+  *
+  *     http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+  ```
+
+### Coding standards and guidelines
+
+This repository follows a set of guidelines, best practices, programming styles and conventions,
+see:
+
+- [Coding standards and guidelines](./docs/sections/coding_guidelines.md)
+  - [Introduction](./docs/sections/coding_guidelines.md#introduction)
+  - [Language version](./docs/sections/coding_guidelines.md#language-version)
+  - [File naming](./docs/sections/coding_guidelines.md#file-naming)
+  - [File layout](./docs/sections/coding_guidelines.md#file-layout)
+  - [Block Management](./docs/sections/coding_guidelines.md#block-management)
+  - [Naming Conventions](./docs/sections/coding_guidelines.md#naming-conventions)
+    - [C++ language naming conventions](./docs/sections/coding_guidelines.md#c_language-naming-conventions)
+    - [C language naming conventions](./docs/sections/coding_guidelines.md#c-language-naming-conventions)
+  - [Layout and formatting conventions](./docs/sections/coding_guidelines.md#layout-and-formatting-conventions)
+  - [Language usage](./docs/sections/coding_guidelines.md#language-usage)
+
+### Code Reviews
+
+Contributions must go through code review. Code reviews are performed through the
+[mlplatform.org Gerrit server](https://review.mlplatform.org). Contributors need to sign up to this
+Gerrit server with their GitHub account credentials.
+In order to be merged, a patch needs to:
+
+- get a "+1 Verified" from the pre-commit job.
+- get a "+2 Code-review" from a reviewer, it means the patch has the final approval.
+
+### Testing
+
+Prior to submitting a patch for review, please make sure that all build variants work and the unit tests pass.
+Contributions go through testing in the continuous integration system. All builds, tests and checks must pass before a
+contribution gets merged to the master branch.
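+
+As a rough sketch, building for the `native` target and running the unit tests of a single use case could look like
+the following (the exact CMake options are described in the building documentation and may differ):
+
+```commandline
+mkdir build-native && cd build-native
+cmake .. -DTARGET_PLATFORM=native
+make -j4
+./bin/arm_ml_embedded_evaluation_kit-kws-tests
+```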
+
+## Communication
+
+If you want to start a public discussion, or raise any issues or questions related to this repository, please use the
+[https://discuss.mlplatform.org/c/ml-embedded-evaluation-kit](https://discuss.mlplatform.org/c/ml-embedded-evaluation-kit/)
+forum.
+
+## Licenses
+
+The ML Embedded application samples are provided under the Apache 2.0 license, see [License Apache 2.0](../LICENSE_APACHE_2.0.txt).
+
+Application input data sample files are provided under their original license:
+
+| Sample files | License | Provenance |
+|---------------|---------|---------|
+| [Automatic Speech Recognition Samples](./resources/asr/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <http://www.openslr.org/12/> |
+| [Image Classification Samples](./resources/img_class/samples/files.md) | [Creative Commons Attribution 1.0](./resources/LICENSE_CC_1.0.txt) | <https://www.pexels.com> |
+| [Keyword Spotting Samples](./resources/kws/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
+| [Keyword Spotting and Automatic Speech Recognition Samples](./resources/kws_asr/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
diff --git a/docs/documentation.md b/docs/documentation.md
index 8ab9fa3..9ec73a3 100644
--- a/docs/documentation.md
+++ b/docs/documentation.md
@@ -185,7 +185,7 @@
 - [Mobilenet V2](https://github.com/ARM-software/ML-zoo/blob/master/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8).
 - [DS-CNN](https://github.com/ARM-software/ML-zoo/blob/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8).
 - [Wav2Letter](https://github.com/ARM-software/ML-zoo/blob/master/models/speech_recognition/wav2letter/tflite_int8).
-- Anomaly Detection (coming soon).
+- [Anomaly Detection](https://github.com/ARM-software/ML-zoo/raw/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8/ad_medium_int8.tflite).
 
 When using Ethos-U55 NPU backend, the NN model is assumed to be optimized by Vela compiler.
 However, even if not, it will fall back on the CPU and execute, if supported by TensorFlow Lite Micro.
diff --git a/docs/quick_start.md b/docs/quick_start.md
index f3565e8..abf8f50 100644
--- a/docs/quick_start.md
+++ b/docs/quick_start.md
@@ -1,8 +1,9 @@
 # Quick start example ML application
 
-This is a quick start guide that will show you how to run the keyword spotting example application. The aim of this guide
-is to illustrate the flow of running an application on the evaluation kit rather than showing the keyword spotting
-functionality or performance. All use cases in the evaluation kit follow the steps.
+This is a quick start guide that shows you how to run the keyword spotting example application.
+The aim of this quick start guide is to enable you to run an application quickly on the Fixed Virtual Platform.
+We assume that your Arm® Ethos™-U55 NPU is configured to use 128 Multiply-Accumulate units and
+shares SRAM with the Arm® Cortex®-M55.
 
 1. Verify you have installed [the required prerequisites](sections/building.md#Build-prerequisites).
 
@@ -19,75 +20,142 @@
     git submodule update --init
     ```
 
-4. Next, you would need to get a neural network model. For the purpose of this quick start guide, we'll use the
-    `ds_cnn_clustered_int8` keyword spotting model from the [Arm public model zoo](https://github.com/ARM-software/ML-zoo)
-    and the principle remains the same for all of the other use cases. Download the `ds_cnn_large_int8.tflite` model
-    file with the curl command below:
-
-    ```commandline
-    curl -L https://github.com/ARM-software/ML-zoo/blob/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/ds_cnn_clustered_int8.tflite?raw=true --output ds_cnn_clustered_int8.tflite
-    ```
-
-5. [Vela](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-vela) is an open-source python tool converting
+4. Next, you can use the `build_default.py` Python script to download the default neural network models, compile them
+    with Vela and build the project.
+    [Vela](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-vela) is an open-source Python tool converting
     TensorFlow Lite for Microcontrollers neural network model into an optimized model that can run on an embedded system
     containing an Ethos-U55 NPU. It is worth noting that in order to take full advantage of the capabilities of the NPU, the
     neural network operators should be [supported by Vela](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-vela/+/HEAD/SUPPORTED_OPS.md).
-    In this step, you will compile the model with Vela.
-
-    For this step, you need to ensure you have [correctly installed the Vela package](https://pypi.org/project/ethos-u-vela/):
 
     ```commandline
-    python3 -m venv env
-    source ./env/bin/activate
-    pip install --upgrade pip
-    pip install --upgrade setuptools
-    pip install ethos-u-vela
+    python3 build_default.py
     ```
 
-    In the command below, we specify that we are using the Arm® Ethos™-U55 NPU with a 128 Multiply-Accumulate units
-    (MAC units) configured for a High End Embedded use case. The [building section](sections/building.md#Optimize-custom-model-with-Vela-compiler)
-    has more detailed explanation about Vela usage.
-
-    ```commandline
-    vela ds_cnn_clustered_int8.tflite \
-        --accelerator-config=ethos-u55-128 \
-        --block-config-limit=0 \
-        --config scripts/vela/vela.ini \
-        --memory-mode Shared_Sram \
-        --system-config Ethos_U55_High_End_Embedded
-    ```
-
-    An optimized model file for Ethos-U55 is generated in a folder named `output`.
-
-6. Create a `build` folder in the root level of the evaluation kit.
-
-    ```commandline
-    mkdir build && cd build
-    ```
-
-7. Build the makefiles with `CMake` as shown in the command below. The [build process section](sections/building.md#Build-process)
-    gives an in-depth explanation about the meaning of every parameter. For the time being, note that we point the Vela
-    optimized model from stage 5 in the `-Dkws_MODEL_TFLITE_PATH` parameter.
-
-    ```commandline
-    cmake \
-        -DUSE_CASE_BUILD=kws \
-        -Dkws_MODEL_TFLITE_PATH=output/ds_cnn_clustered_int8_vela.tflite \
-        ..
-    ```
-
-8. Compile the project with a `make`. Details about this stage can be found in the [building part of the documentation](sections/building.md#Building-the-configured-project).
-
-    ```commandline
-    make -j4
-    ```
-
-9. Launch the project as explained [here](sections/deployment.md#Deployment). In this quick-start guide, we'll use the Fixed
-    Virtual Platform. Point the generated `bin/ethos-u-kws.axf` file in stage 8 to the FVP that you have downloaded when
+5. Launch the project as explained [here](sections/deployment.md#Deployment). For the purpose of this quick start guide,
+    we'll use the keyword spotting application and the Fixed Virtual Platform.
+    Pass the `bin/ethos-u-kws.axf` file generated in stage 4 to the FVP that you downloaded when
     installing the prerequisites.
 
     ```commandline
     <path_to_FVP>/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-kws.axf
     ```
 
-10. A telnet window is launched through which you can interact with the application and obtain performance figures.
+6. A telnet window is launched through which you can interact with the application and obtain performance figures.
+
+> **Note:** Execution of the `build_default.py` script is equivalent to running the following commands:
+
+```commandline
+mkdir resources_downloaded && cd resources_downloaded
+python3 -m venv env
+env/bin/python3 -m pip install --upgrade pip
+env/bin/python3 -m pip install --upgrade setuptools
+env/bin/python3 -m pip install ethos-u-vela==2.1.1
+cd ..
+
+curl -L https://github.com/ARM-software/ML-zoo/raw/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8/ad_medium_int8.tflite \
+    --output resources_downloaded/ad/ad_medium_int8.tflite
+curl -L https://github.com/ARM-software/ML-zoo/raw/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8/testing_input/input/0.npy \
+    --output ./resources_downloaded/ad/ifm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8/testing_output/Identity/0.npy \
+    --output ./resources_downloaded/ad/ofm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/speech_recognition/wav2letter/tflite_int8/wav2letter_int8.tflite \
+    --output ./resources_downloaded/asr/wav2letter_int8.tflite
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/speech_recognition/wav2letter/tflite_int8/testing_input/input_2_int8/0.npy \
+    --output ./resources_downloaded/asr/ifm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/speech_recognition/wav2letter/tflite_int8/testing_output/Identity_int8/0.npy \
+    --output ./resources_downloaded/asr/ofm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8/mobilenet_v2_1.0_224_quantized_1_default_1.tflite \
+    --output ./resources_downloaded/img_class/mobilenet_v2_1.0_224_quantized_1_default_1.tflite
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8/testing_input/input/0.npy \
+    --output ./resources_downloaded/img_class/ifm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8/testing_output/output/0.npy \
+    --output ./resources_downloaded/img_class/ofm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/ds_cnn_clustered_int8.tflite \
+    --output ./resources_downloaded/kws/ds_cnn_clustered_int8.tflite
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/testing_input/input_2/0.npy \
+    --output ./resources_downloaded/kws/ifm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/testing_output/Identity/0.npy \
+    --output ./resources_downloaded/kws/ofm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/speech_recognition/wav2letter/tflite_int8/wav2letter_int8.tflite \
+    --output ./resources_downloaded/kws_asr/wav2letter_int8.tflite
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/speech_recognition/wav2letter/tflite_int8/testing_input/input_2_int8/0.npy \
+    --output ./resources_downloaded/kws_asr/asr/ifm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/speech_recognition/wav2letter/tflite_int8/testing_output/Identity_int8/0.npy \
+    --output ./resources_downloaded/kws_asr/asr/ofm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/ds_cnn_clustered_int8.tflite \
+    --output ./resources_downloaded/kws_asr/ds_cnn_clustered_int8.tflite
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/testing_input/input_2/0.npy \
+    --output ./resources_downloaded/kws_asr/kws/ifm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/testing_output/Identity/0.npy \
+    --output ./resources_downloaded/kws_asr/kws/ofm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/dnn_small/tflite_int8/dnn_s_quantized.tflite \
+    --output ./resources_downloaded/inference_runner/dnn_s_quantized.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/kws/ds_cnn_clustered_int8.tflite \
+    --accelerator-config=ethos-u55-128 \
+    --block-config-limit=0 --config scripts/vela/default_vela.ini \
+    --memory-mode=Shared_Sram \
+    --system-config=Ethos_U55_High_End_Embedded \
+    --output-dir=resources_downloaded/kws
+mv resources_downloaded/kws/ds_cnn_clustered_int8_vela.tflite resources_downloaded/kws/ds_cnn_clustered_int8_vela_H128.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/kws_asr/wav2letter_int8.tflite \
+    --accelerator-config=ethos-u55-128 \
+    --block-config-limit=0 --config scripts/vela/default_vela.ini \
+    --memory-mode=Shared_Sram \
+    --system-config=Ethos_U55_High_End_Embedded \
+    --output-dir=resources_downloaded/kws_asr
+mv resources_downloaded/kws_asr/wav2letter_int8_vela.tflite resources_downloaded/kws_asr/wav2letter_int8_vela_H128.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/kws_asr/ds_cnn_clustered_int8.tflite \
+    --accelerator-config=ethos-u55-128 \
+    --block-config-limit=0 --config scripts/vela/default_vela.ini \
+    --memory-mode=Shared_Sram \
+    --system-config=Ethos_U55_High_End_Embedded \
+    --output-dir=resources_downloaded/kws_asr
+mv resources_downloaded/kws_asr/ds_cnn_clustered_int8_vela.tflite resources_downloaded/kws_asr/ds_cnn_clustered_int8_vela_H128.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/inference_runner/dnn_s_quantized.tflite \
+    --accelerator-config=ethos-u55-128 \
+    --block-config-limit=0 --config scripts/vela/default_vela.ini \
+    --memory-mode=Shared_Sram \
+    --system-config=Ethos_U55_High_End_Embedded \
+    --output-dir=resources_downloaded/inference_runner
+mv resources_downloaded/inference_runner/dnn_s_quantized_vela.tflite resources_downloaded/inference_runner/dnn_s_quantized_vela_H128.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/img_class/mobilenet_v2_1.0_224_quantized_1_default_1.tflite \
+    --accelerator-config=ethos-u55-128 \
+    --block-config-limit=0 --config scripts/vela/default_vela.ini \
+    --memory-mode=Shared_Sram \
+    --system-config=Ethos_U55_High_End_Embedded \
+    --output-dir=resources_downloaded/img_class
+mv resources_downloaded/img_class/mobilenet_v2_1.0_224_quantized_1_default_1_vela.tflite resources_downloaded/img_class/mobilenet_v2_1.0_224_quantized_1_default_1_vela_H128.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/asr/wav2letter_int8.tflite \
+    --accelerator-config=ethos-u55-128 \
+    --block-config-limit=0 --config scripts/vela/default_vela.ini \
+    --memory-mode=Shared_Sram \
+    --system-config=Ethos_U55_High_End_Embedded \
+    --output-dir=resources_downloaded/asr
+mv resources_downloaded/asr/wav2letter_int8_vela.tflite resources_downloaded/asr/wav2letter_int8_vela_H128.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/ad/ad_medium_int8.tflite \
+    --accelerator-config=ethos-u55-128 \
+    --block-config-limit=0 --config scripts/vela/default_vela.ini \
+    --memory-mode=Shared_Sram \
+    --system-config=Ethos_U55_High_End_Embedded \
+    --output-dir=resources_downloaded/ad
+mv resources_downloaded/ad/ad_medium_int8_vela.tflite resources_downloaded/ad/ad_medium_int8_vela_H128.tflite
+
+mkdir cmake-build-mps3-sse-300-gnu-release && cd cmake-build-mps3-sse-300-gnu-release
+
+cmake .. \
+    -DTARGET_PLATFORM=mps3 \
+    -DTARGET_SUBSYSTEM=sse-300 \
+    -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-gcc.cmake
+```
+
+> **Note:** If you want to make changes to the application (for example modifying the number of MAC units of the Ethos-U or running a custom neural network),
+> you should follow the approach defined in [documentation.md](../docs/documentation.md) instead of using the `build_default.py` script.
diff --git a/docs/sections/building.md b/docs/sections/building.md
index 7bd01d1..98cb5e8 100644
--- a/docs/sections/building.md
+++ b/docs/sections/building.md
@@ -393,9 +393,11 @@
 Results of the build will be placed under `build/bin/` folder:
 
 ```tree
- bin
-  |- dev_ethosu_eval-tests
-  |_ ethos-u
+bin
+├── arm_ml_embedded_evaluation_kit-<usecase1>-tests
+├── arm_ml_embedded_evaluation_kit-<usecase2>-tests
+├── ethos-u-<usecase1>
+└── ethos-u-<usecase2>
 ```
 
 ### Configuring the build for simple_platform
diff --git a/docs/sections/testing_benchmarking.md b/docs/sections/testing_benchmarking.md
index 7932dde..904f2c9 100644
--- a/docs/sections/testing_benchmarking.md
+++ b/docs/sections/testing_benchmarking.md
@@ -27,13 +27,13 @@
 - `utils`: contains utilities sources used only within the tests.
 
 When [configuring](./building.md#configuring-the-build-native-unit-test) and
-[building](./building.md#Building-the-configured-project) for `native` target platform results of the build will
-be placed under `build/bin/` folder, for example:
+[building](./building.md#building-the-configured-project) for `native` target platform results of the build will
+be placed under `<build folder>/bin/` folder, for example:
 
 ```tree
 .
-├── dev_ethosu_eval-<usecase1>-tests
-├── dev_ethosu_eval-<usecase2>-tests
+├── arm_ml_embedded_evaluation_kit-<usecase1>-tests
+├── arm_ml_embedded_evaluation_kit-<usecase2>-tests
 ├── ethos-u-<usecase1>
 └── ethos-u-<usecase1>
 ```
@@ -41,7 +41,7 @@
 To execute unit-tests for a specific use-case in addition to the common tests:
 
 ```commandline
-dev_ethosu_eval-<use_case>-tests
+arm_ml_embedded_evaluation_kit-<use_case>-tests
 ```
 
 ```log
diff --git a/set_up_default_resources.py b/set_up_default_resources.py
index 915120f..79b0333 100755
--- a/set_up_default_resources.py
+++ b/set_up_default_resources.py
@@ -215,6 +215,7 @@
             new_vela_optimised_model_path = vela_optimised_model_path.replace("_vela.tflite", "_vela_H128.tflite")
             # rename default vela model
             os.rename(vela_optimised_model_path, new_vela_optimised_model_path)
+            logging.info(f"Renaming {vela_optimised_model_path} to {new_vela_optimised_model_path}.")
 
 
 if __name__ == '__main__':