MLECO-3250 Eval Kit README refresh

* Simplified the landing README file
* Rearranged sections for improved reading flow

Signed-off-by: Ayaan Masood <Ayaan.Masood@arm.com>
Change-Id: I050b39d1acdb08626134e66af2ce2eee1dbffbf9
diff --git a/Readme.md b/Readme.md
index 0c8d019..732bdde 100644
--- a/Readme.md
+++ b/Readme.md
@@ -1,45 +1,17 @@
 
 # Arm® ML embedded evaluation kit
 
-This repository is for building and deploying Machine Learning (ML) applications targeted for Arm® Cortex®-M and Arm®
-Ethos™-U NPU.
-To run evaluations using this software, we suggest using:
+## Overview
 
-- an [MPS3 board](https://developer.arm.com/tools-and-software/development-boards/fpga-prototyping-boards/mps3) with
-  [Arm® Corstone-300](https://developer.arm.com/Processors/Corstone-300) or [Arm® Corstone-310](https://developer.arm.com/Processors/Corstone-310) implementations.
-  - Arm® Corstone™-300 runs a combination of
-  the [Arm® Cortex™-M55 processor](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55) and the
-  [Arm® Ethos™-U55 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55).
-  - Arm® Corstone™-310 runs a combination of
-      the [Arm® Cortex™-M85 processor](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m85) and the
-      [Arm® Ethos™-U55 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55).
+The ML embedded evaluation kit provides a range of ready-to-use machine learning (ML) applications that allow users to develop ML workloads running on the Arm® Ethos™-U NPU and
+Arm® Cortex®-M CPUs. You can also access metrics, such as the inference cycle count, to estimate performance.
 
-- a [Arm® Corstone™-300 MPS3 based Fixed Virtual Platform (FVP)](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps)
-  that offers a choice of the [Arm® Ethos™-U55 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55)
-  or [Arm® Ethos™-U65 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u65) software fast model in combination of
-  the new [Arm® Cortex™-M55 processor](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55). You can also take advantage of
-  [Arm Virtual Hardware](https://www.arm.com/products/development-tools/simulation/virtual-hardware) (AVH) and [run the Fixed Virtual Platform
-  in the cloud](./docs/sections/arm_virtual_hardware.md).
-  > **NOTE**: While Arm® Corstone™-300 is available as an [Ecosystem FVP](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps)
-  > and AVH, Arm® Corstone™-310 is available (for both Arm® Ethos™-U55 and Ethos™-U65 NPUs) only as AVH implementations.
-
-## Inclusive language commitment
-
-This product conforms to Arm's inclusive language policy and, to the best of our knowledge,
-does not contain any non-inclusive language. If you find something that concerns you, email terms@arm.com.
-
-## Overview of the evaluation kit
-
-The purpose of this evaluation kit is to allow users to develop software and test the performance of the Arm® Ethos-U NPU and
-Arm® Cortex-M CPUs for ML workloads. The Ethos-U NPU is a new class of machine learning (ML) processor, specifically designed
-to accelerate ML computation in constrained embedded and IoT devices. The product is optimized to execute
-mathematical operations efficiently that are commonly used in ML algorithms, such as convolutions or activation functions.
+>*The Arm® Ethos-U NPU is a new class of ML processor, specifically designed
+to accelerate ML computation in constrained embedded and IoT devices.*
 
 ## ML use cases
 
-The evaluation kit adds value by providing ready to use ML applications for the embedded stack. As a result, you can
-experiment with the already developed software use cases and create your own applications for Cortex-M CPU and Ethos-U NPU.
-The example application at your disposal and the utilized models are listed in the table below.
+Experiment with the included end-to-end software use cases and create your own ML applications for Cortex-M CPU and Ethos-U NPU.
 
 |   ML application                     |  Description | Neural Network Model |
 | :----------------------------------: | :-----------------------------------------------------: | :----: |
@@ -50,40 +22,141 @@
 |  [Anomaly Detection](./docs/use_cases/ad.md)                 | Detecting abnormal behavior based on a sound recording of a machine | [MicroNet](https://github.com/ARM-software/ML-zoo/tree/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8/)|
 |  [Visual Wake Word](./docs/use_cases/visual_wake_word.md)                 | Recognize if person is present in a given image | [MicroNet](https://github.com/ARM-software/ML-zoo/tree/7dd3b16bb84007daf88be8648983c07f3eb21140/models/visual_wake_words/micronet_vww4/tflite_int8/vww4_128_128_INT8.tflite)|
 |  [Noise Reduction](./docs/use_cases/noise_reduction.md)        | Remove noise from audio while keeping speech intact | [RNNoise](https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8)   |
-|  [Generic inference runner](./docs/use_cases/inference_runner.md) | Code block allowing you to develop your own use case for Ethos-U NPU | Your custom model |
 |  [Object detection](./docs/use_cases/object_detection.md)      | Detects and draws face bounding box in a given image | [Yolo Fastest](https://github.com/emza-vs/ModelZoo/blob/master/object_detection/yolo-fastest_192_face_v4.tflite)
+|  [Generic inference runner](./docs/use_cases/inference_runner.md) | Code block allowing you to develop your own use case for Ethos-U NPU | Your custom model |
 
-The above use cases implement end-to-end ML flow including data pre-processing and post-processing. They will allow you
-to investigate embedded software stack, evaluate performance of the networks running on Cortex-M55 CPU and Ethos-U NPU
-by displaying different performance metrics such as inference cycle count estimation and results of the network execution.
+## Recommended build targets
+
+This repository is for building and deploying Machine Learning (ML) applications targeting Arm® Cortex®-M CPUs and the
+Arm® Ethos™-U NPU.
+To run evaluations using this software, we suggest using:
+
+- [MPS3 board](https://developer.arm.com/tools-and-software/development-boards/fpga-prototyping-boards/mps3) with
+  [Arm® Corstone-300](https://developer.arm.com/Processors/Corstone-300) or [Arm® Corstone-310](https://developer.arm.com/Processors/Corstone-310) implementations.
+  - Arm® Corstone™-300 runs a combination of
+  the [Arm® Cortex™-M55 processor](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55) and the
+  [Arm® Ethos™-U55 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55).
+  - Arm® Corstone™-310 runs a combination of
+      the [Arm® Cortex™-M85 processor](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m85) and the
+      [Arm® Ethos™-U55 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55).
+
+- [Arm® Corstone™-300 MPS3 based Fixed Virtual Platform (FVP)](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps)
+  offers a choice of the [Arm® Ethos™-U55 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55)
+  or [Arm® Ethos™-U65 NPU](https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u65) software fast model in combination with
+  the new [Arm® Cortex™-M55 processor](https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55).
+  - You can also take advantage of
+    [Arm Virtual Hardware](https://www.arm.com/products/development-tools/simulation/virtual-hardware) (AVH) and [run the Fixed Virtual Platform
+    in the cloud](./docs/sections/arm_virtual_hardware.md).
+
+> Arm® Corstone™-300 and Corstone™-310 design implementations are publicly available on the [Download FPGA Images](https://developer.arm.com/tools-and-software/development-boards/fpga-prototyping-boards/download-fpga-images) page,
+> or as a [Fixed Virtual Platform of the MPS3 development board](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+
+### Quick Start
+
+To run ML applications on the Cortex-M and Ethos-U NPU:
+
+1. First, verify that you have installed all of [the required prerequisites](./docs/sections/building.md#build-prerequisites).
+   > **NOTE**: A `Dockerfile` is also available if you would like to build a Docker image with the prerequisites installed.
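+
+   For example, an image could be built from the repository root, assuming the `Dockerfile` is located there (the image tag is illustrative):
+
+    ```commandline
+    docker build -t ml-embedded-eval-kit .
+    ```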
+
+2. Clone the *Ethos-U* evaluation kit repository:
+
+    ```commandline
+    git clone "https://review.mlplatform.org/ml/ethos-u/ml-embedded-evaluation-kit"
+    cd ml-embedded-evaluation-kit
+    ```
+
+3. Pull all the external dependencies with the following command:
+
+    ```commandline
+    git submodule update --init
+    ```
+
+4. Next, run the `build_default.py` Python script. It downloads the neural network models, compiles them using
+   [Vela](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-vela), and builds the project using CMake.
+
+   Using the Arm compiler:
+
+    ```commandline
+    python3.9 ./build_default.py --toolchain arm
+    ```
+
+   Using the GNU Arm Embedded toolchain:
+
+    ```commandline
+    python3.9 ./build_default.py
+    ```
+
+5. Change directory to the generated CMake build folder, which contains the `.axf` output file in its `bin`
+   subdirectory. Launch the application by passing the `.axf` file to the FVP you downloaded when installing the
+   prerequisites. Alternatively, from the root directory, prefix the path to the `.axf` with `<cmake-build-your_config>`
+   and use one of the following commands:
+
+    ```commandline
+    # From the auto-generated (or custom) build directory:
+    <path_to_FVP>/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-kws.axf
+
+    # From the root directory:
+    <path_to_FVP>/FVP_Corstone_SSE-300_Ethos-U55 -a <cmake-build-your_config>/bin/ethos-u-kws.axf
+    ```
+
+6. A telnet window is launched through which you can interact with the application and obtain performance figures.
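+
+   If a terminal window does not open automatically, you can usually connect to it manually; the port number is printed
+   in the FVP output (5000 is a typical default and is shown here as an assumption):
+
+    ```commandline
+    telnet localhost 5000
+    ```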
+
+**For more details, you can view the [quick start guide](docs/quick_start.md).**
+
+> **Note:** The default flow assumes Arm® *Ethos™-U55* NPU usage, configured to use 128 Multiply-Accumulate units
+> and sharing SRAM with the Arm® *Cortex®-M55*.
+>
+> The ML embedded evaluation kit supports:
+>
+> |  *Ethos™-U* NPU  | Default MACs/cc | Other MACs/cc supported | Default Memory Mode | Other Memory Modes supported |
+> |------------------|-----------------|-------------------------|---------------------|------------------------------|
+> |   *Ethos™-U55*   |       128       |      32, 64, 256        |     Shared_Sram     |          Sram_Only           |
+> |   *Ethos™-U65*   |       256       |          512            |    Dedicated_Sram   |         Shared_Sram          |
+>
+> For more information see [Building](./docs/documentation.md#building).
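+
+For example, a non-default NPU configuration could be selected when configuring the build with CMake. This is an
+illustrative sketch only: the option names below are assumptions based on the Building guide, so verify them against
+[Building](./docs/documentation.md#building) before use.
+
+```commandline
+# Assumed option names -- selects Ethos-U65 with 512 MACs/cc and the Dedicated_Sram memory mode.
+cmake .. \
+    -DETHOS_U_NPU_ID=U65 \
+    -DETHOS_U_NPU_CONFIG_ID=Y512 \
+    -DETHOS_U_NPU_MEMORY_MODE=Dedicated_Sram
+```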
+
+**See full documentation:**
+
+- [Arm® ML embedded evaluation kit](./docs/documentation.md#arm_ml-embedded-evaluation-kit)
+  - [Table of Content](./docs/documentation.md#table-of-content)
+  - [Trademarks](./docs/documentation.md#trademarks)
+  - **[Prerequisites](./docs/documentation.md#prerequisites)**
+    - [Additional reading](./docs/documentation.md#additional-reading)
+  - [Repository structure](./docs/documentation.md#repository-structure)
+  - [Models and resources](./docs/documentation.md#models-and-resources)
+  - **[Building](./docs/documentation.md#building)**
+  - [Deployment](./docs/documentation.md#deployment)
+  - [Running code samples applications](./docs/documentation.md#running-code-samples-applications)
+  - [Implementing custom ML application](./docs/documentation.md#implementing-custom-ml-application)
+  - [Testing and benchmarking](./docs/documentation.md#testing-and-benchmarking)
+  - **[Troubleshooting](./docs/documentation.md#troubleshooting)**
+  - [Appendix](./docs/documentation.md#appendix)
+  - [Contributions](./docs/documentation.md#contributing)
+  - **[FAQ](./docs/documentation.md#faq)**
 
 ## Software and hardware overview
 
-The evaluation kit primarily supports [Arm® Corstone™-300](https://developer.arm.com/ip-products/subsystem/corstone/corstone-300)
-and [Arm® Corstone™-310](https://developer.arm.com/ip-products/subsystem/corstone/corstone-310) reference packages as its
-primary targets. Arm® Corstone™-300 and Corstone™-310 design implementations are publicly available on [Download FPGA Images](https://developer.arm.com/tools-and-software/development-boards/fpga-prototyping-boards/download-fpga-images) page,
-or as a [Fixed Virtual Platform of the MPS3 development board](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+* The ML use cases have common code such as initializing the Hardware Abstraction Layer (HAL)
 
-The Ethos-U NPU software stack is described [here](https://developer.arm.com/documentation/101888/0500/NPU-software-overview/NPU-software-components?lang=en).
+* The common application code can run on a native host machine (x86_64 or aarch64) or on the Arm
+  Cortex-M architecture because of the HAL
 
-All ML use cases, albeit illustrating a different application, have common code such as initializing the Hardware
-Abstraction Layer (HAL). The application common code can be run on native host machine (x86_64 or aarch64) or Arm
-Cortex-M architecture thanks to the HAL.
-For the ML application-specific part, Google® TensorFlow™ Lite for Microcontrollers inference engine is used to schedule
-the neural networks models executions. TensorFlow Lite for Microcontrollers is integrated with the
-[Ethos-U NPU driver](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-core-driver)
-and delegates execution of certain operators to the NPU or, if the neural network model operators are not supported on
-NPU, to the CPU. If the operator is supported, [CMSIS-NN](https://github.com/ARM-software/CMSIS_5) is used to optimise
-CPU workload execution with int8 data type. Else, TensorFlow™ Lite for Microcontrollers' reference kernels are used as
-a final fall-back.
-Common ML application functions will help you to focus on implementing logic of your custom ML use case: you can modify
-only the use case code and leave all other components unchanged. Supplied build system will discover new ML application
-code and automatically include it into compilation flow.
+* The Google® TensorFlow™ Lite for Microcontrollers inference engine is used to schedule
+  the execution of neural network models
+
+* The [Ethos-U NPU driver](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-core-driver) is integrated with TensorFlow Lite for Microcontrollers
+   * ML operators are delegated to the NPU, with CPU fall-back for unsupported operators
+   * [CMSIS-NN](https://github.com/ARM-software/CMSIS_5) is used to optimise CPU workload execution with the int8 data type
+   * The final fall-back for ML operators is the TensorFlow™ Lite for Microcontrollers reference kernels
+
+* The provided set of common ML use-case functions will assist in implementing your application logic
+   * When modifying use-case code, there is no requirement to modify other components of the eval kit
+* The CMake build system will discover new ML application code and automatically include it in the compilation workflow
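+
+For example, a hedged sketch of selecting which use cases get built (the `USE_CASE_BUILD` option and the
+`source/use_case/<name>` layout follow the project documentation; check [Building](./docs/documentation.md#building)
+for the exact names):
+
+```commandline
+# Assumes a new use case has been added under source/use_case/my_usecase;
+# "my_usecase" is a placeholder name for illustration.
+cmake .. -DUSE_CASE_BUILD="kws;my_usecase"
+make -j4
+```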
 
 A high level overview of the different components in the software, and the platforms supported out-of-the-box, is shown
 in the diagram below.
 
 ![APIs](docs/media/APIs_description.png)
+> **Note:** The Ethos-U NPU software stack is described [here](https://developer.arm.com/documentation/101888/0500/NPU-software-overview/NPU-software-components?lang=en).
 
 For a more detailed description of the build graph with all major components, see [Building](./docs/documentation.md#building).
 
@@ -101,121 +174,10 @@
 [Arm® Development Studio](https://developer.arm.com/Tools%20and%20Software/Arm%20Development%20Studio) and
 [Keil® µVision®](https://www2.keil.com/mdk5/uvision/)) that support the use of CMSIS Packs.
 
-### Getting started
 
-To run an ML application on the Cortex-M and Ethos-U NPU, please, follow these steps:
-
-1. Set up your environment by installing [the required prerequisites](./docs/sections/building.md#Build-prerequisites).
-   > **NOTE**: A Docker image built from the `Dockerfile` provided will have all the required packages installed.
-2. Generate an optimized neural network model for Ethos-U with a Vela compiler by following instructions [here](./docs/sections/building.md#Add-custom-model).
-3. [Configure the build system](./docs/sections/building.md#Build-process).
-4. [Compile the project](./docs/sections/building.md#Building-the-configured-project) with a `make` command.
-5. If using a FVP, [launch the desired application on the FVP](./docs/sections/deployment.md#Fixed-Virtual-Platform).
-If using the FPGA option, load the image on the FPGA and [launch the application](./docs/sections/deployment.md#MPS3-board).
-
-**To get familiar with these steps, you can follow the [quick start guide](docs/quick_start.md).**
-
-> **Note:** The default flow assumes Arm® *Ethos™-U55* NPU usage, configured to use 128 Multiply-Accumulate units
-> and is sharing SRAM with the Arm® *Cortex®-M55*.
->
-> Ml embedded evaluation kit supports:
->
-> |  *Ethos™-U* NPU  | Default MACs/cc | Other MACs/cc supported | Default Memory Mode | Other Memory Modes supported |
-> |------------------|-----------------|-------------------------|---------------------|------------------------------|
-> |   *Ethos™-U55*   |       128       |      32, 64, 256        |     Shared_Sram     |          Sram_Only           |
-> |   *Ethos™-U65*   |       256       |          512            |    Dedicated_Sram   |         Shared_Sram          |
->
-> For more information see [Building](./docs/documentation.md#building).
-
-For more details see full documentation:
-
-- [Arm® ML embedded evaluation kit](./docs/documentation.md#arm_ml-embedded-evaluation-kit)
-  - [Table of Content](./docs/documentation.md#table-of-content)
-  - [Trademarks](./docs/documentation.md#trademarks)
-  - **[Prerequisites](./docs/documentation.md#prerequisites)**
-    - [Additional reading](./docs/documentation.md#additional-reading)
-  - [Repository structure](./docs/documentation.md#repository-structure)
-  - [Models and resources](./docs/documentation.md#models-and-resources)
-  - [Building](./docs/documentation.md#building)
-  - [Deployment](./docs/documentation.md#deployment)
-  - [Running code samples applications](./docs/documentation.md#running-code-samples-applications)
-  - [Implementing custom ML application](./docs/documentation.md#implementing-custom-ml-application)
-  - [Testing and benchmarking](./docs/documentation.md#testing-and-benchmarking)
-  - **[Troubleshooting](./docs/documentation.md#troubleshooting)**
-  - [Appendix](./docs/documentation.md#appendix)
-  - **[FAQ](./docs/documentation.md#faq)**
-
-## Contribution guidelines
-
-Contributions are only accepted under the following conditions:
-
-- The contribution have certified origin and give us your permission. To manage this process we use
-  [Developer Certificate of Origin (DCO) V1.1](https://developercertificate.org/).
-  To indicate that contributors agree to the terms of the DCO, it's necessary "sign off" the
-  contribution by adding a line with name and e-mail address to every git commit message:
-
-  ```log
-  Signed-off-by: John Doe <john.doe@example.org>
-  ```
-
-  This can be done automatically by adding the `-s` option to your `git commit` command.
-  You must use your real name, no pseudonyms or anonymous contributions are accepted.
-
-- You give permission according to the [Apache License 2.0](../LICENSE_APACHE_2.0.txt).
-
-  In each source file, include the following copyright notice:
-
-  ```copyright
-  /*
-  * SPDX-FileCopyrightText: Copyright <years additions were made to project> <your name>, Arm Limited and/or its affiliates <open-source-office@arm.com>
-  * SPDX-License-Identifier: Apache-2.0
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in compliance with the License.
-  * You may obtain a copy of the License at
-  *
-  *     http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-  ```
-
-### Coding standards and guidelines
-
-This repository follows a set of guidelines, best practices, programming styles and conventions,
-see:
-
-- [Coding standards and guidelines](./docs/sections/coding_guidelines.md)
-  - [Introduction](./docs/sections/coding_guidelines.md#introduction)
-  - [Language version](./docs/sections/coding_guidelines.md#language-version)
-  - [File naming](./docs/sections/coding_guidelines.md#file-naming)
-  - [File layout](./docs/sections/coding_guidelines.md#file-layout)
-  - [Block Management](./docs/sections/coding_guidelines.md#block-management)
-  - [Naming Conventions](./docs/sections/coding_guidelines.md#naming-conventions)
-    - [C++ language naming conventions](./docs/sections/coding_guidelines.md#c_language-naming-conventions)
-    - [C language naming conventions](./docs/sections/coding_guidelines.md#c-language-naming-conventions)
-  - [Layout and formatting conventions](./docs/sections/coding_guidelines.md#layout-and-formatting-conventions)
-  - [Language usage](./docs/sections/coding_guidelines.md#language-usage)
-
-### Code Reviews
-
-Contributions must go through code review. Code reviews are performed through the
-[mlplatform.org Gerrit server](https://review.mlplatform.org). Contributors need to sign up to this
-Gerrit server with their GitHub account credentials.
-In order to be merged a patch needs to:
-
-- get a "+1 Verified" from the pre-commit job.
-- get a "+2 Code-review" from a reviewer, it means the patch has the final approval.
-
-### Testing
-
-Prior to submitting a patch for review please make sure that all build variants works and unit tests pass.
-Contributions go through testing at the continuous integration system. All builds, tests and checks must pass before a
-contribution gets merged to the main branch.
+## Contributions
+
+The ML embedded evaluation kit welcomes contributions. For more details on contributing to the evaluation kit, please see
+the [contributing guide](./docs/sections/contributing.md#contributions).
 
 ## Communication
 
@@ -223,6 +185,11 @@
 [https://discuss.mlplatform.org/c/ml-embedded-evaluation-kit](https://discuss.mlplatform.org/c/ml-embedded-evaluation-kit/)
 forum.
 
+## Inclusive language commitment
+
+This product conforms to Arm's inclusive language policy and, to the best of our knowledge,
+does not contain any non-inclusive language. If you find something that concerns you, email terms@arm.com.
+
 ## Licenses
 
 The ML Embedded applications samples are provided under the Apache 2.0 license, see [License Apache 2.0](../LICENSE_APACHE_2.0.txt).
diff --git a/docs/documentation.md b/docs/documentation.md
index 570541a..99027f1 100644
--- a/docs/documentation.md
+++ b/docs/documentation.md
@@ -395,6 +395,11 @@
 - [Appendix](./sections/appendix.md#appendix)
   - [Cortex-M55 Memory map overview](./sections/appendix.md#arm_cortex_m55-memory-map-overview-for-corstone_300-reference-design)
 
+## Contributing
+
+Guidelines for contributing changes can be found in the [contributing guide](./sections/contributing.md#contributions).
+
+
 ## FAQ
 
 Please refer to: [FAQ](./sections/faq.md#faq)
diff --git a/docs/sections/contributing.md b/docs/sections/contributing.md
new file mode 100644
index 0000000..5a23c54
--- /dev/null
+++ b/docs/sections/contributing.md
@@ -0,0 +1,71 @@
+## Contributions
+
+Contributions are only accepted under the following conditions:
+
+- The contribution has a certified origin and you give us your permission. To manage this process we use the
+  [Developer Certificate of Origin (DCO) V1.1](https://developercertificate.org/).
+  To indicate that contributors agree to the terms of the DCO, you must "sign off" the
+  contribution by adding a line with your name and e-mail address to every git commit message:
+
+  ```log
+  Signed-off-by: John Doe <john.doe@example.org>
+  ```
+
+  This can be done automatically by adding the `-s` option to your `git commit` command.
+  You must use your real name; no pseudonyms or anonymous contributions are accepted.
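+
+  For example (the commit message below is a placeholder):
+
+  ```commandline
+  git commit -s -m "<your commit message>"
+  ```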
+
+- You give permission according to the [Apache License 2.0](../LICENSE_APACHE_2.0.txt).
+
+  In each source file, include the following copyright notice:
+
+  ```copyright
+  /*
+  * SPDX-FileCopyrightText: Copyright <years additions were made to project> <your name>, Arm Limited and/or its affiliates <open-source-office@arm.com>
+  * SPDX-License-Identifier: Apache-2.0
+  *
+  * Licensed under the Apache License, Version 2.0 (the "License");
+  * you may not use this file except in compliance with the License.
+  * You may obtain a copy of the License at
+  *
+  *     http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+  ```
+
+### Coding standards and guidelines
+
+This repository follows a set of guidelines, best practices, programming styles and conventions. See:
+
+- [Coding standards and guidelines](./coding_guidelines.md)
+    - [Introduction](./coding_guidelines.md#introduction)
+    - [Language version](./coding_guidelines.md#language-version)
+    - [File naming](./coding_guidelines.md#file-naming)
+    - [File layout](./coding_guidelines.md#file-layout)
+    - [Block Management](./coding_guidelines.md#block-management)
+    - [Naming Conventions](./coding_guidelines.md#naming-conventions)
+        - [C++ language naming conventions](./coding_guidelines.md#c_language-naming-conventions)
+        - [C language naming conventions](./coding_guidelines.md#c-language-naming-conventions)
+    - [Layout and formatting conventions](./coding_guidelines.md#layout-and-formatting-conventions)
+    - [Language usage](./coding_guidelines.md#language-usage)
+
+### Code Reviews
+
+Contributions must go through code review. Code reviews are performed through the
+[mlplatform.org Gerrit server](https://review.mlplatform.org). Contributors need to sign up to this
+Gerrit server with their GitHub account credentials.
+In order to be merged, a patch needs to:
+
+- get a "+1 Verified" from the pre-commit job.
+- get a "+2 Code-review" from a reviewer, which means the patch has the final approval.
+
+### Testing
+
+Prior to submitting a patch for review, please make sure that all build variants work and all unit tests pass.
+Contributions go through testing in the continuous integration system. All builds, tests and checks must pass before a
+contribution is merged to the main branch.
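+
+As a minimal sketch of a host-side check before submission (assuming the `native` target platform described in the
+build documentation; the exact option names and test-runner commands may differ):
+
+```commandline
+# Configure and build for the native (host) target -- the option name is an assumption,
+# see the Building documentation for the exact flags.
+mkdir -p cmake-build-native && cd cmake-build-native
+cmake .. -DTARGET_PLATFORM=native
+make -j4
+# Run the unit tests (via CTest if registered, otherwise run the generated test binaries directly).
+ctest --output-on-failure
+```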