MLECO-1943: Documentation review

Major update for the documentation. Also, a minor logging
change in helper scripts.

Change-Id: Ia79f78a45c9fa2d139418fbc0ca9e52245704ba3
diff --git a/build_default.py b/build_default.py
index 3bb91b1..e4aa59d 100755
--- a/build_default.py
+++ b/build_default.py
@@ -38,8 +38,6 @@
     """
 
     current_file_dir = os.path.dirname(os.path.abspath(__file__))
-    logging.basicConfig(filename='log_build_default.log', level=logging.DEBUG)
-    logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
 
     # 1. Make sure the toolchain is supported, and set the right one here
     supported_toolchain_ids = ["gnu", "arm"]
@@ -105,4 +103,8 @@
                         help="Do not run Vela optimizer on downloaded models.",
                         action="store_true")
     args = parser.parse_args()
+
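+    # Configure logging only after arguments are parsed: write to a log file and mirror output to stdout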
+    logging.basicConfig(filename='log_build_default.log', level=logging.DEBUG)
+    logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
+
     run(args.toolchain.lower(), not args.skip_download, not args.skip_vela)
diff --git a/docs/documentation.md b/docs/documentation.md
index d08e313..a55e577 100644
--- a/docs/documentation.md
+++ b/docs/documentation.md
@@ -17,40 +17,45 @@
 ## Trademarks
 
 - Arm® and Cortex® are registered trademarks of Arm® Limited (or its subsidiaries) in the US and/or elsewhere.
-- Arm® and Ethos™ are registered trademarks or trademarks of Arm® Limited (or its subsidiaries) in the US and/or elsewhere.
-- Arm® and Corstone™ are registered trademarks or trademarks of Arm® Limited (or its subsidiaries) in the US and/or elsewhere.
-- TensorFlow™, the TensorFlow logo and any related marks are trademarks of Google Inc.
+- Arm® and Ethos™ are registered trademarks or trademarks of Arm® Limited (or its subsidiaries) in the US and/or
+  elsewhere.
+- Arm® and Corstone™ are registered trademarks or trademarks of Arm® Limited (or its subsidiaries) in the US and/or
+  elsewhere.
+- TensorFlow™, the TensorFlow logo, and any related marks are trademarks of Google Inc.
 
 ## Prerequisites
 
 Before starting the setup process, please make sure that you have:
 
-- Linux x86_64 based machine or Windows Subsystem for Linux is preferable.
-  Unfortunately, Windows is not supported as a build environment yet.
+- A Linux x86_64 based machine or the Windows Subsystem for Linux is preferable.
+  > **Note:** Currently, Windows is not supported as a build environment.
 
 - At least one of the following toolchains:
-  - GNU Arm Embedded Toolchain (version 10.2.1 or above) - [GNU Arm Embedded Toolchain Downloads](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm/downloads)
-  - Arm Compiler (version 6.14 or above) with a valid license - [Arm Compiler Download Page](https://developer.arm.com/tools-and-software/embedded/arm-compiler/downloads/)
+  - GNU Arm Embedded toolchain (version 10.2.1 or above) -
+  [GNU Arm Embedded toolchain downloads](https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-rm/downloads)
+  - Arm Compiler (version 6.14 or above) with a valid license -
+  [Arm Compiler download page](https://developer.arm.com/tools-and-software/embedded/arm-compiler/downloads/)
 
 - An Arm® MPS3 FPGA prototyping board and components for FPGA evaluation or a `Fixed Virtual Platform` binary:
-  - An MPS3 board loaded with  Arm® Corstone™-300 reference package (`AN547`) from:
-    <https://developer.arm.com/tools-and-software/development-boards/fpga-prototyping-boards/download-fpga-images>.
-    You would also need to have a USB connection between your machine and the MPS3 board - for UART menu and for
-    deploying the application.
-  - `Arm Corstone-300` based FVP for MPS3 is available from: <https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps>.
+  - An MPS3 board loaded with Arm® Corstone™-300 reference package (`AN547`) from:
+    <https://developer.arm.com/tools-and-software/development-boards/fpga-prototyping-boards/download-fpga-images>. You
+    must have a USB connection between your machine and the MPS3 board - for UART menu and for deploying the
+    application.
+  - `Arm Corstone-300` based FVP for MPS3 is available from:
+    <https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps>.
 
 ### Additional reading
 
-This document contains information that is specific to Arm® Ethos™-U55 products.
-See the following documents for other relevant information:
+This document contains information that is specific to Arm® Ethos™-U55 products. Please refer to the following documents
+for additional information:
 
 - ML platform overview: <https://mlplatform.org/>
 
 - Arm® ML processors technical overview: <https://developer.arm.com/ip-products/processors/machine-learning>
 
-- Arm® Cortex-M55® processor: <https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55>
+- Arm® `Cortex-M55`® processor: <https://www.arm.com/products/silicon-ip-cpu/cortex-m/cortex-m55>
 
-- ML processor, also referred to as a Neural Processing Unit (NPU) - Arm® Ethos™-U55:
+- ML processor, also referred to as a Neural Processing Unit (NPU) - Arm® `Ethos™-U55`:
     <https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55>
 
 - Arm® MPS3 FPGA Prototyping Board:
@@ -58,7 +63,7 @@
 
 - Arm® ML-Zoo: <https://github.com/ARM-software/ML-zoo/>
 
-See <http://developer.arm.com> for access to Arm documentation.
+To access Arm documentation online, please visit: <http://developer.arm.com>
 
 ## Repository structure
 
@@ -80,56 +85,54 @@
 │   │ └── tensorflow-lite-micro
 │   └── use_case
 │     └── <usecase_name>
-│                ├── include
-│                ├── src
-│                └── usecase.cmake
+│          ├── include
+│          ├── src
+│          └── usecase.cmake
 ├── tests
 │   └── ...
 └── CMakeLists.txt
 ```
 
-Where:
+What these folders contain:
 
-- `dependencies`: contains all the third party dependencies for this project.
+- `dependencies`: All the third-party dependencies for this project.
 
-- `docs`: contains the documentation for this ML applications.
+- `docs`: The documentation for this ML application.
 
-- `model_conditioning_examples`: contains short example scripts that demonstrate some methods available in TensorFlow 
+- `model_conditioning_examples`: Short example scripts that demonstrate some methods available in TensorFlow
     to condition your model in preparation for deployment on Arm Ethos NPU.
 
-- `resources`: contains ML use cases applications resources such as input data, label files, etc.
+- `resources`: Resources for the ML use-case applications, such as input data, label files, etc.
 
-- `resources_downloaded`: created by `set_up_default_resources.py`, contains downloaded resources for ML use cases 
+- `resources_downloaded`: Created by `set_up_default_resources.py`, contains downloaded resources for the ML use-case
     applications such as models, test data, etc.
 
-- `scripts`: contains build related and source generation scripts.
+- `scripts`: Build and source generation scripts.
 
-- `source`: contains C/C++ sources for the platform and ML applications.
-    Common code related to the Ethos-U55 NPU software
-    framework resides in *application* sub-folder with the following
-    structure:
+- `source`: C/C++ sources for the platform and ML applications.
+    > **Note:** Common code related to the `Ethos-U55` NPU software framework resides in the *application* subfolder.
 
-  - `application`: contains all the sources that form the *core* of the application.
-    The `use case` part of the sources depend on sources here.
+  The contents of the *source* folder are as follows:
 
-    - `hal`: contains hardware abstraction layer sources providing a
-        platform agnostic API to access hardware platform specific functions.
+  - `application`: All sources that form the *core* of the application. The `use-case` part of the sources depends on
+    the sources here. It contains:
 
-    - `main`: contains the main function and calls to platform initialization
-          logic to set things up before launching the main loop.
-          It also contains sources common to all use case implementations.
+    - `hal`: Contains Hardware Abstraction Layer (HAL) sources, providing a platform-agnostic API to access hardware
+      platform-specific functions.
 
-    - `tensorflow-lite-micro`: contains abstraction around TensorFlow Lite Micro API
-          implementing common functions to initialize a neural network model, run an inference, and
-          access inference results.
+    - `main`: Contains the main function and calls to platform initialization logic to set things up before launching
+      the main loop. Also contains sources common to all use-case implementations.
 
-  - `use_case`: contains the ML use-case specific logic. Having this as a separate sub-folder isolates ML specific
-    application logic with the assumption that the `application` will do all the required set up for logic here to run.
-    It also makes it easier to add a new use case block.
+    - `tensorflow-lite-micro`: Contains abstraction around TensorFlow Lite Micro API. This abstraction implements common
+      functions to initialize a neural network model, run an inference, and access inference results.
 
-  - `tests`: contains the x86 tests for the use case applications.
+  - `use_case`: Contains the ML use-case specific logic. Stored as a separate subfolder, it helps isolate the
+    ML-specific application logic, with the assumption that the `application` performs the required setup for this
+    logic to run. It also makes it easier to add a new use-case block.
 
-Hardware abstraction layer has the following structure:
+  - `tests`: Contains the x86 tests for the use-case applications.
+
+The HAL has the following structure:
 
 ```tree
 hal
@@ -157,71 +160,75 @@
     └── native
 ```
 
-- `include` and `hal.c`: contains the hardware abstraction layer (HAL) top level platform API and data acquisition, data
-presentation and timer interfaces.
-    > Note: the files here and lower in the hierarchy have been written in
-    C and this layer is a clean C/C++ boundary in the sources.
+What these folders contain:
+
+- The `include` folder and `hal.c` contain the HAL top-level platform API and data acquisition, data presentation, and
+  timer interfaces.
+    > **Note:** The files here and lower in the hierarchy have been written in C, and this layer is a clean C/C++
+    > boundary in the sources.
 
 - `platforms/bare-metal/data_acquisition`\
-`platforms/bare-metal/data_presentation`\
-`platforms/bare-metal/timer`\
-`platforms/bare-metal/utils`: contains bare metal HAL support layer and platform initialisation helpers. Function calls
-  are routed to platform specific logic at this level. For example, for data presentation, an `lcd` module has been used.
-  This wraps the LCD driver calls for the actual hardware (for example MPS3).
+  `platforms/bare-metal/data_presentation`\
+  `platforms/bare-metal/timer`\
+  `platforms/bare-metal/utils`:
 
-- `platforms/bare-metal/bsp/bsp-packs`: contains the core low-level drivers (written in C) for the platform reside.
-  For supplied examples this happens to be an MPS3 board, but support could be added here for other platforms too.
-  The functions defined in this space are wired to the higher level functions under HAL (as those in `platforms/bare-metal/` level).
+  These folders contain the bare-metal HAL support layer and platform initialization helpers. Function calls are routed
+  to platform-specific logic at this level. For example, for data presentation, an `lcd` module has been used. This
+  `lcd` module wraps the LCD driver calls for the actual hardware (for example, MPS3).
+
+- `platforms/bare-metal/bsp/bsp-packs`: Where the core low-level drivers (written in C) for the platform reside. For
+  the supplied examples, this happens to be an MPS3 board. However, support can be added here for other platforms. The
+  functions defined in this space are wired to the higher-level functions under HAL, such as those at the
+  `platforms/bare-metal/` level.
 
 - `platforms/bare-metal/bsp/bsp-packs/mps3/include`\
-`platforms/bare-metal/bsp/bsp-packs/mps3`: contains the peripheral (LCD, UART and timer) drivers specific to MPS3 board.
+  `platforms/bare-metal/bsp/bsp-packs/mps3`: Contains the peripheral (LCD, UART, and timer) drivers specific to the
+  MPS3 board.
 
 - `platforms/bare-metal/bsp/bsp-core`\
-`platforms/bare-metal/bsp/include`: contains the BSP core sources common to all BSPs. These include a UART header
-  (only the implementation of this is platform specific, but the API is common) and "re-targeting" of the standard output
-  and error streams to the UART block.
+  `platforms/bare-metal/bsp/include`: Contains the BSP core sources common to all BSPs, including a UART header.
+  However, the implementation of this is platform-specific, while the API is common. These sources also "re-target"
+  the standard output and error streams to the UART block.
 
-- `platforms/bare-metal/bsp/cmsis-device`: contains the CMSIS template implementation for the CPU and also device
-  initialisation routines. It is also where the system interrupts are set up and handlers are overridden.
-  The main entry point of a bare metal application will most likely reside in this space. This entry point is
-  responsible for setting up before calling the user defined "main" function in the higher level `application` logic.
+- `platforms/bare-metal/bsp/cmsis-device`: Contains the CMSIS template implementation for the CPU and also device
+  initialization routines. It is also where the system interrupts are set up and the handlers are overridden. The main
+  entry point of a bare-metal application most likely resides in this space. This entry point is responsible for the
+  set-up before calling the user-defined "main" function in the higher-level `application` logic.
 
-- `platforms/bare-metal/bsp/mem_layout`: contains the platform specific linker scripts.
+- `platforms/bare-metal/bsp/mem_layout`: Contains the platform-specific linker scripts.
 
 ## Models and resources
 
-The models used in the use cases implemented in this project can be downloaded
-from [Arm ML-Zoo](https://github.com/ARM-software/ML-zoo/).
+The models used in the use-cases implemented in this project can be downloaded from: [Arm ML-Zoo](https://github.com/ARM-software/ML-zoo/).
 
 - [Mobilenet V2](https://github.com/ARM-software/ML-zoo/blob/master/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8).
 - [DS-CNN](https://github.com/ARM-software/ML-zoo/blob/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8).
 - [Wav2Letter](https://github.com/ARM-software/ML-zoo/tree/1a92aa08c0de49a7304e0a7f3f59df6f4fd33ac8/models/speech_recognition/wav2letter/tflite_pruned_int8).
 - [Anomaly Detection](https://github.com/ARM-software/ML-zoo/raw/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8/ad_medium_int8.tflite).
 
-When using Ethos-U55 NPU backend, the NN model is assumed to be optimized by Vela compiler.
-However, even if not, it will fall back on the CPU and execute, if supported by TensorFlow Lite Micro.
+When using the *Ethos-U55* NPU backend, the NN model is assumed to have been optimized by the Vela compiler. However,
+even if it has not been, it falls back on the CPU and executes there, provided it is supported by TensorFlow Lite Micro.
 
 ![Vela compiler](./media/vela_flow.jpg)
 
-The Vela compiler is a tool that can optimize a neural network model
-into a version that can run on an embedded system containing Ethos-U55 NPU.
+The Vela compiler is a tool that can optimize a neural network model into a version that can run on an embedded system
+containing the *Ethos-U55* NPU.
 
-The optimized model will contain custom operators for sub-graphs of the
-model that can be accelerated by Ethos-U55 NPU, the remaining layers that
-cannot be accelerated are left unchanged and will run on the CPU using
-optimized (CMSIS-NN) or reference kernels provided by the inference
-engine.
+The optimized model contains custom operators for sub-graphs of the model that the *Ethos-U55* NPU can accelerate. The
+remaining layers that cannot be accelerated are left unchanged and are run on the CPU using optimized (`CMSIS-NN`) or
+reference kernels provided by the inference engine.
 
-For detailed information see [Optimize model with Vela compiler](./sections/building.md#Optimize-custom-model-with-Vela-compiler).
+For detailed information, see: [Optimize model with Vela compiler](./sections/building.md#Optimize-custom-model-with-Vela-compiler).
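+
+As a quick illustration, a TensorFlow Lite model can be compiled with Vela from the command line. This sketch assumes
+the `ethos-u-vela` Python package is installed and uses a placeholder model file name, targeting the default 128 MAC
+configuration of the NPU:
+
+```commandline
+vela network.tflite --accelerator-config ethos-u55-128
+```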
 
 ## Building
 
-This section describes how to build the code sample applications from sources - illustrating the build
+This section describes how to build the code sample applications from sources, illustrating the build
 options and the process.
 
-The project can be built for MPS3 FPGA and FVP emulating MPS3. Default values for configuration parameters
-will build executable models with Ethos-U55 NPU support.
-See:
+The project can be built for MPS3 FPGA and FVP emulating MPS3. Using default values for configuration parameters builds
+executable models that support the *Ethos-U55* NPU.
+
+For further information, please see:
 
 - [Building the ML embedded code sample applications from sources](./sections/building.md#building-the-ml-embedded-code-sample-applications-from-sources)
   - [Build prerequisites](./sections/building.md#build-prerequisites)
@@ -232,10 +239,10 @@
       - [Fetching resource files](./sections/building.md#fetching-resource-files)
     - [Create a build directory](./sections/building.md#create-a-build-directory)
     - [Configuring the build for MPS3 SSE-300](./sections/building.md#configuring-the-build-for-mps3-sse-300)
-      - [Using GNU Arm Embedded Toolchain](./sections/building.md#using-gnu-arm-embedded-toolchain)
+      - [Using GNU Arm Embedded toolchain](./sections/building.md#using-gnu-arm-embedded-toolchain)
       - [Using Arm Compiler](./sections/building.md#using-arm-compiler)
       - [Generating project for Arm Development Studio](./sections/building.md#generating-project-for-arm-development-studio)
-      - [Working with model debugger from Arm FastModel Tools](./sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+      - [Working with model debugger from Arm Fast Model Tools](./sections/building.md#working-with-model-debugger-from-arm-fast-model-tools)
       - [Configuring with custom TPIP dependencies](./sections/building.md#configuring-with-custom-tpip-dependencies)
     - [Configuring native unit-test build](./sections/building.md#configuring-native-unit-test-build)
     - [Configuring the build for simple_platform](./sections/building.md#configuring-the-build-for-simple_platform)
@@ -248,8 +255,9 @@
 
 ## Deployment
 
-This section describes how to deploy the code sample applications on the Fixed Virtual Platform or the MPS3 board.
-See:
+This section describes how to deploy the code sample applications on the Fixed Virtual Platform (FVP) or the MPS3 board.
+
+For further information, please see:
 
 - [Deployment](./sections/deployment.md)
   - [Fixed Virtual Platform](./sections/deployment.md#fixed-virtual-platform)
@@ -260,15 +268,14 @@
 
 ## Implementing custom ML application
 
-This section describes how to implement a custom Machine Learning application running
-on a platform supported by the repository (Fixed Virtual Platform or an MPS3 board).
+This section describes how to implement a custom Machine Learning application running on a platform supported by the
+repository, either an FVP or an MPS3 board.
 
-Cortex-M55 CPU and Ethos-U55 NPU Code Samples software project offers a simple way to incorporate additional
-use-case code into the existing infrastructure and provides a build
-system that automatically picks up added functionality and produces
-corresponding executable for each use-case.
+The *Cortex-M55* CPU and *Ethos-U55* NPU Code Samples software project offers a way to incorporate extra use-case code
+into the existing infrastructure. It also provides a build system that automatically picks up added functionality and
+produces a corresponding executable for each use-case.
 
-See:
+For further information, please see:
 
 - [Implementing custom ML application](./sections/customizing.md)
   - [Software project description](./sections/customizing.md#software-project-description)
@@ -288,15 +295,15 @@
 
 ## Testing and benchmarking
 
-See [Testing and benchmarking](./sections/testing_benchmarking.md).
+Please refer to: [Testing and benchmarking](./sections/testing_benchmarking.md).
 
 ## Memory Considerations
 
-See [Memory considerations](./sections/memory_considerations.md)
+Please refer to: [Memory considerations](./sections/memory_considerations.md)
 
 ## Troubleshooting
 
-See:
+For further information, please see:
 
 - [Troubleshooting](./sections/troubleshooting.md)
   - [Inference results are incorrect for my custom files](./sections/troubleshooting.md#inference-results-are-incorrect-for-my-custom-files)
@@ -304,7 +311,7 @@
 
 ## Appendix
 
-See:
+Please refer to:
 
 - [Appendix](./sections/appendix.md)
   - [Cortex-M55 Memory map overview](./sections/appendix.md#arm_cortex_m55-memory-map-overview-for-corstone_300-reference-design)
diff --git a/docs/quick_start.md b/docs/quick_start.md
index 80cbc30..d4919a6 100644
--- a/docs/quick_start.md
+++ b/docs/quick_start.md
@@ -1,59 +1,62 @@
 # Quick start example ML application
 
-This is a quick start guide that will show you how to run the keyword spotting example application.
-The aim of this quick start guide is to enable you to run an application quickly on the Fixed Virtual Platform.
-The assumption we are making is that your Arm® Ethos™-U55 NPU is configured to use 128 Multiply-Accumulate units,
-is using a shared SRAM with the Arm® Cortex®-M55.
+This is a quick start guide that shows you how to run the keyword spotting example application.
 
-1. Verify you have installed [the required prerequisites](sections/building.md#Build-prerequisites).
+The aim of this quick start guide is to enable you to run an application quickly on the Fixed Virtual Platform (FVP).
+This documentation assumes that your Arm® *Ethos™-U55* NPU is configured to use 128 Multiply-Accumulate units, and is
+sharing SRAM with the Arm® *Cortex®-M55*.
 
-2. Clone the Ethos-U55 evaluation kit repository.
+To get started quickly, please follow these steps:
+
+1. First, verify that you have installed [the required prerequisites](sections/building.md#Build-prerequisites).
+
+2. Clone the *Ethos-U55* evaluation kit repository:
 
     ```commandline
     git clone "https://review.mlplatform.org/ml/ethos-u/ml-embedded-evaluation-kit"
     cd ml-embedded-evaluation-kit
     ```
 
-3. Pull all the external dependencies with the commands below:
+3. Pull all the external dependencies with the following command:
 
     ```commandline
     git submodule update --init
     ```
 
-4. Next, you can use the `build_default` python script to get the default neural network models, compile them with
-    Vela and build the project.
+4. Next, you can use the `build_default` Python script to get the default neural network models, compile them with Vela,
+    and then build the project.
 
-    > **Note:** This helper script needs python version 3.6 or higher.
+    [Vela](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-vela) is an open-source Python tool. Vela
+    converts a TensorFlow Lite for Microcontrollers neural network model into an optimized model that can run on an
+    embedded system that contains an *Ethos-U55* NPU.
 
-    [Vela](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-vela) is an open-source python tool converting
-    TensorFlow Lite for Microcontrollers neural network model into an optimized model that can run on an embedded system
-    containing an Ethos-U55 NPU. It is worth noting that in order to take full advantage of the capabilities of the NPU, the
-    neural network operators should be [supported by Vela](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-vela/+/HEAD/SUPPORTED_OPS.md).
+    It is worth noting that to take full advantage of the capabilities of the NPU, the neural network operators must be
+    [supported by Vela](https://review.mlplatform.org/plugins/gitiles/ml/ethos-u/ethos-u-vela/+/HEAD/SUPPORTED_OPS.md).
 
     ```commandline
     ./build_default.py
     ```
 
-    > **Note** The above command assumes you are using the GNU Arm Embedded Toolchain.
-    > If you are using the Arm Compiler instead, you can override the default selection
-    > by executing:
+    > **Note** The preceding command assumes you are using the GNU Arm Embedded toolchain. If you are using the Arm
+    > Compiler instead, you can override the default selection by executing:
 
     ```commandline
     ./build_default.py --toolchain arm
     ```
 
-5. Launch the project as explained [here](sections/deployment.md#Deployment). For the purpose of this quick start guide,
-    we'll use the keyword spotting application and the Fixed Virtual Platform.
-    Point the generated `bin/ethos-u-kws.axf` file in stage 4 to the FVP that you have downloaded when
-    installing the prerequisites.
+5. Launch the project as explained in the following section: [Deployment](sections/deployment.md#Deployment). For this
+   quick start guide, we use the keyword spotting application and the FVP.
+
+    Point the generated `bin/ethos-u-kws.axf` file, from step four, to the FVP you downloaded when installing the
+    prerequisites. Now use the following command:
 
     ```commandline
     <path_to_FVP>/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-kws.axf
     ```
 
-6. A telnet window is launched through which you can interact with the application and obtain performance figures.
+6. A telnet window is launched through which you can interact with the application and obtain performance figures.
 
-> **Note:**: Execution of the build_default.py script is equivalent to running the following commands:
+> **Note:** Executing the `build_default.py` script is equivalent to running the following commands:
 
 ```commandline
 mkdir resources_downloaded && cd resources_downloaded
@@ -166,5 +169,6 @@
     -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-gcc.cmake
 ```
 
-> **Note:** If you want to make changes to the application (for example modifying the number of MAC units of the Ethos-U or running a custom neural network),
-> you should follow the approach defined in [documentation.md](../docs/documentation.md) instead of using the `build_default` python script.
+> **Note:** If you want to change the application, for example, to modify the number of MAC units of the Ethos-U or to
+> run a custom neural network, then, instead of using the `build_default` Python script, follow the approach defined in
+> [documentation.md](./documentation.md).
diff --git a/docs/sections/appendix.md b/docs/sections/appendix.md
index fe8e85d..555560f 100644
--- a/docs/sections/appendix.md
+++ b/docs/sections/appendix.md
@@ -2,7 +2,7 @@
 
 ## Arm® Cortex®-M55 Memory map overview for Corstone™-300 reference design
 
-The table below is the memory mapping information specific to the Arm® Cortex®-M55.
+The following table shows the memory mapping information specific to the Arm® Cortex®-M55.
 
 | Name  | Base address | Limit address |  Size     | IDAU |  Remarks                                                  |
 |-------|--------------|---------------|-----------|------|-----------------------------------------------------------|
@@ -17,4 +17,4 @@
 | SRAM  | 0x3100_0000  |  0x313F_FFFF  |   4 MiB   |  S   |   2 banks of 2 MiB each as SSE-300 internal SRAM region   |
 | DDR   | 0x7000_0000  |  0x7FFF_FFFF  |  256 MiB  |  S   |   DDR memory region                                       |
 
-Default memory map can be found here: <https://developer.arm.com/documentation/101051/0002/Memory-model/Memory-map>.
+The default memory map can be found here: <https://developer.arm.com/documentation/101051/0002/Memory-model/Memory-map>.
diff --git a/docs/sections/building.md b/docs/sections/building.md
index 9688586..dccc712 100644
--- a/docs/sections/building.md
+++ b/docs/sections/building.md
@@ -7,13 +7,12 @@
     - [Preparing build environment](#preparing-build-environment)
       - [Fetching submodules](#fetching-submodules)
       - [Fetching resource files](#fetching-resource-files)
-    - [Building for default configuration](#building-for-default-configuration)
     - [Create a build directory](#create-a-build-directory)
     - [Configuring the build for MPS3 SSE-300](#configuring-the-build-for-mps3-sse-300)
-      - [Using GNU Arm Embedded Toolchain](#using-gnu-arm-embedded-toolchain)
+      - [Using GNU Arm Embedded toolchain](#using-gnu-arm-embedded-toolchain)
       - [Using Arm Compiler](#using-arm-compiler)
       - [Generating project for Arm Development Studio](#generating-project-for-arm-development-studio)
-      - [Working with model debugger from Arm FastModel Tools](#working-with-model-debugger-from-arm-fastmodel-tools)
+      - [Working with model debugger from Arm Fast Model Tools](#working-with-model-debugger-from-arm-fast-model-tools)
       - [Configuring with custom TPIP dependencies](#configuring-with-custom-tpip-dependencies)
     - [Configuring native unit-test build](#configuring-native-unit-test-build)
     - [Configuring the build for simple_platform](#configuring-the-build-for-simple_platform)
@@ -24,16 +23,14 @@
   - [Optimize custom model with Vela compiler](#optimize-custom-model-with-vela-compiler)
   - [Automatic file generation](#automatic-file-generation)
 
-This section assumes the use of an **x86 Linux** build machine.
+This section assumes that you are using an **x86 Linux** build machine.
 
 ## Build prerequisites
 
-Before proceeding, please, make sure that the following prerequisites
-are fulfilled:
+Before proceeding, it is *essential* to ensure that the following prerequisites have been fulfilled:
 
-- GNU Arm embedded toolchain 10.2.1 (or higher) or the Arm Compiler version 6.14 (or higher)
-  is installed and available on the path.
-  Test the compiler by running:
+- GNU Arm embedded toolchain 10.2.1 (or higher) or the Arm Compiler version 6.14, or higher, is installed and available
+  on the path. Test the compiler by running:
 
     ```commandline
     armclang -v
@@ -44,7 +41,7 @@
     Component: ARM Compiler 6.14
     ```
 
-  Alternatively,
+  Alternatively, use:
 
     ```commandline
     arm-none-eabi-gcc --version
@@ -57,16 +54,13 @@
     warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
     ```
 
-> **Note:** Add compiler to the path, if needed:
+> **Note:** If required, add the compiler to the path:
 >
-> `export PATH=/path/to/armclang/bin:$PATH`
-> OR
-> `export PATH=/path/to/gcc-arm-none-eabi-toolchain/bin:$PATH`
+> `export PATH=/path/to/armclang/bin:$PATH` OR `export PATH=/path/to/gcc-arm-none-eabi-toolchain/bin:$PATH`
 
-- Compiler license, if using the proprietary Arm Compiler, is configured correctly.
+- If you are using the proprietary Arm Compiler, ensure that the compiler license has been correctly configured.
 
-- CMake version 3.15 or above is installed and available on the path.
-    Test CMake by running:
+- CMake version 3.15 or above is installed and available on the path. Test CMake by running:
 
     ```commandline
     cmake --version
@@ -76,11 +70,11 @@
     cmake version 3.16.2
     ```
 
-> **Note:** Add cmake to the path, if needed:
+> **Note:** If required, add cmake to the path:
 >
 > `export PATH=/path/to/cmake/bin:$PATH`
 
-- Python 3.6 or above is installed. Test python version by running:
+- Python 3.6 or above is installed. Check your currently installed version of Python by running:
 
     ```commandline
     python3 --version
@@ -90,9 +84,8 @@
     Python 3.6.8
     ```
 
-- Build system will create python virtual environment during the build
-    process. Please make sure that python virtual environment module is
-    installed:
+- The build system creates a Python virtual environment during the build process. Please make sure that the Python
+  virtual environment module is installed by running:
 
     ```commandline
     python3 -m venv
@@ -112,139 +105,124 @@
 
 > **Note:** Add it to the path environment variable, if needed.
 
-- Access to the Internet to download the third party dependencies, specifically: TensorFlow Lite Micro, Arm® Ethos™-U55 NPU
-driver and CMSIS. Instructions for downloading these are listed under [preparing build environment](#preparing-build-environment).
+- Access to the internet to download the third-party dependencies, specifically: TensorFlow Lite Micro, Arm®
+  *Ethos™-U55* NPU driver, and CMSIS. Instructions for downloading these are listed under:
+  [preparing build environment](#preparing-build-environment).
 
 ## Build options
 
-The project build system allows user to specify custom neural network
-models (in `.tflite` format) for each use case along with the network
-inputs. It also builds TensorFlow Lite for Microcontrollers library,
-Arm® Ethos™-U55 driver library, and CMSIS-DSP library from sources.
+The project build system allows you to specify custom neural network models (in the `.tflite` format) for each use-case
+along with the network inputs.
 
-The build script is parameterized to support different options. Default
-values for build parameters will build the applications for all use cases
-for Arm® Corstone™-300 design that can execute on an MPS3 FPGA or the FVP.
+It also builds the TensorFlow Lite for Microcontrollers library, the Arm® *Ethos™-U55* driver library, and the
+CMSIS-DSP library from sources.
+
+The build script is parameterized to support different options. Default values for the build parameters build the
+applications for all use-cases for the Arm® *Corstone™-300* design, which can execute on an MPS3 FPGA or the Fixed
+Virtual Platform (FVP).
 
 The build parameters are:
 
-- `TARGET_PLATFORM`: Target platform to execute application:
+- `TARGET_PLATFORM`: The target platform to execute the application on:
   - `mps3` (default)
   - `native`
   - `simple_platform`
 
-- `TARGET_SUBSYSTEM`: Platform target subsystem; this specifies the
-    design implementation for the deployment target. For both, the MPS3
-    FVP and the MPS3 FPGA, this should be left to the default value of
-    SSE-300:
+- `TARGET_SUBSYSTEM`: The target platform subsystem. Specifies the design implementation for the deployment target. For
+  both the MPS3 FVP and the MPS3 FPGA, this must be left at the default value of SSE-300:
   - `sse-300` (default - [Arm® Corstone™-300](https://developer.arm.com/ip-products/subsystem/corstone/corstone-300))
 
-- `CMAKE_TOOLCHAIN_FILE`: This built-in CMake parameter can be used to override the
-    default toolchain file used for the build. All the valid toolchain files are in the
-    scripts directory. For example, see [bare-metal-gcc.cmake](../../scripts/cmake/toolchains/bare-metal-gcc.cmake).
+- `CMAKE_TOOLCHAIN_FILE`: This built-in CMake parameter can be used to override the default toolchain file used for the
+  build. All the valid toolchain files are located in the scripts directory. For example, see:
+  [bare-metal-gcc.cmake](../../scripts/cmake/toolchains/bare-metal-gcc.cmake).
 
-- `TENSORFLOW_SRC_PATH`: Path to the root of the TensorFlow directory.
-    The default value points to the TensorFlow submodule in the
-    [ethos-u](https://git.mlplatform.org/ml/ethos-u/ethos-u.git/about/) `dependencies` folder.
+- `TENSORFLOW_SRC_PATH`: The path to the root of the TensorFlow directory. The default value points to the TensorFlow
+  submodule in the [ethos-u](https://git.mlplatform.org/ml/ethos-u/ethos-u.git/about/) `dependencies` folder.
 
-- `ETHOS_U55_DRIVER_SRC_PATH`: Path to the Ethos-U55 NPU core driver sources.
-    The default value points to the core_driver submodule in the
-    [ethos-u](https://git.mlplatform.org/ml/ethos-u/ethos-u.git/about/) `dependencies` folder.
+- `ETHOS_U55_DRIVER_SRC_PATH`: The path to the *Ethos-U55* NPU core driver sources. The default value points to the
+  `core_driver` submodule in the [ethos-u](https://git.mlplatform.org/ml/ethos-u/ethos-u.git/about/) `dependencies`
+  folder.
 
-- `CMSIS_SRC_PATH`: Path to the CMSIS sources to be used to build TensorFlow
-    Lite Micro library. This parameters is optional and valid only for
-    Arm® Cortex®-M CPU targeted configurations. The default value points to the CMSIS submodule in the
-    [ethos-u](https://git.mlplatform.org/ml/ethos-u/ethos-u.git/about/) `dependencies` folder.
+- `CMSIS_SRC_PATH`: The path to the CMSIS sources to be used to build TensorFlow Lite Micro library. This parameter is
+  optional and is only valid for Arm® *Cortex®-M* CPU targeted configurations. The default value points to the `CMSIS`
+  submodule in the [ethos-u](https://git.mlplatform.org/ml/ethos-u/ethos-u.git/about/) `dependencies` folder.
 
-- `ETHOS_U55_ENABLED`: Sets whether the use of Ethos-U55 NPU is available for
-    the deployment target. By default, this is set and therefore
-    application is built with Ethos-U55 NPU supported.
+- `ETHOS_U55_ENABLED`: Sets whether the use of *Ethos-U55* NPU is available for the deployment target. By default, this
+  is set, and therefore the application is built with *Ethos-U55* NPU support.
 
-- `CPU_PROFILE_ENABLED`: Sets whether profiling information for the CPU
-    core should be displayed. By default, this is set to false, but can
-    be turned on for FPGA targets. The the FVP, the CPU core's cycle
-    counts are not meaningful and should not be used.
+- `CPU_PROFILE_ENABLED`: Sets whether profiling information for the CPU core should be displayed. By default, this is
+  set to false, but can be turned on for FPGA targets. For the FVP, the CPU core cycle counts are not meaningful and
+  should not be used.
 
-- `LOG_LEVEL`: Sets the verbosity level for the application's output
-    over UART/stdout. Valid values are `LOG_LEVEL_TRACE`, `LOG_LEVEL_DEBUG`,
-    `LOG_LEVEL_INFO`, `LOG_LEVEL_WARN` and `LOG_LEVEL_ERROR`. By default, it
-    is set to `LOG_LEVEL_INFO`.
+- `LOG_LEVEL`: Sets the verbosity level for the output of the application over `UART`, or `stdout`. Valid values are:
+  `LOG_LEVEL_TRACE`, `LOG_LEVEL_DEBUG`, `LOG_LEVEL_INFO`, `LOG_LEVEL_WARN`, and `LOG_LEVEL_ERROR`. The default is set
+  to: `LOG_LEVEL_INFO`.
 
-- `<use_case>_MODEL_TFLITE_PATH`: Path to the model file that will be
-    processed and included into the application axf file. The default
-    value points to one of the delivered set of models. Make sure the
-    model chosen is aligned with the `ETHOS_U55_ENABLED` setting.
+- `<use_case>_MODEL_TFLITE_PATH`: The path to the model file that is processed and is included into the application
+  `axf` file. The default value points to one of the delivered set of models. Make sure that the model chosen is aligned
+  with the `ETHOS_U55_ENABLED` setting.
 
-  - When using Ethos-U55 NPU backend, the NN model is assumed to be
-    optimized by Vela compiler.
-    However, even if not, it will fall back on the CPU and execute,
-    if supported by TensorFlow Lite Micro.
+  - When using the *Ethos-U55* NPU backend, the NN model is assumed to be optimized by the Vela compiler. However, even
+    if it is not, it falls back on the CPU and executes there, provided it is supported by TensorFlow Lite Micro.
 
-  - When use of Ethos-U55 NPU is disabled, and if a Vela optimized model
-    is provided, the application will report a failure at runtime.
+  - When use of the *Ethos-U55* NPU is disabled, and if a Vela optimized model is provided, then the application reports
+    a failure at runtime.
 
-- `USE_CASE_BUILD`: specifies the list of applications to build. By
-    default, the build system scans sources to identify available ML
-    applications and produces executables for all detected use-cases.
-    This parameter can accept single value, for example,
-    `USE_CASE_BUILD=img_class` or multiple values, for example,
-    `USE_CASE_BUILD="img_class;kws"`.
+- `USE_CASE_BUILD`: Specifies the list of applications to build. By default, the build system scans sources to identify
+  available ML applications and produces executables for all detected use-cases. This parameter can accept a single
+  value, for example: `USE_CASE_BUILD=img_class`, or multiple values. For example: `USE_CASE_BUILD="img_class;kws"`.
 
-- `ETHOS_U55_TIMING_ADAPTER_SRC_PATH`: Path to timing adapter sources.
-    The default value points to the `timing_adapter` dependencies folder.
+- `ETHOS_U55_TIMING_ADAPTER_SRC_PATH`: The path to timing adapter sources. The default value points to the
+  `timing_adapter` dependencies folder.
 
-- `TA_CONFIG_FILE`: Path to the CMake configuration file containing the
-    timing adapter parameters. Used only if the timing adapter build is
-    enabled.
+- `TA_CONFIG_FILE`: The path to the CMake configuration file that contains the timing adapter parameters. Used only if
+  the timing adapter build is enabled.
 
-- `TENSORFLOW_LITE_MICRO_CLEAN_BUILD`: Optional parameter to enable/disable
-    "cleaning" prior to building for the TensorFlow Lite Micro library.
-    It is enabled by default.
+- `TENSORFLOW_LITE_MICRO_CLEAN_BUILD`: Optional parameter to enable, or disable, "cleaning" prior to building for the
+  TensorFlow Lite Micro library. Enabled by default.
 
-- `TENSORFLOW_LITE_MICRO_CLEAN_DOWNLOADS`: Optional parameter to enable wiping
-    out TPIP downloads from TensorFlow source tree prior to each build.
-    It is disabled by default.
+- `TENSORFLOW_LITE_MICRO_CLEAN_DOWNLOADS`: Optional parameter to enable wiping out `TPIP` downloads from TensorFlow
+  source tree prior to each build. Disabled by default.
 
-- `ARMCLANG_DEBUG_DWARF_LEVEL`: When the CMake build type is specified as `Debug`
-    and when armclang toolchain is being used to build for a Cortex-M CPU target,
-    this optional argument can be set to specify the DWARF format.
-    By default, this is set to 4 and is synonymous with passing `-g`
-    flag to the compiler. This is compatible with Arm-DS and other tools
-    which can interpret the latest DWARF format. To allow debugging using
-    the Model Debugger from Arm FastModel Tools Suite, this argument can be used
-    to pass DWARF format version as "3". Note: this option is only available
-    when CMake project is configured with `-DCMAKE_BUILD_TYPE=Debug` argument.
-    Also, the same dwarf format is used for building TensorFlow Lite Micro library.
+- `ARMCLANG_DEBUG_DWARF_LEVEL`: When the CMake build type is specified as `Debug` and when the `armclang` toolchain is
+  being used to build for a *Cortex-M* CPU target, this optional argument can be set to specify the `DWARF` format.
 
-> **Note:** For details on the specific use case build options, follow the
-> instructions in the use-case specific documentation.
-> Also, when setting any of the CMake configuration parameters that expect a directory/file path, it is advised
-> to **use absolute paths instead of relative paths**.
+    By default, this is set to 4 and is synonymous with passing `-g` flag to the compiler. This is compatible with Arm
+    DS and other tools which can interpret the latest DWARF format. To allow debugging using the Model Debugger from Arm
+    Fast Model Tools Suite, this argument can be used to pass DWARF format version as "3".
+
+    > **Note:** This option is only available when the CMake project is configured with the `-DCMAKE_BUILD_TYPE=Debug`
+    > argument. Also, the same DWARF format is used for building the TensorFlow Lite Micro library.
+
+For details on the specific use-case build options, follow the instructions in the use-case specific documentation.
+
+Also, when setting any of the CMake configuration parameters that expect a directory, or file, path, **use absolute
+paths instead of relative paths**.
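+
+As an illustrative example, the following configures a build that combines several of the parameters listed above (the
+values shown are arbitrary choices, not recommendations):
+
+```commandline
+cmake .. \
+    -DTARGET_PLATFORM=mps3 \
+    -DTARGET_SUBSYSTEM=sse-300 \
+    -DUSE_CASE_BUILD=kws \
+    -DLOG_LEVEL=LOG_LEVEL_DEBUG
+```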
 
 ## Build process
 
-The build process can summarized in three major steps:
+The build process consists of three major steps:
 
-- Prepare the build environment by downloading third party sources required, see
-[Preparing build environment](#preparing-build-environment).
+1. Prepare the build environment by downloading third-party sources required, see
+   [Preparing build environment](#preparing-build-environment).
 
-- Configure the build for the platform chosen.
-This stage includes:
-  - CMake options configuration
-  - When `<use_case>_MODEL_TFLITE_PATH` build options aren't provided, defaults neural network models are be downloaded
-from [Arm ML-Zoo](https://github.com/ARM-software/ML-zoo/). In case of native build, network's input and output data
-for tests are downloaded.
-  - Some files such as neural network models, network's inputs and output labels are automatically converted
-    into C/C++ arrays, see [Automatic file generation](#automatic-file-generation).
+2. Configure the build for the platform chosen. This stage includes:
+    - CMake options configuration
+    - When `<use_case>_MODEL_TFLITE_PATH` build options are not provided, the default neural network models are
+      downloaded from [Arm ML-Zoo](https://github.com/ARM-software/ML-zoo/). For native builds, the network input and
+      output data for tests are downloaded.
+    - Some files such as neural network models, network inputs, and output labels are automatically converted into C/C++
+      arrays, see: [Automatic file generation](#automatic-file-generation).
 
-- Build the application.\
-During this stage application and third party libraries are built see [Building the configured project](#building-the-configured-project).
+3. Build the application.\
+   Application and third-party libraries are now built. For further information, see:
+   [Building the configured project](#building-the-configured-project).
 
 ### Preparing build environment
 
 #### Fetching submodules
 
-Certain third party sources are required to be present on the development machine to allow the example sources in this
+Certain third-party sources are required to be present on the development machine to allow the example sources in this
 repository to link against.
 
 1. [TensorFlow Lite Micro repository](https://github.com/tensorflow/tensorflow)
@@ -254,7 +232,7 @@
 These are part of the [ethos-u repository](https://git.mlplatform.org/ml/ethos-u/ethos-u.git/about/) and set as
 submodules of this project.
 
-> **NOTE**: If you are using non git project sources, run `python3 ./download_dependencies.py` and ignore further git
+> **Note:** If you are using non git project sources, run `python3 ./download_dependencies.py` and ignore further git
 > instructions. Proceed to [Fetching resource files](#fetching-resource-files) section.
 >
 
@@ -264,7 +242,7 @@
 git submodule update --init
 ```
 
-This will download all the required components and place them in a tree like:
+This downloads all of the required components and places them in a tree, like so:
 
 ```tree
 dependencies
@@ -274,32 +252,29 @@
     └── tensorflow
 ```
 
-> **NOTE**: The default source paths for the TPIP sources assume the above directory structure, but all of the relevant
-> paths can be overridden by CMake configuration arguments `TENSORFLOW_SRC_PATH`, `ETHOS_U55_DRIVER_SRC_PATH`,
+> **Note:** The default source paths for the `TPIP` sources assume the above directory structure. However, all of the
+> relevant paths can be overridden by CMake configuration arguments `TENSORFLOW_SRC_PATH`, `ETHOS_U55_DRIVER_SRC_PATH`,
 > and `CMSIS_SRC_PATH`.
 
 #### Fetching resource files
 
-All the ML use case examples in this repository also depend on external neural
-network models. To download these, run the following command from the root of
-the repository:
+Every ML use-case example in this repository also depends on external neural network models. To download these, run the
+following command from the root of the repository:
 
 ```sh
 python3 ./set_up_default_resources.py
 ```
 
-> **Note:** This script requires Python version 3.6 or higher. See all pre-requisites under the section
-> [Build Prerequisites](#build-prerequisites).
+This fetches every model into the `resources_downloaded` directory. It also optimizes the models using the Vela compiler
+for the default 128 MAC configuration of the Arm® *Ethos™-U55* NPU.
 
-This will fetch all the models into `resources_downloaded` directory. It will
-also optimize the models using the Vela compiler for default 128 MAC configuration
-of Arm® Ethos™-U55 NPU.
+> **Note:** This script requires Python version 3.6 or higher. Please make sure all [build prerequisites](#build-prerequisites)
+> are satisfied.
 
 ### Building for default configuration
 
-A helper script `build_default.py` is provided to configure and build all the
-applications. It configures the project with default settings i.e., for `mps3`
-target and `sse-300` subsystem. Under the hood, it invokes all the necessary
+A helper script `build_default.py` is provided to configure and build all the applications. It configures the project
+with default settings, that is, for the `mps3` target and `sse-300` subsystem. Under the hood, it invokes all the
+necessary CMake commands that are described in the next sections.
 
 If using the `Arm GNU embedded toolchain`, execute:
@@ -315,13 +290,13 @@
 ```
 
 Additional command line arguments supported by this script are:
- - `--skip-download`: Do not download resources: models and test vectors
- - `--skip-vela`: Do not run Vela optimizer on downloaded models.
+
+- `--skip-download`: Do not download resources: models and test vectors
+- `--skip-vela`: Do not run Vela optimizer on downloaded models.
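+
+For example, to re-run the script without downloading resources again and without re-running the Vela optimizer, both
+flags can be combined:
+
+```commandline
+./build_default.py --skip-download --skip-vela
+```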
 
 ### Create a build directory
 
-To configure and build the project manually, create a build directory in the
-root of the project and navigate inside:
+To configure and build the project manually, create a build directory in the root of the project and navigate inside:
 
 ```commandline
 mkdir build && cd build
@@ -329,19 +304,17 @@
 
 ### Configuring the build for MPS3 SSE-300
 
-#### Using GNU Arm Embedded Toolchain
+#### Using GNU Arm Embedded toolchain
 
-On Linux, if using `Arm GNU embedded toolchain`, execute the following command
-to build the application to run on the Arm® Ethos™-U55 NPU when providing only
-the mandatory arguments for CMake configuration:
+On Linux, if using `Arm GNU embedded toolchain`, execute the following command to build the application to run on the
+Arm® *Ethos™-U55* NPU when providing only the mandatory arguments for CMake configuration:
 
 ```commandline
 cmake ../
 ```
 
-The above command will build for the default target platform `mps3`, the default subsystem
-`sse-300`, and using the default toolchain file for the target as `bare-metal-gcc.` This is
-equivalent to:
+The preceding command builds for the default target platform `mps3` and the default subsystem `sse-300`, using the
+default toolchain file for the target, `bare-metal-gcc`. This is equivalent to running:
 
 ```commandline
 cmake .. \
@@ -352,15 +325,14 @@
 
 #### Using Arm Compiler
 
-If using `Arm Compiler` instead, the toolchain option `CMAKE_TOOLCHAIN_FILE` can be used to
-point to the ARMClang CMake file instead to set the compiler and platform specific parameters.
+If using `Arm Compiler` to set the compiler and platform-specific parameters, the toolchain option
+`CMAKE_TOOLCHAIN_FILE` can be used to point to the `ARMClang` CMake file, like so:
 
 ```commandline
 cmake ../ -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-armclang.cmake
 ```
 
-To configure a build that can be debugged using Arm Development Studio, we can just specify
-the build type as `Debug`:
+To configure a build that can be debugged using Arm Development Studio, specify the build type as `Debug`. For example:
 
 ```commandline
 cmake .. \
@@ -370,7 +342,15 @@
 
 #### Generating project for Arm Development Studio
 
-To be able to import the project in Arm Development Studio, add the Eclipse project generator and CMAKE_ECLIPSE_VERSION in the CMake command. It is advisable that the build directory is one level up relative to the source directory. When the build has been generated, you need to follow the Import wizard in Arm Development Studio and import the existing project into the workspace. You can then compile and debug the project using Arm Development Studio. Note that the below command is executed one level up from the source directory.
+To import the project into Arm Development Studio, add the Eclipse project generator and `CMAKE_ECLIPSE_VERSION` in the
+CMake command.
+
+It is advisable that the build directory is one level up relative to the source directory. When the build has been
+generated, you must follow the Import wizard in Arm Development Studio and import the existing project into the
+workspace.
+
+You can then compile and debug the project using Arm Development Studio. Note that the following command is executed one
+level up from the source directory:
 
 ```commandline
 cmake \
@@ -383,10 +363,10 @@
     ml-embedded-evaluation-kit
 ```
 
-#### Working with model debugger from Arm FastModel Tools
+#### Working with model debugger from Arm Fast Model Tools
 
-To configure a build that can be debugged using a tool that only supports
-DWARF format 3 (Modeldebugger for example), we can use:
+To configure a build that can be debugged using a tool that only supports DWARF format 3, such as the *Model Debugger*,
+you can use:
 
 ```commandline
 cmake .. \
@@ -399,10 +379,10 @@
 
 #### Configuring with custom TPIP dependencies
 
-If the TensorFlow source tree is not in its default expected location, set the path
-using `TENSORFLOW_SRC_PATH`. Similarly, if the Ethos-U55 NPU driver and CMSIS are
-not in the default location, `ETHOS_U55_DRIVER_SRC_PATH` and `CMSIS_SRC_PATH` can be
-used to configure their location.
+If the TensorFlow source tree is not in its default expected location, set the path using `TENSORFLOW_SRC_PATH`.
+Similarly, if the *Ethos-U55* NPU driver and `CMSIS` are not in the default location, then use
+`ETHOS_U55_DRIVER_SRC_PATH` and `CMSIS_SRC_PATH` to configure their location.
+
 For example:
 
 ```commandline
@@ -412,8 +392,8 @@
     -DCMSIS_SRC_PATH=/my/custom/location/cmsis
 ```
 
-> **Note:** If re-building with changed parameters values, it is
-highly advised to clean the build directory and re-run the CMake command.
+> **Note:** If re-building with changed parameters values, we recommend that you clean the build directory and re-run
+> the CMake command.
 
 ### Configuring native unit-test build
 
@@ -421,7 +401,7 @@
 cmake ../ -DTARGET_PLATFORM=native
 ```
 
-Results of the build will be placed under `build/bin/` folder:
+Results of the build are placed under the `build/bin/` folder. For example:
 
 ```tree
 bin
@@ -453,10 +433,9 @@
 make -j4
 ```
 
-Add `VERBOSE=1` to see compilation and link details.
+To see compilation and link details, add `VERBOSE=1`.
 
-Results of the build will be placed under `build/bin` folder, an
-example:
+Results of the build are placed under `build/bin` folder, for example:
 
 ```tree
 bin
@@ -464,100 +443,83 @@
  ├── ethos-u-<use_case_name>.htm
  ├── ethos-u-<use_case_name>.map
  └── sectors
-        ├── images.txt
+        ├── images.txt
         └── <use_case>
-                ├── dram.bin
+                ├── ddr.bin
                 └── itcm.bin
 ```
 
-Where for each implemented use-case under the `source/use-case` directory,
-the following build artefacts will be created:
+Where for each implemented use-case under the `source/use-case` directory, the following build artifacts are created:
 
-- `ethos-u-<use case name>.axf`: The built application binary for a ML
-    use case.
+- `ethos-u-<use-case name>.axf`: The built application binary for an ML use-case.
 
-- `ethos-u-<use case name>.map`: Information from building the
-    application (e.g. libraries used, what was optimized, location of
-    objects).
+- `ethos-u-<use-case name>.map`: Information from building the application. For example: libraries used, what was
+  optimized, and the location of objects.
 
-- `ethos-u-<use case name>.htm`: Human readable file containing the
-    call graph of application functions.
+- `ethos-u-<use-case name>.htm`: Human readable file containing the call graph of application functions.
 
-- `sectors/<use_case_name>`: Folder containing the built application, split into files
-    for loading into different FPGA memory regions.
+- `sectors/<use-case>`: Folder containing the built application, split into files for loading into different FPGA
+  memory regions.
 
-- `sectors/images.txt`: Tells the FPGA which memory regions to
-    use for loading the binaries in sectors/** folder.
+- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/` folder.
 
-> **Note:**  For the specific use case commands see the relative section
-in the use case documentation.
+> **Note:** For the specific use-case commands, refer to the relevant section in the use-case documentation.
 
 ## Building timing adapter with custom options
 
-The sources also contains the configuration for a timing adapter utility
-for the Ethos-U55 NPU driver. The timing adapter allows the platform to simulate user
-provided memory bandwidth and latency constraints.
+The sources also contain the configuration for a timing adapter utility for the *Ethos-U55* NPU driver. The timing
+adapter allows the platform to simulate user provided memory bandwidth and latency constraints.
 
-The timing adapter driver aims to control the behavior of two AXI buses
-used by Ethos-U55 NPU. One is for SRAM memory region and the other is for
-flash or DRAM. The SRAM is where intermediate buffers are expected to be
-allocated and therefore, this region can serve frequent R/W traffic
-generated by computation operations while executing a neural network
-inference. The flash or DDR is where we expect to store the model
-weights and therefore, this bus would typically be used only for R/O
+The timing adapter driver aims to control the behavior of the two AXI buses used by the *Ethos-U55* NPU. One is for the
+SRAM memory region, and the other is for flash or DRAM.
+
+The SRAM is where intermediate buffers are expected to be allocated. Therefore, this region can serve the frequent read
+and write traffic generated by computation operations while executing a neural network inference.
+
+The flash or DDR is where we expect to store the model weights. Therefore, this bus is typically used only for read-only
 traffic.
 
-It is used for MPS3 FPGA as well as for Fast Model environment.
+It is used for the MPS3 FPGA and for the Fast Model environment.
 
-The CMake build framework allows the parameters to control the behavior
-of each bus with following parameters:
+The CMake build framework allows the behavior of each bus to be controlled with the following parameters:
 
-- `MAXR`: Maximum number of pending read operations allowed. 0 is
-    inferred as infinite, and the default value is 4.
+- `MAXR`: Maximum number of pending read operations allowed. `0` is inferred as infinite and the default value is `4`.
 
-- `MAXW`: Maximum number of pending write operations allowed. 0 is
-    inferred as infinite, and the default value is 4.
+- `MAXW`: Maximum number of pending write operations allowed. `0` is inferred as infinite and the default value is `4`.
 
-- `MAXRW`: Maximum number of pending read+write operations allowed. 0 is
-    inferred as infinite, and the default value is 8.
+- `MAXRW`: Maximum number of pending read and write operations allowed. `0` is inferred as infinite and the default
+  value is `8`.
 
-- `RLATENCY`: Minimum latency, in cycle counts, for a read operation.
-    This is the duration between ARVALID and RVALID signals. The default
-    value is 50.
+- `RLATENCY`: Minimum latency, in cycle counts, for a read operation. This is the duration between `ARVALID` and
+  `RVALID` signals. The default value is `50`.
 
-- `WLATENCY`: Minimum latency, in cycle counts, for a write operation.
-    This is the duration between WVALID + WLAST and BVALID being
-    de-asserted. The default value is 50.
+- `WLATENCY`: Minimum latency, in cycle counts, for a write operation. This is the duration between `WVALID` plus
+  `WLAST`, and `BVALID` being de-asserted. The default value is `50`.
 
-- `PULSE_ON`: Number of cycles during which addresses are let through.
-    The default value is 5100.
+- `PULSE_ON`: The number of cycles where addresses are let through. The default value is `5100`.
 
-- `PULSE_OFF`: Number of cycles during which addresses are blocked. The
-    default value is 5100.
+- `PULSE_OFF`: The number of cycles where addresses are blocked. The default value is `5100`.
 
-- `BWCAP`: Maximum number of 64-bit words transferred per pulse cycle. A
-    pulse cycle is PULSE_ON + PULSE_OFF. 0 is inferred as infinite, and
-    the default value is 625.
+- `BWCAP`: Maximum number of 64-bit words transferred per pulse cycle. A pulse cycle is `PULSE_ON` plus `PULSE_OFF`.
+  `0` is inferred as infinite and the default value is `625`.
 
-- `MODE`: Timing adapter operation mode. Default value is 0
+- `MODE`: Timing adapter operation mode. Default value is `0`.
 
-  - Bit 0: 0=simple; 1=latency-deadline QoS throttling of read vs.
-        write
+  - `Bit 0`: `0`=simple, `1`=latency-deadline QoS throttling of read versus write,
 
-  - Bit 1: 1=enable random AR reordering (0=default),
+  - `Bit 1`: `1`=enable random AR reordering (`0`=default),
 
-  - Bit 2: 1=enable random R reordering (0=default),
+  - `Bit 2`: `1`=enable random R reordering (`0`=default),
 
-  - Bit 3: 1=enable random B reordering (0=default)
+  - `Bit 3`: `1`=enable random B reordering (`0`=default)
 
-For timing adapter's CMake build configuration, the SRAM AXI is assigned
-index 0 and the flash/DRAM AXI bus has index 1. To change the bus
-parameter for the build a "***TA_\<index>_**"* prefix should be added
-to the above. For example, **TA0_MAXR=10** will set the SRAM AXI bus's
-maximum pending reads to 10.
+For the CMake build configuration of the timing adapter, the SRAM AXI is assigned `index 0` and the flash, or DRAM, AXI
+bus has `index 1`.
 
-As an example, if we have the following parameters for flash/DRAM
-region:
+To change a bus parameter for the build, add a `TA<index>_` prefix to the parameter name. For example, `TA0_MAXR=10`
+sets the maximum number of pending reads to 10 on the SRAM AXI bus.
+
+As an example, if we have the following parameters for the flash, or DRAM, region:
 
 - `TA1_MAXR` = "2"
 
@@ -578,31 +540,29 @@
 For a clock rate of 500MHz, this would translate to:
 
 - The maximum duty cycle for any operation is:\
-![Maximum duty cycle formula](../media/F1.png)
+  ![Maximum duty cycle formula](../media/F1.png)
 
 - Maximum bit rate for this bus (64-bit wide) is:\
-![Maximum bit rate formula](../media/F2.png)
+  ![Maximum bit rate formula](../media/F2.png)
 
-- With a read latency of 64 cycles, and maximum pending reads as 2,
-    each read could be a maximum of 64 or 128 bytes, as defined for
-    Ethos-U55 NPU\'s AXI bus\'s attribute.
+- With a read latency of 64 cycles, and maximum pending reads as 2, each read could be a maximum of 64 or 128 bytes, as
+  defined for the *Ethos-U55* NPU AXI bus attribute.
 
-    The bandwidth is calculated solely by read parameters ![Bandwidth formula](
-        ../media/F3.png)
+    The bandwidth is calculated solely by read parameters:
 
-    This is higher than the overall bandwidth dictated by the bus parameters
-    of \
+    ![Bandwidth formula](../media/F3.png)
+
+    This is higher than the overall bandwidth dictated by the bus parameters of:
+
     ![Overall bandwidth formula](../media/F4.png)
 
-This suggests that the read operation is limited only by the overall bus
-bandwidth.
+This suggests that the read operation is only limited by the overall bus bandwidth.
 
-Timing adapter requires recompilation to change parameters. Default timing
-adapter configuration file pointed to by `TA_CONFIG_FILE` build parameter is
-located in the scripts/cmake folder and contains all options for AXI0 and
-AXI1 described above.
+The timing adapter requires recompilation to change parameters. The default timing adapter configuration file, pointed
+to by the `TA_CONFIG_FILE` build parameter, is located in the `scripts/cmake` folder and contains all options for
+`AXI0` and `AXI1` as previously described.
 
-An example of scripts/cmake/ta_config.cmake:
+Here is an example of `scripts/cmake/ta_config.cmake`:
 
 ```cmake
 # Timing adapter options
@@ -620,7 +580,7 @@
 ...
 ```
 
-An example of the build with custom timing adapter configuration:
+An example of the build with a custom timing adapter configuration:
 
 ```commandline
 cmake .. -DTA_CONFIG_FILE=scripts/cmake/my_ta_config.cmake
@@ -628,27 +588,24 @@
 
 ## Add custom inputs
 
-The application performs inference on input data found in the folder set
-by the CMake parameters, for more information see the 3.3 section in the
-specific use case documentation.
+The application performs inference on input data found in the folder set by the CMake parameters. For more
+information, see section 3.3 in the specific use-case documentation.
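+
+For example, assuming that the use-case exposes a `<use_case>_FILE_PATH` option (the exact parameter name is listed in
+the use-case documentation), a custom input folder, or file, can be set at configuration time:
+
+```commandline
+cmake .. -D<use_case>_FILE_PATH=/my/custom/inputs
+```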
 
 ## Add custom model
 
-The application performs inference using the model pointed to by the
-CMake parameter `MODEL_TFLITE_PATH`.
+The application performs inference using the model pointed to by the CMake parameter `MODEL_TFLITE_PATH`.
 
-> **Note:** If you want to run the model using Ethos-U55 NPU, ensure your custom
-model has been run through the Vela compiler successfully before continuing.
+> **Note:** If you want to run the model using *Ethos-U55* NPU, ensure that your custom model has been run through the
+> Vela compiler successfully before continuing.
 
-To run the application with a custom model you will need to provide a
-labels_<model_name>.txt file of labels associated with the model.
-Each line of the file should correspond to one of the outputs in your
-model. See the provided labels_mobilenet_v2_1.0_224.txt file in the
-img_class use case for an example.
+To run the application with a custom model, you must provide a `labels_<model_name>.txt` file of labels that are
+associated with the model.
 
-Then, you must set `<use_case>_MODEL_TFLITE_PATH` to the location of
-the Vela processed model file and `<use_case>_LABELS_TXT_FILE` to the
-location of the associated labels file:
+Each line of the file should correspond to one of the outputs in your model. See the provided
+`labels_mobilenet_v2_1.0_224.txt` file in the `img_class` use-case for an example.
+
+Then, you must set `<use_case>_MODEL_TFLITE_PATH` to the location of the Vela processed model file and
+`<use_case>_LABELS_TXT_FILE` to the location of the associated labels file, like so:
 
 ```commandline
 cmake .. \
@@ -659,16 +616,15 @@
     -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-armclang.cmake
 ```
 
-> **Note:** For the specific use case command see the relative section in the use case documentation.
+> **Note:** For the specific use-case command, refer to the relevant section in the use-case documentation.
+>
 > **Note:** Clean the build directory before re-running the CMake command.
 
-The TensorFlow Lite for Microcontrollers model pointed to by `<use_case>_MODEL_TFLITE_PATH` and
-labels text file pointed to by `<use_case>_LABELS_TXT_FILE` will be
-converted to C++ files during the CMake configuration stage and then
-compiled into the application for performing inference with.
+The TensorFlow Lite for Microcontrollers model pointed to by `<use_case>_MODEL_TFLITE_PATH` and the labels text file
+pointed to by `<use_case>_LABELS_TXT_FILE` are converted to C++ files during the CMake configuration stage. They are
+then compiled into the application and used to perform the inference.
 
-The log from the configuration stage should tell you what model path and
-labels file have been used:
+The log from the configuration stage tells you what model path and labels file have been used. For example:
 
 ```log
 -- User option TARGET_PLATFORM is set to mps3
@@ -685,34 +641,28 @@
 ...
 ```
 
-After compiling, your custom model will have now replaced the default
-one in the application.
+After compiling, your custom model has replaced the default one in the application.
 
 ## Optimize custom model with Vela compiler
 
-> **Note:** This tool is not available within this project.
-It is a python tool available from <https://pypi.org/project/ethos-u-vela/>.
+> **Note:** This tool is not available within this project. It is a Python tool available from
+> <https://pypi.org/project/ethos-u-vela/>.\
 The source code is hosted on <https://git.mlplatform.org/ml/ethos-u/ethos-u-vela.git/>.
 
-The Vela compiler is a tool that can optimize a neural network model
-into a version that can run on an embedded system containing Ethos-U55 NPU.
+The Vela compiler is a tool that can optimize a neural network model into a version that can run on an embedded system
+containing an *Ethos-U55* NPU.
 
-The optimized model will contain custom operators for sub-graphs of the
-model that can be accelerated by Ethos-U55 NPU, the remaining layers that
-cannot be accelerated are left unchanged and will run on the CPU using
-optimized (CMSIS-NN) or reference kernels provided by the inference
-engine.
+The optimized model contains custom operators for sub-graphs of the model that can be accelerated by the *Ethos-U55*
+NPU. The remaining layers that cannot be accelerated are left unchanged and run on the CPU using either optimized
+kernels (`CMSIS-NN`) or reference kernels provided by the inference engine.
 
-After the compilation, the optimized model can only be executed on a
-system with Ethos-U55 NPU.
+After the compilation, the optimized model can only be executed on a system using an *Ethos-U55* NPU.
 
-> **Note:** The NN model provided during the build and compiled into the application
-executable binary defines whether CPU or NPU is used to execute workloads.
-If unoptimized model is used, then inference will run on Cortex-M CPU.
+> **Note:** The NN model provided during the build and compiled into the application executable binary defines whether
+the CPU or NPU is used to execute workloads. If an unoptimized model is used, then inference runs on the *Cortex-M* CPU.
 
-Vela compiler accepts parameters to influence a model optimization. The
-model provided within this project has been optimized with
-the following parameters:
+The Vela compiler accepts parameters to influence a model optimization. The model provided within this project has been
+optimized with the following parameters:
 
 ```commandline
 vela \
@@ -724,40 +674,34 @@
     <model>.tflite
 ```
 
-Where:
+The command line parameters used above are:
 
-- `--accelerator-config`: Specify the accelerator configuration to use
-    between ethos-u55-256, ethos-u55-128, ethos-u55-64 and ethos-u55-32.
-- `--block-config-limit`: Limit block config search space, use zero for
-    unlimited.
-- `--config`: Specifies the path to the Vela configuration file. The format of the file is a Python ConfigParser .ini file.
-    An example can be found in the `dependencies` folder [vela.ini](../../scripts/vela/vela.ini).
+- `--accelerator-config`: Specifies the accelerator configuration to use between `ethos-u55-256`, `ethos-u55-128`,
+  `ethos-u55-64`, and `ethos-u55-32`.
+- `--block-config-limit`: Limits the block config search space. Use `zero` for unlimited search space.
+- `--config`: Specifies the path to the Vela configuration file. The format of the file is a Python ConfigParser `.ini`
+    file. An example can be found in the `dependencies` folder [vela.ini](../../scripts/vela/vela.ini).
 - `--memory-mode`: Selects the memory mode to use as specified in the Vela configuration file.
-- `--system-config`:Selects the system configuration to use as specified in the Vela configuration file.
+- `--system-config`: Selects the system configuration to use as specified in the Vela configuration file.
 
-Vela compiler accepts `.tflite` file as input and saves optimized network
-model as a `.tflite` file.
+The Vela compiler accepts a `.tflite` file as input and saves the optimized network model as a `.tflite` file.
 
-Using `--show-cpu-operations` and `--show-subgraph-io-summary` will show
-all the operations that fall back to the CPU and a summary of all the
-subgraphs and their inputs and outputs.
+Using `--show-cpu-operations` and `--show-subgraph-io-summary` shows all the operations that fall back to the CPU, and
+includes a summary of all the subgraphs and their inputs and outputs.
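+
+For example, using the same `<model>.tflite` placeholder as above:
+
+```commandline
+vela --show-cpu-operations --show-subgraph-io-summary <model>.tflite
+```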
 
 To see Vela helper for all the parameters use: `vela --help`.
 
-Please, get in touch with your Arm representative to request access to
-Vela Compiler documentation for more details.
-
-> **Note:** By default, use of the Ethos-U55 NPU is enabled in the CMake configuration.
-This could be changed by passing `-DETHOS_U55_ENABLED`.
+> **Note:** By default, use of the *Ethos-U55* NPU is enabled in the CMake configuration. This can be changed by passing
+> `-DETHOS_U55_ENABLED`.
 
 ## Automatic file generation
 
-As mentioned in the previous sections, some files such as neural network
-models, network's inputs, and output labels are automatically converted
-into C/C++ arrays during the CMake project configuration stage.
-Additionally, some code is generated to allow access to these arrays.
+As mentioned in the previous sections, some files such as neural network models, network inputs, and output labels are
+automatically converted into C/C++ arrays during the CMake project configuration stage.
 
-An example:
+Also, some code is generated to allow access to these arrays.
+
+For example:
 
 ```log
 -- Building use-cases: img_class.
@@ -781,10 +725,9 @@
 ...
 ```
 
-In particular, the building options pointing to the input files `<use_case>_FILE_PATH`,
-the model `<use_case>_MODEL_TFLITE_PATH` and labels text file `<use_case>_LABELS_TXT_FILE`
-are used by python scripts in order to generate not only the converted array files,
-but also some headers with utility functions.
+In particular, the building options pointing to the input files `<use_case>_FILE_PATH`, the model
+`<use_case>_MODEL_TFLITE_PATH`, and labels text file `<use_case>_LABELS_TXT_FILE` are used by Python scripts in order to
+generate not only the converted array files, but also some headers with utility functions.
 
 For example, the generated utility functions for image classification are:
 
@@ -840,7 +783,7 @@
     }
     ```
 
-These headers are generated using python templates, that are in `scripts/py/templates/*.template`.
+These headers are generated using Python templates that are located in `scripts/py/templates/*.template`:
 
 ```tree
 scripts/
@@ -873,9 +816,10 @@
         └── tflite.cc.template
 ```
 
-Based on the type of use case the correct conversion is called in the use case cmake file
-(audio or image respectively for voice or vision use cases).
-For example, the generations call for image classification (`source/use_case/img_class/usecase.cmake`):
+Based on the type of use-case, the correct conversion is called in the use-case CMake file: audio for voice use-cases
+and image for vision use-cases.
+
+For example, the generation call for image classification, `source/use_case/img_class/usecase.cmake`, looks like this:
 
 ```c++
 # Generate input files
@@ -902,8 +846,8 @@
 )
 ```
 
-> **Note:** When required, for models and labels conversion it's possible to add extra parameters such
-> as extra code to put in `<model>.cc` file or namespaces.
+> **Note:** When required, for model and label conversions, it is possible to add extra parameters, such as extra code
+> to put in the `<model>.cc` file, or namespaces.
 >
 > ```c++
 > set(${use_case}_LABELS_CPP_FILE Labels)
@@ -931,10 +875,11 @@
 > )
 > ```
 
-In addition to input file conversions, the correct platform/system profile is selected
-(in `scripts/cmake/subsystem-profiles/*.cmake`) based on `TARGET_SUBSYSTEM` build option
-and the variables set are used to generate memory region sizes, base addresses and IRQ numbers,
-respectively used to generate mem_region.h, peripheral_irqs.h and peripheral_memmap.h headers.
+In addition to the input file conversions, the correct platform, or system, profile is selected in
+`scripts/cmake/subsystem-profiles/*.cmake`, based on the `TARGET_SUBSYSTEM` build option. The variables it sets are used
+to generate memory region sizes, base addresses, and IRQ numbers, which are in turn used to generate the
+`mem_region.h`, `peripheral_irqs.h`, and `peripheral_memmap.h` headers respectively.
+
 Templates from `scripts/cmake/templates/*.template` are used to generate the header files.
 
 After the build, the files generated in the build folder are:
@@ -967,4 +912,4 @@
         └── <uc2_model_name>.tflite.cc
 ```
 
-Next section of the documentation: [Deployment](deployment.md).
+The next section of the documentation is: [Deployment](deployment.md).
diff --git a/docs/sections/coding_guidelines.md b/docs/sections/coding_guidelines.md
index 664b548..2a3f9cc 100644
--- a/docs/sections/coding_guidelines.md
+++ b/docs/sections/coding_guidelines.md
@@ -1,45 +1,47 @@
 # Coding standards and guidelines
 
-- [Introduction](#introduction)
-- [Language version](#language-version)
-- [File naming](#file-naming)
-- [File layout](#file-layout)
-- [Block Management](#block-management)
-- [Naming Conventions](#naming-conventions)
-  - [C++ language naming conventions](#c_language-naming-conventions)
-  - [C language naming conventions](#c-language-naming-conventions)
-- [Layout and formatting conventions](#layout-and-formatting-conventions)
-- [Language usage](#language-usage)
+- [Coding standards and guidelines](#coding-standards-and-guidelines)
+  - [Introduction](#introduction)
+  - [Language version](#language-version)
+  - [File naming](#file-naming)
+  - [File layout](#file-layout)
+  - [Block Management](#block-management)
+  - [Naming Conventions](#naming-conventions)
+    - [C++ language naming conventions](#c-language-naming-conventions)
+    - [C language naming conventions](#c-language-naming-conventions-1)
+  - [Layout and formatting conventions](#layout-and-formatting-conventions)
+  - [Language usage](#language-usage)
 
 ## Introduction
 
 This document presents some standard coding guidelines to be followed for contributions to this repository. Most of the
-code is written in C++, but there is some written in C as well. There is a clear C/C++ boundary at the Hardware
-Abstraction Layer (HAL). Both these languages follow different naming conventions within this repository, by design, to:
+code is written in C++, but there is also some written in C. There is a clear C/C++ boundary at the Hardware Abstraction
+Layer (HAL). Both of these languages follow different naming conventions within this repository, by design, to:
 
-- have clearly distinguishable C and C++ sources.
-- make cross language function calls stand out. Mostly these will be C++ function calls to the HAL functions written in C.
-However, because we also issue function calls to third party API's (and they may not follow these conventions), the
-intended outcome may not be fully realised in all of the cases.
+- Have clearly distinguishable C and C++ sources.
+- Make cross language function calls stand out. These are mainly C++ function calls to the HAL functions, which are
+  written in C.
+
+However, because we also issue function calls to third-party APIs, and they are not guaranteed to follow these
+conventions, the intended outcome might not be fully realized in every case.
 
 ## Language version
 
-For this project, code written in C++ shall use a subset of the C++11 feature set and software
-may be written using the C++11 language standard. Code written in C should be compatible
-with the C99 standard.
+For this project, code written in C++ uses a subset of the `C++11` feature set and software may be written using the
+`C++11` language standard. Code written in C is compatible with the `C99` standard.
 
-Software components written in C/C++ may use the language features allowed and encouraged by this documentation.
+Software components written in C/C++ may use the language features that are allowed and encouraged by this
+documentation.
 
 ## File naming
 
-- C files should have `.c` extension
-- C++ files should have `.cc` or `.cpp` extension.
-- Header files for functions implemented in C should have `.h` extension.
-- Header files for functions implemented in C++ should have `.hpp` extension.
+- C files must have a `.c` extension.
+- C++ files must have a `.cc` or `.cpp` extension.
+- Header files for functions implemented in C must have a `.h` extension.
+- Header files for functions implemented in C++ must have a `.hpp` extension.
 
 ## File layout
 
-- Standard copyright notice must be included in all files:
+- The standard copyright notice must be included in all files:
 
   ```copyright
   /*
@@ -60,8 +62,8 @@
   */
   ```
 
-- Source lines must be no longer than 120 characters. Prefer to spread code out vertically rather than horizontally,
-  wherever it makes sense:
+- Source lines must be no longer than 120 characters. You can spread the code out vertically, rather than horizontally,
+  if required. For example:
 
   ```C++
   # This is significantly easier to read
@@ -76,9 +78,9 @@
   enum class SomeEnum2 { ENUM_VALUE_1, ENUM_VALUE_2, ENUM_VALUE_3 };
   ```
 
-- Block indentation should use 4 characters, no tabs.
+- Block indentation must use 4 characters and not use tabs.
 
-- Each statement must be on a separate line.
+- Each statement must be on a separate line. For example:
 
   ```C++
   int a, b; // Error prone
@@ -88,18 +90,18 @@
   int *p = nullptr; // GOOD
   ```
 
-- Source must not contain commented out code or unreachable code
+- Also, the source code must not contain code that has been commented out or is unreachable.
 
 ## Block Management
 
-- Blocks must use braces and braces location must be consistent.
-  - Each function has its opening brace at the next line on the same indentation level as its header, the code within
-  the braces is indented and the closing brace at the end is on the same level as the opening.
-  For compactness, if the class/function body is empty braces are accepted on the same line.
+- Blocks must use braces and the brace location must be consistent throughout.
+  - Therefore, each function has its opening brace at the next line on the same indentation level as its header. The
+    code within the braces is indented and the closing brace at the end is on the same level as the opening. For
+    compactness, if the class, or function, body is empty, then braces on the same line are acceptable.
 
-  - Conditional statements and loops, even if are just single-statement body, needs to be surrounded by braces, the
-opening brace is at the same line, the closing brace is at the next line on the same indentation level as its header;
-the same rule is applied to classes.
+  - Conditional statements and loops, even if they have just a single-statement body, must be surrounded by braces.
+    The opening brace is on the same line, the closing brace is on the next line, and on the same indentation level as
+    its header. The same rule applies to classes.
 
     ```C++
     class Class1 {
@@ -172,7 +174,7 @@
   void SomeFunction(int someParameter) {}
   ```
 
-- Macros, pre-processor definitions, and enumeration values should use upper case names:
+- Use uppercase names for macros, pre-processor definitions, and enumeration values:
 
   ```C++
   #define SOME_DEFINE
@@ -184,7 +186,7 @@
   };
   ```
 
-- Namespace names must be lower case
+- Namespace names must be lowercase:
 
   ```C++
   namespace nspace
@@ -193,18 +195,18 @@
   };
   ```
 
-- Source code should use Hungarian notation to annotate the name of a variable with information about its meaning.
+- Source code must use Hungarian notation to annotate the name of a variable with information about its meaning.
 
   | Prefix | Class | Description |
   | ------ | ----- | ----------- |
-  | p | Type      | Pointer to any other type |
-  | k | Qualifier | Constant |
-  | v | Qualifier | Volatile |
-  | m | Scope     | Member of a class or struct |
-  | s | Scope     | Static |
-  | g | Scope     | Used to indicate variable has scope beyond the current function: file-scope or externally visible scope|
+  | `p` | Type      | Pointer to any other type |
+  | `k` | Qualifier | Constant |
+  | `v` | Qualifier | Volatile |
+  | `m` | Scope     | Member of a class or struct |
+  | `s` | Scope     | Static |
+  | `g` | Scope     | Used to indicate variable has scope beyond the current function: file-scope or externally visible scope. |
 
-The following examples  of Hungarian notation are one possible set of uses:
+The following examples of Hungarian notation are one possible set of uses:
 
   ```C++
   int g_GlobalInt=123;
@@ -218,7 +220,7 @@
 
 For C sources, we follow the Linux variant of the K&R style wherever possible.
 
-- For function and variable names we use `snake_case` convention:
+- For function and variable names, we use the `snake_case` convention:
 
   ```C
   int some_variable;
@@ -226,7 +228,7 @@
   void some_function(int some_parameter) {}
   ```
 
-- Macros, pre-processor definitions, and enumeration values should use upper case names:
+- Use uppercase names for macros, pre-processor definitions, and enumeration values:
 
   ```C
   #define SOME_DEFINE
@@ -240,13 +242,11 @@
 
 ## Layout and formatting conventions
 
-- C++ class code layout
-  Public function definitions should be at the top of a class definition, since they are things most likely to be used
-by other people.
-  Private functions and member variables should be last.
-  Class functions and member variables should be laid out logically in blocks of related functionality.
+- C++ class code layout: Public function definitions must be at the top of a class definition, since they are the most
+  likely to be used by others. Private functions and member variables must come last. Lay out class functions and
+  member variables logically, in blocks of related functionality.
 
-- Class  inheritance keywords are not indented.
+- Class inheritance keywords are not indented. For example:
 
   ```C++
   class MyClass
@@ -260,12 +260,12 @@
   };
   ```
 
-- Don't leave trailing spaces at the end of lines.
+- Do not leave trailing spaces at the end of lines.
 
-- Empty lines should have no trailing spaces.
+- Empty lines must not have trailing spaces.
 
-- For pointers and references, the symbols `*` and `&` should be adjacent to the name of the type, not the name
-  of the variable.
+- For pointers and references, the symbols `*` and `&` must be adjacent to the name of the type, *not* the name of the
+  variable.
 
   ```C++
   char* someText = "abc";
@@ -275,22 +275,23 @@
 
 ## Language usage
 
-- Header `#include` statements should be minimized.
-  Inclusion of unnecessary headers slows down compilation, and can hide errors where a function calls a
-  subroutine which it should not be using if the unnecessary header defining this subroutine is included.
+- Minimize header `#include` statements: The inclusion of unnecessary headers slows down compilation. It can also hide
+  errors, where a function calls a subroutine that it should not be using, if the unnecessary header defining that
+  subroutine is included.
 
-  Header statements should be included in the following order:
+  Include header statements in the following order:
 
-  - Header file corresponding to the current source file (if applicable)
-  - Headers from the same component
-  - Headers from other components
-  - Third-party headers
-  - System headers
+  - If applicable, begin with the header file corresponding to the current source file,
+  - Headers from the same component,
+  - Headers from other components,
+  - Third-party headers,
+  - System headers.
 
-  > **Note:** Leave one blank line between each of these groups for readability.
-  >Use quotes for headers from within the same project and angle brackets for third-party and system headers.
-  >Do not use paths relative to the current source file, such as `../Header.hpp`. Instead configure your include paths
-  >in the project makefiles.
+  > **Note:** Leave one blank line between each of these groups for readability. Use quotes for headers from within the
+  > same project and angle brackets for third-party and system headers. Do not use paths relative to the current source
+  > file, such as `../Header.hpp`. Instead, configure your include paths in the project makefiles.
+
+  For example:
 
   ```C++
   #include "ExampleClass.hpp"     // Own header
@@ -307,7 +308,7 @@
   // [...]
   ```
 
-- C++ casts should use the template-styled case syntax
+- Use the template-styled cast syntax for C++ casts:
 
   ```C++
   int a = 100;
@@ -315,7 +316,7 @@
   float c = static_cast<float>(a); // OK
   ```
 
-- Use the const keyword to declare constants instead of define.
+- Use the `const` keyword to declare constants instead of `define`.
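+
+  For example:
+
+  ```C++
+  #define MAX_COUNT 10        // Avoid: pre-processor definition
+  const int kMaxCount = 10;   // Preferred: typed constant
+  ```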
 
-- Should use `nullptr` instead of `NULL`,
-  C++11 introduced the `nullptr` type to distinguish null pointer constants from the integer 0.
+- Use `nullptr` instead of `NULL`. C++11 introduced the `nullptr` type to distinguish null pointer constants from the
+  integer 0.
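+
+  For example:
+
+  ```C++
+  int* ptr1 = NULL;     // Avoid
+  int* ptr2 = nullptr;  // Preferred
+  ```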
diff --git a/docs/sections/customizing.md b/docs/sections/customizing.md
index ae911d9..2df32d5 100644
--- a/docs/sections/customizing.md
+++ b/docs/sections/customizing.md
@@ -2,33 +2,33 @@
 
 - [Implementing custom ML application](#implementing-custom-ml-application)
   - [Software project description](#software-project-description)
-  - [HAL API](#hal-api)
+  - [Hardware Abstraction Layer (HAL) API](#hardware-abstraction-layer-hal-api)
   - [Main loop function](#main-loop-function)
   - [Application context](#application-context)
   - [Profiler](#profiler)
   - [NN Model API](#nn-model-api)
-  - [Adding custom ML use case](#adding-custom-ml-use-case)
+  - [Adding custom ML use-case](#adding-custom-ml-use-case)
   - [Implementing main loop](#implementing-main-loop)
   - [Implementing custom NN model](#implementing-custom-nn-model)
+    - [Define `ModelPointer` and `ModelSize` methods](#define-modelpointer-and-modelsize-methods)
   - [Executing inference](#executing-inference)
   - [Printing to console](#printing-to-console)
   - [Reading user input from console](#reading-user-input-from-console)
   - [Output to MPS3 LCD](#output-to-mps3-lcd)
-  - [Building custom use case](#building-custom-use-case)
+  - [Building custom use-case](#building-custom-use-case)
 
-This section describes how to implement a custom Machine Learning
-application running on `Arm® Corstone™-300` based FVP or on the Arm® MPS3 FPGA prototyping board.
+This section describes how to implement a custom Machine Learning application running on Arm® *Corstone™-300* based FVP
+or on the Arm® MPS3 FPGA prototyping board.
 
-Arm® Ethos™-U55 code sample software project offers a simple way to incorporate
-additional use-case code into the existing infrastructure and provides a build
-system that automatically picks up added functionality and produces corresponding
-executable for each use-case. This is achieved by following certain configuration
-and code implementation conventions.
+The Arm® *Ethos™-U55* code sample software project offers a way to incorporate more use-case code into the existing
+infrastructure. It also provides a build system that automatically picks up added functionality and produces a
+corresponding executable for each use-case. This is achieved by following certain configuration and code implementation
+conventions.
 
-The following sign will indicate the important conventions to apply:
+The following sign indicates the important conventions to apply:
 
-> **Convention:** The code is developed using C++11 and C99 standards.
-> This is governed by TensorFlow Lite for Microcontrollers framework.
+> **Convention:** The code is developed using the `C++11` and `C99` standards. This is governed by the TensorFlow Lite
+> for Microcontrollers framework.
 
 ## Software project description
 
@@ -54,17 +54,14 @@
 └── Readme.md
 ```
 
-Where `source` contains C/C++ sources for the platform and ML applications.
-Common code related to the Ethos-U55 code samples software
-framework resides in the `application` sub-folder and ML application specific logic (use-cases)
-sources are in the `use-case` subfolder.
+Where the `source` folder contains C/C++ sources for the platform and ML applications. Common code related to the
+*Ethos-U55* code samples software framework resides in the `application` sub-folder, and the sources for the ML
+application-specific logic, the use-cases, are in the `use_case` sub-folder.
 
-> **Convention**: Separate use-cases must be organized in sub-folders under the use-case folder.
-> The name of the directory is used as a name for this use-case and could be provided
-> as a `USE_CASE_BUILD` parameter value.
-> It is expected by the build system that sources for the use-case are structured as follows:
-> headers in an `include` directory, C/C++ sources in a `src` directory.
-> For example:
+> **Convention**: Separate use-cases must be organized in sub-folders under the use-case folder. The name of the
+> directory is used as a name for this use-case and can be provided as a `USE_CASE_BUILD` parameter value. The build
+> system expects that sources for the use-case are structured as follows: Headers in an `include` directory and C/C++
+> sources in a `src` directory. For example:
 >
 > ```tree
 > use_case
@@ -75,92 +72,84 @@
 >             └── *.cc
 > ```
 
-## HAL API
+## Hardware Abstraction Layer (HAL) API
 
-Hardware abstraction layer is represented by the following interfaces.
-To access them, include `hal.h` header.
+The HAL is represented by the following interfaces. To access them, include the `hal.h` header.
 
-- `hal_platform` structure:
-    Structure that defines a platform context to be used by the application
+- `hal_platform` structure: Defines a platform context to be used by the application.
 
   |  Attribute name    | Description |
   |--------------------|----------------------------------------------------------------------------------------------|
-  |  inited            |  Initialization flag. Is set after the platform_init() function is called.                   |
-  |  plat_name         |  Platform name. it is set to "mps3-bare" for MPS3 build and "FVP" for Fast Model build.      |
-  |  data_acq          |  Pointer to data acquisition module responsible for user interaction and other data collection for the application logic.               |
-  |  data_psn          |  Pointer to data presentation module responsible for data output through components available in the selected platform: LCD -- for MPS3, console -- for Fast Model. |
-  |  timer             |  Pointer to platform timer implementation (see platform_timer)                               |
-  |  platform_init     |  Pointer to platform initialization function.                                                |
-  |  platform_release  |  Pointer to platform release function                                                        |
+  |  `inited`            |  Initialization flag. It is set after the `platform_init()` function is called.                   |
+  |  `plat_name`         |  Platform name. It is set to `mps3-bare` for the MPS3 build and `FVP` for the Fast Model build.      |
+  |  `data_acq`          |  Pointer to data acquisition module responsible for user interaction and other data collection for the application logic.               |
+  |  `data_psn`          |  Pointer to data presentation module responsible for data output through components available in the selected platform: LCD for MPS3, console for Fast Model. |
+  |  `timer`             |  Pointer to platform timer implementation (see `platform_timer`)                               |
+  |  `platform_init`     |  Pointer to platform initialization function.                                                |
+  |  `platform_release`  |  Pointer to platform release function.                                                        |
 
-- `hal_init` function:
-    Initializes the HAL structure based on compile time config. This
-    should be called before any other function in this API.
+- `hal_init` function: Initializes the HAL structure based on the compile time configuration. This must be called before
+    any other function in this API.
 
   |  Parameter name  | Description|
   |------------------|-----------------------------------------------------|
-  |  platform        | Pointer to a pre-allocated `hal_platform` struct.   |
-  |  data_acq        | Pointer to a pre-allocated data acquisition module  |
-  |  data_psn        | Pointer to a pre-allocated data presentation module |
-  |  timer           | Pointer to a pre-allocated timer module             |
-  |  return          | zero if successful, error code otherwise            |
+  |  `platform`        | Pointer to a pre-allocated `hal_platform` struct.   |
+  |  `data_acq`        | Pointer to a pre-allocated data acquisition module  |
+  |  `data_psn`        | Pointer to a pre-allocated data presentation module |
+  |  `timer`           | Pointer to a pre-allocated timer module             |
+  |  `return`          | Zero returned if successful, an error code is returned if unsuccessful.            |
 
-- `hal_platform_init` function:
-  Initializes the HAL platform and all the modules on the platform the
-  application requires to run.
+- `hal_platform_init` function: Initializes the HAL platform and every module on the platform that the application
+  requires to run.
 
   | Parameter name  | Description                                                         |
   | ----------------| ------------------------------------------------------------------- |
-  | platform        | Pointer to a pre-allocated and initialized `hal_platform` struct.   |
-  | return          | zero if successful, error code otherwise.                           |
+  | `platform`        | Pointer to a pre-allocated and initialized `hal_platform` struct.   |
+  | `return`          | Zero if successful, an error code otherwise.                           |
 
-- `hal_platform_release` function
-  Releases the HAL platform. This should release resources acquired.
+- `hal_platform_release` function: Releases the HAL platform and any acquired resources.
 
   | Parameter name  | Description                                                         |
   | ----------------| ------------------------------------------------------------------- |
-  |  platform       | Pointer to a pre-allocated and initialized `hal_platform` struct.   |
+  |  `platform`       | Pointer to a pre-allocated and initialized `hal_platform` struct.   |
 
-- `data_acq_module` structure:
-  Structure to encompass the data acquisition module and it's methods.
+- `data_acq_module` structure: Structure to encompass the data acquisition module and its methods.
 
   | Attribute name | Description                                        |
   |----------------|----------------------------------------------------|
-  | inited         | Initialization flag. Is set after the system_init () function is called. |
-  | system_name    | Channel name. It is set to "UART" for MPS3 build and fastmodel builds.   |
-  | system_init    | Pointer to data acquisition module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platforminitialization routines. |
-  | get_input      | Pointer to a function reading user input. The pointer is set according to the selected platform during the build. For MPS3 and fastmodel environments, the function reads data from UART.   |
+  | `inited`        | Initialization flag. It is set after the `system_init()` function is called. |
+  | `system_name`    | Channel name. It is set to `UART` for MPS3 build and Fast Model builds.   |
+  | `system_init`    | Pointer to data acquisition module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platform initialization routines. |
+  | `get_input`      | Pointer to a function reading user input. The pointer is set according to the selected platform during the build. For MPS3 and Fast Model environments, the function reads data from UART.   |
 
-- `data_psn_module` structure:
-  Structure to encompass the data presentation module and its methods.
+- `data_psn_module` structure: Structure to encompass the data presentation module and associated methods.
 
   | Attribute name     | Description                                    |
   |--------------------|------------------------------------------------|
-  | inited             | Initialization flag. It is set after the system_init () function is called. |
-  | system_name        | System component name used to present data. It is set to "lcd" for MPS3 build and to "log_psn" for fastmodel build. In case of fastmodel, all pixel drawing functions are replaced by console output of the data summary.                              |
-  | system_init        | Pointer to data presentation module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platform initialization routines. |
-  | present_data_image | Pointer to a function to draw an image. The pointer is set according to the selected platform during the build. For MPS3, the image will be drawn on the LCD; for fastmodel  image summary will be printed in the UART  (coordinates, channel info, downsample factor) |
-  | present_data_text  | Pointer to a function to print a text. The pointer is set according to the selected platform during the build. For MPS3, the text will be drawn on the LCD; for fastmodel text will be printed in the UART. |
-  | present_box        | Pointer to a function to draw a rectangle. The pointer is set according to the selected platform during the build. For MPS3, the image will be drawn on the LCD; for fastmodel  image summary will be printed in the UART. |
-  | clear              | Pointer to a function to clear the output. The pointer is set according to the selected platform during the build. For MPS3, the function will clear the LCD; for fastmodel will do nothing. |
-  | set_text_color     | Pointer to a function to set text color for the next call of present_data_text() function. The pointer is set according to the selected platform during the build. For MPS3, the function will set the color for the text printed on the LCD; for fastmodel -- will do nothing. |
+  | `inited`             | Initialization flag. It is set after the `system_init()` function is called. |
+  | `system_name`        | System component name used to present data. It is set to `lcd` for the MPS3 build and to `log_psn` for the Fast Model build. For Fast Model, the console output of the data summary replaces all pixel drawing functions.  |
+  | `system_init`        | Pointer to data presentation module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platform initialization routines. |
+  | `present_data_image` | Pointer to a function to draw an image. The pointer is set according to the selected platform during the build. For MPS3, the image is drawn on the LCD. For Fast Model, the image summary is printed in the UART (coordinates, channel info, downsample factor). |
+  | `present_data_text`  | Pointer to a function to print a text. The pointer is set according to the selected platform during the build. For MPS3, the text is drawn on the LCD. For Fast Model, the text is printed in the UART. |
+  | `present_box`        | Pointer to a function to draw a rectangle. The pointer is set according to the selected platform during the build. For MPS3, the image is drawn on the LCD. For Fast Model, the image summary is printed in the UART. |
+  | `clear`              | Pointer to a function to clear the output. The pointer is set according to the selected platform during the build. For MPS3, the function clears the LCD. For Fast Model, nothing happens. |
+  | `set_text_color`     | Pointer to a function to set text color for the next call of `present_data_text()` function. The pointer is set according to the selected platform during the build. For MPS3, the function sets the color for the text printed on the LCD. For Fast Model, nothing happens. |
 
-- `platform_timer` structure:
-    Structure to hold a platform specific timer implementation.
+- `platform_timer` structure: The structure to hold a platform-specific timer implementation.
 
   | Attribute name      | Description                                    |
   |---------------------|------------------------------------------------|
-  |  inited             |  Initialization flag. It is set after the timer is initialized by the `hal_platform_init` function. |
-  |  reset              |  Pointer to a function to reset a timer. |
-  |  get_time_counter   |  Pointer to a function to get current time counter. |
-  |  get_duration_ms    |  Pointer to a function to calculate duration between two time-counters in milliseconds. |
-  |  get_duration_us    |  Pointer to a function to calculate duration between two time-counters in microseconds |
-  |  get_cpu_cycle_diff |  Pointer to a function to calculate duration between two time-counters in Cortex-M55 cycles. |
-  |  get_npu_cycle_diff |  Pointer to a function to calculate duration between two time-counters in Ethos-U55 cycles. Available only when project is configured with ETHOS_U55_ENABLED set. |
-  |  start_profiling    |  Wraps `get_time_counter` function with additional profiling initialisation, if required. |
-  |  stop_profiling     |  Wraps `get_time_counter` function along with additional instructions when profiling ends, if required. |
+  |  `inited`             |  Initialization flag. It is set after the timer is initialized by the `hal_platform_init` function. |
+  |  `reset`              |  Pointer to a function to reset a timer. |
+  |  `get_time_counter`   |  Pointer to a function to get current time counter. |
+  |  `get_duration_ms`    |  Pointer to a function to calculate duration between two time-counters in milliseconds. |
+  |  `get_duration_us`    |  Pointer to a function to calculate duration between two time-counters in microseconds. |
+  |  `get_cpu_cycle_diff` |  Pointer to a function to calculate duration between two time-counters in *Cortex-M55* cycles. |
+  |  `get_npu_cycle_diff` |  Pointer to a function to calculate duration between two time-counters in *Ethos-U55* cycles. Available only when project is configured with `ETHOS_U55_ENABLED` set. |
+  |  `start_profiling`    |  Wraps the `get_time_counter` function with additional profiling initialization, if necessary. |
+  |  `stop_profiling`     |  Wraps the `get_time_counter` function along with additional instructions when profiling ends, if necessary. |
 
-Example of the API initialization in the main function:
+An example of the API initialization in the main function:
 
 ```C++
 #include "hal.h"
@@ -189,16 +178,13 @@
 
 ## Main loop function
 
-Code samples application main function will delegate the use-case
-logic execution to the main loop function that must be implemented for
-each custom ML scenario.
+The code samples application main function delegates the use-case logic execution to the main loop function, which must
+be implemented for each custom ML scenario.
 
-Main loop function takes the initialized *hal_platform* structure
-pointer as an argument.
+The main loop function takes the initialized `hal_platform` structure pointer as an argument.
 
-The main loop function has external linkage and main executable for the
-use-case will have reference to the function defined in the use-case
-code.
+The main loop function has external linkage and the main executable for the use-case references the function defined in
+the use-case code.
 
 ```C++
 void main_loop(hal_platform& platform){
@@ -210,14 +196,14 @@
 
 ## Application context
 
-Application context could be used as a holder for a state between main
-loop iterations. Include AppContext.hpp to use ApplicationContext class.
+The application context can be used as a holder for a state between main loop iterations. Include `AppContext.hpp` to
+use the `ApplicationContext` class.
 
 | Method name  | Description                                                      |
 |--------------|------------------------------------------------------------------|
-|  Set         |  Saves given value as a named attribute in the context.          |
-|  Get         |  Gets the saved attribute from the context by the given name.    |
-|  Has         |  Checks if an attribute with a given name exists in the context. |
+| `Set`         |  Saves given value as a named attribute in the context.          |
+|  `Get`         |  Gets the saved attribute from the context by the given name.    |
+|  `Has`         |  Checks if an attribute with a given name exists in the context. |
 
 For example:
 
@@ -241,21 +227,20 @@
 
 ## Profiler
 
-Profiler is a helper class assisting in collection of timings and
-Ethos-U55 cycle counts for operations. It uses platform timer to get
-system timing information.
+The profiler is a helper class that assists with the collection of timings and *Ethos-U55* cycle counts for operations.
+It uses the platform timer to get system timing information.
 
 | Method name             | Description                                                    |
 |-------------------------|----------------------------------------------------------------|
-|  StartProfiling         | Starts profiling and records the starting timing data.         |
-|  StopProfiling          | Stops profiling and records the ending timing data.            |
-|  StopProfilingAndReset  | Stops the profiling and internally resets the platform timers. |
-|  Reset                  | Resets the profiler and clears all collected data.             |
-|  GetAllResultsAndReset  | Gets all the results as string and resets the profiler.            |
-|  PrintProfilingResult   | Prints collected profiling results and resets the profiler.    |
-|  SetName                | Set the profiler name.                                         |
+|  `StartProfiling`         | Starts profiling and records the starting timing data.         |
+|  `StopProfiling`          | Stops profiling and records the ending timing data.            |
+|  `StopProfilingAndReset`  | Stops the profiling and internally resets the platform timers. |
+|  `Reset`                  | Resets the profiler and clears all collected data.             |
+|  `GetAllResultsAndReset`  | Gets all the results as a string and resets the profiler.          |
+|  `PrintProfilingResult`   | Prints the collected profiling results and resets the profiler.    |
+|  `SetName`                | Sets the profiler name.                                            |
 
-Usage example:
+An example of it in use:
 
 ```C++
 Profiler profiler{&platform, "Inference"};
@@ -269,39 +254,38 @@
 
 ## NN Model API
 
-Model (refers to neural network model) is an abstract class wrapping the
-underlying TensorFlow Lite Micro API and providing methods to perform
-common operations such as TensorFlow Lite Micro framework
-initialization, inference execution, accessing input and output tensor
-objects.
+The Model, which refers to the neural network model, is an abstract class that wraps the underlying TensorFlow Lite
+Micro API. It provides methods to perform common operations such as TensorFlow Lite Micro framework initialization,
+inference execution, and accessing input and output tensor objects.
 
-To use this abstraction, import TensorFlowLiteMicro.hpp header.
+To use this abstraction, import the `TensorFlowLiteMicro.hpp` header.
 
 | Method name              | Description                                                                  |
 |--------------------------|------------------------------------------------------------------------------|
-|  GetInputTensor          |  Returns the pointer to the model's input tensor.                            |
-|  GetOutputTensor         |  Returns the pointer to the model's output tensor                            |
-|  GetType                 |  Returns the model's data type                                               |
-|  GetInputShape           |  Return the pointer to the model's input shape                               |
-|  GetOutputShape          |  Return the pointer to the model's output shape.                             |
-|  GetNumInputs            |  Return the number of input tensors the model has.                           |
-|  GetNumOutputs           |  Return the number of output tensors the model has.                          |
-|  LogTensorInfo           |  Logs the tensor information to stdout for the given tensor pointer: tensor name, tensor address, tensor type, tensor memory size and quantization params.  |
-|  LogInterpreterInfo      |  Logs the interpreter information to stdout.                                 |
-|  Init                    |  Initializes the TensorFlow Lite Micro framework, allocates require memory for the model. |
-|  GetAllocator            |  Gets the allocator pointer for the instance.                                |
-|  IsInited                |  Checks if this model object has been initialized.                           |
-|  IsDataSigned            |  Checks if the model uses signed data type.                                  |
-|  RunInference            |  Runs the inference (invokes the interpreter).                               |
-|  ShowModelInfoHandler    |  Model information handler common to all models.                             |
-|  GetTensorArena          |  Returns pointer to memory region to be used for tensors allocations.        |
-|  ModelPointer            |  Returns the pointer to the NN model data array.                             |
-|  ModelSize               |  Returns the model size.                                                     |
-|  GetOpResolver           |  Returns the reference to the TensorFlow Lite Micro operator resolver.       |
-|  EnlistOperations        |  Registers required operators with TensorFlow Lite Micro operator resolver.  |
-|  GetActivationBufferSize |  Returns the size of the tensor arena memory region.                         |
+|  `GetInputTensor`          |  Returns the pointer to the model's input tensor.                            |
+|  `GetOutputTensor`         |  Returns the pointer to the model's output tensor.                           |
+|  `GetType`                 |  Returns the model's data type.                                              |
+|  `GetInputShape`           |  Returns the pointer to the model's input shape.                             |
+|  `GetOutputShape`          |  Returns the pointer to the model's output shape.                            |
+|  `GetNumInputs`            |  Returns the number of input tensors the model has.                          |
+|  `GetNumOutputs`           |  Returns the number of output tensors the model has.                         |
+|  `LogTensorInfo`           |  Logs the tensor information to `stdout` for the given tensor pointer. The information includes the tensor name, tensor address, tensor type, tensor memory size, and quantization parameters.  |
+|  `LogInterpreterInfo`      |  Logs the interpreter information to `stdout`.                               |
+|  `Init`                    |  Initializes the TensorFlow Lite Micro framework and allocates the required memory for the model. |
+|  `GetAllocator`            |  Gets the allocator pointer for the instance.                                |
+|  `IsInited`                |  Checks if this model object has been initialized.                           |
+|  `IsDataSigned`            |  Checks if the model uses a signed data type.                                |
+|  `RunInference`            |  Runs the inference by invoking the interpreter.                             |
+|  `ShowModelInfoHandler`    |  Model information handler common to all models.                             |
+|  `GetTensorArena`          |  Returns the pointer to the memory region to be used for tensor allocations. |
+|  `ModelPointer`            |  Returns the pointer to the NN model data array.                             |
+|  `ModelSize`               |  Returns the model size.                                                     |
+|  `GetOpResolver`           |  Returns the reference to the TensorFlow Lite Micro operator resolver.       |
+|  `EnlistOperations`        |  Registers the required operators with the TensorFlow Lite Micro operator resolver.  |
+|  `GetActivationBufferSize` |  Returns the size of the tensor arena memory region.                         |
 
-> **Convention:**  Each ML use-case must have extension of this class and implementation of the protected virtual methods:
+> **Convention:**  Each ML use-case must have an extension of this class and an implementation of the protected virtual
+> methods:
 >
 > ```C++
 > virtual const uint8_t* ModelPointer() = 0;
@@ -311,25 +295,25 @@
 > virtual size_t GetActivationBufferSize() = 0;
 > ```
 >
-> Network models have different set of operators that must be registered with
-> tflite::MicroMutableOpResolver object in the EnlistOperations method.
-> Network models could require different size of activation buffer that is returned as
-> tensor arena memory for TensorFlow Lite Micro framework by the GetTensorArena
-> and GetActivationBufferSize methods.
+> Network models have different sets of operators that must be registered with the `tflite::MicroMutableOpResolver` object
+> in the `EnlistOperations` method. Network models can also require different sizes of activation buffer that are returned
+> as tensor arena memory for the TensorFlow Lite Micro framework by the `GetTensorArena` and `GetActivationBufferSize`
+> methods.
+>
+> **Note:** Please see `MobileNetModel.hpp` and `MobileNetModel.cc` files from the image classification ML application
+> use-case as an example of the model base class extension.
 
-Please see `MobileNetModel.hpp` and `MobileNetModel.cc` files from image
-classification ML application use-case as an example of the model base
-class extension.
+## Adding custom ML use-case
 
-## Adding custom ML use case
+This section describes how to implement an additional use-case, and then compile it into a binary executable that runs on
+the Fast Model or the MPS3 FPGA board.
 
-This section describes how to implement additional use-case and compile
-it into the binary executable to run with Fast Model or MPS3 FPGA board.
-It covers common major steps: application main loop creation,
-description of the NN model, inference execution.
+It covers common major steps: The application main loop creation, a description of the NN model, and inference
+execution.
 
-In addition, few useful examples are provided: reading user input,
-printing into console, drawing images into MPS3 LCD.
+In addition, a few useful examples are provided: Reading user input, printing to the console, and drawing images on the
+MPS3 LCD.
+
+For example:
 
 ```tree
 use_case
@@ -338,25 +322,23 @@
       └── src
 ```
 
-Start with creation of a sub-directory under the `use_case` directory and
-two other directories `src` and `include` as described in
-[Software project description](#software-project-description) section:
+Start by creating a sub-directory under the `use_case` directory, with two additional directories, `src` and `include`,
+as described in the [Software project description](#software-project-description) section.
 
 ## Implementing main loop
 
-Use-case main loop is the place to put use-case main logic. Essentially,
-it is an infinite loop that reacts on user input, triggers use-case
-conditional logic based on the input and present results back to the
-user. However, it could also be a simple logic that runs a single inference
-and then exits.
+The use-case main loop is the place to put the main logic of the use-case. Essentially, it is an infinite loop that
+reacts to user input, triggers use-case conditional logic based on that input, and presents the results back to the user.
 
-Main loop has knowledge about the platform and has access to the
-platform components through the hardware abstraction layer (referred to as HAL).
+However, it could also be a simple logic that runs a single inference and then exits.
 
-Create a `MainLoop.cc` file in the `src` directory (the one created under
-[Adding custom ML use case](#adding-custom-ml-use-case)), the name is not
-important. Define `main_loop` function with the signature described in
-[Main loop function](#main-loop-function):
+The main loop has knowledge of the platform and has access to the platform components through the Hardware Abstraction
+Layer (HAL).
+
+Start by creating a `MainLoop.cc` file in the `src` directory (the one created under
+[Adding custom ML use-case](#adding-custom-ml-use-case)). The name used is not important.
+
+Now define the `main_loop` function with the signature described in [Main loop function](#main-loop-function):
 
 ```C++
 #include "hal.h"
@@ -366,23 +348,19 @@
 }
 ```
 
-The above is already a working use-case, if you compile and run it (see
-[Building custom usecase](#building-custom-use-case)) the application will start, print
-message to console and exit straight away.
+The preceding code is already a working use-case. If you compile and run it (see [Building custom use-case](#building-custom-use-case)),
+then the application starts, prints a message to the console, and exits straight away.
 
-Now, you can start filling this function with logic.
+You can now start filling this function with logic.
 
 ## Implementing custom NN model
 
-Before inference could be run with a custom NN model, TensorFlow Lite
-Micro framework must learn about the operators/layers included in the
-model. Developer must register operators using `MicroMutableOpResolver`
-API.
+Before an inference can be run with a custom NN model, the TensorFlow Lite Micro framework must learn about the
+operators, or layers, included in the model. You must register the operators using the `MicroMutableOpResolver` API.
 
-Ethos-U55 code samples project has an abstraction around TensorFlow
-Lite Micro API (see [NN model API](#nn-model-api)). Create `HelloWorldModel.hpp` in
-the use-case include sub-directory, extend Model abstract class and
-declare required methods.
+The *Ethos-U55* code samples project has an abstraction around the TensorFlow Lite Micro API (see [NN model API](#nn-model-api)).
+Create `HelloWorldModel.hpp` in the use-case `include` sub-directory, extend the `Model` abstract class,
+and then declare the required methods.
 
 For example:
 
@@ -420,19 +398,20 @@
 #endif /* HELLOWORLDMODEL_HPP */
 ```
 
-Create `HelloWorldModel.cc` file in the `src` sub-directory and define the methods
-there. Include `HelloWorldModel.hpp` created earlier. Note that `Model.hpp`
-included in the header provides access to TensorFlow Lite Micro's operation
-resolver API.
+Create the `HelloWorldModel.cc` file in the `src` sub-directory and define the methods there. Include the
+`HelloWorldModel.hpp` header created earlier.
 
-Please, see `use_case/img_class/src/MobileNetModel.cc` for
-code examples.
-If you are using a TensorFlow Lite model compiled with Vela, it is important to add
-custom Ethos-U55 operator to the operators list.
+> **Note:** The `Model.hpp` included in the header provides access to TensorFlow Lite Micro's operation resolver API.
 
-The following example shows how to add the custom Ethos-U55 operator with
-TensorFlow Lite Micro framework. We will use the ARM_NPU define to exclude
-the code if the application was built without NPU support.
+Please refer to `use_case/img_class/src/MobileNetModel.cc` for code examples.
+
+If you are using a TensorFlow Lite model compiled with Vela, it is important to add a custom *Ethos-U55* operator to the
+operators list.
+
+The following example shows how to add the custom *Ethos-U55* operator with the TensorFlow Lite Micro framework. The
+`ARM_NPU` define is used to exclude this code if the application is built without NPU support.
+
+For example:
 
 ```C++
 #include "HelloWorldModel.hpp"
@@ -453,53 +432,51 @@
 }
 ```
 
-To minimize application memory footprint, it is advised to register only
-operators used by the NN model.
+To minimize the memory footprint of the application, we advise you to only register operators that are used by the NN
+model.
 
-Define `ModelPointer` and `ModelSize` methods. These functions are wrappers around the
-functions generated in the C++ file containing the neural network model as an array.
-This generation the C++ array from the .tflite file, logic needs to be defined in
-the `usecase.cmake` file for this `HelloWorld` example.
+### Define `ModelPointer` and `ModelSize` methods
 
-For more details on `usecase.cmake`, see [Building custom use case](#building-custom-use-case).
-For details on code generation flow in general, see [Automatic file generation](./building.md#automatic-file-generation)
+These functions are wrappers around the functions generated in the C++ file containing the neural network model as an
+array. The logic that generates the C++ array from the `.tflite` file must be defined in the `usecase.cmake` file for
+this `HelloWorld` example.
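+
+As a rough illustration, the following sketch shows how these wrappers relate to the generated accessors. The
+`GetModelLen()` name and the simplified class are assumptions for this sketch only; the actual class derives from the
+`Model` base class described in the [NN model API](#nn-model-api) section:
+
+```C++
+#include <cstddef>
+#include <cstdint>
+
+/* Stand-ins for the symbols produced by the build system from the .tflite file.
+ * GetModelPointer() is named in this documentation; GetModelLen() is a
+ * hypothetical name for the corresponding size accessor. */
+extern const uint8_t* GetModelPointer();
+extern size_t GetModelLen();
+
+/* Sketch of the two wrapper methods: they simply forward to the generated accessors. */
+struct HelloWorldModelSketch {
+    const uint8_t* ModelPointer() { return GetModelPointer(); }
+    size_t ModelSize() { return GetModelLen(); }
+};
+```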
 
-The TensorFlow Lite model data is read during Model::Init() method execution, see
-`application/tensorflow-lite-micro/Model.cc` for more details. Model invokes
-`ModelPointer()` function which calls the `GetModelPointer()` function to get
-neural network model data memory address. The `GetModelPointer()` function
-will be generated during the build and could be found in the
-file `build/generated/hello_world/src/<model_file_name>.cc`. Generated
-file is added to the compilation automatically.
+For more details on `usecase.cmake`, refer to: [Building custom use-case](#building-custom-use-case).
 
-Use `${use-case}_MODEL_TFLITE_PATH` build parameter to include custom
-model to the generation/compilation process (see [Build options](./building.md#build-options)).
+For details on code generation flow in general, refer to: [Automatic file generation](./building.md#automatic-file-generation).
+
+The TensorFlow Lite model data is read during the `Model::Init()` method execution. Please refer to
+`application/tensorflow-lite-micro/Model.cc` for more details.
+
+The model invokes the `ModelPointer()` function, which calls the `GetModelPointer()` function to get the memory address
+of the neural network model data. The `GetModelPointer()` function is generated during the build and can be found in the
+file `build/generated/hello_world/src/<model_file_name>.cc`. The generated file is automatically added to the compilation.
+
+Use the `${use_case}_MODEL_TFLITE_PATH` build parameter to include a custom model in the generation, or compilation,
+process. For further information, please refer to: [Build options](./building.md#build-options).
 
 ## Executing inference
 
-To run an inference successfully it is required to have:
+To run an inference successfully, you must have:
 
-- a TensorFlow Lite model file
-- extended Model class
-- place to add the code to invoke inference
-- main loop function
-- and some input data.
+- A TensorFlow Lite model file,
+- An extended Model class,
+- A place to add the code to invoke inference,
+- A main loop function,
+- And some input data.
 
-For the hello_world example below, the input array is not populated.
-However, for real-world scenarios, this data should either be read from
-an on-board device or be prepared in the form of C++ sources before
-compilation and be baked into the application.
+For the `hello_world` example below, the input array is not populated. However, for real-world scenarios, this data must
+either be read from an on-board device, or be prepared in the form of C++ sources before compilation and then baked into
+the application.
 
-For example, the image classification application has extra build steps
-to generate C++ sources from the provided images with
-`generate_images_code` CMake function.
+For example, the image classification application requires extra build steps to generate C++ sources from the provided
+images with the `generate_images_code` CMake function.
 
-> **Note:** Check the input data type for your NN model and input array data type are  the same.
-> For example, generated C++ sources for images store image data as uint8 array. For models that were
-> quantized to int8 data type, it is important to convert image data to int8 correctly before inference execution.
-> Asymmetric data type to symmetric data type conversion involves positioning zero value, i.e. subtracting an
-> offset for uint8 values. Please check image classification application source for the code example
-> (ConvertImgToInt8 function).
+> **Note:** Check that the input data type for your NN model and input array data type are the same. For example,
+> generated C++ sources for images store image data as a `uint8` array. For models that were quantized to an `int8` data
+> type, convert the image data to `int8` correctly *before* inference execution. Converting asymmetric data to symmetric
+> data involves positioning the zero value. In other words, subtracting an offset for `uint8` values. Please check the
+> image classification application source for the code example, such as the `ConvertImgToInt8` function.
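+
+For illustration, a minimal sketch of such an asymmetric-to-symmetric conversion is shown below. This is not the
+application's `ConvertImgToInt8` implementation, just the core idea of subtracting the `128` offset:
+
+```C++
+#include <cstddef>
+#include <cstdint>
+
+/* Sketch only: convert asymmetric uint8 image data to the symmetric int8
+ * range expected by an int8-quantized model by subtracting the 128 offset. */
+static void ConvertToInt8(const uint8_t* src, int8_t* dst, size_t numElements)
+{
+    for (size_t i = 0; i < numElements; ++i) {
+        dst[i] = static_cast<int8_t>(static_cast<int32_t>(src[i]) - 128);
+    }
+}
+```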
 
 The following code adds inference invocation to the main loop function:
 
@@ -555,8 +532,7 @@
   TfLiteTensor *inputTensor = model.GetInputTensor();
   ```
 
-- Copying input data to the input tensor. We assume input tensor size
-  to be 1000 uint8 elements.
+- Copying input data to the input tensor. We assume the input tensor size to be 1000 `uint8` elements.
 
   ```C++
   memcpy(inputTensor->data.data, inputData, 1000);
@@ -568,8 +544,8 @@
   model.RunInference();
   ```
 
-- Reading inference results: data and data size from the output
-  tensor. We assume that output layer has uint8 data type.
+- Reading inference results: The data and data size from the output tensor. We assume that the output layer has a `uint8`
+  data type. A short post-processing sketch follows this list.
 
   ```C++
   const uint32_t tensorSz = outputTensor->bytes;
@@ -577,9 +553,10 @@
   const uint8_t *outputData = tflite::GetTensorData<uint8_t>(outputTensor);
   ```
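+
+Building on the preceding list, here is a minimal post-processing sketch that picks the top class from a raw `uint8`
+classification output. The helper name is illustrative and not part of the code samples:
+
+```C++
+#include <cstdint>
+
+/* Sketch: return the index of the highest-scoring class in a uint8 output
+ * tensor (returns 0 if the tensor is empty). */
+static uint32_t GetTopClass(const uint8_t* outputData, uint32_t tensorSz)
+{
+    uint32_t topIdx = 0;
+    for (uint32_t i = 1; i < tensorSz; ++i) {
+        if (outputData[i] > outputData[topIdx]) {
+            topIdx = i;
+        }
+    }
+    return topIdx;
+}
+```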
 
-Adding profiling for Ethos-U55 is easy. Include `Profiler.hpp` header and
-invoke `StartProfiling` and `StopProfiling` around inference
-execution.
+To add profiling for the *Ethos-U55*, include the `Profiler.hpp` header and invoke both `StartProfiling` and
+`StopProfiling` around the inference execution.
+
+For example:
 
 ```C++
 Profiler profiler{&platform, "Inference"};
@@ -593,110 +570,105 @@
 
 ## Printing to console
 
-Provided examples already used some function to print messages to the
-console. The full list of available functions:
+The preceding examples already use some functions to print messages to the console.
+
+For clarity, here is the full list of the available functions:
 
 - `printf`
-- `trace` - printf wrapper for tracing messages
-- `debug` - printf wrapper for debug messages
-- `info` - printf wrapper for informational messages
-- `warn` - printf wrapper for warning messages
-- `printf_err` - printf wrapper for error messages
+- `trace` - `printf` wrapper for tracing messages.
+- `debug` - `printf` wrapper for debug messages.
+- `info` - `printf` wrapper for informational messages.
+- `warn` - `printf` wrapper for warning messages.
+- `printf_err` - `printf` wrapper for error messages.
 
-`printf` wrappers could be switched off with `LOG_LEVEL` define:
+The `printf` wrappers can be switched off with the `LOG_LEVEL` define:
 
-trace (0) < debug (1) < info (2) < warn (3) < error (4).
+`trace (0) < debug (1) < info (2) < warn (3) < error (4)`.
 
-Default output level is info = level 2.
+> **Note:** The default output level is `info = level 2`.
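+
+For example, a short sketch of how these wrappers are typically called is shown below. It assumes that the wrappers are
+printf-style and become available once `hal.h`, used in the earlier main loop example, is included:
+
+```C++
+#include "hal.h"   /* assumption: makes the logging wrappers available */
+
+void LogExample(int numInferences)
+{
+    info("Starting %d inference(s)\n", numInferences);
+    debug("Tensor arena resides in SRAM\n");
+    if (numInferences <= 0) {
+        printf_err("Invalid inference count: %d\n", numInferences);
+    }
+}
+```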
 
 ## Reading user input from console
 
-Platform data acquisition module has get_input function to read keyboard
-input from the UART. It can be used as follows:
+The platform data acquisition module uses the `get_input` function to read the keyboard input from the UART. It can be
+used as follows:
 
 ```C++
 char ch_input[128];
 platform.data_acq->get_input(ch_input, sizeof(ch_input));
 ```
 
-The function will block until user provides an input.
+The function blocks until the user provides an input.
 
 ## Output to MPS3 LCD
 
-Platform presentation module has functions to print text or an image to
-the board LCD:
+The platform presentation module has functions to print text or an image to the board LCD. For example:
 
 - `present_data_text`
 - `present_data_image`
 
 Text presentation function has the following signature:
 
-- `const char* str`: string to print.
-- `const uint32_t str_sz`: string size.
-- `const uint32_t pos_x`: x coordinate of the first letter in pixels.
-- `const uint32_t pos_y`: y coordinate of the first letter in pixels.
-- `const uint32_t alow_multiple_lines`: signals whether the text is
-    allowed to span multiple lines on the screen, or should be truncated
-    to the current line.
+- `const char* str`: The string to print.
+- `const uint32_t str_sz`: The string size.
+- `const uint32_t pos_x`: The x coordinate of the first letter in pixels.
+- `const uint32_t pos_y`: The y coordinate of the first letter in pixels.
+- `const uint32_t alow_multiple_lines`: Signals whether the text is allowed to span multiple lines on the screen, or
+  must be truncated to the current line.
 
-This function does not wrap text, if the given string cannot fit on the
-screen it will go outside the screen boundary.
+This function does not wrap text. If the given string cannot fit on the screen, it goes outside the screen boundary.
 
-Example that prints "Hello world" on the LCD:
+Here is an example that prints "Hello world" on the LCD screen:
 
 ```C++
 std::string hello("Hello world");
 platform.data_psn->present_data_text(hello.c_str(), hello.size(), 10, 35, 0);
 ```
 
-Image presentation function has the following signature:
+The image presentation function has the following signature:
 
-- `uint8_t* data`: image data pointer;
-- `const uint32_t width`: image width;
-- `const uint32_t height`: image height;
-- `const uint32_t channels`: number of channels. Only 1 and 3 channels are supported now.
-- `const uint32_t pos_x`: x coordinate of the first pixel.
-- `const uint32_t pos_y`: y coordinate of the first pixel.
-- `const uint32_t downsample_factor`: the factor by which the image is to be down sampled.
+- `uint8_t* data`: The image data pointer.
+- `const uint32_t width`: The image width.
+- `const uint32_t height`: The image height.
+- `const uint32_t channels`: The number of channels. Currently, only 1 and 3 channels are supported.
+- `const uint32_t pos_x`: The x coordinate of the first pixel.
+- `const uint32_t pos_y`: The y coordinate of the first pixel.
+- `const uint32_t downsample_factor`: The factor by which the image is to be downsampled.
 
-For example, the following code snippet visualizes an input tensor data
-for MobileNet v2 224 (down sampling it twice):
+For example, the following code snippet visualizes the input tensor data for `MobileNet v2 224`, downsampling it by a
+factor of two:
 
 ```C++
 platform.data_psn->present_data_image((uint8_t *) inputTensor->data.data, 224, 224, 3, 10, 35, 2);
 ```
 
-Please see [hal-api](#hal-api) section for other data presentation
-functions.
+Please refer to the [HAL API](#hal-api) section for more data presentation functions.
 
-## Building custom use case
+## Building custom use-case
 
-There is one last thing to do before building and running a use-case
-application: create a `usecase.cmake` file in the root of your use-case,
-the name of the file is not important.
+There is one last thing to do before building and running a use-case application. You must create a `usecase.cmake` file
+in the root of your use-case. However, the name of the file is not important.
 
 > **Convention:**  The build system searches for CMake file in each use-case directory and includes it into the build
-> flow. This file could be used to specify additional application specific build options, add custom build steps or
-> override standard compilation and linking flags.
-> Use `USER_OPTION` function to add additional build option. Prefix variable name with `${use_case}` (use-case name) to
-> avoid names collisions with other CMake variables.
-> Some useful variable names visible in use-case CMake file:
+> flow. This file can be used to specify additional application-specific build options, add custom build steps, or
+> override standard compilation and linking flags. Use the `USER_OPTION` function to add further build options. Prefix
+> the variable name with `${use_case}`, the use-case name, to avoid name collisions with other CMake variables. Here
+> are some useful variable names that are visible in the use-case CMake file:
 >
-> - `DEFAULT_MODEL_PATH` – default model path to use if use-case specific `${use_case}_MODEL_TFLITE_PATH` is not set
->in the build arguments.
->- `TARGET_NAME` – name of the executable.
-> - `use_case` – name of the current use-case.
-> - `UC_SRC` – list of use-case sources.
-> - `UC_INCLUDE` – path to the use-case headers.
-> - `ETHOS_U55_ENABLED` – flag indicating if the current build supports Ethos-U55.
-> - `TARGET_PLATFORM` – Target platform being built for.
+> - `DEFAULT_MODEL_PATH` – The default model path to use if the use-case specific `${use_case}_MODEL_TFLITE_PATH` is not
+>   set in the build arguments.
+> - `TARGET_NAME` – The name of the executable.
+> - `use_case` – The name of the current use-case.
+> - `UC_SRC` – A list of use-case sources.
+> - `UC_INCLUDE` – The path to the use-case headers.
+> - `ETHOS_U55_ENABLED` – The flag indicating if the current build supports Ethos-U55.
+> - `TARGET_PLATFORM` – The target platform being built for.
 > - `TARGET_SUBSYSTEM` – If target platform supports multiple subsystems, this is the name of the subsystem.
 > - All standard build options.
->   - `CMAKE_CXX_FLAGS` and `CMAKE_C_FLAGS` – compilation flags.
->   - `CMAKE_EXE_LINKER_FLAGS` – linker flags.
+>   - `CMAKE_CXX_FLAGS` and `CMAKE_C_FLAGS` – The compilation flags.
+>   - `CMAKE_EXE_LINKER_FLAGS` – The linker flags.
 
-For the hello world use-case it will be enough to create
-`helloworld.cmake` file and set DEFAULT_MODEL_PATH:
+For the `hello_world` use-case, it is enough to create a `helloworld.cmake` file and set the `DEFAULT_MODEL_PATH`, like
+so:
 
 ```cmake
 if (ETHOS_U55_ENABLED EQUAL 1)
@@ -720,13 +692,12 @@
     )
 ```
 
-This ensures that the model path pointed by `${use_case}_MODEL_TFLITE_PATH` is converted to a C++ array and is picked
-up by the build system. More information on auto-generations is available under section
+This ensures that the model path pointed to by `${use_case}_MODEL_TFLITE_PATH` is converted to a C++ array and is picked
+up by the build system. More information on auto-generation is available in the section:
 [Automatic file generation](./building.md#Automatic-file-generation).
 
-To build you application follow the general instructions from
-[Add Custom inputs](./building.md#add-custom-inputs) and specify the name of the use-case in the
-build command:
+To build your application, follow the general instructions from [Add Custom inputs](./building.md#add-custom-inputs) and
+then specify the name of the use-case in the build command, like so:
 
 ```commandline
 cmake .. \
@@ -736,8 +707,7 @@
   -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-armclang.cmake
 ```
 
-As a result, `ethos-u-hello_world.axf` should be created, MPS3 build
-will also produce `sectors/hello_world` directory with binaries and
-`sectors/images.txt` to be copied to the board MicroSD card.
+As a result, the file `ethos-u-hello_world.axf` is created. The MPS3 build also produces the `sectors/hello_world`
+directory with binaries and the file `sectors/images.txt` to be copied to the MicroSD card on the board.
 
-Next section of the documentation: [Testing and benchmarking](testing_benchmarking.md).
+The next section of the documentation covers: [Testing and benchmarking](testing_benchmarking.md).
diff --git a/docs/sections/deployment.md b/docs/sections/deployment.md
index b852887..caa3859 100644
--- a/docs/sections/deployment.md
+++ b/docs/sections/deployment.md
@@ -1,51 +1,57 @@
 # Deployment
 
 - [Deployment](#deployment)
-  - [Fixed Virtual Platform](#fixed-virtual-platform)
+  - [Fixed Virtual Platform (FVP)](#fixed-virtual-platform-fvp)
     - [Setting up the MPS3 Arm Corstone-300 FVP](#setting-up-the-mps3-arm-corstone-300-fvp)
     - [Deploying on an FVP emulating MPS3](#deploying-on-an-fvp-emulating-mps3)
   - [MPS3 board](#mps3-board)
+    - [MPS3 board top-view](#mps3-board-top-view)
     - [Deployment on MPS3 board](#deployment-on-mps3-board)
 
-The sample application for Arm® Ethos™-U55 can be deployed on two
-target platforms, both of which implement the Arm® Corstone™-300 design (see
-<https://www.arm.com/products/iot/soc/corstone-300>):
+The sample application for Arm® *Ethos™-U55* can be deployed on two target platforms:
 
 - A physical Arm MPS3 FPGA prototyping board
 
 - An MPS3 FVP
 
-## Fixed Virtual Platform
+Both implement the Arm® *Corstone™-300* design. For further information, please refer to:
+[Arm Corstone-300](https://www.arm.com/products/iot/soc/corstone-300).
 
-The FVP is available publicly from [Arm Ecosystem FVP downloads
-](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
-Download the correct archive from the list under `Arm Corstone-300`. We need the one which:
+## Fixed Virtual Platform (FVP)
 
-- Emulates MPS3 board (not for MPS2 FPGA board)
-- Contains support for Arm® Ethos™-U55
+The FVP is available publicly from the following page:
+[Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+
+Please ensure that you download the correct archive from the list under `Arm Corstone-300`. You need the one which:
+
+- Emulates the MPS3 board and *not* the MPS2 FPGA board,
+- Contains support for Arm® *Ethos™-U55*.
 
 ### Setting up the MPS3 Arm Corstone-300 FVP
 
-For Ethos-U55 sample application, please download the MPS3 version of the
-Arm® Corstone™-300 model that contains Ethos-U55 and Arm® Cortex®-M55. The model is
-currently only supported on Linux based machines. To install the FVP:
+For the *Ethos-U55* sample application, please download the MPS3 version of the Arm® *Corstone™-300* model that contains
+both the *Ethos-U55* and *Arm® Cortex®-M55*. The model is currently only supported on Linux-based machines.
 
-- Unpack the archive
+To install the FVP:
 
-- Run the install script in the extracted package
+- Unpack the archive.
+
+- Run the install script in the extracted package:
 
     `./FVP_Corstone_SSE-300_Ethos-U55.sh`
 
-- Follow the instructions to install the FVP to your desired location
+- Follow the instructions to install the FVP to your preferred location.
 
 ### Deploying on an FVP emulating MPS3
 
-This section assumes that the FVP has been installed (see [Setting up the MPS3 Arm Corstone-300 FVP](#setting-up-the-mps3-arm-corstone-300-fvp)) to the user's home directory `~/FVP_Corstone_SSE-300_Ethos-U55`.
+This section assumes that the FVP has been installed (see
+[Setting up the MPS3 Arm Corstone-300 FVP](#setting-up-the-mps3-arm-corstone-300-fvp))
+to the home directory of the user: `~/FVP_Corstone_SSE-300_Ethos-U55`.
 
-The installation, typically, will have the executable under `~/FVP_Corstone_SSE-300_Ethos-U55/model/<OS>_<compiler-version>/`
-directory. For the example below, we assume it to be `~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4`.
+The installation typically has the executable under the `~/FVP_Corstone_SSE-300_Ethos-U55/model/<OS>_<compiler-version>/`
+directory. For the example below, we assume it is: `~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4`.
 
-To run a use case on the FVP, from the [Build directory](../sections/building.md#create-a-build-directory):
+To run a use-case on the FVP, from the [Build directory](../sections/building.md#create-a-build-directory):
 
 ```commandline
 ~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-<use_case>.axf
@@ -59,15 +65,15 @@
     ALL RIGHTS RESERVED
 ```
 
-This will also launch a telnet window with the sample application's standard output and error log entries containing
-information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
-the input and output tensor sizes of the model compiled into the executable binary.
+This also launches a telnet window with the standard output and error log entries of the sample application. These
+contain information about the pre-built application version, the TensorFlow Lite Micro library version used, and the
+data type, as well as the input and output tensor sizes of the model compiled into the executable binary.
 
-> **Note:** For details on the specific use-case follow the instructions in the corresponding documentation.
+> **Note:** For details on the specific use-case, follow the instructions in the corresponding documentation.
 
-After the application has started it outputs a menu and waits for the user input from telnet terminal.
+After starting, the application outputs a menu and waits for user input from the telnet terminal.
 
-For example, the image classification use case can be started by:
+For example, the image classification use-case can be started by using:
 
 ```commandline
 ~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-img_class.axf
@@ -77,20 +83,21 @@
 
 ![FVP Terminal](../media/fvpterminal.png)
 
-The FVP supports many command line parameters:
+The FVP supports many command-line parameters, such as:
 
-- passed by using `-C <param>=<value>`. The most important ones are:
-  - `ethosu.num_macs`: Sets the Ethos-U55 configuration for the model. Valid parameters are `32`, `64`, `256`,
-    and the default one `128`. The number signifies the 8x8 MACs performed per cycle count available on the hardware.
-  - `cpu0.CFGITCMSZ`: ITCM size for the Cortex-M CPU. Size of ITCM is pow(2, CFGITCMSZ - 1) KB
-  - `cpu0.CFGDTCMSZ`: DTCM size for the Cortex-M CPU. Size of DTCM is pow(2, CFGDTCMSZ - 1) KB
-  - `mps3_board.telnetterminal0.start_telnet` : Starts the telnet session if nothing connected.
-  - `mps3_board.uart0.out_file`: Sets the output file to hold data written by the UART
-    (use '-' to send all output to stdout, empty by default).
-  - `mps3_board.uart0.shutdown_on_eot`: Sets to shutdown simulation when a EOT (ASCII 4) char is transmitted.
-  - `mps3_board.visualisation.disable-visualisation`: Enables or disables visualisation (disabled by default).
+- Those passed by using `-C <param>=<value>`. The most important ones are:
+  - `ethosu.num_macs`: Sets the *Ethos-U55* configuration for the model. Valid parameters are `32`, `64`, `256`, and the
+    default one `128`. The number signifies the 8x8 MACs that are performed per cycle-count and that are available on
+    the hardware.
+  - `cpu0.CFGITCMSZ`: The ITCM size for the *Cortex-M* CPU. The size of the ITCM is *pow(2, CFGITCMSZ - 1)* KB.
+  - `cpu0.CFGDTCMSZ`: The DTCM size for the *Cortex-M* CPU. The size of the DTCM is *pow(2, CFGDTCMSZ - 1)* KB.
+  - `mps3_board.telnetterminal0.start_telnet`: Starts the telnet session if nothing is connected.
+  - `mps3_board.uart0.out_file`: Sets the output file to hold the data written by the UART. Use `'-'` to send all output
+    to `stdout`. It is empty by default.
+  - `mps3_board.uart0.shutdown_on_eot`: Shuts down the simulation when an `EOT (ASCII 4)` character is transmitted.
+  - `mps3_board.visualisation.disable-visualisation`: Enables, or disables, visualization. It is disabled by default.
 
-  To start the model in `128` mode for Ethos-U55:
+  To start the model in `128` mode for *Ethos-U55*:
 
     ```commandline
     ~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-img_class.axf -C ethosu.num_macs=128
@@ -112,89 +119,79 @@
 
 ## MPS3 board
 
-> **Note:**  Before proceeding, make sure you have the MPS3 board powered on,
-and USB A to B connected between your machine and the MPS3.
-The connector on the MPS3 is marked as "Debug USB".
+> **Note:**  Before proceeding, make sure that you have the MPS3 board powered on, and a USB A to B cable connected
+> between your machine and the MPS3. The connector on the MPS3 is marked as "Debug USB".
 
 ![MPS3](../media/mps3.png)
 
-1. MPS3 board top view.
+### MPS3 board top-view
 
-Once the board has booted, the micro SD card will enumerate as a mass
-storage device. On most systems this will be automatically mounted, but
-you might need to mount it manually.
+Once the board has booted, the micro SD card is enumerated as a mass storage device. On most systems, this is
+automatically mounted. However, manual mounting is sometimes required.
 
-Also, there should be four serial-over-USB ports available for use via
-this connection. On Linux based machines, these would typically be
-*/dev/ttyUSB\<n\>* to */dev/ttyUSB\<n+3\>*.
+Also, check for four serial-over-USB ports that are available for use through this connection. On Linux-based machines,
+these would typically be */dev/ttyUSB\<n\>* to */dev/ttyUSB\<n+3\>*.
 
-The default configuration for all of them is 115200, 8/N/1 (15200 bauds,
-8 bits, no parity and 1 stop bit) with no flow control.
+The default configuration for all of them is `115200`, `8/N/1`. That is, 115200 baud, 8 bits, no parity, and one stop
+bit, with no flow control.
 
-> **Note:** For Windows machines, additional FTDI drivers might need to be installed
-for these serial ports to be available.
-For more information on getting started with an MPS3 board, please refer to
-<https://developer.arm.com/-/media/Arm%20Developer%20Community/PDF/MPS3GettingStarted.pdf>
+> **Note:** For Windows machines, extra FTDI drivers may be required for these serial ports to be available.
+
+For more information on getting started with an MPS3 board, please refer to:
+[MPS3 Getting Started](https://developer.arm.com/-/media/Arm%20Developer%20Community/PDF/MPS3GettingStarted.pdf).
 
 ### Deployment on MPS3 board
 
-> **NOTE**: These instructions are valid only if the evaluation is being
- done using the MPS3 FPGA platform using `SSE-300`.
+> **Note:** These instructions are valid only if the evaluation is being done on the MPS3 FPGA platform using
+> `SSE-300`.
 
-To run the application on MPS3 platform, firstly it's necessary to make sure
-that the platform has been set up using the correct configuration.
-For details, on platform set up, please see the relevant documentation. For `Arm Corstone-300`, this is available
-[here](https://developer.arm.com/-/media/Arm%20Developer%20Community/PDF/DAI0547B_SSE300_PLUS_U55_FPGA_for_mps3.pdf?revision=d088d931-03c7-40e4-9045-31ed8c54a26f&la=en&hash=F0C7837C8ACEBC3A0CF02D871B3A6FF93E09C6B8).
+To run the application on the MPS3 platform, you must first ensure that the platform has been set up using the correct
+configuration.
 
-For MPS3 board, instead of loading the axf file directly, the executable blobs
-generated under the *sectors/<use_case>* subdirectory need to be
-copied over to the MP3 board's micro SD card. Also, *sectors/images.txt* file is
-used by the MPS3 to understand which memory regions the blobs are to be loaded
-into.
+For details on platform set-up, please see the relevant documentation. For the Arm `Corstone-300`, the PDF is available
+here: [Arm Developer](https://developer.arm.com/-/media/Arm%20Developer%20Community/PDF/DAI0547B_SSE300_PLUS_U55_FPGA_for_mps3.pdf?revision=d088d931-03c7-40e4-9045-31ed8c54a26f&la=en&hash=F0C7837C8ACEBC3A0CF02D871B3A6FF93E09C6B8).
 
-Once the USB A <--> B cable between the MPS3 and the development machine
-is connected and the MPS3 board powered on, the board should enumerate
-as a mass storage device over this USB connection.
-There might be two devices also, depending on the version of the board
-you are using. The device named `V2M-MPS3` or `V2MMPS3` is the `SD card`.
+For the MPS3 board, instead of loading the `axf` file directly, copy the executable blobs generated under the
+`sectors/<use_case>` subdirectory to the micro SD card located on the board. Also, the `sectors/images.txt` file is used
+by the MPS3 to understand which memory regions the blobs must be loaded into.
 
-If the axf/elf file is within the ITCM load size limit, it can be copied into
-the FPGA memory directly without having to break it down into separate load
-region specific blobs. However, with neural network models exceeding this size,
-it becomes necessary to follow this approach.
+Once the USB A to USB B cable between the MPS3 and the development machine is connected, and the MPS3 board powered on,
+the board enumerates as a mass storage device over this USB connection.
 
-1. For example, the image classification use case will produce:
+Depending on the version of the board you are using, there might be two devices listed. The device named `V2M-MPS3`, or
+`V2MMPS3`, is the SD card.
+
+If the `axf` or `elf` file is within the ITCM load size limit, it can be copied into the FPGA memory directly without
+having to break it down into separate load region-specific blobs. However, if the neural network model exceeds this
+size, you must use the following approach:
+
+1. For example, the image classification use-case produces:
 
     ```tree
     ./bin/sectors/
         └── img_class
-            ├── dram.bin
+            ├── ddr.bin
             └── itcm.bin
     ```
 
-    For example, if the micro SD card is mounted at
-    /media/user/V2M-MPS3/:
+    If the micro SD card is mounted at `/media/user/V2M-MPS3/`, then use:
 
     ```commandline
     cp -av ./bin/sectors/img_class/* /media/user/V2M-MPS3/SOFTWARE/
     ```
 
-2. The `./bin/sectors/images.txt` file needs to be copied
-over to the MPS3. The exact location for the destination will depend
-on the MPS3 board's version and the application note for the bit
-file in use.
-For example, for MPS3 board hardware revision C, using an
-application note directory named "ETHOSU", to replace the images.txt
-file:
+2. The `./bin/sectors/images.txt` file must be copied over to the MPS3. The exact location for the destination depends
+   on the version of the MPS3 board and the application note for the bit file in use.
+
+   For example, revision C of the MPS3 board hardware uses an application note directory named `ETHOSU`. To replace the
+   `images.txt` file, use:
 
     ```commandline
     cp ./bin/sectors/images.txt /media/user/V2M-MPS3/MB/HBI0309C/ETHOSU/images.txt
     ```
 
-3. Open the first serial port available from MPS3, for example,
-"/dev/ttyUSB0". This can be typically done using minicom, screen or
-Putty application. Make sure the flow control setting is switched
-off.
+3. Open the first serial port available from the MPS3. For example, `/dev/ttyUSB0`. This can typically be done using the
+   minicom, screen, or PuTTY applications. Make sure that the flow control setting is switched off:
 
     ```commandline
    minicom -D /dev/ttyUSB0
@@ -209,15 +206,13 @@
     Cmd>
     ```
 
-4. In another terminal, open the second serial port, for example,
-    "/dev/ttyUSB1":
+4. In another terminal, open the second serial port. For example: `/dev/ttyUSB1`:
 
     ```commandline
    minicom -D /dev/ttyUSB1
     ```
 
-5. On the first serial port, issue a "reboot" command and press the
-    return key
+5. On the first serial port, issue a "reboot" command and then press the return key:
 
     ```commandline
     $ Cmd> reboot
@@ -234,8 +229,8 @@
     Configuring motherboard (rev C, var A)...
     ```
 
-    This will go on to reboot the board and prime the application to run by
-    flashing the binaries into their respective FPGA memory locations. For example:
+    This goes on to reboot the board and prime the application to run by flashing the binaries into their respective
+    FPGA memory locations. For example:
 
     ```log
     Reading images file \MB\HBI0309C\ETHOSU\images.txt
@@ -245,25 +240,24 @@
 
     File \SOFTWARE\itcm.bin written to memory address 0x00000000
     Image loaded from \SOFTWARE\itcm.bin
-    Writing File \SOFTWARE\dram.bin to Address 0x08000000
+    Writing File \SOFTWARE\ddr.bin to Address 0x08000000
 
     ..........................................................................
 
 
-    File \SOFTWARE\dram.bin written to memory address 0x08000000
-    Image loaded from \SOFTWARE\dram.bin
+    File \SOFTWARE\ddr.bin written to memory address 0x08000000
+    Image loaded from \SOFTWARE\ddr.bin
     ```
 
-6. When the reboot from previous step is completed, issue a reset
-        command on the command prompt.
+6. When the reboot from the previous step is complete, issue a reset command on the command prompt:
 
     ``` commandline
     $ Cmd> reset
     ```
 
-    This will trigger the application to start, and the output should be visible on the second serial connection.
+    This triggers the application to start, and the output becomes visible on the second serial connection.
 
-7. On the second serial port, output similar to section 2.2 should be visible:
+7. On the second serial port, output similar to that in section 2.2 is visible, like so:
 
     ```log
     INFO - Setting up system tick IRQ (for NPU)
@@ -276,4 +270,4 @@
     ...
     ```
 
-Next section of the documentation: [Implementing custom ML application](customizing.md).
+The next section of the documentation details: [Implementing custom ML application](customizing.md).
diff --git a/docs/sections/memory_considerations.md b/docs/sections/memory_considerations.md
index 4727711..970be3a 100644
--- a/docs/sections/memory_considerations.md
+++ b/docs/sections/memory_considerations.md
@@ -7,37 +7,37 @@
     - [Total Off-chip Flash used](#total-off-chip-flash-used)
   - [Non-default configurations](#non-default-configurations)
   - [Tensor arena and neural network model memory placement](#tensor-arena-and-neural-network-model-memory-placement)
-  - [Memory usage for ML use cases](#memory-usage-for-ml-use-cases)
+  - [Memory usage for ML use-cases](#memory-usage-for-ml-use-cases)
   - [Memory constraints](#memory-constraints)
 
 ## Introduction
 
-This section provides useful details on how the Machine Learning use cases of the
-evaluation kit use the system memory. Although the guidance provided here is with
-respect to the Arm® Corstone™-300 system, it is fairly generic and is applicable
-for other platforms too. Arm® Corstone™-300 is composed of Arm® Cortex™-M55 and
-Arm® Ethos™-U55 and the memory map for the Arm® Cortex™-M55 core can be found in the
-[Appendix 1](./appendix.md).
+This section provides useful details on how the Machine Learning use-cases of the evaluation kit use the system memory.
 
-The Arm® Ethos™-U55 NPU interacts with the system via two AXI interfaces. The first one
-is envisaged to be the higher bandwidth and/or lower latency interface. In a typical
-system would this be wired to an SRAM as it would be required to service frequent R/W
-traffic. The second interface is expected to have higher latency and/or lower bandwidth
-characteristics and would typically be wired to a flash device servicing read-only
-traffic. In this configuration, the Arm® Cortex™-M55 CPU and Arm® Ethos™-U55 NPU read the contents
-of the neural network model (`.tflite file`) from the flash memory region with Arm® Ethos™-U55
-requesting these read transactions over its second AXI bus.
-The input and output tensors, along with any intermediate computation buffers, would be
-placed on SRAM and both the Arm® Cortex™-M55 CPU and Arm® Ethos™-U55 NPU would be reading/writing
-to this region when running an inference. The Arm® Ethos™-U55 NPU will be requesting these R/W
-transactions over the first AXI bus.
+Although the guidance provided here concerns the Arm® *Corstone™-300* system, it is fairly generic and is also
+applicable to other platforms. The Arm® *Corstone™-300* is composed of both the Arm® *Cortex™-M55* and the Arm®
+*Ethos™-U55*. The memory map for the Arm® *Cortex™-M55* core can be found in the [Appendix](./appendix.md).
+
+The Arm® *Ethos™-U55* NPU interacts with the system through two AXI interfaces. The first one is envisaged to be the
+higher-bandwidth, lower-latency interface. In a typical system, this is wired to an SRAM, as it is required to service
+frequent Read and Write traffic.
+
+The second interface is expected to have a higher-latency, lower-bandwidth characteristic, and is typically wired to a
+flash device servicing read-only traffic. In this configuration, the Arm® *Cortex™-M55* CPU and Arm® *Ethos™-U55* NPU
+read the contents of the neural network model, or the `.tflite` file, from the flash memory region, with the Arm®
+*Ethos™-U55* requesting these read transactions over its second AXI bus.
+
+The input and output tensors, along with any intermediate computation buffers, are placed on SRAM. Therefore, both the
+Arm® *Cortex™-M55* CPU and Arm® *Ethos™-U55* NPU would be reading, or writing, to this region when running an inference.
+The Arm® *Ethos™-U55* NPU requests these Read and Write transactions over the first AXI bus.
 
 ## Understanding memory usage from Vela output
 
 ### Total SRAM used
 
-When the neural network model is compiled with Vela, a summary report that includes memory
-usage is generated. For example, compiling the keyword spotting model [ds_cnn_clustered_int8](https://github.com/ARM-software/ML-zoo/blob/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/ds_cnn_clustered_int8.tflite)
+When the neural network model is compiled with Vela, a summary report that includes memory usage is generated. For
+example, compiling the keyword spotting model
+[ds_cnn_clustered_int8](https://github.com/ARM-software/ML-zoo/blob/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/ds_cnn_clustered_int8.tflite)
 with Vela produces, among others, the following output:
 
 ```log
@@ -45,86 +45,94 @@
 Total Off-chip Flash used                      430.78 KiB
 ```
 
-The `Total SRAM used` here indicates the required memory to store the `tensor arena` for the
-TensorFlow Lite Micro framework. This is the amount of memory required to store the input,
-output and intermediate buffers. In the example above, the tensor arena requires 70.77 KiB
-of available SRAM. Note that Vela can only estimate the SRAM required for graph execution.
-It has no way of estimating the memory used by internal structures from TensorFlow Lite
-Micro framework. Therefore, it is recommended to top this memory size by at least 2KiB and
-carve out the `tensor arena` of this size and place it on the target system's SRAM.
+The `Total SRAM used` here shows the required memory to store the `tensor arena` for the TensorFlow Lite Micro
+framework. This is the amount of memory required to store the input, output, and intermediate buffers. In the preceding
+example, the tensor arena requires 70.77 KiB of available SRAM.
+
+> **Note:** Vela can only estimate the SRAM required for graph execution. It has no way of estimating the memory used by
+> internal structures from TensorFlow Lite Micro framework.
+
+Therefore, we recommend that you top up this memory size by at least 2 KiB. We also recommend that you carve out a
+`tensor arena` of this increased size, and then place it in the SRAM of the target system.
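+
+As a rough sketch, carving out the `tensor arena` with this headroom could look as follows. The section name and sizes
+are illustrative assumptions; the actual placement is controlled by the target's linker script and build options:
+
+```C++
+#include <cstddef>
+#include <cstdint>
+
+/* Illustrative sizing: the Vela estimate rounded up, plus headroom for
+ * TensorFlow Lite Micro's internal structures. */
+constexpr size_t kVelaSramEstimate = 71 * 1024;  /* rounded up from 70.77 KiB */
+constexpr size_t kHeadroom         = 2 * 1024;
+constexpr size_t kActivationBufSz  = kVelaSramEstimate + kHeadroom;
+
+/* The section name is a hypothetical example; the linker script decides
+ * where this region ends up (SRAM for the default configuration). */
+static uint8_t tensorArena[kActivationBufSz] __attribute__((aligned(16), section("activation_buf")));
+```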
 
 ### Total Off-chip Flash used
 
-The `Total Off-chip Flash` parameter indicates the minimum amount of flash required to store
-the neural network model. In the example above, the system needs to have a minimum of 430.78
-KiB of available flash memory to store `.tflite` file contents.
+The `Total Off-chip Flash` parameter indicates the minimum amount of flash required to store the neural network model.
+In the preceding example, the system must have a minimum of 430.78 KiB of available flash memory to store the `.tflite`
+file contents.
 
-> Note: For Arm® Corstone™-300 system we use the DDR region as a flash memory. The timing
-> adapter sets up AXI bus wired to the DDR to mimic bandwidth and latency
-> characteristics of a flash memory device.
+> **Note:** The Arm® *Corstone™-300* system uses the DDR region as a flash memory. The timing adapter sets up the AXI
+> bus that is wired to the DDR to mimic both bandwidth and latency characteristics of a flash memory device.
 
 ## Non-default configurations
 
-The above example outlines a typical configuration, and this corresponds to the default Vela
-setting. However, the system SRAM can also be used to store the neural network model along with
-the `tensor arena`. Vela supports optimizing the model for this configuration with its `Sram_Only`
-memory mode. See [vela.ini](../../scripts/vela/vela.ini). To make use of a neural network model
-optimised for this configuration, the linker script for the target platform would need to be
-changed. By default, the linker scripts are set up to support the default configuration only. See
-[Memory constraints](#memory-constraints) for snippet of a script.
+The preceding example outlines a typical configuration, and this corresponds to the default Vela setting. However, the
+system SRAM can also be used to store the neural network model along with the `tensor arena`. Vela supports optimizing
+the model for this configuration with its `Sram_Only` memory mode.
 
-> Note
+For further information, please refer to: [vela.ini](../../scripts/vela/vela.ini).
+
+To make use of a neural network model that is optimized for this configuration, the linker script for the target
+platform must be changed. By default, the linker scripts are set up to support the default configuration only.
+
+For script snippets, please refer to: [Memory constraints](#memory-constraints).
+
+> **Note:**
 >
-> 1. The default configuration is represented by `Shared_Sram` memory mode.
-> 2. `Dedicated_Sram` mode is only applicable for Arm® Ethos™-U65.
+> 1. The `Shared_Sram` memory mode represents the default configuration.
+> 2. The `Dedicated_Sram` mode is only applicable for the Arm® *Ethos™-U65*.
 
 ## Tensor arena and neural network model memory placement
 
-The evaluation kit uses the name `activation buffer` for what is called `tensor arena` in the
-TensorFlow Lite Micro framework. Every use case application has a corresponding
-`<use_case_name>_ACTIVATION_BUF_SZ` parameter that governs the maximum available size of the
-`activation buffer` for that particular use case.
+The evaluation kit uses the name `activation buffer` for the `tensor arena` in the TensorFlow Lite Micro framework.
+Every use-case application has a corresponding `<use_case_name>_ACTIVATION_BUF_SZ` parameter that governs the maximum
+available size of the `activation buffer` for that particular use-case.
 
-The linker script is set up to place this memory region in SRAM. However, if the memory required
-is more than what the target platform supports, this buffer needs to be placed on flash instead.
-Every target platform has a profile definition in the form of a CMake file. See [Corstone-300
-profile](../../scripts/cmake/subsystem-profiles/corstone-sse-300.cmake) for example. The parameter
-`ACTIVATION_BUF_SRAM_SZ` defines the maximum SRAM size available for the platform. This is
-propagated through the build system and if the `<use_case_name>_ACTIVATION_BUF_SZ` for a given
-use case is more than the `ACTIVATION_BUF_SRAM_SZ` for the target build platform, the `activation buffer`
-is placed on the flash instead.
+The linker script is set up to place this memory region in SRAM. However, if the memory required is more than what the
+target platform supports, this buffer needs to be placed on flash instead. Every target platform has a profile
+definition in the form of a `CMake` file.
 
-The neural network model is always placed in the flash region. However, this can be changed easily
-in the linker script.
+For further information and an example, please refer to: [Corstone-300 profile](../../scripts/cmake/subsystem-profiles/corstone-sse-300.cmake).
 
-## Memory usage for ML use cases
+The parameter `ACTIVATION_BUF_SRAM_SZ` defines the maximum SRAM size available for the platform. This is propagated
+through the build system. If the `<use_case_name>_ACTIVATION_BUF_SZ` for a given use-case is *more* than the
+`ACTIVATION_BUF_SRAM_SZ` for the target build platform, then the `activation buffer` is placed on the flash memory
+instead.
 
-The following numbers have been obtained from Vela for `Shared_Sram` memory mode and the SRAM and
-flash memory requirements for the different use cases of the evaluation kit. Note that the SRAM usage
-does not include memory used by TensorFlow Lite Micro and this will need to be topped up as explained
-under [Total SRAM used](#total-sram-used).
+The neural network model is always placed in the flash region. However, this can be changed in the linker script.
 
-- [Keyword spotting model](https://github.com/ARM-software/ML-zoo/tree/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8) requires
+## Memory usage for ML use-cases
+
+The following SRAM and flash memory requirements for the different use-cases of the evaluation kit have been obtained
+from Vela for the `Shared_Sram` memory mode.
+
+> **Note:** The SRAM usage does not include memory used by TensorFlow Lite Micro and must be topped up as explained
+> under [Total SRAM used](#total-sram-used).
+
+- [Keyword spotting model](https://github.com/ARM-software/ML-zoo/tree/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8)
+  requires
   - 70.7 KiB of SRAM
   - 430.7 KiB of flash memory.
 
-- [Image classification model](https://github.com/ARM-software/ML-zoo/tree/master/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8) requires
+- [Image classification model](https://github.com/ARM-software/ML-zoo/tree/master/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8)
+  requires
   - 638.6 KiB of SRAM
   - 3.1 MB of flash memory.
 
-- [Automated speech recognition](https://github.com/ARM-software/ML-zoo/tree/1a92aa08c0de49a7304e0a7f3f59df6f4fd33ac8/models/speech_recognition/wav2letter/tflite_pruned_int8) requires
+- [Automated speech recognition](https://github.com/ARM-software/ML-zoo/tree/1a92aa08c0de49a7304e0a7f3f59df6f4fd33ac8/models/speech_recognition/wav2letter/tflite_pruned_int8)
+  requires
   - 655.16 KiB of SRAM
   - 13.42 MB of flash memory.
 
 ## Memory constraints
 
-Both the MPS3 Fixed Virtual Platform and the MPS3 FPGA platform share the linker script for Arm® Corstone™-300
-design. The design is set by the CMake configuration parameter `TARGET_SUBSYSTEM` as described in
+Both the MPS3 Fixed Virtual Platform (FVP) and the MPS3 FPGA platform share the linker script for Arm® *Corstone™-300*
+design. The CMake configuration parameter `TARGET_SUBSYSTEM` sets the design, which is described in:
 [build options](./building.md#build-options).
 
-The memory map exposed by this design is presented in [Appendix 1](./appendix.md). This can be used as a reference
-when editing the linker script, especially to make sure that region boundaries are respected. The snippet from the
-scatter file is presented below:
+The memory map exposed by this design is located in the [Appendix](./appendix.md), and can be used as a reference when
+editing the linker script. This is useful to make sure that the region boundaries are respected. The snippet from the
+scatter file is as follows:
 
 ```log
 ;---------------------------------------------------------
@@ -189,7 +197,7 @@
     ; size required by the network is bigger than the
     ; SRAM size available, it is accommodated here.
     ;-----------------------------------------------------
-    dram.bin        0x70000000 ALIGN 16         0x02000000
+    ddr.bin        0x70000000 ALIGN 16         0x02000000
     {
         ; nn model's baked in input matrices
         *.o (ifm)
@@ -225,14 +233,15 @@
 
 ```
 
-It is worth noting that for the Arm® Corstone™-300 FPGA and FVP implementations, only the BRAM,
-internal SRAM and DDR memory regions are accessible to the Arm® Ethos™-U55 NPU block. In the above
-snippet, the internal SRAM region memory can be seen to be utilized by activation buffers with
-a limit of 4MiB. If used by a Vela optimised neural network model, this region will be written to
-by the Arm® Ethos™-U55 NPU block frequently. A bigger region of memory for storing the neural
-network model is placed in the DDR/flash region under LOAD_REGION_1. The two load regions are necessary
-as the MPS3's motherboard configuration controller limits the load size at address 0x00000000 to 1MiB.
-This has implications on how the application **is deployed** on MPS3 as explained under the section
-[Deployment on MPS3](./deployment.md#mps3-board).
+> **Note:** With Arm® *Corstone™-300* FPGA and FVP implementations, only the BRAM, internal SRAM, and DDR memory regions
+> are accessible to the Arm® Ethos™-U55 NPU block.
 
-Next section of the documentation: [Troubleshooting](troubleshooting.md).
+In the preceding snippet, the internal SRAM region memory is utilized by the activation buffers with a limit of 4MiB. If
+used by a Vela optimized neural network model, then the Arm® *Ethos™-U55* NPU writes to this region frequently.
+
+A bigger region of memory for storing the neural network model is placed in the DDR, or flash, region under
+`LOAD_REGION_1`. The two load regions are necessary as the motherboard configuration controller of the MPS3 limits the
+load size at address `0x00000000` to 1MiB. This has implications on how the application **is deployed** on MPS3, as
+explained under the following section: [Deployment on MPS3](./deployment.md#mps3-board).
+
+The next section of the documentation covers: [Troubleshooting](troubleshooting.md).
diff --git a/docs/sections/testing_benchmarking.md b/docs/sections/testing_benchmarking.md
index 904f2c9..6350f52 100644
--- a/docs/sections/testing_benchmarking.md
+++ b/docs/sections/testing_benchmarking.md
@@ -1,11 +1,12 @@
 # Testing and benchmarking
 
-- [Testing](#testing)
-- [Benchmarking](#benchmarking)
+- [Testing and benchmarking](#testing-and-benchmarking)
+  - [Testing](#testing)
+  - [Benchmarking](#benchmarking)
 
 ## Testing
 
-The `tests` folder has the following structure:
+The `tests` folder uses the following structure:
 
 ```tree
 .
@@ -20,15 +21,15 @@
     └── ...
 ```
 
-Where:
+The folders contain the following information:
 
-- `common`: contains tests for generic and common appplication functions.
-- `use_case`: contains all the use case specific tests in the respective folders.
-- `utils`: contains utilities sources used only within the tests.
+- `common`: The tests for generic and common application functions.
+- `use_case`: All use-case specific tests in their respective folders.
+- `utils`: Utility sources that are only used within the tests.
 
 When [configuring](./building.md#configuring-the-build-native-unit-test) and
-[building](./building.md#building-the-configured-project) for `native` target platform results of the build will
-be placed under `<build folder>/bin/` folder, for example:
+[building](./building.md#building-the-configured-project) for your `native` target platform, the results of the build
+are placed under the `<build folder>/bin/` folder. For example:
 
 ```tree
 .
@@ -38,7 +39,7 @@
 └── ethos-u-<usecase1>
 ```
 
-To execute unit-tests for a specific use-case in addition to the common tests:
+To execute unit-tests for a specific use-case, in addition to the common tests, use:
 
 ```commandline
 arm_ml_embedded_evaluation_kit-<use_case>-tests
@@ -53,17 +54,20 @@
    All tests passed (37 assertions in 7 test cases)
 ```
 
-Tests output could have `[ERROR]` messages, that's alright - they are coming from negative scenarios tests.
+> **Note:** Test outputs could contain `[ERROR]` messages. This is OK, as they come from negative scenario tests.
 
 ## Benchmarking
 
-Profiling is enabled by default when configuring the project. This will enable displaying:
+Profiling is enabled by default when configuring the project. Profiling enables you to display:
 
-- the active and idle NPU cycle counts when Arm® Ethos™-U55 is enabled (see `-DETHOS_U55_ENABLED` in
+- The active and idle NPU cycle counts when the Arm® *Ethos™-U55* is enabled. For more information, refer to the
+  `-DETHOS_U55_ENABLED` section in: [Build options](./building.md#build-options).
+- If CPU profiling is enabled, the CPU cycle counts and the time elapsed, in milliseconds, for inferences performed. For
+  further information, please refer to the `-DCPU_PROFILE_ENABLED` section in:
   [Build options](./building.md#build-options).
-- CPU cycle counts and/or in milliseconds elapsed for inferences performed if CPU profiling is enabled
-  (see `-DCPU_PROFILE_ENABLED` in [Build options](./building.md#build-options). This should be done only
-  when running on a physical FPGA board as the FVP does not contain a cycle-approximate or cycle-accurate Cortex-M model.
+
+> **Note:** Only do this when running on a physical FPGA board as the FVP does not contain a cycle-approximate or
+> cycle-accurate *Cortex-M* model.
 
 For example:
 
@@ -74,8 +78,8 @@
     Idle NPU cycles:   702
 ```
 
-- For MPS3 platform, the time duration in milliseconds is also reported when `-DCPU_PROFILE_ENABLED=1` is added to
-  CMake configuration command:
+- For the MPS3 platform, the time duration in milliseconds is also reported when `-DCPU_PROFILE_ENABLED=1` is added to
+  the CMake configuration command, like so:
 
 ```log
     Active NPU cycles: 5629033
@@ -84,4 +88,4 @@
     Time in ms:        210
 ```
 
-Next section of the documentation: [Memory Considerations](memory_considerations.md).
+The next section of the documentation refers to: [Memory Considerations](memory_considerations.md).
diff --git a/docs/sections/troubleshooting.md b/docs/sections/troubleshooting.md
index a4f60fb..8ab81eb 100644
--- a/docs/sections/troubleshooting.md
+++ b/docs/sections/troubleshooting.md
@@ -1,27 +1,26 @@
 # Troubleshooting
 
-- [Inference results are incorrect for my custom files](#inference-results-are-incorrect-for-my-custom-files)
-- [The application does not work with my custom model](#the-application-does-not-work-with-my-custom-model)
+- [Troubleshooting](#troubleshooting)
+  - [Inference results are incorrect for my custom files](#inference-results-are-incorrect-for-my-custom-files)
+  - [The application does not work with my custom model](#the-application-does-not-work-with-my-custom-model)
 
 ## Inference results are incorrect for my custom files
 
-Ensure that the files you are using match the requirements of the model
-you are using and that cmake parameters are set accordingly. More
-information on these cmake parameters is detailed in their separate
-sections. Note that preprocessing of the files could also affect the
-inference result, such as the rescaling and padding operations done for
-image classification.
+Ensure that the files you are using match the requirements of the model you are using, and that the cmake parameters
+are set accordingly. More information on these cmake parameters is detailed in their separate sections.
+
+> **Note:** Preprocessing of the files can also affect the inference result, such as the rescaling and padding
+> operations performed for image classification.
 
 ## The application does not work with my custom model
 
-Ensure that your model is in a fully quantized `.tflite` file format,
-either uint8 or int8, and has successfully been run through the Vela
-compiler.
+Ensure that your model is in a fully quantized `.tflite` file format, either `uint8` or `int8`, and that it has
+successfully been run through the Vela compiler.
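+
+As a quick, illustrative check, the tensor types of a candidate model can be inspected with the TensorFlow Lite Python
+interpreter. This sketch assumes a local Python environment with TensorFlow installed and is not part of this software
+project; `model.tflite` is a placeholder path.
+
+```python
+# Minimal sketch: report whether a .tflite model's input and output tensors are int8 or uint8.
+import numpy as np
+import tensorflow as tf
+
+interpreter = tf.lite.Interpreter(model_path="model.tflite")   # placeholder path to your model
+interpreter.allocate_tensors()
+
+for detail in interpreter.get_input_details() + interpreter.get_output_details():
+    dtype = detail["dtype"]
+    ok = dtype in (np.int8, np.uint8)
+    print(f"{detail['name']}: {np.dtype(dtype).name} ({'quantized' if ok else 'not fully quantized'})")
+```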
 
-Check that cmake parameters match your new models input requirements.
+Also, please check that the cmake parameters used match the input requirements of your new model.
 
-> **Note:** Vela tool is not available within this software project.
-It is a python tool available from <https://pypi.org/project/ethos-u-vela/>.
-The source code is hosted on <https://git.mlplatform.org/ml/ethos-u/ethos-u-vela.git/>.
+> **Note:** The Vela tool is not available within this software project. It is a separate Python tool that is available
+> from: <https://pypi.org/project/ethos-u-vela/>. The source code is hosted on
+> <https://git.mlplatform.org/ml/ethos-u/ethos-u-vela.git/>.
 
-Next section of the documentation: [Appendix](appendix.md).
+The next section of the documentation is: [Appendix](appendix.md).
diff --git a/docs/use_cases/ad.md b/docs/use_cases/ad.md
index a6e368c..d41f970 100644
--- a/docs/use_cases/ad.md
+++ b/docs/use_cases/ad.md
@@ -17,38 +17,40 @@
 
 ## Introduction
 
-This document describes the process of setting up and running the Arm® Ethos™-U55 Anomaly Detection example.
+This document describes the process of setting up and running the Arm® *Ethos™-U55* Anomaly Detection example.
 
-Use case code could be found in [source/use_case/ad](../../source/use_case/ad]) directory.
+The use-case code can be found in the following directory: [source/use_case/ad](../../source/use_case/ad).
 
 ### Preprocessing and feature extraction
 
-The Anomaly Detection model that is used with the Code Samples expects audio data to be preprocessed
-in a specific way before performing an inference. This section aims to provide an overview of the feature extraction
-process used.
+The Anomaly Detection model that is used with the Code Samples expects audio data to be preprocessed in a specific
+way before performing an inference.
 
-First the audio data is normalized to the range (-1, 1).
+Therefore, this section provides an overview of the feature extraction process used.
 
-Next, a window of 1024 audio samples are taken from the start of the audio clip. From these 1024 samples we calculate 64
+First, the audio data is normalized to the range (`-1`, `1`).
+
+Next, a window of 1024 audio samples is taken from the start of the audio clip. From these 1024 samples, we calculate 64
 Log Mel Energies that form part of a Log Mel Spectrogram.
 
 The window is shifted by 512 audio samples and another 64 Log Mel Energies are calculated. This is repeated until we
 have 64 sets of Log Mel Energies.
 
-This 64x64 matrix of values is resized by a factor of 2 resulting in a 32x32 matrix of values.
+This 64x64 matrix of values is then resized by a factor of two, resulting in a 32x32 matrix of values.
 
-The average of the training dataset is subtracted from this 32x32 matrix and an inference can then be performed.
+The average of the training dataset is then subtracted from this 32x32 matrix and an inference can now be performed.
 
-We start this process again but shifting the start by 20\*512=10240 audio samples. This keeps repeating until enough
+We start this process again, but shift the start by 20\*512=10240 audio samples. This keeps repeating until enough
 inferences have been performed to cover the whole audio clip.
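+
+The following Python sketch only illustrates the framing and resizing steps described above; it is not the project's
+C++ implementation, and the 64 Log Mel Energies are replaced by a crude placeholder.
+
+```python
+# Illustrative sketch of the windowing described above; the Log Mel computation is a crude placeholder.
+import numpy as np
+
+def log_energies_64(frame: np.ndarray) -> np.ndarray:
+    """Placeholder for the 64 Log Mel Energies of one 1024-sample frame (not a real Mel filter bank)."""
+    spectrum = np.abs(np.fft.rfft(frame)) ** 2
+    bands = np.array_split(spectrum[:512], 64)
+    return np.log(np.array([band.sum() for band in bands]) + 1e-10)
+
+def features_for_one_inference(audio: np.ndarray, start: int, train_mean: np.ndarray) -> np.ndarray:
+    audio = audio / np.max(np.abs(audio))                              # normalize to the range (-1, 1)
+    frames = [audio[start + i * 512 : start + i * 512 + 1024] for i in range(64)]
+    spectrogram = np.stack([log_energies_64(f) for f in frames])       # 64x64 matrix of values
+    resized = spectrogram.reshape(32, 2, 32, 2).mean(axis=(1, 3))      # resized by a factor of two -> 32x32
+    return resized - train_mean                                        # subtract the training-set average
+
+audio = np.random.uniform(-1.0, 1.0, 16000 * 10)     # stand-in for a 10-second clip sampled at 16 kHz
+train_mean = np.zeros((32, 32))                      # stand-in for the real training-set average
+first = features_for_one_inference(audio, start=0, train_mean=train_mean)
+second = features_for_one_inference(audio, start=10240, train_mean=train_mean)   # shifted by 20 * 512 samples
+print(first.shape, second.shape)                     # (32, 32) (32, 32)
+```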
 
 ### Postprocessing
 
-Softmax is applied to the result of each inference. Based on the machine ID of the wav clip being processed we look at a
-specific index in each output vector. An average of the negative value at this index across all the inferences performed
-for the audio clip is taken. If this average value is greater than a chosen threshold score, then the machine in the
-clip is not behaving anomalously. If the score is lower than the threshold then the machine in the clip is behaving
-anomalously.
+Softmax is then applied to the result of each inference. Based on the machine ID of the wav clip being processed, we
+look at a specific index in each output vector. An average of the negative value at this index across all the inferences
+performed for the audio clip is taken.
+
+If this average value is greater than a chosen threshold score, then the machine in the clip is not behaving
+anomalously. If the score is lower than the threshold, then the machine in the clip is behaving anomalously.
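+
+A minimal Python sketch of this scoring, assuming made-up inference outputs and a hypothetical machine-ID index, is
+shown below; it is not the project's C++ implementation.
+
+```python
+# Illustrative sketch of the averaging described above; outputs, index, and threshold handling are hypothetical.
+import numpy as np
+
+def softmax(x: np.ndarray) -> np.ndarray:
+    e = np.exp(x - x.max())
+    return e / e.sum()
+
+def clip_score(raw_outputs: list, machine_index: int) -> float:
+    """Average of the negative softmax value at the machine's index, across all inferences for the clip."""
+    return float(np.mean([-softmax(out)[machine_index] for out in raw_outputs]))
+
+outputs = [np.random.randn(8) for _ in range(5)]     # made-up raw outputs with 8 values each
+score = clip_score(outputs, machine_index=0)         # hypothetical index for the machine ID in this clip
+print(f"average score: {score:.3f}")                 # compared against the chosen threshold as described above
+```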
 
 ### Prerequisites
 
@@ -58,61 +60,66 @@
 
 ### Build options
 
-In addition to the already specified build option in the main documentation, Anomaly Detection use case adds:
+In addition to the already specified build option in the main documentation, the Anomaly Detection use-case adds:
 
-- `ad_MODEL_TFLITE_PATH` - Path to the NN model file in TFLite format. Model will be processed and included into
-the application axf
-    file. The default value points to one of the delivered set of models. Note that the parameters `ad_LABELS_TXT_FILE`,
-    `TARGET_PLATFORM` and `ETHOS_U55_ENABLED` should be aligned with the chosen model, i.e.:
-  - if `ETHOS_U55_ENABLED` is set to `On` or `1`, the NN model is assumed to be optimized. The model will naturally fall
-back to the Arm® Cortex®-M CPU if an unoptimized model is supplied.
+- `ad_MODEL_TFLITE_PATH` - Path to the NN model file in the `TFLite` format. The model is then processed and included in
+  the application `axf` file. The default value points to one of the delivered set of models.
+
+    Note that the parameters `ad_LABELS_TXT_FILE`, `TARGET_PLATFORM`, and `ETHOS_U55_ENABLED` must be aligned with the
+    chosen model. In other words:
+
+  - If `ETHOS_U55_ENABLED` is set to `On` or `1`, then the NN model is assumed to be optimized. The model naturally
+    falls back to the Arm® *Cortex®-M* CPU if an unoptimized model is supplied.
   - if `ETHOS_U55_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized
-model in this case will result in a runtime error.
+    model in this case results in a runtime error.
 
 - `ad_FILE_PATH`: Path to the directory containing audio files, or a path to single WAV file, to be used in the
-    application. The default value points to the resources/ad/samples folder containing the delivered set of audio clips.
+  application. The default value points to the `resources/ad/samples` folder containing the delivered set of audio
+  clips.
 
-- `ad_AUDIO_RATE`: Input data sampling rate. Each audio file from ad_FILE_PATH is preprocessed during the build to match
-NN model input requirements.
-    Default value is 16000.
+- `ad_AUDIO_RATE`: The input data sampling rate. Each audio file from `ad_FILE_PATH` is preprocessed during the build to
+  match the NN model input requirements. The default value is `16000`.
 
-- `ad_AUDIO_MONO`: If set to ON the audio data will be converted to mono. Default is ON.
+- `ad_AUDIO_MONO`: If set to `ON`, then the audio data is converted to mono. The default value is `ON`.
 
-- `ad_AUDIO_OFFSET`: Start loading audio data starting from this offset (in seconds). Default value is 0.
+- `ad_AUDIO_OFFSET`: Begin loading the audio data from this offset, defined in seconds. The default value is set to
+  `0`.
 
-- `ad_AUDIO_DURATION`: Length of the audio data to be used in the application in seconds. Default is 0 meaning the
-    whole audio file will be taken.
+- `ad_AUDIO_DURATION`: Length of the audio data to be used in the application in seconds. Default is `0`, meaning that
+  the whole audio file is used.
 
 - `ad_AUDIO_MIN_SAMPLES`: Minimum number of samples required by the network model. If the audio clip is shorter than
-    this number, it is padded with zeros. Default value is 16000.
+  this number, then it is padded with zeros. The default value is `16000`.
 
-- `ad_MODEL_SCORE_THRESHOLD`: Threshold value to be applied to average softmax score over the clip, if larger than this
-score we have an anomaly.
+- `ad_MODEL_SCORE_THRESHOLD`: Threshold value to be applied to the average Softmax score over the clip. If the score is
+  larger than this value, then there is an anomaly.
 
-- `ad_ACTIVATION_BUF_SZ`: The intermediate/activation buffer size reserved for the NN model. By default, it is set to
-    2MiB and should be enough for most models.
+- `ad_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model. By default, it is set
+  to 2MiB and is enough for most models.
 
-In order to build **ONLY** Anomaly Detection example application add to the `cmake` command line specified in [Building](../documentation.md#Building) `-DUSE_CASE_BUILD=ad`.
+In order to **ONLY** build the Anomaly Detection example application, add `-DUSE_CASE_BUILD=ad` to the `cmake` command
+line that is specified in: [Building](../documentation.md#Building).
 
 ### Build process
 
-> **Note:** This section describes the process for configuring the build for `MPS3: SSE-300` for different target
->platform see [Building](../documentation.md#Building).
+> **Note:** This section describes the process for configuring the build for the `MPS3: SSE-300`. To build for a
+> different target platform, please refer to: [Building](../documentation.md#Building).
 
-Create a build directory folder and navigate inside:
+Create a build directory folder and then navigate inside using:
 
 ```commandline
 mkdir build_ad && cd build_ad
 ```
 
-On Linux, execute the following command to build **only** Anomaly Detection application to run on the Ethos-U55 Fast Model when providing only the mandatory arguments for CMake configuration:
+On Linux, when providing only the mandatory arguments for CMake configuration, execute the following command to **only**
+build the Anomaly Detection application to run on the *Ethos-U55* Fast Model:
 
 ```commandline
 cmake ../ -DUSE_CASE_BUILD=ad
 ```
 
-To configure a build that can be debugged using Arm-DS, we can just specify
-the build type as `Debug` and use the `Arm Compiler` toolchain file:
+To configure a build that can be debugged using Arm DS, specify the build type as `Debug` and use the `Arm Compiler`
+toolchain file, like so:
 
 ```commandline
 cmake .. \
@@ -121,15 +128,15 @@
     -DUSE_CASE_BUILD=ad
 ```
 
-Also see:
+For additional information, please refer to:
 
 - [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
 - [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+- [Working with model debugger from Arm Fast Model Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
-> **Note:** If re-building with changed parameters values, it is highly advised to clean the build directory and re-run
->the CMake command.
+> **Note:** If re-building with changed parameter values, we recommend that you clean the build directory and then
+> re-run the CMake command.
 
 If the CMake command succeeded, build the application as follows:
 
@@ -137,9 +144,9 @@
 make -j4
 ```
 
-Add VERBOSE=1 to see compilation and link details.
+To see compilation and link details, add `VERBOSE=1`.
 
-Results of the build will be placed under `build/bin` folder:
+Results of the build are placed under the `build/bin` folder. For example:
 
 ```tree
 bin
@@ -149,30 +156,31 @@
  └── sectors
       ├── images.txt
       └── ad
-          ├── dram.bin
+          ├── ddr.bin
           └── itcm.bin
 ```
 
-Where:
+The `bin` folder contains the following files and folders:
 
-- `ethos-u-ad.axf`: The built application binary for the Anomaly Detection use case.
+- `ethos-u-ad.axf`: The built application binary for the Anomaly Detection use-case.
 
-- `ethos-u-ad.map`: Information from building the application (e.g. libraries used, what was optimized, location of
-    objects)
+- `ethos-u-ad.map`: Information from building the application. For example, the libraries used, what was optimized, and
+  the location of objects.
 
 - `ethos-u-ad.htm`: Human readable file containing the call graph of application functions.
 
-- `sectors/ad`: Folder containing the built application, split into files for loading into different FPGA memory regions.
+- `sectors/ad`: Folder containing the built application. It is split into files for loading into different FPGA memory
+  regions.
 
-- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in sectors/\*\* folder.
+- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/` folder.
 
 ### Add custom input
 
-The application anomaly detection on audio data found in the folder, or an individual file, set by the CMake parameter
-``ad_FILE_PATH``.
+The application performs anomaly detection on audio data found in the folder, or an individual file, that is pointed to
+by the CMake parameter `ad_FILE_PATH`.
 
-To run the application with your own audio clips first create a folder to hold them and then copy the custom clips into
-this folder:
+To run the application with your own audio clips, first create a folder to hold them and then copy the custom clips into
+the following folder:
 
 ```commandline
 mkdir /tmp/custom_files
@@ -181,16 +189,16 @@
 ```
 
 > **Note:** The data used for this example comes from
-[https://zenodo.org/record/3384388\#.X6GILFNKiqA](https://zenodo.org/record/3384388\#.X6GILFNKiqA)
-and the model included in this example is trained on the ‘Slider’ part of the dataset.
-The machine ID (00, 02, 04, 06) the clip comes from must be in the file name for the application to work.
-The file name should have a pattern that matches
-e.g. `<any>_<text>_00_<here>.wav` if the audio was from machine ID 00
-or `<any>_<text>_02_<here>.wav` if it was from machine ID 02 etc.
+> [https://zenodo.org/record/3384388\#.X6GILFNKiqA](https://zenodo.org/record/3384388\#.X6GILFNKiqA) and the model
+> included in this example is trained on the "Slider" part of the dataset.\
+> The machine ID (`00`, `02`, `04`, or `06`) that the clip comes from must be in the file name for the application to
+> work.\
+> The file name must match a pattern such as `<any>_<text>_00_<here>.wav` if the audio was from machine ID `00`, or
+> `<any>_<text>_02_<here>.wav` if it was from machine ID `02`, and so on. A sketch of this check follows these notes.
 >
 > **Note:** Clean the build directory before re-running the CMake command.
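+
+For illustration only, and assuming the naming rule quoted in the note above, the machine ID could be picked out of a
+file name as follows (this is not part of the application code):
+
+```python
+# Illustrative sketch: extract the machine ID from a file name following the <any>_<text>_<id>_<here>.wav pattern.
+import re
+
+def machine_id(filename):
+    match = re.search(r"_(00|02|04|06)_", filename)
+    return match.group(1) if match else None
+
+print(machine_id("anomaly_id_00_00000000.wav"))   # -> 00
+print(machine_id("my_recording.wav"))             # -> None: hypothetical name that the application cannot use
+```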
 
-Next set ad_FILE_PATH to the location of this folder when building:
+Next, set `ad_FILE_PATH` to the location of this folder when building:
 
 ```commandline
 cmake .. \
@@ -198,25 +206,27 @@
     -DUSE_CASE_BUILD=ad
 ```
 
-The audio flies found in the `ad_FILE_PATH` folder will be picked up and automatically converted to C++ files during the CMake
-configuration stage and then compiled into the application during the build phase for performing inference with.
+The audio files found in the `ad_FILE_PATH` folder are picked up and automatically converted to C++ files during the
+CMake configuration stage. They are then compiled into the application during the build phase for performing inference
+with.
 
-The log from the configuration stage should tell you what image directory path has been used:
+The log from the configuration stage tells you what audio directory path has been used:
 
 ```log
 -- User option ad_FILE_PATH is set to /tmp/custom_files
 ```
 
-After compiling, your custom inputs will have now replaced the default ones in the application.
+After compiling, your custom inputs have now replaced the default ones in the application.
 
 ### Add custom model
 
 The application performs inference using the model pointed to by the CMake parameter ``ad_MODEL_TFLITE_PATH``.
 
-> **Note:** If you want to run the model using Ethos-U55, ensure your custom model has been run through the Vela compiler
->successfully before continuing. See [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
+> **Note:** If you want to run the model using an *Ethos-U55*, ensure that your custom model has been successfully run
+> through the Vela compiler *before* continuing. Please refer to this section for more help:
+> [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
 
-An example:
+For example:
 
 ```commandline
 cmake .. \
@@ -226,11 +236,10 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-The `.tflite` model file pointed to by `ad_MODEL_TFLITE_PATH` will be converted
-to C++ files during the CMake configuration
-stage and then compiled into the application for performing inference with.
+The `.tflite` model file pointed to by `ad_MODEL_TFLITE_PATH` is converted to C++ files during the CMake configuration
+stage and is then compiled into the application for performing inference with.
 
-The log from the configuration stage should tell you what model path has been used:
+The log from the configuration stage tells you what model path has been used. For example:
 
 ```log
 -- User option TARGET_PLATFORM is set to fastmodel
@@ -241,44 +250,46 @@
 ...
 ```
 
-After compiling, your custom model will have now replaced the default one in the application.
+After compiling, your custom model has now replaced the default one in the application.
 
- >**Note:** In order to successfully run the model, the NPU needs to be enabled and
- the platform `TARGET_PLATFORM` is set to `mps3` and `TARGET_SUBSYSTEM` is `SSE-300`.
+ >**Note:** To successfully run the model, the NPU must be enabled, `TARGET_PLATFORM` must be set to `mps3`, and
+ >`TARGET_SUBSYSTEM` must be set to `SSE-300`.
 
 ## Setting-up and running Ethos-U55 Code Sample
 
 ### Setting up the Ethos-U55 Fast Model
 
-The FVP is available publicly from [Arm Ecosystem FVP downloads
-](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+The FVP is available publicly from
+[Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
 
-For Ethos-U55 evaluation, please download the MPS3 version of the Arm® Corstone™-300 model that contains Ethos-U55 and
-Cortex-M55. The model is currently only supported on Linux based machines. To install the FVP:
+For the *Ethos-U55* evaluation, please download the MPS3 version of the Arm® *Corstone™-300* model that contains both
+the *Ethos-U55* and *Cortex-M55*. The model is currently only supported on Linux-based machines.
 
-- Unpack the archive
+To install the FVP:
 
-- Run the install script in the extracted package
+- Unpack the archive.
+
+- Run the install script in the extracted package:
 
 ```commandline
 .FVP_Corstone_SSE-300_Ethos-U55.sh
 ```
 
-- Follow the instructions to install the FVP to your desired location
+- Follow the instructions to install the FVP to the required location.
 
 ### Starting Fast Model simulation
 
-> **Note:** The anomaly detection example does not come pre-built. You will first need to follow the instructions in
->section 3 for building the application from source.
+> **Note:** The anomaly detection example does not come pre-built. Therefore, you must first follow the instructions in
+> section three for building the application from source.
 
-After building, and assuming the install location of the FVP was set to ~/FVP_install_location, the simulation can be
-started by:
+After building, and assuming the install location of the FVP was set to `~/FVP_install_location`, the simulation can
+be started by running:
 
 ```commandline
 ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ./bin/ethos-u-ad.axf
 ```
 
-A log output should appear on the terminal:
+A log output now appears on the terminal:
 
 ```log
 telnetterminal0: Listening for serial connection on port 5000
@@ -287,13 +298,13 @@
 telnetterminal5: Listening for serial connection on port 5003
 ```
 
-This will also launch a telnet window with the sample application's standard output and error log entries containing
-information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
-the input and output tensor sizes of the model compiled into the executable binary.
+This also launches a telnet window with the standard output and error log entries of the sample application. These
+contain information about the pre-built application version, the TensorFlow Lite Micro library version used, and the
+data type, as well as the input and output tensor sizes of the model compiled into the executable binary.
 
-After the application has started if `ad_FILE_PATH` pointed to a single file (or a folder containing a single input file)
-the inference starts immediately. In case of multiple inputs choice, it outputs a menu and waits for the user input from
-telnet terminal:
+After the application has started, if `ad_FILE_PATH` points to a single file, or even a folder that contains a single
+input file, then the inference starts immediately. If there are multiple inputs, it outputs a menu and then waits for
+input from the user. For example:
 
 ```log
 User input required
@@ -309,44 +320,46 @@
 
 ```
 
-1. “Classify next audio clip” menu option will run single inference on the next in line.
+What the preceding choices do:
 
-2. “Classify audio clip at chosen index” menu option will run inference on the chosen audio clip.
+1. Classify next audio clip: Runs a single inference on the next in line.
 
-    > **Note:** Please make sure to select audio clip index in the range of supplied audio clips during application build.
-    By default, pre-built application has 4 files, indexes from 0 to 3.
+2. Classify audio clip at chosen index: Runs inference on the chosen audio clip.
 
-3. “Run ... on all” menu option triggers sequential inference executions on all built-in .
+    > **Note:** Please make sure to select an audio clip index within the range of audio clips supplied during the
+    > application build. By default, a pre-built application has four files, with indexes from `0` to `3`.
 
-4. “Show NN model info” menu option prints information about model data type, input and output tensor sizes:
+3. Run ... on all: Triggers sequential inference executions on all built-in files.
+
+4. Show NN model info: Prints information about the model data type, and the input and output tensor sizes:
 
     ```log
     INFO - uTFL version: 2.5.0
     INFO - Model info:
     INFO - Model INPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 1024 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1:  32
-    INFO - 		2:  32
-    INFO - 		3:   1
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 1024 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:  32
+    INFO -    2:  32
+    INFO -    3:   1
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.192437
     INFO - ZeroPoint[0] = 11
     INFO - Model OUTPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 8 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1:   8
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 8 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:   8
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.048891
     INFO - ZeroPoint[0] = -30
     INFO - Activation buffer (a.k.a tensor arena) size used: 198016
     INFO - Number of operators: 1
-    INFO - 	Operator 0: ethos-u
+    INFO -  Operator 0: ethos-u
     ```
 
-5. “List” menu option prints a list of pair ... indexes - the original filenames embedded in the application:
+5. List: Prints a list of pairs of ... indexes and the original filenames embedded in the application, like so:
 
     ```log
     INFO - List of Files:
@@ -358,9 +371,9 @@
 
 ### Running Anomaly Detection
 
-Please select the first menu option to execute Anomaly Detection.
+Please select the first menu option to execute Anomaly Detection.
 
-The following example illustrates application output:
+The following example illustrates the application output:
 
 ```log
 INFO - Running inference on audio clip 0 => anomaly_id_00_00000000.wav
@@ -389,14 +402,14 @@
 INFO - NPU total cycles: 1081634
 ```
 
-As multiple inferences have to be run for one clip it will take around a minute or so for all inferences to complete.
+As multiple inferences must be run for one clip, it takes around a minute for all inferences to complete.
 
-For the anomaly_id_00_00000000.wav clip, after averaging results across all inferences the score is greater than the
-chosen anomaly threshold so an anomaly was detected with the machine in this clip.
+For the `anomaly_id_00_00000000.wav` clip, after averaging results across all inferences, the score is greater than the
+chosen anomaly threshold. Therefore, an anomaly was detected with the machine in this clip.
 
-The profiling section of the log shows that for each inference. For the last inference the profiling reports:
+The profiling section of the log shows the performance numbers for each inference. For the last inference, the
+profiling reports:
 
-- Ethos-U55's PMU report:
+- *Ethos-U55* PMU report:
 
   - 1,081,634 total cycle: The number of NPU cycles
 
@@ -404,13 +417,13 @@
 
   - 626 idle cycles: number of cycles for which the NPU was idle
 
-  - 628,122 AXI0 read beats: The number of AXI beats with read transactions from AXI0 bus.
-    AXI0 is the bus where Ethos-U55 NPU reads and writes to the computation buffers (activation buf/tensor arenas).
+  - 628,122 AXI0 read beats: The number of AXI beats with read transactions from the AXI0 bus. AXI0 is the bus where
+    the Ethos-U55 NPU reads and writes to the computation buffers (activation buffers, or tensor arenas).
 
   - 135,087 AXI0 write beats: The number of AXI beats with write transactions to AXI0 bus.
 
-  - 62,870 AXI1 read beats: The number of AXI beats with read transactions from AXI1 bus.
-    AXI1 is the bus where Ethos-U55 NPU reads the model (read only)
+  - 62,870 AXI1 read beats: The number of AXI beats with read transactions from the AXI1 bus. AXI1 is the bus where
+    the Ethos-U55 NPU reads the model, and is therefore read-only.
 
-- For FPGA platforms, CPU cycle count can also be enabled. For FVP, however, CPU cycle counters should not be used as
-    the CPU model is not cycle-approximate or cycle-accurate.
+- For FPGA platforms, a CPU cycle count can also be enabled. However, do not use cycle counters for FVP, as the CPU
+  model is not cycle-approximate or cycle-accurate.
diff --git a/docs/use_cases/asr.md b/docs/use_cases/asr.md
index 0f5da40..a12455c 100644
--- a/docs/use_cases/asr.md
+++ b/docs/use_cases/asr.md
@@ -10,79 +10,91 @@
     - [Build process](#build-process)
     - [Add custom input](#add-custom-input)
     - [Add custom model](#add-custom-model)
-  - [Setting-up and running Ethos-U55 Code Sample](#setting-up-and-running-ethos-u55-code-sample)
+  - [Setting-up and running Ethos-U55 Code Samples](#setting-up-and-running-ethos-u55-code-samples)
     - [Setting up the Ethos-U55 Fast Model](#setting-up-the-ethos-u55-fast-model)
     - [Starting Fast Model simulation](#starting-fast-model-simulation)
     - [Running Automatic Speech Recognition](#running-automatic-speech-recognition)
 
 ## Introduction
 
-This document describes the process of setting up and running the Arm® Ethos™-U55 Automatic Speech Recognition example.
+This document describes the process of setting up and running the Arm® *Ethos™-U55* Automatic Speech Recognition
+example.
 
-Use case code could be found in [source/use_case/asr](../../source/use_case/asr]) directory.
+The use-case code can be found in the following directory: [source/use_case/asr](../../source/use_case/asr).
 
 ### Preprocessing and feature extraction
 
-The wav2letter automatic speech recognition model that is used with the Code Samples expects audio data to be
-preprocessed in a specific way before performing an inference. This section aims to provide an overview of the feature
-extraction process used.
+The *wav2letter* automatic speech recognition model that is used with the code samples expects audio data to be
+preprocessed in a specific way before performing an inference.
 
-First the audio data is normalized to the range (-1, 1).
+This section provides an overview of the feature extraction process used.
 
-> **Note:** Mel-frequency cepstral coefficients (MFCCs) are a common feature extracted from audio data and can be used as
->input for machine learning tasks like keyword spotting and speech recognition. See source/application/main/include/Mfcc.hpp
->for implementation details.
+First, the audio data is normalized to the range (`-1`, `1`).
 
-Next, a window of 512 audio samples is taken from the start of the audio clip. From these 512 samples we calculate 13
+> **Note:** Mel-Frequency Cepstral Coefficients (MFCCs) are a common feature that is extracted from audio data and can
+> be used as input for machine learning tasks, such as keyword spotting and speech recognition. For implementation
+> details, please refer to: `source/application/main/include/Mfcc.hpp`
+
+Next, a window of 512 audio samples is taken from the start of the audio clip. From these 512 samples, we calculate 13
 MFCC features.
 
 The whole window is shifted to the right by 160 audio samples and 13 new MFCC features are calculated. This process of
-shifting and calculating is repeated until enough audio samples to perform an inference have been processed. In total
-this will be 296 windows that each have 13 MFCC features calculated for them.
+shifting and calculating is repeated until enough audio samples to perform an inference have been processed.
 
-After extracting MFCC features the first and second order derivatives of these features with respect to time are
-calculated. These derivative features are then standardized and concatenated with the MFCC features (which also get
-standardized). At this point the input tensor will have a shape of 296x39.
+In total, this is 296 windows that each have 13 MFCC features calculated for them.
 
-These extracted features are quantized, and an inference is performed.
+After extracting MFCC features, the first and second order derivatives of these features, with respect to time, are
+calculated.
+
+These derivative features are then standardized and concatenated with the MFCC features (which also get standardized).
+At this point, the input tensor has a shape of 296x39.
+
+These extracted features are quantized and an inference is performed.
 
 ![ASR preprocessing](../media/ASR_preprocessing.png)
 
-For longer audio clips where multiple inferences need to be performed, then the initial starting position is offset by
-(100*160) = 16000 audio samples. From this new starting point, MFCC and derivative features are calculated as before
-until there is enough to perform another inference. Padding can be used if there are not enough audio samples for at
-least 1 inference. This step is repeated until the whole audio clip has been processed. If there are not enough audio
-samples for a final complete inference the MFCC features will be padded by repeating the last calculated feature until
-an inference can be performed.
+For longer audio clips, where multiple inferences must be performed, then the initial starting position is offset by
+`(100*160) = 16000` audio samples. From this new starting point, MFCC and derivative features are calculated as before,
+until there is enough to perform another inference.
 
-> **Note:** Parameters of the MFCC feature extraction such as window size, stride, number of features etc. all depend on
->what was used during model training. These values are specific to each model. If you switch to a different ASR model
->than the one supplied, then the feature extraction process could be completely different to the one currently implemented.
+Padding can be used if there are not enough audio samples for at least one inference. This step is repeated until the
+whole audio clip has been processed. If there are not enough audio samples for a final complete inference, then the MFCC
+features are padded by repeating the last calculated feature until an inference can be performed.
 
-The amount of audio samples we offset by for long audio clips is specific to the included wav2letter model.
+> **Note:** Parameters of the MFCC feature extraction all depend on what was used during model training. These values
+> are specific to each model.\
+> If you switch to a different ASR model than the one supplied, then the feature extraction process could be completely
+> different to the one currently implemented.
+
+The number of audio samples that we offset by for long audio clips is specific to the included *wav2letter* model.
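+
+The following Python sketch only illustrates the shapes and steps described above; it is not the project's C++
+implementation, and the 13 MFCC features are replaced by a crude placeholder (see `Mfcc.hpp` for the real code).
+
+```python
+# Illustrative sketch of the feature pipeline described above; the MFCC computation is a crude placeholder.
+import numpy as np
+
+def mfcc_13(frame: np.ndarray) -> np.ndarray:
+    """Placeholder for the 13 MFCC features of one 512-sample frame (not a real MFCC implementation)."""
+    spectrum = np.abs(np.fft.rfft(frame)) ** 2
+    bands = np.array_split(spectrum, 13)
+    return np.log(np.array([band.sum() for band in bands]) + 1e-10)
+
+def standardize(x: np.ndarray) -> np.ndarray:
+    return (x - x.mean()) / (x.std() + 1e-10)
+
+def features_for_one_inference(audio: np.ndarray, start: int) -> np.ndarray:
+    audio = audio / np.max(np.abs(audio))                                    # normalize to the range (-1, 1)
+    frames = [audio[start + i * 160 : start + i * 160 + 512] for i in range(296)]
+    mfcc = np.stack([mfcc_13(f) for f in frames])                            # 296 x 13
+    d1 = np.gradient(mfcc, axis=0)                                           # first derivative over time
+    d2 = np.gradient(d1, axis=0)                                             # second derivative over time
+    return np.concatenate([standardize(mfcc), standardize(d1), standardize(d2)], axis=1)   # 296 x 39
+
+audio = np.random.uniform(-1.0, 1.0, 16000 * 5)            # stand-in for a 5-second clip sampled at 16 kHz
+first = features_for_one_inference(audio, start=0)
+second = features_for_one_inference(audio, start=16000)    # next inference offset by (100 * 160) samples
+print(first.shape, second.shape)                           # (296, 39) (296, 39)
+```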
 
 ### Postprocessing
 
-After performing an inference, the raw output need to be postprocessed to get a usable result.
+After performing an inference, the raw output must be postprocessed to get a usable result.
 
 The raw output from the model is a tensor of shape 148x29 where each row is a probability distribution over the possible
 29 characters that can appear at each of the 148 time steps.
 
-This wav2letter model is trained using context windows, this means that only certain parts of the output are usable
-depending on the bit of the audio clip that is currently being processed.
+This *wav2letter* model is trained using context windows. This means that, depending on the bit of the audio clip that
+is currently being processed, only certain parts of the output are usable.
 
-If this is the first inference and multiple inferences are required, then ignore the final 49 rows of the output.
-Similarly, if this is the final inference from multiple inferences then ignore the first 49 rows of the output. Finally,
-if this inference is not the last or first inference then ignore the first and last 49 rows of the model output.
+If this is the first inference, and multiple inferences are required, then ignore the final 49 rows of the output.
+Similarly, if this is the final inference from multiple inferences, then ignore the first 49 rows of the output.
 
-> **Note:** If the audio clip is small enough then the whole of the model output is usable and there is no need to throw
->away any of the output before continuing.
+Finally, if this inference is not the last, or the first inference, then ignore the first and last 49 rows of the model
+output.
 
-Once any rows have been removed the final processing can be done. To process the output, first the letter with the
-highest probability at each time step is found. Next, any letters that are repeated multiple times in a row are removed
-(e.g. [t, t, t, o, p, p] becomes [t, o, p]). Finally, the 29th blank token letter is removed from the output.
+> **Note:** If the audio clip is small enough, then the whole of the model output is usable and there is no need to
+> throw away any of the outputs before continuing.
 
-For the final output, the result from all inferences are combined before decoding. What you are left with is then
+Once any rows have been removed, the final processing can be done. To process the output, the letter with the highest
+probability at each time step is found first. Next, any letters that are repeated multiple times in a row are removed.
+
+For example: [`t`, `t`, `t`, `o`, `p`, `p`] becomes [`t`, `o`, `p`]. Finally, the 29th blank token letter is removed
+from the output.
+
+For the final output, the results from all inferences are combined before decoding. What you are left with is then
 displayed to the console.
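+
+A minimal Python sketch of these decoding steps is shown below. It is not the project's C++ implementation, and the
+character set used here is a hypothetical placeholder.
+
+```python
+# Illustrative sketch of the decoding described above; the 29-symbol alphabet is a hypothetical placeholder.
+import numpy as np
+
+CONTEXT = 49                                        # rows to ignore at the affected edge(s) of each output
+BLANK_INDEX = 28                                    # the 29th (blank) token
+ALPHABET = list("abcdefghijklmnopqrstuvwxyz' _")    # hypothetical 29-symbol character set
+
+def trim_context(output: np.ndarray, is_first: bool, is_last: bool) -> np.ndarray:
+    if is_first and is_last:                        # single inference: the whole output is usable
+        return output
+    if is_first:
+        return output[:-CONTEXT]                    # first of several: ignore the final 49 rows
+    if is_last:
+        return output[CONTEXT:]                     # last of several: ignore the first 49 rows
+    return output[CONTEXT:-CONTEXT]                 # middle inference: ignore both edges
+
+def decode(outputs: list) -> str:
+    rows = np.concatenate([trim_context(o, i == 0, i == len(outputs) - 1) for i, o in enumerate(outputs)])
+    best = rows.argmax(axis=1)                                                  # most likely symbol per time step
+    collapsed = [s for i, s in enumerate(best) if i == 0 or s != best[i - 1]]   # remove repeated letters
+    return "".join(ALPHABET[s] for s in collapsed if s != BLANK_INDEX)          # remove the blank token
+
+outputs = [np.random.rand(148, 29) for _ in range(2)]    # two made-up inference outputs of shape 148 x 29
+print(decode(outputs))
+```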
 
 ### Prerequisites
@@ -93,66 +105,68 @@
 
 ### Build options
 
-In addition to the already specified build option in the main documentation, Automatic Speech Recognition use case
+In addition to the already specified build option in the main documentation, the Automatic Speech Recognition use-case
 adds:
 
-- `asr_MODEL_TFLITE_PATH` - Path to the NN model file in TFLite format. Model will be processed and included into the
-application axf file. The default value points to one of the delivered set of models. Note that the parameters
-`asr_LABELS_TXT_FILE`,`TARGET_PLATFORM` and `ETHOS_U55_ENABLED` should be aligned with the chosen model, i.e.:
-  - if `ETHOS_U55_ENABLED` is set to `On` or `1`, the NN model is assumed to be optimized. The model will naturally
-fall back to the Arm® Cortex®-M CPU if an unoptimized model is supplied.
-  - if `ETHOS_U55_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized
-model in this case will result in a runtime error.
+- `asr_MODEL_TFLITE_PATH` - The path to the NN model file in `TFLite` format. The model is processed and then included
+  into the application `axf` file. The default value points to one of the delivered set of models. Note that the
+  parameters `asr_LABELS_TXT_FILE`, `TARGET_PLATFORM`, and `ETHOS_U55_ENABLED` must be aligned with the chosen model. In
+  other words:
+  - If `ETHOS_U55_ENABLED` is set to `On` or `1`, then the NN model is assumed to be optimized. The model naturally
+    falls back to the Arm® *Cortex®-M* CPU if an unoptimized model is supplied.
+  - If `ETHOS_U55_ENABLED` is set to `Off` or `0`, then the NN model is assumed to be unoptimized. Supplying an
+    optimized model in this case results in a runtime error.
 
-- `asr_FILE_PATH`:  Path to the directory containing audio files, or a path to single WAV file, to be used in the
-    application. The default value points
-    to the resources/asr/samples folder containing the delivered set of audio clips.
+- `asr_FILE_PATH`: The path to the directory containing audio files, or a path to a single WAV file, to be used in the
+  application. The default value points to the `resources/asr/samples` folder that contains the delivered set of audio
+  clips.
 
-- `asr_LABELS_TXT_FILE`: Path to the labels' text file. The file is used to map letter class index to the text label.
-    The default value points to the delivered labels.txt file inside the delivery package.
+- `asr_LABELS_TXT_FILE`: The path to the text file containing the labels. The file is used to map the letter class
+  index to the text label. The default value points to the delivered `labels.txt` file inside the delivery package.
 
-- `asr_AUDIO_RATE`: Input data sampling rate. Each audio file from asr_FILE_PATH is preprocessed during the build to
-    match NN model input requirements. Default value is 16000.
+- `asr_AUDIO_RATE`: The input data sampling rate. Each audio file from `asr_FILE_PATH` is preprocessed during the build
+  to match the NN model input requirements. The default value is `16000`.
 
-- `asr_AUDIO_MONO`: If set to ON the audio data will be converted to mono. Default is ON.
+- `asr_AUDIO_MONO`: If set to `ON`, then the audio data is converted to mono. The default value is `ON`.
 
-- `asr_AUDIO_OFFSET`: Start loading audio data starting from this offset (in seconds). Default value is 0.
+- `asr_AUDIO_OFFSET`: Begin loading the audio data from this offset, defined in seconds. The default value is set to
+  `0`.
 
-- `asr_AUDIO_DURATION`: Length of the audio data to be used in the application in seconds. Default is 0 meaning the
-    whole audio file will be taken.
+- `asr_AUDIO_DURATION`: The length of the audio data to be used in the application in seconds. The default is `0`,
+  meaning that the whole audio file is used.
 
 - `asr_AUDIO_MIN_SAMPLES`: Minimum number of samples required by the network model. If the audio clip is shorter than
-    this number, it is padded with zeros. Default value is 16000.
+  this number, then it is padded with zeros. The default value is `16000`.
 
-- `asr_MODEL_SCORE_THRESHOLD`: Threshold value that must be applied to the inference results for a label to be
-    deemed valid. Default is 0.5.
+- `asr_MODEL_SCORE_THRESHOLD`: Threshold value that must be applied to the inference results for a label to be deemed
+  valid. The default is `0.5`.
 
-- `asr_ACTIVATION_BUF_SZ`: The intermediate/activation buffer size reserved for the NN model. By default, it is set
-    to 2MiB and should be enough for most models.
+- `asr_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model. By default, it is set
+  to 2MiB and is enough for most models.
 
-In order to build **ONLY** automatic speech recognition example application add to the `cmake` command line specified in
-[Building](../documentation.md#Building) `-DUSE_CASE_BUILD=asr`.
+To **ONLY** build the automatic speech recognition example application, add `-DUSE_CASE_BUILD=asr` to the `cmake`
+command line, as specified in: [Building](../documentation.md#Building).
 
 ### Build process
 
-> **Note:** This section describes the process for configuring the build for `MPS3: SSE-300` for different target
->platform see [Building](../documentation.md#Building) section.
+> **Note:** This section describes the process for configuring the build for the *MPS3: SSE-300*. To build for a
+> different target platform, please refer to: [Building](../documentation.md#Building).
 
-In order to build **only** the automatic speech recognition example, create a build directory and navigate inside:
+To build **only** the automatic speech recognition example, create a build directory and navigate inside, like so:
 
 ```commandline
 mkdir build_asr && cd build_asr
 ```
 
-On Linux, execute the following command to build **only** Automatic Speech Recognition application to run on the
-Ethos-U55 Fast Model when providing only the mandatory arguments for CMake configuration:
+On Linux, when providing only the mandatory arguments for CMake configuration, execute the following command to build
+**only** the Automatic Speech Recognition application to run on the *Ethos-U55* Fast Model:
 
 ```commandline
 cmake ../ -DUSE_CASE_BUILD=asr
 ```
 
-To configure a build that can be debugged using Arm-DS, we can just specify
-the build type as `Debug` and use the `Arm Compiler` toolchain file:
+To configure a build that can be debugged using Arm DS, specify the build type as `Debug` and then use the `Arm Compiler`
+toolchain file:
 
 ```commandline
 cmake .. \
@@ -161,24 +175,25 @@
     -DUSE_CASE_BUILD=asr
 ```
 
-Also see:
+For further information, please refer to:
+
 - [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
 - [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+- [Working with model debugger from Arm Fast Model Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
-> **Note:** If re-building with changed parameters values, it is highly advised to clean the build directory and re-run
->the CMake command.
+> **Note:** If re-building with changed parameter values, we recommend that you clean the build directory and re-run
+> the CMake command.
 
-If the CMake command succeeded, build the application as follows:
+If the CMake command succeeds, build the application as follows:
 
 ```commandline
 make -j4
 ```
 
-Add `VERBOSE=1` to see compilation and link details.
+To see compilation and link details, add `VERBOSE=1`.
 
-Results of the build will be placed under `build/bin` folder:
+Results of the build are placed under the `build/bin` folder, like so:
 
 ```tree
 bin
@@ -188,30 +203,31 @@
  └── sectors
       ├── images.txt
       └── asr
-          ├── dram.bin
+          ├── ddr.bin
           └── itcm.bin
 ```
 
-Where:
+The `bin` folder contains the following files:
 
-- `ethos-u-asr.axf`: The built application binary for the Automatic Speech Recognition use case.
+- `ethos-u-asr.axf`: The built application binary for the Automatic Speech Recognition use-case.
 
-- `ethos-u-asr.map`: Information from building the application (e.g. libraries used, what was optimized, location of
-    objects)
+- `ethos-u-asr.map`: Information from building the application. For example: The libraries used, what was optimized, and
+  the location of objects.
 
 - `ethos-u-asr.htm`: Human readable file containing the call graph of application functions.
 
-- `sectors/asr`: Folder containing the built application, split into files for loading into different FPGA memory regions.
+- `sectors/asr`: Folder containing the built application. It is split into files for loading into different FPGA memory
+  regions.
 
-- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in sectors/** folder.
+- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/` folder.
 
 ### Add custom input
 
-The application performs inference on audio data found in the folder, or an individual file, set by the CMake parameter
-`asr_FILE_PATH`.
+The application performs inference on audio data found in the folder, or an individual file, that is pointed to by the
+CMake parameter `asr_FILE_PATH`.
 
-To run the application with your own audio clips first create a folder to hold them and then copy the custom audio clips
-into this folder:
+To run the application with your own audio clips, first create a folder to hold them and then copy the custom clips into
+the following folder:
 
 ```commandline
 mkdir /tmp/custom_wavs
@@ -221,7 +237,7 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-Next set `asr_FILE_PATH` to the location of this folder when building:
+Next, when building, set `asr_FILE_PATH` to the location of this folder:
 
 ```commandline
 cmake .. \
@@ -229,10 +245,11 @@
     -DUSE_CASE_BUILD=asr
 ```
 
-The audio clips found in the `asr_FILE_PATH` folder will be picked up and automatically converted to C++ files during the
-CMake configuration stage and then compiled into the application during the build phase for performing inference with.
+The audio files found in the `asr_FILE_PATH` folder are picked up and automatically converted to C++ files during the
+CMake configuration stage. They are then compiled into the application during the build phase for performing inference
+with.
 
-The log from the configuration stage should tell you what audio clip directory path has been used:
+The log from the configuration stage tells you what audio directory path has been used:
 
 ```log
 -- User option asr_FILE_PATH is set to /tmp/custom_wavs
@@ -244,26 +261,29 @@
 -- asr_FILE_PATH=/tmp/custom_wavs
 ```
 
-After compiling, your custom inputs will have now replaced the default ones in the application.
+After compiling, your custom inputs have now replaced the default ones in the application.
 
-> **Note:** The CMake parameter asr_AUDIO_MIN_SAMPLES determine the minimum number of input sample. When building the
->application, if the size of the audio clips is less then asr_AUDIO_MIN_SAMPLES then it will be padded so that it does.
+> **Note:** The CMake parameter `asr_AUDIO_MIN_SAMPLES` determines the minimum number of input samples. When building
+> the application, if the size of the audio clips is less than `asr_AUDIO_MIN_SAMPLES`, then it is padded until it
+> matches.
 
 ### Add custom model
 
-The application performs inference using the model pointed to by the CMake parameter MODEL_TFLITE_PATH.
+The application performs inference using the model pointed to by the CMake parameter `MODEL_TFLITE_PATH`.
 
-> **Note:** If you want to run the model using Ethos-U55, ensure your custom model has been run through the Vela
->compiler successfully before continuing. See [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
+> **Note:** If you want to run the model using an *Ethos-U55*, ensure that your custom model has been successfully run
+> through the Vela compiler *before* continuing.
 
-To run the application with a custom model you will need to provide a labels_<model_name>.txt file of labels
-associated with the model. Each line of the file should correspond to one of the outputs in your model. See the provided
-labels_wav2letter.txt file for an example.
+For further information: [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
+
+To run the application with a custom model, you must provide a `labels_<model_name>.txt` file of labels that are
+associated with the model. Each line of the file must correspond to one of the outputs in your model. Refer to the
+provided `labels_wav2letter.txt` file for an example.
 
 Then, you must set `asr_MODEL_TFLITE_PATH` to the location of the Vela processed model file and `asr_LABELS_TXT_FILE`to
 the location of the associated labels file.
 
-An example:
+For example:
 
 ```commandline
 cmake .. \
@@ -274,11 +294,11 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-The `.tflite` model file pointed to by `asr_MODEL_TFLITE_PATH` and labels text file pointed to by `asr_LABELS_TXT_FILE`
-will be converted to C++ files during the CMake configuration stage and then compiled into the application for performing
-inference with.
+The `.tflite` model file pointed to by `asr_MODEL_TFLITE_PATH`, and the labels text file pointed to by
+`asr_LABELS_TXT_FILE` are converted to C++ files during the CMake configuration stage. They are then compiled into the
+application for performing inference with.
 
-The log from the configuration stage should tell you what model path and labels file have been used:
+The log from the configuration stage tells you what model path and labels file have been used, for example:
 
 ```log
 -- User option TARGET_PLATFORM is set to mps3
@@ -294,39 +314,43 @@
 ...
 ```
 
-After compiling, your custom model will have now replaced the default one in the application.
+After compiling, your custom model has now replaced the default one in the application.
 
-## Setting-up and running Ethos-U55 Code Sample
+## Setting-up and running Ethos-U55 Code Samples
 
 ### Setting up the Ethos-U55 Fast Model
 
-The FVP is available publicly from [Arm Ecosystem FVP downloads
-](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+The FVP is available publicly from
+[Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
 
-For Ethos-U55 evaluation, please download the MPS3 version of the Arm® Corstone™-300 model that contains Ethos-U55 and
-Cortex-M55. The model is currently only supported on Linux based machines. To install the FVP:
+For the *Ethos-U55* evaluation, please download the MPS3 version of the Arm® *Corstone™-300* model that contains both
+the *Ethos-U55* and *Cortex-M55*. The model is currently only supported on Linux-based machines.
 
-- Unpack the archive
+To install the FVP:
 
-- Run the install script in the extracted package
+- Unpack the archive.
+
+- Run the install script in the extracted package:
 
 ```commandline
 ./FVP_Corstone_SSE-300_Ethos-U55.sh
 ```
 
-- Follow the instructions to install the FVP to your desired location
+- Follow the instructions to install the FVP to the required location.
 
 ### Starting Fast Model simulation
 
-Once completed the building step, application binary ethos-u-asr.axf can be found in the `build/bin` folder.
-Assuming the install location of the FVP was set to ~/FVP_install_location, the simulation can be started by:
+Once the building has been completed, the application binary `ethos-u-asr.axf` can be found in the `build/bin` folder.
+
+Assuming that the install location of the FVP was set to `~/FVP_install_location`, then the simulation can be started by
+using:
 
 ```commandline
 ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
 ./bin/mps3-sse-300/ethos-u-asr.axf
 ```
 
-A log output should appear on the terminal:
+A log output appears on the terminal:
 
 ```log
 telnetterminal0: Listening for serial connection on port 5000
@@ -335,13 +359,15 @@
 telnetterminal5: Listening for serial connection on port 5003
 ```
 
-This will also launch a telnet window with the sample application's standard output and error log entries containing
-information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
-the input and output tensor sizes of the model compiled into the executable binary.
+This also launches a telnet window with the standard output and error log entries of the sample application. These
+contain information about the pre-built application version, the TensorFlow Lite Micro library version used, and the
+data type, as well as the input and output tensor sizes of the model compiled into the executable binary.
 
-After the application has started if `asr_FILE_PATH` pointed to a single file (or a folder containing a single input file)
-the inference starts immediately. In case of multiple inputs choice, it outputs a menu and waits for the user input from
-telnet terminal:
+After the application has started, if `asr_FILE_PATH` points to a single file, or even a folder that contains a single
+input file, then the inference starts immediately. If there are multiple inputs, it outputs a menu and then waits for
+input from the user.
+
+For example:
 
 ```log
 User input required
@@ -357,50 +383,47 @@
 
 ```
 
-1. “Classify next audio clip” menu option will run inference on the next in line voice clip from the collection of the
-    compiled audio.
+What the preceding choices do:
 
-    > **Note:** Note that if the clip is over a certain length, the application will invoke multiple inference runs to
-    >cover the entire file.
+1. Classify next audio clip: Runs a single inference on the next in-line audio clip from the collection of compiled
+    audio clips. If the clip is longer than a certain length, multiple inference runs are invoked to cover the entire
+    file.
 
-2. “Classify audio clip at chosen index” menu option will run inference on the chosen audio clip.
+2. Classify audio clip at chosen index: Runs inference on the chosen audio clip.
 
-    > **Note:** Please make sure to select audio clip index in the range of supplied audio clips during application build.
-    By default, pre-built application has 4 files, indexes from 0 to 3.
+    > **Note:** Please make sure to select an audio clip index from within the range of supplied audio clips during
+    > the application build. By default, a pre-built application has four files, with indexes from `0` to `3`.
 
-3. “Run classification on all audio clips” menu option triggers sequential inference executions on all built-in voice
-    samples.
+3. Run classification on all audio clips: Triggers sequential inference executions on all built-in audio clips.
 
-4. “Show NN model info” menu option prints information about model data type, input and output tensor sizes:
+4. Show NN model info: Prints information about the model data type, and the input and output tensor sizes:
 
     ```log
     INFO - uTFL version: 2.5.0
     INFO - Model info:
     INFO - Model INPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 11544 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1: 296
-    INFO - 		2:  39
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 11544 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1: 296
+    INFO -    2:  39
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.110316
     INFO - ZeroPoint[0] = -11
     INFO - Model OUTPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 4292 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1:   1
-    INFO - 		2: 148
-    INFO - 		3:  29
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 4292 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:   1
+    INFO -    2: 148
+    INFO -    3:  29
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.003906
     INFO - ZeroPoint[0] = -128
     INFO - Activation buffer (a.k.a tensor arena) size used: 783168
     INFO - Number of operators: 1
-    INFO - 	Operator 0: ethos-u
+    INFO -  Operator 0: ethos-u
     ```
 
-5. “List” menu option prints a list of pair audio clip indexes - the original filenames embedded in the application:
+5. List audio clips: Prints a list of audio clip indexes and the original filenames embedded in the application:
 
     ```log
     [INFO] List of Files:
@@ -414,7 +437,7 @@
 
 Please select the first menu option to execute Automatic Speech Recognition.
 
-The following example illustrates application output:
+The following example illustrates the output of an application:
 
 ```log
 INFO - Running inference on audio clip 0 => another_door.wav
@@ -434,28 +457,28 @@
 INFO - NPU total cycles: 28451172
 ```
 
-It could take several minutes to complete each inference (average time is 5-7 minutes), and on this audio clip multiple
-inferences were required to cover the whole clip.
+It can take several minutes to complete each inference. The average time is around 5-7 minutes, and on this audio clip,
+multiple inferences were required to cover the whole clip.
 
 The profiling section of the log shows that for the first inference:
 
-- Ethos-U55's PMU report:
+- *Ethos-U55* PMU report:
 
-  - 28,451,172 total cycle: The number of NPU cycles
+  - 28,451,172 total cycle: The number of NPU cycles.
 
-  - 28,450,696 active cycles: number of NPU cycles that were used for computation
+  - 28,450,696 active cycles: The number of NPU cycles that were used for computation.
 
-  - 476 idle cycles: number of cycles for which the NPU was idle
+  - 476 idle cycles: The number of cycles for which the NPU was idle.
 
-  - 6,564,262 AXI0 read beats: The number of AXI beats with read transactions from AXI0 bus.
-    AXI0 is the bus where Ethos-U55 NPU reads and writes to the computation buffers (activation buf/tensor arenas).
+  - 6,564,262 AXI0 read beats: The number of AXI beats with read transactions from the AXI0 bus. AXI0 is the bus where
+    the *Ethos-U55* NPU reads and writes to the computation buffers (the activation buffer, also called the tensor
+    arena).
 
-  - 928,889 AXI0 write beats: The number of AXI beats with write transactions to AXI0 bus.
+  - 928,889 AXI0 write beats: The number of AXI beats with write transactions to the AXI0 bus.
 
-  - 841,712 AXI1 read beats: The number of AXI beats with read transactions from AXI1 bus.
-    AXI1 is the bus where Ethos-U55 NPU reads the model (read only)
+  - 841,712 AXI1 read beats: The number of AXI beats with read transactions from the AXI1 bus. AXI1 is the bus from
+    which the *Ethos-U55* NPU reads the model, so it is read-only.
 
-- For FPGA platforms, CPU cycle count can also be enabled. For FVP, however, CPU cycle counters should not be used as
-    the CPU model is not cycle-approximate or cycle-accurate.
+- For FPGA platforms, a CPU cycle count can also be enabled. However, do not use cycle counters for FVP, as the CPU
+  model is not cycle-approximate or cycle-accurate.
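+
+The cycle counters quoted above are simple to cross-check: the active and idle cycles sum to the total cycle count. A
+minimal sketch using the figures from this report (the variable names are illustrative only):
+
+```python
+# Consistency check on the PMU counters reported for the first inference above.
+total_cycles = 28_451_172
+active_cycles = 28_450_696
+idle_cycles = 476
+
+assert active_cycles + idle_cycles == total_cycles
+print(f"NPU utilisation: {active_cycles / total_cycles:.4%}")  # -> 99.9983%
+```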
 
-The application prints the decoded output from each of the inference runs as well as the final combined result.
+The application prints the decoded output from each of the inference runs, and the final combined result.
diff --git a/docs/use_cases/img_class.md b/docs/use_cases/img_class.md
index 2a31322..9a3451d 100644
--- a/docs/use_cases/img_class.md
+++ b/docs/use_cases/img_class.md
@@ -15,13 +15,12 @@
 
 ## Introduction
 
-This document describes the process of setting up and running the Arm® Ethos™-U55 Image Classification
-example.
+This document describes the process of setting up and running the Arm® *Ethos™-U55* Image Classification example.
 
-Use case solves classical computer vision problem: image classification. The ML sample was developed using MobileNet v2
-model trained on ImageNet dataset.
+This use-case example solves the classical computer vision problem of image classification. The ML sample was developed
+using the *MobileNet v2* model that was trained on the *ImageNet* dataset.
 
-Use case code could be found in [source/use_case/img_class](../../source/use_case/img_class]) directory.
+The use-case code can be found in the following directory:
+[source/use_case/img_class](../../source/use_case/img_class).
 
 ### Prerequisites
 
@@ -31,57 +30,62 @@
 
 ### Build options
 
-In addition to the already specified build option in the main documentation, Image Classification use case specifies:
+In addition to the already specified build option in the main documentation, the Image Classification use-case
+specifies:
 
-- `img_class_MODEL_TFLITE_PATH` - Path to the NN model file in TFLite format. Model will be processed and included into
-    the application axf file. The default value points to one of the delivered set of models. Note that the parameters
-    `img_class_LABELS_TXT_FILE`,`TARGET_PLATFORM` and `ETHOS_U55_ENABLED` should be aligned with the chosen model, i.e.:
-  - if `ETHOS_U55_ENABLED` is set to `On` or `1`, the NN model is assumed to be optimized. The model will naturally
-    fall back to the Arm® Cortex®-M CPU if an unoptimized model is supplied.
+- `img_class_MODEL_TFLITE_PATH` - The path to the NN model file in the `TFLite` format. The model is then processed and
+  included in the application `axf` file. The default value points to one of the delivered set of models.
+
+    Note that the parameters `img_class_LABELS_TXT_FILE`, `TARGET_PLATFORM`, and `ETHOS_U55_ENABLED` must be aligned
+    with the chosen model. In other words:
+
+  - If `ETHOS_U55_ENABLED` is set to `On` or `1`, then the NN model is assumed to be optimized. The model naturally
+    falls back to the Arm® *Cortex®-M* CPU if an unoptimized model is supplied.
   - if `ETHOS_U55_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized
-    model in this case will result in a runtime error.
+    model in this case results in a runtime error.
 
-- `img_class_FILE_PATH`: Path to the directory containing images, or path to a single image file, to be used file(s) in
-    the application. The default value points to the resources/img_class/samples folder containing the delivered
-    set of images. See more in the [Add custom input data section](#add-custom-input).
+- `img_class_FILE_PATH`: The path to the directory containing the images, or a path to a single image file, that is to
+   be used in the application. The default value points to the `resources/img_class/samples` folder containing the
+   delivered set of images.
 
-- `img_class_IMAGE_SIZE`: The NN model requires input images to be of a specific size. This parameter defines the
-    size of the image side in pixels. Images are considered squared. Default value is 224, which is what the supplied
-    MobilenetV2-1.0 model expects.
+    For further information, please refer to: [Add custom input data section](#add-custom-input).
 
-- `img_class_LABELS_TXT_FILE`: Path to the labels' text file to be baked into the application. The file is used to
-    map classified classes index to the text label. Change this parameter to point to the custom labels file to map
-    custom NN model output correctly.\
-    The default value points to the delivered labels.txt file inside the delivery package.
+- `img_class_IMAGE_SIZE`: The NN model requires input images to be of a specific size. This parameter defines the size
+  of the image side in pixels. Images are considered squared. The default value is `224`, which is what the supplied
+  *MobilenetV2-1.0* model expects.
 
-- `img_class_ACTIVATION_BUF_SZ`: The intermediate/activation buffer size reserved for the NN model. By default, it
-    is set to 2MiB and should be enough for most models.
+- `img_class_LABELS_TXT_FILE`: The path to the text file for the label. The file is used to map a classified class index
+  to the text label. The default value points to the delivered `labels.txt` file inside the delivery package. Change
+  this parameter to point to the custom labels file to map custom NN model output correctly.
 
-- `USE_CASE_BUILD`: set to img_class to build only this example.
+- `img_class_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model. By default, it
+  is set to 2MiB and is enough for most models.
 
-In order to build **ONLY** Image Classification example application add to the `cmake` command line specified in
-[Building](../documentation.md#Building) `-DUSE_CASE_BUILD=img_class`.
+- `USE_CASE_BUILD`: Set to `img_class` to build only this example.
+
+To build **ONLY** the Image Classification example application, add `-DUSE_CASE_BUILD=img_class` to the `cmake` command
+line, as specified in: [Building](../documentation.md#Building).
 
 ### Build process
 
-> **Note:** This section describes the process for configuring the build for `MPS3: SSE-300` for different target platform
-see [Building](../documentation.md#Building).
+> **Note:** This section describes the process for configuring the build for the *MPS3: SSE-300*. To build for a
+> different target platform, please refer to: [Building](../documentation.md#Building).
 
-Create a build directory folder and navigate inside:
+Create a build directory and navigate inside, like so:
 
 ```commandline
 mkdir build_img_class && cd build_img_class
 ```
 
-On Linux, execute the following command to build **only** Image Classification application to run on the Ethos-U55 Fast
-Model when providing only the mandatory arguments for CMake configuration:
+On Linux, when providing only the mandatory arguments for the CMake configuration, execute the following command to
+build **only** the Image Classification application to run on the *Ethos-U55* Fast Model:
 
 ```commandline
 cmake ../ -DUSE_CASE_BUILD=img_class
 ```
 
-To configure a build that can be debugged using Arm-DS, we can just specify
-the build type as `Debug` and use the `Arm Compiler` toolchain file:
+To configure a build that can be debugged using Arm DS, specify the build type as `Debug` and then use the
+`Arm Compiler` toolchain file:
 
 ```commandline
 cmake .. \
@@ -90,15 +94,15 @@
     -DUSE_CASE_BUILD=img_class
 ```
 
-Also see:
+For further information, please refer to:
 
 - [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
 - [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+- [Working with model debugger from Arm Fast Model Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
-> **Note:** If re-building with changed parameters values, it is highly advised to clean the build directory and re-run
->the CMake command.
+> **Note:** If re-building with changed parameter values, we recommend that you clean the build directory and re-run
+> the CMake command.
 
 If the CMake command succeeds, build the application as follows:
 
@@ -106,9 +110,9 @@
 make -j4
 ```
 
-Add VERBOSE=1 to see compilation and link details.
+To see compilation and link details, add `VERBOSE=1`.
 
-Results of the build will be placed under `build/bin` folder:
+Results of the build are placed under the `build/bin` folder, like so:
 
 ```tree
 bin
@@ -118,30 +122,32 @@
  └── sectors
       ├── images.txt
       └── img_class
-           ├── dram.bin
+           ├── ddr.bin
            └── itcm.bin
 ```
 
-Where:
+The `bin` folder contains the following files:
 
-- `ethos-u-img_class.axf`: The built application binary for the Image Classification use case.
+- `ethos-u-img_class.axf`: The built application binary for the Image Classification use-case.
 
-- `ethos-u-img_class.map`: Information from building the application (e.g. libraries used, what was optimized, location
-    of objects)
+- `ethos-u-img_class.map`: Information from building the application. For example: The libraries used, what was
+  optimized, and the location of objects.
 
 - `ethos-u-img_class.htm`: Human readable file containing the call graph of application functions.
 
-- `sectors/img_class`: Folder containing the built application, split into files for loading into different FPGA memory regions.
+- `sectors/img_class`: Folder containing the built application. It is split into files for loading into different
+  FPGA memory regions.
 
-- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in sectors/** folder.
+- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/` folder.
 
 ### Add custom input
 
-The application performs inference on input data found in the folder, or an individual file set by the CMake parameter
-img_class_FILE_PATH.
+The image classification application is set up to perform inferences on data found in the folder, or on an individual
+file, that is pointed to by the parameter `img_class_FILE_PATH`.
 
-To run the application with your own images, first create a folder to hold them and then copy the custom images into
-this folder, for example:
+To run the application with your own images, first create a folder to hold them and then copy the custom images into the
+following folder:
 
 ```commandline
 mkdir /tmp/custom_images
@@ -151,7 +157,7 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-Next set `img_class_FILE_PATH` to the location of this folder when building:
+Next, set `img_class_FILE_PATH` to the location of this folder when building:
 
 ```commandline
 cmake .. \
@@ -159,11 +165,11 @@
     -DUSE_CASE_BUILD=img_class
 ```
 
-The images found in the `img_class_FILE_PATH` folder will be picked up and automatically converted to C++ files during
-the CMake configuration stage and then compiled into the application during the build phase for performing inference
+The images found in the `img_class_FILE_PATH` folder are picked up and automatically converted to C++ files during the
+CMake configuration stage. They are then compiled into the application during the build phase for performing inference
 with.
 
-The log from the configuration stage should tell you what image directory path has been used:
+The log from the configuration stage tells you what image directory path has been used:
 
 ```log
 -- User option img_class_FILE_PATH is set to /tmp/custom_images
@@ -178,26 +184,29 @@
 -- img_class_IMAGE_SIZE=224
 ```
 
-After compiling, your custom images will have now replaced the default ones in the application.
+After compiling, your custom images have now replaced the default ones in the application.
 
-> **Note:** The CMake parameter IMAGE_SIZE should match the model input size. When building the application,
-if the size of any image does not match IMAGE_SIZE then it will be rescaled and padded so that it does.
+> **Note:** The CMake parameter `IMAGE_SIZE` must match the model input size. When building the application, if the size
+> of any image does not match `IMAGE_SIZE`, then it is rescaled and padded so that it does.
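+
+For the supplied *MobilenetV2-1.0* model, the input tensor size reported later by the `Show NN model info` menu option
+follows directly from this parameter. A minimal sketch, assuming the three-channel `UINT8` input shown in that log:
+
+```python
+# Size of the default 224x224, 3-channel UINT8 input tensor for MobilenetV2-1.0.
+image_size = 224     # img_class_IMAGE_SIZE default
+channels = 3         # RGB input
+bytes_per_value = 1  # UINT8
+
+print(1 * image_size * image_size * channels * bytes_per_value)  # -> 150528 bytes
+```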
 
 ### Add custom model
 
-The application performs inference using the model pointed to by the CMake parameter MODEL_TFLITE_PATH.
+The application performs inference using the model pointed to by the CMake parameter `MODEL_TFLITE_PATH`.
 
-> **Note:** If you want to run the model using Ethos-U55, ensure your custom model has been run through the Vela compiler
->successfully before continuing. See [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
+> **Note:** If you want to run the model using an *Ethos-U55*, ensure that your custom model has been successfully run
+> through the Vela compiler *before* continuing.
 
-To run the application with a custom model you will need to provide a labels_<model_name>.txt file of labels
-associated with the model. Each line of the file should correspond to one of the outputs in your model. See the provided
-labels_mobilenet_v2_1.0_224.txt file for an example.
+For further information: [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
+
+To run the application with a custom model, you must provide a `labels_<model_name>.txt` file of labels that are
+associated with the model. Each line of the file must correspond to one of the outputs in your model.
+
+Refer to the provided `labels_mobilenet_v2_1.0_224.txt` file for an example.
 
 Then, you must set `img_class_MODEL_TFLITE_PATH` to the location of the Vela processed model file and
 `img_class_LABELS_TXT_FILE` to the location of the associated labels file.
 
-An example:
+For example:
 
 ```commandline
 cmake .. \
@@ -208,11 +217,11 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-The `.tflite` model file pointed to by `img_class_MODEL_TFLITE_PATH` and labels text file pointed to by
-`img_class_LABELS_TXT_FILE` will be converted to C++ files during the CMake configuration stage and then compiled into
+The `.tflite` model file pointed to by `img_class_MODEL_TFLITE_PATH`, and the labels text file pointed to by
+`img_class_LABELS_TXT_FILE` are converted to C++ files during the CMake configuration stage. They are then compiled into
 the application for performing inference with.
 
-The log from the configuration stage should tell you what model path and labels file have been used:
+The log from the configuration stage tells you what model path and labels file have been used, for example:
 
 ```log
 -- User option img_class_MODEL_TFLITE_PATH is set to <path/to/custom_model_after_vela.tflite>
@@ -227,38 +236,44 @@
 ...
 ```
 
-After compiling, your custom model will have now replaced the default one in the application.
+After compiling, your custom model has now replaced the default one in the application.
 
 ## Setting up and running Ethos-U55 code sample
 
 ### Setting up the Ethos-U55 Fast Model
 
-The FVP is available publicly from [Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+The FVP is available publicly from
+[Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
 
-For Ethos-U55 evaluation, please download the MPS3 version of the Arm® Corstone™-300 model that contains Ethos-U55 and
-Cortex-M55. The model is currently only supported on Linux based machines. To install the FVP:
+For the *Ethos-U55* evaluation, please download the MPS3 version of the Arm® *Corstone™-300* model that contains both
+the *Ethos-U55* and *Cortex-M55*. The model is currently only supported on Linux-based machines.
 
-- Unpack the archive
+To install the FVP:
 
-- Run the install script in the extracted package
+- Unpack the archive.
+
+- Run the install script in the extracted package:
 
 ```commandline
-$./FVP_Corstone_SSE-300_Ethos-U55.sh
+./FVP_Corstone_SSE-300_Ethos-U55.sh
 ```
 
-- Follow the instructions to install the FVP to your desired location
+- Follow the instructions to install the FVP to the required location.
 
 ### Starting Fast Model simulation
 
-Pre-built application binary ethos-u-img_class.axf can be found in the bin/mps3-sse-300 folder of the delivery package.
-Assuming the install location of the FVP was set to ~/FVP_install_location, the simulation can be started by:
+The pre-built application binary `ethos-u-img_class.axf` can be found in the `bin/mps3-sse-300` folder of the delivery
+package.
+
+Assuming that the install location of the FVP was set to `~/FVP_install_location`, then the simulation can be started by
+using:
 
 ```commandline
 ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
 ./bin/mps3-sse-300/ethos-u-img_class.axf
 ```
 
-A log output should appear on the terminal:
+A log output appears on the terminal:
 
 ```log
 telnetterminal0: Listening for serial connection on port 5000
@@ -267,13 +282,13 @@
 telnetterminal5: Listening for serial connection on port 5003
 ```
 
-This will also launch a telnet window with the sample application's standard output and error log entries containing
-information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
-the input and output tensor sizes of the model compiled into the executable binary.
+This also launches a telnet window with the standard output and error log entries of the sample application. These
+contain information about the pre-built application version, the TensorFlow Lite Micro library version used, and the
+data type, as well as the input and output tensor sizes of the model compiled into the executable binary.
 
-After the application has started if `img_class_FILE_PATH` pointed to a single file (or a folder containing a single image)
-the inference starts immediately. In case of multiple inputs choice, it outputs a menu and waits for the user input from
-telnet terminal:
+After the application has started, if `img_class_FILE_PATH` points to a single file, or even a folder that contains a
+single image, then the inference starts immediately. If there are multiple inputs, it outputs a menu and then waits for
+input from the user:
 
 ```log
 User input required
@@ -289,45 +304,46 @@
 
 ```
 
-1. “Classify next image” menu option will run single inference on the next in line image from the collection of the
-    compiled images.
+What the preceding choices do:
 
-2. “Classify image at chosen index” menu option will run single inference on the chosen image.
+1. Classify next image: Runs a single inference on the next in line image from the collection of the compiled images.
 
-    > **Note:** Please make sure to select image index in the range of supplied images during application build.
-    By default, pre-built application has 4 images, indexes from 0 to 3.
+2. Classify image at chosen index: Runs inference on the chosen image.
 
-3. “Run classification on all images” menu option triggers sequential inference executions on all built-in images.
+    > **Note:** Please make sure to select an image index from within the range of supplied images during the
+    > application build. By default, a pre-built application has four images, with indexes from `0` to `3`.
 
-4. “Show NN model info” menu option prints information about model data type, input and output tensor sizes:
+3. Run classification on all images: Triggers sequential inference executions on all built-in images.
+
+4. Show NN model info: Prints information about the model data type, and the input and output tensor sizes:
 
     ```log
     INFO - uTFL version: 2.5.0
     INFO - Model info:
     INFO - Model INPUT tensors:
-    INFO - 	tensor type is UINT8
-    INFO - 	tensor occupies 150528 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1: 224
-    INFO - 		2: 224
-    INFO - 		3:   3
+    INFO -  tensor type is UINT8
+    INFO -  tensor occupies 150528 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1: 224
+    INFO -    2: 224
+    INFO -    3:   3
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.007812
     INFO - ZeroPoint[0] = 128
     INFO - Model OUTPUT tensors:
-    INFO - 	tensor type is UINT8
-    INFO - 	tensor occupies 1001 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1: 1001
+    INFO -  tensor type is UINT8
+    INFO -  tensor occupies 1001 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1: 1001
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.098893
     INFO - ZeroPoint[0] = 58
     INFO - Activation buffer (a.k.a tensor arena) size used: 521760
     INFO - Number of operators: 1
-    INFO - 	Operator 0: ethos-u
+    INFO -  Operator 0: ethos-u
     ```
 
-5. “List Images” menu option prints a list of pair image indexes - the original filenames embedded in the application:
+5. List Images: Prints a list of image indexes and the original filenames embedded in the application, like so:
 
     ```log
     INFO - List of Files:
@@ -341,7 +357,7 @@
 
 Please select the first menu option to execute Image Classification.
 
-The following example illustrates application output for classification:
+The following example illustrates an application output for classification:
 
 ```log
 INFO - Running inference on image 0 => cat.bmp
@@ -361,31 +377,31 @@
 INFO - NPU total cycles: 7490172
 ```
 
-It could take several minutes to complete one inference run (average time is 2-3 minutes).
+It can take several minutes to complete one inference run. The average time is around 2-3 minutes.
 
-The log shows the inference results for “image 0” (0 - index) that corresponds to “cat.bmp” in the sample image resource
-folder.
+The log shows the inference results for `image 0`, where `0` is the index, corresponding to `cat.bmp` in the sample
+image resource folder.
 
 The profiling section of the log shows that for this inference:
 
-- Ethos-U55's PMU report:
+- *Ethos-U55* PMU report:
 
-  - 7,490,172 total cycle: The number of NPU cycles
+  - 7,490,172 total cycle: The number of NPU cycles.
 
-  - 7,489,258 active cycles: number of NPU cycles that were used for computation
+  - 7,489,258 active cycles: The number of NPU cycles that were used for computation.
 
-  - 914 idle cycles: number of cycles for which the NPU was idle
+  - 914 idle cycles: The number of cycles for which the NPU was idle.
 
-  - 2,489,726 AXI0 read beats: The number of AXI beats with read transactions from AXI0 bus.
-    AXI0 is the bus where Ethos-U55 NPU reads and writes to the computation buffers (activation buf/tensor arenas).
+  - 2,489,726 AXI0 read beats: The number of AXI beats with read transactions from the AXI0 bus. AXI0 is the bus where
+    the *Ethos-U55* NPU reads and writes to the computation buffers (the activation buffer, also called the tensor
+    arena).
 
   - 1,098,726 AXI0 write beats: The number of AXI beats with write transactions to AXI0 bus.
 
-  - 471,129 AXI1 read beats: The number of AXI beats with read transactions from AXI1 bus.
-    AXI1 is the bus where Ethos-U55 NPU reads the model (read only)
+  - 471,129 AXI1 read beats: The number of AXI beats with read transactions from the AXI1 bus. AXI1 is the bus from
+    which the *Ethos-U55* NPU reads the model, so it is read-only.
 
-- For FPGA platforms, CPU cycle count can also be enabled. For FVP, however, CPU cycle counters should not be used as
-    the CPU model is not cycle-approximate or cycle-accurate.
+- For FPGA platforms, a CPU cycle count can also be enabled. However, do not use cycle counters for FVP, as the CPU
+  model is not cycle-approximate or cycle-accurate.
 
-The application prints the top 5 classes with indexes, confidence score and labels from associated
-labels_mobilenet_v2_1.0_224.txt file. The FVP window also shows the output on its LCD section.
+The application prints the top five classes with indexes, a confidence score, and labels from the associated
+`labels_mobilenet_v2_1.0_224.txt` file. The FVP window also shows the output on its LCD section.
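+
+A minimal sketch of this top-five selection (illustrative only: the probabilities and labels below are placeholders,
+not real model output):
+
+```python
+# Pick the five highest-scoring classes and pair them with their indexes and labels.
+import numpy as np
+
+probabilities = np.random.dirichlet(np.ones(1001))          # stand-in for the 1001-class output tensor
+labels = [f"class_{i}" for i in range(len(probabilities))]  # stand-in for the labels file contents
+
+top5 = np.argsort(probabilities)[-5:][::-1]
+for idx in top5:
+    print(f"{idx}: {labels[idx]} ({probabilities[idx]:.3f})")
+```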
diff --git a/docs/use_cases/inference_runner.md b/docs/use_cases/inference_runner.md
index 0ac604b..acfdb78 100644
--- a/docs/use_cases/inference_runner.md
+++ b/docs/use_cases/inference_runner.md
@@ -14,20 +14,21 @@
 
 ## Introduction
 
-This document describes the process of setting up and running the Arm® Ethos™-U55 NPU Inference Runner.
-The inference runner is intended for quickly checking profiling results for any desired network, providing it has been
+This document describes the process of setting up and running the Arm® *Ethos™-U55* NPU Inference Runner.
+
+The inference runner is intended for quickly checking profiling results for any network you want, provided it has been
 processed by the Vela compiler.
 
-A simple model is provided with the Inference Runner as an example, but it is expected that the user will replace this
-model with one they wish to profile, see [Add custom model](./inference_runner.md#Add-custom-model) for more details.
+A simple model is provided with the Inference Runner as an example. However, we expect you to replace this model with
+one that you want to profile.
 
-The inference runner is intended for quickly checking profiling results for any desired network
-providing it has been processed by the Vela compiler.
+For further details, refer to: [Add custom model](./inference_runner.md#Add-custom-model).
 
-The inference runner will populate all input tensors for the provided model with randomly generated data and an
-inference is then performed. Profiling results are then displayed in the console.
+The inference runner populates all input tensors for the provided model with randomly generated data and an inference is
+then performed. Profiling results are then displayed in the console.
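+
+As a rough desktop analogue of this behaviour, the sketch below fills every input tensor of a model with random data
+and times a single inference using the TensorFlow Lite Python interpreter. It is an illustration only, not the embedded
+implementation, and it assumes a plain `.tflite` model: a Vela-optimized model contains a custom Ethos-U operator that
+the stock desktop interpreter cannot run.
+
+```python
+# Illustrative desktop analogue: random inputs, one inference, crude wall-clock timing.
+import time
+import numpy as np
+import tensorflow as tf
+
+interpreter = tf.lite.Interpreter(model_path="model.tflite")  # illustrative path
+interpreter.allocate_tensors()
+
+for detail in interpreter.get_input_details():
+    shape, dtype = detail["shape"], detail["dtype"]
+    if np.issubdtype(dtype, np.integer):
+        data = np.random.randint(np.iinfo(dtype).min, np.iinfo(dtype).max, size=shape, dtype=dtype)
+    else:
+        data = np.random.uniform(-1.0, 1.0, size=shape).astype(dtype)
+    interpreter.set_tensor(detail["index"], data)
+
+start = time.perf_counter()
+interpreter.invoke()
+print(f"Inference took {time.perf_counter() - start:.3f} s")
+```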
 
-Use case code could be found in [source/use_case/inference_runner](../../source/use_case/inference_runner]) directory.
+The example use-case code can be found in the following directory:
+[source/use_case/inference_runner](../../source/use_case/inference_runner).
 
 ### Prerequisites
 
@@ -37,42 +38,46 @@
 
 ### Build options
 
-In addition to the already specified build option in the main documentation, the Inference Runner use case adds:
+In addition to the already specified build option in the main documentation, the Inference Runner use-case adds the
+following:
 
-- `inference_runner_MODEL_TFLITE_PATH` - Path to the NN model file in TFLite format. Model will be processed and
-  included into the application axf file. The default value points to one of the delivered set of models.
-  Note that the parameters `TARGET_PLATFORM` and `ETHOS_U55_ENABLED` should be aligned with the chosen model, i.e.:
-  - if `ETHOS_U55_ENABLED` is set to `On` or `1`, the NN model is assumed to be optimized. The model will naturally
-    all back to the Arm® Cortex®-M CPU if an unoptimized model is supplied.
-  - if `ETHOS_U55_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized model
-    in this case will result in a runtime error.
+- `inference_runner_MODEL_TFLITE_PATH` - The path to the NN model file in the `TFLite` format. The model is then
+  processed and included in the application `axf` file. The default value points to one of the delivered set of models.
 
-- `inference_runner_ACTIVATION_BUF_SZ`: The intermediate/activation buffer size reserved for the NN model. By
-    default, it is set to 2MiB and should be enough for most models.
+  Note that the parameters `TARGET_PLATFORM` and `ETHOS_U55_ENABLED` must be aligned with the chosen model. In other
+  words:
 
-In order to build **ONLY** Inference Runner example application add to the `cmake` command line specified in
-[Building](../documentation.md#Building) `-DUSE_CASE_BUILD=inferece_runner`.
+  - If `ETHOS_U55_ENABLED` is set to `On` or `1`, then the NN model is assumed to be optimized. The model naturally
+    falls back to the Arm® *Cortex®-M* CPU if an unoptimized model is supplied.
+  - If `ETHOS_U55_ENABLED` is set to `Off` or `0`, then the NN model is assumed to be unoptimized. Supplying an
+    optimized model in this case results in a runtime error.
+
+- `inference_runner_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model. By
+  default, it is set to 2MiB and is enough for most models.
+
+To build **ONLY** the Inference Runner example application, add `-DUSE_CASE_BUILD=inference_runner` to the `cmake`
+command line, as specified in: [Building](../documentation.md#Building).
 
 ### Build process
 
-> **Note:** This section describes the process for configuring the build for `MPS3: SSE-300` for different target platform
->see [Building](../documentation.md#Building) section.
+> **Note:** This section describes the process for configuring the build for the *MPS3: SSE-300*. To build for a
+> different target platform, please refer to: [Building](../documentation.md#Building).
 
-Create a build directory and navigate inside:
+Create a build directory and navigate inside, like so:
 
 ```commandline
 mkdir build_inference_runner && cd build_inference_runner
 ```
 
-On Linux, execute the following command to build **only** Inference Runner application to run on the Ethos-U55 Fast
-Model when providing only the mandatory arguments for CMake configuration:
+On Linux, when providing only the mandatory arguments for the CMake configuration, execute the following command to
+build **only** the Inference Runner application to run on the *Ethos-U55* Fast Model:
 
 ```commandline
 cmake ../ -DUSE_CASE_BUILD=inference_runner
 ```
 
-To configure a build that can be debugged using Arm-DS, we can just specify
-the build type as `Debug` and use the `Arm Compiler` toolchain file:
+To configure a build that can be debugged using Arm DS, specify the build type as `Debug` and then use the
+`Arm Compiler` toolchain file:
 
 ```commandline
 cmake .. \
@@ -81,25 +86,25 @@
     -DUSE_CASE_BUILD=inference_runner
 ```
 
-Also see:
+For further information, please refer to:
 
 - [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
 - [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+- [Working with model debugger from Arm Fast Model Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
-> **Note:** If re-building with changed parameters values, it is highly advised to clean the build directory and re-run
->the CMake command.
+> **Note:** If re-building with changed parameter values, we recommend that you clean the build directory and re-run
+> the CMake command.
 
-If the CMake command succeeded, build the application as follows:
+If the CMake command succeeds, build the application as follows:
 
 ```commandline
 make -j4
 ```
 
-Add VERBOSE=1 to see compilation and link details.
+To see compilation and link details, add `VERBOSE=1`.
 
-Results of the build will be placed under `build/bin` folder:
+Results of the build are placed under the `build/bin` folder, like so:
 
 ```tree
 bin
@@ -107,32 +112,35 @@
  ├── ethos-u-inference_runner.htm
  ├── ethos-u-inference_runner.map
  └── sectors
-      ├── images.txt
       └── inference_runner
-        ├── dram.bin
+        ├── ddr.bin
         └── itcm.bin
 ```
 
-Where:
+The `bin` folder contains the following files:
 
-- `ethos-u-inference_runner.axf`: The built application binary for the Inference Runner use case.
+- `ethos-u-inference_runner.axf`: The built application binary for the Inference Runner use-case.
 
-- `ethos-u-inference_runner.map`: Information from building the application (e.g. libraries used, what was optimized,
-    location of objects)
+- `ethos-u-inference_runner.map`: Information from building the application. For example: The libraries used, what was
+  optimized, and the location of objects.
 
 - `ethos-u-inference_runner.htm`: Human readable file containing the call graph of application functions.
 
-- `sectors/inference_runner`: Folder containing the built application, split into files for loading into different FPGA memory regions.
+- `sectors/inference_runner`: Folder containing the built application. It is split into files for loading into
+  different FPGA memory regions.
 
-- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in sectors/**
-    folder.
+- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/` folder.
 
 ### Add custom model
 
-The application performs inference using the model pointed to by the CMake parameter `inference_runner_MODEL_TFLITE_PATH`.
+The application performs inference using the model pointed to by the CMake parameter
+`inference_runner_MODEL_TFLITE_PATH`.
 
-> **Note:** If you want to run the model using Ethos-U55, ensure your custom model has been run through the Vela compiler
->successfully before continuing. See [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
+> **Note:** If you want to run the model using an *Ethos-U55*, ensure that your custom model has been successfully run
+> through the Vela compiler *before* continuing.
+
+For further information: [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
 
 Then, you must set `inference_runner_MODEL_TFLITE_PATH` to the location of the Vela processed model file.
 
@@ -146,10 +154,10 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-The `.tflite` model file pointed to by `inference_runner_MODEL_TFLITE_PATH` will be converted to C++ files during the CMake
-configuration stage and then compiled into the application for performing inference with.
+The `.tflite` model file pointed to by `inference_runner_MODEL_TFLITE_PATH` is converted to C++ files during the CMake
+configuration stage. It is then compiled into the application for performing inference with.
 
-The log from the configuration stage should tell you what model path has been used:
+The log from the configuration stage tells you what model path has been used, for example:
 
 ```stdout
 -- User option inference_runner_MODEL_TFLITE_PATH is set to <path/to/custom_model_after_vela.tflite>
@@ -160,7 +168,7 @@
 ...
 ```
 
-After compiling, your custom model will have now replaced the default one in the application.
+After compiling, your custom model has now replaced the default one in the application.
 
 ## Setting up and running Ethos-U55 code sample
 
@@ -169,30 +177,35 @@
 The FVP is available publicly from
 [Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
 
-For Ethos-U55 evaluation, please download the MPS3 version of the Arm® Corstone™-300 model that contains Ethos-U55 and
-Cortex-M55. The model is currently only supported on Linux based machines. To install the FVP:
+For the *Ethos-U55* evaluation, please download the MPS3 version of the Arm® *Corstone™-300* model that contains both
+the *Ethos-U55* and *Cortex-M55*. The model is currently only supported on Linux-based machines.
 
-- Unpack the archive
+To install the FVP:
 
-- Run the install script in the extracted package
+- Unpack the archive.
+
+- Run the install script in the extracted package:
 
 ```commandline
 ./FVP_Corstone_SSE-300_Ethos-U55.sh
 ```
 
-- Follow the instructions to install the FVP to your desired location
+- Follow the instructions to install the FVP to the required location.
 
 ### Starting Fast Model simulation
 
-Once completed the building step, application binary ethos-u-infernce_runner.axf can be found in the `build/bin` folder.
-Assuming the install location of the FVP was set to ~/FVP_install_location, the simulation can be started by:
+Once the building step has been completed, the application binary `ethos-u-inference_runner.axf` can be found in the
+`build/bin` folder.
+
+Assuming that the install location of the FVP was set to `~/FVP_install_location`, then the simulation can be started by
+using:
 
 ```commandline
 ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
 ./bin/mps3-sse-300/ethos-u-inference_runner.axf
 ```
 
-A log output should appear on the terminal:
+A log output appears on the terminal:
 
 ```log
 telnetterminal0: Listening for serial connection on port 5000
@@ -201,9 +214,9 @@
 telnetterminal5: Listening for serial connection on port 5003
 ```
 
-This will also launch a telnet window with the sample application's standard output and error log entries containing
-information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
-the input and output tensor sizes of the model compiled into the executable binary.
+This also launches a telnet window with the standard output and error log entries of the sample application. These
+contain information about the pre-built application version, the TensorFlow Lite Micro library version used, and the
+data type, as well as the input and output tensor sizes of the model compiled into the executable binary.
 
 ### Running Inference Runner
 
@@ -223,23 +236,23 @@
 ```
 
 After running an inference on randomly generated data, the output of the log shows the profiling results that for this
-inference:
+inference. For example:
 
-- Ethos-U55's PMU report:
+- *Ethos-U55* PMU report:
 
-  - 34,178 total cycle: The number of NPU cycles
+  - 34,178 total cycle: The number of NPU cycles.
 
-  - 33,145 active cycles: number of NPU cycles that were used for computation
+  - 33,145 active cycles: The number of NPU cycles that were used for computation.
 
-  - 1,033 idle cycles: number of cycles for which the NPU was idle
+  - 1,033 idle cycles: The number of cycles for which the NPU was idle.
 
-  - 9,332 AXI0 read beats: The number of AXI beats with read transactions from AXI0 bus.
-    AXI0 is the bus where Ethos-U55 NPU reads and writes to the computation buffers (activation buf/tensor arenas).
+  - 9,332 AXI0 read beats: The number of AXI beats with read transactions from the AXI0 bus. AXI0 is the bus where the
+    *Ethos-U55* NPU reads and writes to the computation buffers (the activation buffer, also called the tensor arena).
 
   - 3,248 AXI0 write beats: The number of AXI beats with write transactions to AXI0 bus.
 
-  - 2,219 AXI1 read beats: The number of AXI beats with read transactions from AXI1 bus.
-    AXI1 is the bus where Ethos-U55 NPU reads the model (read only)
+  - 2,219 AXI1 read beats: The number of AXI beats with read transactions from the AXI1 bus. AXI1 is the bus from
+    which the *Ethos-U55* NPU reads the model, so it is read-only.
 
-- For FPGA platforms, CPU cycle count can also be enabled. For FVP, however, CPU cycle counters should not be used as
-    the CPU model is not cycle-approximate or cycle-accurate.
+- For FPGA platforms, a CPU cycle count can also be enabled. However, do not use cycle counters for FVP, as the CPU
+  model is not cycle-approximate or cycle-accurate.
diff --git a/docs/use_cases/kws.md b/docs/use_cases/kws.md
index dc0e1f5..9b0372c 100644
--- a/docs/use_cases/kws.md
+++ b/docs/use_cases/kws.md
@@ -17,50 +17,54 @@
 
 ## Introduction
 
-This document describes the process of setting up and running the Arm® Ethos™-U55 Keyword Spotting
-example.
+This document describes the process of setting up and running the Arm® *Ethos™-U55* Keyword Spotting example.
 
-Use case code could be found in [source/use_case/kws](../../source/use_case/kws]) directory.
+The use-case code can be found in the following directory: [source/use_case/kws](../../source/use_case/kws).
 
 ### Preprocessing and feature extraction
 
-The DS-CNN keyword spotting model that is supplied with the Code Samples expects audio data to be preprocessed in
-a specific way before performing an inference. This section aims to provide an overview of the feature extraction
-process used.
+The `DS-CNN` keyword spotting model that is used with the Code Samples expects audio data to be preprocessed in a
+specific way before performing an inference.
 
-First the audio data is normalized to the range (-1, 1).
+Therefore, this section aims to provide an overview of the feature extraction process used.
 
-> **Note:** Mel-frequency cepstral coefficients (MFCCs) are a common feature extracted from audio data and can be used as
->input for machine learning tasks like keyword spotting and speech recognition.
->See source/application/main/include/Mfcc.hpp for implementation details.
+First, the audio data is normalized to the range (`-1`, `1`).
 
-Next, a window of 640 audio samples is taken from the start of the audio clip. From these 640 samples we calculate 10
+> **Note:** Mel-Frequency Cepstral Coefficients (MFCCs) are a common feature that is extracted from audio data and can
+> be used as input for machine learning tasks such as keyword spotting and speech recognition. For implementation
+> details, please refer to: `source/application/main/include/Mfcc.hpp`
+
+Next, a window of 640 audio samples is taken from the start of the audio clip. From these 640 samples, we calculate 10
 MFCC features.
 
 The whole window is shifted to the right by 320 audio samples and 10 new MFCC features are calculated. This process of
-shifting and calculating is repeated until the end of the 16000 audio samples needed to perform an inference is reached.
-In total this will be 49 windows that each have 10 MFCC features calculated for them, giving an input tensor of shape
-49x10.
+shifting and calculating is repeated until the end of the 16000 audio samples required to perform an inference is
+reached.
 
-These extracted features are quantized, and an inference is performed.
+In total, this is 49 windows that each have 10 MFCC features calculated for them, giving an input tensor of shape 49x10.
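+
+A minimal sketch of the arithmetic behind these figures (the variable names are illustrative, not taken from the code
+samples):
+
+```python
+# Derive the 49x10 MFCC input tensor shape from the values quoted above.
+window_len = 640     # audio samples per MFCC window
+window_stride = 320  # samples the window shifts each step
+n_mfcc = 10          # MFCC features calculated per window
+clip_len = 16000     # audio samples consumed by one inference
+
+n_windows = (clip_len - window_len) // window_stride + 1
+print(n_windows, n_mfcc)  # -> 49 10, matching the 49x10 input tensor
+```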
+
+These extracted features are quantized and an inference is performed.
 
 ![KWS preprocessing](../media/KWS_preprocessing.png)
 
-If the audio clip is longer than 16000 audio samples then the initial starting position is offset by 16000/2 = 8000
-audio samples. From this new starting point, MFCC features for the next 16000 audio samples are calculated and another
-inference is performed (i.e. do an inference for samples 8000-24000).
+If the audio clip is longer than 16000 audio samples, then the initial starting position is offset by `16000/2 = 8000`
+audio samples. From this new starting point, MFCC features for the next `16000` audio samples are calculated and another
+inference is performed. In other words, do an inference for samples `8000-24000`.
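+
+Continuing the sketch above with a hypothetical clip of 40000 samples (how a final partial window is handled is not
+covered here), the start positions of the successive inference windows can be listed as follows:
+
+```python
+# Start positions of successive 16000-sample inference windows for a longer clip.
+clip_samples = 40000     # hypothetical clip length, for illustration only
+infer_len = 16000        # samples consumed by one inference
+stride = infer_len // 2  # windows overlap by half, giving the 8000-sample offset
+
+starts = list(range(0, clip_samples - infer_len + 1, stride))
+print(starts)            # -> [0, 8000, 16000, 24000]
+```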
 
-> **Note:** Parameters of the MFCC feature extraction such as window size, stride, number of features etc. all depend on
->what was used during model training. These values are specific to each model and if you try a different keyword spotting
->model that uses MFCC input then values are likely to need changing to match the new model.
-In addition, MFCC feature extraction methods can vary slightly with different normalization methods or scaling etc. being used.
+> **Note:** Parameters of the MFCC feature extraction all depend on what was used during model training. These values
+> are specific to each model. If you try a different keyword spotting model that uses MFCC input, then check whether
+> the values need changing to match the new model.
+
+In addition, MFCC feature extraction methods can vary slightly with different normalization methods or scaling being
+used.
 
 ### Postprocessing
 
-After an inference is complete the highest probability detected word is output to console, providing its probability is
-larger than a threshold value (default 0.9).
+After an inference is complete, the word with the highest detected probability is output to the console, provided that
+its probability is larger than a threshold value. The default threshold is set to `0.9`.
 
-If multiple inferences are performed for an audio clip, then multiple results will be output.
+If multiple inferences are performed for an audio clip, then multiple results are output.
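+
+A minimal sketch of this post-processing rule (illustrative only: the label names below are placeholders, not the
+model's real output labels):
+
+```python
+# Report the top-scoring keyword only if its probability clears the threshold (0.9 by default).
+def report_keyword(probabilities, labels, threshold=0.9):
+    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
+    if probabilities[best] > threshold:
+        print(f"Detected: {labels[best]} ({probabilities[best]:.2f})")
+
+report_keyword([0.02, 0.95, 0.03], ["label_a", "label_b", "label_c"])  # -> Detected: label_b (0.95)
+```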
 
 ### Prerequisites
 
@@ -70,58 +74,67 @@
 
 ### Build options
 
-In addition to the already specified build option in the main documentation, keyword spotting use case adds:
+In addition to the already specified build option in the main documentation, the Keyword Spotting use-case adds:
 
-- `kws_MODEL_TFLITE_PATH` - Path to the NN model file in TFLite format. Model will be processed and included into the application axf file. The default value points to one of the delivered set of models. Note that the parameters `kws_LABELS_TXT_FILE`,`TARGET_PLATFORM` and `ETHOS_U55_ENABLED` should be aligned with the chosen model, i.e.:
-  - if `ETHOS_U55_ENABLED` is set to `On` or `1`, the NN model is assumed to be optimized. The model will naturally fall back to the Arm® Cortex®-M CPU if an unoptimized model is supplied.
-  - if `ETHOS_U55_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized model in this case will result in a runtime error.
+- `kws_MODEL_TFLITE_PATH` - The path to the NN model file in `TFLite` format. The model is processed and then included
+  into the application `axf` file. The default value points to one of the delivered set of models. Note that the
+  parameters `kws_LABELS_TXT_FILE`,`TARGET_PLATFORM`, and `ETHOS_U55_ENABLED` must be aligned with the chosen model. In
+  other words:
+  - If `ETHOS_U55_ENABLED` is set to `On` or `1`, then the NN model is assumed to be optimized. The model naturally
+    falls back to the Arm® *Cortex®-M* CPU if an unoptimized model is supplied.
+  - If `ETHOS_U55_ENABLED` is set to `Off` or `0`, then the NN model is assumed to be unoptimized. Supplying an
+    optimized model in this case results in a runtime error.
 
-- `kws_FILE_PATH`: Path to the directory containing audio files, or a path to single WAV file, to be used in the application. The default value points
-    to the resources/kws/samples folder containing the delivered set of audio clips.
+- `kws_FILE_PATH`: The path to the directory containing audio files, or a path to single WAV file, to be used in the
+  application. The default value points to the `resources/kws/samples` folder that contains the delivered set of audio
+  clips.
 
-- `kws_LABELS_TXT_FILE`: Path to the labels' text file. The file is used to map key word class index to the text
-    label. The default value points to the delivered labels.txt file inside the delivery package.
+- `kws_LABELS_TXT_FILE`: The path to the labels text file. The file is used to map the keyword class index to the text
+  label. The default value points to the delivered `labels.txt` file inside the delivery package.
 
-- `kws_AUDIO_RATE`: Input data sampling rate. Each audio file from kws_FILE_PATH is preprocessed during the build to
-    match NN model input requirements. Default value is 16000.
+- `kws_AUDIO_RATE`: The input data sampling rate. Each audio file from `kws_FILE_PATH` is preprocessed during the build
+  to match the NN model input requirements. The default value is `16000`.
 
-- `kws_AUDIO_MONO`: If set to ON the audio data will be converted to mono. Default is ON.
+- `kws_AUDIO_MONO`: If set to `ON`, then the audio data is converted to mono. The default value is `ON`.
 
-- `kws_AUDIO_OFFSET`: Start loading audio data starting from this offset (in seconds). Default value is 0.
+- `kws_AUDIO_OFFSET`: The offset, in seconds, from which audio data loading begins. The default value is `0`.
 
-- `kws_AUDIO_DURATION`: Length of the audio data to be used in the application in seconds. Default is 0 meaning the
-    whole audio file will be taken.
+- `kws_AUDIO_DURATION`: The length of the audio data to be used in the application in seconds. The default is `0`,
+  meaning that the whole audio file is used.
 
 - `kws_AUDIO_MIN_SAMPLES`: Minimum number of samples required by the network model. If the audio clip is shorter than
-    this number, it is padded with zeros. Default value is 16000.
+  this number, then it is padded with zeros. The default value is `16000`.
 
-- `kws_MODEL_SCORE_THRESHOLD`: Threshold value [0.0, 1.0] that must be applied to the inference results for a
-    label to be deemed valid. Default is 0.9
+- `kws_MODEL_SCORE_THRESHOLD`: Threshold value, in the range 0.0 to 1.0, that must be applied to the inference results
+  for a label to be deemed valid. The default is `0.9`.
 
-- `kws_ACTIVATION_BUF_SZ`: The intermediate/activation buffer size reserved for the NN model. By default, it is set
-    to 1MiB and should be enough for most models.
+- `kws_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model. By default, it is set
+  to 2MiB and should be enough for most models.
 
-In order to build **ONLY** keyword spotting example application add to the `cmake` command line specified in [Building](../documentation.md#Building) `-DUSE_CASE_BUILD=kws`.
+To **ONLY** build the Keyword Spotting example application, add `-DUSE_CASE_BUILD=kws` to the `cmake` command line, as
+specified in: [Building](../documentation.md#Building).
 
 ### Build process
 
-> **Note:** This section describes the process for configuring the build for `MPS3: SSE-300` for different target platform see [Building](../documentation.md#Building) section.
+> **Note:** This section describes the process for configuring the build for the *MPS3: SSE-300*. To build for a
+> different target platform, please refer to: [Building](../documentation.md#Building).
 
-In order to build **only** the keyword spotting example, create a build directory and
-navigate inside, for example:
+To build **only** the keyword spotting example, create a build directory and navigate inside, like so:
 
 ```commandline
 mkdir build_kws && cd build_kws
 ```
 
-On Linux, execute the following command to build Keyword Spotting application to run on the Ethos-U55 Fast Model when providing only the mandatory arguments for CMake configuration:
+On Linux, when providing only the mandatory arguments for CMake configuration, execute the following command to build
+**only** the Keyword Spotting application to run on the *Ethos-U55* Fast Model:
 
 ```commandline
 cmake ../ -DUSE_CASE_BUILD=kws
 ```
 
-To configure a build that can be debugged using Arm-DS, we can just specify
-the build type as `Debug` and use the `Arm Compiler` toolchain file:
+To configure a build that can be debugged using Arm DS, specify the build type as `Debug` and then use the
+`Arm Compiler` toolchain file:
 
 ```commandline
 cmake .. \
@@ -130,24 +143,25 @@
     -DUSE_CASE_BUILD=kws
 ```
 
-Also see:
+For further information, please refer to:
 
 - [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
 - [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+- [Working with model debugger from Arm Fast Model Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
-> **Note:** If re-building with changed parameters values, it is highly advised to clean the build directory and re-run the CMake command.
+> **Note:** If re-building with changed parameter values, we recommend that you clean the build directory and re-run
+> the CMake command.
 
-If the CMake command succeeded, build the application as follows:
+If the CMake command succeeds, build the application as follows:
 
 ```commandline
 make -j4
 ```
 
-Add VERBOSE=1 to see compilation and link details.
+To see compilation and link details, add `VERBOSE=1`.
 
-Results of the build will be placed under `build/bin` folder:
+Results of the build are placed under the `build/bin` folder, like so:
 
 ```tree
 bin
@@ -157,29 +171,31 @@
  └── sectors
       ├── images.txt
       └── kws
-           ├── dram.bin
+           ├── ddr.bin
            └── itcm.bin
 ```
 
-Where:
+The `bin` folder contains the following files:
 
-- `ethos-u-kws.axf`: The built application binary for the Keyword Spotting use case.
+- `ethos-u-kws.axf`: The built application binary for the Keyword Spotting use-case.
 
-- `ethos-u-kws.map`: Information from building the application (e.g. libraries used, what was optimized, location of
-    objects)
+- `ethos-u-kws.map`: Information from building the application. For example: The libraries used, what was optimized, and
+  the location of objects.
 
 - `ethos-u-kws.htm`: Human readable file containing the call graph of application functions.
 
-- `sectors/kws`: Folder containing the built application, split into files for loading into different FPGA memory regions.
+- `sectors/kws`: Folder containing the built application. It is split into files for loading into different FPGA memory
+  regions.
 
-- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in sectors/\*\* folder.
+- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/..` folder.
 
 ### Add custom input
 
-The application performs inference on audio data found in the folder, or an individual file, set by the CMake parameter `kws_FILE_PATH`.
+The application performs inference on audio data found in the folder, or an individual file, that is pointed to by the
+CMake parameter `kws_FILE_PATH`.
 
-To run the application with your own audio clips first create a folder to hold them and then copy the custom audio files
-into this folder, for example:
+To run the application with your own audio clips, first create a folder to hold them and then copy the custom clips into
+this folder. For example:
 
 ```commandline
 mkdir /tmp/custom_wavs
@@ -189,7 +205,7 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-Next set `kws_FILE_PATH` to the location of this folder when building:
+Next, when building, set `kws_FILE_PATH` to the location of this folder:
 
 ```commandline
 cmake .. \
@@ -197,10 +213,11 @@
     -DUSE_CASE_BUILD=kws
 ```
 
-The audio clips found in the `kws_FILE_PATH` folder will be picked up and automatically converted to C++ files during the
-CMake configuration stage and then compiled into the application during the build phase for performing inference with.
+The audio files found in the `kws_FILE_PATH` folder are picked up and automatically converted to C++ files during the
+CMake configuration stage. They are then compiled into the application during the build phase for performing inference
+with.
 
-The log from the configuration stage should tell you what audio clip directory path has been used:
+The log from the configuration stage tells you what audio directory path has been used:
 
 ```log
 -- User option kws_FILE_PATH is set to /tmp/custom_wavs
@@ -212,25 +229,29 @@
 -- kws_FILE_PATH=/tmp/custom_wavs
 ```
 
-After compiling, your custom inputs will have now replaced the default ones in the application.
+After compiling, your custom inputs have now replaced the default ones in the application.
 
-> **Note:** The CMake parameter `kws_AUDIO_MIN_SAMPLES` determine the minimum number of input sample. When building the application,
-if the size of the audio clips is less then `kws_AUDIO_MIN_SAMPLES` then it will be padded so that it does.
+> **Note:** The CMake parameter `kws_AUDIO_MIN_SAMPLES` determines the minimum number of input samples. When building
+> the application, if the size of an audio clip is less than `kws_AUDIO_MIN_SAMPLES`, then it is padded with zeros
+> until it matches.
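+
+As a minimal sketch of this padding behaviour (the function name and the list-based audio representation are
+illustrative only):
+
+```python
+def pad_to_min_samples(samples, min_samples=16000):
+    """Pad an audio clip with zeros up to the minimum sample count."""
+    return samples + [0] * max(0, min_samples - len(samples))
+
+print(len(pad_to_min_samples([0.1] * 9000)))  # 16000
+```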
 
 ### Add custom model
 
 The application performs inference using the model pointed to by the CMake parameter `kws_MODEL_TFLITE_PATH`.
 
-> **Note:** If you want to run the model using Ethos-U55, ensure your custom model has been run through the Vela compiler successfully before continuing. See [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
+> **Note:** If you want to run the model using an *Ethos-U55*, ensure that your custom model has been successfully run
+> through the Vela compiler *before* continuing.
 
-To run the application with a custom model you will need to provide a labels_<model_name>.txt file of labels
-associated with the model. Each line of the file should correspond to one of the outputs in your model. See the provided
-ds_cnn_labels.txt file for an example.
+For further information, please refer to:
+[Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
 
-Then, you must set kws_MODEL_TFLITE_PATH to the location of the Vela processed model file and kws_LABELS_TXT_FILE
-to the location of the associated labels file.
+To run the application with a custom model, you must provide a `labels_<model_name>.txt` file of labels that are
+associated with the model. Each line of the file must correspond to one of the outputs in your model. Refer to the
+provided `ds_cnn_labels.txt` file for an example.
 
-An example:
+Then, you must set `kws_MODEL_TFLITE_PATH` to the location of the Vela processed model file and `kws_LABELS_TXT_FILE`
+to the location of the associated labels file.
+
+For example:
 
 ```commandline
 cmake .. \
@@ -241,11 +262,11 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-The `.tflite` model file pointed to by `kws_MODEL_TFLITE_PATH` and labels text file pointed to by `kws_LABELS_TXT_FILE` will
-be converted to C++ files during the CMake configuration stage and then compiled into the application for performing
-inference with.
+The `.tflite` model file pointed to by `kws_MODEL_TFLITE_PATH` and labels text file pointed to by `kws_LABELS_TXT_FILE`
+are converted to C++ files during the CMake configuration stage. They are then compiled into the application for
+performing inference with.
 
-The log from the configuration stage should tell you what model path and labels file have been used:
+The log from the configuration stage tells you what model path and labels file have been used, for example:
 
 ```log
 -- User option kws_MODEL_TFLITE_PATH is set to <path/to/custom_model_after_vela.tflite>
@@ -260,38 +281,43 @@
 ...
 ```
 
-After compiling, your custom model will have now replaced the default one in the application.
+After compiling, your custom model has now replaced the default one in the application.
 
 ## Setting up and running Ethos-U55 code sample
 
 ### Setting up the Ethos-U55 Fast Model
 
-The FVP is available publicly from [Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+The FVP is available publicly from
+[Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
 
-For Ethos-U55 evaluation, please download the MPS3 version of the Arm® Corstone™-300 model that contains Ethos-U55 and
-Cortex-M55. The model is currently only supported on Linux based machines. To install the FVP:
+For the *Ethos-U55* evaluation, please download the MPS3 version of the Arm® *Corstone™-300* model that contains both
+the *Ethos-U55* and *Cortex-M55*. The model is only supported on Linux-based machines.
 
-- Unpack the archive
+To install the FVP:
 
-- Run the install script in the extracted package
+- Unpack the archive.
+
+- Run the install script in the extracted package:
 
 ```commandline
 ./FVP_Corstone_SSE-300_Ethos-U55.sh
 ```
 
-- Follow the instructions to install the FVP to your desired location
+- Follow the instructions to install the FVP to the required location.
 
 ### Starting Fast Model simulation
 
-Once completed the building step, application binary ethos-u-kws.axf can be found in the `build/bin` folder.
-Assuming the install location of the FVP was set to ~/FVP_install_location, the simulation can be started by:
+Once the building has been completed, the application binary `ethos-u-kws.axf` can be found in the `build/bin` folder.
+
+Assuming that the install location of the FVP was set to `~/FVP_install_location`, then the simulation can be started by
+using:
 
 ```commandline
 ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
 ./bin/mps3-sse-300/ethos-u-kws.axf
 ```
 
-A log output should appear on the terminal:
+A log output appears on the terminal:
 
 ```log
 telnetterminal0: Listening for serial connection on port 5000
@@ -300,12 +326,15 @@
 telnetterminal5: Listening for serial connection on port 5003
 ```
 
-This will also launch a telnet window with the sample application's standard output and error log entries containing
-information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
-the input and output tensor sizes of the model compiled into the executable binary.
+This also launches a telnet window with the standard output and error log entries of the sample application. The log
+contains information about the pre-built application version, the TensorFlow Lite Micro library version used, and the
+data type, as well as the input and output tensor sizes of the model compiled into the executable binary.
 
-After the application has started if `kws_FILE_PATH` pointed to a single file (or a folder containing a single input file)
-the inference starts immediately. In case of multiple inputs choice, it outputs a menu and waits for the user input from telnet terminal:
+After the application has started, if `kws_FILE_PATH` points to a single file, or even a folder that contains a single
+input file, then the inference starts immediately. If there are multiple inputs, it outputs a menu and then waits for
+input from the user.
+
+For example:
 
 ```log
 User input required
@@ -321,49 +350,46 @@
 
 ```
 
-1. “Classify next audio clip” menu option will run inference on the next in line voice clip from the collection of the
-    compiled audio.
+What the preceding choices do:
 
-    > **Note:** Note that if the clip is over a certain length, the application will invoke multiple inference runs to cover the entire file.
+1. Classify next audio clip: Runs inference on the next audio clip from the collection of compiled audio clips.
 
-2. “Classify audio clip at chosen index” menu option will run inference on the chosen audio clip.
+2. Classify audio clip at chosen index: Runs inference on the chosen audio clip.
 
-    > **Note:** Please make sure to select audio clip index in the range of supplied audio clips during application build.
-    By default, pre-built application has 4 files, indexes from 0 to 3.
+    > **Note:** Please make sure to select audio clip index within the range of supplied audio clips during application
+    > build. By default, a pre-built application has four files, with indexes from `0` to `3`.
 
-3. “Run classification on all audio clips” menu option triggers sequential inference executions on all built-in voice
-    samples.
+3. Run classification on all audio clips: Triggers sequential inference executions on all built-in voice samples.
 
-4. “Show NN model info” menu option prints information about model data type, input and output tensor sizes:
+4. Show NN model info: Prints information about the model data type, and the input and output tensor sizes:
 
     ```log
     INFO - uTFL version: 2.5.0
     INFO - Model info:
     INFO - Model INPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 490 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1:   1
-    INFO - 		2:  49
-    INFO - 		3:  10
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 490 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:   1
+    INFO -    2:  49
+    INFO -    3:  10
     INFO - Quant dimension: 0
     INFO - Scale[0] = 1.107164
     INFO - ZeroPoint[0] = 95
     INFO - Model OUTPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 12 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1:  12
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 12 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:  12
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.003906
     INFO - ZeroPoint[0] = -128
     INFO - Activation buffer (a.k.a tensor arena) size used: 72848
     INFO - Number of operators: 1
-    INFO - 	Operator 0: ethos-u
+    INFO -  Operator 0: ethos-u
     ```
 
-5. “List audio clips” menu option prints a list of pair audio indexes - the original filenames embedded in the
-    application:
+5. List audio clips: Prints a list of audio clip indexes paired with the original filenames that are embedded in the
+    application, like so:
 
     ```log
     [INFO] List of Files:
@@ -375,9 +401,9 @@
 
 ### Running Keyword Spotting
 
-Selecting the first option will run inference on the first file.
+Please select the first menu option to execute inference on the first file.
 
-The following example illustrates application output for classification:
+The following example illustrates the output for classification:
 
 ```log
 INFO - Running inference on audio clip 0 => down.wav
 INFO - Inference 1/1
@@ -393,26 +419,28 @@
 INFO - NPU total cycles: 681172
 ```
 
-Each inference should take less than 30 seconds on most systems running Fast Model.
+On most systems running Fast Model, each inference takes under 30 seconds.
+
 The profiling section of the log shows that for this inference:
 
-- Ethos-U55's PMU report:
+- *Ethos-U55* PMU report:
 
-  - 681,172 total cycle: The number of NPU cycles
+  - 681,172 total cycle: The number of NPU cycles.
 
-  - 680,611 active cycles: The number of NPU cycles that were used for computation
+  - 680,611 active cycles: The number of NPU cycles that were used for computation.
 
-  - 561 idle cycles: number of cycles for which the NPU was idle
+  - 561 idle cycles: The number of cycles for which the NPU was idle.
 
-  - 217,385 AXI0 read beats: The number of AXI beats with read transactions from AXI0 bus.
-    AXI0 is the bus where Ethos-U55 NPU reads and writes to the computation buffers (activation buf/tensor arenas).
+  - 217,385 AXI0 read beats: The number of AXI beats with read transactions from the AXI0 bus. AXI0 is the bus where the
+    *Ethos-U55* NPU reads and writes to the computation buffers (activation buffers, or tensor arenas).
 
   - 82,607 write cycles: The number of AXI beats with write transactions to AXI0 bus.
 
-  - 59,608 AXI1 read beats: The number of AXI beats with read transactions from AXI1 bus.
-    AXI1 is the bus where Ethos-U55 NPU reads the model (read only)
+  - 59,608 AXI1 read beats: The number of AXI beats with read transactions from the AXI1 bus. AXI1 is the bus where the
+    *Ethos-U55* NPU reads the model, so it is read-only.
 
-- For FPGA platforms, CPU cycle count can also be enabled. For FVP, however, CPU cycle counters should not be used as
-    the CPU model is not cycle-approximate or cycle-accurate.
+- For FPGA platforms, a CPU cycle count can also be enabled. However, do not use cycle counters for FVP, as the CPU
+  model is not cycle-approximate or cycle-accurate.
 
-The application prints the highest confidence score and the associated label from ds_cnn_labels.txt file.
+> **Note:** The application prints the highest confidence score and the associated label from the `ds_cnn_labels.txt`
+> file.
diff --git a/docs/use_cases/kws_asr.md b/docs/use_cases/kws_asr.md
index 9fbab26..0297f05 100644
--- a/docs/use_cases/kws_asr.md
+++ b/docs/use_cases/kws_asr.md
@@ -20,108 +20,136 @@
 ## Introduction
 
 This document describes the process of setting up and running an example of sequential execution of the Keyword Spotting
-and Automatic Speech Recognition models on Cortex-M CPU and Ethos-U NPU.
+and Automatic Speech Recognition models on a *Cortex-M* CPU and *Ethos-U* NPU.
 
-The Keyword Spotting and Automatic Speech Recognition example demonstrates how to run multiple models sequentially. A
-Keyword Spotting model is first run on the CPU and if a set keyword is detected then an Automatic Speech Recognition
-model is run on Ethos-U55 on the remaining audio.
-Tensor arena memory region is reused between models to optimise application memory footprint.
+The Keyword Spotting and Automatic Speech Recognition example demonstrates how to run multiple models sequentially.
 
-"Yes" key word is used to trigger full command recognition following the key word.
-Use case code could be found in [source/use_case/kws_asr](../../source/use_case/kws_asr]) directory.
+A Keyword Spotting model is first run on the CPU. If a set keyword is detected, then an Automatic Speech Recognition
+model is run on the *Ethos-U55* on the remaining audio.
+
+The tensor arena memory region is reused between models to optimize application memory footprint.
+
+The `Yes` keyword is used to trigger full command recognition following the keyword.
+
+The use-case code can be found in the following directory: [source/use_case/kws_asr](../../source/use_case/kws_asr).
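+
+The overall control flow can be summarized with the sketch below. The helper functions are hypothetical stand-ins for
+the real C++ implementation in the use-case directory; the sketch only shows the order in which the two models run.
+
+```python
+# Hypothetical sketch of the sequential KWS -> ASR flow described above.
+# run_kws() and run_asr() stand in for the real C++ inference code.
+def process_clip(audio, run_kws, run_asr, trigger="yes"):
+    keyword, detected_at = run_kws(audio)    # KWS runs first, on the CPU
+    if keyword == trigger:
+        return run_asr(audio[detected_at:])  # ASR runs on the remaining audio
+    return None                              # no trigger keyword: ASR is skipped
+```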
 
 ### Preprocessing and feature extraction
 
-In this use-case there are 2 different models being used with different requirements for preprocessing. As such each
-preprocessing process is detailed below. Note that Automatic Speech Recognition only occurs if a keyword is detected in
-the audio clip.
+In this use-case, there are two different models being used with different requirements for preprocessing. As such, each
+preprocessing process is detailed as follows.
 
-By default the KWS model is run purely on CPU and not on the Ethos-U55.
+> **Note:** Automatic Speech Recognition only occurs if a keyword is detected in the audio clip.
+
+By default, the KWS model is run purely on the CPU and **not** on the *Ethos-U55*.
 
 #### Keyword Spotting Preprocessing
 
-The DS-CNN keyword spotting model that is used with the Code Samples expects audio data to be preprocessed in
-a specific way before performing an inference. This section aims to provide an overview of the feature extraction
-process used.
+The `DS-CNN` keyword spotting model that is used with the Code Samples expects audio data to be preprocessed in a
+specific way before performing an inference.
 
-First the audio data is normalized to the range (-1, 1).
+Therefore, this section aims to provide an overview of the feature extraction process used.
 
-> **Note:** Mel-frequency cepstral coefficients (MFCCs) are a common feature extracted from audio data and can be used as input for machine learning tasks like keyword spotting and speech recognition. See source/application/main/include/Mfcc.hpp for implementation details.
+First, the audio data is normalized to the range (`-1`, `1`).
 
-Next, a window of 640 audio samples is taken from the start of the audio clip. From these 640 samples we calculate 10
+> **Note:** Mel-Frequency Cepstral Coefficients (MFCCs) are a common feature that is extracted from audio data and can
+> be used as input for machine learning tasks, such as keyword spotting and speech recognition. For implementation
+> details, please refer to: `source/application/main/include/Mfcc.hpp`.
+
+Next, a window of 640 audio samples is taken from the start of the audio clip. From these 640 samples, we calculate 10
 MFCC features.
 
 The whole window is shifted to the right by 320 audio samples and 10 new MFCC features are calculated. This process of
-shifting and calculating is repeated until the end of the 16000 audio samples needed to perform an inference is reached.
-In total this will be 49 windows that each have 10 MFCC features calculated for them, giving an input tensor of shape
-49x10.
+shifting and calculating is repeated until the end of the 16000 audio samples required to perform an inference is
+reached.
 
-These extracted features are quantized, and an inference is performed.
+In total, this is 49 windows that each have 10 MFCC features calculated for them, giving an input tensor of shape 49x10.
 
-If the audio clip is longer than 16000 audio samples then the initial starting position is offset by 16000/2 = 8000
-audio samples. From this new starting point, MFCC features for the next 16000 audio samples are calculated and another
-inference is performed (i.e. do an inference for samples 8000-24000).
+These extracted features are quantized and an inference is performed.
 
-> **Note:** Parameters of the MFCC feature extraction such as window size, stride, number of features etc. all depend on what was used during model training. These values are specific to each model and if you try a different keyword spotting model that uses MFCC input then values are likely to need changing to match the new model.
+If the audio clip is longer than 16000 audio samples, then the initial starting position is offset by `16000/2 = 8000`
+audio samples. From this new starting point, MFCC features for the next `16000` audio samples are calculated and another
+inference is performed. In other words, do an inference for samples `8000-24000`.
 
-In addition, MFCC feature extraction methods can vary slightly with different normalization methods or scaling etc. being used.
+> **Note:** Parameters of the MFCC feature extraction, such as the window size, stride, and number of features, all
+> depend on what was used during model training. These values are specific to each model.
+
+If you try a different keyword spotting model that uses MFCC input, then check whether the values need changing to
+match the new model.
+
+In addition, MFCC feature extraction methods can vary slightly with different normalization methods or scaling being
+used.
 
 #### Automatic Speech Recognition Preprocessing
 
-The wav2letter automatic speech recognition model that is used with the Code Samples expects audio data to be
+The *wav2letter* automatic speech recognition model that is used with the Code Samples expects audio data to be
 preprocessed in a specific way before performing an inference. This section aims to provide an overview of the feature
 extraction process used.
 
-First the audio data is normalized to the range (-1, 1).
+First, the audio data is normalized to the range (`-1`, `1`).
 
-> **Note:** Mel-frequency cepstral coefficients (MFCCs) are a common feature extracted from audio data and can be used as input for machine learning tasks like keyword spotting and speech recognition. See source/application/main/include/Mfcc.hpp for implementation details.
+> **Note:** Mel-Frequency Cepstral Coefficients (MFCCs) are a common feature that is extracted from audio data and can
+> be used as input for machine learning tasks, such as keyword spotting and speech recognition. For implementation
+> details, please refer to: `source/application/main/include/Mfcc.hpp`.
 
-Next, a window of 512 audio samples is taken from the start of the audio clip. From these 512 samples we calculate 13
+Next, a window of 512 audio samples is taken from the start of the audio clip. From these 512 samples, we calculate 13
 MFCC features.
 
 The whole window is shifted to the right by 160 audio samples and 13 new MFCC features are calculated. This process of
-shifting and calculating is repeated until enough audio samples to perform an inference have been processed. In total
-this will be 296 windows that each have 13 MFCC features calculated for them.
+shifting and calculating is repeated until enough audio samples to perform an inference have been processed.
 
-After extracting MFCC features the first and second order derivatives of these features with respect to time are
-calculated. These derivative features are then standardized and concatenated with the MFCC features (which also get
-standardized). At this point the input tensor will have a shape of 296x39.
+In total, this is 296 windows that each have 13 MFCC features calculated for them.
 
-These extracted features are quantized, and an inference is performed.
+After extracting MFCC features, the first and second order derivatives of these features with respect to time are
+calculated.
 
-For longer audio clips where multiple inferences need to be performed, then the initial starting position is offset by
-(100\*160) = 16000 audio samples. From this new starting point, MFCC and derivative features are calculated as before
-until there is enough to perform another inference. Padding can be used if there are not enough audio samples for at
-least 1 inference. This step is repeated until the whole audio clip has been processed. If there are not enough audio
-samples for a final complete inference the MFCC features will be padded by repeating the last calculated feature until
-an inference can be performed.
+These derivative features are then standardized and concatenated with the MFCC features (which also get standardized).
+At this point, the input tensor has a shape of 296x39.
 
-> **Note:** Parameters of the MFCC feature extraction such as window size, stride, number of features etc. all depend on what was used during model training. These values are specific to each model. If you switch to a different ASR model than the one supplied, then the feature extraction process could be completely different to the one currently implemented.
+These extracted features are quantized and an inference is performed.
 
-The amount of audio samples we offset by for long audio clips is specific to the included wav2letter model.
+For longer audio clips, where multiple inferences must be performed, then the initial starting position is offset by
+`(100*160) = 16000` audio samples. From this new starting point, MFCC and derivative features are calculated as before,
+until there is enough to perform another inference.
+
+Padding can be used if there are not enough audio samples for at least one inference. This step is repeated until the
+whole audio clip has been processed. If there are not enough audio samples for a final complete inference, then the MFCC
+features are padded by repeating the last calculated feature until an inference can be performed.
+
+> **Note:** Parameters of the MFCC feature extraction, such as the window size, stride, and number of features, all
+> depend on what was used during model training. These values are specific to each model.\
+> If you switch to a different ASR model than the one supplied, then the feature extraction process could be completely
+> different from the one currently implemented.
+
+The number of audio samples that the starting position is offset by for long audio clips is specific to the included
+*wav2letter* model.
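+
+The corresponding arithmetic for the ASR features can be sketched as follows. The values mirror the figures quoted
+above; the variable names are illustrative only.
+
+```python
+# Minimal sketch of the wav2letter ASR framing arithmetic described above.
+window_len = 512        # audio samples per MFCC window
+window_stride = 160     # shift between consecutive windows
+num_windows = 296       # windows processed per inference, as stated above
+mfcc_per_window = 13
+
+# MFCCs plus their first and second order derivatives, all standardized:
+features_per_window = mfcc_per_window * 3
+print((num_windows, features_per_window))        # input tensor shape: (296, 39)
+
+# For longer clips, the next inference starts 100 strides further on:
+offset_between_inferences = 100 * window_stride  # 16000 samples
+```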
 
 ### Postprocessing
 
-If a keyword is detected then the ASR process is run and the raw output of that inference needs to be postprocessed to
-get a usable result.
+If a keyword is detected, then the ASR process is run and the raw output of that inference must be postprocessed to get
+a usable result.
 
 The raw output from the model is a tensor of shape 148x29 where each row is a probability distribution over the possible
 29 characters that can appear at each of the 148 time steps.
 
-This wav2letter model is trained using context windows, this means that only certain parts of the output are usable
-depending on the bit of the audio clip that is currently being processed.
+This *wav2letter* model is trained using context windows. This means that, depending on the part of the audio clip that
+is currently being processed, only certain parts of the output are usable.
 
-If this is the first inference and multiple inferences are required, then ignore the final 49 rows of the output.
-Similarly, if this is the final inference from multiple inferences then ignore the first 49 rows of the output. Finally,
-if this inference is not the last or first inference then ignore the first and last 49 rows of the model output.
+If this is the first inference, and multiple inferences are required, then ignore the final 49 rows of the output.
+Similarly, if this is the final inference from multiple inferences, then ignore the first 49 rows of the output.
 
-> **Note:** If the audio clip is small enough then the whole of the model output is usable and there is no need to throw away any of the output before continuing.
+Finally, if this inference is not the last, or the first inference, then ignore the first and last 49 rows of the model
+output.
 
-Once any rows have been removed the final processing can be done. To process the output, first the letter with the
-highest probability at each time step is found. Next, any letters that are repeated multiple times in a row are removed
-(e.g. [t, t, t, o, p, p] becomes [t, o, p]). Finally, the 29^th^ blank token letter is removed from the output.
+> **Note:** If the audio clip is small enough, then the whole of the model output is usable and there is no need to
+> throw away any of the outputs before continuing.
 
-For the final output, the result from all inferences are combined before decoding. What you are left with is then
+Once any rows have been removed, the final processing can be done. To process the output, the letter with the highest
+probability at each time step is found first. Next, any letters that are repeated multiple times in a row are removed.
+
+For example: [`t`, `t`, `t`, `o`, `p`, `p`] becomes [`t`, `o`, `p`]. Finally, the 29th blank token letter is removed
+from the output.
+
+For the final output, the results from all inferences are combined before decoding. What you are left with is then
 displayed to the console.
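+
+A rough sketch of this decoding logic is shown below: trim the unusable context rows, take the most probable character
+at each time step, collapse repeated letters, and drop the blank token. It is illustrative only; the real
+postprocessing lives in the C++ use-case code.
+
+```python
+# Hedged sketch of the ASR output decoding described above.
+# `output` stands in for the 148x29 model output: one probability
+# distribution over 29 characters per time step. The blank token index
+# is passed in as a parameter rather than assumed.
+def decode(output, alphabet, blank_index, is_first, is_last, context=49):
+    if not is_first:
+        output = output[context:]    # ignore the first 49 rows
+    if not is_last:
+        output = output[:-context]   # ignore the last 49 rows
+
+    best = [max(range(len(row)), key=lambda i: row[i]) for row in output]
+
+    collapsed = []
+    for index in best:
+        if collapsed and collapsed[-1] == index:
+            continue                 # e.g. [t, t, t, o, p, p] becomes [t, o, p]
+        collapsed.append(index)
+
+    return "".join(alphabet[i] for i in collapsed if i != blank_index)
+```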
 
 ### Prerequisites
@@ -132,67 +160,74 @@
 
 ### Build options
 
-In addition to the already specified build option in the main documentation, Keyword Spotting and Automatic Speech
-Recognition use case adds:
+In addition to the already specified build option in the main documentation, the Keyword Spotting and Automatic Speech
+Recognition use-case adds:
 
-- `kws_asr_MODEL_TFLITE_PATH_ASR` and `kws_asr_MODEL_TFLITE_PATH_KWS`: Path to the NN model files in TFLite format.
-    Models will be processed and included into the application axf file. The default value points to one of the delivered set of models.
-    Note that the parameters `kws_asr_LABELS_TXT_FILE_KWS`, `kws_asr_LABELS_TXT_FILE_ASR`,`TARGET_PLATFORM` and `ETHOS_U55_ENABLED`
-    should be aligned with the chosen model, i.e:
-  - if `ETHOS_U55_ENABLED` is set to `On` or `1`, the NN model is assumed to be optimized. The model will naturally fall back to the Arm® Cortex®-M CPU if an unoptimized model is supplied.
-  - if `ETHOS_U55_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized model in this case will result in a runtime error.
+- `kws_asr_MODEL_TFLITE_PATH_ASR` and `kws_asr_MODEL_TFLITE_PATH_KWS`: The paths to the NN model files in `TFLite`
+    format. The models are processed and then included into the application `axf` file. The default values point to the
+    delivered set of models. Note that the parameters `kws_asr_LABELS_TXT_FILE_KWS`,
+    `kws_asr_LABELS_TXT_FILE_ASR`,`TARGET_PLATFORM`, and `ETHOS_U55_ENABLED` must be aligned with the chosen model. In
+    other words:
+  - If `ETHOS_U55_ENABLED` is set to `On` or `1`, then the NN model is assumed to be optimized. The model naturally
+    falls back to the Arm® *Cortex®-M* CPU if an unoptimized model is supplied.
+  - If `ETHOS_U55_ENABLED` is set to `Off` or `0`, then the NN model is assumed to be unoptimized. Supplying an
+    optimized model in this case results in a runtime error.
 
-- `kws_asr_FILE_PATH`: Path to the directory containing audio files, or a path to single WAV file, to be used in the application. The default value
-    points to the resources/kws_asr/samples folder containing the delivered set of audio clips.
+- `kws_asr_FILE_PATH`: The path to the directory containing audio files, or a path to single WAV file, to be used in the
+  application. The default value points to the `resources/kws_asr/samples` folder that contains the delivered set of
+  audio clips.
 
-- `kws_asr_LABELS_TXT_FILE_KWS` and `kws_asr_LABELS_TXT_FILE_ASR`: Path respectively to keyword spotting labels' and the automatic speech
-    recognition labels' text files. The file is used to map
-    letter class index to the text label. The default value points to the delivered labels.txt file inside the delivery
-    package.
+- `kws_asr_LABELS_TXT_FILE_KWS` and `kws_asr_LABELS_TXT_FILE_ASR`: The respective paths to the keyword spotting labels
+    and the automatic speech recognition labels text files. The files are used to map the class indexes to the text
+    labels. The default values point to the delivered label files inside the delivery package.
 
-- `kws_asr_AUDIO_RATE`: Input data sampling rate. Each audio file from kws_asr_FILE_PATH is preprocessed during the
-    build to match NN model input requirements. Default value is 16000.
+- `kws_asr_AUDIO_RATE`: The input data sampling rate. Each audio file from `kws_asr_FILE_PATH` is preprocessed during
+  the build to match the NN model input requirements. The default value is `16000`.
 
-- `kws_asr_AUDIO_MONO`: If set to ON the audio data will be converted to mono. Default is ON.
+- `kws_asr_AUDIO_MONO`: If set to `ON`, then the audio data is converted to mono. The default value is `ON`.
 
-- `kws_asr_AUDIO_OFFSET`: Start loading audio data starting from this offset (in seconds). Default value is 0.
+- `kws_asr_AUDIO_OFFSET`: The offset, in seconds, from which audio data loading begins. The default value is `0`.
 
-- `kws_asr_AUDIO_DURATION`: Length of the audio data to be used in the application in seconds. Default is 0 meaning
-    the whole audio file will be taken.
+- `kws_asr_AUDIO_DURATION`: The length of the audio data to be used in the application in seconds. The default is `0`,
+  meaning that the whole audio file is used.
 
 - `kws_asr_AUDIO_MIN_SAMPLES`: Minimum number of samples required by the network model. If the audio clip is shorter
-    than this number, it is padded with zeros. Default value is 16000.
+  than this number, then it is padded with zeros. The default value is `16000`.
 
-- `kws_asr_MODEL_SCORE_THRESHOLD_KWS`: Threshold value that must be applied to the keyword spotting inference
-    results for a label to be deemed valid. Default is 0.9.
+- `kws_asr_MODEL_SCORE_THRESHOLD_KWS`: Threshold value that must be applied to the keyword spotting inference results
+  for a label to be deemed valid. The default is `0.9`.
 
 - `kws_asr_MODEL_SCORE_THRESHOLD_ASR`: Threshold value that must be applied to the automatic speech recognition
-    inference results for a label to be deemed valid. Default is 0.5.
+  inference results for a label to be deemed valid. The default is `0.5`.
 
-- `kws_asr_ACTIVATION_BUF_SZ`: The intermediate/activation buffer size reserved for the NN model. By default, it is
-    set to 2MiB and should be enough for most models.
+- `kws_asr_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model. By default, it is
+  set to 2MiB and should be enough for most models.
 
-In order to build **ONLY** Keyword Spotting and Automatic Speech
-Recognition example application add to the `cmake` command line specified in [Building](../documentation.md#Building) `-DUSE_CASE_BUILD=kws_asr`.
+To **ONLY** build the Keyword Spotting and Automatic Speech Recognition example application, add
+`-DUSE_CASE_BUILD=kws_asr` to the `cmake` command line, as specified in: [Building](../documentation.md#Building).
 
 ### Build process
 
-> **Note:** This section describes the process for configuring the build for `MPS3: SSE-300` for different target platform see [Building](../documentation.md#Building).
+> **Note:** This section describes the process for configuring the build for the *MPS3: SSE-300*. To build for a
+> different target platform, please refer to: [Building](../documentation.md#Building).
 
-Create a build directory and navigate inside:
+To build **only** the keyword spotting and automatic speech recognition example, create a build directory and navigate
+inside, like so:
 
 ```commandline
 mkdir build_kws_asr && cd build_kws_asr
 ```
 
-On Linux, execute the following command to build the application to run on the Ethos-U55 Fast Model when providing only the mandatory arguments for CMake configuration:
+On Linux, when providing only the mandatory arguments for CMake configuration, execute the following command to build
+**only** the Keyword Spotting and Automatic Speech Recognition application to run on the *Ethos-U55* Fast Model:
 
 ```commandline
 cmake ../ -DUSE_CASE_BUILD=kws_asr
 ```
 
-To configure a build that can be debugged using Arm-DS, we can just specify
-the build type as `Debug` and use the `Arm Compiler` toolchain file:
+To configure a build that can be debugged using Arm DS, specify the build type as `Debug` and then use the
+`Arm Compiler` toolchain file:
 
 ```commandline
 cmake .. \
@@ -201,24 +236,25 @@
     -DUSE_CASE_BUILD=kws_asr
 ```
 
-Also see:
+For further information, please refer to:
 
 - [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
 - [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+- [Working with model debugger from Arm Fast Model Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
-> **Note:** If re-building with changed parameters values, it is highly advised to clean the build directory and re-run the CMake command.
+> **Note:** If re-building with changed parameter values, we recommend that you clean the build directory and re-run
+> the CMake command.
 
-If the CMake command succeeded, build the application as follows:
+If the CMake command succeeds, build the application as follows:
 
 ```commandline
 make -j4
 ```
 
-Add VERBOSE=1 to see compilation and link details.
+To see compilation and link details, add `VERBOSE=1`.
 
-Results of the build will be placed under `build/bin` folder:
+Results of the build are placed under the `build/bin` folder, like so:
 
 ```tree
 bin
@@ -228,30 +264,32 @@
  └── sectors
       ├── images.txt
       └── kws_asr
-           ├── dram.bin
+           ├── ddr.bin
            └── itcm.bin
 ```
 
-Where:
+The `bin` folder contains the following files:
 
-- `ethos-u-kws_asr.axf`: The built application binary for the Keyword Spotting and Automatic Speech Recognition use
-    case.
+- `ethos-u-kws_asr.axf`: The built application binary for the Keyword Spotting and Automatic Speech Recognition
+  use-case.
 
-- `ethos-u-kws_asr.map`: Information from building the application (e.g. libraries used, what was optimized, location
-    of objects)
+- `ethos-u-kws_asr.map`: Information from building the application. For example: The libraries used, what was optimized,
+  and the location of objects.
 
 - `ethos-u-kws_asr.htm`: Human readable file containing the call graph of application functions.
 
-- `sectors/kws_asr`: Folder containing the built application, split into files for loading into different FPGA memory regions.
+- `sectors/kws_asr`: Folder containing the built application. It is split into files for loading into different FPGA memory
+  regions.
 
-- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in sectors/** folder.
+- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/..` folder.
 
 ### Add custom input
 
-The application performs inference on data found in the folder set by the CMake parameter `kws_asr_FILE_PATH`.
+The application performs inference on data found in the folder, or an individual file, that is pointed to by the CMake
+parameter `kws_asr_FILE_PATH`.
 
-To run the application with your own audio clips first create a folder to hold them and then copy the custom files into
-this folder:
+To run the application with your own audio clips, first create a folder to hold them and then copy the custom clips into
+this folder. For example:
 
 ```commandline
 mkdir /tmp/custom_files
@@ -261,7 +299,7 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-Next set `kws_asr_FILE_PATH` to the location of this folder when building:
+Next, when building, set `kws_asr_FILE_PATH` to the location of this folder:
 
 ```commandline
 cmake .. \
@@ -269,35 +307,41 @@
     -DUSE_CASE_BUILD=kws_asr
 ```
 
-The files found in the `kws_asr_FILE_PATH` folder will be picked up and automatically converted to C++ files during the
-CMake configuration stage and then compiled into the application during the build phase for performing inference with.
+The audio files found in the `kws_asr_FILE_PATH` folder are picked up and automatically converted to C++ files during
+the CMake configuration stage. They are then compiled into the application during the build phase for performing
+inference with.
 
-The log from the configuration stage should tell you what directory path has been used:
+The log from the configuration stage tells you what audio directory path has been used:
 
 ```log
 -- User option kws_asr_FILE_PATH is set to /tmp/custom_files
 ```
 
-After compiling, your custom inputs will have now replaced the default ones in the application.
+After compiling, your custom inputs have now replaced the default ones in the application.
 
 ### Add custom model
 
-The application performs KWS inference using the model pointed to by the CMake parameter `kws_asr_MODEL_TFLITE_PATH_KWS` and
-ASR inference using the model pointed to by the CMake parameter `kws_asr_MODEL_TFLITE_PATH_ASR`.
+The application performs KWS inference using the model pointed to by the CMake parameter
+`kws_asr_MODEL_TFLITE_PATH_KWS`. ASR inference is performed using the model pointed to by the CMake parameter
+`kws_asr_MODEL_TFLITE_PATH_ASR`.
 
-This section assumes you wish to change the existing ASR model to a custom one. If instead you wish to change the KWS
-model then the instructions will be the same except ASR will change to KWS.
+This section assumes you want to change the existing ASR model to a custom one. If, instead, you want to change the KWS
+model, then the instructions are the same, except that ASR changes to KWS.
 
-> **Note:** If you want to run the model using Ethos-U55, ensure your custom model has been run through the Vela compiler successfully before continuing. See [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
+> **Note:** If you want to run the model using an *Ethos-U55*, ensure that your custom model has been successfully run
+> through the Vela compiler *before* continuing.
 
-To run the application with a custom model you will need to provide a labels_<model_name>.txt file of labels
-associated with the model. Each line of the file should correspond to one of the outputs in your model. See the provided
-labels_wav2letter.txt file for an example.
+For further information, please refer to:
+[Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
 
-Then, you must set `kws_asr_MODEL_TFLITE_PATH_ASR` to the location of the Vela processed model file and
-`kws_asr_LABELS_TXT_FILE_ASR` to the location of the associated labels file.
+To run the application with a custom model, you must provide a `labels_<model_name>.txt` file of labels that are
+associated with the model. Each line of the file must correspond to one of the outputs in your model. Refer to the
+provided `labels_wav2letter.txt` file for an example.
 
-An example:
+Then, you must set `kws_asr_MODEL_TFLITE_PATH_ASR` to the location of the Vela processed model file and
+`kws_asr_LABELS_TXT_FILE_ASR` to the location of the associated labels file.
+
+For example:
 
 ```commandline
 cmake .. \
@@ -308,11 +352,11 @@
 
 > **Note:** Clean the build directory before re-running the CMake command.
 
-The `.tflite` model files pointed to by `kws_asr_MODEL_TFLITE_PATH_KWS` and `kws_asr_MODEL_TFLITE_PATH_ASR`, labels text files pointed to by `kws_asr_LABELS_TXT_FILE_KWS` and `kws_asr_LABELS_TXT_FILE_ASR`
-will be converted to C++ files during the CMake configuration stage and then compiled into the application for
-performing inference with.
+The `.tflite` model files pointed to by `kws_asr_MODEL_TFLITE_PATH_KWS` and `kws_asr_MODEL_TFLITE_PATH_ASR`, and the
+labels text files pointed to by `kws_asr_LABELS_TXT_FILE_KWS` and `kws_asr_LABELS_TXT_FILE_ASR` are converted to C++
+files during the CMake configuration stage. They are then compiled into the application for performing inference with.
 
-The log from the configuration stage should tell you what model path and labels file have been used:
+The log from the configuration stage tells you what model path and labels file have been used, for example:
 
 ```log
 -- User option TARGET_PLATFORM is set to mps3
@@ -328,38 +372,44 @@
 ...
 ```
 
-After compiling, your custom model will have now replaced the default one in the application.
+After compiling, your custom model has now replaced the default one in the application.
 
 ## Setting-up and running Ethos-U55 Code Samples
 
 ### Setting up the Ethos-U55 Fast Model
 
-The FVP is available publicly from [Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+The FVP is available publicly from
+[Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
 
-For Ethos-U55 evaluation, please download the MPS3 version of the Arm® Corstone™-300 model that contains Ethos-U55 and
-Cortex-M55. The model is currently only supported on Linux based machines. To install the FVP:
+For the *Ethos-U55* evaluation, please download the MPS3 version of the Arm® *Corstone™-300* model that contains both
+the *Ethos-U55* and *Cortex-M55*. The model is only supported on Linux-based machines.
 
-- Unpack the archive
+To install the FVP:
 
-- Run the install script in the extracted package
+- Unpack the archive.
+
+- Run the install script in the extracted package:
 
 ```commandline
 ./FVP_Corstone_SSE-300_Ethos-U55.sh
 ```
 
-- Follow the instructions to install the FVP to your desired location
+- Follow the instructions to install the FVP to the required location.
 
 ### Starting Fast Model simulation
 
-Once completed the building step, application binary ethos-u-kws_asr.axf can be found in the `build/bin` folder.
-Assuming the install location of the FVP was set to ~/FVP_install_location, the simulation can be started by:
+Once the building has been completed, the application binary `ethos-u-kws_asr.axf` can be found in the `build/bin`
+folder.
+
+Assuming that the install location of the FVP was set to `~/FVP_install_location`, then the simulation can be started by
+using:
 
 ```commandline
 $ ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
 ./bin/mps3-sse-300/ethos-u-kws_asr.axf
 ```
 
-A log output should appear on the terminal:
+A log output appears on the terminal:
 
 ```log
 telnetterminal0: Listening for serial connection on port 5000
@@ -368,12 +418,15 @@
 telnetterminal5: Listening for serial connection on port 5003
 ```
 
-This will also launch a telnet window with the sample application's standard output and error log entries containing
-information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
-the input and output tensor sizes of the model compiled into the executable binary.
+This also launches a telnet window with the standard output and error log entries of the sample application. The log
+contains information about the pre-built application version, the TensorFlow Lite Micro library version used, and the
+data type, as well as the input and output tensor sizes of the model compiled into the executable binary.
 
-After the application has started if `kws_asr_FILE_PATH` pointed to a single file (or a folder containing a single input file)
-the inference starts immediately. In case of multiple inputs choice, it outputs a menu and waits for the user input from telnet terminal:
+After the application has started, if `kws_asr_FILE_PATH` points to a single file, or even a folder that contains a
+single input file, then the inference starts immediately. If there are multiple inputs, it outputs a menu and then waits
+for input from the user.
+
+For example:
 
 ```log
 User input required
@@ -389,79 +442,82 @@
 
 ```
 
-1. “Classify next audio clip” menu option will run single inference on the next included file.
+What the preceding choices do:
 
-2. “Classify audio clip at chosen index” menu option will run inference on the chosen audio clip.
+1. Classify next audio clip: Runs a single inference on the next included file.
 
-    > **Note:** Please make sure to select audio clip index in the range of supplied audio clips during application build.
+2. Classify audio clip at chosen index: Runs inference on the chosen audio clip.
 
-3. “Run ... on all” menu option triggers sequential inference executions on all built-in files.
+    > **Note:** Please make sure to select an audio clip index within the range of audio clips supplied during the
+    > application build. By default, a pre-built application has four files, with indexes from `0` to `3`.
 
-4. “Show NN model info” menu option prints information about model data type, input and output tensor sizes:
+3. Run ... on all: Triggers sequential inference executions on all built-in files.
+
+4. Show NN model info: Prints information about the model data type, and the input and output tensor sizes:
 
     ```log
     INFO - uTFL version: 2.5.0
     INFO - Model INPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 490 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1:   1
-    INFO - 		2:  49
-    INFO - 		3:  10
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 490 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:   1
+    INFO -    2:  49
+    INFO -    3:  10
     INFO - Quant dimension: 0
     INFO - Scale[0] = 1.107164
     INFO - ZeroPoint[0] = 95
     INFO - Model OUTPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 12 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1:  12
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 12 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:  12
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.003906
     INFO - ZeroPoint[0] = -128
     INFO - Activation buffer (a.k.a tensor arena) size used: 123616
     INFO - Number of operators: 16
-    INFO - 	Operator 0: RESHAPE
-    INFO - 	Operator 1: CONV_2D
-    INFO - 	Operator 2: DEPTHWISE_CONV_2D
-    INFO - 	Operator 3: CONV_2D
-    INFO - 	Operator 4: DEPTHWISE_CONV_2D
-    INFO - 	Operator 5: CONV_2D
-    INFO - 	Operator 6: DEPTHWISE_CONV_2D
-    INFO - 	Operator 7: CONV_2D
-    INFO - 	Operator 8: DEPTHWISE_CONV_2D
-    INFO - 	Operator 9: CONV_2D
-    INFO - 	Operator 10: DEPTHWISE_CONV_2D
-    INFO - 	Operator 11: CONV_2D
-    INFO - 	Operator 12: AVERAGE_POOL_2D
-    INFO - 	Operator 13: RESHAPE
-    INFO - 	Operator 14: FULLY_CONNECTED
-    INFO - 	Operator 15: SOFTMAX
+    INFO -  Operator 0: RESHAPE
+    INFO -  Operator 1: CONV_2D
+    INFO -  Operator 2: DEPTHWISE_CONV_2D
+    INFO -  Operator 3: CONV_2D
+    INFO -  Operator 4: DEPTHWISE_CONV_2D
+    INFO -  Operator 5: CONV_2D
+    INFO -  Operator 6: DEPTHWISE_CONV_2D
+    INFO -  Operator 7: CONV_2D
+    INFO -  Operator 8: DEPTHWISE_CONV_2D
+    INFO -  Operator 9: CONV_2D
+    INFO -  Operator 10: DEPTHWISE_CONV_2D
+    INFO -  Operator 11: CONV_2D
+    INFO -  Operator 12: AVERAGE_POOL_2D
+    INFO -  Operator 13: RESHAPE
+    INFO -  Operator 14: FULLY_CONNECTED
+    INFO -  Operator 15: SOFTMAX
     INFO - Model INPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 11544 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1: 296
-    INFO - 		2:  39
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 11544 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1: 296
+    INFO -    2:  39
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.110316
     INFO - ZeroPoint[0] = -11
     INFO - Model OUTPUT tensors:
-    INFO - 	tensor type is INT8
-    INFO - 	tensor occupies 4292 bytes with dimensions
-    INFO - 		0:   1
-    INFO - 		1:   1
-    INFO - 		2: 148
-    INFO - 		3:  29
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 4292 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:   1
+    INFO -    2: 148
+    INFO -    3:  29
     INFO - Quant dimension: 0
     INFO - Scale[0] = 0.003906
     INFO - ZeroPoint[0] = -128
     INFO - Activation buffer (a.k.a tensor arena) size used: 809808
     INFO - Number of operators: 1
-    INFO - 	Operator 0: ethos-u
+    INFO -  Operator 0: ethos-u
     ```
 
-5. “List” menu option prints a list of pair ... indexes - the original filenames embedded in the application:
+5. List audio clips: Prints a list of pair ... indexes with the original filenames embedded in the application, like so:
 
     ```log
     [INFO] List of Files:
@@ -472,7 +528,7 @@
 
 Please select the first menu option to execute Keyword Spotting and Automatic Speech Recognition.
 
-The following example illustrates application output:
+The following example illustrates the output of the application:
 
 ```log
 INFO - KWS audio data window size 16000
@@ -502,32 +558,32 @@
 INFO - NPU total cycles: 28910172
 ```
 
-It could take several minutes to complete one inference run (average time is 2-3 minutes).
+It can take several minutes to complete one inference run. The average time is around 2-3 minutes.
 
-Using the input “yes_no_go_stop.wav”, the log shows inference results for the KWS operation first, detecting the
-trigger word “yes“ with the stated probability score (in this case 0.99). After this, the ASR inference is run,
-printing the words recognized from the input sample.
+Using the input `yes_no_go_stop.wav`, the log shows the inference results for the KWS operation first, detecting the
+trigger word `yes` with the stated probability score, in this case `0.99`. After this, the ASR inference is run,
+printing the words recognized from the input sample.
 
 The profiling section of the log shows that for the ASR inference:
 
-- Ethos-U55's PMU report:
+- *Ethos-U55* PMU report:
 
-  - 28,910,172 total cycle: The number of NPU cycles
+  - 28,910,172 total cycles: The total number of NPU cycles.
 
-  - 28,909,309 active cycles: number of NPU cycles that were used for computation
+  - 28,909,309 active cycles: The number of NPU cycles that were used for computation.
 
-  - 863 idle cycles: number of cycles for which the NPU was idle
+  - 863 idle cycles: The number of cycles for which the NPU was idle. The active and idle cycles add up to the total
+    cycle count.
 
-  - 13,520,864 AXI0 read beats: The number of AXI beats with read transactions from AXI0 bus.
-    AXI0 is the bus where Ethos-U55 NPU reads and writes to the computation buffers (activation buf/tensor arenas).
+  - 13,520,864 AXI0 read beats: The number of AXI beats with read transactions from the AXI0 bus. AXI0 is the bus where
+    the *Ethos-U55* NPU reads and writes to the computation buffers, that is, the activation buffers or tensor arenas.
 
   - 2,841,970 AXI0 write beats: The number of AXI beats with write transactions to the AXI0 bus.
 
-  - 2,717,670 AXI1 read beats: The number of AXI beats with read transactions from AXI1 bus.
-    AXI1 is the bus where Ethos-U55 NPU reads the model (read only)
+  - 2,717,670 AXI1 read beats: The number of AXI beats with read transactions from the AXI1 bus. AXI1 is the bus where
+    the *Ethos-U55* NPU reads the model, so it is read-only.
 
-- For FPGA platforms, CPU cycle count can also be enabled. For FVP, however, CPU cycle counters should not be used as
-the CPU model is not cycle-approximate or cycle-accurate.
+- For FPGA platforms, a CPU cycle count can also be enabled. However, do not use cycle counters for the FVP, as the
+  CPU model is not cycle-approximate or cycle-accurate. For an alternative way of timing an FVP run, see the example
+  after the following note.
 
-    Note that in this example the KWS inference does not use the Ethos-U55 and is run purely on CPU, therefore 0 Active
-    NPU cycles is shown.
+> **Note:** In this example, the KWS inference does *not* use the *Ethos-U55* and only runs on the CPU. Therefore, `0`
+> Active NPU cycles are shown.
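+
+Since the CPU model in the FVP is not cycle-accurate, wall-clock statistics for the simulation itself can be a more
+useful measure of how long a run took. Assuming that your FVP build supports the standard `--stat` option, the model
+prints a summary of simulated time versus host time when it exits:
+
+```commandline
+$ ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 --stat \
+    ./bin/mps3-sse-300/ethos-u-kws_asr.axf
+```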
diff --git a/set_up_default_resources.py b/set_up_default_resources.py
index 362552a..47d2881 100755
--- a/set_up_default_resources.py
+++ b/set_up_default_resources.py
@@ -113,8 +113,6 @@
     """
     current_file_dir = os.path.dirname(os.path.abspath(__file__))
     download_dir = os.path.abspath(os.path.join(current_file_dir, "resources_downloaded"))
-    logging.basicConfig(filename='log_build_default.log', level=logging.DEBUG)
-    logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
 
     try:
         #   1.1 Does the download dir exist?
@@ -234,4 +232,8 @@
                         help="Do not run Vela optimizer on downloaded models.",
                         action="store_true")
     args = parser.parse_args()
+
+    logging.basicConfig(filename='log_build_default.log', level=logging.DEBUG)
+    logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
+
     set_up_resources(not args.skip_vela)