MLECO-1858: Documentation update

* Removing `_` in front of private functions and members

Signed-off-by: Isabella Gottardi <isabella.gottardi@arm.com>
Change-Id: I5a5d652f9647ebb16d2d2bd16ab980e73f7be3cf
diff --git a/Readme.md b/Readme.md
index 16ac683..472cf54 100644
--- a/Readme.md
+++ b/Readme.md
@@ -87,4 +87,4 @@
     - [Testing](./docs/documentation.md#testing)
   - [Communication](./docs/documentation.md#communication)
   - [Licenses](./docs/documentation.md#licenses)
-  - [Appendix](./docs/documentation.md#appendix)
\ No newline at end of file
+  - [Appendix](./docs/documentation.md#appendix)
diff --git a/docs/documentation.md b/docs/documentation.md
index 050ca60..8ab9fa3 100644
--- a/docs/documentation.md
+++ b/docs/documentation.md
@@ -1,28 +1,18 @@
 # Arm® ML embedded evaluation kit
 
-## Table of Contents
-
-- [Arm® ML embedded evaluation kit](./documentation.md#arm-ml-embedded-evaluation-kit)
-  - [Table of Contents](./documentation.md#table-of-content)
-  - [Trademarks](./documentation.md#trademarks)
-  - [Prerequisites](./documentation.md#prerequisites)
-    - [Additional reading](./documentation.md#additional-reading)
-  - [Repository structure](./documentation.md#repository-structure)
-  - [Models and resources](./documentation.md#models-and-resources)
-  - [Building](./documentation.md#building)
-  - [Deployment](./documentation.md#deployment)
-  - [Running code samples applications](./documentation.md#running-code-samples-applications)
-  - [Implementing custom ML application](./documentation.md#implementing-custom-ml-application)
-  - [Testing and benchmarking](./documentation.md#testing-and-benchmarking)
-  - [Memory considerations](./documentation.md#memory-considerations)
-  - [Troubleshooting](./documentation.md#troubleshooting)
-  - [Contribution guidelines](./documentation.md#contribution-guidelines)
-    - [Coding standards and guidelines](./documentation.md#coding-standards-and-guidelines)
-    - [Code Reviews](./documentation.md#code-reviews)
-    - [Testing](./documentation.md#testing)
-  - [Communication](./documentation.md#communication)
-  - [Licenses](./documentation.md#licenses)
-  - [Appendix](./documentation.md#appendix)
+- [Arm® ML embedded evaluation kit](#arm_ml-embedded-evaluation-kit)
+  - [Trademarks](#trademarks)
+  - [Prerequisites](#prerequisites)
+    - [Additional reading](#additional-reading)
+  - [Repository structure](#repository-structure)
+  - [Models and resources](#models-and-resources)
+  - [Building](#building)
+  - [Deployment](#deployment)
+  - [Implementing custom ML application](#implementing-custom-ml-application)
+  - [Testing and benchmarking](#testing-and-benchmarking)
+  - [Memory considerations](#memory-considerations)
+  - [Troubleshooting](#troubleshooting)
+  - [Appendix](#appendix)
 
 ## Trademarks
 
@@ -222,16 +212,22 @@
 will build executable models with Ethos-U55 NPU support.
 See:
 
-- [Building the Code Samples application from sources](./sections/building.md#building-the-ml-embedded-code-sample-applications-from-sources)
-  - [Contents](./sections/building.md#contents)
+- [Building the ML embedded code sample applications from sources](./sections/building.md#building-the-ml-embedded-code-sample-applications-from-sources)
   - [Build prerequisites](./sections/building.md#build-prerequisites)
   - [Build options](./sections/building.md#build-options)
   - [Build process](./sections/building.md#build-process)
     - [Preparing build environment](./sections/building.md#preparing-build-environment)
+      - [Fetching submodules](./sections/building.md#fetching-submodules)
+      - [Fetching resource files](./sections/building.md#fetching-resource-files)
     - [Create a build directory](./sections/building.md#create-a-build-directory)
-    - [Configuring the build for `MPS3: SSE-300`](./sections/building.md#configuring-the-build-for-mps3-sse-300)
+    - [Configuring the build for MPS3 SSE-300](./sections/building.md#configuring-the-build-for-mps3-sse-300)
+      - [Using GNU Arm Embedded Toolchain](./sections/building.md#using-gnu-arm-embedded-toolchain)
+      - [Using Arm Compiler](./sections/building.md#using-arm-compiler)
+      - [Generating project for Arm Development Studio](./sections/building.md#generating-project-for-arm-development-studio)
+      - [Working with model debugger from Arm FastModel Tools](./sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+      - [Configuring with custom TPIP dependencies](./sections/building.md#configuring-with-custom-tpip-dependencies)
     - [Configuring native unit-test build](./sections/building.md#configuring-native-unit-test-build)
-    - [Configuring the build for `simple_platform`](./sections/building.md#configuring-the-build-for-simple_platform)
+    - [Configuring the build for simple_platform](./sections/building.md#configuring-the-build-for-simple_platform)
     - [Building the configured project](./sections/building.md#building-the-configured-project)
   - [Building timing adapter with custom options](./sections/building.md#building-timing-adapter-with-custom-options)
   - [Add custom inputs](./sections/building.md#add-custom-inputs)
@@ -245,16 +241,11 @@
 See:
 
 - [Deployment](./sections/deployment.md)
-  - [Fixed Virtual Platform](./sections/deployment.md#fixed-Virtual-Platform)
-    - [Setting up the MPS3 Corstone-300 FVP](./sections/deployment.md#Setting-up-the-MPS3-Corstone-300-FVP)
-    - [Deploying on an FVP emulating MPS3](./sections/deployment.md#Deploying-on-an-FVP-emulating-MPS3)
-  - [MPS3 board](./sections/deployment.md#MPS3-board)
-    - [Deployment on MPS3 board](./sections/deployment.md#Deployment-on-MPS3-board)
-
-## Running code samples applications
-
-This section covers the process for getting started with pre-built binaries for the code samples.
-See [Running applications](./sections/run.md).
+  - [Fixed Virtual Platform](./sections/deployment.md#fixed-virtual-platform)
+    - [Setting up the MPS3 Corstone-300 FVP](./sections/deployment.md#setting-up-the-mps3-arm-corstone-300-fvp)
+    - [Deploying on an FVP emulating MPS3](./sections/deployment.md#deploying-on-an-fvp-emulating-mps3)
+  - [MPS3 board](./sections/deployment.md#mps3-board)
+    - [Deployment on MPS3 board](./sections/deployment.md#deployment-on-mps3-board)
 
 ## Implementing custom ML application
 
@@ -268,20 +259,20 @@
 
 See:
 
-- [Customizing](./sections/customizing.md)
-  - [Software project description](./sections/customizing.md#Software-project-description)
+- [Implementing custom ML application](./sections/customizing.md)
+  - [Software project description](./sections/customizing.md#software-project-description)
   - [HAL API](./sections/customizing.md#hal-api)
   - [Main loop function](./sections/customizing.md#main-loop-function)
   - [Application context](./sections/customizing.md#application-context)
-  - [Profiler](./sections/customizing.md#Profiler)
-  - [NN Model API](./sections/customizing.md#NN-model-API)
-  - [Adding custom ML use-case](./sections/customizing.md#Adding-custom-ML-use-case)
-  - [Implementing main loop](./sections/customizing.md#Implementing-main-loop)
-  - [Implementing custom NN model](./sections/customizing.md#Implementing-custom-NN-model)
+  - [Profiler](./sections/customizing.md#profiler)
+  - [NN Model API](./sections/customizing.md#nn-model-api)
+  - [Adding custom ML use-case](./sections/customizing.md#adding-custom-ml-use-case)
+  - [Implementing main loop](./sections/customizing.md#implementing-main-loop)
+  - [Implementing custom NN model](./sections/customizing.md#implementing-custom-nn-model)
   - [Executing inference](./sections/customizing.md#executing-inference)
   - [Printing to console](./sections/customizing.md#printing-to-console)
   - [Reading user input from console](./sections/customizing.md#reading-user-input-from-console)
-  - [Output to MPS3 LCD](./sections/customizing.md#output-to-MPS3-LCD)
+  - [Output to MPS3 LCD](./sections/customizing.md#output-to-mps3-lcd)
   - [Building custom use-case](./sections/customizing.md#building-custom-use-case)
 
 ## Testing and benchmarking
@@ -297,103 +288,12 @@
 See:
 
 - [Troubleshooting](./sections/troubleshooting.md)
-  - [Inference results are incorrect for my custom files](./sections/troubleshooting.md#Inference-results-are-incorrect-for-my-custom-files)
-  - [The application does not work with my custom model](./sections/troubleshooting.md#The-application-does-not-work-with-my-custom-model)
-
-## Contribution guidelines
-
-Contributions are only accepted under the following conditions:
-
-- The contribution have certified origin and give us your permission. To manage this process we use
-  [Developer Certificate of Origin (DCO) V1.1](https://developercertificate.org/).
-  To indicate that contributors agree to the the terms of the DCO, it's neccessary "sign off" the
-  contribution by adding a line with name and e-mail address to every git commit message:
-
-  ```log
-  Signed-off-by: John Doe <john.doe@example.org>
-  ```
-
-  This can be done automatically by adding the `-s` option to your `git commit` command.
-  You must use your real name, no pseudonyms or anonymous contributions are accepted.
-
-- You give permission according to the [Apache License 2.0](../LICENSE_APACHE_2.0.txt).
-
-  In each source file, include the following copyright notice:
-
-  ```copyright
-  /*
-  * Copyright (c) <years additions were made to project> <your name>, Arm Limited. All rights reserved.
-  * SPDX-License-Identifier: Apache-2.0
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in compliance with the License.
-  * You may obtain a copy of the License at
-  *
-  *     http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-  ```
-
-### Coding standards and guidelines
-
-This repository follows a set of guidelines, best practices, programming styles and conventions,
-see:
-
-- [Coding standards and guidelines](./sections/coding_guidelines.md)
-  - [Introduction](./sections/coding_guidelines.md#introduction)
-  - [Language version](./sections/coding_guidelines.md#language-version)
-  - [File naming](./sections/coding_guidelines.md#file-naming)
-  - [File layout](./sections/coding_guidelines.md#file-layout)
-  - [Block Management](./sections/coding_guidelines.md#block-management)
-  - [Naming Conventions](./sections/coding_guidelines.md#naming-conventions)
-    - [C++ language naming conventions](./sections/coding_guidelines.md#c_language-naming-conventions)
-    - [C language naming conventions](./sections/coding_guidelines.md#c-language-naming-conventions)
-  - [Layout and formatting conventions](./sections/coding_guidelines.md#layout-and-formatting-conventions)
-  - [Language usage](./sections/coding_guidelines.md#language-usage)
-
-### Code Reviews
-
-Contributions must go through code review. Code reviews are performed through the
-[mlplatform.org Gerrit server](https://review.mlplatform.org). Contributors need to signup to this
-Gerrit server with their GitHub account credentials.
-In order to be merged a patch needs to:
-
-- get a "+1 Verified" from the pre-commit job.
-- get a "+2 Code-review" from a reviewer, it means the patch has the final approval.
-
-### Testing
-
-Prior to submitting a patch for review please make sure that all build variants works and unit tests pass.
-Contributions go through testing at the continuous integration system. All builds, tests and checks must pass before a
-contribution gets merged to the master branch.
-
-## Communication
-
-Please, if you want to start public discussion, raise any issues or questions related to this repository, use
-[https://discuss.mlplatform.org/c/ml-embedded-evaluation-kit](https://discuss.mlplatform.org/c/ml-embedded-evaluation-kit/)
-forum.
-
-## Licenses
-
-The ML Embedded applications samples are provided under the Apache 2.0 license, see [License Apache 2.0](../LICENSE_APACHE_2.0.txt).
-
-Application input data sample files are provided under their original license:
-
-|  | Licence | Provenience |
-|---------------|---------|---------|
-| [Automatic Speech Recognition Samples](../resources/asr/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](../resources/LICENSE_CC_4.0.txt) | <http://www.openslr.org/12/> |
-| [Image Classification Samples](../resources/img_class/samples/files.md) | [Creative Commons Attribution 1.0](../resources/LICENSE_CC_1.0.txt) | <https://www.pexels.com> |
-| [Keyword Spotting Samples](../resources/kws/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](../resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
-| [Keyword Spotting and Automatic Speech Recognition Samples](../resources/kws_asr/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](../resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
+  - [Inference results are incorrect for my custom files](./sections/troubleshooting.md#inference-results-are-incorrect-for-my-custom-files)
+  - [The application does not work with my custom model](./sections/troubleshooting.md#the-application-does-not-work-with-my-custom-model)
 
 ## Appendix
 
 See:
 
 - [Appendix](./sections/appendix.md)
-  - [Cortex-M55 Memory map overview](./sections/appendix.md#cortex-m55-memory-map-overview)
+  - [Cortex-M55 Memory map overview](./sections/appendix.md#arm_cortex_m55-memory-map-overview-for-corstone_300-reference-design)
diff --git a/docs/sections/building.md b/docs/sections/building.md
index 4b1514b..ff5b518 100644
--- a/docs/sections/building.md
+++ b/docs/sections/building.md
@@ -1,9 +1,6 @@
 # Building the ML embedded code sample applications from sources
 
-## Contents
-
 - [Building the ML embedded code sample applications from sources](#building-the-ml-embedded-code-sample-applications-from-sources)
-  - [Contents](#contents)
   - [Build prerequisites](#build-prerequisites)
   - [Build options](#build-options)
   - [Build process](#build-process)
@@ -11,7 +8,7 @@
       - [Fetching submodules](#fetching-submodules)
       - [Fetching resource files](#fetching-resource-files)
     - [Create a build directory](#create-a-build-directory)
-    - [Configuring the build for MPS3: SSE-300](#configuring-the-build-for-mps3-sse-300)
+    - [Configuring the build for MPS3 SSE-300](#configuring-the-build-for-mps3-sse-300)
       - [Using GNU Arm Embedded Toolchain](#using-gnu-arm-embedded-toolchain)
       - [Using Arm Compiler](#using-arm-compiler)
       - [Generating project for Arm Development Studio](#generating-project-for-arm-development-studio)
@@ -34,9 +31,8 @@
 are fulfilled:
 
 - GNU Arm embedded toolchain 10.2.1 (or higher) or the Arm Compiler version 6.14 (or higher)
-    is installed and available on the path.
-
-    Test the compiler by running:
+  is installed and available on the path.
+  Test the compiler by running:
 
     ```commandline
     armclang -v
@@ -47,11 +43,12 @@
     Component: ARM Compiler 6.14
     ```
 
-    Alternatively,
+  Alternatively,
 
     ```commandline
     arm-none-eabi-gcc --version
     ```
+
     ```log
     arm-none-eabi-gcc (GNU Arm Embedded Toolchain 10-2020-q4-major) 10.2.1 20201103 (release)
     Copyright (C) 2020 Free Software Foundation, Inc.
@@ -59,11 +56,11 @@
     warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
     ```
 
-    > **Note:** Add compiler to the path, if needed:
-    >
-    > `export PATH=/path/to/armclang/bin:$PATH`
-    >           OR
-    > `export PATH=/path/to/gcc-arm-none-eabi-toolchain/bin:$PATH`
+> **Note:** Add compiler to the path, if needed:
+>
+> `export PATH=/path/to/armclang/bin:$PATH`
+> OR
+> `export PATH=/path/to/gcc-arm-none-eabi-toolchain/bin:$PATH`
 
 - Compiler license, if using the proprietary Arm Compiler, is configured correctly.
 
@@ -78,9 +75,9 @@
     cmake version 3.16.2
     ```
 
-    > **Note:** Add cmake to the path, if needed:
-    >
-    > `export PATH=/path/to/cmake/bin:$PATH`
+> **Note:** Add cmake to the path, if needed:
+>
+> `export PATH=/path/to/cmake/bin:$PATH`
 
 - Python 3.6 or above is installed. Test python version by running:
 
@@ -112,7 +109,7 @@
     ...
     ```
 
-    > **Note:** Add it to the path environment variable, if needed.
+> **Note:** Add it to the path environment variable, if needed.
 
 - Access to the Internet to download the third party dependencies, specifically: TensorFlow Lite Micro, Arm® Ethos™-U55 NPU
 driver and CMSIS. Instructions for downloading these are listed under [preparing build environment](#preparing-build-environment).
@@ -220,8 +217,8 @@
 
 > **Note:** For details on the specific use case build options, follow the
 > instructions in the use-case specific documentation.
-> Also, when setting any of the CMake configuration parameters that expect a directory/file path , it is advised
->to **use absolute paths instead of relative paths**.
+> Also, when setting any of the CMake configuration parameters that expect a directory/file path, it is advised
+> to **use absolute paths instead of relative paths**.
 
 ## Build process
 
@@ -274,9 +271,8 @@
 ```
 
 > **NOTE**: The default source paths for the TPIP sources assume the above directory structure, but all of the relevant
->paths can be overridden by CMake configuration arguments `TENSORFLOW_SRC_PATH`, `ETHOS_U55_DRIVER_SRC_PATH`,
->and `CMSIS_SRC_PATH`.
-
+> paths can be overridden by CMake configuration arguments `TENSORFLOW_SRC_PATH`, `ETHOS_U55_DRIVER_SRC_PATH`,
+> and `CMSIS_SRC_PATH`.
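
For illustration, a configuration that overrides all three of these paths to point at local checkouts might look like the following (the paths shown are placeholders, not defaults):

```commandline
cmake .. \
    -DTENSORFLOW_SRC_PATH=/home/user/tensorflow \
    -DETHOS_U55_DRIVER_SRC_PATH=/home/user/ethos-u55-driver \
    -DCMSIS_SRC_PATH=/home/user/CMSIS_5
```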
 
 #### Fetching resource files
 
@@ -300,7 +296,7 @@
 mkdir build && cd build
 ```
 
-### Configuring the build for MPS3: SSE-300
+### Configuring the build for MPS3 SSE-300
 
 #### Using GNU Arm Embedded Toolchain
 
@@ -308,7 +304,6 @@
 to build the application to run on the Arm® Ethos™-U55 NPU when providing only
 the mandatory arguments for CMake configuration:
 
-
 ```commandline
 cmake ../
 ```
@@ -317,7 +312,6 @@
-`sse-300`, and using the default toolchain file for the target as `bare-metal-gcc.` This is
+`sse-300`, and using the default toolchain file for the target as `bare-metal-gcc`. This is
 equivalent to:
 
-
 ```commandline
 cmake .. \
     -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-gcc.cmake
@@ -722,7 +716,6 @@
 > **Note:** By default, use of the Ethos-U55 NPU is enabled in the CMake configuration.
 This could be changed by passing `-DETHOS_U55_ENABLED`.
 
-
 ## Automatic file generation
 
 As mentioned in the previous sections, some files such as neural network
@@ -763,55 +756,55 @@
 
 - `build/generated/include/InputFiles.hpp`
 
-```c++
-#ifndef GENERATED_IMAGES_H
-#define GENERATED_IMAGES_H
+    ```C++
+    #ifndef GENERATED_IMAGES_H
+    #define GENERATED_IMAGES_H
 
-#include <cstdint>
+    #include <cstdint>
 
-#define NUMBER_OF_FILES  (2U)
-#define IMAGE_DATA_SIZE  (150528U)
+    #define NUMBER_OF_FILES  (2U)
+    #define IMAGE_DATA_SIZE  (150528U)
 
-extern const uint8_t im0[IMAGE_DATA_SIZE];
-extern const uint8_t im1[IMAGE_DATA_SIZE];
+    extern const uint8_t im0[IMAGE_DATA_SIZE];
+    extern const uint8_t im1[IMAGE_DATA_SIZE];
 
-const char* get_filename(const uint32_t idx);
-const uint8_t* get_img_array(const uint32_t idx);
+    const char* get_filename(const uint32_t idx);
+    const uint8_t* get_img_array(const uint32_t idx);
 
-#endif /* GENERATED_IMAGES_H */
-```
+    #endif /* GENERATED_IMAGES_H */
+    ```
 
 - `build/generated/src/InputFiles.cc`
 
-```c++
-#include "InputFiles.hpp"
+    ```C++
+    #include "InputFiles.hpp"
 
-static const char *img_filenames[] = {
-    "img1.bmp",
-    "img2.bmp",
-};
+    static const char *img_filenames[] = {
+        "img1.bmp",
+        "img2.bmp",
+    };
 
-static const uint8_t *img_arrays[] = {
-    im0,
-    im1
-};
+    static const uint8_t *img_arrays[] = {
+        im0,
+        im1
+    };
 
-const char* get_filename(const uint32_t idx)
-{
-    if (idx < NUMBER_OF_FILES) {
-        return img_filenames[idx];
+    const char* get_filename(const uint32_t idx)
+    {
+        if (idx < NUMBER_OF_FILES) {
+            return img_filenames[idx];
+        }
+        return nullptr;
     }
-    return nullptr;
-}
 
-const uint8_t* get_img_array(const uint32_t idx)
-{
-    if (idx < NUMBER_OF_FILES) {
-        return img_arrays[idx];
+    const uint8_t* get_img_array(const uint32_t idx)
+    {
+        if (idx < NUMBER_OF_FILES) {
+            return img_arrays[idx];
+        }
+        return nullptr;
     }
-    return nullptr;
-}
-```
+    ```
 
-These headers are generated using python templates, that are in `scripts/py/templates/*.template`.
+These headers are generated using Python templates that are in `scripts/py/templates/*.template`.
 
@@ -940,4 +933,4 @@
         └── <uc2_model_name>.tflite.cc
 ```
 
-Next section of the documentation: [Deployment](../documentation.md#Deployment).
+Next section of the documentation: [Deployment](deployment.md).
diff --git a/docs/sections/coding_guidelines.md b/docs/sections/coding_guidelines.md
index 752fe54..664b548 100644
--- a/docs/sections/coding_guidelines.md
+++ b/docs/sections/coding_guidelines.md
@@ -1,7 +1,5 @@
 # Coding standards and guidelines
 
-## Contents
-
 - [Introduction](#introduction)
 - [Language version](#language-version)
 - [File naming](#file-naming)
diff --git a/docs/sections/customizing.md b/docs/sections/customizing.md
index adf7749..056bc55 100644
--- a/docs/sections/customizing.md
+++ b/docs/sections/customizing.md
@@ -1,9 +1,6 @@
 # Implementing custom ML application
 
-## Contents
-
 - [Implementing custom ML application](#implementing-custom-ml-application)
-  - [Contents](#contents)
   - [Software project description](#software-project-description)
   - [HAL API](#hal-api)
   - [Main loop function](#main-loop-function)
@@ -69,14 +66,14 @@
 > headers in an `include` directory, C/C++ sources in a `src` directory.
 > For example:
 >
->```tree
->use_case
-> └──img_class
->       ├── include
->       │   └── *.hpp
->       └── src
->           └── *.cc
->```
+> ```tree
+> use_case
+>   └──img_class
+>         ├── include
+>         │   └── *.hpp
+>         └── src
+>             └── *.cc
+> ```
 
 ## HAL API
 
@@ -165,7 +162,7 @@
 
 Example of the API initialization in the main function:
 
-```c++
+```C++
 #include "hal.h"
 
 int main ()
@@ -203,7 +200,7 @@
 use-case will have reference to the function defined in the use-case
 code.
 
-```c++
+```C++
 void main_loop(hal_platform& platform){
 
 ...
@@ -224,7 +221,7 @@
 
 For example:
 
-```c++
+```C++
 #include "hal.h"
 #include "AppContext.hpp"
 
@@ -260,7 +257,7 @@
 
 Usage example:
 
-```c++
+```C++
 Profiler profiler{&platform, "Inference"};
 
 profiler.StartProfiling();
@@ -306,13 +303,13 @@
 
 > **Convention:**  Each ML use-case must have extension of this class and implementation of the protected virtual methods:
 >
->```c++
+> ```C++
 > virtual const uint8_t* ModelPointer() = 0;
 > virtual size_t ModelSize() = 0;
 > virtual const tflite::MicroOpResolver& GetOpResolver() = 0;
 > virtual bool EnlistOperations() = 0;
 > virtual size_t GetActivationBufferSize() = 0;
->```
+> ```
 >
 > Network models have different set of operators that must be registered with
 > tflite::MicroMutableOpResolver object in the EnlistOperations method.
@@ -361,7 +358,7 @@
 important. Define `main_loop` function with the signature described in
 [Main loop function](#main-loop-function):
 
-```c++
+```C++
 #include "hal.h"
 
 void main_loop(hal_platform& platform) {
@@ -370,7 +367,7 @@
 ```
 
-The above is already a working use-case, if you compile and run it (see
-[Building custom usecase](#Building-custom-use-case)) the application will start, print
-message to console and exit straight away.
+The above is already a working use-case: if you compile and run it (see
+[Building custom use-case](#building-custom-use-case)) the application will start, print
+a message to the console and exit straight away.
 
 Now, you can start filling this function with logic.
@@ -389,7 +386,7 @@
 
 For example:
 
-```c++
+```C++
 #ifndef HELLOWORLDMODEL_HPP
 #define HELLOWORLDMODEL_HPP
 
@@ -415,7 +412,7 @@
     static constexpr int ms_maxOpCnt = 5;
 
     /* A mutable op resolver instance. */
-    tflite::MicroMutableOpResolver<ms_maxOpCnt> _m_opResolver;
+    tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
   };
 } /* namespace app */
 } /* namespace arm */
@@ -437,13 +434,13 @@
 TensorFlow Lite Micro framework. We will use the ARM_NPU define to exclude
 the code if the application was built without NPU support.
 
-```c++
+```C++
 #include "HelloWorldModel.hpp"
 
 bool arm::app::HelloWorldModel::EnlistOperations() {
 
 #if defined(ARM_NPU)
-    if (kTfLiteOk == this->_m_opResolver.AddEthosU()) {
+    if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
         info("Added %s support to op resolver\n",
             tflite::GetString_ETHOSU());
     } else {
@@ -465,7 +462,7 @@
 the `usecase.cmake` file for this `HelloWorld` example.
 
 For more details on `usecase.cmake`, see [Building custom use case](#building-custom-use-case).
-For details on code generation flow in general, see [Automatic file generation](./building.md#Automatic-file-generation)
+For details on code generation flow in general, see [Automatic file generation](./building.md#automatic-file-generation).
 
 The TensorFlow Lite model data is read during Model::Init() method execution, see
 `application/tensorflow-lite-micro/Model.cc` for more details. Model invokes
@@ -476,7 +473,7 @@
 file is added to the compilation automatically.
 
 Use `${use-case}_MODEL_TFLITE_PATH` build parameter to include custom
-model to the generation/compilation process (see [Build options](./building.md/#build-options)).
+model to the generation/compilation process (see [Build options](./building.md#build-options)).
 
 ## Executing inference
 
@@ -506,7 +503,7 @@
 
 The following code adds inference invocation to the main loop function:
 
-```c++
+```C++
 #include "hal.h"
 #include "HelloWorldModel.hpp"
 
@@ -541,7 +538,7 @@
 
 - Creating HelloWorldModel object and initializing it.
 
-  ```c++
+  ```C++
   arm::app::HelloWorldModel model;
 
   /* Load the model */
@@ -553,7 +550,7 @@
 
 - Getting pointers to allocated input and output tensors.
 
-  ```c++
+  ```C++
   TfLiteTensor *outputTensor = model.GetOutputTensor();
   TfLiteTensor *inputTensor = model.GetInputTensor();
   ```
@@ -561,20 +558,20 @@
 - Copying input data to the input tensor. We assume input tensor size
   to be 1000 uint8 elements.
 
-  ```c++
+  ```C++
   memcpy(inputTensor->data.data, inputData, 1000);
   ```
 
 - Running inference
 
-  ```c++
+  ```C++
   model.RunInference();
   ```
 
 - Reading inference results: data and data size from the output
   tensor. We assume that output layer has uint8 data type.
 
-  ```c++
+  ```C++
-  Const uint32_t tensorSz = outputTensor->bytes ;
+  const uint32_t tensorSz = outputTensor->bytes;

-  const uint8_t *outputData = tflite::GetTensorData<uint8>(outputTensor);
+  const uint8_t *outputData = tflite::GetTensorData<uint8_t>(outputTensor);
@@ -584,7 +581,7 @@
 invoke `StartProfiling` and `StopProfiling` around inference
 execution.
 
-```c++
+```C++
 Profiler profiler{&platform, "Inference"};
 
 profiler.StartProfiling();
@@ -617,7 +614,7 @@
 Platform data acquisition module has get_input function to read keyboard
 input from the UART. It can be used as follows:
 
-```c++
+```C++
 char ch_input[128];
 platform.data_acq->get_input(ch_input, sizeof(ch_input));
 ```
@@ -647,7 +644,7 @@
 
 Example that prints "Hello world" on the LCD:
 
-```c++
+```C++
 std::string hello("Hello world");
 platform.data_psn->present_data_text(hello.c_str(), hello.size(), 10, 35, 0);
 ```
@@ -665,7 +662,7 @@
 For example, the following code snippet visualizes an input tensor data
 for MobileNet v2 224 (down sampling it twice):
 
-```c++
+```C++
 platform.data_psn->present_data_image((uint8_t *) inputTensor->data.data, 224, 224, 3, 10, 35, 2);
 ```
 
@@ -717,7 +714,6 @@
     FILEPATH
     )
 
-# Generate model file
 generate_tflite_code(
     MODEL_PATH ${${use_case}_MODEL_TFLITE_PATH}
     DESTINATION ${SRC_GEN_DIR}
@@ -729,7 +725,7 @@
-[Automatic file generation](./building.md#Automatic-file-generation).
+[Automatic file generation](./building.md#automatic-file-generation).
 
-To build you application follow the general instructions from
-[Add Custom inputs](#add-custom-inputs) and specify the name of the use-case in the
+To build your application, follow the general instructions from
+[Add custom inputs](./building.md#add-custom-inputs) and specify the name of the use-case in the
 build command:
 
 ```commandline
@@ -744,4 +740,4 @@
 will also produce `sectors/hello_world` directory with binaries and
 `images-hello_world.txt` to be copied to the board MicroSD card.
 
-Next section of the documentation: [Testing and benchmarking](../documentation.md#Testing-and-benchmarking).
+Next section of the documentation: [Testing and benchmarking](testing_benchmarking.md).
diff --git a/docs/sections/deployment.md b/docs/sections/deployment.md
index 10acbcf..b852887 100644
--- a/docs/sections/deployment.md
+++ b/docs/sections/deployment.md
@@ -1,9 +1,6 @@
 # Deployment
 
-## Contents
-
 - [Deployment](#deployment)
-  - [Contents](#contents)
   - [Fixed Virtual Platform](#fixed-virtual-platform)
     - [Setting up the MPS3 Arm Corstone-300 FVP](#setting-up-the-mps3-arm-corstone-300-fvp)
     - [Deploying on an FVP emulating MPS3](#deploying-on-an-fvp-emulating-mps3)
@@ -27,11 +24,6 @@
 - Emulates MPS3 board (not for MPS2 FPGA board)
 - Contains support for Arm® Ethos™-U55
 
-For FVP, the elf or the axf file can be run using the Fast Model
-executable as outlined under the [Starting Fast Model simulation](./setup.md/#starting-fast-model-simulation)
-except for the binary being pointed at here
-is the one just built using the steps in the previous section.
-
 ### Setting up the MPS3 Arm Corstone-300 FVP
 
 For Ethos-U55 sample application, please download the MPS3 version of the
@@ -48,12 +40,12 @@
 
 ### Deploying on an FVP emulating MPS3
 
-This section assumes that the FVP has been installed (see [Setting up the MPS3 Arm Corstone-300 FVP](#Setting-up-the-MPS3-Arm-Corstone-300-FVP)) to the user's home directory `~/FVP_Corstone_SSE-300_Ethos-U55`.
+This section assumes that the FVP has been installed (see [Setting up the MPS3 Arm Corstone-300 FVP](#setting-up-the-mps3-arm-corstone-300-fvp)) to the user's home directory `~/FVP_Corstone_SSE-300_Ethos-U55`.
 
-The installation, typically, will have the executable under `~/FVP_Corstone_SSE-300_Ethos-U55/model/<OS>_<compiler-version>/`
-directory. For the example below, we assume it to be `~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4`.
+The installation will typically place the executable under the `~/FVP_Corstone_SSE-300_Ethos-U55/models/<OS>_<compiler-version>/`
+directory. For the example below, we assume it to be `~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4`.
 
-To run a use case on the FVP, from the [Build directory](../sections/building.md#Create-a-build-directory):
+To run a use case on the FVP, from the [Build directory](../sections/building.md#create-a-build-directory):
 
 ```commandline
 ~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-<use_case>.axf
@@ -71,6 +63,8 @@
 information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
 the input and output tensor sizes of the model compiled into the executable binary.
 
+> **Note:** For details on the specific use-case, follow the instructions in the corresponding documentation.
+
 After the application has started it outputs a menu and waits for the user input from telnet terminal.
 
 For example, the image classification use case can be started by:
@@ -79,6 +73,10 @@
 ~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-img_class.axf
 ```
 
+![FVP](../media/fvp.png)
+
+![FVP Terminal](../media/fvpterminal.png)
+
 The FVP supports many command line parameters:
 
 - passed by using `-C <param>=<value>`. The most important ones are:
@@ -278,4 +276,4 @@
     ...
     ```
 
-Next section of the main documentation, [Running code samples applications](../documentation.md#Running-code-samples-applications).
+Next section of the documentation: [Implementing custom ML application](customizing.md).
diff --git a/docs/sections/memory_considerations.md b/docs/sections/memory_considerations.md
index 7db0eba..48651f1 100644
--- a/docs/sections/memory_considerations.md
+++ b/docs/sections/memory_considerations.md
@@ -1,9 +1,8 @@
 # Memory considerations
 
-## Table of Contents
+## Contents
 
 - [Memory considerations](#memory-considerations)
-  - [Table of Contents](#table-of-contents)
   - [Introduction](#introduction)
   - [Understanding memory usage from Vela output](#understanding-memory-usage-from-vela-output)
     - [Total SRAM used](#total-sram-used)
@@ -43,7 +42,7 @@
 usage is generated. For example, compiling the keyword spotting model [ds_cnn_clustered_int8](https://github.com/ARM-software/ML-zoo/blob/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/ds_cnn_clustered_int8.tflite)
 with Vela produces, among others, the following output:
 
-```
+```log
 Total SRAM used                                 70.77 KiB
 Total Off-chip Flash used                      430.78 KiB
 ```
@@ -74,9 +73,10 @@
 memory mode. See [vela.ini](../../scripts/vela/vela.ini). To make use of a neural network model
 optimised for this configuration, the linker script for the target platform would need to be
 changed. By default, the linker scripts are set up to support the default configuration only. See
-[Memory constraints](#Memory-constraints) for snippet of a script.
+[Memory constraints](#memory-constraints) for a snippet of the script.
 
 > Note
+>
 > 1. The default configuration is represented by `Shared_Sram` memory mode.
 > 2. `Dedicated_Sram` mode is only applicable for Arm® Ethos™-U65.
 
@@ -104,11 +104,11 @@
 The following numbers, obtained from Vela for the `Shared_Sram` memory mode, show the SRAM and
 flash memory requirements for the different use cases of the evaluation kit. Note that the SRAM usage
 does not include memory used by TensorFlow Lite Micro; this will need to be topped up as explained
-under [Total SRAM used](#Total-SRAM-used).
+under [Total SRAM used](#total-sram-used).
 
 - [Keyword spotting model](https://github.com/ARM-software/ML-zoo/tree/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8) requires
-  -  70.7 KiB of SRAM
-  -  430.7 KiB of flash memory.
+  - 70.7 KiB of SRAM
+  - 430.7 KiB of flash memory.
 
 - [Image classification model](https://github.com/ARM-software/ML-zoo/tree/master/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8) requires
   - 638.6 KiB of SRAM
@@ -122,13 +122,13 @@
 
 Both the MPS3 Fixed Virtual Platform and the MPS3 FPGA platform share the linker script for Arm® Corstone™-300
 design. The design is set by the CMake configuration parameter `TARGET_SUBSYSTEM` as described in
-[build options](./building.md#Build-options).
+[build options](./building.md#build-options).
 
 The memory map exposed by this design is presented in [Appendix 1](./appendix.md). This can be used as a reference
 when editing the linker script, especially to make sure that region boundaries are respected. The snippet from the
 scatter file is presented below:
 
-```
+```log
 ;---------------------------------------------------------
 ; First load region (ITCM)
 ;---------------------------------------------------------
@@ -235,4 +235,6 @@
 network model is placed in the DDR/flash region under LOAD_REGION_1. The two load regions are necessary
 as the MPS3's motherboard configuration controller limits the load size at address 0x00000000 to 1MiB.
 This has implications for how the application **is deployed** on MPS3, as explained under the section
-[Deployment on MPS3](./deployment.md#MPS3-board).
+[Deployment on MPS3](./deployment.md#mps3-board).
+
+Next section of the documentation: [Troubleshooting](troubleshooting.md).
diff --git a/docs/sections/run.md b/docs/sections/run.md
deleted file mode 100644
index 900101d..0000000
--- a/docs/sections/run.md
+++ /dev/null
@@ -1,44 +0,0 @@
-
-# Running Ethos-U55 Code Samples
-
-## Contents
-
-- [Starting Fast Model simulation](#starting-fast-model-simulation)
-
-This section covers the process for getting started with pre-built binaries for the Code Samples.
-
-## Starting Fast Model simulation
-
-Once built application binaries and assuming the install location of the FVP
-was set to ~/FVP_install_location, the simulation can be started by:
-
-```commandline
-FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
-./bin/mps3-sse-300/ethos-u-<use_case>.axf
-```
-
-This will start the Fast Model simulation for the chosen use-case.
-
-A log output should appear on the terminal:
-
-```log
-telnetterminal0: Listening for serial connection on port 5000
-telnetterminal1: Listening for serial connection on port 5001
-telnetterminal2: Listening for serial connection on port 5002
-telnetterminal5: Listening for serial connection on port 5003
-```
-
-This will also launch a telnet window with the sample application's
-standard output and error log entries containing information about the
-pre-built application version, TensorFlow Lite Micro library version
-used, data type as well as the input and output tensor sizes of the
-model compiled into the executable binary.
-
-![FVP](../media/fvp.png)
-
-![FVP Terminal](../media/fvpterminal.png)
-
-> **Note:**
-For details on the specific use-case follow the instructions in the corresponding documentation.
-
-Next section of the documentation: [Implementing custom ML application](../documentation.md#Implementing-custom-ML-application).
diff --git a/docs/sections/testing_benchmarking.md b/docs/sections/testing_benchmarking.md
index e2ed434..7932dde 100644
--- a/docs/sections/testing_benchmarking.md
+++ b/docs/sections/testing_benchmarking.md
@@ -1,7 +1,5 @@
 # Testing and benchmarking
 
-## Contents
-
 - [Testing](#testing)
 - [Benchmarking](#benchmarking)
 
@@ -86,4 +84,4 @@
     Time in ms:        210
 ```
 
-Next section of the main documentation: [Troubleshooting](../documentation.md#Troubleshooting).
+Next section of the documentation: [Memory Considerations](memory_considerations.md).
diff --git a/docs/sections/troubleshooting.md b/docs/sections/troubleshooting.md
index 5e52a4e..a4f60fb 100644
--- a/docs/sections/troubleshooting.md
+++ b/docs/sections/troubleshooting.md
@@ -1,7 +1,5 @@
 # Troubleshooting
 
-## Contents
-
 - [Inference results are incorrect for my custom files](#inference-results-are-incorrect-for-my-custom-files)
 - [The application does not work with my custom model](#the-application-does-not-work-with-my-custom-model)
 
@@ -26,4 +24,4 @@
 It is a python tool available from <https://pypi.org/project/ethos-u-vela/>.
 The source code is hosted on <https://git.mlplatform.org/ml/ethos-u/ethos-u-vela.git/>.
 
-Next section of the documentation: [Contribution guidelines](../documentation.md#Contribution-guidelines).
+Next section of the documentation: [Appendix](appendix.md).
diff --git a/docs/use_cases/ad.md b/docs/use_cases/ad.md
index 5f210b1..661cf49 100644
--- a/docs/use_cases/ad.md
+++ b/docs/use_cases/ad.md
@@ -110,6 +110,7 @@
 ```commandline
 cmake ../ -DUSE_CASE_BUILD=ad
 ```
+
 To configure a build that can be debugged using Arm-DS, we can just specify
 the build type as `Debug` and use the `Arm Compiler` toolchain file:
 
@@ -121,10 +122,11 @@
 ```
 
 Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
 > **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run
 > the CMake command.
diff --git a/docs/use_cases/asr.md b/docs/use_cases/asr.md
index ec10fdb..a8142aa 100644
--- a/docs/use_cases/asr.md
+++ b/docs/use_cases/asr.md
@@ -162,10 +162,10 @@
 ```
 
 Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
 > **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run
 > the CMake command.
diff --git a/docs/use_cases/img_class.md b/docs/use_cases/img_class.md
index 68a5285..75f0bd6 100644
--- a/docs/use_cases/img_class.md
+++ b/docs/use_cases/img_class.md
@@ -91,10 +91,11 @@
 ```
 
 Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
 > **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run
 > the CMake command.
@@ -228,7 +229,7 @@
 
 After compiling, your custom model will have replaced the default one in the application.
 
-## Setting-up and running Ethos-U55 code sample
+## Setting up and running Ethos-U55 code sample
 
 ### Setting up the Ethos-U55 Fast Model
 
diff --git a/docs/use_cases/inference_runner.md b/docs/use_cases/inference_runner.md
index ebc4677..b8004ed 100644
--- a/docs/use_cases/inference_runner.md
+++ b/docs/use_cases/inference_runner.md
@@ -70,6 +70,7 @@
 ```commandline
 cmake ../ -DUSE_CASE_BUILD=inference_runner
 ```
+
 To configure a build that can be debugged using Arm-DS, we can just specify
 the build type as `Debug` and use the `Arm Compiler` toolchain file:
 
@@ -81,10 +82,11 @@
 ```
 
 Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
 > **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run
 > the CMake command.
@@ -162,7 +164,7 @@
 
 After compiling, your custom model will have replaced the default one in the application.
 
-## Setting-up and running Ethos-U55 code sample
+## Setting up and running Ethos-U55 code sample
 
 ### Setting up the Ethos-U55 Fast Model
 
diff --git a/docs/use_cases/kws.md b/docs/use_cases/kws.md
index 8811efb..bf3e088 100644
--- a/docs/use_cases/kws.md
+++ b/docs/use_cases/kws.md
@@ -131,10 +131,11 @@
 ```
 
 Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
 > **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run the CMake command.
 
@@ -261,7 +262,7 @@
 
 After compiling, your custom model will have replaced the default one in the application.
 
-## Setting-up and running Ethos-U55 code sample
+## Setting up and running Ethos-U55 code sample
 
 ### Setting up the Ethos-U55 Fast Model
 
diff --git a/docs/use_cases/kws_asr.md b/docs/use_cases/kws_asr.md
index b63ee3a..745a108 100644
--- a/docs/use_cases/kws_asr.md
+++ b/docs/use_cases/kws_asr.md
@@ -202,10 +202,11 @@
 ```
 
 Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
 - [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
 
 > **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run the CMake command.
 
diff --git a/source/application/hal/platforms/bare-metal/bsp/bsp-packs/simple_platform/include/stubs_fvp.h b/source/application/hal/platforms/bare-metal/bsp/bsp-packs/simple_platform/include/stubs_fvp.h
index a21f2d2..aec0be1 100644
--- a/source/application/hal/platforms/bare-metal/bsp/bsp-packs/simple_platform/include/stubs_fvp.h
+++ b/source/application/hal/platforms/bare-metal/bsp/bsp-packs/simple_platform/include/stubs_fvp.h
@@ -35,7 +35,6 @@
 /************************  GLCD related functions ****************************/
 /**
  * @brief      Initialize the Himax LCD with HX8347-D LCD Controller
- * @return     none
  */
 void GLCD_Initialize(void);
 
@@ -48,7 +47,6 @@
  * @param[in]  w        width of bitmap.
  * @param[in]  h        height of bitmap.
  * @param[in]  bitmap   address at which the bitmap data resides.
- * @return     none
  */
 void GLCD_Bitmap(unsigned int x,  unsigned int y,
                 unsigned int w, unsigned int h,
@@ -65,7 +63,6 @@
  * @param[in] pos_y     start y position for the LCD.
  * @param[in] downsample_factor   factor by which the image
  *                                is downsampled by.
- * @return none
  */
 void GLCD_Image(void *data, const uint32_t width,
                 const uint32_t height, const uint32_t channels,
@@ -75,14 +72,12 @@
 /**
  * @brief      Clear display
  * @param[in]  color    display clearing color
- * @return     none
  */
 void GLCD_Clear(unsigned short color);
 
 /**
  * @brief      Set foreground color
  * @param[in]  color    foreground color
- * @return     none
  */
 void GLCD_SetTextColor(unsigned short color);
 
@@ -92,7 +87,6 @@
  * @param[in]  col   column number
  * @param[in]  fi    font index (0 = 9x15)
  * @param[in]  c     ASCII character
- * @return     none
  */
 void GLCD_DisplayChar(unsigned int ln, unsigned int col,
                     unsigned char fi, unsigned char  c);
@@ -103,7 +97,6 @@
  * @param[in]  col   column number
  * @param[in]  fi    font index (0 = 9x15)
  * @param[in]  s     pointer to string
- * @return     none
  */
 void GLCD_DisplayString(unsigned int ln, unsigned int col,
                         unsigned char fi, char *s);
@@ -115,7 +108,6 @@
  * @param[in]  w:       window width in pixels
  * @param[in]  h:       window height in pixels
  * @param[in]  color    box color
- * @return     none
  */
 void GLCD_Box(unsigned int x, unsigned int y,
             unsigned int w, unsigned int h,
diff --git a/source/application/hal/platforms/native/data_presentation/log/include/log.h b/source/application/hal/platforms/native/data_presentation/log/include/log.h
index 10cf303..9b9928f 100644
--- a/source/application/hal/platforms/native/data_presentation/log/include/log.h
+++ b/source/application/hal/platforms/native/data_presentation/log/include/log.h
@@ -50,6 +50,7 @@
  * @param[in]   str_sz      Length of the string.
  * @param[in]   pos_x       Screen position x co-ordinate.
  * @param[in]   pos_y       Screen position y co-ordinate.
+ * @param[in]   allow_multiple_lines  Specifies if multiple lines are allowed.
  * @return      0 if successful, non-zero otherwise.
  **/
 int log_display_text(const char* str, const size_t str_sz,
diff --git a/source/application/main/Mfcc.cc b/source/application/main/Mfcc.cc
index c8ad138..c998ef2 100644
--- a/source/application/main/Mfcc.cc
+++ b/source/application/main/Mfcc.cc
@@ -64,27 +64,27 @@
     }
 
     MFCC::MFCC(const MfccParams& params):
-        _m_params(params),
-        _m_filterBankInitialised(false)
+        m_params(params),
+        m_filterBankInitialised(false)
     {
-        this->_m_buffer = std::vector<float>(
-                            this->_m_params.m_frameLenPadded, 0.0);
-        this->_m_frame = std::vector<float>(
-                            this->_m_params.m_frameLenPadded, 0.0);
-        this->_m_melEnergies = std::vector<float>(
-                                this->_m_params.m_numFbankBins, 0.0);
+        this->m_buffer = std::vector<float>(
+                            this->m_params.m_frameLenPadded, 0.0);
+        this->m_frame = std::vector<float>(
+                            this->m_params.m_frameLenPadded, 0.0);
+        this->m_melEnergies = std::vector<float>(
+                                this->m_params.m_numFbankBins, 0.0);
 
-        this->_m_windowFunc = std::vector<float>(this->_m_params.m_frameLen);
-        const auto multiplier = static_cast<float>(2 * M_PI / this->_m_params.m_frameLen);
+        this->m_windowFunc = std::vector<float>(this->m_params.m_frameLen);
+        const auto multiplier = static_cast<float>(2 * M_PI / this->m_params.m_frameLen);
 
         /* Create window function. */
-        for (size_t i = 0; i < this->_m_params.m_frameLen; i++) {
-            this->_m_windowFunc[i] = (0.5 - (0.5 *
+        for (size_t i = 0; i < this->m_params.m_frameLen; i++) {
+            this->m_windowFunc[i] = (0.5 - (0.5 *
                 math::MathUtils::CosineF32(static_cast<float>(i) * multiplier)));
         }
 
-        math::MathUtils::FftInitF32(this->_m_params.m_frameLenPadded, this->_m_fftInstance);
-        debug("Instantiated MFCC object: %s\n", this->_m_params.Str().c_str());
+        math::MathUtils::FftInitF32(this->m_params.m_frameLenPadded, this->m_fftInstance);
+        debug("Instantiated MFCC object: %s\n", this->m_params.Str().c_str());
     }
 
     void MFCC::Init()
@@ -166,20 +166,20 @@
 
     void MFCC::ConvertToPowerSpectrum()
     {
-        const uint32_t halfDim = this->_m_buffer.size() / 2;
+        const uint32_t halfDim = this->m_buffer.size() / 2;
 
         /* Handle this special case. */
-        float firstEnergy = this->_m_buffer[0] * this->_m_buffer[0];
-        float lastEnergy = this->_m_buffer[1] * this->_m_buffer[1];
+        float firstEnergy = this->m_buffer[0] * this->m_buffer[0];
+        float lastEnergy = this->m_buffer[1] * this->m_buffer[1];
 
         math::MathUtils::ComplexMagnitudeSquaredF32(
-                            this->_m_buffer.data(),
-                            this->_m_buffer.size(),
-                            this->_m_buffer.data(),
-                            this->_m_buffer.size()/2);
+                            this->m_buffer.data(),
+                            this->m_buffer.size(),
+                            this->m_buffer.data(),
+                            this->m_buffer.size()/2);
 
-        this->_m_buffer[0] = firstEnergy;
-        this->_m_buffer[halfDim] = lastEnergy;
+        this->m_buffer[0] = firstEnergy;
+        this->m_buffer[halfDim] = lastEnergy;
     }
 
     std::vector<float> MFCC::CreateDCTMatrix(
@@ -219,17 +219,17 @@
     void MFCC::InitMelFilterBank()
     {
         if (!this->IsMelFilterBankInited()) {
-            this->_m_melFilterBank = this->CreateMelFilterBank();
-            this->_m_dctMatrix = this->CreateDCTMatrix(
-                                    this->_m_params.m_numFbankBins,
-                                    this->_m_params.m_numMfccFeatures);
-            this->_m_filterBankInitialised = true;
+            this->m_melFilterBank = this->CreateMelFilterBank();
+            this->m_dctMatrix = this->CreateDCTMatrix(
+                                    this->m_params.m_numFbankBins,
+                                    this->m_params.m_numMfccFeatures);
+            this->m_filterBankInitialised = true;
         }
     }
 
     bool MFCC::IsMelFilterBankInited() const
     {
-        return this->_m_filterBankInitialised;
+        return this->m_filterBankInitialised;
     }
 
     void MFCC::MfccComputePreFeature(const std::vector<int16_t>& audioData)
@@ -238,78 +238,78 @@
 
         /* TensorFlow way of normalizing .wav data to (-1, 1). */
         constexpr float normaliser = 1.0/(1u<<15u);
-        for (size_t i = 0; i < this->_m_params.m_frameLen; i++) {
-            this->_m_frame[i] = static_cast<float>(audioData[i]) * normaliser;
+        for (size_t i = 0; i < this->m_params.m_frameLen; i++) {
+            this->m_frame[i] = static_cast<float>(audioData[i]) * normaliser;
         }
 
         /* Apply window function to input frame. */
-        for(size_t i = 0; i < this->_m_params.m_frameLen; i++) {
-            this->_m_frame[i] *= this->_m_windowFunc[i];
+        for(size_t i = 0; i < this->m_params.m_frameLen; i++) {
+            this->m_frame[i] *= this->m_windowFunc[i];
         }
 
         /* Set remaining frame values to 0. */
-        std::fill(this->_m_frame.begin() + this->_m_params.m_frameLen,this->_m_frame.end(), 0);
+        std::fill(this->m_frame.begin() + this->m_params.m_frameLen,this->m_frame.end(), 0);
 
         /* Compute FFT. */
-        math::MathUtils::FftF32(this->_m_frame, this->_m_buffer, this->_m_fftInstance);
+        math::MathUtils::FftF32(this->m_frame, this->m_buffer, this->m_fftInstance);
 
         /* Convert to power spectrum. */
         this->ConvertToPowerSpectrum();
 
         /* Apply mel filterbanks. */
-        if (!this->ApplyMelFilterBank(this->_m_buffer,
-                                      this->_m_melFilterBank,
-                                      this->_m_filterBankFilterFirst,
-                                      this->_m_filterBankFilterLast,
-                                      this->_m_melEnergies)) {
+        if (!this->ApplyMelFilterBank(this->m_buffer,
+                                      this->m_melFilterBank,
+                                      this->m_filterBankFilterFirst,
+                                      this->m_filterBankFilterLast,
+                                      this->m_melEnergies)) {
             printf_err("Failed to apply MEL filter banks\n");
         }
 
         /* Convert to logarithmic scale. */
-        this->ConvertToLogarithmicScale(this->_m_melEnergies);
+        this->ConvertToLogarithmicScale(this->m_melEnergies);
     }
 
     std::vector<float> MFCC::MfccCompute(const std::vector<int16_t>& audioData)
     {
         this->MfccComputePreFeature(audioData);
 
-        std::vector<float> mfccOut(this->_m_params.m_numMfccFeatures);
+        std::vector<float> mfccOut(this->m_params.m_numMfccFeatures);
 
-        float * ptrMel = this->_m_melEnergies.data();
-        float * ptrDct = this->_m_dctMatrix.data();
+        float * ptrMel = this->m_melEnergies.data();
+        float * ptrDct = this->m_dctMatrix.data();
         float * ptrMfcc = mfccOut.data();
 
         /* Take DCT. Uses matrix mul. */
         for (size_t i = 0, j = 0; i < mfccOut.size();
-                    ++i, j += this->_m_params.m_numFbankBins) {
+                    ++i, j += this->m_params.m_numFbankBins) {
             *ptrMfcc++ = math::MathUtils::DotProductF32(
                                             ptrDct + j,
                                             ptrMel,
-                                            this->_m_params.m_numFbankBins);
+                                            this->m_params.m_numFbankBins);
         }
         return mfccOut;
     }
 
     std::vector<std::vector<float>> MFCC::CreateMelFilterBank()
     {
-        size_t numFftBins = this->_m_params.m_frameLenPadded / 2;
-        float fftBinWidth = static_cast<float>(this->_m_params.m_samplingFreq) / this->_m_params.m_frameLenPadded;
+        size_t numFftBins = this->m_params.m_frameLenPadded / 2;
+        float fftBinWidth = static_cast<float>(this->m_params.m_samplingFreq) / this->m_params.m_frameLenPadded;
 
-        float melLowFreq = MFCC::MelScale(this->_m_params.m_melLoFreq,
-                                          this->_m_params.m_useHtkMethod);
-        float melHighFreq = MFCC::MelScale(this->_m_params.m_melHiFreq,
-                                           this->_m_params.m_useHtkMethod);
-        float melFreqDelta = (melHighFreq - melLowFreq) / (this->_m_params.m_numFbankBins + 1);
+        float melLowFreq = MFCC::MelScale(this->m_params.m_melLoFreq,
+                                          this->m_params.m_useHtkMethod);
+        float melHighFreq = MFCC::MelScale(this->m_params.m_melHiFreq,
+                                           this->m_params.m_useHtkMethod);
+        float melFreqDelta = (melHighFreq - melLowFreq) / (this->m_params.m_numFbankBins + 1);
 
         std::vector<float> thisBin = std::vector<float>(numFftBins);
         std::vector<std::vector<float>> melFilterBank(
-                                            this->_m_params.m_numFbankBins);
-        this->_m_filterBankFilterFirst =
-                        std::vector<uint32_t>(this->_m_params.m_numFbankBins);
-        this->_m_filterBankFilterLast =
-                        std::vector<uint32_t>(this->_m_params.m_numFbankBins);
+                                            this->m_params.m_numFbankBins);
+        this->m_filterBankFilterFirst =
+                        std::vector<uint32_t>(this->m_params.m_numFbankBins);
+        this->m_filterBankFilterLast =
+                        std::vector<uint32_t>(this->m_params.m_numFbankBins);
 
-        for (size_t bin = 0; bin < this->_m_params.m_numFbankBins; bin++) {
+        for (size_t bin = 0; bin < this->m_params.m_numFbankBins; bin++) {
             float leftMel = melLowFreq + bin * melFreqDelta;
             float centerMel = melLowFreq + (bin + 1) * melFreqDelta;
             float rightMel = melLowFreq + (bin + 2) * melFreqDelta;
@@ -317,11 +317,11 @@
             uint32_t firstIndex = 0;
             uint32_t lastIndex = 0;
             bool firstIndexFound = false;
-            const float normaliser = this->GetMelFilterBankNormaliser(leftMel, rightMel, this->_m_params.m_useHtkMethod);
+            const float normaliser = this->GetMelFilterBankNormaliser(leftMel, rightMel, this->m_params.m_useHtkMethod);
 
             for (size_t i = 0; i < numFftBins; i++) {
                 float freq = (fftBinWidth * i);  /* Center freq of this fft bin. */
-                float mel = MFCC::MelScale(freq, this->_m_params.m_useHtkMethod);
+                float mel = MFCC::MelScale(freq, this->m_params.m_useHtkMethod);
                 thisBin[i] = 0.0;
 
                 if (mel > leftMel && mel < rightMel) {
@@ -341,8 +341,8 @@
                 }
             }
 
-            this->_m_filterBankFilterFirst[bin] = firstIndex;
-            this->_m_filterBankFilterLast[bin] = lastIndex;
+            this->m_filterBankFilterFirst[bin] = firstIndex;
+            this->m_filterBankFilterLast[bin] = lastIndex;
 
             /* Copy the part we care about. */
             for (uint32_t i = firstIndex; i <= lastIndex; i++) {
diff --git a/source/application/main/Profiler.cc b/source/application/main/Profiler.cc
index 5924414..d8a6fa3 100644
--- a/source/application/main/Profiler.cc
+++ b/source/application/main/Profiler.cc
@@ -22,14 +22,14 @@
 namespace arm {
 namespace app {
     Profiler::Profiler(hal_platform* platform, const char* name = "Unknown")
-    : _m_name(name)
+    : m_name(name)
     {
         if (platform && platform->inited) {
-            this->_m_pPlatform = platform;
+            this->m_pPlatform = platform;
             this->Reset();
         } else {
             printf_err("Profiler %s initialised with invalid platform\n",
-                this->_m_name.c_str());
+                this->m_name.c_str());
         }
     }
 
@@ -38,27 +38,27 @@
         if (name) {
             this->SetName(name);
         }
-        if (this->_m_pPlatform && !this->_m_started) {
-            this->_m_pPlatform->timer->reset();
-            this->_m_tstampSt = this->_m_pPlatform->timer->start_profiling();
-            this->_m_started = true;
+        if (this->m_pPlatform && !this->m_started) {
+            this->m_pPlatform->timer->reset();
+            this->m_tstampSt = this->m_pPlatform->timer->start_profiling();
+            this->m_started = true;
             return true;
         }
-        printf_err("Failed to start profiler %s\n", this->_m_name.c_str());
+        printf_err("Failed to start profiler %s\n", this->m_name.c_str());
         return false;
     }
 
     bool Profiler::StopProfiling()
     {
-        if (this->_m_pPlatform && this->_m_started) {
-            this->_m_tstampEnd = this->_m_pPlatform->timer->stop_profiling();
-            this->_m_started = false;
+        if (this->m_pPlatform && this->m_started) {
+            this->m_tstampEnd = this->m_pPlatform->timer->stop_profiling();
+            this->m_started = false;
 
-            this->AddProfilingUnit(this->_m_tstampSt, this->_m_tstampEnd, this->_m_name);
+            this->AddProfilingUnit(this->m_tstampSt, this->m_tstampEnd, this->m_name);
 
             return true;
         }
-        printf_err("Failed to stop profiler %s\n", this->_m_name.c_str());
+        printf_err("Failed to stop profiler %s\n", this->m_name.c_str());
         return false;
     }
 
@@ -68,16 +68,16 @@
             this->Reset();
             return true;
         }
-        printf_err("Failed to stop profiler %s\n", this->_m_name.c_str());
+        printf_err("Failed to stop profiler %s\n", this->m_name.c_str());
         return false;
     }
 
     void Profiler::Reset()
     {
-        this->_m_started = false;
-        this->_m_series.clear();
-        memset(&this->_m_tstampSt, 0, sizeof(this->_m_tstampSt));
-        memset(&this->_m_tstampEnd, 0, sizeof(this->_m_tstampEnd));
+        this->m_started = false;
+        this->m_series.clear();
+        memset(&this->m_tstampSt, 0, sizeof(this->m_tstampSt));
+        memset(&this->m_tstampEnd, 0, sizeof(this->m_tstampEnd));
     }
 
     void calcProfilingStat(uint64_t currentValue,
@@ -92,7 +92,7 @@
 
     void Profiler::GetAllResultsAndReset(std::vector<ProfileResult>& results)
     {
-        for (const auto& item: this->_m_series) {
+        for (const auto& item: this->m_series) {
             auto name = item.first;
             ProfilingSeries series = item.second;
             ProfileResult result{};
@@ -236,13 +236,13 @@
 
     void Profiler::SetName(const char* str)
     {
-        this->_m_name = std::string(str);
+        this->m_name = std::string(str);
     }
 
     void Profiler::AddProfilingUnit(time_counter start, time_counter end,
                                     const std::string& name)
     {
-        platform_timer * timer = this->_m_pPlatform->timer;
+        platform_timer * timer = this->m_pPlatform->timer;
 
         struct ProfilingUnit unit;
 
@@ -269,7 +269,7 @@
             unit.time = timer->get_duration_ms(&start, &end);
         }
 
-        this->_m_series[name].emplace_back(unit);
+        this->m_series[name].emplace_back(unit);
     }
 
 } /* namespace app */
diff --git a/source/application/main/include/AppContext.hpp b/source/application/main/include/AppContext.hpp
index 588dfaa..10de126 100644
--- a/source/application/main/include/AppContext.hpp
+++ b/source/application/main/include/AppContext.hpp
@@ -35,14 +35,14 @@
     public:
         ~Attribute() override = default;
 
-        explicit Attribute(const T value): _m_value(value){}
+        explicit Attribute(const T value): m_value(value){}
 
         T Get()
         {
-            return _m_value;
+            return m_value;
         }
     private:
-        T _m_value;
+        T m_value;
     };
 
     /* Application context class */
@@ -58,7 +58,7 @@
         template<typename T>
         void Set(const std::string &name, T object)
         {
-            this->_m_attributes[name] = new Attribute<T>(object);
+            this->m_attributes[name] = new Attribute<T>(object);
         }
 
         /**
@@ -70,7 +70,7 @@
         template<typename T>
         T Get(const std::string &name)
         {
-            auto a = (Attribute<T>*)_m_attributes[name];
+            auto a = (Attribute<T>*)m_attributes[name];
             return a->Get();
         }
 
@@ -81,19 +81,19 @@
          */
         bool Has(const std::string& name)
         {
-            return _m_attributes.find(name) != _m_attributes.end();
+            return m_attributes.find(name) != m_attributes.end();
         }
 
         ApplicationContext() = default;
 
         ~ApplicationContext() {
-            for (auto& attribute : _m_attributes)
+            for (auto& attribute : m_attributes)
                 delete attribute.second;
 
-            this->_m_attributes.clear();
+            this->m_attributes.clear();
         }
     private:
-        std::map<std::string, IAttribute*> _m_attributes;
+        std::map<std::string, IAttribute*> m_attributes;
     };
 
 } /* namespace app */
diff --git a/source/application/main/include/DataStructures.hpp b/source/application/main/include/DataStructures.hpp
index 2f267c0..d369cb6 100644
--- a/source/application/main/include/DataStructures.hpp
+++ b/source/application/main/include/DataStructures.hpp
@@ -47,39 +47,39 @@
          * @param[in] rows   Number of rows.
          * @param[in] cols   Number of columns.
          */
-        Array2d(unsigned rows, unsigned cols): _m_rows(rows), _m_cols(cols)
+        Array2d(unsigned rows, unsigned cols): m_rows(rows), m_cols(cols)
         {
             if (rows == 0 || cols == 0) {
                 printf_err("Array2d constructor has 0 size.\n");
-                _m_data = nullptr;
+                m_data = nullptr;
                 return;
             }
-            _m_data = new T[rows * cols];
+            m_data = new T[rows * cols];
         }
 
         ~Array2d()
         {
-            delete[] _m_data;
+            delete[] m_data;
         }
 
         T& operator() (unsigned int row, unsigned int col)
         {
 #if defined(DEBUG)
-            if (row >= _m_rows || col >= _m_cols ||  _m_data == nullptr) {
+            if (row >= m_rows || col >= m_cols ||  m_data == nullptr) {
                 printf_err("Array2d subscript out of bounds.\n");
             }
 #endif /* defined(DEBUG) */
-            return _m_data[_m_cols * row + col];
+            return m_data[m_cols * row + col];
         }
 
         T operator() (unsigned int row, unsigned int col) const
         {
 #if defined(DEBUG)
-            if (row >= _m_rows || col >= _m_cols ||  _m_data == nullptr) {
+            if (row >= m_rows || col >= m_cols ||  m_data == nullptr) {
                 printf_err("const Array2d subscript out of bounds.\n");
             }
 #endif /* defined(DEBUG) */
-            return _m_data[_m_cols * row + col];
+            return m_data[m_cols * row + col];
         }
 
         /**
@@ -91,9 +91,9 @@
             switch (dim)
             {
                 case 0:
-                    return _m_rows;
+                    return m_rows;
                 case 1:
-                    return _m_cols;
+                    return m_cols;
                 default:
                     return 0;
             }
@@ -104,7 +104,7 @@
          */
         size_t totalSize()
         {
-            return _m_rows * _m_cols;
+            return m_rows * m_cols;
         }
 
         /**
@@ -113,15 +113,15 @@
         using iterator=T*;
         using const_iterator=T const*;
 
-        iterator begin() { return _m_data; }
-        iterator end() { return _m_data + totalSize(); }
-        const_iterator begin() const { return _m_data; }
-        const_iterator end() const { return _m_data + totalSize(); };
+        iterator begin() { return m_data; }
+        iterator end() { return m_data + totalSize(); }
+        const_iterator begin() const { return m_data; }
+        const_iterator end() const { return m_data + totalSize(); };
 
     private:
-        size_t _m_rows;
-        size_t _m_cols;
-        T* _m_data;
+        size_t m_rows;
+        size_t m_cols;
+        T* m_data;
     };
 
 } /* namespace app */
diff --git a/source/application/main/include/Mfcc.hpp b/source/application/main/include/Mfcc.hpp
index dcafe62..6b11ebb 100644
--- a/source/application/main/include/Mfcc.hpp
+++ b/source/application/main/include/Mfcc.hpp
@@ -104,14 +104,14 @@
             float minVal = std::numeric_limits<T>::min();
             float maxVal = std::numeric_limits<T>::max();
 
-            std::vector<T> mfccOut(this->_m_params.m_numMfccFeatures);
-            const size_t numFbankBins = this->_m_params.m_numFbankBins;
+            std::vector<T> mfccOut(this->m_params.m_numMfccFeatures);
+            const size_t numFbankBins = this->m_params.m_numFbankBins;
 
             /* Take DCT. Uses matrix mul. */
             for (size_t i = 0, j = 0; i < mfccOut.size(); ++i, j += numFbankBins) {
                 float sum = 0;
                 for (size_t k = 0; k < numFbankBins; ++k) {
-                    sum += this->_m_dctMatrix[j + k] * this->_m_melEnergies[k];
+                    sum += this->m_dctMatrix[j + k] * this->m_melEnergies[k];
                 }
                 /* Quantize to T. */
                 sum = std::round((sum / quantScale) + quantOffset);
@@ -131,7 +131,7 @@
         /**
          * @brief       Project input frequency to Mel Scale.
          * @param[in]   freq           Input frequency in floating point.
-         * @param[in]   useHTKmethod   bool to signal if HTK method is to be
+         * @param[in]   useHTKMethod   bool to signal if HTK method is to be
          *                             used for calculation.
          * @return      Mel transformed frequency in floating point.
          **/
@@ -141,8 +141,8 @@
         /**
          * @brief       Inverse Mel transform - convert MEL warped frequency
          *              back to normal frequency.
-         * @param[in]   freq           Mel frequency in floating point.
-         * @param[in]   useHTKmethod   bool to signal if HTK method is to be
+         * @param[in]   melFreq        Mel frequency in floating point.
+         * @param[in]   useHTKMethod   bool to signal if HTK method is to be
          *                             used for calculation.
          * @return      Real world frequency in floating point.
          **/
@@ -207,17 +207,17 @@
                         bool     useHTKMethod);
 
     private:
-        MfccParams                      _m_params;
-        std::vector<float>              _m_frame;
-        std::vector<float>              _m_buffer;
-        std::vector<float>              _m_melEnergies;
-        std::vector<float>              _m_windowFunc;
-        std::vector<std::vector<float>> _m_melFilterBank;
-        std::vector<float>              _m_dctMatrix;
-        std::vector<uint32_t>           _m_filterBankFilterFirst;
-        std::vector<uint32_t>           _m_filterBankFilterLast;
-        bool                            _m_filterBankInitialised;
-        arm::app::math::FftInstance     _m_fftInstance;
+        MfccParams                      m_params;
+        std::vector<float>              m_frame;
+        std::vector<float>              m_buffer;
+        std::vector<float>              m_melEnergies;
+        std::vector<float>              m_windowFunc;
+        std::vector<std::vector<float>> m_melFilterBank;
+        std::vector<float>              m_dctMatrix;
+        std::vector<uint32_t>           m_filterBankFilterFirst;
+        std::vector<uint32_t>           m_filterBankFilterLast;
+        bool                            m_filterBankInitialised;
+        arm::app::math::FftInstance     m_fftInstance;
 
         /**
          * @brief       Initialises the filter banks and the DCT matrix. **/
diff --git a/source/application/main/include/Profiler.hpp b/source/application/main/include/Profiler.hpp
index c5f77e7..d1b6d91 100644
--- a/source/application/main/include/Profiler.hpp
+++ b/source/application/main/include/Profiler.hpp
@@ -107,14 +107,14 @@
         void SetName(const char* str);
 
     private:
-        ProfilingMap    _m_series;                /* Profiling series map. */
-        time_counter    _m_tstampSt{};            /* Container for a current starting timestamp. */
-        time_counter    _m_tstampEnd{};           /* Container for a current ending timestamp. */
-        hal_platform *  _m_pPlatform = nullptr;   /* Platform pointer - to get the timer. */
+        ProfilingMap    m_series;                /* Profiling series map. */
+        time_counter    m_tstampSt{};            /* Container for a current starting timestamp. */
+        time_counter    m_tstampEnd{};           /* Container for a current ending timestamp. */
+        hal_platform *  m_pPlatform = nullptr;   /* Platform pointer - to get the timer. */
 
-        bool            _m_started = false;       /* Indicates profiler has been started. */
+        bool            m_started = false;       /* Indicates profiler has been started. */
 
-        std::string     _m_name;                  /* Name given to this profiler. */
+        std::string     m_name;                  /* Name given to this profiler. */
 
         /**
          * @brief       Appends the profiling unit computed by the "start" and
diff --git a/source/application/main/include/UseCaseCommonUtils.hpp b/source/application/main/include/UseCaseCommonUtils.hpp
index 7887aea..d328392 100644
--- a/source/application/main/include/UseCaseCommonUtils.hpp
+++ b/source/application/main/include/UseCaseCommonUtils.hpp
@@ -42,12 +42,11 @@
      * @param[in]       profiler   Reference to the initialised profiler.
      * @return          true if inference succeeds, false otherwise.
      **/
-    bool RunInference(arm::app::Model& mode, Profiler& profiler);
+    bool RunInference(arm::app::Model& model, Profiler& profiler);
 
     /**
      * @brief           Read input and return as an integer.
      * @param[in]       platform   Reference to the hal platform object.
-     * @param[in]       model      Reference to the initialised model.
      * @return          Integer value corresponding to the user input.
      **/
     int ReadUserInputAsInt(hal_platform& platform);
diff --git a/source/application/tensorflow-lite-micro/Model.cc b/source/application/tensorflow-lite-micro/Model.cc
index 4a7f0a4..e9c6cd3 100644
--- a/source/application/tensorflow-lite-micro/Model.cc
+++ b/source/application/tensorflow-lite-micro/Model.cc
@@ -24,8 +24,8 @@
 /* Initialise the model */
 arm::app::Model::~Model()
 {
-    if (this->_m_pInterpreter) {
-        delete this->_m_pInterpreter;
+    if (this->m_pInterpreter) {
+        delete this->m_pInterpreter;
     }
 
     /**
@@ -34,10 +34,10 @@
 }
 
 arm::app::Model::Model() :
-    _m_inited (false),
-    _m_type(kTfLiteNoType)
+    m_inited (false),
+    m_type(kTfLiteNoType)
 {
-    this->_m_pErrorReporter = &this->_m_uErrorReporter;
+    this->m_pErrorReporter = &this->m_uErrorReporter;
 }
 
 bool arm::app::Model::Init(tflite::MicroAllocator* allocator)
@@ -47,13 +47,13 @@
      * copying or parsing, it's a very lightweight operation. */
     const uint8_t* model_addr = ModelPointer();
     debug("loading model from @ 0x%p\n", model_addr);
-    this->_m_pModel = ::tflite::GetModel(model_addr);
+    this->m_pModel = ::tflite::GetModel(model_addr);
 
-    if (this->_m_pModel->version() != TFLITE_SCHEMA_VERSION) {
-        this->_m_pErrorReporter->Report(
+    if (this->m_pModel->version() != TFLITE_SCHEMA_VERSION) {
+        this->m_pErrorReporter->Report(
             "[ERROR] model's schema version %d is not equal "
             "to supported version %d.",
-            this->_m_pModel->version(), TFLITE_SCHEMA_VERSION);
+            this->m_pModel->version(), TFLITE_SCHEMA_VERSION);
         return false;
     }
 
@@ -69,80 +69,80 @@
     this->EnlistOperations();
 
     /* Create allocator instance, if it doesn't exist */
-    this->_m_pAllocator = allocator;
-    if (!this->_m_pAllocator) {
+    this->m_pAllocator = allocator;
+    if (!this->m_pAllocator) {
         /* Create an allocator instance */
         info("Creating allocator using tensor arena in %s\n",
             ACTIVATION_BUF_SECTION_NAME);
 
-        this->_m_pAllocator = tflite::MicroAllocator::Create(
+        this->m_pAllocator = tflite::MicroAllocator::Create(
                                         this->GetTensorArena(),
                                         this->GetActivationBufferSize(),
-                                        this->_m_pErrorReporter);
+                                        this->m_pErrorReporter);
 
-        if (!this->_m_pAllocator) {
+        if (!this->m_pAllocator) {
             printf_err("Failed to create allocator\n");
             return false;
         }
-        debug("Created new allocator @ 0x%p\n", this->_m_pAllocator);
+        debug("Created new allocator @ 0x%p\n", this->m_pAllocator);
     } else {
-        debug("Using existing allocator @ 0x%p\n", this->_m_pAllocator);
+        debug("Using existing allocator @ 0x%p\n", this->m_pAllocator);
     }
 
-    this->_m_pInterpreter = new ::tflite::MicroInterpreter(
-        this->_m_pModel, this->GetOpResolver(),
-        this->_m_pAllocator, this->_m_pErrorReporter);
+    this->m_pInterpreter = new ::tflite::MicroInterpreter(
+        this->m_pModel, this->GetOpResolver(),
+        this->m_pAllocator, this->m_pErrorReporter);
 
-    if (!this->_m_pInterpreter) {
+    if (!this->m_pInterpreter) {
         printf_err("Failed to allocate interpreter\n");
         return false;
     }
 
     /* Allocate memory from the tensor_arena for the model's tensors. */
     info("Allocating tensors\n");
-    TfLiteStatus allocate_status = this->_m_pInterpreter->AllocateTensors();
+    TfLiteStatus allocate_status = this->m_pInterpreter->AllocateTensors();
 
     if (allocate_status != kTfLiteOk) {
-        this->_m_pErrorReporter->Report("[ERROR] allocateTensors() failed");
+        this->m_pErrorReporter->Report("[ERROR] allocateTensors() failed");
         printf_err("tensor allocation failed!\n");
-        delete this->_m_pInterpreter;
+        delete this->m_pInterpreter;
         return false;
     }
 
     /* Get information about the memory area to use for the model's input. */
-    this->_m_input.resize(this->GetNumInputs());
+    this->m_input.resize(this->GetNumInputs());
     for (size_t inIndex = 0; inIndex < this->GetNumInputs(); inIndex++)
-        this->_m_input[inIndex] = this->_m_pInterpreter->input(inIndex);
+        this->m_input[inIndex] = this->m_pInterpreter->input(inIndex);
 
-    this->_m_output.resize(this->GetNumOutputs());
+    this->m_output.resize(this->GetNumOutputs());
     for (size_t outIndex = 0; outIndex < this->GetNumOutputs(); outIndex++)
-        this->_m_output[outIndex] = this->_m_pInterpreter->output(outIndex);
+        this->m_output[outIndex] = this->m_pInterpreter->output(outIndex);
 
-    if (this->_m_input.empty() || this->_m_output.empty()) {
+    if (this->m_input.empty() || this->m_output.empty()) {
         printf_err("failed to get tensors\n");
         return false;
     } else {
-        this->_m_type = this->_m_input[0]->type;  /* Input 0 should be the main input */
+        this->m_type = this->m_input[0]->type;  /* Input 0 should be the main input */
 
         /* Clear the input & output tensors */
         for (size_t inIndex = 0; inIndex < this->GetNumInputs(); inIndex++) {
-            std::memset(this->_m_input[inIndex]->data.data, 0, this->_m_input[inIndex]->bytes);
+            std::memset(this->m_input[inIndex]->data.data, 0, this->m_input[inIndex]->bytes);
         }
         for (size_t outIndex = 0; outIndex < this->GetNumOutputs(); outIndex++) {
-            std::memset(this->_m_output[outIndex]->data.data, 0, this->_m_output[outIndex]->bytes);
+            std::memset(this->m_output[outIndex]->data.data, 0, this->m_output[outIndex]->bytes);
         }
 
         this->LogInterpreterInfo();
     }
 
-    this->_m_inited = true;
+    this->m_inited = true;
     return true;
 }
 
 tflite::MicroAllocator* arm::app::Model::GetAllocator()
 {
     if (this->IsInited()) {
-        return this->_m_pAllocator;
+        return this->m_pAllocator;
     }
     return nullptr;
 }
@@ -178,31 +178,31 @@
 
 void arm::app::Model::LogInterpreterInfo()
 {
-    if (!this->_m_pInterpreter) {
+    if (!this->m_pInterpreter) {
         printf_err("Invalid interpreter\n");
         return;
     }
 
     info("Model INPUT tensors: \n");
-    for (auto input : this->_m_input) {
+    for (auto input : this->m_input) {
         this->LogTensorInfo(input);
     }
 
     info("Model OUTPUT tensors: \n");
-    for (auto output : this->_m_output) {
+    for (auto output : this->m_output) {
         this->LogTensorInfo(output);
     }
 
     info("Activation buffer (a.k.a tensor arena) size used: %zu\n",
-        this->_m_pInterpreter->arena_used_bytes());
+        this->m_pInterpreter->arena_used_bytes());
 
-    const size_t nOperators = this->_m_pInterpreter->operators_size();
+    const size_t nOperators = this->m_pInterpreter->operators_size();
     info("Number of operators: %zu\n", nOperators);
 
     /* For each operator, display registration information */
     for (size_t i = 0 ; i < nOperators; ++i) {
         const tflite::NodeAndRegistration nodeReg =
-            this->_m_pInterpreter->node_and_registration(i);
+            this->m_pInterpreter->node_and_registration(i);
         const TfLiteRegistration* reg = nodeReg.registration;
         std::string opName{""};
 
@@ -220,7 +220,7 @@
 
 bool arm::app::Model::IsInited() const
 {
-    return this->_m_inited;
+    return this->m_inited;
 }
 
 bool arm::app::Model::IsDataSigned() const
@@ -231,8 +231,8 @@
 bool arm::app::Model::RunInference()
 {
     bool inference_state = false;
-    if (this->_m_pModel && this->_m_pInterpreter) {
-        if (kTfLiteOk != this->_m_pInterpreter->Invoke()) {
+    if (this->m_pModel && this->m_pInterpreter) {
+        if (kTfLiteOk != this->m_pInterpreter->Invoke()) {
             printf_err("Invoke failed.\n");
         } else {
             inference_state = true;
@@ -246,7 +246,7 @@
 TfLiteTensor* arm::app::Model::GetInputTensor(size_t index) const
 {
     if (index < this->GetNumInputs()) {
-        return this->_m_input.at(index);
+        return this->m_input.at(index);
     }
     return nullptr;
 }
@@ -254,23 +254,23 @@
 TfLiteTensor* arm::app::Model::GetOutputTensor(size_t index) const
 {
     if (index < this->GetNumOutputs()) {
-        return this->_m_output.at(index);
+        return this->m_output.at(index);
     }
     return nullptr;
 }
 
 size_t arm::app::Model::GetNumInputs() const
 {
-    if (this->_m_pModel && this->_m_pInterpreter) {
-        return this->_m_pInterpreter->inputs_size();
+    if (this->m_pModel && this->m_pInterpreter) {
+        return this->m_pInterpreter->inputs_size();
     }
     return 0;
 }
 
 size_t arm::app::Model::GetNumOutputs() const
 {
-    if (this->_m_pModel && this->_m_pInterpreter) {
-        return this->_m_pInterpreter->outputs_size();
+    if (this->m_pModel && this->m_pInterpreter) {
+        return this->m_pInterpreter->outputs_size();
     }
     return 0;
 }
@@ -278,13 +278,13 @@
 
 TfLiteType arm::app::Model::GetType() const
 {
-    return this->_m_type;
+    return this->m_type;
 }
 
 TfLiteIntArray* arm::app::Model::GetInputShape(size_t index) const
 {
     if (index < this->GetNumInputs()) {
-        return this->_m_input.at(index)->dims;
+        return this->m_input.at(index)->dims;
     }
     return nullptr;
 }
@@ -292,7 +292,7 @@
 TfLiteIntArray* arm::app::Model::GetOutputShape(size_t index) const
 {
     if (index < this->GetNumOutputs()) {
-        return this->_m_output.at(index)->dims;
+        return this->m_output.at(index)->dims;
     }
     return nullptr;
 }
diff --git a/source/application/tensorflow-lite-micro/include/Model.hpp b/source/application/tensorflow-lite-micro/include/Model.hpp
index 70cf9ca..7a0493c 100644
--- a/source/application/tensorflow-lite-micro/include/Model.hpp
+++ b/source/application/tensorflow-lite-micro/include/Model.hpp
@@ -123,16 +123,16 @@
         size_t GetActivationBufferSize();
 
     private:
-        tflite::MicroErrorReporter      _m_uErrorReporter;                     /* Error reporter object. */
-        tflite::ErrorReporter*          _m_pErrorReporter      = nullptr;      /* Pointer to the error reporter. */
-        const tflite::Model*            _m_pModel              = nullptr;      /* Tflite model pointer. */
-        tflite::MicroInterpreter*       _m_pInterpreter        = nullptr;      /* Tflite interpreter. */
-        tflite::MicroAllocator*         _m_pAllocator          = nullptr;      /* Tflite micro allocator. */
-        bool                            _m_inited              = false;        /* Indicates whether this object has been initialised. */
+        tflite::MicroErrorReporter      m_uErrorReporter;                     /* Error reporter object. */
+        tflite::ErrorReporter*          m_pErrorReporter      = nullptr;      /* Pointer to the error reporter. */
+        const tflite::Model*            m_pModel              = nullptr;      /* Tflite model pointer. */
+        tflite::MicroInterpreter*       m_pInterpreter        = nullptr;      /* Tflite interpreter. */
+        tflite::MicroAllocator*         m_pAllocator          = nullptr;      /* Tflite micro allocator. */
+        bool                            m_inited              = false;        /* Indicates whether this object has been initialised. */
 
-        std::vector<TfLiteTensor*>      _m_input              = {};           /* Model's input tensor pointers. */
-        std::vector<TfLiteTensor*>      _m_output             = {};           /* Model's output tensor pointers. */
-        TfLiteType                      _m_type               = kTfLiteNoType;/* Model's data type. */
+        std::vector<TfLiteTensor*>      m_input              = {};           /* Model's input tensor pointers. */
+        std::vector<TfLiteTensor*>      m_output             = {};           /* Model's output tensor pointers. */
+        TfLiteType                      m_type               = kTfLiteNoType;/* Model's data type. */
 
     };
 
diff --git a/source/use_case/ad/include/AdMelSpectrogram.hpp b/source/use_case/ad/include/AdMelSpectrogram.hpp
index 30a77c1..05c5bfc 100644
--- a/source/use_case/ad/include/AdMelSpectrogram.hpp
+++ b/source/use_case/ad/include/AdMelSpectrogram.hpp
@@ -69,7 +69,7 @@
          *              energies to logarithmic scale. The difference from
          *              default behaviour is that the power is converted to dB
          *              and subsequently clamped.
-         * @param[in/out]   melEnergies - 1D vector of Mel energies
+         * @param[in,out]   melEnergies - 1D vector of Mel energies
          **/
         virtual void ConvertToLogarithmicScale(std::vector<float>& melEnergies) override;
 
diff --git a/source/use_case/ad/include/AdModel.hpp b/source/use_case/ad/include/AdModel.hpp
index bbdf91c..8d914c4 100644
--- a/source/use_case/ad/include/AdModel.hpp
+++ b/source/use_case/ad/include/AdModel.hpp
@@ -44,7 +44,7 @@
         static constexpr int ms_maxOpCnt = 6;
 
         /* A mutable op resolver instance */
-        tflite::MicroMutableOpResolver<ms_maxOpCnt> _m_opResolver;
+        tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
     };
 
 } /* namespace app */
diff --git a/source/use_case/ad/include/AdPostProcessing.hpp b/source/use_case/ad/include/AdPostProcessing.hpp
index f3b35a1..7eaec84 100644
--- a/source/use_case/ad/include/AdPostProcessing.hpp
+++ b/source/use_case/ad/include/AdPostProcessing.hpp
@@ -38,7 +38,7 @@
 
     /** @brief      Given a wav file name return AD model output index.
      *  @param[in]  wavFileName Audio WAV filename.
-     *                          File name should be in format <anything>_<goes>_XX_<here>.wav
+     *                          File name should be in format anything_goes_XX_here.wav
      *                          where XX is the machine ID e.g. 00, 02, 04 or 06
      *  @return     AD model output index as 8 bit integer.
     **/
diff --git a/source/use_case/ad/include/MelSpectrogram.hpp b/source/use_case/ad/include/MelSpectrogram.hpp
index 22b5d29..d3ea3f7 100644
--- a/source/use_case/ad/include/MelSpectrogram.hpp
+++ b/source/use_case/ad/include/MelSpectrogram.hpp
@@ -65,16 +65,16 @@
         /**
         * @brief        Extract Mel Spectrogram for one single small frame of
         *               audio data e.g. 640 samples.
-        * @param[in]    audioData - Vector of audio samples to calculate
+        * @param[in]    audioData       Vector of audio samples to calculate
         *               features for.
-        * @param[in]    trainingMean - Value to subtract from the the computed mel spectrogram, default 0.
+        * @param[in]    trainingMean    Value to subtract from the computed mel spectrogram, default 0.
         * @return       Vector of extracted Mel Spectrogram features.
         **/
         std::vector<float> ComputeMelSpec(const std::vector<int16_t>& audioData, float trainingMean = 0);
 
         /**
          * @brief       Constructor
-         * @param[in]   params - Mel Spectrogram parameters
+         * @param[in]   params   Mel Spectrogram parameters
         */
         explicit MelSpectrogram(const MelSpecParams& params);
 
@@ -87,10 +87,11 @@
         /**
          * @brief        Extract Mel Spectrogram features and quantise for one single small
          *               frame of audio data e.g. 640 samples.
-         * @param[in]    audioData - Vector of audio samples to calculate
+         * @param[in]    audioData      Vector of audio samples to calculate
          *               features for.
-         * @param[in]    quantScale - quantisation scale.
-         * @param[in]    quantOffset - quantisation offset
+         * @param[in]    quantScale     quantisation scale.
+         * @param[in]    quantOffset    quantisation offset.
+         * @param[in]    trainingMean   training mean.
          * @return       Vector of extracted quantised Mel Spectrogram features.
          **/
         template<typename T>
@@ -103,12 +104,12 @@
             float minVal = std::numeric_limits<T>::min();
             float maxVal = std::numeric_limits<T>::max();
 
-            std::vector<T> melSpecOut(this->_m_params.m_numFbankBins);
-            const size_t numFbankBins = this->_m_params.m_numFbankBins;
+            std::vector<T> melSpecOut(this->m_params.m_numFbankBins);
+            const size_t numFbankBins = this->m_params.m_numFbankBins;
 
             /* Quantize to T. */
             for (size_t k = 0; k < numFbankBins; ++k) {
-                auto quantizedEnergy = std::round(((this->_m_melEnergies[k]) / quantScale) + quantOffset);
+                auto quantizedEnergy = std::round(((this->m_melEnergies[k]) / quantScale) + quantOffset);
                 melSpecOut[k] = static_cast<T>(std::min<float>(std::max<float>(quantizedEnergy, minVal), maxVal));
             }
 
@@ -124,9 +125,9 @@
     protected:
         /**
          * @brief       Project input frequency to Mel Scale.
-         * @param[in]   freq - input frequency in floating point
-         * @param[in]   useHTKmethod - bool to signal if HTK method is to be
-         *              used for calculation
+         * @param[in]   freq          input frequency in floating point
+         * @param[in]   useHTKMethod  bool to signal if HTK method is to be
+         *                            used for calculation
          * @return      Mel transformed frequency in floating point
          **/
         static float MelScale(const float    freq,
@@ -135,9 +136,9 @@
         /**
          * @brief       Inverse Mel transform - convert MEL warped frequency
          *              back to normal frequency
-         * @param[in]   freq - Mel frequency in floating point
-         * @param[in]   useHTKmethod - bool to signal if HTK method is to be
-         *              used for calculation
+         * @param[in]   melFreq       Mel frequency in floating point
+         * @param[in]   useHTKMethod  bool to signal if HTK method is to be
+         *                            used for calculation
          * @return      Real world frequency in floating point
          **/
         static float InverseMelScale(const float melFreq,
@@ -168,7 +169,7 @@
 
         /**
          * @brief           Converts the Mel energies for logarithmic scale
-         * @param[in/out]   melEnergies - 1D vector of Mel energies
+         * @param[in,out]   melEnergies 1D vector of Mel energies
          **/
         virtual void ConvertToLogarithmicScale(std::vector<float>& melEnergies);
 
@@ -176,10 +177,10 @@
          * @brief       Given the low and high Mel values, get the normaliser
          *              for weights to be applied when populating the filter
          *              bank.
-         * @param[in]   leftMel - low Mel frequency value
-         * @param[in]   rightMel - high Mel frequency value
-         * @param[in]   useHTKMethod - bool to signal if HTK method is to be
-         *              used for calculation
+         * @param[in]   leftMel      low Mel frequency value
+         * @param[in]   rightMel     high Mel frequency value
+         * @param[in]   useHTKMethod bool to signal if HTK method is to be
+         *                           used for calculation
          * @return      Return float value to be applied 
          *              when populating the filter bank.
          */
@@ -189,16 +190,16 @@
                 const bool     useHTKMethod);
 
     private:
-        MelSpecParams                   _m_params;
-        std::vector<float>              _m_frame;
-        std::vector<float>              _m_buffer;
-        std::vector<float>              _m_melEnergies;
-        std::vector<float>              _m_windowFunc;
-        std::vector<std::vector<float>> _m_melFilterBank;
-        std::vector<uint32_t>            _m_filterBankFilterFirst;
-        std::vector<uint32_t>            _m_filterBankFilterLast;
-        bool                            _m_filterBankInitialised;
-        arm::app::math::FftInstance     _m_fftInstance;
+        MelSpecParams                   m_params;
+        std::vector<float>              m_frame;
+        std::vector<float>              m_buffer;
+        std::vector<float>              m_melEnergies;
+        std::vector<float>              m_windowFunc;
+        std::vector<std::vector<float>> m_melFilterBank;
+        std::vector<uint32_t>           m_filterBankFilterFirst;
+        std::vector<uint32_t>           m_filterBankFilterLast;
+        bool                            m_filterBankInitialised;
+        arm::app::math::FftInstance     m_fftInstance;
 
         /**
          * @brief       Initialises the filter banks.
diff --git a/source/use_case/ad/src/AdModel.cc b/source/use_case/ad/src/AdModel.cc
index 148bc98..82ad822 100644
--- a/source/use_case/ad/src/AdModel.cc
+++ b/source/use_case/ad/src/AdModel.cc
@@ -20,19 +20,19 @@
 
 const tflite::MicroOpResolver& arm::app::AdModel::GetOpResolver()
 {
-    return this->_m_opResolver;
+    return this->m_opResolver;
 }
 
 bool arm::app::AdModel::EnlistOperations()
 {
-    this->_m_opResolver.AddAveragePool2D();
-    this->_m_opResolver.AddConv2D();
-    this->_m_opResolver.AddDepthwiseConv2D();
-    this->_m_opResolver.AddRelu6();
-    this->_m_opResolver.AddReshape();
+    this->m_opResolver.AddAveragePool2D();
+    this->m_opResolver.AddConv2D();
+    this->m_opResolver.AddDepthwiseConv2D();
+    this->m_opResolver.AddRelu6();
+    this->m_opResolver.AddReshape();
 
 #if defined(ARM_NPU)
-    if (kTfLiteOk == this->_m_opResolver.AddEthosU()) {
+    if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
         info("Added %s support to op resolver\n",
             tflite::GetString_ETHOSU());
     } else {
diff --git a/source/use_case/ad/src/MelSpectrogram.cc b/source/use_case/ad/src/MelSpectrogram.cc
index f1752e1..fa7714a 100644
--- a/source/use_case/ad/src/MelSpectrogram.cc
+++ b/source/use_case/ad/src/MelSpectrogram.cc
@@ -61,27 +61,27 @@
     }
 
     MelSpectrogram::MelSpectrogram(const MelSpecParams& params):
-            _m_params(params),
-            _m_filterBankInitialised(false)
+            m_params(params),
+            m_filterBankInitialised(false)
     {
-        this->_m_buffer = std::vector<float>(
-                this->_m_params.m_frameLenPadded, 0.0);
-        this->_m_frame = std::vector<float>(
-                this->_m_params.m_frameLenPadded, 0.0);
-        this->_m_melEnergies = std::vector<float>(
-                this->_m_params.m_numFbankBins, 0.0);
+        this->m_buffer = std::vector<float>(
+                this->m_params.m_frameLenPadded, 0.0);
+        this->m_frame = std::vector<float>(
+                this->m_params.m_frameLenPadded, 0.0);
+        this->m_melEnergies = std::vector<float>(
+                this->m_params.m_numFbankBins, 0.0);
 
-        this->_m_windowFunc = std::vector<float>(this->_m_params.m_frameLen);
-        const auto multiplier = static_cast<float>(2 * M_PI / this->_m_params.m_frameLen);
+        this->m_windowFunc = std::vector<float>(this->m_params.m_frameLen);
+        const auto multiplier = static_cast<float>(2 * M_PI / this->m_params.m_frameLen);
 
         /* Create window function. */
-        for (size_t i = 0; i < this->_m_params.m_frameLen; ++i) {
-            this->_m_windowFunc[i] = (0.5 - (0.5 *
+        for (size_t i = 0; i < this->m_params.m_frameLen; ++i) {
+            this->m_windowFunc[i] = (0.5 - (0.5 *
                                              math::MathUtils::CosineF32(static_cast<float>(i) * multiplier)));
         }
 
-        math::MathUtils::FftInitF32(this->_m_params.m_frameLenPadded, this->_m_fftInstance);
-        debug("Instantiated Mel Spectrogram object: %s\n", this->_m_params.Str().c_str());
+        math::MathUtils::FftInitF32(this->m_params.m_frameLenPadded, this->m_fftInstance);
+        debug("Instantiated Mel Spectrogram object: %s\n", this->m_params.Str().c_str());
     }
 
     void MelSpectrogram::Init()
@@ -161,20 +161,20 @@
 
     void MelSpectrogram::ConvertToPowerSpectrum()
     {
-        const uint32_t halfDim = this->_m_buffer.size() / 2;
+        const uint32_t halfDim = this->m_buffer.size() / 2;
 
         /* Handle this special case. */
-        float firstEnergy = this->_m_buffer[0] * this->_m_buffer[0];
-        float lastEnergy = this->_m_buffer[1] * this->_m_buffer[1];
+        float firstEnergy = this->m_buffer[0] * this->m_buffer[0];
+        float lastEnergy = this->m_buffer[1] * this->m_buffer[1];
 
         math::MathUtils::ComplexMagnitudeSquaredF32(
-                this->_m_buffer.data(),
-                this->_m_buffer.size(),
-                this->_m_buffer.data(),
-                this->_m_buffer.size()/2);
+                this->m_buffer.data(),
+                this->m_buffer.size(),
+                this->m_buffer.data(),
+                this->m_buffer.size()/2);
 
-        this->_m_buffer[0] = firstEnergy;
-        this->_m_buffer[halfDim] = lastEnergy;
+        this->m_buffer[0] = firstEnergy;
+        this->m_buffer[halfDim] = lastEnergy;
     }
 
     float MelSpectrogram::GetMelFilterBankNormaliser(
@@ -193,14 +193,14 @@
     void MelSpectrogram::InitMelFilterBank()
     {
         if (!this->IsMelFilterBankInited()) {
-            this->_m_melFilterBank = this->CreateMelFilterBank();
-            this->_m_filterBankInitialised = true;
+            this->m_melFilterBank = this->CreateMelFilterBank();
+            this->m_filterBankInitialised = true;
         }
     }
 
     bool MelSpectrogram::IsMelFilterBankInited() const
     {
-        return this->_m_filterBankInitialised;
+        return this->m_filterBankInitialised;
     }
 
     std::vector<float> MelSpectrogram::ComputeMelSpec(const std::vector<int16_t>& audioData, float trainingMean)
@@ -209,64 +209,64 @@
 
         /* TensorFlow way of normalizing .wav data to (-1, 1). */
         constexpr float normaliser = 1.0/(1<<15);
-        for (size_t i = 0; i < this->_m_params.m_frameLen; ++i) {
-            this->_m_frame[i] = static_cast<float>(audioData[i]) * normaliser;
+        for (size_t i = 0; i < this->m_params.m_frameLen; ++i) {
+            this->m_frame[i] = static_cast<float>(audioData[i]) * normaliser;
         }
 
         /* Apply window function to input frame. */
-        for(size_t i = 0; i < this->_m_params.m_frameLen; ++i) {
-            this->_m_frame[i] *= this->_m_windowFunc[i];
+        for(size_t i = 0; i < this->m_params.m_frameLen; ++i) {
+            this->m_frame[i] *= this->m_windowFunc[i];
         }
 
         /* Set remaining frame values to 0. */
-        std::fill(this->_m_frame.begin() + this->_m_params.m_frameLen,this->_m_frame.end(), 0);
+        std::fill(this->m_frame.begin() + this->m_params.m_frameLen,this->m_frame.end(), 0);
 
         /* Compute FFT. */
-        math::MathUtils::FftF32(this->_m_frame, this->_m_buffer, this->_m_fftInstance);
+        math::MathUtils::FftF32(this->m_frame, this->m_buffer, this->m_fftInstance);
 
         /* Convert to power spectrum. */
         this->ConvertToPowerSpectrum();
 
         /* Apply mel filterbanks. */
-        if (!this->ApplyMelFilterBank(this->_m_buffer,
-                                      this->_m_melFilterBank,
-                                      this->_m_filterBankFilterFirst,
-                                      this->_m_filterBankFilterLast,
-                                      this->_m_melEnergies)) {
+        if (!this->ApplyMelFilterBank(this->m_buffer,
+                                      this->m_melFilterBank,
+                                      this->m_filterBankFilterFirst,
+                                      this->m_filterBankFilterLast,
+                                      this->m_melEnergies)) {
             printf_err("Failed to apply MEL filter banks\n");
         }
 
         /* Convert to logarithmic scale */
-        this->ConvertToLogarithmicScale(this->_m_melEnergies);
+        this->ConvertToLogarithmicScale(this->m_melEnergies);
 
         /* Perform mean subtraction. */
-        for (auto& energy:this->_m_melEnergies) {
+        for (auto& energy:this->m_melEnergies) {
             energy -= trainingMean;
         }
 
-        return this->_m_melEnergies;
+        return this->m_melEnergies;
     }
 
     std::vector<std::vector<float>> MelSpectrogram::CreateMelFilterBank()
     {
-        size_t numFftBins = this->_m_params.m_frameLenPadded / 2;
-        float fftBinWidth = static_cast<float>(this->_m_params.m_samplingFreq) / this->_m_params.m_frameLenPadded;
+        size_t numFftBins = this->m_params.m_frameLenPadded / 2;
+        float fftBinWidth = static_cast<float>(this->m_params.m_samplingFreq) / this->m_params.m_frameLenPadded;
 
-        float melLowFreq = MelSpectrogram::MelScale(this->_m_params.m_melLoFreq,
-                                          this->_m_params.m_useHtkMethod);
-        float melHighFreq = MelSpectrogram::MelScale(this->_m_params.m_melHiFreq,
-                                           this->_m_params.m_useHtkMethod);
-        float melFreqDelta = (melHighFreq - melLowFreq) / (this->_m_params.m_numFbankBins + 1);
+        float melLowFreq = MelSpectrogram::MelScale(this->m_params.m_melLoFreq,
+                                          this->m_params.m_useHtkMethod);
+        float melHighFreq = MelSpectrogram::MelScale(this->m_params.m_melHiFreq,
+                                           this->m_params.m_useHtkMethod);
+        float melFreqDelta = (melHighFreq - melLowFreq) / (this->m_params.m_numFbankBins + 1);
 
         std::vector<float> thisBin = std::vector<float>(numFftBins);
         std::vector<std::vector<float>> melFilterBank(
-                this->_m_params.m_numFbankBins);
-        this->_m_filterBankFilterFirst =
-                std::vector<uint32_t>(this->_m_params.m_numFbankBins);
-        this->_m_filterBankFilterLast =
-                std::vector<uint32_t>(this->_m_params.m_numFbankBins);
+                this->m_params.m_numFbankBins);
+        this->m_filterBankFilterFirst =
+                std::vector<uint32_t>(this->m_params.m_numFbankBins);
+        this->m_filterBankFilterLast =
+                std::vector<uint32_t>(this->m_params.m_numFbankBins);
 
-        for (size_t bin = 0; bin < this->_m_params.m_numFbankBins; bin++) {
+        for (size_t bin = 0; bin < this->m_params.m_numFbankBins; bin++) {
             float leftMel = melLowFreq + bin * melFreqDelta;
             float centerMel = melLowFreq + (bin + 1) * melFreqDelta;
             float rightMel = melLowFreq + (bin + 2) * melFreqDelta;
@@ -274,11 +274,11 @@
             uint32_t firstIndex = 0;
             uint32_t lastIndex = 0;
             bool firstIndexFound = false;
-            const float normaliser = this->GetMelFilterBankNormaliser(leftMel, rightMel, this->_m_params.m_useHtkMethod);
+            const float normaliser = this->GetMelFilterBankNormaliser(leftMel, rightMel, this->m_params.m_useHtkMethod);
 
             for (size_t i = 0; i < numFftBins; ++i) {
                 float freq = (fftBinWidth * i); /* Center freq of this fft bin. */
-                float mel = MelSpectrogram::MelScale(freq, this->_m_params.m_useHtkMethod);
+                float mel = MelSpectrogram::MelScale(freq, this->m_params.m_useHtkMethod);
                 thisBin[i] = 0.0;
 
                 if (mel > leftMel && mel < rightMel) {
@@ -298,8 +298,8 @@
                 }
             }
 
-            this->_m_filterBankFilterFirst[bin] = firstIndex;
-            this->_m_filterBankFilterLast[bin] = lastIndex;
+            this->m_filterBankFilterFirst[bin] = firstIndex;
+            this->m_filterBankFilterLast[bin] = lastIndex;
 
             /* Copy the part we care about. */
             for (uint32_t i = firstIndex; i <= lastIndex; ++i) {
diff --git a/source/use_case/ad/src/UseCaseHandler.cc b/source/use_case/ad/src/UseCaseHandler.cc
index 233b0f4..ec35156 100644
--- a/source/use_case/ad/src/UseCaseHandler.cc
+++ b/source/use_case/ad/src/UseCaseHandler.cc
@@ -30,13 +30,13 @@
 
     /**
     * @brief           Helper function to increment current audio clip index
-    * @param[in/out]   ctx     pointer to the application context object
+    * @param[in,out]   ctx     reference to the application context object
     **/
     static void IncrementAppCtxClipIdx(ApplicationContext& ctx);
 
     /**
      * @brief           Helper function to set the audio clip index
-     * @param[in/out]   ctx     pointer to the application context object
+     * @param[in,out]   ctx     reference to the application context object
      * @param[in]       idx     value to be set
      * @return          true if index is set, false otherwise
      **/
@@ -47,7 +47,7 @@
      *                  object.
      * @param[in]       platform    reference to the hal platform object
      * @param[in]       result      average sum of classification results
-     * @param[in]       threhsold   if larger than this value we have an anomaly
+     * @param[in]       threshold   if larger than this value we have an anomaly
      * @return          true if successful, false otherwise
      **/
     static bool PresentInferenceResult(hal_platform& platform, float result, float threshold);
@@ -61,9 +61,10 @@
      *
      * Warning: mfcc calculator provided as input must have the same life scope as returned function.
      *
-     * @param[in]           mfcc            MFCC feature calculator.
-     * @param[in/out]       inputTensor     Input tensor pointer to store calculated features.
-     * @param[i]            cacheSize       Size of the feture vectors cache (number of feature vectors).
+     * @param[in]           melSpec         Mel spectrogram feature calculator.
+     * @param[in,out]       inputTensor     Input tensor pointer to store calculated features.
+     * @param[in]           cacheSize       Size of the feature vectors cache (number of feature vectors).
+     * @param[in]           trainingMean    Training mean.
      * @return function     function to be called providing audio sample and sliding window index.
      */
     static std::function<void (std::vector<int16_t>&, int, bool, size_t, size_t)>
diff --git a/source/use_case/asr/include/OutputDecode.hpp b/source/use_case/asr/include/OutputDecode.hpp
index 6095531..9d39057 100644
--- a/source/use_case/asr/include/OutputDecode.hpp
+++ b/source/use_case/asr/include/OutputDecode.hpp
@@ -27,7 +27,7 @@
     /**
      * @brief       Gets the top N classification results from the
      *              output vector.
-     * @param[in]   tensor   Label output from classifier.
+     * @param[in]   vecResults   Label output from classifier.
      * @return      true if successful, false otherwise.
     **/
     std::string DecodeOutput(const std::vector<ClassificationResult>& vecResults);
diff --git a/source/use_case/asr/include/Wav2LetterModel.hpp b/source/use_case/asr/include/Wav2LetterModel.hpp
index 4c62578..55395b9 100644
--- a/source/use_case/asr/include/Wav2LetterModel.hpp
+++ b/source/use_case/asr/include/Wav2LetterModel.hpp
@@ -52,7 +52,7 @@
         static constexpr int ms_maxOpCnt = 5;
 
         /* A mutable op resolver instance. */
-        tflite::MicroMutableOpResolver<ms_maxOpCnt> _m_opResolver;
+        tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
     };
 
 } /* namespace app */
diff --git a/source/use_case/asr/include/Wav2LetterPostprocess.hpp b/source/use_case/asr/include/Wav2LetterPostprocess.hpp
index e16d35b..a744e0f 100644
--- a/source/use_case/asr/include/Wav2LetterPostprocess.hpp
+++ b/source/use_case/asr/include/Wav2LetterPostprocess.hpp
@@ -37,6 +37,7 @@
          *                              output tensor.
          * @param[in]   innerLen        This is the length of the section
          *                              between left and right context.
+         * @param[in]   blankTokenIdx   Index of the labels blank token.
          **/
         Postprocess(uint32_t contextLen,
                     uint32_t innerLen,
@@ -61,11 +62,11 @@
                     bool lastIteration = false);
 
     private:
-        uint32_t    _m_contextLen;      /* lengths of left and right contexts. */
-        uint32_t    _m_innerLen;        /* Length of inner context. */
-        uint32_t    _m_totalLen;        /* Total length of the required axis. */
-        uint32_t    _m_countIterations; /* Current number of iterations. */
-        uint32_t    _m_blankTokenIdx;   /* Index of the labels blank token. */
+        uint32_t    m_contextLen;      /* Lengths of left and right contexts. */
+        uint32_t    m_innerLen;        /* Length of inner context. */
+        uint32_t    m_totalLen;        /* Total length of the required axis. */
+        uint32_t    m_countIterations; /* Current number of iterations. */
+        uint32_t    m_blankTokenIdx;   /* Index of the labels blank token. */
         /**
          * @brief       Checks if the tensor and axis index are valid
          *              inputs to the object - based on how it has been
diff --git a/source/use_case/asr/include/Wav2LetterPreprocess.hpp b/source/use_case/asr/include/Wav2LetterPreprocess.hpp
index 10512b9..b0e0c67 100644
--- a/source/use_case/asr/include/Wav2LetterPreprocess.hpp
+++ b/source/use_case/asr/include/Wav2LetterPreprocess.hpp
@@ -144,31 +144,31 @@
                 const int       quantOffset)
         {
             /* Check the output size will fit everything. */
-            if (outputBufSz < (this->_m_mfccBuf.size(0) * 3 * sizeof(T))) {
+            if (outputBufSz < (this->m_mfccBuf.size(0) * 3 * sizeof(T))) {
                 printf_err("Tensor size too small for features\n");
                 return false;
             }
 
             /* Populate. */
             T * outputBufMfcc = outputBuf;
-            T * outputBufD1 = outputBuf + this->_m_numMfccFeats;
-            T * outputBufD2 = outputBufD1 + this->_m_numMfccFeats;
-            const uint32_t ptrIncr = this->_m_numMfccFeats * 2;  /* (3 vectors - 1 vector) */
+            T * outputBufD1 = outputBuf + this->m_numMfccFeats;
+            T * outputBufD2 = outputBufD1 + this->m_numMfccFeats;
+            const uint32_t ptrIncr = this->m_numMfccFeats * 2;  /* (3 vectors - 1 vector) */
 
             const float minVal = std::numeric_limits<T>::min();
             const float maxVal = std::numeric_limits<T>::max();
 
             /* Need to transpose while copying and concatenating the tensor. */
-            for (uint32_t j = 0; j < this->_m_numFeatVectors; ++j) {
-                for (uint32_t i = 0; i < this->_m_numMfccFeats; ++i) {
+            for (uint32_t j = 0; j < this->m_numFeatVectors; ++j) {
+                for (uint32_t i = 0; i < this->m_numMfccFeats; ++i) {
                     *outputBufMfcc++ = static_cast<T>(Preprocess::GetQuantElem(
-                            this->_m_mfccBuf(i, j), quantScale,
+                            this->m_mfccBuf(i, j), quantScale,
                             quantOffset, minVal, maxVal));
                     *outputBufD1++ = static_cast<T>(Preprocess::GetQuantElem(
-                            this->_m_delta1Buf(i, j), quantScale,
+                            this->m_delta1Buf(i, j), quantScale,
                             quantOffset, minVal, maxVal));
                     *outputBufD2++ = static_cast<T>(Preprocess::GetQuantElem(
-                            this->_m_delta2Buf(i, j), quantScale,
+                            this->m_delta2Buf(i, j), quantScale,
                             quantOffset, minVal, maxVal));
                 }
                 outputBufMfcc += ptrIncr;
@@ -180,18 +180,18 @@
         }
 
     private:
-        Wav2LetterMFCC      _m_mfcc;            /* MFCC instance. */
+        Wav2LetterMFCC      m_mfcc;            /* MFCC instance. */
 
         /* Actual buffers to be populated. */
-        Array2d<float>      _m_mfccBuf;         /* Contiguous buffer 1D: MFCC */
-        Array2d<float>      _m_delta1Buf;       /* Contiguous buffer 1D: Delta 1 */
-        Array2d<float>      _m_delta2Buf;       /* Contiguous buffer 1D: Delta 2 */
+        Array2d<float>      m_mfccBuf;         /* Contiguous buffer 1D: MFCC */
+        Array2d<float>      m_delta1Buf;       /* Contiguous buffer 1D: Delta 1 */
+        Array2d<float>      m_delta2Buf;       /* Contiguous buffer 1D: Delta 2 */
 
-        uint32_t            _m_windowLen;       /* Window length for MFCC. */
-        uint32_t            _m_windowStride;    /* Window stride len for MFCC. */
-        uint32_t            _m_numMfccFeats;    /* Number of MFCC features per window. */
-        uint32_t            _m_numFeatVectors;  /* Number of _m_numMfccFeats. */
-        AudioWindow         _m_window;          /* Sliding window. */
+        uint32_t            m_windowLen;       /* Window length for MFCC. */
+        uint32_t            m_windowStride;    /* Window stride len for MFCC. */
+        uint32_t            m_numMfccFeats;    /* Number of MFCC features per window. */
+        uint32_t            m_numFeatVectors;  /* Number of feature vectors. */
+        AudioWindow         m_window;          /* Sliding window. */
 
     };
 
diff --git a/source/use_case/asr/src/UseCaseHandler.cc b/source/use_case/asr/src/UseCaseHandler.cc
index 43b17dc..dcc879f 100644
--- a/source/use_case/asr/src/UseCaseHandler.cc
+++ b/source/use_case/asr/src/UseCaseHandler.cc
@@ -50,8 +50,6 @@
      *                  object.
      * @param[in]       platform    Reference to the hal platform object.
      * @param[in]       results     Vector of classification results to be displayed.
-     * @param[in]       infTimeMs   Inference time in milliseconds, if available
-     *                              otherwise, this can be passed in as 0.
      * @return          true if successful, false otherwise.
      **/
     static bool PresentInferenceResult(
diff --git a/source/use_case/asr/src/Wav2LetterModel.cc b/source/use_case/asr/src/Wav2LetterModel.cc
index 5aefecd..6f87be8 100644
--- a/source/use_case/asr/src/Wav2LetterModel.cc
+++ b/source/use_case/asr/src/Wav2LetterModel.cc
@@ -20,18 +20,18 @@
 
 const tflite::MicroOpResolver& arm::app::Wav2LetterModel::GetOpResolver()
 {
-    return this->_m_opResolver;
+    return this->m_opResolver;
 }
 
 bool arm::app::Wav2LetterModel::EnlistOperations()
 {
-    this->_m_opResolver.AddConv2D();
-    this->_m_opResolver.AddMul();
-    this->_m_opResolver.AddMaximum();
-    this->_m_opResolver.AddReshape();
+    this->m_opResolver.AddConv2D();
+    this->m_opResolver.AddMul();
+    this->m_opResolver.AddMaximum();
+    this->m_opResolver.AddReshape();
 
 #if defined(ARM_NPU)
-    if (kTfLiteOk == this->_m_opResolver.AddEthosU()) {
+    if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
         info("Added %s support to op resolver\n",
             tflite::GetString_ETHOSU());
     } else {
diff --git a/source/use_case/asr/src/Wav2LetterPostprocess.cc b/source/use_case/asr/src/Wav2LetterPostprocess.cc
index b1bcdc8..fd11eef 100644
--- a/source/use_case/asr/src/Wav2LetterPostprocess.cc
+++ b/source/use_case/asr/src/Wav2LetterPostprocess.cc
@@ -27,11 +27,11 @@
     Postprocess::Postprocess(const uint32_t contextLen,
                              const uint32_t innerLen,
                              const uint32_t blankTokenIdx)
-        :   _m_contextLen(contextLen),
-            _m_innerLen(innerLen),
-            _m_totalLen(2 * this->_m_contextLen + this->_m_innerLen),
-            _m_countIterations(0),
-            _m_blankTokenIdx(blankTokenIdx)
+        :   m_contextLen(contextLen),
+            m_innerLen(innerLen),
+            m_totalLen(2 * this->m_contextLen + this->m_innerLen),
+            m_countIterations(0),
+            m_blankTokenIdx(blankTokenIdx)
     {}
 
     bool Postprocess::Invoke(TfLiteTensor*  tensor,
@@ -51,7 +51,7 @@
         if (0 == elemSz) {
             printf_err("Tensor type not supported for post processing\n");
             return false;
-        } else if (elemSz * this->_m_totalLen > tensor->bytes) {
+        } else if (elemSz * this->m_totalLen > tensor->bytes) {
             printf_err("Insufficient number of tensor bytes\n");
             return false;
         }
@@ -88,7 +88,7 @@
             return false;
         }
 
-        if (static_cast<int>(this->_m_totalLen) !=
+        if (static_cast<int>(this->m_totalLen) !=
                              tensor->dims->data[axisIdx]) {
             printf_err("Unexpected tensor dimension for axis %d, \n",
                 tensor->dims->data[axisIdx]);
@@ -124,31 +124,31 @@
     {
         /* In this case, the "zero-ing" is quite simple as the region
          * to be zeroed sits in contiguous memory (row-major). */
-        const uint32_t eraseLen = strideSzBytes * this->_m_contextLen;
+        const uint32_t eraseLen = strideSzBytes * this->m_contextLen;
 
         /* Erase left context? */
-        if (this->_m_countIterations > 0) {
+        if (this->m_countIterations > 0) {
             /* Set output of each classification window to the blank token. */
             std::memset(ptrData, 0, eraseLen);
-            for (size_t windowIdx = 0; windowIdx < this->_m_contextLen; windowIdx++) {
-                ptrData[windowIdx*strideSzBytes + this->_m_blankTokenIdx] = 1;
+            for (size_t windowIdx = 0; windowIdx < this->m_contextLen; windowIdx++) {
+                ptrData[windowIdx*strideSzBytes + this->m_blankTokenIdx] = 1;
             }
         }
 
         /* Erase right context? */
         if (false == lastIteration) {
-            uint8_t * rightCtxPtr = ptrData + (strideSzBytes * (this->_m_contextLen + this->_m_innerLen));
+            uint8_t * rightCtxPtr = ptrData + (strideSzBytes * (this->m_contextLen + this->m_innerLen));
             /* Set output of each classification window to the blank token. */
             std::memset(rightCtxPtr, 0, eraseLen);
-            for (size_t windowIdx = 0; windowIdx < this->_m_contextLen; windowIdx++) {
-                rightCtxPtr[windowIdx*strideSzBytes + this->_m_blankTokenIdx] = 1;
+            for (size_t windowIdx = 0; windowIdx < this->m_contextLen; windowIdx++) {
+                rightCtxPtr[windowIdx*strideSzBytes + this->m_blankTokenIdx] = 1;
             }
         }
 
         if (lastIteration) {
-            this->_m_countIterations = 0;
+            this->m_countIterations = 0;
         } else {
-            ++this->_m_countIterations;
+            ++this->m_countIterations;
         }
 
         return true;
diff --git a/source/use_case/asr/src/Wav2LetterPreprocess.cc b/source/use_case/asr/src/Wav2LetterPreprocess.cc
index d65ea75..e5ac3ca 100644
--- a/source/use_case/asr/src/Wav2LetterPreprocess.cc
+++ b/source/use_case/asr/src/Wav2LetterPreprocess.cc
@@ -32,18 +32,18 @@
         const uint32_t  windowLen,
         const uint32_t  windowStride,
         const uint32_t  numMfccVectors):
-            _m_mfcc(numMfccFeatures, windowLen),
-            _m_mfccBuf(numMfccFeatures, numMfccVectors),
-            _m_delta1Buf(numMfccFeatures, numMfccVectors),
-            _m_delta2Buf(numMfccFeatures, numMfccVectors),
-            _m_windowLen(windowLen),
-            _m_windowStride(windowStride),
-            _m_numMfccFeats(numMfccFeatures),
-            _m_numFeatVectors(numMfccVectors),
-            _m_window()
+            m_mfcc(numMfccFeatures, windowLen),
+            m_mfccBuf(numMfccFeatures, numMfccVectors),
+            m_delta1Buf(numMfccFeatures, numMfccVectors),
+            m_delta2Buf(numMfccFeatures, numMfccVectors),
+            m_windowLen(windowLen),
+            m_windowStride(windowStride),
+            m_numMfccFeats(numMfccFeatures),
+            m_numFeatVectors(numMfccVectors),
+            m_window()
     {
         if (numMfccFeatures > 0 && windowLen > 0) {
-            this->_m_mfcc.Init();
+            this->m_mfcc.Init();
         }
     }
 
@@ -52,45 +52,45 @@
                 const uint32_t  audioDataLen,
                 TfLiteTensor*   tensor)
     {
-        this->_m_window = SlidingWindow<const int16_t>(
+        this->m_window = SlidingWindow<const int16_t>(
                             audioData, audioDataLen,
-                            this->_m_windowLen, this->_m_windowStride);
+                            this->m_windowLen, this->m_windowStride);
 
         uint32_t mfccBufIdx = 0;
 
-        std::fill(_m_mfccBuf.begin(), _m_mfccBuf.end(), 0.f);
-        std::fill(_m_delta1Buf.begin(), _m_delta1Buf.end(), 0.f);
-        std::fill(_m_delta2Buf.begin(), _m_delta2Buf.end(), 0.f);
+        std::fill(m_mfccBuf.begin(), m_mfccBuf.end(), 0.f);
+        std::fill(m_delta1Buf.begin(), m_delta1Buf.end(), 0.f);
+        std::fill(m_delta2Buf.begin(), m_delta2Buf.end(), 0.f);
 
         /* While we can slide over the window. */
-        while (this->_m_window.HasNext()) {
-            const int16_t*  mfccWindow = this->_m_window.Next();
+        while (this->m_window.HasNext()) {
+            const int16_t*  mfccWindow = this->m_window.Next();
             auto mfccAudioData = std::vector<int16_t>(
                                         mfccWindow,
-                                        mfccWindow + this->_m_windowLen);
-            auto mfcc = this->_m_mfcc.MfccCompute(mfccAudioData);
-            for (size_t i = 0; i < this->_m_mfccBuf.size(0); ++i) {
-                this->_m_mfccBuf(i, mfccBufIdx) = mfcc[i];
+                                        mfccWindow + this->m_windowLen);
+            auto mfcc = this->m_mfcc.MfccCompute(mfccAudioData);
+            for (size_t i = 0; i < this->m_mfccBuf.size(0); ++i) {
+                this->m_mfccBuf(i, mfccBufIdx) = mfcc[i];
             }
             ++mfccBufIdx;
         }
 
         /* Pad MFCC if needed by adding MFCC for zeros. */
-        if (mfccBufIdx != this->_m_numFeatVectors) {
-            std::vector<int16_t> zerosWindow = std::vector<int16_t>(this->_m_windowLen, 0);
-            std::vector<float> mfccZeros = this->_m_mfcc.MfccCompute(zerosWindow);
+        if (mfccBufIdx != this->m_numFeatVectors) {
+            std::vector<int16_t> zerosWindow = std::vector<int16_t>(this->m_windowLen, 0);
+            std::vector<float> mfccZeros = this->m_mfcc.MfccCompute(zerosWindow);
 
-            while (mfccBufIdx != this->_m_numFeatVectors) {
-                memcpy(&this->_m_mfccBuf(0, mfccBufIdx),
-                       mfccZeros.data(), sizeof(float) * _m_numMfccFeats);
+            while (mfccBufIdx != this->m_numFeatVectors) {
+                memcpy(&this->m_mfccBuf(0, mfccBufIdx),
+                       mfccZeros.data(), sizeof(float) * m_numMfccFeats);
                 ++mfccBufIdx;
             }
         }
 
         /* Compute first and second order deltas from MFCCs. */
-        Preprocess::ComputeDeltas(this->_m_mfccBuf,
-                            this->_m_delta1Buf,
-                            this->_m_delta2Buf);
+        Preprocess::ComputeDeltas(this->m_mfccBuf,
+                            this->m_delta1Buf,
+                            this->m_delta2Buf);
 
         /* Normalise. */
         this->Normalise();
@@ -206,9 +206,9 @@
 
     void Preprocess::Normalise()
     {
-        Preprocess::NormaliseVec(this->_m_mfccBuf);
-        Preprocess::NormaliseVec(this->_m_delta1Buf);
-        Preprocess::NormaliseVec(this->_m_delta2Buf);
+        Preprocess::NormaliseVec(this->m_mfccBuf);
+        Preprocess::NormaliseVec(this->m_delta1Buf);
+        Preprocess::NormaliseVec(this->m_delta2Buf);
     }
 
     float Preprocess::GetQuantElem(
diff --git a/source/use_case/img_class/include/MobileNetModel.hpp b/source/use_case/img_class/include/MobileNetModel.hpp
index 2540564..503f1ac 100644
--- a/source/use_case/img_class/include/MobileNetModel.hpp
+++ b/source/use_case/img_class/include/MobileNetModel.hpp
@@ -46,7 +46,7 @@
         static constexpr int ms_maxOpCnt = 7;
 
         /* A mutable op resolver instance. */
-        tflite::MicroMutableOpResolver<ms_maxOpCnt> _m_opResolver;
+        tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
     };
 
 } /* namespace app */
diff --git a/source/use_case/img_class/src/MobileNetModel.cc b/source/use_case/img_class/src/MobileNetModel.cc
index eeaa109..b937382 100644
--- a/source/use_case/img_class/src/MobileNetModel.cc
+++ b/source/use_case/img_class/src/MobileNetModel.cc
@@ -20,20 +20,20 @@
 
 const tflite::MicroOpResolver& arm::app::MobileNetModel::GetOpResolver()
 {
-    return this->_m_opResolver;
+    return this->m_opResolver;
 }
 
 bool arm::app::MobileNetModel::EnlistOperations()
 {
-    this->_m_opResolver.AddDepthwiseConv2D();
-    this->_m_opResolver.AddConv2D();
-    this->_m_opResolver.AddAveragePool2D();
-    this->_m_opResolver.AddAdd();
-    this->_m_opResolver.AddReshape();
-    this->_m_opResolver.AddSoftmax();
+    this->m_opResolver.AddDepthwiseConv2D();
+    this->m_opResolver.AddConv2D();
+    this->m_opResolver.AddAveragePool2D();
+    this->m_opResolver.AddAdd();
+    this->m_opResolver.AddReshape();
+    this->m_opResolver.AddSoftmax();
 
 #if defined(ARM_NPU)
-    if (kTfLiteOk == this->_m_opResolver.AddEthosU()) {
+    if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
         info("Added %s support to op resolver\n",
             tflite::GetString_ETHOSU());
     } else {
diff --git a/source/use_case/img_class/src/UseCaseHandler.cc b/source/use_case/img_class/src/UseCaseHandler.cc
index fa77512..337cb29 100644
--- a/source/use_case/img_class/src/UseCaseHandler.cc
+++ b/source/use_case/img_class/src/UseCaseHandler.cc
@@ -58,8 +58,6 @@
      *                  object.
      * @param[in]       platform    Reference to the hal platform object.
      * @param[in]       results     Vector of classification results to be displayed.
-     * @param[in]       infTimeMs   Inference time in milliseconds, if available
-     *                              otherwise, this can be passed in as 0.
      * @return          true if successful, false otherwise.
      **/
     static bool PresentInferenceResult(hal_platform& platform,
diff --git a/source/use_case/inference_runner/include/TestModel.hpp b/source/use_case/inference_runner/include/TestModel.hpp
index 0b3e9b9..0846bd4 100644
--- a/source/use_case/inference_runner/include/TestModel.hpp
+++ b/source/use_case/inference_runner/include/TestModel.hpp
@@ -38,7 +38,7 @@
     private:
 
         /* No need to define individual ops at the cost of extra memory. */
-        tflite::AllOpsResolver _m_opResolver;
+        tflite::AllOpsResolver m_opResolver;
     };
 
 } /* namespace app */
diff --git a/source/use_case/inference_runner/src/TestModel.cc b/source/use_case/inference_runner/src/TestModel.cc
index 0926a96..4512a9b 100644
--- a/source/use_case/inference_runner/src/TestModel.cc
+++ b/source/use_case/inference_runner/src/TestModel.cc
@@ -20,7 +20,7 @@
 
 const tflite::AllOpsResolver& arm::app::TestModel::GetOpResolver()
 {
-    return this->_m_opResolver;
+    return this->m_opResolver;
 }
 
 extern uint8_t* GetModelPointer();
diff --git a/source/use_case/kws/include/DsCnnModel.hpp b/source/use_case/kws/include/DsCnnModel.hpp
index e9ac18c..a1a45cd 100644
--- a/source/use_case/kws/include/DsCnnModel.hpp
+++ b/source/use_case/kws/include/DsCnnModel.hpp
@@ -50,7 +50,7 @@
         static constexpr int ms_maxOpCnt = 8;
 
         /* A mutable op resolver instance. */
-        tflite::MicroMutableOpResolver<ms_maxOpCnt> _m_opResolver;
+        tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
     };
 
 } /* namespace app */
diff --git a/source/use_case/kws/src/DsCnnModel.cc b/source/use_case/kws/src/DsCnnModel.cc
index a093eb4..4edfc04 100644
--- a/source/use_case/kws/src/DsCnnModel.cc
+++ b/source/use_case/kws/src/DsCnnModel.cc
@@ -20,21 +20,21 @@
 
 const tflite::MicroOpResolver& arm::app::DsCnnModel::GetOpResolver()
 {
-    return this->_m_opResolver;
+    return this->m_opResolver;
 }
 
 bool arm::app::DsCnnModel::EnlistOperations()
 {
-    this->_m_opResolver.AddReshape();
-    this->_m_opResolver.AddAveragePool2D();
-    this->_m_opResolver.AddConv2D();
-    this->_m_opResolver.AddDepthwiseConv2D();
-    this->_m_opResolver.AddFullyConnected();
-    this->_m_opResolver.AddRelu();
-    this->_m_opResolver.AddSoftmax();
+    this->m_opResolver.AddReshape();
+    this->m_opResolver.AddAveragePool2D();
+    this->m_opResolver.AddConv2D();
+    this->m_opResolver.AddDepthwiseConv2D();
+    this->m_opResolver.AddFullyConnected();
+    this->m_opResolver.AddRelu();
+    this->m_opResolver.AddSoftmax();
 
 #if defined(ARM_NPU)
-    if (kTfLiteOk == this->_m_opResolver.AddEthosU()) {
+    if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
         info("Added %s support to op resolver\n",
             tflite::GetString_ETHOSU());
     } else {
diff --git a/source/use_case/kws/src/UseCaseHandler.cc b/source/use_case/kws/src/UseCaseHandler.cc
index eaf53c1..2144c03 100644
--- a/source/use_case/kws/src/UseCaseHandler.cc
+++ b/source/use_case/kws/src/UseCaseHandler.cc
@@ -52,8 +52,6 @@
      *                  object.
      * @param[in]       platform    Reference to the hal platform object.
      * @param[in]       results     Vector of classification results to be displayed.
-     * @param[in]       infTimeMs   Inference time in milliseconds, if available,
-     *                              otherwise, this can be passed in as 0.
      * @return          true if successful, false otherwise.
      **/
     static bool PresentInferenceResult(hal_platform& platform,
@@ -341,11 +339,11 @@
      * Real features math is done by a lambda function provided as a parameter.
      * Features are written to input tensor memory.
      *
-     * @tparam T            Feature vector type.
-     * @param inputTensor   Model input tensor pointer.
-     * @param cacheSize     Number of feature vectors to cache. Defined by the sliding window overlap.
-     * @param compute       Features calculator function.
-     * @return              Lambda function to compute features.
+     * @tparam T                Feature vector type.
+     * @param[in] inputTensor   Model input tensor pointer.
+     * @param[in] cacheSize     Number of feature vectors to cache. Defined by the sliding window overlap.
+     * @param[in] compute       Features calculator function.
+     * @return                  Lambda function to compute features.
      */
     template<class T>
     std::function<void (std::vector<int16_t>&, size_t, bool, size_t)>
diff --git a/source/use_case/kws_asr/include/DsCnnModel.hpp b/source/use_case/kws_asr/include/DsCnnModel.hpp
index f9d4357..92d96b9 100644
--- a/source/use_case/kws_asr/include/DsCnnModel.hpp
+++ b/source/use_case/kws_asr/include/DsCnnModel.hpp
@@ -58,7 +58,7 @@
         static constexpr int ms_maxOpCnt = 10;
 
         /* A mutable op resolver instance. */
-        tflite::MicroMutableOpResolver<ms_maxOpCnt> _m_opResolver;
+        tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
     };
 
 } /* namespace app */
diff --git a/source/use_case/kws_asr/include/OutputDecode.hpp b/source/use_case/kws_asr/include/OutputDecode.hpp
index 2bbb29c..cea2c33 100644
--- a/source/use_case/kws_asr/include/OutputDecode.hpp
+++ b/source/use_case/kws_asr/include/OutputDecode.hpp
@@ -27,7 +27,7 @@
     /**
      * @brief       Gets the top N classification results from the
      *              output vector.
-     * @param[in]   tensor   Label output from classifier.
+     * @param[in]   vecResults   Label output from classifier.
      * @return      true if successful, false otherwise.
     **/
     std::string DecodeOutput(const std::vector<ClassificationResult>& vecResults);
diff --git a/source/use_case/kws_asr/include/Wav2LetterModel.hpp b/source/use_case/kws_asr/include/Wav2LetterModel.hpp
index 9a86bd9..7c327b3 100644
--- a/source/use_case/kws_asr/include/Wav2LetterModel.hpp
+++ b/source/use_case/kws_asr/include/Wav2LetterModel.hpp
@@ -58,7 +58,7 @@
         static constexpr int ms_maxOpCnt = 5;
 
         /* A mutable op resolver instance. */
-        tflite::MicroMutableOpResolver<ms_maxOpCnt> _m_opResolver;
+        tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
     };
 
 } /* namespace app */
diff --git a/source/use_case/kws_asr/include/Wav2LetterPostprocess.hpp b/source/use_case/kws_asr/include/Wav2LetterPostprocess.hpp
index fe60923..5c11412 100644
--- a/source/use_case/kws_asr/include/Wav2LetterPostprocess.hpp
+++ b/source/use_case/kws_asr/include/Wav2LetterPostprocess.hpp
@@ -33,10 +33,11 @@
     public:
         /**
          * @brief       Constructor
-         * @param[in]   contextLen   Left and right context length for
-         *                           output tensor.
-         * @param[in]   innerLen     This is the length of the section
-         *                           between left and right context.
+         * @param[in]   contextLen     Left and right context length for
+         *                             output tensor.
+         * @param[in]   innerLen       This is the length of the section
+         *                             between left and right context.
+         * @param[in]   blankTokenIdx  Blank token index.
          **/
         Postprocess(uint32_t contextLen,
                     uint32_t innerLen,
@@ -61,11 +62,11 @@
                     bool lastIteration = false);
 
     private:
-        uint32_t    _m_contextLen;      /* Lengths of left and right contexts. */
-        uint32_t    _m_innerLen;        /* Length of inner context. */
-        uint32_t    _m_totalLen;        /* Total length of the required axis. */
-        uint32_t    _m_countIterations; /* Current number of iterations. */
-        uint32_t    _m_blankTokenIdx;   /* Index of the labels blank token. */
+        uint32_t    m_contextLen;      /* Lengths of left and right contexts. */
+        uint32_t    m_innerLen;        /* Length of inner context. */
+        uint32_t    m_totalLen;        /* Total length of the required axis. */
+        uint32_t    m_countIterations; /* Current number of iterations. */
+        uint32_t    m_blankTokenIdx;   /* Index of the labels blank token. */
         /**
          * @brief       Checks if the tensor and axis index are valid
          *              inputs to the object - based on how it has been
diff --git a/source/use_case/kws_asr/include/Wav2LetterPreprocess.hpp b/source/use_case/kws_asr/include/Wav2LetterPreprocess.hpp
index cf40fa8..66d19d3 100644
--- a/source/use_case/kws_asr/include/Wav2LetterPreprocess.hpp
+++ b/source/use_case/kws_asr/include/Wav2LetterPreprocess.hpp
@@ -145,32 +145,32 @@
                 const int       quantOffset)
         {
             /* Check that the output buffer is large enough for everything. */
-            if (outputBufSz < (this->_m_mfccBuf.size(0) * 3 * sizeof(T))) {
+            if (outputBufSz < (this->m_mfccBuf.size(0) * 3 * sizeof(T))) {
                 printf_err("Tensor size too small for features\n");
                 return false;
             }
 
             /* Populate. */
             T * outputBufMfcc = outputBuf;
-            T * outputBufD1 = outputBuf + this->_m_numMfccFeats;
-            T * outputBufD2 = outputBufD1 + this->_m_numMfccFeats;
-            const uint32_t ptrIncr = this->_m_numMfccFeats * 2;  /* (3 vectors - 1 vector) */
+            T * outputBufD1 = outputBuf + this->m_numMfccFeats;
+            T * outputBufD2 = outputBufD1 + this->m_numMfccFeats;
+            const uint32_t ptrIncr = this->m_numMfccFeats * 2;  /* (3 vectors - 1 vector) */
 
             const float minVal = std::numeric_limits<T>::min();
             const float maxVal = std::numeric_limits<T>::max();
 
             /* We need to do a transpose while copying and concatenating
              * the tensor. */
-            for (uint32_t j = 0; j < this->_m_numFeatVectors; ++j) {
-                for (uint32_t i = 0; i < this->_m_numMfccFeats; ++i) {
+            for (uint32_t j = 0; j < this->m_numFeatVectors; ++j) {
+                for (uint32_t i = 0; i < this->m_numMfccFeats; ++i) {
                     *outputBufMfcc++ = static_cast<T>(this->GetQuantElem(
-                            this->_m_mfccBuf(i, j), quantScale,
+                            this->m_mfccBuf(i, j), quantScale,
                             quantOffset, minVal, maxVal));
                     *outputBufD1++ = static_cast<T>(this->GetQuantElem(
-                            this->_m_delta1Buf(i, j), quantScale,
+                            this->m_delta1Buf(i, j), quantScale,
                             quantOffset, minVal, maxVal));
                     *outputBufD2++ = static_cast<T>(this->GetQuantElem(
-                            this->_m_delta2Buf(i, j), quantScale,
+                            this->m_delta2Buf(i, j), quantScale,
                             quantOffset, minVal, maxVal));
                 }
                 outputBufMfcc += ptrIncr;
@@ -182,18 +182,18 @@
         }
 
     private:
-        Wav2LetterMFCC      _m_mfcc;            /* MFCC instance. */
+        Wav2LetterMFCC      m_mfcc;            /* MFCC instance. */
 
         /* Actual buffers to be populated. */
-        Array2d<float>      _m_mfccBuf;         /* Contiguous buffer 1D: MFCC */
-        Array2d<float>      _m_delta1Buf;       /* Contiguous buffer 1D: Delta 1 */
-        Array2d<float>      _m_delta2Buf;       /* Contiguous buffer 1D: Delta 2 */
+        Array2d<float>      m_mfccBuf;         /* Contiguous buffer 1D: MFCC */
+        Array2d<float>      m_delta1Buf;       /* Contiguous buffer 1D: Delta 1 */
+        Array2d<float>      m_delta2Buf;       /* Contiguous buffer 1D: Delta 2 */
 
-        uint32_t            _m_windowLen;       /* Window length for MFCC. */
-        uint32_t            _m_windowStride;    /* Window stride len for MFCC. */
-        uint32_t            _m_numMfccFeats;    /* Number of MFCC features per window. */
-        uint32_t            _m_numFeatVectors;  /* Number of _m_numMfccFeats. */
-        AudioWindow         _m_window;          /* Sliding window. */
+        uint32_t            m_windowLen;       /* Window length for MFCC. */
+        uint32_t            m_windowStride;    /* Window stride len for MFCC. */
+        uint32_t            m_numMfccFeats;    /* Number of MFCC features per window. */
+        uint32_t            m_numFeatVectors;  /* Number of feature vectors (of m_numMfccFeats each). */
+        AudioWindow         m_window;          /* Sliding window. */
 
     };
 
diff --git a/source/use_case/kws_asr/src/DsCnnModel.cc b/source/use_case/kws_asr/src/DsCnnModel.cc
index b573a12..71d4ceb 100644
--- a/source/use_case/kws_asr/src/DsCnnModel.cc
+++ b/source/use_case/kws_asr/src/DsCnnModel.cc
@@ -29,23 +29,23 @@
 
 const tflite::MicroOpResolver& arm::app::DsCnnModel::GetOpResolver()
 {
-    return this->_m_opResolver;
+    return this->m_opResolver;
 }
 
 bool arm::app::DsCnnModel::EnlistOperations()
 {
-    this->_m_opResolver.AddAveragePool2D();
-    this->_m_opResolver.AddConv2D();
-    this->_m_opResolver.AddDepthwiseConv2D();
-    this->_m_opResolver.AddFullyConnected();
-    this->_m_opResolver.AddRelu();
-    this->_m_opResolver.AddSoftmax();
-    this->_m_opResolver.AddQuantize();
-    this->_m_opResolver.AddDequantize();
-    this->_m_opResolver.AddReshape();
+    this->m_opResolver.AddAveragePool2D();
+    this->m_opResolver.AddConv2D();
+    this->m_opResolver.AddDepthwiseConv2D();
+    this->m_opResolver.AddFullyConnected();
+    this->m_opResolver.AddRelu();
+    this->m_opResolver.AddSoftmax();
+    this->m_opResolver.AddQuantize();
+    this->m_opResolver.AddDequantize();
+    this->m_opResolver.AddReshape();
 
 #if defined(ARM_NPU)
-    if (kTfLiteOk == this->_m_opResolver.AddEthosU()) {
+    if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
         info("Added %s support to op resolver\n",
             tflite::GetString_ETHOSU());
     } else {
diff --git a/source/use_case/kws_asr/src/UseCaseHandler.cc b/source/use_case/kws_asr/src/UseCaseHandler.cc
index 0560e88..60c0fd2 100644
--- a/source/use_case/kws_asr/src/UseCaseHandler.cc
+++ b/source/use_case/kws_asr/src/UseCaseHandler.cc
@@ -67,8 +67,6 @@
      *                  object.
      * @param[in]       platform    reference to the hal platform object
      * @param[in]       results     vector of classification results to be displayed
-     * @param[in]       infTimeMs   inference time in milliseconds, if available
-     *                              Otherwise, this can be passed in as 0.
      * @return          true if successful, false otherwise
      **/
     static bool PresentInferenceResult(hal_platform& platform, std::vector<arm::app::kws::KwsResult>& results);
@@ -78,8 +76,6 @@
      *                  object.
      * @param[in]       platform    reference to the hal platform object
      * @param[in]       results     vector of classification results to be displayed
-     * @param[in]       infTimeMs   inference time in milliseconds, if available
-     *                              Otherwise, this can be passed in as 0.
      * @return          true if successful, false otherwise
      **/
     static bool PresentInferenceResult(hal_platform& platform, std::vector<arm::app::asr::AsrResult>& results);
@@ -291,8 +287,8 @@
     /**
      * @brief Performs the ASR pipeline.
      *
-     * @param ctx[in/out]   pointer to the application context object
-     * @param kwsOutput[in] struct containing pointer to audio data where ASR should begin
+     * @param[in,out] ctx   pointer to the application context object
+     * @param[in] kwsOutput struct containing pointer to audio data where ASR should begin
      *                      and how much data to process
      * @return bool         true if pipeline executed without failure
      */
diff --git a/source/use_case/kws_asr/src/Wav2LetterModel.cc b/source/use_case/kws_asr/src/Wav2LetterModel.cc
index 2114a3f..62245b9 100644
--- a/source/use_case/kws_asr/src/Wav2LetterModel.cc
+++ b/source/use_case/kws_asr/src/Wav2LetterModel.cc
@@ -29,18 +29,18 @@
 
 const tflite::MicroOpResolver& arm::app::Wav2LetterModel::GetOpResolver()
 {
-    return this->_m_opResolver;
+    return this->m_opResolver;
 }
 
 bool arm::app::Wav2LetterModel::EnlistOperations()
 {
-    this->_m_opResolver.AddConv2D();
-    this->_m_opResolver.AddMul();
-    this->_m_opResolver.AddMaximum();
-    this->_m_opResolver.AddReshape();
+    this->m_opResolver.AddConv2D();
+    this->m_opResolver.AddMul();
+    this->m_opResolver.AddMaximum();
+    this->m_opResolver.AddReshape();
 
 #if defined(ARM_NPU)
-    if (kTfLiteOk == this->_m_opResolver.AddEthosU()) {
+    if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
         info("Added %s support to op resolver\n",
             tflite::GetString_ETHOSU());
     } else {
diff --git a/source/use_case/kws_asr/src/Wav2LetterPostprocess.cc b/source/use_case/kws_asr/src/Wav2LetterPostprocess.cc
index e3c0c20..f2d9357 100644
--- a/source/use_case/kws_asr/src/Wav2LetterPostprocess.cc
+++ b/source/use_case/kws_asr/src/Wav2LetterPostprocess.cc
@@ -26,11 +26,11 @@
     Postprocess::Postprocess(const uint32_t contextLen,
                              const uint32_t innerLen,
                              const uint32_t blankTokenIdx)
-        :   _m_contextLen(contextLen),
-            _m_innerLen(innerLen),
-            _m_totalLen(2 * this->_m_contextLen + this->_m_innerLen),
-            _m_countIterations(0),
-            _m_blankTokenIdx(blankTokenIdx)
+        :   m_contextLen(contextLen),
+            m_innerLen(innerLen),
+            m_totalLen(2 * this->m_contextLen + this->m_innerLen),
+            m_countIterations(0),
+            m_blankTokenIdx(blankTokenIdx)
     {}
 
     bool Postprocess::Invoke(TfLiteTensor*  tensor,
@@ -50,7 +50,7 @@
         if (0 == elemSz) {
             printf_err("Tensor type not supported for post processing\n");
             return false;
-        } else if (elemSz * this->_m_totalLen > tensor->bytes) {
+        } else if (elemSz * this->m_totalLen > tensor->bytes) {
             printf_err("Insufficient number of tensor bytes\n");
             return false;
         }
@@ -82,7 +82,7 @@
             return false;
         }
 
-        if (static_cast<int>(this->_m_totalLen) !=
+        if (static_cast<int>(this->m_totalLen) !=
                              tensor->dims->data[axisIdx]) {
             printf_err("Unexpected tensor dimension for axis %d, \n",
                 tensor->dims->data[axisIdx]);
@@ -120,31 +120,31 @@
     {
         /* In this case, the "zero-ing" is quite simple as the region
          * to be zeroed sits in contiguous memory (row-major). */
-        const uint32_t eraseLen = strideSzBytes * this->_m_contextLen;
+        const uint32_t eraseLen = strideSzBytes * this->m_contextLen;
 
         /* Erase left context? */
-        if (this->_m_countIterations > 0) {
+        if (this->m_countIterations > 0) {
             /* Set output of each classification window to the blank token. */
             std::memset(ptrData, 0, eraseLen);
-            for (size_t windowIdx = 0; windowIdx < this->_m_contextLen; windowIdx++) {
-                ptrData[windowIdx*strideSzBytes + this->_m_blankTokenIdx] = 1;
+            for (size_t windowIdx = 0; windowIdx < this->m_contextLen; windowIdx++) {
+                ptrData[windowIdx*strideSzBytes + this->m_blankTokenIdx] = 1;
             }
         }
 
         /* Erase right context? */
         if (false == lastIteration) {
-            uint8_t * rightCtxPtr = ptrData + (strideSzBytes * (this->_m_contextLen + this->_m_innerLen));
+            uint8_t * rightCtxPtr = ptrData + (strideSzBytes * (this->m_contextLen + this->m_innerLen));
             /* Set output of each classification window to the blank token. */
             std::memset(rightCtxPtr, 0, eraseLen);
-            for (size_t windowIdx = 0; windowIdx < this->_m_contextLen; windowIdx++) {
-                rightCtxPtr[windowIdx*strideSzBytes + this->_m_blankTokenIdx] = 1;
+            for (size_t windowIdx = 0; windowIdx < this->m_contextLen; windowIdx++) {
+                rightCtxPtr[windowIdx*strideSzBytes + this->m_blankTokenIdx] = 1;
             }
         }
 
         if (lastIteration) {
-            this->_m_countIterations = 0;
+            this->m_countIterations = 0;
         } else {
-            ++this->_m_countIterations;
+            ++this->m_countIterations;
         }
 
         return true;
diff --git a/source/use_case/kws_asr/src/Wav2LetterPreprocess.cc b/source/use_case/kws_asr/src/Wav2LetterPreprocess.cc
index 8251396..d3f3579 100644
--- a/source/use_case/kws_asr/src/Wav2LetterPreprocess.cc
+++ b/source/use_case/kws_asr/src/Wav2LetterPreprocess.cc
@@ -32,18 +32,18 @@
         const uint32_t  windowLen,
         const uint32_t  windowStride,
         const uint32_t  numMfccVectors):
-            _m_mfcc(numMfccFeatures, windowLen),
-            _m_mfccBuf(numMfccFeatures, numMfccVectors),
-            _m_delta1Buf(numMfccFeatures, numMfccVectors),
-            _m_delta2Buf(numMfccFeatures, numMfccVectors),
-            _m_windowLen(windowLen),
-            _m_windowStride(windowStride),
-            _m_numMfccFeats(numMfccFeatures),
-            _m_numFeatVectors(numMfccVectors),
-            _m_window()
+            m_mfcc(numMfccFeatures, windowLen),
+            m_mfccBuf(numMfccFeatures, numMfccVectors),
+            m_delta1Buf(numMfccFeatures, numMfccVectors),
+            m_delta2Buf(numMfccFeatures, numMfccVectors),
+            m_windowLen(windowLen),
+            m_windowStride(windowStride),
+            m_numMfccFeats(numMfccFeatures),
+            m_numFeatVectors(numMfccVectors),
+            m_window()
     {
         if (numMfccFeatures > 0 && windowLen > 0) {
-            this->_m_mfcc.Init();
+            this->m_mfcc.Init();
         }
     }
 
@@ -52,45 +52,45 @@
                 const uint32_t  audioDataLen,
                 TfLiteTensor*   tensor)
     {
-        this->_m_window = SlidingWindow<const int16_t>(
+        this->m_window = SlidingWindow<const int16_t>(
                             audioData, audioDataLen,
-                            this->_m_windowLen, this->_m_windowStride);
+                            this->m_windowLen, this->m_windowStride);
 
         uint32_t mfccBufIdx = 0;
 
-        std::fill(_m_mfccBuf.begin(), _m_mfccBuf.end(), 0.f);
-        std::fill(_m_delta1Buf.begin(), _m_delta1Buf.end(), 0.f);
-        std::fill(_m_delta2Buf.begin(), _m_delta2Buf.end(), 0.f);
+        std::fill(m_mfccBuf.begin(), m_mfccBuf.end(), 0.f);
+        std::fill(m_delta1Buf.begin(), m_delta1Buf.end(), 0.f);
+        std::fill(m_delta2Buf.begin(), m_delta2Buf.end(), 0.f);
 
         /* While we can slide over the window. */
-        while (this->_m_window.HasNext()) {
-            const int16_t*  mfccWindow = this->_m_window.Next();
+        while (this->m_window.HasNext()) {
+            const int16_t*  mfccWindow = this->m_window.Next();
             auto mfccAudioData = std::vector<int16_t>(
                                         mfccWindow,
-                                        mfccWindow + this->_m_windowLen);
-            auto mfcc = this->_m_mfcc.MfccCompute(mfccAudioData);
-            for (size_t i = 0; i < this->_m_mfccBuf.size(0); ++i) {
-                this->_m_mfccBuf(i, mfccBufIdx) = mfcc[i];
+                                        mfccWindow + this->m_windowLen);
+            auto mfcc = this->m_mfcc.MfccCompute(mfccAudioData);
+            for (size_t i = 0; i < this->m_mfccBuf.size(0); ++i) {
+                this->m_mfccBuf(i, mfccBufIdx) = mfcc[i];
             }
             ++mfccBufIdx;
         }
 
         /* Pad MFCC if needed by adding MFCC for zeros. */
-        if (mfccBufIdx != this->_m_numFeatVectors) {
-            std::vector<int16_t> zerosWindow = std::vector<int16_t>(this->_m_windowLen, 0);
-            std::vector<float> mfccZeros = this->_m_mfcc.MfccCompute(zerosWindow);
+        if (mfccBufIdx != this->m_numFeatVectors) {
+            std::vector<int16_t> zerosWindow = std::vector<int16_t>(this->m_windowLen, 0);
+            std::vector<float> mfccZeros = this->m_mfcc.MfccCompute(zerosWindow);
 
-            while (mfccBufIdx != this->_m_numFeatVectors) {
-                memcpy(&this->_m_mfccBuf(0, mfccBufIdx),
-                       mfccZeros.data(), sizeof(float) * _m_numMfccFeats);
+            while (mfccBufIdx != this->m_numFeatVectors) {
+                memcpy(&this->m_mfccBuf(0, mfccBufIdx),
+                       mfccZeros.data(), sizeof(float) * m_numMfccFeats);
                 ++mfccBufIdx;
             }
         }
 
         /* Compute first and second order deltas from MFCCs. */
-        this->ComputeDeltas(this->_m_mfccBuf,
-                            this->_m_delta1Buf,
-                            this->_m_delta2Buf);
+        this->ComputeDeltas(this->m_mfccBuf,
+                            this->m_delta1Buf,
+                            this->m_delta2Buf);
 
         /* Normalise. */
         this->Normalise();
@@ -206,9 +206,9 @@
 
     void Preprocess::Normalise()
     {
-        Preprocess::NormaliseVec(this->_m_mfccBuf);
-        Preprocess::NormaliseVec(this->_m_delta1Buf);
-        Preprocess::NormaliseVec(this->_m_delta2Buf);
+        Preprocess::NormaliseVec(this->m_mfccBuf);
+        Preprocess::NormaliseVec(this->m_delta1Buf);
+        Preprocess::NormaliseVec(this->m_delta2Buf);
     }
 
     float Preprocess::GetQuantElem(