MLECO-2354 MLECO-2355 MLECO-2356: Moving noise reduction to public repository

* Use RNNoise model from PMZ
* Add Noise reduction use-case

Signed-off-by: Richard Burton <richard.burton@arm.com>
Change-Id: Ia8cc7ef102e22a5ff8bfbd3833594a4905a66057
diff --git a/Readme.md b/Readme.md
index f3a2f6c..48a1773 100644
--- a/Readme.md
+++ b/Readme.md
@@ -33,9 +33,10 @@
 |  [Keyword spotting(KWS)](./docs/use_cases/kws.md)             | Recognize the presence of a key word in a recording | [DS-CNN-L](https://github.com/ARM-software/ML-zoo/tree/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8) |
 |  [Automated Speech Recognition(ASR)](./docs/use_cases/asr.md) | Transcribe words in a recording | [Wav2Letter](https://github.com/ARM-software/ML-zoo/tree/1a92aa08c0de49a7304e0a7f3f59df6f4fd33ac8/models/speech_recognition/wav2letter/tflite_int8) |
 |  [KWS and ASR](./docs/use_cases/kws_asr.md) | Utilise Cortex-M and Ethos-U to transcribe words in a recording after a keyword was spotted | [DS-CNN-L](https://github.com/ARM-software/ML-zoo/tree/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8)  [Wav2Letter](https://github.com/ARM-software/ML-zoo/tree/1a92aa08c0de49a7304e0a7f3f59df6f4fd33ac8/models/speech_recognition/wav2letter/tflite_int8) |
-|  [Anomaly Detection](./docs/use_cases/ad.md)                 | Detecting abnormal behavior based on a sound recording of a machine | [Anomaly Detection](https://github.com/ARM-software/ML-zoo/tree/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8/)|
-[Visual Wake Word](./docs/use_cases/visual_wake_word.md)                 | Recognize if person is present in a given image | [Visual Wake Word](https://github.com/ARM-software/ML-zoo/tree/7dd3b16bb84007daf88be8648983c07f3eb21140/models/visual_wake_words/micronet_vww4/tflite_int8/vww4_128_128_INT8.tflite)|
-| [Generic inference runner](./docs/use_cases/inference_runner.md) | Code block allowing you to develop your own use case for Ethos-U NPU | Your custom model |
+|  [Anomaly Detection](./docs/use_cases/ad.md)                 | Detecting abnormal behavior based on a sound recording of a machine | [MicroNet](https://github.com/ARM-software/ML-zoo/tree/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8/)|
+|  [Visual Wake Word](./docs/use_cases/visual_wake_word.md)                 | Recognize if person is present in a given image | [MicroNet](https://github.com/ARM-software/ML-zoo/tree/7dd3b16bb84007daf88be8648983c07f3eb21140/models/visual_wake_words/micronet_vww4/tflite_int8/vww4_128_128_INT8.tflite)|
+|  [Noise Reduction](./docs/use_cases/noise_reduction.md)        | Remove noise from audio while keeping speech intact | [RNNoise](https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8)   |
+|  [Generic inference runner](./docs/use_cases/inference_runner.md) | Code block allowing you to develop your own use case for Ethos-U NPU | Your custom model |
 
 The above use cases implement end-to-end ML flow including data pre-processing and post-processing. They will allow you
 to investigate embedded software stack, evaluate performance of the networks running on Cortex-M55 CPU and Ethos-U NPU
@@ -195,3 +196,4 @@
 | [Keyword Spotting Samples](./resources/kws/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
 | [Keyword Spotting and Automatic Speech Recognition Samples](./resources/kws_asr/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
 | [Visual Wake Word Samples](./resources/vww/samples/files.md) | [Creative Commons Attribution 1.0](./resources/LICENSE_CC_1.0.txt) | <https://www.pexels.com> |
+| [Noise Reduction Samples](./resources/noise_reduction/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <https://datashare.ed.ac.uk/handle/10283/2791/> | 
diff --git a/docs/documentation.md b/docs/documentation.md
index a186fbb..0642075 100644
--- a/docs/documentation.md
+++ b/docs/documentation.md
@@ -206,11 +206,12 @@
 
 The models used in the use-cases implemented in this project can be downloaded from: [Arm ML-Zoo](https://github.com/ARM-software/ML-zoo).
 
-- [Mobilenet V2](https://github.com/ARM-software/ML-zoo/tree/e0aa361b03c738047b9147d1a50e3f2dcb13dbcb/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8).
-- [DS-CNN](https://github.com/ARM-software/ML-zoo/tree/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b//models/keyword_spotting/ds_cnn_large/tflite_clustered_int8).
-- [Wav2Letter](https://github.com/ARM-software/ML-zoo/tree/1a92aa08c0de49a7304e0a7f3f59df6f4fd33ac8/models/speech_recognition/wav2letter/tflite_pruned_int8).
-- [Anomaly Detection](https://github.com/ARM-software/ML-zoo/tree/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8).
-- [Visual Wake Word](https://github.com/ARM-software/ML-zoo/raw/7dd3b16bb84007daf88be8648983c07f3eb21140/models/visual_wake_words/micronet_vww4/tflite_int8/vww4_128_128_INT8.tflite).
+- [Mobilenet V2](https://github.com/ARM-software/ML-zoo/tree/e0aa361b03c738047b9147d1a50e3f2dcb13dbcb/models/image_classification/mobilenet_v2_1.0_224/tflite_int8)
+- [DS-CNN](https://github.com/ARM-software/ML-zoo/tree/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b//models/keyword_spotting/ds_cnn_large/tflite_clustered_int8)
+- [Wav2Letter](https://github.com/ARM-software/ML-zoo/tree/1a92aa08c0de49a7304e0a7f3f59df6f4fd33ac8/models/speech_recognition/wav2letter/tflite_pruned_int8)
+- [MicroNet for Anomaly Detection](https://github.com/ARM-software/ML-zoo/tree/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8)
+- [MicroNet for Visual Wake Word](https://github.com/ARM-software/ML-zoo/raw/7dd3b16bb84007daf88be8648983c07f3eb21140/models/visual_wake_words/micronet_vww4/tflite_int8/vww4_128_128_INT8.tflite)
+- [RNNoise](https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/rnnoise_INT8.tflite)
 
 When using the *Ethos-U* NPU backend, the Vela compiler optimizes the NN model. If the model is not
 optimized but is supported by TensorFlow Lite Micro, it falls back to the CPU for execution.
diff --git a/docs/quick_start.md b/docs/quick_start.md
index 3488447..7613912 100644
--- a/docs/quick_start.md
+++ b/docs/quick_start.md
@@ -102,6 +102,26 @@
     --output ./resources_downloaded/kws_asr/kws/ifm0.npy
 curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/testing_output/Identity/0.npy \
     --output ./resources_downloaded/kws_asr/kws/ofm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/rnnoise_INT8.tflite \
+    --output ./resources_downloaded/noise_reduction/rnnoise_INT8.tflite
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_input/main_input_int8/0.npy \
+    --output ./resources_downloaded/noise_reduction/ifm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_input/vad_gru_prev_state_int8/0.npy \
+    --output ./resources_downloaded/noise_reduction/ifm1.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_input/noise_gru_prev_state_int8/0.npy \
+    --output ./resources_downloaded/noise_reduction/ifm2.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_input/denoise_gru_prev_state_int8/0.npy \
+    --output ./resources_downloaded/noise_reduction/ifm3.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_int8/0.npy \
+    --output ./resources_downloaded/noise_reduction/ofm0.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_1_int8/0.npy \
+    --output ./resources_downloaded/noise_reduction/ofm1.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_2_int8/0.npy \
+    --output ./resources_downloaded/noise_reduction/ofm2.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_3_int8/0.npy \
+    --output ./resources_downloaded/noise_reduction/ofm3.npy
+curl -L https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_4_int8/0.npy \
+    --output ./resources_downloaded/noise_reduction/ofm4.npy
 curl -L https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/dnn_small/tflite_int8/dnn_s_quantized.tflite \
     --output ./resources_downloaded/inference_runner/dnn_s_quantized.tflite
 
@@ -217,6 +237,38 @@
     --output-dir=resources_downloaded/ad
 mv resources_downloaded/ad/ad_medium_int8_vela.tflite resources_downloaded/ad/ad_medium_int8_vela_Y256.tflite
 
+. resources_downloaded/env/bin/activate && vela resources_downloaded/vww/vww4_128_128_INT8.tflite \
+    --accelerator-config=ethos-u55-128 \
+    --optimise Performance --config scripts/vela/default_vela.ini \
+    --memory-mode=Shared_Sram \
+    --system-config=Ethos_U55_High_End_Embedded \
+    --output-dir=resources_downloaded/vww
+mv resources_downloaded/vww/vww4_128_128_INT8_vela.tflite resources_downloaded/vww/vww4_128_128_INT8_vela_H128.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/vww/vww4_128_128_INT8.tflite \
+    --accelerator-config=ethos-u65-256 \
+    --optimise Performance --config scripts/vela/default_vela.ini \
+    --memory-mode=Dedicated_Sram \
+    --system-config=Ethos_U65_High_End \
+    --output-dir=resources_downloaded/vww
+mv resources_downloaded/vww/vww4_128_128_INT8_vela.tflite resources_downloaded/vww/vww4_128_128_INT8_vela_Y256.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/noise_reduction/rnnoise_INT8.tflite \
+    --accelerator-config=ethos-u55-128 \
+    --optimise Performance --config scripts/vela/default_vela.ini \
+    --memory-mode=Shared_Sram \
+    --system-config=Ethos_U55_High_End_Embedded \
+    --output-dir=resources_downloaded/noise_reduction
+mv resources_downloaded/noise_reduction/rnnoise_INT8_vela.tflite resources_downloaded/noise_reduction/rnnoise_INT8_vela_H128.tflite
+
+. resources_downloaded/env/bin/activate && vela resources_downloaded/noise_reduction/rnnoise_INT8.tflite \
+    --accelerator-config=ethos-u65-256 \
+    --optimise Performance --config scripts/vela/default_vela.ini \
+    --memory-mode=Dedicated_Sram \
+    --system-config=Ethos_U65_High_End \
+    --output-dir=resources_downloaded/noise_reduction
+mv resources_downloaded/noise_reduction/rnnoise_INT8_vela.tflite resources_downloaded/noise_reduction/rnnoise_INT8_vela_Y256.tflite
+
 mkdir cmake-build-mps3-sse-300-gnu-release && cd cmake-build-mps3-sse-300-gnu-release
 
 cmake .. \
diff --git a/docs/sections/arm_virtual_hardware.md b/docs/sections/arm_virtual_hardware.md
index 2f05525..ca60a28 100644
--- a/docs/sections/arm_virtual_hardware.md
+++ b/docs/sections/arm_virtual_hardware.md
@@ -23,5 +23,5 @@
 
 You can find more information about Arm Virtual Hardware [here](https://arm-software.github.io/VHT/main/overview/html/index.html).
 
-Once you have access to the AWS instance, we recommend starting from the [quick start guide](../quick_start.md) in order to get familiar
+Once you have access to the AWS instance, we recommend starting from the [quick start guide](../quick_start.md#Quick-start-example-ML-application) in order to get familiar
 with the ml-embedded-evaluation-kit. Note that on the AWS instance, the FVP is available under `/opt/FVP_Corstone_SSE-300`.
diff --git a/docs/use_cases/noise_reduction.md b/docs/use_cases/noise_reduction.md
new file mode 100644
index 0000000..e6df89c
--- /dev/null
+++ b/docs/use_cases/noise_reduction.md
@@ -0,0 +1,529 @@
+# Noise Reduction Code Sample
+
+- [Noise Reduction Code Sample](#noise-reduction-code-sample)
+  - [Introduction](#introduction)
+  - [How the default neural network model works](#how-the-default-neural-network-model-works)
+  - [Post-processing](#post_processing)
+    - [Dumping of memory contents from the Fixed Virtual Platform](#dumping-of-memory-contents-from-the-fixed-virtual-platform)
+  - [Dumping post-processed results for all inferences](#dumping-post_processed-results-for-all-inferences)
+  - [Prerequisites](#prerequisites)
+  - [Building the code sample application from sources](#building-the-code-sample-application-from-sources)
+    - [Build options](#build-options)
+    - [Build process](#build-process)
+    - [Add custom input](#add-custom-input)
+    - [Add custom model](#add-custom-model)
+  - [Setting up and running Ethos-U NPU code sample](#setting-up-and-running-ethos_u-npu-code-sample)
+    - [Setting up the Ethos-U NPU Fast Model](#setting-up-the-ethos_u-npu-fast-model)
+    - [Starting Fast Model simulation](#starting-fast-model-simulation)
+    - [Running Noise Reduction](#running-noise-reduction)
+
+## Introduction
+
+This document describes the process of setting up and running the Arm® Ethos™-U NPU Noise Reduction
+example.
+
+Use case code is stored in the following directory: [source/use_case/noise_reduction](../../source/use_case/noise_reduction).
+
+## How the default neural network model works
+
+Instead of tackling the full "noisy audio in, clean audio out" problem, a simpler formulation is
+used. The audio is split into frequency bands (22 in the original paper
+[RNNoise: Learning Noise Suppression](https://jmvalin.ca/demo/rnnoise/)), based on a perceptual
+scale such as the "Mel scale" or "Bark scale", and the energy of each band is calculated. Dividing
+the audio into bands this way reflects what is important to the human ear.
+
+When we have a noisy audio clip, the model takes the energy levels of these different bands as
+input. The model then tries to predict a value (called a gain), to apply to each frequency band. It
+is expected that applying this gain to each band brings the audio back to what a "clean" audio
+sample would have been like. It is like a 22-band equalizer, where we quickly adjust the level of
+each band so that the noise is removed. However, the signal, or speech, still passes through.
+
+In addition to the 22 band values calculated, the input features also include:
+
+- First and second derivatives of the first 6 coefficients,
+- The pitch period (1/frequency),
+- The pitch gain for six bands,
+- A value used to detect if speech is occurring.
+
+This provides 42 feature inputs, `22 + 6 + 6 + 1 + 6 + 1 = 42`, and the model produces `22` (gain
+values) outputs.
+
+> **Note:** The model also has a second output that predicts if speech is occurring in the given
+> sample.
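+
The feature arithmetic above can be sketched as a small Python check (a minimal illustration; the names are ours, not taken from the RNNoise implementation):

```python
# Breakdown of the 42 input features described above (names are
# illustrative, not from the RNNoise code).
FEATURE_COUNTS = {
    "band_energies": 22,       # one energy value per frequency band
    "first_derivatives": 6,    # first derivatives of the first 6 coefficients
    "second_derivatives": 6,   # second derivatives of the first 6 coefficients
    "pitch_period": 1,         # 1 / frequency
    "pitch_gains": 6,          # pitch gain for six bands
    "speech_indicator": 1,     # value used to detect if speech is occurring
}

NUM_INPUT_FEATURES = sum(FEATURE_COUNTS.values())   # 22 + 6 + 6 + 1 + 6 + 1 = 42
NUM_OUTPUT_GAINS = FEATURE_COUNTS["band_energies"]  # the model outputs 22 gain values
```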
+
+The pre-processing works in a windowed fashion, on 20ms of the audio clip at a time, with a stride
+of 10ms. For example, one second of audio gives `1000ms/10ms = 100` windows of features and,
+therefore, an input shape of `100x42` to the model. The output shape of the model is then
+`100x22`, representing the gain values to apply to each of the 100 windows.
+
+These output gain values can then be applied to each corresponding window of the noisy audio clip,
+producing a cleaner output.
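+
The shapes involved can be sketched as follows, using the `1000ms/10ms` arithmetic above (variable and function names are ours, for illustration only):

```python
NUM_FEATURES = 42  # input features per window
NUM_BANDS = 22     # gain values per window
STRIDE_MS = 10     # stride between consecutive windows

def num_windows(clip_ms):
    """Number of feature windows for a clip, per the 1000ms/10ms arithmetic above."""
    return clip_ms // STRIDE_MS

windows = num_windows(1000)            # 100 windows for one second of audio
input_shape = (windows, NUM_FEATURES)  # (100, 42) fed to the model
output_shape = (windows, NUM_BANDS)    # (100, 22) gains produced

def apply_gains(band_energies, gains):
    """Apply the predicted per-band gains to one window of band energies."""
    return [e * g for e, g in zip(band_energies, gains)]
```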
+
+For more information, please refer to the original paper:
+[A Hybrid DSP/Deep Learning Approach to Real-Time Full-Band Speech Enhancement](https://arxiv.org/pdf/1709.08243.pdf)
+
+## Post-processing
+
+After each inference, the output of the model is passed to post-processing code, which applies the
+predicted gain values to generate audio with the noise removed.
+
+To verify the outputs of the model after post-processing, you must manually use an
+[offline script](../../scripts/py/rnnoise_dump_extractor.py) to convert the post-processed output
+into a WAV file. This script takes a dump file as input and saves the denoised WAV file to disk.
+The following is an example of how to call the script from the command line after running the
+use case and [selecting to dump memory contents](#dumping-post_processed-results-for-all-inferences).
+
+```commandline
+python scripts/py/rnnoise_dump_extractor.py --dump_file <path_to_dump_file.bin> --output_dir <path_to_output_folder>
+```
+
+The application for this use case has been written to dump the post-processed output to the address pointed to by
+the CMake parameter `noise_reduction_MEM_DUMP_BASE_ADDR`. The default value is set to `0x80000000`.
+
+### Dumping of memory contents from the Fixed Virtual Platform
+
+The Fixed Virtual Platform supports dumping memory contents to a file. This can be done by
+specifying command-line arguments when starting the FVP executable. For example:
+
+```commandline
+$ FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-noise_reduction.axf \
+    --dump cpu0=output.bin@Memory:0x80000000,0x100000
+```
+
+This command dumps 1 MiB of data from address `0x80000000` to the file `output.bin`.
+
+### Dumping post-processed results for all inferences
+
+The Noise Reduction application uses the memory address specified by
+`noise_reduction_MEM_DUMP_BASE_ADDR` as a buffer to store post-processed results from all inferences. 
+The maximum size of this buffer is set by the parameter
+`noise_reduction_MEM_DUMP_LEN` which defaults to 1 MiB.
+
+Logging information is generated for every inference run performed. Each line corresponds to the post-processed
+result of that inference being written to a certain location in memory.
+
+For example:
+
+```log
+INFO - Audio Clip dump header info (20 bytes) written to 0x80000000
+INFO - Inference 1/136
+INFO - Copied 960 bytes to 0x80000014
+...
+INFO - Inference 136/136
+INFO - Copied 960 bytes to 0x8001fa54
+```
+
+In the preceding output, the dump starts at the default address of `0x80000000`, where some header
+information is written. After the first inference, 960 bytes (480 INT16 values) are written to
+`0x80000014`, the first address after the dumped header. Each subsequent inference then writes
+another 960 bytes to the next address, and so on until all inferences are complete.
+
+When consolidating all inference outputs for an entire audio clip, the application output should report:
+
+```log
+INFO - Output memory dump of 130580 bytes written at address 0x80000000
+```
+
+The application output log states that there are 130580 bytes worth of valid data ready to be read
+from `0x80000000`. If the FVP was started with the `--dump` option, then the output file is created
+when the FVP instance exits.
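+
The dump layout described above (a 20-byte header followed by 960 bytes per inference) can be checked with a short sketch. This is an illustration of the arithmetic only, not a replacement for `rnnoise_dump_extractor.py`; the little-endian INT16 sample encoding is an assumption:

```python
import struct

HEADER_BYTES = 20          # "Audio Clip dump header info (20 bytes)"
BYTES_PER_INFERENCE = 960  # 480 INT16 values written per inference

def expected_dump_size(num_inferences):
    """Total bytes of valid data the application reports for a complete dump."""
    return HEADER_BYTES + num_inferences * BYTES_PER_INFERENCE

def frame_offsets(num_inferences):
    """Offset of each inference's data within the dump, relative to the base address."""
    return [HEADER_BYTES + i * BYTES_PER_INFERENCE for i in range(num_inferences)]

def samples_for_inference(dump, index):
    """Extract the 480 INT16 samples for one inference (assumes little-endian)."""
    start = HEADER_BYTES + index * BYTES_PER_INFERENCE
    return struct.unpack("<480h", dump[start:start + BYTES_PER_INFERENCE])
```

For the 136-inference example above, `expected_dump_size(136)` gives `130580`, matching the application log, and the first frame begins at offset 20 (`0x14`), matching the `0x80000014` address.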
+
+## Prerequisites
+
+See [Prerequisites](../documentation.md#prerequisites)
+
+## Building the code sample application from sources
+
+### Build options
+
+In addition to the build options already specified in the main documentation, the noise reduction
+use case adds the following:
+
+- `noise_reduction_MODEL_TFLITE_PATH` - The path to the NN model file in *TFLite* format. The model
+  is processed and is included in the application axf file. The default value points to one of the
+  delivered set of models. Note that the parameter
+  `ETHOS_U_NPU_ENABLED` must be aligned with the chosen model. Therefore:
+  - if `ETHOS_U_NPU_ENABLED` is set to `On` or `1`, we assume that the NN model is optimized. The
+    model naturally falls back to the Arm® Cortex®-M CPU if an unoptimized model is supplied.
+  - if `ETHOS_U_NPU_ENABLED` is set to `Off` or `0`, then we assume that the NN model is unoptimized.
+    In this case, supplying an optimized model results in a runtime error.
+
+- `noise_reduction_FILE_PATH`: The path to the directory containing WAV files, or a path to single
+  WAV file, to be used in the application. The default value points to the
+  `resources/noise_reduction/samples` folder containing the delivered set of audio clips.
+
+- `noise_reduction_AUDIO_RATE`: The input data sampling rate. Each audio file from `noise_reduction_FILE_PATH` is 
+  preprocessed during the build to match the NN model input requirements. The default value is `48000`.
+
+- `noise_reduction_AUDIO_MONO`: If set to `ON`, then the audio data is converted to mono. The default value is `ON`.
+
+- `noise_reduction_AUDIO_OFFSET`: The offset, in seconds, from which to start loading the audio
+  data. The default value is set to `0`.
+
+- `noise_reduction_AUDIO_DURATION`: The length of the audio data to be used in the application in seconds. 
+  The default is `0`, meaning that the whole audio file is used.
+
+- `noise_reduction_AUDIO_MIN_SAMPLES`: Minimum number of samples required by the network model. If the audio clip is shorter than
+  this number, then it is padded with zeros. The default value is `480`.
+
+- `noise_reduction_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for
+  the neural network model. By default, it is set to 2 MiB.
+
+To **ONLY** build a `noise_reduction` example application, add `-DUSE_CASE_BUILD=noise_reduction`
+  (as specified in [Building](../documentation.md#Building)) to the `cmake` command line.
+
+### Build process
+
+> **Note:** This section describes the process for configuring the build for `MPS3: SSE-300`. To
+> configure a different target platform, please see the [Building](../documentation.md#Building)
+> section.
+
+To **only** build the `noise_reduction` example, create a build directory, and then navigate inside.
+For example:
+
+```commandline
+mkdir build_noise_reduction && cd build_noise_reduction
+```
+
+On Linux, when providing only the mandatory arguments for CMake configuration, use the following
+command to build the Noise Reduction application to run on the *Ethos-U55* Fast Model:
+
+```commandline
+cmake ../ -DUSE_CASE_BUILD=noise_reduction
+```
+
+To configure a build that can be debugged using Arm DS, we specify the build type as `Debug` and use
+the `Arm Compiler` toolchain file:
+
+```commandline
+cmake .. \
+    -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-armclang.cmake \
+    -DCMAKE_BUILD_TYPE=Debug \
+    -DUSE_CASE_BUILD=noise_reduction
+```
+
+For more notes, please refer to:
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
+- [Using Arm Compiler](../sections/building.md#using-arm-compiler)
+- [Configuring the build for simple-platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm Fast Model Tools](../sections/building.md#working-with-model-debugger-from-arm-fast-model-tools)
+- [Building for different Ethos-U variants](../sections/building.md#building-for-different-ethos_u-npu-variants)
+
+> **Note:** If you are rebuilding with changed parameter values, it is highly advised that you
+> clean the build directory and rerun the CMake command.
+
+If the CMake command is successful, then build the application as follows:
+
+```commandline
+make -j4
+```
+
+> **Note:** To see compilation and link details, add `VERBOSE=1`.
+
+The build results are placed under the `build/bin` folder. For example:
+
+```tree
+bin
+ ├── ethos-u-noise_reduction.axf
+ ├── ethos-u-noise_reduction.htm
+ ├── ethos-u-noise_reduction.map
+ ├── images-noise_reduction.txt
+ └── sectors
+      └── noise_reduction
+           ├── dram.bin
+           └── itcm.bin
+```
+
+Based on the preceding output, the files contain the following information:
+
+- `ethos-u-noise_reduction.axf`: The built application binary for the noise reduction use case.
+
+- `ethos-u-noise_reduction.map`: Information from building the application (for example, the
+  libraries used, what was optimized, and the location of objects).
+
+- `ethos-u-noise_reduction.htm`: A human readable file containing the call graph of application
+  functions.
+
+- `sectors/`: This folder contains the built application, which is split into files for loading into
+  different FPGA memory regions.
+
+- `images-noise_reduction.txt`: Tells the FPGA which memory regions to use for loading the
+  binaries in the `sectors/...` folder.
+
+### Add custom input
+
+To run with inputs different to the ones supplied, the parameter `noise_reduction_FILE_PATH` can be
+pointed to a WAV file, or a directory containing WAV files. Once you have a directory with WAV files, 
+run the following command:
+
+```commandline
+cmake .. \
+    -DUSE_CASE_BUILD=noise_reduction \
+    -Dnoise_reduction_FILE_PATH=/path/to/custom/wav_files
+```
+
+### Add custom model
+
+The application performs inference using the model pointed to by the CMake parameter
+`noise_reduction_MODEL_TFLITE_PATH`.
+
+> **Note:** If you want to run the model using *Ethos-U* ensure that your custom model has been
+> run through the Vela compiler successfully before continuing.
+
+For further information: [Optimize model with Vela compiler](../sections/building.md#optimize-custom-model-with-vela-compiler).
+
+An example:
+
+```commandline
+cmake .. \
+    -Dnoise_reduction_MODEL_TFLITE_PATH=<path/to/custom_model_after_vela.tflite> \
+    -DUSE_CASE_BUILD=noise_reduction
+```
+
+> **Note:** Changing the neural network model often also requires the pre-processing
+> implementation to be changed. Please refer to:
+> [How the default neural network model works](#how-the-default-neural-network-model-works).
+
+> **Note:** Before re-running the CMake command, clean the build directory.
+
+The `.tflite` model file, which is pointed to by `noise_reduction_MODEL_TFLITE_PATH`, is converted
+to C++ files during the CMake configuration stage. It is then compiled into the application and
+used to perform inference.
+
+To see which model path was used, inspect the configuration stage log:
+
+```log
+-- User option noise_reduction_MODEL_TFLITE_PATH is set to <path/to/custom_model_after_vela.tflite>
+...
+-- Using <path/to/custom_model_after_vela.tflite>
+++ Converting custom_model_after_vela.tflite to custom_model_after_vela.tflite.cc
+-- Generating labels file from <path/to/labels_custom_model.txt>
+-- writing to <path/to/build/generated/src/Labels.cc>
+...
+```
+
+After compiling, your custom model replaces the default one in the application.
+
+## Setting up and running Ethos-U NPU code sample
+
+### Setting up the Ethos-U NPU Fast Model
+
+The FVP is available publicly from [Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+
+For the *Ethos-U* evaluation, please download the MPS3 based version of the Arm® *Corstone™-300* model that contains *Cortex-M55*
+and offers a choice of the *Ethos-U55* and *Ethos-U65* processors.
+
+To install the FVP:
+
+- Unpack the archive.
+
+- Run the install script in the extracted package:
+
+```commandline
+$./FVP_Corstone_SSE-300.sh
+```
+
+- Follow the instructions to install the FVP to your required location.
+
+### Starting Fast Model simulation
+
+Once the building step has completed, the application binary `ethos-u-noise_reduction.axf` can be
+found in the `build/bin` folder. Assuming the install location of the FVP was set to
+`~/FVP_install_location`, start the simulation with the following command:
+
+```commandline
+~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ./bin/mps3-sse-300/ethos-u-noise_reduction.axf
+```
+
+A log output then appears on the terminal:
+
+```log
+telnetterminal0: Listening for serial connection on port 5000
+telnetterminal1: Listening for serial connection on port 5001
+telnetterminal2: Listening for serial connection on port 5002
+telnetterminal5: Listening for serial connection on port 5003
+```
+
+This also launches a telnet window with the standard output of the sample application. It includes
+error log entries containing information about the pre-built application version, the TensorFlow
+Lite Micro library version used, and the data type, as well as the input and output tensor sizes
+of the model compiled into the executable binary.
+
+After the application has started, if `noise_reduction_FILE_PATH` points to a single file (or a
+folder containing a single input file), then inference starts immediately. If multiple inputs are
+available, a menu is displayed and the application waits for user input from the telnet terminal.
+
+For example:
+
+```log
+User input required
+Enter option number from:
+
+  1. Run noise reduction on the next WAV
+  2. Run noise reduction on a WAV at chosen index
+  3. Run noise reduction on all WAVs
+  4. Show NN model info
+  5. List audio clips
+
+Choice:
+```
+
+1. “Run noise reduction on the next WAV”: Runs processing and inference on the next in line WAV file.
+
+    > **Note:** Depending on the size of the input WAV file, multiple inferences can be invoked.
+
+2. “Run noise reduction on a WAV at chosen index”: Runs processing and inference on the WAV file
+   corresponding to the chosen index.
+
+    > **Note:** The index must be within the range of WAV files supplied at build time. By default,
+    the pre-built application has three files, with indexes from 0 to 2.
+
+3. “Run noise reduction on all WAVs”: Triggers sequential processing and inference executions on 
+   all baked-in WAV files.
+
+4. “Show NN model info”: Prints information about the model data type, including the input and
+   output tensor sizes. For example:
+
+    ```log
+    INFO - Model info:
+    INFO - Model INPUT tensors:
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 42 bytes with dimensions
+    INFO -          0:   1
+    INFO -          1:   1
+    INFO -          2:  42
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.221501
+    INFO - ZeroPoint[0] = 14
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 24 bytes with dimensions
+    INFO -          0:   1
+    INFO -          1:  24
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.007843
+    INFO - ZeroPoint[0] = -1
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 48 bytes with dimensions
+    INFO -          0:   1
+    INFO -          1:  48
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.047942
+    INFO - ZeroPoint[0] = -128
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 96 bytes with dimensions
+    INFO -          0:   1
+    INFO -          1:  96
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.007843
+    INFO - ZeroPoint[0] = -1
+    INFO - Model OUTPUT tensors: 
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 96 bytes with dimensions
+    INFO -          0:   1
+    INFO -          1:   1
+    INFO -          2:  96
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.007843
+    INFO - ZeroPoint[0] = -1
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 22 bytes with dimensions
+    INFO -          0:   1
+    INFO -          1:   1
+    INFO -          2:  22
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.003906
+    INFO - ZeroPoint[0] = -128
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 48 bytes with dimensions
+    INFO -          0:   1
+    INFO -          1:   1
+    INFO -          2:  48
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.047942
+    INFO - ZeroPoint[0] = -128
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 24 bytes with dimensions
+    INFO -          0:   1
+    INFO -          1:   1
+    INFO -          2:  24
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.007843
+    INFO - ZeroPoint[0] = -1
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 1 bytes with dimensions
+    INFO -          0:   1
+    INFO -          1:   1
+    INFO -          2:   1
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.003906
+    INFO - ZeroPoint[0] = -128
+    INFO - Activation buffer (a.k.a tensor arena) size used: 1940
+    INFO - Number of operators: 1
+    INFO -  Operator 0: ethos-u
+    INFO - Use of Arm uNPU is enabled
+    ```
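The scale and zero point printed for each tensor define the standard TensorFlow Lite affine quantization mapping. A minimal sketch, using the parameters of the first input tensor from the log above:

```python
# TensorFlow Lite affine (de)quantization:
#   real_value = scale * (quantized_value - zero_point)
def dequantize(q: int, scale: float, zero_point: int) -> float:
    return scale * (q - zero_point)

# Scale[0] and ZeroPoint[0] of the first input tensor in the log above.
scale, zero_point = 0.221501, 14

print(dequantize(14, scale, zero_point))  # the zero point maps back to 0.0
print(dequantize(15, scale, zero_point))  # one quantization step above zero
```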
+
+5. “List audio clips”: Prints the audio clip indexes paired with the original filenames, which
+    are embedded in the application. For example:
+
+    ```log
+    INFO - List of Files:
+    INFO -  0 => p232_113.wav
+    INFO -  1 => p232_208.wav
+    INFO -  2 => p257_031.wav
+    ```
+
+### Running Noise Reduction
+
+Selecting the first option runs inference on the first file.
+
+The following example illustrates an application output:
+
+```log
+INFO - Audio Clip dump header info (20 bytes) written to 0x80000000
+INFO - Inference 1/136
+INFO - Copied 960 bytes to 0x80000014
+INFO - Inference 2/136
+INFO - Copied 960 bytes to 0x800003d4
+...
+INFO - Inference 136/136
+INFO - Copied 960 bytes to 0x8001fa54
+INFO - Output memory dump of 130580 bytes written at address 0x80000000
+INFO - Final results:
+INFO - Profile for Inference:
+INFO - NPU AXI0_RD_DATA_BEAT_RECEIVED beats: 530 
+INFO - NPU AXI0_WR_DATA_BEAT_WRITTEN beats: 376
+INFO - NPU AXI1_RD_DATA_BEAT_RECEIVED beats: 13911
+INFO - NPU ACTIVE cycles: 103870
+INFO - NPU IDLE cycles: 643
+INFO - NPU TOTAL cycles: 104514
+```
+
+> **Note:** When running Fast Model, each inference can take several seconds on most systems.
+
+Each inference dumps the post-processed output to memory. For further information, please refer to:
+[Dumping post processed results for all inferences](#dumping-post_processed-results-for-all-inferences).
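The addresses in the log follow a fixed layout: a 20-byte header followed by 960 bytes per inference. A quick sanity check of the reported figures:

```python
HEADER_BYTES = 20          # "dump header info (20 bytes)" in the log
BYTES_PER_INFERENCE = 960  # "Copied 960 bytes" after each inference
NUM_INFERENCES = 136

total = HEADER_BYTES + NUM_INFERENCES * BYTES_PER_INFERENCE
print(total)  # 130580, matching "Output memory dump of 130580 bytes"

# The second copy lands exactly one frame after the first:
assert 0x80000014 + BYTES_PER_INFERENCE == 0x800003D4
```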
+
+The profiling section of the log shows that, for this inference:
+
+- The *Ethos-U* NPU PMU reports:
+
+  - 104514: The total number of NPU cycles.
+
+  - 103870: How many NPU cycles were used for computation.
+
+  - 643: How many cycles the NPU was idle for.
+
+  - 530: The number of AXI beats with read transactions from the AXI0 bus.
+    > **Note:** AXI0 is the bus on which the *Ethos-U* NPU reads and writes the computation
+    > buffers, that is, the activation buffer or tensor arena.
+
+  - 376: The number of AXI beats with write transactions to the AXI0 bus.
+
+  - 13911: The number of AXI beats with read transactions from the AXI1 bus.
+    > **Note:** AXI1 is the bus on which the *Ethos-U* NPU reads the model, which is read-only.
+
+- For FPGA platforms, the CPU cycle count can also be enabled. However, for FVP, do not use the CPU
+  cycle counters as the CPU model is not cycle-approximate or cycle-accurate.
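From the counters above, the NPU utilisation for this inference can be derived directly. Note that ACTIVE plus IDLE cycles need not sum exactly to TOTAL, as the counters start and stop at slightly different points:

```python
# PMU cycle counters from the profiling log above.
active_cycles = 103870
idle_cycles = 643
total_cycles = 104514

utilisation = active_cycles / total_cycles
print(f"NPU utilisation: {utilisation:.1%}")  # roughly 99.4%
```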
diff --git a/resources/noise_reduction/samples/files.md b/resources/noise_reduction/samples/files.md
new file mode 100644
index 0000000..b8cbbee
--- /dev/null
+++ b/resources/noise_reduction/samples/files.md
@@ -0,0 +1,12 @@
+# Sample WAV audio clips
+
+The sample WAV audio clips provided are licensed under the Creative Commons Attribution 4.0 International Public License.
+The source is Edinburgh DataShare (https://datashare.ed.ac.uk/handle/10283/2791/). The files used are listed here for traceability:
+
+- p232_113.wav
+- p232_208.wav
+- p257_031.wav
+
+## License
+
+[Creative Commons Attribution 4.0 International Public License](../../LICENSE_CC_4.0.txt).
diff --git a/resources/noise_reduction/samples/p232_113.wav b/resources/noise_reduction/samples/p232_113.wav
new file mode 100644
index 0000000..f2c41f0
--- /dev/null
+++ b/resources/noise_reduction/samples/p232_113.wav
Binary files differ
diff --git a/resources/noise_reduction/samples/p232_208.wav b/resources/noise_reduction/samples/p232_208.wav
new file mode 100644
index 0000000..f6c8a9f
--- /dev/null
+++ b/resources/noise_reduction/samples/p232_208.wav
Binary files differ
diff --git a/resources/noise_reduction/samples/p257_031.wav b/resources/noise_reduction/samples/p257_031.wav
new file mode 100644
index 0000000..5972fff
--- /dev/null
+++ b/resources/noise_reduction/samples/p257_031.wav
Binary files differ
diff --git a/scripts/py/gen_test_data_cpp.py b/scripts/py/gen_test_data_cpp.py
index a58f415..ba8f725 100644
--- a/scripts/py/gen_test_data_cpp.py
+++ b/scripts/py/gen_test_data_cpp.py
@@ -22,6 +22,7 @@
 import math
 import os
 import numpy as np
+from pathlib import Path
 
 from argparse import ArgumentParser
 from jinja2 import Environment, FileSystemLoader
@@ -43,8 +44,8 @@
                   lstrip_blocks=True)
 
 
-def write_hpp_file(header_filename, cc_file_path, header_template_file, num_iofms,
-                   ifm_array_names, ifm_size, ofm_array_names, ofm_size, iofm_data_type):
+def write_hpp_file(header_filename, cc_file_path, header_template_file, num_ifms, num_ofms,
+                   ifm_array_names, ifm_sizes, ofm_array_names, ofm_sizes, iofm_data_type):
     header_file_path = os.path.join(args.header_folder_path, header_filename)
 
     print(f"++ Generating {header_file_path}")
@@ -53,11 +54,12 @@
                                  gen_time=datetime.datetime.now(),
                                  year=datetime.datetime.now().year)
     env.get_template('TestData.hpp.template').stream(common_template_header=hdr,
-                                                   fm_count=num_iofms,
+                                                   ifm_count=num_ifms,
+                                                   ofm_count=num_ofms,
                                                    ifm_var_names=ifm_array_names,
-                                                   ifm_var_size=ifm_size,
+                                                   ifm_var_sizes=ifm_sizes,
                                                    ofm_var_names=ofm_array_names,
-                                                   ofm_var_size=ofm_size,
+                                                   ofm_var_sizes=ofm_sizes,
                                                    data_type=iofm_data_type,
                                                    namespaces=args.namespaces) \
         .dump(str(header_file_path))
@@ -116,17 +118,20 @@
     common_cc_filename = "TestData" + add_usecase_fname + ".cc"
 
     # In the data_folder_path there should be pairs of ifm-ofm
-    # It's assumed the ifm-ofm nameing convention: ifm0.npy-ofm0.npy, ifm1.npy-ofm1.npy
-    i_ofms_count = int(len([name for name in os.listdir(os.path.join(args.data_folder_path)) if name.lower().endswith('.npy')]) / 2)
+    # It's assumed the ifm-ofm naming convention: ifm0.npy-ofm0.npy, ifm1.npy-ofm1.npy
+    ifms_count = len(list(Path(args.data_folder_path).glob('ifm*.npy')))
+    ofms_count = len(list(Path(args.data_folder_path).glob('ofm*.npy')))
 
     iofm_data_type = "int8_t"
-    if (i_ofms_count > 0):
+    if ifms_count > 0:
         iofm_data_type = "int8_t" if (np.load(os.path.join(args.data_folder_path, "ifm0.npy")).dtype == np.int8) else "uint8_t"
 
-    ifm_size = -1
-    ofm_size = -1
+    ifm_sizes = []
+    ofm_sizes = []
 
-    for idx in range(i_ofms_count):
+    for idx in range(ifms_count):
         # Save the fm cc file
         base_name = "ifm" + str(idx)
         filename = base_name+".npy"
@@ -134,11 +139,9 @@
         cc_filename = os.path.join(args.source_folder_path, array_name + ".cc")
         ifm_array_names.append(array_name)
         write_individual_cc_file(filename, cc_filename, header_filename, args.license_template, array_name, iofm_data_type)
-        if ifm_size == -1:
-            ifm_size = get_npy_vec_size(filename)
-        elif ifm_size != get_npy_vec_size(filename):
-            raise Exception(f"ifm size changed for index {idx}")
+        ifm_sizes.append(get_npy_vec_size(filename))
 
+    for idx in range(ofms_count):
         # Save the fm cc file
         base_name = "ofm" + str(idx)
         filename = base_name+".npy"
@@ -146,14 +149,11 @@
         cc_filename = os.path.join(args.source_folder_path, array_name + ".cc")
         ofm_array_names.append(array_name)
         write_individual_cc_file(filename, cc_filename, header_filename, args.license_template, array_name, iofm_data_type)
-        if ofm_size == -1:
-            ofm_size = get_npy_vec_size(filename)
-        elif ofm_size != get_npy_vec_size(filename):
-            raise Exception(f"ofm size changed for index {idx}")
+        ofm_sizes.append(get_npy_vec_size(filename))
 
     common_cc_filepath = os.path.join(args.source_folder_path, common_cc_filename)
     write_hpp_file(header_filename, common_cc_filepath, args.license_template,
-                   i_ofms_count, ifm_array_names, ifm_size, ofm_array_names, ofm_size, iofm_data_type)
+                   ifms_count, ofms_count, ifm_array_names, ifm_sizes, ofm_array_names, ofm_sizes, iofm_data_type)
 
 
 if __name__ == '__main__':
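The counting change above replaces the old halve-the-npy-count heuristic with explicit globs, so unequal numbers of IFM and OFM files (as for RNNoise: 4 inputs, 5 outputs) are handled correctly. A minimal sketch of the scheme, using a hypothetical temporary data folder:

```python
import tempfile
from pathlib import Path

# Hypothetical data folder with unequal ifm/ofm counts, as for RNNoise.
data_folder = Path(tempfile.mkdtemp())
for name in ("ifm0.npy", "ifm1.npy", "ifm2.npy", "ifm3.npy",
             "ofm0.npy", "ofm1.npy", "ofm2.npy", "ofm3.npy", "ofm4.npy"):
    (data_folder / name).touch()

# Same counting scheme as the updated script.
ifms_count = len(list(data_folder.glob('ifm*.npy')))
ofms_count = len(list(data_folder.glob('ofm*.npy')))
print(ifms_count, ofms_count)  # 4 5
```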
diff --git a/scripts/py/rnnoise_dump_extractor.py b/scripts/py/rnnoise_dump_extractor.py
new file mode 100644
index 0000000..947a75a
--- /dev/null
+++ b/scripts/py/rnnoise_dump_extractor.py
@@ -0,0 +1,65 @@
+#  Copyright (c) 2021 Arm Limited. All rights reserved.
+#  SPDX-License-Identifier: Apache-2.0
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+
+"""
+This script can be used with the noise reduction use case to save
+the dumped noise reduced audio to a wav file.
+
+Example use:
+python rnnoise_dump_extractor.py --dump_file output.bin --output_dir ./denoised_wavs/
+"""
+import soundfile as sf
+import numpy as np
+
+import argparse
+from os import path
+
+import struct
+
+def extract(fp, output_dir, export_npy):
+    while True:
+        filename_length = struct.unpack("i", fp.read(4))[0]
+
+        if filename_length == -1:
+            return
+
+        filename = struct.unpack("{}s".format(filename_length), fp.read(filename_length))[0].decode('ascii')
+        audio_clip_length = struct.unpack("I", fp.read(4))[0]
+        output_file_name = path.join(output_dir, "denoised_{}".format(filename))
+        audio_clip = fp.read(audio_clip_length)
+        
+        with sf.SoundFile(output_file_name, 'w', channels=1, samplerate=48000, subtype="PCM_16", endian="LITTLE") as wav_file:
+            wav_file.buffer_write(audio_clip, dtype='int16')
+            print("{} written to disk".format(output_file_name))
+
+        if export_npy:
+            output_file_name += ".npy"
+            pack_format = "{}h".format(int(audio_clip_length/2))
+            npdata = np.array(struct.unpack(pack_format, audio_clip)).astype(np.int16)
+            np.save(output_file_name, npdata)
+            print("{} written to disk".format(output_file_name))
+
+def main(args):
+    extract(args.dump_file, args.output_dir, args.export_npy)
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dump_file", type=argparse.FileType('rb'), help="Dump file with audio files to extract.", required=True)
+    parser.add_argument("--output_dir", help="Output directory. Warning: duplicated file names will be overwritten.", required=True)
+    parser.add_argument("--export_npy", help="Export the audio buffer in NumPy format.", action="store_true")
+    args = parser.parse_args()
+    main(args)
+
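The binary layout `extract()` consumes is simple: a little-endian int32 filename length, the ASCII filename, a uint32 clip length, the raw clip bytes, repeated per file, terminated by a length of -1. A round-trip sketch of that framing (the writer side is hypothetical; only the reader mirrors the script above):

```python
import io
import struct

def make_dump(entries):
    """Build an in-memory dump in the format the extractor reads."""
    buf = io.BytesIO()
    for name, clip in entries:
        buf.write(struct.pack("i", len(name)))  # int32 filename length
        buf.write(name.encode("ascii"))         # filename bytes
        buf.write(struct.pack("I", len(clip)))  # uint32 clip length
        buf.write(clip)                         # raw PCM bytes
    buf.write(struct.pack("i", -1))             # terminator
    buf.seek(0)
    return buf

dump = make_dump([("p232_113.wav", b"\x00\x01" * 480)])

# Read one record back, as extract() does.
name_len = struct.unpack("i", dump.read(4))[0]
name = dump.read(name_len).decode("ascii")
clip_len = struct.unpack("I", dump.read(4))[0]
print(name, clip_len)  # p232_113.wav 960
```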
diff --git a/scripts/py/templates/TestData.cc.template b/scripts/py/templates/TestData.cc.template
index 1acd14d..d0f2698 100644
--- a/scripts/py/templates/TestData.cc.template
+++ b/scripts/py/templates/TestData.cc.template
@@ -32,7 +32,7 @@
 
 const {{data_type}}* get_ifm_data_array(const uint32_t idx)
 {
-    if (idx < NUMBER_OF_FM_FILES) {
+    if (idx < NUMBER_OF_IFM_FILES) {
         return ifm_arrays[idx];
     }
     return nullptr;
@@ -40,7 +40,7 @@
 
 const {{data_type}}* get_ofm_data_array(const uint32_t idx)
 {
-    if (idx < NUMBER_OF_FM_FILES) {
+    if (idx < NUMBER_OF_OFM_FILES) {
         return ofm_arrays[idx];
     }
     return nullptr;
diff --git a/scripts/py/templates/TestData.hpp.template b/scripts/py/templates/TestData.hpp.template
index cdedd48..413c062 100644
--- a/scripts/py/templates/TestData.hpp.template
+++ b/scripts/py/templates/TestData.hpp.template
@@ -25,16 +25,21 @@
 namespace {{namespace}} {
 {% endfor %}
 
-#define NUMBER_OF_FM_FILES  ({{fm_count}}U)
-#define IFM_DATA_SIZE  ({{ifm_var_size}}U)
-#define OFM_DATA_SIZE  ({{ofm_var_size}}U)
+#define NUMBER_OF_IFM_FILES  ({{ifm_count}}U)
+#define NUMBER_OF_OFM_FILES  ({{ofm_count}}U)
+{% for ifm_size in ifm_var_sizes %}
+#define IFM_{{loop.index0}}_DATA_SIZE  ({{ifm_size}}U)
+{% endfor %}
+{% for ofm_size in ofm_var_sizes %}
+#define OFM_{{loop.index0}}_DATA_SIZE  ({{ofm_size}}U)
+{% endfor %}
 
 {% for ifm_var_name in ifm_var_names %}
-extern const {{data_type}} {{ifm_var_name}}[IFM_DATA_SIZE];
+extern const {{data_type}} {{ifm_var_name}}[IFM_{{loop.index0}}_DATA_SIZE];
 {% endfor %}
 
 {% for ofm_var_name in ofm_var_names %}
-extern const {{data_type}} {{ofm_var_name}}[OFM_DATA_SIZE];
+extern const {{data_type}} {{ofm_var_name}}[OFM_{{loop.index0}}_DATA_SIZE];
 {% endfor %}
 
 const {{data_type}}* get_ifm_data_array(const uint32_t idx);
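The per-index macros the updated template now emits (one size per feature map, instead of a single shared size) can be sketched without Jinja2; the sizes here are taken from the model info log for illustration:

```python
# Input tensor sizes in bytes, one per IFM file (values from the model info log).
ifm_sizes = [42, 24, 48, 96]

# Equivalent of the template's {% for %} loop with loop.index0.
defines = [f"#define IFM_{idx}_DATA_SIZE  ({size}U)"
           for idx, size in enumerate(ifm_sizes)]
print("\n".join(defines))
```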
diff --git a/set_up_default_resources.py b/set_up_default_resources.py
index 6c65ee1..4a5ef10 100755
--- a/set_up_default_resources.py
+++ b/set_up_default_resources.py
@@ -88,6 +88,30 @@
                        "url": "https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/testing_output/Identity/0.npy"}]
     },
     {
+        "use_case_name": "noise_reduction",
+        "resources": [{"name": "rnnoise_INT8.tflite",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/rnnoise_INT8.tflite"},
+                      {"name": "ifm0.npy",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_input/main_input_int8/0.npy"},
+                      {"name": "ifm1.npy",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_input/vad_gru_prev_state_int8/0.npy"},
+                      {"name": "ifm2.npy",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_input/noise_gru_prev_state_int8/0.npy"},
+                      {"name": "ifm3.npy",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_input/denoise_gru_prev_state_int8/0.npy"},
+                      {"name": "ofm0.npy",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_int8/0.npy"},
+                      {"name": "ofm1.npy",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_1_int8/0.npy"},
+                      {"name": "ofm2.npy",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_2_int8/0.npy"},
+                      {"name": "ofm3.npy",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_3_int8/0.npy"},
+                      {"name": "ofm4.npy",
+                       "url": "https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/testing_output/Identity_4_int8/0.npy"},
+                     ]
+    },
+    {
         "use_case_name": "inference_runner",
         "resources": [{"name": "dnn_s_quantized.tflite",
                        "url": "https://github.com/ARM-software/ML-zoo/raw/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b/models/keyword_spotting/dnn_small/tflite_int8/dnn_s_quantized.tflite"}
diff --git a/source/use_case/noise_reduction/include/RNNoiseModel.hpp b/source/use_case/noise_reduction/include/RNNoiseModel.hpp
new file mode 100644
index 0000000..f6e4510
--- /dev/null
+++ b/source/use_case/noise_reduction/include/RNNoiseModel.hpp
@@ -0,0 +1,82 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef RNNOISE_MODEL_HPP
+#define RNNOISE_MODEL_HPP
+
+#include "Model.hpp"
+
+extern const uint32_t g_NumInputFeatures;
+extern const uint32_t g_FrameLength;
+extern const uint32_t g_FrameStride;
+
+namespace arm {
+namespace app {
+
+    class RNNoiseModel : public Model {
+    public:
+        /**
+         * @brief Runs inference for the RNNoise model.
+         *
+         * Call CopyGruStates() so that the GRU state outputs are copied to the GRU state
+         * inputs before the inference run.
+         * Call ResetGruState() to set the states to zero before starting to process
+         * logically related data.
+         * @return True if inference succeeded, False otherwise.
+         */
+        bool RunInference() override;
+
+        /**
+         * @brief Sets the GRU input states to zeros.
+         * Call this method before starting to process a new sequence of logically related data.
+         */
+        void ResetGruState();
+
+        /**
+         * @brief Copies the current GRU output states to the input states.
+         * Call this method before starting to process the next sequence of logically related data.
+         */
+        bool CopyGruStates();
+
+        /* Index of the model output that provides the main output (gains). */
+        const size_t m_indexForModelOutput = 1;
+
+    protected:
+        /** @brief   Gets the reference to op resolver interface class. */
+        const tflite::MicroOpResolver& GetOpResolver() override;
+
+        /** @brief   Adds operations to the op resolver instance. */
+        bool EnlistOperations() override;
+
+        const uint8_t* ModelPointer() override;
+
+        size_t ModelSize() override;
+
+        /*
+        Each inference after the first needs to copy 3 GRU states from an output index to an input index (model dependent):
+        0 -> 3, 2 -> 2, 3 -> 1
+        */
+        const std::vector<std::pair<size_t, size_t>> m_gruStateMap = {{0,3}, {2, 2}, {3, 1}};
+    private:
+        /* Maximum number of individual operations that can be enlisted. */
+        static constexpr int ms_maxOpCnt = 15;
+
+        /* A mutable op resolver instance. */
+        tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
+    };
+
+} /* namespace app */
+} /* namespace arm */
+
+#endif /* RNNOISE_MODEL_HPP */
\ No newline at end of file
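The output-to-input handoff that `m_gruStateMap` encodes can be sketched in Python. The buffer names here are hypothetical stand-ins; the real implementation copies TFLite tensor data between inferences, and output index 1 (the gains, `m_indexForModelOutput`) is not a GRU state, so it is never fed back:

```python
# (output index, input index) pairs, as in m_gruStateMap.
gru_state_map = [(0, 3), (2, 2), (3, 1)]

# Hypothetical stand-ins for the five model output tensors.
outputs = {i: f"output_{i}_state" for i in range(5)}

# Copy each mapped output into the corresponding input slot
# before the next inference.
inputs = {in_idx: outputs[out_idx] for out_idx, in_idx in gru_state_map}
print(inputs)  # {3: 'output_0_state', 2: 'output_2_state', 1: 'output_3_state'}
```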
diff --git a/source/use_case/noise_reduction/include/RNNoiseProcess.hpp b/source/use_case/noise_reduction/include/RNNoiseProcess.hpp
new file mode 100644
index 0000000..3800019
--- /dev/null
+++ b/source/use_case/noise_reduction/include/RNNoiseProcess.hpp
@@ -0,0 +1,337 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "PlatformMath.hpp"
+#include <cstdint>
+#include <vector>
+#include <array>
+#include <tuple>
+
+namespace arm {
+namespace app {
+namespace rnn {
+
+    using vec1D32F = std::vector<float>;
+    using vec2D32F = std::vector<vec1D32F>;
+    using arrHp = std::array<float, 2>;
+    using math::FftInstance;
+    using math::FftType;
+
+    class FrameFeatures {
+    public:
+        bool m_silence{false};        /* Whether the frame contains silence. */
+        vec1D32F m_featuresVec{};     /* Calculated feature vector to feed to model. */
+        vec1D32F m_fftX{};            /* Vector of floats arranged to represent complex numbers. */
+        vec1D32F m_fftP{};            /* Vector of floats arranged to represent complex numbers. */
+        vec1D32F m_Ex{};              /* Spectral band energy for audio x. */
+        vec1D32F m_Ep{};              /* Spectral band energy for pitch p. */
+        vec1D32F m_Exp{};             /* Correlated spectral energy between x and p. */
+    };
+
+    /**
+     * @brief   RNNoise pre and post processing class based on the 2018 paper from
+     *          Jan-Marc Valin. Recommended reading:
+     *          - https://jmvalin.ca/demo/rnnoise/
+     *          - https://arxiv.org/abs/1709.08243
+     **/
+    class RNNoiseProcess {
+    /* Public interface */
+    public:
+        RNNoiseProcess();
+        ~RNNoiseProcess() = default;
+
+        /**
+         * @brief        Calculates the features from a given audio buffer, ready to be sent to the RNNoise model.
+         * @param[in]    audioData   Pointer to the floating point vector
+         *                           with audio data (within the numerical
+         *                           limits of int16_t type).
+         * @param[in]    audioLen    Number of elements in the audio window.
+         * @param[out]   features    FrameFeatures object reference.
+         **/
+        void PreprocessFrame(const float*   audioData,
+                             size_t   audioLen,
+                             FrameFeatures& features);
+
+        /**
+         * @brief        Use the RNNoise model output gain values with pre-processing features
+         *               to generate audio with noise suppressed.
+         * @param[in]    modelOutput   Output gain values from model.
+         * @param[in]    features      Calculated features from pre-processing step.
+         * @param[out]   outFrame      Output frame to be populated.
+         **/
+        void PostProcessFrame(vec1D32F& modelOutput, FrameFeatures& features, vec1D32F& outFrame);
+
+
+    /* Public constants */
+    public:
+        static constexpr uint32_t FRAME_SIZE_SHIFT{2};
+        static constexpr uint32_t FRAME_SIZE{480};
+        static constexpr uint32_t WINDOW_SIZE{2 * FRAME_SIZE};
+        static constexpr uint32_t FREQ_SIZE{FRAME_SIZE + 1};
+
+        static constexpr uint32_t PITCH_MIN_PERIOD{60};
+        static constexpr uint32_t PITCH_MAX_PERIOD{768};
+        static constexpr uint32_t PITCH_FRAME_SIZE{960};
+        static constexpr uint32_t PITCH_BUF_SIZE{PITCH_MAX_PERIOD + PITCH_FRAME_SIZE};
+
+        static constexpr uint32_t NB_BANDS{22};
+        static constexpr uint32_t CEPS_MEM{8};
+        static constexpr uint32_t NB_DELTA_CEPS{6};
+
+        static constexpr uint32_t NB_FEATURES{NB_BANDS + 3*NB_DELTA_CEPS + 2};
+
+    /* Private functions */
+    private:
+
+        /**
+         * @brief   Initialises the half window and DCT tables.
+         */
+        void InitTables();
+
+        /**
+         * @brief           Applies a bi-quadratic filter over the audio window.
+         * @param[in]       bHp           Constant coefficient set b (arrHp type).
+         * @param[in]       aHp           Constant coefficient set a (arrHp type).
+         * @param[in,out]   memHpX        Coefficients populated by this function.
+         * @param[in,out]   audioWindow   Floating point vector with audio data.
+         **/
+        void BiQuad(
+            const arrHp& bHp,
+            const arrHp& aHp,
+            arrHp& memHpX,
+            vec1D32F& audioWindow);
+
+        /**
+         * @brief        Computes features from the "filtered" audio window.
+         * @param[in]    audioWindow   Floating point vector with audio data.
+         * @param[out]   features      FrameFeatures object reference.
+         **/
+        void ComputeFrameFeatures(vec1D32F& audioWindow, FrameFeatures& features);
+
+        /**
+         * @brief        Runs analysis on the audio buffer.
+         * @param[in]    audioWindow   Floating point vector with audio data.
+         * @param[out]   fft           Floating point FFT vector containing real and
+         *                             imaginary pairs of elements. NOTE: this vector
+         *                             does not contain the mirror image (conjugates)
+         *                             part of the spectrum.
+         * @param[out]   energy        Computed energy for each band in the Bark scale.
+         * @param[out]   analysisMem   Buffer sequentially, but partially,
+         *                             populated with new audio data.
+         **/
+        void FrameAnalysis(
+            const vec1D32F& audioWindow,
+            vec1D32F& fft,
+            vec1D32F& energy,
+            vec1D32F& analysisMem);
+
+        /**
+         * @brief               Applies the window function, in-place, over the given
+         *                      floating point buffer.
+         * @param[in,out]   x   Buffer the window will be applied to.
+         **/
+        void ApplyWindow(vec1D32F& x);
+
+        /**
+         * @brief        Computes the FFT for a given vector.
+         * @param[in]    x     Vector to compute the FFT from.
+         * @param[out]   fft   Floating point FFT vector containing real and
+         *                     imaginary pairs of elements. NOTE: this vector
+         *                     does not contain the mirror image (conjugates)
+         *                     part of the spectrum.
+         **/
+        void ForwardTransform(
+            vec1D32F& x,
+            vec1D32F& fft);
+
+        /**
+         * @brief        Computes band energy for each of the 22 Bark scale bands.
+         * @param[in]    fft_X   FFT spectrum (as computed by ForwardTransform).
+         * @param[out]   bandE   Vector with 22 elements populated with energy for
+         *                       each band.
+         **/
+        void ComputeBandEnergy(const vec1D32F& fft_X, vec1D32F& bandE);
+
+        /**
+         * @brief        Computes band energy correlation.
+         * @param[in]    X       FFT vector X.
+         * @param[in]    P       FFT vector P.
+         * @param[out]   bandC   Vector with 22 elements populated with band energy
+         *                       correlation for the two input FFT vectors.
+         **/
+        void ComputeBandCorr(const vec1D32F& X, const vec1D32F& P, vec1D32F& bandC);
+
+        /**
+         * @brief        Performs pitch auto-correlation for a given vector for
+         *               given lag.
+         * @param[in]    x     Input vector.
+         * @param[out]   ac    Auto-correlation output vector.
+         * @param[in]    lag   Lag value.
+         * @param[in]    n     Number of elements to consider for correlation
+         *                     computation.
+         **/
+        void AutoCorr(const vec1D32F& x,
+                      vec1D32F& ac,
+                      size_t lag,
+                      size_t n);
+
+        /**
+         * @brief       Computes pitch cross-correlation.
+         * @param[in]   x          Input vector 1.
+         * @param[in]   y          Input vector 2.
+         * @param[out]  ac         Cross-correlation output vector.
+         * @param[in]   len        Number of elements to consider for correlation
+         *                         computation.
+         * @param[in]   maxPitch   Maximum pitch.
+         **/
+        void PitchXCorr(
+            const vec1D32F& x,
+            const vec1D32F& y,
+            vec1D32F& ac,
+            size_t len,
+            size_t maxPitch);
+
+        /**
+         * @brief        Computes "Linear Predictor Coefficients".
+         * @param[in]    ac    Correlation vector.
+         * @param[in]    p     Number of elements of input vector to consider.
+         * @param[out]   lpc   Output coefficients vector.
+         **/
+        void LPC(const vec1D32F& ac, int32_t p, vec1D32F& lpc);
+
+        /**
+         * @brief        Custom FIR implementation.
+         * @param[in]    num   FIR coefficient vector.
+         * @param[in]    N     Number of elements.
+         * @param[out]   x     Vector to be processed.
+         **/
+        void Fir5(const vec1D32F& num, uint32_t N, vec1D32F& x);
+
+        /**
+         * @brief           Down-sample the pitch buffer.
+         * @param[in,out]   pitchBuf     Pitch buffer.
+         * @param[in]       pitchBufSz   Buffer size.
+         **/
+        void PitchDownsample(vec1D32F& pitchBuf, size_t pitchBufSz);
+
+        /**
+         * @brief       Pitch search function.
+         * @param[in]   xLp        Shifted pitch buffer input.
+         * @param[in]   y          Pitch buffer input.
+         * @param[in]   len        Length to search for.
+         * @param[in]   maxPitch   Maximum pitch.
+         * @return      pitch index.
+         **/
+        int PitchSearch(vec1D32F& xLp, vec1D32F& y, uint32_t len, uint32_t maxPitch);
+
+        /**
+         * @brief       Finds the "best" pitch from the buffer.
+         * @param[in]   xCorr      Pitch correlation vector.
+         * @param[in]   y          Pitch buffer input.
+         * @param[in]   len        Length to search for.
+         * @param[in]   maxPitch   Maximum pitch.
+         * @return      pitch array (2 elements).
+         **/
+        arrHp FindBestPitch(vec1D32F& xCorr, vec1D32F& y, uint32_t len, uint32_t maxPitch);
+
+        /**
+         * @brief           Remove pitch period doubling errors.
+         * @param[in,out]   pitchBuf     Pitch buffer vector.
+         * @param[in]       maxPeriod    Maximum period.
+         * @param[in]       minPeriod    Minimum period.
+         * @param[in]       frameSize    Frame size.
+         * @param[in]       pitchIdx0_   Pitch index 0.
+         * @return          Pitch index.
+         **/
+        int RemoveDoubling(
+                vec1D32F& pitchBuf,
+                uint32_t maxPeriod,
+                uint32_t minPeriod,
+                uint32_t frameSize,
+                size_t pitchIdx0_);
+
+        /**
+         * @brief       Computes pitch gain.
+         * @param[in]   xy   Single xy cross correlation value.
+         * @param[in]   xx   Single xx auto correlation value.
+         * @param[in]   yy   Single yy auto correlation value.
+         * @return      Calculated pitch gain.
+         **/
+        float ComputePitchGain(float xy, float xx, float yy);
+
+        /**
+         * @brief        Computes DCT vector from the given input.
+         * @param[in]    input    Input vector.
+         * @param[out]   output   Output vector with DCT coefficients.
+         **/
+        void DCT(vec1D32F& input, vec1D32F& output);
+
+        /**
+         * @brief        Perform inverse Fourier transform on complex spectral vector.
+         * @param[out]   out      Output vector.
+         * @param[in]    fftXIn   Vector of floats arranged to represent complex numbers interleaved.
+         **/
+        void InverseTransform(vec1D32F& out, vec1D32F& fftXIn);
+
+        /**
+         * @brief       Perform pitch filtering.
+         * @param[in]   features   Object with pre-processing calculated frame features.
+         * @param[in]   g          Gain values.
+         **/
+        void PitchFilter(FrameFeatures& features, vec1D32F& g);
+
+        /**
+         * @brief        Interpolate the band gain values.
+         * @param[out]   g       Gain values.
+         * @param[in]    bandE   Vector with 22 elements populated with energy for
+         *                       each band.
+         **/
+        void InterpBandGain(vec1D32F& g, vec1D32F& bandE);
+
+        /**
+         * @brief        Create de-noised frame.
+         * @param[out]   outFrame   Output vector for storing the created audio frame.
+         * @param[in]    fftY       Gain adjusted complex spectral vector.
+         */
+        void FrameSynthesis(vec1D32F& outFrame, vec1D32F& fftY);
+
+    /* Private objects */
+    private:
+        FftInstance m_fftInstReal;  /* FFT instance for real numbers */
+        FftInstance m_fftInstCmplx; /* FFT instance for complex numbers */
+        vec1D32F m_halfWindow;      /* Window coefficients */
+        vec1D32F m_dctTable;        /* DCT table */
+        vec1D32F m_analysisMem;     /* Buffer used for frame analysis */
+        vec2D32F m_cepstralMem;     /* Cepstral coefficients */
+        size_t m_memId;             /* memory ID */
+        vec1D32F m_synthesisMem;    /* Synthesis mem (used by post-processing) */
+        vec1D32F m_pitchBuf;        /* Pitch buffer */
+        float m_lastGain;           /* Last gain calculated */
+        int m_lastPeriod;           /* Last period calculated */
+        arrHp m_memHpX;             /* HpX coefficients. */
+        vec1D32F m_lastGVec;        /* Last gain vector (used by post-processing) */
+
+        /* Constants */
+        const std::array <uint32_t, NB_BANDS> m_eband5ms {
+            0,  1,  2,  3,  4,  5,  6,  7,  8, 10,  12,
+            14, 16, 20, 24, 28, 34, 40, 48, 60, 78, 100};
+
+    };
+
+
+} /* namespace rnn */
+} /* namespace app */
+} /* namespace arm */
diff --git a/source/use_case/noise_reduction/include/UseCaseHandler.hpp b/source/use_case/noise_reduction/include/UseCaseHandler.hpp
new file mode 100644
index 0000000..143f2ed
--- /dev/null
+++ b/source/use_case/noise_reduction/include/UseCaseHandler.hpp
@@ -0,0 +1,97 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef NOISE_REDUCTION_EVT_HANDLER_HPP
+#define NOISE_REDUCTION_EVT_HANDLER_HPP
+
+#include "AppContext.hpp"
+#include "Model.hpp"
+
+namespace arm {
+namespace app {
+
+    /**
+     * @brief       Handles the inference event for noise reduction.
+     * @param[in]   ctx         Reference to the application context.
+     * @param[in]   runAll      Flag to request noise reduction on all the available audio clips.
+     * @return      True or false based on execution success.
+     **/
+    bool NoiseReductionHandler(ApplicationContext& ctx, bool runAll);
+
+    /**
+     * @brief           Dumps the output tensors to a memory address.
+     * This functionality is required for the RNNoise use case as we want to
+     * save the inference output to a file. Dumping out tensors to a
+     * memory location will allow the Arm FVP or MPS3 to extract the
+     * contents of this memory location to a file. This file could then
+     * be used by an offline post-processing script.
+     *
+     * @param[in]   model       reference to a model
+     * @param[in]   memAddress  memory address at which the dump will start
+     * @param[in]   memSize     maximum size (in bytes) of the dump.
+     *
+     * @return  number of bytes written to memory.
+     */
+    size_t DumpOutputTensorsToMemory(Model& model, uint8_t* memAddress,
+                                    size_t memSize);
+
+    /**
+     * @brief Dumps the audio file header.
+     * This functionality is required for the RNNoise use case as we want to
+     * save the inference output to a file. Dumping out the header to a
+     * memory location will allow the Arm FVP or MPS3 to extract the
+     * contents of this memory location to a file.
+     * The header contains the following information:
+     * int32_t filenameLength: filename length
+     * uint8_t[] filename: the string containing the file name (without trailing \0)
+     * int32_t dumpSizeByte: audio file buffer size in bytes
+     *
+     * @param[in]   filename    the file name
+     * @param[in]   dumpSize    the size of the audio file (in elements)
+     * @param[in]   memAddress  memory address at which the dump will start
+     * @param[in]   memSize     maximum size (in bytes) of the dump.
+     *
+     * @return  number of bytes written to memory.
+     */
+    size_t DumpDenoisedAudioHeader(const char* filename, size_t dumpSize,
+                                   uint8_t* memAddress, size_t memSize);
+
+    /**
+     * @brief Writes an EOF marker at the end of the dump memory.
+     *
+     * @param[in]   memAddress  memory address at which the dump will start
+     * @param[in]   memSize     maximum size (in bytes) of the dump.
+     *
+     * @return  number of bytes written to memory.
+     */
+    size_t DumpDenoisedAudioFooter(uint8_t *memAddress, size_t memSize);
+
+    /**
+     * @brief Dumps the audio data to memory.
+     *
+     * @param[in]   audioFrame  The vector containing the audio data.
+     * @param[in]   memAddress  memory address at which the dump will start
+     * @param[in]   memSize     maximum size (in bytes) of the dump.
+     *
+     * @return  number of bytes written to memory.
+     */
+    size_t DumpOutputDenoisedAudioFrame(const std::vector<int16_t> &audioFrame,
+                                        uint8_t *memAddress, size_t memSize);
+
+} /* namespace app */
+} /* namespace arm */
+
+#endif /* NOISE_REDUCTION_EVT_HANDLER_HPP */
\ No newline at end of file
diff --git a/source/use_case/noise_reduction/src/MainLoop.cc b/source/use_case/noise_reduction/src/MainLoop.cc
new file mode 100644
index 0000000..ee0a61b
--- /dev/null
+++ b/source/use_case/noise_reduction/src/MainLoop.cc
@@ -0,0 +1,129 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "hal.h"                    /* Brings in platform definitions. */
+#include "UseCaseHandler.hpp"       /* Handlers for different user options. */
+#include "UseCaseCommonUtils.hpp"   /* Utils functions. */
+#include "RNNoiseModel.hpp"         /* Model class for running inference. */
+#include "InputFiles.hpp"           /* For input audio clips. */
+#include "RNNoiseProcess.hpp"       /* Pre-processing class */
+
+enum opcodes
+{
+    MENU_OPT_RUN_INF_NEXT = 1,       /* Run on next vector. */
+    MENU_OPT_RUN_INF_CHOSEN,         /* Run on a user provided vector index. */
+    MENU_OPT_RUN_INF_ALL,            /* Run inference on all. */
+    MENU_OPT_SHOW_MODEL_INFO,        /* Show model info. */
+    MENU_OPT_LIST_AUDIO_CLIPS        /* List the currently baked-in audio clips. */
+};
+
+static void DisplayMenu()
+{
+    printf("\n\n");
+    printf("User input required\n");
+    printf("Enter option number from:\n\n");
+    printf("  %u. Run noise reduction on the next WAV\n", MENU_OPT_RUN_INF_NEXT);
+    printf("  %u. Run noise reduction on a WAV at chosen index\n", MENU_OPT_RUN_INF_CHOSEN);
+    printf("  %u. Run noise reduction on all WAVs\n", MENU_OPT_RUN_INF_ALL);
+    printf("  %u. Show NN model info\n", MENU_OPT_SHOW_MODEL_INFO);
+    printf("  %u. List audio clips\n\n", MENU_OPT_LIST_AUDIO_CLIPS);
+    printf("  Choice: ");
+    fflush(stdout);
+}
+
+static bool SetAppCtxClipIdx(arm::app::ApplicationContext& ctx, uint32_t idx)
+{
+    if (idx >= NUMBER_OF_FILES) {
+        printf_err("Invalid idx %" PRIu32 " (expected less than %u)\n",
+                   idx, NUMBER_OF_FILES);
+        return false;
+    }
+    ctx.Set<uint32_t>("clipIndex", idx);
+    return true;
+}
+
+void main_loop(hal_platform& platform)
+{
+    arm::app::RNNoiseModel model;  /* Model wrapper object. */
+
+    bool executionSuccessful = true;
+    constexpr bool bUseMenu = NUMBER_OF_FILES > 1;
+
+    /* Load the model. */
+    if (!model.Init()) {
+        printf_err("Failed to initialise model\n");
+        return;
+    }
+    /* Instantiate application context. */
+    arm::app::ApplicationContext caseContext;
+
+    arm::app::Profiler profiler{&platform, "noise_reduction"};
+    caseContext.Set<arm::app::Profiler&>("profiler", profiler);
+
+    caseContext.Set<hal_platform&>("platform", platform);
+    caseContext.Set<uint32_t>("numInputFeatures", g_NumInputFeatures);
+    caseContext.Set<uint32_t>("frameLength", g_FrameLength);
+    caseContext.Set<uint32_t>("frameStride", g_FrameStride);
+    caseContext.Set<arm::app::RNNoiseModel&>("model", model);
+    SetAppCtxClipIdx(caseContext, 0);
+
+#if defined(MEM_DUMP_BASE_ADDR) && defined(MPS3_PLATFORM)
+    /* For this use case, for valid targets, we dump contents
+     * of the output tensor to a certain location in memory to
+     * allow offline tools to pick this data up. */
+    constexpr size_t memDumpMaxLen = MEM_DUMP_LEN;
+    uint8_t* memDumpBaseAddr = reinterpret_cast<uint8_t *>(MEM_DUMP_BASE_ADDR);
+    size_t memDumpBytesWritten = 0;
+    caseContext.Set<size_t>("MEM_DUMP_LEN", memDumpMaxLen);
+    caseContext.Set<uint8_t*>("MEM_DUMP_BASE_ADDR", memDumpBaseAddr);
+    caseContext.Set<size_t*>("MEM_DUMP_BYTE_WRITTEN", &memDumpBytesWritten);
+#endif /* defined(MEM_DUMP_BASE_ADDR) && defined(MPS3_PLATFORM) */
+    /* Loop. */
+    do {
+        int menuOption = MENU_OPT_RUN_INF_NEXT;
+
+        if (bUseMenu) {
+            DisplayMenu();
+            menuOption = arm::app::ReadUserInputAsInt(platform);
+            printf("\n");
+        }
+        switch (menuOption) {
+            case MENU_OPT_RUN_INF_NEXT:
+                executionSuccessful = NoiseReductionHandler(caseContext, false);
+                break;
+            case MENU_OPT_RUN_INF_CHOSEN: {
+                printf("    Enter the audio clip IFM index [0, %d]: ", NUMBER_OF_FILES-1);
+                auto clipIndex = static_cast<uint32_t>(arm::app::ReadUserInputAsInt(platform));
+                SetAppCtxClipIdx(caseContext, clipIndex);
+                executionSuccessful = NoiseReductionHandler(caseContext, false);
+                break;
+            }
+            case MENU_OPT_RUN_INF_ALL:
+                executionSuccessful = NoiseReductionHandler(caseContext, true);
+                break;
+            case MENU_OPT_SHOW_MODEL_INFO:
+                executionSuccessful = model.ShowModelInfoHandler();
+                break;
+            case MENU_OPT_LIST_AUDIO_CLIPS:
+                executionSuccessful = ListFilesHandler(caseContext);
+                break;
+            default:
+                printf("Incorrect choice, try again.\n");
+                break;
+        }
+    } while (executionSuccessful && bUseMenu);
+    info("Main loop terminated.\n");
+}
\ No newline at end of file
diff --git a/source/use_case/noise_reduction/src/RNNoiseModel.cc b/source/use_case/noise_reduction/src/RNNoiseModel.cc
new file mode 100644
index 0000000..be0f369
--- /dev/null
+++ b/source/use_case/noise_reduction/src/RNNoiseModel.cc
@@ -0,0 +1,111 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "RNNoiseModel.hpp"
+
+#include "hal.h"
+
+const tflite::MicroOpResolver& arm::app::RNNoiseModel::GetOpResolver()
+{
+    return this->m_opResolver;
+}
+
+bool arm::app::RNNoiseModel::EnlistOperations()
+{
+    this->m_opResolver.AddUnpack();
+    this->m_opResolver.AddFullyConnected();
+    this->m_opResolver.AddSplit();
+    this->m_opResolver.AddSplitV();
+    this->m_opResolver.AddAdd();
+    this->m_opResolver.AddLogistic();
+    this->m_opResolver.AddMul();
+    this->m_opResolver.AddSub();
+    this->m_opResolver.AddTanh();
+    this->m_opResolver.AddPack();
+    this->m_opResolver.AddReshape();
+    this->m_opResolver.AddQuantize();
+    this->m_opResolver.AddConcatenation();
+    this->m_opResolver.AddRelu();
+
+#if defined(ARM_NPU)
+    if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
+        info("Added %s support to op resolver\n",
+            tflite::GetString_ETHOSU());
+    } else {
+        printf_err("Failed to add Arm NPU support to op resolver.\n");
+        return false;
+    }
+#endif /* ARM_NPU */
+    return true;
+}
+
+extern uint8_t* GetModelPointer();
+const uint8_t* arm::app::RNNoiseModel::ModelPointer()
+{
+    return GetModelPointer();
+}
+
+extern size_t GetModelLen();
+size_t arm::app::RNNoiseModel::ModelSize()
+{
+    return GetModelLen();
+}
+
+bool arm::app::RNNoiseModel::RunInference()
+{
+    return Model::RunInference();
+}
+
+void arm::app::RNNoiseModel::ResetGruState()
+{
+    for (auto& stateMapping: this->m_gruStateMap) {
+        TfLiteTensor* inputGruStateTensor = this->GetInputTensor(stateMapping.second);
+        auto* inputGruState = tflite::GetTensorData<int8_t>(inputGruStateTensor);
+        /* Initial value of states is 0, but this is affected by quantization zero point. */
+        auto quantParams = arm::app::GetTensorQuantParams(inputGruStateTensor);
+        memset(inputGruState, quantParams.offset, inputGruStateTensor->bytes);
+    }
+}
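`ResetGruState` fills the state tensors with the quantisation zero point rather than literal zeros. The affine int8 quantisation rule behind this can be sketched as follows; `QuantizeF32` is a hypothetical helper, not part of these sources.

```cpp
#include <cmath>
#include <cstdint>

/* Sketch of why the GRU state reset fills the tensor with the
 * quantisation zero point: in affine int8 quantisation, real value r
 * maps to round(r / scale) + zeroPoint, so real 0.0 maps exactly to
 * zeroPoint (assuming the result fits in int8 range). */
inline int8_t QuantizeF32(float r, float scale, int32_t zeroPoint)
{
    return static_cast<int8_t>(std::lround(r / scale) + zeroPoint);
}
```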
+
+bool arm::app::RNNoiseModel::CopyGruStates()
+{
+    std::vector<std::pair<size_t, std::vector<int8_t>>> tempOutGruStates;
+    /* Saving output states before copying them to input states to avoid output states modification in the tensor.
+     * tflu shares input and output tensors memory, thus writing to input tensor can change output tensor values. */
+    for (auto& stateMapping: this->m_gruStateMap) {
+        TfLiteTensor* outputGruStateTensor = this->GetOutputTensor(stateMapping.first);
+        std::vector<int8_t> tempOutGruState(outputGruStateTensor->bytes);
+        auto* outGruState = tflite::GetTensorData<int8_t>(outputGruStateTensor);
+        memcpy(tempOutGruState.data(), outGruState, outputGruStateTensor->bytes);
+        /* Index of the input tensor and the data to copy. */
+        tempOutGruStates.emplace_back(stateMapping.second, std::move(tempOutGruState));
+    }
+    /* Updating input GRU states with saved GRU output states. */
+    for (auto& stateMapping: tempOutGruStates) {
+        auto outputGruStateTensorData = stateMapping.second;
+        TfLiteTensor* inputGruStateTensor = this->GetInputTensor(stateMapping.first);
+        if (outputGruStateTensorData.size() != inputGruStateTensor->bytes) {
+            printf_err("Unexpected number of bytes for GRU state mapping. Input = %zu, output = %zu.\n",
+                       inputGruStateTensor->bytes,
+                       outputGruStateTensorData.size());
+            return false;
+        }
+        auto* inputGruState = tflite::GetTensorData<int8_t>(inputGruStateTensor);
+        auto* outGruState = outputGruStateTensorData.data();
+        memcpy(inputGruState, outGruState, inputGruStateTensor->bytes);
+    }
+    return true;
+}
\ No newline at end of file
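`CopyGruStates` snapshots the output states before writing the input states because TFLite Micro may alias input and output tensor memory. The same save-then-copy idea can be shown with raw buffers; `CopyStateAliased` below is an illustrative sketch, not part of the sources.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

/* Sketch of the save-then-restore pattern used by CopyGruStates: when an
 * input and an output buffer may alias the same memory, snapshot the
 * output first so the copy cannot corrupt the data being read. */
inline void CopyStateAliased(int8_t* input, const int8_t* output, size_t bytes)
{
    std::vector<int8_t> snapshot(output, output + bytes); /* Save outputs first. */
    std::memcpy(input, snapshot.data(), bytes);           /* Then update inputs. */
}
```

A plain `memcpy` between overlapping regions would be undefined behaviour; the snapshot makes the copy safe regardless of aliasing.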
diff --git a/source/use_case/noise_reduction/src/RNNoiseProcess.cc b/source/use_case/noise_reduction/src/RNNoiseProcess.cc
new file mode 100644
index 0000000..d9a7b35
--- /dev/null
+++ b/source/use_case/noise_reduction/src/RNNoiseProcess.cc
@@ -0,0 +1,888 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "RNNoiseProcess.hpp"
+#include <algorithm>
+#include <cmath>
+#include <cstring>
+
+namespace arm {
+namespace app {
+namespace rnn {
+
+#define VERIFY(x)                                   \
+do {                                                \
+    if (!(x)) {                                     \
+        printf_err("Assert failed: " #x "\n");      \
+        exit(1);                                    \
+    }                                               \
+} while(0)
+
+RNNoiseProcess::RNNoiseProcess() :
+        m_halfWindow(FRAME_SIZE, 0),
+        m_dctTable(NB_BANDS * NB_BANDS),
+        m_analysisMem(FRAME_SIZE, 0),
+        m_cepstralMem(CEPS_MEM, vec1D32F(NB_BANDS, 0)),
+        m_memId{0},
+        m_synthesisMem(FRAME_SIZE, 0),
+        m_pitchBuf(PITCH_BUF_SIZE, 0),
+        m_lastGain{0.0},
+        m_lastPeriod{0},
+        m_memHpX{},
+        m_lastGVec(NB_BANDS, 0)
+{
+    constexpr uint32_t numFft = 2 * FRAME_SIZE;
+    static_assert(numFft != 0, "Num FFT can't be 0");
+
+    math::MathUtils::FftInitF32(numFft, this->m_fftInstReal, FftType::real);
+    math::MathUtils::FftInitF32(numFft, this->m_fftInstCmplx, FftType::complex);
+    this->InitTables();
+}
+
+void RNNoiseProcess::PreprocessFrame(const float*   audioData,
+                                     const size_t   audioLen,
+                                     FrameFeatures& features)
+{
+    /* Note audioWindow is modified in place */
+    const arrHp aHp {-1.99599, 0.99600 };
+    const arrHp bHp {-2.00000, 1.00000 };
+
+    vec1D32F audioWindow{audioData, audioData + audioLen};
+
+    this->BiQuad(bHp, aHp, this->m_memHpX, audioWindow);
+    this->ComputeFrameFeatures(audioWindow, features);
+}
+
+void RNNoiseProcess::PostProcessFrame(vec1D32F& modelOutput, FrameFeatures& features, vec1D32F& outFrame)
+{
+    std::vector<float> g = modelOutput;  /* Gain values. */
+    std::vector<float> gf(FREQ_SIZE, 0);
+
+    if (!features.m_silence) {
+        PitchFilter(features, g);
+        for (size_t i = 0; i < NB_BANDS; i++) {
+            float alpha = .6f;
+            g[i] = std::max(g[i], alpha * m_lastGVec[i]);
+            m_lastGVec[i] = g[i];
+        }
+        InterpBandGain(gf, g);
+        for (size_t i = 0; i < FREQ_SIZE; i++) {
+            features.m_fftX[2 * i] *= gf[i];      /* Real. */
+            features.m_fftX[2 * i + 1] *= gf[i];  /* Imaginary. */
+        }
+    }
+
+    FrameSynthesis(outFrame, features.m_fftX);
+}
+
+void RNNoiseProcess::InitTables()
+{
+    constexpr float pi = M_PI;
+    constexpr float halfPi = M_PI / 2;
+    constexpr float halfPiOverFrameSz = halfPi/FRAME_SIZE;
+
+    for (uint32_t i = 0; i < FRAME_SIZE; i++) {
+        const float sinVal = math::MathUtils::SineF32(halfPiOverFrameSz * (i + 0.5f));
+        m_halfWindow[i] = math::MathUtils::SineF32(halfPi * sinVal * sinVal);
+    }
+
+    for (uint32_t i = 0; i < NB_BANDS; i++) {
+        for (uint32_t j = 0; j < NB_BANDS; j++) {
+            m_dctTable[i * NB_BANDS + j] = math::MathUtils::CosineF32((i + 0.5f) * j * pi / NB_BANDS);
+        }
+        m_dctTable[i * NB_BANDS] *= math::MathUtils::SqrtF32(0.5f);
+    }
+}
+
+void RNNoiseProcess::BiQuad(
+        const arrHp& bHp,
+        const arrHp& aHp,
+        arrHp& memHpX,
+        vec1D32F& audioWindow)
+{
+    for (float& audioElement : audioWindow) {
+        const auto xi = audioElement;
+        const auto yi = audioElement + memHpX[0];
+        memHpX[0] = memHpX[1] + (bHp[0] * xi - aHp[0] * yi);
+        memHpX[1] = (bHp[1] * xi - aHp[1] * yi);
+        audioElement = yi;
+    }
+}
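`BiQuad` above is a biquad filter in transposed direct form II with an implicit leading feed-forward coefficient of 1. A self-contained sketch of the same recurrence (names mirror the member function, but this is illustrative only):

```cpp
#include <array>
#include <vector>

using arrHp = std::array<float, 2>;

/* Transposed direct-form II biquad, leading feed-forward coefficient
 * fixed at 1, matching the recurrence in RNNoiseProcess::BiQuad:
 *   y[n] = x[n] + s0
 *   s0   = s1 + b0*x[n] - a0*y[n]
 *   s1   =      b1*x[n] - a1*y[n] */
inline void BiQuad(const arrHp& bHp, const arrHp& aHp,
                   arrHp& memHpX, std::vector<float>& audio)
{
    for (float& sample : audio) {
        const float xi = sample;
        const float yi = sample + memHpX[0];
        memHpX[0] = memHpX[1] + (bHp[0] * xi - aHp[0] * yi);
        memHpX[1] = (bHp[1] * xi - aHp[1] * yi);
        sample = yi;
    }
}
```

With all coefficients zero the filter degenerates to the identity, which makes a convenient sanity check.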
+
+void RNNoiseProcess::ComputeFrameFeatures(vec1D32F& audioWindow,
+                                          FrameFeatures& features)
+{
+    this->FrameAnalysis(audioWindow,
+                        features.m_fftX,
+                        features.m_Ex,
+                        this->m_analysisMem);
+
+    float E = 0.0;
+
+    vec1D32F Ly(NB_BANDS, 0);
+    vec1D32F p(WINDOW_SIZE, 0);
+    vec1D32F pitchBuf(PITCH_BUF_SIZE >> 1, 0);
+
+    VERIFY(this->m_pitchBuf.size() >= PITCH_BUF_SIZE);
+    std::copy_n(this->m_pitchBuf.begin() + FRAME_SIZE,
+                PITCH_BUF_SIZE - FRAME_SIZE,
+                this->m_pitchBuf.begin());
+
+    VERIFY(FRAME_SIZE <= audioWindow.size() && PITCH_BUF_SIZE > FRAME_SIZE);
+    std::copy_n(audioWindow.begin(),
+                FRAME_SIZE,
+                this->m_pitchBuf.begin() + PITCH_BUF_SIZE - FRAME_SIZE);
+
+    this->PitchDownsample(pitchBuf, PITCH_BUF_SIZE);
+
+    VERIFY(pitchBuf.size() > PITCH_MAX_PERIOD/2);
+    vec1D32F xLp(pitchBuf.size() - PITCH_MAX_PERIOD/2, 0);
+    std::copy_n(pitchBuf.begin() + PITCH_MAX_PERIOD/2, xLp.size(), xLp.begin());
+
+    int pitchIdx = this->PitchSearch(xLp, pitchBuf,
+            PITCH_FRAME_SIZE, (PITCH_MAX_PERIOD - (3*PITCH_MIN_PERIOD)));
+
+    pitchIdx = this->RemoveDoubling(
+                pitchBuf,
+                PITCH_MAX_PERIOD,
+                PITCH_MIN_PERIOD,
+                PITCH_FRAME_SIZE,
+                PITCH_MAX_PERIOD - pitchIdx);
+
+    size_t stIdx = PITCH_BUF_SIZE - WINDOW_SIZE - pitchIdx;
+    VERIFY((static_cast<int>(PITCH_BUF_SIZE) - static_cast<int>(WINDOW_SIZE) - pitchIdx) >= 0);
+    std::copy_n(this->m_pitchBuf.begin() + stIdx, WINDOW_SIZE, p.begin());
+
+    this->ApplyWindow(p);
+    this->ForwardTransform(p, features.m_fftP);
+    this->ComputeBandEnergy(features.m_fftP, features.m_Ep);
+    this->ComputeBandCorr(features.m_fftX, features.m_fftP, features.m_Exp);
+
+    for (uint32_t i = 0 ; i < NB_BANDS; ++i) {
+        features.m_Exp[i] /= math::MathUtils::SqrtF32(
+            0.001f + features.m_Ex[i] * features.m_Ep[i]);
+    }
+
+    vec1D32F dctVec(NB_BANDS, 0);
+    this->DCT(features.m_Exp, dctVec);
+
+    features.m_featuresVec = vec1D32F (NB_FEATURES, 0);
+    for (uint32_t i = 0; i < NB_DELTA_CEPS; ++i) {
+        features.m_featuresVec[NB_BANDS + 2*NB_DELTA_CEPS + i] = dctVec[i];
+    }
+
+    features.m_featuresVec[NB_BANDS + 2*NB_DELTA_CEPS] -= 1.3;
+    features.m_featuresVec[NB_BANDS + 2*NB_DELTA_CEPS + 1] -= 0.9;
+    features.m_featuresVec[NB_BANDS + 3*NB_DELTA_CEPS] = 0.01 * (static_cast<int>(pitchIdx) - 300);
+
+    float logMax = -2.f;
+    float follow = -2.f;
+    for (uint32_t i = 0; i < NB_BANDS; ++i) {
+        Ly[i] = log10f(1e-2f + features.m_Ex[i]);
+        Ly[i] = std::max<float>(logMax - 7, std::max<float>(follow - 1.5, Ly[i]));
+        logMax = std::max<float>(logMax, Ly[i]);
+        follow = std::max<float>(follow - 1.5, Ly[i]);
+        E += features.m_Ex[i];
+    }
+
+    /* If there's no audio, avoid messing up the state. */
+    features.m_silence = true;
+    if (E < 0.04) {
+        return;
+    } else {
+        features.m_silence = false;
+    }
+
+    this->DCT(Ly, features.m_featuresVec);
+    features.m_featuresVec[0] -= 12.0;
+    features.m_featuresVec[1] -= 4.0;
+
+    VERIFY(CEPS_MEM > 2);
+    uint32_t stIdx1 = this->m_memId < 1 ? CEPS_MEM + this->m_memId - 1 : this->m_memId - 1;
+    uint32_t stIdx2 = this->m_memId < 2 ? CEPS_MEM + this->m_memId - 2 : this->m_memId - 2;
+
+    auto ceps1 = this->m_cepstralMem[stIdx1];
+    auto ceps2 = this->m_cepstralMem[stIdx2];
+
+    /* Ceps 0 */
+    for (uint32_t i = 0; i < NB_BANDS; ++i) {
+        this->m_cepstralMem[this->m_memId][i] = features.m_featuresVec[i];
+    }
+
+    for (uint32_t i = 0; i < NB_DELTA_CEPS; ++i) {
+        features.m_featuresVec[i] = this->m_cepstralMem[this->m_memId][i] + ceps1[i] + ceps2[i];
+        features.m_featuresVec[NB_BANDS + i] = this->m_cepstralMem[this->m_memId][i] - ceps2[i];
+        features.m_featuresVec[NB_BANDS + NB_DELTA_CEPS + i] =
+                this->m_cepstralMem[this->m_memId][i] - 2 * ceps1[i] + ceps2[i];
+    }
+
+    /* Spectral variability features. */
+    this->m_memId += 1;
+    if (this->m_memId == CEPS_MEM) {
+        this->m_memId = 0;
+    }
+
+    float specVariability = 0.f;
+
+    VERIFY(this->m_cepstralMem.size() >= CEPS_MEM);
+    for (size_t i = 0; i < CEPS_MEM; ++i) {
+        float minDist = 1e15;
+        for (size_t j = 0; j < CEPS_MEM; ++j) {
+            float dist = 0.f;
+            for (size_t k = 0; k < NB_BANDS; ++k) {
+                VERIFY(this->m_cepstralMem[i].size() >= NB_BANDS);
+                auto tmp = this->m_cepstralMem[i][k] - this->m_cepstralMem[j][k];
+                dist += tmp * tmp;
+            }
+
+            if (j != i) {
+                minDist = std::min<float>(minDist, dist);
+            }
+        }
+        specVariability += minDist;
+    }
+
+    VERIFY(features.m_featuresVec.size() >= NB_BANDS + 3 * NB_DELTA_CEPS + 1);
+    features.m_featuresVec[NB_BANDS + 3 * NB_DELTA_CEPS + 1] = specVariability / CEPS_MEM - 2.1;
+}
+
+void RNNoiseProcess::FrameAnalysis(
+    const vec1D32F& audioWindow,
+    vec1D32F& fft,
+    vec1D32F& energy,
+    vec1D32F& analysisMem)
+{
+    vec1D32F x(WINDOW_SIZE, 0);
+
+    /* Move old audio down and populate end with latest audio window. */
+    VERIFY(x.size() >= FRAME_SIZE && analysisMem.size() >= FRAME_SIZE);
+    VERIFY(audioWindow.size() >= FRAME_SIZE);
+
+    std::copy_n(analysisMem.begin(), FRAME_SIZE, x.begin());
+    std::copy_n(audioWindow.begin(), x.size() - FRAME_SIZE, x.begin() + FRAME_SIZE);
+    std::copy_n(audioWindow.begin(), FRAME_SIZE, analysisMem.begin());
+
+    this->ApplyWindow(x);
+
+    /* Calculate FFT. */
+    ForwardTransform(x, fft);
+
+    /* Compute band energy. */
+    ComputeBandEnergy(fft, energy);
+}
+
+void RNNoiseProcess::ApplyWindow(vec1D32F& x)
+{
+    if (WINDOW_SIZE != x.size()) {
+        printf_err("Invalid size for vector to be windowed\n");
+        return;
+    }
+
+    VERIFY(this->m_halfWindow.size() >= FRAME_SIZE);
+
+    /* Multiply input by sinusoidal function. */
+    for (size_t i = 0; i < FRAME_SIZE; i++) {
+        x[i] *= this->m_halfWindow[i];
+        x[WINDOW_SIZE - 1 - i] *= this->m_halfWindow[i];
+    }
+}
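The half-window built in `InitTables` and applied here is a Vorbis-style sin((pi/2)*sin^2) window. It satisfies h[i]^2 + h[N-1-i]^2 = 1 (the Princen-Bradley condition), which is what lets the later overlap-add synthesis reconstruct the signal cleanly. A sketch of the coefficient computation, with `MakeHalfWindow` as an illustrative stand-in for the `InitTables` loop:

```cpp
#include <cmath>
#include <vector>

/* Half-window coefficients, as in InitTables():
 * h[i] = sin((pi/2) * sin^2((pi/2) * (i + 0.5) / n)),
 * where n stands in for FRAME_SIZE. */
inline std::vector<float> MakeHalfWindow(unsigned n)
{
    std::vector<float> h(n);
    const float halfPi = static_cast<float>(M_PI) / 2.0f;
    for (unsigned i = 0; i < n; ++i) {
        const float s = std::sin(halfPi * (i + 0.5f) / n);
        h[i] = std::sin(halfPi * s * s);
    }
    return h;
}
```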
+
+void RNNoiseProcess::ForwardTransform(
+    vec1D32F& x,
+    vec1D32F& fft)
+{
+    /* The input vector can be modified by the fft function. */
+    fft.reserve(x.size() + 2);
+    fft.resize(x.size() + 2, 0);
+    math::MathUtils::FftF32(x, fft, this->m_fftInstReal);
+
+    /* Normalise. */
+    for (auto& f : fft) {
+        f /= this->m_fftInstReal.m_fftLen;
+    }
+
+    /* Place the last freq element correctly */
+    fft[fft.size()-2] = fft[1];
+    fft[1] = 0;
+
+    /* NOTE: We don't truncate our FFT vector as it already contains only the
+     * first half of the FFT; the conjugates are not present. */
+}
+
+void RNNoiseProcess::ComputeBandEnergy(const vec1D32F& fftX, vec1D32F& bandE)
+{
+    bandE = vec1D32F(NB_BANDS, 0);
+
+    VERIFY(this->m_eband5ms.size() >= NB_BANDS);
+    for (uint32_t i = 0; i < NB_BANDS - 1; i++) {
+        const auto bandSize = (this->m_eband5ms[i + 1] - this->m_eband5ms[i])
+                              << FRAME_SIZE_SHIFT;
+
+        for (uint32_t j = 0; j < bandSize; j++) {
+            const auto frac = static_cast<float>(j) / bandSize;
+            const auto idx = (this->m_eband5ms[i] << FRAME_SIZE_SHIFT) + j;
+
+            auto tmp = fftX[2 * idx] * fftX[2 * idx]; /* Real part */
+            tmp += fftX[2 * idx + 1] * fftX[2 * idx + 1]; /* Imaginary part */
+
+            bandE[i] += (1 - frac) * tmp;
+            bandE[i + 1] += frac * tmp;
+        }
+    }
+    bandE[0] *= 2;
+    bandE[NB_BANDS - 1] *= 2;
+}
+
+void RNNoiseProcess::ComputeBandCorr(const vec1D32F& X, const vec1D32F& P, vec1D32F& bandC)
+{
+    bandC = vec1D32F(NB_BANDS, 0);
+    VERIFY(this->m_eband5ms.size() >= NB_BANDS);
+
+    for (uint32_t i = 0; i < NB_BANDS - 1; i++) {
+        const auto bandSize = (this->m_eband5ms[i + 1] - this->m_eband5ms[i]) << FRAME_SIZE_SHIFT;
+
+        for (uint32_t j = 0; j < bandSize; j++) {
+            const auto frac = static_cast<float>(j) / bandSize;
+            const auto idx = (this->m_eband5ms[i] << FRAME_SIZE_SHIFT) + j;
+
+            auto tmp = X[2 * idx] * P[2 * idx]; /* Real part */
+            tmp += X[2 * idx + 1] * P[2 * idx + 1]; /* Imaginary part */
+
+            bandC[i] += (1 - frac) * tmp;
+            bandC[i + 1] += frac * tmp;
+        }
+    }
+    bandC[0] *= 2;
+    bandC[NB_BANDS - 1] *= 2;
+}
+
+void RNNoiseProcess::DCT(vec1D32F& input, vec1D32F& output)
+{
+    VERIFY(this->m_dctTable.size() >= NB_BANDS * NB_BANDS);
+    for (uint32_t i = 0; i < NB_BANDS; ++i) {
+        float sum = 0;
+
+        for (uint32_t j = 0, k = 0; j < NB_BANDS; ++j, k += NB_BANDS) {
+            sum += input[j] * this->m_dctTable[k + i];
+        }
+        output[i] = sum * math::MathUtils::SqrtF32(2.0f / NB_BANDS);
+    }
+}
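The table-based loop above computes an orthonormal DCT-II. A direct, table-free sketch of the same transform for an arbitrary length (illustrative only):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

/* Orthonormal DCT-II, matching the table layout above:
 * out[i] = sqrt(2/n) * sum_j in[j] * cos(pi * i * (j + 0.5) / n),
 * with the i == 0 basis additionally scaled by sqrt(0.5). */
inline std::vector<float> Dct(const std::vector<float>& in)
{
    const size_t n = in.size();
    std::vector<float> out(n, 0.0f);
    for (size_t i = 0; i < n; ++i) {
        float sum = 0.0f;
        for (size_t j = 0; j < n; ++j) {
            float c = static_cast<float>(std::cos(M_PI * i * (j + 0.5) / n));
            if (i == 0) {
                c *= std::sqrt(0.5f);
            }
            sum += in[j] * c;
        }
        out[i] = sum * std::sqrt(2.0f / n);
    }
    return out;
}
```

For a constant input, all the energy lands in the DC coefficient, which gives a quick correctness check.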
+
+void RNNoiseProcess::PitchDownsample(vec1D32F& pitchBuf, size_t pitchBufSz) {
+    for (size_t i = 1; i < (pitchBufSz >> 1); ++i) {
+        pitchBuf[i] = 0.5 * (
+                        0.5 * (this->m_pitchBuf[2 * i - 1] + this->m_pitchBuf[2 * i + 1])
+                            + this->m_pitchBuf[2 * i]);
+    }
+
+    pitchBuf[0] = 0.5*(0.5*(this->m_pitchBuf[1]) + this->m_pitchBuf[0]);
+
+    vec1D32F ac(5, 0);
+    size_t numLags = 4;
+
+    this->AutoCorr(pitchBuf, ac, numLags, pitchBufSz >> 1);
+
+    /* Noise floor at -40 dB. */
+    ac[0] *= 1.0001;
+
+    /* Lag windowing. */
+    for (size_t i = 1; i < numLags + 1; ++i) {
+        ac[i] -= ac[i] * (0.008 * i) * (0.008 * i);
+    }
+
+    vec1D32F lpc(numLags, 0);
+    this->LPC(ac, numLags, lpc);
+
+    float tmp = 1.0;
+    for (size_t i = 0; i < numLags; ++i) {
+        tmp = 0.9f * tmp;
+        lpc[i] = lpc[i] * tmp;
+    }
+
+    vec1D32F lpc2(numLags + 1, 0);
+    float c1 = 0.8;
+
+    /* Add a zero. */
+    lpc2[0] = lpc[0] + 0.8;
+    lpc2[1] = lpc[1] + (c1 * lpc[0]);
+    lpc2[2] = lpc[2] + (c1 * lpc[1]);
+    lpc2[3] = lpc[3] + (c1 * lpc[2]);
+    lpc2[4] = (c1 * lpc[3]);
+
+    this->Fir5(lpc2, pitchBufSz >> 1, pitchBuf);
+}
+
+int RNNoiseProcess::PitchSearch(vec1D32F& xLp, vec1D32F& y, uint32_t len, uint32_t maxPitch) {
+    uint32_t lag = len + maxPitch;
+    vec1D32F xLp4(len >> 2, 0);
+    vec1D32F yLp4(lag >> 2, 0);
+    vec1D32F xCorr(maxPitch >> 1, 0);
+
+    /* Downsample by 2 again. */
+    for (size_t j = 0; j < (len >> 2); ++j) {
+        xLp4[j] = xLp[2*j];
+    }
+    for (size_t j = 0; j < (lag >> 2); ++j) {
+        yLp4[j] = y[2*j];
+    }
+
+    this->PitchXCorr(xLp4, yLp4, xCorr, len >> 2, maxPitch >> 2);
+
+    /* Coarse search with 4x decimation. */
+    arrHp bestPitch = this->FindBestPitch(xCorr, yLp4, len >> 2, maxPitch >> 2);
+
+    /* Finer search with 2x decimation. */
+    const int maxIdx = (maxPitch >> 1);
+    for (int i = 0; i < maxIdx; ++i) {
+        xCorr[i] = 0;
+        if (std::abs(i - 2*bestPitch[0]) > 2 and std::abs(i - 2*bestPitch[1]) > 2) {
+            continue;
+        }
+        float sum = 0;
+        for (size_t j = 0; j < len >> 1; ++j) {
+            sum += xLp[j] * y[i+j];
+        }
+
+        xCorr[i] = std::max(-1.0f, sum);
+    }
+
+    bestPitch = this->FindBestPitch(xCorr, y, len >> 1, maxPitch >> 1);
+
+    int offset;
+    /* Refine by pseudo-interpolation. */
+    if ( 0 < bestPitch[0] && bestPitch[0] < ((maxPitch >> 1) - 1)) {
+        float a = xCorr[bestPitch[0] - 1];
+        float b = xCorr[bestPitch[0]];
+        float c = xCorr[bestPitch[0] + 1];
+
+        if ( (c-a) > 0.7*(b-a) ) {
+            offset = 1;
+        } else if ( (a-c) > 0.7*(b-c) ) {
+            offset = -1;
+        } else {
+            offset = 0;
+        }
+    } else {
+        offset = 0;
+    }
+
+    return 2*bestPitch[0] - offset;
+}
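The pseudo-interpolation step above picks a one-sample offset by comparing the correlation at the two neighbours of the best lag. Sign conventions aside, the decision itself is tiny and worth checking in isolation (the 0.7 factor matches the threshold used above):

```cpp
#include <cassert>

/* Sketch of the pseudo-interpolation rule: given the correlation at lag-1
 * (a), the best lag (b) and lag+1 (c), lean one sample toward whichever
 * neighbour dominates; otherwise stay put. */
int PseudoInterpOffset(float a, float b, float c)
{
    if ((c - a) > 0.7f * (b - a)) {
        return 1;   /* Right neighbour dominates. */
    }
    if ((a - c) > 0.7f * (b - c)) {
        return -1;  /* Left neighbour dominates. */
    }
    return 0;
}
```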
+
+arrHp RNNoiseProcess::FindBestPitch(vec1D32F& xCorr, vec1D32F& y, uint32_t len, uint32_t maxPitch)
+{
+    float Syy = 1;
+    arrHp bestNum {-1, -1};
+    arrHp bestDen {0, 0};
+    arrHp bestPitch {0, 1};
+
+    for (size_t j = 0; j < len; ++j) {
+        Syy += (y[j] * y[j]);
+    }
+
+    for (size_t i = 0; i < maxPitch; ++i ) {
+        if (xCorr[i] > 0) {
+            float xCorr16 = xCorr[i] * 1e-12f;  /* Avoid problems when squaring. */
+
+            float num = xCorr16 * xCorr16;
+            if (num*bestDen[1] > bestNum[1]*Syy) {
+                if (num*bestDen[0] > bestNum[0]*Syy) {
+                    bestNum[1] = bestNum[0];
+                    bestDen[1] = bestDen[0];
+                    bestPitch[1] = bestPitch[0];
+                    bestNum[0] = num;
+                    bestDen[0] = Syy;
+                    bestPitch[0] = i;
+                } else {
+                    bestNum[1] = num;
+                    bestDen[1] = Syy;
+                    bestPitch[1] = i;
+                }
+            }
+        }
+
+        Syy += (y[i+len]*y[i+len]) - (y[i]*y[i]);
+        Syy = std::max(1.0f, Syy);
+    }
+
+    return bestPitch;
+}
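`FindBestPitch` avoids recomputing the window energy for every candidate lag: `Syy` is updated incrementally by adding the sample entering the window and subtracting the one leaving it, clamped to at least 1. That rolling update can be verified against a direct computation:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

/* Sketch of the rolling window-energy update: the energy of y[i..i+len) is
 * derived from the previous lag's value with one add and one subtract,
 * using the same +1 bias and clamp as FindBestPitch. */
std::vector<float> RollingEnergy(const std::vector<float>& y,
                                 uint32_t len, uint32_t maxPitch)
{
    std::vector<float> energies;
    float syy = 1;
    for (uint32_t j = 0; j < len; ++j) {
        syy += y[j] * y[j];
    }
    for (uint32_t i = 0; i < maxPitch; ++i) {
        energies.push_back(syy);
        syy += (y[i + len] * y[i + len]) - (y[i] * y[i]);
        syy = std::max(1.0f, syy);
    }
    return energies;
}
```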
+
+int RNNoiseProcess::RemoveDoubling(
+    vec1D32F& pitchBuf,
+    uint32_t maxPeriod,
+    uint32_t minPeriod,
+    uint32_t frameSize,
+    size_t pitchIdx0_)
+{
+    constexpr std::array<size_t, 16> secondCheck {0, 0, 3, 2, 3, 2, 5, 2, 3, 2, 3, 2, 5, 2, 3, 2};
+    uint32_t minPeriod0 = minPeriod;
+    float lastPeriod = static_cast<float>(this->m_lastPeriod)/2;
+    float lastGain = static_cast<float>(this->m_lastGain);
+
+    maxPeriod /= 2;
+    minPeriod /= 2;
+    pitchIdx0_ /= 2;
+    frameSize /= 2;
+    uint32_t xStart = maxPeriod;
+
+    if (pitchIdx0_ >= maxPeriod) {
+        pitchIdx0_ = maxPeriod - 1;
+    }
+
+    size_t pitchIdx  = pitchIdx0_;
+    size_t pitchIdx0 = pitchIdx0_;
+
+    float xx = 0;
+    for ( size_t i = xStart; i < xStart+frameSize; ++i) {
+        xx += (pitchBuf[i] * pitchBuf[i]);
+    }
+
+    float xy = 0;
+    for ( size_t i = xStart; i < xStart+frameSize; ++i) {
+        xy += (pitchBuf[i] * pitchBuf[i-pitchIdx0]);
+    }
+
+    vec1D32F yyLookup (maxPeriod+1, 0);
+    yyLookup[0] = xx;
+    float yy = xx;
+
+    for ( size_t i = 1; i < maxPeriod+1; ++i) {
+        yy = yy + (pitchBuf[xStart-i] * pitchBuf[xStart-i]) -
+                (pitchBuf[xStart+frameSize-i] * pitchBuf[xStart+frameSize-i]);
+        yyLookup[i] = std::max(0.0f, yy);
+    }
+
+    yy = yyLookup[pitchIdx0];
+    float bestXy = xy;
+    float bestYy = yy;
+
+    float g = this->ComputePitchGain(xy, xx, yy);
+    float g0 = g;
+
+    /* Look for any pitch at pitchIndex/k. */
+    for ( size_t k = 2; k < 16; ++k) {
+        size_t pitchIdx1 = (2*pitchIdx0+k) / (2*k);
+        if (pitchIdx1 < minPeriod) {
+            break;
+        }
+
+        size_t pitchIdx1b;
+        /* Look for another strong correlation at T1b. */
+        if (k == 2) {
+            if ((pitchIdx1 + pitchIdx0) > maxPeriod) {
+                pitchIdx1b = pitchIdx0;
+            } else {
+                pitchIdx1b = pitchIdx0 + pitchIdx1;
+            }
+        } else {
+            pitchIdx1b = (2*(secondCheck[k])*pitchIdx0 + k) / (2*k);
+        }
+
+        xy = 0;
+        for ( size_t i = xStart; i < xStart+frameSize; ++i) {
+            xy += (pitchBuf[i] * pitchBuf[i-pitchIdx1]);
+        }
+
+        float xy2 = 0;
+        for ( size_t i = xStart; i < xStart+frameSize; ++i) {
+            xy2 += (pitchBuf[i] * pitchBuf[i-pitchIdx1b]);
+        }
+        xy = 0.5f * (xy + xy2);
+        yy = 0.5f * (yyLookup[pitchIdx1] + yyLookup[pitchIdx1b]);
+
+        float g1 = this->ComputePitchGain(xy, xx, yy);
+
+        float cont;
+        if (std::abs(pitchIdx1-lastPeriod) <= 1) {
+            cont = lastGain;
+        } else if (std::abs(pitchIdx1-lastPeriod) <= 2 and 5*k*k < pitchIdx0) {
+            cont = 0.5f*lastGain;
+        } else {
+            cont = 0.0f;
+        }
+
+        float thresh = std::max(0.3, 0.7*g0-cont);
+
+        /* Bias against very high pitch (very short period) to avoid false-positives
+         * due to short-term correlation */
+        if (pitchIdx1 < 3*minPeriod) {
+            thresh = std::max(0.4, 0.85*g0-cont);
+        } else if (pitchIdx1 < 2*minPeriod) {
+            thresh = std::max(0.5, 0.9*g0-cont);
+        }
+        if (g1 > thresh) {
+            bestXy = xy;
+            bestYy = yy;
+            pitchIdx = pitchIdx1;
+            g = g1;
+        }
+    }
+
+    bestXy = std::max(0.0f, bestXy);
+    float pg;
+    if (bestYy <= bestXy) {
+        pg = 1.0;
+    } else {
+        pg = bestXy/(bestYy+1);
+    }
+
+    std::array<float, 3> xCorr {0};
+    for ( size_t k = 0; k < 3; ++k ) {
+        for ( size_t i = xStart; i < xStart+frameSize; ++i) {
+            xCorr[k] += (pitchBuf[i] * pitchBuf[i-(pitchIdx+k-1)]);
+        }
+    }
+
+    int32_t offset;  /* Can be -1; must not be an unsigned type. */
+    if ((xCorr[2]-xCorr[0]) > 0.7*(xCorr[1]-xCorr[0])) {
+        offset = 1;
+    } else if ((xCorr[0]-xCorr[2]) > 0.7*(xCorr[1]-xCorr[2])) {
+        offset = -1;
+    } else {
+        offset = 0;
+    }
+
+    if (pg > g) {
+        pg = g;
+    }
+
+    pitchIdx0_ = 2*pitchIdx + offset;
+
+    if (pitchIdx0_ < minPeriod0) {
+        pitchIdx0_ = minPeriod0;
+    }
+
+    this->m_lastPeriod = pitchIdx0_;
+    this->m_lastGain = pg;
+
+    return this->m_lastPeriod;
+}
+
+float RNNoiseProcess::ComputePitchGain(float xy, float xx, float yy)
+{
+    return xy / math::MathUtils::SqrtF32(1+xx*yy);
+}
+
+void RNNoiseProcess::AutoCorr(
+    const vec1D32F& x,
+    vec1D32F& ac,
+    size_t lag,
+    size_t n)
+{
+    if (n < lag) {
+        printf_err("Invalid parameters for AutoCorr\n");
+        return;
+    }
+
+    auto fastN = n - lag;
+
+    /* Auto-correlation - can be done by PlatformMath functions */
+    this->PitchXCorr(x, x, ac, fastN, lag + 1);
+
+    /* Modify auto-correlation by summing with auto-correlation for different lags. */
+    for (size_t k = 0; k < lag + 1; k++) {
+        float d = 0;
+        for (size_t i = k + fastN; i < n; i++) {
+            d += x[i] * x[i - k];
+        }
+        ac[k] += d;
+    }
+}
+
+
+void RNNoiseProcess::PitchXCorr(
+    const vec1D32F& x,
+    const vec1D32F& y,
+    vec1D32F& ac,
+    size_t len,
+    size_t maxPitch)
+{
+    for (size_t i = 0; i < maxPitch; i++) {
+        float sum = 0;
+        for (size_t j = 0; j < len; j++) {
+            sum += x[j] * y[i + j];
+        }
+        ac[i] = sum;
+    }
+}
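`PitchXCorr` is a plain sliding cross-correlation: for each candidate lag `i`, the dot product of `x` with `y` shifted by `i`. A small worked version with vectors in place of the member types:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

/* Standalone version of the sliding cross-correlation above: ac[i] is the
 * dot product of x[0..len) with y[i..i+len). */
std::vector<float> XCorr(const std::vector<float>& x,
                         const std::vector<float>& y,
                         size_t len, size_t maxPitch)
{
    std::vector<float> ac(maxPitch, 0.0f);
    for (size_t i = 0; i < maxPitch; ++i) {
        float sum = 0;
        for (size_t j = 0; j < len; ++j) {
            sum += x[j] * y[i + j];
        }
        ac[i] = sum;
    }
    return ac;
}
```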
+
+/* Linear predictor coefficients */
+void RNNoiseProcess::LPC(
+    const vec1D32F& ac,
+    int32_t p,
+    vec1D32F& lpc)
+{
+    auto error = ac[0];
+
+    if (error != 0) {
+        for (int i = 0; i < p; i++) {
+
+            /* Sum up this iteration's reflection coefficient */
+            float rr = 0;
+            for (int j = 0; j < i; j++) {
+                rr += lpc[j] * ac[i - j];
+            }
+
+            rr += ac[i + 1];
+            auto r = -rr / error;
+
+            /* Update LP coefficients and total error */
+            lpc[i] = r;
+            for (int j = 0; j < ((i + 1) >> 1); j++) {
+                auto tmp1 = lpc[j];
+                auto tmp2 = lpc[i - 1 - j];
+                lpc[j] = tmp1 + (r * tmp2);
+                lpc[i - 1 - j] = tmp2 + (r * tmp1);
+            }
+
+            error = error - (r * r * error);
+
+            /* Bail out once we reach 30 dB of prediction gain. */
+            if (error < (0.001 * ac[0])) {
+                break;
+            }
+        }
+    }
+}
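The LPC routine above is the Levinson-Durbin recursion over the autocorrelation. With the sign convention used here (`r = -rr / error`), a pure AR(1) autocorrelation sequence `rho^k` should yield a first coefficient of `-rho` and near-zero higher coefficients, which makes a handy sanity check of a standalone copy:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

/* Standalone Levinson-Durbin recursion matching LPC above, including the
 * early-out once the residual drops below 0.1% of ac[0] (~30 dB gain). */
std::vector<float> Lpc(const std::vector<float>& ac, int32_t p)
{
    std::vector<float> lpc(p, 0.0f);
    float error = ac[0];
    if (error == 0) {
        return lpc;
    }
    for (int32_t i = 0; i < p; ++i) {
        float rr = 0;
        for (int32_t j = 0; j < i; ++j) {
            rr += lpc[j] * ac[i - j];
        }
        rr += ac[i + 1];
        const float r = -rr / error;  /* Reflection coefficient. */
        lpc[i] = r;
        for (int32_t j = 0; j < ((i + 1) >> 1); ++j) {
            const float tmp1 = lpc[j];
            const float tmp2 = lpc[i - 1 - j];
            lpc[j] = tmp1 + r * tmp2;
            lpc[i - 1 - j] = tmp2 + r * tmp1;
        }
        error -= r * r * error;
        if (error < 0.001f * ac[0]) {
            break;
        }
    }
    return lpc;
}
```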
+
+void RNNoiseProcess::Fir5(
+    const vec1D32F &num,
+    uint32_t N,
+    vec1D32F &x)
+{
+    auto num0 = num[0];
+    auto num1 = num[1];
+    auto num2 = num[2];
+    auto num3 = num[3];
+    auto num4 = num[4];
+    float mem0 = 0;
+    float mem1 = 0;
+    float mem2 = 0;
+    float mem3 = 0;
+    float mem4 = 0;
+    for (uint32_t i = 0; i < N; i++)
+    {
+        auto sum_ = x[i] +  (num0 * mem0) + (num1 * mem1) +
+                    (num2 * mem2) + (num3 * mem3) + (num4 * mem4);
+        mem4 = mem3;
+        mem3 = mem2;
+        mem2 = mem1;
+        mem1 = mem0;
+        mem0 = x[i];
+        x[i] = sum_;
+    }
+}
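`Fir5` adds a 5-tap FIR of the signal's own history back onto the signal, in place: `y[i] = x[i] + sum_k num[k] * x[i-1-k]`. Feeding a unit impulse through an equivalent standalone filter recovers the taps directly, which is a quick way to validate the delay-line bookkeeping:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

/* Standalone equivalent of Fir5, with the delay line kept explicitly as a
 * float array. Note the delay line stores the *input* samples, not the
 * filtered outputs. */
void Fir5Sketch(const std::array<float, 5>& num, std::vector<float>& x)
{
    std::array<float, 5> mem {0, 0, 0, 0, 0};
    for (float& sample : x) {
        float sum = sample;
        for (size_t k = 0; k < 5; ++k) {
            sum += num[k] * mem[k];
        }
        for (size_t k = 4; k > 0; --k) {  /* Shift the delay line. */
            mem[k] = mem[k - 1];
        }
        mem[0] = sample;  /* Store the original input sample. */
        sample = sum;
    }
}
```

The impulse response is `{1, num[0], num[1], num[2], num[3], num[4]}`, i.e. the identity plus the taps delayed by one sample.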
+
+void RNNoiseProcess::PitchFilter(FrameFeatures &features, vec1D32F &g) {
+    std::vector<float> r(NB_BANDS, 0);
+    std::vector<float> rf(FREQ_SIZE, 0);
+    std::vector<float> newE(NB_BANDS);
+
+    for (size_t i = 0; i < NB_BANDS; i++) {
+        if (features.m_Exp[i] > g[i]) {
+            r[i] = 1;
+        } else {
+            r[i] = std::pow(features.m_Exp[i], 2) * (1 - std::pow(g[i], 2)) /
+                   (.001 + std::pow(g[i], 2) * (1 - std::pow(features.m_Exp[i], 2)));
+        }
+
+        r[i] = math::MathUtils::SqrtF32(std::min(1.0f, std::max(0.0f, r[i])));
+        r[i] *= math::MathUtils::SqrtF32(features.m_Ex[i] / (1e-8f + features.m_Ep[i]));
+    }
+
+    InterpBandGain(rf, r);
+    for (size_t i = 0; i < FREQ_SIZE - 1; i++) {
+        features.m_fftX[2 * i] += rf[i] * features.m_fftP[2 * i];  /* Real. */
+        features.m_fftX[2 * i + 1] += rf[i] * features.m_fftP[2 * i + 1];  /* Imaginary. */
+    }
+    ComputeBandEnergy(features.m_fftX, newE);
+    std::vector<float> norm(NB_BANDS);
+    std::vector<float> normf(FRAME_SIZE, 0);
+    for (size_t i = 0; i < NB_BANDS; i++) {
+        norm[i] = math::MathUtils::SqrtF32(features.m_Ex[i] / (1e-8f + newE[i]));
+    }
+
+    InterpBandGain(normf, norm);
+    for (size_t i = 0; i < FREQ_SIZE - 1; i++) {
+        features.m_fftX[2 * i] *= normf[i];  /* Real. */
+        features.m_fftX[2 * i + 1] *= normf[i];  /* Imaginary. */
+    }
+}
+
+void RNNoiseProcess::FrameSynthesis(vec1D32F& outFrame, vec1D32F& fftY) {
+    std::vector<float> x(WINDOW_SIZE, 0);
+    InverseTransform(x, fftY);
+    ApplyWindow(x);
+    for (size_t i = 0; i < FRAME_SIZE; i++) {
+        outFrame[i] = x[i] + m_synthesisMem[i];
+    }
+    memcpy(m_synthesisMem.data(), &x[FRAME_SIZE], FRAME_SIZE * sizeof(float));
+}
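`FrameSynthesis` implements windowed overlap-add: the first half of each windowed frame is summed with the saved second half of the previous frame, and the current second half is stashed for next time. The bookkeeping can be sketched without the FFT or window:

```cpp
#include <cassert>
#include <cmath>
#include <cstring>
#include <vector>

/* Sketch of the overlap-add bookkeeping in FrameSynthesis (window and
 * inverse transform omitted): each output frame is the current windowed
 * frame's first half plus the previous frame's saved second half. */
class OverlapAdd {
public:
    explicit OverlapAdd(size_t frameSize) : m_mem(frameSize, 0.0f) {}

    std::vector<float> Process(const std::vector<float>& windowed)
    {
        const size_t n = m_mem.size();  /* windowed.size() == 2 * n. */
        std::vector<float> out(n);
        for (size_t i = 0; i < n; ++i) {
            out[i] = windowed[i] + m_mem[i];
        }
        /* Save the second half of the window for the next frame. */
        std::memcpy(m_mem.data(), &windowed[n], n * sizeof(float));
        return out;
    }

private:
    std::vector<float> m_mem;
};
```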
+
+void RNNoiseProcess::InterpBandGain(vec1D32F& g, vec1D32F& bandE) {
+    for (size_t i = 0; i < NB_BANDS - 1; i++) {
+        int bandSize = (m_eband5ms[i + 1] - m_eband5ms[i]) << FRAME_SIZE_SHIFT;
+        for (int j = 0; j < bandSize; j++) {
+            float frac = static_cast<float>(j) / bandSize;
+            g[(m_eband5ms[i] << FRAME_SIZE_SHIFT) + j] = (1 - frac) * bandE[i] + frac * bandE[i + 1];
+        }
+    }
+}
+
+void RNNoiseProcess::InverseTransform(vec1D32F& out, vec1D32F& fftXIn) {
+
+    std::vector<float> x(WINDOW_SIZE * 2);  /* This is complex. */
+    vec1D32F newFFT;  /* This is complex. */
+
+    size_t i;
+    for (i = 0; i < FREQ_SIZE * 2; i++) {
+        x[i] = fftXIn[i];
+    }
+    for (i = FREQ_SIZE; i < WINDOW_SIZE; i++) {
+        x[2 * i] = x[2 * (WINDOW_SIZE - i)];  /* Real. */
+        x[2 * i + 1] = -x[2 * (WINDOW_SIZE - i) + 1];  /* Imaginary. */
+    }
+
+    constexpr uint32_t numFFt = 2 * FRAME_SIZE;
+    static_assert(numFFt != 0);
+
+    vec1D32F fftOut = vec1D32F(x.size(), 0);
+    math::MathUtils::FftF32(x, fftOut, m_fftInstCmplx);
+
+    /* Normalize. */
+    for (auto &f: fftOut) {
+        f /= numFFt;
+    }
+
+    out[0] = WINDOW_SIZE * fftOut[0];  /* Real. */
+    for (i = 1; i < WINDOW_SIZE; i++) {
+        out[i] = WINDOW_SIZE * fftOut[(WINDOW_SIZE * 2) - (2 * i)];  /* Real. */
+    }
+}
+
+
+} /* namespace rnn */
+} /* namespace app */
+} /* namespace arm */
diff --git a/source/use_case/noise_reduction/src/UseCaseHandler.cc b/source/use_case/noise_reduction/src/UseCaseHandler.cc
new file mode 100644
index 0000000..12579df
--- /dev/null
+++ b/source/use_case/noise_reduction/src/UseCaseHandler.cc
@@ -0,0 +1,367 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <cmath>
+#include <cstring>
+#include <limits>
+#include <algorithm>
+
+#include "UseCaseHandler.hpp"
+#include "hal.h"
+#include "UseCaseCommonUtils.hpp"
+#include "AudioUtils.hpp"
+#include "InputFiles.hpp"
+#include "RNNoiseModel.hpp"
+#include "RNNoiseProcess.hpp"
+
+namespace arm {
+namespace app {
+
+    /**
+    * @brief            Helper function to increment current audio clip features index.
+    * @param[in,out]    ctx   Reference to the application context object.
+    **/
+    static void IncrementAppCtxClipIdx(ApplicationContext& ctx);
+
+    /**
+    * @brief            Quantize the given features and populate the input Tensor.
+    * @param[in]        inputFeatures   Vector of floating point features to quantize.
+    * @param[in]        quantScale      Quantization scale for the inputTensor.
+    * @param[in]        quantOffset     Quantization offset for the inputTensor.
+    * @param[in,out]    inputTensor     TFLite micro tensor to populate.
+    **/
+    static void QuantizeAndPopulateInput(rnn::vec1D32F& inputFeatures,
+                                         float quantScale, int quantOffset,
+                                         TfLiteTensor* inputTensor);
+
+    /* Noise reduction inference handler. */
+    bool NoiseReductionHandler(ApplicationContext& ctx, bool runAll)
+    {
+        constexpr uint32_t dataPsnTxtInfStartX = 20;
+        constexpr uint32_t dataPsnTxtInfStartY = 40;
+
+        /* Variables used for memory dumping. */
+        size_t memDumpMaxLen = 0;
+        uint8_t* memDumpBaseAddr = nullptr;
+        size_t undefMemDumpBytesWritten = 0;
+        size_t *pMemDumpBytesWritten = &undefMemDumpBytesWritten;
+        if (ctx.Has("MEM_DUMP_LEN") && ctx.Has("MEM_DUMP_BASE_ADDR") && ctx.Has("MEM_DUMP_BYTE_WRITTEN")) {
+            memDumpMaxLen = ctx.Get<size_t>("MEM_DUMP_LEN");
+            memDumpBaseAddr = ctx.Get<uint8_t*>("MEM_DUMP_BASE_ADDR");
+            pMemDumpBytesWritten = ctx.Get<size_t*>("MEM_DUMP_BYTE_WRITTEN");
+        }
+        std::reference_wrapper<size_t> memDumpBytesWritten = std::ref(*pMemDumpBytesWritten);
+
+        auto& platform = ctx.Get<hal_platform&>("platform");
+        platform.data_psn->clear(COLOR_BLACK);
+
+        auto& profiler = ctx.Get<Profiler&>("profiler");
+
+        /* Get model reference. */
+        auto& model = ctx.Get<RNNoiseModel&>("model");
+        if (!model.IsInited()) {
+            printf_err("Model is not initialised! Terminating processing.\n");
+            return false;
+        }
+
+        /* Populate Pre-Processing related parameters. */
+        auto audioParamsWinLen = ctx.Get<uint32_t>("frameLength");
+        auto audioParamsWinStride = ctx.Get<uint32_t>("frameStride");
+        auto nrNumInputFeatures = ctx.Get<uint32_t>("numInputFeatures");
+
+        TfLiteTensor* inputTensor = model.GetInputTensor(0);
+        if (nrNumInputFeatures != inputTensor->bytes) {
+            printf_err("Input features size must be equal to input tensor size."
+                       " Feature size = %" PRIu32 ", Tensor size = %zu.\n",
+                       nrNumInputFeatures, inputTensor->bytes);
+            return false;
+        }
+
+        TfLiteTensor* outputTensor = model.GetOutputTensor(model.m_indexForModelOutput);
+
+        /* Initial choice of index for WAV file. */
+        auto startClipIdx = ctx.Get<uint32_t>("clipIndex");
+
+        std::function<const int16_t* (const uint32_t)> audioAccessorFunc = get_audio_array;
+        if (ctx.Has("features")) {
+            audioAccessorFunc = ctx.Get<std::function<const int16_t* (const uint32_t)>>("features");
+        }
+        std::function<uint32_t (const uint32_t)> audioSizeAccessorFunc = get_audio_array_size;
+        if (ctx.Has("featureSizes")) {
+            audioSizeAccessorFunc = ctx.Get<std::function<uint32_t (const uint32_t)>>("featureSizes");
+        }
+        std::function<const char*(const uint32_t)> audioFileAccessorFunc = get_filename;
+        if (ctx.Has("featureFileNames")) {
+            audioFileAccessorFunc = ctx.Get<std::function<const char*(const uint32_t)>>("featureFileNames");
+        }
+        do {
+            auto startDumpAddress = memDumpBaseAddr + memDumpBytesWritten;
+            auto currentIndex = ctx.Get<uint32_t>("clipIndex");
+
+            /* Creating a sliding window through the audio. */
+            auto audioDataSlider = audio::SlidingWindow<const int16_t>(
+                    audioAccessorFunc(currentIndex),
+                    audioSizeAccessorFunc(currentIndex), audioParamsWinLen,
+                    audioParamsWinStride);
+
+            info("Running inference on input feature map %" PRIu32 " => %s\n", currentIndex,
+                 audioFileAccessorFunc(currentIndex));
+
+            memDumpBytesWritten += DumpDenoisedAudioHeader(audioFileAccessorFunc(currentIndex),
+                 (audioDataSlider.TotalStrides() + 1) * audioParamsWinLen,
+                 memDumpBaseAddr + memDumpBytesWritten,
+                 memDumpMaxLen - memDumpBytesWritten);
+
+            rnn::RNNoiseProcess featureProcessor = rnn::RNNoiseProcess();
+            rnn::vec1D32F audioFrame(audioParamsWinLen);
+            rnn::vec1D32F inputFeatures(nrNumInputFeatures);
+            rnn::vec1D32F denoisedAudioFrameFloat(audioParamsWinLen);
+            std::vector<int16_t> denoisedAudioFrame(audioParamsWinLen);
+
+            std::vector<float> modelOutputFloat(outputTensor->bytes);
+            rnn::FrameFeatures frameFeatures;
+            bool resetGRU = true;
+
+            while (audioDataSlider.HasNext()) {
+                const int16_t* inferenceWindow = audioDataSlider.Next();
+                audioFrame = rnn::vec1D32F(inferenceWindow, inferenceWindow+audioParamsWinLen);
+
+                featureProcessor.PreprocessFrame(audioFrame.data(), audioParamsWinLen, frameFeatures);
+
+                /* Reset or copy over GRU states first to avoid TFLu memory overlap issues. */
+                if (resetGRU){
+                    model.ResetGruState();
+                } else {
+                    /* Copying gru state outputs to gru state inputs.
+                     * Call ResetGruState in between the sequence of inferences on unrelated input data. */
+                    model.CopyGruStates();
+                }
+
+                QuantizeAndPopulateInput(frameFeatures.m_featuresVec,
+                        inputTensor->params.scale, inputTensor->params.zero_point,
+                        inputTensor);
+
+                /* Strings for presentation/logging. */
+                std::string str_inf{"Running inference... "};
+
+                /* Display message on the LCD - inference running. */
+                platform.data_psn->present_data_text(
+                            str_inf.c_str(), str_inf.size(),
+                            dataPsnTxtInfStartX, dataPsnTxtInfStartY, false);
+
+                info("Inference %zu/%zu\n", audioDataSlider.Index() + 1, audioDataSlider.TotalStrides() + 1);
+
+                /* Run inference over this feature sliding window. */
+                profiler.StartProfiling("Inference");
+                bool success = model.RunInference();
+                profiler.StopProfiling();
+                resetGRU = false;
+
+                if (!success) {
+                    return false;
+                }
+
+                /* De-quantize main model output ready for post-processing. */
+                const auto* outputData = tflite::GetTensorData<int8_t>(outputTensor);
+                auto outputQuantParams = arm::app::GetTensorQuantParams(outputTensor);
+
+                for (size_t i = 0; i < outputTensor->bytes; ++i) {
+                    modelOutputFloat[i] = (static_cast<float>(outputData[i]) - outputQuantParams.offset)
+                            * outputQuantParams.scale;
+                }
+
+                /* Round and cast the post-processed results for dumping to wav. */
+                featureProcessor.PostProcessFrame(modelOutputFloat, frameFeatures, denoisedAudioFrameFloat);
+                for (size_t i = 0; i < audioParamsWinLen; ++i) {
+                    denoisedAudioFrame[i] = static_cast<int16_t>(std::roundf(denoisedAudioFrameFloat[i]));
+                }
+
+                /* Erase. */
+                str_inf = std::string(str_inf.size(), ' ');
+                platform.data_psn->present_data_text(
+                                str_inf.c_str(), str_inf.size(),
+                                dataPsnTxtInfStartX, dataPsnTxtInfStartY, false);
+
+                if (memDumpMaxLen > 0) {
+                    /* Dump output tensors to memory. */
+                    memDumpBytesWritten += DumpOutputDenoisedAudioFrame(
+                            denoisedAudioFrame,
+                            memDumpBaseAddr + memDumpBytesWritten,
+                            memDumpMaxLen - memDumpBytesWritten);
+                }
+            }
+
+            if (memDumpMaxLen > 0) {
+                /* Copy to a plain size_t so the %zu format specifier matches. */
+                size_t valMemDumpBytesWritten = memDumpBytesWritten;
+                info("Output memory dump of %zu bytes written at address 0x%p\n",
+                     valMemDumpBytesWritten, startDumpAddress);
+            }
+
+            DumpDenoisedAudioFooter(memDumpBaseAddr + memDumpBytesWritten, memDumpMaxLen - memDumpBytesWritten);
+
+            info("Final results:\n");
+            profiler.PrintProfilingResult();
+            IncrementAppCtxClipIdx(ctx);
+
+        } while (runAll && ctx.Get<uint32_t>("clipIndex") != startClipIdx);
+
+        return true;
+    }
+
+    size_t DumpDenoisedAudioHeader(const char* filename, size_t dumpSize,
+                                   uint8_t *memAddress, size_t memSize){
+
+        if (memAddress == nullptr){
+            return 0;
+        }
+
+        int32_t filenameLength = strlen(filename);
+        size_t numBytesWritten = 0;
+        size_t numBytesToWrite = 0;
+        int32_t dumpSizeByte = dumpSize * sizeof(int16_t);
+        bool overflow = false;
+
+        /* Write the filename length. */
+        numBytesToWrite = sizeof(filenameLength);
+        if (memSize >= numBytesToWrite) {
+            std::memcpy(memAddress, &filenameLength, numBytesToWrite);
+            numBytesWritten += numBytesToWrite;
+            memSize -= numBytesToWrite;
+        } else {
+            overflow = true;
+        }
+
+        /* Write the filename. */
+        numBytesToWrite = filenameLength;
+        if (memSize >= numBytesToWrite) {
+            std::memcpy(memAddress + numBytesWritten, filename, numBytesToWrite);
+            numBytesWritten += numBytesToWrite;
+            memSize -= numBytesToWrite;
+        } else {
+            overflow = true;
+        }
+
+        /* Write the dump size in bytes. */
+        numBytesToWrite = sizeof(dumpSizeByte);
+        if (memSize >= numBytesToWrite) {
+            std::memcpy(memAddress + numBytesWritten, &dumpSizeByte, numBytesToWrite);
+            numBytesWritten += numBytesToWrite;
+            memSize -= numBytesToWrite;
+        } else {
+            overflow = true;
+        }
+
+        if (!overflow) {
+            info("Audio Clip dump header info (%zu bytes) written to %p\n", numBytesWritten,  memAddress);
+        } else {
+            printf_err("Not enough memory to dump Audio Clip header.\n");
+        }
+
+        return numBytesWritten;
+    }
+
+    size_t DumpDenoisedAudioFooter(uint8_t *memAddress, size_t memSize){
+        if ((memAddress == nullptr) || (memSize < 4)) {
+            return 0;
+        }
+        const int32_t eofMarker = -1;
+        std::memcpy(memAddress, &eofMarker, sizeof(int32_t));
+
+        return sizeof(int32_t);
+    }
+
+    size_t DumpOutputDenoisedAudioFrame(const std::vector<int16_t> &audioFrame,
+                                        uint8_t *memAddress, size_t memSize)
+    {
+        if (memAddress == nullptr) {
+            return 0;
+        }
+
+        size_t numByteToBeWritten = audioFrame.size() * sizeof(int16_t);
+        if (numByteToBeWritten > memSize) {
+            printf_err("Overflow error: writing %zu of %zu bytes to memory @ 0x%p.\n",
+                       memSize, numByteToBeWritten, memAddress);
+            numByteToBeWritten = memSize;
+        }
+
+        std::memcpy(memAddress, audioFrame.data(), numByteToBeWritten);
+        info("Copied %zu bytes to %p\n", numByteToBeWritten,  memAddress);
+
+        return numByteToBeWritten;
+    }
+
+    size_t DumpOutputTensorsToMemory(Model& model, uint8_t* memAddress, const size_t memSize)
+    {
+        const size_t numOutputs = model.GetNumOutputs();
+        size_t numBytesWritten = 0;
+        uint8_t* ptr = memAddress;
+
+        /* Iterate over all output tensors. */
+        for (size_t i = 0; i < numOutputs; ++i) {
+            const TfLiteTensor* tensor = model.GetOutputTensor(i);
+            const auto* tData = tflite::GetTensorData<uint8_t>(tensor);
+#if VERIFY_TEST_OUTPUT
+            arm::app::DumpTensor(tensor);
+#endif /* VERIFY_TEST_OUTPUT */
+            /* Ensure that we don't overflow the allowed limit. */
+            if (numBytesWritten + tensor->bytes <= memSize) {
+                if (tensor->bytes > 0) {
+                    std::memcpy(ptr, tData, tensor->bytes);
+
+                    info("Copied %zu bytes for tensor %zu to 0x%p\n",
+                        tensor->bytes, i, ptr);
+
+                    numBytesWritten += tensor->bytes;
+                    ptr += tensor->bytes;
+                }
+            } else {
+                printf_err("Error writing tensor %zu to memory @ 0x%p\n",
+                    i, memAddress);
+                break;
+            }
+        }
+
+        info("%zu bytes written to memory @ 0x%p\n", numBytesWritten, memAddress);
+
+        return numBytesWritten;
+    }
+
+    static void IncrementAppCtxClipIdx(ApplicationContext& ctx)
+    {
+        auto curClipIdx = ctx.Get<uint32_t>("clipIndex");
+        if (curClipIdx + 1 >= NUMBER_OF_FILES) {
+            ctx.Set<uint32_t>("clipIndex", 0);
+            return;
+        }
+        ++curClipIdx;
+        ctx.Set<uint32_t>("clipIndex", curClipIdx);
+    }
+
+    void QuantizeAndPopulateInput(rnn::vec1D32F& inputFeatures,
+            const float quantScale, const int quantOffset, TfLiteTensor* inputTensor)
+    {
+        const float minVal = std::numeric_limits<int8_t>::min();
+        const float maxVal = std::numeric_limits<int8_t>::max();
+
+        auto* inputTensorData = tflite::GetTensorData<int8_t>(inputTensor);
+
+        for (size_t i=0; i < inputFeatures.size(); ++i) {
+            float quantValue = ((inputFeatures[i] / quantScale) + quantOffset);
+            inputTensorData[i] = static_cast<int8_t>(std::min<float>(std::max<float>(quantValue, minVal), maxVal));
+        }
+    }
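The affine int8 quantization used for the input above (and its inverse, used to de-quantize the model output earlier in the handler) can be checked as a round trip. The scale and zero-point values in the test are arbitrary examples, not the model's real quantization parameters:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <limits>

/* Affine int8 quantization matching QuantizeAndPopulateInput: divide by the
 * scale, add the zero point, clamp to the int8 range. */
int8_t QuantizeS8(float value, float scale, int zeroPoint)
{
    const float minVal = std::numeric_limits<int8_t>::min();
    const float maxVal = std::numeric_limits<int8_t>::max();
    const float q = (value / scale) + zeroPoint;
    return static_cast<int8_t>(std::min(std::max(q, minVal), maxVal));
}

/* Inverse mapping, as used to de-quantize the model output. */
float DequantizeS8(int8_t value, float scale, int zeroPoint)
{
    return (static_cast<float>(value) - zeroPoint) * scale;
}
```

Values that fall inside the representable range survive the round trip exactly when they land on a quantization step; values outside it saturate at ±127/−128.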
+
+
+} /* namespace app */
+} /* namespace arm */
diff --git a/source/use_case/noise_reduction/usecase.cmake b/source/use_case/noise_reduction/usecase.cmake
new file mode 100644
index 0000000..14cff17
--- /dev/null
+++ b/source/use_case/noise_reduction/usecase.cmake
@@ -0,0 +1,110 @@
+#----------------------------------------------------------------------------
+#  Copyright (c) 2021 Arm Limited. All rights reserved.
+#  SPDX-License-Identifier: Apache-2.0
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+#----------------------------------------------------------------------------
+
+USER_OPTION(${use_case}_ACTIVATION_BUF_SZ "Activation buffer size for the chosen model"
+    0x00200000
+    STRING)
+
+if (ETHOS_U_NPU_ENABLED)
+    set(DEFAULT_MODEL_PATH      ${DEFAULT_MODEL_DIR}/rnnoise_INT8_vela_${DEFAULT_NPU_CONFIG_ID}.tflite)
+else()
+    set(DEFAULT_MODEL_PATH      ${DEFAULT_MODEL_DIR}/rnnoise_INT8.tflite)
+endif()
+
+USER_OPTION(${use_case}_MODEL_TFLITE_PATH "NN models file to be used in the evaluation application. Model files must be in tflite format."
+    ${DEFAULT_MODEL_PATH}
+    FILEPATH)
+
+USER_OPTION(${use_case}_FILE_PATH "Directory with custom WAV input files, or path to a single WAV file, to use in the evaluation application."
+    ${CMAKE_CURRENT_SOURCE_DIR}/resources/${use_case}/samples/
+    PATH_OR_FILE)
+
+USER_OPTION(${use_case}_AUDIO_RATE "Specify the target sampling rate. Default is 48000."
+    48000
+    STRING)
+
+USER_OPTION(${use_case}_AUDIO_MONO "Specify if the audio needs to be converted to mono. Default is ON."
+    ON
+    BOOL)
+
+USER_OPTION(${use_case}_AUDIO_OFFSET "Specify the offset to start reading after this time (in seconds). Default is 0."
+    0
+    STRING)
+
+USER_OPTION(${use_case}_AUDIO_DURATION "Specify the audio duration to load (in seconds). If set to 0 the entire audio will be processed."
+    0
+    STRING)
+
+USER_OPTION(${use_case}_AUDIO_RES_TYPE "Specify the re-sampling algorithm to use. Default is 'kaiser_best'."
+    kaiser_best
+    STRING)
+
+USER_OPTION(${use_case}_AUDIO_MIN_SAMPLES "Specify the minimum number of samples to use. Default is 480; shorter audio will be padded automatically."
+    480
+    STRING)
+
+# Generate input files from audio wav files
+generate_audio_code(${${use_case}_FILE_PATH} ${SRC_GEN_DIR} ${INC_GEN_DIR}
+    ${${use_case}_AUDIO_RATE}
+    ${${use_case}_AUDIO_MONO}
+    ${${use_case}_AUDIO_OFFSET}
+    ${${use_case}_AUDIO_DURATION}
+    ${${use_case}_AUDIO_RES_TYPE}
+    ${${use_case}_AUDIO_MIN_SAMPLES})
+
+
+set(EXTRA_MODEL_CODE
+    "/* Model parameters for ${use_case} */"
+    "extern const int        g_FrameLength         = 480"
+    "extern const int        g_FrameStride         = 480"
+    "extern const uint32_t   g_NumInputFeatures    = 42*1"  # Single time-step input of 42 features.
+    )
+
+# Generate model file.
+generate_tflite_code(
+    MODEL_PATH ${${use_case}_MODEL_TFLITE_PATH}
+    DESTINATION ${SRC_GEN_DIR}
+    EXPRESSIONS ${EXTRA_MODEL_CODE}
+)
+
+
+# For MPS3, allow dumping of output data to memory, based on these parameters:
+if (TARGET_PLATFORM STREQUAL mps3)
+    USER_OPTION(${use_case}_MEM_DUMP_BASE_ADDR
+        "Inference output dump address for ${use_case}"
+        0x80000000 # DDR bank 2
+        STRING)
+
+    USER_OPTION(${use_case}_MEM_DUMP_LEN
+        "Inference output dump buffer size for ${use_case}"
+        0x00100000 # 1 MiB
+        STRING)
+
+    # Add special compile definitions for this use case files:
+    set(${use_case}_COMPILE_DEFS
+        "MEM_DUMP_BASE_ADDR=${${use_case}_MEM_DUMP_BASE_ADDR}"
+        "MEM_DUMP_LEN=${${use_case}_MEM_DUMP_LEN}")
+
+    file(GLOB_RECURSE SRC_FILES
+        "${SRC_USE_CASE}/${use_case}/src/*.cpp"
+        "${SRC_USE_CASE}/${use_case}/src/*.cc")
+
+    set_source_files_properties(
+        ${SRC_FILES}
+        PROPERTIES COMPILE_DEFINITIONS
+        "${${use_case}_COMPILE_DEFS}")
+endif()
diff --git a/tests/use_case/ad/InferenceTestAD.cc b/tests/use_case/ad/InferenceTestAD.cc
index ad785e8..2933fbe 100644
--- a/tests/use_case/ad/InferenceTestAD.cc
+++ b/tests/use_case/ad/InferenceTestAD.cc
@@ -69,7 +69,7 @@
     TfLiteTensor *outputTensor = model.GetOutputTensor(0);
 
     REQUIRE(outputTensor);
-    REQUIRE(outputTensor->bytes == OFM_DATA_SIZE);
+    REQUIRE(outputTensor->bytes == OFM_0_DATA_SIZE);
     auto tensorData = tflite::GetTensorData<T>(outputTensor);
     REQUIRE(tensorData);
 
@@ -92,7 +92,8 @@
 
 TEST_CASE("Running golden vector inference with TensorFlow Lite Micro and AdModel Int8", "[AD]")
 {
-    for (uint32_t i = 0 ; i < NUMBER_OF_FM_FILES; ++i) {
+    REQUIRE(NUMBER_OF_IFM_FILES == NUMBER_OF_OFM_FILES);
+    for (uint32_t i = 0 ; i < NUMBER_OF_IFM_FILES; ++i) {
         auto input_goldenFV = get_ifm_data_array(i);;
         auto output_goldenFV = get_ofm_data_array(i);
 
diff --git a/tests/use_case/asr/InferenceTestWav2Letter.cc b/tests/use_case/asr/InferenceTestWav2Letter.cc
index 1f9cb80..3e30bd2 100644
--- a/tests/use_case/asr/InferenceTestWav2Letter.cc
+++ b/tests/use_case/asr/InferenceTestWav2Letter.cc
@@ -76,7 +76,7 @@
     TfLiteTensor* outputTensor = model.GetOutputTensor(0);
 
     REQUIRE(outputTensor);
-    REQUIRE(outputTensor->bytes == OFM_DATA_SIZE);
+    REQUIRE(outputTensor->bytes == OFM_0_DATA_SIZE);
     auto tensorData = tflite::GetTensorData<T>(outputTensor);
     REQUIRE(tensorData);
 
@@ -87,7 +87,8 @@
 
 TEST_CASE("Running inference with Tflu and Wav2LetterModel Int8", "[Wav2Letter]")
 {
-    for (uint32_t i = 0 ; i < NUMBER_OF_FM_FILES; ++i) {
+    REQUIRE(NUMBER_OF_IFM_FILES == NUMBER_OF_OFM_FILES);
+    for (uint32_t i = 0 ; i < NUMBER_OF_IFM_FILES; ++i) {
         auto input_goldenFV = get_ifm_data_array(i);;
         auto output_goldenFV = get_ofm_data_array(i);
 
diff --git a/tests/use_case/img_class/InferenceTestMobilenetV2.cc b/tests/use_case/img_class/InferenceTestMobilenetV2.cc
index bb89c99..07bd78f 100644
--- a/tests/use_case/img_class/InferenceTestMobilenetV2.cc
+++ b/tests/use_case/img_class/InferenceTestMobilenetV2.cc
@@ -29,9 +29,9 @@
     TfLiteTensor* inputTensor = model.GetInputTensor(0);
     REQUIRE(inputTensor);
 
-    const size_t copySz = inputTensor->bytes < IFM_DATA_SIZE ?
+    const size_t copySz = inputTensor->bytes < IFM_0_DATA_SIZE ?
                             inputTensor->bytes :
-                            IFM_DATA_SIZE;
+                            IFM_0_DATA_SIZE;
     memcpy(inputTensor->data.data, imageData, copySz);
 
     if(model.IsDataSigned()){
@@ -51,7 +51,7 @@
     TfLiteTensor* outputTensor = model.GetOutputTensor(0);
 
     REQUIRE(outputTensor);
-    REQUIRE(outputTensor->bytes == OFM_DATA_SIZE);
+    REQUIRE(outputTensor->bytes == OFM_0_DATA_SIZE);
     auto tensorData = tflite::GetTensorData<T>(outputTensor);
     REQUIRE(tensorData);
 
@@ -71,12 +71,12 @@
         REQUIRE(model.Init());
         REQUIRE(model.IsInited());
 
-        for (uint32_t i = 0 ; i < NUMBER_OF_FM_FILES; ++i) {
+        for (uint32_t i = 0 ; i < NUMBER_OF_IFM_FILES; ++i) {
             TestInference<uint8_t>(i, model, 1);
         }
     }
 
-    for (uint32_t i = 0 ; i < NUMBER_OF_FM_FILES; ++i) {
+    for (uint32_t i = 0 ; i < NUMBER_OF_IFM_FILES; ++i) {
         DYNAMIC_SECTION("Executing inference with re-init")
         {
             arm::app::MobileNetModel model{};
diff --git a/tests/use_case/kws/InferenceTestDSCNN.cc b/tests/use_case/kws/InferenceTestDSCNN.cc
index 7ce55dd..8918073 100644
--- a/tests/use_case/kws/InferenceTestDSCNN.cc
+++ b/tests/use_case/kws/InferenceTestDSCNN.cc
@@ -29,9 +29,9 @@
     TfLiteTensor* inputTensor = model.GetInputTensor(0);
     REQUIRE(inputTensor);
 
-    const size_t copySz = inputTensor->bytes < IFM_DATA_SIZE ?
+    const size_t copySz = inputTensor->bytes < IFM_0_DATA_SIZE ?
                             inputTensor->bytes :
-                            IFM_DATA_SIZE;
+                            IFM_0_DATA_SIZE;
     memcpy(inputTensor->data.data, vec, copySz);
 
     return model.RunInference();
@@ -65,7 +65,7 @@
     TfLiteTensor* outputTensor = model.GetOutputTensor(0);
 
     REQUIRE(outputTensor);
-    REQUIRE(outputTensor->bytes == OFM_DATA_SIZE);
+    REQUIRE(outputTensor->bytes == OFM_0_DATA_SIZE);
     auto tensorData = tflite::GetTensorData<T>(outputTensor);
     REQUIRE(tensorData);
 
@@ -87,7 +87,8 @@
 
 TEST_CASE("Running inference with TensorFlow Lite Micro and DsCnnModel Uint8", "[DS_CNN]")
 {
-    for (uint32_t i = 0 ; i < NUMBER_OF_FM_FILES; ++i) {
+    REQUIRE(NUMBER_OF_IFM_FILES == NUMBER_OF_OFM_FILES);
+    for (uint32_t i = 0 ; i < NUMBER_OF_IFM_FILES; ++i) {
         const int8_t* input_goldenFV = get_ifm_data_array(i);;
         const int8_t* output_goldenFV = get_ofm_data_array(i);
 
diff --git a/tests/use_case/kws_asr/InferenceTestDSCNN.cc b/tests/use_case/kws_asr/InferenceTestDSCNN.cc
index 134003d..ad1731b 100644
--- a/tests/use_case/kws_asr/InferenceTestDSCNN.cc
+++ b/tests/use_case/kws_asr/InferenceTestDSCNN.cc
@@ -29,9 +29,9 @@
     TfLiteTensor* inputTensor = model.GetInputTensor(0);
     REQUIRE(inputTensor);
 
-    const size_t copySz = inputTensor->bytes < IFM_DATA_SIZE ?
+    const size_t copySz = inputTensor->bytes < IFM_0_DATA_SIZE ?
                           inputTensor->bytes :
-                          IFM_DATA_SIZE;
+                          IFM_0_DATA_SIZE;
     memcpy(inputTensor->data.data, vec, copySz);
 
     return model.RunInference();
@@ -63,7 +63,7 @@
     TfLiteTensor* outputTensor = model.GetOutputTensor(0);
 
     REQUIRE(outputTensor);
-    REQUIRE(outputTensor->bytes == OFM_DATA_SIZE);
+    REQUIRE(outputTensor->bytes == OFM_0_DATA_SIZE);
     auto tensorData = tflite::GetTensorData<T>(outputTensor);
     REQUIRE(tensorData);
 
@@ -83,7 +83,8 @@
 }
 
 TEST_CASE("Running inference with Tflu and DsCnnModel Uint8", "[DS_CNN]") {
-    for (uint32_t i = 0; i < NUMBER_OF_FM_FILES; ++i) {
+    REQUIRE(NUMBER_OF_IFM_FILES == NUMBER_OF_OFM_FILES);
+    for (uint32_t i = 0; i < NUMBER_OF_IFM_FILES; ++i) {
         const int8_t* input_goldenFV = get_ifm_data_array(i);
         const int8_t* output_goldenFV = get_ofm_data_array(i);
 
diff --git a/tests/use_case/kws_asr/InferenceTestWav2Letter.cc b/tests/use_case/kws_asr/InferenceTestWav2Letter.cc
index 1b14a42..477a1dd 100644
--- a/tests/use_case/kws_asr/InferenceTestWav2Letter.cc
+++ b/tests/use_case/kws_asr/InferenceTestWav2Letter.cc
@@ -78,7 +78,7 @@
     TfLiteTensor* outputTensor = model.GetOutputTensor(0);
 
     REQUIRE(outputTensor);
-    REQUIRE(outputTensor->bytes == OFM_DATA_SIZE);
+    REQUIRE(outputTensor->bytes == OFM_0_DATA_SIZE);
     auto tensorData = tflite::GetTensorData<T>(outputTensor);
     REQUIRE(tensorData);
 
@@ -89,7 +89,8 @@
 
 TEST_CASE("Running inference with Tflu and Wav2LetterModel Int8", "[Wav2Letter]")
 {
-    for (uint32_t i = 0 ; i < NUMBER_OF_FM_FILES; ++i) {
+    REQUIRE(NUMBER_OF_IFM_FILES == NUMBER_OF_OFM_FILES);
+    for (uint32_t i = 0 ; i < NUMBER_OF_IFM_FILES; ++i) {
         auto input_goldenFV = get_ifm_data_array(i);;
         auto output_goldenFV = get_ofm_data_array(i);
 
diff --git a/tests/use_case/noise_reduction/InferenceTestRNNoise.cc b/tests/use_case/noise_reduction/InferenceTestRNNoise.cc
new file mode 100644
index 0000000..f32a460
--- /dev/null
+++ b/tests/use_case/noise_reduction/InferenceTestRNNoise.cc
@@ -0,0 +1,133 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "hal.h"
+#include "TensorFlowLiteMicro.hpp"
+#include "RNNoiseModel.hpp"
+#include "TestData_noise_reduction.hpp"
+
+#include <catch.hpp>
+#include <random>
+
+namespace test {
+namespace rnnoise {
+
+    bool RunInference(arm::app::Model& model, const std::vector<std::vector<int8_t>>& inData)
+    {
+        for (size_t i = 0; i < model.GetNumInputs(); ++i) {
+            TfLiteTensor* inputTensor = model.GetInputTensor(i);
+            REQUIRE(inputTensor);
+            memcpy(inputTensor->data.data, inData[i].data(), inData[i].size());
+        }
+
+        return model.RunInference();
+    }
+
+    bool RunInferenceRandom(arm::app::Model& model)
+    {
+        std::random_device rndDevice;
+        std::mt19937 mersenneGen{rndDevice()};
+        std::uniform_int_distribution<short> dist {-128, 127};
+
+        auto gen = [&dist, &mersenneGen](){
+            return dist(mersenneGen);
+        };
+
+        std::vector<std::vector<int8_t>> randomInput{NUMBER_OF_IFM_FILES};
+        for (size_t i = 0; i < model.GetNumInputs(); ++i) {
+            TfLiteTensor *inputTensor = model.GetInputTensor(i);
+            REQUIRE(inputTensor);
+            randomInput[i].resize(inputTensor->bytes);
+            std::generate(std::begin(randomInput[i]), std::end(randomInput[i]), gen);
+        }
+
+        REQUIRE(RunInference(model, randomInput));
+        return true;
+    }
+
+    TEST_CASE("Running random inference with Tflu and RNNoise Int8", "[RNNoise]")
+    {
+        arm::app::RNNoiseModel model{};
+
+        REQUIRE_FALSE(model.IsInited());
+        REQUIRE(model.Init());
+        REQUIRE(model.IsInited());
+
+        REQUIRE(RunInferenceRandom(model));
+    }
+
+    template<typename T>
+    void TestInference(const std::vector<std::vector<T>>& input_goldenFV, const std::vector<std::vector<T>>& output_goldenFV, arm::app::Model& model)
+    {
+        for (size_t i = 0; i < model.GetNumInputs(); ++i) {
+            TfLiteTensor* inputTensor = model.GetInputTensor(i);
+            REQUIRE(inputTensor);
+        }
+
+        REQUIRE(RunInference(model, input_goldenFV));
+
+        for (size_t i = 0; i < model.GetNumOutputs(); ++i) {
+            TfLiteTensor *outputTensor = model.GetOutputTensor(i);
+
+            REQUIRE(outputTensor);
+            auto tensorData = tflite::GetTensorData<T>(outputTensor);
+            REQUIRE(tensorData);
+
+            for (size_t j = 0; j < outputTensor->bytes; j++) {
+                REQUIRE(static_cast<int>(tensorData[j]) == static_cast<int>((output_goldenFV[i][j])));
+            }
+        }
+    }
+
+    TEST_CASE("Running inference with Tflu and RNNoise Int8", "[RNNoise]")
+    {
+        std::vector<std::vector<int8_t>> goldenInputFV {NUMBER_OF_IFM_FILES};
+        std::vector<std::vector<int8_t>> goldenOutputFV {NUMBER_OF_OFM_FILES};
+
+        std::array<size_t, NUMBER_OF_IFM_FILES> inputSizes = {IFM_0_DATA_SIZE,
+                                                              IFM_1_DATA_SIZE,
+                                                              IFM_2_DATA_SIZE,
+                                                              IFM_3_DATA_SIZE};
+
+        std::array<size_t, NUMBER_OF_OFM_FILES> outputSizes = {OFM_0_DATA_SIZE,
+                                                               OFM_1_DATA_SIZE,
+                                                               OFM_2_DATA_SIZE,
+                                                               OFM_3_DATA_SIZE,
+                                                               OFM_4_DATA_SIZE};
+
+        for (uint32_t i = 0 ; i < NUMBER_OF_IFM_FILES; ++i) {
+            goldenInputFV[i].resize(inputSizes[i]);
+            std::memcpy(goldenInputFV[i].data(), get_ifm_data_array(i), inputSizes[i]);
+        }
+        for (uint32_t i = 0 ; i < NUMBER_OF_OFM_FILES; ++i) {
+            goldenOutputFV[i].resize(outputSizes[i]);
+            std::memcpy(goldenOutputFV[i].data(), get_ofm_data_array(i), outputSizes[i]);
+        }
+
+        DYNAMIC_SECTION("Executing inference with re-init")
+        {
+            arm::app::RNNoiseModel model{};
+
+            REQUIRE_FALSE(model.IsInited());
+            REQUIRE(model.Init());
+            REQUIRE(model.IsInited());
+
+            TestInference<int8_t>(goldenInputFV, goldenOutputFV, model);
+        }
+    }
+
+}  /* namespace rnnoise */
+}  /* namespace test */
diff --git a/tests/use_case/noise_reduction/NoiseReductionTests.cc b/tests/use_case/noise_reduction/NoiseReductionTests.cc
new file mode 100644
index 0000000..09f82da
--- /dev/null
+++ b/tests/use_case/noise_reduction/NoiseReductionTests.cc
@@ -0,0 +1,18 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#define CATCH_CONFIG_MAIN
+#include <catch.hpp>
diff --git a/tests/use_case/noise_reduction/RNNNoiseUCTests.cc b/tests/use_case/noise_reduction/RNNNoiseUCTests.cc
new file mode 100644
index 0000000..d57fced
--- /dev/null
+++ b/tests/use_case/noise_reduction/RNNNoiseUCTests.cc
@@ -0,0 +1,206 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "RNNoiseModel.hpp"
+#include "UseCaseHandler.hpp"
+#include "InputFiles.hpp"
+#include "RNNUCTestCaseData.hpp"
+#include "UseCaseCommonUtils.hpp"
+
+#include <catch.hpp>
+#include <hal.h>
+#include <Profiler.hpp>
+#include <iostream>
+#define PLATFORM \
+hal_platform    platform; \
+data_acq_module data_acq; \
+data_psn_module data_psn; \
+platform_timer  timer;    \
+hal_init(&platform, &data_acq, &data_psn, &timer); \
+hal_platform_init(&platform);
+
+#define CONTEXT \
+arm::app::ApplicationContext caseContext; \
+arm::app::Profiler profiler{&platform, "noise_reduction"}; \
+caseContext.Set<arm::app::Profiler&>("profiler", profiler); \
+caseContext.Set<hal_platform&>("platform", platform); \
+caseContext.Set<arm::app::RNNoiseModel&>("model", model);
+
+TEST_CASE("Verify output tensor memory dump")
+{
+    constexpr size_t maxMemDumpSz = 0x100000;   /* 1 MiB worth of space */
+    std::vector<uint8_t> memPool(maxMemDumpSz); /* Memory pool */
+    arm::app::RNNoiseModel model{};
+
+    REQUIRE(model.Init());
+    REQUIRE(model.IsInited());
+
+    /* Populate the output tensors */
+    const size_t numOutputs = model.GetNumOutputs();
+    size_t sizeToWrite = 0;
+    size_t lastTensorSize = model.GetOutputTensor(numOutputs - 1)->bytes;
+
+    for (size_t i = 0; i < numOutputs; ++i) {
+        TfLiteTensor* tensor = model.GetOutputTensor(i);
+        auto* tData = tflite::GetTensorData<uint8_t>(tensor);
+
+        if (tensor->bytes > 0) {
+            memset(tData, static_cast<uint8_t>(i), tensor->bytes);
+            sizeToWrite += tensor->bytes;
+        }
+    }
+
+
+    SECTION("Positive use case")
+    {
+        /* Run the memory dump */
+        auto bytesWritten = DumpOutputTensorsToMemory(model, memPool.data(), memPool.size());
+        REQUIRE(sizeToWrite == bytesWritten);
+
+        /* Verify the dump */
+        size_t k = 0;
+        for (size_t i = 0; i < numOutputs && k < memPool.size(); ++i) {
+            TfLiteTensor* tensor = model.GetOutputTensor(i);
+            auto* tData = tflite::GetTensorData<uint8_t>(tensor);
+
+            for (size_t j = 0; j < tensor->bytes && k < memPool.size(); ++j) {
+                REQUIRE(tData[j] == memPool[k++]);
+            }
+        }
+    }
+
+    SECTION("Limited memory - skipping last tensor")
+    {
+        /* Run the memory dump */
+        auto bytesWritten = DumpOutputTensorsToMemory(model, memPool.data(), sizeToWrite - 1);
+        REQUIRE(lastTensorSize > 0);
+        REQUIRE(bytesWritten == sizeToWrite - lastTensorSize);
+    }
+
+    SECTION("Zero memory")
+    {
+        /* Run the memory dump */
+        auto bytesWritten = DumpOutputTensorsToMemory(model, memPool.data(), 0);
+        REQUIRE(bytesWritten == 0);
+    }
+}
+
+TEST_CASE("Inference run all clips", "[RNNoise]")
+{
+    PLATFORM
+
+    arm::app::RNNoiseModel model;
+
+    CONTEXT
+
+    caseContext.Set<uint32_t>("clipIndex", 0);
+    caseContext.Set<uint32_t>("numInputFeatures", g_NumInputFeatures);
+    caseContext.Set<uint32_t>("frameLength", g_FrameLength);
+    caseContext.Set<uint32_t>("frameStride", g_FrameStride);
+
+    /* Load the model. */
+    REQUIRE(model.Init());
+
+    REQUIRE(arm::app::NoiseReductionHandler(caseContext, true));
+}
+
+std::function<uint32_t(const uint32_t)> get_golden_input_p232_208_array_size(const uint32_t numberOfFeatures) {
+
+    return [numberOfFeatures](const uint32_t) -> uint32_t {
+        return numberOfFeatures;
+    };
+}
+
+const char* get_test_filename(const uint32_t idx) {
+    auto name = get_filename(idx);
+    REQUIRE(std::string("p232_208.wav") == name);
+    return "p232_208.wav";
+}
+
+void testInfByIndex(std::vector<uint32_t>& numberOfInferences) {
+    PLATFORM
+
+    arm::app::RNNoiseModel model;
+
+    CONTEXT
+
+    caseContext.Set<std::function<const int16_t*(const uint32_t)>>("features", get_audio_array);
+    caseContext.Set<std::function<const char* (const uint32_t)>>("featureFileNames", get_test_filename);
+    caseContext.Set<uint32_t>("frameLength", g_FrameLength);
+    caseContext.Set<uint32_t>("frameStride", g_FrameStride);
+    caseContext.Set<uint32_t>("numInputFeatures", g_NumInputFeatures);
+    /* Load the model. */
+    REQUIRE(model.Init());
+
+    size_t oneInferenceOutSizeBytes = g_FrameLength * sizeof(int16_t);
+
+    auto infIndex = 0;
+    for (auto numInf: numberOfInferences) {
+        DYNAMIC_SECTION("Number of features: "<< numInf) {
+            caseContext.Set<uint32_t>("clipIndex", 1);  /* Only getting p232_208.wav for tests. */
+            uint32_t audioSizeInput = numInf*g_FrameLength;
+            caseContext.Set<std::function<uint32_t(const uint32_t)>>("featureSizes",
+                                                                     get_golden_input_p232_208_array_size(audioSizeInput));
+
+            size_t headerNumBytes = 4 + 12 + 4;  /* Filename length, filename (12 for p232_208.wav), dump size. */
+            size_t footerNumBytes = 4;  /* Eof value. */
+            size_t memDumpMaxLenBytes = headerNumBytes + footerNumBytes + oneInferenceOutSizeBytes * numInf;
+
+            std::vector<uint8_t > memDump(memDumpMaxLenBytes);
+            size_t undefMemDumpBytesWritten = 0;
+            caseContext.Set<size_t>("MEM_DUMP_LEN", memDumpMaxLenBytes);
+            caseContext.Set<uint8_t*>("MEM_DUMP_BASE_ADDR", memDump.data());
+            caseContext.Set<size_t*>("MEM_DUMP_BYTE_WRITTEN", &undefMemDumpBytesWritten);
+
+            /* Inference. */
+            REQUIRE(arm::app::NoiseReductionHandler(caseContext, false));
+
+            /* The expected output after post-processing. */
+            std::vector<int16_t> golden(&ofms[infIndex][0], &ofms[infIndex][0] + g_FrameLength);
+
+            size_t startOfLastInfOut = undefMemDumpBytesWritten - oneInferenceOutSizeBytes;
+
+            /* The actual result from the usecase handler. */
+            std::vector<int16_t> runtime(g_FrameLength);
+            std::memcpy(runtime.data(), &memDump[startOfLastInfOut], oneInferenceOutSizeBytes);
+
+            /* Margin of 22 is 0.03% error. */
+            REQUIRE_THAT(golden, Catch::Matchers::Approx(runtime).margin(22));
+        }
+        ++infIndex;
+    }
+}
+
+TEST_CASE("Inference by index - one inference", "[RNNoise]")
+{
+    auto totalAudioSize = get_audio_array_size(1);
+    REQUIRE(64757 == totalAudioSize);  /* Checking that the input file is as expected and has not changed. */
+
+    /* Run 1 inference */
+    std::vector<uint32_t> numberOfInferences = {1};
+    testInfByIndex(numberOfInferences);
+}
+
+TEST_CASE("Inference by index - several inferences", "[RNNoise]")
+{
+    auto totalAudioSize = get_audio_array_size(1);
+    REQUIRE(64757 == totalAudioSize);  /* Checking that the input file is as expected and has not changed. */
+
+    /* 3 different inference amounts: 1, 2 and all inferences required to cover total feature set */
+    uint32_t totalInferences = totalAudioSize / g_FrameLength;
+    std::vector<uint32_t> numberOfInferences = {1, 2, totalInferences};
+    testInfByIndex(numberOfInferences);
+}
diff --git a/tests/use_case/noise_reduction/RNNUCTestCaseData.hpp b/tests/use_case/noise_reduction/RNNUCTestCaseData.hpp
new file mode 100644
index 0000000..37bc6a5
--- /dev/null
+++ b/tests/use_case/noise_reduction/RNNUCTestCaseData.hpp
@@ -0,0 +1,180 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef RNNUC_TEST_DATA
+#define RNNUC_TEST_DATA
+
+#include <cstdint>
+
+/* 1st inference denoised output. */
+int16_t denoisedInf0 [480] = {
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1,
+        0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, -0x1, -0x1, -0x1, -0x1, -0x1,
+        -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, 0x0, -0x1,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1,
+        -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1, -0x1,
+        -0x1, -0x1, -0x1, -0x1, 0x0, -0x1, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
+        0x1, 0x0, 0x1, 0x1, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0,
+        0x1, 0x0, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1, 0x1,
+        0x1, 0x1, 0x2, 0x1, 0x2, 0x2, 0x2, 0x2, 0x2, 0x2,
+        0x2, 0x2, 0x2, 0x2, 0x2, 0x2, 0x2, 0x2, 0x2, 0x2,
+        0x2, 0x2, 0x2, 0x2, 0x2, 0x2, 0x2, 0x2, 0x2, 0x2,
+        0x2, 0x2, 0x2, 0x3, 0x2, 0x3, 0x3, 0x3, 0x3, 0x3,
+        0x3, 0x3, 0x3, 0x3, 0x3, 0x4, 0x3, 0x3, 0x3, 0x3,
+        0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3,
+        0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3,
+        0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3,
+        0x3, 0x3, 0x2, 0x3, 0x3, 0x3, 0x3, 0x2, 0x3, 0x2,
+        0x3, 0x3, 0x2, 0x3, 0x3, 0x3, 0x3, 0x3, 0x3, 0x2,
+        0x3, 0x2, 0x2, 0x2, 0x2, 0x2, 0x1, 0x1, 0x1, 0x0,
+        0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1,
+        0x1, 0x1, 0x2, 0x2, 0x3, 0x3, 0x3, 0x4, 0x4, 0x4,
+        0x4, 0x5, 0x4, 0x4, 0x5, 0x4, 0x5, 0x4, 0x4, 0x4,
+        0x3, 0x3, 0x2, 0x3, 0x2, 0x1, 0x2, 0x1, 0x1, 0x1,
+        0x1, 0x2, 0x2, 0x3, 0x3, 0x4, 0x5, 0x5, 0x7, 0x7,
+        0x8, 0x8, 0x9, 0xa, 0x9, 0xa, 0xa, 0xb, 0xa, 0xa,
+        0xb, 0xa, 0xa, 0x9, 0xa, 0xa, 0x9, 0xa, 0x8, 0x9,
+        0x9, 0x9, 0x9, 0x8, 0xa, 0x9, 0xa, 0xb, 0xb, 0xc,
+        0xc, 0xe, 0xf, 0xf, 0x11, 0x11, 0x13, 0x13, 0x14, 0x15,
+        0x14, 0x16, 0x14, 0x14, 0x12, 0x11, 0x10, 0xd, 0xd, 0xb,
+        0xb, 0xb, 0xa, 0xc, 0xb, 0xd, 0xd, 0xe, 0x11, 0x11,
+        0x14, 0x15, 0x17, 0x19, 0x1a, 0x1d, 0x1c, 0x1c, 0x1b, 0x1a,
+};
+
+/* 2nd inference denoised output. */
+int16_t denoisedInf1 [480] = {
+        0x11, 0x17, 0x29, 0x23, 0x33, 0x43, 0x3f, 0x53, 0x52, 0x4b,
+        0x46, 0x32, 0x27, 0x13, -0x2, -0x1c, -0x2f, -0x2f, -0x2e, -0x2c,
+        -0x34, -0x31, -0x2f, -0x34, -0x30, -0x4a, -0x38, -0x18, -0x25, -0x1a,
+        -0x15, -0x14, -0x12, -0x1d, -0x21, -0x2f, -0x32, -0x36, -0x33, -0x29,
+        -0x31, -0x23, -0x26, -0x30, -0x2b, -0x38, -0x2e, -0x22, -0x36, -0x53,
+        -0x60, -0x5a, -0x62, -0x6c, -0x84, -0xa1, -0xa0, -0xb1, -0xc2, -0xb0,
+        -0xa9, -0x9c, -0x85, -0x97, -0xa2, -0x99, -0x9f, -0x9e, -0xa4, -0xa6,
+        -0x97, -0x92, -0x8e, -0x9f, -0xb1, -0xb6, -0xbf, -0xc4, -0xcc, -0xae,
+        -0x91, -0x8a, -0x7f, -0x8a, -0x84, -0x8e, -0x98, -0x88, -0xa5, -0x9f,
+        -0x97, -0xa2, -0x8e, -0x97, -0x88, -0x76, -0x7c, -0x7c, -0x91, -0x85,
+        -0x82, -0x88, -0x78, -0x78, -0x5f, -0x55, -0x48, -0x3c, -0x4b, -0x2f,
+        -0x3a, -0x48, -0x31, -0x3b, -0x21, -0xc, -0x18, -0x16, -0x29, -0x2e,
+        -0x30, -0x39, -0x39, -0x3f, -0x30, -0x2f, -0x3b, -0x30, -0x33, -0x31,
+        -0x29, -0x38, -0x3d, -0x36, -0x3e, -0x48, -0x46, -0x46, -0x3a, -0x39,
+        -0x42, -0x3a, -0x44, -0x52, -0x53, -0x60, -0x60, -0x66, -0x6d, -0x5b,
+        -0x53, -0x47, -0x35, -0x2b, -0x24, -0x26, -0x24, -0x20, -0x20, -0x26,
+        -0x23, -0x17, -0xf, -0x6, -0xb, -0xc, -0x22, -0x39, -0x21, -0x25,
+        -0x21, -0x17, -0x23, -0x10, -0x24, -0x2b, -0x31, -0x5c, -0x43, -0x42,
+        -0x53, -0x33, -0x19, -0x14, -0x28, -0x29, -0x33, -0x36, -0x29, -0x46,
+        -0x3c, -0x35, -0x3e, -0x30, -0x49, -0x52, -0x55, -0x5f, -0x56, -0x50,
+        -0x47, -0x4b, -0x4f, -0x5e, -0x5e, -0x47, -0x56, -0x4f, -0x37, -0x27,
+        -0x15, -0x10, 0x6, 0x15, 0x2b, 0x36, 0x31, 0x45, 0x47, 0x53,
+        0x4d, 0x3f, 0x55, 0x53, 0x5d, 0x65, 0x5a, 0x55, 0x45, 0x40,
+        0x39, 0x35, 0x32, 0x35, 0x44, 0x36, 0x3d, 0x4b, 0x4c, 0x51,
+        0x4c, 0x5a, 0x5b, 0x60, 0x69, 0x58, 0x53, 0x3f, 0x22, -0x1,
+        -0x21, -0x20, -0x2a, -0x30, -0x2c, -0x2a, -0x2f, -0x34, -0x28, -0x30,
+        -0x31, -0x2d, -0x29, -0x1d, -0x2b, -0x23, -0x1c, -0x20, -0x13, -0x12,
+        -0x9, -0x18, -0x1d, -0x17, -0x2c, -0x24, -0x26, -0x2e, -0x29, -0x3c,
+        -0x46, -0x51, -0x62, -0x74, -0x80, -0x88, -0x9d, -0xa4, -0xac, -0xa1,
+        -0x92, -0x8c, -0x6f, -0x65, -0x53, -0x42, -0x4b, -0x3a, -0x35, -0x44,
+        -0x44, -0x46, -0x5c, -0x6f, -0x77, -0x8d, -0x90, -0x96, -0xa3, -0x9c,
+        -0xa8, -0xa1, -0x8e, -0x7e, -0x5d, -0x50, -0x40, -0x35, -0x36, -0x30,
+        -0x3a, -0x32, -0x2b, -0x34, -0x33, -0x40, -0x51, -0x51, -0x4a, -0x47,
+        -0x35, -0x20, -0x19, -0xa, -0xd, -0x1b, -0x15, -0x19, -0x22, -0x1f,
+        -0x1c, -0x21, -0x21, -0x17, -0x1e, -0x1d, -0x4, 0x4, 0xd, 0x24,
+        0x2c, 0x3d, 0x54, 0x50, 0x58, 0x5f, 0x5d, 0x64, 0x56, 0x5b,
+        0x67, 0x60, 0x76, 0x7d, 0x77, 0x8b, 0x96, 0x9b, 0x9e, 0xa3,
+        0xa8, 0x9d, 0x9a, 0x9a, 0x87, 0x78, 0x64, 0x49, 0x44, 0x38,
+        0x11, -0x11, -0x24, -0x29, -0x35, -0x3f, -0x35, -0x32, -0x20, -0x1a,
+        -0x2a, -0x1d, -0x28, -0x3a, -0x3f, -0x53, -0x56, -0x5e, -0x59, -0x41,
+        -0x40, -0x2e, -0x22, -0x1a, 0x7, 0x19, 0x27, 0x32, 0x37, 0x38,
+        0x23, 0x11, -0x7, -0x1f, -0x29, -0x36, -0x34, -0x35, -0x2f, -0xb,
+        0xb, 0x14, 0x25, 0x3f, 0x51, 0x49, 0x54, 0x6a, 0x5f, 0x5b,
+        0x66, 0x5d, 0x59, 0x4f, 0x3a, 0x3b, 0x30, 0x2f, 0x2d, 0x1b,
+        0x2f, 0x2e, 0x28, 0x3a, 0x2c, 0x37, 0x47, 0x4c, 0x5e, 0x58,
+        0x52, 0x4b, 0x45, 0x43, 0x36, 0x3f, 0x42, 0x49, 0x54, 0x4e,
+        0x61, 0x60, 0x59, 0x6b, 0x65, 0x60, 0x5e, 0x4e, 0x3d, 0x2e,
+        0x2a, 0x2c, 0x2f, 0x2b, 0x30, 0x3d, 0x47, 0x57, 0x61, 0x6d,
+};
+
+/* Final denoised results after 134 steps */
+int16_t denoisedInf2 [480] = {
+        -0x66, -0x8a, -0x8a, -0x6f, -0x99, -0x9c, -0x92, -0xbf, -0xa4, -0xb1,
+        -0xf0, -0xf1, -0xf3, -0xe5, -0xf9, -0x107, -0xd2, -0xe8, -0x100, -0xdb,
+        -0xda, -0xec, -0xfa, -0xfd, -0xe7, -0xd6, -0xe6, -0xfd, -0x102, -0xfc,
+        -0xfd, -0x11f, -0x123, -0x119, -0x11c, -0xf6, -0x10a, -0x130, -0x10f, -0x107,
+        -0x106, -0x10e, -0x11f, -0xff, -0xed, -0xf3, -0xee, -0xfb, -0x10f, -0x108,
+        -0xe9, -0xd4, -0xda, -0xe7, -0xed, -0xf0, -0xf1, -0x10c, -0xff, -0xd3,
+        -0xfb, -0xed, -0xc9, -0x107, -0xe4, -0xbb, -0xe9, -0xeb, -0xf6, -0xfb,
+        -0x114, -0x12e, -0x105, -0x116, -0x134, -0x138, -0x149, -0x12a, -0x11a, -0x13c,
+        -0x151, -0x13f, -0x13a, -0x16f, -0x176, -0x15d, -0x16d, -0x169, -0x163, -0x170,
+        -0x176, -0x181, -0x17d, -0x173, -0x18b, -0x1af, -0x1ad, -0x185, -0x18c, -0x1b0,
+        -0x1aa, -0x1b9, -0x1c0, -0x1b7, -0x1d5, -0x1d7, -0x1ca, -0x1cd, -0x1e8, -0x1f3,
+        -0x1c6, -0x1cd, -0x1c2, -0x191, -0x1a2, -0x1a3, -0x193, -0x187, -0x19b, -0x1b0,
+        -0x184, -0x199, -0x1bb, -0x1a9, -0x196, -0x18c, -0x1b7, -0x1b0, -0x19d, -0x1b9,
+        -0x1b2, -0x1c2, -0x1d1, -0x1dd, -0x1ce, -0x1a6, -0x1cf, -0x1e4, -0x1dc, -0x1c9,
+        -0x1bc, -0x1e2, -0x1c8, -0x1c7, -0x1d5, -0x1c1, -0x1dc, -0x1bd, -0x1cd, -0x1fe,
+        -0x1d7, -0x1e6, -0x1f3, -0x1f3, -0x201, -0x1f0, -0x1f8, -0x1f0, -0x1f4, -0x206,
+        -0x1f3, -0x206, -0x20d, -0x1f5, -0x1e1, -0x1d5, -0x1fe, -0x214, -0x1f4, -0x1f3,
+        -0x21a, -0x232, -0x214, -0x203, -0x20b, -0x1fc, -0x1f9, -0x1ef, -0x1e5, -0x1ef,
+        -0x1de, -0x1dd, -0x1ea, -0x1f2, -0x219, -0x21d, -0x201, -0x1ff, -0x1fa, -0x205,
+        -0x21f, -0x215, -0x210, -0x217, -0x20c, -0x21f, -0x223, -0x202, -0x208, -0x21f,
+        -0x233, -0x22f, -0x221, -0x229, -0x233, -0x239, -0x218, -0x21d, -0x242, -0x22e,
+        -0x23d, -0x239, -0x22f, -0x251, -0x238, -0x22e, -0x22e, -0x234, -0x236, -0x1fc,
+        -0x220, -0x254, -0x241, -0x249, -0x250, -0x260, -0x25e, -0x244, -0x24c, -0x267,
+        -0x268, -0x25d, -0x272, -0x24e, -0x245, -0x275, -0x259, -0x254, -0x251, -0x252,
+        -0x27e, -0x251, -0x23f, -0x25b, -0x24c, -0x254, -0x270, -0x274, -0x265, -0x267,
+        -0x265, -0x274, -0x27f, -0x25c, -0x279, -0x282, -0x266, -0x281, -0x271, -0x264,
+        -0x26e, -0x262, -0x262, -0x267, -0x270, -0x25e, -0x260, -0x276, -0x269, -0x273,
+        -0x286, -0x282, -0x27d, -0x27d, -0x282, -0x292, -0x289, -0x25e, -0x263, -0x253,
+        -0x22b, -0x24a, -0x26d, -0x27c, -0x263, -0x251, -0x269, -0x256, -0x25d, -0x263,
+        -0x259, -0x26b, -0x267, -0x26e, -0x267, -0x267, -0x265, -0x24f, -0x277, -0x25e,
+        -0x24d, -0x28e, -0x26b, -0x251, -0x25b, -0x256, -0x26f, -0x256, -0x245, -0x25c,
+        -0x266, -0x26d, -0x266, -0x260, -0x25f, -0x265, -0x25d, -0x254, -0x26b, -0x257,
+        -0x252, -0x27d, -0x270, -0x265, -0x274, -0x25a, -0x24d, -0x25b, -0x258, -0x255,
+        -0x256, -0x25c, -0x260, -0x247, -0x24b, -0x25a, -0x24e, -0x250, -0x23b, -0x234,
+        -0x254, -0x242, -0x22b, -0x241, -0x247, -0x231, -0x22a, -0x223, -0x20c, -0x212,
+        -0x219, -0x209, -0x203, -0x203, -0x200, -0x205, -0x217, -0x212, -0x205, -0x20c,
+        -0x1ec, -0x1ef, -0x20d, -0x1f2, -0x1ee, -0x1f3, -0x1eb, -0x1e4, -0x1ca, -0x1c6,
+        -0x1b7, -0x1b2, -0x1d4, -0x1d9, -0x1b7, -0x199, -0x1b7, -0x1c7, -0x1a5, -0x199,
+        -0x18d, -0x1a7, -0x1c0, -0x1a9, -0x1b6, -0x1a7, -0x17f, -0x18c, -0x186, -0x172,
+        -0x173, -0x178, -0x192, -0x190, -0x16d, -0x174, -0x17f, -0x179, -0x173, -0x15b,
+        -0x167, -0x17b, -0x16b, -0x169, -0x15c, -0x160, -0x16c, -0x156, -0x159, -0x151,
+        -0x13f, -0x147, -0x13f, -0x144, -0x133, -0x116, -0x12b, -0x134, -0x120, -0x118,
+        -0x115, -0x110, -0x114, -0x125, -0x128, -0x11f, -0x112, -0xfb, -0xf1, -0xe9,
+        -0xc2, -0xa7, -0xb3, -0xc3, -0xbf, -0x9f, -0x96, -0xa6, -0xa8, -0xb6,
+        -0xa8, -0x8e, -0xa6, -0xb9, -0xb1, -0x9e, -0x96, -0x80, -0x69, -0x6a,
+        -0x55, -0x5b, -0x67, -0x69, -0x7b, -0x5d, -0x67, -0x6a, -0x48, -0x66,
+        -0x50, -0x37, -0x41, -0x42, -0x45, -0x1a, -0x23, -0x33, -0x27, -0x3a,
+        -0x1b, -0xf, -0x4, 0x2, -0x12, 0x8, -0x11, 0x7, 0x29, 0x8,
+};
+
+static int16_t* ofms[3] = {denoisedInf0, denoisedInf1, denoisedInf2};
+
+#endif  /* RNNUC_TEST_DATA */
\ No newline at end of file
diff --git a/tests/use_case/noise_reduction/RNNoiseModelTests.cc b/tests/use_case/noise_reduction/RNNoiseModelTests.cc
new file mode 100644
index 0000000..705c41a
--- /dev/null
+++ b/tests/use_case/noise_reduction/RNNoiseModelTests.cc
@@ -0,0 +1,170 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "RNNoiseModel.hpp"
+#include "hal.h"
+#include "TensorFlowLiteMicro.hpp"
+#include "TestData_noise_reduction.hpp"
+
+#include <algorithm>
+#include <array>
+#include <catch.hpp>
+#include <cstring>
+#include <random>
+#include <vector>
+
+bool RunInference(arm::app::Model& model, const std::vector<int8_t>& vec,
+                    const size_t sizeRequired, const size_t dataInputIndex)
+{
+    TfLiteTensor* inputTensor = model.GetInputTensor(dataInputIndex);
+    REQUIRE(inputTensor);
+    size_t copySz = inputTensor->bytes < sizeRequired ? inputTensor->bytes : sizeRequired;
+    const int8_t* vecData = vec.data();
+    memcpy(inputTensor->data.data, vecData, copySz);
+    return model.RunInference();
+}
+
+void genRandom(size_t bytes, std::vector<int8_t>& randomAudio)
+{
+    randomAudio.resize(bytes);
+    std::random_device rndDevice;
+    std::mt19937 mersenneGen{rndDevice()};
+    std::uniform_int_distribution<short> dist {-128, 127};
+    auto gen = [&dist, &mersenneGen](){
+        return dist(mersenneGen);
+    };
+    std::generate(std::begin(randomAudio), std::end(randomAudio), gen);
+}
+
+bool RunInferenceRandom(arm::app::Model& model, const size_t dataInputIndex)
+{
+    std::array<size_t, 4> inputSizes = {IFM_0_DATA_SIZE, IFM_1_DATA_SIZE, IFM_2_DATA_SIZE, IFM_3_DATA_SIZE};
+    std::vector<int8_t> randomAudio;
+    TfLiteTensor* inputTensor = model.GetInputTensor(dataInputIndex);
+    REQUIRE(inputTensor);
+    genRandom(inputTensor->bytes, randomAudio);
+
+    REQUIRE(RunInference(model, randomAudio, inputSizes[dataInputIndex], dataInputIndex));
+    return true;
+}
+
+TEST_CASE("Running random inference with TensorFlow Lite Micro and RNNoiseModel Int8", "[RNNoise]")
+{
+    arm::app::RNNoiseModel model{};
+
+    REQUIRE_FALSE(model.IsInited());
+    REQUIRE(model.Init());
+    REQUIRE(model.IsInited());
+
+    model.ResetGruState();
+
+    for (int i = 1; i < 4; ++i) {
+        TfLiteTensor* inputGruStateTensor = model.GetInputTensor(i);
+        auto* inputGruState = tflite::GetTensorData<int8_t>(inputGruStateTensor);
+        for (size_t tIndex = 0;  tIndex < inputGruStateTensor->bytes; tIndex++) {
+            REQUIRE(inputGruState[tIndex] == arm::app::GetTensorQuantParams(inputGruStateTensor).offset);
+        }
+    }
+
+    REQUIRE(RunInferenceRandom(model, 0));
+}
+
+class TestRNNoiseModel : public arm::app::RNNoiseModel
+{
+public:
+    bool CopyGruStatesTest() {
+        return RNNoiseModel::CopyGruStates();
+    }
+
+    std::vector<std::pair<size_t, size_t>> GetStateMap() {
+        return  m_gruStateMap;
+    }
+
+};
+
+template <class T>
+void printArray(size_t dataSz, T data){
+    char strhex[8];
+    std::string strdump;
+
+    for (size_t i = 0; i < dataSz; ++i) {
+        if (0 == i % 8) {
+            printf("%s\n\t", strdump.c_str());
+            strdump.clear();
+        }
+        snprintf(strhex, sizeof(strhex) - 1,
+                 "0x%02x, ", data[i]);
+        strdump += std::string(strhex);
+    }
+
+    if (!strdump.empty()) {
+        printf("%s\n", strdump.c_str());
+    }
+}
+
+/* A zero-initialised output state holds for gcc on x86; it is not guaranteed for other compilers and platforms. */
+TEST_CASE("Test initial GRU out state is 0", "[RNNoise]")
+{
+    TestRNNoiseModel model{};
+    model.Init();
+
+    auto map = model.GetStateMap();
+
+    for(auto& mapping: map) {
+        TfLiteTensor* gruOut = model.GetOutputTensor(mapping.first);
+        auto* outGruState = tflite::GetTensorData<uint8_t>(gruOut);
+
+        printf("gru out state:");
+        printArray(gruOut->bytes, outGruState);
+
+        for (size_t tIndex = 0;  tIndex < gruOut->bytes; tIndex++) {
+            REQUIRE(outGruState[tIndex] == 0);
+        }
+    }
+
+}
+
+TEST_CASE("Test GRU state copy", "[RNNoise]")
+{
+    TestRNNoiseModel model{};
+    model.Init();
+    REQUIRE(RunInferenceRandom(model, 0));
+
+    auto map = model.GetStateMap();
+
+    std::vector<std::vector<uint8_t>> oldStates;
+    for(auto& mapping: map) {
+
+        TfLiteTensor* gruOut = model.GetOutputTensor(mapping.first);
+        auto* outGruState = tflite::GetTensorData<uint8_t>(gruOut);
+        /* Save old output state. */
+        std::vector<uint8_t> oldState(gruOut->bytes);
+        memcpy(oldState.data(), outGruState, gruOut->bytes);
+        oldStates.push_back(oldState);
+    }
+
+    model.CopyGruStatesTest();
+    auto statesIter = oldStates.begin();
+    for(auto& mapping: map) {
+        TfLiteTensor* gruInput = model.GetInputTensor(mapping.second);
+        auto* inGruState = tflite::GetTensorData<uint8_t>(gruInput);
+        for (size_t tIndex = 0;  tIndex < gruInput->bytes; tIndex++) {
+            REQUIRE((*statesIter)[tIndex] == inGruState[tIndex]);
+        }
+        statesIter++;
+    }
+
+}
\ No newline at end of file
diff --git a/tests/use_case/noise_reduction/RNNoiseProcessingTests.cpp b/tests/use_case/noise_reduction/RNNoiseProcessingTests.cpp
new file mode 100644
index 0000000..24dd550
--- /dev/null
+++ b/tests/use_case/noise_reduction/RNNoiseProcessingTests.cpp
@@ -0,0 +1,247 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "RNNoiseProcess.hpp"
+#include <catch.hpp>
+#include <cmath>
+#include <limits>
+#include <vector>
+
+
+/* Elements [0:480] from p232_113.wav cast as fp32. */
+const std::vector<float> testWav0 = std::vector<float>{
+    -1058.0, -768.0, -737.0, -1141.0, -1015.0, -315.0, -205.0, -105.0, -150.0, 277.0,
+    424.0, 523.0, 431.0, 256.0, 441.0, 830.0, 413.0, 421.0, 1002.0, 1186.0,
+    926.0, 841.0, 894.0, 1419.0, 1427.0, 1102.0, 587.0, 455.0, 962.0, 904.0,
+    504.0, -61.0, 242.0, 534.0, 407.0, -344.0, -973.0, -1178.0, -1056.0, -1454.0,
+    -1294.0, -1729.0, -2234.0, -2164.0, -2148.0, -1967.0, -2699.0, -2923.0, -2408.0, -2304.0,
+    -2567.0, -2894.0, -3104.0, -3045.0, -3210.0, -3774.0, -4159.0, -3902.0, -3525.0, -3652.0,
+    -3804.0, -3493.0, -3034.0, -2715.0, -2599.0, -2432.0, -2045.0, -1934.0, -1966.0, -2018.0,
+    -1757.0, -1296.0, -1336.0, -1124.0, -1282.0, -1001.0, -601.0, -706.0, -511.0, 278.0,
+    678.0, 1009.0, 1088.0, 1150.0, 1815.0, 2572.0, 2457.0, 2150.0, 2566.0, 2720.0,
+    3040.0, 3203.0, 3353.0, 3536.0, 3838.0, 3808.0, 3672.0, 3346.0, 3281.0, 3570.0,
+    3215.0, 2684.0, 3153.0, 3167.0, 3049.0, 2837.0, 2965.0, 3167.0, 3286.0, 2572.0,
+    1952.0, 1434.0, 1398.0, 505.0, -740.0, -898.0, -598.0, -1047.0, -1514.0, -1756.0,
+    -1457.0, -1518.0, -1497.0, -1605.0, -1364.0, -1332.0, -1306.0, -2361.0, -2809.0, -2185.0,
+    -1323.0, -1714.0, -2323.0, -1888.0, -1273.0, -1208.0, -1656.0, -1543.0, -736.0, -772.0,
+    -1113.0, -1001.0, -185.0, 468.0, 625.0, 609.0, 1080.0, 1654.0, 1678.0, 1462.0,
+    1468.0, 2065.0, 2266.0, 1779.0, 1513.0, 1646.0, 1721.0, 2019.0, 1212.0, 688.0,
+    1256.0, 1917.0, 2104.0, 1714.0, 1581.0, 2013.0, 1946.0, 2276.0, 2419.0, 2546.0,
+    2229.0, 1768.0, 1691.0, 1484.0, 914.0, 591.0, -279.0, 85.0, -190.0, -647.0,
+    -1120.0, -1636.0, -2057.0, -2177.0, -1650.0, -1826.0, -2206.0, -2568.0, -2374.0, -2227.0,
+    -2013.0, -1844.0, -2079.0, -1953.0, -1609.0, -1897.0, -2185.0, -2320.0, -2212.0, -2593.0,
+    -3077.0, -2840.0, -2081.0, -1642.0, -1793.0, -1437.0, -870.0, -451.0, -242.0, -267.0,
+    314.0, 641.0, 448.0, 721.0, 1087.0, 1720.0, 1831.0, 1381.0, 1254.0, 1873.0,
+    2504.0, 2496.0, 2265.0, 2396.0, 2703.0, 2933.0, 3100.0, 3423.0, 3464.0, 3846.0,
+    3890.0, 3959.0, 4047.0, 4058.0, 4327.0, 3907.0, 3505.0, 3837.0, 3471.0, 3490.0,
+    2991.0, 3129.0, 3082.0, 2950.0, 2329.0, 1964.0, 1523.0, 1179.0, 673.0, 439.0,
+    -130.0, -878.0, -1670.0, -1648.0, -1566.0, -1721.0, -2028.0, -2308.0, -1826.0, -2027.0,
+    -2221.0, -2025.0, -1858.0, -1966.0, -2384.0, -2221.0, -1936.0, -1747.0, -2159.0, -2265.0,
+    -2186.0, -1536.0, -1520.0, -1838.0, -1919.0, -1630.0, -1450.0, -1751.0, -2751.0, -3125.0,
+    -3258.0, -3049.0, -3199.0, -3272.0, -2498.0, -1884.0, -1660.0, -1894.0, -1208.0, -736.0,
+    -346.0, -337.0, -628.0, -274.0, 71.0, 245.0, 255.0, 132.0, 433.0, 229.0,
+    345.0, -85.0, 221.0, 278.0, 227.0, -107.0, -613.0, -215.0, -448.0, -306.0,
+    -845.0, -456.0, -390.0, -239.0, -895.0, -1151.0, -619.0, -554.0, -495.0, -1141.0,
+    -1079.0, -1342.0, -1252.0, -1668.0, -2177.0, -2478.0, -2116.0, -2163.0, -2343.0, -2380.0,
+    -2269.0, -1541.0, -1668.0, -2034.0, -2264.0, -2200.0, -2224.0, -2578.0, -2213.0, -2069.0,
+    -1774.0, -1437.0, -1845.0, -1812.0, -1654.0, -1492.0, -1914.0, -1944.0, -1870.0, -2477.0,
+    -2538.0, -2298.0, -2143.0, -2146.0, -2311.0, -1777.0, -1193.0, -1206.0, -1254.0, -743.0,
+    -84.0, -129.0, -469.0, -679.0, -114.0, 352.0, 239.0, 93.0, 381.0, 543.0,
+    283.0, 196.0, -460.0, -443.0, -307.0, -445.0, -979.0, -1095.0, -1050.0, -1172.0,
+    -967.0, -1246.0, -1217.0, -1830.0, -2167.0, -2712.0, -2778.0, -2980.0, -3055.0, -3839.0,
+    -4253.0, -4163.0, -4240.0, -4487.0, -4861.0, -5019.0, -4875.0, -4883.0, -5109.0, -5022.0,
+    -4438.0, -4639.0, -4509.0, -4761.0, -4472.0, -4841.0, -4910.0, -5264.0, -4743.0, -4802.0,
+    -4617.0, -4302.0, -4367.0, -3968.0, -3632.0, -3434.0, -4356.0, -4329.0, -3850.0, -3603.0,
+    -3654.0, -4229.0, -4262.0, -3681.0, -3026.0, -2570.0, -2486.0, -1859.0, -1264.0, -1145.0,
+    -1064.0, -1125.0, -855.0, -400.0, -469.0, -498.0, -691.0, -475.0, -528.0, -809.0,
+    -948.0, -1047.0, -1250.0, -1691.0, -2110.0, -2790.0, -2818.0, -2589.0, -2415.0, -2710.0,
+    -2744.0, -2767.0, -2506.0, -2285.0, -2361.0, -2103.0, -2336.0, -2341.0, -2687.0, -2667.0,
+    -2925.0, -2761.0, -2816.0, -2644.0, -2456.0, -2186.0, -2092.0, -2498.0, -2773.0, -2554.0,
+    -2218.0, -2626.0, -2996.0, -3119.0, -2574.0, -2582.0, -3009.0, -2876.0, -2747.0, -2999.0
+};
+
+/* Elements [480:960] from p232_113.wav cast as fp32. */
+const std::vector<float> testWav1 = std::vector<float>{
+    -2918.0, -2418.0, -2452.0, -2172.0, -2261.0, -2337.0, -2399.0, -2209.0, -2269.0, -2509.0,
+    -2721.0, -2884.0, -2891.0, -3440.0, -3757.0, -4338.0, -4304.0, -4587.0, -4714.0, -5686.0,
+    -5699.0, -5447.0, -5008.0, -5052.0, -5135.0, -4807.0, -4515.0, -3850.0, -3804.0, -3813.0,
+    -3451.0, -3527.0, -3764.0, -3627.0, -3527.0, -3737.0, -4043.0, -4394.0, -4672.0, -4561.0,
+    -4718.0, -4737.0, -5018.0, -5187.0, -5043.0, -4734.0, -4841.0, -5363.0, -5870.0, -5697.0,
+    -5731.0, -6081.0, -6557.0, -6306.0, -6422.0, -5990.0, -5738.0, -5559.0, -5880.0, -6093.0,
+    -6718.0, -6853.0, -6966.0, -6907.0, -6887.0, -7046.0, -6902.0, -6927.0, -6754.0, -6891.0,
+    -6630.0, -6381.0, -5877.0, -5858.0, -6237.0, -6129.0, -6248.0, -6297.0, -6717.0, -6731.0,
+    -5888.0, -5239.0, -5635.0, -5808.0, -5418.0, -4780.0, -4311.0, -4082.0, -4053.0, -3274.0,
+    -3214.0, -3194.0, -3206.0, -2407.0, -1824.0, -1753.0, -1908.0, -1865.0, -1535.0, -1246.0,
+    -1434.0, -1970.0, -1890.0, -1815.0, -1949.0, -2296.0, -2356.0, -1972.0, -2156.0, -2057.0,
+    -2189.0, -1861.0, -1640.0, -1456.0, -1641.0, -1786.0, -1781.0, -1880.0, -1918.0, -2251.0,
+    -2256.0, -2608.0, -3169.0, -2983.0, -2785.0, -2948.0, -3267.0, -3856.0, -3847.0, -3534.0,
+    -3799.0, -4028.0, -4438.0, -4509.0, -4343.0, -3913.0, -3752.0, -3709.0, -3302.0, -2612.0,
+    -2848.0, -3320.0, -3049.0, -2171.0, -2342.0, -2746.0, -2618.0, -2031.0, -1166.0, -1454.0,
+    -995.0, -156.0, 573.0, 1240.0, 506.0, 296.0, 524.0, 581.0, 212.0, -191.0,
+    169.0, -46.0, 17.0, 221.0, 586.0, 347.0, 40.0, 217.0, 951.0, 694.0,
+    191.0, -535.0, -260.0, 252.0, 187.0, -230.0, -541.0, -124.0, -59.0, -1152.0,
+    -1397.0, -1176.0, -1195.0, -2218.0, -2960.0, -2338.0, -1895.0, -2460.0, -3599.0, -3728.0,
+    -2896.0, -2672.0, -4025.0, -4322.0, -3625.0, -3066.0, -3599.0, -4989.0, -5005.0, -3988.0,
+    -3153.0, -3921.0, -4349.0, -4444.0, -3526.0, -2896.0, -3810.0, -4252.0, -3300.0, -2234.0,
+    -2044.0, -3229.0, -2959.0, -2542.0, -1821.0, -1561.0, -1853.0, -2112.0, -1361.0, -831.0,
+    -840.0, -999.0, -1021.0, -769.0, -388.0, -377.0, -513.0, -790.0, -938.0, -911.0,
+    -1654.0, -1809.0, -2326.0, -1879.0, -1956.0, -2241.0, -2307.0, -1900.0, -1620.0, -2265.0,
+    -2170.0, -1257.0, -681.0, -1552.0, -2405.0, -2443.0, -1941.0, -1774.0, -2245.0, -2652.0,
+    -2769.0, -2622.0, -2714.0, -3558.0, -4449.0, -4894.0, -4583.0, -5179.0, -6471.0, -6526.0,
+    -5918.0, -5153.0, -5770.0, -6250.0, -5532.0, -4751.0, -4810.0, -5519.0, -5661.0, -5028.0,
+    -4737.0, -5482.0, -5837.0, -5005.0, -4200.0, -4374.0, -4962.0, -5199.0, -4464.0, -4106.0,
+    -4783.0, -5151.0, -4588.0, -4137.0, -3936.0, -4954.0, -4582.0, -3855.0, -2912.0, -2867.0,
+    -2965.0, -2919.0, -2362.0, -1800.0, -2025.0, -1931.0, -1438.0, -979.0, -1124.0, -1124.0,
+    -1130.0, -781.0, -652.0, -814.0, -976.0, -1269.0, -1052.0, -551.0, -724.0, -947.0,
+    -934.0, -856.0, -705.0, -894.0, -916.0, -861.0, -487.0, -681.0, -493.0, -902.0,
+    -547.0, -466.0, -1013.0, -1466.0, -2178.0, -1907.0, -1618.0, -2169.0, -3226.0, -2973.0,
+    -2390.0, -2227.0, -3257.0, -4297.0, -4227.0, -3022.0, -3017.0, -4268.0, -4956.0, -4199.0,
+    -3099.0, -3627.0, -4820.0, -4666.0, -3475.0, -2648.0, -3613.0, -4521.0, -3942.0, -3083.0,
+    -2832.0, -3912.0, -4289.0, -3684.0, -2728.0, -2702.0, -3279.0, -2636.0, -2261.0, -2170.0,
+    -2346.0, -2500.0, -1894.0, -1745.0, -1849.0, -2078.0, -2170.0, -1608.0, -1027.0, -1350.0,
+    -1330.0, -1128.0, -478.0, -1113.0, -1584.0, -1656.0, -1636.0, -1678.0, -1726.0, -1554.0,
+    -1434.0, -1243.0, -748.0, -463.0, -277.0, 216.0, 517.0, 1063.0, 1101.0, 839.0,
+    724.0, 543.0, 713.0, 598.0, 806.0, 499.0, 612.0, 385.0, 830.0, 939.0,
+    602.0, 60.0, -378.0, -300.0, -308.0, -1079.0, -1461.0, -997.0, -855.0, -1087.0,
+    -1579.0, -1314.0, -742.0, -452.0, -327.0, 224.0, -46.0, -119.0, -339.0, -22.0,
+    172.0, -137.0, 196.0, -89.0, 34.0, -324.0, -281.0, -999.0, -1134.0, -516.0,
+    101.0, 321.0, -584.0, -231.0, 1254.0, 1744.0, 1175.0, 684.0, 842.0, 1439.0,
+    1507.0, 829.0, 296.0, 519.0, 716.0, 961.0, 175.0, -494.0, -501.0, -628.0,
+    -658.0, -700.0, -989.0, -1342.0, -1298.0, -1347.0, -1223.0, -1388.0, -1308.0, -1184.0,
+    -468.0, -2.0, -444.0, -388.0, -80.0, 361.0, 700.0, 120.0, 101.0, 464.0,
+    654.0, 40.0, -586.0, -607.0, -730.0, -705.0, -844.0, -692.0, -1032.0, -1216.0
+};
+
+/* Golden RNNoise pre-processing output for [0:480] p232_113.wav */
+const std::vector<float> RNNoisePreProcessGolden0 {
+    4.597353, -0.908727, 1.067204, -0.034760, -0.084974,
+    -0.361086, -1.494876, -0.173461, -0.671268, 0.245229,
+    0.371219, 0.159632, 0.230595, 0.245066, 0.148395,
+    -0.660396, -0.157954, 0.136425, 0.062801, -0.049542,
+    0.179730, 0.178653, 4.597353, -0.908727, 1.067204,
+    -0.034760, -0.084974, -0.361086, 4.597353, -0.908727,
+    1.067204, -0.034760, -0.084974, -0.361086, -1.437083,
+    -0.722769, -0.232802, -0.178104, -0.431379, -0.591088,
+    -0.930000, 1.257937
+};
+
+/* Golden RNNoise pre-processing output for [480:960] p232_113.wav */
+const std::vector<float> RNNoisePreProcessGolden1 {
+    11.031052, -1.249548, 2.498929, 0.492149, 0.364215,
+    0.138582, -0.846219, 0.279253, -0.526596, 0.610061,
+    0.820483, 0.293216, -0.047377, -0.178503, 0.229638,
+    -0.516174, 0.149612, 0.100330, 0.010542, 0.028561,
+    -0.037554, -0.094355, 6.433699, -0.340821, 1.431725,
+    0.526909, 0.449189, 0.499668, -2.761007, 1.476633,
+    -0.702682, 0.596430, 0.619138, 1.221840, -0.739308,
+    -0.490715, -0.085385, 0.035244, 0.104252, -0.192160,
+    -0.810000, -0.430191
+};
+
+/* Golden RNNoise post-processing (denoised) output for [0:480] p232_113.wav */
+const std::vector<float> RNNoisePostProcessDenoiseGolden0 {
+        0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        -1, 0, 0, 0, 0, -1, 0, -1, 0, 0,
+        -1, 0, -1, -1, -1, -1, -1, -1, 0, -1,
+        -1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
+        2, 2, 2, 2, 3, 3, 3, 3, 3, 4,
+        4, 4, 4, 4, 4, 4, 4, 5, 4, 5,
+        5, 5, 5, 5, 5, 4, 5, 4, 4, 4,
+        4, 4, 3, 4, 3, 3, 3, 2, 3, 2,
+        2, 2, 1, 1, 1, 1, 1, 0, 0, 0,
+        0, 0, 0, 0, 0, 0, 0, 0, 0, -1,
+        0, -1, 0, -1, -1, -1, -1, -1, -2, -1,
+        -1, -2, -1, -2, -1, -2, -2, -1, -2, -1,
+        -2, -1, -1, -1, 0, -1, 0, -1, 0, 0,
+        -1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+        0, 1, 0, 1, 0, 1, 1, 1, 2, 1,
+        2, 2, 2, 3, 2, 3, 2, 3, 3, 3,
+        4, 3, 4, 3, 3, 4, 3, 5, 4, 4,
+        4, 4, 5, 4, 5, 4, 4, 5, 4, 5,
+        4, 4, 4, 3, 4, 3, 4, 3, 2, 3,
+        1, 2, 0, 0, 0, 0, 0, -1, 0, -1,
+        -2, -1, -3, -1, -3, -2, -2, -3, -2, -3,
+        -2, -3, -3, -2, -4, -2, -3, -3, -3, -4,
+        -3, -4, -3, -4, -5, -4, -6, -4, -6, -6,
+        -5, -7, -5, -7, -6, -6, -7, -6, -8, -6,
+        -7, -7, -6, -8, -6, -8, -6, -7, -8, -6,
+        -9, -7, -8, -8, -7, -9, -7, -9, -8, -8,
+        -8, -6, -8, -5, -6, -5, -3, -3, 0, -1,
+        1, 2, 3, 7, 6, 10, 11, 13, 16, 15,
+        20, 19, 23, 24, 25, 28, 27, 31, 31, 32,
+        34, 32, 35, 33, 35, 35, 34, 36, 33, 35,
+        33, 32, 33, 30, 31, 28, 28, 27, 24, 25,
+        20, 21, 18, 16, 15, 11, 12, 8, 8, 7,
+        4, 6, 1, 3, 1, 0, 2, 0, 2, 0,
+        0, 1, 0, 3, 0, 3, 1, 1, 4, 0,
+        4, 1, 3, 3, 1, 4, 0, 3, 1, 0,
+        2, -1, 1, -1, -1, 0, -3, 0, -3, 0,
+        -1, -1, 2, 0, 5, 4, 7, 11, 11, 18,
+        15, 21, 23, 24, 31, 29, 38, 37, 42, 46,
+        45, 54, 51, 59, 60, 61, 68, 62, 70, 66,
+        68, 73, 69, 79, 73, 79, 76, 70, 75, 61,
+        71, 64, 74, 85, 70, 86, 51, 92, 73
+};
+
+/* Golden RNNoise model output (22 values) for [0:480] p232_113.wav, fed into post-processing. */
+const std::vector<float> RNNoiseModelOutputGolden0 {
+    0.157920, 0.392021, 0.368438, 0.258663, 0.202650, 0.256764,
+    0.185472, 0.149062, 0.147317, 0.142133, 0.148236, 0.173523,
+    0.197672, 0.200920, 0.198408, 0.147500, 0.140215, 0.166651,
+    0.250242, 0.256278, 0.252104, 0.241938};
+
+TEST_CASE("RNNoise preprocessing calculation test",  "[RNNoise]")
+{
+    SECTION("FP32")
+    {
+        arm::app::rnn::RNNoiseProcess rnnoiseProcessor;
+        arm::app::rnn::FrameFeatures features;
+
+        rnnoiseProcessor.PreprocessFrame(testWav0.data(), testWav0.size(), features);
+        REQUIRE_THAT( features.m_featuresVec,
+            Catch::Approx( RNNoisePreProcessGolden0 ).margin(0.1));
+        rnnoiseProcessor.PreprocessFrame(testWav1.data(), testWav1.size(), features);
+        REQUIRE_THAT( features.m_featuresVec,
+            Catch::Approx( RNNoisePreProcessGolden1 ).margin(0.1));
+    }
+}
+
+
+TEST_CASE("RNNoise postprocessing test", "[RNNoise]")
+{
+    arm::app::rnn::RNNoiseProcess rnnoiseProcessor;
+    arm::app::rnn::FrameFeatures features;
+    rnnoiseProcessor.PreprocessFrame(testWav0.data(), testWav0.size(), features);
+    std::vector<float> denoised(testWav0.size());
+    rnnoiseProcessor.PostProcessFrame(RNNoiseModelOutputGolden0, features, denoised);
+
+    /* Round each denoised sample to the nearest integer for comparison. */
+    std::vector<float> denoisedRoundedInt;
+    denoisedRoundedInt.reserve(denoised.size());
+    for (auto sample : denoised) {
+        denoisedRoundedInt.push_back(std::roundf(sample));
+    }
+
+    REQUIRE_THAT( denoisedRoundedInt, Catch::Approx( RNNoisePostProcessDenoiseGolden0 ).margin(1));
+}
\ No newline at end of file
diff --git a/tests/use_case/vww/InferenceVisualWakeWordModelTests.cc b/tests/use_case/vww/InferenceVisualWakeWordModelTests.cc
index 3a42dde..82fea9f 100644
--- a/tests/use_case/vww/InferenceVisualWakeWordModelTests.cc
+++ b/tests/use_case/vww/InferenceVisualWakeWordModelTests.cc
@@ -28,9 +28,9 @@
     TfLiteTensor* inputTensor = model.GetInputTensor(0);
     REQUIRE(inputTensor);
 
-    const size_t copySz = inputTensor->bytes < IFM_DATA_SIZE ?
+    const size_t copySz = inputTensor->bytes < IFM_0_DATA_SIZE ?
                             inputTensor->bytes :
-                            IFM_DATA_SIZE;
+                            IFM_0_DATA_SIZE;
 
     memcpy(inputTensor->data.data, imageData, copySz);
 
@@ -52,7 +52,7 @@
     TfLiteTensor* outputTensor = model.GetOutputTensor(0);
 
     REQUIRE(outputTensor);
-    REQUIRE(outputTensor->bytes == OFM_DATA_SIZE);
+    REQUIRE(outputTensor->bytes == OFM_0_DATA_SIZE);
     auto tensorData = tflite::GetTensorData<T>(outputTensor);
     REQUIRE(tensorData);