Face detection demo from Emza Visual Sense
Signed-off-by: Michael Levit michaell@emza-vs.com

Change-Id: I7958b05b5dbe9a785e0f8a241b716c17a9ca976f
diff --git a/Readme.md b/Readme.md
index 6f95808..596c991 100644
--- a/Readme.md
+++ b/Readme.md
@@ -37,6 +37,7 @@
 |  [Visual Wake Word](./docs/use_cases/visual_wake_word.md)                 | Recognize if person is present in a given image | [MicroNet](https://github.com/ARM-software/ML-zoo/tree/7dd3b16bb84007daf88be8648983c07f3eb21140/models/visual_wake_words/micronet_vww4/tflite_int8/vww4_128_128_INT8.tflite)|
 |  [Noise Reduction](./docs/use_cases/noise_reduction.md)        | Remove noise from audio while keeping speech intact | [RNNoise](https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8)   |
 |  [Generic inference runner](./docs/use_cases/inference_runner.md) | Code block allowing you to develop your own use case for Ethos-U NPU | Your custom model |
+|  [Object detection](./docs/use_cases/object_detection.md)      | Detects faces and draws a bounding box around them in a given image | [Yolo Fastest](https://github.com/emza-vs/ModelZoo/blob/master/object_detection/yolo-fastest_192_face_v4.tflite) |
 
 The above use cases implement end-to-end ML flow including data pre-processing and post-processing. They will allow you
 to investigate embedded software stack, evaluate performance of the networks running on Cortex-M55 CPU and Ethos-U NPU
@@ -196,4 +197,5 @@
 | [Keyword Spotting Samples](./resources/kws/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
 | [Keyword Spotting and Automatic Speech Recognition Samples](./resources/kws_asr/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
 | [Visual Wake Word Samples](./resources/vww/samples/files.md) | [Creative Commons Attribution 1.0](./resources/LICENSE_CC_1.0.txt) | <https://www.pexels.com> |
-| [Noise Reduction Samples](./resources/noise_reduction/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <https://datashare.ed.ac.uk/handle/10283/2791/> | 
+| [Noise Reduction Samples](./resources/noise_reduction/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](./resources/LICENSE_CC_4.0.txt) | <https://datashare.ed.ac.uk/handle/10283/2791/> |
+| [Object Detection Samples](./resources/object_detection/samples/files.md) | [Creative Commons Attribution 1.0](./resources/LICENSE_CC_1.0.txt) | <https://www.pexels.com> | 
diff --git a/docs/use_cases/object_detection.md b/docs/use_cases/object_detection.md
new file mode 100644
index 0000000..e0d8899
--- /dev/null
+++ b/docs/use_cases/object_detection.md
@@ -0,0 +1,403 @@
+# Object Detection Code Sample
+
+- [Object Detection Code Sample](./object_detection.md#object-detection-code-sample)
+  - [Introduction](./object_detection.md#introduction)
+    - [Prerequisites](./object_detection.md#prerequisites)
+  - [Building the code sample application from sources](./object_detection.md#building-the-code-sample-application-from-sources)
+    - [Build options](./object_detection.md#build-options)
+    - [Build process](./object_detection.md#build-process)
+    - [Add custom input](./object_detection.md#add-custom-input)
+    - [Add custom model](./object_detection.md#add-custom-model)
+  - [Setting up and running Ethos-U NPU code sample](./object_detection.md#setting-up-and-running-ethos_u-npu-code-sample)
+    - [Setting up the Ethos-U NPU Fast Model](./object_detection.md#setting-up-the-ethos_u-npu-fast-model)
+    - [Starting Fast Model simulation](./object_detection.md#starting-fast-model-simulation)
+    - [Running Object Detection](./object_detection.md#running-object-detection)
+
+## Introduction
+
+This document describes the process of setting up and running the Arm® *Ethos™-U* NPU Object Detection example.
+
+This use-case example solves the classical computer vision problem of Object Detection. The ML sample was developed
+using the *YOLO Fastest* model that was trained on the *Wider* dataset.
+
+The use-case code can be found in the following directory: [source/use_case/object_detection](../../source/use_case/object_detection).
+
+### Prerequisites
+
+See [Prerequisites](../documentation.md#prerequisites)
+
+## Building the code sample application from sources
+
+### Build options
+
+In addition to the build options already specified in the main documentation, the Object Detection use-case
+specifies:
+
+- `object_detection_MODEL_TFLITE_PATH` - The path to the NN model file in the `TFLite` format. The model is then processed and
+  included in the application `axf` file. The default value points to one of the delivered set of models.
+
+    Note that the parameters `TARGET_PLATFORM` and `ETHOS_U_NPU_ENABLED` must be aligned with
+    the chosen model. In other words:
+
+  - If `ETHOS_U_NPU_ENABLED` is set to `On` or `1`, then the NN model is assumed to be optimized. The model naturally
+    falls back to the Arm® *Cortex®-M* CPU if an unoptimized model is supplied.
+  - if `ETHOS_U_NPU_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized
+    model in this case results in a runtime error.
+
+- `object_detection_FILE_PATH`: The path to the directory containing the images, or a path to a single image file, that is to
+   be used in the application. The default value points to the `resources/object_detection/samples` folder containing the
+   delivered set of images.
+
+    For further information, please refer to: [Add custom input data section](./object_detection.md#add-custom-input).
+
+- `object_detection_IMAGE_SIZE`: The NN model requires input images to be of a specific size. This parameter defines the size
+  of the image side in pixels. Images are considered squared. The default value is `192`, which is what the supplied
+  *YOLO Fastest* model expects.
+
+- `object_detection_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model. By default, it
+  is set to 2MiB and is enough for most models.
+
+- `USE_CASE_BUILD`: is set to `object_detection` to only build this example.
+
+To build **ONLY** the Object Detection example application, add `-DUSE_CASE_BUILD=object_detection` to the `cmake` command
+line, as specified in: [Building](../documentation.md#Building).
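+
+For illustration, the use-case specific options above can be combined into a single configuration command. The model and
+image paths below are placeholders for your own locations:
+
+```commandline
+cmake .. \
+    -Dobject_detection_MODEL_TFLITE_PATH=<path/to/yolo-fastest_192_face_v4_vela.tflite> \
+    -Dobject_detection_FILE_PATH=<path/to/custom_images/> \
+    -Dobject_detection_IMAGE_SIZE=192 \
+    -DUSE_CASE_BUILD=object_detection
+```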
+
+### Build process
+
+> **Note:** This section describes the process for configuring the build for the *MPS3: SSE-300*. To build for a
+> different target platform, please refer to: [Building](../documentation.md#Building).
+
+Create a build directory and navigate inside, like so:
+
+```commandline
+mkdir build_object_detection && cd build_object_detection
+```
+
+On Linux, when providing only the mandatory arguments for the CMake configuration, execute the following command to
+build **only** the Object Detection application to run on the *Ethos-U55* Fast Model:
+
+```commandline
+cmake ../ -DUSE_CASE_BUILD=object_detection
+```
+
+To configure a build that can be debugged using Arm DS, specify the build type as `Debug` and then use the `Arm Compiler`
+toolchain file:
+
+```commandline
+cmake .. \
+    -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-armclang.cmake \
+    -DCMAKE_BUILD_TYPE=Debug \
+    -DUSE_CASE_BUILD=object_detection
+```
+
+For further information, please refer to:
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
+- [Using Arm Compiler](../sections/building.md#using-arm-compiler)
+- [Configuring the build for simple-platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm Fast Model Tools](../sections/building.md#working-with-model-debugger-from-arm-fast-model-tools)
+- [Building for different Ethos-U NPU variants](../sections/building.md#building-for-different-ethos_u-npu-variants)
+
+> **Note:** If re-building with changed parameter values, we recommend that you clean the build directory and re-run
+> the CMake command.
+
+If the CMake command succeeds, build the application as follows:
+
+```commandline
+make -j4
+```
+
+To see compilation and link details, add `VERBOSE=1`.
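+
+For example, a verbose parallel build can be invoked as:
+
+```commandline
+make -j4 VERBOSE=1
+```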
+
+Results of the build are placed under the `build/bin` folder, like so:
+
+```tree
+bin
+ ├── ethos-u-object_detection.axf
+ ├── ethos-u-object_detection.htm
+ ├── ethos-u-object_detection.map
+ └── sectors
+      ├── images.txt
+      └── object_detection
+           ├── ddr.bin
+           └── itcm.bin
+```
+
+The `bin` folder contains the following files:
+
+- `ethos-u-object_detection.axf`: The built application binary for the Object Detection use-case.
+
+- `ethos-u-object_detection.map`: Information from building the application. For example: The libraries used, what was
+  optimized, and the location of objects.
+
+- `ethos-u-object_detection.htm`: Human readable file containing the call graph of application functions.
+
+- `sectors/object_detection`: Folder containing the built application. It is split into files for loading into different FPGA memory
+  regions.
+
+- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/..`
+  folder.
+
+### Add custom input
+
+The object detection application is set up to perform inferences on data found in the folder, or an individual file,
+pointed to by the parameter `object_detection_FILE_PATH`.
+
+To run the application with your own images, first create a folder to hold them and then copy the custom images into the
+following folder:
+
+```commandline
+mkdir /tmp/custom_images
+
+cp custom_image1.bmp /tmp/custom_images/
+```
+
+> **Note:** Clean the build directory before re-running the CMake command.
+
+Next, set `object_detection_FILE_PATH` to the location of this folder when building:
+
+```commandline
+cmake .. \
+    -Dobject_detection_FILE_PATH=/tmp/custom_images/ \
+    -DUSE_CASE_BUILD=object_detection
+```
+
+The images found in the `object_detection_FILE_PATH` folder are picked up and automatically converted to C++ files during the
+CMake configuration stage. They are then compiled into the application during the build phase for performing inference
+with.
+
+The log from the configuration stage tells you what image directory path has been used:
+
+```log
+-- User option object_detection_FILE_PATH is set to /tmp/custom_images
+-- User option object_detection_IMAGE_SIZE is set to 192
+...
+-- Generating image files from /tmp/custom_images
+++ Converting custom_image1.bmp to custom_image1.cc
+...
+-- Defined build user options:
+...
+-- object_detection_FILE_PATH=/tmp/custom_images
+-- object_detection_IMAGE_SIZE=192
+```
+
+After compiling, your custom images have now replaced the default ones in the application.
+
+> **Note:** The CMake parameter `object_detection_IMAGE_SIZE` must match the model input size. When building the
+> application, if the size of any image does not match `IMAGE_SIZE`, then it is rescaled and padded so that it does.
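+
+For example, a hypothetical configuration that uses custom images together with the default 192x192 model input could
+look like this:
+
+```commandline
+cmake .. \
+    -Dobject_detection_FILE_PATH=/tmp/custom_images/ \
+    -Dobject_detection_IMAGE_SIZE=192 \
+    -DUSE_CASE_BUILD=object_detection
+```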
+
+### Add custom model
+
+The application performs inference using the model pointed to by the CMake parameter `MODEL_TFLITE_PATH`.
+
+> **Note:** If you want to run the model using an *Ethos-U*, ensure that your custom model has been successfully run
+> through the Vela compiler *before* continuing.
+
+For further information: [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
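+
+As an indicative sketch only (the exact Vela flags depend on your target configuration and are covered in the section
+linked above), a custom model could be compiled for an *Ethos-U55* with 128 MACs along these lines:
+
+```commandline
+vela <path/to/custom_model.tflite> --accelerator-config=ethos-u55-128
+```
+
+Vela typically writes the optimised model to an `output` directory, with `_vela` appended to the file name.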
+
+Then, you must set `object_detection_MODEL_TFLITE_PATH` to the location of the Vela processed model file and
+`object_detection_LABELS_TXT_FILE` to the location of the associated labels file.
+
+For example:
+
+```commandline
+cmake .. \
+    -Dobject_detection_MODEL_TFLITE_PATH=<path/to/custom_model_after_vela.tflite> \
+    -Dobject_detection_LABELS_TXT_FILE=<path/to/labels_custom_model.txt> \
+    -DUSE_CASE_BUILD=object_detection
+```
+
+> **Note:** Clean the build directory before re-running the CMake command.
+
+The `.tflite` model file pointed to by `object_detection_MODEL_TFLITE_PATH` is converted to C++ files during the CMake
+configuration stage. These files are then compiled into the application for performing inference with.
+
+The log from the configuration stage tells you what model path and labels file have been used, for example:
+
+```log
+-- User option object_detection_MODEL_TFLITE_PATH is set to <path/to/custom_model_after_vela.tflite>
+...
+-- User option object_detection_LABELS_TXT_FILE is set to <path/to/labels_custom_model.txt>
+...
+-- Using <path/to/custom_model_after_vela.tflite>
+++ Converting custom_model_after_vela.tflite to\
+custom_model_after_vela.tflite.cc
+...
+```
+
+After compiling, your custom model has now replaced the default one in the application.
+
+## Setting up and running Ethos-U NPU code sample
+
+### Setting up the Ethos-U NPU Fast Model
+
+The FVP is available publicly from
+[Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+
+For the *Ethos-U* evaluation, please download the MPS3 based version of the Arm® *Corstone™-300* model that contains *Cortex-M55*
+and offers a choice of the *Ethos-U55* and *Ethos-U65* processors.
+
+To install the FVP:
+
+- Unpack the archive.
+
+- Run the install script in the extracted package:
+
+```commandline
+./FVP_Corstone_SSE-300.sh
+```
+
+- Follow the instructions to install the FVP to the required location.
+
+### Starting Fast Model simulation
+
+The pre-built application binary `ethos-u-object_detection.axf` can be found in the `bin/mps3-sse-300` folder of the delivery
+package.
+
+Assuming that the install location of the FVP was set to `~/FVP_install_location`, then the simulation can be started by
+using:
+
+```commandline
+~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 \
+./bin/mps3-sse-300/ethos-u-object_detection.axf
+```
+
+A log output appears on the terminal:
+
+```log
+telnetterminal0: Listening for serial connection on port 5000
+telnetterminal1: Listening for serial connection on port 5001
+telnetterminal2: Listening for serial connection on port 5002
+telnetterminal5: Listening for serial connection on port 5003
+```
+
+This also launches a telnet window with the standard output of the sample application. It also includes error log
+entries containing information about the pre-built application version, TensorFlow Lite Micro library version used, and
+data types. The log also includes the input and output tensor sizes of the model compiled into the executable binary.
+
+After the application has started, if `object_detection_FILE_PATH` points to a single file, or even a folder that contains a
+single image, then the inference starts immediately. If there are multiple inputs, it outputs a menu and then waits for
+input from the user:
+
+```log
+User input required
+Enter option number from:
+
+  1. Run detection on next ifm
+  2. Run detection ifm at chosen index
+  3. Run detection on all ifm
+  4. Show NN model info
+  5. List ifm
+
+Choice:
+
+```
+
+What the preceding choices do:
+
+1. Run detection on next ifm: Runs a single inference on the next in line image from the collection of the compiled images.
+
+2. Run detection ifm at chosen index: Runs inference on the chosen image.
+
+    > **Note:** Please make sure to select an image index within the range of images supplied during the application
+    > build. By default, a pre-built application has four images, with indexes from `0` to `3`.
+
+3. Run detection on all ifm: Triggers sequential inference executions on all built-in images.
+
+4. Show NN model info: Prints information about the model data type and the input and output tensor sizes (the output tensor layout is discussed after this list):
+
+    ```log
+    INFO - Model info:
+    INFO - Model INPUT tensors:
+    INFO -  tensor type is UINT8
+    INFO -  tensor occupies 110592 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1: 192
+    INFO -    2: 192
+    INFO -    3:   3
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.003921
+    INFO - ZeroPoint[0] = -128
+    INFO - Model OUTPUT tensors:
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 648 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:   6
+    INFO -    2:   6
+    INFO -    3:  18
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.134084
+    INFO - ZeroPoint[0] = 47
+    INFO -  tensor type is INT8
+    INFO -  tensor occupies 2592 bytes with dimensions
+    INFO -    0:   1
+    INFO -    1:  12
+    INFO -    2:  12
+    INFO -    3:  18
+    INFO - Quant dimension: 0
+    INFO - Scale[0] = 0.185359
+    INFO - ZeroPoint[0] = 10
+    INFO - Activation buffer (a.k.a tensor arena) size used: 443992
+    INFO - Number of operators: 3
+    INFO -  Operator 0: ethos-u
+   
+    ```
+
+5. List ifm: Prints a list of image indexes paired with the original filenames embedded in the application, like so:
+
+    ```log
+    INFO - List of Files:
+    INFO - 0 => couple.bmp
+    INFO - 1 => glasses.bmp
+    INFO - 2 => man_and_baby.bmp
+    INFO - 3 => pitch_and_roll.bmp
+    ```
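+
+As a point of reference, the OUTPUT tensor shapes above are consistent with the post-processing code supplied with this
+use case: each of the two output branches predicts 3 anchor boxes per cell, and with a single (face) class every box
+carries 4 coordinates, 1 objectness score and 1 class score, giving 3 x (5 + 1) = 18 values. This matches the last
+dimension of both the 6x6 and the 12x12 output tensors.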
+
+### Running Object Detection
+
+Please select the first menu option to execute Object Detection.
+
+The following example illustrates an application output for detection:
+
+```log
+INFO - Running inference on image 0 => couple.bmp
+INFO - Final results:
+INFO - Total number of inferences: 1
+INFO - 0)  (0.999246) -> Detection box: {x=89,y=17,w=41,h=56}
+INFO - 1)  (0.995367) -> Detection box: {x=27,y=81,w=48,h=53}
+INFO - Profile for Inference:
+INFO - NPU AXI0_RD_DATA_BEAT_RECEIVED beats: 678955
+INFO - NPU AXI0_WR_DATA_BEAT_WRITTEN beats: 467545
+INFO - NPU AXI1_RD_DATA_BEAT_RECEIVED beats: 46895
+INFO - NPU ACTIVE cycles: 2050339
+INFO - NPU IDLE cycles: 855
+INFO - NPU TOTAL cycles: 2051194
+```
+
+It can take a while to complete one inference run; the average time is around 10-20 seconds.
+
+The log shows the inference results for `image 0` (index `0`), which corresponds to `couple.bmp` in the sample image
+resource folder.
+
+The profiling section of the log shows that for this inference:
+
+- *Ethos-U* PMU report:
+
+  - 2,051,194 total cycles: The number of NPU cycles.
+
+  - 2,050,339 active cycles: The number of NPU cycles that were used for computation.
+
+  - 855 idle cycles: The number of cycles for which the NPU was idle.
+
+  - 678,955 AXI0 read beats: The number of AXI beats with read transactions from AXI0 bus. AXI0 is the bus where the
+    *Ethos-U* NPU reads and writes to the computation buffers (activation buffers, or tensor arenas).
+
+  - 467,545 AXI0 write beats: The number of AXI beats with write transactions to AXI0 bus.
+
+  - 46,895 AXI1 read beats: The number of AXI beats with read transactions from AXI1 bus. AXI1 is the bus where the
+    *Ethos-U* NPU reads the model. So, read-only.
+
+- For FPGA platforms, a CPU cycle count can also be enabled. However, do not use cycle counters for FVP, as the CPU
+  model is not cycle-approximate or cycle-accurate.
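+
+As a quick sanity check on these numbers, the active and idle counts add up to the reported total
+(2,050,339 + 855 = 2,051,194 cycles), so the NPU was busy for roughly 99.96% of this inference.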
+
+The application prints the detected bounding boxes for faces in the image. The FVP window also shows the output on its LCD section.
diff --git a/resources/object_detection/samples/couple.bmp b/resources/object_detection/samples/couple.bmp
new file mode 100755
index 0000000..7308984
--- /dev/null
+++ b/resources/object_detection/samples/couple.bmp
Binary files differ
diff --git a/resources/object_detection/samples/files.md b/resources/object_detection/samples/files.md
new file mode 100644
index 0000000..478845d
--- /dev/null
+++ b/resources/object_detection/samples/files.md
@@ -0,0 +1,12 @@
+# Sample images
+
+The sample images provided are under Creative Commons License. The links are documented here for traceability:
+
+- [glasses.bmp](https://www.pexels.com/photo/man-looking-to-his-right-2741701/)
+- [pitch_and_roll.bmp](https://www.pexels.com/photo/close-up-shot-of-a-happy-elderly-couple-hugging-while-looking-at-camera-8317676/)
+- [couple.bmp](https://www.pexels.com/photo/photography-of-man-and-woman-1137902/)
+- [man_and_baby.bmp](https://www.pexels.com/photo/photo-of-man-carrying-baby-1578996/)
+
+## License
+
+[Creative Commons Attribution 1.0 Generic](../../LICENSE_CC_1.0.txt).
diff --git a/resources/object_detection/samples/glasses.bmp b/resources/object_detection/samples/glasses.bmp
new file mode 100755
index 0000000..5bd7ce1
--- /dev/null
+++ b/resources/object_detection/samples/glasses.bmp
Binary files differ
diff --git a/resources/object_detection/samples/man_and_baby.bmp b/resources/object_detection/samples/man_and_baby.bmp
new file mode 100755
index 0000000..ce39e28
--- /dev/null
+++ b/resources/object_detection/samples/man_and_baby.bmp
Binary files differ
diff --git a/resources/object_detection/samples/pitch_and_roll.bmp b/resources/object_detection/samples/pitch_and_roll.bmp
new file mode 100755
index 0000000..64450fd
--- /dev/null
+++ b/resources/object_detection/samples/pitch_and_roll.bmp
Binary files differ
diff --git a/set_up_default_resources.py b/set_up_default_resources.py
index d244213..5ff829e 100755
--- a/set_up_default_resources.py
+++ b/set_up_default_resources.py
@@ -1,6 +1,6 @@
 #!/usr/bin/env python3
 
-#  Copyright (c) 2021 Arm Limited. All rights reserved.
+#  Copyright (c) 2022 Arm Limited. All rights reserved.
 #  SPDX-License-Identifier: Apache-2.0
 #
 #  Licensed under the Apache License, Version 2.0 (the "License");
@@ -55,6 +55,11 @@
                        "url": "https://github.com/ARM-software/ML-zoo/raw/e0aa361b03c738047b9147d1a50e3f2dcb13dbcb/models/image_classification/mobilenet_v2_1.0_224/tflite_int8/testing_output/MobilenetV2/Predictions/Reshape_11/0.npy"}]
     },
     {
+        "use_case_name": "object_detection",
+        "resources": [{"name": "yolo-fastest_192_face_v4.tflite",
+                       "url": "https://github.com/emza-vs/ModelZoo/blob/v1.0/object_detection/yolo-fastest_192_face_v4.tflite?raw=true"}]
+    },
+    {
         "use_case_name": "kws",
         "resources": [{"name": "ifm0.npy",
                        "url": "https://github.com/ARM-software/ML-zoo/raw/9f506fe52b39df545f0e6c5ff9223f671bc5ae00/models/keyword_spotting/micronet_medium/tflite_int8/testing_input/input/0.npy"},
diff --git a/source/use_case/object_detection/include/DetectionResult.hpp b/source/use_case/object_detection/include/DetectionResult.hpp
new file mode 100644
index 0000000..78895f7
--- /dev/null
+++ b/source/use_case/object_detection/include/DetectionResult.hpp
@@ -0,0 +1,51 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef DETECTION_RESULT_HPP
+#define DETECTION_RESULT_HPP
+
+
+namespace arm {
+namespace app {
+
+    /**
+     * @brief   Class representing a single detection result.
+     */
+    class DetectionResult {
+    public:
+        double  m_normalisedVal{0.0};
+        int     m_x0{0};
+        int     m_y0{0};
+        int     m_w{0};
+        int     m_h{0};
+       
+        DetectionResult() = default;
+        ~DetectionResult() = default;
+        
+        DetectionResult(double normalisedVal, int x0, int y0, int w, int h) :
+                m_normalisedVal(normalisedVal),
+                m_x0(x0),
+                m_y0(y0),
+                m_w(w),
+                m_h(h) 
+            {
+            }
+    };
+
+} /* namespace app */
+} /* namespace arm */
+
+#endif /* DETECTION_RESULT_HPP */
diff --git a/source/use_case/object_detection/include/DetectionUseCaseUtils.hpp b/source/use_case/object_detection/include/DetectionUseCaseUtils.hpp
new file mode 100644
index 0000000..8ef48ac
--- /dev/null
+++ b/source/use_case/object_detection/include/DetectionUseCaseUtils.hpp
@@ -0,0 +1,72 @@
+/*
+ * Copyright (c) 2021 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef DETECTION_USE_CASE_UTILS_HPP
+#define DETECTION_USE_CASE_UTILS_HPP
+
+#include "hal.h"
+#include "DetectionResult.hpp"
+#include "UseCaseHandler.hpp"       /* Handlers for different user options. */
+#include <inttypes.h>
+#include <vector>
+
+
+void DisplayDetectionMenu();
+
+namespace image{
+
+
+  /**
+   * @brief           Presents inference results using the data presentation
+   *                  object.
+   * @param[in]       platform    Reference to the hal platform object.
+   * @param[in]       results     Vector of detection results to be displayed.
+   * @return          true if successful, false otherwise.
+   **/
+  bool PresentInferenceResult(hal_platform & platform,
+                              const std::vector < arm::app::DetectionResult > & results);
+
+
+  /**
+   * @brief           Presents inference results along with the inference time using the data presentation
+   *                  object.
+   * @param[in]       platform    Reference to the hal platform object.
+   * @param[in]       results     Vector of detection results to be displayed.
+   * @param[in]       infTimeMs   Inference time in ms.
+   * @return          true if successful, false otherwise.
+   **/
+  bool PresentInferenceResult(hal_platform & platform,
+                              const std::vector < arm::app::DetectionResult > & results,
+                              const time_t infTimeMs);
+
+  /**
+  * @brief           Presents inference results along with the inference time using the data presentation
+  *                  object.
+  * @param[in]       platform    Reference to the hal platform object.
+  * @param[in]       results     Vector of detection results to be displayed.
+  * @param[in]       profilingEnabled   Whether profiling information should be displayed.
+  * @param[in]       infTimeMs   Inference time in ms.
+  * @return          true if successful, false otherwise.
+  **/
+  bool PresentInferenceResult(hal_platform & platform,
+                              const std::vector < arm::app::DetectionResult > & results,
+                              bool profilingEnabled,
+                              const time_t infTimeMs = 0);
+  }
+
+
+
+
+#endif /* DETECTION_USE_CASE_UTILS_HPP */
diff --git a/source/use_case/object_detection/include/DetectorPostProcessing.hpp b/source/use_case/object_detection/include/DetectorPostProcessing.hpp
new file mode 100644
index 0000000..9a8549c
--- /dev/null
+++ b/source/use_case/object_detection/include/DetectorPostProcessing.hpp
@@ -0,0 +1,55 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef DETECTOR_POST_PROCESSING_HPP
+#define DETECTOR_POST_PROCESSING_HPP
+
+#include "UseCaseCommonUtils.hpp"
+#include "DetectionResult.hpp"
+
+namespace arm {
+namespace app {
+
+#if DISPLAY_RGB_IMAGE
+#define FORMAT_MULTIPLY_FACTOR 3
+#else
+#define FORMAT_MULTIPLY_FACTOR 1
+#endif /* DISPLAY_RGB_IMAGE */
+
+    /**
+     * @brief       Post-processing part of the YOLO object detection CNN.
+     * @param[in]   img_in        Pointer to the input image; detection bounding boxes are drawn on it.
+     * @param[in]   model_output  Output tensors after the CNN is invoked.
+     * @param[out]  results_out   Vector of detected results.
+     * @return      void
+     **/
+void RunPostProcessing(uint8_t *img_in, TfLiteTensor* model_output[2], std::vector<arm::app::DetectionResult> & results_out);
+
+
+    /**
+     * @brief       Converts an RGB image to grayscale.
+     * @param[in]   rgb    Pointer to the RGB input image.
+     * @param[out]  gray   Pointer to the grayscale output image.
+     * @param[in]   im_w   Input image width.
+     * @param[in]   im_h   Input image height.
+     * @return      void
+     **/
+void RgbToGrayscale(const uint8_t *rgb, uint8_t *gray, int im_w, int im_h);
+
+} /* namespace app */
+} /* namespace arm */
+
+#endif /* DETECTOR_POST_PROCESSING_HPP */
diff --git a/source/use_case/object_detection/include/UseCaseHandler.hpp b/source/use_case/object_detection/include/UseCaseHandler.hpp
new file mode 100644
index 0000000..56629c8
--- /dev/null
+++ b/source/use_case/object_detection/include/UseCaseHandler.hpp
@@ -0,0 +1,37 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef OBJ_DET_HANDLER_HPP
+#define OBJ_DET_HANDLER_HPP
+
+#include "AppContext.hpp"
+
+namespace arm {
+namespace app {
+
+    /**
+     * @brief       Handles the inference event.
+     * @param[in]   ctx        Pointer to the application context.
+     * @param[in]   imgIndex   Index to the image to run object detection.
+     * @param[in]   runAll     Flag to request classification of all the available images.
+     * @return      true or false based on execution success.
+     **/
+    bool ObjectDetectionHandler(ApplicationContext& ctx, uint32_t imgIndex, bool runAll);
+
+} /* namespace app */
+} /* namespace arm */
+
+#endif /* OBJ_DET_HANDLER_HPP */
diff --git a/source/use_case/object_detection/include/YoloFastestModel.hpp b/source/use_case/object_detection/include/YoloFastestModel.hpp
new file mode 100644
index 0000000..f5709ea
--- /dev/null
+++ b/source/use_case/object_detection/include/YoloFastestModel.hpp
@@ -0,0 +1,55 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef YOLO_FASTEST_MODEL_HPP
+#define YOLO_FASTEST_MODEL_HPP
+
+#include "Model.hpp"
+
+namespace arm {
+namespace app {
+
+    class YoloFastestModel : public Model {
+
+    public:
+        /* Indices for the expected model - based on input tensor shape */
+        static constexpr uint32_t ms_inputRowsIdx     = 1;
+        static constexpr uint32_t ms_inputColsIdx     = 2;
+        static constexpr uint32_t ms_inputChannelsIdx = 3;
+
+    protected:
+        /** @brief   Gets the reference to op resolver interface class. */
+        const tflite::MicroOpResolver& GetOpResolver() override;
+
+        /** @brief   Adds operations to the op resolver instance. */
+        bool EnlistOperations() override;
+
+        const uint8_t* ModelPointer() override;
+
+        size_t ModelSize() override;
+
+    private:
+        /* Maximum number of individual operations that can be enlisted. */
+        static constexpr int ms_maxOpCnt = 8;
+
+        /* A mutable op resolver instance. */
+        tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
+    };
+
+} /* namespace app */
+} /* namespace arm */
+
+#endif /* YOLO_FASTEST_MODEL_HPP */
diff --git a/source/use_case/object_detection/src/DetectionUseCaseUtils.cc b/source/use_case/object_detection/src/DetectionUseCaseUtils.cc
new file mode 100644
index 0000000..1713c7e
--- /dev/null
+++ b/source/use_case/object_detection/src/DetectionUseCaseUtils.cc
@@ -0,0 +1,105 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "DetectionUseCaseUtils.hpp"
+#include "UseCaseCommonUtils.hpp"
+#include "InputFiles.hpp"
+#include <inttypes.h>
+
+
+void DisplayDetectionMenu()
+{
+    printf("\n\n");
+    printf("User input required\n");
+    printf("Enter option number from:\n\n");
+    printf("  %u. Run detection on next ifm\n", common::MENU_OPT_RUN_INF_NEXT);
+    printf("  %u. Run detection ifm at chosen index\n", common::MENU_OPT_RUN_INF_CHOSEN);
+    printf("  %u. Run detection on all ifm\n", common::MENU_OPT_RUN_INF_ALL);
+    printf("  %u. Show NN model info\n", common::MENU_OPT_SHOW_MODEL_INFO);
+    printf("  %u. List ifm\n\n", common::MENU_OPT_LIST_IFM);
+    printf("  Choice: ");
+    fflush(stdout);
+}
+
+
+bool image::PresentInferenceResult(hal_platform& platform,
+                                   const std::vector<arm::app::DetectionResult>& results)
+{
+    return PresentInferenceResult(platform, results, false);
+}
+
+bool image::PresentInferenceResult(hal_platform &platform,
+                                   const std::vector<arm::app::DetectionResult> &results,
+                                   const time_t infTimeMs)
+{
+    return PresentInferenceResult(platform, results, true, infTimeMs);
+}
+
+
+bool image::PresentInferenceResult(hal_platform &platform,
+                                   const std::vector<arm::app::DetectionResult> &results,
+                                   bool profilingEnabled,
+                                   const time_t infTimeMs)
+{
+    constexpr uint32_t dataPsnTxtStartX1 = 150;
+    constexpr uint32_t dataPsnTxtStartY1 = 30;
+
+
+    if(profilingEnabled)
+    {
+        platform.data_psn->set_text_color(COLOR_YELLOW);
+
+        /* If profiling is enabled, and the time is valid. */
+        info("Final results:\n");
+        info("Total number of inferences: 1\n");
+        if (infTimeMs)
+        {
+            std::string strInf =
+                    std::string{"Inference: "} +
+                    std::to_string(infTimeMs) +
+                    std::string{"ms"};
+            platform.data_psn->present_data_text(
+                    strInf.c_str(), strInf.size(),
+                    dataPsnTxtStartX1, dataPsnTxtStartY1, 0);
+        }
+    }
+    platform.data_psn->set_text_color(COLOR_GREEN);
+
+    if(!profilingEnabled) {
+        info("Final results:\n");
+        info("Total number of inferences: 1\n");
+    }
+
+    for (uint32_t i = 0; i < results.size(); ++i) {
+        
+        if(profilingEnabled) {
+            info("%" PRIu32 ")  (%f) -> %s {x=%d,y=%d,w=%d,h=%d}\n", i, 
+                 results[i].m_normalisedVal, "Detection box:",
+                 results[i].m_x0, results[i].m_y0, results[i].m_w, results[i].m_h );
+        }
+        else
+        {
+            info("%" PRIu32 ")  (%f) -> %s {x=%d,y=%d,w=%d,h=%d}\n", i, 
+                 results[i].m_normalisedVal, "Detection box:",
+                 results[i].m_x0, results[i].m_y0, results[i].m_w, results[i].m_h );
+        }
+    }
+
+    return true;
+}
+
+
+
diff --git a/source/use_case/object_detection/src/DetectorPostProcessing.cc b/source/use_case/object_detection/src/DetectorPostProcessing.cc
new file mode 100755
index 0000000..e781b62
--- /dev/null
+++ b/source/use_case/object_detection/src/DetectorPostProcessing.cc
@@ -0,0 +1,447 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "DetectorPostProcessing.hpp"
+#include <algorithm>
+#include <cmath>
+#include <stdint.h>
+#include <forward_list>
+
+
+typedef struct boxabs {
+    float left, right, top, bot;
+} boxabs;
+
+
+typedef struct branch {
+    int resolution;
+    int num_box;
+    float *anchor;
+    int8_t *tf_output;
+    float scale;
+    int zero_point;
+    size_t size;
+    float scale_x_y;
+} branch;
+
+typedef struct network {
+    int input_w;
+    int input_h;
+    int num_classes;
+    int num_branch;
+    branch *branchs;
+    int topN;
+} network;
+
+
+typedef struct box {
+    float x, y, w, h;
+} box;
+
+typedef struct detection{
+    box bbox;
+    float *prob;
+    float objectness;
+} detection;
+
+
+
+static int sort_class;
+
+static void free_dets(std::forward_list<detection> &dets){
+    std::forward_list<detection>::iterator it;
+    for ( it = dets.begin(); it != dets.end(); ++it ){
+        free(it->prob);
+    }
+}
+
+float sigmoid(float x)
+{
+    return 1.f/(1.f + exp(-x));
+}
+
+static bool det_objectness_comparator(detection &pa, detection &pb)
+{
+    return pa.objectness < pb.objectness;
+}
+
+static void insert_topN_det(std::forward_list<detection> &dets, detection det)
+{
+    std::forward_list<detection>::iterator it;
+    std::forward_list<detection>::iterator last_it;
+    for ( it = dets.begin(); it != dets.end(); ++it ){
+        if(it->objectness > det.objectness)
+            break;
+        last_it = it;
+    }
+    if(it != dets.begin()){
+        dets.emplace_after(last_it, det);
+        free(dets.begin()->prob);
+        dets.pop_front();
+    }
+    else{
+        free(det.prob);
+    }
+}
+
+static std::forward_list<detection> get_network_boxes(network *net, int image_w, int image_h, float thresh, int *num)
+{
+    std::forward_list<detection> dets;
+    int i;
+    int num_classes = net->num_classes;
+    *num = 0;
+
+    for (i = 0; i < net->num_branch; ++i) {
+        int height  = net->branchs[i].resolution;
+        int width = net->branchs[i].resolution;
+        int channel  = net->branchs[i].num_box*(5+num_classes);
+
+        for (int h = 0; h < net->branchs[i].resolution; h++) {
+            for (int w = 0; w < net->branchs[i].resolution; w++) {
+                for (int anc = 0; anc < net->branchs[i].num_box; anc++) {
+
+                    // objectness score
+                    int bbox_obj_offset = h * width * channel + w * channel + anc * (num_classes + 5) + 4;
+                    float objectness = sigmoid(((float)net->branchs[i].tf_output[bbox_obj_offset] - net->branchs[i].zero_point) * net->branchs[i].scale);
+
+                    if(objectness > thresh){
+                        detection det;
+                        det.prob = (float*)calloc(num_classes, sizeof(float));
+                        det.objectness = objectness;
+                        //get bbox prediction data for each anchor, each feature point
+                        int bbox_x_offset = bbox_obj_offset -4;
+                        int bbox_y_offset = bbox_x_offset + 1;
+                        int bbox_w_offset = bbox_x_offset + 2;
+                        int bbox_h_offset = bbox_x_offset + 3;
+                        int bbox_scores_offset = bbox_x_offset + 5;
+                        //int bbox_scores_step = 1;
+                        det.bbox.x = ((float)net->branchs[i].tf_output[bbox_x_offset] - net->branchs[i].zero_point) * net->branchs[i].scale;
+                        det.bbox.y = ((float)net->branchs[i].tf_output[bbox_y_offset] - net->branchs[i].zero_point) * net->branchs[i].scale;
+                        det.bbox.w = ((float)net->branchs[i].tf_output[bbox_w_offset] - net->branchs[i].zero_point) * net->branchs[i].scale;
+                        det.bbox.h = ((float)net->branchs[i].tf_output[bbox_h_offset] - net->branchs[i].zero_point) * net->branchs[i].scale;
+
+
+                        float bbox_x, bbox_y;
+
+                        // Eliminate grid sensitivity trick involved in YOLOv4
+                        bbox_x = sigmoid(det.bbox.x); //* net->branchs[i].scale_x_y - (net->branchs[i].scale_x_y - 1) / 2;
+                        bbox_y = sigmoid(det.bbox.y); //* net->branchs[i].scale_x_y - (net->branchs[i].scale_x_y - 1) / 2;
+                        det.bbox.x = (bbox_x + w) / width;
+                        det.bbox.y = (bbox_y + h) / height;
+
+                        det.bbox.w = exp(det.bbox.w) * net->branchs[i].anchor[anc*2] / net->input_w;
+                        det.bbox.h = exp(det.bbox.h) * net->branchs[i].anchor[anc*2+1] / net->input_h;
+
+                        for (int s = 0; s < num_classes; s++) {
+                            det.prob[s] = sigmoid(((float)net->branchs[i].tf_output[bbox_scores_offset + s] - net->branchs[i].zero_point) * net->branchs[i].scale)*objectness;
+                            det.prob[s] = (det.prob[s] > thresh) ? det.prob[s] : 0;
+                        }
+
+                        //correct_yolo_boxes
+                        det.bbox.x *= image_w;
+                        det.bbox.w *= image_w;
+                        det.bbox.y *= image_h;
+                        det.bbox.h *= image_h;
+
+                        if (*num < net->topN || net->topN <=0){
+                            dets.emplace_front(det);
+                            *num += 1;
+                        }
+                        else if(*num ==  net->topN){
+                            dets.sort(det_objectness_comparator);
+                            insert_topN_det(dets,det);
+                            *num += 1;
+                        }else{
+                            insert_topN_det(dets,det);
+                        }
+                    }
+                }
+            }
+        }
+    }
+    if(*num > net->topN)
+        *num -=1;
+    return dets;
+}
+
+// init part
+
+static branch create_brach(int resolution, int num_box, float *anchor, int8_t *tf_output, size_t size, float scale, int zero_point)
+{
+    branch b;
+    b.resolution = resolution;
+    b.num_box = num_box;
+    b.anchor = anchor;
+    b.tf_output = tf_output;
+    b.size = size;
+    b.scale = scale;
+    b.zero_point = zero_point;
+    return b;
+}
+
+static network creat_network(int input_w, int input_h, int num_classes, int num_branch, branch* branchs, int topN)
+{
+    network net;
+    net.input_w = input_w;
+    net.input_h = input_h;
+    net.num_classes = num_classes;
+    net.num_branch = num_branch;
+    net.branchs = branchs;
+    net.topN = topN;
+    return net;
+}
+
+// NMS part
+
+static float Calc1DOverlap(float x1_center, float width1, float x2_center, float width2)
+{
+    float left_1 = x1_center - width1/2;
+    float left_2 = x2_center - width2/2;
+    float leftest;
+    if (left_1 > left_2) {
+        leftest = left_1;
+    } else {
+        leftest = left_2;
+    }
+
+    float right_1 = x1_center + width1/2;
+    float right_2 = x2_center + width2/2;
+    float rightest;
+    if (right_1 < right_2) {
+        rightest = right_1;
+    } else {
+        rightest = right_2;
+    }
+
+    return rightest - leftest;
+}
+
+
+static float CalcBoxIntersect(box box1, box box2)
+{
+    float width = Calc1DOverlap(box1.x, box1.w, box2.x, box2.w);
+    if (width < 0) return 0;
+    float height = Calc1DOverlap(box1.y, box1.h, box2.y, box2.h);
+    if (height < 0) return 0;
+
+    float total_area = width*height;
+    return total_area;
+}
+
+
+static float CalcBoxUnion(box box1, box box2)
+{
+    float boxes_intersection = CalcBoxIntersect(box1, box2);
+    float boxes_union = box1.w*box1.h + box2.w*box2.h - boxes_intersection;
+    return boxes_union;
+}
+
+
+static float CalcBoxIOU(box box1, box box2)
+{
+    float boxes_intersection = CalcBoxIntersect(box1, box2);
+
+    if (boxes_intersection == 0) return 0;
+
+    float boxes_union = CalcBoxUnion(box1, box2);
+
+    if (boxes_union == 0) return 0;
+
+    return boxes_intersection / boxes_union;
+}
+
+
+static bool CompareProbs(detection &prob1, detection &prob2)
+{
+    return prob1.prob[sort_class] > prob2.prob[sort_class];
+}
+
+
+static void CalcNMS(std::forward_list<detection> &detections, int classes, float iou_threshold)
+{
+    int k;
+
+    for (k = 0; k < classes; ++k) {
+        sort_class = k;
+        detections.sort(CompareProbs);
+
+        for (std::forward_list<detection>::iterator it=detections.begin(); it != detections.end(); ++it){
+            if (it->prob[k] == 0) continue;
+            for (std::forward_list<detection>::iterator itc=std::next(it, 1); itc != detections.end(); ++itc){
+                if (itc->prob[k] == 0) continue;
+                if (CalcBoxIOU(it->bbox, itc->bbox) > iou_threshold) {
+                    itc->prob[k] = 0;
+                }
+            }
+        }
+    }
+}
+
+
+static void inline check_and_fix_offset(int im_w,int im_h,int *offset)
+{
+
+    if (!offset) return;
+
+    if ( (*offset) >= im_w*im_h*FORMAT_MULTIPLY_FACTOR)
+        (*offset) = im_w*im_h*FORMAT_MULTIPLY_FACTOR -1;
+    else if ( (*offset) < 0)
+            *offset =0;
+
+}
+
+
+static void DrawBoxOnImage(uint8_t *img_in,int im_w,int im_h,int bx,int by,int bw,int bh)
+{
+
+    if (!img_in) {
+        return;
+    }
+
+    int offset=0;
+    for (int i=0; i < bw; i++) {
+        /*draw two lines */
+        for (int line=0; line < 2; line++) {
+            /*top*/
+            offset =(i + (by + line)*im_w + bx)*FORMAT_MULTIPLY_FACTOR;
+            check_and_fix_offset(im_w,im_h,&offset);
+            img_in[offset] = 0xFF;  /* FORMAT_MULTIPLY_FACTOR for rgb or grayscale*/
+            /*bottom*/
+            offset = (i + (by + bh - line)*im_w + bx)*FORMAT_MULTIPLY_FACTOR;
+            check_and_fix_offset(im_w,im_h,&offset);
+            img_in[offset] = 0xFF;
+        }
+    }
+
+    for (int i=0; i < bh; i++) {
+        /*draw two lines */
+        for (int line=0; line < 2; line++) {
+            /*left*/
+            offset = ((i + by)*im_w + bx + line)*FORMAT_MULTIPLY_FACTOR;
+            check_and_fix_offset(im_w,im_h,&offset);
+            img_in[offset] = 0xFF;
+            /*right*/
+            offset = ((i + by)*im_w + bx + bw - line)*FORMAT_MULTIPLY_FACTOR;
+            check_and_fix_offset(im_w,im_h,&offset);
+            img_in[offset] = 0xFF;
+        }
+    }
+
+}
+
+
+
+void arm::app::RunPostProcessing(uint8_t *img_in,TfLiteTensor* model_output[2],std::vector<arm::app::DetectionResult> & results_out)
+{
+
+    TfLiteTensor* output[2] = {nullptr,nullptr};
+    int input_w = INPUT_IMAGE_WIDTH;
+    int input_h = INPUT_IMAGE_HEIGHT;
+
+    for(int anchor=0;anchor<2;anchor++)
+    {
+         output[anchor] = model_output[anchor];
+    }
+
+    /* init postprocessing */
+    int num_classes = 1;
+    int num_branch = 2;
+    int topN = 0;
+
+    branch* branchs = (branch*)calloc(num_branch, sizeof(branch));
+
+    /*NOTE: anchors are different for any given input model size, estimated during training phase */
+    float anchor1[] = {38, 77, 47, 97, 61, 126};
+    float anchor2[] = {14, 26, 19, 37, 28, 55 };
+
+
+    branchs[0] = create_brach(INPUT_IMAGE_WIDTH/32, 3, anchor1, output[0]->data.int8, output[0]->bytes, ((TfLiteAffineQuantization*)(output[0]->quantization.params))->scale->data[0], ((TfLiteAffineQuantization*)(output[0]->quantization.params))->zero_point->data[0]);
+
+    branchs[1] = create_brach(INPUT_IMAGE_WIDTH/16, 3, anchor2, output[1]->data.int8, output[1]->bytes, ((TfLiteAffineQuantization*)(output[1]->quantization.params))->scale->data[0],((TfLiteAffineQuantization*)(output[1]->quantization.params))->zero_point->data[0]);
+
+    network net = creat_network(input_w, input_h, num_classes, num_branch, branchs,topN);
+    /* end init */
+
+    /* start postprocessing */
+    int nboxes=0;
+    float thresh = .5;//50%
+    float nms = .45;
+    int orig_image_width = ORIGINAL_IMAGE_WIDTH;
+    int orig_image_height = ORIGINAL_IMAGE_HEIGHT;
+    std::forward_list<detection> dets = get_network_boxes(&net, orig_image_width, orig_image_height, thresh, &nboxes);
+    /* do nms */
+    CalcNMS(dets, net.num_classes, nms);
+    uint8_t temp_unsuppressed_counter = 0;
+    int j;
+    for (std::forward_list<detection>::iterator it=dets.begin(); it != dets.end(); ++it){
+        float xmin = it->bbox.x - it->bbox.w / 2.0f;
+        float xmax = it->bbox.x + it->bbox.w / 2.0f;
+        float ymin = it->bbox.y - it->bbox.h / 2.0f;
+        float ymax = it->bbox.y + it->bbox.h / 2.0f;
+
+        if (xmin < 0) xmin = 0;
+        if (ymin < 0) ymin = 0;
+        if (xmax > orig_image_width) xmax = orig_image_width;
+        if (ymax > orig_image_height) ymax = orig_image_height;
+
+        float bx = xmin;
+        float by = ymin;
+        float bw = xmax - xmin;
+        float bh = ymax - ymin;
+
+        for (j = 0; j <  net.num_classes; ++j) {
+            if (it->prob[j] > 0) {
+
+                arm::app::DetectionResult tmp_result = {};
+
+                tmp_result.m_normalisedVal = it->prob[j];
+                tmp_result.m_x0=bx;
+                tmp_result.m_y0=by;
+                tmp_result.m_w=bw;
+                tmp_result.m_h=bh;
+
+                results_out.push_back(tmp_result);
+
+                DrawBoxOnImage(img_in,orig_image_width,orig_image_height,bx,by,bw,bh);
+
+                temp_unsuppressed_counter++;
+            }
+        }
+    }
+
+    free_dets(dets);
+    free(branchs);
+
+}
+
+void arm::app::RgbToGrayscale(const uint8_t *rgb,uint8_t *gray, int im_w,int im_h)
+{
+    float R=0.299;
+    float G=0.587;
+    float B=0.114;
+    for (int i=0; i< im_w*im_h; i++ ) {
+
+        uint32_t  int_gray = rgb[i*3 + 0]*R + rgb[i*3 + 1]*G+ rgb[i*3 + 2]*B;
+        /*clip if need */
+        if (int_gray <= UINT8_MAX) {
+            gray[i] =  int_gray;
+        } else {
+            gray[i] = UINT8_MAX;
+        }
+
+    }
+
+}
diff --git a/source/use_case/object_detection/src/MainLoop.cc b/source/use_case/object_detection/src/MainLoop.cc
new file mode 100644
index 0000000..b0fbf96
--- /dev/null
+++ b/source/use_case/object_detection/src/MainLoop.cc
@@ -0,0 +1,83 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "hal.h"                    /* Brings in platform definitions. */
+#include "InputFiles.hpp"           /* For input images. */
+#include "YoloFastestModel.hpp"     /* Model class for running inference. */
+#include "UseCaseHandler.hpp"       /* Handlers for different user options. */
+#include "UseCaseCommonUtils.hpp"   /* Utils functions. */
+#include "DetectionUseCaseUtils.hpp"   /* Utils functions specific to object detection. */
+
+
+void main_loop(hal_platform& platform)
+{
+    arm::app::YoloFastestModel model;  /* Model wrapper object. */
+
+    /* Load the model. */
+    if (!model.Init()) {
+        printf_err("Failed to initialise model\n");
+        return;
+    }
+
+    /* Instantiate application context. */
+    arm::app::ApplicationContext caseContext;
+
+    arm::app::Profiler profiler{&platform, "object_detection"};
+    caseContext.Set<arm::app::Profiler&>("profiler", profiler);
+    caseContext.Set<hal_platform&>("platform", platform);
+    caseContext.Set<arm::app::Model&>("model", model);
+    caseContext.Set<uint32_t>("imgIndex", 0);
+
+    
+    /* Loop. */
+    bool executionSuccessful = true;
+    constexpr bool bUseMenu = NUMBER_OF_FILES > 1 ? true : false;
+
+    /* Loop. */
+    do {
+        int menuOption = common::MENU_OPT_RUN_INF_NEXT;
+        if (bUseMenu) {
+            DisplayDetectionMenu();
+            menuOption = arm::app::ReadUserInputAsInt(platform);
+            printf("\n");
+        }
+        switch (menuOption) {
+            case common::MENU_OPT_RUN_INF_NEXT:
+                executionSuccessful = ObjectDetectionHandler(caseContext, caseContext.Get<uint32_t>("imgIndex"), false);
+                break;
+            case common::MENU_OPT_RUN_INF_CHOSEN: {
+                printf("    Enter the image index [0, %d]: ", NUMBER_OF_FILES-1);
+                fflush(stdout);
+                auto imgIndex = static_cast<uint32_t>(arm::app::ReadUserInputAsInt(platform));
+                executionSuccessful = ObjectDetectionHandler(caseContext, imgIndex, false);
+                break;
+            }
+            case common::MENU_OPT_RUN_INF_ALL:
+                executionSuccessful = ObjectDetectionHandler(caseContext, caseContext.Get<uint32_t>("imgIndex"), true);
+                break;
+            case common::MENU_OPT_SHOW_MODEL_INFO:
+                executionSuccessful = model.ShowModelInfoHandler();
+                break;
+            case common::MENU_OPT_LIST_IFM:
+                executionSuccessful = ListFilesHandler(caseContext);
+                break;
+            default:
+                printf("Incorrect choice, try again.");
+                break;
+        }
+    } while (executionSuccessful && bUseMenu);
+    info("Main loop terminated.\n");
+}
diff --git a/source/use_case/object_detection/src/UseCaseHandler.cc b/source/use_case/object_detection/src/UseCaseHandler.cc
new file mode 100644
index 0000000..45df4f8
--- /dev/null
+++ b/source/use_case/object_detection/src/UseCaseHandler.cc
@@ -0,0 +1,162 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "UseCaseHandler.hpp"
+#include "InputFiles.hpp"
+#include "YoloFastestModel.hpp"
+#include "UseCaseCommonUtils.hpp"
+#include "DetectionUseCaseUtils.hpp"
+#include "DetectorPostProcessing.hpp"
+#include "hal.h"
+
+#include <inttypes.h>
+
+
+/* Used for presentation; the original input images are read-only. */
+static uint8_t g_image_buffer[INPUT_IMAGE_WIDTH*INPUT_IMAGE_HEIGHT*FORMAT_MULTIPLY_FACTOR] IFM_BUF_ATTRIBUTE = {};
+
+namespace arm {
+namespace app {
+
+
+    /* Object detection classification handler. */
+    bool ObjectDetectionHandler(ApplicationContext& ctx, uint32_t imgIndex, bool runAll)
+    {
+        auto& platform = ctx.Get<hal_platform&>("platform");
+        auto& profiler = ctx.Get<Profiler&>("profiler");
+
+        constexpr uint32_t dataPsnImgDownscaleFactor = 1;
+        constexpr uint32_t dataPsnImgStartX = 10;
+        constexpr uint32_t dataPsnImgStartY = 35;
+
+        constexpr uint32_t dataPsnTxtInfStartX = 150;
+        constexpr uint32_t dataPsnTxtInfStartY = 40;
+
+        platform.data_psn->clear(COLOR_BLACK);
+
+        auto& model = ctx.Get<Model&>("model");
+        
+        /* If the request has a valid size, set the image index. */
+        if (imgIndex < NUMBER_OF_FILES) {
+            if (!SetAppCtxIfmIdx(ctx, imgIndex, "imgIndex")) {
+                return false;
+            }
+        }
+        if (!model.IsInited()) {
+            printf_err("Model is not initialised! Terminating processing.\n");
+            return false;
+        }
+
+        auto curImIdx = ctx.Get<uint32_t>("imgIndex");
+
+        TfLiteTensor* inputTensor = model.GetInputTensor(0);
+
+        if (!inputTensor->dims) {
+            printf_err("Invalid input tensor dims\n");
+            return false;
+        } else if (inputTensor->dims->size < 3) {
+            printf_err("Input tensor dimension should be >= 3\n");
+            return false;
+        }
+
+        TfLiteIntArray* inputShape = model.GetInputShape(0);
+
+        const uint32_t nCols = inputShape->data[arm::app::YoloFastestModel::ms_inputColsIdx];
+        const uint32_t nRows = inputShape->data[arm::app::YoloFastestModel::ms_inputRowsIdx];
+        const uint32_t nPresentationChannels = FORMAT_MULTIPLY_FACTOR;
+
+        std::vector<DetectionResult> results;
+
+        do {
+            /* Strings for presentation/logging. */
+            std::string str_inf{"Running inference... "};
+
+            const uint8_t* curr_image = get_img_array(ctx.Get<uint32_t>("imgIndex"));
+
+            /* Copy the image into the presentation buffer, converting to grayscale when RGB display is disabled. */
+#if DISPLAY_RGB_IMAGE
+            memcpy(g_image_buffer, curr_image, INPUT_IMAGE_WIDTH*INPUT_IMAGE_HEIGHT*FORMAT_MULTIPLY_FACTOR);
+#else
+            RgbToGrayscale(curr_image, g_image_buffer, INPUT_IMAGE_WIDTH, INPUT_IMAGE_HEIGHT);
+#endif /* DISPLAY_RGB_IMAGE */
+            
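+            /* The network input is always grayscale; g_image_buffer above is only used for
+             * displaying the image on the LCD. */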
+            RgbToGrayscale(curr_image, inputTensor->data.uint8, INPUT_IMAGE_WIDTH, INPUT_IMAGE_HEIGHT);
+
+
+            /* Display this image on the LCD. */
+            platform.data_psn->present_data_image(
+                g_image_buffer,
+                nCols, nRows, nPresentationChannels,
+                dataPsnImgStartX, dataPsnImgStartY, dataPsnImgDownscaleFactor);
+
+            /* If the data is signed. */
+            if (model.IsDataSigned()) {
+                image::ConvertImgToInt8(inputTensor->data.data, inputTensor->bytes);
+            }
+
+            /* Display message on the LCD - inference running. */
+            platform.data_psn->present_data_text(str_inf.c_str(), str_inf.size(),
+                                    dataPsnTxtInfStartX, dataPsnTxtInfStartY, 0);
+
+            /* Run inference over this image. */
+            info("Running inference on image %" PRIu32 " => %s\n", ctx.Get<uint32_t>("imgIndex"),
+                get_filename(ctx.Get<uint32_t>("imgIndex")));
+
+            if (!RunInference(model, profiler)) {
+                return false;
+            }
+
+            /* Erase. */
+            str_inf = std::string(str_inf.size(), ' ');
+            platform.data_psn->present_data_text(str_inf.c_str(), str_inf.size(),
+                                    dataPsnTxtInfStartX, dataPsnTxtInfStartY, 0);
+
+            /* Detector post-processing. */
+            TfLiteTensor* output_arr[2] = {nullptr,nullptr};
+            output_arr[0] = model.GetOutputTensor(0);
+            output_arr[1] = model.GetOutputTensor(1);
+            RunPostProcessing(g_image_buffer,output_arr,results);
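+            /* The two output tensors are decoded into detection boxes; the boxes are drawn
+             * into g_image_buffer so they appear when the image is redisplayed below. */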
+
+            platform.data_psn->present_data_image(
+                g_image_buffer,
+                nCols, nRows, nPresentationChannels,
+                dataPsnImgStartX, dataPsnImgStartY, dataPsnImgDownscaleFactor);
+
+            /* Add results to context for access outside handler. */
+            ctx.Set<std::vector<DetectionResult>>("results", results);
+
+#if VERIFY_TEST_OUTPUT
+            arm::app::DumpTensor(output_arr[0]);
+            arm::app::DumpTensor(output_arr[1]);
+#endif /* VERIFY_TEST_OUTPUT */
+
+            if (!image::PresentInferenceResult(platform, results)) {
+                return false;
+            }
+
+            profiler.PrintProfilingResult();
+
+            IncrementAppCtxIfmIdx(ctx,"imgIndex");
+
+        } while (runAll && ctx.Get<uint32_t>("imgIndex") != curImIdx);
+
+        return true;
+    }
+
+} /* namespace app */
+} /* namespace arm */
diff --git a/source/use_case/object_detection/src/YoloFastestModel.cc b/source/use_case/object_detection/src/YoloFastestModel.cc
new file mode 100644
index 0000000..a8afd59
--- /dev/null
+++ b/source/use_case/object_detection/src/YoloFastestModel.cc
@@ -0,0 +1,59 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "YoloFastestModel.hpp"
+
+#include "hal.h"
+
+const tflite::MicroOpResolver& arm::app::YoloFastestModel::GetOpResolver()
+{
+    return this->m_opResolver;
+}
+
+bool arm::app::YoloFastestModel::EnlistOperations()
+{
+    this->m_opResolver.AddDepthwiseConv2D();
+    this->m_opResolver.AddConv2D();
+    this->m_opResolver.AddAdd();
+    this->m_opResolver.AddResizeNearestNeighbor();
+    /* These are needed for the unit tests to work; they are not needed on the FVP. */
+    this->m_opResolver.AddPad();
+    this->m_opResolver.AddMaxPool2D();
+    this->m_opResolver.AddConcatenation();
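+    /* Every operator used by the tflite graph must be registered with the resolver;
+     * a missing operator causes model initialisation to fail. */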
+
+#if defined(ARM_NPU)
+    if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
+        info("Added %s support to op resolver\n",
+            tflite::GetString_ETHOSU());
+    } else {
+        printf_err("Failed to add Arm NPU support to op resolver.");
+        return false;
+    }
+#endif /* ARM_NPU */
+    return true;
+}
+
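+/* GetModelPointer() and GetModelLen() are provided by the C++ source generated from
+ * the tflite file (see generate_tflite_code in usecase.cmake). */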
+extern uint8_t* GetModelPointer();
+const uint8_t* arm::app::YoloFastestModel::ModelPointer()
+{
+    return GetModelPointer();
+}
+
+extern size_t GetModelLen();
+size_t arm::app::YoloFastestModel::ModelSize()
+{
+    return GetModelLen();
+}
diff --git a/source/use_case/object_detection/usecase.cmake b/source/use_case/object_detection/usecase.cmake
new file mode 100644
index 0000000..15bf534
--- /dev/null
+++ b/source/use_case/object_detection/usecase.cmake
@@ -0,0 +1,59 @@
+#----------------------------------------------------------------------------
+#  Copyright (c) 2022 Arm Limited. All rights reserved.
+#  SPDX-License-Identifier: Apache-2.0
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+#----------------------------------------------------------------------------
+
+USER_OPTION(${use_case}_FILE_PATH "Directory with custom image files to use, or path to a single image, in the evaluation application"
+    ${CMAKE_CURRENT_SOURCE_DIR}/resources/${use_case}/samples/
+    PATH_OR_FILE)
+
+USER_OPTION(${use_case}_IMAGE_SIZE "Square image size in pixels. Images will be resized to this size."
+    192
+    STRING)
+    
+add_compile_definitions(DISPLAY_RGB_IMAGE=1)
+add_compile_definitions(INPUT_IMAGE_WIDTH=${${use_case}_IMAGE_SIZE})
+add_compile_definitions(INPUT_IMAGE_HEIGHT=${${use_case}_IMAGE_SIZE})
+add_compile_definitions(ORIGINAL_IMAGE_WIDTH=${${use_case}_IMAGE_SIZE})
+add_compile_definitions(ORIGINAL_IMAGE_HEIGHT=${${use_case}_IMAGE_SIZE})
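+# These values are consumed by the use-case and test sources, for example when sizing
+# the image presentation buffer in UseCaseHandler.cc.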
+
+
+# Generate input files
+generate_images_code("${${use_case}_FILE_PATH}"
+                     ${SRC_GEN_DIR}
+                     ${INC_GEN_DIR}
+                     "${${use_case}_IMAGE_SIZE}")
+
+
+USER_OPTION(${use_case}_ACTIVATION_BUF_SZ "Activation buffer size for the chosen model"
+    0x00082000
+    STRING)
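+# Note: 0x00082000 (~520 KiB) suits the default YOLO Fastest model; a larger custom
+# model may require this value to be increased.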
+
+if (ETHOS_U_NPU_ENABLED)
+    set(DEFAULT_MODEL_PATH      ${DEFAULT_MODEL_DIR}/yolo-fastest_192_face_v4_vela_${ETHOS_U_NPU_CONFIG_ID}.tflite)
+else()
+    set(DEFAULT_MODEL_PATH      ${DEFAULT_MODEL_DIR}/yolo-fastest_192_face_v4.tflite)
+endif()
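+# When the Ethos-U NPU is enabled, the Vela-compiled model variant matching the
+# selected NPU configuration is used by default.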
+
+USER_OPTION(${use_case}_MODEL_TFLITE_PATH "NN models file to be used in the evaluation application. Model files must be in tflite format."
+    ${DEFAULT_MODEL_PATH}
+    FILEPATH
+    )
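+# Example of overriding the model at configure time (hypothetical paths, adapt to
+# your environment):
+#   cmake .. -DUSE_CASE_BUILD=object_detection \
+#            -Dobject_detection_MODEL_TFLITE_PATH=/path/to/custom_face_model_vela.tflite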
+
+# Generate model file
+generate_tflite_code(
+    MODEL_PATH ${${use_case}_MODEL_TFLITE_PATH}
+    DESTINATION ${SRC_GEN_DIR}
+    )
diff --git a/tests/use_case/object_detection/ExpectedObjectDetectionResults.cpp b/tests/use_case/object_detection/ExpectedObjectDetectionResults.cpp
new file mode 100644
index 0000000..2c69057
--- /dev/null
+++ b/tests/use_case/object_detection/ExpectedObjectDetectionResults.cpp
@@ -0,0 +1,66 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "ExpectedObjectDetectionResults.hpp"
+
+
+/*
+Reference results:
+Got 2 boxes
+0)  (0.999246) -> Detection box: {x=89,y=17,w=41,h=56}
+1)  (0.995367) -> Detection box: {x=27,y=81,w=48,h=53}
+Entering TestInference
+Got 1 boxes
+0)  (0.998107) -> Detection box: {x=87,y=35,w=53,h=64}
+Entering TestInference
+Got 2 boxes
+0)  (0.999244) -> Detection box: {x=105,y=73,w=58,h=66}
+1)  (0.985984) -> Detection box: {x=34,y=40,w=70,h=95}
+Entering TestInference
+Got 2 boxes
+0)  (0.993294) -> Detection box: {x=22,y=43,w=39,h=53}
+1)  (0.992021) -> Detection box: {x=63,y=60,w=38,h=45}
+*/
+
+void get_expected_ut_results(std::vector<std::vector<arm::app::DetectionResult>> &expected_results)
+{
+
+    expected_results.resize(4);
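+    /* One entry per sample image; the values follow the reference output listed
+     * above, with confidences truncated to two decimal places. */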
+
+    std::vector<arm::app::DetectionResult> img_1(2);
+    std::vector<arm::app::DetectionResult> img_2(1);
+    std::vector<arm::app::DetectionResult> img_3(2);
+    std::vector<arm::app::DetectionResult> img_4(2);
+
+    img_1[0] = arm::app::DetectionResult(0.99,89,17,41,56);
+    img_1[1] = arm::app::DetectionResult(0.99,27,81,48,53);
+
+    img_2[0] = arm::app::DetectionResult(0.99,87,35,53,64);
+
+    img_3[0] = arm::app::DetectionResult(0.99,105,73,58,66);
+    img_3[1] = arm::app::DetectionResult(0.98,34,40,70,95);
+
+    img_4[0] = arm::app::DetectionResult(0.99,22,43,39,53);
+    img_4[1] = arm::app::DetectionResult(0.99,63,60,38,45);
+
+    expected_results[0] = img_1;
+    expected_results[1] = img_2;
+    expected_results[2] = img_3;
+    expected_results[3] = img_4;
+}
diff --git a/tests/use_case/object_detection/InferenceTestYoloFastest.cc b/tests/use_case/object_detection/InferenceTestYoloFastest.cc
new file mode 100644
index 0000000..e6ae573
--- /dev/null
+++ b/tests/use_case/object_detection/InferenceTestYoloFastest.cc
@@ -0,0 +1,124 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "hal.h"
+#include "ImageUtils.hpp"
+#include "YoloFastestModel.hpp"
+#include "TensorFlowLiteMicro.hpp"
+#include "DetectorPostProcessing.hpp"
+#include "InputFiles.hpp"
+#include "UseCaseCommonUtils.hpp"
+#include "DetectionUseCaseUtils.hpp"
+#include "ExpectedObjectDetectionResults.hpp"
+
+#include <catch.hpp>
+
+
+bool RunInference(arm::app::Model& model, const uint8_t imageData[])
+{
+    TfLiteTensor* inputTensor = model.GetInputTensor(0);
+    REQUIRE(inputTensor);
+
+    const size_t copySz = inputTensor->bytes < (INPUT_IMAGE_WIDTH*INPUT_IMAGE_HEIGHT) ?
+                            inputTensor->bytes :
+                            (INPUT_IMAGE_WIDTH*INPUT_IMAGE_HEIGHT);
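+    /* copySz bounds the int8 conversion below: the single-channel (grayscale) input
+     * needs at most INPUT_IMAGE_WIDTH * INPUT_IMAGE_HEIGHT bytes. */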
+
+    arm::app::RgbToGrayscale(imageData, inputTensor->data.uint8, INPUT_IMAGE_WIDTH, INPUT_IMAGE_HEIGHT);
+
+    if (model.IsDataSigned()) {
+        convertImgIoInt8(inputTensor->data.data, copySz);
+    }
+
+    return model.RunInference();
+}
+
+template<typename T>
+void TestInference(int imageIdx, arm::app::Model& model, T tolerance) {
+    info("Entering TestInference for image %d\n", imageIdx);
+
+    std::vector<arm::app::DetectionResult> results;
+    auto image = get_img_array(imageIdx);
+
+    REQUIRE(RunInference(model, image));
+
+    TfLiteTensor* output_arr[2] = {nullptr, nullptr};
+    output_arr[0] = model.GetOutputTensor(0);
+    output_arr[1] = model.GetOutputTensor(1);
+
+    for (int i = 0; i < 2; i++) {
+        REQUIRE(output_arr[i]);
+        REQUIRE(tflite::GetTensorData<T>(output_arr[i]));
+    }
+
+    RunPostProcessing(nullptr, output_arr, results);
+
+    info("Got %zu boxes\n", results.size());
+
+    std::vector<std::vector<arm::app::DetectionResult>> expected_results;
+    get_expected_ut_results(expected_results);
+
+    /* Validate that the same number of boxes was detected. */
+    REQUIRE(results.size() == expected_results[imageIdx].size());
+
+    for (size_t i = 0; i < results.size(); i++) {
+        info("%zu)  (%f) -> Detection box: {x=%d,y=%d,w=%d,h=%d}\n", i,
+             results[i].m_normalisedVal,
+             results[i].m_x0, results[i].m_y0, results[i].m_w, results[i].m_h);
+
+        /* Validate the confidence and the box dimensions. */
+        REQUIRE(fabs(results[i].m_normalisedVal - expected_results[imageIdx][i].m_normalisedVal) < 0.1);
+        REQUIRE(static_cast<int>(results[i].m_x0) == Approx(static_cast<int>((T)expected_results[imageIdx][i].m_x0)).epsilon(tolerance));
+        REQUIRE(static_cast<int>(results[i].m_y0) == Approx(static_cast<int>((T)expected_results[imageIdx][i].m_y0)).epsilon(tolerance));
+        REQUIRE(static_cast<int>(results[i].m_w) == Approx(static_cast<int>((T)expected_results[imageIdx][i].m_w)).epsilon(tolerance));
+        REQUIRE(static_cast<int>(results[i].m_h) == Approx(static_cast<int>((T)expected_results[imageIdx][i].m_h)).epsilon(tolerance));
+    }
+}
+
+
+TEST_CASE("Running inference with TensorFlow Lite Micro and YoloFastest", "[YoloFastest]")
+{
+    SECTION("Executing inferences sequentially")
+    {
+        arm::app::YoloFastestModel model{};
+
+        REQUIRE_FALSE(model.IsInited());
+        REQUIRE(model.Init());
+        REQUIRE(model.IsInited());
+
+        for (uint32_t i = 0 ; i < NUMBER_OF_FILES; ++i) {
+            TestInference<uint8_t>(i, model, 1);
+        }
+    }
+
+    for (uint32_t i = 0 ; i < NUMBER_OF_FILES; ++i) {
+        DYNAMIC_SECTION("Executing inference with re-init")
+        {
+            arm::app::YoloFastestModel model{};
+
+            REQUIRE_FALSE(model.IsInited());
+            REQUIRE(model.Init());
+            REQUIRE(model.IsInited());
+
+            TestInference<uint8_t>(i, model, 1);
+        }
+    }
+}
diff --git a/tests/use_case/object_detection/ObjectDetectionTests.cc b/tests/use_case/object_detection/ObjectDetectionTests.cc
new file mode 100644
index 0000000..dd2d707
--- /dev/null
+++ b/tests/use_case/object_detection/ObjectDetectionTests.cc
@@ -0,0 +1,18 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#define CATCH_CONFIG_MAIN
+#include <catch.hpp>
diff --git a/tests/use_case/object_detection/ObjectDetectionUCTest.cc b/tests/use_case/object_detection/ObjectDetectionUCTest.cc
new file mode 100644
index 0000000..0a0486e
--- /dev/null
+++ b/tests/use_case/object_detection/ObjectDetectionUCTest.cc
@@ -0,0 +1,135 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "DetectionResult.hpp"
+#include "hal.h"
+#include "YoloFastestModel.hpp"
+#include "UseCaseHandler.hpp"
+#include "UseCaseCommonUtils.hpp"
+
+#include <catch.hpp>
+
+TEST_CASE("Model info")
+{
+printf("Entering Model info \n");
+    /* Model wrapper object. */
+    arm::app::YoloFastestModel model;
+
+    /* Load the model. */
+    REQUIRE(model.Init());
+
+    /* Instantiate application context. */
+    arm::app::ApplicationContext caseContext;
+
+    caseContext.Set<arm::app::Model&>("model", model);
+
+    REQUIRE(model.ShowModelInfoHandler());
+}
+
+
+TEST_CASE("Inference by index")
+{
+printf("Entering Inference by index \n");
+    hal_platform    platform;
+    data_acq_module data_acq;
+    data_psn_module data_psn;
+    platform_timer  timer;
+
+    /* Initialise the HAL and platform. */
+    hal_init(&platform, &data_acq, &data_psn, &timer);
+    hal_platform_init(&platform);
+
+    /* Model wrapper object. */
+    arm::app::YoloFastestModel model;
+
+    /* Load the model. */
+    REQUIRE(model.Init());
+
+    /* Instantiate application context. */
+    arm::app::ApplicationContext caseContext;
+
+    arm::app::Profiler profiler{&platform, "object_detection"};
+    caseContext.Set<arm::app::Profiler&>("profiler", profiler);
+    caseContext.Set<hal_platform&>("platform", platform);
+    caseContext.Set<arm::app::Model&>("model", model);
+    caseContext.Set<uint32_t>("imgIndex", 0);
+
+    REQUIRE(arm::app::ObjectDetectionHandler(caseContext, 0, false));
+
+    auto results = caseContext.Get<std::vector<arm::app::DetectionResult>>("results");
+    (void)results; /* Detailed checking of the results is covered by the inference tests. */
+}
+
+
+TEST_CASE("Inference run all images")
+{
+    printf("Entering Inference run all images \n");
+    hal_platform    platform;
+    data_acq_module data_acq;
+    data_psn_module data_psn;
+    platform_timer  timer;
+
+    /* Initialise the HAL and platform. */
+    hal_init(&platform, &data_acq, &data_psn, &timer);
+    hal_platform_init(&platform);
+
+    /* Model wrapper object. */
+    arm::app::YoloFastestModel model;
+
+    /* Load the model. */
+    REQUIRE(model.Init());
+
+    /* Instantiate application context. */
+    arm::app::ApplicationContext caseContext;
+
+    arm::app::Profiler profiler{&platform, "object_detection"};
+    caseContext.Set<arm::app::Profiler&>("profiler", profiler);
+    caseContext.Set<hal_platform&>("platform", platform);
+    caseContext.Set<arm::app::Model&>("model", model);
+    caseContext.Set<uint32_t>("imgIndex", 0);
+
+    REQUIRE(arm::app::ObjectDetectionHandler(caseContext, 0, true));
+}
+
+
+TEST_CASE("List all images")
+{
+printf("Entering List all images \n");
+    hal_platform    platform;
+    data_acq_module data_acq;
+    data_psn_module data_psn;
+    platform_timer  timer;
+
+    /* Initialise the HAL and platform. */
+    hal_init(&platform, &data_acq, &data_psn, &timer);
+    hal_platform_init(&platform);
+
+    /* Model wrapper object. */
+    arm::app::YoloFastestModel model;
+
+    /* Load the model. */
+    REQUIRE(model.Init());
+
+    /* Instantiate application context. */
+    arm::app::ApplicationContext caseContext;
+
+    caseContext.Set<hal_platform&>("platform", platform);
+    caseContext.Set<arm::app::Model&>("model", model);
+
+    REQUIRE(arm::app::ListFilesHandler(caseContext));
+}
diff --git a/tests/use_case/object_detection/include/ExpectedObjectDetectionResults.hpp b/tests/use_case/object_detection/include/ExpectedObjectDetectionResults.hpp
new file mode 100644
index 0000000..09edc00
--- /dev/null
+++ b/tests/use_case/object_detection/include/ExpectedObjectDetectionResults.hpp
@@ -0,0 +1,26 @@
+/*
+ * Copyright (c) 2022 Arm Limited. All rights reserved.
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef EXPECTED_OBJECT_DETECTION_RESULTS_H
+#define EXPECTED_OBJECT_DETECTION_RESULTS_H
+
+#include "DetectionResult.hpp"
+#include <vector>
+
+void get_expected_ut_results(std::vector<std::vector<arm::app::DetectionResult>> &expected_results);
+
+
+#endif /* EXPECTED_OBJECT_DETECTION_RESULTS_H */