COMPMID-3575: Mixed precision in NEInstanceNormalizationLayerKernel

To fix the accuracy issue caused by the limited precision of FP16,
mixed-precision computation (a float accumulator) is introduced to
NEInstanceNormalizationLayerKernel. Since the reference kernel already
computes in mixed precision, mixed-precision computation is currently
the default when the kernel is called from NEInstanceNormalizationLayer.
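
As a rough illustration of the float-accumulator idea (an illustrative
sketch only, not the actual kernel code; the function name is made up),
the per-channel reduction casts each FP16 element to float before
accumulating:

    #include <cstddef>
    #include <utility>

    // Illustrative only: accumulate in float even when T is a 16-bit
    // float type, so long sums in the mean/variance reduction do not
    // lose precision.
    template <typename T>
    std::pair<float, float> mean_and_variance(const T *data, std::size_t n)
    {
        float sum    = 0.f;
        float sum_sq = 0.f;
        for(std::size_t i = 0; i < n; ++i)
        {
            const float v = static_cast<float>(data[i]);
            sum += v;
            sum_sq += v * v;
        }
        const float mean = sum / static_cast<float>(n);
        return std::make_pair(mean, sum_sq / static_cast<float>(n) - mean * mean);
    }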

- Make NEInstanceNormalizationLayerKernel use a kernel descriptor
  to enable mixed-precision computation
- Modify NEInstanceNormalizationLayer to use the descriptor (see the
  sketch below)
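
A minimal sketch of the descriptor approach (field and type names here
are illustrative assumptions, not the library's actual declarations):
the function-level layer fills in a small descriptor, including the
mixed-precision flag, and passes it to the kernel's configure step.

    // Illustrative descriptor only; the real struct lives in the
    // library's kernel-descriptor headers and may differ in names
    // and defaults.
    struct InstanceNormKernelInfoSketch
    {
        float gamma{ 1.f };
        float beta{ 0.f };
        float epsilon{ 1e-12f };
        bool  use_mixed_precision{ true }; // default on, matching the reference kernel
    };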

Change-Id: I7766622d715df054e303f9b441380b15b51f02b2
Signed-off-by: Sang-Hoon Park <sang-hoon.park@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/3589
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Michele Di Giorgio <michele.digiorgio@arm.com>
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
diff --git a/tests/validation/NEON/InstanceNormalizationLayer.cpp b/tests/validation/NEON/InstanceNormalizationLayer.cpp
index 9d13a2b..1073a7f 100644
--- a/tests/validation/NEON/InstanceNormalizationLayer.cpp
+++ b/tests/validation/NEON/InstanceNormalizationLayer.cpp
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019 Arm Limited.
+ * Copyright (c) 2019-2020 Arm Limited.
  *
  * SPDX-License-Identifier: MIT
  *
@@ -45,7 +45,11 @@
 /** Tolerance for float operations */
 AbsoluteTolerance<float> tolerance_f32(0.0015f);
 #ifdef __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
-AbsoluteTolerance<float> tolerance_f16(0.5f);
+// This tolerance is chosen based on the precision float16_t provides for
+// values between 16 and 32 (a 10-bit mantissa gives a spacing of
+// 2^-6 = 0.015625 in that range) and was tuned over multiple test runs.
+// With randomly generated inputs there is still no guarantee that this
+// tolerance will always be large enough.
+AbsoluteTolerance<half> tolerance_f16(static_cast<half>(0.015625f));
 #endif // __ARM_FEATURE_FP16_VECTOR_ARITHMETIC
 } // namespace