COMPMID-1671: Allow fp mixed precision in CLFCLayer.

Adds the ability to request accumulation in float instead of half in
order to avoid accuracy-related issues.

Change-Id: I97de27fa36853834cd9eb69c0077e1cb1e6dd5ec
Signed-off-by: Georgios Pinitas <georgios.pinitas@arm.com>
Reviewed-on: https://review.mlplatform.org/c/2173
Reviewed-by: Manuel Bottini <manuel.bottini@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Giorgio Arena <giorgio.arena@arm.com>
diff --git a/arm_compute/core/Types.h b/arm_compute/core/Types.h
index 0a25277..f4955ed 100644
--- a/arm_compute/core/Types.h
+++ b/arm_compute/core/Types.h
@@ -805,6 +805,7 @@
     bool       transpose_weights{ true };                  /**<  Transpose weights if true. */
     bool       are_weights_reshaped{ false };              /**<  Reshape the weights tensor if false. */
     bool       retain_internal_weights{ false };           /**<  Retain internal reshaped weights. */
+    bool       fp_mixed_precision{ false };                /**<  Use wider accumulators (32 bit instead of 16 for FP16) to improve accuracy. */
 
     /** Sets the weights trained data layout
      *
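Usage note: a minimal sketch of how a caller could opt into the new flag,
assuming the public CLFullyConnectedLayer::configure() overload that takes a
FullyConnectedLayerInfo. Tensor names and shapes below are illustrative, not
taken from the patch.

    // Sketch: request FP32 accumulation for an FP16 fully connected layer.
    #include "arm_compute/core/Types.h"
    #include "arm_compute/runtime/CL/CLScheduler.h"
    #include "arm_compute/runtime/CL/CLTensor.h"
    #include "arm_compute/runtime/CL/functions/CLFullyConnectedLayer.h"

    using namespace arm_compute;

    int main()
    {
        CLScheduler::get().default_init();

        // Illustrative FP16 tensors: 128 inputs -> 64 outputs.
        CLTensor src, weights, bias, dst;
        src.allocator()->init(TensorInfo(TensorShape(128U), 1, DataType::F16));
        weights.allocator()->init(TensorInfo(TensorShape(128U, 64U), 1, DataType::F16));
        bias.allocator()->init(TensorInfo(TensorShape(64U), 1, DataType::F16));
        dst.allocator()->init(TensorInfo(TensorShape(64U), 1, DataType::F16));

        // The new flag: keep FP16 inputs/outputs but accumulate in 32 bits.
        FullyConnectedLayerInfo fc_info{};
        fc_info.fp_mixed_precision = true;

        CLFullyConnectedLayer fc;
        fc.configure(&src, &weights, &bias, &dst, fc_info);

        src.allocator()->allocate();
        weights.allocator()->allocate();
        bias.allocator()->allocate();
        dst.allocator()->allocate();

        fc.run();
        CLScheduler::get().sync();
        return 0;
    }

Since fp_mixed_precision defaults to false, existing callers keep the pure
FP16 accumulation path unless they explicitly request the wider accumulators.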