IVGCVSW-4119 Fix FP16 to FP32 fallback mechanism in optimizer to work with Dequantize

* Check the output data type as well as the input data type when determining
  whether to attempt a fallback to FP32 if FP16 is not supported (see the
  sketch after this list)
* Override output type for Dequantize in IsLayerSupported() instead of
  input type
* Update the original input type from FP16 to FP32 in InsertConvertFp32ToFp16LayersAfter()
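
A minimal sketch of the fallback check described in the first bullet, assuming
a hypothetical helper name (ShouldAttemptFp32Fallback) and only the public
armnn::TensorInfo API; this is not the actual optimizer code. Dequantize takes
a quantized input and produces an FP16 output, so a check on the input type
alone never triggers the fallback, while a check that also covers the output
type does:

    #include <armnn/Tensor.hpp> // armnn::TensorInfo
    #include <armnn/Types.hpp>  // armnn::DataType

    // Hypothetical helper, for illustration only: attempt the FP32 fallback
    // if either side of the layer is FP16. For Dequantize (e.g. QAsymm8 input,
    // FP16 output) only the output-type check fires.
    bool ShouldAttemptFp32Fallback(const armnn::TensorInfo& input,
                                   const armnn::TensorInfo& output)
    {
        return input.GetDataType()  == armnn::DataType::Float16 ||
               output.GetDataType() == armnn::DataType::Float16;
    }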

Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: Ic6477fd17cea5a91bd8bf9ae0cf836520897d5b7
diff --git a/src/backends/backendsCommon/WorkloadFactory.cpp b/src/backends/backendsCommon/WorkloadFactory.cpp
index 4a7f007..9901dcb 100644
--- a/src/backends/backendsCommon/WorkloadFactory.cpp
+++ b/src/backends/backendsCommon/WorkloadFactory.cpp
@@ -265,8 +265,8 @@
             const TensorInfo& input = layer.GetInputSlot(0).GetConnection()->GetTensorInfo();
             const TensorInfo& output = layer.GetOutputSlot(0).GetTensorInfo();
 
-            result = layerSupportObject->IsDequantizeSupported(OverrideDataType(input, dataType),
-                                                               output,
+            result = layerSupportObject->IsDequantizeSupported(input,
+                                                               OverrideDataType(output, dataType),
                                                                reason);
             break;
         }