COMPMID-759 - CLGEMM optimization for McVail benchmarks

This patch introduces an optimization for CLGEMM on Bifrost
architectures which can bring FMA utilization up to 40% on
config 3 of McVail. The new CLGEMM does not require any reshape of
matrix A or matrix B.

This patch also adds auto-configuration to CLConvolutionLayer and
CLGEMM and extends the NEGEMM and CLGEMM interfaces.
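
A minimal usage sketch of the extended interface, assuming it follows
the GEMMInfo descriptor pattern where the caller states whether A and B
are already reshaped (the flag names and default values below are
assumptions drawn from that pattern, not confirmed by this patch):

    #include "arm_compute/runtime/CL/CLFunctions.h"
    #include "arm_compute/runtime/CL/CLScheduler.h"
    #include "arm_compute/runtime/CL/CLTensor.h"

    using namespace arm_compute;

    int main()
    {
        CLScheduler::get().default_init();

        // A is a 1 x 64 vector (num_dimensions() == 1), B is 64 x 1000:
        // the shape of an FC layer with batch size 1 and 1000 outputs.
        CLTensor a, b, dst;
        a.allocator()->init(TensorInfo(TensorShape(64U, 1U), 1, DataType::F32));
        b.allocator()->init(TensorInfo(TensorShape(1000U, 64U), 1, DataType::F32));
        dst.allocator()->init(TensorInfo(TensorShape(1000U, 1U), 1, DataType::F32));

        // Neither input is pre-reshaped; flag names are assumptions.
        CLGEMM gemm;
        gemm.configure(&a, &b, nullptr, &dst, 1.0f, 0.0f,
                       GEMMInfo(false /* is_a_reshaped */,
                                false /* is_b_reshaped */,
                                false /* reshape_b_only_on_first_run */));

        a.allocator()->allocate();
        b.allocator()->allocate();
        dst.allocator()->allocate();

        gemm.run();
        CLScheduler::get().sync();
        return 0;
    }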

Change-Id: Ibb354eda45e9ca64b14a99700fb21dff5989dda9
Reviewed-on: https://eu-gerrit-1.euhpc.arm.com/113716
Tested-by: Jenkins <bsgcomp@arm.com>
Reviewed-by: Michalis Spyrou <michalis.spyrou@arm.com>
Reviewed-by: Anthony Barbier <anthony.barbier@arm.com>
diff --git a/src/core/CL/kernels/CLGEMMMatrixMultiplyKernel.cpp b/src/core/CL/kernels/CLGEMMMatrixMultiplyKernel.cpp
index f51d0f9..19f38bf 100644
--- a/src/core/CL/kernels/CLGEMMMatrixMultiplyKernel.cpp
+++ b/src/core/CL/kernels/CLGEMMMatrixMultiplyKernel.cpp
@@ -95,7 +95,7 @@
         // Create kernels according to the architecture, data type and input size.
         if(gpu_target == GPUTarget::BIFROST && data_type == DataType::F32)
         {
-            num_elems_processed_per_iteration_x = (input1->dimension(0) <= 1000) ? 2 : 4;
+            num_elems_processed_per_iteration_x = (input1->dimension(0) <= 1000 && input0->num_dimensions() == 1) ? 2 : 4;
         }
 
         // Configure window
@@ -196,7 +196,7 @@
             // The first kernel is optimized for the case of 1000 or less output elements (e.g. FC8 of AlexNet and VGG-16, and
             // FC1 of Inception v3). The second kernel is optimized for the case of greater than 1000 output elements (e.g.
             // FC6 and FC7 of AlexNet and VGG-16).
-            kernel_name = (input1->info()->dimension(0) <= 1000) ? "gemm_mm_floating_point_f32_bifrost_1000" : "gemm_mm_floating_point_f32_bifrost";
+            kernel_name = (input1->info()->dimension(0) <= 1000 && input0->info()->num_dimensions() == 1) ? "gemm_mm_floating_point_f32_bifrost_1000" : "gemm_mm_floating_point_f32_bifrost";
 
             // The work-group size equal to the Bifrost quad size has been proved to be optimal for these kernels
             // via exhaustive autotuning over a range of representative layer configurations.
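
Both hunks gate the same heuristic: the "_1000" variant (and its
2-elements-per-iteration window) is now restricted to the vector-matrix
case, i.e. matrix A collapsed to a single dimension, as in a fully
connected layer with batch size 1. The sketch below restates that
dispatch as a standalone helper; the function name is hypothetical and
only mirrors the two conditions in the diff:

    #include <cstddef>
    #include <string>

    // Hypothetical helper mirroring the dispatch above: use the kernel
    // tuned for 1000 or fewer output elements only when A is a vector.
    std::string select_bifrost_f32_gemm_kernel(std::size_t b_width,
                                               std::size_t a_num_dimensions)
    {
        const bool small_vector_matrix = (b_width <= 1000) && (a_num_dimensions == 1);
        return small_vector_matrix ? "gemm_mm_floating_point_f32_bifrost_1000"
                                   : "gemm_mm_floating_point_f32_bifrost";
    }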