Implement MatMul Function and Operator with Floating Point support for CPU

- Implements the MatMul function and operator for the floating-point data types FP16/FP32
- Adds support for transposing dynamic tensors prior to matrix multiplication
- Adds tests for 2D/3D/4D+ tensors in MatMul with the F32/F16 data types, covering all combinations of transposed/non-transposed inputs
- Updates the fixture to allow testing fused activation in MatMul
- Adds tests for MatMul with and without fused activation
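The transpose combinations exercised by these tests follow the usual MatMul semantics: either input may be adjoint (transposed) before the product is taken. A minimal reference sketch of those semantics, assuming row-major storage and hypothetical `adj_lhs`/`adj_rhs` flag names (illustration only, not Compute Library code):

```cpp
#include <cstddef>
#include <vector>

// Reference 2D matrix multiply with optional transpose of either input.
// lhs is stored (m x k), or (k x m) when adj_lhs is set; rhs is stored
// (k x n), or (n x k) when adj_rhs is set. Output is (m x n), row-major.
std::vector<float> matmul_ref(const std::vector<float> &lhs,
                              const std::vector<float> &rhs,
                              std::size_t m, std::size_t k, std::size_t n,
                              bool adj_lhs, bool adj_rhs)
{
    std::vector<float> dst(m * n, 0.0f);
    for (std::size_t i = 0; i < m; ++i)
    {
        for (std::size_t j = 0; j < n; ++j)
        {
            float acc = 0.0f;
            for (std::size_t p = 0; p < k; ++p)
            {
                // Index through the stored layout, honouring the adjoint flags.
                const float a = adj_lhs ? lhs[p * m + i] : lhs[i * k + p];
                const float b = adj_rhs ? rhs[j * k + p] : rhs[p * n + j];
                acc += a * b;
            }
            dst[i * n + j] = acc;
        }
    }
    return dst;
}
```

For example, multiplying the 2x2 matrices {1,2,3,4} and {5,6,7,8} with both flags false yields {19,22,43,50}; setting `adj_lhs` multiplies the transpose of the first operand instead.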

Resolved: [COMPMID-5898]
Signed-off-by: Mohammed Suhail Munshi <MohammedSuhail.Munshi@arm.com>
Change-Id: Iefa84b26dd723c9a51e6c3f91023152c6c31ace2
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/9411
Reviewed-by: SiCong Li <sicong.li@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
diff --git a/Android.bp b/Android.bp
index a08bab6..e38ea65 100644
--- a/Android.bp
+++ b/Android.bp
@@ -585,6 +585,7 @@
         "src/cpu/operators/CpuGemmDirectConv2d.cpp",
         "src/cpu/operators/CpuGemmLowpMatrixMultiplyCore.cpp",
         "src/cpu/operators/CpuGemmLowpOutputStage.cpp",
+        "src/cpu/operators/CpuMatMul.cpp",
         "src/cpu/operators/CpuMaxUnpooling.cpp",
         "src/cpu/operators/CpuMul.cpp",
         "src/cpu/operators/CpuPermute.cpp",
@@ -940,6 +941,7 @@
         "src/runtime/NEON/functions/NELSTMLayer.cpp",
         "src/runtime/NEON/functions/NELSTMLayerQuantized.cpp",
         "src/runtime/NEON/functions/NELogical.cpp",
+        "src/runtime/NEON/functions/NEMatMul.cpp",
         "src/runtime/NEON/functions/NEMaxUnpoolingLayer.cpp",
         "src/runtime/NEON/functions/NEMeanStdDevNormalizationLayer.cpp",
         "src/runtime/NEON/functions/NENormalizationLayer.cpp",