Add quantized CL MatMul kernels for Lhs NT/T, Rhs NT

Implement OpenCL kernels for batched matrix multiplication for the quantized data types QASYMM8 and QASYMM8_SIGNED.

Quantized MatMul is supported with the following MatMul attributes:
* adj_x = false, adj_y = false
* adj_x = true, adj_y = false

Only native-format kernels are considered; that is, no reshaping of the operand matrices is performed.
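For illustration, the arithmetic these kernels implement can be sketched as a scalar reference in C++. This is a hypothetical host-side sketch, not the OpenCL kernel from this patch: the function name, parameter layout, and per-tensor quantization parameters (scale and zero point) are assumptions for the example. `adj_x = true` means the Lhs is stored transposed (K x M) and is read with swapped indices; `adj_y` is always false here, so the Rhs is K x N.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical scalar reference for a QASYMM8 matmul (illustration only).
// lhs is M x K, or K x M when adj_x is true; rhs is K x N (adj_y = false).
// Each tensor has per-tensor quantization: real = scale * (q - zero_point).
std::vector<uint8_t> matmul_qasymm8(const std::vector<uint8_t> &lhs,
                                    const std::vector<uint8_t> &rhs,
                                    int M, int N, int K, bool adj_x,
                                    float lhs_scale, int lhs_zp,
                                    float rhs_scale, int rhs_zp,
                                    float dst_scale, int dst_zp)
{
    std::vector<uint8_t> dst(M * N);
    for (int m = 0; m < M; ++m)
    {
        for (int n = 0; n < N; ++n)
        {
            int32_t acc = 0;
            for (int k = 0; k < K; ++k)
            {
                // When adj_x is set, the Lhs is stored K x M, so swap the indices.
                const int lhs_idx = adj_x ? k * M + m : m * K + k;
                acc += (int32_t(lhs[lhs_idx]) - lhs_zp) *
                       (int32_t(rhs[k * N + n]) - rhs_zp);
            }
            // Requantize the int32 accumulator into the destination space.
            const float real = float(acc) * lhs_scale * rhs_scale;
            const int   q    = int(std::lround(real / dst_scale)) + dst_zp;
            dst[m * N + n]   = uint8_t(std::clamp(q, 0, 255));
        }
    }
    return dst;
}
```

With identical inputs stored either row-major (adj_x = false) or transposed (adj_x = true), both paths produce the same result, which is the property the two kernel variants (`nt_nt` and `t_nt`) share.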

Resolves: COMPMID-5921, COMPMID-5922

Change-Id: I99e0f68054a2bd635c60ec2641acc2e7ff398473
Signed-off-by: Omar Al Khatib <omar.alkhatib@arm.com>
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Signed-off-by: Jakub Sujak <jakub.sujak@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/9435
Reviewed-by: SiCong Li <sicong.li@arm.com>
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
diff --git a/src/gpu/cl/ClKernelLibrary.cpp b/src/gpu/cl/ClKernelLibrary.cpp
index 44b086f..e657687 100644
--- a/src/gpu/cl/ClKernelLibrary.cpp
+++ b/src/gpu/cl/ClKernelLibrary.cpp
@@ -323,6 +323,8 @@
     { "mat_mul_native_nt_t", "common/mat_mul.cl" },
     { "mat_mul_native_t_nt", "common/mat_mul.cl" },
     { "mat_mul_native_t_t", "common/mat_mul.cl" },
+    { "mat_mul_native_quantized_nt_nt", "common/mat_mul_quantized.cl" },
+    { "mat_mul_native_quantized_t_nt", "common/mat_mul_quantized.cl" },
     { "max_unpooling_layer_2", "common/unpooling_layer.cl" },
     { "mean_stddev_normalization", "common/mean_stddev_normalization.cl" },
     { "memset", "common/memset.cl" },
@@ -794,6 +796,10 @@
         "common/mat_mul.cl",
 #include "./cl_kernels/common/mat_mul.clembed"
     },
+    {
+        "common/mat_mul_quantized.cl",
+#include "./cl_kernels/common/mat_mul_quantized.clembed"
+    },
 #ifdef ENABLE_NCHW_KERNELS
     {
         "nchw/batch_to_space.cl",