Add new operator AddMulAdd for the Neon™ backend (Float/Quantized types)

This is a fused operator that merges Add + Mul + Add [+ Relu-based activation] layers and exposes an intermediate output after the first Add. It is supported for the FP16, FP32, QASYMM8 and QASYMM8_SIGNED data types.

The subsequent Mul and Add are intended for scaling, and their coefficients are one-dimensional tensors applied per channel.

The inputs are
     - input1    : nD tensor   [X, Y, Z, W, ..]
     - input2    : nD tensor   [X, Y, Z, W, ..]
     - add_coef  : 1D tensor   [X]
     - mul_coef  : 1D tensor   [X]

The outputs are
     - out1          : nD tensor (intermediate output)  [X, Y, Z, W, ..]
     - out2          : nD tensor (final output)  [X, Y, Z, W, ..]

The operation can be summarized as follows:
     out1 <- input1 + input2
     out2 <- Act(out1 * mul_coef + add_coef)

The activation function can be Identity, Relu, Bounded Relu or Lower/Upper Bounded Relu. The intermediate output can be skipped by providing a nullptr.
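
For clarity, the per-element semantics can be modelled with the scalar sketch below. This is illustrative only, not the vectorized Neon™ kernel; the function name and the flattened indexing (X, of length C, assumed to be the innermost contiguous dimension) are assumptions for exposition:

     #include <cstddef>

     // Scalar model of the AddMulAdd semantics; the coefficients are
     // per-channel 1D tensors of length C, matching the X dimension.
     void add_mul_add_ref(const float *in1, const float *in2,
                          const float *mul_coef, const float *add_coef,
                          float *out1 /* may be nullptr */, float *out2,
                          std::size_t total, std::size_t C)
     {
         for (std::size_t i = 0; i < total; ++i)
         {
             const float sum = in1[i] + in2[i];
             if (out1 != nullptr)
             {
                 out1[i] = sum; // intermediate output after the first Add
             }
             const std::size_t c   = i % C; // per-channel coefficient index
             const float       val = sum * mul_coef[c] + add_coef[c];
             out2[i] = val > 0.0f ? val : 0.0f; // e.g. Relu; Identity would pass val through
         }
     }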

The motivation for this operator is to enable fusion of residual-network patterns and to save computation by reducing memory traffic back and forth.
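
As a usage illustration, the new runtime function might be configured as follows. This is a minimal sketch: the tensor shapes are arbitrary, and the exact configure() parameter order is an assumption based on the description above, so the NEAddMulAdd header remains the authoritative reference:

     #include "arm_compute/core/TensorInfo.h"
     #include "arm_compute/core/Types.h"
     #include "arm_compute/runtime/NEON/functions/NEAddMulAdd.h"
     #include "arm_compute/runtime/Tensor.h"

     using namespace arm_compute;

     Tensor input1, input2, mul_coef, add_coef, out1, out2;
     input1.allocator()->init(TensorInfo(TensorShape(16U, 8U, 4U), 1, DataType::F32));
     input2.allocator()->init(TensorInfo(TensorShape(16U, 8U, 4U), 1, DataType::F32));
     mul_coef.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::F32));
     add_coef.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::F32));
     out1.allocator()->init(TensorInfo(TensorShape(16U, 8U, 4U), 1, DataType::F32));
     out2.allocator()->init(TensorInfo(TensorShape(16U, 8U, 4U), 1, DataType::F32));

     NEAddMulAdd add_mul_add;
     add_mul_add.configure(&input1, &input2, &mul_coef, &add_coef,
                           &out1 /* or nullptr to skip the intermediate output */, &out2,
                           ConvertPolicy::SATURATE,
                           ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU));

     // Allocate backing memory and fill the tensors, then execute:
     //   input1.allocator()->allocate(); ...
     //   add_mul_add.run();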

Resolves: COMPMID-5463

Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Change-Id: I8ef577aa623b036e9a9f655cc088493fd19a6109
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/9055
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
diff --git a/filelist.json b/filelist.json
index aec4fa8..ea75c4a 100644
--- a/filelist.json
+++ b/filelist.json
@@ -922,6 +922,21 @@
           }
         }
       },
+      "AddMulAdd": {
+        "files": {
+          "common": [
+            "src/cpu/operators/CpuAddMulAdd.cpp",
+            "src/cpu/kernels/CpuAddMulAddKernel.cpp",
+            "src/runtime/NEON/functions/NEAddMulAdd.cpp"
+          ],
+          "neon": {
+            "fp32":["src/cpu/kernels/addmuladd/generic/neon/fp32.cpp"],
+            "fp16":["src/cpu/kernels/addmuladd/generic/neon/fp16.cpp"],
+            "qasymm8": ["src/cpu/kernels/addmuladd/generic/neon/qasymm8.cpp"],
+            "qasymm8_signed": ["src/cpu/kernels/addmuladd/generic/neon/qasymm8_signed.cpp"]
+          }
+        }
+      },
       "BatchNormalize": {
         "files": {
           "common": [