Fix the bug in GpuTanh operator in dynamic fusion

Tanh in dynamic fusion is a simple operator with no A and B coefficients, as its public interface implies. The operator follows the TOSA specification.

Customization of the tanh calculation with coefficients a and b can be achieved via fusion, as below:

out = a * tanh(b * in) -->

x = b * in
y = tanh(x)
out = a * y

Resolves: COMPMID-6873

Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Change-Id: I818765192f631ae82c2094b0fc376fb87bae4fa4
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11109
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gian Marco Iodice <gianmarco.iodice@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
diff --git a/src/dynamic_fusion/sketch/gpu/ckw_driver/components/GpuCkwActivation.cpp b/src/dynamic_fusion/sketch/gpu/ckw_driver/components/GpuCkwActivation.cpp
index 68f478a..18fda5b 100644
--- a/src/dynamic_fusion/sketch/gpu/ckw_driver/components/GpuCkwActivation.cpp
+++ b/src/dynamic_fusion/sketch/gpu/ckw_driver/components/GpuCkwActivation.cpp
@@ -242,12 +242,7 @@
         }
         case ActivationLayerInfo::ActivationFunction::TANH:
         {
-            // dst = B_VAL * src
-            writer->op_binary(tile_dst, ckw::BinaryOp::Mul, tile_src, const_B_fp);
-            // dst = tanh(B_VAL * src)
-            writer->op_unary(tile_dst, ckw::UnaryOp::Tanh, tile_dst);
-            // dst = A_VAL * tanh(B_VAL * src)
-            writer->op_binary(tile_dst, ckw::BinaryOp::Mul, tile_dst, const_A_fp);
+            writer->op_unary(tile_dst, ckw::UnaryOp::Tanh, tile_src);
             break;
         }
         case ActivationLayerInfo::ActivationFunction::RELU:
diff --git a/tests/validation/fixtures/dynamic_fusion/operators/ActivationFixture.h b/tests/validation/fixtures/dynamic_fusion/operators/ActivationFixture.h
index 2f0b133..c9ffbcc 100644
--- a/tests/validation/fixtures/dynamic_fusion/operators/ActivationFixture.h
+++ b/tests/validation/fixtures/dynamic_fusion/operators/ActivationFixture.h
@@ -194,7 +194,7 @@
 public:
     void setup(TensorShape shape, bool fuse, DataType data_type)
     {
-        ActivationLayerInfo act_info{ActivationLayerInfo::ActivationFunction::TANH};
+        ActivationLayerInfo act_info{ActivationLayerInfo::ActivationFunction::TANH, 1.0f, 1.0f};
         DynamicFusionActivationValidationFixture<TensorType, AccessorType, FunctionType, T>::setup(shape, fuse,
                                                                                                    data_type, act_info);
     }