Change dynamic fusion API to return destination tensor info

The new dynamic fusion API is introduced in the following patch:
https://review.mlplatform.org/c/ml/ComputeLibrary/+/8906

For each operator (except Conv2D, which is migrated in the above patch), we
   - remove the destination tensor from the is_supported, validate and create calls
   - make create_op return an ITensorInfo* to the intermediate destination object
     (see the usage sketch below)

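As a rough illustration, the sketch below shows the intended call pattern after
this change, using GpuClamp as an example. The context/sketch construction,
tensor shape, clamp bounds and attribute setter names are placeholders modelled
on the existing dynamic fusion examples and may differ from the exact API.

    // Illustrative sketch only; setup details are assumptions, not part of this patch.
    CLCompileContext   cl_compile_ctx = CLKernelLibrary::get().get_compile_context();
    GpuWorkloadContext context{ &cl_compile_ctx };
    GpuWorkloadSketch  sketch{ &context };

    TensorInfo      src_info{ TensorShape(27U, 13U, 2U), 1, DataType::F32 };
    ClampAttributes attributes{};
    attributes.min_val(0.f).max_val(6.f); // assumed fluent setters

    // The caller no longer passes a destination tensor info ...
    Status status = GpuClamp::validate_op(sketch, &src_info, attributes);

    // ... instead, create_op returns the intermediate destination info.
    ITensorInfo *dst_info = GpuClamp::create_op(sketch, &src_info, attributes);

Returning the internally created destination info lets the caller chain dst_info
as the source of the next operator to be fused, instead of pre-declaring and
auto-initializing intermediate tensor infos.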
Affected operators:
   - DepthwiseConv2D
   - Cast
   - Elementwise Ops
   - Clamp
   - Reshape
   - Resize

Resolves: COMPMID-5777
Change-Id: Ib60ec8a5f081752808455d7a7d790f2ed0627059
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8991
Reviewed-by: Ramy Elgammal <ramy.elgammal@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Dynamic-Fusion: Ramy Elgammal <ramy.elgammal@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
diff --git a/arm_compute/dynamic_fusion/sketch/gpu/operators/GpuClamp.h b/arm_compute/dynamic_fusion/sketch/gpu/operators/GpuClamp.h
index 66d6c5f..e962511 100644
--- a/arm_compute/dynamic_fusion/sketch/gpu/operators/GpuClamp.h
+++ b/arm_compute/dynamic_fusion/sketch/gpu/operators/GpuClamp.h
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2022 Arm Limited.
+ * Copyright (c) 2022-2023 Arm Limited.
  *
  * SPDX-License-Identifier: MIT
  *
@@ -58,34 +58,34 @@
      *
      * @param[in, out] sketch     Workload sketch into which the operator will be fused
      * @param[in]      src        Source tensor info. Data types supported: F16/F32.
-     * @param[out]     dst        Destination tensor info. Data types supported: F16/F32.
-     *                            If an uninitialized ITensorInfo is passed in, it will be auto-initialized
      * @param[in]      attributes Operator attributes
+     *
+     * @return Pointer to the destination tensor info
      */
-    static void create_op(GpuWorkloadSketch &sketch,
-                          ITensorInfo       *src,
-                          ITensorInfo       *dst,
-                          const Attributes  &attributes);
+    static ITensorInfo *create_op(GpuWorkloadSketch &sketch,
+                                  ITensorInfo       *src,
+                                  const Attributes  &attributes);
 
     /** Check if the operator configuration is supported, irrespective of fusion
      *
      * @param[in] context    Workload context within which the operator is running
      * @param[in] src        Source tensor info. Data types supported: F16/F32.
-     * @param[in] dst        Destination tensor info. Data types supported: F16/F32.
-     *                       If an uninitialized ITensorInfo is passed in, it will be auto-initialized
      * @param[in] attributes Operator attributes
+     *
+     * @return Status
      */
     static Status is_supported_op(const GpuWorkloadContext &context,
                                   const ITensorInfo        *src,
-                                  const ITensorInfo        *dst,
                                   const Attributes         &attributes);
 
     /** Validate the operator and check if it can be fused into the workload sketch.
-     * Similar to @ref GpuClamp::create_op()
+     *
+     * Parameters are similar to @ref GpuClamp::create_op()
+     *
+     * @return Status
      */
     static Status validate_op(const GpuWorkloadSketch &sketch,
                               const ITensorInfo       *src,
-                              const ITensorInfo       *dst,
                               const Attributes        &attributes);
 };
 } // namespace dynamic_fusion