IVGCVSW-6420: Constant flag in tensor info is not set correctly

!android-nn-driver:6532
!armnn-internal-tests:372451

  * Fixed 2 of the 3 ConstTensor() constructors in Tensor.hpp to
    throw InvalidArgumentException when the TensorInfo isConstant
    parameter is false.
  * Added a new ConstTensor() constructor in Tensor.cpp that accepts
    vector<>.data(), using template<typename MemoryType>.
  * Fixed the runtime->GetOutputTensorInfo()/GetInputTensorInfo()
    methods, and the submethods they call, to return TensorInfo&
    rather than TensorInfo.
  * Fixed all failing unit tests for CpuRef/CpuAcc/GpuAcc to ensure
    every ConstTensor created has its TensorInfo isConstant set to true.
  * Added unit tests in TensorTest.cpp to ensure ConstTensor
    constructors throw InvalidArgumentException when the TensorInfo
    isConstant parameter is false.
  * Added a unit test to ensure the empty ConstTensor constructor sets
    TensorInfo isConstant to true.
  * Indentation fixes.
  * Fixed arm_tensor.i to add the isConstant parameter to the
    TensorInfo constructor, and added the IsConstant() and
    SetConstant() methods.
  * Fixed const_tensor.py to raise ValueError when TensorInfo
    isConstant is false while constructing a ConstTensor.
  * Fixed PyArmnn unit tests to set TensorInfo isConstant to
    True when ConstTensor is used.
  * Added unit tests in test_const_tensor.py to ensure ConstTensor
    constructors raise ValueError when the TensorInfo isConstant
    parameter is false.
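The validation described above can be sketched with a minimal, self-contained mock of the two classes involved. This is illustrative only, not armnn's real Tensor.hpp: the class shapes are simplified, and std::invalid_argument stands in for armnn::InvalidArgumentException.

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Illustrative stand-in for armnn::TensorInfo, reduced to the
// isConstant flag this change is about.
class TensorInfo
{
public:
    explicit TensorInfo(bool isConstant = false) : m_IsConstant(isConstant) {}
    bool IsConstant() const { return m_IsConstant; }
    void SetConstant(bool isConstant = true) { m_IsConstant = isConstant; }
private:
    bool m_IsConstant;
};

// Illustrative stand-in for armnn::ConstTensor.
class ConstTensor
{
public:
    // The empty constructor marks its TensorInfo as constant.
    ConstTensor() : m_Info(true), m_Memory(nullptr) {}

    // Templated on the memory type so callers can pass e.g.
    // std::vector<float>::data(). Constructing from a TensorInfo whose
    // isConstant flag is false is rejected, mirroring the commit's
    // InvalidArgumentException behaviour.
    template <typename MemoryType>
    ConstTensor(const TensorInfo& info, MemoryType memory)
        : m_Info(info), m_Memory(static_cast<const void*>(memory))
    {
        if (!info.IsConstant())
        {
            throw std::invalid_argument(
                "Invalid attempt to construct ConstTensor "
                "from non-constant TensorInfo.");
        }
    }

    const TensorInfo& GetInfo() const { return m_Info; }
private:
    TensorInfo m_Info;
    const void* m_Memory;
};
```

With this shape, a test first calls SetConstant() on the TensorInfo before wrapping the backing memory in a ConstTensor; passing a non-constant TensorInfo throws.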
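The PyArmnn side follows the same pattern: const_tensor.py raises ValueError instead of throwing a C++ exception. A minimal sketch, again with illustrative stand-in classes rather than the real PyArmnn bindings:

```python
class TensorInfo:
    """Illustrative stand-in for pyarmnn TensorInfo (isConstant flag only)."""

    def __init__(self, is_constant=False):
        self._is_constant = is_constant

    def IsConstant(self):
        return self._is_constant

    def SetConstant(self, is_constant=True):
        self._is_constant = is_constant


class ConstTensor:
    """Illustrative stand-in for pyarmnn ConstTensor."""

    def __init__(self, tensor_info, data):
        # Mirror the commit: reject a TensorInfo whose isConstant
        # flag has not been set.
        if not tensor_info.IsConstant():
            raise ValueError(
                "TensorInfo must be marked constant before "
                "constructing a ConstTensor.")
        self._info = tensor_info
        self._data = data

    def GetInfo(self):
        return self._info
```

As on the C++ side, existing callers that build a ConstTensor now need a SetConstant() call on the TensorInfo first, which is what the PyArmnn unit-test fixes amount to.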

Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: I44e440dd0422c366d31bbdbc77ad2b4db0bde148
diff --git a/src/backends/backendsCommon/test/ArgMinMaxEndToEndTestImpl.hpp b/src/backends/backendsCommon/test/ArgMinMaxEndToEndTestImpl.hpp
index 2ffe06f..041f9f8 100644
--- a/src/backends/backendsCommon/test/ArgMinMaxEndToEndTestImpl.hpp
+++ b/src/backends/backendsCommon/test/ArgMinMaxEndToEndTestImpl.hpp
@@ -47,7 +47,7 @@
     const float qScale  = armnn::IsQuantizedType<T>() ? 2.0f : 1.0f;
     const int32_t qOffset = armnn::IsQuantizedType<T>() ? 2 : 0;
 
-    armnn::TensorInfo inputTensorInfo(inputShape, ArmnnType, qScale, qOffset);
+    armnn::TensorInfo inputTensorInfo(inputShape, ArmnnType, qScale, qOffset, true);
     armnn::TensorInfo outputTensorInfo(outputShape, armnn::DataType::Signed32);
 
     // quantize data