IVGCVSW-7526 Upgrade ArmNN to TensorFlow 2.12

 When creating a FlatBuffers model, we need to provide an empty buffer 0 that is
 reserved by TensorFlow. When creating empty buffers for inputs and outputs we
 cannot pass in an empty vector, or TfLite will assume that we know how many
 bytes to allocate in advance. Instead we need to pass in only the builder.
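
 The two conventions above can be sketched with a toy model (the names
 ToyBuffer, isValidTensorBufferIndex, and hasKnownSize are illustrative
 assumptions, not part of the real FlatBuffers or TfLite APIs):

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Toy model of the TfLite buffer conventions described above; not the
// real FlatBuffers or TfLite API.
struct ToyBuffer
{
    // std::nullopt -> data field absent, as produced by calling
    //                 CreateBuffer(flatBufferBuilder) with no vector
    // empty vector -> a real zero-length allocation, as produced by
    //                 CreateVector({}), which TfLite treats as sized
    std::optional<std::vector<unsigned char>> data;
};

// TfLite reserves buffer 0 as the sentinel empty buffer, so a tensor's
// buffer index must be non-zero and within the buffer table.
inline bool isValidTensorBufferIndex(std::size_t index, std::size_t bufferCount)
{
    return index != 0 && index < bufferCount;
}

// A buffer with an absent data field makes no size claim at all, unlike
// a buffer holding a zero-length vector.
inline bool hasKnownSize(const ToyBuffer& buffer)
{
    return buffer.data.has_value();
}
```

 This is why, in the diff below, the input and output tensors reference
 buffers 1 and 2 rather than 0, and why three builder-only buffers are
 pushed before any tensors are created.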

 * Update libraries in FindTfLite.cmake
 * Add nullptr to delegate struct for OpaqueDelegateBuilder
 * Fix issue in unit tests where the FlatBuffers model was not being parsed by TfLite
 * TensorFlow 2.12 now uses C++17 features. Update our CMake build to
   require a compiler that supports these features.
 * Change the minimum CMake version in Arm NN to 3.7, as that is the
   minimum required for the delegate build.

Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Signed-off-by: Colm Donelan <colm.donelan@arm.com>

Change-Id: I7d15b196b8c59b1914f8fc1c4c2f8960630c069c
diff --git a/delegate/src/test/QuantizationTestHelper.hpp b/delegate/src/test/QuantizationTestHelper.hpp
index e415504..a8b1022 100644
--- a/delegate/src/test/QuantizationTestHelper.hpp
+++ b/delegate/src/test/QuantizationTestHelper.hpp
@@ -1,5 +1,5 @@
 //
-// Copyright © 2020 Arm Ltd and Contributors. All rights reserved.
+// Copyright © 2020, 2023 Arm Ltd and Contributors. All rights reserved.
 // SPDX-License-Identifier: MIT
 //
 
@@ -31,7 +31,10 @@
     flatbuffers::FlatBufferBuilder flatBufferBuilder;
 
     std::vector<flatbuffers::Offset<tflite::Buffer>> buffers;
-    buffers.push_back(CreateBuffer(flatBufferBuilder, flatBufferBuilder.CreateVector({})));
+    buffers.push_back(CreateBuffer(flatBufferBuilder));
+    buffers.push_back(CreateBuffer(flatBufferBuilder));
+    buffers.push_back(CreateBuffer(flatBufferBuilder));
+
 
     auto quantizationParameters =
             CreateQuantizationParameters(flatBufferBuilder,
@@ -46,14 +49,14 @@
                               flatBufferBuilder.CreateVector<int32_t>(inputTensorShape.data(),
                                                                       inputTensorShape.size()),
                               inputTensorType,
-                              0,
+                              1,
                               flatBufferBuilder.CreateString("input"),
                               quantizationParameters);
     tensors[1] = CreateTensor(flatBufferBuilder,
                               flatBufferBuilder.CreateVector<int32_t>(outputTensorShape.data(),
                                                                       outputTensorShape.size()),
                               outputTensorType,
-                              0,
+                              2,
                               flatBufferBuilder.CreateString("output"),
                               quantizationParameters);