IVGCVSW-4453 Add Support for ANEURALNETWORKS_QLSTM to HAL 1.3 Driver

 * Add QLSTM support to the Android NN Driver
 * Add an overrideOutputInfo parameter to SetupAndTrackLayerOutputSlot
 * Add an optional condition to GetInputScalar
 * Refactor the Quantized 16 Bit LSTM implementation

Change-Id: Ie8fa98ad5ee4a62174ef91ca80f1df62b7fde937
Signed-off-by: Keith Davis <keith.davis@arm.com>
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
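
The overrideOutputInfo parameter mentioned above can be illustrated with a minimal, self-contained sketch. The names TensorInfo, OutputSlot, and the function signature here are simplified stand-ins (the real driver uses armnn::TensorInfo and the ConversionData machinery); the sketch only shows the pattern of letting a caller such as the QLSTM converter substitute known output tensor metadata for the model-derived default:

```cpp
#include <cassert>

// Hypothetical, simplified tensor metadata; the real driver uses armnn::TensorInfo.
struct TensorInfo
{
    float quantizationScale  = 0.0f;
    int   quantizationOffset = 0;
};

struct OutputSlot
{
    TensorInfo info;
    void SetTensorInfo(const TensorInfo& newInfo) { info = newInfo; }
};

// Sketch of the overrideOutputInfo idea: when the caller already knows the
// exact output quantization parameters, it passes them in; when the pointer
// is null, the slot keeps the info derived from the model.
void SetupAndTrackLayerOutputSlot(OutputSlot& slot,
                                  const TensorInfo& modelInfo,
                                  const TensorInfo* overrideOutputInfo = nullptr)
{
    slot.SetTensorInfo(overrideOutputInfo ? *overrideOutputInfo : modelInfo);
}
```

A caller that needs the override simply passes a non-null pointer; existing call sites compile unchanged because the parameter defaults to nullptr.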
diff --git a/NnapiSupport.txt b/NnapiSupport.txt
index d5e077b..e3d7c69 100644
--- a/NnapiSupport.txt
+++ b/NnapiSupport.txt
@@ -54,6 +54,7 @@
 PRELU                        (FLOAT32, QUANT8_ASYMM)
 QUANTIZE                     (FLOAT32 (input only), QUANT8_ASYMM (output only))
 QUANTIZED_16BIT_LSTM         (QUANT8_ASYMM)
+QUANTIZED_LSTM               (QUANT8_ASYMM)
 RELU                         (FLOAT32, QUANT8_ASYMM)
 RELU1                        (FLOAT32, QUANT8_ASYMM)
 RELU6                        (FLOAT32, QUANT8_ASYMM)
@@ -74,7 +75,6 @@
 
 Where operations are not supported by the ArmNN Android NN Driver, the driver indicates this to the framework
 appropriately and the framework implements those operations using a CPU implementation.
-
 NOTE: By convention, only those tensor types have been listed above, which are fully supported across all
 ArmNN backends. FLOAT16 input tensors are partially supported on most HAL 1.2 operators on the GpuAcc and
 CpuRef backends, however not on CpuAcc.