IVGCVSW-5695 Update supported operators

 * Update supported operators for the delegate, parsers,
   serializer and deserializer


Signed-off-by: Jan Eilers <jan.eilers@arm.com>
Change-Id: I33ac99a29d894eec055cd05411014075d78b3168
diff --git a/README.md b/README.md
index d602861..d645bff 100644
--- a/README.md
+++ b/README.md
@@ -36,7 +36,7 @@
 Depending on what kind of framework (Tensorflow, Caffe, ONNX) you've been using to create your model there are multiple 
 software tools available within Arm NN that can serve your needs.
 
-Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run a models from 
+Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run models from 
 one framework e.g. the TfLite-Parser lets you run TfLite models. You can integrate these parsers into your own 
 application to load, optimize and execute your model. We also provide **python bindings** for our parsers and the Arm NN core.
 We call the result **PyArmNN**. Therefore your application can be conveniently written in either C++ using the "original"
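
The README paragraph above describes the intended flow: pick the parser for your framework, load the model, optimize it, and execute it, either from C++ or through the PyArmNN Python bindings. As a non-normative sketch of that flow using the TfLite parser via PyArmNN (call names follow the PyArmNN quickstart pattern; exact signatures can differ between releases, and `model.tflite` plus the input array are placeholders):

```python
import numpy as np
import pyarmnn as ann

# Parse a TfLite model into an Arm NN network.
parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile('model.tflite')

# Create a runtime and optimize the network for the preferred backends.
runtime = ann.IRuntime(ann.CreationOptions())
preferred_backends = [ann.BackendId('CpuAcc'), ann.BackendId('CpuRef')]
opt_network, _ = ann.Optimize(network, preferred_backends,
                              runtime.GetDeviceSpec(), ann.OptimizerOptions())

# Load the optimized network and look up the input/output bindings.
net_id, _ = runtime.LoadNetwork(opt_network)
graph_id = 0
input_name = parser.GetSubgraphInputTensorNames(graph_id)[0]
input_binding = parser.GetNetworkInputBindingInfo(graph_id, input_name)
output_name = parser.GetSubgraphOutputTensorNames(graph_id)[0]
output_binding = parser.GetNetworkOutputBindingInfo(graph_id, output_name)

# Run inference on placeholder input data shaped to match the model.
input_data = np.zeros(input_binding[1].GetShape(), dtype=np.float32)
input_tensors = ann.make_input_tensors([input_binding], [input_data])
output_tensors = ann.make_output_tensors([output_binding])
runtime.EnqueueWorkload(net_id, input_tensors, output_tensors)
results = ann.workload_tensors_to_ndarray(output_tensors)
```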
diff --git a/docs/01_01_parsers.dox b/docs/01_01_parsers.dox
index 025858e..ae49303 100644
--- a/docs/01_01_parsers.dox
+++ b/docs/01_01_parsers.dox
@@ -127,8 +127,7 @@
 ### Partially supported
 
 - Conv
-  - The parser only supports 2D convolutions with a dilation rate of [1, 1] and group = 1 or group = #Nb_of_channel (depthwise convolution)
-    See the ONNX [Conv documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Conv) for more information.
+  - The parser only supports 2D convolutions with group = 1 or group = #Nb_of_channel (depthwise convolution)
 - BatchNormalization
   - The parser does not support training mode. See the ONNX [BatchNormalization documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#BatchNormalization) for more information.
 - MatMul
@@ -179,13 +178,15 @@
 - MAXIMUM
 - MEAN
 - MINIMUM
-- MU
+- MUL
 - NEG
 - PACK
 - PAD
 - QUANTIZE
 - RELU
 - RELU6
+- REDUCE_MAX
+- REDUCE_MIN
 - RESHAPE
 - RESIZE_BILINEAR
 - RESIZE_NEAREST_NEIGHBOR
@@ -197,6 +198,7 @@
 - SQUEEZE
 - STRIDED_SLICE
 - SUB
+- SUM
 - TANH
 - TRANSPOSE
 - TRANSPOSE_CONV
diff --git a/docs/01_02_deserializer_serializer.dox b/docs/01_02_deserializer_serializer.dox
index 047cb5d..6884b93 100644
--- a/docs/01_02_deserializer_serializer.dox
+++ b/docs/01_02_deserializer_serializer.dox
@@ -59,6 +59,7 @@
 - Quantize
 - QuantizedLstm
 - Rank
+- Reduce
 - Reshape
 - Resize
 - Slice
@@ -143,10 +144,10 @@
 - QLstm
 - QuantizedLstm
 - Rank
+- Reduce
 - Reshape
 - Resize
 - ResizeBilinear
-- Rsqrt
 - Slice
 - Softmax
 - SpaceToBatchNd
@@ -157,6 +158,7 @@
 - StridedSlice
 - Subtraction
 - Switch
+- Transpose
 - TransposeConvolution2d
 
 More machine learning layers will be supported in future releases.
diff --git a/docs/01_03_delegate.dox b/docs/01_03_delegate.dox
index 9063f05..f6d8e76 100644
--- a/docs/01_03_delegate.dox
+++ b/docs/01_03_delegate.dox
@@ -82,15 +82,17 @@
 - LOCAL_RESPONSE_NORMALIZATION
 
 - LOGICAL_AND
--
+
 - LOGICAL_NOT
--
+
 - LOGICAL_OR
 
 - LOGISTIC
 
 - LOG_SOFTMAX
 
+- LSTM
+
 - L2_NORMALIZATION
 
 - L2_POOL_2D
@@ -111,8 +113,16 @@
 
 - PAD
 
+- PRELU
+
 - QUANTIZE
 
+- RANK
+
+- REDUCE_MAX
+
+- REDUCE_MIN
+
 - RESHAPE
 
 - RESIZE_BILINEAR
@@ -137,8 +147,12 @@
 
 - SQRT
 
+- STRIDED_SLICE
+
 - SUB
 
+- SUM
+
 - TANH
 
 - TRANSPOSE