Add SME2 implementation of softmax for FP32

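For reference, the kernel computes the standard numerically stable
softmax along the innermost dimension of the tensor. The sketch below
is a plain scalar C++ illustration of that computation only; the
function name and signature are illustrative and it is not the actual
SME2 kernel, which implements the same arithmetic with streaming SVE
instructions.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>

    // Scalar reference for one innermost row: subtract the row maximum
    // for numerical stability, exponentiate, then normalise by the sum
    // of the exponentials.
    void softmax_row_fp32(const float *src, float *dst, size_t length)
    {
        float max_val = src[0];
        for (size_t i = 1; i < length; ++i)
        {
            max_val = std::max(max_val, src[i]);
        }

        float sum = 0.f;
        for (size_t i = 0; i < length; ++i)
        {
            dst[i] = std::exp(src[i] - max_val);
            sum += dst[i];
        }

        const float inv_sum = 1.f / sum;
        for (size_t i = 0; i < length; ++i)
        {
            dst[i] *= inv_sum;
        }
    }
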
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I8a63610cfb9ccff89dec6045d023439fc19b027a
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11357
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
diff --git a/docs/user_guide/release_version_and_change_log.dox b/docs/user_guide/release_version_and_change_log.dox
index 31b7560..aa27c2b 100644
--- a/docs/user_guide/release_version_and_change_log.dox
+++ b/docs/user_guide/release_version_and_change_log.dox
@@ -45,6 +45,7 @@
  - Add Bfloat16 data type support for @ref NEMatMul.
  - Optimize start-up time of @ref NEConvolutionLayer for some input configurations where GeMM is selected as the convolution algorithm
  - Optimize @ref NEConvolutionLayer for input tensor size > 1e7 bytes and weight tensor height > 7
+ - Add support for Softmax in SME2 for FP32.
  - Performance optimizations:
    - Optimize @ref NESoftmaxLayer for axis != 0 by natively supporting higher axes up to axis 3.