Rework CpuQuantizeKernel to enable FP16 in multi_isa builds

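The FP16 path now follows the usual multi_isa split: the FP16 code is built in its own
translation unit with the fp16 architecture extension and is only selected at runtime
when the CPU reports FP16 support. A rough sketch of that pattern is below; the function
name, the scalar loop and the dispatch comment are illustrative only (the guard macro and
CPUInfo::has_fp16() are assumed to match the library's multi_isa conventions), not the
actual kernel code.

    // fp16.cpp - in a multi_isa build only this file is compiled with the
    // +fp16 architecture extension; no other translation unit touches float16_t.
    #if defined(ARM_COMPUTE_ENABLE_FP16)
    #include <arm_neon.h>
    #include <cstddef>
    #include <cstdint>

    // Illustrative scalar reference of an FP16 -> QASYMM8 quantize loop
    // (the real kernel uses NEON intrinsics and ACL's window/iterator machinery).
    void fp16_quantize_ref(const float16_t *src, uint8_t *dst, std::size_t n,
                           float scale, int32_t offset)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            float q = static_cast<float>(src[i]) / scale + static_cast<float>(offset);
            q       = q < 0.f ? 0.f : (q > 255.f ? 255.f : q);
            dst[i]  = static_cast<uint8_t>(q);
        }
    }
    #endif // ARM_COMPUTE_ENABLE_FP16

    // In the kernel's common code the FP16 variant is registered/called only when
    // the runtime CPU supports it, e.g. behind a CPUInfo::has_fp16() check.
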
Resolves: COMPMID-7054
Signed-off-by: Ramy Elgammal <ramy.elgammal@arm.com>
Change-Id: I68d125b81ad7f74b2594ccda8d6ec08beef1ebd7
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11555
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
diff --git a/docs/user_guide/release_version_and_change_log.dox b/docs/user_guide/release_version_and_change_log.dox
index 9c3eb8e..f493ff6 100644
--- a/docs/user_guide/release_version_and_change_log.dox
+++ b/docs/user_guide/release_version_and_change_log.dox
@@ -43,7 +43,7 @@
 
 v24.05 Public major release
  - Add @ref CLScatter operator for FP32/16, S32/16/8, U32/16/8 data types
- - Fix @ref NEReductionOperationKernel FP16 for armv8a multi_isa builds
+ - Various fixes to enable FP16 kernels in armv8a multi_isa builds.
 
 v24.04 Public major release
  - Add Bfloat16 data type support for @ref NEMatMul.