commit | e8f28136a048829211d30208cdd6679f0f1c1632 | |
---|---|---|
author | Ramy Elgammal <ramy.elgammal@arm.com> | Wed Jun 12 18:22:57 2024 +0100 |
committer | Ramy Elgammal <ramy.elgammal@arm.com> | Wed Jun 12 18:26:21 2024 +0000 |
tree | 75cd5b3c46bc78ac12887e0fb62721ab2aca0733 | |
parent | d7230761a65ff4d559eb28945b0d4e3dfb46926f | |
Update documentation

Signed-off-by: Ramy Elgammal <ramy.elgammal@arm.com>
Change-Id: I46f936f3c503d4801c4dba85900cee00bc372683
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11690
Reviewed-by: Suhail M <MohammedSuhail.Munshi@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
diff --git a/docs/user_guide/release_version_and_change_log.dox b/docs/user_guide/release_version_and_change_log.dox
index d9c2c84..16664c8 100644
--- a/docs/user_guide/release_version_and_change_log.dox
+++ b/docs/user_guide/release_version_and_change_log.dox
@@ -41,7 +41,9 @@
 @section S2_2_changelog Changelog
-v24.08 Public major release
+v24.06 Public minor release
+ - Enable FP16 in multiple Neon™ kernels for multi_isa + v8a
+ - Fix OpenMP® thread scheduling for large machine
 - Optimize CPU activation functions using LUT-based implementation:
 - Tanh function for FP16.