| Field | Value |
|---|---|
| commit | ae72a46e495742863dba44fcf5fdc673c9d2afbc |
| author | Gunes Bayir <gunes.bayir@arm.com> (Sun Jan 29 13:24:24 2023 +0000) |
| committer | Gunes Bayir <gunes.bayir@arm.com> (Wed Feb 01 09:59:30 2023 +0000) |
| tree | 65bab43d0feddaa66b160ac7dc746651dc7c48de |
| parent | ec320d9fc418e2d95a3a38ce87233397535f467d |
Add new operator AddMulAdd for Neon™ backend for Float/Quantized types

This is a fused operator that merges Add + Mul + Add [+ Relu-based-Activation] layers and has an intermediate output after the first Add. It is supported for FP16/FP32/QASYMM8/QASYMM8_SIGNED data types. The subsequent Add and Mul are intended for scaling, and the coefficients have only one dimension (per channel).

The inputs are:
- input1: nD tensor [X, Y, Z, W, ..]
- input2: nD tensor [X, Y, Z, W, ..]
- add_coef: 1D tensor [X]
- mul_coef: 1D tensor [X]

The outputs are:
- out1: nD tensor (intermediate output) [X, Y, Z, W, ..]
- out2: nD tensor (final output) [X, Y, Z, W, ..]

The operation can be summarized as follows:
- out1 <- input1 + input2
- out2 <- Act(out1 * mul_coef + add_coef)

The activation function can be Identity, Relu, Bounded Relu, or Lower/Upper Bounded Relu. The intermediate output can be skipped by providing a nullptr. The reason for providing this operator is to enable fusion for residual-network patterns and to save computation by reducing memory traffic back and forth.

Resolves: COMPMID-5463
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Change-Id: I8ef577aa623b036e9a9f655cc088493fd19a6109
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/9055
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
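The fused computation described above can be sketched in plain Python as a reference for the semantics only. This is not the Compute Library API; the function name `add_mul_add` and the callable `act` are illustrative, and the example uses 2D inputs with the per-channel coefficients broadcast along the innermost (X) dimension.

```python
# Reference sketch of the AddMulAdd semantics (NOT the Compute Library API).
# out1 <- input1 + input2
# out2 <- Act(out1 * mul_coef + add_coef), coefficients applied per channel (X).

def add_mul_add(input1, input2, add_coef, mul_coef, act=lambda x: x):
    # Elementwise first Add; this is the intermediate output out1.
    out1 = [[a + b for a, b in zip(r1, r2)]
            for r1, r2 in zip(input1, input2)]
    # Per-channel Mul + Add, then the optional activation.
    out2 = [[act(v * m + a) for v, m, a in zip(row, mul_coef, add_coef)]
            for row in out1]
    return out1, out2

relu = lambda x: max(x, 0.0)
o1, o2 = add_mul_add([[1.0, 2.0]], [[3.0, -5.0]],
                     add_coef=[0.5, 0.5], mul_coef=[2.0, 2.0], act=relu)
# o1 == [[4.0, -3.0]] (intermediate), o2 == [[8.5, 0.0]] (after Relu)
```

Skipping the intermediate output (the nullptr case in the commit message) simply means the caller discards `out1`.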
⚠ Important From release 22.05: 'master' branch has been replaced with 'main' following our inclusive language update, more information here.
⚠ Important From release 22.08: armv7a with Android build will no longer be tested or maintained.
The Compute Library is a collection of low-level machine learning functions optimized for Arm® Cortex®-A, Arm® Neoverse® and Arm® Mali™ GPU architectures.
The library provides superior performance to other open source alternatives and immediate support for new Arm® technologies e.g. SVE2.
Key Features:
Repository | Link |
---|---|
Release | https://github.com/arm-software/ComputeLibrary |
Development | https://review.mlplatform.org/#/admin/projects/ml/ComputeLibrary |
Note: The documentation includes the reference API, changelogs, build guide, contribution guide, errata, etc.
All the binaries can be downloaded from here or from the tables below.
Platform | Operating System | Release archive (Download) |
---|---|---|
Raspberry Pi 4 | Linux 32bit | |
Raspberry Pi 4 | Linux 64bit | |
Odroid N2 | Linux 64bit | |
HiKey960 | Linux 64bit | |
Architecture | Operating System | Release archive (Download) |
---|---|---|
armv7 | Linux | |
arm64-v8a | Android | |
arm64-v8a | Linux | |
arm64-v8.2-a | Android | |
arm64-v8.2-a | Linux | |
Please refer to the following link for more pre-built binaries:
Pre-built binaries are generated with the following security / good coding practices related flags:
-Wall, -Wextra, -Wformat=2, -Winit-self, -Wstrict-overflow=2, -Wswitch-default, -Woverloaded-virtual, -Wformat-security, -Wctor-dtor-privacy, -Wsign-promo, -Weffc++, -pedantic, -fstack-protector-strong
Arm® CPUs:
Arm® Mali™ GPUs:
x86
⚠ Important Bazel and CMake builds are experimental CPU only builds, please see the documentation for more details.
Contributions to the Compute Library are more than welcome. If you are interested in contributing, please have a look at our how to contribute guidelines.
Before the Compute Library accepts your contribution, you need to certify its origin and give us your permission. To manage this process, we use the Developer Certificate of Origin (DCO) V1.1 (https://developercertificate.org/).
To indicate that you agree to the terms of the DCO, you "sign off" your contribution by adding a line with your name and e-mail address to every git commit message:
Signed-off-by: John Doe <john.doe@example.org>
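Git can append this trailer for you with the `-s`/`--signoff` flag, which builds the line from your configured `user.name` and `user.email`. A minimal sketch in a throwaway repository (paths and the commit message are illustrative):

```shell
# Demonstrate git's --signoff flag in a temporary repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "John Doe"
git config user.email "john.doe@example.org"
echo hello > file.txt
git add file.txt
# -s / --signoff appends "Signed-off-by: Name <email>" to the commit message.
git commit -q -s -m "Add example file"
git log -1 --format=%B
```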
You must use your real name; no pseudonyms or anonymous contributions are accepted.
For technical discussion, the ComputeLibrary project has a public mailing list: acl-dev@lists.linaro.org. The list is open to anyone inside or outside of Arm to self-subscribe. To subscribe, please visit the following website: https://lists.linaro.org/mailman3/lists/acl-dev.lists.linaro.org/
The software is provided under MIT license. Contributions to this project are accepted under the same license.
This project contains code from other projects as listed below. The original license text is included in those source files.
The OpenCL header library is licensed under Apache License, Version 2.0, which is a permissive license compatible with MIT license.
The half library is licensed under MIT license.
The libnpy library is licensed under MIT license.
The stb image library is either licensed under MIT license or is in Public Domain. It is used by this project under the terms of MIT license.
Android is a trademark of Google LLC.
Arm, Cortex, Mali and Neon are registered trademarks or trademarks of Arm Limited (or its subsidiaries) in the US and/or elsewhere.
Bazel is a trademark of Google LLC., registered in the U.S. and other countries.
CMake is a trademark of Kitware, Inc., registered in the U.S. and other countries.
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
Mac and macOS are trademarks of Apple Inc., registered in the U.S. and other countries.
Tizen is a registered trademark of The Linux Foundation.