///
/// Copyright (c) 2017-2020 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/** @mainpage Introduction

@tableofcontents

The Computer Vision and Machine Learning library is a set of functions optimised for both ARM CPUs and GPUs using SIMD technologies.

Several builds of the library are available using various configurations (a sample build command is shown after this list):
 - OS: Linux, Android or bare metal.
 - Architecture: armv7a (32-bit) or arm64-v8a (64-bit)
 - Technology: NEON / OpenCL / GLES_COMPUTE / NEON and OpenCL and GLES_COMPUTE
 - Debug / Asserts / Release: Use a build with asserts enabled to debug your application and enable extra validation. Once you are sure your application works as expected you can switch to a release build of the library for maximum performance.
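
For illustration only: the library is built with SCons, and a build matching the configuration reported by the version string in the Contact / Support section below could be produced with an invocation along these lines (the option values here are an assumption for one possible target, not a recommendation):

    $ scons Werror=1 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=android arch=armv7a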

@section S0_1_contact Contact / Support

Please email developer@arm.com

To facilitate the work of the support team, please provide the build information of the library you are using. To get the version of the library, simply run:

    $ strings android-armv7a-cl-asserts/libarm_compute.so | grep arm_compute_version
    arm_compute_version=v16.12 Build options: {'embed_kernels': '1', 'opencl': '1', 'arch': 'armv7a', 'neon': '0', 'asserts': '1', 'debug': '0', 'os': 'android', 'Werror': '1'} Git hash=f51a545d4ea12a9059fe4e598a092f1fd06dc858
@section S0_2_prebuilt_binaries Pre-built binaries

For each release we provide some pre-built binaries of the library [here](https://github.com/ARM-software/ComputeLibrary/releases)

These binaries have been built using the following toolchains:
 - Linux armv7a: gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
 - Linux arm64-v8a: gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu
 - Android armv7a: clang++ / libc++ NDK r18b
 - Android arm64-v8a: clang++ / libc++ NDK r18b

@warning Make sure to use a compatible toolchain to build your application, or you will get std::bad_alloc errors at runtime.
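
For example, one of the bundled examples could be compiled and linked against the Linux arm64-v8a binaries with the matching toolchain listed above, roughly as follows (paths, example name and flags are illustrative assumptions; adjust them to your setup):

    $ aarch64-linux-gnu-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o neon_convolution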

@section S1_file_organisation File organisation

This archive contains:
 - The arm_compute header and source files
 - The latest Khronos OpenCL 1.2 C headers from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a>
 - The latest Khronos cl2.hpp from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a> (API version 2.1 when this document was written)
 - The latest Khronos OpenGL ES 3.1 C headers from the <a href="https://www.khronos.org/registry/gles/">Khronos OpenGL ES registry</a>
 - The latest Khronos EGL 1.5 C headers from the <a href="https://www.khronos.org/registry/gles/">Khronos EGL registry</a>
 - The sources for a stub version of libOpenCL.so, libGLESv1_CM.so, libGLESv2.so and libEGL.so to help you build your application.
 - An examples folder containing a few examples to compile and link against the library.
 - A @ref utils folder containing headers with some boilerplate code used by the examples.
 - This documentation.

 For detailed information about the file organisation, please refer to the Files -> File List section of this documentation.

@section S2_versions_changelog Release versions and changelog

@subsection S2_1_versions Release versions

All releases are numbered vYY.MM, where YY are the last two digits of the year and MM is the month number.
If there is more than one release in a month then an extra sequential number is appended at the end:

    v17.03 (First release of March 2017)
    v17.03.1 (Second release of March 2017)
    v17.04 (First release of April 2017)

@note We aim to release one major public release with new features per quarter. All releases in between will only contain bug fixes.

@subsection S2_2_changelog Changelog

v20.11 Public major release
 - Added new data type S32 support for:
   - @ref NEArithmeticSubtraction
   - @ref NEArithmeticSubtractionKernel
   - @ref NEPixelWiseMultiplication
   - @ref NEPixelWiseMultiplicationKernel
   - @ref NEElementwiseDivision
   - @ref NEDivisionOperationKernel
 - Interface change
   - Properly support softmax axis to have the same meaning as other major frameworks. That is, axis now defines the dimension
     on which Softmax/Logsoftmax is performed. E.g. for an input of shape 4x5x6 and axis=1, softmax is applied to 4x6=24 vectors of size 5.
     The supported value range of axis is [-rank, rank).
     This change applies to the following functions (a usage sketch is given at the end of this release entry):
     - @ref NESoftmaxLayer
     - @ref NELogSoftmaxLayer
     - @ref CLSoftmaxLayer
     - @ref CLLogSoftmaxLayer
     - @ref GCSoftmaxLayer
 - Deprecated OpenCL kernels / functions:
   - CLLocallyConnectedLayer
   - CLLocallyConnectedMatrixMultiplyKernel
 - Deprecated NEON kernels / functions:
   - NELocallyConnectedLayer
   - NELocallyConnectedMatrixMultiplyKernel
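
The following minimal sketch (illustrative only; shapes, data type and beta are arbitrary, and it assumes the (input, output, beta, axis) overload of configure()) shows the new axis semantics with @ref NESoftmaxLayer:

    #include "arm_compute/core/TensorInfo.h"
    #include "arm_compute/runtime/NEON/functions/NESoftmaxLayer.h"
    #include "arm_compute/runtime/Tensor.h"

    using namespace arm_compute;

    Tensor src;
    Tensor dst;
    // 3D input of shape 4x5x6
    src.allocator()->init(TensorInfo(TensorShape(4U, 5U, 6U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(4U, 5U, 6U), 1, DataType::F32));

    NESoftmaxLayer softmax;
    // beta = 1.0f, axis = 1: softmax runs along dimension 1 (size 5),
    // i.e. over 4x6 = 24 vectors of length 5.
    softmax.configure(&src, &dst, 1.0f, 1);

    src.allocator()->allocate();
    dst.allocator()->allocate();
    // ... fill src with data, then:
    softmax.run();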

v20.08 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Added new data type QASYMM8_SIGNED support for:
   - @ref CLArgMinMaxLayer
   - @ref CLArgMinMaxLayerKernel
 - Added new data type U8 support for:
   - @ref NECropKernel
   - @ref CLCropKernel
 - Added align_corners support for nearest neighbor interpolation in:
   - @ref NEScaleKernel
   - @ref CLScaleKernel
 - New OpenCL kernels / functions:
   - @ref CLMaxUnpoolingLayerKernel
 - New NEON kernels / functions:
   - @ref NEMaxUnpoolingLayerKernel
 - New graph example:
   - graph_yolov3_output_detector
 - GEMMTuner improvements:
   - Added fp16 support
   - Output json files for easier integration
   - Enabled tuning for export_to_cl_image_rhs option for RHS tensors
   - More robust script for running benchmarks
 - Removed padding from:
   - @ref NEPixelWiseMultiplicationKernel
   - @ref NEHeightConcatenateLayerKernel
   - @ref NEThresholdKernel
   - @ref NEBatchConcatenateLayerKernel
   - @ref NETransposeKernel
   - @ref NEBatchNormalizationLayerKernel
   - @ref NEArithmeticSubtractionKernel
   - @ref NEBoundingBoxTransformKernel
   - @ref NELogits1DMaxKernel
   - @ref NELogits1DSoftmaxKernel
   - @ref NEROIPoolingLayerKernel
   - @ref NEROIAlignLayerKernel
   - @ref NEYOLOLayerKernel
   - @ref NEUpsampleLayerKernel
   - @ref NEFloorKernel
   - @ref NEWidthConcatenateLayerKernel
   - @ref NEDepthConcatenateLayerKernel
   - @ref NENormalizationLayerKernel
   - @ref NEL2NormalizeLayerKernel
   - @ref NEFillArrayKernel
   - @ref NEDepthConvertLayerKernel
   - @ref NERangeKernel
   - @ref NEPriorBoxLayer
 - Removed OpenCL kernels / functions:
   - CLGEMMLowpQuantizeDownInt32ToUint8Scale
   - CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFloat
 - Removed NEON kernels / functions:
   - NEGEMMLowpQuantizeDownInt32ToUint8Scale
   - NEGEMMMatrixAccumulateBiasesKernel
 - Deprecated functions / interfaces:
   - Non-descriptor based interfaces for @ref NEThreshold, @ref CLThreshold
   - Non-descriptor based interfaces for @ref NEScale, @ref CLScale and @ref GCScale
   - In @ref NESoftmaxLayer, @ref NELogSoftmaxLayer, @ref CLSoftmaxLayer, @ref CLLogSoftmaxLayer and @ref GCSoftmaxLayer :
     The default "axis" value for @ref CLSoftmaxLayer, @ref CLLogSoftmaxLayer and @ref GCSoftmaxLayer is changed from 1 to 0.
     Only axis 0 is supported.
     The default "axis" value for @ref NESoftmaxLayer and @ref NELogSoftmaxLayer is changed from 1 to 0.
     Only axis 0 is supported.
 - The support for quantized data types has been removed from @ref CLLogSoftmaxLayer due to implementation complexity.
 - Removed the padding requirement for the input (e.g. LHS of GEMM) and output in @ref CLGEMMMatrixMultiplyNativeKernel, @ref CLGEMMMatrixMultiplyReshapedKernel, @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel and @ref CLIm2ColKernel (NHWC only)
   - This change allows @ref CLGEMMConvolutionLayer to be used without extra padding for the input and output.
   - Only the weights/bias of @ref CLGEMMConvolutionLayer could require padding for the computation.
   - Only on Arm Mali Midgard GPUs could @ref CLGEMMConvolutionLayer require padding, since @ref CLGEMMMatrixMultiplyKernel is called and currently requires padding.
 - Added support for exporting the OpenCL buffer object to the OpenCL image object in @ref CLGEMMMatrixMultiplyReshapedKernel and @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.
   - This allows the OpenCL buffer used for the reshaped RHS matrix to be exported to an OpenCL image object.
   - The padding requirement for the OpenCL image object is taken into account in @ref CLGEMMReshapeRHSMatrixKernel.
   - The reshaped RHS matrix stores the weights when GEMM is used to accelerate @ref CLGEMMConvolutionLayer.

Georgios Pinitasfd7780d2020-03-17 11:41:00 +0000185v20.05 Public major release
Georgios Pinitasc7b183a2020-03-06 18:12:09 +0000186 - Various bug fixes.
187 - Various optimisations.
Michele Di Giorgio36a551f2020-04-23 11:55:29 +0100188 - Updated recommended NDK version to r18b.
189 - Updated recommended gcc version to Linaro 6.3.1.
Georgios Pinitasc7b183a2020-03-06 18:12:09 +0000190 - Added Bfloat16 type support
191 - Added Bfloat16 support in:
192 - @ref NEWeightsReshapeKernel
193 - @ref NEConvolutionLayerReshapeWeights
194 - @ref NEIm2ColKernel
195 - @ref NEIm2Col
196 - @ref NEDepthConvertLayerKernel
197 - @ref NEDepthConvertLayer
198 - @ref NEGEMMConvolutionLayer
Georgios Pinitasc7b183a2020-03-06 18:12:09 +0000199 - @ref NEGEMMAssemblyDispatch
Sheri Zhang0f2522b2020-03-25 16:38:19 +0000200 - Added new data type QASYMM8_SIGNED support for:
201 - @ref CLDirectConvolutionLayer
202 - @ref CLDeconvolutionLayer
203 - @ref CLDirectDeconvolutionLayer
204 - @ref CLGEMMDeconvolutionLayer
205 - @ref CLGEMMLowpMatrixMultiplyReshapedKernel
206 - @ref CLGEMMLowpQuantizeDownInt32ScaleKernel
207 - @ref CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel
208 - @ref CLReductionOperation
209 - @ref CLReduceMean
Sheri Zhang359c48e2020-04-30 22:53:39 +0100210 - @ref NEScale
211 - @ref NEScaleKernel
Sheri Zhang0f2522b2020-03-25 16:38:19 +0000212 - @ref NEUpsampleLayer
213 - @ref NECast
214 - @ref NEReductionOperation
215 - @ref NEReduceMean
216 - @ref NEArgMinMaxLayer
217 - @ref NEDeconvolutionLayer
218 - @ref NEGEMMLowpQuantizeDownInt32ScaleKernel
219 - @ref CPPBoxWithNonMaximaSuppressionLimit
220 - @ref CPPDetectionPostProcessLayer
221 - @ref CPPPermuteKernel
222 - @ref CPPPermute
223 - @ref CPPTopKVKernel
224 - @ref CPPTopKV
Sheri Zhang359c48e2020-04-30 22:53:39 +0100225 - @ref CPPUpsample
226 - @ref CPPUpsampleKernel
Sheri Zhang31b49ca2020-04-24 11:15:10 +0100227 - New OpenCL kernels / functions:
228 - @ref CLQLSTMLayer
229 - @ref CLQLSTMLayerNormalizationKernel
230 - New NEON kernels / functions:
231 - @ref NEQLSTMLayer
232 - @ref NEQLSTMLayerNormalizationKernel
233 - Added HARD_SWISH support in:
234 - @ref CLActivationLayerKernel
235 - @ref NEActivationLayerKernel
Sheri Zhang0f2522b2020-03-25 16:38:19 +0000236 - Deprecated OpenCL kernels / functions:
237 - CLGEMMLowpQuantizeDownInt32ToUint8Scale
238 - CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFloat
239 - Deprecated NEON kernels / functions:
240 - NEGEMMLowpQuantizeDownInt32ToUint8Scale
241 - Removed CPP kernels / functions:
242 - CPPFlipWeightsKernel
Manuel Bottini387259a2020-05-21 17:14:36 +0100243 - Removed PoolingLayerInfo constructors without Data Layout.
244 - Removed CLDepthwiseConvolutionLayer3x3
245 - Removed NEDepthwiseConvolutionLayerOptimized
Manuel Bottini075253a2020-05-22 12:57:18 +0100246 - Added support for Winograd 3x3,4x4 on NEON FP16:
247 - @ref NEWinogradConvolutionLayer
248 - @ref NEWinogradLayerTransformInputKernel
249 - @ref NEWinogradLayerTransformOutputKernel
250 - @ref NEWinogradLayerTransformWeightsKernel
251 - Added CLCompileContext
252 - Added NEON GEMM kernel with 2D window support
Georgios Pinitasc7b183a2020-03-06 18:12:09 +0000253
Michele Di Giorgio740872e2020-03-04 15:29:49 +0000254v20.02.1 Maintenance release
255 - Added Android-NN build script.
256
Giuseppe Rossinif04ddbc2020-02-17 17:22:49 +0000257v20.02 Public major release
258 - Various bug fixes.
259 - Various optimisations.
260 - Added new data type QASYMM8_SIGNED support for:
261 - @ref CLDepthwiseConvolutionLayer
Manuel Bottini387259a2020-05-21 17:14:36 +0100262 - CLDepthwiseConvolutionLayer3x3
Giuseppe Rossinif04ddbc2020-02-17 17:22:49 +0000263 - @ref CLGEMMConvolutionLayer
264 - @ref CLGEMMLowpMatrixMultiplyCore
265 - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
266 - @ref CLGEMMLowpMatrixMultiplyNativeKernel
267 - @ref NEActivationLayer
268 - @ref NEComparisonOperationKernel
269 - @ref NEConvolutionLayer
270 - @ref NEDepthwiseConvolutionLayer
Georgios Pinitas7d0adc62020-09-04 15:25:24 +0100271 - NEDepthwiseConvolutionLayer3x3Kernel
Giuseppe Rossinif04ddbc2020-02-17 17:22:49 +0000272 - @ref NEDirectConvolutionLayerOutputStageKernel
273 - @ref NEElementwiseComparison
274 - @ref NEElementwiseMax
275 - @ref NEElementwiseMin
276 - @ref NEElementwiseSquaredDiff
277 - @ref NEFullyConnectedLayer
Michele Di Giorgiof22f6722020-07-03 16:29:24 +0100278 - NEGEMMMatrixVectorMultiplyKernel
Giuseppe Rossinif04ddbc2020-02-17 17:22:49 +0000279 - @ref NEPixelWiseMultiplication
280 - @ref NEPoolingLayer
281 - @ref NEPReluLayer
282 - Added support for QSYMM8_PER_CHANNEL in:
Georgios Pinitas7d0adc62020-09-04 15:25:24 +0100283 - NEDepthwiseConvolutionLayer3x3Kernel
Giuseppe Rossinif04ddbc2020-02-17 17:22:49 +0000284 - Added support for split sizes in:
285 - @ref CLSplit
286 - @ref NESplit
287 - New OpenCL kernels / functions:
288 - @ref CLFill
289 - @ref CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel / @ref CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint
290 - New NEON kernels / functions:
291 - @ref NEFill
292 - @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint
293 - Deprecated NEON functions / interfaces:
Manuel Bottini387259a2020-05-21 17:14:36 +0100294 - CLDepthwiseConvolutionLayer3x3
295 - NEDepthwiseConvolutionLayerOptimized
296 - PoolingLayerInfo constructors without Data Layout.
Giuseppe Rossinif04ddbc2020-02-17 17:22:49 +0000297 - Added support for quantization with multiplier greater than 1 on NEON and CL.
298 - Added support for quantized inputs of type QASYMM8_SIGNED and QASYMM8 to @ref CLQuantizationLayer.
299 - Added the ability to build bootcode for bare metal.
300 - Added support for generating synthetic QASYMM8 graphs.
301 - Added support for F16 datatype in VGG16.
302 - Removed pre-built binaries for GLES.
303
Michele Di Giorgiod374ff22020-01-21 10:03:20 +0000304v19.11.1 Public maintenance release
305 - Fix offset calculation in NEReductionOperationKernel.
306 - Fix data layout in NEScaleKernel for nhwc.
307 - Retain configuration step data layout to avoid side-effects.
308 - Perform sqrt in double domain for L2 pooling.
309 - Fix output shape calculation for Reduce Mean
310 - Restrict cases where optimized NEPadLayer runs.
311
Michele Di Giorgioa046e162019-10-08 09:36:26 +0100312v19.11 Public major release
SiCong Lica1f98c2019-11-28 11:06:11 +0000313 - Various bug fixes.
314 - Various optimisations.
SiCong Li1f7f9882019-11-28 14:59:35 +0000315 - Updated recommended NDK version to r17c.
SiCong Lica1f98c2019-11-28 11:06:11 +0000316 - Deprecated OpenCL kernels / functions:
Michele Di Giorgioa046e162019-10-08 09:36:26 +0100317 - CLDepthwiseConvolutionLayerReshapeWeightsGenericKernel
318 - CLDepthwiseIm2ColKernel
SiCong Lica1f98c2019-11-28 11:06:11 +0000319 - CLDepthwiseSeparableConvolutionLayer
Michele Di Giorgioa046e162019-10-08 09:36:26 +0100320 - CLDepthwiseVectorToTensorKernel
321 - CLDirectConvolutionLayerOutputStageKernel
SiCong Lica1f98c2019-11-28 11:06:11 +0000322 - Deprecated NEON kernels / functions:
Giorgio Arenad93e2632019-10-15 11:09:33 +0100323 - NEDepthwiseWeightsReshapeKernel
324 - NEDepthwiseIm2ColKernel
SiCong Lica1f98c2019-11-28 11:06:11 +0000325 - NEDepthwiseSeparableConvolutionLayer
Giorgio Arenad93e2632019-10-15 11:09:33 +0100326 - NEDepthwiseVectorToTensorKernel
Manuel Bottini05069f02019-09-26 17:18:26 +0100327 - NEDepthwiseConvolutionLayer3x3
SiCong Lica1f98c2019-11-28 11:06:11 +0000328 - New OpenCL kernels / functions:
329 - @ref CLInstanceNormalizationLayerKernel / @ref CLInstanceNormalizationLayer
330 - @ref CLDepthwiseConvolutionLayerNativeKernel to replace the old generic depthwise convolution (see Deprecated
331 OpenCL kernels / functions)
332 - @ref CLLogSoftmaxLayer
333 - New NEON kernels / functions:
334 - @ref NEBoundingBoxTransformKernel / @ref NEBoundingBoxTransform
335 - @ref NEComputeAllAnchorsKernel / @ref NEComputeAllAnchors
336 - @ref NEDetectionPostProcessLayer
337 - @ref NEGenerateProposalsLayer
338 - @ref NEInstanceNormalizationLayerKernel / @ref NEInstanceNormalizationLayer
339 - @ref NELogSoftmaxLayer
340 - @ref NEROIAlignLayerKernel / @ref NEROIAlignLayer
341 - Added QASYMM8 support for:
342 - @ref CLGenerateProposalsLayer
343 - @ref CLROIAlignLayer
344 - @ref CPPBoxWithNonMaximaSuppressionLimit
345 - Added QASYMM16 support for:
346 - @ref CLBoundingBoxTransform
347 - Added FP16 support for:
348 - @ref CLGEMMMatrixMultiplyReshapedKernel
349 - Added new data type QASYMM8_PER_CHANNEL support for:
350 - @ref CLDequantizationLayer
351 - @ref NEDequantizationLayer
352 - Added new data type QSYMM8_PER_CHANNEL support for:
353 - @ref CLConvolutionLayer
354 - @ref NEConvolutionLayer
355 - @ref CLDepthwiseConvolutionLayer
356 - @ref NEDepthwiseConvolutionLayer
357 - Added FP16 mixed-precision support for:
358 - @ref CLGEMMMatrixMultiplyReshapedKernel
359 - @ref CLPoolingLayerKernel
360 - Added FP32 and FP16 ELU activation for:
361 - @ref CLActivationLayer
362 - @ref NEActivationLayer
363 - Added asymmetric padding support for:
364 - @ref CLDirectDeconvolutionLayer
365 - @ref CLGEMMDeconvolutionLayer
366 - @ref NEDeconvolutionLayer
367 - Added SYMMETRIC and REFLECT modes for @ref CLPadLayerKernel / @ref CLPadLayer.
368 - Replaced the calls to @ref NECopyKernel and @ref NEMemsetKernel with @ref NEPadLayer in @ref NEGenerateProposalsLayer.
369 - Replaced the calls to @ref CLCopyKernel and @ref CLMemsetKernel with @ref CLPadLayer in @ref CLGenerateProposalsLayer.
370 - Improved performance for CL Inception V3 - FP16.
371 - Improved accuracy for CL Inception V3 - FP16 by enabling FP32 accumulator (mixed-precision).
372 - Improved NEON performance by enabling fusing batch normalization with convolution and depth-wise convolution layer.
373 - Improved NEON performance for MobileNet-SSD by improving the output detection performance.
374 - Optimized @ref CLPadLayer.
375 - Optimized CL generic depthwise convolution layer by introducing @ref CLDepthwiseConvolutionLayerNativeKernel.
376 - Reduced memory consumption by implementing weights sharing.
Michele Di Giorgioa046e162019-10-08 09:36:26 +0100377
Michele Di Giorgiod374ff22020-01-21 10:03:20 +0000378v19.08.1 Public maintenance release
379 - Fix offset calculation in NEReductionOperationKernel.
380 - Fix data layout in NEScaleKernel for nhwc.
381 - Retain configuration step data layout to avoid side-effects.
382 - Perform sqrt in double domain for L2 pooling.
383 - Fix output shape calculation for Reduce Mean
384 - Fix broadcast CLPixelwiseMultiplication with 5D tensors
385
Georgios Pinitas3d13af82019-06-04 13:04:16 +0100386v19.08 Public major release
387 - Various bug fixes.
388 - Various optimisations.
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100389 - Deprecated NEON functions
390 - NEDepthConcatenateLayer
391 - NEWidthConcatenateLayer
392 - Deprecated OpenCL kernels / functions
393 - CLDepthConcatenateLayer
394 - CLGEMMInterleave4x4Kernel / CLGEMMInterleave4x4
395 - CLGEMMTranspose1xWKernel / CLGEMMTranspose1xW
396 - CLWidthConcatenateLayer
397 - New NEON kernels / functions:
Gian Marco Iodicec5f48ad2019-09-02 09:52:12 +0100398 - @ref NEAbsLayer
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100399 - @ref NECast
Gian Marco Iodicec5f48ad2019-09-02 09:52:12 +0100400 - @ref NEElementwisePower
401 - @ref NELogLayer
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100402 - @ref NELSTMLayerQuantized
Gian Marco Iodicec5f48ad2019-09-02 09:52:12 +0100403 - @ref NENegLayer
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100404 - @ref NEPReluLayer
Gian Marco Iodicec5f48ad2019-09-02 09:52:12 +0100405 - @ref NESinLayer
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100406 - @ref NEBatchConcatenateLayerKernel
407 - @ref NEDepthToSpaceLayerKernel / @ref NEDepthToSpaceLayer
408 - @ref NEDepthwiseConvolutionLayerNativeKernel
409 - @ref NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
410 - @ref NEMeanStdDevNormalizationKernel / @ref NEMeanStdDevNormalizationLayer
411 - @ref NESpaceToDepthLayerKernel / @ref NESpaceToDepthLayer
412 - New OpenCL kernels / functions:
Gian Marco Iodicec5f48ad2019-09-02 09:52:12 +0100413 - @ref CLAbsLayer
414 - @ref CLElementwisePower
415 - @ref CLLogLayer
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100416 - @ref CLLSTMLayerQuantized
Gian Marco Iodicec5f48ad2019-09-02 09:52:12 +0100417 - @ref CLNegLayer
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100418 - @ref CLPReluLayer
Gian Marco Iodicec5f48ad2019-09-02 09:52:12 +0100419 - @ref CLSinLayer
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100420 - @ref CLBatchConcatenateLayerKernel
421 - @ref CLDepthToSpaceLayerKernel / @ref CLDepthToSpaceLayer
422 - @ref CLGEMMLowpMatrixMultiplyNativeKernel
423 - @ref CLGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
424 - @ref CLGEMMMatrixMultiplyNativeKernel
425 - @ref CLMeanStdDevNormalizationKernel / @ref CLMeanStdDevNormalizationLayer
426 - @ref CLSpaceToDepthLayerKernel / @ref CLSpaceToDepthLayer
427 - New examples:
428 - neon_opticalflow
429 - cl_cache
430 - neon_permute
Gian Marco Iodicec5f48ad2019-09-02 09:52:12 +0100431 - Added support for FP16 in @ref NEDeconvolutionLayer
432 - Added support for FP16 in @ref CLDeconvolutionLayer
433 - Added support for REDUCE_MIN and REDUCE_MAX in @ref ReductionOperation
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100434 - Enable the fusion of batch normalization with convolution and depthwise convolution layer for FP32 in the graph API (OpenCL only)
435 - Added support for fusing activation function and broadcast addition with the matrix multiplication for FP32 (OpenCL only)
436 - Re-factored the depthwise convolution layer kernel on NEON for generic cases
437 - Added an optimized depthwise convolution layer kernel for 5x5 filters (NEON only)
438 - Added support to enable OpenCL kernel cache. Added example showing how to load the prebuilt OpenCL kernels from a binary cache file
439 - Altered @ref QuantizationInfo interface to support per-channel quantization.
 - The CLDepthwiseConvolutionLayer3x3 will be included by @ref CLDepthwiseConvolutionLayer to accommodate future optimizations.
 - The NEDepthwiseConvolutionLayerOptimized will be included by @ref NEDepthwiseConvolutionLayer to accommodate future optimizations.
Gian Marco Iodicecc2f54b2019-08-22 10:10:52 +0100442 - Removed inner_border_right and inner_border_top parameters from @ref CLDeconvolutionLayer interface
443 - Removed inner_border_right and inner_border_top parameters from @ref NEDeconvolutionLayer interface
Gian Marco Iodicec5f48ad2019-09-02 09:52:12 +0100444 - Optimized the NEON assembly kernel for GEMMLowp. The new implementation fuses the output stage and quantization with the matrix multiplication kernel
Georgios Pinitas3d13af82019-06-04 13:04:16 +0100445
Michalis Spyroua9c44722019-04-05 17:18:36 +0100446v19.05 Public major release
Michalis Spyrouc6608ac2019-05-16 17:40:23 +0100447 - Various bug fixes.
448 - Various optimisations.
Georgios Pinitasf790fdb2019-04-24 12:41:25 +0100449 - New Neon kernels / functions:
450 - @ref NEBatchToSpaceLayerKernel / @ref NEBatchToSpaceLayer
Michalis Spyrouca82e622019-05-10 16:43:20 +0100451 - @ref NEComplexPixelWiseMultiplicationKernel / @ref NEComplexPixelWiseMultiplication
Georgios Pinitasf790fdb2019-04-24 12:41:25 +0100452 - @ref NECropKernel / @ref NECropResize
Michalis Spyrouca82e622019-05-10 16:43:20 +0100453 - @ref NEDepthwiseConvolutionAssemblyDispatch
454 - @ref NEFFTDigitReverseKernel
455 - @ref NEFFTRadixStageKernel
456 - @ref NEFFTScaleKernel
Georgios Pinitasf790fdb2019-04-24 12:41:25 +0100457 - @ref NEGEMMLowpOffsetContributionOutputStageKernel
458 - @ref NEHeightConcatenateLayerKernel
459 - @ref NESpaceToBatchLayerKernel / @ref NESpaceToBatchLayer
Michalis Spyroud7dd15c2019-05-30 14:53:58 +0100460 - @ref NEFFT1D
461 - @ref NEFFT2D
462 - @ref NEFFTConvolutionLayer
Georgios Pinitasf790fdb2019-04-24 12:41:25 +0100463 - New OpenCL kernels / functions:
Michalis Spyrouca82e622019-05-10 16:43:20 +0100464 - @ref CLComplexPixelWiseMultiplicationKernel / @ref CLComplexPixelWiseMultiplication
Georgios Pinitasf790fdb2019-04-24 12:41:25 +0100465 - @ref CLCropKernel / @ref CLCropResize
Michalis Spyroud7dd15c2019-05-30 14:53:58 +0100466 - @ref CLDeconvolutionReshapeOutputKernel
Georgios Pinitasf790fdb2019-04-24 12:41:25 +0100467 - @ref CLFFTDigitReverseKernel
468 - @ref CLFFTRadixStageKernel
469 - @ref CLFFTScaleKernel
470 - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
471 - @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel
472 - @ref CLHeightConcatenateLayerKernel
473 - @ref CLDirectDeconvolutionLayer
474 - @ref CLFFT1D
475 - @ref CLFFT2D
476 - @ref CLFFTConvolutionLayer
Michalis Spyrouca82e622019-05-10 16:43:20 +0100477 - @ref CLGEMMDeconvolutionLayer
478 - New OpenGLES kernels / functions:
479 - @ref GCConcatenateLayer
Michalis Spyroua9c44722019-04-05 17:18:36 +0100480 - Deprecated functions/interfaces
Georgios Pinitas09f24972019-05-17 18:14:40 +0100481 - GCDepthConcatenateLayer
482 - NEWidthConcatenateLayer
483 - NEDepthConcatenateLayer
484 - CLWidthConcatenateLayer
485 - CLDepthConcatenateLayer
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +0100486 - CLGEMMInterleave4x4
487 - CLGEMMTranspose1xW
 - Support different quantization info in CLConcatLayer.
 - Add checks for cases where different input/output quantization info is not supported.
 - Tensors can have different quantization information.
 - Add FP16 support checks.
 - Fix output quantization of CLDepthwiseConvolutionLayer3x3 when activation is fused.
493 - New graph examples:
494 - graph_convolution
495 - graph_fully_connected
496 - graph_depthwise_convolution
497 - Deepspeech v0.4.1
498 - Add support for QASYMM8 in NEArithmeticSubtractionKernel.
499 - Add support for QASYMM8 in NEPixelWiseMultiplicationKernel.
 - Add support for QASYMM8 in NEDeconvolution.
501 - Add support for DequantizationLayer for NEON/CL.
502 - Add support for dilation in CLDepthwiseConvolution.
503 - Fuse offset contribution with the output stage when we use NEGEMMLowpMatrixMultiplyCore.
504 - Optimize CLDeconvolution.
505 - Add StackLayer to the graph API.
506 - Add support for "reflect" padding mode in NEPad.
507 - Winograd 7x7 NHWC on OpenCL.
508 - Rework CL ML layers to run exclusively on CL.
509 - Support different quantization info in PoolingLayer.
510 - Implement and test import memory interfaces.
511 - Added new tests and removed old ones.
512 - Various clang-tidy fixes.
Michalis Spyroua9c44722019-04-05 17:18:36 +0100513
giuros01a69a88b2019-01-31 16:29:19 +0000514v19.02 Public major release
Isabella Gottardi62538972019-02-12 19:52:44 +0000515 - Various bug fixes.
516 - Various optimisations.
517 - New Neon kernels / functions:
518 - @ref NETileKernel / @ref NETile
519 - @ref NEFuseBatchNormalizationKernel / @ref NEFuseBatchNormalization
520 - @ref NEElementwiseOperationKernel
521 - @ref NEElementwiseMax
522 - @ref NEElementwiseMin
523 - @ref NEElementwiseSquaredDiff
524 - @ref NESelectKernel / @ref NESelect
525 - @ref NESplit
526 - @ref NESlice
527 - @ref NEUnstack
528 - @ref NEStridedSliceKernel / @ref NEStridedSlice
529 - @ref NEElementwiseUnaryKernel
530 - @ref NERsqrtLayer
531 - @ref NEExpLayer
532 - @ref NEReverseKernel / @ref NEReverse
533 - @ref NEArgMinMaxLayer
534 - @ref NEStackLayerKernel / @ref NEStackLayer
535 - @ref NERangeKernel / @ref NERange
536 - @ref NEPadLayer
537 - @ref NEMemsetKernel
538 - @ref NEGatherKernel / @ref NEGather
539 - @ref NEElementwiseComparison
540 - @ref NEElementwiseComparisonStatic
541 - @ref NEComparisonOperationKernel
542 - @ref NEElementwiseDivision
543 - New OpenCL kernels / functions:
544 - @ref CLSelectKernel / @ref CLSelect
545 - @ref CLTileKernel / @ref CLTile
546 - @ref CLComparisonKernel / @ref CLComparison
547 - @ref CLArgMinMaxLayer
548 - @ref CLElementwiseMax
549 - @ref CLElementwiseMin
550 - @ref CLElementwiseSquaredDiff
551 - @ref CLStackLayerKernel / @ref CLStackLayer
552 - @ref CLReverse / @ref CLReverseKernel
553 - @ref CLRsqrtLayer
554 - @ref CLExpLayer
555 - @ref CLElementWiseUnaryLayerKernel
556 - @ref CLGEMMReshapeLHSMatrixKernel
557 - @ref CLGEMMReshapeRHSMatrixKernel
558 - @ref CLGEMMMatrixMultiplyReshapedKernel
559 - @ref CLRangeKernel / @ref CLRange
560 - @ref CLUnstack
561 - @ref CLGatherKernel / @ref CLGather
562 - @ref CLGEMMLowpMatrixMultiplyReshapedKernel
563 - New CPP kernels / functions:
564 - @ref CPPDetectionOutputLayer
565 - @ref CPPTopKV / @ref CPPTopKVKernel
Isabella Gottardi62538972019-02-12 19:52:44 +0000566 - Added new examples:
567 - graph_ssd_mobilenet.cpp
568 - graph_mobilenet_v2.cpp
569 - graph_resnet12.cpp
570 - graph_srcnn955.cpp
571 - graph_vgg_vdsr.cpp
572 - graph_inception_resnet_v1.cpp
573 - Add 4D tensors support to
574 - @ref NESoftmaxLayer
575 - Fused activation in @ref CLWinogradConvolutionLayer
 - Extended @ref NEPermute to support more cases
577 - Added NEON/SVE GEMM Hybrid kernels
578 - Added u8 and s8 hybrid assembly kernels
579 - Introduced GEMM strategy name in NEGEMMAssemblyWrapper
580 - Improved @ref CLTuner
581 - Fused the bias addition within @ref CLGEMM
582 - Added support for QASYMM8 LOGISTIC activation in @ref NEActivationLayer
583 - Added NHWC data layout support to:
584 - @ref NEScale for F16
585 - @ref CLNormalizationLayer IN_MAP_2D for FP32/FP16
586 - @ref NEL2NormalizeLayer for FP32/FP16
587 - @ref NENormalizationLayer IN_MAP_2D for FP32/FP16
588 - @ref CLROIAlignLayer
Manuel Bottini5209be52019-02-13 16:34:56 +0000589 - @ref CLGenerateProposalsLayer
Isabella Gottardi62538972019-02-12 19:52:44 +0000590 - Added QASYMM8 support to the following kernels:
591 - @ref NEArithmeticAdditionKernel
592 - @ref NEScale
593 - Added new tests and improved validation and benchmarking suites.
giuros01a69a88b2019-01-31 16:29:19 +0000594 - Deprecated functions/interfaces
595 - Usage of inner_border_right and inner_border_top has been deprecated in @ref CLDeconvolutionLayer and @ref NEDeconvolutionLayer
596
Isabella Gottardi8773d7c2018-11-20 09:56:46 +0000597v18.11 Public major release
598 - Various bug fixes.
599 - Various optimisations.
600 - New Neon kernels / functions:
601 - @ref NEChannelShuffleLayer / @ref NEChannelShuffleLayerKernel
602 - @ref NEReduceMean
603 - @ref NEReorgLayer / @ref NEReorgLayerKernel
604 - @ref NEPriorBoxLayer / @ref NEPriorBoxLayerKernel
605 - @ref NEUpsampleLayer / @ref NEUpsampleLayerKernel
606 - @ref NEYOLOLayer / @ref NEYOLOLayerKernel
607 - New OpenCL kernels / functions:
608 - @ref CLBatchToSpaceLayer / @ref CLBatchToSpaceLayerKernel
609 - @ref CLBoundingBoxTransform / @ref CLBoundingBoxTransformKernel
Manuel Bottini5209be52019-02-13 16:34:56 +0000610 - @ref CLComputeAllAnchorsKernel
611 - @ref CLGenerateProposalsLayer
Isabella Gottardi8773d7c2018-11-20 09:56:46 +0000612 - @ref CLNormalizePlanarYUVLayer / @ref CLNormalizePlanarYUVLayerKernel
613 - @ref CLReorgLayer / @ref CLReorgLayerKernel
614 - @ref CLSpaceToBatchLayer / @ref CLSpaceToBatchLayerKernel
615 - @ref CLPadLayer
616 - @ref CLReduceMean
617 - @ref CLPriorBoxLayer / @ref CLPriorBoxLayerKernel
618 - @ref CLROIAlignLayer / @ref CLROIAlignLayerKernel
619 - @ref CLSlice
620 - @ref CLSplit
621 - @ref CLStridedSlice / @ref CLStridedSliceKernel
622 - @ref CLUpsampleLayer / @ref CLUpsampleLayerKernel
623 - @ref CLYOLOLayer / @ref CLYOLOLayerKernel
624 - New CPP kernels / functions:
625 - @ref CPPBoxWithNonMaximaSuppressionLimit / @ref CPPBoxWithNonMaximaSuppressionLimitKernel
626 - Added the validate method in:
627 - @ref NEDepthConvertLayer
628 - @ref NEFloor / @ref CLFloor
629 - @ref NEGEMMMatrixAdditionKernel
630 - @ref NEReshapeLayer / @ref CLReshapeLayer
631 - @ref CLScale
632 - Added new examples:
633 - graph_shufflenet.cpp
634 - graph_yolov3.cpp
 - Added documentation on how to add a new function or kernel.
 - Improved doxygen documentation by adding a list of the existing functions.
637 - Add 4D tensors support to
Georgios Pinitas09f24972019-05-17 18:14:40 +0100638 - CLWidthConcatenateLayer
Isabella Gottardi8773d7c2018-11-20 09:56:46 +0000639 - @ref CLFlattenLayer
640 - @ref CLSoftmaxLayer
641 - Add dot product support for @ref CLDepthwiseConvolutionLayer3x3NHWCKernel non-unit stride
642 - Add SVE support
643 - Fused batch normalization into convolution layer weights in @ref CLFuseBatchNormalization
644 - Fuses activation in @ref CLDepthwiseConvolutionLayer3x3NCHWKernel, @ref CLDepthwiseConvolutionLayer3x3NHWCKernel and @ref NEGEMMConvolutionLayer
645 - Added NHWC data layout support to:
646 - @ref CLChannelShuffleLayer
647 - @ref CLDeconvolutionLayer
648 - @ref CLL2NormalizeLayer
649 - Added QASYMM8 support to the following kernels:
650 - @ref CLScaleKernel
Georgios Pinitas7d0adc62020-09-04 15:25:24 +0100651 - NEDepthwiseConvolutionLayer3x3Kernel
Isabella Gottardi8773d7c2018-11-20 09:56:46 +0000652 - @ref CLPixelWiseMultiplicationKernel
653 - Added FP16 support to the following kernels:
654 - @ref CLDepthwiseConvolutionLayer3x3NHWCKernel
Georgios Pinitas7d0adc62020-09-04 15:25:24 +0100655 - NEDepthwiseConvolutionLayer3x3Kernel
Isabella Gottardi8773d7c2018-11-20 09:56:46 +0000656 - @ref CLNormalizePlanarYUVLayerKernel
657 - @ref CLWinogradConvolutionLayer (5x5 kernel)
658 - More tests added to both validation and benchmarking suites.
659
Anthony Barbierd51ea0a2018-08-07 17:48:03 +0100660v18.08 Public major release
661 - Various bug fixes.
Michele Di Giorgio02baf012018-08-20 18:10:38 +0100662 - Various optimisations.
Anthony Barbierd51ea0a2018-08-07 17:48:03 +0100663 - Updated recommended NDK version to r17b.
Michele Di Giorgio02baf012018-08-20 18:10:38 +0100664 - Removed support for QS8/QS16 data types.
665 - Added support for grouped convolution in @ref CLConvolutionLayer.
666 - Added NHWC data layout support to:
Georgios Pinitas09f24972019-05-17 18:14:40 +0100667 - NEDepthConcatenateLayer / CLDepthConcatenateLayer
Michele Di Giorgio02baf012018-08-20 18:10:38 +0100668 - @ref NEWinogradConvolutionLayer / @ref CLWinogradConvolutionLayer
669 - @ref CLDepthwiseConvolutionLayer
670 - @ref CLDirectConvolutionLayer
671 - @ref CLConvolutionLayer
672 - @ref CLScale
673 - @ref CLIm2ColKernel
674 - New Neon kernels / functions:
675 - @ref NERNNLayer
676 - New OpenCL kernels / functions:
677 - @ref CLArithmeticDivision
678 - Introduced prepare() stage support in the graph API for GLES.
679 - Added support for memory reusage when trying to allocate smaller CLTensors.
680 - Enabled NHWC execution on graph examples.
681 - Added JPEG accessor for validation purposes.
682 - Added validate methods to some kernels / functions.
Anthony Barbierd51ea0a2018-08-07 17:48:03 +0100683
684v18.05 Public major release
Pablo Tellob5cc95b2018-05-15 11:49:33 +0100685 - Various bug fixes.
686 - Various optimisations.
Pablo Telloeb82fd22018-02-23 13:43:50 +0000687 - Major redesign in the interface for the neon kernels implemented in assembly.
688 - Removed arm_compute::NEGEMMLowpAArch64A53Kernel / arm_compute::NEGEMMLowpAArch64Kernel / arm_compute::NEGEMMLowpAArch64V8P4Kernel / arm_compute::NEGEMMInterleavedBlockedKernel / arm_compute::NEGEMMLowpAssemblyMatrixMultiplyCore / arm_compute::NEHGEMMAArch64FP16Kernel
689 - Added NEGEMMAssemblyWrapper and AssemblyKernelGlue which are used to execute assembly kernels in neon functions.
690 - Minor changes to the CPUInfo type to make it compatible with the new assembly gemm interface.
Pablo Tellob5cc95b2018-05-15 11:49:33 +0100691 - Moved neon assembly kernels to the folder src/core/NEON/kernels/arm_gemm.
692 - Improved doxygen documentation.
693 - Improved memory management for layer's transitions.
694 - Added support for NHWC data layout in tensors.
695 - Added NHWC data layout support to:
696 - @ref NEGEMMConvolutionLayer
697 - @ref NEDirectConvolutionLayer
698 - @ref NEPoolingLayer / @ref CLPoolingLayer
699 - @ref NEBatchNormalizationLayer / @ref CLBatchNormalizationLayer
700 - @ref NEDepthwiseConvolutionLayer
701 - @ref NEScale
702 - @ref NEIm2Col
703 - Added support for dilated convolutions in @ref NEConvolutionLayer and @ref CLConvolutionLayer.
704 - New OpenCL kernels / functions:
705 - @ref CLChannelShuffleLayer / @ref CLChannelShuffleLayerKernel
706 - @ref CLConvertFullyConnectedWeightsKernel / @ref CLConvertFullyConnectedWeights
707 - @ref CLCopy / @ref CLCopyKernel
Anthony Barbier38e7f1f2018-05-21 13:37:47 +0100708 - @ref CLLSTMLayer
Pablo Tellob5cc95b2018-05-15 11:49:33 +0100709 - @ref CLRNNLayer
Georgios Pinitas09f24972019-05-17 18:14:40 +0100710 - CLWidthConcatenateLayer / @ref CLWidthConcatenateLayerKernel
Pablo Tellob5cc95b2018-05-15 11:49:33 +0100711 - @ref CLWinogradFilterTransformKernel / @ref CLWinogradInputTransformKernel / @ref CLWinogradConvolutionLayer
712 - @ref CLWinogradInputTransformKernel / @ref CLWinogradInputTransform
713 - New Neon kernels / functions:
Pablo Tellob5cc95b2018-05-15 11:49:33 +0100714 - @ref NEConvertFullyConnectedWeightsKernel / @ref NEConvertFullyConnectedWeights.
715 - Created the validate method in @ref CLDepthwiseConvolutionLayer.
716 - Beta and gamma are no longer mandatory arguments in @ref NEBatchNormalizationLayer and @ref CLBatchNormalizationLayer.
717 - Added depth multiplier support in @ref NEDepthwiseConvolutionLayer and @ref CLDepthwiseConvolutionLayer.
718 - Added broadcast multiply support in @ref NEPixelWiseMultiplication / @ref NEPixelWiseMultiplicationKernel.
719 - Port mobilenet example to NHWC data layout.
720 - Enabled Winograd method in @ref CLConvolutionLayer.
721 - Renamed NEWinogradLayer to @ref NEWinogradConvolutionLayer.
722 - Updated @ref NEWinogradConvolutionLayer to use highly optimised assembly kernels in src/core/NEON/kernels/arm_gemm.
723 - Added memory manager support in GLES functions.
724 - Major refactoring of the graph API.
725 - Added GLES backend in the graph API.
726 - Added support for the memory manager in the graph API.
727 - Enabled Winograd Convolution method in the graph API.
728 - Added support for grouped convolutions in the graph API.
729 - Replaced NEDeconvolutionLayerUpsampleKernel with @ref NEScaleKernel in @ref NEDeconvolutionLayer.
730 - Added fast maths flag in @ref CLConvolutionLayer.
731 - Added new tests and benchmarks in validation and benchmark frameworks
 - Merge Activation layer with Convolution Layer (NEON, CL, GLES)
 - Added support for OpenCL 2.0 SVM
 - Added support for importing memory into OpenCL tensors.
735 - Added the prepare() method to perform any one off pre-processing before running the function.
736 - Added new examples:
737 - graph_inception_v4.cpp
Anthony Barbier38e7f1f2018-05-21 13:37:47 +0100738 - graph_resnext50.cpp
Pablo Tellob5cc95b2018-05-15 11:49:33 +0100739 - Added memory measurement instrument for CL.
Pablo Telloeb82fd22018-02-23 13:43:50 +0000740
Anthony Barbier577fbdf2018-03-01 15:17:54 +0000741v18.03 Public maintenance release
742 - Various bug fixes.
Anthony Barbier3762e742018-03-02 11:49:33 +0000743 - Fixed bug in @ref NEActivationLayer
744 - Fix in @ref CLTuner when using batches.
Anthony Barbier577fbdf2018-03-01 15:17:54 +0000745 - Updated recommended NDK version to r16b (And fixed warnings).
746 - Fixed bug in validation code.
747 - Added Inception v4 graph example.
Georgios Pinitas9fb11592018-04-26 20:34:58 +0100748 - Renamed NEWinogradLayer.cpp to @ref NEWinogradConvolutionLayer
Anthony Barbier577fbdf2018-03-01 15:17:54 +0000749
Anthony Barbier2d0ce772018-02-21 15:35:36 +0000750v18.02 Public major release
751 - Various NEON / OpenCL / GLES optimisations.
752 - Various bug fixes.
 - Changed default number of threads on big.LITTLE systems.
754 - Refactored examples and added:
755 - graph_mobilenet_qassym8
756 - graph_resnet
757 - graph_squeezenet_v1_1
Anthony Barbier3762e742018-03-02 11:49:33 +0000758 - Renamed @ref CLConvolutionLayer into @ref CLGEMMConvolutionLayer and created a new @ref CLConvolutionLayer to select the fastest convolution method.
759 - Renamed @ref NEConvolutionLayer into @ref NEGEMMConvolutionLayer and created a new @ref NEConvolutionLayer to select the fastest convolution method.
Anthony Barbier2d0ce772018-02-21 15:35:36 +0000760 - Added in place support to:
Anthony Barbier3762e742018-03-02 11:49:33 +0000761 - @ref CLActivationLayer
762 - @ref CLBatchNormalizationLayer
Anthony Barbier2d0ce772018-02-21 15:35:36 +0000763 - Added QASYMM8 support to:
Anthony Barbier3762e742018-03-02 11:49:33 +0000764 - @ref CLActivationLayer
765 - @ref CLDepthwiseConvolutionLayer
766 - @ref NEDepthwiseConvolutionLayer
767 - @ref NESoftmaxLayer
Anthony Barbier2d0ce772018-02-21 15:35:36 +0000768 - Added FP16 support to:
Manuel Bottini387259a2020-05-21 17:14:36 +0100769 - CLDepthwiseConvolutionLayer3x3
Anthony Barbier3762e742018-03-02 11:49:33 +0000770 - @ref CLDepthwiseConvolutionLayer
771 - Added broadcasting support to @ref NEArithmeticAddition / @ref CLArithmeticAddition / @ref CLPixelWiseMultiplication
772 - Added fused batched normalization and activation to @ref CLBatchNormalizationLayer and @ref NEBatchNormalizationLayer
773 - Added support for non-square pooling to @ref NEPoolingLayer and @ref CLPoolingLayer
Anthony Barbier2d0ce772018-02-21 15:35:36 +0000774 - New OpenCL kernels / functions:
Michele Di Giorgioa046e162019-10-08 09:36:26 +0100775 - CLDirectConvolutionLayerOutputStageKernel
Pablo Tellof6c572c2018-02-14 12:47:30 +0000776 - New NEON kernels / functions
Anthony Barbier2d0ce772018-02-21 15:35:36 +0000777 - Added name() method to all kernels.
778 - Added support for Winograd 5x5.
Anthony Barbier3762e742018-03-02 11:49:33 +0000779 - @ref NEPermuteKernel / @ref NEPermute
Georgios Pinitas9fb11592018-04-26 20:34:58 +0100780 - @ref NEWinogradLayerTransformInputKernel / NEWinogradLayer
781 - @ref NEWinogradLayerTransformOutputKernel / NEWinogradLayer
782 - @ref NEWinogradLayerTransformWeightsKernel / NEWinogradLayer
Anthony Barbiere1553372018-07-16 18:53:52 +0100783 - Renamed NEWinogradLayerKernel into NEWinogradLayerBatchedGEMMKernel
Anthony Barbier2d0ce772018-02-21 15:35:36 +0000784 - New GLES kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000785 - @ref GCTensorShiftKernel / @ref GCTensorShift
Pablo Tellof6c572c2018-02-14 12:47:30 +0000786
Anthony Barbier64c95a02018-01-22 18:48:55 +0000787v18.01 Public maintenance release
788 - Various bug fixes
789 - Added some of the missing validate() methods
Anthony Barbier3762e742018-03-02 11:49:33 +0000790 - Added @ref CLDeconvolutionLayerUpsampleKernel / @ref CLDeconvolutionLayer @ref CLDeconvolutionLayerUpsample
791 - Added @ref CLPermuteKernel / @ref CLPermute
Anthony Barbier64c95a02018-01-22 18:48:55 +0000792 - Added method to clean the programs cache in the CL Kernel library.
Anthony Barbier3762e742018-03-02 11:49:33 +0000793 - Added @ref GCArithmeticAdditionKernel / @ref GCArithmeticAddition
794 - Added @ref GCDepthwiseConvolutionLayer3x3Kernel / @ref GCDepthwiseConvolutionLayer3x3
795 - Added @ref GCNormalizePlanarYUVLayerKernel / @ref GCNormalizePlanarYUVLayer
796 - Added @ref GCScaleKernel / @ref GCScale
797 - Added @ref GCWeightsReshapeKernel / @ref GCConvolutionLayer
Anthony Barbier64c95a02018-01-22 18:48:55 +0000798 - Added FP16 support to the following GLES compute kernels:
Anthony Barbier3762e742018-03-02 11:49:33 +0000799 - @ref GCCol2ImKernel
800 - @ref GCGEMMInterleave4x4Kernel
801 - @ref GCGEMMTranspose1xWKernel
802 - @ref GCIm2ColKernel
803 - Refactored NEON Winograd (NEWinogradLayerKernel)
804 - Added @ref NEDirectConvolutionLayerOutputStageKernel
Anthony Barbier64c95a02018-01-22 18:48:55 +0000805 - Added QASYMM8 support to the following NEON kernels:
Georgios Pinitas7d0adc62020-09-04 15:25:24 +0100806 - NEDepthwiseConvolutionLayer3x3Kernel
Anthony Barbier3762e742018-03-02 11:49:33 +0000807 - @ref NEFillBorderKernel
808 - @ref NEPoolingLayerKernel
Anthony Barbier64c95a02018-01-22 18:48:55 +0000809 - Added new examples:
810 - graph_cl_mobilenet_qasymm8.cpp
811 - graph_inception_v3.cpp
812 - gc_dc.cpp
813 - More tests added to both validation and benchmarking suites.
814
Gian Marcoff850932017-12-11 12:37:17 +0000815v17.12 Public major release
816 - Most machine learning functions on OpenCL support the new data type QASYMM8
817 - Introduced logging interface
818 - Introduced opencl timer
819 - Reworked GEMMLowp interface
820 - Added new NEON assembly kernels for GEMMLowp, SGEMM and HGEMM
821 - Added validation method for most Machine Learning kernels / functions
822 - Added new graph examples such as googlenet, mobilenet, squeezenet, vgg16 and vgg19
823 - Added sgemm example for OpenCL
824 - Added absolute difference example for GLES compute
825 - Added new tests and benchmarks in validation and benchmark frameworks
826 - Added new kernels / functions for GLES compute
827
828 - New OpenGL ES kernels / functions
Anthony Barbier3762e742018-03-02 11:49:33 +0000829 - @ref GCAbsoluteDifferenceKernel / @ref GCAbsoluteDifference
830 - @ref GCActivationLayerKernel / @ref GCActivationLayer
831 - @ref GCBatchNormalizationLayerKernel / @ref GCBatchNormalizationLayer
832 - @ref GCCol2ImKernel
Georgios Pinitas09f24972019-05-17 18:14:40 +0100833 - @ref GCDepthConcatenateLayerKernel / GCDepthConcatenateLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000834 - @ref GCDirectConvolutionLayerKernel / @ref GCDirectConvolutionLayer
835 - @ref GCDropoutLayerKernel / @ref GCDropoutLayer
836 - @ref GCFillBorderKernel / @ref GCFillBorder
837 - @ref GCGEMMInterleave4x4Kernel / @ref GCGEMMInterleave4x4
838 - @ref GCGEMMMatrixAccumulateBiasesKernel / @ref GCGEMMMatrixAdditionKernel / @ref GCGEMMMatrixMultiplyKernel / @ref GCGEMM
839 - @ref GCGEMMTranspose1xWKernel / @ref GCGEMMTranspose1xW
840 - @ref GCIm2ColKernel
841 - @ref GCNormalizationLayerKernel / @ref GCNormalizationLayer
842 - @ref GCPixelWiseMultiplicationKernel / @ref GCPixelWiseMultiplication
843 - @ref GCPoolingLayerKernel / @ref GCPoolingLayer
844 - @ref GCLogits1DMaxKernel / @ref GCLogits1DShiftExpSumKernel / @ref GCLogits1DNormKernel / @ref GCSoftmaxLayer
845 - @ref GCTransposeKernel / @ref GCTranspose
Gian Marcoff850932017-12-11 12:37:17 +0000846
847 - New NEON kernels / functions
Pablo Telloeb82fd22018-02-23 13:43:50 +0000848 - arm_compute::NEGEMMLowpAArch64A53Kernel / arm_compute::NEGEMMLowpAArch64Kernel / arm_compute::NEGEMMLowpAArch64V8P4Kernel / arm_compute::NEGEMMInterleavedBlockedKernel / arm_compute::NEGEMMLowpAssemblyMatrixMultiplyCore
849 - arm_compute::NEHGEMMAArch64FP16Kernel
Georgios Pinitas7d0adc62020-09-04 15:25:24 +0100850 - NEDepthwiseConvolutionLayer3x3Kernel / NEDepthwiseIm2ColKernel / NEGEMMMatrixVectorMultiplyKernel / NEDepthwiseVectorToTensorKernel / @ref NEDepthwiseConvolutionLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000851 - @ref NEGEMMLowpOffsetContributionKernel / @ref NEGEMMLowpMatrixAReductionKernel / @ref NEGEMMLowpMatrixBReductionKernel / @ref NEGEMMLowpMatrixMultiplyCore
852 - @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
Georgios Pinitas9fb11592018-04-26 20:34:58 +0100853 - NEWinogradLayer / NEWinogradLayerKernel
Gian Marcoff850932017-12-11 12:37:17 +0000854
855 - New OpenCL kernels / functions
Anthony Barbier3762e742018-03-02 11:49:33 +0000856 - @ref CLGEMMLowpOffsetContributionKernel / @ref CLGEMMLowpMatrixAReductionKernel / @ref CLGEMMLowpMatrixBReductionKernel / @ref CLGEMMLowpMatrixMultiplyCore
857 - @ref CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel / @ref CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
Gian Marcoff850932017-12-11 12:37:17 +0000858
859 - New graph nodes for NEON and OpenCL
Georgios Pinitasd9eb2752018-04-03 13:44:29 +0100860 - graph::BranchLayer
861 - graph::DepthConvertLayer
862 - graph::DepthwiseConvolutionLayer
863 - graph::DequantizationLayer
864 - graph::FlattenLayer
865 - graph::QuantizationLayer
866 - graph::ReshapeLayer
Gian Marcoff850932017-12-11 12:37:17 +0000867
Anthony Barbier3c5b4ff2017-10-12 13:20:52 +0100868v17.10 Public maintenance release
869 - Bug fixes:
870 - Check the maximum local workgroup size supported by OpenCL devices
871 - Minor documentation updates (Fixed instructions to build the examples)
Anthony Barbier3762e742018-03-02 11:49:33 +0000872 - Introduced a graph::GraphContext
Anthony Barbier3c5b4ff2017-10-12 13:20:52 +0100873 - Added a few new Graph nodes, support for branches and grouping.
874 - Automatically enable cl_printf in debug builds
875 - Fixed bare metal builds for armv7a
876 - Added AlexNet and cartoon effect examples
 - Fixed library builds: libraries are no longer built as supersets of each other. (This means applications using the Runtime part of the library now need to link against both libarm_compute_core and libarm_compute.)
878
Anthony Barbier6a5627a2017-09-26 14:42:02 +0100879v17.09 Public major release
880 - Experimental Graph support: initial implementation of a simple stream API to easily chain machine learning layers.
Anthony Barbier3762e742018-03-02 11:49:33 +0000881 - Memory Manager (@ref BlobLifetimeManager, @ref BlobMemoryPool, @ref ILifetimeManager, @ref IMemoryGroup, @ref IMemoryManager, @ref IMemoryPool, @ref IPoolManager, @ref MemoryManagerOnDemand, @ref PoolManager)
Anthony Barbier6a5627a2017-09-26 14:42:02 +0100882 - New validation and benchmark frameworks (Boost and Google frameworks replaced by homemade framework).
883 - Most machine learning functions support both fixed point 8 and 16 bit (QS8, QS16) for both NEON and OpenCL.
884 - New NEON kernels / functions:
Pablo Telloeb82fd22018-02-23 13:43:50 +0000885 - arm_compute::NEGEMMAssemblyBaseKernel arm_compute::NEGEMMAArch64Kernel
Anthony Barbier3762e742018-03-02 11:49:33 +0000886 - @ref NEDequantizationLayerKernel / @ref NEDequantizationLayer
887 - @ref NEFloorKernel / @ref NEFloor
888 - @ref NEL2NormalizeLayerKernel / @ref NEL2NormalizeLayer
889 - @ref NEQuantizationLayerKernel @ref NEMinMaxLayerKernel / @ref NEQuantizationLayer
890 - @ref NEROIPoolingLayerKernel / @ref NEROIPoolingLayer
891 - @ref NEReductionOperationKernel / @ref NEReductionOperation
892 - @ref NEReshapeLayerKernel / @ref NEReshapeLayer
Anthony Barbier6a5627a2017-09-26 14:42:02 +0100893
894 - New OpenCL kernels / functions:
Manuel Bottini387259a2020-05-21 17:14:36 +0100895 - @ref CLDepthwiseConvolutionLayer3x3NCHWKernel @ref CLDepthwiseConvolutionLayer3x3NHWCKernel CLDepthwiseIm2ColKernel CLDepthwiseVectorToTensorKernel CLDepthwiseWeightsReshapeKernel / CLDepthwiseConvolutionLayer3x3 @ref CLDepthwiseConvolutionLayer CLDepthwiseSeparableConvolutionLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000896 - @ref CLDequantizationLayerKernel / @ref CLDequantizationLayer
897 - @ref CLDirectConvolutionLayerKernel / @ref CLDirectConvolutionLayer
898 - @ref CLFlattenLayer
899 - @ref CLFloorKernel / @ref CLFloor
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +0100900 - CLGEMMTranspose1xW
Anthony Barbier3762e742018-03-02 11:49:33 +0000901 - @ref CLGEMMMatrixVectorMultiplyKernel
902 - @ref CLL2NormalizeLayerKernel / @ref CLL2NormalizeLayer
903 - @ref CLQuantizationLayerKernel @ref CLMinMaxLayerKernel / @ref CLQuantizationLayer
904 - @ref CLROIPoolingLayerKernel / @ref CLROIPoolingLayer
905 - @ref CLReductionOperationKernel / @ref CLReductionOperation
906 - @ref CLReshapeLayerKernel / @ref CLReshapeLayer
Anthony Barbier6a5627a2017-09-26 14:42:02 +0100907
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100908v17.06 Public major release
909 - Various bug fixes
910 - Added support for fixed point 8 bit (QS8) to the various NEON machine learning kernels.
911 - Added unit tests and benchmarks (AlexNet, LeNet)
912 - Added support for sub tensors.
913 - Added infrastructure to provide GPU specific optimisation for some OpenCL kernels.
Anthony Barbier3762e742018-03-02 11:49:33 +0000914 - Added @ref OMPScheduler (OpenMP) scheduler for NEON
915 - Added @ref SingleThreadScheduler scheduler for NEON (For bare metal)
 - Users can specify their own scheduler by implementing the @ref IScheduler interface.
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100917 - New OpenCL kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000918 - @ref CLBatchNormalizationLayerKernel / @ref CLBatchNormalizationLayer
Georgios Pinitas09f24972019-05-17 18:14:40 +0100919 - @ref CLDepthConcatenateLayerKernel / CLDepthConcatenateLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000920 - @ref CLHOGOrientationBinningKernel @ref CLHOGBlockNormalizationKernel, @ref CLHOGDetectorKernel / @ref CLHOGDescriptor @ref CLHOGDetector @ref CLHOGGradient @ref CLHOGMultiDetection
921 - @ref CLLocallyConnectedMatrixMultiplyKernel / @ref CLLocallyConnectedLayer
922 - @ref CLWeightsReshapeKernel / @ref CLConvolutionLayerReshapeWeights
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100923 - New C++ kernels:
Anthony Barbier3762e742018-03-02 11:49:33 +0000924 - @ref CPPDetectionWindowNonMaximaSuppressionKernel
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100925 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000926 - @ref NEBatchNormalizationLayerKernel / @ref NEBatchNormalizationLayer
Georgios Pinitas09f24972019-05-17 18:14:40 +0100927 - @ref NEDepthConcatenateLayerKernel / NEDepthConcatenateLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000928 - @ref NEDirectConvolutionLayerKernel / @ref NEDirectConvolutionLayer
929 - @ref NELocallyConnectedMatrixMultiplyKernel / @ref NELocallyConnectedLayer
930 - @ref NEWeightsReshapeKernel / @ref NEConvolutionLayerReshapeWeights
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100931
932v17.05 Public bug fixes release
933 - Various bug fixes
 - Remaining functions ported to use accurate padding.
935 - Library does not link against OpenCL anymore (It uses dlopen / dlsym at runtime instead to determine whether or not OpenCL is available).
936 - Added "free" method to allocator.
937 - Minimum version of g++ required for armv7 Linux changed from 4.8 to 4.9
938
939v17.04 Public bug fixes release
940
941 The following functions have been ported to use the new accurate padding:
Anthony Barbier3762e742018-03-02 11:49:33 +0000942 - @ref CLColorConvertKernel
943 - @ref CLEdgeNonMaxSuppressionKernel
944 - @ref CLEdgeTraceKernel
945 - @ref CLGaussianPyramidHorKernel
946 - @ref CLGaussianPyramidVertKernel
947 - @ref CLGradientKernel
948 - @ref NEChannelCombineKernel
949 - @ref NEFillArrayKernel
950 - @ref NEGaussianPyramidHorKernel
951 - @ref NEGaussianPyramidVertKernel
Georgios Pinitas09d34512018-08-30 16:02:11 +0100952 - NEHarrisScoreFP16Kernel
Anthony Barbier3762e742018-03-02 11:49:33 +0000953 - @ref NEHarrisScoreKernel
954 - @ref NEHOGDetectorKernel
955 - @ref NELogits1DMaxKernel
956 - NELogits1DShiftExpSumKernel
957 - NELogits1DNormKernel
958 - @ref NENonMaximaSuppression3x3FP16Kernel
959 - @ref NENonMaximaSuppression3x3Kernel
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100960
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100961v17.03.1 First Major public release of the sources
962 - Renamed the library to arm_compute
963 - New CPP target introduced for C++ kernels shared between NEON and CL functions.
964 - New padding calculation interface introduced and ported most kernels / functions to use it.
965 - New OpenCL kernels / functions:
Gian Marco Iodiceeb65f6d2020-04-15 11:42:15 +0100966 - CLGEMMLowpMatrixMultiplyKernel / CLGEMMLowp
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100967 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000968 - @ref NENormalizationLayerKernel / @ref NENormalizationLayer
969 - @ref NETransposeKernel / @ref NETranspose
970 - @ref NELogits1DMaxKernel, NELogits1DShiftExpSumKernel, NELogits1DNormKernel / @ref NESoftmaxLayer
971 - @ref NEIm2ColKernel, @ref NECol2ImKernel, NEConvolutionLayerWeightsReshapeKernel / @ref NEConvolutionLayer
Michele Di Giorgiof22f6722020-07-03 16:29:24 +0100972 - NEGEMMMatrixAccumulateBiasesKernel / @ref NEFullyConnectedLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000973 - @ref NEGEMMLowpMatrixMultiplyKernel / NEGEMMLowp
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100974
975v17.03 Sources preview
976 - New OpenCL kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000977 - @ref CLGradientKernel, @ref CLEdgeNonMaxSuppressionKernel, @ref CLEdgeTraceKernel / @ref CLCannyEdge
Gian Marco Iodice57a89612019-08-22 14:10:27 +0100978 - GEMM refactoring + FP16 support: CLGEMMInterleave4x4Kernel, CLGEMMTranspose1xWKernel, @ref CLGEMMMatrixMultiplyKernel, CLGEMMMatrixAdditionKernel / @ref CLGEMM
Michele Di Giorgiof6f78762020-07-06 11:27:21 +0100979 - CLGEMMMatrixAccumulateBiasesKernel / @ref CLFullyConnectedLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000980 - @ref CLTransposeKernel / @ref CLTranspose
981 - @ref CLLKTrackerInitKernel, @ref CLLKTrackerStage0Kernel, @ref CLLKTrackerStage1Kernel, @ref CLLKTrackerFinalizeKernel / @ref CLOpticalFlow
982 - @ref CLNormalizationLayerKernel / @ref CLNormalizationLayer
983 - @ref CLLaplacianPyramid, @ref CLLaplacianReconstruct
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100984 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000985 - @ref NEActivationLayerKernel / @ref NEActivationLayer
986 - GEMM refactoring + FP16 support (Requires armv8.2 CPU): @ref NEGEMMInterleave4x4Kernel, @ref NEGEMMTranspose1xWKernel, @ref NEGEMMMatrixMultiplyKernel, @ref NEGEMMMatrixAdditionKernel / @ref NEGEMM
987 - @ref NEPoolingLayerKernel / @ref NEPoolingLayer
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100988
989v17.02.1 Sources preview
990 - New OpenCL kernels / functions:
Michele Di Giorgiof6f78762020-07-06 11:27:21 +0100991 - CLLogits1DMaxKernel, CLLogits1DShiftExpSumKernel, @ref CLLogits1DNormKernel / @ref CLSoftmaxLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000992 - @ref CLPoolingLayerKernel / @ref CLPoolingLayer
993 - @ref CLIm2ColKernel, @ref CLCol2ImKernel, CLConvolutionLayerWeightsReshapeKernel / @ref CLConvolutionLayer
994 - @ref CLRemapKernel / @ref CLRemap
995 - @ref CLGaussianPyramidHorKernel, @ref CLGaussianPyramidVertKernel / @ref CLGaussianPyramid, @ref CLGaussianPyramidHalf, @ref CLGaussianPyramidOrb
996 - @ref CLMinMaxKernel, @ref CLMinMaxLocationKernel / @ref CLMinMaxLocation
997 - @ref CLNonLinearFilterKernel / @ref CLNonLinearFilter
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100998 - New NEON FP16 kernels (Requires armv8.2 CPU)
Anthony Barbier3762e742018-03-02 11:49:33 +0000999 - @ref NEAccumulateWeightedFP16Kernel
1000 - @ref NEBox3x3FP16Kernel
1001 - @ref NENonMaximaSuppression3x3FP16Kernel
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001002
1003v17.02 Sources preview
1004 - New OpenCL kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001005 - @ref CLActivationLayerKernel / @ref CLActivationLayer
1006 - @ref CLChannelCombineKernel / @ref CLChannelCombine
1007 - @ref CLDerivativeKernel / @ref CLChannelExtract
1008 - @ref CLFastCornersKernel / @ref CLFastCorners
1009 - @ref CLMeanStdDevKernel / @ref CLMeanStdDev
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001010 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001011 - HOG / SVM: @ref NEHOGOrientationBinningKernel, @ref NEHOGBlockNormalizationKernel, @ref NEHOGDetectorKernel, NEHOGNonMaximaSuppressionKernel / @ref NEHOGDescriptor, @ref NEHOGDetector, @ref NEHOGGradient, @ref NEHOGMultiDetection
1012 - @ref NENonLinearFilterKernel / @ref NENonLinearFilter
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001013 - Introduced a CLScheduler to manage the default context and command queue used by the runtime library and create synchronisation events.
1014 - Switched all the kernels / functions to use tensors instead of images.
1015 - Updated documentation to include instructions to build the library from sources.
1016
1017v16.12 Binary preview release
1018 - Original release
1019
1020@section S3_how_to_build How to build the library and the examples
1021
1022@subsection S3_1_build_options Build options
1023
1024scons 2.3 or above is required to build the library.
1025To see the build options available simply run ```scons -h```:
1026
Anthony Barbier79c61782017-06-23 11:48:24 +01001027 debug: Debug (yes|no)
1028 default: False
1029 actual: False
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001030
Anthony Barbier79c61782017-06-23 11:48:24 +01001031 asserts: Enable asserts (this flag is forced to 1 for debug=1) (yes|no)
1032 default: False
1033 actual: False
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001034
Anthony Barbier79c61782017-06-23 11:48:24 +01001035 arch: Target Architecture (armv7a|arm64-v8a|arm64-v8.2-a|x86_32|x86_64)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001036 default: armv7a
1037 actual: armv7a
1038
Anthony Barbier79c61782017-06-23 11:48:24 +01001039 os: Target OS (linux|android|bare_metal)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001040 default: linux
1041 actual: linux
1042
Anthony Barbier2d0ce772018-02-21 15:35:36 +00001043 build: Build type (native|cross_compile|embed_only)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001044 default: cross_compile
1045 actual: cross_compile
1046
Anthony Barbier79c61782017-06-23 11:48:24 +01001047 examples: Build example programs (yes|no)
1048 default: True
1049 actual: True
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001050
Anthony Barbier79c61782017-06-23 11:48:24 +01001051 Werror: Enable/disable the -Werror compilation flag (yes|no)
1052 default: True
1053 actual: True
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001054
Anthony Barbier79c61782017-06-23 11:48:24 +01001055 opencl: Enable OpenCL support (yes|no)
1056 default: True
1057 actual: True
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001058
Anthony Barbier79c61782017-06-23 11:48:24 +01001059 neon: Enable Neon support (yes|no)
1060 default: False
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001061 actual: False
1062
Anthony Barbier20dbb822017-12-13 21:19:39 +00001063 gles_compute: Enable OpenGL ES Compute Shader support (yes|no)
1064 default: False
1065 actual: False
1066
1067 embed_kernels: Embed OpenCL kernels and OpenGL ES compute shader in library binary (yes|no)
Anthony Barbiercc0a80b2017-12-15 11:37:29 +00001068 default: True
1069 actual: True
Anthony Barbier79c61782017-06-23 11:48:24 +01001070
1071 set_soname: Set the library's soname and shlibversion (requires SCons 2.4 or above) (yes|no)
1072 default: False
1073 actual: False
1074
1075 openmp: Enable OpenMP backend (yes|no)
1076 default: False
1077 actual: False
1078
1079 cppthreads: Enable C++11 threads backend (yes|no)
1080 default: True
1081 actual: True
1082
1083 build_dir: Specify sub-folder for the build ( /path/to/build_dir )
1084 default: .
1085 actual: .
1086
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001087 extra_cxx_flags: Extra CXX flags to be appended to the build command
1088 default:
1089 actual:
1090
Anthony Barbier79c61782017-06-23 11:48:24 +01001091 pmu: Enable PMU counters (yes|no)
1092 default: False
1093 actual: False
1094
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001095 mali: Enable Mali hardware counters (yes|no)
1096 default: False
1097 actual: False
1098
Anthony Barbier79c61782017-06-23 11:48:24 +01001099 validation_tests: Build validation test programs (yes|no)
1100 default: False
1101 actual: False
1102
1103 benchmark_tests: Build benchmark test programs (yes|no)
1104 default: False
1105 actual: False
1106
1107@b debug / @b asserts:
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001108 - With debug=1 asserts are enabled, and the library is built with symbols and no optimisations enabled.
1109 - With debug=0 and asserts=1: Optimisations are enabled and symbols are removed, however all the asserts are still present (This is about 20% slower than the release build)
 - With debug=0 and asserts=0: All optimisations are enabled and no validation is performed; if the application misuses the library it is likely to result in a crash. (Only use this mode once you are sure your application is working as expected).
1111
Anthony Barbier79c61782017-06-23 11:48:24 +01001112@b arch: The x86_32 and x86_64 targets can only be used with neon=0 and opencl=1.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001113
Anthony Barbier79c61782017-06-23 11:48:24 +01001114@b os: Choose the operating system you are targeting: Linux, Android or bare metal.
@note Bare metal can only be used with NEON (not OpenCL); only static libraries get built and NEON's multi-threading support is disabled.
1116
Anthony Barbier79c61782017-06-23 11:48:24 +01001117@b build: you can either build directly on your device (native) or cross compile from your desktop machine (cross-compile). In both cases make sure the compiler is available in your path.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001118
@note If you want to natively compile for 32bit on a 64bit ARM device running a 64bit OS then you will still have to use build=cross_compile.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001120
Anthony Barbier2d0ce772018-02-21 15:35:36 +00001121There is also an 'embed_only' option which will generate all the .embed files for the OpenCL kernels and / or OpenGLES compute shaders. This might be useful if using a different build system to compile the library.
1122
@b Werror: If you are compiling with the same toolchains as the ones used in this guide then there shouldn't be any warnings, and therefore you should be able to keep Werror=1. If the library fails to build with a different compiler version because of warnings interpreted as errors then, provided you are sure the warnings are not important, you might want to try building with Werror=0 (but please do report the issue either on GitHub or by email to developer@arm.com so that it can be addressed).
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001124
Anthony Barbier20dbb822017-12-13 21:19:39 +00001125@b opencl / @b neon / @b gles_compute: Choose which SIMD technology you want to target. (NEON for ARM Cortex-A CPUs or OpenCL / GLES_COMPUTE for ARM Mali GPUs)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001126
Anthony Barbier20dbb822017-12-13 21:19:39 +00001127@b embed_kernels: For OpenCL / GLES_COMPUTE only: set embed_kernels=1 if you want the OpenCL / GLES_COMPUTE kernels to be built in the library's binaries instead of being read from separate ".cl" / ".cs" files. If embed_kernels is set to 0 then the application can set the path to the folder containing the OpenCL / GLES_COMPUTE kernel files by calling CLKernelLibrary::init() / GCKernelLibrary::init(). By default the path is set to "./cl_kernels" / "./cs_shaders".
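
For reference, below is a minimal sketch of pointing the library at a custom kernel folder at runtime; the exact parameters of CLKernelLibrary::init() and the accessors used to retrieve the context and device are assumptions, and the path shown is purely illustrative:

@code{.cpp}
// Initialise the default OpenCL context, queue and kernel path ("./cl_kernels")
CLScheduler::get().default_init();

// Re-point the kernel library at a custom folder containing the .cl files
// (hypothetical path; only relevant when the library was built with embed_kernels=0)
CLKernelLibrary::get().init("/data/local/tmp/cl_kernels/",
                            CLScheduler::get().context(),
                            CLKernelLibrary::get().get_device());
@endcode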
Anthony Barbier79c61782017-06-23 11:48:24 +01001128
@b set_soname: Whether to build the versioned library.
1130
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001131If enabled the library will contain a SONAME and SHLIBVERSION and some symlinks will automatically be created between the objects.
1132Example:
1133 libarm_compute_core.so -> libarm_compute_core.so.1.0.0
1134 libarm_compute_core.so.1 -> libarm_compute_core.so.1.0.0
1135 libarm_compute_core.so.1.0.0
1136
@note This option is disabled by default as it requires SCons version 2.4 or above.
1138
Anthony Barbier79c61782017-06-23 11:48:24 +01001139@b extra_cxx_flags: Custom CXX flags which will be appended to the end of the build command.
1140
@b build_dir: Build the library in a subfolder of the "build" folder. (Allows several configurations to be built in parallel).
1142
@b examples: Whether or not to build the example programs.
1144
1145@b validation_tests: Enable the build of the validation suite.
1146
@b benchmark_tests: Enable the build of the benchmark tests.
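
For instance, one possible combination of the options above that also builds both test suites is shown below; this is purely illustrative and the os / arch values should be adjusted to your target:

    scons Werror=1 -j8 debug=0 asserts=1 neon=1 opencl=1 embed_kernels=1 validation_tests=1 benchmark_tests=1 os=linux arch=arm64-v8a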
1148
1149@b pmu: Enable the PMU cycle counter to measure execution time in benchmark tests. (Your device needs to support it)
1150
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001151@b mali: Enable the collection of Mali hardware counters to measure execution time in benchmark tests. (Your device needs to have a Mali driver that supports it)
1152
@b openmp: Build in the OpenMP scheduler for NEON.
1154
1155@note Only works when building with g++ not clang++
1156
@b cppthreads: Build in the C++11 scheduler for NEON.
1158
Anthony Barbier3762e742018-03-02 11:49:33 +00001159@sa Scheduler::set
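
For reference, below is a minimal sketch of selecting the scheduler backend at runtime; the enum values and the chosen thread count are assumptions to be checked against Scheduler::set and IScheduler::set_num_threads:

@code{.cpp}
// Use the OpenMP backend (only valid if the library was built with openmp=1)
Scheduler::set(Scheduler::Type::OMP);

// Alternatively, keep the default C++11 backend and just limit the number of threads
Scheduler::get().set_num_threads(4);
@endcode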
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001160
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001161@subsection S3_2_linux Building for Linux
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001162
1163@subsubsection S3_2_1_library How to build the library ?
1164
For Linux, the library was successfully built and tested using the following Linaro GCC toolchains:
1166
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001167 - gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
1168 - gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001169
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001170To cross-compile the library in debug mode, with NEON only support, for Linux 32bit:
1171
1172 scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=linux arch=armv7a
1173
1174To cross-compile the library in asserts mode, with OpenCL only support, for Linux 64bit:
1175
1176 scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=linux arch=arm64-v8a
1177
Anthony Barbier20dbb822017-12-13 21:19:39 +00001178To cross-compile the library in asserts mode, with GLES_COMPUTE only support, for Linux 64bit:
1179
1180 scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=0 gles_compute=1 embed_kernels=1 os=linux arch=arm64-v8a
1181
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001182You can also compile the library natively on an ARM device by using <b>build=native</b>:
1183
1184 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=arm64-v8a build=native
1185 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=native
1186
1187@note g++ for ARM is mono-arch, therefore if you want to compile for Linux 32bit on a Linux 64bit platform you will have to use a cross compiler.
1188
1189For example on a 64bit Debian based system you would have to install <b>g++-arm-linux-gnueabihf</b>
1190
1191 apt-get install g++-arm-linux-gnueabihf
1192
1193Then run
1194
1195 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=cross_compile
1196
1197or simply remove the build parameter as build=cross_compile is the default value:
1198
1199 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a
1200
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001201@subsubsection S3_2_2_examples How to manually build the examples ?
1202
1203The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
1204
Sheri Zhang7a7f4e02020-08-28 20:08:49 +01001205@note The following command lines assume the arm_compute libraries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built libraries with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001206
1207To cross compile a NEON example for Linux 32bit:
1208
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001209 arm-linux-gnueabihf-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001210
1211To cross compile a NEON example for Linux 64bit:
1212
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001213 aarch64-linux-gnu-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001214
1215(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
1216
1217To cross compile an OpenCL example for Linux 32bit:
1218
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001219 arm-linux-gnueabihf-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001220
1221To cross compile an OpenCL example for Linux 64bit:
1222
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001223 aarch64-linux-gnu-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001224
Anthony Barbier14c86a92017-12-14 16:27:41 +00001225To cross compile a GLES example for Linux 32bit:
1226
1227 arm-linux-gnueabihf-g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -mfpu=neon -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff
1228
1229To cross compile a GLES example for Linux 64bit:
1230
1231 aarch64-linux-gnu-g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff
1232
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001233(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
1234
Anthony Barbier14c86a92017-12-14 16:27:41 +00001235To cross compile the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.
1236
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001237i.e. to cross compile the "graph_lenet" example for Linux 32bit:
1238
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001239 arm-linux-gnueabihf-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001240
1241i.e. to cross compile the "graph_lenet" example for Linux 64bit:
1242
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001243 aarch64-linux-gnu-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001244
1245(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
1246
Anthony Barbiere5007472017-10-27 15:01:44 +01001247@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute, arm_compute_core
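
For reference, a hypothetical static link command for the 64bit "graph_lenet" example could look like the following; it mirrors the --whole-archive pattern used in the Android commands later in this guide, and additional system libraries (for example -lpthread or -ldl) may be required depending on your configuration:

    aarch64-linux-gnu-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -L. -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -lpthread -ldl -o graph_lenet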
1248
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001249To compile natively (i.e directly on an ARM device) for NEON for Linux 32bit:
1250
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001251 g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001252
1253To compile natively (i.e directly on an ARM device) for NEON for Linux 64bit:
1254
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001255 g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001256
1257(notice the only difference with the 32 bit command is that we don't need the -mfpu option)
1258
1259To compile natively (i.e directly on an ARM device) for OpenCL for Linux 32bit or Linux 64bit:
1260
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001261 g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001262
Anthony Barbier14c86a92017-12-14 16:27:41 +00001263To compile natively (i.e directly on an ARM device) for GLES for Linux 32bit or Linux 64bit:
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001264
Anthony Barbier14c86a92017-12-14 16:27:41 +00001265 g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff
1266
1267To compile natively the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.
Anthony Barbier14c86a92017-12-14 16:27:41 +00001268
1269i.e. to natively compile the "graph_lenet" example for Linux 32bit:
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001270
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001271 g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001272
Anthony Barbier14c86a92017-12-14 16:27:41 +00001273i.e. to natively compile the "graph_lenet" example for Linux 64bit:
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001274
Gian Marco Iodicef94c6742020-06-26 12:35:09 +01001275 g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001276
1277(notice the only difference with the 32 bit command is that we don't need the -mfpu option)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001278
Anthony Barbiere5007472017-10-27 15:01:44 +01001279@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute, arm_compute_core
1280
@note These two commands assume libarm_compute.so is available in your library path; if not, add the path to it using -L (e.g. -Llib/linux-arm64-v8a-neon-cl-asserts/).
@note You might need to export the path to the OpenCL library as well in your LD_LIBRARY_PATH if Compute Library was built with OpenCL enabled.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001283
1284To run the built executable simply run:
1285
1286 LD_LIBRARY_PATH=build ./neon_convolution
1287
1288or
1289
1290 LD_LIBRARY_PATH=build ./cl_convolution
1291
@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified then random values will be used to execute the graph.
Anthony Barbier3762e742018-03-02 11:49:33 +00001293
1294For example:
Anthony Barbier38e7f1f2018-05-21 13:37:47 +01001295
Georgios Pinitas9f28b392018-07-18 20:01:53 +01001296 LD_LIBRARY_PATH=. ./graph_lenet --help
Anthony Barbier3762e742018-03-02 11:49:33 +00001297
Below is a list of the common parameters among the graph examples:
1299@snippet utils/CommonGraphOptions.h Common graph examples parameters
Anthony Barbier3762e742018-03-02 11:49:33 +00001300
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001301@subsection S3_3_android Building for Android
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001302
1303For Android, the library was successfully built and tested using Google's standalone toolchains:
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001304 - clang++ from NDK r18b for armv7a
1305 - clang++ from NDK r18b for arm64-v8a
1306 - clang++ from NDK r18b for arm64-v8.2-a with FP16 support
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001307
1308Here is a guide to <a href="https://developer.android.com/ndk/guides/standalone_toolchain.html">create your Android standalone toolchains from the NDK</a>
1309
Sheri Zhang7a7f4e02020-08-28 20:08:49 +01001310- Download the NDK r18b from here: https://developer.android.com/ndk/downloads/index.html to directory $NDK
Georgios Pinitasf112ede2019-03-01 19:11:20 +00001311- Make sure you have Python 2.7 installed on your machine.
- Generate the 32-bit and/or 64-bit toolchains by running the following commands, installing them into your toolchain directory $MY_TOOLCHAINS:
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001313
Anthony Barbier38e7f1f2018-05-21 13:37:47 +01001314
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001315 $NDK/build/tools/make_standalone_toolchain.py --arch arm64 --install-dir $MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b --stl libc++ --api 21
1316 $NDK/build/tools/make_standalone_toolchain.py --arch arm --install-dir $MY_TOOLCHAINS/arm-linux-android-ndk-r18b --stl libc++ --api 21
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001317
Anthony Barbierd51ea0a2018-08-07 17:48:03 +01001318@attention We used to use gnustl but as of NDK r17 it is deprecated so we switched to libc++
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001319
Anthony Barbier38e7f1f2018-05-21 13:37:47 +01001320@note Make sure to add the toolchains to your PATH:
1321
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001322 export PATH=$PATH:$MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b/bin:$MY_TOOLCHAINS/arm-linux-android-ndk-r18b/bin
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001323
1324@subsubsection S3_3_1_library How to build the library ?
1325
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001326To cross-compile the library in debug mode, with NEON only support, for Android 32bit:
1327
1328 CXX=clang++ CC=clang scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=android arch=armv7a
1329
1330To cross-compile the library in asserts mode, with OpenCL only support, for Android 64bit:
1331
Anthony Barbier14c86a92017-12-14 16:27:41 +00001332 CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=android arch=arm64-v8a
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001333
Anthony Barbier20dbb822017-12-13 21:19:39 +00001334To cross-compile the library in asserts mode, with GLES_COMPUTE only support, for Android 64bit:
1335
Anthony Barbier14c86a92017-12-14 16:27:41 +00001336 CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=0 gles_compute=1 embed_kernels=1 os=android arch=arm64-v8a
Anthony Barbier20dbb822017-12-13 21:19:39 +00001337
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001338@subsubsection S3_3_2_examples How to manually build the examples ?
1339
1340The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
1341
Sheri Zhang7a7f4e02020-08-28 20:08:49 +01001342@note The following command lines assume the arm_compute libraries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built libraries with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001343
1344Once you've got your Android standalone toolchain built and added to your path you can do the following:
1345
1346To cross compile a NEON example:
1347
1348 #32 bit:
Georgios Pinitas9873ea32017-12-05 15:28:55 +00001349 arm-linux-androideabi-clang++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o neon_convolution_arm -static-libstdc++ -pie
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001350 #64 bit:
Anthony Barbier14c86a92017-12-14 16:27:41 +00001351 aarch64-linux-android-clang++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o neon_convolution_aarch64 -static-libstdc++ -pie
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001352
1353To cross compile an OpenCL example:
1354
1355 #32 bit:
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001356 arm-linux-androideabi-clang++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o cl_convolution_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001357 #64 bit:
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001358 aarch64-linux-android-clang++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o cl_convolution_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL
Anthony Barbier14c86a92017-12-14 16:27:41 +00001359
1360To cross compile a GLES example:
Anthony Barbiercc0a80b2017-12-15 11:37:29 +00001361
Anthony Barbier14c86a92017-12-14 16:27:41 +00001362 #32 bit:
1363 arm-linux-androideabi-clang++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o gc_absdiff_arm -static-libstdc++ -pie -DARM_COMPUTE_GC
1364 #64 bit:
1365 aarch64-linux-android-clang++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o gc_absdiff_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_GC
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001366
To cross compile the examples with the Graph API, such as graph_lenet.cpp, you also need to link against the arm_compute_graph library.
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001368
1369 #32 bit:
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001370 arm-linux-androideabi-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001371 #64 bit:
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001372 aarch64-linux-android-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001373
@note Due to some issues in older versions of the Mali OpenCL DDK (<= r13p0), we recommend linking arm_compute statically on Android.
@note When linked statically, the arm_compute_graph library currently needs the --whole-archive linker flag in order to work properly.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001376
1377Then you need to do is upload the executable and the shared library to the device using ADB:
1378
1379 adb push neon_convolution_arm /data/local/tmp/
1380 adb push cl_convolution_arm /data/local/tmp/
Anthony Barbier14c86a92017-12-14 16:27:41 +00001381 adb push gc_absdiff_arm /data/local/tmp/
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001382 adb shell chmod 777 -R /data/local/tmp/
1383
1384And finally to run the example:
1385
1386 adb shell /data/local/tmp/neon_convolution_arm
1387 adb shell /data/local/tmp/cl_convolution_arm
Anthony Barbier14c86a92017-12-14 16:27:41 +00001388 adb shell /data/local/tmp/gc_absdiff_arm
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001389
1390For 64bit:
1391
1392 adb push neon_convolution_aarch64 /data/local/tmp/
1393 adb push cl_convolution_aarch64 /data/local/tmp/
Anthony Barbier14c86a92017-12-14 16:27:41 +00001394 adb push gc_absdiff_aarch64 /data/local/tmp/
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001395 adb shell chmod 777 -R /data/local/tmp/
1396
1397And finally to run the example:
1398
1399 adb shell /data/local/tmp/neon_convolution_aarch64
1400 adb shell /data/local/tmp/cl_convolution_aarch64
Anthony Barbier14c86a92017-12-14 16:27:41 +00001401 adb shell /data/local/tmp/gc_absdiff_aarch64
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001402
@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified then random values will be used to execute the graph.
Anthony Barbier3762e742018-03-02 11:49:33 +00001404
1405For example:
Georgios Pinitas9f28b392018-07-18 20:01:53 +01001406 adb shell /data/local/tmp/graph_lenet --help
Anthony Barbier3762e742018-03-02 11:49:33 +00001407
In this case the first argument of LeNet (like all the graph examples) is the target (i.e. 0 to run on NEON, 1 to run on OpenCL if available, 2 to run on OpenCL using the CLTuner), the second argument is the path to the folder containing the npy files for the weights, and the third argument is the number of batches to run.
1409
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001410@subsection S3_4_bare_metal Building for bare metal
1411
For bare metal, the library was successfully built using Linaro's gcc-linaro-6.3.1-2017.05 bare metal toolchains:
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001413 - arm-eabi for armv7a
1414 - aarch64-elf for arm64-v8a
1415
1416Download linaro for <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/arm-eabi/">armv7a</a> and <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/aarch64-elf/">arm64-v8a</a>.
1417
1418@note Make sure to add the toolchains to your PATH: export PATH=$PATH:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-elf/bin:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_arm-eabi/bin
1419
1420@subsubsection S3_4_1_library How to build the library ?
1421
1422To cross-compile the library with NEON support for baremetal arm64-v8a:
1423
1424 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=bare_metal arch=arm64-v8a build=cross_compile cppthreads=0 openmp=0 standalone=1
1425
1426@subsubsection S3_4_2_examples How to manually build the examples ?
1427
1428Examples are disabled when building for bare metal. If you want to build the examples you need to provide a custom bootcode depending on the target architecture and link against the compute library. More information about bare metal bootcode can be found <a href="http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0527a/index.html">here</a>.
1429
1430@subsection S3_5_windows_host Building on a Windows host system
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001431
1432Using `scons` directly from the Windows command line is known to cause
problems. The reason seems to be that if `scons` is set up for cross-compilation
it gets confused about Windows-style paths (using backslashes). Thus it is
1435recommended to follow one of the options outlined below.
1436
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001437@subsubsection S3_5_1_ubuntu_on_windows Bash on Ubuntu on Windows
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001438
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +01001439The best and easiest option is to use
1440<a href="https://msdn.microsoft.com/en-gb/commandline/wsl/about">Ubuntu on Windows</a>.
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001441This feature is still marked as *beta* and thus might not be available.
However, if it is, building the library is as simple as opening a *Bash on
1443Ubuntu on Windows* shell and following the general guidelines given above.
1444
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001445@subsubsection S3_5_2_cygwin Cygwin
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001446
If the Windows Subsystem for Linux is not available, <a href="https://www.cygwin.com/">Cygwin</a>
can be used to install and run `scons`; the minimum Cygwin version must be 3.0.7 or later. In addition
1449to the default packages installed by Cygwin `scons` has to be selected in the installer. (`git` might
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001450also be useful but is not strictly required if you already have got the source
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +01001451code of the library.) Linaro provides pre-built versions of
1452<a href="http://releases.linaro.org/components/toolchain/binaries/">GCC cross-compilers</a>
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001453that can be used from the Cygwin terminal. When building for Android the
1454compiler is included in the Android standalone toolchain. After everything has
1455been set up in the Cygwin terminal the general guide on building the library
1456can be followed.
1457
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001458@subsection S3_6_cl_requirements OpenCL DDK Requirements
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001459
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001460@subsubsection S3_6_1_cl_hard_requirements Hard Requirements
Georgios Pinitasd9cb0572018-07-16 12:23:09 +01001461
Compute Library requires OpenCL 1.1 and above, with support for non-uniform workgroup sizes, which is officially supported in the Mali OpenCL DDK r8p0 and above as an extension (the corresponding build option is \a -cl-arm-non-uniform-work-group-size).
1463
Enabling 16-bit floating point calculations requires the \a cl_khr_fp16 extension to be supported. All Mali GPUs with compute capabilities have native support for half precision floating point.
1465
Use of the @ref CLMeanStdDev function requires 64-bit atomics support, thus the \a cl_khr_int64_base_atomics extension must be supported.
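
If you are unsure which of these extensions your device exposes, the small standalone sketch below queries them using the standard OpenCL C++ wrapper (not a Compute Library API); the header name matches the CL headers shipped in the include folder and may need adjusting for other setups:

@code{.cpp}
// Target OpenCL 1.1, matching the minimum requirement stated above
#define CL_HPP_MINIMUM_OPENCL_VERSION 110
#define CL_HPP_TARGET_OPENCL_VERSION 110
#include <CL/cl2.hpp>

#include <iostream>
#include <string>

int main()
{
    // Query the extension string of the default OpenCL device and look for the extensions above
    const cl::Device  device     = cl::Device::getDefault();
    const std::string extensions = device.getInfo<CL_DEVICE_EXTENSIONS>();

    std::cout << "cl_khr_fp16: " << (extensions.find("cl_khr_fp16") != std::string::npos) << std::endl;
    std::cout << "cl_khr_int64_base_atomics: " << (extensions.find("cl_khr_int64_base_atomics") != std::string::npos) << std::endl;
    return 0;
}
@endcode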
1467
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001468@subsubsection S3_6_2_cl_performance_requirements Performance improvements
Georgios Pinitasd9cb0572018-07-16 12:23:09 +01001469
Integer dot product built-in function extensions (and therefore optimized kernels) are available with Mali OpenCL DDK r22p0 and above for the following GPUs: G71, G76. The relevant extensions are \a cl_arm_integer_dot_product_int8, \a cl_arm_integer_dot_product_accumulate_int8 and \a cl_arm_integer_dot_product_accumulate_int16.
1471
OpenCL kernel-level debugging can be simplified with the use of printf; this requires the \a cl_arm_printf extension to be supported.
1473
SVM allocations are supported for all the underlying allocations in Compute Library. To enable this, OpenCL 2.0 or above is required.
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001475
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001476@subsection S3_7_cl_tuner OpenCL Tuner
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001477
The OpenCL tuner, a.k.a. CLTuner, is a module of Arm Compute Library that can improve the performance of the OpenCL kernels by tuning the Local-Workgroup-Size (LWS).
1479The optimal LWS for each unique OpenCL kernel configuration is stored in a table. This table can be either imported or exported from/to a file.
The OpenCL tuner runs the same OpenCL kernel for a range of local workgroup sizes and keeps the local workgroup size of the fastest run to use in subsequent calls to the kernel. It supports three modes of tuning, with different trade-offs between the time taken to tune and the kernel execution time achieved using the best LWS found:
 - Exhaustive mode searches all the supported values of LWS. This mode takes the longest time to tune and is the most likely to find the optimal LWS.
 - Normal mode searches a subset of LWS values to yield a good approximation of the optimal LWS. It takes less time to tune than Exhaustive mode.
 - Rapid mode takes the shortest time to tune and finds an LWS value that is at least as good as or better than the default LWS value.

The mode affects only the search for the optimal LWS and has no effect when the LWS value is imported from a file.
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001481In order for the performance numbers to be meaningful you must disable the GPU power management and set it to a fixed frequency for the entire duration of the tuning phase.
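
For reference, below is a minimal sketch of selecting a tuning mode; the CLTunerMode enumeration and the set_tuner_mode() setter are assumed to be available in this version of the library:

@code{.cpp}
CLTuner tuner;

// Pick one of the three modes described above (EXHAUSTIVE, NORMAL or RAPID)
tuner.set_tuner_mode(CLTunerMode::RAPID);

// Pass the tuner to the scheduler before configuring any function
CLScheduler::get().default_init(&tuner);
@endcode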
1482
If you wish to know more about LWS and its important role in improving GPU cache utilization, we suggest having a look at the presentation "Even Faster CNNs: Exploring the New Class of Winograd Algorithms", available at the following link:
1484
1485https://www.embedded-vision.com/platinum-members/arm/embedded-vision-training/videos/pages/may-2018-embedded-vision-summit-iodice
1486
Tuning a network from scratch can take a long time and considerably affects the execution time of the first run of your network. For this reason it is recommended to store the CLTuner's results in a file to amortize this time when you re-use either the same network or functions with the same configurations. The tuning is performed only once for each OpenCL kernel.
1488
CLTuner looks for the optimal LWS for each unique OpenCL kernel configuration. Since a function (e.g. Convolution Layer, Pooling Layer, Fully Connected Layer ...) can be called multiple times but with different parameters, we associate an "id" (called "config_id") with each kernel to distinguish the unique configurations.
1490
1491 #Example: 2 unique Matrix Multiply configurations
1492@code{.cpp}
1493 TensorShape a0 = TensorShape(32,32);
1494 TensorShape b0 = TensorShape(32,32);
1495 TensorShape c0 = TensorShape(32,32);
1496 TensorShape a1 = TensorShape(64,64);
1497 TensorShape b1 = TensorShape(64,64);
1498 TensorShape c1 = TensorShape(64,64);
1499
 // OpenCL tensors are required by CLGEMM
 CLTensor a0_tensor;
 CLTensor b0_tensor;
 CLTensor c0_tensor;
 CLTensor a1_tensor;
 CLTensor b1_tensor;
 CLTensor c1_tensor;
1506
1507 a0_tensor.allocator()->init(TensorInfo(a0, 1, DataType::F32));
1508 b0_tensor.allocator()->init(TensorInfo(b0, 1, DataType::F32));
1509 c0_tensor.allocator()->init(TensorInfo(c0, 1, DataType::F32));
1510 a1_tensor.allocator()->init(TensorInfo(a1, 1, DataType::F32));
1511 b1_tensor.allocator()->init(TensorInfo(b1, 1, DataType::F32));
 c1_tensor.allocator()->init(TensorInfo(c1, 1, DataType::F32));
1513
1514 CLGEMM gemm0;
1515 CLGEMM gemm1;
1516
1517 // Configuration 0
 gemm0.configure(&a0_tensor, &b0_tensor, nullptr, &c0_tensor, 1.0f, 0.0f);
1519
1520 // Configuration 1
 gemm1.configure(&a1_tensor, &b1_tensor, nullptr, &c1_tensor, 1.0f, 0.0f);
1522@endcode
1523
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001524@subsubsection S3_7_1_cl_tuner_how_to How to use it
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001525
All the graph examples in the Compute Library's folder "examples" and the arm_compute_benchmark accept an argument to enable the OpenCL tuner and an argument to export/import the LWS values to/from a file.
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001527
1528 #Enable CL tuner
 ./graph_mobilenet --enable-tuner --target=CL
1530 ./arm_compute_benchmark --enable-tuner
1531
1532 #Export/Import to/from a file
1533 ./graph_mobilenet --enable-tuner --target=CL --tuner-file=acl_tuner.csv
1534 ./arm_compute_benchmark --enable-tuner --tuner-file=acl_tuner.csv
1535
If you are importing the CLTuner's results from a file, the new tuned LWS values will be appended to it.
1537
Whether you are benchmarking the graph examples or the test cases in the arm_compute_benchmark, remember to:
1539
1540 -# Disable the power management
1541 -# Keep the GPU frequency constant
 -# Run the network multiple times (e.g. 10).
1543
1544If you are not using the graph API or the benchmark infrastructure you will need to manually pass a CLTuner object to CLScheduler before configuring any function.
1545
1546@code{.cpp}
1547CLTuner tuner;
1548
1549// Setup Scheduler
1550CLScheduler::get().default_init(&tuner);
1551@endcode
1552
1553After the first run, the CLTuner's results can be exported to a file using the method "save_to_file()".
1554- tuner.save_to_file("results.csv");
1555
1556This file can be also imported using the method "load_from_file("results.csv")".
1557- tuner.load_from_file("results.csv");
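
Putting the pieces together, a minimal sketch of the full workflow (tune once, then re-use the results) could look like this; the file name is just an example and the import step only makes sense once the file exists:

@code{.cpp}
CLTuner tuner;

// Import previously tuned LWS values (skip this on the very first run)
tuner.load_from_file("results.csv");

CLScheduler::get().default_init(&tuner);

// ... configure and run the network ...

// Write the LWS values (including any newly tuned ones) back to the file
tuner.save_to_file("results.csv");
@endcode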
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001558*/
Anthony Barbierd51ea0a2018-08-07 17:48:03 +01001559} // namespace arm_compute