///
/// Copyright (c) 2017-2020 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/** @mainpage Introduction

@tableofcontents

The Computer Vision and Machine Learning library is a set of functions optimised for both ARM CPUs and GPUs using SIMD technologies.

Several builds of the library are available using various configurations:
 - OS: Linux, Android or bare metal.
 - Architecture: armv7a (32bit) or arm64-v8a (64bit)
 - Technology: NEON / OpenCL / GLES_COMPUTE / NEON and OpenCL and GLES_COMPUTE
 - Debug / Asserts / Release: Use a build with asserts enabled to debug your application and enable extra validation. Once you are sure your application works as expected, you can switch to a release build of the library for maximum performance.

@section S0_1_contact Contact / Support

Please email developer@arm.com

To facilitate the work of the support team, please provide the build information of the library you are using. To get the version of the library you are using, simply run:

    $ strings android-armv7a-cl-asserts/libarm_compute.so | grep arm_compute_version
    arm_compute_version=v16.12 Build options: {'embed_kernels': '1', 'opencl': '1', 'arch': 'armv7a', 'neon': '0', 'asserts': '1', 'debug': '0', 'os': 'android', 'Werror': '1'} Git hash=f51a545d4ea12a9059fe4e598a092f1fd06dc858

@section S0_2_prebuilt_binaries Pre-built binaries

For each release we provide some pre-built binaries of the library [here](https://github.com/ARM-software/ComputeLibrary/releases).

These binaries have been built using the following toolchains:
 - Linux armv7a: gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
 - Linux arm64-v8a: gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu
 - Android armv7a: clang++ / libc++ NDK r18b
 - Android arm64-v8a: clang++ / libc++ NDK r18b

@warning Make sure to use a compatible toolchain to build your application, or you will get std::bad_alloc errors at runtime.

@section S1_file_organisation File organisation

This archive contains:
 - The arm_compute header and source files
 - The latest Khronos OpenCL 1.2 C headers from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a>
 - The latest Khronos cl2.hpp from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a> (API version 2.1 when this document was written)
 - The latest Khronos OpenGL ES 3.1 C headers from the <a href="https://www.khronos.org/registry/gles/">Khronos OpenGL ES registry</a>
 - The latest Khronos EGL 1.5 C headers from the <a href="https://www.khronos.org/registry/gles/">Khronos EGL registry</a>
 - The sources for a stub version of libOpenCL.so, libGLESv1_CM.so, libGLESv2.so and libEGL.so to help you build your application.
 - An examples folder containing a few examples to compile and link against the library.
 - A @ref utils folder containing headers with some boilerplate code used by the examples.
 - This documentation.

 For detailed information about file organisation, please refer to the Files -> File List section of this documentation.

@section S2_versions_changelog Release versions and changelog

@subsection S2_1_versions Release versions

All releases are numbered vYY.MM, where YY are the last two digits of the year and MM is the month number.
If there is more than one release in a month, an extra sequential number is appended at the end:

    v17.03 (First release of March 2017)
    v17.03.1 (Second release of March 2017)
    v17.04 (First release of April 2017)

@note We aim to publish one major public release with new features per quarter. All releases in between will only contain bug fixes.

@subsection S2_2_changelog Changelog

v20.11 Public major release
 - Added new data type S32 support for:
   - @ref NEArithmeticSubtraction
   - @ref NEArithmeticSubtractionKernel
   - @ref NEPixelWiseMultiplication
   - @ref NEPixelWiseMultiplicationKernel
   - @ref NEElementwiseDivision
   - @ref NEDivisionOperationKernel
 - Interface change
   - Properly support softmax axis to have the same meaning as other major frameworks. That is, axis now defines the dimension
     on which Softmax/Logsoftmax is performed. E.g. for an input of shape 4x5x6 and axis=1, softmax is applied to 4x6=24 vectors of size 5.
     The supported value range of axis is [-rank, rank). A minimal usage sketch is shown after this release note.
     This change applies to the following functions:
     - @ref NESoftmaxLayer
     - @ref NELogSoftmaxLayer
     - @ref CLSoftmaxLayer
     - @ref CLLogSoftmaxLayer
     - @ref GCSoftmaxLayer
 - Removed padding from:
   - @ref NEComplexPixelWiseMultiplicationKernel
   - @ref NENonMaximaSuppression3x3Kernel
   - @ref NERemapKernel
   - @ref NEGEMMInterleave4x4Kernel
   - @ref NEDirectConvolutionLayerKernel
   - @ref NEScaleKernel
   - @ref NELocallyConnectedMatrixMultiplyKernel
   - @ref NEGEMMLowpOffsetContributionKernel
   - @ref NEGEMMTranspose1xWKernel
   - @ref NEPoolingLayerKernel
   - @ref NEConvolutionKernel
   - @ref NEDepthwiseConvolutionLayerNativeKernel
   - @ref NEGEMMLowpMatrixMultiplyKernel
   - @ref NEGEMMMatrixMultiplyKernel
   - @ref NEDirectConvolutionLayerOutputStageKernel
   - @ref NEReductionOperationKernel
   - @ref NEGEMMLowpMatrixAReductionKernel
   - @ref NEGEMMLowpMatrixBReductionKernel
 - Deprecated OpenCL kernels / functions:
   - CLLocallyConnectedLayer
   - CLLocallyConnectedMatrixMultiplyKernel
 - Deprecated NEON kernels / functions:
   - NELocallyConnectedLayer
   - NELocallyConnectedMatrixMultiplyKernel

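The following minimal sketch illustrates the new axis semantics with @ref NESoftmaxLayer. It is not part of the release note itself: the tensor shape, data type and beta value are illustrative assumptions only.

    #include "arm_compute/core/TensorInfo.h"
    #include "arm_compute/runtime/NEON/functions/NESoftmaxLayer.h"
    #include "arm_compute/runtime/Tensor.h"

    // Hedged sketch: softmax over axis 1 of a 4x5x6 tensor, i.e. over 4x6=24 vectors of size 5.
    void softmax_axis_example()
    {
        using namespace arm_compute;

        Tensor src, dst;
        src.allocator()->init(TensorInfo(TensorShape(4U, 5U, 6U), 1, DataType::F32));
        dst.allocator()->init(TensorInfo(TensorShape(4U, 5U, 6U), 1, DataType::F32));

        NESoftmaxLayer softmax;
        softmax.configure(&src, &dst, 1.0f /* beta */, 1 /* axis */);

        src.allocator()->allocate();
        dst.allocator()->allocate();
        softmax.run();
    }
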
v20.08 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Added new data type QASYMM8_SIGNED support for:
   - @ref CLArgMinMaxLayer
   - @ref CLArgMinMaxLayerKernel
 - Added new data type U8 support for:
   - @ref NECropKernel
   - @ref CLCropKernel
 - Added align_corners support for nearest neighbor interpolation in:
   - @ref NEScaleKernel
   - @ref CLScaleKernel
 - New OpenCL kernels / functions:
   - @ref CLMaxUnpoolingLayerKernel
 - New NEON kernels / functions:
   - @ref NEMaxUnpoolingLayerKernel
 - New graph example:
   - graph_yolov3_output_detector
 - GEMMTuner improvements:
   - Added fp16 support
   - Output json files for easier integration
   - Enabled tuning for export_to_cl_image_rhs option for RHS tensors
   - More robust script for running benchmarks
 - Removed padding from:
   - @ref NEPixelWiseMultiplicationKernel
   - @ref NEHeightConcatenateLayerKernel
   - @ref NEThresholdKernel
   - @ref NEBatchConcatenateLayerKernel
   - @ref NETransposeKernel
   - @ref NEBatchNormalizationLayerKernel
   - @ref NEArithmeticSubtractionKernel
   - @ref NEBoundingBoxTransformKernel
   - @ref NELogits1DMaxKernel
   - @ref NELogits1DSoftmaxKernel
   - @ref NEROIPoolingLayerKernel
   - @ref NEROIAlignLayerKernel
   - @ref NEYOLOLayerKernel
   - @ref NEUpsampleLayerKernel
   - @ref NEFloorKernel
   - @ref NEWidthConcatenateLayerKernel
   - @ref NEDepthConcatenateLayerKernel
   - @ref NENormalizationLayerKernel
   - @ref NEL2NormalizeLayerKernel
   - @ref NEFillArrayKernel
   - @ref NEDepthConvertLayerKernel
   - @ref NERangeKernel
   - @ref NEPriorBoxLayer
 - Removed OpenCL kernels / functions:
   - CLGEMMLowpQuantizeDownInt32ToUint8Scale
   - CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFloat
 - Removed NEON kernels / functions:
   - NEGEMMLowpQuantizeDownInt32ToUint8Scale
   - NEGEMMMatrixAccumulateBiasesKernel
 - Deprecated functions / interfaces:
   - Non-descriptor based interfaces for @ref NEThreshold, @ref CLThreshold
   - Non-descriptor based interfaces for @ref NEScale, @ref CLScale and @ref GCScale
   - In @ref NESoftmaxLayer, @ref NELogSoftmaxLayer, @ref CLSoftmaxLayer, @ref CLLogSoftmaxLayer and @ref GCSoftmaxLayer:
     The default "axis" value for @ref CLSoftmaxLayer, @ref CLLogSoftmaxLayer and @ref GCSoftmaxLayer is changed from 1 to 0.
     Only axis 0 is supported.
     The default "axis" value for @ref NESoftmaxLayer, @ref NELogSoftmaxLayer is changed from 1 to 0.
     Only axis 0 is supported.
 - The support for quantized data types has been removed from @ref CLLogSoftmaxLayer due to implementation complexity.
 - Removed padding requirement for the input (e.g. LHS of GEMM) and output in @ref CLGEMMMatrixMultiplyNativeKernel, @ref CLGEMMMatrixMultiplyReshapedKernel, @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel and @ref CLIm2ColKernel (NHWC only)
   - This change allows using @ref CLGEMMConvolutionLayer without extra padding for the input and output.
   - Only the weights/bias of @ref CLGEMMConvolutionLayer might require padding for the computation.
   - Only on Arm Mali Midgard GPUs might @ref CLGEMMConvolutionLayer require padding, since @ref CLGEMMMatrixMultiplyKernel is called and currently requires padding.
 - Added support for exporting the OpenCL buffer object to the OpenCL image object in @ref CLGEMMMatrixMultiplyReshapedKernel and @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.
   - This allows exporting the OpenCL buffer used for the reshaped RHS matrix to an OpenCL image object.
   - The padding requirement for the OpenCL image object is taken into account by @ref CLGEMMReshapeRHSMatrixKernel.
   - The reshaped RHS matrix stores the weights when GEMM is used to accelerate @ref CLGEMMConvolutionLayer.

v20.05 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Updated recommended NDK version to r18b.
 - Updated recommended gcc version to Linaro 6.3.1.
 - Added Bfloat16 type support
 - Added Bfloat16 support in:
   - @ref NEWeightsReshapeKernel
   - @ref NEConvolutionLayerReshapeWeights
   - @ref NEIm2ColKernel
   - @ref NEIm2Col
   - @ref NEDepthConvertLayerKernel
   - @ref NEDepthConvertLayer
   - @ref NEGEMMConvolutionLayer
   - @ref NEGEMMAssemblyDispatch
 - Added new data type QASYMM8_SIGNED support for:
   - @ref CLDirectConvolutionLayer
   - @ref CLDeconvolutionLayer
   - @ref CLDirectDeconvolutionLayer
   - @ref CLGEMMDeconvolutionLayer
   - @ref CLGEMMLowpMatrixMultiplyReshapedKernel
   - @ref CLGEMMLowpQuantizeDownInt32ScaleKernel
   - @ref CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel
   - @ref CLReductionOperation
   - @ref CLReduceMean
   - @ref NEScale
   - @ref NEScaleKernel
   - @ref NEUpsampleLayer
   - @ref NECast
   - @ref NEReductionOperation
   - @ref NEReduceMean
   - @ref NEArgMinMaxLayer
   - @ref NEDeconvolutionLayer
   - @ref NEGEMMLowpQuantizeDownInt32ScaleKernel
   - @ref CPPBoxWithNonMaximaSuppressionLimit
   - @ref CPPDetectionPostProcessLayer
   - @ref CPPPermuteKernel
   - @ref CPPPermute
   - @ref CPPTopKVKernel
   - @ref CPPTopKV
   - @ref CPPUpsample
   - @ref CPPUpsampleKernel
 - New OpenCL kernels / functions:
   - @ref CLQLSTMLayer
   - @ref CLQLSTMLayerNormalizationKernel
 - New NEON kernels / functions:
   - @ref NEQLSTMLayer
   - @ref NEQLSTMLayerNormalizationKernel
 - Added HARD_SWISH support in:
   - @ref CLActivationLayerKernel
   - @ref NEActivationLayerKernel
 - Deprecated OpenCL kernels / functions:
   - CLGEMMLowpQuantizeDownInt32ToUint8Scale
   - CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFloat
 - Deprecated NEON kernels / functions:
   - NEGEMMLowpQuantizeDownInt32ToUint8Scale
 - Removed CPP kernels / functions:
   - CPPFlipWeightsKernel
 - Removed PoolingLayerInfo constructors without Data Layout.
 - Removed CLDepthwiseConvolutionLayer3x3
 - Removed NEDepthwiseConvolutionLayerOptimized
 - Added support for Winograd 3x3,4x4 on NEON FP16:
   - @ref NEWinogradConvolutionLayer
   - @ref NEWinogradLayerTransformInputKernel
   - @ref NEWinogradLayerTransformOutputKernel
   - @ref NEWinogradLayerTransformWeightsKernel
 - Added CLCompileContext
 - Added NEON GEMM kernel with 2D window support

v20.02.1 Maintenance release
 - Added Android-NN build script.

v20.02 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Added new data type QASYMM8_SIGNED support for:
   - @ref CLDepthwiseConvolutionLayer
   - CLDepthwiseConvolutionLayer3x3
   - @ref CLGEMMConvolutionLayer
   - @ref CLGEMMLowpMatrixMultiplyCore
   - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLGEMMLowpMatrixMultiplyNativeKernel
   - @ref NEActivationLayer
   - @ref NEComparisonOperationKernel
   - @ref NEConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer
   - NEDepthwiseConvolutionLayer3x3Kernel
   - @ref NEDirectConvolutionLayerOutputStageKernel
   - @ref NEElementwiseComparison
   - @ref NEElementwiseMax
   - @ref NEElementwiseMin
   - @ref NEElementwiseSquaredDiff
   - @ref NEFullyConnectedLayer
   - NEGEMMMatrixVectorMultiplyKernel
   - @ref NEPixelWiseMultiplication
   - @ref NEPoolingLayer
   - @ref NEPReluLayer
 - Added support for QSYMM8_PER_CHANNEL in:
   - NEDepthwiseConvolutionLayer3x3Kernel
 - Added support for split sizes in:
   - @ref CLSplit
   - @ref NESplit
 - New OpenCL kernels / functions:
   - @ref CLFill
   - CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel / @ref CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint
 - New NEON kernels / functions:
   - @ref NEFill
   - @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint
 - Deprecated functions / interfaces:
   - CLDepthwiseConvolutionLayer3x3
   - NEDepthwiseConvolutionLayerOptimized
   - PoolingLayerInfo constructors without Data Layout.
 - Added support for quantization with multiplier greater than 1 on NEON and CL.
 - Added support for quantized inputs of type QASYMM8_SIGNED and QASYMM8 to @ref CLQuantizationLayer.
 - Added the ability to build bootcode for bare metal.
 - Added support for generating synthetic QASYMM8 graphs.
 - Added support for F16 datatype in VGG16.
 - Removed pre-built binaries for GLES.

v19.11.1 Public maintenance release
 - Fix offset calculation in NEReductionOperationKernel.
 - Fix data layout in NEScaleKernel for nhwc.
 - Retain configuration step data layout to avoid side-effects.
 - Perform sqrt in double domain for L2 pooling.
 - Fix output shape calculation for Reduce Mean.
 - Restrict cases where optimized NEPadLayer runs.

v19.11 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Updated recommended NDK version to r17c.
 - Deprecated OpenCL kernels / functions:
   - CLDepthwiseConvolutionLayerReshapeWeightsGenericKernel
   - CLDepthwiseIm2ColKernel
   - CLDepthwiseSeparableConvolutionLayer
   - CLDepthwiseVectorToTensorKernel
   - CLDirectConvolutionLayerOutputStageKernel
 - Deprecated NEON kernels / functions:
   - NEDepthwiseWeightsReshapeKernel
   - NEDepthwiseIm2ColKernel
   - NEDepthwiseSeparableConvolutionLayer
   - NEDepthwiseVectorToTensorKernel
   - NEDepthwiseConvolutionLayer3x3
 - New OpenCL kernels / functions:
   - @ref CLInstanceNormalizationLayerKernel / @ref CLInstanceNormalizationLayer
   - @ref CLDepthwiseConvolutionLayerNativeKernel to replace the old generic depthwise convolution (see Deprecated
     OpenCL kernels / functions)
   - @ref CLLogSoftmaxLayer
 - New NEON kernels / functions:
   - @ref NEBoundingBoxTransformKernel / @ref NEBoundingBoxTransform
   - @ref NEComputeAllAnchorsKernel / @ref NEComputeAllAnchors
   - @ref NEDetectionPostProcessLayer
   - @ref NEGenerateProposalsLayer
   - @ref NEInstanceNormalizationLayerKernel / @ref NEInstanceNormalizationLayer
   - @ref NELogSoftmaxLayer
   - @ref NEROIAlignLayerKernel / @ref NEROIAlignLayer
 - Added QASYMM8 support for:
   - @ref CLGenerateProposalsLayer
   - @ref CLROIAlignLayer
   - @ref CPPBoxWithNonMaximaSuppressionLimit
 - Added QASYMM16 support for:
   - @ref CLBoundingBoxTransform
 - Added FP16 support for:
   - @ref CLGEMMMatrixMultiplyReshapedKernel
 - Added new data type QASYMM8_PER_CHANNEL support for:
   - @ref CLDequantizationLayer
   - @ref NEDequantizationLayer
 - Added new data type QSYMM8_PER_CHANNEL support for:
   - @ref CLConvolutionLayer
   - @ref NEConvolutionLayer
   - @ref CLDepthwiseConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer
 - Added FP16 mixed-precision support for:
   - @ref CLGEMMMatrixMultiplyReshapedKernel
   - @ref CLPoolingLayerKernel
 - Added FP32 and FP16 ELU activation for:
   - @ref CLActivationLayer
   - @ref NEActivationLayer
 - Added asymmetric padding support for:
   - @ref CLDirectDeconvolutionLayer
   - @ref CLGEMMDeconvolutionLayer
   - @ref NEDeconvolutionLayer
 - Added SYMMETRIC and REFLECT modes for @ref CLPadLayerKernel / @ref CLPadLayer.
 - Replaced the calls to @ref NECopyKernel and @ref NEMemsetKernel with @ref NEPadLayer in @ref NEGenerateProposalsLayer.
 - Replaced the calls to @ref CLCopyKernel and @ref CLMemsetKernel with @ref CLPadLayer in @ref CLGenerateProposalsLayer.
 - Improved performance for CL Inception V3 - FP16.
 - Improved accuracy for CL Inception V3 - FP16 by enabling FP32 accumulator (mixed-precision).
 - Improved NEON performance by enabling fusing batch normalization with convolution and depthwise convolution layer.
 - Improved NEON performance for MobileNet-SSD by speeding up the output detection stage.
 - Optimized @ref CLPadLayer.
 - Optimized CL generic depthwise convolution layer by introducing @ref CLDepthwiseConvolutionLayerNativeKernel.
 - Reduced memory consumption by implementing weights sharing.

v19.08.1 Public maintenance release
 - Fix offset calculation in NEReductionOperationKernel.
 - Fix data layout in NEScaleKernel for nhwc.
 - Retain configuration step data layout to avoid side-effects.
 - Perform sqrt in double domain for L2 pooling.
 - Fix output shape calculation for Reduce Mean.
 - Fix broadcast CLPixelwiseMultiplication with 5D tensors.

v19.08 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Deprecated NEON functions:
   - NEDepthConcatenateLayer
   - NEWidthConcatenateLayer
 - Deprecated OpenCL kernels / functions:
   - CLDepthConcatenateLayer
   - CLGEMMInterleave4x4Kernel / CLGEMMInterleave4x4
   - CLGEMMTranspose1xWKernel / CLGEMMTranspose1xW
   - CLWidthConcatenateLayer
 - New NEON kernels / functions:
   - @ref NEAbsLayer
   - @ref NECast
   - @ref NEElementwisePower
   - @ref NELogLayer
   - @ref NELSTMLayerQuantized
   - @ref NENegLayer
   - @ref NEPReluLayer
   - @ref NESinLayer
   - @ref NEBatchConcatenateLayerKernel
   - @ref NEDepthToSpaceLayerKernel / @ref NEDepthToSpaceLayer
   - @ref NEDepthwiseConvolutionLayerNativeKernel
   - @ref NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
   - @ref NEMeanStdDevNormalizationKernel / @ref NEMeanStdDevNormalizationLayer
   - @ref NESpaceToDepthLayerKernel / @ref NESpaceToDepthLayer
 - New OpenCL kernels / functions:
   - @ref CLAbsLayer
   - @ref CLElementwisePower
   - @ref CLLogLayer
   - @ref CLLSTMLayerQuantized
   - @ref CLNegLayer
   - @ref CLPReluLayer
   - @ref CLSinLayer
   - @ref CLBatchConcatenateLayerKernel
   - @ref CLDepthToSpaceLayerKernel / @ref CLDepthToSpaceLayer
   - @ref CLGEMMLowpMatrixMultiplyNativeKernel
   - CLGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
   - @ref CLGEMMMatrixMultiplyNativeKernel
   - @ref CLMeanStdDevNormalizationKernel / @ref CLMeanStdDevNormalizationLayer
   - @ref CLSpaceToDepthLayerKernel / @ref CLSpaceToDepthLayer
 - New examples:
   - neon_opticalflow
   - cl_cache
   - neon_permute
 - Added support for FP16 in @ref NEDeconvolutionLayer
 - Added support for FP16 in @ref CLDeconvolutionLayer
 - Added support for REDUCE_MIN and REDUCE_MAX in @ref ReductionOperation
 - Enabled the fusion of batch normalization with convolution and depthwise convolution layers for FP32 in the graph API (OpenCL only)
 - Added support for fusing activation function and broadcast addition with the matrix multiplication for FP32 (OpenCL only)
 - Refactored the depthwise convolution layer kernel on NEON for generic cases
 - Added an optimized depthwise convolution layer kernel for 5x5 filters (NEON only)
 - Added support for the OpenCL kernel cache, and an example showing how to load prebuilt OpenCL kernels from a binary cache file
 - Altered @ref QuantizationInfo interface to support per-channel quantization.
 - The CLDepthwiseConvolutionLayer3x3 will be included by @ref CLDepthwiseConvolutionLayer to accommodate future optimizations.
 - The NEDepthwiseConvolutionLayerOptimized will be included by @ref NEDepthwiseConvolutionLayer to accommodate future optimizations.
 - Removed inner_border_right and inner_border_top parameters from @ref CLDeconvolutionLayer interface
 - Removed inner_border_right and inner_border_top parameters from @ref NEDeconvolutionLayer interface
 - Optimized the NEON assembly kernel for GEMMLowp. The new implementation fuses the output stage and quantization with the matrix multiplication kernel

v19.05 Public major release
 - Various bug fixes.
 - Various optimisations.
 - New Neon kernels / functions:
   - @ref NEBatchToSpaceLayerKernel / @ref NEBatchToSpaceLayer
   - @ref NEComplexPixelWiseMultiplicationKernel / @ref NEComplexPixelWiseMultiplication
   - @ref NECropKernel / @ref NECropResize
   - @ref NEDepthwiseConvolutionAssemblyDispatch
   - @ref NEFFTDigitReverseKernel
   - @ref NEFFTRadixStageKernel
   - @ref NEFFTScaleKernel
   - @ref NEGEMMLowpOffsetContributionOutputStageKernel
   - @ref NEHeightConcatenateLayerKernel
   - @ref NESpaceToBatchLayerKernel / @ref NESpaceToBatchLayer
   - @ref NEFFT1D
   - @ref NEFFT2D
   - @ref NEFFTConvolutionLayer
 - New OpenCL kernels / functions:
   - @ref CLComplexPixelWiseMultiplicationKernel / @ref CLComplexPixelWiseMultiplication
   - @ref CLCropKernel / @ref CLCropResize
   - @ref CLDeconvolutionReshapeOutputKernel
   - @ref CLFFTDigitReverseKernel
   - @ref CLFFTRadixStageKernel
   - @ref CLFFTScaleKernel
   - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLHeightConcatenateLayerKernel
   - @ref CLDirectDeconvolutionLayer
   - @ref CLFFT1D
   - @ref CLFFT2D
   - @ref CLFFTConvolutionLayer
   - @ref CLGEMMDeconvolutionLayer
 - New OpenGLES kernels / functions:
   - @ref GCConcatenateLayer
 - Deprecated functions / interfaces:
   - GCDepthConcatenateLayer
   - NEWidthConcatenateLayer
   - NEDepthConcatenateLayer
   - CLWidthConcatenateLayer
   - CLDepthConcatenateLayer
   - CLGEMMInterleave4x4
   - CLGEMMTranspose1xW
 - Support different quantization info in CLConcatLayer.
 - Add checks for cases where different input/output quantization info is not supported.
 - Tensors can have different quantization information.
 - Add FP16 support checks.
 - Fix output quantization in CLDepthwiseConv3x3 when activation is fused.
 - New graph examples:
   - graph_convolution
   - graph_fully_connected
   - graph_depthwise_convolution
   - Deepspeech v0.4.1
 - Add support for QASYMM8 in NEArithmeticSubtractionKernel.
 - Add support for QASYMM8 in NEPixelWiseMultiplicationKernel.
 - Add support for QASYMM8 in NEDeconvolution.
 - Add support for DequantizationLayer for NEON/CL.
 - Add support for dilation in CLDepthwiseConvolution.
 - Fuse offset contribution with the output stage when using NEGEMMLowpMatrixMultiplyCore.
 - Optimize CLDeconvolution.
 - Add StackLayer to the graph API.
 - Add support for "reflect" padding mode in NEPad.
 - Winograd 7x7 NHWC on OpenCL.
 - Rework CL ML layers to run exclusively on CL.
 - Support different quantization info in PoolingLayer.
 - Implement and test import memory interfaces (a minimal sketch is shown after this release note).
 - Added new tests and removed old ones.
 - Various clang-tidy fixes.

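The import-memory item above can be illustrated with the following minimal sketch, which wraps an existing, suitably aligned buffer in a NEON tensor. It is not part of the release note itself: the buffer, shape and data type are illustrative assumptions, and the exact import_memory() signature may vary between releases.

    #include "arm_compute/core/TensorInfo.h"
    #include "arm_compute/runtime/Tensor.h"

    // Hedged sketch: let a tensor use caller-owned memory instead of allocating its own.
    void import_memory_example(void *external_buffer)
    {
        using namespace arm_compute;

        Tensor t;
        t.allocator()->init(TensorInfo(TensorShape(16U, 16U), 1, DataType::F32));

        // After a successful import the tensor reads/writes through external_buffer;
        // no allocate() call is needed and the caller keeps ownership of the memory.
        const Status status = t.allocator()->import_memory(external_buffer);
        if(status.error_code() != ErrorCode::OK)
        {
            // Handle failure, e.g. a null or misaligned buffer.
        }
    }
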
v19.02 Public major release
 - Various bug fixes.
 - Various optimisations.
 - New Neon kernels / functions:
   - @ref NETileKernel / @ref NETile
   - @ref NEFuseBatchNormalizationKernel / @ref NEFuseBatchNormalization
   - @ref NEElementwiseOperationKernel
   - @ref NEElementwiseMax
   - @ref NEElementwiseMin
   - @ref NEElementwiseSquaredDiff
   - @ref NESelectKernel / @ref NESelect
   - @ref NESplit
   - @ref NESlice
   - @ref NEUnstack
   - @ref NEStridedSliceKernel / @ref NEStridedSlice
   - @ref NEElementwiseUnaryKernel
   - @ref NERsqrtLayer
   - @ref NEExpLayer
   - @ref NEReverseKernel / @ref NEReverse
   - @ref NEArgMinMaxLayer
   - @ref NEStackLayerKernel / @ref NEStackLayer
   - @ref NERangeKernel / @ref NERange
   - @ref NEPadLayer
   - @ref NEMemsetKernel
   - @ref NEGatherKernel / @ref NEGather
   - @ref NEElementwiseComparison
   - @ref NEElementwiseComparisonStatic
   - @ref NEComparisonOperationKernel
   - @ref NEElementwiseDivision
 - New OpenCL kernels / functions:
   - @ref CLSelectKernel / @ref CLSelect
   - @ref CLTileKernel / @ref CLTile
   - @ref CLComparisonKernel / @ref CLComparison
   - @ref CLArgMinMaxLayer
   - @ref CLElementwiseMax
   - @ref CLElementwiseMin
   - @ref CLElementwiseSquaredDiff
   - @ref CLStackLayerKernel / @ref CLStackLayer
   - @ref CLReverse / @ref CLReverseKernel
   - @ref CLRsqrtLayer
   - @ref CLExpLayer
   - @ref CLElementWiseUnaryLayerKernel
   - @ref CLGEMMReshapeLHSMatrixKernel
   - @ref CLGEMMReshapeRHSMatrixKernel
   - @ref CLGEMMMatrixMultiplyReshapedKernel
   - @ref CLRangeKernel / @ref CLRange
   - @ref CLUnstack
   - @ref CLGatherKernel / @ref CLGather
   - @ref CLGEMMLowpMatrixMultiplyReshapedKernel
 - New CPP kernels / functions:
   - @ref CPPDetectionOutputLayer
   - @ref CPPTopKV / @ref CPPTopKVKernel
 - Added new examples:
   - graph_ssd_mobilenet.cpp
   - graph_mobilenet_v2.cpp
   - graph_resnet12.cpp
   - graph_srcnn955.cpp
   - graph_vgg_vdsr.cpp
   - graph_inception_resnet_v1.cpp
 - Add 4D tensor support to:
   - @ref NESoftmaxLayer
 - Fused activation in @ref CLWinogradConvolutionLayer
 - Extended @ref NEPermute to support more cases
 - Added NEON/SVE GEMM Hybrid kernels
 - Added u8 and s8 hybrid assembly kernels
 - Introduced GEMM strategy name in NEGEMMAssemblyWrapper
 - Improved @ref CLTuner
 - Fused the bias addition within @ref CLGEMM
 - Added support for QASYMM8 LOGISTIC activation in @ref NEActivationLayer
 - Added NHWC data layout support to:
   - @ref NEScale for F16
   - @ref CLNormalizationLayer IN_MAP_2D for FP32/FP16
   - @ref NEL2NormalizeLayer for FP32/FP16
   - @ref NENormalizationLayer IN_MAP_2D for FP32/FP16
   - @ref CLROIAlignLayer
   - @ref CLGenerateProposalsLayer
 - Added QASYMM8 support to the following kernels:
   - @ref NEArithmeticAdditionKernel
   - @ref NEScale
 - Added new tests and improved validation and benchmarking suites.
 - Deprecated functions / interfaces:
   - Usage of inner_border_right and inner_border_top has been deprecated in @ref CLDeconvolutionLayer and @ref NEDeconvolutionLayer

v18.11 Public major release
 - Various bug fixes.
 - Various optimisations.
 - New Neon kernels / functions:
   - @ref NEChannelShuffleLayer / @ref NEChannelShuffleLayerKernel
   - @ref NEReduceMean
   - @ref NEReorgLayer / @ref NEReorgLayerKernel
   - @ref NEPriorBoxLayer / @ref NEPriorBoxLayerKernel
   - @ref NEUpsampleLayer / @ref NEUpsampleLayerKernel
   - @ref NEYOLOLayer / @ref NEYOLOLayerKernel
 - New OpenCL kernels / functions:
   - @ref CLBatchToSpaceLayer / @ref CLBatchToSpaceLayerKernel
   - @ref CLBoundingBoxTransform / @ref CLBoundingBoxTransformKernel
   - @ref CLComputeAllAnchorsKernel
   - @ref CLGenerateProposalsLayer
   - @ref CLNormalizePlanarYUVLayer / @ref CLNormalizePlanarYUVLayerKernel
   - @ref CLReorgLayer / @ref CLReorgLayerKernel
   - @ref CLSpaceToBatchLayer / @ref CLSpaceToBatchLayerKernel
   - @ref CLPadLayer
   - @ref CLReduceMean
   - @ref CLPriorBoxLayer / @ref CLPriorBoxLayerKernel
   - @ref CLROIAlignLayer / @ref CLROIAlignLayerKernel
   - @ref CLSlice
   - @ref CLSplit
   - @ref CLStridedSlice / @ref CLStridedSliceKernel
   - @ref CLUpsampleLayer / @ref CLUpsampleLayerKernel
   - @ref CLYOLOLayer / @ref CLYOLOLayerKernel
 - New CPP kernels / functions:
   - @ref CPPBoxWithNonMaximaSuppressionLimit / @ref CPPBoxWithNonMaximaSuppressionLimitKernel
 - Added the validate method in:
   - @ref NEDepthConvertLayer
   - @ref NEFloor / @ref CLFloor
   - @ref NEGEMMMatrixAdditionKernel
   - @ref NEReshapeLayer / @ref CLReshapeLayer
   - @ref CLScale
 - Added new examples:
   - graph_shufflenet.cpp
   - graph_yolov3.cpp
 - Added documentation on how to add a new function or kernel.
 - Improved doxygen documentation by adding a list of the existing functions.
 - Add 4D tensor support to:
   - CLWidthConcatenateLayer
   - @ref CLFlattenLayer
   - @ref CLSoftmaxLayer
 - Add dot product support for @ref CLDepthwiseConvolutionLayer3x3NHWCKernel non-unit stride
 - Add SVE support
 - Fused batch normalization into convolution layer weights in @ref CLFuseBatchNormalization
 - Fused activation in @ref CLDepthwiseConvolutionLayer3x3NCHWKernel, @ref CLDepthwiseConvolutionLayer3x3NHWCKernel and @ref NEGEMMConvolutionLayer
 - Added NHWC data layout support to:
   - @ref CLChannelShuffleLayer
   - @ref CLDeconvolutionLayer
   - @ref CLL2NormalizeLayer
 - Added QASYMM8 support to the following kernels:
   - @ref CLScaleKernel
   - NEDepthwiseConvolutionLayer3x3Kernel
   - @ref CLPixelWiseMultiplicationKernel
 - Added FP16 support to the following kernels:
   - @ref CLDepthwiseConvolutionLayer3x3NHWCKernel
   - NEDepthwiseConvolutionLayer3x3Kernel
   - @ref CLNormalizePlanarYUVLayerKernel
   - @ref CLWinogradConvolutionLayer (5x5 kernel)
 - More tests added to both validation and benchmarking suites.

v18.08 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Updated recommended NDK version to r17b.
 - Removed support for QS8/QS16 data types.
 - Added support for grouped convolution in @ref CLConvolutionLayer.
 - Added NHWC data layout support to:
   - NEDepthConcatenateLayer / CLDepthConcatenateLayer
   - @ref NEWinogradConvolutionLayer / @ref CLWinogradConvolutionLayer
   - @ref CLDepthwiseConvolutionLayer
   - @ref CLDirectConvolutionLayer
   - @ref CLConvolutionLayer
   - @ref CLScale
   - @ref CLIm2ColKernel
 - New Neon kernels / functions:
   - @ref NERNNLayer
 - New OpenCL kernels / functions:
   - @ref CLArithmeticDivision
 - Introduced prepare() stage support in the graph API for GLES.
 - Added support for memory reuse when trying to allocate smaller CLTensors.
 - Enabled NHWC execution on graph examples.
 - Added JPEG accessor for validation purposes.
 - Added validate methods to some kernels / functions.

v18.05 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Major redesign of the interface for the NEON kernels implemented in assembly.
 - Removed arm_compute::NEGEMMLowpAArch64A53Kernel / arm_compute::NEGEMMLowpAArch64Kernel / arm_compute::NEGEMMLowpAArch64V8P4Kernel / arm_compute::NEGEMMInterleavedBlockedKernel / arm_compute::NEGEMMLowpAssemblyMatrixMultiplyCore / arm_compute::NEHGEMMAArch64FP16Kernel
 - Added NEGEMMAssemblyWrapper and AssemblyKernelGlue, which are used to execute assembly kernels in NEON functions.
 - Minor changes to the CPUInfo type to make it compatible with the new assembly GEMM interface.
 - Moved NEON assembly kernels to the folder src/core/NEON/kernels/arm_gemm.
 - Improved doxygen documentation.
 - Improved memory management for layer transitions.
 - Added support for NHWC data layout in tensors.
 - Added NHWC data layout support to:
   - @ref NEGEMMConvolutionLayer
   - @ref NEDirectConvolutionLayer
   - @ref NEPoolingLayer / @ref CLPoolingLayer
   - @ref NEBatchNormalizationLayer / @ref CLBatchNormalizationLayer
   - @ref NEDepthwiseConvolutionLayer
   - @ref NEScale
   - @ref NEIm2Col
 - Added support for dilated convolutions in @ref NEConvolutionLayer and @ref CLConvolutionLayer.
 - New OpenCL kernels / functions:
   - @ref CLChannelShuffleLayer / @ref CLChannelShuffleLayerKernel
   - @ref CLConvertFullyConnectedWeightsKernel / @ref CLConvertFullyConnectedWeights
   - @ref CLCopy / @ref CLCopyKernel
   - @ref CLLSTMLayer
   - @ref CLRNNLayer
   - CLWidthConcatenateLayer / @ref CLWidthConcatenateLayerKernel
   - @ref CLWinogradFilterTransformKernel / @ref CLWinogradInputTransformKernel / @ref CLWinogradConvolutionLayer
   - @ref CLWinogradInputTransformKernel / @ref CLWinogradInputTransform
 - New Neon kernels / functions:
   - @ref NEConvertFullyConnectedWeightsKernel / @ref NEConvertFullyConnectedWeights.
 - Created the validate method in @ref CLDepthwiseConvolutionLayer.
 - Beta and gamma are no longer mandatory arguments in @ref NEBatchNormalizationLayer and @ref CLBatchNormalizationLayer.
 - Added depth multiplier support in @ref NEDepthwiseConvolutionLayer and @ref CLDepthwiseConvolutionLayer.
 - Added broadcast multiply support in @ref NEPixelWiseMultiplication / @ref NEPixelWiseMultiplicationKernel.
 - Ported the mobilenet example to NHWC data layout.
 - Enabled Winograd method in @ref CLConvolutionLayer.
 - Renamed NEWinogradLayer to @ref NEWinogradConvolutionLayer.
 - Updated @ref NEWinogradConvolutionLayer to use highly optimised assembly kernels in src/core/NEON/kernels/arm_gemm.
 - Added memory manager support in GLES functions.
 - Major refactoring of the graph API.
 - Added GLES backend in the graph API.
 - Added support for the memory manager in the graph API.
 - Enabled Winograd Convolution method in the graph API.
 - Added support for grouped convolutions in the graph API.
 - Replaced NEDeconvolutionLayerUpsampleKernel with @ref NEScaleKernel in @ref NEDeconvolutionLayer.
 - Added fast maths flag in @ref CLConvolutionLayer.
 - Added new tests and benchmarks in validation and benchmark frameworks
 - Merged the Activation layer with the Convolution layer (NEON, CL, GLES)
 - Added support for OpenCL 2.0 SVM
 - Added support for importing memory into OpenCL tensors.
 - Added the prepare() method to perform any one-off pre-processing before running the function.
 - Added new examples:
   - graph_inception_v4.cpp
   - graph_resnext50.cpp
 - Added memory measurement instrument for CL.

v18.03 Public maintenance release
 - Various bug fixes.
 - Fixed bug in @ref NEActivationLayer
 - Fix in @ref CLTuner when using batches.
 - Updated recommended NDK version to r16b (And fixed warnings).
 - Fixed bug in validation code.
 - Added Inception v4 graph example.
 - Renamed NEWinogradLayer.cpp to @ref NEWinogradConvolutionLayer

v18.02 Public major release
 - Various NEON / OpenCL / GLES optimisations.
 - Various bug fixes.
 - Changed default number of threads on big.LITTLE systems.
 - Refactored examples and added:
   - graph_mobilenet_qassym8
   - graph_resnet
   - graph_squeezenet_v1_1
 - Renamed @ref CLConvolutionLayer into @ref CLGEMMConvolutionLayer and created a new @ref CLConvolutionLayer to select the fastest convolution method.
 - Renamed @ref NEConvolutionLayer into @ref NEGEMMConvolutionLayer and created a new @ref NEConvolutionLayer to select the fastest convolution method.
 - Added in place support to:
   - @ref CLActivationLayer
   - @ref CLBatchNormalizationLayer
 - Added QASYMM8 support to:
   - @ref CLActivationLayer
   - @ref CLDepthwiseConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer
   - @ref NESoftmaxLayer
 - Added FP16 support to:
   - CLDepthwiseConvolutionLayer3x3
   - @ref CLDepthwiseConvolutionLayer
 - Added broadcasting support to @ref NEArithmeticAddition / @ref CLArithmeticAddition / @ref CLPixelWiseMultiplication
 - Added fused batch normalization and activation to @ref CLBatchNormalizationLayer and @ref NEBatchNormalizationLayer
 - Added support for non-square pooling to @ref NEPoolingLayer and @ref CLPoolingLayer
 - New OpenCL kernels / functions:
   - CLDirectConvolutionLayerOutputStageKernel
 - New NEON kernels / functions:
   - Added name() method to all kernels.
   - Added support for Winograd 5x5.
   - @ref NEPermuteKernel / @ref NEPermute
   - @ref NEWinogradLayerTransformInputKernel / NEWinogradLayer
   - @ref NEWinogradLayerTransformOutputKernel / NEWinogradLayer
   - @ref NEWinogradLayerTransformWeightsKernel / NEWinogradLayer
   - Renamed NEWinogradLayerKernel into NEWinogradLayerBatchedGEMMKernel
 - New GLES kernels / functions:
   - @ref GCTensorShiftKernel / @ref GCTensorShift

v18.01 Public maintenance release
 - Various bug fixes
 - Added some of the missing validate() methods
 - Added @ref CLDeconvolutionLayerUpsampleKernel / @ref CLDeconvolutionLayer @ref CLDeconvolutionLayerUpsample
 - Added @ref CLPermuteKernel / @ref CLPermute
 - Added method to clean the programs cache in the CL Kernel library.
 - Added @ref GCArithmeticAdditionKernel / @ref GCArithmeticAddition
 - Added @ref GCDepthwiseConvolutionLayer3x3Kernel / @ref GCDepthwiseConvolutionLayer3x3
 - Added @ref GCNormalizePlanarYUVLayerKernel / @ref GCNormalizePlanarYUVLayer
 - Added @ref GCScaleKernel / @ref GCScale
 - Added @ref GCWeightsReshapeKernel / @ref GCConvolutionLayer
 - Added FP16 support to the following GLES compute kernels:
   - @ref GCCol2ImKernel
   - @ref GCGEMMInterleave4x4Kernel
   - @ref GCGEMMTranspose1xWKernel
   - @ref GCIm2ColKernel
 - Refactored NEON Winograd (NEWinogradLayerKernel)
 - Added @ref NEDirectConvolutionLayerOutputStageKernel
 - Added QASYMM8 support to the following NEON kernels:
   - NEDepthwiseConvolutionLayer3x3Kernel
   - @ref NEFillBorderKernel
   - @ref NEPoolingLayerKernel
 - Added new examples:
   - graph_cl_mobilenet_qasymm8.cpp
   - graph_inception_v3.cpp
   - gc_dc.cpp
 - More tests added to both validation and benchmarking suites.

v17.12 Public major release
 - Most machine learning functions on OpenCL support the new data type QASYMM8
 - Introduced logging interface
 - Introduced OpenCL timer
 - Reworked GEMMLowp interface
 - Added new NEON assembly kernels for GEMMLowp, SGEMM and HGEMM
 - Added validation method for most Machine Learning kernels / functions
 - Added new graph examples such as googlenet, mobilenet, squeezenet, vgg16 and vgg19
 - Added sgemm example for OpenCL
 - Added absolute difference example for GLES compute
 - Added new tests and benchmarks in validation and benchmark frameworks
 - Added new kernels / functions for GLES compute

 - New OpenGL ES kernels / functions
   - @ref GCAbsoluteDifferenceKernel / @ref GCAbsoluteDifference
   - @ref GCActivationLayerKernel / @ref GCActivationLayer
   - @ref GCBatchNormalizationLayerKernel / @ref GCBatchNormalizationLayer
   - @ref GCCol2ImKernel
   - @ref GCDepthConcatenateLayerKernel / GCDepthConcatenateLayer
   - @ref GCDirectConvolutionLayerKernel / @ref GCDirectConvolutionLayer
   - @ref GCDropoutLayerKernel / @ref GCDropoutLayer
   - @ref GCFillBorderKernel / @ref GCFillBorder
   - @ref GCGEMMInterleave4x4Kernel / @ref GCGEMMInterleave4x4
   - @ref GCGEMMMatrixAccumulateBiasesKernel / @ref GCGEMMMatrixAdditionKernel / @ref GCGEMMMatrixMultiplyKernel / @ref GCGEMM
   - @ref GCGEMMTranspose1xWKernel / @ref GCGEMMTranspose1xW
   - @ref GCIm2ColKernel
   - @ref GCNormalizationLayerKernel / @ref GCNormalizationLayer
   - @ref GCPixelWiseMultiplicationKernel / @ref GCPixelWiseMultiplication
   - @ref GCPoolingLayerKernel / @ref GCPoolingLayer
   - @ref GCLogits1DMaxKernel / @ref GCLogits1DShiftExpSumKernel / @ref GCLogits1DNormKernel / @ref GCSoftmaxLayer
   - @ref GCTransposeKernel / @ref GCTranspose

 - New NEON kernels / functions
   - arm_compute::NEGEMMLowpAArch64A53Kernel / arm_compute::NEGEMMLowpAArch64Kernel / arm_compute::NEGEMMLowpAArch64V8P4Kernel / arm_compute::NEGEMMInterleavedBlockedKernel / arm_compute::NEGEMMLowpAssemblyMatrixMultiplyCore
   - arm_compute::NEHGEMMAArch64FP16Kernel
   - NEDepthwiseConvolutionLayer3x3Kernel / NEDepthwiseIm2ColKernel / NEGEMMMatrixVectorMultiplyKernel / NEDepthwiseVectorToTensorKernel / @ref NEDepthwiseConvolutionLayer
   - @ref NEGEMMLowpOffsetContributionKernel / @ref NEGEMMLowpMatrixAReductionKernel / @ref NEGEMMLowpMatrixBReductionKernel / @ref NEGEMMLowpMatrixMultiplyCore
   - @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
   - NEWinogradLayer / NEWinogradLayerKernel

 - New OpenCL kernels / functions
   - @ref CLGEMMLowpOffsetContributionKernel / @ref CLGEMMLowpMatrixAReductionKernel / @ref CLGEMMLowpMatrixBReductionKernel / @ref CLGEMMLowpMatrixMultiplyCore
   - CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel / @ref CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint

 - New graph nodes for NEON and OpenCL
   - graph::BranchLayer
   - graph::DepthConvertLayer
   - graph::DepthwiseConvolutionLayer
   - graph::DequantizationLayer
   - graph::FlattenLayer
   - graph::QuantizationLayer
   - graph::ReshapeLayer

v17.10 Public maintenance release
 - Bug fixes:
   - Check the maximum local workgroup size supported by OpenCL devices
   - Minor documentation updates (Fixed instructions to build the examples)
 - Introduced a graph::GraphContext
 - Added a few new Graph nodes, support for branches and grouping.
 - Automatically enable cl_printf in debug builds
 - Fixed bare metal builds for armv7a
 - Added AlexNet and cartoon effect examples
 - Fixed library builds: libraries are no longer built as supersets of each other. (This means an application using the Runtime part of the library now needs to link against both libarm_compute_core and libarm_compute.)

v17.09 Public major release
 - Experimental Graph support: initial implementation of a simple stream API to easily chain machine learning layers.
 - Memory Manager (@ref BlobLifetimeManager, @ref BlobMemoryPool, @ref ILifetimeManager, @ref IMemoryGroup, @ref IMemoryManager, @ref IMemoryPool, @ref IPoolManager, @ref MemoryManagerOnDemand, @ref PoolManager)
 - New validation and benchmark frameworks (Boost and Google frameworks replaced by homemade framework).
 - Most machine learning functions support both fixed point 8 and 16 bit (QS8, QS16) for both NEON and OpenCL.
 - New NEON kernels / functions:
   - arm_compute::NEGEMMAssemblyBaseKernel arm_compute::NEGEMMAArch64Kernel
   - @ref NEDequantizationLayerKernel / @ref NEDequantizationLayer
   - @ref NEFloorKernel / @ref NEFloor
   - @ref NEL2NormalizeLayerKernel / @ref NEL2NormalizeLayer
   - @ref NEQuantizationLayerKernel @ref NEMinMaxLayerKernel / @ref NEQuantizationLayer
   - @ref NEROIPoolingLayerKernel / @ref NEROIPoolingLayer
   - @ref NEReductionOperationKernel / @ref NEReductionOperation
   - @ref NEReshapeLayerKernel / @ref NEReshapeLayer

 - New OpenCL kernels / functions:
   - @ref CLDepthwiseConvolutionLayer3x3NCHWKernel @ref CLDepthwiseConvolutionLayer3x3NHWCKernel CLDepthwiseIm2ColKernel CLDepthwiseVectorToTensorKernel CLDepthwiseWeightsReshapeKernel / CLDepthwiseConvolutionLayer3x3 @ref CLDepthwiseConvolutionLayer CLDepthwiseSeparableConvolutionLayer
   - @ref CLDequantizationLayerKernel / @ref CLDequantizationLayer
   - @ref CLDirectConvolutionLayerKernel / @ref CLDirectConvolutionLayer
   - @ref CLFlattenLayer
   - @ref CLFloorKernel / @ref CLFloor
   - CLGEMMTranspose1xW
   - @ref CLGEMMMatrixVectorMultiplyKernel
   - @ref CLL2NormalizeLayerKernel / @ref CLL2NormalizeLayer
   - @ref CLQuantizationLayerKernel @ref CLMinMaxLayerKernel / @ref CLQuantizationLayer
   - @ref CLROIPoolingLayerKernel / @ref CLROIPoolingLayer
   - @ref CLReductionOperationKernel / @ref CLReductionOperation
   - @ref CLReshapeLayerKernel / @ref CLReshapeLayer

Anthony Barbier6ff3b192017-09-04 18:44:23 +0100927v17.06 Public major release
928 - Various bug fixes
929 - Added support for fixed point 8 bit (QS8) to the various NEON machine learning kernels.
930 - Added unit tests and benchmarks (AlexNet, LeNet)
931 - Added support for sub tensors.
932 - Added infrastructure to provide GPU specific optimisation for some OpenCL kernels.
Anthony Barbier3762e742018-03-02 11:49:33 +0000933 - Added @ref OMPScheduler (OpenMP) scheduler for NEON
934 - Added @ref SingleThreadScheduler scheduler for NEON (For bare metal)
935 - User can specify his own scheduler by implementing the @ref IScheduler interface.
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100936 - New OpenCL kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000937 - @ref CLBatchNormalizationLayerKernel / @ref CLBatchNormalizationLayer
Georgios Pinitas09f24972019-05-17 18:14:40 +0100938 - @ref CLDepthConcatenateLayerKernel / CLDepthConcatenateLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000939 - @ref CLHOGOrientationBinningKernel @ref CLHOGBlockNormalizationKernel, @ref CLHOGDetectorKernel / @ref CLHOGDescriptor @ref CLHOGDetector @ref CLHOGGradient @ref CLHOGMultiDetection
940 - @ref CLLocallyConnectedMatrixMultiplyKernel / @ref CLLocallyConnectedLayer
941 - @ref CLWeightsReshapeKernel / @ref CLConvolutionLayerReshapeWeights
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100942 - New C++ kernels:
Anthony Barbier3762e742018-03-02 11:49:33 +0000943 - @ref CPPDetectionWindowNonMaximaSuppressionKernel
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100944 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000945 - @ref NEBatchNormalizationLayerKernel / @ref NEBatchNormalizationLayer
Georgios Pinitas09f24972019-05-17 18:14:40 +0100946 - @ref NEDepthConcatenateLayerKernel / NEDepthConcatenateLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000947 - @ref NEDirectConvolutionLayerKernel / @ref NEDirectConvolutionLayer
948 - @ref NELocallyConnectedMatrixMultiplyKernel / @ref NELocallyConnectedLayer
949 - @ref NEWeightsReshapeKernel / @ref NEConvolutionLayerReshapeWeights
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100950
951v17.05 Public bug fixes release
952 - Various bug fixes
 - Remaining functions ported to use accurate padding.
 - The library no longer links against OpenCL at build time (it uses dlopen / dlsym at runtime instead to determine whether or not OpenCL is available).
955 - Added "free" method to allocator.
956 - Minimum version of g++ required for armv7 Linux changed from 4.8 to 4.9
957
958v17.04 Public bug fixes release
959
960 The following functions have been ported to use the new accurate padding:
Anthony Barbier3762e742018-03-02 11:49:33 +0000961 - @ref CLColorConvertKernel
962 - @ref CLEdgeNonMaxSuppressionKernel
963 - @ref CLEdgeTraceKernel
964 - @ref CLGaussianPyramidHorKernel
965 - @ref CLGaussianPyramidVertKernel
966 - @ref CLGradientKernel
967 - @ref NEChannelCombineKernel
968 - @ref NEFillArrayKernel
969 - @ref NEGaussianPyramidHorKernel
970 - @ref NEGaussianPyramidVertKernel
Georgios Pinitas09d34512018-08-30 16:02:11 +0100971 - NEHarrisScoreFP16Kernel
Anthony Barbier3762e742018-03-02 11:49:33 +0000972 - @ref NEHarrisScoreKernel
973 - @ref NEHOGDetectorKernel
974 - @ref NELogits1DMaxKernel
975 - NELogits1DShiftExpSumKernel
976 - NELogits1DNormKernel
977 - @ref NENonMaximaSuppression3x3FP16Kernel
978 - @ref NENonMaximaSuppression3x3Kernel
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100979
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100980v17.03.1 First Major public release of the sources
981 - Renamed the library to arm_compute
982 - New CPP target introduced for C++ kernels shared between NEON and CL functions.
983 - New padding calculation interface introduced and ported most kernels / functions to use it.
984 - New OpenCL kernels / functions:
Gian Marco Iodiceeb65f6d2020-04-15 11:42:15 +0100985 - CLGEMMLowpMatrixMultiplyKernel / CLGEMMLowp
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100986 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000987 - @ref NENormalizationLayerKernel / @ref NENormalizationLayer
988 - @ref NETransposeKernel / @ref NETranspose
989 - @ref NELogits1DMaxKernel, NELogits1DShiftExpSumKernel, NELogits1DNormKernel / @ref NESoftmaxLayer
990 - @ref NEIm2ColKernel, @ref NECol2ImKernel, NEConvolutionLayerWeightsReshapeKernel / @ref NEConvolutionLayer
Michele Di Giorgiof22f6722020-07-03 16:29:24 +0100991 - NEGEMMMatrixAccumulateBiasesKernel / @ref NEFullyConnectedLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000992 - @ref NEGEMMLowpMatrixMultiplyKernel / NEGEMMLowp
Anthony Barbier6ff3b192017-09-04 18:44:23 +0100993
994v17.03 Sources preview
995 - New OpenCL kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +0000996 - @ref CLGradientKernel, @ref CLEdgeNonMaxSuppressionKernel, @ref CLEdgeTraceKernel / @ref CLCannyEdge
Gian Marco Iodice57a89612019-08-22 14:10:27 +0100997 - GEMM refactoring + FP16 support: CLGEMMInterleave4x4Kernel, CLGEMMTranspose1xWKernel, @ref CLGEMMMatrixMultiplyKernel, CLGEMMMatrixAdditionKernel / @ref CLGEMM
Michele Di Giorgiof6f78762020-07-06 11:27:21 +0100998 - CLGEMMMatrixAccumulateBiasesKernel / @ref CLFullyConnectedLayer
Anthony Barbier3762e742018-03-02 11:49:33 +0000999 - @ref CLTransposeKernel / @ref CLTranspose
1000 - @ref CLLKTrackerInitKernel, @ref CLLKTrackerStage0Kernel, @ref CLLKTrackerStage1Kernel, @ref CLLKTrackerFinalizeKernel / @ref CLOpticalFlow
1001 - @ref CLNormalizationLayerKernel / @ref CLNormalizationLayer
1002 - @ref CLLaplacianPyramid, @ref CLLaplacianReconstruct
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001003 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001004 - @ref NEActivationLayerKernel / @ref NEActivationLayer
1005 - GEMM refactoring + FP16 support (Requires armv8.2 CPU): @ref NEGEMMInterleave4x4Kernel, @ref NEGEMMTranspose1xWKernel, @ref NEGEMMMatrixMultiplyKernel, @ref NEGEMMMatrixAdditionKernel / @ref NEGEMM
1006 - @ref NEPoolingLayerKernel / @ref NEPoolingLayer
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001007
1008v17.02.1 Sources preview
1009 - New OpenCL kernels / functions:
Michele Di Giorgiof6f78762020-07-06 11:27:21 +01001010 - CLLogits1DMaxKernel, CLLogits1DShiftExpSumKernel, @ref CLLogits1DNormKernel / @ref CLSoftmaxLayer
Anthony Barbier3762e742018-03-02 11:49:33 +00001011 - @ref CLPoolingLayerKernel / @ref CLPoolingLayer
1012 - @ref CLIm2ColKernel, @ref CLCol2ImKernel, CLConvolutionLayerWeightsReshapeKernel / @ref CLConvolutionLayer
1013 - @ref CLRemapKernel / @ref CLRemap
1014 - @ref CLGaussianPyramidHorKernel, @ref CLGaussianPyramidVertKernel / @ref CLGaussianPyramid, @ref CLGaussianPyramidHalf, @ref CLGaussianPyramidOrb
1015 - @ref CLMinMaxKernel, @ref CLMinMaxLocationKernel / @ref CLMinMaxLocation
1016 - @ref CLNonLinearFilterKernel / @ref CLNonLinearFilter
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001017 - New NEON FP16 kernels (Requires armv8.2 CPU)
Anthony Barbier3762e742018-03-02 11:49:33 +00001018 - @ref NEAccumulateWeightedFP16Kernel
1019 - @ref NEBox3x3FP16Kernel
1020 - @ref NENonMaximaSuppression3x3FP16Kernel
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001021
1022v17.02 Sources preview
1023 - New OpenCL kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001024 - @ref CLActivationLayerKernel / @ref CLActivationLayer
1025 - @ref CLChannelCombineKernel / @ref CLChannelCombine
1026 - @ref CLDerivativeKernel / @ref CLChannelExtract
1027 - @ref CLFastCornersKernel / @ref CLFastCorners
1028 - @ref CLMeanStdDevKernel / @ref CLMeanStdDev
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001029 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001030 - HOG / SVM: @ref NEHOGOrientationBinningKernel, @ref NEHOGBlockNormalizationKernel, @ref NEHOGDetectorKernel, NEHOGNonMaximaSuppressionKernel / @ref NEHOGDescriptor, @ref NEHOGDetector, @ref NEHOGGradient, @ref NEHOGMultiDetection
1031 - @ref NENonLinearFilterKernel / @ref NENonLinearFilter
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001032 - Introduced a CLScheduler to manage the default context and command queue used by the runtime library and create synchronisation events.
1033 - Switched all the kernels / functions to use tensors instead of images.
1034 - Updated documentation to include instructions to build the library from sources.
1035
1036v16.12 Binary preview release
1037 - Original release
1038
1039@section S3_how_to_build How to build the library and the examples
1040
1041@subsection S3_1_build_options Build options
1042
1043scons 2.3 or above is required to build the library.
1044To see the build options available simply run ```scons -h```:
1045
Anthony Barbier79c61782017-06-23 11:48:24 +01001046 debug: Debug (yes|no)
1047 default: False
1048 actual: False
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001049
Anthony Barbier79c61782017-06-23 11:48:24 +01001050 asserts: Enable asserts (this flag is forced to 1 for debug=1) (yes|no)
1051 default: False
1052 actual: False
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001053
Anthony Barbier79c61782017-06-23 11:48:24 +01001054 arch: Target Architecture (armv7a|arm64-v8a|arm64-v8.2-a|x86_32|x86_64)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001055 default: armv7a
1056 actual: armv7a
1057
Anthony Barbier79c61782017-06-23 11:48:24 +01001058 os: Target OS (linux|android|bare_metal)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001059 default: linux
1060 actual: linux
1061
Anthony Barbier2d0ce772018-02-21 15:35:36 +00001062 build: Build type (native|cross_compile|embed_only)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001063 default: cross_compile
1064 actual: cross_compile
1065
Anthony Barbier79c61782017-06-23 11:48:24 +01001066 examples: Build example programs (yes|no)
1067 default: True
1068 actual: True
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001069
Anthony Barbier79c61782017-06-23 11:48:24 +01001070 Werror: Enable/disable the -Werror compilation flag (yes|no)
1071 default: True
1072 actual: True
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001073
Anthony Barbier79c61782017-06-23 11:48:24 +01001074 opencl: Enable OpenCL support (yes|no)
1075 default: True
1076 actual: True
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001077
Anthony Barbier79c61782017-06-23 11:48:24 +01001078 neon: Enable Neon support (yes|no)
1079 default: False
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001080 actual: False
1081
Anthony Barbier20dbb822017-12-13 21:19:39 +00001082 gles_compute: Enable OpenGL ES Compute Shader support (yes|no)
1083 default: False
1084 actual: False
1085
1086 embed_kernels: Embed OpenCL kernels and OpenGL ES compute shader in library binary (yes|no)
Anthony Barbiercc0a80b2017-12-15 11:37:29 +00001087 default: True
1088 actual: True
Anthony Barbier79c61782017-06-23 11:48:24 +01001089
1090 set_soname: Set the library's soname and shlibversion (requires SCons 2.4 or above) (yes|no)
1091 default: False
1092 actual: False
1093
1094 openmp: Enable OpenMP backend (yes|no)
1095 default: False
1096 actual: False
1097
1098 cppthreads: Enable C++11 threads backend (yes|no)
1099 default: True
1100 actual: True
1101
1102 build_dir: Specify sub-folder for the build ( /path/to/build_dir )
1103 default: .
1104 actual: .
1105
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001106 extra_cxx_flags: Extra CXX flags to be appended to the build command
1107 default:
1108 actual:
1109
Anthony Barbier79c61782017-06-23 11:48:24 +01001110 pmu: Enable PMU counters (yes|no)
1111 default: False
1112 actual: False
1113
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001114 mali: Enable Mali hardware counters (yes|no)
1115 default: False
1116 actual: False
1117
Anthony Barbier79c61782017-06-23 11:48:24 +01001118 validation_tests: Build validation test programs (yes|no)
1119 default: False
1120 actual: False
1121
1122 benchmark_tests: Build benchmark test programs (yes|no)
1123 default: False
1124 actual: False
1125
1126@b debug / @b asserts:
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001127 - With debug=1 asserts are enabled, and the library is built with symbols and no optimisations enabled.
1128 - With debug=0 and asserts=1: Optimisations are enabled and symbols are removed, however all the asserts are still present (This is about 20% slower than the release build)
 - With debug=0 and asserts=0: All optimisations are enabled and no validation is performed; if the application misuses the library it is likely to crash. (Only use this mode once you are sure your application works as expected.)
1130
Anthony Barbier79c61782017-06-23 11:48:24 +01001131@b arch: The x86_32 and x86_64 targets can only be used with neon=0 and opencl=1.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001132
Anthony Barbier79c61782017-06-23 11:48:24 +01001133@b os: Choose the operating system you are targeting: Linux, Android or bare metal.
@note bare metal can only be used for NEON (not OpenCL); only static libraries get built and NEON's multi-threading support is disabled.
1135
Anthony Barbier79c61782017-06-23 11:48:24 +01001136@b build: you can either build directly on your device (native) or cross compile from your desktop machine (cross-compile). In both cases make sure the compiler is available in your path.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001137
Anthony Barbier79c61782017-06-23 11:48:24 +01001138@note If you want to natively compile for 32bit on a 64bit ARM device running a 64bit OS then you will have to use cross-compile too.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001139
Anthony Barbier2d0ce772018-02-21 15:35:36 +00001140There is also an 'embed_only' option which will generate all the .embed files for the OpenCL kernels and / or OpenGLES compute shaders. This might be useful if using a different build system to compile the library.
1141
@b Werror: If you are compiling using the same toolchains as the ones used in this guide then there shouldn't be any warnings and you should be able to keep Werror=1. If the library fails to build with a different compiler version because of warnings interpreted as errors then, provided you are sure the warnings are not important, you can try building with Werror=0. (But please report the issue either on GitHub or by email to developer@arm.com so that it can be addressed.)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001143
Anthony Barbier20dbb822017-12-13 21:19:39 +00001144@b opencl / @b neon / @b gles_compute: Choose which SIMD technology you want to target. (NEON for ARM Cortex-A CPUs or OpenCL / GLES_COMPUTE for ARM Mali GPUs)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001145
Anthony Barbier20dbb822017-12-13 21:19:39 +00001146@b embed_kernels: For OpenCL / GLES_COMPUTE only: set embed_kernels=1 if you want the OpenCL / GLES_COMPUTE kernels to be built in the library's binaries instead of being read from separate ".cl" / ".cs" files. If embed_kernels is set to 0 then the application can set the path to the folder containing the OpenCL / GLES_COMPUTE kernel files by calling CLKernelLibrary::init() / GCKernelLibrary::init(). By default the path is set to "./cl_kernels" / "./cs_shaders".
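For reference, the snippet below is a minimal sketch of what this looks like from application code when the library was built with embed_kernels=0. The kernel path and the exact CLKernelLibrary entry point used here are assumptions to check against your release, as the available overloads have changed over time.

@code{.cpp}
#include "arm_compute/core/CL/CLKernelLibrary.h"
#include "arm_compute/runtime/CL/CLScheduler.h"

using namespace arm_compute;

int main()
{
    // Create the default OpenCL context and command queue.
    CLScheduler::get().default_init();

    // Point the kernel library at the folder holding the .cl sources.
    // "./cl_kernels/" is the default path mentioned above; adjust it to wherever the files were installed.
    CLKernelLibrary::get().set_kernel_path("./cl_kernels/");

    return 0;
}
@endcode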
Anthony Barbier79c61782017-06-23 11:48:24 +01001147
@b set_soname: Whether to build the versioned version of the library.
1149
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001150If enabled the library will contain a SONAME and SHLIBVERSION and some symlinks will automatically be created between the objects.
1151Example:
1152 libarm_compute_core.so -> libarm_compute_core.so.1.0.0
1153 libarm_compute_core.so.1 -> libarm_compute_core.so.1.0.0
1154 libarm_compute_core.so.1.0.0
1155
@note This option is disabled by default as it requires SCons version 2.4 or above.
1157
Anthony Barbier79c61782017-06-23 11:48:24 +01001158@b extra_cxx_flags: Custom CXX flags which will be appended to the end of the build command.
1159
@b build_dir: Build the library in a subfolder of the "build" folder. (Allows several configurations to be built in parallel.)
1161
@b examples: Whether or not to build the examples.
1163
1164@b validation_tests: Enable the build of the validation suite.
1165
@b benchmark_tests: Enable the build of the benchmark tests.
1167
1168@b pmu: Enable the PMU cycle counter to measure execution time in benchmark tests. (Your device needs to support it)
1169
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001170@b mali: Enable the collection of Mali hardware counters to measure execution time in benchmark tests. (Your device needs to have a Mali driver that supports it)
1171
@b openmp: Build in the OpenMP scheduler for NEON.
1173
@note Only works when building with g++, not clang++.
1175
@b cppthreads: Build in the C++11 scheduler for NEON.
1177
Anthony Barbier3762e742018-03-02 11:49:33 +00001178@sa Scheduler::set
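
As a reference, below is a minimal sketch (assuming the relevant backend was enabled at build time) of how an application can select one of the built-in schedulers at runtime:

@code{.cpp}
#include "arm_compute/runtime/Scheduler.h"

using namespace arm_compute;

int main()
{
    // Use the OpenMP scheduler (requires openmp=1). Other options are
    // Scheduler::Type::CPP (the default, requires cppthreads=1) and Scheduler::Type::ST (single-threaded).
    Scheduler::set(Scheduler::Type::OMP);

    // Optionally cap the number of worker threads used by the active scheduler.
    Scheduler::get().set_num_threads(4);

    return 0;
}
@endcode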
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001179
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001180@subsection S3_2_linux Building for Linux
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001181
1182@subsubsection S3_2_1_library How to build the library ?
1183
1184For Linux, the library was successfully built and tested using the following Linaro GCC toolchain:
1185
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001186 - gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
1187 - gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001188
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001189To cross-compile the library in debug mode, with NEON only support, for Linux 32bit:
1190
1191 scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=linux arch=armv7a
1192
1193To cross-compile the library in asserts mode, with OpenCL only support, for Linux 64bit:
1194
1195 scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=linux arch=arm64-v8a
1196
Anthony Barbier20dbb822017-12-13 21:19:39 +00001197To cross-compile the library in asserts mode, with GLES_COMPUTE only support, for Linux 64bit:
1198
1199 scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=0 gles_compute=1 embed_kernels=1 os=linux arch=arm64-v8a
1200
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001201You can also compile the library natively on an ARM device by using <b>build=native</b>:
1202
1203 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=arm64-v8a build=native
1204 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=native
1205
1206@note g++ for ARM is mono-arch, therefore if you want to compile for Linux 32bit on a Linux 64bit platform you will have to use a cross compiler.
1207
1208For example on a 64bit Debian based system you would have to install <b>g++-arm-linux-gnueabihf</b>
1209
1210 apt-get install g++-arm-linux-gnueabihf
1211
1212Then run
1213
1214 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=cross_compile
1215
1216or simply remove the build parameter as build=cross_compile is the default value:
1217
1218 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a
1219
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001220@subsubsection S3_2_2_examples How to manually build the examples ?
1221
1222The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
1223
Sheri Zhang7a7f4e02020-08-28 20:08:49 +01001224@note The following command lines assume the arm_compute libraries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built libraries with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001225
1226To cross compile a NEON example for Linux 32bit:
1227
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001228 arm-linux-gnueabihf-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001229
1230To cross compile a NEON example for Linux 64bit:
1231
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001232 aarch64-linux-gnu-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001233
1234(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
1235
1236To cross compile an OpenCL example for Linux 32bit:
1237
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001238 arm-linux-gnueabihf-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001239
1240To cross compile an OpenCL example for Linux 64bit:
1241
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001242 aarch64-linux-gnu-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001243
Anthony Barbier14c86a92017-12-14 16:27:41 +00001244To cross compile a GLES example for Linux 32bit:
1245
1246 arm-linux-gnueabihf-g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -mfpu=neon -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff
1247
1248To cross compile a GLES example for Linux 64bit:
1249
1250 aarch64-linux-gnu-g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff
1251
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001252(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
1253
Anthony Barbier14c86a92017-12-14 16:27:41 +00001254To cross compile the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.
1255
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001256i.e. to cross compile the "graph_lenet" example for Linux 32bit:
1257
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001258 arm-linux-gnueabihf-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001259
1260i.e. to cross compile the "graph_lenet" example for Linux 64bit:
1261
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001262 aarch64-linux-gnu-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001263
1264(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
1265
Anthony Barbiere5007472017-10-27 15:01:44 +01001266@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute, arm_compute_core
1267
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001268To compile natively (i.e directly on an ARM device) for NEON for Linux 32bit:
1269
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001270 g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001271
1272To compile natively (i.e directly on an ARM device) for NEON for Linux 64bit:
1273
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001274 g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001275
1276(notice the only difference with the 32 bit command is that we don't need the -mfpu option)
1277
1278To compile natively (i.e directly on an ARM device) for OpenCL for Linux 32bit or Linux 64bit:
1279
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001280 g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001281
Anthony Barbier14c86a92017-12-14 16:27:41 +00001282To compile natively (i.e directly on an ARM device) for GLES for Linux 32bit or Linux 64bit:
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001283
Anthony Barbier14c86a92017-12-14 16:27:41 +00001284 g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff
1285
1286To compile natively the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.
Anthony Barbier14c86a92017-12-14 16:27:41 +00001287
1288i.e. to natively compile the "graph_lenet" example for Linux 32bit:
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001289
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001290 g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001291
Anthony Barbier14c86a92017-12-14 16:27:41 +00001292i.e. to natively compile the "graph_lenet" example for Linux 64bit:
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001293
Gian Marco Iodicef94c6742020-06-26 12:35:09 +01001294 g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001295
1296(notice the only difference with the 32 bit command is that we don't need the -mfpu option)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001297
Anthony Barbiere5007472017-10-27 15:01:44 +01001298@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute, arm_compute_core
1299
@note These two commands assume libarm_compute.so is available in your library path; if not, add the path to it using -L (e.g. -Llib/linux-arm64-v8a-neon-cl-asserts/)
Georgios Pinitas58216322020-02-26 11:13:13 +00001301@note You might need to export the path to OpenCL library as well in your LD_LIBRARY_PATH if Compute Library was built with OpenCL enabled.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001302
1303To run the built executable simply run:
1304
1305 LD_LIBRARY_PATH=build ./neon_convolution
1306
1307or
1308
1309 LD_LIBRARY_PATH=build ./cl_convolution
1310
@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified then random values will be used to execute the graph.
Anthony Barbier3762e742018-03-02 11:49:33 +00001312
1313For example:
Anthony Barbier38e7f1f2018-05-21 13:37:47 +01001314
Georgios Pinitas9f28b392018-07-18 20:01:53 +01001315 LD_LIBRARY_PATH=. ./graph_lenet --help
Anthony Barbier3762e742018-03-02 11:49:33 +00001316
Below is a list of the common parameters among the graph examples:
1318@snippet utils/CommonGraphOptions.h Common graph examples parameters
Anthony Barbier3762e742018-03-02 11:49:33 +00001319
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001320@subsection S3_3_android Building for Android
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001321
1322For Android, the library was successfully built and tested using Google's standalone toolchains:
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001323 - clang++ from NDK r18b for armv7a
1324 - clang++ from NDK r18b for arm64-v8a
1325 - clang++ from NDK r18b for arm64-v8.2-a with FP16 support
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001326
1327Here is a guide to <a href="https://developer.android.com/ndk/guides/standalone_toolchain.html">create your Android standalone toolchains from the NDK</a>
1328
- Download the NDK r18b from https://developer.android.com/ndk/downloads/index.html into the directory $NDK
Georgios Pinitasf112ede2019-03-01 19:11:20 +00001330- Make sure you have Python 2.7 installed on your machine.
- Generate the 32-bit and/or 64-bit toolchains by running the following commands, installing them into your toolchain directory $MY_TOOLCHAINS:
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001332
Anthony Barbier38e7f1f2018-05-21 13:37:47 +01001333
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001334 $NDK/build/tools/make_standalone_toolchain.py --arch arm64 --install-dir $MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b --stl libc++ --api 21
1335 $NDK/build/tools/make_standalone_toolchain.py --arch arm --install-dir $MY_TOOLCHAINS/arm-linux-android-ndk-r18b --stl libc++ --api 21
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001336
Anthony Barbierd51ea0a2018-08-07 17:48:03 +01001337@attention We used to use gnustl but as of NDK r17 it is deprecated so we switched to libc++
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001338
Anthony Barbier38e7f1f2018-05-21 13:37:47 +01001339@note Make sure to add the toolchains to your PATH:
1340
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001341 export PATH=$PATH:$MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b/bin:$MY_TOOLCHAINS/arm-linux-android-ndk-r18b/bin
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001342
1343@subsubsection S3_3_1_library How to build the library ?
1344
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001345To cross-compile the library in debug mode, with NEON only support, for Android 32bit:
1346
1347 CXX=clang++ CC=clang scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=android arch=armv7a
1348
1349To cross-compile the library in asserts mode, with OpenCL only support, for Android 64bit:
1350
Anthony Barbier14c86a92017-12-14 16:27:41 +00001351 CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=android arch=arm64-v8a
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001352
Anthony Barbier20dbb822017-12-13 21:19:39 +00001353To cross-compile the library in asserts mode, with GLES_COMPUTE only support, for Android 64bit:
1354
Anthony Barbier14c86a92017-12-14 16:27:41 +00001355 CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=0 gles_compute=1 embed_kernels=1 os=android arch=arm64-v8a
Anthony Barbier20dbb822017-12-13 21:19:39 +00001356
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001357@subsubsection S3_3_2_examples How to manually build the examples ?
1358
1359The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
1360
Sheri Zhang7a7f4e02020-08-28 20:08:49 +01001361@note The following command lines assume the arm_compute libraries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built libraries with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001362
1363Once you've got your Android standalone toolchain built and added to your path you can do the following:
1364
1365To cross compile a NEON example:
1366
1367 #32 bit:
Georgios Pinitas9873ea32017-12-05 15:28:55 +00001368 arm-linux-androideabi-clang++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o neon_convolution_arm -static-libstdc++ -pie
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001369 #64 bit:
Anthony Barbier14c86a92017-12-14 16:27:41 +00001370 aarch64-linux-android-clang++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o neon_convolution_aarch64 -static-libstdc++ -pie
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001371
1372To cross compile an OpenCL example:
1373
1374 #32 bit:
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001375 arm-linux-androideabi-clang++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o cl_convolution_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001376 #64 bit:
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001377 aarch64-linux-android-clang++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o cl_convolution_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL
Anthony Barbier14c86a92017-12-14 16:27:41 +00001378
1379To cross compile a GLES example:
Anthony Barbiercc0a80b2017-12-15 11:37:29 +00001380
Anthony Barbier14c86a92017-12-14 16:27:41 +00001381 #32 bit:
1382 arm-linux-androideabi-clang++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o gc_absdiff_arm -static-libstdc++ -pie -DARM_COMPUTE_GC
1383 #64 bit:
1384 aarch64-linux-android-clang++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o gc_absdiff_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_GC
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001385
To cross compile the examples with the Graph API, such as graph_lenet.cpp, you also need to link against the arm_compute_graph library.
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001387
1388 #32 bit:
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001389 arm-linux-androideabi-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001390 #64 bit:
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001391 aarch64-linux-android-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001392
@note Due to some issues in older versions of the Mali OpenCL DDK (<= r13p0), we recommend linking arm_compute statically on Android.
@note When linked statically, the arm_compute_graph library currently needs the --whole-archive linker flag in order to work properly.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001395
1396Then you need to do is upload the executable and the shared library to the device using ADB:
1397
1398 adb push neon_convolution_arm /data/local/tmp/
1399 adb push cl_convolution_arm /data/local/tmp/
Anthony Barbier14c86a92017-12-14 16:27:41 +00001400 adb push gc_absdiff_arm /data/local/tmp/
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001401 adb shell chmod 777 -R /data/local/tmp/
1402
1403And finally to run the example:
1404
1405 adb shell /data/local/tmp/neon_convolution_arm
1406 adb shell /data/local/tmp/cl_convolution_arm
Anthony Barbier14c86a92017-12-14 16:27:41 +00001407 adb shell /data/local/tmp/gc_absdiff_arm
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001408
1409For 64bit:
1410
1411 adb push neon_convolution_aarch64 /data/local/tmp/
1412 adb push cl_convolution_aarch64 /data/local/tmp/
Anthony Barbier14c86a92017-12-14 16:27:41 +00001413 adb push gc_absdiff_aarch64 /data/local/tmp/
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001414 adb shell chmod 777 -R /data/local/tmp/
1415
1416And finally to run the example:
1417
1418 adb shell /data/local/tmp/neon_convolution_aarch64
1419 adb shell /data/local/tmp/cl_convolution_aarch64
Anthony Barbier14c86a92017-12-14 16:27:41 +00001420 adb shell /data/local/tmp/gc_absdiff_aarch64
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001421
@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified then random values will be used to execute the graph.
Anthony Barbier3762e742018-03-02 11:49:33 +00001423
1424For example:
Georgios Pinitas9f28b392018-07-18 20:01:53 +01001425 adb shell /data/local/tmp/graph_lenet --help
Anthony Barbier3762e742018-03-02 11:49:33 +00001426
In this case the first argument of LeNet (like all the graph examples) is the target (i.e. 0 to run on NEON, 1 to run on OpenCL if available, 2 to run on OpenCL using the CLTuner), the second argument is the path to the folder containing the npy files for the weights, and the third argument is the number of batches to run.
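As an illustration, a hypothetical invocation running LeNet on OpenCL, with the weights stored under /data/local/tmp/lenet/ and 10 batches, could look like this (the paths and values are purely illustrative):

    adb shell /data/local/tmp/graph_lenet_aarch64 1 /data/local/tmp/lenet/ 10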
1428
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001429@subsection S3_4_bare_metal Building for bare metal
1430
For bare metal, the library was successfully built using Linaro's latest (gcc-linaro-6.3.1-2017.05) bare metal toolchains:
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001432 - arm-eabi for armv7a
1433 - aarch64-elf for arm64-v8a
1434
1435Download linaro for <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/arm-eabi/">armv7a</a> and <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/aarch64-elf/">arm64-v8a</a>.
1436
1437@note Make sure to add the toolchains to your PATH: export PATH=$PATH:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-elf/bin:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_arm-eabi/bin
1438
1439@subsubsection S3_4_1_library How to build the library ?
1440
1441To cross-compile the library with NEON support for baremetal arm64-v8a:
1442
1443 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=bare_metal arch=arm64-v8a build=cross_compile cppthreads=0 openmp=0 standalone=1
1444
1445@subsubsection S3_4_2_examples How to manually build the examples ?
1446
1447Examples are disabled when building for bare metal. If you want to build the examples you need to provide a custom bootcode depending on the target architecture and link against the compute library. More information about bare metal bootcode can be found <a href="http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0527a/index.html">here</a>.
1448
1449@subsection S3_5_windows_host Building on a Windows host system
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001450
1451Using `scons` directly from the Windows command line is known to cause
problems. The reason seems to be that if `scons` is set up for cross-compilation
1453it gets confused about Windows style paths (using backslashes). Thus it is
1454recommended to follow one of the options outlined below.
1455
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001456@subsubsection S3_5_1_ubuntu_on_windows Bash on Ubuntu on Windows
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001457
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +01001458The best and easiest option is to use
1459<a href="https://msdn.microsoft.com/en-gb/commandline/wsl/about">Ubuntu on Windows</a>.
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001460This feature is still marked as *beta* and thus might not be available.
However, if it is, building the library is as simple as opening a *Bash on
1462Ubuntu on Windows* shell and following the general guidelines given above.
1463
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001464@subsubsection S3_5_2_cygwin Cygwin
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001465
If the Windows Subsystem for Linux is not available, <a href="https://www.cygwin.com/">Cygwin</a>
can be used to install and run `scons`; the Cygwin version must be 3.0.7 or later. In addition
to the default packages installed by Cygwin, `scons` has to be selected in the installer. (`git` might
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001469also be useful but is not strictly required if you already have got the source
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +01001470code of the library.) Linaro provides pre-built versions of
1471<a href="http://releases.linaro.org/components/toolchain/binaries/">GCC cross-compilers</a>
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001472that can be used from the Cygwin terminal. When building for Android the
1473compiler is included in the Android standalone toolchain. After everything has
1474been set up in the Cygwin terminal the general guide on building the library
1475can be followed.
1476
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001477@subsection S3_6_cl_requirements OpenCL DDK Requirements
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001478
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001479@subsubsection S3_6_1_cl_hard_requirements Hard Requirements
Georgios Pinitasd9cb0572018-07-16 12:23:09 +01001480
Compute Library requires OpenCL 1.1 and above, with support for non-uniform workgroup sizes, which is officially supported in the Mali OpenCL DDK r8p0 and above as an extension (the relevant build flag is \a -cl-arm-non-uniform-work-group-size).
1482
Enabling 16-bit floating point calculations requires the \a cl_khr_fp16 extension to be supported. All Mali GPUs with compute capabilities have native support for half precision floating point.
1484
Use of the @ref CLMeanStdDev function requires 64-bit atomics support, thus the \a cl_khr_int64_base_atomics extension must be supported.
1486
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001487@subsubsection S3_6_2_cl_performance_requirements Performance improvements
Georgios Pinitasd9cb0572018-07-16 12:23:09 +01001488
Integer dot product built-in function extensions (and therefore optimized kernels) are available with Mali OpenCL DDK r22p0 and above for the following GPUs: G71, G76. The relevant extensions are \a cl_arm_integer_dot_product_int8, \a cl_arm_integer_dot_product_accumulate_int8 and \a cl_arm_integer_dot_product_accumulate_int16.
1490
OpenCL kernel level debugging can be simplified with the use of printf; this requires the \a cl_arm_printf extension to be supported.
1492
SVM allocations are supported for all the underlying allocations in Compute Library. To enable this, OpenCL 2.0 or above is required.
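
To quickly check which of the extensions mentioned in this section are exposed by a device, the extension string can be queried through the OpenCL C++ wrapper bundled with the library. The sketch below is a standalone helper, not an API of Compute Library itself; it can be compiled and linked in the same way as the OpenCL examples shown earlier.

@code{.cpp}
#include "arm_compute/core/CL/OpenCL.h"

#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    if(platforms.empty())
    {
        std::cerr << "No OpenCL platform found\n";
        return 1;
    }

    std::vector<cl::Device> devices;
    platforms[0].getDevices(CL_DEVICE_TYPE_GPU, &devices);
    if(devices.empty())
    {
        std::cerr << "No OpenCL GPU device found\n";
        return 1;
    }

    // Print whether each extension of interest appears in the device's extension string.
    const std::string extensions = devices[0].getInfo<CL_DEVICE_EXTENSIONS>();
    for(const char *name : { "cl_khr_fp16", "cl_khr_int64_base_atomics", "cl_arm_printf",
                             "cl_arm_integer_dot_product_int8", "cl_arm_integer_dot_product_accumulate_int8",
                             "cl_arm_integer_dot_product_accumulate_int16" })
    {
        std::cout << name << ": " << (extensions.find(name) != std::string::npos ? "yes" : "no") << "\n";
    }
    return 0;
}
@endcode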
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001494
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001495@subsection S3_7_cl_tuner OpenCL Tuner
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001496
The OpenCL tuner, a.k.a. CLTuner, is a module of Arm Compute Library that can improve the performance of the OpenCL kernels by tuning the Local-Workgroup-Size (LWS).
1498The optimal LWS for each unique OpenCL kernel configuration is stored in a table. This table can be either imported or exported from/to a file.
Vidhya Sudhan Loganathandc5d3432019-04-29 11:44:11 +01001499The OpenCL tuner runs the same OpenCL kernel for a range of local workgroup sizes and keeps the local workgroup size of the fastest run to use in subsequent calls to the kernel. It supports three modes of tuning with different trade-offs between the time taken to tune and the kernel execution time achieved using the best LWS found. In the Exhaustive mode, it searches all the supported values of LWS. This mode takes the longest time to tune and is the most likely to find the optimal LWS. Normal mode searches a subset of LWS values to yield a good approximation of the optimal LWS. It takes less time to tune than Exhaustive mode. Rapid mode takes the shortest time to tune and finds an LWS value that is at least as good or better than the default LWS value. The mode affects only the search for the optimal LWS and has no effect when the LWS value is imported from a file.
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001500In order for the performance numbers to be meaningful you must disable the GPU power management and set it to a fixed frequency for the entire duration of the tuning phase.
1501
If you wish to know more about LWS and its important role in improving GPU cache utilization, we suggest having a look at the presentation "Even Faster CNNs: Exploring the New Class of Winograd Algorithms" available at the following link:
1503
1504https://www.embedded-vision.com/platinum-members/arm/embedded-vision-training/videos/pages/may-2018-embedded-vision-summit-iodice
1505
Tuning a network from scratch can take a long time and considerably affect the execution time of the first run of your network. For this reason it is recommended to store the CLTuner's results in a file so this cost is amortized when you re-use the same network, or functions with the same configurations. The tuning is performed only once for each OpenCL kernel.
1507
1508CLTuner looks for the optimal LWS for each unique OpenCL kernel configuration. Since a function (i.e. Convolution Layer, Pooling Layer, Fully Connected Layer ...) can be called multiple times but with different parameters, we associate an "id" (called "config_id") to each kernel to distinguish the unique configurations.
1509
1510 #Example: 2 unique Matrix Multiply configurations
1511@code{.cpp}
1512 TensorShape a0 = TensorShape(32,32);
1513 TensorShape b0 = TensorShape(32,32);
1514 TensorShape c0 = TensorShape(32,32);
1515 TensorShape a1 = TensorShape(64,64);
1516 TensorShape b1 = TensorShape(64,64);
1517 TensorShape c1 = TensorShape(64,64);
1518
 CLTensor a0_tensor;
 CLTensor b0_tensor;
 CLTensor c0_tensor;
 CLTensor a1_tensor;
 CLTensor b1_tensor;
 CLTensor c1_tensor;

 a0_tensor.allocator()->init(TensorInfo(a0, 1, DataType::F32));
 b0_tensor.allocator()->init(TensorInfo(b0, 1, DataType::F32));
 c0_tensor.allocator()->init(TensorInfo(c0, 1, DataType::F32));
 a1_tensor.allocator()->init(TensorInfo(a1, 1, DataType::F32));
 b1_tensor.allocator()->init(TensorInfo(b1, 1, DataType::F32));
 c1_tensor.allocator()->init(TensorInfo(c1, 1, DataType::F32));
1532
1533 CLGEMM gemm0;
1534 CLGEMM gemm1;
1535
1536 // Configuration 0
 gemm0.configure(&a0_tensor, &b0_tensor, nullptr, &c0_tensor, 1.0f, 0.0f);
1538
1539 // Configuration 1
 gemm1.configure(&a1_tensor, &b1_tensor, nullptr, &c1_tensor, 1.0f, 0.0f);
1541@endcode
1542
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001543@subsubsection S3_7_1_cl_tuner_how_to How to use it
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001544
All the graph examples in the Compute Library's "examples" folder and the arm_compute_benchmark accept an argument to enable the OpenCL tuner and an argument to export/import the LWS values to/from a file:
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001546
1547 #Enable CL tuner
 ./graph_mobilenet --enable-tuner --target=CL
1549 ./arm_compute_benchmark --enable-tuner
1550
1551 #Export/Import to/from a file
1552 ./graph_mobilenet --enable-tuner --target=CL --tuner-file=acl_tuner.csv
1553 ./arm_compute_benchmark --enable-tuner --tuner-file=acl_tuner.csv
1554
If you are importing the CLTuner's results from a file, the new tuned LWS values will be appended to it.
1556
Whether you are benchmarking the graph examples or the test cases in the arm_compute_benchmark, remember to:
1558
1559 -# Disable the power management
1560 -# Keep the GPU frequency constant
 -# Run the network multiple times (e.g. 10).
1562
1563If you are not using the graph API or the benchmark infrastructure you will need to manually pass a CLTuner object to CLScheduler before configuring any function.
1564
1565@code{.cpp}
1566CLTuner tuner;
1567
1568// Setup Scheduler
1569CLScheduler::get().default_init(&tuner);
1570@endcode
1571
1572After the first run, the CLTuner's results can be exported to a file using the method "save_to_file()".
1573- tuner.save_to_file("results.csv");
1574
This file can also be imported using the method "load_from_file("results.csv")".
1576- tuner.load_from_file("results.csv");
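
Putting these pieces together, a typical tuning workflow looks roughly like the sketch below. The file name is illustrative, and the tuning-mode setter is only present in releases that expose the tuning modes described earlier in this section:

@code{.cpp}
CLTuner tuner;

// Optionally pick a tuning mode (Exhaustive / Normal / Rapid), if available in your release.
tuner.set_tuner_mode(CLTunerMode::NORMAL);

// Import LWS values found in previous runs (this expects acl_tuner.csv to exist already).
tuner.load_from_file("acl_tuner.csv");

// The tuner must be registered before any function is configured.
CLScheduler::get().default_init(&tuner);

// ... configure and run the functions or the graph here ...

// Persist the results, including any newly tuned kernels.
tuner.save_to_file("acl_tuner.csv");
@endcode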
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001577*/
Anthony Barbierd51ea0a2018-08-07 17:48:03 +01001578} // namespace arm_compute