///
/// Copyright (c) 2017-2020 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/** @mainpage Introduction

@tableofcontents

The Computer Vision and Machine Learning library is a set of functions optimised for both ARM CPUs and GPUs using SIMD technologies.

Several builds of the library are available using various configurations:
 - OS: Linux, Android or bare metal.
 - Architecture: armv7a (32bit) or arm64-v8a (64bit)
 - Technology: NEON / OpenCL / GLES_COMPUTE / NEON and OpenCL and GLES_COMPUTE
 - Debug / Asserts / Release: Use a build with asserts enabled to debug your application and enable extra validation. Once you are sure your application works as expected, you can switch to a release build of the library for maximum performance.

@section S0_1_contact Contact / Support

Please email developer@arm.com

In order to facilitate the work of the support team, please provide the build information of the library you are using. To get the version of the library you are using, simply run:

    $ strings android-armv7a-cl-asserts/libarm_compute.so | grep arm_compute_version
    arm_compute_version=v16.12 Build options: {'embed_kernels': '1', 'opencl': '1', 'arch': 'armv7a', 'neon': '0', 'asserts': '1', 'debug': '0', 'os': 'android', 'Werror': '1'} Git hash=f51a545d4ea12a9059fe4e598a092f1fd06dc858

@section S0_2_prebuilt_binaries Pre-built binaries

For each release, we provide some pre-built binaries of the library [here](https://github.com/ARM-software/ComputeLibrary/releases).

These binaries have been built using the following toolchains:
 - Linux armv7a: gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
 - Linux arm64-v8a: gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu
 - Android armv7a: clang++ / libc++ NDK r18b
 - Android arm64-v8a: clang++ / libc++ NDK r18b

@warning Make sure to use a compatible toolchain to build your application, or you will get some std::bad_alloc errors at runtime.

@section S1_file_organisation File organisation

This archive contains:
 - The arm_compute header and source files
 - The latest Khronos OpenCL 1.2 C headers from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a>
 - The latest Khronos cl2.hpp from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a> (API version 2.1 when this document was written)
 - The latest Khronos OpenGL ES 3.1 C headers from the <a href="https://www.khronos.org/registry/gles/">Khronos OpenGL ES registry</a>
 - The latest Khronos EGL 1.5 C headers from the <a href="https://www.khronos.org/registry/gles/">Khronos EGL registry</a>
 - The sources for a stub version of libOpenCL.so, libGLESv1_CM.so, libGLESv2.so and libEGL.so to help you build your application.
 - An examples folder containing a few examples to compile and link against the library.
 - A @ref utils folder containing headers with some boilerplate code used by the examples.
 - This documentation.

 For detailed information about file organisation, please refer to the Files -> File List section of this documentation.

@section S2_versions_changelog Release versions and changelog

@subsection S2_1_versions Release versions

All releases are numbered vYY.MM, where YY are the last two digits of the year and MM is the month number.
If there is more than one release in a month, an extra sequential number is appended at the end:

    v17.03 (First release of March 2017)
    v17.03.1 (Second release of March 2017)
    v17.04 (First release of April 2017)

@note We aim to publish one major public release with new features per quarter. All releases in between will only contain bug fixes.

@subsection S2_2_changelog Changelog

v20.11 Public major release
 - Various bug fixes.
 - Various optimisations.
 - A performance regression may be observed when executing Depthwise Convolution on Neon with a depth multiplier > 1 for quantized data types.
   This is planned to be resolved in the 21.02 release.
 - Added new data type QASYMM8_SIGNED support for @ref NEROIAlignLayer.
 - Added new data type S32 support for:
   - @ref NEArithmeticSubtraction
   - @ref NEArithmeticSubtractionKernel
   - @ref NEPixelWiseMultiplication
   - @ref NEPixelWiseMultiplicationKernel
   - @ref NEElementwiseDivision
   - @ref NEDivisionOperationKernel
 - Interface change
   - Properly support softmax axis to have the same meaning as other major frameworks. That is, axis now defines the dimension
     on which Softmax/Logsoftmax is performed. E.g. for an input of shape 4x5x6 and axis=1, softmax will be applied to 4x6=24 vectors of size 5.
     The supported value range of axis is [-rank, rank).
     This change applies to the following functions (a usage sketch follows this list):
     - @ref NESoftmaxLayer
     - @ref NELogSoftmaxLayer
     - @ref CLSoftmaxLayer
     - @ref CLLogSoftmaxLayer
     - @ref GCSoftmaxLayer
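     A minimal sketch of the new axis semantics, assuming the NEON backend and an FP32 tensor (the tensor names, shapes and the fill step are illustrative only):
     @code{.cpp}
     #include "arm_compute/core/TensorInfo.h"
     #include "arm_compute/runtime/NEON/functions/NESoftmaxLayer.h"
     #include "arm_compute/runtime/Tensor.h"

     using namespace arm_compute;

     Tensor src, dst;
     // 3D tensor of shape 4x5x6: dimension 0 has size 4, dimension 1 has size 5, dimension 2 has size 6
     src.allocator()->init(TensorInfo(TensorShape(4U, 5U, 6U), 1, DataType::F32));
     dst.allocator()->init(TensorInfo(TensorShape(4U, 5U, 6U), 1, DataType::F32));

     NESoftmaxLayer softmax;
     // axis = 1: softmax runs along dimension 1, i.e. over 4x6 = 24 vectors of size 5
     softmax.configure(&src, &dst, 1.0f /* beta */, 1 /* axis */);

     src.allocator()->allocate();
     dst.allocator()->allocate();
     // ... fill src, then:
     softmax.run();
     @endcode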
 - New OpenCL kernels / functions:
   - @ref CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel
   - @ref CLLogicalNot
   - @ref CLLogicalAnd
   - @ref CLLogicalOr
 - New NEON kernels / functions:
   - @ref NELogicalNot
   - @ref NELogicalAnd
   - @ref NELogicalOr
 - Removed padding from NEON kernels:
   - @ref NEComplexPixelWiseMultiplicationKernel
   - @ref NENonMaximaSuppression3x3Kernel
   - @ref NERemapKernel
   - @ref NEGEMMInterleave4x4Kernel
   - @ref NEDirectConvolutionLayerKernel
   - @ref NEScaleKernel
   - @ref NELocallyConnectedMatrixMultiplyKernel
   - @ref NEGEMMLowpOffsetContributionKernel
   - @ref NEGEMMTranspose1xWKernel
   - @ref NEPoolingLayerKernel
   - @ref NEConvolutionKernel
   - @ref NEDepthwiseConvolutionLayerNativeKernel
   - @ref NEGEMMLowpMatrixMultiplyKernel
   - @ref NEGEMMMatrixMultiplyKernel
   - @ref NEDirectConvolutionLayerOutputStageKernel
   - @ref NEReductionOperationKernel
   - @ref NEGEMMLowpMatrixAReductionKernel
   - @ref NEGEMMLowpMatrixBReductionKernel
 - Removed padding from OpenCL kernels:
   - @ref CLBatchConcatenateLayerKernel
   - @ref CLElementwiseOperationKernel
   - @ref CLBatchNormalizationLayerKernel
   - @ref CLPoolingLayerKernel
   - @ref CLWinogradInputTransformKernel
   - @ref CLGEMMLowpMatrixMultiplyNativeKernel
   - @ref CLGEMMLowpMatrixAReductionKernel
   - @ref CLGEMMLowpMatrixBReductionKernel
   - @ref CLGEMMLowpOffsetContributionOutputStageKernel
   - @ref CLGEMMLowpOffsetContributionKernel
   - @ref CLWinogradOutputTransformKernel
   - @ref CLGEMMLowpMatrixMultiplyReshapedKernel
   - @ref CLFuseBatchNormalizationKernel
   - @ref CLDepthwiseConvolutionLayerNativeKernel
   - @ref CLDepthConvertLayerKernel
   - @ref CLCopyKernel
   - @ref CLDepthwiseConvolutionLayer3x3NHWCKernel
   - @ref CLActivationLayerKernel
   - @ref CLWinogradFilterTransformKernel
   - @ref CLWidthConcatenateLayerKernel
   - @ref CLWidthConcatenate4TensorsKernel
   - @ref CLWidthConcatenate2TensorsKernel
   - @ref CLLogits1DMaxShiftExpSumKernel
   - @ref CLLogits1DNormKernel
   - @ref CLHeightConcatenateLayerKernel
   - @ref CLGEMMMatrixMultiplyKernel
   - @ref CLGEMMLowpQuantizeDownInt32ScaleKernel
   - @ref CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel
   - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLDepthConcatenateLayerKernel
   - @ref CLGEMMLowpQuantizeDownInt32ScaleByFixedPointKernel
 - Removed OpenCL kernels / functions:
   - CLGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
   - CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel
   - CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel
 - Deprecated OpenCL kernels / functions (if a kernel is used only by a function that is being deprecated, the kernel is deprecated together with it):
   - CLLocallyConnectedLayer
   - CLLocallyConnectedMatrixMultiplyKernel
   - CLAbsoluteDifference
   - CLAbsoluteDifferenceKernel
   - CLAccumulate
   - CLAccumulateKernel
   - CLAccumulateSquared
   - CLAccumulateSquaredKernel
   - CLAccumulateWeighted
   - CLAccumulateWeightedKernel
   - CLAccumulateWeightedFP16Kernel
   - CLBox3x3
   - CLBox3x3Kernel
   - CLBox3x3FP16Kernel
   - CLCannyEdge
   - CLChannelCombine
   - CLChannelCombineKernel
   - CLChannelExtract
   - CLChannelExtractKernel
   - CLColorConvert
   - CLColorConvertKernel
   - CLConvolution3x3
   - CLConvolutionRectangle
   - CLConvolutionRectangleKernel
   - CLConvolutionSquare
   - CLConvolutionKernel
   - CLDerivative
   - CLDerivativeKernel
   - CLDilate
   - CLDilateKernel
   - CLEqualizeHistogram
   - CLErode
   - CLErodeKernel
   - CLFastCorners
   - CLFastCornersKernel
   - CLGaussian3x3
   - CLGaussian3x3Kernel
   - CLGaussian5x5
   - CLGaussian5x5HorKernel
   - CLGaussian5x5VertKernel
   - CLGaussianPyramid
   - CLGaussianPyramidHalf
   - CLGaussianPyramidOrb
   - CLHarrisCorners
   - CLHarrisScoreKernel
   - CLHarrisScoreFP16Kernel
   - CLHistogram
   - CLHistogramKernel
   - CLHOGOrientationBinningKernel
   - CLHOGBlockNormalizationKernel
   - CLHOGDetectorKernel
   - CLHOGNonMaximaSuppressionKernel
   - CLHOGDescriptor
   - CLHOGDetector
   - CLHOGGradient
   - CLHOGMultiDetection
   - CLIntegralImage
   - CLIntegralImageKernel
   - CLLaplacianReconstruct
   - CLLaplacianPyramid
   - CLMagnitude
   - CLMagnitudePhaseKernel
   - CLMedian3x3
   - CLMedian3x3Kernel
   - CLMinMaxLocation
   - CLMinMaxLocationKernel
   - CLNonLinearFilter
   - CLNonLinearFilterKernel
   - CLNonMaximaSuppression3x3
   - CLNonMaximaSuppression3x3FP16Kernel
   - CLNonMaximaSuppression3x3Kernel
   - CLOpticalFlow
   - CLPhase
   - CLRemap
   - CLRemapKernel
   - CLScharr3x3
   - CLScharr3x3Kernel
   - CLSobel3x3
   - CLSobel3x3Kernel
   - CLSobel5x5
   - CLSobel5x5HorKernel
   - CLSobel5x5VertKernel
   - CLSobel7x7
   - CLSobel7x7HorKernel
   - CLSobel7x7VertKernel
   - CLThreshold
   - CLThresholdKernel
   - CLWarpAffine
   - CLWarpAffineKernel
   - CLWarpPerspective
   - CLWarpPerspectiveKernel
 - Deprecated NEON kernels / functions (if a kernel is used only by a function that is being deprecated, the kernel is deprecated together with it):
   - NELocallyConnectedLayer
   - NELocallyConnectedMatrixMultiplyKernel
   - NEAbsoluteDifference
   - NEAbsoluteDifferenceKernel
   - NEAccumulate
   - NEAccumulateKernel
   - NEAccumulateSquared
   - NEAccumulateSquaredKernel
   - NEAccumulateWeighted
   - NEAccumulateWeightedKernel
   - NEAccumulateWeightedFP16Kernel
   - NEBox3x3
   - NEBox3x3Kernel
   - NEBox3x3FP16Kernel
   - NECannyEdge
   - NEChannelCombine
   - NEChannelCombineKernel
   - NEChannelExtract
   - NEChannelExtractKernel
   - NEColorConvert
   - NEColorConvertKernel
   - NEConvolution3x3
   - NEConvolutionRectangle
   - NEConvolutionRectangleKernel
   - NEConvolutionSquare
   - NEConvolutionKernel
   - NEDerivative
   - NEDerivativeKernel
   - NEDilate
   - NEDilateKernel
   - NEEqualizeHistogram
   - NEErode
   - NEErodeKernel
   - NEFastCorners
   - NEFastCornersKernel
   - NEGaussian3x3
   - NEGaussian3x3Kernel
   - NEGaussian5x5
   - NEGaussian5x5HorKernel
   - NEGaussian5x5VertKernel
   - NEGaussianPyramid
   - NEGaussianPyramidHalf
   - NEGaussianPyramidOrb
   - NEHarrisCorners
   - NEHarrisScoreKernel
   - NEHarrisScoreFP16Kernel
   - NEHistogram
   - NEHistogramKernel
   - NEHOGOrientationBinningKernel
   - NEHOGBlockNormalizationKernel
   - NEHOGDetectorKernel
   - NEHOGNonMaximaSuppressionKernel
   - NEHOGDescriptor
   - NEHOGDetector
   - NEHOGGradient
   - NEHOGMultiDetection
   - NEIntegralImage
   - NEIntegralImageKernel
   - NELaplacianReconstruct
   - NELaplacianPyramid
   - NEMagnitude
   - NEMagnitudePhaseKernel
   - NEMedian3x3
   - NEMedian3x3Kernel
   - NEMinMaxLocation
   - NEMinMaxLocationKernel
   - NENonLinearFilter
   - NENonLinearFilterKernel
   - NENonMaximaSuppression3x3
   - NENonMaximaSuppression3x3FP16Kernel
   - NENonMaximaSuppression3x3Kernel
   - NEOpticalFlow
   - NEPhase
   - NERemap
   - NERemapKernel
   - NEScharr3x3
   - NEScharr3x3Kernel
   - NESobel3x3
   - NESobel3x3Kernel
   - NESobel5x5
   - NESobel5x5HorKernel
   - NESobel5x5VertKernel
   - NESobel7x7
   - NESobel7x7HorKernel
   - NESobel7x7VertKernel
   - NEThreshold
   - NEThresholdKernel
   - NEWarpAffine
   - NEWarpAffineKernel
   - NEWarpPerspective
   - NEWarpPerspectiveKernel
 - Deprecated GLES kernels / functions (if a kernel is used only by a function that is being deprecated, the kernel is deprecated together with it):
   - GCAbsoluteDifference
   - GCActivationLayer
   - GCArithmeticAddition
   - GCBatchNormalizationLayer
   - GCConcatenateLayer
   - GCConvolutionLayer
   - GCDepthwiseConvolutionLayer
   - GCDirectConvolutionLayer
   - GCDropoutLayer
   - GCFillBorder
   - GCFullyConnectedLayer
   - GCGEMM
   - GCGEMMInterleave4x4
   - GCGEMMTranspose1xW
   - GCNormalizationLayer
   - GCNormalizePlanarYUVLayer
   - GCPixelWiseMultiplication
   - GCPoolingLayer
   - GCScale
   - GCSoftmaxLayer
   - GCTensorShift
   - GCTranspose

v20.08 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Added new data type QASYMM8_SIGNED support for:
   - @ref CLArgMinMaxLayer
   - @ref CLArgMinMaxLayerKernel
 - Added new data type U8 support for:
   - @ref NECropKernel
   - @ref CLCropKernel
 - Added align_corners support for nearest neighbor interpolation in:
   - @ref NEScaleKernel
   - @ref CLScaleKernel
 - New OpenCL kernels / functions:
   - @ref CLMaxUnpoolingLayerKernel
 - New NEON kernels / functions:
   - @ref NEMaxUnpoolingLayerKernel
 - New graph example:
   - graph_yolov3_output_detector
 - GEMMTuner improvements:
   - Added fp16 support
   - Output json files for easier integration
   - Enabled tuning for export_to_cl_image_rhs option for RHS tensors
   - More robust script for running benchmarks
 - Removed padding from:
   - @ref NEPixelWiseMultiplicationKernel
   - @ref NEHeightConcatenateLayerKernel
   - @ref NEThresholdKernel
   - @ref NEBatchConcatenateLayerKernel
   - @ref NETransposeKernel
   - @ref NEBatchNormalizationLayerKernel
   - @ref NEArithmeticSubtractionKernel
   - @ref NEBoundingBoxTransformKernel
   - @ref NELogits1DMaxKernel
   - @ref NELogits1DSoftmaxKernel
   - @ref NEROIPoolingLayerKernel
   - @ref NEROIAlignLayerKernel
   - @ref NEYOLOLayerKernel
   - @ref NEUpsampleLayerKernel
   - @ref NEFloorKernel
   - @ref NEWidthConcatenateLayerKernel
   - @ref NEDepthConcatenateLayerKernel
   - @ref NENormalizationLayerKernel
   - @ref NEL2NormalizeLayerKernel
   - @ref NEFillArrayKernel
   - @ref NEDepthConvertLayerKernel
   - @ref NERangeKernel
   - @ref NEPriorBoxLayer
 - Removed OpenCL kernels / functions:
   - CLGEMMLowpQuantizeDownInt32ToUint8Scale
   - CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFloat
 - Removed NEON kernels / functions:
   - NEGEMMLowpQuantizeDownInt32ToUint8Scale
   - NEGEMMMatrixAccumulateBiasesKernel
 - Deprecated functions / interfaces:
   - Non-descriptor based interfaces for @ref NEThreshold, @ref CLThreshold
   - Non-descriptor based interfaces for @ref NEScale, @ref CLScale and @ref GCScale
   - In @ref NESoftmaxLayer, @ref NELogSoftmaxLayer, @ref CLSoftmaxLayer, @ref CLLogSoftmaxLayer and @ref GCSoftmaxLayer:
     The default "axis" value for @ref CLSoftmaxLayer, @ref CLLogSoftmaxLayer and @ref GCSoftmaxLayer is changed from 1 to 0.
     Only axis 0 is supported.
     The default "axis" value for @ref NESoftmaxLayer, @ref NELogSoftmaxLayer is changed from 1 to 0.
     Only axis 0 is supported.
 - The support for quantized data types has been removed from @ref CLLogSoftmaxLayer due to implementation complexity.
 - Removed padding requirement for the input (e.g. LHS of GEMM) and output in @ref CLGEMMMatrixMultiplyNativeKernel, @ref CLGEMMMatrixMultiplyReshapedKernel, @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel and @ref CLIm2ColKernel (NHWC only)
   - This change makes it possible to use @ref CLGEMMConvolutionLayer without extra padding for the input and output.
   - Only the weights/bias of @ref CLGEMMConvolutionLayer could require padding for the computation.
   - Only on Arm Mali Midgard GPUs could @ref CLGEMMConvolutionLayer still require padding, since @ref CLGEMMMatrixMultiplyKernel is called and currently requires padding.
 - Added support for exporting the OpenCL buffer object to the OpenCL image object in @ref CLGEMMMatrixMultiplyReshapedKernel and @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel.
   - This makes it possible to export the OpenCL buffer used for the reshaped RHS matrix to an OpenCL image object.
   - The padding requirement for the OpenCL image object is taken into account in @ref CLGEMMReshapeRHSMatrixKernel.
   - The reshaped RHS matrix stores the weights when GEMM is used to accelerate @ref CLGEMMConvolutionLayer.

v20.05 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Updated recommended NDK version to r18b.
 - Updated recommended gcc version to Linaro 6.3.1.
 - Added Bfloat16 type support
 - Added Bfloat16 support in:
   - @ref NEWeightsReshapeKernel
   - @ref NEConvolutionLayerReshapeWeights
   - @ref NEIm2ColKernel
   - @ref NEIm2Col
   - @ref NEDepthConvertLayerKernel
   - @ref NEDepthConvertLayer
   - @ref NEGEMMConvolutionLayer
   - @ref NEGEMMAssemblyDispatch
 - Added new data type QASYMM8_SIGNED support for:
   - @ref CLDirectConvolutionLayer
   - @ref CLDeconvolutionLayer
   - @ref CLDirectDeconvolutionLayer
   - @ref CLGEMMDeconvolutionLayer
   - @ref CLGEMMLowpMatrixMultiplyReshapedKernel
   - @ref CLGEMMLowpQuantizeDownInt32ScaleKernel
   - @ref CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel
   - @ref CLReductionOperation
   - @ref CLReduceMean
   - @ref NEScale
   - @ref NEScaleKernel
   - @ref NEUpsampleLayer
   - @ref NECast
   - @ref NEReductionOperation
   - @ref NEReduceMean
   - @ref NEArgMinMaxLayer
   - @ref NEDeconvolutionLayer
   - @ref NEGEMMLowpQuantizeDownInt32ScaleKernel
   - @ref CPPBoxWithNonMaximaSuppressionLimit
   - @ref CPPDetectionPostProcessLayer
   - @ref CPPPermuteKernel
   - @ref CPPPermute
   - @ref CPPTopKVKernel
   - @ref CPPTopKV
   - @ref CPPUpsample
   - @ref CPPUpsampleKernel
 - New OpenCL kernels / functions:
   - @ref CLQLSTMLayer
   - @ref CLQLSTMLayerNormalizationKernel
 - New NEON kernels / functions:
   - @ref NEQLSTMLayer
   - @ref NEQLSTMLayerNormalizationKernel
 - Added HARD_SWISH support in:
   - @ref CLActivationLayerKernel
   - @ref NEActivationLayerKernel
 - Deprecated OpenCL kernels / functions:
   - CLGEMMLowpQuantizeDownInt32ToUint8Scale
   - CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFloat
 - Deprecated NEON kernels / functions:
   - NEGEMMLowpQuantizeDownInt32ToUint8Scale
 - Removed CPP kernels / functions:
   - CPPFlipWeightsKernel
 - Removed PoolingLayerInfo constructors without Data Layout.
 - Removed CLDepthwiseConvolutionLayer3x3
 - Removed NEDepthwiseConvolutionLayerOptimized
 - Added support for Winograd 3x3,4x4 on NEON FP16:
   - @ref NEWinogradConvolutionLayer
   - @ref NEWinogradLayerTransformInputKernel
   - @ref NEWinogradLayerTransformOutputKernel
   - @ref NEWinogradLayerTransformWeightsKernel
 - Added CLCompileContext
 - Added NEON GEMM kernel with 2D window support

v20.02.1 Maintenance release
 - Added Android-NN build script.

v20.02 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Added new data type QASYMM8_SIGNED support for:
   - @ref CLDepthwiseConvolutionLayer
   - CLDepthwiseConvolutionLayer3x3
   - @ref CLGEMMConvolutionLayer
   - @ref CLGEMMLowpMatrixMultiplyCore
   - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLGEMMLowpMatrixMultiplyNativeKernel
   - @ref NEActivationLayer
   - @ref NEComparisonOperationKernel
   - @ref NEConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer
   - NEDepthwiseConvolutionLayer3x3Kernel
   - @ref NEDirectConvolutionLayerOutputStageKernel
   - @ref NEElementwiseComparison
   - @ref NEElementwiseMax
   - @ref NEElementwiseMin
   - @ref NEElementwiseSquaredDiff
   - @ref NEFullyConnectedLayer
   - NEGEMMMatrixVectorMultiplyKernel
   - @ref NEPixelWiseMultiplication
   - @ref NEPoolingLayer
   - @ref NEPReluLayer
 - Added support for QSYMM8_PER_CHANNEL in:
   - NEDepthwiseConvolutionLayer3x3Kernel
 - Added support for split sizes in:
   - @ref CLSplit
   - @ref NESplit
 - New OpenCL kernels / functions:
   - @ref CLFill
   - CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel / @ref CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint
 - New NEON kernels / functions:
   - @ref NEFill
   - @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint
 - Deprecated functions / interfaces:
   - CLDepthwiseConvolutionLayer3x3
   - NEDepthwiseConvolutionLayerOptimized
   - PoolingLayerInfo constructors without Data Layout.
 - Added support for quantization with multiplier greater than 1 on NEON and CL.
 - Added support for quantized inputs of type QASYMM8_SIGNED and QASYMM8 to @ref CLQuantizationLayer.
 - Added the ability to build bootcode for bare metal.
 - Added support for generating synthetic QASYMM8 graphs.
 - Added support for F16 datatype in VGG16.
 - Removed pre-built binaries for GLES.

v19.11.1 Public maintenance release
 - Fix offset calculation in NEReductionOperationKernel.
 - Fix data layout in NEScaleKernel for NHWC.
 - Retain configuration step data layout to avoid side-effects.
 - Perform sqrt in double domain for L2 pooling.
 - Fix output shape calculation for Reduce Mean.
 - Restrict cases where optimized NEPadLayer runs.

v19.11 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Updated recommended NDK version to r17c.
 - Deprecated OpenCL kernels / functions:
   - CLDepthwiseConvolutionLayerReshapeWeightsGenericKernel
   - CLDepthwiseIm2ColKernel
   - CLDepthwiseSeparableConvolutionLayer
   - CLDepthwiseVectorToTensorKernel
   - CLDirectConvolutionLayerOutputStageKernel
 - Deprecated NEON kernels / functions:
   - NEDepthwiseWeightsReshapeKernel
   - NEDepthwiseIm2ColKernel
   - NEDepthwiseSeparableConvolutionLayer
   - NEDepthwiseVectorToTensorKernel
   - NEDepthwiseConvolutionLayer3x3
 - New OpenCL kernels / functions:
   - @ref CLInstanceNormalizationLayerKernel / @ref CLInstanceNormalizationLayer
   - @ref CLDepthwiseConvolutionLayerNativeKernel to replace the old generic depthwise convolution (see Deprecated
     OpenCL kernels / functions)
   - @ref CLLogSoftmaxLayer
 - New NEON kernels / functions:
   - @ref NEBoundingBoxTransformKernel / @ref NEBoundingBoxTransform
   - @ref NEComputeAllAnchorsKernel / @ref NEComputeAllAnchors
   - @ref NEDetectionPostProcessLayer
   - @ref NEGenerateProposalsLayer
   - @ref NEInstanceNormalizationLayerKernel / @ref NEInstanceNormalizationLayer
   - @ref NELogSoftmaxLayer
   - @ref NEROIAlignLayerKernel / @ref NEROIAlignLayer
 - Added QASYMM8 support for:
   - @ref CLGenerateProposalsLayer
   - @ref CLROIAlignLayer
   - @ref CPPBoxWithNonMaximaSuppressionLimit
 - Added QASYMM16 support for:
   - @ref CLBoundingBoxTransform
 - Added FP16 support for:
   - @ref CLGEMMMatrixMultiplyReshapedKernel
 - Added new data type QASYMM8_PER_CHANNEL support for:
   - @ref CLDequantizationLayer
   - @ref NEDequantizationLayer
 - Added new data type QSYMM8_PER_CHANNEL support for:
   - @ref CLConvolutionLayer
   - @ref NEConvolutionLayer
   - @ref CLDepthwiseConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer
 - Added FP16 mixed-precision support for:
   - @ref CLGEMMMatrixMultiplyReshapedKernel
   - @ref CLPoolingLayerKernel
 - Added FP32 and FP16 ELU activation for:
   - @ref CLActivationLayer
   - @ref NEActivationLayer
 - Added asymmetric padding support for:
   - @ref CLDirectDeconvolutionLayer
   - @ref CLGEMMDeconvolutionLayer
   - @ref NEDeconvolutionLayer
 - Added SYMMETRIC and REFLECT modes for @ref CLPadLayerKernel / @ref CLPadLayer.
 - Replaced the calls to @ref NECopyKernel and @ref NEMemsetKernel with @ref NEPadLayer in @ref NEGenerateProposalsLayer.
 - Replaced the calls to @ref CLCopyKernel and @ref CLMemsetKernel with @ref CLPadLayer in @ref CLGenerateProposalsLayer.
 - Improved performance for CL Inception V3 - FP16.
 - Improved accuracy for CL Inception V3 - FP16 by enabling FP32 accumulator (mixed-precision).
 - Improved NEON performance by enabling the fusion of batch normalization with convolution and depthwise convolution layers.
 - Improved NEON performance for MobileNet-SSD by speeding up the output detection stage.
 - Optimized @ref CLPadLayer.
 - Optimized CL generic depthwise convolution layer by introducing @ref CLDepthwiseConvolutionLayerNativeKernel.
 - Reduced memory consumption by implementing weights sharing.

v19.08.1 Public maintenance release
 - Fix offset calculation in NEReductionOperationKernel.
 - Fix data layout in NEScaleKernel for NHWC.
 - Retain configuration step data layout to avoid side-effects.
 - Perform sqrt in double domain for L2 pooling.
 - Fix output shape calculation for Reduce Mean.
 - Fix broadcast CLPixelwiseMultiplication with 5D tensors.

v19.08 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Deprecated NEON functions:
   - NEDepthConcatenateLayer
   - NEWidthConcatenateLayer
 - Deprecated OpenCL kernels / functions:
   - CLDepthConcatenateLayer
   - CLGEMMInterleave4x4Kernel / CLGEMMInterleave4x4
   - CLGEMMTranspose1xWKernel / CLGEMMTranspose1xW
   - CLWidthConcatenateLayer
 - New NEON kernels / functions:
   - @ref NEAbsLayer
   - @ref NECast
   - @ref NEElementwisePower
   - @ref NELogLayer
   - @ref NELSTMLayerQuantized
   - @ref NENegLayer
   - @ref NEPReluLayer
   - @ref NESinLayer
   - @ref NEBatchConcatenateLayerKernel
   - @ref NEDepthToSpaceLayerKernel / @ref NEDepthToSpaceLayer
   - @ref NEDepthwiseConvolutionLayerNativeKernel
   - @ref NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
   - @ref NEMeanStdDevNormalizationKernel / @ref NEMeanStdDevNormalizationLayer
   - @ref NESpaceToDepthLayerKernel / @ref NESpaceToDepthLayer
 - New OpenCL kernels / functions:
   - @ref CLAbsLayer
   - @ref CLElementwisePower
   - @ref CLLogLayer
   - @ref CLLSTMLayerQuantized
   - @ref CLNegLayer
   - @ref CLPReluLayer
   - @ref CLSinLayer
   - @ref CLBatchConcatenateLayerKernel
   - @ref CLDepthToSpaceLayerKernel / @ref CLDepthToSpaceLayer
   - @ref CLGEMMLowpMatrixMultiplyNativeKernel
   - CLGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
   - @ref CLGEMMMatrixMultiplyNativeKernel
   - @ref CLMeanStdDevNormalizationKernel / @ref CLMeanStdDevNormalizationLayer
   - @ref CLSpaceToDepthLayerKernel / @ref CLSpaceToDepthLayer
 - New examples:
   - neon_opticalflow
   - cl_cache
   - neon_permute
 - Added support for FP16 in @ref NEDeconvolutionLayer
 - Added support for FP16 in @ref CLDeconvolutionLayer
 - Added support for REDUCE_MIN and REDUCE_MAX in @ref ReductionOperation
 - Enable the fusion of batch normalization with convolution and depthwise convolution layer for FP32 in the graph API (OpenCL only)
 - Added support for fusing activation function and broadcast addition with the matrix multiplication for FP32 (OpenCL only)
 - Re-factored the depthwise convolution layer kernel on NEON for generic cases
 - Added an optimized depthwise convolution layer kernel for 5x5 filters (NEON only)
 - Added support to enable the OpenCL kernel cache. Added an example showing how to load the prebuilt OpenCL kernels from a binary cache file.
 - Altered @ref QuantizationInfo interface to support per-channel quantization.
 - The CLDepthwiseConvolutionLayer3x3 will be included by @ref CLDepthwiseConvolutionLayer to accommodate future optimizations.
 - The NEDepthwiseConvolutionLayerOptimized will be included by @ref NEDepthwiseConvolutionLayer to accommodate future optimizations.
 - Removed inner_border_right and inner_border_top parameters from the @ref CLDeconvolutionLayer interface.
 - Removed inner_border_right and inner_border_top parameters from the @ref NEDeconvolutionLayer interface.
 - Optimized the NEON assembly kernel for GEMMLowp. The new implementation fuses the output stage and quantization with the matrix multiplication kernel.

v19.05 Public major release
 - Various bug fixes.
 - Various optimisations.
 - New Neon kernels / functions:
   - @ref NEBatchToSpaceLayerKernel / @ref NEBatchToSpaceLayer
   - @ref NEComplexPixelWiseMultiplicationKernel / @ref NEComplexPixelWiseMultiplication
   - @ref NECropKernel / @ref NECropResize
   - @ref NEDepthwiseConvolutionAssemblyDispatch
   - @ref NEFFTDigitReverseKernel
   - @ref NEFFTRadixStageKernel
   - @ref NEFFTScaleKernel
   - @ref NEGEMMLowpOffsetContributionOutputStageKernel
   - @ref NEHeightConcatenateLayerKernel
   - @ref NESpaceToBatchLayerKernel / @ref NESpaceToBatchLayer
   - @ref NEFFT1D
   - @ref NEFFT2D
   - @ref NEFFTConvolutionLayer
 - New OpenCL kernels / functions:
   - @ref CLComplexPixelWiseMultiplicationKernel / @ref CLComplexPixelWiseMultiplication
   - @ref CLCropKernel / @ref CLCropResize
   - @ref CLDeconvolutionReshapeOutputKernel
   - @ref CLFFTDigitReverseKernel
   - @ref CLFFTRadixStageKernel
   - @ref CLFFTScaleKernel
   - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLHeightConcatenateLayerKernel
   - @ref CLDirectDeconvolutionLayer
   - @ref CLFFT1D
   - @ref CLFFT2D
   - @ref CLFFTConvolutionLayer
   - @ref CLGEMMDeconvolutionLayer
 - New OpenGLES kernels / functions:
   - @ref GCConcatenateLayer
 - Deprecated functions/interfaces:
   - GCDepthConcatenateLayer
   - NEWidthConcatenateLayer
   - NEDepthConcatenateLayer
   - CLWidthConcatenateLayer
   - CLDepthConcatenateLayer
   - CLGEMMInterleave4x4
   - CLGEMMTranspose1xW
 - Support different quantization info in CLConcatLayer.
 - Add checks where different input/output quantization info is not supported.
 - Tensors have different quantization information.
 - Add FP16 support checks.
 - Fix output quantization of CLDepthwiseConv3x3 when activation is fused.
 - New graph examples:
   - graph_convolution
   - graph_fully_connected
   - graph_depthwise_convolution
   - Deepspeech v0.4.1
 - Add support for QASYMM8 in NEArithmeticSubtractionKernel.
 - Add support for QASYMM8 in NEPixelWiseMultiplicationKernel.
 - Add support for QASYMM8 in NEDeconvolution.
 - Add support for DequantizationLayer for NEON/CL.
 - Add support for dilation in CLDepthwiseConvolution.
 - Fuse offset contribution with the output stage when using NEGEMMLowpMatrixMultiplyCore.
 - Optimize CLDeconvolution.
 - Add StackLayer to the graph API.
 - Add support for "reflect" padding mode in NEPad.
 - Winograd 7x7 NHWC on OpenCL.
 - Rework CL ML layers to run exclusively on CL.
 - Support different quantization info in PoolingLayer.
 - Implement and test import memory interfaces.
 - Added new tests and removed old ones.
 - Various clang-tidy fixes.

v19.02 Public major release
 - Various bug fixes.
 - Various optimisations.
 - New Neon kernels / functions:
   - @ref NETileKernel / @ref NETile
   - @ref NEFuseBatchNormalizationKernel / @ref NEFuseBatchNormalization
   - @ref NEElementwiseOperationKernel
   - @ref NEElementwiseMax
   - @ref NEElementwiseMin
   - @ref NEElementwiseSquaredDiff
   - @ref NESelectKernel / @ref NESelect
   - @ref NESplit
   - @ref NESlice
   - @ref NEUnstack
   - @ref NEStridedSliceKernel / @ref NEStridedSlice
   - @ref NEElementwiseUnaryKernel
   - @ref NERsqrtLayer
   - @ref NEExpLayer
   - @ref NEReverseKernel / @ref NEReverse
   - @ref NEArgMinMaxLayer
   - @ref NEStackLayerKernel / @ref NEStackLayer
   - @ref NERangeKernel / @ref NERange
   - @ref NEPadLayer
   - @ref NEMemsetKernel
   - @ref NEGatherKernel / @ref NEGather
   - @ref NEElementwiseComparison
   - @ref NEElementwiseComparisonStatic
   - @ref NEComparisonOperationKernel
   - @ref NEElementwiseDivision
 - New OpenCL kernels / functions:
   - @ref CLSelectKernel / @ref CLSelect
   - @ref CLTileKernel / @ref CLTile
   - @ref CLComparisonKernel / @ref CLComparison
   - @ref CLArgMinMaxLayer
   - @ref CLElementwiseMax
   - @ref CLElementwiseMin
   - @ref CLElementwiseSquaredDiff
   - @ref CLStackLayerKernel / @ref CLStackLayer
   - @ref CLReverse / @ref CLReverseKernel
   - @ref CLRsqrtLayer
   - @ref CLExpLayer
   - @ref CLElementWiseUnaryLayerKernel
   - @ref CLGEMMReshapeLHSMatrixKernel
   - @ref CLGEMMReshapeRHSMatrixKernel
   - @ref CLGEMMMatrixMultiplyReshapedKernel
   - @ref CLRangeKernel / @ref CLRange
   - @ref CLUnstack
   - @ref CLGatherKernel / @ref CLGather
   - @ref CLGEMMLowpMatrixMultiplyReshapedKernel
 - New CPP kernels / functions:
   - @ref CPPDetectionOutputLayer
   - @ref CPPTopKV / @ref CPPTopKVKernel
 - Added new examples:
   - graph_ssd_mobilenet.cpp
   - graph_mobilenet_v2.cpp
   - graph_resnet12.cpp
   - graph_srcnn955.cpp
   - graph_vgg_vdsr.cpp
   - graph_inception_resnet_v1.cpp
 - Add 4D tensor support to:
   - @ref NESoftmaxLayer
 - Fused activation in @ref CLWinogradConvolutionLayer
 - Extended @ref NEPermute to support more cases
 - Added NEON/SVE GEMM Hybrid kernels
 - Added u8 and s8 hybrid assembly kernels
 - Introduced GEMM strategy name in NEGEMMAssemblyWrapper
 - Improved @ref CLTuner
 - Fused the bias addition within @ref CLGEMM
 - Added support for QASYMM8 LOGISTIC activation in @ref NEActivationLayer
 - Added NHWC data layout support to:
   - @ref NEScale for F16
   - @ref CLNormalizationLayer IN_MAP_2D for FP32/FP16
   - @ref NEL2NormalizeLayer for FP32/FP16
   - @ref NENormalizationLayer IN_MAP_2D for FP32/FP16
   - @ref CLROIAlignLayer
   - @ref CLGenerateProposalsLayer
 - Added QASYMM8 support to the following kernels:
   - @ref NEArithmeticAdditionKernel
   - @ref NEScale
 - Added new tests and improved validation and benchmarking suites.
 - Deprecated functions/interfaces:
   - Usage of inner_border_right and inner_border_top has been deprecated in @ref CLDeconvolutionLayer and @ref NEDeconvolutionLayer

v18.11 Public major release
 - Various bug fixes.
 - Various optimisations.
 - New Neon kernels / functions:
   - @ref NEChannelShuffleLayer / @ref NEChannelShuffleLayerKernel
   - @ref NEReduceMean
   - @ref NEReorgLayer / @ref NEReorgLayerKernel
   - @ref NEPriorBoxLayer / @ref NEPriorBoxLayerKernel
   - @ref NEUpsampleLayer / @ref NEUpsampleLayerKernel
   - @ref NEYOLOLayer / @ref NEYOLOLayerKernel
 - New OpenCL kernels / functions:
   - @ref CLBatchToSpaceLayer / @ref CLBatchToSpaceLayerKernel
   - @ref CLBoundingBoxTransform / @ref CLBoundingBoxTransformKernel
   - @ref CLComputeAllAnchorsKernel
   - @ref CLGenerateProposalsLayer
   - @ref CLNormalizePlanarYUVLayer / @ref CLNormalizePlanarYUVLayerKernel
   - @ref CLReorgLayer / @ref CLReorgLayerKernel
   - @ref CLSpaceToBatchLayer / @ref CLSpaceToBatchLayerKernel
   - @ref CLPadLayer
   - @ref CLReduceMean
   - @ref CLPriorBoxLayer / @ref CLPriorBoxLayerKernel
   - @ref CLROIAlignLayer / @ref CLROIAlignLayerKernel
   - @ref CLSlice
   - @ref CLSplit
   - @ref CLStridedSlice / @ref CLStridedSliceKernel
   - @ref CLUpsampleLayer / @ref CLUpsampleLayerKernel
   - @ref CLYOLOLayer / @ref CLYOLOLayerKernel
 - New CPP kernels / functions:
   - @ref CPPBoxWithNonMaximaSuppressionLimit / @ref CPPBoxWithNonMaximaSuppressionLimitKernel
 - Added the validate method in:
   - @ref NEDepthConvertLayer
   - @ref NEFloor / @ref CLFloor
   - @ref NEGEMMMatrixAdditionKernel
   - @ref NEReshapeLayer / @ref CLReshapeLayer
   - @ref CLScale
 - Added new examples:
   - graph_shufflenet.cpp
   - graph_yolov3.cpp
 - Added documentation on how to add a new function or kernel.
 - Improved doxygen documentation by adding a list of the existing functions.
 - Add 4D tensor support to:
   - CLWidthConcatenateLayer
   - @ref CLFlattenLayer
   - @ref CLSoftmaxLayer
 - Add dot product support for @ref CLDepthwiseConvolutionLayer3x3NHWCKernel for non-unit stride
 - Add SVE support
 - Fused batch normalization into convolution layer weights in @ref CLFuseBatchNormalization
 - Fused activation in @ref CLDepthwiseConvolutionLayer3x3NCHWKernel, @ref CLDepthwiseConvolutionLayer3x3NHWCKernel and @ref NEGEMMConvolutionLayer
 - Added NHWC data layout support to:
   - @ref CLChannelShuffleLayer
   - @ref CLDeconvolutionLayer
   - @ref CLL2NormalizeLayer
 - Added QASYMM8 support to the following kernels:
   - @ref CLScaleKernel
   - NEDepthwiseConvolutionLayer3x3Kernel
   - @ref CLPixelWiseMultiplicationKernel
 - Added FP16 support to the following kernels:
   - @ref CLDepthwiseConvolutionLayer3x3NHWCKernel
   - NEDepthwiseConvolutionLayer3x3Kernel
   - @ref CLNormalizePlanarYUVLayerKernel
   - @ref CLWinogradConvolutionLayer (5x5 kernel)
 - More tests added to both validation and benchmarking suites.

v18.08 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Updated recommended NDK version to r17b.
 - Removed support for QS8/QS16 data types.
 - Added support for grouped convolution in @ref CLConvolutionLayer.
 - Added NHWC data layout support to:
   - NEDepthConcatenateLayer / CLDepthConcatenateLayer
   - @ref NEWinogradConvolutionLayer / @ref CLWinogradConvolutionLayer
   - @ref CLDepthwiseConvolutionLayer
   - @ref CLDirectConvolutionLayer
   - @ref CLConvolutionLayer
   - @ref CLScale
   - @ref CLIm2ColKernel
 - New Neon kernels / functions:
   - @ref NERNNLayer
 - New OpenCL kernels / functions:
   - @ref CLArithmeticDivision
 - Introduced prepare() stage support in the graph API for GLES.
 - Added support for memory reuse when trying to allocate smaller CLTensors.
 - Enabled NHWC execution on graph examples.
 - Added JPEG accessor for validation purposes.
 - Added validate methods to some kernels / functions.

v18.05 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Major redesign of the interface for the NEON kernels implemented in assembly.
 - Removed arm_compute::NEGEMMLowpAArch64A53Kernel / arm_compute::NEGEMMLowpAArch64Kernel / arm_compute::NEGEMMLowpAArch64V8P4Kernel / arm_compute::NEGEMMInterleavedBlockedKernel / arm_compute::NEGEMMLowpAssemblyMatrixMultiplyCore / arm_compute::NEHGEMMAArch64FP16Kernel
 - Added NEGEMMAssemblyWrapper and AssemblyKernelGlue, which are used to execute assembly kernels in NEON functions.
 - Minor changes to the CPUInfo type to make it compatible with the new assembly GEMM interface.
 - Moved NEON assembly kernels to the folder src/core/NEON/kernels/arm_gemm.
 - Improved doxygen documentation.
 - Improved memory management for layer transitions.
 - Added support for NHWC data layout in tensors.
 - Added NHWC data layout support to:
   - @ref NEGEMMConvolutionLayer
   - @ref NEDirectConvolutionLayer
   - @ref NEPoolingLayer / @ref CLPoolingLayer
   - @ref NEBatchNormalizationLayer / @ref CLBatchNormalizationLayer
   - @ref NEDepthwiseConvolutionLayer
   - @ref NEScale
   - @ref NEIm2Col
 - Added support for dilated convolutions in @ref NEConvolutionLayer and @ref CLConvolutionLayer.
 - New OpenCL kernels / functions:
   - @ref CLChannelShuffleLayer / @ref CLChannelShuffleLayerKernel
   - @ref CLConvertFullyConnectedWeightsKernel / @ref CLConvertFullyConnectedWeights
   - @ref CLCopy / @ref CLCopyKernel
   - @ref CLLSTMLayer
   - @ref CLRNNLayer
   - CLWidthConcatenateLayer / @ref CLWidthConcatenateLayerKernel
   - @ref CLWinogradFilterTransformKernel / @ref CLWinogradInputTransformKernel / @ref CLWinogradConvolutionLayer
   - @ref CLWinogradInputTransformKernel / @ref CLWinogradInputTransform
 - New Neon kernels / functions:
   - @ref NEConvertFullyConnectedWeightsKernel / @ref NEConvertFullyConnectedWeights.
 - Created the validate method in @ref CLDepthwiseConvolutionLayer.
 - Beta and gamma are no longer mandatory arguments in @ref NEBatchNormalizationLayer and @ref CLBatchNormalizationLayer.
 - Added depth multiplier support in @ref NEDepthwiseConvolutionLayer and @ref CLDepthwiseConvolutionLayer.
 - Added broadcast multiply support in @ref NEPixelWiseMultiplication / @ref NEPixelWiseMultiplicationKernel.
 - Port mobilenet example to NHWC data layout.
 - Enabled Winograd method in @ref CLConvolutionLayer.
 - Renamed NEWinogradLayer to @ref NEWinogradConvolutionLayer.
 - Updated @ref NEWinogradConvolutionLayer to use highly optimised assembly kernels in src/core/NEON/kernels/arm_gemm.
 - Added memory manager support in GLES functions.
 - Major refactoring of the graph API.
 - Added GLES backend in the graph API.
 - Added support for the memory manager in the graph API.
 - Enabled Winograd Convolution method in the graph API.
 - Added support for grouped convolutions in the graph API.
 - Replaced NEDeconvolutionLayerUpsampleKernel with @ref NEScaleKernel in @ref NEDeconvolutionLayer.
 - Added fast maths flag in @ref CLConvolutionLayer.
 - Added new tests and benchmarks in validation and benchmark frameworks
 - Merge Activation layer with Convolution Layer (NEON, CL, GLES)
 - Added support for OpenCL 2.0 SVM
 - Added support for importing memory into OpenCL tensors.
 - Added the prepare() method to perform any one-off pre-processing before running the function (a usage sketch follows at the end of this release's notes).
 - Added new examples:
   - graph_inception_v4.cpp
   - graph_resnext50.cpp
 - Added memory measurement instrument for CL.

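A minimal sketch of the configure / prepare / run workflow, assuming a function such as @ref NEFullyConnectedLayer and already-configured tensor objects (the tensor names are illustrative only):
@code{.cpp}
NEFullyConnectedLayer fc;
// Configure once with the input / weights / biases / output tensor objects
fc.configure(&input, &weights, &biases, &output);

// Allocate the tensors' backing memory and fill the weights and biases
// ...

// One-off pre-processing (e.g. weight reshaping) before the first run()
fc.prepare();

// Run as many times as needed
fc.run();
@endcode
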
v18.03 Public maintenance release
 - Various bug fixes.
 - Fixed bug in @ref NEActivationLayer
 - Fix in @ref CLTuner when using batches.
 - Updated recommended NDK version to r16b (and fixed warnings).
 - Fixed bug in validation code.
 - Added Inception v4 graph example.
 - Renamed NEWinogradLayer.cpp to @ref NEWinogradConvolutionLayer

v18.02 Public major release
 - Various NEON / OpenCL / GLES optimisations.
 - Various bug fixes.
 - Changed default number of threads on big.LITTLE systems.
 - Refactored examples and added:
   - graph_mobilenet_qassym8
   - graph_resnet
   - graph_squeezenet_v1_1
 - Renamed @ref CLConvolutionLayer into @ref CLGEMMConvolutionLayer and created a new @ref CLConvolutionLayer to select the fastest convolution method.
 - Renamed @ref NEConvolutionLayer into @ref NEGEMMConvolutionLayer and created a new @ref NEConvolutionLayer to select the fastest convolution method.
 - Added in place support to:
   - @ref CLActivationLayer
   - @ref CLBatchNormalizationLayer
 - Added QASYMM8 support to:
   - @ref CLActivationLayer
   - @ref CLDepthwiseConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer
   - @ref NESoftmaxLayer
 - Added FP16 support to:
   - CLDepthwiseConvolutionLayer3x3
   - @ref CLDepthwiseConvolutionLayer
 - Added broadcasting support to @ref NEArithmeticAddition / @ref CLArithmeticAddition / @ref CLPixelWiseMultiplication
1049 - Added fused batched normalization and activation to @ref CLBatchNormalizationLayer and @ref NEBatchNormalizationLayer
 - Added support for non-square pooling to @ref NEPoolingLayer and @ref CLPoolingLayer
 - New OpenCL kernels / functions:
   - CLDirectConvolutionLayerOutputStageKernel
 - New NEON kernels / functions:
   - Added name() method to all kernels.
   - Added support for Winograd 5x5.
   - @ref NEPermuteKernel / @ref NEPermute
   - @ref NEWinogradLayerTransformInputKernel / NEWinogradLayer
   - @ref NEWinogradLayerTransformOutputKernel / NEWinogradLayer
   - @ref NEWinogradLayerTransformWeightsKernel / NEWinogradLayer
   - Renamed NEWinogradLayerKernel into NEWinogradLayerBatchedGEMMKernel
 - New GLES kernels / functions:
   - @ref GCTensorShiftKernel / @ref GCTensorShift

v18.01 Public maintenance release
 - Various bug fixes
 - Added some of the missing validate() methods
 - Added @ref CLDeconvolutionLayerUpsampleKernel / @ref CLDeconvolutionLayer / @ref CLDeconvolutionLayerUpsample
 - Added @ref CLPermuteKernel / @ref CLPermute
 - Added method to clean the programs cache in the CL Kernel library.
 - Added @ref GCArithmeticAdditionKernel / @ref GCArithmeticAddition
 - Added @ref GCDepthwiseConvolutionLayer3x3Kernel / @ref GCDepthwiseConvolutionLayer3x3
 - Added @ref GCNormalizePlanarYUVLayerKernel / @ref GCNormalizePlanarYUVLayer
 - Added @ref GCScaleKernel / @ref GCScale
 - Added @ref GCWeightsReshapeKernel / @ref GCConvolutionLayer
 - Added FP16 support to the following GLES compute kernels:
   - @ref GCCol2ImKernel
   - @ref GCGEMMInterleave4x4Kernel
   - @ref GCGEMMTranspose1xWKernel
   - @ref GCIm2ColKernel
 - Refactored NEON Winograd (NEWinogradLayerKernel)
 - Added @ref NEDirectConvolutionLayerOutputStageKernel
 - Added QASYMM8 support to the following NEON kernels:
   - NEDepthwiseConvolutionLayer3x3Kernel
   - @ref NEFillBorderKernel
   - @ref NEPoolingLayerKernel
 - Added new examples:
   - graph_cl_mobilenet_qasymm8.cpp
   - graph_inception_v3.cpp
   - gc_dc.cpp
 - More tests added to both validation and benchmarking suites.

Gian Marcoff850932017-12-11 12:37:17 +00001092v17.12 Public major release
1093 - Most machine learning functions on OpenCL support the new data type QASYMM8
1094 - Introduced logging interface
1095 - Introduced opencl timer
1096 - Reworked GEMMLowp interface
1097 - Added new NEON assembly kernels for GEMMLowp, SGEMM and HGEMM
1098 - Added validation method for most Machine Learning kernels / functions
1099 - Added new graph examples such as googlenet, mobilenet, squeezenet, vgg16 and vgg19
1100 - Added sgemm example for OpenCL
1101 - Added absolute difference example for GLES compute
1102 - Added new tests and benchmarks in validation and benchmark frameworks
1103 - Added new kernels / functions for GLES compute
1104
1105 - New OpenGL ES kernels / functions
Anthony Barbier3762e742018-03-02 11:49:33 +00001106 - @ref GCAbsoluteDifferenceKernel / @ref GCAbsoluteDifference
1107 - @ref GCActivationLayerKernel / @ref GCActivationLayer
1108 - @ref GCBatchNormalizationLayerKernel / @ref GCBatchNormalizationLayer
1109 - @ref GCCol2ImKernel
Georgios Pinitas09f24972019-05-17 18:14:40 +01001110 - @ref GCDepthConcatenateLayerKernel / GCDepthConcatenateLayer
Anthony Barbier3762e742018-03-02 11:49:33 +00001111 - @ref GCDirectConvolutionLayerKernel / @ref GCDirectConvolutionLayer
1112 - @ref GCDropoutLayerKernel / @ref GCDropoutLayer
1113 - @ref GCFillBorderKernel / @ref GCFillBorder
1114 - @ref GCGEMMInterleave4x4Kernel / @ref GCGEMMInterleave4x4
1115 - @ref GCGEMMMatrixAccumulateBiasesKernel / @ref GCGEMMMatrixAdditionKernel / @ref GCGEMMMatrixMultiplyKernel / @ref GCGEMM
1116 - @ref GCGEMMTranspose1xWKernel / @ref GCGEMMTranspose1xW
1117 - @ref GCIm2ColKernel
1118 - @ref GCNormalizationLayerKernel / @ref GCNormalizationLayer
1119 - @ref GCPixelWiseMultiplicationKernel / @ref GCPixelWiseMultiplication
1120 - @ref GCPoolingLayerKernel / @ref GCPoolingLayer
1121 - @ref GCLogits1DMaxKernel / @ref GCLogits1DShiftExpSumKernel / @ref GCLogits1DNormKernel / @ref GCSoftmaxLayer
1122 - @ref GCTransposeKernel / @ref GCTranspose
Gian Marcoff850932017-12-11 12:37:17 +00001123
1124 - New NEON kernels / functions
Pablo Telloeb82fd22018-02-23 13:43:50 +00001125 - arm_compute::NEGEMMLowpAArch64A53Kernel / arm_compute::NEGEMMLowpAArch64Kernel / arm_compute::NEGEMMLowpAArch64V8P4Kernel / arm_compute::NEGEMMInterleavedBlockedKernel / arm_compute::NEGEMMLowpAssemblyMatrixMultiplyCore
1126 - arm_compute::NEHGEMMAArch64FP16Kernel
Georgios Pinitas7d0adc62020-09-04 15:25:24 +01001127 - NEDepthwiseConvolutionLayer3x3Kernel / NEDepthwiseIm2ColKernel / NEGEMMMatrixVectorMultiplyKernel / NEDepthwiseVectorToTensorKernel / @ref NEDepthwiseConvolutionLayer
Anthony Barbier3762e742018-03-02 11:49:33 +00001128 - @ref NEGEMMLowpOffsetContributionKernel / @ref NEGEMMLowpMatrixAReductionKernel / @ref NEGEMMLowpMatrixBReductionKernel / @ref NEGEMMLowpMatrixMultiplyCore
1129 - @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
Georgios Pinitas9fb11592018-04-26 20:34:58 +01001130 - NEWinogradLayer / NEWinogradLayerKernel
Gian Marcoff850932017-12-11 12:37:17 +00001131
1132 - New OpenCL kernels / functions
Anthony Barbier3762e742018-03-02 11:49:33 +00001133 - @ref CLGEMMLowpOffsetContributionKernel / @ref CLGEMMLowpMatrixAReductionKernel / @ref CLGEMMLowpMatrixBReductionKernel / @ref CLGEMMLowpMatrixMultiplyCore
Michele Di Giorgioba14c922020-10-12 13:27:57 +01001134 - CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel / @ref CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
Gian Marcoff850932017-12-11 12:37:17 +00001135
1136 - New graph nodes for NEON and OpenCL
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001137 - graph::BranchLayer
1138 - graph::DepthConvertLayer
1139 - graph::DepthwiseConvolutionLayer
1140 - graph::DequantizationLayer
1141 - graph::FlattenLayer
1142 - graph::QuantizationLayer
1143 - graph::ReshapeLayer
Gian Marcoff850932017-12-11 12:37:17 +00001144
Anthony Barbier3c5b4ff2017-10-12 13:20:52 +01001145v17.10 Public maintenance release
1146 - Bug fixes:
1147 - Check the maximum local workgroup size supported by OpenCL devices
1148 - Minor documentation updates (Fixed instructions to build the examples)
Anthony Barbier3762e742018-03-02 11:49:33 +00001149 - Introduced a graph::GraphContext
Anthony Barbier3c5b4ff2017-10-12 13:20:52 +01001150 - Added a few new Graph nodes, support for branches and grouping.
1151 - Automatically enable cl_printf in debug builds
1152 - Fixed bare metal builds for armv7a
1153 - Added AlexNet and cartoon effect examples
1154 - Fixed library builds: libraries are no longer built as supersets of each other.(It means application using the Runtime part of the library now need to link against both libarm_compute_core and libarm_compute)
1155
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001156v17.09 Public major release
1157 - Experimental Graph support: initial implementation of a simple stream API to easily chain machine learning layers.
Anthony Barbier3762e742018-03-02 11:49:33 +00001158 - Memory Manager (@ref BlobLifetimeManager, @ref BlobMemoryPool, @ref ILifetimeManager, @ref IMemoryGroup, @ref IMemoryManager, @ref IMemoryPool, @ref IPoolManager, @ref MemoryManagerOnDemand, @ref PoolManager)
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001159 - New validation and benchmark frameworks (Boost and Google frameworks replaced by homemade framework).
1160 - Most machine learning functions support both fixed point 8 and 16 bit (QS8, QS16) for both NEON and OpenCL.
1161 - New NEON kernels / functions:
Pablo Telloeb82fd22018-02-23 13:43:50 +00001162 - arm_compute::NEGEMMAssemblyBaseKernel arm_compute::NEGEMMAArch64Kernel
Anthony Barbier3762e742018-03-02 11:49:33 +00001163 - @ref NEDequantizationLayerKernel / @ref NEDequantizationLayer
1164 - @ref NEFloorKernel / @ref NEFloor
1165 - @ref NEL2NormalizeLayerKernel / @ref NEL2NormalizeLayer
1166 - @ref NEQuantizationLayerKernel @ref NEMinMaxLayerKernel / @ref NEQuantizationLayer
1167 - @ref NEROIPoolingLayerKernel / @ref NEROIPoolingLayer
1168 - @ref NEReductionOperationKernel / @ref NEReductionOperation
1169 - @ref NEReshapeLayerKernel / @ref NEReshapeLayer
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001170
1171 - New OpenCL kernels / functions:
Manuel Bottini387259a2020-05-21 17:14:36 +01001172 - @ref CLDepthwiseConvolutionLayer3x3NCHWKernel @ref CLDepthwiseConvolutionLayer3x3NHWCKernel CLDepthwiseIm2ColKernel CLDepthwiseVectorToTensorKernel CLDepthwiseWeightsReshapeKernel / CLDepthwiseConvolutionLayer3x3 @ref CLDepthwiseConvolutionLayer CLDepthwiseSeparableConvolutionLayer
Anthony Barbier3762e742018-03-02 11:49:33 +00001173 - @ref CLDequantizationLayerKernel / @ref CLDequantizationLayer
1174 - @ref CLDirectConvolutionLayerKernel / @ref CLDirectConvolutionLayer
1175 - @ref CLFlattenLayer
1176 - @ref CLFloorKernel / @ref CLFloor
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +01001177 - CLGEMMTranspose1xW
Anthony Barbier3762e742018-03-02 11:49:33 +00001178 - @ref CLGEMMMatrixVectorMultiplyKernel
1179 - @ref CLL2NormalizeLayerKernel / @ref CLL2NormalizeLayer
1180 - @ref CLQuantizationLayerKernel @ref CLMinMaxLayerKernel / @ref CLQuantizationLayer
1181 - @ref CLROIPoolingLayerKernel / @ref CLROIPoolingLayer
1182 - @ref CLReductionOperationKernel / @ref CLReductionOperation
1183 - @ref CLReshapeLayerKernel / @ref CLReshapeLayer
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001184
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001185v17.06 Public major release
1186 - Various bug fixes
1187 - Added support for fixed point 8 bit (QS8) to the various NEON machine learning kernels.
1188 - Added unit tests and benchmarks (AlexNet, LeNet)
1189 - Added support for sub tensors.
1190 - Added infrastructure to provide GPU specific optimisation for some OpenCL kernels.
Anthony Barbier3762e742018-03-02 11:49:33 +00001191 - Added @ref OMPScheduler (OpenMP) scheduler for NEON
1192 - Added @ref SingleThreadScheduler scheduler for NEON (For bare metal)
 - Users can specify their own scheduler by implementing the @ref IScheduler interface.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001194 - New OpenCL kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001195 - @ref CLBatchNormalizationLayerKernel / @ref CLBatchNormalizationLayer
Georgios Pinitas09f24972019-05-17 18:14:40 +01001196 - @ref CLDepthConcatenateLayerKernel / CLDepthConcatenateLayer
Anthony Barbier3762e742018-03-02 11:49:33 +00001197 - @ref CLHOGOrientationBinningKernel @ref CLHOGBlockNormalizationKernel, @ref CLHOGDetectorKernel / @ref CLHOGDescriptor @ref CLHOGDetector @ref CLHOGGradient @ref CLHOGMultiDetection
1198 - @ref CLLocallyConnectedMatrixMultiplyKernel / @ref CLLocallyConnectedLayer
1199 - @ref CLWeightsReshapeKernel / @ref CLConvolutionLayerReshapeWeights
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001200 - New C++ kernels:
Anthony Barbier3762e742018-03-02 11:49:33 +00001201 - @ref CPPDetectionWindowNonMaximaSuppressionKernel
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001202 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001203 - @ref NEBatchNormalizationLayerKernel / @ref NEBatchNormalizationLayer
Georgios Pinitas09f24972019-05-17 18:14:40 +01001204 - @ref NEDepthConcatenateLayerKernel / NEDepthConcatenateLayer
Anthony Barbier3762e742018-03-02 11:49:33 +00001205 - @ref NEDirectConvolutionLayerKernel / @ref NEDirectConvolutionLayer
1206 - @ref NELocallyConnectedMatrixMultiplyKernel / @ref NELocallyConnectedLayer
1207 - @ref NEWeightsReshapeKernel / @ref NEConvolutionLayerReshapeWeights
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001208
1209v17.05 Public bug fixes release
1210 - Various bug fixes
 - Remaining functions ported to use accurate padding.
1212 - Library does not link against OpenCL anymore (It uses dlopen / dlsym at runtime instead to determine whether or not OpenCL is available).
1213 - Added "free" method to allocator.
1214 - Minimum version of g++ required for armv7 Linux changed from 4.8 to 4.9
1215
1216v17.04 Public bug fixes release
1217
1218 The following functions have been ported to use the new accurate padding:
Anthony Barbier3762e742018-03-02 11:49:33 +00001219 - @ref CLColorConvertKernel
1220 - @ref CLEdgeNonMaxSuppressionKernel
1221 - @ref CLEdgeTraceKernel
1222 - @ref CLGaussianPyramidHorKernel
1223 - @ref CLGaussianPyramidVertKernel
1224 - @ref CLGradientKernel
1225 - @ref NEChannelCombineKernel
1226 - @ref NEFillArrayKernel
1227 - @ref NEGaussianPyramidHorKernel
1228 - @ref NEGaussianPyramidVertKernel
Georgios Pinitas09d34512018-08-30 16:02:11 +01001229 - NEHarrisScoreFP16Kernel
Anthony Barbier3762e742018-03-02 11:49:33 +00001230 - @ref NEHarrisScoreKernel
1231 - @ref NEHOGDetectorKernel
1232 - @ref NELogits1DMaxKernel
1233 - NELogits1DShiftExpSumKernel
1234 - NELogits1DNormKernel
1235 - @ref NENonMaximaSuppression3x3FP16Kernel
1236 - @ref NENonMaximaSuppression3x3Kernel
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001237
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001238v17.03.1 First Major public release of the sources
1239 - Renamed the library to arm_compute
1240 - New CPP target introduced for C++ kernels shared between NEON and CL functions.
1241 - New padding calculation interface introduced and ported most kernels / functions to use it.
1242 - New OpenCL kernels / functions:
Gian Marco Iodiceeb65f6d2020-04-15 11:42:15 +01001243 - CLGEMMLowpMatrixMultiplyKernel / CLGEMMLowp
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001244 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001245 - @ref NENormalizationLayerKernel / @ref NENormalizationLayer
1246 - @ref NETransposeKernel / @ref NETranspose
1247 - @ref NELogits1DMaxKernel, NELogits1DShiftExpSumKernel, NELogits1DNormKernel / @ref NESoftmaxLayer
1248 - @ref NEIm2ColKernel, @ref NECol2ImKernel, NEConvolutionLayerWeightsReshapeKernel / @ref NEConvolutionLayer
Michele Di Giorgiof22f6722020-07-03 16:29:24 +01001249 - NEGEMMMatrixAccumulateBiasesKernel / @ref NEFullyConnectedLayer
Anthony Barbier3762e742018-03-02 11:49:33 +00001250 - @ref NEGEMMLowpMatrixMultiplyKernel / NEGEMMLowp
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001251
1252v17.03 Sources preview
1253 - New OpenCL kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001254 - @ref CLGradientKernel, @ref CLEdgeNonMaxSuppressionKernel, @ref CLEdgeTraceKernel / @ref CLCannyEdge
Gian Marco Iodice57a89612019-08-22 14:10:27 +01001255 - GEMM refactoring + FP16 support: CLGEMMInterleave4x4Kernel, CLGEMMTranspose1xWKernel, @ref CLGEMMMatrixMultiplyKernel, CLGEMMMatrixAdditionKernel / @ref CLGEMM
Michele Di Giorgiof6f78762020-07-06 11:27:21 +01001256 - CLGEMMMatrixAccumulateBiasesKernel / @ref CLFullyConnectedLayer
Anthony Barbier3762e742018-03-02 11:49:33 +00001257 - @ref CLTransposeKernel / @ref CLTranspose
1258 - @ref CLLKTrackerInitKernel, @ref CLLKTrackerStage0Kernel, @ref CLLKTrackerStage1Kernel, @ref CLLKTrackerFinalizeKernel / @ref CLOpticalFlow
1259 - @ref CLNormalizationLayerKernel / @ref CLNormalizationLayer
1260 - @ref CLLaplacianPyramid, @ref CLLaplacianReconstruct
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001261 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001262 - @ref NEActivationLayerKernel / @ref NEActivationLayer
1263 - GEMM refactoring + FP16 support (Requires armv8.2 CPU): @ref NEGEMMInterleave4x4Kernel, @ref NEGEMMTranspose1xWKernel, @ref NEGEMMMatrixMultiplyKernel, @ref NEGEMMMatrixAdditionKernel / @ref NEGEMM
1264 - @ref NEPoolingLayerKernel / @ref NEPoolingLayer
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001265
1266v17.02.1 Sources preview
1267 - New OpenCL kernels / functions:
Michele Di Giorgiof6f78762020-07-06 11:27:21 +01001268 - CLLogits1DMaxKernel, CLLogits1DShiftExpSumKernel, @ref CLLogits1DNormKernel / @ref CLSoftmaxLayer
Anthony Barbier3762e742018-03-02 11:49:33 +00001269 - @ref CLPoolingLayerKernel / @ref CLPoolingLayer
1270 - @ref CLIm2ColKernel, @ref CLCol2ImKernel, CLConvolutionLayerWeightsReshapeKernel / @ref CLConvolutionLayer
1271 - @ref CLRemapKernel / @ref CLRemap
1272 - @ref CLGaussianPyramidHorKernel, @ref CLGaussianPyramidVertKernel / @ref CLGaussianPyramid, @ref CLGaussianPyramidHalf, @ref CLGaussianPyramidOrb
1273 - @ref CLMinMaxKernel, @ref CLMinMaxLocationKernel / @ref CLMinMaxLocation
1274 - @ref CLNonLinearFilterKernel / @ref CLNonLinearFilter
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001275 - New NEON FP16 kernels (Requires armv8.2 CPU)
Anthony Barbier3762e742018-03-02 11:49:33 +00001276 - @ref NEAccumulateWeightedFP16Kernel
1277 - @ref NEBox3x3FP16Kernel
1278 - @ref NENonMaximaSuppression3x3FP16Kernel
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001279
1280v17.02 Sources preview
1281 - New OpenCL kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001282 - @ref CLActivationLayerKernel / @ref CLActivationLayer
1283 - @ref CLChannelCombineKernel / @ref CLChannelCombine
1284 - @ref CLDerivativeKernel / @ref CLChannelExtract
1285 - @ref CLFastCornersKernel / @ref CLFastCorners
1286 - @ref CLMeanStdDevKernel / @ref CLMeanStdDev
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001287 - New NEON kernels / functions:
Anthony Barbier3762e742018-03-02 11:49:33 +00001288 - HOG / SVM: @ref NEHOGOrientationBinningKernel, @ref NEHOGBlockNormalizationKernel, @ref NEHOGDetectorKernel, NEHOGNonMaximaSuppressionKernel / @ref NEHOGDescriptor, @ref NEHOGDetector, @ref NEHOGGradient, @ref NEHOGMultiDetection
1289 - @ref NENonLinearFilterKernel / @ref NENonLinearFilter
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001290 - Introduced a CLScheduler to manage the default context and command queue used by the runtime library and create synchronisation events.
1291 - Switched all the kernels / functions to use tensors instead of images.
1292 - Updated documentation to include instructions to build the library from sources.
1293
1294v16.12 Binary preview release
1295 - Original release
1296
1297@section S3_how_to_build How to build the library and the examples
1298
1299@subsection S3_1_build_options Build options
1300
1301scons 2.3 or above is required to build the library.
1302To see the build options available simply run ```scons -h```:
1303
Anthony Barbier79c61782017-06-23 11:48:24 +01001304 debug: Debug (yes|no)
1305 default: False
1306 actual: False
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001307
Anthony Barbier79c61782017-06-23 11:48:24 +01001308 asserts: Enable asserts (this flag is forced to 1 for debug=1) (yes|no)
1309 default: False
1310 actual: False
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001311
Anthony Barbier79c61782017-06-23 11:48:24 +01001312 arch: Target Architecture (armv7a|arm64-v8a|arm64-v8.2-a|x86_32|x86_64)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001313 default: armv7a
1314 actual: armv7a
1315
Anthony Barbier79c61782017-06-23 11:48:24 +01001316 os: Target OS (linux|android|bare_metal)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001317 default: linux
1318 actual: linux
1319
Anthony Barbier2d0ce772018-02-21 15:35:36 +00001320 build: Build type (native|cross_compile|embed_only)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001321 default: cross_compile
1322 actual: cross_compile
1323
Anthony Barbier79c61782017-06-23 11:48:24 +01001324 examples: Build example programs (yes|no)
1325 default: True
1326 actual: True
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001327
Anthony Barbier79c61782017-06-23 11:48:24 +01001328 Werror: Enable/disable the -Werror compilation flag (yes|no)
1329 default: True
1330 actual: True
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001331
Anthony Barbier79c61782017-06-23 11:48:24 +01001332 opencl: Enable OpenCL support (yes|no)
1333 default: True
1334 actual: True
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001335
Anthony Barbier79c61782017-06-23 11:48:24 +01001336 neon: Enable Neon support (yes|no)
1337 default: False
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001338 actual: False
1339
Anthony Barbier20dbb822017-12-13 21:19:39 +00001340 gles_compute: Enable OpenGL ES Compute Shader support (yes|no)
1341 default: False
1342 actual: False
1343
1344 embed_kernels: Embed OpenCL kernels and OpenGL ES compute shader in library binary (yes|no)
Anthony Barbiercc0a80b2017-12-15 11:37:29 +00001345 default: True
1346 actual: True
Anthony Barbier79c61782017-06-23 11:48:24 +01001347
1348 set_soname: Set the library's soname and shlibversion (requires SCons 2.4 or above) (yes|no)
1349 default: False
1350 actual: False
1351
1352 openmp: Enable OpenMP backend (yes|no)
1353 default: False
1354 actual: False
1355
1356 cppthreads: Enable C++11 threads backend (yes|no)
1357 default: True
1358 actual: True
1359
1360 build_dir: Specify sub-folder for the build ( /path/to/build_dir )
1361 default: .
1362 actual: .
1363
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001364 extra_cxx_flags: Extra CXX flags to be appended to the build command
1365 default:
1366 actual:
1367
Anthony Barbier79c61782017-06-23 11:48:24 +01001368 pmu: Enable PMU counters (yes|no)
1369 default: False
1370 actual: False
1371
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001372 mali: Enable Mali hardware counters (yes|no)
1373 default: False
1374 actual: False
1375
Anthony Barbier79c61782017-06-23 11:48:24 +01001376 validation_tests: Build validation test programs (yes|no)
1377 default: False
1378 actual: False
1379
1380 benchmark_tests: Build benchmark test programs (yes|no)
1381 default: False
1382 actual: False
1383
1384@b debug / @b asserts:
 - With debug=1: asserts are enabled, and the library is built with symbols and no optimisations enabled.
 - With debug=0 and asserts=1: optimisations are enabled and symbols are removed, but all the asserts are still present (this is about 20% slower than the release build).
 - With debug=0 and asserts=0: all optimisations are enabled and no validation is performed; if the application misuses the library it is likely to crash. (Only use this mode once you are sure your application works as expected.)
1388
Anthony Barbier79c61782017-06-23 11:48:24 +01001389@b arch: The x86_32 and x86_64 targets can only be used with neon=0 and opencl=1.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001390
Anthony Barbier79c61782017-06-23 11:48:24 +01001391@b os: Choose the operating system you are targeting: Linux, Android or bare metal.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001392@note bare metal can only be used for NEON (not OpenCL), only static libraries get built and NEON's multi-threading support is disabled.
1393
@b build: you can either build directly on your device (native) or cross-compile from your desktop machine (cross_compile). In both cases make sure the compiler is available in your path.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001395
Anthony Barbier79c61782017-06-23 11:48:24 +01001396@note If you want to natively compile for 32bit on a 64bit ARM device running a 64bit OS then you will have to use cross-compile too.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001397
There is also an 'embed_only' option which will only generate the .embed files for the OpenCL kernels and / or OpenGLES compute shaders. This can be useful if you are using a different build system to compile the library.
1399
@b Werror: If you are compiling using the same toolchains as the ones used in this guide then there shouldn't be any warnings and you should be able to keep Werror=1. If the library fails to build with a different compiler version because warnings are treated as errors, and you are sure the warnings are not important, you can try building with Werror=0 (but please report the issue either on Github or by email to developer@arm.com so that it can be addressed).
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001401
Anthony Barbier20dbb822017-12-13 21:19:39 +00001402@b opencl / @b neon / @b gles_compute: Choose which SIMD technology you want to target. (NEON for ARM Cortex-A CPUs or OpenCL / GLES_COMPUTE for ARM Mali GPUs)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001403
Anthony Barbier20dbb822017-12-13 21:19:39 +00001404@b embed_kernels: For OpenCL / GLES_COMPUTE only: set embed_kernels=1 if you want the OpenCL / GLES_COMPUTE kernels to be built in the library's binaries instead of being read from separate ".cl" / ".cs" files. If embed_kernels is set to 0 then the application can set the path to the folder containing the OpenCL / GLES_COMPUTE kernel files by calling CLKernelLibrary::init() / GCKernelLibrary::init(). By default the path is set to "./cl_kernels" / "./cs_shaders".
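When embed_kernels=0, a minimal sketch of pointing the runtime at an external kernel folder might look like the block below. The three-argument init() overload (path, context, device) and the "./cl_kernels/" path are assumptions for illustration and may differ between releases.

@code{.cpp}
#include "arm_compute/core/CL/CLKernelLibrary.h"

using namespace arm_compute;

void point_runtime_at_external_kernels()
{
    // Hypothetical location of the .cl files deployed next to the application binary.
    // The exact init() overload and its default arguments may differ between library versions.
    CLKernelLibrary::get().init("./cl_kernels/", cl::Context::getDefault(), cl::Device::getDefault());
}
@endcode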
Anthony Barbier79c61782017-06-23 11:48:24 +01001405
@b set_soname: Whether to build a versioned library.
1407
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001408If enabled the library will contain a SONAME and SHLIBVERSION and some symlinks will automatically be created between the objects.
1409Example:
1410 libarm_compute_core.so -> libarm_compute_core.so.1.0.0
1411 libarm_compute_core.so.1 -> libarm_compute_core.so.1.0.0
1412 libarm_compute_core.so.1.0.0
1413
@note This option is disabled by default as it requires SCons version 2.4 or above.
1415
Anthony Barbier79c61782017-06-23 11:48:24 +01001416@b extra_cxx_flags: Custom CXX flags which will be appended to the end of the build command.
1417
@b build_dir: Build the library in a sub-folder of the "build" folder. (This allows building several configurations in parallel.)
1419
@b examples: Whether to build the example programs.
1421
1422@b validation_tests: Enable the build of the validation suite.
1423
Anthony Barbier79c61782017-06-23 11:48:24 +01001424@b benchmark_tests: Enable the build of the benchmark tests
1425
1426@b pmu: Enable the PMU cycle counter to measure execution time in benchmark tests. (Your device needs to support it)
1427
Anthony Barbier6a5627a2017-09-26 14:42:02 +01001428@b mali: Enable the collection of Mali hardware counters to measure execution time in benchmark tests. (Your device needs to have a Mali driver that supports it)
1429
@b openmp: Build in the OpenMP scheduler for NEON.

@note Only works when building with g++, not clang++

@b cppthreads: Build in the C++11 scheduler for NEON.
1435
Anthony Barbier3762e742018-03-02 11:49:33 +00001436@sa Scheduler::set
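As a rough sketch (not a complete application), the scheduler backend can also be selected at run time, assuming the corresponding backend was enabled at build time:

@code{.cpp}
#include "arm_compute/runtime/Scheduler.h"

using namespace arm_compute;

int main()
{
    // Select the OpenMP scheduler (requires openmp=1 at build time).
    // Other values are Scheduler::Type::CPP (C++11 threads) and Scheduler::Type::ST (single-threaded).
    Scheduler::set(Scheduler::Type::OMP);

    // ... configure and run NEON functions as usual ...
    return 0;
}
@endcode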
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001437
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001438@subsection S3_2_linux Building for Linux
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001439
1440@subsubsection S3_2_1_library How to build the library ?
1441
1442For Linux, the library was successfully built and tested using the following Linaro GCC toolchain:
1443
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001444 - gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
1445 - gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001446
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001447To cross-compile the library in debug mode, with NEON only support, for Linux 32bit:
1448
1449 scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=linux arch=armv7a
1450
1451To cross-compile the library in asserts mode, with OpenCL only support, for Linux 64bit:
1452
1453 scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=linux arch=arm64-v8a
1454
Anthony Barbier20dbb822017-12-13 21:19:39 +00001455To cross-compile the library in asserts mode, with GLES_COMPUTE only support, for Linux 64bit:
1456
1457 scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=0 gles_compute=1 embed_kernels=1 os=linux arch=arm64-v8a
1458
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001459You can also compile the library natively on an ARM device by using <b>build=native</b>:
1460
1461 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=arm64-v8a build=native
1462 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=native
1463
1464@note g++ for ARM is mono-arch, therefore if you want to compile for Linux 32bit on a Linux 64bit platform you will have to use a cross compiler.
1465
1466For example on a 64bit Debian based system you would have to install <b>g++-arm-linux-gnueabihf</b>
1467
1468 apt-get install g++-arm-linux-gnueabihf
1469
1470Then run
1471
1472 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=cross_compile
1473
1474or simply remove the build parameter as build=cross_compile is the default value:
1475
1476 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a
1477
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001478@subsubsection S3_2_2_examples How to manually build the examples ?
1479
1480The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
1481
Sheri Zhang7a7f4e02020-08-28 20:08:49 +01001482@note The following command lines assume the arm_compute libraries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built libraries with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001483
1484To cross compile a NEON example for Linux 32bit:
1485
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001486 arm-linux-gnueabihf-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001487
1488To cross compile a NEON example for Linux 64bit:
1489
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001490 aarch64-linux-gnu-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001491
1492(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
1493
1494To cross compile an OpenCL example for Linux 32bit:
1495
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001496 arm-linux-gnueabihf-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001497
1498To cross compile an OpenCL example for Linux 64bit:
1499
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001500 aarch64-linux-gnu-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001501
Anthony Barbier14c86a92017-12-14 16:27:41 +00001502To cross compile a GLES example for Linux 32bit:
1503
1504 arm-linux-gnueabihf-g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -mfpu=neon -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff
1505
1506To cross compile a GLES example for Linux 64bit:
1507
1508 aarch64-linux-gnu-g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff
1509
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001510(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
1511
Anthony Barbier14c86a92017-12-14 16:27:41 +00001512To cross compile the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.
1513
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001514i.e. to cross compile the "graph_lenet" example for Linux 32bit:
1515
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001516 arm-linux-gnueabihf-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001517
1518i.e. to cross compile the "graph_lenet" example for Linux 64bit:
1519
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001520 aarch64-linux-gnu-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001521
1522(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
1523
Anthony Barbiere5007472017-10-27 15:01:44 +01001524@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute, arm_compute_core
1525
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001526To compile natively (i.e directly on an ARM device) for NEON for Linux 32bit:
1527
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001528 g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001529
1530To compile natively (i.e directly on an ARM device) for NEON for Linux 64bit:
1531
Anthony Barbierb2881fc2017-09-29 17:12:12 +01001532 g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -larm_compute_core -o neon_convolution
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001533
1534(notice the only difference with the 32 bit command is that we don't need the -mfpu option)
1535
1536To compile natively (i.e directly on an ARM device) for OpenCL for Linux 32bit or Linux 64bit:
1537
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001538 g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001539
Anthony Barbier14c86a92017-12-14 16:27:41 +00001540To compile natively (i.e directly on an ARM device) for GLES for Linux 32bit or Linux 64bit:
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001541
Anthony Barbier14c86a92017-12-14 16:27:41 +00001542 g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff
1543
1544To compile natively the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.
Anthony Barbier14c86a92017-12-14 16:27:41 +00001545
1546i.e. to natively compile the "graph_lenet" example for Linux 32bit:
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001547
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001548 g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001549
Anthony Barbier14c86a92017-12-14 16:27:41 +00001550i.e. to natively compile the "graph_lenet" example for Linux 64bit:
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001551
Gian Marco Iodicef94c6742020-06-26 12:35:09 +01001552 g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001553
1554(notice the only difference with the 32 bit command is that we don't need the -mfpu option)
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001555
Anthony Barbiere5007472017-10-27 15:01:44 +01001556@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute, arm_compute_core
1557
@note These two commands assume libarm_compute.so is available in your library path; if not, add the path to it using -L (e.g. -Llib/linux-arm64-v8a-neon-cl-asserts/)
Georgios Pinitas58216322020-02-26 11:13:13 +00001559@note You might need to export the path to OpenCL library as well in your LD_LIBRARY_PATH if Compute Library was built with OpenCL enabled.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001560
1561To run the built executable simply run:
1562
1563 LD_LIBRARY_PATH=build ./neon_convolution
1564
1565or
1566
1567 LD_LIBRARY_PATH=build ./cl_convolution
1568
@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified then random values will be used to execute the graph.
Anthony Barbier3762e742018-03-02 11:49:33 +00001570
1571For example:
Anthony Barbier38e7f1f2018-05-21 13:37:47 +01001572
Georgios Pinitas9f28b392018-07-18 20:01:53 +01001573 LD_LIBRARY_PATH=. ./graph_lenet --help
Anthony Barbier3762e742018-03-02 11:49:33 +00001574
Below is a list of the common parameters among the graph examples:
1576@snippet utils/CommonGraphOptions.h Common graph examples parameters
Anthony Barbier3762e742018-03-02 11:49:33 +00001577
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001578@subsection S3_3_android Building for Android
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001579
1580For Android, the library was successfully built and tested using Google's standalone toolchains:
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001581 - clang++ from NDK r18b for armv7a
1582 - clang++ from NDK r18b for arm64-v8a
1583 - clang++ from NDK r18b for arm64-v8.2-a with FP16 support
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001584
1585Here is a guide to <a href="https://developer.android.com/ndk/guides/standalone_toolchain.html">create your Android standalone toolchains from the NDK</a>
1586
Sheri Zhang7a7f4e02020-08-28 20:08:49 +01001587- Download the NDK r18b from here: https://developer.android.com/ndk/downloads/index.html to directory $NDK
Georgios Pinitasf112ede2019-03-01 19:11:20 +00001588- Make sure you have Python 2.7 installed on your machine.
- Generate the 32-bit and/or 64-bit toolchains by running the following commands, installing them into your toolchain directory $MY_TOOLCHAINS:
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001590
Anthony Barbier38e7f1f2018-05-21 13:37:47 +01001591
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001592 $NDK/build/tools/make_standalone_toolchain.py --arch arm64 --install-dir $MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b --stl libc++ --api 21
1593 $NDK/build/tools/make_standalone_toolchain.py --arch arm --install-dir $MY_TOOLCHAINS/arm-linux-android-ndk-r18b --stl libc++ --api 21
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001594
Anthony Barbierd51ea0a2018-08-07 17:48:03 +01001595@attention We used to use gnustl but as of NDK r17 it is deprecated so we switched to libc++
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001596
Anthony Barbier38e7f1f2018-05-21 13:37:47 +01001597@note Make sure to add the toolchains to your PATH:
1598
Michele Di Giorgio36a551f2020-04-23 11:55:29 +01001599 export PATH=$PATH:$MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b/bin:$MY_TOOLCHAINS/arm-linux-android-ndk-r18b/bin
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001600
1601@subsubsection S3_3_1_library How to build the library ?
1602
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001603To cross-compile the library in debug mode, with NEON only support, for Android 32bit:
1604
1605 CXX=clang++ CC=clang scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=android arch=armv7a
1606
1607To cross-compile the library in asserts mode, with OpenCL only support, for Android 64bit:
1608
Anthony Barbier14c86a92017-12-14 16:27:41 +00001609 CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=android arch=arm64-v8a
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001610
Anthony Barbier20dbb822017-12-13 21:19:39 +00001611To cross-compile the library in asserts mode, with GLES_COMPUTE only support, for Android 64bit:
1612
Anthony Barbier14c86a92017-12-14 16:27:41 +00001613 CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=0 gles_compute=1 embed_kernels=1 os=android arch=arm64-v8a
Anthony Barbier20dbb822017-12-13 21:19:39 +00001614
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001615@subsubsection S3_3_2_examples How to manually build the examples ?
1616
1617The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
1618
Sheri Zhang7a7f4e02020-08-28 20:08:49 +01001619@note The following command lines assume the arm_compute libraries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built libraries with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001620
1621Once you've got your Android standalone toolchain built and added to your path you can do the following:
1622
1623To cross compile a NEON example:
1624
1625 #32 bit:
Georgios Pinitas9873ea32017-12-05 15:28:55 +00001626 arm-linux-androideabi-clang++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o neon_convolution_arm -static-libstdc++ -pie
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001627 #64 bit:
Anthony Barbier14c86a92017-12-14 16:27:41 +00001628 aarch64-linux-android-clang++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o neon_convolution_aarch64 -static-libstdc++ -pie
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001629
1630To cross compile an OpenCL example:
1631
1632 #32 bit:
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001633 arm-linux-androideabi-clang++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o cl_convolution_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001634 #64 bit:
Georgios Pinitasd9eb2752018-04-03 13:44:29 +01001635 aarch64-linux-android-clang++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o cl_convolution_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL
Anthony Barbier14c86a92017-12-14 16:27:41 +00001636
1637To cross compile a GLES example:
Anthony Barbiercc0a80b2017-12-15 11:37:29 +00001638
Anthony Barbier14c86a92017-12-14 16:27:41 +00001639 #32 bit:
1640 arm-linux-androideabi-clang++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o gc_absdiff_arm -static-libstdc++ -pie -DARM_COMPUTE_GC
1641 #64 bit:
1642 aarch64-linux-android-clang++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o gc_absdiff_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_GC
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001643
To cross compile the examples with the Graph API, such as graph_lenet.cpp, you also need to link against the arm_compute_graph library.
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001645
1646 #32 bit:
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001647 arm-linux-androideabi-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001648 #64 bit:
Georgios Pinitas12be7ab2018-07-03 12:06:23 +01001649 aarch64-linux-android-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL
Gian Marco Iodicedaec1aa2017-09-29 12:03:18 +01001650
@note Due to some issues in older versions of the Mali OpenCL DDK (<= r13p0), we recommend linking arm_compute statically on Android.
Anthony Barbier20dbb822017-12-13 21:19:39 +00001652@note When linked statically the arm_compute_graph library currently needs the --whole-archive linker flag in order to work properly
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001653
Then you need to upload the executable and the shared library to the device using ADB:
1655
1656 adb push neon_convolution_arm /data/local/tmp/
1657 adb push cl_convolution_arm /data/local/tmp/
Anthony Barbier14c86a92017-12-14 16:27:41 +00001658 adb push gc_absdiff_arm /data/local/tmp/
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001659 adb shell chmod 777 -R /data/local/tmp/
1660
1661And finally to run the example:
1662
1663 adb shell /data/local/tmp/neon_convolution_arm
1664 adb shell /data/local/tmp/cl_convolution_arm
Anthony Barbier14c86a92017-12-14 16:27:41 +00001665 adb shell /data/local/tmp/gc_absdiff_arm
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001666
1667For 64bit:
1668
1669 adb push neon_convolution_aarch64 /data/local/tmp/
1670 adb push cl_convolution_aarch64 /data/local/tmp/
Anthony Barbier14c86a92017-12-14 16:27:41 +00001671 adb push gc_absdiff_aarch64 /data/local/tmp/
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001672 adb shell chmod 777 -R /data/local/tmp/
1673
1674And finally to run the example:
1675
1676 adb shell /data/local/tmp/neon_convolution_aarch64
1677 adb shell /data/local/tmp/cl_convolution_aarch64
Anthony Barbier14c86a92017-12-14 16:27:41 +00001678 adb shell /data/local/tmp/gc_absdiff_aarch64
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001679
@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified then random values will be used to execute the graph.
Anthony Barbier3762e742018-03-02 11:49:33 +00001681
1682For example:
Georgios Pinitas9f28b392018-07-18 20:01:53 +01001683 adb shell /data/local/tmp/graph_lenet --help
Anthony Barbier3762e742018-03-02 11:49:33 +00001684
In this case the first argument of LeNet (like all the graph examples) is the target (i.e. 0 to run on NEON, 1 to run on OpenCL if available, 2 to run on OpenCL using the CLTuner), the second argument is the path to the folder containing the npy files for the weights, and the third argument is the number of batches to run.
1686
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001687@subsection S3_4_bare_metal Building for bare metal
1688
For bare metal, the library was successfully built using the following Linaro bare metal toolchains (gcc-linaro-6.3.1-2017.05):
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001690 - arm-eabi for armv7a
1691 - aarch64-elf for arm64-v8a
1692
1693Download linaro for <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/arm-eabi/">armv7a</a> and <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/aarch64-elf/">arm64-v8a</a>.
1694
1695@note Make sure to add the toolchains to your PATH: export PATH=$PATH:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-elf/bin:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_arm-eabi/bin
1696
1697@subsubsection S3_4_1_library How to build the library ?
1698
1699To cross-compile the library with NEON support for baremetal arm64-v8a:
1700
1701 scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=bare_metal arch=arm64-v8a build=cross_compile cppthreads=0 openmp=0 standalone=1
1702
1703@subsubsection S3_4_2_examples How to manually build the examples ?
1704
1705Examples are disabled when building for bare metal. If you want to build the examples you need to provide a custom bootcode depending on the target architecture and link against the compute library. More information about bare metal bootcode can be found <a href="http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0527a/index.html">here</a>.
1706
1707@subsection S3_5_windows_host Building on a Windows host system
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001708
1709Using `scons` directly from the Windows command line is known to cause
1710problems. The reason seems to be that if `scons` is setup for cross-compilation
1711it gets confused about Windows style paths (using backslashes). Thus it is
1712recommended to follow one of the options outlined below.
1713
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001714@subsubsection S3_5_1_ubuntu_on_windows Bash on Ubuntu on Windows
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001715
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +01001716The best and easiest option is to use
1717<a href="https://msdn.microsoft.com/en-gb/commandline/wsl/about">Ubuntu on Windows</a>.
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001718This feature is still marked as *beta* and thus might not be available.
However, if it is, building the library is as simple as opening a *Bash on
1720Ubuntu on Windows* shell and following the general guidelines given above.
1721
Michalis Spyrou6e52ba32017-10-04 15:40:38 +01001722@subsubsection S3_5_2_cygwin Cygwin
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001723
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +01001724If the Windows subsystem for Linux is not available <a href="https://www.cygwin.com/">Cygwin</a>
can be used to install and run `scons`; the minimum Cygwin version must be 3.0.7 or later. In addition
to the default packages installed by Cygwin, `scons` has to be selected in the installer. (`git` might
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001727also be useful but is not strictly required if you already have got the source
Gian Marco Iodice5fc07aa2019-05-15 17:08:02 +01001728code of the library.) Linaro provides pre-built versions of
1729<a href="http://releases.linaro.org/components/toolchain/binaries/">GCC cross-compilers</a>
Moritz Pflanzer07674de2017-07-21 09:39:36 +01001730that can be used from the Cygwin terminal. When building for Android the
1731compiler is included in the Android standalone toolchain. After everything has
1732been set up in the Cygwin terminal the general guide on building the library
1733can be followed.
1734
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001735@subsection S3_6_cl_requirements OpenCL DDK Requirements
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001736
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001737@subsubsection S3_6_1_cl_hard_requirements Hard Requirements
Georgios Pinitasd9cb0572018-07-16 12:23:09 +01001738
Compute Library requires OpenCL 1.1 and above, with support for non-uniform workgroup sizes, which is officially supported in the Mali OpenCL DDK r8p0 and above as an extension (the relevant extension flag is \a -cl-arm-non-uniform-work-group-size).
1740
Enabling 16-bit floating point calculations requires the \a cl_khr_fp16 extension to be supported. All Mali GPUs with compute capabilities have native support for half precision floating point.
1742
Use of the @ref CLMeanStdDev function requires 64-bit atomics support, so the \a cl_khr_int64_base_atomics extension must be supported in order to use it.
1744
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001745@subsubsection S3_6_2_cl_performance_requirements Performance improvements
Georgios Pinitasd9cb0572018-07-16 12:23:09 +01001746
Integer dot product built-in function extensions (and therefore optimized kernels) are available with Mali OpenCL DDK r22p0 and above for the following GPUs: G71, G76. The relevant extensions are \a cl_arm_integer_dot_product_int8, \a cl_arm_integer_dot_product_accumulate_int8 and \a cl_arm_integer_dot_product_accumulate_int16.
1748
OpenCL kernel level debugging can be simplified with the use of printf; this requires the \a cl_arm_printf extension to be supported.
1750
SVM allocations are supported for all the underlying allocations in Compute Library. Enabling this requires OpenCL 2.0 and above.
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001752
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001753@subsection S3_7_cl_tuner OpenCL Tuner
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001754
The OpenCL tuner, a.k.a. CLTuner, is a module of Arm Compute Library that can improve the performance of the OpenCL kernels by tuning the Local-Workgroup-Size (LWS).
1756The optimal LWS for each unique OpenCL kernel configuration is stored in a table. This table can be either imported or exported from/to a file.
Vidhya Sudhan Loganathandc5d3432019-04-29 11:44:11 +01001757The OpenCL tuner runs the same OpenCL kernel for a range of local workgroup sizes and keeps the local workgroup size of the fastest run to use in subsequent calls to the kernel. It supports three modes of tuning with different trade-offs between the time taken to tune and the kernel execution time achieved using the best LWS found. In the Exhaustive mode, it searches all the supported values of LWS. This mode takes the longest time to tune and is the most likely to find the optimal LWS. Normal mode searches a subset of LWS values to yield a good approximation of the optimal LWS. It takes less time to tune than Exhaustive mode. Rapid mode takes the shortest time to tune and finds an LWS value that is at least as good or better than the default LWS value. The mode affects only the search for the optimal LWS and has no effect when the LWS value is imported from a file.
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001758In order for the performance numbers to be meaningful you must disable the GPU power management and set it to a fixed frequency for the entire duration of the tuning phase.
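As an illustrative sketch, assuming the CLTunerMode / set_tuner_mode() API available in recent releases, the tuning mode can be selected before initialising the scheduler:

@code{.cpp}
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTuner.h"

using namespace arm_compute;

int main()
{
    CLTuner tuner;
    // Trade tuning time for LWS quality: EXHAUSTIVE, NORMAL or RAPID
    tuner.set_tuner_mode(CLTunerMode::RAPID);
    CLScheduler::get().default_init(&tuner);

    // ... configure and run OpenCL functions as usual ...
    return 0;
}
@endcode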
1759
If you wish to know more about LWS and its important role in improving GPU cache utilization, we suggest having a look at the presentation "Even Faster CNNs: Exploring the New Class of Winograd Algorithms", available at the following link:
1761
1762https://www.embedded-vision.com/platinum-members/arm/embedded-vision-training/videos/pages/may-2018-embedded-vision-summit-iodice
1763
Tuning a network from scratch can take a long time and considerably affect the execution time of the first run of your network. For this reason it is recommended to store the CLTuner's results in a file, so that this time is amortized when you re-use the same network or functions with the same configurations. The tuning is performed only once for each OpenCL kernel.
1765
1766CLTuner looks for the optimal LWS for each unique OpenCL kernel configuration. Since a function (i.e. Convolution Layer, Pooling Layer, Fully Connected Layer ...) can be called multiple times but with different parameters, we associate an "id" (called "config_id") to each kernel to distinguish the unique configurations.
1767
1768 #Example: 2 unique Matrix Multiply configurations
1769@code{.cpp}
1770 TensorShape a0 = TensorShape(32,32);
1771 TensorShape b0 = TensorShape(32,32);
1772 TensorShape c0 = TensorShape(32,32);
1773 TensorShape a1 = TensorShape(64,64);
1774 TensorShape b1 = TensorShape(64,64);
1775 TensorShape c1 = TensorShape(64,64);
1776
 CLTensor a0_tensor;
 CLTensor b0_tensor;
 CLTensor c0_tensor;
 CLTensor a1_tensor;
 CLTensor b1_tensor;
 CLTensor c1_tensor;
1783
1784 a0_tensor.allocator()->init(TensorInfo(a0, 1, DataType::F32));
1785 b0_tensor.allocator()->init(TensorInfo(b0, 1, DataType::F32));
1786 c0_tensor.allocator()->init(TensorInfo(c0, 1, DataType::F32));
1787 a1_tensor.allocator()->init(TensorInfo(a1, 1, DataType::F32));
1788 b1_tensor.allocator()->init(TensorInfo(b1, 1, DataType::F32));
 c1_tensor.allocator()->init(TensorInfo(c1, 1, DataType::F32));
1790
1791 CLGEMM gemm0;
1792 CLGEMM gemm1;
1793
1794 // Configuration 0
 gemm0.configure(&a0_tensor, &b0_tensor, nullptr, &c0_tensor, 1.0f, 0.0f);
1796
1797 // Configuration 1
 gemm1.configure(&a1_tensor, &b1_tensor, nullptr, &c1_tensor, 1.0f, 0.0f);
1799@endcode
1800
Georgios Pinitasfd7780d2020-03-17 11:41:00 +00001801@subsubsection S3_7_1_cl_tuner_how_to How to use it
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001802
All the graph examples in the Compute Library's folder "examples" and the arm_compute_benchmark accept an argument to enable the OpenCL tuner and an argument to export/import the LWS values to/from a file:
Gian Marco Iodice201cea12018-07-30 17:21:41 +01001804
1805 #Enable CL tuner
 ./graph_mobilenet --enable-tuner --target=CL
1807 ./arm_compute_benchmark --enable-tuner
1808
1809 #Export/Import to/from a file
1810 ./graph_mobilenet --enable-tuner --target=CL --tuner-file=acl_tuner.csv
1811 ./arm_compute_benchmark --enable-tuner --tuner-file=acl_tuner.csv
1812
If you are importing the CLTuner's results from a file, the new tuned LWS values will be appended to it.
1814
Whether you are benchmarking the graph examples or the test cases in the arm_compute_benchmark, remember to:
1816
1817 -# Disable the power management
1818 -# Keep the GPU frequency constant
 -# Run the network multiple times (e.g. 10).
1820
1821If you are not using the graph API or the benchmark infrastructure you will need to manually pass a CLTuner object to CLScheduler before configuring any function.
1822
1823@code{.cpp}
1824CLTuner tuner;
1825
1826// Setup Scheduler
1827CLScheduler::get().default_init(&tuner);
1828@endcode
1829
1830After the first run, the CLTuner's results can be exported to a file using the method "save_to_file()".
1831- tuner.save_to_file("results.csv");
1832
This file can also be imported using the method "load_from_file()", as shown in the sketch below.
1834- tuner.load_from_file("results.csv");
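
Putting the two methods together, a typical flow might look like the sketch below (the file name is only an example, and the file must already exist before it can be loaded):

@code{.cpp}
CLTuner tuner;

// Re-use the LWS values tuned in a previous run
tuner.load_from_file("results.csv");

// Setup Scheduler with the tuner
CLScheduler::get().default_init(&tuner);

// ... configure and run the network ...

// Persist any newly tuned LWS values for the next run
tuner.save_to_file("results.csv");
@endcode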
Anthony Barbier6ff3b192017-09-04 18:44:23 +01001835*/
Anthony Barbierd51ea0a2018-08-07 17:48:03 +01001836} // namespace arm_compute