///
/// Copyright (c) 2021-2022 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
24namespace arm_compute
25{
26/**
27@page operators_list Supported Operators
28
29@tableofcontents
30
31@section S9_1_operators_list Supported Operators
32
33Compute Library supports operators that are listed in below table.
34
35Compute Library supports a wide list of data-types, information can been directly found in the documentation of each kernel/function.
36The main data-types that the Machine Learning functions support are the following:
    <ul>
    <li>BFLOAT16: 16-bit non-standard brain floating point
    <li>QASYMM8: 8-bit unsigned asymmetric quantized
    <li>QASYMM8_SIGNED: 8-bit signed asymmetric quantized
    <li>QSYMM8_PER_CHANNEL: 8-bit signed symmetric quantized (used for the weights)
    <li>QSYMM8: 8-bit signed symmetric quantized
    <li>QSYMM16: 16-bit signed symmetric quantized
    <li>F32: 32-bit single precision floating point
    <li>F16: 16-bit half precision floating point
    <li>S32: 32-bit signed integer
    <li>U8: 8-bit unsigned char
    <li>All: Agnostic to any specific data type
    </ul>
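
As a minimal sketch of how a data type is selected when describing a tensor (this assumes the arm_compute::TensorInfo and QuantizationInfo API; the shape and quantization parameters below are illustrative only):

@code{.cpp}
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"

using namespace arm_compute;

// A 3-channel float tensor. Dimensions are listed fastest-changing first.
TensorInfo fp_info(TensorShape(224U, 224U, 3U), 1, DataType::F32);

// An 8-bit asymmetric quantized tensor: QuantizationInfo(scale, offset)
// maps the stored U8 values back to real numbers.
TensorInfo q_info(TensorShape(224U, 224U, 3U), 1, DataType::QASYMM8, QuantizationInfo(0.05f, 128));
@endcode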

Compute Library supports the following data layouts (the fastest-changing dimension is rightmost); a short sketch of selecting a layout follows the list:
    <ul>
    <li>NHWC: The native layout of Compute Library that delivers the best performance, where channels are in the fastest changing dimension
    <li>NCHW: Legacy layout where width is in the fastest changing dimension
    <li>NDHWC: New data layout for supporting 3D operators
    <li>All: Agnostic to any specific data layout
    </ul>
where N = batches, C = channels, H = height, W = width and D = depth.
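
A minimal sketch of attaching one of these layouts to a tensor descriptor (this assumes TensorInfo::set_data_layout; the shape is illustrative and is given fastest-changing dimension first, i.e. C, W, H, N for NHWC):

@code{.cpp}
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"

using namespace arm_compute;

// Describe a 4D activation tensor (C=64, W=112, H=112, N=1) and mark it as
// NHWC, the layout that generally performs best in Compute Library.
TensorInfo info(TensorShape(64U, 112U, 112U, 1U), 1, DataType::F16);
info.set_data_layout(DataLayout::NHWC);
@endcode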
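Every function in the table follows the same configure-then-run pattern. The sketch below uses the NEActivationLayer entry as an example; the shape and the choice of activation are illustrative only:

@code{.cpp}
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/runtime/NEON/functions/NEActivationLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

Tensor src, dst;
src.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));
dst.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));

// Configure once with the desired activation, then allocate backing memory.
NEActivationLayer act;
act.configure(&src, &dst, ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU));
src.allocator()->allocate();
dst.allocator()->allocate();

// ... fill src with input data ...
act.run(); // executes the operator on the CPU backend
@endcode
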
<table>
<caption id="multi_row"></caption>
<tr>
    <th>Function
    <th>Description
    <th>Equivalent Android NNAPI Op
    <th>Backends
    <th>Data Layouts
    <th>Data Types
69<tr>
70 <td rowspan="2">ActivationLayer
71 <td rowspan="2" style="width:200px;"> Function to simulate an activation layer with the specified activation function.
72 <td rowspan="2">
73 <ul>
74 <li>ANEURALNETWORKS_ELU
75 <li>ANEURALNETWORKS_HARD_SWISH
76 <li>ANEURALNETWORKS_LOGISTIC
77 <li>ANEURALNETWORKS_RELU
78 <li>ANEURALNETWORKS_RELU1
79 <li>ANEURALNETWORKS_RELU6
80 <li>ANEURALNETWORKS_TANH
81 </ul>
82 <td>NEActivationLayer
83 <td>
84 <ul>
85 <li>All
86 </ul>
87 <td>
88 <table>
89 <tr><th>src<th>dst
90 <tr><td>QASYMM8<td>QASYMM8
91 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
92 <tr><td>QSYMM16<td>QSYMM16
93 <tr><td>F16<td>F16
94 <tr><td>F32<td>F32
95 </table>
96<tr>
97 <td>CLActivationLayer
98 <td>
99 <ul>
100 <li>All
101 </ul>
102 <td>
103 <table>
104 <tr><th>src<th>dst
105 <tr><td>QASYMM8<td>QASYMM8
106 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
107 <tr><td>QSYMM16<td>QSYMM16
108 <tr><td>F16<td>F16
109 <tr><td>F32<td>F32
110 </table>
111<tr>
Teresa Charlin62687422021-04-28 10:58:49 +0100112 <td rowspan="2">ArgMinMaxLayer
113 <td rowspan="2" style="width:200px;"> Function to calculate the index of the minimum or maximum values in a tensor based on an axis.
114 <td rowspan="2">
115 <ul>
116 <li>ANEURALNETWORKS_ARGMAX
117 <li>ANEURALNETWORKS_ARGMIN
118 </ul>
119 <td>NEArgMinMaxLayer
120 <td>
121 <ul>
122 <li>All
123 </ul>
124 <td>
125 <table>
126 <tr><th>src<th>dst
127 <tr><td>QASYMM8<td>U32, S32
128 <tr><td>QASYMM8_SIGNED<td>U32, S32
Pablo Marquez Tello29e27b02023-08-03 14:47:31 +0100129 <tr><td>S32<td>U32, S32, S64
Teresa Charlin62687422021-04-28 10:58:49 +0100130 <tr><td>F16<td>U32, S32
131 <tr><td>F32<td>U32, S32
132 </table>
133<tr>
134 <td>CLArgMinMaxLayer
135 <td>
136 <ul>
137 <li>All
138 </ul>
139 <td>
140 <table>
141 <tr><th>src<th>dst
142 <tr><td>QASYMM8<td>U32, S32
143 <tr><td>QASYMM8_SIGNED<td>U32, S32
144 <tr><td>S32<td>U32, S32
145 <tr><td>F16<td>U32, S32
146 <tr><td>F32<td>U32, S32
147 </table>
148<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +0100149 <td rowspan="1">ArithmeticAddition
150 <td rowspan="1" style="width:200px;"> Function to add 2 tensors.
151 <td rowspan="1">
152 <ul>
153 <li>ANEURALNETWORKS_ADD
154 </ul>
155 <td>NEArithmeticAddition
156 <td>
157 <ul>
158 <li>All
159 </ul>
160 <td>
161 <table>
162 <tr><th>src0<th>src1<th>dst
163 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
164 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
165 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
166 <tr><td>QSYMM16<td>QSYMM16<td>S32
167 <tr><td>U8<td>U8<td>U8
Sheri Zhang6124ce62021-05-04 14:03:13 +0100168 <tr><td>S16<td>S16<td>S16
169 <tr><td>S32<td>S32<td>S32
170 <tr><td>F16<td>F16<td>F16
171 <tr><td>F32<td>F32<td>F32
172 </table>
173<tr>
174 <td rowspan="1">ArithmeticSubtraction
    <td rowspan="1" style="width:200px;"> Function to subtract 2 tensors.
176 <td rowspan="1">
177 <ul>
178 <li>ANEURALNETWORKS_SUB
179 </ul>
180 <td>NEArithmeticSubtraction
181 <td>
182 <ul>
183 <li>All
184 </ul>
185 <td>
186 <table>
187 <tr><th>src0<th>src1<th>dst
188 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
189 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
190 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
191 <tr><td>QSYMM16<td>QSYMM16<td>S32
192 <tr><td>U8<td>U8<td>U8
Sheri Zhang6124ce62021-05-04 14:03:13 +0100193 <tr><td>S16<td>S16<td>S16
194 <tr><td>S32<td>S32<td>S32
195 <tr><td>F16<td>F16<td>F16
196 <tr><td>F32<td>F32<td>F32
197 </table>
198<tr>
Teresa Charlin62687422021-04-28 10:58:49 +0100199 <td rowspan="2">BatchNormalizationLayer
200 <td rowspan="2" style="width:200px;"> Function to perform batch normalization.
201 <td rowspan="2">
202 <ul>
203 <li>n/a
204 </ul>
205 <td>NEBatchNormalizationLayer
206 <td>
207 <ul>
208 <li>NHWC
209 <li>NCHW
210 </ul>
211 <td>
212 <table>
213 <tr><th>src<th>dst
214 <tr><td>F32<td>F32
215 <tr><td>F16<td>F16
216 </table>
217<tr>
218 <td>CLBatchNormalizationLayer
219 <td>
220 <ul>
221 <li>NHWC
222 <li>NCHW
223 </ul>
224 <td>
225 <table>
226 <tr><th>src<th>dst
227 <tr><td>F32<td>F32
228 <tr><td>F16<td>F16
229 </table>
230<tr>
231 <td rowspan="2">BatchToSpaceLayer
232 <td rowspan="2" style="width:200px;"> Batch to space transformation.
233 <td rowspan="2">
234 <ul>
235 <li>ANEURALNETWORKS_BATCH_TO_SPACE_ND
236 </ul>
237 <td>NEBatchToSpaceLayer
238 <td>
239 <ul>
240 <li>NHWC
241 <li>NCHW
242 </ul>
243 <td>
244 <table>
245 <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>S32<td>All
247 </table>
248<tr>
249 <td>CLBatchToSpaceLayer
250 <td>
251 <ul>
252 <li>NHWC
253 <li>NCHW
254 </ul>
255 <td>
256 <table>
257 <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>S32<td>All
259 </table>
260<tr>
261 <td rowspan="2">BitwiseAnd
Jakub Sujakee301b32021-06-04 09:46:08 +0100262 <td rowspan="2" style="width:200px;"> Function to perform bitwise AND between 2 tensors.
Teresa Charlin62687422021-04-28 10:58:49 +0100263 <td rowspan="2">
264 <ul>
265 <li>ANEURALNETWORKS_LOGICAL_AND
266 </ul>
267 <td>NEBitwiseAnd
268 <td>
269 <ul>
270 <li>All
271 </ul>
272 <td>
273 <table>
274 <tr><th>src<th>dst
275 <tr><td>U8<td>U8
276 </table>
277<tr>
278 <td>CLBitwiseAnd
279 <td>
280 <ul>
281 <li>All
282 </ul>
283 <td>
284 <table>
285 <tr><th>src<th>dst
286 <tr><td>U8<td>U8
287 </table>
288<tr>
289 <td rowspan="2">BitwiseNot
Jakub Sujakee301b32021-06-04 09:46:08 +0100290 <td rowspan="2" style="width:200px;"> Function to perform bitwise NOT.
Teresa Charlin62687422021-04-28 10:58:49 +0100291 <td rowspan="2">
292 <ul>
293 <li>ANEURALNETWORKS_LOGICAL_NOT
294 </ul>
295 <td>NEBitwiseNot
296 <td>
297 <ul>
298 <li>All
299 </ul>
300 <td>
301 <table>
302 <tr><th>src<th>dst
303 <tr><td>U8<td>U8
304 </table>
305<tr>
306 <td>CLBitwiseNot
307 <td>
308 <ul>
309 <li>All
310 </ul>
311 <td>
312 <table>
313 <tr><th>src<th>dst
314 <tr><td>U8<td>U8
315 </table>
316<tr>
317 <td rowspan="2">BitwiseOr
Jakub Sujakee301b32021-06-04 09:46:08 +0100318 <td rowspan="2" style="width:200px;"> Function to perform bitwise OR between 2 tensors.
Teresa Charlin62687422021-04-28 10:58:49 +0100319 <td rowspan="2">
320 <ul>
321 <li>ANEURALNETWORKS_LOGICAL_OR
322 </ul>
323 <td>NEBitwiseOr
324 <td>
325 <ul>
326 <li>All
327 </ul>
328 <td>
329 <table>
330 <tr><th>src<th>dst
331 <tr><td>U8<td>U8
332 </table>
333<tr>
334 <td>CLBitwiseOr
335 <td>
336 <ul>
337 <li>All
338 </ul>
339 <td>
340 <table>
341 <tr><th>src<th>dst
342 <tr><td>U8<td>U8
343 </table>
344<tr>
345 <td rowspan="2">BitwiseXor
Jakub Sujakee301b32021-06-04 09:46:08 +0100346 <td rowspan="2" style="width:200px;"> Function to perform bitwise XOR between 2 tensors.
Teresa Charlin62687422021-04-28 10:58:49 +0100347 <td rowspan="2">
348 <ul>
349 <li>n/a
350 </ul>
351 <td>NEBitwiseXor
352 <td>
353 <ul>
354 <li>All
355 </ul>
356 <td>
357 <table>
358 <tr><th>src<th>dst
359 <tr><td>U8<td>U8
360 </table>
361<tr>
362 <td>CLBitwiseXor
363 <td>
364 <ul>
365 <li>All
366 </ul>
367 <td>
368 <table>
369 <tr><th>src<th>dst
370 <tr><td>U8<td>U8
371 </table>
372<tr>
373 <td rowspan="2">BoundingBoxTransform
    <td rowspan="2" style="width:200px;"> Transform proposal bounding boxes to target bounding boxes using bounding box deltas.
375 <td rowspan="2">
376 <ul>
377 <li>n/a
378 </ul>
379 <td>NEBoundingBoxTransform
380 <td>
381 <ul>
382 <li>NHWC
383 <li>NCHW
384 </ul>
385 <td>
386 <table>
387 <tr><th>src0<th>src1<th>dst
388 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
389 <tr><td>F16<td>F16<td>F16
390 <tr><td>F32<td>F32<td>F32
391 </table>
392<tr>
393 <td>CLBoundingBoxTransform
394 <td>
395 <ul>
396 <li>NHWC
397 <li>NCHW
398 </ul>
399 <td>
400 <table>
401 <tr><th>src0<th>src1<th>dst
402 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
403 <tr><td>F16<td>F16<td>F16
404 <tr><td>F32<td>F32<td>F32
405 </table>
406<tr>
407 <td rowspan="2">Cast
408 <td rowspan="2" style="width:200px;"> Function to cast a tensor.
409 <td rowspan="2">
410 <ul>
411 <li>ANEURALNETWORKS_CAST
412 </ul>
413 <td>NECast
414 <td>
415 <ul>
416 <li>All
417 </ul>
418 <td>
419 <table>
420 <tr><th>src<th>dst
421 <tr><td>QASYMM8_SIGNED<td>S16, S32, F32, F16
422 <tr><td>QASYMM8<td>U16, S16, S32, F32, F16
423 <tr><td>U8<td>U16, S16, S32, F32, F16
424 <tr><td>U16<td>U8, U32
425 <tr><td>S16<td>QASYMM8_SIGNED, U8, S32
426 <tr><td>F16<td>QASYMM8_SIGNED, QASYMM8, F32, S32, U8
427 <tr><td>S32<td>QASYMM8_SIGNED, QASYMM8, F16, F32, U8
428 <tr><td>F32<td>QASYMM8_SIGNED, QASYMM8, BFLOAT16, F16, S32, U8
429 </table>
430<tr>
431 <td>CLCast
432 <td>
433 <ul>
434 <li>All
435 </ul>
436 <td>
437 <table>
438 <tr><th>src<th>dst
439 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
Pablo Marquez Tello205ba242023-07-12 14:29:58 +0100440 <tr><td>S8<td>U8, U16, S16, U32, S32, F16, F32
Teresa Charlin62687422021-04-28 10:58:49 +0100441 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
442 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
443 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
444 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
Pablo Marquez Tello205ba242023-07-12 14:29:58 +0100445 <tr><td>U64<td>U8, S8, U16, S16, U32, S32, F16, F32
446 <tr><td>S64<td>U8, S8, U16, S16, U32, S32, F16, F32
447 <tr><td>F16<td>U8, S8, U16, S16, S32, U32, F32
448 <tr><td>F32<td>U8, S8, U16, S16, S32, U32, F16
Teresa Charlin62687422021-04-28 10:58:49 +0100449 </table>
450<tr>
451 <td rowspan="2">ChannelShuffleLayer
452 <td rowspan="2" style="width:200px;"> Function to shuffle the channels of the input tensor.
453 <td rowspan="2">
454 <ul>
455 <li>ANEURALNETWORKS_CHANNEL_SHUFFLE
456 </ul>
457 <td>NEChannelShuffleLayer
458 <td>
459 <ul>
460 <li>NCHW
Michele Di Giorgiob8025b32021-09-03 10:29:49 +0100461 <li>NHWC
Teresa Charlin62687422021-04-28 10:58:49 +0100462 </ul>
463 <td>
464 <table>
465 <tr><th>src<th>dst
466 <tr><td>All<td>All
467 </table>
468<tr>
469 <td>CLChannelShuffleLayer
470 <td>
471 <ul>
472 <li>NCHW
Michele Di Giorgiob8025b32021-09-03 10:29:49 +0100473 <li>NHWC
Teresa Charlin62687422021-04-28 10:58:49 +0100474 </ul>
475 <td>
476 <table>
477 <tr><th>src<th>dst
478 <tr><td>All<td>All
479 </table>
480<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +0100481 <td rowspan="1">Comparison
482 <td rowspan="1" style="width:200px;"> Function to compare 2 tensors.
483 <td rowspan="1">
484 <ul>
485 <li>ANEURALNETWORKS_EQUAL
486 <li>ANEURALNETWORKS_GREATER
487 <li>ANEURALNETWORKS_GREATER_EQUAL
488 <li>ANEURALNETWORKS_LESS
489 <li>ANEURALNETWORKS_LESS_EQUAL
490 <li>ANEURALNETWORKS_NOT_EQUAL
491 </ul>
492 <td>CLComparison
493 <td>
494 <ul>
495 <li>All
496 </ul>
497 <td>
498 <table>
499 <tr><th>src0<th>src1<th>dst
500 <tr><td>All<td>All<td>U8
501 </table>
502<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100503 <td rowspan="2">ConcatenateLayer
504 <td rowspan="2" style="width:200px;"> Function to concatenate tensors along a given axis.
505 <td rowspan="2">
506 <ul>
507 <li>ANEURALNETWORKS_CONCATENATION
508 </ul>
509 <td>NEConcatenateLayer
510 <td>
511 <ul>
512 <li>All
513 </ul>
514 <td>
515 <table>
516 <tr><th>src<th>dst
517 <tr><td>QASYMM8<td>QASYMM8
518 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
519 <tr><td>F16<td>F16
520 <tr><td>F32<td>F32
521 </table>
522<tr>
523 <td>CLConcatenateLayer
524 <td>
525 <ul>
526 <li>All
527 </ul>
528 <td>
529 <table>
530 <tr><th>src<th>dst
531 <tr><td>QASYMM8<td>QASYMM8
532 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
533 <tr><td>F16<td>F16
534 <tr><td>F32<td>F32
535 </table>
536<tr>
537 <td rowspan="2">ConvertFullyConnectedWeights
Jakub Sujakee301b32021-06-04 09:46:08 +0100538 <td rowspan="2" style="width:200px;"> Function to transpose the weights for the fully connected layer.
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100539 <td rowspan="2">
540 <ul>
Teresa Charlin62687422021-04-28 10:58:49 +0100541 <li>n/a
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100542 </ul>
543 <td>NEConvertFullyConnectedWeights
544 <td>
545 <ul>
546 <li>NHWC
547 <li>NCHW
548 </ul>
549 <td>
550 <table>
551 <tr><th>src<th>dst
552 <tr><td>All<td>All
553 </table>
554<tr>
555 <td>CLConvertFullyConnectedWeights
556 <td>
557 <ul>
558 <li>NHWC
559 <li>NCHW
560 </ul>
561 <td>
562 <table>
563 <tr><th>src<th>dst
564 <tr><td>All<td>All
565 </table>
566<tr>
Teresa Charlin62687422021-04-28 10:58:49 +0100567 <td rowspan="2">ConvolutionLayer
568 <td rowspan="2" style="width:200px;"> Function to compute a convolution layer.
569 <td rowspan="2">
570 <ul>
571 <li>ANEURALNETWORKS_CONV_2D
572 </ul>
573 <td>NEConvolutionLayer
574 <td>
575 <ul>
576 <li>NHWC
577 <li>NCHW
578 </ul>
579 <td>
580 <table>
581 <tr><th>src0<th>src1<th>src2<th>dst
582 <tr><td>F16<td>F16<td>F16<td>F16
583 <tr><td>F32<td>F32<td>F32<td>F32
584 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
585 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
586 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
587 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
588 </table>
589<tr>
590 <td>CLConvolutionLayer
591 <td>
592 <ul>
593 <li>NHWC
594 <li>NCHW
595 </ul>
596 <td>
597 <table>
598 <tr><th>src0<th>src1<th>src2<th>dst
599 <tr><td>F16<td>F16<td>F16<td>F16
600 <tr><td>F32<td>F32<td>F32<td>F32
601 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
602 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
603 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
604 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
605 </table>
606<tr>
Sheri Zhang6d9c9822021-09-24 16:02:57 +0100607 <td rowspan="2">Conv3D
    <td rowspan="2" style="width:200px;"> Function to compute a 3D convolution layer.
609 <td rowspan="2">
610 <ul>
611 <li>ANEURALNETWORKS_CONV_3D
612 </ul>
613 <td>NEConv3D
614 <td>
615 <ul>
616 <li>NDHWC
617 </ul>
618 <td>
619 <table>
620 <tr><th>src0<th>src1<th>src2<th>dst
621 <tr><td>F16<td>F16<td>F16<td>F16
622 <tr><td>F32<td>F32<td>F32<td>F32
Freddie Liardetf727ef42021-10-18 13:28:57 +0100623 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
624 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
Sheri Zhang6d9c9822021-09-24 16:02:57 +0100625 </table>
626<tr>
627 <td>CLConv3D
628 <td>
629 <ul>
630 <li>NDHWC
631 </ul>
632 <td>
633 <table>
634 <tr><th>src0<th>src1<th>src2<th>dst
635 <tr><td>F16<td>F16<td>F16<td>F16
636 <tr><td>F32<td>F32<td>F32<td>F32
Giorgio Arena51847d52021-10-19 15:45:57 +0100637 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
638 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
Sheri Zhang6d9c9822021-09-24 16:02:57 +0100639 </table>
640<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100641 <td rowspan="2">Copy
642 <td rowspan="2" style="width:200px;"> Function to copy a tensor.
643 <td rowspan="2">
644 <ul>
Teresa Charlin62687422021-04-28 10:58:49 +0100645 <li>n/a
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100646 </ul>
647 <td>NECopy
648 <td>
649 <ul>
650 <li>All
651 </ul>
652 <td>
653 <table>
654 <tr><th>src<th>dst
655 <tr><td>All<td>All
656 </table>
657<tr>
658 <td>CLCopy
659 <td>
660 <ul>
661 <li>All
662 </ul>
663 <td>
664 <table>
665 <tr><th>src<th>dst
666 <tr><td>All<td>All
667 </table>
668<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +0100669 <td rowspan="1">Crop
    <td rowspan="1" style="width:200px;"> Performs a copy of the input tensor to the output tensor.
671 <td rowspan="1">
672 <ul>
673 <li>n/a
674 </ul>
675 <td>CLCrop
676 <td>
677 <ul>
678 <li>NHWC
679 </ul>
680 <td>
681 <table>
682 <tr><th>src<th>dst
683 <tr><td>All<td>F32
684 </table>
685<tr>
Teresa Charlin62687422021-04-28 10:58:49 +0100686 <td rowspan="2">CropResize
687 <td rowspan="2" style="width:200px;"> Function to perform cropping and resizing.
688 <td rowspan="2">
689 <ul>
690 <li>n/a
691 </ul>
692 <td>NECropResize
693 <td>
694 <ul>
695 <li>NHWC
696 </ul>
697 <td>
698 <table>
699 <tr><th>src0<th>src1<th>src2<th>dst
700 <tr><td>All<td>F32<td>F32<td>F32
701 </table>
702<tr>
703 <td>CLCropResize
704 <td>
705 <ul>
706 <li>NHWC
707 </ul>
708 <td>
709 <table>
710 <tr><th>src0<th>src1<th>src2<th>dst
711 <tr><td>All<td>F32<td>F32<td>F32
712 </table>
713<tr>
714 <td rowspan="2">DeconvolutionLayer
Jakub Sujakee301b32021-06-04 09:46:08 +0100715 <td rowspan="2" style="width:200px;"> Function to compute a deconvolution or transpose convolution.
Teresa Charlin62687422021-04-28 10:58:49 +0100716 <td rowspan="2">
717 <ul>
718 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
719 </ul>
720 <td>NEDeconvolutionLayer
721 <td>
722 <ul>
723 <li>NHWC
724 <li>NCHW
725 </ul>
726 <td>
727 <table>
728 <tr><th>src0<th>src1<th>src2<th>dst
729 <tr><td>F16<td>F16<td>F16<td>F16
730 <tr><td>F32<td>F32<td>F32<td>F32
731 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
732 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
733 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
734 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
735 </table>
736<tr>
737 <td>CLDeconvolutionLayer
738 <td>
739 <ul>
740 <li>NHWC
741 <li>NCHW
742 </ul>
743 <td>
744 <table>
745 <tr><th>src0<th>src1<th>src2<th>dst
746 <tr><td>F16<td>F16<td>F16<td>F16
747 <tr><td>F32<td>F32<td>F32<td>F32
748 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
749 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
750 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
751 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
752 </table>
753<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +0100754 <td rowspan="1">DeconvolutionLayerUpsample
755 <td rowspan="1" style="width:200px;"> Function to execute deconvolution upsample on OpenCL.
756 <td rowspan="1">
757 <ul>
758 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
759 </ul>
760 <td>CLDeconvolutionLayerUpsample
761 <td>
762 <ul>
763 <li>NHWC
764 <li>NCHW
765 </ul>
766 <td>
767 <table>
768 <tr><th>src<th>dst
769 <tr><td>All<td>All
770 </table>
771<tr>
Teresa Charlin62687422021-04-28 10:58:49 +0100772 <td rowspan="2">DepthConvertLayer
773 <td rowspan="2" style="width:200px;"> Performs a down-scaling depth conversion.
774 <td rowspan="2">
775 <ul>
776 <li>n/a
777 </ul>
778 <td>NEDepthConvertLayer
779 <td>
780 <ul>
781 <li>All
782 </ul>
783 <td>
784 <table>
785 <tr><th>src<th>dst
786 <tr><td>QASYMM8<td>F16, F32
787 <tr><td>U8<td>U16, S16, S32
788 <tr><td>U16<td>U8, U32
789 <tr><td>S16<td>U8, S32
790 <tr><td>BFLOAT16<td>F32
791 <tr><td>F16<td>QASYMM8, F32
792 <tr><td>F32<td>QASYMM8, F16, BFLOAT16
793 </table>
794<tr>
795 <td>CLDepthConvertLayer
796 <td>
797 <ul>
798 <li>All
799 </ul>
800 <td>
801 <table>
802 <tr><th>src<th>dst
803 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
804 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
805 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
806 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
807 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
808 <tr><td>F16<td>U8, S8, U16, S16, U32, F32
809 <tr><td>F32<td>U8, S8, U16, S16, U32, F16
810 </table>
811<tr>
812 <td rowspan="2">DepthToSpaceLayer
813 <td rowspan="2" style="width:200px;"> Depth to Space transformation.
814 <td rowspan="2">
815 <ul>
816 <li>ANEURALNETWORKS_DEPTH_TO_SPACE
817 </ul>
818 <td>NEDepthToSpaceLayer
819 <td>
820 <ul>
821 <li>NHWC
822 <li>NCHW
823 </ul>
824 <td>
825 <table>
826 <tr><th>src<th>dst
827 <tr><td>All<td>All
828 </table>
829<tr>
830 <td>CLDepthToSpaceLayer
831 <td>
832 <ul>
833 <li>NHWC
834 <li>NCHW
835 </ul>
836 <td>
837 <table>
838 <tr><th>src<th>dst
839 <tr><td>All<td>All
840 </table>
841<tr>
842 <td rowspan="2">DepthwiseConvolutionLayer
843 <td rowspan="2" style="width:200px;"> Function to perform depthwise separable convolution.
844 <td rowspan="2">
845 <ul>
846 <li>ANEURALNETWORKS_DEPTHWISE_CONV_2D
847 </ul>
848 <td>NEDepthwiseConvolutionLayer
849 <td>
850 <ul>
851 <li>NHWC
852 <li>NCHW
853 </ul>
854 <td>
855 <table>
856 <tr><th>src0<th>src1<th>src2<th>dst
857 <tr><td>F16<td>F16<td>F16<td>F16
858 <tr><td>F32<td>F32<td>F32<td>F32
859 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
860 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
861 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
862 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
863 </table>
864<tr>
865 <td>CLDepthwiseConvolutionLayer
866 <td>
867 <ul>
868 <li>NHWC
869 <li>NCHW
870 </ul>
871 <td>
872 <table>
873 <tr><th>src0<th>src1<th>src2<th>dst
874 <tr><td>F16<td>F16<td>F16<td>F16
875 <tr><td>F32<td>F32<td>F32<td>F32
876 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
877 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
878 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
879 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
880 </table>
881<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100882 <td rowspan="2">DequantizationLayer
Teresa Charlin62687422021-04-28 10:58:49 +0100883 <td rowspan="2" style="width:200px;"> Function to dequantize the values in a tensor.
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100884 <td rowspan="2">
885 <ul>
886 <li>ANEURALNETWORKS_DEQUANTIZE
887 </ul>
888 <td>NEDequantizationLayer
889 <td>
890 <ul>
891 <li>All
892 </ul>
893 <td>
894 <table>
895 <tr><th>src<th>dst
Teresa Charlin62687422021-04-28 10:58:49 +0100896 <tr><td>QASYMM8<td>F16, F32
897 <tr><td>QASYMM8_SIGNED<td>F16, F32
898 <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
899 <tr><td>QSYMM8<td>F16, F32
900 <tr><td>QSYMM16<td>F16, F32
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100901 </table>
902<tr>
903 <td>CLDequantizationLayer
904 <td>
905 <ul>
906 <li>All
907 </ul>
908 <td>
909 <table>
910 <tr><th>src<th>dst
Teresa Charlin62687422021-04-28 10:58:49 +0100911 <tr><td>QASYMM8<td>F16, F32
912 <tr><td>QASYMM8_SIGNED<td>F16, F32
913 <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
914 <tr><td>QSYMM8<td>F16, F32
915 <tr><td>QSYMM16<td>F16, F32
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100916 </table>
917<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +0100918 <td rowspan="1">DetectionPostProcessLayer
    <td rowspan="1" style="width:200px;"> Function to generate the detection output based on center-size encoded boxes, class prediction and anchors, by applying non-maximum suppression (NMS).
920 <td rowspan="1">
921 <ul>
922 <li>ANEURALNETWORKS_DETECTION_POSTPROCESSING
923 </ul>
924 <td>NEDetectionPostProcessLayer
925 <td>
926 <ul>
927 <li>All
928 </ul>
929 <td>
930 <table>
931 <tr><th>src0 - src2<th>dst0 - dst3
932 <tr><td>QASYMM8<td>F32
933 <tr><td>QASYMM8_SIGNED<td>F32
934 <tr><td>F32<td>F32
935 </table>
936<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100937 <td rowspan="2">DirectConvolutionLayer
Teresa Charlin62687422021-04-28 10:58:49 +0100938 <td rowspan="2" style="width:200px;"> Function to compute direct convolution.
Sheri Zhanga47dcc22021-04-22 14:41:12 +0100939 <td rowspan="2">
940 <ul>
941 <li>ANEURALNETWORKS_CONV_2D
942 </ul>
943 <td>NEDirectConvolutionLayer
944 <td>
945 <ul>
946 <li>NHWC
947 <li>NCHW
948 </ul>
949 <td>
950 <table>
951 <tr><th>src0<th>src1<th>src2<th>dst
952 <tr><td>F16<td>F16<td>F16<td>F16
953 <tr><td>F32<td>F32<td>F32<td>F32
954 </table>
955<tr>
956 <td>CLDirectConvolutionLayer
957 <td>
958 <ul>
959 <li>NHWC
960 <li>NCHW
961 </ul>
962 <td>
963 <table>
964 <tr><th>src0<th>src1<th>src2<th>dst
965 <tr><td>F16<td>F16<td>F16<td>F16
966 <tr><td>F32<td>F32<td>F32<td>F32
967 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
968 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
969 </table>
970<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +0100971 <td rowspan="1">DirectDeconvolutionLayer
972 <td rowspan="1" style="width:200px;"> Function to run the deconvolution layer.
973 <td rowspan="1">
974 <ul>
975 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
976 </ul>
977 <td>CLDirectDeconvolutionLayer
978 <td>
979 <ul>
980 <li>NHWC
981 <li>NCHW
982 </ul>
983 <td>
984 <table>
985 <tr><th>src0<th>src1<th>src2<th>dst
986 <tr><td>F16<td>F16<td>F16<td>F16
987 <tr><td>F32<td>F32<td>F32<td>F32
988 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
989 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
990 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
991 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
992 </table>
993<tr>
Jakub Sujakee301b32021-06-04 09:46:08 +0100994 <td rowspan="13">ElementwiseOperations
    <td rowspan="13" style="width:200px;"> Function to perform on CPU: - Div - Max - Min - Pow - SquaredDiff - Comparisons (Equal, Greater, GreaterEqual, Less, LessEqual, NotEqual). Function to perform on CL: - Add - Sub - Div - Max - Min - Pow - SquaredDiff
996 <td rowspan="13">
997 <ul>
998 <li>ANEURALNETWORKS_MAXIMUM
999 <li>ANEURALNETWORKS_MINIMUM
1000 <li>ANEURALNETWORKS_POW
1001 <li>ANEURALNETWORKS_DIV
1002 <li>ANEURALNETWORKS_ADD
1003 <li>ANEURALNETWORKS_SUB
1004 <li>ANEURALNETWORKS_EQUAL
1005 <li>ANEURALNETWORKS_GREATER
1006 <li>ANEURALNETWORKS_GREATER_EQUAL
1007 <li>ANEURALNETWORKS_LESS
1008 <li>ANEURALNETWORKS_LESS_EQUAL
1009 <li>ANEURALNETWORKS_NOT_EQUAL
1010 </ul>
1011 <td>NEElementwiseMax
1012 <td>
1013 <ul>
1014 <li>All
1015 </ul>
1016 <td>
1017 <table>
1018 <tr><th>src0<th>src1<th>dst
1019 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1020 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1021 <tr><td>S32<td>S32<td>S32
1022 <tr><td>S16<td>S16<td>S16
1023 <tr><td>F16<td>F16<td>F16
1024 <tr><td>F32<td>F32<td>F32
1025 </table>
1026<tr>
1027 <td>NEElementwiseMin
1028 <td>
1029 <ul>
1030 <li>All
1031 </ul>
1032 <td>
1033 <table>
1034 <tr><th>src0<th>src1<th>dst
1035 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1036 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1037 <tr><td>S32<td>S32<td>S32
1038 <tr><td>S16<td>S16<td>S16
1039 <tr><td>F16<td>F16<td>F16
1040 <tr><td>F32<td>F32<td>F32
1041 </table>
1042<tr>
1043 <td>NEElementwiseSquaredDiff
1044 <td>
1045 <ul>
1046 <li>All
1047 </ul>
1048 <td>
1049 <table>
1050 <tr><th>src0<th>src1<th>dst
1051 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1052 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1053 <tr><td>S32<td>S32<td>S32
1054 <tr><td>S16<td>S16<td>S16
1055 <tr><td>F16<td>F16<td>F16
1056 <tr><td>F32<td>F32<td>F32
1057 </table>
1058<tr>
1059 <td>NEElementwiseDivision
1060 <td>
1061 <ul>
1062 <li>All
1063 </ul>
1064 <td>
1065 <table>
1066 <tr><th>src0<th>src1<th>dst
1067 <tr><td>F16<td>F16<td>F16
1068 <tr><td>F32<td>F32<td>F32
1069 </table>
1070<tr>
1071 <td>NEElementwisePower
1072 <td>
1073 <ul>
1074 <li>All
1075 </ul>
1076 <td>
1077 <table>
1078 <tr><th>src0<th>src1<th>dst
1079 <tr><td>F16<td>F16<td>F16
1080 <tr><td>F32<td>F32<td>F32
1081 </table>
1082<tr>
1083 <td>NEElementwiseComparison
1084 <td>
1085 <ul>
1086 <li>All
1087 </ul>
1088 <td>
1089 <table>
1090 <tr><th>src0<th>src1<th>dst
1091 <tr><td>QASYMM8<td>QASYMM8<td>U8
1092 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>U8
1093 <tr><td>S32<td>S32<td>U8
1094 <tr><td>U8<td>U8<td>U8
1095 <tr><td>S16<td>S16<td>U8
1096 <tr><td>F16<td>F16<td>U8
1097 <tr><td>F32<td>F32<td>U8
1098 </table>
1099<tr>
1100 <td>CLArithmeticAddition
1101 <td>
1102 <ul>
1103 <li>All
1104 </ul>
1105 <td>
1106 <table>
1107 <tr><th>src0<th>src1<th>dst
1108 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1109 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1110 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1111 <tr><td>U8<td>U8<td>U8
1112 <tr><td>U8<td>U8<td>S16
1113 <tr><td>U8<td>S16<td>S16
1114 <tr><td>S16<td>U8<td>S16
1115 <tr><td>S16<td>S16<td>S16
1116 <tr><td>S32<td>S32<td>S32
1117 <tr><td>F16<td>F16<td>F16
1118 <tr><td>F32<td>F32<td>F32
1119 </table>
1120<tr>
1121 <td>CLArithmeticSubtraction
1122 <td>
1123 <ul>
1124 <li>All
1125 </ul>
1126 <td>
1127 <table>
1128 <tr><th>src0<th>src1<th>dst
1129 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1130 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1131 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1132 <tr><td>U8<td>U8<td>U8
1133 <tr><td>U8<td>U8<td>S16
1134 <tr><td>U8<td>S16<td>S16
1135 <tr><td>S16<td>U8<td>S16
1136 <tr><td>S16<td>S16<td>S16
1137 <tr><td>S32<td>S32<td>S32
1138 <tr><td>F16<td>F16<td>F16
1139 <tr><td>F32<td>F32<td>F32
1140 </table>
1141<tr>
1142 <td>CLArithmeticDivision
1143 <td>
1144 <ul>
1145 <li>All
1146 </ul>
1147 <td>
1148 <table>
1149 <tr><th>src0<th>src1<th>dst
1150 <tr><td>F16<td>F16<td>F16
1151 <tr><td>F32<td>F32<td>F32
1152 </table>
1153<tr>
1154 <td>CLElementwiseMax
1155 <td>
1156 <ul>
1157 <li>All
1158 </ul>
1159 <td>
1160 <table>
1161 <tr><th>src0<th>src1<th>dst
1162 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1163 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1164 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1165 <tr><td>U8<td>U8<td>U8
1166 <tr><td>S16<td>S16<td>S16
1167 <tr><td>S32<td>S32<td>S32
1168 <tr><td>U32<td>U32<td>U32
1169 <tr><td>F16<td>F16<td>F16
1170 <tr><td>F32<td>F32<td>F32
1171 </table>
1172<tr>
1173 <td>CLElementwiseMin
1174 <td>
1175 <ul>
1176 <li>All
1177 </ul>
1178 <td>
1179 <table>
1180 <tr><th>src0<th>src1<th>dst
1181 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1182 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1183 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1184 <tr><td>U8<td>U8<td>U8
1185 <tr><td>S16<td>S16<td>S16
1186 <tr><td>S32<td>S32<td>S32
1187 <tr><td>U32<td>U32<td>U32
1188 <tr><td>F16<td>F16<td>F16
1189 <tr><td>F32<td>F32<td>F32
1190 </table>
1191<tr>
1192 <td>CLElementwiseSquaredDiff
1193 <td>
1194 <ul>
1195 <li>All
1196 </ul>
1197 <td>
1198 <table>
1199 <tr><th>src0<th>src1<th>dst
1200 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1201 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1202 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1203 <tr><td>U8<td>U8<td>U8
1204 <tr><td>S16<td>S16<td>S16
1205 <tr><td>F16<td>F16<td>F16
1206 <tr><td>F32<td>F32<td>F32
1207 </table>
1208<tr>
1209 <td>CLElementwisePower
1210 <td>
1211 <ul>
1212 <li>All
1213 </ul>
1214 <td>
1215 <table>
1216 <tr><th>src0<th>src1<th>dst
1217 <tr><td>F16<td>F16<td>F16
1218 <tr><td>F32<td>F32<td>F32
1219 </table>
1220<tr>
1221 <td rowspan="8">ElementwiseUnaryLayer
1222 <td rowspan="8" style="width:200px;"> Function to perform: - Rsqrt - Exp - Neg - Log - Abs - Round - Sin
1223 <td rowspan="8">
1224 <ul>
1225 <li>ANEURALNETWORKS_ABS
1226 <li>ANEURALNETWORKS_EXP
1227 <li>ANEURALNETWORKS_LOG
1228 <li>ANEURALNETWORKS_NEG
1229 <li>ANEURALNETWORKS_RSQRT
1230 <li>ANEURALNETWORKS_SIN
1231 </ul>
1232 <td>NEElementwiseUnaryLayer
1233 <td>
1234 <ul>
1235 <li>All
1236 </ul>
1237 <td>
1238 <table>
1239 <tr><th>src<th>dst
1240 <tr><td>F16<td>F16
1241 <tr><td>F32<td>F32
1242 <tr><td>S32<td>S32
1243 </table>
1244<tr>
1245 <td>CLRsqrtLayer
1246 <td>
1247 <ul>
1248 <li>All
1249 </ul>
1250 <td>
1251 <table>
1252 <tr><th>src<th>dst
1253 <tr><td>F16<td>F16
1254 <tr><td>F32<td>F32
1255 </table>
1256<tr>
1257 <td>CLExpLayer
1258 <td>
1259 <ul>
1260 <li>All
1261 </ul>
1262 <td>
1263 <table>
1264 <tr><th>src<th>dst
1265 <tr><td>F16<td>F16
1266 <tr><td>F32<td>F32
1267 </table>
1268<tr>
1269 <td>CLNegLayer
1270 <td>
1271 <ul>
1272 <li>All
1273 </ul>
1274 <td>
1275 <table>
1276 <tr><th>src<th>dst
1277 <tr><td>F16<td>F16
1278 <tr><td>F32<td>F32
Jakub Sujakee301b32021-06-04 09:46:08 +01001279 <tr><td>S32<td>S32
Sheri Zhang6124ce62021-05-04 14:03:13 +01001280 </table>
1281<tr>
1282 <td>CLSinLayer
1283 <td>
1284 <ul>
1285 <li>All
1286 </ul>
1287 <td>
1288 <table>
1289 <tr><th>src<th>dst
1290 <tr><td>F16<td>F16
1291 <tr><td>F32<td>F32
1292 </table>
1293<tr>
1294 <td>CLLogLayer
1295 <td>
1296 <ul>
1297 <li>All
1298 </ul>
1299 <td>
1300 <table>
1301 <tr><th>src<th>dst
1302 <tr><td>F16<td>F16
1303 <tr><td>F32<td>F32
1304 </table>
1305<tr>
1306 <td>CLAbsLayer
1307 <td>
1308 <ul>
1309 <li>All
1310 </ul>
1311 <td>
1312 <table>
1313 <tr><th>src<th>dst
1314 <tr><td>F16<td>F16
1315 <tr><td>F32<td>F32
1316 </table>
1317<tr>
1318 <td>CLRoundLayer
1319 <td>
1320 <ul>
1321 <li>All
1322 </ul>
1323 <td>
1324 <table>
1325 <tr><th>src<th>dst
1326 <tr><td>F16<td>F16
1327 <tr><td>F32<td>F32
1328 </table>
1329<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +01001330 <td rowspan="2">FFT1D
Teresa Charlin62687422021-04-28 10:58:49 +01001331 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 1D.
Sheri Zhanga47dcc22021-04-22 14:41:12 +01001332 <td rowspan="2">
1333 <ul>
Teresa Charlin62687422021-04-28 10:58:49 +01001334 <li>n/a
Sheri Zhanga47dcc22021-04-22 14:41:12 +01001335 </ul>
1336 <td>NEFFT1D
1337 <td>
1338 <ul>
1339 <li>All
1340 </ul>
1341 <td>
1342 <table>
1343 <tr><th>src<th>dst
1344 <tr><td>F32<td>F32
1345 </table>
1346<tr>
1347 <td>CLFFT1D
1348 <td>
1349 <ul>
1350 <li>All
1351 </ul>
1352 <td>
1353 <table>
1354 <tr><th>src<th>dst
1355 <tr><td>F32<td>F32
1356 <tr><td>F16<td>F16
1357 </table>
1358<tr>
1359 <td rowspan="2">FFT2D
Teresa Charlin62687422021-04-28 10:58:49 +01001360 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 2D.
Sheri Zhanga47dcc22021-04-22 14:41:12 +01001361 <td rowspan="2">
1362 <ul>
Teresa Charlin62687422021-04-28 10:58:49 +01001363 <li>n/a
Sheri Zhanga47dcc22021-04-22 14:41:12 +01001364 </ul>
1365 <td>NEFFT2D
1366 <td>
1367 <ul>
1368 <li>All
1369 </ul>
1370 <td>
1371 <table>
1372 <tr><th>src<th>dst
1373 <tr><td>F32<td>F32
1374 </table>
1375<tr>
1376 <td>CLFFT2D
1377 <td>
1378 <ul>
1379 <li>All
1380 </ul>
1381 <td>
1382 <table>
1383 <tr><th>src<th>dst
1384 <tr><td>F32<td>F32
1385 <tr><td>F16<td>F16
1386 </table>
1387<tr>
1388 <td rowspan="2">FFTConvolutionLayer
Teresa Charlin62687422021-04-28 10:58:49 +01001389 <td rowspan="2" style="width:200px;"> Fast Fourier Transform Convolution.
Sheri Zhanga47dcc22021-04-22 14:41:12 +01001390 <td rowspan="2">
1391 <ul>
1392 <li>ANEURALNETWORKS_CONV_2D
1393 </ul>
1394 <td>NEFFTConvolutionLayer
1395 <td>
1396 <ul>
1397 <li>All
1398 </ul>
1399 <td>
1400 <table>
1401 <tr><th>src<th>dst
1402 <tr><td>F32<td>F32
1403 </table>
1404<tr>
1405 <td>CLFFTConvolutionLayer
1406 <td>
1407 <ul>
1408 <li>All
1409 </ul>
1410 <td>
1411 <table>
1412 <tr><th>src<th>dst
1413 <tr><td>F32<td>F32
1414 <tr><td>F16<td>F16
1415 </table>
1416<tr>
1417 <td rowspan="2">Fill
Teresa Charlin62687422021-04-28 10:58:49 +01001418 <td rowspan="2" style="width:200px;"> Set the values of a tensor with a given value.
Sheri Zhanga47dcc22021-04-22 14:41:12 +01001419 <td rowspan="2">
1420 <ul>
1421 <li>ANEURALNETWORKS_FILL
1422 </ul>
1423 <td>NEFill
1424 <td>
1425 <ul>
1426 <li>All
1427 </ul>
1428 <td>
1429 <table>
1430 <tr><th>src<th>dst
1431 <tr><td>All<td>All
1432 </table>
1433<tr>
1434 <td>CLFill
1435 <td>
1436 <ul>
1437 <li>All
1438 </ul>
1439 <td>
1440 <table>
1441 <tr><th>src<th>dst
1442 <tr><td>All<td>All
1443 </table>
1444<tr>
Georgios Pinitasb6af4822021-09-14 12:33:34 +01001445 <td rowspan="1">FillBorder
1446 <td rowspan="1" style="width:200px;"> Function to fill the borders within the XY-planes.
1447 <td rowspan="1">
Teresa Charlin62687422021-04-28 10:58:49 +01001448 <ul>
1449 <li>n/a
1450 </ul>
1451 <td>NEFillBorder
1452 <td>
1453 <ul>
1454 <li>All
1455 </ul>
1456 <td>
1457 <table>
1458 <tr><th>src<th>dst
1459 <tr><td>All<td>All
1460 </table>
1461<tr>
Teresa Charlin62687422021-04-28 10:58:49 +01001462 <td rowspan="2">FlattenLayer
    <td rowspan="2" style="width:200px;"> Function to reshape a tensor to be 1D.
1464 <td rowspan="2">
1465 <ul>
1466 <li>ANEURALNETWORKS_RESHAPE
1467 </ul>
1468 <td>NEFlattenLayer
1469 <td>
1470 <ul>
1471 <li>All
1472 </ul>
1473 <td>
1474 <table>
1475 <tr><th>src<th>dst
1476 <tr><td>All<td>All
1477 </table>
1478<tr>
1479 <td>CLFlattenLayer
1480 <td>
1481 <ul>
1482 <li>All
1483 </ul>
1484 <td>
1485 <table>
1486 <tr><th>src<th>dst
1487 <tr><td>All<td>All
1488 </table>
1489<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +01001490 <td rowspan="2">Floor
    <td rowspan="2" style="width:200px;"> Function to round the value down to the nearest integer (floor).
Sheri Zhanga47dcc22021-04-22 14:41:12 +01001492 <td rowspan="2">
1493 <ul>
1494 <li>ANEURALNETWORKS_FLOOR
1495 </ul>
1496 <td>NEFloor
1497 <td>
1498 <ul>
1499 <li>All
1500 </ul>
1501 <td>
1502 <table>
1503 <tr><th>src<th>dst
1504 <tr><td>F32<td>F32
1505 <tr><td>F16<td>F16
1506 </table>
1507<tr>
1508 <td>CLFloor
1509 <td>
1510 <ul>
1511 <li>All
1512 </ul>
1513 <td>
1514 <table>
1515 <tr><th>src<th>dst
1516 <tr><td>F32<td>F32
1517 <tr><td>F16<td>F16
1518 </table>
1519<tr>
Teresa Charlin62687422021-04-28 10:58:49 +01001520 <td rowspan="2">FullyConnectedLayer
1521 <td rowspan="2" style="width:200px;"> Function to perform a fully connected / dense layer.
1522 <td rowspan="2">
1523 <ul>
1524 <li>ANEURALNETWORKS_FULLY_CONNECTED
1525 </ul>
Sheri Zhang6124ce62021-05-04 14:03:13 +01001526 <td>NEFullyConnectedLayer
Teresa Charlin62687422021-04-28 10:58:49 +01001527 <td>
1528 <ul>
1529 <li>NHWC
1530 <li>NCHW
1531 </ul>
1532 <td>
1533 <table>
1534 <tr><th>src0<th>src1<th>src2<th>dst
1535 <tr><td>F16<td>F16<td>F16<td>F16
1536 <tr><td>F32<td>F32<td>F32<td>F32
1537 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1538 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1539 </table>
1540<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +01001541 <td>CLFullyConnectedLayer
Teresa Charlin62687422021-04-28 10:58:49 +01001542 <td>
1543 <ul>
1544 <li>NHWC
1545 <li>NCHW
1546 </ul>
1547 <td>
1548 <table>
1549 <tr><th>src0<th>src1<th>src2<th>dst
1550 <tr><td>F16<td>F16<td>F16<td>F16
1551 <tr><td>F32<td>F32<td>F32<td>F32
1552 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1553 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1554 </table>
1555<tr>
1556 <td rowspan="2">FuseBatchNormalization
1557 <td rowspan="2" style="width:200px;"> Function to fuse the batch normalization node to a preceding convolution node.
1558 <td rowspan="2">
1559 <ul>
1560 <li>n/a
1561 </ul>
1562 <td>NEFuseBatchNormalization
1563 <td>
1564 <ul>
1565 <li>NHWC
1566 <li>NCHW
1567 </ul>
1568 <td>
1569 <table>
1570 <tr><th>src<th>dst
1571 <tr><td>F32<td>F32
1572 <tr><td>F16<td>F16
1573 </table>
1574<tr>
1575 <td>CLFuseBatchNormalization
1576 <td>
1577 <ul>
1578 <li>NHWC
1579 <li>NCHW
1580 </ul>
1581 <td>
1582 <table>
1583 <tr><th>src<th>dst
1584 <tr><td>F32<td>F32
1585 <tr><td>F16<td>F16
1586 </table>
1587<tr>
1588 <td rowspan="2">Gather
1589 <td rowspan="2" style="width:200px;"> Performs the Gather operation along the chosen axis.
1590 <td rowspan="2">
1591 <ul>
1592 <li>ANEURALNETWORKS_GATHER
1593 </ul>
1594 <td>NEGather
1595 <td>
1596 <ul>
1597 <li>All
1598 </ul>
1599 <td>
1600 <table>
1601 <tr><th>src<th>dst
1602 <tr><td>All<td>All
1603 </table>
1604<tr>
1605 <td>CLGather
1606 <td>
1607 <ul>
1608 <li>All
1609 </ul>
1610 <td>
1611 <table>
1612 <tr><th>src<th>dst
1613 <tr><td>All<td>All
1614 </table>
1615<tr>
1616 <td rowspan="2">GEMM
1617 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1618 <td rowspan="2">
1619 <ul>
1620 <li>n/a
1621 </ul>
1622 <td>NEGEMM
1623 <td>
1624 <ul>
1625 <li>All
1626 </ul>
1627 <td>
1628 <table>
1629 <tr><th>src0<th>src1<th>src2<th>dst
1630 <tr><td>F32<td>F32<td>F32<td>F32
1631 <tr><td>F16<td>F16<td>F16<td>F16
1632 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1633 </table>
1634<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +01001635 <td>CLGEMM
Teresa Charlin62687422021-04-28 10:58:49 +01001636 <td>
1637 <ul>
1638 <li>All
1639 </ul>
1640 <td>
1641 <table>
1642 <tr><th>src0<th>src1<th>src2<th>dst
1643 <tr><td>F32<td>F32<td>F32<td>F32
1644 <tr><td>F16<td>F16<td>F16<td>F16
1645 </table>
1646<tr>
Jakub Sujakee301b32021-06-04 09:46:08 +01001647 <td rowspan="1">GEMMConv2d
Sheri Zhang6124ce62021-05-04 14:03:13 +01001648 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1649 <td rowspan="1">
1650 <ul>
1651 <li>ANEURALNETWORKS_CONV_2D
1652 </ul>
1653 <td>NEGEMMConv2d
1654 <td>
1655 <ul>
1656 <li>All
1657 </ul>
1658 <td>
1659 <table>
1660 <tr><th>src0<th>src1<th>src2<th>dst
1661 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1662 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1663 <tr><td>F16<td>F16<td>F16<td>F16
1664 <tr><td>F32<td>F32<td>F32<td>F32
1665 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1666 </table>
1667<tr>
Teresa Charlin62687422021-04-28 10:58:49 +01001668 <td rowspan="2">GEMMConvolutionLayer
1669 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1670 <td rowspan="2">
1671 <ul>
1672 <li>ANEURALNETWORKS_CONV_2D
1673 </ul>
Sheri Zhang6124ce62021-05-04 14:03:13 +01001674 <td>NEGEMMConvolutionLayer
Teresa Charlin62687422021-04-28 10:58:49 +01001675 <td>
1676 <ul>
1677 <li>NHWC
1678 <li>NCHW
1679 </ul>
1680 <td>
1681 <table>
1682 <tr><th>src0<th>src1<th>src2<th>dst
1683 <tr><td>F16<td>F16<td>F16<td>F16
1684 <tr><td>F32<td>F32<td>F32<td>F32
1685 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1686 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1687 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1688 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1689 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1690 </table>
1691<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +01001692 <td>CLGEMMConvolutionLayer
Teresa Charlin62687422021-04-28 10:58:49 +01001693 <td>
1694 <ul>
1695 <li>NHWC
1696 <li>NCHW
1697 </ul>
1698 <td>
1699 <table>
1700 <tr><th>src0<th>src1<th>src2<th>dst
1701 <tr><td>F16<td>F16<td>F16<td>F16
1702 <tr><td>F32<td>F32<td>F32<td>F32
1703 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1704 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1705 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1706 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1707 </table>
1708<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +01001709 <td rowspan="1">GEMMDeconvolutionLayer
1710 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1711 <td rowspan="1">
1712 <ul>
1713 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
1714 </ul>
1715 <td>CLGEMMDeconvolutionLayer
1716 <td>
1717 <ul>
1718 <li>NHWC
1719 </ul>
1720 <td>
1721 <table>
1722 <tr><th>src0<th>src1<th>src2<th>dst
1723 <tr><td>F16<td>F16<td>F16<td>F16
1724 <tr><td>F32<td>F32<td>F32<td>F32
1725 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1726 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1727 </table>
1728<tr>
Teresa Charlin62687422021-04-28 10:58:49 +01001729 <td rowspan="2">GEMMLowpMatrixMultiplyCore
1730 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1731 <td rowspan="2">
1732 <ul>
1733 <li>n/a
1734 </ul>
1735 <td>NEGEMMLowpMatrixMultiplyCore
1736 <td>
1737 <ul>
1738 <li>NHWC
1739 <li>NCHW
1740 </ul>
1741 <td>
1742 <table>
1743 <tr><th>src0<th>src1<th>src2<th>dst
1744 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1745 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1746 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1747 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1748 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1749 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1750 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1751 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1752 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1753 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1754 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1755 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1756 </table>
1757<tr>
1758 <td>CLGEMMLowpMatrixMultiplyCore
1759 <td>
1760 <ul>
1761 <li>NHWC
1762 <li>NCHW
1763 </ul>
1764 <td>
1765 <table>
1766 <tr><th>src0<th>src1<th>src2<th>dst
1767 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1768 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1769 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1770 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1771 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1772 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1773 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1774 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1775 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1776 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1777 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1778 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1779 </table>
1780<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +01001781 <td rowspan="2">GEMMLowpOutputStage
1782 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1783 <td rowspan="2">
1784 <ul>
1785 <li>n/a
1786 </ul>
1787 <td>NEGEMMLowpOutputStage
1788 <td>
1789 <ul>
1790 <li>All
1791 </ul>
1792 <td>
1793 <table>
1794 <tr><th>src0<th>src1<th>dst
1795 <tr><td>S32<td>S32<td>QASYMM8
1796 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1797 <tr><td>S32<td>S32<td>QSYMM16
1798 </table>
1799<tr>
1800 <td>CLGEMMLowpOutputStage
1801 <td>
1802 <ul>
1803 <li>All
1804 </ul>
1805 <td>
1806 <table>
1807 <tr><th>src0<th>src1<th>dst
1808 <tr><td>S32<td>S32<td>QASYMM8
1809 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1810 <tr><td>S32<td>S32<td>QSYMM16
1811 </table>
1812<tr>
Teresa Charlin62687422021-04-28 10:58:49 +01001813 <td rowspan="2">GenerateProposalsLayer
    <td rowspan="2" style="width:200px;"> Function to generate proposals for an RPN (Region Proposal Network).
1815 <td rowspan="2">
1816 <ul>
1817 <li>ANEURALNETWORKS_GENERATE_PROPOSALS
1818 </ul>
1819 <td>NEGenerateProposalsLayer
1820 <td>
1821 <ul>
1822 <li>All
1823 </ul>
1824 <td>
1825 <table>
1826 <tr><th>src0<th>src1<th>src2<th>dst
1827 <tr><td>F16<td>F16<td>F16<td>F16
1828 <tr><td>F32<td>F32<td>F32<td>F32
1829 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1830 </table>
1831<tr>
1832 <td>CLGenerateProposalsLayer
1833 <td>
1834 <ul>
1835 <li>All
1836 </ul>
1837 <td>
1838 <table>
1839 <tr><th>src0<th>src1<th>src2<th>dst
1840 <tr><td>F16<td>F16<td>F16<td>F16
1841 <tr><td>F32<td>F32<td>F32<td>F32
1842 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1843 </table>
1844<tr>
1845 <td rowspan="2">InstanceNormalizationLayer
    <td rowspan="2" style="width:200px;"> Function to perform an instance normalization on a given axis.
1847 <td rowspan="2">
1848 <ul>
1849 <li>ANEURALNETWORKS_INSTANCE_NORMALIZATION
1850 </ul>
1851 <td>NEInstanceNormalizationLayer
1852 <td>
1853 <ul>
1854 <li>NHWC
1855 <li>NCHW
1856 </ul>
1857 <td>
1858 <table>
1859 <tr><th>src<th>dst
1860 <tr><td>F16<td>F16
1861 <tr><td>F32<td>F32
1862 </table>
1863<tr>
1864 <td>CLInstanceNormalizationLayer
1865 <td>
1866 <ul>
1867 <li>NHWC
1868 <li>NCHW
1869 </ul>
1870 <td>
1871 <table>
1872 <tr><th>src<th>dst
1873 <tr><td>F16<td>F16
1874 <tr><td>F32<td>F32
1875 </table>
1876<tr>
1877 <td rowspan="2">L2NormalizeLayer
    <td rowspan="2" style="width:200px;"> Function to perform an L2 normalization on a given axis.
1879 <td rowspan="2">
1880 <ul>
1881 <li>ANEURALNETWORKS_L2_NORMALIZATION
1882 </ul>
1883 <td>NEL2NormalizeLayer
1884 <td>
1885 <ul>
1886 <li>NHWC
1887 <li>NCHW
1888 </ul>
1889 <td>
1890 <table>
1891 <tr><th>src<th>dst
1892 <tr><td>F16<td>F16
1893 <tr><td>F32<td>F32
1894 </table>
1895<tr>
1896 <td>CLL2NormalizeLayer
1897 <td>
1898 <ul>
1899 <li>NHWC
1900 <li>NCHW
1901 </ul>
1902 <td>
1903 <table>
1904 <tr><th>src<th>dst
1905 <tr><td>F16<td>F16
1906 <tr><td>F32<td>F32
1907 </table>
1908<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +01001909 <td rowspan="3">Logical
1910 <td rowspan="3" style="width:200px;"> Function to perform: - Logical AND - Logical OR - Logical NOT
1911 <td rowspan="3">
1912 <ul>
1913 <li>n/a
1914 </ul>
1915 <td>NELogicalAnd
1916 <td>
1917 <ul>
1918 <li>All
1919 </ul>
1920 <td>
1921 <table>
1922 <tr><th>src0<th>src1<th>dst
1923 <tr><td>U8<td>U8<td>U8
1924 </table>
1925<tr>
1926 <td>NELogicalOr
1927 <td>
1928 <ul>
1929 <li>All
1930 </ul>
1931 <td>
1932 <table>
1933 <tr><th>src0<th>src1<th>dst
1934 <tr><td>U8<td>U8<td>U8
1935 </table>
1936<tr>
1937 <td>NELogicalNot
1938 <td>
1939 <ul>
1940 <li>All
1941 </ul>
1942 <td>
1943 <table>
1944 <tr><th>src<th>dst
1945 <tr><td>U8<td>U8
1946 </table>
1947<tr>
1948 <td rowspan="1">LogicalAnd
1949 <td rowspan="1" style="width:200px;"> Function to perform Logical AND.
1950 <td rowspan="1">
1951 <ul>
1952 <li>n/a
1953 </ul>
1954 <td>CLLogicalAnd
1955 <td>
1956 <ul>
1957 <li>All
1958 </ul>
1959 <td>
1960 <table>
1961 <tr><th>src0<th>src1<th>dst
1962 <tr><td>U8<td>U8<td>U8
1963 </table>
1964<tr>
1965 <td rowspan="1">LogicalOr
1966 <td rowspan="1" style="width:200px;"> Function to perform Logical OR.
1967 <td rowspan="1">
1968 <ul>
1969 <li>n/a
1970 </ul>
1971 <td>CLLogicalOr
1972 <td>
1973 <ul>
1974 <li>All
1975 </ul>
1976 <td>
1977 <table>
1978 <tr><th>src0<th>src1<th>dst
1979 <tr><td>U8<td>U8<td>U8
1980 </table>
1981<tr>
1982 <td rowspan="1">LogicalNot
1983 <td rowspan="1" style="width:200px;"> Function to perform Logical NOT.
1984 <td rowspan="1">
1985 <ul>
1986 <li>n/a
1987 </ul>
1988 <td>CLLogicalNot
1989 <td>
1990 <ul>
1991 <li>All
1992 </ul>
1993 <td>
1994 <table>
1995 <tr><th>src<th>dst
1996 <tr><td>U8<td>U8
1997 </table>
1998<tr>
Teresa Charlin62687422021-04-28 10:58:49 +01001999 <td rowspan="2">LSTMLayer
2000 <td rowspan="2" style="width:200px;"> Function to perform a single time step in a Long Short-Term Memory (LSTM) layer.
2001 <td rowspan="2">
2002 <ul>
2003 <li>ANEURALNETWORKS_LSTM
2004 </ul>
2005 <td>NELSTMLayer
2006 <td>
2007 <ul>
2008 <li>All
2009 </ul>
2010 <td>
2011 <table>
2012 <tr><th>src0 - src13<th>dst0 - dst3
2013 <tr><td>F16<td>F16
2014 <tr><td>F32<td>F32
2015 </table>
2016<tr>
2017 <td>CLLSTMLayer
2018 <td>
2019 <ul>
2020 <li>All
2021 </ul>
2022 <td>
2023 <table>
2024 <tr><th>src0 - src13<th>dst0 - dst3
2025 <tr><td>F16<td>F16
2026 <tr><td>F32<td>F32
2027 </table>
2028<tr>
2029 <td rowspan="2">LSTMLayerQuantized
    <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2031 <td rowspan="2">
2032 <ul>
2033 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2034 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2035 </ul>
2036 <td>NELSTMLayerQuantized
2037 <td>
2038 <ul>
2039 <li>All
2040 </ul>
2041 <td>
2042 <table>
2043 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2044 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2045 </table>
2046<tr>
2047 <td>CLLSTMLayerQuantized
2048 <td>
2049 <ul>
2050 <li>All
2051 </ul>
2052 <td>
2053 <table>
2054 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2055 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2056 </table>
2057<tr>
2058 <td rowspan="2">MaxUnpoolingLayer
2059 <td rowspan="2" style="width:200px;"> Function to perform MaxUnpooling.
2060 <td rowspan="2">
2061 <ul>
2062 <li>n/a
2063 </ul>
2064 <td>NEMaxUnpoolingLayer
2065 <td>
2066 <ul>
2067 <li>NHWC
2068 <li>NCHW
2069 </ul>
2070 <td>
2071 <table>
2072 <tr><th>src<th>dst
2073 <tr><td>QASYMM8<td>QASYMM8
2074 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2075 <tr><td>F16<td>F16
2076 <tr><td>F32<td>F32
2077 </table>
2078<tr>
2079 <td>CLMaxUnpoolingLayer
2080 <td>
2081 <ul>
2082 <li>NHWC
2083 <li>NCHW
2084 </ul>
2085 <td>
2086 <table>
2087 <tr><th>src<th>dst
2088 <tr><td>QASYMM8<td>QASYMM8
2089 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2090 <tr><td>F16<td>F16
2091 <tr><td>F32<td>F32
2092 </table>
2093<tr>
2094 <td rowspan="2">MeanStdDevNormalizationLayer
2095 <td rowspan="2" style="width:200px;"> Function to execute mean and standard deviation normalization.
2096 <td rowspan="2">
2097 <ul>
2098 <li>n/a
2099 </ul>
2100 <td>NEMeanStdDevNormalizationLayer
2101 <td>
2102 <ul>
2103 <li>NHWC
2104 <li>NCHW
2105 </ul>
2106 <td>
2107 <table>
2108 <tr><th>src<th>dst
2109 <tr><td>F32<td>F32
2110 <tr><td>F16<td>F16
2111 </table>
2112<tr>
2113 <td>CLMeanStdDevNormalizationLayer
2114 <td>
2115 <ul>
2116 <li>NHWC
2117 <li>NCHW
2118 </ul>
2119 <td>
2120 <table>
2121 <tr><th>src<th>dst
2122 <tr><td>F32<td>F32
2123 <tr><td>F16<td>F16
2124 </table>
2125<tr>
2126 <td rowspan="2">NormalizationLayer
2127 <td rowspan="2" style="width:200px;"> Function to compute normalization layer.
2128 <td rowspan="2">
2129 <ul>
2130 <li>ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION
2131 </ul>
2132 <td>NENormalizationLayer
2133 <td>
2134 <ul>
2135 <li>NHWC
2136 <li>NCHW
2137 </ul>
2138 <td>
2139 <table>
2140 <tr><th>src<th>dst
2141 <tr><td>F32<td>F32
2142 <tr><td>F16<td>F16
2143 </table>
2144<tr>
2145 <td>CLNormalizationLayer
2146 <td>
2147 <ul>
2148 <li>NHWC
2149 <li>NCHW
2150 </ul>
2151 <td>
2152 <table>
2153 <tr><th>src<th>dst
2154 <tr><td>F32<td>F32
2155 <tr><td>F16<td>F16
2156 </table>
2157<tr>
2158 <td rowspan="2">PadLayer
2159 <td rowspan="2" style="width:200px;"> Function to pad a tensor.
2160 <td rowspan="2">
2161 <ul>
2162 <li>ANEURALNETWORKS_PAD
2163 <li>ANEURALNETWORKS_PAD_V2
2164 </ul>
2165 <td>NEPadLayer
2166 <td>
2167 <ul>
2168 <li>NHWC
2169 <li>NCHW
2170 </ul>
2171 <td>
2172 <table>
2173 <tr><th>src<th>dst
2174 <tr><td>All<td>All
2175 </table>
2176<tr>
2177 <td>CLPadLayer
2178 <td>
2179 <ul>
2180 <li>NHWC
2181 <li>NCHW
2182 </ul>
2183 <td>
2184 <table>
2185 <tr><th>src<th>dst
2186 <tr><td>All<td>All
2187 </table>
2188<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +01002189 <td rowspan="2">Permute
2190 <td rowspan="2" style="width:200px;"> Function to transpose an ND tensor.
2191 <td rowspan="2">
2192 <ul>
2193 <li>ANEURALNETWORKS_TRANSPOSE
2194 </ul>
2195 <td>NEPermute
2196 <td>
2197 <ul>
2198 <li>NHWC
2199 <li>NCHW
2200 </ul>
2201 <td>
2202 <table>
2203 <tr><th>src<th>dst
2204 <tr><td>All<td>All
2205 </table>
2206<tr>
2207 <td>CLPermute
2208 <td>
2209 <ul>
2210 <li>NHWC
2211 <li>NCHW
2212 </ul>
2213 <td>
2214 <table>
2215 <tr><th>src<th>dst
2216 <tr><td>All<td>All
2217 </table>
2218<tr>
2219 <td rowspan="2">PixelWiseMultiplication
Jakub Sujakee301b32021-06-04 09:46:08 +01002220 <td rowspan="2" style="width:200px;"> Function to perform a multiplication.
Sheri Zhanga47dcc22021-04-22 14:41:12 +01002221 <td rowspan="2">
2222 <ul>
2223 <li>ANEURALNETWORKS_MUL
2224 </ul>
2225 <td>NEPixelWiseMultiplication
2226 <td>
2227 <ul>
2228 <li>All
2229 </ul>
2230 <td>
2231 <table>
2232 <tr><th>src0<th>src1<th>dst
2233 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2234 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2235 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
2236 <tr><td>QSYMM16<td>QSYMM16<td>S32
2237 <tr><td>U8<td>U8<td>U8
2238 <tr><td>U8<td>U8<td>S16
2239 <tr><td>U8<td>S16<td>S16
2240 <tr><td>S16<td>U8<td>S16
2241 <tr><td>S16<td>S16<td>S16
2242 <tr><td>F16<td>F16<td>F16
2243       <tr><td>F32<td>F32<td>F32
2244 </table>
2245<tr>
2246 <td>CLPixelWiseMultiplication
2247 <td>
2248 <ul>
2249 <li>All
2250 </ul>
2251 <td>
2252 <table>
2253 <tr><th>src0<th>src1<th>dst
2254 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2255 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2256       <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
2257 <tr><td>QSYMM16<td>QSYMM16<td>S32
2258 <tr><td>U8<td>U8<td>U8
2259 <tr><td>U8<td>U8<td>S16
2260 <tr><td>U8<td>S16<td>S16
2261 <tr><td>S16<td>U8<td>S16
2262 <tr><td>S16<td>S16<td>S16
2263 <tr><td>F16<td>F16<td>F16
2264       <tr><td>F32<td>F32<td>F32
2265       <tr><td>S32<td>S32<td>S32
2266      </table>
2267<tr>
2268 <td rowspan="2">PoolingLayer
2269     <td rowspan="2" style="width:200px;"> Function to perform pooling with the specified pooling operation.
2270     <td rowspan="2">
2271 <ul>
2272 <li>ANEURALNETWORKS_AVERAGE_POOL_2D
2273 <li>ANEURALNETWORKS_L2_POOL_2D
2274 <li>ANEURALNETWORKS_MAX_POOL_2D
2275 </ul>
2276 <td>NEPoolingLayer
2277 <td>
2278 <ul>
2279 <li>NHWC
2280 <li>NCHW
2281 </ul>
2282 <td>
2283 <table>
2284 <tr><th>src<th>dst
2285 <tr><td>QASYMM8<td>QASYMM8
2286 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2287 <tr><td>F16<td>F16
2288 <tr><td>F32<td>F32
2289 </table>
2290<tr>
2291 <td>CLPoolingLayer
2292 <td>
2293 <ul>
2294 <li>NHWC
2295 <li>NCHW
2296 </ul>
2297 <td>
2298 <table>
2299 <tr><th>src<th>dst
2300 <tr><td>QASYMM8<td>QASYMM8
2301 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2302 <tr><td>F16<td>F16
2303 <tr><td>F32<td>F32
2304 </table>
2305<tr>
2306     <td rowspan="2">Pooling3dLayer
2307 <td rowspan="2" style="width:200px;"> Function to perform pooling 3D with the specified pooling operation.
2308 <td rowspan="2">
2309 <ul>
2310       <li>n/a
2311 </ul>
2312 <td>NEPooling3dLayer
2313 <td>
2314 <ul>
2315 <li>NDHWC
2316 </ul>
2317 <td>
2318 <table>
2319 <tr><th>src<th>dst
2320 <tr><td>F16<td>F16
2321 <tr><td>F32<td>F32
2322       <tr><td>QASYMM8<td>QASYMM8
2323       <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2324      </table>
2325<tr>
2326 <td>CLPooling3dLayer
2327 <td>
2328 <ul>
2329 <li>NDHWC
2330 </ul>
2331 <td>
2332 <table>
2333 <tr><th>src<th>dst
2334 <tr><td>F16<td>F16
2335 <tr><td>F32<td>F32
2336       <tr><td>QASYMM8<td>QASYMM8
2337       <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2338      </table>
2339<tr>
2340     <td rowspan="2">PReluLayer
2341 <td rowspan="2" style="width:200px;"> Function to compute the activation layer with the PRELU activation function.
2342 <td rowspan="2">
2343 <ul>
2344 <li>ANEURALNETWORKS_PRELU
2345 </ul>
2346 <td>NEPReluLayer
2347 <td>
2348 <ul>
2349 <li>All
2350 </ul>
2351 <td>
2352 <table>
2353 <tr><th>src<th>dst
2354 <tr><td>QASYMM8<td>QASYMM8
2355 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2356 <tr><td>F16<td>F16
2357 <tr><td>F32<td>F32
2358 </table>
2359<tr>
2360 <td>CLPReluLayer
2361 <td>
2362 <ul>
2363 <li>All
2364 </ul>
2365 <td>
2366 <table>
2367 <tr><th>src<th>dst
2368 <tr><td>QASYMM8<td>QASYMM8
2369 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2370 <tr><td>F16<td>F16
2371 <tr><td>F32<td>F32
2372 </table>
2373<tr>
2374     <td rowspan="2">PriorBoxLayer
2375     <td rowspan="2" style="width:200px;"> Function to compute prior boxes and clip.
2376     <td rowspan="2">
2377 <ul>
2378 <li>n/a
2379 </ul>
2380 <td>NEPriorBoxLayer
2381 <td>
2382 <ul>
2383 <li>NHWC
2384 <li>NCHW
2385 </ul>
2386 <td>
2387 <table>
2388 <tr><th>src0<th>src1<th>dst
2389 <tr><td>F32<td>F32<td>F32
2390 </table>
2391<tr>
2392 <td>CLPriorBoxLayer
2393 <td>
2394 <ul>
2395 <li>NHWC
2396 <li>NCHW
2397 </ul>
2398 <td>
2399 <table>
2400 <tr><th>src0<th>src1<th>dst
2401 <tr><td>F32<td>F32<td>F32
2402 </table>
2403<tr>
2404 <td rowspan="2">QLSTMLayer
2405 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2406 <td rowspan="2">
2407 <ul>
2408 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2409 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2410 </ul>
2411 <td>NEQLSTMLayer
2412 <td>
2413 <ul>
2414 <li>All
2415 </ul>
2416 <td>
2417 <table>
2418       <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2419 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2420 </table>
2421<tr>
2422 <td>CLQLSTMLayer
2423 <td>
2424 <ul>
2425 <li>All
2426 </ul>
2427 <td>
2428 <table>
2429       <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2430 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2431 </table>
2432<tr>
2433     <td rowspan="2">QuantizationLayer
2434     <td rowspan="2" style="width:200px;"> Function to perform a quantization layer.
2435 <td rowspan="2">
2436 <ul>
2437 <li>ANEURALNETWORKS_QUANTIZE
2438 </ul>
2439 <td>NEQuantizationLayer
2440 <td>
2441 <ul>
2442 <li>All
2443 </ul>
2444 <td>
2445 <table>
2446 <tr><th>src<th>dst
2447       <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2448 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2449 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2450 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2451      </table>
2452<tr>
2453 <td>CLQuantizationLayer
2454 <td>
2455 <ul>
2456 <li>All
2457 </ul>
2458 <td>
2459 <table>
2460 <tr><th>src<th>dst
2461       <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2462 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2463 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2464 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2465 </table>
2466<tr>
2467 <td rowspan="2">Range
2468     <td rowspan="2" style="width:200px;"> Function to generate a sequence of numbers starting from START and extending by increments of 'STEP' up to, but not including, 'END'.
2469 <td rowspan="2">
2470 <ul>
2471 <li>n/a
2472 </ul>
2473 <td>NERange
2474 <td>
2475 <ul>
2476 <li>All
2477 </ul>
2478 <td>
2479 <table>
2480 <tr><th>dst
2481 <tr><td>U8
2482 <tr><td>S8
2483 <tr><td>U16
2484 <tr><td>S16
2485 <tr><td>U32
2486 <tr><td>S32
2487 <tr><td>F16
2488 <tr><td>F32
2489 </table>
2490<tr>
2491 <td>CLRange
2492 <td>
2493 <ul>
2494 <li>All
2495 </ul>
2496 <td>
2497 <table>
2498 <tr><th>dst
2499 <tr><td>U8
2500 <tr><td>S8
2501 <tr><td>QASYMM8
2502 <tr><td>U16
2503 <tr><td>S16
2504 <tr><td>U32
2505 <tr><td>S32
2506 <tr><td>F16
2507 <tr><td>F32
2508 </table>
2509<tr>
2510 <td rowspan="2">ReduceMean
2511     <td rowspan="2" style="width:200px;"> Function to perform a reduce mean operation.
2512     <td rowspan="2">
2513 <ul>
2514 <li>ANEURALNETWORKS_MEAN
2515 </ul>
2516 <td>NEReduceMean
2517 <td>
2518 <ul>
2519 <li>All
2520 </ul>
2521 <td>
2522 <table>
2523 <tr><th>src<th>dst
2524       <tr><td>QASYMM8<td>QASYMM8
2525       <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2526       <tr><td>F16<td>F16
2527 <tr><td>F32<td>F32
2528 </table>
2529<tr>
2530 <td>CLReduceMean
2531 <td>
2532 <ul>
2533 <li>All
2534 </ul>
2535 <td>
2536 <table>
2537 <tr><th>src<th>dst
2538 <tr><td>QASYMM8<td>QASYMM8
2539 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2540 <tr><td>F16<td>F16
2541 <tr><td>F32<td>F32
2542 </table>
2543<tr>
2544 <td rowspan="2">ReductionOperation
2545     <td rowspan="2" style="width:200px;"> Function to perform a reduction with one of the following operations: ARG_IDX_MAX (index of the max value), ARG_IDX_MIN (index of the min value), MEAN_SUM (mean of sum), PROD (product), SUM_SQUARE (sum of squares), SUM (sum), MIN (min), MAX (max).
2546     <td rowspan="2">
2547 <ul>
2548 <li>ANEURALNETWORKS_REDUCE_ALL
2549 <li>ANEURALNETWORKS_REDUCE_ANY
2550 <li>ANEURALNETWORKS_REDUCE_MAX
2551 <li>ANEURALNETWORKS_REDUCE_MIN
2552 <li>ANEURALNETWORKS_REDUCE_PROD
2553 <li>ANEURALNETWORKS_REDUCE_SUM
2554 </ul>
2555 <td>NEReductionOperation
2556 <td>
2557 <ul>
2558 <li>All
2559 </ul>
2560 <td>
2561 <table>
2562 <tr><th>src<th>dst
2563 <tr><td>QASYMM8<td>QASYMM8
2564 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2565 <tr><td>F16<td>F16
2566 <tr><td>F32<td>F32
2567 <tr><td>S32<td>S32
2568 </table>
2569<tr>
2570 <td>CLReductionOperation
2571 <td>
2572 <ul>
2573 <li>All
2574 </ul>
2575 <td>
2576 <table>
2577 <tr><th>src<th>dst
2578 <tr><td>QASYMM8<td>QASYMM8
2579 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2580 <tr><td>F16<td>F16
2581 <tr><td>F32<td>F32
2582 <tr><td>S32<td>S32
2583 </table>
2584<tr>
2585 <td rowspan="2">ReorgLayer
2586     <td rowspan="2" style="width:200px;"> Function to perform a reorganization layer from the input tensor to the output tensor.
2587 <td rowspan="2">
2588 <ul>
2589 <li>n/a
2590 </ul>
2591 <td>NEReorgLayer
2592 <td>
2593 <ul>
2594 <li>NHWC
2595 <li>NCHW
2596 </ul>
2597 <td>
2598 <table>
2599 <tr><th>src<th>dst
2600 <tr><td>All<td>All
2601 </table>
2602<tr>
2603 <td>CLReorgLayer
2604 <td>
2605 <ul>
2606 <li>NHWC
2607 <li>NCHW
2608 </ul>
2609 <td>
2610 <table>
2611 <tr><th>src<th>dst
2612 <tr><td>All<td>All
2613      </table>
2614<tr>
2615 <td rowspan="2">ReshapeLayer
2616     <td rowspan="2" style="width:200px;"> Function to reshape a tensor.
2617     <td rowspan="2">
2618 <ul>
2619 <li>ANEURALNETWORKS_RESHAPE
2620 <li>ANEURALNETWORKS_SQUEEZE
2621 </ul>
2622 <td>NEReshapeLayer
2623 <td>
2624 <ul>
2625 <li>All
2626 </ul>
2627 <td>
2628 <table>
2629 <tr><th>src<th>dst
2630 <tr><td>All<td>All
2631 </table>
2632<tr>
2633 <td>CLReshapeLayer
2634 <td>
2635 <ul>
2636 <li>All
2637 </ul>
2638 <td>
2639 <table>
2640 <tr><th>src<th>dst
2641 <tr><td>All<td>All
2642 </table>
2643<tr>
2644     <td rowspan="2">Reverse
2645     <td rowspan="2" style="width:200px;"> Function to reverse a tensor along the given axes.
2646 <td rowspan="2">
2647 <ul>
2648 <li>n/a
2649 </ul>
2650 <td>NEReverse
2651 <td>
2652 <ul>
2653 <li>All
2654 </ul>
2655 <td>
2656 <table>
2657 <tr><th>src0<th>src1<th>dst
2658       <tr><td>All<td>U32, S32<td>All
2659      </table>
2660<tr>
2661 <td>CLReverse
2662 <td>
2663 <ul>
2664 <li>All
2665 </ul>
2666 <td>
2667 <table>
2668 <tr><th>src0<th>src1<th>dst
2669 <tr><td>All<td>U32<td>All
2670 </table>
2671<tr>
2672 <td rowspan="2">RNNLayer
2673     <td rowspan="2" style="width:200px;"> Function to perform a recurrent neural network layer.
2674 <td rowspan="2">
2675 <ul>
2676 <li>ANEURALNETWORKS_RNN
2677 </ul>
2678 <td>NERNNLayer
2679 <td>
2680 <ul>
2681 <li>NHWC
2682 <li>NCHW
2683 </ul>
2684 <td>
2685 <table>
2686 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2687 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2688 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2689 </table>
2690<tr>
2691 <td>CLRNNLayer
2692 <td>
2693 <ul>
2694 <li>NHWC
2695 <li>NCHW
2696 </ul>
2697 <td>
2698 <table>
2699 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2700 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2701 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2702 </table>
2703<tr>
2704 <td rowspan="2">ROIAlignLayer
2705 <td rowspan="2" style="width:200px;"> Function to perform ROI alignment.
2706 <td rowspan="2">
2707 <ul>
2708 <li>ANEURALNETWORKS_ROI_ALIGN
2709 </ul>
2710 <td>NEROIAlignLayer
2711 <td>
2712 <ul>
2713 <li>All
2714 </ul>
2715 <td>
2716 <table>
2717 <tr><th>src0<th>src1<th>dst
2718 <tr><td>F16<td>F16<td>F16
2719 <tr><td>F32<td>F32<td>F32
2720 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2721 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2722 </table>
2723<tr>
2724 <td>CLROIAlignLayer
2725 <td>
2726 <ul>
2727 <li>All
2728 </ul>
2729 <td>
2730 <table>
2731 <tr><th>src0<th>src1<th>dst
2732 <tr><td>F16<td>F16<td>F16
2733 <tr><td>F32<td>F32<td>F32
2734 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2735 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2736 </table>
2737<tr>
2738 <td rowspan="2">ROIPoolingLayer
2739 <td rowspan="2" style="width:200px;"> Function to perform ROI pooling.
2740 <td rowspan="2">
2741 <ul>
2742 <li>ANEURALNETWORKS_ROI_POOLING
2743 </ul>
2744 <td>NEROIPoolingLayer
2745 <td>
2746 <ul>
2747 <li>All
2748 </ul>
2749 <td>
2750 <table>
2751 <tr><th>src0<th>src1<th>dst
2752 <tr><td>F32<td>U16<td>F32
2753 <tr><td>QASYMM8<td>U16<td>QASYMM8
2754 </table>
2755<tr>
2756 <td>CLROIPoolingLayer
2757 <td>
2758 <ul>
2759 <li>All
2760 </ul>
2761 <td>
2762 <table>
2763 <tr><th>src0<th>src1<th>dst
2764 <tr><td>F16<td>U16<td>F16
2765 <tr><td>F32<td>U16<td>F32
2766 <tr><td>QASYMM8<td>U16<td>QASYMM8
2767 </table>
2768<tr>
2769     <td rowspan="2">Scale
2770     <td rowspan="2" style="width:200px;"> Function to resize a tensor using one of the following interpolation methods: Bilinear or Nearest neighbor.
2771     <td rowspan="2">
2772 <ul>
2773 <li>ANEURALNETWORKS_RESIZE_BILINEAR
2774 <li>ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
2775 </ul>
2776 <td>NEScale
2777 <td>
2778 <ul>
2779 <li>NHWC
2780 <li>NCHW
2781 </ul>
2782 <td>
2783 <table>
2784 <tr><th>src<th>dst
2785 <tr><td>QASYMM8<td>QASYMM8
2786 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2787 <tr><td>F16<td>F16
2788 <tr><td>F32<td>F32
2789 <tr><td>U8<td>U8
2790       <tr><td>S8<td>S8
2791       <tr><td>S16<td>S16
2792 </table>
2793<tr>
2794 <td>CLScale
2795 <td>
2796 <ul>
2797 <li>NHWC
2798 <li>NCHW
2799 </ul>
2800 <td>
2801 <table>
2802 <tr><th>src<th>dst
2803 <tr><td>QASYMM8<td>QASYMM8
2804 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2805 <tr><td>F16<td>F16
2806 <tr><td>F32<td>F32
2807 <tr><td>U8<td>U8
2808 <tr><td>S16<td>S16
2809 </table>
2810<tr>
2811     <td rowspan="2">Select
2812 <td rowspan="2" style="width:200px;"> Function to select values from 2 tensors depending on an input tensor of booleans.
2813 <td rowspan="2">
2814 <ul>
2815 <li>ANEURALNETWORKS_SELECT
2816 </ul>
2817 <td>NESelect
2818 <td>
2819 <ul>
2820 <li>All
2821 </ul>
2822 <td>
2823 <table>
2824 <tr><th>src0<th>src1<th>src2<th>dst
2825 <tr><td>U8<td>All<td>All<td>All
2826 </table>
2827<tr>
2828 <td>CLSelect
2829 <td>
2830 <ul>
2831 <li>All
2832 </ul>
2833 <td>
2834 <table>
2835 <tr><th>src0<th>src1<th>src2<th>dst
2836 <tr><td>U8<td>All<td>All<td>All
2837 </table>
2838<tr>
2839     <td rowspan="2">Slice
2840 <td rowspan="2" style="width:200px;"> Function to perform tensor slicing.
2841 <td rowspan="2">
2842 <ul>
2843 <li>ANEURALNETWORKS_SLICE
2844 </ul>
2845 <td>NESlice
2846 <td>
2847 <ul>
2848 <li>All
2849 </ul>
2850 <td>
2851 <table>
2852 <tr><th>src<th>dst
2853 <tr><td>All<td>All
2854 </table>
2855<tr>
2856 <td>CLSlice
2857 <td>
2858 <ul>
2859 <li>All
2860 </ul>
2861 <td>
2862 <table>
2863 <tr><th>src<th>dst
2864 <tr><td>All<td>All
2865 </table>
2866<tr>
2867     <td rowspan="2">SoftmaxLayer
2868 <td rowspan="2" style="width:200px;"> Function to compute a SoftmaxLayer and a Log SoftmaxLayer.
2869 <td rowspan="2">
2870 <ul>
2871 <li>ANEURALNETWORKS_LOG_SOFTMAX
2872 <li>ANEURALNETWORKS_SOFTMAX
2873 </ul>
2874 <td>NESoftmaxLayerGeneric
2875 <td>
2876 <ul>
2877 <li>All
2878 </ul>
2879 <td>
2880 <table>
2881 <tr><th>src<th>dst
2882 <tr><td>QASYMM8<td>QASYMM8
2883 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2884 <tr><td>F16<td>F16
2885 <tr><td>F32<td>F32
2886 </table>
2887<tr>
2888 <td>CLSoftmaxLayerGeneric
2889 <td>
2890 <ul>
2891 <li>All
2892 </ul>
2893 <td>
2894 <table>
2895 <tr><th>src<th>dst
2896 <tr><td>QASYMM8<td>QASYMM8
2897 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2898 <tr><td>F16<td>F16
2899 <tr><td>F32<td>F32
2900 </table>
2901<tr>
2902     <td rowspan="2">SpaceToBatchLayer
2903     <td rowspan="2" style="width:200px;"> Function to divide a tensor spatially into blocks and rearrange them into the batch dimension.
2904 <td rowspan="2">
2905 <ul>
2906 <li>ANEURALNETWORKS_SPACE_TO_BATCH_ND
2907 </ul>
2908 <td>NESpaceToBatchLayer
2909 <td>
2910 <ul>
2911 <li>NHWC
2912 <li>NCHW
2913 </ul>
2914 <td>
2915 <table>
2916 <tr><th>src0<th>src1<th>src2<th>dst
2917 <tr><td>All<td>S32<td>S32<td>All
2918 </table>
2919<tr>
2920 <td>CLSpaceToBatchLayer
2921 <td>
2922 <ul>
2923 <li>NHWC
2924 <li>NCHW
2925 </ul>
2926 <td>
2927 <table>
2928 <tr><th>src0<th>src1<th>src2<th>dst
2929 <tr><td>All<td>S32<td>S32<td>All
2930 </table>
2931<tr>
2932 <td rowspan="2">SpaceToDepthLayer
2933 <td rowspan="2" style="width:200px;"> Function to rearrange blocks of spatial data into depth.
2934 <td rowspan="2">
2935 <ul>
2936 <li>ANEURALNETWORKS_SPACE_TO_DEPTH
2937 </ul>
2938 <td>NESpaceToDepthLayer
2939 <td>
2940 <ul>
2941 <li>NHWC
2942 <li>NCHW
2943 </ul>
2944 <td>
2945 <table>
2946 <tr><th>src<th>dst
2947 <tr><td>All<td>All
2948 </table>
2949<tr>
2950 <td>CLSpaceToDepthLayer
2951 <td>
2952 <ul>
2953 <li>NHWC
2954 <li>NCHW
2955 </ul>
2956 <td>
2957 <table>
2958 <tr><th>src<th>dst
2959 <tr><td>All<td>All
2960 </table>
2961<tr>
2962 <td rowspan="2">Split
2963 <td rowspan="2" style="width:200px;"> Function to split a tensor along a given axis.
2964 <td rowspan="2">
2965 <ul>
2966 <li>ANEURALNETWORKS_SPLIT
2967 </ul>
2968 <td>NESplit
2969 <td>
2970 <ul>
2971 <li>All
2972 </ul>
2973 <td>
2974 <table>
2975 <tr><th>src<th>dst
2976 <tr><td>All<td>All
2977 </table>
2978<tr>
2979 <td>CLSplit
2980 <td>
2981 <ul>
2982 <li>All
2983 </ul>
2984 <td>
2985 <table>
2986 <tr><th>src<th>dst
2987 <tr><td>All<td>All
2988 </table>
2989<tr>
2990 <td rowspan="2">StackLayer
2991 <td rowspan="2" style="width:200px;"> Function to stack tensors along an axis.
2992 <td rowspan="2">
2993 <ul>
2994 <li>n/a
2995 </ul>
2996 <td>NEStackLayer
2997 <td>
2998 <ul>
2999 <li>All
3000 </ul>
3001 <td>
3002 <table>
3003 <tr><th>src<th>dst
3004 <tr><td>All<td>All
3005 </table>
3006<tr>
3007 <td>CLStackLayer
3008 <td>
3009 <ul>
3010 <li>All
3011 </ul>
3012 <td>
3013 <table>
3014 <tr><th>src<th>dst
3015 <tr><td>All<td>All
3016 </table>
3017<tr>
3018     <td rowspan="2">StridedSlice
3019 <td rowspan="2" style="width:200px;"> Function to extract a strided slice of a tensor.
3020 <td rowspan="2">
3021 <ul>
3022 <li>ANEURALNETWORKS_STRIDED_SLICE
3023 </ul>
3024 <td>NEStridedSlice
3025 <td>
3026 <ul>
3027 <li>All
3028 </ul>
3029 <td>
3030 <table>
3031 <tr><th>src<th>dst
3032 <tr><td>All<td>All
3033 </table>
3034<tr>
3035 <td>CLStridedSlice
3036 <td>
3037 <ul>
3038 <li>All
3039 </ul>
3040 <td>
3041 <table>
3042 <tr><th>src<th>dst
3043 <tr><td>All<td>All
3044 </table>
3045<tr>
3046     <td rowspan="2">Tile
3047 <td rowspan="2" style="width:200px;"> Function to construct a tensor by tiling a given tensor.
3048 <td rowspan="2">
3049 <ul>
3050 <li>ANEURALNETWORKS_TILE
3051 </ul>
3052 <td>NETile
3053 <td>
3054 <ul>
3055 <li>All
3056 </ul>
3057 <td>
3058 <table>
3059 <tr><th>src<th>dst
3060 <tr><td>All<td>All
3061 </table>
3062<tr>
3063 <td>CLTile
3064 <td>
3065 <ul>
3066 <li>All
3067 </ul>
3068 <td>
3069 <table>
3070 <tr><th>src<th>dst
3071 <tr><td>All<td>All
3072 </table>
3073<tr>
3074     <td rowspan="2">Transpose
3075     <td rowspan="2" style="width:200px;"> Function to transpose a 2D tensor.
3076     <td rowspan="2">
3077 <ul>
3078 <li>ANEURALNETWORKS_TRANSPOSE
3079 </ul>
3080 <td>NETranspose
3081 <td>
3082 <ul>
3083 <li>All
3084 </ul>
3085 <td>
3086 <table>
3087 <tr><th>src<th>dst
3088 <tr><td>All<td>All
3089 </table>
3090<tr>
3091 <td>CLTranspose
3092 <td>
3093 <ul>
3094 <li>All
3095 </ul>
3096 <td>
3097 <table>
3098 <tr><th>src<th>dst
3099 <tr><td>All<td>All
3100 </table>
3101<tr>
3102 <td rowspan="2">Unstack
3103 <td rowspan="2" style="width:200px;"> Function to unpack a rank-R tensor into rank-(R-1) tensors.
3104 <td rowspan="2">
3105 <ul>
3106 <li>n/a
3107 </ul>
3108 <td>NEUnstack
3109 <td>
3110 <ul>
3111 <li>All
3112 </ul>
3113 <td>
3114 <table>
3115 <tr><th>src<th>dst
3116 <tr><td>All<td>All
3117 </table>
3118<tr>
3119 <td>CLUnstack
3120 <td>
3121 <ul>
3122 <li>All
3123 </ul>
3124 <td>
3125 <table>
3126 <tr><th>src<th>dst
3127 <tr><td>All<td>All
3128 </table>
3129<tr>
3130 <td rowspan="2">WinogradConvolutionLayer
3131     <td rowspan="2" style="width:200px;"> Function to perform a Winograd convolution.
3132 <td rowspan="2">
3133 <ul>
3134 <li>ANEURALNETWORKS_CONV_2D
3135 </ul>
3136 <td>NEWinogradConvolutionLayer
3137 <td>
3138 <ul>
3139 <li>NHWC
3140 <li>NCHW
3141 </ul>
3142 <td>
3143 <table>
3144 <tr><th>src0<th>src1<th>src2<th>dst
3145 <tr><td>F16<td>F16<td>F16<td>F16
3146 <tr><td>F32<td>F32<td>F32<td>F32
3147 </table>
3148<tr>
3149 <td>CLWinogradConvolutionLayer
3150 <td>
3151 <ul>
3152 <li>NHWC
3153 <li>NCHW
3154 </ul>
3155 <td>
3156 <table>
3157 <tr><th>src0<th>src1<th>src2<th>dst
3158 <tr><td>F16<td>F16<td>F16<td>F16
3159 <tr><td>F32<td>F32<td>F32<td>F32
3160 </table>
3161</table>
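
All of the functions in the table above follow the same runtime pattern: initialise the tensor metadata, call configure() with the tensors and the operator-specific info object, allocate the tensor backing memory, then call run(). The snippet below is a minimal sketch of that pattern using NEPoolingLayer; the tensor shapes and the 2x2, stride-2 max-pooling parameters are illustrative only. The CL functions are used in the same way, but additionally require the OpenCL runtime to be initialised (for example via CLScheduler::get().default_init()) and CLTensor objects.

@code{.cpp}
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Illustrative shapes only: 32x32 spatial size, 16 channels, NCHW layout.
    Tensor src;
    Tensor dst;
    src.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(16U, 16U, 16U), 1, DataType::F32));

    // Configure a 2x2 max pooling with stride 2 and no padding.
    NEPoolingLayer pool;
    pool.configure(&src, &dst, PoolingLayerInfo(PoolingType::MAX, 2, DataLayout::NCHW, PadStrideInfo(2, 2, 0, 0)));

    // Allocate the backing memory after configuration, then execute the operator.
    src.allocator()->allocate();
    dst.allocator()->allocate();
    pool.run();

    return 0;
}
@endcode

Most functions also provide a static validate() overload that can be used to check a given configuration against the data type and data layout support listed above before any memory is allocated.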
3162
3163*/
3164} // namespace