///
/// Copyright (c) 2021 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/**
@page operators_list Supported Operators

@tableofcontents

@section S9_1_operators_list Supported Operators

Compute Library supports the operators listed in the table below.

Compute Library supports a wide range of data types; detailed information can be found directly in the documentation of each kernel/function.
The main data types that the Machine Learning functions support are the following:
    <ul>
      <li>BFLOAT16: 16-bit non-standard brain floating point
      <li>QASYMM8: 8-bit unsigned asymmetric quantized
      <li>QASYMM8_SIGNED: 8-bit signed asymmetric quantized
      <li>QSYMM8_PER_CHANNEL: 8-bit signed symmetric quantized (Used for the weights)
      <li>QSYMM8: 8-bit unsigned symmetric quantized
      <li>QSYMM16: 16-bit unsigned symmetric quantized
      <li>F32: 32-bit single precision floating point
      <li>F16: 16-bit half precision floating point
      <li>S32: 32-bit signed integer
      <li>U8: 8-bit unsigned char
      <li>All: Agnostic to any specific data type
    </ul>

Compute Library supports the following data layouts (fast changing dimension from right to left):
    <ul>
      <li>NHWC: The native layout of Compute Library that delivers the best performance where channels are in the fastest changing dimension
      <li>NCHW: Legacy layout where width is in the fastest changing dimension
      <li>NDHWC: New data layout for supporting 3D operators
      <li>All: Agnostic to any specific data layout
    </ul>
where N = batches, C = channels, H = height, W = width, D = depth

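As a quick orientation before the table, the snippet below is a minimal sketch of how one of the listed functions is typically configured and run with one of the supported data types (F32) and data layouts (NHWC). The tensor shape, the choice of NEActivationLayer and the RELU activation function are illustrative assumptions only; functions in the table follow the same configure/allocate/run pattern.

@code{.cpp}
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Describe a small NHWC F32 tensor; the shape is an arbitrary example.
    TensorInfo info(TensorShape(8U, 8U, 3U), 1, DataType::F32);
    info.set_data_layout(DataLayout::NHWC);

    Tensor src, dst;
    src.allocator()->init(info);
    dst.allocator()->init(info);

    // Configure the CPU backend function (see ActivationLayer in the table below).
    NEActivationLayer act;
    act.configure(&src, &dst, ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU));

    // Allocate backing memory, then execute.
    src.allocator()->allocate();
    dst.allocator()->allocate();
    act.run();

    return 0;
}
@endcode
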
<table>
<caption id="multi_row"></caption>
<tr>
    <th>Function
    <th>Description
    <th>Equivalent Android NNAPI Op
    <th>Backends
    <th>Data Layouts
    <th>Data Types
69<tr>
70 <td rowspan="2">ActivationLayer
71 <td rowspan="2" style="width:200px;"> Function to simulate an activation layer with the specified activation function.
72 <td rowspan="2">
73 <ul>
74 <li>ANEURALNETWORKS_ELU
75 <li>ANEURALNETWORKS_HARD_SWISH
76 <li>ANEURALNETWORKS_LOGISTIC
77 <li>ANEURALNETWORKS_RELU
78 <li>ANEURALNETWORKS_RELU1
79 <li>ANEURALNETWORKS_RELU6
80 <li>ANEURALNETWORKS_TANH
81 </ul>
82 <td>NEActivationLayer
83 <td>
84 <ul>
85 <li>All
86 </ul>
87 <td>
88 <table>
89 <tr><th>src<th>dst
90 <tr><td>QASYMM8<td>QASYMM8
91 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
92 <tr><td>QSYMM16<td>QSYMM16
93 <tr><td>F16<td>F16
94 <tr><td>F32<td>F32
95 </table>
96<tr>
97 <td>CLActivationLayer
98 <td>
99 <ul>
100 <li>All
101 </ul>
102 <td>
103 <table>
104 <tr><th>src<th>dst
105 <tr><td>QASYMM8<td>QASYMM8
106 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
107 <tr><td>QSYMM16<td>QSYMM16
108 <tr><td>F16<td>F16
109 <tr><td>F32<td>F32
110 </table>
111<tr>
    <td rowspan="2">ArgMinMaxLayer
113 <td rowspan="2" style="width:200px;"> Function to calculate the index of the minimum or maximum values in a tensor based on an axis.
114 <td rowspan="2">
115 <ul>
116 <li>ANEURALNETWORKS_ARGMAX
117 <li>ANEURALNETWORKS_ARGMIN
118 </ul>
119 <td>NEArgMinMaxLayer
120 <td>
121 <ul>
122 <li>All
123 </ul>
124 <td>
125 <table>
126 <tr><th>src<th>dst
127 <tr><td>QASYMM8<td>U32, S32
128 <tr><td>QASYMM8_SIGNED<td>U32, S32
129 <tr><td>S32<td>U32, S32
130 <tr><td>F16<td>U32, S32
131 <tr><td>F32<td>U32, S32
132 </table>
133<tr>
134 <td>CLArgMinMaxLayer
135 <td>
136 <ul>
137 <li>All
138 </ul>
139 <td>
140 <table>
141 <tr><th>src<th>dst
142 <tr><td>QASYMM8<td>U32, S32
143 <tr><td>QASYMM8_SIGNED<td>U32, S32
144 <tr><td>S32<td>U32, S32
145 <tr><td>F16<td>U32, S32
146 <tr><td>F32<td>U32, S32
147 </table>
148<tr>
    <td rowspan="1">ArithmeticAddition
150 <td rowspan="1" style="width:200px;"> Function to add 2 tensors.
151 <td rowspan="1">
152 <ul>
153 <li>ANEURALNETWORKS_ADD
154 </ul>
155 <td>NEArithmeticAddition
156 <td>
157 <ul>
158 <li>All
159 </ul>
160 <td>
161 <table>
162 <tr><th>src0<th>src1<th>dst
163 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
164 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
165 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
166 <tr><td>QSYMM16<td>QSYMM16<td>S32
167 <tr><td>U8<td>U8<td>U8
       <tr><td>S16<td>S16<td>S16
169 <tr><td>S32<td>S32<td>S32
170 <tr><td>F16<td>F16<td>F16
171 <tr><td>F32<td>F32<td>F32
172 </table>
173<tr>
174 <td rowspan="1">ArithmeticSubtraction
    <td rowspan="1" style="width:200px;"> Function to subtract 2 tensors.
176 <td rowspan="1">
177 <ul>
178 <li>ANEURALNETWORKS_SUB
179 </ul>
180 <td>NEArithmeticSubtraction
181 <td>
182 <ul>
183 <li>All
184 </ul>
185 <td>
186 <table>
187 <tr><th>src0<th>src1<th>dst
188 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
189 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
190 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
191 <tr><td>QSYMM16<td>QSYMM16<td>S32
192 <tr><td>U8<td>U8<td>U8
       <tr><td>S16<td>S16<td>S16
194 <tr><td>S32<td>S32<td>S32
195 <tr><td>F16<td>F16<td>F16
196 <tr><td>F32<td>F32<td>F32
197 </table>
198<tr>
    <td rowspan="2">BatchNormalizationLayer
200 <td rowspan="2" style="width:200px;"> Function to perform batch normalization.
201 <td rowspan="2">
202 <ul>
203 <li>n/a
204 </ul>
205 <td>NEBatchNormalizationLayer
206 <td>
207 <ul>
208 <li>NHWC
209 <li>NCHW
210 </ul>
211 <td>
212 <table>
213 <tr><th>src<th>dst
214 <tr><td>F32<td>F32
215 <tr><td>F16<td>F16
216 </table>
217<tr>
218 <td>CLBatchNormalizationLayer
219 <td>
220 <ul>
221 <li>NHWC
222 <li>NCHW
223 </ul>
224 <td>
225 <table>
226 <tr><th>src<th>dst
227 <tr><td>F32<td>F32
228 <tr><td>F16<td>F16
229 </table>
230<tr>
231 <td rowspan="2">BatchToSpaceLayer
232 <td rowspan="2" style="width:200px;"> Batch to space transformation.
233 <td rowspan="2">
234 <ul>
235 <li>ANEURALNETWORKS_BATCH_TO_SPACE_ND
236 </ul>
237 <td>NEBatchToSpaceLayer
238 <td>
239 <ul>
240 <li>NHWC
241 <li>NCHW
242 </ul>
243 <td>
244 <table>
245 <tr><th>src0<th>src1<th>dst
       <tr><td>All<td>S32<td>All
247 </table>
248<tr>
249 <td>CLBatchToSpaceLayer
250 <td>
251 <ul>
252 <li>NHWC
253 <li>NCHW
254 </ul>
255 <td>
256 <table>
257 <tr><th>src0<th>src1<th>dst
       <tr><td>All<td>S32<td>All
259 </table>
260<tr>
261 <td rowspan="2">BitwiseAnd
    <td rowspan="2" style="width:200px;"> Function to perform bitwise AND between 2 tensors.
    <td rowspan="2">
264 <ul>
265 <li>ANEURALNETWORKS_LOGICAL_AND
266 </ul>
267 <td>NEBitwiseAnd
268 <td>
269 <ul>
270 <li>All
271 </ul>
272 <td>
273 <table>
274 <tr><th>src<th>dst
275 <tr><td>U8<td>U8
276 </table>
277<tr>
278 <td>CLBitwiseAnd
279 <td>
280 <ul>
281 <li>All
282 </ul>
283 <td>
284 <table>
285 <tr><th>src<th>dst
286 <tr><td>U8<td>U8
287 </table>
288<tr>
289 <td rowspan="2">BitwiseNot
    <td rowspan="2" style="width:200px;"> Function to perform bitwise NOT.
    <td rowspan="2">
292 <ul>
293 <li>ANEURALNETWORKS_LOGICAL_NOT
294 </ul>
295 <td>NEBitwiseNot
296 <td>
297 <ul>
298 <li>All
299 </ul>
300 <td>
301 <table>
302 <tr><th>src<th>dst
303 <tr><td>U8<td>U8
304 </table>
305<tr>
306 <td>CLBitwiseNot
307 <td>
308 <ul>
309 <li>All
310 </ul>
311 <td>
312 <table>
313 <tr><th>src<th>dst
314 <tr><td>U8<td>U8
315 </table>
316<tr>
317 <td rowspan="2">BitwiseOr
    <td rowspan="2" style="width:200px;"> Function to perform bitwise OR between 2 tensors.
    <td rowspan="2">
320 <ul>
321 <li>ANEURALNETWORKS_LOGICAL_OR
322 </ul>
323 <td>NEBitwiseOr
324 <td>
325 <ul>
326 <li>All
327 </ul>
328 <td>
329 <table>
330 <tr><th>src<th>dst
331 <tr><td>U8<td>U8
332 </table>
333<tr>
334 <td>CLBitwiseOr
335 <td>
336 <ul>
337 <li>All
338 </ul>
339 <td>
340 <table>
341 <tr><th>src<th>dst
342 <tr><td>U8<td>U8
343 </table>
344<tr>
345 <td rowspan="2">BitwiseXor
    <td rowspan="2" style="width:200px;"> Function to perform bitwise XOR between 2 tensors.
    <td rowspan="2">
348 <ul>
349 <li>n/a
350 </ul>
351 <td>NEBitwiseXor
352 <td>
353 <ul>
354 <li>All
355 </ul>
356 <td>
357 <table>
358 <tr><th>src<th>dst
359 <tr><td>U8<td>U8
360 </table>
361<tr>
362 <td>CLBitwiseXor
363 <td>
364 <ul>
365 <li>All
366 </ul>
367 <td>
368 <table>
369 <tr><th>src<th>dst
370 <tr><td>U8<td>U8
371 </table>
372<tr>
373 <td rowspan="2">BoundingBoxTransform
374 <td rowspan="2" style="width:200px;"> Transform proposal bounding boxes to target bounding box using bounding box deltas.
375 <td rowspan="2">
376 <ul>
377 <li>n/a
378 </ul>
379 <td>NEBoundingBoxTransform
380 <td>
381 <ul>
382 <li>NHWC
383 <li>NCHW
384 </ul>
385 <td>
386 <table>
387 <tr><th>src0<th>src1<th>dst
388 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
389 <tr><td>F16<td>F16<td>F16
390 <tr><td>F32<td>F32<td>F32
391 </table>
392<tr>
393 <td>CLBoundingBoxTransform
394 <td>
395 <ul>
396 <li>NHWC
397 <li>NCHW
398 </ul>
399 <td>
400 <table>
401 <tr><th>src0<th>src1<th>dst
402 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
403 <tr><td>F16<td>F16<td>F16
404 <tr><td>F32<td>F32<td>F32
405 </table>
406<tr>
407 <td rowspan="2">Cast
408 <td rowspan="2" style="width:200px;"> Function to cast a tensor.
409 <td rowspan="2">
410 <ul>
411 <li>ANEURALNETWORKS_CAST
412 </ul>
413 <td>NECast
414 <td>
415 <ul>
416 <li>All
417 </ul>
418 <td>
419 <table>
420 <tr><th>src<th>dst
421 <tr><td>QASYMM8_SIGNED<td>S16, S32, F32, F16
422 <tr><td>QASYMM8<td>U16, S16, S32, F32, F16
423 <tr><td>U8<td>U16, S16, S32, F32, F16
424 <tr><td>U16<td>U8, U32
425 <tr><td>S16<td>QASYMM8_SIGNED, U8, S32
426 <tr><td>F16<td>QASYMM8_SIGNED, QASYMM8, F32, S32, U8
427 <tr><td>S32<td>QASYMM8_SIGNED, QASYMM8, F16, F32, U8
428 <tr><td>F32<td>QASYMM8_SIGNED, QASYMM8, BFLOAT16, F16, S32, U8
429 </table>
430<tr>
431 <td>CLCast
432 <td>
433 <ul>
434 <li>All
435 </ul>
436 <td>
437 <table>
438 <tr><th>src<th>dst
439 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
440 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
441 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
442 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
443 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
444 <tr><td>F16<td>U8, S8, U16, S16, U32, F32
445 <tr><td>F32<td>U8, S8, U16, S16, U32, F16
446 </table>
447<tr>
448 <td rowspan="2">ChannelShuffleLayer
449 <td rowspan="2" style="width:200px;"> Function to shuffle the channels of the input tensor.
450 <td rowspan="2">
451 <ul>
452 <li>ANEURALNETWORKS_CHANNEL_SHUFFLE
453 </ul>
454 <td>NEChannelShuffleLayer
455 <td>
456 <ul>
457 <li>NCHW
       <li>NHWC
      </ul>
460 <td>
461 <table>
462 <tr><th>src<th>dst
463 <tr><td>All<td>All
464 </table>
465<tr>
466 <td>CLChannelShuffleLayer
467 <td>
468 <ul>
469 <li>NCHW
       <li>NHWC
      </ul>
472 <td>
473 <table>
474 <tr><th>src<th>dst
475 <tr><td>All<td>All
476 </table>
477<tr>
    <td rowspan="1">Comparison
479 <td rowspan="1" style="width:200px;"> Function to compare 2 tensors.
480 <td rowspan="1">
481 <ul>
482 <li>ANEURALNETWORKS_EQUAL
483 <li>ANEURALNETWORKS_GREATER
484 <li>ANEURALNETWORKS_GREATER_EQUAL
485 <li>ANEURALNETWORKS_LESS
486 <li>ANEURALNETWORKS_LESS_EQUAL
487 <li>ANEURALNETWORKS_NOT_EQUAL
488 </ul>
489 <td>CLComparison
490 <td>
491 <ul>
492 <li>All
493 </ul>
494 <td>
495 <table>
496 <tr><th>src0<th>src1<th>dst
497 <tr><td>All<td>All<td>U8
498 </table>
499<tr>
    <td rowspan="2">ConcatenateLayer
501 <td rowspan="2" style="width:200px;"> Function to concatenate tensors along a given axis.
502 <td rowspan="2">
503 <ul>
504 <li>ANEURALNETWORKS_CONCATENATION
505 </ul>
506 <td>NEConcatenateLayer
507 <td>
508 <ul>
509 <li>All
510 </ul>
511 <td>
512 <table>
513 <tr><th>src<th>dst
514 <tr><td>QASYMM8<td>QASYMM8
515 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
516 <tr><td>F16<td>F16
517 <tr><td>F32<td>F32
518 </table>
519<tr>
520 <td>CLConcatenateLayer
521 <td>
522 <ul>
523 <li>All
524 </ul>
525 <td>
526 <table>
527 <tr><th>src<th>dst
528 <tr><td>QASYMM8<td>QASYMM8
529 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
530 <tr><td>F16<td>F16
531 <tr><td>F32<td>F32
532 </table>
533<tr>
534 <td rowspan="2">ConvertFullyConnectedWeights
    <td rowspan="2" style="width:200px;"> Function to transpose the weights for the fully connected layer.
    <td rowspan="2">
537 <ul>
       <li>n/a
      </ul>
540 <td>NEConvertFullyConnectedWeights
541 <td>
542 <ul>
543 <li>NHWC
544 <li>NCHW
545 </ul>
546 <td>
547 <table>
548 <tr><th>src<th>dst
549 <tr><td>All<td>All
550 </table>
551<tr>
552 <td>CLConvertFullyConnectedWeights
553 <td>
554 <ul>
555 <li>NHWC
556 <li>NCHW
557 </ul>
558 <td>
559 <table>
560 <tr><th>src<th>dst
561 <tr><td>All<td>All
562 </table>
563<tr>
    <td rowspan="2">ConvolutionLayer
565 <td rowspan="2" style="width:200px;"> Function to compute a convolution layer.
566 <td rowspan="2">
567 <ul>
568 <li>ANEURALNETWORKS_CONV_2D
569 </ul>
570 <td>NEConvolutionLayer
571 <td>
572 <ul>
573 <li>NHWC
574 <li>NCHW
575 </ul>
576 <td>
577 <table>
578 <tr><th>src0<th>src1<th>src2<th>dst
579 <tr><td>F16<td>F16<td>F16<td>F16
580 <tr><td>F32<td>F32<td>F32<td>F32
581 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
582 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
583 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
584 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
585 </table>
586<tr>
587 <td>CLConvolutionLayer
588 <td>
589 <ul>
590 <li>NHWC
591 <li>NCHW
592 </ul>
593 <td>
594 <table>
595 <tr><th>src0<th>src1<th>src2<th>dst
596 <tr><td>F16<td>F16<td>F16<td>F16
597 <tr><td>F32<td>F32<td>F32<td>F32
598 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
599 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
600 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
601 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
602 </table>
603<tr>
    <td rowspan="2">Conv3D
    <td rowspan="2" style="width:200px;"> Function to compute a 3D convolution layer.
606 <td rowspan="2">
607 <ul>
608 <li>ANEURALNETWORKS_CONV_3D
609 </ul>
610 <td>NEConv3D
611 <td>
612 <ul>
613 <li>NDHWC
614 </ul>
615 <td>
616 <table>
617 <tr><th>src0<th>src1<th>src2<th>dst
618 <tr><td>F16<td>F16<td>F16<td>F16
619 <tr><td>F32<td>F32<td>F32<td>F32
620 </table>
621<tr>
622 <td>CLConv3D
623 <td>
624 <ul>
625 <li>NDHWC
626 </ul>
627 <td>
628 <table>
629 <tr><th>src0<th>src1<th>src2<th>dst
630 <tr><td>F16<td>F16<td>F16<td>F16
631 <tr><td>F32<td>F32<td>F32<td>F32
       <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
633 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
       </table>
635<tr>
    <td rowspan="2">Copy
637 <td rowspan="2" style="width:200px;"> Function to copy a tensor.
638 <td rowspan="2">
639 <ul>
       <li>n/a
      </ul>
642 <td>NECopy
643 <td>
644 <ul>
645 <li>All
646 </ul>
647 <td>
648 <table>
649 <tr><th>src<th>dst
650 <tr><td>All<td>All
651 </table>
652<tr>
653 <td>CLCopy
654 <td>
655 <ul>
656 <li>All
657 </ul>
658 <td>
659 <table>
660 <tr><th>src<th>dst
661 <tr><td>All<td>All
662 </table>
663<tr>
    <td rowspan="1">Crop
    <td rowspan="1" style="width:200px;"> Performs a copy of the input tensor to the output tensor.
666 <td rowspan="1">
667 <ul>
668 <li>n/a
669 </ul>
670 <td>CLCrop
671 <td>
672 <ul>
673 <li>NHWC
674 </ul>
675 <td>
676 <table>
677 <tr><th>src<th>dst
678 <tr><td>All<td>F32
679 </table>
680<tr>
    <td rowspan="2">CropResize
682 <td rowspan="2" style="width:200px;"> Function to perform cropping and resizing.
683 <td rowspan="2">
684 <ul>
685 <li>n/a
686 </ul>
687 <td>NECropResize
688 <td>
689 <ul>
690 <li>NHWC
691 </ul>
692 <td>
693 <table>
694 <tr><th>src0<th>src1<th>src2<th>dst
695 <tr><td>All<td>F32<td>F32<td>F32
696 </table>
697<tr>
698 <td>CLCropResize
699 <td>
700 <ul>
701 <li>NHWC
702 </ul>
703 <td>
704 <table>
705 <tr><th>src0<th>src1<th>src2<th>dst
706 <tr><td>All<td>F32<td>F32<td>F32
707 </table>
708<tr>
709 <td rowspan="2">DeconvolutionLayer
    <td rowspan="2" style="width:200px;"> Function to compute a deconvolution or transpose convolution.
    <td rowspan="2">
712 <ul>
713 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
714 </ul>
715 <td>NEDeconvolutionLayer
716 <td>
717 <ul>
718 <li>NHWC
719 <li>NCHW
720 </ul>
721 <td>
722 <table>
723 <tr><th>src0<th>src1<th>src2<th>dst
724 <tr><td>F16<td>F16<td>F16<td>F16
725 <tr><td>F32<td>F32<td>F32<td>F32
726 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
727 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
728 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
729 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
730 </table>
731<tr>
732 <td>CLDeconvolutionLayer
733 <td>
734 <ul>
735 <li>NHWC
736 <li>NCHW
737 </ul>
738 <td>
739 <table>
740 <tr><th>src0<th>src1<th>src2<th>dst
741 <tr><td>F16<td>F16<td>F16<td>F16
742 <tr><td>F32<td>F32<td>F32<td>F32
743 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
744 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
745 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
746 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
747 </table>
748<tr>
    <td rowspan="1">DeconvolutionLayerUpsample
750 <td rowspan="1" style="width:200px;"> Function to execute deconvolution upsample on OpenCL.
751 <td rowspan="1">
752 <ul>
753 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
754 </ul>
755 <td>CLDeconvolutionLayerUpsample
756 <td>
757 <ul>
758 <li>NHWC
759 <li>NCHW
760 </ul>
761 <td>
762 <table>
763 <tr><th>src<th>dst
764 <tr><td>All<td>All
765 </table>
766<tr>
    <td rowspan="2">DepthConvertLayer
768 <td rowspan="2" style="width:200px;"> Performs a down-scaling depth conversion.
769 <td rowspan="2">
770 <ul>
771 <li>n/a
772 </ul>
773 <td>NEDepthConvertLayer
774 <td>
775 <ul>
776 <li>All
777 </ul>
778 <td>
779 <table>
780 <tr><th>src<th>dst
781 <tr><td>QASYMM8<td>F16, F32
782 <tr><td>U8<td>U16, S16, S32
783 <tr><td>U16<td>U8, U32
784 <tr><td>S16<td>U8, S32
785 <tr><td>BFLOAT16<td>F32
786 <tr><td>F16<td>QASYMM8, F32
787 <tr><td>F32<td>QASYMM8, F16, BFLOAT16
788 </table>
789<tr>
790 <td>CLDepthConvertLayer
791 <td>
792 <ul>
793 <li>All
794 </ul>
795 <td>
796 <table>
797 <tr><th>src<th>dst
798 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
799 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
800 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
801 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
802 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
803 <tr><td>F16<td>U8, S8, U16, S16, U32, F32
804 <tr><td>F32<td>U8, S8, U16, S16, U32, F16
805 </table>
806<tr>
807 <td rowspan="2">DepthToSpaceLayer
808 <td rowspan="2" style="width:200px;"> Depth to Space transformation.
809 <td rowspan="2">
810 <ul>
811 <li>ANEURALNETWORKS_DEPTH_TO_SPACE
812 </ul>
813 <td>NEDepthToSpaceLayer
814 <td>
815 <ul>
816 <li>NHWC
817 <li>NCHW
818 </ul>
819 <td>
820 <table>
821 <tr><th>src<th>dst
822 <tr><td>All<td>All
823 </table>
824<tr>
825 <td>CLDepthToSpaceLayer
826 <td>
827 <ul>
828 <li>NHWC
829 <li>NCHW
830 </ul>
831 <td>
832 <table>
833 <tr><th>src<th>dst
834 <tr><td>All<td>All
835 </table>
836<tr>
837 <td rowspan="2">DepthwiseConvolutionLayer
838 <td rowspan="2" style="width:200px;"> Function to perform depthwise separable convolution.
839 <td rowspan="2">
840 <ul>
841 <li>ANEURALNETWORKS_DEPTHWISE_CONV_2D
842 </ul>
843 <td>NEDepthwiseConvolutionLayer
844 <td>
845 <ul>
846 <li>NHWC
847 <li>NCHW
848 </ul>
849 <td>
850 <table>
851 <tr><th>src0<th>src1<th>src2<th>dst
852 <tr><td>F16<td>F16<td>F16<td>F16
853 <tr><td>F32<td>F32<td>F32<td>F32
854 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
855 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
856 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
857 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
858 </table>
859<tr>
860 <td>CLDepthwiseConvolutionLayer
861 <td>
862 <ul>
863 <li>NHWC
864 <li>NCHW
865 </ul>
866 <td>
867 <table>
868 <tr><th>src0<th>src1<th>src2<th>dst
869 <tr><td>F16<td>F16<td>F16<td>F16
870 <tr><td>F32<td>F32<td>F32<td>F32
871 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
872 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
873 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
874 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
875 </table>
876<tr>
    <td rowspan="2">DequantizationLayer
    <td rowspan="2" style="width:200px;"> Function to dequantize the values in a tensor.
    <td rowspan="2">
880 <ul>
881 <li>ANEURALNETWORKS_DEQUANTIZE
882 </ul>
883 <td>NEDequantizationLayer
884 <td>
885 <ul>
886 <li>All
887 </ul>
888 <td>
889 <table>
890 <tr><th>src<th>dst
       <tr><td>QASYMM8<td>F16, F32
892 <tr><td>QASYMM8_SIGNED<td>F16, F32
893 <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
894 <tr><td>QSYMM8<td>F16, F32
895 <tr><td>QSYMM16<td>F16, F32
       </table>
897<tr>
898 <td>CLDequantizationLayer
899 <td>
900 <ul>
901 <li>All
902 </ul>
903 <td>
904 <table>
905 <tr><th>src<th>dst
       <tr><td>QASYMM8<td>F16, F32
907 <tr><td>QASYMM8_SIGNED<td>F16, F32
908 <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
909 <tr><td>QSYMM8<td>F16, F32
910 <tr><td>QSYMM16<td>F16, F32
       </table>
912<tr>
    <td rowspan="1">DetectionPostProcessLayer
    <td rowspan="1" style="width:200px;"> Function to generate the detection output based on center-size encoded boxes, class prediction and anchors by doing non-maximum suppression (NMS).
915 <td rowspan="1">
916 <ul>
917 <li>ANEURALNETWORKS_DETECTION_POSTPROCESSING
918 </ul>
919 <td>NEDetectionPostProcessLayer
920 <td>
921 <ul>
922 <li>All
923 </ul>
924 <td>
925 <table>
926 <tr><th>src0 - src2<th>dst0 - dst3
927 <tr><td>QASYMM8<td>F32
928 <tr><td>QASYMM8_SIGNED<td>F32
929 <tr><td>F32<td>F32
930 </table>
931<tr>
    <td rowspan="2">DirectConvolutionLayer
    <td rowspan="2" style="width:200px;"> Function to compute direct convolution.
    <td rowspan="2">
935 <ul>
936 <li>ANEURALNETWORKS_CONV_2D
937 </ul>
938 <td>NEDirectConvolutionLayer
939 <td>
940 <ul>
941 <li>NHWC
942 <li>NCHW
943 </ul>
944 <td>
945 <table>
946 <tr><th>src0<th>src1<th>src2<th>dst
947 <tr><td>F16<td>F16<td>F16<td>F16
948 <tr><td>F32<td>F32<td>F32<td>F32
949 </table>
950<tr>
951 <td>CLDirectConvolutionLayer
952 <td>
953 <ul>
954 <li>NHWC
955 <li>NCHW
956 </ul>
957 <td>
958 <table>
959 <tr><th>src0<th>src1<th>src2<th>dst
960 <tr><td>F16<td>F16<td>F16<td>F16
961 <tr><td>F32<td>F32<td>F32<td>F32
962 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
963 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
964 </table>
965<tr>
    <td rowspan="1">DirectDeconvolutionLayer
967 <td rowspan="1" style="width:200px;"> Function to run the deconvolution layer.
968 <td rowspan="1">
969 <ul>
970 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
971 </ul>
972 <td>CLDirectDeconvolutionLayer
973 <td>
974 <ul>
975 <li>NHWC
976 <li>NCHW
977 </ul>
978 <td>
979 <table>
980 <tr><th>src0<th>src1<th>src2<th>dst
981 <tr><td>F16<td>F16<td>F16<td>F16
982 <tr><td>F32<td>F32<td>F32<td>F32
983 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
984 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
985 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
986 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
987 </table>
988<tr>
    <td rowspan="13">ElementwiseOperations
    <td rowspan="13" style="width:200px;"> Function to perform on CPU: Div, Max, Min, Pow, SquaredDiff, Comparisons (Equal, Greater, GreaterEqual, Less, LessEqual, NotEqual). Function to perform on OpenCL: Add, Sub, Div, Max, Min, Pow, SquaredDiff.
    <td rowspan="13">
992 <ul>
993 <li>ANEURALNETWORKS_MAXIMUM
994 <li>ANEURALNETWORKS_MINIMUM
995 <li>ANEURALNETWORKS_POW
996 <li>ANEURALNETWORKS_DIV
997 <li>ANEURALNETWORKS_ADD
998 <li>ANEURALNETWORKS_SUB
999 <li>ANEURALNETWORKS_EQUAL
1000 <li>ANEURALNETWORKS_GREATER
1001 <li>ANEURALNETWORKS_GREATER_EQUAL
1002 <li>ANEURALNETWORKS_LESS
1003 <li>ANEURALNETWORKS_LESS_EQUAL
1004 <li>ANEURALNETWORKS_NOT_EQUAL
1005 </ul>
1006 <td>NEElementwiseMax
1007 <td>
1008 <ul>
1009 <li>All
1010 </ul>
1011 <td>
1012 <table>
1013 <tr><th>src0<th>src1<th>dst
1014 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1015 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1016 <tr><td>S32<td>S32<td>S32
1017 <tr><td>S16<td>S16<td>S16
1018 <tr><td>F16<td>F16<td>F16
1019 <tr><td>F32<td>F32<td>F32
1020 </table>
1021<tr>
1022 <td>NEElementwiseMin
1023 <td>
1024 <ul>
1025 <li>All
1026 </ul>
1027 <td>
1028 <table>
1029 <tr><th>src0<th>src1<th>dst
1030 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1031 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1032 <tr><td>S32<td>S32<td>S32
1033 <tr><td>S16<td>S16<td>S16
1034 <tr><td>F16<td>F16<td>F16
1035 <tr><td>F32<td>F32<td>F32
1036 </table>
1037<tr>
1038 <td>NEElementwiseSquaredDiff
1039 <td>
1040 <ul>
1041 <li>All
1042 </ul>
1043 <td>
1044 <table>
1045 <tr><th>src0<th>src1<th>dst
1046 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1047 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1048 <tr><td>S32<td>S32<td>S32
1049 <tr><td>S16<td>S16<td>S16
1050 <tr><td>F16<td>F16<td>F16
1051 <tr><td>F32<td>F32<td>F32
1052 </table>
1053<tr>
1054 <td>NEElementwiseDivision
1055 <td>
1056 <ul>
1057 <li>All
1058 </ul>
1059 <td>
1060 <table>
1061 <tr><th>src0<th>src1<th>dst
1062 <tr><td>F16<td>F16<td>F16
1063 <tr><td>F32<td>F32<td>F32
1064 </table>
1065<tr>
1066 <td>NEElementwisePower
1067 <td>
1068 <ul>
1069 <li>All
1070 </ul>
1071 <td>
1072 <table>
1073 <tr><th>src0<th>src1<th>dst
1074 <tr><td>F16<td>F16<td>F16
1075 <tr><td>F32<td>F32<td>F32
1076 </table>
1077<tr>
1078 <td>NEElementwiseComparison
1079 <td>
1080 <ul>
1081 <li>All
1082 </ul>
1083 <td>
1084 <table>
1085 <tr><th>src0<th>src1<th>dst
1086 <tr><td>QASYMM8<td>QASYMM8<td>U8
1087 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>U8
1088 <tr><td>S32<td>S32<td>U8
1089 <tr><td>U8<td>U8<td>U8
1090 <tr><td>S16<td>S16<td>U8
1091 <tr><td>F16<td>F16<td>U8
1092 <tr><td>F32<td>F32<td>U8
1093 </table>
1094<tr>
1095 <td>CLArithmeticAddition
1096 <td>
1097 <ul>
1098 <li>All
1099 </ul>
1100 <td>
1101 <table>
1102 <tr><th>src0<th>src1<th>dst
1103 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1104 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1105 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1106 <tr><td>U8<td>U8<td>U8
1107 <tr><td>U8<td>U8<td>S16
1108 <tr><td>U8<td>S16<td>S16
1109 <tr><td>S16<td>U8<td>S16
1110 <tr><td>S16<td>S16<td>S16
1111 <tr><td>S32<td>S32<td>S32
1112 <tr><td>F16<td>F16<td>F16
1113 <tr><td>F32<td>F32<td>F32
1114 </table>
1115<tr>
1116 <td>CLArithmeticSubtraction
1117 <td>
1118 <ul>
1119 <li>All
1120 </ul>
1121 <td>
1122 <table>
1123 <tr><th>src0<th>src1<th>dst
1124 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1125 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1126 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1127 <tr><td>U8<td>U8<td>U8
1128 <tr><td>U8<td>U8<td>S16
1129 <tr><td>U8<td>S16<td>S16
1130 <tr><td>S16<td>U8<td>S16
1131 <tr><td>S16<td>S16<td>S16
1132 <tr><td>S32<td>S32<td>S32
1133 <tr><td>F16<td>F16<td>F16
1134 <tr><td>F32<td>F32<td>F32
1135 </table>
1136<tr>
1137 <td>CLArithmeticDivision
1138 <td>
1139 <ul>
1140 <li>All
1141 </ul>
1142 <td>
1143 <table>
1144 <tr><th>src0<th>src1<th>dst
1145 <tr><td>F16<td>F16<td>F16
1146 <tr><td>F32<td>F32<td>F32
1147 </table>
1148<tr>
1149 <td>CLElementwiseMax
1150 <td>
1151 <ul>
1152 <li>All
1153 </ul>
1154 <td>
1155 <table>
1156 <tr><th>src0<th>src1<th>dst
1157 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1158 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1159 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1160 <tr><td>U8<td>U8<td>U8
1161 <tr><td>S16<td>S16<td>S16
1162 <tr><td>S32<td>S32<td>S32
1163 <tr><td>U32<td>U32<td>U32
1164 <tr><td>F16<td>F16<td>F16
1165 <tr><td>F32<td>F32<td>F32
1166 </table>
1167<tr>
1168 <td>CLElementwiseMin
1169 <td>
1170 <ul>
1171 <li>All
1172 </ul>
1173 <td>
1174 <table>
1175 <tr><th>src0<th>src1<th>dst
1176 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1177 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1178 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1179 <tr><td>U8<td>U8<td>U8
1180 <tr><td>S16<td>S16<td>S16
1181 <tr><td>S32<td>S32<td>S32
1182 <tr><td>U32<td>U32<td>U32
1183 <tr><td>F16<td>F16<td>F16
1184 <tr><td>F32<td>F32<td>F32
1185 </table>
1186<tr>
1187 <td>CLElementwiseSquaredDiff
1188 <td>
1189 <ul>
1190 <li>All
1191 </ul>
1192 <td>
1193 <table>
1194 <tr><th>src0<th>src1<th>dst
1195 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1196 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1197 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1198 <tr><td>U8<td>U8<td>U8
1199 <tr><td>S16<td>S16<td>S16
1200 <tr><td>F16<td>F16<td>F16
1201 <tr><td>F32<td>F32<td>F32
1202 </table>
1203<tr>
1204 <td>CLElementwisePower
1205 <td>
1206 <ul>
1207 <li>All
1208 </ul>
1209 <td>
1210 <table>
1211 <tr><th>src0<th>src1<th>dst
1212 <tr><td>F16<td>F16<td>F16
1213 <tr><td>F32<td>F32<td>F32
1214 </table>
1215<tr>
    <td rowspan="8">ElementwiseUnaryLayer
    <td rowspan="8" style="width:200px;"> Function to perform: Rsqrt, Exp, Neg, Log, Abs, Round, Sin.
1218 <td rowspan="8">
1219 <ul>
1220 <li>ANEURALNETWORKS_ABS
1221 <li>ANEURALNETWORKS_EXP
1222 <li>ANEURALNETWORKS_LOG
1223 <li>ANEURALNETWORKS_NEG
1224 <li>ANEURALNETWORKS_RSQRT
1225 <li>ANEURALNETWORKS_SIN
1226 </ul>
1227 <td>NEElementwiseUnaryLayer
1228 <td>
1229 <ul>
1230 <li>All
1231 </ul>
1232 <td>
1233 <table>
1234 <tr><th>src<th>dst
1235 <tr><td>F16<td>F16
1236 <tr><td>F32<td>F32
1237 <tr><td>S32<td>S32
1238 </table>
1239<tr>
1240 <td>CLRsqrtLayer
1241 <td>
1242 <ul>
1243 <li>All
1244 </ul>
1245 <td>
1246 <table>
1247 <tr><th>src<th>dst
1248 <tr><td>F16<td>F16
1249 <tr><td>F32<td>F32
1250 </table>
1251<tr>
1252 <td>CLExpLayer
1253 <td>
1254 <ul>
1255 <li>All
1256 </ul>
1257 <td>
1258 <table>
1259 <tr><th>src<th>dst
1260 <tr><td>F16<td>F16
1261 <tr><td>F32<td>F32
1262 </table>
1263<tr>
1264 <td>CLNegLayer
1265 <td>
1266 <ul>
1267 <li>All
1268 </ul>
1269 <td>
1270 <table>
1271 <tr><th>src<th>dst
1272 <tr><td>F16<td>F16
1273 <tr><td>F32<td>F32
       <tr><td>S32<td>S32
       </table>
1276<tr>
1277 <td>CLSinLayer
1278 <td>
1279 <ul>
1280 <li>All
1281 </ul>
1282 <td>
1283 <table>
1284 <tr><th>src<th>dst
1285 <tr><td>F16<td>F16
1286 <tr><td>F32<td>F32
1287 </table>
1288<tr>
1289 <td>CLLogLayer
1290 <td>
1291 <ul>
1292 <li>All
1293 </ul>
1294 <td>
1295 <table>
1296 <tr><th>src<th>dst
1297 <tr><td>F16<td>F16
1298 <tr><td>F32<td>F32
1299 </table>
1300<tr>
1301 <td>CLAbsLayer
1302 <td>
1303 <ul>
1304 <li>All
1305 </ul>
1306 <td>
1307 <table>
1308 <tr><th>src<th>dst
1309 <tr><td>F16<td>F16
1310 <tr><td>F32<td>F32
1311 </table>
1312<tr>
1313 <td>CLRoundLayer
1314 <td>
1315 <ul>
1316 <li>All
1317 </ul>
1318 <td>
1319 <table>
1320 <tr><th>src<th>dst
1321 <tr><td>F16<td>F16
1322 <tr><td>F32<td>F32
1323 </table>
1324<tr>
    <td rowspan="2">FFT1D
    <td rowspan="2" style="width:200px;"> Fast Fourier Transform 1D.
    <td rowspan="2">
1328 <ul>
       <li>n/a
      </ul>
1331 <td>NEFFT1D
1332 <td>
1333 <ul>
1334 <li>All
1335 </ul>
1336 <td>
1337 <table>
1338 <tr><th>src<th>dst
1339 <tr><td>F32<td>F32
1340 </table>
1341<tr>
1342 <td>CLFFT1D
1343 <td>
1344 <ul>
1345 <li>All
1346 </ul>
1347 <td>
1348 <table>
1349 <tr><th>src<th>dst
1350 <tr><td>F32<td>F32
1351 <tr><td>F16<td>F16
1352 </table>
1353<tr>
1354 <td rowspan="2">FFT2D
    <td rowspan="2" style="width:200px;"> Fast Fourier Transform 2D.
    <td rowspan="2">
1357 <ul>
       <li>n/a
      </ul>
1360 <td>NEFFT2D
1361 <td>
1362 <ul>
1363 <li>All
1364 </ul>
1365 <td>
1366 <table>
1367 <tr><th>src<th>dst
1368 <tr><td>F32<td>F32
1369 </table>
1370<tr>
1371 <td>CLFFT2D
1372 <td>
1373 <ul>
1374 <li>All
1375 </ul>
1376 <td>
1377 <table>
1378 <tr><th>src<th>dst
1379 <tr><td>F32<td>F32
1380 <tr><td>F16<td>F16
1381 </table>
1382<tr>
1383 <td rowspan="2">FFTConvolutionLayer
    <td rowspan="2" style="width:200px;"> Fast Fourier Transform Convolution.
    <td rowspan="2">
1386 <ul>
1387 <li>ANEURALNETWORKS_CONV_2D
1388 </ul>
1389 <td>NEFFTConvolutionLayer
1390 <td>
1391 <ul>
1392 <li>All
1393 </ul>
1394 <td>
1395 <table>
1396 <tr><th>src<th>dst
1397 <tr><td>F32<td>F32
1398 </table>
1399<tr>
1400 <td>CLFFTConvolutionLayer
1401 <td>
1402 <ul>
1403 <li>All
1404 </ul>
1405 <td>
1406 <table>
1407 <tr><th>src<th>dst
1408 <tr><td>F32<td>F32
1409 <tr><td>F16<td>F16
1410 </table>
1411<tr>
1412 <td rowspan="2">Fill
    <td rowspan="2" style="width:200px;"> Set the values of a tensor with a given value.
    <td rowspan="2">
1415 <ul>
1416 <li>ANEURALNETWORKS_FILL
1417 </ul>
1418 <td>NEFill
1419 <td>
1420 <ul>
1421 <li>All
1422 </ul>
1423 <td>
1424 <table>
1425 <tr><th>src<th>dst
1426 <tr><td>All<td>All
1427 </table>
1428<tr>
1429 <td>CLFill
1430 <td>
1431 <ul>
1432 <li>All
1433 </ul>
1434 <td>
1435 <table>
1436 <tr><th>src<th>dst
1437 <tr><td>All<td>All
1438 </table>
1439<tr>
    <td rowspan="1">FillBorder
    <td rowspan="1" style="width:200px;"> Function to fill the borders within the XY-planes.
1442 <td rowspan="1">
      <ul>
1444 <li>n/a
1445 </ul>
1446 <td>NEFillBorder
1447 <td>
1448 <ul>
1449 <li>All
1450 </ul>
1451 <td>
1452 <table>
1453 <tr><th>src<th>dst
1454 <tr><td>All<td>All
1455 </table>
1456<tr>
    <td rowspan="2">FlattenLayer
    <td rowspan="2" style="width:200px;"> Reshape a tensor to be 1D.
1459 <td rowspan="2">
1460 <ul>
1461 <li>ANEURALNETWORKS_RESHAPE
1462 </ul>
1463 <td>NEFlattenLayer
1464 <td>
1465 <ul>
1466 <li>All
1467 </ul>
1468 <td>
1469 <table>
1470 <tr><th>src<th>dst
1471 <tr><td>All<td>All
1472 </table>
1473<tr>
1474 <td>CLFlattenLayer
1475 <td>
1476 <ul>
1477 <li>All
1478 </ul>
1479 <td>
1480 <table>
1481 <tr><th>src<th>dst
1482 <tr><td>All<td>All
1483 </table>
1484<tr>
    <td rowspan="2">Floor
    <td rowspan="2" style="width:200px;"> Round the value down to the nearest integer.
    <td rowspan="2">
1488 <ul>
1489 <li>ANEURALNETWORKS_FLOOR
1490 </ul>
1491 <td>NEFloor
1492 <td>
1493 <ul>
1494 <li>All
1495 </ul>
1496 <td>
1497 <table>
1498 <tr><th>src<th>dst
1499 <tr><td>F32<td>F32
1500 <tr><td>F16<td>F16
1501 </table>
1502<tr>
1503 <td>CLFloor
1504 <td>
1505 <ul>
1506 <li>All
1507 </ul>
1508 <td>
1509 <table>
1510 <tr><th>src<th>dst
1511 <tr><td>F32<td>F32
1512 <tr><td>F16<td>F16
1513 </table>
1514<tr>
    <td rowspan="2">FullyConnectedLayer
1516 <td rowspan="2" style="width:200px;"> Function to perform a fully connected / dense layer.
1517 <td rowspan="2">
1518 <ul>
1519 <li>ANEURALNETWORKS_FULLY_CONNECTED
1520 </ul>
    <td>NEFullyConnectedLayer
    <td>
1523 <ul>
1524 <li>NHWC
1525 <li>NCHW
1526 </ul>
1527 <td>
1528 <table>
1529 <tr><th>src0<th>src1<th>src2<th>dst
1530 <tr><td>F16<td>F16<td>F16<td>F16
1531 <tr><td>F32<td>F32<td>F32<td>F32
1532 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1533 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1534 </table>
1535<tr>
    <td>CLFullyConnectedLayer
    <td>
1538 <ul>
1539 <li>NHWC
1540 <li>NCHW
1541 </ul>
1542 <td>
1543 <table>
1544 <tr><th>src0<th>src1<th>src2<th>dst
1545 <tr><td>F16<td>F16<td>F16<td>F16
1546 <tr><td>F32<td>F32<td>F32<td>F32
1547 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1548 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1549 </table>
1550<tr>
1551 <td rowspan="2">FuseBatchNormalization
1552 <td rowspan="2" style="width:200px;"> Function to fuse the batch normalization node to a preceding convolution node.
1553 <td rowspan="2">
1554 <ul>
1555 <li>n/a
1556 </ul>
1557 <td>NEFuseBatchNormalization
1558 <td>
1559 <ul>
1560 <li>NHWC
1561 <li>NCHW
1562 </ul>
1563 <td>
1564 <table>
1565 <tr><th>src<th>dst
1566 <tr><td>F32<td>F32
1567 <tr><td>F16<td>F16
1568 </table>
1569<tr>
1570 <td>CLFuseBatchNormalization
1571 <td>
1572 <ul>
1573 <li>NHWC
1574 <li>NCHW
1575 </ul>
1576 <td>
1577 <table>
1578 <tr><th>src<th>dst
1579 <tr><td>F32<td>F32
1580 <tr><td>F16<td>F16
1581 </table>
1582<tr>
1583 <td rowspan="2">Gather
1584 <td rowspan="2" style="width:200px;"> Performs the Gather operation along the chosen axis.
1585 <td rowspan="2">
1586 <ul>
1587 <li>ANEURALNETWORKS_GATHER
1588 </ul>
1589 <td>NEGather
1590 <td>
1591 <ul>
1592 <li>All
1593 </ul>
1594 <td>
1595 <table>
1596 <tr><th>src<th>dst
1597 <tr><td>All<td>All
1598 </table>
1599<tr>
1600 <td>CLGather
1601 <td>
1602 <ul>
1603 <li>All
1604 </ul>
1605 <td>
1606 <table>
1607 <tr><th>src<th>dst
1608 <tr><td>All<td>All
1609 </table>
1610<tr>
1611 <td rowspan="2">GEMM
1612 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1613 <td rowspan="2">
1614 <ul>
1615 <li>n/a
1616 </ul>
1617 <td>NEGEMM
1618 <td>
1619 <ul>
1620 <li>All
1621 </ul>
1622 <td>
1623 <table>
1624 <tr><th>src0<th>src1<th>src2<th>dst
1625 <tr><td>F32<td>F32<td>F32<td>F32
1626 <tr><td>F16<td>F16<td>F16<td>F16
1627 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1628 </table>
1629<tr>
    <td>CLGEMM
    <td>
1632 <ul>
1633 <li>All
1634 </ul>
1635 <td>
1636 <table>
1637 <tr><th>src0<th>src1<th>src2<th>dst
1638 <tr><td>F32<td>F32<td>F32<td>F32
1639 <tr><td>F16<td>F16<td>F16<td>F16
1640 </table>
1641<tr>
    <td rowspan="1">GEMMConv2d
    <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1644 <td rowspan="1">
1645 <ul>
1646 <li>ANEURALNETWORKS_CONV_2D
1647 </ul>
1648 <td>NEGEMMConv2d
1649 <td>
1650 <ul>
1651 <li>All
1652 </ul>
1653 <td>
1654 <table>
1655 <tr><th>src0<th>src1<th>src2<th>dst
1656 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1657 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1658 <tr><td>F16<td>F16<td>F16<td>F16
1659 <tr><td>F32<td>F32<td>F32<td>F32
1660 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1661 </table>
1662<tr>
    <td rowspan="2">GEMMConvolutionLayer
1664 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1665 <td rowspan="2">
1666 <ul>
1667 <li>ANEURALNETWORKS_CONV_2D
1668 </ul>
    <td>NEGEMMConvolutionLayer
    <td>
1671 <ul>
1672 <li>NHWC
1673 <li>NCHW
1674 </ul>
1675 <td>
1676 <table>
1677 <tr><th>src0<th>src1<th>src2<th>dst
1678 <tr><td>F16<td>F16<td>F16<td>F16
1679 <tr><td>F32<td>F32<td>F32<td>F32
1680 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1681 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1682 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1683 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1684 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1685 </table>
1686<tr>
    <td>CLGEMMConvolutionLayer
    <td>
1689 <ul>
1690 <li>NHWC
1691 <li>NCHW
1692 </ul>
1693 <td>
1694 <table>
1695 <tr><th>src0<th>src1<th>src2<th>dst
1696 <tr><td>F16<td>F16<td>F16<td>F16
1697 <tr><td>F32<td>F32<td>F32<td>F32
1698 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1699 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1700 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1701 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1702 </table>
1703<tr>
    <td rowspan="1">GEMMDeconvolutionLayer
1705 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1706 <td rowspan="1">
1707 <ul>
1708 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
1709 </ul>
1710 <td>CLGEMMDeconvolutionLayer
1711 <td>
1712 <ul>
1713 <li>NHWC
1714 </ul>
1715 <td>
1716 <table>
1717 <tr><th>src0<th>src1<th>src2<th>dst
1718 <tr><td>F16<td>F16<td>F16<td>F16
1719 <tr><td>F32<td>F32<td>F32<td>F32
1720 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1721 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1722 </table>
1723<tr>
    <td rowspan="2">GEMMLowpMatrixMultiplyCore
1725 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1726 <td rowspan="2">
1727 <ul>
1728 <li>n/a
1729 </ul>
1730 <td>NEGEMMLowpMatrixMultiplyCore
1731 <td>
1732 <ul>
1733 <li>NHWC
1734 <li>NCHW
1735 </ul>
1736 <td>
1737 <table>
1738 <tr><th>src0<th>src1<th>src2<th>dst
1739 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1740 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1741 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1742 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1743 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1744 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1745 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1746 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1747 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1748 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1749 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1750 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1751 </table>
1752<tr>
1753 <td>CLGEMMLowpMatrixMultiplyCore
1754 <td>
1755 <ul>
1756 <li>NHWC
1757 <li>NCHW
1758 </ul>
1759 <td>
1760 <table>
1761 <tr><th>src0<th>src1<th>src2<th>dst
1762 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1763 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1764 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1765 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1766 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1767 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1768 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1769 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1770 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1771 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1772 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1773 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1774 </table>
1775<tr>
    <td rowspan="2">GEMMLowpOutputStage
1777 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1778 <td rowspan="2">
1779 <ul>
1780 <li>n/a
1781 </ul>
1782 <td>NEGEMMLowpOutputStage
1783 <td>
1784 <ul>
1785 <li>All
1786 </ul>
1787 <td>
1788 <table>
1789 <tr><th>src0<th>src1<th>dst
1790 <tr><td>S32<td>S32<td>QASYMM8
1791 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1792 <tr><td>S32<td>S32<td>QSYMM16
1793 </table>
1794<tr>
1795 <td>CLGEMMLowpOutputStage
1796 <td>
1797 <ul>
1798 <li>All
1799 </ul>
1800 <td>
1801 <table>
1802 <tr><th>src0<th>src1<th>dst
1803 <tr><td>S32<td>S32<td>QASYMM8
1804 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1805 <tr><td>S32<td>S32<td>QSYMM16
1806 </table>
1807<tr>
    <td rowspan="2">GenerateProposalsLayer
    <td rowspan="2" style="width:200px;"> Function to generate proposals for an RPN (Region Proposal Network).
1810 <td rowspan="2">
1811 <ul>
1812 <li>ANEURALNETWORKS_GENERATE_PROPOSALS
1813 </ul>
1814 <td>NEGenerateProposalsLayer
1815 <td>
1816 <ul>
1817 <li>All
1818 </ul>
1819 <td>
1820 <table>
1821 <tr><th>src0<th>src1<th>src2<th>dst
1822 <tr><td>F16<td>F16<td>F16<td>F16
1823 <tr><td>F32<td>F32<td>F32<td>F32
1824 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1825 </table>
1826<tr>
1827 <td>CLGenerateProposalsLayer
1828 <td>
1829 <ul>
1830 <li>All
1831 </ul>
1832 <td>
1833 <table>
1834 <tr><th>src0<th>src1<th>src2<th>dst
1835 <tr><td>F16<td>F16<td>F16<td>F16
1836 <tr><td>F32<td>F32<td>F32<td>F32
1837 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1838 </table>
1839<tr>
1840 <td rowspan="2">InstanceNormalizationLayer
    <td rowspan="2" style="width:200px;"> Function to perform an instance normalization on a given axis.
1842 <td rowspan="2">
1843 <ul>
1844 <li>ANEURALNETWORKS_INSTANCE_NORMALIZATION
1845 </ul>
1846 <td>NEInstanceNormalizationLayer
1847 <td>
1848 <ul>
1849 <li>NHWC
1850 <li>NCHW
1851 </ul>
1852 <td>
1853 <table>
1854 <tr><th>src<th>dst
1855 <tr><td>F16<td>F16
1856 <tr><td>F32<td>F32
1857 </table>
1858<tr>
1859 <td>CLInstanceNormalizationLayer
1860 <td>
1861 <ul>
1862 <li>NHWC
1863 <li>NCHW
1864 </ul>
1865 <td>
1866 <table>
1867 <tr><th>src<th>dst
1868 <tr><td>F16<td>F16
1869 <tr><td>F32<td>F32
1870 </table>
1871<tr>
1872 <td rowspan="2">L2NormalizeLayer
    <td rowspan="2" style="width:200px;"> Function to perform an L2 normalization on a given axis.
1874 <td rowspan="2">
1875 <ul>
1876 <li>ANEURALNETWORKS_L2_NORMALIZATION
1877 </ul>
1878 <td>NEL2NormalizeLayer
1879 <td>
1880 <ul>
1881 <li>NHWC
1882 <li>NCHW
1883 </ul>
1884 <td>
1885 <table>
1886 <tr><th>src<th>dst
1887 <tr><td>F16<td>F16
1888 <tr><td>F32<td>F32
1889 </table>
1890<tr>
1891 <td>CLL2NormalizeLayer
1892 <td>
1893 <ul>
1894 <li>NHWC
1895 <li>NCHW
1896 </ul>
1897 <td>
1898 <table>
1899 <tr><th>src<th>dst
1900 <tr><td>F16<td>F16
1901 <tr><td>F32<td>F32
1902 </table>
1903<tr>
    <td rowspan="3">Logical
    <td rowspan="3" style="width:200px;"> Function to perform: Logical AND, Logical OR, Logical NOT.
1906 <td rowspan="3">
1907 <ul>
1908 <li>n/a
1909 </ul>
1910 <td>NELogicalAnd
1911 <td>
1912 <ul>
1913 <li>All
1914 </ul>
1915 <td>
1916 <table>
1917 <tr><th>src0<th>src1<th>dst
1918 <tr><td>U8<td>U8<td>U8
1919 </table>
1920<tr>
1921 <td>NELogicalOr
1922 <td>
1923 <ul>
1924 <li>All
1925 </ul>
1926 <td>
1927 <table>
1928 <tr><th>src0<th>src1<th>dst
1929 <tr><td>U8<td>U8<td>U8
1930 </table>
1931<tr>
1932 <td>NELogicalNot
1933 <td>
1934 <ul>
1935 <li>All
1936 </ul>
1937 <td>
1938 <table>
1939 <tr><th>src<th>dst
1940 <tr><td>U8<td>U8
1941 </table>
1942<tr>
1943 <td rowspan="1">LogicalAnd
1944 <td rowspan="1" style="width:200px;"> Function to perform Logical AND.
1945 <td rowspan="1">
1946 <ul>
1947 <li>n/a
1948 </ul>
1949 <td>CLLogicalAnd
1950 <td>
1951 <ul>
1952 <li>All
1953 </ul>
1954 <td>
1955 <table>
1956 <tr><th>src0<th>src1<th>dst
1957 <tr><td>U8<td>U8<td>U8
1958 </table>
1959<tr>
1960 <td rowspan="1">LogicalOr
1961 <td rowspan="1" style="width:200px;"> Function to perform Logical OR.
1962 <td rowspan="1">
1963 <ul>
1964 <li>n/a
1965 </ul>
1966 <td>CLLogicalOr
1967 <td>
1968 <ul>
1969 <li>All
1970 </ul>
1971 <td>
1972 <table>
1973 <tr><th>src0<th>src1<th>dst
1974 <tr><td>U8<td>U8<td>U8
1975 </table>
1976<tr>
1977 <td rowspan="1">LogicalNot
1978 <td rowspan="1" style="width:200px;"> Function to perform Logical NOT.
1979 <td rowspan="1">
1980 <ul>
1981 <li>n/a
1982 </ul>
1983 <td>CLLogicalNot
1984 <td>
1985 <ul>
1986 <li>All
1987 </ul>
1988 <td>
1989 <table>
1990 <tr><th>src<th>dst
1991 <tr><td>U8<td>U8
1992 </table>
1993<tr>
    <td rowspan="2">LSTMLayer
1995 <td rowspan="2" style="width:200px;"> Function to perform a single time step in a Long Short-Term Memory (LSTM) layer.
1996 <td rowspan="2">
1997 <ul>
1998 <li>ANEURALNETWORKS_LSTM
1999 </ul>
2000 <td>NELSTMLayer
2001 <td>
2002 <ul>
2003 <li>All
2004 </ul>
2005 <td>
2006 <table>
2007 <tr><th>src0 - src13<th>dst0 - dst3
2008 <tr><td>F16<td>F16
2009 <tr><td>F32<td>F32
2010 </table>
2011<tr>
2012 <td>CLLSTMLayer
2013 <td>
2014 <ul>
2015 <li>All
2016 </ul>
2017 <td>
2018 <table>
2019 <tr><th>src0 - src13<th>dst0 - dst3
2020 <tr><td>F16<td>F16
2021 <tr><td>F32<td>F32
2022 </table>
2023<tr>
2024 <td rowspan="2">LSTMLayerQuantized
    <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2026 <td rowspan="2">
2027 <ul>
2028 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2029 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2030 </ul>
2031 <td>NELSTMLayerQuantized
2032 <td>
2033 <ul>
2034 <li>All
2035 </ul>
2036 <td>
2037 <table>
2038 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2039 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2040 </table>
2041<tr>
2042 <td>CLLSTMLayerQuantized
2043 <td>
2044 <ul>
2045 <li>All
2046 </ul>
2047 <td>
2048 <table>
2049 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2050 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2051 </table>
2052<tr>
2053 <td rowspan="2">MaxUnpoolingLayer
2054 <td rowspan="2" style="width:200px;"> Function to perform MaxUnpooling.
2055 <td rowspan="2">
2056 <ul>
2057 <li>n/a
2058 </ul>
2059 <td>NEMaxUnpoolingLayer
2060 <td>
2061 <ul>
2062 <li>NHWC
2063 <li>NCHW
2064 </ul>
2065 <td>
2066 <table>
2067 <tr><th>src<th>dst
2068 <tr><td>QASYMM8<td>QASYMM8
2069 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2070 <tr><td>F16<td>F16
2071 <tr><td>F32<td>F32
2072 </table>
2073<tr>
2074 <td>CLMaxUnpoolingLayer
2075 <td>
2076 <ul>
2077 <li>NHWC
2078 <li>NCHW
2079 </ul>
2080 <td>
2081 <table>
2082 <tr><th>src<th>dst
2083 <tr><td>QASYMM8<td>QASYMM8
2084 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2085 <tr><td>F16<td>F16
2086 <tr><td>F32<td>F32
2087 </table>
2088<tr>
2089 <td rowspan="2">MeanStdDevNormalizationLayer
2090 <td rowspan="2" style="width:200px;"> Function to execute mean and standard deviation normalization.
2091 <td rowspan="2">
2092 <ul>
2093 <li>n/a
2094 </ul>
2095 <td>NEMeanStdDevNormalizationLayer
2096 <td>
2097 <ul>
2098 <li>NHWC
2099 <li>NCHW
2100 </ul>
2101 <td>
2102 <table>
2103 <tr><th>src<th>dst
2104 <tr><td>F32<td>F32
2105 <tr><td>F16<td>F16
2106 </table>
2107<tr>
2108 <td>CLMeanStdDevNormalizationLayer
2109 <td>
2110 <ul>
2111 <li>NHWC
2112 <li>NCHW
2113 </ul>
2114 <td>
2115 <table>
2116 <tr><th>src<th>dst
2117 <tr><td>F32<td>F32
2118 <tr><td>F16<td>F16
2119 </table>
2120<tr>
2121 <td rowspan="2">NormalizationLayer
2122 <td rowspan="2" style="width:200px;"> Function to compute normalization layer.
2123 <td rowspan="2">
2124 <ul>
2125 <li>ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION
2126 </ul>
2127 <td>NENormalizationLayer
2128 <td>
2129 <ul>
2130 <li>NHWC
2131 <li>NCHW
2132 </ul>
2133 <td>
2134 <table>
2135 <tr><th>src<th>dst
2136 <tr><td>F32<td>F32
2137 <tr><td>F16<td>F16
2138 </table>
2139<tr>
2140 <td>CLNormalizationLayer
2141 <td>
2142 <ul>
2143 <li>NHWC
2144 <li>NCHW
2145 </ul>
2146 <td>
2147 <table>
2148 <tr><th>src<th>dst
2149 <tr><td>F32<td>F32
2150 <tr><td>F16<td>F16
2151 </table>
2152<tr>
2153 <td rowspan="2">PadLayer
2154 <td rowspan="2" style="width:200px;"> Function to pad a tensor.
2155 <td rowspan="2">
2156 <ul>
2157 <li>ANEURALNETWORKS_PAD
2158 <li>ANEURALNETWORKS_PAD_V2
2159 </ul>
2160 <td>NEPadLayer
2161 <td>
2162 <ul>
2163 <li>NHWC
2164 <li>NCHW
2165 </ul>
2166 <td>
2167 <table>
2168 <tr><th>src<th>dst
2169 <tr><td>All<td>All
2170 </table>
2171<tr>
2172 <td>CLPadLayer
2173 <td>
2174 <ul>
2175 <li>NHWC
2176 <li>NCHW
2177 </ul>
2178 <td>
2179 <table>
2180 <tr><th>src<th>dst
2181 <tr><td>All<td>All
2182 </table>
2183<tr>
    <td rowspan="2">Permute
2185 <td rowspan="2" style="width:200px;"> Function to transpose an ND tensor.
2186 <td rowspan="2">
2187 <ul>
2188 <li>ANEURALNETWORKS_TRANSPOSE
2189 </ul>
2190 <td>NEPermute
2191 <td>
2192 <ul>
2193 <li>NHWC
2194 <li>NCHW
2195 </ul>
2196 <td>
2197 <table>
2198 <tr><th>src<th>dst
2199 <tr><td>All<td>All
2200 </table>
2201<tr>
2202 <td>CLPermute
2203 <td>
2204 <ul>
2205 <li>NHWC
2206 <li>NCHW
2207 </ul>
2208 <td>
2209 <table>
2210 <tr><th>src<th>dst
2211 <tr><td>All<td>All
2212 </table>
2213<tr>
2214 <td rowspan="2">PixelWiseMultiplication
    <td rowspan="2" style="width:200px;"> Function to perform a multiplication.
    <td rowspan="2">
2217 <ul>
2218 <li>ANEURALNETWORKS_MUL
2219 </ul>
2220 <td>NEPixelWiseMultiplication
2221 <td>
2222 <ul>
2223 <li>All
2224 </ul>
2225 <td>
2226 <table>
2227 <tr><th>src0<th>src1<th>dst
2228 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2229 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2230 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
2231 <tr><td>QSYMM16<td>QSYMM16<td>S32
2232 <tr><td>U8<td>U8<td>U8
2233 <tr><td>U8<td>U8<td>S16
2234 <tr><td>U8<td>S16<td>S16
2235 <tr><td>S16<td>U8<td>S16
2236 <tr><td>S16<td>S16<td>S16
2237 <tr><td>F16<td>F16<td>F16
2238 <tr><td>F32<td>S32<td>F32
2239 </table>
2240<tr>
2241 <td>CLPixelWiseMultiplication
2242 <td>
2243 <ul>
2244 <li>All
2245 </ul>
2246 <td>
2247 <table>
2248 <tr><th>src0<th>src1<th>dst
2249 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2250 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2251 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
2252 <tr><td>QSYMM16<td>QSYMM16<td>S32
2253 <tr><td>U8<td>U8<td>U8
2254 <tr><td>U8<td>U8<td>S16
2255 <tr><td>U8<td>S16<td>S16
2256 <tr><td>S16<td>U8<td>S16
2257 <tr><td>S16<td>S16<td>S16
2258 <tr><td>F16<td>F16<td>F16
       <tr><td>F32<td>F32<td>F32
2260 <tr><td>S32<td>S32<td>S32
2261    </table>
2262<tr>
2263 <td rowspan="2">PoolingLayer
2264    <td rowspan="2" style="width:200px;"> Function to perform pooling with the specified pooling operation.
2265    <td rowspan="2">
2266 <ul>
2267 <li>ANEURALNETWORKS_AVERAGE_POOL_2D
2268 <li>ANEURALNETWORKS_L2_POOL_2D
2269 <li>ANEURALNETWORKS_MAX_POOL_2D
2270 </ul>
2271 <td>NEPoolingLayer
2272 <td>
2273 <ul>
2274 <li>NHWC
2275 <li>NCHW
2276 </ul>
2277 <td>
2278 <table>
2279 <tr><th>src<th>dst
2280 <tr><td>QASYMM8<td>QASYMM8
2281 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2282 <tr><td>F16<td>F16
2283 <tr><td>F32<td>F32
2284 </table>
2285<tr>
2286 <td>CLPoolingLayer
2287 <td>
2288 <ul>
2289 <li>NHWC
2290 <li>NCHW
2291 </ul>
2292 <td>
2293 <table>
2294 <tr><th>src<th>dst
2295 <tr><td>QASYMM8<td>QASYMM8
2296 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2297 <tr><td>F16<td>F16
2298 <tr><td>F32<td>F32
2299 </table>
2300<tr>
2301 <td rowspan="2">PReluLayer
2302 <td rowspan="2" style="width:200px;"> Function to compute the activation layer with the PRELU activation function.
2303 <td rowspan="2">
2304 <ul>
2305 <li>ANEURALNETWORKS_PRELU
2306 </ul>
2307 <td>NEPReluLayer
2308 <td>
2309 <ul>
2310 <li>All
2311 </ul>
2312 <td>
2313 <table>
2314 <tr><th>src<th>dst
2315 <tr><td>QASYMM8<td>QASYMM8
2316 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2317 <tr><td>F16<td>F16
2318 <tr><td>F32<td>F32
2319 </table>
2320<tr>
2321 <td>CLPReluLayer
2322 <td>
2323 <ul>
2324 <li>All
2325 </ul>
2326 <td>
2327 <table>
2328 <tr><th>src<th>dst
2329 <tr><td>QASYMM8<td>QASYMM8
2330 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2331 <tr><td>F16<td>F16
2332 <tr><td>F32<td>F32
2333 </table>
2334<tr>
2335    <td rowspan="2">PriorBoxLayer
2336    <td rowspan="2" style="width:200px;"> Function to compute prior boxes and clip them.
2337    <td rowspan="2">
2338 <ul>
2339 <li>n/a
2340 </ul>
2341 <td>NEPriorBoxLayer
2342 <td>
2343 <ul>
2344 <li>NHWC
2345 <li>NCHW
2346 </ul>
2347 <td>
2348 <table>
2349 <tr><th>src0<th>src1<th>dst
2350 <tr><td>F32<td>F32<td>F32
2351 </table>
2352<tr>
2353 <td>CLPriorBoxLayer
2354 <td>
2355 <ul>
2356 <li>NHWC
2357 <li>NCHW
2358 </ul>
2359 <td>
2360 <table>
2361 <tr><th>src0<th>src1<th>dst
2362 <tr><td>F32<td>F32<td>F32
2363 </table>
2364<tr>
2365 <td rowspan="2">QLSTMLayer
2366 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2367 <td rowspan="2">
2368 <ul>
2369 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2370 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2371 </ul>
2372 <td>NEQLSTMLayer
2373 <td>
2374 <ul>
2375 <li>All
2376 </ul>
2377 <td>
2378 <table>
2379    <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2380 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2381 </table>
2382<tr>
2383 <td>CLQLSTMLayer
2384 <td>
2385 <ul>
2386 <li>All
2387 </ul>
2388 <td>
2389 <table>
2390    <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2391 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2392 </table>
2393<tr>
2394    <td rowspan="2">QuantizationLayer
2395    <td rowspan="2" style="width:200px;"> Function to perform a quantization layer.
2396 <td rowspan="2">
2397 <ul>
2398 <li>ANEURALNETWORKS_QUANTIZE
2399 </ul>
2400 <td>NEQuantizationLayer
2401 <td>
2402 <ul>
2403 <li>All
2404 </ul>
2405 <td>
2406 <table>
2407 <tr><th>src<th>dst
2408    <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2409 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2410 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2411 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2412    </table>
2413<tr>
2414 <td>CLQuantizationLayer
2415 <td>
2416 <ul>
2417 <li>All
2418 </ul>
2419 <td>
2420 <table>
2421 <tr><th>src<th>dst
2422    <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2423 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2424 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2425 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2426 </table>
2427<tr>
2428 <td rowspan="2">Range
2429    <td rowspan="2" style="width:200px;"> Function to generate a sequence of numbers starting from START and extending by increments of 'STEP' up to but not including 'END'.
2430 <td rowspan="2">
2431 <ul>
2432 <li>n/a
2433 </ul>
2434 <td>NERange
2435 <td>
2436 <ul>
2437 <li>All
2438 </ul>
2439 <td>
2440 <table>
2441 <tr><th>dst
2442 <tr><td>U8
2443 <tr><td>S8
2444 <tr><td>U16
2445 <tr><td>S16
2446 <tr><td>U32
2447 <tr><td>S32
2448 <tr><td>F16
2449 <tr><td>F32
2450 </table>
2451<tr>
2452 <td>CLRange
2453 <td>
2454 <ul>
2455 <li>All
2456 </ul>
2457 <td>
2458 <table>
2459 <tr><th>dst
2460 <tr><td>U8
2461 <tr><td>S8
2462 <tr><td>QASYMM8
2463 <tr><td>U16
2464 <tr><td>S16
2465 <tr><td>U32
2466 <tr><td>S32
2467 <tr><td>F16
2468 <tr><td>F32
2469 </table>
2470<tr>
2471 <td rowspan="2">ReduceMean
2472    <td rowspan="2" style="width:200px;"> Function to perform a reduce mean operation.
2473    <td rowspan="2">
2474 <ul>
2475 <li>ANEURALNETWORKS_MEAN
2476 </ul>
2477 <td>NEReduceMean
2478 <td>
2479 <ul>
2480 <li>All
2481 </ul>
2482 <td>
2483 <table>
2484 <tr><th>src<th>dst
2485    <tr><td>QASYMM8<td>QASYMM8
2486    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2487    <tr><td>F16<td>F16
2488 <tr><td>F32<td>F32
2489 </table>
2490<tr>
2491 <td>CLReduceMean
2492 <td>
2493 <ul>
2494 <li>All
2495 </ul>
2496 <td>
2497 <table>
2498 <tr><th>src<th>dst
2499 <tr><td>QASYMM8<td>QASYMM8
2500 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2501 <tr><td>F16<td>F16
2502 <tr><td>F32<td>F32
2503 </table>
2504<tr>
2505 <td rowspan="2">ReductionOperation
2506    <td rowspan="2" style="width:200px;"> Function to perform a reduction with one of the following operations: ARG_IDX_MAX (index of the max value), ARG_IDX_MIN (index of the min value), MEAN_SUM (mean of sum), PROD (product), SUM_SQUARE (sum of squares), SUM (sum), MIN (min), MAX (max).
2507    <td rowspan="2">
2508 <ul>
2509 <li>ANEURALNETWORKS_REDUCE_ALL
2510 <li>ANEURALNETWORKS_REDUCE_ANY
2511 <li>ANEURALNETWORKS_REDUCE_MAX
2512 <li>ANEURALNETWORKS_REDUCE_MIN
2513 <li>ANEURALNETWORKS_REDUCE_PROD
2514 <li>ANEURALNETWORKS_REDUCE_SUM
2515 </ul>
2516 <td>NEReductionOperation
2517 <td>
2518 <ul>
2519 <li>All
2520 </ul>
2521 <td>
2522 <table>
2523 <tr><th>src<th>dst
2524 <tr><td>QASYMM8<td>QASYMM8
2525 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2526 <tr><td>F16<td>F16
2527 <tr><td>F32<td>F32
2528 <tr><td>S32<td>S32
2529 </table>
2530<tr>
2531 <td>CLReductionOperation
2532 <td>
2533 <ul>
2534 <li>All
2535 </ul>
2536 <td>
2537 <table>
2538 <tr><th>src<th>dst
2539 <tr><td>QASYMM8<td>QASYMM8
2540 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2541 <tr><td>F16<td>F16
2542 <tr><td>F32<td>F32
2543 <tr><td>S32<td>S32
2544 </table>
2545<tr>
2546 <td rowspan="2">ReorgLayer
2547    <td rowspan="2" style="width:200px;"> Function to perform a reorg layer, rearranging blocks of spatial data of the input tensor into the channel dimension of the output tensor.
2548 <td rowspan="2">
2549 <ul>
2550 <li>n/a
2551 </ul>
2552 <td>NEReorgLayer
2553 <td>
2554 <ul>
2555 <li>NHWC
2556 <li>NCHW
2557 </ul>
2558 <td>
2559 <table>
2560 <tr><th>src<th>dst
2561 <tr><td>All<td>All
2562 </table>
2563<tr>
2564 <td>CLReorgLayer
2565 <td>
2566 <ul>
2567 <li>NHWC
2568 <li>NCHW
2569 </ul>
2570 <td>
2571 <table>
2572 <tr><th>src<th>dst
2573 <tr><td>All<td>All
2574    </table>
2575<tr>
2576 <td rowspan="2">ReshapeLayer
2577    <td rowspan="2" style="width:200px;"> Function to reshape a tensor.
2578    <td rowspan="2">
2579 <ul>
2580 <li>ANEURALNETWORKS_RESHAPE
2581 <li>ANEURALNETWORKS_SQUEEZE
2582 </ul>
2583 <td>NEReshapeLayer
2584 <td>
2585 <ul>
2586 <li>All
2587 </ul>
2588 <td>
2589 <table>
2590 <tr><th>src<th>dst
2591 <tr><td>All<td>All
2592 </table>
2593<tr>
2594 <td>CLReshapeLayer
2595 <td>
2596 <ul>
2597 <li>All
2598 </ul>
2599 <td>
2600 <table>
2601 <tr><th>src<th>dst
2602 <tr><td>All<td>All
2603 </table>
2604<tr>
2605    <td rowspan="2">Reverse
2606    <td rowspan="2" style="width:200px;"> Function to reverse a tensor along the given axes.
2607 <td rowspan="2">
2608 <ul>
2609 <li>n/a
2610 </ul>
2611 <td>NEReverse
2612 <td>
2613 <ul>
2614 <li>All
2615 </ul>
2616 <td>
2617 <table>
2618 <tr><th>src0<th>src1<th>dst
2619 <tr><td>All<td>U32<td>All
2620 </table>
2621<tr>
2622 <td>CLReverse
2623 <td>
2624 <ul>
2625 <li>All
2626 </ul>
2627 <td>
2628 <table>
2629 <tr><th>src0<th>src1<th>dst
2630 <tr><td>All<td>U32<td>All
2631 </table>
2632<tr>
2633 <td rowspan="2">RNNLayer
2634    <td rowspan="2" style="width:200px;"> Function to perform a recurrent neural network (RNN) layer.
2635 <td rowspan="2">
2636 <ul>
2637 <li>ANEURALNETWORKS_RNN
2638 </ul>
2639 <td>NERNNLayer
2640 <td>
2641 <ul>
2642 <li>NHWC
2643 <li>NCHW
2644 </ul>
2645 <td>
2646 <table>
2647 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2648 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2649 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2650 </table>
2651<tr>
2652 <td>CLRNNLayer
2653 <td>
2654 <ul>
2655 <li>NHWC
2656 <li>NCHW
2657 </ul>
2658 <td>
2659 <table>
2660 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2661 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2662 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2663 </table>
2664<tr>
2665 <td rowspan="2">ROIAlignLayer
2666 <td rowspan="2" style="width:200px;"> Function to perform ROI alignment.
2667 <td rowspan="2">
2668 <ul>
2669 <li>ANEURALNETWORKS_ROI_ALIGN
2670 </ul>
2671 <td>NEROIAlignLayer
2672 <td>
2673 <ul>
2674 <li>All
2675 </ul>
2676 <td>
2677 <table>
2678 <tr><th>src0<th>src1<th>dst
2679 <tr><td>F16<td>F16<td>F16
2680 <tr><td>F32<td>F32<td>F32
2681 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2682 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2683 </table>
2684<tr>
2685 <td>CLROIAlignLayer
2686 <td>
2687 <ul>
2688 <li>All
2689 </ul>
2690 <td>
2691 <table>
2692 <tr><th>src0<th>src1<th>dst
2693 <tr><td>F16<td>F16<td>F16
2694 <tr><td>F32<td>F32<td>F32
2695 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2696 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2697 </table>
2698<tr>
2699 <td rowspan="2">ROIPoolingLayer
2700 <td rowspan="2" style="width:200px;"> Function to perform ROI pooling.
2701 <td rowspan="2">
2702 <ul>
2703 <li>ANEURALNETWORKS_ROI_POOLING
2704 </ul>
2705 <td>NEROIPoolingLayer
2706 <td>
2707 <ul>
2708 <li>All
2709 </ul>
2710 <td>
2711 <table>
2712 <tr><th>src0<th>src1<th>dst
2713 <tr><td>F32<td>U16<td>F32
2714 <tr><td>QASYMM8<td>U16<td>QASYMM8
2715 </table>
2716<tr>
2717 <td>CLROIPoolingLayer
2718 <td>
2719 <ul>
2720 <li>All
2721 </ul>
2722 <td>
2723 <table>
2724 <tr><th>src0<th>src1<th>dst
2725 <tr><td>F16<td>U16<td>F16
2726 <tr><td>F32<td>U16<td>F32
2727 <tr><td>QASYMM8<td>U16<td>QASYMM8
2728 </table>
2729<tr>
2730    <td rowspan="2">Scale
2731    <td rowspan="2" style="width:200px;"> Function to resize a tensor using one of the following interpolation methods: Bilinear or Nearest neighbor.
2732    <td rowspan="2">
2733 <ul>
2734 <li>ANEURALNETWORKS_RESIZE_BILINEAR
2735 <li>ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
2736 </ul>
2737 <td>NEScale
2738 <td>
2739 <ul>
2740 <li>NHWC
2741 <li>NCHW
2742 </ul>
2743 <td>
2744 <table>
2745 <tr><th>src<th>dst
2746 <tr><td>QASYMM8<td>QASYMM8
2747 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2748 <tr><td>F16<td>F16
2749 <tr><td>F32<td>F32
2750 <tr><td>U8<td>U8
2751 <tr><td>S16<td>S16
2752 </table>
2753<tr>
2754 <td>CLScale
2755 <td>
2756 <ul>
2757 <li>NHWC
2758 <li>NCHW
2759 </ul>
2760 <td>
2761 <table>
2762 <tr><th>src<th>dst
2763 <tr><td>QASYMM8<td>QASYMM8
2764 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2765 <tr><td>F16<td>F16
2766 <tr><td>F32<td>F32
2767 <tr><td>U8<td>U8
2768 <tr><td>S16<td>S16
2769 </table>
2770<tr>
2771    <td rowspan="2">Select
2772    <td rowspan="2" style="width:200px;"> Function to select values from two tensors depending on an input tensor of booleans.
2773 <td rowspan="2">
2774 <ul>
2775 <li>ANEURALNETWORKS_SELECT
2776 </ul>
2777 <td>NESelect
2778 <td>
2779 <ul>
2780 <li>All
2781 </ul>
2782 <td>
2783 <table>
2784 <tr><th>src0<th>src1<th>src2<th>dst
2785 <tr><td>U8<td>All<td>All<td>All
2786 </table>
2787<tr>
2788 <td>CLSelect
2789 <td>
2790 <ul>
2791 <li>All
2792 </ul>
2793 <td>
2794 <table>
2795 <tr><th>src0<th>src1<th>src2<th>dst
2796 <tr><td>U8<td>All<td>All<td>All
2797 </table>
2798<tr>
2799    <td rowspan="2">Slice
2800 <td rowspan="2" style="width:200px;"> Function to perform tensor slicing.
2801 <td rowspan="2">
2802 <ul>
2803 <li>ANEURALNETWORKS_SLICE
2804 </ul>
2805 <td>NESlice
2806 <td>
2807 <ul>
2808 <li>All
2809 </ul>
2810 <td>
2811 <table>
2812 <tr><th>src<th>dst
2813 <tr><td>All<td>All
2814 </table>
2815<tr>
2816 <td>CLSlice
2817 <td>
2818 <ul>
2819 <li>All
2820 </ul>
2821 <td>
2822 <table>
2823 <tr><th>src<th>dst
2824 <tr><td>All<td>All
2825 </table>
2826<tr>
2827    <td rowspan="2">SoftmaxLayer
2828    <td rowspan="2" style="width:200px;"> Function to compute a Softmax layer or a Log Softmax layer.
2829 <td rowspan="2">
2830 <ul>
2831 <li>ANEURALNETWORKS_LOG_SOFTMAX
2832 <li>ANEURALNETWORKS_SOFTMAX
2833 </ul>
2834 <td>NESoftmaxLayerGeneric
2835 <td>
2836 <ul>
2837 <li>All
2838 </ul>
2839 <td>
2840 <table>
2841 <tr><th>src<th>dst
2842 <tr><td>QASYMM8<td>QASYMM8
2843 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2844 <tr><td>F16<td>F16
2845 <tr><td>F32<td>F32
2846 </table>
2847<tr>
2848 <td>CLSoftmaxLayerGeneric
2849 <td>
2850 <ul>
2851 <li>All
2852 </ul>
2853 <td>
2854 <table>
2855 <tr><th>src<th>dst
2856 <tr><td>QASYMM8<td>QASYMM8
2857 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2858 <tr><td>F16<td>F16
2859 <tr><td>F32<td>F32
2860 </table>
2861<tr>
2862    <td rowspan="2">SpaceToBatchLayer
2863    <td rowspan="2" style="width:200px;"> Function to rearrange blocks of spatial data into the batch dimension.
2864 <td rowspan="2">
2865 <ul>
2866 <li>ANEURALNETWORKS_SPACE_TO_BATCH_ND
2867 </ul>
2868 <td>NESpaceToBatchLayer
2869 <td>
2870 <ul>
2871 <li>NHWC
2872 <li>NCHW
2873 </ul>
2874 <td>
2875 <table>
2876 <tr><th>src0<th>src1<th>src2<th>dst
2877 <tr><td>All<td>S32<td>S32<td>All
2878 </table>
2879<tr>
2880 <td>CLSpaceToBatchLayer
2881 <td>
2882 <ul>
2883 <li>NHWC
2884 <li>NCHW
2885 </ul>
2886 <td>
2887 <table>
2888 <tr><th>src0<th>src1<th>src2<th>dst
2889 <tr><td>All<td>S32<td>S32<td>All
2890 </table>
2891<tr>
2892 <td rowspan="2">SpaceToDepthLayer
2893 <td rowspan="2" style="width:200px;"> Function to rearrange blocks of spatial data into depth.
2894 <td rowspan="2">
2895 <ul>
2896 <li>ANEURALNETWORKS_SPACE_TO_DEPTH
2897 </ul>
2898 <td>NESpaceToDepthLayer
2899 <td>
2900 <ul>
2901 <li>NHWC
2902 <li>NCHW
2903 </ul>
2904 <td>
2905 <table>
2906 <tr><th>src<th>dst
2907 <tr><td>All<td>All
2908 </table>
2909<tr>
2910 <td>CLSpaceToDepthLayer
2911 <td>
2912 <ul>
2913 <li>NHWC
2914 <li>NCHW
2915 </ul>
2916 <td>
2917 <table>
2918 <tr><th>src<th>dst
2919 <tr><td>All<td>All
2920 </table>
2921<tr>
2922 <td rowspan="2">Split
2923 <td rowspan="2" style="width:200px;"> Function to split a tensor along a given axis.
2924 <td rowspan="2">
2925 <ul>
2926 <li>ANEURALNETWORKS_SPLIT
2927 </ul>
2928 <td>NESplit
2929 <td>
2930 <ul>
2931 <li>All
2932 </ul>
2933 <td>
2934 <table>
2935 <tr><th>src<th>dst
2936 <tr><td>All<td>All
2937 </table>
2938<tr>
2939 <td>CLSplit
2940 <td>
2941 <ul>
2942 <li>All
2943 </ul>
2944 <td>
2945 <table>
2946 <tr><th>src<th>dst
2947 <tr><td>All<td>All
2948 </table>
2949<tr>
2950 <td rowspan="2">StackLayer
2951 <td rowspan="2" style="width:200px;"> Function to stack tensors along an axis.
2952 <td rowspan="2">
2953 <ul>
2954 <li>n/a
2955 </ul>
2956 <td>NEStackLayer
2957 <td>
2958 <ul>
2959 <li>All
2960 </ul>
2961 <td>
2962 <table>
2963 <tr><th>src<th>dst
2964 <tr><td>All<td>All
2965 </table>
2966<tr>
2967 <td>CLStackLayer
2968 <td>
2969 <ul>
2970 <li>All
2971 </ul>
2972 <td>
2973 <table>
2974 <tr><th>src<th>dst
2975 <tr><td>All<td>All
2976 </table>
2977<tr>
2978    <td rowspan="2">StridedSlice
2979 <td rowspan="2" style="width:200px;"> Function to extract a strided slice of a tensor.
2980 <td rowspan="2">
2981 <ul>
2982 <li>ANEURALNETWORKS_STRIDED_SLICE
2983 </ul>
2984 <td>NEStridedSlice
2985 <td>
2986 <ul>
2987 <li>All
2988 </ul>
2989 <td>
2990 <table>
2991 <tr><th>src<th>dst
2992 <tr><td>All<td>All
2993 </table>
2994<tr>
2995 <td>CLStridedSlice
2996 <td>
2997 <ul>
2998 <li>All
2999 </ul>
3000 <td>
3001 <table>
3002 <tr><th>src<th>dst
3003 <tr><td>All<td>All
3004 </table>
3005<tr>
3006    <td rowspan="2">Tile
3007 <td rowspan="2" style="width:200px;"> Function to construct a tensor by tiling a given tensor.
3008 <td rowspan="2">
3009 <ul>
3010 <li>ANEURALNETWORKS_TILE
3011 </ul>
3012 <td>NETile
3013 <td>
3014 <ul>
3015 <li>All
3016 </ul>
3017 <td>
3018 <table>
3019 <tr><th>src<th>dst
3020 <tr><td>All<td>All
3021 </table>
3022<tr>
3023 <td>CLTile
3024 <td>
3025 <ul>
3026 <li>All
3027 </ul>
3028 <td>
3029 <table>
3030 <tr><th>src<th>dst
3031 <tr><td>All<td>All
3032 </table>
3033<tr>
3034    <td rowspan="2">Transpose
3035    <td rowspan="2" style="width:200px;"> Function to transpose a 2D tensor.
3036    <td rowspan="2">
3037 <ul>
3038 <li>ANEURALNETWORKS_TRANSPOSE
3039 </ul>
3040 <td>NETranspose
3041 <td>
3042 <ul>
3043 <li>All
3044 </ul>
3045 <td>
3046 <table>
3047 <tr><th>src<th>dst
3048 <tr><td>All<td>All
3049 </table>
3050<tr>
3051 <td>CLTranspose
3052 <td>
3053 <ul>
3054 <li>All
3055 </ul>
3056 <td>
3057 <table>
3058 <tr><th>src<th>dst
3059 <tr><td>All<td>All
3060 </table>
3061<tr>
3062 <td rowspan="2">Unstack
3063 <td rowspan="2" style="width:200px;"> Function to unpack a rank-R tensor into rank-(R-1) tensors.
3064 <td rowspan="2">
3065 <ul>
3066 <li>n/a
3067 </ul>
3068 <td>NEUnstack
3069 <td>
3070 <ul>
3071 <li>All
3072 </ul>
3073 <td>
3074 <table>
3075 <tr><th>src<th>dst
3076 <tr><td>All<td>All
3077 </table>
3078<tr>
3079 <td>CLUnstack
3080 <td>
3081 <ul>
3082 <li>All
3083 </ul>
3084 <td>
3085 <table>
3086 <tr><th>src<th>dst
3087 <tr><td>All<td>All
3088 </table>
3089<tr>
3090 <td rowspan="2">WinogradConvolutionLayer
3091    <td rowspan="2" style="width:200px;"> Function to perform a convolution using the Winograd algorithm.
3092 <td rowspan="2">
3093 <ul>
3094 <li>ANEURALNETWORKS_CONV_2D
3095 </ul>
3096 <td>NEWinogradConvolutionLayer
3097 <td>
3098 <ul>
3099 <li>NHWC
3100 <li>NCHW
3101 </ul>
3102 <td>
3103 <table>
3104 <tr><th>src0<th>src1<th>src2<th>dst
3105 <tr><td>F16<td>F16<td>F16<td>F16
3106 <tr><td>F32<td>F32<td>F32<td>F32
3107 </table>
3108<tr>
3109 <td>CLWinogradConvolutionLayer
3110 <td>
3111 <ul>
3112 <li>NHWC
3113 <li>NCHW
3114 </ul>
3115 <td>
3116 <table>
3117 <tr><th>src0<th>src1<th>src2<th>dst
3118 <tr><td>F16<td>F16<td>F16<td>F16
3119 <tr><td>F32<td>F32<td>F32<td>F32
3120 </table>
3121</table>
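
Every function in the table above is used in the same way: initialise the tensor metadata, call configure() on the function, allocate the tensors, then call run(). The snippet below is a minimal sketch of that pattern for the Neon backend, using NEReshapeLayer from the table; the shapes, the F32 data type and the helper function name are illustrative assumptions rather than requirements. Most functions also expose a static validate() method that can be used to check a src/dst combination against the data type tables above before configuring.

@code{.cpp}
// Minimal sketch of the configure/run pattern for a Neon (NE*) function.
// NEReshapeLayer is used as the example; the shapes, the data type and the
// name run_ne_reshape_example() are illustrative assumptions.
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/functions/NEReshapeLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

void run_ne_reshape_example()
{
    // Describe the source and destination tensors (metadata only at this point).
    Tensor src, dst;
    src.allocator()->init(TensorInfo(TensorShape(4U, 4U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::F32));

    // Check that this src/dst combination is supported
    // (the valid combinations are the ones listed in the table above).
    if(!bool(NEReshapeLayer::validate(src.info(), dst.info())))
    {
        return;
    }

    // Configure the function, then allocate the backing memory.
    NEReshapeLayer reshape;
    reshape.configure(&src, &dst);
    src.allocator()->allocate();
    dst.allocator()->allocate();

    // ... fill src with data ...

    reshape.run();
}
@endcode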
3122
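The OpenCL (CL*) functions follow the same configure/run pattern, with two extra steps: the CLScheduler is initialised once before any CL function is configured, and CLTensor buffers are mapped and unmapped for host access. The sketch below mirrors the Neon one under the same illustrative assumptions.

@code{.cpp}
// Minimal sketch of the configure/run pattern for an OpenCL (CL*) function.
// CLReshapeLayer is used as the example; the shapes, the data type and the
// name run_cl_reshape_example() are illustrative assumptions.
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"
#include "arm_compute/runtime/CL/functions/CLReshapeLayer.h"

using namespace arm_compute;

void run_cl_reshape_example()
{
    // Create the OpenCL context, command queue and kernel library (once per application).
    CLScheduler::get().default_init();

    CLTensor src, dst;
    src.allocator()->init(TensorInfo(TensorShape(4U, 4U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::F32));

    CLReshapeLayer reshape;
    reshape.configure(&src, &dst);
    src.allocator()->allocate();
    dst.allocator()->allocate();

    // Map the source buffer to fill it from the host, then unmap it before running.
    src.map();
    // ... fill src with data ...
    src.unmap();

    reshape.run();

    // Wait for the queued OpenCL kernels to finish before reading dst back.
    CLScheduler::get().sync();
}
@endcode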
3123*/
3124} // namespace