///
/// Copyright (c) 2021 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/**
@page operators_list Supported Operators

@tableofcontents

@section S9_1_operators_list Supported Operators

Compute Library supports the operators listed in the table below.

Compute Library supports a wide range of data types; detailed information can be found in the documentation of each kernel/function.
The main data types that the Machine Learning functions support are the following (a minimal usage sketch is shown after this list):
 <ul>
 <li>BFLOAT16: 16-bit non-standard brain floating point
 <li>QASYMM8: 8-bit unsigned asymmetric quantized
 <li>QASYMM8_SIGNED: 8-bit signed asymmetric quantized
 <li>QSYMM8_PER_CHANNEL: 8-bit signed symmetric quantized (used for the weights)
 <li>QSYMM8: 8-bit signed symmetric quantized
 <li>QSYMM16: 16-bit signed symmetric quantized
 <li>F32: 32-bit single precision floating point
 <li>F16: 16-bit half precision floating point
 <li>S32: 32-bit signed integer
 <li>U8: 8-bit unsigned char
 <li>All: Agnostic to any specific data type
 </ul>

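To make the table concrete, the sketch below (not part of the generated table) configures and runs one of the functions listed further down, NEActivationLayer, using the F32 src/dst combination from its row. The tensor shape and the choice of activation function are illustrative assumptions only; the same configure/allocate/run pattern applies to the other functions in the table.

@code{.cpp}
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // F32 -> F32 is one of the src/dst combinations listed for ActivationLayer.
    // The 32x32x16 shape is chosen only for this example.
    Tensor src{}, dst{};
    src.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));

    NEActivationLayer act{};
    act.configure(&src, &dst, ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU));

    // Backing memory is allocated after configuration, then the function is executed.
    src.allocator()->allocate();
    dst.allocator()->allocate();
    act.run();
    return 0;
}
@endcode
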
Compute Library supports the following data layouts (fast-changing dimension from right to left); a sketch showing how to query support for a particular layout and data type follows the list:
 <ul>
 <li>NHWC: The native layout of Compute Library, which delivers the best performance; channels are in the fastest changing dimension
 <li>NCHW: Legacy layout where width is in the fastest changing dimension
 <li>NDHWC: New data layout for supporting 3D operators
 <li>All: Agnostic to any specific data layout
 </ul>
where N = batches, C = channels, H = height, W = width, D = depth

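Whether a backend supports a given combination of data type and data layout from the table can also be checked at runtime: most functions expose a static validate() method that reports support without allocating memory. The sketch below is a minimal example of that pattern; the NHWC/F16 ActivationLayer configuration and the tensor shape are illustrative assumptions.

@code{.cpp}
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/runtime/NEON/NEFunctions.h"

using namespace arm_compute;

bool nhwc_f16_activation_supported()
{
    // Only tensor descriptions are needed for a validate() query; no memory is allocated.
    // The 8x8x4 shape is illustrative.
    TensorInfo src(TensorShape(8U, 8U, 4U), 1, DataType::F16);
    TensorInfo dst(TensorShape(8U, 8U, 4U), 1, DataType::F16);
    src.set_data_layout(DataLayout::NHWC);
    dst.set_data_layout(DataLayout::NHWC);

    const Status status = NEActivationLayer::validate(
        &src, &dst, ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::LOGISTIC));
    return status.error_code() == ErrorCode::OK;
}
@endcode
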
60<table>
61<caption id="multi_row"></caption>
62<tr>
63 <th>Function
64 <th>Description
65 <th>Equivalent Android NNAPI Op
66 <th>Backends
67 <th>Data Layouts
68 <th>Data Types
69<tr>
70 <td rowspan="2">ActivationLayer
71 <td rowspan="2" style="width:200px;"> Function to simulate an activation layer with the specified activation function.
72 <td rowspan="2">
73 <ul>
74 <li>ANEURALNETWORKS_ELU
75 <li>ANEURALNETWORKS_HARD_SWISH
76 <li>ANEURALNETWORKS_LOGISTIC
77 <li>ANEURALNETWORKS_RELU
78 <li>ANEURALNETWORKS_RELU1
79 <li>ANEURALNETWORKS_RELU6
80 <li>ANEURALNETWORKS_TANH
81 </ul>
82 <td>NEActivationLayer
83 <td>
84 <ul>
85 <li>All
86 </ul>
87 <td>
88 <table>
89 <tr><th>src<th>dst
90 <tr><td>QASYMM8<td>QASYMM8
91 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
92 <tr><td>QSYMM16<td>QSYMM16
93 <tr><td>F16<td>F16
94 <tr><td>F32<td>F32
95 </table>
96<tr>
97 <td>CLActivationLayer
98 <td>
99 <ul>
100 <li>All
101 </ul>
102 <td>
103 <table>
104 <tr><th>src<th>dst
105 <tr><td>QASYMM8<td>QASYMM8
106 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
107 <tr><td>QSYMM16<td>QSYMM16
108 <tr><td>F16<td>F16
109 <tr><td>F32<td>F32
110 </table>
111<tr>
 <td rowspan="2">ArgMinMaxLayer
113 <td rowspan="2" style="width:200px;"> Function to calculate the index of the minimum or maximum values in a tensor based on an axis.
114 <td rowspan="2">
115 <ul>
116 <li>ANEURALNETWORKS_ARGMAX
117 <li>ANEURALNETWORKS_ARGMIN
118 </ul>
119 <td>NEArgMinMaxLayer
120 <td>
121 <ul>
122 <li>All
123 </ul>
124 <td>
125 <table>
126 <tr><th>src<th>dst
127 <tr><td>QASYMM8<td>U32, S32
128 <tr><td>QASYMM8_SIGNED<td>U32, S32
129 <tr><td>S32<td>U32, S32
130 <tr><td>F16<td>U32, S32
131 <tr><td>F32<td>U32, S32
132 </table>
133<tr>
134 <td>CLArgMinMaxLayer
135 <td>
136 <ul>
137 <li>All
138 </ul>
139 <td>
140 <table>
141 <tr><th>src<th>dst
142 <tr><td>QASYMM8<td>U32, S32
143 <tr><td>QASYMM8_SIGNED<td>U32, S32
144 <tr><td>S32<td>U32, S32
145 <tr><td>F16<td>U32, S32
146 <tr><td>F32<td>U32, S32
147 </table>
148<tr>
 <td rowspan="1">ArithmeticAddition
150 <td rowspan="1" style="width:200px;"> Function to add 2 tensors.
151 <td rowspan="1">
152 <ul>
153 <li>ANEURALNETWORKS_ADD
154 </ul>
155 <td>NEArithmeticAddition
156 <td>
157 <ul>
158 <li>All
159 </ul>
160 <td>
161 <table>
162 <tr><th>src0<th>src1<th>dst
163 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
164 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
165 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
166 <tr><td>QSYMM16<td>QSYMM16<td>S32
167 <tr><td>U8<td>U8<td>U8
 <tr><td>S16<td>S16<td>S16
169 <tr><td>S32<td>S32<td>S32
170 <tr><td>F16<td>F16<td>F16
171 <tr><td>F32<td>F32<td>F32
172 </table>
173<tr>
174 <td rowspan="1">ArithmeticSubtraction
 <td rowspan="1" style="width:200px;"> Function to subtract 2 tensors.
176 <td rowspan="1">
177 <ul>
178 <li>ANEURALNETWORKS_SUB
179 </ul>
180 <td>NEArithmeticSubtraction
181 <td>
182 <ul>
183 <li>All
184 </ul>
185 <td>
186 <table>
187 <tr><th>src0<th>src1<th>dst
188 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
189 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
190 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
191 <tr><td>QSYMM16<td>QSYMM16<td>S32
192 <tr><td>U8<td>U8<td>U8
 <tr><td>S16<td>S16<td>S16
194 <tr><td>S32<td>S32<td>S32
195 <tr><td>F16<td>F16<td>F16
196 <tr><td>F32<td>F32<td>F32
197 </table>
198<tr>
 <td rowspan="2">BatchNormalizationLayer
200 <td rowspan="2" style="width:200px;"> Function to perform batch normalization.
201 <td rowspan="2">
202 <ul>
203 <li>n/a
204 </ul>
205 <td>NEBatchNormalizationLayer
206 <td>
207 <ul>
208 <li>NHWC
209 <li>NCHW
210 </ul>
211 <td>
212 <table>
213 <tr><th>src<th>dst
214 <tr><td>F32<td>F32
215 <tr><td>F16<td>F16
216 </table>
217<tr>
218 <td>CLBatchNormalizationLayer
219 <td>
220 <ul>
221 <li>NHWC
222 <li>NCHW
223 </ul>
224 <td>
225 <table>
226 <tr><th>src<th>dst
227 <tr><td>F32<td>F32
228 <tr><td>F16<td>F16
229 </table>
230<tr>
231 <td rowspan="2">BatchToSpaceLayer
232 <td rowspan="2" style="width:200px;"> Batch to space transformation.
233 <td rowspan="2">
234 <ul>
235 <li>ANEURALNETWORKS_BATCH_TO_SPACE_ND
236 </ul>
237 <td>NEBatchToSpaceLayer
238 <td>
239 <ul>
240 <li>NHWC
241 <li>NCHW
242 </ul>
243 <td>
244 <table>
245 <tr><th>src0<th>src1<th>dst
 <tr><td>All<td>S32<td>All
247 </table>
248<tr>
249 <td>CLBatchToSpaceLayer
250 <td>
251 <ul>
252 <li>NHWC
253 <li>NCHW
254 </ul>
255 <td>
256 <table>
257 <tr><th>src0<th>src1<th>dst
 <tr><td>All<td>S32<td>All
259 </table>
260<tr>
261 <td rowspan="2">BitwiseAnd
 <td rowspan="2" style="width:200px;"> Function to perform bitwise AND between 2 tensors.
 <td rowspan="2">
264 <ul>
265 <li>ANEURALNETWORKS_LOGICAL_AND
266 </ul>
267 <td>NEBitwiseAnd
268 <td>
269 <ul>
270 <li>All
271 </ul>
272 <td>
273 <table>
274 <tr><th>src<th>dst
275 <tr><td>U8<td>U8
276 </table>
277<tr>
278 <td>CLBitwiseAnd
279 <td>
280 <ul>
281 <li>All
282 </ul>
283 <td>
284 <table>
285 <tr><th>src<th>dst
286 <tr><td>U8<td>U8
287 </table>
288<tr>
289 <td rowspan="2">BitwiseNot
 <td rowspan="2" style="width:200px;"> Function to perform bitwise NOT.
 <td rowspan="2">
292 <ul>
293 <li>ANEURALNETWORKS_LOGICAL_NOT
294 </ul>
295 <td>NEBitwiseNot
296 <td>
297 <ul>
298 <li>All
299 </ul>
300 <td>
301 <table>
302 <tr><th>src<th>dst
303 <tr><td>U8<td>U8
304 </table>
305<tr>
306 <td>CLBitwiseNot
307 <td>
308 <ul>
309 <li>All
310 </ul>
311 <td>
312 <table>
313 <tr><th>src<th>dst
314 <tr><td>U8<td>U8
315 </table>
316<tr>
317 <td rowspan="2">BitwiseOr
 <td rowspan="2" style="width:200px;"> Function to perform bitwise OR between 2 tensors.
 <td rowspan="2">
320 <ul>
321 <li>ANEURALNETWORKS_LOGICAL_OR
322 </ul>
323 <td>NEBitwiseOr
324 <td>
325 <ul>
326 <li>All
327 </ul>
328 <td>
329 <table>
330 <tr><th>src<th>dst
331 <tr><td>U8<td>U8
332 </table>
333<tr>
334 <td>CLBitwiseOr
335 <td>
336 <ul>
337 <li>All
338 </ul>
339 <td>
340 <table>
341 <tr><th>src<th>dst
342 <tr><td>U8<td>U8
343 </table>
344<tr>
345 <td rowspan="2">BitwiseXor
 <td rowspan="2" style="width:200px;"> Function to perform bitwise XOR between 2 tensors.
 <td rowspan="2">
348 <ul>
349 <li>n/a
350 </ul>
351 <td>NEBitwiseXor
352 <td>
353 <ul>
354 <li>All
355 </ul>
356 <td>
357 <table>
358 <tr><th>src<th>dst
359 <tr><td>U8<td>U8
360 </table>
361<tr>
362 <td>CLBitwiseXor
363 <td>
364 <ul>
365 <li>All
366 </ul>
367 <td>
368 <table>
369 <tr><th>src<th>dst
370 <tr><td>U8<td>U8
371 </table>
372<tr>
373 <td rowspan="2">BoundingBoxTransform
374 <td rowspan="2" style="width:200px;"> Transform proposal bounding boxes to target bounding box using bounding box deltas.
375 <td rowspan="2">
376 <ul>
377 <li>n/a
378 </ul>
379 <td>NEBoundingBoxTransform
380 <td>
381 <ul>
382 <li>NHWC
383 <li>NCHW
384 </ul>
385 <td>
386 <table>
387 <tr><th>src0<th>src1<th>dst
388 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
389 <tr><td>F16<td>F16<td>F16
390 <tr><td>F32<td>F32<td>F32
391 </table>
392<tr>
393 <td>CLBoundingBoxTransform
394 <td>
395 <ul>
396 <li>NHWC
397 <li>NCHW
398 </ul>
399 <td>
400 <table>
401 <tr><th>src0<th>src1<th>dst
402 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
403 <tr><td>F16<td>F16<td>F16
404 <tr><td>F32<td>F32<td>F32
405 </table>
406<tr>
407 <td rowspan="2">Cast
408 <td rowspan="2" style="width:200px;"> Function to cast a tensor.
409 <td rowspan="2">
410 <ul>
411 <li>ANEURALNETWORKS_CAST
412 </ul>
413 <td>NECast
414 <td>
415 <ul>
416 <li>All
417 </ul>
418 <td>
419 <table>
420 <tr><th>src<th>dst
421 <tr><td>QASYMM8_SIGNED<td>S16, S32, F32, F16
422 <tr><td>QASYMM8<td>U16, S16, S32, F32, F16
423 <tr><td>U8<td>U16, S16, S32, F32, F16
424 <tr><td>U16<td>U8, U32
425 <tr><td>S16<td>QASYMM8_SIGNED, U8, S32
426 <tr><td>F16<td>QASYMM8_SIGNED, QASYMM8, F32, S32, U8
427 <tr><td>S32<td>QASYMM8_SIGNED, QASYMM8, F16, F32, U8
428 <tr><td>F32<td>QASYMM8_SIGNED, QASYMM8, BFLOAT16, F16, S32, U8
429 </table>
430<tr>
431 <td>CLCast
432 <td>
433 <ul>
434 <li>All
435 </ul>
436 <td>
437 <table>
438 <tr><th>src<th>dst
439 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
440 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
441 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
442 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
443 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
444 <tr><td>F16<td>U8, S8, U16, S16, U32, F32
445 <tr><td>F32<td>U8, S8, U16, S16, U32, F16
446 </table>
447<tr>
448 <td rowspan="2">ChannelShuffleLayer
449 <td rowspan="2" style="width:200px;"> Function to shuffle the channels of the input tensor.
450 <td rowspan="2">
451 <ul>
452 <li>ANEURALNETWORKS_CHANNEL_SHUFFLE
453 </ul>
454 <td>NEChannelShuffleLayer
455 <td>
456 <ul>
457 <li>NCHW
 <li>NHWC
 </ul>
460 <td>
461 <table>
462 <tr><th>src<th>dst
463 <tr><td>All<td>All
464 </table>
465<tr>
466 <td>CLChannelShuffleLayer
467 <td>
468 <ul>
469 <li>NCHW
 <li>NHWC
 </ul>
472 <td>
473 <table>
474 <tr><th>src<th>dst
475 <tr><td>All<td>All
476 </table>
477<tr>
 <td rowspan="1">Comparison
479 <td rowspan="1" style="width:200px;"> Function to compare 2 tensors.
480 <td rowspan="1">
481 <ul>
482 <li>ANEURALNETWORKS_EQUAL
483 <li>ANEURALNETWORKS_GREATER
484 <li>ANEURALNETWORKS_GREATER_EQUAL
485 <li>ANEURALNETWORKS_LESS
486 <li>ANEURALNETWORKS_LESS_EQUAL
487 <li>ANEURALNETWORKS_NOT_EQUAL
488 </ul>
489 <td>CLComparison
490 <td>
491 <ul>
492 <li>All
493 </ul>
494 <td>
495 <table>
496 <tr><th>src0<th>src1<th>dst
497 <tr><td>All<td>All<td>U8
498 </table>
499<tr>
 <td rowspan="2">ConcatenateLayer
501 <td rowspan="2" style="width:200px;"> Function to concatenate tensors along a given axis.
502 <td rowspan="2">
503 <ul>
504 <li>ANEURALNETWORKS_CONCATENATION
505 </ul>
506 <td>NEConcatenateLayer
507 <td>
508 <ul>
509 <li>All
510 </ul>
511 <td>
512 <table>
513 <tr><th>src<th>dst
514 <tr><td>QASYMM8<td>QASYMM8
515 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
516 <tr><td>F16<td>F16
517 <tr><td>F32<td>F32
518 </table>
519<tr>
520 <td>CLConcatenateLayer
521 <td>
522 <ul>
523 <li>All
524 </ul>
525 <td>
526 <table>
527 <tr><th>src<th>dst
528 <tr><td>QASYMM8<td>QASYMM8
529 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
530 <tr><td>F16<td>F16
531 <tr><td>F32<td>F32
532 </table>
533<tr>
534 <td rowspan="2">ConvertFullyConnectedWeights
 <td rowspan="2" style="width:200px;"> Function to transpose the weights for the fully connected layer.
 <td rowspan="2">
537 <ul>
 <li>n/a
 </ul>
540 <td>NEConvertFullyConnectedWeights
541 <td>
542 <ul>
543 <li>NHWC
544 <li>NCHW
545 </ul>
546 <td>
547 <table>
548 <tr><th>src<th>dst
549 <tr><td>All<td>All
550 </table>
551<tr>
552 <td>CLConvertFullyConnectedWeights
553 <td>
554 <ul>
555 <li>NHWC
556 <li>NCHW
557 </ul>
558 <td>
559 <table>
560 <tr><th>src<th>dst
561 <tr><td>All<td>All
562 </table>
563<tr>
 <td rowspan="2">ConvolutionLayer
565 <td rowspan="2" style="width:200px;"> Function to compute a convolution layer.
566 <td rowspan="2">
567 <ul>
568 <li>ANEURALNETWORKS_CONV_2D
569 </ul>
570 <td>NEConvolutionLayer
571 <td>
572 <ul>
573 <li>NHWC
574 <li>NCHW
575 </ul>
576 <td>
577 <table>
578 <tr><th>src0<th>src1<th>src2<th>dst
579 <tr><td>F16<td>F16<td>F16<td>F16
580 <tr><td>F32<td>F32<td>F32<td>F32
581 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
582 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
583 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
584 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
585 </table>
586<tr>
587 <td>CLConvolutionLayer
588 <td>
589 <ul>
590 <li>NHWC
591 <li>NCHW
592 </ul>
593 <td>
594 <table>
595 <tr><th>src0<th>src1<th>src2<th>dst
596 <tr><td>F16<td>F16<td>F16<td>F16
597 <tr><td>F32<td>F32<td>F32<td>F32
598 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
599 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
600 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
601 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
602 </table>
603<tr>
 <td rowspan="2">Conv3D
 <td rowspan="2" style="width:200px;"> Function to compute a 3D convolution layer.
606 <td rowspan="2">
607 <ul>
608 <li>ANEURALNETWORKS_CONV_3D
609 </ul>
610 <td>NEConv3D
611 <td>
612 <ul>
613 <li>NDHWC
614 </ul>
615 <td>
616 <table>
617 <tr><th>src0<th>src1<th>src2<th>dst
618 <tr><td>F16<td>F16<td>F16<td>F16
619 <tr><td>F32<td>F32<td>F32<td>F32
620 </table>
621<tr>
622 <td>CLConv3D
623 <td>
624 <ul>
625 <li>NDHWC
626 </ul>
627 <td>
628 <table>
629 <tr><th>src0<th>src1<th>src2<th>dst
630 <tr><td>F16<td>F16<td>F16<td>F16
631 <tr><td>F32<td>F32<td>F32<td>F32
632 </table>
633<tr>
 <td rowspan="2">Copy
635 <td rowspan="2" style="width:200px;"> Function to copy a tensor.
636 <td rowspan="2">
637 <ul>
 <li>n/a
 </ul>
640 <td>NECopy
641 <td>
642 <ul>
643 <li>All
644 </ul>
645 <td>
646 <table>
647 <tr><th>src<th>dst
648 <tr><td>All<td>All
649 </table>
650<tr>
651 <td>CLCopy
652 <td>
653 <ul>
654 <li>All
655 </ul>
656 <td>
657 <table>
658 <tr><th>src<th>dst
659 <tr><td>All<td>All
660 </table>
661<tr>
 <td rowspan="1">Crop
 <td rowspan="1" style="width:200px;"> Performs a copy of the input tensor to the output tensor.
664 <td rowspan="1">
665 <ul>
666 <li>n/a
667 </ul>
668 <td>CLCrop
669 <td>
670 <ul>
671 <li>NHWC
672 </ul>
673 <td>
674 <table>
675 <tr><th>src<th>dst
676 <tr><td>All<td>F32
677 </table>
678<tr>
 <td rowspan="2">CropResize
680 <td rowspan="2" style="width:200px;"> Function to perform cropping and resizing.
681 <td rowspan="2">
682 <ul>
683 <li>n/a
684 </ul>
685 <td>NECropResize
686 <td>
687 <ul>
688 <li>NHWC
689 </ul>
690 <td>
691 <table>
692 <tr><th>src0<th>src1<th>src2<th>dst
693 <tr><td>All<td>F32<td>F32<td>F32
694 </table>
695<tr>
696 <td>CLCropResize
697 <td>
698 <ul>
699 <li>NHWC
700 </ul>
701 <td>
702 <table>
703 <tr><th>src0<th>src1<th>src2<th>dst
704 <tr><td>All<td>F32<td>F32<td>F32
705 </table>
706<tr>
707 <td rowspan="2">DeconvolutionLayer
 <td rowspan="2" style="width:200px;"> Function to compute a deconvolution or transpose convolution.
 <td rowspan="2">
710 <ul>
711 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
712 </ul>
713 <td>NEDeconvolutionLayer
714 <td>
715 <ul>
716 <li>NHWC
717 <li>NCHW
718 </ul>
719 <td>
720 <table>
721 <tr><th>src0<th>src1<th>src2<th>dst
722 <tr><td>F16<td>F16<td>F16<td>F16
723 <tr><td>F32<td>F32<td>F32<td>F32
724 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
725 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
726 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
727 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
728 </table>
729<tr>
730 <td>CLDeconvolutionLayer
731 <td>
732 <ul>
733 <li>NHWC
734 <li>NCHW
735 </ul>
736 <td>
737 <table>
738 <tr><th>src0<th>src1<th>src2<th>dst
739 <tr><td>F16<td>F16<td>F16<td>F16
740 <tr><td>F32<td>F32<td>F32<td>F32
741 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
742 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
743 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
744 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
745 </table>
746<tr>
 <td rowspan="1">DeconvolutionLayerUpsample
748 <td rowspan="1" style="width:200px;"> Function to execute deconvolution upsample on OpenCL.
749 <td rowspan="1">
750 <ul>
751 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
752 </ul>
753 <td>CLDeconvolutionLayerUpsample
754 <td>
755 <ul>
756 <li>NHWC
757 <li>NCHW
758 </ul>
759 <td>
760 <table>
761 <tr><th>src<th>dst
762 <tr><td>All<td>All
763 </table>
764<tr>
 <td rowspan="2">DepthConvertLayer
766 <td rowspan="2" style="width:200px;"> Performs a down-scaling depth conversion.
767 <td rowspan="2">
768 <ul>
769 <li>n/a
770 </ul>
771 <td>NEDepthConvertLayer
772 <td>
773 <ul>
774 <li>All
775 </ul>
776 <td>
777 <table>
778 <tr><th>src<th>dst
779 <tr><td>QASYMM8<td>F16, F32
780 <tr><td>U8<td>U16, S16, S32
781 <tr><td>U16<td>U8, U32
782 <tr><td>S16<td>U8, S32
783 <tr><td>BFLOAT16<td>F32
784 <tr><td>F16<td>QASYMM8, F32
785 <tr><td>F32<td>QASYMM8, F16, BFLOAT16
786 </table>
787<tr>
788 <td>CLDepthConvertLayer
789 <td>
790 <ul>
791 <li>All
792 </ul>
793 <td>
794 <table>
795 <tr><th>src<th>dst
796 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
797 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
798 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
799 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
800 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
801 <tr><td>F16<td>U8, S8, U16, S16, U32, F32
802 <tr><td>F32<td>U8, S8, U16, S16, U32, F16
803 </table>
804<tr>
805 <td rowspan="2">DepthToSpaceLayer
806 <td rowspan="2" style="width:200px;"> Depth to Space transformation.
807 <td rowspan="2">
808 <ul>
809 <li>ANEURALNETWORKS_DEPTH_TO_SPACE
810 </ul>
811 <td>NEDepthToSpaceLayer
812 <td>
813 <ul>
814 <li>NHWC
815 <li>NCHW
816 </ul>
817 <td>
818 <table>
819 <tr><th>src<th>dst
820 <tr><td>All<td>All
821 </table>
822<tr>
823 <td>CLDepthToSpaceLayer
824 <td>
825 <ul>
826 <li>NHWC
827 <li>NCHW
828 </ul>
829 <td>
830 <table>
831 <tr><th>src<th>dst
832 <tr><td>All<td>All
833 </table>
834<tr>
835 <td rowspan="2">DepthwiseConvolutionLayer
836 <td rowspan="2" style="width:200px;"> Function to perform depthwise separable convolution.
837 <td rowspan="2">
838 <ul>
839 <li>ANEURALNETWORKS_DEPTHWISE_CONV_2D
840 </ul>
841 <td>NEDepthwiseConvolutionLayer
842 <td>
843 <ul>
844 <li>NHWC
845 <li>NCHW
846 </ul>
847 <td>
848 <table>
849 <tr><th>src0<th>src1<th>src2<th>dst
850 <tr><td>F16<td>F16<td>F16<td>F16
851 <tr><td>F32<td>F32<td>F32<td>F32
852 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
853 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
854 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
855 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
856 </table>
857<tr>
858 <td>CLDepthwiseConvolutionLayer
859 <td>
860 <ul>
861 <li>NHWC
862 <li>NCHW
863 </ul>
864 <td>
865 <table>
866 <tr><th>src0<th>src1<th>src2<th>dst
867 <tr><td>F16<td>F16<td>F16<td>F16
868 <tr><td>F32<td>F32<td>F32<td>F32
869 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
870 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
871 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
872 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
873 </table>
874<tr>
 <td rowspan="2">DequantizationLayer
 <td rowspan="2" style="width:200px;"> Function to dequantize the values in a tensor.
 <td rowspan="2">
878 <ul>
879 <li>ANEURALNETWORKS_DEQUANTIZE
880 </ul>
881 <td>NEDequantizationLayer
882 <td>
883 <ul>
884 <li>All
885 </ul>
886 <td>
887 <table>
888 <tr><th>src<th>dst
 <tr><td>QASYMM8<td>F16, F32
890 <tr><td>QASYMM8_SIGNED<td>F16, F32
891 <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
892 <tr><td>QSYMM8<td>F16, F32
893 <tr><td>QSYMM16<td>F16, F32
 </table>
895<tr>
896 <td>CLDequantizationLayer
897 <td>
898 <ul>
899 <li>All
900 </ul>
901 <td>
902 <table>
903 <tr><th>src<th>dst
 <tr><td>QASYMM8<td>F16, F32
905 <tr><td>QASYMM8_SIGNED<td>F16, F32
906 <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
907 <tr><td>QSYMM8<td>F16, F32
908 <tr><td>QSYMM16<td>F16, F32
 </table>
910<tr>
 <td rowspan="1">DetectionPostProcessLayer
912 <td rowspan="1" style="width:200px;"> Function to generate the detection output based on center size encoded boxes, class prediction and anchors by doing non maximum suppression (NMS).
913 <td rowspan="1">
914 <ul>
915 <li>ANEURALNETWORKS_DETECTION_POSTPROCESSING
916 </ul>
917 <td>NEDetectionPostProcessLayer
918 <td>
919 <ul>
920 <li>All
921 </ul>
922 <td>
923 <table>
924 <tr><th>src0 - src2<th>dst0 - dst3
925 <tr><td>QASYMM8<td>F32
926 <tr><td>QASYMM8_SIGNED<td>F32
927 <tr><td>F32<td>F32
928 </table>
929<tr>
 <td rowspan="2">DirectConvolutionLayer
 <td rowspan="2" style="width:200px;"> Function to compute direct convolution.
 <td rowspan="2">
933 <ul>
934 <li>ANEURALNETWORKS_CONV_2D
935 </ul>
936 <td>NEDirectConvolutionLayer
937 <td>
938 <ul>
939 <li>NHWC
940 <li>NCHW
941 </ul>
942 <td>
943 <table>
944 <tr><th>src0<th>src1<th>src2<th>dst
945 <tr><td>F16<td>F16<td>F16<td>F16
946 <tr><td>F32<td>F32<td>F32<td>F32
947 </table>
948<tr>
949 <td>CLDirectConvolutionLayer
950 <td>
951 <ul>
952 <li>NHWC
953 <li>NCHW
954 </ul>
955 <td>
956 <table>
957 <tr><th>src0<th>src1<th>src2<th>dst
958 <tr><td>F16<td>F16<td>F16<td>F16
959 <tr><td>F32<td>F32<td>F32<td>F32
960 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
961 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
962 </table>
963<tr>
 <td rowspan="1">DirectDeconvolutionLayer
965 <td rowspan="1" style="width:200px;"> Function to run the deconvolution layer.
966 <td rowspan="1">
967 <ul>
968 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
969 </ul>
970 <td>CLDirectDeconvolutionLayer
971 <td>
972 <ul>
973 <li>NHWC
974 <li>NCHW
975 </ul>
976 <td>
977 <table>
978 <tr><th>src0<th>src1<th>src2<th>dst
979 <tr><td>F16<td>F16<td>F16<td>F16
980 <tr><td>F32<td>F32<td>F32<td>F32
981 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
982 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
983 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
984 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
985 </table>
986<tr>
 <td rowspan="13">ElementwiseOperations
 <td rowspan="13" style="width:200px;"> Function to perform on the Cpu backend: - Div - Max - Min - Pow - SquaredDiff - Comparisons (Equal, greater, greater_equal, less, less_equal, not_equal); and on the CL backend: - Add - Sub - Div - Max - Min - Pow - SquaredDiff
989 <td rowspan="13">
990 <ul>
991 <li>ANEURALNETWORKS_MAXIMUM
992 <li>ANEURALNETWORKS_MINIMUM
993 <li>ANEURALNETWORKS_POW
994 <li>ANEURALNETWORKS_DIV
995 <li>ANEURALNETWORKS_ADD
996 <li>ANEURALNETWORKS_SUB
997 <li>ANEURALNETWORKS_EQUAL
998 <li>ANEURALNETWORKS_GREATER
999 <li>ANEURALNETWORKS_GREATER_EQUAL
1000 <li>ANEURALNETWORKS_LESS
1001 <li>ANEURALNETWORKS_LESS_EQUAL
1002 <li>ANEURALNETWORKS_NOT_EQUAL
1003 </ul>
1004 <td>NEElementwiseMax
1005 <td>
1006 <ul>
1007 <li>All
1008 </ul>
1009 <td>
1010 <table>
1011 <tr><th>src0<th>src1<th>dst
1012 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1013 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1014 <tr><td>S32<td>S32<td>S32
1015 <tr><td>S16<td>S16<td>S16
1016 <tr><td>F16<td>F16<td>F16
1017 <tr><td>F32<td>F32<td>F32
1018 </table>
1019<tr>
1020 <td>NEElementwiseMin
1021 <td>
1022 <ul>
1023 <li>All
1024 </ul>
1025 <td>
1026 <table>
1027 <tr><th>src0<th>src1<th>dst
1028 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1029 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1030 <tr><td>S32<td>S32<td>S32
1031 <tr><td>S16<td>S16<td>S16
1032 <tr><td>F16<td>F16<td>F16
1033 <tr><td>F32<td>F32<td>F32
1034 </table>
1035<tr>
1036 <td>NEElementwiseSquaredDiff
1037 <td>
1038 <ul>
1039 <li>All
1040 </ul>
1041 <td>
1042 <table>
1043 <tr><th>src0<th>src1<th>dst
1044 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1045 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1046 <tr><td>S32<td>S32<td>S32
1047 <tr><td>S16<td>S16<td>S16
1048 <tr><td>F16<td>F16<td>F16
1049 <tr><td>F32<td>F32<td>F32
1050 </table>
1051<tr>
1052 <td>NEElementwiseDivision
1053 <td>
1054 <ul>
1055 <li>All
1056 </ul>
1057 <td>
1058 <table>
1059 <tr><th>src0<th>src1<th>dst
1060 <tr><td>F16<td>F16<td>F16
1061 <tr><td>F32<td>F32<td>F32
1062 </table>
1063<tr>
1064 <td>NEElementwisePower
1065 <td>
1066 <ul>
1067 <li>All
1068 </ul>
1069 <td>
1070 <table>
1071 <tr><th>src0<th>src1<th>dst
1072 <tr><td>F16<td>F16<td>F16
1073 <tr><td>F32<td>F32<td>F32
1074 </table>
1075<tr>
1076 <td>NEElementwiseComparison
1077 <td>
1078 <ul>
1079 <li>All
1080 </ul>
1081 <td>
1082 <table>
1083 <tr><th>src0<th>src1<th>dst
1084 <tr><td>QASYMM8<td>QASYMM8<td>U8
1085 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>U8
1086 <tr><td>S32<td>S32<td>U8
1087 <tr><td>U8<td>U8<td>U8
1088 <tr><td>S16<td>S16<td>U8
1089 <tr><td>F16<td>F16<td>U8
1090 <tr><td>F32<td>F32<td>U8
1091 </table>
1092<tr>
1093 <td>CLArithmeticAddition
1094 <td>
1095 <ul>
1096 <li>All
1097 </ul>
1098 <td>
1099 <table>
1100 <tr><th>src0<th>src1<th>dst
1101 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1102 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1103 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1104 <tr><td>U8<td>U8<td>U8
1105 <tr><td>U8<td>U8<td>S16
1106 <tr><td>U8<td>S16<td>S16
1107 <tr><td>S16<td>U8<td>S16
1108 <tr><td>S16<td>S16<td>S16
1109 <tr><td>S32<td>S32<td>S32
1110 <tr><td>F16<td>F16<td>F16
1111 <tr><td>F32<td>F32<td>F32
1112 </table>
1113<tr>
1114 <td>CLArithmeticSubtraction
1115 <td>
1116 <ul>
1117 <li>All
1118 </ul>
1119 <td>
1120 <table>
1121 <tr><th>src0<th>src1<th>dst
1122 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1123 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1124 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1125 <tr><td>U8<td>U8<td>U8
1126 <tr><td>U8<td>U8<td>S16
1127 <tr><td>U8<td>S16<td>S16
1128 <tr><td>S16<td>U8<td>S16
1129 <tr><td>S16<td>S16<td>S16
1130 <tr><td>S32<td>S32<td>S32
1131 <tr><td>F16<td>F16<td>F16
1132 <tr><td>F32<td>F32<td>F32
1133 </table>
1134<tr>
1135 <td>CLArithmeticDivision
1136 <td>
1137 <ul>
1138 <li>All
1139 </ul>
1140 <td>
1141 <table>
1142 <tr><th>src0<th>src1<th>dst
1143 <tr><td>F16<td>F16<td>F16
1144 <tr><td>F32<td>F32<td>F32
1145 </table>
1146<tr>
1147 <td>CLElementwiseMax
1148 <td>
1149 <ul>
1150 <li>All
1151 </ul>
1152 <td>
1153 <table>
1154 <tr><th>src0<th>src1<th>dst
1155 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1156 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1157 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1158 <tr><td>U8<td>U8<td>U8
1159 <tr><td>S16<td>S16<td>S16
1160 <tr><td>S32<td>S32<td>S32
1161 <tr><td>U32<td>U32<td>U32
1162 <tr><td>F16<td>F16<td>F16
1163 <tr><td>F32<td>F32<td>F32
1164 </table>
1165<tr>
1166 <td>CLElementwiseMin
1167 <td>
1168 <ul>
1169 <li>All
1170 </ul>
1171 <td>
1172 <table>
1173 <tr><th>src0<th>src1<th>dst
1174 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1175 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1176 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1177 <tr><td>U8<td>U8<td>U8
1178 <tr><td>S16<td>S16<td>S16
1179 <tr><td>S32<td>S32<td>S32
1180 <tr><td>U32<td>U32<td>U32
1181 <tr><td>F16<td>F16<td>F16
1182 <tr><td>F32<td>F32<td>F32
1183 </table>
1184<tr>
1185 <td>CLElementwiseSquaredDiff
1186 <td>
1187 <ul>
1188 <li>All
1189 </ul>
1190 <td>
1191 <table>
1192 <tr><th>src0<th>src1<th>dst
1193 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1194 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1195 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1196 <tr><td>U8<td>U8<td>U8
1197 <tr><td>S16<td>S16<td>S16
1198 <tr><td>F16<td>F16<td>F16
1199 <tr><td>F32<td>F32<td>F32
1200 </table>
1201<tr>
1202 <td>CLElementwisePower
1203 <td>
1204 <ul>
1205 <li>All
1206 </ul>
1207 <td>
1208 <table>
1209 <tr><th>src0<th>src1<th>dst
1210 <tr><td>F16<td>F16<td>F16
1211 <tr><td>F32<td>F32<td>F32
1212 </table>
1213<tr>
1214 <td rowspan="8">ElementwiseUnaryLayer
1215 <td rowspan="8" style="width:200px;"> Function to perform: - Rsqrt - Exp - Neg - Log - Abs - Round - Sin
1216 <td rowspan="8">
1217 <ul>
1218 <li>ANEURALNETWORKS_ABS
1219 <li>ANEURALNETWORKS_EXP
1220 <li>ANEURALNETWORKS_LOG
1221 <li>ANEURALNETWORKS_NEG
1222 <li>ANEURALNETWORKS_RSQRT
1223 <li>ANEURALNETWORKS_SIN
1224 </ul>
1225 <td>NEElementwiseUnaryLayer
1226 <td>
1227 <ul>
1228 <li>All
1229 </ul>
1230 <td>
1231 <table>
1232 <tr><th>src<th>dst
1233 <tr><td>F16<td>F16
1234 <tr><td>F32<td>F32
1235 <tr><td>S32<td>S32
1236 </table>
1237<tr>
1238 <td>CLRsqrtLayer
1239 <td>
1240 <ul>
1241 <li>All
1242 </ul>
1243 <td>
1244 <table>
1245 <tr><th>src<th>dst
1246 <tr><td>F16<td>F16
1247 <tr><td>F32<td>F32
1248 </table>
1249<tr>
1250 <td>CLExpLayer
1251 <td>
1252 <ul>
1253 <li>All
1254 </ul>
1255 <td>
1256 <table>
1257 <tr><th>src<th>dst
1258 <tr><td>F16<td>F16
1259 <tr><td>F32<td>F32
1260 </table>
1261<tr>
1262 <td>CLNegLayer
1263 <td>
1264 <ul>
1265 <li>All
1266 </ul>
1267 <td>
1268 <table>
1269 <tr><th>src<th>dst
1270 <tr><td>F16<td>F16
1271 <tr><td>F32<td>F32
 <tr><td>S32<td>S32
 </table>
1274<tr>
1275 <td>CLSinLayer
1276 <td>
1277 <ul>
1278 <li>All
1279 </ul>
1280 <td>
1281 <table>
1282 <tr><th>src<th>dst
1283 <tr><td>F16<td>F16
1284 <tr><td>F32<td>F32
1285 </table>
1286<tr>
1287 <td>CLLogLayer
1288 <td>
1289 <ul>
1290 <li>All
1291 </ul>
1292 <td>
1293 <table>
1294 <tr><th>src<th>dst
1295 <tr><td>F16<td>F16
1296 <tr><td>F32<td>F32
1297 </table>
1298<tr>
1299 <td>CLAbsLayer
1300 <td>
1301 <ul>
1302 <li>All
1303 </ul>
1304 <td>
1305 <table>
1306 <tr><th>src<th>dst
1307 <tr><td>F16<td>F16
1308 <tr><td>F32<td>F32
1309 </table>
1310<tr>
1311 <td>CLRoundLayer
1312 <td>
1313 <ul>
1314 <li>All
1315 </ul>
1316 <td>
1317 <table>
1318 <tr><th>src<th>dst
1319 <tr><td>F16<td>F16
1320 <tr><td>F32<td>F32
1321 </table>
1322<tr>
 <td rowspan="2">FFT1D
 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 1D.
 <td rowspan="2">
1326 <ul>
 <li>n/a
 </ul>
1329 <td>NEFFT1D
1330 <td>
1331 <ul>
1332 <li>All
1333 </ul>
1334 <td>
1335 <table>
1336 <tr><th>src<th>dst
1337 <tr><td>F32<td>F32
1338 </table>
1339<tr>
1340 <td>CLFFT1D
1341 <td>
1342 <ul>
1343 <li>All
1344 </ul>
1345 <td>
1346 <table>
1347 <tr><th>src<th>dst
1348 <tr><td>F32<td>F32
1349 <tr><td>F16<td>F16
1350 </table>
1351<tr>
1352 <td rowspan="2">FFT2D
 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 2D.
 <td rowspan="2">
1355 <ul>
 <li>n/a
 </ul>
1358 <td>NEFFT2D
1359 <td>
1360 <ul>
1361 <li>All
1362 </ul>
1363 <td>
1364 <table>
1365 <tr><th>src<th>dst
1366 <tr><td>F32<td>F32
1367 </table>
1368<tr>
1369 <td>CLFFT2D
1370 <td>
1371 <ul>
1372 <li>All
1373 </ul>
1374 <td>
1375 <table>
1376 <tr><th>src<th>dst
1377 <tr><td>F32<td>F32
1378 <tr><td>F16<td>F16
1379 </table>
1380<tr>
1381 <td rowspan="2">FFTConvolutionLayer
 <td rowspan="2" style="width:200px;"> Fast Fourier Transform Convolution.
 <td rowspan="2">
1384 <ul>
1385 <li>ANEURALNETWORKS_CONV_2D
1386 </ul>
1387 <td>NEFFTConvolutionLayer
1388 <td>
1389 <ul>
1390 <li>All
1391 </ul>
1392 <td>
1393 <table>
1394 <tr><th>src<th>dst
1395 <tr><td>F32<td>F32
1396 </table>
1397<tr>
1398 <td>CLFFTConvolutionLayer
1399 <td>
1400 <ul>
1401 <li>All
1402 </ul>
1403 <td>
1404 <table>
1405 <tr><th>src<th>dst
1406 <tr><td>F32<td>F32
1407 <tr><td>F16<td>F16
1408 </table>
1409<tr>
1410 <td rowspan="2">Fill
 <td rowspan="2" style="width:200px;"> Set the values of a tensor with a given value.
 <td rowspan="2">
1413 <ul>
1414 <li>ANEURALNETWORKS_FILL
1415 </ul>
1416 <td>NEFill
1417 <td>
1418 <ul>
1419 <li>All
1420 </ul>
1421 <td>
1422 <table>
1423 <tr><th>src<th>dst
1424 <tr><td>All<td>All
1425 </table>
1426<tr>
1427 <td>CLFill
1428 <td>
1429 <ul>
1430 <li>All
1431 </ul>
1432 <td>
1433 <table>
1434 <tr><th>src<th>dst
1435 <tr><td>All<td>All
1436 </table>
1437<tr>
 <td rowspan="1">FillBorder
 <td rowspan="1" style="width:200px;"> Function to fill the borders within the XY-planes.
 <td rowspan="1">
 <ul>
1442 <li>n/a
1443 </ul>
1444 <td>NEFillBorder
1445 <td>
1446 <ul>
1447 <li>All
1448 </ul>
1449 <td>
1450 <table>
1451 <tr><th>src<th>dst
1452 <tr><td>All<td>All
1453 </table>
1454<tr>
 <td rowspan="2">FlattenLayer
1456 <td rowspan="2" style="width:200px;"> Reshape a tensor to be 1D
1457 <td rowspan="2">
1458 <ul>
1459 <li>ANEURALNETWORKS_RESHAPE
1460 </ul>
1461 <td>NEFlattenLayer
1462 <td>
1463 <ul>
1464 <li>All
1465 </ul>
1466 <td>
1467 <table>
1468 <tr><th>src<th>dst
1469 <tr><td>All<td>All
1470 </table>
1471<tr>
1472 <td>CLFlattenLayer
1473 <td>
1474 <ul>
1475 <li>All
1476 </ul>
1477 <td>
1478 <table>
1479 <tr><th>src<th>dst
1480 <tr><td>All<td>All
1481 </table>
1482<tr>
 <td rowspan="2">Floor
 <td rowspan="2" style="width:200px;"> Function to round each value down to the nearest integer.
 <td rowspan="2">
1486 <ul>
1487 <li>ANEURALNETWORKS_FLOOR
1488 </ul>
1489 <td>NEFloor
1490 <td>
1491 <ul>
1492 <li>All
1493 </ul>
1494 <td>
1495 <table>
1496 <tr><th>src<th>dst
1497 <tr><td>F32<td>F32
1498 <tr><td>F16<td>F16
1499 </table>
1500<tr>
1501 <td>CLFloor
1502 <td>
1503 <ul>
1504 <li>All
1505 </ul>
1506 <td>
1507 <table>
1508 <tr><th>src<th>dst
1509 <tr><td>F32<td>F32
1510 <tr><td>F16<td>F16
1511 </table>
1512<tr>
 <td rowspan="2">FullyConnectedLayer
1514 <td rowspan="2" style="width:200px;"> Function to perform a fully connected / dense layer.
1515 <td rowspan="2">
1516 <ul>
1517 <li>ANEURALNETWORKS_FULLY_CONNECTED
1518 </ul>
 <td>NEFullyConnectedLayer
 <td>
1521 <ul>
1522 <li>NHWC
1523 <li>NCHW
1524 </ul>
1525 <td>
1526 <table>
1527 <tr><th>src0<th>src1<th>src2<th>dst
1528 <tr><td>F16<td>F16<td>F16<td>F16
1529 <tr><td>F32<td>F32<td>F32<td>F32
1530 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1531 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1532 </table>
1533<tr>
 <td>CLFullyConnectedLayer
 <td>
1536 <ul>
1537 <li>NHWC
1538 <li>NCHW
1539 </ul>
1540 <td>
1541 <table>
1542 <tr><th>src0<th>src1<th>src2<th>dst
1543 <tr><td>F16<td>F16<td>F16<td>F16
1544 <tr><td>F32<td>F32<td>F32<td>F32
1545 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1546 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1547 </table>
1548<tr>
1549 <td rowspan="2">FuseBatchNormalization
1550 <td rowspan="2" style="width:200px;"> Function to fuse the batch normalization node to a preceding convolution node.
1551 <td rowspan="2">
1552 <ul>
1553 <li>n/a
1554 </ul>
1555 <td>NEFuseBatchNormalization
1556 <td>
1557 <ul>
1558 <li>NHWC
1559 <li>NCHW
1560 </ul>
1561 <td>
1562 <table>
1563 <tr><th>src<th>dst
1564 <tr><td>F32<td>F32
1565 <tr><td>F16<td>F16
1566 </table>
1567<tr>
1568 <td>CLFuseBatchNormalization
1569 <td>
1570 <ul>
1571 <li>NHWC
1572 <li>NCHW
1573 </ul>
1574 <td>
1575 <table>
1576 <tr><th>src<th>dst
1577 <tr><td>F32<td>F32
1578 <tr><td>F16<td>F16
1579 </table>
1580<tr>
1581 <td rowspan="2">Gather
1582 <td rowspan="2" style="width:200px;"> Performs the Gather operation along the chosen axis.
1583 <td rowspan="2">
1584 <ul>
1585 <li>ANEURALNETWORKS_GATHER
1586 </ul>
1587 <td>NEGather
1588 <td>
1589 <ul>
1590 <li>All
1591 </ul>
1592 <td>
1593 <table>
1594 <tr><th>src<th>dst
1595 <tr><td>All<td>All
1596 </table>
1597<tr>
1598 <td>CLGather
1599 <td>
1600 <ul>
1601 <li>All
1602 </ul>
1603 <td>
1604 <table>
1605 <tr><th>src<th>dst
1606 <tr><td>All<td>All
1607 </table>
1608<tr>
1609 <td rowspan="2">GEMM
1610 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1611 <td rowspan="2">
1612 <ul>
1613 <li>n/a
1614 </ul>
1615 <td>NEGEMM
1616 <td>
1617 <ul>
1618 <li>All
1619 </ul>
1620 <td>
1621 <table>
1622 <tr><th>src0<th>src1<th>src2<th>dst
1623 <tr><td>F32<td>F32<td>F32<td>F32
1624 <tr><td>F16<td>F16<td>F16<td>F16
1625 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1626 </table>
1627<tr>
 <td>CLGEMM
 <td>
1630 <ul>
1631 <li>All
1632 </ul>
1633 <td>
1634 <table>
1635 <tr><th>src0<th>src1<th>src2<th>dst
1636 <tr><td>F32<td>F32<td>F32<td>F32
1637 <tr><td>F16<td>F16<td>F16<td>F16
1638 </table>
1639<tr>
 <td rowspan="1">GEMMConv2d
 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1642 <td rowspan="1">
1643 <ul>
1644 <li>ANEURALNETWORKS_CONV_2D
1645 </ul>
1646 <td>NEGEMMConv2d
1647 <td>
1648 <ul>
1649 <li>All
1650 </ul>
1651 <td>
1652 <table>
1653 <tr><th>src0<th>src1<th>src2<th>dst
1654 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1655 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1656 <tr><td>F16<td>F16<td>F16<td>F16
1657 <tr><td>F32<td>F32<td>F32<td>F32
1658 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1659 </table>
1660<tr>
 <td rowspan="2">GEMMConvolutionLayer
1662 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1663 <td rowspan="2">
1664 <ul>
1665 <li>ANEURALNETWORKS_CONV_2D
1666 </ul>
 <td>NEGEMMConvolutionLayer
 <td>
1669 <ul>
1670 <li>NHWC
1671 <li>NCHW
1672 </ul>
1673 <td>
1674 <table>
1675 <tr><th>src0<th>src1<th>src2<th>dst
1676 <tr><td>F16<td>F16<td>F16<td>F16
1677 <tr><td>F32<td>F32<td>F32<td>F32
1678 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1679 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1680 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1681 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1682 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1683 </table>
1684<tr>
 <td>CLGEMMConvolutionLayer
 <td>
1687 <ul>
1688 <li>NHWC
1689 <li>NCHW
1690 </ul>
1691 <td>
1692 <table>
1693 <tr><th>src0<th>src1<th>src2<th>dst
1694 <tr><td>F16<td>F16<td>F16<td>F16
1695 <tr><td>F32<td>F32<td>F32<td>F32
1696 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1697 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1698 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1699 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1700 </table>
1701<tr>
 <td rowspan="1">GEMMDeconvolutionLayer
1703 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1704 <td rowspan="1">
1705 <ul>
1706 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
1707 </ul>
1708 <td>CLGEMMDeconvolutionLayer
1709 <td>
1710 <ul>
1711 <li>NHWC
1712 </ul>
1713 <td>
1714 <table>
1715 <tr><th>src0<th>src1<th>src2<th>dst
1716 <tr><td>F16<td>F16<td>F16<td>F16
1717 <tr><td>F32<td>F32<td>F32<td>F32
1718 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1719 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1720 </table>
1721<tr>
 <td rowspan="2">GEMMLowpMatrixMultiplyCore
1723 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1724 <td rowspan="2">
1725 <ul>
1726 <li>n/a
1727 </ul>
1728 <td>NEGEMMLowpMatrixMultiplyCore
1729 <td>
1730 <ul>
1731 <li>NHWC
1732 <li>NCHW
1733 </ul>
1734 <td>
1735 <table>
1736 <tr><th>src0<th>src1<th>src2<th>dst
1737 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1738 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1739 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1740 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1741 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1742 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1743 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1744 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1745 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1746 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1747 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1748 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1749 </table>
1750<tr>
1751 <td>CLGEMMLowpMatrixMultiplyCore
1752 <td>
1753 <ul>
1754 <li>NHWC
1755 <li>NCHW
1756 </ul>
1757 <td>
1758 <table>
1759 <tr><th>src0<th>src1<th>src2<th>dst
1760 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1761 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1762 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1763 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1764 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1765 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1766 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1767 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1768 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1769 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1770 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1771 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1772 </table>
1773<tr>
 <td rowspan="2">GEMMLowpOutputStage
1775 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1776 <td rowspan="2">
1777 <ul>
1778 <li>n/a
1779 </ul>
1780 <td>NEGEMMLowpOutputStage
1781 <td>
1782 <ul>
1783 <li>All
1784 </ul>
1785 <td>
1786 <table>
1787 <tr><th>src0<th>src1<th>dst
1788 <tr><td>S32<td>S32<td>QASYMM8
1789 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1790 <tr><td>S32<td>S32<td>QSYMM16
1791 </table>
1792<tr>
1793 <td>CLGEMMLowpOutputStage
1794 <td>
1795 <ul>
1796 <li>All
1797 </ul>
1798 <td>
1799 <table>
1800 <tr><th>src0<th>src1<th>dst
1801 <tr><td>S32<td>S32<td>QASYMM8
1802 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1803 <tr><td>S32<td>S32<td>QSYMM16
1804 </table>
1805<tr>
 <td rowspan="2">GenerateProposalsLayer
1807 <td rowspan="2" style="width:200px;"> Function to generate proposals for a RPN (Region Proposal Network).
1808 <td rowspan="2">
1809 <ul>
1810 <li>ANEURALNETWORKS_GENERATE_PROPOSALS
1811 </ul>
1812 <td>NEGenerateProposalsLayer
1813 <td>
1814 <ul>
1815 <li>All
1816 </ul>
1817 <td>
1818 <table>
1819 <tr><th>src0<th>src1<th>src2<th>dst
1820 <tr><td>F16<td>F16<td>F16<td>F16
1821 <tr><td>F32<td>F32<td>F32<td>F32
1822 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1823 </table>
1824<tr>
1825 <td>CLGenerateProposalsLayer
1826 <td>
1827 <ul>
1828 <li>All
1829 </ul>
1830 <td>
1831 <table>
1832 <tr><th>src0<th>src1<th>src2<th>dst
1833 <tr><td>F16<td>F16<td>F16<td>F16
1834 <tr><td>F32<td>F32<td>F32<td>F32
1835 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1836 </table>
1837<tr>
1838 <td rowspan="2">InstanceNormalizationLayer
 <td rowspan="2" style="width:200px;"> Function to perform an Instance normalization on a given axis.
1840 <td rowspan="2">
1841 <ul>
1842 <li>ANEURALNETWORKS_INSTANCE_NORMALIZATION
1843 </ul>
1844 <td>NEInstanceNormalizationLayer
1845 <td>
1846 <ul>
1847 <li>NHWC
1848 <li>NCHW
1849 </ul>
1850 <td>
1851 <table>
1852 <tr><th>src<th>dst
1853 <tr><td>F16<td>F16
1854 <tr><td>F32<td>F32
1855 </table>
1856<tr>
1857 <td>CLInstanceNormalizationLayer
1858 <td>
1859 <ul>
1860 <li>NHWC
1861 <li>NCHW
1862 </ul>
1863 <td>
1864 <table>
1865 <tr><th>src<th>dst
1866 <tr><td>F16<td>F16
1867 <tr><td>F32<td>F32
1868 </table>
1869<tr>
1870 <td rowspan="2">L2NormalizeLayer
 <td rowspan="2" style="width:200px;"> Function to perform an L2 normalization on a given axis.
1872 <td rowspan="2">
1873 <ul>
1874 <li>ANEURALNETWORKS_L2_NORMALIZATION
1875 </ul>
1876 <td>NEL2NormalizeLayer
1877 <td>
1878 <ul>
1879 <li>NHWC
1880 <li>NCHW
1881 </ul>
1882 <td>
1883 <table>
1884 <tr><th>src<th>dst
1885 <tr><td>F16<td>F16
1886 <tr><td>F32<td>F32
1887 </table>
1888<tr>
1889 <td>CLL2NormalizeLayer
1890 <td>
1891 <ul>
1892 <li>NHWC
1893 <li>NCHW
1894 </ul>
1895 <td>
1896 <table>
1897 <tr><th>src<th>dst
1898 <tr><td>F16<td>F16
1899 <tr><td>F32<td>F32
1900 </table>
1901<tr>
 <td rowspan="3">Logical
1903 <td rowspan="3" style="width:200px;"> Function to perform: - Logical AND - Logical OR - Logical NOT
1904 <td rowspan="3">
1905 <ul>
1906 <li>n/a
1907 </ul>
1908 <td>NELogicalAnd
1909 <td>
1910 <ul>
1911 <li>All
1912 </ul>
1913 <td>
1914 <table>
1915 <tr><th>src0<th>src1<th>dst
1916 <tr><td>U8<td>U8<td>U8
1917 </table>
1918<tr>
1919 <td>NELogicalOr
1920 <td>
1921 <ul>
1922 <li>All
1923 </ul>
1924 <td>
1925 <table>
1926 <tr><th>src0<th>src1<th>dst
1927 <tr><td>U8<td>U8<td>U8
1928 </table>
1929<tr>
1930 <td>NELogicalNot
1931 <td>
1932 <ul>
1933 <li>All
1934 </ul>
1935 <td>
1936 <table>
1937 <tr><th>src<th>dst
1938 <tr><td>U8<td>U8
1939 </table>
1940<tr>
1941 <td rowspan="1">LogicalAnd
1942 <td rowspan="1" style="width:200px;"> Function to perform Logical AND.
1943 <td rowspan="1">
1944 <ul>
1945 <li>n/a
1946 </ul>
1947 <td>CLLogicalAnd
1948 <td>
1949 <ul>
1950 <li>All
1951 </ul>
1952 <td>
1953 <table>
1954 <tr><th>src0<th>src1<th>dst
1955 <tr><td>U8<td>U8<td>U8
1956 </table>
1957<tr>
1958 <td rowspan="1">LogicalOr
1959 <td rowspan="1" style="width:200px;"> Function to perform Logical OR.
1960 <td rowspan="1">
1961 <ul>
1962 <li>n/a
1963 </ul>
1964 <td>CLLogicalOr
1965 <td>
1966 <ul>
1967 <li>All
1968 </ul>
1969 <td>
1970 <table>
1971 <tr><th>src0<th>src1<th>dst
1972 <tr><td>U8<td>U8<td>U8
1973 </table>
1974<tr>
1975 <td rowspan="1">LogicalNot
1976 <td rowspan="1" style="width:200px;"> Function to perform Logical NOT.
1977 <td rowspan="1">
1978 <ul>
1979 <li>n/a
1980 </ul>
1981 <td>CLLogicalNot
1982 <td>
1983 <ul>
1984 <li>All
1985 </ul>
1986 <td>
1987 <table>
1988 <tr><th>src<th>dst
1989 <tr><td>U8<td>U8
1990 </table>
1991<tr>
 <td rowspan="2">LSTMLayer
1993 <td rowspan="2" style="width:200px;"> Function to perform a single time step in a Long Short-Term Memory (LSTM) layer.
1994 <td rowspan="2">
1995 <ul>
1996 <li>ANEURALNETWORKS_LSTM
1997 </ul>
1998 <td>NELSTMLayer
1999 <td>
2000 <ul>
2001 <li>All
2002 </ul>
2003 <td>
2004 <table>
2005 <tr><th>src0 - src13<th>dst0 - dst3
2006 <tr><td>F16<td>F16
2007 <tr><td>F32<td>F32
2008 </table>
2009<tr>
2010 <td>CLLSTMLayer
2011 <td>
2012 <ul>
2013 <li>All
2014 </ul>
2015 <td>
2016 <table>
2017 <tr><th>src0 - src13<th>dst0 - dst3
2018 <tr><td>F16<td>F16
2019 <tr><td>F32<td>F32
2020 </table>
2021<tr>
2022 <td rowspan="2">LSTMLayerQuantized
2023 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory)
2024 <td rowspan="2">
2025 <ul>
2026 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2027 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2028 </ul>
2029 <td>NELSTMLayerQuantized
2030 <td>
2031 <ul>
2032 <li>All
2033 </ul>
2034 <td>
2035 <table>
2036 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2037 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2038 </table>
2039<tr>
2040 <td>CLLSTMLayerQuantized
2041 <td>
2042 <ul>
2043 <li>All
2044 </ul>
2045 <td>
2046 <table>
2047 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2048 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2049 </table>
2050<tr>
2051 <td rowspan="2">MaxUnpoolingLayer
2052 <td rowspan="2" style="width:200px;"> Function to perform MaxUnpooling.
2053 <td rowspan="2">
2054 <ul>
2055 <li>n/a
2056 </ul>
2057 <td>NEMaxUnpoolingLayer
2058 <td>
2059 <ul>
2060 <li>NHWC
2061 <li>NCHW
2062 </ul>
2063 <td>
2064 <table>
2065 <tr><th>src<th>dst
2066 <tr><td>QASYMM8<td>QASYMM8
2067 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2068 <tr><td>F16<td>F16
2069 <tr><td>F32<td>F32
2070 </table>
2071<tr>
2072 <td>CLMaxUnpoolingLayer
2073 <td>
2074 <ul>
2075 <li>NHWC
2076 <li>NCHW
2077 </ul>
2078 <td>
2079 <table>
2080 <tr><th>src<th>dst
2081 <tr><td>QASYMM8<td>QASYMM8
2082 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2083 <tr><td>F16<td>F16
2084 <tr><td>F32<td>F32
2085 </table>
2086<tr>
2087 <td rowspan="2">MeanStdDevNormalizationLayer
2088 <td rowspan="2" style="width:200px;"> Function to execute mean and standard deviation normalization.
2089 <td rowspan="2">
2090 <ul>
2091 <li>n/a
2092 </ul>
2093 <td>NEMeanStdDevNormalizationLayer
2094 <td>
2095 <ul>
2096 <li>NHWC
2097 <li>NCHW
2098 </ul>
2099 <td>
2100 <table>
2101 <tr><th>src<th>dst
2102 <tr><td>F32<td>F32
2103 <tr><td>F16<td>F16
2104 </table>
2105<tr>
2106 <td>CLMeanStdDevNormalizationLayer
2107 <td>
2108 <ul>
2109 <li>NHWC
2110 <li>NCHW
2111 </ul>
2112 <td>
2113 <table>
2114 <tr><th>src<th>dst
2115 <tr><td>F32<td>F32
2116 <tr><td>F16<td>F16
2117 </table>
2118<tr>
2119 <td rowspan="2">NormalizationLayer
2120 <td rowspan="2" style="width:200px;"> Function to compute normalization layer.
2121 <td rowspan="2">
2122 <ul>
2123 <li>ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION
2124 </ul>
2125 <td>NENormalizationLayer
2126 <td>
2127 <ul>
2128 <li>NHWC
2129 <li>NCHW
2130 </ul>
2131 <td>
2132 <table>
2133 <tr><th>src<th>dst
2134 <tr><td>F32<td>F32
2135 <tr><td>F16<td>F16
2136 </table>
2137<tr>
2138 <td>CLNormalizationLayer
2139 <td>
2140 <ul>
2141 <li>NHWC
2142 <li>NCHW
2143 </ul>
2144 <td>
2145 <table>
2146 <tr><th>src<th>dst
2147 <tr><td>F32<td>F32
2148 <tr><td>F16<td>F16
2149 </table>
2150<tr>
2151 <td rowspan="2">PadLayer
2152 <td rowspan="2" style="width:200px;"> Function to pad a tensor.
2153 <td rowspan="2">
2154 <ul>
2155 <li>ANEURALNETWORKS_PAD
2156 <li>ANEURALNETWORKS_PAD_V2
2157 </ul>
2158 <td>NEPadLayer
2159 <td>
2160 <ul>
2161 <li>NHWC
2162 <li>NCHW
2163 </ul>
2164 <td>
2165 <table>
2166 <tr><th>src<th>dst
2167 <tr><td>All<td>All
2168 </table>
2169<tr>
2170 <td>CLPadLayer
2171 <td>
2172 <ul>
2173 <li>NHWC
2174 <li>NCHW
2175 </ul>
2176 <td>
2177 <table>
2178 <tr><th>src<th>dst
2179 <tr><td>All<td>All
2180 </table>
2181<tr>
 <td rowspan="2">Permute
2183 <td rowspan="2" style="width:200px;"> Function to transpose an ND tensor.
2184 <td rowspan="2">
2185 <ul>
2186 <li>ANEURALNETWORKS_TRANSPOSE
2187 </ul>
2188 <td>NEPermute
2189 <td>
2190 <ul>
2191 <li>NHWC
2192 <li>NCHW
2193 </ul>
2194 <td>
2195 <table>
2196 <tr><th>src<th>dst
2197 <tr><td>All<td>All
2198 </table>
2199<tr>
2200 <td>CLPermute
2201 <td>
2202 <ul>
2203 <li>NHWC
2204 <li>NCHW
2205 </ul>
2206 <td>
2207 <table>
2208 <tr><th>src<th>dst
2209 <tr><td>All<td>All
2210 </table>
2211<tr>
2212 <td rowspan="2">PixelWiseMultiplication
 <td rowspan="2" style="width:200px;"> Function to perform a pixel-wise multiplication.
 <td rowspan="2">
2215 <ul>
2216 <li>ANEURALNETWORKS_MUL
2217 </ul>
2218 <td>NEPixelWiseMultiplication
2219 <td>
2220 <ul>
2221 <li>All
2222 </ul>
2223 <td>
2224 <table>
2225 <tr><th>src0<th>src1<th>dst
2226 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2227 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2228 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
2229 <tr><td>QSYMM16<td>QSYMM16<td>S32
2230 <tr><td>U8<td>U8<td>U8
2231 <tr><td>U8<td>U8<td>S16
2232 <tr><td>U8<td>S16<td>S16
2233 <tr><td>S16<td>U8<td>S16
2234 <tr><td>S16<td>S16<td>S16
2235 <tr><td>F16<td>F16<td>F16
2236 <tr><td>F32<td>S32<td>F32
2237 </table>
2238<tr>
2239 <td>CLPixelWiseMultiplication
2240 <td>
2241 <ul>
2242 <li>All
2243 </ul>
2244 <td>
2245 <table>
2246 <tr><th>src0<th>src1<th>dst
2247 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2248 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2249 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
2250 <tr><td>QSYMM16<td>QSYMM16<td>S32
2251 <tr><td>U8<td>U8<td>U8
2252 <tr><td>U8<td>U8<td>S16
2253 <tr><td>U8<td>S16<td>S16
2254 <tr><td>S16<td>U8<td>S16
2255 <tr><td>S16<td>S16<td>S16
2256 <tr><td>F16<td>F16<td>F16
 <tr><td>F32<td>F32<td>F32
 <tr><td>S32<td>S32<td>S32
 </table>
2260<tr>
2261 <td rowspan="2">PoolingLayer
 <td rowspan="2" style="width:200px;"> Function to perform pooling with the specified pooling operation.
 <td rowspan="2">
2264 <ul>
2265 <li>ANEURALNETWORKS_AVERAGE_POOL_2D
2266 <li>ANEURALNETWORKS_L2_POOL_2D
2267 <li>ANEURALNETWORKS_MAX_POOL_2D
2268 </ul>
2269 <td>NEPoolingLayer
2270 <td>
2271 <ul>
2272 <li>NHWC
2273 <li>NCHW
2274 </ul>
2275 <td>
2276 <table>
2277 <tr><th>src<th>dst
2278 <tr><td>QASYMM8<td>QASYMM8
2279 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2280 <tr><td>F16<td>F16
2281 <tr><td>F32<td>F32
2282 </table>
2283<tr>
2284 <td>CLPoolingLayer
2285 <td>
2286 <ul>
2287 <li>NHWC
2288 <li>NCHW
2289 </ul>
2290 <td>
2291 <table>
2292 <tr><th>src<th>dst
2293 <tr><td>QASYMM8<td>QASYMM8
2294 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2295 <tr><td>F16<td>F16
2296 <tr><td>F32<td>F32
2297 </table>
2298<tr>
2299 <td rowspan="2">PReluLayer
2300 <td rowspan="2" style="width:200px;"> Function to compute the activation layer with the PRELU activation function.
2301 <td rowspan="2">
2302 <ul>
2303 <li>ANEURALNETWORKS_PRELU
2304 </ul>
2305 <td>NEPReluLayer
2306 <td>
2307 <ul>
2308 <li>All
2309 </ul>
2310 <td>
2311 <table>
2312 <tr><th>src<th>dst
2313 <tr><td>QASYMM8<td>QASYMM8
2314 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2315 <tr><td>F16<td>F16
2316 <tr><td>F32<td>F32
2317 </table>
2318<tr>
2319 <td>CLPReluLayer
2320 <td>
2321 <ul>
2322 <li>All
2323 </ul>
2324 <td>
2325 <table>
2326 <tr><th>src<th>dst
2327 <tr><td>QASYMM8<td>QASYMM8
2328 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2329 <tr><td>F16<td>F16
2330 <tr><td>F32<td>F32
2331 </table>
2332<tr>
2333    <td rowspan="2">PriorBoxLayer
2334    <td rowspan="2" style="width:200px;"> Function to compute prior boxes and clip.
2335    <td rowspan="2">
2336 <ul>
2337 <li>n/a
2338 </ul>
2339 <td>NEPriorBoxLayer
2340 <td>
2341 <ul>
2342 <li>NHWC
2343 <li>NCHW
2344 </ul>
2345 <td>
2346 <table>
2347 <tr><th>src0<th>src1<th>dst
2348 <tr><td>F32<td>F32<td>F32
2349 </table>
2350<tr>
2351 <td>CLPriorBoxLayer
2352 <td>
2353 <ul>
2354 <li>NHWC
2355 <li>NCHW
2356 </ul>
2357 <td>
2358 <table>
2359 <tr><th>src0<th>src1<th>dst
2360 <tr><td>F32<td>F32<td>F32
2361 </table>
2362<tr>
2363 <td rowspan="2">QLSTMLayer
2364 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2365 <td rowspan="2">
2366 <ul>
2367 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2368 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2369 </ul>
2370 <td>NEQLSTMLayer
2371 <td>
2372 <ul>
2373 <li>All
2374 </ul>
2375 <td>
2376 <table>
2377       <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2378 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2379 </table>
2380<tr>
2381 <td>CLQLSTMLayer
2382 <td>
2383 <ul>
2384 <li>All
2385 </ul>
2386 <td>
2387 <table>
2388       <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2389 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2390 </table>
2391<tr>
2392    <td rowspan="2">QuantizationLayer
2393    <td rowspan="2" style="width:200px;"> Function to perform a quantization layer.
2394 <td rowspan="2">
2395 <ul>
2396 <li>ANEURALNETWORKS_QUANTIZE
2397 </ul>
2398 <td>NEQuantizationLayer
2399 <td>
2400 <ul>
2401 <li>All
2402 </ul>
2403 <td>
2404 <table>
2405 <tr><th>src<th>dst
2406       <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2407 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2408 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2409 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2410    </table>
2411<tr>
2412 <td>CLQuantizationLayer
2413 <td>
2414 <ul>
2415 <li>All
2416 </ul>
2417 <td>
2418 <table>
2419 <tr><th>src<th>dst
2420       <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2421 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2422 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2423 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2424 </table>
2425<tr>
2426 <td rowspan="2">Range
2427    <td rowspan="2" style="width:200px;"> Function to generate a sequence of numbers starting from START and extending by increments of 'STEP' up to but not including 'END'.
2428 <td rowspan="2">
2429 <ul>
2430 <li>n/a
2431 </ul>
2432 <td>NERange
2433 <td>
2434 <ul>
2435 <li>All
2436 </ul>
2437 <td>
2438 <table>
2439 <tr><th>dst
2440 <tr><td>U8
2441 <tr><td>S8
2442 <tr><td>U16
2443 <tr><td>S16
2444 <tr><td>U32
2445 <tr><td>S32
2446 <tr><td>F16
2447 <tr><td>F32
2448 </table>
2449<tr>
2450 <td>CLRange
2451 <td>
2452 <ul>
2453 <li>All
2454 </ul>
2455 <td>
2456 <table>
2457 <tr><th>dst
2458 <tr><td>U8
2459 <tr><td>S8
2460 <tr><td>QASYMM8
2461 <tr><td>U16
2462 <tr><td>S16
2463 <tr><td>U32
2464 <tr><td>S32
2465 <tr><td>F16
2466 <tr><td>F32
2467 </table>
2468<tr>
2469 <td rowspan="2">ReduceMean
2470    <td rowspan="2" style="width:200px;"> Function to perform a reduce mean operation.
2471    <td rowspan="2">
2472 <ul>
2473 <li>ANEURALNETWORKS_MEAN
2474 </ul>
2475 <td>NEReduceMean
2476 <td>
2477 <ul>
2478 <li>All
2479 </ul>
2480 <td>
2481 <table>
2482 <tr><th>src<th>dst
2483       <tr><td>QASYMM8<td>QASYMM8
2484       <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2485       <tr><td>F16<td>F16
2486 <tr><td>F32<td>F32
2487 </table>
2488<tr>
2489 <td>CLReduceMean
2490 <td>
2491 <ul>
2492 <li>All
2493 </ul>
2494 <td>
2495 <table>
2496 <tr><th>src<th>dst
2497 <tr><td>QASYMM8<td>QASYMM8
2498 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2499 <tr><td>F16<td>F16
2500 <tr><td>F32<td>F32
2501 </table>
2502<tr>
2503 <td rowspan="2">ReductionOperation
2504    <td rowspan="2" style="width:200px;"> Function to perform a reduction with one of the following operations - ARG_IDX_MAX: Index of the max value - ARG_IDX_MIN: Index of the min value - MEAN_SUM: Mean of sum - PROD: Product - SUM_SQUARE: Sum of squares - SUM: Sum - MIN: Min - MAX: Max
2505    <td rowspan="2">
2506 <ul>
2507 <li>ANEURALNETWORKS_REDUCE_ALL
2508 <li>ANEURALNETWORKS_REDUCE_ANY
2509 <li>ANEURALNETWORKS_REDUCE_MAX
2510 <li>ANEURALNETWORKS_REDUCE_MIN
2511 <li>ANEURALNETWORKS_REDUCE_PROD
2512 <li>ANEURALNETWORKS_REDUCE_SUM
2513 </ul>
2514 <td>NEReductionOperation
2515 <td>
2516 <ul>
2517 <li>All
2518 </ul>
2519 <td>
2520 <table>
2521 <tr><th>src<th>dst
2522 <tr><td>QASYMM8<td>QASYMM8
2523 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2524 <tr><td>F16<td>F16
2525 <tr><td>F32<td>F32
2526 <tr><td>S32<td>S32
2527 </table>
2528<tr>
2529 <td>CLReductionOperation
2530 <td>
2531 <ul>
2532 <li>All
2533 </ul>
2534 <td>
2535 <table>
2536 <tr><th>src<th>dst
2537 <tr><td>QASYMM8<td>QASYMM8
2538 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2539 <tr><td>F16<td>F16
2540 <tr><td>F32<td>F32
2541 <tr><td>S32<td>S32
2542 </table>
2543<tr>
2544 <td rowspan="2">ReorgLayer
2545    <td rowspan="2" style="width:200px;"> Function to perform a reorganization layer, rearranging the input tensor into the output tensor.
2546 <td rowspan="2">
2547 <ul>
2548 <li>n/a
2549 </ul>
2550 <td>NEReorgLayer
2551 <td>
2552 <ul>
2553 <li>NHWC
2554 <li>NCHW
2555 </ul>
2556 <td>
2557 <table>
2558 <tr><th>src<th>dst
2559 <tr><td>All<td>All
2560 </table>
2561<tr>
2562 <td>CLReorgLayer
2563 <td>
2564 <ul>
2565 <li>NHWC
2566 <li>NCHW
2567 </ul>
2568 <td>
2569 <table>
2570 <tr><th>src<th>dst
2571 <tr><td>All<td>All
2572    </table>
2573<tr>
2574 <td rowspan="2">ReshapeLayer
2575    <td rowspan="2" style="width:200px;"> Function to reshape a tensor.
2576    <td rowspan="2">
2577 <ul>
2578 <li>ANEURALNETWORKS_RESHAPE
2579 <li>ANEURALNETWORKS_SQUEEZE
2580 </ul>
2581 <td>NEReshapeLayer
2582 <td>
2583 <ul>
2584 <li>All
2585 </ul>
2586 <td>
2587 <table>
2588 <tr><th>src<th>dst
2589 <tr><td>All<td>All
2590 </table>
2591<tr>
2592 <td>CLReshapeLayer
2593 <td>
2594 <ul>
2595 <li>All
2596 </ul>
2597 <td>
2598 <table>
2599 <tr><th>src<th>dst
2600 <tr><td>All<td>All
2601 </table>
2602<tr>
2603    <td rowspan="2">Reverse
2604    <td rowspan="2" style="width:200px;"> Function to reverse a tensor along the given axes.
2605 <td rowspan="2">
2606 <ul>
2607 <li>n/a
2608 </ul>
2609 <td>NEReverse
2610 <td>
2611 <ul>
2612 <li>All
2613 </ul>
2614 <td>
2615 <table>
2616 <tr><th>src0<th>src1<th>dst
2617 <tr><td>All<td>U32<td>All
2618 </table>
2619<tr>
2620 <td>CLReverse
2621 <td>
2622 <ul>
2623 <li>All
2624 </ul>
2625 <td>
2626 <table>
2627 <tr><th>src0<th>src1<th>dst
2628 <tr><td>All<td>U32<td>All
2629 </table>
2630<tr>
2631 <td rowspan="2">RNNLayer
2632    <td rowspan="2" style="width:200px;"> Function to perform a recurrent neural network layer.
2633 <td rowspan="2">
2634 <ul>
2635 <li>ANEURALNETWORKS_RNN
2636 </ul>
2637 <td>NERNNLayer
2638 <td>
2639 <ul>
2640 <li>NHWC
2641 <li>NCHW
2642 </ul>
2643 <td>
2644 <table>
2645 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2646 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2647 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2648 </table>
2649<tr>
2650 <td>CLRNNLayer
2651 <td>
2652 <ul>
2653 <li>NHWC
2654 <li>NCHW
2655 </ul>
2656 <td>
2657 <table>
2658 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2659 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2660 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2661 </table>
2662<tr>
2663 <td rowspan="2">ROIAlignLayer
2664 <td rowspan="2" style="width:200px;"> Function to perform ROI alignment.
2665 <td rowspan="2">
2666 <ul>
2667 <li>ANEURALNETWORKS_ROI_ALIGN
2668 </ul>
2669 <td>NEROIAlignLayer
2670 <td>
2671 <ul>
2672 <li>All
2673 </ul>
2674 <td>
2675 <table>
2676 <tr><th>src0<th>src1<th>dst
2677 <tr><td>F16<td>F16<td>F16
2678 <tr><td>F32<td>F32<td>F32
2679 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2680 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2681 </table>
2682<tr>
2683 <td>CLROIAlignLayer
2684 <td>
2685 <ul>
2686 <li>All
2687 </ul>
2688 <td>
2689 <table>
2690 <tr><th>src0<th>src1<th>dst
2691 <tr><td>F16<td>F16<td>F16
2692 <tr><td>F32<td>F32<td>F32
2693 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2694 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2695 </table>
2696<tr>
2697 <td rowspan="2">ROIPoolingLayer
2698 <td rowspan="2" style="width:200px;"> Function to perform ROI pooling.
2699 <td rowspan="2">
2700 <ul>
2701 <li>ANEURALNETWORKS_ROI_POOLING
2702 </ul>
2703 <td>NEROIPoolingLayer
2704 <td>
2705 <ul>
2706 <li>All
2707 </ul>
2708 <td>
2709 <table>
2710 <tr><th>src0<th>src1<th>dst
2711 <tr><td>F32<td>U16<td>F32
2712 <tr><td>QASYMM8<td>U16<td>QASYMM8
2713 </table>
2714<tr>
2715 <td>CLROIPoolingLayer
2716 <td>
2717 <ul>
2718 <li>All
2719 </ul>
2720 <td>
2721 <table>
2722 <tr><th>src0<th>src1<th>dst
2723 <tr><td>F16<td>U16<td>F16
2724 <tr><td>F32<td>U16<td>F32
2725 <tr><td>QASYMM8<td>U16<td>QASYMM8
2726 </table>
2727<tr>
2728    <td rowspan="2">Scale
2729    <td rowspan="2" style="width:200px;"> Function to resize a tensor using one of the following interpolation methods: - Bilinear - Nearest neighbor
2730    <td rowspan="2">
2731 <ul>
2732 <li>ANEURALNETWORKS_RESIZE_BILINEAR
2733 <li>ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
2734 </ul>
2735 <td>NEScale
2736 <td>
2737 <ul>
2738 <li>NHWC
2739 <li>NCHW
2740 </ul>
2741 <td>
2742 <table>
2743 <tr><th>src<th>dst
2744 <tr><td>QASYMM8<td>QASYMM8
2745 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2746 <tr><td>F16<td>F16
2747 <tr><td>F32<td>F32
2748 <tr><td>U8<td>U8
2749 <tr><td>S16<td>S16
2750 </table>
2751<tr>
2752 <td>CLScale
2753 <td>
2754 <ul>
2755 <li>NHWC
2756 <li>NCHW
2757 </ul>
2758 <td>
2759 <table>
2760 <tr><th>src<th>dst
2761 <tr><td>QASYMM8<td>QASYMM8
2762 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2763 <tr><td>F16<td>F16
2764 <tr><td>F32<td>F32
2765 <tr><td>U8<td>U8
2766 <tr><td>S16<td>S16
2767 </table>
2768<tr>
2769    <td rowspan="2">Select
2770 <td rowspan="2" style="width:200px;"> Function to select values from 2 tensors depending on an input tensor of booleans.
2771 <td rowspan="2">
2772 <ul>
2773 <li>ANEURALNETWORKS_SELECT
2774 </ul>
2775 <td>NESelect
2776 <td>
2777 <ul>
2778 <li>All
2779 </ul>
2780 <td>
2781 <table>
2782 <tr><th>src0<th>src1<th>src2<th>dst
2783 <tr><td>U8<td>All<td>All<td>All
2784 </table>
2785<tr>
2786 <td>CLSelect
2787 <td>
2788 <ul>
2789 <li>All
2790 </ul>
2791 <td>
2792 <table>
2793 <tr><th>src0<th>src1<th>src2<th>dst
2794 <tr><td>U8<td>All<td>All<td>All
2795 </table>
2796<tr>
2797    <td rowspan="2">Slice
2798 <td rowspan="2" style="width:200px;"> Function to perform tensor slicing.
2799 <td rowspan="2">
2800 <ul>
2801 <li>ANEURALNETWORKS_SLICE
2802 </ul>
2803 <td>NESlice
2804 <td>
2805 <ul>
2806 <li>All
2807 </ul>
2808 <td>
2809 <table>
2810 <tr><th>src<th>dst
2811 <tr><td>All<td>All
2812 </table>
2813<tr>
2814 <td>CLSlice
2815 <td>
2816 <ul>
2817 <li>All
2818 </ul>
2819 <td>
2820 <table>
2821 <tr><th>src<th>dst
2822 <tr><td>All<td>All
2823 </table>
2824<tr>
2825    <td rowspan="2">SoftmaxLayer
2826 <td rowspan="2" style="width:200px;"> Function to compute a SoftmaxLayer and a Log SoftmaxLayer.
2827 <td rowspan="2">
2828 <ul>
2829 <li>ANEURALNETWORKS_LOG_SOFTMAX
2830 <li>ANEURALNETWORKS_SOFTMAX
2831 </ul>
2832 <td>NESoftmaxLayerGeneric
2833 <td>
2834 <ul>
2835 <li>All
2836 </ul>
2837 <td>
2838 <table>
2839 <tr><th>src<th>dst
2840 <tr><td>QASYMM8<td>QASYMM8
2841 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2842 <tr><td>F16<td>F16
2843 <tr><td>F32<td>F32
2844 </table>
2845<tr>
2846 <td>CLSoftmaxLayerGeneric
2847 <td>
2848 <ul>
2849 <li>All
2850 </ul>
2851 <td>
2852 <table>
2853 <tr><th>src<th>dst
2854 <tr><td>QASYMM8<td>QASYMM8
2855 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2856 <tr><td>F16<td>F16
2857 <tr><td>F32<td>F32
2858 </table>
2859<tr>
2860    <td rowspan="2">SpaceToBatchLayer
2861    <td rowspan="2" style="width:200px;"> Function to divide the spatial dimensions of a tensor into blocks and interleave them into the batch dimension.
2862 <td rowspan="2">
2863 <ul>
2864 <li>ANEURALNETWORKS_SPACE_TO_BATCH_ND
2865 </ul>
2866 <td>NESpaceToBatchLayer
2867 <td>
2868 <ul>
2869 <li>NHWC
2870 <li>NCHW
2871 </ul>
2872 <td>
2873 <table>
2874 <tr><th>src0<th>src1<th>src2<th>dst
2875 <tr><td>All<td>S32<td>S32<td>All
2876 </table>
2877<tr>
2878 <td>CLSpaceToBatchLayer
2879 <td>
2880 <ul>
2881 <li>NHWC
2882 <li>NCHW
2883 </ul>
2884 <td>
2885 <table>
2886 <tr><th>src0<th>src1<th>src2<th>dst
2887 <tr><td>All<td>S32<td>S32<td>All
2888 </table>
2889<tr>
2890 <td rowspan="2">SpaceToDepthLayer
2891 <td rowspan="2" style="width:200px;"> Function to rearrange blocks of spatial data into depth.
2892 <td rowspan="2">
2893 <ul>
2894 <li>ANEURALNETWORKS_SPACE_TO_DEPTH
2895 </ul>
2896 <td>NESpaceToDepthLayer
2897 <td>
2898 <ul>
2899 <li>NHWC
2900 <li>NCHW
2901 </ul>
2902 <td>
2903 <table>
2904 <tr><th>src<th>dst
2905 <tr><td>All<td>All
2906 </table>
2907<tr>
2908 <td>CLSpaceToDepthLayer
2909 <td>
2910 <ul>
2911 <li>NHWC
2912 <li>NCHW
2913 </ul>
2914 <td>
2915 <table>
2916 <tr><th>src<th>dst
2917 <tr><td>All<td>All
2918 </table>
2919<tr>
2920 <td rowspan="2">Split
2921 <td rowspan="2" style="width:200px;"> Function to split a tensor along a given axis.
2922 <td rowspan="2">
2923 <ul>
2924 <li>ANEURALNETWORKS_SPLIT
2925 </ul>
2926 <td>NESplit
2927 <td>
2928 <ul>
2929 <li>All
2930 </ul>
2931 <td>
2932 <table>
2933 <tr><th>src<th>dst
2934 <tr><td>All<td>All
2935 </table>
2936<tr>
2937 <td>CLSplit
2938 <td>
2939 <ul>
2940 <li>All
2941 </ul>
2942 <td>
2943 <table>
2944 <tr><th>src<th>dst
2945 <tr><td>All<td>All
2946 </table>
2947<tr>
2948 <td rowspan="2">StackLayer
2949 <td rowspan="2" style="width:200px;"> Function to stack tensors along an axis.
2950 <td rowspan="2">
2951 <ul>
2952 <li>n/a
2953 </ul>
2954 <td>NEStackLayer
2955 <td>
2956 <ul>
2957 <li>All
2958 </ul>
2959 <td>
2960 <table>
2961 <tr><th>src<th>dst
2962 <tr><td>All<td>All
2963 </table>
2964<tr>
2965 <td>CLStackLayer
2966 <td>
2967 <ul>
2968 <li>All
2969 </ul>
2970 <td>
2971 <table>
2972 <tr><th>src<th>dst
2973 <tr><td>All<td>All
2974 </table>
2975<tr>
2976    <td rowspan="2">StridedSlice
2977 <td rowspan="2" style="width:200px;"> Function to extract a strided slice of a tensor.
2978 <td rowspan="2">
2979 <ul>
2980 <li>ANEURALNETWORKS_STRIDED_SLICE
2981 </ul>
2982 <td>NEStridedSlice
2983 <td>
2984 <ul>
2985 <li>All
2986 </ul>
2987 <td>
2988 <table>
2989 <tr><th>src<th>dst
2990 <tr><td>All<td>All
2991 </table>
2992<tr>
2993 <td>CLStridedSlice
2994 <td>
2995 <ul>
2996 <li>All
2997 </ul>
2998 <td>
2999 <table>
3000 <tr><th>src<th>dst
3001 <tr><td>All<td>All
3002 </table>
3003<tr>
3004    <td rowspan="2">Tile
3005 <td rowspan="2" style="width:200px;"> Function to construct a tensor by tiling a given tensor.
3006 <td rowspan="2">
3007 <ul>
3008 <li>ANEURALNETWORKS_TILE
3009 </ul>
3010 <td>NETile
3011 <td>
3012 <ul>
3013 <li>All
3014 </ul>
3015 <td>
3016 <table>
3017 <tr><th>src<th>dst
3018 <tr><td>All<td>All
3019 </table>
3020<tr>
3021 <td>CLTile
3022 <td>
3023 <ul>
3024 <li>All
3025 </ul>
3026 <td>
3027 <table>
3028 <tr><th>src<th>dst
3029 <tr><td>All<td>All
3030 </table>
3031<tr>
3032    <td rowspan="2">Transpose
3033    <td rowspan="2" style="width:200px;"> Function to transpose a 2D tensor.
3034    <td rowspan="2">
3035 <ul>
3036 <li>ANEURALNETWORKS_TRANSPOSE
3037 </ul>
3038 <td>NETranspose
3039 <td>
3040 <ul>
3041 <li>All
3042 </ul>
3043 <td>
3044 <table>
3045 <tr><th>src<th>dst
3046 <tr><td>All<td>All
3047 </table>
3048<tr>
3049 <td>CLTranspose
3050 <td>
3051 <ul>
3052 <li>All
3053 </ul>
3054 <td>
3055 <table>
3056 <tr><th>src<th>dst
3057 <tr><td>All<td>All
3058 </table>
3059<tr>
3060 <td rowspan="2">Unstack
3061 <td rowspan="2" style="width:200px;"> Function to unpack a rank-R tensor into rank-(R-1) tensors.
3062 <td rowspan="2">
3063 <ul>
3064 <li>n/a
3065 </ul>
3066 <td>NEUnstack
3067 <td>
3068 <ul>
3069 <li>All
3070 </ul>
3071 <td>
3072 <table>
3073 <tr><th>src<th>dst
3074 <tr><td>All<td>All
3075 </table>
3076<tr>
3077 <td>CLUnstack
3078 <td>
3079 <ul>
3080 <li>All
3081 </ul>
3082 <td>
3083 <table>
3084 <tr><th>src<th>dst
3085 <tr><td>All<td>All
3086 </table>
3087<tr>
3088 <td rowspan="2">WinogradConvolutionLayer
3089    <td rowspan="2" style="width:200px;"> Function to perform a Winograd convolution.
3090 <td rowspan="2">
3091 <ul>
3092 <li>ANEURALNETWORKS_CONV_2D
3093 </ul>
3094 <td>NEWinogradConvolutionLayer
3095 <td>
3096 <ul>
3097 <li>NHWC
3098 <li>NCHW
3099 </ul>
3100 <td>
3101 <table>
3102 <tr><th>src0<th>src1<th>src2<th>dst
3103 <tr><td>F16<td>F16<td>F16<td>F16
3104 <tr><td>F32<td>F32<td>F32<td>F32
3105 </table>
3106<tr>
3107 <td>CLWinogradConvolutionLayer
3108 <td>
3109 <ul>
3110 <li>NHWC
3111 <li>NCHW
3112 </ul>
3113 <td>
3114 <table>
3115 <tr><th>src0<th>src1<th>src2<th>dst
3116 <tr><td>F16<td>F16<td>F16<td>F16
3117 <tr><td>F32<td>F32<td>F32<td>F32
3118 </table>
3119</table>
3120
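The functions listed above share a common configure-then-run usage pattern: initialise the tensor metadata, call configure() on the function object, allocate the tensors, then call run(). The snippet below is a minimal sketch of that pattern (not taken from the library's example programs) using NETranspose from the table and assuming a build with the Neon backend; the tensor shape is illustrative only and error handling is omitted.

@code{.cpp}
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Describe a 2D F32 tensor of 4 columns x 3 rows and its transposed counterpart (illustrative shapes).
    Tensor src{}, dst{};
    src.allocator()->init(TensorInfo(TensorShape(4U, 3U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(3U, 4U), 1, DataType::F32));

    // Configure the operator first; shapes and data types are validated here.
    NETranspose transpose;
    transpose.configure(&src, &dst);

    // Backing memory is allocated only after configuration.
    src.allocator()->allocate();
    dst.allocator()->allocate();

    // ... fill src with data here ...

    // Execute the operator.
    transpose.run();

    return 0;
}
@endcode
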
3121*/
3122} // namespace