///
/// Copyright (c) 2021 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/**
@page operators_list Supported Operators

@tableofcontents

@section S9_1_operators_list Supported Operators

Compute Library supports the operators listed in the table below.

Compute Library supports a wide range of data types; detailed information can be found directly in the documentation of each kernel/function.
The main data types supported by the Machine Learning functions are the following (a short quantization sketch follows the list):
    <ul>
      <li>BFLOAT16: 16-bit non-standard brain floating point
      <li>QASYMM8: 8-bit unsigned asymmetric quantized
      <li>QASYMM8_SIGNED: 8-bit signed asymmetric quantized
      <li>QSYMM8_PER_CHANNEL: 8-bit signed symmetric quantized (Used for the weights)
      <li>QSYMM8: 8-bit signed symmetric quantized
      <li>QSYMM16: 16-bit signed symmetric quantized
      <li>F32: 32-bit single precision floating point
      <li>F16: 16-bit half precision floating point
      <li>S32: 32-bit signed integer
      <li>U8: 8-bit unsigned char
      <li>All: Agnostic to any specific data type
    </ul>
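The quantized types above map stored integer values to real values through a scale and, for the asymmetric types, a zero point. As a rough illustration only (a standalone C++ sketch with made-up helper names, not the Compute Library API), QASYMM8 follows real_value = scale * (quantized_value - zero_point), while the symmetric types use the same mapping with a zero point of 0:

@code{.cpp}
#include <algorithm>
#include <cmath>
#include <cstdint>

// Illustrative helpers only; not part of the Compute Library API.
// QASYMM8-style mapping: real_value = scale * (quantized_value - zero_point).
std::uint8_t quantize_qasymm8(float value, float scale, std::int32_t zero_point)
{
    const std::int32_t q = static_cast<std::int32_t>(std::lround(value / scale)) + zero_point;
    return static_cast<std::uint8_t>(std::min(std::max(q, 0), 255)); // clamp to the 8-bit range
}

float dequantize_qasymm8(std::uint8_t value, float scale, std::int32_t zero_point)
{
    return scale * static_cast<float>(static_cast<std::int32_t>(value) - zero_point);
}

// QSYMM8-style mapping: symmetric types have no zero point (offset is 0).
float dequantize_qsymm8(std::int8_t value, float scale)
{
    return scale * static_cast<float>(value);
}
@endcode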

Compute Library supports the following data layouts (fastest-changing dimension from right to left), as illustrated by the indexing sketch below:
    <ul>
      <li>NHWC: The native layout of Compute Library that delivers the best performance where channels are in the fastest changing dimension
      <li>NCHW: Legacy layout where width is in the fastest changing dimension
      <li>All: Agnostic to any specific data layout
    </ul>
where N = batches, C = channels, H = height, W = width

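To make the layout definitions concrete, the following standalone C++ sketch (illustrative only, not a Compute Library call) computes the linear offset of element (n, c, h, w) in a contiguous buffer for each layout; in NHWC the channel index is the innermost (fastest-changing) one, while in NCHW it is the width index:

@code{.cpp}
#include <cstddef>

// Offsets into a contiguous buffer holding N x C x H x W elements.
// NHWC: channels are the innermost (fastest-changing) dimension.
std::size_t offset_nhwc(std::size_t n, std::size_t c, std::size_t h, std::size_t w,
                        std::size_t C, std::size_t H, std::size_t W)
{
    return ((n * H + h) * W + w) * C + c;
}

// NCHW: width is the innermost (fastest-changing) dimension.
std::size_t offset_nchw(std::size_t n, std::size_t c, std::size_t h, std::size_t w,
                        std::size_t C, std::size_t H, std::size_t W)
{
    return ((n * C + c) * H + h) * W + w;
}
@endcode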
<table>
<caption id="multi_row"></caption>
<tr>
    <th>Function
    <th>Description
    <th>Equivalent Android NNAPI Op
    <th>Backends
    <th>Data Layouts
    <th>Data Types
<tr>
69 <td rowspan="2">ActivationLayer
70 <td rowspan="2" style="width:200px;"> Function to simulate an activation layer with the specified activation function.
71 <td rowspan="2">
72 <ul>
73 <li>ANEURALNETWORKS_ELU
74 <li>ANEURALNETWORKS_HARD_SWISH
75 <li>ANEURALNETWORKS_LOGISTIC
76 <li>ANEURALNETWORKS_RELU
77 <li>ANEURALNETWORKS_RELU1
78 <li>ANEURALNETWORKS_RELU6
79 <li>ANEURALNETWORKS_TANH
80 </ul>
81 <td>NEActivationLayer
82 <td>
83 <ul>
84 <li>All
85 </ul>
86 <td>
87 <table>
88 <tr><th>src<th>dst
89 <tr><td>QASYMM8<td>QASYMM8
90 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
91 <tr><td>QSYMM16<td>QSYMM16
92 <tr><td>F16<td>F16
93 <tr><td>F32<td>F32
94 </table>
95<tr>
96 <td>CLActivationLayer
97 <td>
98 <ul>
99 <li>All
100 </ul>
101 <td>
102 <table>
103 <tr><th>src<th>dst
104 <tr><td>QASYMM8<td>QASYMM8
105 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
106 <tr><td>QSYMM16<td>QSYMM16
107 <tr><td>F16<td>F16
108 <tr><td>F32<td>F32
109 </table>
110<tr>
    <td rowspan="2">ArgMinMaxLayer
112 <td rowspan="2" style="width:200px;"> Function to calculate the index of the minimum or maximum values in a tensor based on an axis.
113 <td rowspan="2">
114 <ul>
115 <li>ANEURALNETWORKS_ARGMAX
116 <li>ANEURALNETWORKS_ARGMIN
117 </ul>
118 <td>NEArgMinMaxLayer
119 <td>
120 <ul>
121 <li>All
122 </ul>
123 <td>
124 <table>
125 <tr><th>src<th>dst
126 <tr><td>QASYMM8<td>U32, S32
127 <tr><td>QASYMM8_SIGNED<td>U32, S32
128 <tr><td>S32<td>U32, S32
129 <tr><td>F16<td>U32, S32
130 <tr><td>F32<td>U32, S32
131 </table>
132<tr>
133 <td>CLArgMinMaxLayer
134 <td>
135 <ul>
136 <li>All
137 </ul>
138 <td>
139 <table>
140 <tr><th>src<th>dst
141 <tr><td>QASYMM8<td>U32, S32
142 <tr><td>QASYMM8_SIGNED<td>U32, S32
143 <tr><td>S32<td>U32, S32
144 <tr><td>F16<td>U32, S32
145 <tr><td>F32<td>U32, S32
146 </table>
147<tr>
    <td rowspan="1">ArithmeticAddition
149 <td rowspan="1" style="width:200px;"> Function to add 2 tensors.
150 <td rowspan="1">
151 <ul>
152 <li>ANEURALNETWORKS_ADD
153 </ul>
154 <td>NEArithmeticAddition
155 <td>
156 <ul>
157 <li>All
158 </ul>
159 <td>
160 <table>
161 <tr><th>src0<th>src1<th>dst
162 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
163 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
164 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
165 <tr><td>QSYMM16<td>QSYMM16<td>S32
166 <tr><td>U8<td>U8<td>U8
    <tr><td>S16<td>S16<td>S16
168 <tr><td>S32<td>S32<td>S32
169 <tr><td>F16<td>F16<td>F16
170 <tr><td>F32<td>F32<td>F32
171 </table>
172<tr>
173 <td rowspan="1">ArithmeticSubtraction
    <td rowspan="1" style="width:200px;"> Function to subtract 2 tensors.
175 <td rowspan="1">
176 <ul>
177 <li>ANEURALNETWORKS_SUB
178 </ul>
179 <td>NEArithmeticSubtraction
180 <td>
181 <ul>
182 <li>All
183 </ul>
184 <td>
185 <table>
186 <tr><th>src0<th>src1<th>dst
187 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
188 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
189 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
190 <tr><td>QSYMM16<td>QSYMM16<td>S32
191 <tr><td>U8<td>U8<td>U8
    <tr><td>S16<td>S16<td>S16
193 <tr><td>S32<td>S32<td>S32
194 <tr><td>F16<td>F16<td>F16
195 <tr><td>F32<td>F32<td>F32
196 </table>
197<tr>
    <td rowspan="2">BatchNormalizationLayer
199 <td rowspan="2" style="width:200px;"> Function to perform batch normalization.
200 <td rowspan="2">
201 <ul>
202 <li>n/a
203 </ul>
204 <td>NEBatchNormalizationLayer
205 <td>
206 <ul>
207 <li>NHWC
208 <li>NCHW
209 </ul>
210 <td>
211 <table>
212 <tr><th>src<th>dst
213 <tr><td>F32<td>F32
214 <tr><td>F16<td>F16
215 </table>
216<tr>
217 <td>CLBatchNormalizationLayer
218 <td>
219 <ul>
220 <li>NHWC
221 <li>NCHW
222 </ul>
223 <td>
224 <table>
225 <tr><th>src<th>dst
226 <tr><td>F32<td>F32
227 <tr><td>F16<td>F16
228 </table>
229<tr>
230 <td rowspan="2">BatchToSpaceLayer
231 <td rowspan="2" style="width:200px;"> Batch to space transformation.
232 <td rowspan="2">
233 <ul>
234 <li>ANEURALNETWORKS_BATCH_TO_SPACE_ND
235 </ul>
236 <td>NEBatchToSpaceLayer
237 <td>
238 <ul>
239 <li>NHWC
240 <li>NCHW
241 </ul>
242 <td>
243 <table>
244 <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>S32<td>All
246 </table>
247<tr>
248 <td>CLBatchToSpaceLayer
249 <td>
250 <ul>
251 <li>NHWC
252 <li>NCHW
253 </ul>
254 <td>
255 <table>
256 <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>S32<td>All
258 </table>
259<tr>
260 <td rowspan="2">BitwiseAnd
    <td rowspan="2" style="width:200px;"> Function to perform bitwise AND between 2 tensors.
    <td rowspan="2">
263 <ul>
264 <li>ANEURALNETWORKS_LOGICAL_AND
265 </ul>
266 <td>NEBitwiseAnd
267 <td>
268 <ul>
269 <li>All
270 </ul>
271 <td>
272 <table>
273 <tr><th>src<th>dst
274 <tr><td>U8<td>U8
275 </table>
276<tr>
277 <td>CLBitwiseAnd
278 <td>
279 <ul>
280 <li>All
281 </ul>
282 <td>
283 <table>
284 <tr><th>src<th>dst
285 <tr><td>U8<td>U8
286 </table>
287<tr>
288 <td rowspan="2">BitwiseNot
    <td rowspan="2" style="width:200px;"> Function to perform bitwise NOT.
    <td rowspan="2">
291 <ul>
292 <li>ANEURALNETWORKS_LOGICAL_NOT
293 </ul>
294 <td>NEBitwiseNot
295 <td>
296 <ul>
297 <li>All
298 </ul>
299 <td>
300 <table>
301 <tr><th>src<th>dst
302 <tr><td>U8<td>U8
303 </table>
304<tr>
305 <td>CLBitwiseNot
306 <td>
307 <ul>
308 <li>All
309 </ul>
310 <td>
311 <table>
312 <tr><th>src<th>dst
313 <tr><td>U8<td>U8
314 </table>
315<tr>
316 <td rowspan="2">BitwiseOr
    <td rowspan="2" style="width:200px;"> Function to perform bitwise OR between 2 tensors.
    <td rowspan="2">
319 <ul>
320 <li>ANEURALNETWORKS_LOGICAL_OR
321 </ul>
322 <td>NEBitwiseOr
323 <td>
324 <ul>
325 <li>All
326 </ul>
327 <td>
328 <table>
329 <tr><th>src<th>dst
330 <tr><td>U8<td>U8
331 </table>
332<tr>
333 <td>CLBitwiseOr
334 <td>
335 <ul>
336 <li>All
337 </ul>
338 <td>
339 <table>
340 <tr><th>src<th>dst
341 <tr><td>U8<td>U8
342 </table>
343<tr>
344 <td rowspan="2">BitwiseXor
    <td rowspan="2" style="width:200px;"> Function to perform bitwise XOR between 2 tensors.
    <td rowspan="2">
347 <ul>
348 <li>n/a
349 </ul>
350 <td>NEBitwiseXor
351 <td>
352 <ul>
353 <li>All
354 </ul>
355 <td>
356 <table>
357 <tr><th>src<th>dst
358 <tr><td>U8<td>U8
359 </table>
360<tr>
361 <td>CLBitwiseXor
362 <td>
363 <ul>
364 <li>All
365 </ul>
366 <td>
367 <table>
368 <tr><th>src<th>dst
369 <tr><td>U8<td>U8
370 </table>
371<tr>
372 <td rowspan="2">BoundingBoxTransform
373 <td rowspan="2" style="width:200px;"> Transform proposal bounding boxes to target bounding box using bounding box deltas.
374 <td rowspan="2">
375 <ul>
376 <li>n/a
377 </ul>
378 <td>NEBoundingBoxTransform
379 <td>
380 <ul>
381 <li>NHWC
382 <li>NCHW
383 </ul>
384 <td>
385 <table>
386 <tr><th>src0<th>src1<th>dst
387 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
388 <tr><td>F16<td>F16<td>F16
389 <tr><td>F32<td>F32<td>F32
390 </table>
391<tr>
392 <td>CLBoundingBoxTransform
393 <td>
394 <ul>
395 <li>NHWC
396 <li>NCHW
397 </ul>
398 <td>
399 <table>
400 <tr><th>src0<th>src1<th>dst
401 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
402 <tr><td>F16<td>F16<td>F16
403 <tr><td>F32<td>F32<td>F32
404 </table>
405<tr>
406 <td rowspan="2">Cast
407 <td rowspan="2" style="width:200px;"> Function to cast a tensor.
408 <td rowspan="2">
409 <ul>
410 <li>ANEURALNETWORKS_CAST
411 </ul>
412 <td>NECast
413 <td>
414 <ul>
415 <li>All
416 </ul>
417 <td>
418 <table>
419 <tr><th>src<th>dst
420 <tr><td>QASYMM8_SIGNED<td>S16, S32, F32, F16
421 <tr><td>QASYMM8<td>U16, S16, S32, F32, F16
422 <tr><td>U8<td>U16, S16, S32, F32, F16
423 <tr><td>U16<td>U8, U32
424 <tr><td>S16<td>QASYMM8_SIGNED, U8, S32
425 <tr><td>F16<td>QASYMM8_SIGNED, QASYMM8, F32, S32, U8
426 <tr><td>S32<td>QASYMM8_SIGNED, QASYMM8, F16, F32, U8
427 <tr><td>F32<td>QASYMM8_SIGNED, QASYMM8, BFLOAT16, F16, S32, U8
428 </table>
429<tr>
430 <td>CLCast
431 <td>
432 <ul>
433 <li>All
434 </ul>
435 <td>
436 <table>
437 <tr><th>src<th>dst
438 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
439 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
440 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
441 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
442 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
443 <tr><td>F16<td>U8, S8, U16, S16, U32, F32
444 <tr><td>F32<td>U8, S8, U16, S16, U32, F16
445 </table>
446<tr>
447 <td rowspan="2">ChannelShuffleLayer
448 <td rowspan="2" style="width:200px;"> Function to shuffle the channels of the input tensor.
449 <td rowspan="2">
450 <ul>
451 <li>ANEURALNETWORKS_CHANNEL_SHUFFLE
452 </ul>
453 <td>NEChannelShuffleLayer
454 <td>
455 <ul>
456 <li>NCHW
       <li>NHWC
      </ul>
459 <td>
460 <table>
461 <tr><th>src<th>dst
462 <tr><td>All<td>All
463 </table>
464<tr>
465 <td>CLChannelShuffleLayer
466 <td>
467 <ul>
468 <li>NCHW
       <li>NHWC
      </ul>
471 <td>
472 <table>
473 <tr><th>src<th>dst
474 <tr><td>All<td>All
475 </table>
476<tr>
    <td rowspan="1">Comparison
478 <td rowspan="1" style="width:200px;"> Function to compare 2 tensors.
479 <td rowspan="1">
480 <ul>
481 <li>ANEURALNETWORKS_EQUAL
482 <li>ANEURALNETWORKS_GREATER
483 <li>ANEURALNETWORKS_GREATER_EQUAL
484 <li>ANEURALNETWORKS_LESS
485 <li>ANEURALNETWORKS_LESS_EQUAL
486 <li>ANEURALNETWORKS_NOT_EQUAL
487 </ul>
488 <td>CLComparison
489 <td>
490 <ul>
491 <li>All
492 </ul>
493 <td>
494 <table>
495 <tr><th>src0<th>src1<th>dst
496 <tr><td>All<td>All<td>U8
497 </table>
498<tr>
    <td rowspan="2">ConcatenateLayer
500 <td rowspan="2" style="width:200px;"> Function to concatenate tensors along a given axis.
501 <td rowspan="2">
502 <ul>
503 <li>ANEURALNETWORKS_CONCATENATION
504 </ul>
505 <td>NEConcatenateLayer
506 <td>
507 <ul>
508 <li>All
509 </ul>
510 <td>
511 <table>
512 <tr><th>src<th>dst
513 <tr><td>QASYMM8<td>QASYMM8
514 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
515 <tr><td>F16<td>F16
516 <tr><td>F32<td>F32
517 </table>
518<tr>
519 <td>CLConcatenateLayer
520 <td>
521 <ul>
522 <li>All
523 </ul>
524 <td>
525 <table>
526 <tr><th>src<th>dst
527 <tr><td>QASYMM8<td>QASYMM8
528 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
529 <tr><td>F16<td>F16
530 <tr><td>F32<td>F32
531 </table>
532<tr>
533 <td rowspan="2">ConvertFullyConnectedWeights
    <td rowspan="2" style="width:200px;"> Function to transpose the weights for the fully connected layer.
    <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
539 <td>NEConvertFullyConnectedWeights
540 <td>
541 <ul>
542 <li>NHWC
543 <li>NCHW
544 </ul>
545 <td>
546 <table>
547 <tr><th>src<th>dst
548 <tr><td>All<td>All
549 </table>
550<tr>
551 <td>CLConvertFullyConnectedWeights
552 <td>
553 <ul>
554 <li>NHWC
555 <li>NCHW
556 </ul>
557 <td>
558 <table>
559 <tr><th>src<th>dst
560 <tr><td>All<td>All
561 </table>
562<tr>
    <td rowspan="2">ConvolutionLayer
564 <td rowspan="2" style="width:200px;"> Function to compute a convolution layer.
565 <td rowspan="2">
566 <ul>
567 <li>ANEURALNETWORKS_CONV_2D
568 </ul>
569 <td>NEConvolutionLayer
570 <td>
571 <ul>
572 <li>NHWC
573 <li>NCHW
574 </ul>
575 <td>
576 <table>
577 <tr><th>src0<th>src1<th>src2<th>dst
578 <tr><td>F16<td>F16<td>F16<td>F16
579 <tr><td>F32<td>F32<td>F32<td>F32
580 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
581 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
582 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
583 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
584 </table>
585<tr>
586 <td>CLConvolutionLayer
587 <td>
588 <ul>
589 <li>NHWC
590 <li>NCHW
591 </ul>
592 <td>
593 <table>
594 <tr><th>src0<th>src1<th>src2<th>dst
595 <tr><td>F16<td>F16<td>F16<td>F16
596 <tr><td>F32<td>F32<td>F32<td>F32
597 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
598 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
599 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
600 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
601 </table>
602<tr>
    <td rowspan="2">Copy
604 <td rowspan="2" style="width:200px;"> Function to copy a tensor.
605 <td rowspan="2">
606 <ul>
       <li>n/a
      </ul>
609 <td>NECopy
610 <td>
611 <ul>
612 <li>All
613 </ul>
614 <td>
615 <table>
616 <tr><th>src<th>dst
617 <tr><td>All<td>All
618 </table>
619<tr>
620 <td>CLCopy
621 <td>
622 <ul>
623 <li>All
624 </ul>
625 <td>
626 <table>
627 <tr><th>src<th>dst
628 <tr><td>All<td>All
629 </table>
630<tr>
    <td rowspan="1">Crop
632 <td rowspan="1" style="width:200px;"> Performs a copy of input tensor to the output tensor.
633 <td rowspan="1">
634 <ul>
635 <li>n/a
636 </ul>
637 <td>CLCrop
638 <td>
639 <ul>
640 <li>NHWC
641 </ul>
642 <td>
643 <table>
644 <tr><th>src<th>dst
645 <tr><td>All<td>F32
646 </table>
647<tr>
    <td rowspan="2">CropResize
649 <td rowspan="2" style="width:200px;"> Function to perform cropping and resizing.
650 <td rowspan="2">
651 <ul>
652 <li>n/a
653 </ul>
654 <td>NECropResize
655 <td>
656 <ul>
657 <li>NHWC
658 </ul>
659 <td>
660 <table>
661 <tr><th>src0<th>src1<th>src2<th>dst
662 <tr><td>All<td>F32<td>F32<td>F32
663 </table>
664<tr>
665 <td>CLCropResize
666 <td>
667 <ul>
668 <li>NHWC
669 </ul>
670 <td>
671 <table>
672 <tr><th>src0<th>src1<th>src2<th>dst
673 <tr><td>All<td>F32<td>F32<td>F32
674 </table>
675<tr>
676 <td rowspan="2">DeconvolutionLayer
    <td rowspan="2" style="width:200px;"> Function to compute a deconvolution or transpose convolution.
    <td rowspan="2">
679 <ul>
680 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
681 </ul>
682 <td>NEDeconvolutionLayer
683 <td>
684 <ul>
685 <li>NHWC
686 <li>NCHW
687 </ul>
688 <td>
689 <table>
690 <tr><th>src0<th>src1<th>src2<th>dst
691 <tr><td>F16<td>F16<td>F16<td>F16
692 <tr><td>F32<td>F32<td>F32<td>F32
693 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
694 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
695 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
696 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
697 </table>
698<tr>
699 <td>CLDeconvolutionLayer
700 <td>
701 <ul>
702 <li>NHWC
703 <li>NCHW
704 </ul>
705 <td>
706 <table>
707 <tr><th>src0<th>src1<th>src2<th>dst
708 <tr><td>F16<td>F16<td>F16<td>F16
709 <tr><td>F32<td>F32<td>F32<td>F32
710 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
711 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
712 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
713 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
714 </table>
715<tr>
    <td rowspan="1">DeconvolutionLayerUpsample
717 <td rowspan="1" style="width:200px;"> Function to execute deconvolution upsample on OpenCL.
718 <td rowspan="1">
719 <ul>
720 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
721 </ul>
722 <td>CLDeconvolutionLayerUpsample
723 <td>
724 <ul>
725 <li>NHWC
726 <li>NCHW
727 </ul>
728 <td>
729 <table>
730 <tr><th>src<th>dst
731 <tr><td>All<td>All
732 </table>
733<tr>
    <td rowspan="2">DepthConvertLayer
735 <td rowspan="2" style="width:200px;"> Performs a down-scaling depth conversion.
736 <td rowspan="2">
737 <ul>
738 <li>n/a
739 </ul>
740 <td>NEDepthConvertLayer
741 <td>
742 <ul>
743 <li>All
744 </ul>
745 <td>
746 <table>
747 <tr><th>src<th>dst
748 <tr><td>QASYMM8<td>F16, F32
749 <tr><td>U8<td>U16, S16, S32
750 <tr><td>U16<td>U8, U32
751 <tr><td>S16<td>U8, S32
752 <tr><td>BFLOAT16<td>F32
753 <tr><td>F16<td>QASYMM8, F32
754 <tr><td>F32<td>QASYMM8, F16, BFLOAT16
755 </table>
756<tr>
757 <td>CLDepthConvertLayer
758 <td>
759 <ul>
760 <li>All
761 </ul>
762 <td>
763 <table>
764 <tr><th>src<th>dst
765 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
766 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
767 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
768 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
769 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
770 <tr><td>F16<td>U8, S8, U16, S16, U32, F32
771 <tr><td>F32<td>U8, S8, U16, S16, U32, F16
772 </table>
773<tr>
774 <td rowspan="2">DepthToSpaceLayer
775 <td rowspan="2" style="width:200px;"> Depth to Space transformation.
776 <td rowspan="2">
777 <ul>
778 <li>ANEURALNETWORKS_DEPTH_TO_SPACE
779 </ul>
780 <td>NEDepthToSpaceLayer
781 <td>
782 <ul>
783 <li>NHWC
784 <li>NCHW
785 </ul>
786 <td>
787 <table>
788 <tr><th>src<th>dst
789 <tr><td>All<td>All
790 </table>
791<tr>
792 <td>CLDepthToSpaceLayer
793 <td>
794 <ul>
795 <li>NHWC
796 <li>NCHW
797 </ul>
798 <td>
799 <table>
800 <tr><th>src<th>dst
801 <tr><td>All<td>All
802 </table>
803<tr>
804 <td rowspan="2">DepthwiseConvolutionLayer
805 <td rowspan="2" style="width:200px;"> Function to perform depthwise separable convolution.
806 <td rowspan="2">
807 <ul>
808 <li>ANEURALNETWORKS_DEPTHWISE_CONV_2D
809 </ul>
810 <td>NEDepthwiseConvolutionLayer
811 <td>
812 <ul>
813 <li>NHWC
814 <li>NCHW
815 </ul>
816 <td>
817 <table>
818 <tr><th>src0<th>src1<th>src2<th>dst
819 <tr><td>F16<td>F16<td>F16<td>F16
820 <tr><td>F32<td>F32<td>F32<td>F32
821 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
822 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
823 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
824 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
825 </table>
826<tr>
827 <td>CLDepthwiseConvolutionLayer
828 <td>
829 <ul>
830 <li>NHWC
831 <li>NCHW
832 </ul>
833 <td>
834 <table>
835 <tr><th>src0<th>src1<th>src2<th>dst
836 <tr><td>F16<td>F16<td>F16<td>F16
837 <tr><td>F32<td>F32<td>F32<td>F32
838 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
839 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
840 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
841 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
842 </table>
843<tr>
    <td rowspan="2">DequantizationLayer
    <td rowspan="2" style="width:200px;"> Function to dequantize the values in a tensor.
    <td rowspan="2">
847 <ul>
848 <li>ANEURALNETWORKS_DEQUANTIZE
849 </ul>
850 <td>NEDequantizationLayer
851 <td>
852 <ul>
853 <li>All
854 </ul>
855 <td>
856 <table>
857 <tr><th>src<th>dst
    <tr><td>QASYMM8<td>F16, F32
    <tr><td>QASYMM8_SIGNED<td>F16, F32
    <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
    <tr><td>QSYMM8<td>F16, F32
    <tr><td>QSYMM16<td>F16, F32
    </table>
864<tr>
865 <td>CLDequantizationLayer
866 <td>
867 <ul>
868 <li>All
869 </ul>
870 <td>
871 <table>
872 <tr><th>src<th>dst
    <tr><td>QASYMM8<td>F16, F32
    <tr><td>QASYMM8_SIGNED<td>F16, F32
    <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
    <tr><td>QSYMM8<td>F16, F32
    <tr><td>QSYMM16<td>F16, F32
    </table>
879<tr>
    <td rowspan="1">DetectionPostProcessLayer
881 <td rowspan="1" style="width:200px;"> Function to generate the detection output based on center size encoded boxes, class prediction and anchors by doing non maximum suppression (NMS).
882 <td rowspan="1">
883 <ul>
884 <li>ANEURALNETWORKS_DETECTION_POSTPROCESSING
885 </ul>
886 <td>NEDetectionPostProcessLayer
887 <td>
888 <ul>
889 <li>All
890 </ul>
891 <td>
892 <table>
893 <tr><th>src0 - src2<th>dst0 - dst3
894 <tr><td>QASYMM8<td>F32
895 <tr><td>QASYMM8_SIGNED<td>F32
896 <tr><td>F32<td>F32
897 </table>
898<tr>
    <td rowspan="2">DirectConvolutionLayer
    <td rowspan="2" style="width:200px;"> Function to compute direct convolution.
    <td rowspan="2">
902 <ul>
903 <li>ANEURALNETWORKS_CONV_2D
904 </ul>
905 <td>NEDirectConvolutionLayer
906 <td>
907 <ul>
908 <li>NHWC
909 <li>NCHW
910 </ul>
911 <td>
912 <table>
913 <tr><th>src0<th>src1<th>src2<th>dst
914 <tr><td>F16<td>F16<td>F16<td>F16
915 <tr><td>F32<td>F32<td>F32<td>F32
916 </table>
917<tr>
918 <td>CLDirectConvolutionLayer
919 <td>
920 <ul>
921 <li>NHWC
922 <li>NCHW
923 </ul>
924 <td>
925 <table>
926 <tr><th>src0<th>src1<th>src2<th>dst
927 <tr><td>F16<td>F16<td>F16<td>F16
928 <tr><td>F32<td>F32<td>F32<td>F32
929 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
930 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
931 </table>
932<tr>
    <td rowspan="1">DirectDeconvolutionLayer
934 <td rowspan="1" style="width:200px;"> Function to run the deconvolution layer.
935 <td rowspan="1">
936 <ul>
937 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
938 </ul>
939 <td>CLDirectDeconvolutionLayer
940 <td>
941 <ul>
942 <li>NHWC
943 <li>NCHW
944 </ul>
945 <td>
946 <table>
947 <tr><th>src0<th>src1<th>src2<th>dst
948 <tr><td>F16<td>F16<td>F16<td>F16
949 <tr><td>F32<td>F32<td>F32<td>F32
950 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
951 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
952 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
953 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
954 </table>
955<tr>
    <td rowspan="13">ElementwiseOperations
    <td rowspan="13" style="width:200px;"> Function to perform in Cpu: - Div - Max - Min - Pow - SquaredDiff - Comparisons (Equal, greater, greater_equal, less, less_equal, not_equal) Function to perform in CL: - Add - Sub - Div - Max - Min - Pow - SquaredDiff
958 <td rowspan="13">
959 <ul>
960 <li>ANEURALNETWORKS_MAXIMUM
961 <li>ANEURALNETWORKS_MINIMUM
962 <li>ANEURALNETWORKS_POW
963 <li>ANEURALNETWORKS_DIV
964 <li>ANEURALNETWORKS_ADD
965 <li>ANEURALNETWORKS_SUB
966 <li>ANEURALNETWORKS_EQUAL
967 <li>ANEURALNETWORKS_GREATER
968 <li>ANEURALNETWORKS_GREATER_EQUAL
969 <li>ANEURALNETWORKS_LESS
970 <li>ANEURALNETWORKS_LESS_EQUAL
971 <li>ANEURALNETWORKS_NOT_EQUAL
972 </ul>
973 <td>NEElementwiseMax
974 <td>
975 <ul>
976 <li>All
977 </ul>
978 <td>
979 <table>
980 <tr><th>src0<th>src1<th>dst
981 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
982 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
983 <tr><td>S32<td>S32<td>S32
984 <tr><td>S16<td>S16<td>S16
985 <tr><td>F16<td>F16<td>F16
986 <tr><td>F32<td>F32<td>F32
987 </table>
988<tr>
989 <td>NEElementwiseMin
990 <td>
991 <ul>
992 <li>All
993 </ul>
994 <td>
995 <table>
996 <tr><th>src0<th>src1<th>dst
997 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
998 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
999 <tr><td>S32<td>S32<td>S32
1000 <tr><td>S16<td>S16<td>S16
1001 <tr><td>F16<td>F16<td>F16
1002 <tr><td>F32<td>F32<td>F32
1003 </table>
1004<tr>
1005 <td>NEElementwiseSquaredDiff
1006 <td>
1007 <ul>
1008 <li>All
1009 </ul>
1010 <td>
1011 <table>
1012 <tr><th>src0<th>src1<th>dst
1013 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1014 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1015 <tr><td>S32<td>S32<td>S32
1016 <tr><td>S16<td>S16<td>S16
1017 <tr><td>F16<td>F16<td>F16
1018 <tr><td>F32<td>F32<td>F32
1019 </table>
1020<tr>
1021 <td>NEElementwiseDivision
1022 <td>
1023 <ul>
1024 <li>All
1025 </ul>
1026 <td>
1027 <table>
1028 <tr><th>src0<th>src1<th>dst
1029 <tr><td>F16<td>F16<td>F16
1030 <tr><td>F32<td>F32<td>F32
1031 </table>
1032<tr>
1033 <td>NEElementwisePower
1034 <td>
1035 <ul>
1036 <li>All
1037 </ul>
1038 <td>
1039 <table>
1040 <tr><th>src0<th>src1<th>dst
1041 <tr><td>F16<td>F16<td>F16
1042 <tr><td>F32<td>F32<td>F32
1043 </table>
1044<tr>
1045 <td>NEElementwiseComparison
1046 <td>
1047 <ul>
1048 <li>All
1049 </ul>
1050 <td>
1051 <table>
1052 <tr><th>src0<th>src1<th>dst
1053 <tr><td>QASYMM8<td>QASYMM8<td>U8
1054 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>U8
1055 <tr><td>S32<td>S32<td>U8
1056 <tr><td>U8<td>U8<td>U8
1057 <tr><td>S16<td>S16<td>U8
1058 <tr><td>F16<td>F16<td>U8
1059 <tr><td>F32<td>F32<td>U8
1060 </table>
1061<tr>
1062 <td>CLArithmeticAddition
1063 <td>
1064 <ul>
1065 <li>All
1066 </ul>
1067 <td>
1068 <table>
1069 <tr><th>src0<th>src1<th>dst
1070 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1071 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1072 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1073 <tr><td>U8<td>U8<td>U8
1074 <tr><td>U8<td>U8<td>S16
1075 <tr><td>U8<td>S16<td>S16
1076 <tr><td>S16<td>U8<td>S16
1077 <tr><td>S16<td>S16<td>S16
1078 <tr><td>S32<td>S32<td>S32
1079 <tr><td>F16<td>F16<td>F16
1080 <tr><td>F32<td>F32<td>F32
1081 </table>
1082<tr>
1083 <td>CLArithmeticSubtraction
1084 <td>
1085 <ul>
1086 <li>All
1087 </ul>
1088 <td>
1089 <table>
1090 <tr><th>src0<th>src1<th>dst
1091 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1092 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1093 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1094 <tr><td>U8<td>U8<td>U8
1095 <tr><td>U8<td>U8<td>S16
1096 <tr><td>U8<td>S16<td>S16
1097 <tr><td>S16<td>U8<td>S16
1098 <tr><td>S16<td>S16<td>S16
1099 <tr><td>S32<td>S32<td>S32
1100 <tr><td>F16<td>F16<td>F16
1101 <tr><td>F32<td>F32<td>F32
1102 </table>
1103<tr>
1104 <td>CLArithmeticDivision
1105 <td>
1106 <ul>
1107 <li>All
1108 </ul>
1109 <td>
1110 <table>
1111 <tr><th>src0<th>src1<th>dst
1112 <tr><td>F16<td>F16<td>F16
1113 <tr><td>F32<td>F32<td>F32
1114 </table>
1115<tr>
1116 <td>CLElementwiseMax
1117 <td>
1118 <ul>
1119 <li>All
1120 </ul>
1121 <td>
1122 <table>
1123 <tr><th>src0<th>src1<th>dst
1124 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1125 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1126 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1127 <tr><td>U8<td>U8<td>U8
1128 <tr><td>S16<td>S16<td>S16
1129 <tr><td>S32<td>S32<td>S32
1130 <tr><td>U32<td>U32<td>U32
1131 <tr><td>F16<td>F16<td>F16
1132 <tr><td>F32<td>F32<td>F32
1133 </table>
1134<tr>
1135 <td>CLElementwiseMin
1136 <td>
1137 <ul>
1138 <li>All
1139 </ul>
1140 <td>
1141 <table>
1142 <tr><th>src0<th>src1<th>dst
1143 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1144 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1145 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1146 <tr><td>U8<td>U8<td>U8
1147 <tr><td>S16<td>S16<td>S16
1148 <tr><td>S32<td>S32<td>S32
1149 <tr><td>U32<td>U32<td>U32
1150 <tr><td>F16<td>F16<td>F16
1151 <tr><td>F32<td>F32<td>F32
1152 </table>
1153<tr>
1154 <td>CLElementwiseSquaredDiff
1155 <td>
1156 <ul>
1157 <li>All
1158 </ul>
1159 <td>
1160 <table>
1161 <tr><th>src0<th>src1<th>dst
1162 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1163 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1164 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1165 <tr><td>U8<td>U8<td>U8
1166 <tr><td>S16<td>S16<td>S16
1167 <tr><td>F16<td>F16<td>F16
1168 <tr><td>F32<td>F32<td>F32
1169 </table>
1170<tr>
1171 <td>CLElementwisePower
1172 <td>
1173 <ul>
1174 <li>All
1175 </ul>
1176 <td>
1177 <table>
1178 <tr><th>src0<th>src1<th>dst
1179 <tr><td>F16<td>F16<td>F16
1180 <tr><td>F32<td>F32<td>F32
1181 </table>
1182<tr>
1183 <td rowspan="8">ElementwiseUnaryLayer
1184 <td rowspan="8" style="width:200px;"> Function to perform: - Rsqrt - Exp - Neg - Log - Abs - Round - Sin
1185 <td rowspan="8">
1186 <ul>
1187 <li>ANEURALNETWORKS_ABS
1188 <li>ANEURALNETWORKS_EXP
1189 <li>ANEURALNETWORKS_LOG
1190 <li>ANEURALNETWORKS_NEG
1191 <li>ANEURALNETWORKS_RSQRT
1192 <li>ANEURALNETWORKS_SIN
1193 </ul>
1194 <td>NEElementwiseUnaryLayer
1195 <td>
1196 <ul>
1197 <li>All
1198 </ul>
1199 <td>
1200 <table>
1201 <tr><th>src<th>dst
1202 <tr><td>F16<td>F16
1203 <tr><td>F32<td>F32
1204 <tr><td>S32<td>S32
1205 </table>
1206<tr>
1207 <td>CLRsqrtLayer
1208 <td>
1209 <ul>
1210 <li>All
1211 </ul>
1212 <td>
1213 <table>
1214 <tr><th>src<th>dst
1215 <tr><td>F16<td>F16
1216 <tr><td>F32<td>F32
1217 </table>
1218<tr>
1219 <td>CLExpLayer
1220 <td>
1221 <ul>
1222 <li>All
1223 </ul>
1224 <td>
1225 <table>
1226 <tr><th>src<th>dst
1227 <tr><td>F16<td>F16
1228 <tr><td>F32<td>F32
1229 </table>
1230<tr>
1231 <td>CLNegLayer
1232 <td>
1233 <ul>
1234 <li>All
1235 </ul>
1236 <td>
1237 <table>
1238 <tr><th>src<th>dst
1239 <tr><td>F16<td>F16
1240 <tr><td>F32<td>F32
    <tr><td>S32<td>S32
    </table>
1243<tr>
1244 <td>CLSinLayer
1245 <td>
1246 <ul>
1247 <li>All
1248 </ul>
1249 <td>
1250 <table>
1251 <tr><th>src<th>dst
1252 <tr><td>F16<td>F16
1253 <tr><td>F32<td>F32
1254 </table>
1255<tr>
1256 <td>CLLogLayer
1257 <td>
1258 <ul>
1259 <li>All
1260 </ul>
1261 <td>
1262 <table>
1263 <tr><th>src<th>dst
1264 <tr><td>F16<td>F16
1265 <tr><td>F32<td>F32
1266 </table>
1267<tr>
1268 <td>CLAbsLayer
1269 <td>
1270 <ul>
1271 <li>All
1272 </ul>
1273 <td>
1274 <table>
1275 <tr><th>src<th>dst
1276 <tr><td>F16<td>F16
1277 <tr><td>F32<td>F32
1278 </table>
1279<tr>
1280 <td>CLRoundLayer
1281 <td>
1282 <ul>
1283 <li>All
1284 </ul>
1285 <td>
1286 <table>
1287 <tr><th>src<th>dst
1288 <tr><td>F16<td>F16
1289 <tr><td>F32<td>F32
1290 </table>
1291<tr>
    <td rowspan="2">FFT1D
    <td rowspan="2" style="width:200px;"> Fast Fourier Transform 1D.
    <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
1298 <td>NEFFT1D
1299 <td>
1300 <ul>
1301 <li>All
1302 </ul>
1303 <td>
1304 <table>
1305 <tr><th>src<th>dst
1306 <tr><td>F32<td>F32
1307 </table>
1308<tr>
1309 <td>CLFFT1D
1310 <td>
1311 <ul>
1312 <li>All
1313 </ul>
1314 <td>
1315 <table>
1316 <tr><th>src<th>dst
1317 <tr><td>F32<td>F32
1318 <tr><td>F16<td>F16
1319 </table>
1320<tr>
1321 <td rowspan="2">FFT2D
    <td rowspan="2" style="width:200px;"> Fast Fourier Transform 2D.
    <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
1327 <td>NEFFT2D
1328 <td>
1329 <ul>
1330 <li>All
1331 </ul>
1332 <td>
1333 <table>
1334 <tr><th>src<th>dst
1335 <tr><td>F32<td>F32
1336 </table>
1337<tr>
1338 <td>CLFFT2D
1339 <td>
1340 <ul>
1341 <li>All
1342 </ul>
1343 <td>
1344 <table>
1345 <tr><th>src<th>dst
1346 <tr><td>F32<td>F32
1347 <tr><td>F16<td>F16
1348 </table>
1349<tr>
1350 <td rowspan="2">FFTConvolutionLayer
    <td rowspan="2" style="width:200px;"> Fast Fourier Transform Convolution.
    <td rowspan="2">
1353 <ul>
1354 <li>ANEURALNETWORKS_CONV_2D
1355 </ul>
1356 <td>NEFFTConvolutionLayer
1357 <td>
1358 <ul>
1359 <li>All
1360 </ul>
1361 <td>
1362 <table>
1363 <tr><th>src<th>dst
1364 <tr><td>F32<td>F32
1365 </table>
1366<tr>
1367 <td>CLFFTConvolutionLayer
1368 <td>
1369 <ul>
1370 <li>All
1371 </ul>
1372 <td>
1373 <table>
1374 <tr><th>src<th>dst
1375 <tr><td>F32<td>F32
1376 <tr><td>F16<td>F16
1377 </table>
1378<tr>
1379 <td rowspan="2">Fill
    <td rowspan="2" style="width:200px;"> Set the values of a tensor with a given value.
    <td rowspan="2">
1382 <ul>
1383 <li>ANEURALNETWORKS_FILL
1384 </ul>
1385 <td>NEFill
1386 <td>
1387 <ul>
1388 <li>All
1389 </ul>
1390 <td>
1391 <table>
1392 <tr><th>src<th>dst
1393 <tr><td>All<td>All
1394 </table>
1395<tr>
1396 <td>CLFill
1397 <td>
1398 <ul>
1399 <li>All
1400 </ul>
1401 <td>
1402 <table>
1403 <tr><th>src<th>dst
1404 <tr><td>All<td>All
1405 </table>
1406<tr>
    <td rowspan="2">FillBorder
    <td rowspan="2" style="width:200px;"> Function to fill the borders within the XY-planes.
    <td rowspan="2">
1410 <ul>
1411 <li>n/a
1412 </ul>
1413 <td>NEFillBorder
1414 <td>
1415 <ul>
1416 <li>All
1417 </ul>
1418 <td>
1419 <table>
1420 <tr><th>src<th>dst
1421 <tr><td>All<td>All
1422 </table>
1423<tr>
1424 <td>CLFillBorder
1425 <td>
1426 <ul>
1427 <li>All
1428 </ul>
1429 <td>
1430 <table>
1431 <tr><th>src<th>dst
1432 <tr><td>All<td>All
1433 </table>
1434<tr>
1435 <td rowspan="2">FlattenLayer
1436 <td rowspan="2" style="width:200px;"> Reshape a tensor to be 1D
1437 <td rowspan="2">
1438 <ul>
1439 <li>ANEURALNETWORKS_RESHAPE
1440 </ul>
1441 <td>NEFlattenLayer
1442 <td>
1443 <ul>
1444 <li>All
1445 </ul>
1446 <td>
1447 <table>
1448 <tr><th>src<th>dst
1449 <tr><td>All<td>All
1450 </table>
1451<tr>
1452 <td>CLFlattenLayer
1453 <td>
1454 <ul>
1455 <li>All
1456 </ul>
1457 <td>
1458 <table>
1459 <tr><th>src<th>dst
1460 <tr><td>All<td>All
1461 </table>
1462<tr>
    <td rowspan="2">Floor
    <td rowspan="2" style="width:200px;"> Function to round the value down to the nearest integer.
    <td rowspan="2">
1466 <ul>
1467 <li>ANEURALNETWORKS_FLOOR
1468 </ul>
1469 <td>NEFloor
1470 <td>
1471 <ul>
1472 <li>All
1473 </ul>
1474 <td>
1475 <table>
1476 <tr><th>src<th>dst
1477 <tr><td>F32<td>F32
1478 <tr><td>F16<td>F16
1479 </table>
1480<tr>
1481 <td>CLFloor
1482 <td>
1483 <ul>
1484 <li>All
1485 </ul>
1486 <td>
1487 <table>
1488 <tr><th>src<th>dst
1489 <tr><td>F32<td>F32
1490 <tr><td>F16<td>F16
1491 </table>
1492<tr>
    <td rowspan="2">FullyConnectedLayer
1494 <td rowspan="2" style="width:200px;"> Function to perform a fully connected / dense layer.
1495 <td rowspan="2">
1496 <ul>
1497 <li>ANEURALNETWORKS_FULLY_CONNECTED
1498 </ul>
    <td>NEFullyConnectedLayer
    <td>
1501 <ul>
1502 <li>NHWC
1503 <li>NCHW
1504 </ul>
1505 <td>
1506 <table>
1507 <tr><th>src0<th>src1<th>src2<th>dst
1508 <tr><td>F16<td>F16<td>F16<td>F16
1509 <tr><td>F32<td>F32<td>F32<td>F32
1510 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1511 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1512 </table>
1513<tr>
    <td>CLFullyConnectedLayer
    <td>
1516 <ul>
1517 <li>NHWC
1518 <li>NCHW
1519 </ul>
1520 <td>
1521 <table>
1522 <tr><th>src0<th>src1<th>src2<th>dst
1523 <tr><td>F16<td>F16<td>F16<td>F16
1524 <tr><td>F32<td>F32<td>F32<td>F32
1525 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1526 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1527 </table>
1528<tr>
1529 <td rowspan="2">FuseBatchNormalization
1530 <td rowspan="2" style="width:200px;"> Function to fuse the batch normalization node to a preceding convolution node.
1531 <td rowspan="2">
1532 <ul>
1533 <li>n/a
1534 </ul>
1535 <td>NEFuseBatchNormalization
1536 <td>
1537 <ul>
1538 <li>NHWC
1539 <li>NCHW
1540 </ul>
1541 <td>
1542 <table>
1543 <tr><th>src<th>dst
1544 <tr><td>F32<td>F32
1545 <tr><td>F16<td>F16
1546 </table>
1547<tr>
1548 <td>CLFuseBatchNormalization
1549 <td>
1550 <ul>
1551 <li>NHWC
1552 <li>NCHW
1553 </ul>
1554 <td>
1555 <table>
1556 <tr><th>src<th>dst
1557 <tr><td>F32<td>F32
1558 <tr><td>F16<td>F16
1559 </table>
1560<tr>
1561 <td rowspan="2">Gather
1562 <td rowspan="2" style="width:200px;"> Performs the Gather operation along the chosen axis.
1563 <td rowspan="2">
1564 <ul>
1565 <li>ANEURALNETWORKS_GATHER
1566 </ul>
1567 <td>NEGather
1568 <td>
1569 <ul>
1570 <li>All
1571 </ul>
1572 <td>
1573 <table>
1574 <tr><th>src<th>dst
1575 <tr><td>All<td>All
1576 </table>
1577<tr>
1578 <td>CLGather
1579 <td>
1580 <ul>
1581 <li>All
1582 </ul>
1583 <td>
1584 <table>
1585 <tr><th>src<th>dst
1586 <tr><td>All<td>All
1587 </table>
1588<tr>
1589 <td rowspan="2">GEMM
1590 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1591 <td rowspan="2">
1592 <ul>
1593 <li>n/a
1594 </ul>
1595 <td>NEGEMM
1596 <td>
1597 <ul>
1598 <li>All
1599 </ul>
1600 <td>
1601 <table>
1602 <tr><th>src0<th>src1<th>src2<th>dst
1603 <tr><td>F32<td>F32<td>F32<td>F32
1604 <tr><td>F16<td>F16<td>F16<td>F16
1605 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1606 </table>
1607<tr>
    <td>CLGEMM
    <td>
1610 <ul>
1611 <li>All
1612 </ul>
1613 <td>
1614 <table>
1615 <tr><th>src0<th>src1<th>src2<th>dst
1616 <tr><td>F32<td>F32<td>F32<td>F32
1617 <tr><td>F16<td>F16<td>F16<td>F16
1618 </table>
1619<tr>
    <td rowspan="1">GEMMConv2d
    <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1622 <td rowspan="1">
1623 <ul>
1624 <li>ANEURALNETWORKS_CONV_2D
1625 </ul>
1626 <td>NEGEMMConv2d
1627 <td>
1628 <ul>
1629 <li>All
1630 </ul>
1631 <td>
1632 <table>
1633 <tr><th>src0<th>src1<th>src2<th>dst
1634 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1635 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1636 <tr><td>F16<td>F16<td>F16<td>F16
1637 <tr><td>F32<td>F32<td>F32<td>F32
1638 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1639 </table>
1640<tr>
    <td rowspan="2">GEMMConvolutionLayer
1642 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1643 <td rowspan="2">
1644 <ul>
1645 <li>ANEURALNETWORKS_CONV_2D
1646 </ul>
    <td>NEGEMMConvolutionLayer
    <td>
1649 <ul>
1650 <li>NHWC
1651 <li>NCHW
1652 </ul>
1653 <td>
1654 <table>
1655 <tr><th>src0<th>src1<th>src2<th>dst
1656 <tr><td>F16<td>F16<td>F16<td>F16
1657 <tr><td>F32<td>F32<td>F32<td>F32
1658 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1659 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1660 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1661 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1662 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1663 </table>
1664<tr>
    <td>CLGEMMConvolutionLayer
    <td>
1667 <ul>
1668 <li>NHWC
1669 <li>NCHW
1670 </ul>
1671 <td>
1672 <table>
1673 <tr><th>src0<th>src1<th>src2<th>dst
1674 <tr><td>F16<td>F16<td>F16<td>F16
1675 <tr><td>F32<td>F32<td>F32<td>F32
1676 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1677 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1678 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1679 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1680 </table>
1681<tr>
    <td rowspan="1">GEMMDeconvolutionLayer
1683 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1684 <td rowspan="1">
1685 <ul>
1686 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
1687 </ul>
1688 <td>CLGEMMDeconvolutionLayer
1689 <td>
1690 <ul>
1691 <li>NHWC
1692 </ul>
1693 <td>
1694 <table>
1695 <tr><th>src0<th>src1<th>src2<th>dst
1696 <tr><td>F16<td>F16<td>F16<td>F16
1697 <tr><td>F32<td>F32<td>F32<td>F32
1698 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1699 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1700 </table>
1701<tr>
    <td rowspan="2">GEMMLowpMatrixMultiplyCore
1703 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1704 <td rowspan="2">
1705 <ul>
1706 <li>n/a
1707 </ul>
1708 <td>NEGEMMLowpMatrixMultiplyCore
1709 <td>
1710 <ul>
1711 <li>NHWC
1712 <li>NCHW
1713 </ul>
1714 <td>
1715 <table>
1716 <tr><th>src0<th>src1<th>src2<th>dst
1717 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1718 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1719 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1720 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1721 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1722 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1723 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1724 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1725 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1726 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1727 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1728 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1729 </table>
1730<tr>
1731 <td>CLGEMMLowpMatrixMultiplyCore
1732 <td>
1733 <ul>
1734 <li>NHWC
1735 <li>NCHW
1736 </ul>
1737 <td>
1738 <table>
1739 <tr><th>src0<th>src1<th>src2<th>dst
1740 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1741 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1742 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1743 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1744 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1745 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1746 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1747 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1748 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1749 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1750 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1751 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1752 </table>
1753<tr>
    <td rowspan="2">GEMMLowpOutputStage
1755 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1756 <td rowspan="2">
1757 <ul>
1758 <li>n/a
1759 </ul>
1760 <td>NEGEMMLowpOutputStage
1761 <td>
1762 <ul>
1763 <li>All
1764 </ul>
1765 <td>
1766 <table>
1767 <tr><th>src0<th>src1<th>dst
1768 <tr><td>S32<td>S32<td>QASYMM8
1769 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1770 <tr><td>S32<td>S32<td>QSYMM16
1771 </table>
1772<tr>
1773 <td>CLGEMMLowpOutputStage
1774 <td>
1775 <ul>
1776 <li>All
1777 </ul>
1778 <td>
1779 <table>
1780 <tr><th>src0<th>src1<th>dst
1781 <tr><td>S32<td>S32<td>QASYMM8
1782 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1783 <tr><td>S32<td>S32<td>QSYMM16
1784 </table>
1785<tr>
    <td rowspan="2">GenerateProposalsLayer
1787 <td rowspan="2" style="width:200px;"> Function to generate proposals for a RPN (Region Proposal Network).
1788 <td rowspan="2">
1789 <ul>
1790 <li>ANEURALNETWORKS_GENERATE_PROPOSALS
1791 </ul>
1792 <td>NEGenerateProposalsLayer
1793 <td>
1794 <ul>
1795 <li>All
1796 </ul>
1797 <td>
1798 <table>
1799 <tr><th>src0<th>src1<th>src2<th>dst
1800 <tr><td>F16<td>F16<td>F16<td>F16
1801 <tr><td>F32<td>F32<td>F32<td>F32
1802 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1803 </table>
1804<tr>
1805 <td>CLGenerateProposalsLayer
1806 <td>
1807 <ul>
1808 <li>All
1809 </ul>
1810 <td>
1811 <table>
1812 <tr><th>src0<th>src1<th>src2<th>dst
1813 <tr><td>F16<td>F16<td>F16<td>F16
1814 <tr><td>F32<td>F32<td>F32<td>F32
1815 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1816 </table>
1817<tr>
1818 <td rowspan="2">InstanceNormalizationLayer
    <td rowspan="2" style="width:200px;"> Function to perform an instance normalization on a given axis.
1820 <td rowspan="2">
1821 <ul>
1822 <li>ANEURALNETWORKS_INSTANCE_NORMALIZATION
1823 </ul>
1824 <td>NEInstanceNormalizationLayer
1825 <td>
1826 <ul>
1827 <li>NHWC
1828 <li>NCHW
1829 </ul>
1830 <td>
1831 <table>
1832 <tr><th>src<th>dst
1833 <tr><td>F16<td>F16
1834 <tr><td>F32<td>F32
1835 </table>
1836<tr>
1837 <td>CLInstanceNormalizationLayer
1838 <td>
1839 <ul>
1840 <li>NHWC
1841 <li>NCHW
1842 </ul>
1843 <td>
1844 <table>
1845 <tr><th>src<th>dst
1846 <tr><td>F16<td>F16
1847 <tr><td>F32<td>F32
1848 </table>
1849<tr>
1850 <td rowspan="2">L2NormalizeLayer
    <td rowspan="2" style="width:200px;"> Function to perform an L2 normalization on a given axis.
1852 <td rowspan="2">
1853 <ul>
1854 <li>ANEURALNETWORKS_L2_NORMALIZATION
1855 </ul>
1856 <td>NEL2NormalizeLayer
1857 <td>
1858 <ul>
1859 <li>NHWC
1860 <li>NCHW
1861 </ul>
1862 <td>
1863 <table>
1864 <tr><th>src<th>dst
1865 <tr><td>F16<td>F16
1866 <tr><td>F32<td>F32
1867 </table>
1868<tr>
1869 <td>CLL2NormalizeLayer
1870 <td>
1871 <ul>
1872 <li>NHWC
1873 <li>NCHW
1874 </ul>
1875 <td>
1876 <table>
1877 <tr><th>src<th>dst
1878 <tr><td>F16<td>F16
1879 <tr><td>F32<td>F32
1880 </table>
1881<tr>
    <td rowspan="3">Logical
1883 <td rowspan="3" style="width:200px;"> Function to perform: - Logical AND - Logical OR - Logical NOT
1884 <td rowspan="3">
1885 <ul>
1886 <li>n/a
1887 </ul>
1888 <td>NELogicalAnd
1889 <td>
1890 <ul>
1891 <li>All
1892 </ul>
1893 <td>
1894 <table>
1895 <tr><th>src0<th>src1<th>dst
1896 <tr><td>U8<td>U8<td>U8
1897 </table>
1898<tr>
1899 <td>NELogicalOr
1900 <td>
1901 <ul>
1902 <li>All
1903 </ul>
1904 <td>
1905 <table>
1906 <tr><th>src0<th>src1<th>dst
1907 <tr><td>U8<td>U8<td>U8
1908 </table>
1909<tr>
1910 <td>NELogicalNot
1911 <td>
1912 <ul>
1913 <li>All
1914 </ul>
1915 <td>
1916 <table>
1917 <tr><th>src<th>dst
1918 <tr><td>U8<td>U8
1919 </table>
1920<tr>
1921 <td rowspan="1">LogicalAnd
1922 <td rowspan="1" style="width:200px;"> Function to perform Logical AND.
1923 <td rowspan="1">
1924 <ul>
1925 <li>n/a
1926 </ul>
1927 <td>CLLogicalAnd
1928 <td>
1929 <ul>
1930 <li>All
1931 </ul>
1932 <td>
1933 <table>
1934 <tr><th>src0<th>src1<th>dst
1935 <tr><td>U8<td>U8<td>U8
1936 </table>
1937<tr>
1938 <td rowspan="1">LogicalOr
1939 <td rowspan="1" style="width:200px;"> Function to perform Logical OR.
1940 <td rowspan="1">
1941 <ul>
1942 <li>n/a
1943 </ul>
1944 <td>CLLogicalOr
1945 <td>
1946 <ul>
1947 <li>All
1948 </ul>
1949 <td>
1950 <table>
1951 <tr><th>src0<th>src1<th>dst
1952 <tr><td>U8<td>U8<td>U8
1953 </table>
1954<tr>
1955 <td rowspan="1">LogicalNot
1956 <td rowspan="1" style="width:200px;"> Function to perform Logical NOT.
1957 <td rowspan="1">
1958 <ul>
1959 <li>n/a
1960 </ul>
1961 <td>CLLogicalNot
1962 <td>
1963 <ul>
1964 <li>All
1965 </ul>
1966 <td>
1967 <table>
1968 <tr><th>src<th>dst
1969 <tr><td>U8<td>U8
1970 </table>
1971<tr>
    <td rowspan="2">LSTMLayer
1973 <td rowspan="2" style="width:200px;"> Function to perform a single time step in a Long Short-Term Memory (LSTM) layer.
1974 <td rowspan="2">
1975 <ul>
1976 <li>ANEURALNETWORKS_LSTM
1977 </ul>
1978 <td>NELSTMLayer
1979 <td>
1980 <ul>
1981 <li>All
1982 </ul>
1983 <td>
1984 <table>
1985 <tr><th>src0 - src13<th>dst0 - dst3
1986 <tr><td>F16<td>F16
1987 <tr><td>F32<td>F32
1988 </table>
1989<tr>
1990 <td>CLLSTMLayer
1991 <td>
1992 <ul>
1993 <li>All
1994 </ul>
1995 <td>
1996 <table>
1997 <tr><th>src0 - src13<th>dst0 - dst3
1998 <tr><td>F16<td>F16
1999 <tr><td>F32<td>F32
2000 </table>
2001<tr>
2002 <td rowspan="2">LSTMLayerQuantized
2003 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory)
2004 <td rowspan="2">
2005 <ul>
2006 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2007 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2008 </ul>
2009 <td>NELSTMLayerQuantized
2010 <td>
2011 <ul>
2012 <li>All
2013 </ul>
2014 <td>
2015 <table>
2016 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2017 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2018 </table>
2019<tr>
2020 <td>CLLSTMLayerQuantized
2021 <td>
2022 <ul>
2023 <li>All
2024 </ul>
2025 <td>
2026 <table>
2027 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2028 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2029 </table>
2030<tr>
2031 <td rowspan="2">MaxUnpoolingLayer
2032 <td rowspan="2" style="width:200px;"> Function to perform MaxUnpooling.
2033 <td rowspan="2">
2034 <ul>
2035 <li>n/a
2036 </ul>
2037 <td>NEMaxUnpoolingLayer
2038 <td>
2039 <ul>
2040 <li>NHWC
2041 <li>NCHW
2042 </ul>
2043 <td>
2044 <table>
2045 <tr><th>src<th>dst
2046 <tr><td>QASYMM8<td>QASYMM8
2047 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2048 <tr><td>F16<td>F16
2049 <tr><td>F32<td>F32
2050 </table>
2051<tr>
2052 <td>CLMaxUnpoolingLayer
2053 <td>
2054 <ul>
2055 <li>NHWC
2056 <li>NCHW
2057 </ul>
2058 <td>
2059 <table>
2060 <tr><th>src<th>dst
2061 <tr><td>QASYMM8<td>QASYMM8
2062 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2063 <tr><td>F16<td>F16
2064 <tr><td>F32<td>F32
2065 </table>
2066<tr>
2067 <td rowspan="2">MeanStdDevNormalizationLayer
2068 <td rowspan="2" style="width:200px;"> Function to execute mean and standard deviation normalization.
2069 <td rowspan="2">
2070 <ul>
2071 <li>n/a
2072 </ul>
2073 <td>NEMeanStdDevNormalizationLayer
2074 <td>
2075 <ul>
2076 <li>NHWC
2077 <li>NCHW
2078 </ul>
2079 <td>
2080 <table>
2081 <tr><th>src<th>dst
2082 <tr><td>F32<td>F32
2083 <tr><td>F16<td>F16
2084 </table>
2085<tr>
2086 <td>CLMeanStdDevNormalizationLayer
2087 <td>
2088 <ul>
2089 <li>NHWC
2090 <li>NCHW
2091 </ul>
2092 <td>
2093 <table>
2094 <tr><th>src<th>dst
2095 <tr><td>F32<td>F32
2096 <tr><td>F16<td>F16
2097 </table>
2098<tr>
2099 <td rowspan="2">NormalizationLayer
2100 <td rowspan="2" style="width:200px;"> Function to compute normalization layer.
2101 <td rowspan="2">
2102 <ul>
2103 <li>ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION
2104 </ul>
2105 <td>NENormalizationLayer
2106 <td>
2107 <ul>
2108 <li>NHWC
2109 <li>NCHW
2110 </ul>
2111 <td>
2112 <table>
2113 <tr><th>src<th>dst
2114 <tr><td>F32<td>F32
2115 <tr><td>F16<td>F16
2116 </table>
2117<tr>
2118 <td>CLNormalizationLayer
2119 <td>
2120 <ul>
2121 <li>NHWC
2122 <li>NCHW
2123 </ul>
2124 <td>
2125 <table>
2126 <tr><th>src<th>dst
2127 <tr><td>F32<td>F32
2128 <tr><td>F16<td>F16
2129 </table>
2130<tr>
2131 <td rowspan="2">PadLayer
2132 <td rowspan="2" style="width:200px;"> Function to pad a tensor.
2133 <td rowspan="2">
2134 <ul>
2135 <li>ANEURALNETWORKS_PAD
2136 <li>ANEURALNETWORKS_PAD_V2
2137 </ul>
2138 <td>NEPadLayer
2139 <td>
2140 <ul>
2141 <li>NHWC
2142 <li>NCHW
2143 </ul>
2144 <td>
2145 <table>
2146 <tr><th>src<th>dst
2147 <tr><td>All<td>All
2148 </table>
2149<tr>
2150 <td>CLPadLayer
2151 <td>
2152 <ul>
2153 <li>NHWC
2154 <li>NCHW
2155 </ul>
2156 <td>
2157 <table>
2158 <tr><th>src<th>dst
2159 <tr><td>All<td>All
2160 </table>
2161<tr>
    <td rowspan="2">Permute
2163 <td rowspan="2" style="width:200px;"> Function to transpose an ND tensor.
2164 <td rowspan="2">
2165 <ul>
2166 <li>ANEURALNETWORKS_TRANSPOSE
2167 </ul>
2168 <td>NEPermute
2169 <td>
2170 <ul>
2171 <li>NHWC
2172 <li>NCHW
2173 </ul>
2174 <td>
2175 <table>
2176 <tr><th>src<th>dst
2177 <tr><td>All<td>All
2178 </table>
2179<tr>
2180 <td>CLPermute
2181 <td>
2182 <ul>
2183 <li>NHWC
2184 <li>NCHW
2185 </ul>
2186 <td>
2187 <table>
2188 <tr><th>src<th>dst
2189 <tr><td>All<td>All
2190 </table>
2191<tr>
2192 <td rowspan="2">PixelWiseMultiplication
    <td rowspan="2" style="width:200px;"> Function to perform a multiplication.
    <td rowspan="2">
2195 <ul>
2196 <li>ANEURALNETWORKS_MUL
2197 </ul>
2198 <td>NEPixelWiseMultiplication
2199 <td>
2200 <ul>
2201 <li>All
2202 </ul>
2203 <td>
2204 <table>
2205 <tr><th>src0<th>src1<th>dst
2206 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2207 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2208 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
2209 <tr><td>QSYMM16<td>QSYMM16<td>S32
2210 <tr><td>U8<td>U8<td>U8
2211 <tr><td>U8<td>U8<td>S16
2212 <tr><td>U8<td>S16<td>S16
2213 <tr><td>S16<td>U8<td>S16
2214 <tr><td>S16<td>S16<td>S16
2215 <tr><td>F16<td>F16<td>F16
2216 <tr><td>F32<td>S32<td>F32
2217 </table>
2218<tr>
2219 <td>CLPixelWiseMultiplication
2220 <td>
2221 <ul>
2222 <li>All
2223 </ul>
2224 <td>
2225 <table>
2226 <tr><th>src0<th>src1<th>dst
2227 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2228 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2229 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
2230 <tr><td>QSYMM16<td>QSYMM16<td>S32
2231 <tr><td>U8<td>U8<td>U8
2232 <tr><td>U8<td>U8<td>S16
2233 <tr><td>U8<td>S16<td>S16
2234 <tr><td>S16<td>U8<td>S16
2235 <tr><td>S16<td>S16<td>S16
2236 <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    <tr><td>S32<td>S32<td>S32
    </table>
2240<tr>
2241 <td rowspan="2">PoolingLayer
    <td rowspan="2" style="width:200px;"> Function to perform pooling with the specified pooling operation.
    <td rowspan="2">
2244 <ul>
2245 <li>ANEURALNETWORKS_AVERAGE_POOL_2D
2246 <li>ANEURALNETWORKS_L2_POOL_2D
2247 <li>ANEURALNETWORKS_MAX_POOL_2D
2248 </ul>
2249 <td>NEPoolingLayer
2250 <td>
2251 <ul>
2252 <li>NHWC
2253 <li>NCHW
2254 </ul>
2255 <td>
2256 <table>
2257 <tr><th>src<th>dst
2258 <tr><td>QASYMM8<td>QASYMM8
2259 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2260 <tr><td>F16<td>F16
2261 <tr><td>F32<td>F32
2262 </table>
2263<tr>
2264 <td>CLPoolingLayer
2265 <td>
2266 <ul>
2267 <li>NHWC
2268 <li>NCHW
2269 </ul>
2270 <td>
2271 <table>
2272 <tr><th>src<th>dst
2273 <tr><td>QASYMM8<td>QASYMM8
2274 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2275 <tr><td>F16<td>F16
2276 <tr><td>F32<td>F32
2277 </table>
2278<tr>
2279 <td rowspan="2">PReluLayer
2280 <td rowspan="2" style="width:200px;"> Function to compute the activation layer with the PRELU activation function.
2281 <td rowspan="2">
2282 <ul>
2283 <li>ANEURALNETWORKS_PRELU
2284 </ul>
2285 <td>NEPReluLayer
2286 <td>
2287 <ul>
2288 <li>All
2289 </ul>
2290 <td>
2291 <table>
2292 <tr><th>src<th>dst
2293 <tr><td>QASYMM8<td>QASYMM8
2294 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2295 <tr><td>F16<td>F16
2296 <tr><td>F32<td>F32
2297 </table>
2298<tr>
2299 <td>CLPReluLayer
2300 <td>
2301 <ul>
2302 <li>All
2303 </ul>
2304 <td>
2305 <table>
2306 <tr><th>src<th>dst
2307 <tr><td>QASYMM8<td>QASYMM8
2308 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2309 <tr><td>F16<td>F16
2310 <tr><td>F32<td>F32
2311 </table>
2312<tr>
2313  <td rowspan="2">PriorBoxLayer
2314  <td rowspan="2" style="width:200px;"> Function to compute prior boxes and clip them.
2315  <td rowspan="2">
2316 <ul>
2317 <li>n/a
2318 </ul>
2319 <td>NEPriorBoxLayer
2320 <td>
2321 <ul>
2322 <li>NHWC
2323 <li>NCHW
2324 </ul>
2325 <td>
2326 <table>
2327 <tr><th>src0<th>src1<th>dst
2328 <tr><td>F32<td>F32<td>F32
2329 </table>
2330<tr>
2331 <td>CLPriorBoxLayer
2332 <td>
2333 <ul>
2334 <li>NHWC
2335 <li>NCHW
2336 </ul>
2337 <td>
2338 <table>
2339 <tr><th>src0<th>src1<th>dst
2340 <tr><td>F32<td>F32<td>F32
2341 </table>
2342<tr>
2343 <td rowspan="2">QLSTMLayer
2344 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2345 <td rowspan="2">
2346 <ul>
2347 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2348 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2349 </ul>
2350 <td>NEQLSTMLayer
2351 <td>
2352 <ul>
2353 <li>All
2354 </ul>
2355 <td>
2356 <table>
2357     <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2358 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2359 </table>
2360<tr>
2361 <td>CLQLSTMLayer
2362 <td>
2363 <ul>
2364 <li>All
2365 </ul>
2366 <td>
2367 <table>
2368     <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2369 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2370 </table>
2371<tr>
2372  <td rowspan="2">QuantizationLayer
2373  <td rowspan="2" style="width:200px;"> Function to perform a quantization layer.
2374 <td rowspan="2">
2375 <ul>
2376 <li>ANEURALNETWORKS_QUANTIZE
2377 </ul>
2378 <td>NEQuantizationLayer
2379 <td>
2380 <ul>
2381 <li>All
2382 </ul>
2383 <td>
2384 <table>
2385 <tr><th>src<th>dst
2386     <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2387 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2388 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2389 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2390    </table>
2391<tr>
2392 <td>CLQuantizationLayer
2393 <td>
2394 <ul>
2395 <li>All
2396 </ul>
2397 <td>
2398 <table>
2399 <tr><th>src<th>dst
2400     <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2401 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2402 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2403 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2404 </table>
2405<tr>
2406 <td rowspan="2">Range
2407  <td rowspan="2" style="width:200px;"> Function to generate a sequence of numbers starting from START and extending by increments of 'STEP' up to, but not including, 'END'.
2408 <td rowspan="2">
2409 <ul>
2410 <li>n/a
2411 </ul>
2412 <td>NERange
2413 <td>
2414 <ul>
2415 <li>All
2416 </ul>
2417 <td>
2418 <table>
2419 <tr><th>dst
2420 <tr><td>U8
2421 <tr><td>S8
2422 <tr><td>U16
2423 <tr><td>S16
2424 <tr><td>U32
2425 <tr><td>S32
2426 <tr><td>F16
2427 <tr><td>F32
2428 </table>
2429<tr>
2430 <td>CLRange
2431 <td>
2432 <ul>
2433 <li>All
2434 </ul>
2435 <td>
2436 <table>
2437 <tr><th>dst
2438 <tr><td>U8
2439 <tr><td>S8
2440 <tr><td>QASYMM8
2441 <tr><td>U16
2442 <tr><td>S16
2443 <tr><td>U32
2444 <tr><td>S32
2445 <tr><td>F16
2446 <tr><td>F32
2447 </table>
2448<tr>
2449 <td rowspan="2">ReduceMean
2450  <td rowspan="2" style="width:200px;"> Function to perform a reduce mean operation.
2451  <td rowspan="2">
2452 <ul>
2453 <li>ANEURALNETWORKS_MEAN
2454 </ul>
2455 <td>NEReduceMean
2456 <td>
2457 <ul>
2458 <li>All
2459 </ul>
2460 <td>
2461 <table>
2462 <tr><th>src<th>dst
2463     <tr><td>QASYMM8<td>QASYMM8
2464     <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2465     <tr><td>F16<td>F16
2466 <tr><td>F32<td>F32
2467 </table>
2468<tr>
2469 <td>CLReduceMean
2470 <td>
2471 <ul>
2472 <li>All
2473 </ul>
2474 <td>
2475 <table>
2476 <tr><th>src<th>dst
2477 <tr><td>QASYMM8<td>QASYMM8
2478 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2479 <tr><td>F16<td>F16
2480 <tr><td>F32<td>F32
2481 </table>
2482<tr>
2483 <td rowspan="2">ReductionOperation
2484  <td rowspan="2" style="width:200px;"> Function to perform a reduction with one of the following operations: ARG_IDX_MAX (index of the max value), ARG_IDX_MIN (index of the min value), MEAN_SUM (mean of the sum), PROD (product), SUM_SQUARE (sum of squares), SUM, MIN, MAX.
2485  <td rowspan="2">
2486 <ul>
2487 <li>ANEURALNETWORKS_REDUCE_ALL
2488 <li>ANEURALNETWORKS_REDUCE_ANY
2489 <li>ANEURALNETWORKS_REDUCE_MAX
2490 <li>ANEURALNETWORKS_REDUCE_MIN
2491 <li>ANEURALNETWORKS_REDUCE_PROD
2492 <li>ANEURALNETWORKS_REDUCE_SUM
2493 </ul>
2494 <td>NEReductionOperation
2495 <td>
2496 <ul>
2497 <li>All
2498 </ul>
2499 <td>
2500 <table>
2501 <tr><th>src<th>dst
2502 <tr><td>QASYMM8<td>QASYMM8
2503 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2504 <tr><td>F16<td>F16
2505 <tr><td>F32<td>F32
2506 <tr><td>S32<td>S32
2507 </table>
2508<tr>
2509 <td>CLReductionOperation
2510 <td>
2511 <ul>
2512 <li>All
2513 </ul>
2514 <td>
2515 <table>
2516 <tr><th>src<th>dst
2517 <tr><td>QASYMM8<td>QASYMM8
2518 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2519 <tr><td>F16<td>F16
2520 <tr><td>F32<td>F32
2521 <tr><td>S32<td>S32
2522 </table>
2523<tr>
2524 <td rowspan="2">ReorgLayer
2525  <td rowspan="2" style="width:200px;"> Function to perform a reorganization of the input tensor into the output tensor (reorg layer).
2526 <td rowspan="2">
2527 <ul>
2528 <li>n/a
2529 </ul>
2530 <td>NEReorgLayer
2531 <td>
2532 <ul>
2533 <li>NHWC
2534 <li>NCHW
2535 </ul>
2536 <td>
2537 <table>
2538 <tr><th>src<th>dst
2539 <tr><td>All<td>All
2540 </table>
2541<tr>
2542 <td>CLReorgLayer
2543 <td>
2544 <ul>
2545 <li>NHWC
2546 <li>NCHW
2547 </ul>
2548 <td>
2549 <table>
2550 <tr><th>src<th>dst
2551 <tr><td>All<td>All
2552    </table>
2553<tr>
2554 <td rowspan="2">ReshapeLayer
2555  <td rowspan="2" style="width:200px;"> Function to reshape a tensor.
2556  <td rowspan="2">
2557 <ul>
2558 <li>ANEURALNETWORKS_RESHAPE
2559 <li>ANEURALNETWORKS_SQUEEZE
2560 </ul>
2561 <td>NEReshapeLayer
2562 <td>
2563 <ul>
2564 <li>All
2565 </ul>
2566 <td>
2567 <table>
2568 <tr><th>src<th>dst
2569 <tr><td>All<td>All
2570 </table>
2571<tr>
2572 <td>CLReshapeLayer
2573 <td>
2574 <ul>
2575 <li>All
2576 </ul>
2577 <td>
2578 <table>
2579 <tr><th>src<th>dst
2580 <tr><td>All<td>All
2581 </table>
2582<tr>
2583  <td rowspan="2">Reverse
2584  <td rowspan="2" style="width:200px;"> Function to reverse a tensor along the given axes.
2585 <td rowspan="2">
2586 <ul>
2587 <li>n/a
2588 </ul>
2589 <td>NEReverse
2590 <td>
2591 <ul>
2592 <li>All
2593 </ul>
2594 <td>
2595 <table>
2596 <tr><th>src0<th>src1<th>dst
2597 <tr><td>All<td>U32<td>All
2598 </table>
2599<tr>
2600 <td>CLReverse
2601 <td>
2602 <ul>
2603 <li>All
2604 </ul>
2605 <td>
2606 <table>
2607 <tr><th>src0<th>src1<th>dst
2608 <tr><td>All<td>U32<td>All
2609 </table>
2610<tr>
2611 <td rowspan="2">RNNLayer
2612  <td rowspan="2" style="width:200px;"> Function to perform a recurrent neural network (RNN) layer.
2613 <td rowspan="2">
2614 <ul>
2615 <li>ANEURALNETWORKS_RNN
2616 </ul>
2617 <td>NERNNLayer
2618 <td>
2619 <ul>
2620 <li>NHWC
2621 <li>NCHW
2622 </ul>
2623 <td>
2624 <table>
2625 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2626 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2627 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2628 </table>
2629<tr>
2630 <td>CLRNNLayer
2631 <td>
2632 <ul>
2633 <li>NHWC
2634 <li>NCHW
2635 </ul>
2636 <td>
2637 <table>
2638 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2639 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2640 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2641 </table>
2642<tr>
2643 <td rowspan="2">ROIAlignLayer
2644 <td rowspan="2" style="width:200px;"> Function to perform ROI alignment.
2645 <td rowspan="2">
2646 <ul>
2647 <li>ANEURALNETWORKS_ROI_ALIGN
2648 </ul>
2649 <td>NEROIAlignLayer
2650 <td>
2651 <ul>
2652 <li>All
2653 </ul>
2654 <td>
2655 <table>
2656 <tr><th>src0<th>src1<th>dst
2657 <tr><td>F16<td>F16<td>F16
2658 <tr><td>F32<td>F32<td>F32
2659 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2660 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2661 </table>
2662<tr>
2663 <td>CLROIAlignLayer
2664 <td>
2665 <ul>
2666 <li>All
2667 </ul>
2668 <td>
2669 <table>
2670 <tr><th>src0<th>src1<th>dst
2671 <tr><td>F16<td>F16<td>F16
2672 <tr><td>F32<td>F32<td>F32
2673 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2674 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2675 </table>
2676<tr>
2677 <td rowspan="2">ROIPoolingLayer
2678 <td rowspan="2" style="width:200px;"> Function to perform ROI pooling.
2679 <td rowspan="2">
2680 <ul>
2681 <li>ANEURALNETWORKS_ROI_POOLING
2682 </ul>
2683 <td>NEROIPoolingLayer
2684 <td>
2685 <ul>
2686 <li>All
2687 </ul>
2688 <td>
2689 <table>
2690 <tr><th>src0<th>src1<th>dst
2691 <tr><td>F32<td>U16<td>F32
2692 <tr><td>QASYMM8<td>U16<td>QASYMM8
2693 </table>
2694<tr>
2695 <td>CLROIPoolingLayer
2696 <td>
2697 <ul>
2698 <li>All
2699 </ul>
2700 <td>
2701 <table>
2702 <tr><th>src0<th>src1<th>dst
2703 <tr><td>F16<td>U16<td>F16
2704 <tr><td>F32<td>U16<td>F32
2705 <tr><td>QASYMM8<td>U16<td>QASYMM8
2706 </table>
2707<tr>
2708  <td rowspan="2">Scale
2709  <td rowspan="2" style="width:200px;"> Function to resize a tensor using one of the following interpolation methods: Bilinear, Nearest neighbor.
2710  <td rowspan="2">
2711 <ul>
2712 <li>ANEURALNETWORKS_RESIZE_BILINEAR
2713 <li>ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
2714 </ul>
2715 <td>NEScale
2716 <td>
2717 <ul>
2718 <li>NHWC
2719 <li>NCHW
2720 </ul>
2721 <td>
2722 <table>
2723 <tr><th>src<th>dst
2724 <tr><td>QASYMM8<td>QASYMM8
2725 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2726 <tr><td>F16<td>F16
2727 <tr><td>F32<td>F32
2728 <tr><td>U8<td>U8
2729 <tr><td>S16<td>S16
2730 </table>
2731<tr>
2732 <td>CLScale
2733 <td>
2734 <ul>
2735 <li>NHWC
2736 <li>NCHW
2737 </ul>
2738 <td>
2739 <table>
2740 <tr><th>src<th>dst
2741 <tr><td>QASYMM8<td>QASYMM8
2742 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2743 <tr><td>F16<td>F16
2744 <tr><td>F32<td>F32
2745 <tr><td>U8<td>U8
2746 <tr><td>S16<td>S16
2747 </table>
2748<tr>
2749  <td rowspan="2">Select
2750 <td rowspan="2" style="width:200px;"> Function to select values from 2 tensors depending on an input tensor of booleans.
2751 <td rowspan="2">
2752 <ul>
2753 <li>ANEURALNETWORKS_SELECT
2754 </ul>
2755 <td>NESelect
2756 <td>
2757 <ul>
2758 <li>All
2759 </ul>
2760 <td>
2761 <table>
2762 <tr><th>src0<th>src1<th>src2<th>dst
2763 <tr><td>U8<td>All<td>All<td>All
2764 </table>
2765<tr>
2766 <td>CLSelect
2767 <td>
2768 <ul>
2769 <li>All
2770 </ul>
2771 <td>
2772 <table>
2773 <tr><th>src0<th>src1<th>src2<th>dst
2774 <tr><td>U8<td>All<td>All<td>All
2775 </table>
2776<tr>
2777  <td rowspan="2">Slice
2778 <td rowspan="2" style="width:200px;"> Function to perform tensor slicing.
2779 <td rowspan="2">
2780 <ul>
2781 <li>ANEURALNETWORKS_SLICE
2782 </ul>
2783 <td>NESlice
2784 <td>
2785 <ul>
2786 <li>All
2787 </ul>
2788 <td>
2789 <table>
2790 <tr><th>src<th>dst
2791 <tr><td>All<td>All
2792 </table>
2793<tr>
2794 <td>CLSlice
2795 <td>
2796 <ul>
2797 <li>All
2798 </ul>
2799 <td>
2800 <table>
2801 <tr><th>src<th>dst
2802 <tr><td>All<td>All
2803 </table>
2804<tr>
2805  <td rowspan="2">SoftmaxLayer
2806 <td rowspan="2" style="width:200px;"> Function to compute a SoftmaxLayer and a Log SoftmaxLayer.
2807 <td rowspan="2">
2808 <ul>
2809 <li>ANEURALNETWORKS_LOG_SOFTMAX
2810 <li>ANEURALNETWORKS_SOFTMAX
2811 </ul>
2812 <td>NESoftmaxLayerGeneric
2813 <td>
2814 <ul>
2815 <li>All
2816 </ul>
2817 <td>
2818 <table>
2819 <tr><th>src<th>dst
2820 <tr><td>QASYMM8<td>QASYMM8
2821 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2822 <tr><td>F16<td>F16
2823 <tr><td>F32<td>F32
2824 </table>
2825<tr>
2826 <td>CLSoftmaxLayerGeneric
2827 <td>
2828 <ul>
2829 <li>All
2830 </ul>
2831 <td>
2832 <table>
2833 <tr><th>src<th>dst
2834 <tr><td>QASYMM8<td>QASYMM8
2835 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2836 <tr><td>F16<td>F16
2837 <tr><td>F32<td>F32
2838 </table>
2839<tr>
2840  <td rowspan="2">SpaceToBatchLayer
2841 <td rowspan="2" style="width:200px;"> Function to divide a tensor spatially.
2842 <td rowspan="2">
2843 <ul>
2844 <li>ANEURALNETWORKS_SPACE_TO_BATCH_ND
2845 </ul>
2846 <td>NESpaceToBatchLayer
2847 <td>
2848 <ul>
2849 <li>NHWC
2850 <li>NCHW
2851 </ul>
2852 <td>
2853 <table>
2854 <tr><th>src0<th>src1<th>src2<th>dst
2855 <tr><td>All<td>S32<td>S32<td>All
2856 </table>
2857<tr>
2858 <td>CLSpaceToBatchLayer
2859 <td>
2860 <ul>
2861 <li>NHWC
2862 <li>NCHW
2863 </ul>
2864 <td>
2865 <table>
2866 <tr><th>src0<th>src1<th>src2<th>dst
2867 <tr><td>All<td>S32<td>S32<td>All
2868 </table>
2869<tr>
2870 <td rowspan="2">SpaceToDepthLayer
2871 <td rowspan="2" style="width:200px;"> Function to rearrange blocks of spatial data into depth.
2872 <td rowspan="2">
2873 <ul>
2874 <li>ANEURALNETWORKS_SPACE_TO_DEPTH
2875 </ul>
2876 <td>NESpaceToDepthLayer
2877 <td>
2878 <ul>
2879 <li>NHWC
2880 <li>NCHW
2881 </ul>
2882 <td>
2883 <table>
2884 <tr><th>src<th>dst
2885 <tr><td>All<td>All
2886 </table>
2887<tr>
2888 <td>CLSpaceToDepthLayer
2889 <td>
2890 <ul>
2891 <li>NHWC
2892 <li>NCHW
2893 </ul>
2894 <td>
2895 <table>
2896 <tr><th>src<th>dst
2897 <tr><td>All<td>All
2898 </table>
2899<tr>
2900 <td rowspan="2">Split
2901 <td rowspan="2" style="width:200px;"> Function to split a tensor along a given axis.
2902 <td rowspan="2">
2903 <ul>
2904 <li>ANEURALNETWORKS_SPLIT
2905 </ul>
2906 <td>NESplit
2907 <td>
2908 <ul>
2909 <li>All
2910 </ul>
2911 <td>
2912 <table>
2913 <tr><th>src<th>dst
2914 <tr><td>All<td>All
2915 </table>
2916<tr>
2917 <td>CLSplit
2918 <td>
2919 <ul>
2920 <li>All
2921 </ul>
2922 <td>
2923 <table>
2924 <tr><th>src<th>dst
2925 <tr><td>All<td>All
2926 </table>
2927<tr>
2928 <td rowspan="2">StackLayer
2929 <td rowspan="2" style="width:200px;"> Function to stack tensors along an axis.
2930 <td rowspan="2">
2931 <ul>
2932 <li>n/a
2933 </ul>
2934 <td>NEStackLayer
2935 <td>
2936 <ul>
2937 <li>All
2938 </ul>
2939 <td>
2940 <table>
2941 <tr><th>src<th>dst
2942 <tr><td>All<td>All
2943 </table>
2944<tr>
2945 <td>CLStackLayer
2946 <td>
2947 <ul>
2948 <li>All
2949 </ul>
2950 <td>
2951 <table>
2952 <tr><th>src<th>dst
2953 <tr><td>All<td>All
2954 </table>
2955<tr>
2956  <td rowspan="2">StridedSlice
2957 <td rowspan="2" style="width:200px;"> Function to extract a strided slice of a tensor.
2958 <td rowspan="2">
2959 <ul>
2960 <li>ANEURALNETWORKS_STRIDED_SLICE
2961 </ul>
2962 <td>NEStridedSlice
2963 <td>
2964 <ul>
2965 <li>All
2966 </ul>
2967 <td>
2968 <table>
2969 <tr><th>src<th>dst
2970 <tr><td>All<td>All
2971 </table>
2972<tr>
2973 <td>CLStridedSlice
2974 <td>
2975 <ul>
2976 <li>All
2977 </ul>
2978 <td>
2979 <table>
2980 <tr><th>src<th>dst
2981 <tr><td>All<td>All
2982 </table>
2983<tr>
2984  <td rowspan="2">Tile
2985 <td rowspan="2" style="width:200px;"> Function to construct a tensor by tiling a given tensor.
2986 <td rowspan="2">
2987 <ul>
2988 <li>ANEURALNETWORKS_TILE
2989 </ul>
2990 <td>NETile
2991 <td>
2992 <ul>
2993 <li>All
2994 </ul>
2995 <td>
2996 <table>
2997 <tr><th>src<th>dst
2998 <tr><td>All<td>All
2999 </table>
3000<tr>
3001 <td>CLTile
3002 <td>
3003 <ul>
3004 <li>All
3005 </ul>
3006 <td>
3007 <table>
3008 <tr><th>src<th>dst
3009 <tr><td>All<td>All
3010 </table>
3011<tr>
3012  <td rowspan="2">Transpose
3013  <td rowspan="2" style="width:200px;"> Function to transpose a 2D tensor.
3014  <td rowspan="2">
3015 <ul>
3016 <li>ANEURALNETWORKS_TRANSPOSE
3017 </ul>
3018 <td>NETranspose
3019 <td>
3020 <ul>
3021 <li>All
3022 </ul>
3023 <td>
3024 <table>
3025 <tr><th>src<th>dst
3026 <tr><td>All<td>All
3027 </table>
3028<tr>
3029 <td>CLTranspose
3030 <td>
3031 <ul>
3032 <li>All
3033 </ul>
3034 <td>
3035 <table>
3036 <tr><th>src<th>dst
3037 <tr><td>All<td>All
3038 </table>
3039<tr>
3040 <td rowspan="2">Unstack
3041 <td rowspan="2" style="width:200px;"> Function to unpack a rank-R tensor into rank-(R-1) tensors.
3042 <td rowspan="2">
3043 <ul>
3044 <li>n/a
3045 </ul>
3046 <td>NEUnstack
3047 <td>
3048 <ul>
3049 <li>All
3050 </ul>
3051 <td>
3052 <table>
3053 <tr><th>src<th>dst
3054 <tr><td>All<td>All
3055 </table>
3056<tr>
3057 <td>CLUnstack
3058 <td>
3059 <ul>
3060 <li>All
3061 </ul>
3062 <td>
3063 <table>
3064 <tr><th>src<th>dst
3065 <tr><td>All<td>All
3066 </table>
3067<tr>
3068 <td rowspan="2">WinogradConvolutionLayer
3069  <td rowspan="2" style="width:200px;"> Function to perform a Winograd convolution.
3070 <td rowspan="2">
3071 <ul>
3072 <li>ANEURALNETWORKS_CONV_2D
3073 </ul>
3074 <td>NEWinogradConvolutionLayer
3075 <td>
3076 <ul>
3077 <li>NHWC
3078 <li>NCHW
3079 </ul>
3080 <td>
3081 <table>
3082 <tr><th>src0<th>src1<th>src2<th>dst
3083 <tr><td>F16<td>F16<td>F16<td>F16
3084 <tr><td>F32<td>F32<td>F32<td>F32
3085 </table>
3086<tr>
3087 <td>CLWinogradConvolutionLayer
3088 <td>
3089 <ul>
3090 <li>NHWC
3091 <li>NCHW
3092 </ul>
3093 <td>
3094 <table>
3095 <tr><th>src0<th>src1<th>src2<th>dst
3096 <tr><td>F16<td>F16<td>F16<td>F16
3097 <tr><td>F32<td>F32<td>F32<td>F32
3098 </table>
3099<tr>
3100 <td rowspan="1">WinogradInputTransform
3101  <td rowspan="1" style="width:200px;"> Function to perform a Winograd transform on the input tensor.
3102  <td rowspan="1">
3103 <ul>
3104 <li>n/a
3105 </ul>
3106</table>
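
All of the functions listed above follow the library's usual configure-allocate-run pattern. The snippet below is a minimal illustrative sketch of that pattern, assuming a CPU (Neon) build of Compute Library; it configures and runs NETranspose on a 2D F32 tensor whose shape is chosen arbitrarily for the example.

@code{.cpp}
#include "arm_compute/runtime/NEON/functions/NETranspose.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Illustrative shapes only: an 8x4 F32 source and its 4x8 transposed destination.
    Tensor src;
    Tensor dst;
    src.allocator()->init(TensorInfo(TensorShape(8U, 4U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(4U, 8U), 1, DataType::F32));

    // Configure the function first, then allocate the tensor backing memory.
    NETranspose transpose;
    transpose.configure(&src, &dst);
    src.allocator()->allocate();
    dst.allocator()->allocate();

    // Fill src here (e.g. via src.buffer() or a tensor iterator), then execute.
    transpose.run();

    return 0;
}
@endcode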
3107
3108*/
3109} // namespace