///
/// Copyright (c) 2021 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/**
@page operators_list Supported Operators

@tableofcontents

@section S9_1_operators_list Supported Operators

Compute Library supports the operators listed in the table below.

Compute Library supports a wide range of data types; detailed information can be found directly in the documentation of each kernel/function.
The main data types that the Machine Learning functions support are the following:
 <ul>
 <li>BFLOAT16: 16-bit non-standard brain floating point
 <li>QASYMM8: 8-bit unsigned asymmetric quantized
 <li>QASYMM8_SIGNED: 8-bit signed asymmetric quantized
 <li>QSYMM8_PER_CHANNEL: 8-bit signed symmetric quantized (used for the weights)
 <li>QSYMM8: 8-bit signed symmetric quantized
 <li>QSYMM16: 16-bit signed symmetric quantized
 <li>F32: 32-bit single precision floating point
 <li>F16: 16-bit half precision floating point
 <li>S32: 32-bit signed integer
 <li>U8: 8-bit unsigned char
 <li>All: Agnostic to any specific data type
 </ul>

Compute Library supports the following data layouts (fastest-changing dimension from right to left):
 <ul>
 <li>NHWC: The native layout of Compute Library that delivers the best performance where channels are in the fastest changing dimension
 <li>NCHW: Legacy layout where width is in the fastest changing dimension
 <li>All: Agnostic to any specific data layout
 </ul>
where N = batches, C = channels, H = height, W = width

<table>
<caption id="multi_row"></caption>
<tr>
  <th>Function
  <th>Description
  <th>Equivalent Android NNAPI Op
  <th>Backends
  <th>Data Layouts
  <th>Data Types
<tr>
  <td rowspan="2">ActivationLayer
  <td rowspan="2" style="width:200px;"> Function to simulate an activation layer with the specified activation function.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_ELU
   <li>ANEURALNETWORKS_HARD_SWISH
   <li>ANEURALNETWORKS_LOGISTIC
   <li>ANEURALNETWORKS_RELU
   <li>ANEURALNETWORKS_RELU1
   <li>ANEURALNETWORKS_RELU6
   <li>ANEURALNETWORKS_TANH
  </ul>
  <td>NEActivationLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>QSYMM16<td>QSYMM16
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
  <td>CLActivationLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>QSYMM16<td>QSYMM16
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
  <td rowspan="2">ArgMinMaxLayer
  <td rowspan="2" style="width:200px;"> Function to calculate the index of the minimum or maximum values in a tensor based on an axis.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_ARGMAX
   <li>ANEURALNETWORKS_ARGMIN
  </ul>
  <td>NEArgMinMaxLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8<td>U32, S32
  <tr><td>QASYMM8_SIGNED<td>U32, S32
  <tr><td>S32<td>U32, S32
  <tr><td>F16<td>U32, S32
  <tr><td>F32<td>U32, S32
  </table>
<tr>
  <td>CLArgMinMaxLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8<td>U32, S32
  <tr><td>QASYMM8_SIGNED<td>U32, S32
  <tr><td>S32<td>U32, S32
  <tr><td>F16<td>U32, S32
  <tr><td>F32<td>U32, S32
  </table>
<tr>
  <td rowspan="1">ArithmeticAddition
  <td rowspan="1" style="width:200px;"> Function to add 2 tensors.
  <td rowspan="1">
  <ul>
   <li>ANEURALNETWORKS_ADD
  </ul>
  <td>NEArithmeticAddition
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
  <tr><td>QSYMM16<td>QSYMM16<td>S32
  <tr><td>U8<td>U8<td>U8
  <tr><td>S16<td>S16<td>S16
  <tr><td>S32<td>S32<td>S32
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td rowspan="1">ArithmeticSubtraction
  <td rowspan="1" style="width:200px;"> Function to subtract 2 tensors.
  <td rowspan="1">
  <ul>
   <li>ANEURALNETWORKS_SUB
  </ul>
  <td>NEArithmeticSubtraction
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
  <tr><td>QSYMM16<td>QSYMM16<td>S32
  <tr><td>U8<td>U8<td>U8
  <tr><td>S16<td>S16<td>S16
  <tr><td>S32<td>S32<td>S32
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td rowspan="2">BatchNormalizationLayer
  <td rowspan="2" style="width:200px;"> Function to perform batch normalization.
  <td rowspan="2">
  <ul>
   <li>n/a
  </ul>
  <td>NEBatchNormalizationLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F32<td>F32
  <tr><td>F16<td>F16
  </table>
<tr>
  <td>CLBatchNormalizationLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F32<td>F32
  <tr><td>F16<td>F16
  </table>
<tr>
  <td rowspan="2">BatchToSpaceLayer
  <td rowspan="2" style="width:200px;"> Batch to space transformation.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_BATCH_TO_SPACE_ND
  </ul>
  <td>NEBatchToSpaceLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>All<td>S32<td>All
  </table>
<tr>
  <td>CLBatchToSpaceLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>All<td>S32<td>All
  </table>
<tr>
  <td rowspan="2">BitwiseAnd
  <td rowspan="2" style="width:200px;"> Function to perform bitwise AND between 2 tensors.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_LOGICAL_AND
  </ul>
  <td>NEBitwiseAnd
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>U8
  </table>
<tr>
  <td>CLBitwiseAnd
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>U8
  </table>
<tr>
  <td rowspan="2">BitwiseNot
  <td rowspan="2" style="width:200px;"> Function to perform bitwise NOT.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_LOGICAL_NOT
  </ul>
  <td>NEBitwiseNot
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>U8
  </table>
<tr>
  <td>CLBitwiseNot
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>U8
  </table>
<tr>
  <td rowspan="2">BitwiseOr
  <td rowspan="2" style="width:200px;"> Function to perform bitwise OR between 2 tensors.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_LOGICAL_OR
  </ul>
  <td>NEBitwiseOr
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>U8
  </table>
<tr>
  <td>CLBitwiseOr
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>U8
  </table>
<tr>
  <td rowspan="2">BitwiseXor
  <td rowspan="2" style="width:200px;"> Function to perform bitwise XOR between 2 tensors.
  <td rowspan="2">
  <ul>
   <li>n/a
  </ul>
  <td>NEBitwiseXor
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>U8
  </table>
<tr>
  <td>CLBitwiseXor
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>U8
  </table>
<tr>
  <td rowspan="2">BoundingBoxTransform
  <td rowspan="2" style="width:200px;"> Transform proposal bounding boxes to target bounding boxes using bounding box deltas.
  <td rowspan="2">
  <ul>
   <li>n/a
  </ul>
  <td>NEBoundingBoxTransform
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>CLBoundingBoxTransform
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td rowspan="2">Cast
  <td rowspan="2" style="width:200px;"> Function to cast a tensor.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_CAST
  </ul>
  <td>NECast
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8_SIGNED<td>S16, S32, F32, F16
  <tr><td>QASYMM8<td>U16, S16, S32, F32, F16
  <tr><td>U8<td>U16, S16, S32, F32, F16
  <tr><td>U16<td>U8, U32
  <tr><td>S16<td>QASYMM8_SIGNED, U8, S32
  <tr><td>F16<td>QASYMM8_SIGNED, QASYMM8, F32, S32, U8
  <tr><td>S32<td>QASYMM8_SIGNED, QASYMM8, F16, F32, U8
  <tr><td>F32<td>QASYMM8_SIGNED, QASYMM8, BFLOAT16, F16, S32, U8
  </table>
<tr>
  <td>CLCast
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
  <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
  <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
  <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
  <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
  <tr><td>F16<td>U8, S8, U16, S16, U32, F32
  <tr><td>F32<td>U8, S8, U16, S16, U32, F16
  </table>
<tr>
  <td rowspan="2">ChannelShuffleLayer
  <td rowspan="2" style="width:200px;"> Function to shuffle the channels of the input tensor.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_CHANNEL_SHUFFLE
  </ul>
  <td>NEChannelShuffleLayer
  <td>
  <ul>
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>All
  </table>
<tr>
  <td>CLChannelShuffleLayer
  <td>
  <ul>
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>All
  </table>
<tr>
  <td rowspan="1">Comparison
  <td rowspan="1" style="width:200px;"> Function to compare 2 tensors.
  <td rowspan="1">
  <ul>
   <li>ANEURALNETWORKS_EQUAL
   <li>ANEURALNETWORKS_GREATER
   <li>ANEURALNETWORKS_GREATER_EQUAL
   <li>ANEURALNETWORKS_LESS
   <li>ANEURALNETWORKS_LESS_EQUAL
   <li>ANEURALNETWORKS_NOT_EQUAL
  </ul>
  <td>CLComparison
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>All<td>All<td>U8
  </table>
<tr>
  <td rowspan="2">ConcatenateLayer
  <td rowspan="2" style="width:200px;"> Function to concatenate tensors along a given axis.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_CONCATENATION
  </ul>
  <td>NEConcatenateLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
  <td>CLConcatenateLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
  <td rowspan="2">ConvertFullyConnectedWeights
  <td rowspan="2" style="width:200px;"> Function to transpose the weights for the fully connected layer.
  <td rowspan="2">
  <ul>
   <li>n/a
  </ul>
  <td>NEConvertFullyConnectedWeights
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>All
  </table>
<tr>
  <td>CLConvertFullyConnectedWeights
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>All
  </table>
<tr>
  <td rowspan="2">ConvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to compute a convolution layer.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_CONV_2D
  </ul>
  <td>NEConvolutionLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>F16<td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32<td>F32
  <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
  <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
  <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
  </table>
<tr>
  <td>CLConvolutionLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>F16<td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32<td>F32
  <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
  <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
  <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
  </table>
<tr>
  <td rowspan="2">Copy
  <td rowspan="2" style="width:200px;"> Function to copy a tensor.
  <td rowspan="2">
  <ul>
   <li>n/a
  </ul>
  <td>NECopy
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>All
  </table>
<tr>
  <td>CLCopy
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>All
  </table>
<tr>
  <td rowspan="1">Crop
  <td rowspan="1" style="width:200px;"> Performs a copy of the input tensor to the output tensor.
  <td rowspan="1">
  <ul>
   <li>n/a
  </ul>
  <td>CLCrop
  <td>
  <ul>
   <li>NHWC
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>F32
  </table>
<tr>
  <td rowspan="2">CropResize
  <td rowspan="2" style="width:200px;"> Function to perform cropping and resizing.
  <td rowspan="2">
  <ul>
   <li>n/a
  </ul>
  <td>NECropResize
  <td>
  <ul>
   <li>NHWC
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>All<td>F32<td>F32<td>F32
  </table>
<tr>
  <td>CLCropResize
  <td>
  <ul>
   <li>NHWC
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>All<td>F32<td>F32<td>F32
  </table>
<tr>
  <td rowspan="2">DeconvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to compute a deconvolution or transpose convolution.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
  </ul>
  <td>NEDeconvolutionLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>F16<td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32<td>F32
  <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
  <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
  <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
  </table>
<tr>
  <td>CLDeconvolutionLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>F16<td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32<td>F32
  <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
  <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
  <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
  </table>
<tr>
  <td rowspan="1">DeconvolutionLayerUpsample
  <td rowspan="1" style="width:200px;"> Function to execute deconvolution upsample on OpenCL.
  <td rowspan="1">
  <ul>
   <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
  </ul>
  <td>CLDeconvolutionLayerUpsample
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>All
  </table>
<tr>
  <td rowspan="2">DepthConvertLayer
  <td rowspan="2" style="width:200px;"> Performs a down-scaling depth conversion.
  <td rowspan="2">
  <ul>
   <li>n/a
  </ul>
  <td>NEDepthConvertLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8<td>F16, F32
  <tr><td>U8<td>U16, S16, S32
  <tr><td>U16<td>U8, U32
  <tr><td>S16<td>U8, S32
  <tr><td>BFLOAT16<td>F32
  <tr><td>F16<td>QASYMM8, F32
  <tr><td>F32<td>QASYMM8, F16, BFLOAT16
  </table>
<tr>
  <td>CLDepthConvertLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
  <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
  <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
  <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
  <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
  <tr><td>F16<td>U8, S8, U16, S16, U32, F32
  <tr><td>F32<td>U8, S8, U16, S16, U32, F16
  </table>
<tr>
  <td rowspan="2">DepthToSpaceLayer
  <td rowspan="2" style="width:200px;"> Depth to Space transformation.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_DEPTH_TO_SPACE
  </ul>
  <td>NEDepthToSpaceLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>All
  </table>
<tr>
  <td>CLDepthToSpaceLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>All<td>All
  </table>
<tr>
  <td rowspan="2">DepthwiseConvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to perform depthwise separable convolution.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_DEPTHWISE_CONV_2D
  </ul>
  <td>NEDepthwiseConvolutionLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>F16<td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32<td>F32
  <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
  <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
  <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
  </table>
<tr>
  <td>CLDepthwiseConvolutionLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>F16<td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32<td>F32
  <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
  <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
  <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
  </table>
<tr>
  <td rowspan="2">DequantizationLayer
  <td rowspan="2" style="width:200px;"> Function to dequantize the values in a tensor.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_DEQUANTIZE
  </ul>
  <td>NEDequantizationLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8<td>F16, F32
  <tr><td>QASYMM8_SIGNED<td>F16, F32
  <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
  <tr><td>QSYMM8<td>F16, F32
  <tr><td>QSYMM16<td>F16, F32
  </table>
<tr>
  <td>CLDequantizationLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>QASYMM8<td>F16, F32
  <tr><td>QASYMM8_SIGNED<td>F16, F32
  <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
  <tr><td>QSYMM8<td>F16, F32
  <tr><td>QSYMM16<td>F16, F32
  </table>
<tr>
  <td rowspan="1">DetectionPostProcessLayer
  <td rowspan="1" style="width:200px;"> Function to generate the detection output based on center-size encoded boxes, class predictions and anchors by performing non-maximum suppression (NMS).
  <td rowspan="1">
  <ul>
   <li>ANEURALNETWORKS_DETECTION_POSTPROCESSING
  </ul>
  <td>NEDetectionPostProcessLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0 - src2<th>dst0 - dst3
  <tr><td>QASYMM8<td>F32
  <tr><td>QASYMM8_SIGNED<td>F32
  <tr><td>F32<td>F32
  </table>
<tr>
  <td rowspan="2">DirectConvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to compute direct convolution.
  <td rowspan="2">
  <ul>
   <li>ANEURALNETWORKS_CONV_2D
  </ul>
  <td>NEDirectConvolutionLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>F16<td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32<td>F32
  </table>
<tr>
  <td>CLDirectConvolutionLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>F16<td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32<td>F32
  <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
  </table>
<tr>
  <td rowspan="1">DirectDeconvolutionLayer
  <td rowspan="1" style="width:200px;"> Function to run the deconvolution layer.
  <td rowspan="1">
  <ul>
   <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
  </ul>
  <td>CLDirectDeconvolutionLayer
  <td>
  <ul>
   <li>NHWC
   <li>NCHW
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>src2<th>dst
  <tr><td>F16<td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32<td>F32
  <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
  <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
  </table>
<tr>
  <td rowspan="13">ElementwiseOperations
  <td rowspan="13" style="width:200px;"> Function to perform elementwise operations. On Cpu: Div, Max, Min, Pow, SquaredDiff and Comparisons (Equal, greater, greater_equal, less, less_equal, not_equal). On CL: Add, Sub, Div, Max, Min, Pow and SquaredDiff.
  <td rowspan="13">
  <ul>
   <li>ANEURALNETWORKS_MAXIMUM
   <li>ANEURALNETWORKS_MINIMUM
   <li>ANEURALNETWORKS_POW
   <li>ANEURALNETWORKS_DIV
   <li>ANEURALNETWORKS_ADD
   <li>ANEURALNETWORKS_SUB
   <li>ANEURALNETWORKS_EQUAL
   <li>ANEURALNETWORKS_GREATER
   <li>ANEURALNETWORKS_GREATER_EQUAL
   <li>ANEURALNETWORKS_LESS
   <li>ANEURALNETWORKS_LESS_EQUAL
   <li>ANEURALNETWORKS_NOT_EQUAL
  </ul>
  <td>NEElementwiseMax
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>S32<td>S32<td>S32
  <tr><td>S16<td>S16<td>S16
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>NEElementwiseMin
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>S32<td>S32<td>S32
  <tr><td>S16<td>S16<td>S16
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>NEElementwiseSquaredDiff
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>S32<td>S32<td>S32
  <tr><td>S16<td>S16<td>S16
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>NEElementwiseDivision
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>NEElementwisePower
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>NEElementwiseComparison
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>U8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>U8
  <tr><td>S32<td>S32<td>U8
  <tr><td>U8<td>U8<td>U8
  <tr><td>S16<td>S16<td>U8
  <tr><td>F16<td>F16<td>U8
  <tr><td>F32<td>F32<td>U8
  </table>
<tr>
  <td>CLArithmeticAddition
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
  <tr><td>U8<td>U8<td>U8
  <tr><td>U8<td>U8<td>S16
  <tr><td>U8<td>S16<td>S16
  <tr><td>S16<td>U8<td>S16
  <tr><td>S16<td>S16<td>S16
  <tr><td>S32<td>S32<td>S32
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>CLArithmeticSubtraction
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
  <tr><td>U8<td>U8<td>U8
  <tr><td>U8<td>U8<td>S16
  <tr><td>U8<td>S16<td>S16
  <tr><td>S16<td>U8<td>S16
  <tr><td>S16<td>S16<td>S16
  <tr><td>S32<td>S32<td>S32
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>CLArithmeticDivision
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>CLElementwiseMax
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
  <tr><td>U8<td>U8<td>U8
  <tr><td>S16<td>S16<td>S16
  <tr><td>S32<td>S32<td>S32
  <tr><td>U32<td>U32<td>U32
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>CLElementwiseMin
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
  <tr><td>U8<td>U8<td>U8
  <tr><td>S16<td>S16<td>S16
  <tr><td>S32<td>S32<td>S32
  <tr><td>U32<td>U32<td>U32
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>CLElementwiseSquaredDiff
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
  <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
  <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
  <tr><td>U8<td>U8<td>U8
  <tr><td>S16<td>S16<td>S16
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td>CLElementwisePower
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src0<th>src1<th>dst
  <tr><td>F16<td>F16<td>F16
  <tr><td>F32<td>F32<td>F32
  </table>
<tr>
  <td rowspan="8">ElementwiseUnaryLayer
  <td rowspan="8" style="width:200px;"> Function to perform unary elementwise operations: Rsqrt, Exp, Neg, Log, Abs, Round and Sin.
  <td rowspan="8">
  <ul>
   <li>ANEURALNETWORKS_ABS
   <li>ANEURALNETWORKS_EXP
   <li>ANEURALNETWORKS_LOG
   <li>ANEURALNETWORKS_NEG
   <li>ANEURALNETWORKS_RSQRT
   <li>ANEURALNETWORKS_SIN
  </ul>
  <td>NEElementwiseUnaryLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  <tr><td>S32<td>S32
  </table>
<tr>
  <td>CLRsqrtLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
  <td>CLExpLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
  <td>CLNegLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  <tr><td>S32<td>S32
  </table>
<tr>
  <td>CLSinLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
  <td>CLLogLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
  <td>CLAbsLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
  <td>CLRoundLayer
  <td>
  <ul>
   <li>All
  </ul>
  <td>
  <table>
  <tr><th>src<th>dst
  <tr><td>F16<td>F16
  <tr><td>F32<td>F32
  </table>
<tr>
1290 <td rowspan="2">FFT1D
1291 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 1D.
1292 <td rowspan="2">
1293 <ul>
1294 <li>n/a
1295 </ul>
1296 <td>NEFFT1D
1297 <td>
1298 <ul>
1299 <li>All
1300 </ul>
1301 <td>
1302 <table>
1303 <tr><th>src<th>dst
1304 <tr><td>F32<td>F32
1305 </table>
1306<tr>
1307 <td>CLFFT1D
1308 <td>
1309 <ul>
1310 <li>All
1311 </ul>
1312 <td>
1313 <table>
1314 <tr><th>src<th>dst
1315 <tr><td>F32<td>F32
1316 <tr><td>F16<td>F16
1317 </table>
1318<tr>
1319 <td rowspan="2">FFT2D
1320 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 2D.
1321 <td rowspan="2">
1322 <ul>
1323 <li>n/a
1324 </ul>
1325 <td>NEFFT2D
1326 <td>
1327 <ul>
1328 <li>All
1329 </ul>
1330 <td>
1331 <table>
1332 <tr><th>src<th>dst
1333 <tr><td>F32<td>F32
1334 </table>
1335<tr>
1336 <td>CLFFT2D
1337 <td>
1338 <ul>
1339 <li>All
1340 </ul>
1341 <td>
1342 <table>
1343 <tr><th>src<th>dst
1344 <tr><td>F32<td>F32
1345 <tr><td>F16<td>F16
1346 </table>
1347<tr>
1348 <td rowspan="2">FFTConvolutionLayer
1349 <td rowspan="2" style="width:200px;"> Fast Fourier Transform Convolution.
1350 <td rowspan="2">
1351 <ul>
1352 <li>ANEURALNETWORKS_CONV_2D
1353 </ul>
1354 <td>NEFFTConvolutionLayer
1355 <td>
1356 <ul>
1357 <li>All
1358 </ul>
1359 <td>
1360 <table>
1361 <tr><th>src<th>dst
1362 <tr><td>F32<td>F32
1363 </table>
1364<tr>
1365 <td>CLFFTConvolutionLayer
1366 <td>
1367 <ul>
1368 <li>All
1369 </ul>
1370 <td>
1371 <table>
1372 <tr><th>src<th>dst
1373 <tr><td>F32<td>F32
1374 <tr><td>F16<td>F16
1375 </table>
1376<tr>
1377 <td rowspan="2">Fill
1378 <td rowspan="2" style="width:200px;"> Set the values of a tensor with a given value.
1379 <td rowspan="2">
1380 <ul>
1381 <li>ANEURALNETWORKS_FILL
1382 </ul>
1383 <td>NEFill
1384 <td>
1385 <ul>
1386 <li>All
1387 </ul>
1388 <td>
1389 <table>
1390 <tr><th>src<th>dst
1391 <tr><td>All<td>All
1392 </table>
1393<tr>
1394 <td>CLFill
1395 <td>
1396 <ul>
1397 <li>All
1398 </ul>
1399 <td>
1400 <table>
1401 <tr><th>src<th>dst
1402 <tr><td>All<td>All
1403 </table>
1404<tr>
1405 <td rowspan="2">FillBorder
1406 <td rowspan="2" style="width:200px;"> Function to fill the borders within the XY-planes.
1407 <td rowspan="2">
1408 <ul>
1409 <li>n/a
1410 </ul>
1411 <td>NEFillBorder
1412 <td>
1413 <ul>
1414 <li>All
1415 </ul>
1416 <td>
1417 <table>
1418 <tr><th>src<th>dst
1419 <tr><td>All<td>All
1420 </table>
1421<tr>
1422 <td>CLFillBorder
1423 <td>
1424 <ul>
1425 <li>All
1426 </ul>
1427 <td>
1428 <table>
1429 <tr><th>src<th>dst
1430 <tr><td>All<td>All
1431 </table>
1432<tr>
1433 <td rowspan="2">FlattenLayer
1434 <td rowspan="2" style="width:200px;"> Reshape a tensor to be 1D.
1435 <td rowspan="2">
1436 <ul>
1437 <li>ANEURALNETWORKS_RESHAPE
1438 </ul>
1439 <td>NEFlattenLayer
1440 <td>
1441 <ul>
1442 <li>All
1443 </ul>
1444 <td>
1445 <table>
1446 <tr><th>src<th>dst
1447 <tr><td>All<td>All
1448 </table>
1449<tr>
1450 <td>CLFlattenLayer
1451 <td>
1452 <ul>
1453 <li>All
1454 </ul>
1455 <td>
1456 <table>
1457 <tr><th>src<th>dst
1458 <tr><td>All<td>All
1459 </table>
1460<tr>
1461 <td rowspan="2">Floor
1462 <td rowspan="2" style="width:200px;"> Round the value down to the nearest integer.
1463 <td rowspan="2">
1464 <ul>
1465 <li>ANEURALNETWORKS_FLOOR
1466 </ul>
1467 <td>NEFloor
1468 <td>
1469 <ul>
1470 <li>All
1471 </ul>
1472 <td>
1473 <table>
1474 <tr><th>src<th>dst
1475 <tr><td>F32<td>F32
1476 <tr><td>F16<td>F16
1477 </table>
1478<tr>
1479 <td>CLFloor
1480 <td>
1481 <ul>
1482 <li>All
1483 </ul>
1484 <td>
1485 <table>
1486 <tr><th>src<th>dst
1487 <tr><td>F32<td>F32
1488 <tr><td>F16<td>F16
1489 </table>
1490<tr>
1491 <td rowspan="2">FullyConnectedLayer
1492 <td rowspan="2" style="width:200px;"> Function to perform a fully connected / dense layer.
1493 <td rowspan="2">
1494 <ul>
1495 <li>ANEURALNETWORKS_FULLY_CONNECTED
1496 </ul>
1497 <td>NEFullyConnectedLayer
1498 <td>
1499 <ul>
1500 <li>NHWC
1501 <li>NCHW
1502 </ul>
1503 <td>
1504 <table>
1505 <tr><th>src0<th>src1<th>src2<th>dst
1506 <tr><td>F16<td>F16<td>F16<td>F16
1507 <tr><td>F32<td>F32<td>F32<td>F32
1508 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1509 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1510 </table>
1511<tr>
1512 <td>CLFullyConnectedLayer
1513 <td>
1514 <ul>
1515 <li>NHWC
1516 <li>NCHW
1517 </ul>
1518 <td>
1519 <table>
1520 <tr><th>src0<th>src1<th>src2<th>dst
1521 <tr><td>F16<td>F16<td>F16<td>F16
1522 <tr><td>F32<td>F32<td>F32<td>F32
1523 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1524 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1525 </table>
1526<tr>
1527 <td rowspan="2">FuseBatchNormalization
1528 <td rowspan="2" style="width:200px;"> Function to fuse the batch normalization node to a preceding convolution node.
1529 <td rowspan="2">
1530 <ul>
1531 <li>n/a
1532 </ul>
1533 <td>NEFuseBatchNormalization
1534 <td>
1535 <ul>
1536 <li>NHWC
1537 <li>NCHW
1538 </ul>
1539 <td>
1540 <table>
1541 <tr><th>src<th>dst
1542 <tr><td>F32<td>F32
1543 <tr><td>F16<td>F16
1544 </table>
1545<tr>
1546 <td>CLFuseBatchNormalization
1547 <td>
1548 <ul>
1549 <li>NHWC
1550 <li>NCHW
1551 </ul>
1552 <td>
1553 <table>
1554 <tr><th>src<th>dst
1555 <tr><td>F32<td>F32
1556 <tr><td>F16<td>F16
1557 </table>
1558<tr>
1559 <td rowspan="2">Gather
1560 <td rowspan="2" style="width:200px;"> Performs the Gather operation along the chosen axis.
1561 <td rowspan="2">
1562 <ul>
1563 <li>ANEURALNETWORKS_GATHER
1564 </ul>
1565 <td>NEGather
1566 <td>
1567 <ul>
1568 <li>All
1569 </ul>
1570 <td>
1571 <table>
1572 <tr><th>src<th>dst
1573 <tr><td>All<td>All
1574 </table>
1575<tr>
1576 <td>CLGather
1577 <td>
1578 <ul>
1579 <li>All
1580 </ul>
1581 <td>
1582 <table>
1583 <tr><th>src<th>dst
1584 <tr><td>All<td>All
1585 </table>
1586<tr>
1587 <td rowspan="2">GEMM
1588 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1589 <td rowspan="2">
1590 <ul>
1591 <li>n/a
1592 </ul>
1593 <td>NEGEMM
1594 <td>
1595 <ul>
1596 <li>All
1597 </ul>
1598 <td>
1599 <table>
1600 <tr><th>src0<th>src1<th>src2<th>dst
1601 <tr><td>F32<td>F32<td>F32<td>F32
1602 <tr><td>F16<td>F16<td>F16<td>F16
1603 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1604 </table>
1605<tr>
1606 <td>CLGEMM
1607 <td>
1608 <ul>
1609 <li>All
1610 </ul>
1611 <td>
1612 <table>
1613 <tr><th>src0<th>src1<th>src2<th>dst
1614 <tr><td>F32<td>F32<td>F32<td>F32
1615 <tr><td>F16<td>F16<td>F16<td>F16
1616 </table>
1617<tr>
1618 <td rowspan="1">GEMMConv2d
1619 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1620 <td rowspan="1">
1621 <ul>
1622 <li>ANEURALNETWORKS_CONV_2D
1623 </ul>
1624 <td>NEGEMMConv2d
1625 <td>
1626 <ul>
1627 <li>All
1628 </ul>
1629 <td>
1630 <table>
1631 <tr><th>src0<th>src1<th>src2<th>dst
1632 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1633 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1634 <tr><td>F16<td>F16<td>F16<td>F16
1635 <tr><td>F32<td>F32<td>F32<td>F32
1636 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1637 </table>
1638<tr>
1639 <td rowspan="2">GEMMConvolutionLayer
1640 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1641 <td rowspan="2">
1642 <ul>
1643 <li>ANEURALNETWORKS_CONV_2D
1644 </ul>
1645 <td>NEGEMMConvolutionLayer
1646 <td>
1647 <ul>
1648 <li>NHWC
1649 <li>NCHW
1650 </ul>
1651 <td>
1652 <table>
1653 <tr><th>src0<th>src1<th>src2<th>dst
1654 <tr><td>F16<td>F16<td>F16<td>F16
1655 <tr><td>F32<td>F32<td>F32<td>F32
1656 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1657 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1658 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1659 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1660 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1661 </table>
1662<tr>
1663 <td>CLGEMMConvolutionLayer
1664 <td>
1665 <ul>
1666 <li>NHWC
1667 <li>NCHW
1668 </ul>
1669 <td>
1670 <table>
1671 <tr><th>src0<th>src1<th>src2<th>dst
1672 <tr><td>F16<td>F16<td>F16<td>F16
1673 <tr><td>F32<td>F32<td>F32<td>F32
1674 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1675 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1676 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1677 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1678 </table>
1679<tr>
1680 <td rowspan="1">GEMMDeconvolutionLayer
1681 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1682 <td rowspan="1">
1683 <ul>
1684 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
1685 </ul>
1686 <td>CLGEMMDeconvolutionLayer
1687 <td>
1688 <ul>
1689 <li>NHWC
1690 </ul>
1691 <td>
1692 <table>
1693 <tr><th>src0<th>src1<th>src2<th>dst
1694 <tr><td>F16<td>F16<td>F16<td>F16
1695 <tr><td>F32<td>F32<td>F32<td>F32
1696 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1697 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1698 </table>
1699<tr>
1700 <td rowspan="2">GEMMLowpMatrixMultiplyCore
1701 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1702 <td rowspan="2">
1703 <ul>
1704 <li>n/a
1705 </ul>
1706 <td>NEGEMMLowpMatrixMultiplyCore
1707 <td>
1708 <ul>
1709 <li>NHWC
1710 <li>NCHW
1711 </ul>
1712 <td>
1713 <table>
1714 <tr><th>src0<th>src1<th>src2<th>dst
1715 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1716 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1717 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1718 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1719 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1720 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1721 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1722 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1723 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1724 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1725 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1726 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1727 </table>
1728<tr>
1729 <td>CLGEMMLowpMatrixMultiplyCore
1730 <td>
1731 <ul>
1732 <li>NHWC
1733 <li>NCHW
1734 </ul>
1735 <td>
1736 <table>
1737 <tr><th>src0<th>src1<th>src2<th>dst
1738 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1739 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1740 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1741 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1742 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1743 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1744 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1745 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1746 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1747 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1748 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1749 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1750 </table>
1751<tr>
1752 <td rowspan="2">GEMMLowpOutputStage
1753 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1754 <td rowspan="2">
1755 <ul>
1756 <li>n/a
1757 </ul>
1758 <td>NEGEMMLowpOutputStage
1759 <td>
1760 <ul>
1761 <li>All
1762 </ul>
1763 <td>
1764 <table>
1765 <tr><th>src0<th>src1<th>dst
1766 <tr><td>S32<td>S32<td>QASYMM8
1767 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1768 <tr><td>S32<td>S32<td>QSYMM16
1769 </table>
1770<tr>
1771 <td>CLGEMMLowpOutputStage
1772 <td>
1773 <ul>
1774 <li>All
1775 </ul>
1776 <td>
1777 <table>
1778 <tr><th>src0<th>src1<th>dst
1779 <tr><td>S32<td>S32<td>QASYMM8
1780 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1781 <tr><td>S32<td>S32<td>QSYMM16
1782 </table>
1783<tr>
1784 <td rowspan="2">GenerateProposalsLayer
1785 <td rowspan="2" style="width:200px;"> Function to generate proposals for a RPN (Region Proposal Network).
1786 <td rowspan="2">
1787 <ul>
1788 <li>ANEURALNETWORKS_GENERATE_PROPOSALS
1789 </ul>
1790 <td>NEGenerateProposalsLayer
1791 <td>
1792 <ul>
1793 <li>All
1794 </ul>
1795 <td>
1796 <table>
1797 <tr><th>src0<th>src1<th>src2<th>dst
1798 <tr><td>F16<td>F16<td>F16<td>F16
1799 <tr><td>F32<td>F32<td>F32<td>F32
1800 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1801 </table>
1802<tr>
1803 <td>CLGenerateProposalsLayer
1804 <td>
1805 <ul>
1806 <li>All
1807 </ul>
1808 <td>
1809 <table>
1810 <tr><th>src0<th>src1<th>src2<th>dst
1811 <tr><td>F16<td>F16<td>F16<td>F16
1812 <tr><td>F32<td>F32<td>F32<td>F32
1813 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1814 </table>
1815<tr>
1816 <td rowspan="2">InstanceNormalizationLayer
1817 <td rowspan="2" style="width:200px;"> Function to perform an instance normalization on a given axis.
1818 <td rowspan="2">
1819 <ul>
1820 <li>ANEURALNETWORKS_INSTANCE_NORMALIZATION
1821 </ul>
1822 <td>NEInstanceNormalizationLayer
1823 <td>
1824 <ul>
1825 <li>NHWC
1826 <li>NCHW
1827 </ul>
1828 <td>
1829 <table>
1830 <tr><th>src<th>dst
1831 <tr><td>F16<td>F16
1832 <tr><td>F32<td>F32
1833 </table>
1834<tr>
1835 <td>CLInstanceNormalizationLayer
1836 <td>
1837 <ul>
1838 <li>NHWC
1839 <li>NCHW
1840 </ul>
1841 <td>
1842 <table>
1843 <tr><th>src<th>dst
1844 <tr><td>F16<td>F16
1845 <tr><td>F32<td>F32
1846 </table>
1847<tr>
1848 <td rowspan="2">L2NormalizeLayer
1849 <td rowspan="2" style="width:200px;"> Function to perform a L2 normalization on a given axis.
1850 <td rowspan="2">
1851 <ul>
1852 <li>ANEURALNETWORKS_L2_NORMALIZATION
1853 </ul>
1854 <td>NEL2NormalizeLayer
1855 <td>
1856 <ul>
1857 <li>NHWC
1858 <li>NCHW
1859 </ul>
1860 <td>
1861 <table>
1862 <tr><th>src<th>dst
1863 <tr><td>F16<td>F16
1864 <tr><td>F32<td>F32
1865 </table>
1866<tr>
1867 <td>CLL2NormalizeLayer
1868 <td>
1869 <ul>
1870 <li>NHWC
1871 <li>NCHW
1872 </ul>
1873 <td>
1874 <table>
1875 <tr><th>src<th>dst
1876 <tr><td>F16<td>F16
1877 <tr><td>F32<td>F32
1878 </table>
1879<tr>
1880 <td rowspan="3">Logical
1881 <td rowspan="3" style="width:200px;"> Function to perform: - Logical AND - Logical OR - Logical NOT
1882 <td rowspan="3">
1883 <ul>
1884 <li>n/a
1885 </ul>
1886 <td>NELogicalAnd
1887 <td>
1888 <ul>
1889 <li>All
1890 </ul>
1891 <td>
1892 <table>
1893 <tr><th>src0<th>src1<th>dst
1894 <tr><td>U8<td>U8<td>U8
1895 </table>
1896<tr>
1897 <td>NELogicalOr
1898 <td>
1899 <ul>
1900 <li>All
1901 </ul>
1902 <td>
1903 <table>
1904 <tr><th>src0<th>src1<th>dst
1905 <tr><td>U8<td>U8<td>U8
1906 </table>
1907<tr>
1908 <td>NELogicalNot
1909 <td>
1910 <ul>
1911 <li>All
1912 </ul>
1913 <td>
1914 <table>
1915 <tr><th>src<th>dst
1916 <tr><td>U8<td>U8
1917 </table>
1918<tr>
1919 <td rowspan="1">LogicalAnd
1920 <td rowspan="1" style="width:200px;"> Function to perform Logical AND.
1921 <td rowspan="1">
1922 <ul>
1923 <li>n/a
1924 </ul>
1925 <td>CLLogicalAnd
1926 <td>
1927 <ul>
1928 <li>All
1929 </ul>
1930 <td>
1931 <table>
1932 <tr><th>src0<th>src1<th>dst
1933 <tr><td>U8<td>U8<td>U8
1934 </table>
1935<tr>
1936 <td rowspan="1">LogicalOr
1937 <td rowspan="1" style="width:200px;"> Function to perform Logical OR.
1938 <td rowspan="1">
1939 <ul>
1940 <li>n/a
1941 </ul>
1942 <td>CLLogicalOr
1943 <td>
1944 <ul>
1945 <li>All
1946 </ul>
1947 <td>
1948 <table>
1949 <tr><th>src0<th>src1<th>dst
1950 <tr><td>U8<td>U8<td>U8
1951 </table>
1952<tr>
1953 <td rowspan="1">LogicalNot
1954 <td rowspan="1" style="width:200px;"> Function to perform Logical NOT.
1955 <td rowspan="1">
1956 <ul>
1957 <li>n/a
1958 </ul>
1959 <td>CLLogicalNot
1960 <td>
1961 <ul>
1962 <li>All
1963 </ul>
1964 <td>
1965 <table>
1966 <tr><th>src<th>dst
1967 <tr><td>U8<td>U8
1968 </table>
1969<tr>
1970 <td rowspan="2">LSTMLayer
1971 <td rowspan="2" style="width:200px;"> Function to perform a single time step in a Long Short-Term Memory (LSTM) layer.
1972 <td rowspan="2">
1973 <ul>
1974 <li>ANEURALNETWORKS_LSTM
1975 </ul>
1976 <td>NELSTMLayer
1977 <td>
1978 <ul>
1979 <li>All
1980 </ul>
1981 <td>
1982 <table>
1983 <tr><th>src0 - src13<th>dst0 - dst3
1984 <tr><td>F16<td>F16
1985 <tr><td>F32<td>F32
1986 </table>
1987<tr>
1988 <td>CLLSTMLayer
1989 <td>
1990 <ul>
1991 <li>All
1992 </ul>
1993 <td>
1994 <table>
1995 <tr><th>src0 - src13<th>dst0 - dst3
1996 <tr><td>F16<td>F16
1997 <tr><td>F32<td>F32
1998 </table>
1999<tr>
2000 <td rowspan="2">LSTMLayerQuantized
2001 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2002 <td rowspan="2">
2003 <ul>
2004 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2005 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2006 </ul>
2007 <td>NELSTMLayerQuantized
2008 <td>
2009 <ul>
2010 <li>All
2011 </ul>
2012 <td>
2013 <table>
2014 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2015 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2016 </table>
2017<tr>
2018 <td>CLLSTMLayerQuantized
2019 <td>
2020 <ul>
2021 <li>All
2022 </ul>
2023 <td>
2024 <table>
2025 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2026 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2027 </table>
2028<tr>
2029 <td rowspan="2">MaxUnpoolingLayer
2030 <td rowspan="2" style="width:200px;"> Function to perform MaxUnpooling.
2031 <td rowspan="2">
2032 <ul>
2033 <li>n/a
2034 </ul>
2035 <td>NEMaxUnpoolingLayer
2036 <td>
2037 <ul>
2038 <li>NHWC
2039 <li>NCHW
2040 </ul>
2041 <td>
2042 <table>
2043 <tr><th>src<th>dst
2044 <tr><td>QASYMM8<td>QASYMM8
2045 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2046 <tr><td>F16<td>F16
2047 <tr><td>F32<td>F32
2048 </table>
2049<tr>
2050 <td>CLMaxUnpoolingLayer
2051 <td>
2052 <ul>
2053 <li>NHWC
2054 <li>NCHW
2055 </ul>
2056 <td>
2057 <table>
2058 <tr><th>src<th>dst
2059 <tr><td>QASYMM8<td>QASYMM8
2060 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2061 <tr><td>F16<td>F16
2062 <tr><td>F32<td>F32
2063 </table>
2064<tr>
2065 <td rowspan="2">MeanStdDevNormalizationLayer
2066 <td rowspan="2" style="width:200px;"> Function to execute mean and standard deviation normalization.
2067 <td rowspan="2">
2068 <ul>
2069 <li>n/a
2070 </ul>
2071 <td>NEMeanStdDevNormalizationLayer
2072 <td>
2073 <ul>
2074 <li>NHWC
2075 <li>NCHW
2076 </ul>
2077 <td>
2078 <table>
2079 <tr><th>src<th>dst
2080 <tr><td>F32<td>F32
2081 <tr><td>F16<td>F16
2082 </table>
2083<tr>
2084 <td>CLMeanStdDevNormalizationLayer
2085 <td>
2086 <ul>
2087 <li>NHWC
2088 <li>NCHW
2089 </ul>
2090 <td>
2091 <table>
2092 <tr><th>src<th>dst
2093 <tr><td>F32<td>F32
2094 <tr><td>F16<td>F16
2095 </table>
2096<tr>
2097 <td rowspan="2">NormalizationLayer
2098 <td rowspan="2" style="width:200px;"> Function to compute normalization layer.
2099 <td rowspan="2">
2100 <ul>
2101 <li>ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION
2102 </ul>
2103 <td>NENormalizationLayer
2104 <td>
2105 <ul>
2106 <li>NHWC
2107 <li>NCHW
2108 </ul>
2109 <td>
2110 <table>
2111 <tr><th>src<th>dst
2112 <tr><td>F32<td>F32
2113 <tr><td>F16<td>F16
2114 </table>
2115<tr>
2116 <td>CLNormalizationLayer
2117 <td>
2118 <ul>
2119 <li>NHWC
2120 <li>NCHW
2121 </ul>
2122 <td>
2123 <table>
2124 <tr><th>src<th>dst
2125 <tr><td>F32<td>F32
2126 <tr><td>F16<td>F16
2127 </table>
2128<tr>
2129 <td rowspan="2">PadLayer
2130 <td rowspan="2" style="width:200px;"> Function to pad a tensor.
2131 <td rowspan="2">
2132 <ul>
2133 <li>ANEURALNETWORKS_PAD
2134 <li>ANEURALNETWORKS_PAD_V2
2135 </ul>
2136 <td>NEPadLayer
2137 <td>
2138 <ul>
2139 <li>NHWC
2140 <li>NCHW
2141 </ul>
2142 <td>
2143 <table>
2144 <tr><th>src<th>dst
2145 <tr><td>All<td>All
2146 </table>
2147<tr>
2148 <td>CLPadLayer
2149 <td>
2150 <ul>
2151 <li>NHWC
2152 <li>NCHW
2153 </ul>
2154 <td>
2155 <table>
2156 <tr><th>src<th>dst
2157 <tr><td>All<td>All
2158 </table>
2159<tr>
2160 <td rowspan="2">Permute
2161 <td rowspan="2" style="width:200px;"> Function to transpose an ND tensor.
2162 <td rowspan="2">
2163 <ul>
2164 <li>ANEURALNETWORKS_TRANSPOSE
2165 </ul>
2166 <td>NEPermute
2167 <td>
2168 <ul>
2169 <li>NHWC
2170 <li>NCHW
2171 </ul>
2172 <td>
2173 <table>
2174 <tr><th>src<th>dst
2175 <tr><td>All<td>All
2176 </table>
2177<tr>
2178 <td>CLPermute
2179 <td>
2180 <ul>
2181 <li>NHWC
2182 <li>NCHW
2183 </ul>
2184 <td>
2185 <table>
2186 <tr><th>src<th>dst
2187 <tr><td>All<td>All
2188 </table>
2189<tr>
2190 <td rowspan="2">PixelWiseMultiplication
2191 <td rowspan="2" style="width:200px;"> Function to perform a multiplication.
2192 <td rowspan="2">
2193 <ul>
2194 <li>ANEURALNETWORKS_MUL
2195 </ul>
2196 <td>NEPixelWiseMultiplication
2197 <td>
2198 <ul>
2199 <li>All
2200 </ul>
2201 <td>
2202 <table>
2203 <tr><th>src0<th>src1<th>dst
2204 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2205 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2206 <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
2207 <tr><td>QSYMM16<td>QSYMM16<td>S32
2208 <tr><td>U8<td>U8<td>U8
2209 <tr><td>U8<td>U8<td>S16
2210 <tr><td>U8<td>S16<td>S16
2211 <tr><td>S16<td>U8<td>S16
2212 <tr><td>S16<td>S16<td>S16
2213 <tr><td>F16<td>F16<td>F16
2214 <tr><td>F32<td>F32<td>F32
2215 </table>
2216<tr>
2217 <td>CLPixelWiseMultiplication
2218 <td>
2219 <ul>
2220 <li>All
2221 </ul>
2222 <td>
2223 <table>
2224 <tr><th>src0<th>src1<th>dst
2225 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2226 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2227 <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
2228 <tr><td>QSYMM16<td>QSYMM16<td>S32
2229 <tr><td>U8<td>U8<td>U8
2230 <tr><td>U8<td>U8<td>S16
2231 <tr><td>U8<td>S16<td>S16
2232 <tr><td>S16<td>U8<td>S16
2233 <tr><td>S16<td>S16<td>S16
2234 <tr><td>F16<td>F16<td>F16
2235 <tr><td>F32<td>F32<td>F32
2236 <tr><td>S32<td>S32<td>S32
2237 </table>
2238<tr>
2239 <td rowspan="2">PoolingLayer
2240 <td rowspan="2" style="width:200px;"> Function to perform pooling with the specified pooling operation.
2241 <td rowspan="2">
2242 <ul>
2243 <li>ANEURALNETWORKS_AVERAGE_POOL_2D
2244 <li>ANEURALNETWORKS_L2_POOL_2D
2245 <li>ANEURALNETWORKS_MAX_POOL_2D
2246 </ul>
2247 <td>NEPoolingLayer
2248 <td>
2249 <ul>
2250 <li>NHWC
2251 <li>NCHW
2252 </ul>
2253 <td>
2254 <table>
2255 <tr><th>src<th>dst
2256 <tr><td>QASYMM8<td>QASYMM8
2257 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2258 <tr><td>F16<td>F16
2259 <tr><td>F32<td>F32
2260 </table>
2261<tr>
2262 <td>CLPoolingLayer
2263 <td>
2264 <ul>
2265 <li>NHWC
2266 <li>NCHW
2267 </ul>
2268 <td>
2269 <table>
2270 <tr><th>src<th>dst
2271 <tr><td>QASYMM8<td>QASYMM8
2272 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2273 <tr><td>F16<td>F16
2274 <tr><td>F32<td>F32
2275 </table>
2276<tr>
2277 <td rowspan="2">PReluLayer
2278 <td rowspan="2" style="width:200px;"> Function to compute the activation layer with the PRELU activation function.
2279 <td rowspan="2">
2280 <ul>
2281 <li>ANEURALNETWORKS_PRELU
2282 </ul>
2283 <td>NEPReluLayer
2284 <td>
2285 <ul>
2286 <li>All
2287 </ul>
2288 <td>
2289 <table>
2290 <tr><th>src<th>dst
2291 <tr><td>QASYMM8<td>QASYMM8
2292 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2293 <tr><td>F16<td>F16
2294 <tr><td>F32<td>F32
2295 </table>
2296<tr>
2297 <td>CLPReluLayer
2298 <td>
2299 <ul>
2300 <li>All
2301 </ul>
2302 <td>
2303 <table>
2304 <tr><th>src<th>dst
2305 <tr><td>QASYMM8<td>QASYMM8
2306 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2307 <tr><td>F16<td>F16
2308 <tr><td>F32<td>F32
2309 </table>
2310<tr>
2311 <td rowspan="2">PriorBoxLayer
2312 <td rowspan="2" style="width:200px;"> Function to compute prior boxes and clip.
2313 <td rowspan="2">
2314 <ul>
2315 <li>n/a
2316 </ul>
2317 <td>NEPriorBoxLayer
2318 <td>
2319 <ul>
2320 <li>NHWC
2321 <li>NCHW
2322 </ul>
2323 <td>
2324 <table>
2325 <tr><th>src0<th>src1<th>dst
2326 <tr><td>F32<td>F32<td>F32
2327 </table>
2328<tr>
2329 <td>CLPriorBoxLayer
2330 <td>
2331 <ul>
2332 <li>NHWC
2333 <li>NCHW
2334 </ul>
2335 <td>
2336 <table>
2337 <tr><th>src0<th>src1<th>dst
2338 <tr><td>F32<td>F32<td>F32
2339 </table>
2340<tr>
2341 <td rowspan="2">QLSTMLayer
2342 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2343 <td rowspan="2">
2344 <ul>
2345 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2346 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2347 </ul>
2348 <td>NEQLSTMLayer
2349 <td>
2350 <ul>
2351 <li>All
2352 </ul>
2353 <td>
2354 <table>
2355 <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2356 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2357 </table>
2358<tr>
2359 <td>CLQLSTMLayer
2360 <td>
2361 <ul>
2362 <li>All
2363 </ul>
2364 <td>
2365 <table>
2366 <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2367 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2368 </table>
2369<tr>
2370 <td rowspan="2">QuantizationLayer
2371 <td rowspan="2" style="width:200px;"> Function to perform a quantization layer.
2372 <td rowspan="2">
2373 <ul>
2374 <li>ANEURALNETWORKS_QUANTIZE
2375 </ul>
2376 <td>NEQuantizationLayer
2377 <td>
2378 <ul>
2379 <li>All
2380 </ul>
2381 <td>
2382 <table>
2383 <tr><th>src<th>dst
2384 <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2385 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2386 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2387 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2388 </table>
2389<tr>
2390 <td>CLQuantizationLayer
2391 <td>
2392 <ul>
2393 <li>All
2394 </ul>
2395 <td>
2396 <table>
2397 <tr><th>src<th>dst
2398 <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2399 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2400 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2401 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2402 </table>
2403<tr>
2404 <td rowspan="2">Range
2405 <td rowspan="2" style="width:200px;"> Function to generate a sequence of numbers starting from START and extending by increments of 'STEP' up to but not including 'END'.
2406 <td rowspan="2">
2407 <ul>
2408 <li>n/a
2409 </ul>
2410 <td>NERange
2411 <td>
2412 <ul>
2413 <li>All
2414 </ul>
2415 <td>
2416 <table>
2417 <tr><th>dst
2418 <tr><td>U8
2419 <tr><td>S8
2420 <tr><td>U16
2421 <tr><td>S16
2422 <tr><td>U32
2423 <tr><td>S32
2424 <tr><td>F16
2425 <tr><td>F32
2426 </table>
2427<tr>
2428 <td>CLRange
2429 <td>
2430 <ul>
2431 <li>All
2432 </ul>
2433 <td>
2434 <table>
2435 <tr><th>dst
2436 <tr><td>U8
2437 <tr><td>S8
2438 <tr><td>QASYMM8
2439 <tr><td>U16
2440 <tr><td>S16
2441 <tr><td>U32
2442 <tr><td>S32
2443 <tr><td>F16
2444 <tr><td>F32
2445 </table>
2446<tr>
2447 <td rowspan="2">ReduceMean
2448 <td rowspan="2" style="width:200px;"> Function to perform a reduce mean operation.
2449 <td rowspan="2">
2450 <ul>
2451 <li>ANEURALNETWORKS_MEAN
2452 </ul>
2453 <td>NEReduceMean
2454 <td>
2455 <ul>
2456 <li>All
2457 </ul>
2458 <td>
2459 <table>
2460 <tr><th>src<th>dst
2461 <tr><td>QASYMM8<td>QASYMM8
2462 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2463 <tr><td>F16<td>F16
2464 <tr><td>F32<td>F32
2465 </table>
2466<tr>
2467 <td>CLReduceMean
2468 <td>
2469 <ul>
2470 <li>All
2471 </ul>
2472 <td>
2473 <table>
2474 <tr><th>src<th>dst
2475 <tr><td>QASYMM8<td>QASYMM8
2476 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2477 <tr><td>F16<td>F16
2478 <tr><td>F32<td>F32
2479 </table>
2480<tr>
2481 <td rowspan="2">ReductionOperation
2482 <td rowspan="2" style="width:200px;"> Function to perform reduce with the following operations - ARG_IDX_MAX: Index of the max value - ARG_IDX_MIN: Index of the min value - MEAN_SUM: Mean of sum - PROD: Product - SUM_SQUARE: Sum of squares - SUM: Sum - MIN: Min - MAX: Max
2483 <td rowspan="2">
2484 <ul>
2485 <li>ANEURALNETWORKS_REDUCE_ALL
2486 <li>ANEURALNETWORKS_REDUCE_ANY
2487 <li>ANEURALNETWORKS_REDUCE_MAX
2488 <li>ANEURALNETWORKS_REDUCE_MIN
2489 <li>ANEURALNETWORKS_REDUCE_PROD
2490 <li>ANEURALNETWORKS_REDUCE_SUM
2491 </ul>
2492 <td>NEReductionOperation
2493 <td>
2494 <ul>
2495 <li>All
2496 </ul>
2497 <td>
2498 <table>
2499 <tr><th>src<th>dst
2500 <tr><td>QASYMM8<td>QASYMM8
2501 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2502 <tr><td>F16<td>F16
2503 <tr><td>F32<td>F32
2504 <tr><td>S32<td>S32
2505 </table>
2506<tr>
2507 <td>CLReductionOperation
2508 <td>
2509 <ul>
2510 <li>All
2511 </ul>
2512 <td>
2513 <table>
2514 <tr><th>src<th>dst
2515 <tr><td>QASYMM8<td>QASYMM8
2516 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2517 <tr><td>F16<td>F16
2518 <tr><td>F32<td>F32
2519 <tr><td>S32<td>S32
2520 </table>
2521<tr>
2522 <td rowspan="2">ReorgLayer
2523 <td rowspan="2" style="width:200px;"> Performs a reorganization of the input tensor into the output tensor.
2524 <td rowspan="2">
2525 <ul>
2526 <li>n/a
2527 </ul>
2528 <td>NEReorgLayer
2529 <td>
2530 <ul>
2531 <li>NHWC
2532 <li>NCHW
2533 </ul>
2534 <td>
2535 <table>
2536 <tr><th>src<th>dst
2537 <tr><td>All<td>All
2538 </table>
2539<tr>
2540 <td>CLReorgLayer
2541 <td>
2542 <ul>
2543 <li>NHWC
2544 <li>NCHW
2545 </ul>
2546 <td>
2547 <table>
2548 <tr><th>src<th>dst
2549 <tr><td>All<td>All
2550 </table>
2551<tr>
2552 <td rowspan="2">ReshapeLayer
2553 <td rowspan="2" style="width:200px;"> Function to reshape a tensor.
2554 <td rowspan="2">
2555 <ul>
2556 <li>ANEURALNETWORKS_RESHAPE
2557 <li>ANEURALNETWORKS_SQUEEZE
2558 </ul>
2559 <td>NEReshapeLayer
2560 <td>
2561 <ul>
2562 <li>All
2563 </ul>
2564 <td>
2565 <table>
2566 <tr><th>src<th>dst
2567 <tr><td>All<td>All
2568 </table>
2569<tr>
2570 <td>CLReshapeLayer
2571 <td>
2572 <ul>
2573 <li>All
2574 </ul>
2575 <td>
2576 <table>
2577 <tr><th>src<th>dst
2578 <tr><td>All<td>All
2579 </table>
2580<tr>
  <td rowspan="2">Reverse
  <td rowspan="2" style="width:200px;"> Function to reverse a tensor according to the given axes.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NEReverse
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>U32<td>All
    </table>
<tr>
  <td>CLReverse
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>U32<td>All
    </table>
<tr>
  <td rowspan="2">RNNLayer
  <td rowspan="2" style="width:200px;"> Function to perform a recurrent neural network layer.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_RNN
      </ul>
  <td>NERNNLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
    <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td>CLRNNLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
    <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">ROIAlignLayer
  <td rowspan="2" style="width:200px;"> Function to perform ROI alignment.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_ROI_ALIGN
      </ul>
  <td>NEROIAlignLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
    </table>
<tr>
  <td>CLROIAlignLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
    </table>
<tr>
  <td rowspan="2">ROIPoolingLayer
  <td rowspan="2" style="width:200px;"> Function to perform ROI pooling.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_ROI_POOLING
      </ul>
  <td>NEROIPoolingLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F32<td>U16<td>F32
    <tr><td>QASYMM8<td>U16<td>QASYMM8
    </table>
<tr>
  <td>CLROIPoolingLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F16<td>U16<td>F16
    <tr><td>F32<td>U16<td>F32
    <tr><td>QASYMM8<td>U16<td>QASYMM8
    </table>
<tr>
  <td rowspan="2">Scale
  <td rowspan="2" style="width:200px;"> Function to resize a tensor using one of the following interpolation methods: - Bilinear - Nearest neighbor
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_RESIZE_BILINEAR
       <li>ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
      </ul>
  <td>NEScale
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    <tr><td>U8<td>U8
    <tr><td>S16<td>S16
    </table>
<tr>
  <td>CLScale
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    <tr><td>U8<td>U8
    <tr><td>S16<td>S16
    </table>
<tr>
  <td rowspan="2">Select
  <td rowspan="2" style="width:200px;"> Function to select values from 2 tensors depending on an input tensor of booleans.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_SELECT
      </ul>
  <td>NESelect
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>U8<td>All<td>All<td>All
    </table>
<tr>
  <td>CLSelect
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>U8<td>All<td>All<td>All
    </table>
<tr>
  <td rowspan="2">Slice
  <td rowspan="2" style="width:200px;"> Function to perform tensor slicing.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_SLICE
      </ul>
  <td>NESlice
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLSlice
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">SoftmaxLayer
  <td rowspan="2" style="width:200px;"> Function to compute a SoftmaxLayer and a Log SoftmaxLayer.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_LOG_SOFTMAX
       <li>ANEURALNETWORKS_SOFTMAX
      </ul>
  <td>NESoftmaxLayerGeneric
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td>CLSoftmaxLayerGeneric
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">SpaceToBatchLayer
  <td rowspan="2" style="width:200px;"> Function to divide a tensor spatially.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_SPACE_TO_BATCH_ND
      </ul>
  <td>NESpaceToBatchLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>All<td>S32<td>S32<td>All
    </table>
<tr>
  <td>CLSpaceToBatchLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>All<td>S32<td>S32<td>All
    </table>
<tr>
  <td rowspan="2">SpaceToDepthLayer
  <td rowspan="2" style="width:200px;"> Function to rearrange blocks of spatial data into depth.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_SPACE_TO_DEPTH
      </ul>
  <td>NESpaceToDepthLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLSpaceToDepthLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Split
  <td rowspan="2" style="width:200px;"> Function to split a tensor along a given axis.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_SPLIT
      </ul>
  <td>NESplit
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLSplit
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">StackLayer
  <td rowspan="2" style="width:200px;"> Function to stack tensors along an axis.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NEStackLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLStackLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">StridedSlice
  <td rowspan="2" style="width:200px;"> Function to extract a strided slice of a tensor.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_STRIDED_SLICE
      </ul>
  <td>NEStridedSlice
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLStridedSlice
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Tile
  <td rowspan="2" style="width:200px;"> Function to construct a tensor by tiling a given tensor.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_TILE
      </ul>
  <td>NETile
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLTile
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Transpose
  <td rowspan="2" style="width:200px;"> Function to transpose a 2D tensor.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_TRANSPOSE
      </ul>
  <td>NETranspose
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLTranspose
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Unstack
  <td rowspan="2" style="width:200px;"> Function to unpack a rank-R tensor into rank-(R-1) tensors.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NEUnstack
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLUnstack
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">WinogradConvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to perform Winograd convolution.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_CONV_2D
      </ul>
  <td>NEWinogradConvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td>CLWinogradConvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td rowspan="1">WinogradInputTransform
  <td rowspan="1" style="width:200px;"> Function to perform a Winograd transform on the input tensor.
  <td rowspan="1">
      <ul>
       <li>n/a
      </ul>
</table>
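The functions listed above share a common configure/run pattern: initialize tensor metadata, configure the function, allocate the backing memory, then run. As an illustrative sketch only (the tensor shapes here are hypothetical, and building it requires linking against the Arm Compute Library runtime), NEReshapeLayer from the table could be used as follows:

```cpp
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Source: a 4x4 F32 tensor; destination: the same 16 elements flattened to 1D.
    Tensor src, dst;
    src.allocator()->init(TensorInfo(TensorShape(4U, 4U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::F32));

    // Configure the function before allocating the tensors' backing memory.
    NEReshapeLayer reshape;
    reshape.configure(&src, &dst);

    src.allocator()->allocate();
    dst.allocator()->allocate();

    // Execute the reshape on the CPU (Neon(TM)) backend.
    reshape.run();
    return 0;
}
```

The CL variants (e.g. CLReshapeLayer) follow the same pattern with CLTensor and an initialized CLScheduler.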

*/
} // namespace arm_compute