///
/// Copyright (c) 2021-2023 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/**
@page operators_list Supported Operators

@tableofcontents

@section S9_1_operators_list Supported Operators

Compute Library supports the operators listed in the table below.

Compute Library supports a wide range of data types; the complete list can be found directly in the documentation of each kernel/function.
The main data types that the Machine Learning functions support are the following:
 <ul>
 <li>BFLOAT16: 16-bit non-standard brain floating point
 <li>QASYMM8: 8-bit unsigned asymmetric quantized
 <li>QASYMM8_SIGNED: 8-bit signed asymmetric quantized
 <li>QSYMM8_PER_CHANNEL: 8-bit signed symmetric quantized (used for the weights)
 <li>QSYMM8: 8-bit signed symmetric quantized
 <li>QSYMM16: 16-bit signed symmetric quantized
 <li>F32: 32-bit single precision floating point
 <li>F16: 16-bit half precision floating point
 <li>S32: 32-bit signed integer
 <li>U8: 8-bit unsigned char
 <li>All: Agnostic to any specific data type
 </ul>
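
The snippet below is a minimal, illustrative sketch (not taken from the library's examples) of how one of these data types is attached to a tensor before an operator from the table is configured; the tensor shape and the choice of NEActivationLayer are assumptions made only for this example.

@code{.cpp}
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Describe two 2D tensors of 32-bit floats (F32 in the tables below).
    Tensor src, dst;
    src.allocator()->init(TensorInfo(TensorShape(16U, 16U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(16U, 16U), 1, DataType::F32));
    // Quantized types additionally carry a QuantizationInfo, e.g.
    // TensorInfo(TensorShape(16U, 16U), 1, DataType::QASYMM8, QuantizationInfo(0.5f, 10));

    // Configure a function that supports F32 (see its row in the table below).
    NEActivationLayer act;
    act.configure(&src, &dst, ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU));

    // Allocate the backing memory and run the operator.
    src.allocator()->allocate();
    dst.allocator()->allocate();
    act.run();
    return 0;
}
@endcode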

Compute Library supports the following data layouts (fast changing dimension from right to left):
 <ul>
 <li>NHWC: The native layout of Compute Library that delivers the best performance where channels are in the fastest changing dimension
 <li>NCHW: Legacy layout where width is in the fastest changing dimension
 <li>NDHWC: New data layout for supporting 3D operators
 <li>All: Agnostic to any specific data layout
 </ul>
where N = batches, C = channels, H = height, W = width, D = depth

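The following sketch (again illustrative only, with an assumed 224x224 RGB input) shows how these layouts are declared on the tensor metadata; layout-aware functions read the layout from the TensorInfo they are configured with.

@code{.cpp}
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"

using namespace arm_compute;

int main()
{
    // The same 224x224 RGB image (N = 1) described in the two 4D layouts.
    // TensorShape lists dimensions starting from the fastest changing one.

    // NHWC: channels are the fastest changing dimension -> shape is (C, W, H, N).
    TensorInfo nhwc(TensorShape(3U, 224U, 224U, 1U), 1, DataType::F32);
    nhwc.set_data_layout(DataLayout::NHWC);

    // NCHW: width is the fastest changing dimension -> shape is (W, H, C, N).
    TensorInfo nchw(TensorShape(224U, 224U, 3U, 1U), 1, DataType::F32);
    nchw.set_data_layout(DataLayout::NCHW);

    return 0;
}
@endcode
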
<table>
<caption id="multi_row"></caption>
<tr>
 <th>Function
 <th>Description
 <th>Equivalent Android NNAPI Op
 <th>Backends
 <th>Data Layouts
 <th>Data Types
69<tr>
70 <td rowspan="2">ActivationLayer
71 <td rowspan="2" style="width:200px;"> Function to simulate an activation layer with the specified activation function.
72 <td rowspan="2">
73 <ul>
74 <li>ANEURALNETWORKS_ELU
75 <li>ANEURALNETWORKS_HARD_SWISH
76 <li>ANEURALNETWORKS_LOGISTIC
77 <li>ANEURALNETWORKS_RELU
78 <li>ANEURALNETWORKS_RELU1
79 <li>ANEURALNETWORKS_RELU6
80 <li>ANEURALNETWORKS_TANH
81 </ul>
82 <td>NEActivationLayer
83 <td>
84 <ul>
85 <li>All
86 </ul>
87 <td>
88 <table>
89 <tr><th>src<th>dst
90 <tr><td>QASYMM8<td>QASYMM8
91 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
92 <tr><td>QSYMM16<td>QSYMM16
93 <tr><td>F16<td>F16
94 <tr><td>F32<td>F32
95 </table>
96<tr>
97 <td>CLActivationLayer
98 <td>
99 <ul>
100 <li>All
101 </ul>
102 <td>
103 <table>
104 <tr><th>src<th>dst
105 <tr><td>QASYMM8<td>QASYMM8
106 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
107 <tr><td>QSYMM16<td>QSYMM16
108 <tr><td>F16<td>F16
109 <tr><td>F32<td>F32
110 </table>
111<tr>
 <td rowspan="1">AddMulAdd
113 <td rowspan="1" style="width:200px;"> Performs a fused Add + Mul + Add [+ Relu-based-Activation] operation.
114 <td rowspan="1">
115 <ul>
116 <li>n/a
117 </ul>
118 <td>NEAddMulAdd
119 <td>
120 <ul>
121 <li>Any
122 </ul>
123 <td>
124 <table>
125 <tr><th>input1<th>input2<th>bn_mul<th>bn_add<th>add_output<th>final_output
126 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8<td>QASYMM8<td>QASYMM8<td>QASYMM8
127 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
128 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
129 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
130 </table>
131<tr>
 <td rowspan="2">ArgMinMaxLayer
133 <td rowspan="2" style="width:200px;"> Function to calculate the index of the minimum or maximum values in a tensor based on an axis.
134 <td rowspan="2">
135 <ul>
136 <li>ANEURALNETWORKS_ARGMAX
137 <li>ANEURALNETWORKS_ARGMIN
138 </ul>
139 <td>NEArgMinMaxLayer
140 <td>
141 <ul>
142 <li>All
143 </ul>
144 <td>
145 <table>
146 <tr><th>src<th>dst
147 <tr><td>QASYMM8<td>U32, S32
148 <tr><td>QASYMM8_SIGNED<td>U32, S32
 <tr><td>S32<td>U32, S32, S64
 <tr><td>F16<td>U32, S32
151 <tr><td>F32<td>U32, S32
152 </table>
153<tr>
154 <td>CLArgMinMaxLayer
155 <td>
156 <ul>
157 <li>All
158 </ul>
159 <td>
160 <table>
161 <tr><th>src<th>dst
162 <tr><td>QASYMM8<td>U32, S32
163 <tr><td>QASYMM8_SIGNED<td>U32, S32
164 <tr><td>S32<td>U32, S32
165 <tr><td>F16<td>U32, S32
166 <tr><td>F32<td>U32, S32
167 </table>
168<tr>
 <td rowspan="1">ArithmeticAddition
170 <td rowspan="1" style="width:200px;"> Function to add 2 tensors.
171 <td rowspan="1">
172 <ul>
173 <li>ANEURALNETWORKS_ADD
174 </ul>
175 <td>NEArithmeticAddition
176 <td>
177 <ul>
178 <li>All
179 </ul>
180 <td>
181 <table>
182 <tr><th>src0<th>src1<th>dst
183 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
184 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
185 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
186 <tr><td>QSYMM16<td>QSYMM16<td>S32
187 <tr><td>U8<td>U8<td>U8
 <tr><td>S16<td>S16<td>S16
189 <tr><td>S32<td>S32<td>S32
190 <tr><td>F16<td>F16<td>F16
191 <tr><td>F32<td>F32<td>F32
192 </table>
193<tr>
194 <td rowspan="1">ArithmeticSubtraction
 <td rowspan="1" style="width:200px;"> Function to subtract 2 tensors.
196 <td rowspan="1">
197 <ul>
198 <li>ANEURALNETWORKS_SUB
199 </ul>
200 <td>NEArithmeticSubtraction
201 <td>
202 <ul>
203 <li>All
204 </ul>
205 <td>
206 <table>
207 <tr><th>src0<th>src1<th>dst
208 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
209 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
210 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
211 <tr><td>QSYMM16<td>QSYMM16<td>S32
212 <tr><td>U8<td>U8<td>U8
 <tr><td>S16<td>S16<td>S16
214 <tr><td>S32<td>S32<td>S32
215 <tr><td>F16<td>F16<td>F16
216 <tr><td>F32<td>F32<td>F32
217 </table>
218<tr>
 <td rowspan="2">BatchNormalizationLayer
220 <td rowspan="2" style="width:200px;"> Function to perform batch normalization.
221 <td rowspan="2">
222 <ul>
223 <li>n/a
224 </ul>
225 <td>NEBatchNormalizationLayer
226 <td>
227 <ul>
228 <li>NHWC
229 <li>NCHW
230 </ul>
231 <td>
232 <table>
233 <tr><th>src<th>dst
234 <tr><td>F32<td>F32
235 <tr><td>F16<td>F16
236 </table>
237<tr>
238 <td>CLBatchNormalizationLayer
239 <td>
240 <ul>
241 <li>NHWC
242 <li>NCHW
243 </ul>
244 <td>
245 <table>
246 <tr><th>src<th>dst
247 <tr><td>F32<td>F32
248 <tr><td>F16<td>F16
249 </table>
250<tr>
251 <td rowspan="2">BatchToSpaceLayer
252 <td rowspan="2" style="width:200px;"> Batch to space transformation.
253 <td rowspan="2">
254 <ul>
255 <li>ANEURALNETWORKS_BATCH_TO_SPACE_ND
256 </ul>
257 <td>NEBatchToSpaceLayer
258 <td>
259 <ul>
260 <li>NHWC
261 <li>NCHW
262 </ul>
263 <td>
264 <table>
265 <tr><th>src0<th>src1<th>dst
 <tr><td>All<td>S32<td>All
267 </table>
268<tr>
269 <td>CLBatchToSpaceLayer
270 <td>
271 <ul>
272 <li>NHWC
273 <li>NCHW
274 </ul>
275 <td>
276 <table>
277 <tr><th>src0<th>src1<th>dst
 <tr><td>All<td>S32<td>All
279 </table>
280<tr>
281 <td rowspan="2">BitwiseAnd
 <td rowspan="2" style="width:200px;"> Function to perform bitwise AND between 2 tensors.
 <td rowspan="2">
284 <ul>
285 <li>ANEURALNETWORKS_LOGICAL_AND
286 </ul>
287 <td>NEBitwiseAnd
288 <td>
289 <ul>
290 <li>All
291 </ul>
292 <td>
293 <table>
294 <tr><th>src<th>dst
295 <tr><td>U8<td>U8
296 </table>
297<tr>
298 <td>CLBitwiseAnd
299 <td>
300 <ul>
301 <li>All
302 </ul>
303 <td>
304 <table>
305 <tr><th>src<th>dst
306 <tr><td>U8<td>U8
307 </table>
308<tr>
309 <td rowspan="2">BitwiseNot
 <td rowspan="2" style="width:200px;"> Function to perform bitwise NOT.
 <td rowspan="2">
312 <ul>
313 <li>ANEURALNETWORKS_LOGICAL_NOT
314 </ul>
315 <td>NEBitwiseNot
316 <td>
317 <ul>
318 <li>All
319 </ul>
320 <td>
321 <table>
322 <tr><th>src<th>dst
323 <tr><td>U8<td>U8
324 </table>
325<tr>
326 <td>CLBitwiseNot
327 <td>
328 <ul>
329 <li>All
330 </ul>
331 <td>
332 <table>
333 <tr><th>src<th>dst
334 <tr><td>U8<td>U8
335 </table>
336<tr>
337 <td rowspan="2">BitwiseOr
 <td rowspan="2" style="width:200px;"> Function to perform bitwise OR between 2 tensors.
 <td rowspan="2">
340 <ul>
341 <li>ANEURALNETWORKS_LOGICAL_OR
342 </ul>
343 <td>NEBitwiseOr
344 <td>
345 <ul>
346 <li>All
347 </ul>
348 <td>
349 <table>
350 <tr><th>src<th>dst
351 <tr><td>U8<td>U8
352 </table>
353<tr>
354 <td>CLBitwiseOr
355 <td>
356 <ul>
357 <li>All
358 </ul>
359 <td>
360 <table>
361 <tr><th>src<th>dst
362 <tr><td>U8<td>U8
363 </table>
364<tr>
365 <td rowspan="2">BitwiseXor
 <td rowspan="2" style="width:200px;"> Function to perform bitwise XOR between 2 tensors.
 <td rowspan="2">
368 <ul>
369 <li>n/a
370 </ul>
371 <td>NEBitwiseXor
372 <td>
373 <ul>
374 <li>All
375 </ul>
376 <td>
377 <table>
378 <tr><th>src<th>dst
379 <tr><td>U8<td>U8
380 </table>
381<tr>
382 <td>CLBitwiseXor
383 <td>
384 <ul>
385 <li>All
386 </ul>
387 <td>
388 <table>
389 <tr><th>src<th>dst
390 <tr><td>U8<td>U8
391 </table>
392<tr>
393 <td rowspan="2">BoundingBoxTransform
394 <td rowspan="2" style="width:200px;"> Transform proposal bounding boxes to target bounding box using bounding box deltas.
395 <td rowspan="2">
396 <ul>
397 <li>n/a
398 </ul>
399 <td>NEBoundingBoxTransform
400 <td>
401 <ul>
402 <li>NHWC
403 <li>NCHW
404 </ul>
405 <td>
406 <table>
407 <tr><th>src0<th>src1<th>dst
408 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
409 <tr><td>F16<td>F16<td>F16
410 <tr><td>F32<td>F32<td>F32
411 </table>
412<tr>
413 <td>CLBoundingBoxTransform
414 <td>
415 <ul>
416 <li>NHWC
417 <li>NCHW
418 </ul>
419 <td>
420 <table>
421 <tr><th>src0<th>src1<th>dst
422 <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
423 <tr><td>F16<td>F16<td>F16
424 <tr><td>F32<td>F32<td>F32
425 </table>
426<tr>
427 <td rowspan="2">Cast
428 <td rowspan="2" style="width:200px;"> Function to cast a tensor.
429 <td rowspan="2">
430 <ul>
431 <li>ANEURALNETWORKS_CAST
432 </ul>
433 <td>NECast
434 <td>
435 <ul>
436 <li>All
437 </ul>
438 <td>
439 <table>
440 <tr><th>src<th>dst
441 <tr><td>QASYMM8_SIGNED<td>S16, S32, F32, F16
442 <tr><td>QASYMM8<td>U16, S16, S32, F32, F16
443 <tr><td>U8<td>U16, S16, S32, F32, F16
444 <tr><td>U16<td>U8, U32
445 <tr><td>S16<td>QASYMM8_SIGNED, U8, S32
446 <tr><td>F16<td>QASYMM8_SIGNED, QASYMM8, F32, S32, U8
447 <tr><td>S32<td>QASYMM8_SIGNED, QASYMM8, F16, F32, U8
448 <tr><td>F32<td>QASYMM8_SIGNED, QASYMM8, BFLOAT16, F16, S32, U8
449 </table>
450<tr>
451 <td>CLCast
452 <td>
453 <ul>
454 <li>All
455 </ul>
456 <td>
457 <table>
458 <tr><th>src<th>dst
459 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
 <tr><td>S8<td>U8, U16, S16, U32, S32, F16, F32
 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
462 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
463 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
464 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
 <tr><td>U64<td>U8, S8, U16, S16, U32, S32, F16, F32
466 <tr><td>S64<td>U8, S8, U16, S16, U32, S32, F16, F32
467 <tr><td>F16<td>U8, S8, U16, S16, S32, U32, F32
468 <tr><td>F32<td>U8, S8, U16, S16, S32, U32, F16
 </table>
470<tr>
471 <td rowspan="2">ChannelShuffleLayer
472 <td rowspan="2" style="width:200px;"> Function to shuffle the channels of the input tensor.
473 <td rowspan="2">
474 <ul>
475 <li>ANEURALNETWORKS_CHANNEL_SHUFFLE
476 </ul>
477 <td>NEChannelShuffleLayer
478 <td>
479 <ul>
480 <li>NCHW
 <li>NHWC
 </ul>
483 <td>
484 <table>
485 <tr><th>src<th>dst
486 <tr><td>All<td>All
487 </table>
488<tr>
489 <td>CLChannelShuffleLayer
490 <td>
491 <ul>
492 <li>NCHW
 <li>NHWC
 </ul>
495 <td>
496 <table>
497 <tr><th>src<th>dst
498 <tr><td>All<td>All
499 </table>
500<tr>
 <td rowspan="1">Comparison
502 <td rowspan="1" style="width:200px;"> Function to compare 2 tensors.
503 <td rowspan="1">
504 <ul>
505 <li>ANEURALNETWORKS_EQUAL
506 <li>ANEURALNETWORKS_GREATER
507 <li>ANEURALNETWORKS_GREATER_EQUAL
508 <li>ANEURALNETWORKS_LESS
509 <li>ANEURALNETWORKS_LESS_EQUAL
510 <li>ANEURALNETWORKS_NOT_EQUAL
511 </ul>
512 <td>CLComparison
513 <td>
514 <ul>
515 <li>All
516 </ul>
517 <td>
518 <table>
519 <tr><th>src0<th>src1<th>dst
520 <tr><td>All<td>All<td>U8
521 </table>
522<tr>
 <td rowspan="2">ConcatenateLayer
524 <td rowspan="2" style="width:200px;"> Function to concatenate tensors along a given axis.
525 <td rowspan="2">
526 <ul>
527 <li>ANEURALNETWORKS_CONCATENATION
528 </ul>
529 <td>NEConcatenateLayer
530 <td>
531 <ul>
532 <li>All
533 </ul>
534 <td>
535 <table>
536 <tr><th>src<th>dst
537 <tr><td>QASYMM8<td>QASYMM8
538 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
539 <tr><td>F16<td>F16
540 <tr><td>F32<td>F32
541 </table>
542<tr>
543 <td>CLConcatenateLayer
544 <td>
545 <ul>
546 <li>All
547 </ul>
548 <td>
549 <table>
550 <tr><th>src<th>dst
551 <tr><td>QASYMM8<td>QASYMM8
552 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
553 <tr><td>F16<td>F16
554 <tr><td>F32<td>F32
555 </table>
556<tr>
557 <td rowspan="2">ConvertFullyConnectedWeights
 <td rowspan="2" style="width:200px;"> Function to transpose the weights for the fully connected layer.
 <td rowspan="2">
560 <ul>
 <li>n/a
 </ul>
563 <td>NEConvertFullyConnectedWeights
564 <td>
565 <ul>
566 <li>NHWC
567 <li>NCHW
568 </ul>
569 <td>
570 <table>
571 <tr><th>src<th>dst
572 <tr><td>All<td>All
573 </table>
574<tr>
575 <td>CLConvertFullyConnectedWeights
576 <td>
577 <ul>
578 <li>NHWC
579 <li>NCHW
580 </ul>
581 <td>
582 <table>
583 <tr><th>src<th>dst
584 <tr><td>All<td>All
585 </table>
586<tr>
 <td rowspan="2">ConvolutionLayer
588 <td rowspan="2" style="width:200px;"> Function to compute a convolution layer.
589 <td rowspan="2">
590 <ul>
591 <li>ANEURALNETWORKS_CONV_2D
592 </ul>
593 <td>NEConvolutionLayer
594 <td>
595 <ul>
596 <li>NHWC
597 <li>NCHW
598 </ul>
599 <td>
600 <table>
601 <tr><th>src0<th>src1<th>src2<th>dst
602 <tr><td>F16<td>F16<td>F16<td>F16
603 <tr><td>F32<td>F32<td>F32<td>F32
604 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
605 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
606 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
607 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
608 </table>
609<tr>
610 <td>CLConvolutionLayer
611 <td>
612 <ul>
613 <li>NHWC
614 <li>NCHW
615 </ul>
616 <td>
617 <table>
618 <tr><th>src0<th>src1<th>src2<th>dst
619 <tr><td>F16<td>F16<td>F16<td>F16
620 <tr><td>F32<td>F32<td>F32<td>F32
621 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
622 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
623 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
624 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
625 </table>
626<tr>
 <td rowspan="2">Conv3D
 <td rowspan="2" style="width:200px;"> Function to compute a 3D convolution layer.
629 <td rowspan="2">
630 <ul>
631 <li>ANEURALNETWORKS_CONV_3D
632 </ul>
633 <td>NEConv3D
634 <td>
635 <ul>
636 <li>NDHWC
637 </ul>
638 <td>
639 <table>
640 <tr><th>src0<th>src1<th>src2<th>dst
641 <tr><td>F16<td>F16<td>F16<td>F16
642 <tr><td>F32<td>F32<td>F32<td>F32
 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
644 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
 </table>
646<tr>
647 <td>CLConv3D
648 <td>
649 <ul>
650 <li>NDHWC
651 </ul>
652 <td>
653 <table>
654 <tr><th>src0<th>src1<th>src2<th>dst
655 <tr><td>F16<td>F16<td>F16<td>F16
656 <tr><td>F32<td>F32<td>F32<td>F32
 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
658 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
 </table>
660<tr>
 <td rowspan="2">Copy
662 <td rowspan="2" style="width:200px;"> Function to copy a tensor.
663 <td rowspan="2">
664 <ul>
 <li>n/a
 </ul>
667 <td>NECopy
668 <td>
669 <ul>
670 <li>All
671 </ul>
672 <td>
673 <table>
674 <tr><th>src<th>dst
675 <tr><td>All<td>All
676 </table>
677<tr>
678 <td>CLCopy
679 <td>
680 <ul>
681 <li>All
682 </ul>
683 <td>
684 <table>
685 <tr><th>src<th>dst
686 <tr><td>All<td>All
687 </table>
688<tr>
 <td rowspan="1">Crop
 <td rowspan="1" style="width:200px;"> Performs a copy of the input tensor to the output tensor.
691 <td rowspan="1">
692 <ul>
693 <li>n/a
694 </ul>
695 <td>CLCrop
696 <td>
697 <ul>
698 <li>NHWC
699 </ul>
700 <td>
701 <table>
702 <tr><th>src<th>dst
703 <tr><td>All<td>F32
704 </table>
705<tr>
 <td rowspan="2">CropResize
707 <td rowspan="2" style="width:200px;"> Function to perform cropping and resizing.
708 <td rowspan="2">
709 <ul>
710 <li>n/a
711 </ul>
712 <td>NECropResize
713 <td>
714 <ul>
715 <li>NHWC
716 </ul>
717 <td>
718 <table>
719 <tr><th>src0<th>src1<th>src2<th>dst
720 <tr><td>All<td>F32<td>F32<td>F32
721 </table>
722<tr>
723 <td>CLCropResize
724 <td>
725 <ul>
726 <li>NHWC
727 </ul>
728 <td>
729 <table>
730 <tr><th>src0<th>src1<th>src2<th>dst
731 <tr><td>All<td>F32<td>F32<td>F32
732 </table>
733<tr>
734 <td rowspan="2">DeconvolutionLayer
 <td rowspan="2" style="width:200px;"> Function to compute a deconvolution or transpose convolution.
 <td rowspan="2">
737 <ul>
738 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
739 </ul>
740 <td>NEDeconvolutionLayer
741 <td>
742 <ul>
743 <li>NHWC
744 <li>NCHW
745 </ul>
746 <td>
747 <table>
748 <tr><th>src0<th>src1<th>src2<th>dst
749 <tr><td>F16<td>F16<td>F16<td>F16
750 <tr><td>F32<td>F32<td>F32<td>F32
751 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
752 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
753 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
754 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
755 </table>
756<tr>
757 <td>CLDeconvolutionLayer
758 <td>
759 <ul>
760 <li>NHWC
761 <li>NCHW
762 </ul>
763 <td>
764 <table>
765 <tr><th>src0<th>src1<th>src2<th>dst
766 <tr><td>F16<td>F16<td>F16<td>F16
767 <tr><td>F32<td>F32<td>F32<td>F32
768 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
769 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
770 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
771 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
772 </table>
773<tr>
 <td rowspan="1">DeconvolutionLayerUpsample
775 <td rowspan="1" style="width:200px;"> Function to execute deconvolution upsample on OpenCL.
776 <td rowspan="1">
777 <ul>
778 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
779 </ul>
780 <td>CLDeconvolutionLayerUpsample
781 <td>
782 <ul>
783 <li>NHWC
784 <li>NCHW
785 </ul>
786 <td>
787 <table>
788 <tr><th>src<th>dst
789 <tr><td>All<td>All
790 </table>
791<tr>
 <td rowspan="2">DepthConvertLayer
793 <td rowspan="2" style="width:200px;"> Performs a down-scaling depth conversion.
794 <td rowspan="2">
795 <ul>
796 <li>n/a
797 </ul>
798 <td>NEDepthConvertLayer
799 <td>
800 <ul>
801 <li>All
802 </ul>
803 <td>
804 <table>
805 <tr><th>src<th>dst
806 <tr><td>QASYMM8<td>F16, F32
807 <tr><td>U8<td>U16, S16, S32
808 <tr><td>U16<td>U8, U32
809 <tr><td>S16<td>U8, S32
810 <tr><td>BFLOAT16<td>F32
811 <tr><td>F16<td>QASYMM8, F32
812 <tr><td>F32<td>QASYMM8, F16, BFLOAT16
813 </table>
814<tr>
815 <td>CLDepthConvertLayer
816 <td>
817 <ul>
818 <li>All
819 </ul>
820 <td>
821 <table>
822 <tr><th>src<th>dst
823 <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
824 <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
825 <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
826 <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
827 <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
828 <tr><td>F16<td>U8, S8, U16, S16, U32, F32
829 <tr><td>F32<td>U8, S8, U16, S16, U32, F16
830 </table>
831<tr>
832 <td rowspan="2">DepthToSpaceLayer
833 <td rowspan="2" style="width:200px;"> Depth to Space transformation.
834 <td rowspan="2">
835 <ul>
836 <li>ANEURALNETWORKS_DEPTH_TO_SPACE
837 </ul>
838 <td>NEDepthToSpaceLayer
839 <td>
840 <ul>
841 <li>NHWC
842 <li>NCHW
843 </ul>
844 <td>
845 <table>
846 <tr><th>src<th>dst
847 <tr><td>All<td>All
848 </table>
849<tr>
850 <td>CLDepthToSpaceLayer
851 <td>
852 <ul>
853 <li>NHWC
854 <li>NCHW
855 </ul>
856 <td>
857 <table>
858 <tr><th>src<th>dst
859 <tr><td>All<td>All
860 </table>
861<tr>
862 <td rowspan="2">DepthwiseConvolutionLayer
863 <td rowspan="2" style="width:200px;"> Function to perform depthwise separable convolution.
864 <td rowspan="2">
865 <ul>
866 <li>ANEURALNETWORKS_DEPTHWISE_CONV_2D
867 </ul>
868 <td>NEDepthwiseConvolutionLayer
869 <td>
870 <ul>
871 <li>NHWC
872 <li>NCHW
873 </ul>
874 <td>
875 <table>
876 <tr><th>src0<th>src1<th>src2<th>dst
877 <tr><td>F16<td>F16<td>F16<td>F16
878 <tr><td>F32<td>F32<td>F32<td>F32
879 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
880 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
881 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
882 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
883 </table>
884<tr>
885 <td>CLDepthwiseConvolutionLayer
886 <td>
887 <ul>
888 <li>NHWC
889 <li>NCHW
890 </ul>
891 <td>
892 <table>
893 <tr><th>src0<th>src1<th>src2<th>dst
894 <tr><td>F16<td>F16<td>F16<td>F16
895 <tr><td>F32<td>F32<td>F32<td>F32
896 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
897 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
898 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
899 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
900 </table>
901<tr>
 <td rowspan="2">DequantizationLayer
 <td rowspan="2" style="width:200px;"> Function to dequantize the values in a tensor.
 <td rowspan="2">
905 <ul>
906 <li>ANEURALNETWORKS_DEQUANTIZE
907 </ul>
908 <td>NEDequantizationLayer
909 <td>
910 <ul>
911 <li>All
912 </ul>
913 <td>
914 <table>
915 <tr><th>src<th>dst
 <tr><td>QASYMM8<td>F16, F32
917 <tr><td>QASYMM8_SIGNED<td>F16, F32
918 <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
919 <tr><td>QSYMM8<td>F16, F32
920 <tr><td>QSYMM16<td>F16, F32
 </table>
922<tr>
923 <td>CLDequantizationLayer
924 <td>
925 <ul>
926 <li>All
927 </ul>
928 <td>
929 <table>
930 <tr><th>src<th>dst
 <tr><td>QASYMM8<td>F16, F32
932 <tr><td>QASYMM8_SIGNED<td>F16, F32
933 <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
934 <tr><td>QSYMM8<td>F16, F32
935 <tr><td>QSYMM16<td>F16, F32
 </table>
937<tr>
 <td rowspan="1">DetectionPostProcessLayer
939 <td rowspan="1" style="width:200px;"> Function to generate the detection output based on center size encoded boxes, class prediction and anchors by doing non maximum suppression (NMS).
940 <td rowspan="1">
941 <ul>
942 <li>ANEURALNETWORKS_DETECTION_POSTPROCESSING
943 </ul>
944 <td>NEDetectionPostProcessLayer
945 <td>
946 <ul>
947 <li>All
948 </ul>
949 <td>
950 <table>
951 <tr><th>src0 - src2<th>dst0 - dst3
952 <tr><td>QASYMM8<td>F32
953 <tr><td>QASYMM8_SIGNED<td>F32
954 <tr><td>F32<td>F32
955 </table>
956<tr>
 <td rowspan="2">DirectConvolutionLayer
 <td rowspan="2" style="width:200px;"> Function to compute direct convolution.
 <td rowspan="2">
960 <ul>
961 <li>ANEURALNETWORKS_CONV_2D
962 </ul>
963 <td>NEDirectConvolutionLayer
964 <td>
965 <ul>
966 <li>NHWC
967 <li>NCHW
968 </ul>
969 <td>
970 <table>
971 <tr><th>src0<th>src1<th>src2<th>dst
972 <tr><td>F16<td>F16<td>F16<td>F16
973 <tr><td>F32<td>F32<td>F32<td>F32
974 </table>
975<tr>
976 <td>CLDirectConvolutionLayer
977 <td>
978 <ul>
979 <li>NHWC
980 <li>NCHW
981 </ul>
982 <td>
983 <table>
984 <tr><th>src0<th>src1<th>src2<th>dst
985 <tr><td>F16<td>F16<td>F16<td>F16
986 <tr><td>F32<td>F32<td>F32<td>F32
987 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
988 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
989 </table>
990<tr>
 <td rowspan="1">DirectDeconvolutionLayer
992 <td rowspan="1" style="width:200px;"> Function to run the deconvolution layer.
993 <td rowspan="1">
994 <ul>
995 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
996 </ul>
997 <td>CLDirectDeconvolutionLayer
998 <td>
999 <ul>
1000 <li>NHWC
1001 <li>NCHW
1002 </ul>
1003 <td>
1004 <table>
1005 <tr><th>src0<th>src1<th>src2<th>dst
1006 <tr><td>F16<td>F16<td>F16<td>F16
1007 <tr><td>F32<td>F32<td>F32<td>F32
1008 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1009 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1010 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1011 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1012 </table>
1013<tr>
 <td rowspan="13">ElementwiseOperations
 <td rowspan="13" style="width:200px;"> Function to perform elementwise operations, in Cpu: - Div - Max - Min - Pow - SquaredDiff - Comparisons (Equal, Greater, GreaterEqual, Less, LessEqual, NotEqual); in CL: - Add - Sub - Div - Max - Min - Pow - SquaredDiff
1016 <td rowspan="13">
1017 <ul>
1018 <li>ANEURALNETWORKS_MAXIMUM
1019 <li>ANEURALNETWORKS_MINIMUM
1020 <li>ANEURALNETWORKS_POW
1021 <li>ANEURALNETWORKS_DIV
1022 <li>ANEURALNETWORKS_ADD
1023 <li>ANEURALNETWORKS_SUB
1024 <li>ANEURALNETWORKS_EQUAL
1025 <li>ANEURALNETWORKS_GREATER
1026 <li>ANEURALNETWORKS_GREATER_EQUAL
1027 <li>ANEURALNETWORKS_LESS
1028 <li>ANEURALNETWORKS_LESS_EQUAL
1029 <li>ANEURALNETWORKS_NOT_EQUAL
1030 </ul>
1031 <td>NEElementwiseMax
1032 <td>
1033 <ul>
1034 <li>All
1035 </ul>
1036 <td>
1037 <table>
1038 <tr><th>src0<th>src1<th>dst
1039 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1040 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1041 <tr><td>S32<td>S32<td>S32
1042 <tr><td>S16<td>S16<td>S16
1043 <tr><td>F16<td>F16<td>F16
1044 <tr><td>F32<td>F32<td>F32
1045 </table>
1046<tr>
1047 <td>NEElementwiseMin
1048 <td>
1049 <ul>
1050 <li>All
1051 </ul>
1052 <td>
1053 <table>
1054 <tr><th>src0<th>src1<th>dst
1055 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1056 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1057 <tr><td>S32<td>S32<td>S32
1058 <tr><td>S16<td>S16<td>S16
1059 <tr><td>F16<td>F16<td>F16
1060 <tr><td>F32<td>F32<td>F32
1061 </table>
1062<tr>
1063 <td>NEElementwiseSquaredDiff
1064 <td>
1065 <ul>
1066 <li>All
1067 </ul>
1068 <td>
1069 <table>
1070 <tr><th>src0<th>src1<th>dst
1071 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1072 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1073 <tr><td>S32<td>S32<td>S32
1074 <tr><td>S16<td>S16<td>S16
1075 <tr><td>F16<td>F16<td>F16
1076 <tr><td>F32<td>F32<td>F32
1077 </table>
1078<tr>
1079 <td>NEElementwiseDivision
1080 <td>
1081 <ul>
1082 <li>All
1083 </ul>
1084 <td>
1085 <table>
1086 <tr><th>src0<th>src1<th>dst
1087 <tr><td>F16<td>F16<td>F16
1088 <tr><td>F32<td>F32<td>F32
1089 </table>
1090<tr>
1091 <td>NEElementwisePower
1092 <td>
1093 <ul>
1094 <li>All
1095 </ul>
1096 <td>
1097 <table>
1098 <tr><th>src0<th>src1<th>dst
1099 <tr><td>F16<td>F16<td>F16
1100 <tr><td>F32<td>F32<td>F32
1101 </table>
1102<tr>
1103 <td>NEElementwiseComparison
1104 <td>
1105 <ul>
1106 <li>All
1107 </ul>
1108 <td>
1109 <table>
1110 <tr><th>src0<th>src1<th>dst
1111 <tr><td>QASYMM8<td>QASYMM8<td>U8
1112 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>U8
1113 <tr><td>S32<td>S32<td>U8
1114 <tr><td>U8<td>U8<td>U8
1115 <tr><td>S16<td>S16<td>U8
1116 <tr><td>F16<td>F16<td>U8
1117 <tr><td>F32<td>F32<td>U8
1118 </table>
1119<tr>
1120 <td>CLArithmeticAddition
1121 <td>
1122 <ul>
1123 <li>All
1124 </ul>
1125 <td>
1126 <table>
1127 <tr><th>src0<th>src1<th>dst
1128 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1129 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1130 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1131 <tr><td>U8<td>U8<td>U8
1132 <tr><td>U8<td>U8<td>S16
1133 <tr><td>U8<td>S16<td>S16
1134 <tr><td>S16<td>U8<td>S16
1135 <tr><td>S16<td>S16<td>S16
1136 <tr><td>S32<td>S32<td>S32
1137 <tr><td>F16<td>F16<td>F16
1138 <tr><td>F32<td>F32<td>F32
1139 </table>
1140<tr>
1141 <td>CLArithmeticSubtraction
1142 <td>
1143 <ul>
1144 <li>All
1145 </ul>
1146 <td>
1147 <table>
1148 <tr><th>src0<th>src1<th>dst
1149 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1150 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1151 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1152 <tr><td>U8<td>U8<td>U8
1153 <tr><td>U8<td>U8<td>S16
1154 <tr><td>U8<td>S16<td>S16
1155 <tr><td>S16<td>U8<td>S16
1156 <tr><td>S16<td>S16<td>S16
1157 <tr><td>S32<td>S32<td>S32
1158 <tr><td>F16<td>F16<td>F16
1159 <tr><td>F32<td>F32<td>F32
1160 </table>
1161<tr>
1162 <td>CLArithmeticDivision
1163 <td>
1164 <ul>
1165 <li>All
1166 </ul>
1167 <td>
1168 <table>
1169 <tr><th>src0<th>src1<th>dst
1170 <tr><td>F16<td>F16<td>F16
1171 <tr><td>F32<td>F32<td>F32
1172 </table>
1173<tr>
1174 <td>CLElementwiseMax
1175 <td>
1176 <ul>
1177 <li>All
1178 </ul>
1179 <td>
1180 <table>
1181 <tr><th>src0<th>src1<th>dst
1182 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1183 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1184 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1185 <tr><td>U8<td>U8<td>U8
1186 <tr><td>S16<td>S16<td>S16
1187 <tr><td>S32<td>S32<td>S32
1188 <tr><td>U32<td>U32<td>U32
1189 <tr><td>F16<td>F16<td>F16
1190 <tr><td>F32<td>F32<td>F32
1191 </table>
1192<tr>
1193 <td>CLElementwiseMin
1194 <td>
1195 <ul>
1196 <li>All
1197 </ul>
1198 <td>
1199 <table>
1200 <tr><th>src0<th>src1<th>dst
1201 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1202 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1203 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1204 <tr><td>U8<td>U8<td>U8
1205 <tr><td>S16<td>S16<td>S16
1206 <tr><td>S32<td>S32<td>S32
1207 <tr><td>U32<td>U32<td>U32
1208 <tr><td>F16<td>F16<td>F16
1209 <tr><td>F32<td>F32<td>F32
1210 </table>
1211<tr>
1212 <td>CLElementwiseSquaredDiff
1213 <td>
1214 <ul>
1215 <li>All
1216 </ul>
1217 <td>
1218 <table>
1219 <tr><th>src0<th>src1<th>dst
1220 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1221 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1222 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1223 <tr><td>U8<td>U8<td>U8
1224 <tr><td>S16<td>S16<td>S16
1225 <tr><td>F16<td>F16<td>F16
1226 <tr><td>F32<td>F32<td>F32
1227 </table>
1228<tr>
1229 <td>CLElementwisePower
1230 <td>
1231 <ul>
1232 <li>All
1233 </ul>
1234 <td>
1235 <table>
1236 <tr><th>src0<th>src1<th>dst
1237 <tr><td>F16<td>F16<td>F16
1238 <tr><td>F32<td>F32<td>F32
1239 </table>
1240<tr>
1241 <td rowspan="8">ElementwiseUnaryLayer
1242 <td rowspan="8" style="width:200px;"> Function to perform: - Rsqrt - Exp - Neg - Log - Abs - Round - Sin
1243 <td rowspan="8">
1244 <ul>
1245 <li>ANEURALNETWORKS_ABS
1246 <li>ANEURALNETWORKS_EXP
1247 <li>ANEURALNETWORKS_LOG
1248 <li>ANEURALNETWORKS_NEG
1249 <li>ANEURALNETWORKS_RSQRT
1250 <li>ANEURALNETWORKS_SIN
1251 </ul>
1252 <td>NEElementwiseUnaryLayer
1253 <td>
1254 <ul>
1255 <li>All
1256 </ul>
1257 <td>
1258 <table>
1259 <tr><th>src<th>dst
1260 <tr><td>F16<td>F16
1261 <tr><td>F32<td>F32
1262 <tr><td>S32<td>S32
1263 </table>
1264<tr>
1265 <td>CLRsqrtLayer
1266 <td>
1267 <ul>
1268 <li>All
1269 </ul>
1270 <td>
1271 <table>
1272 <tr><th>src<th>dst
1273 <tr><td>F16<td>F16
1274 <tr><td>F32<td>F32
1275 </table>
1276<tr>
1277 <td>CLExpLayer
1278 <td>
1279 <ul>
1280 <li>All
1281 </ul>
1282 <td>
1283 <table>
1284 <tr><th>src<th>dst
1285 <tr><td>F16<td>F16
1286 <tr><td>F32<td>F32
1287 </table>
1288<tr>
1289 <td>CLNegLayer
1290 <td>
1291 <ul>
1292 <li>All
1293 </ul>
1294 <td>
1295 <table>
1296 <tr><th>src<th>dst
1297 <tr><td>F16<td>F16
1298 <tr><td>F32<td>F32
 <tr><td>S32<td>S32
 </table>
1301<tr>
1302 <td>CLSinLayer
1303 <td>
1304 <ul>
1305 <li>All
1306 </ul>
1307 <td>
1308 <table>
1309 <tr><th>src<th>dst
1310 <tr><td>F16<td>F16
1311 <tr><td>F32<td>F32
1312 </table>
1313<tr>
1314 <td>CLLogLayer
1315 <td>
1316 <ul>
1317 <li>All
1318 </ul>
1319 <td>
1320 <table>
1321 <tr><th>src<th>dst
1322 <tr><td>F16<td>F16
1323 <tr><td>F32<td>F32
1324 </table>
1325<tr>
1326 <td>CLAbsLayer
1327 <td>
1328 <ul>
1329 <li>All
1330 </ul>
1331 <td>
1332 <table>
1333 <tr><th>src<th>dst
1334 <tr><td>F16<td>F16
1335 <tr><td>F32<td>F32
1336 </table>
1337<tr>
1338 <td>CLRoundLayer
1339 <td>
1340 <ul>
1341 <li>All
1342 </ul>
1343 <td>
1344 <table>
1345 <tr><th>src<th>dst
1346 <tr><td>F16<td>F16
1347 <tr><td>F32<td>F32
1348 </table>
1349<tr>
 <td rowspan="2">FFT1D
 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 1D.
 <td rowspan="2">
1353 <ul>
 <li>n/a
 </ul>
1356 <td>NEFFT1D
1357 <td>
1358 <ul>
1359 <li>All
1360 </ul>
1361 <td>
1362 <table>
1363 <tr><th>src<th>dst
1364 <tr><td>F32<td>F32
1365 </table>
1366<tr>
1367 <td>CLFFT1D
1368 <td>
1369 <ul>
1370 <li>All
1371 </ul>
1372 <td>
1373 <table>
1374 <tr><th>src<th>dst
1375 <tr><td>F32<td>F32
1376 <tr><td>F16<td>F16
1377 </table>
1378<tr>
1379 <td rowspan="2">FFT2D
 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 2D.
 <td rowspan="2">
1382 <ul>
 <li>n/a
 </ul>
1385 <td>NEFFT2D
1386 <td>
1387 <ul>
1388 <li>All
1389 </ul>
1390 <td>
1391 <table>
1392 <tr><th>src<th>dst
1393 <tr><td>F32<td>F32
1394 </table>
1395<tr>
1396 <td>CLFFT2D
1397 <td>
1398 <ul>
1399 <li>All
1400 </ul>
1401 <td>
1402 <table>
1403 <tr><th>src<th>dst
1404 <tr><td>F32<td>F32
1405 <tr><td>F16<td>F16
1406 </table>
1407<tr>
1408 <td rowspan="2">FFTConvolutionLayer
 <td rowspan="2" style="width:200px;"> Fast Fourier Transform Convolution.
 <td rowspan="2">
1411 <ul>
1412 <li>ANEURALNETWORKS_CONV_2D
1413 </ul>
1414 <td>NEFFTConvolutionLayer
1415 <td>
1416 <ul>
1417 <li>All
1418 </ul>
1419 <td>
1420 <table>
1421 <tr><th>src<th>dst
1422 <tr><td>F32<td>F32
1423 </table>
1424<tr>
1425 <td>CLFFTConvolutionLayer
1426 <td>
1427 <ul>
1428 <li>All
1429 </ul>
1430 <td>
1431 <table>
1432 <tr><th>src<th>dst
1433 <tr><td>F32<td>F32
1434 <tr><td>F16<td>F16
1435 </table>
1436<tr>
1437 <td rowspan="2">Fill
 <td rowspan="2" style="width:200px;"> Set the values of a tensor with a given value.
 <td rowspan="2">
1440 <ul>
1441 <li>ANEURALNETWORKS_FILL
1442 </ul>
1443 <td>NEFill
1444 <td>
1445 <ul>
1446 <li>All
1447 </ul>
1448 <td>
1449 <table>
1450 <tr><th>src<th>dst
1451 <tr><td>All<td>All
1452 </table>
1453<tr>
1454 <td>CLFill
1455 <td>
1456 <ul>
1457 <li>All
1458 </ul>
1459 <td>
1460 <table>
1461 <tr><th>src<th>dst
1462 <tr><td>All<td>All
1463 </table>
1464<tr>
 <td rowspan="1">FillBorder
 <td rowspan="1" style="width:200px;"> Function to fill the borders within the XY-planes.
 <td rowspan="1">
 <ul>
1469 <li>n/a
1470 </ul>
1471 <td>NEFillBorder
1472 <td>
1473 <ul>
1474 <li>All
1475 </ul>
1476 <td>
1477 <table>
1478 <tr><th>src<th>dst
1479 <tr><td>All<td>All
1480 </table>
1481<tr>
 <td rowspan="2">FlattenLayer
 <td rowspan="2" style="width:200px;"> Reshape a tensor to be 1D.
1484 <td rowspan="2">
1485 <ul>
1486 <li>ANEURALNETWORKS_RESHAPE
1487 </ul>
1488 <td>NEFlattenLayer
1489 <td>
1490 <ul>
1491 <li>All
1492 </ul>
1493 <td>
1494 <table>
1495 <tr><th>src<th>dst
1496 <tr><td>All<td>All
1497 </table>
1498<tr>
1499 <td>CLFlattenLayer
1500 <td>
1501 <ul>
1502 <li>All
1503 </ul>
1504 <td>
1505 <table>
1506 <tr><th>src<th>dst
1507 <tr><td>All<td>All
1508 </table>
1509<tr>
 <td rowspan="2">Floor
 <td rowspan="2" style="width:200px;"> Round each value down to the nearest integer.
 <td rowspan="2">
1513 <ul>
1514 <li>ANEURALNETWORKS_FLOOR
1515 </ul>
1516 <td>NEFloor
1517 <td>
1518 <ul>
1519 <li>All
1520 </ul>
1521 <td>
1522 <table>
1523 <tr><th>src<th>dst
1524 <tr><td>F32<td>F32
1525 <tr><td>F16<td>F16
1526 </table>
1527<tr>
1528 <td>CLFloor
1529 <td>
1530 <ul>
1531 <li>All
1532 </ul>
1533 <td>
1534 <table>
1535 <tr><th>src<th>dst
1536 <tr><td>F32<td>F32
1537 <tr><td>F16<td>F16
1538 </table>
1539<tr>
 <td rowspan="2">FullyConnectedLayer
1541 <td rowspan="2" style="width:200px;"> Function to perform a fully connected / dense layer.
1542 <td rowspan="2">
1543 <ul>
1544 <li>ANEURALNETWORKS_FULLY_CONNECTED
1545 </ul>
 <td>NEFullyConnectedLayer
 <td>
1548 <ul>
1549 <li>NHWC
1550 <li>NCHW
1551 </ul>
1552 <td>
1553 <table>
1554 <tr><th>src0<th>src1<th>src2<th>dst
1555 <tr><td>F16<td>F16<td>F16<td>F16
1556 <tr><td>F32<td>F32<td>F32<td>F32
1557 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1558 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1559 </table>
1560<tr>
 <td>CLFullyConnectedLayer
 <td>
1563 <ul>
1564 <li>NHWC
1565 <li>NCHW
1566 </ul>
1567 <td>
1568 <table>
1569 <tr><th>src0<th>src1<th>src2<th>dst
1570 <tr><td>F16<td>F16<td>F16<td>F16
1571 <tr><td>F32<td>F32<td>F32<td>F32
1572 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1573 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1574 </table>
1575<tr>
1576 <td rowspan="2">FuseBatchNormalization
1577 <td rowspan="2" style="width:200px;"> Function to fuse the batch normalization node to a preceding convolution node.
1578 <td rowspan="2">
1579 <ul>
1580 <li>n/a
1581 </ul>
1582 <td>NEFuseBatchNormalization
1583 <td>
1584 <ul>
1585 <li>NHWC
1586 <li>NCHW
1587 </ul>
1588 <td>
1589 <table>
1590 <tr><th>src<th>dst
1591 <tr><td>F32<td>F32
1592 <tr><td>F16<td>F16
1593 </table>
1594<tr>
1595 <td>CLFuseBatchNormalization
1596 <td>
1597 <ul>
1598 <li>NHWC
1599 <li>NCHW
1600 </ul>
1601 <td>
1602 <table>
1603 <tr><th>src<th>dst
1604 <tr><td>F32<td>F32
1605 <tr><td>F16<td>F16
1606 </table>
1607<tr>
1608 <td rowspan="2">Gather
1609 <td rowspan="2" style="width:200px;"> Performs the Gather operation along the chosen axis.
1610 <td rowspan="2">
1611 <ul>
1612 <li>ANEURALNETWORKS_GATHER
1613 </ul>
1614 <td>NEGather
1615 <td>
1616 <ul>
1617 <li>All
1618 </ul>
1619 <td>
1620 <table>
1621 <tr><th>src<th>dst
1622 <tr><td>All<td>All
1623 </table>
1624<tr>
1625 <td>CLGather
1626 <td>
1627 <ul>
1628 <li>All
1629 </ul>
1630 <td>
1631 <table>
1632 <tr><th>src<th>dst
1633 <tr><td>All<td>All
1634 </table>
1635<tr>
1636 <td rowspan="2">GEMM
1637 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1638 <td rowspan="2">
1639 <ul>
1640 <li>n/a
1641 </ul>
1642 <td>NEGEMM
1643 <td>
1644 <ul>
1645 <li>All
1646 </ul>
1647 <td>
1648 <table>
1649 <tr><th>src0<th>src1<th>src2<th>dst
1650 <tr><td>F32<td>F32<td>F32<td>F32
1651 <tr><td>F16<td>F16<td>F16<td>F16
1652 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1653 </table>
1654<tr>
 <td>CLGEMM
 <td>
1657 <ul>
1658 <li>All
1659 </ul>
1660 <td>
1661 <table>
1662 <tr><th>src0<th>src1<th>src2<th>dst
1663 <tr><td>F32<td>F32<td>F32<td>F32
1664 <tr><td>F16<td>F16<td>F16<td>F16
1665 </table>
1666<tr>
 <td rowspan="1">GEMMConv2d
 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1669 <td rowspan="1">
1670 <ul>
1671 <li>ANEURALNETWORKS_CONV_2D
1672 </ul>
1673 <td>NEGEMMConv2d
1674 <td>
1675 <ul>
1676 <li>All
1677 </ul>
1678 <td>
1679 <table>
1680 <tr><th>src0<th>src1<th>src2<th>dst
1681 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1682 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1683 <tr><td>F16<td>F16<td>F16<td>F16
1684 <tr><td>F32<td>F32<td>F32<td>F32
1685 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1686 </table>
1687<tr>
 <td rowspan="2">GEMMConvolutionLayer
1689 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1690 <td rowspan="2">
1691 <ul>
1692 <li>ANEURALNETWORKS_CONV_2D
1693 </ul>
 <td>NEGEMMConvolutionLayer
 <td>
1696 <ul>
1697 <li>NHWC
1698 <li>NCHW
1699 </ul>
1700 <td>
1701 <table>
1702 <tr><th>src0<th>src1<th>src2<th>dst
1703 <tr><td>F16<td>F16<td>F16<td>F16
1704 <tr><td>F32<td>F32<td>F32<td>F32
1705 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1706 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1707 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1708 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1709 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1710 </table>
1711<tr>
 <td>CLGEMMConvolutionLayer
 <td>
1714 <ul>
1715 <li>NHWC
1716 <li>NCHW
1717 </ul>
1718 <td>
1719 <table>
1720 <tr><th>src0<th>src1<th>src2<th>dst
1721 <tr><td>F16<td>F16<td>F16<td>F16
1722 <tr><td>F32<td>F32<td>F32<td>F32
1723 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1724 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1725 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1726 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1727 </table>
1728<tr>
 <td rowspan="1">GEMMDeconvolutionLayer
1730 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1731 <td rowspan="1">
1732 <ul>
1733 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
1734 </ul>
1735 <td>CLGEMMDeconvolutionLayer
1736 <td>
1737 <ul>
1738 <li>NHWC
1739 </ul>
1740 <td>
1741 <table>
1742 <tr><th>src0<th>src1<th>src2<th>dst
1743 <tr><td>F16<td>F16<td>F16<td>F16
1744 <tr><td>F32<td>F32<td>F32<td>F32
1745 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1746 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1747 </table>
1748<tr>
 <td rowspan="2">GEMMLowpMatrixMultiplyCore
1750 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1751 <td rowspan="2">
1752 <ul>
1753 <li>n/a
1754 </ul>
1755 <td>NEGEMMLowpMatrixMultiplyCore
1756 <td>
1757 <ul>
1758 <li>NHWC
1759 <li>NCHW
1760 </ul>
1761 <td>
1762 <table>
1763 <tr><th>src0<th>src1<th>src2<th>dst
1764 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1765 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1766 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1767 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1768 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1769 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1770 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1771 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1772 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1773 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1774 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1775 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1776 </table>
1777<tr>
1778 <td>CLGEMMLowpMatrixMultiplyCore
1779 <td>
1780 <ul>
1781 <li>NHWC
1782 <li>NCHW
1783 </ul>
1784 <td>
1785 <table>
1786 <tr><th>src0<th>src1<th>src2<th>dst
1787 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1788 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1789 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1790 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1791 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1792 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1793 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1794 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1795 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1796 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1797 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1798 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1799 </table>
1800<tr>
 <td rowspan="2">GEMMLowpOutputStage
1802 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1803 <td rowspan="2">
1804 <ul>
1805 <li>n/a
1806 </ul>
1807 <td>NEGEMMLowpOutputStage
1808 <td>
1809 <ul>
1810 <li>All
1811 </ul>
1812 <td>
1813 <table>
1814 <tr><th>src0<th>src1<th>dst
1815 <tr><td>S32<td>S32<td>QASYMM8
1816 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1817 <tr><td>S32<td>S32<td>QSYMM16
1818 </table>
1819<tr>
1820 <td>CLGEMMLowpOutputStage
1821 <td>
1822 <ul>
1823 <li>All
1824 </ul>
1825 <td>
1826 <table>
1827 <tr><th>src0<th>src1<th>dst
1828 <tr><td>S32<td>S32<td>QASYMM8
1829 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1830 <tr><td>S32<td>S32<td>QSYMM16
1831 </table>
1832<tr>
 <td rowspan="2">GenerateProposalsLayer
1834 <td rowspan="2" style="width:200px;"> Function to generate proposals for a RPN (Region Proposal Network).
1835 <td rowspan="2">
1836 <ul>
1837 <li>ANEURALNETWORKS_GENERATE_PROPOSALS
1838 </ul>
1839 <td>NEGenerateProposalsLayer
1840 <td>
1841 <ul>
1842 <li>All
1843 </ul>
1844 <td>
1845 <table>
1846 <tr><th>src0<th>src1<th>src2<th>dst
1847 <tr><td>F16<td>F16<td>F16<td>F16
1848 <tr><td>F32<td>F32<td>F32<td>F32
1849 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1850 </table>
1851<tr>
1852 <td>CLGenerateProposalsLayer
1853 <td>
1854 <ul>
1855 <li>All
1856 </ul>
1857 <td>
1858 <table>
1859 <tr><th>src0<th>src1<th>src2<th>dst
1860 <tr><td>F16<td>F16<td>F16<td>F16
1861 <tr><td>F32<td>F32<td>F32<td>F32
1862 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1863 </table>
1864<tr>
1865 <td rowspan="2">InstanceNormalizationLayer
 <td rowspan="2" style="width:200px;"> Function to perform an Instance normalization on a given axis.
1867 <td rowspan="2">
1868 <ul>
1869 <li>ANEURALNETWORKS_INSTANCE_NORMALIZATION
1870 </ul>
1871 <td>NEInstanceNormalizationLayer
1872 <td>
1873 <ul>
1874 <li>NHWC
1875 <li>NCHW
1876 </ul>
1877 <td>
1878 <table>
1879 <tr><th>src<th>dst
1880 <tr><td>F16<td>F16
1881 <tr><td>F32<td>F32
1882 </table>
1883<tr>
1884 <td>CLInstanceNormalizationLayer
1885 <td>
1886 <ul>
1887 <li>NHWC
1888 <li>NCHW
1889 </ul>
1890 <td>
1891 <table>
1892 <tr><th>src<th>dst
1893 <tr><td>F16<td>F16
1894 <tr><td>F32<td>F32
1895 </table>
1896<tr>
1897 <td rowspan="2">L2NormalizeLayer
1898 <td rowspan="2" style="width:200px;"> Function to perform a L2 normalization on a given axis.
1899 <td rowspan="2">
1900 <ul>
1901 <li>ANEURALNETWORKS_L2_NORMALIZATION
1902 </ul>
1903 <td>NEL2NormalizeLayer
1904 <td>
1905 <ul>
1906 <li>NHWC
1907 <li>NCHW
1908 </ul>
1909 <td>
1910 <table>
1911 <tr><th>src<th>dst
1912 <tr><td>F16<td>F16
1913 <tr><td>F32<td>F32
1914 </table>
1915<tr>
1916 <td>CLL2NormalizeLayer
1917 <td>
1918 <ul>
1919 <li>NHWC
1920 <li>NCHW
1921 </ul>
1922 <td>
1923 <table>
1924 <tr><th>src<th>dst
1925 <tr><td>F16<td>F16
1926 <tr><td>F32<td>F32
1927 </table>
1928<tr>
 <td rowspan="3">Logical
1930 <td rowspan="3" style="width:200px;"> Function to perform: - Logical AND - Logical OR - Logical NOT
1931 <td rowspan="3">
1932 <ul>
1933 <li>n/a
1934 </ul>
1935 <td>NELogicalAnd
1936 <td>
1937 <ul>
1938 <li>All
1939 </ul>
1940 <td>
1941 <table>
1942 <tr><th>src0<th>src1<th>dst
1943 <tr><td>U8<td>U8<td>U8
1944 </table>
1945<tr>
1946 <td>NELogicalOr
1947 <td>
1948 <ul>
1949 <li>All
1950 </ul>
1951 <td>
1952 <table>
1953 <tr><th>src0<th>src1<th>dst
1954 <tr><td>U8<td>U8<td>U8
1955 </table>
1956<tr>
1957 <td>NELogicalNot
1958 <td>
1959 <ul>
1960 <li>All
1961 </ul>
1962 <td>
1963 <table>
1964 <tr><th>src<th>dst
1965 <tr><td>U8<td>U8
1966 </table>
1967<tr>
1968 <td rowspan="1">LogicalAnd
1969 <td rowspan="1" style="width:200px;"> Function to perform Logical AND.
1970 <td rowspan="1">
1971 <ul>
1972 <li>n/a
1973 </ul>
1974 <td>CLLogicalAnd
1975 <td>
1976 <ul>
1977 <li>All
1978 </ul>
1979 <td>
1980 <table>
1981 <tr><th>src0<th>src1<th>dst
1982 <tr><td>U8<td>U8<td>U8
1983 </table>
1984<tr>
1985 <td rowspan="1">LogicalOr
1986 <td rowspan="1" style="width:200px;"> Function to perform Logical OR.
1987 <td rowspan="1">
1988 <ul>
1989 <li>n/a
1990 </ul>
1991 <td>CLLogicalOr
1992 <td>
1993 <ul>
1994 <li>All
1995 </ul>
1996 <td>
1997 <table>
1998 <tr><th>src0<th>src1<th>dst
1999 <tr><td>U8<td>U8<td>U8
2000 </table>
2001<tr>
2002 <td rowspan="1">LogicalNot
2003 <td rowspan="1" style="width:200px;"> Function to perform Logical NOT.
2004 <td rowspan="1">
2005 <ul>
2006 <li>n/a
2007 </ul>
2008 <td>CLLogicalNot
2009 <td>
2010 <ul>
2011 <li>All
2012 </ul>
2013 <td>
2014 <table>
2015 <tr><th>src<th>dst
2016 <tr><td>U8<td>U8
2017 </table>
2018<tr>
 <td rowspan="2">LSTMLayer
2020 <td rowspan="2" style="width:200px;"> Function to perform a single time step in a Long Short-Term Memory (LSTM) layer.
2021 <td rowspan="2">
2022 <ul>
2023 <li>ANEURALNETWORKS_LSTM
2024 </ul>
2025 <td>NELSTMLayer
2026 <td>
2027 <ul>
2028 <li>All
2029 </ul>
2030 <td>
2031 <table>
2032 <tr><th>src0 - src13<th>dst0 - dst3
2033 <tr><td>F16<td>F16
2034 <tr><td>F32<td>F32
2035 </table>
2036<tr>
2037 <td>CLLSTMLayer
2038 <td>
2039 <ul>
2040 <li>All
2041 </ul>
2042 <td>
2043 <table>
2044 <tr><th>src0 - src13<th>dst0 - dst3
2045 <tr><td>F16<td>F16
2046 <tr><td>F32<td>F32
2047 </table>
2048<tr>
2049 <td rowspan="2">LSTMLayerQuantized
2050 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory)
2051 <td rowspan="2">
2052 <ul>
2053 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2054 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2055 </ul>
2056 <td>NELSTMLayerQuantized
2057 <td>
2058 <ul>
2059 <li>All
2060 </ul>
2061 <td>
2062 <table>
2063 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2064 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2065 </table>
2066<tr>
2067 <td>CLLSTMLayerQuantized
2068 <td>
2069 <ul>
2070 <li>All
2071 </ul>
2072 <td>
2073 <table>
2074 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2075 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2076 </table>
2077<tr>
 <td rowspan="2">MatMul
2079 <td rowspan="2" style="width:200px;"> Computes a matrix multiplication in batches.
2080 <td rowspan="2">
2081 <ul>
2082 <li>ANEURALNETWORKS_BATCH_MATMUL
2083 </ul>
2084 <td>NEMatMul
2085 <td>
2086 <ul>
2087 <li>Any
2088 </ul>
2089 <td>
2090 <table>
2091 <tr><th>lhs<th>rhs<th>dst
2092 <tr><td>F32<td>F32<td>F32
2093 <tr><td>F16<td>F16<td>F16
2094 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2095 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2096 </table>
2097<tr>
2098 <td>CLMatMul
2099 <td>
2100 <ul>
2101 <li>All
2102 </ul>
2103 <td>
2104 <table>
2105 <tr><th>lhs<th>rhs<th>dst
2106 <tr><td>F32<td>F32<td>F32
2107 <tr><td>F16<td>F16<td>F16
2108 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2109 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2110 </table>
2111<tr>
 <td rowspan="2">MaxUnpoolingLayer
2113 <td rowspan="2" style="width:200px;"> Function to perform MaxUnpooling.
2114 <td rowspan="2">
2115 <ul>
2116 <li>n/a
2117 </ul>
2118 <td>NEMaxUnpoolingLayer
2119 <td>
2120 <ul>
2121 <li>NHWC
2122 <li>NCHW
2123 </ul>
2124 <td>
2125 <table>
2126 <tr><th>src<th>dst
2127 <tr><td>QASYMM8<td>QASYMM8
2128 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2129 <tr><td>F16<td>F16
2130 <tr><td>F32<td>F32
2131 </table>
2132<tr>
2133 <td>CLMaxUnpoolingLayer
2134 <td>
2135 <ul>
2136 <li>NHWC
2137 <li>NCHW
2138 </ul>
2139 <td>
2140 <table>
2141 <tr><th>src<th>dst
2142 <tr><td>QASYMM8<td>QASYMM8
2143 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2144 <tr><td>F16<td>F16
2145 <tr><td>F32<td>F32
2146 </table>
2147<tr>
2148 <td rowspan="2">MeanStdDevNormalizationLayer
2149 <td rowspan="2" style="width:200px;"> Function to execute mean and standard deviation normalization.
2150 <td rowspan="2">
2151 <ul>
2152 <li>n/a
2153 </ul>
2154 <td>NEMeanStdDevNormalizationLayer
2155 <td>
2156 <ul>
2157 <li>NHWC
2158 <li>NCHW
2159 </ul>
2160 <td>
2161 <table>
2162 <tr><th>src<th>dst
2163 <tr><td>F32<td>F32
2164 <tr><td>F16<td>F16
2165 </table>
2166<tr>
2167 <td>CLMeanStdDevNormalizationLayer
2168 <td>
2169 <ul>
2170 <li>NHWC
2171 <li>NCHW
2172 </ul>
2173 <td>
2174 <table>
2175 <tr><th>src<th>dst
2176 <tr><td>F32<td>F32
2177 <tr><td>F16<td>F16
2178 </table>
2179<tr>
2180 <td rowspan="2">NormalizationLayer
2181 <td rowspan="2" style="width:200px;"> Function to compute normalization layer.
2182 <td rowspan="2">
2183 <ul>
2184 <li>ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION
2185 </ul>
2186 <td>NENormalizationLayer
2187 <td>
2188 <ul>
2189 <li>NHWC
2190 <li>NCHW
2191 </ul>
2192 <td>
2193 <table>
2194 <tr><th>src<th>dst
2195 <tr><td>F32<td>F32
2196 <tr><td>F16<td>F16
2197 </table>
2198<tr>
2199 <td>CLNormalizationLayer
2200 <td>
2201 <ul>
2202 <li>NHWC
2203 <li>NCHW
2204 </ul>
2205 <td>
2206 <table>
2207 <tr><th>src<th>dst
2208 <tr><td>F32<td>F32
2209 <tr><td>F16<td>F16
2210 </table>
2211<tr>
 <td rowspan="1">NormalizePlanarYUVLayer
2213 <td rowspan="1" style="width:200px;"> Function to compute normalization planar YUV layer.
2214 <td rowspan="1">
2215 <ul>
2216 <li>n/a
2217 </ul>
2218 <td>CLNormalizePlanarYUVLayer
2219 <td>
2220 <ul>
2221 <li>NHWC
2222 <li>NCHW
2223 </ul>
2224 <td>
2225 <table>
2226 <tr><th>src<th>dst
2227 <tr><td>F32<td>F32
2228 <tr><td>F16<td>F16
2229 <tr><td>QASYMM8<td>QASYMM8
2230 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2231 </table>
2232<tr>
2233 <td rowspan="2">PadLayer
2234 <td rowspan="2" style="width:200px;"> Function to pad a tensor.
2235 <td rowspan="2">
2236 <ul>
2237 <li>ANEURALNETWORKS_PAD
2238 <li>ANEURALNETWORKS_PAD_V2
2239 </ul>
2240 <td>NEPadLayer
2241 <td>
2242 <ul>
2243 <li>NHWC
2244 <li>NCHW
2245 </ul>
2246 <td>
2247 <table>
2248 <tr><th>src<th>dst
2249 <tr><td>All<td>All
2250 </table>
2251<tr>
2252 <td>CLPadLayer
2253 <td>
2254 <ul>
2255 <li>NHWC
2256 <li>NCHW
2257 </ul>
2258 <td>
2259 <table>
2260 <tr><th>src<th>dst
2261 <tr><td>All<td>All
2262 </table>
2263<tr>
2264 <td rowspan="2">Permute
2265 <td rowspan="2" style="width:200px;"> Function to permute the dimensions of an N-dimensional tensor.
2266 <td rowspan="2">
2267 <ul>
2268 <li>ANEURALNETWORKS_TRANSPOSE
2269 </ul>
2270 <td>NEPermute
2271 <td>
2272 <ul>
2273 <li>NHWC
2274 <li>NCHW
2275 </ul>
2276 <td>
2277 <table>
2278 <tr><th>src<th>dst
2279 <tr><td>All<td>All
2280 </table>
2281<tr>
2282 <td>CLPermute
2283 <td>
2284 <ul>
2285 <li>NHWC
2286 <li>NCHW
2287 </ul>
2288 <td>
2289 <table>
2290 <tr><th>src<th>dst
2291 <tr><td>All<td>All
2292 </table>
2293<tr>
2294 <td rowspan="2">PixelWiseMultiplication
2295 <td rowspan="2" style="width:200px;"> Function to perform a pixel-wise multiplication.
2296 <td rowspan="2">
2297 <ul>
2298 <li>ANEURALNETWORKS_MUL
2299 </ul>
2300 <td>NEPixelWiseMultiplication
2301 <td>
2302 <ul>
2303 <li>All
2304 </ul>
2305 <td>
2306 <table>
2307 <tr><th>src0<th>src1<th>dst
2308 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2309 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2310 <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
2311 <tr><td>QSYMM16<td>QSYMM16<td>S32
2312 <tr><td>U8<td>U8<td>U8
2313 <tr><td>U8<td>U8<td>S16
2314 <tr><td>U8<td>S16<td>S16
2315 <tr><td>S16<td>U8<td>S16
2316 <tr><td>S16<td>S16<td>S16
2317 <tr><td>F16<td>F16<td>F16
2318 <tr><td>F32<td>F32<td>F32
2319 </table>
2320<tr>
2321 <td>CLPixelWiseMultiplication
2322 <td>
2323 <ul>
2324 <li>All
2325 </ul>
2326 <td>
2327 <table>
2328 <tr><th>src0<th>src1<th>dst
2329 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2330 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2331 <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
2332 <tr><td>QSYMM16<td>QSYMM16<td>S32
2333 <tr><td>U8<td>U8<td>U8
2334 <tr><td>U8<td>U8<td>S16
2335 <tr><td>U8<td>S16<td>S16
2336 <tr><td>S16<td>U8<td>S16
2337 <tr><td>S16<td>S16<td>S16
2338 <tr><td>F16<td>F16<td>F16
2339 <tr><td>F32<td>F32<td>F32
2340 <tr><td>S32<td>S32<td>S32
2341 </table>
2342<tr>
2343 <td rowspan="2">PoolingLayer
2344 <td rowspan="2" style="width:200px;"> Function to perform pooling with the specified pooling operation.
2345 <td rowspan="2">
2346 <ul>
2347 <li>ANEURALNETWORKS_AVERAGE_POOL_2D
2348 <li>ANEURALNETWORKS_L2_POOL_2D
2349 <li>ANEURALNETWORKS_MAX_POOL_2D
2350 </ul>
2351 <td>NEPoolingLayer
2352 <td>
2353 <ul>
2354 <li>NHWC
2355 <li>NCHW
2356 </ul>
2357 <td>
2358 <table>
2359 <tr><th>src<th>dst
2360 <tr><td>QASYMM8<td>QASYMM8
2361 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2362 <tr><td>F16<td>F16
2363 <tr><td>F32<td>F32
2364 </table>
2365<tr>
2366 <td>CLPoolingLayer
2367 <td>
2368 <ul>
2369 <li>NHWC
2370 <li>NCHW
2371 </ul>
2372 <td>
2373 <table>
2374 <tr><th>src<th>dst
2375 <tr><td>QASYMM8<td>QASYMM8
2376 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2377 <tr><td>F16<td>F16
2378 <tr><td>F32<td>F32
2379 </table>
2380<tr>
2381 <td rowspan="2">Pooling3dLayer
2382 <td rowspan="2" style="width:200px;"> Function to perform 3D pooling with the specified pooling operation.
2383 <td rowspan="2">
2384 <ul>
2385 <li>n/a
2386 </ul>
2387 <td>NEPooling3dLayer
2388 <td>
2389 <ul>
2390 <li>NDHWC
2391 </ul>
2392 <td>
2393 <table>
2394 <tr><th>src<th>dst
2395 <tr><td>F16<td>F16
2396 <tr><td>F32<td>F32
2397 <tr><td>QASYMM8<td>QASYMM8
2398 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2399 </table>
2400<tr>
2401 <td>CLPooling3dLayer
2402 <td>
2403 <ul>
2404 <li>NDHWC
2405 </ul>
2406 <td>
2407 <table>
2408 <tr><th>src<th>dst
2409 <tr><td>F16<td>F16
2410 <tr><td>F32<td>F32
2411 <tr><td>QASYMM8<td>QASYMM8
2412 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2413 </table>
2414<tr>
2415 <td rowspan="2">PReluLayer
2416 <td rowspan="2" style="width:200px;"> Function to compute the activation layer with the PRELU activation function.
2417 <td rowspan="2">
2418 <ul>
2419 <li>ANEURALNETWORKS_PRELU
2420 </ul>
2421 <td>NEPReluLayer
2422 <td>
2423 <ul>
2424 <li>All
2425 </ul>
2426 <td>
2427 <table>
2428 <tr><th>src<th>dst
2429 <tr><td>QASYMM8<td>QASYMM8
2430 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2431 <tr><td>F16<td>F16
2432 <tr><td>F32<td>F32
2433 </table>
2434<tr>
2435 <td>CLPReluLayer
2436 <td>
2437 <ul>
2438 <li>All
2439 </ul>
2440 <td>
2441 <table>
2442 <tr><th>src<th>dst
2443 <tr><td>QASYMM8<td>QASYMM8
2444 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2445 <tr><td>F16<td>F16
2446 <tr><td>F32<td>F32
2447 </table>
2448<tr>
2449 <td rowspan="2">PriorBoxLayer
2450 <td rowspan="2" style="width:200px;"> Function to compute prior boxes and clip.
2451 <td rowspan="2">
2452 <ul>
2453 <li>n/a
2454 </ul>
2455 <td>NEPriorBoxLayer
2456 <td>
2457 <ul>
2458 <li>NHWC
2459 <li>NCHW
2460 </ul>
2461 <td>
2462 <table>
2463 <tr><th>src0<th>src1<th>dst
2464 <tr><td>F32<td>F32<td>F32
2465 </table>
2466<tr>
2467 <td>CLPriorBoxLayer
2468 <td>
2469 <ul>
2470 <li>NHWC
2471 <li>NCHW
2472 </ul>
2473 <td>
2474 <table>
2475 <tr><th>src0<th>src1<th>dst
2476 <tr><td>F32<td>F32<td>F32
2477 </table>
2478<tr>
2479 <td rowspan="2">QLSTMLayer
2480 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2481 <td rowspan="2">
2482 <ul>
2483 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2484 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2485 </ul>
2486 <td>NEQLSTMLayer
2487 <td>
2488 <ul>
2489 <li>All
2490 </ul>
2491 <td>
2492 <table>
2493 <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2494 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2495 </table>
2496<tr>
2497 <td>CLQLSTMLayer
2498 <td>
2499 <ul>
2500 <li>All
2501 </ul>
2502 <td>
2503 <table>
2504 <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2505 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2506 </table>
2507<tr>
2508 <td rowspan="2">QuantizationLayer
2509 <td rowspan="2" style="width:200px;"> Function to perform a quantization layer.
2510 <td rowspan="2">
2511 <ul>
2512 <li>ANEURALNETWORKS_QUANTIZE
2513 </ul>
2514 <td>NEQuantizationLayer
2515 <td>
2516 <ul>
2517 <li>All
2518 </ul>
2519 <td>
2520 <table>
2521 <tr><th>src<th>dst
2522 <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2523 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2524 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2525 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2526 </table>
2527<tr>
2528 <td>CLQuantizationLayer
2529 <td>
2530 <ul>
2531 <li>All
2532 </ul>
2533 <td>
2534 <table>
2535 <tr><th>src<th>dst
2536 <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2537 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2538 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2539 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2540 </table>
2541<tr>
2542 <td rowspan="2">Range
2543 <td rowspan="2" style="width:200px;"> Function to generate a sequence of numbers starting from START, extending by increments of 'STEP' up to but not including 'END'.
2544 <td rowspan="2">
2545 <ul>
2546 <li>n/a
2547 </ul>
2548 <td>NERange
2549 <td>
2550 <ul>
2551 <li>All
2552 </ul>
2553 <td>
2554 <table>
2555 <tr><th>dst
2556 <tr><td>U8
2557 <tr><td>S8
2558 <tr><td>U16
2559 <tr><td>S16
2560 <tr><td>U32
2561 <tr><td>S32
2562 <tr><td>F16
2563 <tr><td>F32
2564 </table>
2565<tr>
2566 <td>CLRange
2567 <td>
2568 <ul>
2569 <li>All
2570 </ul>
2571 <td>
2572 <table>
2573 <tr><th>dst
2574 <tr><td>U8
2575 <tr><td>S8
2576 <tr><td>QASYMM8
2577 <tr><td>U16
2578 <tr><td>S16
2579 <tr><td>U32
2580 <tr><td>S32
2581 <tr><td>F16
2582 <tr><td>F32
2583 </table>
2584<tr>
2585 <td rowspan="2">ReduceMean
2586 <td rowspan="2" style="width:200px;"> Function to perform a reduce mean operation.
2587 <td rowspan="2">
2588 <ul>
2589 <li>ANEURALNETWORKS_MEAN
2590 </ul>
2591 <td>NEReduceMean
2592 <td>
2593 <ul>
2594 <li>All
2595 </ul>
2596 <td>
2597 <table>
2598 <tr><th>src<th>dst
2599 <tr><td>QASYMM8<td>QASYMM8
2600 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2601 <tr><td>F16<td>F16
2602 <tr><td>F32<td>F32
2603 </table>
2604<tr>
2605 <td>CLReduceMean
2606 <td>
2607 <ul>
2608 <li>All
2609 </ul>
2610 <td>
2611 <table>
2612 <tr><th>src<th>dst
2613 <tr><td>QASYMM8<td>QASYMM8
2614 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2615 <tr><td>F16<td>F16
2616 <tr><td>F32<td>F32
2617 </table>
2618<tr>
2619 <td rowspan="2">ReductionOperation
2620 <td rowspan="2" style="width:200px;"> Function to perform a reduction with one of the following operations: ARG_IDX_MAX (index of the max value), ARG_IDX_MIN (index of the min value), MEAN_SUM (mean of sum), PROD (product), SUM_SQUARE (sum of squares), SUM (sum), MIN (min), MAX (max).
2621 <td rowspan="2">
2622 <ul>
2623 <li>ANEURALNETWORKS_REDUCE_ALL
2624 <li>ANEURALNETWORKS_REDUCE_ANY
2625 <li>ANEURALNETWORKS_REDUCE_MAX
2626 <li>ANEURALNETWORKS_REDUCE_MIN
2627 <li>ANEURALNETWORKS_REDUCE_PROD
2628 <li>ANEURALNETWORKS_REDUCE_SUM
2629 </ul>
2630 <td>NEReductionOperation
2631 <td>
2632 <ul>
2633 <li>All
2634 </ul>
2635 <td>
2636 <table>
2637 <tr><th>src<th>dst
2638 <tr><td>QASYMM8<td>QASYMM8
2639 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2640 <tr><td>F16<td>F16
2641 <tr><td>F32<td>F32
2642 <tr><td>S32<td>S32
2643 </table>
2644<tr>
2645 <td>CLReductionOperation
2646 <td>
2647 <ul>
2648 <li>All
2649 </ul>
2650 <td>
2651 <table>
2652 <tr><th>src<th>dst
2653 <tr><td>QASYMM8<td>QASYMM8
2654 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2655 <tr><td>F16<td>F16
2656 <tr><td>F32<td>F32
2657 <tr><td>S32<td>S32
2658 </table>
2659<tr>
2660 <td rowspan="1">ReorderLayer
2661 <td rowspan="1" style="width:200px;"> Reorders a tensor to a different weights format.
2662 <td rowspan="1">
2663 <ul>
2664 <li>n/a
2665 </ul>
2666 <td>NEReorderLayer
2667 <td>
2668 <ul>
2669 <li>NCHW
2670 </ul>
2671 <td>
2672 <table>
2673 <tr><th>src<th>dst
2674 <tr><td>F32<td>F32
2675 </table>
2676<tr>
2677 <td rowspan="2">ReorgLayer
2678 <td rowspan="2" style="width:200px;"> Function to perform a reorganization of the input tensor into the output tensor.
2679 <td rowspan="2">
2680 <ul>
2681 <li>n/a
2682 </ul>
2683 <td>NEReorgLayer
2684 <td>
2685 <ul>
2686 <li>NHWC
2687 <li>NCHW
2688 </ul>
2689 <td>
2690 <table>
2691 <tr><th>src<th>dst
2692 <tr><td>All<td>All
2693 </table>
2694<tr>
2695 <td>CLReorgLayer
2696 <td>
2697 <ul>
2698 <li>NHWC
2699 <li>NCHW
2700 </ul>
2701 <td>
2702 <table>
2703 <tr><th>src<th>dst
2704 <tr><td>All<td>All
2705 </table>
2706<tr>
2707 <td rowspan="2">ReshapeLayer
2708 <td rowspan="2" style="width:200px;"> Function to reshape a tensor.
2709 <td rowspan="2">
2710 <ul>
2711 <li>ANEURALNETWORKS_RESHAPE
2712 <li>ANEURALNETWORKS_SQUEEZE
2713 </ul>
2714 <td>NEReshapeLayer
2715 <td>
2716 <ul>
2717 <li>All
2718 </ul>
2719 <td>
2720 <table>
2721 <tr><th>src<th>dst
2722 <tr><td>All<td>All
2723 </table>
2724<tr>
2725 <td>CLReshapeLayer
2726 <td>
2727 <ul>
2728 <li>All
2729 </ul>
2730 <td>
2731 <table>
2732 <tr><th>src<th>dst
2733 <tr><td>All<td>All
2734 </table>
2735<tr>
2736 <td rowspan="2">Reverse
2737 <td rowspan="2" style="width:200px;"> Function to reverse a tensor along the given axes.
2738 <td rowspan="2">
2739 <ul>
2740 <li>n/a
2741 </ul>
2742 <td>NEReverse
2743 <td>
2744 <ul>
2745 <li>All
2746 </ul>
2747 <td>
2748 <table>
2749 <tr><th>src0<th>src1<th>dst
2750 <tr><td>All<td>U32, S32<td>All
2751 </table>
2752<tr>
2753 <td>CLReverse
2754 <td>
2755 <ul>
2756 <li>All
2757 </ul>
2758 <td>
2759 <table>
2760 <tr><th>src0<th>src1<th>dst
2761 <tr><td>All<td>U32, S32<td>All
2762 </table>
2763<tr>
2764 <td rowspan="2">RNNLayer
2765 <td rowspan="2" style="width:200px;"> Function to perform recurrent neural network layer.
2766 <td rowspan="2">
2767 <ul>
2768 <li>ANEURALNETWORKS_RNN
2769 </ul>
2770 <td>NERNNLayer
2771 <td>
2772 <ul>
2773 <li>NHWC
2774 <li>NCHW
2775 </ul>
2776 <td>
2777 <table>
2778 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2779 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2780 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2781 </table>
2782<tr>
2783 <td>CLRNNLayer
2784 <td>
2785 <ul>
2786 <li>NHWC
2787 <li>NCHW
2788 </ul>
2789 <td>
2790 <table>
2791 <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
2792 <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
2793 <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
2794 </table>
2795<tr>
2796 <td rowspan="2">ROIAlignLayer
2797 <td rowspan="2" style="width:200px;"> Function to perform ROI alignment.
2798 <td rowspan="2">
2799 <ul>
2800 <li>ANEURALNETWORKS_ROI_ALIGN
2801 </ul>
2802 <td>NEROIAlignLayer
2803 <td>
2804 <ul>
2805 <li>All
2806 </ul>
2807 <td>
2808 <table>
2809 <tr><th>src0<th>src1<th>dst
2810 <tr><td>F16<td>F16<td>F16
2811 <tr><td>F32<td>F32<td>F32
2812 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2813 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2814 </table>
2815<tr>
2816 <td>CLROIAlignLayer
2817 <td>
2818 <ul>
2819 <li>All
2820 </ul>
2821 <td>
2822 <table>
2823 <tr><th>src0<th>src1<th>dst
2824 <tr><td>F16<td>F16<td>F16
2825 <tr><td>F32<td>F32<td>F32
2826 <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
2827 <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
2828 </table>
2829<tr>
2830 <td rowspan="2">ROIPoolingLayer
2831 <td rowspan="2" style="width:200px;"> Function to perform ROI pooling.
2832 <td rowspan="2">
2833 <ul>
2834 <li>ANEURALNETWORKS_ROI_POOLING
2835 </ul>
2836 <td>NEROIPoolingLayer
2837 <td>
2838 <ul>
2839 <li>All
2840 </ul>
2841 <td>
2842 <table>
2843 <tr><th>src0<th>src1<th>dst
2844 <tr><td>F32<td>U16<td>F32
2845 <tr><td>QASYMM8<td>U16<td>QASYMM8
2846 </table>
2847<tr>
2848 <td>CLROIPoolingLayer
2849 <td>
2850 <ul>
2851 <li>All
2852 </ul>
2853 <td>
2854 <table>
2855 <tr><th>src0<th>src1<th>dst
2856 <tr><td>F16<td>U16<td>F16
2857 <tr><td>F32<td>U16<td>F32
2858 <tr><td>QASYMM8<td>U16<td>QASYMM8
2859 </table>
2860<tr>
2861 <td rowspan="2">Scale
2862 <td rowspan="2" style="width:200px;"> Function to resize a tensor using one of the following interpolation methods: - Bilinear - Nearest neighbor
2863 <td rowspan="2">
2864 <ul>
2865 <li>ANEURALNETWORKS_RESIZE_BILINEAR
2866 <li>ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
2867 </ul>
2868 <td>NEScale
2869 <td>
2870 <ul>
2871 <li>NHWC
2872 <li>NCHW
2873 </ul>
2874 <td>
2875 <table>
2876 <tr><th>src<th>dst
2877 <tr><td>QASYMM8<td>QASYMM8
2878 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2879 <tr><td>F16<td>F16
2880 <tr><td>F32<td>F32
2881 <tr><td>U8<td>U8
2882 <tr><td>S8<td>S8
2883 <tr><td>S16<td>S16
2884 </table>
2885<tr>
2886 <td>CLScale
2887 <td>
2888 <ul>
2889 <li>NHWC
2890 <li>NCHW
2891 </ul>
2892 <td>
2893 <table>
2894 <tr><th>src<th>dst
2895 <tr><td>QASYMM8<td>QASYMM8
2896 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2897 <tr><td>F16<td>F16
2898 <tr><td>F32<td>F32
2899 <tr><td>U8<td>U8
2900 <tr><td>S16<td>S16
2901 </table>
2902<tr>
2903 <td rowspan="2">Select
2904 <td rowspan="2" style="width:200px;"> Function to select values from 2 tensors depending on an input tensor of booleans.
2905 <td rowspan="2">
2906 <ul>
2907 <li>ANEURALNETWORKS_SELECT
2908 </ul>
2909 <td>NESelect
2910 <td>
2911 <ul>
2912 <li>All
2913 </ul>
2914 <td>
2915 <table>
2916 <tr><th>src0<th>src1<th>src2<th>dst
2917 <tr><td>U8<td>All<td>All<td>All
2918 </table>
2919<tr>
2920 <td>CLSelect
2921 <td>
2922 <ul>
2923 <li>All
2924 </ul>
2925 <td>
2926 <table>
2927 <tr><th>src0<th>src1<th>src2<th>dst
2928 <tr><td>U8<td>All<td>All<td>All
2929 </table>
2930<tr>
2931 <td rowspan="2">Slice
2932 <td rowspan="2" style="width:200px;"> Function to perform tensor slicing.
2933 <td rowspan="2">
2934 <ul>
2935 <li>ANEURALNETWORKS_SLICE
2936 </ul>
2937 <td>NESlice
2938 <td>
2939 <ul>
2940 <li>All
2941 </ul>
2942 <td>
2943 <table>
2944 <tr><th>src<th>dst
2945 <tr><td>All<td>All
2946 </table>
2947<tr>
2948 <td>CLSlice
2949 <td>
2950 <ul>
2951 <li>All
2952 </ul>
2953 <td>
2954 <table>
2955 <tr><th>src<th>dst
2956 <tr><td>All<td>All
2957 </table>
2958<tr>
2959 <td rowspan="2">SoftmaxLayer
2960 <td rowspan="2" style="width:200px;"> Function to compute a SoftmaxLayer and a Log SoftmaxLayer.
2961 <td rowspan="2">
2962 <ul>
2963 <li>ANEURALNETWORKS_LOG_SOFTMAX
2964 <li>ANEURALNETWORKS_SOFTMAX
2965 </ul>
2966 <td>NESoftmaxLayerGeneric
2967 <td>
2968 <ul>
2969 <li>All
2970 </ul>
2971 <td>
2972 <table>
2973 <tr><th>src<th>dst
2974 <tr><td>QASYMM8<td>QASYMM8
2975 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2976 <tr><td>F16<td>F16
2977 <tr><td>F32<td>F32
2978 </table>
2979<tr>
2980 <td>CLSoftmaxLayerGeneric
2981 <td>
2982 <ul>
2983 <li>All
2984 </ul>
2985 <td>
2986 <table>
2987 <tr><th>src<th>dst
2988 <tr><td>QASYMM8<td>QASYMM8
2989 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2990 <tr><td>F16<td>F16
2991 <tr><td>F32<td>F32
2992 </table>
2993<tr>
2994 <td rowspan="2">SpaceToBatchLayer
2995 <td rowspan="2" style="width:200px;"> Function to divide a tensor spatially.
2996 <td rowspan="2">
2997 <ul>
2998 <li>ANEURALNETWORKS_SPACE_TO_BATCH_ND
2999 </ul>
3000 <td>NESpaceToBatchLayer
3001 <td>
3002 <ul>
3003 <li>NHWC
3004 <li>NCHW
3005 </ul>
3006 <td>
3007 <table>
3008 <tr><th>src0<th>src1<th>src2<th>dst
3009 <tr><td>All<td>S32<td>S32<td>All
3010 </table>
3011<tr>
3012 <td>CLSpaceToBatchLayer
3013 <td>
3014 <ul>
3015 <li>NHWC
3016 <li>NCHW
3017 </ul>
3018 <td>
3019 <table>
3020 <tr><th>src0<th>src1<th>src2<th>dst
3021 <tr><td>All<td>S32<td>S32<td>All
3022 </table>
3023<tr>
3024 <td rowspan="2">SpaceToDepthLayer
3025 <td rowspan="2" style="width:200px;"> Function to rearrange blocks of spatial data into depth.
3026 <td rowspan="2">
3027 <ul>
3028 <li>ANEURALNETWORKS_SPACE_TO_DEPTH
3029 </ul>
3030 <td>NESpaceToDepthLayer
3031 <td>
3032 <ul>
3033 <li>NHWC
3034 <li>NCHW
3035 </ul>
3036 <td>
3037 <table>
3038 <tr><th>src<th>dst
3039 <tr><td>All<td>All
3040 </table>
3041<tr>
3042 <td>CLSpaceToDepthLayer
3043 <td>
3044 <ul>
3045 <li>NHWC
3046 <li>NCHW
3047 </ul>
3048 <td>
3049 <table>
3050 <tr><th>src<th>dst
3051 <tr><td>All<td>All
3052 </table>
3053<tr>
3054 <td rowspan="2">Split
3055 <td rowspan="2" style="width:200px;"> Function to split a tensor along a given axis.
3056 <td rowspan="2">
3057 <ul>
3058 <li>ANEURALNETWORKS_SPLIT
3059 </ul>
3060 <td>NESplit
3061 <td>
3062 <ul>
3063 <li>All
3064 </ul>
3065 <td>
3066 <table>
3067 <tr><th>src<th>dst
3068 <tr><td>All<td>All
3069 </table>
3070<tr>
3071 <td>CLSplit
3072 <td>
3073 <ul>
3074 <li>All
3075 </ul>
3076 <td>
3077 <table>
3078 <tr><th>src<th>dst
3079 <tr><td>All<td>All
3080 </table>
3081<tr>
3082 <td rowspan="2">StackLayer
3083 <td rowspan="2" style="width:200px;"> Function to stack tensors along an axis.
3084 <td rowspan="2">
3085 <ul>
3086 <li>n/a
3087 </ul>
3088 <td>NEStackLayer
3089 <td>
3090 <ul>
3091 <li>All
3092 </ul>
3093 <td>
3094 <table>
3095 <tr><th>src<th>dst
3096 <tr><td>All<td>All
3097 </table>
3098<tr>
3099 <td>CLStackLayer
3100 <td>
3101 <ul>
3102 <li>All
3103 </ul>
3104 <td>
3105 <table>
3106 <tr><th>src<th>dst
3107 <tr><td>All<td>All
3108 </table>
3109<tr>
3110 <td rowspan="2">StridedSlice
3111 <td rowspan="2" style="width:200px;"> Function to extract a strided slice of a tensor.
3112 <td rowspan="2">
3113 <ul>
3114 <li>ANEURALNETWORKS_STRIDED_SLICE
3115 </ul>
3116 <td>NEStridedSlice
3117 <td>
3118 <ul>
3119 <li>All
3120 </ul>
3121 <td>
3122 <table>
3123 <tr><th>src<th>dst
3124 <tr><td>All<td>All
3125 </table>
3126<tr>
3127 <td>CLStridedSlice
3128 <td>
3129 <ul>
3130 <li>All
3131 </ul>
3132 <td>
3133 <table>
3134 <tr><th>src<th>dst
3135 <tr><td>All<td>All
3136 </table>
3137<tr>
3138 <td rowspan="2">Tile
3139 <td rowspan="2" style="width:200px;"> Function to construct a tensor by tiling a given tensor.
3140 <td rowspan="2">
3141 <ul>
3142 <li>ANEURALNETWORKS_TILE
3143 </ul>
3144 <td>NETile
3145 <td>
3146 <ul>
3147 <li>All
3148 </ul>
3149 <td>
3150 <table>
3151 <tr><th>src<th>dst
3152 <tr><td>All<td>All
3153 </table>
3154<tr>
3155 <td>CLTile
3156 <td>
3157 <ul>
3158 <li>All
3159 </ul>
3160 <td>
3161 <table>
3162 <tr><th>src<th>dst
3163 <tr><td>All<td>All
3164 </table>
3165<tr>
3166 <td rowspan="2">Transpose
3167 <td rowspan="2" style="width:200px;"> Function to transpose a 2D tensor.
3168 <td rowspan="2">
3169 <ul>
3170 <li>ANEURALNETWORKS_TRANSPOSE
3171 </ul>
3172 <td>NETranspose
3173 <td>
3174 <ul>
3175 <li>All
3176 </ul>
3177 <td>
3178 <table>
3179 <tr><th>src<th>dst
3180 <tr><td>All<td>All
3181 </table>
3182<tr>
3183 <td>CLTranspose
3184 <td>
3185 <ul>
3186 <li>All
3187 </ul>
3188 <td>
3189 <table>
3190 <tr><th>src<th>dst
3191 <tr><td>All<td>All
3192 </table>
3193<tr>
3194 <td rowspan="2">Unstack
3195 <td rowspan="2" style="width:200px;"> Function to unpack a rank-R tensor into rank-(R-1) tensors.
3196 <td rowspan="2">
3197 <ul>
3198 <li>n/a
3199 </ul>
3200 <td>NEUnstack
3201 <td>
3202 <ul>
3203 <li>All
3204 </ul>
3205 <td>
3206 <table>
3207 <tr><th>src<th>dst
3208 <tr><td>All<td>All
3209 </table>
3210<tr>
3211 <td>CLUnstack
3212 <td>
3213 <ul>
3214 <li>All
3215 </ul>
3216 <td>
3217 <table>
3218 <tr><th>src<th>dst
3219 <tr><td>All<td>All
3220 </table>
3221<tr>
3222 <td rowspan="2">WinogradConvolutionLayer
3223 <td rowspan="2" style="width:200px;"> Function to perform Winograd convolution.
3224 <td rowspan="2">
3225 <ul>
3226 <li>ANEURALNETWORKS_CONV_2D
3227 </ul>
3228 <td>NEWinogradConvolutionLayer
3229 <td>
3230 <ul>
3231 <li>NHWC
3232 <li>NCHW
3233 </ul>
3234 <td>
3235 <table>
3236 <tr><th>src0<th>src1<th>src2<th>dst
3237 <tr><td>F16<td>F16<td>F16<td>F16
3238 <tr><td>F32<td>F32<td>F32<td>F32
3239 </table>
3240<tr>
3241 <td>CLWinogradConvolutionLayer
3242 <td>
3243 <ul>
3244 <li>NHWC
3245 <li>NCHW
3246 </ul>
3247 <td>
3248 <table>
3249 <tr><th>src0<th>src1<th>src2<th>dst
3250 <tr><td>F16<td>F16<td>F16<td>F16
3251 <tr><td>F32<td>F32<td>F32<td>F32
3252 </table>
3253</table>
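
The functions listed above generally follow the same runtime flow: create the input and output tensors, configure() the function, allocate the tensor memory, then run(). The example below is a minimal, purely illustrative sketch of that flow using NETranspose from the Transpose row; the tensor shape and data type are arbitrary choices, and data filling and error handling are omitted.

@code{.cpp}
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/functions/NETranspose.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // 2D F32 tensors: TensorShape lists the fastest changing dimension first (width, height).
    Tensor src, dst;
    src.allocator()->init(TensorInfo(TensorShape(8U, 4U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(4U, 8U), 1, DataType::F32));

    // Configure the function first, then allocate the tensors it operates on.
    NETranspose transpose;
    transpose.configure(&src, &dst);

    src.allocator()->allocate();
    dst.allocator()->allocate();
    // In real code the source buffer would be filled here (e.g. through src.buffer() or an Iterator).

    transpose.run();
    return 0;
}
@endcode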
3254
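The OpenCL backend follows the same flow with two extra steps: the CLScheduler must be initialised before any CL function is configured, and the command queue is synchronised after run(). The sketch below, again purely illustrative, uses CLSoftmaxLayer (the non-log alias of the CLSoftmaxLayerGeneric entry above); shapes are arbitrary and the default configure() parameters are used, since optional parameters can vary between releases.

@code{.cpp}
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"
#include "arm_compute/runtime/CL/functions/CLSoftmaxLayer.h"

using namespace arm_compute;

int main()
{
    // Create the OpenCL context, device and queue shared by all CL* functions.
    CLScheduler::get().default_init();

    // 10 values per row, 4 rows (e.g. 10 classes x 4 batches), F32.
    CLTensor src, dst;
    src.allocator()->init(TensorInfo(TensorShape(10U, 4U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(10U, 4U), 1, DataType::F32));

    // Softmax along the innermost dimension with default parameters.
    CLSoftmaxLayer softmax;
    softmax.configure(&src, &dst);

    src.allocator()->allocate();
    dst.allocator()->allocate();
    // In real code the input would be filled via src.map() / ... / src.unmap().

    softmax.run();
    CLScheduler::get().sync(); // Block until the enqueued OpenCL work has finished.
    return 0;
}
@endcode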
3255*/
3256} // namespace arm_compute