///
/// Copyright (c) 2021-2024 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/**
@page operators_list Supported Operators

@tableofcontents

@section S9_1_operators_list Supported Operators

Compute Library supports the operators listed in the table below.

Compute Library supports a wide range of data types; detailed information can be found in the documentation of each kernel/function.
The main data types that the Machine Learning functions support are the following:
    <ul>
    <li>BFLOAT16: 16-bit non-standard brain floating point
    <li>QASYMM8: 8-bit unsigned asymmetric quantized
    <li>QASYMM8_SIGNED: 8-bit signed asymmetric quantized
    <li>QSYMM8_PER_CHANNEL: 8-bit signed symmetric quantized (used for the weights)
    <li>QSYMM8: 8-bit signed symmetric quantized
    <li>QSYMM16: 16-bit signed symmetric quantized
    <li>F32: 32-bit single precision floating point
    <li>F16: 16-bit half precision floating point
    <li>S32: 32-bit signed integer
    <li>U8: 8-bit unsigned char
    <li>All: Agnostic to any specific data type
    </ul>
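
The difference between the asymmetric and symmetric quantized types above can be sketched as follows. This is an illustration of the quantization schemes only, not the Compute Library quantization API; the helper names are hypothetical:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical helpers illustrating the quantization schemes listed above.
// QASYMM8: unsigned 8-bit with a scale and a zero point (asymmetric).
// QSYMM8:  signed 8-bit with a scale only, so real 0.0 always maps to integer 0.
uint8_t quantize_qasymm8(float value, float scale, int32_t zero_point)
{
    const int32_t q = static_cast<int32_t>(std::lround(value / scale)) + zero_point;
    return static_cast<uint8_t>(std::clamp(q, 0, 255));
}

int8_t quantize_qsymm8(float value, float scale)
{
    const int32_t q = static_cast<int32_t>(std::lround(value / scale));
    return static_cast<int8_t>(std::clamp(q, -128, 127));
}

float dequantize_qasymm8(uint8_t q, float scale, int32_t zero_point)
{
    return scale * static_cast<float>(static_cast<int32_t>(q) - zero_point);
}
```

A per-channel scheme such as QSYMM8_PER_CHANNEL applies the symmetric formula with a separate scale per output channel of the weight tensor.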

Compute Library supports the following data layouts (fast changing dimension from right to left):
    <ul>
    <li>NHWC: The native layout of Compute Library, which delivers the best performance, where channels are in the fastest changing dimension
    <li>NCHW: Legacy layout where width is in the fastest changing dimension
    <li>NDHWC: New data layout for supporting 3D operators
    <li>All: Agnostic to any specific data layout
    </ul>
where N = batches, C = channels, H = height, W = width, D = depth

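As a concrete reading of "fastest changing dimension", the linear offset of element (n, c, h, w) in a dense buffer can be sketched as below. This is illustrative only, not how Compute Library computes tensor strides internally:

```cpp
#include <cstddef>

// Offset of element (n, c, h, w) in a dense NHWC buffer:
// c varies fastest, then w, then h, then n.
std::size_t offset_nhwc(std::size_t n, std::size_t c, std::size_t h, std::size_t w,
                        std::size_t C, std::size_t H, std::size_t W)
{
    return ((n * H + h) * W + w) * C + c;
}

// Offset of element (n, c, h, w) in a dense NCHW buffer:
// w varies fastest, then h, then c, then n.
std::size_t offset_nchw(std::size_t n, std::size_t c, std::size_t h, std::size_t w,
                        std::size_t C, std::size_t H, std::size_t W)
{
    return ((n * C + c) * H + h) * W + w;
}
```

NDHWC extends the NHWC formula with a depth dimension between N and H.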
<table>
<caption id="multi_row"></caption>
<tr>
  <th>Function
  <th>Description
  <th>Equivalent Android NNAPI Op
  <th>Backends
  <th>Data Layouts
  <th>Data Types
<tr>
  <td rowspan="2">ActivationLayer
  <td rowspan="2" style="width:200px;"> Function to simulate an activation layer with the specified activation function.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_ELU
       <li>ANEURALNETWORKS_HARD_SWISH
       <li>ANEURALNETWORKS_LOGISTIC
       <li>ANEURALNETWORKS_RELU
       <li>ANEURALNETWORKS_RELU1
       <li>ANEURALNETWORKS_RELU6
       <li>ANEURALNETWORKS_TANH
      </ul>
  <td>NEActivationLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>QSYMM16<td>QSYMM16
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td>CLActivationLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>QSYMM16<td>QSYMM16
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td rowspan="1">AddMulAdd
  <td rowspan="1" style="width:200px;"> Performs a fused Add + Mul + Add [+ Relu-based-Activation] operation.
  <td rowspan="1">
      <ul>
       <li>n/a
      </ul>
  <td>NEAddMulAdd
  <td>
      <ul>
       <li>Any
      </ul>
  <td>
    <table>
    <tr><th>input1<th>input2<th>bn_mul<th>bn_add<th>add_output<th>final_output
    <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8<td>QASYMM8<td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">ArgMinMaxLayer
  <td rowspan="2" style="width:200px;"> Function to calculate the index of the minimum or maximum values in a tensor based on an axis.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_ARGMAX
       <li>ANEURALNETWORKS_ARGMIN
      </ul>
  <td>NEArgMinMaxLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>U32, S32
    <tr><td>QASYMM8_SIGNED<td>U32, S32
    <tr><td>S32<td>U32, S32, S64
    <tr><td>F16<td>U32, S32
    <tr><td>F32<td>U32, S32
    </table>
<tr>
  <td>CLArgMinMaxLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>U32, S32
    <tr><td>QASYMM8_SIGNED<td>U32, S32
    <tr><td>S32<td>U32, S32
    <tr><td>F16<td>U32, S32
    <tr><td>F32<td>U32, S32
    </table>
<tr>
  <td rowspan="1">ArithmeticAddition
  <td rowspan="1" style="width:200px;"> Function to add 2 tensors.
  <td rowspan="1">
      <ul>
       <li>ANEURALNETWORKS_ADD
      </ul>
  <td>NEArithmeticAddition
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
    <tr><td>QSYMM16<td>QSYMM16<td>S32
    <tr><td>U8<td>U8<td>U8
    <tr><td>S16<td>S16<td>S16
    <tr><td>S32<td>S32<td>S32
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    </table>
<tr>
  <td rowspan="1">ArithmeticSubtraction
  <td rowspan="1" style="width:200px;"> Function to subtract 2 tensors.
  <td rowspan="1">
      <ul>
       <li>ANEURALNETWORKS_SUB
      </ul>
  <td>NEArithmeticSubtraction
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
    <tr><td>QSYMM16<td>QSYMM16<td>S32
    <tr><td>U8<td>U8<td>U8
    <tr><td>S16<td>S16<td>S16
    <tr><td>S32<td>S32<td>S32
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">BatchNormalizationLayer
  <td rowspan="2" style="width:200px;"> Function to perform batch normalization.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NEBatchNormalizationLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>F32<td>F32
    <tr><td>F16<td>F16
    </table>
<tr>
  <td>CLBatchNormalizationLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>F32<td>F32
    <tr><td>F16<td>F16
    </table>
<tr>
  <td rowspan="2">BatchToSpaceLayer
  <td rowspan="2" style="width:200px;"> Batch to space transformation.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_BATCH_TO_SPACE_ND
      </ul>
  <td>NEBatchToSpaceLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>S32<td>All
    </table>
<tr>
  <td>CLBatchToSpaceLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>S32<td>All
    </table>
<tr>
  <td rowspan="2">BitwiseAnd
  <td rowspan="2" style="width:200px;"> Function to perform bitwise AND between 2 tensors.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_LOGICAL_AND
      </ul>
  <td>NEBitwiseAnd
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>U8
    </table>
<tr>
  <td>CLBitwiseAnd
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>U8
    </table>
<tr>
  <td rowspan="2">BitwiseNot
  <td rowspan="2" style="width:200px;"> Function to perform bitwise NOT.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_LOGICAL_NOT
      </ul>
  <td>NEBitwiseNot
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>U8
    </table>
<tr>
  <td>CLBitwiseNot
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>U8
    </table>
<tr>
  <td rowspan="2">BitwiseOr
  <td rowspan="2" style="width:200px;"> Function to perform bitwise OR between 2 tensors.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_LOGICAL_OR
      </ul>
  <td>NEBitwiseOr
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>U8
    </table>
<tr>
  <td>CLBitwiseOr
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>U8
    </table>
<tr>
  <td rowspan="2">BitwiseXor
  <td rowspan="2" style="width:200px;"> Function to perform bitwise XOR between 2 tensors.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NEBitwiseXor
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>U8
    </table>
<tr>
  <td>CLBitwiseXor
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>U8
    </table>
<tr>
  <td rowspan="2">BoundingBoxTransform
  <td rowspan="2" style="width:200px;"> Transform proposal bounding boxes to target bounding boxes using bounding box deltas.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NEBoundingBoxTransform
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    </table>
<tr>
  <td>CLBoundingBoxTransform
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">Cast
  <td rowspan="2" style="width:200px;"> Function to cast a tensor.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_CAST
      </ul>
  <td>NECast
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8_SIGNED<td>S16, S32, F32, F16
    <tr><td>QASYMM8<td>U16, S16, S32, F32, F16
    <tr><td>U8<td>U16, S16, S32, F32, F16
    <tr><td>U16<td>U8, U32
    <tr><td>S16<td>QASYMM8_SIGNED, U8, S32
    <tr><td>F16<td>QASYMM8_SIGNED, QASYMM8, F32, S32, U8
    <tr><td>S32<td>QASYMM8_SIGNED, QASYMM8, F16, F32, U8
    <tr><td>F32<td>QASYMM8_SIGNED, QASYMM8, BFLOAT16, F16, S32, U8
    </table>
<tr>
  <td>CLCast
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
    <tr><td>S8<td>U8, U16, S16, U32, S32, F16, F32
    <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
    <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
    <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
    <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
    <tr><td>U64<td>U8, S8, U16, S16, U32, S32, F16, F32
    <tr><td>S64<td>U8, S8, U16, S16, U32, S32, F16, F32
    <tr><td>F16<td>U8, S8, U16, S16, S32, U32, F32
    <tr><td>F32<td>U8, S8, U16, S16, S32, U32, F16
    </table>
<tr>
  <td rowspan="2">ChannelShuffleLayer
  <td rowspan="2" style="width:200px;"> Function to shuffle the channels of the input tensor.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_CHANNEL_SHUFFLE
      </ul>
  <td>NEChannelShuffleLayer
  <td>
      <ul>
       <li>NCHW
       <li>NHWC
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLChannelShuffleLayer
  <td>
      <ul>
       <li>NCHW
       <li>NHWC
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="1">Comparison
  <td rowspan="1" style="width:200px;"> Function to compare 2 tensors.
  <td rowspan="1">
      <ul>
       <li>ANEURALNETWORKS_EQUAL
       <li>ANEURALNETWORKS_GREATER
       <li>ANEURALNETWORKS_GREATER_EQUAL
       <li>ANEURALNETWORKS_LESS
       <li>ANEURALNETWORKS_LESS_EQUAL
       <li>ANEURALNETWORKS_NOT_EQUAL
      </ul>
  <td>CLComparison
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>All<td>U8
    </table>
<tr>
  <td rowspan="2">ConcatenateLayer
  <td rowspan="2" style="width:200px;"> Function to concatenate tensors along a given axis.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_CONCATENATION
      </ul>
  <td>NEConcatenateLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td>CLConcatenateLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">ConvertFullyConnectedWeights
  <td rowspan="2" style="width:200px;"> Function to transpose the weights for the fully connected layer.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NEConvertFullyConnectedWeights
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLConvertFullyConnectedWeights
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">ConvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to compute a convolution layer.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_CONV_2D
      </ul>
  <td>NEConvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
    </table>
<tr>
  <td>CLConvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
    </table>
<tr>
  <td rowspan="2">Conv3D
  <td rowspan="2" style="width:200px;"> Function to compute a 3D convolution layer.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_CONV_3D
      </ul>
  <td>NEConv3D
  <td>
      <ul>
       <li>NDHWC
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    </table>
<tr>
  <td>CLConv3D
  <td>
      <ul>
       <li>NDHWC
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    </table>
<tr>
  <td rowspan="2">Copy
  <td rowspan="2" style="width:200px;"> Function to copy a tensor.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NECopy
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLCopy
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="1">Crop
  <td rowspan="1" style="width:200px;"> Performs a copy of the input tensor to the output tensor.
  <td rowspan="1">
      <ul>
       <li>n/a
      </ul>
  <td>CLCrop
  <td>
      <ul>
       <li>NHWC
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>F32
    </table>
<tr>
  <td rowspan="2">CropResize
  <td rowspan="2" style="width:200px;"> Function to perform cropping and resizing.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NECropResize
  <td>
      <ul>
       <li>NHWC
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>All<td>F32<td>F32<td>F32
    </table>
<tr>
  <td>CLCropResize
  <td>
      <ul>
       <li>NHWC
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>All<td>F32<td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">DeconvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to compute a deconvolution or transpose convolution.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
      </ul>
  <td>NEDeconvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
    </table>
<tr>
  <td>CLDeconvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
    </table>
<tr>
  <td rowspan="1">DeconvolutionLayerUpsample
  <td rowspan="1" style="width:200px;"> Function to execute deconvolution upsample on OpenCL.
  <td rowspan="1">
      <ul>
       <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
      </ul>
  <td>CLDeconvolutionLayerUpsample
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">DepthConvertLayer
  <td rowspan="2" style="width:200px;"> Performs a down-scaling depth conversion.
  <td rowspan="2">
      <ul>
       <li>n/a
      </ul>
  <td>NEDepthConvertLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>F16, F32
    <tr><td>U8<td>U16, S16, S32
    <tr><td>U16<td>U8, U32
    <tr><td>S16<td>U8, S32
    <tr><td>BFLOAT16<td>F32
    <tr><td>F16<td>QASYMM8, F32
    <tr><td>F32<td>QASYMM8, F16, BFLOAT16
    </table>
<tr>
  <td>CLDepthConvertLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
    <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
    <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
    <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
    <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
    <tr><td>F16<td>U8, S8, U16, S16, U32, F32
    <tr><td>F32<td>U8, S8, U16, S16, U32, F16
    </table>
<tr>
  <td rowspan="2">DepthToSpaceLayer
  <td rowspan="2" style="width:200px;"> Depth to Space transformation.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_DEPTH_TO_SPACE
      </ul>
  <td>NEDepthToSpaceLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLDepthToSpaceLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">DepthwiseConvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to perform depthwise separable convolution.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_DEPTHWISE_CONV_2D
      </ul>
  <td>NEDepthwiseConvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
    </table>
<tr>
  <td>CLDepthwiseConvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
    </table>
<tr>
  <td rowspan="2">DequantizationLayer
  <td rowspan="2" style="width:200px;"> Function to dequantize the values in a tensor.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_DEQUANTIZE
      </ul>
  <td>NEDequantizationLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>F16, F32
    <tr><td>QASYMM8_SIGNED<td>F16, F32
    <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
    <tr><td>QSYMM8<td>F16, F32
    <tr><td>QSYMM16<td>F16, F32
    </table>
<tr>
  <td>CLDequantizationLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>F16, F32
    <tr><td>QASYMM8_SIGNED<td>F16, F32
    <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
    <tr><td>QSYMM8<td>F16, F32
    <tr><td>QSYMM16<td>F16, F32
    </table>
<tr>
  <td rowspan="1">DetectionPostProcessLayer
  <td rowspan="1" style="width:200px;"> Function to generate the detection output based on center size encoded boxes, class prediction and anchors by doing non-maximum suppression (NMS).
  <td rowspan="1">
      <ul>
       <li>ANEURALNETWORKS_DETECTION_POSTPROCESSING
      </ul>
  <td>NEDetectionPostProcessLayer
  <td>
      <ul>
       <li>All
      </ul>
  <td>
    <table>
    <tr><th>src0 - src2<th>dst0 - dst3
    <tr><td>QASYMM8<td>F32
    <tr><td>QASYMM8_SIGNED<td>F32
    <tr><td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">DirectConvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to compute direct convolution.
  <td rowspan="2">
      <ul>
       <li>ANEURALNETWORKS_CONV_2D
      </ul>
  <td>NEDirectConvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td>CLDirectConvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    </table>
<tr>
  <td rowspan="1">DirectDeconvolutionLayer
  <td rowspan="1" style="width:200px;"> Function to run the deconvolution layer.
  <td rowspan="1">
      <ul>
       <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
      </ul>
  <td>CLDirectDeconvolutionLayer
  <td>
      <ul>
       <li>NHWC
       <li>NCHW
      </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
    <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
    </table>
1013<tr>
Jakub Sujakee301b32021-06-04 09:46:08 +01001014 <td rowspan="13">ElementwiseOperations
Sheri Zhang6124ce62021-05-04 14:03:13 +01001015 <td rowspan="13" style="width:200px;"> Function to perform in Cpu: - Div - Max - Min - Pow - SquaredDiff - Comparisons (Equal, greater, greater_equal, less, less_equal, not_equal) Function to perform in CL: - Add - Sub - Div - Max - Min - Pow - SquaredDiff
1016 <td rowspan="13">
1017 <ul>
1018 <li>ANEURALNETWORKS_MAXIMUM
1019 <li>ANEURALNETWORKS_MINIMUM
1020 <li>ANEURALNETWORKS_POW
1021 <li>ANEURALNETWORKS_DIV
1022 <li>ANEURALNETWORKS_ADD
1023 <li>ANEURALNETWORKS_SUB
1024 <li>ANEURALNETWORKS_EQUAL
1025 <li>ANEURALNETWORKS_GREATER
1026 <li>ANEURALNETWORKS_GREATER_EQUAL
1027 <li>ANEURALNETWORKS_LESS
1028 <li>ANEURALNETWORKS_LESS_EQUAL
1029 <li>ANEURALNETWORKS_NOT_EQUAL
1030 </ul>
1031 <td>NEElementwiseMax
1032 <td>
1033 <ul>
1034 <li>All
1035 </ul>
1036 <td>
1037 <table>
1038 <tr><th>src0<th>src1<th>dst
1039 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1040 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1041 <tr><td>S32<td>S32<td>S32
1042 <tr><td>S16<td>S16<td>S16
1043 <tr><td>F16<td>F16<td>F16
1044 <tr><td>F32<td>F32<td>F32
1045 </table>
1046<tr>
1047 <td>NEElementwiseMin
1048 <td>
1049 <ul>
1050 <li>All
1051 </ul>
1052 <td>
1053 <table>
1054 <tr><th>src0<th>src1<th>dst
1055 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1056 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1057 <tr><td>S32<td>S32<td>S32
1058 <tr><td>S16<td>S16<td>S16
1059 <tr><td>F16<td>F16<td>F16
1060 <tr><td>F32<td>F32<td>F32
1061 </table>
1062<tr>
1063 <td>NEElementwiseSquaredDiff
1064 <td>
1065 <ul>
1066 <li>All
1067 </ul>
1068 <td>
1069 <table>
1070 <tr><th>src0<th>src1<th>dst
1071 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1072 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1073 <tr><td>S32<td>S32<td>S32
1074 <tr><td>S16<td>S16<td>S16
1075 <tr><td>F16<td>F16<td>F16
1076 <tr><td>F32<td>F32<td>F32
1077 </table>
1078<tr>
1079 <td>NEElementwiseDivision
1080 <td>
1081 <ul>
1082 <li>All
1083 </ul>
1084 <td>
1085 <table>
1086 <tr><th>src0<th>src1<th>dst
1087 <tr><td>F16<td>F16<td>F16
1088 <tr><td>F32<td>F32<td>F32
1089 </table>
1090<tr>
1091 <td>NEElementwisePower
1092 <td>
1093 <ul>
1094 <li>All
1095 </ul>
1096 <td>
1097 <table>
1098 <tr><th>src0<th>src1<th>dst
1099 <tr><td>F16<td>F16<td>F16
1100 <tr><td>F32<td>F32<td>F32
1101 </table>
1102<tr>
1103 <td>NEElementwiseComparison
1104 <td>
1105 <ul>
1106 <li>All
1107 </ul>
1108 <td>
1109 <table>
1110 <tr><th>src0<th>src1<th>dst
1111 <tr><td>QASYMM8<td>QASYMM8<td>U8
1112 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>U8
1113 <tr><td>S32<td>S32<td>U8
1114 <tr><td>U8<td>U8<td>U8
1115 <tr><td>S16<td>S16<td>U8
1116 <tr><td>F16<td>F16<td>U8
1117 <tr><td>F32<td>F32<td>U8
1118 </table>
1119<tr>
1120 <td>CLArithmeticAddition
1121 <td>
1122 <ul>
1123 <li>All
1124 </ul>
1125 <td>
1126 <table>
1127 <tr><th>src0<th>src1<th>dst
1128 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1129 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1130 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1131 <tr><td>U8<td>U8<td>U8
1132 <tr><td>U8<td>U8<td>S16
1133 <tr><td>U8<td>S16<td>S16
1134 <tr><td>S16<td>U8<td>S16
1135 <tr><td>S16<td>S16<td>S16
1136 <tr><td>S32<td>S32<td>S32
1137 <tr><td>F16<td>F16<td>F16
1138 <tr><td>F32<td>F32<td>F32
1139 </table>
1140<tr>
1141 <td>CLArithmeticSubtraction
1142 <td>
1143 <ul>
1144 <li>All
1145 </ul>
1146 <td>
1147 <table>
1148 <tr><th>src0<th>src1<th>dst
1149 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1150 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1151 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1152 <tr><td>U8<td>U8<td>U8
1153 <tr><td>U8<td>U8<td>S16
1154 <tr><td>U8<td>S16<td>S16
1155 <tr><td>S16<td>U8<td>S16
1156 <tr><td>S16<td>S16<td>S16
1157 <tr><td>S32<td>S32<td>S32
1158 <tr><td>F16<td>F16<td>F16
1159 <tr><td>F32<td>F32<td>F32
1160 </table>
1161<tr>
1162 <td>CLArithmeticDivision
1163 <td>
1164 <ul>
1165 <li>All
1166 </ul>
1167 <td>
1168 <table>
1169 <tr><th>src0<th>src1<th>dst
1170 <tr><td>F16<td>F16<td>F16
1171 <tr><td>F32<td>F32<td>F32
1172 </table>
1173<tr>
1174 <td>CLElementwiseMax
1175 <td>
1176 <ul>
1177 <li>All
1178 </ul>
1179 <td>
1180 <table>
1181 <tr><th>src0<th>src1<th>dst
1182 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1183 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1184       <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
1185 <tr><td>U8<td>U8<td>U8
1186 <tr><td>S16<td>S16<td>S16
1187 <tr><td>S32<td>S32<td>S32
1188 <tr><td>U32<td>U32<td>U32
1189 <tr><td>F16<td>F16<td>F16
1190 <tr><td>F32<td>F32<td>F32
1191 </table>
1192<tr>
1193 <td>CLElementwiseMin
1194 <td>
1195 <ul>
1196 <li>All
1197 </ul>
1198 <td>
1199 <table>
1200 <tr><th>src0<th>src1<th>dst
1201 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1202 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1203       <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
1204 <tr><td>U8<td>U8<td>U8
1205 <tr><td>S16<td>S16<td>S16
1206 <tr><td>S32<td>S32<td>S32
1207 <tr><td>U32<td>U32<td>U32
1208 <tr><td>F16<td>F16<td>F16
1209 <tr><td>F32<td>F32<td>F32
1210 </table>
1211<tr>
1212 <td>CLElementwiseSquaredDiff
1213 <td>
1214 <ul>
1215 <li>All
1216 </ul>
1217 <td>
1218 <table>
1219 <tr><th>src0<th>src1<th>dst
1220 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1221 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1222       <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
1223 <tr><td>U8<td>U8<td>U8
1224 <tr><td>S16<td>S16<td>S16
1225 <tr><td>F16<td>F16<td>F16
1226 <tr><td>F32<td>F32<td>F32
1227 </table>
1228<tr>
1229 <td>CLElementwisePower
1230 <td>
1231 <ul>
1232 <li>All
1233 </ul>
1234 <td>
1235 <table>
1236 <tr><th>src0<th>src1<th>dst
1237 <tr><td>F16<td>F16<td>F16
1238 <tr><td>F32<td>F32<td>F32
1239 </table>
1240<tr>
1241 <td rowspan="8">ElementwiseUnaryLayer
1242 <td rowspan="8" style="width:200px;"> Function to perform: - Rsqrt - Exp - Neg - Log - Abs - Round - Sin
1243 <td rowspan="8">
1244 <ul>
1245 <li>ANEURALNETWORKS_ABS
1246 <li>ANEURALNETWORKS_EXP
1247 <li>ANEURALNETWORKS_LOG
1248 <li>ANEURALNETWORKS_NEG
1249 <li>ANEURALNETWORKS_RSQRT
1250 <li>ANEURALNETWORKS_SIN
1251 </ul>
1252 <td>NEElementwiseUnaryLayer
1253 <td>
1254 <ul>
1255 <li>All
1256 </ul>
1257 <td>
1258 <table>
1259 <tr><th>src<th>dst
1260 <tr><td>F16<td>F16
1261 <tr><td>F32<td>F32
1262 <tr><td>S32<td>S32
1263 </table>
1264<tr>
1265 <td>CLRsqrtLayer
1266 <td>
1267 <ul>
1268 <li>All
1269 </ul>
1270 <td>
1271 <table>
1272 <tr><th>src<th>dst
1273 <tr><td>F16<td>F16
1274 <tr><td>F32<td>F32
1275 </table>
1276<tr>
1277 <td>CLExpLayer
1278 <td>
1279 <ul>
1280 <li>All
1281 </ul>
1282 <td>
1283 <table>
1284 <tr><th>src<th>dst
1285 <tr><td>F16<td>F16
1286 <tr><td>F32<td>F32
1287 </table>
1288<tr>
1289 <td>CLNegLayer
1290 <td>
1291 <ul>
1292 <li>All
1293 </ul>
1294 <td>
1295 <table>
1296 <tr><th>src<th>dst
1297 <tr><td>F16<td>F16
1298 <tr><td>F32<td>F32
1299       <tr><td>S32<td>S32
1300     </table>
1301<tr>
1302 <td>CLSinLayer
1303 <td>
1304 <ul>
1305 <li>All
1306 </ul>
1307 <td>
1308 <table>
1309 <tr><th>src<th>dst
1310 <tr><td>F16<td>F16
1311 <tr><td>F32<td>F32
1312 </table>
1313<tr>
1314 <td>CLLogLayer
1315 <td>
1316 <ul>
1317 <li>All
1318 </ul>
1319 <td>
1320 <table>
1321 <tr><th>src<th>dst
1322 <tr><td>F16<td>F16
1323 <tr><td>F32<td>F32
1324 </table>
1325<tr>
1326 <td>CLAbsLayer
1327 <td>
1328 <ul>
1329 <li>All
1330 </ul>
1331 <td>
1332 <table>
1333 <tr><th>src<th>dst
1334 <tr><td>F16<td>F16
1335 <tr><td>F32<td>F32
1336 </table>
1337<tr>
1338 <td>CLRoundLayer
1339 <td>
1340 <ul>
1341 <li>All
1342 </ul>
1343 <td>
1344 <table>
1345 <tr><th>src<th>dst
1346 <tr><td>F16<td>F16
1347 <tr><td>F32<td>F32
1348 </table>
1349<tr>
1350     <td rowspan="2">FFT1D
1351     <td rowspan="2" style="width:200px;"> Fast Fourier Transform 1D.
1352     <td rowspan="2">
1353      <ul>
1354       <li>n/a
1355      </ul>
1356 <td>NEFFT1D
1357 <td>
1358 <ul>
1359 <li>All
1360 </ul>
1361 <td>
1362 <table>
1363 <tr><th>src<th>dst
1364 <tr><td>F32<td>F32
1365 </table>
1366<tr>
1367 <td>CLFFT1D
1368 <td>
1369 <ul>
1370 <li>All
1371 </ul>
1372 <td>
1373 <table>
1374 <tr><th>src<th>dst
1375 <tr><td>F32<td>F32
1376 <tr><td>F16<td>F16
1377 </table>
1378<tr>
1379 <td rowspan="2">FFT2D
1380     <td rowspan="2" style="width:200px;"> Fast Fourier Transform 2D.
1381     <td rowspan="2">
1382      <ul>
1383       <li>n/a
1384      </ul>
1385 <td>NEFFT2D
1386 <td>
1387 <ul>
1388 <li>All
1389 </ul>
1390 <td>
1391 <table>
1392 <tr><th>src<th>dst
1393 <tr><td>F32<td>F32
1394 </table>
1395<tr>
1396 <td>CLFFT2D
1397 <td>
1398 <ul>
1399 <li>All
1400 </ul>
1401 <td>
1402 <table>
1403 <tr><th>src<th>dst
1404 <tr><td>F32<td>F32
1405 <tr><td>F16<td>F16
1406 </table>
1407<tr>
1408 <td rowspan="2">FFTConvolutionLayer
1409     <td rowspan="2" style="width:200px;"> Fast Fourier Transform Convolution.
1410     <td rowspan="2">
1411 <ul>
1412 <li>ANEURALNETWORKS_CONV_2D
1413 </ul>
1414 <td>NEFFTConvolutionLayer
1415 <td>
1416 <ul>
1417 <li>All
1418 </ul>
1419 <td>
1420 <table>
1421 <tr><th>src<th>dst
1422 <tr><td>F32<td>F32
1423 </table>
1424<tr>
1425 <td>CLFFTConvolutionLayer
1426 <td>
1427 <ul>
1428 <li>All
1429 </ul>
1430 <td>
1431 <table>
1432 <tr><th>src<th>dst
1433 <tr><td>F32<td>F32
1434 <tr><td>F16<td>F16
1435 </table>
1436<tr>
1437 <td rowspan="2">Fill
1438     <td rowspan="2" style="width:200px;"> Set the values of a tensor with a given value.
1439     <td rowspan="2">
1440 <ul>
1441 <li>ANEURALNETWORKS_FILL
1442 </ul>
1443 <td>NEFill
1444 <td>
1445 <ul>
1446 <li>All
1447 </ul>
1448 <td>
1449 <table>
1450 <tr><th>src<th>dst
1451 <tr><td>All<td>All
1452 </table>
1453<tr>
1454 <td>CLFill
1455 <td>
1456 <ul>
1457 <li>All
1458 </ul>
1459 <td>
1460 <table>
1461 <tr><th>src<th>dst
1462 <tr><td>All<td>All
1463 </table>
1464<tr>
1465     <td rowspan="1">FillBorder
1466     <td rowspan="1" style="width:200px;"> Function to fill the borders within the XY-planes.
1467     <td rowspan="1">
1468      <ul>
1469 <li>n/a
1470 </ul>
1471 <td>NEFillBorder
1472 <td>
1473 <ul>
1474 <li>All
1475 </ul>
1476 <td>
1477 <table>
1478 <tr><th>src<th>dst
1479 <tr><td>All<td>All
1480 </table>
1481<tr>
1482     <td rowspan="2">FlattenLayer
1483     <td rowspan="2" style="width:200px;"> Reshape a tensor to be 1D.
1484 <td rowspan="2">
1485 <ul>
1486 <li>ANEURALNETWORKS_RESHAPE
1487 </ul>
1488 <td>NEFlattenLayer
1489 <td>
1490 <ul>
1491 <li>All
1492 </ul>
1493 <td>
1494 <table>
1495 <tr><th>src<th>dst
1496 <tr><td>All<td>All
1497 </table>
1498<tr>
1499 <td>CLFlattenLayer
1500 <td>
1501 <ul>
1502 <li>All
1503 </ul>
1504 <td>
1505 <table>
1506 <tr><th>src<th>dst
1507 <tr><td>All<td>All
1508 </table>
1509<tr>
1510     <td rowspan="2">Floor
1511     <td rowspan="2" style="width:200px;"> Round each value down to the nearest integer.
1512     <td rowspan="2">
1513 <ul>
1514 <li>ANEURALNETWORKS_FLOOR
1515 </ul>
1516 <td>NEFloor
1517 <td>
1518 <ul>
1519 <li>All
1520 </ul>
1521 <td>
1522 <table>
1523 <tr><th>src<th>dst
1524 <tr><td>F32<td>F32
1525 <tr><td>F16<td>F16
1526 </table>
1527<tr>
1528 <td>CLFloor
1529 <td>
1530 <ul>
1531 <li>All
1532 </ul>
1533 <td>
1534 <table>
1535 <tr><th>src<th>dst
1536 <tr><td>F32<td>F32
1537 <tr><td>F16<td>F16
1538 </table>
1539<tr>
1540     <td rowspan="2">FullyConnectedLayer
1541 <td rowspan="2" style="width:200px;"> Function to perform a fully connected / dense layer.
1542 <td rowspan="2">
1543 <ul>
1544 <li>ANEURALNETWORKS_FULLY_CONNECTED
1545 </ul>
1546     <td>NEFullyConnectedLayer
1547     <td>
1548 <ul>
1549 <li>NHWC
1550 <li>NCHW
1551 </ul>
1552 <td>
1553 <table>
1554 <tr><th>src0<th>src1<th>src2<th>dst
1555 <tr><td>F16<td>F16<td>F16<td>F16
1556 <tr><td>F32<td>F32<td>F32<td>F32
1557 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1558 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1559 </table>
1560<tr>
1561     <td>CLFullyConnectedLayer
1562     <td>
1563 <ul>
1564 <li>NHWC
1565 <li>NCHW
1566 </ul>
1567 <td>
1568 <table>
1569 <tr><th>src0<th>src1<th>src2<th>dst
1570 <tr><td>F16<td>F16<td>F16<td>F16
1571 <tr><td>F32<td>F32<td>F32<td>F32
1572 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1573 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1574 </table>
1575<tr>
1576 <td rowspan="2">FuseBatchNormalization
1577 <td rowspan="2" style="width:200px;"> Function to fuse the batch normalization node to a preceding convolution node.
1578 <td rowspan="2">
1579 <ul>
1580 <li>n/a
1581 </ul>
1582 <td>NEFuseBatchNormalization
1583 <td>
1584 <ul>
1585 <li>NHWC
1586 <li>NCHW
1587 </ul>
1588 <td>
1589 <table>
1590 <tr><th>src<th>dst
1591 <tr><td>F32<td>F32
1592 <tr><td>F16<td>F16
1593 </table>
1594<tr>
1595 <td>CLFuseBatchNormalization
1596 <td>
1597 <ul>
1598 <li>NHWC
1599 <li>NCHW
1600 </ul>
1601 <td>
1602 <table>
1603 <tr><th>src<th>dst
1604 <tr><td>F32<td>F32
1605 <tr><td>F16<td>F16
1606 </table>
1607<tr>
1608 <td rowspan="2">Gather
1609 <td rowspan="2" style="width:200px;"> Performs the Gather operation along the chosen axis.
1610 <td rowspan="2">
1611 <ul>
1612 <li>ANEURALNETWORKS_GATHER
1613 </ul>
1614 <td>NEGather
1615 <td>
1616 <ul>
1617 <li>All
1618 </ul>
1619 <td>
1620 <table>
1621 <tr><th>src<th>dst
1622 <tr><td>All<td>All
1623 </table>
1624<tr>
1625 <td>CLGather
1626 <td>
1627 <ul>
1628 <li>All
1629 </ul>
1630 <td>
1631 <table>
1632 <tr><th>src<th>dst
1633 <tr><td>All<td>All
1634 </table>
1635<tr>
1636 <td rowspan="2">GEMM
1637 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1638 <td rowspan="2">
1639 <ul>
1640 <li>n/a
1641 </ul>
1642 <td>NEGEMM
1643 <td>
1644 <ul>
1645 <li>All
1646 </ul>
1647 <td>
1648 <table>
1649 <tr><th>src0<th>src1<th>src2<th>dst
1650 <tr><td>F32<td>F32<td>F32<td>F32
1651 <tr><td>F16<td>F16<td>F16<td>F16
1652 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1653 </table>
1654<tr>
1655     <td>CLGEMM
1656     <td>
1657 <ul>
1658 <li>All
1659 </ul>
1660 <td>
1661 <table>
1662 <tr><th>src0<th>src1<th>src2<th>dst
1663 <tr><td>F32<td>F32<td>F32<td>F32
1664 <tr><td>F16<td>F16<td>F16<td>F16
1665 </table>
1666<tr>
1667     <td rowspan="1">GEMMConv2d
1668     <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1669 <td rowspan="1">
1670 <ul>
1671 <li>ANEURALNETWORKS_CONV_2D
1672 </ul>
1673 <td>NEGEMMConv2d
1674 <td>
1675 <ul>
1676 <li>All
1677 </ul>
1678 <td>
1679 <table>
1680 <tr><th>src0<th>src1<th>src2<th>dst
1681 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1682 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1683 <tr><td>F16<td>F16<td>F16<td>F16
1684 <tr><td>F32<td>F32<td>F32<td>F32
1685 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1686 </table>
1687<tr>
1688     <td rowspan="2">GEMMConvolutionLayer
1689 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1690 <td rowspan="2">
1691 <ul>
1692 <li>ANEURALNETWORKS_CONV_2D
1693 </ul>
1694     <td>NEGEMMConvolutionLayer
1695     <td>
1696 <ul>
1697 <li>NHWC
1698 <li>NCHW
1699 </ul>
1700 <td>
1701 <table>
1702 <tr><th>src0<th>src1<th>src2<th>dst
1703 <tr><td>F16<td>F16<td>F16<td>F16
1704 <tr><td>F32<td>F32<td>F32<td>F32
1705 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1706 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1707 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1708 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1709 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1710 </table>
1711<tr>
1712     <td>CLGEMMConvolutionLayer
1713     <td>
1714 <ul>
1715 <li>NHWC
1716 <li>NCHW
1717 </ul>
1718 <td>
1719 <table>
1720 <tr><th>src0<th>src1<th>src2<th>dst
1721 <tr><td>F16<td>F16<td>F16<td>F16
1722 <tr><td>F32<td>F32<td>F32<td>F32
1723 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1724 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1725 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1726 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1727 </table>
1728<tr>
1729     <td rowspan="1">GEMMDeconvolutionLayer
1730 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1731 <td rowspan="1">
1732 <ul>
1733 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
1734 </ul>
1735 <td>CLGEMMDeconvolutionLayer
1736 <td>
1737 <ul>
1738 <li>NHWC
1739 </ul>
1740 <td>
1741 <table>
1742 <tr><th>src0<th>src1<th>src2<th>dst
1743 <tr><td>F16<td>F16<td>F16<td>F16
1744 <tr><td>F32<td>F32<td>F32<td>F32
1745 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1746 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1747 </table>
1748<tr>
1749     <td rowspan="2">GEMMLowpMatrixMultiplyCore
1750 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1751 <td rowspan="2">
1752 <ul>
1753 <li>n/a
1754 </ul>
1755 <td>NEGEMMLowpMatrixMultiplyCore
1756 <td>
1757 <ul>
1758 <li>NHWC
1759 <li>NCHW
1760 </ul>
1761 <td>
1762 <table>
1763 <tr><th>src0<th>src1<th>src2<th>dst
1764 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1765 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1766 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1767 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1768 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1769 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1770 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1771 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1772 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1773 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1774 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1775 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1776       <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>F32<td>F32
1777     </table>
1778<tr>
1779 <td>CLGEMMLowpMatrixMultiplyCore
1780 <td>
1781 <ul>
1782 <li>NHWC
1783 <li>NCHW
1784 </ul>
1785 <td>
1786 <table>
1787 <tr><th>src0<th>src1<th>src2<th>dst
1788 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1789 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1790 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1791 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1792 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1793 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1794 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1795 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1796 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1797 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1798 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1799 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1800 </table>
1801<tr>
1802     <td rowspan="2">GEMMLowpOutputStage
1803 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1804 <td rowspan="2">
1805 <ul>
1806 <li>n/a
1807 </ul>
1808 <td>NEGEMMLowpOutputStage
1809 <td>
1810 <ul>
1811 <li>All
1812 </ul>
1813 <td>
1814 <table>
1815 <tr><th>src0<th>src1<th>dst
1816 <tr><td>S32<td>S32<td>QASYMM8
1817 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1818 <tr><td>S32<td>S32<td>QSYMM16
1819 </table>
1820<tr>
1821 <td>CLGEMMLowpOutputStage
1822 <td>
1823 <ul>
1824 <li>All
1825 </ul>
1826 <td>
1827 <table>
1828 <tr><th>src0<th>src1<th>dst
1829 <tr><td>S32<td>S32<td>QASYMM8
1830 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1831 <tr><td>S32<td>S32<td>QSYMM16
1832 </table>
1833<tr>
1834     <td rowspan="2">GenerateProposalsLayer
1835 <td rowspan="2" style="width:200px;"> Function to generate proposals for a RPN (Region Proposal Network).
1836 <td rowspan="2">
1837 <ul>
1838 <li>ANEURALNETWORKS_GENERATE_PROPOSALS
1839 </ul>
1840 <td>NEGenerateProposalsLayer
1841 <td>
1842 <ul>
1843 <li>All
1844 </ul>
1845 <td>
1846 <table>
1847 <tr><th>src0<th>src1<th>src2<th>dst
1848 <tr><td>F16<td>F16<td>F16<td>F16
1849 <tr><td>F32<td>F32<td>F32<td>F32
1850 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1851 </table>
1852<tr>
1853 <td>CLGenerateProposalsLayer
1854 <td>
1855 <ul>
1856 <li>All
1857 </ul>
1858 <td>
1859 <table>
1860 <tr><th>src0<th>src1<th>src2<th>dst
1861 <tr><td>F16<td>F16<td>F16<td>F16
1862 <tr><td>F32<td>F32<td>F32<td>F32
1863 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1864 </table>
1865<tr>
1866 <td rowspan="2">InstanceNormalizationLayer
1867     <td rowspan="2" style="width:200px;"> Function to perform an instance normalization on a given axis.
1868 <td rowspan="2">
1869 <ul>
1870 <li>ANEURALNETWORKS_INSTANCE_NORMALIZATION
1871 </ul>
1872 <td>NEInstanceNormalizationLayer
1873 <td>
1874 <ul>
1875 <li>NHWC
1876 <li>NCHW
1877 </ul>
1878 <td>
1879 <table>
1880 <tr><th>src<th>dst
1881 <tr><td>F16<td>F16
1882 <tr><td>F32<td>F32
1883 </table>
1884<tr>
1885 <td>CLInstanceNormalizationLayer
1886 <td>
1887 <ul>
1888 <li>NHWC
1889 <li>NCHW
1890 </ul>
1891 <td>
1892 <table>
1893 <tr><th>src<th>dst
1894 <tr><td>F16<td>F16
1895 <tr><td>F32<td>F32
1896 </table>
1897<tr>
1898 <td rowspan="2">L2NormalizeLayer
1899     <td rowspan="2" style="width:200px;"> Function to perform an L2 normalization on a given axis.
1900 <td rowspan="2">
1901 <ul>
1902 <li>ANEURALNETWORKS_L2_NORMALIZATION
1903 </ul>
1904 <td>NEL2NormalizeLayer
1905 <td>
1906 <ul>
1907 <li>NHWC
1908 <li>NCHW
1909 </ul>
1910 <td>
1911 <table>
1912 <tr><th>src<th>dst
1913 <tr><td>F16<td>F16
1914 <tr><td>F32<td>F32
1915 </table>
1916<tr>
1917 <td>CLL2NormalizeLayer
1918 <td>
1919 <ul>
1920 <li>NHWC
1921 <li>NCHW
1922 </ul>
1923 <td>
1924 <table>
1925 <tr><th>src<th>dst
1926 <tr><td>F16<td>F16
1927 <tr><td>F32<td>F32
1928 </table>
1929<tr>
1930     <td rowspan="3">Logical
1931 <td rowspan="3" style="width:200px;"> Function to perform: - Logical AND - Logical OR - Logical NOT
1932 <td rowspan="3">
1933 <ul>
1934 <li>n/a
1935 </ul>
1936 <td>NELogicalAnd
1937 <td>
1938 <ul>
1939 <li>All
1940 </ul>
1941 <td>
1942 <table>
1943 <tr><th>src0<th>src1<th>dst
1944 <tr><td>U8<td>U8<td>U8
1945 </table>
1946<tr>
1947 <td>NELogicalOr
1948 <td>
1949 <ul>
1950 <li>All
1951 </ul>
1952 <td>
1953 <table>
1954 <tr><th>src0<th>src1<th>dst
1955 <tr><td>U8<td>U8<td>U8
1956 </table>
1957<tr>
1958 <td>NELogicalNot
1959 <td>
1960 <ul>
1961 <li>All
1962 </ul>
1963 <td>
1964 <table>
1965 <tr><th>src<th>dst
1966 <tr><td>U8<td>U8
1967 </table>
1968<tr>
1969 <td rowspan="1">LogicalAnd
1970 <td rowspan="1" style="width:200px;"> Function to perform Logical AND.
1971 <td rowspan="1">
1972 <ul>
1973 <li>n/a
1974 </ul>
1975 <td>CLLogicalAnd
1976 <td>
1977 <ul>
1978 <li>All
1979 </ul>
1980 <td>
1981 <table>
1982 <tr><th>src0<th>src1<th>dst
1983 <tr><td>U8<td>U8<td>U8
1984 </table>
1985<tr>
1986 <td rowspan="1">LogicalOr
1987 <td rowspan="1" style="width:200px;"> Function to perform Logical OR.
1988 <td rowspan="1">
1989 <ul>
1990 <li>n/a
1991 </ul>
1992 <td>CLLogicalOr
1993 <td>
1994 <ul>
1995 <li>All
1996 </ul>
1997 <td>
1998 <table>
1999 <tr><th>src0<th>src1<th>dst
2000 <tr><td>U8<td>U8<td>U8
2001 </table>
2002<tr>
2003 <td rowspan="1">LogicalNot
2004 <td rowspan="1" style="width:200px;"> Function to perform Logical NOT.
2005 <td rowspan="1">
2006 <ul>
2007 <li>n/a
2008 </ul>
2009 <td>CLLogicalNot
2010 <td>
2011 <ul>
2012 <li>All
2013 </ul>
2014 <td>
2015 <table>
2016 <tr><th>src<th>dst
2017 <tr><td>U8<td>U8
2018 </table>
2019<tr>
2020     <td rowspan="2">LSTMLayer
2021 <td rowspan="2" style="width:200px;"> Function to perform a single time step in a Long Short-Term Memory (LSTM) layer.
2022 <td rowspan="2">
2023 <ul>
2024 <li>ANEURALNETWORKS_LSTM
2025 </ul>
2026 <td>NELSTMLayer
2027 <td>
2028 <ul>
2029 <li>All
2030 </ul>
2031 <td>
2032 <table>
2033 <tr><th>src0 - src13<th>dst0 - dst3
2034 <tr><td>F16<td>F16
2035 <tr><td>F32<td>F32
2036 </table>
2037<tr>
2038 <td>CLLSTMLayer
2039 <td>
2040 <ul>
2041 <li>All
2042 </ul>
2043 <td>
2044 <table>
2045 <tr><th>src0 - src13<th>dst0 - dst3
2046 <tr><td>F16<td>F16
2047 <tr><td>F32<td>F32
2048 </table>
2049<tr>
2050 <td rowspan="2">LSTMLayerQuantized
2051     <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2052 <td rowspan="2">
2053 <ul>
2054 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2055 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2056 </ul>
2057 <td>NELSTMLayerQuantized
2058 <td>
2059 <ul>
2060 <li>All
2061 </ul>
2062 <td>
2063 <table>
2064 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2065 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2066 </table>
2067<tr>
2068 <td>CLLSTMLayerQuantized
2069 <td>
2070 <ul>
2071 <li>All
2072 </ul>
2073 <td>
2074 <table>
2075 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2076 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2077 </table>
2078<tr>
2079     <td rowspan="2">MatMul
2080 <td rowspan="2" style="width:200px;"> Computes a matrix multiplication in batches.
2081 <td rowspan="2">
2082 <ul>
2083 <li>ANEURALNETWORKS_BATCH_MATMUL
2084 </ul>
2085 <td>NEMatMul
2086 <td>
2087 <ul>
2088 <li>Any
2089 </ul>
2090 <td>
2091 <table>
2092 <tr><th>lhs<th>rhs<th>dst
2093 <tr><td>F32<td>F32<td>F32
2094 <tr><td>F16<td>F16<td>F16
2095       <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
2096       <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2097 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2098 </table>
2099<tr>
2100 <td>CLMatMul
2101 <td>
2102 <ul>
2103 <li>All
2104 </ul>
2105 <td>
2106 <table>
2107 <tr><th>lhs<th>rhs<th>dst
2108 <tr><td>F32<td>F32<td>F32
2109 <tr><td>F16<td>F16<td>F16
2110 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2111 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2112 </table>
2113<tr>
2114     <td rowspan="2">MaxUnpoolingLayer
2115 <td rowspan="2" style="width:200px;"> Function to perform MaxUnpooling.
2116 <td rowspan="2">
2117 <ul>
2118 <li>n/a
2119 </ul>
2120 <td>NEMaxUnpoolingLayer
2121 <td>
2122 <ul>
2123 <li>NHWC
2124 <li>NCHW
2125 </ul>
2126 <td>
2127 <table>
2128 <tr><th>src<th>dst
2129 <tr><td>QASYMM8<td>QASYMM8
2130 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2131 <tr><td>F16<td>F16
2132 <tr><td>F32<td>F32
2133 </table>
2134<tr>
2135 <td>CLMaxUnpoolingLayer
2136 <td>
2137 <ul>
2138 <li>NHWC
2139 <li>NCHW
2140 </ul>
2141 <td>
2142 <table>
2143 <tr><th>src<th>dst
2144 <tr><td>QASYMM8<td>QASYMM8
2145 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2146 <tr><td>F16<td>F16
2147 <tr><td>F32<td>F32
2148 </table>
2149<tr>
2150 <td rowspan="2">MeanStdDevNormalizationLayer
2151 <td rowspan="2" style="width:200px;"> Function to execute mean and standard deviation normalization.
2152 <td rowspan="2">
2153 <ul>
2154 <li>n/a
2155 </ul>
2156 <td>NEMeanStdDevNormalizationLayer
2157 <td>
2158 <ul>
2159 <li>NHWC
2160 <li>NCHW
2161 </ul>
2162 <td>
2163 <table>
2164 <tr><th>src<th>dst
2165 <tr><td>F32<td>F32
2166 <tr><td>F16<td>F16
2167 </table>
2168<tr>
2169 <td>CLMeanStdDevNormalizationLayer
2170 <td>
2171 <ul>
2172 <li>NHWC
2173 <li>NCHW
2174 </ul>
2175 <td>
2176 <table>
2177 <tr><th>src<th>dst
2178 <tr><td>F32<td>F32
2179 <tr><td>F16<td>F16
2180 </table>
2181<tr>
2182 <td rowspan="2">NormalizationLayer
2183 <td rowspan="2" style="width:200px;"> Function to compute normalization layer.
2184 <td rowspan="2">
2185 <ul>
2186 <li>ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION
2187 </ul>
2188 <td>NENormalizationLayer
2189 <td>
2190 <ul>
2191 <li>NHWC
2192 <li>NCHW
2193 </ul>
2194 <td>
2195 <table>
2196 <tr><th>src<th>dst
2197 <tr><td>F32<td>F32
2198 <tr><td>F16<td>F16
2199 </table>
2200<tr>
2201 <td>CLNormalizationLayer
2202 <td>
2203 <ul>
2204 <li>NHWC
2205 <li>NCHW
2206 </ul>
2207 <td>
2208 <table>
2209 <tr><th>src<th>dst
2210 <tr><td>F32<td>F32
2211 <tr><td>F16<td>F16
2212 </table>
2213<tr>
2214     <td rowspan="1">NormalizePlanarYUVLayer
2215 <td rowspan="1" style="width:200px;"> Function to compute normalization planar YUV layer.
2216 <td rowspan="1">
2217 <ul>
2218 <li>n/a
2219 </ul>
2220 <td>CLNormalizePlanarYUVLayer
2221 <td>
2222 <ul>
2223 <li>NHWC
2224 <li>NCHW
2225 </ul>
2226 <td>
2227 <table>
2228 <tr><th>src<th>dst
2229 <tr><td>F32<td>F32
2230 <tr><td>F16<td>F16
2231 <tr><td>QASYMM8<td>QASYMM8
2232 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2233 </table>
2234<tr>
2235     <td rowspan="2">PadLayer
2236 <td rowspan="2" style="width:200px;"> Function to pad a tensor.
2237 <td rowspan="2">
2238 <ul>
2239 <li>ANEURALNETWORKS_PAD
2240 <li>ANEURALNETWORKS_PAD_V2
2241 </ul>
2242 <td>NEPadLayer
2243 <td>
2244 <ul>
2245 <li>NHWC
2246 <li>NCHW
2247 </ul>
2248 <td>
2249 <table>
2250 <tr><th>src<th>dst
2251 <tr><td>All<td>All
2252 </table>
2253<tr>
2254 <td>CLPadLayer
2255 <td>
2256 <ul>
2257 <li>NHWC
2258 <li>NCHW
2259 </ul>
2260 <td>
2261 <table>
2262 <tr><th>src<th>dst
2263 <tr><td>All<td>All
2264 </table>
2265<tr>
2266     <td rowspan="2">Permute
2267 <td rowspan="2" style="width:200px;"> Function to transpose an ND tensor.
2268 <td rowspan="2">
2269 <ul>
2270 <li>ANEURALNETWORKS_TRANSPOSE
2271 </ul>
2272 <td>NEPermute
2273 <td>
2274 <ul>
2275 <li>NHWC
2276 <li>NCHW
2277 </ul>
2278 <td>
2279 <table>
2280 <tr><th>src<th>dst
2281 <tr><td>All<td>All
2282 </table>
2283<tr>
2284 <td>CLPermute
2285 <td>
2286 <ul>
2287 <li>NHWC
2288 <li>NCHW
2289 </ul>
2290 <td>
2291 <table>
2292 <tr><th>src<th>dst
2293 <tr><td>All<td>All
2294 </table>
2295<tr>
2296 <td rowspan="2">PixelWiseMultiplication
2297     <td rowspan="2" style="width:200px;"> Function to perform a multiplication.
2298     <td rowspan="2">
2299 <ul>
2300 <li>ANEURALNETWORKS_MUL
2301 </ul>
2302 <td>NEPixelWiseMultiplication
2303 <td>
2304 <ul>
2305 <li>All
2306 </ul>
2307 <td>
2308 <table>
2309 <tr><th>src0<th>src1<th>dst
2310 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2311 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2312       <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
2313 <tr><td>QSYMM16<td>QSYMM16<td>S32
2314 <tr><td>U8<td>U8<td>U8
2315 <tr><td>U8<td>U8<td>S16
2316 <tr><td>U8<td>S16<td>S16
2317 <tr><td>S16<td>U8<td>S16
2318 <tr><td>S16<td>S16<td>S16
2319 <tr><td>F16<td>F16<td>F16
2320 <tr><td>F32<td>S32<td>F32
2321 </table>
2322<tr>
2323 <td>CLPixelWiseMultiplication
2324 <td>
2325 <ul>
2326 <li>All
2327 </ul>
2328 <td>
2329 <table>
2330 <tr><th>src0<th>src1<th>dst
2331 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2332 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2333       <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
2334 <tr><td>QSYMM16<td>QSYMM16<td>S32
2335 <tr><td>U8<td>U8<td>U8
2336 <tr><td>U8<td>U8<td>S16
2337 <tr><td>U8<td>S16<td>S16
2338 <tr><td>S16<td>U8<td>S16
2339 <tr><td>S16<td>S16<td>S16
2340 <tr><td>F16<td>F16<td>F16
2341       <tr><td>F32<td>F32<td>F32
2342       <tr><td>S32<td>S32<td>S32
2343     </table>
2344<tr>
2345 <td rowspan="2">PoolingLayer
2346     <td rowspan="2" style="width:200px;"> Function to perform pooling with the specified pooling operation.
2347     <td rowspan="2">
2348 <ul>
2349 <li>ANEURALNETWORKS_AVERAGE_POOL_2D
2350 <li>ANEURALNETWORKS_L2_POOL_2D
2351 <li>ANEURALNETWORKS_MAX_POOL_2D
2352 </ul>
2353 <td>NEPoolingLayer
2354 <td>
2355 <ul>
2356 <li>NHWC
2357 <li>NCHW
2358 </ul>
2359 <td>
2360 <table>
2361 <tr><th>src<th>dst
2362 <tr><td>QASYMM8<td>QASYMM8
2363 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2364 <tr><td>F16<td>F16
2365 <tr><td>F32<td>F32
2366 </table>
2367<tr>
2368 <td>CLPoolingLayer
2369 <td>
2370 <ul>
2371 <li>NHWC
2372 <li>NCHW
2373 </ul>
2374 <td>
2375 <table>
2376 <tr><th>src<th>dst
2377 <tr><td>QASYMM8<td>QASYMM8
2378 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2379 <tr><td>F16<td>F16
2380 <tr><td>F32<td>F32
2381 </table>
2382<tr>
2383     <td rowspan="2">Pooling3dLayer
2384 <td rowspan="2" style="width:200px;"> Function to perform pooling 3D with the specified pooling operation.
2385 <td rowspan="2">
2386 <ul>
2387       <li>n/a
2388 </ul>
2389 <td>NEPooling3dLayer
2390 <td>
2391 <ul>
2392 <li>NDHWC
2393 </ul>
2394 <td>
2395 <table>
2396 <tr><th>src<th>dst
2397 <tr><td>F16<td>F16
2398 <tr><td>F32<td>F32
2399       <tr><td>QASYMM8<td>QASYMM8
2400 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2401     </table>
2402<tr>
2403 <td>CLPooling3dLayer
2404 <td>
2405 <ul>
2406 <li>NDHWC
2407 </ul>
2408 <td>
2409 <table>
2410 <tr><th>src<th>dst
2411 <tr><td>F16<td>F16
2412 <tr><td>F32<td>F32
2413       <tr><td>QASYMM8<td>QASYMM8
2414 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2415     </table>
2416<tr>
2417     <td rowspan="2">PReluLayer
2418 <td rowspan="2" style="width:200px;"> Function to compute the activation layer with the PRELU activation function.
2419 <td rowspan="2">
2420 <ul>
2421 <li>ANEURALNETWORKS_PRELU
2422 </ul>
2423 <td>NEPReluLayer
2424 <td>
2425 <ul>
2426 <li>All
2427 </ul>
2428 <td>
2429 <table>
2430 <tr><th>src<th>dst
2431 <tr><td>QASYMM8<td>QASYMM8
2432 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2433 <tr><td>F16<td>F16
2434 <tr><td>F32<td>F32
2435 </table>
2436<tr>
2437 <td>CLPReluLayer
2438 <td>
2439 <ul>
2440 <li>All
2441 </ul>
2442 <td>
2443 <table>
2444 <tr><th>src<th>dst
2445 <tr><td>QASYMM8<td>QASYMM8
2446 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2447 <tr><td>F16<td>F16
2448 <tr><td>F32<td>F32
2449 </table>
2450<tr>
2451     <td rowspan="2">PriorBoxLayer
2452     <td rowspan="2" style="width:200px;"> Function to compute prior boxes and clip.
2453     <td rowspan="2">
2454 <ul>
2455 <li>n/a
2456 </ul>
2457 <td>NEPriorBoxLayer
2458 <td>
2459 <ul>
2460 <li>NHWC
2461 <li>NCHW
2462 </ul>
2463 <td>
2464 <table>
2465 <tr><th>src0<th>src1<th>dst
2466 <tr><td>F32<td>F32<td>F32
2467 </table>
2468<tr>
2469 <td>CLPriorBoxLayer
2470 <td>
2471 <ul>
2472 <li>NHWC
2473 <li>NCHW
2474 </ul>
2475 <td>
2476 <table>
2477 <tr><th>src0<th>src1<th>dst
2478 <tr><td>F32<td>F32<td>F32
2479 </table>
2480<tr>
2481 <td rowspan="2">QLSTMLayer
2482 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2483 <td rowspan="2">
2484 <ul>
2485 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2486 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2487 </ul>
2488 <td>NEQLSTMLayer
2489 <td>
2490 <ul>
2491 <li>All
2492 </ul>
2493 <td>
2494 <table>
2495       <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2496 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2497 </table>
2498<tr>
2499 <td>CLQLSTMLayer
2500 <td>
2501 <ul>
2502 <li>All
2503 </ul>
2504 <td>
2505 <table>
2506 <tr><th>src0<th>src1 - src6<th>src7 -src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
2507 <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
2508 </table>
2509<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +01002510 <td rowspan="2">QuantizationLayer
2511 <td rowspan="2" style="width:200px;"> Function to perform quantization layer
2512 <td rowspan="2">
2513 <ul>
2514 <li>ANEURALNETWORKS_QUANTIZE
2515 </ul>
2516 <td>NEQuantizationLayer
2517 <td>
2518 <ul>
2519 <li>All
2520 </ul>
2521 <td>
2522 <table>
2523 <tr><th>src<th>dst
Teresa Charlin62687422021-04-28 10:58:49 +01002524 <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2525 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2526 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2527 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
Sheri Zhanga47dcc22021-04-22 14:41:12 +01002528 </table>
2529<tr>
2530 <td>CLQuantizationLayer
2531 <td>
2532 <ul>
2533 <li>All
2534 </ul>
2535 <td>
2536 <table>
2537 <tr><th>src<th>dst
Teresa Charlin62687422021-04-28 10:58:49 +01002538 <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2539 <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2540 <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2541 <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
2542 </table>
<tr>
  <td rowspan="2">Range
  <td rowspan="2" style="width:200px;"> Function to generate a sequence of numbers starting from START and extending by increments of 'STEP' up to but not including 'END'.
  <td rowspan="2">
    <ul>
      <li>n/a
    </ul>
  <td>NERange
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>dst
    <tr><td>U8
    <tr><td>S8
    <tr><td>U16
    <tr><td>S16
    <tr><td>U32
    <tr><td>S32
    <tr><td>F16
    <tr><td>F32
    </table>
<tr>
  <td>CLRange
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>dst
    <tr><td>U8
    <tr><td>S8
    <tr><td>QASYMM8
    <tr><td>U16
    <tr><td>S16
    <tr><td>U32
    <tr><td>S32
    <tr><td>F16
    <tr><td>F32
    </table>
<tr>
  <td rowspan="2">ReduceMean
  <td rowspan="2" style="width:200px;"> Function to perform a reduce mean operation.
  <td rowspan="2">
    <ul>
      <li>ANEURALNETWORKS_MEAN
    </ul>
  <td>NEReduceMean
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td>CLReduceMean
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">ReductionOperation
  <td rowspan="2" style="width:200px;"> Function to perform a reduction with one of the following operations: - ARG_IDX_MAX: Index of the max value - ARG_IDX_MIN: Index of the min value - MEAN_SUM: Mean of sum - PROD: Product - SUM_SQUARE: Sum of squares - SUM: Sum - MIN: Min - MAX: Max
  <td rowspan="2">
    <ul>
      <li>ANEURALNETWORKS_REDUCE_ALL
      <li>ANEURALNETWORKS_REDUCE_ANY
      <li>ANEURALNETWORKS_REDUCE_MAX
      <li>ANEURALNETWORKS_REDUCE_MIN
      <li>ANEURALNETWORKS_REDUCE_PROD
      <li>ANEURALNETWORKS_REDUCE_SUM
    </ul>
  <td>NEReductionOperation
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    <tr><td>S32<td>S32
    </table>
<tr>
  <td>CLReductionOperation
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    <tr><td>S32<td>S32
    </table>
<tr>
  <td rowspan="1">ReorderLayer
  <td rowspan="1" style="width:200px;"> Reorders a tensor to a different weights format.
  <td rowspan="1">
    <ul>
      <li>n/a
    </ul>
  <td>NEReorderLayer
  <td>
    <ul>
      <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">ReorgLayer
  <td rowspan="2" style="width:200px;"> Performs a reorganization layer of input tensor to the output tensor.
  <td rowspan="2">
    <ul>
      <li>n/a
    </ul>
  <td>NEReorgLayer
  <td>
    <ul>
      <li>NHWC
      <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLReorgLayer
  <td>
    <ul>
      <li>NHWC
      <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">ReshapeLayer
  <td rowspan="2" style="width:200px;"> Function to reshape a tensor.
  <td rowspan="2">
    <ul>
      <li>ANEURALNETWORKS_RESHAPE
      <li>ANEURALNETWORKS_SQUEEZE
    </ul>
  <td>NEReshapeLayer
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLReshapeLayer
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Reverse
  <td rowspan="2" style="width:200px;"> Function to reverse a tensor according to an axis.
  <td rowspan="2">
    <ul>
      <li>n/a
    </ul>
  <td>NEReverse
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>U32, S32<td>All
    </table>
<tr>
  <td>CLReverse
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>U32, S32<td>All
    </table>
<tr>
  <td rowspan="2">RNNLayer
  <td rowspan="2" style="width:200px;"> Function to perform a recurrent neural network layer.
  <td rowspan="2">
    <ul>
      <li>ANEURALNETWORKS_RNN
    </ul>
  <td>NERNNLayer
  <td>
    <ul>
      <li>NHWC
      <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
    <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td>CLRNNLayer
  <td>
    <ul>
      <li>NHWC
      <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
    <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">ROIAlignLayer
  <td rowspan="2" style="width:200px;"> Function to perform ROI alignment.
  <td rowspan="2">
    <ul>
      <li>ANEURALNETWORKS_ROI_ALIGN
    </ul>
  <td>NEROIAlignLayer
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
    </table>
<tr>
  <td>CLROIAlignLayer
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
    </table>
<tr>
  <td rowspan="2">ROIPoolingLayer
  <td rowspan="2" style="width:200px;"> Function to perform ROI pooling.
  <td rowspan="2">
    <ul>
      <li>ANEURALNETWORKS_ROI_POOLING
    </ul>
  <td>NEROIPoolingLayer
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F32<td>U16<td>F32
    <tr><td>QASYMM8<td>U16<td>QASYMM8
    </table>
<tr>
  <td>CLROIPoolingLayer
  <td>
    <ul>
      <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F16<td>U16<td>F16
    <tr><td>F32<td>U16<td>F32
    <tr><td>QASYMM8<td>U16<td>QASYMM8
    </table>
2862<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +01002863 <td rowspan="2">Scale
Teresa Charlin62687422021-04-28 10:58:49 +01002864 <td rowspan="2" style="width:200px;"> Function to perform resize a tensor using to interpolate: - Bilinear - Nearest neighbor
Sheri Zhanga47dcc22021-04-22 14:41:12 +01002865 <td rowspan="2">
2866 <ul>
2867 <li>ANEURALNETWORKS_RESIZE_BILINEAR
2868 <li>ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
2869 </ul>
2870 <td>NEScale
2871 <td>
2872 <ul>
2873 <li>NHWC
2874 <li>NCHW
2875 </ul>
2876 <td>
2877 <table>
2878 <tr><th>src<th>dst
2879 <tr><td>QASYMM8<td>QASYMM8
2880 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2881 <tr><td>F16<td>F16
2882 <tr><td>F32<td>F32
2883 <tr><td>U8<td>U8
Gunes Bayirc4f27432022-09-11 15:59:19 +01002884 <tr><td>S8<td>S8
Sheri Zhanga47dcc22021-04-22 14:41:12 +01002885 <tr><td>S16<td>S16
2886 </table>
2887<tr>
2888 <td>CLScale
2889 <td>
2890 <ul>
2891 <li>NHWC
2892 <li>NCHW
2893 </ul>
2894 <td>
2895 <table>
2896 <tr><th>src<th>dst
2897 <tr><td>QASYMM8<td>QASYMM8
2898 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2899 <tr><td>F16<td>F16
2900 <tr><td>F32<td>F32
2901 <tr><td>U8<td>U8
2902 <tr><td>S16<td>S16
2903 </table>
2904<tr>
Teresa Charlin62687422021-04-28 10:58:49 +01002905 <td rowspan="2">Select
2906 <td rowspan="2" style="width:200px;"> Function to select values from 2 tensors depending on an input tensor of booleans.
2907 <td rowspan="2">
2908 <ul>
2909 <li>ANEURALNETWORKS_SELECT
2910 </ul>
2911 <td>NESelect
2912 <td>
2913 <ul>
2914 <li>All
2915 </ul>
2916 <td>
2917 <table>
2918 <tr><th>src0<th>src1<th>src2<th>dst
2919 <tr><td>U8<td>All<td>All<td>All
2920 </table>
2921<tr>
2922 <td>CLSelect
2923 <td>
2924 <ul>
2925 <li>All
2926 </ul>
2927 <td>
2928 <table>
2929 <tr><th>src0<th>src1<th>src2<th>dst
2930 <tr><td>U8<td>All<td>All<td>All
2931 </table>
2932<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +01002933 <td rowspan="2">Slice
2934 <td rowspan="2" style="width:200px;"> Function to perform tensor slicing.
2935 <td rowspan="2">
2936 <ul>
2937 <li>ANEURALNETWORKS_SLICE
2938 </ul>
2939 <td>NESlice
2940 <td>
2941 <ul>
2942 <li>All
2943 </ul>
2944 <td>
2945 <table>
2946 <tr><th>src<th>dst
2947 <tr><td>All<td>All
2948 </table>
2949<tr>
2950 <td>CLSlice
2951 <td>
2952 <ul>
2953 <li>All
2954 </ul>
2955 <td>
2956 <table>
2957 <tr><th>src<th>dst
2958 <tr><td>All<td>All
2959 </table>
2960<tr>
Sheri Zhang6124ce62021-05-04 14:03:13 +01002961 <td rowspan="2">SoftmaxLayer
2962 <td rowspan="2" style="width:200px;"> Function to compute a SoftmaxLayer and a Log SoftmaxLayer.
2963 <td rowspan="2">
2964 <ul>
2965 <li>ANEURALNETWORKS_LOG_SOFTMAX
2966 <li>ANEURALNETWORKS_SOFTMAX
2967 </ul>
2968 <td>NESoftmaxLayerGeneric
2969 <td>
2970 <ul>
2971 <li>All
2972 </ul>
2973 <td>
2974 <table>
2975 <tr><th>src<th>dst
2976 <tr><td>QASYMM8<td>QASYMM8
2977 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2978 <tr><td>F16<td>F16
2979 <tr><td>F32<td>F32
2980 </table>
2981<tr>
2982 <td>CLSoftmaxLayerGeneric
2983 <td>
2984 <ul>
2985 <li>All
2986 </ul>
2987 <td>
2988 <table>
2989 <tr><th>src<th>dst
2990 <tr><td>QASYMM8<td>QASYMM8
2991 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2992 <tr><td>F16<td>F16
2993 <tr><td>F32<td>F32
2994 </table>
2995<tr>
Teresa Charlin62687422021-04-28 10:58:49 +01002996 <td rowspan="2">SpaceToBatchLayer
2997 <td rowspan="2" style="width:200px;"> Function to divide a tensor spatially.
2998 <td rowspan="2">
2999 <ul>
3000 <li>ANEURALNETWORKS_SPACE_TO_BATCH_ND
3001 </ul>
3002 <td>NESpaceToBatchLayer
3003 <td>
3004 <ul>
3005 <li>NHWC
3006 <li>NCHW
3007 </ul>
3008 <td>
3009 <table>
3010 <tr><th>src0<th>src1<th>src2<th>dst
3011 <tr><td>All<td>S32<td>S32<td>All
3012 </table>
3013<tr>
3014 <td>CLSpaceToBatchLayer
3015 <td>
3016 <ul>
3017 <li>NHWC
3018 <li>NCHW
3019 </ul>
3020 <td>
3021 <table>
3022 <tr><th>src0<th>src1<th>src2<th>dst
3023 <tr><td>All<td>S32<td>S32<td>All
3024 </table>
3025<tr>
3026 <td rowspan="2">SpaceToDepthLayer
3027 <td rowspan="2" style="width:200px;"> Function to rearrange blocks of spatial data into depth.
3028 <td rowspan="2">
3029 <ul>
3030 <li>ANEURALNETWORKS_SPACE_TO_DEPTH
3031 </ul>
3032 <td>NESpaceToDepthLayer
3033 <td>
3034 <ul>
3035 <li>NHWC
3036 <li>NCHW
3037 </ul>
3038 <td>
3039 <table>
3040 <tr><th>src<th>dst
3041 <tr><td>All<td>All
3042 </table>
3043<tr>
3044 <td>CLSpaceToDepthLayer
3045 <td>
3046 <ul>
3047 <li>NHWC
3048 <li>NCHW
3049 </ul>
3050 <td>
3051 <table>
3052 <tr><th>src<th>dst
3053 <tr><td>All<td>All
3054 </table>
3055<tr>
3056 <td rowspan="2">Split
3057 <td rowspan="2" style="width:200px;"> Function to split a tensor along a given axis.
3058 <td rowspan="2">
3059 <ul>
3060 <li>ANEURALNETWORKS_SPLIT
3061 </ul>
3062 <td>NESplit
3063 <td>
3064 <ul>
3065 <li>All
3066 </ul>
3067 <td>
3068 <table>
3069 <tr><th>src<th>dst
3070 <tr><td>All<td>All
3071 </table>
3072<tr>
3073 <td>CLSplit
3074 <td>
3075 <ul>
3076 <li>All
3077 </ul>
3078 <td>
3079 <table>
3080 <tr><th>src<th>dst
3081 <tr><td>All<td>All
3082 </table>
3083<tr>
3084 <td rowspan="2">StackLayer
3085 <td rowspan="2" style="width:200px;"> Function to stack tensors along an axis.
3086 <td rowspan="2">
3087 <ul>
3088 <li>n/a
3089 </ul>
3090 <td>NEStackLayer
3091 <td>
3092 <ul>
3093 <li>All
3094 </ul>
3095 <td>
3096 <table>
3097 <tr><th>src<th>dst
3098 <tr><td>All<td>All
3099 </table>
3100<tr>
3101 <td>CLStackLayer
3102 <td>
3103 <ul>
3104 <li>All
3105 </ul>
3106 <td>
3107 <table>
3108 <tr><th>src<th>dst
3109 <tr><td>All<td>All
3110 </table>
3111<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +01003112 <td rowspan="2">StridedSlice
3113 <td rowspan="2" style="width:200px;"> Function to extract a strided slice of a tensor.
3114 <td rowspan="2">
3115 <ul>
3116 <li>ANEURALNETWORKS_STRIDED_SLICE
3117 </ul>
3118 <td>NEStridedSlice
3119 <td>
3120 <ul>
3121 <li>All
3122 </ul>
3123 <td>
3124 <table>
3125 <tr><th>src<th>dst
3126 <tr><td>All<td>All
3127 </table>
3128<tr>
3129 <td>CLStridedSlice
3130 <td>
3131 <ul>
3132 <li>All
3133 </ul>
3134 <td>
3135 <table>
3136 <tr><th>src<th>dst
3137 <tr><td>All<td>All
3138 </table>
3139<tr>
Teresa Charlin62687422021-04-28 10:58:49 +01003140 <td rowspan="2">Tile
3141 <td rowspan="2" style="width:200px;"> Function to construct a tensor by tiling a given tensor.
3142 <td rowspan="2">
3143 <ul>
3144 <li>ANEURALNETWORKS_TILE
3145 </ul>
3146 <td>NETile
3147 <td>
3148 <ul>
3149 <li>All
3150 </ul>
3151 <td>
3152 <table>
3153 <tr><th>src<th>dst
3154 <tr><td>All<td>All
3155 </table>
3156<tr>
3157 <td>CLTile
3158 <td>
3159 <ul>
3160 <li>All
3161 </ul>
3162 <td>
3163 <table>
3164 <tr><th>src<th>dst
3165 <tr><td>All<td>All
3166 </table>
3167<tr>
Sheri Zhanga47dcc22021-04-22 14:41:12 +01003168 <td rowspan="2">Transpose
Teresa Charlin62687422021-04-28 10:58:49 +01003169 <td rowspan="2" style="width:200px;"> Function to transpose a 2D tensor.
Sheri Zhanga47dcc22021-04-22 14:41:12 +01003170 <td rowspan="2">
3171 <ul>
3172 <li>ANEURALNETWORKS_TRANSPOSE
3173 </ul>
3174 <td>NETranspose
3175 <td>
3176 <ul>
3177 <li>All
3178 </ul>
3179 <td>
3180 <table>
3181 <tr><th>src<th>dst
3182 <tr><td>All<td>All
3183 </table>
3184<tr>
3185 <td>CLTranspose
3186 <td>
3187 <ul>
3188 <li>All
3189 </ul>
3190 <td>
3191 <table>
3192 <tr><th>src<th>dst
3193 <tr><td>All<td>All
3194 </table>
Teresa Charlin62687422021-04-28 10:58:49 +01003195<tr>
3196 <td rowspan="2">Unstack
3197 <td rowspan="2" style="width:200px;"> Function to unpack a rank-R tensor into rank-(R-1) tensors.
3198 <td rowspan="2">
3199 <ul>
3200 <li>n/a
3201 </ul>
3202 <td>NEUnstack
3203 <td>
3204 <ul>
3205 <li>All
3206 </ul>
3207 <td>
3208 <table>
3209 <tr><th>src<th>dst
3210 <tr><td>All<td>All
3211 </table>
3212<tr>
3213 <td>CLUnstack
3214 <td>
3215 <ul>
3216 <li>All
3217 </ul>
3218 <td>
3219 <table>
3220 <tr><th>src<th>dst
3221 <tr><td>All<td>All
3222 </table>
<tr>
  <td rowspan="2">WinogradConvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to perform Winograd convolution.
  <td rowspan="2">
    <ul>
      <li>ANEURALNETWORKS_CONV_2D
    </ul>
  <td>NEWinogradConvolutionLayer
  <td>
    <ul>
      <li>NHWC
      <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td>CLWinogradConvolutionLayer
  <td>
    <ul>
      <li>NHWC
      <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    </table>
</table>

*/
} // namespace