///
/// Copyright (c) 2021-2024 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/**
@page operators_list Supported Operators

@tableofcontents

@section S9_1_operators_list Supported Operators

Compute Library supports the operators listed in the table below.

Compute Library supports a wide range of data types; details can be found directly in the documentation of each kernel/function.
The main data types supported by the Machine Learning functions are the following:
    <ul>
        <li>BFLOAT16: 16-bit non-standard brain floating point
        <li>QASYMM8: 8-bit unsigned asymmetric quantized
        <li>QASYMM8_SIGNED: 8-bit signed asymmetric quantized
        <li>QSYMM8_PER_CHANNEL: 8-bit signed symmetric quantized (used for the weights)
        <li>QSYMM8: 8-bit signed symmetric quantized
        <li>QSYMM16: 16-bit signed symmetric quantized
        <li>F32: 32-bit single precision floating point
        <li>F16: 16-bit half precision floating point
        <li>S32: 32-bit signed integer
        <li>U8: 8-bit unsigned char
        <li>All: Agnostic to any specific data type
    </ul>

Compute Library supports the following data layouts (fast changing dimension from right to left):
    <ul>
        <li>NHWC: The native layout of Compute Library that delivers the best performance, where channels are in the fastest changing dimension
        <li>NCHW: Legacy layout where width is in the fastest changing dimension
        <li>NDHWC: New data layout for supporting 3D operators
        <li>All: Agnostic to any specific data layout
    </ul>
where N = batches, C = channels, H = height, W = width, D = depth.

<table>
<caption id="multi_row"></caption>
<tr>
    <th>Function
    <th>Description
    <th>Equivalent Android NNAPI Op
    <th>Backends
    <th>Data Layouts
    <th>Data Types
<tr>
    <td rowspan="2">ActivationLayer
    <td rowspan="2" style="width:200px;"> Function to simulate an activation layer with the specified activation function.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_ELU
            <li>ANEURALNETWORKS_HARD_SWISH
            <li>ANEURALNETWORKS_LOGISTIC
            <li>ANEURALNETWORKS_RELU
            <li>ANEURALNETWORKS_RELU1
            <li>ANEURALNETWORKS_RELU6
            <li>ANEURALNETWORKS_TANH
        </ul>
    <td>NEActivationLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
            <tr><td>QSYMM16<td>QSYMM16
            <tr><td>F16<td>F16
            <tr><td>F32<td>F32
        </table>
<tr>
    <td>CLActivationLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
            <tr><td>QSYMM16<td>QSYMM16
            <tr><td>F16<td>F16
            <tr><td>F32<td>F32
        </table>
<tr>
    <td rowspan="1">AddMulAdd
    <td rowspan="1" style="width:200px;"> Performs a fused Add + Mul + Add [+ ReLU-based activation] operation.
    <td rowspan="1">
        <ul>
            <li>n/a
        </ul>
    <td>NEAddMulAdd
    <td>
        <ul>
            <li>Any
        </ul>
    <td>
        <table>
            <tr><th>input1<th>input2<th>bn_mul<th>bn_add<th>add_output<th>final_output
            <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8<td>QASYMM8<td>QASYMM8<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
            <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
        </table>
<tr>
    <td rowspan="2">ArgMinMaxLayer
    <td rowspan="2" style="width:200px;"> Function to calculate the index of the minimum or maximum values in a tensor based on an axis.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_ARGMAX
            <li>ANEURALNETWORKS_ARGMIN
        </ul>
    <td>NEArgMinMaxLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8<td>U32, S32
            <tr><td>QASYMM8_SIGNED<td>U32, S32
            <tr><td>S32<td>U32, S32, S64
            <tr><td>F16<td>U32, S32
            <tr><td>F32<td>U32, S32
        </table>
<tr>
    <td>CLArgMinMaxLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8<td>U32, S32
            <tr><td>QASYMM8_SIGNED<td>U32, S32
            <tr><td>S32<td>U32, S32
            <tr><td>F16<td>U32, S32
            <tr><td>F32<td>U32, S32
        </table>
<tr>
    <td rowspan="1">ArithmeticAddition
    <td rowspan="1" style="width:200px;"> Function to add 2 tensors.
    <td rowspan="1">
        <ul>
            <li>ANEURALNETWORKS_ADD
        </ul>
    <td>NEArithmeticAddition
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>dst
            <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
            <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
            <tr><td>QSYMM16<td>QSYMM16<td>S32
            <tr><td>U8<td>U8<td>U8
            <tr><td>S16<td>S16<td>S16
            <tr><td>S32<td>S32<td>S32
            <tr><td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32
        </table>
<tr>
    <td rowspan="1">ArithmeticSubtraction
    <td rowspan="1" style="width:200px;"> Function to subtract 2 tensors.
    <td rowspan="1">
        <ul>
            <li>ANEURALNETWORKS_SUB
        </ul>
    <td>NEArithmeticSubtraction
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>dst
            <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
            <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
            <tr><td>QSYMM16<td>QSYMM16<td>S32
            <tr><td>U8<td>U8<td>U8
            <tr><td>S16<td>S16<td>S16
            <tr><td>S32<td>S32<td>S32
            <tr><td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32
        </table>
<tr>
    <td rowspan="2">BatchNormalizationLayer
    <td rowspan="2" style="width:200px;"> Function to perform batch normalization.
    <td rowspan="2">
        <ul>
            <li>n/a
        </ul>
    <td>NEBatchNormalizationLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>F32<td>F32
            <tr><td>F16<td>F16
        </table>
<tr>
    <td>CLBatchNormalizationLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>F32<td>F32
            <tr><td>F16<td>F16
        </table>
<tr>
    <td rowspan="2">BatchToSpaceLayer
    <td rowspan="2" style="width:200px;"> Batch to space transformation.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_BATCH_TO_SPACE_ND
        </ul>
    <td>NEBatchToSpaceLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>dst
            <tr><td>All<td>S32<td>All
        </table>
<tr>
    <td>CLBatchToSpaceLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>dst
            <tr><td>All<td>S32<td>All
        </table>
<tr>
    <td rowspan="2">BitwiseAnd
    <td rowspan="2" style="width:200px;"> Function to perform bitwise AND between 2 tensors.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_LOGICAL_AND
        </ul>
    <td>NEBitwiseAnd
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>U8
        </table>
<tr>
    <td>CLBitwiseAnd
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>U8
        </table>
<tr>
    <td rowspan="2">BitwiseNot
    <td rowspan="2" style="width:200px;"> Function to perform bitwise NOT.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_LOGICAL_NOT
        </ul>
    <td>NEBitwiseNot
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>U8
        </table>
<tr>
    <td>CLBitwiseNot
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>U8
        </table>
<tr>
    <td rowspan="2">BitwiseOr
    <td rowspan="2" style="width:200px;"> Function to perform bitwise OR between 2 tensors.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_LOGICAL_OR
        </ul>
    <td>NEBitwiseOr
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>U8
        </table>
<tr>
    <td>CLBitwiseOr
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>U8
        </table>
<tr>
    <td rowspan="2">BitwiseXor
    <td rowspan="2" style="width:200px;"> Function to perform bitwise XOR between 2 tensors.
    <td rowspan="2">
        <ul>
            <li>n/a
        </ul>
    <td>NEBitwiseXor
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>U8
        </table>
<tr>
    <td>CLBitwiseXor
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>U8
        </table>
<tr>
    <td rowspan="2">BoundingBoxTransform
    <td rowspan="2" style="width:200px;"> Transform proposal bounding boxes to target bounding boxes using bounding box deltas.
    <td rowspan="2">
        <ul>
            <li>n/a
        </ul>
    <td>NEBoundingBoxTransform
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>dst
            <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
            <tr><td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32
        </table>
<tr>
    <td>CLBoundingBoxTransform
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>dst
            <tr><td>QASYMM16<td>QASYMM8<td>QASYMM16
            <tr><td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32
        </table>
<tr>
    <td rowspan="2">Cast
    <td rowspan="2" style="width:200px;"> Function to cast a tensor.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_CAST
        </ul>
    <td>NECast
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8_SIGNED<td>S16, S32, F32, F16
            <tr><td>QASYMM8<td>U16, S16, S32, F32, F16
            <tr><td>U8<td>U16, S16, S32, F32, F16
            <tr><td>U16<td>U8, U32
            <tr><td>S16<td>QASYMM8_SIGNED, U8, S32
            <tr><td>F16<td>QASYMM8_SIGNED, QASYMM8, F32, S32, U8
            <tr><td>S32<td>QASYMM8_SIGNED, QASYMM8, F16, F32, U8
            <tr><td>F32<td>QASYMM8_SIGNED, QASYMM8, BFLOAT16, F16, S32, U8
        </table>
<tr>
    <td>CLCast
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
            <tr><td>S8<td>U8, U16, S16, U32, S32, F16, F32
            <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
            <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
            <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
            <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
            <tr><td>U64<td>U8, S8, U16, S16, U32, S32, F16, F32
            <tr><td>S64<td>U8, S8, U16, S16, U32, S32, F16, F32
            <tr><td>F16<td>U8, S8, U16, S16, S32, U32, F32
            <tr><td>F32<td>U8, S8, U16, S16, S32, U32, F16
        </table>
<tr>
    <td rowspan="2">ChannelShuffleLayer
    <td rowspan="2" style="width:200px;"> Function to shuffle the channels of the input tensor.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_CHANNEL_SHUFFLE
        </ul>
    <td>NEChannelShuffleLayer
    <td>
        <ul>
            <li>NCHW
            <li>NHWC
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>All
        </table>
<tr>
    <td>CLChannelShuffleLayer
    <td>
        <ul>
            <li>NCHW
            <li>NHWC
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>All
        </table>
<tr>
    <td rowspan="1">Comparison
    <td rowspan="1" style="width:200px;"> Function to compare 2 tensors.
    <td rowspan="1">
        <ul>
            <li>ANEURALNETWORKS_EQUAL
            <li>ANEURALNETWORKS_GREATER
            <li>ANEURALNETWORKS_GREATER_EQUAL
            <li>ANEURALNETWORKS_LESS
            <li>ANEURALNETWORKS_LESS_EQUAL
            <li>ANEURALNETWORKS_NOT_EQUAL
        </ul>
    <td>CLComparison
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>dst
            <tr><td>All<td>All<td>U8
        </table>
<tr>
    <td rowspan="2">ConcatenateLayer
    <td rowspan="2" style="width:200px;"> Function to concatenate tensors along a given axis.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_CONCATENATION
        </ul>
    <td>NEConcatenateLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
            <tr><td>F16<td>F16
            <tr><td>F32<td>F32
        </table>
<tr>
    <td>CLConcatenateLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
            <tr><td>F16<td>F16
            <tr><td>F32<td>F32
        </table>
<tr>
    <td rowspan="2">ConvertFullyConnectedWeights
    <td rowspan="2" style="width:200px;"> Function to transpose the weights for the fully connected layer.
    <td rowspan="2">
        <ul>
            <li>n/a
        </ul>
    <td>NEConvertFullyConnectedWeights
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>All
        </table>
<tr>
    <td>CLConvertFullyConnectedWeights
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>All
        </table>
<tr>
    <td rowspan="2">ConvolutionLayer
    <td rowspan="2" style="width:200px;"> Function to compute a convolution layer.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_CONV_2D
        </ul>
    <td>NEConvolutionLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8<td>QASYMM8_SIGNED<td>S32<td>QASYMM8
            <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
            <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td>CLConvolutionLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
            <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td rowspan="2">Conv3D
    <td rowspan="2" style="width:200px;"> Function to compute a 3D convolution layer.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_CONV_3D
        </ul>
    <td>NEConv3D
    <td>
        <ul>
            <li>NDHWC
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td>CLConv3D
    <td>
        <ul>
            <li>NDHWC
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td rowspan="2">Copy
    <td rowspan="2" style="width:200px;"> Function to copy a tensor.
    <td rowspan="2">
        <ul>
            <li>n/a
        </ul>
    <td>NECopy
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>All
        </table>
<tr>
    <td>CLCopy
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>All
        </table>
<tr>
    <td rowspan="1">Crop
    <td rowspan="1" style="width:200px;"> Performs a copy of the input tensor to the output tensor.
    <td rowspan="1">
        <ul>
            <li>n/a
        </ul>
    <td>CLCrop
    <td>
        <ul>
            <li>NHWC
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>F32
        </table>
<tr>
    <td rowspan="2">CropResize
    <td rowspan="2" style="width:200px;"> Function to perform cropping and resizing.
    <td rowspan="2">
        <ul>
            <li>n/a
        </ul>
    <td>NECropResize
    <td>
        <ul>
            <li>NHWC
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>All<td>F32<td>F32<td>F32
        </table>
<tr>
    <td>CLCropResize
    <td>
        <ul>
            <li>NHWC
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>All<td>F32<td>F32<td>F32
        </table>
<tr>
    <td rowspan="2">DeconvolutionLayer
    <td rowspan="2" style="width:200px;"> Function to compute a deconvolution or transpose convolution.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
        </ul>
    <td>NEDeconvolutionLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
            <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td>CLDeconvolutionLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
            <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td rowspan="1">DeconvolutionLayerUpsample
    <td rowspan="1" style="width:200px;"> Function to execute deconvolution upsample on OpenCL.
    <td rowspan="1">
        <ul>
            <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
        </ul>
    <td>CLDeconvolutionLayerUpsample
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>All
        </table>
<tr>
    <td rowspan="2">DepthConvertLayer
    <td rowspan="2" style="width:200px;"> Performs a down-scaling depth conversion.
    <td rowspan="2">
        <ul>
            <li>n/a
        </ul>
    <td>NEDepthConvertLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8<td>F16, F32
            <tr><td>U8<td>U16, S16, S32
            <tr><td>U16<td>U8, U32
            <tr><td>S16<td>U8, S32
            <tr><td>BFLOAT16<td>F32
            <tr><td>F16<td>QASYMM8, F32
            <tr><td>F32<td>QASYMM8, F16, BFLOAT16
        </table>
<tr>
    <td>CLDepthConvertLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>U8<td>S8, U16, S16, U32, S32, F16, F32
            <tr><td>U16<td>U8, S8, S16, U32, S32, F16, F32
            <tr><td>S16<td>U8, S8, U16, U32, S32, F16, F32
            <tr><td>U32<td>U8, S8, U16, S16, S32, F16, F32
            <tr><td>S32<td>U8, S8, U16, S16, U32, F16, F32
            <tr><td>F16<td>U8, S8, U16, S16, U32, F32
            <tr><td>F32<td>U8, S8, U16, S16, U32, F16
        </table>
<tr>
    <td rowspan="2">DepthToSpaceLayer
    <td rowspan="2" style="width:200px;"> Depth to Space transformation.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_DEPTH_TO_SPACE
        </ul>
    <td>NEDepthToSpaceLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>All
        </table>
<tr>
    <td>CLDepthToSpaceLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>All<td>All
        </table>
<tr>
    <td rowspan="2">DepthwiseConvolutionLayer
    <td rowspan="2" style="width:200px;"> Function to perform depthwise separable convolution.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_DEPTHWISE_CONV_2D
        </ul>
    <td>NEDepthwiseConvolutionLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
            <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td>CLDepthwiseConvolutionLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
            <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td rowspan="2">DequantizationLayer
    <td rowspan="2" style="width:200px;"> Function to dequantize the values in a tensor.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_DEQUANTIZE
        </ul>
    <td>NEDequantizationLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8<td>F16, F32
            <tr><td>QASYMM8_SIGNED<td>F16, F32
            <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
            <tr><td>QSYMM8<td>F16, F32
            <tr><td>QSYMM16<td>F16, F32
        </table>
<tr>
    <td>CLDequantizationLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src<th>dst
            <tr><td>QASYMM8<td>F16, F32
            <tr><td>QASYMM8_SIGNED<td>F16, F32
            <tr><td>QSYMM8_PER_CHANNEL<td>F16, F32
            <tr><td>QSYMM8<td>F16, F32
            <tr><td>QSYMM16<td>F16, F32
        </table>
<tr>
    <td rowspan="1">DetectionPostProcessLayer
    <td rowspan="1" style="width:200px;"> Function to generate the detection output based on center size encoded boxes, class prediction and anchors by doing non-maximum suppression (NMS).
    <td rowspan="1">
        <ul>
            <li>ANEURALNETWORKS_DETECTION_POSTPROCESSING
        </ul>
    <td>NEDetectionPostProcessLayer
    <td>
        <ul>
            <li>All
        </ul>
    <td>
        <table>
            <tr><th>src0 - src2<th>dst0 - dst3
            <tr><td>QASYMM8<td>F32
            <tr><td>QASYMM8_SIGNED<td>F32
            <tr><td>F32<td>F32
        </table>
<tr>
    <td rowspan="2">DirectConvolutionLayer
    <td rowspan="2" style="width:200px;"> Function to compute direct convolution.
    <td rowspan="2">
        <ul>
            <li>ANEURALNETWORKS_CONV_2D
        </ul>
    <td>NEDirectConvolutionLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
        </table>
<tr>
    <td>CLDirectConvolutionLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td rowspan="1">DirectDeconvolutionLayer
    <td rowspan="1" style="width:200px;"> Function to run the deconvolution layer.
    <td rowspan="1">
        <ul>
            <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
        </ul>
    <td>CLDirectDeconvolutionLayer
    <td>
        <ul>
            <li>NHWC
            <li>NCHW
        </ul>
    <td>
        <table>
            <tr><th>src0<th>src1<th>src2<th>dst
            <tr><td>F16<td>F16<td>F16<td>F16
            <tr><td>F32<td>F32<td>F32<td>F32
            <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
            <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
            <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
        </table>
<tr>
    <td rowspan="13">ElementwiseOperations
    <td rowspan="13" style="width:200px;"> Function to perform elementwise operations. On Cpu: Div, Max, Min, Pow, SquaredDiff and comparisons (Equal, Greater, GreaterEqual, Less, LessEqual, NotEqual). On CL: Add, Sub, Div, Max, Min, Pow, SquaredDiff.
1017 <td rowspan="13">
1018 <ul>
1019 <li>ANEURALNETWORKS_MAXIMUM
1020 <li>ANEURALNETWORKS_MINIMUM
1021 <li>ANEURALNETWORKS_POW
1022 <li>ANEURALNETWORKS_DIV
1023 <li>ANEURALNETWORKS_ADD
1024 <li>ANEURALNETWORKS_SUB
1025 <li>ANEURALNETWORKS_EQUAL
1026 <li>ANEURALNETWORKS_GREATER
1027 <li>ANEURALNETWORKS_GREATER_EQUAL
1028 <li>ANEURALNETWORKS_LESS
1029 <li>ANEURALNETWORKS_LESS_EQUAL
1030 <li>ANEURALNETWORKS_NOT_EQUAL
1031 </ul>
1032 <td>NEElementwiseMax
1033 <td>
1034 <ul>
1035 <li>All
1036 </ul>
1037 <td>
1038 <table>
1039 <tr><th>src0<th>src1<th>dst
1040 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1041 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1042 <tr><td>S32<td>S32<td>S32
1043 <tr><td>S16<td>S16<td>S16
1044 <tr><td>F16<td>F16<td>F16
1045 <tr><td>F32<td>F32<td>F32
1046 </table>
1047<tr>
1048 <td>NEElementwiseMin
1049 <td>
1050 <ul>
1051 <li>All
1052 </ul>
1053 <td>
1054 <table>
1055 <tr><th>src0<th>src1<th>dst
1056 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1057 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1058 <tr><td>S32<td>S32<td>S32
1059 <tr><td>S16<td>S16<td>S16
1060 <tr><td>F16<td>F16<td>F16
1061 <tr><td>F32<td>F32<td>F32
1062 </table>
1063<tr>
1064 <td>NEElementwiseSquaredDiff
1065 <td>
1066 <ul>
1067 <li>All
1068 </ul>
1069 <td>
1070 <table>
1071 <tr><th>src0<th>src1<th>dst
1072 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1073 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1074 <tr><td>S32<td>S32<td>S32
1075 <tr><td>S16<td>S16<td>S16
1076 <tr><td>F16<td>F16<td>F16
1077 <tr><td>F32<td>F32<td>F32
1078 </table>
1079<tr>
1080 <td>NEElementwiseDivision
1081 <td>
1082 <ul>
1083 <li>All
1084 </ul>
1085 <td>
1086 <table>
1087 <tr><th>src0<th>src1<th>dst
1088 <tr><td>F16<td>F16<td>F16
1089 <tr><td>F32<td>F32<td>F32
1090 </table>
1091<tr>
1092 <td>NEElementwisePower
1093 <td>
1094 <ul>
1095 <li>All
1096 </ul>
1097 <td>
1098 <table>
1099 <tr><th>src0<th>src1<th>dst
1100 <tr><td>F16<td>F16<td>F16
1101 <tr><td>F32<td>F32<td>F32
1102 </table>
1103<tr>
1104 <td>NEElementwiseComparison
1105 <td>
1106 <ul>
1107 <li>All
1108 </ul>
1109 <td>
1110 <table>
1111 <tr><th>src0<th>src1<th>dst
1112 <tr><td>QASYMM8<td>QASYMM8<td>U8
1113 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>U8
1114 <tr><td>S32<td>S32<td>U8
1115 <tr><td>U8<td>U8<td>U8
1116 <tr><td>S16<td>S16<td>U8
1117 <tr><td>F16<td>F16<td>U8
1118 <tr><td>F32<td>F32<td>U8
1119 </table>
1120<tr>
1121 <td>CLArithmeticAddition
1122 <td>
1123 <ul>
1124 <li>All
1125 </ul>
1126 <td>
1127 <table>
1128 <tr><th>src0<th>src1<th>dst
1129 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1130 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1131 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1132 <tr><td>U8<td>U8<td>U8
1133 <tr><td>U8<td>U8<td>S16
1134 <tr><td>U8<td>S16<td>S16
1135 <tr><td>S16<td>U8<td>S16
1136 <tr><td>S16<td>S16<td>S16
1137 <tr><td>S32<td>S32<td>S32
1138 <tr><td>F16<td>F16<td>F16
1139 <tr><td>F32<td>F32<td>F32
1140 </table>
1141<tr>
1142 <td>CLArithmeticSubtraction
1143 <td>
1144 <ul>
1145 <li>All
1146 </ul>
1147 <td>
1148 <table>
1149 <tr><th>src0<th>src1<th>dst
1150 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1151 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1152 <tr><td>QSYMM16<td>QSYMM16<td>QASYMM16
1153 <tr><td>U8<td>U8<td>U8
1154 <tr><td>U8<td>U8<td>S16
1155 <tr><td>U8<td>S16<td>S16
1156 <tr><td>S16<td>U8<td>S16
1157 <tr><td>S16<td>S16<td>S16
1158 <tr><td>S32<td>S32<td>S32
1159 <tr><td>F16<td>F16<td>F16
1160 <tr><td>F32<td>F32<td>F32
1161 </table>
1162<tr>
1163 <td>CLArithmeticDivision
1164 <td>
1165 <ul>
1166 <li>All
1167 </ul>
1168 <td>
1169 <table>
1170 <tr><th>src0<th>src1<th>dst
1171 <tr><td>F16<td>F16<td>F16
1172 <tr><td>F32<td>F32<td>F32
1173 </table>
1174<tr>
1175 <td>CLElementwiseMax
1176 <td>
1177 <ul>
1178 <li>All
1179 </ul>
1180 <td>
1181 <table>
1182 <tr><th>src0<th>src1<th>dst
1183 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1184 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1185     <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
1186 <tr><td>U8<td>U8<td>U8
1187 <tr><td>S16<td>S16<td>S16
1188 <tr><td>S32<td>S32<td>S32
1189 <tr><td>U32<td>U32<td>U32
1190 <tr><td>F16<td>F16<td>F16
1191 <tr><td>F32<td>F32<td>F32
1192 </table>
1193<tr>
1194 <td>CLElementwiseMin
1195 <td>
1196 <ul>
1197 <li>All
1198 </ul>
1199 <td>
1200 <table>
1201 <tr><th>src0<th>src1<th>dst
1202 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1203 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1204     <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
1205 <tr><td>U8<td>U8<td>U8
1206 <tr><td>S16<td>S16<td>S16
1207 <tr><td>S32<td>S32<td>S32
1208 <tr><td>U32<td>U32<td>U32
1209 <tr><td>F16<td>F16<td>F16
1210 <tr><td>F32<td>F32<td>F32
1211 </table>
1212<tr>
1213 <td>CLElementwiseSquaredDiff
1214 <td>
1215 <ul>
1216 <li>All
1217 </ul>
1218 <td>
1219 <table>
1220 <tr><th>src0<th>src1<th>dst
1221 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
1222 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
1223     <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
1224 <tr><td>U8<td>U8<td>U8
1225 <tr><td>S16<td>S16<td>S16
1226 <tr><td>F16<td>F16<td>F16
1227 <tr><td>F32<td>F32<td>F32
1228 </table>
1229<tr>
1230 <td>CLElementwisePower
1231 <td>
1232 <ul>
1233 <li>All
1234 </ul>
1235 <td>
1236 <table>
1237 <tr><th>src0<th>src1<th>dst
1238 <tr><td>F16<td>F16<td>F16
1239 <tr><td>F32<td>F32<td>F32
1240 </table>
1241<tr>
1242 <td rowspan="8">ElementwiseUnaryLayer
1243 <td rowspan="8" style="width:200px;"> Function to perform: - Rsqrt - Exp - Neg - Log - Abs - Round - Sin
1244 <td rowspan="8">
1245 <ul>
1246 <li>ANEURALNETWORKS_ABS
1247 <li>ANEURALNETWORKS_EXP
1248 <li>ANEURALNETWORKS_LOG
1249 <li>ANEURALNETWORKS_NEG
1250 <li>ANEURALNETWORKS_RSQRT
1251 <li>ANEURALNETWORKS_SIN
1252 </ul>
1253 <td>NEElementwiseUnaryLayer
1254 <td>
1255 <ul>
1256 <li>All
1257 </ul>
1258 <td>
1259 <table>
1260 <tr><th>src<th>dst
1261 <tr><td>F16<td>F16
1262 <tr><td>F32<td>F32
1263 <tr><td>S32<td>S32
1264 </table>
1265<tr>
1266 <td>CLRsqrtLayer
1267 <td>
1268 <ul>
1269 <li>All
1270 </ul>
1271 <td>
1272 <table>
1273 <tr><th>src<th>dst
1274 <tr><td>F16<td>F16
1275 <tr><td>F32<td>F32
1276 </table>
1277<tr>
1278 <td>CLExpLayer
1279 <td>
1280 <ul>
1281 <li>All
1282 </ul>
1283 <td>
1284 <table>
1285 <tr><th>src<th>dst
1286 <tr><td>F16<td>F16
1287 <tr><td>F32<td>F32
1288 </table>
1289<tr>
1290 <td>CLNegLayer
1291 <td>
1292 <ul>
1293 <li>All
1294 </ul>
1295 <td>
1296 <table>
1297 <tr><th>src<th>dst
1298 <tr><td>F16<td>F16
1299 <tr><td>F32<td>F32
1300     <tr><td>S32<td>S32
1301 </table>
1302<tr>
1303 <td>CLSinLayer
1304 <td>
1305 <ul>
1306 <li>All
1307 </ul>
1308 <td>
1309 <table>
1310 <tr><th>src<th>dst
1311 <tr><td>F16<td>F16
1312 <tr><td>F32<td>F32
1313 </table>
1314<tr>
1315 <td>CLLogLayer
1316 <td>
1317 <ul>
1318 <li>All
1319 </ul>
1320 <td>
1321 <table>
1322 <tr><th>src<th>dst
1323 <tr><td>F16<td>F16
1324 <tr><td>F32<td>F32
1325 </table>
1326<tr>
1327 <td>CLAbsLayer
1328 <td>
1329 <ul>
1330 <li>All
1331 </ul>
1332 <td>
1333 <table>
1334 <tr><th>src<th>dst
1335 <tr><td>F16<td>F16
1336 <tr><td>F32<td>F32
1337 </table>
1338<tr>
1339 <td>CLRoundLayer
1340 <td>
1341 <ul>
1342 <li>All
1343 </ul>
1344 <td>
1345 <table>
1346 <tr><th>src<th>dst
1347 <tr><td>F16<td>F16
1348 <tr><td>F32<td>F32
1349 </table>
1350<tr>
1351 <td rowspan="2">FFT1D
1352 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 1D.
1353 <td rowspan="2">
1354 <ul>
1355     <li>n/a
1356 </ul>
1357 <td>NEFFT1D
1358 <td>
1359 <ul>
1360 <li>All
1361 </ul>
1362 <td>
1363 <table>
1364 <tr><th>src<th>dst
1365 <tr><td>F32<td>F32
1366 </table>
1367<tr>
1368 <td>CLFFT1D
1369 <td>
1370 <ul>
1371 <li>All
1372 </ul>
1373 <td>
1374 <table>
1375 <tr><th>src<th>dst
1376 <tr><td>F32<td>F32
1377 <tr><td>F16<td>F16
1378 </table>
1379<tr>
1380 <td rowspan="2">FFT2D
1381 <td rowspan="2" style="width:200px;"> Fast Fourier Transform 2D.
1382 <td rowspan="2">
1383 <ul>
1384     <li>n/a
1385 </ul>
1386 <td>NEFFT2D
1387 <td>
1388 <ul>
1389 <li>All
1390 </ul>
1391 <td>
1392 <table>
1393 <tr><th>src<th>dst
1394 <tr><td>F32<td>F32
1395 </table>
1396<tr>
1397 <td>CLFFT2D
1398 <td>
1399 <ul>
1400 <li>All
1401 </ul>
1402 <td>
1403 <table>
1404 <tr><th>src<th>dst
1405 <tr><td>F32<td>F32
1406 <tr><td>F16<td>F16
1407 </table>
1408<tr>
1409 <td rowspan="2">FFTConvolutionLayer
1410 <td rowspan="2" style="width:200px;"> Fast Fourier Transform Convolution.
1411 <td rowspan="2">
1412 <ul>
1413 <li>ANEURALNETWORKS_CONV_2D
1414 </ul>
1415 <td>NEFFTConvolutionLayer
1416 <td>
1417 <ul>
1418 <li>All
1419 </ul>
1420 <td>
1421 <table>
1422 <tr><th>src<th>dst
1423 <tr><td>F32<td>F32
1424 </table>
1425<tr>
1426 <td>CLFFTConvolutionLayer
1427 <td>
1428 <ul>
1429 <li>All
1430 </ul>
1431 <td>
1432 <table>
1433 <tr><th>src<th>dst
1434 <tr><td>F32<td>F32
1435 <tr><td>F16<td>F16
1436 </table>
1437<tr>
1438 <td rowspan="2">Fill
1439 <td rowspan="2" style="width:200px;"> Set the values of a tensor with a given value.
1440 <td rowspan="2">
1441 <ul>
1442 <li>ANEURALNETWORKS_FILL
1443 </ul>
1444 <td>NEFill
1445 <td>
1446 <ul>
1447 <li>All
1448 </ul>
1449 <td>
1450 <table>
1451 <tr><th>src<th>dst
1452 <tr><td>All<td>All
1453 </table>
1454<tr>
1455 <td>CLFill
1456 <td>
1457 <ul>
1458 <li>All
1459 </ul>
1460 <td>
1461 <table>
1462 <tr><th>src<th>dst
1463 <tr><td>All<td>All
1464 </table>
1465<tr>
1466 <td rowspan="1">FillBorder
1467 <td rowspan="1" style="width:200px;"> Function to fill the borders within the XY-planes.
1468 <td rowspan="1">
1469 <ul>
1470 <li>n/a
1471 </ul>
1472 <td>NEFillBorder
1473 <td>
1474 <ul>
1475 <li>All
1476 </ul>
1477 <td>
1478 <table>
1479 <tr><th>src<th>dst
1480 <tr><td>All<td>All
1481 </table>
1482<tr>
1483 <td rowspan="2">FlattenLayer
1484 <td rowspan="2" style="width:200px;"> Reshape a tensor to be 1D.
1485 <td rowspan="2">
1486 <ul>
1487 <li>ANEURALNETWORKS_RESHAPE
1488 </ul>
1489 <td>NEFlattenLayer
1490 <td>
1491 <ul>
1492 <li>All
1493 </ul>
1494 <td>
1495 <table>
1496 <tr><th>src<th>dst
1497 <tr><td>All<td>All
1498 </table>
1499<tr>
1500 <td>CLFlattenLayer
1501 <td>
1502 <ul>
1503 <li>All
1504 </ul>
1505 <td>
1506 <table>
1507 <tr><th>src<th>dst
1508 <tr><td>All<td>All
1509 </table>
1510<tr>
1511 <td rowspan="2">Floor
1512 <td rowspan="2" style="width:200px;"> Round the value down to the nearest integer.
1513 <td rowspan="2">
1514 <ul>
1515 <li>ANEURALNETWORKS_FLOOR
1516 </ul>
1517 <td>NEFloor
1518 <td>
1519 <ul>
1520 <li>All
1521 </ul>
1522 <td>
1523 <table>
1524 <tr><th>src<th>dst
1525 <tr><td>F32<td>F32
1526 <tr><td>F16<td>F16
1527 </table>
1528<tr>
1529 <td>CLFloor
1530 <td>
1531 <ul>
1532 <li>All
1533 </ul>
1534 <td>
1535 <table>
1536 <tr><th>src<th>dst
1537 <tr><td>F32<td>F32
1538 <tr><td>F16<td>F16
1539 </table>
1540<tr>
1541 <td rowspan="2">FullyConnectedLayer
1542 <td rowspan="2" style="width:200px;"> Function to perform a fully connected / dense layer.
1543 <td rowspan="2">
1544 <ul>
1545 <li>ANEURALNETWORKS_FULLY_CONNECTED
1546 </ul>
1547 <td>NEFullyConnectedLayer
1548 <td>
1549 <ul>
1550 <li>NHWC
1551 <li>NCHW
1552 </ul>
1553 <td>
1554 <table>
1555 <tr><th>src0<th>src1<th>src2<th>dst
1556 <tr><td>F16<td>F16<td>F16<td>F16
1557 <tr><td>F32<td>F32<td>F32<td>F32
1558 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1559 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1560 </table>
1561<tr>
1562 <td>CLFullyConnectedLayer
1563 <td>
1564 <ul>
1565 <li>NHWC
1566 <li>NCHW
1567 </ul>
1568 <td>
1569 <table>
1570 <tr><th>src0<th>src1<th>src2<th>dst
1571 <tr><td>F16<td>F16<td>F16<td>F16
1572 <tr><td>F32<td>F32<td>F32<td>F32
1573 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1574 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1575 </table>
1576<tr>
1577 <td rowspan="2">FuseBatchNormalization
1578 <td rowspan="2" style="width:200px;"> Function to fuse a batch normalization node into a preceding convolution node.
1579 <td rowspan="2">
1580 <ul>
1581 <li>n/a
1582 </ul>
1583 <td>NEFuseBatchNormalization
1584 <td>
1585 <ul>
1586 <li>NHWC
1587 <li>NCHW
1588 </ul>
1589 <td>
1590 <table>
1591 <tr><th>src<th>dst
1592 <tr><td>F32<td>F32
1593 <tr><td>F16<td>F16
1594 </table>
1595<tr>
1596 <td>CLFuseBatchNormalization
1597 <td>
1598 <ul>
1599 <li>NHWC
1600 <li>NCHW
1601 </ul>
1602 <td>
1603 <table>
1604 <tr><th>src<th>dst
1605 <tr><td>F32<td>F32
1606 <tr><td>F16<td>F16
1607 </table>
1608<tr>
1609 <td rowspan="2">Gather
1610 <td rowspan="2" style="width:200px;"> Performs the Gather operation along the chosen axis.
1611 <td rowspan="2">
1612 <ul>
1613 <li>ANEURALNETWORKS_GATHER
1614 </ul>
1615 <td>NEGather
1616 <td>
1617 <ul>
1618 <li>All
1619 </ul>
1620 <td>
1621 <table>
1622 <tr><th>src<th>dst
1623 <tr><td>All<td>All
1624 </table>
1625<tr>
1626 <td>CLGather
1627 <td>
1628 <ul>
1629 <li>All
1630 </ul>
1631 <td>
1632 <table>
1633 <tr><th>src<th>dst
1634 <tr><td>All<td>All
1635 </table>
1636<tr>
1637 <td rowspan="2">GEMM
1638 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1639 <td rowspan="2">
1640 <ul>
1641 <li>n/a
1642 </ul>
1643 <td>NEGEMM
1644 <td>
1645 <ul>
1646 <li>All
1647 </ul>
1648 <td>
1649 <table>
1650 <tr><th>src0<th>src1<th>src2<th>dst
1651 <tr><td>F32<td>F32<td>F32<td>F32
1652 <tr><td>F16<td>F16<td>F16<td>F16
1653 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1654 </table>
1655<tr>
1656 <td>CLGEMM
1657 <td>
1658 <ul>
1659 <li>All
1660 </ul>
1661 <td>
1662 <table>
1663 <tr><th>src0<th>src1<th>src2<th>dst
1664 <tr><td>F32<td>F32<td>F32<td>F32
1665 <tr><td>F16<td>F16<td>F16<td>F16
1666 </table>
1667<tr>
1668 <td rowspan="1">GEMMConv2d
1669 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1670 <td rowspan="1">
1671 <ul>
1672 <li>ANEURALNETWORKS_CONV_2D
1673 </ul>
1674 <td>NEGEMMConv2d
1675 <td>
1676 <ul>
1677 <li>All
1678 </ul>
1679 <td>
1680 <table>
1681 <tr><th>src0<th>src1<th>src2<th>dst
1682 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1683 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1684 <tr><td>F16<td>F16<td>F16<td>F16
1685 <tr><td>F32<td>F32<td>F32<td>F32
1686 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1687 </table>
1688<tr>
1689 <td rowspan="2">GEMMConvolutionLayer
1690 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1691 <td rowspan="2">
1692 <ul>
1693 <li>ANEURALNETWORKS_CONV_2D
1694 </ul>
1695 <td>NEGEMMConvolutionLayer
1696 <td>
1697 <ul>
1698 <li>NHWC
1699 <li>NCHW
1700 </ul>
1701 <td>
1702 <table>
1703 <tr><th>src0<th>src1<th>src2<th>dst
1704 <tr><td>F16<td>F16<td>F16<td>F16
1705 <tr><td>F32<td>F32<td>F32<td>F32
1706 <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
1707 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1708 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1709 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1710 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1711 </table>
1712<tr>
1713 <td>CLGEMMConvolutionLayer
1714 <td>
1715 <ul>
1716 <li>NHWC
1717 <li>NCHW
1718 </ul>
1719 <td>
1720 <table>
1721 <tr><th>src0<th>src1<th>src2<th>dst
1722 <tr><td>F16<td>F16<td>F16<td>F16
1723 <tr><td>F32<td>F32<td>F32<td>F32
1724 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1725 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1726 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1727 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1728 </table>
1729<tr>
1730 <td rowspan="1">GEMMDeconvolutionLayer
1731 <td rowspan="1" style="width:200px;"> General Matrix Multiplication.
1732 <td rowspan="1">
1733 <ul>
1734 <li>ANEURALNETWORKS_TRANSPOSE_CONV_2D
1735 </ul>
1736 <td>CLGEMMDeconvolutionLayer
1737 <td>
1738 <ul>
1739 <li>NHWC
1740 </ul>
1741 <td>
1742 <table>
1743 <tr><th>src0<th>src1<th>src2<th>dst
1744 <tr><td>F16<td>F16<td>F16<td>F16
1745 <tr><td>F32<td>F32<td>F32<td>F32
1746 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1747 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1748 </table>
1749<tr>
1750 <td rowspan="2">GEMMLowpMatrixMultiplyCore
1751 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1752 <td rowspan="2">
1753 <ul>
1754 <li>n/a
1755 </ul>
1756 <td>NEGEMMLowpMatrixMultiplyCore
1757 <td>
1758 <ul>
1759 <li>NHWC
1760 <li>NCHW
1761 </ul>
1762 <td>
1763 <table>
1764 <tr><th>src0<th>src1<th>src2<th>dst
1765 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1766     <tr><td>QASYMM8<td>QASYMM8_SIGNED<td>S32<td>QASYMM8
1767     <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1768 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1769 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1770 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1771 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1772     <tr><td>QASYMM8<td>QASYMM8_SIGNED<td>F32<td>F32
1773     <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1774 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1775 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1776 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1777 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1778 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1779     <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>F32<td>F32
1780 </table>
1781<tr>
1782 <td>CLGEMMLowpMatrixMultiplyCore
1783 <td>
1784 <ul>
1785 <li>NHWC
1786 <li>NCHW
1787 </ul>
1788 <td>
1789 <table>
1790 <tr><th>src0<th>src1<th>src2<th>dst
1791 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>QASYMM8
1792 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8
1793 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>QASYMM8
1794 <tr><td>QASYMM8<td>QASYMM8<td>S32<td>S32
1795 <tr><td>QASYMM8<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1796 <tr><td>QASYMM8<td>QSYMM8<td>S32<td>S32
1797 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>QASYMM8_SIGNED
1798 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>QASYMM8_SIGNED
1799 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>QASYMM8_SIGNED
1800 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>S32<td>S32
1801 <tr><td>QASYMM8_SIGNED<td>QSYMM8_PER_CHANNEL<td>S32<td>S32
1802 <tr><td>QASYMM8_SIGNED<td>QSYMM8<td>S32<td>S32
1803 </table>
1804<tr>
1805 <td rowspan="2">GEMMLowpOutputStage
1806 <td rowspan="2" style="width:200px;"> General Matrix Multiplication.
1807 <td rowspan="2">
1808 <ul>
1809 <li>n/a
1810 </ul>
1811 <td>NEGEMMLowpOutputStage
1812 <td>
1813 <ul>
1814 <li>All
1815 </ul>
1816 <td>
1817 <table>
1818 <tr><th>src0<th>src1<th>dst
1819 <tr><td>S32<td>S32<td>QASYMM8
1820 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1821 <tr><td>S32<td>S32<td>QSYMM16
1822 </table>
1823<tr>
1824 <td>CLGEMMLowpOutputStage
1825 <td>
1826 <ul>
1827 <li>All
1828 </ul>
1829 <td>
1830 <table>
1831 <tr><th>src0<th>src1<th>dst
1832 <tr><td>S32<td>S32<td>QASYMM8
1833 <tr><td>S32<td>S32<td>QASYMM8_SIGNED
1834 <tr><td>S32<td>S32<td>QSYMM16
1835 </table>
1836<tr>
1837 <td rowspan="2">GenerateProposalsLayer
1838 <td rowspan="2" style="width:200px;"> Function to generate proposals for a RPN (Region Proposal Network).
1839 <td rowspan="2">
1840 <ul>
1841 <li>ANEURALNETWORKS_GENERATE_PROPOSALS
1842 </ul>
1843 <td>NEGenerateProposalsLayer
1844 <td>
1845 <ul>
1846 <li>All
1847 </ul>
1848 <td>
1849 <table>
1850 <tr><th>src0<th>src1<th>src2<th>dst
1851 <tr><td>F16<td>F16<td>F16<td>F16
1852 <tr><td>F32<td>F32<td>F32<td>F32
1853 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1854 </table>
1855<tr>
1856 <td>CLGenerateProposalsLayer
1857 <td>
1858 <ul>
1859 <li>All
1860 </ul>
1861 <td>
1862 <table>
1863 <tr><th>src0<th>src1<th>src2<th>dst
1864 <tr><td>F16<td>F16<td>F16<td>F16
1865 <tr><td>F32<td>F32<td>F32<td>F32
1866 <tr><td>QASYMM8<td>QSYMM8<td>QSYMM16<td>QASYMM8
1867 </table>
1868<tr>
1869 <td rowspan="2">InstanceNormalizationLayer
1870 <td rowspan="2" style="width:200px;"> Function to perform an Instance normalization on a given axis.
1871 <td rowspan="2">
1872 <ul>
1873 <li>ANEURALNETWORKS_INSTANCE_NORMALIZATION
1874 </ul>
1875 <td>NEInstanceNormalizationLayer
1876 <td>
1877 <ul>
1878 <li>NHWC
1879 <li>NCHW
1880 </ul>
1881 <td>
1882 <table>
1883 <tr><th>src<th>dst
1884 <tr><td>F16<td>F16
1885 <tr><td>F32<td>F32
1886 </table>
1887<tr>
1888 <td>CLInstanceNormalizationLayer
1889 <td>
1890 <ul>
1891 <li>NHWC
1892 <li>NCHW
1893 </ul>
1894 <td>
1895 <table>
1896 <tr><th>src<th>dst
1897 <tr><td>F16<td>F16
1898 <tr><td>F32<td>F32
1899 </table>
1900<tr>
1901 <td rowspan="2">L2NormalizeLayer
1902 <td rowspan="2" style="width:200px;"> Function to perform an L2 normalization on a given axis.
1903 <td rowspan="2">
1904 <ul>
1905 <li>ANEURALNETWORKS_L2_NORMALIZATION
1906 </ul>
1907 <td>NEL2NormalizeLayer
1908 <td>
1909 <ul>
1910 <li>NHWC
1911 <li>NCHW
1912 </ul>
1913 <td>
1914 <table>
1915 <tr><th>src<th>dst
1916 <tr><td>F16<td>F16
1917 <tr><td>F32<td>F32
1918 </table>
1919<tr>
1920 <td>CLL2NormalizeLayer
1921 <td>
1922 <ul>
1923 <li>NHWC
1924 <li>NCHW
1925 </ul>
1926 <td>
1927 <table>
1928 <tr><th>src<th>dst
1929 <tr><td>F16<td>F16
1930 <tr><td>F32<td>F32
1931 </table>
1932<tr>
1933 <td rowspan="3">Logical
1934 <td rowspan="3" style="width:200px;"> Function to perform: - Logical AND - Logical OR - Logical NOT
1935 <td rowspan="3">
1936 <ul>
1937 <li>n/a
1938 </ul>
1939 <td>NELogicalAnd
1940 <td>
1941 <ul>
1942 <li>All
1943 </ul>
1944 <td>
1945 <table>
1946 <tr><th>src0<th>src1<th>dst
1947 <tr><td>U8<td>U8<td>U8
1948 </table>
1949<tr>
1950 <td>NELogicalOr
1951 <td>
1952 <ul>
1953 <li>All
1954 </ul>
1955 <td>
1956 <table>
1957 <tr><th>src0<th>src1<th>dst
1958 <tr><td>U8<td>U8<td>U8
1959 </table>
1960<tr>
1961 <td>NELogicalNot
1962 <td>
1963 <ul>
1964 <li>All
1965 </ul>
1966 <td>
1967 <table>
1968 <tr><th>src<th>dst
1969 <tr><td>U8<td>U8
1970 </table>
1971<tr>
1972 <td rowspan="1">LogicalAnd
1973 <td rowspan="1" style="width:200px;"> Function to perform Logical AND.
1974 <td rowspan="1">
1975 <ul>
1976 <li>n/a
1977 </ul>
1978 <td>CLLogicalAnd
1979 <td>
1980 <ul>
1981 <li>All
1982 </ul>
1983 <td>
1984 <table>
1985 <tr><th>src0<th>src1<th>dst
1986 <tr><td>U8<td>U8<td>U8
1987 </table>
1988<tr>
1989 <td rowspan="1">LogicalOr
1990 <td rowspan="1" style="width:200px;"> Function to perform Logical OR.
1991 <td rowspan="1">
1992 <ul>
1993 <li>n/a
1994 </ul>
1995 <td>CLLogicalOr
1996 <td>
1997 <ul>
1998 <li>All
1999 </ul>
2000 <td>
2001 <table>
2002 <tr><th>src0<th>src1<th>dst
2003 <tr><td>U8<td>U8<td>U8
2004 </table>
2005<tr>
2006 <td rowspan="1">LogicalNot
2007 <td rowspan="1" style="width:200px;"> Function to perform Logical NOT.
2008 <td rowspan="1">
2009 <ul>
2010 <li>n/a
2011 </ul>
2012 <td>CLLogicalNot
2013 <td>
2014 <ul>
2015 <li>All
2016 </ul>
2017 <td>
2018 <table>
2019 <tr><th>src<th>dst
2020 <tr><td>U8<td>U8
2021 </table>
2022<tr>
2023 <td rowspan="2">LSTMLayer
2024 <td rowspan="2" style="width:200px;"> Function to perform a single time step in a Long Short-Term Memory (LSTM) layer.
2025 <td rowspan="2">
2026 <ul>
2027 <li>ANEURALNETWORKS_LSTM
2028 </ul>
2029 <td>NELSTMLayer
2030 <td>
2031 <ul>
2032 <li>All
2033 </ul>
2034 <td>
2035 <table>
2036 <tr><th>src0 - src13<th>dst0 - dst3
2037 <tr><td>F16<td>F16
2038 <tr><td>F32<td>F32
2039 </table>
2040<tr>
2041 <td>CLLSTMLayer
2042 <td>
2043 <ul>
2044 <li>All
2045 </ul>
2046 <td>
2047 <table>
2048 <tr><th>src0 - src13<th>dst0 - dst3
2049 <tr><td>F16<td>F16
2050 <tr><td>F32<td>F32
2051 </table>
2052<tr>
2053 <td rowspan="2">LSTMLayerQuantized
2054 <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
2055 <td rowspan="2">
2056 <ul>
2057 <li>ANEURALNETWORKS_QUANTIZED_LSTM
2058 <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
2059 </ul>
2060 <td>NELSTMLayerQuantized
2061 <td>
2062 <ul>
2063 <li>All
2064 </ul>
2065 <td>
2066 <table>
2067 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2068 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2069 </table>
2070<tr>
2071 <td>CLLSTMLayerQuantized
2072 <td>
2073 <ul>
2074 <li>All
2075 </ul>
2076 <td>
2077 <table>
2078 <tr><th>src0 - src8<th>src9 - src12<th>src13<th>src14<th>dst0<th>dst1
2079 <tr><td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8<td>QSYMM16<td>QASYMM8
2080 </table>
2081<tr>
2082 <td rowspan="2">MatMul
2083 <td rowspan="2" style="width:200px;"> Computes a matrix multiplication in batches.
2084 <td rowspan="2">
2085 <ul>
2086 <li>ANEURALNETWORKS_BATCH_MATMUL
2087 </ul>
2088 <td>NEMatMul
2089 <td>
2090 <ul>
2091 <li>Any
2092 </ul>
2093 <td>
2094 <table>
2095 <tr><th>lhs<th>rhs<th>dst
2096 <tr><td>F32<td>F32<td>F32
2097 <tr><td>F16<td>F16<td>F16
2098     <tr><td>BFLOAT16<td>BFLOAT16<td>BFLOAT16
2099     <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2100 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2101 </table>
2102<tr>
2103 <td>CLMatMul
2104 <td>
2105 <ul>
2106 <li>All
2107 </ul>
2108 <td>
2109 <table>
2110 <tr><th>lhs<th>rhs<th>dst
2111 <tr><td>F32<td>F32<td>F32
2112 <tr><td>F16<td>F16<td>F16
2113 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2114 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2115 </table>
2116<tr>
2117 <td rowspan="2">MaxUnpoolingLayer
2118 <td rowspan="2" style="width:200px;"> Function to perform MaxUnpooling.
2119 <td rowspan="2">
2120 <ul>
2121 <li>n/a
2122 </ul>
2123 <td>NEMaxUnpoolingLayer
2124 <td>
2125 <ul>
2126 <li>NHWC
2127 <li>NCHW
2128 </ul>
2129 <td>
2130 <table>
2131 <tr><th>src<th>dst
2132 <tr><td>QASYMM8<td>QASYMM8
2133 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2134 <tr><td>F16<td>F16
2135 <tr><td>F32<td>F32
2136 </table>
2137<tr>
2138 <td>CLMaxUnpoolingLayer
2139 <td>
2140 <ul>
2141 <li>NHWC
2142 <li>NCHW
2143 </ul>
2144 <td>
2145 <table>
2146 <tr><th>src<th>dst
2147 <tr><td>QASYMM8<td>QASYMM8
2148 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2149 <tr><td>F16<td>F16
2150 <tr><td>F32<td>F32
2151 </table>
2152<tr>
2153 <td rowspan="2">MeanStdDevNormalizationLayer
2154 <td rowspan="2" style="width:200px;"> Function to execute mean and standard deviation normalization.
2155 <td rowspan="2">
2156 <ul>
2157 <li>n/a
2158 </ul>
2159 <td>NEMeanStdDevNormalizationLayer
2160 <td>
2161 <ul>
2162 <li>NHWC
2163 <li>NCHW
2164 </ul>
2165 <td>
2166 <table>
2167 <tr><th>src<th>dst
2168 <tr><td>F32<td>F32
2169 <tr><td>F16<td>F16
2170 </table>
2171<tr>
2172 <td>CLMeanStdDevNormalizationLayer
2173 <td>
2174 <ul>
2175 <li>NHWC
2176 <li>NCHW
2177 </ul>
2178 <td>
2179 <table>
2180 <tr><th>src<th>dst
2181 <tr><td>F32<td>F32
2182 <tr><td>F16<td>F16
2183 </table>
2184<tr>
2185 <td rowspan="2">NormalizationLayer
2186 <td rowspan="2" style="width:200px;"> Function to compute normalization layer.
2187 <td rowspan="2">
2188 <ul>
2189 <li>ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION
2190 </ul>
2191 <td>NENormalizationLayer
2192 <td>
2193 <ul>
2194 <li>NHWC
2195 <li>NCHW
2196 </ul>
2197 <td>
2198 <table>
2199 <tr><th>src<th>dst
2200 <tr><td>F32<td>F32
2201 <tr><td>F16<td>F16
2202 </table>
2203<tr>
2204 <td>CLNormalizationLayer
2205 <td>
2206 <ul>
2207 <li>NHWC
2208 <li>NCHW
2209 </ul>
2210 <td>
2211 <table>
2212 <tr><th>src<th>dst
2213 <tr><td>F32<td>F32
2214 <tr><td>F16<td>F16
2215 </table>
2216<tr>
2217 <td rowspan="1">NormalizePlanarYUVLayer
2218 <td rowspan="1" style="width:200px;"> Function to compute normalization planar YUV layer.
2219 <td rowspan="1">
2220 <ul>
2221 <li>n/a
2222 </ul>
2223 <td>CLNormalizePlanarYUVLayer
2224 <td>
2225 <ul>
2226 <li>NHWC
2227 <li>NCHW
2228 </ul>
2229 <td>
2230 <table>
2231 <tr><th>src<th>dst
2232 <tr><td>F32<td>F32
2233 <tr><td>F16<td>F16
2234 <tr><td>QASYMM8<td>QASYMM8
2235 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2236 </table>
2237<tr>
2238 <td rowspan="2">PadLayer
2239 <td rowspan="2" style="width:200px;"> Function to pad a tensor.
2240 <td rowspan="2">
2241 <ul>
2242 <li>ANEURALNETWORKS_PAD
2243 <li>ANEURALNETWORKS_PAD_V2
2244 </ul>
2245 <td>NEPadLayer
2246 <td>
2247 <ul>
2248 <li>NHWC
2249 <li>NCHW
2250 </ul>
2251 <td>
2252 <table>
2253 <tr><th>src<th>dst
2254 <tr><td>All<td>All
2255 </table>
2256<tr>
2257 <td>CLPadLayer
2258 <td>
2259 <ul>
2260 <li>NHWC
2261 <li>NCHW
2262 </ul>
2263 <td>
2264 <table>
2265 <tr><th>src<th>dst
2266 <tr><td>All<td>All
2267 </table>
2268<tr>
2269 <td rowspan="2">Permute
2270 <td rowspan="2" style="width:200px;"> Function to transpose an ND tensor.
2271 <td rowspan="2">
2272 <ul>
2273 <li>ANEURALNETWORKS_TRANSPOSE
2274 </ul>
2275 <td>NEPermute
2276 <td>
2277 <ul>
2278 <li>NHWC
2279 <li>NCHW
2280 </ul>
2281 <td>
2282 <table>
2283 <tr><th>src<th>dst
2284 <tr><td>All<td>All
2285 </table>
2286<tr>
2287 <td>CLPermute
2288 <td>
2289 <ul>
2290 <li>NHWC
2291 <li>NCHW
2292 </ul>
2293 <td>
2294 <table>
2295 <tr><th>src<th>dst
2296 <tr><td>All<td>All
2297 </table>
2298<tr>
2299 <td rowspan="2">PixelWiseMultiplication
2300 <td rowspan="2" style="width:200px;"> Function to perform an element-wise multiplication.
2301 <td rowspan="2">
2302 <ul>
2303 <li>ANEURALNETWORKS_MUL
2304 </ul>
2305 <td>NEPixelWiseMultiplication
2306 <td>
2307 <ul>
2308 <li>All
2309 </ul>
2310 <td>
2311 <table>
2312 <tr><th>src0<th>src1<th>dst
2313 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2314 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2315     <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
2316 <tr><td>QSYMM16<td>QSYMM16<td>S32
2317 <tr><td>U8<td>U8<td>U8
2318 <tr><td>U8<td>U8<td>S16
2319 <tr><td>U8<td>S16<td>S16
2320 <tr><td>S16<td>U8<td>S16
2321 <tr><td>S16<td>S16<td>S16
2322 <tr><td>F16<td>F16<td>F16
2323 <tr><td>F32<td>S32<td>F32
2324 </table>
2325<tr>
2326 <td>CLPixelWiseMultiplication
2327 <td>
2328 <ul>
2329 <li>All
2330 </ul>
2331 <td>
2332 <table>
2333 <tr><th>src0<th>src1<th>dst
2334 <tr><td>QASYMM8<td>QASYMM8<td>QASYMM8
2335 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2336     <tr><td>QSYMM16<td>QSYMM16<td>QSYMM16
2337 <tr><td>QSYMM16<td>QSYMM16<td>S32
2338 <tr><td>U8<td>U8<td>U8
2339 <tr><td>U8<td>U8<td>S16
2340 <tr><td>U8<td>S16<td>S16
2341 <tr><td>S16<td>U8<td>S16
2342 <tr><td>S16<td>S16<td>S16
2343 <tr><td>F16<td>F16<td>F16
2344     <tr><td>F32<td>F32<td>F32
2345     <tr><td>S32<td>S32<td>S32
2346 </table>
2347<tr>
2348 <td rowspan="2">PoolingLayer
2349 <td rowspan="2" style="width:200px;"> Function to perform pooling with the specified pooling operation.
2350 <td rowspan="2">
2351 <ul>
2352 <li>ANEURALNETWORKS_AVERAGE_POOL_2D
2353 <li>ANEURALNETWORKS_L2_POOL_2D
2354 <li>ANEURALNETWORKS_MAX_POOL_2D
2355 </ul>
2356 <td>NEPoolingLayer
2357 <td>
2358 <ul>
2359 <li>NHWC
2360 <li>NCHW
2361 </ul>
2362 <td>
2363 <table>
2364 <tr><th>src<th>dst
2365 <tr><td>QASYMM8<td>QASYMM8
2366 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2367 <tr><td>F16<td>F16
2368 <tr><td>F32<td>F32
2369 </table>
2370<tr>
2371 <td>CLPoolingLayer
2372 <td>
2373 <ul>
2374 <li>NHWC
2375 <li>NCHW
2376 </ul>
2377 <td>
2378 <table>
2379 <tr><th>src<th>dst
2380 <tr><td>QASYMM8<td>QASYMM8
2381 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2382 <tr><td>F16<td>F16
2383 <tr><td>F32<td>F32
2384 </table>
2385<tr>
2386 <td rowspan="2">Pooling3dLayer
2387 <td rowspan="2" style="width:200px;"> Function to perform 3D pooling with the specified pooling operation.
2388 <td rowspan="2">
2389 <ul>
2390     <li>n/a
2391 </ul>
2392 <td>NEPooling3dLayer
2393 <td>
2394 <ul>
2395 <li>NDHWC
2396 </ul>
2397 <td>
2398 <table>
2399 <tr><th>src<th>dst
2400 <tr><td>F16<td>F16
2401 <tr><td>F32<td>F32
2402     <tr><td>QASYMM8<td>QASYMM8
2403     <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2404 </table>
2405<tr>
2406 <td>CLPooling3dLayer
2407 <td>
2408 <ul>
2409 <li>NDHWC
2410 </ul>
2411 <td>
2412 <table>
2413 <tr><th>src<th>dst
2414 <tr><td>F16<td>F16
2415 <tr><td>F32<td>F32
2416     <tr><td>QASYMM8<td>QASYMM8
2417     <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2418 </table>
2419<tr>
2420 <td rowspan="2">PReluLayer
2421 <td rowspan="2" style="width:200px;"> Function to compute the activation layer with the PRELU activation function.
2422 <td rowspan="2">
2423 <ul>
2424 <li>ANEURALNETWORKS_PRELU
2425 </ul>
2426 <td>NEPReluLayer
2427 <td>
2428 <ul>
2429 <li>All
2430 </ul>
2431 <td>
2432 <table>
2433 <tr><th>src<th>dst
2434 <tr><td>QASYMM8<td>QASYMM8
2435 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2436 <tr><td>F16<td>F16
2437 <tr><td>F32<td>F32
2438 </table>
2439<tr>
2440 <td>CLPReluLayer
2441 <td>
2442 <ul>
2443 <li>All
2444 </ul>
2445 <td>
2446 <table>
2447 <tr><th>src<th>dst
2448 <tr><td>QASYMM8<td>QASYMM8
2449 <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
2450 <tr><td>F16<td>F16
2451 <tr><td>F32<td>F32
2452 </table>
2453<tr>
2454 <td rowspan="2">PriorBoxLayer
2455 <td rowspan="2" style="width:200px;"> Function to compute prior boxes and clip.
2456 <td rowspan="2">
2457 <ul>
2458 <li>n/a
2459 </ul>
2460 <td>NEPriorBoxLayer
2461 <td>
2462 <ul>
2463 <li>NHWC
2464 <li>NCHW
2465 </ul>
2466 <td>
2467 <table>
2468 <tr><th>src0<th>src1<th>dst
2469 <tr><td>F32<td>F32<td>F32
2470 </table>
2471<tr>
2472 <td>CLPriorBoxLayer
2473 <td>
2474 <ul>
2475 <li>NHWC
2476 <li>NCHW
2477 </ul>
2478 <td>
2479 <table>
2480 <tr><th>src0<th>src1<th>dst
2481 <tr><td>F32<td>F32<td>F32
2482 </table>
<tr>
  <td rowspan="2">QLSTMLayer
  <td rowspan="2" style="width:200px;"> Function to perform quantized LSTM (Long Short-Term Memory).
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_QUANTIZED_LSTM
       <li>ANEURALNETWORKS_QUANTIZED_16BIT_LSTM
    </ul>
  <td>NEQLSTMLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
    <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
    </table>
<tr>
  <td>CLQLSTMLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1 - src6<th>src7 - src9<th>src10<th>src11<th>dst0<th>dst1 - dst2
    <tr><td>QASYMM8_SIGNED<td>QASYMM8<td>S32<td>QSYMM16<td>QASYMM8_SIGNED<td>QSYMM16<td>QASYMM8_SIGNED
    </table>
<tr>
  <td rowspan="2">QuantizationLayer
  <td rowspan="2" style="width:200px;"> Function to perform a quantization layer.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_QUANTIZE
    </ul>
  <td>NEQuantizationLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
    <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
    <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
    <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
    </table>
<tr>
  <td>CLQuantizationLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
    <tr><td>QASYMM8_SIGNED<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
    <tr><td>F16<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
    <tr><td>F32<td>QASYMM8, QASYMM8_SIGNED, QASYMM16
    </table>
<tr>
  <td rowspan="2">Range
  <td rowspan="2" style="width:200px;"> Function to generate a sequence of numbers starting from START and extending by increments of 'STEP' up to but not including 'END'.
  <td rowspan="2">
    <ul>
       <li>n/a
    </ul>
  <td>NERange
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>dst
    <tr><td>U8
    <tr><td>S8
    <tr><td>U16
    <tr><td>S16
    <tr><td>U32
    <tr><td>S32
    <tr><td>F16
    <tr><td>F32
    </table>
<tr>
  <td>CLRange
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>dst
    <tr><td>U8
    <tr><td>S8
    <tr><td>QASYMM8
    <tr><td>U16
    <tr><td>S16
    <tr><td>U32
    <tr><td>S32
    <tr><td>F16
    <tr><td>F32
    </table>
<tr>
  <td rowspan="2">ReduceMean
  <td rowspan="2" style="width:200px;"> Function to perform a reduce mean operation.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_MEAN
    </ul>
  <td>NEReduceMean
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td>CLReduceMean
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">ReductionOperation
  <td rowspan="2" style="width:200px;"> Function to perform a reduction with the following operations - ARG_IDX_MAX: Index of the max value - ARG_IDX_MIN: Index of the min value - MEAN_SUM: Mean of sum - PROD: Product - SUM_SQUARE: Sum of squares - SUM: Sum - MIN: Min - MAX: Max
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_REDUCE_ALL
       <li>ANEURALNETWORKS_REDUCE_ANY
       <li>ANEURALNETWORKS_REDUCE_MAX
       <li>ANEURALNETWORKS_REDUCE_MIN
       <li>ANEURALNETWORKS_REDUCE_PROD
       <li>ANEURALNETWORKS_REDUCE_SUM
    </ul>
  <td>NEReductionOperation
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    <tr><td>S32<td>S32
    </table>
<tr>
  <td>CLReductionOperation
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    <tr><td>S32<td>S32
    </table>
<tr>
  <td rowspan="1">ReorderLayer
  <td rowspan="1" style="width:200px;"> Reorders a tensor to a different weights format.
  <td rowspan="1">
    <ul>
       <li>n/a
    </ul>
  <td>NEReorderLayer
  <td>
    <ul>
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">ReorgLayer
  <td rowspan="2" style="width:200px;"> Performs a reorganization of the input tensor to the output tensor.
  <td rowspan="2">
    <ul>
       <li>n/a
    </ul>
  <td>NEReorgLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLReorgLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">ReshapeLayer
  <td rowspan="2" style="width:200px;"> Function to reshape a tensor.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_RESHAPE
       <li>ANEURALNETWORKS_SQUEEZE
    </ul>
  <td>NEReshapeLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLReshapeLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Reverse
  <td rowspan="2" style="width:200px;"> Function to reverse a tensor according to an axis.
  <td rowspan="2">
    <ul>
       <li>n/a
    </ul>
  <td>NEReverse
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>U32, S32<td>All
    </table>
<tr>
  <td>CLReverse
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>All<td>U32, S32<td>All
    </table>
<tr>
  <td rowspan="2">RNNLayer
  <td rowspan="2" style="width:200px;"> Function to perform a recurrent neural network layer.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_RNN
    </ul>
  <td>NERNNLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
    <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td>CLRNNLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>src3<th>dst0<th>dst1
    <tr><td>F16<td>F16<td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">ROIAlignLayer
  <td rowspan="2" style="width:200px;"> Function to perform ROI alignment.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_ROI_ALIGN
    </ul>
  <td>NEROIAlignLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
    </table>
<tr>
  <td>CLROIAlignLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32
    <tr><td>QASYMM8<td>QASYMM16<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM16<td>QASYMM8_SIGNED
    </table>
<tr>
  <td rowspan="2">ROIPoolingLayer
  <td rowspan="2" style="width:200px;"> Function to perform ROI pooling.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_ROI_POOLING
    </ul>
  <td>NEROIPoolingLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F32<td>U16<td>F32
    <tr><td>QASYMM8<td>U16<td>QASYMM8
    </table>
<tr>
  <td>CLROIPoolingLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>dst
    <tr><td>F16<td>U16<td>F16
    <tr><td>F32<td>U16<td>F32
    <tr><td>QASYMM8<td>U16<td>QASYMM8
    </table>
<tr>
  <td rowspan="2">Scale
  <td rowspan="2" style="width:200px;"> Function to resize a tensor using one of the following interpolation methods: - Bilinear - Nearest neighbor
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_RESIZE_BILINEAR
       <li>ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
    </ul>
  <td>NEScale
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    <tr><td>U8<td>U8
    <tr><td>S8<td>S8
    <tr><td>S16<td>S16
    </table>
<tr>
  <td>CLScale
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    <tr><td>U8<td>U8
    <tr><td>S16<td>S16
    </table>
<tr>
  <td rowspan="2">Select
  <td rowspan="2" style="width:200px;"> Function to select values from 2 tensors depending on an input tensor of booleans.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_SELECT
    </ul>
  <td>NESelect
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>U8<td>All<td>All<td>All
    </table>
<tr>
  <td>CLSelect
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>U8<td>All<td>All<td>All
    </table>
<tr>
  <td rowspan="2">Slice
  <td rowspan="2" style="width:200px;"> Function to perform tensor slicing.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_SLICE
    </ul>
  <td>NESlice
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLSlice
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">SoftmaxLayer
  <td rowspan="2" style="width:200px;"> Function to compute a SoftmaxLayer and a Log SoftmaxLayer.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_LOG_SOFTMAX
       <li>ANEURALNETWORKS_SOFTMAX
    </ul>
  <td>NESoftmaxLayerGeneric
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td>CLSoftmaxLayerGeneric
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>QASYMM8<td>QASYMM8
    <tr><td>QASYMM8_SIGNED<td>QASYMM8_SIGNED
    <tr><td>F16<td>F16
    <tr><td>F32<td>F32
    </table>
<tr>
  <td rowspan="2">SpaceToBatchLayer
  <td rowspan="2" style="width:200px;"> Function to divide a tensor spatially.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_SPACE_TO_BATCH_ND
    </ul>
  <td>NESpaceToBatchLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>All<td>S32<td>S32<td>All
    </table>
<tr>
  <td>CLSpaceToBatchLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>All<td>S32<td>S32<td>All
    </table>
<tr>
  <td rowspan="2">SpaceToDepthLayer
  <td rowspan="2" style="width:200px;"> Function to rearrange blocks of spatial data into depth.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_SPACE_TO_DEPTH
    </ul>
  <td>NESpaceToDepthLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLSpaceToDepthLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Split
  <td rowspan="2" style="width:200px;"> Function to split a tensor along a given axis.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_SPLIT
    </ul>
  <td>NESplit
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLSplit
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">StackLayer
  <td rowspan="2" style="width:200px;"> Function to stack tensors along an axis.
  <td rowspan="2">
    <ul>
       <li>n/a
    </ul>
  <td>NEStackLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLStackLayer
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">StridedSlice
  <td rowspan="2" style="width:200px;"> Function to extract a strided slice of a tensor.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_STRIDED_SLICE
    </ul>
  <td>NEStridedSlice
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLStridedSlice
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Tile
  <td rowspan="2" style="width:200px;"> Function to construct a tensor by tiling a given tensor.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_TILE
    </ul>
  <td>NETile
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLTile
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Transpose
  <td rowspan="2" style="width:200px;"> Function to transpose a 2D tensor.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_TRANSPOSE
    </ul>
  <td>NETranspose
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLTranspose
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">Unstack
  <td rowspan="2" style="width:200px;"> Function to unpack a rank-R tensor into rank-(R-1) tensors.
  <td rowspan="2">
    <ul>
       <li>n/a
    </ul>
  <td>NEUnstack
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td>CLUnstack
  <td>
    <ul>
       <li>All
    </ul>
  <td>
    <table>
    <tr><th>src<th>dst
    <tr><td>All<td>All
    </table>
<tr>
  <td rowspan="2">WinogradConvolutionLayer
  <td rowspan="2" style="width:200px;"> Function to perform Winograd convolution.
  <td rowspan="2">
    <ul>
       <li>ANEURALNETWORKS_CONV_2D
    </ul>
  <td>NEWinogradConvolutionLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    </table>
<tr>
  <td>CLWinogradConvolutionLayer
  <td>
    <ul>
       <li>NHWC
       <li>NCHW
    </ul>
  <td>
    <table>
    <tr><th>src0<th>src1<th>src2<th>dst
    <tr><td>F16<td>F16<td>F16<td>F16
    <tr><td>F32<td>F32<td>F32<td>F32
    </table>
</table>
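Every function in the table above follows the same configure-then-run workflow: construct the function object, call configure() with pointers to the input and output tensors, allocate the tensors' backing memory, then call run(). As an illustrative sketch only (the choice of NEReshapeLayer and the tensor shapes are arbitrary examples, not prescribed by this table, and error handling is omitted), typical usage looks like:

```cpp
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/functions/NEReshapeLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Describe a 2D FP32 source tensor and a 1D destination holding the
    // same total number of elements (example shapes only).
    Tensor src{};
    Tensor dst{};
    src.allocator()->init(TensorInfo(TensorShape(4U, 2U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(8U), 1, DataType::F32));

    // Configure the function before allocating backing memory; this is
    // where shapes and data types are validated.
    NEReshapeLayer reshape{};
    reshape.configure(&src, &dst);

    // Allocate the tensors' memory, then execute the operator.
    src.allocator()->allocate();
    dst.allocator()->allocate();
    reshape.run();

    return 0;
}
```

The CL variants follow the same pattern, with the additional step of initializing the OpenCL runtime (e.g. via CLScheduler::get().default_init()) before configuring any CL function.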

*/
} // namespace