------ ArmNN for Android NNAPI supported operations ------

This release of ArmNN for Android supports use as a driver for the Android Neural Networks API. It implements the
android.hardware.neuralnetworks@1.0, android.hardware.neuralnetworks@1.1 and android.hardware.neuralnetworks@1.2
HAL interfaces.

For more information on the Android Neural Networks API, see https://developer.android.com/ndk/guides/neuralnetworks/index.html

For integration and usage documentation, please see README.md.

--- Support for Android Neural Networks HAL operations ---

The following AndroidNN HAL 1.0, 1.1 and 1.2 operations are currently supported:

AndroidNN operator             Tensor type supported
ABS                            (FLOAT32)
ADD                            (FLOAT32, QUANT8_ASYMM)
ARGMAX                         (FLOAT32, QUANT8_ASYMM)
ARGMIN                         (FLOAT32, QUANT8_ASYMM)
AVERAGE_POOL_2D                (FLOAT32, QUANT8_ASYMM)
BATCH_TO_SPACE_ND              (FLOAT32, QUANT8_ASYMM)
CONCATENATION                  (FLOAT32, FLOAT16, QUANT8_ASYMM)
CONV_2D                        (FLOAT32, QUANT8_ASYMM)
DEPTH_TO_SPACE                 (FLOAT32, FLOAT16, QUANT8_ASYMM)
DEPTHWISE_CONV_2D              (FLOAT32, QUANT8_ASYMM)
DEQUANTIZE                     (FLOAT32 (output only), QUANT8_ASYMM (input only))
DIV                            (FLOAT32, QUANT8_ASYMM)
EQUAL                          (FLOAT32, QUANT8_ASYMM)
EXPAND_DIMS                    (FLOAT32, FLOAT16, QUANT8_ASYMM)
FLOOR                          (FLOAT32)
FULLY_CONNECTED                (FLOAT32, QUANT8_ASYMM)
GREATER                        (FLOAT32, QUANT8_ASYMM)
GREATER_EQUAL                  (FLOAT32, QUANT8_ASYMM)
GROUPED_CONV_2D                (FLOAT32, QUANT8_ASYMM)
INSTANCE_NORMALIZATION         (FLOAT32)
L2_NORMALIZATION               (FLOAT32)
L2_POOL_2D                     (FLOAT32, QUANT8_ASYMM)
LESS                           (FLOAT32, QUANT8_ASYMM)
LESS_EQUAL                     (FLOAT32, QUANT8_ASYMM)
LOCAL_RESPONSE_NORMALIZATION   (FLOAT32)
LOGISTIC                       (FLOAT32, QUANT8_ASYMM)
LOG_SOFTMAX                    (FLOAT32)
LSTM                           (FLOAT32)
MAXIMUM                        (FLOAT32, QUANT8_ASYMM)
MAX_POOL_2D                    (FLOAT32, QUANT8_ASYMM)
MEAN                           (FLOAT32, QUANT8_ASYMM)
MINIMUM                        (FLOAT32, QUANT8_ASYMM)
MUL                            (FLOAT32, QUANT8_ASYMM)
NOT_EQUAL                      (FLOAT32, QUANT8_ASYMM)
PAD                            (FLOAT32, FLOAT16, QUANT8_ASYMM)
PAD_V2                         (FLOAT32, FLOAT16, QUANT8_ASYMM)
PRELU                          (FLOAT32, QUANT8_ASYMM)
QUANTIZE                       (FLOAT32 (input only), QUANT8_ASYMM (output only))
QUANTIZED_16BIT_LSTM           (QUANT8_ASYMM)
RELU                           (FLOAT32, QUANT8_ASYMM)
RELU1                          (FLOAT32, QUANT8_ASYMM)
RELU6                          (FLOAT32, QUANT8_ASYMM)
RESHAPE                        (FLOAT32, FLOAT16, QUANT8_ASYMM)
RESIZE_BILINEAR                (FLOAT32, QUANT8_ASYMM)
RESIZE_NEAREST_NEIGHBOR        (FLOAT32, QUANT8_ASYMM)
RSQRT                          (FLOAT32)
SOFTMAX                        (FLOAT32, QUANT8_ASYMM)
SPACE_TO_BATCH_ND              (FLOAT32, QUANT8_ASYMM)
SPACE_TO_DEPTH                 (FLOAT32, FLOAT16, QUANT8_ASYMM)
SQRT                           (FLOAT32)
SQUEEZE                        (FLOAT32, FLOAT16, QUANT8_ASYMM)
STRIDED_SLICE                  (FLOAT32, QUANT8_ASYMM)
SUB                            (FLOAT32, QUANT8_ASYMM)
TANH                           (FLOAT32, QUANT8_ASYMM)
TRANSPOSE                      (FLOAT32, QUANT8_ASYMM)
TRANSPOSE_CONV_2D              (FLOAT32, QUANT8_ASYMM)
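For quick programmatic checks, the support table above can be captured as a simple lookup. The sketch below is illustrative only (the helper name and the use of Python are our own choices, not part of the driver); it shows a few representative rows, which could be extended to the full table:

```python
# Illustrative lookup mirroring a subset of the support table above.
# Operator names and tensor types are taken verbatim from the table.
SUPPORTED_OPS = {
    "ABS": {"FLOAT32"},
    "ADD": {"FLOAT32", "QUANT8_ASYMM"},
    "CONCATENATION": {"FLOAT32", "FLOAT16", "QUANT8_ASYMM"},
    "CONV_2D": {"FLOAT32", "QUANT8_ASYMM"},
    "FLOOR": {"FLOAT32"},
    "LSTM": {"FLOAT32"},
    "SOFTMAX": {"FLOAT32", "QUANT8_ASYMM"},
    "TRANSPOSE_CONV_2D": {"FLOAT32", "QUANT8_ASYMM"},
}

def is_supported(op, tensor_type):
    """Return True if the table lists `tensor_type` for `op`."""
    return tensor_type in SUPPORTED_OPS.get(op, set())
```

For example, is_supported("ADD", "FLOAT32") is True, while an unlisted combination such as is_supported("FLOOR", "QUANT8_ASYMM") is False.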

Where an operation is not supported by the ArmNN Android NN Driver, the driver reports this to the framework,
and the framework falls back to its own CPU implementation of that operation.
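This fallback can be pictured as a partitioning step: the framework asks the driver which operations it supports and routes the rest to the CPU. The sketch below is a hypothetical illustration of that idea (the function name and list-of-booleans shape stand in for the driver's per-operation support report; they are not the driver's actual API):

```python
# Hypothetical sketch of model partitioning: operations the driver
# reports as supported run on the driver; the rest fall back to the
# framework's CPU implementation.
def partition(model_ops, driver_supports):
    """Split `model_ops` using one supported/unsupported flag per op."""
    on_driver, on_cpu = [], []
    for op, supported in zip(model_ops, driver_supports):
        (on_driver if supported else on_cpu).append(op)
    return on_driver, on_cpu
```

For instance, partition(["CONV_2D", "ABS", "CUSTOM_OP"], [True, True, False]) places CONV_2D and ABS on the driver and CUSTOM_OP on the CPU fallback.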

NOTE: Only tensor types that are fully supported across all ArmNN backends are listed above. FLOAT16 input
tensors are partially supported by most HAL 1.2 operations on the GpuAcc and CpuRef backends, but not
on CpuAcc.