------ ArmNN for Android NNAPI supported operations ------

This release of ArmNN for Android supports use as a driver for the Android Neural Networks API. It implements
the android.hardware.neuralnetworks@1.0, android.hardware.neuralnetworks@1.1 and
android.hardware.neuralnetworks@1.2 HAL interfaces.

For more information on the Android Neural Networks API, see https://developer.android.com/ndk/guides/neuralnetworks/index.html

For integration and usage documentation, please see README.md.

--- Support for Android Neural Networks HAL operations ---

The following AndroidNN HAL 1.0, 1.1 and 1.2 operations are currently supported:

AndroidNN operator            Tensor type supported
ABS                           (FLOAT32)
ADD                           (FLOAT32, QUANT8_ASYMM)
ARGMAX                        (FLOAT32, QUANT8_ASYMM)
ARGMIN                        (FLOAT32, QUANT8_ASYMM)
AVERAGE_POOL_2D               (FLOAT32, QUANT8_ASYMM)
BATCH_TO_SPACE_ND             (FLOAT32, QUANT8_ASYMM)
CONCATENATION                 (FLOAT32, FLOAT16, QUANT8_ASYMM)
CONV_2D                       (FLOAT32, QUANT8_ASYMM)
DEPTH_TO_SPACE                (FLOAT32, FLOAT16, QUANT8_ASYMM)
DEPTHWISE_CONV_2D             (FLOAT32, QUANT8_ASYMM)
DEQUANTIZE                    (FLOAT32 (output only), QUANT8_ASYMM (input only))
DIV                           (FLOAT32, QUANT8_ASYMM)
EQUAL                         (FLOAT32, QUANT8_ASYMM)
EXPAND_DIMS                   (FLOAT32, FLOAT16, QUANT8_ASYMM)
FLOOR                         (FLOAT32)
FULLY_CONNECTED               (FLOAT32, QUANT8_ASYMM)
GREATER                       (FLOAT32, QUANT8_ASYMM)
GREATER_EQUAL                 (FLOAT32, QUANT8_ASYMM)
GROUPED_CONV_2D               (FLOAT32, QUANT8_ASYMM)
INSTANCE_NORMALIZATION        (FLOAT32)
L2_NORMALIZATION              (FLOAT32)
L2_POOL_2D                    (FLOAT32, QUANT8_ASYMM)
LESS                          (FLOAT32, QUANT8_ASYMM)
LESS_EQUAL                    (FLOAT32, QUANT8_ASYMM)
LOCAL_RESPONSE_NORMALIZATION  (FLOAT32)
LOGISTIC                      (FLOAT32, QUANT8_ASYMM)
LOG_SOFTMAX                   (FLOAT32)
LSTM                          (FLOAT32)
MAXIMUM                       (FLOAT32, QUANT8_ASYMM)
MAX_POOL_2D                   (FLOAT32, QUANT8_ASYMM)
MEAN                          (FLOAT32, QUANT8_ASYMM)
MINIMUM                       (FLOAT32, QUANT8_ASYMM)
MUL                           (FLOAT32, QUANT8_ASYMM)
NOT_EQUAL                     (FLOAT32, QUANT8_ASYMM)
PAD                           (FLOAT32, FLOAT16, QUANT8_ASYMM)
PAD_V2                        (FLOAT32, FLOAT16, QUANT8_ASYMM)
PRELU                         (FLOAT32, QUANT8_ASYMM)
QUANTIZE                      (FLOAT32 (input only), QUANT8_ASYMM (output only))
QUANTIZED_16BIT_LSTM          (QUANT8_ASYMM)
RELU                          (FLOAT32, QUANT8_ASYMM)
RELU1                         (FLOAT32, QUANT8_ASYMM)
RELU6                         (FLOAT32, QUANT8_ASYMM)
RESHAPE                       (FLOAT32, FLOAT16, QUANT8_ASYMM)
RESIZE_BILINEAR               (FLOAT32, QUANT8_ASYMM)
RESIZE_NEAREST_NEIGHBOR       (FLOAT32, QUANT8_ASYMM)
RSQRT                         (FLOAT32)
SOFTMAX                       (FLOAT32, QUANT8_ASYMM)
SPACE_TO_BATCH_ND             (FLOAT32, QUANT8_ASYMM)
SPACE_TO_DEPTH                (FLOAT32, FLOAT16, QUANT8_ASYMM)
SQRT                          (FLOAT32)
SQUEEZE                       (FLOAT32, FLOAT16, QUANT8_ASYMM)
STRIDED_SLICE                 (FLOAT32, QUANT8_ASYMM)
SUB                           (FLOAT32, QUANT8_ASYMM)
TANH                          (FLOAT32, QUANT8_ASYMM)
TRANSPOSE                     (FLOAT32, QUANT8_ASYMM)
TRANSPOSE_CONV_2D             (FLOAT32, QUANT8_ASYMM)
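
Several rows above distinguish QUANT8_ASYMM from FLOAT32 tensors. In NNAPI, a TENSOR_QUANT8_ASYMM value q
(an unsigned 8-bit integer) represents the real value (q - zeroPoint) * scale. The sketch below shows what the
QUANTIZE and DEQUANTIZE operations listed above compute per element (illustrative Python; the scale and
zeroPoint values are arbitrary example parameters, not taken from any particular model):

```python
def quantize(real: float, scale: float, zero_point: int) -> int:
    """Per-element QUANTIZE: FLOAT32 in, QUANT8_ASYMM out."""
    q = round(real / scale) + zero_point
    return max(0, min(255, q))  # clamp to the unsigned 8-bit range


def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Per-element DEQUANTIZE: QUANT8_ASYMM in, FLOAT32 out."""
    return (q - zero_point) * scale


scale, zero_point = 0.05, 128        # example quantization parameters
q = quantize(1.25, scale, zero_point)
print(q)                             # 153
print(dequantize(q, scale, zero_point))
```

Note that values outside the representable range [-zeroPoint * scale, (255 - zeroPoint) * scale] saturate at 0
or 255, which is why quantized and float results of the same operator can differ near the range limits.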

Where operations are not supported by the ArmNN Android NN Driver, the driver indicates this to the framework
appropriately and the framework implements those operations using a CPU implementation.
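
This hand-off can be pictured with a toy sketch (purely illustrative Python, not the actual runtime code): at
compilation time the framework asks the driver which of a model's operations it supports, via the HAL's
getSupportedOperations call, and schedules everything the driver rejects onto its own CPU path.

```python
# A small subset of the supported-operations table above, for illustration only.
DRIVER_SUPPORTED = {"CONV_2D", "RELU", "MAX_POOL_2D", "SOFTMAX"}


def partition(model_ops):
    """Assign each operation to the driver or to the framework's CPU fallback."""
    return [(op, "driver" if op in DRIVER_SUPPORTED else "cpu_fallback")
            for op in model_ops]


# "MY_CUSTOM_OP" is a made-up operation name standing in for anything the
# driver does not support.
for op, target in partition(["CONV_2D", "RELU", "MY_CUSTOM_OP", "SOFTMAX"]):
    print(f"{op:14s} -> {target}")
```

The same mechanism applies per tensor type: an operator listed above as FLOAT32-only would be rejected, and
would fall back to the CPU, when it appears in a quantized model.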

NOTE: By convention, the list above includes only those tensor types that are fully supported across all
ArmNN backends. FLOAT16 input tensors are partially supported on most HAL 1.2 operators on the GpuAcc and
CpuRef backends, but not on CpuAcc.