<!-- HTML header for doxygen 1.8.17-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "https://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.17"/>
<meta name="viewport" content="width=device-width, initial-scale=1"/>
<title>Arm NN: Parsers</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtreedata.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/searchdata.js"></script>
<script type="text/javascript" src="search/search.js"></script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js"],
jax: ["input/TeX","output/HTML-CSS"],
});
</script>
<script type="text/javascript" async="async" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js"></script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
<link href="customdoxygen.css" rel="stylesheet" type="text/css"/>
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
<tbody>
<tr style="height: 56px;">
<img alt="ArmNN" src="Arm_NN_horizontal_blue.png" style="max-width: 15rem; margin-top: .5rem; margin-left 13px"/>
<td id="projectalign" style="padding-left: 0.9em;">
<div id="projectname">
&#160;<span id="projectnumber">24.02</span>
</div>
</td>
</tr>
</tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.17 -->
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
var searchBox = new SearchBox("searchBox", "search",false,'Search');
/* @license-end */
</script>
<script type="text/javascript" src="menudata.js"></script>
<script type="text/javascript" src="menu.js"></script>
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
$(function() {
initMenu('',true,false,'search.php','Search');
$(document).ready(function() { init_search(); });
});
/* @license-end */</script>
<div id="main-nav"></div>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
<div id="nav-tree">
<div id="nav-tree-contents">
<div id="nav-sync" class="sync"></div>
</div>
</div>
<div id="splitbar" style="-moz-user-select:none;"
class="ui-resizable-handle">
</div>
</div>
<script type="text/javascript">
/* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&amp;dn=gpl-2.0.txt GPL-v2 */
$(document).ready(function(){initNavTree('parsers.html',''); initResizable(); });
/* @license-end */
</script>
<div id="doc-content">
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
onmouseover="return searchBox.OnSearchSelectShow()"
onmouseout="return searchBox.OnSearchSelectHide()"
onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>
<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0"
name="MSearchResults" id="MSearchResults">
</iframe>
</div>
<div class="PageDoc"><div class="header">
<div class="headertitle">
<div class="title">Parsers </div> </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><p>Execute models from different machine learning platforms efficiently with our parsers. Simply choose a parser according to the model you want to run e.g. If you've got a model in onnx format (&lt;model_name&gt;.onnx) use our onnx-parser.</p>
<p>If you would like to run a TensorFlow Lite (TfLite) model, you probably also want to take a look at our <a class="el" href="delegate.html">TfLite Delegate</a>.</p>
<p>All parsers are written in C++, but it is also possible to use them from Python. For more information on our Python bindings, take a look at the <a class="el" href="md_python_pyarmnn__r_e_a_d_m_e.html">PyArmNN</a> section.</p>
<p><br />
<br />
</p>
<h1><a class="anchor" id="S5_onnx_parser"></a>
Arm NN ONNX Parser</h1>
<p><code><a class="el" href="namespacearmnn_onnx_parser.html">armnnOnnxParser</a></code> is a library for loading neural networks defined in ONNX protobuf files into the Arm NN runtime.</p>
<h2>ONNX operators that the Arm NN SDK supports</h2>
<p>This reference guide provides a list of ONNX operators the Arm NN SDK currently supports.</p>
<p>The Arm NN SDK ONNX parser currently only supports fp32 operators.</p>
<h3>Fully supported</h3>
<ul>
<li>Add<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Add">Add documentation</a> for more information.</li>
</ul>
</li>
<li>AveragePool<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#AveragePool">AveragePool documentation</a> for more information.</li>
</ul>
</li>
<li>Concat<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Concat">Concat documentation</a> for more information.</li>
</ul>
</li>
<li>Constant<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Constant">Constant documentation</a> for more information.</li>
</ul>
</li>
<li>Clip<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Clip">Clip documentation</a> for more information.</li>
</ul>
</li>
<li>Flatten<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Flatten">Flatten documentation</a> for more information.</li>
</ul>
</li>
<li>Gather<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Gather">Gather documentation</a> for more information.</li>
</ul>
</li>
<li>GlobalAveragePool<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#GlobalAveragePool">GlobalAveragePool documentation</a> for more information.</li>
</ul>
</li>
<li>LeakyRelu<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#LeakyRelu">LeakyRelu documentation</a> for more information.</li>
</ul>
</li>
<li>MaxPool<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#MaxPool">MaxPool documentation</a> for more information.</li>
</ul>
</li>
<li>Relu<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Relu">Relu documentation</a> for more information.</li>
</ul>
</li>
<li>Reshape<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Reshape">Reshape documentation</a> for more information.</li>
</ul>
</li>
<li>Shape<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Shape">Shape documentation</a> for more information.</li>
</ul>
</li>
<li>Sigmoid<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Sigmoid">Sigmoid documentation</a> for more information.</li>
</ul>
</li>
<li>Tanh<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Tanh">Tanh documentation</a> for more information.</li>
</ul>
</li>
<li>Unsqueeze<ul>
<li>See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Unsqueeze">Unsqueeze documentation</a> for more information.</li>
</ul>
</li>
</ul>
<h3>Partially supported</h3>
<ul>
<li>Conv<ul>
<li>The parser only supports 2D convolutions with group = 1 or group = number of input channels (depthwise convolution). See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Conv">Conv documentation</a> for more information.</li>
</ul>
</li>
<li>BatchNormalization<ul>
<li>The parser does not support training mode. See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#BatchNormalization">BatchNormalization documentation</a> for more information.</li>
</ul>
</li>
<li>Gemm<ul>
<li>The parser only supports constant bias or non-constant bias where bias dimension = 1. See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#Gemm">Gemm documentation</a> for more information.</li>
</ul>
</li>
<li>MatMul<ul>
<li>The parser only supports constant weights in a fully connected layer. See the ONNX <a href="https://github.com/onnx/onnx/blob/master/docs/Operators.md#MatMul">MatMul documentation</a> for more information.</li>
</ul>
</li>
</ul>
<h2>Tested networks</h2>
<p>Arm tested these operators with the following ONNX fp32 neural networks:</p><ul>
<li>Mobilenet_v2. See the ONNX <a href="https://github.com/onnx/models/tree/master/vision/classification/mobilenet">MobileNet documentation</a> for more information.</li>
<li>Simple MNIST. This is no longer directly documented by ONNX. The model and test data may be downloaded <a href="https://onnxzoo.blob.core.windows.net/models/opset_8/mnist/mnist.tar.gz">from the ONNX model zoo</a>.</li>
</ul>
<p>More machine learning operators will be supported in future releases. <br />
<br />
<br />
<br />
</p>
<h1><a class="anchor" id="S6_tf_lite_parser"></a>
Arm NN TfLite Parser</h1>
<p><code><a class="el" href="namespacearmnn_tf_lite_parser.html">armnnTfLiteParser</a></code> is a library for loading neural networks defined by TensorFlow Lite FlatBuffers files into the Arm NN runtime.</p>
<h2>TensorFlow Lite operators that the Arm NN SDK supports</h2>
<p>This reference guide provides a list of TensorFlow Lite operators the Arm NN SDK currently supports.</p>
<h3>Fully supported</h3>
<p>The Arm NN SDK TensorFlow Lite parser currently supports the following operators:</p>
<ul>
<li>ABS</li>
<li>ADD</li>
<li>ARG_MAX</li>
<li>ARG_MIN</li>
<li>AVERAGE_POOL_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE</li>
<li>BATCH_TO_SPACE</li>
<li>BROADCAST_TO</li>
<li>CAST</li>
<li>CEIL</li>
<li>CONCATENATION, Supported Fused Activation: RELU, RELU6, TANH, NONE</li>
<li>CONV_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE</li>
<li>CONV_3D, Supported Fused Activation: RELU, RELU6, TANH, NONE</li>
<li>DEPTH_TO_SPACE</li>
<li>DEPTHWISE_CONV_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE</li>
<li>DEQUANTIZE</li>
<li>DIV</li>
<li>ELU</li>
<li>EQUAL</li>
<li>EXP</li>
<li>EXPAND_DIMS</li>
<li>FLOOR_DIV</li>
<li>FULLY_CONNECTED, Supported Fused Activation: RELU, RELU6, TANH, NONE</li>
<li>GATHER</li>
<li>GATHER_ND</li>
<li>GELU</li>
<li>GREATER</li>
<li>GREATER_EQUAL</li>
<li>HARD_SWISH</li>
<li>LEAKY_RELU</li>
<li>LESS</li>
<li>LESS_EQUAL</li>
<li>LOG</li>
<li>LOGICAL_NOT</li>
<li>LOGISTIC</li>
<li>LOG_SOFTMAX</li>
<li>L2_NORMALIZATION</li>
<li>MAX_POOL_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE</li>
<li>MAXIMUM</li>
<li>MEAN</li>
<li>MINIMUM</li>
<li>MIRROR_PAD</li>
<li>MUL</li>
<li>NEG</li>
<li>NOT_EQUAL</li>
<li>PACK</li>
<li>PAD</li>
<li>PADV2</li>
<li>POW</li>
<li>PRELU</li>
<li>QUANTIZE</li>
<li>RELU</li>
<li>RELU6</li>
<li>REDUCE_MAX</li>
<li>REDUCE_MIN</li>
<li>REDUCE_PROD</li>
<li>RESHAPE</li>
<li>RESIZE_BILINEAR</li>
<li>RESIZE_NEAREST_NEIGHBOR</li>
<li>REVERSE_V2</li>
<li>RSQRT</li>
<li>SHAPE</li>
<li>SIN</li>
<li>SLICE</li>
<li>SOFTMAX</li>
<li>SPACE_TO_BATCH</li>
<li>SPACE_TO_DEPTH</li>
<li>SPLIT</li>
<li>SPLIT_V</li>
<li>SQUEEZE</li>
<li>SQRT</li>
<li>SQUARE</li>
<li>SQUARE_DIFFERENCE</li>
<li>STRIDED_SLICE</li>
<li>SUB</li>
<li>SUM</li>
<li>TANH</li>
<li>TILE</li>
<li>TRANSPOSE</li>
<li>TRANSPOSE_CONV</li>
<li>UNIDIRECTIONAL_SEQUENCE_LSTM</li>
<li>UNPACK</li>
</ul>
<h3>Custom Operator</h3>
<ul>
<li>TFLite_Detection_PostProcess</li>
</ul>
<h2>Tested networks</h2>
<p>Arm tested these operators with the following TensorFlow Lite neural networks:</p><ul>
<li><a href="http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz">Quantized MobileNet</a></li>
<li><a href="http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18.tar.gz">Quantized SSD MobileNet</a></li>
<li>DeepSpeech v1 converted from <a href="https://github.com/mozilla/DeepSpeech/releases/tag/v0.4.1">TensorFlow model</a></li>
<li>DeepSpeaker</li>
<li><a href="https://www.tensorflow.org/lite/models/segmentation/overview">DeepLab v3+</a></li>
<li>FSRCNN</li>
<li>EfficientNet-lite</li>
<li>RDN converted from <a href="https://github.com/hengchuan/RDN-TensorFlow">TensorFlow model</a></li>
<li>Quantized RDN (CpuRef)</li>
<li><a href="http://download.tensorflow.org/models/tflite_11_05_08/inception_v3_quant.tgz">Quantized Inception v3</a></li>
<li><a href="http://download.tensorflow.org/models/inception_v4_299_quant_20181026.tgz">Quantized Inception v4</a> (CpuRef)</li>
<li>Quantized ResNet v2 50 (CpuRef)</li>
<li>Quantized Yolo v3 (CpuRef)</li>
</ul>
<p>More machine learning operators will be supported in future releases. </p>
</div></div><!-- contents -->
</div><!-- PageDoc -->
</div><!-- doc-content -->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
<ul>
<li class="navelem"><a class="el" href="swtools.html">Software Components</a></li>
<li class="footer">Generated on Wed Feb 14 2024 16:36:20 for Arm NN by
<a href="http://www.doxygen.org/index.html">
<img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.17 </li>
</ul>
</div>
</body>
</html>