review.mlplatform.org / ml / ComputeLibrary / aa29fdef732fa2e56943b982a8643d579d2f3714 / arm_compute / core
a261181  COMPMID-417 NEON/CL MeanStdDev bugfix using FillBorderKernel (Giorgio Arena, 7 years ago)
c51b72f  COMPMID-355 Implement CL DirectConvolution1x1 (SiCong Li, 7 years ago)
ef4b4ae  COMPMID-438: Add support for floating point Min-Max Location layer. (Michele Di Giorgio, 7 years ago)
f87cc7f  COMPMID-417: Port NEDirectConvolution 1x1 to QS16. (Pablo Tello, 7 years ago)
6c92834  COMPMID-413: Add support for QS8 and QS16 CLNormalizationLayer. (Michele Di Giorgio, 7 years ago)
d5e65c7  COMPMID-456: Add support for QS16 NEON Normalization Layer. (Michele Di Giorgio, 7 years ago)
f9bae2e  COMPMID-417 - Bug Fix WarpPerspective kernel (Isabella Gottardi, 7 years ago)
8fda1cb  COMPMID-421: Added FP16 support in BatchNormalizationLayer. (Pablo Tello, 7 years ago)
afde732  COMPMID-421: Added FP16 support in the Neon Locally Connected Layer. (Pablo Tello, 7 years ago)
27b386c  COMPMID-355 Implement 3x3 CL direct convolution (steniu01, 7 years ago)
b49a715  COMPMID-421: Added FP16 support to Softmax. (Pablo Tello, 7 years ago)
0c34fe2  COMPMID-421: Added FP16 support in Pooling Layer (Pablo Tello, 7 years ago)
d929b9c  COMPMID-417: Enable CPU target selection (Moritz Pflanzer, 7 years ago)
725788e  COMPMID-417: Allow loading of custom OpenCL library (Moritz Pflanzer, 7 years ago)
91654c4  COMPMID-421: Added FP16 support in ActivationLayer. (Pablo Tello, 7 years ago)
04b7412  COMPMID-421: Update NEArithmeticAdditionKernel documentation. (Pablo Tello, 7 years ago)
b6c8d24  COMPMID-415: Use templates for data arguments (Moritz Pflanzer, 7 years ago)
d7a5d22  COMPMID-421: Added FP16 support to Arithmetic Subtraction. (Pablo Tello, 7 years ago)
7281834  COMPMID-446: Add support for QS8/QS16 CL Arithmetic Add/Sub (Michele Di Giorgio, 7 years ago)
e222941  COMPMID-401: Implement FixedPointPosition conversion for NEON. (Georgios Pinitas, 7 years ago)
bbd3d60  COMPMID-410 Port BatchNormalization to use fixed point 16 (Michalis Spyrou, 7 years ago)
172e570  COMPMID-425 Port CLBatchnormalization to support QS8/QS16 (Michalis Spyrou, 7 years ago)
579c049  COMPMID-417: Add Leaky RELU support for both NEON/CL. (Georgios Pinitas, 7 years ago)
81f0d15  COMPMID-444: Add support for QS8/QS16 NEON Arithmetic Add/Sub/Mul. (Michele Di Giorgio, 7 years ago)
f70256b  COMPMID-443 Collapse higher dimension for pooling layer and normalization layer (steniu01, 7 years ago)
0d523cc  COMPMID-443 Change CLSoftMaxLayerKernel to use 3D tensor and collapse the higer dimension (steniu01, 7 years ago)
00394ae  COMPMID-406: Port CLActivationLayer to use QS8/QS16. (Georgios Pinitas, 7 years ago)
ac4e873  COMPMID-417: Port DepthConcatenate to QS8/QS16 for NEON/CL. (Georgios Pinitas, 7 years ago)
df24618  COMPMID-421: Added FP16 suppot to NENormalizationLayer and NEPixelWiseMultiplication. (Pablo Tello, 7 years ago)
d1b0ecc  COMPMID-421: Added FP16 support to Arithmetic Addition. (Pablo Tello, 7 years ago)
be35b0e  COMPMID-443: Collapse higher dimensions for activation layer (Anthony Barbier, 7 years ago)
da37e2f  COMPMID-431 Port CLDepthConvert to use 8-bit and 16-bit fixed point (steniu01, 7 years ago)
3470247  COMPMID-417 Checking CL non uniform support at runtime. (steniu01, 7 years ago)
9247c92  COMPMID-428: Port NESoftmaxLayer to 16-bit fixed point. (Georgios Pinitas, 7 years ago)
0979675  COMPMID-429: Port CLSoftmaxLayer to QS16. (Georgios Pinitas, 7 years ago)
7d323a6  COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point (Gian Marco Iodice, 7 years ago)
ab0a77e  COMPMID-409: Add support for QS8 and QS16 CLPixelWiseMultiplication. (Michele Di Giorgio, 7 years ago)
ccc65d4  COMPMID-427: Port NEActivationLayer in 16bit fixed point. (Georgios Pinitas, 7 years ago)
0745a98  COMPMID-417: Fix assert in GEMMTranspose (Moritz Pflanzer, 7 years ago)
21efeb4  COMPMID-417: DepthConvert NEON for QS8/QS16. (Georgios Pinitas, 7 years ago)
368da83  COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point (Gian Marco Iodice, 7 years ago)
1b31afc  COMPMID-417: Fix Dimensions::collapse (Moritz Pflanzer, 7 years ago)
2bbd964  COMPMID-436, COMPMID-437 - Port NEConvolutionLayer & NEFullyConnectedLayer to support 16 bit fixed point (Gian Marco Iodice, 7 years ago)
8a38369  COMPMID-434 - Port CLGEMM to support 16 bit fixed point (Gian Marco Iodice, 7 years ago)
bdb6b0b  COMPMID-433 - Port NEGEMM to support 16 bit fixed point (Gian Marco Iodice, 7 years ago)
ac69aa1  COMPMID-418 Add check and fix comments after preprocessor conditions (Anthony Barbier, 7 years ago)
05da6dd  COMPMID-417: Remove val_to_string (Moritz Pflanzer, 7 years ago)
d7e8281  COMPMID-408 Create OpenCL complex math functions for 8 bit fixed point arithmetic. (Michalis Spyrou, 7 years ago)
13edbff  COMPMID-432 - Extended Convolution Layer to support rectangular kernels (Gian Marco Iodice, 7 years ago)
3a3066b  COMPMID-411 - Port CLGEMM to support 8 bit fixed point (Gian Marco Iodice, 7 years ago)
7b7858d  COMPMID-359: Implement NEON ROIPoolingLayer (Georgios Pinitas, 7 years ago)
d0ae8b8  COMPMID-417: Extract common toolchain support file (Moritz Pflanzer, 7 years ago)
5cb4c42  COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLWeightsReshapeKernel (Gian Marco Iodice, 7 years ago)
0a8334c  COMPMID-400 Add support for 16 bit fixed point arithmetic. (Michalis Spyrou, 7 years ago)
e5f8fd6  COMPMID-423: Port CLSoftmaxLayer to QS8 (Georgios Pinitas, 7 years ago)
4e28869  COMPMID-417 - Adding support for rectangular kernels (Gian Marco Iodice, 7 years ago)
0326eae  COMPMID-417: Check for nullptr in mismatching data types check. (Georgios Pinitas, 7 years ago)
8af2dd6  COMPMID-403: Add 7x7 NEON Pooling support. (Michele Di Giorgio, 7 years ago)
ee12254  COMPMID-417: Fix negative overflowing in NESoftmaxLayer. (Georgios Pinitas, 7 years ago)
659abc0  COMPMID-421: Added FP16 support in Convolutional layer (Neon) (Pablo Tello, 7 years ago)
b30dcc5  COMPMID-345 - In-place computation for Activation Layer (Gian Marco Iodice, 7 years ago)
578ab61  COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLGEMMMatrixAccumulateBiasesKernel (Gian Marco Iodice, 7 years ago)
ec8b45e  COMPMID-345 - Auto-inizialization for NECol2ImKernel and NEGEMMInterleave4x4Kernel (Gian Marco Iodice, 7 years ago)
9f89bae  COMPMID-411 - Ported CLGEMMInterleave4x4Kernel and CLGEMMTranspose1xWKernel to support 8 bit fixed point (Gian Marco Iodice, 7 years ago)
ce09314  COMPMID-403: Add support for 7x7 pooling on CL. (Georgios Pinitas, 7 years ago)
4c2938e  COMPMID-315 Fix NEMinMaxLocation bug (steniu01, 7 years ago)
6ff3b19  COMPMID-344 Updated doxygen (Anthony Barbier, 7 years ago)