review.mlplatform.org / ml / ComputeLibrary / 4bd2cb8aec96de89eb9cf652b83298bf89486bca / src / core / CL / cl_kernels
349feef COMPMID-417 - Added validation for FP16 CLBatchNormalizationLayer · by Gian Marco Iodice · 7 years ago
83be745 COMPMID-424 Implemented reference implementation and tests for WarpAffine · by Isabella Gottardi · 7 years ago
54f366a COMPMID-417: Fix CL compiler warnings · by Moritz Pflanzer · 7 years ago
4726fdf COMPMID-541: Fix padding in CLMinMaxLocationKernel · by Moritz Pflanzer · 7 years ago
cdf5145 COMPMID-515: L2 Pooling for FP32/FP16 in CL. · by Georgios Pinitas · 7 years ago
f81652d COMPMID-516 Increase tolerance rate of Scale, Conv, fully connected and GEMM · by steniu01 · 7 years ago
9fe4144 COMPMID-452 CL Generic Depthwise Convolution implementation. · by Giorgio Arena · 7 years ago
bf17955 COMPMID-522 - Added support for GlobalPooling in CLPoolingLayer and CLFlattening for 3D tensor · by Gian Marco Iodice · 7 years ago
52f8b39 COMPMID-417: Fix CLNonLinearFilter · by Georgios Pinitas · 7 years ago
5ee66ea COMPMID-462: Implement TensorReshape for NEON and CL. · by Georgios Pinitas · 7 years ago
56dd726 COMPMID-448: Implement CL Quantization/Dequantization Layer. · by Michele Di Giorgio · 7 years ago
1c8409d COMPMID-477 - Optimized CLDirectConvolution1x1 for Bifrost · by Gian Marco Iodice · 7 years ago
cfb6553 COMPMID-417 Fix ROIPooling · by SiCong Li · 7 years ago
64ebe5b COMPMID-519: Add support for Lower and Upper Bounded RELU for CL/NEON · by Georgios Pinitas · 7 years ago
1fab09f COMPMID-424 Implemented reference implementation, new output valid region and validation tests (NEON and CL) for Scale · by Isabella Gottardi · 7 years ago
04f089c COMPMID-476 L2 Normalization for CL · by Michalis Spyrou · 7 years ago
3e36369 COMPMID-358 Implement OpenCL ROI Pooling · by SiCong Li · 7 years ago
edfa9f4 COMPMID-477 - Optimized batched case in CLConvolutionLayer · by Gian Marco Iodice · 7 years ago
93a690e COMPMID-452 CL Depthwise Separable Convolution Layer kernel implementation, validation and benchmarking for 3x3xC depthwise filter and DataType::F32. · by Giorgio Arena · 7 years ago
d60a6b9 COMPMID-477 - Optimized CLNormalizationLayer · by Gian Marco Iodice · 7 years ago
0c7614f COMPMID-431 Port OpenCL pooling layer to use fixed point · by steniu01 · 7 years ago
cb29283 COMPMID-477 - Optimizing Pooling 3x3 with stride_x <= 3 on OpenCL · by Gian Marco Iodice · 7 years ago
1246b63 COMPMID-477 - Optimized Direct Convolution 3x3 and 5x5 (f32) for Bifrost. · by Gian Marco Iodice · 7 years ago
409ee0a COMPMID-417: Add in-place support for batch-normalization. · by Georgios Pinitas · 7 years ago
1e5c157 COMPMID-450 Add YOLOV2 benchmark tests · by SiCong Li · 7 years ago
db00668 COMPMID-478 Implement CL direct convolution 5x5 · by steniu01 · 7 years ago
def665a COMPMID-474 - Add support for QS8/QS16 DirectConvolution CL · by Michalis Spyrou · 7 years ago
2eac5bd COMPMID-417 - Fixed bug in CLCol2ImKernel related to the stride passed during the configuration · by Gian Marco Iodice · 7 years ago
868e541 COMPMID-459 Collapse CL Im2col's higher dimensions · by steniu01 · 7 years ago
5cb4d6a COMPMID-477 - Optimizing CLDirectConvolution 3x3 on OpenCL and added the auto configuration · by Gian Marco Iodice · 7 years ago
d8e765b COMPMID-472: Implement Floor for CL and NEON. · by Georgios Pinitas · 7 years ago
cfc6fe8 COMPMID-443 Collapse higher dimension for CL col2im kernel · by steniu01 · 7 years ago
c51b72f COMPMID-355 Implement CL DirectConvolution1x1 · by SiCong Li · 7 years ago
ef4b4ae COMPMID-438: Add support for floating point Min-Max Location layer. · by Michele Di Giorgio · 7 years ago
6c92834 COMPMID-413: Add support for QS8 and QS16 CLNormalizationLayer. · by Michele Di Giorgio · 7 years ago
f9bae2e COMPMID-417 - Bug fix WarpPerspective kernel · by Isabella Gottardi · 7 years ago
02dfb2c COMPMID-457 Fix F16 NormalizationLayer CL kernel · by SiCong Li · 7 years ago
a36ccf1 COMPMID-417: Fix CL F16 ActivationLayer · by Moritz Pflanzer · 7 years ago
3a62324 COMPMID-455 - Optimizing CLIm2ColKernel · by Gian Marco Iodice · 7 years ago
e49e266 COMPMID-415: Use half_float library for F16 · by Moritz Pflanzer · 7 years ago
27b386c COMPMID-355 Implement 3x3 CL direct convolution · by steniu01 · 7 years ago
7281834 COMPMID-446: Add support for QS8/QS16 CL Arithmetic Add/Sub · by Michele Di Giorgio · 7 years ago
172e570 COMPMID-425 Port CLBatchnormalization to support QS8/QS16 · by Michalis Spyrou · 7 years ago
579c049 COMPMID-417: Add Leaky RELU support for both NEON/CL. · by Georgios Pinitas · 7 years ago
0d523cc COMPMID-443 Change CLSoftMaxLayerKernel to use 3D tensor and collapse the higher dimension · by steniu01 · 7 years ago
00394ae COMPMID-406: Port CLActivationLayer to use QS8/QS16. · by Georgios Pinitas · 7 years ago
ac4e873 COMPMID-417: Port DepthConcatenate to QS8/QS16 for NEON/CL. · by Georgios Pinitas · 7 years ago
9a7182e COMPMID-443 Use 3D tensor for pixel multiply (needed for Normalization Layer) · by Anthony Barbier · 7 years ago
7ff47a3 COMPMID-443: Use 3D tensors for fill_border_image · by Anthony Barbier · 7 years ago
da37e2f COMPMID-431 Port CLDepthConvert to use 8-bit and 16-bit fixed point · by steniu01 · 7 years ago
0979675 COMPMID-429: Port CLSoftmaxLayer to QS16. · by Georgios Pinitas · 7 years ago
7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point · by Gian Marco Iodice · 7 years ago
ab0a77e COMPMID-409: Add support for QS8 and QS16 CLPixelWiseMultiplication. · by Michele Di Giorgio · 7 years ago
368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point · by Gian Marco Iodice · 7 years ago
b93f5de COMPMID-417 - Fixed bug in gemm_interleave_16bit and gemm_interleave_32_bit due to non-representable numbers in half and float · by Gian Marco Iodice · 7 years ago
8a38369 COMPMID-434 - Port CLGEMM to support 16 bit fixed point · by Gian Marco Iodice · 7 years ago
ac69aa1 COMPMID-418 Add check and fix comments after preprocessor conditions · by Anthony Barbier · 7 years ago
d7e8281 COMPMID-408 Create OpenCL complex math functions for 8 bit fixed point arithmetic. · by Michalis Spyrou · 7 years ago
13edbff COMPMID-432 - Extended Convolution Layer to support rectangular kernels · by Gian Marco Iodice · 7 years ago
3a3066b COMPMID-411 - Port CLGEMM to support 8 bit fixed point · by Gian Marco Iodice · 7 years ago
e5f8fd6 COMPMID-423: Port CLSoftmaxLayer to QS8 · by Georgios Pinitas · 7 years ago
b30dcc5 COMPMID-345 - In-place computation for Activation Layer · by Gian Marco Iodice · 7 years ago
578ab61 COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLGEMMMatrixAccumulateBiasesKernel · by Gian Marco Iodice · 7 years ago
9f89bae COMPMID-411 - Ported CLGEMMInterleave4x4Kernel and CLGEMMTranspose1xWKernel to support 8 bit fixed point · by Gian Marco Iodice · 7 years ago
ce09314 COMPMID-403: Add support for 7x7 pooling on CL. · by Georgios Pinitas · 7 years ago
6ff3b19 COMPMID-344 Updated doxygen · by Anthony Barbier · 7 years ago