review.mlplatform.org / ml / ComputeLibrary / 35fcc4345b6468139a7199a48f75f70d19ea0d31 / src/core/NEON/kernels/NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel.cpp
1856ff7  COMPMID-3097 Fuse activation with fully connected layer CL (Giorgio Arena, 4 years, 6 months ago)
f29d1b7  COMPMID-2608: Enable quantization with multiplier greater than 1 on NEON (Michele Di Giorgio, 4 years, 9 months ago)
a4f378d  COMPMID-1995: Fix clang-tidy warnings (Michalis Spyrou, 5 years ago)
2d7e683  COMPMID-1694: Fuse offset contribution with the output stage when we use NEGEMMLowpMatrixMultiplyCore (George Wort, 5 years ago)
5a59453  COMPMID-1809: Remove padding in NEGEMMConvolutionLayer 64-bit path. (Georgios Pinitas, 6 years ago)
bb081ca  COMPMID-1751: Remove output_3d_depth from NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint (Georgios Pinitas, 6 years ago)
932491f  COMPMID-1519: Add support for 3D input/output in CLGEMMLowpOutputStage (Georgios Pinitas, 6 years ago)
041f36d  COMPMID-1446: Add support for 3D output in NEGEMMLowpOutputStage (Georgios Pinitas, 6 years ago)
f72f936  COMPMID-791: Adds support of QASYMM8 in NEDepthwiseConvolution3x3 (Georgios Pinitas, 7 years ago)
7f0f790  COMPMID-731 - Remove padding requirements for NEGEMMLowpOutputStage (Gian Marco, 7 years ago)
631c41a  COMPMID-556: Rename Error to Status and inverse logic (Georgios Pinitas, 7 years ago)
5124be5  COMPMID-661: Convolution quantized (#32) (Chunosov, 7 years ago)
58c5794  COMPMID-706 - Add GEMMLowp output stage for scaling by a fixed point number (Gian Marco, 7 years ago)