review.mlplatform.org/ml/ComputeLibrary/b797fa235f714440ffa7a2ad4eef7ae14ee45da4/src/runtime/NEON
ac4e873  COMPMID-417: Port DepthConcatenate to QS8/QS16 for NEON/CL. (Georgios Pinitas · 7 years ago)
9247c92  COMPMID-428: Port NESoftmaxLayer to 16-bit fixed point. (Georgios Pinitas · 7 years ago)
dcdc85e  COMPMID-421: Added F16 support in FC Layer. (Pablo Tello · 7 years ago)
d368df3  COMPMID-417: Auto initialize for SoftmaxLayer NEON/CL. (Georgios Pinitas · 7 years ago)
21efeb4  COMPMID-417: DepthConvert NEON for QS8/QS16. (Georgios Pinitas · 7 years ago)
2bbd964  COMPMID-436, COMPMID-437 - Port NEConvolutionLayer & NEFullyConnectedLayer to support 16 bit fixed point (Gian Marco Iodice · 7 years ago)
bdb6b0b  COMPMID-433 - Port NEGEMM to support 16 bit fixed point (Gian Marco Iodice · 7 years ago)
ac69aa1  COMPMID-418 Add check and fix comments after preprocessor conditions (Anthony Barbier · 7 years ago)
13edbff  COMPMID-432 - Extended Convolution Layer to support rectangular kernels (Gian Marco Iodice · 7 years ago)
3a3066b  COMPMID-411 - Port CLGEMM to support 8 bit fixed point (Gian Marco Iodice · 7 years ago)
7b7858d  COMPMID-359: Implement NEON ROIPoolingLayer (Georgios Pinitas · 7 years ago)
d0ae8b8  COMPMID-417: Extract common toolchain support file (Moritz Pflanzer · 7 years ago)
4e28869  COMPMID-417 - Adding support for rectangular kernels (Gian Marco Iodice · 7 years ago)
3eb263e  COMPMID-424 Add validation tests for Gaussian5x5 (SiCong Li · 7 years ago)
659abc0  COMPMID-421: Added FP16 support in Convolutional layer (Neon) (Pablo Tello · 7 years ago)
b30dcc5  COMPMID-345 - In-place computation for Activation Layer (Gian Marco Iodice · 7 years ago)
960b084  COMPMID-250 make BorderSize explicit and fix float value validation bug (steniu01 · 7 years ago)
6ff3b19  COMPMID-344 Updated doxygen (Anthony Barbier · 7 years ago)