review.mlplatform.org/ml/ComputeLibrary/19835e591cb0b66a0f5000ae1505bf299e50337d/src/core/CL/cl_kernels/gemm.cl
19835e5  COMPMID-882 - Optimizing GEMMLowp on OpenCL reshaping matrices (Gian Marco, 6 years ago)
36a0a46  COMPMID-748 - Integrating optimized SGEMM for bifrost (Gian Marco, 7 years ago)
05288a2  COMPMID-697 - Rework GEMMLowp interface on OpenCL (Gian Marco, 7 years ago)
3e80c7f  COMPMID-661: Optimize FC layer with 2 new Bifrost kernels and LWS tuning (#33) (Anton Lokhmotov, 7 years ago)
6f31f8c  Allow running without cl_khr_fp16 (Matthew Bentham, 7 years ago)
96880cf  COMPMID-640: FullyConnectedLayer failures on both NEON/CL (Georgios Pinitas, 7 years ago)
edfa9f4  COMPMID-477 - Optimized batched case in CLConvolutionLayer (Gian Marco Iodice, 7 years ago)
e49e266  COMPMID-415: Use half_float library for F16 (Moritz Pflanzer, 7 years ago)
368da83  COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point (Gian Marco Iodice, 7 years ago)
b93f5de  COMPMID-417 - Fixed bug in gemm_interleave_16bit and gemm_interleave_32_bit due to non-representable numbers in half and float (Gian Marco Iodice, 7 years ago)
8a38369  COMPMID-434 - Port CLGEMM to support 16 bit fixed point (Gian Marco Iodice, 7 years ago)
ac69aa1  COMPMID-418 Add check and fix comments after preprocessor conditions (Anthony Barbier, 7 years ago)
3a3066b  COMPMID-411 - Port CLGEMM to support 8 bit fixed point (Gian Marco Iodice, 7 years ago)
578ab61  COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLGEMMMatrixAccumulateBiasesKernel (Gian Marco Iodice, 7 years ago)
9f89bae  COMPMID-411 - Ported CLGEMMInterleave4x4Kernel and CLGEMMTranspose1xWKernel to support 8 bit fixed point (Gian Marco Iodice, 7 years ago)
6ff3b19  COMPMID-344 Updated doxygen (Anthony Barbier, 7 years ago)