review.mlplatform.org / ml / ComputeLibrary
Log of arm_compute/runtime/CL/functions/CLFullyConnectedLayer.h at a35980546c00ae1647ce033b061530607a5ad1e4
f464337 COMPMID-2826 Comply with DCL51-CPP (Michalis Spyrou, 4 years, 8 months ago)
44bfc3f COMPMID-1671: Allow fp mixed precision in CLFCLayer. (Georgios Pinitas, 4 years, 9 months ago)
8b72199 COMPMID-1889: Fuse bias addition and output stage in CLFCLayer. (Georgios Pinitas, 4 years, 9 months ago)
b27e13a COMPMID-2685: [CL] Use Weights manager (Michalis Spyrou, 4 years, 10 months ago)
1a569a3 COMPMID-2161 [NEON] Create IWeightManager class (Michalis Spyrou, 4 years, 10 months ago)
26014cf COMPMID-2649: Generalize MemoryGroup. (Georgios Pinitas, 4 years, 10 months ago)
ebc3a90 COMPMID-1706: Fuse the bias addition within CLGEMM (Michele Di Giorgio, 6 years ago)
ba1ffe9 COMPMID-1537: Fix weights retention in CLFullyConnectedLayer (Michele Di Giorgio, 6 years ago)
215b4ea COMPMID-1277 - Optimizing CLIm2ColKernel for NHWC. (Gian Marco Iodice, 6 years ago)
a855af1 COMPMID-1401 Implement NEFullyConnectedLayer for QASYMM8 (Giorgio Arena, 6 years ago)
7d66a8e COMPMID-1386: Add support for converting weights for CL. (Georgios Pinitas, 6 years ago)
7485d5a COMPMID-970 : Remove QS8 / QS16 support (Vidhya Sudhan Loganathan, 6 years ago)
b62280a COMPMID-1244: Allow retaining weights in CLGEMMConvolutionLayer and CLFullyConnectedLayer (Michele Di Giorgio, 6 years ago)
e043767 COMPMID-920: Introduce prepare() stage (Georgios Pinitas, 6 years ago)
c9c62c2 COMPMID-1056 - Optimizing CLGEMMMatrixMultiplyKernel refactoring the inner loop (Gian Marco Iodice, 6 years ago)
a1667fb COMPMID-959 - Fix doxygem comment in CLGEMMConvolutionLayer (Isabella Gottardi, 6 years ago)
1562be3 COMPMID-998: Release unused trainable parameters. (Georgios Pinitas, 6 years ago)
358ca20 COMPMID-617: Adds CLFullyConnectionLayer validation support (Georgios Pinitas, 7 years ago)
58c5794 COMPMID-706 - Add GEMMLowp output stage for scaling by a fixed point number (Gian Marco, 7 years ago)
45bcc3a COMPMID-661: QASYMM8 support for fully connected layer. (Georgios Pinitas, 7 years ago)
3e80c7f COMPMID-661: Optimize FC layer with 2 new Bifrost kernels and LWS tuning (#33) (Anton Lokhmotov, 7 years ago)
baf174e COMPMID-485: Memory Manager (Georgios Pinitas, 7 years ago)
edfa9f4 COMPMID-477 - Optimized batched case in CLConvolutionLayer (Gian Marco Iodice, 7 years ago)
768e9f1 COMPMID-417: Cleanup CL FullyConnectedLayer (Moritz Pflanzer, 7 years ago)
7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point (Gian Marco Iodice, 7 years ago)
368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point (Gian Marco Iodice, 7 years ago)
6ff3b19 COMPMID-344 Updated doxygen (Anthony Barbier, 7 years ago)