1. 8481d83 COMPMID-2753: Add support for QASYMM8_SIGNED in CL kernels/functions by Manuel Bottini · 4 years, 7 months ago
2. f464337 COMPMID-2826 Comply with DCL51-CPP by Michalis Spyrou · 4 years, 7 months ago
3. 44bfc3f COMPMID-1671: Allow fp mixed precision in CLFCLayer. by Georgios Pinitas · 4 years, 8 months ago
4. 8b72199 COMPMID-1889: Fuse bias addition and output stage in CLFCLayer. by Georgios Pinitas · 4 years, 8 months ago
5. b27e13a COMPMID-2685: [CL] Use Weights manager by Michalis Spyrou · 4 years, 9 months ago
6. 1a569a3 COMPMID-2161 [NEON] Create IWeightManager class by Michalis Spyrou · 4 years, 10 months ago
7. 26014cf COMPMID-2649: Generalize MemoryGroup. by Georgios Pinitas · 4 years, 10 months ago
8. ebc3a90 COMPMID-1706: Fuse the bias addition within CLGEMM by Michele Di Giorgio · 6 years ago
9. ba1ffe9 COMPMID-1537: Fix weights retention in CLFullyConnectedLayer by Michele Di Giorgio · 6 years ago
10. 215b4ea COMPMID-1277 - Optimizing CLIm2ColKernel for NHWC. by Gian Marco Iodice · 6 years ago
11. a855af1 COMPMID-1401 Implement NEFullyConnectedLayer for QASYMM8 by Giorgio Arena · 6 years ago
12. 7d66a8e COMPMID-1386: Add support for converting weights for CL. by Georgios Pinitas · 6 years ago
13. 7485d5a COMPMID-970 : Remove QS8 / QS16 support by Vidhya Sudhan Loganathan · 6 years ago
14. b62280a COMPMID-1244: Allow retaining weights in CLGEMMConvolutionLayer and CLFullyConnectedLayer by Michele Di Giorgio · 6 years ago
15. e043767 COMPMID-920: Introduce prepare() stage by Georgios Pinitas · 6 years ago
16. c9c62c2 COMPMID-1056 - Optimizing CLGEMMMatrixMultiplyKernel refactoring the inner loop by Gian Marco Iodice · 6 years ago
17. a1667fb COMPMID-959 - Fix doxygen comment in CLGEMMConvolutionLayer by Isabella Gottardi · 6 years ago
18. 1562be3 COMPMID-998: Release unused trainable parameters. by Georgios Pinitas · 6 years ago
19. 358ca20 COMPMID-617: Adds CLFullyConnectionLayer validation support by Georgios Pinitas · 7 years ago
20. 58c5794 COMPMID-706 - Add GEMMLowp output stage for scaling by a fixed point number by Gian Marco · 7 years ago
21. 45bcc3a COMPMID-661: QASYMM8 support for fully connected layer. by Georgios Pinitas · 7 years ago
22. 3e80c7f COMPMID-661: Optimize FC layer with 2 new Bifrost kernels and LWS tuning (#33) by Anton Lokhmotov · 7 years ago
23. baf174e COMPMID-485: Memory Manager by Georgios Pinitas · 7 years ago
24. edfa9f4 COMPMID-477 - Optimized batched case in CLConvolutionLayer by Gian Marco Iodice · 7 years ago
25. 768e9f1 COMPMID-417: Cleanup CL FullyConnectedLayer by Moritz Pflanzer · 7 years ago
26. 7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
27. 368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point by Gian Marco Iodice · 7 years ago
28. 6ff3b19 COMPMID-344 Updated doxygen by Anthony Barbier · 7 years ago