  1. 7485d5a COMPMID-970 : Remove QS8 / QS16 support by Vidhya Sudhan Loganathan · 6 years ago
  2. b62280a COMPMID-1244: Allow retaining weights in CLGEMMConvolutionLayer and CLFullyConnectedLayer by Michele Di Giorgio · 6 years ago
  3. e043767 COMPMID-920: Introduce prepare() stage by Georgios Pinitas · 6 years ago
  4. c9c62c2 COMPMID-1056 - Optimizing CLGEMMMatrixMultiplyKernel refactoring the inner loop by Gian Marco Iodice · 6 years ago
  5. a1667fb COMPMID-959 - Fix doxygen comment in CLGEMMConvolutionLayer by Isabella Gottardi · 6 years ago
  6. 1562be3 COMPMID-998: Release unused trainable parameters. by Georgios Pinitas · 6 years ago
  7. 358ca20 COMPMID-617: Adds CLFullyConnectedLayer validation support by Georgios Pinitas · 7 years ago
  8. 58c5794 COMPMID-706 - Add GEMMLowp output stage for scaling by a fixed point number by Gian Marco · 7 years ago
  9. 45bcc3a COMPMID-661: QASYMM8 support for fully connected layer. by Georgios Pinitas · 7 years ago
  10. 3e80c7f COMPMID-661: Optimize FC layer with 2 new Bifrost kernels and LWS tuning (#33) by Anton Lokhmotov · 7 years ago
  11. baf174e COMPMID-485: Memory Manager by Georgios Pinitas · 7 years ago
  12. edfa9f4 COMPMID-477 - Optimized batched case in CLConvolutionLayer by Gian Marco Iodice · 7 years ago
  13. 768e9f1 COMPMID-417: Cleanup CL FullyConnectedLayer by Moritz Pflanzer · 7 years ago
  14. 7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  15. 368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point by Gian Marco Iodice · 7 years ago
  16. 6ff3b19 COMPMID-344 Updated doxygen by Anthony Barbier · 7 years ago
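
Several of the changes above altered how these runtime functions are driven: COMPMID-617 added a static validate() entry point, COMPMID-920 introduced the explicit prepare() stage, and COMPMID-998/COMPMID-1244 govern whether reshaped weights are released or retained afterwards. The sketch below shows that usage pattern for CLFullyConnectedLayer; the 128-input/64-output F32 shapes and weight layout are illustrative assumptions, not values taken from these commits.

```cpp
// Minimal sketch of the validate()/configure()/prepare()/run() flow.
// Shapes, data types and the weight layout are assumptions for illustration.
#include "arm_compute/runtime/CL/CLFunctions.h"
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"

using namespace arm_compute;

int main()
{
    CLScheduler::get().default_init(); // create CL context and queue

    // Assumed 128-input / 64-output fully connected layer in F32.
    CLTensor src, weights, bias, dst;
    src.allocator()->init(TensorInfo(TensorShape(128U), 1, DataType::F32));
    weights.allocator()->init(TensorInfo(TensorShape(128U, 64U), 1, DataType::F32));
    bias.allocator()->init(TensorInfo(TensorShape(64U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(64U), 1, DataType::F32));

    // Static shape/type validation before configuring (COMPMID-617).
    Status status = CLFullyConnectedLayer::validate(src.info(), weights.info(), bias.info(), dst.info());
    if(status.error_code() != ErrorCode::OK)
    {
        return 1;
    }

    CLFullyConnectedLayer fc;
    fc.configure(&src, &weights, &bias, &dst);

    src.allocator()->allocate();
    weights.allocator()->allocate();
    bias.allocator()->allocate();
    dst.allocator()->allocate();

    // ... fill src, weights and bias here ...

    fc.prepare(); // one-off weight reshaping (COMPMID-920); unused originals may then be freed (COMPMID-998)
    fc.run();
    CLScheduler::get().sync();
    return 0;
}
```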