  1. c9c62c2 COMPMID-1056 - Optimizing CLGEMMMatrixMultiplyKernel refactoring the inner loop by Gian Marco Iodice · 6 years ago
  2. 156fcf3 COMPMID-802 Add NHWC data format support for NEON im2col. by Giorgio Arena · 6 years ago
  3. 1562be3 COMPMID-998: Release unused trainable parameters. by Georgios Pinitas · 6 years ago
  4. b4e3e1c COMPMID-617: Add validate support for NEON FullyConnectedLayer by Ioan-Cristian Szabo · 7 years ago
  5. 36a0a46 COMPMID-748 - Integrating optimized SGEMM for bifrost by Gian Marco · 6 years ago
  6. 358ca20 COMPMID-617: Adds CLFullyConnectionLayer validation support by Georgios Pinitas · 7 years ago
  7. 5124be5 COMPMID-661: Convolution quantized (#32) by Chunosov · 7 years ago
  8. 58c5794 COMPMID-706 - Add GEMMLowp output stage for scaling by a fixed point number by Gian Marco · 7 years ago
  9. 45bcc3a COMPMID-661: QASYMM8 support for fully connected layer. by Georgios Pinitas · 7 years ago
  10. 3e80c7f COMPMID-661: Optimize FC layer with 2 new Bifrost kernels and LWS tuning (#33) by Anton Lokhmotov · 7 years ago
  11. 96880cf COMPMID-640: FullyConnectedLayer failures on both NEON/CL by Georgios Pinitas · 7 years ago
  12. baf174e COMPMID-485: Memory Manager by Georgios Pinitas · 7 years ago
  13. edfa9f4 COMPMID-477 - Optimized batched case in CLConvolutionLayer by Gian Marco Iodice · 7 years ago
  14. 768e9f1 COMPMID-417: Cleanup CL FullyConnectedLayer by Moritz Pflanzer · 7 years ago
  15. 7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  16. 368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point by Gian Marco Iodice · 7 years ago
  17. 13edbff COMPMID-432 - Extended Convolution Layer to support rectangular kernels by Gian Marco Iodice · 7 years ago
  18. 6ff3b19 COMPMID-344 Updated doxygen by Anthony Barbier · 7 years ago