  1. e043767 COMPMID-920: Introduce prepare() stage by Georgios Pinitas · 6 years ago
  2. 2213d4b COMPMID-1096 - Add fast_math flag to CLConvolutionLayer by Gian Marco Iodice · 6 years ago
  3. e52a300 COMPMID-1026 - Add support for 4x4 output tile in CLWinogradConvolutionLayer by Gian Marco Iodice · 6 years ago
  4. 3f217ec COMPMID-908 - Merge Activation layer with Convolution Layer (NEON, CL, GLES) by Isabella Gottardi · 6 years ago
  5. 7da29b6 COMPMID-1017: Implement dilated convolution in NEON, OpenCL, and GC by Alex Gilday · 6 years ago
  6. f07d28d COMPMID-845: Create a ConvolutionLayer for CL by Isabella Gottardi · 6 years ago
  7. 20d7848 COMPMID-816 - Enabled CLConvolutionLayer to use CLGEMM function instead by Gian Marco · 6 years ago
  8. 1d25ed5 COMPMID-759 - CLGEMM optimization for McVail benchmarks by Gian Marco · 7 years ago
  9. 3972528 COMPMID-556 Fix NEON/CL copy-paste documentation mistakes by Giorgio Arena · 7 years ago
  10. 5124be5 COMPMID-661: Convolution quantized (#32) by Chunosov · 7 years ago
  11. baf174e COMPMID-485: Memory Manager by Georgios Pinitas · 7 years ago
  12. 409ee0a COMPMID-417: Add in-place support for batch-normalization. by Georgios Pinitas · 7 years ago
  13. 7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  14. 368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point by Gian Marco Iodice · 7 years ago
  15. 5cb4c42 COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLWeightsReshapeKernel by Gian Marco Iodice · 7 years ago
  16. 6ff3b19 COMPMID-344 Updated doxygen by Anthony Barbier · 7 years ago