  1. 70d33bd COMPMID-2755: update CLConvolutionLayer's doxygen and test for QASYMM8_SIGNED by Sang-Hoon Park · 4 years, 6 months ago
  2. f464337 COMPMID-2826 Comply with DCL51-CPP by Michalis Spyrou · 4 years, 7 months ago
  3. 951b8a4 COMPMID-2309 : CLConvolutionLayer: support QUANT8_SYMM_PER_CHANNEL filters by Vidhya Sudhan Loganathan · 4 years, 8 months ago
  4. 8f309ab COMPMID-2466: Improved ConvLayer documentation. by Pablo Tello · 5 years ago
  5. 8ec0bb6 COMPMID-2117 : Use FFT convolution if output feature map depth is less than input by Vidhya Sudhan Loganathan · 5 years ago
  6. 916d1bc COMPMID-1498 - Enable grouping in CLGEMMConvolutionLayer by Gian Marco Iodice · 6 years ago
  7. 7485d5a COMPMID-970 : Remove QS8 / QS16 support by Vidhya Sudhan Loganathan · 6 years ago
  8. 70ba7d6 COMPMID-1257: Allow retaining weights in CLDeconvolutionLayer by Michele Di Giorgio · 6 years ago
  9. e043767 COMPMID-920: Introduce prepare() stage by Georgios Pinitas · 6 years ago
  10. 2213d4b COMPMID-1096 - Add fast_math flag to CLConvolutionLayer by Gian Marco Iodice · 6 years ago
  11. e52a300 COMPMID-1026 - Add support for 4x4 output tile in CLWinogradConvolutionLayer by Gian Marco Iodice · 6 years ago
  12. 3f217ec COMPMID-908 - Merge Activation layer with Convolution Layer (NEON, CL, GLES) by Isabella Gottardi · 6 years ago
  13. 7da29b6 COMPMID-1017: Implement dilated convolution in NEON, OpenCL, and GC by Alex Gilday · 6 years ago
  14. f07d28d COMPMID-845: Create a ConvolutionLayer for CL by Isabella Gottardi · 6 years ago
  15. 20d7848 COMPMID-816 - Enabled CLConvolutionLayer to use CLGEMM function instead by Gian Marco · 6 years ago
  16. 1d25ed5 COMPMID-759 - CLGEMM optimization for McVail benchmarks by Gian Marco · 7 years ago
  17. 3972528 COMPMID-556 Fix NEON/CL copy-paste documentation mistakes by Giorgio Arena · 7 years ago
  18. 5124be5 COMPMID-661: Convolution quantized (#32) by Chunosov · 7 years ago
  19. baf174e COMPMID-485: Memory Manager by Georgios Pinitas · 7 years ago
  20. 409ee0a COMPMID-417: Add in-place support for batch-normalization. by Georgios Pinitas · 7 years ago
  21. 7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  22. 368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point by Gian Marco Iodice · 7 years ago
  23. 5cb4c42 COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLWeightsReshapeKernel by Gian Marco Iodice · 7 years ago
  24. 6ff3b19 COMPMID-344 Updated doxygen by Anthony Barbier · 7 years ago