  1. 5cb4d6a COMPMID-477 - Optimizing CLDirectConvolution 3x3 on OpenCL and adding the auto configuration by Gian Marco Iodice · 7 years ago
  2. d8e765b COMPMID-472 : Implement Floor for CL and NEON. by Georgios Pinitas · 7 years ago
  3. cfc6fe8 COMPMID-443 collapse higher dimension for CL col2im kernel by steniu01 · 7 years ago
  4. c51b72f COMPMID-355 Implement CL DirectConvolution1x1 by SiCong Li · 7 years ago
  5. ef4b4ae COMPMID-438: Add support for floating point Min-Max Location layer. by Michele Di Giorgio · 7 years ago
  6. 6c92834 COMPMID-413: Add support for QS8 and QS16 CLNormalizationLayer. by Michele Di Giorgio · 7 years ago
  7. f9bae2e COMPMID-417 - Bug Fix WarpPerspective kernel by Isabella Gottardi · 7 years ago
  8. 02dfb2c COMPMID-457 Fix F16 NormalizationLayer CL kernel by SiCong Li · 7 years ago
  9. a36ccf1 COMPMID-417: Fix CL F16 ActivationLayer by Moritz Pflanzer · 7 years ago
  10. 3a62324 COMPMID-455 - Optimizing CLIm2ColKernel by Gian Marco Iodice · 7 years ago
  11. e49e266 COMPMID-415: Use half_float library for F16 by Moritz Pflanzer · 7 years ago
  12. 27b386c COMPMID-355 Implement 3x3 CL direct convolution by steniu01 · 7 years ago
  13. 7281834 COMPMID-446: Add support for QS8/QS16 CL Arithmetic Add/Sub by Michele Di Giorgio · 7 years ago
  14. 172e570 COMPMID-425 Port CLBatchnormalization to support QS8/QS16 by Michalis Spyrou · 7 years ago
  15. 579c049 COMPMID-417: Add Leaky RELU support for both NEON/CL. by Georgios Pinitas · 7 years ago
  16. 0d523cc COMPMID-443 Change CLSoftMaxLayerKernel to use 3D tensor and collapse the higher dimension by steniu01 · 7 years ago
  17. 00394ae COMPMID-406: Port CLActivationLayer to use QS8/QS16. by Georgios Pinitas · 7 years ago
  18. ac4e873 COMPMID-417: Port DepthConcatenate to QS8/QS16 for NEON/CL. by Georgios Pinitas · 7 years ago
  19. 9a7182e COMPMID-443 Use 3D tensor for pixel multiply (Needed for Normalization Layer) by Anthony Barbier · 7 years ago
  20. 7ff47a3 COMPMID-443: Use 3D tensors for fill_border_image by Anthony Barbier · 7 years ago
  21. da37e2f COMPMID-431 Port CLDepthConvert to use 8-bit and 16-bit fixed point by steniu01 · 7 years ago
  22. 0979675 COMPMID-429: Port CLSoftmaxLayer to QS16. by Georgios Pinitas · 7 years ago
  23. 7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  24. ab0a77e COMPMID-409: Add support for QS8 and QS16 CLPixelWiseMultiplication. by Michele Di Giorgio · 7 years ago
  25. 368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point by Gian Marco Iodice · 7 years ago
  26. b93f5de COMPMID-417 - Fixed bug in gemm_interleave_16bit and gemm_interleave_32_bit due to non-representable numbers in half and float by Gian Marco Iodice · 7 years ago
  27. 8a38369 COMPMID-434 - Port CLGEMM to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  28. ac69aa1 COMPMID-418 Add check and fix comments after preprocessor conditions by Anthony Barbier · 7 years ago
  29. d7e8281 COMPMID-408 Create OpenCL complex math functions for 8 bit fixed point arithmetic. by Michalis Spyrou · 7 years ago
  30. 13edbff COMPMID-432 - Extended Convolution Layer to support rectangular kernels by Gian Marco Iodice · 7 years ago
  31. 3a3066b COMPMID-411 - Port CLGEMM to support 8 bit fixed point by Gian Marco Iodice · 7 years ago
  32. e5f8fd6 COMPMID-423: Port CLSoftmaxLayer to QS8 by Georgios Pinitas · 7 years ago
  33. b30dcc5 COMPMID-345 - In-place computation for Activation Layer by Gian Marco Iodice · 7 years ago
  34. 578ab61 COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLGEMMMatrixAccumulateBiasesKernel by Gian Marco Iodice · 7 years ago
  35. 9f89bae COMPMID-411 - Ported CLGEMMInterleave4x4Kernel and CLGEMMTranspose1xWKernel to support 8 bit fixed point by Gian Marco Iodice · 7 years ago
  36. ce09314 COMPMID-403: Add support for 7x7 pooling on CL. by Georgios Pinitas · 7 years ago
  37. 6ff3b19 COMPMID-344 Updated doxygen by Anthony Barbier · 7 years ago