  1. da37e2f COMPMID-431 Port CLDepthConvert to use 8-bit and 16-bit fixed point by steniu01 · 7 years ago
  2. 0979675 COMPMID-429: Port CLSoftmaxLayer to QS16. by Georgios Pinitas · 7 years ago
  3. 7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  4. ab0a77e COMPMID-409: Add support for QS8 and QS16 CLPixelWiseMultiplication. by Michele Di Giorgio · 7 years ago
  5. 368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point by Gian Marco Iodice · 7 years ago
  6. b93f5de COMPMID-417 - Fixed bug in gemm_interleave_16bit and gemm_interleave_32_bit due to non-representable numbers in half and float by Gian Marco Iodice · 7 years ago
  7. 8a38369 COMPMID-434 - Port CLGEMM to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  8. ac69aa1 COMPMID-418 Add check and fix comments after preprocessor conditions by Anthony Barbier · 7 years ago
  9. d7e8281 COMPMID-408 Create OpenCL complex math functions for 8 bit fixed point arithmetic. by Michalis Spyrou · 7 years ago
  10. 13edbff COMPMID-432 - Extended Convolution Layer to support rectangular kernels by Gian Marco Iodice · 7 years ago
  11. 3a3066b COMPMID-411 - Port CLGEMM to support 8 bit fixed point by Gian Marco Iodice · 7 years ago
  12. e5f8fd6 COMPMID-423: Port CLSoftmaxLayer to QS8 by Georgios Pinitas · 7 years ago
  13. b30dcc5 COMPMID-345 - In-place computation for Activation Layer by Gian Marco Iodice · 7 years ago
  14. 578ab61 COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLGEMMMatrixAccumulateBiasesKernel by Gian Marco Iodice · 7 years ago
  15. 9f89bae COMPMID-411 - Ported CLGEMMInterleave4x4Kernel and CLGEMMTranspose1xWKernel to support 8 bit fixed point by Gian Marco Iodice · 7 years ago
  16. ce09314 COMPMID-403: Add support for 7x7 pooling on CL. by Georgios Pinitas · 7 years ago
  17. 6ff3b19 COMPMID-344 Updated doxygen by Anthony Barbier · 7 years ago