  1. 58c5794 COMPMID-706 - Add GEMMLowp output stage for scaling by a fixed point number by Gian Marco · 7 years ago
  2. 05288a2 COMPMID-697 - Rework GEMMLowp interface on OpenCL by Gian Marco · 7 years ago
  3. c827990 COMPMID-556 - Add QASYMM8 support for missing OpenCL kernels by Gian Marco · 7 years ago
  4. 857e32c COMPMID-699 - Add NEON functions for im2col and col2im by Gian Marco · 7 years ago
  5. 02bf80d COMPMID-661: Fix scale border issue (#38) by Daniil Efremov · 7 years ago
  6. 30902ed COMPMID-617: Add validation methods to ML CL functions. by Georgios Pinitas · 7 years ago
  7. d7295b7 COMPMID-661: Add QASYMM8 support (and basic tests) to CLDepthwiseConvolution3x3 kernel (#28) by Dmitry Savenko · 7 years ago
  8. 540d008 COMPMID-556: Fixes bias in CLDirectConvolutionLayer to be int32. by Georgios Pinitas · 7 years ago
  9. 82afedf COMPMID-678 Align Convolution Interfaces by Giorgio Arena · 7 years ago
  10. f450caa COMPMID-661: softmax-uint8 implementation (#16) by Chunosov · 7 years ago
  11. 7068f99 COMPMID-631: Merge branches/gles_compute branch by Anthony Barbier · 7 years ago
  12. eed841c COMPMID-661: Add QAsymm8 for Reshape (#29) by Daniil Efremov · 7 years ago
  13. af6204c COMPMID-661: Add avgpool-uint8 support. Optimize avgpool-fp32 for Bifrost. (#13) by Anton Lokhmotov · 7 years ago
  14. f9d3a0a COMPMID-617: Add validation functions. by Georgios Pinitas · 7 years ago
  15. d6afedc COMPMID-661: softmax-fp32 optimisation (#14) by Chunosov · 7 years ago
  16. d621bca COMPMID-661: directconv-uint8 (#20) by Chunosov · 7 years ago
  17. 388d3ec COMPMID-556: Support beta for all softmax data types. by Georgios Pinitas · 7 years ago
  18. 3faea25 COMPMID-617: Adds validation to CLPoolingLayer by Georgios Pinitas · 7 years ago
  19. 16cdf89 IVGCVSW-657 : fix asymmetric padding for 3x3 depthwise conv by Jaroslaw Rzepecki · 7 years ago
  20. 81a26ad COMPMID-643: Add bias to CLDepthwiseConvolution. by Georgios Pinitas · 7 years ago
  21. 48a60f9 IVGCVSW-632 CL support for Softmax beta parameter by Pablo Palmier · 7 years ago
  22. 9fe4144 COMPMID-452 CL Generic Depthwise Convolution implementation. by Giorgio Arena · 7 years ago
  23. f670a0a COMPMID-417 - Added arm_compute comment at the end of CL header files by Gian Marco Iodice · 7 years ago
  24. bf17955 COMPMID-522 - Added support for GlobalPooling in CLPoolingLayer and CLFlattening for 3D tensor by Gian Marco Iodice · 7 years ago
  25. 583137c COMPMID-417: Add support for floats in scale. by Georgios Pinitas · 7 years ago
  26. 5ee66ea COMPMID-462: Implement TensorReshape for NEON and CL. by Georgios Pinitas · 7 years ago
  27. 56dd726 COMPMID-448: Implement CL Quantization/Dequantization Layer. by Michele Di Giorgio · 7 years ago
  28. 906443f COMPMID-415 - Fixed bug in CLDepthConcatenateKernel by Gian Marco Iodice · 7 years ago
  29. 04f089c COMPMID-476 L2 Normalization for CL by Michalis Spyrou · 7 years ago
  30. 3e36369 COMPMID-358 Implement OpenCL ROI Pooling by SiCong Li · 7 years ago
  31. edfa9f4 COMPMID-477 - Optimized batched case in CLConvolutionLayer by Gian Marco Iodice · 7 years ago
  32. 93a690e COMPMID-452 CL Depthwise Separable Convolution Layer kernel implementation, validation and benchmarking for 3x3xC depthwise filter and DataType::F32. by Giorgio Arena · 7 years ago
  33. d60a6b9 COMPMID-477 - Optimized CLNormalizationLayer by Gian Marco Iodice · 7 years ago
  34. 0c7614f COMPMID-431 Port OpenCL pooling layer to use fixed point by steniu01 · 7 years ago
  35. cb29283 COMPMID-477 - Optimizing Pooling 3x3 with stride_x <= 3 on OpenCL by Gian Marco Iodice · 7 years ago
  36. 1246b63 COMPMID-477 - Optimized Direct Convolution 3x3 and 5x5 (f32) for Bifrost. by Gian Marco Iodice · 7 years ago
  37. 409ee0a COMPMID-417: Add in-place support for batch-normalization. by Georgios Pinitas · 7 years ago
  38. db00668 COMPMID-478 Implement CL direct convolution 5x5 by steniu01 · 7 years ago
  39. 5cb4d6a COMPMID-477 - Optimizing CLDirectConvolution 3x3 on OpenCL and added the auto configuration by Gian Marco Iodice · 7 years ago
  40. d8e765b COMPMID-472 : Implement Floor for CL and NEON. by Georgios Pinitas · 7 years ago
  41. a261181 COMPMID-417 NEON/CL MeanStdDev bugfix using FillBorderKernel by Giorgio Arena · 7 years ago
  42. c51b72f COMPMID-355 Implement CL DirectConvolution1x1 by SiCong Li · 7 years ago
  43. ef4b4ae COMPMID-438: Add support for floating point Min-Max Location layer. by Michele Di Giorgio · 7 years ago
  44. 6c92834 COMPMID-413: Add support for QS8 and QS16 CLNormalizationLayer. by Michele Di Giorgio · 7 years ago
  45. 27b386c COMPMID-355 Implement 3x3 CL direct convolution by steniu01 · 7 years ago
  46. 7281834 COMPMID-446: Add support for QS8/QS16 CL Arithmetic Add/Sub by Michele Di Giorgio · 7 years ago
  47. 172e570 COMPMID-425 Port CLBatchnormalization to support QS8/QS16 by Michalis Spyrou · 7 years ago
  48. f70256b COMPMID-443 Collapse higher dimension for pooling layer and normalization layer by steniu01 · 7 years ago
  49. 0d523cc COMPMID-443 Change CLSoftMaxLayerKernel to use 3D tensor and collapse the higher dimension by steniu01 · 7 years ago
  50. 00394ae COMPMID-406: Port CLActivationLayer to use QS8/QS16. by Georgios Pinitas · 7 years ago
  51. ac4e873 COMPMID-417: Port DepthConcatenate to QS8/QS16 for NEON/CL. by Georgios Pinitas · 7 years ago
  52. da37e2f COMPMID-431 Port CLDepthConvert to use 8-bit and 16-bit fixed point by steniu01 · 7 years ago
  53. 0979675 COMPMID-429: Port CLSoftmaxLayer to QS16. by Georgios Pinitas · 7 years ago
  54. 7d323a6 COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  55. ab0a77e COMPMID-409: Add support for QS8 and QS16 CLPixelWiseMultiplication. by Michele Di Giorgio · 7 years ago
  56. 368da83 COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point by Gian Marco Iodice · 7 years ago
  57. 8a38369 COMPMID-434 - Port CLGEMM to support 16 bit fixed point by Gian Marco Iodice · 7 years ago
  58. 13edbff COMPMID-432 - Extended Convolution Layer to support rectangular kernels by Gian Marco Iodice · 7 years ago
  59. 3a3066b COMPMID-411 - Port CLGEMM to support 8 bit fixed point by Gian Marco Iodice · 7 years ago
  60. 5cb4c42 COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLWeightsReshapeKernel by Gian Marco Iodice · 7 years ago
  61. e5f8fd6 COMPMID-423: Port CLSoftmaxLayer to QS8 by Georgios Pinitas · 7 years ago
  62. b30dcc5 COMPMID-345 - In-place computation for Activation Layer by Gian Marco Iodice · 7 years ago
  63. 578ab61 COMPMID-414 - Port CLConvolutionLayer to support 8 bit fixed point - CLGEMMMatrixAccumulateBiasesKernel by Gian Marco Iodice · 7 years ago
  64. ec8b45e COMPMID-345 - Auto-initialization for NECol2ImKernel and NEGEMMInterleave4x4Kernel by Gian Marco Iodice · 7 years ago
  65. 9f89bae COMPMID-411 - Ported CLGEMMInterleave4x4Kernel and CLGEMMTranspose1xWKernel to support 8 bit fixed point by Gian Marco Iodice · 7 years ago
  66. ce09314 COMPMID-403: Add support for 7x7 pooling on CL. by Georgios Pinitas · 7 years ago
  67. 6ff3b19 COMPMID-344 Updated doxygen by Anthony Barbier · 7 years ago
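Several commits above introduce static validate() entry points for the CL functions (30902ed and f9d3a0a for COMPMID-617 generally, 3faea25 for CLPoolingLayer specifically): they report whether a given tensor configuration is supported before any kernel is configured or OpenCL memory is allocated. The snippet below is a minimal, hedged sketch of that usage pattern, not code taken from the library; it assumes header paths and the PoolingLayerInfo constructor of an arm_compute release from roughly the same period as these commits, both of which may differ in later versions.

```cpp
// Hedged sketch: exercising the static validate() pattern added by the
// COMPMID-617 commits. Assumes an arm_compute release contemporary with the
// log above; exact headers and constructors may differ in newer releases.
#include "arm_compute/core/Error.h"
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/CL/functions/CLPoolingLayer.h"

#include <iostream>

using namespace arm_compute;

int main()
{
    // Only tensor metadata is needed; no OpenCL buffers are allocated to validate.
    const TensorInfo src(TensorShape(32U, 32U, 16U), 1, DataType::F32);
    const TensorInfo dst(TensorShape(16U, 16U, 16U), 1, DataType::F32); // 2x2 pool, stride 2

    // 2x2 max pooling, stride 2, no padding.
    const PoolingLayerInfo pool_info(PoolingType::MAX, 2, PadStrideInfo(2, 2, 0, 0));

    // Ask up front whether this configuration is supported before calling configure().
    // Note: some CL functions query the GPU target during validation, in which case
    // CLScheduler::get().default_init() would need to run first.
    const Status status = CLPoolingLayer::validate(&src, &dst, pool_info);
    if(status.error_code() != ErrorCode::OK)
    {
        std::cout << "Unsupported configuration: " << status.error_description() << std::endl;
        return 1;
    }
    std::cout << "Configuration is supported" << std::endl;
    return 0;
}
```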