  1. eb5696d Optimize CpuReshapeKernel by Anitha Raj · 12 months ago
  2. 6075097 Remove functionality to add padding in Y dimension in validation tests by Anitha Raj · 11 months ago
  3. 29e27b0 Add support for S64 output in NEArgMinMaxLayer by Pablo Marquez Tello · 11 months ago
  4. 78da34c Fix failure in MeanReduce layer by Viet-Hoa Do · 11 months ago
  5. e1c96e7 Port DirectConv2d to CKW backend by Jakub Sujak · 11 months ago
  6. 0c19f59 Fix CL Tile operator by Viet-Hoa Do · 11 months ago
  7. 4cb0bd4 Improved testing for ArgMinMax by Pablo Marquez Tello · 11 months ago
  8. 16b3752 Port ElementwiseBinary to CKW part 2 by SiCong Li · 12 months ago
  9. 9129549 Retain back-compatibility for arm_compute/core/Types.h by SiCong Li · 12 months ago
  10. 9662ac0 Add missing tests for CLCast by Pablo Marquez Tello · 11 months ago
  11. 23882a9 Add GpuKernelArgumentBinding for runtime argument setting by SiCong Li · 1 year ago
  12. 0a59e69 Fix problem with exception handling in CPPScheduler by Matthew Bentham · 12 months ago
  13. 4a1c917 Add support for input S64/U64 in CpuCastKernel by Pablo Marquez Tello · 12 months ago
  14. 314d3e2 Break up core/Utils.h to reduce unused code being included everywhere by Matthew Bentham · 1 year ago
  15. 4184e86 Port ClTemplateActivation into CKW by Adnan AlSinan · 12 months ago
  16. a5577db Fix dynamic fusion compilation error by Viet-Hoa Do · 12 months ago
  17. 205ba24 Add S64/U64 input support in CLCast by Pablo Marquez Tello · 12 months ago
  18. 945b8da Make test fixture setup methods non-templated by Matthew Bentham · 12 months ago
  19. 653b96c Improved ArgMinMax testing by Pablo Marquez Tello · 12 months ago
  20. 8e2dede Add Bias to MatMul Kernels and add support for use in Fully Connected Layer by Mohammed Suhail Munshi · 1 year ago
  21. 019a7d9 Enable transpose convolution with non-square kernels by Viet-Hoa Do · 1 year ago
  22. c9eeee5 Fix nightly failures in MatMulLowpNativeKernel when using bounded activation functions by Mohammed Suhail Munshi · 1 year ago
  23. 00474e9 Implement FP32/16 MatMul Lhs T Rhs T/NT kernel using MMUL extension by Gunes Bayir · 1 year ago
  24. a2bb80e Use MatMul in fully connected layer with dynamic weights when supported by Mohammed Suhail Munshi · 1 year ago
  25. c952596 Implement FP32/FP16 MatMul NT/T kernel using the MMUL extension by Ramy Elgammal · 1 year, 1 month ago
  26. a2561f0 Fix doxygen warnings by ramy.elgammal@arm.com · 1 year, 1 month ago
  27. 90d15b9 Add optional fp16 support to Bazel and CMake builds by David Svantesson · 1 year, 1 month ago
  28. 8eb82d2 Fix CPU depthwise convolution in case of large padding by Viet-Hoa Do · 1 year ago
  29. a8d8058 Implement FP32/FP16 MatMul NT/NT kernel using the MMUL extension by SiCong Li · 1 year, 1 month ago
  30. 94abde4 Add Fused Activation to OpenCL MatMul by Mohammed Suhail Munshi · 1 year, 1 month ago
  31. f1aeab9 Break up arm_compute/core/Types.h a bit by Matthew Bentham · 1 year, 1 month ago
  32. 3fcf3dc Add multi-sketch support for dynamic fusion by Viet-Hoa Do · 1 year, 2 months ago
  33. 48cfd5f Refactor activation LUT computation by Pablo Marquez Tello · 1 year, 1 month ago
  34. 1355ec4 Print out the rerun command of each failed testcase by Ramy Elgammal · 1 year, 2 months ago
  35. 95f1e4a Raise abs_tolerance number for CL/DirectConvolution3D fp16 tests by Ramy Elgammal · 1 year, 2 months ago
  36. bae01a5 Raise tolerance number for NEDeconvolutionLayer fp16 tests by SiCong Li · 1 year, 2 months ago
  37. cd2502c Fix invalid vector length in CL by Viet-Hoa Do · 1 year, 2 months ago
  38. 318782b Remove inclusion of NEReorderKernel header from NEReorderLayer by Ramy Elgammal · 1 year, 2 months ago
  39. 0ed0ddd Fix validation issue with CPU scale on FP16 by Viet-Hoa Do · 1 year, 2 months ago
  40. a8db612 Re-enable dynamic weights in Neon™ depthwise convolution by Ramy Elgammal · 1 year, 2 months ago
  41. e9b3ee2 Connect CLMatMul function to quantized kernels and resolve NE BatchMatMul int_8 failures by Jakub Sujak · 1 year, 3 months ago
  42. cdd1e03 Support multi-dimensional indices in the CL Gather Layer up to four-dimensional output tensors by Omar Al Khatib · 1 year, 2 months ago
  43. cd8b40d Add guards to make NEReorder aarch64-only by David Svantesson · 1 year, 2 months ago
  44. b5d6c28 Update Bazel and CMake builds by David Svantesson · 1 year, 2 months ago
  45. d7113e4 Remove `experimental` from `experimental_fixed_format_kernels` flag by Nathan John Sircombe · 1 year, 2 months ago
  46. a62129a Fix fully connected and matmul mismatches by Viet-Hoa Do · 1 year, 2 months ago
  47. 3b162e5 Add Reorder operator by David Svantesson · 1 year, 3 months ago
  48. a25582c Fix the gather layer indices check by Viet-Hoa Do · 1 year, 4 months ago
  49. 5e99a3e Add quantized CL MatMul kernel for LHS NT, RHS T by Jakub Sujak · 1 year, 2 months ago
  50. af15076 Only define validation test tolerance for quantized types on aarch64 for Neon™ MatMul by Ramy Elgammal · 1 year, 2 months ago
  51. 05a65e3 Disable Neon/MatMul/Quantized for armv7a by Ramy Elgammal · 1 year, 2 months ago
  52. 467daef Implement CL kernel for a native quantized batched MatMul - LHS transposed, RHS transposed by Omar Al Khatib · 1 year, 3 months ago
  53. 9c7c2d2 Add quantized support for CPU MatMul by Viet-Hoa Do · 1 year, 3 months ago
  54. 9d0c4de Add quantized CL MatMul kernels for Lhs NT/T, Rhs NT by Gunes Bayir · 1 year, 3 months ago
  55. b84df20 Fix unhandled case in ElementwiseUnary by Ramy Elgammal · 1 year, 3 months ago
  56. 1ed6a14 Align naming convention of ClMatMul by Jakub Sujak · 1 year, 3 months ago
  57. 9b0a6b4 Fix dynamic weights for CPU connected layer by Viet-Hoa Do · 1 year, 3 months ago
  58. 4e84f24 Increase tolerance for SME kernels by Viet-Hoa Do · 1 year, 3 months ago
  59. a1b1e41 Implement MatMul Function and Operator with Floating Point support for CPU by Mohammed Suhail Munshi · 1 year, 3 months ago
  60. 8b7f42a Enable quantized data types for CpuElementwiseUnary on Armv7a by Ramy Elgammal · 1 year, 3 months ago
  61. dcab9ca Fix unused variable warning reported in nightly build by Ramy Elgammal · 1 year, 3 months ago
  62. 617ed50 Support dynamic weights for Fully Connected layers on GPU by Jakub Sujak · 1 year, 3 months ago
  63. f26ea2f Implement MatMul Function by Ramy Elgammal · 1 year, 3 months ago
  64. fff9a4c Add Cropping to CLBatchToSpace by Omar Al Khatib · 1 year, 3 months ago
  65. 8893e45 Add cropping support to NEBatchToSpace by SiCong Li · 1 year, 3 months ago
  66. 732c1b2 Fix GCC13 compiler errors by Pablo Marquez Tello · 1 year, 3 months ago
  67. 573a33f Add additional FP16 guards to Convolution Layer by Nathan John Sircombe · 1 year, 3 months ago
  68. fd472f0 Add quantized support for unary elementwise in CPU by Viet-Hoa Do · 1 year, 4 months ago
  69. 5a7d157 Fix BatchToSpaceFixture by SiCong Li · 1 year, 3 months ago
  70. b531b75 Add Texture Pipe Support for Matmul Lhs T/NT Rhs T kernels by Ramy Elgammal · 1 year, 3 months ago
  71. bbeef72 Add Texture Pipe Support for Matmul Lhs T/NT Rhs NT kernels by Gunes Bayir · 1 year, 3 months ago
  72. a3e57c2 Add dynamic weights for CPU fully connected layer by Viet-Hoa Do · 1 year, 4 months ago
  73. 8918b23 Implement OpenCL MatMul for Lhs T Rhs T/NT FP32/16 by Gunes Bayir · 1 year, 4 months ago
  74. 14d7b53 Implementation of RSQRT for quantized int8 by Ramy Elgammal · 1 year, 5 months ago
  75. 2b6ebfe Implement OpenCL MatMul for Lhs NT Rhs T/NT FP32/16 by Ramy Elgammal · 1 year, 4 months ago
  76. 4ceb453 Add CropInfo to BatchToSpace reference and fixture by SiCong Li · 1 year, 4 months ago
  77. 37c989a Add support for arbitrary parameters for CPU Gather by Viet-Hoa Do · 1 year, 4 months ago
  78. 98aca0f Add sigmoid and tanh for dynamic fusion by Viet-Hoa Do · 1 year, 4 months ago
  79. 3c7c1fa Resolve unused variables in release mode in Dynamic Fusion source files by Omar Al Khatib · 1 year, 4 months ago
  80. bbf2e74 Add support for kernel indices in Maxpool by Adnan AlSinan · 1 year, 4 months ago
  81. 227db8d Add an option to use lowest for max-pooling by Adnan AlSinan · 1 year, 5 months ago
  82. 4537089 Fix CMake and Bazel builds and tests failing in scons by David Svantesson · 1 year, 4 months ago
  83. ecaa10a Fix Intermittent Neon™ ReduceMean QASYMM8 Mismatch by Mohammed Suhail Munshi · 1 year, 5 months ago
  84. c5fb6b2 Add absolute tolerance to f16 CLConv3D validation tests by SiCong Li · 1 year, 5 months ago
  85. a4ff9d0 Fix convolution layer fixture by Viet-Hoa Do · 1 year, 5 months ago
  86. 6b4571a Add missing CPU feature printouts during tests by Gunes Bayir · 1 year, 5 months ago
  87. f4230aa Fix DeconvolutionLayer tolerance issues in FP16 tests by Gunes Bayir · 1 year, 5 months ago
  88. d04528b Disable AddMulAdd armv7a tests by Gunes Bayir · 1 year, 5 months ago
  89. 5b9d223 Fix GEMMLowp/Batched MatMul mismatches on CPU by Mohammed Suhail Munshi · 1 year, 5 months ago
  90. ae72a46 Add new operator AddMulAdd for Neon™ backend for Float/Quantized types by Gunes Bayir · 1 year, 5 months ago
  91. ec320d9 Add Subtraction operator to Dynamic Fusion interface by Ramy Elgammal · 1 year, 7 months ago
  92. 464ed20 Remove fixed format strides hack by Jonathan Deakin · 1 year, 6 months ago
  93. 7359a87 Add Multiplication operator (FP only) to Dynamic Fusion Interface by Jakub Sujak · 1 year, 6 months ago
  94. e0c42ef Bazel and CMake builds (Resolves: ONCPUML-1110, ONCPUML-1109) by David Svantesson · 1 year, 7 months ago
  95. 54eafd8 Sync tolerance number of dynamic fusion direct conv2d with the current library by SiCong Li · 1 year, 5 months ago
  96. 002e653 Implement dynamic fusion softmax operator by Ramy Elgammal · 1 year, 6 months ago
  97. cc28773 Change dynamic fusion API to return destination tensor info by Gunes Bayir · 1 year, 5 months ago
  98. 5a63d1e Add missing direct conv2d tests to dynamic fusion by SiCong Li · 1 year, 6 months ago
  99. 3b504ef Update libnpy header external dependency to the latest version by Jakub Sujak · 1 year, 7 months ago
  100. a18d85c Implement Dynamic Fusion Pooling Layer 2D by Mohammed Suhail Munshi · 1 year, 6 months ago