  1. 2481e95 Add memory stress tests for per channel quantized convolution by Gunes Bayir · 9 weeks ago
  2. 83ca105 Fix v7 test failure when core matmul result is dequantized into fp32 by Gunes Bayir · 2 months ago
  3. a668f9f Add s8f32 kernels and dynamic QuantizationInfo by Jonathan Deakin · 5 months ago
  4. 34bdffb Add guarding for accumulation validation test in aarch32 by Radu Salavat · 2 months ago
  5. 64f2300 Runtime checks for bf16 fixed format tests by David Svantesson-Yeung · 3 months ago
  6. cdce25b Accumulation in Cpu Gemm kernels is not supported for quantized kernels in aarch32. This patch guards the relevant tests. by Radu Salavat · 3 months ago
  7. cfca87b Add SME2 implementation of softmax for FP16 by Gunes Bayir · 3 months ago
  8. f1f1f87 Add in place summation to CPU GEMM kernels by Radu Salavat · 4 months ago
  9. 4908981 [ONCPUML-1451] Guard bf16 to bf16 tests with ARM_COMPUTE_ENABLE_FIXED_FORMAT_KERNELS by Renato Arantes · 3 months ago
  10. 36a75da [ONCPUML-1451] Add matmul kernel to enable bf16 to bf16 operations via PyTorch® autocast() function by Renato Arantes · 5 months ago
  11. d219115 Make Cpu/Gpu/Ref scalar/vectoral S32 division consistent by Gunes Bayir · 3 months ago
  12. 1618e95 Increase tolerance_num of Cpu RNNLayer tests by Gunes Bayir · 3 months ago
  13. 43ba0dd Increase MatMul and DilatedConv test Q8 thresholds to 1 by Gunes Bayir · 3 months ago
  14. 3ac0b87 Fix validation in pool2d assembly wrapper by Pablo Marquez Tello · 3 months ago
  15. 9167c9c Prefer indirect Gemm vs. Direct convolution if supported by Gunes Bayir · 4 months ago
  16. e77736f Set int8 test tolerance in FullyConnected to int8 by Gunes Bayir · 4 months ago
  17. 0a48c4c Requantization cases for offset changes only by Mohammed Suhail Munshi · 5 months ago
  18. 0cba93f [QTest] Use dynamic output quantization in Depthwise Conv tests by Omar Al Khatib · 5 months ago
  19. 8050d22 Disable FP16 tests compilation on Multi-Isa v8a by Mohammed Suhail Munshi · 5 months ago
  20. 2aec5f1 Fix tolerance issue in BF16 MatMul tests by Gunes Bayir · 5 months ago
  21. 11ab451 Implement dynamic quantization for GEMMLowp tests by SiCong Li · 8 months ago
  22. 6b5a361 Adjust NEReduceMean test tolerance by SiCong Li · 7 months ago
  23. fadc9b1 Optimize CpuSoftmaxKernel for axis=0 by Gunes Bayir · 8 months ago
  24. c5ab4df Optimize CpuGemmConv2d start-up time by SiCong Li · 8 months ago
  25. 02c452f Add Dynamic Quantization tests to Fully Connected Layer by Mohammed Suhail Munshi · 8 months ago
  26. 93a77cd Use dynamic quantization in Convolution and Dilated Convolution tests by Gunes Bayir · 9 months ago
  27. 4ea9bac Fix memory Error in Reverse Fixture. by Adnan AlSinan · 9 months ago
  28. 0b72aa4 Optimize NEStackLayer by Gunes Bayir · 9 months ago
  29. 3af4c9b Optimize CL and Neon depthwise convolution tests by Gunes Bayir · 9 months ago
  30. 745153b NEDeconvolutionLayer validation fix by Pablo Marquez Tello · 9 months ago
  31. c2a51bd Optimize CL and Neon Winograd tests by Gunes Bayir · 9 months ago
  32. bdcb4c1 Implement tflite compliant reverse for CPU by Adnan AlSinan · 9 months ago
  33. 40a9d3e Remove deprecated support for BF16 in CpuCast by Adnan AlSinan · 10 months ago
  34. c85edf1 Make zip and combine variadic by Viet-Hoa Do · 10 months ago
  35. b1fcb41 Disable NEArgMinMaxLayer RunSmall_F32_S64 for armv7a by Pablo Marquez Tello · 10 months ago
  36. eb5696d Optimize CpuReshapeKernel by Anitha Raj · 12 months ago
  37. 29e27b0 Add support for S64 output in NEArgMinMaxLayer by Pablo Marquez Tello · 11 months ago
  38. 78da34c Fix failure in MeanReduce layer by Viet-Hoa Do · 11 months ago
  39. 4cb0bd4 Improved testing for ArgMinMax by Pablo Marquez Tello · 11 months ago
  40. 4a1c917 Add support for input S64/U64 in CpuCastKernel by Pablo Marquez Tello · 11 months ago
  41. 314d3e2 Break up core/Utils.h to reduce unused code being included everywhere by Matthew Bentham · 1 year ago
  42. 019a7d9 Enable transpose convolution with non-square kernels by Viet-Hoa Do · 12 months ago
  43. 48cfd5f Refactor activation LUT computation by Pablo Marquez Tello · 1 year, 1 month ago
  44. bae01a5 Raise tolerance number for NEDeconvolutionLayer fp16 tests by SiCong Li · 1 year, 1 month ago
  45. 318782b Remove inclusion of NEReorderKernel header from NEReorderLayer by Ramy Elgammal · 1 year, 2 months ago
  46. 0ed0ddd Fix validation issue with CPU scale on FP16 by Viet-Hoa Do · 1 year, 2 months ago
  47. cd8b40d Guards to make NEReorder aarch64 only by David Svantesson · 1 year, 2 months ago
  48. 3b162e5 Reorder added by David Svantesson · 1 year, 3 months ago
  49. af15076 Only define validation test tolerance for quantized types in case of aarch64 for Neon™ Matmul by Ramy Elgammal · 1 year, 2 months ago
  50. 05a65e3 Disable Neon/MatMul/Quantized for armv7a by Ramy Elgammal · 1 year, 2 months ago
  51. 9c7c2d2 Add quantized support for CPU MatMul by Viet-Hoa Do · 1 year, 3 months ago
  52. b84df20 Fix unhandled case in ElementwiseUnary by Ramy Elgammal · 1 year, 3 months ago
  53. 9b0a6b4 Fix dynamic weights for CPU connected layer by Viet-Hoa Do · 1 year, 3 months ago
  54. 4e84f24 Increase tolerance for SME kernels by Viet-Hoa Do · 1 year, 3 months ago
  55. a1b1e41 Implement MatMul Function and Operator with Floating Point support for CPU by Mohammed Suhail Munshi · 1 year, 3 months ago
  56. 8b7f42a Enable quantized data types for CpuElementwiseUnary on Armv7a by Ramy Elgammal · 1 year, 3 months ago
  57. 8893e45 Add cropping support to NEBatchToSpace by SiCong Li · 1 year, 3 months ago
  58. 732c1b2 Fix GCC13 compiler errors by Pablo Marquez Tello · 1 year, 3 months ago
  59. 573a33f Add additional FP16 guards to Convolution Layer by Nathan John Sircombe · 1 year, 3 months ago
  60. fd472f0 Add quantized support for unary elementwise in CPU by Viet-Hoa Do · 1 year, 3 months ago
  61. a3e57c2 Add dynamic weights for CPU fully connected layer by Viet-Hoa Do · 1 year, 4 months ago
  62. bbf2e74 Add support for kernel indices in Maxpool by Adnan AlSinan · 1 year, 4 months ago
  63. 4537089 Fixes for CMake and Bazel builds, tests failing in scons by David Svantesson · 1 year, 4 months ago
  64. f4230aa Fix DeconvolutionLayer tolerance issues in FP16 tests by Gunes Bayir · 1 year, 5 months ago
  65. d04528b Disable AddMulAdd armv7a tests by Gunes Bayir · 1 year, 5 months ago
  66. 5b9d223 Fix GEMMLowp/Batched MatMul mismatches on CPU by Mohammed Suhail Munshi · 1 year, 5 months ago
  67. ae72a46 Add new operator AddMulAdd for Neon™ backend for Float/Quantized types by Gunes Bayir · 1 year, 5 months ago
  68. 464ed20 Remove fixed format strides hack by Jonathan Deakin · 1 year, 6 months ago
  69. 13bab71 Fix ClGemm crashes on unsupported data types by SiCong Li · 1 year, 5 months ago
  70. 6bcdc57 Deprecated BF16 support in DepthConvert by Pablo Marquez Tello · 1 year, 6 months ago
  71. 1b2f868 Fix CL DirectConvolutionLayer validate tests by SiCong Li · 1 year, 6 months ago
  72. 97a609b Fix GemmLowp BatchMatMul Tests to use quantized Outputs by Mohammed Suhail Munshi · 1 year, 8 months ago
  73. d158609 SVE Hard-Swish via Lookup table for quantized input by Pablo Marquez Tello · 2 years, 1 month ago
  74. a7077e9 Updateable weights in depthwise convolution by Milos Puzovic · 1 year, 8 months ago
  75. 9fc0b5c Update reinterpret tensor as 1D for CPU add by Viet-Hoa Do · 1 year, 8 months ago
  76. 6782452 Add test in GEMMLowp for batch matmul by Mohammed Suhail Munshi · 1 year, 9 months ago
  77. 304dfdb Fix Batch Matmul nightly failure by Adnan AlSinan · 1 year, 9 months ago
  78. 40b4419 Optimize CPU add layer on quantized data by Viet-Hoa Do · 1 year, 9 months ago
  79. d6b8a71 Add FP32 Neon™ swish activation by Jonathan Deakin · 1 year, 10 months ago
  80. c4f2743 Optimize Quantized/Integer Bilinear Scale for Neon™ by Gunes Bayir · 1 year, 10 months ago
  81. 926f502 Adding GELU activation by Murray Kornelsen · 2 years ago
  82. 6e09e14 INT8 Quantized MeanStdDevNorm (LayerNorm) by Murray Kornelsen · 2 years ago
  83. a4814e8 Add test case for disable Winograd on fp16 if fast-math = false by Ramy Elgammal · 1 year, 10 months ago
  84. 26c9d1a Add test for NEGEMM to test a batched matrix multiplication with variable input tensors by Adnan AlSinan · 1 year, 10 months ago
  85. 93581a5 [ONCPUML-970] Fast math mode for fixed format kernels by Pablo Marquez Tello · 2 years ago
  86. 9b921be Optimize add layer by considering the input tensors as 1D array by Gunes Bayir · 1 year, 11 months ago
  87. 9178002 Fix for inclusion of "arm_gemm" from src into "Types.h" from core by Ramy Elgammal · 2 years ago
  88. d208f4f Enable march=armv8.6-a in non multi-isa builds by Pablo Marquez Tello · 2 years ago
  89. 553f695 [ONCPUML-951] Variable weight support for Convolution. by Francesco Petrogalli · 2 years ago
  90. 29cab36 Fixed clang-cl errors on Windows native builds. by Pablo Tello · 2 years, 4 months ago
  91. 894659a Add support for 2d and 3d indices for axis 1 by Pablo Marquez Tello · 2 years, 2 months ago
  92. d75cd8a Compute Hard-Swish with a Lookup table for qasymm8. by Pablo Marquez Tello · 2 years, 1 month ago
  93. dc4f276 Revert "Add support for 2d and 3d indices for axis 0" by Mohammed Suhail Munshi · 2 years, 2 months ago
  94. 920f2b6 Add support for 2d and 3d indices for axis 0 by Pablo Marquez Tello · 2 years, 2 months ago
  95. f55cca5 Add LU_BOUNDED_RELU support for QSYMM16 by Pablo Marquez Tello · 2 years, 3 months ago
  96. 9104cd5 Add support for int8 CpuPool3d by Adnan AlSinan · 2 years, 3 months ago
  97. 4c17ba9 Fix Nightly build failure by Adnan AlSinan · 2 years, 3 months ago
  98. 171fc3d Add CPU Pool3d FP16/32 implementation by Adnan AlSinan · 2 years, 3 months ago
  99. 298b2c0 Decouple castKernel by Yair Schwarzbaum · 2 years, 5 months ago
  100. c9e519d Decouple CpuDirectConv2dKernel by alerah01 · 2 years, 5 months ago