1. 29e27b0 Add support for S64 output in NEArgMinMaxLayer by Pablo Marquez Tello · 11 months ago
  2. 78ce273 Document the Conv2D heuristic by Gian Marco Iodice · 11 months ago
  3. 9129549 Retain back-compatibility for arm_compute/core/Types.h by SiCong Li · 12 months ago
  4. 4a1c917 Add support for input S64/U64 in CpuCastKernel by Pablo Marquez Tello · 12 months ago
  5. 314d3e2 Break up core/Utils.h to reduce unused code being included everywhere by Matthew Bentham · 1 year ago
  6. 1d06204 Do not include headers necessary for logging when logging is disabled by Matthew Bentham · 12 months ago
  7. 8deee9b Depthwise channel pre-multiplication by Michael Tyler · 1 year ago
  8. 7d9a78e Remove dependency on fp16 definitions from some core include files by Matthew Bentham · 1 year, 1 month ago
  9. 47a50ef Address the issues with the ACL coverage pipeline failures related to matmul. by Renato Arantes · 1 year, 1 month ago
  10. 8eb82d2 Fix CPU depthwise convolution in case of large padding by Viet-Hoa Do · 1 year ago
  11. 94abde4 Add Fused Activation to OpenCL MatMul by Mohammed Suhail Munshi · 1 year, 1 month ago
  12. 043613f Break up Utils.h a bit to reduce unused code being included everywhere by Matthew Bentham · 1 year, 1 month ago
  13. f1aeab9 Break up arm_compute/core/Types.h a bit by Matthew Bentham · 1 year, 1 month ago
  14. 48cfd5f Refactor activation LUT computation by Pablo Marquez Tello · 1 year, 1 month ago
  15. 48c0ed9 Fix ScaleKernel validate method. by Pablo Marquez Tello · 1 year, 2 months ago
  16. c0463a2 Move lut kernel to sve2 category by SiCong Li · 1 year, 2 months ago
  17. a8db612 Re-enable dynamic weights in Neon™ depthwise convolution by Ramy Elgammal · 1 year, 2 months ago
  18. e9b3ee2 Connect CLMatMul function to quantized kernels and resolve NE BatchMatMul int_8 failures by Jakub Sujak · 1 year, 3 months ago
  19. edafe7f Disable dynamic weights in unsupported operators by Viet-Hoa Do · 1 year, 2 months ago
  20. 5713294 Fix im2col for fast-maths mode with padding. by Renato Arantes · 1 year, 2 months ago
  21. 54e52a9 Fix CPU MatMul broadcast detection by Viet-Hoa Do · 1 year, 2 months ago
  22. a62129a Fix fully connected and matmul mismatches by Viet-Hoa Do · 1 year, 2 months ago
  23. dba672c Integrate multi-threaded pretranspose_B_array by SiCong Li · 1 year, 3 months ago
  24. a07c01b NETranspose 8x8 kernel for 32-bit elements by Ethan Doe · 1 year, 3 months ago
  25. 9c7c2d2 Add quantized support for CPU MatMul by Viet-Hoa Do · 1 year, 3 months ago
  26. b84df20 Fix unhandled case in ElementwiseUnary by Ramy Elgammal · 1 year, 3 months ago
  27. 9b0a6b4 Fix dynamic weights for CPU connected layer by Viet-Hoa Do · 1 year, 3 months ago
  28. a1b1e41 Implement MatMul Function and Operator with Floating Point support for CPU by Mohammed Suhail Munshi · 1 year, 3 months ago
  29. 8b7f42a Enable quantized data types for CpuElementwiseUnary on Armv7a by Ramy Elgammal · 1 year, 3 months ago
  30. 732c1b2 Fix GCC13 compiler errors by Pablo Marquez Tello · 1 year, 3 months ago
  31. fd472f0 Add quantized support for unary elementwise in CPU by Viet-Hoa Do · 1 year, 4 months ago
  32. 20cfa45 Round to nearest with ties to away from zero in Relu by Pablo Marquez Tello · 1 year, 4 months ago
  33. a3e57c2 Add dynamic weights for CPU fully connected layer by Viet-Hoa Do · 1 year, 4 months ago
  34. 0ffc88b [ONCPUML-1174] Allow src/weights mismatch for fixed format by Jonathan Deakin · 1 year, 4 months ago
  35. 1fe48ca NEGEMMLowpMatrixMultiplyCore should be configured for optimized int8 kernel. by Ethan Doe · 1 year, 4 months ago
  36. bbf2e74 Add support for kernel indices in Maxpool by Adnan AlSinan · 1 year, 4 months ago
  37. 227db8d Add an option to use lowest for max-pooling by Adnan AlSinan · 1 year, 5 months ago
  38. 7d9a626 Update CPU kernels to remove x19 and w19 by Michael Tyler · 1 year, 5 months ago
  39. 4e2bbbb Add support for dilation > 1 in assembly DepthwiseConvolution by Pablo Marquez Tello · 1 year, 6 months ago
  40. fbe94da Fix armv7a failing GEMMConvolutionLayer tests by Mohammed Suhail Munshi · 1 year, 5 months ago
  41. 5b9d223 Fix GEMMLowp/Batched MatMul mismatches on CPU by Mohammed Suhail Munshi · 1 year, 5 months ago
  42. ae72a46 Add new operator AddMulAdd for Neon™ backend for Float/Quantized types by Gunes Bayir · 1 year, 5 months ago
  43. 464ed20 Remove fixed format strides hack by Jonathan Deakin · 1 year, 6 months ago
  44. 1b6377b Add broadcast batched matmul validation cases by SiCong Li · 1 year, 6 months ago
  45. be13cea Revert "Update CPU kernels to remove x19" by Michael Tyler · 1 year, 6 months ago
  46. ba20975 Update CPU kernels to remove x19 by Michael Tyler · 1 year, 7 months ago
  47. 6bcdc57 Deprecate BF16 support in DepthConvert by Pablo Marquez Tello · 1 year, 6 months ago
  48. 939b21a Use CPU quantized addition kernel for quantized subtraction by Omar Al Khatib · 1 year, 7 months ago
  49. 9d7b690 Fixed various mismatches in CpuCastKernel by Pablo Marquez Tello · 1 year, 7 months ago
  50. f16973b Fix build error for unused variables in data type specific builds by Gunes Bayir · 1 year, 7 months ago
  51. e112ef1 ONCPUML-1072: Remove double definition of get_mws for Mul kernel by fadara01 · 1 year, 7 months ago
  52. 73bb6b7 ONCPUML-1072: Tuned MWS values (for N1, V1) for binary operators used by oneDNN by Fadi Arafeh · 1 year, 9 months ago
  53. 8307ecf Fix regression caused by mws in ActivationLayer by Mohammed Suhail Munshi · 1 year, 8 months ago
  54. b230a1f Fixed Arm NN unit test failure caused by quantised multiplication patch. by Omar Al Khatib · 1 year, 8 months ago
  55. d4a9cc0 Fix CPU multiplication layer threading overhead by Viet-Hoa Do · 1 year, 8 months ago
  56. d158609 SVE Hard-Swish via Lookup table for quantized input by Pablo Marquez Tello · 2 years, 1 month ago
  57. 605a928 Optimize CPU mul layer on quantized data by Omar Al Khatib · 1 year, 8 months ago
  58. 0ae31d1 Fix fixed-point quantized addition by Viet-Hoa Do · 1 year, 8 months ago
  59. a7077e9 Updateable weights in depthwise convolution by Milos Puzovic · 1 year, 8 months ago
  60. 199982f Add threshold for floating-point SOFT_RELU activation by Milos Puzovic · 1 year, 8 months ago
  61. 4b5f6ef Add check for Batch Matmul in GemmAssemblyDispatch by Mohammed Suhail Munshi · 1 year, 9 months ago
  62. 910e3f9 Fix fixed-point quantized addition by Viet-Hoa Do · 1 year, 9 months ago
  63. 9fc0b5c Update reinterpret tensor as 1D for CPU add by Viet-Hoa Do · 1 year, 9 months ago
  64. 6782452 Add test in GEMMLowp for batch matmul by Mohammed Suhail Munshi · 1 year, 9 months ago
  65. 0a36f58 Fix FFTConvolutionLayer test by Viet-Hoa Do · 1 year, 9 months ago
  66. fa79fda Optimize Neon™ Logistic Activation by Mohammed Suhail Munshi · 1 year, 10 months ago
  67. c8cc024 Adding documentation section explaining how BF16 is used by Ramy Elgammal · 1 year, 9 months ago
  68. 6670413 Fix LUT-based activation layer by Viet-Hoa Do · 1 year, 9 months ago
  69. 842ad21 Optimize Neon™ SUB operator by squashing execution window by Jakub Sujak · 1 year, 10 months ago
  70. 304dfdb Fix Batch Matmul nightly failure by Adnan AlSinan · 1 year, 10 months ago
  71. 40b4419 Optimize CPU add layer on quantized data by Viet-Hoa Do · 1 year, 9 months ago
  72. d6b8a71 Add FP32 Neon™ swish activation by Jonathan Deakin · 1 year, 10 months ago
  73. ead4d11 Fix unresolved symbol for target armv7a + Android by Pablo Marquez Tello · 1 year, 10 months ago
  74. 622b8ad Fix bug in QASYMM8_SIGNED to F32 cast layer by Viet-Hoa Do · 1 year, 10 months ago
  75. c4f2743 Optimize Quantized/Integer Bilinear Scale for Neon™ by Gunes Bayir · 1 year, 10 months ago
  76. 0d05b66 Interpreting tensor as 1D for CPU multiplication by Viet-Hoa Do · 1 year, 10 months ago
  77. 926f502 Adding GELU activation by Murray Kornelsen · 2 years ago
  78. 6e09e14 INT8 Quantized MeanStdDevNorm (LayerNorm) by Murray Kornelsen · 2 years ago
  79. 26c9d1a Add test for NEGEMM to test a batched matrix multiplication with variable input tensors by Adnan AlSinan · 1 year, 10 months ago
  80. 0eed305 Optimize FP32/16 Bilinear Scale Kernel for Neon™ by Gunes Bayir · 1 year, 10 months ago
  81. e4e3b2e Disable Winograd on fp16 if fast-math = false by Ramy Elgammal · 1 year, 10 months ago
  82. 552fe4c F16 Specialization for MeanStdDevNorm by Murray Kornelsen · 2 years ago
  83. a331e48 Fix add for tensors with non-matching strides by Jonathan Deakin · 1 year, 11 months ago
  84. 53929b1 Use Neon™ kernels for FP Bilinear Resize for SVE by Gunes Bayir · 1 year, 11 months ago
  85. 29db3d2 Add LUT for quantized sigmoid function by Viet-Hoa Do · 1 year, 11 months ago
  86. 65c8db8 Fix for AI benchmark ResNet regression by Viet-Hoa Do · 1 year, 11 months ago
  87. 93581a5 [ONCPUML-970] Fast math mode for fixed format kernels by Pablo Marquez Tello · 2 years ago
  88. 13b623e [ONCPUML-968] Fixed format kernel support in additional APIs by Milos Puzovic · 2 years ago
  89. 9b921be Optimize add layer by considering the input tensors as 1D array by Gunes Bayir · 2 years ago
  90. aa52b7d Fix compilation error raised in Nightly_NEW by Ramy Elgammal · 2 years ago
  91. 9178002 Fix for inclusion of "arm_gemm" from src into "Types.h" from core by Ramy Elgammal · 2 years ago
  92. d208f4f Enable march=armv8.6-a in non multi-isa builds by Pablo Marquez Tello · 2 years ago
  93. 553f695 [ONCPUML-951] Variable weight support for Convolution. by Francesco Petrogalli · 2 years ago
  94. a1f7851 Integrate new winograd APIs from MLTech by ramelg01 · 2 years ago
  95. e417ff1 Fix build errors on armv8.6 SVE2 with NDK 23 and 24 by Michalis Spyrou · 2 years ago
  96. 16aa474 Wrong arguments for running activation function in CpuGemmDirectConv2d by Michalis Spyrou · 2 years ago
  97. b042e39 Add LUT-based leaky relu for QASYMM8 on CPU by Viet-Hoa Do · 2 years ago
  98. 41eb2d9 Improve LUT Neon Hard-Swish by Pablo Marquez Tello · 2 years ago
  99. 700b913 Select neon LUT Hard-Swish kernel on all devices by Pablo Marquez Tello · 2 years ago
  100. f1f7779 Fix SVE2 implementation of quantized SoftMax 1D by Viet-Hoa Do · 2 years, 1 month ago
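
For orientation, below is a minimal sketch of how the feature in entry 1 (commit 29e27b0, S64 output in NEArgMinMaxLayer) is exercised through the public API. The tensor shapes and reduction axis are illustrative assumptions, not taken from the commit itself.

// Hedged sketch: NEArgMinMaxLayer producing S64 indices (cf. commit 29e27b0).
// Shapes and axis are assumptions for illustration only.
#include "arm_compute/runtime/NEON/functions/NEArgMinMaxLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    Tensor input;
    Tensor output;

    // 32x16 F32 input; reducing along axis 0 leaves 16 index values.
    input.allocator()->init(TensorInfo(TensorShape(32U, 16U), 1, DataType::F32));
    // S64 is the output index type enabled by commit 29e27b0.
    output.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::S64));

    NEArgMinMaxLayer argmax;
    argmax.configure(&input, /*axis=*/0, &output, ReductionOperation::ARG_IDX_MAX);

    input.allocator()->allocate();
    output.allocator()->allocate();

    argmax.run();
    return 0;
}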