1. ea9bd8f Changes to ElementwiseOp to enable fp16 in armv8a multi_isa builds by Pablo Marquez Tello · 10 months ago
  2. 0d27b2e Remove legacy PostOps code by Jakub Sujak · 10 months ago
  3. 7ff03b6 DWC changes to enable fp16 in armv8a multi_isa builds by Pablo Marquez Tello · 10 months ago
  4. 324ba7a Pool3d changes to enable fp16 in armv8a multi_isa builds by Pablo Marquez Tello · 10 months ago
  5. 8770669 Changes in roi_align to enable fp16 in armv8a multi_isa builds by Pablo Marquez Tello · 10 months ago
  6. cea7060 NEFuseBatchNormalizationKernel rework by Pablo Marquez Tello · 11 months ago
  7. 3a9ecdf CpuAdd rework to enable fp16 in armv8a multi_isa builds by Pablo Marquez Tello · 11 months ago
  8. 082630b Update CpuGemmConv2d and CpuFlatten to use CpuReshape operator by Anitha Raj · 11 months ago
  9. eb5696d Optimize CpuReshapeKernel by Anitha Raj · 12 months ago
  10. 580ecd7 Fix depthwise convolution not using assembly kernel by Viet-Hoa Do · 11 months ago
  11. 246fe08 Fix various static check issues by Viet-Hoa Do · 11 months ago
  12. 29e27b0 Add support for S64 output in NEArgMinMaxLayer by Pablo Marquez Tello · 11 months ago
  13. 78ce273 Document the Conv2D heuristic by Gian Marco Iodice · 11 months ago
  14. 9129549 Retain back-compatibility for arm_compute/core/Types.h by SiCong Li · 12 months ago
  15. 4a1c917 Add support for input S64/U64 in CpuCastKernel by Pablo Marquez Tello · 12 months ago
  16. 314d3e2 Break up core/Utils.h to reduce unused code being included everywhere by Matthew Bentham · 1 year ago
  17. 1d06204 Do not include headers necessary for logging when logging is disabled by Matthew Bentham · 12 months ago
  18. 8deee9b Depthwise channel pre-multiplication by Michael Tyler · 1 year ago
  19. 7d9a78e Remove dependency on fp16 definitions from some core include files by Matthew Bentham · 1 year, 1 month ago
  20. 47a50ef Address the issues with the ACL coverage pipeline failures related to matmul. by Renato Arantes · 1 year, 1 month ago
  21. 8eb82d2 Fix CPU depthwise convolution in case of large padding by Viet-Hoa Do · 1 year ago
  22. 94abde4 Add Fused Activation to OpenCL MatMul by Mohammed Suhail Munshi · 1 year, 1 month ago
  23. 043613f Break up Utils.h a bit to reduce unused code being included everywhere by Matthew Bentham · 1 year, 1 month ago
  24. f1aeab9 Break up arm_compute/core/Types.h a bit by Matthew Bentham · 1 year, 1 month ago
  25. 48cfd5f Refactor activation LUT computation by Pablo Marquez Tello · 1 year, 1 month ago
  26. 48c0ed9 Fix ScaleKernel validate method. by Pablo Marquez Tello · 1 year, 2 months ago
  27. c0463a2 Move lut kernel to sve2 category by SiCong Li · 1 year, 2 months ago
28. a8db612 Re-enable dynamic weights in Neon™ depthwise convolution by Ramy Elgammal · 1 year, 2 months ago
29. e9b3ee2 Connect CLMatMul function to quantized kernels and resolve NE BatchMatMul int8 failures by Jakub Sujak · 1 year, 3 months ago
  30. edafe7f Disable dynamic weights in unsupported operators by Viet-Hoa Do · 1 year, 2 months ago
  31. 5713294 Fix im2col for fast-maths mode with padding. by Renato Arantes · 1 year, 2 months ago
  32. 54e52a9 Fix CPU MatMul broadcast detection by Viet-Hoa Do · 1 year, 2 months ago
  33. a62129a Fix fully connected and matmul mismatches by Viet-Hoa Do · 1 year, 2 months ago
  34. dba672c Integrate multi-threaded pretranspose_B_array by SiCong Li · 1 year, 3 months ago
  35. a07c01b NETranspose 8x8 kernel for 32-bit elements by Ethan Doe · 1 year, 3 months ago
  36. 9c7c2d2 Add quantized support for CPU MatMul by Viet-Hoa Do · 1 year, 3 months ago
  37. b84df20 Fix unhandled case in ElementwiseUnary by Ramy Elgammal · 1 year, 3 months ago
  38. 9b0a6b4 Fix dynamic weights for CPU connected layer by Viet-Hoa Do · 1 year, 3 months ago
  39. a1b1e41 Implement MatMul Function and Operator with Floating Point support for CPU by Mohammed Suhail Munshi · 1 year, 3 months ago
  40. 8b7f42a Enable quantized data types for CpuElementwiseUnary on Armv7a by Ramy Elgammal · 1 year, 3 months ago
  41. 732c1b2 Fix GCC13 compiler errors by Pablo Marquez Tello · 1 year, 3 months ago
  42. fd472f0 Add quantized support for unary elementwise in CPU by Viet-Hoa Do · 1 year, 4 months ago
  43. 20cfa45 Round to nearest with ties to away from zero in Relu by Pablo Marquez Tello · 1 year, 4 months ago
  44. a3e57c2 Add dynamic weights for CPU fully connected layer by Viet-Hoa Do · 1 year, 4 months ago
  45. 0ffc88b [ONCPUML-1174] Allow src/weights mismatch for fixed format by Jonathan Deakin · 1 year, 4 months ago
  46. 1fe48ca NEGEMMLowpMatrixMultiplyCore should be configured for optimized int8 kernel. by Ethan Doe · 1 year, 4 months ago
  47. bbf2e74 Add support for kernel indices in Maxpool by Adnan AlSinan · 1 year, 4 months ago
  48. 227db8d Add an option to use lowest for max-pooling by Adnan AlSinan · 1 year, 5 months ago
  49. 7d9a626 Update CPU kernels to remove x19 and w19 by Michael Tyler · 1 year, 5 months ago
  50. 4e2bbbb Add support for dilation > 1 in assembly DepthwiseConvolution by Pablo Marquez Tello · 1 year, 6 months ago
  51. fbe94da Fix armv7a failing GEMMConvolutionLayer tests by Mohammed Suhail Munshi · 1 year, 5 months ago
  52. 5b9d223 Fix GEMMLowp/Batched MatMul mismatches on CPU by Mohammed Suhail Munshi · 1 year, 5 months ago
  53. ae72a46 Add new operator AddMulAdd for Neon™ backend for Float/Quantized types by Gunes Bayir · 1 year, 5 months ago
  54. 464ed20 Remove fixed format strides hack by Jonathan Deakin · 1 year, 6 months ago
  55. 1b6377b Add broadcast batched matmul validation cases by SiCong Li · 1 year, 6 months ago
  56. be13cea Revert "Update CPU kernels to remove x19" by Michael Tyler · 1 year, 6 months ago
  57. ba20975 Update CPU kernels to remove x19 by Michael Tyler · 1 year, 7 months ago
  58. 6bcdc57 Deprecated BF16 support in DepthConvert by Pablo Marquez Tello · 1 year, 6 months ago
  59. 939b21a Use CPU quantized addition kernel for quantized subtraction by Omar Al Khatib · 1 year, 7 months ago
  60. 9d7b690 Fixed various mismatches in CpuCastKernel by Pablo Marquez Tello · 1 year, 7 months ago
  61. f16973b Fix build error for unused variables in data type specific builds by Gunes Bayir · 1 year, 7 months ago
62. e112ef1 ONCPUML-1072: Remove double definition of get_mws for Mul kernel by Fadi Arafeh · 1 year, 7 months ago
  63. 73bb6b7 ONCPUML-1072: Tuned MWS values (for N1, V1) for binary operators used by oneDNN by Fadi Arafeh · 1 year, 9 months ago
  64. 8307ecf Fix regression caused by mws in ActivationLayer by Mohammed Suhail Munshi · 1 year, 8 months ago
  65. b230a1f Fixed Arm NN unit test failure caused by quantised multiplication patch. by Omar Al Khatib · 1 year, 8 months ago
  66. d4a9cc0 Fix CPU multiplication layer threading overhead by Viet-Hoa Do · 1 year, 8 months ago
  67. d158609 SVE Hard-Swish via Lookup table for quantized input by Pablo Marquez Tello · 2 years, 1 month ago
  68. 605a928 Optimize CPU mul layer on quantized data by Omar Al Khatib · 1 year, 8 months ago
  69. 0ae31d1 Fix fixed-point quantized addition by Viet-Hoa Do · 1 year, 8 months ago
  70. a7077e9 Updateable weights in depthwise convolution by Milos Puzovic · 1 year, 8 months ago
  71. 199982f Add threshold for floating-point SOFT_RELU activation by Milos Puzovic · 1 year, 8 months ago
  72. 4b5f6ef Add check for Batch Matmul in GemmAssemblyDispatch by Mohammed Suhail Munshi · 1 year, 9 months ago
  73. 910e3f9 Fix fixed-point quantized addition by Viet-Hoa Do · 1 year, 9 months ago
  74. 9fc0b5c Update reinterpret tensor as 1D for CPU add by Viet-Hoa Do · 1 year, 9 months ago
  75. 6782452 Add test in GEMMLowp for batch matmul by Mohammed Suhail Munshi · 1 year, 9 months ago
  76. 0a36f58 Fix FFTConvolutionLayer test by Viet-Hoa Do · 1 year, 9 months ago
  77. fa79fda Optimize Neon™ Logistic Activation by Mohammed Suhail Munshi · 1 year, 10 months ago
  78. c8cc024 Adding documentation section explaining how BF16 is used by Ramy Elgammal · 1 year, 9 months ago
  79. 6670413 Fix LUT-based activation layer by Viet-Hoa Do · 1 year, 9 months ago
  80. 842ad21 Optimize Neon™ SUB operator by squashing execution window by Jakub Sujak · 1 year, 10 months ago
  81. 304dfdb Fix Batch Matmul nightly failure by Adnan AlSinan · 1 year, 10 months ago
  82. 40b4419 Optimize CPU add layer on quantized data by Viet-Hoa Do · 1 year, 9 months ago
  83. d6b8a71 Add FP32 Neon™ swish activation by Jonathan Deakin · 1 year, 10 months ago
  84. ead4d11 Fix unresolved symbol for target armv7a + Android by Pablo Marquez Tello · 1 year, 10 months ago
  85. 622b8ad Fix bug in QASYMM8_SIGNED to F32 cast layer by Viet-Hoa Do · 1 year, 10 months ago
  86. c4f2743 Optimize Quantized/Integer Bilinear Scale for Neon™ by Gunes Bayir · 1 year, 10 months ago
  87. 0d05b66 Interpreting tensor as 1D for CPU multiplication by Viet-Hoa Do · 1 year, 10 months ago
  88. 926f502 Adding GELU activation by Murray Kornelsen · 2 years ago
  89. 6e09e14 INT8 Quantized MeanStdDevNorm (LayerNorm) by Murray Kornelsen · 2 years ago
  90. 26c9d1a Add test for NEGEMM to test a batched matrix multiplication with variable input tensors by Adnan AlSinan · 1 year, 10 months ago
  91. 0eed305 Optimize FP32/16 Bilinear Scale Kernel for Neon™ by Gunes Bayir · 1 year, 10 months ago
  92. e4e3b2e Disable Winograd on fp16 if fast-math = false by Ramy Elgammal · 1 year, 10 months ago
  93. 552fe4c F16 Specialization for MeanStdDevNorm by Murray Kornelsen · 2 years ago
  94. a331e48 Fix add for tensors with non-matching strides by Jonathan Deakin · 1 year, 11 months ago
  95. 53929b1 Use Neon™ kernels for FP Bilinear Resize for SVE by Gunes Bayir · 1 year, 11 months ago
  96. 29db3d2 Add LUT for quantized sigmoid function by Viet-Hoa Do · 1 year, 11 months ago
  97. 65c8db8 Fix for AI benchmark ResNet regression by Viet-Hoa Do · 1 year, 11 months ago
  98. 93581a5 [ONCPUML-970] Fast math mode for fixed format kernels by Pablo Marquez Tello · 2 years ago
  99. 13b623e [ONCPUML-968] Fixed format kernel support in additional APIs by Milos Puzovic · 2 years ago
  100. 9b921be Optimize add layer by considering the input tensors as 1D array by Gunes Bayir · 2 years ago