1. 227db8d Add an option to use lowest for max-pooling by Adnan AlSinan · 1 year, 5 months ago
  2. 7d9a626 Update CPU kernels to remove x19 and w19 by Michael Tyler · 1 year, 5 months ago
  3. 4e2bbbb Add support for dilation > 1 in assembly DepthwiseConvolution by Pablo Marquez Tello · 1 year, 6 months ago
  4. fbe94da Fix armv7a failing GEMMConvolutionLayer tests by Mohammed Suhail Munshi · 1 year, 5 months ago
  5. 5b9d223 Fix GEMMLowp/Batched MatMul mismatches on CPU by Mohammed Suhail Munshi · 1 year, 5 months ago
  6. ae72a46 Add new operator AddMulAdd for Neon™ backend for Float/Quantized types by Gunes Bayir · 1 year, 5 months ago
  7. 464ed20 Remove fixed format strides hack by Jonathan Deakin · 1 year, 6 months ago
  8. 1b6377b Add broadcast batched matmul validation cases by SiCong Li · 1 year, 6 months ago
  9. be13cea Revert "Update CPU kernels to remove x19" by Michael Tyler · 1 year, 6 months ago
  10. ba20975 Update CPU kernels to remove x19 by Michael Tyler · 1 year, 7 months ago
  11. 6bcdc57 Deprecated BF16 support in DepthConvert by Pablo Marquez Tello · 1 year, 6 months ago
  12. 939b21a Use CPU quantized addition kernel for quantized subtraction by Omar Al Khatib · 1 year, 7 months ago
  13. 9d7b690 Fixed various mismatches in CpuCastKernel by Pablo Marquez Tello · 1 year, 7 months ago
  14. f16973b Fix build error for unused variables in data type specific builds by Gunes Bayir · 1 year, 7 months ago
  15. e112ef1 ONCPUML-1072: Remove double definition of get_mws for Mul kernel by fadara01 · 1 year, 8 months ago
  16. 73bb6b7 ONCPUML-1072: Tuned MWS values (for N1, V1) for binary operators used by oneDNN by Fadi Arafeh · 1 year, 9 months ago
  17. 8307ecf Fix regression caused by mws in ActivationLayer by Mohammed Suhail Munshi · 1 year, 8 months ago
  18. b230a1f Fixed Arm NN unit test failure caused by quantised multiplication patch. by Omar Al Khatib · 1 year, 8 months ago
  19. d4a9cc0 Fix CPU multiplication layer threading overhead by Viet-Hoa Do · 1 year, 8 months ago
  20. d158609 SVE Hard-Swish via Lookup table for quantized input by Pablo Marquez Tello · 2 years, 1 month ago
  21. 605a928 Optimize CPU mul layer on quantized data by Omar Al Khatib · 1 year, 8 months ago
  22. 0ae31d1 Fix fixed-point quantized addition by Viet-Hoa Do · 1 year, 8 months ago
  23. a7077e9 Updateable weights in depthwise convolution by Milos Puzovic · 1 year, 8 months ago
  24. 199982f Add threshold for floating-point SOFT_RELU activation by Milos Puzovic · 1 year, 8 months ago
  25. 4b5f6ef Add check for Batch Matmul in GemmAssemblyDispatch by Mohammed Suhail Munshi · 1 year, 9 months ago
  26. 910e3f9 Fix fixed-point quantized addition by Viet-Hoa Do · 1 year, 9 months ago
  27. 9fc0b5c Update reinterpret tensor as 1D for CPU add by Viet-Hoa Do · 1 year, 9 months ago
  28. 6782452 Add test in GEMMLowp for batch matmul by Mohammed Suhail Munshi · 1 year, 9 months ago
  29. 0a36f58 Fix FFTConvolutionLayer test by Viet-Hoa Do · 1 year, 9 months ago
  30. fa79fda Optimize Neon™ Logistic Activation by Mohammed Suhail Munshi · 1 year, 10 months ago
  31. c8cc024 Adding documentation section explaining how BF16 is used by Ramy Elgammal · 1 year, 9 months ago
  32. 6670413 Fix LUT-based activation layer by Viet-Hoa Do · 1 year, 9 months ago
  33. 842ad21 Optimize Neon™ SUB operator by squashing execution window by Jakub Sujak · 1 year, 10 months ago
  34. 304dfdb Fix Batch Matmul nightly failure by Adnan AlSinan · 1 year, 10 months ago
  35. 40b4419 Optimize CPU add layer on quantized data by Viet-Hoa Do · 1 year, 10 months ago
  36. d6b8a71 Add FP32 Neon™ swish activation by Jonathan Deakin · 1 year, 11 months ago
  37. ead4d11 Fix unresolved symbol for target armv7a + Android by Pablo Marquez Tello · 1 year, 10 months ago
  38. 622b8ad Fix bug in QASYMM8_SIGNED to F32 cast layer by Viet-Hoa Do · 1 year, 10 months ago
  39. c4f2743 Optimize Quantized/Integer Bilinear Scale for Neon™ by Gunes Bayir · 1 year, 10 months ago
  40. 0d05b66 Interpreting tensor as 1D for CPU multiplication by Viet-Hoa Do · 1 year, 10 months ago
  41. 926f502 Adding GELU activation by Murray Kornelsen · 2 years ago
  42. 6e09e14 INT8 Quantized MeanStdDevNorm (LayerNorm) by Murray Kornelsen · 2 years ago
  43. 26c9d1a Add test for NEGEMM to test a batched matrix multiplication with variable input tensors by Adnan AlSinan · 1 year, 10 months ago
  44. 0eed305 Optimize FP32/16 Bilinear Scale Kernel for Neon™ by Gunes Bayir · 1 year, 10 months ago
  45. e4e3b2e Disable Winograd on fp16 if fast-math = false by Ramy Elgammal · 1 year, 10 months ago
  46. 552fe4c F16 Specialization for MeanStdDevNorm by Murray Kornelsen · 2 years ago
  47. a331e48 Fix add for tensors with non-matching strides by Jonathan Deakin · 1 year, 11 months ago
  48. 53929b1 Use Neon™ kernels for FP Bilinear Resize for SVE by Gunes Bayir · 1 year, 11 months ago
  49. 29db3d2 Add LUT for quantized sigmoid function by Viet-Hoa Do · 1 year, 11 months ago
  50. 65c8db8 Fix for AI benchmark ResNet regression by Viet-Hoa Do · 1 year, 11 months ago
  51. 93581a5 [ONCPUML-970] Fast math mode for fixed format kernels by Pablo Marquez Tello · 2 years ago
  52. 13b623e [ONCPUML-968] Fixed format kernel support in additional APIs by Milos Puzovic · 2 years ago
  53. 9b921be Optimize add layer by considering the input tensors as 1D array by Gunes Bayir · 2 years ago
  54. aa52b7d Fix compilation error raised in Nightly_NEW by Ramy Elgammal · 2 years ago
  55. 9178002 Fix for inclusion of "arm_gemm" from src into "Types.h" from core by Ramy Elgammal · 2 years ago
  56. d208f4f Enable march=armv8.6-a in non multi-isa builds by Pablo Marquez Tello · 2 years ago
  57. 553f695 [ONCPUML-951] Variable weight support for Convolution. by Francesco Petrogalli · 2 years ago
  58. a1f7851 Integrate new winograd APIs from MLTech by ramelg01 · 2 years ago
  59. e417ff1 Fix build errors on armv8.6 SVE2 with NDK 23 and 24 by Michalis Spyrou · 2 years ago
  60. 16aa474 Wrong arguments for running activation function in CpuGemmDirectConv2d by Michalis Spyrou · 2 years ago
  61. b042e39 Add LUT-based leaky relu for QASYMM8 on CPU by Viet-Hoa Do · 2 years, 1 month ago
  62. 41eb2d9 Improve LUT Neon Hard-Swish by Pablo Marquez Tello · 2 years ago
  63. 700b913 Select neon LUT Hard-Swish kernel on all devices by Pablo Marquez Tello · 2 years ago
  64. f1f7779 Fix SVE2 implementation of quantized SoftMax 1D by Viet-Hoa Do · 2 years, 1 month ago
  65. c3bc093 Fix crash in CpuActivationKernel by Pablo Marquez Tello · 2 years, 1 month ago
  66. d75cd8a Compute Hard-Swish with a Lookup table for qasymm8. by Pablo Marquez Tello · 2 years, 1 month ago
  67. 5fcf22d [arm_gemm] Import fixed-format kernels from gemm_linux. by Francesco.Petrogalli@arm.com · 2 years, 3 months ago
  68. 168d6a8 Use svcreate instead of list initializations. by Michalis Spyrou · 2 years, 2 months ago
  69. facd9dd Add a missing validation check to CPU Pool3d by Adnan AlSinan · 2 years, 2 months ago
  70. c827e99 Update Neon™ pooling kernel by ramelg01 · 2 years, 3 months ago
  71. f55cca5 Add LU_BOUNDED_RELU support for QSYMM16 by Pablo Marquez Tello · 2 years, 3 months ago
  72. fa6877f [CpuGemmConv2d] Extract skip_im2col and skip_col2im computation. by Francesco.Petrogalli@arm.com · 2 years, 3 months ago
  73. 9104cd5 Add support for int8 CpuPool3d by Adnan AlSinan · 2 years, 3 months ago
  74. 5d606cc Fix CpuGemmAssemblyDispatch::has_opt_impl. by Francesco.Petrogalli@arm.com · 2 years, 3 months ago
  75. e33c556 [arm_gemm] Use static validate to find arm_gemm kernels. by Francesco.Petrogalli@arm.com · 2 years, 3 months ago
  76. 171fc3d Add CPU Pool3d FP16/32 implementation by Adnan AlSinan · 2 years, 4 months ago
  77. a5d61bf NEQLSTM: Add support for QASYMM8_SIGNED for input_to_forget_weights by Pablo Marquez Tello · 2 years, 4 months ago
  78. 193cad3 Remove deprecated interface from arm_compute. by Francesco.Petrogalli@arm.com · 2 years, 4 months ago
  79. 4e66d70 Added windows native build support by Pablo Tello · 2 years, 4 months ago
  80. 17c48f9 Revert mws heuristics for CpuPool2d by Giorgio Arena · 2 years, 4 months ago
  81. 41a729e Decouple fuseBatchNormalizationKernel by Yair Schwarzbaum · 2 years, 8 months ago
  82. 4cbcb84 Removing SVE / SVE2 guards from decoupled kernels by alerah01 · 2 years, 4 months ago
  83. 298b2c0 Decouple castKernel by Yair Schwarzbaum · 2 years, 5 months ago
  84. a538ae5 Multi ISA Technical Debt by Dana Zlotnik · 2 years, 5 months ago
  85. 32fbb07 Fix CpuPool2d regression on A53/A55 due to mws by Giorgio Arena · 2 years, 5 months ago
  86. c9e519d Decouple CpuDirectConv2dKernel by alerah01 · 2 years, 5 months ago
  87. ebbae94 Decouple CpuDepthwiseConv2dNativeKernel by Dana Zlotnik · 2 years, 5 months ago
  88. 5e99318 Decouple NEL2NormalizeLayerKernel by Yair Schwarzbaum · 2 years, 6 months ago
  89. 256ac62 Decouple CpuGemmMatrixMultiplyKernel and CpuGemmMatrixAdditionKernel by Dana Zlotnik · 2 years, 5 months ago
  90. 149203b Port MaxUnpoolingLayer kernel and add KernelSelect validation test by Dana Zlotnik · 2 years, 5 months ago
  91. 6a2df88 Add kernel selection UT for submitted kernels by Dana Zlotnik · 2 years, 6 months ago
  92. d56d94d Fix build for gcc 9 by Giorgio Arena · 2 years, 5 months ago
  93. 9d9ad33 SCons build system refactoring (phase #2). by Motti Gondabi · 2 years, 5 months ago
  94. 46d44d2 Enable kernel selection testing (Phase #2) by Yair Schwarzbaum · 2 years, 6 months ago
  95. 0ef2c21 Remove padding from CpuDirectConv2dKernel by Adnan AlSinan · 2 years, 5 months ago
  96. 21391c3 Fix s10plus NEON/PoolingLayer Nightly failure by Adnan AlSinan · 2 years, 5 months ago
  97. 7195f71 Add OpenBSD/arm64 support. by Kevin Lo · 2 years, 6 months ago
  98. 8a9a0fb Select kernel decoupling by Anton Vainer · 2 years, 6 months ago
  99. 8d8208c Fix nightly failure within the elementwise unary build. by Dana Zlotnik · 2 years, 5 months ago
  100. 62a3b0c DepthwiseConv reports full assembly kernel name by Pablo Marquez Tello · 2 years, 7 months ago