1. ae72a46 Add new operator AddMulAdd for Neon™ backend for Float/Quantized types by Gunes Bayir · 1 year, 5 months ago
  2. 464ed20 Remove fixed format strides hack by Jonathan Deakin · 1 year, 6 months ago
  3. 1b6377b Add broadcast batched matmul validation cases by SiCong Li · 1 year, 6 months ago
  4. 6bcdc57 Deprecated BF16 support in DepthConvert by Pablo Marquez Tello · 1 year, 6 months ago
  5. a7077e9 Updateable weights in depthwise convolution by Milos Puzovic · 1 year, 8 months ago
  6. 4b5f6ef Add check for Batch Matmul in GemmAssemblyDispatch by Mohammed Suhail Munshi · 1 year, 8 months ago
  7. 9fc0b5c Update reinterpret tensor as 1D for CPU add by Viet-Hoa Do · 1 year, 8 months ago
  8. fa79fda Optimize Neon™ Logistic Activation by Mohammed Suhail Munshi · 1 year, 9 months ago
  9. c8cc024 Adding documentation section explaining how BF16 is used by Ramy Elgammal · 1 year, 9 months ago
  10. 842ad21 Optimize Neon™ SUB operator by squashing execution window by Jakub Sujak · 1 year, 10 months ago
  11. c4f2743 Optimize Quantized/Integer Bilinear Scale for Neon™ by Gunes Bayir · 1 year, 10 months ago
  12. 0d05b66 Interpreting tensor as 1D for CPU multiplication by Viet-Hoa Do · 1 year, 10 months ago
  13. 26c9d1a Add test for NEGEMM to test a batched matrix multiplication with variable input tensors by Adnan AlSinan · 1 year, 10 months ago
  14. 0eed305 Optimize FP32/16 Bilinear Scale Kernel for Neon™ by Gunes Bayir · 1 year, 10 months ago
  15. e4e3b2e Disable Winograd on fp16 if fast-math = false by Ramy Elgammal · 1 year, 10 months ago
  16. 65c8db8 Fix for AI benchmark ResNet regression by Viet-Hoa Do · 1 year, 11 months ago
  17. 93581a5 [ONCPUML-970] Fast math mode for fixed format kernels by Pablo Marquez Tello · 2 years ago
  18. 13b623e [ONCPUML-968] Fixed format kernel support in additional APIs by Milos Puzovic · 1 year, 11 months ago
  19. 9b921be Optimize add layer by considering the input tensors as 1D array by Gunes Bayir · 1 year, 11 months ago
  20. aa52b7d Fix compilation error raised in Nightly_NEW by Ramy Elgammal · 1 year, 11 months ago
  21. 9178002 Fix for inclusion of "arm_gemm" from src into "Types.h" from core by Ramy Elgammal · 2 years ago
  22. d208f4f Enable march=armv8.6-a in non multi-isa builds by Pablo Marquez Tello · 2 years ago
  23. 553f695 [ONCPUML-951] Variable weight support for Convolution. by Francesco Petrogalli · 2 years ago
  24. a1f7851 Integrate new winograd APIs from MLTech by ramelg01 · 2 years ago
  25. 16aa474 Wrong arguments for running activation function in CpuGemmDirectConv2d by Michalis Spyrou · 2 years ago
  26. 5fcf22d [arm_gemm] Import fixed-format kernels from gemm_linux. by Francesco.Petrogalli@arm.com · 2 years, 3 months ago
  27. 168d6a8 Use svcreate instead of list initializations. by Michalis Spyrou · 2 years, 2 months ago
  28. fa6877f [CpuGemmConv2d] Extract skip_im2col and skip_col2im computation. by Francesco.Petrogalli@arm.com · 2 years, 3 months ago
  29. 9104cd5 Add support for int8 CpuPool3d by Adnan AlSinan · 2 years, 3 months ago
  30. 5d606cc Fix CpuGemmAssemblyDispatch::has_opt_impl. by Francesco.Petrogalli@arm.com · 2 years, 3 months ago
  31. e33c556 [arm_gemm] Use static validate to find arm_gemm kernels. by Francesco.Petrogalli@arm.com · 2 years, 3 months ago
  32. 171fc3d Add CPU Pool3d FP16/32 implementation by Adnan AlSinan · 2 years, 4 months ago
  33. 193cad3 Remove deprecated interface from arm_compute. by Francesco.Petrogalli@arm.com · 2 years, 4 months ago
  34. 149203b Port MaxUnpoolingLayer kernel and add KernelSelect validation test by Dana Zlotnik · 2 years, 5 months ago
  35. 46d44d2 Enable kernel selection testing (Phase #2) by Yair Schwarzbaum · 2 years, 6 months ago
  36. f2c022e Enable fast_math in CpuFullyConnected by cfRod · 2 years, 8 months ago
  37. f727ef4 Add uint8/int8 support to cpu conv3d by Freddie Liardet · 2 years, 9 months ago
  38. 5dda217 DirectConv3d support refine by Sheri Zhang · 2 years, 9 months ago
  39. 6d9c982 Conv3d support by Sheri Zhang · 2 years, 9 months ago
  40. ded3663 Remove padding in cpuPool2d NCHW by Freddie Liardet · 2 years, 10 months ago
  41. 63e0beb Add support for non-constant weights and biases in CpuFullyConnected by Giorgio Arena · 2 years, 9 months ago
  42. 3ae3d88 Provide logging for configure functions in all cpu operators by ramelg01 · 2 years, 10 months ago
  43. 9ac7b99 Revert "Add support for non-constant weights and biases in CpuFullyConnected" by Pablo Marquez Tello · 2 years, 10 months ago
  44. 2f9ae16 Avoid checking on biases' constantness if nullptr by Giorgio Arena · 2 years, 10 months ago
  45. aed63ee Add support for non-constant weights and biases in CpuFullyConnected by Michele Di Giorgio · 3 years ago
  46. e920d6a Printing operators parameters, currently for CpuAdd operator only. by Ramy Elgammal · 2 years, 10 months ago
  47. 7891a73 Move CPU/GPU files from Core/Runtime to the respective backend folders by Georgios Pinitas · 2 years, 10 months ago