- 043613f Break up Utils.h a bit to reduce unused code being included everywhere by Matthew Bentham · 1 year, 2 months ago
- f1aeab9 Break up arm_compute/core/Types.h a bit by Matthew Bentham · 1 year, 2 months ago
- a8db612 Re-enable dynamic weights in Neon™ depthwise convolution by Ramy Elgammal · 1 year, 3 months ago
- e9b3ee2 Connect CLMatMul function to quantized kernels and resolve NE BatchMatMul int_8 failures by Jakub Sujak · 1 year, 3 months ago
- edafe7f Disable dynamic weights in unsupported operators by Viet-Hoa Do · 1 year, 3 months ago
- 5713294 Fix im2col for fast-maths mode with padding. by Renato Arantes · 1 year, 3 months ago
- 54e52a9 Fix CPU MatMul broadcast detection by Viet-Hoa Do · 1 year, 3 months ago
- a62129a Fix fully connected and matmul mismatches by Viet-Hoa Do · 1 year, 3 months ago
- dba672c Integrate multi-threaded pretranspose_B_array by SiCong Li · 1 year, 4 months ago
- 9c7c2d2 Add quantized support for CPU MatMul by Viet-Hoa Do · 1 year, 4 months ago
- 9b0a6b4 Fix dynamic weights for CPU connected layer by Viet-Hoa Do · 1 year, 4 months ago
- a1b1e41 Implement MatMul Function and Operator with Floating Point support for CPU by Mohammed Suhail Munshi · 1 year, 4 months ago
- a3e57c2 Add dynamic weights for CPU fully connected layer by Viet-Hoa Do · 1 year, 5 months ago
- 0ffc88b [ONCPUML-1174] Allow src/weights mismatch for fixed format by Jonathan Deakin · 1 year, 5 months ago
- 1fe48ca NEGEMMLowpMatrixMultiplyCore should be configured for optimized int8 kernel. by Ethan Doe · 1 year, 5 months ago
- bbf2e74 Add support for kernel indices in Maxpool by Adnan AlSinan · 1 year, 5 months ago
- ae72a46 Add new operator AddMulAdd for Neon™ backend for Float/Quantized types by Gunes Bayir · 1 year, 6 months ago
- 464ed20 Remove fixed format strides hack by Jonathan Deakin · 1 year, 7 months ago
- 1b6377b Add broadcast batched matmul validation cases by SiCong Li · 1 year, 7 months ago
- 6bcdc57 Deprecate BF16 support in DepthConvert by Pablo Marquez Tello · 1 year, 7 months ago
- a7077e9 Updateable weights in depthwise convolution by Milos Puzovic · 1 year, 9 months ago
- 4b5f6ef Add check for Batch Matmul in GemmAssemblyDispatch by Mohammed Suhail Munshi · 1 year, 9 months ago
- 9fc0b5c Update reinterpret tensor as 1D for CPU add by Viet-Hoa Do · 1 year, 9 months ago
- fa79fda Optimize Neon™ Logistic Activation by Mohammed Suhail Munshi · 1 year, 10 months ago
- c8cc024 Adding documentation section explaining how BF16 is used by Ramy Elgammal · 1 year, 10 months ago
- 842ad21 Optimize Neon™ SUB operator by squashing execution window by Jakub Sujak · 1 year, 11 months ago
- c4f2743 Optimize Quantized/Integer Bilinear Scale for Neon™ by Gunes Bayir · 1 year, 11 months ago
- 0d05b66 Interpreting tensor as 1D for CPU multiplication by Viet-Hoa Do · 1 year, 11 months ago
- 26c9d1a Add test for NEGEMM to test a batched matrix multiplication with variable input tensors by Adnan AlSinan · 1 year, 11 months ago
- 0eed305 Optimize FP32/16 Bilinear Scale Kernel for Neon™ by Gunes Bayir · 1 year, 11 months ago
- e4e3b2e Disable Winograd on fp16 if fast-math = false by Ramy Elgammal · 1 year, 11 months ago
- 65c8db8 Fix for AI benchmark ResNet regression by Viet-Hoa Do · 2 years ago
- 93581a5 [ONCPUML-970] Fast math mode for fixed format kernels by Pablo Marquez Tello · 2 years ago
- 13b623e [ONCPUML-968] Fixed format kernel support in additional APIs by Milos Puzovic · 2 years ago
- 9b921be Optimize add layer by considering the input tensors as 1D array by Gunes Bayir · 2 years ago
- aa52b7d Fix compilation error raised in Nightly_NEW by Ramy Elgammal · 2 years ago
- 9178002 Fix for inclusion of "arm_gemm" from src into "Types.h" from core by Ramy Elgammal · 2 years ago
- d208f4f Enable march=armv8.6-a in non multi-isa builds by Pablo Marquez Tello · 2 years ago
- 553f695 [ONCPUML-951] Variable weight support for Convolution. by Francesco Petrogalli · 2 years, 1 month ago
- a1f7851 Integrate new winograd APIs from MLTech by ramelg01 · 2 years, 1 month ago
- 16aa474 Wrong arguments for running activation function in CpuGemmDirectConv2d by Michalis Spyrou · 2 years, 1 month ago
- 5fcf22d [arm_gemm] Import fixed-format kernels from gemm_linux. by Francesco.Petrogalli@arm.com · 2 years, 4 months ago
- 168d6a8 Use svcreate instead of list initializations. by Michalis Spyrou · 2 years, 3 months ago
- fa6877f [CpuGemmConv2d] Extract skip_im2col and skip_col2im computation. by Francesco.Petrogalli@arm.com · 2 years, 4 months ago
- 9104cd5 Add support for int8 CpuPool3d by Adnan AlSinan · 2 years, 4 months ago
- 5d606cc Fix CpuGemmAssemblyDispatch::has_opt_impl. by Francesco.Petrogalli@arm.com · 2 years, 4 months ago
- e33c556 [arm_gemm] Use static validate to find arm_gemm kernels. by Francesco.Petrogalli@arm.com · 2 years, 4 months ago
- 171fc3d Add CPU Pool3d FP16/32 implementation by Adnan AlSinan · 2 years, 5 months ago
- 193cad3 Remove deprecated interface from arm_compute. by Francesco.Petrogalli@arm.com · 2 years, 5 months ago
- 149203b Port MaxUnpoolingLayer kernel and add KernelSelect validation test by Dana Zlotnik · 2 years, 6 months ago
- 46d44d2 Enable kernel selection testing (Phase #2) by Yair Schwarzbaum · 2 years, 7 months ago
- f2c022e Enable fast_math in CpuFullyConnected by cfRod · 2 years, 9 months ago
- f727ef4 Add uint8/int8 support to cpu conv3d by Freddie Liardet · 2 years, 9 months ago
- 5dda217 DirectConv3d support refine by Sheri Zhang · 2 years, 10 months ago
- 6d9c982 Conv3d support by Sheri Zhang · 2 years, 10 months ago
- ded3663 Remove padding in cpuPool2d NCHW by Freddie Liardet · 2 years, 11 months ago
- 63e0beb Add support for non-constant weights and biases in CpuFullyConnected by Giorgio Arena · 2 years, 10 months ago
- 3ae3d88 Provide logging for configure functions in all cpu operators by ramelg01 · 2 years, 11 months ago
- 9ac7b99 Revert "Add support for non-constant weights and biases in CpuFullyConnected" by Pablo Marquez Tello · 2 years, 11 months ago
- 2f9ae16 Avoid checking on biases' constantness if nullptr by Giorgio Arena · 2 years, 11 months ago
- aed63ee Add support for non-constant weights and biases in CpuFullyConnected by Michele Di Giorgio · 3 years ago
- e920d6a Printing operators parameters, currently for CpuAdd operator only. by Ramy Elgammal · 3 years ago
- 7891a73 Move CPU/GPU files from Core/Runtime to the respective backend folders by Georgios Pinitas · 3 years ago