- 94abde4 Add Fused Activation to OpenCL MatMul by Mohammed Suhail Munshi · 1 year, 2 months ago
- 043613f Break up Utils.h a bit to reduce unused code being included everywhere by Matthew Bentham · 1 year, 2 months ago
- f1aeab9 Break up arm_compute/core/Types.h a bit by Matthew Bentham · 1 year, 2 months ago
- 48cfd5f Refactor activation LUT computation by Pablo Marquez Tello · 1 year, 2 months ago
- 48c0ed9 Fix ScaleKernel validate method. by Pablo Marquez Tello · 1 year, 3 months ago
- c0463a2 Move lut kernel to sve2 category by SiCong Li · 1 year, 3 months ago
- a8db612 Re-enable dynamic weights in Neon™ depthwise convolution by Ramy Elgammal · 1 year, 3 months ago
- e9b3ee2 Connect CLMatMul function to quantized kernels and resolve NE BatchMatMul int8 failures by Jakub Sujak · 1 year, 4 months ago
- edafe7f Disable dynamic weights in unsupported operators by Viet-Hoa Do · 1 year, 3 months ago
- 5713294 Fix im2col for fast-maths mode with padding. by Renato Arantes · 1 year, 3 months ago
- 54e52a9 Fix CPU MatMul broadcast detection by Viet-Hoa Do · 1 year, 3 months ago
- a62129a Fix fully connected and matmul mismatches by Viet-Hoa Do · 1 year, 3 months ago
- dba672c Integrate multi-threaded pretranspose_B_array by SiCong Li · 1 year, 4 months ago
- a07c01b NETranspose 8x8 kernel for 32-bit elements by Ethan Doe · 1 year, 4 months ago
- 9c7c2d2 Add quantized support for CPU MatMul by Viet-Hoa Do · 1 year, 4 months ago
- b84df20 Fix unhandled case in ElementwiseUnary by Ramy Elgammal · 1 year, 4 months ago
- 9b0a6b4 Fix dynamic weights for CPU connected layer by Viet-Hoa Do · 1 year, 4 months ago
- a1b1e41 Implement MatMul Function and Operator with Floating Point support for CPU by Mohammed Suhail Munshi · 1 year, 5 months ago
- 8b7f42a Enable quantized data types for CpuElementwiseUnary on Armv7a by Ramy Elgammal · 1 year, 4 months ago
- 732c1b2 Fix GCC13 compiler errors by Pablo Marquez Tello · 1 year, 4 months ago
- fd472f0 Add quantized support for unary elementwise in CPU by Viet-Hoa Do · 1 year, 5 months ago
- 20cfa45 Round to nearest with ties away from zero in Relu by Pablo Marquez Tello · 1 year, 5 months ago
- a3e57c2 Add dynamic weights for CPU fully connected layer by Viet-Hoa Do · 1 year, 5 months ago
- 0ffc88b [ONCPUML-1174] Allow src/weights mismatch for fixed format by Jonathan Deakin · 1 year, 5 months ago
- 1fe48ca NEGEMMLowpMatrixMultiplyCore should be configured for optimized int8 kernel. by Ethan Doe · 1 year, 5 months ago
- bbf2e74 Add support for kernel indices in Maxpool by Adnan AlSinan · 1 year, 6 months ago
- 227db8d Add an option to use lowest for max-pooling by Adnan AlSinan · 1 year, 6 months ago
- 7d9a626 Update CPU kernels to remove x19 and w19 by Michael Tyler · 1 year, 6 months ago
- 4e2bbbb Add support for dilation > 1 in assembly DepthwiseConvolution by Pablo Marquez Tello · 1 year, 7 months ago
- fbe94da Fix armv7a failing GEMMConvolutionLayer tests by Mohammed Suhail Munshi · 1 year, 6 months ago
- 5b9d223 Fix GEMMLowp/Batched MatMul mismatches on CPU by Mohammed Suhail Munshi · 1 year, 6 months ago
- ae72a46 Add new operator AddMulAdd for Neon™ backend for Float/Quantized types by Gunes Bayir · 1 year, 6 months ago
- 464ed20 Remove fixed format strides hack by Jonathan Deakin · 1 year, 7 months ago
- 1b6377b Add broadcast batched matmul validation cases by SiCong Li · 1 year, 7 months ago
- be13cea Revert "Update CPU kernels to remove x19" by Michael Tyler · 1 year, 7 months ago
- ba20975 Update CPU kernels to remove x19 by Michael Tyler · 1 year, 8 months ago
- 6bcdc57 Deprecated BF16 support in DepthConvert by Pablo Marquez Tello · 1 year, 7 months ago
- 939b21a Use CPU quantized addition kernel for quantized subtraction by Omar Al Khatib · 1 year, 8 months ago
- 9d7b690 Fixed various mismatches in CpuCastKernel by Pablo Marquez Tello · 1 year, 8 months ago
- f16973b Fix build error for unused variables in data type specific builds by Gunes Bayir · 1 year, 8 months ago
- e112ef1 ONCPUML-1072: Remove double definition of get_mws for Mul kernel by fadara01 · 1 year, 9 months ago
- 73bb6b7 ONCPUML-1072: Tuned MWS values (for N1, V1) for binary operators used by oneDNN by Fadi Arafeh · 1 year, 10 months ago
- 8307ecf Fix regression caused by mws in ActivationLayer by Mohammed Suhail Munshi · 1 year, 9 months ago
- b230a1f Fixed Arm NN unit test failure caused by quantised multiplication patch. by Omar Al Khatib · 1 year, 9 months ago
- d4a9cc0 Fix CPU multiplication layer threading overhead by Viet-Hoa Do · 1 year, 9 months ago
- d158609 SVE Hard-Swish via Lookup table for quantized input by Pablo Marquez Tello · 2 years, 2 months ago
- 605a928 Optimize CPU mul layer on quantized data by Omar Al Khatib · 1 year, 9 months ago
- 0ae31d1 Fix fixed-point quantized addition by Viet-Hoa Do · 1 year, 9 months ago
- a7077e9 Updateable weights in depthwise convolution by Milos Puzovic · 1 year, 9 months ago
- 199982f Add threshold for floating-point SOFT_RELU activation by Milos Puzovic · 1 year, 9 months ago
- 4b5f6ef Add check for Batch Matmul in GemmAssemblyDispatch by Mohammed Suhail Munshi · 1 year, 10 months ago
- 910e3f9 Fix fixed-point quantized addition by Viet-Hoa Do · 1 year, 10 months ago
- 9fc0b5c Update reinterpret tensor as 1D for CPU add by Viet-Hoa Do · 1 year, 10 months ago
- 6782452 Add test in GEMMLowp for batch matmul by Mohammed Suhail Munshi · 1 year, 10 months ago
- 0a36f58 Fix FFTConvolutionLayer test by Viet-Hoa Do · 1 year, 10 months ago
- fa79fda Optimize Neon™ Logistic Activation by Mohammed Suhail Munshi · 1 year, 11 months ago
- c8cc024 Adding documentation section explaining how BF16 is used by Ramy Elgammal · 1 year, 10 months ago
- 6670413 Fix LUT-based activation layer by Viet-Hoa Do · 1 year, 10 months ago
- 842ad21 Optimize Neon™ SUB operator by squashing execution window by Jakub Sujak · 1 year, 11 months ago
- 304dfdb Fix Batch Matmul nightly failure by Adnan AlSinan · 1 year, 11 months ago
- 40b4419 Optimize CPU add layer on quantized data by Viet-Hoa Do · 1 year, 11 months ago
- d6b8a71 Add FP32 Neon™ swish activation by Jonathan Deakin · 2 years ago
- ead4d11 Fix unresolved symbol for target armv7a + Android by Pablo Marquez Tello · 1 year, 11 months ago
- 622b8ad Fix bug in QASYMM8_SIGNED to F32 cast layer by Viet-Hoa Do · 1 year, 11 months ago
- c4f2743 Optimize Quantized/Integer Bilinear Scale for Neon™ by Gunes Bayir · 1 year, 11 months ago
- 0d05b66 Interpreting tensor as 1D for CPU multiplication by Viet-Hoa Do · 1 year, 11 months ago
- 926f502 Adding GELU activation by Murray Kornelsen · 2 years, 1 month ago
- 6e09e14 INT8 Quantized MeanStdDevNorm (LayerNorm) by Murray Kornelsen · 2 years, 1 month ago
- 26c9d1a Add test for NEGEMM to test a batched matrix multiplication with variable input tensors by Adnan AlSinan · 1 year, 11 months ago
- 0eed305 Optimize FP32/16 Bilinear Scale Kernel for Neon™ by Gunes Bayir · 1 year, 11 months ago
- e4e3b2e Disable Winograd on fp16 if fast-math = false by Ramy Elgammal · 1 year, 11 months ago
- 552fe4c F16 Specialization for MeanStdDevNorm by Murray Kornelsen · 2 years, 1 month ago
- a331e48 Fix add for tensors with non-matching strides by Jonathan Deakin · 2 years ago
- 53929b1 Use Neon™ kernels for FP Bilinear Resize for SVE by Gunes Bayir · 2 years ago
- 29db3d2 Add LUT for quantized sigmoid function by Viet-Hoa Do · 2 years ago
- 65c8db8 Fix for AI benchmark ResNet regression by Viet-Hoa Do · 2 years ago
- 93581a5 [ONCPUML-970] Fast math mode for fixed format kernels by Pablo Marquez Tello · 2 years, 1 month ago
- 13b623e [ONCPUML-968] Fixed format kernel support in additional APIs by Milos Puzovic · 2 years ago
- 9b921be Optimize add layer by considering the input tensors as 1D array by Gunes Bayir · 2 years ago
- aa52b7d Fix compilation error raised in Nightly_NEW by Ramy Elgammal · 2 years ago
- 9178002 Fix for inclusion of "arm_gemm" from src into "Types.h" from core by Ramy Elgammal · 2 years, 1 month ago
- d208f4f Enable march=armv8.6-a in non multi-isa builds by Pablo Marquez Tello · 2 years, 1 month ago
- 553f695 [ONCPUML-951] Variable weight support for Convolution. by Francesco Petrogalli · 2 years, 1 month ago
- a1f7851 Integrate new winograd APIs from MLTech by ramelg01 · 2 years, 1 month ago
- e417ff1 Fix build errors on armv8.6 SVE2 with NDK 23 and 24 by Michalis Spyrou · 2 years, 1 month ago
- 16aa474 Wrong arguments for running activation function in CpuGemmDirectConv2d by Michalis Spyrou · 2 years, 1 month ago
- b042e39 Add LUT-based leaky relu for QASYMM8 on CPU by Viet-Hoa Do · 2 years, 2 months ago
- 41eb2d9 Improve LUT Neon Hard-Swish by Pablo Marquez Tello · 2 years, 1 month ago
- 700b913 Select neon LUT Hard-Swish kernel on all devices by Pablo Marquez Tello · 2 years, 2 months ago
- f1f7779 Fix SVE2 implementation of quantized SoftMax 1D by Viet-Hoa Do · 2 years, 2 months ago
- c3bc093 Fix crash in CpuActivationKernel by Pablo Marquez Tello · 2 years, 2 months ago
- d75cd8a Compute Hard-Swish with a Lookup table for qasymm8. by Pablo Marquez Tello · 2 years, 2 months ago
- 5fcf22d [arm_gemm] Import fixed-format kernels from gemm_linux. by Francesco.Petrogalli@arm.com · 2 years, 4 months ago
- 168d6a8 Use svcreate instead of list initializations. by Michalis Spyrou · 2 years, 3 months ago
- facd9dd Add a missing validation check to CPU Pool3d by Adnan AlSinan · 2 years, 3 months ago
- c827e99 Update Neon™ pooling kernel by ramelg01 · 2 years, 4 months ago
- f55cca5 Add LU_BOUNDED_RELU support for QSYMM16 by Pablo Marquez Tello · 2 years, 4 months ago
- fa6877f [CpuGemmConv2d] Extract skip_im2col and skip_col2im computation. by Francesco.Petrogalli@arm.com · 2 years, 4 months ago
- 9104cd5 Add support for int8 CpuPool3d by Adnan AlSinan · 2 years, 4 months ago
- 5d606cc Fix CpuGemmAssemblyDispatch::has_opt_impl. by Francesco.Petrogalli@arm.com · 2 years, 4 months ago