- fadc9b1 Optimize CpuSoftmaxKernel for axis=0 by Gunes Bayir · 9 months ago
- c5ab4df Optimize CpuGemmConv2d start-up time by SiCong Li · 10 months ago
- 02c452f Add Dynamic Quantization tests to Fully Connected Layer by Mohammed Suhail Munshi · 9 months ago
- 93a77cd Use dynamic quantization in Convolution and Dilated Convolution tests by Gunes Bayir · 10 months ago
- 4ea9bac Fix memory Error in Reverse Fixture. by Adnan AlSinan · 10 months ago
- 0b72aa4 Optimize NEStackLayer by Gunes Bayir · 10 months ago
- 3af4c9b Optimize CL and Neon depthwise convolution tests by Gunes Bayir · 10 months ago
- 745153b NEDeconvolutionLayer validation fix by Pablo Marquez Tello · 10 months ago
- c2a51bd Optimize CL and Neon Winograd tests by Gunes Bayir · 10 months ago
- bdcb4c1 Implement tflite compliant reverse for CPU by Adnan AlSinan · 11 months ago
- 40a9d3e Remove deprecated support for BF16 in CpuCast by Adnan AlSinan · 11 months ago
- c85edf1 Make zip and combine variadic by Viet-Hoa Do · 11 months ago
- b1fcb41 Disable NEArgMinMaxLayer RunSmall_F32_S64 for armv7a by Pablo Marquez Tello · 11 months ago
- eb5696d Optimize CpuReshapeKernel by Anitha Raj · 1 year, 1 month ago
- 29e27b0 Add support for S64 output in NEArgMinMaxLayer by Pablo Marquez Tello · 1 year ago
- 78da34c Fix failure in MeanReduce layer by Viet-Hoa Do · 12 months ago
- 4cb0bd4 Improved testing for ArgMinMax by Pablo Marquez Tello · 1 year ago
- 4a1c917 Add support for input S64/U64 in CpuCastKernel by Pablo Marquez Tello · 1 year, 1 month ago
- 314d3e2 Break up core/Utils.h to reduce unused code being included everywhere by Matthew Bentham · 1 year, 1 month ago
- 019a7d9 Enable transpose convolution with non-square kernels by Viet-Hoa Do · 1 year, 1 month ago
- 48cfd5f Refactor activation LUT computation by Pablo Marquez Tello · 1 year, 2 months ago
- bae01a5 Raise tolerance number for NEDeconvolutionLayer fp16 tests by SiCong Li · 1 year, 3 months ago
- 318782b Remove inclusion of NEReorderKernel header from NEReorderLayer by Ramy Elgammal · 1 year, 3 months ago
- 0ed0ddd Fix validation issue with CPU scale on FP16 by Viet-Hoa Do · 1 year, 3 months ago
- cd8b40d Guards to make NEReorder aarch64 only by David Svantesson · 1 year, 3 months ago
- 3b162e5 Reorder added by David Svantesson · 1 year, 4 months ago
- af15076 Only define validation test tolerance for quantized types in case of aarch64 for Neon™ Matmul by Ramy Elgammal · 1 year, 3 months ago
- 05a65e3 Disable Neon/MatMul/Quantized for armv7a by Ramy Elgammal · 1 year, 3 months ago
- 9c7c2d2 Add quantized support for CPU MatMul by Viet-Hoa Do · 1 year, 4 months ago
- b84df20 Fix unhandled case in ElementwiseUnary by Ramy Elgammal · 1 year, 4 months ago
- 9b0a6b4 Fix dynamic weights for CPU connected layer by Viet-Hoa Do · 1 year, 4 months ago
- 4e84f24 Increase tolerance for SME kernels by Viet-Hoa Do · 1 year, 4 months ago
- a1b1e41 Implement MatMul Function and Operator with Floating Point support for CPU by Mohammed Suhail Munshi · 1 year, 5 months ago
- 8b7f42a Enable quantized data types for CpuElementwiseUnary on Armv7a by Ramy Elgammal · 1 year, 4 months ago
- 8893e45 Add cropping support to NEBatchToSpace by SiCong Li · 1 year, 5 months ago
- 732c1b2 Fix GCC13 compiler errors by Pablo Marquez Tello · 1 year, 4 months ago
- 573a33f Add additional FP16 guards to Convolution Layer by Nathan John Sircombe · 1 year, 4 months ago
- fd472f0 Add quantized support for unary elementwise in CPU by Viet-Hoa Do · 1 year, 5 months ago
- a3e57c2 Add dynamic weights for CPU fully connected layer by Viet-Hoa Do · 1 year, 5 months ago
- bbf2e74 Add support for kernel indices in Maxpool by Adnan AlSinan · 1 year, 5 months ago
- 4537089 Fixes for CMake and Bazel builds, tests failing in scons by David Svantesson · 1 year, 6 months ago
- f4230aa Fix DeconvolutionLayer tolerance issues in FP16 tests by Gunes Bayir · 1 year, 6 months ago
- d04528b Disable AddMulAdd armv7a tests by Gunes Bayir · 1 year, 6 months ago
- 5b9d223 Fix GEMMLowp/Batched MatMul mismatches on CPU by Mohammed Suhail Munshi · 1 year, 6 months ago
- ae72a46 Add new operator AddMulAdd for Neon™ backend for Float/Quantized types by Gunes Bayir · 1 year, 6 months ago
- 464ed20 Remove fixed format strides hack by Jonathan Deakin · 1 year, 7 months ago
- 13bab71 Fix ClGemm crashes on unsupported data types by SiCong Li · 1 year, 7 months ago
- 6bcdc57 Deprecated BF16 support in DepthConvert by Pablo Marquez Tello · 1 year, 7 months ago
- 1b2f868 Fix CL DirectConvolutionLayer validate tests by SiCong Li · 1 year, 7 months ago
- 97a609b Fix GemmLowp BatchMatMul Tests to use quantized Outputs by Mohammed Suhail Munshi · 1 year, 10 months ago
- d158609 SVE Hard-Swish via Lookup table for quantized input by Pablo Marquez Tello · 2 years, 2 months ago
- a7077e9 Updateable weights in depthwise convolution by Milos Puzovic · 1 year, 9 months ago
- 9fc0b5c Update reinterpret tensor as 1D for CPU add by Viet-Hoa Do · 1 year, 10 months ago
- 6782452 Add test in GEMMLowp for batch matmul by Mohammed Suhail Munshi · 1 year, 10 months ago
- 304dfdb Fix Batch Matmul nightly failure by Adnan AlSinan · 1 year, 11 months ago
- 40b4419 Optimize CPU add layer on quantized data by Viet-Hoa Do · 1 year, 11 months ago
- d6b8a71 Add FP32 Neon™ swish activation by Jonathan Deakin · 2 years ago
- c4f2743 Optimize Quantized/Integer Bilinear Scale for Neon™ by Gunes Bayir · 1 year, 11 months ago
- 926f502 Adding GELU activation by Murray Kornelsen · 2 years, 1 month ago
- 6e09e14 INT8 Quantized MeanStdDevNorm (LayerNorm) by Murray Kornelsen · 2 years, 1 month ago
- a4814e8 Add test case for disable Winograd on fp16 if fast-math = false by Ramy Elgammal · 1 year, 11 months ago
- 26c9d1a Add test for NEGEMM to test a batched matrix multiplication with variable input tensors by Adnan AlSinan · 1 year, 11 months ago
- 93581a5 [ONCPUML-970] Fast math mode for fixed format kernels by Pablo Marquez Tello · 2 years, 1 month ago
- 9b921be Optimize add layer by considering the input tensors as 1D array by Gunes Bayir · 2 years ago
- 9178002 Fix for inclusion of "arm_gemm" from src into "Types.h" from core by Ramy Elgammal · 2 years, 1 month ago
- d208f4f Enable march=armv8.6-a in non multi-isa builds by Pablo Marquez Tello · 2 years, 1 month ago
- 553f695 [ONCPUML-951] Variable weight support for Convolution. by Francesco Petrogalli · 2 years, 1 month ago
- 29cab36 Fixed clang-cl errors on Windows native builds. by Pablo Tello · 2 years, 5 months ago
- 894659a Add support for 2d and 3d indices for axis 1 by Pablo Marquez Tello · 2 years, 3 months ago
- d75cd8a Compute Hard-Swish with a Lookup table for qasymm8. by Pablo Marquez Tello · 2 years, 2 months ago
- dc4f276 Revert "Add support for 2d and 3d indices for axis 0" by Mohammed Suhail Munshi · 2 years, 3 months ago
- 920f2b6 Add support for 2d and 3d indices for axis 0 by Pablo Marquez Tello · 2 years, 3 months ago
- f55cca5 Add LU_BOUNDED_RELU support for QSYMM16 by Pablo Marquez Tello · 2 years, 4 months ago
- 9104cd5 Add support for int8 CpuPool3d by Adnan AlSinan · 2 years, 4 months ago
- 4c17ba9 Fix Nightly build failure by Adnan AlSinan · 2 years, 4 months ago
- 171fc3d Add CPU Pool3d FP16/32 implementation by Adnan AlSinan · 2 years, 5 months ago
- 298b2c0 Decouple castKernel by Yair Schwarzbaum · 2 years, 6 months ago
- c9e519d Decouple CpuDirectConv2dKernel by alerah01 · 2 years, 6 months ago
- ebbae94 Decouple CpuDepthwiseConv2dNativeKernel by Dana Zlotnik · 2 years, 6 months ago
- 256ac62 Decouple CpuGemmMatrixMultiplyKernel and CpuGemmMatrixAdditionKernel by Dana Zlotnik · 2 years, 6 months ago
- 149203b Port MaxUnpoolingLayer kernel and add KernelSelect validation test by Dana Zlotnik · 2 years, 6 months ago
- 6a2df88 Add kernel selection UT for submitted kernels by Dana Zlotnik · 2 years, 7 months ago
- 6863fa0 Remove deprecated remap functions. by Adnan AlSinan · 2 years, 6 months ago
- 0ef2c21 Remove padding from CpuDirectConv2dKernel by Adnan AlSinan · 2 years, 6 months ago
- 5ae8d80 Enable kernel selection testing (Phase #1) by Giorgio Arena · 2 years, 9 months ago
- c4270cf Add tests for FP Cpu Pooling where pool region is completely outside the input by SiCongLi · 2 years, 8 months ago
- 4d44ac8 Added support for filter size 8x8 NCHW DirectConv by Pablo Marquez Tello · 2 years, 8 months ago
- 8b9ce8c Increase FP16 tolerance for BatchNormalizationLayer by Freddie Liardet · 2 years, 9 months ago
- 69df64f Improve conv3d validation by Freddie Liardet · 2 years, 9 months ago
- f727ef4 Add uint8/int8 support to cpu conv3d by Freddie Liardet · 2 years, 10 months ago
- 841c3e9 Fix unused variable issue by Sheri Zhang · 2 years, 10 months ago
- 6d9c982 Conv3d support by Sheri Zhang · 2 years, 11 months ago
- ded3663 Remove padding in cpuPool2d NCHW by Freddie Liardet · 3 years ago
- 8b8405a Optimize CpuScale NHWC F32/F16 by Gian Marco Iodice · 2 years, 10 months ago
- 63e0beb Add support for non-constant weights and biases in CpuFullyConnected by Giorgio Arena · 2 years, 11 months ago
- 9ac7b99 Revert "Add support for non-constant weights and biases in CpuFullyConnected" by Pablo Marquez Tello · 2 years, 11 months ago
- aed63ee Add support for non-constant weights and biases in CpuFullyConnected by Michele Di Giorgio · 3 years ago
- 9dc558f Review all shapes in datasets to account for padding removal Part 1 by Gian Marco Iodice · 3 years, 9 months ago
- 7891a73 Move CPU/GPU files from Core/Runtime to the respective backend folders by Georgios Pinitas · 3 years ago
- 1988463 Rename [Cl|Cpu]GemmConvolution to [Cl|Gpu]GemmConv2d by Georgios Pinitas · 3 years ago