1. 4908981 [ONCPUML-1451] Guard bf16 to bf16 tests with ARM_COMPUTE_ENABLE_FIXED_FORMAT_KERNELS by Renato Arantes · 3 months ago
2. 36a75da [ONCPUML-1451] Add matmul kernel to enable bf16 to bf16 operations via PyTorch® autocast() function by Renato Arantes · 5 months ago
3. 43ba0dd Increase MatMul and DilatedConv test Q8 thresholds to 1 by Gunes Bayir · 3 months ago
4. 2aec5f1 Fix tolerance issue in BF16 MatMul tests by Gunes Bayir · 5 months ago
5. c85edf1 Make zip and combine variadic by Viet-Hoa Do · 10 months ago
6. af15076 Only define validation test tolerance for quantized types in case of aarch64 for Neon™ Matmul by Ramy Elgammal · 1 year, 2 months ago
7. 05a65e3 Disable Neon/MatMul/Quantized for armv7a by Ramy Elgammal · 1 year, 2 months ago
8. 9c7c2d2 Add quantized support for CPU MatMul by Viet-Hoa Do · 1 year, 3 months ago
9. a1b1e41 Implement MatMul Function and Operator with Floating Point support for CPU by Mohammed Suhail Munshi · 1 year, 3 months ago