1. 36a75da [ONCPUML-1451] Add matmul kernel to enable bf16 to bf16 operations via PyTorch® autocast() function by Renato Arantes · 5 months ago
2. 8f4b3df Fix clang-tidy errors by Jakub Sujak · 8 months ago
3. 945b8da Make test fixture setup methods not be templated by Matthew Bentham · 12 months ago
4. 94abde4 Add Fused Activation to OpenCL MatMul by Mohammed Suhail Munshi · 1 year, 1 month ago
5. e9b3ee2 Connect CLMatMul function to quantized kernels and resolve NE BatchMatMul int_8 failures by Jakub Sujak · 1 year, 3 months ago
6. a62129a Fix fully connected and matmul mismatches by Viet-Hoa Do · 1 year, 2 months ago
7. 9c7c2d2 Add quantized support for CPU MatMul by Viet-Hoa Do · 1 year, 3 months ago
8. a1b1e41 Implement MatMul Function and Operator with Floating Point support for CPU by Mohammed Suhail Munshi · 1 year, 3 months ago
9. f26ea2f Implement MatMul Function by Ramy Elgammal · 1 year, 3 months ago