Commit history for review.mlplatform.org/ml/ComputeLibrary/21fb2ad16a30a5ff29929515abe28c14b2c6b5a1/tests/validation/NEON/MatMul.cpp
4908981  [ONCPUML-1451] Guard bf16 to bf16 tests with ARM_COMPUTE_ENABLE_FIXED_FORMAT_KERNELS  (Renato Arantes, 4 months ago)
36a75da  [ONCPUML-1451] Add matmul kernel to enable bf16 to bf16 operations via PyTorch® autocast() function  (Renato Arantes, 5 months ago)
43ba0dd  Increase MatMul and DilatedConv test Q8 thresholds to 1  (Gunes Bayir, 4 months ago)
2aec5f1  Fix tolerance issue in BF16 MatMul tests  (Gunes Bayir, 6 months ago)
c85edf1  Make zip and combine variadic  (Viet-Hoa Do, 10 months ago)
af15076  Only define validation test tolerance for quantized types in case of aarch64 for Neon™ Matmul  (Ramy Elgammal, 1 year, 2 months ago)
05a65e3  Disable Neon/MatMul/Quantized for armv7a  (Ramy Elgammal, 1 year, 3 months ago)
9c7c2d2  Add quantized support for CPU MatMul  (Viet-Hoa Do, 1 year, 3 months ago)
a1b1e41  Implement MatMul Function and Operator with Floating Point support for CPU  (Mohammed Suhail Munshi, 1 year, 4 months ago)