MLBEDSW-4338 Randomized int16 PAD output diff

The issue was that the AveragePool operations in these test
cases were translated to DepthwiseConv2DBias, and int16
convolutions always run with reduced scale. Fixed so that
reduced scale is not used in this case.
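
For context, the reduced scale keeps only 16 significant bits of
the quantised multiplier instead of the full 31, matching the
reference kernels' int16-with-int64-bias path but losing precision
otherwise. An illustrative sketch of the distinction, assuming the
behaviour of the helpers named in the diff (not Vela's exact
implementation; overflow handling omitted):

    import math

    def quantise_scale(scale):
        # Full precision: express scale as multiplier * 2**-shift
        # with a 31-bit significand.
        significand, exponent = math.frexp(scale)
        multiplier = int(round(significand * (1 << 31)))
        shift = 31 - exponent
        return multiplier, shift

    def reduced_quantise_scale(scale):
        # Reduced precision: round the multiplier to 16 significant
        # bits, as the reference does for int16 ops with int64 bias.
        multiplier, shift = quantise_scale(scale)
        multiplier = ((multiplier + (1 << 15)) >> 16) << 16
        return multiplier, shift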

Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: Ice956eabbb37c8aa1991464870006971c6ecec43
diff --git a/ethosu/vela/weight_compressor.py b/ethosu/vela/weight_compressor.py
index 78c4351..db225fb 100644
--- a/ethosu/vela/weight_compressor.py
+++ b/ethosu/vela/weight_compressor.py
@@ -275,7 +275,8 @@
         quantised_scales = [(int(m), int(s)) for s, m in zip(explicit_scaling.shift, explicit_scaling.multiplier)]
     else:
         # quantise all of the weight scales into (scale_factor, shift)
-        if ifm_dtype == DataType.int16:
+        if ifm_dtype == DataType.int16 and bias_tens.dtype == DataType.int64:
+            # Reference uses reduced scaling for int16 with int64 bias
             quantised_scales = [reduced_quantise_scale(scale) for scale in scales]
         else:
             quantised_scales = [quantise_scale(scale) for scale in scales]