MLBEDSW-6928: Add int16 support for Resize Bilinear HPC

Setting the bias tensor dtype to DataType.int32 solves the rounding
issues for Resize Bilinear half-pixel-centers (HPC) int16.

Removing the input data type check also fixes Resize Nearest Neighbor
int16 ops incorrectly being placed on the CPU.
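
For reference, a rough sketch of how the dtype override is intended to
be consumed inside fixup_bias_tensors. The helper's existing body and
the create_const_tensor call below are paraphrased for illustration
and may not match the repository exactly:

    from ethosu.vela.data_type import DataType
    from ethosu.vela.tensor import create_const_tensor

    def fixup_bias_tensors(op, arch, nng, dtype=None):
        # Implant a zero bias tensor on ops that need a bias but have none.
        if op.type.needs_bias() and op.bias is None:
            nr_biases = op.inputs[1].shape[-1]
            bias_values = [0] * nr_biases
            # Without an explicit dtype, derive it from the IFM: int16 IFMs
            # normally get an int64 bias, everything else int32. Forcing
            # int32 for the Resize Bilinear HPC depthwise conv avoids the
            # rounding issue described above.
            if dtype is None:
                dtype = DataType.int64 if op.ifm.dtype == DataType.int16 else DataType.int32
            bias_tensor = create_const_tensor(
                op.name + "_bias", [1, 1, 1, nr_biases], dtype, bias_values
            )
            op.set_input_tensor(bias_tensor, op.type.info.indices.biases[0])
        return op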

Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
Change-Id: Iee352bcb78e581c0cde3c203dfbe866f1f6fae18
diff --git a/ethosu/vela/tflite_graph_optimiser.py b/ethosu/vela/tflite_graph_optimiser.py
index 27513d3..1310ee6 100644
--- a/ethosu/vela/tflite_graph_optimiser.py
+++ b/ethosu/vela/tflite_graph_optimiser.py
@@ -586,7 +586,7 @@
                 # need to append the bias tensor as resize ops only have 2 inputs
                 assert len(dw_conv.inputs) == 2
                 dw_conv.inputs.append(None)
-                fixup_bias_tensors(dw_conv, None, None)
+                fixup_bias_tensors(dw_conv, None, None, dtype=DataType.int32)
 
                 dw_conv.set_ifm_ofm_shapes()
                 dw_conv = dw_conv.clone(f"_{index}")