MLBEDSW-2589: Skip weight compression for CPU ops

This commit fixes a bug where CPU ops were being
passed on as NPU ops in weight_compressor.py
because Tensor.find_npu_op() incorrectly returned
any consumer op with an 'npu_block_type' attribute
(which every op has) as an NPU op. The check now
uses op.run_on_npu instead.

Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I7a758f8d1b1237907816bc1be7b77aff765ae688
diff --git a/ethosu/vela/tensor.py b/ethosu/vela/tensor.py
index 312e8f3..c41a7eb 100644
--- a/ethosu/vela/tensor.py
+++ b/ethosu/vela/tensor.py
@@ -626,7 +626,7 @@
         for op in self.consumers():
             if op.type == "DMA":
                 return op.outputs[0].find_npu_op()
-            if "npu_block_type" in op.attrs:
+            if op.run_on_npu:
                 return op
             return None
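
For context, here is a minimal, self-contained sketch (not Vela code:
the Op and Tensor classes and op type names below are illustrative
stand-ins, and the DMA branch is omitted) showing why the old
attribute check matched CPU ops while the new run_on_npu check
does not:

    class Op:
        """Illustrative stand-in for a Vela Operation."""
        def __init__(self, op_type, run_on_npu):
            self.type = op_type
            self.run_on_npu = run_on_npu
            # Every op carries this attribute, which is why the old
            # `"npu_block_type" in op.attrs` test also matched CPU ops.
            self.attrs = {"npu_block_type": 0}

    class Tensor:
        """Illustrative stand-in for a Vela Tensor."""
        def __init__(self, consumer_ops):
            self._consumers = consumer_ops

        def consumers(self):
            return self._consumers

        def find_npu_op(self):
            # Mirrors the fixed check: only an op scheduled to run on
            # the NPU qualifies (DMA handling omitted for brevity).
            for op in self.consumers():
                if op.run_on_npu:
                    return op
                return None

    cpu_weights = Tensor([Op("Gather", run_on_npu=False)])
    npu_weights = Tensor([Op("Conv2D", run_on_npu=True)])

    assert cpu_weights.find_npu_op() is None      # skipped by weight compression
    assert npu_weights.find_npu_op() is not None  # compressed as before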