Add pre-commit support for sanity checks

Use the pre-commit framework [2] to run black and flake8 before each
commit. Both tools are managed by the pre-commit framework and can also
be run manually with the `pre-commit run` command.
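
As an illustration (the exact commands are documented in README.md and
the hook id below is an assumption, not taken from this change), the
checks are hooked into git and run manually roughly like this:

    # install the git hook so the checks run on every commit
    pre-commit install

    # run all configured checks against the whole tree
    pre-commit run --all-files

    # run a single hook, e.g. flake8 only
    pre-commit run flake8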

Fix the code base with the help of black and flake8.
Fix the import statements according to the PEP 8 guidelines [1].
Both tools are configured with the following settings in the pre-commit
configuration file (see the sketch after this list):
* line length: 120 characters
* directories to exclude: ethosu/vela/tflite/ and ethosu/vela/ethos_u55_regs
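
For illustration only, a minimal .pre-commit-config.yaml expressing these
settings could look roughly as follows; the repository URLs, revisions and
exclude pattern are assumptions and may differ from the file added here:

    # sketch of a pre-commit configuration with the settings listed above
    exclude: '^ethosu/vela/(tflite|ethos_u55_regs)/'
    repos:
      - repo: https://github.com/psf/black
        rev: 19.10b0                    # illustrative revision
        hooks:
          - id: black
            args: [--line-length=120]
      - repo: https://gitlab.com/pycqa/flake8
        rev: 3.7.9                      # illustrative revision
        hooks:
          - id: flake8
            args: [--max-line-length=120]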

Update README.md with instructions on how to install pre-commit and how
to run the sanity checks.
Update the Pipenv files to include the new pre-commit dependencies.
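
As a rough example of the workflow described in README.md (the exact
steps there may differ), the development dependencies, including
pre-commit, can be installed with Pipenv along these lines:

    pipenv install --dev   # install dev dependencies, presumably incl. pre-commit
    pipenv shell           # enter the virtualenv, then run `pre-commit install`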

[1]: https://www.python.org/dev/peps/pep-0008/#imports
[2]: https://github.com/pre-commit/pre-commit

Change-Id: I304d9fffdf019d390ffa396a529c8a7c2437f63d
Signed-off-by: Diego Russo <diego.russo@arm.com>
diff --git a/ethosu/vela/weight_compressor.py b/ethosu/vela/weight_compressor.py
index 9219724..ee554b5 100644
--- a/ethosu/vela/weight_compressor.py
+++ b/ethosu/vela/weight_compressor.py
@@ -18,12 +18,11 @@
 # Description:
 # Compresses and pads the weigths. It also calculates the scales and packs with the biases.
 
-import os
-import sys
-import enum
 import math
-import numpy as np
 from collections import namedtuple
+
+import numpy as np
+
 from .numeric_util import round_up
 from .scaling import quantise_scale, reduced_quantise_scale
 from .tensor import TensorPurpose, TensorSubPurpose, TensorFormat, TensorBlockTraversal
@@ -44,7 +43,7 @@
 
     # pad with 0xFF as needed so the length of the weight stream
     # is a multiple of 16
-  
+
     while (len(compressed) % 16) != 0:
         compressed.append(0xFF)
 
@@ -348,7 +347,7 @@
 
     for sg in nng.subgraphs:
         for ps in sg.passes:
-            if ps.weight_tensor != None:
+            if ps.weight_tensor is not None:
                 npu_usage_of_tensor = find_npu_usage_of_tensor(ps.weight_tensor)
                 if npu_usage_of_tensor == NpuBlockType.ConvolutionDepthWise:
                     ps.weight_tensor.quant_values = np.transpose(ps.weight_tensor.quant_values, (0, 1, 3, 2))
@@ -382,7 +381,7 @@
                     src_tens.weight_compression_scales = ps.weight_tensor.weight_compression_scales
                     src_tens.weight_compressed_offsets = ps.weight_tensor.weight_compressed_offsets
 
-            if ps.scale_tensor != None:
+            if ps.scale_tensor is not None:
                 rescale_for_faf = False
                 activation_ops = set(("Sigmoid", "Tanh"))
                 if (ps.ops[-1].type in activation_ops) and (ps.npu_block_type != NpuBlockType.ElementWise):