Refactor performance measurements

Change 'Inference runtime' to measure CPU cycles for the
TensorFlow Lite Micro interpreter.Invoke() call.

Add an 'Operator(s) runtime' print that summarizes the cycles
spent on all operators during an inference. (This is equivalent
to the previously reported 'Inference runtime'.)

Move prints out of the EndEvent() function in ArmProfiler, as
printing there otherwise interferes with the inference cycle
measurement.

Change-Id: Ie11b5abb5b12a3bcf5a67841f04834d05dfd796d
diff --git a/applications/inference_process/include/inference_process.hpp b/applications/inference_process/include/inference_process.hpp
index 9635884..fc54ae0 100644
--- a/applications/inference_process/include/inference_process.hpp
+++ b/applications/inference_process/include/inference_process.hpp
@@ -52,6 +52,7 @@
     std::vector<DataPtr> input;
     std::vector<DataPtr> output;
     std::vector<DataPtr> expectedOutput;
+    uint64_t cpuCycles{0};
     size_t numBytesToPrint;
     void *externalContext;