Conformance: simple backward version and binary file output support

Add --test-version option to tosa_verif_conformance_generator to
select the version of tests to output.
Add --output-type option to allow json, binary or both file types to
be created during conformance generation.
Fix passing of schema_path to the test runner.
Add explicit verify lib path argument to the test runner.
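
For example, to generate v0.60.0 tests with both JSON and binary files
(a sketch; the path assumes the default build location used in the README):

  tosa_verif_conformance_generator \
    --ref-model-path reference_model/build/reference_model/tosa_reference_model \
    --test-version v0.60.0 \
    --output-type both \
    --operators add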

Change-Id: I5f1ad137d713fca408a98470ea77bddf8916c5f3
Signed-off-by: Jeremy Johnson <jeremy.johnson@arm.com>
diff --git a/README.md b/README.md
index 4b74b6e..30773d7 100644
--- a/README.md
+++ b/README.md
@@ -444,9 +444,10 @@
 
 **DEPRECATION NOTES:**
 
-* The repository *TOSA conformance tests - <https://git.mlplatform.org/tosa/conformance_tests.git/>* - has been DEPRECATED, tests need to be
-generated using the script detailed in this section.
-* The framework tests are DEPRECATED as part of the conformance testing, so there is no need to follow the TOSA Framework Unit Tests instructions above for this section.
+* The repository *TOSA conformance tests - <https://git.mlplatform.org/tosa/conformance_tests.git/>* -
+has been DEPRECATED; tests must now be generated using the script detailed in this section.
+* The framework tests are DEPRECATED as part of the conformance testing, so there
+is no need to follow the TOSA Framework Unit Tests instructions above for this section.
 
 #### Setup
 
@@ -457,31 +458,53 @@
 
 These are the main script options for controlling the types of tests produced:
 
-* `--profile` - controls the TOSA profile, only `base` - for base inference tests - is fully supported, but other options are `main` - for the floating point main inference tests - or `all` - for both.
-* `--unit-tests` - choose which tests to produce, only `operator` should be used as `framework` (and `both`) tests are DEPRECATED.
+* `--profile` - controls the TOSA profile; only `tosa-bi` - for base inference tests -
+is fully supported. The other options are `tosa-mi` - for the floating point main
+inference tests - or `all` - for both.
+* `--unit-tests` - choose which tests to produce; only `operator` should be used, as
+`framework` (and `both`) tests are DEPRECATED.
 * `--test-type` - selects `positive`, `negative` or `both` types of test.
-
+* `--output-type` - selects the output file type: `json`, `binary` or `both`.
+The default - `json` - converts numpy data files and flatbuffer files into JSON for
+ease of viewing and comparison (a binary output example is shown further below).
 
 An example to create the TOSA operator unit tests for ADD and SUB:
 
 ```bash
-tosa_verif_conformance_generator        \
-  --profile base                        \
-  --ref-model-directory reference_model \
-  --operator add sub
+tosa_verif_conformance_generator \
+  --profile tosa-bi              \
+  --ref-model-path reference_model/build/reference_model/tosa_reference_model \
+  --operators add sub
 ```
 
 The above command will create some temporary files in a `conformance_build`
 directory, but will output the conformance unit tests into a `conformance`
 directory.
 
+If you have a different build directory for the reference model, you may have
+to supply one or more of the following options for path locations:
+
+* `--ref-model-path` - path to the `tosa_reference_model` executable.
+* `--schema-path` or `--operator-fbs` - path to the TOSA flatbuffer schema file (`tosa.fbs`).
+* `--flatc-path` - path to the flatbuffers compiler `flatc`.
+
+This is an example using the default locations:
+
+```bash
+tosa_verif_conformance_generator \
+  --ref-model-path reference_model/build/reference_model/tosa_reference_model \
+  --flatc-path reference_model/build/thirdparty/serialization_lib/third_party/flatbuffers/flatc \
+  --schema-path reference_model/thirdparty/serialization_lib/schema/tosa.fbs \
+  --operators abs
+```
+
 This next example will create all the conformance tests, using different
 temporary build and output directories:
 
 ```bash
-tosa_verif_conformance_generator        \
-  --ref-model-directory reference_model \
-  --build-directory tmp_build           \
+tosa_verif_conformance_generator \
+  --ref-model-path reference_model/build/reference_model/tosa_reference_model \
+  --build-directory tmp_build    \
   --output-directory conf_tests
 ```
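+
+To keep the generated flatbuffer and numpy data files in their binary form
+instead of converting them to JSON, select a different `--output-type`. This
+is a sketch using the same default build location as the examples above:
+
+```bash
+tosa_verif_conformance_generator \
+  --profile tosa-bi              \
+  --ref-model-path reference_model/build/reference_model/tosa_reference_model \
+  --output-type binary           \
+  --operators add sub
+```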
 
@@ -512,7 +535,7 @@
 ``` bash
 # After compiling the reference model (in the build directory)
 cd thirdparty/serialization_lib/third_party/flatbuffers
-make
+make flatc
 ```
 
 
diff --git a/scripts/convert2conformance/convert2conformance.py b/scripts/convert2conformance/convert2conformance.py
index 171ec3e..531dca8 100755
--- a/scripts/convert2conformance/convert2conformance.py
+++ b/scripts/convert2conformance/convert2conformance.py
@@ -30,6 +30,12 @@
 
 PROFILES_LIST = ["tosa-bi", "tosa-mi"]
 
+OUTPUT_TYPE_JSON = "json"
+OUTPUT_TYPE_BINARY = "binary"
+OUTPUT_TYPE_BOTH = "both"
+OUTPUT_TYPES = (OUTPUT_TYPE_JSON, OUTPUT_TYPE_BINARY, OUTPUT_TYPE_BOTH)
+OUTPUT_TYPE_DEFAULT = OUTPUT_TYPE_JSON
+
 
 def parse_args(argv):
     """Parse the arguments."""
@@ -48,14 +54,14 @@
         dest="schema_path",
         type=Path,
         required=True,
-        help=("Path to reference model schema."),
+        help="Path to reference model schema.",
     )
     parser.add_argument(
         "--flatc-path",
         dest="flatc_path",
         type=Path,
         required=True,
-        help=("Path to flatc executable."),
+        help="Path to flatc executable.",
     )
     parser.add_argument(
         "--output-directory",
@@ -65,6 +71,13 @@
         help="Output directory (default is conformance in CWD)",
     )
     parser.add_argument(
+        "--output-type",
+        dest="output_type",
+        choices=OUTPUT_TYPES,
+        default=OUTPUT_TYPE_DEFAULT,
+        help=f"Output file type produced (default is {OUTPUT_TYPE_DEFAULT})",
+    )
+    parser.add_argument(
         "--framework",
         dest="framework",
         choices=["tflite"],
@@ -135,30 +148,48 @@
     return name
 
 
-def convert_flatbuffer_file(flatc: Path, schema: Path, model_file: Path, output: Path):
-    """Convert the flatbuffer binary into JSON."""
-    try:
-        fbbin_to_json(flatc, schema, model_file, output)
-    except Exception as e:
-        logger.error(f"Failed to convert flatbuffer binary:\n{e}")
-        return None
+def convert_flatbuffer_file(
+    output_type: str, flatc: Path, schema: Path, model_file: Path, output: Path
+):
+    """Convert and/or copy the flatbuffer binary."""
+    if output_type in (OUTPUT_TYPE_JSON, OUTPUT_TYPE_BOTH):
+        try:
+            fbbin_to_json(flatc, schema, model_file, output)
+        except Exception as e:
+            logger.error(f"Failed to convert flatbuffer binary:\n{e}")
+            return None
 
-    if model_file.name == "model.tflite":
-        file_name = "model-tflite.json"
-        os.rename(output / "model.json", output / file_name)
-    else:
-        file_name = model_file.stem + ".json"
+        if model_file.name == "model.tflite":
+            file_name = "model-tflite.json"
+            os.rename(output / "model.json", output / file_name)
+        else:
+            file_name = model_file.stem + ".json"
+    if output_type in (OUTPUT_TYPE_BINARY, OUTPUT_TYPE_BOTH):
+        try:
+            shutil.copy(model_file, output)
+        except Exception as e:
+            logger.error(f"Failed to copy flatbuffer binary:\n{e}")
+            return None
+        # By default return the binary name (if we have created both)
+        file_name = model_file.name
+
     return output / file_name
 
 
-def convert_numpy_file(n_file: Path, output: Path, outname: Optional[str] = None):
-    """Convert a numpy file into a JSON file."""
-    j_file = output / (outname if outname else (n_file.stem + ".json"))
-    npy_to_json(n_file, j_file)
-    return j_file
+def convert_numpy_file(
+    output_type: str, npy_file: Path, output: Path, outstem: Optional[str] = None
+):
+    """Convert and/or copy the numpy file."""
+    if output_type in (OUTPUT_TYPE_JSON, OUTPUT_TYPE_BOTH):
+        new_file = output / ((outstem if outstem else npy_file.stem) + ".json")
+        npy_to_json(npy_file, new_file)
+    if output_type in (OUTPUT_TYPE_BINARY, OUTPUT_TYPE_BOTH):
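+        # Copy the numpy data file unchanged, renaming it to the requested stem if given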
+        new_file = output / ((outstem + ".npy") if outstem else npy_file.name)
+        shutil.copy(npy_file, new_file)
 
 
 def update_desc_json(
+    output_type: str,
     test_dir: Path,
     test_desc,
     output_dir: Optional[Path] = None,
@@ -184,7 +215,9 @@
                     ofm_refmodel = ofm_path.with_suffix(NAME_REFMODEL_RUN_RESULT_SUFFIX)
                 # Create conformance result
                 if ofm_refmodel.is_file():
-                    convert_numpy_file(ofm_refmodel, output_dir, outname=cfm + ".json")
+                    convert_numpy_file(
+                        output_type, ofm_refmodel, output_dir, outstem=cfm
+                    )
                 else:
                     logger.error(f"Missing result file {ofm_path}")
                     return None
@@ -297,7 +330,11 @@
     # Convert the TOSA flatbuffer binary
     tosa_filename = desc_filename.parent / test_desc["tosa_file"]
     tosa_filename = convert_flatbuffer_file(
-        args.flatc_path, args.schema_path, tosa_filename, args.output_dir
+        args.output_type,
+        args.flatc_path,
+        args.schema_path,
+        tosa_filename,
+        args.output_dir,
     )
     if not tosa_filename:
         # Failed to convert the file, json2fbbin will have printed an error
@@ -309,7 +346,11 @@
     if framework_conversion and framework_filename:
         # Convert the framework flatbuffer binary
         framework_filename = convert_flatbuffer_file(
-            args.flatc_path, framework_schema, framework_filename, args.output_dir
+            args.output_type,
+            args.flatc_path,
+            framework_schema,
+            framework_filename,
+            args.output_dir,
         )
         if not framework_filename:
             # Failed to convert the file, json2fbbin will have printed an error
@@ -322,7 +363,7 @@
             path = desc_filename.parent / file
             ifm_files.append(path.name)
             if path.is_file():
-                convert_numpy_file(path, args.output_dir)
+                convert_numpy_file(args.output_type, path, args.output_dir)
             else:
                 if not args.lazy_data_generation:
                     logger.error(f"Missing input file {path.name}")
@@ -346,6 +387,7 @@
 
     # Update desc.json and convert result files to JSON
     test_desc = update_desc_json(
+        args.output_type,
         desc_filename.parent,
         test_desc,
         output_dir=args.output_dir,
diff --git a/verif/conformance/tosa_verif_conformance_generator.py b/verif/conformance/tosa_verif_conformance_generator.py
index c9a0b3a..4281fc2 100644
--- a/verif/conformance/tosa_verif_conformance_generator.py
+++ b/verif/conformance/tosa_verif_conformance_generator.py
@@ -8,7 +8,7 @@
   settings in the .json files.
 - Tests are selected to produce a good coverage.
 - Tests are run on the reference model to produce the correct output files.
-- Tests are converted into JSON format and saved to desired output directory.
+- Tests are converted to JSON and/or copied, then saved to the desired output directory.
 """
 import argparse
 import copy
@@ -26,6 +26,8 @@
 import conformance.model_files as cmf
 from conformance.test_select import Operator
 from convert2conformance.convert2conformance import main as c2c_main
+from convert2conformance.convert2conformance import OUTPUT_TYPE_DEFAULT
+from convert2conformance.convert2conformance import OUTPUT_TYPES
 from distutils.dir_util import copy_tree
 
 logging.basicConfig()
@@ -51,6 +53,10 @@
 # standard group will have negative tests generated for it
 STANDARD_GENERATOR_GROUP = "standard"
 
+TEST_VERSION_LATEST = "latest"
+TEST_VERSION_V0_60_0 = "v0.60.0"
+TEST_VERSIONS = (TEST_VERSION_LATEST, TEST_VERSION_V0_60_0)
+
 
 class GenConformanceError(Exception):
     """Generation error reporting exception."""
@@ -214,12 +220,14 @@
         return
 
     num_cores = args.num_cores
-    run_tests_cmd = "tosa_verif_run_tests"
 
-    ref_cmd_base = ref_cmd = [
-        run_tests_cmd,
+    # Use the test runner
+    ref_cmd_base = [
+        "tosa_verif_run_tests",
         "--ref-model-path",
         str(args.ref_model_path.absolute()),
+        "--schema-path",
+        str(args.schema_path.absolute()),
         "-j",
         str(num_cores),
         "-v",
@@ -243,7 +251,7 @@
             )
             continue
         ref_cmd = ref_cmd_base.copy()
-        ref_cmd.append(str(test))
+        ref_cmd.append(str(test.absolute()))
         ref_cmds.append(ref_cmd)
 
     fail_string = "UNEXPECTED_FAILURE"
@@ -280,13 +288,14 @@
     trim_op_subdir=False,
     tags=None,
 ):
-    """Convert tests to JSON and save to output directory."""
+    """Convert/copy tests to output directory."""
     if group:
         output_dir = output_dir / group
 
     c2c_args_base = ["--strict"]
     c2c_args_base.extend(["--schema-path", str(args.schema_path)])
     c2c_args_base.extend(["--flatc-path", str(args.flatc_path)])
+    c2c_args_base.extend(["--output-type", args.output_type])
     # This op maybe in more than one profile - e.g. tosa_bi and tosa_mi
     # even if we are only producing tests for tosa_mi
     for op_profile in op_profiles_list:
@@ -349,7 +358,7 @@
         logger.error(f"Stopping due to {failed_counter} test conversion errors")
         raise (GenConformanceError())
 
-    logger.info("Converted tests to JSON and saved to output directory")
+    logger.info("Converted/copied tests and saved to output directory")
 
     return output_dir
 
@@ -535,6 +544,20 @@
         ),
     )
     parser.add_argument(
+        "--test-version",
+        dest="test_version",
+        choices=TEST_VERSIONS,
+        default=TEST_VERSION_LATEST,
+        help=f"Version of the tests to produce (default is {TEST_VERSION_LATEST})",
+    )
+    parser.add_argument(
+        "--output-type",
+        dest="output_type",
+        choices=OUTPUT_TYPES,
+        default=OUTPUT_TYPE_DEFAULT,
+        help=f"Output file type produced (default is {OUTPUT_TYPE_DEFAULT})",
+    )
+    parser.add_argument(
         "--seed",
         dest="random_seed",
         default=DEFAULT_SEED,
@@ -778,6 +801,10 @@
                         )
                         continue
 
+                    if args.test_version == TEST_VERSION_V0_60_0 and op in ("dim",):
+                        logger.warning(f"{op} is not in {args.test_version} - skipping")
+                        continue
+
                     op_profiles_list = test_params[op]["profile"]
                     if (
                         args.profile != PROFILES_ALL
diff --git a/verif/runner/tosa_test_runner.py b/verif/runner/tosa_test_runner.py
index 30a7168..b348f50 100644
--- a/verif/runner/tosa_test_runner.py
+++ b/verif/runner/tosa_test_runner.py
@@ -4,7 +4,6 @@
 import json
 from enum import IntEnum
 
-import conformance.model_files as cmf
 import schemavalidation.schemavalidation as sch
 from checker.color_print import LogColors
 from checker.color_print import print_color
@@ -71,9 +70,7 @@
         self.testDir = str(testDirPath)
         self.testDirPath = testDirPath
         self.testName = self.testDirPath.name
-        self.verify_lib_path = cmf.find_tosa_file(
-            cmf.TosaFileType.VERIFY_LIBRARY, args.ref_model_path
-        )
+        self.verify_lib_path = args.verify_lib_path
 
         set_print_in_color(not args.no_color)
         # Stop the result checker printing anything - we will do it
diff --git a/verif/runner/tosa_verif_run_tests.py b/verif/runner/tosa_verif_run_tests.py
index d1755e6..54cb7b2 100644
--- a/verif/runner/tosa_verif_run_tests.py
+++ b/verif/runner/tosa_verif_run_tests.py
@@ -52,6 +52,15 @@
         help="Path to TOSA reference model executable",
     )
     parser.add_argument(
+        "--verify-lib-path",
+        dest="verify_lib_path",
+        type=Path,
+        help=(
+            "Path to TOSA verify library. Defaults to "
+            "the library in the directory of `ref-model-path`"
+        ),
+    )
+    parser.add_argument(
         "--operator-fbs",
         "--schema-path",
         dest="schema_path",
@@ -365,6 +374,10 @@
         args.ref_model_path = cmf.find_tosa_file(
             cmf.TosaFileType.REF_MODEL, Path("reference_model"), False
         )
+    if args.verify_lib_path is None:
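+        # Default to the verify library found relative to the reference model path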
+        args.verify_lib_path = cmf.find_tosa_file(
+            cmf.TosaFileType.VERIFY_LIBRARY, args.ref_model_path
+        )
     if args.flatc_path is None:
         args.flatc_path = cmf.find_tosa_file(
             cmf.TosaFileType.FLATC, args.ref_model_path