///
/// Copyright (c) 2017-2020 ARM Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
namespace test
{
/**
@page tests Validation and benchmark tests

@tableofcontents

@section tests_overview Overview

Benchmark and validation tests are based on the same framework to set up and
run the tests. In addition to running simple, self-contained test functions,
the framework supports fixtures and data test cases. The former allow sharing
common setup routines between various backends, thus reducing the amount of
duplicated code. The latter can be used to parameterize tests or fixtures with
different inputs, e.g. different tensor shapes. One limitation is that
tests/fixtures cannot be parameterized based on the data type if static type
information is needed within the test (e.g. to validate the results).

@note By default tests are not built. To enable them you need to add `validation_tests=1` and/or `benchmark_tests=1` to your SCons command line.
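
As an illustration, a Linux build that enables both suites might look like the
following (a sketch; options such as `os` and `arch` depend on your toolchain
and target platform):

    scons os=linux arch=arm64-v8a neon=1 opencl=1 validation_tests=1 benchmark_tests=1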

@note Tests are not included in the pre-built binary archive, you have to build them from source.

@subsection tests_overview_structure Directory structure
| 49 | |

    .
    `-- tests                 <- Top level test directory. All files in here are shared among validation and benchmark.
        |-- framework         <- Underlying test framework.
        |-- CL                \
        |-- GLES_COMPUTE       -> Backend specific files with helper functions etc.
        |-- NEON              /
        |-- benchmark         <- Top level directory for the benchmarking files.
        |   |-- fixtures      <- Fixtures for benchmark tests.
        |   |-- CL            <- OpenCL backend test cases on a function level.
        |   |-- GLES_COMPUTE  <- Same for OpenGL ES.
        |   `-- NEON          <- Same for NEON.
        |-- datasets          <- Datasets for benchmark and validation tests.
        |-- main.cpp          <- Main entry point for the tests. Currently shared between validation and benchmarking.
        `-- validation        <- Top level directory for validation files.
            |-- CPP           <- C++ reference code.
            |-- CL            \
            |-- GLES_COMPUTE   -> Backend specific test cases.
            |-- NEON          /
            `-- fixtures      <- Fixtures shared among all backends. Used to set up the target function and tensors.

@subsection tests_overview_fixtures Fixtures

Fixtures can be used to share common setup, teardown or even run tasks among
multiple test cases. For that purpose a fixture can define `setup`,
`teardown` and `run` methods. Additionally the constructor and destructor can
also be customized.

An instance of the fixture is created immediately before the actual test is
executed. After construction the @ref framework::Fixture::setup method is called. Then the test
function or the fixture's `run` method is invoked. After test execution the
@ref framework::Fixture::teardown method is called and lastly the fixture is destructed.

@subsubsection tests_overview_fixtures_fixture Fixture

Fixtures for non-parameterized tests are straightforward. The custom fixture
class has to inherit from @ref framework::Fixture and can implement any of the
`setup`, `teardown` or `run` methods. None of the methods takes any arguments
or returns anything.

    class CustomFixture : public framework::Fixture
    {
        void setup()
        {
            _ptr = malloc(4000);
        }

        void run()
        {
            ARM_COMPUTE_ASSERT(_ptr != nullptr);
        }

        void teardown()
        {
            free(_ptr);
        }

        void *_ptr;
    };

@subsubsection tests_overview_fixtures_data_fixture Data fixture

The advantage of a parameterized fixture is that arguments can be passed to the
setup method at runtime. To make this possible the setup method has to be a
template with a type parameter for every argument (though the template
parameter doesn't have to be used). All other methods remain the same.

    class CustomFixture : public framework::Fixture
    {
    #ifdef ALTERNATIVE_DECLARATION
        template <typename ...>
        void setup(size_t size)
        {
            _ptr = malloc(size);
        }
    #else
        template <typename T>
        void setup(T size)
        {
            _ptr = malloc(size);
        }
    #endif

        void run()
        {
            ARM_COMPUTE_ASSERT(_ptr != nullptr);
        }

        void teardown()
        {
            free(_ptr);
        }

        void *_ptr;
    };

@subsection tests_overview_test_cases Test cases

All of the following test case macros can optionally be prefixed with
`EXPECTED_FAILURE_` or `DISABLED_`.
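
For example, a known-broken test can be kept in the suite but skipped by
default (a sketch; the test name and body are illustrative):

    DISABLED_TEST_CASE(KnownBrokenTestName, DatasetMode::PRECOMMIT)
    {
        // Not executed until the DISABLED_ prefix is removed.
        ARM_COMPUTE_ASSERT_EQUAL(1 + 1, 3);
    }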

@subsubsection tests_overview_test_cases_test_case Test case

A simple test case function taking no inputs and having no (shared) state.

- First argument is the name of the test case (has to be unique within the
  enclosing test suite).
- Second argument is the dataset mode in which the test will be active.


    TEST_CASE(TestCaseName, DatasetMode::PRECOMMIT)
    {
        ARM_COMPUTE_ASSERT_EQUAL(1 + 1, 2);
    }

@subsubsection tests_overview_test_cases_fixture_fixture_test_case Fixture test case

A simple test case function taking no inputs that inherits from a fixture. The
test case will have access to all public and protected members of the fixture.
Only the setup and teardown methods of the fixture will be used. The body of
this function will be used as the test function.

- First argument is the name of the test case (has to be unique within the
  enclosing test suite).
- Second argument is the class name of the fixture.
- Third argument is the dataset mode in which the test will be active.


    class FixtureName : public framework::Fixture
    {
    public:
        void setup() override
        {
            _one = 1;
        }

    protected:
        int _one;
    };

    FIXTURE_TEST_CASE(TestCaseName, FixtureName, DatasetMode::PRECOMMIT)
    {
        ARM_COMPUTE_ASSERT_EQUAL(_one + 1, 2);
    }

@subsubsection tests_overview_test_cases_fixture_register_fixture_test_case Registering a fixture as test case

Allows using a fixture directly as a test case. Instead of defining a new test
function, the `run` method of the fixture is executed.

- First argument is the name of the test case (has to be unique within the
  enclosing test suite).
- Second argument is the class name of the fixture.
- Third argument is the dataset mode in which the test will be active.


    class FixtureName : public framework::Fixture
    {
    public:
        void setup() override
        {
            _one = 1;
        }

        void run() override
        {
            ARM_COMPUTE_ASSERT_EQUAL(_one + 1, 2);
        }

    protected:
        int _one;
    };

    REGISTER_FIXTURE_TEST_CASE(TestCaseName, FixtureName, DatasetMode::PRECOMMIT);

@subsubsection tests_overview_test_cases_data_test_case Data test case

A parameterized test case function that has no (shared) state. The dataset will
be used to generate versions of the test case with different inputs.

- First argument is the name of the test case (has to be unique within the
  enclosing test suite).
- Second argument is the dataset mode in which the test will be active.
- Third argument is the dataset.
- Further arguments specify names of the arguments to the test function. The
  number must match the arity of the dataset.


    DATA_TEST_CASE(TestCaseName, DatasetMode::PRECOMMIT, framework::make("Numbers", {1, 2, 3}), num)
    {
        ARM_COMPUTE_ASSERT(num < 4);
    }

@subsubsection tests_overview_test_cases_fixture_data_test_case Fixture data test case

A parameterized test case that inherits from a fixture. The test case will have
access to all public and protected members of the fixture. Only the setup and
teardown methods of the fixture will be used. The setup method of the fixture
needs to be a template and has to accept inputs from the dataset as arguments.
The body of this function will be used as the test function. The dataset will
be used to generate versions of the test case with different inputs.

- First argument is the name of the test case (has to be unique within the
  enclosing test suite).
- Second argument is the class name of the fixture.
- Third argument is the dataset mode in which the test will be active.
- Fourth argument is the dataset.


    class FixtureName : public framework::Fixture
    {
    public:
        template <typename T>
        void setup(T num)
        {
            _num = num;
        }

    protected:
        int _num;
    };

    FIXTURE_DATA_TEST_CASE(TestCaseName, FixtureName, DatasetMode::PRECOMMIT, framework::make("Numbers", {1, 2, 3}))
    {
        ARM_COMPUTE_ASSERT(_num < 4);
    }

@subsubsection tests_overview_test_cases_register_fixture_data_test_case Registering a fixture as data test case

Allows using a fixture directly as a parameterized test case. Instead of
defining a new test function, the `run` method of the fixture is executed.
The setup method of the fixture needs to be a template and has to accept inputs
from the dataset as arguments. The dataset will be used to generate versions of
the test case with different inputs.

- First argument is the name of the test case (has to be unique within the
  enclosing test suite).
- Second argument is the class name of the fixture.
- Third argument is the dataset mode in which the test will be active.
- Fourth argument is the dataset.


    class FixtureName : public framework::Fixture
    {
    public:
        template <typename T>
        void setup(T num)
        {
            _num = num;
        }

        void run() override
        {
            ARM_COMPUTE_ASSERT(_num < 4);
        }

    protected:
        int _num;
    };

    REGISTER_FIXTURE_DATA_TEST_CASE(TestCaseName, FixtureName, DatasetMode::PRECOMMIT, framework::make("Numbers", {1, 2, 3}));

@section writing_tests Writing validation tests

Before starting a new test case have a look at the existing ones. They should
provide a good overview of how test cases are structured.

- The C++ reference needs to be added to `tests/validation/CPP/`. The
  reference function is typically a template parameterized by the underlying
  value type of the `SimpleTensor`. This makes it easy to specialise for
  different data types (see the sketch after this list).
- If all backends have a common interface it makes sense to share the setup
  code. This can be done by adding a fixture in
  `tests/validation/fixtures/`. Inside of the `setup` method of a fixture
  the tensors can be created and initialised and the function can be configured
  and run. The actual test will only have to validate the results. To be shared
  among multiple backends the fixture class is usually a template that accepts
  the specific types (data, tensor class, function class etc.) as parameters.
- The actual test cases need to be added for each backend individually.
  Typically there will be multiple tests for different data types and for
  different execution modes, e.g. precommit and nightly.
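
As a rough sketch, a reference function for a hypothetical element-wise
absolute-value operation could look as follows (the operation, its name and the
loop body are illustrative only):

    template <typename T>
    SimpleTensor<T> absolute_value(const SimpleTensor<T> &src)
    {
        SimpleTensor<T> dst{ src.shape(), src.data_type() };

        // Element-wise loop over the flattened tensor.
        for(int i = 0; i < src.num_elements(); ++i)
        {
            dst[i] = std::abs(src[i]);
        }

        return dst;
    }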

@section tests_running_tests Running tests
@subsection tests_running_tests_benchmark_and_validation Benchmarking and validation suites
@subsubsection tests_running_tests_benchmarking_filter Filter tests
All tests can be run by invoking

    ./arm_compute_benchmark ./data

where `./data` contains the assets needed by the tests.

If only a subset of the tests has to be executed the `--filter` option takes a
regular expression to select matching tests.

    ./arm_compute_benchmark --filter='^NEON/.*AlexNet' ./data

@note Filtering will be much faster if the regular expression is anchored to the start ("^") or end ("$") of the line.

Additionally each test has a test id which can be used as a filter, too.
However, the test id is not guaranteed to be stable when new tests are added.
A test only keeps the same id within a specific build.

    ./arm_compute_benchmark --filter-id=10 ./data

All available tests can be displayed with the `--list-tests` switch.

    ./arm_compute_benchmark --list-tests

More options can be found in the `--help` message.

@subsubsection tests_running_tests_benchmarking_runtime Runtime
By default every test is run once on a single thread. The number of iterations
can be controlled via the `--iterations` option and the number of threads via
`--threads`.
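
For example, to run every selected benchmark ten times across four threads:

    ./arm_compute_benchmark --iterations=10 --threads=4 ./data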

@subsubsection tests_running_tests_benchmarking_output Output
By default the benchmarking results are printed in a human readable format on
the command line. The colored output can be disabled via `--no-color-output`.
As an alternative output format JSON is supported and can be selected via
`--log-format=json`. To write the output to a file instead of stdout the
`--log-file` option can be used.
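
For example, to write the results as JSON to a file (the file name is arbitrary):

    ./arm_compute_benchmark --log-format=json --log-file=benchmark.json ./data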

@subsubsection tests_running_tests_benchmarking_mode Mode
Tests contain different datasets of different sizes, some of which will take several hours to run.
You can select which datasets to use via the `--mode` option; we recommend you use `--mode=precommit` to start with.

@subsubsection tests_running_tests_benchmarking_instruments Instruments
You can use the `--instruments` option to select one or more instruments to measure the execution time of the benchmark tests.

`PMU` will try to read the CPU PMU events from the kernel (they need to be enabled on your platform).

`MALI` will try to collect Mali hardware performance counters (you need a recent enough Mali driver).

`WALL_CLOCK_TIMER` will measure time using `gettimeofday`: this should work on all platforms.

You can pass a combination of these instruments: `--instruments=PMU,MALI,WALL_CLOCK_TIMER`

@note You need to make sure the instruments have been selected at compile time using the `pmu=1` or `mali=1` scons options.

@subsubsection tests_running_examples Examples

To run all the precommit validation tests:

    LD_LIBRARY_PATH=. ./arm_compute_validation --mode=precommit

To run the OpenCL precommit validation tests:

    LD_LIBRARY_PATH=. ./arm_compute_validation --mode=precommit --filter="^CL.*"

To run the NEON precommit benchmark tests with the PMU and wall clock timer (in milliseconds) instruments enabled:

    LD_LIBRARY_PATH=. ./arm_compute_benchmark --mode=precommit --filter="^NEON.*" --instruments="pmu,wall_clock_timer_ms" --iterations=10

To run the OpenCL precommit benchmark tests with the OpenCL kernel timer (in milliseconds) enabled:

    LD_LIBRARY_PATH=. ./arm_compute_benchmark --mode=precommit --filter="^CL.*" --instruments="opencl_timer_ms" --iterations=10

@note You might need to export the path to the OpenCL library as well in your LD_LIBRARY_PATH if Compute Library was built with OpenCL enabled.
*/
} // namespace test
} // namespace arm_compute