Remove Computer Vision CL support

Resolves COMPMID-4151

Change-Id: I46f541efe8c4087f27794d2e158b6c1547d459ba
Signed-off-by: Michalis Spyrou <michalis.spyrou@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/5160
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Michele Di Giorgio <michele.digiorgio@arm.com>
diff --git a/docs/01_library.dox b/docs/01_library.dox
index 641fc3e..5cd33b6 100644
--- a/docs/01_library.dox
+++ b/docs/01_library.dox
@@ -191,7 +191,7 @@
 
 Functions will automatically allocate the temporary buffers mentioned above, and will automatically multi-thread kernels' executions using the very basic scheduler described in the previous section.
 
-Simple functions only call a single kernel (e.g @ref NEConvolution3x3), while more complex ones consist of several kernels pipelined together (e.g @ref NEFullyConnectedLayer ). Check their documentation to find out which kernels are used by each function.
+Simple functions only call a single kernel (e.g. NEConvolution3x3), while more complex ones consist of several kernels pipelined together (e.g. @ref NEFullyConnectedLayer). Check their documentation to find out which kernels are used by each function.
 
 @code{.cpp}
 //Create a function object:
@@ -225,9 +225,6 @@
 
 In order to block until all the jobs in the CLScheduler's command queue are done executing the user can call @ref CLScheduler::sync() or create a sync event using @ref CLScheduler::enqueue_sync_event()
 
-For example:
-@snippet cl_events.cpp OpenCL events
-
 @subsection S4_4_2_cl_neon OpenCL / Neon interoperability
 
 You can mix OpenCL and Neon kernels and functions. However it is the user's responsibility to handle the mapping/unmapping of OpenCL objects.
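For reference, a minimal sketch of the synchronisation flow described in the surviving paragraph, using @ref CLScheduler::sync() and @ref CLScheduler::enqueue_sync_event(); the CLActivationLayer, tensor shape and activation function are illustrative only and do not reproduce the removed cl_events example:

@code{.cpp}
#include "arm_compute/runtime/CL/CLFunctions.h"
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"

using namespace arm_compute;

CLScheduler::get().default_init(); // Initialise the OpenCL context, queue and kernel library.

CLTensor src{}, dst{};
src.allocator()->init(TensorInfo(TensorShape(16U, 16U), 1, DataType::F32));
dst.allocator()->init(TensorInfo(TensorShape(16U, 16U), 1, DataType::F32));

CLActivationLayer act{};
act.configure(&src, &dst, ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU));

src.allocator()->allocate();
dst.allocator()->allocate();

act.run(); // Enqueues the kernels on the CLScheduler's command queue; does not block.

cl::Event marker = CLScheduler::get().enqueue_sync_event(); // Marker event after the jobs enqueued so far.
CLScheduler::get().sync();                                  // Block until the command queue has finished executing.
@endcode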
@@ -260,8 +257,6 @@
 
 - Accurate padding:
 
-@snippet neon_convolution.cpp Accurate padding
-
 @note It's important to call allocate @b after the function is configured: if the image / tensor is already allocated then the function will shrink its execution window instead of increasing the padding. (See below for more details).
 
 - Manual padding / no padding / auto padding: You can allocate your images / tensors up front (before configuring your functions). In that case the function will use whatever padding is available and will shrink its execution window if there isn't enough padding available (which translates into a smaller valid region for the output). See also @ref valid_region).
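For reference, a minimal sketch of the configure-then-allocate ordering that the surviving note describes; NEConvolutionLayer, the tensor shapes and the pad/stride values are illustrative only and do not reproduce the removed neon_convolution example:

@code{.cpp}
#include "arm_compute/runtime/NEON/NEFunctions.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

Tensor src{}, weights{}, biases{}, dst{};

// Describe the tensors before configuring the function (illustrative shapes).
src.allocator()->init(TensorInfo(TensorShape(32U, 32U, 1U), 1, DataType::F32));
weights.allocator()->init(TensorInfo(TensorShape(3U, 3U, 1U, 4U), 1, DataType::F32));
biases.allocator()->init(TensorInfo(TensorShape(4U), 1, DataType::F32));
dst.allocator()->init(TensorInfo(TensorShape(32U, 32U, 4U), 1, DataType::F32));

NEConvolutionLayer conv{};
// Configure first: this is where the function can extend the tensors' padding requirements.
conv.configure(&src, &weights, &biases, &dst, PadStrideInfo(1, 1, 1, 1));

// Allocate only after configure, so the buffers are created with the accurate padding.
src.allocator()->allocate();
weights.allocator()->allocate();
biases.allocator()->allocate();
dst.allocator()->allocate();

// ... fill src / weights / biases, then run the function ...
conv.run();
@endcode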