Fix LUT-based activation layer

* Use the window instead of the tensor shape to determine the
  number of elements in the x-dimension.
* Remove the LUT implementation from the 32-bit build.
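
The first point can be sketched as below. The `Window` struct and function
names here are simplified stand-ins for illustration, not the actual
Compute Library types or kernel code:

```cpp
#include <cassert>
#include <cstddef>

// Simplified stand-in for an execution window: a 1-D [start, end) range.
struct Window {
    size_t start;
    size_t end; // exclusive
};

// Buggy variant: sizes the inner loop from the tensor shape, so a kernel
// asked to process only a sub-range still walks the whole x-dimension.
size_t elements_to_process_buggy(size_t tensor_shape_x, const Window & /*win*/) {
    return tensor_shape_x;
}

// Fixed variant: the window, not the tensor shape, defines how many
// x-elements this particular invocation must handle.
size_t elements_to_process_fixed(size_t /*tensor_shape_x*/, const Window &win) {
    return win.end - win.start;
}
```

When the scheduler splits work across threads, each thread's window covers
only a slice of the x-dimension; sizing the loop from the tensor shape would
make every thread process the full row instead of its slice.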

Resolves: COMPMID-5641
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I0a79aa38d8f6a105ad01785bd94571f5a2ecb348
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8380
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
diff --git a/src/cpu/kernels/activation/list.h b/src/cpu/kernels/activation/list.h
index 3850d4d..c0a2446 100644
--- a/src/cpu/kernels/activation/list.h
+++ b/src/cpu/kernels/activation/list.h
@@ -31,7 +31,10 @@
 #define DECLARE_ACTIVATION_KERNEL(func_name) \
     void func_name(const ITensor *src, ITensor *dst, const ActivationLayerInfo &act_info, const Window &window)
 
+#ifdef __aarch64__
 DECLARE_ACTIVATION_KERNEL(neon_q8_activation_lut);
+#endif // __aarch64__
+
 DECLARE_ACTIVATION_KERNEL(neon_qasymm8_activation);
 DECLARE_ACTIVATION_KERNEL(sve2_qasymm8_activation);
 DECLARE_ACTIVATION_KERNEL(neon_qasymm8_signed_activation);