Remove experimental flag for per-channel FC quantization (#2482)

Per-channel quantization in fully connected layers is still not supported by TFLM, but the converter now has proper support, so we can remove the flag.

BUG=cl/610755484
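
For reference, a minimal sketch of the converter setup the test relies on after this change; the tiny Dense model and the dataset generator below are illustrative assumptions, not taken from the test itself:

    import numpy as np
    import tensorflow as tf

    # Hypothetical one-layer model, just enough to exercise FC (Dense) quantization.
    inputs = tf.keras.Input(shape=(8,))
    outputs = tf.keras.layers.Dense(4)(inputs)
    model = tf.keras.Model(inputs, outputs)

    # Illustrative representative dataset for post-training quantization.
    def representative_dataset_gen():
      for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
    ]
    converter.representative_dataset = representative_dataset_gen
    # The opt-out removed by this change is no longer set here, so the
    # converter's default per-channel quantization of Dense weights applies:
    # converter._experimental_disable_per_channel_quantization_for_dense_layers = True
    tflite_model = converter.convert()
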
diff --git a/tensorflow/lite/micro/tools/requantize_flatbuffer_test.py b/tensorflow/lite/micro/tools/requantize_flatbuffer_test.py
index 04bbb32..4d80991 100644
--- a/tensorflow/lite/micro/tools/requantize_flatbuffer_test.py
+++ b/tensorflow/lite/micro/tools/requantize_flatbuffer_test.py
@@ -60,10 +60,6 @@
         EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
     ]
   converter.representative_dataset = representative_dataset_gen
-  # TODO(b/324385802): Disable per channel quantization in FC layers (currently
-  # default behaviour) since it's not yet supported in TFLM.
-  converter._experimental_disable_per_channel_quantization_for_dense_layers = (  # pylint: disable=protected-access
-      True)
   return converter.convert()