Work-in-Progress: Quantized NNs as the Definitive Solution for Inference on Low-Power ARM MCUs?