QARGS-YOLO: A Better Object Detector for Autonomous Driving via Quantization-Aware Re-parameterization
Yuan Sun, Weixiang Li, Ming Cheng, Chuang Chen

Nanjing Tech University, College of Electrical Engineering and Control Science

Corresponding Author: Weixiang Li, [email protected]

Abstract

This work addresses the challenges of high parameter counts and post-quantization accuracy loss in real-time road object detection for autonomous driving. First, a novel convolutional module, QARGSConv, is proposed by combining the GS convolution module with quantization-aware re-parameterization. Second, an improved detection model, QARGS-YOLO, is designed on top of YOLOv7-tiny and validated on the Pascal VOC and KITTI datasets. Finally, the quantized model is evaluated in real driving scenarios to analyze its performance in practical environments. QARGSConv is designed to reduce the parameter count while preserving accuracy after quantization: re-parameterizing the GS convolution module makes it lightweight while improving its accuracy, and quantization-aware techniques are introduced to mitigate the significant accuracy degradation that re-parameterization would otherwise cause after quantization. The results demonstrate that with quantization-aware strategies, the FP16 quantized model incurs an accuracy loss of only about 0.1%, and the INT8 quantized model about 0.3%. Moreover, QARGS-YOLO achieves an mAP@0.5 of 71.4% on the VOC dataset and 90.9% on the KITTI dataset, surpassing YOLOv7-tiny, while reducing the parameter count by 16.3% relative to the baseline. Its road detection capability is also validated in real driving environments.
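The abstract does not spell out how re-parameterization folds a multi-branch module into a single convolution at inference time. The NumPy sketch below illustrates the general technique (RepVGG-style branch fusion, not the paper's actual QARGSConv, whose structure is not given here): a parallel 1×1 branch is absorbed into the center of a 3×3 kernel, so the fused single convolution reproduces the training-time branch sum exactly.

```python
import numpy as np

def conv2d(x, w):
    """Naive stride-1 'same' convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    _, H, W = x.shape
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))      # input feature map
w3 = rng.standard_normal((6, 4, 3, 3))  # 3x3 branch weights
w1 = rng.standard_normal((6, 4, 1, 1))  # parallel 1x1 branch weights

# Training time: two parallel branches, outputs summed.
y_train = conv2d(x, w3) + conv2d(x, w1)

# Inference time: embed the 1x1 kernel at the center of the 3x3 kernel,
# collapsing both branches into one convolution (fewer params, one op).
w_fused = w3.copy()
w_fused[:, :, 1:2, 1:2] += w1
y_infer = conv2d(x, w_fused)

print(np.allclose(y_train, y_infer))  # fused conv matches the branch sum
```

Because convolution is linear in its weights, the fusion is mathematically exact; accuracy loss only appears later, when the fused weights are quantized, which is what the quantization-aware training in the paper is meant to counteract.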
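For readers unfamiliar with the INT8 quantization that the reported 0.3% accuracy loss refers to, the following is a minimal sketch of per-tensor symmetric INT8 weight quantization (a common baseline scheme; the paper's exact quantizer and calibration are not specified in the abstract, so the function names here are illustrative):

```python
import numpy as np

def quantize_int8(w):
    """Per-tensor symmetric quantization: map [-max|w|, max|w|] onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from INT8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # stand-in for a weight tensor

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Round-off error is bounded by half a quantization step.
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

Quantization-aware training simulates this rounding during the forward pass so the network learns weights that remain accurate once stored as INT8, which is how the model limits the post-quantization degradation that plain post-training quantization would incur.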