Multi-Domain Adaptation for Autonomous Driving Perception under Diverse Weather
DOI: https://doi.org/10.71222/tx9cwf03

Keywords: multi-domain adaptation, autonomous driving perception, diverse weather conditions, domain shift, robust sensor fusion, adversarial learning, cross-modal feature alignment

Abstract
Autonomous driving perception systems face substantial robustness challenges under diverse weather conditions, where sensor data distortion caused by rain, fog, snow, or intense illumination degrades performance in critical tasks such as object detection and semantic segmentation. Existing approaches depend predominantly on single-domain models trained under ideal environmental conditions, which generalize poorly across weather domains because of inherent domain shift. This study applies multi-domain adaptation techniques to enhance perception stability by integrating heterogeneous sensor data, including RGB (red, green, blue) images, LiDAR (Light Detection and Ranging) point clouds, and thermal imaging, while leveraging cross-domain feature alignment mechanisms. The proposed framework employs domain-specific encoders combined with adversarial learning to mitigate weather-induced domain gaps, alongside a multi-task learning objective that simultaneously optimizes perception accuracy and domain invariance. Experimental validation demonstrates that the framework outperforms conventional single-domain and shallow adaptation models, with interpretability analyses revealing key weather-robust features such as thermal edge consistency and LiDAR point density patterns. Its ability to adapt to unseen weather conditions could enable reliable autonomous driving in complex real-world environments and reduce weather-related accidents. By bridging domain adaptation theory with automotive perception requirements, this work advances the translation of robust artificial intelligence (AI)-driven systems into practical autonomous driving applications.
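The mechanism the abstract describes, domain-specific encoders feeding a shared representation that an adversarial domain classifier pushes toward weather invariance while a task head preserves perception accuracy, can be illustrated with a short sketch. The code below is a minimal, hypothetical PyTorch rendering, not the paper's implementation: the linear encoders stand in for real RGB, LiDAR, and thermal backbones, the gradient-reversal layer is a standard device for adversarial domain adaptation, and every class name, parameter, and dimension here is an assumption for illustration only.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, so the encoders learn weather-invariant features while
    the domain discriminator is trained adversarially."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class WeatherAdaptivePerception(nn.Module):
    """Hypothetical skeleton: one encoder per sensor modality, a fusion
    layer, a perception task head, and a weather-domain discriminator
    reached through gradient reversal."""
    def __init__(self, feat_dim=256, num_domains=4, num_classes=10):
        super().__init__()
        # Toy per-modality encoders; real ones would be CNN, point-cloud,
        # and thermal backbones respectively.
        self.rgb_enc = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        self.thermal_enc = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        self.fusion = nn.Linear(3 * feat_dim, feat_dim)
        self.task_head = nn.Linear(feat_dim, num_classes)
        # e.g., clear / rain / fog / snow
        self.domain_head = nn.Linear(feat_dim, num_domains)

    def forward(self, rgb, lidar, thermal, lambd=1.0):
        z = self.fusion(torch.cat([self.rgb_enc(rgb),
                                   self.lidar_enc(lidar),
                                   self.thermal_enc(thermal)], dim=-1))
        task_logits = self.task_head(z)
        domain_logits = self.domain_head(GradientReversal.apply(z, lambd))
        return task_logits, domain_logits

def multi_task_loss(task_logits, labels, domain_logits, domains, alpha=0.1):
    """Multi-task objective: perception accuracy plus domain invariance."""
    ce = nn.functional.cross_entropy
    return ce(task_logits, labels) + alpha * ce(domain_logits, domains)
```

In this sketch the combined loss rewards correct task predictions while the reversed gradients penalize any feature that lets the discriminator identify the weather domain; the weighting `alpha` and a ramp-up schedule for `lambd` would need tuning per dataset.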
License
Copyright (c) 2025 Jinghui Tan (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.