WANG Yan, ZUO Yong, TANG Yi, et al. High-precision calibration of LiDAR-infrared camera extrinsic with weak-edge features[J]. Infrared and Laser Engineering, 2026, 55(1): 20250427. DOI: 10.3788/IRLA20250427

High-precision calibration of LiDAR-infrared camera extrinsic with weak-edge features

  • Objective With the rapid advancement of multi-sensor fusion and intelligent perception technologies, LiDAR-infrared (IR) camera fusion has become a key enabler for reliable perception in weak-light and no-light environments, where accurate extrinsic calibration is fundamental for heterogeneous data integration. However, existing LiDAR-IR calibration methods predominantly rely on traditional calibration-board-based procedures, which impose strict requirements on pattern visibility, offer limited generalization capability, and often suffer from insufficient accuracy. Moreover, due to the inherently low resolution and blurred edges of IR images, directly applying LiDAR-visible-light calibration techniques frequently leads to discontinuous boundaries or false detections, further constraining calibration precision and stability. To address these challenges, this paper proposes a high-precision LiDAR-IR extrinsic calibration method based on weak-edge features, providing robust support for the stable operation of all-weather multimodal perception systems.
    Methods This paper develops a high-precision extrinsic calibration pipeline for LiDAR-IR cameras based on weak-edge features. A cross-modal adaptive corner detection framework is first introduced, in which IR image features and point-cloud structural cues are jointly formulated as a hierarchical optimization process consisting of coarse localization, local enhancement, and adaptive refinement. For IR image feature extraction, true structural corners are enhanced through local optimal projection, convex-hull extraction, and an area-rectangularity scoring function (Fig.3). For 3D point-cloud feature extraction, precise spatial alignment is achieved by combining multi-scale neighborhood search with KD-tree-based iterative back-projection, while dual geometric constraints on edge-length ratio and angular consistency are applied to suppress pseudo structures commonly found in natural scenes (Fig.4). Subsequently, the unified extrinsic parameters are estimated using EPnP modeling integrated with Ceres-based nonlinear optimization, forming a fully automated calibration procedure that requires neither calibration boards nor manual intervention. The proposed method is applicable to natural scenes with sufficient geometric structure and depth variation, and is suitable for large-scale perception systems operating in weak-light or no-light environments, rapid cross-sensor deployment, and industrial or outdoor scenarios where calibration boards cannot be conveniently used.
    Results and Discussions This paper proposes a high-precision extrinsic calibration method for LiDAR-IR cameras based on weak-edge features. By introducing a cross-modal adaptive corner detection framework, IR image features and point-cloud structures are jointly formulated as a hierarchical optimization process consisting of coarse localization, local enhancement, and adaptive refinement. Combined with EPnP modeling and Ceres-based nonlinear iterative optimization, the method achieves stable and reliable extrinsic parameter estimation. Experimental results show that the feature repeatability of IR images and 3D point clouds is improved to 83% and 89%, respectively (Tab.1-Tab.2), while the average reprojection error is reduced to only 1.74 pixels (Fig.7, Tab.4). Moreover, in multi-distance fusion tests conducted under weak-light and no-light conditions, the proposed method consistently maintains strong spatial alignment and imaging stability, effectively mitigating the perception degradation caused by insufficient single-modality information and demonstrating enhanced robustness in the presence of noise interference (Fig.10-Fig.13).
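The feature-repeatability figures quoted above can be computed with a standard nearest-neighbor criterion: a detected corner counts as repeated if a detection from a second pass lies within a pixel tolerance of it. The sketch below uses a KD-tree for the matching; the tolerance, the toy detections, and this exact matching protocol are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def repeatability(det_a, det_b, tol=2.0):
    """Fraction of detections in det_a that have a neighbor in det_b
    within `tol` pixels (a generic repeatability definition; the
    paper's exact protocol may differ)."""
    tree = cKDTree(det_b)
    dists, _ = tree.query(det_a)       # nearest-neighbor distances
    return float(np.mean(dists <= tol))

# Toy example: two detection passes over the same scene; the third
# corner drifts by ~7 px and therefore does not count as repeated.
a = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 20.0], [30.0, 70.0]])
b = a + np.array([[0.5, -0.3], [1.2, 0.4], [5.0, 5.0], [-0.2, 0.1]])
print(repeatability(a, b, tol=2.0))    # 3 of 4 matched -> 0.75
```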
    Conclusions To address the issue of insufficient extrinsic calibration accuracy caused by the weak-edge characteristics of IR images, this paper proposes a high-precision LiDAR-IR camera extrinsic calibration method based on weak-edge features. A cross-modal adaptive corner detection framework is constructed to model the extraction of IR-image and point-cloud corners as a multi-level iterative optimization process. By further integrating EPnP modeling with Ceres-based nonlinear optimization, the method enables fully automated, high-accuracy LiDAR-IR extrinsic calibration and significantly enhances calibration flexibility in natural scenes. Experimental results demonstrate that the proposed method achieves stable multimodal sensor fusion at various distances in both low-light and no-light conditions, providing reliable support for all-weather multi-sensor fusion perception. The method also exhibits strong engineering applicability in scenarios such as autonomous robotics, night-time inspection, and perception tasks in low-illumination environments.