High-precision extrinsic calibration of LiDAR-infrared cameras with weak-edge features

  • Abstract: Extrinsic calibration between LiDAR and infrared cameras is a key step in multi-source sensor information fusion. To address the strict calibration-board requirements and manual intervention of traditional methods, as well as the low resolution and blurred edges of infrared images, this paper proposes a high-precision LiDAR-infrared camera extrinsic calibration method based on weak-edge features. First, a cross-modal adaptive corner detection framework is designed, which uniformly models feature extraction from infrared images and point clouds as a multi-level iterative optimization process of coarse localization, local enhancement, and adaptive refinement, effectively resolving the false detections caused by inconsistent feature distributions across modalities and weak-edge characteristics; experiments show that the framework achieves feature-point detection repeatability of 83% on infrared images and 89% on 3D point clouds. Second, by combining EPnP modeling with Ceres nonlinear optimization, the method achieves fully automatic, high-precision extrinsic estimation without a calibration board, with an average reprojection error of 1.74 pixels, 54.45% lower than the calibration-board method and 19.44% lower than a method incorporating the SAM foundation model. Finally, multi-scene experiments verify that the method maintains stable performance under different illumination and ranging conditions, providing reliable support for all-day LiDAR-infrared camera multi-source fusion perception.
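    As a rough illustration of the estimation stage named in the abstract, the sketch below chains an EPnP initial pose with a nonlinear least-squares refinement of the reprojection error. It is a minimal stand-in, not the paper's implementation: the authors optimize with Ceres, whereas this sketch substitutes SciPy's Levenberg-Marquardt solver, and all names and shapes (pts_lidar, pts_ir, K, dist) are illustrative assumptions.

```python
# Minimal sketch: EPnP initialization followed by nonlinear pose refinement,
# assuming matched LiDAR corners (N,3) and IR corners (N,2) with N >= 4.
# The paper uses Ceres; scipy.optimize.least_squares stands in here.
import numpy as np
import cv2
from scipy.optimize import least_squares

def estimate_extrinsics(pts_lidar, pts_ir, K, dist):
    # Stage 1: closed-form initial pose via EPnP.
    ok, rvec, tvec = cv2.solvePnP(
        pts_lidar.astype(np.float64), pts_ir.astype(np.float64),
        K, dist, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("EPnP failed; check the correspondences")

    # Stage 2: refine the 6-DoF pose (Rodrigues rotation + translation)
    # by minimizing per-point reprojection residuals.
    def residuals(x):
        proj, _ = cv2.projectPoints(
            pts_lidar.astype(np.float64), x[:3], x[3:], K, dist)
        return (proj.reshape(-1, 2) - pts_ir).ravel()

    x0 = np.hstack([rvec.ravel(), tvec.ravel()])
    sol = least_squares(residuals, x0, method="lm")
    return sol.x[:3], sol.x[3:]  # refined rvec, tvec
```

    In this two-stage design the closed-form EPnP solution only needs to land in the basin of attraction of the true pose; the iterative refinement then drives the per-point pixel residuals down, which is what the reported average reprojection error measures.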


    Abstract:
    Objective With the rapid advancement of multi-sensor fusion and intelligent perception technologies, LiDAR-infrared (IR) camera fusion has become a key enabler for reliable perception in low-light and no-light environments, where accurate extrinsic calibration is fundamental to heterogeneous data integration. However, existing LiDAR-IR calibration methods rely predominantly on calibration boards, which impose strict requirements on pattern visibility, generalize poorly, and often deliver insufficient accuracy. Moreover, because IR images are inherently low in resolution and have blurred edges, directly applying LiDAR-visible-light calibration techniques frequently produces discontinuous boundaries or false detections, further limiting calibration precision and stability. To address these challenges, this paper proposes a high-precision LiDAR-IR extrinsic calibration method based on weak-edge features, providing robust support for the stable operation of all-weather multimodal perception systems.
    Methods This paper develops a high-precision extrinsic calibration pipeline for LiDAR-IR cameras based on weak-edge features. A cross-modal adaptive corner detection framework is first introduced, in which IR image features and point-cloud structural cues are jointly formulated as a hierarchical optimization process consisting of coarse localization, local enhancement, and adaptive refinement. For IR image feature extraction, true structural corners are enhanced through local optimal projection, convex-hull extraction, and an area-rectangularity scoring function (Fig.3). For 3D point-cloud feature extraction, precise spatial alignment is achieved by combining multi-scale neighborhood search with KD-tree-based iterative back-projection, while dual geometric constraints on edge-length ratio and angular consistency suppress the pseudo structures commonly found in natural scenes (Fig.4); a sketch of these two filters is given below. The unified extrinsic parameters are then estimated by EPnP modeling combined with Ceres-based nonlinear optimization, yielding a fully automated calibration procedure that requires neither calibration boards nor manual intervention. The method is applicable to natural scenes with sufficient geometric structure and depth variation, and suits large-scale perception systems operating in low-light or no-light environments, rapid cross-sensor deployment, and industrial or outdoor scenarios where calibration boards cannot be conveniently used.
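    To make the two geometric filters concrete, the following is a hedged sketch under OpenCV conventions. The paper's exact scoring function and thresholds (Fig.3-Fig.4) are not reproduced in this abstract, so the score definition, tolerance values, and function names below are plausible assumptions rather than the authors' implementation.

```python
# Sketch of an area-rectangularity score for IR corner-cluster candidates and
# of dual geometric constraints (edge-length ratio + angular consistency) for
# quadrilateral candidates found in the point cloud. Thresholds are assumed.
import numpy as np
import cv2

def rectangularity_score(points_2d):
    """points_2d: (N, 2) candidate corner cluster in the IR image."""
    hull = cv2.convexHull(points_2d.astype(np.float32))
    hull_area = cv2.contourArea(hull)
    (_cx, _cy), (w, h), _angle = cv2.minAreaRect(hull)
    # Close to 1.0 when the hull nearly fills its minimum-area rectangle,
    # i.e. the cluster outlines a genuinely rectangular structure.
    return hull_area / max(w * h, 1e-9)

def passes_geometric_constraints(quad_3d, ratio_tol=0.25, angle_tol_deg=15.0):
    """quad_3d: (4, 3) ordered 3D corner candidate from the point cloud."""
    edges = np.roll(quad_3d, -1, axis=0) - quad_3d   # four directed edges
    lengths = np.linalg.norm(edges, axis=1)

    # Constraint 1: opposite edges of a true rectangle have similar lengths.
    for a, b in ((lengths[0], lengths[2]), (lengths[1], lengths[3])):
        if abs(a - b) / max(a, b) > ratio_tol:
            return False

    # Constraint 2: consecutive edges are near-perpendicular at each vertex.
    for i in range(4):
        u, v = edges[i], -edges[i - 1]               # edges meeting at vertex i
        cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angle = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
        if abs(angle - 90.0) > angle_tol_deg:
            return False
    return True
```

    Filters of this kind matter because natural scenes are full of near-rectangular pseudo structures (foliage gaps, shadows, sparse-scan artifacts); requiring both length and angle consistency rejects candidates that pass either test alone.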
    Results and Discussions With the cross-modal adaptive corner detection framework, IR image features and point-cloud structures are jointly formulated as a hierarchical optimization process of coarse localization, local enhancement, and adaptive refinement; combined with EPnP modeling and Ceres-based nonlinear iterative optimization, this yields stable and reliable extrinsic parameter estimation. Experimental results show that the feature repeatability on IR images and 3D point clouds is improved to 83% and 89%, respectively (Tab.1-Tab.2), while the average reprojection error is reduced to only 1.74 pixels (Fig.7, Tab.4). Moreover, in multi-distance fusion tests conducted under low-light and no-light conditions, the proposed method consistently maintains strong spatial alignment and imaging stability, effectively mitigating the perception degradation caused by insufficient single-modality information and demonstrating enhanced robustness to noise interference (Fig.10-Fig.13).
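    The headline accuracy figure above is a mean reprojection error. As a reference for how such a figure is typically computed, this short sketch projects the LiDAR corners with the estimated extrinsics and averages the pixel distances to the detected IR corners; variable names mirror the earlier sketches and are assumptions.

```python
# Sketch of the evaluation metric: mean pixel distance between detected IR
# corners and LiDAR corners projected with the estimated extrinsics.
import numpy as np
import cv2

def mean_reprojection_error(pts_lidar, pts_ir, rvec, tvec, K, dist):
    """The paper reports an average of 1.74 pixels for this quantity."""
    proj, _ = cv2.projectPoints(
        pts_lidar.astype(np.float64), rvec, tvec, K, dist)
    return float(np.linalg.norm(proj.reshape(-1, 2) - pts_ir, axis=1).mean())
```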
    Conclusions To address the insufficient extrinsic calibration accuracy caused by the weak-edge characteristics of IR images, this paper proposes a high-precision LiDAR-IR camera extrinsic calibration method based on weak-edge features. A cross-modal adaptive corner detection framework is constructed to model the extraction of IR-image and point-cloud corners as a multi-level iterative optimization process. By further integrating EPnP modeling with Ceres-based nonlinear optimization, the method enables fully automated, high-accuracy LiDAR-IR extrinsic calibration and significantly enhances calibration flexibility in natural scenes. Experimental results demonstrate that the proposed method achieves stable multimodal sensor fusion at various distances in both low-light and no-light conditions, providing reliable support for all-weather multi-sensor fusion perception. The method also exhibits strong engineering applicability in scenarios such as autonomous robotics, night-time inspection, and perception in low-illumination environments.
