A high dynamic range 3D contour measurement method integrating perception and correction networks (invited)

  • Abstract: Fringe projection profilometry, with its advantages of non-contact operation, high accuracy, and fast measurement, is widely used in industrial 3D measurement. In high dynamic range scenes, however, the limited dynamic range of the camera causes overexposed or underexposed regions in fringe patterns, lowering the fringe modulation and thereby degrading phase extraction accuracy and 3D reconstruction quality. To address this, a high dynamic range 3D contour measurement method based on an improved U-Net architecture is proposed. By introducing a hybrid attention mechanism and residual connections, the method builds a network comprising a region-aware subnetwork and an intensity distribution correction subnetwork, which effectively detects low-modulation regions in fringe patterns and restores their sinusoidal intensity distribution while suppressing noise. In addition, to improve the model's generalization ability, a digital twin of the fringe projection system was constructed in the digital-twin-capable 3D software Blender; by simulating object materials, illumination conditions, and reflection characteristics, a high-fidelity training dataset was generated, avoiding the tedious process of traditional data acquisition. Simulation experiments show that the proposed method effectively restores the sinusoidal intensity distribution of fringe patterns in low-modulation regions with strong noise suppression, reducing the mean absolute phase error to 0.0165 rad. Physical experiments show that the method reconstructs height with a mean absolute error of 0.029 mm, about 53% lower than that of the classical multi-exposure fusion method. The method performs stably in high dynamic range 3D reconstruction, significantly improves measurement accuracy, and shows good application prospects.

     

    Abstract:
    Objective Fringe Projection Profilometry (FPP) is widely used in industrial 3D measurement due to its non-contact nature, high accuracy, and rapid acquisition. However, limited by the camera's dynamic range, fringe patterns in High Dynamic Range (HDR) scenes often exhibit low-modulation regions caused by overexposure or underexposure, severely degrading phase extraction accuracy and 3D reconstruction quality. Existing solutions such as multi-exposure fusion and polarization-based methods suffer from long measurement times and high hardware costs. Although deep learning-based correction methods show potential for enhancing fringe pattern quality and reducing measurement time, they face challenges including the difficulty of acquiring real training data and the limited generalization of models trained on mathematically synthesized datasets. This study addresses two critical issues: 1) efficiently generating high-fidelity and diverse HDR fringe data to enhance model generalization; 2) designing an efficient HDR fringe correction network that accurately corrects low-modulation regions in single-frame HDR fringe patterns, thereby improving 3D measurement accuracy and efficiency without requiring multiple exposures or additional hardware.
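To make the role of fringe modulation concrete: in phase-shifting FPP, the phase and modulation at each pixel are recovered from N phase-shifted intensities, and saturation (over- or underexposure) clips those intensities, lowering the recovered modulation and biasing the phase. A minimal sketch using the standard four-step formulas (the step count and numbers here are illustrative assumptions; this abstract does not state the phase-shifting scheme used):

```python
import math

def four_step(i0, i1, i2, i3):
    """Standard four-step phase shifting at one pixel.
    Model: I_k = A + B*cos(phi + k*pi/2), k = 0..3.
    Returns the wrapped phase phi and the fringe modulation B."""
    s = i3 - i1                      # = 2*B*sin(phi)
    c = i0 - i2                      # = 2*B*cos(phi)
    phase = math.atan2(s, c)
    modulation = 0.5 * math.sqrt(s * s + c * c)
    return phase, modulation

# Ideal pixel: background A, modulation B, true phase phi (hypothetical values).
A, B, phi = 0.5, 0.4, 1.0
ideal = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
p_ok, m_ok = four_step(*ideal)       # recovers phi and B exactly

# Overexposure: the camera clips intensities at a saturation level,
# which lowers the recovered modulation and biases the phase.
clipped = [min(i, 0.75) for i in ideal]
p_bad, m_bad = four_step(*clipped)   # m_bad < B, p_bad != phi
```

The clipped case reproduces the low-modulation failure mode the paper targets: once the sinusoidal intensity distribution is restored, pixel-wise formulas like these become well-conditioned again.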
    Methods To overcome the difficulty of acquiring training data and to enhance model generalization, a digital twin of a physical FPP system was constructed using the open-source 3D software Blender. By varying object materials, surface roughness, metallicity, projection intensity, and noise interference, high-fidelity and highly diverse HDR fringe images were generated to provide rich training samples. Furthermore, an improved U-Net-based Fringe Pattern Correction Network (FPCNet) was designed, comprising two submodules: 1) a Region-Aware Subnetwork (RAS), which incorporates multi-scale convolutional structures and spatial-channel attention mechanisms (SCSE Block) to precisely identify low-modulation regions and generate attention masks; 2) an Intensity Distribution Correction Subnetwork (IDCS), which integrates dilated convolution modules (DCM Block), residual connections, and attention gate mechanisms (AG Block) within an encoder-decoder architecture to enhance contextual awareness while suppressing noise, effectively reconstructing the fringe intensity distribution in low-modulation regions. The network was trained using the Adam optimizer with a combined MSE and SSIM loss function to improve structural fidelity and phase recovery accuracy.
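The abstract states that training combines MSE and SSIM losses but does not give the weighting or the SSIM windowing scheme; the sketch below is one plausible form, using a simplified single global SSIM window and a hypothetical trade-off weight `alpha`:

```python
def mse(x, y):
    """Mean squared error between two equal-length intensity sequences."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM (no local Gaussian windowing).
    Constants c1, c2 follow the common 0.01/0.03 stabilizers."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def combined_loss(pred, target, alpha=0.5):
    """Weighted MSE + (1 - SSIM); alpha is a hypothetical trade-off weight,
    not taken from the paper."""
    return alpha * mse(pred, target) + (1 - alpha) * (1 - ssim_global(pred, target))
```

In practice the SSIM term is usually computed over local (often Gaussian-weighted) windows and averaged; the global form above only illustrates how the two terms combine into a single loss balancing pixel-wise fidelity (MSE) against structural fidelity (SSIM).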
    Results and Discussions Simulation tests demonstrate that FPCNet effectively restores sinusoidal intensity distributions in overexposed and underexposed regions of fringe patterns, mitigating peak-plateau and valley-plateau effects while suppressing noise (Fig.9). Corrected fringe patterns show significant improvements in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) (Fig.10). Ablation studies confirm the critical contributions of the dilated convolutions, residual structures, and attention mechanisms to performance. Phase extraction accuracy improves, with the mean absolute phase error reduced to 0.0165 rad (Tab.2). For physical validation, a physical FPP system was built and typical HDR objects (e.g., a metal wrench, a black metal block, a plastic curved surface) were tested. Corrected fringe patterns exhibit significantly enhanced modulation and effective noise suppression (Fig.14), and the resulting phase maps show superior continuity and smoothness compared with those from uncorrected images (Fig.16). For reconstruction of a standard gauge block, FPCNet achieves an average absolute height error of only 0.029 mm, a 53% reduction compared with multi-exposure fusion (MEF), and outperforms phase fusion methods (MPF/HPF) and the classical U-Net in both accuracy and stability (Fig.18).
    Conclusions This study proposes a deep learning-based HDR fringe correction method. By constructing a virtual FPP system in Blender, a high-fidelity HDR fringe dataset was generated efficiently. The designed FPCNet accurately identifies and corrects low-modulation regions in single-frame HDR fringe patterns, restoring their sinusoidal intensity distributions and suppressing noise. Simulation and physical experiments validate the method's effectiveness in improving fringe pattern quality, enhancing phase extraction accuracy, and optimizing 3D reconstruction results. FPCNet enables high-precision HDR 3D reconstruction without requiring multiple exposure images, achieves significantly lower errors than existing methods, and demonstrates good practicality and application prospects.

     
