Zhu Xinjun, Hou Linpeng, Song Limei, Yuan Mengkai, Wang Hongyi, Wu Zhichao. Fringe structured light 3D reconstruction based on virtual binocular[J]. Infrared and Laser Engineering, 2022, 51(11): 20210955. doi: 10.3788/IRLA20210955

Fringe structured light 3D reconstruction based on virtual binocular

doi: 10.3788/IRLA20210955
  • Received Date: 2022-03-10
  • Revised Date: 2022-04-25
  • Publish Date: 2022-11-30
  • [1] Zhang Zonghua, Yu Jin, Gao Nan, et al. Three-dimensional shape measurement techniques of shiny surfaces [J]. Infrared and Laser Engineering, 2020, 49(3): 0303006. (in Chinese)
    [2] Zhu Xinjun, Deng Yaohui, Tang Chen, et al. Variational mode decomposition for phase retrieval in fringe projection 3D shape measurement [J]. Optics and Precision Engineering, 2016, 24(9): 2318-2324. (in Chinese) doi:  10.3788/OPE.20162409.2318
    [3] Yang Fan, Ding Xiaojian, Cao Jie. 3D reconstruction of free-form surface based on color structured light [J]. Acta Optica Sinica, 2021, 41(2): 0212001. (in Chinese)
    [4] Guo Wenbo, Zhang Qican, Wu Zhoujie. Real-time three-dimensional imaging technique based on phase-shift fringe analysis: A review [J]. Laser & Optoelectronics Progress, 2021, 58(8): 0800001. (in Chinese)
    [5] Lu Rongsheng, Shi Yanqiong, Hu Haibing. Review of three-dimensional imaging techniques for robotic vision [J]. Laser & Optoelectronics Progress, 2020, 57(4): 040001. (in Chinese)
    [6] Chen Jun, Wan Zhechao, Zhang Jiacheng, et al. Medical image segmentation and reconstruction of prostate tumor based on 3D AlexNet [J]. Computer Methods and Programs in Biomedicine, 2021, 200(1): 105878.
    [7] Wang Yuwei, Chen Xiangcheng, Wang Yajun. Modified dual-frequency geometric constraint fringe projection for 3D shape measurement [J]. Infrared and Laser Engineering, 2020, 49(6): 20200049. (in Chinese)
    [8] Zhang Qican, Wu Zhoujie. Three-dimensional imaging technique based on Gray-coded structured illumination [J]. Infrared and Laser Engineering, 2020, 49(3): 0303004. (in Chinese)
    [9] Zhou Q, Yang Y M, Wang C. Combing structured light measurement technology with binocular stereo vision[C]//2017 IEEE 2nd International Conference on Opto-Electronic Information Processing (ICOIP), 2017: 64-69.
    [10] Wei Yin, Feng Shijie, Tao Tianyang, et al. High-speed 3D shape measurement using the optimized composite fringe patterns and stereo-assisted structured light system [J]. Optics Express, 2019, 27(3): 2411-2431. doi:  10.1364/OE.27.002411
    [11] Liu Kun, Zhou Changhe, Wei Shengbin, et al. Optimized stereo matching in binocular three-dimensional measurement system using structured light [J]. Applied Optics, 2014, 53(26): 6083-6090. doi:  10.1364/AO.53.006083
    [12] Su Xianyu, Zhang Qican, Chen Wenjing. Three-dimensional imaging based on structured illumination [J]. Chinese Journal of Lasers, 2014, 41(2): 0209001. (in Chinese)
    [13] Zhang Zhijia, Yin Xiuping, Yuan Weiqi, et al. Online detection method of binocular vision based on mechanical error for bowl plug [J]. Infrared and Laser Engineering, 2016, 45(12): 1217002. (in Chinese)
    [14] Zhong Jinxin, Yin Wei, Feng Shijie, et al. Speckle projection profilometry with deep learning [J]. Infrared and Laser Engineering, 2020, 49(6): 20200011. (in Chinese)
    [15] Lee DooHyun, Kweon InSo. A novel stereo camera system by a biprism [J]. IEEE Transactions on Robotics and Automation, 2000, 16(5): 528-541.
    [16] Feng Xiaofeng, Pan Difu. Research on the application of single camera stereo vision sensor in three-dimensional point measurement [J]. Journal of Modern Optics, 2015, 62(15): 1204-1210. doi:  10.1080/09500340.2015.1024775
    [17] Yan Li, Duan Fajie. Optimum design and accuracy analysis of monocular stereoscopic vision sensor system [J]. Chinese Journal of Sensors and Actuators, 2006, 19(2): 349-352. (in Chinese)
    [18] Chen Bin, Pan Bin. Calibration-free single camera stereo-digital image correlation for small-scale underwater deformation measurement [J]. Optics Express, 2019, 27(8): 10509-10523. doi:  10.1364/OE.27.010509
    [19] Dong Bo, Zeng Fancang, Pan Bing. A simple and practical single-camera stereo-digital image correlation using a color camera and X-cube prism [J]. Sensors, 2019, 19(21): 1-12. doi:  10.1109/JSEN.2019.2926001
    [20] Yuan Tianyu, Dai Xiangjun, Shao Xinxing, et al. Dual-biprism-based digital image correlation for defect detection of pipelines [J]. Optical Engineering, 2019, 58(1): 1-13.
    [21] Xue Ting, Cao Zhaofeng, Jin Yuxin. Calibration of three-dimensional measurement system for gas-liquid two phase flow based on virtual stereo vision [J]. Optics and Precision Engineering, 2012, 20(1): 124-130. (in Chinese) doi:  10.3788/OPE.20122001.0124
    [22] Ma Suodong, Li Bo. Virtual-stereo fringe reflection technique for specular free-form surface testing[C]//Proc SPIE, 2016, 24:100231.
    [23] Hu Jinsong, Cheng Peng, Xu Boqin. 3D reconstruction of free flying insect base on biprism-stereo camera [J]. Journal of Experimental Mechanics, 2007, 22(5): 511-518.
    [24] Ye Jianjian, Liu Xiangjun. Automatic calibration system for high speed camera based on virtual binocular vision [J]. Electronic Measurement Technology, 2019, 42(3): 74-79.
    [25] Song Limei, Huang Haozhen, Chen Yang, et al. Fast measurement of human body posture based on three-dimensional optical information [J]. Infrared and Laser Engineering, 2020, 49(6): 20200079. (in Chinese)
    [26] Yin Wei, Zhong J, Feng Shijie, et al. Composite deep learning framework for absolute 3D shape measurement based on single fringe phase retrieval and speckle correlation [J]. Journal of Physics Photonics, 2020, 2(4): 045009.
    [27] Zhang Zhengyou. A flexible new technique for camera calibration [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.

Fringe structured light 3D reconstruction based on virtual binocular

  • 1. School of Artificial Intelligence, Tiangong University, Tianjin 300387, China
  • 2. Tianjin Key Laboratory of Intelligent Control of Electrical Equipment, Tianjin 300387, China
  • 3. School of Control Science and Engineering, Tiangong University, Tianjin 300387, China

Abstract: To address problems such as synchronization requirements and high cost in traditional binocular fringe structured light 3D reconstruction, a virtual binocular fringe structured light 3D reconstruction method based on a single camera is proposed. A virtual binocular fringe structured light 3D reconstruction system with binocular vision is designed using a single camera, two biprisms, and one projector. Biprism refraction and beam splitting change the path of the light reflected from the surface of the measured object, so that multi-view images are acquired with a single camera. Through the multi-frequency heterodyne method, stereo matching, and binocular calibration of the virtual binocular system, the depth information of the measured object is obtained and the point cloud is reconstructed. Experimental results show that the root mean square errors of the proposed method and the real binocular structured light method in measuring standard spheres are 0.0379 mm and 0.0305 mm, respectively. The proposed method can promote the development of binocular fringe structured light technology toward high speed, low cost, and miniaturization. It can also be extended to color-camera-based fringe structured light 3D reconstruction and to projected-speckle structured light 3D reconstruction.

    • Optical 3D measurement offers non-contact, full-field, high-accuracy acquisition of the depth information of a measured surface and is widely used in machine vision, defect detection, biomedicine, and other fields [1-7]. Structured light 3D measurement, one of the most widely used optical 3D measurement methods, projects speckle, fringe, or other structured light patterns onto the measured surface to enable high-accuracy, dense point cloud reconstruction. Among structured light projection methods, fringe projection 3D imaging is widely applied in industrial inspection for its high accuracy and robustness [8-12]. A projector casts grating fringe images onto the measured object; modulated by the surface height, the fringes projected onto the surface deform. A camera captures the deformed fringe images, and the depth information of the object is then obtained through fringe analysis, phase extraction, phase unwrapping, and stereo matching.

      Compared with monocular structured light measurement, binocular structured light measurement builds on binocular stereo vision: two cameras at different positions capture the measured object from different viewpoints, and high-accuracy 3D reconstruction is achieved by matching the unwrapped phases of the two views. Since the projector need not be calibrated, the method is less affected by projector distortion and offers higher reconstruction accuracy and robustness [11, 13]. However, a traditional binocular structured light system requires the two cameras to capture simultaneously, placing high demands on synchronization stability [14]; differences in gray-level response between the two cameras also affect the reconstruction result. To overcome these limitations, virtual binocular vision was proposed [15]. Lee et al. [15] presented a biprism-based stereo camera system in which biprism refraction changes the path of light reflected from the measured surface, so that a single camera captures rays from two different viewpoints, realizing stereo measurement with a virtual binocular system. A virtual binocular system can also be built from one camera and one or more mirrors [16-17]: by adjusting the position and angle of the mirrors, two views of the measured object are captured in a single image owing to the virtual stereo parallax. In 3D digital image correlation (DIC), single-camera virtual binocular imaging systems have developed rapidly in recent years [18-19]. Yuan et al. [20] proposed a prism-based virtual binocular system that combines two biprisms, one camera, and one mirror, realizing virtual binocular DIC for pipeline inspection.

      In summary, virtual binocular techniques have been applied to DIC, binocular particle image velocimetry (PIV), and fringe deflectometry for specular surfaces [18, 21-24]. To the best of our knowledge, fringe projection 3D reconstruction based on the virtual binocular technique has not yet been reported. This paper proposes a biprism-based virtual binocular fringe structured light 3D reconstruction system: a projector casts grating fringes onto the measured object, the biprism-based virtual binocular system captures the fringe images, and 3D reconstruction is performed with multi-frequency heterodyne phase unwrapping. The method improves stability, reduces cost, and keeps the structure simple and compact while preserving 3D reconstruction accuracy.

    • The biprism-based design is one of the main ways to realize a virtual binocular system. Compared with a single-biprism design, the two-biprism configuration proposed by Yuan et al. can use commercially available biprisms without special machining, offering lower cost and greater flexibility. On this basis, the proposed virtual binocular fringe structured light 3D reconstruction system is shown in Fig.1. The two biprisms are placed directly in front of the camera, symmetric about its optical axis; the projector is placed beside the camera; and a light shield between the two biprisms blocks stray light reflected from the measured object. During measurement, light reflected from the object surface within the effective field of view passes through the two biprisms from different directions, is refracted onto new paths, and is mapped onto the left and right halves of the camera's imaging area [20]. In Fig.1, the rays exiting the biprisms finally cross in front of the camera's imaging area, so the captured image of the object is mirrored left-right. The optical centers of the two virtual cameras are located at points ${\rm Ca^+}$ and ${\rm Ca^-}$, symmetric about the camera's optical axis. By finely adjusting the relative position of the biprisms and the camera, the imaging areas of the left and right virtual cameras are made to correspond to the right and left halves of the real camera's imaging area, respectively. The baseline between the two virtual cameras can be tuned by adjusting the angle between the two biprisms.

      Figure 1.  Diagram of the virtual binocular system scheme

      Figure 2 shows the optical path of the virtual binocular imaging model; the optical axis of the biprism system is set coincident with that of the camera. The two biprisms are of identical size and placed symmetrically on either side of the camera's optical axis. Light reflected from a point $M$ on the $X'$ axis is refracted twice by biprism Ⅰ before entering the camera lens: the incident ray from the object surface enters biprism Ⅰ at incidence angle $\theta_{i1}$ and refracts at angle $\theta_{o1}$, then strikes the second surface at incidence angle $\theta_{i2}$ and leaves at refraction angle $\theta_{o2}$. This gives:
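      The body of Eq.(1) is not preserved in this copy; a reconstruction from Snell's law at the two faces of biprism Ⅰ, consistent with the angle definitions below ($n$ denotes the refractive index of the prism, which the text does not state), is:

      $$\sin {\theta _{i1}} = n\sin {\theta _{o1}}, \qquad n\sin {\theta _{i2}} = \sin {\theta _{o2}} \qquad (1)$$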

      Figure 2.  Optical path diagram of virtual binocular imaging system

      In Eq.(1) and Fig.2, $\gamma$ is the wedge angle of the biprism and $\alpha$ is the angle between the hypotenuse of biprism Ⅰ and the $Y$ axis. The path of light from the same point $M$ on the $X'$ axis through biprism Ⅱ can be analyzed in the same way. After refraction by biprisms Ⅰ and Ⅱ, the two rays reflected from object point $M$ converge at points $m^-$ and $m^+$ on the image plane, respectively.

    • This paper proposes a virtual binocular fringe structured light 3D reconstruction method; its complete flow is shown in Fig.3. To obtain a higher-accuracy 3D reconstruction result, the multi-wavelength (multi-frequency) phase unwrapping method [25] is used to compute the unwrapped phase for binocular stereo matching.

      Figure 3.  Schematic diagram of the proposed method

      Four-step phase-shifted grating fringes with wavelengths $\lambda_1 = 16$, $\lambda_2 = 18$, and $\lambda_3 = 21$ (in pixels) are projected onto the measured object by the projector, and the virtual binocular vision system is triggered to capture the modulated fringe images. The phase functions of the three wavelengths are solved by the four-step phase-shifting method, as in Eq.(3):
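      The body of Eq.(3) is likewise not preserved; the standard $N$-step phase-shifting solution consistent with the definitions below (here $N = 4$) is:

      $$\varphi_i(x,y) = \arctan\frac{\displaystyle\sum_{n=0}^{N-1} I_n^i(x,y)\sin(2\pi n/N)}{\displaystyle\sum_{n=0}^{N-1} I_n^i(x,y)\cos(2\pi n/N)} \qquad (3)$$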

      where $I_n^i(x,y)$ is the modulated intensity captured by the camera at point $(x,y)$ when fringes of wavelength $\lambda_i$ are projected; $n$ is the phase-shift index, $0 \leqslant n < N$; and $N$ is the number of phase-shift steps.

      The multi-wavelength phase unwrapping method is given by Eq.(4):
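      The body of Eq.(4) is not preserved; a typical hierarchical three-wavelength heterodyne form consistent with the symbols defined below, unwrapping from the longest equivalent wavelength $\lambda_{123}$ down to $\lambda_1$, is:

      $$\Phi_{12}(x,y) = \varphi_{12}(x,y) + 2\pi \, Round\!\left(\frac{(\lambda_{123}/\lambda_{12})\,\varphi_{123}(x,y) - \varphi_{12}(x,y)}{2\pi}\right)$$
      $$\Phi_{1}(x,y) = \varphi_{1}(x,y) + 2\pi \, Round\!\left(\frac{(\lambda_{12}/\lambda_{1})\,\Phi_{12}(x,y) - \varphi_{1}(x,y)}{2\pi}\right) \qquad (4)$$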

      where $Round(\cdot)$ is the nearest-integer function; $\varphi_1(x,y)$, $\varphi_2(x,y)$, and $\varphi_3(x,y)$ are the phase functions for wavelengths $\lambda_1$, $\lambda_2$, and $\lambda_3$ obtained from Eq.(3); $\lambda_{12} = |\lambda_1\lambda_2/(\lambda_1 - \lambda_2)|$ is the equivalent wavelength of the fringe obtained by heterodyning the fringes of wavelengths $\lambda_1$ and $\lambda_2$; $\lambda_{23} = |\lambda_2\lambda_3/(\lambda_2 - \lambda_3)|$ and $\lambda_{123} = |\lambda_{12}\lambda_{23}/(\lambda_{12} - \lambda_{23})|$ are obtained in the same way, and their corresponding phases are $\varphi_{12}(x,y)$, $\varphi_{23}(x,y)$, and $\varphi_{123}(x,y)$.
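      For the wavelengths used here, these equivalent wavelengths work out to $\lambda_{12} = |16 \times 18/(16-18)| = 144$, $\lambda_{23} = |18 \times 21/(18-21)| = 126$, and $\lambda_{123} = |144 \times 126/(144-126)| = 1008$ pixels; that is, a single fringe period of $\varphi_{123}$ spans 1008 pixels.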

      To obtain the depth information of the measured object, a stereo matching algorithm based on the sum of squared differences (SSD) is used to compute disparity from the left and right unwrapped phases $\phi_l$ and $\phi_r$ obtained from Eq.(4). To reduce computational cost, epipolar rectification is first applied to $\phi_l$ and $\phi_r$; as shown in Fig.4, the rectification places the points to be matched in the left and right unwrapped phase maps on the same horizontal line.

      Figure 4.  Epipolar (stereo) geometry and epipolar constraint

      In Fig.4, $I_l$ and $I_r$ are the rectified left and right unwrapped phases, and $I_l(x,y)$ and $I_r(x,y)$ are the unwrapped phase values at point $(x,y)$. A rectangular subimage of size $(2m+1) \times (2n+1)$ centered on the point to be matched $(x_l,y_l)$ is taken as the reference subimage, and the matching point $I_r(x,y+d)$ for $I_l(x,y)$ is searched along the same epipolar line in $I_r$ ($d$ is the disparity between the two images). The matching cost $SSD(x,y,d)$ at that point is then given by Eq.(5):
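      The body of Eq.(5) is not preserved; the usual windowed SSD cost over the $(2m+1) \times (2n+1)$ reference subimage defined above is:

      $$SSD(x,y,d) = \sum_{i=-m}^{m}\sum_{j=-n}^{n}\big[I_l(x+i,\,y+j) - I_r(x+i,\,y+j+d)\big]^2 \qquad (5)$$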

      where the $d$ minimizing the matching cost $SSD(x,y,d)$ is the sought disparity. To further improve disparity accuracy, quadratic-curve sub-pixel interpolation [26] is used to convert the integer disparity to a sub-pixel disparity. The quadratic sub-pixel interpolation can be expressed as:
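      The body of Eq.(6) is not preserved; the standard parabolic-vertex interpolation over the three costs named below is:

      $$D_s(p) = D_{int}(p) + \frac{C_{SSD}(p,d'-1) - C_{SSD}(p,d'+1)}{2\big[C_{SSD}(p,d'-1) - 2C_{SSD}(p,d') + C_{SSD}(p,d'+1)\big]} \qquad (6)$$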

      where $D_s(p)$ and $D_{int}(p)$ are the sub-pixel and integer disparities at point $p$; $d'$ is the position of the minimum SSD matching cost at point $p$, and $d'-1$ and $d'+1$ are its left and right neighboring positions with the next-smallest costs; and $C_{SSD}(p,d')$, $C_{SSD}(p,d'-1)$, and $C_{SSD}(p,d'+1)$ are the SSD matching costs of point $p$ at positions $d'$, $d'-1$, and $d'+1$. According to binocular vision theory and the calibrated intrinsic and extrinsic parameters (the intrinsics include the camera focal length; the extrinsics include the rotation and translation between the two camera coordinate systems), the depth of the measured object is finally obtained as $z = fb/D_s$, where $f$ is the camera focal length, $b$ is the distance between the optical centers of the two virtual cameras, and $D_s$ is the sub-pixel disparity from Eq.(6).

      With the depth value $z$ of each image point obtained from stereo matching and depth conversion, and taking the left camera coordinate system as the world coordinate system, the $x$ and $y$ world coordinates of a point are:
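      The body of Eq.(7) is not preserved; the pinhole back-projection consistent with the definitions below is:

      $$x = \frac{(u_l - u_o)\,z}{f_x}, \qquad y = \frac{(v_l - v_o)\,z}{f_y} \qquad (7)$$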

      where $(u_l,v_l)$ are the pixel coordinates of the point in the left camera image; $(u_o,v_o)$ are the principal point coordinates of the left image; $f_x = f/{\rm d}x$ and $f_y = f/{\rm d}y$ are the normalized focal lengths in the $x$ and $y$ directions (${\rm d}x$, ${\rm d}y$ are the pixel pitches); and the baseline $b$ and the focal lengths $f_x$, $f_y$ are all camera calibration parameters. From the coordinates $(x,y)$ and the depth $z$ of each point, the world coordinates $(x,y,z)$ of all points are obtained, from which the dense point cloud of the object is reconstructed, completing the 3D reconstruction.
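      As a concrete illustration of this disparity-to-point-cloud step (the depth conversion $z = fb/D_s$ followed by Eq.(7)), the following is a minimal NumPy sketch; the function name and the calibration values in the usage comment are hypothetical:

```python
import numpy as np

def disparity_to_point_cloud(disp, f_x, f_y, u_o, v_o, b):
    """Convert a sub-pixel disparity map to an (N, 3) point cloud.

    disp      : (H, W) sub-pixel disparity D_s in pixels
    f_x, f_y  : normalized focal lengths in pixels
    u_o, v_o  : principal point of the left image
    b         : baseline between the two virtual optical centers
    """
    h, w = disp.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp > 0                      # discard unmatched pixels
    z = f_x * b / disp[valid]             # depth: z = f*b / D_s
    x = (u[valid] - u_o) * z / f_x        # Eq.(7), x coordinate
    y = (v[valid] - v_o) * z / f_y        # Eq.(7), y coordinate
    return np.column_stack([x, y, z])

# Hypothetical calibration values, for illustration only:
# cloud = disparity_to_point_cloud(disp, 1600.0, 1600.0, 640.0, 512.0, 60.0)
```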

    • Figure 5 shows the experimental setup. The DLP LightCrafter projector has an output resolution of 1280 pixel × 720 pixel and a refresh rate of 60 Hz. The camera is a Hikvision MV-CA013-21UM 1.3-megapixel, 170 fps, USB 3.0 monochrome industrial camera with a resolution of 1280 pixel × 1024 pixel and an 8 mm lens. The two biprisms have identical parameters and dimensions: the wedge angle $\gamma$ is $45^\circ$ and both right-angle sides are 50 mm long.

      Figure 5.  Diagram of experimental setup

      Limited by the biprism material, image capture is highly susceptible to ambient light and to light interaction between the two biprisms. A light shield is therefore added to the biprism-based virtual binocular system to remove the interference caused by refracted ambient light and prism reflections, and the raw images are rectified according to the camera calibration parameters to guarantee image quality. The measurement distance of the 3D reconstruction system is about 600 mm, the measurement range is about 200 mm × 320 mm, and the wavelengths of the fringes projected onto the measured object are 16, 18, and 21 pixels.

    • Images are captured with hardware triggering so that fringe projection and capture are synchronized. Twelve fringe images in total are projected for the three-frequency heterodyne; one of the captured sets is shown in Fig.6(a). As the analysis of the virtual binocular imaging model showed, the captured image of the measured object is mirrored left-right, so after splitting, each half-image is flipped horizontally. The processed images are shown in Figs.6(b) and 6(c).

      Figure 6.  Image preprocessing. (a) Original image; (b) Left camera image after processing; (c) Right camera image after processing
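      A minimal sketch of this split-and-flip preprocessing (OpenCV/NumPy; the file name is hypothetical, and the half-to-view mapping follows the mirroring described in the imaging-model section):

```python
import cv2

def split_virtual_views(img):
    """Split a raw virtual-binocular frame into left/right views.

    The rays crossing in front of the sensor mirror the scene, so the
    left virtual camera maps to the RIGHT half of the image (and vice
    versa), and both halves must be flipped horizontally.
    """
    h, w = img.shape[:2]
    left_view = cv2.flip(img[:, w // 2:], 1)   # right half -> left view
    right_view = cv2.flip(img[:, :w // 2], 1)  # left half -> right view
    return left_view, right_view

# raw = cv2.imread("fringe_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
# left, right = split_virtual_views(raw)
```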

    • To obtain the depth information of the measured object, the geometric relationship between the two virtual cameras must first be established. A 150 mm × 150 mm checkerboard calibration board with 11 × 8 inner corners is used to calibrate the virtual binocular system. The board position is changed repeatedly, making sure the board stays fully within the camera's field of view, while the camera captures the checkerboard images.

      Zhang's calibration method [27] is used to obtain the intrinsic and extrinsic parameters of the virtual binocular system. Fifteen groups of split calibration-board images are input and distortion correction is applied; the resulting reprojection errors are shown in Fig.7. The overall mean reprojection error is about 0.04 pixel, and the reprojection error of each checkerboard group is no more than 0.05 pixel.

      Figure 7.  Calibration reprojection error
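      A condensed OpenCV sketch of this calibration pipeline, under stated assumptions: the board geometry (11 × 8 inner corners) is from the text, while the square size, the `calibration_pairs` variable (15 split grayscale image pairs), and the flag choice are illustrative assumptions:

```python
import cv2
import numpy as np

CORNERS = (11, 8)   # inner-corner grid from the text
SQUARE = 12.5       # mm; assumed from a 150 mm board with 12 squares per side

# 3D corner coordinates of the flat board in its own frame (Z = 0)
objp = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for left_img, right_img in calibration_pairs:  # assumed loaded elsewhere
    okl, cl = cv2.findChessboardCorners(left_img, CORNERS)
    okr, cr = cv2.findChessboardCorners(right_img, CORNERS)
    if okl and okr:
        obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)

# Calibrate each virtual camera, then the stereo extrinsics (R, T)
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, left_img.shape[::-1], None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, right_img.shape[::-1], None, None)
rms, K1, D1, K2, D2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, left_img.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
```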

    • The multi-frequency heterodyne method is used to obtain the unwrapped phase of the measured object, and objects of several different shapes are reconstructed with the virtual binocular fringe structured light system to verify the robustness of the proposed method.

      Figures 8(a)-(b) and 8(e)-(f) show the fringe images of a plate-shaped and an arc-shaped object captured by the left and right virtual cameras. Figures 8(c) and 8(g) show the unwrapped phase maps of one virtual camera after filtering. After epipolar rectification, every candidate matching point in the rectified right unwrapped phase map lies in the same row as its corresponding point in the rectified left map. The rectified unwrapped phase maps are stereo-matched with the proposed method to obtain the depth information of the measured object and generate point cloud data. The 3D reconstruction results of the plate and arc objects are shown in Figs.8(d) and 8(h); the reconstructed point cloud models reproduce the shapes of the measured objects reasonably.

      Figure 8.  3D reconstruction process diagram. (a) Fringe image of left camera of plate model; (b) Fringe image of right camera of plate model; (c) Unwrapped phase of plate model; (d) Point cloud image of plate model; (e) Left camera fringe image of arc model; (f) Right camera fringe image of arc model; (g) Unwrapped phase of arc model; (h) Point cloud image of arc model

    • To verify the feasibility of the virtual binocular scheme, a comparison experiment was set up. Two high-precision matte ceramic standard spheres with nominal diameters of 38.1118 mm (±5 μm) and 38.1122 mm (±5 μm) were measured to verify the accuracy of the method. Under the same experimental conditions, the spheres were reconstructed repeatedly with both the virtual binocular system and a real binocular system; spheres were fitted to the resulting point clouds to obtain the reconstructed 3D dimensions. The real binocular system consists of two cameras with exactly the same parameters as those of the virtual binocular system. The feasibility of the virtual binocular scheme is verified by comparing the sphere dimensions obtained by the real and virtual binocular reconstruction systems against the true dimensions.

      Figures 9(a)-(b) and 9(d)-(e) show the fringe images captured by the virtual binocular system and the real binocular system, respectively. The 3D reconstruction results of the virtual and real binocular systems are shown in Figs.9(c) and 9(f); the point clouds reconstructed by both methods are arc-shaped, with details consistent with the spherical surface.

      Figure 9.  Reconstruction of point cloud with standard sphere. (a) Virtual binocular left fringe image of standard sphere; (b) Virtual binocular right fringe image of standard sphere; (c) Virtual binocular reconstruction point cloud image of standard sphere; (d) Real binocular left fringe image of standard sphere; (e) Real binocular right fringe image of standard sphere; (f) Real binocular reconstruction point cloud image of standard sphere

      After removing noise points from the point cloud, a sphere is fitted to obtain the fitted diameter, which is compared with the true diameter of the standard sphere to obtain the measurement error of each method (a minimal sketch of such a sphere fit is given after Tab.1). Part of the fitted diameter data is listed in Tab.1, where each group contains the fitting results of both standard spheres. Over repeated comparison experiments, the mean root mean square error of the real binocular fringe structured light measurement is about 0.0305 mm, and that of the proposed virtual binocular fringe structured light 3D reconstruction system is about 0.0379 mm.

      Group   Virtual stereo camera         Real stereo camera
              Ball 1/mm    Ball 2/mm        Ball 1/mm    Ball 2/mm
      1       38.1462      38.1631          38.1012      38.1441
      2       38.0310      38.0947          38.0624      38.1193
      3       38.1188      38.1349          38.1049      38.1249
      4       38.1406      38.1560          38.0837      38.1228
      5       38.0946      38.1016          38.1211      38.1792

      Table 1.  Comparison of virtual and real binocular reconstruction results
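      The sphere fitting used to produce Tab.1 can be sketched as a linear least-squares fit; this is a minimal illustration, with the `cloud` variable assumed from the reconstruction step:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit.

    points : (N, 3) array of point-cloud coordinates.
    Returns (center, diameter). Solves the linearized sphere equation
    x^2 + y^2 + z^2 = 2 c . p + (r^2 - |c|^2) in the least-squares sense.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    radius = np.sqrt(w[3] + center @ center)
    return center, 2.0 * radius

# center, d = fit_sphere(cloud)   # 'cloud' from the reconstruction step
# error = d - 38.1118             # deviation from the nominal diameter
```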

    • This paper proposed a biprism-based virtual binocular fringe structured light 3D reconstruction system. The system captures fringe images of the measured object encoded by the projected fringes; the multi-frequency heterodyne method and a stereo matching algorithm yield the unwrapped phase and depth information of the object, and point cloud data are generated to obtain its dimensions. In the comparison experiments, the real binocular fringe structured light system and the proposed virtual binocular system, measuring two standard spheres repeatedly under the same conditions, achieved mean root mean square errors of about 0.0305 mm and 0.0379 mm, respectively. The proposed virtual binocular system thus reaches reconstruction accuracy close to that of a real binocular system, although image resolution is reduced because the two virtual cameras share the imaging area of a single camera. In future work, image super-resolution reconstruction algorithms could be used to obtain higher-resolution fringe images and reconstruction results. In addition, the proposed virtual binocular fringe structured light system can be extended to virtual binocular fringe projection 3D measurement with color cameras, polarization cameras, and other imaging modes, as well as to projected-speckle structured light 3D measurement.
