The reconstruction system described in this paper consists of a projector, a main camera, an auxiliary camera, and a computer; the system model is shown in Fig. 1. The computer generates fringe-encoded patterns, and the projector projects several sinusoidal fringe images onto the surface of the object under test. The surface shape modulates the fringe images and shifts the fringe phase; the cameras capture the modulated fringe patterns, a phase-unwrapping algorithm recovers the absolute phase, and the three-dimensional topography of the measured object is reconstructed from the calibration results [11].
Phase-shifting techniques [12] shift a sinusoidal fringe N (N ≥ 3) times within one period, by a phase of $ 2\pi /N $ per step. This paper adopts the four-step phase-shifting method: the fringe is shifted three consecutive times, giving the intensity distributions $$ \left\{ \begin{gathered} {I_1}(u,v) = a(u,v) + b(u,v)\cos \left[ {\phi (u,v)} \right] \\ {I_2}(u,v) = a(u,v) + b(u,v)\cos \left[ {\phi (u,v) + \pi /2} \right] \\ {I_3}(u,v) = a(u,v) + b(u,v)\cos \left[ {\phi (u,v) + \pi } \right] \\ {I_4}(u,v) = a(u,v) + b(u,v)\cos \left[ {\phi (u,v) + 3\pi /2} \right] \\ \end{gathered} \right. $$ (1) where $ {I}_{i}(u,v) $ is the intensity at point $ (u,v) $ in the i-th phase-shifted image, $ a(u,v) $ is the average intensity, and $ b(u,v) $ is the modulation intensity. The phase $ \phi (u,v) $ is computed from Eq. (2). To guarantee reconstruction accuracy, the projected sinusoidal pattern usually contains more than one period, so the phase is wrapped into $ \pm \pi $ and shows a periodic distribution: $$ \phi (u,v) = \arctan \left[ {\frac{{{I_4}(u,v) - {I_2}(u,v)}}{{{I_1}(u,v) - {I_3}(u,v)}}} \right] $$ (2)
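As a minimal sketch of Eqs. (1)-(2) on synthetic fringe intensities (not the paper's captured data), the wrapped phase can be recovered with a quadrant-aware arctangent:

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    # Eq. (2): I4 - I2 = 2b*sin(phi) and I1 - I3 = 2b*cos(phi);
    # arctan2 keeps the full (-pi, pi] range of the wrapped phase
    return np.arctan2(I4 - I2, I1 - I3)

# simulate the four phase-shifted fringe images of Eq. (1) for a known phase map
phi_true = np.linspace(-3.0, 3.0, 7)     # wrapped phases inside (-pi, pi)
a, b = 128.0, 100.0                      # average and modulation intensity
I1, I2, I3, I4 = (a + b * np.cos(phi_true + s)
                  for s in (0.0, np.pi/2, np.pi, 3*np.pi/2))

print(np.allclose(wrapped_phase(I1, I2, I3, I4), phi_true))  # True
```

Using `arctan2` instead of a plain arctangent avoids the quadrant ambiguity of the ratio in Eq. (2).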
Because the surface reflectivity of the measured object and the background illumination are non-uniform, the black-white boundaries of the Gray-code patterns are not sharp cut-offs and deviate from an ideal binary distribution. After binarization, the edges of the decoded fringe orders therefore do not align exactly with the edges of the wrapped phase, causing phase-unwrapping errors. To address this problem, this paper unwraps the wrapped phase with the complementary Gray-code method [13]. Compared with the conventional Gray-code approach, it projects one additional Gray-code pattern, whose code-word width is half the period of the standard sinusoidal fringe, yielding the fringe orders $k_1$ and $k_2$. The absolute phase $\varPhi (u,v)$ can then be expressed as: $$ \varPhi (u,v) = \left\{ \begin{array}{ll} \phi (u,v) + 2\pi {k_2}(u,v), & \phi (u,v) \leqslant - \dfrac{\pi }{2} \\ \phi (u,v) + 2\pi {k_1}(u,v), & - \dfrac{\pi }{2} < \phi (u,v) < \dfrac{\pi }{2} \\ \phi (u,v) + 2\pi {k_2}(u,v) - 2\pi , & \phi (u,v) \geqslant \dfrac{\pi }{2} \end{array} \right. $$ (3)
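The case analysis in Eq. (3) can be sketched as below; here the orders k1 and k2 are simulated from a known absolute phase rather than decoded from real Gray-code images:

```python
import numpy as np

def unwrap(phi, k1, k2):
    # piecewise combination of Eq. (3)
    out = phi + 2*np.pi*k1                                    # -pi/2 < phi < pi/2
    out = np.where(phi <= -np.pi/2, phi + 2*np.pi*k2, out)
    out = np.where(phi >= np.pi/2, phi + 2*np.pi*k2 - 2*np.pi, out)
    return out

Phi_true = np.linspace(0.1, 6*np.pi, 50)      # ground-truth absolute phase
phi = np.angle(np.exp(1j*Phi_true))           # wrapped into (-pi, pi]
k1 = np.rint((Phi_true - phi) / (2*np.pi))    # order of the ordinary Gray code
k2 = np.where(phi >= 0, k1 + 1, k1)           # complementary order, edges at mid-fringe

print(np.allclose(unwrap(phi, k1, k2), Phi_true))  # True
```

Because k1 switches at the fringe edges (phase jumps) and k2 switches at mid-fringe, each branch of Eq. (3) uses the order whose transitions are farthest from the current phase, which is what makes the method robust to edge misalignment.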
During polynomial-fitting calibration, the projected pattern is easily disturbed by the pattern on the calibration board, which introduces phase errors and lowers the calibration accuracy. Zhang Song [14] proposed fitting a phase plane, but this does not fundamentally eliminate the excessive phase error. To mitigate the phase error, this paper calibrates the camera and projector with a polynomial-fitting method based on a gray-white checkerboard [15]. The calibration board is placed at a suitable position, the projector projects fringe patterns onto it, and the camera captures the patterns on the board. The board is then kept still while the camera captures a fringe-free image of it; the pose of the board is changed several times and the procedure repeated to obtain multiple fringe-free board images. The captured fringe patterns are unwrapped through Eq. (3) to obtain the phase values. OpenCV corner detection extracts the pixel coordinates of the corners on the fringe-free board. Following Zhang's calibration method [16], the relationship between pixel coordinates and world coordinates is established as in Eq. (4), from which the transformation matrix from pixel coordinates to world coordinates is obtained.
$$ s\left[ {\begin{array}{*{20}{c}} u \\ v \\ 1 \end{array}} \right] = {A_c}\left[ {{R_i},{T_i}} \right]\left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \\ {{Z_w}} \\ 1 \end{array}} \right] $$ (4) where $ ({X}_{w},{Y}_{w},{Z}_{w}) $ is the world coordinate of an arbitrary point on the checkerboard and $ (u,v) $ its corresponding pixel coordinate; $ s $ is a scale factor; ${A}_{c}$ is the intrinsic matrix; and $ \left[{R}_{i},{T}_{i}\right] $ is the extrinsic matrix of the checkerboard at position $ i $. For an arbitrary point on the checkerboard, let its world coordinate be $ ({X}_{w}, {Y}_{w},0) $ and its coordinate in the camera frame be $ ({X}_{c},{Y}_{c},{Z}_{c}) $; the transformation is: $$ \left[\begin{array}{l} X_{{c}} \\ Y_{{c}} \\ Z_{{c}} \end{array}\right]=\left[\begin{array}{ll} {R}_i & {T}_i \end{array} \right]\left[\begin{array}{c} X_w \\ Y_w \\ 0 \\ 1 \end{array}\right]=\left[\begin{array}{lll} r_{i 1} & r_{i 2} & {T}_i \end{array} \right]\left[\begin{array}{c} X_w \\ Y_w \\ 1 \end{array}\right] $$ (5) where $ {r}_{i1},{r}_{i2} $ are the first two columns of the rotation matrix $ {R}_{i} $. From Eqs. (4) and (5):
$$ s\left[ {\begin{array}{*{20}{c}} u \\ v \\ 1 \end{array}} \right] = {A_c}\left[ {\begin{array}{*{20}{c}} {{X_c}} \\ {{Y_c}} \\ {{Z_c}} \end{array}} \right] = {A_c}\left[ {\begin{array}{*{20}{c}} {{r_{i1}}}&{{r_{i2}}}&{{T_i}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \\ 1 \end{array}} \right] = H\left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \\ 1 \end{array}} \right] $$ (6) Writing Eq. (6) as a linear system gives: $$ \left[ {\begin{array}{*{20}{c}} {{m_{31}}u - {m_{11}}}&{{m_{32}}u - {m_{12}}} \\ {{m_{31}}v - {m_{21}}}&{{m_{32}}v - {m_{22}}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {{p_1} - {p_3}u} \\ {{p_2} - {p_3}v} \end{array}} \right] $$ (7) where $ {m}_{ij} $ is the element in row $ i $, column $ j $ of the matrix $ M=A\times R $, and $ {p}_{k} $ is the $ k $-th element of the vector $ P=A\times T $. Eq. (7) yields the world coordinate $ ({X}_{w},{Y}_{w}) $ of the checkerboard pixel $ (u,v) $, from which the point's coordinate in the camera frame follows as: $$ {\left[ {{X_c},{Y_c},{Z_c}} \right]^{\rm{T}}} = R{\left[ {{X_w},{Y_w},0} \right]^{\rm{T}}} + T $$ (8) For every pixel in the camera's field of view, the relationship between the camera-frame coordinate $ ({X}_{c},{Y}_{c},{Z}_{c}) $ and the absolute phase $\varPhi (u,v)$ is established by approximating each pixel with a third-order polynomial: $$ \left\{ {\begin{array}{*{20}{c}} {{X_c} = {a_0} + {a_1}\varPhi + {a_2}{\varPhi ^2} + {a_3}{\varPhi ^3}} \\ {{Y_c} = {b_0} + {b_1}\varPhi + {b_2}{\varPhi ^2} + {b_3}{\varPhi ^3}} \\ {{Z_c} = {c_0} + {c_1}\varPhi + {c_2}{\varPhi ^2} + {c_3}{\varPhi ^3}} \end{array}} \right. $$ (9) where the coefficients $a_i$, $b_i$, $c_i$ (i = 0, 1, 2, 3) contain the structural parameters of the system, and the fit produces a table of polynomial coefficients.
Polynomial fitting thus establishes a per-pixel phase-height mapping: in subsequent 3D reconstruction, for any point $ \left(u,v\right) $ in the pixel coordinate system, its camera-frame coordinate $ ({X}_{c},{Y}_{c},{Z}_{c}) $ is obtained from its absolute phase $\varPhi (u,v)$ by looking up the coefficient table.
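The per-pixel fit of Eq. (9) and the later table look-up can be sketched as follows for one pixel and one coordinate, on synthetic calibration data (note `numpy.polyfit` returns coefficients highest power first):

```python
import numpy as np

# synthetic calibration samples for one pixel: absolute phase vs. camera-frame Z
c_true = [-1e-4, 0.05, -8.0, 400.0]     # assumed cubic coefficients c3..c0 (Eq. (9))
Phi = np.linspace(10.0, 40.0, 12)       # absolute phases at the 12 calibration poses
Zc = np.polyval(c_true, Phi)            # corresponding camera-frame Z values

# one entry of the per-pixel coefficient look-up table, keyed by pixel coordinate
coeff_table = {(640, 512): np.polyfit(Phi, Zc, 3)}

# reconstruction: look up the pixel's coefficients, evaluate at the measured phase
Z = np.polyval(coeff_table[(640, 512)], 25.0)
print(abs(Z - np.polyval(c_true, 25.0)) < 1e-6)  # the fit reproduces the cubic
```

A real system stores three such polynomials (for X, Y, Z) per pixel; the pixel key and the coefficient values above are illustrative only.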
In industrial inspection and intelligent manufacturing, when a scene contains several targets at different depths that all require 3D reconstruction, some targets may fall outside the camera's depth of field, causing the structured-light reconstruction to fail. Such scenes require repeatedly adjusting and re-calibrating the structured-light system, which limits its robustness and flexibility and causes considerable inconvenience in practice.
To obtain accurate 3D information of objects beyond the reconstruction depth of field, an auxiliary camera is used to build a phase-height mapping outside that range. The relative positions of the cameras and the projector are shown in Fig. 2, where $ P1 $ and $ P2 $ are the positions of the main and auxiliary cameras, respectively. The optical imaging depth-of-field model [17] (Fig. 3) gives a first estimate of each camera's depth of field; after fixing the main camera at $ P1 $, the auxiliary camera is placed at a candidate position $ P2 $ so that the two cameras' reconstruction depth-of-field ranges overlap. Because the complementary Gray-code algorithm is used and the cameras' depth of field does not exceed that of the projector, the accuracy of the absolute phase $\varPhi \left(u,v\right)$ is guaranteed. Joint calibration of the two cameras yields the rotation matrix R and translation matrix T from the auxiliary camera to the main camera. The calibration board is then placed, and moved, within the region that lies outside the main camera's reconstruction depth of field but inside the auxiliary camera's. Since the board is beyond the main camera's depth of field, the corner pixel coordinates obtained by the main camera contain errors; the auxiliary camera therefore provides the corner pixel coordinates in its own frame, while the main camera captures the modulated fringes projected onto the board, which are unwrapped by the four-step phase shift and the complementary Gray-code method to obtain the true absolute phase. The experimental flow is shown in Fig. 4.
The corner pixel coordinates acquired by the auxiliary camera are converted into the auxiliary-camera frame, giving the coordinate $ ({X}_{c1},{Y}_{c1},{Z}_{c1}) $; with the rotation matrix R and translation matrix T from the stereo calibration, this is transformed into the main-camera frame as $ ({X}_{c2},{Y}_{c2},{Z}_{c2}) $: $$ {\left[ {{X_{c2}},{Y_{c2}},{Z_{c2}}} \right]^{\rm{T}}} = R{\left[ {{X_{c1}},{Y_{c1}},{Z_{c1}}} \right]^{\rm{T}}} + T $$ (10) The main-camera coordinates given by Eq. (10) are the accurate camera coordinates corresponding to pixels beyond the main camera's depth of field. From $ ({X}_{c2},{Y}_{c2},{Z}_{c2}) $ and the absolute phase $\varPhi \left(u,v\right)$ unwrapped at the same moment, polynomial fitting builds the phase-height mapping beyond the main camera's reconstruction depth of field; querying this coefficient table then reconstructs objects outside the main camera's reconstruction depth of field and extends the camera's reconstruction range.
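Eq. (10) is the usual rigid-body transform between the two camera frames; a minimal sketch, where R and T are illustrative stand-ins rather than the calibrated values:

```python
import numpy as np

theta = np.deg2rad(5.0)                        # assumed small yaw between the cameras
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([80.0, 0.0, 0.0])                 # assumed baseline along the projector axis, mm

P_aux = np.array([10.0, 5.0, 600.0])           # a corner in the auxiliary-camera frame, mm
P_main = R @ P_aux + T                         # Eq. (10): main-camera coordinates
```

These transformed corner coordinates, paired with the absolute phase captured by the main camera at the same moment, are the samples that feed the polynomial fit of Eq. (9) for the out-of-depth-of-field mapping.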
With the phase-height mapping beyond the main camera's reconstruction depth of field established via the auxiliary camera, different mappings can be used for reconstruction according to the object's distance from the main camera. However, the actual distance between the object and the main camera cannot be measured accurately in operation, so this paper establishes the relationship between the phase $\varPhi \left(u,v\right)$ and the shooting distance $ Z $ to select the appropriate mapping adaptively for 3D reconstruction. The phase-method model [18] is shown in Fig. 5, where ${{\varOmega }}_{w}$, ${{\varOmega }}_{c}$, ${O}_{p}$, ${O}_{c}$ denote the reference coordinate system, the camera coordinate system, the projector center, and the optical center, respectively; $ {o}_{1}mn $ is the image-plane coordinate system; the $ OXY $ plane is parallel to the projection plane; the $ Y $ axis is parallel to the grating fringes; and the $ Z $ axis passes through the projection center $ {O}_{p} $. The relationship between the phase and the world coordinate $ ({X}_{w},{Y}_{w},{Z}_{w}) $ is given by Eq. (11): $$ \varPhi = \frac{{(2\pi l/{\lambda _0}){X_w} - {\varPhi _0}{Z_w} + l{\varPhi _0}}}{{l - {Z_w}}} $$ (11) where $ ({X}_{w},{Y}_{w},{Z}_{w}) $ is the 3D coordinate in the world frame; $ l $ is the distance between the points $ O $ and $ {O}_{p} $; $ {\lambda }_{0} $ is the grating pitch; and ${\varPhi }_{0}$ is the phase at the origin. In Eq. (11), $ X $ can be treated as a constant, and since the $ Y $ axis is parallel to the grating fringes, every point on the $ Y $ axis has the same phase value. With the acquired pixel coordinate $ \left(u,v\right) $ and world coordinate $ ({X}_{w},{Y}_{w},{Z}_{w}) $, fixing $ X $ and $ Y $ at given values and using the corresponding $ Z $ and phase $\varPhi \left(u,v\right)$ to solve for the unknowns, Eq. (11) can be rewritten as: $$ \varPhi = \frac{{A - BZ}}{{l - Z}} $$ (12) where $A=\left(\dfrac{2\pi l}{{\lambda }_{0}}\right)X+l{\varPhi }_{0}$ and $B={\varPhi }_{0}$. Eq. (12) shows that the distance $ Z $ between the object and the main camera is monotonically related to the phase $\varPhi$, so comparing object-camera distances can be replaced by comparing phase values. Using the optical depth-of-field model, the measured focusing distance of the main camera is the critical $ Z $ value between the inside and outside of the depth of field; substituting it into Eq. (12) gives the phase threshold separating the two regions.
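Under illustrative constants (A, B, and l below are assumptions, not calibrated values), Eq. (12) gives a monotonic phase-distance curve and hence a usable threshold:

```python
# Eq. (12): phase as a function of shooting distance Z. For Z < l the derivative
# dPhi/dZ = (A - B*l)/(l - Z)^2 has a constant sign, hence the monotonicity.
A, B, l = 100000.0, 40.0, 1500.0      # assumed constants; l = |O O_p| in mm

def phase_at(Z):
    return (A - B*Z) / (l - Z)

Z_focus = 500.0                       # main camera's measured focusing distance, mm
threshold = phase_at(Z_focus)         # phase at the depth-of-field boundary

# with these constants the curve is increasing, so a farther object exceeds the threshold
print(phase_at(650.0) > threshold)    # True
```

Whether "beyond the depth of field" corresponds to phases above or below the threshold depends on the sign of A - B*l for the actual system; only the comparison direction changes.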
A 3D measurement system was built to verify the effectiveness of the proposed depth-of-field extension scheme. The platform includes two industrial cameras (resolution 1280 pixel × 1024 pixel) with a pixel size of δ = 4.8 μm and an effective image-plane size of 6.144 mm × 4.915 mm. The cameras are fitted with MH1218M lenses, with a maximum focal length f of 12 mm and a maximum aperture F of 1.8. A DLP-III stand-alone light-engine digital projector (resolution 1280 pixel × 720 pixel) is used. Seven-bit (n = 7) complementary Gray code is adopted, with the sixth Gray-code pattern having a stripe width of 20 pixel; the four-step phase shift uses grating fringes with a period of 20 pixel.
To evaluate the performance of the proposed polynomial-fitting calibration method, a precision-machined stepped surface was measured; the recovered 3D topography of the steps is shown in Fig. 6. Table 1 lists the true values, the values measured by the proposed method, and their absolute errors. The results show that the maximum absolute error between the measured and true step spacings is 0.041 mm.
Figure 6. Measurement results of the steps. (a) Photograph of the steps; (b) 3D topography of the steps
Table 1. Step measurement results (Unit: mm)
Actual distance    Measured distance    Absolute error
9.985              10.023               0.038
9.008              9.041                0.033
8.036              7.995                0.041
6.990              7.012                0.022
9.986              10.025               0.039
To obtain sharp images, the lens focal length is set to f = 8 mm, and according to the measurement requirements the focusing distance is chosen as d = 500 mm.
$ CoC $ denotes the permissible circle-of-confusion diameter: $$ CoC=\frac{\text{image-plane diagonal length}}{1\;730} $$ (13) Front depth of field: $$ \Delta L1=\frac{F \cdot CoC \cdot {d}^{2}}{{f}^{2}+F \cdot CoC \cdot d} $$ (14) Rear depth of field: $$ \Delta L2=\frac{F \cdot CoC \cdot {d}^{2}}{{f}^{2}-F \cdot CoC \cdot d} $$ (15) Total depth of field: $$ \Delta L=\Delta L1+\Delta L2 $$ (16) where the image-plane diagonal is the effective image-plane diagonal length, 7.871 mm. The calculation gives $ CoC $ = 0.0045 mm, a front depth of field ∆L1 of 83.00 mm, a rear depth of field ∆L2 of 124.26 mm, and a total depth of field $ \Delta L $ of 207.26 mm. Considering the angle between the camera and the projector, the camera's actual reconstruction depth of field is smaller than the computed value and is estimated at about 200 mm. With the main camera and projector fixed, the auxiliary camera is moved so that its front depth of field overlaps the rear depth of field of the main camera, the two cameras being 80 mm apart along the projector axis. The nearest reconstructable point of the dual-camera system is then about 400 mm from the main camera and the farthest about 700 mm. The projector is adjusted so that its depth-of-field range exceeds 300 mm.
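The depth-of-field figures quoted above can be reproduced from Eqs. (13)-(16); the aperture number F = 5.6 used below is an assumption (it is the value that reproduces the quoted ΔL1 and ΔL2):

```python
diag = 7.871               # effective image-plane diagonal, mm
f, d, F = 8.0, 500.0, 5.6  # focal length and focusing distance (mm); F assumed

CoC = diag / 1730                          # Eq. (13): ~0.0045 mm
dL1 = F*CoC*d**2 / (f**2 + F*CoC*d)        # Eq. (14): front depth of field
dL2 = F*CoC*d**2 / (f**2 - F*CoC*d)        # Eq. (15): rear depth of field
dL = dL1 + dL2                             # Eq. (16): total depth of field
print(round(dL1, 2), round(dL2, 2), round(dL, 2))   # 83.0 124.26 207.26
```

Note the asymmetry: the rear depth of field (minus sign in the denominator of Eq. (15)) is always larger than the front one, which is why the auxiliary camera is placed to cover the far side.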
To verify the feasibility of the proposed depth-of-field extension method, four different objects were reconstructed with the polynomial-fitting approach both inside and beyond the reconstruction depth of field; the results are shown in Fig. 7. Fig. 7(a) shows photographs of the objects; Fig. 7(b) shows objects placed inside the main camera's depth of field and reconstructed with the in-depth-of-field phase-height mapping; Fig. 7(c) shows the same objects reconstructed with the out-of-depth-of-field mapping; Fig. 7(d) shows objects placed beyond the main camera's depth of field and reconstructed with the in-depth-of-field mapping; and Fig. 7(e) shows objects beyond the depth of field reconstructed with the out-of-depth-of-field mapping. Fig. 7(d) shows that applying the in-depth-of-field mapping to objects beyond the main camera's reconstruction depth of field produces misaligned point clouds and stretched reconstructions, whereas the improved method, which builds a phase-height mapping beyond the main camera's depth of field, reconstructs them correctly, as shown in Fig. 7(e). This experiment demonstrates that the proposed depth-of-field-extended 3D reconstruction method faithfully recovers the true shape features of the objects.
Figure 7. Local mapping reconstruction. (a) Photographs of the objects; (b) Objects within the reconstruction depth of field, reconstructed with the in-depth-of-field mapping; (c) Objects within the reconstruction depth of field, reconstructed with the out-of-depth-of-field mapping; (d) Objects beyond the reconstruction depth of field, reconstructed with the in-depth-of-field mapping; (e) Objects beyond the reconstruction depth of field, reconstructed with the out-of-depth-of-field mapping
On the basis of the separate in- and out-of-depth-of-field phase-height mappings, a global phase-height mapping was also built from all phase coordinates and absolute phases acquired inside and outside the reconstruction depth of field, i.e. the objects were reconstructed without considering their distance from the main camera; the results are shown in Fig. 8.
Figure 8. Global mapping reconstruction. (a) Objects within the depth of field, reconstructed with the global mapping; (b) Objects beyond the depth of field, reconstructed with the global mapping
Fig. 8 shows that reconstruction with the global phase-height mapping suffers from holes and misaligned point clouds compared with the local phase-height mappings used in Fig. 7(b) and Fig. 7(e); it is inferior to reconstruction with the corresponding local mapping. To select the phase-height mapping automatically for objects at different positions, this paper uses the phase value as a threshold reflecting the distance between the object and the camera.
The calibration board was moved 34 times from the camera along the axial direction. Starting from calibration frame N = 0, the board moves along the axis away from the camera; at N = 5 it reaches the edge of the main camera's depth of field, after which it moves back along the axis toward the camera. Repeating these steps, the calibration inside the main camera's depth of field is completed at N = 18. At N = 19 the board again moves along the axis away from the camera; at N = 28 it reaches the edge of the auxiliary camera's depth of field and then moves back toward the camera; at N = 34, the calibration outside the main camera's depth of field is completed. The phase of the image center point is extracted, and the relationship between the phase and the calibration frame index is shown in Fig. 9.
Because the calibration frames are captured in a specific order of distances from the camera, the frame index reflects the distance to the camera. Fig. 9 shows that the frame index and the phase are linearly related within each interval, which verifies that the phase is linear in the shooting distance. For global 3D reconstruction it therefore suffices to take the phase at the critical point of the main camera's depth of field as a threshold: comparing measured phases with this threshold adaptively selects the in- or out-of-depth-of-field phase-height mapping, and the object is reconstructed with the corresponding polynomial coefficient table.
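The adaptive selection described above can be sketched as follows; the coefficient tables and the threshold are dummies (a real system holds one polynomial per pixel, from the coefficient tables of Eq. (9)):

```python
import numpy as np

def reconstruct_Z(Phi, threshold, coef_in, coef_out):
    # per-pixel choice between the in- and out-of-depth-of-field mappings
    Z_in = np.polyval(coef_in, Phi)     # in-DOF phase-height polynomial
    Z_out = np.polyval(coef_out, Phi)   # out-of-DOF polynomial from the auxiliary camera
    return np.where(Phi <= threshold, Z_in, Z_out)

coef_in = [0.0, 0.0, -2.0, 450.0]      # dummy cubic for the in-DOF mapping
coef_out = [0.0, 0.0, -2.0, 470.0]     # dummy cubic for the out-of-DOF mapping
Phi = np.array([60.0, 90.0])           # phases on either side of the threshold
Z = reconstruct_Z(Phi, 80.0, coef_in, coef_out)
print(Z)   # first pixel uses the in-DOF table, second the out-of-DOF table
```

The comparison direction (`<=` versus `>=`) follows the sign of the phase-distance slope discussed for Eq. (12).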
Research on 3D reconstruction technology of extended depth of field based on auxiliary camera
Abstract: To address the reconstruction errors that arise in surface-structured-light 3D reconstruction when the measured object exceeds the reconstruction depth of field, a depth-of-field-extended 3D reconstruction technique based on an auxiliary camera is proposed, and a phase threshold is used to reconstruct objects inside and outside the reconstruction depth of field adaptively. The absolute phase is obtained by combining four-step phase shifting with complementary Gray code, and the camera and projector are calibrated by the polynomial-fitting method. A method of building the phase-height mapping beyond the reconstruction depth of field with the help of an auxiliary camera is proposed. Experimental results show that the method enlarges the reconstruction depth-of-field range by about 50%, greatly extending the reconstruction range of surface structured light. Abstract:
Objective In surface-structured-light three-dimensional reconstruction, the limited reconstruction depth of field causes reconstruction errors when the measured object lies beyond it. In large scenes requiring a long longitudinal shooting range, a single shot cannot meet the reconstruction requirements; the structured-light system must be moved along the longitudinal direction and re-calibrated, which increases the complexity and repetitiveness of the task. In this paper, a depth-of-field-extended 3D reconstruction technology based on an auxiliary camera is proposed, and objects inside and outside the reconstructed depth of field are reconstructed adaptively with the help of a phase threshold. Methods The absolute phase is obtained by combining the four-step phase shift with complementary Gray code, and the camera and projector are calibrated by the polynomial-fitting method. With the help of the depth-of-field calculation model, the depth-of-field ranges of the main camera and the auxiliary camera are calculated and the devices are fixed at specific positions (Fig.2). The auxiliary camera is used to obtain the pixel coordinates of the calibration board beyond the depth of field reconstructed by the main camera, and the joint calibration results of the two cameras convert them into the main-camera coordinate system. Combined with the phase values obtained by the main camera, the phase-height mapping beyond the depth of field reconstructed by the main camera is established (Fig.4). Based on the traditional phase-method model, the relationship between phase and shooting distance is quantitatively analyzed, and a phase-based adaptive threshold is proposed. Results and Discussions Three-dimensional reconstruction is carried out with different mapping relationships for four different objects (Fig.7). The results of the traditional method (Fig.7(d)) and of the proposed method (Fig.7(e)) are shown respectively, and the contrast in reconstruction quality is obvious. A global mapping is also built to reconstruct objects inside and outside the reconstructed depth of field; its reconstruction quality is inferior to that of the corresponding local mappings (Fig.8), so the phase adaptive threshold is introduced. During system calibration, a strict order of calibration frames and shooting distances is followed to verify the linear relationship between phase and shooting distance (Fig.9), which supports the correctness of the adaptive threshold. Conclusions Three-dimensional reconstruction of a step target inside the main camera's depth of field shows a maximum error of 0.041 mm between the point-cloud data and the real data. With the help of the auxiliary camera, the mapping relationship beyond the main camera's reconstructed depth of field is established, and the reconstructed depth-of-field range of this scene is quantitatively calculated from the depth-of-field calculation model. Experiments show that the method enlarges the reconstructed depth-of-field range by about 50%, which greatly improves the reconstruction range of surface structured light.
[1] Zuo Chao, Zhang Xiaolei, Hu Yan, et al. Has 3D finally come of age?—An introduction to 3D structured-light sensor [J]. Infrared and Laser Engineering, 2020, 49(3): 0303001. (in Chinese)
[2] He Boxia, Zhang Zhisheng, Xu Sunhao, et al. Research on high-precision machine vision measurement method for large scale parts [J]. China Mechanical Engineering, 2009, 20(1): 5-10. (in Chinese)
[3] Mo Xutao. Study on extended depth of field optical imaging system [D]. Tianjin: Tianjin University, 2008. (in Chinese)
[4] Feng X F, Fang B. Algorithm for epipolar geometry and correcting monocular stereo vision based on a plane mirror [J]. Optik - International Journal for Light and Electron Optics, 2021, 226(7): 165890.
[5] Chen Yuanze, Zhou Fuqiang, Zhang Wanning, et al. Analysis of imaging optical characteristics of vision sensor with single camera and double spherical mirrors [J]. Acta Optica Sinica, 2022, 42(10): 1015003. (in Chinese)
[6] Wu G, Masia B, Jarabo A, et al. Light field image processing: an overview [J]. IEEE Journal of Selected Topics in Signal Processing, 2017, 11(7): 926-954. doi: 10.1109/JSTSP.2017.2747126
[7] Lin Zhaosu, Wang Yangyundou, Wang Hao, et al. Expansion of depth-of-field of scattering imaging based on DenseNet [J]. Acta Optica Sinica, 2022, 42(4): 0436001. (in Chinese)
[8] Guo Xiaohu, Zhao Chenxiao, Zhou Ping, et al. Application and analysis of the extension of depth of field and focus in zoom system [J]. Optical Technique, 2019, 45(3): 263-268. (in Chinese)
[9] Jiang Yate. Research on surface reconstruction of multiple objects with large depth-of-field based on structured light field [D]. Wuhan: Huazhong University of Science and Technology, 2021. (in Chinese)
[10] Wang Jiahua, Du Shaojun, Zhang Xuanzhe, et al. Design of focused light field computational imaging system with four types of focal lengths [J]. Infrared and Laser Engineering, 2019, 48(2): 0218003. (in Chinese)
[11] Zhang Qican, Wu Zhoujie. Three-dimensional imaging technique based on Gray-coded structured illumination [J]. Infrared and Laser Engineering, 2020, 49(3): 0303004. (in Chinese)
[12] Su Xianyu, Zhang Qican, Chen Wenjing. Three dimensional imaging based on structured illumination [J]. Chinese Journal of Lasers, 2014, 41(2): 0209001. (in Chinese)
[13] Zhang Q, Su X, Xiang L, et al. 3-D shape measurement based on complementary Gray-code light [J]. Optics and Lasers in Engineering, 2012, 50(4): 574-579. doi: 10.1016/j.optlaseng.2011.06.024
[14] Zhang S. Flexible and high-accuracy method for uni-directional structured light system calibration [J]. Optics and Lasers in Engineering, 2021, 143(9): 106637.
[15] Liu Jinyue, Zhai Zhiguo, Jia Xiaohui, et al. Research on rotating splicing of point cloud in workpiece wall based on surface structured light [J]. Infrared and Laser Engineering, 2022, 51(9): 20210952. (in Chinese)
[16] Zhang Z. A flexible new technique for camera calibration [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334. doi: 10.1109/34.888718
[17] Ohte A, Tsuzuki O, Mori K. A practical spherical mirror omnidirectional camera [C]//International Workshop on Robotic Sensors: Robotic Sensor Environments, 2005.
[18] Gai Shaoyan, Da Feipeng. A new model of 3D shape measurement system based on phase measuring profilometry and its calibration [J]. Acta Automatica Sinica, 2007, 33(9): 902-910. (in Chinese)