The light field is, in essence, the totality of the radiance functions of all rays propagating in every direction through space, which can be fully described by the plenoptic function. When a ray propagates through the light field, however, only the two-dimensional position information (u, v) and two-dimensional direction information (θ, φ) carried by the ray are considered. In this case, the light field data can be parameterized by two parallel planes. As shown in Fig. 1, a ray intersects the lens plane (u, v) and the sensor plane (s, t) at one point each, forming the 4D light field function L(u, v, s, t). The distance between the lens plane and the sensor plane is d; the relationship between the two intersection points describes the propagation direction of the ray, and the intensity of the ray is recorded by the sensor. Thus, the intensity and direction of a ray are represented by the coordinates of its intersections with the two planes.
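As a concrete illustration, the two-plane parameterization can be sketched in code: given the intersection points (u, v) and (s, t) on two parallel planes a distance d apart, the ray's origin and direction follow directly. This is a minimal sketch; the function name and the coordinate convention (lens plane at z = 0, sensor plane at z = d) are illustrative assumptions, not from the paper.

```python
import numpy as np

def ray_from_two_planes(u, v, s, t, d):
    """Recover a ray from its two-plane parameterization.

    (u, v) is the intersection with the lens plane (placed at z = 0),
    (s, t) the intersection with the sensor plane (at z = d).
    Returns the ray origin and its unit direction vector.
    """
    p0 = np.array([u, v, 0.0])   # point on the lens plane
    p1 = np.array([s, t, d])     # point on the sensor plane
    direction = p1 - p0
    return p0, direction / np.linalg.norm(direction)

origin, direction = ray_from_two_planes(1.0, 2.0, 1.5, 2.0, 10.0)
```

The two intersection points fully determine the ray, which is why the 4-tuple (u, v, s, t) suffices as the light field coordinate.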
In this paper, the epipolar plane analysis method is mainly used to estimate the depth information of the scene, so its principle is described below. The 3D scene is described by a single-point sampling model, and the two-plane parameterization of the 4D light field is converted into a geometric image for analysis. In Fig. 2(a), rays diverging from a point P(px, py, pz) in space intersect the two planes (u, v) and (s, t), which are a distance d apart, at one point each.
Figure 2. (a) Geometric representation of the two-plane parameterization; (b) Schematic diagram of the geometric analysis in the u-s plane
Note that the light field image acquired by the camera in the experiment consists of images from different viewpoints: u and v denote the relative physical position of the camera as it is translated in the spatial coordinate system; for example, (u1, v1) denotes the sub-image captured by the camera at row 1, column 1. The coordinates s and t describe the positions of pixels within the corresponding sub-image. Since these quantities are expressed in different units, the units must be unified before computation, e.g., by converting s and t to physical dimensions.
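The unit unification described above amounts to scaling the camera indices and pixel indices into a common physical unit. The sketch below stores such a light field as a 4D array L[u, v, s, t]; the array shape, camera step, and pixel pitch are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative light field from a translating camera: u, v index camera
# positions on a 3x3 grid, s, t index pixels within each sub-image.
n_u, n_v, n_s, n_t = 3, 3, 480, 640
L = np.zeros((n_u, n_v, n_s, n_t))

camera_step = 5.0     # mm between adjacent camera positions (assumed)
pixel_pitch = 0.005   # mm per sensor pixel (assumed)

def to_physical(u_idx, s_idx):
    """Convert a camera index and a pixel index to the same unit (mm)."""
    return u_idx * camera_step, s_idx * pixel_pitch

u_mm, s_mm = to_physical(1, 100)
```

Only after this conversion do slopes in the u-s plane (computed below) carry geometric meaning.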
As shown in Fig. 2(b), consider the data along one direction only, e.g., u and s. First, by the principle of similar triangles, the correspondence between u, s and the coordinates of point P is:
$$\frac{r_{s_1} - r_{u_1}}{d} = \frac{r_{s_1} - p_x}{p_z}$$ (1)

$$\frac{r_{s_2} - r_{u_2}}{d} = \frac{r_{s_2} - p_x}{p_z}$$ (2)

Then the slope of the line, $\dfrac{\Delta u}{\Delta s}$, can be expressed as:

$$\frac{\Delta u}{\Delta s} = 1 - \frac{d}{p_z}$$ (3)

Similarly, the relation between v and t for the coordinates of point P can be obtained:
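Equation (3) can be inverted to recover depth from a measured slope: from Δu/Δs = 1 − d/p_z it follows that p_z = d/(1 − Δu/Δs). A one-line sketch (the function name is illustrative):

```python
def depth_from_slope(slope, d):
    """Invert Eq. (3): slope = 1 - d/p_z, hence p_z = d / (1 - slope)."""
    return d / (1.0 - slope)

# A point at p_z = 2d gives slope 1 - d/(2d) = 0.5:
depth_from_slope(0.5, 10.0)   # -> 20.0
```

Note that a slope of exactly 1 corresponds to a point at infinity, where the depth is undefined.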
$$\frac{\Delta v}{\Delta t} = 1 - \frac{d}{p_z}$$ (4)

In general, $r_u$ and $r_s$ can be written simply as u and s (and likewise for v and t). The two pairs of equations (for u, v and s, t) can then be combined into one, and the subset of the 4D ray space represented by point P can be described as:
$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} s & p_x \\ t & p_y \end{bmatrix} \begin{bmatrix} 1 - d/p_z \\ d/p_z \end{bmatrix}$$ (5)

where $p_x$ and $p_y$ can be obtained by rearranging this equation, and both can be expressed in terms of $p_z$, as shown in Eq. (6):
$$\begin{bmatrix} p_x \\ p_y \end{bmatrix} = \begin{bmatrix} s & u \\ t & v \end{bmatrix} \begin{bmatrix} 1 - p_z/d \\ p_z/d \end{bmatrix}$$ (6)

When the object distance is much larger than the image distance, d can be approximated by the focal length, so its value can be determined accurately by calibrating the camera. Lens distortion is, of course, also corrected during this calibration. The u-s plane analysis shows that a point P in space is imaged in multiple sub-images and that these pixels can be fitted to a straight line; the depth of P is therefore determined by the slope of the fitted line through Eq. (3). The same holds for the v-t plane.
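The final step, fitting the images of one scene point across sub-images to a line and reading depth off its slope, can be sketched with an ordinary least-squares fit. The synthetic sample values and the helper name below are illustrative assumptions; real data would additionally require outlier handling.

```python
import numpy as np

def depth_from_epi(s_coords, u_coords, d):
    """Fit a line u = a*s + b to the images of one scene point across
    sub-images, then recover depth from the slope via a = 1 - d/p_z."""
    a, b = np.polyfit(s_coords, u_coords, 1)   # least-squares line fit
    return d / (1.0 - a)

# Synthetic samples from a point at p_z = 30 with d = 10 (slope = 2/3):
d, p_z = 10.0, 30.0
s = np.array([0.0, 1.0, 2.0, 3.0])
u = (1.0 - d / p_z) * s + 0.5
```

Fitting a line through several sub-image observations, rather than using a single pair, averages out pixel-localization noise in the slope estimate.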
Research on 3D imaging technology of light field based on structural light marker
Abstract: Since 4D light field data record the direction and intensity of all rays in a scene, the 3D coordinates of corresponding scene points can be obtained by analyzing and computing these data. However, the refocusing and multi-view methods currently used to process light field data struggle to determine the corresponding rays when the scene contains few features, so the reconstructed 3D data are both sparse and of low accuracy. To address this, a method is proposed that marks the 3D scene using structured light projection, establishes the correspondence between rays precisely from the phase markers, and then rapidly computes the 3D data of the scene. Because the 4D light field matrix stores ray phase rather than intensity as in conventional methods, and also records ray information from different directions, the method not only improves the accuracy of conventional light field 3D reconstruction but also resolves the occlusion and specular reflection problems of existing structured light projection 3D measurement methods. The feasibility and accuracy of the proposed method were verified experimentally.