2020 Vol. 49, No. 6
Sweat latent fingerprints are the most common type of fingerprint found at crime scenes; their features fade quickly and are not easily detected. Given these characteristics, ultraviolet (UV) polarization imaging was applied to their detection. Compared with a traditional intensity image, a polarization-parameter image can improve target contrast and help distinguish targets against different backgrounds. However, UV polarization imaging is sensitive to the observation angle, waveband and substrate material. Therefore, through a set of designed experiments, the variation of the UV polarized-reflection characteristics of sweat latent fingerprints with angle, waveband and substrate material was analyzed. The results show that sweat latent fingerprints exhibit regular polarization characteristics at different angles; among the four spectral polarization channels provided by the system, the near-ultraviolet bands offer good repeatability and discriminability; and the polarization characteristics of different substrate materials vary greatly. Comparative analysis of the UV polarized-reflection characteristics of the samples effectively improved the detection and recognition of latent fingerprints, and provides a basis for UV polarization imaging detection of sweat latent fingerprints.
Crimes involving explosives kill people, destroy facilities, and threaten public security. Rapidly identifying metal debris among the explosive residues at a crime scene is key to speeding up the investigation of such cases. For this application, a method based on polarization imaging was proposed to rapidly identify metal debris in complicated scenes. Two setups were built, which capture polarization images at multiple wavelengths and in a simultaneous imaging manner, respectively. Experiments on metal and non-metal debris and their mixtures show that polarization imaging can effectively identify the metal debris if the incident angle, polarization and wavelength of the illuminating light are set appropriately. Simulations based on the Fresnel formulas show how the degree and angle of polarization change with the incident angle, polarization and wavelength of the illumination, and the results suggest that polarization imaging can also identify the type of metal debris, which is confirmed by additional experiments. In summary, polarization imaging has the potential to rapidly identify metal debris in complicated scenes, which would help secure physical evidence in criminal investigations.
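The Fresnel-based simulation described above can be sketched as follows. This is a minimal illustration of why metals and dielectrics separate under polarization imaging; the refractive-index values are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def fresnel_dop(n, theta_i_deg):
    """Degree of linear polarization of light reflected at incidence angle
    theta_i from a surface of (possibly complex) refractive index n, computed
    from the Fresnel equations for unpolarized illumination."""
    ti = np.deg2rad(theta_i_deg)
    cos_i = np.cos(ti)
    # Snell's law; complex-valued for absorbing media such as metals
    sin_t = np.sin(ti) / n
    cos_t = np.sqrt(1 - sin_t**2)
    rs = (cos_i - n * cos_t) / (cos_i + n * cos_t)   # s-polarized amplitude
    rp = (n * cos_i - cos_t) / (n * cos_i + cos_t)   # p-polarized amplitude
    Rs, Rp = abs(rs)**2, abs(rp)**2
    return (Rs - Rp) / (Rs + Rp)

# Dielectric (n ~ 1.5, e.g. glass): DoP reaches 1 at Brewster's angle (~56.3 deg)
print(fresnel_dop(1.5, 56.3))
# Metal (illustrative complex index for a noble metal in the visible):
# DoP stays far below 1 at the same angle, which is what separates metal debris
print(fresnel_dop(0.27 + 3.0j, 56.3))
```

Plotting `fresnel_dop` over 0-90 degrees for several indices reproduces the qualitative behavior the paper reports: the polarization degree depends strongly on incident angle and material, so the illumination geometry must be set appropriately.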
The polarization state of light is a basic attribute of electromagnetic waves. Polarization information technology is an engineering science and applied technology that uses the polarization states of light to represent information, and it requires suitable characterization methods. Owing to the scattering characteristics of the atmosphere, skylight has a particular polarization distribution pattern near the earth's surface, which can be used for autonomous navigation in near-earth space. At the same time, different polarization states have specific transmission characteristics in various scattering media. Therefore, investigating the transmission characteristics of polarization information in different scattering media has important value for its wide applications in modern military, aviation, marine and other fields. In recent years, optical polarization imaging has also been widely applied to achieve clear imaging in haze, underwater and other scattering environments, and many excellent research results have been obtained. In this paper, the representation forms of the polarization states of light, the transmission characteristics of polarization information in different scattering media, polarization-based information recovery algorithms, and applications of polarization dehazing technology are reviewed. Finally, future development trends in the application of polarization information are discussed.
Underwater images often suffer from typical problems: in a complex optical environment, image quality drops sharply, and features such as color and brightness are severely attenuated, which makes underwater image quality difficult to improve. Polarization imaging can effectively suppress underwater scattering. In the underwater imaging environment, the contributions of the signal light, backscattered light and forward-scattered light to the image are separated according to their polarization characteristics. Based on the underwater physical imaging model and the principle of polarization imaging, the principle of underwater polarization imaging is described in detail, and several classic underwater polarization imaging methods are highlighted. Current underwater imaging techniques based on polarization characteristics are summarized, and these methods are evaluated and analyzed according to their actual performance. Moreover, based on the advantages and disadvantages of existing underwater polarization imaging techniques and their actual results, the future development of underwater polarization imaging technology is discussed.
Submarine detection is a key technology of coastline defense; both direct and indirect imaging approaches to detecting submarines involve detecting ripples on the water surface. A 3D surface measurement method based on polarization detection can effectively reconstruct the rippled water surface, in which measuring and computing the surface polarization characteristics is an essential step of the reconstruction. A model of water-surface polarization was established, and the polarization characteristics of water in the visible and infrared wavebands were simulated and analyzed under different water temperatures and meteorological conditions. The results show that the surface reflection is s-polarized in the visible waveband and p-polarized in the mid- and long-wave infrared wavebands; the degree of polarization first increases and then decreases with increasing incident angle; moreover, the degree of water-surface polarization increases with rising temperature. The degree of water-surface polarization in the visible, mid-infrared and long-wave infrared wavebands was measured with a polarization measurement system based on the Stokes vector. The simulated results are consistent with the measured ones, which validates the polarization model used. The characteristics of polarization imaging detection in common wavebands were analyzed, providing theoretical simulation and experimental methods for analyzing and computing the polarization characteristics of the water surface.
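A Stokes-vector-based measurement of the kind mentioned above reduces to combining intensity images taken behind a polarizer at several angles. A minimal sketch, assuming the common four-angle (0, 45, 90, 135 degrees) scheme, which the abstract does not specify:

```python
import numpy as np

def stokes_dolp_aop(I0, I45, I90, I135):
    """Linear Stokes parameters from four polarizer-angle intensity images,
    and the derived degree (DoLP) and angle (AoP) of linear polarization."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)       # total intensity
    S1 = I0 - I90                            # 0/90 degree preference
    S2 = I45 - I135                          # 45/135 degree preference
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)
    aop = 0.5 * np.arctan2(S2, S1)           # polarization azimuth, radians
    return dolp, aop

# Fully polarized light along 0 deg: I0 = 1, I90 = 0, I45 = I135 = 0.5
dolp, aop = stokes_dolp_aop(1.0, 0.5, 0.0, 0.5)
print(dolp, aop)   # → 1.0 0.0 (fully polarized, azimuth 0)
```

Applying this pixelwise to the four channel images yields the degree-of-polarization maps that the surface reconstruction operates on.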
Coating a material's surface to change its emissivity is a common method of degrading the accuracy of infrared recognition of the material. Firstly, the influence of the surface emissivity of the target material on its infrared characteristics was derived and analyzed through the Stokes expression of the infrared-radiation polarization transmission model based on micro-facet theory. The theoretical analysis shows that changing the surface emissivity of the target material does not affect the infrared polarization of the surface. Secondly, in view of this independence between surface emissivity and infrared polarization characteristics, a method for detecting coated materials was proposed based on the spectral degree-of-polarization contrast, and the infrared hyperspectral polarization imaging characteristics of coatings with different surface emissivities on the same substrate, and of coatings with the same emissivity on different substrates, were verified and analyzed. The results show that changing the surface emissivity of the coated material does not affect the degree of polarization of the infrared spectrum, and that even when target materials are coated with the same-emissivity coating, their spectral degree-of-polarization signatures differ more markedly than their spectral radiance. These results provide a new method for the detection and recognition of infrared camouflage materials.
Three-dimensional measurement of human body posture is of great significance for evaluating the comfort of car-seat designs. In order to acquire 3D data of the human body in a car quickly and accurately, a 3D data acquisition method based on binocular vision was adopted, which combines structured light with marked points and realizes rapid reconstruction of the 3D point cloud of the human body together with automatic, rapid measurement of the 3D posture (distances and angles). The experimental results show that at a working distance of more than 2 m and a measuring range of 1.5 m × 2 m, the measurement accuracy of human body posture reaches 0.03 mm, which meets the demand for high-precision 3D data acquisition of in-car human posture. Compared with traditional 3D measurement methods, the automatic 3D measurement method used in this paper not only has a high degree of automation, but also offers high accuracy, fast speed and strong robustness.
Traditional single-speckle-pattern matching algorithms suffer from low measurement accuracy and cannot be used to measure objects with complex surfaces. A speckle projection profilometry with deep learning was proposed to realize pixel-by-pixel matching. A Siamese convolutional neural network structure was applied and extended, in which the main speckle pattern and the auxiliary speckle pattern are fed into the network patch by patch, so that features of the speckle-pattern patches can be extracted by convolution. The features are then fused and a matching coefficient between the two patches is obtained, which is further used to compute the disparity data, from which the three-dimensional (3D) object is reconstructed. The experimental results demonstrate that, with the proposed method, 3D measurement with an accuracy of about 290 μm can be achieved from a single speckle pattern.
A single-shot 3D shape measurement using an orthogonal composited grating based on grayscale expanding (OCGGE) was proposed. In traditional orthogonal composited grating (OCG) profilometry, the modulated gratings must share the same grayscale range because the maximal grayscale dynamic range of a commercial Digital Light Processing (DLP) projector is limited to 256 levels, which increases the measurement error in several ways: the contrast of the modulated gratings is weakened, the phase information is compressed, and the phase breaks during phase unwrapping. Based on the principle of time-division multiplexing, an orthogonal composited grating was designed with 766 gray levels and split into three fringe patterns of 256 grayscales each, which were then loaded in sequence into a video. When this video is played and projected onto the measured object continuously, and the exposure time of a 10-bit CCD is set to an integer multiple of three frame-refresh cycles of the video, a deformed pattern with 766 grayscales can be obtained. After filtering and grayscale calibration, the object can be reconstructed accurately and completely. Both simulation and experimental results prove that the proposed method breaks the 256-grayscale projection limit and efficiently increases the dynamic range of the phase-shifting deformed patterns. It also enriches the detailed information of the measured object and avoids the incomplete surface reconstruction caused by phase breaks.
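The grayscale-expanding idea above can be illustrated numerically: a pattern whose values span 0-765 is split into three frames that each fit an 8-bit projector, and a camera exposure covering all three frames sums them back. The saturating split below is one simple decomposition chosen for illustration; the paper's exact splitting scheme may differ.

```python
import numpy as np

# Hypothetical 766-level pattern (values 0..765), here just random data
rng = np.random.default_rng(0)
pattern_766 = rng.integers(0, 766, size=(4, 6))

# Saturating three-way split: each frame stays within the 0..255 DLP range
frame1 = np.minimum(pattern_766, 255)
frame2 = np.minimum(pattern_766 - frame1, 255)
frame3 = pattern_766 - frame1 - frame2
assert max(frame1.max(), frame2.max(), frame3.max()) <= 255

# An exposure spanning all three projected frames integrates (sums) them,
# recovering the full 766-level pattern on a >=10-bit sensor
recovered = frame1 + frame2 + frame3
print(np.array_equal(recovered, pattern_766))   # → True
```

This also makes the exposure constraint in the abstract concrete: the sum only reconstructs the pattern if the exposure covers whole groups of three frames.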
A distortion calibration method for wide-angle lenses was proposed based on fringe-pattern phase analysis. Firstly, four standard cosine fringe patterns with a phase-shift step of π/2, used as calibration templates, were displayed on a large Liquid Crystal Display screen and captured by the camera with the wide-angle lens to obtain four distorted fringe patterns. A four-step phase-shifting method was employed to obtain the phase distribution of the radially distorted fringe pattern. Because there is no distortion within the central region of the image captured by the wide-angle lens, the phase distribution of the radially undistorted fringe pattern, serving as the benchmark for computing the radial distortion phase, could be acquired by numerically fitting the undistorted phase values in the central region of the distorted image. The radial distortion phase distribution was then computed by subtracting the phase distribution of the radially distorted fringe pattern from that of the radially undistorted fringe pattern. Finally, the distortion phase was transformed into the actual distortion variables. There is no need to establish any image distortion model from large numbers of characteristic points or lines; furthermore, the radial distortion variable at every point of the distorted image can be determined by the proposed method. Experimental results show that the proposed method is simple, effective and widely applicable.
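The four-step phase-shifting computation used in the first stage can be sketched with the standard formula for shifts of 0, π/2, π, 3π/2 (a textbook relation, consistent with but not quoted from the paper):

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images with phase shifts
    0, pi/2, pi, 3*pi/2: phi = arctan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic check: cosine fringes with a known phase map
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)
A, B = 0.5, 0.4                        # background intensity and modulation
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)
print(np.max(np.abs(phi_est - phi)))   # numerically ~0
```

The subtraction in the numerator and denominator cancels the background term A, which is why the method is insensitive to uniform ambient light.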
The single-pixel imaging system attracts a lot of attention because of its special imaging method, but its target recognition in noisy environments has not been studied deeply. Aiming at this problem, the signal sequences obtained by the bucket detector and the corresponding reconstructed two-dimensional images were used as training samples for deep learning to identify targets in noisy environments. Comparing the recognition results of the two methods shows that, at low sampling rates, the former achieves a higher recognition rate even in a strong-noise environment, while the latter, although its recognition rate is relatively stable, requires long preprocessing time, so the former is more suitable for target recognition in high-speed imaging. In addition, for the method using only the bucket-detector signal as training samples, the effect of target sparsity on recognition accuracy was also analyzed: with the external noise and sampling rate fixed, the higher the sparsity of the target, the higher the recognition accuracy. This paper can serve as a reference for selecting recognition methods for single-pixel systems in noisy environments.
The application of deep learning has simplified the pipeline of digital fringe projection 3D measurement. For the fringe projection, phase calculation, phase unwrapping and phase-depth mapping stages of traditional digital fringe projection 3D measurement, researchers have successfully demonstrated the feasibility of replacing the first three stages, and the entire pipeline, with deep neural networks. Based on deep learning, a Phase-to-Depth Network (PDNet) was proposed to realize the mapping from absolute phase to depth. Combined with a multi-stage deep-learning-based single-frame fringe projection 3D measurement method, the absolute phase and depth information of the object were obtained by deep learning in stages. The experimental results show that PDNet can measure the depth information of the object fairly accurately, and that applying deep learning in the phase-to-height mapping stage is feasible. Moreover, compared with the single-stage deep-learning-based single-frame method that maps directly from the fringe image to the 3D topography, the multi-stage method significantly improves the measurement accuracy: it requires only a single fringe image as input to obtain millimeter-level accuracy, and it can adapt to the 3D measurement of objects with complex surfaces.
Noise is an important problem affecting image segmentation. A novel scheme was proposed that can accurately extract multiple objects in a noisy real-world scene. The phase map and disparity map were obtained by a binocular structured-light system based on sinusoidal fringe projection. Firstly, the disparity map was transformed into the corresponding U-disparity map. Then, according to the different projection characteristics of object and noise regions in the disparity map, preliminary segmentation regions were obtained by a closed-region detection algorithm. In addition, a fringe-modulation analysis method was used to remove the noise in shadow regions, yielding the final accurate segmentation results. Experimental results and objective evaluation data indicate that the proposed segmentation algorithm is not only robust to noise but also effectively separates objects from the horizontal support surface. It has low computational complexity and strong anti-interference ability across different scenarios. The average segmentation accuracy is above 90%, the best accuracy reaches 99.2%, and the average running time is about 27 ms.
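The U-disparity transform in the first step is a standard construction: for every image column, a histogram of the disparity values in that column. A minimal sketch (integer disparities and simple validity masking are assumptions made for brevity):

```python
import numpy as np

def u_disparity(disp, d_max):
    """U-disparity map: one disparity histogram per image column.
    Objects at a roughly constant depth appear as bright horizontal
    segments, while scattered noise spreads thinly across many bins."""
    h, w = disp.shape
    udisp = np.zeros((d_max + 1, w), dtype=np.int32)
    for u in range(w):
        col = disp[:, u]
        valid = (col >= 0) & (col <= d_max)
        # unbuffered indexed add: counts repeated disparities correctly
        np.add.at(udisp[:, u], col[valid].astype(int), 1)
    return udisp

# Toy scene: background at disparity 2, a 4x3 object patch at disparity 7
disp = np.full((10, 8), 2, dtype=np.int32)
disp[3:7, 2:5] = 7
ud = u_disparity(disp, 10)
print(ud[7, 2:5])   # the object's columns each contribute 4 counts
```

Thresholding such a map and detecting closed regions in it is what separates compact object regions from sparse noise in the scheme above.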
Dual-frequency fringe projection methods have been widely used in three-dimensional (3D) shape measurement, but their phase unwrapping is very sensitive to random noise. A modified dual-frequency fringe scheme with a geometric constraint was presented: the robustness of phase unwrapping is effectively enhanced by raising the frequency of the low-frequency fringe. During the 3D shape measurement, firstly, the five-step phase-shifting algorithm is used to extract the two wrapped phases; secondly, the low-frequency phase is unwrapped by the geometric-constraint method; finally, the dual-frequency algorithm is used to unwrap the high-frequency phase, after which the 3D shape can be reconstructed. Both simulations and experiments demonstrate that the modified dual-frequency fringe is more robust and applicable than the traditional one.
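The final dual-frequency unwrapping step can be sketched with the standard temporal phase-unwrapping formula: the unwrapped low-frequency phase, scaled by the frequency ratio, selects the integer fringe order of the wrapped high-frequency phase. (The geometric-constraint unwrapping of the low-frequency phase is assumed to have been done already; this sketch is noise-free, whereas the paper's point is precisely the noise sensitivity of the rounding step.)

```python
import numpy as np

def dual_freq_unwrap(phi_h_wrapped, Phi_l, f_h, f_l):
    """Temporal (dual-frequency) phase unwrapping: the fringe order k of
    the high-frequency phase is the nearest integer number of 2*pi jumps
    between the scaled low-frequency phase and the wrapped phase."""
    k = np.round((Phi_l * f_h / f_l - phi_h_wrapped) / (2 * np.pi))
    return phi_h_wrapped + 2 * np.pi * k

# Synthetic check: ground-truth high-frequency phase over 16 fringe periods
Phi_h_true = np.linspace(0, 16 * 2 * np.pi, 400)
phi_h = np.angle(np.exp(1j * Phi_h_true))    # wrapped to (-pi, pi]
Phi_l = Phi_h_true / 16.0                     # noise-free low-frequency phase
Phi_h = dual_freq_unwrap(phi_h, Phi_l, f_h=16, f_l=1)
print(np.max(np.abs(Phi_h - Phi_h_true)))     # numerically ~0
```

Noise in `Phi_l` gets amplified by the factor `f_h / f_l` before the rounding, which is why raising the low frequency (shrinking that ratio) improves robustness, as the abstract states.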
Traditional incremental structure from motion is susceptible to scale drift, and the reconstructed point cloud is layered and has no metric units. A new Euclidean 3D reconstruction method was proposed by improving the reconstruction topology and the scaling iterative closest point (ICP) algorithm. First, a new reconstruction topology was presented, in which a point cloud is reconstructed from two adjacent images and then merged into the main point cloud; then, correspondence tables were established to find, for each world point, the corresponding 3D point pair between the newly created point cloud and the main point cloud; subsequently, an anti-noise scaling ICP method incorporating the Geman-McClure norm was proposed; finally, ground control points were set up to introduce metric scale into the reconstructed point cloud. Experimental results show that the point cloud reconstructed by the proposed method is more accurate than that from traditional incremental structure from motion, with an absolute length error of about 1%-2%. The proposed method is suitable for precise Euclidean reconstruction of objects in close-range scenes.
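The inner alignment step of a scaling ICP iteration can be sketched with the closed-form least-squares similarity transform between matched point pairs (Umeyama's method). This is a plain sketch: the robust Geman-McClure weighting that the paper adds on top is omitted.

```python
import numpy as np

def similarity_transform(P, Q):
    """Closed-form scale s, rotation R, translation t minimizing
    sum ||Q_i - (s*R*P_i + t)||^2 over matched point pairs (Umeyama)."""
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mp, Q - mq
    U, S, Vt = np.linalg.svd(Qc.T @ Pc)            # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))       # guard against reflection
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (Pc**2).sum()     # optimal scale factor
    t = mq - s * R @ mp
    return s, R, t

# Check: recover a known scale/rotation/translation from noiseless pairs
rng = np.random.default_rng(1)
P = rng.standard_normal((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = 2.5 * P @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_transform(P, Q)
print(round(s, 6))   # → 2.5
```

A scaling ICP loop alternates this estimate with re-matching nearest neighbors; replacing the squared residuals with a Geman-McClure cost, as the paper does, down-weights outlier correspondences.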