Volume 49, Issue 1, Jan. 2020

Lu Chunqing, Yang Mengfei, Wu Yanpeng, Liang Xiao. Research on pose measurement and ground object recognition technology based on C-TOF imaging[J]. Infrared and Laser Engineering, 2020, 49(1): 0113005-0113005(9). doi: 10.3788/IRLA202049.0113005

Research on pose measurement and ground object recognition technology based on C-TOF imaging

doi: 10.3788/IRLA202049.0113005
  • Received Date: 2019-05-05
  • Rev Recd Date: 2019-06-15
  • Publish Date: 2020-01-28
  • Deep space probes are constrained in power consumption and volume and operate under diverse mission conditions; compared with low-Earth-orbit probes, they place higher demands on the mission capabilities of navigation sensors. This paper proposed a fast pose measurement and ground object recognition technique based on time-of-flight imaging. To meet the timing requirements of pose measurement while preserving its accuracy, a dynamic scale estimation method based on depth information was proposed. This method improved the temporal stability of point cloud registration under multi-scale object-space changes: the average registration time was reduced by more than 60%, and the average registration accuracy was about 0.04 m. To meet the needs of multi-scale, multi-form object recognition, a lightweight deep neural network was used to detect ground objects from scene depth information. The results show that this method can quickly perceive ground object features, with an accuracy above 70% in real scenes.
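The depth-driven dynamic scale estimation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the edge length of the downsampling voxel grid is scaled with mean scene depth, so that the point count fed into registration stays roughly constant as the probe's range to the surface changes; `base_size` and `ref_depth` are hypothetical parameters.

```python
import numpy as np

def dynamic_voxel_size(depths, base_size=0.05, ref_depth=10.0):
    """Scale the voxel edge with mean scene depth (hypothetical parameters):
    nearer scenes get a finer grid, farther scenes a coarser one, so the
    downsampled point count stays roughly constant across scales."""
    mean_depth = float(np.mean(depths))
    return base_size * mean_depth / ref_depth

def voxel_downsample(points, voxel):
    """Keep one point (the centroid) per occupied voxel of edge `voxel`."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

# Example: a far scene yields a coarser grid than a near one,
# bounding the work done by the subsequent registration step.
cloud = np.random.rand(1000, 3) * 2.0
near = voxel_downsample(cloud, dynamic_voxel_size(np.full(100, 5.0)))
far = voxel_downsample(cloud, dynamic_voxel_size(np.full(100, 40.0)))
```

With a bounded, depth-adapted point count, the registration stage (e.g. an ICP variant) sees similar workloads whether the probe images the surface from far away or during final descent, which is one plausible way the reported reduction in average registration time could be achieved.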



  • 1. Beijing Institute of Control Engineering, Beijing 100080, China;
  • 2. China Academy of Space Technology, Beijing 100094, China
