Volume 47 Issue 8
Aug. 2018

Chen Yu, Wen Xinling, Liu Zhaoyu, Ma Pengge. Research of multi-missile classification algorithm based on sparse auto-encoder visual feature fusion[J]. Infrared and Laser Engineering, 2018, 47(8): 826004-0826004(8). doi: 10.3788/IRLA201847.0826004

Research of multi-missile classification algorithm based on sparse auto-encoder visual feature fusion

doi: 10.3788/IRLA201847.0826004
  • 1. College of Electronic and Communications Engineering, Zhengzhou University of Aeronautics, Zhengzhou 450015, China
  • Received Date: 2018-03-07
  • Rev Recd Date: 2018-04-05
  • Publish Date: 2018-08-25
  • Abstract: Accurate classification of missiles from satellite imagery of missiles in flight, enabling timely and effective defense, is a research hot spot in the military field at home and abroad. Because a missile in the combat state carries camouflage coloring and the shape differences between missile types are slight, it is difficult to classify missile types from low-level features alone. To address these problems, a new algorithm based on the Sparse Auto-Encoder (SAE) was presented that fuses high-level visual features with low-level feature extraction. To improve classification accuracy, transfer learning was introduced: local features learned from the STL-10 sample database are convolved over the small-sample missile target images, and a pooling layer of a convolutional neural network (CNN) aggregates them into global features, which are then fed into a Softmax regression model to classify the missiles. Experiments show that, compared with classification based on traditional low-level visual features or on SAE high-level visual features alone, the SAE fused-feature classification algorithm achieves higher accuracy and robustness. In addition, to avoid degraded or failed classification when training samples for a new type of missile target are lacking, the algorithm uses transfer learning to extract local features; experimental results confirm the feasibility and accuracy of the approach.
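The sketch below illustrates the workflow the abstract describes: a sparse auto-encoder learns local features from unlabeled patches (e.g. STL-10 crops), those features are convolved over the missile images and mean-pooled into global feature vectors, and a Softmax regression model performs the final classification. This is a minimal NumPy sketch, not the authors' implementation; the patch size, hidden-layer size, 2x2 quadrant pooling, and all hyper-parameters are illustrative assumptions.

```python
# Hedged sketch of the SAE -> convolution/pooling -> Softmax pipeline described
# in the abstract (not the authors' code). All shapes and hyper-parameters are
# assumptions: 8x8 grayscale patches, 64 hidden units, 2x2 quadrant mean pooling.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_sparse_autoencoder(patches, hidden_size=64, rho=0.05, beta=3.0,
                             lam=1e-4, lr=0.1, epochs=200):
    """Learn local features from unlabeled, flattened patches.

    patches: (n_samples, patch_dim) array of normalized patches.
    Returns encoder weights W1 (hidden_size, patch_dim) and bias b1.
    """
    n, d = patches.shape
    W1 = rng.normal(0, 0.01, (hidden_size, d))
    b1 = np.zeros(hidden_size)
    W2 = rng.normal(0, 0.01, (d, hidden_size))
    b2 = np.zeros(d)
    for _ in range(epochs):
        h = sigmoid(patches @ W1.T + b1)            # encoder activations
        x_hat = sigmoid(h @ W2.T + b2)              # reconstruction
        rho_hat = h.mean(axis=0)                    # average activation per hidden unit
        # Backprop: reconstruction error + KL sparsity penalty + weight decay
        delta2 = (x_hat - patches) * x_hat * (1 - x_hat)
        sparsity = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
        delta1 = (delta2 @ W2 + sparsity) * h * (1 - h)
        W2 -= lr * (delta2.T @ h / n + lam * W2)
        b2 -= lr * delta2.mean(axis=0)
        W1 -= lr * (delta1.T @ patches / n + lam * W1)
        b1 -= lr * delta1.mean(axis=0)
    return W1, b1

def convolve_and_pool(image, W1, b1, patch_size=8):
    """Convolve the learned filters over one image, then mean-pool over 2x2 quadrants."""
    H, W = image.shape
    oh, ow = H - patch_size + 1, W - patch_size + 1
    feats = np.empty((W1.shape[0], oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = image[i:i + patch_size, j:j + patch_size].ravel()
            feats[:, i, j] = sigmoid(W1 @ window + b1)
    half_h, half_w = oh // 2, ow // 2
    pooled = [feats[:, r:r + half_h, c:c + half_w].mean(axis=(1, 2))
              for r in (0, half_h) for c in (0, half_w)]
    return np.concatenate(pooled)                   # pooled global feature vector

def train_softmax(X, y, n_classes, lr=0.5, epochs=500):
    """Softmax regression on the (optionally fused) feature vectors; y holds class indices."""
    W = np.zeros((n_classes, X.shape[1]))
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W.T
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * (p - Y).T @ X / len(X)
    return W
```

In the fusion setting the abstract describes, the pooled SAE features would presumably be concatenated with each image's low-level visual features before being passed to train_softmax; the specific low-level descriptors used in the paper are not named on this page, so that step is left schematic here.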


