Volume 47 Issue 7
Jul.  2018

Yao Wang, Liu Yunpeng, Zhu Changbo. Deep learning of full-reference image quality assessment based on human visual properties[J]. Infrared and Laser Engineering, 2018, 47(7): 703004-0703004(8). doi: 10.3788/IRLA201847.0703004

Deep learning of full-reference image quality assessment based on human visual properties

doi: 10.3788/IRLA201847.0703004
  • Received Date: 2018-04-10
  • Rev Recd Date: 2018-05-20
  • Publish Date: 2018-07-25
  • Since current image quality assessment methods are generally based on hand-crafted features, it is difficult to extract image features that conform to the human visual system automatically and effectively. Inspired by human visual characteristics, this paper proposed a new full-reference image quality assessment method based on a convolutional neural network (DeepFR). In this method, the DeepFR convolutional neural network model was designed to learn from the dataset itself, weighting gradient sensitivity according to the human visual system, and a visual gradient perception map consistent with human visual characteristics was extracted. The experimental results show that the DeepFR model is superior to current full-reference image quality assessment methods; its prediction scores show good accuracy and consistency with subjective quality evaluation.
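The abstract describes weighting gradient sensitivity according to the human visual system to obtain a visual gradient perception map. The DeepFR network itself is not reproduced here, but the gradient-similarity idea it builds on (cf. the gradient magnitude similarity of Ref. [6]) can be sketched as follows; the Sobel kernels, the stability constant `c`, and the function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude(img):
    """Sobel gradient magnitude of a 2-D grayscale image."""
    kx = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)
    gx = ndimage.convolve(img, kx, mode='nearest')    # horizontal gradient
    gy = ndimage.convolve(img, kx.T, mode='nearest')  # vertical gradient
    return np.hypot(gx, gy)

def gradient_similarity_map(ref, dist, c=0.0026):
    """Pixel-wise gradient-magnitude similarity in (0, 1];
    1 means the local gradient structure is fully preserved."""
    gr = gradient_magnitude(ref)
    gd = gradient_magnitude(dist)
    return (2.0 * gr * gd + c) / (gr ** 2 + gd ** 2 + c)
```

A CNN model such as DeepFR would learn the human-visual-system weighting of such a map from the data itself rather than apply a fixed pooling formula to it.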
  • [1] Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment:from error visibility to structural similarity[J]. IEEE Trans Image Process, 2004, 13(4):600-612.
    [2] Wang Z, Li Q. Information content weighting for perceptual image quality assessment[J]. IEEE Transactions on Image Processing, 2011, 20(5):1185-1198.
    [3] Sheikh H R, Bovik A C. Image information and visual quality[J]. IEEE Transactions on Image Processing, 2006, 15(2):430-444.
    [4] Cheng G, Huang J C, Zhu C, et al. Perceptual image quality assessment using a geometric structural distortion model[C]//IEEE International Conference on Image Processing, 2010:325-328.
    [5] Zhang L, Zhang L, Mou X, et al. FSIM: a feature similarity index for image quality assessment[J]. IEEE Transactions on Image Processing, 2011, 20(8): 2378-2386.
    [6] Xue W, Zhang L, Mou X, et al. Gradient magnitude similarity deviation:A highly efficient perceptual image quality index[J]. IEEE Transactions on Image Processing, 2014, 23(2):684-695.
    [7] Luo Haibo, He Miao, Hui Bin, et al. Pedestrian detection algorithm based on dual-model fused fully convolutional networks (invited)[J]. Infrared and Laser Engineering, 2018, 47(2): 0203001. (in Chinese)
    [8] Luo Haibo, Xu Lingyun, Hui Bin, et al. Status and prospect of target tracking based on deep learning[J]. Infrared and Laser Engineering, 2017, 46(5): 0502002. (in Chinese)
    [9] Kang L, Ye P, Li Y, et al. Convolutional neural networks for No-reference image quality assessment[C]//Computer Vision and Pattern Recognition, IEEE, 2014:1733-1740.
    [10] Li Y, Po L M, Feng L, et al. No-reference image quality assessment with deep convolutional neural networks[C]//IEEE International Conference on Digital Signal Processing, 2017:685-689.
    [11] Kim J, Lee S. Fully deep blind image quality predictor[J]. IEEE Journal of Selected Topics in Signal Processing, 2017, 11(1):206-220.
    [12] Ali Amirshahi S, Pedersen M, Yu S X. Image quality assessment by comparing CNN features between images[J]. Electronic Imaging, 2016, 60(6):6041010.
    [13] Gao F, Wang Y, Li P, et al. DeepSim: deep similarity for image quality assessment[J]. Neurocomputing, 2017, 257: 104-114.
    [14] Mahendran A, Vedaldi A. Visualizing deep convolutional neural networks using natural pre-images[J]. International Journal of Computer Vision, 2016, 120(4): 1-23.
    [15] Ponomarenko N, Lukin V, Zelensky A, et al. TID2008-a database for evaluation of full-reference visual quality assessment metrics[J]. Adv Modern Radioelectron, 2009, 10(1):30-45.


  • 1. Shenyang Institute of Automation,Chinese Academy of Sciences,Shenyang 110016,China;
  • 2. University of Chinese Academy of Sciences,Beijing 100049,China;
  • 3. Key Laboratory of Opto-Electronic Information Processing,Chinese Academy of Sciences,Shenyang 110016,China;
  • 4. State Key Laboratory of Robotics,Shenyang Institute of Automation,Chinese Academy of Sciences,Shenyang 110016,China

