Volume 47 Issue 2
Mar.  2018

Yu Siquan, Han Zhi, Tang Yandong, Wu Chengdong. Texture synthesis method based on generative adversarial networks[J]. Infrared and Laser Engineering, 2018, 47(2): 203005-0203005(6). doi: 10.3788/IRLA201847.0203005

Texture synthesis method based on generative adversarial networks

doi: 10.3788/IRLA201847.0203005
  • Received Date: 2017-09-05
  • Rev Recd Date: 2017-10-05
  • Publish Date: 2018-02-25
  • Abstract: Texture synthesis is a hot research topic in computer graphics, computer vision, and image processing. Traditional texture synthesis methods generally extract effective feature patterns or statistics and then generate random images under the constraint of that feature information. The generative adversarial network (GAN) is a new type of deep network that can randomly generate new data from the same distribution as the observed data by training a generator and a discriminator under an adversarial learning mechanism. Inspired by this, a texture synthesis method based on GANs was proposed. The advantage of the algorithm was that it could generate realistic texture images without iterative optimization; the generated images were visually consistent with the observed texture image while also exhibiting randomness. A series of experiments on random and structured texture synthesis verifies the effectiveness of the proposed algorithm.
  • [1] Huang Jun, Li Feng, Gui Yan, et al. Surfaces texture synthesis based on texel distribution[J]. Journal of Chinese Computer Systems, 2016, 37(10):2361-2365. (in Chinese)
    [2] Criminisi A, Perez P, Toyama K. Region filling and object removal by exemplar-based image inpainting[J]. IEEE Transactions on Image Processing, 2004, 13(9):1200-1212.
    [3] Kwatra V, Essa I, Turk G, et al. Graphcut textures:Image and video synthesis using graph cuts[C]//ACM Transactions on Graphics, 2003:277-286.
    [4] Lefebvre S, Hoppe H. Parallel controllable texture synthesis[C]//ACM Transactions on Graphics, 2005, 24(3):777-786.
    [5] Zhang Weiwei, He Kai, Meng Chunzhi. Texture synthesis method by adaptive selecting size of patches[J]. Computer Engineering and Applications, 2012, 48(17):170-173. (in Chinese)
    [6] Zhu S C, Wu Y N, Mumford D. Filters, random fields and maximum entropy (FRAME):towards a unified theory for texture modeling[J]. International Journal of Computer Vision, 1998, 27(2):107-126.
    [7] Kwatra V, Essa I, Bobick A, et al. Texture optimization for example-based synthesis[C]//ACM Transactions on Graphics, 2005:795-802.
    [8] Urs R D, Costa J P D, Germain C. Maximum-likelihood based synthesis of volumetric textures from a 2D sample[J]. IEEE Transactions on Image Processing, 2014, 23(4):1820-1830.
    [9] Xie J, Hu W, Zhu S C, et al. Learning sparse FRAME models for natural image patterns[J]. International Journal of Computer Vision, 2015, 114(2-3):1-22.
    [10] Lu Y, Zhu S C, Wu Y N. Learning FRAME models using CNN filters[J]. arXiv preprint arXiv:1509.08379, 2015.
    [11] Gatys L A, Ecker A S, Bethge M. Texture synthesis using convolutional neural networks[C]//Advances in Neural Information Processing Systems, 2015.
    [12] Schreiber S, Geldenhuys J, Villiers H D. Texture synthesis using convolutional neural networks with long-range consistency and spectral constraints[C]//Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference. IEEE, 2017:1-6.
    [13] Xiao Chunxia, Huang Zhiyong, Nie Yongwei, et al. Global texture optimization incorporating with image detail[J]. Chinese Journal of Computers, 2009, 32(6):1196-1205. (in Chinese)
    [14] Tang Ying, Lin Qifeng, Xiao Tingzhe, et al. GPU-based texture synthesis with preserved structures[J]. Computer Science, 2016, 43(4):299-302. (in Chinese)
    [15] Goodfellow I J, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//International Conference on Neural Information Processing Systems, 2014:2672-2680.
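
The adversarial mechanism the abstract describes — a generator learning to produce samples from the observed distribution while a discriminator learns to tell real from generated data (Goodfellow et al. [15]) — can be illustrated with a minimal, hypothetical 1-D toy sketch. This is not the paper's texture network; the linear generator, logistic discriminator, learning rate, and target distribution below are all illustrative assumptions:

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative, not the paper's model):
# generator G(z) = a*z + b maps noise to samples;
# discriminator D(x) = sigmoid(w*x + c) scores "realness".
rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0   # generator parameters (scale, shift)
w, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    # "Observed" data: samples from N(4, 1)
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (non-saturating loss)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, the generator maps fresh noise to new samples in one
# forward pass — the "no iteration at synthesis time" property the paper
# highlights for texture generation.
samples = a * rng.normal(0.0, 1.0, 1000) + b
print("generated sample mean:", samples.mean())
```

The same two-player update applies to the image setting, with the linear maps replaced by convolutional networks and the 1-D data by texture patches.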

Author Affiliations
  • 1. School of Information Science and Engineering, Northeastern University, Shenyang 110000, China;
  • 2. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110000, China
