Xu Liang, Fu Randi, Jin Wei, et al. Image super-resolution reconstruction based on multi-scale feature loss function[J]. Opto-Electronic Engineering, 2019, 46(11): 180419. doi: 10.12086/oee.2019.180419

Image super-resolution reconstruction based on multi-scale feature loss function

    Fund Project: Supported by the National Natural Science Foundation of China (61471212) and the Natural Science Foundation of Zhejiang Province (LY16F010001)
  • In image super-resolution reconstruction, many deep-learning-based methods adopt the traditional mean squared error (MSE) as the loss function, and the reconstructed images are prone to blurred details and over-smoothing. To address this problem, this paper improves the traditional MSE loss function and proposes an image super-resolution reconstruction method based on a multi-scale feature loss function. The whole network model consists of a DenseNet-based reconstruction model and a convolutional neural network used to optimize the multi-scale feature loss function. The reconstructed image and the corresponding original HD image are fed in series into this convolutional neural network, and the mean squared error is computed between the feature maps extracted from the two images at different scales. Experimental results show that the proposed method improves subjective visual quality, PSNR, and SSIM.
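The core idea — penalizing differences between feature maps at several scales rather than raw pixels alone — can be sketched in a few lines of NumPy. Here simple average pooling stands in for the learned convolutional features of the paper's three-layer network, so this is an illustrative approximation, not the authors' implementation:

```python
import numpy as np

def avg_pool(img, k):
    """Downsample by factor k with average pooling (a stand-in for a learned feature map)."""
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multiscale_feature_mse(sr, hr, scales=(1, 2, 4)):
    """MSE accumulated over feature maps at several scales; scale 1 is the plain pixel MSE."""
    return sum(np.mean((avg_pool(sr, s) - avg_pool(hr, s)) ** 2) for s in scales)

hr = np.random.rand(32, 32)                # "original HD image"
sr = hr + 0.1 * np.random.rand(32, 32)     # a slightly degraded "reconstruction"
loss = multiscale_feature_mse(sr, hr)
```

Because coarser scales average over neighborhoods, errors in local structure are penalized even when the per-pixel MSE is small, which is the intuition behind preferring this loss over pixel MSE alone.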
  • [1] Glasner D, Bagon S, Irani M. Super-resolution from a single image[C]//Proceedings of the IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 2009: 349-356.

    [2] Peled S, Yeshurun Y. Superresolution in MRI: application to human white matter fiber tract visualization by diffusion tensor imaging[J]. Magnetic Resonance in Medicine, 2001, 45(1): 29-35. doi: 10.1002/1522-2594(200101)45:1<29::AID-MRM1005>3.0.CO;2-Z

    [3] Shi W Z, Caballero J, Ledig C, et al. Cardiac image super-resolution with global correspondence using multi-atlas patchmatch[C]//Proceedings of the 16th International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 2013: 9-16.

    [4] Gunturk B K, Batur A U, Altunbasak Y, et al. Eigenface-domain super-resolution for face recognition[J]. IEEE Transactions on Image Processing, 2003, 12(5): 597-606. doi: 10.1109/TIP.2003.811513

    [5] Zhang L P, Zhang H Y, Shen H F, et al. A super-resolution reconstruction algorithm for surveillance images[J]. Signal Processing, 2010, 90(3): 848-859. doi: 10.1016/j.sigpro.2009.09.002

    [6] Zhou F, Yang W M, Liao Q M. Interpolation-based image super-resolution using multisurface fitting[J]. IEEE Transactions on Image Processing, 2012, 21(7): 3312-3318. doi: 10.1109/TIP.2012.2189576

    [7] Zhang L, Wu X L. An edge-guided image interpolation algorithm via directional filtering and data fusion[J]. IEEE Transactions on Image Processing, 2006, 15(8): 2226-2238. doi: 10.1109/TIP.2006.877407

    [8] Lin Z C, Shum H Y. Fundamental limits of reconstruction-based superresolution algorithms under local translation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(1): 83-97. doi: 10.1109/TPAMI.2004.1261081

    [9] Rasti P, Demirel H, Anbarjafari G. Image resolution enhancement by using interpolation followed by iterative back projection[C]//Proceedings of the 21st Signal Processing and Communications Applications Conference, Haspolat, Turkey, 2013: 1-4.

    [10] Zhou J H, Zhou C, Zhu J J, et al. A method of super-resolution reconstruction for remote sensing image based on non-subsampled contourlet transform[J]. Acta Optica Sinica, 2015, 35(1): 0110001. doi: 10.3788/AOS201535.0110001

    [11] Yang J C, Wright J, Huang T S, et al. Image super-resolution via sparse representation[J]. IEEE Transactions on Image Processing, 2010, 19(11): 2861-2873. doi: 10.1109/TIP.2010.2050625

    [12] Timofte R, De V, van Gool L. Anchored neighborhood regression for fast example-based super-resolution[C]//Proceedings of 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 2013: 1920-1927.

    [13] Dong C, Loy C C, He K M, et al. Image super-resolution using deep convolutional networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307. doi: 10.1109/TPAMI.2015.2439281

    [14] Su H, Zhou J, Zhang Z H. Survey of super-resolution image reconstruction methods[J]. Acta Automatica Sinica, 2013, 39(8): 1202-1213. doi: 10.3724/SP.J.1004.2013.01202

    [15] Keys R. Cubic convolution interpolation for digital image processing[J]. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1981, 29(6): 1153-1160. doi: 10.1109/TASSP.1981.1163711

    [16] Yuan Q, Jing S X. Improved super resolution reconstruction method for video sequence[J]. Journal of Computer Applications, 2009, 29(12): 3310-3313.

    [17] Chang H, Yeung D Y, Xiong Y. Super-resolution through neighbor embedding[C]//Proceedings of the 2004 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2004, 1: I-I.

    [18] Yang J C, Wright J, Huang T S, et al. Image super-resolution via sparse representation[J]. IEEE Transactions on Image Processing, 2010, 19(11): 2861-2873. doi: 10.1109/TIP.2010.2050625

    [19] Yang J, Wright J, Huang T, et al. Image super-resolution as sparse representation of raw image patches[C]//Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2008: 1-8.

    [20] Dong C, Loy C C, Tang X O. Accelerating the super-resolution convolutional neural network[C]//Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016.

    [21] Kim J W, Lee J K, Lee K M. Accurate image super-resolution using very deep convolutional networks[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016: 1646-1654.

    [22] Tong T, Li G, Liu X J, et al. Image super-resolution using dense skip connections[C]//Proceedings of 2017 IEEE International Conference on Computer Vision, Venice, Italy, 2017.

    [23] Yamanaka J, Kuwashima S, Kurita T. Fast and accurate image super resolution by deep CNN with skip connection and network in network[C]//Proceedings of the 24th International Conference on Neural Information Processing, Guangzhou, China, 2017.

    [24] He K M, Zhang X Y, Ren S Q, et al. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification[C]//Proceedings of 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 1026-1034.

    [25] Huang G, Liu Z, van der Maaten L, et al. Densely connected convolutional networks[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, Hawaii, 2017.

  • Overview: In recent years, with the development of deep learning, it has been widely applied to image processing. Compared with traditional shallow learning, which can extract only simple image features, deep learning can learn deeper feature representations and therefore performs better in image processing tasks. Deep-learning-based super-resolution methods such as SRCNN, FSRCNN, and SRDenseNet mostly adopt the traditional mean squared error (MSE) as the loss function to obtain a better PSNR, but the reconstructed images are prone to edge blur and over-smoothing. The multi-scale feature loss function proposed in this paper alleviates this problem. Based on an analysis of SRCNN, FSRCNN, SRDenseNet, and other methods, the reconstruction model was built with the DenseNet model as the basic framework, and a three-layer convolutional neural network was connected after the reconstruction model to compute the multi-scale feature loss function. The reconstruction model consists of four parts: dense connection blocks, a dimension-reduction layer, a deconvolution layer, and a reconstruction layer. Each dense connection block is composed of 4 convolution layers with 3×3 convolution kernels, and the number of feature maps output by each dense connection block is 256. Since the outputs of all dense connection blocks are concatenated, the feature maps are reduced to 256 channels with 1×1 convolution kernels in the dimension-reduction layer to reduce the computational burden. After the deconvolution layer, a single-channel image is reconstructed with a 3×3 convolution kernel. Finally, features of the reconstructed image and the corresponding original HD image are extracted by the three-layer convolutional neural network in series, and the difference between the two images is measured by the mean squared error of these feature maps. This paper uses the Yang91 and BSD200 datasets, which together consist of 291 images.
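The channel bookkeeping behind the dense connection blocks and the 1×1 dimension-reduction layer can be sketched as follows. The per-layer growth rate of 64 is an illustrative assumption (the paper states only that each block outputs 256 feature maps, i.e. 4 layers × 64); what the sketch shows is why concatenating every earlier output makes the channel count grow linearly and why a 1×1 convolution is needed to bring it back down:

```python
def dense_block_channels(in_ch, growth, n_layers):
    """Input channel count seen by each conv layer in a dense block,
    where every layer receives the concatenation of the block input
    and all previous layer outputs."""
    seen = [in_ch]
    for _ in range(n_layers):
        seen.append(seen[-1] + growth)
    return seen

# 4 conv layers with an assumed growth of 64 feature maps per layer:
# each block contributes 4 * 64 = 256 new feature maps, and the
# concatenated output (here 320 channels) is squeezed back to 256
# by the 1x1 dimension-reduction layer.
chans = dense_block_channels(in_ch=64, growth=64, n_layers=4)
```

The 1×1 convolution changes only the channel dimension, so it reduces the computational burden of the subsequent deconvolution without discarding spatial resolution.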
Considering that training a convolutional neural network depends on a large number of data samples, the original 291-image dataset was extended to ensure sufficient samples. First, the original sample set was flipped left-right and top-bottom, expanding the training set to 291 + (291×4) = 1455 training samples. Then, the original samples were enlarged by factors of 2, 3, and 4, each with a further 180° mirror transformation, yielding 291×2×3 = 1746 additional training samples, for a total of 1455 + 1746 = 3201. Set5, Set14, and BSD100, the standard evaluation datasets in super-resolution research, were selected as the test samples, and objective quality was evaluated with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The experimental results show that the details of the reconstructed images become richer and the edge blur is reduced.
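The sample-count arithmetic above, and the PSNR metric used for evaluation, can be checked with a short Python sketch (the SSIM computation, which involves local means, variances, and covariances, is omitted for brevity):

```python
import numpy as np

# Sample-count bookkeeping from the augmentation scheme described above.
base = 291
flipped = base + base * 4    # after the flip-based augmentation: 1455
scaled = base * 2 * 3        # 3 scale factors x (with / without 180 deg): 1746
total = flipped + scaled     # 3201 training samples in all

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Note that PSNR is a monotone function of the pixel MSE, which is exactly why optimizing pixel MSE maximizes PSNR yet can still produce over-smoothed images: PSNR is blind to the structural detail that the multi-scale feature loss targets.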


Figures(9)

Tables(1)
