Citation: Qi Yubin, Yu Mei, Jiang Hao, et al. Multi-exposure image fusion based on tensor decomposition and convolution sparse representation[J]. Opto-Electronic Engineering, 2019, 46(1): 180084. doi: 10.12086/oee.2019.180084
Overview: Real scenes typically span a luminance range from 10⁻⁵ cd/m² to 10⁸ cd/m², but existing imaging and video devices can capture only a limited dynamic range, so they cannot retain all the details of a real scene. In recent years, multi-exposure fusion (MEF) has gradually become a hot research topic in the digital media field as an effective quality-enhancement technology. This technique combines multiple low dynamic range (LDR) images taken at different exposures by an ordinary camera to generate a single image with rich detail and saturated color. Many MEF algorithms have been proposed, and they achieve good results on image sequences with simple backgrounds. However, when a multi-exposure sequence contains many objects with complex textures, their performance is unsatisfactory, and artifacts such as detail loss and color distortion often appear in the fused image. To address this problem, this paper proposes a multi-exposure image fusion method based on tensor decomposition (TD) and convolutional sparse representation (CSR). TD, a low-rank approximation method for high-dimensional data, has great potential for multi-exposure image feature extraction, while CSR performs sparse optimization over the whole image and can therefore retain detail information to the greatest extent. To avoid color distortion in the fused image, luminance and chrominance are fused separately. First, the core tensor of each source image is obtained through tensor decomposition, and edge features are extracted from the first sub-band, which contains the most information. Second, the edge feature map is sparsely decomposed, and the activity level of each pixel is obtained from the L1 norm of the decomposition coefficients. Finally, a "winner-take-all" strategy generates the weight map used to obtain the fused luminance component.
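As a rough illustration of the luminance pipeline above, the NumPy sketch below makes two simplifications: only the channel mode of each image is decomposed (a partial Tucker step, so the dominant sub-band keeps its spatial layout, whereas the paper uses a full tensor decomposition), and gradient magnitude stands in for the L1-norm activity of convolutional sparse coefficients. The names `first_subband` and `fuse_luminance` are illustrative, not from the paper.

```python
import numpy as np

def first_subband(img):
    """Channel-mode Tucker projection of an (H, W, 3) image.

    Only the channel mode is decomposed here so the resulting
    sub-band keeps its spatial layout; the paper's method uses a
    full tensor (Tucker) decomposition and keeps the first, most
    energetic sub-band of the core tensor.
    """
    h, w, c = img.shape
    unfolded = img.reshape(-1, c)               # mode-3 unfolding
    _, _, vt = np.linalg.svd(unfolded, full_matrices=False)
    core = unfolded @ vt.T                      # project channel mode
    return core.reshape(h, w, c)[:, :, 0]       # dominant sub-band

def fuse_luminance(stack):
    """Winner-take-all fusion over K exposures, stack shape (K, H, W, 3).

    Gradient magnitude stands in for the activity level that the
    paper obtains from the L1 norm of convolutional sparse
    coefficients of the edge feature map.
    """
    k = stack.shape[0]
    subbands = np.stack(
        [first_subband(stack[i].astype(float)) for i in range(k)])
    activity = np.empty_like(subbands)
    for i in range(k):
        gy, gx = np.gradient(subbands[i])
        activity[i] = np.abs(gx) + np.abs(gy)   # L1-style edge activity
    winner = np.argmax(activity, axis=0)        # winner-take-all index map
    weights = (winner[None] == np.arange(k)[:, None, None]).astype(float)
    return (weights * subbands).sum(axis=0), weights
```

In the full method the binary weight maps are further refined before blending (see the refined weight maps below); here the raw winner-take-all weights are returned directly.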
Unlike the luminance fusion process, the chrominance components are fused simply by Gaussian weighting, in accordance with the characteristics of the color space. The experiments used two sets of image sequences with complex backgrounds. Compared with five other state-of-the-art MEF algorithms, the image fused by the proposed algorithm not only has rich detail but also shows no large-scale color distortion. In addition, to evaluate the detail-preserving ability of the proposed algorithm more comprehensively, seven groups of multi-exposure image sequences were selected for objective measurement. Experimental results show that the proposed method has a strong ability to preserve edge information.
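The chrominance rule can be sketched as follows. This is a minimal sketch of one plausible Gaussian form, assuming YCbCr chrominance planes in [0, 255] with 128 as the neutral value; the `sigma` parameter and the exact weight shape are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fuse_chrominance(chroma_stack, sigma=50.0, neutral=128.0, eps=1e-12):
    """Gaussian-weighted fusion of one chrominance plane (Cb or Cr).

    chroma_stack: (K, H, W) planes in [0, 255]. Pixels far from the
    neutral value 128 carry more color information, so each pixel's
    weight grows with its Gaussian-shaped distance from neutral.
    NOTE: sigma and the exact weight form are illustrative assumptions.
    """
    d2 = (chroma_stack.astype(float) - neutral) ** 2
    weights = 1.0 - np.exp(-d2 / (2.0 * sigma ** 2)) + eps
    return (weights * chroma_stack).sum(axis=0) / weights.sum(axis=0)
```

A plain linear average, by contrast, pulls saturated pixels toward the neutral gray value 128 and washes out color, which is the effect the histogram comparison between the Gaussian and linear weighting algorithms illustrates.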
Flowchart of multi-exposure image fusion based on tensor decomposition and convolutional sparse representation
Multi-exposure image stack of the House sequence
The core tensor of the House sequence
The edge feature of the first sub-band
A trained convolutional dictionary
Initial weight map of the House sequence
(a) The enlarged low-exposure image of the House sequence; (b) the initial weight map of Fig. 7(a)
Refined weight maps. (a) The weight map of the base layer; (b) the weight map of the detail layer
Histogram comparison of chrominance components between the Gaussian weighting and linear weighting algorithms
Comparison of fused images obtained by the two chrominance fusion methods
Multi-exposure image group of the Lamp sequence
Contrast maps of the Lamp sequence
Enlarged maps of the Lamp sequence
Multi-exposure image group of the Studio sequence
Contrast maps of the Studio sequence
Enlarged maps of the Studio sequence