Citation: Xuedian Zhang, Hong Wang, Minshan Jiang, et al. Applications of saliency analysis in focus image fusion[J]. Opto-Electronic Engineering, 2017, 44(4): 435-441. doi: 10.3969/j.issn.1003-501X.2017.04.008
Abstract: In autofocus research, the limited depth of focus of an optical system makes it difficult to obtain an all-in-focus image, yet high-definition images are essential for scientific research. Detecting the focused region is the key issue in multi-focus image fusion, and the blurred boundary of the focused region makes accurate identification even harder. Accordingly, the focused regions of the source images should, as far as possible, be fused directly. We propose an image fusion method based on saliency analysis that addresses the all-in-focus problem. Saliency analysis simulates the human visual attention mechanism by computing color, orientation, brightness and other feature information to obtain the visual saliency of an image; the human gaze usually falls on regions of higher saliency. The saliency maps, obtained by comparing differences among the feature components, are used to identify the focused regions of the multi-focus images. The method proceeds as follows. First, the focused region of each source image is located by the graph-based visual saliency (GBVS) algorithm; watershed and morphological operations are then used to obtain a closed salient region and to remove pseudo-focused regions. Because the defocused region still contains abundant texture and directional information, it is processed with the Shearlet transform, and the sum-modified-Laplacian (SML) operator selects the coefficients to fuse. Finally, the precisely located focused region and the processed defocused region are merged into an all-in-focus image. Experiments show visually that the fused images produced by the proposed method have clear, rich details.
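The SML step described above compares local sharpness between the source images. The following is a minimal sketch of the sum-modified-Laplacian focus measure on grayscale arrays, not the paper's implementation; the function name, step size, and window handling are our own assumptions:

```python
import numpy as np

def sum_modified_laplacian(img, window=3, step=1):
    """Sum-modified-Laplacian (SML) focus measure.

    Higher SML indicates a sharper (better-focused) neighborhood, so
    comparing the SML maps of two registered source images yields a
    per-coefficient (or per-pixel) fusion decision.
    """
    img = img.astype(np.float64)
    # Modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|
    ml = np.zeros_like(img)
    ml[step:-step, step:-step] = (
        np.abs(2 * img[step:-step, step:-step]
               - img[step:-step, :-2 * step] - img[step:-step, 2 * step:])
        + np.abs(2 * img[step:-step, step:-step]
                 - img[:-2 * step, step:-step] - img[2 * step:, step:-step])
    )
    # Sum the modified Laplacian over a local window
    pad = window // 2
    padded = np.pad(ml, pad, mode="edge")
    sml = np.zeros_like(img)
    for dy in range(window):
        for dx in range(window):
            sml += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return sml
```

A per-pixel selection between two registered source images `A` and `B` would then be `np.where(sum_modified_laplacian(A) >= sum_modified_laplacian(B), A, B)`; in the paper this choice is applied to Shearlet coefficients of the defocused region rather than raw pixels.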
Compared with three traditional methods on four objective evaluation criteria (entropy, QAB/F, MI and SF), the NSCT method performs best among the three, while the entropy of the proposed method is about 1% higher than that of NSCT, QAB/F about 4% higher, and MI and SF each about 2% higher, indicating that the proposed method produces the clearest image and the best fusion performance. The line chart makes this advantage more apparent. Both the subjective visual effects and the objective evaluation demonstrate that the proposed method is an effective image fusion method; further research will address color image fusion.
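Two of the four criteria above, entropy (E) and spatial frequency (SF), have standard closed-form definitions over the fused image alone and can be sketched directly (QAB/F and MI also need the source images and joint statistics, so they are omitted here). A minimal sketch assuming 8-bit grayscale input; the function names are illustrative:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram (higher = more information)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    """Spatial frequency: overall gradient activity (higher = sharper detail)."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

Both measures are no-reference: a well-fused all-in-focus image should score higher on E and SF than either partially focused source.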
Flow chart of the autofocus method based on saliency analysis.
Source images used in the experiments. (a1) Clock, right focus. (a2) Clock, left focus. (b1) Pepsi, right focus. (b2) Pepsi, left focus. (c1) Lab, right focus. (c2) Lab, left focus. (d1) Pen, right focus. (d2) Pen, left focus.
Results of the four fusion methods. (a1)~(a4) LAP. (b1)~(b4) Wavelet. (c1)~(c4) NSCT+PCNN. (d1)~(d4) GBVS (proposed).
Comparison of image fusion quality evaluation. (a) E. (b) QAB/F. (c) MI. (d) SF.