Citation:
[1] Shao L, Liu L, Li X L. Feature learning for image classification via multiobjective genetic programming[J]. IEEE Transactions on Neural Networks and Learning Systems, 2014, 25(7): 1359–1371. doi: 10.1109/TNNLS.2013.2293418
[2] Crebolder J M, Sloan R B. Determining the effects of eyewear fogging on visual task performance[J]. Applied Ergonomics, 2004, 35(4): 371–381. doi: 10.1016/j.apergo.2004.02.005
[3] Xiao C B, Zhao H Y, Yu J, et al. Traffic image defogging method based on WLS[J]. Infrared and Laser Engineering, 2015, 44(3): 1080–1084.
[4] Zhu F, Shao L. Weakly-supervised cross-domain dictionary learning for visual recognition[J]. International Journal of Computer Vision, 2014, 109(1–2): 42–59. doi: 10.1007/s11263-014-0703-y
[5] Zhang Z, Tao D C. Slow feature analysis for human action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(3): 436–450. doi: 10.1109/TPAMI.2011.157
[6] Wang L J, Zhu R. Image defogging algorithm of single color image based on wavelet transform and histogram equalization[J]. Applied Mathematical Sciences, 2013, 7(79): 3913–3921.
[7] Shen H F, Li H F, Qian Y, et al. An effective thin cloud removal procedure for visible remote sensing images[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2014, 96: 224–235. doi: 10.1016/j.isprsjprs.2014.06.011
[8] Pei S C, Lee T Y. Nighttime haze removal using color transfer pre-processing and dark channel prior[C]//Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 2012.
[9] Zhu Q S, Yang S, Heng P A, et al. An adaptive and effective single image dehazing algorithm based on dark channel prior[C]//Proceedings of 2013 IEEE International Conference on Robotics and Biomimetics, Shenzhen, China, 2013.
[10] Oakley J P, Satherley B L. Improving image quality in poor visibility conditions using a physical model for contrast degradation[J]. IEEE Transactions on Image Processing, 1998, 7(2): 167–179.
[11] Berman D, Treibitz T, Avidan S. Non-local image dehazing[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016: 1674–1682.
[12] Meng G F, Wang Y, Duan J Y, et al. Efficient image dehazing with boundary constraint and contextual regularization[C]//Proceedings of 2013 IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 2013.
[13] Zhu Q S, Mai J M, Shao L. A fast single image haze removal algorithm using color attenuation prior[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3522–3533. doi: 10.1109/TIP.2015.2446191
[14] He K M, Sun J, Tang X O. Single image haze removal using dark channel prior[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(12): 2341–2353. doi: 10.1109/TPAMI.2010.168
[15] Tarel J P, Hautière N. Fast visibility restoration from a single color or gray level image[C]//Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 2009.
[16] Cai B L, Xu X M, Jia K, et al. DehazeNet: an end-to-end system for single image haze removal[J]. IEEE Transactions on Image Processing, 2016, 25(11): 5187–5198. doi: 10.1109/TIP.2016.2598681
[17] Zhu Y Y, Tang G Y, Zhang X Y, et al. Haze removal method for natural restoration of images with sky[J]. Neurocomputing, 2018, 275: 499–510. doi: 10.1016/j.neucom.2017.08.055
[18] Schechner Y Y, Narasimhan S G, Nayar S K. Polarization-based vision through haze[J]. Applied Optics, 2003, 42(3): 511–525. doi: 10.1364/AO.42.000511
[19] Raikwar S C, Tapaswi S. An improved linear depth model for single image fog removal[J]. Multimedia Tools and Applications, 2018, 77(15): 19719–19744. doi: 10.1007/s11042-017-5398-y
[20] Ng R, Levoy M, Brédif M, et al. Light field photography with a hand-held plenoptic camera[J]. Computer Science Technical Report CSTR, 2005, 2(11): 1–11.
[21] Tao M W, Srinivasan P P, Malik J, et al. Depth from shading, defocus, and correspondence using light-field angular coherence[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 2015.
[22] Raj A S, Lowney M, Shah R. Light-field database creation and depth estimation[R]. Palo Alto, USA: Stanford University, 2016.
[23] Wang T C, Efros A A, Ramamoorthi R. Occlusion-aware depth estimation using light-field cameras[C]//Proceedings of 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015.
[24] Williem W, Park I K. Robust light field depth estimation for noisy scene with occlusion[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016: 4396–4404.
[25] Wang M, Zhou S D, Huang F, et al. The study of color image defogging based on wavelet transform and single scale retinex[J]. Proceedings of SPIE, 2011, 8194: 81940F.
[26] Ramya C, Rani D S. Contrast enhancement for fog degraded video sequences using BPDFHE[J]. International Journal of Computer Science and Information Technologies, 2012, 3(2): 3463–3468.
[27] Xu Z Y, Liu X M, Chen X N. Fog removal from video sequences using contrast limited adaptive histogram equalization[C]//Proceedings of the 2009 International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 2009.
[28] Howard J N. Book Reviews: Scattering Phenomena[J]. Science, 1977, 196(4294): 1084–1085.
[29] Narasimhan S G, Nayar S K. Contrast restoration of weather degraded images[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(6): 713–724. doi: 10.1109/TPAMI.2003.1201821
[30] Narasimhan S G, Nayar S K. Vision and the atmosphere[J]. International Journal of Computer Vision, 2002, 48(3): 233–254.
[31] Xiong W, Zhang J, Gao X J, et al. Anti-occlusion light-field depth estimation from adaptive cost volume[J]. Journal of Image and Graphics, 2017, 22(12): 1709–1722.
[32] Sun J, Shum H Y, Zheng N N. Stereo matching using belief propagation[C]//Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 2002: 510–524.
[33] Hu X Y, Mordohai P. A quantitative evaluation of confidence measures for stereo vision[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2121–2133. doi: 10.1109/TPAMI.2012.46
[34] Tan R T. Visibility in bad weather from a single image[C]//Proceedings of 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 2008.
[35] Tang K, Yang J C, Wang J. Investigating haze-relevant features in a learning framework for image dehazing[C]//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 2014.
[36] Ying Z Q, Li G, Gao W. A bio-inspired multi-exposure fusion framework for low-light image enhancement[Z]. arXiv: 1711.00591[cs], 2017.
Overview: Under severe weather conditions such as fog, rain, and haze, scattering by atmospheric particles degrades the images captured by a camera: image contrast and color fidelity are reduced, which can negatively affect computer vision applications. At the same time, because a single image provides limited information, it is difficult to extract the scene depth required for image dehazing. Research on image dehazing methods is therefore of great significance. In this paper, we present an image dehazing algorithm that combines light field technology with the atmospheric scattering model. First, taking advantage of the light field's refocusing capability and its multi-view information, we extract defocus and correspondence cues and estimate the scene depth from each cue separately; the attainable maximum likelihood (AML) is adopted as the confidence measure, and the resulting per-pixel confidences are used to fuse the two depth maps. Second, the scene transmission is computed from the exponential relationship between scene depth and transmission. We then construct a weight function from the obtained depth information to constrain singular values in the scene transmission, and introduce this weight function into a weighted L1-norm contextual constraint to optimize the transmission map iteratively. Finally, the optimized scene transmission and the central view of the hazy light field image are substituted into the atmospheric scattering model to recover the haze-free image. Experiments were conducted on both synthetic and real hazy images. On synthetic hazy images, we evaluate the performance of eight dehazing methods. In the quantitative analysis, compared with seven single-image dehazing algorithms, our method improves the peak signal-to-noise ratio by about 2 dB and the structural similarity by about 0.04. In the qualitative analysis, our method achieves the best results in five scenarios, and the dehazed images have higher contrast and color fidelity, yielding better visual effects. Results on real hazy images likewise demonstrate that our method achieves superior dehazing: the restored images have higher contrast and color fidelity, and the method also suppresses image noise to some extent. A comparison of the noise remaining in images dehazed by different algorithms shows that our results contain the least noise and have the highest contrast and visibility. In general, compared with seven single-image dehazing algorithms, our method achieves the best dehazing effect, with greatly improved image contrast and structural similarity.
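For reference, the overview relies on the standard atmospheric scattering model used throughout the dehazing literature (e.g., Refs. [14], [29]). The equations below state that model and its inversion; the notation is the conventional one and is assumed here rather than quoted from the paper, and the lower bound t_0 is a commonly used safeguard rather than a quantity the overview specifies.

```latex
% Standard atmospheric scattering model and its inversion. Notation follows
% common usage (e.g., Refs. [14], [29]): I(x) observed hazy image, J(x) scene
% radiance, A global atmospheric light, t(x) scene transmission, \beta the
% scattering coefficient, d(x) scene depth, and t_0 a small lower bound that
% prevents division by near-zero transmission.
\begin{align}
  I(x) &= J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \\
  t(x) &= e^{-\beta d(x)}, \\
  J(x) &= \frac{I(x) - A}{\max\bigl(t(x),\, t_0\bigr)} + A.
\end{align}
```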
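To make the pipeline in the overview concrete, here is a minimal sketch of the confidence-weighted fusion of the two depth cues, the depth-to-transmission conversion, and the final restoration step, assuming the standard model above. All function names and the simple normalized-confidence weighting are illustrative assumptions; the paper's actual AML confidence computation and the weighted L1-norm transmission optimization are not reproduced here.

```python
import numpy as np


def fuse_depths(d_defocus, d_corresp, c_defocus, c_corresp, eps=1e-8):
    """Fuse two depth maps using normalized per-pixel confidence weights."""
    w = c_defocus / (c_defocus + c_corresp + eps)
    return w * d_defocus + (1.0 - w) * d_corresp


def depth_to_transmission(depth, beta=1.0):
    """Transmission from depth via the exponential relation t = exp(-beta * d)."""
    return np.exp(-beta * depth)


def recover_scene(hazy, transmission, airlight, t_min=0.1):
    """Invert the scattering model I = J * t + A * (1 - t) to recover J."""
    t = np.clip(transmission, t_min, 1.0)[..., np.newaxis]  # guard small t
    return (hazy - airlight) / t + airlight


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 64, 64
    # Stand-in depth and confidence maps; in the paper these come from the
    # defocus and correspondence cues with AML-based confidences.
    d_defocus, d_corresp = rng.random((h, w)), rng.random((h, w))
    c_defocus, c_corresp = rng.random((h, w)), rng.random((h, w))
    depth = fuse_depths(d_defocus, d_corresp, c_defocus, c_corresp)
    t = depth_to_transmission(depth, beta=1.2)
    hazy = rng.random((h, w, 3))          # stand-in central view image
    airlight = np.array([0.9, 0.9, 0.9])  # stand-in global atmospheric light
    dehazed = recover_scene(hazy, t, airlight)
    print(dehazed.shape)  # (64, 64, 3)
```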
Flow chart of the proposed algorithm
Imaging model of light-field camera
Pipeline of the depth estimation algorithm
Depth maps obtained by different methods
Flow chart of transmission map optimization algorithm
Results of iterative optimization of the scene transmission map
Flow chart of global atmospheric light estimation
Comparison of image dehazing results using the depth extracted by the method of Ref. [13] and the depth extracted by our method. (a) Hazy image and haze-free image; (b) Initial depth map; (c) Optimized depth map using guided filtering; (d) Transmission map; (e) Restored image
Comparison of dehazing results between light field single cue and multi-cues fusion. (a) Input hazy image and ground truth (from top to bottom); (b) Transmission map obtained from defocusing cue alone and corresponding dehazing result; (c) Transmission map obtained from correspondence cue alone and corresponding dehazing result; (d) Transmission map obtained by our method and corresponding dehazing result
Effect of transmission map optimization on image dehazing results. (a) Input hazy image and ground truth (from top to bottom); (b) Initial scene transmission map and optimized transmission map; (c) Dehazing results by using (b)
Comparison of the noise contained in images after dehazing by different algorithms
Comparisons of dehazing results on synthetic hazy images. The first row shows the hazy images; rows two through eight show the dehazing results of the methods in Refs. [11–17]; the ninth row shows the dehazing results of our method; and the tenth row shows the ground truth
Comparisons of dehazing results on real hazy images. The first row shows the hazy images; rows two through eight show the dehazing results of the methods in Refs. [11–17]; the ninth row shows the dehazing results of our method