Citation: Guo Zhicheng, Dang Jianwu, Wang Yangping, et al. Background modeling method based on multi-feature fusion[J]. Opto-Electronic Engineering, 2018, 45(12): 180206. doi: 10.12086/oee.2018.180206
[1] Ueng S K, Chen G Z. Vision based multi-user human computer interaction[J]. Multimedia Tools and Applications, 2016, 75(16): 10059-10076. doi: 10.1007/s11042-015-3061-z
[2] Liu X, Chen Y. Target tracking based on adaptive fusion of multi-feature[J]. Opto-Electronic Engineering, 2016, 43(3): 58-65. doi: 10.3969/j.issn.1003-501X.2016.03.010
[3] Piccardi M. Background subtraction techniques: a review[C]//Proceedings of 2004 IEEE International Conference on Systems, Man and Cybernetics, The Hague, Netherlands, 2004: 3099-3104.
[4] Lipton A J, Fujiyoshi H, Patil R S. Moving target classification and tracking from real-time video[C]//Proceedings of the 4th IEEE Workshop on Applications of Computer Vision (WACV'98), Princeton, NJ, USA, 1998: 8-14.
[5] Barron J L, Fleet D J, Beauchemin S. Performance of optical flow techniques[J]. International Journal of Computer Vision, 1994, 12(1): 43-77. doi: 10.1007/BF01420984
[6] Dikmen M, Huang T S. Robust estimation of foreground in surveillance videos by sparse error estimation[C]//Proceedings of the 19th International Conference on Pattern Recognition, Tampa, USA, 2008: 1-4.
[7] Xue G J, Song L, Sun J, et al. Foreground estimation based on robust linear regression model[C]//Proceedings of the 18th IEEE International Conference on Image Processing, Brussels, Belgium, 2011: 3269-3272.
[8] Xue G J, Song L, Sun J. Foreground estimation based on linear regression model with fused sparsity on outliers[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2013, 23(8): 1346-1357. doi: 10.1109/TCSVT.2013.2243053
[9] Zhang J M, Wang B. Moving object detection under condition of fast illumination change[J]. Opto-Electronic Engineering, 2016, 43(2): 14-21. doi: 10.3969/j.issn.1003-501X.2016.02.003
[10] Li F, Zhang X H, Zhao C Q, et al. Vehicle detection research based on adaptive SILTP algorithm[J]. Computer Science, 2016, 43(6): 294-297.
[11] Wang Y Z, Liang Y, Pan Q, et al. Spatiotemporal background modeling based on adaptive mixture of Gaussians[J]. Acta Automatica Sinica, 2009, 35(4): 371-378.
[12] Fan W C, Li X Y, Wei K, et al. Moving target detection based on improved Gaussian mixture model[J]. Computer Science, 2015, 42(5): 286-288, 319.
[13] Huo D H, Yang D, Zhang X H, et al. Principal component analysis based Codebook background modeling algorithm[J]. Acta Automatica Sinica, 2012, 38(4): 591-600.
[14] Barnich O, van Droogenbroeck M. ViBe: a universal background subtraction algorithm for video sequences[J]. IEEE Transactions on Image Processing, 2011, 20(6): 1709-1724. doi: 10.1109/TIP.2010.2101613
[15] Zhang Z B, Yuan X B. An improved PBAS algorithm for dynamic background[J]. Electronic Design Engineering, 2017, 25(3): 35-40.
[16] Wang Y, Jodoin P M, Porikli F, et al. CDnet 2014: an expanded change detection benchmark dataset[C]//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 2014: 393-400.
Overview: Background modeling for moving-target detection is one of the focal points and difficulties of research in machine vision and intelligent video processing. Its goal is to extract the changing regions from a video sequence and detect moving targets effectively, which plays an important role in follow-up tasks such as object tracking, target classification, behavior analysis, and behavior understanding. Commonly used detection methods include the frame-difference method, the optical-flow method, and the background-difference method. The background-difference method has the advantages of low overhead, high speed, high accuracy, and precise target extraction, and has become the most common method for detecting moving targets. Its detection performance depends mainly on a robust background model: the algorithms for establishing and updating the background model directly affect the quality of the final detection. To build a robust background model and improve the accuracy of foreground detection, this paper proposed a background modeling method based on multi-feature fusion that jointly considers the temporal correlation of pixels at the same location across video frames and the spatial correlation of pixels within a neighborhood. The initial background model is established rapidly from the first frame of the video sequence, which reduces the sampling complexity of modeling. The background model is then updated using the pixel values of the image sequence, their frequency, the update time, and an adaptive sensitivity, where the adaptive sensitivity uses pixel-level feedback from the background to acquire a suitable sensitivity for regions of different complexity, so that the model adapts to backgrounds of varying complexity.
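The first-frame initialization and per-pixel sample matching described above can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the sample count, match radius, and match threshold are assumed values, and the initial samples are drawn from each pixel's 8-neighborhood to exploit the spatial correlation mentioned in the text.

```python
import numpy as np

# All three constants are illustrative assumptions, not the paper's values.
N_SAMPLES = 20      # background samples kept per pixel
MATCH_RADIUS = 20   # intensity distance for a sample to "match"
MIN_MATCHES = 2     # matches required to classify a pixel as background

def init_model(first_frame):
    """Build the initial model from the first frame alone: each pixel's
    samples are drawn from its 3x3 neighborhood (spatial correlation),
    so no multi-frame sampling is needed."""
    h, w = first_frame.shape
    model = np.empty((N_SAMPLES, h, w), dtype=np.float32)
    padded = np.pad(first_frame.astype(np.float32), 1, mode='edge')
    rng = np.random.default_rng(0)
    for k in range(N_SAMPLES):
        dy, dx = rng.integers(0, 3, size=2)   # random neighbor offset in {0,1,2}
        model[k] = padded[dy:dy + h, dx:dx + w]
    return model

def classify(model, frame, sensitivity):
    """Foreground mask: a pixel is background when enough samples lie
    within a sensitivity-scaled radius of its current value."""
    dist = np.abs(model - frame.astype(np.float32))            # (N, h, w)
    matches = (dist < MATCH_RADIUS * sensitivity).sum(axis=0)  # (h, w)
    return matches < MIN_MATCHES                               # True = foreground
```

A static scene classified against its own first frame yields an empty foreground mask, while a pixel that jumps far from its neighborhood samples is flagged as foreground.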
High-complexity background regions receive a high sensitivity, which avoids generating false foreground points, while low-complexity background regions receive a low sensitivity, which reduces the misclassification of background points. Through the fused features, the algorithm effectively suppresses the ghosting phenomenon, reduces holes inside moving objects in the foreground, and removes false foreground caused by pixel drift. To verify the effectiveness and practicability of the proposed algorithm, four background modeling algorithms, CodeBook, MOG, PBAS, and ViBe, were selected for comparison experiments. The experiments used Bootstrap, TimeOfDay, and WavingTrees from the Microsoft Wallflower dataset and highway, canoe, and fountain02 from the CDNet2014 dataset, covering three types of test scene: indoor, outdoor, and complex background. The test results show that the proposed algorithm improves adaptability and robustness in dynamic and complex backgrounds.
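One plausible form of the pixel-level feedback is sketched below: a pixel whose foreground/background label flickers between frames (e.g. waving trees) is treated as lying in a complex region, and its sensitivity is raised so that false foreground points are suppressed there. The flicker-based complexity estimate and all constants are assumptions for illustration; the paper's actual feedback rule may differ.

```python
import numpy as np

# Illustrative constants, not the paper's values.
ALPHA = 0.05           # learning rate of the running complexity estimate
S_MIN, S_MAX = 1.0, 3.0  # sensitivity bounds

def update_sensitivity(complexity, fg_mask, prev_fg_mask):
    """One feedback step per frame: `complexity` tracks how often each
    pixel's label flips; sensitivity grows with complexity (making it
    harder to flag foreground in unstable areas) and stays low in
    stable, low-complexity areas."""
    flicker = (fg_mask ^ prev_fg_mask).astype(np.float32)
    complexity = (1.0 - ALPHA) * complexity + ALPHA * flicker
    sensitivity = np.clip(1.0 + 2.0 * complexity, S_MIN, S_MAX)
    return sensitivity, complexity
```

After one step, a flickering pixel's complexity rises above zero and its sensitivity above the minimum, while stable pixels keep the baseline sensitivity.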
Image domain correlation. (a) First frame; (b) Area A 5×5 pixels; (c) Area B 5×5 pixels
The dynamically changed pixels. (a) 100-frame image sequence; (b) The complexity of regions A and B
The comparison of results processed by the five algorithms