References:
[1] Wu X J, Xu T Y, Xu W B. Review of target tracking algorithms in video based on correlation filter[J]. Command Information System and Technology, 2017, 8(3): 1–5. doi: 10.15908/j.cnki.cist.2017.03.001
[2] Henriques J F, Caseiro R, Martins P, et al. Exploiting the circulant structure of tracking-by-detection with kernels[C]//Proceedings of the 12th European Conference on Computer Vision, 2012: 702–715.
[3] Comaniciu D, Meer P. Mean shift: a robust approach toward feature space analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(5): 603–619. doi: 10.1109/34.1000236
[4] Mei X, Ling H B. Robust visual tracking and vehicle classification via sparse representation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(11): 2259–2272. doi: 10.1109/TPAMI.2011.66
[5] Danelljan M, Häger G, Khan F S, et al. Accurate scale estimation for robust visual tracking[C]//Proceedings of the British Machine Vision Conference, 2014.
[6] Grabner H, Grabner M, Bischof H. Real-time tracking via on-line boosting[C]//Proceedings of the British Machine Vision Conference, 2006, 1: 47–56.
[7] Wang W, Wang C P, Fu Q, et al. Patch-based scale adaptive CSK tracking method[J]. Electronics Optics & Control, 2017, 24(2): 25–29. doi: 10.3969/j.issn.1671-637X.2017.02.005
[8] Henriques J F, Caseiro R, Martins P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583–596. doi: 10.1109/TPAMI.2014.2345390
[9] Danelljan M, Khan F S, Felsberg M, et al. Adaptive color attributes for real-time visual tracking[C]//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014: 1090–1097.
[10] Danelljan M, Häger G, Khan F S, et al. Learning spatially regularized correlation filters for visual tracking[C]//Proceedings of 2015 IEEE International Conference on Computer Vision, 2015: 4310–4318.
[11] Bertinetto L, Valmadre J, Golodetz S, et al. Staple: complementary learners for real-time tracking[C]//Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016: 1401–1409.
[12] Valmadre J, Bertinetto L, Henriques J, et al. End-to-end representation learning for correlation filter based tracking[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 5000–5008.
[13] Danelljan M, Bhat G, Khan F S, et al. ECO: efficient convolution operators for tracking[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 6931–6939.
[14] Bao X A, Zhan X J, Wang Q, et al. Anti-occlusion target tracking algorithm based on KCF and SIFT feature[J]. Computer Measurement & Control, 2018, 26(5): 148–152. doi: 10.16526/j.cnki.11-4762/tp.2018.05.037
[15] Yan H, Zhang Y, Yang X L, et al. A kernelized correlation filter with occlusion handling[J]. Journal of Optoelectronics·Laser, 2018, 29(6): 647–652. doi: 10.16136/j.joel.2018.06.0286
[16] Li F, Tian C, Zuo W M, et al. Learning spatial-temporal regularized correlation filters for visual tracking[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 4904–4913.
[17] Li Y, Zhu J K, Hoi S C H, et al. Robust estimation of similarity transformation for visual object tracking[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33: 8666–8673.
[18] Xu T Y, Feng Z H, Wu X J, et al. Learning adaptive discriminative correlation filters via temporal consistency preserving spatial feature selection for robust visual tracking[C]//Proceedings of 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[19] Li B, Yan J J, Wu W, et al. High performance visual tracking with Siamese region proposal network[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 8971–8980.
[20] Li B, Wu W, Wang Q, et al. SiamRPN++: evolution of Siamese visual tracking with very deep networks[C]//Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[21] Shan Q W, Zheng X B, He X H, et al. Fast object detection and recognition algorithm based on improved multi-scale feature maps[J]. Laser & Optoelectronics Progress, 2019, 56(2): 55–62. doi: 10.3788/LOP56.021002
[22] Fan H, Lin L T, Yang F, et al. LaSOT: a high-quality benchmark for large-scale single object tracking[C]//Proceedings of 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[23] Faragher R. Understanding the basis of the Kalman filter via a simple and intuitive derivation [lecture notes][J]. IEEE Signal Processing Magazine, 2012, 29(5): 128–132. doi: 10.1109/MSP.2012.2203621
[24] Bolme D S, Beveridge J R, Draper B A, et al. Visual object tracking using adaptive correlation filters[C]//Proceedings of 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010: 2544–2550.
[25] Li Y, Zhu J K. A scale adaptive kernel correlation filter tracker with feature integration[C]//Computer Vision – ECCV 2014 Workshops, 2014: 254–265.
Overview: Target tracking has been studied in depth by the academic community, and its applications are broad, spanning monitoring, motion analysis, medical imaging, behavior recognition, and human-computer interaction. However, when the tracked target is occluded, the accuracy of current algorithms is low, so target tracking remains an important research topic in computer vision. The kernelized correlation filter (KCF) is one of the most effective target tracking methods and has become a research hotspot owing to its high speed, high precision, and strong robustness; more and more researchers are therefore working on improving its features so that the refined algorithms achieve better experimental results. KCF mainly uses the histogram of oriented gradients (HOG) for feature extraction and determines the target position from the similarity between the template and the detected target. However, the inherent nature of the gradient makes the target's HOG very sensitive to noise, and the algorithm fails to track the target once it is occluded. To overcome these shortcomings, this paper proposes an improved kernelized correlation filter that incorporates the Sobel edge binary mode algorithm. First, the Sobel edge binary mode features and the HOG features are fused with weights, which strengthens the edge information in the HOG description and makes the tracked target more distinctive. Second, so that the Kalman prediction algorithm can accurately estimate the target after it becomes occluded, the target position obtained by the kernelized correlation filter during unoccluded tracking is continuously fused with the position estimated by the Kalman filter. Finally, the peak-to-sidelobe ratio of the target's response is calculated to judge whether the target has been lost; combined with the Kalman filter, the target position in the next frame can then be predicted from the state before the loss. Six occlusion test videos from the public Visual Tracker Benchmark are selected for the experiments. To verify the effectiveness of the proposed algorithm, it is implemented in MATLAB 2018b and compared with DSST, ECO, KCF, LDES, SRDCF, SAMF, and STRCF, all of which perform well. The final experimental results show that the proposed method improves tracking accuracy when the target is occluded.
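The occlusion handling described above reduces to two decision steps per frame: a peak-to-sidelobe ratio (PSR) test on the correlation response, and a fallback to a Kalman motion prediction when the test indicates the target is lost. What follows is a minimal Python/NumPy sketch of that logic only; it is not the authors' implementation (which is in MATLAB and also includes the Sobel-HOG feature fusion, not shown here), and the constant-velocity model, the exclusion window, the noise parameters, and the PSR threshold of 8.0 are illustrative assumptions.

import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    # PSR = (peak - mean(sidelobe)) / std(sidelobe); a small window around
    # the peak is excluded from the sidelobe statistics.
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

class ConstantVelocityKF:
    # 2-D constant-velocity Kalman filter, state = [x, y, vx, vy].
    def __init__(self, x, y, q=1e-2, r=1.0):
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.array([[1., 0., 1., 0.],
                           [0., 1., 0., 1.],
                           [0., 0., 1., 0.],
                           [0., 0., 0., 1.]])
        self.H = np.array([[1., 0., 0., 0.],
                           [0., 1., 0., 0.]])
        self.Q = q * np.eye(4)   # process noise (assumed)
        self.R = r * np.eye(2)   # measurement noise (assumed)

    def predict(self):
        # Time update: propagate the state with the motion model.
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def correct(self, x, y):
        # Measurement update with an observed (x, y) position.
        z = np.array([x, y])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ (z - self.H @ self.s)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]

def next_position(kf, kcf_pos, response, psr_threshold=8.0):
    # If the response is reliable, fuse the KCF position into the Kalman
    # filter; otherwise treat the target as occluded and fall back to the
    # Kalman prediction.
    predicted = kf.predict()
    if peak_to_sidelobe_ratio(response) >= psr_threshold:
        return kf.correct(*kcf_pos)   # normal tracking: measurement update
    return predicted                  # occluded: trust the motion model

In use, the filter would be initialized from the center of the first-frame bounding box and next_position called once per frame with the KCF estimate and its response map.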
Figures:
(a) Original image; (b) FHOG; (c) Sobel-FHOG
SPSR values over time. (a) Jogging; (b) Rom105
Overall flow chart of the algorithm
Overlap of KCF and SPKCF with the ground-truth position on the videos (red line: SPKCF; blue line: KCF)
Partial target occlusion results
Accuracy plots of the trackers on different videos. (a) Coke; (b) Girl; (c) Jogging; (d) Subway; (e) Rom105; (f) Tiger2
Average accuracy on the six videos