Hou Zhiqiang, Wang Liping, Guo Jianxin, et al. An object tracking algorithm based on color, space and texture information[J]. Opto-Electronic Engineering, 2018, 45(5): 170643. doi: 10.12086/oee.2018.170643

An object tracking algorithm based on color, space and texture information

    Fund Project: Supported by National Natural Science Foundation of China (61473309)
  • In order to deal with complex scene changes during tracking, we propose a tracking algorithm based on multiple feature fusion. Under the particle filter framework, dynamic feature weights are computed by applying an uncertainty measure to each feature during tracking, yielding adaptive feature fusion. The algorithm exploits the complementarity of color, spatial and texture features to improve tracking performance. Experimental results show that the algorithm can adapt to complex scene changes such as scale variation, rotation and motion blur, and has clear advantages over traditional algorithms in completing the tracking task.
  • [1] Verma K K, Kumar P, Tomar A. Analysis of moving object detection and tracking in video surveillance system[C]//Proceedings of the 2nd International Conference on Computing for Sustainable Global Development, 2015: 1759-1762.

    [2] Tsai F S, Hsu S Y, Shih M H. Adaptive tracking control for robots with an interneural computing scheme[J]. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(4): 832-844. doi: 10.1109/TNNLS.2017.2647819

    [3] Gu Q, Yang J Y, Zhai Y Q, et al. Vision-based multi-scaled vehicle detection and distance relevant mix tracking for driver assistance system[J]. Optical Review, 2015, 22(2): 197-209. doi: 10.1007/s10043-015-0067-8

    [4] Ma L, Lu J W, Feng J J, et al. Multiple feature fusion via weighted entropy for visual tracking[C]//Proceedings of 2015 IEEE International Conference on Computer Vision, 2015: 3128-3136.

    [5] Dou J F, Li J X. Robust visual tracking based on interactive multiple model particle filter by integrating multiple cues[J]. Neurocomputing, 2014, 135: 118-129. doi: 10.1016/j.neucom.2013.12.049

    [6] Chen D P, Yuan Z J, Wu Y, et al. Constructing adaptive complex cells for robust visual tracking[C]//Proceedings of 2013 IEEE International Conference on Computer Vision, 2013: 1113-1120.

    [7] Lu Q, Xiao J J, Luo W S. Single target tracking with multi-feature fusion in multi-scale models[J]. Opto-Electronic Engineering, 2016, 43(7): 16-21.

    [8] Gu X, Wang H T, Wang L F, et al. Fusing multiple features for object tracking based on uncertainty measurement[J]. Acta Automatica Sinica, 2011, 37(5): 550-559.

    [9] Liu Q, Tang L B, Zhao B J, et al. Infrared target tracking based on adaptive multiple features fusion and mean shift[J]. Journal of Electronics & Information Technology, 2012, 34(5): 1137-1141.

    [10] Li P H. An improved mean shift algorithm for object tracking[J]. Acta Automatica Sinica, 2007, 33(4): 347-354.

    [11] Choi E, Lee C. Feature extraction based on the Bhattacharyya distance for multimodal data[C]//Proceedings of 2001 IEEE International Geoscience and Remote Sensing Symposium, 2001: 524-526.

    [12] Yao Z J. A new spatiogram similarity measure method and its application to object tracking[J]. Journal of Electronics & Information Technology, 2013, 35(7): 1644-1649.

    [13] Zhao G Y, Pietikainen M. Dynamic texture recognition using local binary patterns with an application to facial expressions[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 915-928. doi: 10.1109/TPAMI.2007.1110

    [14] Danelljan M, Khan F S, Felsberg M, et al. Adaptive color attributes for real-time visual tracking[C]//Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014: 1090-1097.

    [15] Jia X, Lu H C, Yang M H. Visual tracking via adaptive structural local sparse appearance model[C]//Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012: 1822-1829.

    [16] Wang N Y, Yeung D Y. Learning a deep compact image representation for visual tracking[C]//Proceedings of the 27th Annual Conference on Neural Information Processing Systems, 2013: 809-817.

    [17] Danelljan M, Hager G, Khan F S, et al. Accurate scale estimation for robust visual tracking[C]//Proceedings of the British Machine Vision Conference 2014, 2014: 1-11.

    [18] Wang G F, Qin X Y, Zhong F, et al. Visual tracking via sparse and local linear coding[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3796-3809. doi: 10.1109/TIP.2015.2445291

    [19] Wu Y, Lim J, Yang M H. Online object tracking: a benchmark[C]//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition, 2013: 2411-2418.

  • Overview: To deal with complex scene changes during tracking, we propose a tracking algorithm based on multiple feature fusion. Because of its computational convenience, a single feature descriptor is widely used in visual tracking to express the target model. However, a single descriptor is usually not enough to describe the complex characteristics and changes of a target. A target representation that combines multiple feature descriptors can improve the overall performance of visual tracking, because different features provide complementary target information. How to combine multiple features effectively, so that the algorithm genuinely improves performance, is the central issue for any multi-feature fusion algorithm. We therefore use an uncertainty measurement: the measured reliability of each feature determines its influence. Under the particle filter framework, dynamic feature weights are computed by applying an uncertainty measure to each feature during tracking, yielding adaptive feature fusion. This method adjusts the influence of each feature on tracking according to its uncertainty, so that reliable features have a stronger influence. In addition, the color feature is robust to rotation, scaling, and similar changes, but copes poorly with illumination variation. The spatial feature contains the spatial information of the target, compensating for the lack of spatial information in the color histogram. The texture feature is insensitive to illumination variation and not easily affected by local deviations. Fusing these three complementary features therefore yields a richer target expression and more effective target information. Based on the above discussion, the algorithm uses the complementarity of color, spatial and texture features to improve tracking performance.
    Experimental results show that the algorithm can adapt to complex scene changes such as scale variation, rotation and motion blur. Compared with traditional algorithms, the proposed algorithm has clear advantages in completing the tracking task. To verify its performance, we implemented the algorithm in MATLAB 2009a and ran extensive experiments on a computer with 4 GB of memory. We chose ACT, ASLA, DLT, DSST and LLC, all of which perform well, as comparison algorithms. The figure shows the overall tracking precision and success rate over 30 videos of the OTB2013 dataset. As the figure shows, the precision and success rate of the proposed algorithm are the highest among the six algorithms; its overall tracking performance is the best, and it adapts better to different tracking environments and target changes.
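    The uncertainty-weighted fusion idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the likelihood model, the uncertainty measure (here the inverse variance of a cue's per-particle likelihoods, so a cue that barely discriminates between particles is treated as unreliable), and all function names are assumptions made for this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bhattacharyya(p, q):
        """Bhattacharyya coefficient between two normalized histograms
        (higher means more similar)."""
        return float(np.sum(np.sqrt(p * q)))

    def feature_uncertainty(likelihoods):
        """Illustrative uncertainty measure: a cue whose likelihoods barely
        vary across particles does not discriminate, so it is unreliable.
        (Not the paper's exact definition.)"""
        return 1.0 / (np.var(likelihoods) + 1e-6)

    def fuse(cues):
        """Give each cue a weight inversely proportional to its uncertainty,
        then average the per-particle likelihoods with those weights."""
        unc = np.array([feature_uncertainty(c) for c in cues])
        w = 1.0 / unc
        w /= w.sum()                                  # adaptive feature weights
        fused = np.average(np.stack(cues), axis=0, weights=w)
        return fused, w

    # Toy run: 100 particles, three cues (color, spatial, texture).
    n = 100
    ref = rng.random(16)
    ref /= ref.sum()                                  # reference color histogram

    def likelihood(hist):
        # turn histogram similarity into a likelihood score
        return np.exp(-(1.0 - bhattacharyya(ref, hist)))

    cands = rng.random((n, 16))
    color   = np.array([likelihood(h / h.sum()) for h in cands])
    spatial = rng.random(n)                           # stand-in spatial-histogram cue
    texture = np.full(n, 0.5)                         # degenerate cue: no information
    fused, w = fuse([color, spatial, texture])
    weights = fused / fused.sum()                     # particle weights for resampling
    ```

    In the toy run the texture cue assigns the same likelihood to every particle, so its uncertainty is maximal and its fused weight collapses toward zero, while the two informative cues dominate the particle weights used for resampling.
    
    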


Figures(5)

Tables(1)
