 光电工程  2018, Vol. 45 Issue (5): 170643      DOI: 10.12086/oee.2018.170643

An object tracking algorithm based on color, space and texture information
Hou Zhiqiang, Wang Liping, Guo Jianxin, Chu Peng
School of Information Engineering, Xijing University, Xi'an, Shaanxi 710123, China
Abstract: To handle complex scene changes during tracking, we propose a tracking algorithm based on multiple-feature fusion. Within the particle-filter framework, dynamic feature weights are computed by measuring the uncertainty of each feature during tracking, which yields adaptive feature fusion. The algorithm exploits the complementarity of color, spatial and texture features to improve tracking performance. Experimental results show that the algorithm adapts to complex scene changes such as scale variation, rotation and motion blur, and that, compared with traditional algorithms, it has clear advantages in completing the tracking task.
Keywords: visual tracking    feature fusion    color    space    texture

1 Introduction

2 Background theory

2.1 Particle filter

At time k-1: given the target state x(k-1) and observation z(k-1), let the particle set approximating the posterior distribution p(x(k-1)|z(k-1)) of the target state be $\{ {x^i}(k-1), w_{k-1}^i\} _{i = 1}^{{N_{\rm{s}}}}$. At time k, once the observation z(k) has been obtained, the target state x(k) is computed as follows:

 $w_k^i \propto w_{k-1}^i\frac{{p(z(k)|{x^i}(k))p({x^i}(k)|{x^i}(k-1))}}{{q({x^i}(k)|{x^i}(k-1), z(k))}},$ (1)

 $w_k^i = \frac{{w_k^i}}{{\sum\limits_{j = 1}^{{N_{\rm{s}}}} {w_k^j} }}.$ (2)

 ${\hat N_{{\rm{eff}}}} = \frac{1}{{\sum\limits_{i = 1}^{{N_{\rm{s}}}} {{{(w_k^i)}^2}} }}.$ (3)

 $x(k) = \sum\limits_{i = 1}^{{N_{\rm{s}}}} {w_k^i{x^i}(k)}.$ (4)

 ${x_k} = \boldsymbol{A}{x_{k-1}} + W,$ (5)
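The update cycle of Eqs. (1)-(5) can be sketched as a single SIR particle-filter step. Because the transition prior is used as the proposal, the ratio in Eq. (1) reduces to the observation likelihood. The function names, the Gaussian form of the noise W, and the resampling threshold of N_s/2 are illustrative assumptions, not details fixed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, A, noise_std, likelihood):
    """One SIR particle-filter step following Eqs. (1)-(5):
    predict via x_k = A x_{k-1} + W, reweight by the observation
    likelihood, normalize, and resample when N_eff drops too low."""
    # Prediction: linear state transition plus Gaussian noise W (Eq. 5)
    particles = particles @ A.T + rng.normal(0, noise_std, particles.shape)
    # Weight update; with the transition prior as proposal, Eq. (1)
    # reduces to multiplying by the observation likelihood
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()            # normalization, Eq. (2)
    n_eff = 1.0 / np.sum(weights ** 2)           # effective sample size, Eq. (3)
    if n_eff < len(weights) / 2:                 # degeneracy check (threshold assumed)
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    estimate = weights @ particles               # weighted mean state, Eq. (4)
    return particles, weights, estimate
```

The `likelihood` argument stands in for the fused observation model p(z(k)|x^i(k)) built later from the three features.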

2.2 Feature uncertainty

 $\beta _{t + 1}^i = {\sigma _t}H(p_t^i),$ (6)

 $H({p^i}) = -\sum\limits_{j = 1}^N {p({z^i}|{x_j}){{\log }_2}p({z^i}|{x_j})},$ (7)

The smaller the value of H(p^i), the more accurately feature i estimates the target position.
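Eqs. (6)-(7) can be sketched in a few lines. Normalizing the per-particle likelihoods p(z^i|x_j) into a probability distribution before taking the entropy is an assumption of this sketch, as is the convention 0·log 0 = 0.

```python
import numpy as np

def feature_entropy(obs_probs):
    """Entropy of one feature's observation distribution over the
    particles (Eq. 7); obs_probs holds p(z^i | x_j) for j = 1..N."""
    p = np.asarray(obs_probs, dtype=float)
    p = p / p.sum()        # treat the likelihoods as a distribution (assumed)
    p = p[p > 0]           # convention: 0 * log2(0) = 0
    return -np.sum(p * np.log2(p))

def feature_uncertainty(obs_probs, sigma):
    """Uncertainty beta of the feature for the next frame (Eq. 6)."""
    return sigma * feature_entropy(obs_probs)
```

A sharply peaked distribution (the feature localizes the target well) gives low entropy and hence low uncertainty; a flat distribution gives the maximum entropy log2(N).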

3 Proposed tracking algorithm

3.1 Feature extraction

3.1.1 Color histogram

 $q_b^{\rm{c}} = \sum\limits_{(x, y)} {\delta (I(x, y)-b)}, \quad b = 1, \ldots, {B_{\rm{c}}},$ (8)
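The counting in Eq. (8) can be sketched as follows. Quantizing channel values in [0, 1) into B_c bins before counting, and normalizing the histogram, are assumptions of this sketch.

```python
import numpy as np

def color_histogram(patch, n_bins=16):
    """Color histogram of Eq. (8): count the pixels whose quantized
    value falls in bin b, then normalize. `patch` holds channel
    values already scaled to [0, 1); the quantization is assumed."""
    bins = np.minimum((np.asarray(patch).ravel() * n_bins).astype(int),
                      n_bins - 1)                       # delta(I(x,y) - b)
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    return hist / hist.sum()
```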

3.1.2 Spatial histogram

 $q_b^{\rm{s}} = ({n_b}, {\boldsymbol{\mu} _b}, {\boldsymbol{\Sigma} _b}),$ (9)
 ${\boldsymbol{\mu} _b} = {[{\mu _{bx}}, {\mu _{by}}]^{\rm{T}}},$ (10)
 ${\boldsymbol{\Sigma }_b} = \frac{1}{{{n_b}}}\sum\limits_{i = 1}^{{n_b}} {({s_i}-{\boldsymbol{\mu} _b}){{({s_i}-{\boldsymbol{\mu} _b})}^{\rm{T}}}}.$ (11)

 $\rho (p_b^{\rm{s}}, q_b^{\rm{s}}) = \sum\limits_{b = 1}^B {{\varphi _b}{\rho _n}({n_b}, {n_b}^\prime )},$ (12)

 ${\varphi _b} = \eta \exp \{-\frac{1}{2}{({\boldsymbol{\mu} _b}-{\boldsymbol{\mu} _b}^\prime )^{\rm{T}}}\boldsymbol{\hat \Sigma }_b^{-1}({\boldsymbol{\mu} _b} - {\boldsymbol{\mu} _b}^\prime )\} ,$ (13)
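Eqs. (9)-(13) can be sketched together: each bin stores the pixel count, spatial mean and spatial covariance, and two spatiograms are compared by a Bhattacharyya-like match of the counts, down-weighted by the spatial Gaussian φ_b. The value quantization, the pooled and regularized covariance standing in for $\hat{\boldsymbol{\Sigma}}_b$, and dropping the normalization constant η are assumptions of this sketch.

```python
import numpy as np

def spatiogram(values, coords, n_bins):
    """Second-order spatiogram of Eqs. (9)-(11): per bin b, the pixel
    count n_b, spatial mean mu_b, and spatial covariance Sigma_b."""
    values = np.asarray(values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    bins = np.minimum((values * n_bins).astype(int), n_bins - 1)
    out = []
    for b in range(n_bins):
        s = coords[bins == b]
        if len(s) == 0:
            out.append((0, np.zeros(2), np.eye(2)))   # empty-bin placeholder
            continue
        mu = s.mean(axis=0)                           # Eq. (10)
        d = s - mu
        out.append((len(s), mu, d.T @ d / len(s)))    # Eq. (11)
    return out

def spatiogram_similarity(P, Q):
    """Weighted similarity of Eq. (12): per-bin Bhattacharyya match of
    the normalized counts, scaled by the spatial Gaussian of Eq. (13)."""
    npix_p = sum(n for n, _, _ in P)
    npix_q = sum(n for n, _, _ in Q)
    sim = 0.0
    for (n, mu, S), (n2, mu2, S2) in zip(P, Q):
        if n == 0 or n2 == 0:
            continue
        shat = S + S2 + 1e-6 * np.eye(2)              # pooled, regularized (assumed)
        diff = mu - mu2
        phi = np.exp(-0.5 * diff @ np.linalg.solve(shat, diff))   # Eq. (13)
        sim += phi * np.sqrt((n / npix_p) * (n2 / npix_q))        # Eq. (12)
    return sim
```

Comparing a spatiogram with itself gives similarity 1, and shifting the spatial layout of the same color content lowers the score, which is exactly the discriminative power the plain color histogram lacks.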

4) For k = 1, …, Frame do.

① Predict the current particle states x_k from the state-transition model x_k = Ax_{k-1} + W and the particle states x_{k-1} at the previous time.

② Compute the observation probabilities $p({z^1}|x)$, $p({z^2}|x)$, $p({z^3}|x)$ from the similarity measures of the three features, and from them the entropies $H(p_k^1)$, $H(p_k^2)$, $H(p_k^3)$.

③ Compute the uncertainty β1, β2, β3 of each feature from the observation probabilities and the entropies.

④ Obtain the fused likelihood p(z1, z2, z3|x) from Eq. (19), update the particle weights by $\omega _k^i = \omega _{k-1}^i \cdot p({z^1}, {z^2}, {z^3}|x)$, and compute the current estimate of the target state: ${\hat x_k} = \sum\limits_{i = 1}^N {\omega _k^ix_k^i}$.

⑤ Decide whether to resample according to the updated particle-weight distribution.

⑥ Output the tracking result ${\hat x_k}$ of frame k.

5) End.
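The fusion in step ④ combines the three per-feature likelihoods into a single p(z1, z2, z3|x). Since Eq. (19) is not reproduced in this excerpt, the sketch below uses a product weighted by each feature's reliability exp(-β_i); that weighting rule is an illustrative assumption, not the paper's exact formula.

```python
import numpy as np

def fused_likelihood(per_feature_liks, betas):
    """Step 4: fuse per-feature likelihoods into p(z1, z2, z3 | x).
    The reliability weights alpha_i = exp(-beta_i) (normalized) are an
    assumed stand-in for the paper's Eq. (19): the lower a feature's
    uncertainty beta_i, the more it shapes the fused likelihood."""
    liks = np.asarray(per_feature_liks, dtype=float)   # shape (3, N particles)
    alphas = np.exp(-np.asarray(betas, dtype=float))
    alphas = alphas / alphas.sum()
    # Weighted geometric combination across the three features
    return np.prod(liks ** alphas[:, None], axis=0)
```

With equal uncertainties this reduces to a plain geometric mean; as one feature's entropy (and hence β) grows under, say, motion blur, its influence on the particle weights shrinks automatically.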

4 Experiments

4.1 Parameter settings

 $\omega = \frac{1}{{\sqrt {2{\rm{ \mathit{ π} }}} \sigma }}\exp \left( {\frac{{d - 1}}{{2{\sigma ^2}}}} \right),$ (20)
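The printed form of Eq. (20) appears corrupted by extraction; assuming the usual Gaussian-style mapping from a similarity score d ∈ [0, 1] to an observation weight (largest at a perfect match d = 1), a sketch is:

```python
import numpy as np

def similarity_to_weight(d, sigma=0.2):
    """Map a feature-similarity score d (1 = perfect match) to an
    observation weight. The Gaussian-style form and the value of the
    tuning parameter sigma are assumptions; the paper's exact Eq. (20)
    may differ."""
    return np.exp((d - 1.0) / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
```

A smaller σ makes the weight fall off more sharply as the candidate region's appearance drifts from the template.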

4.2 Qualitative analysis

 Fig. 4 Qualitative comparison of the six tracking algorithms. (a) David3 sequence; (b) Deer sequence; (c) Football sequence; (d) Lemming sequence; (e) Liquor sequence; (f) Matrix sequence; (g) Mountainbike sequence; (h) Skiing sequence; (i) Basketball sequence; (j) Boy sequence
4.3 Quantitative analysis

 Sequence      | ACT         | ASLA        | DLT         | DSST        | LLC         | Ours
 David3        | 74.6(9.1)   | 51.6(87.8)  | 32.9(107.4) | 54.0(88.4)  | 11.9(286.8) | 72.3(16.1)
 Deer          | 100(5.1)    | 2.8(160.1)  | 38.0(49.1)  | 93.0(8.5)   | 2.8(216.3)  | 71.4(15.2)
 Football1     | 48.7(9.8)   | 44.6(12.2)  | 52.4(10.4)  | 41.9(20.5)  | 70.3(15.4)  | 52.5(11.6)
 Lemming       | 31.3(90.7)  | 16.9(178.8) | 28.0(128.9) | 46.0(81.5)  | 17.0(158.8) | 85.9(15.3)
 Liquor        | 20.8(326.4) | 23.6(146.7) | 20.5(153.3) | 40.8(99.3)  | 24.2(180.6) | 82.1(28.5)
 Matrix        | 1.0(79.2)   | 2.0(65.2)   | 2.0(171.1)  | 21.0(59.7)  | 16.0(63.4)  | 32.1(38.7)
 Mountainbike  | 100(6.8)    | 91.2(9.0)   | 84.2(13.1)  | 100(7.8)    | 100(7.9)    | 83.7(18.5)
 Skiing        | 9.9(274.9)  | 11.1(266.6) | 7.4(244.5)  | 7.4(220.1)  | 11.1(269.5) | 25.7(96.2)
 Basketball    | 25.9(89.1)  | 71.6(18.0)  | 49.7(13.9)  | 64.0(73.1)  | 62.5(73.8)  | 91.3(9.6)
 Boy           | 71.6(8.8)   | 48.3(106.7) | 100(2.5)    | 17.1(179.5) | 12.6(163.2) | 91.5(5.2)
 Note: the number before the parentheses is the success rate (%) at an overlap threshold of 0.5; the number in parentheses is the average center-location error (pixels). For each image sequence, the best algorithm is marked in red and the second best in green in the original.

 Fig. 5 Overall comparison of precision (a) and success rate (b)
5 Conclusion
