Citation: | Chang Xin, Chen Xiaodong, Zhang Jiachen, et al. An object detection and tracking algorithm based on LiDAR and camera information fusion[J]. Opto-Electronic Engineering, 2019, 46(7): 180420. doi: 10.12086/oee.2019.180420 |
[1] | Bishop R. Intelligent Vehicle Technology and Trends[M]. Boston: Artech House, 2005. |
[2] | 何树林. 浅谈智能汽车及其相关问题[J]. 汽车工业研究, 2010(9): 28-30. doi: 10.3969/j.issn.1009-847X.2010.09.008 He S L. A brief discussion on intelligent vehicles and related issues[J]. Auto Industry Research, 2010(9): 28-30. doi: 10.3969/j.issn.1009-847X.2010.09.008 |
[3] | Lan Y, Huang J, Chen X. Environmental perception for information and immune control algorithm of miniature intelligent vehicle[J]. International Journal of Control & Automation, 2017, 10(5): 221-232. |
[4] | 高德芝, 段建民, 郑榜贵, 等.智能车辆环境感知传感器的应用现状[J].现代电子技术, 2008(19): 151-156. doi: 10.3969/j.issn.1004-373X.2008.19.049 Gao D Z, Duan J M, Zheng B G, et al. Application statement of intelligent vehicle environment perception sensor[J]. Modern Electronics Technique, 2008(19): 151-156. doi: 10.3969/j.issn.1004-373X.2008.19.049 |
[5] | 王世峰, 戴祥, 徐宁, 等.无人驾驶汽车环境感知技术综述[J].长春理工大学学报(自然科学版), 2017, 40(1): 1-6. doi: 10.3969/j.issn.1672-9870.2017.01.001 Wang S F, Dai X, Xu N, et al. Overview on environment perception technology for unmanned ground vehicle[J]. Journal of Changchun University of Science and Technology (Natural Science Edition), 2017, 40(1): 1-6. doi: 10.3969/j.issn.1672-9870.2017.01.001 |
[6] | Wang Z N, Zhan W, Tomizuka M. Fusing bird view LIDAR point cloud and front view camera image for deep object detection[OL]. arXiv: 1711.06703[cs.CV]. |
[7] | Dieterle T, Particke F, Patino-Studencki L, et al. Sensor data fusion of LIDAR with stereo RGB-D camera for object tracking[C]//Proceedings of 2017 IEEE Sensors, 2017: 1-3. |
[8] | Oh S I, Kang H B. Object detection and classification by decision-level fusion for intelligent vehicle systems[J]. Sensors, 2017, 17(1): 207. |
[9] | 厉小润, 谢冬.基于双目视觉的智能跟踪行李车的设计[J].控制工程, 2013, 20(1): 98-101. doi: 10.3969/j.issn.1671-7848.2013.01.023 Li X R, Xie D. Design of intelligent object tracking baggage vehicle based on binocular vision[J]. Control Engineering of China, 2013, 20(1): 98-101. doi: 10.3969/j.issn.1671-7848.2013.01.023 |
[10] | Granström K, Baum M, Reuter S. Extended object tracking: introduction, overview, and applications[J]. Journal of Advances in Information Fusion, 2017, 12(2): 139-174. |
[11] | Li X, Wang K J, Wang W, et al. A multiple object tracking method using Kalman filter[C]//Proceedings of 2010 IEEE International Conference on Information and Automation, 2010: 1862-1866. |
[12] | Liu B, Cheng S, Shi Y H. Particle filter optimization: a brief introduction[C]//Proceedings of the 7th Advances in Swarm Intelligence, 2016. |
[13] | Dou J F, Li J X. Robust visual tracking based on interactive multiple model particle filter by integrating multiple cues[J]. Neurocomputing, 2014, 135: 118-129. doi: 10.1016/j.neucom.2013.12.049 |
[14] | 侯志强, 王利平, 郭建新, 等.基于颜色、空间和纹理信息的目标跟踪[J].光电工程, 2018, 45(5): 170643. doi: 10.12086/oee.2018.170643 Hou Z Q, Wang L P, Guo J X, et al. An object tracking algorithm based on color, space and texture information[J]. Opto-Electronic Engineering, 2018, 45(5): 170643. doi: 10.12086/oee.2018.170643 |
[15] | 张娟, 毛晓波, 陈铁军.运动目标跟踪算法研究综述[J].计算机应用研究, 2009, 26(12): 4407-4410. doi: 10.3969/j.issn.1001-3695.2009.12.002 Zhang J, Mao X B, Chen T J. Survey of moving object tracking algorithm[J]. Application Research of Computers, 2009, 26(12): 4407-4410. doi: 10.3969/j.issn.1001-3695.2009.12.002 |
[16] | Zhang Z. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334. doi: 10.1109/34.888718 |
[17] | 蔡喜平, 赵远, 黄建明, 等.成像激光雷达系统性能的研究[J].光学技术, 2001, 27(1): 60-62. doi: 10.3321/j.issn:1002-1582.2001.01.016 Cai X P, Zhao Y, Huang J M, et al. Research on the performance of imaging laser radar[J]. Optical Technique, 2001, 27(1): 60-62. doi: 10.3321/j.issn:1002-1582.2001.01.016 |
[18] | Rusu R B, Cousins S. 3D is here: point cloud library (PCL)[C]//Proceedings of 2011 IEEE International Conference on Robotics and Automation, 2011: 1-4. |
[19] | 周琴, 张秀达, 胡剑, 等.凝视成像三维激光雷达噪声分析[J].中国激光, 2011, 38(9): 0908005. Zhou Q, Zhang X D, Hu J, et al. Noise analysis of staring three-dimensional active imaging laser radar[J]. Chinese Journal of Lasers, 2011, 38(9): 0908005. |
[20] | 官云兰, 刘绍堂, 周世健, 等.基于整体最小二乘的稳健点云数据平面拟合[J].大地测量与地球动力学, 2011, 31(5): 80-83. Guan Y L, Liu S T, Zhou S J, et al. Robust plane fitting of point clouds based on TLS[J]. Journal of Geodesy and Geodynamics, 2011, 31(5): 80-83. |
[21] | 邹晓亮, 缪剑, 郭锐增, 等.移动车载激光点云的道路标线自动识别与提取[J].测绘与空间地理信息, 2012, 35(9): 5-8. doi: 10.3969/j.issn.1672-5867.2012.09.002 Zou X L, Miao J, Guo R Z, et al. Automatic road marking detection and extraction based on LiDAR point clouds from vehicle-borne MMS[J]. Geomatics & Spatial Information Technology, 2012, 35(9): 5-8. doi: 10.3969/j.issn.1672-5867.2012.09.002 |
[22] | Ester M, Kriegel H P, Sander J, et al. A density-based algorithm for discovering clusters in large spatial databases with noise[C]//Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, 1996: 226-231. |
[23] | 朱明清, 王智灵, 陈宗海.基于改进Bhattacharyya系数的粒子滤波视觉跟踪算法[J].控制与决策, 2012, 27(10): 1579-1583. Zhu M Q, Wang Z L, Chen Z H. Modified Bhattacharyya coefficient for particle filter visual tracking[J]. Control and Decision, 2012, 27(10): 1579-1583. |
[24] | 冯驰, 王萌, 汲清波.粒子滤波器重采样算法的分析与比较[J].系统仿真学报, 2009, 21(4): 1101-1105, 1110. Feng C, Wang M, Ji Q B. Analysis and comparison of resampling algorithms in particle filter[J]. Journal of System Simulation, 2009, 21(4): 1101-1105, 1110. |
[25] | Geiger A, Lenz P, Stiller C, et al. Vision meets robotics: the KITTI dataset[J]. The International Journal of Robotics Research, 2013, 32(11): 1231-1237. doi: 10.1177/0278364913491297 |
[26] | Zhou T R, Ouyang Y N, Wang R, et al. Particle filter based on real-time compressive tracking[C]//Proceedings of 2016 International Conference on Audio, Language and Image Processing, 2016: 754-759. |
[27] | 石勇, 韩崇昭.自适应UKF算法在目标跟踪中的应用[J].自动化学报, 2011, 37(6): 755-759. Shi Y, Han C Z. Adaptive UKF method with applications to target tracking[J]. Acta Automatica Sinica, 2011, 37(6): 755-759. |
[28] | Milan A, Schindler K, Roth S. Detection- and trajectory-level exclusion in multiple object tracking[C]//Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition, 2013: 3682-3689. |
Overview: An intelligent vehicle is a new type of car that integrates technologies such as environmental perception, path planning, decision-making, and control. Equipped with advanced on-board sensors, controllers, actuators, and other devices, it can exchange and share information with X (people, vehicles, roads, the cloud, etc.) to achieve safe, efficient, and energy-saving driving. Environmental perception is the technology of detecting the vehicle's surroundings with on-board sensors, including vision sensors, LiDAR, millimeter-wave radar, the global positioning system (GPS), inertial navigation systems (INS), and ultrasonic radar. To ensure the accuracy and stability of an intelligent vehicle's environmental perception, the on-board sensors must detect and track the objects in the passable area. This paper proposes an object detection and tracking algorithm based on the fusion of LiDAR and camera information. First, the algorithm detects objects in the passable area by clustering the LiDAR point cloud and projects them onto the image to determine the tracking objects. The point cloud clustering procedure comprises filtering of the raw point cloud, ground detection, passable-area extraction based on point cloud reflectivity, and clustering with the DBSCAN algorithm. Once an object has been determined, the algorithm tracks it in the image sequence using color information. Because image-based tracking is easily disturbed by illumination changes, occlusion, and background clutter, the algorithm uses the LiDAR point cloud to correct the tracking results during tracking.
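The clustering stage of the detection pipeline can be sketched as a minimal DBSCAN over the (x, y) ground-plane coordinates of the remaining points. This is a simplified illustration, not the paper's implementation: the preceding filtering, ground-detection, and reflectivity-based passable-area steps are omitted, and the `eps`/`min_pts` values below are hypothetical placeholders rather than the parameters used in the paper.

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN over 2D points; returns one label per point,
    with -1 meaning noise. O(n^2) neighbor search, for illustration only."""
    n = len(points)
    labels = [None] * n
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    neighbors = lambda i: [j for j in range(n) if dist(points[i], points[j]) <= eps]
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        labels[i] = cluster          # i is a core point: start a new cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reclaimed from noise (not expanded)
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:   # j is also a core point: expand through it
                queue.extend(nb)
        cluster += 1
    return labels
```

Each cluster of LiDAR returns would then be treated as one object candidate and projected onto the image to initialize tracking.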
The tracking strategy is as follows: first, place N initial particles uniformly at the target position; second, compute the similarity between the particles at the current moment and those at the previous moment using the Bhattacharyya coefficient; third, resample the particles according to this similarity; finally, since the LiDAR point cloud can be projected onto the image, compute the object position by combining the particles with the point cloud. At the end of the paper, the algorithm is tested and verified on the KITTI dataset. The KITTI dataset, established by the Karlsruhe Institute of Technology in Germany and the Toyota Technological Institute at Chicago in the United States, is currently the largest computer-vision benchmark for evaluating algorithms in autonomous-driving scenarios. The experiments used a computer with 4 GB of memory as the experimental platform and were programmed in MATLAB 2017b. The particle filter, the unscented Kalman filter (UKF), and the DCO-X algorithm are used as comparison algorithms to verify the effectiveness of the proposed method. Experiments show that the algorithm performs well on the standard tracking metrics of X-direction error, Y-direction error, center position error, region overlap, and success rate.
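The particle-weighting and resampling steps above can be sketched as follows. This is an illustration of the general technique, not the paper's code: the color-histogram representation, the exponential weighting constant `lam`, and the systematic-resampling variant are assumptions introduced here for concreteness.

```python
import math
import random

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms
    (1.0 = identical distributions, 0.0 = disjoint support)."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def particle_weights(candidates, reference, lam=20.0):
    """Weight each particle by the similarity between the color histogram
    at its location and the reference histogram, then normalize."""
    scores = [math.exp(lam * bhattacharyya(h, reference)) for h in candidates]
    total = sum(scores)
    return [s / total for s in scores]

def resample(particles, weights):
    """Systematic resampling: one random offset, then N evenly spaced
    positions over the cumulative weight distribution."""
    n = len(particles)
    offset = random.random() / n
    positions = [offset + k / n for k in range(n)]
    out, i, cum = [], 0, weights[0]
    for pos in positions:
        while pos > cum:
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out
```

In the full algorithm, the resampled particle set would then be combined with the projected LiDAR points to produce the corrected object position for the next frame.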
Flow chart of target detection
Flow chart of coordinate transformation
Flow chart of passable area extraction
Flow chart of object tracking
Typical DDistm graph
Target detection results. (a), (b) 3D plots of the point cloud; (c), (d) Target detection results in the images
Target tracking results
Target tracking results. (a) X-trace; (b) Y-trace