Luo X, Huang Y P, Liang Z M. Axial attention-guided anchor classification lane detection[J]. Opto-Electron Eng, 2023, 50(7): 230079. doi: 10.12086/oee.2023.230079

Axial attention-guided anchor classification lane detection

    Fund Project: Supported by the Natural Science Foundation of Shanghai (20ZR1437900) and the National Natural Science Foundation of China (61374197)
Lane detection is a challenging task due to the diversity of lane lines and the complexity of traffic scenes. Existing methods produce unsatisfactory results when the vehicle drives in congestion, at night, or on roads such as curves where lane lines are unclear or occluded. Building on the detection-based framework, an axial attention-guided anchor classification lane detection method is proposed to address two problems: first, the missing visual cues when lane lines are unclear or absent; second, the loss of feature information caused by using sparse coordinates on mixed anchors, which degrades detection accuracy. An axial attention layer is therefore added to the backbone network to focus on salient features along the row and column directions and improve accuracy. Extensive experiments are conducted on the TuSimple and CULane datasets. The results show that the proposed method is robust under various conditions and offers comprehensive advantages in detection accuracy and speed over existing advanced methods.
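
To make the row-and-column attention idea above concrete, the following PyTorch sketch applies self-attention first along each row and then along each column of a backbone feature map. It is a minimal illustration only: the class names (AxialAttention, AxialBlock) and all sizes are assumptions, not the implementation used in the paper.

```python
# Minimal sketch of axial attention: 2D attention factored into a row pass
# and a column pass over a backbone feature map. Names and sizes are assumed.
import torch
import torch.nn as nn


class AxialAttention(nn.Module):
    """Self-attention applied independently along one spatial axis."""

    def __init__(self, dim: int, heads: int = 8, axis: str = "row"):
        super().__init__()
        self.axis = axis
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the backbone
        b, c, h, w = x.shape
        if self.axis == "row":
            # attend across W for every row: B*H sequences of length W
            seq = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        else:
            # attend across H for every column: B*W sequences of length H
            seq = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        out, _ = self.attn(seq, seq, seq)
        if self.axis == "row":
            out = out.reshape(b, h, w, c).permute(0, 3, 1, 2)
        else:
            out = out.reshape(b, w, h, c).permute(0, 3, 2, 1)
        return x + out  # residual connection


class AxialBlock(nn.Module):
    """Row attention followed by column attention, used as one backbone block."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.row = AxialAttention(dim, heads, axis="row")
        self.col = AxialAttention(dim, heads, axis="col")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.col(self.row(x))


if __name__ == "__main__":
    feat = torch.randn(2, 64, 18, 50)   # e.g. downsampled backbone features (assumed shape)
    block = AxialBlock(dim=64, heads=8)
    print(block(feat).shape)            # torch.Size([2, 64, 18, 50])
```

Factoring full 2D attention into two 1D passes keeps the cost proportional to H + W per position instead of H × W, which is why such a layer can be inserted into the backbone without a large speed penalty.
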

Lane detection is an important function of environment perception for autonomous vehicles. Although lane detection algorithms have been studied for a long time, existing algorithms still face many challenges in practical applications: their detection results are unsatisfactory when vehicles travel on roads where lane lines are unclear or occluded, such as in congestion, at night, or on curves. In recent years, deep learning-based methods have attracted increasing attention in lane detection because of their robustness to image noise. These methods can be roughly divided into three categories: segmentation-based, detection-based, and parametric curve-based. Segmentation-based methods achieve high-precision detection by classifying lane features pixel by pixel, but their detection efficiency is low because of the high computational cost. Detection-based methods usually convert lane segments into learnable structural representations such as blocks or points, and then detect these structural features as lane lines; they are fast and handle straight lanes well, but their detection accuracy is clearly inferior to that of segmentation-based methods. Parametric curve-based methods lag behind well-designed segmentation-based and detection-based methods because the abstract polynomial coefficients are difficult for networks to learn.

Following the detection-based framework, an axial attention-guided anchor classification lane detection method is proposed. The basic idea is to divide each lane into intermittent point blocks and transform lane detection into the detection of lane anchor points. Replacing pixel-by-pixel segmentation with row anchors and column anchors not only speeds up detection but also alleviates the problem of missing visual cues on the lane lines. In terms of network structure, adding an axial attention mechanism to the feature extraction network extracts anchor features more effectively and filters out redundant features, thereby addressing the accuracy weakness of detection-based methods.

Extensive experiments were conducted on the TuSimple and CULane datasets. The results show that the proposed method is robust under various road conditions, especially under occlusion, and offers comprehensive advantages in detection accuracy and speed over existing models. However, as a method that relies on a single sensor, it remains difficult to achieve high-accuracy detection in highly complex real-world scenes such as rainy or contaminated roads. Subsequent studies might achieve lane detection in more demanding environments by fusing multiple sensors, such as LiDAR and vision, and by incorporating prior constraints on vehicle motion.
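
The anchor classification step can be sketched as a row-anchor classification head in the style of the Ultra Fast Lane Detection line of work this method builds on: for each lane and each predefined row anchor, the network classifies which of a fixed number of horizontal grid cells contains the lane point, with one extra class meaning "no lane on this row". All names and dimensions below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a row-anchor classification head (UFLD-style formulation,
# assumed here for illustration). Grid and feature sizes are placeholders.
import torch
import torch.nn as nn


class RowAnchorHead(nn.Module):
    def __init__(self, in_dim: int, num_rows: int, num_cols: int, num_lanes: int):
        super().__init__()
        self.num_rows, self.num_cols, self.num_lanes = num_rows, num_cols, num_lanes
        # +1 column class for "no lane on this row anchor"
        self.fc = nn.Sequential(
            nn.Linear(in_dim, 2048),
            nn.ReLU(inplace=True),
            nn.Linear(2048, num_lanes * num_rows * (num_cols + 1)),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, in_dim) pooled/flattened backbone features
        logits = self.fc(feat)
        return logits.view(-1, self.num_lanes, self.num_rows, self.num_cols + 1)


if __name__ == "__main__":
    feat = torch.randn(2, 1800)                      # flattened features (assumed size)
    head = RowAnchorHead(1800, num_rows=56, num_cols=100, num_lanes=4)
    logits = head(feat)                              # (2, 4, 56, 101)
    cols = logits.argmax(dim=-1)                     # predicted cell per lane and row anchor
    print(logits.shape, cols.shape)
```

Because the head predicts one cell per row anchor instead of a per-pixel mask, the output is far smaller than a segmentation map, which is the main source of the speed advantage described above.
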

