Volume 41 Issue 4
Aug. 2023
Citation: YE Qing, ZHAO Cong, ZHU Yifan, YU Shanchuan. An Analysis of the Impact of Time Delay of Fusion Modes for Point Clouds from Cooperative Road Vehicle Systems on Autonomous Driving[J]. Journal of Transport Information and Safety, 2023, 41(4): 72-79. doi: 10.3963/j.jssn.1674-4861.2023.04.008

An Analysis of the Impact of Time Delay of Fusion Modes for Point Clouds from Cooperative Road Vehicle Systems on Autonomous Driving

doi: 10.3963/j.jssn.1674-4861.2023.04.008
  • Received Date: 2021-06-07
  • Available Online: 2023-11-23
  • The rapid development of new-generation communication technologies provides a foundation for cooperative perception between autonomous vehicles (AVs) and roadside infrastructure, which has the potential to significantly enhance the perception capability of AVs in complex scenarios. Previous studies have explored different information fusion modes for cooperative perception but neglected the trade-off between perception accuracy and communication delay. Targeting the delay characteristics of point cloud fusion in the cooperative perception of AVs, a simulation-based framework for analyzing the impact of delay is proposed, covering three fusion modes: pre-fusion, feature fusion, and post-fusion. To account for the time lag in cooperative perception results caused by communication delay, an Extended Kalman Filter is used to predictively compensate the delayed perception results. Two novel metrics, the lag compensation error and the equivalent time delay, are proposed to comprehensively evaluate the impact of the different fusion modes on cooperative perception. Based on the perception results of the various point cloud fusion modes, a model is fitted to relate average perception accuracy to the distribution of translation errors; drawing on this error distribution, the model is then used to generate simulated trajectories with perception errors and to evaluate cooperative perception performance. Finally, leveraging the TrajNet++ pedestrian trajectory dataset, 180 000 numerical simulations are conducted across 1 200 trajectories under the different point cloud fusion modes and delay parameters. The results demonstrate that shorter trajectories and higher target speeds amplify the impact of delay on cooperative perception accuracy. Taking post-fusion with a 100 ms delay as the benchmark, equivalent or better cooperative perception accuracy remains feasible as long as the feature-fusion delay stays below 500 ms or the pre-fusion delay stays below 700 ms. In complex scenarios involving suddenly appearing or high-speed targets, the low-delay, low-accuracy post-fusion mode is recommended; otherwise, the high-delay, high-accuracy feature fusion or pre-fusion modes are advisable. This study provides a basis for selecting point cloud fusion modes for the cooperative perception of autonomous driving.
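    The abstract describes prediction-based compensation of delayed perception results with an Extended Kalman Filter but gives no model details. The sketch below is a minimal illustration assuming a linear constant-velocity motion model, for which the EKF prediction step reduces to the standard Kalman prediction; the function name, state layout, and noise values are assumptions for illustration, not the paper's implementation.

    ```python
    # Minimal sketch: extrapolating a delayed track forward by the
    # communication delay tau, so the compensated estimate refers to
    # the current time. All parameters below are illustrative assumptions.
    import numpy as np

    def predict_compensate(x, P, tau, q=0.5):
        """Extrapolate a delayed state estimate to the current time.

        x   : state [px, py, vx, vy] estimated at time (t - tau)
        P   : 4x4 state covariance at time (t - tau)
        tau : communication delay in seconds (assumed known)
        q   : process-noise intensity for the CV model (assumed)
        """
        # Constant-velocity transition over the delay interval.
        F = np.array([[1, 0, tau, 0],
                      [0, 1, 0, tau],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        # White-noise-acceleration process covariance, per axis.
        Q = q * np.array([[tau**3/3, 0,        tau**2/2, 0       ],
                          [0,        tau**3/3, 0,        tau**2/2],
                          [tau**2/2, 0,        tau,      0       ],
                          [0,        tau**2/2, 0,        tau     ]])
        x_pred = F @ x              # lag-compensated state
        P_pred = F @ P @ F.T + Q    # uncertainty grows with extrapolation
        return x_pred, P_pred

    # Example: a target moving at 2 m/s, received with a 500 ms delay.
    x = np.array([10.0, 5.0, 2.0, 0.0])
    P = np.eye(4) * 0.1
    x_c, P_c = predict_compensate(x, P, tau=0.5)
    ```

    Under this assumption, the lag compensation error for one trajectory point would be a natural distance (e.g. Euclidean) between the extrapolated position and the target's true position at the current time, which is one plausible way to score the compensated accuracy of each fusion mode; the paper's exact definition is not reproduced here.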

     

