Abstract
Autonomous vehicles need to continuously navigate complex traffic environments by efficiently analyzing the surrounding scene, understanding the behavior of other traffic agents, and predicting their future trajectories. The primary objective is to plan a safe motion and reduce the reaction time to potentially imminent hazards. The main problem addressed in this paper is to explore the movement patterns of surrounding traffic agents and accurately predict their future trajectories, assisting the vehicle in making reasonable decisions. Traditional trajectory prediction modules require explicit coordinate information to model the interaction between the autonomous car and its surrounding vehicles. However, the real coordinates of surrounding vehicles are hard to obtain in real-world scenarios without vehicle-to-vehicle communication. To solve this problem, a GAN (generative adversarial network)-based deep learning framework is presented in this paper for predicting the trajectories of the vehicles surrounding an autonomous vehicle from an RGB image sequence without explicit coordinate annotation. To predict trajectories automatically from RGB image sequences, a coordinate augmentation module and a coordinate stabilization module are proposed to extract the historical trajectories from the image sequence. Meanwhile, a self-attention mechanism is introduced to improve the social pooling module so that the context information of the trajectories of surrounding vehicles is better captured. Experimental results demonstrate that the proposed method is effective and efficient.
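The paper's own implementation is not reproduced here; as a rough illustration of the self-attention-based social pooling the abstract mentions, a minimal PyTorch sketch is given below. The class name `SelfAttentionSocialPooling`, the hidden dimension, the number of heads, and the residual projection are all illustrative assumptions, not the authors' actual module.

```python
import torch
import torch.nn as nn

class SelfAttentionSocialPooling(nn.Module):
    """Hypothetical sketch: pool per-agent trajectory encodings with self-attention."""
    def __init__(self, hidden_dim=64, num_heads=4):
        super().__init__()
        # Multi-head self-attention over the set of surrounding vehicles
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, agent_states):
        # agent_states: (batch, num_agents, hidden_dim) encodings of each agent's history
        context, _ = self.attn(agent_states, agent_states, agent_states)
        # Residual connection keeps each agent's own encoding alongside the social context
        return self.proj(context + agent_states)

# Example usage with random trajectory encodings (2 scenes, 5 surrounding vehicles each)
pooling = SelfAttentionSocialPooling(hidden_dim=64, num_heads=4)
states = torch.randn(2, 5, 64)
social_context = pooling(states)   # (2, 5, 64) context-aware encodings
```

In such a design, attention weights let each vehicle's encoding attend to the encodings of all other vehicles in the scene, which is one plausible way to capture the interaction context the abstract attributes to the improved social pooling module.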
| Original language | English |
|---|---|
| Pages (from-to) | 10763-10780 |
| Number of pages | 18 |
| Journal | Multimedia Tools and Applications |
| Volume | 82 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - Mar 2023 |
ASJC Scopus subject areas
- Software
- Media Technology
- Hardware and Architecture
- Computer Networks and Communications