Learning-based methods have shown promising performance for accelerating motion planning, but mostly in the setting of static environments. For the more challenging problem of planning in dynamic environments, such as multi-arm assembly tasks and human-robot interaction, motion planners need to consider the trajectories of the dynamic obstacles and reason about spatio-temporal interactions in very large state spaces. We propose a GNN-based approach that uses temporal encoding and imitation learning with data aggregation to learn both the graph embeddings and the edge prioritization policies. Experiments show that the proposed method can significantly accelerate online planning compared with state-of-the-art complete dynamic planning algorithms. The learned models often reduce costly collision checking operations by more than 1000x, thereby accelerating planning by up to 95%, while maintaining high success rates on hard instances.
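To make the approach more concrete, below is a minimal sketch of the two learned components described above: a temporal encoder that summarizes a dynamic obstacle's trajectory into a fixed-size code, and a GNN-style edge scorer that prioritizes roadmap edges conditioned on that code. It assumes PyTorch; the module names (`TemporalEncoder`, `EdgeScorer`), feature sizes, and pooling choices are illustrative assumptions, not the exact architecture or hyperparameters used in the paper.

```python
# Hedged sketch: temporal encoding + edge prioritization for a roadmap GNN.
# Names and dimensions here are hypothetical, for illustration only.
import torch
import torch.nn as nn


class TemporalEncoder(nn.Module):
    """Encodes an obstacle trajectory of shape (T, obs_dim) into one vector."""

    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(obs_dim + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # Append a normalized time index to each waypoint, encode, then pool.
        T = traj.shape[0]
        t = torch.linspace(0.0, 1.0, T).unsqueeze(-1)          # (T, 1)
        per_step = self.mlp(torch.cat([traj, t], dim=-1))      # (T, hidden)
        return per_step.max(dim=0).values                      # (hidden,)


class EdgeScorer(nn.Module):
    """Scores roadmap edges for prioritized expansion, conditioned on obstacles."""

    def __init__(self, node_dim: int, hidden: int = 64):
        super().__init__()
        self.node_mlp = nn.Linear(node_dim, hidden)
        self.edge_mlp = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

    def forward(self, x, edge_index, obstacle_code):
        h = torch.relu(self.node_mlp(x))                       # (N, hidden)
        src, dst = edge_index                                  # each (E,)
        obs = obstacle_code.expand(src.shape[0], -1)           # broadcast to edges
        scores = self.edge_mlp(torch.cat([h[src], h[dst], obs], dim=-1))
        return scores.squeeze(-1)                              # (E,) edge priorities


if __name__ == "__main__":
    # Toy usage: 5 roadmap nodes in 2-D configurations, 4 edges, one obstacle.
    enc, scorer = TemporalEncoder(obs_dim=2), EdgeScorer(node_dim=2)
    nodes = torch.rand(5, 2)
    edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
    obstacle_traj = torch.rand(10, 2)                          # 10 timesteps
    print(scorer(nodes, edges, enc(obstacle_traj)))            # higher = expand first
```

In such a setup, the planner would expand high-scoring edges first and fall back to exhaustive search, so completeness can be retained while learned priorities cut down collision checks; the imitation-learning-with-data-aggregation loop would supervise these scores with an oracle planner's decisions.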
We have tested our algorithm extensively in dynamic environments. GNN-TE reduces costly collision checking operations by more than 1000x compared with the oracle, Safe Interval Path Planning, while achieving high success rates on hard instances compared with a naive heuristic baseline.
In addition to this work, we also maintain a repository that collects research projects on learning-enabled motion planning.