Reactive grasping: Reactive grasping, which enables a robot to successfully grasp dynamically moving objects, is of great interest in robotics. Current methods mainly focus on the temporal smoothness of the predicted grasp poses, but few consider their semantic consistency. Consequently, the predicted grasps are not guaranteed to fall on the same part of the same object across a sequence of observation frames, especially in cluttered scenes. In this paper, we propose to achieve temporally smooth and semantically consistent reactive grasping in a target-referenced setting by tracking through generated grasp spaces. Given a target grasp pose on an object and the grasp poses detected in a new observation, our method proceeds in two stages: 1) discovering grasp pose correspondences through an attentional graph neural network and selecting the candidate with the highest similarity to the target pose; 2) refining the selected grasp pose based on target and historical information. Following these steps, our method provides temporally smooth grasp poses that are semantically consistent with the given reference. We evaluate our method on the large-scale benchmark GraspNet-1Billion, and additionally collect 30 scenes of dynamic objects for testing. The results suggest that our method outperforms other representative methods. Furthermore, our real robot experiments achieve an overall success rate of around 80 percent. Our code will be made publicly available. The Moving GraspNet test set is available on Baidu.
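The two-stage tracking loop described above can be sketched as follows. This is a minimal, hedged illustration, not the paper's implementation: all function names are hypothetical, a plain cosine similarity stands in for the attentional graph neural network's learned correspondence scores, and exponential smoothing over pose parameters stands in for the learned refinement stage.

```python
import math

def cosine_similarity(a, b):
    # Stand-in for the learned correspondence score between two grasp features.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_corresponding_grasp(target_feat, candidate_feats):
    # Stage 1: among grasps detected in the new frame, pick the one
    # most similar to the target grasp on the reference object.
    scores = [cosine_similarity(target_feat, f) for f in candidate_feats]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

def refine_grasp(selected_pose, previous_pose, alpha=0.7):
    # Stage 2: blend the selected pose with the historical pose so that
    # the executed grasp trajectory stays temporally smooth.
    return [alpha * s + (1 - alpha) * p
            for s, p in zip(selected_pose, previous_pose)]

# Hypothetical usage: one target feature, two candidates in the new frame.
idx, score = select_corresponding_grasp(
    [1.0, 0.0, 0.0],
    [[0.0, 1.0, 0.0], [0.9, 0.1, 0.0]],
)
smoothed = refine_grasp([1.0, 0.0], [0.0, 1.0], alpha=0.5)
```

In the actual method, the correspondence scores come from the attentional graph neural network over the full grasp spaces of both frames, and the refinement stage is learned rather than a fixed smoothing filter.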
All data, labels, code and models belong to the GraspNet team, MVIG, SJTU, and are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). They are freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop an email at fhaoshu at gmail.com and cc lucewu at sjtu.edu.cn .