As the basis of prehensile manipulation, it is vital to enable robots to grasp as robustly as humans do. In daily manipulation, the human grasping system is prompt, accurate, flexible and continuous across the spatial and temporal domains. Few existing methods cover all of these properties for robot grasping. In this paper, we propose a new methodology for grasp perception that endows robots with these abilities. Specifically, we develop a dense supervision strategy with real perception and analytic labels in the spatial-temporal domain. Additional awareness of objects' center of mass is incorporated into the learning process to improve grasping stability. Utilizing grasp correspondence across observations enables dynamic grasp tracking. Our model, AnyGrasp, can efficiently generate accurate, full-DoF, dense and temporally smooth grasp poses, and works robustly against large depth-sensing noise. Embedded with AnyGrasp, we achieve a 93.3% success rate when clearing bins with over 300 unseen objects, which is comparable to that of human subjects under controlled conditions. Over 900 mean picks per hour (MPPH) are reported on a single-arm system. For dynamic grasping, we demonstrate catching swimming robot fish in water. Links to the preprint paper and the library are provided on this page. Some demos are shown below.
Including rigid/deformable objects
Challenging dynamic scene grasping
Human-level accuracy and speed
Fragments thinner than 5 mm
Thousands of grasp poses in < 1s
Robust to light/angle variance in scenes
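As a rough illustration of how a grasp-perception model such as AnyGrasp is typically driven from a single depth camera, the sketch below back-projects a depth image into a point cloud and passes it to a detector that would return scored, full-DoF grasp poses. The names GraspDetector and detect_grasps are hypothetical placeholders and not the released anygrasp_sdk API; only the pinhole back-projection math is standard.

# Minimal sketch of a depth-to-grasp pipeline (assumed interface, not the actual SDK).
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a metric depth image (H, W) into an (N, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

class GraspDetector:  # hypothetical stand-in for a learned grasp-pose detector
    def detect_grasps(self, points, workspace_limits=None):
        """Return a list of (4x4 gripper pose, width, score) tuples sorted by score.
        A real detector would run a network here; this stub returns nothing."""
        return []

if __name__ == "__main__":
    depth = np.random.uniform(0.4, 0.8, size=(720, 1280))  # placeholder depth frame, in meters
    points = depth_to_point_cloud(depth, fx=900.0, fy=900.0, cx=640.0, cy=360.0)
    grasps = GraspDetector().detect_grasps(points)
    # The highest-scoring grasp would then be executed by the robot controller.
    print(f"{len(points)} points, {len(grasps)} candidate grasps")

In practice, running such a detector on every frame and associating grasps across frames (as AnyGrasp does with grasp correspondence) is what enables tracking a grasp on a moving target.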
To subscribe, send an empty email to graspnet+subscribe@googlegroups.com and reply to the validation email, or use the web interface at http://groups.google.com/group/graspnet/subscribe. To unsubscribe, send an empty email to graspnet+unsubscribe@googlegroups.com.
Explore our website for more details.
Please cite our paper if it helps your research:
@article{fang2023anygrasp,
  title   = {AnyGrasp: Robust and Efficient Grasp Perception in Spatial and Temporal Domains},
  author  = {Fang, Hao-Shu and Wang, Chenxi and Fang, Hongjie and Gou, Minghao and Liu, Jirong and Yan, Hengxu and Liu, Wenhai and Xie, Yichen and Lu, Cewu},
  journal = {IEEE Transactions on Robotics (T-RO)},
  year    = {2023}
}
Copyright © 2022 Machine Vision and Intelligence Group, Shanghai Jiao Tong University.