As the basis for prehensile manipulation, it is vital to enable robots to grasp as robustly as humans do. In daily manipulation, our grasping system is prompt, accurate, flexible and continuous across the spatial and temporal domains. Few existing methods cover all of these properties for robot grasping. In this paper, we propose a new methodology for grasp perception that endows robots with these abilities. Specifically, we develop a dense supervision strategy with real perception and analytic labels in the spatial-temporal domain. Additional awareness of objects' center of mass is incorporated into the learning process to help improve grasping stability. Utilizing grasp correspondence across observations enables dynamic grasp tracking. Our model, AnyGrasp, can generate accurate, full-DoF, dense and temporally-smooth grasp poses efficiently, and works robustly against large depth-sensing noise. Embedded with AnyGrasp, we achieve a 93.3% success rate when clearing bins with over 300 unseen objects, which is comparable to human subjects under controlled conditions. Over 900 mean picks per hour (MPPH) is reported on a single-arm system. For dynamic grasping, we demonstrate catching robot fish swimming in water.
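
To make the interface concrete, the sketch below shows how a dense grasp detector of this kind might be queried from a depth-derived point cloud, and how a pick-cycle time maps onto the MPPH metric quoted above. The detect_grasps function and the Grasp fields are hypothetical placeholders used only for illustration; they are not the actual AnyGrasp SDK API.

# Hypothetical interface sketch (NOT the actual AnyGrasp SDK API).
# A 7-DoF grasp: 3-DoF translation, 3-DoF rotation, 1-DoF gripper width.
class Grasp:
    def __init__(self, translation, rotation, width, score):
        self.translation = translation  # (3,) gripper center, camera frame [m]
        self.rotation = rotation        # (3, 3) gripper orientation matrix
        self.width = width              # jaw opening [m]
        self.score = score              # predicted grasp quality

def detect_grasps(points):
    """Placeholder for a dense detector that returns a list of Grasp
    candidates for one point-cloud observation."""
    raise NotImplementedError

# One pick cycle would look roughly like:
#   points = depth_to_point_cloud(depth_image)   # from an RGB-D capture
#   grasps = detect_grasps(points)               # thousands of candidates
#   best   = max(grasps, key=lambda g: g.score)  # take the top-scored grasp
#   execute(best)                                # move the arm and close the gripper

# Relating cycle time to mean picks per hour (MPPH):
# 900 MPPH corresponds to one successful pick roughly every 3600 / 900 = 4 s.
cycle_time_s = 4.0
print("picks per hour:", 3600.0 / cycle_time_s)  # -> 900.0

The 4-second cycle time above is simply back-calculated from the 900 MPPH figure; it bundles perception, arm motion and gripper actuation.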

The preprint paper can be found at https://arxiv.org/abs/2212.08333.
The accompanying library is also publicly available.

Some demos are available below.

Any Objects

Including rigid/deformable objects



Robot Fish Catching

Challenging dynamic scene grasping


Challenging Scenario

Robust to light/angle variance in scenes



Extreme Case

Fragments thinner than 5 mm


Dense Prediction

Thousands of grasp poses in < 1s
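
As a rough downstream illustration of what such a dense prediction enables, the sketch below filters a few thousand candidate grasps to those whose approach direction is close to vertical (a common heuristic for top-down bin picking) and keeps the highest-scored ones. The arrays, the 30-degree threshold and the top-10 cut are illustrative placeholders, not outputs or parameters of the actual library.

import numpy as np

# Illustrative stand-ins for a dense prediction: N candidates, each with a
# quality score and a unit approach direction in the camera frame.
rng = np.random.default_rng(0)
N = 2000
scores = rng.random(N)
approach = rng.normal(size=(N, 3))
approach /= np.linalg.norm(approach, axis=1, keepdims=True)

# Keep grasps whose approach is within 30 degrees of straight down.
down = np.array([0.0, 0.0, -1.0])
mask = approach @ down > np.cos(np.deg2rad(30.0))

# Rank the remaining candidates by score and keep the ten best.
idx = np.flatnonzero(mask)
top = idx[np.argsort(scores[idx])[::-1][:10]]
if top.size:
    print(f"{idx.size} near-vertical grasps, best score {scores[top[0]]:.3f}")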





Subscribe to get notifications


To subscribe, simply send an empty email to graspnet+subscribe@googlegroups.com and reply to the validation email, or use the web interface at http://groups.google.com/group/graspnet/subscribe

To unsubscribe, send an empty email to graspnet+unsubscribe@googlegroups.com


Check it out!

Explore our website for more details.


Please cite our paper if it helps your research:

@article{fang2022anygrasp,
  title={AnyGrasp: Robust and Efficient Grasp Perception in Spatial and Temporal Domains},
  author={Fang, Hao-Shu and Wang, Chenxi and Fang, Hongjie and Gou, Minghao and Liu, Jirong and Yan, Hengxu and Liu, Wenhai and Xie, Yichen and Lu, Cewu},
  journal={arXiv preprint arXiv:2212.08333},
  year={2022}
}


Copyright © 2022 Machine Vision and Intelligence Group, Shanghai Jiao Tong University.