GraspNet is an open project for general object grasping that is continuously enriched. It currently contains:

GraspNet-1Billion: a benchmark of 190 cluttered and complex scenes captured by two popular RGB-D cameras (Azure Kinect and Intel RealSense D435), yielding 97,280 images in total. For each image, we annotate the accurate 6D pose and dense grasp poses of every object. In total, the dataset contains 88 objects and over 1.1 billion grasp poses. We hope our benchmark can facilitate research in general object grasping as well as related areas (e.g., 6D pose estimation, unseen object segmentation, etc.).

AnyGrasp: the first solution that achieves human-level grasping in cluttered scenes. Enabled by GraspNet-1Billion, AnyGrasp aims to grasp any object, rigid or deformable, in any scenario, and offers a solid basis for robotic manipulation. We release the library to facilitate related research. More details at this page.

SuctionNet-1Billion: a benchmark for suction-based grasping. Compared with other kinds of grasping, suction is usually more reliable and easier to represent. Building on the rich data of GraspNet-1Billion, we further provide suction-related labels and an additional online evaluation system to boost the development of suction grasping and related areas. More details at this page.

TransCG: the first large-scale real-world dataset for transparent object depth completion and grasping, whose construction is accelerated by a novel semi-automatic pipeline. The dataset contains 57,715 RGB-D images of 51 transparent objects and many opaque objects, captured from different perspectives across 130 scenes under real-world settings. More details at this page.

Community on GitHub. Click here to join our mailing list.
2023/8/1: The extended paper of GraspNet-1Billion is accepted to IJRR!
2023/4/26: Our paper "AnyGrasp" is accepted to T-RO!
2023/2/28: Our paper "Target-referenced Reactive Grasping" is accepted to CVPR 2023!
2022/6/15: Our paper "TransCG" is accepted to RA-L!
2021/7/29: Our paper "SuctionNet-1Billion" is accepted to RA-L!
2021/7/23: Our paper "Graspness" is accepted to ICCV 2021!
2021/6/1: A demo of our human-level grasp detection system AnyGrasp is released!
2021/2/28: Our paper "RGB Matters" is accepted to ICRA 2021!
2020/12/22: The baseline network in our paper is open-sourced!
2020/10/10: The API for dataset access and evaluation is live!
2020/06/15: Data of all 190 scenes are released!
2020/02/23: GraspNet-1Billion is accepted to CVPR 2020!
190 scenes, 97,280 images, 88 objects, 2 cameras
6DoF grasp labels, planar grasp labels, 6D poses, instance masks, etc.
Standard evaluation metrics for different grasp pose representations
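As a concrete example, the annotations above can be accessed through our graspnetAPI Python package. The sketch below follows the usage shown in the package's documentation, but treat it as a sketch: the dataset root is a placeholder path, and you should check the repository for the current interface.

from graspnetAPI import GraspNet

# Placeholder path: point this at your local copy of the dataset.
graspnet_root = '/path/to/graspnet'

# Initialize the API on the Kinect images of the training split.
g = GraspNet(graspnet_root, camera='kinect', split='train')

# Load 6-DoF grasp labels for scene 1, annotation (view) 3, keeping only
# grasps with friction coefficient below 0.2 (lower means more robust).
grasps = g.loadGrasp(sceneId=1, annId=3, format='6d',
                     camera='kinect', fric_coef_thresh=0.2)

# loadGrasp returns a GraspGroup, which supports sorting and sampling.
grasps = grasps.sort_by_score()
print(len(grasps), 'grasp poses loaded')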
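The evaluation metric is exposed through the same package. A minimal sketch, assuming predictions have already been dumped to disk as per-scene, per-annotation GraspGroup .npy files in the layout the package expects (both paths below are placeholders):

from graspnetAPI import GraspNetEval

# Placeholder paths: the dump folder holds one predicted GraspGroup
# (e.g., saved with GraspGroup.save_npy) per scene and annotation.
graspnet_root = '/path/to/graspnet'
dump_folder = '/path/to/dump_folder'

# Evaluate predictions against the Kinect test split.
ge = GraspNetEval(root=graspnet_root, camera='kinect', split='test')

# Average Precision over the 'seen' portion of the test set,
# computed with 6 worker processes.
res, ap = ge.eval_seen(dump_folder, proc=6)
print('AP (seen):', ap)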
To subscribe, send an empty email to graspnet+subscribe@googlegroups.com and reply to the confirmation email, or use the web interface at http://groups.google.com/group/graspnet/subscribe.

To unsubscribe, send an empty email to graspnet+unsubscribe@googlegroups.com.
Explore our website for more details.
Please cite our papers if they help your research:
@article{fang2023robust,
  title={Robust grasping across diverse sensor qualities: The GraspNet-1Billion dataset},
  author={Fang, Hao-Shu and Gou, Minghao and Wang, Chenxi and Lu, Cewu},
  journal={The International Journal of Robotics Research},
  year={2023},
  publisher={SAGE Publications Sage UK: London, England}
}

@inproceedings{fang2020graspnet,
  title={GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping},
  author={Fang, Hao-Shu and Wang, Chenxi and Gou, Minghao and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11444--11453},
  year={2020}
}
Copyright © 2021 Machine Vision and Intelligence Group, Shanghai Jiao Tong University.