GraspNet is an open project for general object grasping that is continuously enriched. Currently we release GraspNet-1Billion. It contains 190 cluttered and complex scenes captured by two popular RGB-D cameras (Azure Kinect and RealSense D435), yielding 97,280 images in total. For each image, we annotate accurate 6D poses and dense grasp poses for every object; in total, the dataset contains over 1.1 billion grasp poses. We hope our benchmark can facilitate research in general object grasping as well as related areas (e.g. 6D pose estimation, unseen object segmentation). Join the community on GitHub, or subscribe to our mailing list (details below).
2020/12/22: The baseline network from our paper is open source!
2020/10/10: The API for the dataset and evaluation is live!
2020/06/15: Data for all 190 scenes are released.
2020/02/23: GraspNet-1Billion was accepted by CVPR 2020.
190 scenes, 97,280 images, 88 objects, 2 cameras
6DoF grasp labels, planar grasp labels, 6D poses, instance masks, etc.
Standard evaluation metrics for different grasp pose representations
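To make the 6-DoF grasp labels concrete, here is a minimal sketch of decoding one grasp record. It assumes the flat 17-float layout used by graspnetAPI-style tooling (score, gripper width, finger height, approach depth, a row-major 3×3 rotation, a translation, and an object id); the `decode_grasp` helper and the exact field ordering are illustrative assumptions, not an official API.

```python
import numpy as np

def decode_grasp(arr):
    """Decode a single 6-DoF grasp from a flat 17-float array.

    Assumed (hypothetical) layout:
    [score, width, height, depth, rotation (9, row-major),
     translation (3), object_id]
    """
    arr = np.asarray(arr, dtype=np.float64)
    assert arr.shape == (17,), "one grasp = 17 floats under this assumed layout"
    return {
        "score": float(arr[0]),                # grasp confidence
        "width": float(arr[1]),                # gripper opening width (m)
        "height": float(arr[2]),               # finger height (m)
        "depth": float(arr[3]),                # approach depth (m)
        "rotation": arr[4:13].reshape(3, 3),   # orientation in camera frame
        "translation": arr[13:16],             # grasp center in camera frame (m)
        "object_id": int(arr[16]),             # which object this grasp targets
    }

# Usage: an identity-rotation grasp 0.4 m in front of the camera.
g = decode_grasp([0.9, 0.08, 0.02, 0.03]
                 + list(np.eye(3).flatten())
                 + [0.0, 0.0, 0.4, 5])
```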
To subscribe, send an empty email to graspnet+subscribe@googlegroups.com and reply to the confirmation email, or use the web interface at http://groups.google.com/group/graspnet/subscribe. To unsubscribe, send an empty email to graspnet+unsubscribe@googlegroups.com.
Explore our website for more details.
Please cite our paper if it helps your research:
@inproceedings{fang2020graspnet,
  title={GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping},
  author={Fang, Hao-Shu and Wang, Chenxi and Gou, Minghao and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11444--11453},
  year={2020}
}
Copyright © 2020 Machine Vision and Intelligence Group, Shanghai Jiao Tong University.