Brief: For object grasping, the algorithm must estimate feasible grasp poses from visual perception, i.e., where to place the gripper in the scene so that the robot can successfully grasp objects.
Tools: We provide APIs for the GraspNet images, annotations, and evaluation on GitHub, along with demos and instructions in the same repository. For evaluation details, please visit the evaluation page.
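As a hedged illustration of what a grasp pose contains, the sketch below represents one grasp as a rotation matrix, a translation (gripper center in the camera frame), and a jaw width, and computes the two fingertip positions. The field layout and the `gripper_fingertips` helper are illustrative assumptions, not the graspnetAPI schema.

```python
import numpy as np

def gripper_fingertips(R, t, width):
    """Return the two fingertip positions in the camera frame,
    assuming the jaws open along the gripper's local y-axis.
    (Hypothetical convention for illustration only.)"""
    offsets = np.array([[0.0,  width / 2, 0.0],
                        [0.0, -width / 2, 0.0]])  # fingertips in gripper coordinates
    return (R @ offsets.T).T + t                  # rotate into camera frame, then translate

# Example: gripper rotated 90 degrees about the camera z-axis,
# centered 0.5 m in front of the camera, jaws opened 8 cm.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.1, 0.0, 0.5])
tips = gripper_fingertips(R, t, width=0.08)
```

A grasp detector typically outputs many such poses per scene, each with a quality score; the evaluation then checks which predicted poses are force-closure grasps on the annotated objects.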
Brief: Given known camera intrinsics, a known object 3D model, and RGB/RGB-D inputs, 6D pose estimation infers the translation T and rotation R of the object with respect to the camera coordinate frame.
Tools: GraspNet provides 6D pose annotations in the same format as the YCB-Video dataset; the related toolkit can be found on their website.
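The relation between an estimated pose (R, T) and the image can be sketched in a few lines: a model point p in the object frame maps to the camera frame as R p + T, and the intrinsic matrix K then projects it to pixel coordinates. The intrinsic values below are illustrative assumptions, not the dataset's calibration.

```python
import numpy as np

def project_point(p_obj, R, T, K):
    """Transform an object-model point into the camera frame with the
    estimated pose (R, T), then project it with intrinsics K."""
    p_cam = R @ p_obj + T   # object frame -> camera frame
    u, v, w = K @ p_cam     # pinhole perspective projection
    return np.array([u / w, v / w])

K = np.array([[600.0,   0.0, 320.0],   # illustrative intrinsics (fx, fy, cx, cy)
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # identity rotation for the example
T = np.array([0.0, 0.0, 1.0])          # object 1 m in front of the camera

pixel = project_point(np.array([0.1, 0.0, 0.0]), R, T, K)
```

This forward model is what pose-estimation methods invert: they search for the (R, T) whose projected model points best match the observed RGB/RGB-D evidence.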