The ability of a robot to detect, grasp, and manipulate objects is one of the key challenges in building an intelligent humanoid robot. For a collaborative robot, handling objects is a central part of its job, and successful manipulation starts with accurate grasping. Intelligent control of a robotic arm rests on detecting each object and understanding its pose: a precise 3D pose is necessary to achieve a reliable grasp. To recognize object poses, we can use data from the surrounding environment, which in our setting consists of RGBD frames. Recent work in Deep Learning has achieved impressive results on 3D pose estimation from RGBD data.
In this project, we will integrate a pose estimation component into RoboComp, building on state-of-the-art Deep Learning work. The estimated poses will be visualized in the V-REP simulator and used to drive the new Kinova Gen3 arm to precisely grasp and manipulate objects.
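To make the pipeline concrete, a 3D (6-DoF) pose is typically represented as a rotation plus a translation, packed into a 4x4 homogeneous transform; the grasp planner then maps points defined in the object's frame into the camera (or robot) frame. The sketch below illustrates this representation with NumPy; the function name `pose_matrix` and the example numbers are illustrative, not RoboComp or Kinova API calls.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical estimator output: object rotated 90 degrees about z,
# located 0.5 m in front of the camera.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
T_cam_obj = pose_matrix(Rz, [0.0, 0.0, 0.5])

# A grasp point defined in the object frame (homogeneous coordinates),
# transformed into the camera frame for the arm controller.
grasp_obj = np.array([0.02, 0.0, 0.0, 1.0])
grasp_cam = T_cam_obj @ grasp_obj
```

In practice the estimator network would output `T_cam_obj` per detected object, and a hand-eye calibration would supply one more fixed transform from the camera frame to the arm's base frame.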