A shape primitive-based grasping strategy using visual object recognition in confined, hazardous environments
Grasping is a complicated process for robots because it requires replicating human fine motor skills with the typically high degrees of freedom of robotic hands. Underactuated robotic hands provide a way to execute grasps without the onerous task of calculating every fingertip placement. The general shape-configuration modes available to underactuated hands lend themselves well to grasping by shape primitives, especially when applied to gloveboxes in the nuclear domain, where the anticipated object set is finite and the objects can safely be assumed rigid. The objects found in a glovebox can thus be categorized into a small set of primitives such as cylinders, cubes, and bowls/hemispheres. These same assumptions can also be leveraged for reliable object identification and pose estimation within a glovebox. This effort develops and simulates a simple but robust and effective grasp planning algorithm for a 7-DOF industrial robot and a three-fingered, dexterous but underactuated robotic hand. The proposed algorithm creates a grasp by generating a vector from the base of the robot to the object and manipulating that vector into a suitable starting location for a grasp. The grasp preshapes are selected to match shape primitives and are built into the Robotiq gripper used to demonstrate the algorithm. If a grasp is found to be unsuitable via an inverse kinematics solution check, the algorithm procedurally generates additional candidate grasps based on object geometry until a solution is found or all possibilities are exhausted. The algorithm was tested and found capable of generating valid grasps for visually identified objects, and it can recalculate grasps when one is found to be incompatible with the current kinematics of the robotic arm.
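The generate-check-retry loop described above can be sketched minimally as follows. This is an illustrative assumption, not the thesis implementation: the primitive-to-preshape mapping, the even spacing of candidate approach yaws about the object's vertical axis, and the `ik_feasible` callback standing in for the inverse kinematics solution check are all hypothetical names and simplifications.

```python
import math

# Assumed mapping from shape primitive to a gripper preshape; the mode
# names loosely echo the Robotiq 3-Finger gripper's built-in modes but
# the pairing with primitives is an illustrative guess.
PRESHAPE_BY_PRIMITIVE = {
    "cylinder": "wide",
    "cube": "basic",
    "hemisphere": "scissor",
}

def candidate_yaws(n=8):
    """Procedurally generate candidate approach directions (assumption:
    evenly spaced rotations about the object's vertical axis)."""
    return [2 * math.pi * k / n for k in range(n)]

def plan_grasp(primitive, ik_feasible, n=8):
    """Return (preshape, yaw) for the first IK-feasible candidate,
    or None once all possibilities are exhausted."""
    preshape = PRESHAPE_BY_PRIMITIVE[primitive]
    for yaw in candidate_yaws(n):
        if ik_feasible(yaw):  # stand-in for the IK solution check
            return preshape, yaw
    return None

# Example: pretend only approaches with yaw >= pi are kinematically reachable.
result = plan_grasp("cylinder", lambda yaw: yaw >= math.pi)
```

The key property the abstract emphasizes is captured by the loop: an infeasible candidate does not fail the plan outright; the planner keeps generating alternatives from the object geometry until one passes the kinematics check.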