Show simple item record

dc.contributor.advisor	Landsberger, Sheldon
dc.contributor.advisor	Pryor, Mitchell Wayne
dc.creator	O'Neil, Brian Erick
dc.date.accessioned	2013-10-17T18:22:36Z
dc.date.issued	2013-05
dc.date.submitted	May 2013
dc.identifier.uri	http://hdl.handle.net/2152/21610
dc.description	text
dc.description.abstract	This dissertation advances the capability of autonomous or semiautonomous robotic manipulation systems by providing the tools required to turn depth sensor measurements into a meaningful representation of the objects present in the robot's environment. This process happens in two steps. First, the points from depth imagery are separated into clusters representing individual objects by a Euclidean clustering scheme. Each cluster is then passed to a recognition algorithm that determines what the object is and where it is. This information allows the robot to determine a pose of the object for grasp planning or obstacle avoidance. To accomplish this, the recognition system must extract a mathematical representation of each point cluster. To this end, this dissertation presents a new feature descriptor, the Cylindrical Projection Histogram, which captures the shape, size, and viewpoint of the object while maintaining invariance to image scale. These features are used to train a classifier which can then determine the label and pose of each cluster identified in a scene. The results are used to inform a probabilistic model of the object that quantifies uncertainty and allows Bayesian update of the object's label and position. Experimental results on live data show a 97.2% correct recognition rate for a classifier based on the Cylindrical Projection Histogram. This is a significant improvement over another state-of-the-art feature that gives an 89.6% recognition rate on the same object set. With statistical filtering over 10 frames, the raw recognition rates improve to 100% and 92.3%, respectively. For pose estimation, both features offer rotational pose estimation performance from 12° to 30°, and pose errors below 1 cm. This work supports deployment of robotic manipulation systems in unstructured glovebox environments in US Department of Energy facilities. The recognition performance of the CPH classifier is adequate for this purpose. The pose estimation performance is sufficient for gross pick-and-place tasks of simple objects, but not sufficient for dexterous manipulation. However, the pose estimation, along with the probabilistic model, supports post-recognition pose refinement techniques.
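The two-step pipeline described in the abstract (Euclidean clustering of depth points, then a cylindrical histogram descriptor per cluster) can be sketched as follows. This is an illustrative toy version, not the dissertation's implementation: the function names are hypothetical, the clustering is a naive O(n²) stand-in for a proper kd-tree-based Euclidean cluster extraction, and the descriptor only bins height and azimuth, whereas the real Cylindrical Projection Histogram also encodes shape, size, and viewpoint.

```python
import numpy as np

def euclidean_cluster(points, tol=0.05):
    """Group 3D points into clusters: two points share a cluster when a
    chain of neighbors, each closer than `tol`, links them (a naive
    stand-in for kd-tree-accelerated Euclidean cluster extraction)."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(points[i] - points[j]) < tol]
            for j in near:
                unvisited.discard(j)
            queue.extend(near)
            cluster.extend(near)
        clusters.append(np.array(sorted(cluster)))
    return clusters

def cylindrical_projection_histogram(points, n_height=4, n_angle=8):
    """Toy descriptor: bin a cluster's points by height and azimuth
    about its centroid's vertical axis, then normalize so the result
    does not depend on how many points the sensor returned."""
    p = points - points.mean(axis=0)          # center the cluster
    z = p[:, 2]
    theta = np.arctan2(p[:, 1], p[:, 0])      # azimuth in [-pi, pi]
    h_bins = np.clip(((z - z.min()) / (np.ptp(z) + 1e-9)
                      * n_height).astype(int), 0, n_height - 1)
    a_bins = ((theta + np.pi) / (2 * np.pi) * n_angle).astype(int) % n_angle
    hist = np.zeros((n_height, n_angle))
    np.add.at(hist, (h_bins, a_bins), 1.0)
    return (hist / len(points)).ravel()       # sums to 1
```

With two well-separated synthetic blobs, `euclidean_cluster` returns two clusters, and each cluster's descriptor is a fixed-length normalized vector suitable as input to a classifier, which is the role the CPH plays in the recognition pipeline.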
dc.format.mimetype	application/pdf
dc.language.iso	en_US
dc.subject	Computer vision
dc.subject	Robotics
dc.subject	Nuclear engineering
dc.subject	Object recognition
dc.subject	Manipulation
dc.title	Object recognition and pose estimation for manipulation in nuclear materials handling applications
dc.date.updated	2013-10-17T18:22:36Z
dc.description.department	Mechanical Engineering
thesis.degree.department	Mechanical Engineering
thesis.degree.discipline	Mechanical Engineering
thesis.degree.grantor	The University of Texas at Austin
thesis.degree.level	Doctoral
thesis.degree.name	Doctor of Philosophy

