Browsing by Subject "Mobile robots"
Now showing 1 - 7 of 7
Item: Autonomous sensor and action model learning for mobile robots (2008-08)
Stronger, Daniel Adam; Stone, Peter, 1971-

Autonomous mobile robots have the potential to be extremely beneficial to society due to their ability to perform tasks that are difficult or dangerous for humans. These robots necessarily interact with their environment through the two fundamental processes of acting and sensing. Robots learn about the state of the world around them through their sensations, and they influence that state through their actions. However, in order to interact with their environment effectively, these robots must have accurate models of their sensors and actions: knowledge of what their sensations say about the state of the world and how their actions affect that state. A mobile robot's action and sensor models are typically tuned manually, a brittle and laborious process. The robot's actions and sensors may change, either over time from wear or because of a novel environment's terrain or lighting. It is therefore valuable for the robot to be able to learn these models autonomously. This dissertation presents a methodology that enables mobile robots to learn their action and sensor models starting without an accurate estimate of either model. This methodology is instantiated in three robotic scenarios. First, an algorithm is presented that enables an autonomous agent to learn its action and sensor models in a class of one-dimensional settings. Experimental tests are performed on a four-legged robot, the Sony Aibo ERS-7, walking forward and backward at different speeds while facing a fixed landmark. Second, a probabilistically motivated model learning algorithm is presented that operates on the same robot walking in two dimensions with arbitrary combinations of forward, sideways, and turning velocities.
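The one-dimensional case can be illustrated with a deliberately simplified sketch (not the thesis's actual algorithm): assume a linear, noiseless world, then alternate between trusting the current sensor model to estimate positions while refitting the action model, and trusting the action model while refitting the sensor model. All names and the linear-gain form are illustrative assumptions.

```python
import random

def simulate(a_true=0.8, steps=200, seed=0):
    """Generate (command, observation) pairs for a 1-D robot.
    Actual velocity = a_true * commanded velocity; the (noise-free)
    sensor reports the true distance to a fixed landmark."""
    rng = random.Random(seed)
    x, data = 0.0, []
    for _ in range(steps):
        c = rng.uniform(-1.0, 1.0)      # commanded velocity
        data.append((c, x))             # observation taken before moving
        x += a_true * c                 # unit time step
    return data

def learn_models(data, iters=20):
    """Alternately fit a linear action model (gain a) and a linear
    sensor model (gain b), starting with no accurate estimate of either."""
    a, b = 1.0, 1.0                     # initial guesses
    for _ in range(iters):
        # 1) Trust the sensor model: positions implied by observations.
        xs = [b * z for (_, z) in data]
        # 2) Fit the action gain a by least squares on position deltas.
        num = sum(c * (xs[t + 1] - xs[t]) for t, (c, _) in enumerate(data[:-1]))
        den = sum(c * c for (c, _) in data[:-1])
        a = num / den
        # 3) Trust the action model: positions implied by commands.
        xhat, xp = 0.0, []
        for (c, _) in data:
            xp.append(xhat)
            xhat += a * c
        # 4) Fit the sensor gain b mapping observations to those positions.
        num = sum(z * x for ((_, z), x) in zip(data, xp))
        den = sum(z * z for (_, z) in data)
        b = num / den
    return a, b
```

Note that action and sensor models constrain each other only through their composition, so such a pair is in general determined only up to a shared scale factor; in this sketch the simulated sensor happens to be the identity, which pins the scale.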
Finally, an algorithm is presented to learn the action and sensor models of a very different mobile robot, an autonomous car.

Item: Creating and utilizing symbolic representations of spatial knowledge using mobile robots (2008-08)
Beeson, Patrick Foil, 1977-; Kuipers, Benjamin

A map is a description of an environment allowing an agent--a human, or in our case a mobile robot--to plan and perform effective actions. From a single location, an agent's sensors cannot observe the whole structure of a complex, large environment. For this reason, the agent must build a map from observations gathered over time and space. We distinguish between large-scale space, with spatial structure larger than the agent's sensory horizon, and small-scale space, with structure within the sensory horizon. We propose a factored approach to mobile robot map-building that handles qualitatively different types of uncertainty by combining the strengths of topological and metrical approaches. Our framework is based on a computational model of the human cognitive map; thus it allows robust navigation and communication within several different spatial ontologies. Our approach factors the mapping problem into natural sub-goals: building a metrical representation for local small-scale spaces; finding a topological map that represents the qualitative structure of large-scale space; and (when necessary) constructing a metrical representation for large-scale space using the skeleton provided by the topological map. The core contributions of this thesis are a formal description of the Hybrid Spatial Semantic Hierarchy (HSSH), a framework for both small-scale and large-scale representations of space, and an implementation of the HSSH that allows a robot to ground the large-scale concepts of place and path in a metrical model of the local surround.
Given metrical models of the robot's local surround, we argue that places at decision points in the world can be grounded by the use of a primitive called a gateway. Gateways separate different regions in space and have a natural description at intersections and in doorways. We provide an algorithmic definition of gateways, a theory of how they contribute to the description of paths and places, and practical uses of gateways in spatial mapping and learning.

Item: Design and development of a modular robot for research use (2010-05)
Paine, Nicholas Arden; Vishwanath, Sriram; Valvano, Jonathan W.

This report summarizes the work performed for the design and development of the Proteus research robot. The Proteus design is motivated by the need for a modular, flexible, and usable autonomous robotic platform. To accomplish these goals, a modular hardware architecture coupled with low-power, high-computation processing is presented. The robot is subdivided into three layers: mobility, computation, and application. The interface between layers is characterized by well-defined APIs, and each layer may be individually replaced to achieve different functionality. An efficient low-level event scheduler is described, along with higher-level software algorithms for motion control and navigation. Experiments with Proteus robots are presented, including field tests and collaboration with outside research institutions.

Item: Introspective perception for mobile robots (2023-02-28)
Rabiee, Sadegh; Biswas, Joydeep (Assistant professor); Stone, Peter; Zhu, Yuke; Zilberstein, Shlomo

Perception algorithms that provide estimates of their uncertainty are crucial to the development of autonomous robots that can operate in challenging and uncontrolled environments. Such perception algorithms provide the means for having risk-aware robots that reason about the probability of successfully completing a task when planning.
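As a toy illustration of the general idea of an empirical uncertainty model, a robot might log (input feature, observed error) pairs whose error labels come from some automatic cross-check rather than human annotation, and then predict the expected error for new inputs. The k-nearest-neighbor regressor and the feature choice below are illustrative assumptions, not the design described in the thesis.

```python
import math

class IntrospectionModel:
    """Empirical error model for a perception algorithm: given a feature
    vector describing the input (e.g., an image texture score), predict
    the expected error of the algorithm's output.  Training labels are
    assumed to come from autonomous supervision (e.g., cross-checking
    redundant sensors), not from human annotation."""

    def __init__(self, k=3):
        self.k = k
        self.samples = []               # (feature_vector, observed_error)

    def record(self, feature, error):
        """Log one autonomously supervised (input feature, error) pair."""
        self.samples.append((feature, abs(error)))

    def predict_error(self, feature):
        """k-nearest-neighbor estimate of expected error for a new input."""
        dists = sorted((math.dist(feature, f), e) for f, e in self.samples)
        nearest = dists[: self.k]
        return sum(e for _, e in nearest) / len(nearest)
```

A planner could then query `predict_error` for candidate observations and prefer plans that route the robot through inputs with low expected perception error.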
There exist perception algorithms that come with models of their uncertainty; however, these models are often developed under assumptions, such as perfect data associations, that do not hold in the real world. Hence, the resulting estimated uncertainty is only a weak lower bound. To tackle this problem, we present introspective perception -- a novel approach for predicting accurate estimates of the uncertainty of perception algorithms deployed on mobile robots. By exploiting sensing redundancy and consistency constraints naturally present in the data collected by a mobile robot, introspective perception learns an empirical model of the error distribution of perception algorithms in the deployment environment, in an autonomously supervised manner. In this thesis, we present the general theory of introspective perception and demonstrate successful implementations for two different perception tasks. We provide empirical results on challenging real-robot data for introspective stereo depth estimation and introspective visual simultaneous localization and mapping, and show that both learn to predict their uncertainty with high accuracy. We also present a framework for integrating introspective perception with robot path planning algorithms. This framework enables the robot to leverage the accurate estimates of perception uncertainty to reason about the probability of successfully completing a plan in novel deployment environments, hence reducing task execution failures.

Item: Reflexive state awareness within mobile robots through sensor fusion and qualitative reasoning (2004-12-18)
Weas, Abram Damon; Campbell, Matthew I.

The overall goal of the project described in this thesis is to discover and implement a methodology that provides the groundwork for robotic systems to comprehend large amounts of sensory data in real time (i.e., reflexive state awareness), termed a Reflexive Sensor Fusion System (RSFS).
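A minimal sketch of the kind of pattern-classification pipeline evaluated in this work: reduce the dimensionality of the sensor data with Principal Component Analysis, then label a query with the Nearest Neighbor Rule in the reduced space. The features, labels, and function names below are placeholders.

```python
import numpy as np

def pca_fit(X, n_components=2):
    """Principal Component Analysis via SVD: returns the sample mean and
    the top principal directions of the (n_samples, n_features) matrix X."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(X, mean, components):
    """Project data into the reduced principal-component space."""
    return (X - mean) @ components.T

def nn_classify(query, train_proj, train_labels):
    """Nearest Neighbor Rule: return the label of the closest
    training sample in the reduced space."""
    d = np.linalg.norm(train_proj - query, axis=1)
    return train_labels[int(np.argmin(d))]
```

In the thesis's setting, the training samples would be sensor snapshots labeled with the qualitative states ("Q-States") defined for the system, so the classifier's output is a real-time qualitative description of the robot's state.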
This is done by creating a new form of qualitative description of the system's state, termed a "Q-State." The goals of this paper are to:

- Verify the usability of the pattern classification techniques by testing multi-degree-of-freedom, simple dynamic systems within the software test-bed.
- Develop an overall methodology for the implementation of physical test-beds.

Within this thesis, these goals were accomplished by implementing a software test-bed that tests the ability of pattern classification methods, including Bayesian classification and the Nearest Neighbor Rule, and two dimensionality-reduction methods, Principal Component Analysis and Multiple Discriminant Analysis, to correctly recognize the Q-States defined for a simulated bicycle and rider and for a mobile robot modeled after the Mars Pathfinder rover, Sojourner. Finally, the results, showing high classification rates, are discussed along with suggestions for moving forward with physical implementation.

Item: Robust structure-based autonomous color learning on a mobile robot (2007)
Sridharan, Mohan; Kuipers, Benjamin; Stone, Peter, 1971-

Mobile robots are increasingly finding application in fields as diverse as medicine, surveillance and navigation. In order to operate in the real world, robots are primarily dependent on sensory information, but the ability to accurately sense the real world is still missing. Though visual input in the form of color images from a camera is a rich source of information for mobile robots, until recently most researchers have focused their attention on other sensors, such as laser, sonar and tactile sensors. There are several reasons for this reliance on other, relatively low-bandwidth sensors. Most sophisticated vision algorithms require substantial computational (and memory) resources and assume a stationary or slow-moving camera, while many mobile robot systems and embedded systems are characterized by rapid camera motion and real-time operation within constrained computational resources.
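The color-segmentation capability at the core of this color-learning work can be illustrated with a deliberately minimal nearest-centroid color model: learn one mean color per known label from pixels sampled off objects of known color, then assign each new pixel the label of the nearest centroid. The thesis's actual hybrid representation is far richer; treat this purely as a sketch with illustrative names and data.

```python
def learn_color_models(labeled_pixels):
    """Learn one mean color per label from pixels sampled off known
    objects (the environment's known structure tells the robot which
    object, hence which color label, it is looking at)."""
    sums, counts = {}, {}
    for label, (c1, c2, c3) in labeled_pixels:
        s = sums.setdefault(label, [0.0, 0.0, 0.0])
        s[0] += c1; s[1] += c2; s[2] += c3
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def segment(pixel, models):
    """Label a pixel with the nearest learned color centroid.  The
    channels could be RGB, YCbCr, or LAB; squared Euclidean distance
    treats them uniformly, a simplification of the thesis's models."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(models, key=lambda lab: d2(pixel, models[lab]))
```

Relearning the centroids from freshly sampled pixels is one simple way a robot could adapt such a model when the illumination changes.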
In addition, color cameras require time-consuming manual color calibration, which is sensitive to illumination changes, while mobile robots typically need to be deployed in a short period of time and often go into places with changing illumination. It is commonly asserted that in order to achieve autonomous behavior, an agent must learn to deal with unexpected environmental conditions. However, for true extended autonomy, an agent must be able to recognize when to abandon its current model in favor of learning a new one, how to learn in its current situation, and also what features or representation to learn. This thesis is a fully implemented example of such autonomy in the context of color learning and segmentation, which primarily leverages the fact that many mobile robot applications involve a structured environment consisting of objects of unique shape(s) and color(s) -- information that can be exploited to overcome the challenges mentioned above. The main contributions of this thesis are as follows. First, the thesis presents a hybrid color representation that enables color learning both within constrained lab settings and in un-engineered indoor corridors, i.e., it enables the robot to decide what to learn. The second main contribution is to enable a mobile robot to exploit the known structure of its environment to significantly reduce human involvement in the color calibration process. The known positions, shapes and color labels of the objects of interest are used by the robot to autonomously plan an action sequence that facilitates learning, i.e., it decides how to learn. The third main contribution is a novel representation of illumination, which enables the robot to detect and adapt smoothly to a range of illumination changes without any prior knowledge of the different illuminations, i.e., the robot figures out when to learn.
Fourth, as a means of testing the proposed algorithms, the thesis provides a real-time mobile robot vision system that performs color segmentation, object recognition and line detection in the presence of rapid camera motion. In addition, a practical comparison of color spaces for robot vision -- YCbCr, RGB and LAB -- is performed. The baseline system initially requires manual color calibration and constant illumination, but with the proposed innovations it becomes a self-contained mobile robot vision system that enables a robot to exploit the inherent structure of its environment, plan a motion sequence for learning the desired colors, and detect and adapt to illumination changes, with minimal human supervision.

Item: Task encoding, motion planning and intelligent control using qualitative models (2007)
Ramamoorthy, Subramanian; Kuipers, Benjamin

This dissertation addresses the problem of trajectory generation for dynamical robots operating in unstructured environments in the absence of detailed models of the dynamics of the environment or of the robot itself. We factor this problem into the subproblem of task variation and the subproblem of imprecision in models of dynamics.