Browsing by Subject "AI"
Now showing 1 - 20 of 23
Item: Adapting to unseen driving conditions using context-aware neural networks
Abdulquddos, Suhaib; Miikkulainen, Risto; Tutum, Cem C

One of the primary inhibitors to the successful deployment of autonomous agents in real-world tasks such as driving is their poor ability to adapt to unseen conditions. Whereas a human might be able to deduce the best course of action when confronted with an unfamiliar set of conditions based on past experiences, artificial agents have difficulty performing in conditions significantly different from those in which they were trained. This thesis explores an approach in which a context module is added to a neural network to overcome the challenge of adapting to unseen conditions during evaluation. The approach is tested in the CARLA simulator, wherein the torque and steering curves of a vehicle are modified during training and evaluation. Furthermore, the agent is trained only on a track with a relatively large radius of curvature but is evaluated on a track with much sharper turns, so it must learn to adapt its speed and steering during evaluation. Three neural network architectures are used for these experiments and their performances compared: Context+Skill, Context only, and Skill only. When both the performance and the safety of the agent's behavior are considered, the Context+Skill network consistently outperforms both the Skill-only and the Context-only architectures. The results presented in this thesis indicate that the context-aware approach is a promising step toward solving the generalization problem in the autonomous vehicle domain. Furthermore, this research presents a framework for comparing the generalization capabilities of various network architectures and approaches.
It is posited that the Context+Skill neural network has the potential to advance the field of machine learning with regard to generalization in domains beyond autonomous driving; that is, in any domain where awareness of changing environment parameters can have a positive impact on performance.

Item: AI's Inflection Point (The Texas Scientist, 2021)
Airhart, Marc

Item: Allies and Artificial Intelligence: Obstacles to Operations and Decision-Making (Texas National Security Review, Spring 2020)
Lin-Greenberg, Erik

Item: Arms Control for Artificial Intelligence (Texas National Security Review, Spring 2023)
Lamberth, Megan; Scharre, Paul

Item: Artificial Intelligence, International Competition, and the Balance of Power (Texas National Security Review, May 2018)
Horowitz, Michael C.

Item: Assured decision-making for autonomous systems (2021-08-10)
Bharadwaj, Sudarshanan; Topcu, Ufuk; Neogi, Natasha; Stone, Peter H; Tanaka, Takashi; Clarke, John-Paul

As autonomous systems become more widely used in society, they will necessarily have to make more decisions in order to meet increasingly complex objectives. However, to facilitate greater deployment of autonomous systems, especially in safety-critical contexts, it is crucial to guarantee that the decisions made by these systems will be safe and will achieve the desired objective. This dissertation studies techniques for assuring decision-making in complex and large-scale autonomous systems. It uses synthesis techniques from the field of formal methods to provide guarantees of correctness with respect to specifications given in temporal logic. Synthesis methods often suffer from scalability issues that limit their applicability to realistic systems. To address this issue, the dissertation provides abstraction methods and decentralized synthesis architectures that offer guarantees in systems with partial information as well as large numbers of interacting agents.
The dissertation's systematic approach to assured decision-making is agnostic to the implementation details of the autonomous systems. Such an approach avoids having to assure systems case by case and will facilitate the certification and deployment of autonomous systems in more application areas. Finally, the dissertation illustrates this concept in traffic management for urban air mobility operations and provides a synthesis architecture that can adapt to changing specifications or vehicle capabilities.

Item: Automated domain analysis and transfer learning in general game playing (2010-08)
Kuhlmann, Gregory John; Stone, Peter, 1971-; Lifschitz, Vladimir; Mooney, Raymond J.; Porter, Bruce W.; Schaeffer, Jonathan

Creating programs that can play games such as chess, checkers, and backgammon at a high level has long been a challenge and benchmark for AI. Computer game playing is arguably one of AI's biggest success stories. Several game playing systems developed in the past, such as Deep Blue, Chinook, and TD-Gammon, have demonstrated competitive play against top human players. However, such systems are limited in that each plays only one particular game and typically must be supplied with game-specific knowledge. While their performance is impressive, it is difficult to determine whether their success is due to generally applicable techniques or to the human game analysis behind them. A general game player is an agent capable of taking as input a description of a game's rules and proceeding to play without any subsequent human input. In doing so, the agent, rather than the human designer, is responsible for the domain analysis. Developing such a system requires the integration of several AI components, including theorem proving, feature discovery, heuristic search, and machine learning. In the general game playing scenario, the player agent is supplied with a game's rules in a formal language prior to match play.
This thesis contributes a collection of general methods for analyzing these game descriptions to improve performance. Prior work on automated domain analysis has focused on generating heuristic evaluation functions for use in search. The thesis builds upon this work by introducing a novel feature generation method, along with a method for generating and comparing simple evaluation functions based on these features. It then describes how more sophisticated evaluation functions can be generated through learning. Finally, the thesis demonstrates the utility of domain analysis in facilitating knowledge transfer between games for improved learning speed. The contributions are fully implemented, with empirical results, in a general game playing system.

Item: Debunking the AI Arms Race Theory (Texas National Security Review, Summer 2021)
Scharre, Paul

Item: Deepfake technology and the future of public trust in video (2023-08)
Verma, Nitin; Fleischmann, Kenneth R.; Strover, Sharon L; Arif, Ahmer; Gwizdka, Jacek

Will deepfake technology break public trust in video? Deepfakes, highly realistic digital images created using deep learning technology, have raised concerns among scholars, policymakers, public interest groups, and journalists. Scholars and media pundits alike have rushed to call deepfake technology an 'epistemic threat' and have rekindled concerns about the beginning of a 'post-truth' era. However, at present there are only a handful of data-driven investigations of the technology's potential to reshape society's relationship with visual media. In this dissertation, I identify the centrality of video in the contemporary information ecology and then present findings from a study that investigated how people conceptualize trust in video, how they justify their trust in the videos they consume, and how they perceive deepfake technology's effect on their relationship with video as a medium.
To understand people's perceptions of deepfake technology, I posed three research questions: RQ1, how will deepfake technology impact people's trust in video?; RQ2, what are people's perspectives on the societal implications of deepfake technology?; and RQ3, what are people's perspectives on how society should respond to deepfake technology? To answer these questions, I conducted a qualitative study in which I interviewed 33 individuals from two key societal stakeholder groups: 21 members of the general population and 12 visual journalists. Using reflexive thematic analysis of the interview data, I report several themes that summarize people's perspectives on trust in video, the impact of deepfake technology, and how society ought to respond to the technology's anticipated impacts. I conclude the dissertation by juxtaposing the findings from the interview data with existing literature on trust and on the anticipated impact of deepfake technology, and I then describe the theoretical implications in relation to the literature, the implications of the research for practice, and recommendations for future research. This dissertation enriches our understanding of the problem space of anticipating the ramifications of deepfake technology for a culture that privileges visual information.

Item: Designing AI Technologies that Benefit Society (The Medium, 2018-09-24)
Fleischmann, Kenneth

Item: Edison to AI: Intellectual Property in AI-Driven Drug R&D (2023-05)
Turner, Zak

In 2019, the inventor Stephen Thaler filed patent applications on two inventions in which he listed his AI program, Device for the Autonomous Bootstrapping of Unified Sentience (DABUS), as the inventor. The US Patent and Trademark Office rejected Thaler's applications and, in 2022, a US Federal Court of Appeals upheld this decision on the grounds that an inventor must be a human being.
Although this decision is perhaps consistent with the law, a refusal to patent AI inventions could have negative consequences for innovation. This thesis examines the question of IP policy toward AI inventions through the prism of pharmaceutical drugs. The central question is: how should AI-designed drugs be treated by US IP law? Two smaller questions are involved. First, who, if anyone, should be recognized as the inventor of inventions created by AI? Second, is there a justification for IP rights in AI inventions? In attempting to answer these questions, this paper focuses on two AI-driven pharmaceutical companies, Insilico and Recursion. I then compare the data and models from the two firms against the arguments made regarding patent policy for AI inventions in three scholarly works. My conclusion is that extending FDA market exclusivity privileges to AI-produced drugs is preferable to extending patent protections.

Item: Effects of Increased Broadband Access and Bandwidth Capacity on Information Nationalization and Voter Partisan Affect, 2014-2018 (2024-08)
Shears, Dylan R.; Sparrow, Bartholomew H.

This thesis examines how increased broadband access and bandwidth capacity influenced the nationalization of information in the 2014 to 2018 U.S. House elections. Expanding on Trussler's (2021) analysis, this research incorporates bandwidth speeds to assess the effects of internet access on electoral outcomes. The study hypothesizes that expanded broadband access and higher speeds increase voter exposure to national political information, which reduces the incumbency advantage and increases straight-ticket voting, and that broadband speed is a better proxy for nationalization than the change in the number of broadband providers alone.
Panel linear modeling of the changes in broadband providers and bandwidth speeds across congressional districts over the 2014-2018 election cycles determined that higher internet speeds contributed to election nationalization by facilitating access to partisan content. The research then examines how these trends have led to a more polarized electorate via the current architecture and incentives of internet platforms: YouTube's recommendation system potentially directing users to extremist content, a reframing of echo chambers on Facebook, and the rise of disinformation on X/Twitter resulting in increased partisan affect online.

Item: Ethical Artificial Intelligence is Focus of New Robotics Program (UT News, 2021-09-09)
News, UT

Item: Future information professionals' perspectives on the impact of AI on the future of their profession (2020-08-11)
Li, Lan, M.S. in Information Studies; Fleischmann, Kenneth R.

The field of library and information science (LIS) has always shaped and been shaped by the changing tides of technology. Given that recent developments in AI appear to be the next technological wave, one that may bring major disruption to the future of information professionals, how are students in ALA-accredited master's programs reacting to these changes? This paper reports findings from interviews with students pursuing librarianship and archival studies about their educational experiences with artificial intelligence (AI) and their expectations about how AI will impact their future careers as librarians and archivists. Key themes that emerge from this analysis include structural and professional changes in libraries and archives, the loss of human elements in libraries and archives, and the ethical challenges of AI in libraries and archives.
Recommendations based on these findings include ways to adapt LIS education to better prepare students, such as developing courses that combine the technical aspects of AI with content and context relevant to libraries and archives, so that librarians and archivists can take active roles in leveraging AI to better serve patrons' information needs and wants and to preserve the past.

Item: Humans vs AI: Robot Soccer and Gran Turismo (2024-04-19)
Stone, Peter

Advancements in AI have unleashed astonishing capabilities, but it is not magic. Peter Stone shares his insights into cutting-edge AI and robotics and explores how they may reshape our world. Someday these technologies could win the World Cup; they are already outperforming the best humans at complex tasks such as high-speed racing.

Item: Our Fear of AI: Exploring Its Creators and Creations in Fiction (2020-05)
Kothare-Arora, Maya

The idea of technological creation has proliferated across fiction for the last century. As the world becomes increasingly technologically advanced, these fears have become more tangible. With the rise of artificial intelligence in particular, from Alexa to self-driving cars, comes a rise in the fear of what intelligent creations might lead to. For AI to continue growing and adding value to society, experts must contend with the apprehension surrounding it. While these conversations are already occurring, they generally focus on fear of the machine itself. This thesis argues that fear of the creators and regulators of AI, not just of the machine, heavily influences the fear of AI as a field.
It examines three AI takeover narratives, "With Folded Hands", Do Androids Dream of Electric Sheep?, and Ex Machina, in order to analyze the fears surrounding technology creators in conjunction with the influence of the societal events and systems of their times.

Item: Planning Forum Volume 19 (University of Texas at Austin, 2023)
Liu, Haijing; Jafar, Rahanat Ara; Hoque, Farhana; Chowdhury, Shahriar Sakib; Nipun, Wahida Rahman; Pérez, Keila Zarí; Poulsen, Mathias; Pérez-Quiñones, Katherine A.; Hendawy, Mennatullah; Sivakumar, Siddharth; Fahami, Mahlaqa; Sorto, Juan Antonio; Losoya, Jorge Antonio; Bashar, Samira; Levine, Kaylyn

Item: Temporal-Logic-Based Reward Shaping for Continuing Reinforcement Learning Tasks (University of Texas at Austin, 2021-02)
Jiang, Yuqian; Bharadwaj, Suda; Wu, Bo; Shah, Rishi; Topcu, Ufuk; Stone, Peter

Item: The ethics of care and participatory design: a situated exploration (2023-04-21)
Kravchenko, Elizaveta; Fleischmann, Kenneth R.

This thesis provides suggestions to an early-stage start-up on integrating ethical perspectives throughout the design process. By critically considering the core tenets of feminist care ethics and participatory design, the project establishes a foundation of synergies and methodological overlaps between the ethical theory and the design process. This understanding is then used to propose a framework emphasizing contextual relationality between project stakeholders for integration within a design. The framework is applied to identify opportunities for support in the prototyping of Camp Cura, an AI-powered mobile application intended to help young adults self-manage asthma symptoms. Through shadowing the initial design team's work, areas of particular caution within the design are identified.
The thesis culminates in tailored identification of opportunities for ethical framework integration and areas for project improvement.

Item: Theory and techniques for synthesizing efficient breadth-first search algorithms (2012-08)
Nedunuri, Srinivas; Cook, William Randall; Batory, Don; Baxter, Ira; Pingali, Keshav; Smith, Douglas R.

The development of efficient algorithms to solve a wide variety of combinatorial and planning problems is a significant achievement in computer science. Traditionally, each algorithm is developed individually, based on flashes of insight or experience, and then (optionally) verified for correctness. While computer science has formalized the analysis and verification of algorithms, the process of algorithm development remains largely ad hoc. This ad-hoc nature is especially limiting when developing algorithms for a family of related problems. Guided program synthesis is an existing methodology for the systematic development of algorithms. Specific algorithms are viewed as instances of very general algorithm schemas. For example, the Global Search schema generalizes traditional branch-and-bound search and includes both depth-first and breadth-first strategies. Algorithm development involves systematic specialization of the algorithm schema based on problem-specific constraints to create efficient algorithms that are correct by construction, obviating the need for a separate verification step. Guided program synthesis has been applied to a wide range of algorithms, but there is still no systematic process for the synthesis of large search programs such as AI planners. Our first contribution is the specialization of Global Search to a class we call Efficient Breadth-First Search (EBFS), which incorporates dominance relations to constrain the size of the search frontier to be polynomially bounded.
Dominance relations allow two search spaces to be compared to determine whether one dominates the other, in which case the dominated space can be eliminated from the search. We further show that EBFS is an effective characterization of greedy algorithms when the breadth bound is set to one. Surprisingly, the resulting characterization is more general than the well-known characterization of greedy algorithms, namely the greedy algorithm parametrized over the algebraic structures called greedoids. Our second contribution is a methodology for systematically deriving dominance relations, not just for individual problems but for families of related problems. The techniques are illustrated on numerous well-known problems; combining them with the program schema for EBFS yields efficient greedy algorithms. Our third contribution is the application of the theory and methodology to the practical problem of synthesizing fast planners. Nearly all state-of-the-art planners in the planning literature are heuristic domain-independent planners. They generally do not scale well, and their space requirements can become prohibitive. Planners such as TLPlan that incorporate domain-specific information in the form of control rules are orders of magnitude faster. However, devising the control rules is a labor-intensive task that requires domain expertise and insight, and the correctness of the rules is not guaranteed. We introduce a method by which domain-specific dominance relations can be systematically derived and then turned into control rules, and we demonstrate the method on a planning problem (Logistics).
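The dominance-relation idea described in the abstract above can be made concrete with a small sketch. The following example is illustrative only, not code from the thesis: it runs a level-by-level (breadth-first) search over include/exclude decisions for a 0/1 knapsack, where a partial solution that is at least as heavy and no more valuable than another can never lead to a strictly better complete solution, so it is dominated and pruned from the frontier.

```python
def bfs_with_dominance(items, capacity):
    """Breadth-first search over 0/1 knapsack decisions with dominance pruning.

    items: list of (weight, value) pairs; capacity: max total weight.
    A frontier state a = (weight_a, value_a) dominates b = (weight_b, value_b)
    if weight_a <= weight_b and value_a >= value_b, so b is dropped.
    """
    frontier = [(0, 0)]  # partial solutions as (weight, value)
    for weight, value in items:
        # Expand one level: each state branches on skipping or taking the item.
        expanded = set(frontier)
        for w, v in frontier:
            if w + weight <= capacity:
                expanded.add((w + weight, v + value))
        # Prune dominated states: scan in order of increasing weight and keep
        # only states whose value strictly improves on every lighter state.
        frontier = []
        best_value = -1
        for w, v in sorted(expanded, key=lambda s: (s[0], -s[1])):
            if v > best_value:
                frontier.append((w, v))
                best_value = v
    return max(v for _, v in frontier)
```

With dominance pruning, the frontier holds only the Pareto-optimal (weight, value) pairs instead of all 2^n decision prefixes, which is the frontier-bounding effect the abstract describes; for example, `bfs_with_dominance([(2, 3), (3, 4), (4, 5), (5, 8)], 9)` returns 13.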