Browsing by Subject "Artificial Intelligence"
Now showing 1 - 10 of 10
Item: Automated domain analysis and transfer learning in general game playing (2010-08)
Kuhlmann, Gregory John; Stone, Peter, 1971-; Lifschitz, Vladimir; Mooney, Raymond J.; Porter, Bruce W.; Schaeffer, Jonathan

Creating programs that can play games such as chess, checkers, and backgammon at a high level has long been a challenge and benchmark for AI. Computer game playing is arguably one of AI's biggest success stories. Several game-playing systems developed in the past, such as Deep Blue, Chinook, and TD-Gammon, have demonstrated competitive play against top human players. However, such systems are limited in that they play only one particular game, and they typically must be supplied with game-specific knowledge. While their performance is impressive, it is difficult to determine whether their success is due to generally applicable techniques or to human game analysis. A general game player is an agent capable of taking as input a description of a game's rules and proceeding to play without any subsequent human input. In doing so, the agent, rather than the human designer, is responsible for the domain analysis. Developing such a system requires the integration of several AI components, including theorem proving, feature discovery, heuristic search, and machine learning. In the general game playing scenario, the player agent is supplied with a game's rules in a formal language prior to match play. This thesis contributes a collection of general methods for analyzing these game descriptions to improve performance. Prior work on automated domain analysis has focused on generating heuristic evaluation functions for use in search. The thesis builds upon this work by introducing a novel feature generation method. I also introduce a method for generating and comparing simple evaluation functions based on these features, and I describe how more sophisticated evaluation functions can be generated through learning. Finally, this thesis demonstrates the utility of domain analysis in facilitating knowledge transfer between games for improved learning speed. The contributions are fully implemented, with empirical results, in the general game playing system.

Item: A Broader, More Inclusive Definition of AI (Journal of Artificial General Intelligence, 2020)
Stone, Peter

Item: Designing Human-AI Partnerships to Combat Misinformation (2020)
Lease, Matt

Item: Expected Value of Communication for Planning in Ad Hoc Teamwork (The University of Texas at Austin, 2021-02)
Macke, William; Mirsky, Reuth; Stone, Peter

Item: Good Systems, a UT Grand Challenge: Socially Responsible AI (Participatory Design Conference, 2020-05-25)
Fleischmann, Ken

Item: Government's AI principles overlook two important issues (The Hill, 2020-02-18)
Stone, Peter

Item: A Study on Ethical Data Management, Publication, and Use (Open Repositories Conference, 2021-04)
Esteva, Maria; Strover, Sharon L.; Park, Soyoung; Rossbach, Christopher; Thywissen, John

Item: The Importance of Multi-Dimensional Intersectionality in Algorithmic Fairness and AI Model Development (2023-05)
Mickel, Jennifer; De-Arteaga, Maria; Peterson, Tina

People are increasingly interacting with artificial intelligence (AI) systems and algorithms, but oftentimes these models are embedded with unfair biases. These biases can lead to harm when an AI system's output is implicitly or explicitly racist, sexist, or derogatory. If the output is offensive to the person interacting with it, it can cause emotional harm that may manifest physically. Alternatively, if a person agrees with the model's output, the person's negative biases may be reinforced, inciting the person to engage in discriminatory behavior.
Researchers have recognized the harm AI systems can cause, and they have worked to develop fairness definitions and methodologies for mitigating unfair biases in machine learning models. Unfortunately, these definitions (which are typically binary) and methodologies are insufficient to prevent AI models from learning unfair biases. To address this, fairness definitions and methodologies must account for intersectional identities in multicultural contexts. The limited scope of current fairness definitions allows models to develop biases against people with intersectional identities that those definitions do not account for. Existing frameworks and methodologies for model development are grounded in the US cultural context, which may be insufficient for fair model development in other cultural contexts. To help machine learning practitioners understand the intersectional groups affected by their models, a database should be constructed detailing the intersectional identities, cultural contexts, and relevant model domains in which people may be affected. This can lead to fairer model development, as machine learning practitioners will be better equipped to test their models' performance on intersectional groups.