Browsing by Subject "Natural language"
Now showing 1 - 4 of 4
Item: Exploring how natural language reflects individual and group social dynamics (2021-12-02)
Seraj, Sarah; Pennebaker, James W.; Swann, William B.; Gosling, Samuel D.; Durland, Mike

Language can be a window into people's thoughts, feelings, and life experiences. With the increasing use of online communication platforms, researchers now have more avenues to study people's life events using real-time and real-world data. My dissertation attempts to identify the most important language markers for understanding people's cognitive, social, and emotional lives. What are the signs in language that predict a distressing life event, and how do people cope with it in the months afterward? After identifying a large group of Reddit users (N = 6,813) who had gone through emotional upheavals such as breakups, divorce, or other distressing life events, we tracked their language in the months before, during, and after the upheaval (Chapter 2). In 2020, the world faced a global crisis: the COVID-19 pandemic. In the US, the pandemic was followed by the killing of George Floyd at the hands of police and the Black Lives Matter (BLM) protests of summer 2020, a time of national reckoning on police brutality. These two events naturally raised the question of how people deal with collective upheavals compared with a personal crisis like a breakup, and how the context of the pandemic (social isolation, lockdowns) affected people's response to the BLM movement. Would the two upheavals interact with each other in any way? A large-scale Reddit dataset (33.7 million posts, 1.37 million users) was used to study the two upheavals (Chapter 3). After identifying important language markers that help us understand people's psychological state during personal and collective upheavals, we wanted to see whether the same markers were important for understanding social dynamics outside the context of upheavals. Members of a single work team were recruited to hold a series of one-on-one chats with everyone else on the team (N = 27; 198 conversations), and the language markers that predict successful conversations were identified (Chapter 4). The final chapter brings together the insights from the three studies and highlights the contribution of each.

Item: Semantic interpretation with distributional analysis (2012-05)
Glass, Michael Robert; Barker, Ken, 1959-; Porter, Bruce, 1956-; Mooney, Ray; Erk, Katrin; Dhillon, Inderjit

Unstructured text contains a wealth of knowledge; however, it is in a form unsuitable for reasoning. Semantic interpretation is the task of processing natural language text to create or extend a coherent, formal knowledge base able to reason and support question answering. This task involves entity, event, and relation extraction, co-reference resolution, and inference. Many domains, from intelligence data to bioinformatics, would benefit from semantic interpretation, but traditional approaches to the subtasks typically require a large annotated corpus specific to a single domain and ontology. This dissertation describes an approach to rapidly train a semantic interpreter using a set of seed annotations and a large, unlabeled corpus. Our approach adapts methods from paraphrase acquisition and automatic thesaurus construction to extend seed syntactic-to-semantic mappings using an automatically gathered, domain-specific parallel corpus. During interpretation, the system uses joint probabilistic inference to select the most probable interpretation consistent with the background knowledge. We evaluate both the quality of the extended mappings and the performance of the semantic interpreter.
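As a rough illustration of the distributional idea behind this work, the sketch below extends a seed lexical-to-semantic mapping to distributionally similar verbs using cosine similarity over co-occurrence vectors, in the spirit of automatic thesaurus construction. The toy corpus, the Acquire concept, and the similarity threshold are all invented for illustration; the dissertation's actual system operates over a parsed, domain-specific parallel corpus and uses joint probabilistic inference rather than a fixed cutoff.

```python
# Toy sketch: propose new syntactic-to-semantic mappings for words whose
# co-occurrence contexts resemble those of a seed word. All data and the
# threshold are hypothetical.
from collections import Counter
from math import sqrt

corpus = [
    "the company acquired the startup last year",
    "the firm purchased the startup for cash",
    "the company bought a rival firm",
    "investors sold their shares in the firm",
]

def context_vector(word, window=2):
    """Count words co-occurring with `word` within +/- `window` tokens."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for t in tokens[lo:hi] if t != word)
    return counts

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Seed mapping: the verb "acquired" maps to a hypothetical Acquire concept.
seed_mappings = {"acquired": "Acquire"}

# Propose the same semantic mapping for distributionally similar verbs.
candidates = ["purchased", "bought", "sold"]
for verb in candidates:
    for seed, concept in seed_mappings.items():
        sim = cosine(context_vector(verb), context_vector(seed))
        if sim > 0.5:  # hypothetical similarity threshold
            print(f"propose mapping: {verb} -> {concept} (cosine = {sim:.2f})")
```

On this toy corpus, "purchased" and "bought" share contexts with "acquired" and are proposed for the Acquire mapping, while "sold" is not; a real system would score candidates probabilistically instead of thresholding.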
Item: Using natural language to aid task specification in sequential decision making problems (2022-09-12)
Goyal, Prasoon; Mooney, Raymond J. (Raymond Joseph); Niekum, Scott David; Stone, Peter; Artzi, Yoav

Building intelligent agents that can help humans accomplish everyday tasks, such as a personal robot at home or a robot in a work environment, is a long-standing goal of artificial intelligence. One requirement for such general-purpose agents is the ability to teach them new tasks or skills relatively easily. Common approaches to teaching agents new skills include reinforcement learning (RL) and imitation learning (IL). However, specifying the task to the learning agent, i.e., designing effective reward functions for RL and providing demonstrations for IL, is often cumbersome and time-consuming. Further, designing reward functions and providing a set of demonstrations that sufficiently disambiguates the desired task may not be accessible to end users without a technical background. In this dissertation, we explore using natural language as an auxiliary signal to aid task specification, which reduces the burden on the end user. To make reward design easier, we propose a novel framework that generates language-based rewards in addition to the extrinsic rewards from the environment for faster policy training with RL. We show that, using our framework, very simple extrinsic rewards along with a natural language description of the task are sufficient to teach new tasks to the learning agent. To ameliorate the problem of providing demonstrations, we propose a new setting that enables an agent to learn a new task without demonstrations in an IL setting, given a demonstration of a related task and a natural language description of the difference between the desired task and the demonstrated task. The techniques we develop for this setting would enable teaching multiple related tasks to learning agents by providing a small set of demonstrations and several natural language descriptions, thereby reducing the burden of providing demonstrations for each task. The primary contributions of this dissertation include novel problem settings, benchmarks, and algorithms that allow natural language to be used as an auxiliary modality for task specification in RL and IL. We believe this dissertation will serve as a foundation for future research along these lines, making progress toward intelligent agents that can conveniently be taught new tasks by end users.
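A minimal sketch of the general shaping idea, assuming a Gym-style environment interface: a wrapper adds a weighted language-based reward, produced by some learned relevance scorer, to the environment's extrinsic reward at every step. The `relevance_model`, the `weight` coefficient, and the interface are hypothetical stand-ins, not the dissertation's actual framework.

```python
# Sketch of language-shaped reward for RL. The environment is assumed to
# follow a classic Gym-style step() returning (obs, reward, done, info);
# `relevance_model` is a hypothetical scorer of (trajectory, text) pairs.
class LanguageRewardWrapper:
    def __init__(self, env, task_description, relevance_model, weight=0.1):
        self.env = env
        self.task_description = task_description  # natural language task spec
        self.relevance_model = relevance_model    # hypothetical learned scorer
        self.weight = weight                      # hypothetical shaping coefficient
        self.trajectory = []

    def reset(self):
        self.trajectory = []
        return self.env.reset()

    def step(self, action):
        obs, extrinsic_reward, done, info = self.env.step(action)
        self.trajectory.append((obs, action))
        # Language-based reward: how well does the behavior so far match
        # the natural language description of the task?
        language_reward = self.relevance_model.score(
            self.trajectory, self.task_description
        )
        shaped_reward = extrinsic_reward + self.weight * language_reward
        return obs, shaped_reward, done, info
```

The point of the design is that the extrinsic reward can stay very simple (e.g., sparse task completion) while the language-based term provides a denser training signal.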
Item: What is the best automated metric for text to motion generation? (2023-04-25)
Voas, Jordan Guy; Mooney, Raymond J. (Raymond Joseph)

There is growing interest in generating skeleton-based human motions from natural language descriptions. While most efforts have focused on developing better neural architectures for this task, there has been no significant work on determining the proper evaluation metric. Human evaluation is the ultimate accuracy measure for this task, and automated metrics should correlate well with human quality judgments. Since a description is compatible with many motions, determining the right metric is critical both for evaluating models and for designing meaningful training losses to supervise generative models. This paper systematically studies which metrics best align with human evaluations and proposes new metrics that align even better. Our findings indicate that none of the metrics currently used for this task shows even a moderate correlation with human judgments at the sample level. However, for assessing average model performance, commonly used metrics such as R-Precision and rarely used coordinate errors show strong correlations. Several recently developed metrics are not recommended due to their low correlation compared with alternatives. Additionally, we propose multiple novel metrics that exhibit improved correlation and show potential for future use.
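To make the two levels of analysis concrete, the sketch below computes a metric's sample-level correlation with human ratings (over individual generated motions) and its system-level correlation (over per-model averages), the distinction the findings above turn on. All numbers, and the grouping into three models, are made up for illustration; a real study would use far more samples, models, and raters.

```python
# Sketch: sample-level vs. system-level correlation between an automated
# metric and human quality judgments. Data here is entirely hypothetical.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical human ratings and metric scores for 9 generated motions,
# produced by 3 models (3 samples each).
human = np.array([3.1, 4.0, 2.2, 3.5, 4.2, 3.8, 2.9, 4.5, 3.3])
metric = np.array([0.42, 0.55, 0.30, 0.61, 0.58, 0.49, 0.37, 0.70, 0.44])
model_of_sample = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

# Sample level: does the metric rank individual outputs like humans do?
rho_sample, _ = spearmanr(human, metric)

# System level: does the metric rank *models* like humans do, after
# averaging each model's scores?
human_by_model = [human[model_of_sample == m].mean() for m in (0, 1, 2)]
metric_by_model = [metric[model_of_sample == m].mean() for m in (0, 1, 2)]
rho_system, _ = spearmanr(human_by_model, metric_by_model)

print(f"sample-level Spearman rho: {rho_sample:.2f}")
print(f"system-level Spearman rho: {rho_system:.2f}")
```

A metric can score well at the system level while being nearly uninformative at the sample level, which is exactly the pattern the paper reports for currently used metrics.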