Browsing by Subject "Risk"
Now showing 1 - 20 of 29
Item A cross-sectional pilot study in adolescents to evaluate determinants of health regarding e-cigarette, or vaping, product use (2021-11-19)
Gilmore, Bretton Alex; Frei, Christopher R.; Reveles, Kelly R.; Koeller, Jim M.; Flores, Bertha E. "Penny"; Spoor, Jodi H.
Purpose. It is estimated that five million United States adolescents vape.¹⁻³ Throughout the literature, assumptions have been made regarding adolescents’ vaping knowledge, attitudes, and beliefs. Gaps exist in establishing evidence that adolescents believe vaping is the same thing as cigarette smoking. This study evaluated adolescent vaping to 1.) tabulate the number of respondents self-reporting to vape regularly, 2.) gauge the age of initiation, 3.) identify trends in attitudes and beliefs regarding the health of common activities of daily living, 4.) consider perceptions of vaping equivalence to smoking cigarettes or using traditional tobacco products, and 5.) assess reporting of vaping-associated negative health outcomes. Methods. A cross-sectional study design was created, and a novel electronic survey was developed to gather anonymous data via Google Forms. The survey included a Variable Activity Perception Evaluation (VAPE) Scale and direct questions related to vaping. The instrument was circulated among students aged 12 to 20 years enrolled at 10 greater San Antonio area schools (six middle and four high schools) across three districts over a 90-day period. Responses came in from 11 schools across six districts. Descriptive and comparative statistics, including nonparametric methods (e.g., Chi-square, Kruskal-Wallis, and Wilcoxon Rank Sum), were used. Results. Eligible respondents’ (N=267) mean age was 16 (SD=1.6) years. Females (61%) predominantly made up the sample. Seven percent (N=264) reported they vaped regularly and 20% (N=245) had tried vaping, with the majority of those experimenting by age 16.
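The comparative tests this abstract names (Chi-square, Kruskal-Wallis, Wilcoxon Rank Sum) can be illustrated with a short hedged sketch; the data below are invented for illustration and are not the study's.

```python
# Hypothetical illustration of the nonparametric comparisons described above.
# All counts and ratings are invented, not the study's data.
from scipy import stats

# Chi-square: vaping status (rows) vs. a yes/no belief item (columns)
table = [[12, 7],    # vapers:     yes, no
         [98, 150]]  # non-vapers: yes, no
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Wilcoxon rank-sum (Mann-Whitney form): VAPE Scale ratings, two groups
vapers = [4, 5, 3, 4, 5, 4]
non_vapers = [2, 3, 2, 1, 3, 2, 3, 2]
u, p_mw = stats.mannwhitneyu(vapers, non_vapers)

# Kruskal-Wallis: the same idea extended to three or more groups
h, p_kw = stats.kruskal([1, 2, 2], [2, 3, 3], [4, 4, 5])
print(p_chi, p_mw, p_kw)
```

Each test returns a statistic and a p-value; in a study like this one, the p-value would be compared against the chosen significance level for each VAPE Scale prompt.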
There were statistically significant differences on 14 of 40 VAPE Scale prompts (35%) when comparing vapers to non-vapers. Respondents reported that they did not think vaping was the same as smoking (63% “No”) or traditional tobacco use (38% “No”, 21% “Maybe”). Ninety percent (N=17) of those who vaped reported experiencing negative health outcomes, as indicated by greater than or equal to one sign or symptom. Conclusions. Texas adolescents reported similar vaping trends when compared to national samples. However, Texas adolescents did not view vaping as cigarette smoking or traditional tobacco use. Perceptions of the health of routine activities of daily living might be predictive of future vape initiation and use. Prospective studies should be designed to evaluate negative health outcomes and implications associated with vaping.

Item A decision-based approach to establish non-informative prior sample space for decision-making in geotechnical applications (2022-12-21)
Feng, Kai (Ph.D. in civil engineering); Gilbert, Robert B. (Robert Bruce), 1965-; Lake, Larry W.; Rathje, Ellen M.; Nadim, Farrokh; Boyles, Stephen
Bayes’ theorem is widely adopted for risk-informed decision-making in natural hazards (which often have limited data), but the prior sample space based on existing methods may lead to inconsistent, irrational, and indefensible results. Therefore, Decision Entropy Theory (DET) is under development to improve the assessment of small probabilities when limited information is available for the non-informative prior sample space in Bayesian decision-making. The key idea in establishing a non-informative prior sample space with DET is that the value of new information should be as uncertain as possible; that is, the entropy of the new information is maximized.
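DET itself involves a specialized entropy maximization, but the value-of-information ideas it builds on can be illustrated generically. The sketch below is not DET: it is a plain prior decision analysis with the expected value of perfect information (EVPI), and every state, probability, and cost in it is invented.

```python
# Generic prior decision analysis (not DET itself): two states of nature,
# two actions, and the expected value of perfect information (EVPI).
# All numbers are hypothetical.
p_fail = 0.1                      # prior probability the hazard occurs
costs = {                         # costs[action][state]
    "mitigate": {"fail": 10, "ok": 10},   # fixed mitigation cost
    "accept":   {"fail": 100, "ok": 0},   # large loss only if hazard occurs
}

def expected_cost(action, p):
    return p * costs[action]["fail"] + (1 - p) * costs[action]["ok"]

# Prior (no-information) decision: pick the cheaper action in expectation
prior_cost = min(expected_cost(a, p_fail) for a in costs)

# With perfect information we act optimally in each state, then average
perfect_cost = (p_fail * min(c["fail"] for c in costs.values())
                + (1 - p_fail) * min(c["ok"] for c in costs.values()))

evpi = prior_cost - perfect_cost  # here: 10 - 1 = 9
print(prior_cost, perfect_cost, evpi)
```

In DET the non-informative prior is chosen so that quantities like this value of information are as uncertain (high-entropy) as possible, rather than being fixed by an assumed prior as above.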
The mathematical formulation includes prior decision analysis, by maximizing the relative entropy of the value of perfect information, and pre-posterior decision analysis, by maximizing the relative entropy of the value of imperfect information given each value of perfect information. The goals of this research are to (1) apply the theory to simple problems to demonstrate and study its rigorous implementation, evaluate possible approximations to reduce the computational effort required to implement it rigorously, and develop insight into the results; (2) propose and characterize likelihood functions to represent subjective judgment for small-probability events in the decision analysis; and (3) demonstrate the application of the theory to real-world case histories. From this research, the following conclusions are drawn: (1) results of illustrative decision analysis examples show that the non-informative prior probabilities obtained from DET are sensible and address concerns that have been raised about other approaches to establishing non-informative prior probabilities that do not consider their impact on decision making; moreover, the DET-based non-informative prior is invariant to transformations of uncertain variables because it depends on the decisions rather than on how the states of nature are defined; (2) an approximation to the rigorous DET reduces the computational effort considerably (by many orders of magnitude) and provides reasonable results for the prior decision and the value of perfect information, but is less able to approximate the value of imperfect information; (3) likelihood functions proposed for fractional occurrence models with the Binomial, Poisson, and Multinomial distributions have a maximum at the estimated fraction of occurrences and a Fisher information quantity that is inversely proportional to the estimated fraction and proportional to the length of the record used to estimate the fraction; and (4) the non-informative prior
probabilities obtained with DET for the dam case history provide useful insight into the potential impacts of not making assumptions beyond what is actually known. When uncertainty in the frequency of overtopping and in the chance of dam failure given overtopping (the fragility) are included, the decision to rehabilitate the dam is justified at a cost of dam breach more than 100 times smaller than when this uncertainty is neglected, and more than 10 times smaller than when uncertainty in the hazard but not the fragility is neglected. In addition, the maximum value of obtaining additional information about the frequencies of the hazard and the fragility is 35% of the cost of rehabilitation. The theory will be advanced in the future by developing more efficient algorithms that optimize the time and space complexity of the numerical implementation of DET and by applying it to more complicated and realistic problems.

Item Am I in danger here? Incorporating organizational communication into an extended model of risk information seeking at work (2016-05)
Ford, Jessica Lynn Isabel; Stephens, Keri K.; Barbour, Josh; Donovan, Erin; Kahlor, LeeAnn
There is a notable deficiency in the organizational communication literature on the topic of risk information seeking (Real, 2008), given that 3.7 million nonfatal occupational injuries occurred in 2013 (U.S. Bureau of Labor Statistics, 2014). Previous research on organizational communication addressing health and safety at work tends to focus on employee attitudes toward risk (Real, 2008) or on the discursive emergence of safety in the workplace (Zoller, 2003), while overlooking how organizational-level constructs, such as information-seeking norms and safety information availability, influence employees’ search for risk information. In general, communication scholarship on this subject is fragmented and lacks a representative model accounting for both individual and organizational influences on risk information seeking behaviors.
In light of the frequency of on-the-job injuries and fatalities, this dissertation calls attention to the lack of research by organizational communication scholars on employee risk information seeking within high-reliability organizations (HROs). Using quantitative survey data from a large oil refinery, this dissertation expands the Planned Risk Information Seeking Model (PRISM; Kahlor, 2010) to (a) include organizational-level variables and (b) account for the information seeking sources and strategies used by employees. Originally, the goal of Kahlor’s (2010) PRISM was to integrate the relationships from well-known health information seeking models to build a model of risk information seeking independent of any health context. However, to fully capture the various constraints (power, control, status) that either encourage or avert employees’ risk information seeking attempts, this dissertation alters Kahlor’s PRISM. This dissertation offers a set of theoretically driven hypotheses and research questions to assess the explanatory value of the extended PRISM, named the Organizational Planned Risk Information Seeking Model (O-PRISM). Structural equation modeling tests conducted in Analysis of Moment Structures (AMOS) reveal that the O-PRISM accounts for 62% of the variance in risk information seeking behaviors; follow-up testing showed that Kahlor’s original PRISM explained only 34% of that variance. In addition to answering Real’s (2010) call for “health-related organizational communication” research concerning occupational safety (p. 457), the findings from this study offer insight for safety personnel tasked with encouraging risk information seeking. First and foremost, this study encourages high-reliability organizations to consider how organizational norms are communicated both formally and informally.
The results also provide evidence that employee risk perceptions are a poor motivator of information seeking behaviors. Lastly, from a theoretical perspective, the present study provokes a discussion about the added value of model adaptations for organizational studies.

Item The application of systems engineering to a Space-based Solar Power Technology Demonstration Mission (2012-05)
Chemouni Bach, Julien; Fowler, Wallace T.; Guerra, Lisa A.
This thesis presents an end-to-end example of systems engineering through the development of a Space-based Solar Power Satellite (SSPS) technology demonstration mission. As part of a higher-education effort by NASA to promote systems engineering in the undergraduate classroom, the purpose of this thesis is to provide an educational resource for faculty and students. NASA systems engineering processes are tailored and applied to the development of a conceptual mission in order to demonstrate the role of systems engineering in the definition of an aerospace mission. The motivation for choosing the SSPS concept is twofold. First, as a renewable energy concept, space-based solar power is a relevant topic in today's world. Second, previous SSPS studies have largely focused on developing full-scale concepts and lack a formalized systems engineering approach. The development of an SSPS technology demonstration mission allows for an emphasis on determining mission, and overall concept, feasibility in terms of technical needs and risks. These are assessed through a formalized systems engineering approach defined as an early concept or feasibility study, typical of Pre-Phase A activities. An architecture is developed from a mission scope, involving the following trade studies: power beam type, power beam frequency, transmitter type, solar array, and satellite orbit. Then a system hierarchy, interfaces, and requirements are constructed, and cost and risk analyses are performed.
The results indicate that the SSPS concept is still technologically immature and that further concept studies and analyses are required before it can be implemented even at the technology demonstration level. This effort should focus largely on raising the technological maturity of key systems, including structure, deployment mechanisms, power management and distribution, and thermal systems. These results, and the process of reaching them, demonstrate the importance and value of systems engineering in determining mission feasibility early in the project lifecycle.

Item Applying Classification and Regression Trees to manage financial risk (2012-05)
Martin, Stephen Fredrick; Scott, James (Statistician); Carvalho, Carlos M.; Marti, Nathan C.
The goal of this project is to develop a set of business rules to mitigate risk related to a specific financial decision within the prepaid debit card industry. Under certain circumstances, issuers of prepaid debit cards may need to decide whether funds on hold can be released early for use by card holders prior to the final transaction settlement. After a brief introduction to the prepaid card industry and the financial risk associated with the early release of funds on hold, the paper presents the motivation to apply the CART (Classification and Regression Trees) method. The paper provides a tutorial on the CART algorithms formally developed by Breiman, Friedman, Olshen, and Stone in the monograph Classification and Regression Trees (1984), as well as a detailed explanation of the R programming code to implement the RPART function (Therneau 2010). Special attention is given to parameter selection and the process of finding an optimal solution that balances complexity against predictive classification accuracy, measured against an independent data set through a cross-validation process.
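The core CART step the tutorial describes, choosing the binary split that most reduces Gini impurity, can be sketched in a few lines. This is a toy illustration only: the feature, labels, and numbers below are invented, and a real rpart fit additionally grows a full tree and prunes it by cross-validated complexity cost.

```python
# Toy illustration of the core CART step: picking the split on one feature
# that minimizes weighted Gini impurity. Data are hypothetical.
def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = labels.count(1) / n
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2

def best_split(xs, ys):
    """Return (threshold, weighted impurity) of the best split x <= t."""
    best = (None, gini(ys))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if w < best[1]:
            best = (t, w)
    return best

# e.g. a hypothetical "hold amount" vs. "loss occurred" label
xs = [50, 80, 120, 200, 400, 650]
ys = [0, 0, 0, 1, 1, 1]
print(best_split(xs, ys))  # a perfect split exists at 120: (120, 0.0)
```

CART applies this search recursively to grow the tree; rpart's complexity parameter (`cp`) then trades tree size against cross-validated accuracy, as the paper's parameter-selection discussion describes.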
Lastly, the paper presents an analysis of the financial risk mitigation based on the resulting business rules.

Item Debris-covered glaciers : modeling ablation and flood hazards in the Nepal Himalaya (2016-05)
Rounce, David Robert; McKinney, Daene C.; Maidment, David R.; Hodges, Ben R.; Catania, Ginny A.; Yang, Zong-liang
Debris-covered glaciers are ubiquitous in the Nepal Himalaya; they significantly alter the glaciers' response to climate change and have large implications for the development of glacial lakes. The thickness of the debris is largely heterogeneous over the course of the glacier, promoting ablation in areas of thin debris and retarding it in areas of thick debris. The debris thickness typically increases toward the snout of the glacier but can be difficult to measure, as field measurements are time-consuming and laborious. This body of work uses satellite imagery in conjunction with a debris-covered glacier energy balance model to estimate the spatial variations in debris thickness for glaciers in the Everest region of Nepal. Sub-debris ablation rates may be computed using the same energy balance model, but this requires detailed information about the properties of the debris and the surface processes. Detailed field data were collected over the 2014 melt season on Imja-Lhotse Shar Glacier to estimate many of the debris properties. These data were also used to model sub-debris ablation rates to develop an understanding of the critical properties (i.e., thermal conductivity, albedo, and surface roughness) and processes (i.e., accounting for the latent heat flux) required to accurately model the impact of the debris. The heterogeneous debris cover often causes higher melt rates upglacier, which diminishes the glacier’s topographic gradient, thereby promoting glacier stagnation and the development of glacial lakes.
These glacial lakes form behind terminal moraines composed of soil and loose boulders that are susceptible to failure, causing a glacial lake outburst flood (GLOF). GLOFs can have devastating impacts on infrastructure and communities located downstream; however, assessing the risks associated with these floods has traditionally required detailed field campaigns that are difficult to perform because these glacial lakes are located in remote areas at high altitudes. This body of work develops a holistic hazard assessment using solely remotely sensed data to objectively characterize the threat of a GLOF. The assessment provides valuable information concerning potential GLOF triggers that may be used to direct future field campaigns and, ultimately, the management actions associated with these glacial lakes.

Item Decomposition of multiple attribute preference models (2013-12)
He, Ying, active 2013; Dyer, James S.
This dissertation consists of three research papers on preference models of decision making, all of which adopt an axiomatic approach in which preference conditions are studied so that the models can be verified by checking their conditions at the behavioral level. The first paper, “Utility Functions Representing Preference over Interdependent Attributes,” studies the problem of how to assess a two-attribute utility function when the attributes are interdependent. We consider a situation where the risk aversion on one attribute could be influenced by the level of the other attribute in a two-attribute decision making problem. In this case, the multilinear utility model (and its special cases, the additive and multiplicative forms) cannot be applied to assess a subject’s preference because utility independence does not hold. We propose a family of preference conditions called nth degree discrete distribution independence that can accommodate a variety of dependencies between two attributes.
The special case of second degree discrete distribution independence is equivalent to the utility independence condition. Third degree discrete distribution independence leads to a decomposition formula that contains many other decomposition formulas in the existing literature as special cases. Because the decompositions proposed in this research are more general than many existing ones, the study provides a model of preference with the potential to assess utility functions more accurately and with relatively little additional effort. The second paper, “On the Axiomatization of the Satiation and Habit Formation Utility Models,” studies the axiomatic foundations of the discounted utility model that incorporates both satiation and habit formation in temporal decisions. We propose a preference condition called shifted difference independence to axiomatize a general habit formation and satiation model (GHS). This model allows for a general habit formation and satiation function that contains many functional forms in the literature as special cases. Since the GHS model can be reduced to either a general satiation model (GSa) or a general habit formation model (GHa), our theory also provides approaches to axiomatizing both the GSa and GHa models. Furthermore, by adding extra preference conditions to our axiomatization framework, we obtain a GHS model with a linear habit formation function and a recursively defined linear satiation function. In the third paper, “Hope, Dread, Disappointment, and Elation from Anticipation in Decision Making,” we propose a model that incorporates both anticipation and disappointment into decision making, where hope is defined as anticipating a gain and dread as anticipating a loss. In this model, the anticipation for a lottery is a subjectively chosen outcome that influences the decision maker’s reference point.
The decision maker experiences elation or disappointment when she compares the received outcome with the anticipated outcome. This model captures the trade-off between a utility gain from higher anticipation and a utility loss from higher disappointment. We show that our model contains some existing decision models, including disappointment models, as special cases. We also use our model to explore how a person’s attitude toward the future, either optimistic or pessimistic, could mediate the wealth effect on her risk attitude. Finally, we show that our model can be applied to explain the coexistence of a demand for gambling and insurance, and it provides unique insights into portfolio choice and advertising decision problems.

Item Detection of CO2 leakage in overlaying aquifers using time lapse compressibility monitoring (12th Annual Conference on Carbon Capture Utilization & Sequestration, 2013-05-13)
Hosseini, Seyyed Abolfazi; Zeidouni, Mehdi

Item Essays on credit risk (2005)
Tang, Yongjun; Titman, Sheridan; Yan, Hong
This dissertation examines the determinants of credit spreads. Its purpose and contribution are to provide a more comprehensive and coherent view of credit risk valuation. Specifically, I examine the effects of previously overlooked factors (in addition to conventional factors such as market financing costs, firm leverage, and firm risk) on credit risk using credit default swap (CDS) rates, which better reflect the associated credit risk. I undertake this study through both theoretical exploration and empirical examination. On the theoretical front, I present a structural credit risk model that explicitly considers both macroeconomic conditions and firm fundamentals.
I show that the model predicts more appropriate levels of credit spreads across all credit rating classes than existing structural models and produces the empirically observed upward-sloping term structure of credit spreads for high-yield bonds that most other models fail to explain. On the empirical front, I capitalize on the advantage of CDS spreads as a better measure of credit risk than other existing measures. Using this measure, I first test and verify some of the model’s predictions, namely that both macroeconomic conditions and firm characteristics have significant effects on credit spreads. The most notable finding is that credit spreads increase with investor sentiment. The second part of my empirical investigation examines the role of imperfect information in the CDS market. Using several proxies for transparency (especially analyst forecast dispersion), I find that credit spreads decrease with transparency, but this effect is most pronounced for issuers with low disclosure costs. I also find significant liquidity effects and illiquidity spillover in the CDS market, contrary to conventional wisdom.

Item Evaluating, risking, and ranking carbon sequestration buoyant traps with application to nearshore Gulf of Mexico (2022-05-16)
Laidlaw, Madeleine C.; Bump, Alexander P.; Hovorka, Susan D. (Susan Davis); Peel, Frank J.
It is critical to streamline investment into CCS projects to reduce the concentration of greenhouse gases in the atmosphere and the impacts of climate change. The Gulf of Mexico is a prime location for developing CCS projects due to its vast geologic carbon storage potential, proximity to concentrated CO₂ emissions, and coincidence with an experienced hydrocarbon industry that can lend its expertise to this young field.
The petroleum industry uses prospect inventories (catalogues of discovery opportunities) to identify potential projects, quantify their associated risks, and rank them so as to maximize the potential of high-quality investments (Lottaroli et al. 2018). This work improves upon an existing prospect inventory, considering only buoyant traps in the Miocene section in state and nearshore federal waters of offshore Texas and Louisiana (the TexLa Dataset), by incorporating fault seal as a trapping mechanism and including lithological and petrophysical data from a comprehensive 3D geologic model. The result was larger, amalgamated buoyant trap prospects that are more likely to support development projects due to their size and continuity. The usefulness of a prospect inventory is realized when the subsurface and above-ground risks associated with each buoyant trap prospect are quantified, allowing prospects to be ranked by suitability for project development. Quantifying risk differentiates prospects, highlights those of greatest promise, and allows developers to make more informed choices about which projects to pursue. Subsurface risk was evaluated by considering structural trap, confining zone, well leak, capacity, and injectivity risk. Above-ground risk was examined by looking at the political and permitting conditions in the region and building a sequestration discounted-cash-flow valuation model to quantify each prospect's economic potential. This work shows that it is possible to risk and rank CCS prospects using commonly available data and quantitative, repeatable workflows that can be applied anywhere in the world. The final useful output of this work is a ranked list of the buoyant trap storage opportunities within the Miocene section of the TexLa region. Broad risk rankings were conducted using Common Risk Segment (CRS) maps.
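The risk-weighted ranking this item describes can be illustrated with a minimal expected-monetary-value (EMV) sketch: each prospect's risked value is its probability-weighted net present value. The prospects, probabilities, and dollar figures below are entirely hypothetical, not values from the TexLa inventory.

```python
# Hypothetical EMV comparison for two invented prospects, in $MM.
# EMV = P(success) * NPV(success) + P(failure) * NPV(failure)
prospects = {
    "Trap A": {"p_success": 0.60, "npv_success": 120.0, "npv_failure": -15.0},
    "Trap B": {"p_success": 0.85, "npv_success": 40.0,  "npv_failure": -5.0},
}

def emv(p):
    return (p["p_success"] * p["npv_success"]
            + (1 - p["p_success"]) * p["npv_failure"])

# Rank prospects by risked value, highest first
ranked = sorted(prospects, key=lambda k: emv(prospects[k]), reverse=True)
for name in ranked:
    print(f"{name}: EMV = {emv(prospects[name]):.2f} $MM")
```

Note how a riskier prospect (lower success probability) can still outrank a safer one when its success-case value is large enough; this is exactly the differentiation the inventory's risk-weighted values are meant to expose.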
More differentiated risk-weighted values for the prospects were calculated using Expected Monetary Value ($MM).

Item Exceptional soybeans : genetically modified soybeans in Argentina and international environmental governance (2015-05)
Smith, Geneva Montana Leader; Lentz, Erin C.; Smith, Lindsay
On the margins of the first and developing worlds, Argentines have made many bids to enter the small cohort of international power players, with varying degrees of success. This report steps back from rumors and suspicions surrounding Argentina to draw attention instead to the economic growth and international political clout gained through state and private-industry support of agricultural biotechnology. Perhaps more important than the revenue generated through exports of a lucrative crop, this transformative technology, namely in the form of genetically modified (GM) soybeans, has given Argentine political and industry elites the means to establish credibility in the international community. In turn, this has aided the Argentine state in negotiating the contours of its sovereignty on its own terms. GM soybeans have indeed become a stalwart of Argentine economic growth and a means to gain respect from the international community. There is still controversy, however, surrounding the regulation of GM soybean production and its uncertain effects on those whose livelihoods depend on its continued adoption. Given that the jury is still out on the long-term effects of agricultural biotechnology production on soil quality, health and human safety, and rural job opportunities, the domestic effect of Argentine exceptionalism deployed for international purposes is troublingly unclear.
An exploration of Argentine exceptionalism in relation to a shifting yet always hybrid political economy, and of some of its contradictions via two case studies, is a first step toward discovering how transitions to agricultural biotechnology affect the lives and livelihoods of Argentines at home.

Item Exploring protective factors in school and home contexts for economically disadvantaged students in the middle school (2012-05)
Okilwa, Nathern S. A.; Holme, Jennifer Jellison; Reyes, Pedro; Yates, James; Saenz, Victor; Crosnoe, Robert
The purpose of this study was to explore the experiences of middle school students, particularly focusing on the academic achievement of economically disadvantaged students. Existing data show an increasing cohort of school children experiencing poverty, either short or long term. For poor middle school students, the risk of school failure is amplified by the general risks associated with the middle school transition and early adolescent development. The cumulative nature of these risks is often associated with undesirable school outcomes, including grade retention, behavior problems, absenteeism, delinquency, teenage pregnancy, school dropout, fewer years of schooling, and lower academic achievement. However, there is evidence that some students succeed in spite of adversity, which is often attributed to protective factors present in the students’ own immediate environment: school, home, and community. This study therefore examined the relationship between two potential protective factors, parent involvement and school belonging, and student achievement. Previous research has established that parent involvement and school belonging are both associated with positive school outcomes, including academic motivation, self-efficacy, internal locus of control, pro-social and on-task behavior, school engagement, educational aspirations and expectations, and better academic achievement.
Consequently, this study examined three main questions: (a) How is parental involvement associated with academic achievement for economically disadvantaged eighth grade students? (b) How is school belonging associated with academic achievement for economically disadvantaged eighth grade students? (c) Do the relations between parent involvement, school belonging, and eighth grade achievement vary as a function of prior achievement and middle school? To answer these research questions, this study used nationally representative longitudinal data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998/99 (ECLS-K). The findings showed that when parent involvement and school belonging were considered together, the association between parent involvement and student achievement diminished, while school belonging consistently emerged as a significant predictor of achievement. However, this study also established that students’ prior achievement was the single strongest factor explaining achievement for poor eighth grade students.

Item How NFL Teams Make Risky Decisions (2023-04)
Kardesch, Carter
Data analytics has begun to take over the world of sports, and coaches are making educated game-time decisions based on these advancements. However, leagues such as the National Football League still do not allow coaches to access real-time analytics. This raises the question: how do NFL teams make risky decisions? While there are studies that apply this question to kickoff strategies and to injury risk for players, studies of actual play calling have yet to be published. Furthermore, the concept of risk differs from sport to sport. Because it rests so heavily on uncertainty, play calling in football is actually transformed into a subset of game theory.
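The study's approach, regressing a binary risky call (going for it on 4th down) on situational factors, can be sketched with a bare-bones logistic regression. Everything below is invented for illustration: the single feature (yards to go) and the tiny data set stand in for the many game, season, team, and coach factors a real model would include.

```python
# Minimal logistic regression of a binary risky choice on one invented
# feature. Data are hypothetical, not from the study.
import math

# (yards_to_go, went_for_it): short distances mostly "go", long mostly not
data = [(1, 1), (1, 1), (2, 1), (2, 0), (3, 1),
        (4, 0), (5, 1), (6, 0), (8, 0), (10, 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit intercept b0 and slope b1 by gradient ascent on the log-likelihood
b0, b1, lr = 0.0, 0.0, 0.01
for _ in range(20000):
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in data)
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in data)
    b0 += lr * g0
    b1 += lr * g1

# The fitted slope should be negative: longer distance, lower "go" odds
print(b1 < 0, sigmoid(b0 + b1 * 1), sigmoid(b0 + b1 * 10))
```

After fitting, the significant coefficients (here just the slope's sign and size) indicate which factors play the largest role in the decision, which is the inferential step the study describes.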
Even if the majority of decisions are grounded in game theory, a myriad of factors could still affect the ultimate choice, even if coaches are unaware of them. These can be broken into factors of the actual game, the season, the team, or the coach who oversees the play calling. These factors can then be run in a logistic regression against an example of a risky decision in football; the one chosen in this study was going for it on 4th down. After establishing which factors are significant, a final predictive model can be created that displays what plays the largest role in delicate game-time situations. Finally, this model can hopefully provide a framework for companies outside of sports, such as those in finance, to evaluate different investments and projects.

Item Identity Trust Framework for iGaming (2017-12-06)
George, Michael David; Barber, Suzanne
The online gambling community, or iGaming industry, in the United States has individual solutions and a mix of classic processes to manage universal customer identity, but it lacks a standard identity management framework for enrolling new iGaming users, monitoring those users, and ensuring secure transactions, which leaves it open to identity theft and financial fraud. The iGaming industry offers online poker, sports betting, and casino table games. iGaming providers (providers) include companies such as PartyPoker.com, Pokerstars.com, Bovada.com, and BetOnline.com, among others. An iGaming player (player) is anyone who gambles on games through the Internet.
This report focuses on the requirements and specification for an Identity Trust Framework to enhance security and privacy for the United States iGaming industry and its players.

Item Measures of risk in economics and finance (1988)
Laurence, Antoine, 1965-; Not available

Item Mobility and environmental intimacy in Italian volcanic zones (2019-12-05)
McQuaid, Megan Louise; Sturm, Circe, 1967-
This thesis explores human and environmental movement and mobility in various Italian volcanic zones. Places and sites are typically thought of as stable, locatable in a specific location, pin-pointable; places are not generally considered “mobile.” Stromboli, Italy, and other volcanic sites force the ethnographer to reconcile a certain tension between movement and place. Volcanic sites are worlds that are materially and socially constituted through movement. How tectonic plates move creates volcanic activity, how lava moves up and out of the volcano transforms the landscape, and how people move to, from, around, through, up, and down the volcano creates a volcanic social world. How do humans navigate this environment, and how does the environment agentially present itself as a force to be circumnavigated? Movement and mobility serve as a framework for theorizing human social relations with the environment and other non-humans. Thinking through mobility captures the unique limits and affordances that volcanic environments offer to their human, plant, and animal residents. Scholars differ on whether we can call a landscape “alive,” “lively,” or “vibrant.” This thesis argues that the answer to this question is based in observations about movement: that we can, in fact, locate agential capability in the way that a subject moves.
The ability to move is the condition for agency.

Item NewsFerret : supporting identity risk identification and analysis through text mining of news stories (2013-05) Golden, Ryan Christian; Barber, Suzanne
Individuals, organizations, and devices are now interconnected to an unprecedented degree. This has forced identity risk analysts to redefine what "identity" means in such a context and to explore new techniques for analyzing an ever-expanding threat context. Major hurdles to modeling in this field include the inherent lack of publicly available data due to privacy and safety concerns, as well as the unstructured nature of incident reports. To address this, the report develops a system for strengthening an identity risk model using the text mining of news stories. The system, called NewsFerret, collects and analyzes news stories on the topic of identity theft, establishes semantic relatedness measures between identity concept pairs, and supports analysis of those measures through reports, visualizations, and relevant news stories. Evaluation of the resulting analytical models shows where the system is effective in assisting the risk analyst to expand and validate identity risk models.

Item Optimization of production allocation under price uncertainty : relating price model assumptions to decisions (2011-08) Bukhari, Abdulwahab Abdullatif; Jablonowski, Christopher J.; Lasdon, Leon S.; Dyer, James S.
Allocating production volumes across a portfolio of producing assets is a complex optimization problem. Each producing asset possesses different technical attributes (e.g., crude type), facility constraints, and costs. In addition, there are corporate objectives and constraints (e.g., contract delivery requirements). While complex, such a problem can be specified and solved using conventional deterministic optimization methods. However, there is often uncertainty in many of the inputs, and in these cases the appropriate approach is neither obvious nor straightforward.
One of the major uncertainties in the oil and gas industry is the commodity price assumption(s). This paper investigates this problem in three major sections: (1) we specify an integrated stochastic optimization model that solves for the optimal production allocation for a portfolio of producing assets when there is uncertainty in commodity prices, (2) we then compare the solutions that result when different price models are used, and (3) we perform a value of information analysis to estimate the value of more accurate price models. The results show that the optimum production allocation is a function of the price model assumptions. However, the differences between models are minor, and thus the value of choosing the "correct" price model, or similarly of estimating a more accurate model, is small. This work falls in the emerging research area of decision-oriented assessments of information value.

Item The prevalence and risks of injury for masters athletes : current findings (2015-05) Baker, Jeffrey Robinson; Tanaka, Hirofumi, Ph. D.; Castelli, Darla
Regular physical activity and exercise are important clinical tools that can be used to improve our health. This is especially true given the prolonged lifespan of the average adult and the declines in physical function that are attributed to advancing age. Those functional detriments can be controlled or reversed via regular exercise, and as a result, the growth of competitive sports targeted to the elderly is on the rise. These events have created generations of Masters athletes. However, continued growth of and successful participation in these competitions may be limited by an unfounded belief that the risk of sports injury increases as we age. This notion is not supported by the available scientific literature: the preponderance of epidemiological evidence demonstrates no age-associated increase in injury for Masters athletes.
This remains true even when the research has focused on specific injury types, such as connective tissue injuries. To unequivocally answer the question of whether elderly athletes are at a high risk of injury, future research will need to provide more rigorous controls over activity levels and training status, as both of these variables likely confound the conclusions that can currently be drawn when comparing young and old athletes. It will also be beneficial to specifically study the association between altered muscle function, age, and injury. This association has not been addressed within the Masters athlete population, but could provide potent insight into the aging process of habitual exercisers.

Item Probabilistic bicriteria models : sampling methodologies and solution strategies (2010-08) Rengarajan, Tara; Morton, David P.; Hasenbein, John J.; Kutanoglu, Erhan; Muthuraman, Kumar; Popova, Elmira
Many complex systems involve simultaneous optimization of two or more criteria, with uncertainty of system parameters being a key driver in decision making. In this thesis, we consider probabilistic bicriteria models in which we seek to operate a system reliably while keeping operating costs low. High reliability translates into low risk of uncertain events that can adversely impact the system. In bicriteria decision making, a good solution must, at the very least, have the property that the criteria cannot both be improved relative to it. The problem of identifying a broad spectrum of such solutions can be highly involved, with no analytical or robust numerical techniques readily available, particularly when the system involves nontrivial stochastics. This thesis serves as a step toward addressing this issue. We show how to construct, using Monte Carlo sampling, approximate solutions that are sufficiently close to optimal, easily calculable, and subject to a low margin of error.
Our approximations can be used in bicriteria decision making across several domains that involve significant risk, such as finance, logistics, and revenue management. As a first approach, we place a premium on a low risk threshold and examine the effects of a sampling technique that guarantees a prespecified upper bound on risk. Our model incorporates a novel construct in the form of an uncertain disrupting event whose time and magnitude of occurrence are both random. We show that stratifying the sample observations in an optimal way can yield savings of a high order. We also demonstrate the existence of generalized stratification techniques which enjoy this property and which can be used without full distributional knowledge of the parameters that govern the time of disruption. Our work thus provides a computationally tractable approach for solving a wide range of bicriteria models via sampling with a probabilistic guarantee on risk. Improved proximity to the efficient frontier is illustrated in the context of a perishable inventory problem. In contrast to this approach, we next aim to solve a bicriteria facility sizing model, in which risk is the probability that the system fails to jointly satisfy a vector-valued random demand. Here, instead of seeking a probabilistic guarantee on risk, we seek to approximate well the efficient frontier for a range of risk levels of interest. Replacing the risk measure with an empirical measure induced by a random sample, we proceed to solve a family of parametric chance-constrained and cost-constrained models. These two sampling-based approximations differ substantially in terms of what is known regarding their asymptotic behavior, their computational tractability, and even their feasibility as compared to the underlying "true" family of models.
We establish, however, that in the bicriteria setting we have the freedom to employ either the chance-constrained or the cost-constrained family of models, improving our ability to characterize the quality of the efficient frontiers arising from these sampling-based approximations and to solve the approximating model itself. Our computational results reinforce the need for such flexibility and enable us to understand the behavior of confidence bounds for the efficient frontier. As a final step, we further study the efficient frontier in the cost-versus-risk tradeoff for the facility sizing model in the special case in which the (cumulative) distribution function of the underlying demand vector is concave in a region defined by a highly reliable system. In this case, the "true" efficient frontier is convex. We show that the convex hull of the efficient frontier of a sampling-based approximation: (i) can be computed in strongly polynomial time by relying on a reformulation as a max-flow problem via the well-studied selection problem; and (ii) converges uniformly to the true efficient frontier when the latter is convex. We conclude with numerical studies that demonstrate the aforementioned properties.
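The convex hull of a sampled efficient frontier, as described in this last abstract, can be illustrated with a minimal sketch. This is not the thesis's max-flow algorithm; it is a simple stand-in (with hypothetical data and helper names) showing the two objects involved: the non-dominated (cost, risk) points of a sample, and the lower convex hull of that empirical frontier.

```python
def pareto_frontier(points):
    """Keep (cost, risk) pairs not dominated by any other point,
    i.e. no other point is at least as good in both criteria."""
    frontier = [
        p for p in points
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
    ]
    return sorted(set(frontier))

def lower_convex_hull(points):
    """Lower hull via Andrew's monotone chain on points sorted by cost."""
    def cross(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    hull = []
    for p in sorted(set(points)):
        # Pop points that would make the chain turn clockwise (or be collinear).
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# Hypothetical sampled (cost, risk) pairs for a highly reliable system.
samples = [(1, 5), (2, 3), (3, 4), (3, 2), (4, 1), (5, 2)]
frontier = pareto_frontier(samples)   # -> [(1, 5), (2, 3), (3, 2), (4, 1)]
hull = lower_convex_hull(frontier)    # -> [(1, 5), (2, 3), (4, 1)]
```

When the true frontier is convex, as in the concave-demand-distribution case the abstract describes, this hull is the object that converges uniformly to it; the thesis obtains it in strongly polynomial time via a max-flow reformulation rather than the brute-force comparison sketched here.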