Browsing by Subject "Model complexity"
Now showing 1 - 2 of 2
Item: Deciding among models: a decision-theoretic view of model complexity (2010-05)
Mozano, Jennifer Maile; Bickel, J. Eric; Lake, Larry W.
This research examines the trade-off between the cost of adding complexity to a model and the value it adds to the results within the context of decision making. It seeks to determine how complex a model should be in order to fit the purpose at hand. The report begins with a discussion of general modeling theory and model complexity. It then considers the specific case of petroleum reservoir models and the existing research comparing modeling results across levels of model complexity. Finally, it presents original results applying Monte Carlo sampling to a drilling decision scenario and to a one-dimensional reservoir model in which a cylindrical oil field is represented by different numbers of cells and the results are compared.

Item: Model complexity and risk aversion in decision analysis (2023-08-07)
Small, Colin Andrew; Bickel, J. Eric; Leibowicz, Benjamin; Hasenbein, John; Dyer, James S.; Henry, Stephen
Models are often formulated to aid decision making. However, the details included or excluded are often determined with minimal examination of their effects, and there is a tendency to make models more complex than is merited. Yet decision makers' risk preferences are often ignored without considering the effect on recommendations. Additionally, modelers do not always understand the difference between preferences over deterministic outcomes and risk preferences, or the impact of modeling conflicting risk and deterministic preferences together in a single component. In this paper, I investigate the relationship between complexity and accuracy, using COVID-19 forecasting as a case study. I find that our simple model is comparable in accuracy to highly publicized models, generating among the best-calibrated forecasts. This may be surprising given the complexity of many high-profile models supported by large teams.
However, it is consistent with research suggesting that simple models perform very well in a variety of settings. Utility functions are a fundamental component of decision analysis, and they can take many forms. For small decisions, the choice of functional form may not change the recommendation, but for large decisions it can affect recommendations greatly. There are qualitative recommendations on which functional form to use, but no quantitative recommendation relating the size of the uncertainties to the choice of utility function. By maximizing the error in certain equivalents across different utility functions, this paper provides guidance on when each functional form is appropriate. Although decision makers should be approximately risk neutral for small problems, they are often observed to be risk averse. Rabin and Thaler showed that utility functions modeling small-scale risk aversion imply absurd risk aversion for large uncertainties. They attribute small-scale risk aversion to loss aversion, where the pain from losses exceeds the benefit from equal gains. But a preference over deterministic losses and gains is a deterministic preference, not a risk preference. They argue that loss aversion caused the observed behavior, yet they modeled deterministic and risk preferences in a single factor. In this paper, I show that modeling risk and deterministic preferences separately can resolve Rabin's Paradox, underscoring the need to model both explicitly whenever deterministic preferences can influence decision making or conflict with risk preferences.
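The Rabin-style argument in the second abstract can be made concrete with a small numeric sketch. The snippet below is illustrative only, assuming an exponential utility function and made-up numbers (gamble sizes, a risk tolerance of 1000, and a loss-aversion factor of 2 are assumptions, not values from the dissertation): a single utility function calibrated to reject a small 50/50 gamble also rejects a wildly favorable large one, whereas treating loss aversion as a deterministic value adjustment, with risk preference kept neutral, rejects the small gamble but accepts the large one.

```python
import math

def ce_exponential(gamble, rho):
    """Certain equivalent under exponential utility u(x) = -exp(-x/rho),
    i.e. CE = -rho * ln(E[exp(-X/rho)]). gamble is a list of (prob, payoff)."""
    return -rho * math.log(sum(p * math.exp(-x / rho) for p, x in gamble))

def ev_loss_averse(gamble, lam):
    """Expected value after a deterministic loss-aversion value function:
    losses weigh lam times more than gains; risk preference stays neutral."""
    return sum(p * (x if x >= 0 else lam * x) for p, x in gamble)

rho = 1000.0   # assumed risk tolerance, chosen so the small gamble is rejected
lam = 2.0      # assumed loss-aversion factor

small = [(0.5, -100), (0.5, 110)]          # 50/50: lose 100 or win 110
large = [(0.5, -1000), (0.5, 1_000_000)]   # 50/50: lose 1,000 or win 1,000,000

# One utility function for everything: rejecting the small gamble
# forces rejecting the large, hugely favorable one (Rabin's Paradox).
print(ce_exponential(small, rho))  # slightly negative: reject
print(ce_exponential(large, rho))  # strongly negative: reject

# Separating the deterministic loss-aversion preference from the
# (neutral) risk preference resolves the paradox.
print(ev_loss_averse(small, lam))  # negative: still reject
print(ev_loss_averse(large, lam))  # large positive: accept
```

The point of the sketch matches the abstract's claim: loss aversion is a preference over deterministic outcomes, and folding it into the risk-preference parameter is what produces the absurd large-scale behavior.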