Browsing by Subject "Effect size"
Now showing 1 - 3 of 3
Item: Identification of optimal moderators in clinical trials (2015-05)
Wang, Li, M.S. in Statistics; Daniels, Michael Joseph; Muller, Peter
Moderators and mediators can be very informative in the analysis of clinical trials: moderators help determine which treatment should be assigned to which individuals, and mediators help determine how to improve treatments. It is well known that a treatment may not be equally beneficial to everyone, and an overall effective treatment may be less effective (or even harmful) in certain groups; this highlights the importance of moderators in making treatment assignment decisions. A combined moderator, or optimal moderator, can be useful when multiple potential moderators exist but no individual one is particularly strong. This report reviews how to assess a single moderator as well as approaches for deriving an optimal moderator. An example from a randomized clinical trial is presented, including the identification of an optimal moderator.

Item: Software implementation of modeling and estimation of effect size in multiple baseline designs (2013-12)
Xu, Weiwei, active 2013; Beretvas, Susan Natasha
A generalized, design-comparable effect size for multiple baseline designs across individuals has previously been modeled and estimated by the restricted maximum likelihood (REML) method in a hierarchical linear model using R. This report evaluates the same modeling and estimation approach in SAS. Three models (MB3, MB4, and MB5) with the same fixed effects but different random effects are estimated with the PROC MIXED procedure using the REML method. The unadjusted and adjusted effect sizes are then calculated with the matrix operation package PROC IML. The fixed effect estimates of the three models are similar to one another and to those from R. The variance components estimated by the two software packages are fairly close for MB3 and MB4, but the results differ for MB5, which exhibits boundary conditions in its variance-covariance matrix.
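The kind of model comparison the report describes can be illustrated in miniature. The sketch below uses Python's statsmodels (not the R nlme or SAS PROC MIXED software the report actually compares) to fit a random-intercept hierarchical linear model by REML to simulated multiple-baseline-style data; the data-generating values and model are illustrative assumptions, not the report's MB3/MB4/MB5 specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate toy multiple-baseline-style data: 6 cases, each with a
# case-level random intercept, a 10-observation baseline phase, and a
# 10-observation treatment phase with a true phase effect of 1.5.
# (Illustrative only; not the report's actual design or models.)
rng = np.random.default_rng(0)
rows = []
for case in range(6):
    u = rng.normal(0.0, 1.0)  # case-level random intercept
    for t in range(20):
        phase = int(t >= 10)
        y = 2.0 + 1.5 * phase + u + rng.normal(0.0, 1.0)
        rows.append({"case": case, "phase": phase, "y": y})
df = pd.DataFrame(rows)

# Random-intercept hierarchical linear model fit by REML,
# analogous in spirit to PROC MIXED with METHOD=REML.
m = smf.mixedlm("y ~ phase", df, groups=df["case"]).fit(reml=True)
print(m.params["phase"])  # fixed-effect estimate of the phase contrast
```

With the effect size defined relative to this fitted model, discrepancies between packages would show up in the variance component estimates, which is where the report finds MB5 diverging.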
This result suggests that the nlme library in R behaves differently from SAS's PROC MIXED with the REML method under extreme conditions.

Item: The impact of response-guided methods on single-case designs (2019-08)
Swan, Daniel M.; Beretvas, Susan Natasha; Pustejovsky, James E.; Ferron, John; Klingbeil, David A.
Single-case designs (SCDs) are commonly used in special education to study small populations and focused, idiosyncratic interventions. In these designs, observations of a participant made in a baseline phase prior to treatment are compared to observations made in a phase after or concurrent with treatment. The difference between the baseline phase and the treatment phase provides information about the treatment effect. Historically, the most common method for analyzing SCD data has been visual analysis, but there is growing interest in the analysis and meta-analysis of SCD data using parametric methods. Among the issues of interest to researchers is the problem of stability in the baseline phase. More specifically, researchers from the visual analysis tradition often observe the baseline phase until the data pattern reaches stability, generally characterized in terms of minimal trend and variability. This form of SCD is sometimes referred to as a response-guided experiment or response-guided design. Previous simulation research has examined this issue; however, those studies focused on a particular, narrow form of response-guided design and used data-generating processes that may not be credible for the types of data common to SCD studies. In this study I explored several different ways to operationalize response-guided designs, as well as alternative data-generating models for commonly observed outcomes. I applied the response-guided algorithms to empirical SCD data, as well as to simulated SCD baselines, to explore the credibility of the response-guided algorithms.
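The "observe until stable" idea can be operationalized in many ways; the following Python sketch shows just one hypothetical rule (the thresholds and the trend/variability criteria are invented for illustration, not taken from the dissertation): keep extending the baseline one observation at a time until the fitted linear trend is small and the coefficient of variation is low, or a maximum length is reached.

```python
import numpy as np

def baseline_is_stable(y, max_slope=0.1, max_cv=0.25):
    """Hypothetical stability rule: small fitted linear trend and a
    low coefficient of variation. Thresholds are illustrative only."""
    y = np.asarray(y, dtype=float)
    slope = np.polyfit(np.arange(y.size), y, 1)[0]
    cv = np.std(y, ddof=1) / np.mean(y)
    return abs(slope) <= max_slope and cv <= max_cv

def response_guided_baseline(rng, min_obs=3, max_obs=20, level=10.0, sd=1.0):
    """Extend the baseline one observation at a time until the
    stability rule fires or a maximum baseline length is reached."""
    y = list(rng.normal(level, sd, size=min_obs))
    while len(y) < max_obs and not baseline_is_stable(y):
        y.append(rng.normal(level, sd))
    return np.array(y)

rng = np.random.default_rng(1)
print(len(response_guided_baseline(rng)))  # realized baseline length
```

Because the stopping decision depends on the observed data, the realized baseline is a selected sample, which is exactly the mechanism by which response-guided designs can distort variability-dependent statistics.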
I also examined the impact of these response-guided algorithms on treatment effect estimates from SCDs, applied to response-guided data generated using both traditional and alternative data-generating models. The results of these simulations suggest that response-guided designs are likely to affect the magnitudes of effect size estimates that depend on the variability of the sample data, such as non-overlap indices like the NAP and parametric effect sizes like the within-case standardized mean difference. Even for effect sizes that are not notably affected by response-guided designs, such as the log-response ratio, the standard errors are likely to be biased. I discuss the broader implications of these results, including alternatives to response-guided designs and how researchers and journal editors can help the field better understand the impacts of response-guided design practices.
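The effect size indices named above have simple computational definitions, which the following Python sketch computes for a single case with made-up phase data. NAP is the proportion of all (baseline, treatment) observation pairs in which the treatment value exceeds the baseline value (ties counted as one half); the log-response ratio is the log of the ratio of phase means; the within-case standardized mean difference shown here uses the baseline standard deviation as its denominator, which is one common choice.

```python
import numpy as np

def nap(baseline, treatment):
    """Non-overlap of All Pairs: fraction of (baseline, treatment)
    pairs in which the treatment value exceeds the baseline value,
    with ties counted as one half."""
    b = np.asarray(baseline, dtype=float)
    t = np.asarray(treatment, dtype=float)
    greater = (t[None, :] > b[:, None]).sum()
    ties = (t[None, :] == b[:, None]).sum()
    return (greater + 0.5 * ties) / (b.size * t.size)

def log_response_ratio(baseline, treatment):
    """Log of the ratio of phase means (requires positive means)."""
    return np.log(np.mean(treatment) / np.mean(baseline))

def within_case_smd(baseline, treatment):
    """Standardized mean difference for one case, scaled here by the
    baseline standard deviation (one common denominator choice)."""
    return (np.mean(treatment) - np.mean(baseline)) / np.std(baseline, ddof=1)

base, trt = [2, 3, 4, 3], [6, 7, 5, 8]
print(nap(base, trt))  # 1.0: every treatment value exceeds every baseline value
print(log_response_ratio(base, trt))
print(within_case_smd(base, trt))
```

Note how NAP and the SMD depend directly on the spread of the baseline data, while the log-response ratio depends only on the phase means; this is consistent with the dissertation's finding that variability-dependent indices are the ones most affected by response-guided baselines.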