    Evaluation of single- and multilevel factor mixture model estimation

    View/Open
    alluad93484.pdf (1.075 MB)
    Date
    2007
    Author
    Allua, Shane Suzanne
    Metadata
    Abstract
    Confirmatory factor analysis (CFA) models test the plausibility of latent constructs hypothesized to account for relations among observed variables. CFA models can be used both to model hierarchical data structures that arise from cluster sampling designs and to investigate the plausibility of latent classes, or unobserved classes of individuals. Recent research has provided preliminary evidence on the accuracy of typical fit indices (AIC, BIC, aBIC, LMR aLRT) for various single-level latent class models, multilevel latent class models that correct for the biased standard error estimates produced by nested data, and growth mixture models (Clark & Muthén, 2007; Nylund et al., 2006; Tofighi & Enders, 2006). Few, if any, researchers, however, have studied the accuracy of these fit indices in multilevel factor mixture models. The purpose of this study was to extend the literature in this less researched area and assess the performance of typical fit indices used to compare factor mixture models. Class separation, intraclass correlation, and between-cluster sample size were manipulated to emulate typical research conditions. The proportion of times out of 100 replications in which each of the AIC, BIC, aBIC, and LMR aLRT led to selection of the correctly specified model over mis-specified alternatives was recorded. For data generated from one-class models, the BIC and aBIC outperformed the AIC, and the LMR aLRT was nonsignificant nearly 100% of the time, supporting the correct one-class model. Performance of all fit indices was poor, however, when data were generated from two-class models. The AIC and LMR aLRT tended to perform better than the BIC and aBIC, although the accuracy of all fit indices increased with greater class separation and larger between-cluster sample size. Implications and recommendations regarding optimal fit indices under various conditions are reported. It is hoped that this research provides initial evidence of the conditions under which the various fit indices are more likely to identify the correct number of latent classes in multilevel data.
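    For readers unfamiliar with the information criteria named in the abstract, the sketch below shows their standard textbook definitions; it is not taken from the dissertation itself. The log-likelihoods, parameter counts, and sample size are illustrative assumptions, and the LMR aLRT is omitted because it requires the full likelihood of the (k-1)-class comparison model rather than a simple penalty term.

```python
import math

def aic(loglik, n_params):
    """Akaike Information Criterion: -2*logL + 2*k (lower is better)."""
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n):
    """Bayesian Information Criterion: -2*logL + k*ln(n)."""
    return -2.0 * loglik + n_params * math.log(n)

def abic(loglik, n_params, n):
    """Sample-size-adjusted BIC (Sclove, 1987): replaces n with (n + 2) / 24."""
    return -2.0 * loglik + n_params * math.log((n + 2.0) / 24.0)

# Hypothetical model-selection comparison: the candidate with the smallest
# index value is the one each criterion would "select". All numbers are
# made up for illustration; they are not results from the study.
candidates = {
    "1-class": {"loglik": -5210.4, "n_params": 12},
    "2-class": {"loglik": -5150.9, "n_params": 19},
}
n = 1000  # assumed total sample size

for name, m in candidates.items():
    print(name,
          "AIC=%.1f" % aic(m["loglik"], m["n_params"]),
          "BIC=%.1f" % bic(m["loglik"], m["n_params"], n),
          "aBIC=%.1f" % abic(m["loglik"], m["n_params"], n))
```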
    Department
    Educational Psychology
    Description
    text
    URI
    http://hdl.handle.net/2152/2986
    Collections
    • UT Electronic Theses and Dissertations