
dc.contributor.advisor  Dodd, Barbara Glenzing
dc.creator  Macken-Ruiz, Candance L.
dc.date.accessioned  2012-09-11T17:58:02Z
dc.date.available  2012-09-11T17:58:02Z
dc.date.issued  2008-08
dc.identifier.uri  http://hdl.handle.net/2152/17878
dc.description  text
dc.description.abstract  A multi-stage test (MST) is an alternative design for the delivery of automated tests. While computerized adaptive tests (CAT) have dominated testing for the past three decades, interest has increasingly focused on the MST because it offers two advantages that CAT does not: test sponsors and test developers can see an entire test before administration because it is pre-constructed from sets of modules of test items, and within a module examinees may skip forward and back through test items and change previously answered items. Because of the dominance of CAT, little research has been devoted to how MST designs differ with regard to the number of items per stage and the routing rules that select the next module after a module has been completed. This research used simulated response data for a large national test and the generalized partial credit model to compare a CAT with MST designs that varied in the number of items per stage (decreasing, increasing, or equal across stages) and in the routing rule (maximum information, fixed θ, or number-right routing). As anticipated, the CAT performed best with respect to proficiency estimation and item pool use. Among the MSTs, the design with increasing numbers of items per stage performed best with respect to proficiency estimation, followed by the designs with decreasing and equal numbers of items per stage. By routing rule, maximum information performed best and number-right routing performed worst. Only one panel was constructed per MST design, so only limited comparisons of item pool use could be made. Although the MST designs did not perform as well as the CAT, the differences in proficiency estimation were not large, implying that the MST design is a viable alternative to CAT.
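The abstract refers to the generalized partial credit model (GPCM) and to maximum-information and number-right routing between MST stages. As a rough illustration only (the item parameters, module sizes, and cut score below are hypothetical and are not taken from the dissertation's item pool or design), the following Python sketch computes GPCM category probabilities and Fisher information for polytomous items and shows how the two routing rules might choose a second-stage module.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities for one item under the generalized partial
    credit model. theta: proficiency; a: discrimination; b: step
    difficulties b[1..m], giving score categories 0..m."""
    # Cumulative sums a*(theta - b_v); the empty sum for category 0 is 0.
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    expz = np.exp(steps - steps.max())        # subtract max for stability
    return expz / expz.sum()

def gpcm_info(theta, a, b):
    """Fisher information of a GPCM item: a^2 * Var(score | theta)."""
    p = gpcm_probs(theta, a, b)
    k = np.arange(len(p))
    return a**2 * (np.sum(k**2 * p) - np.sum(k * p)**2)

def route_max_info(theta_hat, modules):
    """Maximum-information routing: choose the next-stage module whose
    items carry the most total information at the current estimate."""
    totals = [sum(gpcm_info(theta_hat, a, b) for a, b in items)
              for items in modules]
    return int(np.argmax(totals))

def route_number_right(raw_score, cutoffs):
    """Number-right routing: choose a module from the raw score on the
    routing module using fixed cut scores (len(cutoffs)+1 modules)."""
    return int(np.searchsorted(cutoffs, raw_score, side="right"))

if __name__ == "__main__":
    # Two hypothetical second-stage modules (easier vs. harder), three
    # three-category items each, given as (a, [b1, b2]).
    easy = [(1.0, [-1.5, -0.5]), (0.9, [-1.0, 0.0]), (1.1, [-1.2, -0.3])]
    hard = [(1.0, [0.5, 1.5]), (1.2, [0.3, 1.2]), (0.9, [0.8, 1.8])]
    print("max-info route at theta = 0.8:", route_max_info(0.8, [easy, hard]))
    print("number-right route, 5 of 6 points:", route_number_right(5, cutoffs=[4]))
```

In this sketch, maximum-information routing depends on an interim proficiency estimate, while number-right routing needs only the observed score and a fixed cut point, which is one reason the two rules can send the same examinee to different modules.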
dc.format.medium  electronic
dc.language.iso  eng
dc.rights  Copyright is held by the author. Presentation of this material on the Libraries' web site by University Libraries, The University of Texas at Austin was made possible under a limited license grant from the author, who has retained all copyrights in the works.
dc.subject.lcsh  Computer adaptive testing
dc.subject.lcsh  Examinations--Design and construction
dc.title  A comparison of multi-stage and computerized adaptive tests based on the generalized partial credit model
dc.description.department  Educational Psychology
thesis.degree.department  Educational Psychology
thesis.degree.discipline  Educational Psychology
thesis.degree.grantor  The University of Texas at Austin
thesis.degree.level  Doctoral
thesis.degree.name  Doctor of Philosophy