Browsing by Subject "Testing effect"
Item: Does motivation moderate the effectiveness of retrieval as a learning intervention? (2013-05)
Clark, Daniel Allen; Svinicki, Marilla D., 1946-; Robinson, Daniel H.

The effects of using retrieval as a study method have been found across many contexts, such as in classrooms, with different age groups, and with non-verbal materials (Rohrer & Pashler, 2010). Even though researchers have suggested that this intervention be implemented on a widespread basis, studies to date have not investigated how the important variable of motivation might affect retrieval as a learning intervention. This experiment investigated whether motivational variables would moderate the effect that retrieval has on learning. In this study, retrieval, extrinsic incentives, and intrinsic motivation each positively affected performance. Causality orientations did not affect performance or moderate the effect of the incentives, and none of the included motivational variables moderated the effect of retrieval on learning. These results suggest that retrieval as a learning intervention is equally effective across different motivational conditions.

Item: Does team-based testing promote individual learning? (2011-05)
Walker, Joshua David; Robinson, Daniel H.; Schallert, Diane; Svinicki, Marilla; Borich, Gary; Muir-Broaddus, Jacqueline

Team-based testing gives students a chance to earn additional points on individual unit tests by immediately re-taking the test as a team competing against other teams. This instructional approach has enjoyed widening implementation and impressive anecdotal support, but there remains a dearth of empirical studies evaluating its prescribed processes and promoted outcomes. Although the posited effectiveness and appeal of team-based testing seem consistent with the benefits of test-enhanced learning and collaborative learning in general, several limitations are readily apparent.
Namely, the current format of the individual and team readiness assurance tests is expressly multiple-choice. Though this question type has some advantages (e.g., ease of administration and grading), its long-term cognitive disadvantage relative to short-answer questions is well documented. Furthermore, it is not clear whether the proposed gain in learning through this format is attributable to the group effect (be it social or cognitive) or simply to repeated exposure to the test items. Therefore, this study measured the effects of initial test question Format (short-answer vs. multiple-choice), Mode (individual vs. group), and Exposure (once vs. twice) on four delayed measures of learning: Old multiple-choice items (ones students had initially been tested on), Old short-answer items, New multiple-choice items, and New short-answer items. Two weeks after watching a video-recorded lecture, 208 college students took a thirty-item test comprising both the old and new items in multiple-choice and short-answer formats. Results revealed that 1) taking an initial test twice is better than taking it once when the delayed test has old short-answer items or new multiple-choice items, 2) taking an initial short-answer test is better than a multiple-choice test when the delayed test has old multiple-choice, old short-answer, or new multiple-choice items, and 3) taking an initial team test is no different from taking an individual test when it comes to long-term learning. Particularly noteworthy is that a) the effects of short-answer tests and of taking tests twice are not present within the Team conditions, and b) taking a multiple-choice test twice is as effective as taking a short-answer test once. Implications are discussed in light of learning theory and instructional practice.
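The factorial structure described above can be sketched in a few lines; this is an illustrative enumeration of the 2 × 2 × 2 design (the factor names come from the abstract, but the cell listing itself is an assumption for illustration, not from the thesis):

```python
from itertools import product

# Factor levels as named in the abstract:
# Format (short-answer vs. multiple-choice),
# Mode (individual vs. group), Exposure (once vs. twice).
formats = ["short-answer", "multiple-choice"]
modes = ["individual", "group"]
exposures = ["once", "twice"]

# Full crossing yields the 2 x 2 x 2 = 8 cells of the design.
conditions = list(product(formats, modes, exposures))
for fmt, mode, exposure in conditions:
    print(f"Format={fmt}, Mode={mode}, Exposure={exposure}")
print(len(conditions))  # 8
```

Each cell would then be assessed on the four delayed measures (Old/New items in multiple-choice and short-answer formats).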