Toward a More Perfect Union: Integrating Quantitative and Qualitative Data in Writing Fellows Program Assessment
Curriculum-based peer tutoring programs such as Writing Fellows typically have multiple goals, ranging from the grand purpose of transforming university-wide curricula to making all students feel included in a community of writers (Haring-Smith 177). Fellows programs are expected to meet the increasing and often conflicting demands of various stakeholders, including university administrators, faculty, students, Fellows themselves, and possibly employers and other members of the wider community (Condon 46-47). For example, students might want to earn higher grades, instructors might want to require students to write multiple drafts, and administrators might want to encourage a university-wide writing culture and promote collaborative learning. Because stakeholder interests vary, measuring the success of a Writing Fellows program necessarily means assessing and relating multiple outcomes. However, a review of the literature shows that many Writing Fellows programs rely exclusively on qualitative tools such as student satisfaction surveys and open-ended questionnaires to supply evaluative data (Soven, “Curriculum-Based Peer Tutors and WAC” 220). While such tools offer potentially useful insight into participant sentiment, they neglect other critical outcomes, such as demonstrable improvement in student writing. Further, faculty, Fellows, students, and program administrators are all vulnerable to interpreting survey results selectively. As with all research in the social sciences, sound data on the impact of Writing Fellows must be gathered in ways that actively guard against such bias. In this time of limited funding and increased pressure to demonstrate program effectiveness, Writing Fellows administrators should articulate meaningful, measurable goals and move beyond the limitations of traditional survey-based assessment.
This article details one Writing Fellows program’s attempt to address these assessment concerns by using a quantitative study to complement and enhance existing qualitative measurements.