Composition Forum 23, Spring 2011
http://compositionforum.com/issue/23/

O’Neill, Peggy, Cindy Moore, and Brian Huot. A Guide to College Writing Assessment. Logan: Utah State UP, 2009. 232 pp.

Gerri McNenny

Into the life of every writing professional, programmatic assessment will surely come. Whether it arrives as a program-wide inquiry into teaching effectiveness or as an institutional assessment of writing across the curriculum for accreditation, writing teachers at all levels must have a basic understanding of how to assess the quality and effectiveness of the programs they participate in. Such a foray into the inevitable mix of qualitative and quantitative measures of learning is a relatively recent phenomenon—recent enough for the Council of Writing Program Administrators to have issued a white paper on program assessment strategies in 2008. Still, many writing program administrators come from a humanities tradition, in which writing is assessed qualitatively, as a cultural artifact at the nexus of a particular socio-cultural and rhetorical situation. Social science research methods and strategies, with their ability to allow us to conduct empirical inquiries into goals and gains, are not typically part of the repertoire of WPAs steeped in the traditions and scholarship required of English majors.

Into this gap comes A Guide to College Writing Assessment, an eminently useful tool for any working writing specialist. As an essential primer on the theory, history, and practice of assessment for both the newly assigned writing professional and the seasoned writing program administrator seeking resources on assessment beyond the individual classroom, the Guide fills a much-needed niche for any writing professional confronted with the task of program assessment. Armed with an understanding of theoretically and pedagogically defensible approaches to assessment, the WPA can proceed systematically, with confidence. As a resource for a graduate-level course on assessment, the Guide also provides an essential foundation, introducing students to a synthesis of scholarship and research on assessment practices across the curriculum and the program. Such an overview offers multiple points of departure for further inquiry.

Grounded in the day-to-day pragmatic issues that writing professionals confront outside the classroom—placement, exit exams, program evaluations, institutional evaluations for accreditation by various agencies, and faculty assessments—the Guide consistently affirms the principles of solid assessment: “that, in order to work well, [assessment] practices should not—cannot—be considered outside historical, theoretical, and situational scenes” (12). The context of a writing program, including where it is situated institutionally (whether in English, in student services, or across the disciplines) and how it is organized (as a set of courses taught by professionals educated in the field of rhetoric and composition or by part-time, adjunct faculty), will inevitably influence the assessment choices and the political constraints of the team assigned to conduct the program assessment. O’Neill, Moore, and Huot map out those contingencies with admirable clarity.

What the authors have done here is to place in the hands of WPAs and writing professionals, whatever their role and level of expertise, the means to own assessment at the local level, consistent with the school’s and program’s mission and institutional vision. With a grounding in the history and theory of writing assessment, a grasp of the problematic nature of validity and reliability in writing assessment, and a call to research these concepts further, the authors walk the reader through the many considerations that await the assessment team. The treasure waiting to be discovered here is guidance on how to benefit from large-scale assessments and align assessment design with our current understanding of language and literacy acquisition. By interrogating the site-specific, contextual bases for writing, writing professionals and administrators can, in effect, use assessment to research, design, and develop better programs that in turn serve the institution in a more targeted and effective manner. In addition, the authors go a long way toward defusing the many landmines that await an assessment team and transforming assessment into an opportunity for programmatic and institutional renewal. Through the authors’ discussion of strategy, context, and situation, the role of the WPA, so often situated in the difficult terrain of middle management, becomes manageable.

To account for this and other idiosyncrasies of program assessment, O’Neill, Moore, and Huot entertain a variety of considerations that will inevitably assist the program director in defining and shaping writing instruction. By interrogating placement rationales and procedures, assessments of student writing proficiency, and program assessments, the authors provide a condensed history of writing placement and assessment procedures while also distilling a synthesis of scholarship and research for each area. In their section on “Conducting Writing Program Assessments,” for instance, the authors provide not only an overview of typical assessment methods and scenarios for writing program assessment, but also an interrogation of the theoretical and programmatic usefulness of the data collected. Writing center assessment, for example, often falls prey to quantitative data-gathering strategies—the collection of “usage data” (131)—uninformed by the more nuanced understanding of language and literacy learning that might assist directors and tutors in improving the teaching and learning done in their classrooms and centers. By alerting writing center directors to this and other possible impediments to meaningful program assessment, the authors have synthesized the research and scholarship and flagged those common stumbling blocks that might catch a program director unaware.

What finally distinguishes this volume are the tools it provides for conducting solid and transformative research. Throughout the book, and more pointedly in the appendices, the authors refer to and later provide models of questionnaires, portfolio rubrics, focus-group procedures, and surveys aimed at various stakeholders that together render a faithful account of the program. The inclusion of the CCCC Committee on Assessment’s position statement on writing assessment (Appendix B) offers foundational disciplinary “Guiding Principles for Assessment” that are, by consensus, theoretically and pedagogically consistent with best practices in literacy learning, language development, and writing assessment (161).

While A Guide to College Writing Assessment may strike the casual reader as light on theory and research and somewhat heavy on the possible impediments WPAs can anticipate in the actual conduct of writing assessment, these features may not be drawbacks at all. While these qualities may make the guide less desirable for faculty seeking an introductory text for graduate students, they nonetheless add to its charm, simply because the authors stand in as the mentors that any WPA, in the midst of the daunting task of program assessment, would surely want to consult in those inevitable moments of need.

Works Cited

National Council of Teachers of English and Council of Writing Program Administrators. “NCTE-WPA White Paper on Writing Assessment in Colleges and Universities.” Council of Writing Program Administrators. 2005-2009. Web. 15 Nov. 2010. <http://www.wpacouncil.org/whitepaper>.
