Composition Forum 32, Fall 2015
http://compositionforum.com/issue/32/

Assessment as Living Documents of Program Identity and Institutional Goals: A Profile of Missouri University of Science and Technology’s Composition Program

Daniel Reardon and Alexander Wulff

Abstract: In this profile we describe changes to the composition program at Missouri University of Science and Technology, prompted by the hiring of the university’s first writing program administrator (WPA). We describe our efforts to implement evidence-based best practices in undergraduate writing courses in a context where very little program-specific evidence was available. We also describe how the challenges of effecting change at a university largely composed of science, technology, engineering, and mathematics (STEM) students have meant that many of the changes have been framed by the spirit of Writing Across the Curriculum (WAC) initiatives. Several new methods of assessment have been introduced to the program, including instructor feedback, student surveys, and skills tests. Allowing assessment to drive standardization has begun a process of measuring the transfer of student knowledge that we believe other departments will find interesting. We close by outlining unresolved issues and ongoing challenges as the program moves forward.

This profile chronicles the English and Technical Communication department’s efforts to unify the writing program at Missouri University of Science and Technology, grounded in an attempt to overhaul our Composition I and II courses. While our department was in a unique situation (because Missouri S&T had never hired a WPA and there was no preexisting, overarching structure for either Comp I or II),{1} we believe that using evidence-based practices to create this curricular structure allowed for a more united composition program. That is, assessment was not something that we began in anticipation of resisting outside “forces,” as is so often reported. While we knew that some of our data would help us make arguments for changes needing administrative approval down the line, we began to assess our students to make our program stronger.

Our university has a long-established identity as an elite engineering university with a strong reputation as an affordable investment. This identity as rigorous and affordable certainly informs our desire to create more empirical data about our students. Ranked seventh by U.S. News & World Report among the top 50 public colleges and universities, Missouri S&T offers 65 degree programs and 15 accredited undergraduate engineering programs (Institutional Profile, 2010). In fact, as of the 2011 enrollment tallies, 5406 out of 7206, or 75%, of our students are engineering majors—a consistent majority that hasn’t changed in our university’s 142-year history. In contrast, the English and Technical Communication department is quite small—though there are a substantial number of ex-engineering students within the English majors’ ranks. In 2011, the mean ACT composite score for incoming first-year students was 27.8, a consistent score for nearly ten years (New Freshmen). These numbers and this background reveal a hierarchy on campus. That hierarchy has helped us arrive at the need for more data, though less as a defensive gesture and more as a way of making sure we can speak the same language as our STEM colleagues.

More importantly, the process of gathering data through assessments actually took on a life of its own, serving to foster a more cohesive curriculum and a more cohesive staff. Therefore, we believe that this profile extends Chris Gallagher’s claims about organic writing assessment made in his 2011 CCC article “Being There: (Re)Making the Assessment Scene.” There is no substitute for local development, Gallagher argues; “being there matters” (463) in the creation of assessment. Beyond reinforcing the lesson that meaningful assessment is always locally contextualized (not hired out to consultants), the assessment process has made our department’s shift toward a more evidence-based curriculum much smoother. It has also improved staff training and involvement.

Furthermore, by better understanding our student population, we were able to clarify our objectives in serving that population. Our assessment practices have grown organically as a way for our WPA and faculty to better understand our students’ needs. Our assessments informed our class activities, our assignment design, and even the questions we asked students during class discussion and writing workshops. Multiple members of the department helped to create these assessments, and the assessments will continue to be revised to fit the needs of both our students and our department. As living documents, our assessments act as fluid sources for faculty unity and professional development.

What we believe will be of interest to the broader composition community is not only our focus on making this overhaul with increased attention to empirical data, but also the ways that this attention made program changes much easier to implement. Rather than using subjective staff impressions as the driving force behind the new curricula, Dan, our new WPA, has sought to use staff impressions to guide a search for increasingly robust assessments. Staff members, like Alex, have, in turn, responded with deep interest in the creation of these assessments. These assessments have helped, and will continue to help, clarify our program’s goals—thus the assessments’ nature as living documents. In particular, by focusing our assessments on what students may know when they arrive on campus, we worked to unify our program’s sense of what we need to be teaching. Having everyone on the same page is important, as are the trainings that can be developed around building a shared assessment.

In this profile we will first describe our department background and history in order to provide context for the changes Dan made when he became WPA. Next, we elaborate on our program ideology and then explain the reading comprehension diagnostic completed at the start of the course by all Comp I students from 2011-2013. We then detail the reading and grammar diagnostics we created and recount how those assessments informed our program and courses. A description of our redesigned Comp I and II courses follows the diagnostics discussion, so readers may learn how our assessments have driven course construction. We end with a “What We’ve Learned” section, in which we describe our hopes, regrets, and a few dreams.

Missouri S&T’s Composition Program

The composition program resides in the English and Technical Communication department, which comprises ten tenured and tenure-track faculty, four non-tenure-track full-time lecturers (NTTs), and five adjunct instructors. The composition courses are taught almost entirely by Dan, the NTTs, and the adjunct instructors. While the composition staff teaches only two courses—Comp I and Comp II—we teach 30-35 sections every fall semester and 20-25 sections every spring. When Dan took over as composition director, he formed a composition committee made up of all four full-time instructors, the writing center director, and one adjunct faculty member. The composition committee advises Dan on composition policy, reviews course objectives and outcomes, and writes assignment instructions for the four major essay assignments in Comp I and the four essays in Comp II.

Approximately 60% of Missouri S&T’s students enroll in our Comp I course; the other 40% gain credit for Comp I through an AP course in high school, a Comp I course at another degree-granting institution, or the CLEP examination. Many of these students, particularly in electrical and computer engineering, do not enroll in a writing course until their junior or senior year if they have completed Comp I for AP credit in high school. Then they enroll in either Comp II or Technical Writing, depending on their department’s requirement. Technical Writing sits outside the composition program and is meant to prepare students about to head into the workforce for the kinds of technical writing required of STEM professionals. We’ve found in both Comp II and Technical Writing that by their junior or senior year at S&T, students’ writing skills have atrophied because they haven’t been in a course requiring writing of any kind since high school. Consequently, attempts to foster writing skills transfer have little chance of succeeding if the environment in other disciplines encourages the erosion rather than the development of those writing skills. Even the most sound and measurable instances of improved writing fade without further practice and development (Perron, Crowhurst and Piche).

These loopholes in our requirements mean that we have a two-semester composition cycle but no requirement that students actually take either course. In order to ensure that our students are building, rather than eroding, their writing skills, we are very interested in changing the requirement structures around these courses and in developing WAC programs across our campus. At the same time, we know that while there will be opportunities to make these changes, those opportunities have not yet presented themselves.

As a way to bridge that gap in students’ writing development during their college years, we have redesigned our Comp II course. We believe this redesigned course benefits the diverse student demographic enrolled in Comp II and helps to make the course a more effective second-semester writing course for all students. Comp II could thus serve as a bridge to Technical Writing instead of an alternative to it; we argue that students at Missouri S&T should complete both Comp II and Technical Writing: the former as sophomores or in their first semester as transfers, and the latter as second-semester juniors or seniors. In a climate where programs are looking to eliminate courses from degree requirements, a pitch to add another writing course to the general degree requirements will be a tough sell. That pitch will require even more data than we have gathered, but we believe we have made a start.

The English and Technical Communication department has given Dan significant latitude to substantially revise the composition curriculum, recommend course assignments for composition instructors, and develop WAC initiatives with the WAC director—who also directs our university writing center. The WPA position did not exist prior to 2011, though since 2007 an English faculty member had been responsible for overseeing the composition courses. From 2007-2011, that faculty member—now our department chair—formed a composition committee to create a set of course objectives for Comp I.

Prior to 2011, instructor autonomy in Comp I and II was nearly absolute; comp instructors could choose their own readers, rhetorics, and writing assignments. Many chose literary themes for their courses. Nonetheless, some elements of a solid program were already in place. Our English and Tech Com department chair (the same faculty member who had been composition director from 2007-2009 before she became chair){2} succeeded in lowering the student cap in both Comp I and Comp II from 25 students per section to 20. Additionally, our university writing center is robust, with 16 undergraduate tutors representing nearly every major at Missouri S&T, extensive recruitment each year for new tutors, broad support from higher administration, and up-to-date training programs certified by the College Reading and Learning Association (CRLA).

If before 2011 there was little in the way of organization or standardization of student instruction in our composition courses, there was at least fertile ground for creating links between writing instruction and other departments. The university has increasingly recognized the importance of writing and communication skills in students’ academic and professional development. In large part due to scholarship in reading and writing pedagogy over the past twenty years and our own nascent WAC initiatives, many of our engineering departments now appreciate the complexities involved in their students’ writing development.

Dan’s Comp I and II redesign initiatives were also fueled by the goal of re-invigorating a dormant Writing Across the Curriculum program. Our ally in WAC during recent years has been the university administration, in part thanks to the 2008 student outcomes criteria published by the Accreditation Board for Engineering and Technology (ABET), which emphasize non-technical skills to a greater extent than did standards from the 1990s (Newborg 2).{3} Long before the new ABET outcomes were released, the English department at Missouri S&T began the process of guiding STEM faculty toward a broader understanding of writing instruction’s importance. This effort is detailed in a 2000 article, “WAC Meets the Ethos of Engineering: Process, Collaboration, and Disciplinary Practices,” by Linda S. Bergmann, at that time our WAC director. Hired to create the WAC program and the writing center at Missouri S&T,{4} Bergmann describes efforts to reconcile different ideas about the value of writing and the writing process among engineering and English faculty at the university. Bergmann noted what she described as compositionists’ dismay at “the engineering mentality” (4), which amounted to a rather simplistic idea of FYC as a be-all-and-end-all writing course. Engineering faculty also, according to Bergmann, often saw WAC as an expedient tool for producing career-ready STEM professionals (4).

Further, Bergmann described how engineering faculty believed that writing could be taught as a series of discrete skills, and that once students learned those skills in FYC, their knowledge about writing could be easily transferred to any writing situation in any discipline. This assumption did not seem to derive from a set of epistemological constructions about creating meaning through writing or about the intricate collaboration between writer and reader. Rather, it developed from a rather hazy objectivist notion that a “proper” or definable, knowable, and universal way to write exists “out there.” Many studies over the last twenty years have noted myriad problems with this idea of transfer as the movement of a discrete set of skills from one course to another (Downs and Wardle).

As in most composition programs, some of our composition instructors are less familiar with composition scholarship than others. Involving the staff in our assessments allows for continued training of all members, but especially those who might be less familiar with more recent shifts towards assessment-guided pedagogy. Finally, the soundness of our program’s foundation will be tested by staff turnover. Development and analysis of assessments as a core component of our program means that we are focused on the process of building a program rather than just training current faculty.

Beyond using assessment to build our program, we believe that attempting to better understand our student population and linking our instructional efforts to that understanding could be a model for other programs. At a moment when our field is coming to terms with the difficulty of measuring and creating opportunities for the transfer of knowledge (DePalma and Ringer; James; Smit), we believe that curriculum unification and syllabus standardization are two important ways we will be able to meaningfully address this problem. Unquestionably, syllabus standardization is controversial; course standardization is exponentially so. But as Ronda Leathers Dively argues, good reasons exist for standardization in some form. A shared syllabus could increase the likelihood of transfer if writing assignments in courses subsequent to Comp I could build on skills instructors knew all students had practiced. Standardization could also foster “best practices” discussions and implementation by instructors in a program (Dively par. 8). However, these benefits can be difficult to achieve with a cadre of faculty who may have been teaching composition their own way for years. A WPA must therefore weigh the risk of alienating faculty if a syllabus and assignments become prescriptive and faculty come to believe their academic freedom has been curtailed. Dan decided that the only way to develop consistency across Comp I and II would be to ask every member of the writing faculty to collaborate on what would become the eventual courses, and to rebuild Comp I and II from the ground up.

Through a series of meetings and a far larger series of email messages on a writing instructors’ listserv Dan created, the composition faculty decided to investigate what difficulties students might have with reading comprehension. Since many of us agreed that the first few weeks of our Comp I sections are especially reading-heavy, and since many instructors were concerned about students’ reading skills, Dan created a reading diagnostic. We later added a grammar diagnostic so we could get a better picture of which grammar issues our instructors should prioritize. Several of our instructors were already concerned about their students’ grammatical abilities and were administering their own grammar diagnostics. Based on our instructors’ suggestions (resulting from their own experiences teaching and testing grammar), we created a grammar diagnostic for all Comp I students.

The grammar diagnostic has fulfilled its small purpose; we have identified areas of punctuation and sentence construction for which we can develop lessons and activities for our students. Nevertheless, we know that grammar tests provide only small (and often unreliable) windows into a student’s writing ability. As we discovered, and as we’ll recount in our “What We’ve Learned” section, the grammar test reinforced the need to teach a few areas of style, but did little more than validate our concerns. One year’s worth of grammar testing did, however, provide another starting point for conversation with our STEM colleagues, who tend to focus on grammatical issues in student writing. But because he feared instructors would place undue emphasis on the results of their students’ grammar tests, Dan discontinued its administration after two semesters.

Conversely, the reading diagnostic provided a much more useful catalyst for discussion of curricular change and for finding common ground with administration and other disciplines. At a university where WAC never fully took flight, reading comprehension became Dan’s tool for engendering conversations about curricular reform on a significantly larger scale than the English department’s writing program. In effect, the reading diagnostic results have sparked conversations about re-invigorating the humanities (that is, funding them and requiring their courses) at a technological university that requires a math placement test but no such test for English. Beyond its original purpose as one method for assessing our students’ reading skills, Dan’s reading diagnostic has had a strategic use in institutional conversations about retention, student success, and ultimately the need for more reading- and writing-intensive courses for every major.

Diagnostic Assessments

Reading Diagnostic

When taking over as composition director, Dan wanted first to address reading in his reform efforts. In his former position at Missouri S&T as assistant director of the writing center, he had heard an increasing number of complaints from instructors that students were struggling with reading comprehension. The writing center peer tutors also noticed more problems with students’ ability to understand what they read. When students were asked in their Comp I assignments to evaluate the effectiveness of an author’s argument, both instructors and peer tutors remarked with growing frequency that students were unable to do so. When he became the WPA in fall 2011, Dan saw the opportunity to gather data on Missouri S&T students’ reading comprehension abilities.

Using the Depth of Knowledge Scale (1997), developed by Norman L. Webb of the Wisconsin Center for Education Research, Dan wrote the reading diagnostic. Webb’s Depth of Knowledge (DOK) standards are widely used by K-12 and college assessment programs, as well as by the Missouri Department of Elementary and Secondary Education (DESE) in its Show-Me Standards and state assessment tests.{5} The DOK assesses tasks according to four levels:

  1. Locate/Recall: find information directly in the text, or a close paraphrase of information in the text. Example: find where an author makes a specific statement or claim, or tell what happens in a story.
  2. Skill/Concept: make simple inferences based on information in the text, or easily interpret information from the text. Example: identify the main idea of a passage, or determine the meaning of a word through context clues in a sentence.
  3. Strategic Thinking: arrange and organize multiple concepts, resources, and/or modes to develop a concept or solve a problem. Example: describe an author’s intent or success in conveying an argument.
  4. Extended Thinking: reflect on, re-arrange, and re-conceptualize multiple sources to develop a concept or solve a problem. Example: arrange multiple sources—some of them contradictory—on a similar topic to develop and argue your own point of view regarding the topic.

The 20-question diagnostic assessed students’ Level I (Locate/Recall) and Level II (Skill/Concept) abilities; 10 questions addressed Level I skills, and 10 addressed Level II. The diagnostic did not address Level III (Strategic Thinking) skills; in a pilot diagnostic administered to 129 students in Missouri S&T’s summer bridge program during summer 2010, fewer than 5% of the students were able to answer Level III questions successfully. Dan therefore decided to test only Level I and Level II skills in the diagnostic.{6}
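To make the scoring concrete, the following is a minimal sketch of how per-level subscores can be computed for a 20-question diagnostic split evenly between Level I and Level II items. The answer-key format, function name, and sample responses here are hypothetical illustrations, not our actual testing materials:

```python
# Hypothetical scoring sketch: items 1-10 test Level I (Locate/Recall),
# items 11-20 test Level II (Skill/Concept). Key and responses are made up.
answer_key = {q: ("c", 1) for q in range(1, 11)}         # Level I items
answer_key.update({q: ("c", 2) for q in range(11, 21)})  # Level II items

def missed_per_level(responses, key):
    """Count incorrect answers at each DOK level for one student."""
    missed = {1: 0, 2: 0}
    for q, (correct, level) in key.items():
        if responses.get(q) != correct:
            missed[level] += 1
    return missed

# A made-up student who answers items 1-15 correctly and skips the rest.
student = {q: "c" for q in range(1, 16)}
print(missed_per_level(student, answer_key))  # {1: 0, 2: 5}
```

A per-level count like this is what allows a program to report, as Table 1 does below, how many students missed four or more questions at a given level.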

The reading diagnostic was administered in every Comp I class section during the second class period. Students read a short passage and then answered a multiple-choice question about it, selecting the correct answer from four choices (Appendix 1). Readability for the passages was averaged according to three scales:

  • Coleman-Liau Index: 11.98
  • Flesch-Kincaid Grade Level: 11.70
  • ARI (Automated Readability Index): 11.27

The average reading-level score for the passages was 11.65, or, according to the reading scales, what a mid-level high school junior should be able to comprehend.
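For readers who want to reproduce grade-level figures like these, here is a minimal sketch of two of the three indices (the two that require no syllable counting), using their standard published formulas with naive tokenization; a dedicated readability tool will handle edge cases this sketch ignores:

```python
import re

def ari(text):
    # Automated Readability Index:
    # 4.71*(characters/words) + 0.5*(words/sentences) - 21.43
    words = text.split()
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    chars = sum(len(re.sub(r"[^A-Za-z0-9]", "", w)) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / sentences - 21.43

def coleman_liau(text):
    # Coleman-Liau Index: 0.0588*L - 0.296*S - 15.8, where L is letters
    # per 100 words and S is sentences per 100 words
    words = text.split()
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    letters = sum(len(re.sub(r"[^A-Za-z]", "", w)) for w in words)
    L = 100 * letters / len(words)
    S = 100 * sentences / len(words)
    return 0.0588 * L - 0.296 * S - 15.8

sample = "The quick brown fox jumps over the lazy dog."
print(f"{ari(sample):.2f}, {coleman_liau(sample):.2f}")  # 1.39, 3.78

# The overall figure above is the mean of the three reported scale averages:
print(f"{(11.98 + 11.70 + 11.27) / 3:.2f}")  # 11.65
```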

The English 20 Reading Diagnostic differs from the ACT Reading Test in one fundamental way. The ACT Reading Test is a series of four reading passages, with ten questions about each passage. According to a 2011 study conducted by the National Bureau of Economic Research, the ACT Reading Test offers virtually no prediction of student college success (2). The researchers in the study postulate that many questions in the test may be answered by searching for clues among the other questions about each passage (16-18).

The reading diagnostic consisted of twenty passages, with one multiple-choice question about each passage. Dan chose or wrote passages from a variety of subjects and genres, including American, British, and world history, electrical engineering, and legal documents. The range of texts therefore approximated, to some degree, the kinds of documents students would encounter and be accountable for comprehending in college. Dan also wrote the question, distractors, and correct answer for each passage.{7}

Every student enrolled in Comp I from fall 2011 through spring 2013 completed the reading diagnostic. The results are presented in Table 1:

Table 1. Reading Diagnostic Results

| Reading Diagnostic                                      | Fall 2011   | Spring 2012 | Fall 2012   | Spring 2013 | Totals  |
|---------------------------------------------------------|-------------|-------------|-------------|-------------|---------|
| Number of Students Tested                               | 404         | 233         | 439         | 248         | 1324    |
| Average Score out of 20                                 | 13.5        | 13.5        | 13.6        | 14          | 13.65   |
| Median                                                  | 13          | 14          | 14          | 14          | 14      |
| Standard Deviation                                      | 2.9         | 2.08        | 2.73        | 2.74        | 2.6125  |
| Answered 4 or More Locate/Recall Questions Incorrectly  | 79 (19.55%) | 64 (27.5%)  | 109 (24.8%) | 53 (21.4%)  | 23.31%  |
| Answered 4 or More Skill/Concept Questions Incorrectly  | 242 (59.5%) | 141 (60.5%) | 241 (54.8%) | 133 (53.6%) | 57.10%  |
| Answered 9 or More Questions out of 20 Incorrectly      | 90 (22.1%)  | 44 (19%)    | 96 (21.9%)  | 46 (18.5%)  | 20.38%  |
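One note on reading the table: the Number of Students Tested total sums the four semesters, while the average-score, standard-deviation, and percentage totals are unweighted means of the four semester figures rather than pooled, per-student statistics. A few lines of Python, transcribing the semester values above, confirm that reading:

```python
# Reading Table 1's Totals column: the student count sums across semesters;
# the score and percentage totals are unweighted means of semester figures.
students   = [404, 233, 439, 248]
avg_score  = [13.5, 13.5, 13.6, 14.0]
std_dev    = [2.9, 2.08, 2.73, 2.74]
pct_recall = [19.55, 27.5, 24.8, 21.4]   # missed 4+ Locate/Recall (%)
pct_skill  = [59.5, 60.5, 54.8, 53.6]    # missed 4+ Skill/Concept (%)
pct_nine   = [22.1, 19.0, 21.9, 18.5]    # missed 9+ of 20 (%)

mean = lambda xs: sum(xs) / len(xs)
print(sum(students))              # 1324
print(f"{mean(avg_score):.2f}")   # 13.65
print(f"{mean(std_dev):.4f}")     # 2.6125
print(f"{mean(pct_recall):.2f}")  # 23.31
print(f"{mean(pct_skill):.2f}")   # 57.10
print(f"{mean(pct_nine):.2f}")    # 20.38
```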

The results gave us a direction for how to frame class activities, discussion questions, and assignments, and created opportunities for discussions among instructors about how to generate course documents. If, as the scores suggest, over 20% of our students may struggle with locating a passage in a text, and if nearly 60% may have trouble with simple inferences like main idea or theme, then we have a point at which to begin reading instruction in Comp I. The diagnostic also served as a visual starting point in conversations with STEM faculty about the importance of reading and writing, and thus the importance of Comp I and Comp II for their students. When faculty complain to us that their students “cannot read or write,” we have a tool we can use as we try to unpack that complaint. A conversation about writing and its importance in each discipline is a tough one. But reading already cuts across every discipline and nearly every course. Reading comprehension, and students’ struggles with it, are topics we’ve found our STEM colleagues eager to discuss. It’s a start.

The diagnostic also fostered discussions among our instructors about what it does not assess—namely, complexity of decoding, prior knowledge, genre competency, and cultural factors that all contribute to what we understand as reading comprehension. But a diagnostic that asks students to read a variety of passages and answer only one question about each of those passages may provide an avenue as we try to help our students become better readers.

Grammar Diagnostic

As composition instructors we are aware of the perception that our students struggle with grammar. We also know that skill-and-drill instruction has long since receded from standard practice. We still wanted to create a grammar diagnostic. Why? To be honest, choosing this assessment strategy helped the department shift closer to a culture of assessment and addressed questions members of the program wanted answered: What kinds of mistakes can our students recognize already? What kinds of mistakes are our students most likely to make? Am I seeing a representative sample or creating my own bias?{8} In fact, several instructors had already created their own grammar diagnostics. We also found no real consensus among instructors about which grammar concerns were most prominent and which ones we wanted to work on as a department. All involved understood that grammar instruction needs to be individualized in order to be effective, but there was curiosity in the department about what such an assessment would tell us about our own individual biases and what we could measure.

In the end, the grammar test gave us a chance to get a slightly clearer picture of our students as they arrive on campus. We have found that our students struggle with subject-verb agreement, pronoun agreement, run-on sentences, sentence fragments, and modifier use. Results from the grammar test give our instructors no more than a broad starting point. What we found most important about this process was that the test served as a way to build assessment skills within the department and provided a foundation for inter-faculty discussion about how we can teach grammar concerns. Both of these goals could certainly have been accomplished through other means, but this assessment was an important step for our department.

In creating the test we were able to pilot it with a small number of students before we began assessing all Comp I students on the second day of class. This trial sample allowed us to refine the test before we began to gather large amounts of data. Designing the test itself meant taking a large number of ACT and SAT problem sets and then looking at the various ways these two companies create their questions. We found many instances where these testing companies rely too heavily on the “best answer” framing in their directions. We were not looking to “trick” our students, and we wanted the test to focus on one component of grammar at a time.

Unlike both the SAT and ACT, we wanted our questions to be much more obvious for our students. To accomplish this, Alex tested versions of questions with students from his classes. Our students still struggled with our questions, but we assiduously avoided questions where one could debate the answer. For instance, the SAT and ACT are both quite comfortable testing students on their familiarity with American usage conventions. Because international students make up about 5% of our Comp I population, and because they might not be as familiar as native speakers with American usage, we wanted to avoid questions where usage would be ambiguous (see Appendix 2).

The grammar test showed us what it was perhaps capable of showing—that students struggle with the six grammatical areas we tested. The test did not impact course design in any way, and it is certainly the lowest-value assessment we created. In fall 2012 and spring 2013, the two semesters we gave the test to all Comp I students on the second day of class, the average score held steady at 55%. For some instructors, the results corroborated concerns about their students’ skills in these areas; for others, the news was a surprise. What the test did provide was a way to build assessment skills within the department and to emphasize the importance of assessment for future departmental decisions. The test also created a place for discussion—not just about the test scores themselves but about how we do or should teach grammar and style, and what importance they have in our classrooms. As we’ve stated, once the test had opened the door for those discussions, and because he was convinced it would reveal little else about our students, Dan discontinued the grammar test. The grammar test has helped us talk to each other as a composition faculty, and as our next section will demonstrate, that was a significant step forward as we redesigned the Comp I and II courses.

Comp I and II Redesign: Goals for Best Practice and Assessment

Comp I, titled “Exposition and Argumentation,” was a grab-bag of assignments when Dan came aboard in 2011. Prior to the curriculum standardization, instructors could unduly privilege their own preferred genres: poets could teach poems and ex-journalists could teach journalism. This violates much of what we have learned in the last twenty years about composition instruction. For instance, in the Spring 2013 issue of Composition Forum, Elizabeth Wardle and Doug Downs look back at their 2007 article and their understanding of how we can best describe what we do in our general-education writing courses. Wardle and Downs argue that if we really want to teach our students a set of skills that will transfer to other writing situations in their careers and lives, then we should ask our students to engage in a discussion about the contexts of writing—why and for whom we write, when we write. If instructors are not adept at explaining why they have chosen the texts for the course, then what hope do students have of addressing these same questions about contexts and goals?

Comp I has been radically altered to focus on reading for the first month of the semester because of the results of our reading diagnostic test. We cannot currently guarantee that incoming students will take this course, or that they will have taken it before they take Comp II, but we want to meet the students who are taking the course where their high schools have left off. This has meant training staff to offer better reading instruction and making that a focus of the program. It also meant carving out substantial space for reading development in the design of the course.

Comp I

Based on the reading diagnostic data, Dan re-designed Comp I so that in the first four to five weeks of the course, instructors work on developing students’ Level I (Locate/Recall) skills while moving slowly toward making simple inferences. Through a series of workshops and memos, Dan advised instructors to think carefully about the types of questions they ask their students about a reading assignment. If an instructor first asks, “Do you think the author was successful in presenting her argument?” students may not be equipped to answer, since our pilot reading diagnostic data indicates that students may not possess Level III (Strategic Thinking) skills before we see them in Comp I. On the other hand, if an instructor begins with questions that ask students to locate information in a text, students can start to build Level I skills. Then, after students have located several key ideas in that text, instructors could ask what students think the main idea of the text is, based on the important passages the class has identified.

While developing these reading skills, students begin work in Comp I on their first essay, a personal narrative. The narrative is meant to introduce them to the fundamental writing concepts of organizational structure, the importance of purpose, understanding of rhetorical situation, and awareness of audience. So students work concurrently on reading skills and writing skills through an integrated approach; they both study and write narratives. Students’ relative ease with the narrative’s rhetorical situation has, we have found, had a positive effect on their willingness to engage more challenging rhetorical situations with which they are less familiar or entirely unfamiliar. It also provides us time to work with students on their reading skills before they tackle the argumentative essays they will write for the remainder of the course.

The second essay assignment, Argumentative Analysis, continues to develop students’ Level II (Skill/Concept) skills as students analyze the structure of an author’s argument in a nonfiction essay. Students practice summarizing, inferring main ideas, and identifying an author’s purpose. Through Essay Two, students work on understanding the nature and form of arguments as various authors construct them in model essays.

Essay Three, Argumentative Synthesis, is a variation on the well-worn “compare/contrast” paper. Students use the argument analysis skills they developed while writing Essay Two to compare two authors’ opposing arguments on the same topic. Instructors work with students during this time on Level III (Strategic Thinking) skills, as students form their own position on the issue the two authors argue.

The fourth essay combines students’ reading and class discussion of a non-fiction book, called the One Book,{9} with the synthesis skills they have developed in Essay Three. Thus, each assignment builds on the previous one, culminating in the One Book Limited Source Essay. In this essay, students select a topic of interest originating in the One Book and incorporate at least two outside sources that support their topic. In this way, the One Book essay is a demonstration of sorts, one in which students continue to practice their argument analysis and synthesis skills while learning the basics of logically and responsibly integrating source material into an argumentative essay.

Comp II

Comp II, titled “Writing and Research,” existed with only nominally more substance than the former Comp I course. Comp II had labored for years under an ancient, one-page course guide, the author of which has been lost to departmental living memory (Appendix 3). Course sections varied dramatically in content, assignments, objectives, and outcomes. Readings across sections of Comp II embraced a veritable smorgasbord of themes, depending on instructor interest, including science fiction, early American novels, short fiction, studies in a decade (like the 1950s or 60s), and nineteenth-century poetry.

The only outcomes projected for Comp II came from that one-page guide, which stated that “students will be capable of writing research papers which contain an identifiable thesis that is intelligently and coherently developed with sufficient supporting details based on the use of sources” (Appendix 3). The guide also stated that students should “write at least 6000 words of graded material” (about 20 pages’ worth). Finally, students should also “know how to use the library and avoid plagiarism.” These were good precepts to keep in mind, but they were not particularly helpful when constructing the course.

Central to our standardization of Comp I and II was what David Perkins and Gavriel Salomon call “backward-reaching knowledge transfer” (26), or the ability of students to recognize skills that scaffold those they’ve previously been taught. A consistent scaffolding of reading and writing skills necessitates the careful building of increasing competencies rather than a smattering of reading in several literary genres. In effect, instructors in the “anything goes” former model of Comp I and Comp II were asking students to develop competencies in reading drama, fiction, poetry, or the incredibly complex collaborative art of dramatic film—each genre requiring its own set of competencies. We decided that our comp instructors cannot reasonably expect to effectively teach those literary studies competencies—and expect students to demonstrate them—in a 15-week course. Because there was no actual structure for either Comp I or II, we knew we were not so much making a revision as building from the ground up. In setting our four-essay assignment sequence for both courses, Dan used Webb’s DOK—the same scale he used when creating the reading diagnostic. The reading diagnostic information was helpful, therefore, in providing us with a map for sequencing essay assignments in Comp I and II.

In Comp I, our department’s composition committee standardized the essay sequence to develop students’ Level II (Skill/Concept) and Level III (Strategic Thinking) skills. More specifically, through a sequence of essays, students first practice analyzing a writer’s argument (Level II), then develop an argument of their own based on the ideas of another writer in a nonfiction essay (Level III). Our focus in the rest of this profile will center on Comp II, which, more than Comp I, combines our WAC initiatives with our reading and writing instruction standardization.

When re-building Comp II from the ground up, and while focusing on a specific, detailed list of reading and writing skills objectives for our students, we wished also to assist students in constructing their own understanding of their education and their future careers (Soven), honoring what Brannon and Knoblauch argue in “Writing as Learning Through the Curriculum”: that “the value of writing in any course should lie in its power to enable the discovery of knowledge” (13). Additionally, our overhaul of Comp II, and our vision of it as a flagship WAC course for Missouri S&T’s students, privileges what Stanley Fish calls “interpretive communities” (338). Although this approach does not exclude write-to-learn or expressivist activities, it emphasizes social constructivist assignments—teaching writing as a form of social behavior in the academic community. Such a course could therefore be useful for students as they become members of their own disciplinary discourse communities, and it could bridge a gap for our STEM students between the writing for discovery they practiced in Comp I and the professional literacies they are expected to achieve in their careers.

Comp II could then reinforce the principle of “growing as participants” that Carter, Ferzli, and Wiebe describe. Research by McQuire, Lay, and Peters also supports the idea that students can develop their own disciplinary voices through reflective writing, which “becomes a critical skill for functioning effectively in diverse and complex practice realities” (93). In essence, Comp II is about expanding Comp I’s work into new genres and fields. Our goal is that students will continue to practice the Level I and Level II skills they’ve developed in Comp I as we move them toward Level III (Strategic Thinking) and Level IV (Extended Thinking) skills. Once again, the empirical data that led to the construction of Comp I also fostered our revision of Comp II. We want to create courses in which we can be as clear as possible about what skills we hope will transfer from one course to the next. Through a series of in-class and research writing assignments, Comp II students participate in their disciplinary acculturation processes and analyze what writing skills they believe should transfer across discourse communities.

Comp II Assignment Sequence

In creating a course that focuses on entering various academic conversations, there is room to experiment with genres of writing, but because the goals of Comp II are tied to an academic argumentative research essay, we focused on breaking the research essay into its constituent parts (Appendix 4). There is some artificiality inherent in this division, but we believe these steps address some of students’ core weaknesses. We therefore require three short essays in the semester’s first half:

  • Rhetorical Analysis
  • Evaluation
  • Synthesis

During the first three weeks of the course, students focus on rhetorical reading and rhetorical writing (Kolln 2010). Initially, we work on comprehension (content-based reading), the kind of reading with which students are most familiar—though not familiar enough. We ask students to summarize and interpret. We then work on process-based reading, which focuses more on the rhetorical decisions made by the writer. Finally, we ask them to examine closely the conventions of the text’s genre.

Essay One: Rhetorical Analysis

The first assignment in Comp II—Rhetorical Analysis—is a 3-4 page deep textual description of a writer's argument and craft. Students examine only one essay. In their description, students practice close reading skills—both Level I (Locate/Recall) and Level II (Skill/Concept). The operating assumption behind this assignment is that the first part of entering any conversation involves an understanding of what has already been said. Only then can the writer offer a valid criticism or endorsement of a particular position in that conversation. In some sense, our rhetorical analysis is an artificially long summary assignment. Because most good undergraduate research papers are built on a few key texts, this assignment is meant to give students the tools to work effectively with those key texts.

Essay Two: Evaluative Analysis

The second assignment in Comp II is an evaluation essay—a 3-4 page appraisal of a writer’s argument and craft. In their Evaluative Analyses, students discuss the merits and weaknesses of an author’s argument and the way the author constructs that argument. Students again examine only one article for the evaluation paper. In this essay, students practice Level III (Strategic Thinking) skills. The goal in this paper is to use the rhetorical reading and writing skills built during the rhetorical analysis assignment to create meaningful evaluation. We do not ask that instructors explicitly teach the Toulmin model of argument, but we do ask that instructors guide students in examining claims, evidence, and the fallacies or proofs that link them.

Instructors are also given the option of having students find their own readings for this assignment. We created this option because we wanted to see if introducing research skills earlier in the semester made the transition to the final research paper smoother. A number of the course goals directly relate to an individual student’s ability to conduct and evaluate research, and it makes sense to practice these skills more than once a semester.

Essay Three: Synthesis

The third short assignment in the first half of the semester is a synthesis—a 3-4 page discussion and analysis of two writers’ arguments, culminating in a fresh perspective from the student author. The synthesis assignment has two goals:

  1. Developing a student’s ability to manage different voices in her writing.
  2. Fostering a student’s creation of her own voice in an academic dialogue.

While students have practiced both skills in the evaluation assignment (Essay Two), the synthesis presents a more complicated task. The synthesis also acts as a stepping stone that anticipates the process of structuring the research paper.

Essay Four: Research Paper

The three writing assignments for the semester’s second half are the research proposal, an annotated bibliography, and the final research paper. The two smaller assignments are factored into the final essay grade, though instructors are given discretion in weighting them. Giving students the second half of the semester to work on their research papers allows them to build the assignment in steps. Where the sequence of assignments in the first half of the semester develops a discrete skill with each essay, the second half allows students to create documents they can use in their research paper. Instructors are further encouraged to have students break the research paper into smaller sections with separate deadlines as the end of the semester nears.

The research proposal addresses Level III (Strategic Thinking) skills and asks students to develop a 2-4 page expository essay in which they propose a research question and research methodology that will eventually culminate in a multi-source synthesis research essay. The annotated bibliography also addresses Level III (Strategic Thinking) skills. It is not a full bibliography; instead, it is a bibliography of select "key" sources to be used in the research paper. This allows students to practice placing themselves in dialogue with individual sources long before they have to arrive at a thesis. Finally, the research paper—which addresses Level IV (Extended Thinking) skills—is an 8-10 page essay in which students argue an original thesis based on research from eight academic sources.

Post-Course Survey

Even though Comp II has been thoroughly redesigned, we already know that there will continue to be room for improvement at the institutional level. After a semester of teaching the new Comp II curriculum in seven course sections with three different instructors, we invited students to complete a voluntary survey asking them to assess their reading, writing, and research skills both before enrolling in and after completing Comp II. Of the 128 students who completed the course, 47 (n=47) responded to the survey request, for a response rate of 36.7% (see Appendix 5).

In the survey, we asked respondents a series of primary questions regarding the skills taught in Comp II. The survey was created with the input of everyone involved in teaching Comp II. Assessments that evolve out of a department’s needs and are created by all the relevant members of a department not only increase dialogue, but also create opportunities for the professional development of contingent faculty. The discussions around this survey allowed our faculty to assess individual practices and to compare those across our newly standardized curriculum. Having dialogues that start with “If we rephrase X to read like Y, then can we capture what they think they have learned about…” is helpful not only in producing a better survey but in making the learning goals for our courses central to what we do as a department. In each case, we asked respondents in separate questions to rate their understanding of the following skills both before and after completing Comp II (a sketch of how such before-and-after ratings can be tabulated follows the list):

  1. Compose grammatically correct sentences
  2. Read academic sources
  3. Formulate research into usable dialogues between authors
  4. Compose unified paragraphs that support a single idea
  5. Compose unified paragraphs that support a larger idea
  6. Navigate an established argument within a discipline
  7. Editing, proofreading, and revising strategies
  8. Focus a research topic
  9. Develop a research strategy for acquiring outside sources
  10. Summary, paraphrasing, and quoting
  11. Argument
  12. Documentation styles (MLA, APA, Chicago)
  13. Argumentative Synthesis
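Below is a minimal sketch of how before-and-after self-ratings like these can be tabulated. The 5-point scale and the sample responses are illustrative assumptions; the actual findings appear in Appendix 5:

```python
from collections import defaultdict

# Illustrative tabulation of (skill, before, after) self-ratings on an
# assumed 1-5 scale; these sample rows are made up, not our survey data.
responses = [
    ("Read academic sources", 3, 5),
    ("Read academic sources", 2, 4),
    ("Argumentative Synthesis", 1, 4),
    ("Argumentative Synthesis", 2, 4),
]

sums = defaultdict(lambda: [0, 0, 0])  # skill -> [before_total, after_total, n]
for skill, before, after in responses:
    sums[skill][0] += before
    sums[skill][1] += after
    sums[skill][2] += 1

for skill, (b, a, n) in sums.items():
    print(f"{skill}: before {b/n:.2f}, after {a/n:.2f}, gain {(a - b)/n:+.2f}")

print(f"Response rate: {47/128:.1%}")  # 36.7%, as reported above
```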

Scores increased in each category, which indicates at least some perception by students that they had a better understanding of key writing and research skills after completing the course. The highest score increases were, predictably, in those skills we emphasize in Comp II: reading academic sources, formulating research, navigating arguments, and argumentative synthesis. The highest jumps were in two of these categories: reading academic sources and navigating an argument. Since the Comp I and Comp II curricula stress these skills perhaps above all others, these student impressions support our belief that they are indeed learning these skills (see Appendix 5 for assessment findings).

While we know that some of these numbers are artificially inflated (most of our students had not focused on writing a “synthesis” paper prior to the course, so they naturally think they know more about synthesis now than they did before they started the course), we are encouraged by several of the improvements and believe assessments like this are fundamentally important to gauging the health and strength of any writing program. One can have too much faith in the evidence created by numbers like these, but without such feedback, student perspectives and voices are lost to this process.

For instance, one of the improvements students felt most strongly about was in their belief about their ability to read academic texts. While our shift toward addressing reading concerns early in the semester is relatively new, the numbers here tell us that the instructors in our program have responded to this challenge with the sense of urgency we hoped for. These numbers do not tell us much, however, about how these results will transfer to other courses. But we hope we are starting to build metacognitive skills that do transfer to other courses and beyond college.

Looking Ahead—Conclusions and Implications

Currently, the reading diagnostic has garnered significant attention on campus, both from other departments with whom Dan has shared the diagnostic results and from campus leaders. Our next step was a series of recommendations from the English division of our campus’s student success committee, of which Dan is a member. The student success committee has made the following recommendations to the university chancellor:

  1. Reading and writing diagnostic placement for all incoming university students
  2. Elimination of the CLEP exam as an option to satisfy the Comp I degree requirement
  3. Inclusion of Comp II as the second course in a two-semester FYC requirement for all degree-seeking students
  4. Preservation of Technical Writing as a third writing course for all STEM students

The timetable for any of these initiatives is two to three years: fall 2015 or 2016 as of this profile’s writing. Our composition program has come a long way in a relatively short time; the result to date has been a standardized sequence of reading and writing skills courses with broadly measurable outcomes across sections. Our campus administration also recognizes the contribution improved reading and writing skills can make to student success and retention. The buy-in—and it’s a substantial one—will be the investment of resources and faculty to implement this writing course sequence for Missouri S&T students. In financially challenging times, this is a tough sell. But it is a necessary one, and well worth the effort.

It seemed unlikely at the outset that a complete overhaul of Comp I and Comp II focusing on standardization would prove popular among composition staff. Surprisingly, the new standard curriculum has been well received. Even more importantly, the staff has been heavily involved in creating and measuring the significance of the newly created assessments. This nearly universal effort has made it clear that we are changing practices for the better, rather than just because the department has hired a new WPA. Our composition faculty have become stakeholders in this process, and our assessment materials have become living documents that grow and change (or are eliminated) as our program grows and changes. Our faculty development sessions are often fueled by discussions about what works in our assessment process and what does not. Instructors frequently talk with Dan about what they see in their students’ reading and writing development and about other assessment practices they would like to initiate. Additionally, our grading systems have become more consistent as we reach agreements during workshops about what we value and privilege in our students’ writing. Faculty also discuss throughout the semester how they use students’ diagnostic scores and how assessment and classroom practice inform one another.

Beyond the composition program itself, using empirical data to drive a major curricular shift revitalized our initiatives to assist our STEM students with overcoming their reading and writing difficulties. By focusing on empirical data and framing our changes in the language of WAC, we have been more successful in getting our STEM colleagues to recognize the value of our composition courses.{10}

What We’ve Learned

Chiefly, we’ve learned that curricular change of the magnitude we propose is a long process and requires careful analysis and dissemination of data. At the same time, the reading and grammar diagnostics have already told us more about our students than we previously knew. We will want to use this data to argue for institutional changes in the long run, but for now it has helped us reshape the composition program. This is especially true of the reading test, which unified the composition program’s focus on reading. We have all participated in several trainings that have helped instructors focus on reading throughout Comp I and in the first part of Comp II. In fact, in a recent English department-wide meeting, the results of that test drove a discussion about reading instruction in upper-level English courses. Almost as important as these reforms is the way the collected data has facilitated the building of a stronger composition program. Because various instructors have been able to help create these assessments, they have become documents we all “own” for our program’s improvement. The entire staff therefore has a stake in our assessments rather than merely acquiescing to a top-down system or a generic series of assessments from an outside corporate entity.

That said, we know that any empirical data we reveal to administration will potentially brand our instructors, courses, and students. Once data is interpreted, perception has a way of becoming reality. In an era of shrinking budgets, in which departments are looking for ways to decrease graduation requirements, asking faculty and administration to consider additional course requirements is a tough sell at best. We’re working toward a cultural shift—and those never come easily or quickly. And we know a reading diagnostic alone won’t do it. We are going to have to push for portfolio assessments as a partial entrance requirement.

In sum, we’ve learned to move slowly and deliberately with both course change and assessment implementation. Maintaining a sense of instructor autonomy in a standardization process is tricky, but the two are not nearly as mutually exclusive as one might expect. While we have created standard assignment instructions for each of the four essay assignments in both Comp I and II, we know that instructors do their own “tweaking” of the assignments to fit their individual course schedules and reading materials.

Moving our composition faculty toward that post-assessment process (specifically, end-of-course portfolio assessments of students’ writing in each of our Comp I and Comp II courses), and asking them to accept what will likely be more grading during an already overloaded end of semester, will be challenging. But if we truly wish to engage our university communities in recognizing the vital role that assessment data plays in our understanding of student skills, then we must begin at home, in our own courses. Post-assessment, in whatever form it eventually takes for us, is coming. And admittedly, if we could enact the entire Comp I and Comp II re-design process over again, we would have a post-assessment process already in place as part of the overall course re-designs. Like the portfolio admissions initiative, the post-assessment challenge lies ahead. Together, they will be our greatest challenges yet.

Appendices

  1. Appendix 1: Reading Diagnostic Sample Questions
  2. Appendix 2: Grammar Diagnostic Examples
  3. Appendix 3: Original Course Description for Comp II: Writing and Research (prior to 2012)
  4. Appendix 4: Revised (2012) Objectives and Outcomes for Comp II: Writing and Research
  5. Appendix 5: Comp II End of Course Survey

Appendix 1: Reading Diagnostic Sample Questions

Both questions were written by Dan for the diagnostic.

  1. The following is a Level I (Locate/Recall) question from the diagnostic:

Jack Russell terriers are happy, bold, energetic dogs; they are extremely loyal, intelligent and assertive. Their greatest attribute is their working ability, closely followed by their excellent qualities as companions. A Jack Russell can be equally contented bolting a fox or chasing a toy in your living room, or adept at killing a sock in the living room or a rat in your barn. Their funny antics will continually amuse you, their intelligence seems to know no bounds, and their assertive nature and boundless energy can at times be overwhelming.

The author of this passage states that the Jack Russell’s best quality is:

  1. its outgoing nature.
  2. its friendliness.
  3. its abilities as a working dog.
  4. its intelligence.
  2. The following is a Level II (Skill/Concept) question from the diagnostic:

Brilliant statesman though Churchill was, some of the military actions attributed to him during the war remain controversial. Churchill was at best indifferent to and perhaps complicit in the Great Bengal Famine of 1943, which took the lives of at least 2.5 million Bengalis. Japanese troops were threatening British India after having successfully taken neighboring British Burma. Some consider the British government's policy of denying effective famine relief a deliberate and callous scorched-earth policy adopted in the event of a successful Japanese invasion. The bombing of Dresden shortly before the end of the war also remains controversial; the city was a mostly civilian target, crowded with refugees from the East, and of allegedly little military value. However, the bombing of Dresden effectively helped the Soviet allies.

From this passage, we may infer that

  1. Churchill caused the Great Bengal Famine.
  2. the Japanese invasion of British India had been unsuccessful.
  3. many thought Churchill’s policy regarding British India was unfeeling.
  4. the bombing of Dresden had significant strategic value.

Appendix 2: Grammar Diagnostic Examples

  1. Although one of their older children have been an honor student, Julie and Greg have had trouble with a younger one who has poor grades.
  1. has been an honor student
  2. could have been an honor student
  3. had been an honor student
  4. No error.
  2. Although I will never forget how cold it got when our furnace stopped working in our apartment last winter.
  1. Although, I will never forget, how cold it got when
  2. I will never forget how cold it got when
  3. I will never forget, how cold it got when
  4. No error.
  3. My sister, a history major at MS&T, is coming home for dinner.
  1. sister, is a history major at MS&T, and is
  2. sister—a history major at MS&T, is
  3. sister a history major at MS&T is
  4. No error.

Appendix 3: Original Course Description for Comp II: Writing and Research (prior to 2012)

Goals for Comp II: Writing and Research

Comp II (Writing and Research) has three main goals: (1) to improve students' competency in the techniques of research writing, (2) to improve their abilities to read and analyze various kinds of primary and secondary research materials, and (3) to require them to write research papers of a quality that suffices in school and the workplace.

Students who successfully complete Comp II will be capable of writing research papers which contain an identifiable thesis that is intelligently and coherently developed with sufficient supporting details based on the use of sources.

The students will also be able to demonstrate their abilities to locate, select, and analyze source materials. These materials may include assigned readings and independent research in libraries and elsewhere. All students in Comp II are expected to know how to use the Missouri S&T library as a source of reference information.

All students are expected to demonstrate their abilities to integrate research information into research writing involving the accurate and consistent use of a standard system of documentation which includes notes and bibliographies. Students also learn to avoid plagiarism in their presentation of materials from sources.

All students are expected to perform a variety of writing exercises (a minimum of six). In addition to a major emphasis on research, these written pieces may include book and article reviews, abstracts, summaries, position papers, and bibliographic annotations. All students should write at least one longer paper of at least 2500 words. Students who successfully complete Comp II should write at least 6000 words of graded material.

Appendix 4: Revised (2012) Objectives and Outcomes for Comp II: Writing and Research

Comp II Course Objectives

The Research Essay:

  1. Requires understanding and knowledge of previous work in a field, discipline, or topic, allowing the writer to position his or her argument within the framework of current understanding;
  2. Allows a writer to create new methods, theories, or rhetorical positions, while maintaining a verifiable understanding of the stakes involved in such positioning;
  3. Provides a discipline-specific method of documentation to validate a writer’s work and encourage further study;
  4. Obliges students to evaluate and critique multiple texts and writers’ crafts;
  5. Requires students to compare and synthesize multiple texts to create original documents.

Students will:

  1. Practice “write-to-learn” strategies through in-class writing and/or short position papers, totaling at least 5 pages of written material;
  2. Practice “write to communicate” strategies in at least 15 pages of multiple-source research papers;
  3. Engage in collaborative inquiry through in-class group work or team projects;
  4. Discuss the difference between credible and non-credible sources, both print and online;
  5. Learn and practice proper summarizing, paraphrasing, and quoting rules and techniques;
  6. Learn and practice formal documentation styles, such as MLA, APA, Chicago, and IEEE;
  7. Implement prewriting, drafting, and revising techniques to reinforce the recursive nature of writing.

Comp II Course Outcomes

Students will be able to:

  1. Identify important ideas in a text as well as analyze and evaluate a writer’s craft, purpose, and argument;
  2. Compare and contrast ideas from different sources on a similar topic;
  3. Develop one’s own point of view and argument based on a synthesis of others’ ideas;
  4. Create a distinct and appropriate written voice that demonstrates an awareness of audience and purpose;
  5. Identify primary issues in several professional and academic disciplines;
  6. Understand the similarities and differences in writing styles across disciplines;
  7. Organize ideas and paragraphs into intrinsically coherent essays;
  8. Recognize the difference between credible and questionable sources, both online and otherwise;
  9. Craft meaningful, coherent, and syntactically appropriate sentences;
  10. Demonstrate consistency in responsibly and logically integrating and documenting sources.

Appendix 5: Comp II End of Course Survey

“Before” represents the percentage of respondents who rated their skill in that category before the course began. 5=high skill ability, 1=low skill ability.

“After” represents the percentage of respondents who rated their skill in that category after the course was completed or nearly complete, depending on the day the survey was administered, either before or after the final essay due date.

Therefore, “4-5 Before” indicates how many students rated their skill in a particular category as “high level” or “very high level” before the course began.

Category                                                    4-5 Before   4-5 After   3 Before   3 After   1-2 Before   1-2 After

Compose grammatically correct sentences                         75%          87%        22%        13%         3%          0%
Read academic sources                                           57%          94%        26%         4%        17%          4%
Formulate research into usable dialogues between authors        37%          87%        26%        13%        17%          0%
Compose unified paragraphs that support a single idea           72%          89%        20%        11%         9%          0%
Compose unified paragraphs that support a larger idea           61%          87%        28%        13%        11%          0%
Navigate an established argument within a discipline            41%          85%        39%        15%        20%          0%
Editing, proofreading, and revising strategies                  63%          78%        22%        20%        15%          2%
Focus a research topic                                          43%          87%        46%        13%        11%          0%
Develop a research strategy for acquiring outside sources       46%          80%        33%        20%        21%          0%
Summary, paraphrasing, and quoting                              59%          89%        30%        11%        11%          0%
Argument                                                        61%          87%        26%        13%        13%          0%
Documentation styles (MLA, APA, Chicago)                        50%          78%        35%        22%        15%          0%
Argumentative synthesis                                         35%          78%        24%        20%        42%          2%

Notes

  1. These courses, English 1120 and English 1160, are identified in the article as “Comp I” and “Comp II” respectively, to provide readers with more general references for course types. (Return to text.)
  2. The composition director from 2009 to 2011 was a senior faculty member whose expertise is 20th-century American Studies. (Return to text.)
  3. ABET, formerly the Accreditation Board for Engineering and Technology, is the international accrediting agency for higher education engineering, computing, and technology programs. Under General Criterion 3: Subset B, item g, students in a baccalaureate program must demonstrate “an ability to apply written, oral, and graphical communication in both technical and non-technical environments; and an ability to identify and use appropriate technical literature” (2011). Overall, the ABET outcomes criteria emphasize reasoning, analytical, and cooperative abilities, or “soft skills.” See also Newberg et al. (2008). (Return to text.)
  4. Bergmann and then-English chair Elizabeth Cummins Vonalt struggled for three years, from 1996 to 1999, to secure funding for a university writing center. They succeeded in 2000, when the Beverly J. Moeller Writing Center opened in March of that year (Vonalt 5). (Return to text.)
  5. The tests in part adhere to regulations under the Elementary and Secondary Education Act (ESEA) of 2001, also known as No Child Left Behind. The Missouri tests are called the Missouri Assessment Program (MAP). For more information on the Missouri ETS exams, see http://dese.mo.gov/divimprove/assess/ (Return to text.)
  6. Given the nearly nonexistent demonstration of Level III skills by this test group, and to allow students more time during test administration to answer Level I and Level II questions, Dan eliminated Level III questions from the diagnostic. Again, we were interested in discovering where to begin with reading and writing instruction in Comp I; given the extremely low success rate on the Level III questions, combined with the demographically representative sample of the summer 2010 pilot group, Dan determined that most students beginning Comp I would not yet be ready to work on Level III reading skills. (Return to text.)
  7. The reading diagnostic is more extensively detailed in Dan’s working manuscript, tentatively titled “Reading Assessment for FYC.” (Return to text.)
  8. The correlation between the types of errors a student can correct on a multiple-choice test and the types of errors found in their own writing is not a strong one, though the ability to revise and edit prose does seem more closely tied to the kind of error recognition one finds in a test like ours (see Ferris 2004). At the same time, others have argued that these kinds of tests can still be helpful for instructors. For instance, Irina Arguelles Álvarez (2013), reporting on attempts at the Universidad Politécnica de Madrid to create a local multiple-choice test of English fluency, states that it is “our argument in this paper… that results can be practical for teachers. Results from the test highlight some of the students' general difficulties which might help instructors introduce grammar points accordingly in their curriculum” (16). We agree that a test like ours can show a few areas of general difficulty while helping a department or program establish a more unified approach. At the same time, Dan stopped administering the test because he did not want to create an undue emphasis on grammar in the program. The test was an effort to answer questions while maintaining a sense of perspective and balance about grammar instruction. (Return to text.)
  9. One Book programs are fairly common throughout many FYC programs. At Missouri S&T, the One Book program was instituted in 2005. Every student enrolled in Comp I reads the One Book and attends several events throughout the year which focus on the One Book theme. Each One Book is read on a two-year rotation. (Return to text.)
  10. The authorship of this article, by our program’s WPA and (at the time of this article’s writing) an adjunct faculty member in the department’s composition program, reflects how our department has found ways to value our contingent instructors. (Return to text.)

Works Cited

ABET. Criteria for Accrediting Engineering Technology Programs, 2011-2012. 2011. Web. 18 January 2012.

Ackerman, John M. The Promise of Writing to Learn. Written Communication 10.3 (1993): 334-370. Print.

Álvarez, Irina Arguelles. Large-scale Assessment of Language Proficiency: Theoretical and Pedagogical Reflections on the Use of Multiple-choice Tests. International Journal of English Studies 13.2 (2013): 21-38. Web. 22 February 2014.

Anson, Chris M. Symposium: Closed Systems and Standardized Testing. College Composition and Communication 60.1 (2008): 113-126. Print.

Bamberg, Betty. WAC in the 90's: Changing Contexts and Challenges. Language and Learning Across the Disciplines 4.2 (2000): 5-13. Print.

Bergmann, Linda S. WAC Meets the Ethos of Engineering: Process, Collaboration, and Disciplinary Practices. Language and Learning Across the Disciplines 4.1 (2000): 4-15. Print.

Brannon, Lil, and C. H. Knoblauch. Writing as Learning Through the Curriculum. College English 45.5 (1983): 465-474. Print.

Britton, James. The Development of Writing Abilities. Urbana, IL: NCTE, 1975. Print.

Carter, Michael. Ways of Knowing, Doing, and Writing in the Disciplines. College Composition and Communication 58.3 (2007): 385-418. Print.

Carter, Michael, Miriam Ferzli, and Eric N. Wiebe. Learning to Write by Writing to Learn in the Disciplines. Journal of Business and Technical Communication 21.3 (2007): 278-302. Print.

Cushman, Mike. This Act of Cultural Vandalism. Adult Learning 21.8 (2010): 29-30. Print.

DePalma, Michael-John, and Jeffrey M. Ringer. Toward a Theory of Adaptive Transfer: Expanding Disciplinary Discussions of ‘Transfer’ in Second-Language Writing and Composition Studies. Journal of Second Language Writing 20 (2011): 134-137. Print.

Dively, Ronda. Standardizing English 101 at Southern Illinois University Carbondale: Reflections on the Promise of Improved GTA Preparation and More Effective Writing Instruction. Composition Forum 22 (Summer 2010). Web.

Downs, Douglas, and Elizabeth Wardle. Teaching about Writing, Righting Misconceptions: (Re)envisioning ‘First-Year Composition’ as ‘Introduction to Writing Studies.’ College Composition and Communication 58.4 (2007): 552-584. Print.

Emig, Janet. Writing as a Mode of Learning. College Composition and Communication 28.2 (1977): 122-128. Print.

Ferris, Dana R. The ‘Grammar Correction’ Debate in L2 Writing: Where Are We, and Where Do We Go From Here? (and What Do We Do in the Meantime?). Journal of Second Language Writing 13 (2004): 49-62. Web. 22 February 2014.

Fish, Stanley. Is There a Text In This Class? Harvard UP, 1980. Print.

Gale, Charlotte. Going It Alone: Supporting Writing Across the Curriculum When There Is No WAC Program. Unpublished manuscript presented at the National Writing Across the Curriculum Conference, Bloomington, IN, 2001. Web. 28 March 2011.

Geisler, Cheryl. Literacy and Expertise in the Academy. Language and Learning Across the Disciplines 1.1 (1994): 35-57. Print.

---. The Relationship Between Language and Design in Mechanical Engineering: Some Preliminary Observations. Technical Communication 40.1 (1993): 173-176. Print.

Geisler, Cheryl, Edwin H. Rogers, and Cynthia R. Haller. Disciplining Discourse: Discourse Practice in the Affiliated Professions of Software Engineering Design. Written Communication 15.1 (1998): 3-24. Print.

Graff, Gerald, and Cathy Birkenstein. They Say/I Say: The Moves that Matter in Academic Writing. New York: W. W. Norton, 2006. Print.

Herrington, Anne. Classrooms as Forums for Reasoning and Writing. College Composition and Communication 36.4 (1985): 404-413. Print.

Huot, Brian. Rearticulating Writing Assessment for Teaching and Learning. Utah State UP, 2002. Print.

James, Mark Andrew. An Investigation of Learning Transfer in English-for-General-Academic-Purposes Writing Instruction. Journal of Second Language Writing 19 (2010): 183-206. Print.

Jones, Ed. Self-Placement at a Distance: Challenge and Opportunities. WPA: Writing Program Administration 32.1 (2008): 57-75. Print.

Kelly, Leonard P. Encouraging Faculty to Use Writing as a Tool to Foster Learning in the Disciplines through Writing Across the Curriculum. Annals of the Deaf 140.1 (1995): 16-22. Print.

Kozeracki, Carol A. Issues in Developmental Education. Community College Review 29.4 (2002): 83-100. Print.

Mahala, Daniel. Writing Utopias: Writing Across the Curriculum and the Promise of Reform. College English 53.7 (1991): 773-789. Print.

Marcus, Jon. Revamping Remedial Education: ‘The Scary Cost of College.’ National CrossTalk 8.1.1 (2000): 14-16. Print.

McLeod, Susan H. Writing Across the Curriculum: An Introduction. Writing Across the Curriculum: A Guide to Developing Programs. Ed. Susan H. McLeod and Margot Soven. Newbury Park, CA: Sage, 1992. 1-8. Print.

McLeod, Susan H. and Elaine Maimon. Clearing the Air: WAC Myths and Realities. College English 62.5 (2000): 573-583. Print.

McGuire, Lisa, Kathy Lay, and Jon Peters. Pedagogy of Reflective Writing in Professional Education. Journal of the Scholarship of Teaching and Learning 9.1 (2009): 97-103. Print.

Melzer, Dan. Writing Assignments Across the Curriculum: A National Study of College Writing. College Composition and Communication 61.2 (2009): 240-261. Print.

Missouri University of Science and Technology. Current Enrollment by Degree, 2010. Web. 18 January 2012.

---. Institutional Profile. 2012. Web. 26 January 2012.

Newberg, Beth. It Takes a Whole University to Educate the Whole Engineer: Narratives of Collaboration. ASEE Conference Proceedings (2008). Web. 19 February 2011.

Perin, D., A. Keselman, and M. Monopoli. The Academic Writing of Community College Remedial Students: Text and Learner Variables. Higher Education 45.1 (2003): 19-42. Print.

Perkins, David N., and Gavriel Salomon. Teaching for Transfer. Educational Leadership 46.1 (1988): 22-32. Print.

Smit, David. The End of Composition Studies. Carbondale: Southern Illinois UP, 2004. Print.

Smith, Louise Z. Why English Departments Should ‘House’ Writing Across the Curriculum. College English 50.4 (1988): 390-395. Print.

Soven, Margot. Write-to-learn: A Guide to Writing Across the Curriculum. Cincinnati, OH: South-Western, 1996. Print.

Sullivan, Patrick, and David Nielsen. Is a Writing Sample Necessary for ‘Accurate Placement?’ Journal of Developmental Education 33.2 (2009): 2-11. Print.

Sutton, Brian. Writing in the Disciplines, First-Year Composition, and the Research Paper. Language and Learning Across the Disciplines 2.1 (1997): 46-57. Print.

Vonalt, Elizabeth Cummins. History of Missouri S&T’s (UMR’s) Writing Center. Unpublished manuscript presented at the 10th anniversary celebration of Missouri University of Science and Technology’s writing center. 2010. Print.

