Better tools for assessment are key to ensuring quality
By Susan Headden

Layla Quinones, a bright, vivacious 19-year-old, has an unusually impressive Web page. She has posted an attractive photograph of herself, an engaging biography, and a personal statement about her passions and interests. She’s included a page of modern artwork that she likes, with insightful observations about each piece. She has the requisite headers and links. Yet there are elements noticeably missing from this page. There are no photos of her family, no shots of herself mugging for the camera, no “friending” requests from hopeful correspondents. That’s because this is not Quinones’ Facebook page; it’s her academic portfolio—a cross section of the work she has done, and an assessment of what she has learned, as an education and physics major at LaGuardia Community College in New York City.
Into these digital archives go tests, papers, lab reports and other artifacts, accompanied by teachers’ grades and the student’s own appraisal of what she has learned. What were her aspirations for this course? Were they met? What skills did she gain outside of content knowledge? How does this particular sample show that she has gained such competencies as critical thinking and analytical reasoning? She answers with reflective essays, which are themselves a form of learning. And that learning is an outcome at least as important as the program improvements that these assessments help drive. “[The portfolios] give us a broad focus on who a student is,” says Marisa Klages, an associate professor of English at LaGuardia. “It also allows students to take some accountability onto themselves.”
With its e-portfolios, learning communities and other initiatives, LaGuardia, a public two-year institution in a highly diverse neighborhood in Queens, is in the vanguard of a growing movement in higher education to be more accountable for what students learn—and to think more purposefully about just what that learning should be and how it should be fostered. The reason is simple enough: At a time when four years at a private institution can carry a price tag of more than $200,000, students, parents, and employers are increasingly questioning whether a diploma is worth the price. It is, at least, no guarantee of some essential skills. A government survey conducted in 2005 found that only 31 percent of college graduates were proficient readers, down from 40 percent a decade earlier. Other studies have shown that fewer than half of all college graduates are proficient in math and reading. And an alarming number of students today—as many as 50 percent—never finish college at all.
Numbers such as these have triggered progressively louder calls for change in recent years, most notably from the Commission on the Future of Higher Education, a group of experts convened by former Secretary of Education Margaret Spellings in 2005. The commission, which made its report in 2006, found a troubling lack of accountability in higher education and called for the creation of a public database on learning outcomes, along with a federally managed system to track student progress. Neither is likely to happen soon, but the recommendations did prompt many colleges to find ways to improve outcomes and assess learning—if only to do so before the government did it for them. Other colleges needed no such incentive, having focused on the “value-added” proposition all along. These institutions have already been engaging students, articulating goals, integrating courses, forming learning communities, and, yes, doing assessments.
Defining success
Before an institution can measure an outcome, it must first determine what it wants that outcome to be. Simple as that sounds, surprisingly few colleges have actually taken this step. Most institutions of higher learning do have some sort of mission statement, laying out a basic vision for their graduates as people of character and scholarship. But they often go no further than that. It is the rare institution that articulates its broad educational goals with precision.
Few colleges or universities think strategically about what courses are going to produce those results, how to integrate disciplines, what the school’s core curricula should produce, and in what sequence courses should be taken. At the same time, institutions rarely examine how they are working to achieve these outcomes outside the classroom. How do “high-impact practices”—such as learning communities, undergraduate research and study-abroad programs—align with the curriculum? When colleges do ask these questions, when they actually define their desired learning outcomes, those outcomes are more likely to occur.
It was a series of factors—tensions with state legislators, some bad press, a string of campus crimes—that led to concentrated talks about learning outcomes at the University of Wisconsin-Madison, the flagship of the 178,000-student state university system. A statewide survey of residents, conducted in 2007-2008 by political science professor Kathy Cramer Walsh, added urgency to the task.
On the downside, Walsh’s survey revealed, Wisconsinites thought the Madison campus was too expensive, that its students partied too much and that its faculty was “lazy, too liberal and elitist.” Number One on the respondents’ list of what the university did well was “sports.” More broadly, some faculty members and administrators had a feeling that, while liberal education was valued on campus, it was “not part of the daily conversation.” One faculty member, according to Liberal Education, a publication of the Association of American Colleges and Universities (AAC&U), likened the situation to “a family in which the most deeply held values are not discussed explicitly at the dinner table.” So a group of professors and administrators started talking about student learning in terms of the essential outcomes as defined by the AAC&U initiative known as Liberal Education and America’s Promise (LEAP). (See accompanying story: “What should a college graduate know?”)
In workshops, the members of the Wisconsin group pressed each other to answer questions such as: “Beyond the content of your course, what do you want your students to learn that will stay with them? What do you try to teach your students more generally, such as communication or quantitative reasoning?” And more fundamentally, “How do you make what is implicit in the requirements explicit?” The math teachers talked to the art professors, the science teachers to the literature professors. And they were asked to explain to their neighbor just what it was they were trying to do. According to Aaron Brower, Wisconsin’s vice provost for teaching and learning, faculty members struggled a bit to articulate how their courses could achieve the breadth necessary to elicit certain outcomes. Says Brower: “Intro to calculus could be a ‘breadth’ course; intro to psychology could be a ‘breadth’ course. The question is, if this is the only psychology course your students take, what is it that you want them to know?”
As they continued their work, the educators sought to create a common language. It was, Brower says, an iterative process. The first list of outcomes was “not as elegant,” he says, as the final, thoughtfully crafted list, which grouped the outcomes under four general headings: knowledge of human cultures and the physical and natural world; intellectual and practical skills; personal and social responsibility; and integrative learning. Said one satisfied faculty member: “[The outcomes] allow me to describe how my department’s courses promote learning in areas that are not only highly valued by us but are also seen as important by employers and educators across the nation.”
Yet it was important, too, for the institution to stress how it was unique. UW-Madison, as distinct from its satellites, has enjoyed a long tradition of social and civic engagement. According to the university literature, it has produced more Peace Corps and Teach for America volunteers than almost any other university in the country, more leaders of major corporations, and an extraordinary number of graduates who go on to teach at other research universities. “There is something about going here that leads to being a leader,” says Brower. “And we started talking about what that was.” The group also identified experiences in and out of the classroom, the so-called “high-impact” practices (learning communities, first-year seminars, undergraduate research) that have been shown to increase student retention and engagement.
The final product of all these discussions was a document called “The Wisconsin Experience and the Essential Learning Outcomes”—what its creators call the best expression of the university’s shared aspirations for its students. As the university puts it, Wisconsin seeks to produce graduates “who think beyond the conventional wisdom, who are creative problem solvers, who know how to integrate passion with empirical analysis, who know how to seek out and evaluate and create new knowledge and technologies, who can adapt to new situations and who are engaged citizens of the world.”
The learning outcomes document now serves as a framework for re-accreditation studies and as part of the university’s own strategic plan. Because of it, says Brower, some professors have changed the makeup of class assignments so that they require students to be more analytical and reflective—“so that they are aimed at higher-order thinking instead of being just content-based.”
It’s a good bet that most students will attend four years of college without ever hearing the phrase “essential learning outcomes,” but John Fink, who graduated from Wisconsin in May with a degree in psychology, sociology and integrated liberal studies, is an exception. He was introduced to the concept as a freshman at Chadbourne Residential College, one of seven at the university that focus faculty, staff, and students on a specific content area inside a traditional dorm. These learning communities are an acknowledgement that college students often gain as much from each other as from their academic work. At UW generally, says Fink, “there is a culture of being really intentional about what you learn.”
Another high-impact practice is undergraduate research; studies show that the more students are involved in such work, the more they get out of a course. In Fink’s case, he says, “I started crunching numbers two hours a week with a professor. He would explain, but it was incredibly exhausting; it was like he was speaking Greek. But then in the second semester, things started to click, and I started to integrate the ideas. That one hour formed a sense of intellectual pursuit.”
Fink says he learned much from a psychology class that taught methods of inquiry. The class required two oral presentations and a great deal of writing, he says, making for considerable faculty-student interaction.
“We had to construct our own experiments and synthesize them with our own hypotheses,” he says. He also benefited greatly, he says, from a course called “Students Seeking Educational Equity and Diversity (SEED),” a favorite of students in the learning community. In the class, students of all races, abilities and sexual orientations found “a safe space” to talk about how those characteristics made them different and alike, Fink says.
“It was one of the first times I had encountered dealing with privileges I hold as a white male,” he says. “I am super grateful for this course because, even though I have good intentions, the course made me realize that the impact of what I do or say can be seen as different from what I intend.” As for grades, Fink says, “They are for content, not necessarily for the real knowledge that is happening.”
Any discussion of learning outcomes must acknowledge that institutions have been assessing these outcomes for years. Specifically, they have been measuring them by how long a student sits in a classroom and how many courses he or she takes. And that very system, critics say, is the problem: All of these years, we may have been measuring with the wrong stick. The ruler is the credit hour, and the credit hour, they say, is an idea whose time has passed.
An archaic yardstick
A measure that dates back to the 19th century, the credit hour is generally defined as representing one hour of instruction a week for 14 or 15 weeks. Twelve hours per week in class represents full-time status, and 120 credit hours equals a bachelor’s degree. As explained by researcher Jessica Shedd of the University of Maryland, the credit hour was a means of standardizing achievement at a time when students were admitted to college based on unreliable and subjective exams. But it was teacher retirement, Shedd says, that brought the credit hour wide acceptance and made it the standard: the Carnegie Foundation tied its free faculty pensions to institutions that measured academic work in standard units. Outmoded or not, the credit hour remains the uniform means of measuring student progress and faculty workload.
Absent an alternative, says Washington, D.C., policy expert Jane Wellman, the credit hour remains a helpful way to measure, if not learning, then effort, a universal way to translate the somewhat abstract into quantifiable units. Government agencies use credit hours to count enrollment and allocate funds; the federal government requires all degree-granting institutions to use the measure as a condition of accreditation and relies on it to determine such things as enrollment status and financial aid. Institutions themselves use credit hours as a common currency, a way to translate the value of a course at one college when a student wants to transfer to another.
But many educational experts argue that the credit hour is obsolete at a time when so much instruction takes place outside of a classroom—in self-directed study, in the field, in labs and, increasingly, online. At the same time, classes awarding the same number of credits may require wildly different effort. A student in one three-credit course may be expected to do five hours of outside work a week; a student in another may be expected to do virtually none. Yet both earn the same number of credits. Moreover, students learn at different speeds; some will take three hours to grasp material that another learns in one. As a measure of time, critics say, the credit hour can hold back the quick study while also failing to reward the slower student who nevertheless manages to learn the same material through sweat and determination.
Kevin Carey, policy director for Education Sector, a Washington, D.C.-based think tank, analyzes the problem in a lighter vein in an article he calls “How I Aced College and Why I Regret It.” A graduate of the State University of New York at Binghamton, he says he received a number of Advanced Placement credits from high school, then four college credits for courses that required only three hours of faculty contact a week. A roommate who skipped classes all of October, November and December, he recalls, still managed to earn 16 credits. As for Carey’s own eventual credit tally, he figures that discounting his 88 credits by 25 percent left him with only 66. In other words, he says: “I had an associate degree. Who knew?”
Perhaps a greater problem lies in the order in which credits are accumulated. Wellman notes that colleges are paid for the credits no matter the sequence in which they occur—a situation she says contributes to the “atomization of learning”: learning a little bit at a time without sufficient regard for whether it adds up to a coherent whole. “A bachelor’s degree,” she says, “adds up to 120 credits no matter what.”
Reformers have called for a different sort of system, one that would recognize the rate and amount of learning—what the student has actually accomplished as opposed to how much time he has spent in the classroom. This sort of “competency-based learning” is the model at Alverno College, a small Catholic women’s college in Milwaukee that long ago eliminated grades and credits in favor of portfolios. Simply put, that means students must clearly demonstrate skills, abilities and knowledge in one course or subject before they move on to the next. Students at Alverno don’t take standardized tests or traditional exams. Instead of credits, they get competency-based units. According to Alverno, the system recognizes that every student learns differently and that grading on a curve diminishes student achievement.
As its desired learning outcomes, Alverno lists communication, analysis, problem solving, value-based decision making, social interaction, effective citizenship, social engagement, and developing a global perspective. Assessments are both course-based and integrative. And, as with the e-portfolios at LaGuardia, students are required to reflect on what they have learned and assess the value of that learning accordingly. “We make explicit that students should be able to do something with what they know,” says the college.
Competency-based learning is said to be best suited to small colleges such as Alverno, where a student body of around 2,400 allows for thoughtful (and time-consuming) faculty feedback. But Western Governors University (WGU), based in Salt Lake City, proves that the model can also apply at an institution of 20,000 students who work exclusively online. WGU was founded in 1997 by the governors of 19 states who were looking to expand higher education without incurring huge costs. As with students at Alverno, WGU students collect not credits but competency-based units. Says President Robert W. Mendenhall: “We measure learning, not time.” Students work at their own pace, guided by “mentors,” professionals and students who connect with them by computer or phone. If students grasp the material quickly, they can move ahead quickly. If not, they don’t.
WGU is not the sort of college where professors teach whatever they want and hope that students will benefit. The school doesn’t even develop its own courses or materials. (It leaves that to outside experts.) At WGU the focus is exclusively on teaching and learning. There is no research being conducted here, and there is no such thing as tenure. Faculty and administrative salaries are tied primarily to how well students perform—on measures of academic progress, satisfaction, and retention.
Learning outcomes at WGU are paramount and clearly expressed. Groups of experts, academics, and professionals, known as program councils, decide what skills and knowledge students need to be considered proficient in a discipline. Then they determine the curriculum and materials that will get students there. Students must receive a “B” or better in every course required for their degree, or they cannot graduate. “We are defining competency for a graduate,” says Mendenhall. “We are saying: ‘Someone with this degree, we guarantee they have the knowledge and skills.’ ”
It was sheer frustration that brought Helga Peschka of Albuquerque, N.M., to Western Governors a few years ago. A native of Germany, she had earned her R.N. and had passed the American boards while living in Israel. Fluent in English, she later moved to the United States, practicing first in California and then as a psychiatric nurse in an intensive care unit in Nebraska. She felt a growing urge to do more. “I was unhappy in the Nebraska ICU,” she says. “They made me feel inferior because I didn’t have a bachelor’s degree. It was my dream to get one, but I had no opportunity because I always had to work almost a double shift.”
Peschka had taken some classes at a community college in California, but the courses required time she didn’t have, so she was forced to quit early in the program. Later, in Nebraska, she answered an appeal from Creighton University in Omaha, which encouraged people like her to come back to college for their bachelor’s degrees. But she says Creighton refused to grant her credit for her nursing license or for her 30 years as an R.N. “I blew my stack,” she says, “and I decided to search for something else.”
At Western Governors, Peschka received 53 competency units for her nursing experience, nearly half of the 120 units needed for the bachelor’s degree. That gave her the incentive to move diligently toward her goal. “I am very energetic and very determined when it is something I want,” she says. “I can sit for hours—from 8 a.m. ’til 1 the next morning. I would do a 12-hour shift and then sit at the computer for the other half of the day.” Self-paced learning suits her because, she says, “I like to do research myself. I profit much more from that.”
Peschka has nothing but praise for her mentors, as well as for a math tutor who got her through some rough spots. “I could call him anytime,” she says. “He guided me with unbelievable patience.” She confesses that literature courses “drove [her] nuts,” with assessments that required her to revise papers three times over. But she says her writing greatly improved as a result. When the program was over, Peschka says she had become so used to the pace of learning that she “fell into a black hole.” She had “no papers, no tests, no research to do.” The solution was obvious: she decided to pursue her master’s in education, a program she completed in December.
A couple of time zones away, sitting at her computer in Martinsville, Ind., Shannon Adams is also embracing competency-based learning. A physical therapist, Adams had graduated from Indiana’s Vincennes University, where, she said, “I did that 18- to 21-year-old sorority thing and had a blast.” But after practicing her profession for a few years and having a baby, she realized she wanted to develop her skills in advocacy, particularly in the service of women. Specifically, she wanted a degree in secondary education. But she lacked the necessary prerequisites. At the same time, she says, “I really didn’t want to waste my time in class. I had absolutely no desire to sit in a lecture hall.” She wanted to work at her own pace, a speed that she says varies with the material. “Some things I can fly through,” she says. “For other things I need to slow down.”
Like Peschka, Adams found the right fit in WGU, quickly noting how the learning model differed from other programs. She had done some online coursework through a community college, but says she didn’t get the sort of feedback she has come to expect from her mentors at WGU. “Maybe you’d get a response in two or three days,” she says. “If you had a problem, you were out of luck. You could e-mail them a question, but they weren’t available.” Today, she admits to being in “chemistry hell” at WGU, but says she is guided well by her mentor. “My tutor had e-mailed my student mentor, and she said go to this link and watch it, and then we’ll set up a call to go through it. She is really proactive. I said ‘show me the hardest thing.’ I said I wanted a big, hairy, nasty problem. She was awesome.”
Assessing learning
Once colleges have done the hard work of defining what their learning outcomes should be, how do they determine whether those outcomes have been achieved? This gets into the very squishy matter of assessment—as controversial among practitioners of higher education as it is among teachers in K-12. Faculty members argue that they are constantly assessing students with required papers, exams and midterms, and that any attempt to further “quantify” learning is hugely impractical, given the size and diversity of American higher education. Many also see it as an affront to their academic freedom.
Yet as criticism of ill-prepared (and hugely indebted) graduates continues to grow, there is a demand for better measures of learning and of the value of an institution. Experts, policymakers, employers—and yes, students themselves—are calling for measures that go beyond college rankings of the sort produced by U.S. News & World Report — rankings that measure what goes in rather than what comes out. And, while all the assessment methods have their flaws, it is possible to test what colleges teach. The questions are which method is best and what to do with the results.
Colleges now use two major instruments to measure student learning. One, used by about 400 schools, is the Collegiate Learning Assessment (CLA), a test that attempts to measure critical thinking and analytical reasoning. The CLA is not a multiple-choice test; it asks students to synthesize information and marshal evidence to write persuasive essays. Fans of the CLA say that scores are especially meaningful when they consider a college’s “value added” role—when they are controlled for the SAT or ACT scores of those who take it. Critics contend that the 90-minute test is too loosely related to the actual knowledge that students acquire and that the seniors who take it have little incentive to do well. Yet institutions have made curricular and instructional changes based on the results of the CLA. And in 2012, the CLA scores of more than 100 colleges will be posted on the College Portrait Web site of the Voluntary System of Accountability, through which 300 public colleges and universities report data about learning.
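To make the “value added” idea concrete, here is a minimal sketch in Python. The colleges, scores, and the simple straight-line adjustment are all invented for illustration; the CLA’s actual statistical methodology is more sophisticated than this.

```python
# A rough sketch of "value added": compare each college's average senior
# CLA score with the score predicted from its students' average SAT
# scores, and read the leftover difference (the residual) as value added.
# All data here are hypothetical.

# (college, mean SAT of entering students, mean senior CLA score)
colleges = [
    ("College A", 1050, 1130),
    ("College B", 1200, 1210),
    ("College C", 1350, 1280),
    ("College D", 1100, 1220),
]

xs = [sat for _, sat, _ in colleges]
ys = [cla for _, _, cla in colleges]
n = len(colleges)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Fit the prediction line y = a + b*x by ordinary least squares, by hand.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
a = mean_y - b * mean_x

for name, sat, cla in colleges:
    predicted = a + b * sat
    print(
        f"{name}: CLA {cla}, predicted {predicted:.0f}, "
        f"value added {cla - predicted:+.0f}"
    )
```

On these invented numbers, College D scores well above what its students’ incoming SATs would predict, which is the sense in which a college can look strong on “value added” even without the highest raw scores.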
An even more popular assessment, which some 1,400 colleges have administered at least once, is the National Survey of Student Engagement (NSSE), which gauges students’ involvement in the high-impact practices that have been shown to boost learning: things like student-faculty interaction, collaborative learning, writing, research and study abroad. The questions for students are specifically linked to learning outcomes. How many times a week do they interact with faculty? How many books have they read? Do their courses ask them to memorize facts or analyze ideas? Studies show that the more students are engaged—the more they write, research, and so forth—the more effectively they learn.
NSSE does have its critics. One, Stephen R. Porter, an assistant professor of policy studies at Iowa State University, says that the survey “fails to meet basic standards for reliability and validity.” He doubts the ability of students to recall the sort of information sought by NSSE questions, and he suggests that students have varying interpretations of NSSE’s terms—including “thinking critically.” He also says NSSE is too rooted in the traditional college classroom experience, given that so many students now transfer among institutions, study online, or combine college with work.
Still, with few alternatives available, NSSE has become the most respected of such surveys and is growing in popularity. More and more colleges are using it as a tool to achieve the ultimate goal of assessment: revising programs and improving the learning experience. For instance, James Madison University (JMU) found through NSSE results that freshman involvement in service learning—a high-impact practice—was lacking. JMU took this finding as a cue to improve freshman engagement overall. Cal State Fresno, according to a NSSE report, found that faculty interaction with students was lower than expected. As a result, it established a mentoring institute that provides professional development. At Washington State University, students reported on NSSE that they thought the college was deficient when it came to collaborative learning and educationally enriching experiences. So the university increased its number of learning communities.
The power of portfolios
As with surveys and tests, student portfolios can serve as powerful catalysts for improving learning outcomes. That’s how they have been used at LaGuardia Community College. Paul Arcario, the dean of academic affairs at LaGuardia, explains why portfolios were adopted. “This is such a vibrant place. It’s so innovative. But we felt we weren’t doing a good job documenting the growth we knew was happening, and that growth is not always captured by standard measurements. It wasn’t [a matter of] ‘let’s make sure we’re accountable.’ …We said, ‘Let’s really see what students are producing for us.’ ”
At LaGuardia, portfolios serve as tools to judge both student learning and program and institutional effectiveness. As students progress through their courses, they deposit various “artifacts”—papers, tests and so forth—into an electronic vault. Some artifacts are required; others the students choose to submit. The chosen items form their public portfolio—their academic Facebook. Faculty members, who developed the process themselves, read hundreds of papers as a group. “We said it’s the faculty’s job to say what students should be able to do,” says Arcario. “They develop the rubric.”
To determine a program’s effectiveness, the faculty doesn’t look at all portfolios; just a sampling will tell them what they need to know. “Every five years a major comes up for review,” Arcario says. “Let’s say it’s business administration. We don’t need to look at all 600 majors. We’d go to the portfolios and pull, say, 100 of them. We rate them on a 1 to 6 scale. If something moves from 2 to 4, that’s progress.”
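As a rough illustration of that sampling-and-rating arithmetic, here is a short Python sketch. The portfolio counts follow Arcario’s example, but the rubric scores are simulated stand-ins; in a real review, trained faculty raters score the sampled portfolios against a shared rubric.

```python
import random

# A toy version of the program review Arcario describes: pull a random
# sample of portfolios, rate each on the 1-to-6 rubric, and compare the
# averages across review cycles. The ratings below are simulated; in
# practice, faculty raters supply them.

random.seed(42)  # make the illustration reproducible

def mean_rubric_score(portfolios, sample_size, rate):
    """Rate a random sample of portfolios and return the mean score."""
    sample = random.sample(portfolios, sample_size)
    return sum(rate(p) for p in sample) / sample_size

# 600 majors in the program, 100 of them sampled, per Arcario's example.
portfolios = list(range(600))

# Hypothetical ratings for two successive five-year reviews.
earlier = mean_rubric_score(portfolios, 100, lambda p: random.randint(1, 4))
later = mean_rubric_score(portfolios, 100, lambda p: random.randint(3, 6))

print(f"Earlier review, mean rubric score: {earlier:.1f}")
print(f"Later review, mean rubric score:   {later:.1f}")
print(f"Change: {later - earlier:+.1f} (upward movement reads as progress)")
```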
But program review is just one purpose for the portfolios. On an individual level, Arcario says, instructors use them “to help students pull together the threads of their education and make it somewhat coherent.” It was interesting, Arcario said, that when the college asked students about the benefits of the portfolio, the answer they gave most frequently was that they could show their families what they were learning in college. “We weren’t expecting that at all,” he said, “until we realized that the vast majority are first-generation college students and immigrants, so their families want to know what a college education is like in America.” Arcario recalls the story of a Korean student who was studying art even though her family had wanted her to study accounting. “She was a fabulous artist, and one thing that’s great about the portfolio is that you can put all the visuals on it, and she got the nerve to tell her family, and they were blown away. She ‘came out,’ so to speak.”
Mikhail Valentin, a fine arts major at LaGuardia, calls the portfolios “a personal time capsule.” He says, “You are chronicling your growth and creating an identity for yourself.” At the same time, portfolios have given many students an opportunity to learn by teaching, either as tutors, paid consultants or technical advisers. Valentin, who will graduate next fall, got a fresh start at LaGuardia after some personal problems and less-than-impressive high school grades. He is now a consultant who helps fellow students craft their portfolios. “I finally understood what learning was, how powerful the act of teaching is,” Valentin says. “I read the material, and I could see how it related. I learned so much from my students.” Fellow fine arts major Christobel Torres says the portfolios also serve as a sort of personal career counselor. “They force you to evaluate yourself and what you have to offer.”
Assessing the portfolios helps the faculty at LaGuardia focus on core competencies and assists them in seeing where certain outcomes fit into the curriculum. Say the English department resists including quantitative reasoning in its stated outcomes, but the college says it must. “So we ask them,” Arcario says, “‘When students do research [for English courses], is there anything that might have demographic information, charts, tables, statistics?’ Well, there you are.” Nursing provides another example. Perhaps the program instructors don’t see how their discipline can help boost competency in writing or research skills. But they can, says Arcario, by assessing the quality of patient notes. As with the faculty at Wisconsin who were looking at ways to broaden the lessons from courses with narrow content, Arcario says the faculty at LaGuardia worked hard on “where to put these competencies into the curriculum.”
There are drawbacks to portfolios, of course. By their very nature they are not standardized, so it is not always possible to compare them across institutions. They are subjective, and they can vary greatly in content. They may be produced with assistance. And those who judge them differ in their opinions. (This is why portfolio raters work hard to establish norms before they start grading.) But arguably the greatest problem with portfolio assessments is that they take far more time and resources than many institutions can devote to the task.
One way around that problem is to limit the portfolio to a single representative skill. And one outcome in particular—the ability to communicate effectively—can encompass all the rest. If a student can write clearly and forcefully, the reasoning goes, the chances are that he can synthesize ideas, link concepts, and focus sharply, as well. It also suggests that he has accumulated some content knowledge along the way. “Writing is not simply a way for students to demonstrate what they know,” says the National Commission on Writing in America’s Schools and Colleges. “It is a way to help them understand what they know. At its best, writing is learning.”
Accordingly, a number of colleges are measuring learning outcomes based on this one essential skill. They have made demonstrated writing proficiency a requirement of graduation and instituted programs to make sure that students meet it—development programs for faculty and tutoring services for students. Colleges also have created rubrics for measuring success and improved programs based on what they have learned.
At Millsaps College, a private liberal arts institution in Jackson, Miss., with just 1,013 students, educators believe that writing is essential to every academic pursuit and every career. Writing is not the responsibility of any one faculty member or department; it is the shared enterprise of all. That means students are as likely to learn about writing from a physics professor as from an English teacher.
A crucial outcome
Writing is vital to Millsaps’ core curriculum, which seeks to foster reasoning, communication, historical consciousness and social and cultural awareness. Students start with a foundation seminar on a particular topic that carries a heavy emphasis on writing, including analysis, organization, documentation and revision. Later core courses emphasize revision as an important means of clarifying thinking, and they require at least three pages of writing a week, all revised and assessed. As with the e-portfolios at LaGuardia, a final requirement asks students to write an essay that reflects on their own learning. At the same time, writing center director Anita DeRouen helps faculty members understand the importance of explaining why they are assigning a particular piece of writing and exactly what they want students to learn from it.
From the student perspective, tutors have proven a vital resource for writing instruction. At Carleton College in Northfield, Minn., another institution with a writing proficiency requirement, a writing center employs tutors in every major and is open six days a week until midnight. A similar center at Millsaps has a bank of computers and tutors on hand to serve appointments and drop-ins. The learning goes both ways: The act of teaching helps students be better learners. “I have learned so much about writing just by being a tutor,” says Michael Mohr, a senior headed for a teaching job. “The more I read, the more I learn about my own processes.”
Indeed, as Carol Rutz, director of the writing program at Carleton, observes, written communication connects several important dimensions of learning, in every discipline. A writing assessment alone, she says, “provides a rich set of data with which to evaluate what students have learned and how faculty can improve their classroom practice.” In other words, she asks: “What better, more economical place can we find to evaluate what college graduates know and are able to do?”
What now?
All of these institutions have proven that when outcomes are clearly expressed, instruction becomes more deliberate, curriculum gains meaning, and learning improves. They know that learning improves because they have found ways, however imperfect, to measure it. To be sure, assessment is time-consuming and expensive. And one size most certainly does not fit all. It is this worry about standardization that has kept other colleges and universities from adopting learning assessments. But while the institutions described here are diverse—a community college, a large research university, small liberal arts schools and an online institution—what they all have in common is that they have embraced outcomes and assessment from the bottom up instead of the top down. In each case, it was the faculty—not the administration or an outside accrediting body—that wrote the outcomes and helped develop the assessments.
These colleges are using assessments to improve programs. But too many colleges, experts say, are using them simply to check an accountability box. According to Stanley O. Ikenberry of the National Institute for Learning Outcomes Assessment (NILOA), almost 80 percent of the 1,500 colleges that responded to a recent survey said they had articulated a common set of learning outcomes. But most said they did so to fulfill accreditation requirements. (They also reported considerable opposition from faculty.)
The study prompted NILOA to pose a number of questions: Should accreditors require more evidence of learning gains? Can the tensions between accountability and improvement be resolved? How can colleges find the resources to do meaningful assessments? How can faculty be brought on board? As the pressure increases on colleges to prove that they are handing out high-quality degrees and credentials, these questions become increasingly urgent.