By Doug Ward

Two vastly different views of assessment whipsawed many of us over the past few days.

The first, a positive and hopeful view, pulsed through a half-day of sessions at KU’s annual Student Learning Symposium on Friday. The message there was that assessment provides an opportunity to understand student learning. Through curiosity and discovery, it yields valuable information and helps improve classes and curricula.

The second view came in the form of what a colleague accurately described as a “screed” in The New York Times. It argued that assessment turns hapless faculty members into tools of administrators and accreditors who seek vapid data on meaningless “learning outcomes” to justify an educational business model.

As I said, it was hard not to feel whipsawed. So let’s look a bit deeper into those two views and try to figure out what’s going on.

Clearly, the term “assessment” has taken on a lot of baggage over the last two decades. Molly Worthen, the University of North Carolina professor who wrote the Times op-ed article, highlights nearly every piece of that baggage: It is little more than a blunt bureaucratic instrument imposed from outside and from on high. It creates phony data. It lacks nuance. It fails to capture the important aspects of education. It is too expensive. It burdens overtaxed instructors. It generates little useful information. It blames instructors for things they have no control over. It is a political, not an educational, tool. It glosses over institutional problems.

[Photo caption: Dawn Shew works on a poster during a session at the Student Learning Symposium. With her are, from left, Ben Wolfe, Steve Werninger and Kim Glover.]

“Without thoughtful reconsideration, learning assessment will continue to devour a lot of money for meager results,” Worthen writes. “The movement’s focus on quantifying classroom experience makes it easy to shift blame for student failure wholly onto universities, ignoring deeper socio-economic reasons that cause many students to struggle with college-level work. Worse, when the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education.”

So if assessment is such a burden, why bother? Yes, there are political reasons, but assessment seems a reasonable request. If we profess to educate students, shouldn’t we be able to provide evidence of that? After all, we demand that our students provide evidence to back up arguments. We demand that our colleagues provide evidence in their research. So why should teaching and learning be any different?

I’m not saying that the assessment process is perfect. It certainly takes time and money to gather, analyze and present meaningful evidence, especially at the department, school or university level. At the learning symposium, an instructor pointed out that department-level assessment had essentially become an unfunded mandate, and indeed, if imposed from outside, assessment can seem like an albatross. And yet, it is hardly the evil beast that Worthen imagines.

Yes, in some cases assessment is required, and requirements make academics, who are used to considerable autonomy, chafe. But assessment is something we should do for ourselves, as I’ve written before. Think of it as a compass. Through constant monitoring, it provides valuable information about the direction and effectiveness of our classes and curricula. It allows us to make adjustments large and small that lead to better assignments and better learning for our students. It allows us to create a map of our curricula so that we know where individual classes move students on a journey toward a degree. In short, it helps us keep education relevant and ensures that our degrees mean something.

New data about assessment

That view lacks universal acceptance, but it is gaining ground. Figures released at the learning symposium by Josh Potter, the university’s documenting learning specialist, show that 73 percent of degree programs now report assessment data to the university, up from 59 percent in 2014. More importantly, more than half of those programs have discussed curriculum changes based on the assessment data they have gathered. In other words, those programs learned something important from assessment that encouraged them to take action.

That’s one of the most important aspects of assessment. It’s not just data we send into the ether. It’s data that can lead to valuable discussion and valuable understanding. It’s data that helps us make meaningful revisions.

The data that Potter released pointed to challenges, as well. Less than a third of those involved in program assessment say that their colleagues understand the purpose of assessment, that their department recognizes their work in assessment, or that they see a clear connection between assessment and student learning. Part of the problem, I think, is that many instructors want an easy-to-apply, one-size-fits-all approach. There simply is no single perfect method of assessment, as Potter makes clear in the many conversations he has with faculty members and departments. Another problem is that many people see it as a high-stakes game of gotcha, which it isn’t, or shouldn’t be.

“Assessment isn’t a treasure hunt for deficiencies in your department,” Potter said Friday.

Rather, assessment should start with questions from instructors and should include data that helps instructors see their courses in a broader way. Grades often obscure the nuances of learning and understanding. Assessment can make those nuances clearer. For instance, the categories in a rubric add up to a grade for an individual student, but aggregating the scores in each category across all students lets us see where a broad swath of the class needs work and where we need to improve our instruction, structure assignments better, or revisit topics in a class.
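To make that concrete, here is a minimal sketch in Python, with invented rubric categories and scores, of how per-category class averages can surface a shared weakness that individual grade totals obscure:

```python
# A minimal sketch (hypothetical rubric categories and scores) of how
# aggregating rubric data across students can reveal where a class as
# a whole needs work, even when individual grades look respectable.
from statistics import mean

# Each student's rubric: category -> score out of 4 (invented data)
rubric_scores = [
    {"thesis": 4, "evidence": 2, "organization": 3, "mechanics": 4},
    {"thesis": 3, "evidence": 2, "organization": 4, "mechanics": 3},
    {"thesis": 4, "evidence": 1, "organization": 3, "mechanics": 4},
]

# Per-student totals produce the familiar single grade...
totals = [sum(scores.values()) for scores in rubric_scores]
print("Individual totals:", totals)  # [13, 12, 12] out of 16

# ...but the class average in each category surfaces the pattern a
# grade hides: here, "evidence" is where the whole class struggles.
for category in rubric_scores[0]:
    avg = mean(scores[category] for scores in rubric_scores)
    print(f"{category}: class average {avg:.2f} / 4")
```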

Assessment as a constant process

That’s just one example. Individually, we subconsciously assess our classes day by day and week by week. We look at students’ faces for signs of comprehension. We judge the content of their questions and the sophistication of their arguments. We ask ourselves whether an especially quiet day in class means that students understand course material well or don’t understand at all.

The goal then should be to take the many meaningful observations we make and evidence we gather in our classes and connect them with similar work by our colleagues. By doing that on a department level, we gain a better understanding of curricula. By doing it on a university level, we gain a better understanding of degrees.

I’m not saying that any of this is easy. Someone has to aggregate data from the courses in a curriculum, and someone – actually, many someones – has to analyze that data and share results with colleagues. Universities need to provide the time and resources to make that happen, and they need to reward those who take it on. Assessment can’t live forever as an unfunded mandate. Despite the challenges that assessment brings, though, it needs to be an important part of what we do in higher education. Let me go back to Worthen’s op-ed piece, which despite its screed-like tone contained nuggets of sanity. For instance:

“Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It’s about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.”

I agree wholeheartedly, and I think most of my colleagues would, too. A college education doesn’t happen magically, though. It requires courses to give it shape and curricula to give it meaning. And just as we want our students to embrace curiosity and discovery to guide their journey of intellectual exploration, so must we, their instructors, use curiosity and discovery to guide the constant development and redevelopment of our courses. That isn’t about “quantifying classroom experience,” as Worthen argues. It’s about better understanding who we are and where we’re going.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.
