By Doug Ward

Martha Oakley couldn’t ignore the data.

The statistics about student success in her discipline were damning, and the success rates elsewhere were just as troubling:

Martha Oakley, a professor of chemistry and associate vice provost at Indiana University, speaks at Beren Auditorium on the KU campus.
  • Women do worse than men in STEM courses but do better than men in other university courses.
  • Students of color, first-generation students, and low-income students have lower success rates than women.
  • The richer students’ parents are, the higher the students’ GPAs are.

“We have no problem failing students but telling ourselves we are doing a good job,” said Oakley, a professor of chemistry and an associate vice provost at Indiana University, Bloomington. “If we are claiming to be excellent but just recreating historical disadvantages, we aren’t really doing anything.”

Oakley spoke to about 40 faculty and staff members last week at a CTE-sponsored session on using mastery-based grading to make STEM courses more equitable. The session was part of a CTE-led initiative financed by a $529,000 grant from the Howard Hughes Medical Institute, with participants from KU working with faculty members from 13 other universities on reducing equity gaps in undergraduate science education.

The work at KU, IU, and other universities is part of a broader cultural shift toward helping students succeed rather than pushing them out if they don’t do well immediately. Most disciplines have been changing their views on student success, but there has been increasing pressure on STEM fields, which have far lower numbers of women and non-white students and professionals than many other fields.

Oakley said she started digging deeper into university data about five years ago after attending a conference sponsored by the Association of American Universities and getting involved in IU’s Center for Learning Analytics and Student Success. She also began working with a multi-university initiative known as Seismic, which focuses on improving inclusiveness in STEM education.

She and some colleagues started by asking questions about the success rates of women in STEM but then recognized that the problem was far wider.

“And so we looked at each other and said, ‘Yeah, forget the women. Let’s worry about this bigger problem,’ ” Oakley said. “And we didn’t forget the women. We just had confidence that the things that we would do to address the other groups would also help women.”

Using analytics to guide change

In last week’s talk, she used many findings from Seismic and the IU analytics center as she made a case for changing the approach to teaching in STEM fields. For instance, she said, 20% to 50% of students at large universities fail or withdraw from early chemistry courses, with underrepresented minority students at the high end of that range. Students who receive a B or lower in pre-general chemistry courses have less than a 50-50 chance of succeeding in general chemistry.

She also talked about a personal revelation the data brought about. In 2011, she said, she received a university teaching award, and “by every metric, I knew I was doing my job really well.”

The data she saw a few years later suggested otherwise, showing that 37% of underrepresented students and 24% of the other students in her classes dropped or failed in the year she received the award.

“The major part of the story is we’ve all been trained in our disciplines to teach in a certain way that really was never particularly effective,” Oakley said.

We have learned much about how people learn but have continued with ineffective teaching strategies. That needs to change, she said.

“One really simple thing we can do is to say we only give teaching awards to people who actually demonstrate that their students have learned something,” she said.

A mastery-based approach

To address the problem at IU, Oakley has been experimenting with a mastery-based approach to grading.

The way most of us grade exacerbates inequities, Oakley said. It emphasizes superficial elements (basically memorization) and does nothing to reward learning from mistakes, persistence, or teamwork – “all the things that matter in life.” Grades are also poor predictors of how well students will do in jobs or in graduate school, she said.

Mastery-based grading gives students multiple attempts to demonstrate understanding of course material. It is related to another approach, competency-based learning, which also gives students multiple opportunities but focuses on application rather than simple understanding.

Oakley started shifting her class to mastery-based grading by taking broad learning goals and breaking them into smaller components: things like identifying catalysts and intermediates, using reaction order, and explaining why rates change with temperature. She also eliminated a grading curve. That was especially hard, she said, because she had internalized the notion of grade distributions, an approach that punishes failure and provides little opportunity for students to learn from mistakes.

She still uses quizzes and exams, with students taking quizzes the evening before class and then working in groups the next day to create a quiz key. That helps them learn from mistakes, knowing they will see similar questions on a quiz the following week.

At KU, Chris Fischer and Sarah LeGresley Rush have used a similar approach in physics courses, with results suggesting that a mastery approach helps students learn concepts in ways that stay with them in later engineering courses.

Oakley’s initial work has also shown potential, with DFW rates – the share of students who receive a D or F or withdraw – in her class falling to 8% and the average grade rising to a B. That was better than other sections of the class, although students didn’t do as well in later courses. Oakley isn’t discouraged, though. Rather, she said, she continues to learn from the process, just as her students do.

“We’ve really only scraped the tip of the iceberg,” she said.

Building on experience

Oakley’s advocacy for equity in STEM education is informed by experience. When she started at IU in 1996, she said, she was the only woman in a department of 42. That was isolating and frustrating, she said. Through her work in STEM education, she hopes to improve the opportunities for women and students of color.

“We’ve got to be both equitable and striving for excellence,” she said.

Only through experimentation, failure, and persistence can we start breaking down systemic barriers that have persisted for too long, she said.

“The system is broken,” Oakley said. “We are not ready for the students of the future – or even the present.”


Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

By Doug Ward

Two vastly different views of assessment whipsawed many of us over the past few days.

The first, a positive and hopeful view, pulsed through a half-day of sessions at KU’s annual Student Learning Symposium on Friday. The message there was that assessment provides an opportunity to understand student learning. Through curiosity and discovery, it yields valuable information and helps improve classes and curricula.

The second view came in the form of what a colleague accurately described as a “screed” in The New York Times. It argued that assessment turns hapless faculty members into tools of administrators and accreditors who seek vapid data on meaningless “learning outcomes” to justify an educational business model.

As I said, it was hard not to feel whipsawed. So let’s look a bit deeper into those two views and try to figure out what’s going on.

Clearly, the term “assessment” has taken on a lot of baggage over the last two decades. Molly Worthen, the University of North Carolina professor who wrote the Times op-ed article, highlights nearly every piece of that baggage: It is little more than a blunt bureaucratic instrument imposed from outside and on high. It creates phony data. It lacks nuance. It fails to capture the important aspects of education. It is too expensive. It burdens overtaxed instructors. It generates little useful information. It blames instructors for things they have no control over. It is a political, not an educational, tool. It glosses over institutional problems.

Dawn Shew works on a poster during a session at the Student Learning Symposium. With her are, from left, Ben Wolfe, Steve Werninger and Kim Glover.

“Without thoughtful reconsideration, learning assessment will continue to devour a lot of money for meager results,” Worthen writes. “The movement’s focus on quantifying classroom experience makes it easy to shift blame for student failure wholly onto universities, ignoring deeper socio-economic reasons that cause many students to struggle with college-level work. Worse, when the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education.”

So if assessment is such a burden, why bother? Yes, there are political reasons, but assessment seems a reasonable request. If we profess to educate students, shouldn’t we be able to provide evidence of that? After all, we demand that our students provide evidence to back up arguments. We demand that our colleagues provide evidence in their research. So why should teaching and learning be any different?

I’m not saying that the assessment process is perfect. It certainly takes time and money to gather, analyze and present meaningful evidence, especially at the department, school or university level. At the learning symposium, an instructor pointed out that department-level assessment had essentially become an unfunded mandate, and indeed, if imposed from outside, assessment can seem like an albatross. And yet, it is hardly the evil beast that Worthen imagines.

Yes, in some cases assessment is required, and requirements make academics, who are used to considerable autonomy, chafe. But assessment is something we should do for ourselves, as I’ve written before. Think of it as a compass. Through constant monitoring, it provides valuable information about the direction and effectiveness of our classes and curricula. It allows us to make adjustments large and small that lead to better assignments and better learning for our students. It allows us to create a map of our curricula so that we know where individual classes move students on a journey toward a degree. In short, it helps us keep education relevant and ensures that our degrees mean something.

New data about assessment

That view lacks universal acceptance, but it is gaining ground. Figures released at the learning symposium by Josh Potter, the university’s documenting learning specialist, show that 73 percent of degree programs now report assessment data to the university, up from 59 percent in 2014. More importantly, more than half of those programs have discussed curriculum changes based on the assessment data they have gathered. In other words, those programs learned something important from assessment that encouraged them to take action.

That’s one of the most important aspects of assessment. It’s not just data we send into the ether. It’s data that can lead to valuable discussion and valuable understanding. It’s data that helps us make meaningful revisions.

The data that Potter released pointed to challenges, as well. Less than a third of those involved in program assessment say that their colleagues understand the purpose of assessment, that their department recognizes their work in assessment, or that they see a clear connection between assessment and student learning. Part of the problem, I think, is that many instructors want an easy-to-apply, one-size-fits-all approach. There simply is no single perfect method of assessment, as Potter makes clear in the many conversations he has with faculty members and departments. Another problem is that many people see it as a high-stakes game of gotcha, which it isn’t, or shouldn’t be.

“Assessment isn’t a treasure hunt for deficiencies in your department,” Potter said Friday.

Rather, assessment should start with questions from instructors and should include data that helps instructors see their courses in a broader way. Grades often obscure the nuances of learning and understanding. Assessment can make those nuances clearer. For instance, categories in a rubric add up to a grade for an individual student, but aggregate scores for each of those categories allow us to see where a broad swath of students needs work or where we need to improve our instruction, structure assignments better, or revisit topics in a class.

Assessment as a constant process

That’s just one example. Individually, we subconsciously assess our classes day by day and week by week. We look at students’ faces for signs of comprehension. We judge the content of their questions and the sophistication of their arguments. We ask ourselves whether an especially quiet day in class means that students understand course material well or don’t understand at all.

The goal then should be to take the many meaningful observations we make and evidence we gather in our classes and connect them with similar work by our colleagues. By doing that on a department level, we gain a better understanding of curricula. By doing it on a university level, we gain a better understanding of degrees.

I’m not saying that any of this is easy. Someone has to aggregate data from the courses in a curriculum, and someone – actually, many someones – has to analyze that data and share results with colleagues. Universities need to provide the time and resources to make that happen, and they need to reward those who take it on. Assessment can’t live forever as an unfunded mandate. Despite the challenges that assessment brings, though, it needs to be an important part of what we do in higher education. Let me go back to Worthen’s op-ed piece, which despite its screed-like tone contained nuggets of sanity. For instance:

“Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It’s about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.”

I agree wholeheartedly, and I think most of my colleagues would, too. A college education doesn’t happen magically, though. It requires courses to give it shape and curricula to give it meaning. And just as we want our students to embrace curiosity and discovery to guide their journey of intellectual exploration, so must we, their instructors, use curiosity and discovery to guide the constant development and redevelopment of our courses. That isn’t about “quantifying classroom experience,” as Worthen argues. It’s about better understanding who we are and where we’re going.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

Here’s a secret about creating a top-notch assessment plan:

Make sure that it involves cooperation among faculty members, that it integrates assignments into a broader framework of learning, and that it creates avenues for evaluating results and using them to make changes to courses and curricula.

Lorie Vanchena, Nina Vyatkina and Ari Linden of the department of Germanic languages and literatures accepted the Degree-Level Assessment Award from Stuart Day, interim vice provost for academic affairs.

Actually, that’s not really a secret – it’s just good assessment practice – but it was the secret to winning a university assessment award this year. Judges for both the Degree-Level Assessment Award and the Christopher H. Haufler KU Core Innovation Award cited the winners’ ability to cooperate, integrate and follow up on their findings as elements that set them apart from other nominees.

The Department of Germanic Languages and Literatures won this year’s degree-level assessment award, and the Department of Curriculum and Teaching won this year’s Haufler award. The awards were announced at last week’s annual Student Learning Symposium. Each comes with $5,000.

The German department focused its plan on two 300-level courses that serve as a gateway to the major, and on its capstone course. Stuart Day, the interim vice provost for academic affairs, said the University Academic Assessment Committee, which oversees the award, found the plan thorough, manageable and meaningful. It is one of the strongest assessment plans in place at the university, he said. It emphasizes substantive learning outcomes, uses a variety of methods for assessment, and includes a plan for making ongoing improvements.

Reva Friedman accepted the Haufler KU Core Innovation Award from DeAngela Burns-Wallace, vice provost for undergraduate studies.

DeAngela Burns-Wallace, vice provost for undergraduate studies, said the plan created by curriculum and teaching had similar characteristics, using a rich approach that integrates active learning, problem solving and critical thinking. The department created a “strong and intentional feedback loop for course improvement,” she said, and created a clear means for sharing results throughout the department.

So there again is that secret that isn’t really a secret: A strong assessment plan needs to include cooperation among colleagues, integration of assignments and pedagogy, and follow-ups that lead to improvements in the curriculum.

That sounds simple, but it’s not. Reva Friedman, associate professor of curriculum and teaching, and Lorie Vanchena, associate professor of Germanic languages and literatures, both spoke about the deep intellectual work that went into crafting their plans. That work involved many discussions among colleagues and some failed attempts that eventually led to strong, substantive plans.

“Everything we’re doing informs everything else we’re doing,” Friedman said.

She also offered a piece of advice that we all need to hear.

“All of us have our little castles with moats around them, and we love what we do,” she said. “But we need to partner in a different way.”

A new resource for teaching media literacy

In a world of “alternative facts,” we all must work harder to help students learn to find reliable information, challenge questionable information, and move beyond their own biases. To help with that, KU Libraries recently added a media literacy resource page to its website. Instructors and students will find a wealth of useful materials, including definitions, evaluation tools, articles and websites.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

All too often, we pursue teaching as an individual activity. We look at our classes as our classes rather than as part of a continuum of learning. And we are often ill-prepared to help other instructors engage in a course’s evolution when they take it over. We may pass along course material, but rarely do we pass along the background, context, and iterations of a course’s development.

In a recent portfolio for the Center for Teaching Excellence, Holly Storkel and Megan Blossom explain how they did exactly that, demonstrating the benefits of collaboration in improving learning and in keeping the momentum of improvement intact.

Holly Storkel in her office in the Dole Human Development Center

Storkel, a professor and chair of speech-language-hearing, added active learning activities to a 400-level class called Language Science, a required undergraduate class on the basic structure of the English language. The changes were intended to help students improve their critical thinking and their interpretation of research articles. Blossom, who was a graduate teaching assistant for the class, built on that approach when she later took over as an instructor.

Storkel had taught the class many times and had been mulling changes for several years to help students improve their ability to find and work with research.

“I decided they should start reading research articles and get more familiar with that: understand how to find a research article, understand how to get it from the library, have basic skills of how to read a research article,” Storkel said in an interview. “And this class is supposed to be kind of the sophomore-junior level so that then, as they move to the junior-senior level, they would have the skills to find a variety of papers and do the synthesis across the papers, where that sort of thing is the next level up. But I figured, ‘You can’t synthesize information if you didn’t understand what it is to begin with.’ ”

Blossom, who is now an assistant professor at Castleton University in Vermont, taught the same class three semesters later, building on Storkel’s work but making several changes based on the problem areas that she and Storkel identified. She reduced the number of research articles that students read in an attempt to give them more time in class for discussion. She also added pre-class questions intended to help students better prepare for in-class discussions, worked to make those discussions more interactive, and provided structured questions to help students assess articles.

In later discussions, Blossom let students guide the conversations more, having them work in pairs to interpret a particularly challenging article. To gain a better understanding of methods, students also created experimental models like those used in the article. Blossom pooled their results and had students compare the differences in their findings.

In their course portfolio, Storkel and Blossom said the changes improved class discussions about research and helped instructors devote more one-on-one attention to students in class. That was especially helpful for students who struggled with concepts. They also said the process itself provided benefits for students.

The benefits of collaboration

In a recent interview, Storkel said that collaboration was crucial in gaining a shared understanding of what students were learning from the class and where they were struggling. Rather than Storkel simply telling Blossom what to do, the two talked through how they might make the course better. She suggested that others use the same approach to improving classes.

“I think one thing that I would say to that is sort of sharing what you know so that you can get on the same page,” Storkel said. “Look at some student work and say, ‘Here’s how I taught the class. Here’s what the performance on this assignment looked like. They were doing pretty well with this but there were some struggles here, and so that might be something you want to think about if you’re going to keep some of these activities, or even if you’re doing different activities this seems to be a hard concept for them to learn or this process seems to be the part that’s really a stumbling block.’ ”

Storkel suggested that faculty engage in more conversations about the courses they teach and use course portfolios to make shared information more visible.

Portfolios provide a means to look at a class “and say, ‘What skills are people taking away from this? Where am I having a challenge?’ ” Storkel said, adding: “It’s already in a format then that is shareable and that’s more than just, ‘Here are my lecture notes’ or ‘Here are my slides. Here’s the syllabus.’ Here’s what actually happened. I think having rich records that can be easily handed off is good.”

Assessment also provides opportunities for increased sharing of experiences in courses, Storkel said.

“That might be another place where you can have a conversation around teaching, and then it might not even be attached to a particular class but more, ‘Here’s a particular skill. Students aren’t always getting it.’ So as I approach this class where that skill needs to be incorporated or we expect that to happen, now I’ve some idea of what might be challenging or not.”

It all starts with a willingness to share experiences, to put defensiveness aside, and to focus on what’s best for students.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

Chris Brown and Bob Hagen accepted the university degree-level assessment award for work that they and others have done in the environmental studies program. Chris Fischer, right, accepted the Chris Haufler Core Innovation Award on behalf of the physics department. Joining them at the Student Learning Symposium on Saturday were Provost Jeff Vitter, left, and Haufler, second from right. (Photo by Lu Wang)

By Doug Ward

Chris Brown sees assessment as a way to build community.

It brings together faculty members for much-needed discussions about learning. It helps departments explain to colleagues, administrators, and accreditors the value of a degree. And it helps create a better learning environment for students as courses undergo constant evaluation and revision.

“Assessment is not a box to check off on a thing that says you’re done and now you can stop thinking about it,” said Brown, director of the environmental studies program at KU. “It’s about people who are engaged in an ongoing process of learning themselves about how to do their jobs better.”

Brown’s program received the university’s degree-level assessment award at Saturday’s Student Learning Symposium in Lawrence. He was joined by two colleagues, Bob Hagen and Paul Stock, in accepting the award, which comes with $5,000 the program can use in advancing teaching and assessment.

Brown said everyone at KU was “basically taking crash courses” in assessment, which he describes as a series of questions about student learning:

  • How do you document?
  • What do you document?
  • How do you decide what’s valuable to document and what’s not valuable to document?
  • What changes do you need to make based on the evidence you’ve gathered?

Moving from informal to formal

Instructors in all departments have been engaging in informal assessment for years, Brown said.

“It’s every time we talk to each other about one way we think we could have done things better for a particular course, or all the times we’ve looked at our curriculum and decided to make changes,” he said. “The degree-level assessment we’ve been doing has taken that to a formal level.”

Faculty members in environmental studies began focusing on that formal assessment process a few years ago when the program did a self-study as part of an external review, Brown said. That forced them to take a hard look at what students were learning and why they thought the degree was valuable.

“We’re an interdisciplinary major,” Brown said. “Our foundational course should cover all the divisions of the college – the natural sciences, the social sciences and the humanities – as it relates to environmental studies. So there were a bunch of different moments that came together and really piqued people’s interest across our faculty and really say, ‘What do we want with this degree?’”

As it created a formal assessment process, environmental studies looked first at core writing skills, largely because instructors weren’t happy with the final projects students were turning in for upper-level courses. It was clear students were struggling with collecting evidence, structuring arguments, and making those arguments clear in their written work, he said. So faculty members broke larger assignments into smaller segments and gave more feedback to students along the way as they moved toward their final projects. Doing so has led to a dramatic improvement in those projects.

It has also led to opportunities for instructors to share their successes and struggles in classes. They also freely share class material with colleagues. Brown said that openness allowed him to teach an environmental ethics course for the first time with meaningful and successful results.

“I could not have done that if I weren’t in conversations with colleagues,” Brown said. “That’s what this comes down to.”

Brown makes assessment sound easy.

“Once the formal process began, it really helped solidify that we do need to get together at specific faculty meetings as a whole group,” he said.  “When I call those faculty meetings, I don’t have to pull teeth. Everybody comes. It’s not difficult. Perhaps it’s the nature of the major. People seek out contact across these various fields because it’s an interesting and rewarding conversation. Assessment has given us one more reason to come together and talk about what we value.”

Finding colleagues to help

He urges others interested in moving assessment forward to seek out like-minded colleagues, those with whom they are already having discussions about teaching.

“It really doesn’t have to start with any greater number of people than two,” Brown said. “Start there if that’s all you have.”

Talk about goals for students and goals for your major. Determine how you know students and the major are meeting those goals. Then think about how you can gather meaningful information and use that information in ways that lead to greater success. Then carry that conversation forward with other colleagues, including those in other departments. Draw on the many workshops and discussions at CTE.

“That’s hundreds of colleagues from various fields who are eager to talk with you about what you do and to help you and others see that what we’re doing with teaching and learning is intellectual work,” Brown said.

Again, assessment loops back to the idea of building community.


The lighter side of assessment

A short film that helped lead off Saturday’s Student Learning Symposium showed that assessment isn’t always serious business.

By Doug Ward

Let’s peer into the future – the near future, as in next semester. Or maybe the semester after that.

You’ll be teaching the same course that is wrapping up this week, and you’ll want to make some changes to improve student engagement and learning. Maybe some assignments tanked. Maybe you need to rearrange some elements to improve the flow of the course. Maybe you need to give the course a full makeover. By the time the new semester rolls around, though, the previous one will be mostly a blur.

So why not take a few minutes now to reflect on the semester? While you’re at it, why not solicit feedback from students?


To help, here are 20 questions to ask yourself and your students. This isn’t an exhaustive list. Rather, it’s a way to think about what you’ve accomplished (or haven’t) and how you can do better.

Learning and assessment

Use of class time

Assignments

  • What assignments or discussion topics worked best?
  • Which ones flopped? Why?
  • How might you improve the way you use Blackboard or other online resources?

Some questions to ask your students

I also like to spend time talking with students about the class. Sometimes I do that as a full class discussion. Other times, I use small groups. Either way, I ask some general questions about the semester:

  • What worked or didn’t work in helping you learn?
  • What would help next time?
  • How has your perspective changed since the beginning of the class?
  • What will you take away from the course?
  • How did the format of the class affect your learning and your motivation?

Sometimes students don’t have answers right away, so I encourage them to provide feedback in the self-evaluations I ask them to write, or in their course evaluations.

I promised 20 questions, so I’ll end with one more: What questions would you add to the list?


Doug Ward is an associate professor of journalism and the associate director of the Center for Teaching Excellence. You can follow him on Twitter @kuediting.

By Doug Ward

Sylvia Manning offers an insightful characterization of a college education that summarizes the challenges all of us in higher education face today. In a paper for the American Enterprise Institute, she writes:

The reality is that no one can guarantee the results of an educational process, if only because a key element is how the student engages in that process. The output or outcome measures that we have are crude and are likely to remain so for considerable time to come. For example, the percentage of students who graduate from an institution tells us next to nothing about the quality of the education those students received.

Poster that says "Just because kids know how to use Twitter, Snapchat, and Instagram doesn't mean they know how to use technology to enhance their learning."
A good message about students and technology from Sean Junkins, via Twitter: http://bit.ly/1yFYfY5

Manning is right. In a piece for Inside Higher Ed last year, I argued that students and administrators had become too caught up in the idea of education as a product. Far too many students see a diploma, rather than the learning that goes into it, as their primary goal. I tell students that I can’t make them learn. My job is to provide the environment and the guidance to help them learn. They have to decide for themselves whether they want to take advantage of the resources I provide – and to what degree. Only after they do that can learning take place.

Colleges and universities face a similar conundrum. They have come under increasing pressure to provide ways to measure their effectiveness. As Manning says, though, they have struggled to find effective ways to do that. Most focus on graduation rates and point to the jobs their graduates get. Many, like KU, are working to decrease the number of students who drop or fail classes. Those are solid goals, but they still don’t tell us anything about what students have learned.

I’m not convinced that we can truly do that at a university level, at least not in the form of simplistic numeric data that administrators and legislators seem to want. There’s no meaningful way to show that student learning grew X percent this semester or that critical thinking increased at a rate of X over four years, although critics of higher education argue otherwise.

A portfolio system seems the best bet. It provides a way for students to show the work they have done during their time in college and allows them to make their own case for their learning. Portfolios also provide a means for students to demonstrate their potential to employers. By sampling those portfolios, institutions can then get a broad overview of learning. With rubrics, they can create a statistic, but the real proof is still qualitative rather than quantitative.

As an instructor, I see far more value in the nuances of portfolios, projects and assignments than I do in the rigid numerical data of tests and quizzes. Until that thinking gains a wider acceptance, though, we’ll be stuck chasing graduation rates and the like rather than elevating what really matters: learning.

A defense of liberal arts, along with a challenge

Without a backbone of liberal arts, science and technology lack the ability to create true breakthroughs. That’s what Leon Botstein, president of Bard College, argues in The Hechinger Report. Botstein makes a strong case, but he also issues a stinging rebuke to programs that refuse to innovate.

“Students come to college interested in issues and questions, and ready to tackle challenges, not just to “major” in a subject, even in a scientific discipline,” Botstein writes. “…What do we so often find in college? Courses that correspond to narrow faculty interests and ambitions, cast in terms defined by academic discourse, not necessarily curiosity or common sense.”

Bravo!

He argues for fundamental changes in curricula and the organization of faculty, but also in the way courses are taught. The only aspect of education “that is truly threatened by technology is bad teaching, particularly lecturing,” he says. Technology has expanded opportunities for learning but has done nothing to diminish the need for discussion, argument, close reading and speculation. He calls for renewed attention to helping students learn to use language and for using the liberal arts to help students become literate in the sciences.

I’d be remiss if I didn’t bring up Botstein’s comparison of teaching and learning to sex, along with the slightly sensational but certainly eye-grabbing headline that accompanied his article: “Learning is like sex, and other reasons the liberal arts will remain relevant.”

Related: At Liberal Arts Colleges, Debate About Online Courses Is Really About Outsourcing (Chronicle of Higher Education)

Briefly …

College instructors are integrating more discussions and group projects into their teaching as they cut down on a lecture-only approach, The Chronicle of Higher Education reports. … David Gooblar of PedagogyUnbound offers advice on handling the seemingly never-ending task of grading … Stuart Butler of the Brookings Institution suggests ways to “lower crazy high college costs.” They include providing better information to students, revamping accreditation, and allowing new models of education to compete with existing universities.


Doug Ward is an associate professor of journalism and the associate director of the Center for Teaching Excellence. You can follow him on Twitter @kuediting.

By Doug Ward

Assessment often elicits groans from faculty members.

It doesn’t have to if it’s done right. And by right, I mean using it to measure learning that faculty members see as important, and then using those results to revise courses and curricula to improve student learning.

In a white paper for the organization Jobs for the Future, David T. Conley, a professor at the University of Oregon, points out many flaws that have cast suspicion on the value of assessment. He provides a short but fascinating historical review of assessment methods, followed by an excellent argument for a drastic change in the ways students are assessed in K-12. He also raises important issues for higher education. The report is titled A New Era for Educational Assessment.

Conley says that the United States has long favored consistency of measurement in education over the ability to measure the right things. Schools, he says, “have treated literacy and numeracy as a collection of distinct, discrete pieces to be mastered, with little attention to students’ ability to put those pieces together or to apply them to other subject areas or real-world problems.”

One reason standardized testing has recently come under scrutiny, he says, is that new research on the brain has challenged assumptions about fixed intelligence. Rather, he says, researchers have come to an “understanding that intellectual capacities are varied and multi-dimensional and can be developed over time, if the brain is stimulated to do so.” Relatedly, they have found that attitudes toward learning are as important as aptitude.

The Common Core has also put pressure on states to find alternatives to the typical standardized test. The Core’s standards for college readiness include such elements as the ability to research and synthesize information, to develop and evaluate claims, and to explain, justify and critique mathematical reasoning – complex abilities that defy measurement with multiple-choice questions. Schools have been experimenting with other means to better measure sophisticated reasoning, Conley writes. They include these:

  • Performance tasks that require students to parse texts of varying lengths and that may last from 20 minutes to two weeks. (KU’s Center for Educational Testing and Evaluation has been working on one such test.)
  • Project-centered assessment, which gives students complex, open-ended problems to solve.
  • Portfolios, which collect a wide range of student work to demonstrate proficiency across multiple subjects.
  • Collaborative problem-solving, which sometimes involves students working through a series of online challenges with a digital avatar.
  • Metacognitive learning strategies, which Conley describes as ways “learners demonstrate awareness of their own thinking, then monitor and analyze their thinking and decision-making processes” and make adjustments when they are having trouble. Measuring these strategies often relies on self-reporting, something that has opened them to criticism.

Conley sees opportunities for states to combine several forms of assessment to provide a deeper, more nuanced portrait of learning. He calls this a “profile approach” and says it could be used not only by teachers and administrators but also colleges and potential employers. He asks, though, whether colleges and universities are ready to deal with these more complex measurements. Higher education has long relied on GPAs and test scores for deciding admissions, and more nuanced assessments would require more time to evaluate and compare. He says, though, that “the more innovative campuses and systems are already gearing up to make decisions more strategically and to learn how to use something more like a profile of readiness rather than just a cut score for eligibility.”

Conley raises another important issue for higher education. Over the past decade, high schools have focused on making students “college and career ready,” although definitions of those descriptions have been murky. Because of that, educators have “focused on students’ eligibility for college and not their readiness to succeed there.” Conley and others have identified key elements of college readiness, he says. Those include such things as hypothesizing and strategizing, analyzing and evaluating, linking ideas, organizing concepts, setting goals for learning, motivating oneself to learn, and managing time.

The takeaway? Assessment is moving in a more meaningful direction. That’s good news for both students and wary faculty members.


Doug Ward is an associate professor of journalism and the associate director of the Center for Teaching Excellence. You can follow him on Twitter @kuediting.
