By Doug Ward

The intellectual work that goes into teaching often goes unnoticed.

All too often, departments rely on simple lists of classes and scores from student surveys of teaching to “evaluate” instructors. I put “evaluate” in quotation marks because those list-heavy reviews look only at surface-level numerical information and ignore the real work that goes into making teaching effective, engaging, and meaningful.


An annual evaluation is a great time for instructors to document the substantial intellectual work of teaching and for evaluators to put that work front and center in the review process. That approach takes a slightly different form than many instructors are used to, and at a CTE workshop last week we helped draw out some of the things that might be documented in an annual review packet or in other, more substantial reviews.

Participants shared a wide range of activities that showed just how creative and devoted many KU instructors are. The list might spur ideas for others putting together materials for annual review:

Engagement and learning

Nearly all the instructors at the workshop reported modifying classes based on their observations, reviews of research, and student feedback from previous semesters. These included:

  • Moving away from quizzes and exams, and relying more on low-stakes assignments, including blog posts, minute papers, and other short pieces of writing, to gauge student understanding.
  • Moving material online and using class time to focus on interaction, discussion, group work, peer review, and other activities that are difficult for students to do on their own.
  • Using reflection journals to help students gain a better understanding of their own learning and develop their metacognitive skills.
  • Providing new ways for students to participate in class. This included adding a digital tool that allows students to make comments on slides and add to conversations the way they do through online chats.
  • Using universal design to provide choices to students for how they learn material and demonstrate their understanding.
  • Scaffolding assignments. Many instructors took a critical look at how students approached assignments, identified the necessary skills in more detail, and helped students build those skills layer by layer through scaffolded work.
  • Bringing professionals into class to broaden student perspectives on the discipline and to reinforce the importance of course content.
  • Creating online courses. In some cases, this involved creating courses from scratch. In others, it meant adapting an in-person course to an online environment.
  • Rethinking course content. Sarah Browne in math remade course videos with a lightboard. That allowed students to see her as she worked problems, adding an extra bit of humanity to the process. She also used Kaltura to embed quizzes in the videos. Those quizzes helped students gauge their understanding of the material, but they also increased the time students spent with the videos and reduced the number of students who stopped partway through.

Overcoming challenges

  • Larger class sizes. A few instructors talked about adapting courses to accommodate larger enrollments. More instructors are being asked to do that each semester as departments reduce class sections and try to generate more credit hours with existing classes.
  • Student engagement. Faculty in nearly all departments have struggled with student engagement during the pandemic. Some students who had been mostly online have struggled to re-engage with courses and classmates in person. As a result, instructors have taken a variety of steps to interact more with students and to help them engage with their peers in class.
  • Emphasis on community. Instructors brought more collaborative work and discussion into their courses to help create community among students and to push them to go deeper into course material. This included efforts to create a safe and inclusive learning environment to bolster student confidence and help students succeed.
  • Frequent check-ins. Instructors reported increased use of check-ins and other forms of feedback to gauge students’ mood and motivation. This included gathering feedback at midterm and at other points in a class so they could adjust everything from class format to class discussions and use of class time. At least one instructor created an exit survey to gather feedback. David Mai of film and media studies used an emoji check-in each day last year. Students clicked on an emoji to indicate how they were feeling that day, and Mai adapted class activities depending on the mood.

Adapting and creating courses

The university has shifted all courses to Canvas over the last two years, a move that required substantial, time-consuming work from instructors. That work included:

  • Time involved in moving, reorganizing, and adapting materials to the new learning management system.
  • Training needed through Information Technology, the Center for Online and Distance Learning, and the Center for Teaching Excellence to learn how to use Canvas effectively and to integrate it into courses in ways that help students.

Ji-Yeon Lee from East Asian languages and culture went even further, creating and sharing materials that made it easier for colleagues to adapt their classes to Canvas and to use Canvas to make courses more engaging.

Resources on documenting teaching

CTE has several resources available to help instructors document their teaching. These include:

  • A page on representing and reviewing teaching has additional ideas on how to document teaching and student learning, and how to present that material for review. One section of the page includes resources on how to use results from the new student survey of teaching.
  • A page for the Benchmarks for Teaching Effectiveness project has numerous resources related to a framework developed for evaluating teaching. These include a rubric with criteria for the seven dimensions of effective teaching that Benchmarks is based on; an evidence matrix that points to potential sources for documenting aspects of teaching; and a guide on representing evidence of student learning.

Documenting teaching can sometimes seem daunting, but it becomes easier the more you work on it and learn what materials to set aside during a semester.

Just keep in mind: Little of the intellectual work that goes into your teaching will be visible unless you make it visible. That makes some instructors uncomfortable, but it’s important to remember that you are your own best advocate. Documenting your work allows you to do that with evidence, not just low-level statistics.


Doug Ward is an associate director at the Center for Teaching Excellence and an associate professor of journalism and mass communications.

By Doug Ward

A recent meeting at the National Academies of Sciences, Engineering and Medicine achieved little consensus on how best to evaluate teaching, but it certainly showed a widespread desire for a fairer system that better reflects the many components of excellent teaching.

The National Academies co-sponsored the meeting earlier this month in Washington with the Association of American Universities and TEval, a project associated with the Center for Teaching Excellence at KU. The meeting brought together leaders from universities around the country to discuss ways to provide a richer evaluation of faculty teaching and, ultimately, expand the use of practices that have been shown to improve student learning.

A CTE rubric for evaluating teaching helps instructors and departments focus on a series of questions.

My colleague Andrea Greenhoot, professor of psychology and director of CTE, represented KU at the meeting. Members of the TEval team from the University of Massachusetts, Amherst, the University of Colorado, Boulder, and Michigan State University also attended. The TEval project involves more than 60 faculty members at KU, CU and UMass. It received a five-year, $2.8 million grant from the National Science Foundation last year to explore ways to create a fairer, more nuanced approach to evaluating teaching.

The TEval project, which is known as Benchmarks at KU, has helped put KU at the forefront of the discussion about evaluating teaching and adopting more effective pedagogical strategies. Nine departments have been working to adapt a rubric developed at CTE, identify appropriate forms of evidence, and rethink the way they evaluate teaching. Similar conversations are taking place among faculty at CU and UMass. One goal of the project is to provide a framework that other universities can follow.

Universities have long relied on student surveys as the primary – and often sole – means of evaluating teaching. Those surveys can gather important feedback from students, but they provide only one perspective on a complex process that students know little about. The results of the surveys have also come under increasing scrutiny for biases against some instructors and types of classes.

Challenges and questions

The process of creating a better system still faces many challenges, as speakers at the meeting in Washington made clear. Emily Miller, associate vice president for policy at the AAU, said that many universities were having a difficult time integrating a new approach to evaluating teaching into a rewards system that favors research and that often counts teaching-associated work as service.

“We need to think about how we recognize the value of teaching,” Miller said.

She also summarized questions that had arisen during discussions at the meeting:

  • What is good teaching?
  • What elements of teaching do we want to evaluate?
  • Do we want a process that helps instructors improve or one that simply evaluates them annually?
  • What are the useful and appropriate measures?
  • What does it mean to talk about parallels between teaching and research?
  • How can we situate the conversation about the evaluation of teaching in the larger context of institutional change and university missions?

Noah Finkelstein, a University of Colorado physics professor who is a principal investigator on the TEval grant, brought up additional questions:

  • How do we frame teaching excellence within the context of diversity, equity and inclusion?
  • How can we create stronger communities around teaching?
  • How do we balance institutional and individual needs?
  • How do we reward institutions that improve teaching?
  • When will AAU membership be contingent on teaching excellence?

Moving the process forward

Instructors at KU, CU and UMass are already grappling with many of the questions that Miller and Finkelstein raised.

At KU, a group will meet on Friday to talk about the work they have done in such areas as identifying the elements of good teaching; gathering evidence in support of high-quality teaching practices; developing new approaches to peer evaluation for faculty and graduate teaching assistants; providing guidance on instructor reflection and assessment; and making the evaluation process more inclusive. There have also been discussions among administrators and Faculty Senate on ways to integrate a new approach into the KU rewards structure. Considerable work remains, but a shift has been set in motion.

KU faculty and staff share insights on teaching

Several KU faculty members have recently published articles about their inquiry into teaching. Their articles are well worth the time to read. Among them:

Briefly …

  • Writing in EdSurge, Bryan Alexander says that “video is now covering a lot of ground, from faculty-generated instructional content to student-generated works, videoconferencing and the possibility of automated videobots.” The headline goes beyond anything in the article, but it nonetheless raises an interesting thought: “Video assignments are the new term paper.”
  • The Society for Human Resource Management writes about a trend it calls “microinternships,” which mirror the work of freelancers. Microinternships involve projects of 5 to 20 hours that the educational technology company Parker Dewey posts on a website. Students bid on the work, and Parker Dewey takes a percentage of the compensation. The company says it is working with 150 colleges and universities on the microinternship project.
  • Writing in The Chronicle of Higher Education, Aaron Hanlan argues that by relying on a growing number of contingent, “disposable” instructors, “institutions of higher education today operate as if they have no future.” In following this approach, tenured faculty and administrators “are guaranteeing the obsolescence of their own institutions and the eventual erasure of their own careers and legacies,” he argues.
  • EAB writes about the importance of reaching out to students personally, saying that email with a personal, supportive tone can be like a lifeline to struggling students.

Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

If you plan to use student surveys of teaching for feedback on your classes this semester, consider this: Only about 50% of students fill out the surveys online.

Yes, 50%.

There are several ways that instructors can increase that response rate, though. None are particularly difficult, but they do require you to think about the surveys in slightly different ways. I’ll get to those in a moment.

The low response rate for online student surveys of teaching is not just a problem at KU. Nearly every university that has moved student surveys online has faced the same challenge.

That shouldn’t be surprising. When surveys are conducted on paper, instructors (or proxies) distribute them in class and students have 10 or 15 minutes to fill them out. With the online surveys, students usually fill them out on their own time – or simply ignore them.


I have no interest in returning to paper surveys, which are cumbersome, wasteful and time-consuming. For example, Ally Smith, an administrative assistant in environmental studies, geology, geography, and atmospheric sciences, estimates that staff time needed to prepare data and distribute results for those four disciplines has declined by 47.5 hours a semester since the surveys were moved online. Staff members now spend about 4 hours gathering and distributing the online data.

That’s an enormous time savings. The online surveys also save reams of paper and allow departments to eliminate the cost of scanning the surveys. That cost is about 8 cents a page. The online system also protects student and faculty privacy. Paper surveys are generally handled by several people, and students in large classes sometimes leave completed surveys in or near the classroom. (I once found a completed survey sitting on a trash can outside a lecture hall.)

So there are solid reasons to move to online surveys. The question is how to improve student responsiveness.

I recently led a university committee that looked into that. Others on the committee were Chris Elles, Heidi Hallman, Ravi Shanmugam, Holly Storkel and Ketty Wong. We found no magic solution, but we did find that many instructors were able to get 80% to 100% of their students to participate in the surveys. Here are four common approaches they use:

Have students complete surveys in class

Completing the surveys outside class was necessary in the first three years of online surveys at KU because students had to use a laptop or desktop computer. A system the university adopted two years ago allows them to use smartphones, tablets or computers. A vast majority of students have smartphones, so it would be easy for them to take the surveys in class. Instructors would need to give students advance notice to bring a device on survey day and find ways to make sure everyone has one. Those who were absent or unable to complete the surveys in class could still do so afterward.

Remind students about the surveys several times

Notices about the online surveys are sent by the Center for Online and Distance Learning, an entity that most students don’t know and never interact with otherwise. Instructors who have had consistently high response rates send out multiple messages to students and speak about the surveys in class. They explain that student feedback is important for improving courses and that a higher response rate provides a broader understanding of students’ experiences in a class.

To some extent, response rates indicate the degree to which students feel a part of a class, and rates are generally higher in smaller classes. Even in classes where students feel engaged, though, a single reminder from an instructor isn’t enough. Rather, instructors should explain why the feedback from the surveys is important and how it is used to improve future classes. An appeal that explains the importance and offers specific examples of how the instructor has used the feedback is more likely to get students to act than one that just reminds them to fill out the surveys. Sending several reminders is even better.

Give extra credit for completing surveys

Instructors in large classes have found this an especially effective means of increasing student participation. Giving students as little as 1 point extra credit (amounting to a fraction of 1% of an overall grade) is enough to spur students to action, although offering a bump of 1% or more is even more effective. In some cases, instructors have gamified the process. The higher the response rate, the more extra credit everyone in the class receives. I’m generally not a fan of extra credit, but instructors who have used this method have been able to get more than 90% of their students to complete the online surveys of teaching.

Add midterm surveys

A midterm survey helps instructors identify problems or frustrations in a class and make changes during the semester, signaling to students that their opinions and experiences matter. This in turn helps motivate students to complete end-of-semester surveys. Many instructors already administer midterm surveys either electronically (via Blackboard or other online tools) or on paper, asking students such things as what is going well in the class, what needs to change, and where they are struggling. This approach is backed by research from ALPS Insights, a training-evaluation organization, which has found that students are more likely to complete later course surveys if instructors acknowledge and act on earlier feedback. It’s too late to adopt that approach this semester, but it is worth trying in future semesters.

Remember the limitations

Student surveys of teaching can provide valuable feedback that helps instructors make adjustments in future semesters. Instructors we spoke to, though, overwhelmingly said that student comments were the most valuable component of the surveys. Those comments point to specific areas where students have concerns or where a course is working well.

Unfortunately, surveys of teaching have been grossly misused as an objective measure of an instructor’s effectiveness. A growing body of research has found that the surveys do not evaluate the quality of instruction in a class and do not correlate with student learning. They are best used as one component of a much larger array of evidence. The College of Liberal Arts and Sciences has developed a broader framework, and CTE has created an approach we call Benchmarks for Teaching Effectiveness. It uses a rubric to help shape a more thorough, fairer, and more nuanced evaluation process.

Universities across the country are rethinking their approach to evaluating teaching, and the work of CTE and the College is at the forefront of that. Even those broader approaches require input from students, though. So as you move into your final classes, remind students of the importance of their participation in the process.

(What have you found effective? If you have found other ways of increasing student participation in end-of-semester teaching surveys, let us know so we can share your ideas with colleagues.)

The ‘right’ way to take notes isn’t clear-cut


A new study on note-taking muddies what many instructors saw as a clear advantage of pen and paper.

The study replicates a 2014 study that has been used as evidence for banning laptop computers in class and having students take notes by hand. The new study found little difference except for what it called a “small (insignificant)” advantage in recall of factual information for those taking handwritten notes.

Daniel Oppenheimer, a Carnegie Mellon professor who is a co-author of the new paper, told The Chronicle of Higher Education:

“The right way to look at these findings, both the original findings and these new findings, is not that longhand is better than laptops for note-taking, but rather that longhand note-taking is different from laptop note-taking.”

A former KU dean worries about perceptions of elitism

Kim Wilcox, a former KU dean of liberal arts and sciences, argues in EdSource that the recent college admissions scandal leaves the inaccurate impression that only elite colleges matter and that the admissions process can’t be trusted.

“Those elite universities do not represent the broad reality in America,” writes Wilcox, who is the chancellor of the University of California, Riverside. He was KU’s dean of liberal arts and sciences from 2002 to 2005.

He speaks from experience. UC Riverside has been a national leader in increasing graduation rates, especially among low-income students and those from underrepresented minority groups. Wilcox himself was a first-generation college student.

He says that the scandal stemmed in part from “reliance on a set of outdated measures of collegiate quality; measures that focus on institutional wealth and student rejection rates as indicators of educational excellence.”

Wilcox was chair of speech-language-hearing at KU for 10 years and was president and CEO of the Kansas Board of Regents from 1999 to 2002.

Join our Celebration of Teaching

CTE’s annual Celebration of Teaching will take place Friday at 3 p.m. at the Beren Petroleum Center in Slawson Hall. More than 50 posters will be on display from instructors who have transformed their courses through the Curriculum Innovation Program, C21, Diversity Scholars, and Best Practices Institute. It’s a great chance to pick up teaching tips from colleagues and to learn more about the great work being done across campus.


Doug Ward is the acting director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

CHARLOTTE, N.C. – Faculty members seem ready for a more substantive approach to evaluating teaching, but …

It’s that “but” that about 30 faculty members from four research universities focused on at a mini-conference here this week. All are part of a project called TEval, which is working to develop a richer model of teaching evaluation by helping departments change their teaching culture. The project, funded by a $2.8 million National Science Foundation grant, involves faculty members from KU, Colorado, Massachusetts, and Michigan State.

Rob Ward, Tracey LaPierre and Chris Fischer discuss strategies during a meeting of TEval, an NSF-grant-funded project for changing the way teaching is evaluated. They joined colleagues from three other universities for meetings this week in Charlotte, N.C.

The evaluation of teaching has long centered on student surveys, which are fraught with biases and emphasize the performance aspects of teaching over student learning. Their ease of administration and ability to produce a number that can be compared to a department average have made them popular with university administrators and instructors alike. Those numbers certainly offer a tidy package that is delivered semester to semester with little or no time required of the instructor. And though the student voice needs to be a part of the evaluation process, only 50 to 60 percent of KU students complete the surveys. More importantly, the surveys fail to capture the intellectual work and complexity involved in high-quality teaching, something that more and more universities have begun to recognize.

The TEval project is working with partner departments to revamp that entrenched process. Doing so, though, requires additional time, work and thought. It requires instructors to document the important elements of their teaching – elements that have often been taken for granted – to reflect on that work in meaningful ways, and to produce a plan for improvement. It requires evaluation committees to invest time in learning about instructors, courses and curricula, and to work through portfolios rather than reducing teaching to a single number and a single class visit, a process that tends to clump everyone together into a meaningless above-average heap.

That’s where the “but …” comes into play. Teaching has long been a second-class citizen in the rewards system of research universities, leading many instructors and administrators to chafe at the idea of spending more time documenting and evaluating teaching. As with so many aspects of university life, though, real change can come about only if we are willing to put in the time and effort to make it happen.

None of this is easy. At all the campuses involved in the TEval project, though, instructors and department leaders have agreed to make the time. The goal is to refine the evaluation process, share trials and experiences, create a palette of best practices, and find pathways that others can follow.

At the meeting here in Charlotte, participants talked about the many challenges that lie ahead:

  • University policies that fail to reward teaching, innovation, or efforts to change culture.
  • An evaluation system based on volume: number of students taught, numbers on student surveys, number of teaching awards.
  • Recalcitrant faculty who resist changing a system that has long rewarded selfishness and who show no interest in reframing teaching as a shared endeavor.
  • Administrators who refuse to give faculty the time they need to engage in a more effective evaluation system.
  • Tension between treating evaluations as formative (a means of improving teaching) and summative (a means of determining merit raises and promotions).
  • Agreeing on what constitutes evidence of high-quality teaching.

Finding ways to move forward

By the end of the meeting, though, a hopeful spirit seemed to emerge as cross-campus conversations led to ideas for moving the process forward:

  • Tapping into the desire that most faculty have for seeing their students succeed.
  • Working with small groups to build momentum in many departments.
  • Creating a flexible system that can apply to many circumstances and can accommodate many types of evidence. This is especially important amid rapidly changing demands on and expectations for colleges and universities.
  • Helping faculty members demonstrate the success of evidence-based practices even when students resist.
  • Allowing truly innovative and highly effective instructors to stand out and allowing departments to focus on the types of skills they need instructors to have in different types of classes.
  • Allowing instructors, departments and universities to tell a richer, more compelling story about the value of teaching and learning.

Those involved were realistic, though. They recognized that they have much work ahead as they make small changes they hope will lead to more significant cultural changes. They recognized the value of a network of colleagues willing to share ideas, to offer support and resources, and to share the burden of a daunting task. And they recognized that they are on the forefront of a long-needed revolution in the way teaching is evaluated and valued at research universities.

If we truly value good teaching, it must be rewarded in the same way that research is rewarded. That would go a long way toward the project’s ultimate goal: a university system in which innovative instructors create rich environments where all their students can learn. It’s a goal well worth fighting for, even if the most prevalent response is “but …”

A note about the project

At KU, the project for creating a richer system for evaluating teaching is known as Benchmarks for Teaching Effectiveness. Nine departments are now involved in the project: African and African-American Studies; Biology; Chemical and Petroleum Engineering; French, Francophone and Italian; Linguistics; Philosophy; Physics; Public Affairs and Administration; and Sociology. Representatives from those departments who attended the Charlotte meeting were Chris Fischer, Bruce Hayes, Tracey LaPierre, Ward Lyles, and Rob Ward. The leaders of the KU project, Andrea Greenhoot, Meagan Patterson and Doug Ward, also attended.

Briefly …

Tom Deans, an English professor at the University of Connecticut, challenges faculty to reduce the length of their syllabuses, saying that “the typical syllabus has now become a too-long list of policies, learning outcomes, grading formulas, defensive maneuvers, recommendations, cautions, and referrals.” He says a syllabus should be no more than two pages. … British universities are receiving record numbers of applications from students from China and Hong Kong, The Guardian reports. In the U.S., applications from Chinese students have held steady, but fewer international students are applying to U.S. universities, the Council of Graduate Schools reports. … As the popularity of computer science has grown, students at many universities are having trouble getting the classes they need, The New York Times reports.


Doug Ward is the acting director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

A peer review of teaching generally goes something like this:

An instructor nears third-year review or promotion. At the request of the promotion and tenure committee, colleagues who have never visited the instructor’s class hurriedly sign up for a single visit. Sometimes individually, sometimes en masse, they sit uncomfortably among wary students for 50 or 75 minutes. Some take notes. Others don’t. Soon after, they submit laudatory remarks about the instructor’s teaching, relieved that they won’t have to visit again for a few years.

ChangHwan Kim (left), Tracey LaPierre and Paul Stock discuss their plans for evaluating teaching in the sociology department. They gathered with faculty members from four other units at the inaugural meeting of the Benchmarks for Teaching Effectiveness Project.

If your department or school has a better system, consider yourself lucky. Most peer evaluations lack guidelines that might offer meaningful feedback for a candidate and a P&T committee, and they focus almost exclusively on classroom performance. They provide a snapshot at best, lacking context about the class, the students or the work that has gone into creating engagement, assignments, evaluations and, above all, learning. Academics often refer to that approach as a “drive-by evaluation,” as reviewers do little but breeze past a class and give a thumbs-up out the window.

Those peer evaluations don’t have to be a clumsy, awkward and vapid free-for-all, though. Through the Benchmarks for Teaching Effectiveness Project, we have begun a process intended to make the evaluation of teaching much richer and more meaningful. The project is financed through a five-year, $612,000 National Science Foundation grant and is part of a larger NSF project that includes the University of Colorado, Michigan State, and the University of Massachusetts, Amherst.

We have used the NSF grant to distribute mini-grants to four departments and one school that will pilot the use of a rubric intended to add dimension and guidance to the evaluation of teaching. Faculty members in those units will work with colleagues to define and identify the elements of good teaching in their discipline, decide on appropriate evidence, adapt the rubric, apply it in some way, and share experiences with colleagues inside and outside the department and the university. Evidence will come from three sources: the instructor, students and peers. Departments will decide how to weight that evidence and the categories in the rubric.

Not surprisingly, the instructors involved in the project had many questions about how the process might play out as they gathered for the first time in February: What types of evidence are most reliable? How do we reduce conscious or unconscious bias in the evaluation process? How do we gain consensus among colleagues for an expanded evaluation process and for application of a new system of evaluation? How can we create a more meaningful process that doesn’t eat up lots of time?

Those are important questions without simple answers, but the departments that have signed on in this initial stage of the project have already identified many worthy goals. For instance, Sociology, Philosophy and Biology hope to reduce bias and improve consistency in the evaluation process. Chemical and Petroleum Engineering plans to create triads of faculty members who will provide feedback to one another. Public Affairs and Administration sees opportunities for making teaching more enjoyable and for inspiring instructors to take risks and innovate in their teaching.

All the units will use the rubric to foster discussion among their colleagues, to identify trustworthy standards of evidence, and, ultimately, to evaluate peers. Philosophy and sociology see opportunities for better evaluating graduate teaching assistants, as well. Chemical and Petroleum Engineering hopes to use the rubric to guide and evaluate 10 faculty members on tenure track. Sociology plans to use it to guide peer evaluation of teaching. Public Affairs and Administration plans to have a group of faculty alternate between evaluator and evaluee as they hone aspects of the rubric. Biology plans to explore the best ways to interpret the results.

That range of activities is important. By using the rubric to foster discussion about the central elements of teaching – and its evaluation – and then testing it in a variety of circumstances, instructors will learn valuable information about the teaching process. That feedback will allow us to revise the rubric, create better guidelines for its use, and ultimately help as many departments as possible adopt it for the promotion and tenure process.

All of the faculty members working in the initial phase of the Benchmarks project recognize the complexity and challenge of high-quality teaching. They also recognize the challenges in creating a better system of evaluation. Ultimately, though, their work has the potential to make good teaching more transparent, to make the evaluation of teaching more nuanced, and to make teaching itself a more important part of the faculty evaluation process.

Work your way through college? Not anymore

Kansas students would need to work nearly 30 hours a week at minimum wage to pay for college, even if they received grants and scholarships, according to an analysis by the public policy organization Demos.

In only eight other states would students need to work more hours to pay for college. New Hampshire, which would require more than 41 hours of work a week, was No. 1, followed by Pennsylvania (39.8 hours) and Alabama (36 hours).

Students attending college in Washington State would need to work the fewest hours (11.6), followed by California (12.6) and New York (15).

“In the vast majority of states, the idea of working your way through college is no more than an antiquated myth,” Demos writes. “A combination of low minimum wages and high college prices make borrowing an inevitability for students.”

The average yearly cost of attending Kansas universities is $16,783, Demos says. That’s 86 percent higher than it was in 2001, putting Kansas at No. 32 in average cost of attendance for public universities. New Hampshire had the highest average cost ($26,008), followed by Vermont ($25,910) and New Jersey ($25,544). Utah ($13,344) had the lowest average cost, followed by Wyoming ($13,942) and Idaho ($14,211).

Demos, which tilts liberal in its ideology, calculated the rankings using data from the federal government’s Integrated Postsecondary Education Data System and the Department of Labor. It created a “net price” for each state by subtracting average scholarship and grant aid from the average tuition and fees for four-year colleges in each state.
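To make that arithmetic concrete, here is a rough, hypothetical illustration. The dollar figures are invented for the example – they are not Demos’s actual inputs for any state – and the assumption of a 52-week work year is mine:

  Net price: $15,000 (average tuition and fees) − $4,000 (average grant and scholarship aid) = $11,000
  Hours of work per year: $11,000 ÷ $7.25 an hour (the federal and Kansas minimum wage) ≈ 1,517 hours
  Hours per week: 1,517 ÷ 52 weeks ≈ 29 hours

Under those made-up inputs, the result lands close to the nearly 30 hours a week that Demos reports for Kansas.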

That approach has many flaws. In Kansas, for instance, tuition and fees vary widely among four-year universities and even within schools at those universities. Averaging also masks a wide variance in the amount of financial aid students receive. And looking only at tuition and fees skews the picture even further, as housing, food and other expenses generally exceed those costs, especially in the Northeast and on the West Coast.

Even so, the study offers a reality check about college costs. State investment in higher education has declined even as the number of students attending college, and the diversity among those students, has grown. In Kansas, tuition now covers an average of 53 percent of a university’s costs, compared with 28 percent in 2001. Even that looks good compared with states like New Hampshire, where tuition accounts for 79 percent of university revenue, Delaware (75 percent) and Pennsylvania (73 percent).

Then again, in Wyoming, tuition dollars account for only 13 percent of college budgets. That is considerably lower than in the next-lowest states: California (21 percent) and Alaska (30 percent). In all states but Wyoming, tuition dollars now account for a greater share of university budgets than they did in 2001.

As Demos writes, “our state and federal policymakers have been vacating the compact with students that previous generations enjoyed.” It’s no wonder students have sought to put political pressure on schools and legislators.

The disinvestment in higher education began in the 1970s as a political message of lower taxes and smaller government started gaining ground. It accelerated during economic downturns and has only recently begun to ease. To compensate, colleges and universities have cut staff, hired fewer tenure-track professors, increased class size, and relied increasingly on low-paid adjunct instructors for teaching. Students and their families have taken on larger amounts of debt to finance their education.

As Demos writes: “When states do not prioritize higher education as a public good, students and families generally bear the burden.”


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

The evaluation of teaching generally looks like this:

Students hurriedly fill in questionnaires at the end of a semester, evaluating an instructor on a five-point scale. The university compiles the results and provides a summary for each faculty member. The individual scores, often judged against a department mean, determine an instructor’s teaching effectiveness for everything from annual reviews to evaluations for promotion and tenure.

That’s a problem. Student evaluations of teaching provide a narrow, often biased perspective that elevates faculty performance in the classroom above all else, even though it is just a small component of teaching. Even as faculty members work to provide a multitude of opportunities for students to demonstrate understanding, and even as their research receives layers of scrutiny, teaching continues to be evaluated by a single piece of evidence.

A CTE rubric for evaluating teaching helps instructors and departments focus on a series of questions.

The Center for Teaching Excellence hopes to change that in the coming years, with the help of a $612,000 grant from the National Science Foundation. Through the grant, CTE will offer mini-grants to departments that are willing to adopt a richer evaluation of teaching and adapt a rubric we have developed to aid the evaluation process. The rubric draws not only on student voices but also on peer evaluations and on material from the faculty member, including syllabi, assignments, evidence of student learning, assessments, and reflections on teaching.

The grant project involves departments that fall under the umbrella of STEM, or science, technology, engineering and math, but we plan to expand involvement to humanities and professional schools. It will focus on the evaluation of teaching, but our goals extend beyond that. The reliance on student evaluations has in many cases hindered the adoption of evidence-based teaching practices, which emphasize student learning as the central outcome of instruction. These practices have resulted in deeper learning and greater success for students, in addition to closing gaps between majority and minority groups. So by helping create a richer evaluation of faculty teaching, we hope to help departments recognize the work that faculty members put into improving student learning.

As the project unfolds, four to five departments will receive mini-grants in the coming year and will work with CTE staff members to develop a shared vision of high-quality teaching. We will add departments to the program in the next two years. Those departments will adapt the rubric so that it aligns with their disciplinary goals and expectations. They will also identify appropriate forms of evidence and decide how to apply the rubric. We envision it as a tool for such things as evaluation for promotion and tenure, third-year review, annual review, and mentoring of new faculty members, but the decision will be left to departments.  

Representatives of all the KU departments using the rubric will form a learning community that will meet periodically to share their approaches to using the rubric, exchange ideas and get feedback from peers. Once a year, they will have similar conversations with faculty members at two other universities that have created similar programs. 

The KU grant is part of a five-year, $2.8 million project that includes the University of Massachusetts, Amherst, the University of Colorado, Boulder, and Michigan State University. UMass and Colorado will also work to improve the evaluation of teaching; a researcher from Michigan State will create case studies of the other three campuses. Andrea Greenhoot, director of CTE; Meagan Patterson, a faculty fellow at CTE; and I will oversee the project at KU. The project grew from conversations at meetings of the Bay View Alliance, a group of North American research universities working to improve teaching and learning on their campuses. KU, Colorado and Massachusetts are all members of the alliance.

We see this as an important step in recognizing the intellectual work that goes into teaching and in elevating the role of teaching in the promotion and tenure process. In doing so, we hope to help faculty make their teaching accomplishments more visible and to elevate the importance of student learning. 


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

Research universities generally say one thing and do another when it comes to supporting effective teaching.

That is, they say they value and reward high-quality teaching, but fail to back up public proclamations when it comes to promotion and tenure. They say they value evidence in making decisions about the quality of instruction but then admit that only a small percentage of the material faculty submit for evaluation of teaching is of high quality.

That’s one finding from a recent report by the Association of American Universities, an organization that has traditionally embraced research as the most important element of university culture. That has begun to change over the last few years, though, as the AAU has emphasized the importance of high-quality teaching through its Undergraduate STEM Education Initiative. It elevated the importance of teaching even more with its recent report.

That report, called Aligning Practice to Policies: Changing the Culture to Recognize and Reward Teaching at Research Universities, was created in collaboration with the Cottrell Scholars Collaborative, an organization of educators working to improve the teaching of science. The report contains a survey of AAU member universities about attitudes toward teaching, but many of the ideas came out of a 2016 gathering of more than 40 leaders in higher education. Andrea Greenhoot, the director of CTE, and Dan Bernstein, the former director, represented KU at that meeting.

I wrote earlier this summer about the work of Emily Miller, the AAU’s associate vice president for policy, in helping improve teaching at the organization’s member universities. The AAU, she said, had been working to “balance the scale between teaching and research.” Miller played a key role in creating the latest report, which makes several recommendations for improving undergraduate education:

Provide ways to reward good teaching. This involves creating an evaluation system that moves beyond student surveys. Those surveys are fraught with problems and biases, the report says, and don’t reflect the much broader work that goes into effective teaching. Such a system would include such elements as evidence of course revision based on learning outcomes, documentation of student learning, adoption of evidence-based teaching practices, and reflection on teaching and course development. Universities also need to educate promotion and tenure committees on best practices for reviewing such materials, the report said.

Create a culture that values teaching as scholarship. This might involve several things: raising money to reward faculty members dedicated to improving student learning; providing time and resources for instructors to transform large lecture classes; and creating clear standards of good teaching for promotion and tenure, and for teaching awards. The report also suggests providing forums for recognizing teaching, and diminishing the divide between instructional faculty members and those whose jobs are research heavy.

Gain support from department chairs and deans. University leaders play a crucial role in setting agendas and encouraging faculty to adopt evidence-based teaching practices. This is especially important in the hiring process, the report says, and leaders can signal the importance of good teaching by providing professional development money, supporting involvement in communities that help promote good teaching, and having new faculty members work with experienced colleagues to gain insights into how to teach well.

The report made it clear that many research universities have a long way to go in making teaching and learning a crucial component of university life. Despite mounting evidence showing that student-centered, evidence-based teaching practices help students learn far more than lecture, the report said, most faculty members who teach undergraduate STEM courses “remain inattentive to the shifting landscape.”

In many cases, the report said, university policies express the importance of teaching, with most providing at least some guidance on how teaching should be evaluated. Most require use of student surveys and a majority recommend peer classroom evaluation. The problem is that teaching has long been pushed aside in the promotion and tenure process, even as universities pay lip service to the importance of teaching. The report said that needed to change.

“Research universities need to create an environment where the continuous improvement of teaching is valued, assessed, and rewarded at various stages of a faculty member’s career and aligned across the department, college, and university levels,” the report said. “Evidence shows that stated policies alone do not reflect practices, much less evolve culture to more highly value teaching. A richer, more complete assessment of teaching quality and effectiveness for tenure, promotion, and merit is necessary for systemic improvement of undergraduate STEM education.”

The report features the work of three universities, including KU, in helping change the culture of teaching. It includes a rubric we have developed at CTE to help departments move beyond student surveys in evaluating teaching, and talks about some of the work we have done to elevate the importance of teaching. It also explains the work that the University of Colorado and the University of California, Irvine, have done to improve STEM teaching at their campuses.

I’ll be writing more about the CTE teaching rubric in the coming weeks as we launch a new initiative aimed at helping departments use that rubric to identify the elements of good teaching and to add dimension to their evaluation of teaching. The AAU report is a good reminder of the momentum building not only to improve teaching but to elevate its importance in university life. Progress has been slow but steady. We seem on the cusp of significant changes, though.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.


By Doug Ward

Gauging the effectiveness of teaching solely on student evaluations has always been a one-dimensional “solution” to a complex issue. It is an approach built on convenience and routine rather than on a true evaluation of an instructor’s effectiveness.

And yet many universities routinely base promotion and tenure decisions on those evaluations – or, rather, on one component of them: a single number on a five-point scale. Those who rank above the mean for a department get a thumbs-up; those below the mean get a thumbs-down. It’s a system that gives teaching all the gravitas of a rounding error.

A new meta-analysis of research into student course evaluations confirms this weakness, underscoring the urgency for change. The authors of that study argue not only that student evaluations of teaching are a questionable tool but also that there is no correlation between evaluation ratings and student learning.

That’s right. None.

“Despite more than 75 years of sustained effort, there is presently no evidence supporting the widespread belief that students learn more from professors who receive higher SET ratings,” the authors of the study write, using SET for student evaluations of teaching.

The study, titled “Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related,” has been accepted for publication in Studies in Educational Evaluation. It was written by Bob Uttl, Carmela A. White, and Daniela Wong Gonzalez of Mount Royal University in Calgary, Alberta.

As part of their analysis, they challenge the validity of a seminal 1981 study that is often held up as evidence of the importance of teaching evaluations. That study and subsequent studies, they say, suffered from small sample sizes and “multiple methodological flaws that render their conclusions unwarranted.”

Course evaluations, they say, provide little more than a score for student perceptions. If student learning is what we care about, they argue, we need other methods for evaluating teaching.

Their findings fall in line with a 2014 study by the statisticians Philip B. Stark and Richard Freishtat of the University of California, Berkeley. That study argues that course evaluations are fraught with statistical problems and “pernicious distortions that result from using SET scores as a proxy for teaching quality and effectiveness.” Among those distortions: low response rates and a failure to account for factors such as class size, class format, and academic discipline.

This is all damning evidence, especially because universities rely heavily on student evaluations in making decisions about instruction, and about instructors’ careers. It is particularly problematic for the growing number of adjunct instructors, who are often rehired – or not – based solely on student evaluations; and for graduate teaching assistants, who are often shoved into classes with little pedagogical instruction and forced to make decisions about their teaching solely through the lens of end-of-semester evaluations.

All this points to the need for swift and substantial change in the way we evaluate teaching and learning. That does not mean we should abandon student evaluations of courses, though. Students deserve to be heard, and their observations can help instructors and administrators spot problem areas in courses.

The non-profit organization IDEA makes a strong case for using student evaluations of teaching and has been one of their staunchest proponents. IDEA has created a proprietary system for course evaluations, one that it says accounts for the many biases that creep into most surveys, so its defense of course evaluations must be viewed with that in mind.

Nonetheless, it makes a strong case. In a paper for IDEA earlier this year, Stephen L. Benton and Kenneth R. Ryalls make a point-by-point rebuttal to criticisms of student evaluations of teaching, saying that “students are qualified to provide useful, reliable feedback on teacher effectiveness.” They acknowledge faculty frustration with the current system, saying that course evaluations are often poorly constructed, created in ways that ask students to make judgments they are not qualified to make, and “overemphasized in summative decisions about teaching effectiveness.”

“Those institutions who employ an instrument designed by a committee decades ago, or worse yet allow each department to develop its own tool, are at risk of making decisions based on questionable data,” they write.

So what can we do? I suggest two immediate steps:

Expand the evaluation system. This means de-emphasizing student evaluations in making decisions about teaching effectiveness. No department should rely solely on these evaluations for making decisions. Rather, all departments should rely on a range of factors that provide a more nuanced measurement of faculty teaching. I’ve written previously about CTE’s development of a rubric for evaluating teaching, and that rubric can be a good first step in making the evaluation system fairer and more substantial. The goal with that rubric is to help departments identify a variety of means for judging teachers – including student evaluations – and to give them flexibility in the types of discipline-specific evidence they use. It is a framework for thinking about teaching, not a rigid measurement tool.

Revisit student evaluations of teaching. As I said, students’ opinions about courses and instructors deserve to be heard. If we are going to poll students about their courses, though, we should use a system that helps filter out biases and that provides valid, meaningful data. The IDEA model is just one way of doing that. Changing the current system will require an investment of time and money. It will also require the will to overcome years of entrenched thinking.

The problems in student evaluations of teaching are simply a visible component of a much larger problem. At the root of all this is a university system that fails to value effective and innovative teaching, and that rewards departments for increasing the number of students rather than improving student learning. If the university system hopes to survive, it simply must give teaching the credit it deserves in the promotion and tenure process. Moving beyond reliance on course evaluations would be a solid first step.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.
