By Doug Ward

The evaluation of teaching generally looks like this:

Students hurriedly fill in questionnaires at the end of a semester, evaluating an instructor on a five-point scale. The university compiles the results and provides a summary for each faculty member. The individual scores, often judged against a department mean, determine an instructor’s teaching effectiveness for everything from annual reviews to evaluations for promotion and tenure.

That’s a problem. Student evaluations of teaching provide a narrow, often biased perspective that elevates faculty performance in the classroom above all else, even though it is just a small component of teaching. Even as faculty members work to provide a multitude of opportunities for students to demonstrate understanding, and even as their research receives layers of scrutiny, teaching continues to be evaluated by a single piece of evidence.


The Center for Teaching Excellence hopes to change that in the coming years, with the help of a $612,000 grant from the National Science Foundation. Through the grant, CTE will offer mini-grants to departments that are willing to adopt a richer evaluation of teaching and adapt a rubric we have developed to aid the evaluation process. The rubric draws not only on student voices but also on peer evaluations and on material from the faculty member, including syllabi, assignments, evidence of student learning, assessments, and reflections on teaching.

The grant project involves departments that fall under the umbrella of STEM, or science, technology, engineering and math, but we plan to expand involvement to humanities and professional schools. It will focus on the evaluation of teaching, but our goals extend beyond that. The reliance on student evaluations has in many cases hindered the adoption of evidence-based teaching practices, which emphasize student learning as the central outcome of instruction. These practices have resulted in deeper learning and greater success for students, in addition to closing gaps between majority and minority groups. So by helping create a richer evaluation of faculty teaching, we hope to ensure that departments recognize the work faculty members put into improving student learning.

As the project unfolds, four to five departments will receive mini-grants in the coming year and will work with CTE staff members to develop a shared vision of high-quality teaching. We will add departments to the program in the next two years. Those departments will adapt the rubric so that it aligns with their disciplinary goals and expectations. They will also identify appropriate forms of evidence and decide how to apply the rubric. We envision it as a tool for such things as evaluation for promotion and tenure, third-year review, annual review, and mentoring of new faculty members, but the decision will be left to departments.  

Representatives of all the KU departments using the rubric will form a learning community that will meet periodically to share their approaches to using the rubric, exchange ideas and get feedback from peers. Once a year, they will have similar conversations with faculty members at two other universities that have created similar programs. 

The KU grant is part of a five-year, $2.8 million project that includes the University of Massachusetts, Amherst, the University of Colorado, Boulder, and Michigan State University. UMass and Colorado will also work to improve the evaluation of teaching; a researcher from Michigan State will create case studies of the other three campuses. Andrea Greenhoot, director of CTE; Meagan Patterson, a faculty fellow at CTE; and I will oversee the project at KU. The project grew from conversations at meetings of the Bay View Alliance, a group of North American research universities working to improve teaching and learning on their campuses. KU, Colorado and Massachusetts are all members of the alliance.

We see this as an important step in recognizing the intellectual work that goes into teaching and in elevating the role of teaching in the promotion and tenure process. In doing so, we hope to help faculty make their teaching accomplishments more visible and to elevate the importance of student learning. 


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

Research universities generally say one thing and do another when it comes to supporting effective teaching.

That is, they say they value and reward high-quality teaching, but fail to back up public proclamations when it comes to promotion and tenure. They say they value evidence in making decisions about the quality of instruction but then admit that only a small percentage of the material faculty submit for evaluation of teaching is of high quality.

That’s one finding from a recent report by the Association of American Universities, an organization that has traditionally embraced research as the most important element of university culture. That has begun to change over the last few years, though, as the AAU has emphasized the importance of high-quality teaching through its Undergraduate STEM Education Initiative. It elevated the importance of teaching even more with its recent report.

That report, called Aligning Practice to Policies: Changing the Culture to Recognize and Reward Teaching at Research Universities, was created in collaboration with the Cottrell Scholars Collaborative, an organization of educators working to improve the teaching of science. The report contains a survey of AAU member universities about attitudes toward teaching, but many of the ideas came out of a 2016 gathering of more than 40 leaders in higher education. Andrea Greenhoot, the director of CTE, and Dan Bernstein, the former director, represented KU at that meeting.

I wrote earlier this summer about the work of Emily Miller, the AAU’s associate vice president for policy, in helping improve teaching at the organization’s member universities. The AAU, she said, had been working to “balance the scale between teaching and research.” Miller played a key role in creating the latest report, which makes several recommendations for improving undergraduate education:

Provide ways to reward good teaching. This involves creating an evaluation system that moves beyond student surveys. Those surveys are fraught with problems and biases, the report says, and don’t reflect the much broader work that goes into effective teaching. Such a system would include such elements as evidence of course revision based on learning outcomes, documentation of student learning, adoption of evidence-based teaching practices, and reflection on teaching and course development. Universities also need to educate promotion and tenure committees on best practices for reviewing such materials, the report said.

Create a culture that values teaching as scholarship. This might involve several things: raising money to reward faculty members dedicated to improving student learning; providing time and resources for instructors to transform large lecture classes; and creating clear standards of good teaching for promotion and tenure, and for teaching awards. The report also suggests providing forums for recognizing teaching, and diminishing the divide between instructional faculty members and those whose jobs are research heavy.

Gain support from department chairs and deans. University leaders play a crucial role in setting agendas and encouraging faculty to adopt evidence-based teaching practices. This is especially important in the hiring process, the report says, and leaders can signal the importance of good teaching by providing professional development money, supporting involvement in communities that help promote good teaching, and having new faculty members work with experienced colleagues to gain insights into how to teach well.

The report made it clear that many research universities have a long way to go in making teaching and learning a crucial component of university life. Despite mounting evidence showing that student-centered, evidence-based teaching practices help students learn far more than lecture, the report said, most faculty members who teach undergraduate STEM courses “remain inattentive to the shifting landscape.”

In many cases, the report said, university policies express the importance of teaching, with most providing at least some guidance on how teaching should be evaluated. Most require use of student surveys and a majority recommend peer classroom evaluation. The problem is that teaching has long been pushed aside in the promotion and tenure process, even as universities pay lip service to the importance of teaching. The report said that needed to change.

“Research universities need to create an environment where the continuous improvement of teaching is valued, assessed, and rewarded at various stages of a faculty member’s career and aligned across the department, college, and university levels,” the report said. “Evidence shows that stated policies alone do not reflect practices, much less evolve culture to more highly value teaching. A richer, more complete assessment of teaching quality and effectiveness for tenure, promotion, and merit is necessary for systemic improvement of undergraduate STEM education.”

The report features the work of three universities, including KU, in helping change the culture of teaching. It includes a rubric we have developed at CTE to help departments move beyond student surveys in evaluating teaching, and talks about some of the work we have done to elevate the importance of teaching. It also explains the work that the University of Colorado and the University of California, Irvine, have done to improve STEM teaching at their campuses.

I’ll be writing more about the CTE teaching rubric in the coming weeks as we launch a new initiative aimed at helping departments use that rubric to identify the elements of good teaching and to add dimension to their evaluation of teaching. The AAU report is a good reminder of the momentum building not only to improve teaching but to elevate its importance in university life. Progress has been slow but steady, and we seem to be on the cusp of significant changes.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

 

By Doug Ward

Gauging the effectiveness of teaching solely on student evaluations has always been a one-dimensional “solution” to a complex issue. It is an approach built on convenience and routine rather than on a true evaluation of an instructor’s effectiveness.

And yet many universities routinely base promotion and tenure decisions on those evaluations, or, rather, on a single component of those evaluations: a lone number on a five-point scale. Those who rank above the mean for a department get a thumbs-up; those below the mean get a thumbs-down. It’s a system that gives teaching all the gravitas of a rounding error.
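
To see how little that above-or-below-the-mean verdict can mean, consider a quick simulation. This is a minimal sketch with invented numbers, not data from any real department: two hypothetical instructors whose “true” ratings on a five-point scale differ by a mere 0.05 points will trade places around the mean from semester to semester on sampling noise alone.

```python
import random

# Purely illustrative: two hypothetical instructors whose "true" mean
# ratings on a 5-point scale differ by 0.05 points -- a rounding error.
TRUE_MEANS = {"A": 4.20, "B": 4.25}
CLASS_SIZE = 30      # assumed number of survey responses per semester
SEMESTERS = 1000

def semester_score(true_mean):
    """Average of CLASS_SIZE individual ratings, each a noisy 1-5 response."""
    ratings = [min(5.0, max(1.0, random.gauss(true_mean, 0.9)))
               for _ in range(CLASS_SIZE)]
    return sum(ratings) / CLASS_SIZE

# Count how often the nominally weaker instructor outscores the stronger one.
flips = sum(semester_score(TRUE_MEANS["A"]) > semester_score(TRUE_MEANS["B"])
            for _ in range(SEMESTERS))
print(f"A outscored B in {flips / SEMESTERS:.0%} of simulated semesters")
```

With these invented parameters, the “weaker” instructor comes out on top in roughly four semesters of every ten; a thumbs-up or thumbs-down hinging on that comparison is largely measuring noise.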

A new meta-analysis of research into student course evaluations confirms this weakness, underscoring the urgency for change. The authors of that study argue not only that student evaluations of teaching are a questionable tool but that there is no correlation between evaluations and student learning.

That’s right. None.

“Despite more than 75 years of sustained effort, there is presently no evidence supporting the widespread belief that students learn more from professors who receive higher SET ratings,” the authors of the study write, using SET for student evaluations of teaching.

The study, titled “Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related,” has been accepted for publication in Studies in Educational Evaluation. It was written by Bob Uttl, Carmela A. White, and Daniela Wong Gonzalez of Mount Royal University in Calgary, Alberta.

As part of their analysis, they challenge the validity of a seminal 1981 study that is often held up as evidence of the importance of teaching evaluations. That study and subsequent studies, they say, suffered from small sample sizes and “multiple methodological flaws that render their conclusions unwarranted.”

Course evaluations, they say, provide little more than a measure of student perceptions. If student learning is what matters, they argue, we need other methods for evaluating teaching.

Their findings fall in line with a 2014 study by the statisticians Philip B. Stark and Richard Freishtat of the University of California, Berkeley. That study argues that course evaluations are fraught with statistical problems and “pernicious distortions that result from using SET scores as a proxy for teaching quality and effectiveness.” Among those distortions: low response rates and a failure to account for factors such as class size, course format, and academic discipline.
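
The response-rate distortion in particular is easy to see with a toy example. The numbers below are invented for illustration, not a reanalysis of Stark and Freishtat’s data: when the students who respond are not a random sample of the class, the reported average drifts away from what the full class would have said.

```python
import random

# Invented numbers for illustration: a class of 100 students whose ratings
# would average 3.75 if everyone responded, but where dissatisfied students
# are assumed to be far more motivated to fill out the survey.
full_class = [5] * 30 + [4] * 35 + [3] * 20 + [2] * 10 + [1] * 5
response_prob = {5: 0.20, 4: 0.15, 3: 0.15, 2: 0.50, 1: 0.80}  # assumed

respondents = [r for r in full_class if random.random() < response_prob[r]]

print(f"full-class mean: {sum(full_class) / len(full_class):.2f}")
print(f"reported mean:   {sum(respondents) / len(respondents):.2f} "
      f"(from {len(respondents)} of {len(full_class)} students)")
```

In this toy setup the reported mean lands around 3.2, more than half a point below the full-class average of 3.75, even though nothing about the teaching changed; a response rate of 25 or 30 percent can quietly carry exactly that kind of bias.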

This is all damning evidence, especially because universities rely heavily on student evaluations in making decisions about instruction, and about instructors’ careers. It is especially problematic for the growing number of adjunct instructors, who are often rehired – or not – based solely on student evaluations; and for graduate teaching assistants, who are often shoved into classes with little pedagogical instruction and forced to make decisions about their teaching solely through the lens of end-of-semester evaluations.

All this points to the need for swift and substantial change in the way we evaluate teaching and learning. That does not mean we should abandon student evaluations of courses, though. Students deserve to be heard, and their observations can help instructors and administrators spot problem areas in courses.

The non-profit organization IDEA makes a strong case for using student evaluations of teaching, and has been one of their staunchest proponents. IDEA has created a proprietary system for course evaluations, one that it says accounts for the many biases that creep into most surveys, so its defense of course evaluations must be viewed with that in mind.

Nonetheless, it makes a strong case. In a paper for IDEA earlier this year, Stephen L. Benton and Kenneth R. Ryalls make a point-by-point rebuttal to criticisms of student evaluations of teaching, saying that “students are qualified to provide useful, reliable feedback on teacher effectiveness.” They acknowledge faculty frustration with the current system, saying that course evaluations are often poorly constructed, created in ways that ask students to make judgments they are not qualified to make, and “overemphasized in summative decisions about teaching effectiveness.”

“Those institutions who employ an instrument designed by a committee decades ago, or worse yet allow each department to develop its own tool, are at risk of making decisions based on questionable data,” they write.

So what can we do? I suggest two immediate steps:

Expand the evaluation system. This means de-emphasizing student evaluations in making decisions about teaching effectiveness. No department should rely solely on these evaluations for making decisions. Rather, all departments should rely on a range of factors that provide a more nuanced measurement of faculty teaching. I’ve written previously about CTE’s development of a rubric for evaluating teaching, and that rubric can be a good first step in making the evaluation system fairer and more substantial. The goal with that rubric is to help departments identify a variety of means for judging teachers – including student evaluations – and to give them flexibility in the types of discipline-specific evidence they use. It is a framework for thinking about teaching, not a rigid measurement tool.

Revisit student evaluations of teaching. As I said, students’ opinions about courses and instructors deserve to be heard. If we are going to poll students about their courses, though, we should use a system that helps filter out biases and that provides valid, meaningful data. The IDEA model is just one way of doing that. Changing the current system will require an investment of time and money. It will also require the will to overcome years of entrenched thinking.

The problems in student evaluations of teaching are simply a visible component of a much larger problem. At the root of all this is a university system that fails to value effective and innovative teaching, and that rewards departments for increasing the number of students rather than improving student learning. If the university system hopes to survive, it simply must give teaching the credit it deserves in the promotion and tenure process. Moving beyond reliance on course evaluations would be a solid first step.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

The spread of evidence-based teaching practices highlights a growing paradox: Even as instructors work to evaluate student learning in creative, multidimensional ways, they themselves are generally judged only through student evaluations.

Students should have a voice. As Stephen Benton and William Cashin write in a broad review of research, student evaluations can help faculty members improve their courses and help administrators spot potential problems in the classroom.

The drawback is that too many departments use only student evaluations to judge the effectiveness of instructors, even as they subject faculty research to multiple layers of internal and external review. Student evaluations are the only university-mandated form of gauging instructors’ teaching, and many departments measure faculty members against a department mean. Those above the mean are generally viewed favorably and those below the mean are seen as a problem. That approach fails to account for the weaknesses in evaluations. For instance, Benton and Cashin and others have found:

  • Students tend to give higher scores to instructors in classes they are motivated to take, and in which they do well.
  • Instructors who teach large courses and entry-level courses tend to receive lower evaluations than those who teach smaller numbers of students and upper-level courses.
  • Evaluation scores tend to be higher in some disciplines (especially humanities) than in others (like STEM).
  • Evaluation scores sometimes drop in the first few semesters of a course redesigned for active learning.
  • Students have little experience in judging their own learning. As the Stanford professor Carl Wieman writes: “It is impossible for a student (or anyone else) to judge the effectiveness of an instructional practice except by comparing it with others that they have already experienced.”
  • Overemphasis on student evaluations often generates cynicism among faculty members about administrators’ belief in the importance of high-quality teaching.

Looked at through that lens, we have not only a need but an obligation to move beyond student evaluations in gauging the effectiveness of teaching. We simply must add dimension and nuance to the process, much as we already do with evaluation of research.

So how do we do that?

At CTE, we have developed a rubric to help departments integrate information from faculty members, peers, and students. Student evaluations are a part of the mix, but only a part. We have tried to help departments bring the many facets of teaching into a format that provides a richer, fairer evaluation of instructor effectiveness without adding onerous time burdens for evaluators.

For the most part, this approach uses the types of materials that faculty members already submit and that departments gather independently: syllabi and course schedules; teaching statements; readings, worksheets and other course materials; assignments, projects, test results and other evidence of student learning; faculty reflections on student learning; peer evaluations from team teaching and class visits; and formal discussions about the faculty member’s approach to teaching.

Departments then use the rubric to evaluate that body of work, rewarding faculty members who engage in such approaches as:

  • experimenting with innovative teaching techniques
  • aligning course content with learning goals
  • making effective use of class time
  • using research-based teaching practices
  • engaging students in hands-on learning rather than simply delivering information to them
  • revising course content and design based on evidence and reflection
  • mentoring students, and providing evidence of student learning
  • sharing their work through presentations, scholarship, committee work and other venues

Departments can easily adapt the rubric to fit particular disciplinary expectations and to weight areas most meaningful to their discipline. We have already received feedback from many faculty members around the university. We’ve also asked a few departments to test the rubric as they evaluate faculty members for promotion and tenure, third-year review, and post-tenure review, and we plan to test it more broadly in the fall.

We will continue to refine the rubric based on the feedback we receive. Like teaching itself, it will be a constant work in progress. We see it as an important step toward making innovative teaching more visible, though, and toward making teaching a more credible and meaningful part of the promotion and tenure process. If you’d like to be part of that, let us know.

****

This article also appears in Teaching Matters, a publication of the Center for Teaching Excellence.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.