By Doug Ward

If you plan to use student surveys of teaching for feedback on your classes this semester, consider this: Only about 50% of students fill out the surveys online.

Yes, 50%.

There are several ways that instructors can increase that response rate, though. None are particularly difficult, but they do require you to think about the surveys in slightly different ways. I’ll get to those in a moment.

The low response rate for online student surveys of teaching is not just a problem at KU. Nearly every university that has moved student surveys online has faced the same challenge.

That shouldn’t be surprising. When surveys are conducted on paper, instructors (or proxies) distribute them in class and students have 10 or 15 minutes to fill them out. With the online surveys, students usually fill them out on their own time – or simply ignore them.

I have no interest in returning to paper surveys, which are cumbersome, wasteful and time-consuming. For example, Ally Smith, an administrative assistant in environmental studies, geology, geography, and atmospheric sciences, estimates that staff time needed to prepare data and distribute results for those four disciplines has declined by 47.5 hours a semester since the surveys were moved online. Staff members now spend about 4 hours gathering and distributing the online data.

That’s an enormous time savings. The online surveys also save reams of paper and allow departments to eliminate the cost of scanning the surveys. That cost is about 8 cents a page. The online system also protects student and faculty privacy. Paper surveys are generally handled by several people, and students in large classes sometimes leave completed surveys in or near the classroom. (I once found a completed survey sitting on a trash can outside a lecture hall.)

So there are solid reasons to move to online surveys. The question is how to improve student responsiveness.

I recently led a university committee that looked into that. Others on the committee were Chris Elles, Heidi Hallman, Ravi Shanmugam, Holly Storkel and Ketty Wong. We found no magic solution, but we did find that many instructors were able to get 80% to 100% of their students to participate in the surveys. Here are four common approaches they use:

Have students complete surveys in class

Completing the surveys outside class was necessary in the first three years of online surveys at KU because students had to use a laptop or desktop computer. A system the university adopted two years ago allows them to use smartphones, tablets or computers. A vast majority of students have smartphones, so it would be easy for them to take the surveys in class. Instructors would need to tell students in advance to bring a device on survey day and find ways to make sure everyone has one. Students who were absent or unable to complete the surveys in class could still do so on their own time.

Remind students about the surveys several times

Notices about the online surveys are sent by the Center for Online and Distance Learning, an entity that most students don’t know and never interact with otherwise. Instructors who have had consistently high response rates send out multiple messages to students and speak about the surveys in class. They explain that student feedback is important for improving courses and that a higher response rate provides a broader understanding of students’ experiences in a class.

To some extent, response rates indicate the degree to which students feel a part of a class, and rates are generally higher in smaller classes. Even in classes where students feel engaged, though, a single reminder from an instructor isn’t enough. Rather, instructors should explain why the feedback from the surveys is important and how it is used to improve future classes. An appeal that explains the importance and offers specific examples of how the instructor has used the feedback is more likely to get students to act than one that just reminds them to fill out the surveys. Sending several reminders is even better.

Give extra credit for completing surveys

Instructors in large classes have found this an especially effective means of increasing student participation. Giving students as little as 1 point of extra credit (a fraction of 1% of the overall grade) is enough to spur them to action, although offering a bump of 1% or more is even more effective. In some cases, instructors have gamified the process: The higher the response rate, the more extra credit everyone in the class receives. I’m generally not a fan of extra credit, but instructors who have used this method have been able to get more than 90% of their students to complete the online surveys of teaching.

Add midterm surveys

A midterm survey helps instructors identify problems or frustrations in a class and make changes during the semester, signaling to students that their opinions and experiences matter. That, in turn, helps motivate students to complete end-of-semester surveys. Many instructors already administer midterm surveys, either electronically (via Blackboard or other online tools) or on paper, asking students such things as what is going well in the class, what needs to change, and where they are struggling. This approach is backed by research from ALPS Insights, a training-evaluation organization, which has found that students are more likely to complete later course surveys if instructors acknowledge and act on earlier feedback. It’s too late to adopt that approach this semester, but it is worth trying in future semesters.

Remember the limitations

Student surveys of teaching can provide valuable feedback that helps instructors make adjustments in future semesters. Instructors we spoke to, though, overwhelmingly said that student comments were the most valuable component of the surveys. Those comments point to specific areas where students have concerns or where a course is working well.

Unfortunately, surveys of teaching have been grossly misused as an objective measure of an instructor’s effectiveness. A growing body of research has found that the surveys do not evaluate the quality of instruction in a class and do not correlate with student learning. They are best used as one component of a much larger array of evidence. The College of Liberal Arts and Sciences has developed a broader framework, and CTE has created an approach we call Benchmarks for Teaching Effectiveness. It uses a rubric to help shape a more thorough, fairer and more nuanced evaluation process.

Universities across the country are rethinking their approach to evaluating teaching, and the work of CTE and the College is at the forefront of that effort. Even those broader approaches require input from students, though. So as you move into your final classes, remind students of the importance of their participation in the process.

(What have you found effective? If you have found other ways of increasing student participation in end-of-semester teaching surveys, let us know so we can share your ideas with colleagues.)

The ‘right’ way to take notes isn’t clear cut

A new study on note-taking muddies what many instructors saw as a clear advantage of pen and paper.

The study replicates a 2014 study that has been used as evidence for banning laptop computers in class and having students take notes by hand. The new study found little difference except for what it called a “small (insignificant)” advantage in recall of factual information for those taking handwritten notes.

Daniel Oppenheimer, a Carnegie Mellon professor who is a co-author of the new paper, told The Chronicle of Higher Education:

“The right way to look at these findings, both the original findings and these new findings, is not that longhand is better than laptops for note-taking, but rather that longhand note-taking is different from laptop note-taking.”

A former KU dean worries about perceptions of elitism

Kim Wilcox, a former KU dean of liberal arts and sciences, argues in Edsource that the recent college admissions scandal leaves the inaccurate impression that only elite colleges matter and that the admissions process can’t be trusted.

“Those elite universities do not represent the broad reality in America,” writes Wilcox, who is the chancellor of the University of California, Riverside. He was KU’s dean of liberal arts and sciences from 2002 to 2005.

He speaks from experience. UC Riverside has been a national leader in increasing graduation rates, especially among low-income students and those from underrepresented minority groups. Wilcox himself was a first-generation college student.

He says the scandal came about in part because of “reliance on a set of outdated measures of collegiate quality; measures that focus on institutional wealth and student rejection rates as indicators of educational excellence.”

Wilcox was chair of speech-language-hearing at KU for 10 years and was president and CEO of the Kansas Board of Regents from 1999 to 2002.

Join our Celebration of Teaching

CTE’s annual Celebration of Teaching will take place Friday at 3 p.m. at the Beren Petroleum Center in Slawson Hall. More than 50 posters will be on display from instructors who have transformed their courses through the Curriculum Innovation Program, C21, Diversity Scholars, and Best Practices Institute. It’s a great chance to pick up teaching tips from colleagues and to learn more about the great work being done across campus.


Doug Ward is the acting director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

The evaluation of teaching generally looks like this:

Students hurriedly fill in questionnaires at the end of a semester, evaluating an instructor on a five-point scale. The university compiles the results and provides a summary for each faculty member. The individual scores, often judged against a department mean, determine an instructor’s teaching effectiveness for everything from annual reviews to evaluations for promotion and tenure.

That’s a problem. Student evaluations of teaching provide a narrow, often biased perspective that elevates faculty performance in the classroom above all else, even though it is just a small component of teaching. Even as faculty members work to provide a multitude of opportunities for students to demonstrate understanding, and even as their research receives layers of scrutiny, teaching continues to be evaluated by a single piece of evidence.

A CTE rubric for evaluating teaching helps instructors and departments focus on a series of questions.

The Center for Teaching Excellence hopes to change that in the coming years, with the help of a $612,000 grant from the National Science Foundation. Through the grant, CTE will offer mini-grants to departments that are willing to adopt a richer evaluation of teaching and adapt a rubric we have developed to aid the evaluation process. The rubric draws not only on student voices but also on peer evaluations and on material from the faculty member, including syllabi, assignments, evidence of student learning, assessments, and reflections on teaching.

The grant project involves departments that fall under the umbrella of STEM, or science, technology, engineering and math, but we plan to expand involvement to humanities and professional schools. It will focus on the evaluation of teaching, but our goals extend beyond that. The reliance on student evaluations has in many cases hindered the adoption of evidence-based teaching practices, which emphasize student learning as the central outcome of instruction. Those practices have led to deeper learning and greater success for students, in addition to closing gaps between majority and minority groups. So by helping create a richer evaluation of faculty teaching, we hope to help departments recognize the work that faculty members put into improving student learning.

As the project unfolds, four to five departments will receive mini-grants in the coming year and will work with CTE staff members to develop a shared vision of high-quality teaching. We will add departments to the program in the next two years. Those departments will adapt the rubric so that it aligns with their disciplinary goals and expectations. They will also identify appropriate forms of evidence and decide how to apply the rubric. We envision it as a tool for such things as evaluation for promotion and tenure, third-year review, annual review, and mentoring of new faculty members, but the decision will be left to departments.  

Representatives of all the KU departments using the rubric will form a learning community that will meet periodically to share their approaches to using the rubric, exchange ideas and get feedback from peers. Once a year, they will have similar conversations with faculty members at two other universities that have created similar programs. 

The KU grant is part of a five-year, $2.8 million project that includes the University of Massachusetts, Amherst, the University of Colorado, Boulder, and Michigan State University. UMass and Colorado will also work to improve the evaluation of teaching; a researcher from Michigan State will create case studies of the other three campuses. Andrea Greenhoot, director of CTE; Meagan Patterson, a faculty fellow at CTE; and I will oversee the project at KU. The project grew from conversations at meetings of the Bay View Alliance, a group of North American research universities working to improve teaching and learning on their campuses. KU, Colorado and Massachusetts are all members of the alliance.

We see this as an important step in recognizing the intellectual work that goes into teaching and in elevating the role of teaching in the promotion and tenure process. In doing so, we hope to help faculty make their teaching accomplishments more visible and to elevate the importance of student learning. 


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

Gauging the effectiveness of teaching solely on student evaluations has always been a one-dimensional “solution” to a complex issue. It is an approach built on convenience and routine rather than on a true evaluation of an instructor’s effectiveness.

And yet many universities routinely base promotion and tenure decisions on those evaluations, or, rather, on a component of those evaluations in the form of a single number on a five-point scale. Those who rank above the mean for a department get a thumbs-up; those below the mean get a thumbs-down. It’s a system that gives teaching all the gravitas of a rounding error.

A new meta-analysis of research into student course evaluations confirms this weakness, underscoring the urgency for change. The authors of that study argue not only that student evaluations of teaching are a questionable tool but also that there is no correlation between the evaluations and student learning.

That’s right. None.

“Despite more than 75 years of sustained effort, there is presently no evidence supporting the widespread belief that students learn more from professors who receive higher SET ratings,” the authors of the study write, using SET for student evaluations of teaching.

The study, titled “Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related,” has been accepted for publication in Studies in Educational Evaluation. It was written by Bob Uttl, Carmela A. White, and Daniela Wong Gonzalez of Mount Royal University in Calgary, Alberta.

As part of their analysis, they challenge the validity of a seminal 1981 study that is often held up as evidence of the importance of teaching evaluations. That study and subsequent studies, they say, suffered from small sample sizes and “multiple methodological flaws that render their conclusions unwarranted.”

Course evaluations, they say, provide little more than a score for student perceptions. If student learning is important, they argue, we need other methods for evaluating teaching.

Their findings fall in line with a 2014 study by the statisticians Philip B. Stark and Richard Freishtat of the University of California, Berkeley. That study argues that course evaluations are fraught with statistical problems and “pernicious distortions that result from using SET scores as a proxy for teaching quality and effectiveness.” Among those distortions: low response rates and a failure to account for factors such as class size, class format, and academic discipline.

This is all damning evidence, especially because universities rely heavily on student evaluations in making decisions about instruction, and about instructors’ careers. It is especially problematic for the growing number of adjunct instructors, who are often rehired – or not – based solely on student evaluations, and for graduate teaching assistants, who are often shoved into classes with little pedagogical instruction and forced to gauge their teaching solely through the lens of end-of-semester evaluations.

All this points to the need for swift and substantial change in the way we evaluate teaching and learning. That does not mean we should abandon student evaluations of courses, though. Students deserve to be heard, and their observations can help instructors and administrators spot problem areas in courses.

The non-profit organization IDEA has been one of the staunchest proponents of student evaluations of teaching. IDEA has created a proprietary system for course evaluations, one that it says accounts for the many biases that creep into most surveys, so its defense of those evaluations must be viewed with that in mind.

Nonetheless, it makes a strong case. In a paper for IDEA earlier this year, Stephen L. Benton and Kenneth R. Ryalls make a point-by-point rebuttal to criticisms of student evaluations of teaching, saying that “students are qualified to provide useful, reliable feedback on teacher effectiveness.” They acknowledge faculty frustration with the current system, saying that course evaluations are often poorly constructed, created in ways that ask students to make judgments they are not qualified to make, and “overemphasized in summative decisions about teaching effectiveness.”

“Those institutions who employ an instrument designed by a committee decades ago, or worse yet allow each department to develop its own tool, are at risk of making decisions based on questionable data,” they write.

So what can we do? I suggest two immediate steps:

Expand the evaluation system. This means de-emphasizing student evaluations in making decisions about teaching effectiveness. No department should rely solely on these evaluations; rather, all departments should draw on a range of factors that provide a more nuanced measure of faculty teaching. I’ve written previously about CTE’s development of a rubric for evaluating teaching, and that rubric can be a good first step in making the evaluation system fairer and more substantial. The goal of the rubric is to help departments identify a variety of means for judging teachers – including student evaluations – and to give them flexibility in the types of discipline-specific evidence they use. It is a framework for thinking about teaching, not a rigid measurement tool.

Revisit student evaluations of teaching. As I said, students’ opinions about courses and instructors deserve to be heard. If we are going to poll students about their courses, though, we should use a system that helps filter out biases and that provides valid, meaningful data. The IDEA model is just one way of doing that. Changing the current system will require an investment of time and money. It will also require the will to overcome years of entrenched thinking.

The problems in student evaluations of teaching are simply a visible component of a much larger problem. At the root of all this is a university system that fails to value effective and innovative teaching, and that rewards departments for increasing the number of students rather than improving student learning. If the university system hopes to survive, it simply must give teaching the credit it deserves in the promotion and tenure process. Moving beyond reliance on course evaluations would be a solid first step.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

The spread of evidence-based teaching practices highlights a growing paradox: Even as instructors work to evaluate student learning in creative, multidimensional ways, they themselves are generally judged only through student evaluations.

Students should have a voice. As Stephen Benton and William Cashin write in a broad review of research, student evaluations can help faculty members improve their courses and help administrators spot potential problems in the classroom.

The drawback is that too many departments use only student evaluations to judge the effectiveness of instructors, even as they submit faculty research through a multilayered evaluation process internally and externally. Student evaluations are the only university-mandated form of gauging instructors’ teaching, and many departments measure faculty members against a department mean. Those above the mean are generally viewed favorably and those below the mean are seen as a problem. That approach fails to account for the weaknesses in evaluations. For instance, Benton and Cashin and others have found:

  • Students tend to give higher scores to instructors in classes they are motivated to take, and in which they do well.
  • Instructors who teach large courses and entry-level courses tend to receive lower evaluations than those who teach smaller numbers of students and upper-level courses.
  • Evaluation scores tend to be higher in some disciplines (especially humanities) than in others (like STEM).
  • Evaluation scores sometimes drop in the first few semesters of a course redesigned for active learning.
  • Students have little experience in judging their own learning. As the Stanford professor Carl Wieman writes: “It is impossible for a student (or anyone else) to judge the effectiveness of an instructional practice except by comparing it with others that they have already experienced.”
  • Overemphasis on student evaluations often generates cynicism among faculty members about administrators’ belief in the importance of high-quality teaching.

Looked at through that lens, we have not only a need but an obligation to move beyond student evaluations in gauging the effectiveness of teaching. We simply must add dimension and nuance to the process, much as we already do with evaluation of research.

So how do we do that?

At CTE, we have developed a rubric to help departments integrate information from faculty members, peers, and students. Student evaluations are a part of the mix, but only a part. Rather, we have tried to help departments draw the many facets of teaching into a format that provides a richer, fairer evaluation of instructor effectiveness without adding onerous time burdens for evaluators.

For the most part, this approach uses the types of materials that faculty members already submit and that departments gather independently: syllabi and course schedules; teaching statements; readings, worksheets and other course materials; assignments, projects, test results and other evidence of student learning; faculty reflections on student learning; peer evaluations from team teaching and class visits; and formal discussions about the faculty member’s approach to teaching.

Departments then use the rubric to evaluate that body of work, rewarding faculty members who engage in such approaches as:

  • experimenting with innovative teaching techniques
  • aligning course content with learning goals
  • making effective use of class time
  • using research-based teaching practices
  • engaging students in hands-on learning rather than simply delivering information to them
  • revising course content and design based on evidence and reflection
  • mentoring students, and providing evidence of student learning
  • sharing their work through presentations, scholarship, committee work and other venues

Departments can easily adapt the rubric to fit particular disciplinary expectations and to weight areas most meaningful to their discipline. We have already received feedback from many faculty members around the university. We’ve also asked a few departments to test the rubric as they evaluate faculty members for promotion and tenure, third-year review, and post-tenure review, and we plan to test it more broadly in the fall.

We will continue to refine the rubric based on the feedback we receive. Like teaching itself, it will be a constant work in progress. We see it as an important step toward making innovative teaching more visible, though, and toward making teaching a more credible and meaningful part of the promotion and tenure process. If you’d like to be part of that, let us know.

****

This article also appears in Teaching Matters, a publication of the Center for Teaching Excellence.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

Women teach a sizable majority of online courses at KU, even though men make up a sizable majority of the university’s faculty.

Data provided by Laura Diede, the associate director at the Center for Online and Distance Learning, shows that of 171 online courses that CODL worked with in the 2014-15 school year, 60 percent were taught by women.

That’s especially interesting when you consider that of 1,649 faculty members on the Lawrence campus that fiscal year, only 42 percent were women.

I’ve not been able to find comparable data for online courses nationally, so I have no way to know whether the dominance of women in online teaching is unusual or not. In general, faculty members have been highly skeptical of online courses. In a recent survey by Inside HigherEd, more than half of faculty members said they didn’t think online courses could achieve the same level of learning as in-person classes. That attitude makes many long-time faculty members resistant to, if not hostile toward, online teaching.

Still, Diede and I puzzled over the high percentage of women teaching online courses at KU. We came up with three possibilities:

  • Women are more willing than men to try new approaches to teaching.
  • Women prefer the flexibility that online teaching provides.
  • Men, who are more likely to have tenure, are more likely to refuse to teach online courses.

That last possibility seems the most likely, although there may be factors we hadn’t thought about. Whatever the case, it will be interesting to see whether this trend grows as the number of online courses grows.

Female instructors score slightly higher in online course evaluations

The data that Diede provided about online courses also showed another interesting facet of online teaching at KU: Female instructors score slightly higher than their male counterparts on student evaluations.

This runs counter to a widely publicized recent study (and widespread perceptions) that argued that student evaluations are inherently biased against female instructors. I’m not going to wade into that debate here other than to say that student evaluations of teaching are problematic on many levels. The reliance on them as the sole measure of teaching quality benefits no one.

Fear and loathing about good teachers

The observation below is from Richard M. Felder, a professor emeritus at North Carolina State University. It was reprinted this week on Tomorrow’s Professor. It’s something I wonder about frequently and have talked about repeatedly with colleagues who value high-quality teaching:

“Some departments I know, including mine, have in the past hired faculty members who were exciting and innovative teachers and who didn’t do research. Some departments I know, again including mine, have hired former professionals with decades of practical experience who also didn’t do research. Both groups of faculty members did beautifully, teaching core courses brilliantly and serving as supportive advisors, mentors, and role models to the undergraduates who planned to go into business or industry after graduation. Professors like that are the ones students remember fondly years later, and endow scholarships and student lounges and sometimes buildings in honor of. And yet the thought of bringing one or two of them into a 20-person department faculty instead of hiring yet another research scholar who looks pretty much like the other 18 or 19 already there is unthinkable to many administrators and professors. Why is that?”


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.
