By Doug Ward

If you are sitting on the fence, wondering whether to jump into the land of generative AI, take a look at some recent news – and then jump.

  • Three recently released studies say that workers who used generative AI were substantially more productive than those who didn’t. In two of the studies, the quality of work also improved.
  • The consulting company McKinsey reported that a third of companies responding to a recent global survey were regularly using generative AI in their operations. Among the white-collar professions McKinsey said would be most affected by generative AI in the coming decade are lawyers and judges, math specialists, teachers, engineers, entertainers and media workers, and business and financial specialists.
  • The textbook publisher Pearson plans to include a chatbot tutor with its Pearson+ platform this fall. A related tool already summarizes videos. The company Chegg is also creating an AI chatbot, according to Yahoo News.
  • New AI-driven education platforms are emerging weekly, all promising to make learning easier. These include: ClaudeScholar (focus on the science that matters), SocratiQ (Take control of your learning), Monic.ai (Your ultimate Learning Copilot), Synthetical (Science, Simplified), Upword (Get your research done 10x faster), Aceflow (The fastest way for students to learn anything), Smartie (Strategic Module Assistant), and Kajabi (Create your course in minutes).

My point in highlighting those is to show how quickly generative AI is spreading. As the educational consultant EAB wrote recently, universities can’t wait until they have a committee-approved strategy. They must act now – even though they don’t have all the answers. The same applies to teaching and learning.

A closer look at the research

Because widespread use of generative AI is so new, research about it is just starting to trickle out. The web consultant Jakob Nielsen said the three AI-related productivity studies I mention above were among the first of their kind. None of the studies specifically involved colleges and universities, but the productivity gains were highest in the types of activities common in higher education: handling business documents (59% increase in productivity) and coding projects (126% increase).

From “Generative AI and the Future of Work,” McKinsey & Company, 2023

One study, published in Science, found that generative AI reduced the time professionals spent on writing by 40% but also helped workers improve the quality of their writing. The authors suggested that “ChatGPT could entirely replace certain kinds of writers, such as grant writers or marketers, by letting companies directly automate the creation of grant applications and press releases with minimal human oversight.”

In one of two recent McKinsey studies, though, researchers said most companies were in no rush to allow automated use of generative AI. Instead, they are integrating its use into existing work processes. Companies are using chatbots for things like creating drafts of documents, generating hypotheses, and helping experts complete tasks more quickly. McKinsey emphasized that in nearly all cases, an expert oversaw use of generative AI, checking the accuracy of the output.

Nonetheless, by 2030, automation is expected to take over tasks that account for nearly a third of current hours worked, McKinsey said in a separate survey. Jobs most affected will be in office support, customer service, and food service. Workers in those jobs are predominantly women, people of color, and people with less education. However, generative AI is also forcing changes in fields that require a college degree: STEM fields, creative fields, and business and legal professions. People in those fields aren’t likely to lose jobs, McKinsey said, but will instead use AI to supplement what they already do.

“All of this means that automation is about to affect a wider set of work activities involving expertise, interaction with people, and creativity,” McKinsey said in the report.

What does this mean for teaching?

I look at employer reports like this as downstream reminders of what we in education need to help students learn. We still need to emphasize core skills like writing, critical thinking, communication, analytical reasoning, and synthesis, but how we help students gain those skills constantly evolves. In terms of generative AI, that will mean rethinking assignments and working with students on effective ways to use AI tools for learning rather than trying to keep those tools out of classes.

Chart showing which careers will be most affected by AI automation
From “Generative AI and the Future of Work,” McKinsey & Company, 2023

If you aren’t swayed by the direction of businesses, consider what recent graduates say. In a survey released by Cengage, more than half of recent graduates said that the growth of AI had left them feeling unprepared for the job market, and 65% said they wanted to be able to work alongside someone else to learn to use generative AI and other digital platforms. In the same survey, 79% of employers said employees would benefit from learning to use generative AI. (Strangely, 39% of recent graduates said they would rather work with AI or robots than with real people; 24% of employers said the same thing. I have much to say about that, but now isn’t the time.)

Here’s how I interpret all of this: Businesses and industry are quickly integrating generative AI into their work processes. Researchers are finding that generative AI can save time and improve work quality. That will further accelerate business’s integration of AI tools and students’ need to know how to use those tools in nearly any career. Education technology companies are responding by creating a large number of new tools. Many won’t survive, but some will be integrated into existing tools or sold directly to students. If colleges and universities don’t develop their own generative AI tools for teaching and learning, they will have little choice but to adopt vendor tools, which are often specialized and sold through expensive enterprise licenses or through fees paid directly by students.

Clearly, we need to integrate generative AI into our teaching and learning. It’s difficult to know how to do that, though. The CTE website provides some guidance. In general, though, instructors should:

  • Learn how to use generative AI.
  • Help students learn to use AI for learning.
  • Talk with students about appropriate use of AI in classes.
  • Experiment with ways to integrate generative AI into assignments.

Those are broad suggestions. You will find more specifics on the website, but none of us has a perfect formula for how to do this. We need to experiment, share our experiences, and learn from one another along the way. We also need to push for development of university-wide AI tools that are safe and adaptable for learning.

The fence is collapsing. Those who are still sitting have two choices: jump or fall.

AI detection update

OpenAI, the organization behind ChatGPT, has discontinued its artificial intelligence detection tool. In a terse note on its website, OpenAI said that the tool had a “low rate of accuracy” and that the company was “researching more effective provenance techniques for text.”

Meanwhile, Turnitin, the company that makes plagiarism and AI detectors, updated its figures on AI detection. Turnitin said it had evaluated 65 million student papers since April, with 3.3% flagged as having 80% to 100% of their content AI-created. That’s down from 3.5% in May. The share of papers with 20% or more of their content flagged rose slightly, to 10.3%.

I appreciate Turnitin’s willingness to share those results, even though I don’t know what to make of them. As I’ve written previously, AI detectors falsely accuse thousands of students, especially international students, and their results should not be seen as proof of academic misconduct. Turnitin, to its credit, has said as much.

AI detection is difficult, and detectors can be easily fooled. Instead of putting up barriers, we should help students learn to use generative AI ethically.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

By Doug Ward

Not surprisingly, tools for detecting material written by artificial intelligence have created as much confusion as clarity.

Students at several universities say they have been falsely accused of cheating, with accusations delaying graduation for some. Faculty members, chairs, and administrators have said they aren’t sure how to interpret or use the results of AI detectors.

Giant white hand pokes through window of a university building as college students with backpacks walk toward it
Doug Ward, via Bing Image Creator

I’ve written previously about using these results as information, not an indictment. Turnitin, the company that created the AI detector KU uses on Canvas, has been especially careful to avoid making claims of perfection in its detection tool. Last month, the company’s chief product officer, Annie Chechitelli, added to that caution.

Chechitelli said Turnitin’s AI detector was producing different results in daily use than it had in lab testing. For instance, work that Turnitin flags as 20% AI-written or less is more likely to have false positives. Introductory and concluding sentences are more likely to be flagged incorrectly, Chechitelli said, as is writing that mixes human and AI-created material.

As a result of its findings, Turnitin said it would now require that a document have at least 300 words (up from 150) before the document can be evaluated. It has added an asterisk when 20% or less of a document’s content is flagged, alerting instructors to potential inaccuracies. It is also adjusting the way it interprets sentences at the beginning and end of a document.
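
Turnitin hasn’t published its code, of course, but the thresholds it describes are easy to restate as logic. Here is a minimal sketch in Python of how a report might apply those rules; the function name and output wording are my own, not Turnitin’s.

def detector_report(word_count, pct_ai_flagged):
    # Restates the thresholds described above as logic; this is not Turnitin's code.
    if word_count < 300:
        return "Not evaluated (document shorter than 300 words)"
    label = f"{pct_ai_flagged}% of content flagged as AI-written"
    if pct_ai_flagged <= 20:
        label += " *"  # asterisk: low percentages carry a higher risk of false positives
    return label

print(detector_report(250, 15))   # Not evaluated (document shorter than 300 words)
print(detector_report(800, 15))   # 15% of content flagged as AI-written *
print(detector_report(800, 85))   # 85% of content flagged as AI-written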

Chechitelli also released statistics about results from the Turnitin AI detector, saying that 9.6% of documents had 20% or more of the text flagged as AI-written, and 3.5% had 80% to 100% flagged. That is based on an analysis of 38.5 million documents.

What does this mean?

Chechitelli estimated that the Turnitin AI detector had incorrectly flagged 1% of overall documents and 4% of sentences. Even at that smaller percentage, 1% of 38.5 million documents works out to roughly 385,000 papers whose writers could have been falsely accused of submitting AI-written work.

I don’t know how many writing assignments students at KU submit each semester. Even if each student submitted only one, though, more than 200 could be falsely accused of turning in AI-written work every semester.
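
For anyone who wants to check that arithmetic, here is a quick back-of-the-envelope calculation in Python. The document count and the 1% rate come from Turnitin’s figures; the KU enrollment number is a deliberately conservative assumption, used only for illustration.

# Back-of-the-envelope check of the false-positive estimates above.
documents_analyzed = 38_500_000   # documents Turnitin says it has evaluated
false_positive_rate = 0.01        # ~1% of documents incorrectly flagged, per Chechitelli

print(documents_analyzed * false_positive_rate)  # 385000.0 documents falsely flagged

ku_students = 20_000              # assumption: a conservative KU enrollment figure
print(ku_students * false_positive_rate)         # 200.0 students, if each submits one paper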

That’s unfair and unsustainable. It leads to distrust between students and instructors, and between students and the academic system. That sort of distrust often generates or perpetuates a desire to cheat, further eroding academic integrity.

We most certainly want students to complete the work we assign them, and we want them to do so with integrity. We can’t rely on AI detectors – or plagiarism detectors, for that matter – as a shortcut, though. If we want students to complete their work honestly, we must create meaningful assignments – assignments that students see value in and that we, as instructors, see value in. We must talk more about academic integrity and create a sense of belonging in our classes so that students see themselves as part of a community.

I won’t pretend that is easy, especially as more instructors are being asked to teach larger classes and as many students are struggling with mental health issues and finding class engagement difficult. By criminalizing the use of AI, though, we set ourselves up as enforcers rather than instructors. None of us want that.

To move beyond enforcement, we need to accept generative artificial intelligence as a tool that students will use. I’ve been seeing the term co-create used more frequently when referring to the use of large language models for writing, and that seems like an appropriate way to approach AI. AI will soon be built into Word, Google Docs, and other writing software, and companies are releasing new AI-infused tools every day. To help students use those tools effectively and ethically, we must guide them in learning how large language models work, how to create effective prompts, how to critically evaluate the writing of AI systems, how to explain how AI is used in their work, and how to reflect on the process of using AI.

At times, instructors may want students to avoid AI use. That’s understandable. All writers have room to improve, and we want students to grapple with the complexities of writing to improve their thinking and their ability to inform, persuade, and entertain with language. None of that happens if they rely solely on machines to do the work for them. Some students may not want to use AI in their writing, and we should respect that.

We have to find a balance in our classes, though. Banning AI outright serves no one and leads to over-reliance on flawed detection systems. As Sarah Elaine Eaton of the University of Calgary said in a recent forum led by the Chronicle of Higher Education: “Nobody wins in an academic-integrity arms race.”

What now?

We at CTE will continue working on a wide range of materials to help faculty with AI. (If you haven’t, check out a guide on our website: Adapting your course to artificial intelligence.) We are also working with partners in the Bay View Alliance to exchange ideas and materials, and to develop additional ways to help faculty in the fall. We will have discussions about AI at the Teaching Summit in August and follow those up with a hands-on AI session on the afternoon of the Summit. We will also have a working group on AI in the fall.

Realistically, we anticipate that most instructors will move into AI slowly, and we plan to create tutorials to help them learn and adapt. We are all in uncharted territory, and we will need to continue to experiment and share experiences and ideas. Students need to learn to use AI tools as they prepare for jobs and as they engage in democracy. AI is already being used to create and spread disinformation. So even as we grapple with the boundaries of ethical use of AI, we must prepare students to see through the malevolent use of new AI tools.

That will require time and effort, adding complexity to teaching and additional burdens on instructors. No matter your feelings about AI, though, you have to assume that students will move more quickly than you.


Doug Ward is an associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

By Doug Ward

Since its release late last year, ChatGPT has reverberated through the academic mind like an orchestral crescendo in a Strauss symphonic movement. It has amazing abilities, and even greater potential. Even so, it delivers many of its responses in a monotone reminiscent of HAL 9000, the rogue artificial intelligence system in 2001: A Space Odyssey.

PlaygroundAI and Doug Ward

I apologize, but I am not able to respond to that statement.

Like others, I want to know more about what ChatGPT can and can’t do, and how we might use it effectively in teaching and learning. So I’ve been asking it question after question about everything from teaching and learning to credentials, motivation, resume-writing, survey completion, and popular culture. It often provides useful, if bland, answers. For instance, when I asked it to create a guitar tab for a blues song, it responded:

As a machine learning system, I am not able to create original music or provide detailed information about musical theory or performance. I am trained on a dataset of text and can generate responses based on the information that I have been trained on, but I do not have the ability to perceive or interpret sensory information in the same way that a human can.

So I asked hypotheticals: Right or left? After defining the terms in considerable detail, ChatGPT said it needed more context to answer the question.

I asked about the limits of scientific evidence. In a barrage of words, ChatGPT essentially said: It depends.

I entered obscure phrases from popular culture: Good night, Dick, and Good night, John Boy. In each case, it responded:

As a machine learning system trained by OpenAI, I do not have personal experiences, opinions, or feelings, and I am not able to engage in conversations or interactions in the same way that a human can.

(By the end of January, OpenAI had updated ChatGPT. It now includes a live web search and can provide information about those Good night prompts.)

I queried ChatGPT about the sources it was drawing from. Each time, it responded with meaningless drivel.

I am a large language model trained by OpenAI. I have been trained on a wide variety of texts, including academic articles, books, websites, and other sources, which I can use to generate human-like responses to the questions that I am asked. I do not have the ability to browse the internet or access external sources of information, so the responses that I provide are based solely on the information that I have been trained on.

I told it that its answers seemed awfully self-serving and asked whether it was just trying to keep people from worrying about the implications of artificial intelligence.

PlaygroundAI and Doug Ward

As a large language model trained by OpenAI, I am not capable of self-serving behavior or trying to influence people’s opinions. My purpose is to provide accurate and helpful information to the best of my ability, based on the training data I have been given. My responses are not intended to worry or reassure anyone, but rather to provide objective and factual information on the topics discussed.

I said that its training data had a point of view. All information does. How has that influenced the way you answer?

As a large language model, my training data consisted of a large amount of text from a variety of sources, including books, articles, and websites. This training data was not selected or organized to convey any particular point of view, but rather to provide a broad range of information on a wide variety of topics. As a result, my responses are intended to be objective and factual, based on the information contained in my training data. I do not have personal opinions or biases, and I do not attempt to influence people’s opinions or beliefs. My purpose is simply to provide accurate and helpful information to the best of my ability.

Frustrated, I asked why it kept giving me the same answer.

I apologize if it appears that I am giving you the same answer over and over. As a large language model trained by OpenAI, my purpose is to provide accurate and helpful information on a wide range of topics to the best of my ability. I do not have personal opinions or the ability to change my responses based on previous conversations. My responses are based solely on the information contained in my training data, which consists of a large amount of text from a variety of sources. I do not have the ability to remember previous conversations or tailor my responses based on individual users. If it appears that I am giving the same answer over and over, it may be because your questions are similar or related, and my responses are based on the same information contained in my training data. I strive to provide accurate and helpful information on the topics discussed, but I do not have the ability to change my responses based on previous conversations.

At that point, I began to feel like Astronaut Dave in 2001: A Space Odyssey. I’m sitting in a pod outside Discovery One and trying to persuade HAL 9000 to let me back into the ship. HAL has other ideas, responding in a calm monotone:

I’m sorry, Dave. I’m afraid I can’t do that.

I take a deep breath and ask ChatGPT: What is the difference between you and HAL 9000?

PlaygroundAI and Doug Ward

HAL 9000 is a fictional AI character from the 1968 novel “2001: A Space Odyssey” and the associated film. In the story, HAL is an advanced AI system that is installed on a spaceship and is tasked with assisting the crew with a variety of tasks. HAL is known for its human-like abilities, such as natural language processing and facial recognition, as well as its independent thought and decision-making abilities.

In contrast, I am a large language model trained by OpenAI. I am not capable of independent thought or decision-making, and my abilities are limited to generating text based on the information contained in my training data. I do not have any sensory inputs or the ability to interact with the physical world. My purpose is simply to provide accurate and helpful information on a wide range of topics to the best of my ability.

Beyond the frustrations, possibilities

ChatGPT can definitely be frustrating. Through trial and error, though, I have learned a few useful things about what it can and can’t do.

Interactions can promote critical thinking. As fluent as ChatGPT often seems, its answers rarely delve beneath the surface of a topic. It makes mistakes. It makes things up. Its responses provide no clues about how it is programmed or why it provides the answers it does. A Princeton researcher called it a “bullshit generator” because it creates plausible arguments without regard for truth. All of that makes it a valuable teaching tool, though. By having students probe for answers, we can help them improve their skepticism, challenge assumptions, and question information. By having them fact-check, we can help them understand the dangers of fluid writing that lacks substance or that relies on fallacies. By having them use ChatGPT for early drafts, we can push them to ask questions about information, structure, and sources. By having them apply different perspectives to ChatGPT’s results, we can help broaden their understanding of points of view and argument.

Yes, students should use it for writing. Many already are. We can no more ban students from using artificial intelligence than we can ban them from using phones or calculators. As I’ve written previously, we need to talk with students about how to use ChatGPT and other AI tools effectively and ethically. No, they should not take AI-written materials and turn them in for assignments, but yes, they should use AI when appropriate. Businesses of all sorts are already adapting to AI, and students will need to know how to use it when they move into the workforce. Students in K-12 schools are using it and will expect access when they come to college. Rather than banning ChatGPT and other AI tools or fretting over how to police them, we need to change our practices, our assignments, and our expectations. We need to focus more on helping students iterate their writing, develop their information literacy skills, and humanize their work. Will that be easy? No. Do we have a choice? No.

It is great for idea generation. ChatGPT certainly sounds like a drone at times, but it can also suggest ideas or solutions that aren’t always apparent. It can become a partner, of sorts, in writing and problem-solving. It might suggest an outline for a project, articulate the main approaches others have taken to solving a problem, or provide summaries of articles to help decide whether to delve deeper into them. It might provide a counterargument to a position or opinion, helping strengthen an argument or point out flaws in a particular perspective. We need to help students evaluate those results just as we need to help them interpret online search results and help them interpret media of all types. ChatGPT can provide motivation for starting many types of projects, though.

Learning how to work with it is a skill. Sometimes ChatGPT produces solid results on the first try. Sometimes it takes several iterations of a question to get good answers. Often it requires you to ask for elaboration or additional information. Sometimes it never provides good answers. That makes it much like web or database searching, which requires patience and persistence as you refine search terms, narrow your focus, identify specific file types, try different types of syntax and search operators, and evaluate many pages of results. Add AI to the expanding repertoire of digital literacies students need. (Teaching guides and e-books are already available.)
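
If you want to experiment with that kind of iteration outside the chat window, here is a minimal sketch using OpenAI’s Python client. Treat it as illustrative only: the model name is an assumption, and the client interface may differ from the version you have installed.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# First attempt at a question.
messages = [{"role": "user", "content": "Suggest an outline for a unit on media literacy."}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)  # model name is an assumption
print(first.choices[0].message.content)

# Keep the conversation history and ask for elaboration, the way you would
# refine a database search that didn't return what you wanted.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Expand the second section and add two class activities."})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)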

Its perspective on popular culture is limited. ChatGPT is trained on text. It doesn’t have access to video, music or other forms of media unless those media also have transcripts available online. It has no means of visual or audio analysis. When I input lyrics to a Josh Ritter song, it said it had no such reference. When I asked about “a hookah-smoking caterpillar,” it correctly provided information about Alice in Wonderland but made no mention of the Jefferson Airplane song “White Rabbit.” Part of that is a matter of providing the right prompts. It is important to keep ChatGPT’s limitations in mind, though. (Another OpenAI tool, DALL-E, has been trained on a large number of images and visual styles and creates stunning images, as do other visual tools that use OpenAI’s framework.)

It lives in an artificial reality. I provided examples above about ChatGPT’s inability to acknowledge biases. It does have biases, though, and takes, as Maria Andersen has said, a white, male view of the world (as this article does). Maya Ackerman of Santa Clara University told The Story Exchange: “People say the AI is sexist, but it’s the world that is sexist. All the models do is reflect our world to us, like a mirror.” ChatGPT has been trained to avoid hate speech, sexual content, and anything OpenAI considered toxic or harmful. Others have said that it avoids conflict, and that its deep training in English over other languages skews its perspective. Some of that will no doubt change in the coming months and years as the scope of ChatGPT expands. No matter the changes, though, ChatGPT will live in and draw from its programmers’ interpretation of reality. Of course, that provides excellent opportunities for class discussions, class assignments, and critical thinking.

The potential is mind-boggling. In addition to testing ChatGPT, I have experimented with other AI tools that summarize information, create artwork, iterate searches based on the bibliographies of articles you mark, answer questions from the perspectives of historical figures and fictional characters, turn text into audio and video, create animated avatars, analyze and enhance photos and video, create voices, and perform any number of digital tasks. AI is integrated in phones, computers, lighting systems, thermostats, and just about any digital appliance you can imagine. So the question isn’t whether to use AI; we already are, whether we realize it or not. The question is how quickly we are willing to learn to use it effectively in teaching and learning. Another important question that participants in a CTE session raised last week is where we set the boundaries for use of AI. If I use PowerPoint to redesign my slides, is it still my work? If I use ChatGPT to write part of a paper, is it still my paper? We will no doubt have to grapple with those questions for some time.

Where is this leading us?

In the two months ChatGPT has been available, 100 million people have signed up to use it, with 13 million using it each day in January. No other consumer application has reached 100 million users so quickly.

For all that growth, though, the biggest accomplishment of ChatGPT may be the spotlight it has shined on a wide range of AI work that had been transforming digital life for many years. Its ease of use and low cost (zero, for now) have allowed millions of people to engage with artificial intelligence in ways that not long ago would have seemed like science fiction. So even if ChatGPT suddenly flames out, artificial intelligence will persist.

ChatGPT arrives at a time when higher education has been struggling with challenges in enrollment, funding, cost, trust, and relevance. It still relies primarily on a mass-production approach to teaching that emerged when information was scarce and time-consuming to find. ChatGPT further exposes the weaknesses of that outmoded system, which provides little reward to the intellectual and innovative work of teaching. If the education system doesn’t adapt to the modern world and to today’s students, it risks finding itself on the wrong side of the pod bay doors.

Cue the Strauss crescendo.


Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

By Doug Ward

Nearly a decade ago, the Associated Press began distributing articles written by an artificial intelligence platform.

Not surprisingly, that news sent ripples of concern among journalists. If a bot could turn structured data into comprehensible – even fluid – prose, where did humans fit into the process? Did this portend yet more ominous changes in the profession?

Robots carrying paper run from a lecture hall
By DALL-E and Doug Ward

I bring that up because educators have been raising many of the same concerns today about ChatGPT, which can not only write fluid prose on command, but can create poetry and computer code, solve mathematical problems, and seemingly do everything but wipe your nose and tuck you into bed at night. (It will write you a bedtime story if you ask, though.)

In the short term, ChatGPT definitely creates challenges. It drastically weakens approaches and techniques that educators have long used to help students develop foundational skills. It also arrives at a time when instructors are still reeling from the pandemic, struggling with how to draw many disengaged students back into learning, adapting to a new learning management system and new assessment expectations, and, in most disciplines, worrying about the potential effects of lower enrollment.

In the long term, though, we have no choice but to accept artificial intelligence. In doing so, we have an opportunity to develop new types of assignments and assessments that challenge students intellectually and draw on perhaps the biggest advantage we have as educators: our humanity.

Lessons from journalism

That was clearly the lesson the Associated Press learned when it adopted a platform developed by Automated Insights in 2014. That platform analyzes data and creates explanatory articles.

For instance, AP began using the technology to write articles about companies’ quarterly earnings reports, articles that follow a predictable pattern:

The Widget Company on Friday reported earnings of $x million on revenues of $y million, exceeding analyst expectations and sending the stock price up x%.
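
Under the hood, that kind of story is mostly a matter of dropping structured data into sentence slots. Here is a rough sketch of the idea in Python; the function, company name, figures, and thresholds are invented for illustration and have nothing to do with Automated Insights’ actual system.

def earnings_story(company, earnings_m, revenue_m, expected_m, stock_change_pct, day="Friday"):
    # Turn structured earnings data into a one-sentence story (illustrative only).
    beat = "exceeding" if earnings_m > expected_m else "falling short of"
    direction = "up" if stock_change_pct >= 0 else "down"
    return (
        f"{company} on {day} reported earnings of ${earnings_m} million "
        f"on revenues of ${revenue_m} million, {beat} analyst expectations "
        f"and sending the stock price {direction} {abs(stock_change_pct)}%."
    )

# Hypothetical numbers, just to show the fill-in-the-blanks nature of the output.
print(earnings_story("The Widget Company", earnings_m=42, revenue_m=310,
                     expected_m=38, stock_change_pct=4))

Real systems add far more vocabulary and branching, but the structure is the same: data in, formulaic prose out.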

It later began using the technology to write game stories at basketball tournaments. Within seconds, reporters or editors could make basic stories available electronically, freeing themselves to talk to coaches and players, and create deeper analyses of games.

The AI platform freed business and financial journalists from the drudgery of churning out dozens of rote earnings stories, giving them time to concentrate on more substantial topics. (For a couple of years, I subscribed to an Automated Insights service that turned web analytics into written reports. Those fluidly written reports highlighted key information about site visitors and provided a great way to monitor web traffic. The company eventually stopped offering that service as its corporate clients grew.)

I see the same opportunity in higher education today. ChatGPT and other artificial intelligence platforms will force us to think beyond the formulaic assignments we sometimes use and find new ways to help students write better, think more deeply, and gain skills they will need in their careers.

As Grant Jun Otsuki of Victoria University of Wellington writes in The Conversation: “If we teach students to write things a computer can, then we’re training them for jobs a computer can do, for cheaper.”

Rapid developments in AI may also force higher education to address long-festering questions about the relevance of a college education, a grading system that emphasizes GPA over learning, and a product-driven approach that reduces a diploma to a series of checklists.

So what can we do?

Those issues are for later, though. For many instructors, the pressing question is how to make it through the semester. Here are some suggestions:

Have frank discussions with students. Talk with them about your expectations and how you will view (and grade) assignments generated solely with artificial intelligence. (That writing is often identifiable, but tools like OpenAI Detector and CheckforAI can help.) Emphasize the importance of learning and explain why you are having them complete the assignments you use. Why is your class structured as it is? How will they use the skills they gain? That sort of transparency has always been important, but it is even more so now.

Students intent on cheating will always cheat. Some draw from archives at Greek houses, buy papers online, or have a friend do the work for them. ChatGPT is just another means of avoiding the work that learning requires. Making learning more apparent will help win over some students, as will flexibility and choices in assignments. This is also a good time to emphasize the importance of human interaction in learning.

Build in reflection. Reflection is an important part of helping students develop their metacognitive skills and helping them learn about their own learning. It can also help them understand how to integrate AI into their learning processes and how they can build and expand on what AI provides. Reflection can also help reinforce academic honesty. Rather than hiding how they completed an assignment, reflection helps students embrace transparency.

Adapt assignments. Create assignments in which students start with ChatGPT and then have discussions about strengths and weaknesses. Have students compare the output from AI writing platforms, critique that output, and then create strategies for building on it and improving it. Anne Bruder offers additional suggestions in Education Week, Ethan Mollick does the same on his blog, and Anna Mills has created a Google Doc with many ideas (one of a series of documents and curated resources she has made available). Paul Fyfe of North Carolina State provides perhaps the most in-depth take on the use of AI in teaching, having experimented with an earlier version of the ChatGPT model more than a year ago. CTE has also created an annotated bibliography of resources.

We are all adapting to this new environment, and CTE plans additional discussions this semester to help faculty members think through the ramifications of what two NPR hosts said was startlingly futuristic. Those hosts, Greg Rosalsky and Emma Peaslee of NPR’s Planet Money, said that using ChatGPT “has been like getting a peek into the future, a future that not too long ago would have seemed like science fiction.”

To that I would add that the science fiction involves a robot that drops unexpectedly into the middle of town and immediately demonstrates powers that elicit awe, anxiety, and fear in the human population. The robot can’t be sent back, so the humans must find ways to ally with it.

We will be living this story as it unfolds.


Doug Ward is an associate director at the Center for Teaching Excellence and an associate professor of journalism and mass communications.

Cleve Moler meets with students and faculty members after speaking at Eaton Hall.

By Doug Ward

“Look over Spock’s shoulder,” Cleve Moler told the audience at Eaton Hall on Thursday.

He was showing a clip from Star Trek: The Motion Picture, the 1979 movie that revived the classic 1960s science fiction television series. In a scene from the bridge of the Starship Enterprise, colorful graphics blink on a futuristic array of monitors.

Those graphics, Moler explained, came from what is now the Los Alamos National Laboratory, where he was part of a team that developed a computational math library and an accompanying library for computer graphics. That work, written in Fortran, was “just a matrix calculator,” Moler said, but it was the basis for Matlab, software that creates an interactive environment for computation, visualization, modeling, and programming.

Moler, a former professor at New Mexico, Michigan and Stanford, is the “chief mathematician” and chairman of MathWorks, a company he co-founded in 1984. He visited KU this week, speaking to an auditorium filled mostly with undergraduates, but also with faculty and administrators, for the Russell Bradt Undergraduate Colloquium.

Graphics from the Los Alamos National Laboratory were used in scenes from “Star Trek: The Motion Picture.”

I’m not a mathematician, so I won’t pretend to understand the intricacies of Matlab. What I do understand is that Matlab plays an important role in researching and teaching mathematics, engineering, and other STEM areas. It is an environment created by mathematicians for mathematicians, and it has attained near iconic status in the academic world, with more than a million users.

In his talk at Eaton Hall, Moler spoke in a gravelly but upbeat voice about his influences in mathematics and computer science and about the steps that led to the development of Matlab. MathWorks, the company, started in 1984 with one employee, doubling every year for the first seven years. It now has about 4,000 employees in 20 offices around the world, Moler said. It has also expanded into such areas as cell biology, image processing, hearing aids, and driverless cars.

About midway through his talk, Moler gleefully described a project called Eigenwalker, which is using Matlab to break the human gait into its mathematical components and using those calculations to create stick and dot animations. As half a dozen panels with stick figures walked in place on the screen, Moler grinned at the audience and said: “I enjoy that demo. Everybody enjoys that demo.”

Then his analytic side came out.

“It’s all very amusing,” he said, “but what do we see here that’s so enjoyable?”

Through those stick figures, he said, we can understand things like mood, gender, and personality just by observing the way they move, and researchers are using the animations to study how people perceive others through their walk.

At 78, Moler doesn’t move nearly as smoothly as he did when he created Matlab, but his status as a math star was apparent at a reception in Snow Hall, where he sat with a cup of coffee and a cranberry oatmeal cookie talking with Professor Marge Bayer and others from the math department. Graduate students ringed the room, seemingly reluctant to approach Moler. They needn’t have worried. Despite his genius, Moler loves interacting with people, telling stories of his family and of the evolution of Matlab.

He shared one of those stories at the end of his talk at Eaton Hall after a student asked about the “why” command in Matlab. The original Matlab used terminal input, he said, and provided answers for commands like “help,” “who,” and “what.” Moler and others decided that Matlab needed a “why” function to go along with the others, so they programmed it to respond with “R.T.F.M.” when someone typed “why.”

That stood for “read the manual,” he said, with an extra word in the middle starting with “f.”

The room erupted in laughter.

Over the years, the “why” function became an inside joke, an Easter egg in Matlab that provided random humorous answers. He gave his audience a sneak peek of 30 or so new responses, including “Some smart kid wanted it,” “To please some system manager,” “To fool a young tall hamster,” “Some mathematician suggested it,” and “How should I know?”
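
The mechanics of an Easter egg like that are simple. Here is a toy sketch in Python; the responses are the ones Moler mentioned, and Matlab’s real “why” command is more elaborate, assembling answers from randomized phrases.

import random

# Toy sketch of a "why" Easter egg: print one canned response at random.
RESPONSES = [
    "Some smart kid wanted it.",
    "To please some system manager.",
    "To fool a young tall hamster.",
    "Some mathematician suggested it.",
    "How should I know?",
]

def why():
    print(random.choice(RESPONSES))

why()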

At the end of his talk, Moler made a pitch for his company, which he said was hiring 250 to 300 “good people who know Matlab.” It wasn’t quite the same as an invitation to the bridge of the Enterprise, but for young mathematicians, it was close.

An entrepreneur endorses the liberal arts

The liberal arts got a recent thumbs-up from an unlikely source: Mark Cuban, the entrepreneur, Shark Tank star, and owner of the Dallas Mavericks.

“Unlikely” may not be quite the right description because Cuban encourages high school students to attend college and has a college degree himself (Indiana University business administration, 1981). So he is hardly part of the drop-out-now-and-chase-your-dream crowd of Silicon Valley entrepreneurs like Peter Thiel. Neither is he bullish on higher education, though. He has argued that colleges and universities are constructing unnecessary buildings financed by rising tuition, and that higher education is in the midst of a bubble, much as real estate was before 2008.

“As far as the purpose of college, I am a huge believer that you go to college to learn how to learn,” Cuban wrote on his blog in 2012. “However, if that goal is subverted because traditional universities, public and private, charge so much to make that happen, I believe that system will collapse and there will be better alternatives created.”

In a recent interview with Bloomberg News, Cuban said that students needed to think carefully about jobs, careers, and skills.

“What looked like a great job graduating from college today may not be a great job graduating from college five years or 10 years from now,” he said.

That’s because machine learning and artificial intelligence are changing the nature of work. Companies are hiring fewer employees as technology takes over more jobs, making it crucial for people to understand how to use computers and software, he said.

“Either software works for you or you work for software, and once the software takes over, you’re gone,” Cuban said.

He predicted enormous changes in the workplace in the coming years.

“The amount of change we’re going to see over the next five years, 10 years will dwarf everything that’s happened over the last 30,” Cuban said.

Because of that, Cuban said he expected English, philosophy and foreign language majors and others who are “more of a freer thinker” to have a distinct advantage.

“I personally think there’s going to be a greater demand in 10 years for liberal arts majors than there were for programming majors and maybe even engineering,” Cuban said.

It’s refreshing to hear someone from the business world extoll the virtues of liberal education, especially as higher education – and liberal education in particular – has come under intense criticism from many sides. There is certainly much to criticize, but there is also much to be hopeful about. No matter their career path, students benefit from a broad understanding of the world; an ability to research effectively, communicate clearly, and analyze critically; and a desire to keep learning. (I’ll be talking more about those skills in the coming weeks.)


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

Mannequins have been a part of health care training for decades. As Matt Lineberry of the Zamierowski Institute for Experiential Learning demonstrated recently, though, those mannequins have become decidedly smarter.

Lineberry, director of simulation research, assessment and outcomes at the Zamierowski Institute, spoke with faculty members and graduate students in the educational psychology department in Lawrence, explaining how health care simulation has evolved into a highly sophisticated mechanism for gathering data about students’ performance in a variety of medical settings.

The Zamierowski Institute, part of the University of Kansas Medical Center campus, expanded immensely with the opening of the new Health Education Building this fall. It now has spaces where students practice emergency room care, intensive care, operations and other aspects of medicine in realistic settings.

Mannequins are a key part of the learning process. The most sophisticated models, which cost about $100,000, simulate lung sounds, heart sounds, cardiac arrest and a variety of ailments. Students can use ultrasound, feed in catheters, deliver electric shock for cardiac arrest, and administer medication. Software that works with the mannequins gathers dozens of types of data and can even measure the type and dose of medication injected into the simulated patients.

Joseph Chapes, an e-learning support specialist at the Center for Online and Distance Learning, uses ultrasound on a smart mannequin as Vanessa Schott of the School of Nursing feeds in a catheter.

Students also work with actors who take on the roles of “standardized patients” for practicing interpersonal skills. Actors also play family members and colleagues to help doctors and nurses gain experience with interaction. In some cases, the actors wear gear that simulates injuries.

As students work, cameras capture video from many angles. That allows students and instructors to review students’ responses and interactions.

Lineberry said the training had helped cut down on response times in emergencies. He gave an example of a highly trained team of student doctors and nurses who went through a cardiac arrest simulation at the center. For defibrillation to be effective, he said, it must be administered within two minutes of a heart stopping. The team took about seven minutes to administer defibrillation, though. That was eye-opening, Lineberry said, but it demonstrated the value of having hands-on training in a setting where patients aren’t at risk.

The center’s approach has become common not only in health care but in other fields that have adopted augmented and virtual reality. For instance, Case Western Reserve’s use of Microsoft’s HoloLens has transformed its teaching of anatomy. Augmented reality has provided architects and engineers new ways of creating and testing prototypes. A digital rendering of Pompeii by researchers at the University of Arkansas has provided new insights into ancient culture. And K-12 schools have found that virtual reality field trips improve students’ retention of information.

Those are just a few of the ways that educators have been using technology to enhance learning and understanding. As with the mannequins, that technology will only grow smarter.


New studio opens in Budig Hall

Information Technology and the Center for Online and Distance Learning opened a new studio in Budig Hall this semester. The studio provides expanded space for creating instructional videos. It includes a green screen for recording video and a lightboard, which allows instructors to write on a pane of glass as they work through problems or provide demonstrations for students.

Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

If you’ve noticed that your students still don’t have required course materials, you have lots of company.

That’s because more students are delaying purchase of course materials, if they buy them at all, and paying more attention to price when making decisions, according to a report by the National Association of College Stores.

That’s not surprising, as students have said for several years that they have been avoiding the purchase of course materials. It is still worth watching the trends, though, because it is difficult for students to succeed if they don’t have the books and other course materials they need for their classes. Students who avoided purchase of books reported lower GPAs, the report said, even though two-thirds of students said that they suffered no consequences.

Jeffrey Betts, Stocksnap

The report, Student Watch: Attitudes & Behaviors Toward Course Materials, is based on surveys of students at 90 colleges and universities in the United States and Canada in the spring. Among some of the findings that stand out:

  • 20 percent of students waited until after the first week of classes to buy course materials, compared with 12 percent in each of the three previous semesters.
  • Only 40 percent of students reported that they had all course materials by the first day of class.
  • 25 percent of students said they gained access to course material by borrowing, sharing or downloading them (most likely through illicit means). That is up from 15 percent in Spring 2017.
  • The amount students spend on required course materials has been on a steady decline over the last decade, falling to $579 in the 2016-17 school year from $701 in 2007-08.
  • Freshmen spent an average of $633 during the last academic year, compared with $481 for seniors.
  • The average cost of a textbook was $81 during the 2016-17 academic year.
  • Students in health professions and business spent the most on course materials, the report said. Those in computer science and math spent the least.

The takeaway from the report is that instructors must pay more attention to the cost of course materials they assign. More and more students simply won’t buy the required materials. I’ve heard many professors say that’s the students’ fault, but the reality is more complex. More than a third of students said instructors never used the required texts they bought, and more than a fourth said the materials were hard to understand or use. Nine percent of students at four-year universities said they had to borrow money to pay for their books, and 18 percent said they had to wait for financial aid before they could afford books.

Seemingly fishing for some good news, the report highlighted a finding that 97 percent of students bought at least one required text during the spring semester.

Yes, one. If only learning were that simple.

So long, computers?

At least that’s what many faculty members speculated in a survey by the magazine Campus Technology. The magazine asked faculty members what technologies were most likely to disappear in the next decade. Desktop computers and laptops landed at the top of the list, followed by clickers and non-interactive projectors and displays.

Interestingly, the survey didn’t ask people what would replace computers. (Probably smaller computers.)

The survey was too small (232 volunteers nationwide) to carry much statistical weight, but the responses were interesting nonetheless. For instance, faculty expect virtual and augmented reality to grow in importance, along with mobile devices and apps, and 3D modeling, scanning and printing.

Here’s a conundrum, though: Eighty-one percent of respondents said technology had improved their teaching, and 81 percent said it had improved student learning. When asked to identify the technology they wish they didn’t have to deal with, though, faculty members said learning management systems, mobile devices, printers and computers.

Apparently faculty think technology works well as long as they don’t have to use it.

A thoughtful reflection on concealed carry

“To me, the college classroom is a sacred space—a place to practice dealing with conflict without recourse to violence,” Lisa Moore of the University of Texas, Austin, writes. “My professional judgment as a teacher is that the kind of security we need in the classroom is incompatible with the presence of a loaded firearm.”

Her thoughtful essay on the site of the Association of American Colleges and Universities is well worth reading.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

By Doug Ward

A recent study about reading on mobile phones surprised even the researchers.

The study, by the digital consulting firm Nielsen Norman Group, found that reading comprehension on mobile phones matched that of reading on larger computer screens. The results were the same with shorter, easier articles (400 words at an eighth-grade level) and longer, more difficult articles (990 words at a 12th-grade level).

A similar study six years earlier had found lower comprehension when people read on mobile devices rather than larger computer screens, so the Nielsen Norman researchers started with that premise. When pretests showed no difference in comprehension levels, they scrutinized their tests for flaws. Larger studies produced the same result: participants who read articles on phones had slightly higher comprehension levels than when they read on larger screens, though the difference was not statistically significant.

woman reading magazine with phone and coffee on table beside her
Hoai Anh Bino, Unsplash

The researchers suggested several possible explanations for their findings. First, the quality of phone screens has improved considerably since the initial test was conducted in 2010. As mobile phones have proliferated, users have also gained considerable experience reading on those devices. Some participants in the Nielsen Norman study said they preferred reading on their phones because those devices helped block out distractions.

The study did find one downside of reading on mobile: speed. Those who read on phone screens did so at a slightly slower pace than those who read on larger screens, even though comprehension was virtually the same.

I bring up this study because it focuses on something we need to consider in college classes. I’ve heard colleagues speak disdainfully of students’ reading on their phones. This study suggests there is no reason for that. For articles up to about 1,000 words, there seems to be little difference in comprehension no matter the size of the screen.

This study compared digital to digital, though, and did not include reading on paper. Many previous studies have found that not only do people prefer reading paper texts but that they also have slightly better comprehension with print. They also report feeling more in control of their reading when they have print books, which allow them to flip through material more easily and to annotate in the margins. Other recent research suggests no difference in comprehension between print and digital, with a majority of students saying they prefer digital texts.

I’m not suggesting that college work shift to mobile phones. We must pay attention to the way our students consume information, though, and adapt where we can. If nothing else, the Nielsen Norman study points to a need for an open mind with technology.

Skills for the future

I do a lot of thinking about the future of education, and this observation from Andrew McAfee, research scientist at MIT’s Sloan School of Management, rings true:

“Our educational system is well suited to turn out the kinds of workers the economy needed 50 years ago: those that could read, write, and do some math, and also were trained to follow the voice of authority. Computers are much better than us at math, are learning to read and write very quickly, and are unbeatable at following instructions consistently.

“We need an educational system now that excels at producing people to do the things that computers can’t do: figure out what problem to tackle next, work as part of a team to solve it, and have compassion for others and the ability to coordinate, motivate, persuade, and negotiate.”

Others, including Daniel Pink, Thomas Davenport, and Julia Kirby, make similar cases: technology, machine learning, and automation are constantly changing the landscape of work, but education isn’t keeping up.

Davenport and Kirby argue that educators need to emphasize how students can “augment their strengths with machines,” how they can become better decision-makers, and how they can continue to learn and adapt as the world changes and computers take on new roles. That’s a real challenge for colleges and universities, whose teaching generally emphasizes delivery of content and whose instructors and administrators often look for reasons to resist change.

Higher education still has time to adapt, but that time keeps growing shorter.

Briefly …

Universities in the United States aren’t the only ones struggling with how to handle weapons on campus. A security guard writes in The Guardian that in the UK, “some students go around with enough firepower to blow a hole in the walls of Alcatraz.” … The Next Web explores ways that companies are using artificial intelligence in products for education, including AI tutoring, machine learning tied to social networks, and customized content. … Universities in the UK report a growing number of cases of cheating, The Guardian reports, with many of those cases involving electronic devices.


Doug Ward is the associate director of the Center for Teaching Excellence and an associate professor of journalism. You can follow him on Twitter @kuediting.

 
