By Doug Ward

Instructors have raised widespread concern about the impact of generative artificial intelligence on undergraduate education.

As we focus on undergraduate classes, though, we must not lose sight of the profound effect that generative AI is likely to have on graduate education. The question there isn’t whether to integrate AI into coursework. Rather, it’s how quickly we can integrate AI into methods courses and help students learn to use it in finding literature; identifying promising areas of research; merging, cleaning, analyzing, visualizing, and interpreting data; making connections among ideas; and teasing out significant findings. That will be especially critical in STEM fields and in any discipline that uses quantitative methods.

The need to integrate generative AI into graduate studies has been growing since the release of ChatGPT last fall. Since then, companies, organizations, and individuals have released a flurry of new tools that draw on ChatGPT or other large language models. (See a brief curated list below.) If there were any lingering doubt that generative AI would play an outsized role in graduate education, it evaporated with the release of a ChatGPT plugin called Code Interpreter. Code Interpreter is still in beta testing and requires a paid version of ChatGPT, but early users say it saves weeks or months of work in analyzing complex data.

OpenAI is admirably reserved in describing Code Interpreter, saying it is best used in solving quantitative and qualitative mathematical problems, doing data analysis and visualization, and converting file formats. Others didn’t hold back in their assessments, though.

Ethan Mollick, a professor at the University of Pennsylvania, says Code Interpreter turns ChatGPT into “an impressive data scientist.” It gives ChatGPT the ability to write and execute Python code, work with large uploaded files, do complex math, and create charts and graphs. It also reduces the number of errors and fabrications ChatGPT produces. Mollick says Code Interpreter “is relentless, usually correcting its own errors when it spots them,” and that it “‘reasons’ about data in ways that seem very human.”
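To make that concrete, here is a minimal sketch of the kind of Python a Code Interpreter session might write and run when a user uploads a spreadsheet and asks for a summary and a chart. The file name and column names are hypothetical, and the code ChatGPT actually generates will vary with the data and the prompt.

```python
# A minimal, hypothetical sketch of the kind of Python that Code Interpreter
# writes and executes behind the scenes. The file and column names are invented
# for illustration; this is not output from ChatGPT.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey_results.csv")       # the uploaded data file
df = df.dropna(subset=["score"])             # basic cleaning: drop incomplete rows

summary = df.groupby("cohort")["score"].agg(["mean", "std", "count"])
print(summary)                               # quick descriptive statistics

summary["mean"].plot(kind="bar", yerr=summary["std"], title="Mean score by cohort")
plt.tight_layout()
plt.savefig("scores_by_cohort.png")          # chart returned to the user
```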

Andy Stapleton, creator of a YouTube channel that offers advice to graduate students, says Code Interpreter does “all the heavy lifting” of data analysis and asks questions about data like a collaborator. He calls it “an absolute game changer for research Ph.D.s.”

Code Interpreter is just the latest example of how rapid changes in generative AI could reshape just about every aspect of higher education. Graduate education is high on that list. It won’t be long before graduate students who lack skills in using generative AI simply cannot keep up with those who have them.

Other helpful research tools

The number of AI-related tools has been growing at a mind-boggling rate, with one curator listing more than 6,000 tools on everything from astrology to cocktail recipes to content repurposing to (you’ve been waiting for this) a bot for OnlyFans messaging. That list is very likely to keep growing as entrepreneurs rush to monetize generative AI. Some tools have already been scrapped or absorbed into competing sites, though, and we can expect more consolidation as stronger (or better publicized) tools separate themselves from the pack.

The easiest way to get started with generative AI is to try one of the most popular tools: ChatGPT, Bing Chat, Bard, or Claude. Many other tools are more focused, though, and are worth exploring. Some of the tools below were made specifically for researchers or graduate students. Others are more broadly focused but have similar capabilities. Most of these have a free option or at least a free trial.

How to use Code Interpreter

You will need a paid ChatGPT account. Jon Martindale of Digital Trends explains how to get started. An OpenAI forum offers suggestions on using the new tool. Members of the ChatGPT community forum also offer many ideas on how to use ChatGPT, as do members of the OpenAI Discord forum. (If you’ve never used Discord, here’s a guide for getting started.)

By Doug Ward

Not surprisingly, tools for detecting material written by artificial intelligence have created as much confusion as clarity.

Students at several universities say they have been falsely accused of cheating, with accusations delaying graduation for some. Faculty members, chairs, and administrators have said they aren’t sure how to interpret or use the results of AI detectors.

[Image: A giant white hand pokes through the window of a university building as college students with backpacks walk toward it. Doug Ward, via Bing Image Creator]

I’ve written previously about using these results as information, not an indictment. Turnitin, the company that created the AI detector KU uses on Canvas, has been especially careful to avoid making claims of perfection in its detection tool. Last month, the company’s chief product officer, Annie Chechitelli, added to that caution.

Chechitelli said Turnitin’s AI detector was producing different results in daily use than it had in lab testing. For instance, work that Turnitin flags as 20% AI-written or less is more likely to have false positives. Introductory and concluding sentences are more likely to be flagged incorrectly, Chechitelli said, as is writing that mixes human and AI-created material.

As a result of those findings, Turnitin said it would now require that a document contain at least 300 words (up from 150) before it can be evaluated. It has added an asterisk when 20% or less of a document’s content is flagged, alerting instructors to potential inaccuracies, and it is adjusting the way it interprets sentences at the beginning and end of a document.

Chechitelli also released statistics from the Turnitin AI detector, based on an analysis of 38.5 million documents: 9.6% of those documents had 20% or more of the text flagged as AI-written, and 3.5% had 80% to 100% flagged.

What does this mean?

Chechitelli estimated that the Turnitin AI detector had incorrectly flagged 1% of overall documents and 4% of sentences. Even at that smaller rate, 1% of 38.5 million documents works out to roughly 385,000 documents, meaning that hundreds of thousands of students could have been falsely accused of submitting AI-written work.

I don’t know how many writing assignments KU students submit each semester. Even if each student submitted only one, though, a 1% false-positive rate would mean more than 200 students falsely accused of turning in AI-written work every semester.
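For those who want to check the back-of-envelope math, here is a small sketch of the arithmetic. The document count and the 1% rate come from Chechitelli’s figures above; the enrollment number is a round, illustrative assumption rather than an official KU count.

```python
# Back-of-envelope arithmetic for the false-positive estimates above.
# The document count and 1% rate come from Turnitin's reported figures;
# the enrollment number is an illustrative assumption, not an official count.
documents_analyzed = 38_500_000
false_positive_rate = 0.01                  # ~1% of documents incorrectly flagged

flagged_in_error = documents_analyzed * false_positive_rate
print(f"Documents likely flagged in error: {flagged_in_error:,.0f}")         # ~385,000

assumed_students = 25_000                   # hypothetical enrollment, one submission each
falsely_accused = assumed_students * false_positive_rate
print(f"Potential false accusations per semester: {falsely_accused:,.0f}")   # ~250
```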

That’s unfair and unsustainable. It leads to distrust between students and instructors, and between students and the academic system. That sort of distrust often generates or perpetuates a desire to cheat, further eroding academic integrity.

We most certainly want students to complete the work we assign them, and we want them to do so with integrity. We can’t rely on AI detectors – or plagiarism detectors, for that matter – as a shortcut, though. If we want students to complete their work honestly, we must create meaningful assignments – assignments that students see value in and that we, as instructors, see value in. We must talk more about academic integrity and create a sense of belonging in our classes so that students see themselves as part of a community.

I won’t pretend that is easy, especially as more instructors are being asked to teach larger classes and as many students are struggling with mental health issues and finding class engagement difficult. By criminalizing the use of AI, though, we set ourselves up as enforcers rather than instructors. None of us want that.

To move beyond enforcement, we need to accept generative artificial intelligence as a tool that students will use. I’ve been seeing the term co-create used more frequently when referring to writing with large language models, and that seems like an appropriate way to approach AI. AI will soon be built into Word, Google Docs, and other writing software, and companies are releasing new AI-infused tools every day. To help students use those tools effectively and ethically, we must guide them in learning how large language models work, how to create effective prompts, how to critically evaluate AI-generated writing, how to explain how AI was used in their work, and how to reflect on the process of using AI.

At times, instructors may want students to avoid AI use. That’s understandable. All writers have room to improve, and we want students to grapple with the complexities of writing to improve their thinking and their ability to inform, persuade, and entertain with language. None of that happens if they rely solely on machines to do the work for them. Some students may not want to use AI in their writing, and we should respect that.

We have to find a balance in our classes, though. Banning AI outright serves no one and leads to over-reliance on flawed detection systems. As Sarah Elaine Eaton of the University of Calgary said in a recent forum led by the Chronicle of Higher Education: “Nobody wins in an academic-integrity arms race.”

What now?

We at CTE will continue working on a wide range of materials to help faculty with AI. (If you haven’t already, check out the guide on our website: Adapting your course to artificial intelligence.) We are also working with partners in the Bay View Alliance to exchange ideas and materials, and to develop additional ways to help faculty in the fall. We will have discussions about AI at the Teaching Summit in August and follow those up with a hands-on AI session on the afternoon of the Summit. We will also have a working group on AI in the fall.

Realistically, we anticipate that most instructors will move into AI slowly, and we plan to create tutorials to help them learn and adapt. We are all in uncharted territory, and we will need to continue to experiment and share experiences and ideas. Students need to learn to use AI tools as they prepare for jobs and as they engage in democracy. AI is already being used to create and spread disinformation. So even as we grapple with the boundaries of ethical use of AI, we must prepare students to see through the malevolent use of new AI tools.

That will require time and effort, adding complexity to teaching and placing additional burdens on instructors. No matter your feelings about AI, though, you have to assume that students will move more quickly than you will.


Doug Ward is an associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

By Doug Ward

Nearly a decade ago, the Associated Press began distributing articles written by an artificial intelligence platform.

Not surprisingly, that news sent ripples of concern among journalists. If a bot could turn structured data into comprehensible – even fluid – prose, where did humans fit into the process? Did this portend yet more ominous changes in the profession?

[Image: Robots carrying paper run from a lecture hall. By DALL-E and Doug Ward]

I bring that up because educators have been raising many of the same concerns today about ChatGPT, which can not only write fluid prose on command, but can create poetry and computer code, solve mathematical problems, and seemingly do everything but wipe your nose and tuck you into bed at night. (It will write you a bedtime story if you ask, though.)

In the short term, ChatGPT definitely creates challenges. It drastically weakens approaches and techniques that educators have long used to help students develop foundational skills. It also arrives at a time when instructors are still reeling from the pandemic, struggling with how to draw many disengaged students back into learning, adapting to a new learning management system and new assessment expectations, and, in most disciplines, worrying about the potential effects of lower enrollment.

In the long term, though, we have no choice but to accept artificial intelligence. In doing so, we have an opportunity to develop new types of assignments and assessments that challenge students intellectually and draw on perhaps the biggest advantage we have as educators: our humanity.

Lessons from journalism

That was clearly the lesson the Associated Press learned when it adopted a platform developed by Automated Insights in 2014. That platform analyzes data and creates explanatory articles.

For instance, AP began using the technology to write articles about companies’ quarterly earnings reports, articles that follow a predictable pattern:

The Widget Company on Friday reported earnings of $x million on revenues of $y million, exceeding analyst expectations and sending the stock price up z%.
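The underlying technique is essentially template filling: structured data goes in, formulaic prose comes out. Here is a minimal sketch of that idea; it illustrates the concept only, not Automated Insights’ actual system, and the company name and figures are invented.

```python
# A minimal sketch of template-driven story generation: structured earnings data in,
# formulaic prose out. This illustrates the concept only; it is not Automated Insights'
# system, and the company name and figures below are invented.
def earnings_story(company, day, earnings_m, revenue_m, beat_expectations, stock_move_pct):
    expectation = "exceeding" if beat_expectations else "falling short of"
    direction = "up" if stock_move_pct >= 0 else "down"
    return (
        f"{company} on {day} reported earnings of ${earnings_m} million on revenues of "
        f"${revenue_m} million, {expectation} analyst expectations and sending the stock "
        f"price {direction} {abs(stock_move_pct)}%."
    )

print(earnings_story("The Widget Company", "Friday", 12, 98, True, 4.2))
```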

AP later began using the technology to write game stories at basketball tournaments. Within seconds, reporters or editors could make basic stories available electronically, freeing themselves to talk to coaches and players and to write deeper analyses of the games.

The AI platform freed business and financial journalists from the drudgery of churning out dozens of rote earnings stories, giving them time to concentrate on more substantial topics. (For a couple of years, I subscribed to an Automated Insights service that turned web analytics into written reports. Those fluidly written reports highlighted key information about site visitors and provided a great way to monitor web traffic. The company eventually stopped offering that service as its corporate clients grew.)

I see the same opportunity in higher education today. ChatGPT and other artificial intelligence platforms will force us to think beyond the formulaic assignments we sometimes use and find new ways to help students write better, think more deeply, and gain skills they will need in their careers.

As Grant Jun Otsuki of Victoria University of Wellington writes in The Conversation: “If we teach students to write things a computer can, then we’re training them for jobs a computer can do, for cheaper.”

Rapid developments in AI may also force higher education to address long-festering questions about the relevance of a college education, a grading system that emphasizes GPA over learning, and a product-driven approach that reduces a diploma to a series of checklists.

So what can we do?

Those issues are for later, though. For many instructors, the pressing question is how to make it through the semester. Here are some suggestions:

Have frank discussions with students. Talk with them about your expectations and how you will view (and grade) assignments generated solely with artificial intelligence. (That writing is often identifiable, but tools like OpenAI Detector and CheckforAI can help.) Emphasize the importance of learning and explain why you are having them complete the assignments you use. Why is your class structured as it is? How will they use the skills they gain? That sort of transparency has always been important, but it is even more so now.

Students intent on cheating will always cheat. Some draw from archives at Greek houses, buy papers online, or have a friend do the work for them. ChatGPT is just another means of avoiding the work that learning requires. Making learning more apparent will help win over some students, as will flexibility and choices in assignments. This is also a good time to emphasize the importance of human interaction in learning.

Build in reflection. Reflection is an important part of helping students develop their metacognitive skills and learn about their own learning. It can also help them understand how to integrate AI into their learning processes and how to build and expand on what AI provides. Reflection can also reinforce academic honesty: rather than hiding how they completed an assignment, students are encouraged to be transparent about the process.

Adapt assignments. Create assignments in which students start with ChatGPT and then discuss the strengths and weaknesses of what it produces. Have students compare the output from AI writing platforms, critique that output, and then create strategies for building on it and improving it. Anne Bruder offers additional suggestions in Education Week, Ethan Mollick does the same on his blog, and Anna Mills has created a Google Doc with many ideas (one of a series of documents and curated resources she has made available). Paul Fyfe of North Carolina State provides perhaps the most in-depth take on the use of AI in teaching, having experimented with an earlier version of the ChatGPT model more than a year ago. CTE has also created an annotated bibliography of resources.

We are all adapting to this new environment, and CTE plans additional discussions this semester to help faculty members think through the ramifications of what two NPR hosts said was startlingly futuristic. Those hosts, Greg Rosalsky and Emma Peaslee of NPR’s Planet Money, said that using ChatGPT “has been like getting a peek into the future, a future that not too long ago would have seemed like science fiction.”

To that I would add that the science fiction involves a robot that drops unexpectedly into the middle of town and immediately demonstrates powers that elicit awe, anxiety, and fear in the human population. The robot can’t be sent back, so the humans must find ways to ally with it.

We will be living this story as it unfolds.


Doug Ward is an associate director at the Center for Teaching Excellence and an associate professor of journalism and mass communications.
