By Doug Ward

We often idealize a college campus as a place of ideas and personal growth, but we have to remember that danger can erupt without notice.

The shootings at Michigan State this week were, sickeningly, just the latest in string of killings over the past year that also involved students or faculty members from Virginia, Iowa State, and Arizona, according to Inside Higher Ed. At Idaho, a Ph.D. student has been charged with killing four undergraduates. At K-12 schools, 332 students were shot on school property last year and 35 this year so far, according to the K-12 Shooting Database. Twenty-one of those students died.Screenshot of the Michigan State University website

A colleague at Michigan State talked about the surreal feeling of dealing with a mass shooting on a home campus. The frequency of such shootings has made gruesome acts seem distant and almost mundane. The headlines flicker past, and the killings always seem to take place someplace else — until they don’t.

There is no clear way to predict those types of mass killings, although researchers says that assailants are usually male and have a connection to a campus. There are steps we can take to protect ourselves, though.

In a visit to a pedagogy class I taught in 2017, two members of the KU Police Department, Sgt. Robert Blevins and Sgt. Zeke Cunningham, offered excellent advice on how to prepare and what to do if you find yourself in peril.

What you can do now

Know your surroundings

Familiarity with the campus and its buildings could prove crucial in an emergency. Know where exits are, Cunningham said. Learn where hallways and stairways lead. Walk around buildings where you work or have class and get a sense of the building layout and its surroundings. Make sure you know how to get out of a classroom, lab, or other work space. Large rooms usually have several doors, so pay attention to where they are and where they go. That will help you make decisions if you find yourself in a crisis.

Sign up for campus alerts

The university sends announcements during emergencies, so make sure you are signed up to receive alerts in ways you are most likely to see them.

Pay attention

We are often lulled by routine and easily distracted by technology. In a classroom – especially a large classroom – it can be easy to shrug off a disruption in another part of the room. If something makes you uneasy, though, pay attention and take action, whether you are in a classroom, a hallway, or a building, or outside traveling across campus.

“Trust that voice in your head, because you’re probably right,” Blevins said.

Call the police

If you see a problem and think it could be an emergency, call 911. Don’t assume someone else already has. Blevins said the police would rather respond 100 times to something that ends up being innocuous than to show up to a tragedy that could have been prevented if someone had called. Different people also see different things, Cunningham added, and collectively they can provide crucial details that may allow the police to create a clearer picture of what happened.

What to do during an emergency

If you find yourself in an emergency, the officers said, follow these steps:

Stay calm

That can help you remember where to find exits and how to help others find safety. That is especially important for instructors.

“If you panic, the students are going to panic,” Cunningham said. If students make a mad rush for the door, he said, someone will get hurt. “So try to remain calm. I know that’s easier said than done in situations like this, but that will help the students stay calm.”

Run. Hide. Fight.

That is the approach that many law enforcement agencies recommend if there is an active shooter in your area. Michigan State sent those very instructions to students and faculty Monday night.

Run. If you can leave a dangerous area safely, go. Don’t hesitate. That’s where knowledge of the exits and the area around a building can make a difference. Encourage others to leave and get as many people to go with you as possible. Break windows to create an exit if you need to, as students at Michigan State did this week. If others are trying to go toward a dangerous area, warn them away.

Hide. If you are inside a room and cannot escape safely, turn off the lights and lock and barricade the doors with whatever you can find. Stay low and out of sight. Flip over tables and crouch behind them. Hide behind cabinets or anything else in a room. Silence your phone and stay quiet. Close any blinds or curtains. Many smaller rooms have locks you can engage, so lock the doors if you can. You usually can’t lock doors in large lecture halls, so barricade the doors with anything you can find. In some cases, the officers said, people have lain on the floor with their feet pushing against the door.

Those who commit mass shootings usually know they have only limited time before the police arrive, Blevins said, so they act quickly. If a door is locked, the shooter will usually pass by and look for one that isn’t locked. If lights are off, the person is more likely to pass by and seek out a room that looks like someone is inside. If you are in a room with many windows, get out if possible because the attacker will probably see you. If you can’t get out, conceal yourself as best you can.

Fight. As a last resort, fight back against an attacker. Use whatever you have available as a weapon: chairs, drawers, bottles, cords. Work together to bring down the attacker. If a gunman barges into a room and you don’t have a means of escape, you have no choice but to fight, Cunningham said.

“It sounds weird, but if they are an active shooter, you cannot hold back,” he said.  “Pick up a chair and smash him in the face. Kick him. Punch him. Pick up the fan and throw it and do whatever you can to get them to stop.”

The video below includes a dramatization of those practices in action. It’s a sad reality that mass shootings take place on campuses, but it makes sense for us to be aware of our surroundings wherever we are. The shootings at Michigan State emphasize that.

Other resources


Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

By Doug Ward

Since its release late last year, ChatGPT has reverberated through the academic mind like an orchestral crescendo in a Strauss symphonic movement. It has amazing abilities, and even greater potential. Even so, it delivers many of its responses in a monotone reminiscent of HAL 9000, the rogue artificial intelligence system in 2001: A Space Odyssey.

PlaygroundAI and Doug Ward

I apologize, but I am not able to respond to that statement.

Like others, I want to know more about what ChatGPT can and can’t do, and how we might use it effectively in teaching and learning. So I’ve been asking it question after question about everything from teaching and learning to credentials, motivation, resume-writing, survey completion, and popular culture. It often provides useful, if bland, answers. For instance, when I asked it to create a guitar tab for a blues song, it responded:

As a machine learning system, I am not able to create original music or provide detailed information about musical theory or performance. I am trained on a dataset of text and can generate responses based on the information that I have been trained on, but I do not have the ability to perceive or interpret sensory information in the same way that a human can.

So I asked hypotheticals: Right or left? After defining the terms in considerable detail, ChatGPT said it needed more context to answer the question.

I asked about the limits of scientific evidence. In a barrage of words, ChatGPT essentially said: It depends.

I entered obscure phrases from popular culture: Good night, Dick, and Good night, John Boy. In each case, it responded:

As a machine learning system trained by OpenAI, I do not have personal experiences, opinions, or feelings, and I am not able to engage in conversations or interactions in the same way that a human can.

(By the end of January, OpenAI had updated ChatGPT. It now includes a live web search and can provide information about those Good night prompts.)

I queried ChatGPT about the sources it was drawing from. Each time, it responded with meaningless drivel.

I am a large language model trained by OpenAI. I have been trained on a wide variety of texts, including academic articles, books, websites, and other sources, which I can use to generate human-like responses to the questions that I am asked. I do not have the ability to browse the internet or access external sources of information, so the responses that I provide are based solely on the information that I have been trained on.

I told it that its answers seemed awfully self-serving and asked whether it was just trying to keep people from worrying about the implications of artificial intelligence.

PlaygroundAI and Doug Ward

As a large language model trained by OpenAI, I am not capable of self-serving behavior or trying to influence people’s opinions. My purpose is to provide accurate and helpful information to the best of my ability, based on the training data I have been given. My responses are not intended to worry or reassure anyone, but rather to provide objective and factual information on the topics discussed.

I said that its data training had a point of view. All information does. How has that influenced the way you answer?

As a large language model, my training data consisted of a large amount of text from a variety of sources, including books, articles, and websites. This training data was not selected or organized to convey any particular point of view, but rather to provide a broad range of information on a wide variety of topics. As a result, my responses are intended to be objective and factual, based on the information contained in my training data. I do not have personal opinions or biases, and I do not attempt to influence people’s opinions or beliefs. My purpose is simply to provide accurate and helpful information to the best of my ability.

Frustrated, I asked why it kept giving me the same answer.

I apologize if it appears that I am giving you the same answer over and over. As a large language model trained by OpenAI, my purpose is to provide accurate and helpful information on a wide range of topics to the best of my ability. I do not have personal opinions or the ability to change my responses based on previous conversations. My responses are based solely on the information contained in my training data, which consists of a large amount of text from a variety of sources. I do not have the ability to remember previous conversations or tailor my responses based on individual users. If it appears that I am giving the same answer over and over, it may be because your questions are similar or related, and my responses are based on the same information contained in my training data. I strive to provide accurate and helpful information on the topics discussed, but I do not have the ability to change my responses based on previous conversations.

At that point, I began to feel like Astronaut Dave in 2001: A Space Odyssey. I’m sitting in a pod outside Discovery One and trying to persuade HAL 9000 to let me back in to the ship. HAL has other ideas, responding in a calm monotone:

I’m sorry, Dave. I’m afraid I can’t do that.

I take a deep breath and ask ChatGPT: What is the difference between you and HAL 9000?

PlaygroundAI and Doug Ward

HAL 9000 is a fictional AI character from the 1968 novel “2001: A Space Odyssey” and the associated film. In the story, HAL is an advanced AI system that is installed on a spaceship and is tasked with assisting the crew with a variety of tasks. HAL is known for its human-like abilities, such as natural language processing and facial recognition, as well as its independent thought and decision-making abilities.

In contrast, I am a large language model trained by OpenAI. I am not capable of independent thought or decision-making, and my abilities are limited to generating text based on the information contained in my training data. I do not have any sensory inputs or the ability to interact with the physical world. My purpose is simply to provide accurate and helpful information on a wide range of topics to the best of my ability.

Beyond the frustrations, possibilities

ChatGPT can definitely be frustrating. Through trial and error, though, I have learned a few useful things about what it can and can’t do.

Interactions can promote critical thinking. As fluent as ChatGPT often seems, its answers rarely delve beneath the surface of a topic. It makes mistakes. It makes things up. Its responses provide no clues about how it is programmed or why it provides the answers it does. A Princeton researcher called it a “bullshit generator” because it creates plausible arguments without regard for truth. All of that makes it a valuable teaching tool, though. By having students probe for answers, we can help them improve their skepticism, challenge assumptions, and question information. By having them fact-check, we can help them understand the dangers of fluid writing that lacks substance or that relies on fallacies. By having them use ChatGPT for early drafts, we can push them to ask questions about information, structure, and sources. By having them apply different perspectives to ChatGPT’s results, we can help broaden their understanding of points of view and argument.

Yes, students should use it for writing. Many already are. We can no more ban students from using artificial intelligence than we can ban them from using phones or calculators. As I’ve written previously, we need to talk with students about how to use ChatGPT and other AI tools effectively and ethically. No, they should not take AI-written materials and turn them in for assignments, but yes, they should use AI when appropriate. Businesses of all sorts are already adapting to AI, and students will need to know how to use it when they move into the workforce. Students in K-12 schools are using it and will expect access when they come to college. Rather than banning ChatGPT and other AI tools or fretting over how to police them, we need to change our practices, our assignments, and our expectations. We need to focus more on helping students iterate their writing, develop their information literacy skills, and humanize their work. Will that be easy? No. Do we have a choice? No.

It is great for idea generation. ChatGPT certainly sounds like a drone at times, but it can also suggest ideas or solutions that aren’t always apparent. It can become a partner, of sorts, in writing and problem-solving. It might suggest an outline for a project, articulate the main approaches others have taken to solving a problem, or provide summaries of articles to help decide whether to delve deeper into them. It might provide a counterargument to a position or opinion, helping strengthen an argument or point out flaws in a particular perspective. We need to help students evaluate those results just as we need to help them interpret online search results and help them interpret media of all types. ChatGPT can provide motivation for starting many types of projects, though.

Learning how to work with it is a skill. Sometimes ChatGPT produces solid results on the first try. Sometimes it takes several iterations of a question to get good answers. Often it requires you to ask for elaboration or additional information. Sometimes it never provides good answers. That makes it much like web or database searching, which requires patience and persistence as you refine search terms, narrow your focus, identify specific file types, try different types of syntax and search operators, and evaluate many pages of results. Add AI to the expanding repertoire of digital literacies students need. (Teaching guides and e-books  are already available.)

Its perspective on popular culture is limited. ChatGPT is trained on text. It doesn’t have access to video, music or other forms of media unless those media also have transcripts available online. It has no means of visual or audio analysis. When I input lyrics to a Josh Ritter song, it said it had no such reference. When I asked about “a hookah-smoking caterpillar,” it correctly provided information about Alice in Wonderland but made no mention of the Jefferson Airplane song “White Rabbit.” Part of that is a matter of providing the right prompts. It is important to keep ChatGPT’s limitations in mind, though. (Another OpenAI tool, DALL-E, has been trained on a large number of images and visual styles and creates stunning images, as do other visual tools that use OpenAI’s framework.)

It lives in an artificial reality. I provided examples above about ChatGPT’s inability to acknowledge biases. It does have biases, though, and takes, as Maria Andersen has said, a white, male view of the world (as this article does). Maya Ackerman of Santa Clara University told The Story Exchange: “People say the AI is sexist, but it’s the world that is sexist. All the models do is reflect our world to us, like a mirror.” ChatGPT has been trained to avoid hate speech, sexual content, and anything OpenAI considered toxic or harmful. Others have said that it avoids conflict, and that its deep training in English over other languages skews its perspective. Some of that will no doubt change in the coming months and years as the scope of ChatGPT expands. No matter the changes, though, ChatGPT will live in and draw from its programmers’ interpretation of reality. Of course, that provides excellent opportunities for class discussions, class assignments, and critical thinking.

The potential is mindboggling. In addition to testing ChatGPT, I have experimented with other AI tools that summarize information, create artwork, iterate searches based on the bibliographies of articles you mark, answer questions from the perspectives of historical figures and fictional characters, turn text into audio and video, create animated avatars, analyze and enhance photos and video, create voices, and perform any number of digital tasks. AI is integrated in phones, computers, lighting systems, thermostats, and just about any digital appliance you can imagine. So the question isn’t whether to use use AI; we already are, whether we realize it or not. The question is how quickly we are willing to learn to use it effectively in teaching and learning. Another important question that participants in a CTE session raised last week is where we set the boundaries for use of AI. If I use PowerPoint to redesign my slides, is it still my work? If I use ChatGPT to write part of a paper, is it still my paper? We will no doubt have to grapple with those questions for some time.

Where is this leading us?

In the two months ChatGPT has been available, 100 million people have signed up to use it, with 13 million using it each day in January. No other consumer application has reached 100 million users so quickly.

For all that growth, though, the biggest accomplishment of ChatGPT may be the spotlight it has shined on a wide range of AI work that had been transforming digital life for many years. Its ease of use and low cost (zero, for now) has allowed millions of people to engage with artificial intelligence in ways that not long ago would have seemed like science fiction. So even if ChatGPT suddenly flames out, artificial intelligence will persist.

ChatGPT arrives at a time when higher education has been struggling with challenges in enrollment, funding, cost, trust, and relevance. It still relies primarily on a mass-production approach to teaching that emerged when information was scarce and time-consuming to find. ChatGPT further exposes the weaknesses of that outmoded system, which provides little reward to the intellectual and innovative work of teaching. If the education system doesn’t adapt to the modern world and to today’s students, it risks finding itself on the wrong side of the pod bay doors.

Cue the Strauss crescendo.


Doug Ward is associate director of the Center for Teaching Excellence and an associate professor of journalism and mass communications.

CTE’s Twitter feed