When San Francisco–based startup OpenAI released ChatGPT in November, an existential chill spread through academia. Give the AI-powered chatbot a prompt—even one as esoteric as “Write an essay on the politics of 18th-century Italy”—and it will deliver a polished response in seconds. By January, the free text generator, whose underlying model contains more than 175 billion parameters, had more than 100 million users and had passed exams at prestigious law and business schools, forcing Colorado academics to reckon with the cybernetic scribe’s role in the future of higher education.

“The initial reaction many people have to any kind of new technology is always, Oh, the computer is coming to get us,” says Peter Foltz, a University of Colorado Boulder professor and executive director of the Institute for Student-AI Teaming (ISAT), which researches how AI can benefit K–12 education. “I generally have a very positive view about the potential of AI for helping students and teachers.” That optimism stems, in part, from Foltz’s belief that the purpose of writing assignments shouldn’t be to get grades, but rather to gain knowledge. While ChatGPT can write a paper, it can also act as a research assistant, brainstorming partner, copy editor, and writing tutor.

Nikhil Krishnaswamy, an assistant professor of computer science at Colorado State University who specializes in human-machine communication and is a member of ISAT’s research team, isn’t so bullish: “I’ve been alarmed, at times, at how thoughtlessly [AI] has been adopted.” He agrees that systems like ChatGPT might help improve students’ work, but it’s a short trip from study buddy to ghostwriter. What’s also concerning is the chatbot’s propensity for, well, bullshitting. Although a fluent writer, ChatGPT can’t tell truth from fiction, meaning users must fact-check its responses, a task that can require already understanding the topic.

No one knows if Colorado students are submitting AI-generated homework because there’s currently no good way to detect such material. To complicate matters, the programs are only going to get more sophisticated. That’s why Matthew Hickey, a professor at CSU’s College of Health and Human Sciences and board member of the school’s Center for Ethics and Human Rights, thinks outright bans won’t work. “Trying to compete with this technology is going to be a losing game,” he says. “We have to fold it into our educational landscape.”

Hickey believes professors could accomplish this by reworking their curricula. That could include more group projects or requiring students to submit multiple drafts to show their work. For now, however, both CSU and CU Boulder point toward their honor codes, which forbid portraying another’s work or ideas as one’s own—it doesn’t matter whether the source is human or machine.

This article was originally published in the April 2023 issue of 5280.
Nicholas Hunt
Nicholas writes and edits the Compass, Adventure, and Culture sections of 5280 and writes for 5280.com.