Altering Approaches to AI in College
How an AI Misidentification Incident Caused Panic and Sparked a Discussion of New Attitudes Toward the Controversial Tool

Gianna Morales sits at her desk with her cat in her apartment in Boulder, Colo., on Tuesday, Oct. 29, 2024. Miles Swope, CU News Corps
BOULDER - A weight is lifted from her shoulders and she breathes a sigh of relief, finally able to put her philosophy ethics course behind her as she submits her final exam. Little does she know her worrying is not over; her integrity will soon be questioned by her instructor.
Gianna Morales, a junior at the University of Colorado Boulder, found herself in a state of panic when a faulty AI detection tool flagged her final exam for cheating. “It's like a pandemic through education, all of this AI stuff,” Morales said, reflecting on the fallout of her case.
Morales took an asynchronous philosophy course centered on ethics over the summer of 2024. She excelled in the course and assumed she had finished with a good grade when she turned in her final exam. When Morales checked her grade for the final, she said, “I was just kind of caught off guard.” Her grade had been changed to 0%, and a comment on her final exam explained that AI use had been detected.
Without hesitating, Morales contacted her instructor, Professor Matthew Pike, to clear up the mistake. She was allowed to prove her innocence by showing her notes from the class and answering new questions about the final on a Zoom call with Pike. When it was clear that the AI detector had made a mistake and that Morales had not cheated, her grade was restored.
Students bustling on the University of Colorado Boulder Main Campus on Thursday, Oct. 31, 2024. Miles Swope, CU News Corps
Pike attempted to uphold academic integrity in a world increasingly influenced by AI, but he ironically relied on a tool that mistakenly identified Morales’ honest work as machine-generated. “He wasn't trying to punish me; he was just trying to crack down on AI being used in the class,” she said, capturing a frustration shared by students and faculty alike.
Peter Foltz, executive director of the Institute for Student-AI Teaming at CU Boulder, noted, “There are challenges with using AI to grade human writing, especially with the rise of generative AI. These tools rely on statistical predictions, which can lead to inaccuracies.”
The implications of these technologies extend beyond the classroom. Understanding AI's role becomes vital as students prepare to enter a workforce that increasingly relies on it. However, distrust of AI detection tools complicates the landscape, challenging educators to maintain academic integrity while preparing students for a future filled with AI involvement.
Attempting to find a silver lining within this dilemma, Foltz emphasized, “We need to stop being afraid of it and learn to embrace it to unlock its potential for learning.” The debate is playing out at universities nationwide, where educators strive to implement AI responsibly without compromising academic values.
An unnamed cyclist checks the event board on a main thoroughfare on the University of Colorado Boulder Main Campus on Thursday, Oct. 31, 2024. Miles Swope, CU News Corps
Across the country, colleges are scrapping their AI detection software, citing concerns that the tools wrongfully accuse students. Companies such as Turnitin, whose detection tools are widely used by higher education institutions, are facing scrutiny, prompting a reconsideration of the appropriate course of action.
Unfortunately, Morales’ experience is indicative of a larger trend. Turnitin’s data shows that more than 22 million assignments, about 11% of those reviewed, contained at least 20% AI writing. Around 3% of submissions were flagged as containing at least 80% AI writing, underscoring how much room exists for misidentification at that scale.
Although there are good reasons to be wary of AI detection tools, there is a clear need to know when students are using AI bots like ChatGPT. In a survey conducted by Intelligent magazine, 1,000 college students were asked various questions about AI use in their schoolwork. Thirty percent admitted to using ChatGPT for written homework, and 60% of that group said they use it on more than half of their assignments.
Statistics like these make it clear that AI tools are popular and widely employed. ChatGPT and tools like it are not going away anytime soon, especially for college students. From that perspective, it seems essential that the academic world find a path forward that preserves the value of a worthwhile education.
Al Pisano, a CU professor, looks at AI from a practical, moral perspective. “The ethical dilemma is not whether to use AI, but how students use it to succeed based on the assessments we’re giving. There’s always this tension—either don’t use it or use it—but the truth is somewhere in between. We need to figure out how we can guide students to use AI ethically so they get to a better place in life and business.”
Hwanseo Yu, an international student from South Korea and a student of Pisano's, uses ChatGPT in Korean. His use of the tool in the classroom points to a niche where AI can make a real difference for non-native English speakers like him.
“The main reason I use ChatGPT is to translate Korean into English and then back to Korean or to get ideas for assignments or projects,” Yu explained. “When I have trouble understanding during class, I take a picture or simply type it and listen to the explanation right there. It's much easier to follow, and that helps me understand the class better.”
For students like Yu, who use AI to bridge language barriers and build a genuine understanding of class materials, the tool is integral to their schoolwork. “I use ChatGPT for most of my assignments or projects,” Yu said. “If I use everything requested in ChatGPT as is, it would be a problem, but to some, it helps us work together to get ideas or do translations or things like that.” His comments illustrate the thin line between leveraging AI for learning and misusing it.
The conversation around AI's role in education goes beyond technology; it is an ethical issue that calls for thoughtful engagement as we prepare for a future that embraces AI. As educators grapple with these challenges, the scale of the problem becomes clear: a study conducted by Tyton Partners and sponsored by Turnitin found that 75% of students plan to continue using generative AI tools regardless of faculty restrictions.
“The best way to mitigate these things is to have open conversations about it—with faculty, with students,” Pisano said of AI use that violates class rules. “Be transparent about what your expectations are as an institution, as an organization, as a school. If you’re respectful and transparent enough, generally, people show that respect back.”
The approach Pisano takes with his class aligns with Foltz’s belief that guiding students in the right direction is key. An AI-positive movement is growing among CU professors, including David Cass, an adjunct professor at CU Boulder and managing director at the venture capital firm Techstars. He told one of his classes, “I encourage you to use AI and ChatGPT, but for things like idea generation. It is your job as a college student to be able to write creatively on your own.”
Similarly, Pisano emphasizes the need to be a dynamic thinker within leadership roles in a rapidly evolving educational and professional landscape.
“These tools are going to be used by everybody, leveraged by everybody, and similar to me as an instructor, as a teacher, as a leader, you have to just guide them,” he said.
The role of leadership grows alongside AI, and no matter the field, leaders must adapt to the nuances of the technology to provide guidance and structure without diminishing creativity. This is a view Pisano relays in his classroom to future leaders in the business world, which in turn spreads accepting attitudes toward working with AI.
A necessary part of building effective leadership skills is setting boundaries for AI use. As Pisano alluded to, the responsibility of leaders is not to stifle innovation, but to adopt it in a way that elevates creativity and human engagement.
“You don’t want people to produce something that should be authored by the team as something that’s only authored by ChatGPT, but... use GPT for that. So it’s that guidance as a leader... The human side, the empathetic side, should always be the final decision.”
A serene view of Varsity Lake with the Old Main building in the background on the University of Colorado Boulder Main Campus in Boulder on Oct. 31, 2024. Miles Swope, CU News Corps
This underscores the need to strike a balance between AI assistance and collective human effort. The point is furthered by an analysis published in the Harvard Business Review, which states, “Leaders must ensure AI is used in ways that amplify human potential, not replace it.” Just as this balance is crucial in leadership, it is essential in the evolving workflows of academia and a fluctuating job market.
Considering the importance of finding a job after graduation, it is worth noting that the corporate world takes a fairly open view of integrating AI into its work. Aaron Farr, a senior technical SEO strategist at Power Digital Marketing, said, “I feel like the AI tools themselves are more useful than the detectors.” The sentiment echoes a growing frustration among students who feel the tools designed to uphold academic integrity often fall short.
The emotional toll on students like Morales can be significant, as wrongful accusations undermine their confidence and overall educational experience.
As technology becomes more integrated into everyday life and professional environments, the challenge remains: how do we equip students with the necessary skill set while preserving the integrity of their work?
“It's better to teach them how to do it correctly than just making it, oh, you can't do that,” Farr said.