ChatGPT’s Impact on Stuyvesant
Issue 10, Volume 113
Stuyvesant recently faced its first instance of plagiarism using artificial intelligence (AI). Two students in English teacher Annie Thoms’s Writing to Make Change class used an AI program to create an essay, sparking a conversation regarding how to approach plagiarism when none of the material is explicitly stolen.
In particular, ChatGPT is a widely used AI program that generates essays from simple prompts, producing thousands of words on demand. The software was launched by OpenAI on November 30, 2022, and has since garnered millions of users. Despite the efficacy of the algorithm, essays generated by the program often stand out as robotic, lacking the human thought found in student-created pieces. “ChatGPT and other AI seem to not be at [the level] where they sound like a person. They can write coherent sentences [...], but all the indicators of presence—personality, voice, surprise, error, mistakes—[aren’t] part of that,” Assistant Principal of English Eric Grossman said.
This robotic nature was a clear red flag for Thoms, causing her to investigate the paper more. “[The essay] just didn’t sound like the student. It was sort of vague and bland,” Thoms said. “It certainly didn’t sound like either of the students whose voices I knew, because I’ve been reading their writing since September.”
Following the discovery of the plagiarism, Thoms reached out to the students, who confessed to using a program to generate their essays. The students later spoke with both Thoms and Grossman, and while there was no explicit way to detect AI, there was additional evidence pointing to an outside source. “[The essay] did not directly respond to the assignment and was different from the drafts and reflection material the students had submitted,” Grossman said. “Part of avoiding [plagiarism] is creating assignments and structures around the assignment that make [plagiarism] not likely to occur in the first place and [make] AI-generated material more likely to stand out.”
In response to the incident, Thoms also promptly had a conversation with a majority of the classes she teaches, in which she admitted that catching incidents like these has changed how she views other students’ work. “I spoke with my Writing to Make Change sections very openly, obviously not identifying students, but just saying [that I] caught those two students,” Thoms said. “Finding those pieces, in the same way when I find [other forms of] plagiarism, [...] makes me read everybody else’s final project differently.”
Even before a recorded incident, AI-generated plagiarism was a topical issue for the English Department. Despite this, few teachers believed it would become a large problem. “There had been a lot of publicity about it. English teachers were talking, sharing articles, so it was very much on our collective radar [prior to the incident],” Grossman said. “[However], it seemed [unlikely] that [AI] would be something we would encounter, and we would deal with any incident of plagiarism.” Many students have also felt the effects of this focus; in a survey conducted by The Spectator, 70.8 percent of 72 survey respondents agreed with the statement, “AI has been a topic many teachers/classmates have discussed.”
Despite the severity of the incident, from Grossman’s point of view, few students seem to currently use AI-generated essays. “[There were only] a small handful [of cases], the same or less as in finals in previous semesters,” Grossman said. “Students understand that [plagiarism] is not ethical and does not reflect learning. For the most part, students do want to learn, [so cheating is] not worth it.”
For many students as well, AI-based plagiarism seems to be a line that hardly any are willing to cross. In the aforementioned survey, only 8.3 percent disagreed with the statement, “I would never use ChatGPT/an alternate AI to cheat on an essay,” and an even smaller percentage, 4.2 percent, admitted that “[they] have already used ChatGPT/an alternate AI to cheat on an essay.” This is partly because ChatGPT is not yet at the level of quality that students require for their assignments. “The quality of AI essays, at least for now, [is] not at all comparable to student essays, so it’s not really cheat-able,” anonymous survey respondent A said.
In contrast, many students highlighted positive ways that AI could be used. Though 95.9 percent of respondents agreed with the statement, “I believe AI can have practical uses,” only 22.2 percent of respondents agreed with the statement, “I believe AI is inherently immoral.” Explaining this view, some respondents pointed to the groundbreaking potential of AI. “[I’m] a programmer who understands how modern AIs work and how they’re useful,” anonymous survey respondent B said. “The idea that ‘just because [AI] can be used to plagiarize, it’s bad’ is something I entirely disagree with.”
Given the general consensus condemning the use of AI to cheat, Stuyvesant has already taken various measures to help counter the problem in the future. “Principal [Seung] Yu has already revised [Stuyvesant’s schoolwide academic honesty policy] because it’s not just an English department thing. It’s potentially an issue in any subject,” Grossman said. “We’re not changing our policies, we’re just being attentive to [a new] dynamic [of plagiarism]. The whole staff spoke about it at the January 30 faculty meeting [and] we are planning a full faculty meeting for March.” Some students also seem to want greater accountability, with 29.1 percent of survey respondents agreeing with the statement, “Teachers should more strongly enforce/create harsher plagiarism policies.”
However, the Stuyvesant administration ultimately wants to promote the ethical use of AI as a beneficial tool. “[We need to] think about larger questions like: what are the ethics of AI? What are some cool and useful ways of using it?” Grossman said. “Certainly teachers will find their own applications of it. What I’d like to make sure is that everyone in the Stuy[vesant] community is on the same page about what it is and how to use it in ethical ways.”