
At GDS, some teachers have used ChatGPT to write report card comments and quizzes, while students have used it to study. No matter what the AI is used for, though, it sets us up for failure.
ChatGPT is a chatbot designed to respond to any prompt as a human would, predicting the most plausible answer based on patterns in online text. But the information ChatGPT feeds you is not necessarily true. Because its responses come from combing through patterns online, it has been found to reinforce patterns of systemic and institutional racism. Studies have discovered that ChatGPT reflects bias against certain races and generates misinformation and stereotypes about ethnic and social groups.
In other words, the AI is biased against specific groups of people because the data it was trained on is, itself, biased. ChatGPT’s lack of awareness poses a new question for us all: Does artificial intelligence have a moral compass?
English Department Chair Katherine Dunbar said she tried ChatGPT, and the program wrote an essay that misidentified the races of characters in a book and was riddled with offensive stereotypes about Black and trans people. Dunbar said ChatGPT’s racism and prejudice could be “extraordinarily damaging” to students and teachers.
“It’s counter to what we want for our students, which is independent thinking,” Dunbar said. “I have deep concerns about it as a barrier to deep thinking in English classes.”
The department updated its academic integrity policy in January to include the use of ChatGPT as a form of academic dishonesty.
ChatGPT’s blatant and offensive remarks about identity concern many teachers. “The risks that [ChatGPT] has for reinforcing certain cultural and societal beliefs moves whatever is being produced further away from what the student might think,” English teacher Benjamin Stein said, echoing Dunbar’s sentiments.
One of the most devastating consequences of ChatGPT is that it discourages us from thinking critically. Critical thinking is a safeguard against racism, and by using ChatGPT to avoid that critical work, we become more susceptible to bias.
Stein added that he thinks ChatGPT has no place in the classroom because it doesn't let students learn how to construct their own arguments, which he believes is one of the most important parts of high school English classes. He said the AI can insert bias into students' work without their awareness, which he sees as a key reason ChatGPT should not be in school environments.
Another danger of using ChatGPT is that its bias can subconsciously make its way into our thinking. While that biased information may already exist on the internet, reproducing it through ChatGPT amplifies it and spreads it further.
Not only does the use of ChatGPT defy our English classroom rules, but the information it provides also contradicts GDS' mission of equity. It undermines values GDS holds close: diversity, inclusion and respect. Using a system with such dangerous consequences actively harms us and our community because it perpetuates bias and prejudice. When we let AI do the work for us, it not only influences us but also weakens our ability to think respectfully and examine a world full of bias.
ChatGPT is a powerful tool with the capability to analyze information in an unprecedented way. But with great power comes great responsibility, and AI is not addressing bias responsibly. Whether you avoid ChatGPT entirely or find it acceptable to use in some circumstances, keep in mind the consequences of your actions, and be conscious of the effect its information has on you.