
Another AI Issue for Schools to Know About: Bias Against Non-Native English Speakers

by Staff

As educators continue to explore what role artificial intelligence tools should or could play in the classroom, some researchers are cautioning teachers that AI detectors are biased against non-native English-speakers.

In an article published last month, Stanford University researchers evaluated how accurately detectors of GPT, or generative pre-trained transformer, output distinguish text written by non-native English speakers from AI-generated text.

To test this, the researchers ran essays written by Chinese students for the Test of English as a Foreign Language, or TOEFL, through seven widely used detectors. They did the same with a sample of essays written by U.S. 8th graders who were native English speakers. The tools incorrectly labeled more than half of the TOEFL essays as AI-generated, while accurately classifying the 8th grade essays.

“We found this substantial bias whereby many of [non-native English speakers’] writings are mistakenly flagged as generated by GPT, when they were really written by humans,” said James Zou, one of the co-authors of the article and an assistant professor of biomedical data science at Stanford University.

GPT detectors measure something called the perplexity of text, Zou said: how surprising a passage's word choices are to a language model. Low-perplexity text uses more common, generic words, so it looks more like model output and is more likely to be flagged as AI-generated even when a human wrote it. For this reason, the researchers measured the perplexity of the students' writing samples and found that text with low perplexity was indeed more likely to be flagged as AI-generated.

Students in the U.S. who are English learners may be more likely to use more common words in their writing as they work to expand their vocabulary, making them more likely to be erroneously flagged as having used AI, Zou added.
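The intuition behind perplexity can be sketched with a toy model. The detectors in the study use large language models, but the effect shows up even with a simple unigram model: text made of common words receives higher probability and therefore lower perplexity than text using rarer vocabulary. Everything below (the miniature corpus, the add-one smoothing, the vocabulary-size guess) is an illustrative assumption, not the researchers' actual method.

```python
import math
from collections import Counter

def unigram_perplexity(text, counts, total, vocab_size):
    """Perplexity of `text` under a unigram model with add-one smoothing."""
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        # Smoothed probability: unseen words still get a small nonzero mass.
        p = (counts.get(w, 0) + 1) / (total + vocab_size)
        log_prob += math.log(p)
    # Perplexity is the exponentiated average negative log-probability.
    return math.exp(-log_prob / len(words))

# A tiny "background" corpus standing in for a model's training data.
corpus = ("the students write essays in school . the teacher reads the "
          "essays . students use common words . the school is good .").split()
counts = Counter(corpus)
total = len(corpus)
vocab_size = len(counts) + 1000  # assume many more unseen words exist

generic = "the students write essays in school"        # common wording
unusual = "pupils compose idiosyncratic compositions"  # rarer wording

ppl_generic = unigram_perplexity(generic, counts, total, vocab_size)
ppl_unusual = unigram_perplexity(unusual, counts, total, vocab_size)

# Generic text scores lower perplexity, so a perplexity-based detector
# would be more inclined to flag it as AI-generated.
assert ppl_generic < ppl_unusual
```

This is the bias mechanism in miniature: a writer with a smaller working vocabulary produces text closer to the "generic" sentence, which a perplexity-based detector treats as a signal of machine authorship.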

How to address bias in AI detectors

Bias in AI tools isn’t a new phenomenon, and GPT detectors have never been 100 percent foolproof, especially as AI technology continues to advance, said Christopher Doss, a policy researcher at the RAND Corporation.

“AI is trained on data. Societal biases are baked into data,” Doss said.

For Doss and others, the key takeaway is that teachers must be cautious when relying on GPT detectors to determine if a student cheated with AI assistance, but more importantly, that educators need to be thinking of different ways to use AI tools in the classroom.

“In the beginning of the class, how do you figure out how to use ChatGPT to help learning and teach children how to use these tools for good?” Doss said. “But then also, how do you make sure that your children don’t use it as a crutch?”

Peter Gault, the executive director and co-founder of Quill, a nonprofit that provides open-source literacy materials to teachers, said that the group’s AI detector known as AI Writing Check was among the tools examined by the Stanford researchers. That tool is no longer available to teachers as of this week.

“When we launched this tool in January 2023, the only Generative AI tool available was ChatGPT. There are now a series of different tools available, and each of these tools is being upgraded weekly. As these tools make their AI more complex, the AI text output becomes more varied, and it becomes more difficult for algorithms to detect whether a piece of writing was generated by AI,” read a statement on the AI Writing Check website.

As a more reliable way to check whether students used AI in their work, one that avoids the biases of detection tools, Gault recommends teachers review the version history of students' documents. In other words, teachers can go into Google Docs or Microsoft Word and see the edits and revisions students made along the way. Those iterations let teachers tell whether a student used AI assistance and also reveal the student's writing process, helping teachers understand how best to grow their writing skills.

Looking ahead, Gault said advances in the technology could lead to AI assistants that help students as they write and check their work as they go, rather than waiting until an assignment is complete.

Zou, the Stanford researcher, also recommends this more proactive approach to checking students’ writing in progress as opposed to relying on an evaluation tool at the end.

With English learners in particular, he added, AI tools have the potential to help track students’ grammatical mistakes for more personalized assistance and could even aid students in need of translation services.

The biases against English learners in general

English learners are one of the fastest-growing student demographics in the United States.

When it comes to working specifically with English learners, educators must be cognizant of broader biases these students face in K-12 schools, said Xilonin Cruz-Gonzalez, the deputy director of Californians Together, a research and advocacy organization for English learners and their families.

For instance, while there are clear federal guidelines and even some state guidelines on the rights English learners have in schools—whether it’s access to translation services or the ability to enroll in schools regardless of immigration status—some school districts at the local level may not always understand their legal obligations, Cruz-Gonzalez said.

It’s partly why in June the U.S. departments of Justice and Education published fact sheets reminding educators of immigrant students’ legal rights in K-12 schools.

Cruz-Gonzalez said English learners often face unconscious biases among educators when it comes to the linguistic and academic assets they bring to the classroom.

As new technologies emerge, she and other advocates hope developers address biases in AI tools and create opportunities for English learners to creatively and positively engage with AI technology.
