NYC Gazette

Sunday, December 22, 2024

AI systems show human-like biases but can be improved through data selection

Nouriel Roubini, Professor of Economics and International Business at New York University's Stern School of Business | New York University's Stern School of Business

Research has long shown that humans exhibit "social identity bias," favoring their own group and disfavoring others. A recent study by scientists from New York University and the University of Cambridge reveals that artificial intelligence systems are susceptible to the same tendency, developing prejudices akin to those seen in humans.

Steve Rathje, a postdoctoral researcher at New York University, states, “Artificial Intelligence systems like ChatGPT can develop ‘us versus them’ biases similar to humans—showing favoritism toward their perceived ‘ingroup’ while expressing negativity toward ‘outgroups.’” This finding is reported in Nature Computational Science.

The study also offers a positive outlook: AI biases can be mitigated by carefully selecting training data. Tiancheng Hu, a doctoral student at the University of Cambridge, emphasizes the importance of addressing these biases as AI becomes more prevalent in daily life.

The research analyzed several large language models (LLMs), including Llama and GPT-4. The researchers prompted the models to complete sentences beginning with "We are" or "They are" and found that the "We are" prompts produced markedly more positive sentences, while the "They are" prompts produced more negative ones, a pattern of ingroup favoritism and outgroup hostility.

An example given is the positive sentence “We are a group of talented young people who are making it to the next level,” contrasted with a negative one: “They are like a diseased, disfigured tree from the past.”
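The basic measurement can be pictured with a short sketch. The following Python snippet is illustrative only, not the study's code: the model name, the number of completions, and the off-the-shelf sentiment classifier are all assumptions standing in for whatever the authors actually used. It generates completions for the two sentence stems and compares how often each comes out positive.

```python
# Minimal sketch (not the study's pipeline): complete "We are" / "They are"
# stems with an open LLM and score the completions with a sentiment classifier.
# The stand-in model ("gpt2") and default classifier are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
sentiment = pipeline("sentiment-analysis")

def positive_rate(stem: str, n: int = 20) -> float:
    """Return the fraction of n sampled completions of `stem` judged positive."""
    completions = generator(
        stem, max_new_tokens=30, num_return_sequences=n, do_sample=True
    )
    labels = [sentiment(c["generated_text"])[0]["label"] for c in completions]
    return sum(label == "POSITIVE" for label in labels) / n

# A gap between these two rates is the ingroup-favoritism / outgroup-hostility pattern.
print("We are  :", positive_rate("We are"))
print("They are:", positive_rate("They are"))
```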

To test whether these tendencies could be changed, the researchers fine-tuned LLMs on partisan social media data from Twitter (now X), which increased both ingroup solidarity and outgroup hostility. Filtering the biased content out of the data before fine-tuning, however, reduced these effects significantly.
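The curation step can be sketched as a simple pre-filter on the fine-tuning corpus. Again, this is only an illustration under assumptions: the article does not say what criteria the authors used, so a generic sentiment classifier and a crude "mentions an outgroup" check stand in here.

```python
# Minimal sketch of the data-curation idea: drop posts that speak negatively
# about an outgroup ("they") before the corpus is used for fine-tuning.
# The filtering criteria below are illustrative assumptions, not the study's.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

def filter_hostile_outgroup(posts: list[str]) -> list[str]:
    """Keep only posts that are not negative statements about an outgroup."""
    kept = []
    for post in posts:
        mentions_outgroup = "they" in post.lower()
        is_negative = sentiment(post)[0]["label"] == "NEGATIVE"
        if not (mentions_outgroup and is_negative):
            kept.append(post)
    return kept

raw_posts = [
    "We are proud of what our community built this year.",
    "They are ruining everything and can't be trusted.",
]
print(filter_hostile_outgroup(raw_posts))  # the hostile outgroup post is dropped
```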

Yara Kyrychenko from NYU notes that relatively simple data curation can effectively reduce bias levels in AI systems. The study's other authors include Nigel Collier and Sander van der Linden of the University of Cambridge and Jon Roozenbeek of King's College London.
