Nouriel Roubini, Professor of Economics and International Business at New York University's Stern School of Business
In 2025, artificial intelligence (AI) has become an integral part of daily life, offering benefits such as traffic navigation, medical advancements, and academic research. However, its increasing presence also raises concerns about its cultural and societal impact. For instance, Coca-Cola's AI-generated Christmas video in November 2024 faced criticism for lacking creativity and replacing human artists without proper credit.
AI's influence on politics was evident in Germany when the far-right Alternative for Germany party used AI-generated content in their campaign to create a fictional narrative ahead of the February 23 election. Politico noted that this content helped the party make both its idealistic and dystopian visions appear real.
The Los Angeles Times recently introduced a "bias meter," an AI tool designed to detect political bias in opinion pieces. However, it was withdrawn after producing a controversial response perceived as minimizing the Ku Klux Klan's racist agenda.
Meredith Broussard, an associate professor at NYU’s Arthur L. Carter Journalism Institute, highlights AI's drawbacks, particularly biases related to race and gender. In her book "More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech," she warns against over-reliance on technology in high-stakes scenarios such as legal or medical decisions. Broussard emphasizes that AI systems are inherently biased because they are trained on real-world data that reflects existing societal inequities.
Broussard advises using technology judiciously and differentiating between mathematical fairness and social fairness. She notes that while AI excels at math, it struggles with social contexts essential for decisions involving employment or healthcare.
She also cautions against "technochauvinism," the presumption that technological solutions are superior to human ones. This mindset, she argues, can lead organizations to waste resources on computational fixes that do not work well in practice.
Regarding regulatory measures for AI technologies, Broussard advocates for governmental regulation rather than self-regulation by tech companies. She suggests learning from past regulatory models such as automobile seat belts, which were initially absent from cars but later mandated by law, so that safety standards can improve iteratively as new challenges arise.
"Instead of assuming that AI decisions are unbiased or neutral," says Broussard, "it’s more useful to assume that the AI decisions are going to be biased or discriminatory in some way."