Artificial intelligence favors the rich and powerful, disadvantages the rest: Mozilla

The growing power imbalance between those who benefit from artificial intelligence (AI) and those who are harmed by the technology is the internet's biggest challenge, according to the Internet Health Report 2022. The report says AI and automation can be powerful tools for the powerful, such as the tech titans who profit most from them, while at the same time harming vulnerable groups and societies.

A report compiled by researchers at Mozilla, the nonprofit that makes the Firefox web browser and advocates for privacy on the web, said: “In real life, time and time again, AI disproportionately harms people who are disadvantaged by global systems of power.”

“Amid the global rush to automation, we see serious dangers of discrimination and surveillance. We see an absence of transparency and accountability and an over-reliance on automation for decisions with huge consequences,” Mozilla researchers said.

While the report notes that systems trained on vast swathes of complex real-world data are revolutionizing computing tasks that were previously difficult or impossible, including speech recognition, financial fraud detection, and piloting self-driving cars, plenty of challenges remain in the field of artificial intelligence.

For example, machine learning models often reproduce racist and sexist stereotypes due to bias in the data they draw from internet forums, popular culture, and photo archives.

The nonprofit believes big companies aren’t being transparent about how they use our personal data in algorithms that recommend social media posts, products and purchases, among other things.

Furthermore, recommendation systems can be manipulated to display propaganda or other harmful content. In a Mozilla study of YouTube, algorithmic recommendations were responsible for 71% of the videos that participants said they regretted watching.

Companies like Google, Amazon, and Facebook have major programs to address issues like AI bias, but biases have been built into their algorithms in subtle ways. For example, The New York Times pointed to a 2015 incident in which Google apologized after Google Photos labeled photos of Black people as gorillas. To avoid repeating the embarrassment, Google simply removed the labels for gorillas, chimpanzees, and monkeys.

Likewise, during the major 2020 protests against the killing of George Floyd in the US, Amazon continued to sell its facial recognition software to police departments, even though research has shown that facial recognition programs misidentify people of color at higher rates than white people, and that their use by police could result in unfair arrests that disproportionately affect Black people. Facebook, too, apologized after its AI asked users who had watched a clip of Black men in altercations with white civilians and police officers whether they wanted to keep seeing videos about "primates."

The Mozilla researchers note, however, that while Big Tech funds much academic research, including papers focused on the social issues and risks of AI, the companies themselves don't walk the walk.

“Centralizing influence and control over artificial intelligence does not work to the benefit of most people,” Solana Larsen, editor of Mozilla’s Internet Health Report, said in the report. The goal, she said, is to “enhance technology ecosystems outside of Big Tech and venture capital startups if we want to unlock the full potential of trustworthy AI.”

Mozilla suggested that new regulations “can help set the bar for innovation that reduces harm and promotes data privacy, user rights and more.”


