Companies must adopt effective and responsible policies to reduce bias in Artificial Intelligence (AI) decision-making within their organizations, according to a study by the Aapti Institute and the United Nations Development Programme (UNDP), released Wednesday.
As more businesses automate their services, familiarity with relevant policies and regulations can help companies address the impact of AI on human rights, a growing concern, the study noted. The absence of sound company policies and laws could worsen the impact of AI and automation on workers’ human rights, it said.
Companies tend to use algorithm-based decision-making frameworks to serve existing company policy, rather than aiming to establish reliable and explainable AI models, the study found.
An explainable AI model is one in which the decisions taken by the AI algorithm, and the reasoning behind them, can be explained, leading to a better understanding of subtle bias and allowing algorithms to be retrained accordingly.
Algorithmic bias, according to the study, has a profound impact on the financial services, healthcare, marketing, and gig-work industries. Within these sectors, the workers most affected are among the most vulnerable and disadvantaged, with limited access to technology, which in turn limits their ability to seek redress if they believe an automated decision made by their employers has wronged them.