Fair Isaac Corp’s artificial intelligence software, which two-thirds of the world’s top 100 banks use to help make lending decisions, can wreak havoc when things go wrong.
Such a crisis nearly came to pass at the beginning of the pandemic. As FICO recounted to Reuters, the Bozeman, Montana-based company’s artificial intelligence tools, which help banks identify credit and debit card fraud, concluded that a surge in online shopping meant fraudsters must have been busier than usual.
The AI software told banks to reject millions of legitimate purchases at a time when consumers were scrambling for toilet paper and other essentials.
Consumers ultimately faced few rejections, however, according to FICO. The company said a global team of 20 analysts who constantly monitor its systems recommended temporary adjustments that avoided blocks on spending. The team is automatically alerted to unusual buying activity that could confuse the AI, which some 9,000 financial institutions rely on to detect fraud across 2 billion cards.
Such corporate teams, part of the emerging machine-learning subspecialty known as MLOps (machine learning operations), are uncommon. In separate surveys last year, FICO and consulting firm McKinsey & Co found that most organizations surveyed do not regularly monitor AI-based programs after launching them.
The problem, according to the scientists who run these systems, is that errors can abound when real-world circumstances deviate, or in technical parlance “drift,” from the examples used to train the AI. In FICO’s case, the company said its software expected more in-person than online purchases; when that ratio flipped, a higher share of transactions was flagged as fraudulent.
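To make the idea concrete, here is a minimal sketch of the kind of check a monitoring team might run, assuming a single watched statistic (the share of online purchases); the data, tolerance, and helper names are invented for illustration and are not FICO’s actual method.

```python
# A minimal sketch, assuming one monitored statistic: the share of
# online ("card not present") purchases. Real monitoring systems track
# many inputs; the data, tolerance, and names here are invented and
# are not FICO's actual method.
def online_share(purchases: list[bool]) -> float:
    """Fraction of purchases made online rather than in person."""
    return sum(purchases) / len(purchases)

def drift_alert(train_share: float, live_share: float,
                tolerance: float = 0.10) -> bool:
    """Flag drift when the live share strays too far from training."""
    return abs(live_share - train_share) > tolerance

# True = online purchase, False = in person (made-up data).
training_purchases = [True] * 3_000 + [False] * 7_000  # ~30% online
pandemic_purchases = [True] * 7_000 + [False] * 3_000  # ~70% online

if drift_alert(online_share(training_purchases),
               online_share(pandemic_purchases)):
    print("Alert: the input mix has drifted from the training data.")
```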
Seasonal fluctuations, changes in data quality or major events – such as a pandemic – can all lead to a series of bad AI predictions.
Imagine a system that recommends swimwear to summer shoppers without realizing that COVID lockdowns had made sweatpants more appropriate. Or a facial-recognition system that became unreliable as mask-wearing became widespread.
The pandemic must have been a “wake-up call” for anyone who hadn’t been keeping a close eye on artificial intelligence systems because it prompted countless changes in behavior, said Aleksander Madry, director of the Center for Deployable Machine Learning at the Massachusetts Institute of Technology.
Coping with drift is a huge challenge for organizations using AI, he said. “That’s what’s really holding us back right now in this dream of AI that’s going to change everything.”
Adding to the urgency for users to address the issue, the European Union plans to pass a new artificial intelligence law next year that will require some monitoring. The White House also called for monitoring in new AI guidelines this month, to ensure that “system performance does not fall below acceptable levels over time.”
Slow recognition of problems can be costly. Unity Software Inc, whose advertising software helps video games attract players, estimated in May that it would lose $110 million in sales this year, about 8% of expected total revenue, after customers pulled back when its AI tool for deciding whom to show ads to stopped working as well as it once did. Its AI system learning from corrupted data was also to blame, the company said.
San Francisco-based Unity declined to comment beyond the earnings statement. Management there said Unity was deploying alerting and remediation tools to catch problems more quickly, and acknowledged that expansion and new features had taken precedence over monitoring.
Real estate marketplace Zillow Group Inc announced last November that it was taking a $304 million writedown on homes it had bought, based on a price-prediction algorithm, for more than they could be resold for. The Seattle-based company said the artificial intelligence could not keep up with rapid and unprecedented market swings, and it exited the home-buying and reselling business.
New market
Artificial intelligence can go awry in many ways. Most famously, training data biased along race or other lines can produce unfairly biased predictions. According to surveys and industry experts, many companies are now pre-screening data to prevent this. By comparison, few companies consider the dangers of a well-functioning model that later breaks, these sources say.
“It’s an urgent problem,” said Sara Hooker, head of the Cohere For AI research lab. “How do you update models that become stale as the world around them changes?”
In the past few years, several startups and cloud-computing giants have begun selling software to analyze performance, set alarms and deploy fixes, which together are intended to help teams stay on top of AI. IDC, a global market researcher, estimates that spending on AI operations tools will reach at least $2 billion in 2026, up from $408 million last year.
Venture capital investment in AI companies grew to nearly $13 billion last year, and $6 billion has poured in so far this year, according to data from PitchBook, a Seattle-based funding tracking firm.
Arize AI, which raised $38 million from investors last month, enables monitoring for customers including Uber, Chick-fil-A and Procter & Gamble. Chief Product Officer Aparna Dhinakaran said that at a previous employer, she struggled to quickly detect AI predictions that were deteriorating, and friends elsewhere told her about their own delays.
“The world today is that you don’t know there’s a problem until it affects the business two months from now,” she said.
Fraudsters score
Some AI users have built their own monitoring capabilities, and that’s what FICO said saved it at the start of the pandemic.
Alarms went off when a surge of purchases was being made online, what the industry calls “card not present” transactions. Historically, more of that spending tends to be fraudulent, and the surge pushed transactions higher on FICO’s fraud scale of 1 to 999 (the higher the score, the more likely the transaction is fraudulent), said Scott Zoldi, chief analytics officer at FICO.
Zoldi said consumer habits were changing too quickly to rewrite the AI system, so FICO advised its U.S. clients to review and decline only transactions scoring above 900, up from 850. That spared clients from reviewing the 67% of legitimate transactions that scored above the old threshold, and let them focus on the truly problematic cases.
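As a rough illustration of that adjustment, here is what raising the decline cutoff looks like in code; the scores and the helper function are hypothetical, made up for this example.

```python
# Hypothetical sketch of raising a decline threshold on a 1-to-999
# fraud-score scale, as FICO describes advising its U.S. clients.
# The transaction scores below are invented for illustration.
def should_decline(fraud_score: int, threshold: int) -> bool:
    """Decline only transactions that score above the cutoff."""
    return fraud_score > threshold

scores = [820, 855, 880, 905, 960]  # made-up transaction scores

declined_old = [s for s in scores if should_decline(s, threshold=850)]
declined_new = [s for s in scores if should_decline(s, threshold=900)]

print("Declined at 850:", declined_old)  # [855, 880, 905, 960]
print("Declined at 900:", declined_new)  # [905, 960]
```

Raising the cutoff trades a little missed fraud for far fewer false alarms, which is why it works as a stopgap while the underlying model catches up.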
During the first six months of the pandemic, clients detected 25% more total fraud in the US than expected and 60% more in the UK, Zoldi said.
“You’re not responsible for AI if you’re not monitoring,” he said.