Is it Artificially Intelligent or Naturally Stupid? Let’s ask Apple

https://dailyfintech.com/2019/11/15/is-it-artificially-intelligent-or-naturally-stupid-lets-ask-apple/

Earlier this week, an allegation surfaced that the credit scoring engine behind the Apple Card was biased. It emerged from the Twitter account of David Heinemeier Hansson (@dhh).

He raised the issue that his wife had been given a credit limit 20 times lower than his. David has about 360K followers on Twitter, and the tweet went viral.

Steve Wozniak, Apple’s co-founder, chipped in that he and his wife had faced the same issue. This triggered reactions from regulators and politicians.

So, how do we keep tech that drives financial decisions honest? How much can we trust a machine’s intelligence?

Apple

Image Source: Twitter

For readers’ ease, I am going to take the liberty of using the term AI to refer to machine learning, deep learning and even data analytics throughout this post.

Many underwriting engines within financial services are starting to explore AI-based decision making. Thanks to the rise of Fintech, even big banks have had to embrace some of these technologies to support core capabilities like lending.

However, the intelligence of the system can only be as good as the data it learns from. Garbage in, garbage out.

In the last century, AI was envisioned as a program that was clever enough to mimic a human’s way of decision making. Despite a lot of progress in this space, the technology went through several winters without any meaningful commercial application.

After the dot-com boom and bust, as social media became mainstream, the field of AI started seeing more traction. The machines now had more data to churn through, building their internal rules engines from the patterns in that data. As more and more data got created, we started seeing AI solutions for different problems.

Therefore, while algorithms are the machinery that makes decisions, the oil that keeps them going is rich data. However, AI can only be as clever as the data fed into it. Data on the internet, data reflected in our social media interactions, and even data from historic credit decisions reflect several human biases.

As a result, when AI engines are fed this data, the patterns they identify reflect the bias. If those patterns are then used as rules for future decision making, the machine will produce biased outputs.
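The mechanism is easy to demonstrate. Below is a minimal, purely hypothetical sketch (toy data, invented feature names) of how a model that "learns" from biased historical credit decisions reproduces the bias, even when the protected attribute itself is never used:

```python
# Toy illustration with hypothetical data: a "model" that learns credit
# limits from biased historical decisions reproduces that bias, even though
# gender is never an input. The proxy feature (imagine a merchant-category
# code that correlates with gender) carries the bias instead.
from statistics import mean

# Historical decisions: (income, proxy_group, approved_limit).
# Past underwriters systematically gave group "B" far lower limits.
history = [
    (80_000, "A", 20_000), (80_000, "A", 22_000),
    (80_000, "B", 1_000),  (80_000, "B", 1_200),
]

def learn_limits(rows):
    """'Train' by averaging past approved limits per proxy group."""
    groups = {}
    for income, proxy, limit in rows:
        groups.setdefault(proxy, []).append(limit)
    return {g: mean(limits) for g, limits in groups.items()}

model = learn_limits(history)

# Two applicants with identical incomes get wildly different limits,
# purely because the training data encoded a historical bias.
print(model["A"])  # 21000
print(model["B"])  # 1100
```

Real underwriting models are vastly more sophisticated than a per-group average, but the failure mode is the same: the pattern the machine extracts is the bias.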

I recently came across the following interaction between a customer ordering food and a food delivery service in India. The service provider (Zomato) has been accused of using an inaccurate AI bot that automatically responds to customers.

Zomato chat screenshot

Image Source

The above food scenario was mostly harmless and was resolved amicably. But that benevolence can’t be extended to financial services: when an error-prone machine performs a major financial calculation, its decisions affect people’s livelihoods.

In such a scenario, firms using a machine would need to go through compliance checks for the quality of its decisions. It cannot be a one-time exercise either: the checks will need to be periodic, with a clearly defined set of criteria and edge scenarios.
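One concrete criterion such periodic checks could include is the "four-fifths rule" used in US disparate-impact analysis: if one group's approval rate is below 80% of another's, the outcome is conventionally flagged for review. A minimal sketch, with hypothetical audit data:

```python
# Sketch of one periodic compliance check: comparing approval rates
# between two groups against the conventional 0.8 ("four-fifths")
# threshold. All data here is hypothetical.

def selection_rate(decisions):
    """Fraction of applicants approved in a group (1=approved, 0=declined)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower group approval rate to the higher one.
    Values below 0.8 are a conventional red flag for disparate impact."""
    lo, hi = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lo / hi

# Hypothetical audit sample of recent decisions
men   = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # 80% approved
women = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # 40% approved

ratio = adverse_impact_ratio(men, women)
print(round(ratio, 2))  # 0.5
if ratio < 0.8:
    print("flag: possible disparate impact - escalate for review")
```

A real audit framework would test many more criteria (calibration across groups, stability over time, edge scenarios), but even a simple ratio like this, run on a schedule, catches the kind of disparity the Apple Card tweets described.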

Coming back to our Apple Card story, the New York Department of Financial Services Superintendent responded that they would look into the issue, and kicked off an investigation.

Goldman Sachs released a note today (image below). It sounds shallow to me, as David (@dhh) reported that his credit score and his wife’s were similar, so the credit decisions should have been similar too.

Goldman

Image Source: Twitter

From the sound of it, we still have a lot of work to do in getting these machines right. I am not sure banks (in this case Goldman) realise that this is more of a data quality issue than a code issue. They can claim that their code has no logic programmed in to make decisions sexist.

If the machine is not clever enough to spot biased data, report it, and make decisions based on good-quality data, it is not fit for purpose.

The logical step is for regulators to create AI ethics boards across the world. Wherever financial services are provided, the board would need to ensure that the technology behind them is monitored regularly against criteria covering ethical implications.

Without that, we may see many more such stories, and machine bashing will soon be a daily headline.


Arunkumar Krishnakumar is a VC at Green Shores Capital, where he focuses on deeptech and sustainable investments.

