This is an extreme example. But one of the problems with new technology is that it can render existing law obsolete. The opposite situation rarely occurs — legislation is rarely so futuristic in its approach that it forces technology to find new solutions.
The European Union’s new General Data Protection Regulation (GDPR) may actually trigger such an unusual situation. The GDPR will be enforced in EU member states from May 2018, and some sections of the law give citizens a right to demand explanations of decisions made by algorithms.
The EU already has strong data protection laws, which have been strengthened further in the new legislation. Existing data protection laws give EU citizens the right to know about, and access, data collected and held about them by governments and private corporations. They even give them the “right to be forgotten” in search results in certain cases.
Articles 13 and 14 of the new legislation deal with “a data subject having the right to meaningful information about the logic involved in algorithmic decisions that affect them”. Article 22 of the new law pertains to “Automated individual decision-making, including profiling” and it bans decisions “based solely on automated processing, including profiling, which produces an adverse legal effect concerning the data subject, or significantly affects him or her.”
Taken together, this could mean a sea change in the way algorithms are designed and presented. In effect, Article 22 implies that a human element would have to be present in the decision chain whenever algorithms are used to make key decisions that affect individuals. More importantly, those decisions would have to be explained in natural language that makes sense to humans.
Machine learning (a branch of artificial intelligence, or AI, and often used interchangeably with the term) is often used to make routine decisions in personal finance. When somebody applies for a loan or a credit card, the yes/no decision is often made by a machine. The credit limit, interest rate, tenure, and other details are also likely to be automated. Similarly, intelligent agents suggest portfolio allocations for savings.
Law-enforcement decisions such as profiling for “no-fly” lists or additional security checks on air passengers are often made by running algorithms. AI is also increasingly effective at esoteric tasks like reading body language and even at figuring out sexual orientation.
It may seem easy enough to introduce a human being into the chain to sign off on some decisions. But even the data scientists who write the machine-learning programs might find it impossible to explain the machine’s decisions.
Machine learning works by setting up programs that consider multiple variables and then feeding them huge quantities of data. The program sifts the data, sets its own rules, and finds patterns and correlations. It is, in effect, a black box: Mr X is given a credit limit of so much; Ms Y gets a different limit; Mr Z is refused a credit card. Nobody bar the machine knows why, and it may not be easy, or even possible at all, for a human being to figure out the machine’s logic.
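For readers who want to see what such a black box looks like in practice, here is a minimal, invented Python sketch. The applicants, features, model and thresholds below are all made up for illustration; they do not come from any real lender.

```python
# A made-up illustration of an automated credit decision.
# All data, feature names and thresholds are invented for this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical past applicants: [annual income, existing debt, years at address]
X = rng.normal(loc=[50_000, 10_000, 5], scale=[20_000, 8_000, 4], size=(1_000, 3))
# Hypothetical outcomes: 1 = repaid, 0 = defaulted
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 15_000, 1_000) > 30_000).astype(int)

# The model "sets its own rules" from the data; nobody writes them down.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

applicant = [[42_000, 12_000, 2]]           # a new application
decision = model.predict(applicant)[0]      # a yes/no comes out...
print("approve" if decision else "refuse")  # ...but no human-readable reason does
```

The approval here emerges from hundreds of decision trees voting together; asking the program why it refused one particular applicant has no short, natural-language answer.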
We don’t know if the machine is transparent and fair, or if it is basing decisions on racial or religious factors. For example, somebody who lives in a low-income, minority-dominated area may be refused credit by an AI. Is it because it’s a low-income area, or because that person belongs to a minority?
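A small, synthetic simulation illustrates the worry: even if a model is never shown a protected attribute, a correlated proxy, such as living in a particular neighbourhood, can carry the same information. Everything below is invented and purely illustrative.

```python
# Synthetic illustration of proxy bias: the model never sees the protected
# attribute, yet its decisions still split along it via a correlated proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

minority = rng.integers(0, 2, n)                                      # hidden attribute
low_income_area = (minority + rng.integers(0, 2, n) > 0).astype(int)  # correlated proxy
income = rng.normal(45 - 10 * low_income_area, 12)                    # income, in thousands

# In this toy world, repayment depends only on income
repaid = (income + rng.normal(0, 10, n) > 40).astype(int)

# The model is trained without the protected attribute...
X = np.column_stack([low_income_area, income])
model = LogisticRegression().fit(X, repaid)

# ...yet approval rates still differ across the hidden groups.
approved = model.predict(X)
print("approval rate, minority:    ", approved[minority == 1].mean())
print("approval rate, non-minority:", approved[minority == 0].mean())
```

The two approval rates printed at the end differ, yet nothing in the model’s inputs names a minority group, which is exactly why an outside observer cannot tell which factor drove a refusal.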
One classic case study often mentioned in machine learning involved an examination of pneumonia cases. A program discovered that asthmatics with pneumonia had a higher recovery rate than other patients. The real reason was that asthmatics get immediate emergency care while other patients tend to be treated more casually; a model that took the correlation at face value could wrongly conclude that asthmatic patients need less urgent attention.
In August, a behavioural expert and data scientist at Stanford’s Graduate School of Business published a study where he claimed that a facial-recognition program he had developed could identify the sexual orientation of people with very high accuracy by looking at profile pictures pulled off social media. Human beings have a strike rate of about 60 per cent – little better than random guesses – at identifying sexual orientation by viewing faces. The program was correct 91 per cent of the time with men, and about 83 per cent accurate with women.
Another AI program, “Silent Talker”, claims to work as a lie detector by looking at minute changes in facial expression while people answer questions. Body-language recognition programs, meanwhile, can identify people even at a distance and with their faces obscured. In all these cases, the programmers don’t really know what the machine is picking up on.
The new GDPR will force computer scientists to direct research towards this area. That could lead to much greater insight into machine learning and a better understanding of the biases algorithms develop.