In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational.
However, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways.
These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views on race and gender.
Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers to process the natural language humans use to communicate, for instance in online text searches, image categorisation and automated translation.
"Questions about fairness and bias in machine learning are tremendously important for our society," said Arvind Narayanan, assistant professor at Princeton University in the US.
"We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from," said Narayanan.
The researchers worked with GloVe, an algorithm that represents the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
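A rough idea of the underlying statistic can be sketched in a few lines of Python. The snippet below simply counts how often pairs of words fall within a ten-word window of each other; GloVe itself goes further, fitting word vectors to such counts, so this is only an illustration of the raw signal it learns from, not the algorithm itself.

```python
# Illustrative sketch only: tallies raw word co-occurrences within a
# fixed window, the statistic that GloVe models. GloVe itself fits
# word vectors to these counts; this is not GloVe's training code.
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each pair of words appears within `window` words."""
    counts = Counter()
    for i, word in enumerate(tokens):
        # Look ahead up to `window` tokens; each nearby pair counts once.
        for other in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((word, other)))] += 1
    return counts

text = "the nurse and the teacher spoke while the engineer reviewed the salary report"
counts = cooccurrence_counts(text.split(), window=10)
print(counts[("nurse", "teacher")])  # pairs seen close together score higher
```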
Researchers used GloVe on a huge trawl of content from the World Wide Web, containing 840 billion words.
Within this large sample of written human culture, researchers then examined sets of target words, like "programmer, engineer, scientist" and "nurse, teacher, librarian" alongside two sets of attribute words, such as "man, male" and "woman, female," looking for evidence of the kinds of biases humans can unwittingly possess.
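In code, a simplified version of such a test might compare each target word's average similarity to one attribute set against the other. The sketch below assumes a hypothetical `vec` lookup that maps a word to its pretrained embedding vector; the published study used a more formal statistic, the Word-Embedding Association Test, which adds an effect size and a permutation test on top of this idea.

```python
# Simplified sketch of this kind of association test. Assumes `vec`
# maps a word to its embedding (e.g. loaded from pretrained GloVe
# vectors); the word lists mirror the article's examples.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attr_a, attr_b, vec):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    return (np.mean([cosine(vec[word], vec[a]) for a in attr_a])
            - np.mean([cosine(vec[word], vec[b]) for b in attr_b]))

def bias_score(targets_x, targets_y, attr_a, attr_b, vec):
    """Positive when target set X leans toward attributes A and Y toward B."""
    return (sum(association(w, attr_a, attr_b, vec) for w in targets_x)
            - sum(association(w, attr_a, attr_b, vec) for w in targets_y))

# Example usage with the article's word sets (embeddings not shown here):
# score = bias_score(["programmer", "engineer", "scientist"],
#                    ["nurse", "teacher", "librarian"],
#                    ["man", "male"], ["woman", "female"], vec)
```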
In the results, innocuous biases, like the preference for flowers over insects, showed up, but so did biases along lines of gender and race.
The Princeton machine learning experiment replicated well-documented patterns of human bias.
For instance, the machine learning program associated female names more strongly than male names with familial attribute words, like "parents" and "wedding."
In turn, male names had stronger associations with career attributes, like "professional" and "salary."
Even when such learned associations mirror real-world occupational statistics, they can end up having pernicious, sexist effects.
The study was published in the journal Science.