Business Standard

AI is using 'fake' data to learn to be less discriminatory and racist

Many AI makers are using 'synthetic' images to train computers on a broader array of people, skin tones, ages or other features


Fake data isn’t just being used to train vision recognition systems, but also predictive software.

Parmy Olson | Bloomberg
Last week Microsoft said it would stop selling software that guesses a person's mood by looking at their face. The reason: It could be discriminatory. Computer vision software, which is used in self-driving cars and facial recognition, has long had issues with errors that come at the expense of women and people of color. Microsoft's decision to halt the system entirely is one way of dealing with the problem.

But there’s another, novel approach that tech firms are exploring: training AI on “synthetic” images to make it less biased. The idea is a bit like training pilots. Instead of practicing in


First Published: Jun 28 2022 | 1:26 AM IST

