Indian researchers figure out way to detect facial recognition fraud

IIIT-Delhi has developed an algorithm to help machine learning-based facial recognition systems identify synthetic images, which could otherwise fool even the most robust systems

Neha Alawadhi New Delhi
6 min read Last Updated : Jun 22 2019 | 10:42 PM IST
Last month, San Francisco banned the use of facial recognition technology, taking a stand against its potential misuse and drawing appreciation and criticism in equal measure from different quarters. While the debate over the propriety of the technology rages on, that hasn't deterred other countries and individuals from continuing to experiment with it, and deploy it.

As facial recognition technology becomes increasingly 'intelligent', ways to 'fool' it are, as with any other technology, not far behind. It has become relatively easy to 'generate' conflicting or confusing images using AI algorithms and use them to deceive such facial recognition systems.

Recent research by an Indian institute aims to fix some of these issues by developing a way to identify whether a facial recognition system has been attacked. The Image Analysis and Biometrics (IAB) Lab at IIIT-Delhi, led by Dr Mayank Vatsa and Dr Richa Singh, has come up with a solution for detecting and mitigating these attacks. The team also includes PhD students Gaurav Goswami and Akshay Agarwal.

How the scamsters work

Consider, for example, an autonomous car driving on a road at 60 km per hour. To fool the car's system that reads road signs, a miscreant can change the appearance of a common symbol: by adding stickers to a stop sign, an attacker can make the car read it as a "slow speed" sign rather than a "stop" sign.

A human would understand the altered sign and would stop, but the autonomous car would only slow down, which could cause an accident. Similarly, for an automatic facial recognition system, an attacker can make minor changes to a face image such that a human would still recognise the actual person, but an algorithm may identify it as a different person altogether.

“To fool the AI, attackers use something called adversarial perturbations. Facial recognition systems are becoming easy prey for such attacks. The research was to protect the integrity of these algorithms. We argued that when the model sees a face image, can we predict whether there has been an attack or whether it’s a real image or synthetic,” said IIIT-Delhi’s Dr Vatsa.
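The article does not detail the IIIT-Delhi method, but the idea of an adversarial perturbation Dr Vatsa describes can be illustrated on a toy model. The sketch below, using a made-up linear scorer (not any real face-recognition system), shows how a perturbation bounded to a tiny change per input element can still shift the model's output sharply:

```python
import numpy as np

# Toy linear "matcher": score = w . x, where a high score means "match".
# An FGSM-style adversarial perturbation nudges each input element in
# the direction that most reduces the score, bounded by epsilon.
rng = np.random.default_rng(0)
w = rng.normal(size=256)           # stand-in for learned model weights
x = 2.0 * w / np.linalg.norm(w)    # an input the model scores highly

def score(v):
    return float(w @ v)

def perturb(v, epsilon=0.05):
    # for a linear model, the gradient of the score w.r.t. the input is w
    return v - epsilon * np.sign(w)

x_adv = perturb(x)
print(score(x), score(x_adv))
print(np.max(np.abs(x_adv - x)))   # no element changed by more than epsilon
```

Each element of the input moves by at most 0.05, an amount a human would barely notice in an image, yet the score drops substantially because the small changes all push in the model's most sensitive direction.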

Muffling the attack

To address such problems, the Indraprastha Institute of Information Technology-Delhi has developed an algorithm to help machine learning-based facial recognition systems identify synthetic images, which could otherwise fool even the most robust systems. The research could have a far-reaching impact on authentication systems, ranging from smartphones to access-controlled doors and even public places.

The results, which the team is likely to make available for use by the wider technology community, could help companies deal with the serious issue of spoofing or fooling face recognition algorithms.

One scenario the research considered: an attacker takes the facial features a given facial recognition model relies on and embeds them in a pair of spectacles, either by painting them on in some fashion or by 3D-printing the features and attaching them to the frames.

The research helps the algorithm detect that such an image is synthetic, something algorithms otherwise risk missing or identifying incorrectly.
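Again, the article does not describe how the IIIT-Delhi algorithm makes this call, but one common family of detection cues can be sketched: adversarial noise tends to add high-frequency energy that natural, locally smooth images lack. The toy detector below (an illustration, not the authors' method) flags an image when a simple statistic on adjacent-pixel differences exceeds a threshold that would, in practice, be calibrated on known-clean images:

```python
import numpy as np

def hf_energy(img):
    # mean squared difference between horizontally adjacent pixels:
    # a crude measure of high-frequency content
    return float(np.mean(np.diff(img, axis=1) ** 2))

clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))    # smooth ramp "image"
rng = np.random.default_rng(2)
adv = clean + rng.normal(scale=0.5, size=clean.shape)  # perturbed copy

THRESHOLD = 0.1  # hypothetical; would be calibrated on clean data
print(hf_energy(clean) < THRESHOLD, hf_energy(adv) > THRESHOLD)
```

Real detectors are far more sophisticated (and real perturbations far subtler), but the principle is the same: decide whether the input looks statistically like a genuine capture before trusting the recognition result.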

“This has many use cases, particularly when we are uploading photographs on social media, on Instagram, for example, people will have a hard time in misusing other people’s images if we have such a defense mechanism,” Dr Vatsa added.

According to digital security company Gemalto, basic facial biometrics require a two- or three-dimensional sensor that "captures" a face. The system then transforms the capture into digital data by applying an algorithm, and this can be compared with an existing image of the person kept in a database.
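The match step of that pipeline can be sketched in a few lines. In the illustration below, the "digital data" is a numeric embedding vector (here just random stand-ins; a real system would compute them with a face-encoding model), and a probe is compared against an enrolled gallery using cosine similarity with a hypothetical acceptance threshold:

```python
import numpy as np

def cosine(a, b):
    # cosine similarity: 1.0 means identical direction, ~0 means unrelated
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
# enrolled gallery: name -> embedding (random stand-ins for this sketch)
gallery = {name: rng.normal(size=128) for name in ["alice", "bob"]}

# a fresh capture of "alice": her embedding plus a little sensor noise
probe = gallery["alice"] + rng.normal(scale=0.1, size=128)

THRESHOLD = 0.8  # hypothetical; tuned to trade off false accepts/rejects
best = max(gallery, key=lambda name: cosine(probe, gallery[name]))
matched = cosine(probe, gallery[best]) >= THRESHOLD
print(best, matched)
```

Identification (who is this, out of everyone enrolled?) scans the whole gallery as above; verification (is this really alice?) compares against a single enrolled embedding instead.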

These automated systems can be used to identify or check the identity of individuals in just a few seconds based on their facial features: spacing of the eyes, bridge of the nose, contour of the lips, ears, chin, etc. They can even do this in the middle of a crowd and within dynamic and unstable environments. 

Facial recognition technology has had a long and turbulent history. While governments, law enforcement agencies, and even private organisations make the case for using the technology for better management and crime detection, the high possibility of misuse or fooling facial recognition systems is a very real threat.

The corporate experience

Nearly all large US-based technology companies, including Facebook, Google, Amazon, Microsoft and Apple, have invested in building facial recognition systems. It is, however, China that is using the technology for surveillance and law enforcement at perhaps the largest scale.

Yitu Technologies is a Chinese company whose facial recognition software is used to screen visitors at ports and airports. Another firm, SenseTime, has worked with everyone from government agencies to retailers, online entertainment firms and healthcare providers.

Apple, whose FaceID feature identifies the user and unlocks the phone, has faced several complaints, from users saying the iPhone does not recognise their face in the morning to Chinese users accusing Apple's AI of being racist after people in the country were able to log into each other's phones.

Amazon's facial recognition system, "Rekognition", has also come under fire, not just from civil liberties activists and lawmakers but also from the company's own shareholders.

In India, there is little to no awareness of who uses facial recognition and for what purpose. There has been talk of integrating India's biometric identification programme, Aadhaar, with a facial recognition system at airports, but there has been no large-scale rollout as yet. There is also opposition to linking the already controversial Aadhaar with facial recognition.

Besides, there is little to no legal provision safeguarding against misuse of the data collected by such systems.

As facial recognition systems become more efficient, hackers and bad actors constantly try to break them, either by fooling the system with masks or other synthetic means, or by attacking the algorithm that processes the images.
