One afternoon in early 2017, at Facebook’s headquarters in Menlo Park, California, an engineer named Tommer Leyvand sat in a conference room with a smartphone perched on the brim of his baseball cap. Rubber bands helped anchor it in place, with the camera facing out. The absurd hat-phone, a particularly uncool version of the future, contained a secret tool known only to a small group of employees. What it could do was remarkable.
The handful of men in the room were laughing and speaking over one another in excitement, as captured in a video taken that day, until one of them asked for quiet. The room went silent; the demo was underway.
Leyvand turned toward a man across the table from him. The smartphone’s camera lens — round, black, unblinking — hovered above Leyvand’s forehead like a Cyclops eye as it took in the face before it. Two seconds later, a robotic female voice declared, “Zach Howard.”
“That’s me,” confirmed Howard, a mechanical engineer.
An employee who saw the tech demonstration thought it was supposed to be a joke. But when the phone started correctly calling out names, he found it creepy, like something out of a dystopian movie.
The person-identifying hat-phone would be a godsend for someone with vision problems or face blindness, but it was risky. Facebook’s previous deployment of facial recognition technology, to help people tag friends in photos, had caused an outcry from privacy advocates and led to a class-action lawsuit in Illinois in 2015 that ultimately cost the company $650 million.
With technology like that on Leyvand’s head, Facebook could prevent users from ever forgetting a colleague’s name, give a reminder at a cocktail party that an acquaintance had kids to ask about or help find someone at a crowded conference. However, six years later, the company now known as Meta has not released a version of that product, and Leyvand has since departed for Apple to work on its Vision Pro augmented reality headset.
In recent years, the startups Clearview AI and PimEyes have pushed the boundaries of what the public thought was possible by releasing face search engines paired with millions of photos from the public web (PimEyes) or even billions (Clearview). With these tools, available to the police in the case of Clearview AI and the public at large in the case of PimEyes, a snapshot of someone can be used to find other online photos where that face appears, potentially revealing a name, social media profiles or information a person would never want to be linked to publicly, such as risqué photos.
What these startups had done wasn’t a technological breakthrough; it was an ethical one. Tech giants had developed the ability to recognise unknown people’s faces years earlier, but had chosen to hold the technology back, deciding that the most extreme version — putting a name to a stranger’s face — was too dangerous to make widely available.
Now that the taboo has been broken, facial recognition technology could become ubiquitous. Currently used by the police to solve crimes, authoritarian governments to monitor their citizens and businesses to keep out their enemies, it may soon be a tool in all our hands, an app on our phone — or in augmented reality glasses — that would usher in a world with no strangers.
‘We decided to stop’
As early as 2011, a Google engineer revealed he had been working on a tool to Google someone’s face and bring up other online photos of them. Months later, Google’s chairman, Eric Schmidt, said in an onstage interview that Google “built that technology, and we withheld it.” “As far as I know, it’s the only technology that Google built and, after looking at it, we decided to stop,” Schmidt said.
Advertently or not, the tech giants also helped hold the technology back from general circulation by snapping up the most advanced startups that offered it.