Backers of the legislation argued that using software and cameras to positively identify people is, as city councillor Aaron Peskin put it, "not ready for prime time." All but one of the nine members of San Francisco's board of supervisors endorsed the legislation, which will be voted on again next week in a procedural step not expected to change the outcome.
Facial recognition could "exacerbate racial injustice and threaten our ability to live free of continuous government monitoring," the legislation added.
The ban was part of broader legislation setting use and auditing policy for surveillance systems, creating high hurdles and requiring board approval before any city agency can adopt such technology.
"It shall be unlawful for any department to obtain, retain, access, or use any Face Recognition Technology or any information obtained from Face Recognition Technology," read a paragraph tucked into the lengthy document.
The ban did not include airports or other federally regulated facilities. San Francisco is known as "the tech epicenter of the world," and the surrounding Bay Area is home to giants such as Facebook, Twitter, Uber and Google parent Alphabet.
A similar ban is being considered across the bay in the city of Oakland. Worries about the technology include the danger that innocent people will be misidentified as wrongdoers and that the systems will infringe on privacy in everyday life.
But supporters of the technology argue that facial recognition systems can help police fight crime and keep streets safer.
Stop Crime SF, a local group, said facial recognition "can help locate missing children and people with dementia, and fight sex trafficking".
"Technology will improve and it could be a useful tool for public safety when used responsibly and with greater accuracy. We should keep the door open for that possibility," it said in a statement.
The technology has been credited with helping police capture dangerous criminals, but also criticized for mistaken identifications.
Facial recognition "can be used in a passive way that doesn't require the knowledge, consent, or participation of the subject," the American Civil Liberties Union warned.
"The biggest danger is that this technology will be used for general, suspicionless surveillance systems."

Chinese authorities are using a vast system of facial recognition technology to track the country's Uighur Muslim minority, according to a recent story in the New York Times.
Beijing has already attracted widespread criticism for its treatment of Uighurs in the northwest region of Xinjiang, where up to one million members of mostly Muslim Turkic-speaking minority groups are held in internment camps, according to estimates cited by a UN panel.
But according to the Times article, facial recognition technology, integrated into China's huge networks of surveillance cameras, has been programmed to look exclusively for Uighurs based on their appearance and keep records of their movements across China.
It is thought to be the first known example of a government intentionally using AI for racial profiling.
(This story has not been edited by Business Standard staff and is auto-generated from a syndicated feed.)