San Francisco bans facial recognition technology


San Francisco has become the first city to ban government agencies from using facial recognition technology.

Eight members of the city's board of supervisors voted to approve the proposal, with one against. It will prevent government agencies, such as law enforcement, from using the technology.

The Stop Secret Surveillance Ordinance, proposed by supervisor Aaron Peskin in January, also requires departments in the city to seek approval from the board of supervisors before using or buying surveillance technology. Other cities have approved similar transparency measures.

In a statement seen by The Verge, Peskin said it was "an ordinance about having accountability around surveillance technology". Peskin added that it was not an "anti-technology policy" but stated that facial recognition is "uniquely dangerous and oppressive".

San Francisco's ban comes as a broader debate about the ethical use of facial recognition rages on. The technology can be used to rapidly identify individuals for security purposes, but there have been a number of cases where the data collected from the technology has been plagued by bias and inaccuracies.
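To illustrate how such identification typically works under the hood, the sketch below uses the open-source face_recognition Python library, chosen here purely for illustration; the article does not say which systems any agency actually uses. Each face is reduced to a numeric encoding, and a new face is matched by measuring its distance to encodings already on file.

```python
# Illustrative sketch only: the open-source face_recognition library,
# not any system named in this article. File names are placeholders.
import face_recognition

# Encode a face that is already on file (e.g. a watchlist photo)
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face found in a new image (e.g. a CCTV frame)
unknown_image = face_recognition.load_image_file("camera_frame.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)

for encoding in unknown_encodings:
    # compare_faces reports a match when the encodings fall within a distance threshold
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"Match: {match} (distance {distance:.2f})")
```

The distance threshold is where much of the accuracy debate lives: loosen it and the system flags more people incorrectly, tighten it and it misses genuine matches.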

"The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring," the ordinance states.

In January, researchers at MIT found that Amazon's Rekognition facial recognition technology wasn't identifying race or gender accurately or fairly. A report from the institute said tests it had conducted on the technology found that Rekognition mistakenly identified some pictures of women as men, and that this was more prevalent with pictures of darker-skinned women.
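Findings of this kind boil down to comparing error rates across demographic groups. A minimal sketch of that sort of audit, using entirely invented results rather than the MIT researchers' data, might look like this:

```python
# Minimal sketch of a per-group error-rate audit. The records below are
# invented for illustration and are not the MIT researchers' data.
from collections import defaultdict

# (predicted gender, actual gender, skin-tone group) for each test image
results = [
    ("male", "female", "darker-skinned"),
    ("female", "female", "lighter-skinned"),
    ("male", "female", "darker-skinned"),
    ("male", "male", "darker-skinned"),
    ("female", "female", "lighter-skinned"),
    ("female", "female", "darker-skinned"),
]

errors, totals = defaultdict(int), defaultdict(int)
for predicted, actual, group in results:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.0%} error rate across {total} images")
```

A system can report high overall accuracy while still performing far worse for one group, which is exactly the kind of disparity the researchers flagged.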

Where companies get the data to train facial recognition models has also proved controversial. In March, it was revealed that IBM had used Flickr images to train its facial recognition tech without letting the people involved know.

The company is said to have used almost a million pictures from the Flickr photo-sharing site to train its platform. However, the people in the pictures weren't told that the company would use their images to determine gender, race or other identifiable features, such as eye colour, hair colour or whether someone was wearing glasses.

In the UK, the Metropolitan Police have come under heavy scrutiny for using facial recognition, due to its disastrous success rate. The force revealed that the scheme, intended to help identify and apprehend violent criminals, resulted in zero arrests.

Arguing the case for the technology, Microsoft, which offers facial recognition tools, has called for some form of regulation, but exactly how the technology should be regulated remains contested.

Bobby Hellard

Bobby Hellard is ITPro's Reviews Editor and has worked on CloudPro and ChannelPro since 2018. In his time at ITPro, Bobby has covered stories on all the major technology companies, such as Apple, Microsoft, Amazon and Facebook, and regularly attends industry-leading events such as AWS re:Invent and Google Cloud Next.

Bobby mainly covers hardware reviews, but you will also recognize him as the face of many of our video reviews of laptops and smartphones.