UPDATED On both sides of the Atlantic, concerns are rising about a covert real-time surveillance culture emerging among police officers – both state-employed and private. Amazon is one of the companies at the centre of the controversy. Sooraj Shah and Chris Middleton present an extended analysis.
The UK government has been advised by MPs to hold off on further deployments of real-time facial recognition systems in police forces until privacy and accuracy concerns about the technology have been resolved.
It has also been told that its existing programme of retaining images of innocent people is “unacceptable”.
In a report into the government’s biometric strategy and forensic services by Parliament’s Science and Technology Committee, MPs quoted findings from privacy advocacy organisation Big Brother Watch, which revealed that the Metropolitan Police had achieved less than two percent accuracy with its live facial recognition technology.
The report says, “In the UK, Big Brother Watch recently reported their survey of police forces, which showed that the Metropolitan Police had a less than two percent accuracy in its automated facial recognition ‘matches’ and that only two people were correctly identified and 102 were incorrectly ‘matched’. The force had made no arrests using the technology.”
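The "less than two percent accuracy" figure follows directly from the numbers Big Brother Watch reported. A minimal sketch of the arithmetic (the variable names are illustrative, not from the report):

```python
# Reproducing the match-precision figure implied by the Big Brother Watch
# survey quoted above: 2 correct identifications, 102 incorrect 'matches'.
true_matches = 2
false_matches = 102
total_alerts = true_matches + false_matches

# Precision: the fraction of the system's alerts that were actually correct.
precision = true_matches / total_alerts
print(f"Match precision: {precision:.1%}")  # → 1.9%
```

In other words, roughly 98 out of every 100 people flagged by the Metropolitan Police's system were flagged wrongly.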
“There are serious concerns over its current use, including its reliability and its potential for discriminatory bias,” adds the report, referring to MIT research showing that facial recognition systems trained predominantly on white faces are far less accurate when identifying people from ethnic minorities.
The use of AI, facial recognition, and machine learning systems to replicate systemic biases – deliberately or unknowingly – against ethnic and other minority groups in law enforcement has frequently been cited as a major concern, both by members of Parliament and by privacy rights groups. Internet of Business editor Chris Middleton has produced an extensive report on this problem, which is available here.
The Committee recommended that such facial recognition technology should “not generally be deployed, beyond the current pilots” until questions about the technology’s effectiveness and the risk of potential bias have been answered.
The Committee suggested that operational control over facial recognition systems should be taken away from the police, and that the use of the technology should be debated and voted on by the House of Commons before any further action is taken.
MPs’ fears may have been heightened by the growing use of the technology in China as part of a compulsory (from 2020) national social ratings and surveillance scheme, which has the stated aim of punishing citizens for non-conformity.
Facial recognition systems from Face++ and other Chinese providers are core components in the programme, and are increasingly being used in the country to pay for goods, as well as to authenticate identity and provide security. As a result, these systems are enabling real-time state surveillance over every aspect of citizens’ lives.
In Beijing, the technology has a more overt police application too: facial recognition systems linked to smart glasses are being trialled in the city, helping officers to identify suspects in crowds.
An unofficial database?
In the UK, the Committee called the government’s current approach to not deleting images of innocent people “unacceptable”, and questioned the legality of the police’s ‘deletion on application’ – rather than automated – process.
The Home Office has alluded to current weaknesses in IT systems and its concerns about the potential cost of a comprehensive deletion programme.
The Committee said that many individuals may simply be unaware that they can apply to have their images deleted from police systems.
“In the four years since the government promised to produce a biometrics strategy, the Home Office and police have developed a process for collecting, retaining, and reusing facial images that some have called unlawful,” said Norman Lamb MP, chair of the Committee.
“Large-scale retention of the facial images of innocent people amounts to a significant infringement of people’s liberty without any national framework in place and without a public debate about the case for it.
“The government must urgently set out the legal basis for its current on-request process of removing images of innocent people. It is unjustifiable to treat facial recognition data differently to DNA or fingerprint data.
“It should urgently review the IT systems being developed and ensure that they will be able to deliver an automated deletion system, or else move now to introduce comprehensive manual deletion that is fit for purpose,” he said.
Private security forces
There is another emerging context for MPs’ fears over the technology’s application: the rise of private police forces in the UK – not just the outsourcing of routine police functions to companies such as G4S, but also the establishment of a new company, My Local Bobby (MLB), among other local security services.
Run by former police officers, the private service has been trialled in three of London’s wealthiest areas – Belgravia, Mayfair, and Kensington – over the past two years and is now being rolled out nationwide.
The company’s strategic focus on wealthy clients, who pay a monthly fee for uniformed officers to patrol their neighbourhoods, has alarmed civil liberties campaigners, who are concerned that a private national police service would be unaccountable to the general public.
Their worries peaked earlier this month when MLB officers began tweeting about moving homeless people, beggars, and drunks away from their clients’ properties. The company’s Twitter account has since been switched to private.
As private police may also begin to use real-time facial recognition and surveillance systems – like their state-employed counterparts – the stage could be set for the increasingly unaccountable deployment of surveillance technologies by private officers who have the power of citizens’ arrest and of private prosecution.
A society in which wealthy citizens can, effectively, pay to have people removed from their neighbourhoods would hardly be a unified and cohesive one.
There are signs, too, that retail and Web services giant Amazon sees a big financial opportunity in these types of neighbourhood application. Since acquiring smart doorbell maker Ring for $1 billion earlier this year, the company has launched its new Neighbors security app.
Via Neighbors, users can access local crime and safety information, view videos shared by other Ring security cam owners, and join what is, effectively, a social network for people who are worried about crime or strangers in their midst. The app is also designed to allow police departments and law enforcement agencies to share data.
ACLU, congressmen challenge Amazon
While British MPs debate the issues, similar concerns about the spread of real-time surveillance are being aired in the US.
The American Civil Liberties Union (ACLU) has challenged Amazon about two police forces’ (Orlando, FL, and Washington County, Oregon) use of its Rekognition real-time facial recognition system in body cameras and local surveillance.
It and other civil liberties advocates are demanding that Amazon stop selling a technology that enables live citizen surveillance and often discriminates against minority groups because of poor training data at the design stage.
The ACLU said, “By automating mass surveillance, facial recognition systems like Rekognition threaten this [sic] freedom, posing a particular threat to communities already unjustly targeted in the current political climate. People should be free to walk down the street without being watched by the government.”
In recent weeks, multiple examples have emerged in the US of people from black and other minority groups being reported to police for doing legal, innocuous things, such as looking at properties to buy, dozing off in public places, or cooking food in areas set aside for barbecues.
While black Americans have sadly become accustomed to racial profiling over decades, and while racial minorities in the US make up nearly 63 percent of unarmed individuals shot by police, despite representing less than 38 percent of the population, there is a palpable sense that these types of incident are increasing in the current political climate.
Two congressmen, Keith Ellison (D-MN) and Emanuel Cleaver (D-MO), have also written to Amazon chief Jeff Bezos demanding an explanation of the company’s sale of real-time facial recognition systems to law enforcement agencies.
You can read the full text of their letter – which makes detailed points about facial recognition systems unfairly impacting the lives of ethnic minority citizens – here. The congressmen have asked for a response from Bezos by 20 June.
Their concerns were given added spice today by news from Seattle that an Amazon Echo smart speaker recorded a family’s private conversation and emailed it to a random contact of theirs without their knowledge. The recipient immediately got in touch with the family and warned them to disconnect the Alexa-powered device.
Responding to this development today, Amazon said, “Echo woke up due to a word in background conversation sounding like ‘Alexa.’ Then, the subsequent conversation was heard as a ‘Send message’ request. At which point, Alexa said out loud ‘To whom?’ At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, ‘[contact name], right?’ Alexa then interpreted background conversation as ‘right’.
“As unlikely as this string of events is, we are evaluating options to make this case even less likely,” added the company.
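The chain of misrecognitions Amazon describes can be modelled as a simple dialogue state machine, with each misheard fragment of background speech advancing the state one step. This is only an illustrative sketch of the sequence – the function and event names below are invented, and this is not Alexa's actual implementation:

```python
# Illustrative model of the misfire chain Amazon describes: each element of
# `events` stands for one speech-recognition result, and each one advances a
# simple dialogue state machine. Names are hypothetical, not Alexa's real API.

def dialogue(events):
    state = "idle"
    contact = None
    for heard in events:
        if state == "idle" and heard == "wake_word":            # noise sounded like 'Alexa'
            state = "listening"
        elif state == "listening" and heard == "send_message":  # noise heard as 'Send message'
            state = "awaiting_contact"                          # device asks 'To whom?'
        elif state == "awaiting_contact":
            contact = heard                                     # noise matched a contact name
            state = "confirming"                                # device asks '<name>, right?'
        elif state == "confirming" and heard == "yes":          # noise heard as 'right'
            return f"message sent to {contact}"
    return "no message sent"

# Four consecutive misrecognitions are enough to send a private recording:
print(dialogue(["wake_word", "send_message", "some_contact", "yes"]))
```

The point of the sketch is that no single step is wildly improbable on its own; the privacy failure comes from a run of four plausible misrecognitions, each of which the device treated as confirmation of the last.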
However, the reports should be ringing alarm bells both inside and outside Amazon, as it recently began rolling out Alexa updates in the US which allow the digital assistant to understand context, learn user preferences, and sustain conversations without users repeating the ‘Alexa’ trigger word.
These conversational updates, designed to make talking to Alexa-powered devices more natural, intuitive, and personal for users, suggest that such intrusive incidents are likely to snowball, not become “even less likely”.
Earlier this month, researchers found that an Alexa-powered device could easily be turned into a covert eavesdropping tool. Since then, of course, one Echo became just that – without even being hacked.
Internet of Business says
In the UK, the implication of the Parliamentary Committee’s comments would seem to be that it is worried that the police are using facial recognition systems to build an unofficial database of anyone questioned by officers, regardless of whether they have committed a crime.
With such low success rates from the system and fears over its misuse or bias, this certainly appears to be an ill-advised, brute-force, and socially divisive application of a technology – one that falls into the ‘because we can’ category rather than the ‘because we should’.
It may also be that the Home Office is trying to create a legal fudge by implicitly linking the system to the ‘right to be forgotten’, the ability for citizens to request the permanent deletion of data under the terms of GDPR/the UK’s Data Protection Act. If true, that would be a cynical move indeed, as it would recast this beneficial provision of GDPR as a right for the police to retain data on innocent civilians unless the subject requests deletion.
- Read more: Amazon now world’s second most valuable company after Apple
- Read more: DHL US trials robots, AI, AR & crowdsourcing to beat Amazon
- Read more: Amazon takes on UPS and FedEx – and catches Theresa May’s eye
• UK Parliament news story, Sooraj Shah; further news and analysis, Chris Middleton.