Microsoft has called on the US government to step in and regulate the use of facial recognition technologies. Chris Middleton explains why.
“All tools can be used for good or ill. Even a broom can be used to sweep the floor or hit someone over the head,” wrote Microsoft president Brad Smith in a blog post on Friday night (13 July).
But Smith wasn’t talking about cleaning house – at least, not in the traditional sense. He was referring to facial recognition systems, and the potential for them to be both used and abused by private companies and public authorities.
“Facial recognition technology raises issues that go to the heart of fundamental human rights protections, like privacy and freedom of expression,” he continued. “These issues heighten responsibility for tech companies that create these products.”
The solution to countering the technology’s potential for broad-scale abuse is what Microsoft calls “thoughtful government regulation” rather than vendor self-policing, along with the development of new norms for acceptable usage.
Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission”.
The new context
So what prompted Microsoft to ask the US government to police the technology sector – and, indeed, the government’s own use of its products? Or, to look at it another way, to seek a bipartisan consensus and remedy?
A number of news stories have emerged in recent months about the growing use of facial recognition in law enforcement – in particular, the increased risk of misidentification and bias when the technology is used on ethnic minority citizens.
“Researchers across the tech sector are working overtime to address these challenges and significant progress is being made,” wrote Smith. “But as important research has demonstrated, deficiencies remain. The relative immaturity of the technology is making the broader public questions even more pressing.”
Internet of Business recently reported on the use of Amazon’s real-time Rekognition system by two US police forces, and on the controversy over UK police forces’ own use of the technology, which has yielded a two percent success rate and zero arrests.
But these issues are only part of the problem, said Microsoft: the rush to use the technology goes to the heart of what kind of world we’re trying to create. The technology “no longer stands apart from society”, wrote Smith. “It is becoming deeply infused in our personal and professional lives.”
Smith acknowledged that some emerging uses of facial recognition are positive and transformative, but the risk of constant citizen surveillance, political monitoring, and even the sharing of customer data by retailers without consent should be forcing us to ask: what role do we want these technologies to play in our society?
Hoist by its own petard
But what of Microsoft itself, given that it is arguably as much a part of the problem as the solution?
The recent controversy over Microsoft’s relationship with US Immigration and Customs Enforcement (ICE) wasn’t solely about worldwide revulsion at children being separated from their parents at the Mexican border; it was also driven by fears about the possible discriminatory use of facial recognition by ICE in the future.
Just as Google’s recent commitment to ethical AI development was triggered by employees rebelling against its Project Maven deal with the Pentagon, so it seems that Microsoft’s public statement about facial recognition has been spurred by its own ICE-related backlash.
“The contract in question isn’t being used for facial recognition at all,” wrote Smith in a notable, defensive-sounding aside. “Nor has Microsoft worked with the US government on any projects related to separating children from their families at the border, a practice to which we’ve strongly objected.
“The work under the contract instead is supporting legacy email, calendar, messaging, and document management workloads. This type of IT work goes on in every government agency in the United States, and for that matter virtually every government, business and nonprofit institution in the world.
“Some nonetheless suggested that Microsoft cancel the contract and cease all work with ICE.”
Apparently, Microsoft has no intention of doing that – or at least, Smith dropped the subject at that point, and opted instead to draw some of Microsoft’s main rivals into the wider controversy:
“[These issues] surfaced earlier this year at Google and other tech companies. In recent weeks, a group of Amazon employees has objected to its contract with ICE, while reiterating concerns raised by the American Civil Liberties Union (ACLU) about law enforcement use of facial recognition technology. And Salesforce employees have raised the same issues related to immigration authorities and these agencies’ use of their products.
“Demands increasingly are surfacing for tech companies to limit the way government agencies use facial recognition and other technology.”
Of course, vendors could simply say no to making money from deployments that they believe run counter to their own stated values. But with $10 billion in Pentagon cloud contracts currently up for grabs, will any of them walk away?
But the issues themselves are not going away, said Smith, adding that this makes it “even more important that we use this moment to get the direction right”.
Internet of Business says
Smith believes that government regulation would be more effective than vendor self-policing, because the competitive dynamics between American tech companies are likely to enable governments to “keep purchasing and using new technology in ways the public may find unacceptable”.
An interesting argument: that the US government must legislate against its own use of technology in order to resist the temptation to use it badly. But that’s not to say that Smith is wrong about the need for discussion at every level in society, given the risk of abuse, misuse, or error.
“A world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards,” he added. However, the need for government leadership doesn’t absolve technology companies of their own ethical responsibilities.
Smith acknowledged this and proposed a four-point action plan for the future. First, he said, it’s incumbent on everyone in the tech sector to continue the work needed to reduce the risk of bias in facial recognition technology.
Second, a “principled and transparent approach” is essential in the development and application of facial recognition technology.
Third, the industry should consider moving more slowly when it comes to deploying the full range of facial recognition technologies.
“‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade,” explained Smith. “But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”
And fourth, government officials, civil liberties organisations, and the public can only appreciate the full implications of new technology trends if creators “do a good job of sharing information with them”, he said.
“It’s incumbent on us to step forward to share this information. As we do so, we’re committed to serving as a voice for the ethical use of facial recognition and other new technologies, both in the United States and around the world.”
‘Around the world’ includes China, where the government has stated its commitment to using facial recognition and other technologies in a compulsory social ratings system that will force citizens to conform to state-sanctioned codes of good behaviour, and ostracise anyone who steps out of line.
By asking the US government to regulate its own use of the technology, Microsoft is effectively saying that the West needs to make a stand on these issues and forge a different set of values. But will Microsoft cease its work in China in protest at Beijing’s actions? Unlikely.