In the wake of Facebook’s problems with Cambridge Analytica, the sense that the technology industry is being held to account by the public – and by its shareholders – for its complex relationships with government is growing. Chris Middleton reports.
UPDATED As international revulsion grew earlier this week over US Immigration and Customs Enforcement’s (ICE) separation of children from parents in detention centres at the Mexican border, Microsoft found itself in the spotlight for its ongoing work with ICE.
Social media users dug up a Microsoft blog post from January, in which it said it was proud of its work with the agency, adding “Azure Government enabl[ed] [ICE] to process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification”.
In response to growing concern at the company’s relationship with the agency, Microsoft has released a statement saying, “We want to be clear: Microsoft is not working with US Immigration and Customs Enforcement or US Customs and Border Protection on any projects related to separating children from their families at the border, and contrary to some speculation, we are not aware of Azure or Azure services being used for this purpose.”
Microsoft added, “Family unification has been a fundamental tenet of American policy and law since the end of World War II. As a company, Microsoft has worked for over 20 years to combine technology with the rule of law to ensure that children who are refugees and immigrants can remain with their parents. We need to continue to build on this noble tradition rather than change course now.
“We urge the administration to change its policy and Congress to pass legislation ensuring children are no longer separated from their families.”
However, Microsoft has not commented on whether its contract with the immigration agency will remain active in the wake of international outrage, or on exactly how its facial recognition systems are aiding ICE in its work.
Apple CEO Tim Cook has also spoken out against the new immigration policy. “It’s heartbreaking to see the images and hear the sounds of the kids,” he told the Irish Times. “Kids are the most vulnerable people in any society. I think that what’s happening is inhumane, it needs to stop.”
Meanwhile, Accenture is currently feeling the wrath of social media, as hundreds of tweets a day call out the company over its association with ICE.
Facial recognition systems and their potential misuse or inherent bias – many are designed and trained by closed teams of predominantly white men, raising the risk of misidentifying ethnic minority citizens – are at the heart of another controversy, this time involving Amazon.
On 18 June, 17 shareholders wrote to the retail and Web services giant, urging the company to stop selling its Rekognition system to government agencies.
The letter said, “The undersigned Amazon shareholders are concerned such government surveillance infrastructure technology may not only pose a privacy threat to customers and other stakeholders across the country, but may also raise substantial risks for our company, negatively impacting our company’s stock valuation and increasing financial risk for shareholders.”
The text made clear that shareholders are concerned that possible biases in the technology may contribute to the unfair targeting of ethnic minority citizens or immigrants by law enforcement agencies.
The use of AI, facial recognition systems, and machine learning to replicate systemic biases – for example, against ethnic and other minority groups in law enforcement – has been frequently cited as a concern, both by legislators and privacy rights groups. (Internet of Business editor Chris Middleton has produced an extensive independent report on this problem, which is available here.)
In May, Internet of Business reported that the American Civil Liberties Union (ACLU) had challenged Amazon about two police forces’ (Orlando, FL, and Washington County, Oregon) use of Rekognition, a real-time system, in body cameras and local surveillance.
It and other civil liberties advocates demanded that Amazon stop the sale of a technology that enables live citizen surveillance and may discriminate against minority groups, because of poor training data at the design stage.
The ACLU said, “By automating mass surveillance, facial recognition systems like Rekognition threaten this [sic] freedom, posing a particular threat to communities already unjustly targeted in the current political climate. People should be free to walk down the street without being watched by the government.”
Two congressmen, Keith Ellison (D-MN) and Emanuel Cleaver (D-MO), also wrote to Amazon chief Jeff Bezos demanding an explanation of the company’s sale of real-time systems to law enforcement agencies.
You can read the full text of their letter – which makes detailed points about facial recognition systems unfairly impacting the lives of ethnic minority citizens – here. The congressmen asked for a response from Bezos by today, 20 June. Internet of Business will update this report if and when Amazon responds.
A chorus of disapproval
Concerns about these issues have been aired on both sides of the Atlantic.
In May, the UK government was advised by MPs to hold off on further deployments of real-time facial recognition systems in police forces until privacy and accuracy concerns about the technology had been resolved.
In a report into the government’s biometric strategy and forensic services by Parliament’s Science and Technology Committee, MPs quoted findings from privacy advocacy organisation Big Brother Watch, which revealed that the Metropolitan Police had achieved less than two percent accuracy with its live facial recognition systems.
The report said, “In the UK, Big Brother Watch recently reported their survey of police forces, which showed that the Metropolitan Police had a less than two percent accuracy in its automated facial recognition ‘matches’, and that only two people were correctly identified and 102 were incorrectly ‘matched’. The force had made no arrests using the technology.”
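The “less than two percent” figure follows directly from the numbers quoted: a sketch of the arithmetic, using the two correct and 102 incorrect matches reported by Big Brother Watch.

```python
# Big Brother Watch figures for the Metropolitan Police's live
# facial recognition trials, as quoted in the Committee's report.
correct_matches = 2
incorrect_matches = 102
total_matches = correct_matches + incorrect_matches  # 104 flagged 'matches'

# Accuracy here means precision: the share of flagged 'matches'
# that actually identified the right person.
precision = correct_matches / total_matches

print(f"{precision:.1%} of matches were correct")  # 1.9% of matches were correct
```

In other words, roughly 98 percent of the people flagged by the system were false positives, which is why the Committee treated the technology as unfit for operational deployment.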
“There are serious concerns over its current use, including its reliability and its potential for discriminatory bias,” continued the report, referring to MIT research showing that facial recognition systems trained predominantly on white faces are far less accurate at identifying people from ethnic minorities.
The Committee suggested that operational control over facial recognition systems should be stripped from the police, and that the use of the technology should be debated and voted on by the House of Commons before any further action is taken.
Back in the US, yet another American technology giant has been damaged in the eyes of its customers, shareholders, and employees by its current relationship with the government. Earlier this month, Google bowed to pressure from its employees and announced that it would be exiting the Project Maven contract with the US Department of Defense next year.
The 18-month contract to apply AI to the analysis of drone footage expires in 2019 and won’t be renewed, Diane Greene, CEO of Google Cloud, told employees on 1 June.
Image recognition was once again in the frame. Project Maven seeks to use machine learning and computer vision techniques to improve the gathering of battlefield intelligence from aerial imagery, to help armed forces recognise “objects of interest”.
Last month, Internet of Business reported that a number of employees had resigned from Google in the wake of the deal, while thousands of others signed an internal petition in a successful effort to persuade CEO Sundar Pichai to withdraw the company from “the business of war”.
In April, the Tech Workers Coalition launched its own petition demanding that Google cancel the Project Maven contract, insisting that other technology providers should also avoid working with the military. “We can no longer ignore our industry’s and our technologies’ harmful biases, large-scale breaches of trust, and lack of ethical safeguards,” the petition read. “These are life and death stakes.”
The world of academia also expressed its concerns over Google’s work with the Pentagon. Last month, over 90 academics in the spheres of ethics, AI, and computer science published an open letter asking Google to back an international treaty prohibiting autonomous weapons systems, and cease work with the US military.
Google has since published a new code of ethical conduct for future AI development. Meanwhile, Pentagon cloud services contracts worth up to $10 billion are currently up for grabs, with Google, Amazon, and Microsoft all thought to be in the running.
Plus: NSA heads into the cloud
In related news, the US National Security Agency (NSA) has moved most of its data resources into a classified cloud, known as the Intelligence Community GovCloud, allowing its data analysts to rapidly ‘connect the dots’ across all of its sources. Four years ago, the CIA awarded a $600 million contract to AWS to develop a commercial cloud environment for US intelligence agencies.
Internet of Business says
That many US technology companies recognise they now operate in a polarised world should not be in doubt: government demands and multibillion-dollar deals pull in one direction, while something equally persuasive and valuable – customer sentiment on social media – pulls in the other.
For example, the arrival of GDPR in May saw several American companies – Microsoft, Apple, Salesforce.com, Box, and SugarCRM among them – talk up the need to protect private data with similar voluntary, or mandatory, rules in the US.
Meanwhile in April, over 30 technology companies, including ABB, ARM, Cisco, Dell, Facebook, HP, Microsoft, Nokia, Oracle, SAP, and Trend Micro, signed a new accord, promising to protect all customers – both citizens and businesses – from government or state-sponsored cyber attack, regardless of nationality, location, or motive.
The stated aim of the accord was to prevent political interference online.
One thing is clear in this new world: as Google found, any disparity between strategic mission and operational action will be flagged by people throughout the world, who can use tech companies’ own social platforms to hold them to account. And in the absence of a social platform, there is always pressure from shareholders and their investments.