SAP launches ethical A.I. guidelines, expert advisory panel

Enterprise software giant SAP has announced a set of ethical guiding principles for artificial intelligence (AI) development, along with the creation of an external advisory panel on AI ethics – making it the first European technology company to do so.

The panel, comprising experts from academia, politics, and industry, will ensure the adoption of SAP’s new principles and develop them in collaboration with the AI steering committee at the company – a panel of executives from its development, strategy, and human resources departments.

Five members of the advisory panel have already been confirmed: Dr. theol. Peter Dabrock, chair of Systematic Theology (Ethics) at the University of Erlangen-Nuernberg; Dr. Henning Kagermann, chairman of the acatech board of trustees and acatech senator; Susan Liautaud, lecturer in Public Policy and Law at Stanford University and founder/managing director of Susan Liautaud & Associates Ltd (SLAL); Dr. Helen Nissenbaum, professor at Cornell Tech Information Science; and Nicholas Wright, consultant at Intelligent Biology, affiliated scholar at the Pellegrino Center for Clinical Bioethics at Georgetown University Medical Center, and honorary research associate at the Institute of Cognitive Neuroscience, University College London.

SAP plans to add more members in the coming months.

“AI offers immense opportunities, but it also raises unprecedented and often unpredictable ethics challenges for society and humanity,” said Liautaud. “The AI ethics advisory panel allows us to ensure an ethical AI, which serves humanity and benefits society.”

SAP’s new statement

So what are the new ethical principles that SAP has drawn up? First, the company says that it is “driven by its values”.

“We recognise that, as with any technology, there is scope for AI to be used in ways that are not aligned with these guiding principles and the operational guidelines we are developing,” it said.

“In developing AI software, we will remain true to our human rights commitment statement, the UN guiding principles on business and human rights, laws, and widely accepted international norms.

“Wherever necessary, our AI Ethics Steering Committee will serve to advise our teams on how specific use cases are affected by these guiding principles. Where there is a conflict with our principles, we will endeavour to prevent the inappropriate use of our technology.”

Next, the company said it “designs for people”.

“We strive to create AI software systems that are inclusive and that seek to empower and augment the talents of our diverse usership [sic],” it continued.

“By providing human-centered user experiences through augmented and intuitive technologies, we leverage AI to support people in maximising their potential. To achieve this, we design our systems closely with users in a collaborative, multidisciplinary, and demographically diverse environment.”

SAP added that it enables “business beyond bias”.

“Bias can negatively impact AI software and, in turn, individuals and our customers,” explained the company. “This is particularly the case when there is a risk of causing discrimination or of unjustly impacting underrepresented groups.

“We therefore require our technical teams to gain a deep understanding of the business problems they are trying to solve and the data quality this demands.

“We seek to increase the diversity and inter-disciplinarity of our teams, and we are investigating new technical methods for mitigating biases. We are also deeply committed to supporting our customers in building even more diverse businesses by leveraging AI to build products that help move business beyond bias.”

The trust issue

SAP “strives for trust and integrity”, it said.

“Our systems are held to specific standards in accordance with their level of technical ability and intended usage,” continued the company. “Their input, capabilities, intended purpose, and limitations will be communicated clearly to our customers, and we provide means for oversight and control by customers and users.

“They are, and will always remain, in control of the deployment of our products. We actively support industry collaboration and will conduct research to further system transparency.

“We operate with integrity through our code of business conduct, our internal AI Ethics Steering Committee, and our external AI Ethics Advisory Panel.”

Privacy and security

The company also said it strives to uphold quality and safety standards. “Data protection and privacy are a corporate requirement and at the core of every product and service,” it continued.

“We communicate clearly how, why, where, and when customer and anonymised user data is used in our AI software.

“This commitment to data protection and privacy is reflected in our commitment to all applicable regulatory requirements, as well as through the research we conduct in partnership with leading academic institutions to develop the next generation of privacy enhancing methodologies and technologies.”

Finally, SAP “engages with the wider societal challenges of AI”, it said.

“There are numerous emerging challenges that require a much broader discourse across industries, disciplines, borders, and cultural, philosophical, and religious traditions,” explained SAP.

“These include, but are not limited to, questions concerning: Economic impact, such as how industry and society can collaborate to prepare students and workers for an AI economy, and how society may need to adapt means of economic redistribution, social safety, and economic development.

“Social impact, such as the value and meaning of work for people and the potential role of AI software as social companions and caretakers.

“And normative questions around how AI should confront ethical dilemmas and what applications of AI, specifically with regard to security and safety, should be considered permissible.”

So what was behind the move?

Motivating factors

With its new guidelines, the external panel, and its internal committee, SAP said it aims to ensure that the AI capabilities in its Leonardo Machine Learning portfolio are “used to maintain integrity and trust in all solutions”.

“SAP considers the ethical use of data a core value,” said Luka Mucic, chief financial officer and member of the executive board of SAP SE. “We want to create software that enables the intelligent enterprise and actually improves people’s lives. Such principles will serve as the basis to make AI a technology that augments human talent.”

The new guiding principles also contribute to Europe’s debate on AI, explained SAP. The European Commission has appointed Markus Noga, senior vice president of Machine Learning at SAP, to its high-level expert group on AI.

The group was created to design a European AI strategy and propose ethical guidelines relating to fairness, safety, transparency, the future of work, and democracy by early 2019.

Internet of Business says

SAP’s welcome move is further evidence of growing awareness, in the technology sector and among policymakers, that the rapid uptake of AI is both a massive opportunity and a cause for global concern: left unchecked, it could automate societal problems via the biases inherent in human cultures, in historic data, and even in business decision-making.

Earlier this week, IBM rolled out a suite of tools and resources to combat bias in AI systems by exposing the workings used by popular AI and machine learning platforms, suggesting ways in which data could be augmented and improved.

And in the summer, Google issued a detailed statement on its own strategy for ethical AI development. In that case, it was prompted by employee rebellion and external criticism of the company’s involvement with the Pentagon’s Project Maven.

That programme uses AI to analyse drone footage and identify possible targets, which many see as weaponising AI.

Google bowed to pressure to exit the project when the contract comes up for renewal next year, but has since faced similar rebellion over revelations that it has been developing a censored version of its search technology for the Chinese market.

However, compared with SAP’s general, vague, aspirational, and – some might suggest – rather ‘corporate mission statement’ approach (as published today), Google’s version was lengthy and extremely detailed.

Chris Middleton
Chris Middleton is former editor of Internet of Business, and now a key contributor to the title. He specialises in robotics, AI, the IoT, blockchain, and technology strategy. He is also former editor of Computing, Computer Business Review, and Professional Outsourcing, among others, and is a contributing editor to Diginomica, Computing, and Hack & Craft News. Over the years, he has also written for Computer Weekly, The Guardian, The Times, PC World, I-CIO, V3, The Inquirer, and Blockchain News, among many others. He is an acknowledged robotics expert who has appeared on BBC TV and radio, ITN, and Talk Radio, and is probably the only tech journalist in the UK to own a number of humanoid robots, which he hires out to events, exhibitions, universities, and schools. Chris has also chaired conferences on robotics, AI, IoT investment, digital marketing, blockchain, and space technologies, and has spoken at numerous other events.