A senior clergyman and government advisor has written what he calls “the Ten Commandments of AI”, to ensure the technology is applied ethically and for social good.
AI has been put forward as the saviour of businesses and national economies, but how can we ensure that the technology isn't abused? The Rt Rev the Lord Bishop of Oxford, a member of the House of Lords Select Committee on Artificial Intelligence, set out his proposals at a policy debate in London attended by representatives of government, academia, and the business world.
Speaking on 27 February at a Westminster eForum Keynote Seminar, Artificial Intelligence and Robotics: Innovation, Funding and Policy Priorities, the Bishop set out his ten-point plan, after chairing a debate on trust, ethics, and cybersecurity.
The 10 Commandments of AI
1. AI should be designed for all, and benefit humanity.
2. AI should operate on principles of transparency and fairness, and be well signposted.
3. AI should not be used to transgress the data rights and privacy of individuals, families, or communities.
4. The application of AI should be to reduce inequality of wealth, health, and opportunity.
5. AI should not be used for criminal intent, nor to subvert the values of our democracy, nor truth, nor courtesy in public discourse.
6. The primary purpose of AI should be to enhance and augment, rather than replace, human labour and creativity.
7. All citizens have the right to be adequately educated to flourish mentally, emotionally, and economically in a digital and artificially intelligent world.
8. AI should never be developed or deployed separately from consideration of the ethical consequences of its applications.
9. The autonomous power to hurt or destroy should never be vested in artificial intelligence.
10. Governments should ensure that the best research and application of AI is directed toward the most urgent problems facing humanity.
The eForum debate also included contributions from: Lord Clement-Jones, chairman of the House of Lords Select Committee on Artificial Intelligence; Prof Dame Wendy Hall, joint lead of the government’s AI review; Prof Philip Nelson, chief executive of EPSRC; Rob McCargow, AI programme lead at PwC; Gila Sacks, director of digital and tech policy at the Department for Digital, Culture, Media and Sport; and Sue Daley, techUK’s head of cloud, data, analytics, and AI.
Detailed reports from the event will be published throughout this week on Internet of Business.
Internet of Business says
Perhaps the Bishop of Oxford’s Commandments should now sit alongside Isaac Asimov’s Three Laws of Robotics. Either way, putting the IT in ‘pulpit’ seems to be a growing trend in the UK.
The Bishop is not the first churchman to be involved in a public debate about the ethical challenges of AI. In January, the Rev Dr Malcolm Brown, who sits on the Archbishops’ Council of the Church of England, was on the panel at another public event, Ethics in AI, hosted by Imperial College in London.
Sitting with technologists and academics, Dr Brown invoked the spirit of Labour politician Tony Benn, rather than of any celestial authority: “I’m reminded of something that Tony Benn said should be asked of everyone in power: ‘What power do you have, who gave you that power, whose interests do you serve with that power, to whom are you accountable, and how can we get rid of you?’
“These are interesting questions [that could be applied to AI]. With AI, the advance that makes this problematic involves manifestations of power. Is that power in the hands of the people who create the AI, or is it in those of the user? Where do responsibility and accountability lie, and how do we change that if it goes wrong? These are the areas where we are floundering.”
While it’s difficult to argue with either clergyman’s conclusions about the ethical challenges of AI, it’s good to know that the new tablets being brought down from the mountain have touch screens and are Wi-Fi-enabled. And whatever your religious views, this is a refreshing change from those speakers who prefer to put the id in the IoT.