The UK can lead the way in the development of artificial intelligence (AI) as long as it takes an ethical approach to the technology, according to a new report by the House of Lords Select Committee on Artificial Intelligence.
The report, titled AI in the UK: Ready, Willing and Able?, makes several recommendations on ethical behaviour, which the Committee believes will enable the UK to take a unique leadership position in the market.
The Committee’s five principles are:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families, or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally, and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
The Committee suggested that these principles should form the basis of a cross-sector AI code, which can be adopted both nationally and internationally.
“The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences,” said chair of the Committee, Lord Clement-Jones.
“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse,” he added.
The report says that significant government investment in skills and training will be required to mitigate the negative effects of AI and automation, including job losses.
The Committee suggested that individuals need to be able to have greater personal control over their data and the way in which it is used. It called on the government and the Competition and Markets Authority to review the use of data by large technology companies operating in the UK.
It also called for the establishment of a voluntary mechanism to inform consumers when AI is being deployed to make significant or sensitive decisions, and for the Law Commission to investigate whether existing liability law will be sufficient when AI systems malfunction or cause harm to users.
Nurturing the market
The Committee also suggested that the government should incentivise the development of a new approach to auditing the data used by AI systems, to ensure that biases and prejudices from the past are not unwittingly built into automated systems, and called for children to be adequately prepared for using AI.
Clement-Jones has previously warned about the risks of biased data within AI systems, or the biased deployment of intelligent systems. In September 2017, he said: “How do we know in the future, when a mortgage, or a grant of an insurance policy, is refused, that there is no bias in the system?
“There must be adequate assurance, not only about the collection and use of big data, but in particular about the use of AI and algorithms. It must be transparent and explainable, precisely because of the likelihood of autonomous behaviour. There must be standards of accountability, which are readily understood.”
Despite the amount of work that the UK still has to do to come to terms with AI, Clement-Jones said that it has made a good start as the country plays host to leading AI companies, a dynamic research culture, and a vigorous start-up ecosystem, in addition to its legal, ethical, financial, and linguistic strengths.
“We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use,” he said.
- A growth fund for startups to help them scale their businesses, and changes to the immigration system were among the Committee’s other suggestions.
Internet of Business says
In February, Lord Clement-Jones co-chaired a Westminster eForum debate on UK AI policy, at which a number of speakers – including Dame Wendy Hall, co-author of the UK’s AI strategy document – stressed the opportunity for the UK to take a leadership position in ethical AI deployment.
Meanwhile, AI poses a challenge to the core legal system, suggested other speakers, and to established legal and ethical principles, such as liability and responsibility.
However, also speaking at the event was the Lord Bishop of Oxford, who proposed The Ten Commandments of AI. Five of them appear to have been adopted almost verbatim by the House of Lords’ Select Committee in its core recommendations. So what of the other five?
One, ‘AI should never be developed or deployed separately from consideration of the ethical consequences of its applications’, can be discounted, given the UK’s determination to lead the ethical debate. That leaves four others that are notable by their absence.
These are: ‘The application of AI should be to reduce inequality of wealth, health, and opportunity’; ‘AI should not be used for criminal intent, nor to subvert the values of our democracy, nor truth, nor courtesy in public discourse’; ‘The primary purpose of AI should be to enhance and augment, rather than replace, human labour and creativity’; and, ‘Governments should ensure that the best research and application of AI is directed toward the most urgent problems facing humanity’.
Why these have not been suggested among the Lords’ core principles is therefore an interesting question.
At the event, the leadership of the UK’s new Office for AI was also announced, and afterwards the Nuffield Foundation announced the creation of the £5 million Ada Lovelace Institute, to explore the ethical and social impacts of AI and automation.
Among the many positive messages to come out of these developments are the facts that the UK is increasingly honouring its computer science heritage, including critical figures such as Lovelace and Alan Turing, and that many of the leading figures in the UK’s AI debate are now women: itself a massive step in a new direction for an industry that is overwhelmingly dominated by men.
However, there remains a significant challenge to the UK’s ambitions to lead the AI debate: China. The Chinese government’s creation of a compulsory social ratings and citizen monitoring system – due to go live in 2020 – gives Beijing access to the one thing that AI needs on a massive scale to be successful: data, without regulations such as GDPR to hold it back.
Even Facebook and the UK’s own Cambridge Analytica can’t hope to compete with that level of invasive data harvesting.
The UK is right to lead the debate on ethical AI and data gathering, and there have been numerous high-profile events on the subject already this year. But whether a real commercial advantage will result from it is a different matter. Meanwhile, the government could lay itself open to accusations of double standards, given the recent establishment of a state surveillance scheme.
Additional reporting: Chris Middleton
- Read more: AI regulation & ethics: How to build more human-focused AI
- Read more: Sage: Why gender-neutral AI helps remove bias from systems
- Read more: Mayor of London launches project to make capital epicentre of AI