House of Lords: AI needs an ethical code of practice

Artificial intelligence (AI) should be subject to a cross-sector code of practice that ensures the technology is developed ethically and does not diminish the rights and opportunities of humans, according to a new report by the House of Lords.

In the comprehensive report, released this morning, the House of Lords Select Committee on Artificial Intelligence said the UK is in a "unique position" to help shape the development of AI on the world stage, ensuring the technology is applied only for the benefit of humanity.

"The UK has a unique opportunity to shape AI positively for the public's benefit and to lead the international community in AI's ethical development, rather than passively accept its consequences," said Committee chairman Lord Clement-Jones.

"The UK contains leading AI companies, a dynamic academic research culture, and a vigorous startup ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI's development and use," added Clement-Jones.

The 13-member Committee, whose members include journalist Baroness Bakewell and the Lord Bishop of Oxford, was appointed in July 2017 to assess the economic and social impact of artificial intelligence.

After almost 10 months of consultation, during which it received 223 pieces of written evidence and visited companies including DeepMind and Microsoft, the panel has now proposed a set of principles intended to form the basis of a code of practice, one it hopes will be embraced internationally.

"Ready and willing" to take advantage of AI

AI should be developed for the "common good and benefit of humanity", as well as operate on principles of "intelligibility and fairness", the committee's report states.

There should also be restrictions on any AI systems that attempt to "diminish the data rights or privacy of individuals, families or communities", and each citizen should be given the right to be educated to a level where they can "flourish mentally, emotionally and economically" alongside an AI system.

The report also called for a ban on the development of any AI that has the potential to "hurt, destroy, or deceive human beings".

"AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these," said Clement-Jones. "An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse."

He added that it was the Committee's aim to see the UK remain at the cutting edge of research, achieved in part by providing greater support for technology startups. To that end, the Committee has urged the creation of a "growth fund" for SMBs, as well as changes to immigration law that would make it easier to recruit skilled overseas talent.

"We've asked whether the UK is ready willing and able to take advantage of AI. With our recommendations, it will be," said Clement-Jones.

This could go some way towards alleviating concerns that the UK lags significantly behind on research and development investment as a percentage of GDP, which currently stands at around 1.7% but which the government has pledged to raise to 2.4% by 2027.

"I am particularly pleased to see the suggestion of an SME fund - support and funding schemes for UK SMBs working with AI will provide much needed education and clarity about how adoption of this technology will supercharge the growth of all industries," said Sage's VP of AI, Kriti Sharma, who is one of many AI industry experts to give evidence to the committee. "We hope that the government will get behind this and look at reframing incentives for SMBs in particular to invest in technology which enables them to take advantage of AI."

At a recent panel event on AI, Geoff Mulgan, CEO of innovation charity Nesta, said the UK has made a "massive strategic error" on funding, particularly within the public sector, and criticised the lack of strategic programmes to help mobilise the nation's talent.

In response, the report has also called for greater investment in skills and training, designed to ensure any disruption to the workforce from the introduction of AI is kept to a minimum.

Legal liability

Mark Deem, a partner at law firm Cooley who submitted evidence to the Committee, argues that developing AI will "challenge the underlying basis of a number of legal obligations according to our present concepts of private law".

"To harness the power of this technology requires the establishment of an appropriate legal and regulatory framework, which balances the innovative and entrepreneurial aspirations of key stakeholders with the implementation of a safety net of protections should systems malfunction," said Deem.

However, he added that consideration of this framework should not be undertaken in a "jurisdiction silo or seen as a purely academic, legal exercise", but should instead form part of a wider discussion with input from technologists, legal professionals and those looking to invest in the area.

On the use of data by AI systems, the committee believes individuals should be given greater powers to protect their data from misuse. While the General Data Protection Regulation (GDPR) will deliver on this to some extent, further action is needed, such as the creation of ethics advisory boards, the report said.

Given the negative perception of AI among the public, the report urges the tech industry to lead the way in establishing "voluntary mechanisms" for informing the public when AI is being deployed, although it is not yet clear precisely what these mechanisms will look like.

Splitting responsibilities

Sue Daley, head of programme for AI at technology industry lobby group techUK, described the report as an "important contribution to current thinking".

"At a time when some are questioning the ability of politicians to keep pace with tech this report proves that policy makers can get to grips with big issues like AI," said Daley. "It is particularly impressive that members of the Committee spent time learning to programme deep neural networks. Politicians across the pond should take note."

The report also tasks the government and the Competition and Markets Authority with ensuring that large technology companies do not monopolise access to data, and that greater competition is encouraged.

"Implementing a universal code of ethics for AI is an extremely good idea and is something we have independently implemented at Sage to educate our people and protect our customers," added Sharma. "This step will be critical to ensuring we are building safe and ethical AI - but we need to think carefully about their practical application and the split of responsibility between business and government, specifically when considering their application to specific industry sectors and ensuring buy-in and rapid adoption from the business community."


Contributor

Dale Walker is a contributor specialising in cybersecurity, data protection, and IT regulations. He was formerly managing editor at ITPro, as well as its sibling sites CloudPro and ChannelPro. He spent several years reporting for ITPro from numerous domestic and international events hosted by companies including IBM, Red Hat and Google, and has been a regular reporter at Microsoft's yearly showcases, including Ignite.