helenhall5

The Regulation of Artificial Intelligence in the UK and Beyond







The regulation of artificial intelligence (AI) has recently become a hot topic. This post discusses the issues that make regulating this technology particularly difficult. One key problem is that the technology advances rapidly, whereas the process of legislation is relatively slow.


Why is regulation of AI difficult?


Collingridge, a sociologist at Aston University, outlined the following dilemma:

‘attempting to control a technology is difficult…because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow’.


The key to addressing Collingridge’s dilemma is often considered to lie with anticipatory governance, where efforts are made to predict likely problems as much as possible.

The concerns in the short and medium term are not the apocalyptic visions currently being forecast by some actors. The Skynet scenario of the Terminator franchise, in which an AI becomes self-aware and attacks humanity, is probably not on the horizon just yet. The problems we face in the immediate future are rather more mundane. For example, ChatGPT presents real challenges for both assessment and the future of legal services. Financial journalist Martin Lewis recently warned that an advert which apparently showed him endorsing a product was in fact using "deepfake" technology to produce convincing video and audio of him. Actors and writers in Hollywood are striking over the potential for AI to put them out of a job.


The European Approach


To address concerns about the unregulated growth of artificial intelligence, the EU convened its High-Level Expert Group on Artificial Intelligence in 2018. This committee set out Ethics Guidelines for Trustworthy AI, which identify four key ethical principles:


1. Respect for human autonomy

2. Prevention of harm

3. Fairness

4. Explicability


These principles lead to seven key requirements:


1. Human agency and oversight

2. Technical robustness and safety

3. Privacy and data governance

4. Transparency

5. Diversity, non-discrimination and fairness

6. Societal and environmental wellbeing

7. Accountability


Explicability, transparency and accountability are all crucial. Explicability in this context means being able to explain the basis on which an algorithm or neural network has reached a decision. The EU's General Data Protection Regulation contains a provision on the automated processing of data (Article 22), but it does not require a full explanation of the decision-making process: the algorithm that powers the AI, such as the weighting given to different factors, is itself valuable intellectual property, and thorny dilemmas around protecting that property prevent fuller disclosure. This is likely to remain a major problem. Where the AI is powered by a neural network, the task is harder still, because the system has effectively programmed itself to find a solution (IBM Data and AI Team 2023). The data subject is nonetheless still entitled to know what data are used and to check their accuracy.
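The tension between explicability and intellectual property can be made concrete with a toy sketch. The model below is entirely hypothetical (the factor names, weights and threshold are invented for illustration, not drawn from any real system): it shows that for a simple weighted-factor model, "explaining" a decision amounts to disclosing each factor's contribution, which in turn reveals the weights themselves, i.e. the very property the developer wants to protect.

```python
# Hypothetical credit-style scoring model. The weights ARE the
# valuable intellectual property: disclosing per-factor
# contributions effectively discloses them.
WEIGHTS = {"income": 0.5, "years_at_address": 0.3, "missed_payments": -0.8}
THRESHOLD = 1.0  # invented approval cut-off

def score(applicant: dict) -> float:
    """Weighted sum of the applicant's (pre-normalised) factors."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Per-factor contribution to the score -- the kind of
    'explanation' Article 22 stops short of mandating, partly
    because it exposes the weights."""
    return {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}

applicant = {"income": 2.0, "years_at_address": 1.0, "missed_payments": 0.5}
decision = score(applicant) >= THRESHOLD
contributions = explain(applicant)
```

For a neural network there is no such tidy table of weights and contributions to disclose in the first place, which is why explicability is harder still in that setting.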


The EU's AI Act is currently making its way through the European Parliament. The proposed regulation takes a four-tier, risk-based approach. Amongst the proposals is a total ban on the use of AI for biometric surveillance, emotion recognition and predictive policing. The Act is already facing opposition from technology companies, which have worked together to send an open letter to the European Parliament warning of an adverse effect on innovation.


The Council of Europe has also been examining the issue through its Committee on Artificial Intelligence (CAI), which has released a draft Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The Convention focuses on the duties of states, but offers no detail on how those duties are to be implemented in the face of competing economic and policy pressures.


The UK Approach


Regulatory regimes vary across the world, with the EU offering some of the more robust protections for individuals. The UK, however, has opted to follow a different pathway, relying on its existing network of regulators and laws rather than new AI-specific legislation (Davies and Birtwhistle, 2023).


All regulatory regimes must be assessed in light of global competitiveness. High levels of regulation may reduce a country's attractiveness to investors, and these concerns appear to be steering the UK government towards a 'lighter touch'. The focus is on devising more flexible and adaptive rules via a voluntary framework, rather than legislation with its attendant problems of being slow-moving and often reactive.


The key concerns about industry-led regulation are the lack of democratic input and the risk of regulatory capture. It is important that industry consults the public, either directly or via their elected representatives, in order to determine the acceptable use of data and technology. The use of ethics panels in industry has not been unproblematic, as recent events have shown. The firing of Timnit Gebru, who led Google's AI ethics team, allegedly over raising concerns, is just one example. Furthermore, the ethics panel which oversaw DeepMind's much-criticised project at the Royal Free Hospital using patient data has since been disbanded.


It is already apparent that the UK and the EU are taking very different approaches, and time will tell which is the better.


Further Reading:


R. Benedikter, “Artificial Intelligence, New Human Technologies, and the Future of Mankind” (2023) 66 Challenge (online first version) DOI: 10.1080/05775132.2023.2223061

M. Brundage, Responsible Governance of Artificial Intelligence: An Assessment, Theoretical Framework, and Exploration. (Arizona State University 2019)

D. Collingridge, The Social Control of Technology. (St Martin 1980)

P. Fung and H. Etienne, “Confucius, Cyberpunk and Mr. Science: Comparing AI ethics between China and the EU.” (2021) arXiv preprint arXiv:2111.07555.

A.D. Hudson, E. Finn, and R. Wylie, "What can science fiction tell us about the future of artificial intelligence policy?" (2021) 1 AI & SOCIETY 1

W. Leontief, "Technological advance, economic growth, and the distribution of income." (1983) 9 Population and Development Review 40-

M.D. Murray, “Artificial Intelligence and the Practice of Law Part 1: Lawyers Must be Professional and Responsible Supervisors of AI” (2023a) Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4478588 accessed 4th July 2023

M.D. Murray, “Artificial Intelligence and the Practice of Law Part 2: Working With Your New AI Staff Attorney” (2023b) Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4478748 accessed 4th July 2023

A. Ronchi, "From Ingsoc to Skynet it is not only science fiction: From novels and science fiction to quasi-reality" in Tangible and Intangible Impact of Information and Communication in the Digital Age (pp. 1-28) (UNESCO IFAP 2021)

M. Ryznar, “Exams in the Time of ChatGPT” (2023) 80 Washington and Lee Law Review Online 305.

Web Resources:

M. Davies and M. Birtwhistle, “Regulating AI in the UK”, Ada Lovelace Institute (2023). Available at: https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/#:~:text=things%20go%20wrong.-,The%20UK's%20approach%20to%20AI%20regulation,network%20of%20regulators%20and%20laws accessed 25th July 2023

BBC News, “Martin Lewis felt 'sick' seeing deepfake scam ad on Facebook” (2023). Available at: https://www.bbc.co.uk/news/uk-66130785 accessed 18th July 2023

Committee on Artificial Intelligence, "REVISED ZERO DRAFT [FRAMEWORK] CONVENTION ON ARTIFICIAL INTELLIGENCE, HUMAN RIGHTS, DEMOCRACY AND THE RULE OF LAW", Council of Europe (2023). Available at: https://rm.coe.int/cai-2023-01-revised-zero-draft-framework-convention-public/1680aa193f accessed 4th July 2023

C. Duffy and R. Maruf, “Elon Musk warns AI could cause ‘civilization destruction’ even as he invests in it”, CNN (2023). Available at: https://edition.cnn.com/2023/04/17/tech/elon-musk-ai-warning-tucker-carlson/index.html#:~:text=Elon%20Musk%20warned%20in%20a,including%20a%20rumored%20new%20venture accessed 3rd July 2023

European Parliament, “Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS”, EUR-Lex (2023). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 accessed 3rd July 2023

S. Ghaffary “Google says it’s committed to ethical AI research. Its ethical AI team isn’t so sure”, Vox.com (2021). Available at: https://www.vox.com/recode/22465301/google-ethical-ai-timnit-gebru-research-alex-hanna-jeff-dean-marian-croak accessed 4th July 2023

A. Hern “Royal Free breached UK data law in 1.6m patient deal with Google's DeepMind” (2017). Available at: https://www.theguardian.com/technology/2017/jul/03/google-deepmind-16m-patient-royal-free-deal-data-protection-act accessed 18th July 2023

High-Level Expert Group on AI, “ETHICS GUIDELINES FOR TRUSTWORTHY AI”, European Union (2019). Available at: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai accessed 4th July 2023

IBM Data and AI Team, “AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the difference?” (2023). Available at: https://www.ibm.com/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks/ accessed 18th July 2023

MIT Technology Review, "We read the paper that forced Timnit Gebru out of Google. Here's what it says." (2020). Available at: https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/ accessed 18th July 2023

Various signatories to an open letter on the EU Artificial Intelligence Act, “Open letter to the representatives of the European Commission, the European Council and the European Parliament”, Available at: https://drive.google.com/file/d/1wrtxfvcD9FwfNfWGDL37Q6Nd8wBKXCkn/view accessed 3rd July 2023

Various signatories, Future of Life Institute. “Pause Giant AI Experiments: An Open Letter”. Available at: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ accessed 3rd July 2023

A. Vaughan, "Google is taking over DeepMind's NHS contracts – should we be worried?" (2019). Available at: https://www.newscientist.com/article/2217939-google-is-taking-over-deepminds-nhs-contracts-should-we-be-worried/ accessed 18th July 2023

