
Is the regulation of artificial intelligence bound to be a race to the bottom?


Professor Rebecca Parry, Nottingham Law School, Nottingham Trent University, UK https://www.ntu.ac.uk/staff-profiles/law/rebecca-parry and Profesora Lorena Carvajal, Pontificia Universidad Católica de Valparaíso and Senior Visiting Fellow, NTU https://www.pucv.cl/uuaa/derecho/academicos/claustro-academico/lorena-carvajal-arenas



Seismic shifts in global politics threaten to derail the responsible development of artificial intelligence. Despite AI's promising medical applications and its integration into daily life through virtual assistants, targeted advertising, and automated support systems, troubling trends are emerging. AI's darkest potential extends beyond the rogue scenarios of science fiction into very real concerns: massive environmental footprints from energy-hungry data centres, surveillance technologies that erode privacy and autonomy, and automated decision systems that can amplify discrimination and inequality. There are also dystopian possibilities for human rights in a world where AI deployment outpaces ethical guardrails. Deregulation movements, such as the recent US emphasis on removing barriers to innovation, could create a dangerous race to the bottom in which existential risks receive insufficient regulatory attention. Yet a focus on responsible development is important. The Collingridge dilemma presents a fundamental challenge: technological impacts are difficult to predict and regulate until a technology is fully developed, but by then control mechanisms may be too late to implement. From a pessimistic perspective, existential threats might become unavoidable once AI reaches certain development thresholds.


Given that AI companies operate across international borders with significant mobility, there are legitimate concerns about weak regulation leading to regulatory arbitrage, where corporations strategically relocate to jurisdictions offering minimal oversight of matters such as environmental safeguards and human rights protections. This mobility creates additional challenges for establishing effective global governance frameworks for advanced AI systems. Governments may therefore be hesitant to raise ethical and environmental concerns while simultaneously vying to attract AI companies with minimal regulation and generous incentives, including access to natural resources. This is illustrated most clearly by the almost immediate rescission of President Biden's Executive Order 14110 of October 30, 2023, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence", which had seemed a balanced approach in that it supported responsible innovation and sustainable development. It was replaced by President Trump's much shorter "Removing Barriers to American Leadership in Artificial Intelligence" which, as the name indicates, takes a strongly deregulatory approach and moves away from efforts to eliminate discrimination and bias.


There had been reason to hope for better. Beyond the Biden Executive Order, other global developments included a statement from the G7 regarding privacy and data protection in generative AI, as well as the risk-based European Union AI Act. Competition among nations has also been rising: recent months have seen the announcement of a UK principles-based AI playbook, and jurisdictions such as Canada, the UAE and Saudi Arabia have positioned themselves as suitable destinations for AI businesses in view of their regulatory structures and resources. These high-level developments are significant, given the fast-moving nature of artificial intelligence, evidenced by the growth of generative AI tools. There are, however, dangers: efforts to attract businesses can lead to a "race to the bottom", with light regulation that fails to adequately address the risks presented by AI. In addition, countries already take different approaches to environmental laws, worker rights and copyright protections, all of which are highly relevant to AI production.


Where do emerging and developing countries fit into this? Can they be more than victims, supplying exploited labour, having their environments ruined by mining for rare earth minerals, and suffering droughts caused by data storage centres' high water usage?


An answer can be found by analysing the case of Chile, the first country in the world to implement and complete the UNESCO Readiness Assessment Methodology (RAM). The RAM is a key tool for identifying governance gaps, as well as the views of multiple stakeholders in the jurisdictions of interest. Covering five dimensions (Legal/Regulatory, Social/Cultural, Economic, Scientific/Educational, and Technological/Infrastructure), the RAM helps identify the institutional changes required to develop or strengthen a National AI Strategy. Its objective is to facilitate the implementation of UNESCO's Recommendation on the Ethics of Artificial Intelligence, approved in 2021 by its 193 Member States.


In addition to the content, which focuses on the ethical approach, one aspect of the Chilean policy that should be highlighted is its work with stakeholders. The first version of the national artificial intelligence policy dates from 2021 and is intended to last for ten years. This policy remains in force and was updated in 2023-2024, particularly the section dedicated to governance and ethics. As part of this updating process, 300 people took part in deliberations on the future of AI in the country, and 600 people took part in an online consultation. There was also input from specialised bodies such as the Forum and Summit on the Ethics of Artificial Intelligence in Latin America and the Caribbean, the recommendations developed by the Committee of Experts on Artificial Intelligence, and the report of the technical roundtables of the Commission on Future Challenges, Science, Technology and Innovation in the Senate. This inclusive and participatory approach allowed the policy update to reflect a regional perspective and to benefit from the contribution of experts from across Latin America. These efforts have paid off: Chile is now the leading country in the region in the Latin American AI Index (CENIA, 2024).


Along the same lines, one consequence of an ethical policy is that it requires ongoing engagement with stakeholders and the community to implement and monitor the principles at stake, rather than merely a defensive approach such as protection against data breaches, unauthorised access to sensitive information and extensive surveillance. Measures could include analysing algorithms for bias and ensuring that they are documented, or requiring companies and institutions to keep citizens informed, on an ongoing basis, about where their data goes. This could eventually lead to further regulatory developments.


This ongoing testing opens the way for emerging countries to act as true human-centred laboratories for the development of the principles that will govern AI in the years to come. This matters because, so far, the debate leading to the creation of a catalogue of principles governing AI has been largely concentrated in the global North, along with most technological and regulatory developments. We have noted that emerging and developing countries have tended to be given the "dirty work", but Chile shows how they can play a greater part in development.

The task of regulating artificial intelligence is challenging. There are obstacles to effective regulation: regulators tend to be a step behind the fast pace of developments, and they are often under-resourced compared to those developing the technologies. Tech giants can object that regulation will stifle innovation. Poor regulatory choices can indeed entrench suboptimal practices as industry standards and hamper innovation. This is where organisations such as UNESCO can play a major role.


Confronted with the dilemma many countries face, namely that a clear AI policy could discourage investment, Chile opted to adopt such a policy in order to create opportunities in this field. In fact, various recent studies and surveys show that investors prefer clear policies and regulations on artificial intelligence, prioritising oversight and control over a broad regulatory framework that allows unrestricted development and use.


Chile's new regulations and institutions (its policy, centre and draft law) should therefore help pave the way not only for AI investment, but also for national companies to operate in jurisdictions with stricter AI regulation, such as the EU, UK, Canada and Japan. The regulatory approaches these major powers take toward AI safety do not affect only their own markets: they create "spillover" effects that raise global standards. Meanwhile, countries like Chile demonstrate how emerging economies can balance innovation with responsible AI governance. In this sense, Chile has made an "investment" in the future, as the updated policy states: "Chile's future economic growth depends largely on AI, to the point that its growth rate could increase by one percentage point for every three points of growth by 2035, according to the IDB".


Effective AI regulation requires global coordination to establish appropriate standards, which can then drive the development of safer technical approaches. While nations may disagree on appropriate AI applications, particularly regarding law enforcement, social monitoring, and other state activities, there is likely to be broader consensus on avoiding dependence on foundational technologies that pose systemic risks. Individual countries should focus their regulatory efforts on these high-priority areas of potential agreement.

