
The world will shortly be at a crossroads. Will we harness AI responsibly, ensuring it benefits humanity, or allow unchecked innovation to lead us down a dangerous path? A future of AI-driven medical errors and mass unemployment is not inevitable, but the policy and regulatory decisions we make now will determine the future of both AI and humanity. Will AI be our greatest ally, or our worst enemy? Regulation will decide.
The Compelling Case for AI Regulation
As AI systems become more advanced and ubiquitous, the risks of unchecked development grow. High-profile incidents like the Cambridge Analytica data scandal, where AI facilitated large-scale voter manipulation, and the 2010 Flash Crash, potentially triggered by automated trading algorithms, underscore AI’s ability to cause widespread harm when improperly deployed or inadequately constrained.
Beyond these high-profile events, broader concerns around transparency, accountability, ethical implications, and potential discrimination or bias in AI systems have galvanised public opinion and led to growing calls for regulatory oversight.
The challenge of explaining the decision-making processes of advanced AI models like deep neural networks, and the systemic importance of AI in sectors like finance, healthcare, and critical infrastructure, create an imperative to mitigate risks that could destabilise markets, compromise consumer safety, or erode public trust.
The Evolving Regulatory Landscape
While AI’s societal impact grows, specific regulations governing its development and deployment remain a developing area. However, several intergovernmental bodies and national governments have begun establishing guiding principles and frameworks. These include:
The OECD AI Principles: These are the foundational principles that have shaped global regulatory approaches to AI. They promote human-centred values, transparency, safety, and accountability in AI development and use. Given this is a fast-moving and emerging area, they were recently updated to reflect the current technological and policy landscape.
European Union: Poised to become the world’s first comprehensive AI law, the AI Act takes a risk-based approach to regulating AI systems. It sets stringent requirements for high-risk systems, including those used in critical infrastructure, law enforcement, and employment.
United States: While the 2019 Executive Order on AI emphasised maintaining U.S. leadership, recent developments have seen a growing focus on AI in the healthcare sector and individual states exploring their own AI regulations to address concerns around bias, transparency, and accountability.
United Kingdom: The UK favours a flexible, principles-based approach to AI regulation that encourages innovation while ensuring safety and public trust. This framework focuses on adaptability and proportionality in addressing specific AI applications.
China: China has been rapidly expanding its AI regulations, with recent measures addressing data security, algorithmic transparency, and the ethical implications of AI. The government recently released draft measures for managing generative AI services, demonstrating a proactive stance towards regulating emerging technologies.
Each approach to AI regulation reflects that jurisdiction’s values, regulatory traditions, and technological priorities. However, a common thread is the growing recognition of the need for balanced governance that fosters innovation while mitigating risks and ensuring ethical AI development and deployment.
Unique Challenges in Regulating AI
Crafting effective AI regulation is presenting a real challenge for policymakers worldwide. A primary hurdle lies in defining AI, given its rapidly evolving nature and diverse applications. While various jurisdictions and organisations have proposed legal definitions, we don’t yet have a universal definition. For example, the European Union’s proposed AI Act focuses on software developed with specific techniques to generate outputs, whereas the U.S. National AI Initiative Act emphasises machine-based systems making predictions or decisions.
Determining liability and accountability for AI actions poses another significant challenge, particularly as systems become more autonomous and capable of independent decision-making. The EU, through its proposed AI Act and updates to the Product Liability Directive, aims to address this by placing obligations on providers and users and clarifying liability for defective AI products. Other jurisdictions are further behind the EU, with liability currently governed largely by existing sector-specific laws.
The development and deployment of AI systems involve complex networks and processes collectively known as AI supply chains. These supply chains encompass the entire life-cycle of an AI system, from the initial sourcing of data and computing resources to the eventual integration and operation of the AI within an application or product. Key components include data acquisition and preparation, computing infrastructure provisioning, AI model development, software integration, system deployment, and ongoing maintenance and monitoring.
AI supply chains often involve multiple stakeholders, such as data providers, cloud service providers, model developers, software engineers, system integrators, and end-users. These entities may be spread across different organisations, locations, and jurisdictions, creating complex interdependencies and potential vulnerabilities. Ensuring transparency, traceability, and accountability throughout these complex supply chains is vital for effective regulation and risk management. This involves tracking data provenance, documenting model development processes, verifying the confidentiality and integrity of computing resources, and maintaining audit trails for AI system decisions and outputs. Addressing the challenges posed by the global nature and diversity of AI supply chains is essential for building trust in AI technologies and mitigating potential risks associated with their development and deployment. Naturally, a level playing field across jurisdictions would help, but regulators must strike a balance that provides appropriate oversight without stifling innovation.
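To make the traceability point concrete, here is a minimal sketch in Python of how one link in an AI supply chain might be recorded. The `ProvenanceRecord` schema and `hash_artifact` helper are hypothetical illustrations rather than any standard or regulatory requirement; the idea is simply that hashing artifacts and logging who produced what, from which inputs, yields a verifiable audit trail.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


def hash_artifact(path: str) -> str:
    """Return the SHA-256 digest of a file (dataset, model weights, etc.)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


@dataclass
class ProvenanceRecord:
    """One auditable step in an AI supply chain (hypothetical schema)."""

    step: str            # e.g. "data-acquisition", "model-training"
    actor: str           # organisation or team responsible for this step
    inputs: list[str]    # digests of the upstream artifacts consumed
    output_digest: str   # digest of the artifact this step produced
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


# Example: record that a model was trained from a specific dataset version.
# (File names are placeholders; uncomment with real artifacts to hand.)
# dataset_digest = hash_artifact("training_data.parquet")
# model_digest = hash_artifact("model.safetensors")
# record = ProvenanceRecord(
#     step="model-training",
#     actor="Example Model Lab Ltd",
#     inputs=[dataset_digest],
#     output_digest=model_digest,
# )
# print(record.to_json())
```

Chaining records like this across data providers, model developers, and integrators would give each downstream party a tamper-evident view of where its AI components came from, which is exactly the kind of traceability regulators are beginning to ask for.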
Earlier this year, I was privileged to be part of the team that drafted a research paper for the UK’s Department for Science, Innovation and Technology (DSIT) on cyber security risks to AI. You can read the report here. One of the key takeaway lessons was the importance of understanding the AI supply chain to minimise cyber security risk.
Ethical AI considerations are also central to AI regulation. Issues such as bias, discrimination, privacy, and the potential impact of AI on employment and social structures must be carefully addressed to ensure that AI development and deployment align with societal values. Differing societal values across jurisdictions risk creating a ‘race to the bottom’ in AI regulation. Without global coordination, some regions may relax rules to attract AI businesses, allowing companies to bypass stricter ethical oversight elsewhere. Since AI systems and supply chains cross borders, lax regulations in one place could undermine responsible AI efforts worldwide.
Public engagement is another critical aspect of AI regulation. Involving the public in discussions about AI’s potential benefits and risks can help build trust in AI technologies and ensure that regulations reflect societal concerns and expectations.
Lessons from History for AI Governance
Examining historical precedents in regulating new technologies and industries offers valuable insights into how AI governance may unfold.
The evolution of financial services regulation, which progressed from light-touch frameworks to more comprehensive regimes such as BCBS 239 in the wake of major crises like the 2008 financial meltdown, could serve as a model. Initial, broadly-scoped AI regulations may well give way to more granular, sector-specific rules as real-world impacts emerge.
The UK’s experience with the General Data Protection Regulation (GDPR) offers a blueprint for AI regulation. The GDPR focuses on transparency, accountability, and individual rights in data processing, all of which are highly relevant to AI given the overlapping relationship between data and AI systems. The UK’s proactive enforcement of the GDPR, combined with its principles-based approach that balances flexibility with core values, could help address similar concerns in AI regulation, such as data bias, algorithmic transparency, and discriminatory outcomes. However, learning the lessons of the GDPR’s adoption, notably its complexity and its impact on businesses, will be key, as will the need to expand beyond personal data protection to address the broader societal impacts of AI.
Recent years have seen major tech companies develop their own ethical AI principles. While the efficacy of self-regulation is debated, it underscores that, whatever the motivation, the AI industry is grappling with how to ensure responsible AI development.
Proportional, risk-focused regulation calibrated to the degree of systemic risk AI wields within a given domain is the most likely course. Sectors like finance and healthcare, with high economic and societal relevance, may warrant stricter rules, while lower-risk applications could remain more lightly governed. As mentioned earlier, the EU AI Act, with its risk-based approach under which high-risk use cases face stricter scrutiny, is likely to become a global template for AI regulation.
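To illustrate what a tiered, risk-based approach implies in practice, the short Python sketch below maps some example use cases onto the EU AI Act’s broad risk tiers. The four tiers reflect the Act’s published structure (unacceptable, high, limited, and minimal risk), but the specific mappings and obligation summaries here are simplified assumptions for illustration only, not legal guidance.

```python
from enum import Enum


class RiskTier(Enum):
    """Broad tiers from the EU AI Act's risk-based approach (simplified)."""

    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency obligations, e.g. disclosing AI interaction"
    MINIMAL = "no specific obligations; voluntary codes of conduct"


# Illustrative mapping only; the Act's annexes define the real categories.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

The design point is that obligations scale with potential harm: the same underlying model might face no specific requirements in a spam filter yet demand full conformity assessment when screening job applicants.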
The Global AI Challenge: Can We Collaborate for a Better Future?
AI is only just starting to reshape our world. Will we use AI responsibly to solve our biggest problems, or let it run wild and cause chaos?
The rules we make now will decide that. We need to learn from past mistakes with other innovations, policies, regulations, and technologies. We must ensure AI is built on strong ethical principles, and get everyone involved in a balanced conversation. Furthermore, we must try to do it as a collaborative global effort in an increasingly divided world.
This may be the challenge of our generation. The choices we make today will shape the future for generations to come. If we work together and get it right, we can look forward to a future where AI helps fix many of our previous wrongs, like climate change, and raises the standard of living for all across the world. Get it wrong, and we could end up with an even more fragmented and broken world.
Good luck to the AI policymakers and AI regulators. It won’t be easy.
Jamie is Founder at Bloch.ai, and a Visiting Fellow in Enterprise AI at Manchester Metropolitan University. He prefers cheese toasties.
Follow Jamie here and on LinkedIn.