How the US and EU Are Addressing AI Regulation

Jan 31, 2024

In the past couple of years, the technology landscape has been disrupted by a massive surge in the development and proliferation of artificial intelligence (AI) platforms. This surge is driving the current boom in AI products, which now exert their influence across data-sensitive sectors such as healthcare, finance, and insurance, and touch more aspects of daily life with each passing day.

With artificial intelligence evolving rapidly, both the EU and the US have taken action to meet the challenge of implementing comprehensive regulations that guide the responsible use of these powerful new systems.

EU Introduces the First Legislation

In June 2023, the EU took a decisive step when the European Parliament adopted its negotiating position on the AI Act, a legislative framework outlining priorities for safe, transparent, traceable, non-discriminatory, and environmentally friendly AI. The Parliament’s primary objective was to establish a human-centric approach ensuring that AI systems are supervised by human beings rather than left to automation, in order to prevent harmful outcomes.

The EU’s AI Act is a meticulous document that categorizes AI systems by risk level, establishing different rules for each category. This risk-based approach is a fundamental distinction from the US discussions. The EU outright bans applications deemed to pose an unacceptable risk, such as those involving cognitive behavioral manipulation, social scoring, and real-time remote biometric identification in public spaces, while subjecting high-risk systems to strict obligations. Notably, the legislation places significant emphasis on protecting vulnerable populations, including children.

Assessment and oversight requirements for high-risk AI systems form another critical aspect of the EU’s regulatory framework. The Act mandates thorough assessments before high-risk AI products are placed on the market and requires ongoing scrutiny throughout each product’s lifecycle. This careful monitoring aligns with the EU’s commitment to the safe and ethical use of AI, reflecting the region’s historical emphasis on consumer protection.

However, the AI Act has faced pushback from the corporate sector. Executives from over 150 businesses signed an open letter raising concerns about the compliance costs and liability risks the proposed regulations could impose. This opposition is indicative of the delicate balance policymakers must strike between fostering innovation and safeguarding against potential harms.

The US Takes Action

As the AI Act gained momentum in the EU, the US followed suit when Senate Majority Leader Chuck Schumer proposed the SAFE Innovation Framework. The framework rests on two core pillars, safety and innovation, with the intent of striking a balance between the potential societal benefits and harms of AI. It states that future legislation should embrace the unprecedented advancements AI makes possible while proactively addressing threats to US national security, the prospect of job displacement, and the spread of disinformation. The framework also emphasizes the need to establish the level of transparency that both the federal government and private citizens should expect from AI companies.

Shortly after the introduction of the SAFE Innovation Framework, President Biden met with seven leading AI developers to secure a voluntary agreement on eight commitments, signifying a move toward safer, more secure, and more transparent AI development. Although voluntary, the agreement outlines measures such as internal and external security testing, information sharing, third-party vulnerability reporting, and the prioritization of research on societal risks. This collaborative approach, involving both the public and private sectors, contrasts with the EU’s more prescriptive legislation.

In October 2023, President Biden issued an executive order on safe, secure, and trustworthy AI, reflecting the federal government’s commitment to fast-tracking AI regulation to keep pace with the rapidly evolving technology.

Although the US is making great efforts to implement AI regulations quickly, the country still lags far behind the EU. In December 2023, EU lawmakers reached a provisional agreement on the final text of the AI Act, effectively making the EU the global frontrunner in AI regulation and setting a groundbreaking precedent for other nations to follow. With initiatives such as the SAFE Innovation Framework still subject to debate and discussion in Congress, and with Biden’s executive order carrying no legislative force, the US may not see sweeping AI regulation passed into law until later this year.

Summary

The current boom in AI products is transforming the technology landscape and will have unpredictable effects on daily life. This prospect has led both the EU and the US to take action toward comprehensive AI regulation. The EU introduced the AI Act, a meticulous legislative framework emphasizing safety, transparency, and a risk-based approach, with the European Parliament adopting its position in June 2023. The US responded with the SAFE Innovation Framework, conceived to strike a balance between the potential societal benefits and harms of AI. The AI Act, with its categorization of AI systems by risk level and its emphasis on consumer protection, contrasts with the more collaborative, voluntary measures proposed in US policy. EU lawmakers reached a provisional agreement on the AI Act in December 2023, while comprehensive AI regulation in the US may not be established until later this year.

As the technology continues to evolve, the regulatory frameworks established by these two regions will undoubtedly shape the trajectory of AI governance for the rest of the world.