EU and US begin to align on AI regulation

A series of regulatory changes and new hires by the Biden administration signals a more proactive stance by the federal government on artificial intelligence (AI) regulation, bringing the United States closer to the approach of the European Union (EU). These developments are promising, as is the inclusion of AI issues in the new EU-US Trade and Technology Council (TTC). But there are further steps these two great democracies can take to align their approaches to reducing the harms of AI.

Since 2017, at least 60 countries have adopted some form of artificial intelligence policy, a torrent of activity that nearly matches the pace of modern AI adoption. The expansion of AI governance raises concerns about impending challenges for international cooperation. Simply put, the growing ubiquity of AI in online services and physical devices means that any new regulations will have significant ramifications for global markets. The variety of ways AI can be trained and deployed further complicates this picture. For example, AI systems can be hosted in the cloud and accessed remotely from anywhere with an internet connection. Model reuse and transfer learning allow different teams, working in multiple countries, to jointly develop an AI model from many datasets. Edge and federated machine learning techniques allow physical products around the world to share data that affects the operation of their AI models.

These considerations complicate the governance of AI, although they should not be used as an excuse to avoid necessary protections – the many arguments for which I will not repeat here. An ideal outcome would be the implementation of meaningful government oversight of AI while still enabling these global AI supply chains. Further, a more unified international approach to AI governance could strengthen shared oversight, direct research toward common challenges, and promote the exchange of best practices, code, and data.

Perhaps with this in mind, the September 2021 TTC meeting prominently featured discussions of AI policy. This was by no means guaranteed, as other issues discussed at the inaugural TTC meeting in Pittsburgh, including semiconductors, investment screening, and export controls, have a much longer history as bilateral policy concerns. Further, government officials who participated in the meeting expressed optimism about shared intentions on AI governance, citing in particular consensus on both a risk-based approach and a ban on extreme applications of government social scoring (see Annex III of the inaugural EU-US TTC statement).

The EU’s extensive engagement on these issues has likely elevated AI policy to the TTC’s agenda. Most significant is the EU’s proposed AI Act, which would create regulatory oversight for a wide range of high-risk AI applications in digital services (for example, hiring and admissions software) and physical products (for example, medical devices). The AI Act would also affect other types of AI, such as by requiring disclosure of low-risk AI systems and banning a few categories of AI outright, but these provisions will likely raise fewer international trade and regulatory considerations. Although there is still a great deal of uncertainty as to how the AI Act’s rules would be enforced, existing regulatory agencies within EU member states are likely to bear a large part of the work. The debate over the law’s content is still ongoing, and it is worth noting that, even if adopted, the new rules could take some time to come into force. Take the case of the General Data Protection Regulation (GDPR). Recent fines imposed on Amazon (€746 million) and WhatsApp (€225 million) for privacy violations demonstrate the EU’s willingness to use its regulatory powers, but most of the significant penalties came only two years after the GDPR’s implementation and four years after its adoption. If the AI Act follows a similar timeline, it could be years before significant oversight is in place.

The United States sets its regulatory gears in motion

In contrast, incremental developments in the United States have drawn fewer headlines, but they are aggregating into a meaningful approach to AI regulation. Some agencies, such as the Food and Drug Administration and the Department of Transportation, have been working for years to incorporate AI considerations into their regulatory regimes. In late 2020, the Trump administration’s Office of Management and Budget encouraged agencies to consider what regulatory moves on AI might be needed, though it generally advocated a light touch.

Since then, policymaking under the Biden administration signals that the pace of change has accelerated. The Federal Trade Commission (FTC) first published a widely noted blog post and then began a rulemaking process, making clear that the agency considers discrimination, fraud, and the misuse of AI-related data to fall within its purview. Additionally, the Department of Housing and Urban Development has begun rolling back a Trump administration rule that effectively shielded housing-related algorithms from claims of discrimination. In late October, the Equal Employment Opportunity Commission announced an initiative to enforce hiring and workplace protections for AI systems. Further, five financial regulators have launched an inquiry into AI practices at financial institutions that may affect risk management, fair lending, and creditworthiness assessments. Finally, the National Institute of Standards and Technology is developing an AI risk management framework. This list of policy interventions is starting to look a lot like the EU’s view of “high-risk” AI. In fact, given that it could take years for the EU to adopt and enforce its AI Act, the US could find itself ahead in many practical areas of AI regulation.

The expertise of staff joining the Biden administration also signals greater prominence for these issues, particularly AI Now Institute co-founder Meredith Whittaker at the FTC, as well as AI harms experts Suresh Venkatasubramanian and Rashida Richardson at the White House Office of Science and Technology Policy (OSTP). In addition to its leadership’s call for an AI Bill of Rights, the OSTP has also launched a series of public events on biometric technologies and other AI harms. Altogether, these developments suggest that the Biden administration’s outlook is closer to the EU’s AI oversight goals than many seem to think.

This trend is not limited to AI products and services. The Senate’s recent introduction of the Platform Accountability and Transparency Act suggests the possibility of greater consensus between the US and EU. The proposed legislation would allow university researchers to work with raw platform data, subject to National Science Foundation approval and corporate compliance enforced by a new FTC bureau. This mirrors a key provision of the proposed EU Digital Services Act, whose passage by the European Parliament looks increasingly likely.

Also relevant, although less specific to AI, is the July 2021 Biden executive order to increase competition in US markets, which contains many technology-focused provisions. The order, along with the selection of Lina Khan to lead the FTC, convinced EU competition chief Margrethe Vestager that there was “a lot of alignment” between the two governments.

Be proactive on regulatory cooperation

The emerging policy landscapes on both sides of the Atlantic reflect progress toward a strong government role in protecting citizens from the harms of AI. Yet this shared ambition does not make aligned regulation particularly likely. For context, a 2009 analysis documented thousands of instances of regulatory divergence and non-tariff barriers to trade between the EU and US. The fact that subsequent efforts to align these policies have gone rather poorly suggests that preventing divergence in the first place may be the best approach. AI regulations, which are likely to include many technical definitions or even specific mathematical formulas, are certain to provide ample opportunity for honest disagreement.

Beyond avoiding trade barriers, consistent approaches can also strengthen government oversight. If multiple governments apply similar AI regulations, the chances increase that the worst offenses, at least by international companies, will be caught. Furthermore, consistent government priorities send a clear signal to civil society and academic communities in the EU and US, directing inquiry and research toward common concerns.

There are many incremental and achievable steps the EU and US can consider to set the stage for long-term AI policy alignment. Working toward common definitions of AI, if only for regulatory purposes, would be a good start. Encouraging more information sharing, such as between national standards bodies (as NIST and CEN-CENELEC have done on other technologies), could be another easy step. As regulatory responsibilities take shape, encouraging communication and collaboration among sectoral regulators can prevent difficulties down the road. This could be facilitated by tasking a central office (such as one within the Department of Commerce) to act as the international regulatory coordinator for AI, advising agencies on how to avoid conflicting future rules. If these steps prove successful, the EU and US can work toward consistent processes and criteria for auditing AI systems.

More ambitious still would be a joint approach to regulatory sandboxes, in which collaborative experimentation and testing of emerging AI systems could help facilitate better and more consistent regulation. There could even be a common approach to oversight of online platforms, in which the EU and US agree to allow researchers to study aggregated data from both continents, improving our understanding of online harms.

Broadly speaking, the EU and US should include proactive regulatory cooperation under the TTC and start preparing for a broader international community with significant AI oversight measures. This agenda would also fit nicely into the Biden administration’s emerging portfolio of more active democratic governance of technology, which includes the recently announced US-UK Grand Challenges on democracy-affirming technologies and plans to restrict the export of surveillance technologies to authoritarian governments.
