U.S. Artificial Intelligence Policy: Legislative and Regulatory Developments
October 20, 2023, Covington Alert
U.S. policymakers are focused on supporting and regulating artificial intelligence (AI) platforms as they explode into the mainstream. Policymakers are weighing the promotion of U.S. global leadership in this critical emerging technology against its risks, both to individual consumers, workers, and businesses and to U.S. economic and national security. AI has emerged as a prominent policy issue across Congress and the Biden Administration, as officials scramble to educate themselves about the technology while crafting legislation, rules, and policies that balance U.S. innovation leadership with national security priorities.
Over the past year, AI has drawn bipartisan interest and support. House and Senate committees have held nearly three dozen hearings on AI this year alone, and more than 30 AI-focused bills have been introduced in this Congress. Two bipartisan groups of Senators have announced separate frameworks for comprehensive AI legislation. Several AI bills—largely focused on the federal government’s internal use of AI—have also been approved by committees.
Meanwhile, the Biden Administration has announced plans to issue a comprehensive executive order this fall to address a range of AI risks under existing law. The Administration has also taken steps to promote the responsible development and deployment of AI systems, including securing voluntary commitments on AI safety and transparency from 15 technology companies.
Despite the strong bipartisan interest in AI regulation—as well as support from leaders of major technology companies and the general public—passing comprehensive AI legislation remains a challenge. No consensus has emerged around either substance or process, with different groups of members developing their own versions of AI legislation through different procedures. In the House, a bipartisan bill would punt the issue of comprehensive regulation to the executive branch, creating a blue-ribbon commission to study the issue and make recommendations.
Majority Leader Chuck Schumer (D-NY) convened a bipartisan working group that includes Senators Martin Heinrich (D-NM), Mike Rounds (R-SD), and Todd Young (R-IN). They are leading a novel process for developing AI legislation that involves “AI Insight Forums,” closed-door sessions with Senators and key stakeholders. The Forums are designed to educate Senators and build consensus outside of the normal congressional committee process. Other Senators, including Senate Judiciary Subcommittee on Privacy, Technology, and the Law Chair Richard Blumenthal (D-CT) and Ranking Member Josh Hawley (R-MO), have announced their own legislative framework informed by the traditional committee process.
Without an emerging consensus, the future of comprehensive AI legislation remains uncertain. At the same time, however, more than a dozen bipartisan bills have been introduced on a range of specific AI-related topics in both chambers of Congress. Targeted legislation introduced so far includes bills to promote U.S. leadership in AI R&D, protect national security, compel disclosure of AI use, secure U.S. elections from deepfakes and other AI-generated misinformation, address the impact of AI on U.S. workers, and help the federal government leverage AI to deliver services. With bipartisan support and widespread interest in AI issues, it is likely that at least some targeted AI legislation could become law in the near future.
Comprehensive Bipartisan Bills and Frameworks
Three versions of a comprehensive AI regulatory regime have emerged in Congress: two in the Senate and one in the House.
SAFE Innovation—A Values-Based Framework and New Legislative Process
In June, Senate Majority Leader Chuck Schumer (D-NY) unveiled a new bipartisan proposal—with Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD)—to develop legislation to promote and regulate artificial intelligence. In a speech at the Center for Strategic & International Studies, Leader Schumer asked whether Congress can work to “maximize AI’s benefits, while protecting the American people—and all of humanity—from its novel risks,” answering: “I think the answer to these questions is an emphatic yes.”
With that premise, Leader Schumer proposed a plan to boost U.S. global competitiveness in AI development, while ensuring appropriate protections for consumers and workers. To facilitate comprehensive AI legislation, Leader Schumer announced both a policy framework and a new procedural approach, outside the regular committee process.
Substantively, Leader Schumer announced his SAFE Innovation Framework, five policy principles designed to encourage domestic AI innovation while ensuring adequate guardrails to protect national security, democracy, and public safety. These principles include:
- Security: Protect national security and promote economic security for workers by addressing the threat of job displacement
- Accountability: Ensure transparent and responsible AI systems and hold accountable those who promote misinformation, engage in bias, or infringe IP
- Foundations: Support development of algorithms and guardrails that protect democracy and promote foundational American values, including liberty, civil rights, and justice
- Explainability: Regulations should require disclosures from AI developers to educate the public about AI systems, data, and content
- Innovation: Regulations must promote U.S. global technology leadership
Procedurally, Leader Schumer argued that the complexities of evolving technology require education of policymakers beyond the traditional committee hearing process. Instead, he announced that he would convene a series of AI Insight Forums—closed-door sessions bringing together all Senators with AI developers and industry leaders, civil rights and consumer groups, unions, and other stakeholders.
While Leader Schumer emphasized that the Insight Forums would not replace traditional congressional committee hearings and markups, he said that those tools alone are insufficient to create the “right policies.”
The first AI Insight Forum was held on September 13, featuring civil rights groups, unions, and the creative community, as well as the leaders of major technology companies engaged in AI R&D.
Leader Schumer said that the process has no fixed timeline, but that he expects to draft legislation within the next few months.
Licensing Framework—Another Bipartisan Approach
Separate from Leader Schumer’s effort, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), the chair and ranking member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced their own framework for AI regulation in September. The Blumenthal-Hawley approach focuses on transparency and accountability to address harms of AI and protect the personal data of consumers.
Unlike the SAFE Innovation framework, which aims to develop consensus legislation based on guiding principles, the Blumenthal-Hawley framework proposes several specific policies alongside broad principles, drawing on the multiple AI-related hearings the two senators have held in the Privacy Subcommittee this year. Specifically, this consumer privacy-focused framework would:
- Create an independent oversight body to administer a registration and licensing process for companies developing “sophisticated general purpose AI models” and models to be used in certain “high risk situations.”
- Eliminate Section 230 immunity for AI-generated content. This proposal follows legislation Senators Blumenthal and Hawley introduced in June, the No Section 230 Immunity for AI Act, which would deny Section 230 immunity to internet platforms for damages from AI-generated content.
- Increase national security protections, including export controls, sanctions, and other restrictions to prevent foreign adversaries from obtaining advanced AI technologies.
- Promote transparency, including requiring AI developers to disclose training data and other key information to users and other stakeholders, requiring disclaimers when users are interacting with AI systems, and requiring public disclosure of adverse incidents or AI system failures.
- Protect consumers, including increased control over personal data used in AI systems and strict limitations on generative AI involving children.
Senators Blumenthal and Hawley said they will develop legislation to implement the framework by the end of this year.
House Blue-Ribbon Commission
While the Senate engages in legislative fact-finding and drafting of concrete proposals based on “frameworks,” a bipartisan group of House members has introduced legislation taking an alternative approach. The National AI Commission Act (H.R. 4223)—introduced in June by Representatives Ted Lieu (D-CA), Ken Buck (R-CO), and Anna Eshoo (D-CA), along with five additional colleagues (two Republicans and three Democrats)—would establish a bipartisan commission of experts with backgrounds in computer science or AI technology, civil society, industry and workforce issues, and government, including national security, to “review the United States’ current approach to AI regulation” and make recommendations for a risk-based AI regulatory framework and the structures necessary to implement it.
The President and congressional leaders would appoint 20 members to the commission, with each political party selecting half of the members. Once all members of the commission are appointed, the commission would release an interim report within six months, a final report six months after the interim report, and a follow-up report one year after the final report.
Senator Brian Schatz (D-HI) joined the House press release announcing the introduction of the bill, but a Senate companion has not been formally introduced.
Targeted Bipartisan Legislation
In addition to the bipartisan frameworks, several other bipartisan AI bills on targeted subject matter have been introduced, some of which have advanced through the committee process. Subject-specific bills generally fall into six major categories: (1) promoting AI R&D leadership; (2) protecting national security; (3) disclosure; (4) guarding against “deepfakes”; (5) workforce; and (6) coordinating and facilitating federal agency AI use.
1. Promoting AI R&D Leadership
Members in both Houses have introduced legislation to promote U.S. leadership in AI R&D. The Creating Resources for Every American to Experiment (CREATE) with Artificial Intelligence Act (CREATE AI Act) (S. 2714/H.R. 5077)—bipartisan, bicameral legislation led by Senators Martin Heinrich (D-NM), Todd Young (R-IN), Cory Booker (D-NJ), and Mike Rounds (R-SD) and Representatives Anna Eshoo (D-CA), Michael McCaul (R-TX), Don Beyer (D-VA), and Jay Obernolte (R-CA)—would establish the National Artificial Intelligence Research Resource (NAIRR). The NAIRR would provide software, data, tools and services, testbeds, and other resources to facilitate AI research by higher education institutions, non-profits, and other federal funding recipients.
Although the CREATE AI Act has not passed out of committee in either chamber, three of the lead sponsors of the Senate bill are also key participants in the working group for Leader Schumer’s SAFE Innovation framework, suggesting that this legislation could become part of a larger bill developed as part of that framework.
2. Protecting National Security
Several bipartisan bills have been introduced to require government agencies to prepare for health crises or cyber attacks facilitated by AI and other emerging technologies. These include:
- The Artificial Intelligence and Biosecurity Risk Assessment Act (S. 2399/H.R. 4704), introduced by Senators Ed Markey (D-MA) and Ted Budd (R-NC), and Representatives Anna Eshoo (D-CA) and Dan Crenshaw (R-TX), would require the Health and Human Services Department to conduct risk assessments and implement strategies to address the threats posed to public health and national security by AI and other technological advancements.
- The Block Nuclear Launch by Autonomous Artificial Intelligence Act (S. 1394/H.R. 2894)—introduced by Senators Ed Markey (D-MA), Elizabeth Warren (D-MA), Jeff Merkley (D-OR), and Bernie Sanders (I-VT), and Representatives Ted Lieu (D-CA), Ken Buck (R-CO), Don Beyer (D-VA), Jim McGovern (D-MA), and Jill Tokuda (D-HI)—would prohibit the use of federal funds to use any AI or other autonomous system to launch a nuclear weapon, or to select or engage targets of a nuclear weapon, without “meaningful human control.”
- Senator Richard Blumenthal (D-CT) and Representatives Michael McCaul (R-TX), Gregory Meeks (D-NY), Jared Moskowitz (D-FL), Thomas Kean (R-NJ), and Del. Aumua Amata Coleman Radewagen (R-American Samoa) introduced a bill (S. 1006/H.R. 1093) in February to require the State Department to report to Congress on efforts to implement the advanced capabilities component of the trilateral security partnership between Australia, the United Kingdom, and the United States, including on advanced capabilities such as AI. The bill passed the House in March, 393-4, but has not yet advanced out of the Senate Foreign Relations Committee.
- The AI for National Security Act (H.R. 1718), introduced by Representatives Jay Obernolte (R-CA), Jimmy Panetta (D-CA), and Patrick Ryan (D-NY), would update Defense Department procurement laws to allow the procurement of AI-enabled cybersecurity products and services.
3. Disclosure
Several bills have been introduced to require disclosure of AI-generated products through a disclaimer requirement or other markings. One such measure is the AI Labeling Act (S. 2691), a bipartisan bill introduced by Senators Brian Schatz (D-HI) and John Kennedy (R-LA) that would require all generative AI systems to include a “clear and conspicuous disclosure” identifying content as AI-generated that, to the extent feasible, is “permanent and unable to be easily removed by subsequent users.”
4. Guarding Against “Deepfakes”
The growth of AI has stoked fear of “deepfakes”—AI-generated audiovisual content that appropriates the voice and likeness of individuals without their consent—particularly in elections and artistic pursuits. Political campaigns and foreign actors, for example, could use AI systems to generate “deepfake” images or videos to influence elections. Speaking at a recent Senate Rules Committee hearing on AI and elections, Leader Schumer emphasized the importance of AI guardrails to protect democracy, and committed to ensuring elections are a focus of a future AI Insight Forum. Election-related AI legislation already introduced includes:
- The Protect Elections from Deceptive AI Act (S. 2770), led by Senators Amy Klobuchar (D-MN), Josh Hawley (R-MO), Chris Coons (D-DE), and Susan Collins (R-ME), which would prohibit the distribution of materially deceptive AI-generated content in ads related to a federal election. The bill would also allow targeted candidates to seek removal of the content and recover damages.
- The Require the Exposure of AI-Led (REAL) Political Advertisements Act (S. 1596/H.R. 3044)—sponsored by Senators Amy Klobuchar (D-MN), Cory Booker (D-NJ), and Michael Bennet (D-CO), and Representative Yvette Clarke (D-NY)—which would require all political ads that include AI-generated content to display a disclaimer identifying content as AI-generated.
Lawmakers are also concerned about the use of AI in art and advertising, such as unauthorized celebrity endorsements of products, or AI-generated music featuring the voices of specific artists without their consent. Earlier this month, Senators Chris Coons (D-DE), Marsha Blackburn (R-TN), Amy Klobuchar (D-MN), and Thom Tillis (R-NC) released a discussion draft of their Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which would impose liability on persons or companies who generate unauthorized digital reproductions of any person engaged in a performance, as well as on platforms hosting such content if they have knowledge that the content was not authorized by the subject.
5. Workforce
Members in both parties are concerned about the impact of AI systems on the American workforce. One bipartisan House bill, the Jobs of the Future Act (H.R. 4498)—introduced by Representatives Darren Soto (D-FL), Lori Chavez-DeRemer (R-OR), Lisa Blunt Rochester (D-DE), and Andrew Garbarino (R-NY)—would require the Labor Department and the National Science Foundation (NSF) to draft a report for Congress analyzing the impact of AI on American workers.
6. Coordinating and Facilitating Federal Agency AI Use
Several bipartisan bills, including bills that have passed committee, relate to the federal government’s use of AI for its own purposes, either to facilitate services or to advise the public when an agency may use AI systems. These include:
- The AI LEAD Act (S. 2293), sponsored by Senators Gary Peters (D-MI) and John Cornyn (R-TX), would establish the position of Chief Artificial Intelligence Officer at each federal agency, who would “ensure the responsible research, development, acquisition, application, governance, and use” of AI by the agency. The bill passed the Senate Homeland Security and Governmental Affairs Committee (HSGAC) in July, but has not yet been considered on the Senate floor.
- The AI Leadership Training Act (S. 1564), sponsored by Senators Gary Peters (D-MI) and Mike Braun (R-IN), would require the Office of Personnel Management to establish an AI training program for federal agency management and supervisory employees. This bill passed out of HSGAC in May, but has not been considered on the Senate floor.
- The AI Training Expansion Act (H.R. 4503), sponsored by Representatives Nancy Mace (R-SC) and Gerald Connolly (D-VA), would expand AI training within the executive branch. The bill passed the House Oversight and Accountability Committee in July on a bipartisan 39-2 vote, but has not been considered on the House floor.
- The Transparent Automated Governance Act (S. 1865), introduced by Senators Gary Peters (D-MI), Mike Braun (R-IN), and James Lankford (R-OK), would require federal agencies to notify individuals whenever they are interacting with AI or other automated systems, or when such systems are making critical decisions. The bill would also create an appeals process to ensure human review of AI-generated decisions.
- The Consumer Safety Technology Act (H.R. 4814), a bipartisan bill—led by Representatives Darren Soto (D-FL), Michael Burgess (R-TX), Lori Trahan (D-MA), and Brett Guthrie (R-KY)—that would require the Consumer Product Safety Commission to establish a pilot program for exploring the use of AI to support its mission.
Geopolitical Competition and AI
Competition with China for technology leadership is of heightened concern for policymakers. Congress has taken steps both to promote U.S. innovation in foundational technologies, such as AI, and to restrict the transfer of critical emerging technologies to “foreign entities of concern,” including China.
In July, the Senate voted 91-6 to add the Outbound Investment Transparency Act as an amendment to the National Defense Authorization Act for FY2024 (NDAA). The bill would require notification to the Treasury Department of certain foreign investment activities involving AI, as well as semiconductors, quantum computers, and other critical technologies. While the House-passed NDAA does not include outbound investment provisions, some Members of the House are advocating for stricter sanctions on companies in China.
The Biden Administration has also taken its own action to address outbound investments in “countries of concern.” President Biden issued an executive order in August imposing restrictions on U.S. persons undertaking certain outbound transactions involving national security-sensitive technologies in the artificial intelligence, semiconductor, and quantum computing sectors. The order—which will be implemented by regulations issued by the Treasury Department—prohibits certain transactions and requires U.S. parties engaged in other transactions to notify the Treasury Department. We expect the NDAA conference process will include efforts to codify and enhance the rules proposed in the executive order. Legislation that codifies or modifies the order would give Congress a greater role in oversight of investment restrictions on key technologies like AI.
Congressional Hearings
In addition to legislation, congressional committees in both chambers have held nearly three dozen hearings on AI-related topics since January of this year, exploring a wide range of issues at the intersection of AI and national security, human rights, intellectual property, and democracy. The variety of subjects for these hearings, across multiple committee jurisdictions, suggests that Congress may consider additional legislation in these areas in the coming months.
For example, the Senate Judiciary Subcommittee on Intellectual Property held a hearing to explore whether innovations created using AI should be patentable, and if so, who should be listed as the inventor. Senator Chris Coons (D-DE), the Chairman of the Subcommittee, stated that in his view, “neglecting IP in an effort to regulate AI would have serious unintended consequences for [the U.S.] innovation ecosystem, national security, and economic competitiveness.”
The Senate Judiciary Subcommittee on Privacy, Technology, and the Law has also held multiple hearings on AI—including on “Rules for AI,” “Principles for Regulation” of AI, and, most recently, a hearing specifically on the AI legislative framework announced by Senators Blumenthal and Hawley—each featuring testimony from executives and senior engineers of major technology companies working on AI.
The Senate Judiciary Subcommittee on Human Rights and the Law also held a hearing to assess AI’s implications for civil rights, criminal justice, democratic governance, and privacy rights. During this hearing, lawmakers raised concerns about AI and the lack of comprehensive federal privacy legislation as well as the potential for AI to perpetuate bias and discrimination.
The House Oversight and Accountability Subcommittee on Oversight, Information Technology, and Government Innovation discussed how the United States can use cutting-edge technology to protect American citizens and U.S. borders. During this hearing, Representatives Nancy Mace (R-SC) and Stephen Lynch (D-MA) both highlighted the need to support advancements in AI to ensure that the United States does not cede this space to foreign competitors, like China.
Similarly, the House Select Committee on the Strategic Competition between the United States and the Chinese Communist Party recently held a hearing to explore how the United States can improve its position in the race for global technology leadership, which includes leading in AI.
These hearings reflect the broad range of topics Congress is examining as it determines how best to regulate AI, and they suggest that additional targeted legislation may advance even as the path to comprehensive AI legislation remains uncertain.
Executive Branch Developments
As Congress develops comprehensive AI legislation through hearings and working groups and advances narrower AI bills, the Biden Administration has taken concrete steps of its own, using both existing legal authorities and the bully pulpit to address AI risks and promote responsible AI development and deployment.
President Biden is expected to issue a comprehensive executive order addressing AI risks in the coming weeks. While the Administration has not released details of its anticipated order, Dr. Arati Prabhakar, Director of the White House Office of Science and Technology Policy, appearing at a September event on Building Responsible AI sponsored by the Information Technology Industry Council (ITI), said that the order will be “broad” and will reflect “everything that the President sees as possible under existing law to get better at managing risk and using the technology.”
Separately, the White House has been leading a months-long initiative to secure voluntary commitments from AI companies to mitigate risks, including commitments to:
- Safety testing and information sharing across industry, governments, academia, and other stakeholders;
- Investing in cybersecurity safeguards and facilitating discovery and reporting of security vulnerabilities; and
- Transparency around the use of AI technologies.
Fifteen major technology companies have taken the White House pledge, including seven that joined in July 2023 and eight more in September.
The National Telecommunications and Information Administration (NTIA) is taking an active role in studying and developing policy recommendations for AI accountability. Most notably, in April 2023 it issued a request for comment (“RFC”) asking stakeholders to suggest policies the Administration can advance to assure the public that AI systems are “legal, effective, safe, and otherwise trustworthy.” NTIA’s work in this area has attracted significant public input and attention, with the agency receiving more than 1,400 comments in response to the RFC. NTIA has explained that it will use these comments and other inputs to inform the agency’s forthcoming report making policy recommendations for “mechanisms that can create earned trust in AI systems.”
Following a directive from Congress (section 5301 of the NDAA for FY2021), the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF 1.0) in January 2023. The AI RMF is voluntary guidance for public and private organizations designed to provide “standards, guidelines, best practices, methodologies, procedures, and processes” for developing trustworthy AI systems, assessing those systems, and mitigating risks from AI systems. NIST collaborated with both government and private stakeholders to develop the framework, including several rounds of public comment.
Other agencies across the Executive Branch are engaged in efforts to regulate AI systems, advance U.S. leadership in AI innovation, and enforce existing laws in the evolving AI ecosystem. While agency initiatives are constantly evolving, some significant actions the Administration has taken in 2023 so far include:
- In February, the U.S. Patent and Trademark Office (USPTO) issued a request for comment seeking public “input on the current state of AI technologies and inventorship issues that may arise in view of the advancement of such technologies, especially as AI plays a greater role in the innovation process.” The USPTO received 69 comments in response to the request, including on a range of questions about the use of AI in invention.
- In April, four federal agencies—the Consumer Financial Protection Bureau, the Justice Department, the Equal Employment Opportunity Commission, and the Federal Trade Commission—released a joint statement on their commitment to using existing law to prevent bias and discrimination in AI, describing how AI falls within these agencies’ civil rights enforcement authorities. The agencies “pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
- In May, the Department of Education released a report on the risks and opportunities related to AI in teaching, research, and assessment.
- Also in May, NSF announced $140 million in funding to launch new National AI Research Institutes focused on six major research areas, including trustworthy AI, AI for cybersecurity, and AI for “smart climate” applications.
- The Commerce Department’s National AI Advisory Committee delivered its first report to President Biden in May.
- In July, NIST launched a new public working group to build on the AI RMF.
- In August, the U.S. Copyright Office issued a “notice of inquiry” seeking public comment on fair use issues and the copyright status of AI-generated outputs to “help assess whether legislative or regulatory steps in this area are warranted.” In September, the Office extended the deadline for initial comments to October 30 and for reply comments to November 29.
- In August, the Federal Election Commission published a notice seeking public comment on whether to initiate a rulemaking on the regulation of AI in campaign advertisements.
These actions are not an exhaustive list of the measures the Administration has taken so far to address AI. Other agencies have also used existing funding streams to invest in AI R&D, issued reports or solicited public comments on AI issues within their jurisdictions, and brought enforcement actions against AI companies for violations of existing law. We expect this uptick in Executive Branch activity to continue in parallel with legislative efforts in Congress.
If you have any questions concerning the material discussed in this client alert, please contact the members of our Public Policy practice.