
Bipartisan Senate AI Working Group Publishes Anticipated AI Roadmap Outlining Key Areas For Regulation, Investment, and Further Study

May 24, 2024

On May 15, 2024, the Bipartisan Senate AI Working Group, composed of Senate Majority Leader Chuck Schumer (D-NY) and Senators Mike Rounds (R-SD), Todd Young (R-IN), and Martin Heinrich (D-NM), published "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate" (the "Report").[1] The 31-page document contains a roadmap for lawmakers, administrators, and practitioners drawn from a series of "AI Insight Forums" the Working Group hosted, convening prominent members of industry, academia, and civil society.

The Report covers eight key topics:

  • Supporting U.S. Innovation in AI
  • AI and the Workforce
  • High Impact Uses of AI
  • Elections and Democracy
  • Privacy and Liability
  • Transparency, Explainability, Intellectual Property, and Copyright
  • Safeguarding Against AI Risks
  • National Security

Supporting U.S. Innovation in AI

The Report sets a goal of securing $32 billion per year in federal funding for non-defense AI innovation. With such a sizeable proposed budget to work with, the Report is generous and inclusive in identifying worthy recipients of funding. One throughline that emerges from the proposed allocation is a desire to ensure that the U.S. is capable of supporting every layer of the infrastructure that undergirds AI technology: from the design and manufacture of the next generation of high-end AI chips; to the development of smart, widely adopted technological standards for AI; to the tools for testing and evaluating AI models; to, of course, the physical infrastructure that supports AI.

AI and the Workforce

The Report recognizes that AI will revolutionize work and encourages all stakeholders to be heard during the development and deployment of AI to minimize and mitigate the impact of AI adoption on the labor force.

To facilitate information gathering on this issue, the Report encourages Congress to pass the Workforce Data for Analyzing and Tracking Automation Act (S. 2138) which would charge the Secretary of Labor with measuring the impact of automation on the workforce, including job displacement, job creation, and shifting in-demand skills, in order to inform workforce development strategies.

High Impact Uses of AI

The Report encourages lawmakers to ensure that existing laws, including those related to consumer protection and civil rights, are consistently and effectively applied to AI systems and their developers, deployers, and users. The Report tasks Congressional committees with identifying any gaps in the application of existing law and addressing those gaps to ensure that AI systems are not inadvertently exempt.

Recognizing that many of these gaps may still be unknown, the Report flags some areas for lawmakers to focus on initially: the development of standards for use of AI in critical infrastructure; the impact on content creators and publishers; the use of AI to perpetrate harms on vulnerable populations; and the testing and deployment of autonomous vehicles.

The Report also identifies the healthcare industry as a key area for lawmakers to target, with recommendations to further aid the development and improvement of AI in healthcare and increase transparency about the use of AI for providers and the public.

Elections and Democracy

With concerns about election integrity expected to once again dominate the upcoming election, the Report encourages lawmakers and technologists to consider how to work together to advance effective watermarking and digital content provenance as it relates to AI-generated or AI-augmented election content.

Privacy and Liability

The Report acknowledges that AI's rapid evolution and varying degrees of autonomy may make it difficult to assign legal liability to AI companies and their users, and recommends that lawmakers take steps to better hold both accountable. Among the suggestions are: a strong, comprehensive federal data privacy law to protect personal information, akin to the European Union's General Data Protection Regulation; policy mechanisms to reduce the prevalence of non-public personal information stored and/or used by AI systems; and incentives for the research and development of privacy-enhancing technologies.

Evoking the recent debates about the Internet's liability rules (set forth primarily by the Communications Decency Act), the Report also directs Congressional committees to consider whether AI developers and deployers should be held accountable if their products or actions cause harm to consumers, and whether end users should be held accountable if their actions cause harm.

Transparency, Explainability, Intellectual Property, and Copyright

Concerned about the "black box" of how AI technology operates, the Report directs lawmakers to increase transparency and explainability requirements for AI systems. Specifically, the Report encourages lawmakers to evaluate the need for transparency and public education about how AI systems work, are trained, and are deployed. The Report also encourages lawmakers to consider whether there should be best practices governing what sorts of activities are appropriate for AI automation.

Recognizing the ways in which AI can replicate people's likeness, voice, writing style, and artistic style, the Report suggests considering legal protections for these sorts of identifying markers. The Report also directs lawmakers to work with the U.S. Copyright Office and the U.S. Patent and Trademark Office to ensure that intellectual property rights are adequately protected.

Safeguarding Against AI Risks

The Report encourages the development of AI standards around risk assessment, testing, red-teaming, and auditing. The Report also suggests other actions for lawmakers to consider, ranging from: investigating the policy implications of different AI product release choices; to developing a framework that specifies when a pre-deployment evaluation of AI is needed; to creating an interface between commercial AI entities and the federal government to support the monitoring of AI risks; to supporting R&D efforts that address AI risks.

National Security

The Report also acknowledges that AI may present specific challenges to national security, while also impacting the management of talent for the Department of Defense and the Intelligence Community. The Report encourages both groups to collaborate with lawmakers and the relevant federal agencies to stay informed about the research areas and capabilities of U.S. adversaries, maintain a strong digital workforce within the armed services, and expand the AI talent pathway into the military.

The Report also encourages lawmakers to bolster the use of AI in U.S. cyber capabilities and weapons systems. For example, lawmakers can: ensure federal agencies can proactively manage critical technologies; develop frameworks for determining when export controls should be placed on AI; and facilitate the free flow of information across borders while also protecting against the forced transfer of American technology.

Conclusion

The Report provides a buffet of bipartisan-supported recommendations for lawmakers to act on as they shape the AI landscape. AI developers, deployers, and even users should be prepared for increased legislation, regulation, and scrutiny that may rapidly reshape the AI space. Wiggin and Dana has extensive experience counseling clients on many of the issues that may be affected by the Report's recommendations and will continue to monitor this quickly evolving industry for developments.

[1] The Report can be seen here.

For more information on the topics covered in this advisory, contact Counsel Anjali S. Dalal.
