On July 25, 2023, the Senate Judiciary Committee held its fifth hearing this year on artificial intelligence (AI). It was the second hearing held by the Subcommittee on Privacy, Technology, and the Law, and it highlighted the “bipartisan unanimity” on regulating AI technology.

Overview

Chairman Richard Blumenthal (D-CT) opened the hearing by recognizing “the future is not science fiction or fantasy. It’s not even the future. It’s here and now.”

Last week, the Biden administration secured voluntary commitments focused on managing the risks posed by artificial intelligence. Blumenthal commended the Administration for recognizing the need to act, but noted that the commitments were unspecific and unenforceable, hence the need for further action by Congress. While recognizing the “good” in AI, Chairman Blumenthal also stressed the need to address the “fear” in the public domain. Throughout the hearing, Blumenthal was adamant about the need for a proactive regulatory agency that invests in research to create countermeasures against autonomous AI systems while ensuring that innovation continues to prosper. Specifically, the Senator welcomed ideas for establishing a licensing regime, a testing and auditing regime, legal limits on usage in scenarios such as elections or nuclear warfare, and transparency requirements for the limits and use of AI systems.

Ranking Member Josh Hawley (R-MO) gave a shorter statement, identifying his main priorities as workers, children, consumers, and national security. He stated confidently that big tech companies would benefit from the rise and development of AI, as they did from the rise of major social media platforms, but voiced concern about how its development would affect the American people.

Together, Chairman Blumenthal and Ranking Member Hawley promoted their bipartisan bill, the “No Section 230 Immunity for AI Act” (S. 1993), which was introduced as the first bipartisan bill in the Senate to put safeguards around AI development.

Senator Amy Klobuchar (D-MN), Chairwoman of the Subcommittee on Competition Policy, Antitrust, and Consumer Rights, also made a short opening statement. She urged quick action, mentioned bipartisan work from Senators Chuck Schumer (D-NY) and Todd Young (R-IN), and warned that “if we don’t act soon, we could decay into not just partisanship but inaction.”

Expert Witness Testimony

Dario Amodei, Chief Executive Officer, Anthropic, San Francisco, CA

Amodei is the CEO of Anthropic, a “public benefit corporation” that is developing techniques to make AI safer and more controllable. Amodei testified on Anthropic’s work in constitutional AI, a method of training AI to behave according to specific principles, as well as early work on adversarial testing of AI to uncover bad behavior, and foundational work on AI interpretability. Amodei warned of short- and long-term risks, ranging from bias, misinformation, and privacy to more existential threats to humanity, such as autonomous AI. He also argued that “medium-term” risks, such as misuse of AI for bioweapon production, pose a “grave” threat to national security, and that private action alone is not sufficient to mitigate them. Accordingly, Amodei suggested three regulatory steps:

  • Secure the AI supply chain to maintain a technology lead while keeping technologies out of bad actor hands.
  • Implement a testing and auditing regime for new and more powerful systems before they are released to the public.
  • Fund agencies such as the National Institute of Standards and Technology (NIST) and initiatives such as the National AI Research Resource, which are crucial for measurement.

Yoshua Bengio, Founder and Scientific Director of Mila – Québec AI Institute, Professor in the Department of Computer Science and Operations Research at Université de Montréal

In his opening statement, Bengio quoted former expert witness Sam Altman: “if this technology goes wrong, it could go terribly wrong.” He testified that the AI revolution has “the potential to enable tremendous progress and innovation,” but also entails a wide range of risks, including discrimination, disinformation, and loss of control of superhuman AI systems. Bengio noted that estimates for when human-level intelligence could be achieved in AI systems now range from a few years to a few decades. Bengio defined four factors on which the government can base its efforts—access, alignment, raw intellectual power, and scope of actions—before recommending that the following actions be taken “in the coming months” to protect democracy, national security, and the collective future:

  • Coordinate agile national and international frameworks and liability incentives to bolster safety, including licenses, standards, and independent audits.
  • Accelerate global research endeavors focused on AI safety to form essential regulations, protocols, and government structures.
  • Research countermeasures to protect society from rogue AIs.

Stuart Russell, Professor of Computer Science, The University of California, Berkeley

Russell’s testimony centered on a core tenet of his research: artificial general intelligence (AGI) and how to “control” AI systems. He questioned how humans can maintain power over entities “more powerful than ourselves.” He explained that the field of AI has reached a point at which an AI’s internal operations are a mystery, even to computer scientists and those who train the systems. While underscoring the importance of predictability for AI, he argued that there is no trade-off between safety and innovation. Russell made the following recommendations:

  • There should be an absolute right to know if someone is interacting with a person or with a machine.
  • Algorithms should not be able to decide to kill human beings, especially in nuclear warfare.
  • A kill switch, or “safety brakes,” must be designed into AI systems and activated if systems break into other computers or replicate themselves.
  • Systems that break regulatory rules should be recalled from the market.

Social Media

All members who spoke during the hearing, including Senators Hawley, Blumenthal, Klobuchar, and Marsha Blackburn (R-TN), mentioned the unintended harm social media has caused the public, particularly children. They suggested that lawmakers must chart a different course from the congressional delay, dismissal, and inaction on regulatory guidelines during the development of social media platforms in order to get ahead of the potential threats of AI.

Election Threats

While noting that lawmakers do not want censorship, Senator Blumenthal directly asked witnesses about the immediate threat AI poses to the integrity of the electoral system, given the upcoming 2024 Presidential election. The witnesses identified misinformation, external influence campaigns, propaganda, and deepfakes as immediate dangers. Bengio also recommended against releasing pre-trained large AI systems. All three witnesses recommended implementing watermarks or labeling on audio and visual campaign content, including requiring social media companies to restrict account use to human beings who have affirmatively identified themselves.

Labor Exploitation

Senator Hawley entered an article from the Wall Street Journal into the record, chronicling the “traumatizing” work contractors in Kenya were required to perform for a generative AI company, which included screening out descriptions of violence and abuse. The Senator maintained the need for labor reform in the industry and pushed for high-paying jobs for American workers, structures for training, and incentives that enable them. Senator Blumenthal agreed, advocating that the industry focus on “made in America when we’re talking about AI.”

Securing the Supply Chain

Prompted by Amodei’s testimony, lawmakers emphasized the critical nature of securing supply chains, particularly in the event of a Chinese invasion of Taiwan, where a large portion of AI components are manufactured. When asked if Congress should consider limitations or full prohibitions of components manufactured in China, Amodei redirected the question, suggesting that Congress should examine the components produced in the United States that end up in the hands of adversaries. However, Amodei also argued that chip fabrication production capabilities should be developed in the United States quickly to secure the supply chain for AI components.

Watermarking, Labeling, and Ethical Use

Senators Blackburn and Klobuchar questioned panelists on the ethical use of AI, bringing attention to AI scams and the use of an individual’s name, image, and voice, as well as watermarking election materials produced by AI. Senator Klobuchar highlighted that only about half of states have laws giving individuals control over the use of their name, image, and voice. When Klobuchar asked whether panelists would support a federal law giving individuals this type of control, Amodei said yes and argued that “counterfeiting humans” should have the same level of penalty as counterfeiting money.

Senator Blackburn asked whether industry is “mature” enough to self-regulate, to which Mr. Russell explicitly replied “no.” When Senator Blackburn asked whether a federal privacy standard would help, Mr. Russell explained that there should be a requirement to disclose if a system is harvesting data from individual conversations.

International Cooperation

Panelists agreed that an international and multilateral approach would be critical to AI regulation, particularly in mitigating an AI arms race. Specifically, Bengio testified that “we have a moral responsibility” to mount an internationally coordinated effort that can fully retain the economic and social benefits of AI while protecting our shared future. Russell made clear that the UK, not China, is the United States’ closest competitor in AI development, claiming that lawmakers have “slightly overstated” the level of threat that China presents. He claimed that China mainly produces “copycat” systems that are not as sophisticated as the originals. However, he noted China’s “intent” to be a global leader and flagged that China is investing larger sums of public money in AI than is the U.S.

At the same time, Bengio recognized that allies, such as the UK, Canada, Australia, and New Zealand, are important to an international and multilateral approach, which would work together with a national oversight body doing licensing and registration in the U.S.

Testing and Evaluating Structures

Building on Amodei’s opening testimony around the “control” of AI systems, Senator Blumenthal asked Amodei if he would recommend that Congress impose testing, auditing, and evaluation requirements focused on risk, including the implementation of “safety brakes.” Amodei answered affirmatively, while also recommending a mechanism for recalling products that have shown dangerous behavior. All three witnesses also expressed support for a reporting requirement for product failures, as is standard practice in the medical and transportation industries.

Conclusion

Crowell & Moring, LLP will continue to monitor congressional and executive branch efforts to regulate AI. Our lawyers and public policy professionals are available to advise any clients who want to play an active role in the policy debates taking place right now or who are seeking to navigate AI-related concerns in government contracts, employment law, intellectual property, privacy, healthcare, antitrust, or other areas.

Neda Shaheen

Neda M. Shaheen is an associate in the Washington, D.C. office of Crowell & Moring, and is a member of the Privacy and Cybersecurity and International Trade Groups. Neda focuses her practice on representing her clients in litigation and strategic counseling involving national security, technology, cybersecurity, trade and international law. Neda joined the firm after working as a consultant at Crowell & Moring International (CMI), where she supported a diverse range of clients on digital trade matters concerning international trade, national security, privacy, and data governance, as well as advancing impactful public-private partnerships.

Jacob Canter

Jacob Canter is an attorney in the San Francisco office of Crowell & Moring. He is a member of the Litigation and Privacy & Cybersecurity groups. Jacob’s areas of emphasis include technology-related litigation, involving competition, cybersecurity and digital crimes, copyright, trademark, and patent, as well as general complex commercial matters.

Jacob graduated from the University California, Berkeley School of Law in 2018, where he launched Berkeley’s election law outreach program and pro bono project. He joins the firm after a year of practice at an international law firm in Washington, D.C., and a year clerking in the Southern District of New York for the Hon. Lorna G. Schofield. Jacob was exposed to and provided support in a variety of complex substantive and procedural legal topics during the clerkship, including trade secrets, insurance/reinsurance, contracts, class actions, privacy, intellectual property, and arbitrability.

Aaron Cummings

Aaron serves as the co-chair of the Government Affairs Group and provides counsel and advocacy to clients on legislative and policy matters in a range of areas including antitrust, financial services, health care, energy, intellectual property, artificial intelligence, technology, agriculture, and national security. All too often in Washington, if you’re not at the table, you’re on the menu. Aaron helps clients make sure their views are represented in policy discussions on Capitol Hill, in the White House, and throughout the federal government.

Aaron has years of high-level experience on Capitol Hill. He’s the former Chief of Staff to U.S. Senator Chuck Grassley (R-IA), the longest-serving Republican Senator in history and current President Pro Tempore emeritus of the Senate. As Senator Grassley’s Chief of Staff, Aaron worked closely with other members of Republican Senate Leadership and their senior staff to advance the priorities of the Republican Caucus and to set the agenda for the Senate. Aaron also advised Senator Grassley during his tenure as Chairman of the powerful Judiciary and Finance Committees, as the top Republican on the Budget Committee, and as a senior member of the Senate Committee on Agriculture. During his tenure as a Chief Counsel on the Senate Judiciary Committee, Aaron advised Senator Grassley on a host of policy and constitutional issues, including Supreme Court nominations, and was the lead Republican negotiator of the First Step Act—the biggest criminal justice reform effort in a generation and a signature bipartisan accomplishment of the Trump Administration. Aaron also played key roles in the passage of the United States-Mexico-Canada Trade Agreement and the Infrastructure Investment and Jobs Act. Earlier in his career, he worked as an Associate Director of Presidential Speechwriting in the George W. Bush White House.

Drawing on his years of experience in litigation, leading congressional investigations, and high-profile hearings on Capitol Hill, Aaron also counsels clients responding to government investigations.

Tim Shadyac

Tim Shadyac is a director in the Government Affairs Group, where he assists clients with legislative and regulatory issues. Tim’s areas of focus include the implementation of the Affordable Care Act and other health reform efforts, the Medicare program, drug pricing policy, and the broad impact of politics on health care policy.

Tim has experience working in a variety of health care sector settings. Prior to joining Crowell & Moring, Tim advised clients on a number of health care policy issues at Avalere Health, a D.C.-based health policy consultancy. Throughout his time at Avalere, Tim specialized in matters of importance to the life sciences industry and worked closely with various patient advocacy groups. His work was largely concentrated on issues related to the outcome of presidential and congressional elections, drug pricing policy, and Medicare Part D benefit design. Tim also served in the federal government affairs, public policy, and advocacy groups at Sanofi for nearly four years. There, he monitored and analyzed federal health policy to assess the business impact and developed strategies for response. In his role with the advocacy group, Tim sought opportunities to reflect the patient voice in regulatory guidance and identified opportunities for partnership with patient advocates.