
Congress Is Falling Behind on AI

Lawmakers have a history of falling behind on oversight of Big Tech.


OpenAI CEO Sam Altman will make his first appearance before Congress on Tuesday as lawmakers grapple with how to regulate artificial intelligence. 

Rapid advances – such as the public rollout of OpenAI’s ChatGPT chatbot – have sparked alarm on Capitol Hill and from tech experts. “I am freaked out by A.I., specifically A.I. that is left unchecked and unregulated,” Rep. Ted Lieu (D-Calif.) wrote in a recent New York Times op-ed. Senate Majority Leader Chuck Schumer said last month that he was developing legislation to regulate AI. And Rep. Ken Buck (R-Colo.) told Fox News in March that “Congress cannot afford to be caught sleeping at the wheel” on AI.

But it’s not clear that Congress is up to the task of developing rules for how and when AI is used. Lawmakers have introduced just four AI-focused bills this year: a House resolution of concern, a Senate bill that would direct the federal government to review its use of AI, a House bill that would mandate disclosure when AI is used in political ads, and joint House-Senate legislation that would ban the use of AI to launch nuclear weapons. Whether any of them will pass is another question entirely. 

Critics – from NGOs and trade groups to sitting lawmakers – say this is yet another example of technological developments outpacing Congress’ understanding or willingness to regulate, especially in the wake of a full-court lobbying press from major technology companies. 

  • Unlike Europe or China, the U.S. still lacks a national privacy law, despite bipartisan support for the general concept.
  • Lawmakers in Congress have also failed to pass meaningful laws addressing social media fraud, abuse and disinformation, despite widespread criticism of platforms like Twitter, TikTok and Facebook from both parties.
  • Meanwhile, the handful of lawmakers who have spoken out about AI split down party lines on how to deal with it.

Congress’s slowness to deal with those and other major issues related to Big Tech “certainly undermines our ability to now grapple with something like generative AI, which is moving at such a rapid pace,” said Jesse Lehrich, co-founder and senior advisor at Accountable Tech, a tech watchdog group. 

Microsoft, which has integrated ChatGPT into its Bing search engine, declined to comment on the matter. Its press office directed The Messenger to a blog post on AI by vice chair and president Brad Smith. Google, which has its own AI chatbot and is working to integrate AI into a number of its systems, did not respond to an interview request. 

Altman, who will testify at a hearing of the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, is also slated to attend a bipartisan dinner with House members. 

AI is everywhere – and its capabilities are growing

AI is already making inroads into Americans’ daily lives. Beyond flashy chatbots like ChatGPT or image generators like DALL-E, AI helps power the automated decision-making systems that companies rely on for everything from whom to hire to whether to give someone a loan, and that doctors use to diagnose certain medical conditions. And now, the rapidly growing field of generative AI — which includes chatbots and image generators — is creating the potential for convincing misinformation to be created at a massive scale. 

Even those in the field are concerned about its direction. In March, more than 1,000 experts released an open letter calling for a six-month “pause” on AI development. The signers included tech titans like Apple co-founder Steve Wozniak and prominent researchers, including John J. Hopfield of Princeton University, who invented associative neural networks. In early April, current and former presidents of the Association for the Advancement of Artificial Intelligence released their own letter, acknowledging AI’s risks and calling for governments and researchers to “harness AI for the betterment of all humanity.”

More recently, Geoffrey Hinton, known as “the godfather of AI”, left Google to speak more freely about AI’s hazards. “It is hard to see how you can prevent the bad actors from using it for bad things,” he told the New York Times this month. 

What the Biden administration is doing

While Congress has been slow to act, the Biden administration has begun making some moves on AI. 

This month, the White House announced it would spend $140 million on AI research and development. It also directed the Office of Management and Budget to issue a draft guide on the use of AI systems by the U.S. government for public comment and to undertake a public assessment of generative AI systems like ChatGPT. 

Last year, the White House Office of Science and Technology Policy released an “AI bill of rights” outlining how to implement AI tools safely. In January, the National Institute of Standards and Technology released a framework for AI risk management, and in late April, officials from the Federal Trade Commission, Department of Justice, Equal Employment Opportunity Commission, and other federal agencies put out a joint statement “outlining a commitment to enforce their respective laws and regulations to promote responsible innovation in automated systems.”

Meanwhile, federal agencies including the Justice Department, Consumer Financial Protection Bureau and the FTC are swiftly staffing up with technologists skilled in understanding AI products and systems, multiple sources told The Messenger. 

But action by the Biden administration or any White House can only go so far – the executive branch’s ability to create new rules or programs is limited without intervention from Congress. And executive actions can often quickly be overturned when a new president takes the White House.

“We’re seeing an uptick in signaling across the U.S. government that’s supportive of a wide-ranging AI accountability policy,” said Sarah Myers West, managing director of the AI Now Institute. “I think what’s important is we can’t afford to confuse the right noises with enforceable regulation. Especially when the industry definitely knows the difference between the two.”

Both the Trump and Biden administrations have struggled to enforce executive orders related to AI, said Ben Winters, senior counsel with the Electronic Privacy Information Center’s AI and Human Rights Project. They include a 2020 order that directed federal agencies to report in detail how they are using AI in their work.

Can Congress walk the walk?

Congress has struggled to regulate technology in the internet era – a combination of the technology’s evolution outpacing the speed of the legislative process and Big Tech’s lobbying blitz.

There’s a deep philosophical split between Republicans and Democrats on AI regulation. Democrats have emphasized AI’s potential to create or deepen inequities in American life. They want Congress to develop rules to make the technology’s applications fairer and more equitable. Republicans have focused on the prospect of losing the global AI-development race to China, and they want to make sure any regulations don’t chill the industry’s rapid-fire development. 

Many of the policy conversations now happening in Congress and among regulators are still primarily about what characteristics and requirements AI systems should have, according to Mina Narayanan, a research analyst at the Center for Security and Emerging Technology at Georgetown University. 

“These often focus on principles like robustness, transparency, explainability, interpretability, and so on,” said Narayanan. Such discussions are valuable, she added, but translating principles into assessing and regulating AI systems can be complex.

Daniel Castro, director of the think tank Center for Data Innovation and vice president at the industry-funded Information Technology and Innovation Foundation, said he thinks too much attention is being paid to the negative aspects of AI. He argues that Congress should take a more hands-off approach to the sector, in line with the United Kingdom and India, which have said they will not regulate AI. 

“The reason to be hands off is because we don’t know exactly what it is we’re dealing with yet,” said Castro. “We don’t know where the challenges are. We don’t know what will be handled by self-regulation, by the market, or just norms that will exist about what’s acceptable behavior in society.”

Correction: A previous version of this story misspelled Mina Narayanan's name.
