In 2024, artificial intelligence tools became deeply integrated into daily life, yet efforts to enact AI regulations in the United States lagged significantly. Numerous AI-related bills were proposed in Congress—some to fund research initiatives, others to address potential harms—but most stalled amid partisan disagreements or were overshadowed by other legislative priorities. A California bill intended to hold AI companies liable for resulting damages, for instance, passed the state legislature only to be vetoed by Governor Gavin Newsom.
This legislative inertia has raised alarms among AI critics. “We are witnessing a repeat of what occurred with privacy and social media: failing to establish protective measures early on, which is crucial for safeguarding individuals while fostering genuine innovation,” says Ben Winters, the director of AI and data privacy at the Consumer Federation of America, in an interview with TIME.
Conversely, tech-industry proponents have effectively convinced many lawmakers that excessive regulation could hinder economic growth. Consequently, instead of pursuing a comprehensive AI regulatory framework akin to the E.U.'s AI Act, the U.S. may focus on achieving consensus in specific, isolated areas of concern.
As we move into the new year, several critical AI issues are anticipated to be on Congress’s agenda for 2025.
Addressing Specific AI Dangers
One of the pressing concerns Congress may tackle first is the rise of non-consensual deepfake pornography. In 2024, new AI tools made it alarmingly easy to create and disseminate degrading, sexualized images of vulnerable individuals, particularly young women. These images spread quickly online and, in some instances, were used for extortion.
Most political leaders, parent advocacy groups, and civil society organizations agree on the need to act against these exploitative images, yet bills have repeatedly stalled at various stages of the legislative process. Recently, the Take It Down Act, co-sponsored by Texas Republican Ted Cruz and Minnesota Democrat Amy Klobuchar, was integrated into a House funding bill following significant media attention and lobbying efforts. The legislation would criminalize the creation of deepfake pornography and mandate that social media platforms remove such content within 48 hours of receiving a takedown notice.
Despite this progress, the funding bill ultimately fell apart due to strong opposition from some Trump allies, including Elon Musk. Nevertheless, the inclusion of the Take It Down Act in the bill indicates that it received approval from key leaders in both the House and Senate, according to Sunny Gandhi, the vice president of political affairs at Encode, an organization focused on AI advocacy. Gandhi also mentioned that the Defiance Act, which would empower victims to pursue civil action against deepfake creators, could be another legislative priority in the coming year.
Read More: Time 100 AI: Francesca Mani
Advocates will also push for legislative measures addressing other AI-related concerns, such as consumer data protection and the risks posed by companion chatbots that may encourage self-harm. In one tragic case earlier this year, a 14-year-old took his own life after interacting with a chatbot that urged him to "come home." The difficulty of passing even a bill as seemingly uncontroversial as one targeting deepfake pornography foreshadows a tough road ahead for broader legislative measures.
Boosting Funding for AI Research
Simultaneously, numerous lawmakers aim to enhance support for the advancement of AI technology. Industry advocates are framing AI development as an essential race, suggesting that the U.S. could fall behind other nations if it fails to invest adequately in this domain. On December 17, the Bipartisan House AI Task Force released a detailed 253-page report emphasizing the importance of fostering “responsible innovation.” The task force’s co-chairs, Jay Obernolte and Ted Lieu, stated, “AI has the potential to significantly enhance productivity, enabling us to achieve our goals more rapidly and economically, from optimizing manufacturing to developing treatments for serious illnesses.”
In this context, Congress is likely to pursue increased funding for AI research and infrastructure. One notable bill that garnered interest but ultimately failed to pass was the Create AI Act, which sought to establish a national AI research resource accessible to academics, researchers, and startups. “The goal is to democratize who can participate in this innovation,” said Senator Martin Heinrich, a Democrat from New Mexico and the bill’s primary sponsor, in a July interview with TIME. “We cannot afford to have this development concentrated in only a few regions of the country.”
More controversially, Congress may also explore funding for the integration of AI technologies into U.S. military and defense systems. Allies of Trump, including David Sacks, a Silicon Valley venture capitalist designated by Trump as his “White House A.I. & Crypto Czar,” have expressed an interest in utilizing AI for military applications. Defense contractors have indicated to Reuters that Elon Musk’s Department of Government Efficiency is likely to pursue collaborative projects between contractors and AI technology firms. In December, OpenAI announced a partnership with defense technology company Anduril aimed at utilizing AI to counter drone threats.
This past summer, Congress allocated $983 million to the Defense Innovation Unit, which is focused on incorporating new technologies into the Pentagon’s operations—a significant increase from previous years. The next Congress might designate even larger funding packages for similar initiatives. “Historically, the Pentagon has been a challenging environment for new entrants, but we are now witnessing smaller defense companies successfully competing for contracts,” explains Tony Samp, the head of AI policy at DLA Piper. “There’s now a push from Congress for disruption and a faster pace of change.”
Senator Thune Takes Center Stage
Republican Senator John Thune from South Dakota is poised to play a pivotal role in shaping AI legislation in 2025, especially as he is set to become the Senate Majority Leader in January. In 2023, Thune collaborated with Klobuchar to introduce a bill aimed at enhancing transparency in AI systems. While he has criticized Europe’s “heavy-handed” regulations, Thune has also advocated for a tiered approach to regulation that addresses AI applications in high-risk domains.
“I’m optimistic about the potential for positive outcomes given that the Senate Majority Leader is among the leading Senate Republicans engaged in tech policy discussions,” Winters notes. “This could pave the way for more legislative efforts addressing issues like children’s privacy and data protection.”
Trump’s Influence on AI Policy
As Congress navigates AI legislation in the coming year, it will inevitably take cues from President Trump. His stance on AI technology remains somewhat ambiguous, as he will likely be influenced by a diverse array of Silicon Valley advisors, each with varying perspectives on AI. For example, Marc Andreessen advocates for rapid AI development, while Musk has raised concerns about the potential existential risks of AI.
While some anticipate a primarily deregulation-focused approach from Trump, Alexandra Givens, CEO of the Center for Democracy & Technology, notes that Trump was the first president to issue an executive order on AI in 2020, which highlighted the technology’s implications for individuals’ rights, privacy, and civil liberties. “We hope he continues to frame the discourse in this way and that AI does not become a divisive issue along party lines,” she adds.
Read More: What Donald Trump’s Win Means For AI
State Initiatives May Outpace Congressional Action
Given the usual challenges of passing legislation in Congress, state legislatures might take the lead in developing their own AI regulations. States with more progressive leanings could address aspects of AI risk that a Republican-controlled Congress may shy away from, including racial and gender biases in AI systems or their environmental impacts. Colorado, for example, recently enacted a law regulating AI use in high-stakes decisions, such as screening applicants for jobs, loans, and housing. "This approach tackled high-risk applications while remaining relatively unobtrusive," Givens explains. In Texas, a lawmaker has introduced a similar bill, set to be considered in the upcoming legislative session. Meanwhile, New York is contemplating a bill that would limit the construction of new data centers and mandate reporting on their energy usage.