Implications of Donald Trump’s Victory for Artificial Intelligence

When Donald Trump was last in office, the world had yet to witness the launch of ChatGPT. Now, as he prepares to return to the White House after defeating Vice President Kamala Harris in the 2024 election, the landscape of artificial intelligence has transformed significantly.

The rapid evolution of AI technologies has led industry leaders, including Anthropic CEO Dario Amodei and Tesla CEO Elon Musk, a notable Trump supporter, to speculate that AI could surpass human intelligence as early as 2026. Other experts, like OpenAI CEO Sam Altman, suggest a broader timeline, indicating in a September essay that “we might achieve superintelligence in a few thousand days,” while acknowledging it could take longer. Meta CEO Mark Zuckerberg, on the other hand, envisions a more gradual emergence of advanced AI systems rather than a sudden breakthrough.

Regardless of the timeline, these advancements could have significant implications for national security, the economy, and the global balance of power.

Trump’s views on AI have oscillated between fascination and concern. In a June interview on Logan Paul’s Impaulsive podcast, he referred to AI as a “superpower” while expressing alarm over its capabilities. Like many in Washington, he perceives AI through the lens of competition with China, which he identifies as the “primary threat” in the race to develop cutting-edge AI technology.

However, even among his closest allies, opinions on how to regulate AI vary. Musk has consistently raised alarms about the existential risks posed by AI, while Vice President-elect J.D. Vance argues that such industry warnings are merely tactics to usher in regulations that would benefit established tech firms. These differing views within Trump’s inner circle highlight the conflicting pressures that will shape AI policy during his second term.

Reversing Biden’s AI Policies

One of Trump’s first significant actions regarding AI policy will likely be to revoke President Joe Biden’s Executive Order on AI. This extensive order, enacted in October 2023, aimed to address potential threats AI poses to civil rights, privacy, and national security, while simultaneously fostering innovation, competition, and the utilization of AI for public services.

Trump committed to repealing the Executive Order on the campaign trail in December 2023, a stance the Republican Party platform reiterated in July, criticizing the order for stifling innovation and imposing what it called “radical leftwing ideas” on the technology’s development.

Sections of the Executive Order addressing racial discrimination and inequality are “not really Trump’s style,” says Dan Hendrycks, executive and research director of the Center for AI Safety. While some experts worry about any rollback of bias protections, Hendrycks believes the Trump Administration may retain other bipartisan elements of Biden’s approach. “There are aspects in [the Executive Order] that have broad support, alongside some provisions that lean more toward Democratic ideals,” he says.

“It wouldn’t surprise me if a Trump executive order on AI retained or even expanded upon some of the core national security measures from the Biden Executive Order, particularly those related to evaluating cybersecurity, biological, and radiological risks tied to AI,” observes Samuel Hammond, a senior economist at the Foundation for American Innovation, a technology-focused think tank.

The future of the U.S. AI Safety Institute (AISI), established last November by the Biden Administration to spearhead government efforts on AI safety, remains uncertain. In August, the AISI entered into partnerships with OpenAI and Anthropic to collaborate on AI safety research and the evaluation of new models. “It’s likely that the AI Safety Institute will be viewed as a barrier to innovation, which may not align with the broader objectives of Trump’s tech and AI agenda,” states Keegan McBride, a lecturer in AI, government, and policy at the Oxford Internet Institute. However, Hammond notes that while some fringe voices may advocate for dismantling the institute, “most Republicans support the AISI. They view it as an extension of our leadership in the AI space.”

Congress is already taking steps to protect the AISI. In October, a broad coalition of companies, universities, and civil society organizations—including OpenAI, Lockheed Martin, Carnegie Mellon University, and the nonprofit Encode Justice—sent a letter urging key congressional leaders to establish a legislative framework for the AISI. Efforts are underway in both the Senate and the House, with reports indicating significant bipartisan support, according to Hamza Chaudhry, U.S. policy specialist at the Future of Life Institute.

America-First AI and Competing with China

Trump’s previous remarks indicate that ensuring the U.S. remains a leader in AI development will be a central priority for his Administration. “We must lead in this field,” he asserted during the Impaulsive podcast in June. “Taking the lead over China is crucial.” Trump has also pointed to environmental regulations as potential hindrances, arguing they could “hold us back” in the competition against China.

Trump’s AI strategy may involve rolling back regulations to expedite infrastructure development, according to Dean Ball, a research fellow at George Mason University. “Data centers will need to be constructed, and the energy demands to power those centers will be substantial. Chip production is even more critical,” he explains. “We will require a significantly increased supply of chips.” Although Trump’s campaign has occasionally criticized the CHIPS Act, which incentivizes domestic chip manufacturing, analysts suggest he is unlikely to repeal it.

Export restrictions on chips are expected to remain a crucial aspect of U.S. AI policy. Building on measures initiated during his first term and later expanded by Biden, Trump may strengthen controls to limit China’s access to advanced semiconductors. “The Biden Administration has taken a tough stance on China, but Trump likely wants to be perceived as even tougher,” McBride remarks. It is “quite probable” that Trump’s administration will “intensify” export controls to seal gaps that have allowed China access to chips, asserts Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. “Most experts across the aisle view export controls as essential,” he adds.

The emergence of open-source AI poses new challenges. Reports indicate that Chinese researchers have adapted earlier versions of Meta’s Llama model for military purposes, demonstrating China’s ability to leverage U.S. systems. This has created a policy divide. “Some GOP members advocate for open-source,” Ball explains, “while others, particularly the ‘China hawks,’ are eager to restrict open-source at the forefront of AI development.”

“Given the strong emphasis on open-source within Trump’s platform, I would be surprised to see a movement towards restrictions,” Singer comments.

Despite his strong rhetoric, Trump’s inclination for deal-making could influence his approach towards China. “Many misunderstand Trump as being solely anti-China; he doesn’t harbor hatred for the country,” Hammond explains, describing Trump’s “transactional” perspective on global relations. In 2018, Trump relaxed restrictions on Chinese technology company ZTE in exchange for a hefty fine and increased oversight. Singer sees similar opportunities for negotiations regarding AI, especially if Trump acknowledges concerns about the extreme risks AI could pose, such as the potential for humanity to lose control over advanced systems.

Divisions Within Trump’s Coalition on AI Policy

Discussions around AI governance reveal significant rifts within Trump’s supporter base. Key figures, such as Vance, advocate for fewer regulations on technology. Vance has dismissed concerns about AI risks as a strategy by the industry to push for regulations that would “stifle competition and innovation essential for America’s future growth.”

Peter Thiel, a Silicon Valley billionaire and former member of Trump’s 2016 transition team, recently warned against regulatory efforts concerning AI. Addressing an audience at the Cambridge Union in May, he argued that any governing body with authority over AI would likely have a “global totalitarian character.” Marc Andreessen, co-founder of the influential venture capital firm Andreessen Horowitz and another vocal critic of AI regulation, has contributed $2.5 million to a pro-Trump super PAC and a further $844,600 to Trump’s campaign and the Republican Party.

Conversely, a safety-oriented perspective has found traction among other Trump allies. Hammond, who contributed to the AI policy committee for Project 2025—a proposed policy agenda spearheaded by the right-leaning Heritage Foundation, not officially endorsed by the Trump campaign—notes that there was a “distinct emphasis on artificial general intelligence and the catastrophic risks associated with AI” among the advisors involved.

Musk, a notable ally of the Trump campaign due to his financial support and promotion of Trump on X (formerly Twitter), has long expressed concern regarding the existential threats posed by AI. He recently assessed that there is a 10% to 20% chance AI could “go bad.” In August, Musk supported the now-vetoed California AI safety bill, which aimed to impose regulations on AI developers. Hendrycks, who co-sponsored the California bill and serves as a safety adviser at Musk’s AI company, xAI, states, “If Elon is making recommendations regarding AI, I anticipate positive outcomes.” Nevertheless, he acknowledges that “there are fundamental appointments and groundwork to establish, which complicates predictions.”

Trump has recognized some of the national security risks associated with AI. In June, he expressed concern that deepfakes of a U.S. President threatening nuclear action could provoke a dangerous response from another nation. He has also acknowledged the possibility of an AI system going “rogue” and overpowering humanity, though he was careful to present this as others’ concern rather than his own. Even so, competition with China appears to remain Trump’s primary focus.

Yet, these priorities need not conflict, as AI safety regulations do not inherently mean ceding ground to China, Hendrycks argues. He emphasizes that safeguards against misuse can require minimal investment from developers. “Simply hiring one person to dedicate a couple of months towards engineering can establish effective safeguards,” he explains. However, with various factions influencing Trump’s AI strategy, the direction of his AI policy remains ambiguous.

“In terms of which perspective President Trump and his team will lean towards, that is still an open question, and we will have to wait and see,” Chaudhry comments. “This is a critical juncture.”