Insights

Technology Policy and Trump 2.0: Where Are We Headed?

Dec 12, 2024

Key takeaways 

  • The 2024 Republican Party platform called for the repeal of the November 2023 executive order (EO) on AI; Republican think tanks, particularly the America First Policy Institute (AFPI), have reportedly been developing a replacement. The new administration is likely to issue such an order early in 2025 that puts the Trump stamp on AI policy.  
  • Several key administration figures with ties to Silicon Valley, including Vice President-elect JD Vance, have strong views on issues such as AI governance and government’s role in the broader AI ecosystem. In addition, Tesla CEO Elon Musk is likely to weigh in on the new administration’s AI policy.  
  • Early decision points for the Trump administration include determining how to engage with the February AI Action Summit in Paris — Vice President Kamala Harris led the U.S. delegation to the initial Bletchley Park AI Safety Summit in November 2023 — and how much support to provide for the budding U.S. AI Safety Institute (USAISI), which originated from the Biden EO.  

National security implications of AI front and center 

During the Biden administration, national security concerns drove the creation of a largely voluntary AI governance framework. This approach was embodied in the November 2023 executive order, which centered on concerns that advanced models could be used to supercharge adversarial cyber threats and enable malicious actors to develop biological and even nuclear weapons. Hyperscalers were tasked with conducting due diligence on customers and reporting to the Commerce Department on companies using cloud services to train advanced models. Biden's AI policy has also been driven by concerns about China; we expect the goals of ensuring U.S. leadership in AI development and slowing Chinese development of advanced models to continue under the new Trump administration.

In June 2024, Rep. Mike Waltz (R-FL) asserted, "I would take an unregulated or less-than-ideally-regulated Western-developed AI rather than a Chinese Communist Party techno-dictatorship-developed AI that has the potential to dominate both militarily and economically – if those are our two bad choices." Five months on, Waltz has been named Trump's National Security Advisor, positioning him to advance a national security-focused AI policy. With China critics occupying key national security positions in the incoming Trump administration, the administration's approach to AI will likely be driven by a desire to ensure U.S. dominance. This approach will have both offensive and defensive components: the offensive component is likely to manifest as light-touch regulation rather than direct government funding of R&D, while the defensive component will consist of increased pressure on allies to comply with U.S. export and investment controls.

The Trump approach: Deregulate and selectively support 

Under a second Trump term, industry players can expect the federal government to bolster AI innovation and development through deregulation. So far, there is scant regulation to roll back in AI development itself; deregulation efforts will likely focus instead on the energy grid, simplifying processes and reducing reviews involved in bringing new power online. These changes are almost certain to attract challenges from environmental groups, including criticism of technology companies whose growing energy usage undermines decarbonization goals. This agenda is visible in Trump's appointment of former congressman Lee Zeldin as EPA Administrator (on X, Zeldin stated his intent to "make the U.S. the global leader of AI") and of Liberty Energy founder and CEO Chris Wright as Energy Secretary. North Dakota Gov. Doug Burgum, Trump's pick to head the Interior Department and a newly created White House National Energy Council, has echoed this theme; Trump has said the Council will "oversee the path to U.S. energy dominance by cutting red tape, enhancing private sector investments…and by focusing on innovation over longstanding…regulation."

AFPI has reportedly drafted a replacement EO that focuses on boosting the use of AI in the military domain and calls for a review of "unnecessary and burdensome regulations" on the technology. Absent Congressional action, the Biden administration's approach to AI has thus far relied on industry collaboration and voluntary initiatives rather than comprehensive national legislation. The draft EO would reportedly devolve model testing to "industry-led" agencies. Although the details of how this might work remain unclear, all indicators point to a low-to-no-regulation approach to AI development and deployment, with many Silicon Valley venture capitalists and AI players playing the China card, arguing that domestic overregulation of AI would benefit Chinese firms.

Of particular interest is the fate of the USAISI, which, though created by the Biden AI executive order, secured bipartisan support for initial funding. The Institute is housed within the National Institute of Standards and Technology (NIST); a senior Biden Commerce official expressed cautious optimism that the technical, non-regulatory nature of NIST's work might allow for continuity between administrations. Industry, meanwhile, has voiced support for USAISI, both through OpenAI's and Anthropic's agreements with the Institute to conduct pre-release model research and testing and through a letter to Congressional leadership, signed by the likes of Intel, Meta, OpenAI, and Microsoft, urging Congress to codify funding for USAISI. Even absent White House leadership on the issue, industry might press Congress to lock in USAISI's mandate. The Institute has ramped up recruiting, collaborated closely with its UK counterpart, and in November launched the International Network of AI Safety Institutes. The outcomes of the Network's inaugural meeting are intended to inform the Paris AI Action Summit in February, which takes place only three weeks after the new administration takes office. As such, the new administration will face decisions about the future of the USAISI soon after taking office. Given fairly strong industry support for the AISI approach so far, the new administration is likely to reform the process and put its own stamp on it rather than push for a rollback.

Advanced computing beyond the headlines 

The development and diffusion of AI have received the lion's share of media and policy attention over the past two years, but national compute and quantum computing are also part and parcel of advanced computing, and both carry national security implications. Federal support for quantum research is likely to hold steady. There is little indication that national compute will be an administration priority, although there is a chance that the National Artificial Intelligence Research Resource (NAIRR) could endure.

Except insofar as it is framed in terms of competition with China, quantum information science-related research will likely be a lesser priority for the second Trump administration, albeit one it implicitly supports as a legislative matter. Under the first Trump administration, the Executive Branch was supportive of Congressional efforts to advance quantum research. The National Quantum Initiative Act of 2018 created the National Quantum Coordination Office (NQCO) under the White House Office of Science and Technology Policy (OSTP), an office that emphasizes public-private cooperation in driving quantum innovation. The October 2020 "Quantum Frontiers" report from the NQCO asserted that "under the Trump Administration, the United States has made American leadership in quantum information science (QIS) a critical priority for ensuring our Nation's long-term economic prosperity and national security."

NAIRR: Federal support for AI infrastructure

The concept for the NAIRR originated with the National Artificial Intelligence Research Resource Task Force, which, as directed by the National AI Initiative Act of 2020, published its final report in January 2023. The report describes the NAIRR as "a widely-accessible, national cyberinfrastructure that will advance and accelerate the U.S. AI R&D environment and fuel AI discovery and innovation in the United States by empowering a diverse set of users across a range of fields through access to computational, data, and training resources." A pilot version of the NAIRR was subsequently stood up within the National Science Foundation (NSF) in accordance with the Biden AI executive order.

Although the pilot stems from the presumably soon-defunct EO, there is bipartisan support in Congress for institutionalizing the NAIRR. In September 2024, the CREATE AI Act, aimed at institutionalizing the NAIRR and creating public AI infrastructure, advanced out of the House Science, Space, and Technology Committee with bipartisan support. Although the scale of any government-led effort is likely to be limited, there may be appetite in Congress to facilitate researchers' access to such resources through the NAIRR's channels.

There is little to suggest that the Trump administration would adopt a particularly active role in supporting or leading a national compute initiative. The preferences of likely Trump appointees and outside influence from industry suggest that the administration would look to the private sector to lead on developing advanced compute capacity, with potential coordinating support from government on select issues, such as enhancing choices for energy sources.

Decrypting the White House visitors' log: the voices that matter on AI 

With national security as the core lens through which the Trump administration is likely to view AI, Commerce Secretary nominee Howard Lutnick, the yet-to-be-named Under Secretary of Commerce for Industry and Security, who leads the Bureau of Industry and Security (BIS), China-focused NSC roles, U.S. Trade Representative nominee Jamieson Greer, and perhaps select roles within the State Department responsible for coordinating with international partners will be the central drivers of Trump's AI policies. In addition to NSA Waltz, former U.S. Trade Representative Robert Lighthizer, named as Trump's "trade czar," favors economic decoupling from China, as does Sen. Marco Rubio, the nominee for Secretary of State.

On December 5, Trump named Silicon Valley veteran David Sacks as his "AI & cryptocurrency czar," a role that also includes overseeing the President's Council of Advisors on Science and Technology. Sacks will reportedly serve as a "special government employee," a designation that exempts him from confirmation hearings and financial disclosure requirements but limits him to serving a maximum of 130 days per year. Sacks is a close confidant of Musk and Peter Thiel and is expected to take the view that Silicon Valley is best served by a non-interventionist approach from Washington. His emphasis will be on growth and innovation, with safety concerns potentially downplayed as far off or overblown; Musk, for example, has been outspoken about existential risks but less vocal on the need for regulation to govern near-term harms. However, the lack of explicit authority vested in his advisory role means that Sacks' impact will depend on his ability to leverage relationships with individuals who do hold regulatory authority. JD Vance would be one such person. Jacob Helberg, a senior advisor to Palantir, is another individual in Trump's orbit who might yet land in a senior White House role with responsibility for technology policy. Helberg has indicated that his Hill and Valley Forum, which drove the TikTok divestiture bill in Congress, will tackle national AI competitiveness next.

Critical to the evolution of U.S. AI policy in the Trump era will be the relationship between the White House and leading technology platforms, particularly the leading AI players, who are increasingly comfortable working with the federal government on a range of issues facing the industry. OpenAI has been forward-leaning in lobbying the Biden administration to shape the larger AI ecosystem, adopting, for example, the rhetorical framing of AI as an existential choice between a democratic and an authoritarian global future. The week following the election, OpenAI rolled out its AI infrastructure blueprint, listing policy recommendations that would assign the U.S. government an aggressive role in supporting and building out AI infrastructure. Framed in terms that reflect Republican priorities for reshoring, the blueprint states that "AI presents an unmissable opportunity to reindustrialize the U.S. and through that, generate the kind of broad-based economic growth that will revitalize the American Dream. It also presents a national security imperative to protect our nation and our allies against a surging China."

OpenAI CEO Sam Altman refrained from endorsing a presidential candidate, and he faces a major rival in Musk, who has brought several lawsuits against OpenAI. It remains to be seen whether OpenAI, which has repeatedly demonstrated an ambitious agenda in shaping the global AI ecosystem, will gain leverage in the new White House and succeed in persuading the Oval Office to de-risk and finance continued AI development. Musk's influence remains a looming variable: he and President-elect Trump appear to have formed a solid partnership in recent months, and Musk is almost certain to weigh in on AI policy. In August, Musk broke from many of his industry peers in supporting California bill SB 1047. Musk could emerge as an advocate within the administration for some level of regulatory oversight, although he would almost certainly seek to shape AI regulation to advantage his own business interests.

AI beyond – and within – U.S. borders 

Should the Trump AI team pursue a hands-off approach to AI oversight, key states will probably redouble existing regulatory efforts. Although a few may follow the direction of California and Connecticut in seeking more comprehensive regulation, most state-level bills are expected to address specific harms, including non-consensual generative AI content, protections for minors, the privacy implications of AI, and civil rights protections. Such a state-by-state approach places a significant compliance burden on AI developers and deployers.

The Biden administration made multilateral engagement through the Bletchley Park process, the G7, and the OECD central to its AI safety and standards strategy, but a Trump administration is likely to deprioritize these multilateral initiatives. New personnel overseeing the issue may prefer a more unilateral approach to AI policy and are unlikely to sign on to any initiatives perceived as restricting U.S. industry. However, there has been widespread support within U.S. industry for the Bletchley Park multilateral process, as well as for continued U.S.-China engagement on AI, given enduring issues around interoperability and a desire to avoid global fragmentation of the AI sector. The Trump AI team will have to weigh conflicting inputs from industry and venture capitalists to develop an approach that juggles regulatory, geopolitical, national security, and industry concerns, which will in turn inform how the U.S. government engages at the global level.

In addition, we do not envisage that the Trump AI team will continue the nascent bilateral AI dialogue with China, given the views of many of Trump’s cabinet appointees on the threat that China poses. Biden’s agreement with President Xi at the November 2024 APEC Summit to restrict the use of AI for nuclear command and control may secure support from the new administration, but even this effort will likely be reviewed as the incoming administration seeks to put its stamp on all AI initiatives going forward.  

 

About DGA Group

DGA Group is a global advisory firm that helps clients protect – and grow – what they have built in today’s complex business environment. We understand the challenges and opportunities in an increasingly regulated and interconnected world. Leveraging the expertise and experience of our team at Albright Stonebridge Group, a leader in global strategy and commercial diplomacy, and a deep bench of communications, public affairs, government relations and business intelligence consultants, we help clients navigate and shape global policy, reputational and financial issues. To learn more, visit dgagroup.com.

For additional information or to arrange a follow-up, please contact Paul.Triolo@dgagroup.com and Jessica.Kuntz@dgagroup.com.