Insights

AI Pulse: Monthly Roundup

Feb 10, 2025

(Did you miss “AI Decrypted 2025,” our seminal look ahead at AI trends in 2025? Check it out here!) 

Although President Trump was only in Washington, D.C. for the last 10 days of the month, the administration's initial actions were swift and broad in scope, covering several areas touching on artificial intelligence (AI). The promised storm of executive orders (EOs) on day 1 of the Trump administration had an adjacent focus on technology – several EOs addressing energy resources and deregulation carried direct implications for the AI industry. The Trump team wasted no time in revoking the principal Biden-era AI EO from November 2023, although the National Security Memorandum (NSM) on AI from late 2024 was left standing.

On January 23, the White House issued an EO titled "Removing Barriers to American Leadership in Artificial Intelligence." The brief, high-level order suggests that a much more comprehensive policy will be released in the coming weeks. Instead of cancelling Biden-initiated AI initiatives outright, the new administration will conduct a thorough review under a significantly expanded high-level team overseeing AI policy – an acknowledgement that there may be value in continuing select key initiatives. The EO also mandates the development of an AI Action Plan within 180 days. During his January 30 hearing, Commerce Secretary nominee Howard Lutnick expressed support for U.S.-led AI standards to advance U.S. leadership, signaling a positive future for the U.S. AI Safety Institute, though its mandate and role will almost certainly undergo changes.

Other socially charged actions by the administration may have implications for AI. The "Restoring Freedom of Speech and Ending Federal Censorship" EO aims to limit communication between the federal government and social media companies, effectively relitigating Murthy v. Missouri. While primarily targeting social media platforms, the EO's broader opposition to government involvement in identifying mis- and disinformation could affect AI system guardrails and voluntary government coordination with private AI developers. The January 23 AI EO echoes this sentiment, emphasizing the need to "develop AI systems that are free from ideological bias or engineered social agendas." The fact sheet accompanying the EO establishing Trump's President's Council of Advisors on Science and Technology (PCAST) warned that "the pursuit of scientific truth is under threat from ideological agendas … These threats have not only distorted truth, but have eroded public trust, undermined the integrity of research, stifled innovation, and weakened America's competitive edge."

In this new Washington, the role of technology companies in political debates surrounding AI regulation and the influence of big technology platforms will be critically important to monitor. Proprietary AI leader OpenAI has transitioned from close collaboration with the Biden administration to adeptly refining its messaging for the Trump administration, securing high-level access despite CEO Sam Altman's contentious relationship with Tesla and xAI CEO Elon Musk. OpenAI issued a new "AI in America" blueprint, framed in the language of the new administration to stress reindustrialization and individual freedoms. The blueprint identifies "chips, data, energy and talent [as] the keys to winning on AI," asserting that "infrastructure is destiny" and urging supportive government policies across all four inputs. Notably, the blueprint also seeks to head off liability concerns, stating that "users should be responsible for impacts of how they work and create with AI." With early agentic models emerging, the potential for real-world risks will become an increasingly pressing issue for regulators to keep ahead of. Indeed, the opening hours of January offered a preview of AI misuse: after a man exploded a Cybertruck outside the Trump Hotel in Las Vegas on January 1, police reported that he had used ChatGPT to plan the attack. OpenAI released its agentic Operator tool on January 23; although its capabilities are currently limited, it teases a coming era of increasingly sophisticated AI agents (for more on AI agents, see section 1 of "AI Decrypted 2025").

However, the relationship between the tech industry and the Trump administration remains complex. During a media appearance on January 26, Vice President Vance warned that “[big tech] can either respect Americans’ constitutional rights, they can stop engaging in censorship, and if they don’t, you can be absolutely sure that Donald Trump’s leadership is not going to look too kindly on them.”  

In the second week of the Trump administration, a federal funding freeze was introduced. The freeze was quickly challenged in court and subsequently retracted; it is expected that an amended version will be issued in the coming weeks. As written, the funding freeze would impact CHIPS Act and National AI Research Resource (NAIRR) funding, amongst many other federal grants. Research on AI bias and sustainability could be at risk, as such references may be interpreted as alluding to diversity and clean energy. 

Amidst a busy schedule of confirmation hearings and votes for Trump's nominees, Congress reintroduced several AI-related bills from the prior session. Senators Ted Cruz and Amy Klobuchar reintroduced the "Take It Down" Act, focused on criminalizing AI-generated non-consensual intimate imagery. Representative Jay Obernolte (R-CA), previously chair of the bipartisan House Task Force on AI, announced plans to reintroduce the CREATE AI Act, which would codify the NAIRR into law to expand access to advanced AI models for researchers and universities.

The past month also saw major developments in the open-source/weight model community with the release of the R1 reasoning model by Chinese AI startup DeepSeek. Venture capitalist Marc Andreessen described the release as AI's "Sputnik moment." The major market reaction centered on the potential for a lower-cost model to upend anticipated infrastructure buildouts, such as Stargate, and pushed DeepSeek's apparent technical accomplishments to the fore of the debate around U.S.-China AI competition. The release also forced a rethink of some major assumptions around AI: 1) the durability of U.S. technological AI leadership; 2) the necessity of American tech firms' massive capital spend, given the success of DeepSeek's low-cost, low-compute approach; and 3) the utility of U.S. export controls targeting advanced GPUs, as DeepSeek's approach appears to have delivered competitive performance without access to the latest GPUs or the large clusters favored by U.S. model developers. It also prompted new discussion about the progress open-source/weight models have made relative to proprietary models from market leaders such as OpenAI and Anthropic.

American frontier AI leaders remain undeterred, reasserting that massive compute and infrastructure spend will be necessary to win the AI race, while acknowledging the importance of learning from DeepSeek's innovations. On January 21, President Trump joined OpenAI, Oracle, and SoftBank in announcing the Stargate partnership, a $500 billion AI data center undertaking. For the Trump administration, the move signaled a commitment to bolster the country's competitive edge in AI, with a presumed government role in reducing regulatory oversight and environmental restrictions around land use and energy. Meta subsequently announced on January 24 that it plans to spend between $60 billion and $65 billion on capital infrastructure in 2025, focusing on AI and data centers – roughly a 50% increase over the prior year. Microsoft also pledged to invest $80 billion in FY2025 to build out AI-enabled data centers, with roughly half based in the U.S. Most of this capital expenditure will target inference rather than large-model training, such that demand for expanded capacity would not be significantly impacted by reduced training costs.

Recent dealmaking and investment in the energy industry reflect the anticipated power demands of these new data centers. On January 10, nuclear power plant operator Constellation announced a $26.6 billion acquisition of natural gas electricity producer Calpine, framed as a way to ensure grid reliability amidst a forecasted increase in energy demand. Oil and gas giant Chevron, following the lead of competitor Exxon Mobil, announced plans to build natural gas power plants connected directly to AI data centers. On January 27, Spanish oil company Repsol also announced plans to invest $4.2 billion in data centers in northern Spain.

On its way out the door, the Biden administration issued the AI Diffusion rule, sparking major backlash from key industry players. The hotly contested rule, issued unilaterally and largely without industry input, carries broad implications across the AI stack and around the globe, and aims to position the U.S. as the AI decider. By enabling bureaucrats in Washington to choose which countries can access advanced AI hardware, the rule effectively seeks to shape the global AI landscape and determine which nations can harness AI for economic advancement.

The Trump administration's regulatory freeze issued on January 20 applies to the AI Diffusion rule, which remains in its 120-day comment period. Other AI-relevant rules likely to be reviewed include the foundry due diligence rule and potentially the December 2 export control package. Interagency proponents, particularly the intelligence community and the Department of Defense, view the rule as essential to maintaining U.S. leadership in AI, in conjunction with infrastructure efforts such as Stargate. They will almost certainly resist efforts to fundamentally shift the rule's scope.

It is too early to say how the new administration will approach export control policy concerning advanced semiconductors and advanced manufacturing equipment. On January 27, at a House Republican retreat, President Trump teased plans "in the very near future" to impose tariffs on imported semiconductors in an effort to force industry to relocate production to the U.S. The same week, Reuters reported that unnamed Trump officials were in early-stage discussions about potentially expanding restrictions to cover Nvidia's H20 GPUs, which were redesigned to comply with the October 2023 export control package that updated GPU performance metrics. During his confirmation hearing, Commerce Secretary nominee Lutnick expressed support for strengthening restrictions on China's access to advanced computing power.

The issue remains central to the U.S.-China trade relationship. On January 16, China's Ministry of Commerce announced an investigation into alleged anticompetitive behavior by the U.S. related to mature node semiconductors, an inquiry that also examines the CHIPS Act. The move is seen as tit-for-tat, responding to the USTR Section 301 probe of Chinese support for legacy, or mature node, semiconductor production initiated by the Biden administration in late December.

European leaders are looking across the Atlantic in trepidation. France has urged a "massive regulatory pause" in Brussels to send a positive signal to businesses in the context of "exacerbated international competition," a veiled reference to Trump's economic protectionism. Similar sentiment underlies the 'EuroStack' undertaking, in which a loosely affiliated group of academics, entrepreneurs, and policymakers released a report on January 14 articulating a vision for a Europe-led "digital supply chain" to break the bloc's digital dependence. The European Commission's January 29 "Competitiveness Compass" report outlines a "massive simplification effort" aimed at bolstering the bloc's tech sovereignty and competitiveness in such fields as AI, semiconductors, biotech, robotics, and quantum. Developed in response to the Draghi report's diagnosis of lagging productivity, the report aims to send a new, business-friendly message to corporations and entrepreneurs but falls short of fully grappling with the tradeoffs and financial outlays required to address identified structural shortcomings.

The UK, meanwhile, spent January vigorously positioning itself to secure maximum AI investment and foster domestic model development, an ambition spotlighted by the issuance of its techno-optimistic "AI Action Plan" on January 13. UK Prime Minister Starmer asserted that "the AI industry needs a government that is on their side … We must move fast and take action to win the global race … Our plan will make Britain the world leader." Following this pro-AI, pro-business messaging, Whitehall announced on January 22 that Doug Gurr, a former Amazon executive, had been appointed interim Chair of the Competition and Markets Authority (CMA), Britain's antitrust body. Combined with a changing of the guard in Washington and growing concerns about economic competitiveness in Brussels, antitrust scrutiny of tech shows some signs of relenting (see our analysis of how the Trump administration might approach antitrust law toward the technology industry here).

Finally, AI remained – unsurprisingly – a central theme among the global corporate and investment leaders gathered at the Davos World Economic Forum, with the tenor of conversation ranging from trepidation about unmanaged AI risks to boundless enthusiasm for anticipated economic gains. Eyes now turn to Paris, where governments and industry leaders will gather for the AI Action Summit on February 10 and 11 to advance the Bletchley Park process goals for AI safety and global governance.


About DGA Group

DGA Group is a global advisory firm that helps clients protect – and grow – what they have built in today’s complex business environment. We understand the challenges and opportunities in an increasingly regulated and interconnected world. Leveraging the expertise and experience of our team at Albright Stonebridge Group, a leader in global strategy and commercial diplomacy, and a deep bench of communications, public affairs, government relations and business intelligence consultants, we help clients navigate and shape global policy, reputational and financial issues. To learn more, visit dgagroup.com.

For additional information or to arrange a follow-up, please contact Paul.Triolo@dgagroup.com and Jessica.Kuntz@dgagroup.com.