AI Pulse: Monthly Roundup

The Paris AI Action Summit — held 15 months after the original Bletchley Park gathering to accommodate the U.S. elections — revealed deepening economic nationalism among world powers. The Trump administration arrived with a tough stance on regulation, championing an “America First” approach to AI policy and issuing threats to allies and adversaries alike. Europe—once the self-proclaimed global leader in tech regulation—signaled a pivot toward more flexible and innovation-friendly policies. The UK, frustrated by the Summit’s emphasis on innovation and its downplaying of safety and security risks, declined to sign the summit declaration, a pointed signal of displeasure.
The Paris AI Action Summit: A Splintered Outcome
Hosted by French President Emmanuel Macron on February 10–11, the Paris AI Action Summit became a stage for high-profile AI power plays. Macron promoted a so-called “Notre Dame strategy,” championing accelerated AI-friendly reforms in Europe and emphasizing the advantage of France’s nuclear-powered grid over U.S. reliance on fossil fuels. France spotlighted its AI champion Mistral, announcing €100 billion in new investments and launching a “Made in Europe” initiative to counter U.S. and Chinese dominance.
Leading the U.S. delegation, Vice President JD Vance openly criticized European regulations—citing the General Data Protection Regulation (GDPR), the Digital Services Act, and new AI rules—for “handcuffing innovation,” while also attacking China’s “authoritarian approach” to AI. In an abrupt move that underscored the U.S. shift toward unilateralism, Vance departed before Macron and other European leaders spoke, signaling the Trump administration’s lack of interest in compromise.
Unlike the broad agreement at Bletchley Park, where major AI labs and several governments endorsed a nascent framework for AI safety, the Paris meeting ended with a communiqué that the U.S. and UK refused to sign. Whitehall’s objections centered on the omission of AI’s safety and security implications – a concern reflected in the subsequent renaming of the UK AI Safety Institute as the UK AI Security Institute. China, which had declined to sign the Seoul Summit communiqué, joined the final Paris declaration. In Paris, Beijing also unveiled its “China AI Safety and Development Network” – an equivalent to Western AI Safety Institutes – bolstering speculation that Beijing will seek a more active role in global AI safety and security talks as the U.S. retreats. The new Trump administration is likely to object strongly to the inclusion of the Chinese AI Safety Institute in the network of institutes led by the U.S. and UK, while London will continue to press for including Beijing in the process in some capacity.
The net effect was a fracturing of the Bletchley process. The Summit pivoted even further toward voluntary, industry-led approaches and big-ticket public-private investment funds, effectively shelving the prospect of a coordinated regulatory regime. AI safety advocates, including top researcher Yoshua Bengio, decried the omission of safety and security in general, and existential risks in particular, from the official program. While French officials touted new open-source investments and philanthropic coalitions to democratize AI benefits, critics noted the conspicuous absence of binding measures or a serious AI safety roadmap, just as models are becoming much more capable. 2025 will see a major push in the agentic and embodied AI space, increasing real-world risks around model deployment. For more, see DGA ASG: AI Decrypted 2025.
The next major global AI forum, to be held in India, may attempt to return AI safety/security to the agenda — but for now, 2025 appears poised to witness more dealmaking and less attention to AI risks.
With a global AI regulatory scheme off the table for now, attention will increasingly fall on developer frameworks and self-imposed guardrails. In the immediate lead-up to the Paris Summit, Meta, Microsoft, xAI, Amazon, Cohere, and the Emirati firm G42 all released updated Frontier AI Safety Policies, in accordance with the voluntary Frontier AI Safety Commitments adopted at the Seoul Summit.
Critics, however, worry about the lack of transparent, verifiable standards and the limitations of self-regulation. Manifesting these concerns, on February 4, Google amended its AI Principles to drop its pledge not to develop AI for weapons or surveillance—an unthinkable shift just a decade ago. DeepMind’s CEO, Demis Hassabis, cited a “use-case-based” analysis of risks and benefits. While the move reflects the Trump administration’s interest in harnessing AI for national security, it also rattled some rank-and-file DeepMind staff who recalled the guarantee, made at the time of Google’s acquisition, that DeepMind technology would never be used for military purposes.
Tariffs, Trade, and Tech Tensions
On February 1, President Trump signed an executive order raising tariffs by 10% on Chinese products, including technology imports. China retaliated with select tariffs on U.S. goods and export controls on critical minerals, as well as by reopening an antitrust probe into Google. Separately, the White House indicated that it would soon impose tariffs on imports of semiconductors and a range of raw materials, potentially upending the global supply chains underlying the AI stack.
Tariff-driven cost hikes for semiconductors, raw materials, and specialized hardware risk slowing the momentum of U.S. data center buildouts. The emerging tit-for-tat dynamic also puts American tech firms at greater risk of reciprocal trade hostilities from Beijing. Nvidia and Intel, already navigating U.S. export controls on advanced GPUs to China, now face the specter of deeper Chinese scrutiny or retaliation.
China’s Tech-Government Nexus Seeks to Build on DeepSeek’s Momentum
While the Paris Summit dominated global February AI headlines, President Xi Jinping held a high-profile summit in Beijing on February 17 with corporate leaders, including Jack Ma of Alibaba, Ren Zhengfei of Huawei, and DeepSeek founder Liang Wenfeng. The message: private firms closely aligned with state priorities will be rewarded with resources in key strategic domains—especially AI. Investors took this as an indicator of policy relaxation, sending Chinese tech shares upward.
Meanwhile, Chinese private sector and state-owned firms, along with regional and local governments, moved quickly to integrate DeepSeek’s open-weight models across cloud, telecom, and cybersecurity applications. Some industry leaders viewed this as a long-overdue push to deploy advanced AI models at scale, following a sustained period in which many sectors hesitated to commit to a particular model and build applications across their business operations.
Overseas, however, the reaction was mixed: the open-source community welcomed DeepSeek’s innovations, and major U.S. cloud providers began hosting the firm’s models. At the same time, DeepSeek’s frontier models provoked data security concerns and national security fears. Several governments — including ministries in Italy, Australia, and South Korea — restricted or banned the use of DeepSeek’s models, citing security risks and potential data exfiltration.
In Washington, DeepSeek’s technical success has prompted a debate about how to further tighten existing export controls on the advanced compute stack. Senators Elizabeth Warren (D-MA) and Josh Hawley (R-MO), as well as Representatives John Moolenaar (R-MI) and Raja Krishnamoorthi (D-IL), urged the Commerce Department to close loopholes that might allow China to access advanced AI hardware or open-source frontier models from U.S. companies such as Meta. While the White House weighs further measures — such as potential restrictions on Nvidia H20 chips — semiconductor industry leaders cautioned that algorithmic innovations can sometimes leapfrog hardware controls, raising questions about whether Washington can effectively contain China’s AI progress through technology controls that also inflict significant collateral damage on U.S. firms.
U.S. Domestic Developments
The Trump administration grappled with the underperformance of domestic semiconductor manufacturing, as Intel continued to struggle with its foundry business model and its obligations for fab buildouts under the CHIPS Act. Trump officials reportedly floated a proposal under which TSMC would take a controlling stake in Intel’s foundry services, while U.S. chip design giants such as Broadcom and Qualcomm could participate in a spinoff of Intel’s design arm. The aim is to rescue Intel with “patient capital” and government backing, preserving a U.S.-based semiconductor champion to fulfill the administration’s reshoring ambitions.
Vice President Vance’s vow at the Grand Palais to ensure that “the most powerful AI systems are built in the U.S.” runs headlong into the reality that advanced-node foundry capacity still overwhelmingly resides in Taiwan. Import tariffs on semiconductors or a forced Intel–TSMC tie-up might deliver short-term wins for the White House but could easily spark backlash from industry and introduce new supply chain risks.
Brussels’ Regulatory Rebalance
While the first enforceable phase of the EU AI Act came into effect on February 2, European leaders sought to send a different message, rebranding Europe as a pro-innovation hub amidst anxieties about economic competitiveness.
On February 12, the European Commission scrapped its 2022 draft AI Liability Directive, which had aimed to simplify legal recourse for AI-related harm. Instead, the Commission opted to expand the existing Product Liability Directive to cover certain AI systems.
The EU has broadly acknowledged the tradeoff between its historically regulation-forward approach to tech and economic competitiveness, evident in the Draghi report, the EU Competitiveness Compass initiative, and nongovernmental efforts such as EuroStack. Despite the rhetorical shift, however, Brussels has yet to advance a substantive action plan to reverse course, seemingly caught in a ‘have your cake and eat it too’ quandary.
Rapid-Fire AI Industry Updates
February saw a wave of new reasoning model releases, with more expected in the coming months. xAI introduced Grok 3 on February 18, marketing it as a “maximally truth-seeking AI” with an integrated “Deep Search” tool to rival OpenAI’s deep research capabilities. On February 24, Anthropic launched Claude 3.7 Sonnet, its new reasoning model with dynamic compute scaling for greater cost control. OpenAI teased GPT-4.5 and GPT-5, both anticipated in the near future.
Meanwhile, developers began taking early steps to fend off the Trump administration’s warnings that AI systems must be “free from ideological bias or engineered social agendas.” On February 12, OpenAI released an updated Model Spec, a lengthy document outlining how the company wants its models to behave. The document states that models will be encouraged to embrace “intellectual freedom … no matter how challenging or controversial a topic may be” and to avoid taking an “editorial stance.”
The U.S. AI Safety Institute (USAISI) designated Scale AI as an authorized external tester for AI models, introducing a public-private arrangement to evaluate safety prior to deployment. This move could set the stage for an emerging ecosystem of independent AI risk assessors. The nomination of Michael Kratsios, a former senior official at Scale AI, as OSTP director suggests the new administration intends to continue some level of support for the AI safety institute process.
Amazon was the latest major U.S. AI developer to commit to an eye-popping infrastructure spend, announcing plans on February 6 to spend $100 billion on capital expenditures this year, primarily for AI data centers. This announcement pushed the total 2025 AI infrastructure commitments by Amazon, Microsoft, Meta, and Google beyond $300 billion. Then, on February 23, Apple unveiled plans for a $500 billion investment in the U.S. over the next four years. The sweeping investment will cover all areas of R&D and software development, including efforts to expand chip and server manufacturing via an AI server factory in Texas and the production of Apple semiconductors at TSMC’s Fab 21 facility in Arizona.
About DGA Group
DGA Group is a global advisory firm that helps clients protect – and grow – what they have built in today’s complex business environment. We understand the challenges and opportunities in an increasingly regulated and interconnected world. Leveraging the expertise and experience of our team at Albright Stonebridge Group, a leader in global strategy and commercial diplomacy, and a deep bench of communications, public affairs, government relations and business intelligence consultants, we help clients navigate and shape global policy, reputational and financial issues. To learn more, visit dgagroup.com.
For additional information or to arrange a follow-up, please contact Paul.Triolo@dgagroup.com and Jessica.Kuntz@dgagroup.com.