Insights

AI Pulse: Monthly Roundup

Apr 3, 2025

Global developments this month have exposed deep divisions over how to manage—and benefit from—the explosive growth of AI applications and infrastructure buildout pledges. Washington’s stance veers between free-market convictions and protectionist efforts to secure AI hardware supply chains. China is accelerating its own open-source model development and enacting legislation requiring the labeling of AI-generated content. Europe, facing hostile rhetoric from the U.S., appears increasingly compelled to pursue digital sovereignty across the tech stack. 

 

Trump administration grapples with AI Action Plan, reshoring semiconductor manufacturing 

Thousands of stakeholders submitted comments to the Office of Science & Technology Policy (OSTP) as input for the administration’s AI Action Plan, hoping to shape a new federal framework for the AI ecosystem while offering perspectives on regulatory questions, export controls, workforce development and immigration, and government adoption of AI. OpenAI and Google advocated for a flexible copyright regime that allows developers to train models on copyrighted materials. Entertainment guilds and publishers voiced strong opposition, warning that a blanket fair-use rule would threaten artists’ and authors’ livelihoods and weaken their ability to negotiate with large AI labs. Echoing the Trump administration’s national security rhetoric, many comments warned that China’s progress is being accelerated by “model distillation” or by diverted GPUs slipping past U.S. export controls. 

Industry has also spoken up in defense of the U.S. AI Safety Institute (USAISI), which faces an uncertain future. On March 6, the newly formed AI Innovators Alliance, representing smaller tech startups, sent a letter to Commerce Secretary Howard Lutnick and AI and Crypto Czar David Sacks emphasizing the importance of USAISI and the National AI Research Resource (NAIRR). A March 10 letter signed by eight established tech industry groups likewise called on Lutnick to preserve NIST’s AI initiatives and maintain industry collaboration with the AI Safety Institute. USAISI remains in place for now but could ultimately be moved under OSTP as new director Michael Kratsios takes the reins in April. 

In an apparent signal of a tougher stance on the high-stakes AI Diffusion Framework, the March 25 announcement adding 80 new entities — including the Beijing Academy of Artificial Intelligence (BAAI) — to the Entity List underscores Washington’s intent to restrict China’s AI hardware access, even at the risk of straining relations with allies. Meanwhile, industry and foreign leaders are working to shape the administration’s export policy. On March 18, UAE National Security Advisor Sheikh Tahnoon bin Zayed Al Nahyan visited Washington at the Trump administration’s invitation and called for easier access to GPUs. Many comments submitted to the AI Action Plan also touched on this issue. The AI Diffusion rule has frustrated U.S. allies relegated to Tier 2 status, which requires companies to navigate additional regulatory hurdles; countries pushing back on this status include the UAE, Israel, Poland, and India. 

Meanwhile, U.S. officials have continued to grapple with how to respond to Chinese AI startup DeepSeek and the spread of its open-source, open-weight models. Finding itself in a reactive posture, the Trump administration is reportedly considering an executive order that would, at a minimum, ban DeepSeek’s app on U.S. government devices. It is also considering avenues to block American cloud providers from hosting DeepSeek’s API on national security grounds. However, DeepSeek’s open-source release model complicates such restrictions, as there is no direct user-data pipeline to China. DeepSeek is expected to release its latest base and reasoning models in the coming weeks, increasing pressure on U.S. officials to act. Yet, with nearly every Chinese official praising DeepSeek at major international fora in China throughout March, any move against the company is certain to escalate U.S.-China tensions. 

The flipside of American efforts to block Chinese companies from accessing advanced GPUs is the push to reshore semiconductor manufacturing in the U.S. On that front, President Trump claimed a victory on March 3 when TSMC announced an additional $100 billion investment in its Arizona-based complex, bringing its total U.S. commitment to $165 billion. The plan is to co-locate advanced fabrication plants, packaging facilities, and an R&D center—mirroring TSMC’s “GigaFab” model in Taiwan—to enable rapid scaling of sub-3nm nodes for U.S. customers including Apple, Qualcomm, Broadcom, and Nvidia. Intel, central to the American reshoring scheme, named a new CEO this month: former board member Lip-Bu Tan. Tan, who reorganized the Intel board to include more semiconductor industry expertise, insists that Intel must stand on its own, rejecting the prospect of TSMC taking over its struggling foundry services. Intel is trying to woo major customers—Nvidia, Broadcom, and AMD—for its advanced 18A foundry node, hoping that success will restore confidence in its manufacturing capabilities and establish Intel as a credible alternative to TSMC. 

The semiconductor industry is finding President Trump’s policy stance difficult to predict. On March 4, Trump demanded a repeal of the 2022 CHIPS Act in an address to Congress, describing its subsidies as “horrible” and accusing chipmakers of “taking our money and not spending it.” Leading Republicans and Democrats alike showed no interest in reversing the $33 billion in grants already awarded. Still, the Commerce Department’s CHIPS Program Office has lost 40% of its staff to layoffs, raising concerns about its ability to oversee the program effectively. Governors from Arizona and Ohio warned that cutting funding mid-construction would jeopardize the entire onshoring effort. Meanwhile, TSMC’s Arizona sites are already struggling with labor shortages and the higher costs of building advanced-node chips in the U.S. Analysts forecast that domestically manufactured semiconductors could be 20–30% more expensive than those produced in Taiwan, even before accounting for potential tariffs. This casts serious doubt on the future of U.S. industrial policy on semiconductors. With Intel and Samsung struggling to meet their CHIPS Act obligations, only TSMC appears on track to meet the CHIPS Act goal of reaching 20 percent of advanced-node manufacturing — primarily for AI applications — by 2030. 

 

Diverging state regulations

In its comments on the AI Action Plan RFI, OpenAI argued for federal preemption of state-level AI bills, eyeing the proliferation of such bills in statehouses across the country. 

The state-level tug-of-war over AI governance saw Virginia Governor Glenn Youngkin veto a bill on March 24 that would have required audits and impact assessments for so-called “high-risk” AI, modeled on Colorado’s algorithmic discrimination law. In Texas, lawmakers narrowed a sweeping AI discrimination bill so that the revised version covers only government deployments of AI. 

California appears poised for another regulatory push: State Senator Scott Wiener introduced SB 53, a streamlined version of last year’s SB 1047, which was vetoed by Governor Gavin Newsom. The “Frontier Models” Working Group, established by Newsom following that veto, released a draft report on March 18 endorsing transparency mandates, whistleblower protections, and reporting requirements for serious AI incidents. At times, the thoughtful report reads as an effort to carry forward the Biden administration’s approach to responsible AI governance, arguing that “carefully tailored governance approaches can unlock tremendous benefits.” Given California’s track record of setting de facto national standards—in areas such as data privacy and emissions—observers see potential for this narrower legislative push to have an impact beyond the state’s borders. 

In line with the Trump administration’s deregulatory stance on AI, little legislative action is expected in Washington – with limited exceptions for tackling specific AI-related harms and the ecosystems that enable them. The Senate unanimously passed the TAKE IT DOWN Act on March 12, with support from First Lady Melania Trump. The bill, first introduced in the previous session of Congress, criminalizes AI-generated nonconsensual intimate imagery and is expected to pass the House. 

 

Eyes on the FTC 

News that President Trump had dismissed two Democratic Federal Trade Commission (FTC) commissioners on March 18 provoked both consternation and lawsuits. Thus far, the Trump administration has sent mixed messages on how it intends to apply antitrust law to the tech industry. Alphabet quietly secured a win on March 10 when the DOJ dropped its proposal to force the company to divest its AI-related investments, removing a major headache that had carried over from the Biden era. Yet the FTC’s advancing probe of Microsoft – examining the company’s partnership with OpenAI, in-house AI model development, and data center capacity constraints – signals a more hostile posture. 

 

China’s confidence on AI soars 

While the U.S. grapples with internal divisions, Chinese AI players are powering ahead. Beijing announced a new $150 billion venture capital guidance fund for AI and robotics in March, seeking to fuel advanced research into both large language models (LLMs) and humanoid and other robotics technologies. Alibaba unveiled QwQ-32B, an open-source reasoning model, underscoring how swiftly Chinese tech giants can iterate. Meanwhile, smaller players like Zhipu AI are raising hundreds of millions of dollars from state-backed funds, highlighting Beijing’s strategy of directing investment toward strategic domestic champions. 

Chinese lawmakers have also fast-tracked long-anticipated measures to curb deepfake abuse by requiring labels for AI-generated content and clarifying liability for AI-driven harm. Top executives, including Xiaomi’s Lei Jun, are urging that any new AI laws remain flexible to encourage rapid adoption. Yet some delegates at the Two Sessions meetings (March 4–10) pressed for stronger guardrails on data security. Additionally, on March 21, the Chinese government introduced restrictions on private entities’ use of facial recognition, effective June 1, 2025, although state agencies will remain exempt. A comprehensive, long-discussed AI law, however, is unlikely to materialize, as industry planners and officials in Beijing hesitate to introduce significant new regulations in such a rapidly evolving sector. 

 

Europe reevaluates its tech relationship with the U.S.  

On March 14, more than 100 companies and associations under the EuroStack moniker issued a joint open letter to European Commission (EC) President Ursula von der Leyen and Digital Chief Henna Virkkunen. The letter called for concrete EU actions to cultivate technological “independence across all layers of its critical digital infrastructure,” including incentives to foster uptake of European-made technology – apps, platforms, AI models, semiconductors, and cloud storage. The European Semiconductor Industry Association (ESIA) and SEMI Europe urged a “Chips Act 2.0” in Brussels, a sentiment echoed by Members of the European Parliament (MEPs) in a letter to Virkkunen. Concerns over digital dependence predate the Trump administration; the administration’s pugilistic tone has only accelerated efforts to diversify beyond American technology suppliers. 

Amidst intense lobbying from industry and direct threats from Washington, the European Commission is reportedly weighing whether to make portions of the EU AI Act voluntary – specifically provisions concerning models’ production of false or violent content and their use in elections. Industry has long warned that regulation could prevent companies from rolling out frontier technology in the European bloc. On March 19, however, Meta announced that it had started rolling out Meta AI across 41 European countries. 

 

Investments, infrastructure, and model updates  

March saw multi-billion-dollar commitments toward data centers worldwide, reflecting the AI industry’s insatiable need for computational power – and national efforts to build out domestic capacity. On March 5, Blackstone gained approval for a $13 billion hyperscale facility in northeast England. Meanwhile, on March 6 in South Africa, Microsoft announced $297 million in AI infrastructure investment, part of its broader $80 billion AI spending plan for the fiscal year. On March 17, Thailand’s Board of Investment approved three data center projects worth $2.7 billion, from China’s Beijing Haoyang, Singapore-based Empyrion Digital, and Thai GSA Data Center 02. On March 4, Foxconn committed $900 million to build what it calls the “world’s largest AI server assembly plant” in Jalisco, Mexico, set for completion by 2026. Already known as Mexico’s semiconductor hub, Jalisco anticipates further high-tech investment despite looming tariffs that could raise operating costs. Ultimately, the development and operation of advanced data centers outside the U.S. remains subject to uncertainty over the final shape of the AI Diffusion Framework. 

On the model front, the rapid expansion of reasoning capabilities continues at full speed. OpenAI’s GPT-4.5, released on February 27, offers a larger context window and improved general capabilities, though its hefty size and strain on inference resources have led to a staggered rollout. Anthropic’s Claude 3.7 Sonnet, released on February 24, introduces a “hybrid reasoning” approach that lets a single model switch between rapid responses and extended step-by-step thinking. On March 12, Google integrated its new Gemma 3 model family (ranging from 1B to 27B parameters) into both consumer services and robotics R&D. Google followed up on March 25 by launching Gemini 2.5 Pro, which outperformed OpenAI’s models on multiple benchmarks. In China, Alibaba expanded its RISC-V chip portfolio and released the QwQ-32B reasoning model on March 6, aiming to rival OpenAI and DeepSeek in advanced LLM performance. Tencent released its T1 reasoning model on March 22, stepping up competition in an increasingly crowded AI landscape. 

 

The road(s) ahead 

Escalating global trade tensions have become an undercurrent for the AI industry. Washington, Beijing, and Brussels are pursuing distinct strategies aimed at boosting their domestic AI ecosystems and, in Washington’s case, at controlling the pace of international AI advancement. Governments worldwide see AI access and integration as critical to economic and national security, making control of AI inputs a powerful, if analog, lever. Continuing the trends on display at February’s Paris AI Action Summit, national AI paths look set to diverge further in 2025. 

For more, see DGA-ASG AI Decrypted 2025.  

 

About DGA Group

DGA Group is a global advisory firm that helps clients protect – and grow – what they have built in today’s complex business environment. We understand the challenges and opportunities in an increasingly regulated and interconnected world. Leveraging the expertise and experience of our team at Albright Stonebridge Group, a leader in global strategy and commercial diplomacy, and a deep bench of communications, public affairs, government relations and business intelligence consultants, we help clients navigate and shape global policy, reputational and financial issues. To learn more, visit dgagroup.com.

For additional information or to arrange a follow-up, please contact Paul.Triolo@dgagroup.com and Jessica.Kuntz@dgagroup.com.