Insights

AI Pulse: Monthly Roundup

Dec 17, 2025

Key Takeaways

  • November’s new frontier models narrowed performance gaps further, pushing enterprises to manage ecosystem dependence and negotiate more flexible evaluation and integration terms.
  • The surge in AI infrastructure spending is colliding with limits on power, memory, and permitting, making long-term compute access a strategic constraint. This is quickly becoming top-of-mind in industry and increasingly visible in media.
  • U.S. approvals for advanced Nvidia systems to Gulf partners mark a shift toward “compute for alignment,” raising compliance expectations for firms relying on facilities in the UAE or Saudi Arabia.
  • China is continuing to develop its domestic stack but remains constrained by high-bandwidth memory and tooling shortages, limiting its ability to train and deploy true frontier-scale systems in the near term.
  • Governance is moving into practical enforcement: Europe and California are introducing structured incident-reporting regimes, even as the Trump administration attempts to curb divergent state rules.
  • The continued emergence of a two-stack landscape (China, U.S.) will require companies to map their exposure across cloud providers, jurisdictions, and supply chains, with greater emphasis on resilience and diversified infrastructure.

Bumper Crop of New Frontier Models Released in November

November delivered another round of model launches. OpenAI’s GPT-5.1 (followed by GPT-5.2 in early December), Anthropic’s Claude Opus 4.5, Google’s Gemini 3 Pro, xAI’s Grok 4.1, and China’s DeepSeek-V3.2 all launched or entered broad preview in November, with early third-party testing placing them in a tight cluster at the top of current benchmarks.

Independent evaluations and developer commentary point to similar patterns. Gemini 3 Pro and Claude Opus 4.5 appear to lead most benchmarks on complex reasoning and multi-step tool use. Gemini 3 Pro and Grok 4.1 are marketed as more tightly integrated into their respective ecosystems, making it easier to embed them into search, productivity, or social platforms. DeepSeek-V3.2, co-developed with Huawei and optimized for domestic chips and the CANN software stack, shows that DeepSeek continues to innovate and sets the stage for the release of a new reasoning model, likely in early 2026.

For users, the difference among these frontier models is more about fit for purpose, latency, and price than about benchmark scores. Enterprises that signed “one-model-for-everything” contracts in 2023–24 are starting to revisit that choice, either unbundling workloads across several providers or pressing their main vendor to expose a broader model menu. As we noted last month, this is shifting bargaining power: the easier a workload can move between models, the more leverage large customers have on pricing and safety commitments.

Our Take: With major labs now iterating at a faster cadence at the frontier, enterprise customers may want to structure contracts around specific capability tiers and evaluation benchmarks. November’s lab reports suggest that the strongest systems remain capable of being used by malicious actors for cyber intrusions, biological design, and social-engineering tasks if not carefully configured, even as red-teaming practices improve. Firms planning to rely on a single frontier provider should assume that internal safety tooling will need regular updates to match new releases, rather than treating model upgrades as a simple quality improvement.

A New Compute Corridor in the Gulf

The November U.S.–Saudi package looks set to create a new axis in the global AI infrastructure map in 2026. Riyadh has secured licenses for approximately 35,000 Nvidia Blackwell GPUs and related networking hardware, along with U.S. support for civil nuclear power and deeper defense ties. Humain, the state-backed national AI champion, is now positioned to build a 500 MW cluster over the next five years in partnership with U.S. labs, though it has not yet received a license from the Commerce Department for advanced GPUs.

Export licenses will reportedly come with tight conditions. Saudi facilities that host the newest Nvidia chips are expected to be segregated from data centers that use Chinese hardware. Remote access for Chinese firms will be restricted. U.S. security assurances will likely extend not only to traditional energy and industrial sites but also to critical data infrastructure.

The UAE’s G42 received a parallel approval track earlier this year and secured its own Blackwell allocation this month. Together, G42 and Humain form a small group of Gulf champions that will eventually sit firmly inside the U.S. hardware sphere. Washington is encouraging quiet competition between Abu Dhabi and Riyadh for regional hub status, while keeping both dependent on U.S. export decisions and security cooperation.

In return, Saudi Arabia has made concessions on China, nuclear policy, and defense. The civil nuclear agreement limits enrichment on Saudi soil and locks the Kingdom into imported fuel and technology. Saudi entities are also stepping back from sensitive Chinese joint ventures and signaling that industrial platforms such as Alat will respect U.S. red lines around co-location of Western and Chinese tech.

Our Take: As the U.S. formalizes licensing requirements and the agreements are fleshed out, the deals amount to a “compute for alignment” trade. The Gulf will become a key external supplier of capital and hosting capacity for the U.S. stack. Companies that rely on Saudi or Emirati data centers for training and inference will need to document clear separation from Chinese hardware, software, and personnel. For Chinese vendors, room at the high end of the Gulf market is shrinking, with remaining opportunities likely to center on legacy infrastructure and less sensitive cloud-based workloads.

Corporate Alliances, Capital Flows, and Physical Limits

The restructured partnership between Microsoft, Nvidia, and Anthropic shows how frontier labs and major cloud providers are tying themselves together financially. Microsoft and Nvidia are investing heavily in Anthropic, while Anthropic commits tens of billions of dollars of future spending on Azure. Azure, in turn, uses that revenue to finance further GPU purchases from Nvidia. From a market perspective, these arrangements boost reported revenue and order backlogs for all three firms. However, a growing share of Nvidia’s data center income and cloud capex now depends on a handful of large labs with long-dated obligations. The key question is whether downstream enterprise and consumer adoption will grow fast enough to validate this spending.

At the same time, a new cohort of infrastructure providers is emerging around power and land constraints. Crusoe Energy has raised sizable capital to build gigawatt-scale data centers that run on flare gas and isolated renewables in North America. Nebius is rapidly deploying smaller, distributed clusters across multiple countries and aims to serve both hyperscalers and sovereign buyers that cannot source enough compute on their own. These firms focus on securing cheap, reliable electricity in locations where traditional players have run into permitting hurdles or grid congestion. They then leverage energy access into a service sold to cloud providers, labs, and sovereigns that want to scale without building new power infrastructure from scratch.

Our Take: The capital cycle continues to be defined as much by long-term energy and land commitments as by chip supply. Vendor-financed lab deals will keep Nvidia and the hyperscalers growing in the near term, but investors, from hedge funds to investment banks, are scrutinizing the quality of that revenue and AI-driven capex more closely as valuations remain high. Smaller players that can unlock stranded power are likely to gain greater influence, particularly with sovereign buyers that want their own AI facilities but lack local grids that can absorb large new loads.

China’s Stack Under Pressure

China’s effort to build a self-reliant AI stack based on domestic chips and software is facing operational challenges. DeepSeek’s V3.2 and reasoning models were meant to showcase training on Huawei’s latest Ascend processors instead of stockpiled Nvidia A100 and H100 units. Reports from inside the ecosystem suggest repeated problems with cluster stability, interconnect performance, and yields for the newest Huawei chips when used at frontier scale. Furthermore, high-bandwidth memory (HBM) and related equipment tooling remain key unsolved bottlenecks. Huawei still relies on hoarded HBM inventories for its products, with estimates indicating that these stockpiles will run out in 2026 unless domestic production scales up to replace them. SMIC, China’s largest and most advanced fab, is reportedly seeing order cuts as buyers hold back on logic-chip orders until they can secure sufficient memory in parallel. Thus far, YMTC and CXMT, China’s leading NAND and DRAM makers respectively, have not been able to match SMIC’s scale.

These difficulties have fueled speculation that DeepSeek and other developers are still leaning on older or smuggled Nvidia hardware for parts of their training runs. If true, this would illustrate how difficult it is to move from mid-tier workloads to genuine frontier models using only local hardware. The gap spans chip design, software, hardware tooling, and memory.

In parallel, the Moore Threads listing on Shanghai’s STAR Market in early December shows how domestic capital markets are subsidizing the effort. The firm raised $1.1 billion at a valuation near $7.5 billion, despite limited market share and a record of losses exceeding RMB 5.9 billion. Investor demand reflects strong policy backing and a narrative of resilience, rather than proven competitiveness against Nvidia and AMD.

Our Take: Beijing is successfully mobilizing capital and policy support for its AI stack, but the technical and ecosystem gap at the frontier remains significant. Domestic chips can meet much of the demand for mid-range inference and industry applications, which will matter for China’s internal market. However, continued reliance on older or smuggled Nvidia hardware for top-end training would leave key firms exposed to the next round of U.S. enforcement, even if public messaging stresses self-reliance. Beijing has long viewed dependence on U.S. GPUs as a strategic liability, and regulators are increasingly steering Chinese buyers away from Nvidia products to speed development of the Chinese stack. For Nvidia, even after President Trump’s landmark early-December decision to allow H200 sales into China, Beijing may still restrict their deployment in high-profile platforms. More on this in next month’s roundup.

AI Governance Roundup

Senior Trump administration officials have floated the use of federal funding conditions, litigation, and regulatory guidance to limit states’ ability to impose their own requirements. The White House views uniform national rules as essential for rapid deployment and argues that state-led regimes — especially California’s — risk slowing innovation and creating multi-jurisdictional compliance burdens. While these efforts are early, face legal hurdles, and are likely to expose fractures within the Trump administration, they mark an attempt to consolidate AI authority at the federal level.

At the same time, regulators and courts are still catching up to rapid deployment. Agencies are updating guidance on safety, cybersecurity, and incident reporting, but there is no clear federal framework that governs how companies must manage and disclose AI-related failures across sectors. Firms face a patchwork of sectoral expectations and litigation risk, especially in finance and health.

In Europe, the Commission’s Digital Omnibus package and draft guidance on “serious incident” reporting under the AI Act move the bloc toward implementation. The Omnibus gives companies more room to process sensitive data to test for bias and narrows some definitions of personal data, while centralizing oversight of general-purpose models in the AI Office. At the same time, Brussels is advancing an economic security doctrine that tightens screening of outbound investment in advanced semiconductors, AI, quantum, and biotechnology.

Our Take: Europe is moving from rules on paper to enforcement and workflow design. Multinationals will need to reconcile a relatively permissive U.S. domestic environment, an increasingly assertive U.S. export-policy toolkit, and European requirements that push toward higher compliance standards and heavier documentation.

The Road Ahead

While November saw progress on the debate around advanced compute exports to the Middle East, there was still significant volatility in exports of advanced GPUs to China. Although Congress dropped the GAIN AI Act from the defense authorization process, the Hill remains eager to impose stricter export controls than those currently in place. The bill would have required that U.S. firms be offered priority access to advanced chips before exports to foreign buyers. Its removal preserves broad executive discretion to approve or deny GPU exports on a country-by-country basis.

November’s moves deepen the fault lines of a two-stack environment. The U.S. stack, centered on U.S. and allied firms and standards, now has deep roots in the Gulf. The China stack is consolidating around Chinese hardware, platforms, and governance exports such as WAICO. The main zone of competition is the wider Global South, where governments want access to AI for growth but seek to avoid full alignment with either bloc.

India’s AI Impact Summit in February will be the next key moment to watch for AI governance. New Delhi is promoting a narrative of “impact” and “people-centric” adoption that resonates with many emerging markets. The Gulf’s sovereign AI platforms could either complement this agenda, by offering affordable access to U.S.-stack compute for African and South Asian clients, or complicate it if they move more firmly into Washington’s camp.

For companies, it will become increasingly important to map exposure to Gulf and Chinese infrastructure across training, inference, and data storage to gain better situational awareness of the AI landscape. On the regulatory front, engaging early with European regulators as AI Act workflows move from draft to enforcement will prove vital. On the investment front, it will be important to distinguish between AI projects that are already generating recurring cash flows from deployed products and services, and headline investment commitments that reflect long-term cloud capacity reservations or research partnerships but may not translate into near-term revenue. We will be monitoring how energy prices, grid policy, and community opposition shape the pace and location of new data centers.

The next year will likely bring more strain on physical resources, more selective use of export controls, and sharper choices for governments that want to tap into AI while keeping options open. Firms that plan around these constraints now will be better positioned as the AI buildout moves into its next phase.

For more, see DGA-ASG AI Decrypted 2025. Look out for AI Decrypted 2026, releasing early 2026.


About DGA Group

DGA Group is a global advisory firm that helps clients protect – and grow – what they have built in today’s complex business environment. We understand the challenges and opportunities in an increasingly regulated and interconnected world. Leveraging the expertise and experience of our team at Albright Stonebridge Group, a leader in global strategy and commercial diplomacy, and a deep bench of communications, public affairs, and government relations consultants, we help clients navigate and shape global policy, reputational and financial issues. To learn more, visit dgagroup.com.

For additional information or to arrange a follow-up, please contact Paul.Triolo@dgagroup.com, Alexis.Serfaty@dgagroup.com, or Nikta.Khani@dgagroup.com.