OpenAI has struck preliminary agreements with Samsung Electronics and SK hynix to secure advanced memory for Stargate, its multi-hundred-billion-dollar AI infrastructure program. The partnerships target high-volume monthly output of DRAM and HBM and include plans to explore AI data centers in South Korea. Here’s what happened, why it matters, and the open questions that will shape the next phase.


What happened

OpenAI announced new strategic partnerships with Samsung Electronics and SK hynix that make the two Korean giants primary memory suppliers for Stargate. The agreements currently take the form of letters of intent and industrial partnerships, signaling intent to ramp capacity rather than final purchase orders. In parallel, OpenAI and its partners said they will evaluate and co-develop data centers in Korea, part of a broader global footprint. The public announcements followed high-level meetings in Seoul between OpenAI leadership and senior Korean officials and industry chairs. In practice, this is about locking in assured supply in a volatile memory market before next-generation models come online. It also shifts part of Stargate’s value chain closer to the companies that dominate HBM (high-bandwidth memory), a critical component for training and serving state-of-the-art AI models.

Why it matters

Modern AI training is constrained less by ideas than by compute and memory bandwidth; even brilliant model architectures stall without enough high-speed memory stacks. Samsung and SK hynix together control the bulk of global DRAM—and an even larger share of HBM—which makes their participation pivotal for any hyperscale AI plan. By formalizing partnerships, OpenAI reduces the risk of supply shocks, price spikes, or allocation shortfalls as the industry races to build larger clusters. The move also pressures competitors to secure similar long-term pipelines, potentially making memory the strategic choke point of the AI era. For South Korea, the deals align with national ambitions to be an AI manufacturing and infrastructure hub, turning chip strength into broader ecosystem leadership. For end users, the downstream effect is more consistent model upgrades and faster iteration cycles—assuming the rest of the stack (GPUs, networking, power) keeps pace.
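To see why bandwidth is the binding constraint, consider a rough decode-throughput ceiling: in low-batch inference, each generated token requires streaming the model’s weights from HBM, so throughput is roughly bandwidth divided by weight bytes. The sketch below is purely illustrative; the accelerator bandwidth and model size are assumed figures, not numbers from the announcement.

```python
# Rough decode-throughput ceiling set by HBM bandwidth.
# All figures below are illustrative assumptions, not numbers from the announcement.

hbm_bandwidth_gb_s = 3_350      # assumed HBM bandwidth per accelerator, GB/s (H100-class)
model_params_billion = 8        # assumed model small enough to fit on one accelerator
bytes_per_param = 2             # assumed 16-bit weights

# In low-batch decoding, each token requires streaming the full weight set from HBM,
# so memory bandwidth (not FLOPs) is usually the binding limit.
weight_bytes_gb = model_params_billion * bytes_per_param
tokens_per_s_ceiling = hbm_bandwidth_gb_s / weight_bytes_gb

print(f"Weights streamed per token: ~{weight_bytes_gb} GB")
print(f"Bandwidth-bound ceiling: ~{tokens_per_s_ceiling:.0f} tokens/s per accelerator")
```

Real deployments batch requests and shard models across many accelerators, but the basic point stands: doubling memory bandwidth roughly doubles the ceiling, which is why HBM supply sits at the center of these deals.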

The scale—by the numbers

OpenAI and its Korean partners outlined a production target on the order of hundreds of thousands of DRAM wafer starts per month, an eye-watering run rate that underscores Stargate’s size. At these levels, memory alone represents tens of billions of dollars over the program’s life—even before packaging, testing, and logistics. The partners framed the output as essential to power next-generation multimodal and agentic models, which tend to be hungrier for bandwidth than their predecessors. When combined with previously announced U.S. sites, Stargate’s roadmap pushes toward double-digit gigawatts of power and hundreds of thousands of racks over multiple phases. The headline is simple: this is industrial-policy scale, not a typical data-center refresh.
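As a sense check on those headline figures, here is a purely illustrative conversion from wafer starts to HBM stacks and accelerator packages. None of the inputs (the HBM share of starts, yielded dies per wafer, stack height, stacks per package) were disclosed; they are placeholder assumptions chosen only to show the arithmetic.

```python
# Illustrative conversion from DRAM wafer starts to HBM stacks and accelerator packages.
# Every input is an assumed placeholder; none of these figures were disclosed.

wafer_starts_per_month = 300_000     # "hundreds of thousands" of DRAM wafer starts (assumed)
hbm_share_of_starts = 0.2            # assumed fraction of starts devoted to HBM DRAM
good_dies_per_wafer = 500            # assumed yielded HBM DRAM dies per 300 mm wafer
dies_per_stack = 12                  # assumed 12-high stack (base/logic die ignored)
stacks_per_accelerator = 6           # assumed HBM stacks per accelerator package

hbm_wafers = wafer_starts_per_month * hbm_share_of_starts
stacks_per_month = hbm_wafers * good_dies_per_wafer / dies_per_stack
accelerators_per_month = stacks_per_month / stacks_per_accelerator

print(f"~{stacks_per_month / 1e6:.1f}M HBM stacks/month (illustrative)")
print(f"~{accelerators_per_month / 1e3:.0f}K accelerator-equivalents/month (illustrative)")
```

Even with deliberately conservative assumptions, the arithmetic lands in the millions of stacks per month, which is why the partners describe the output as foundational rather than incremental.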

What it means for Samsung & SK hynix

For Samsung and SK hynix, Stargate is a way to stabilize utilization and justify aggressive capex on HBM and advanced DRAM nodes, even through memory’s cyclical downturns. HBM is technically demanding (stacked dies, through-silicon vias, exacting yields), so long-term, high-volume commitments help both firms derisk new lines. The partnerships could also accelerate co-design between model developers and memory vendors, tuning capacities, bandwidth, and thermals to future AI workloads. Expect spillovers into packaging innovations (e.g., CoWoS-class or advanced interposers) as the ecosystem works to close the memory wall. Strategically, winning Stargate as an anchor customer strengthens both companies’ positions against rising competition, while keeping more of the AI value chain in Korea’s industrial orbit.

The Korea data-center angle

Beyond chips, the parties said they will explore building AI data centers in South Korea, sometimes described as a “Stargate Korea” track. Early guidance points to two facilities under evaluation with initial tens-of-megawatts capacity apiece, expandable as demand matures. Several Samsung affiliates—construction, heavy industry, and IT services—are attached to studies that include floating or offshore data-center concepts to ease land and cooling constraints. The Korean government has signaled willingness to facilitate siting, power, and permitting to accelerate deployment. If these sites proceed, they would complement Stargate’s U.S. campuses and create a Pacific anchor for training and inference. The net effect would be a more geographically diversified footprint with shorter supply lines to memory fabrication.
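For a rough sense of what "tens of megawatts" buys, the sketch below sizes a hypothetical site. The power usage effectiveness and per-accelerator draw are assumed values, not figures from the parties.

```python
# Rough sizing of a "tens of megawatts" AI facility; all inputs are assumptions.

site_power_mw = 20           # assumed initial site capacity in megawatts
pue = 1.3                    # assumed power usage effectiveness (total power / IT power)
kw_per_accelerator = 1.2     # assumed all-in draw per accelerator, incl. host and network share

it_load_mw = site_power_mw / pue
accelerators = it_load_mw * 1_000 / kw_per_accelerator

print(f"IT load: ~{it_load_mw:.1f} MW")
print(f"Roughly {accelerators:,.0f} accelerators at this scale (illustrative)")
```

On those assumptions an initial site supports on the order of ten thousand accelerators, modest next to Stargate’s U.S. campuses but meaningful as a regional training and inference anchor.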

Supply-chain ripple effects

Locking in memory doesn’t eliminate other bottlenecks—GPU/accelerator availability, advanced packaging capacity, and high-speed networking remain critical. Still, predictable HBM supply lets system designers optimize module configurations and yields across the stack, improving time-to-cluster. Expect renewed attention on power delivery and cooling, with experiments in liquid cooling and modular power to hit density targets without blowing past energy budgets. The partnerships also add momentum to previously announced Stargate sites in the U.S., where multi-gigawatt campuses are already in flight with hyperscale partners. In competitive terms, this raises the bar for any lab trying to scale frontier models without similar long-horizon vendor commitments. It also signals to capital markets that memory capacity—not just GPUs—is investable AI infrastructure.

What we still don’t know

The parties did not publish binding purchase volumes, unit pricing, or detailed delivery timetables, beyond directional monthly targets. We also don’t have public PUE (power usage effectiveness) or energy-mix goals for the proposed Korean sites, or specifics on grid interconnects and water use, all key for environmental scrutiny. Questions remain about export controls and whether certain configurations will require additional approvals. Finally, integration details, including how memory roadmaps align with upcoming accelerator generations, are still to come. Those disclosures will determine how quickly Stargate’s Korean nodes can move from concept to capacity.


Bottom line

This is a foundational supply-side move: by aligning with the world’s leading memory makers, OpenAI is de-risking one of the rarest inputs in frontier AI. If the data-center plans proceed, Korea becomes a second home field for Stargate, with chips and compute co-located. The next milestones to watch are firm orders, site permits, and packaging/network build-outs—the practical signals that today’s intent is turning into tomorrow’s available compute. Until then, the message is clear: in AI, memory is destiny, and OpenAI just secured a privileged lane.