Sanjay Mehrotra Signals Major HBM4 Ramp-Up: Micron Targeting 15,000 Wafers Monthly by 2026
Micron is gearing up for a significant push in next-generation AI memory, with leadership committing to substantial HBM4 production expansion. During the December 2025 financial results briefing, CEO Sanjay Mehrotra disclosed that the company plans to dramatically scale HBM4 output beginning in Q2 2026, with a ramp curve expected to be steeper than that of the previous HBM3E generation.
The numbers tell an ambitious story. Micron is eyeing monthly HBM4 production of 15,000 wafers by 2026—roughly 27% of the company’s total HBM monthly capacity, which stands at around 55,000 wafers. This allocation underscores management’s conviction that next-generation AI memory represents a critical growth vector in the evolving semiconductor landscape.
For years, Micron has trailed its Korean competitors in HBM market share, held back by limited production scale. That competitive disadvantage appears to be narrowing. The company has already initiated equipment investments and is accelerating capacity buildout across its manufacturing footprint. This isn’t merely incremental optimization—it’s a strategic reallocation of resources to capture share in a segment where demand continues to outstrip supply.
Sanjay Mehrotra’s public commitment signals internal confidence in both the durability of market demand and Micron’s ability to execute the ramp efficiently. With yield improvements historically the gating factor in new process node adoption, a faster yield ramp than HBM3E achieved would materially compress the time-to-revenue for this capacity investment. Industry observers are tracking whether execution matches the ambition—early success here could reshape competitive dynamics in premium AI memory for years to come.