WHAT MICRON ACTUALLY IS — AND WHAT CHANGED
Before the thesis, the most important distinction in analyzing MU: Micron is not a commodity memory company in any meaningful sense for the 2026–2027 period. It is a supplier of High Bandwidth Memory — a product so technically demanding, so capacity-constrained, and so embedded in the GPU supply chain that it shares more characteristics with a specialty engineered component than with the DRAM chips that caused Micron's historical boom-bust cycles.
HBM is not DRAM shipped faster. It is DRAM stacked vertically in layers of 8, 12, or 16 dies, connected by thousands of Through-Silicon Vias (TSVs), packaged on an interposer alongside the GPU die, and co-designed with the accelerator vendor. The manufacturing process requires TSV etching, wafer thinning to 30 microns, precision alignment of stacked dies, and advanced thermal management — processes that take years to qualify, cannot be replicated overnight, and consume approximately three times more wafer capacity per bit than standard DRAM. There are three qualified HBM manufacturers.
The HBM Oligopoly — Competitive Positions
Supplier | HBM Share | Key Customer | HBM4 Status | Share Trend |
SK Hynix | 57–62% | NVIDIA (primary) | Mass production 2026 | Dominant; slight share decline from peak |
Micron | ~21% | Hyperscalers (direct) | Vol. shipments CQ1 2026 | Growing; 2nd/3rd place as of Q2 2025 |
Samsung | 17–22% | Google TPUs | Ramp 2026 via P5 fab | Recovering after HBM3E qualification delays |
China (CXMT/YMTC) | 0% | None qualified | Years away — export controls | Tech gap at HBM2E level; no commercial supply |
HBM CO-DESIGN: THE SWITCHING COST THAT NO ONE IS MODELLING
Here is what almost every Micron analysis gets wrong: it treats HBM as a component that GPU vendors buy from whoever is cheapest, the same way they might buy DDR5 modules for a server. That analogy is wrong in every important dimension, and the difference explains why Micron's relationships with hyperscalers are far stickier than the market is pricing.
Why HBM Is Not a Commodity Component
Unlike standard DRAM — where any JEDEC-compliant chip from any supplier fits any slot — HBM is co-designed at the interposer, electrical, and thermal level between memory supplier and GPU vendor. The microbump geometry, PAM-4 signal tuning, and thermal models are jointly engineered and unique to each GPU platform. Re-qualifying a new supplier takes 9–18 months and requires redesigning the memory controller. Once Micron qualifies for a GPU generation — as it has for NVIDIA Vera Rubin with HBM4 in volume production — it is locked in for the life of that platform. This is a switching cost structure that looks like enterprise software, not commodity DRAM: technically embedded and practically irreplaceable within the product lifecycle.
Dimension | Standard DRAM | HBM — The Reality |
Interface standard | JEDEC — universal across suppliers | Co-designed interposer: unique to GPU platform |
Switching cost | Zero — swap supplier each quarter | 9–18 month re-qualification per new platform |
Firmware | No firmware dependency | Memory controller tuned to supplier; changing supplier = controller re-spin |
Thermal model | Modular — vendor-independent | System thermal design built around specific supplier's power profile |
Design-win cadence | Every quarter at spot price | Platform design-win 18–24 months before production; locked for platform life |
Who defines specs | JEDEC standards body | GPU vendor + memory supplier bilateral roadmap negotiation |
The Vera Rubin episode clarifies this: In early 2026, reports emerged that NVIDIA had demanded last-minute specification changes — requiring all three HBM suppliers to redesign their products and pushing mass production back by approximately one quarter. This is not a sign of a commodity market. In a commodity market, the buyer accepts the product on offer. In co-designed AI memory, NVIDIA sets the spec and all three suppliers adapt. Micron adapted and began high-volume Vera Rubin HBM4 shipments in Q1 2026 — simultaneously announcing the industry's first PCIe Gen6 SSD and SOCAMM2 for the platform. It was the first supplier to deliver all three Vera Rubin memory products in volume at the same time.
THE LTA STRUCTURE — WHY THIS CYCLE IS DIFFERENT
The balance sheet tells the story most analysts miss. Customer prepayments — the cash deposited upfront to secure supply — peaked at $907 million at the end of FY2024. By February 2026 they had zeroed out. That is not demand weakness. It is a structural upgrade.
Customer Prepayments — Balance Sheet Snapshot (SEC Filed)
Period | Filing | Customer Prepayments / Contract Liabilities | Key Movement / Implication |
FY2023 (Aug 2023) | 10-K | $453M (current only) | Prepayments in place; supply being secured from trough cycle |
FY2024 (Aug 2024) | 10-K | $907M total ($766M current) | Peak prepayment balance — hyperscalers queuing for HBM3E allocation |
FQ2 FY2025 (Feb 2025) | 10-Q | ~$542M (declining) | $365M of FY24 balance recognised as revenue in H1 FY2025 — supply being shipped |
FQ3 FY2025 (May 2025) | 10-Q | $146M ($3M current) | $777M of FY24 balance recognised across first 9 months of FY2025 |
FQ1 FY2026 (Nov 2025) | 10-Q | $1.68B PP&E purchase obligations | Balance shifts — LTAs now drive equipment capex, not customer deposits |
FQ2 FY2026 (Feb 2026) | 10-Q | ~$0 prepayments; not material | Prepayments FULLY SHIPPED. New SCA structure replaces one-time deposits — revenue recognition moves to shipment, not advance |
Reading this table correctly is the key to understanding where Micron is in its LTA cycle. The $907 million of customer prepayments at the end of FY2024 represented hyperscale customers depositing capital to secure HBM3E allocation — they were paying upfront because supply was scarcer than demand and they needed to guarantee access. Micron then shipped $777 million of that balance across the nine months ending May 2025. By August 2025, the balance was immaterial.
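The rollforward implied by these filings can be checked directly. A minimal sketch (the residual is presumably new deposits taken in during the period; that is an inference, not a disclosed figure):

```python
# Rollforward of Micron's customer prepayment balance from the figures
# cited above (all $M). The small positive residual vs. the reported
# $146M balance would reflect new deposits received during the period --
# an assumption, since the filings are not broken out here.
fy2024_ending_balance = 907   # 10-K, Aug 2024
recognized_9m_fy2025 = 777    # recognized as revenue through May 2025
reported_fq3_fy2025 = 146     # 10-Q, May 2025

implied_balance = fy2024_ending_balance - recognized_9m_fy2025
implied_new_deposits = reported_fq3_fy2025 - implied_balance

print(f"Implied balance after shipments: ${implied_balance}M")
print(f"Implied new deposits in the period: ${implied_new_deposits}M")
```

The arithmetic closes to within ~$16M, consistent with a balance being run down to immateriality by August 2025.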
SCAs do not require upfront deposits — revenue is recognized on shipment, not as a prepayment liability. The first five-year SCA, signed Q2 2026, commits volume and pricing across a multi-year horizon. Future obligations are described as 'not material' in the 10-Q because SCA revenue will appear on future income statements, not as a balance sheet liability today.
The signal the balance sheet is actually sending: Deposits can be cancelled; SCAs cannot be walked away from without contractual consequence. This is a structural upgrade in revenue quality.
Why This Cycle Is Structurally Different
Every prior upcycle ended the same way: spot prices spike, suppliers race to add capacity, the market floods, prices collapse — because supply decisions were made independently on spot pricing.
Calendar 2026 HBM supply was sold out — in its entirety — before the end of 2025. Pricing and volume for Micron's HBM3E and HBM4 allocations were locked via Long-Term Agreements with hyperscale customers before production lines were even fully ramped. The most recent agreements — Strategic Collaboration Agreements (SCAs) — lock both volume and pricing across multi-year horizons and include R&D collaboration components. The first five-year SCA, signed in Q2 2026, covers Idaho, New York, and Singapore fab programs.
The practical consequence: A business that spent its history at the mercy of spot DRAM pricing now has multi-year revenue visibility most industrials would envy. When Microsoft’s CFO attributes $25B of capex overshoot to memory costs, that is Micron’s pricing power appearing on a customer’s P&L in real time.
Milestone | Detail | Significance |
Sept 2025 (FQ4) | "Almost all customers" locked for vast majority of HBM3E 2026 supply | Revenue visibility unprecedented |
Dec 2025 (FQ1) | 100% of calendar 2026 HBM supply (HBM3E + HBM4) under price & volume agreements | Sold out 12 months in advance |
Q1 2026 | First five-year SCA signed; CapEx raised to $20B on multi-year demand visibility | Multi-year pricing lock-in; capex confidence |
Q2 2026 | Management in discussions for additional multi-year agreements across markets | SCA model expanding beyond HBM |
FOUR BUSINESS UNITS — ONE TRANSFORMING STORY
Micron reports four business segments. The trajectory of each tells the story of a company whose entire portfolio is repricing simultaneously — not because of a spike in one category, but because AI memory demand is structurally tightening supply across the board.
Business Unit | FQ2-25 Rev | FQ1-26 Rev | FQ2-26 Rev | YoY | GM | Key Driver |
Cloud Memory (CMBU) | $2.9B | $5.3B | $7.7B | +163% | 74% | HBM3E/HBM4; sold out 2026 |
Core Data Center (CDBU) | $1.8B | $2.4B | $5.7B | +211% | 74% | High-density server DRAM; DDR5 & LPDDR5 |
Mobile & Client (MCBU) | $2.2B | $4.3B | $7.7B | +245% | 79% | AI-enabled devices; on-device LLM memory |
Auto & Embedded (AEBU) | $1.0B | $1.7B | $2.7B | +162% | 68% | Automotive content growth; industrial AI |
TOTAL | $8.1B | $13.6B | $23.9B | +196% | ~76% | Record across all four units |
Three observations from the segment data. CMBU gross margins of 74% would be the envy of most technology businesses — this is the HBM premium, crystallized. The Mobile and Client unit — historically Micron's most cyclical — posted 79% gross margins in FQ2, up from 15% one year prior. That is not a cycle, it is a repricing. Every business unit is growing triple-digits year-over-year. This is a story of the entire memory stack being structurally repriced by AI demand.
THE $725 BILLION QUESTION — AND MICRON'S SHARE OF IT
The four major hyperscalers — Amazon, Microsoft, Alphabet, and Meta — have collectively committed to spending approximately $725 billion in capital expenditure in 2026, up from roughly $341 billion in 2025 and $229 billion in 2024. In two years, their combined annual infrastructure spend has more than tripled. Approximately 75% — roughly $545 billion — is directed at AI infrastructure. Adding Oracle, the top-five total comfortably exceeds that figure.
Company | 2024 Capex | 2025 Capex | 2026 Guided | Memory-Specific Comment |
Amazon (AWS) | $83B | ~$104B | $200B | Capacity-constrained through 2026; $200B total |
Alphabet (Google) | $52B | ~$100B | $185-190B | Compute-constrained near term; cloud revenue +63% YoY |
Microsoft | $56B | ~$72B | $190B | $25B attributable to memory price increases alone (CFO disclosure) |
Meta | $38B | ~$65B | $125-145B | CEO cited memory pricing as key capex driver in 2026 |
Big-4 Total | ~$229B | ~$341B | ~$725B | More than 3× in two years; ~$1T+ in 2027 per Evercore/BofA |
Within the $725 billion total, approximately 25% is estimated to represent memory alone. That translates to roughly $181 billion in memory spend from the Big Four. Micron's ~20% DRAM market share applied to that pool implies roughly $36 billion in annual addressable revenue from the Big Four alone — before the broader cloud ecosystem. At the margins Micron is currently printing, that $36 billion could generate on the order of $25 billion in net profit, which, capitalized at even a modest earnings multiple, could represent almost $250 per share of value on its own.
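The chain of estimates above can be made explicit. A back-of-envelope sketch, where the 25% memory share, 20% Micron share, net-profit figure, and the 11× multiple are all assumptions from or implied by the text, not disclosed numbers:

```python
# Back-of-envelope: Micron's addressable slice of Big-4 2026 capex.
big4_capex = 725e9
memory_share = 0.25           # estimated memory portion of the capex pool
micron_dram_share = 0.20      # Micron's approximate DRAM market share
shares_outstanding = 1.13e9   # share count used elsewhere in this piece

memory_tam = big4_capex * memory_share               # ~$181B
micron_addressable = memory_tam * micron_dram_share  # ~$36B

net_profit = 25e9             # the text's net-profit estimate on that revenue
illustrative_multiple = 11    # assumption: a deliberately modest P/E
value_per_share = net_profit * illustrative_multiple / shares_outstanding

print(f"Memory TAM: ${memory_tam / 1e9:.0f}B")
print(f"Micron addressable: ${micron_addressable / 1e9:.0f}B")
print(f"Illustrative value per share: ${value_per_share:.0f}")
```

At 11× that profit stream alone is worth roughly $243 per share, which is where the "almost $250 for the stock" figure comes from.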
THE CUSTOMERS CONFIRM IT — FIVE CEOs ON THE MEMORY SHORTAGE
The memory shortage is not an analyst estimate or a sell-side projection. It is being disclosed in plain language by the CFOs and CEOs of the five largest technology companies on earth, on the record, in their most recent earnings calls. The following is what each of them said in late April 2026 — the same earnings season in which Micron guided to $33.5 billion in a single quarter. These are not endorsements of Micron specifically. They are real-time, sworn-disclosure-level confirmations that the demand side of this equation is not abating.
Microsoft — Amy Hood, CFO (Q3 FY2026 Earnings Call, April 29, 2026)
“For calendar year 2026, we expect to invest roughly $190 billion in capital expenditures, which includes approximately $25 billion from the impact of higher component pricing. Even with these additional investments, and continued efforts to bring GPU, CPU and storage capacity online faster, we expect to remain constrained at least through 2026.”
$190B capex is 23% above analyst consensus — Hood attributed $25B of that overshoot directly to higher component pricing, with $5B in Q4 alone. Microsoft guided for constraints persisting at least through end-2026.
Meta — Mark Zuckerberg, CEO (Q1 2026 Earnings Call, April 29, 2026)
“We are increasing our infrastructure CapEx forecast for this year. Most of that is due to higher component costs, particularly memory pricing. But every sign that we are seeing in our own work and across the industry gives us confidence in this investment.”
Meta raised capex guidance $10B at the midpoint to $125–145B, memory pricing the primary driver. The stock fell 6% after-hours despite 33% revenue growth. Zuckerberg did not signal normalisation — he said every sign across the industry confirms continued investment.
Alphabet — Sundar Pichai, CEO (Q1 2026 Earnings Call, April 29, 2026)
“We are compute constrained in the near term. Our cloud revenue would have been higher if we were able to meet the demand.”
“These strong results reinforce our conviction to invest the capital required to continue to capture the AI opportunity. As a result, we expect our 2027 CapEx to significantly increase compared to 2026.” — Anat Ashkenazi, CFO
Alphabet grew Cloud revenue 63% YoY and still left demand unserved — the ceiling was set by what memory suppliers could ship. CFO Ashkenazi guided 2027 capex to “significantly increase” beyond the already-elevated $180–190B 2026 base.
Amazon — Andy Jassy, CEO (Q1 2026 Earnings Call, April 29, 2026)
“The cost of components, particularly memory, has skyrocketed. We are in a stage where there is just not enough capacity for the amount of demand. One of the interesting things that we see right now with the change in price and supply on things like memory is that it is a further impetus pushing companies who have on-premises infrastructure into the cloud.”
Jassy reframed the shortage as a cloud migration tailwind: enterprises priced out of on-premises memory are moving to AWS. AWS grew 28% YoY in Q1 — the fastest in 15 quarters — even with supply constraints.
Apple — Tim Cook, CEO (Q2 FY2026 Earnings Call, April 30, 2026)
“We continue to see market pricing for memory increasing significantly. Beyond Q2, we believe memory costs will drive an increasing impact on our business. We are not at the point where we are saying this is going to end anytime soon.”
Consumer and data centre demand are contesting the same supply pool. Apple’s product gross margin fell 200bp sequentially, with memory the identified driver. A 12GB module went from ~$30 to ~$70 in under a year. Cook said supply-demand balance on Mac mini and Mac Studio is still months away — and the shortage is “not going to end anytime soon.”
What Five CEOs Just Confirmed in One Earnings Season
Five operators with $725B in combined 2026 capex said, in the same two-day earnings window, that memory costs are a multi-year structural constraint. Not one guided for near-term normalization. These are legally attested statements by fiduciaries. The market prices Micron as if the boom is ending. Its five largest customers just said it isn’t.
THE WAFER WALK — WHY $60 BILLION IN NEW CAPEX DOES NOT BRIDGE THE SUPPLY GAP
The bear case assumes $60B+ capex closes the supply gap. It ignores wafer intensity — HBM consumes capacity in a fundamentally different way from standard DRAM.
Step 1: The Baseline — How Many Wafers Exist?
Supplier | Estimated Monthly DRAM Wafer Capacity (all nodes) | Capacity Notes |
SK Hynix | ~250,000 wspm | M14/M15 fabs; some capacity redirected to HBM already |
Samsung | ~350,000 wspm | P3/P4/P4L fabs; world's largest — but majority for non-HBM DRAM |
Micron | ~160,000 wspm | Hiroshima/Taichung/Manassas fabs; ID1 not operational until 2027 |
Industry Total | ~760,000 wspm | Approximate combined DRAM wafer capacity, calendar 2026 |
Step 2: The Wafer Intensity Multiplier — DRAM vs. HBM vs. HBM4
HBM3E consumes ~3× the wafer starts per bit vs. DDR5; HBM4 ~4×. The reasons are structural — wafer thinning, TSV etching, and stacking each consume additional cleanroom time that standard DRAM does not require.
Product | Wafer-starts per equivalent bit | Why the premium? |
Standard DDR5 DRAM | 1× (baseline) | Single wafer, standard lithography, no stacking |
HBM3E (8H or 12H stack) | ~3× vs. DDR5 | Wafer thinning losses; TSV etch; bonding; test per layer |
HBM4 (12H or 16H stack) | ~4× vs. DDR5 | Taller stack; finer microbump pitch; base logic die adds process steps; higher yield sensitivity |
Implication | Converting 1 DRAM wafer-line to HBM4 production removes ~4 equivalent DRAM wafers from the market — and adds fewer bits back |
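The displacement math can be sketched under the table's own multipliers: a converted wafer leaves the conventional DRAM pool entirely, and only a fraction of its bit output returns as HBM. The line size is illustrative, not fab-level data:

```python
# Wafer-intensity sketch using the ~3x (HBM3E) / ~4x (HBM4)
# wafer-starts-per-bit multipliers from the table above.
def equivalent_dram_wafers_lost(wafers_converted: int, intensity: float) -> float:
    """Bit-equivalent DRAM wafers removed from the market when a line of
    `wafers_converted` wspm shifts to HBM at the given intensity: the
    converted wafers all leave the DRAM pool, and only wafers/intensity
    worth of bit output comes back as HBM."""
    bits_returned_as_hbm = wafers_converted / intensity
    return wafers_converted - bits_returned_as_hbm

# A hypothetical 10,000 wspm line converted to HBM4 (~4x intensity):
lost = equivalent_dram_wafers_lost(10_000, 4.0)
print(f"Net bit-equivalent wafers lost: {lost:,.0f} of 10,000")
```

Three-quarters of the line's bit output simply disappears from the conventional-DRAM supply pool, which is why HBM conversion tightens the whole market rather than one segment.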
Step 3: Current HBM Allocation — What Share Is Already Committed?
~20–25% of industry DRAM wafer capacity is allocated to HBM (SK Hynix ~35–40%). At 3–4× intensity, the effective bit output redirected from conventional DRAM is far larger than the share percentage implies.
Step 4: The Supply Gap Walk — Adding $60B In Capex Does Not Bridge It
Step | Walk Item | Calculation / Logic | Net Wafer Availability for Conventional DRAM |
A | Total industry DRAM wafer capacity (calendar 2026) | ~760,000 wspm across three suppliers | 760,000 wspm |
B | Capacity already committed to HBM3E production | ~22% of total (~167,000 wspm at 3× intensity = ~56,000 equivalent DRAM wafers) | 604,000 wspm |
C | HBM4 ramp displacing HBM3E lines (H2 2026) | ~5% of capacity shifts from HBM3E → HBM4; each wafer now consumes 4× rather than 3×, adding ~13,000 wafer-equivalents of incremental consumption | 591,000 wspm |
D | Incremental HBM demand from Stargate / Rubin ramp | Industry analysts estimate AI-specific demand absorbs additional 8–10% capacity in H2 2026 | 510–530,000 wspm for non-HBM |
E | Demand that cannot be served at current capacity (Micron management) | Management commentary implies at least ~35% of inbound demand requests cannot be fulfilled | ~35% unmet — structural shortage persists |
F | New capex in 2026 ($61B industry, ~$20B Micron) | Primarily equipment upgrades and TSV expansion. Micron's ID1 fab NOT operational until 2027. No new cleanroom in 2026. | ~0% new capacity from 2026 capex in 2026 |
G | Does $60B bridge the 35% gap? | NO. New fabs: Micron ID1 → 2027. Samsung P5 → 2028. SK Hynix Yongin → 2027. 2026 capex buys equipment for 2027–2028 output. | Supply gap persists through at least 2026. Partial relief 2027. |
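Steps A through D of the walk reduce to simple subtraction. A sketch using the table's own approximate figures:

```python
# The supply-gap walk (Steps A-D) from the table above, as arithmetic.
# All figures are the approximate estimates cited in the text.
total_wspm = 760_000                  # Step A: industry DRAM wafer capacity
after_hbm3e = 604_000                 # Step B: net of HBM3E commitments (table)
hbm4_drag = 13_000                    # Step C: HBM4 ramp, 4x vs 3x intensity
after_hbm4 = after_hbm3e - hbm4_drag  # 591,000 wspm

# Step D: AI-specific demand absorbs a further 8-10% of total capacity
ai_drag_low, ai_drag_high = int(total_wspm * 0.08), int(total_wspm * 0.10)
remaining = (after_hbm4 - ai_drag_high, after_hbm4 - ai_drag_low)

print(f"After HBM4 ramp: {after_hbm4:,} wspm")
print(f"Left for conventional DRAM: {remaining[0]:,}-{remaining[1]:,} wspm")
```

The output lands in the table's 510–530,000 wspm range for non-HBM supply, roughly a third below the nominal wafer base before Step E's unmet demand is even considered.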
The correct mental model: the $60+ billion in 2026 DRAM capex is a 2027–2028 story, not a 2026 story. Every dollar being spent this year on cleanroom construction, equipment installation, and node qualification will produce its first commercial wafer output between 12 and 36 months from now. Micron's FY2026 capex of $20 billion is being spent on an Idaho fab that will not produce a single revenue-generating wafer before late 2027.
The HBM4 compounding problem: The HBM3E→HBM4 transition (3× to 4× intensity) means the same wafer base produces fewer bits through 2027 — demand-led, not capacity-driven. Every line converted to HBM4 tightens conventional DRAM availability simultaneously. It is therefore possible that the capacity coming online in 2027–2028 is fully absorbed by the shift toward majority HBM3E/HBM4 production: a growing wafer count could still deliver a flat, or even lower, absolute number of memory bits for the sector as a whole.
TECHNOLOGY ADVANTAGE — POWER, PROCESS, AND THE CHINA WALL
Micron vs. Korean Competitors — Power Efficiency as Differentiation
Micron's HBM3E delivers approximately 30% lower power consumption than competing products for equivalent bandwidth. In data centers where energy costs are a CEO-level concern and where cooling constraints limit watts per rack, a 30% power saving per memory stack is an engineering decision, not a procurement preference. For HBM4, Micron has delivered 2.8 TB/s bandwidth at 11+ Gbps per pin — 2.3× the bandwidth of its own HBM3E — with a further 20% power efficiency improvement. The company holds 621 HBM-related patents, nearly double SK Hynix's 315 filings.
Dimension | Micron | SK Hynix | Samsung |
HBM3E Power | ~30% lower vs. competition — industry leading | Strong; MR-MUF packaging thermal advantage | TC-NCF yield issues in 2024-25; recovering |
HBM4 Bandwidth | >2.8 TB/s; 11+ Gbps sampled at GTC 2026 | 10 Gbps target; 40% power efficiency vs HBM3E | 1γ process for HBM4 — potential cost advantage |
HBM Patents | 621 patents (2018-2026); ~2× nearest rival | 315 patents; proven execution | Largest wafer fab capacity globally |
NVIDIA VR Qualification | High-volume HBM4 for Vera Rubin CQ1 2026 | Primary HBM4 supplier; ~70% VR share estimated | Ramp expected; ~30% VR share targeted |
US Strategic Status | ONLY US-HQ'd HBM supplier; $6.4B CHIPS grant | South Korea; strong US ally | South Korea; US-China tensions create complexity |
The China Gap — Why No Fourth Competitor Is Coming
CXMT and YMTC are real companies with real government backing. Both have genuine progress in standard DDR4 and DDR5. Neither is a credible HBM threat within the investment horizon of this thesis. The barriers are not primarily financial — they are technical and regulatory. HBM requires TSV equipment from Applied Materials and Lam Research, advanced packaging capability, and access to TSMC's logic base die for HBM4 — all under export control restrictions. Chinese manufacturers are currently struggling to clear technical hurdles at HBM2E level, three generations behind the commercial frontier. Where China does compete — DDR3, DDR4, and increasingly DDR5 commodity DRAM — it will eventually pressure Micron's non-HBM segments. It is not a risk to HBM pricing or Micron's AI infrastructure position within this thesis window.
WHAT THE CONSENSUS IS MISSING — FOUR ANGLES NOBODY IS MODELLING
The generic Micron bull case is: HBM demand is strong, supply is tight, the multiple is cheap. This is correct but it is also what every sell-side note says. The four points below are what the consensus is structurally underweighting.
1. CHIPS Act Funding — A Subsidy Advantage No Competitor Can Match
Micron has been awarded up to $6.44 billion in CHIPS Act direct funding — the largest memory semiconductor grant in the program’s history. It is the only US-headquartered memory manufacturer, which means it is the only company that will ever receive this level of direct government capital support for domestic memory manufacturing. The $20 billion FY2026 capex figure that investors treat as a burden on FCF is partially offset by $6.4 billion in grants disbursed as construction milestones are hit.
The math: Micron's announced US investment program totals approximately $200 billion over 20 years ($125 billion in New York, $25 billion in Idaho, additional R&D). The $6.4 billion in CHIPS grants — plus the 25% Advanced Manufacturing Investment Credit (AMIC) on eligible capex — means the effective cost of building US capacity is materially lower than the gross capex suggests. Samsung and SK Hynix, building capacity in South Korea, do not have access to equivalent US subsidies. When Micron's US fabs come online in 2027, they will have a cost structure advantage over Korean-based competitors that will not diminish over time.
2. SOCAMM2 — The Platform Lock-In Beyond HBM That No One Is Modelling
At NVIDIA GTC 2026, Micron announced it was the first company to mass-produce SOCAMM2 memory modules for the Vera Rubin platform — a proprietary form factor that replaces standard SO-DIMM in next-generation AI server deployments. Micron is shipping 192GB SOCAMM2 in high volume, with a portfolio from 48GB to 256GB for Vera Rubin NVL72 and standalone Vera CPU systems.
Why this matters: SOCAMM2 is not a JEDEC standard. It is platform-specific. Every Vera Rubin system deployed by a hyperscaler requires SOCAMM2 — not standard DDR5, not LPDDR5X, but Micron's SOCAMM2. Combined with HBM4, Micron is now the memory supplier for every major memory socket in the Vera Rubin platform. HBM for the GPU, SOCAMM2 for the CPU, PCIe Gen6 SSD for storage — three proprietary or platform-locked products, all from one supplier, all in high-volume production simultaneously. This is a platform position, not a component sale.
3. The Power Grid Problem Is Micron's Pricing Power In Disguise
Data center power consumption has become a national infrastructure issue. Hyperscalers are building gigawatt-scale data centers and struggling to source power. Micron's 30% HBM power efficiency advantage — and the further 20% improvement from HBM3E to HBM4 — directly reduces the power draw of every GPU cluster its memory serves.
The implication: Micron's power efficiency advantage creates a willingness-to-pay dynamic that is not captured in simple ASP analysis. A hyperscaler that can build a 100MW data center instead of a 120MW facility because its memory runs cooler has just saved hundreds of millions in infrastructure.
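The scale of that saving can be sketched. The 20% memory share of cluster power and the $10M-per-MW build cost below are illustrative assumptions for the sketch, not Micron or hyperscaler disclosures:

```python
# Illustrative only: how a 30% HBM power saving scales at facility level.
facility_mw = 120            # planned facility power draw
memory_power_share = 0.20    # ASSUMED share of cluster power drawn by memory
hbm_power_saving = 0.30      # Micron's cited HBM3E efficiency advantage

saved_mw = facility_mw * memory_power_share * hbm_power_saving
build_cost_per_mw = 10e6     # ASSUMED data-center build cost per MW

print(f"Facility power avoided: {saved_mw:.1f} MW")
print(f"Avoided infrastructure cost: ${saved_mw * build_cost_per_mw / 1e6:.0f}M")
```

Even under these conservative assumptions the saving is tens of millions of dollars per site, before counting ongoing energy costs — which is the willingness-to-pay dynamic the text describes.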
4. The Inference Era Will Consume More HBM Per Chip Than The Training Era
The current bull case for Micron is almost entirely framed around AI training — the large GPU clusters that teach models. But the next phase of AI deployment is inference — running models in production for billions of users simultaneously. Inference has a different memory profile from training: it requires holding large model weights in HBM across many chips simultaneously, and the KV cache (the memory that stores previous tokens in a conversation) grows linearly with context length and concurrent users.
Industry analysts estimate that inference workloads will consume more cumulative HBM than training by 2027. The reason: training runs are time-limited events; inference is continuous. A model serving 100 million users 24/7 never releases its HBM allocation. As hyperscalers deploy Vera Rubin clusters for inference at scale — and as context windows extend to 1 million tokens and beyond — the amount of HBM required per deployed model grows faster than the model sizes themselves. This is a structural HBM demand driver that is almost entirely absent from the consensus models, which are overwhelmingly trained-centric.
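The KV-cache claim is straightforward to size. A rough sketch using generic transformer dimensions (the 80-layer, 8-KV-head, 128-dim configuration below is a hypothetical 70B-class model, not any specific deployment):

```python
# Why inference is HBM-hungry: rough KV-cache sizing.
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context_len: int, dtype_bytes: int = 2) -> int:
    """Bytes of KV cache for one sequence: keys AND values (the factor
    of 2) stored per token, per layer, at fp16 (2 bytes) by default."""
    return 2 * layers * kv_heads * head_dim * context_len * dtype_bytes

# Hypothetical 70B-class model with grouped-query attention:
per_user = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                          context_len=128_000)
print(f"KV cache per 128k-token user: {per_user / 1e9:.1f} GB")
print(f"For 1,000 concurrent users: {per_user * 1000 / 1e12:.1f} TB of HBM")
```

One long-context user consumes ~42 GB of HBM for cache alone — on top of the model weights — and the figure scales linearly with both context length and concurrency, which is exactly why deployed inference never releases its HBM allocation.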
THE NUMBERS — A BUSINESS UNRECOGNISABLE FROM TWO YEARS AGO
Metric | FQ2 2025 | FQ1 2026 | FQ2 2026 | FQ3 2026 Guide |
Revenue | $8.1B | $13.6B | $23.9B | $33.5B |
YoY Revenue Growth | +38% | +57% | +196% | +260%E |
Non-GAAP Gross Margin | ~44% | ~58% | ~76% | ~81% |
Operating Cash Flow | $3.9B | $8.4B | $11.9B | — |
Free Cash Flow | ~$0.9B | ~$3.0B | ~$6.9B | — |
Non-GAAP EPS | $1.56 | ~$6.00 | $12.20 | $19.15E |
CMBU Revenue | $2.9B | $5.3B | $7.7B | — |
CMBU Gross Margin | 55% | 66% | 74% | — |
FQ3 2026 single-quarter revenue guidance of $33.5 billion exceeds Micron's full-year revenue in every year of the company's history through FY2024.
VALUATION — THE MULTI-TRILLION DOLLAR COMPARISON THAT EXPOSES THE ANOMALY
The most intellectually honest way to understand Micron's current valuation is to place it next to the companies it shares the global market capitalization table with — and then explain why investors who pay 24× earnings for NVIDIA and 27× for Apple are apparently unwilling to pay 7× forward earnings for the company making the memory chips without which none of those businesses can run their AI infrastructure.
Company | Market Cap | Fwd P/E | 2027E Net Profit | TTM FCF Yield | Relationship to AI Memory |
NVIDIA | ~$4.6T | ~24x | ~$150B+ | ~2.0% | Customer — requires HBM for every GPU |
Alphabet (Google) | ~$4.2T | ~19x | ~$100B+ | ~2.1% | Customer — $185-190B capex 2026 |
Apple | ~$3.9T | ~27x | ~$120B+ | ~3.5% | Customer — on-device AI memory |
Microsoft | ~$3.2T | ~28x | ~$90B+ | ~2.0% | Customer — $190B capex; $25B from memory prices |
Amazon (AWS) | ~$2.8T | ~30x | ~$80B+ | ~2.2% | Customer — $200B capex 2026 |
Meta | ~$1.7T | ~22x | ~$60B+ | ~3.5% | Customer — memory pricing key capex driver |
TSMC | ~$2.0T | ~21x | ~$50B+ | ~2.8% | Partner — logic base die for HBM4 |
Broadcom | ~$1.9T | ~26x | ~$40B+ | ~2.2% | Peer AI infra supplier; custom ASICs |
Micron (MU) | ~$735B | ~7x | ~$90B+E | ~1.4% | Supplier — the bottleneck they all depend on |
Micron trades at a roughly 72% discount to the average forward P/E of its eight largest customers and peers. Every single company on that list is spending billions of dollars to secure Micron's output — and yet Micron is priced as though the cycle is about to end. Micron does not deserve the multiple of an NVDA, AMZN or AAPL, but it could justify a multiple at 50–75% of its peers'.
Cash Generation vs. Capital Commitment — The FCF Equation
The important question for investors with a 2–3 year horizon is the cashflow equation: how much capital does Micron need to deploy to sustain this trajectory, and how much residual FCF does that leave? The gross capex of $20B in FY2026 and ~$22–25B in FY2027 looks heavy. Net of the $6.4B CHIPS Act grant disbursed over the construction period and the 25% AMIC tax credit, the effective capex burden is considerably lower.
Item | FY2025 (Actual) | FY2026E | FY2027E |
Revenue | $37.4B | $75-80B | $100-120B |
Gross Margin | ~39% | ~70%+ | ~75-80% |
Operating Cash Flow | ~$14B | ~$45-50B | ~$65-80B |
Capital Expenditure (gross) | ($12.9B) | ($20B) | ($22-25B) |
CHIPS Act grants + AMIC credit (est.) | ~$0.5B | ~$2B | ~$2.5B |
Net Effective Capex | ~($12.4B) | ~($18B) | ~($20-22B) |
Free Cash Flow (net of grants) | ~$1.6B | ~$27-32B | ~$43-58B |
FCF Yield vs. $735B Market Cap | ~0.2% | ~3.7-4.4% | ~5.9-7.9% |
The two-year cumulative picture: Micron will generate an estimated $110–130 billion in operating cash flow over FY2026 and FY2027 combined. Net capex over the same period, after CHIPS Act support, is approximately $38–40 billion. Residual free cash flow over two years is approximately $70–90 billion — against a current market cap of $735 billion. That is a 2-year cumulative FCF yield of approximately 10–12% from a contracted, sold-out business while simultaneously building the largest domestic memory manufacturing program in US history.
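Summing the FY2026E and FY2027E columns of the table above gives the cumulative picture. The low/high pairing below (low OCF with high capex, and vice versa) is a simplifying assumption to bound the range:

```python
# Consistency check on the FY2026E-FY2027E columns above (all $B):
# cumulative two-year free cash flow against the $735B market cap.
ocf_2yr = (45 + 65, 50 + 80)          # operating cash flow, low / high
gross_capex_2yr = (20 + 22, 20 + 25)  # gross capex, low / high
grants_2yr = 2 + 2.5                  # CHIPS grants + AMIC credit (est.)
market_cap = 735

net_capex = (gross_capex_2yr[0] - grants_2yr, gross_capex_2yr[1] - grants_2yr)
fcf_low = ocf_2yr[0] - net_capex[1]   # conservative: low OCF, high capex
fcf_high = ocf_2yr[1] - net_capex[0]  # optimistic: high OCF, low capex

print(f"2-yr cumulative FCF: ${fcf_low:.1f}B to ${fcf_high:.1f}B")
print(f"2-yr FCF yield: {fcf_low / market_cap:.1%} to {fcf_high / market_cap:.1%}")
```

The bounds come out at roughly $70–93 billion of cumulative free cash flow, a 9–13% two-year yield on the current market cap.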
MARKET PRICING vs. OUR THESIS — THE EPS BATTLEGROUND
Micron trades at ~7× forward earnings. That is not a valuation anomaly — it is the market’s verdict: the earnings stream will not last. We disagree. The gap between a boom-bust framework and a structural inflection framework is the entire thesis.
The market’s model: ~$60B net profit in FY2026 (~$53 EPS), then the cycle turns — HBM supply catches up, DRAM floods back, and earnings collapse to ~$10B (~$9 EPS) by FY2027. It has seen this exact script in 1996, 2001, 2008, and 2016. The stock is cheap at 7× because the market believes the “E” is about to collapse.
Our thesis is materially different — and it does not require optimism. It requires reading the LTA structure, the wafer mathematics, and the inference demand curve. We expect Micron to generate net profit of roughly $60 billion in FY2026, $75–90 billion in FY2027, and $90–120 billion in FY2028 — not a single-year spike followed by a collapse, but a sustained plateau of profitability that the market has never seen from a memory company and therefore does not know how to price.
What the Market Is Pricing: Boom, Then Bust
Metric | Market: FY2026 (Boom) | Market: FY2027 (Bust) | Our Thesis: FY2026 | Our Thesis: FY2027 | Our Thesis: FY2028 |
Net Profit | ~$60B | ~$10B (↓83%) | ~$60B | ~$75–90B | ~$90–120B |
EPS (1.13B shares) | ~$53 | ~$8.8 | ~$53 | ~$66–80 | ~$80–106 |
Market Narrative | Enjoy it while it lasts | “There’s the bust” | Contracted. Held. | Plateau expands | Re-rates up |
Implied P/E at $668/share | ~12.6× | ~75× (bust-year optics) | ~12.6× | ~8.4–10.1× | ~6.3–8.4× |
The contrast in one line: the market model has profits spiking in 2026 and then collapsing 83% in 2027; our thesis has profits holding or growing through FY2028, with each new five-year SCA adding a fresh increment of contracted earnings. (Per-share figures above are computed at $668/share.)
Why the Market’s Bust Scenario Is Wrong This Time
The bust script requires supply to overtake demand, pricing to collapse, and no contractual protection — none of which is in place through 2028.
On supply: Micron ID1 ships late 2027, SK Hynix Yongin is 2027–2028, Samsung P5 is 2028. Every HBM4 conversion absorbs ~4× the wafer starts of standard DRAM. New capacity is a 2028 risk at earliest.
On pricing: HBM revenue is locked in five-year SCAs. When the market prices a bust in 2027, it is assuming contracts dissolve or demand collapses — neither is credible against $725B in 2026 hyperscaler capex with supply still constrained through their own disclosures.
On contracts: the LTA-to-SCA shift is the structural change the market has not processed. Each new five-year SCA adds a block of earnings that cannot be taken away by spot-market dynamics — and forces the market to reprice from “one-year spike” to “five-year stream.” Management confirmed additional multi-year SCA discussions across markets in Q2 2026.
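The wafer arithmetic in the supply argument can be made concrete with a toy model. A sketch, assuming a normalized fab where one wafer start yields one unit of commodity DRAM bits and HBM4 absorbs ~4x the wafer starts per bit (the article's figure); the conversion fractions below are hypothetical illustrations:

```python
# Toy model: converting DRAM wafer starts to HBM4 shrinks total bit supply.
# The 4x wafer-intensity factor is the article's figure; the conversion
# fractions are hypothetical.
HBM4_WAFER_MULTIPLE = 4.0  # wafer starts per bit vs. standard DRAM

def bit_output(hbm_wafer_fraction: float) -> tuple[float, float]:
    """Normalized (commodity, HBM) bit output when a given fraction of
    wafer starts is diverted to HBM4. Baseline: all-commodity output = 1.0."""
    commodity_bits = 1.0 - hbm_wafer_fraction
    hbm_bits = hbm_wafer_fraction / HBM4_WAFER_MULTIPLE
    return commodity_bits, hbm_bits

for frac in (0.10, 0.25, 0.40):
    commodity, hbm = bit_output(frac)
    print(f"{frac:.0%} of wafers to HBM4: commodity {commodity:.0%}, "
          f"HBM {hbm:.1%}, total {commodity + hbm:.1%} of baseline bits")
```

Diverting 40% of wafer starts leaves total bit output at 70% of baseline — the mechanism by which the HBM4 ramp tightens commodity DRAM supply even with zero demand growth.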
The SCA Re-Rating Mechanism — How Each New Contract Compounds the Multiple
The re-rating is not predicated on higher earnings — it is predicated on the market accepting that the earnings are durable. Five-year SCAs do not appear on the balance sheet; they are disclosed qualitatively in the 10-Q. The market underweights what it cannot measure. The cash flows will be very measurable when they arrive.
Each SCA announcement and each non-peak quarter chips away at the bust narrative. The arithmetic: 15× on $60B of net profit (~$53 EPS) is ~$800/share vs. today’s $730. 20× on $80B (~$71 EPS) — still a discount to comparable infrastructure suppliers — implies above $1,400. That is not a price target. It is what happens when a market pricing a cyclical realizes it owns an infrastructure franchise.
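The re-rating arithmetic above reduces to one line: implied price equals net profit divided by share count, times the multiple. A sketch using the paragraph's numbers (the $60B/$80B profit levels and the 15x/20x multiples are the article's illustrative scenarios, not forecasts):

```python
# Implied share price under the article's re-rating scenarios.
SHARES_B = 1.13  # diluted shares outstanding, billions (article's figure)

def implied_price(net_profit_b: float, pe_multiple: float) -> float:
    """Share price implied by annual net profit ($B) at a given P/E multiple."""
    return (net_profit_b / SHARES_B) * pe_multiple

print(f"15x on $60B: ${implied_price(60.0, 15.0):,.0f}")  # ~$796, the "~$800/share"
print(f"20x on $80B: ${implied_price(80.0, 20.0):,.0f}")  # ~$1,416, "above $1,400"
```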
THE HONEST RISKS
The conventional memory cycle reasserts: HBM is contracted and capacity-constrained, but DRAM and NAND remain cyclical in their standard segments. If consumer PC and smartphone demand weakens materially, non-HBM margins could compress even while HBM holds. Watch the channel inventory data in the Mobile and Client segments each quarter.
HBM supply overtakes demand post-2027: All three suppliers are building capacity at pace. Samsung P5 online 2028. SK Hynix Yongin 2027. Micron ID1 late 2027. If AI infrastructure build rates plateau — whether from macro, regulation, or demand saturation — HBM pricing could soften faster than capacity additions roll in.
China revenue and geopolitical exposure: Micron derives meaningful revenue from China. Export restrictions, tariffs, or further escalation in the semiconductor trade war are live risks. YMTC patent litigation in China is an ongoing overhang.
Qualification risk on future GPU platforms: Micron's success depends on remaining qualified for each successive NVIDIA GPU generation. SK Hynix has the entrenched primary supplier relationship. Losing a design win — or being second-sourced with reduced allocation — would materially affect the HBM revenue trajectory. The Vera Rubin qualification is confirmed; future platforms beyond that are not yet public.
CHIPS Act risk — milestone disbursement: The $6.4B in CHIPS grants is disbursed on completion milestones, not up front. Delays in fab construction, changes in US semiconductor policy, or failure to meet production milestones would defer the grant income that partially offsets capex in the FCF model.
Valuation already reflects significant progress: At $730 the stock is up roughly 700% over 52 weeks. The forward P/E of ~7x is cheap against peers, but the stock is not at a trough valuation. The re-rating from commodity discount to infrastructure premium has partially, but not fully, occurred.
THE BOTTOM LINE
The investing framework that built this newsletter — cashflow inflection points in neglected sectors — does not require obscurity. It requires mispricing. And Micron is, right now, one of the most obviously mispriced cashflow machines in the public market.
The facts are not in dispute. Micron is generating $11.9 billion in operating cash flow in a single quarter. It is guiding to $33.5 billion in quarterly revenue at 81% gross margins. Its entire 2026 HBM supply is under long-term contract. Its customers' prepayments — the deposits they paid to secure supply during the shortage of 2024 — have been fully shipped and recognized. The transition from deposit LTAs to multi-year SCAs marks a structural upgrade in revenue quality: the business is no longer living quarter to quarter on spot DRAM economics.
What the consensus is underweighting: Micron holds the only US-based HBM manufacturing program with $6.4 billion in government grants reducing the effective cost of its capacity expansion. It has co-designed its memory into every major socket of the Vera Rubin platform — HBM4, SOCAMM2, and PCIe Gen6 SSD — creating a platform lock-in that looks nothing like the commodity DRAM model the market is using to value it. And the inference era, which is only beginning, will consume more cumulative HBM than the training era that drove this initial re-rating.
The re-rating catalyst is not a single event. It is the accumulation of quarters in which earnings guidance proves conservative, margins hold, and the wafer mathematics of HBM4 transition continues to tighten supply. At some point, the multiple normalizes toward the infrastructure premium it deserves. The gap between 7x and the semiconductor industry average of 34x is the opportunity. The contracted revenues, the government subsidy, and the platform lock-in are the margin of safety.
DISCLAIMER: This newsletter is for informational and educational purposes only. Nothing published here constitutes personalized financial or investment advice. All investments carry risks including the possible loss of principal. Do your own research and consult a qualified financial adviser before making any investment decision.

