
Tesla AI6 chip delayed ~6 months as Samsung 2nm production slips

Daniel Nenni

Admin
Staff member


Tesla’s next-generation AI6 chip, the processor designed to power its autonomous vehicles, Optimus robots, and AI data centers, has been delayed by approximately six months. The setback stems from Samsung’s 2-nanometer production line, where a postponed multi-project wafer (MPW) run is pushing the chip’s mass production timeline into late 2027.

The delay adds to a growing pattern of chip timeline slippage for Tesla, which is still waiting on its AI5 chip to reach volume production after Elon Musk said the design was “almost done” in January — six months after claiming it was “finished.”

Samsung’s 2nm line isn’t ready on schedule
According to a report from Korean trade publication The Elec, the MPW prototype run for Samsung’s 2nm process, originally slated for April, has been postponed by roughly six months. The delay affects not just Tesla but other Samsung 2nm foundry customers as well, including South Korean AI chip startup DeepX, which had planned to tape out its DX-M2 processor on the same process node.

DeepX’s DX-M2, an on-device generative AI chip capable of running models with up to 100 billion parameters at just 5 watts of power consumption, was originally set for mass production in the second quarter of 2027. That timeline has now shifted: quality testing won’t begin until at least Q3 2027, with full-scale sales expected in Q4 2027.


The ripple effect illustrates a key vulnerability in Tesla’s semiconductor strategy. When Samsung’s foundry schedule slips, every customer on that process node feels the impact.

What this means for Tesla’s chip roadmap
Tesla signed a massive $16.5 billion deal with Samsung last year to produce AI6 chips on the 2nm Gate-All-Around (GAA) process at Samsung’s Taylor, Texas fabrication facility. The contract runs through 2033 and initially secured roughly 16,000 wafer starts per month.

Tesla has since been in discussions to more than double that capacity to approximately 40,000 wafers per month — a sign of just how central the AI6 chip is to the company’s plans across self-driving vehicles, robotics, and AI infrastructure.

But none of that expansion matters if Samsung can’t get the 2nm process running on time. The AI6 chip is not expected to enter Tesla vehicles or robots before 2028, and this delay makes that timeline look increasingly tight.

It also compounds the problem Tesla already has further up its chip roadmap. The company delayed AI5 volume production to mid-2027, forcing the Cybercab to launch on current-generation AI4 hardware. Musk’s ambitious claims of a nine-month design cycle for successive chips — AI6, AI7, AI8, and beyond — look even less credible when the foundry partner building them can’t hit its own manufacturing milestones.

Samsung’s foundry business under pressure
The 2nm delay is particularly significant for Samsung’s foundry division, which has been counting on Tesla’s AI6 contract as a cornerstone of its 2026 profitability targets. Samsung Foundry reportedly aims for 2 trillion won in profit this year, with Tesla AI6 production and high-bandwidth memory (HBM4) logic die manufacturing as key revenue drivers.

Samsung has struggled to keep pace with TSMC in advanced process nodes for years. The 2nm GAA process was supposed to be a turning point — a node where Samsung could demonstrate competitive yields and attract high-value customers. Tesla’s deal was a major validation of that strategy.

A six-month slip on the MPW run suggests Samsung still has yield or process maturity challenges to work through before 2nm is production-ready. For Tesla, the dual-foundry strategy — using both Samsung and TSMC — provides some insurance, but the AI6 chip is specifically allocated to Samsung’s 2nm process.

Electrek’s Take
The pattern here is hard to ignore. Tesla keeps announcing aggressive chip timelines, and reality keeps pushing them back. AI5 was “finished” last July, then “almost done” in January, and won’t reach volume production until mid-2027. Now, AI6 is hitting delays before it even gets to the prototype stage.

We’re not surprised, and to be fair, it’s not all Tesla’s fault. Samsung’s 2nm process is genuinely cutting-edge technology, and getting yields to production-grade levels is one of the hardest engineering challenges in the semiconductor industry. TSMC has its own 2nm node (N2) ramping this year, and even TSMC is proceeding cautiously.

The real question is what this means for Tesla’s broader autonomous driving and robotics ambitions. The company has told investors it plans to spend over $20 billion in capital expenditures this year, with AI infrastructure as a major focus. But in the world of hyperscalers, $20 billion in capex is a rounding error.

Tesla investors are betting on the company being hyper-efficient with its investments, but it’s a big bet.

The silicon that’s supposed to power next-generation autonomy and Optimus keeps slipping further out. At some point, the gap between Musk’s chip roadmap rhetoric and Samsung’s manufacturing reality becomes a material constraint on Tesla’s AI strategy. For now, AI4 has to carry more weight, for longer, than Tesla originally planned. As for AI5, it already feels dated next to AI6 and AI7, which are already deep in the roadmap.

 
I think we all saw this coming. And now Elon wants to build his own mega AI chip fabs to fix the Samsung situation? :ROFLMAO:

Hopefully Lip-Bu Tan can talk some sense into Elon Musk. If making massive amounts of chips for AI is truly that important for Tesla Elon should really focus on risk reduction. Working with Samsung is not at all about reducing risk, in fact, working with Samsung is for risk thrill seekers! :ROFLMAO:
 
You are so eager to spread FUD about Samsung ( as always ) that you don't even bother to properly read your sources? The Elec claimed Tesla delayed the MPW so Samsung cancelled the test run, not the other way around.
 

I have worked with Samsung Foundry many times over the years, so I own my FUD; I don't need to quote other outlets. When I hear rumors, I contact people I personally know to confirm or deny them, and then and only then do I offer my observations and opinions as a working semiconductor professional. After 40+ years in the industry and 15 years on SemiWiki, I can comfortably say that I have more inside sources than any other media outlet in the world, absolutely.

I will post an article "Tesla and Samsung Foundry Relationship Update" today at 10am PT.

Clearly you do not know me, which is fine, but now you do. Feel free to email me via SemiWiki.com if you want to talk privately.
 

I do not have any contacts in the semiconductor industry. For what it is worth, Gemini's Sherlock Holmes seems to reason as follows:


The "Elec Report" (March 10, 2026) has fundamentally changed how analysts view the Samsung-Tesla relationship. While the mainstream media initially blamed Samsung’s yields, The Elec suggests a far more strategic "last-minute detour" by Tesla.

Here is the extensive discussion on the specific design changes rumored to have triggered the MPW cancellation.

1. The "Dojo-on-a-Chip" Pivot

The most explicit reference from The Elec and subsequent analysts (like Teslarati on March 4th) is that Tesla is radically changing the AI6 architecture.

  • The "System-on-Wafer" Integration: Tesla originally planned for the AI6 to be a high-end "Inference Chip" (like AI4 and AI5). However, reports suggest Tesla is now merging the Dojo supercomputer architecture directly into the AI6.
  • Replacing Dojo: Instead of building massive, separate Dojo supercomputer tiles, Tesla wants the AI6 to be powerful enough that a cluster of them in a server rack replaces the role of the Dojo system. This required a "last-minute detour" in the design to include massive interconnects that weren't in the original Samsung 2nm spec.

2. The "Digital Optimus" Memory Buffer

On March 14, 2026, Musk announced "Digital Optimus"—an AI that can process the last 5 seconds of a real-time computer screen video.

  • The SRAM/HBM Adjustment: To handle high-resolution, real-time video "memory" for a humanoid robot or a digital agent, the AI6 needs a massive increase in on-chip memory (SRAM) or a pivot to HBM (High-Bandwidth Memory).
  • The Refusal to "Settle": It is rumored that the original April MPW design used a standard memory configuration. Musk reportedly ordered a "detour" to integrate a more aggressive memory hierarchy to ensure Optimus Gen 3 can process visual data with zero latency, which made the previous test design obsolete.

3. The "9-Month Cycle" Pressure

Musk’s recent push for a 9-month chip cycle (AI5 to AI9) means he cannot afford to test a "sub-par" version of the AI6.

  • Agile Hardware: If Tesla saw a 10% performance gain by waiting 6 months to fix the architecture, Musk would rather cancel the $50M Samsung test run now than be "locked in" to an inferior design for the next two years.
  • Validation vs. Fabrication: Experts from Tom’s Hardware (January 2026) noted that the bottleneck for this 9-month cycle isn't the fab, but the verification. By pulling the MPW, Tesla is likely giving its verification teams more time to "stress-test" the new Dojo-hybrid architecture before committing to a 2nm mask set that costs over $50 million.

The Verdict: Who is "Right"?

The truth is likely a combination:

  1. Samsung's fault: Their 2nm yields were likely poor enough that they couldn't offer Tesla a "guaranteed" success for the April run.
  2. Tesla's fault: Seeing Samsung's weakness, Tesla decided to use the time to rip up the blueprints and design a much more ambitious chip (the Dojo-hybrid) that they could then "threaten" to take to Intel 18A.
By cancelling the MPW, Tesla effectively "fired" Samsung as their exclusive partner, setting the stage for the March 21st Terafab launch.





The details provided are based on a series of verified news reports, official social media announcements from Elon Musk, and technical supply chain analysis that has emerged over the last week (March 9–15, 2026).

Here are the explicit, verifiable references for the components of this "story":

1. The "Digital Optimus" / "Macrohard" Announcement

  • Date: March 11–14, 2026.
  • Source: Official posts by Elon Musk on X and subsequent detailed coverage by Teslarati and Drive Tesla Canada.
  • Explicit Detail: Musk described "Digital Optimus" (nicknamed Macrohard) as a joint project between Tesla and xAI. He explicitly defined the architecture as a "System 1" (instinctive) and "System 2" (thinking) model, where the AI processes the past 5 seconds of real-time computer screen video to emulate human office work.
  • Reference: Teslarati, March 11, 2026

2. The "Samsung Slip" and MPW Cancellation

  • Date: March 10–12, 2026.
  • Source: The Elec (South Korean electronics industry daily) and Why AIBase.
  • Explicit Detail: The reports state that Samsung’s 2nm Multi-Project Wafer (MPW) shuttle, originally scheduled for April 2026, was postponed by six months.
  • The "Tesla Responsibility" Angle: The Elec specifically reported that this was a "passive withdrawal" caused by Tesla’s last-minute detour in its AI6 production plan and supercomputer investment strategy. This delay forced other customers, like the South Korean AI firm DeepX, to also delay their 2nm chip production.
  • Reference: Why AIBase / The Elec, March 10, 2026

3. The "Terafab" 7-Day Countdown

  • Date: March 14, 2026.
  • Source: Elon Musk on X; verified by Tom’s Hardware and TeslaHubs.
  • Explicit Detail: Musk posted: "Terafab Project launches in 7 days." This sets the formal launch date for March 21, 2026.
  • Reference: Tom's Hardware, March 15, 2026

4. The "Dojo-Hybrid" and 9-Month Cycle Theory

  • Date: January 18 – March 12, 2026.
  • Source: TechRadar and AI CERTs.
  • Explicit Detail: Musk confirmed on January 18th that the AI5 design is complete and the company is moving to a 9-month design cadence for AI6, AI7, etc.
  • The Theory Connection: The "theory" that AI6 is a Dojo-hybrid comes from Musk's January 2026 announcement that Tesla is restarting the Dojo 3 supercomputer project specifically because the AI5/AI6 designs are now "in good shape" to support it. Industry analysts (such as those on YouTube's Brighter with Herbert) have explicitly discussed the AI6 as the architecture that replaces separate Dojo tiles with a unified "training and inference" design.

Summary: Connecting the Dots

The "story" is the result of connecting these four specific, documented events from the last 7 days:

  1. Samsung reports a 6-month delay because Tesla changed their plans.
  2. Tesla announces a new AI agent (Digital Optimus) that requires much higher memory/compute than a standard car chip.
  3. Tesla restarts its Dojo 3 project, shifting the AI6 design from a simple "FSD chip" to a "Supercomputer-on-a-chip."
  4. Musk announces his own Terafab launch for March 21st to solve the resulting capacity gap.
 

Whereas I know people on the Dojo team and at the EDA/IP companies Tesla uses. But can I be trusted?

Gemini: Can Daniel Nenni be trusted?

In the semiconductor world, Daniel Nenni is considered a highly credible industry veteran, though like any specialist, his "trustworthiness" depends on whether you are looking for technical history or market speculation.

He is best known as the founder of SemiWiki, a massive community for chip design and manufacturing professionals. Within the industry, he is respected for his deep, insider knowledge—often described as having "forgotten more about chips than most people will ever learn."

Why he is generally trusted:​

  • Decades of Experience: Nenni has over 40 years of experience in Electronic Design Automation (EDA), IP, and chip design. He isn't just a journalist; he worked in the trenches during the industry's most formative years.

  • Authoritative Works: He literally "wrote the book" on the industry’s most significant shift with Fabless: The Transformation of the Semiconductor Industry. It is widely cited as a definitive history of how the world moved from integrated manufacturers to the TSMC-led foundry model.

  • Technical Depth: Unlike general tech analysts, Nenni understands the physics and software complexities (PDKs, DRC manuals, and EDA tools) that make or break a chip.

The "Caveats" (Why some people are skeptical):​

  • "Insider" Bias: Because he is so well-connected, his takes can sometimes feel overly confident or reductive. Some critics in investor communities (like on Reddit's r/AMD_Stock) have noted he occasionally has "weird takes" or perceived biases toward certain companies (historically Intel) that haven't always aged well.

  • Business vs. Technology: He is strongest when discussing fab processes (how chips are made) and the foundry ecosystem. When he moves into high-level business strategy or predicting stock outcomes, some find his logic "baffling" or too focused on the manufacturing side at the expense of market dynamics.

  • The SemiWiki Business Model: SemiWiki is a community platform, but it also has partnerships with major companies like TSMC and Cadence. While this gives him incredible access, it means he is part of the ecosystem he covers, which can lead to a more "corporate-friendly" tone than a disruptive investigative journalist might have.

Summary​

If you are looking for an accurate technical history of the semiconductor industry or an explanation of how chip manufacturing works, Daniel Nenni is one of the most reliable sources in existence.

However, if you are using his commentary for stock trading, treat him as one perspective among many. He is an expert in how chips are made, but the market is often driven by factors (like consumer demand and software moats) that fall outside the fab walls.
 
Btw, imo the announced 6-month slip translates to perhaps a 16-month real slip once you account for all the work to take on the new architecture and to secure the HBM stacks Elon will need for it. Who takes the over? Who takes the under on this line I just set?
 
I would side with Daniel Nenni.
I’ve done a lot of MPW runs.
MPW should be used for concept verification, not pre-production.
Whenever there’s a good MPW opportunity, I can always find projects that need to be validated.
If there are no projects to run, it’s mostly the fab’s problem.
That is, of course, assuming we’re talking about a normal, properly run company.
 
Trillion-dollar AI models can say whatever they want; I’m still voting for you.🙂
 
A 6-month delay is not acceptable in the LLM world. A potential 16-month product delay would be a disaster; nobody willingly uses 1.5-year-old models. It was acceptable in "FSD times," since there was virtually no pressure from competition (priorities in automotive are different).

From Musk's Grok:

Recent AI6 Delays (as of mid-March 2026)

Tesla’s AI6 (HW6/A16) chip — the 2nm custom silicon for next-gen FSD, Optimus robots, and data-center inference/training — has slipped another ~6 months.


  • Root cause: Samsung postponed the critical multi-project wafer (MPW) prototype run from April 2026 to ~October 2026. This is tied to ongoing yield and process-maturity issues with Samsung’s 2nm Gate-All-Around (GAA) node at the Taylor, Texas fab.
  • New timeline: Risk/production samples now Q3 2027; mass production late 2027 (originally targeted earlier in 2027). AI6 chips unlikely in vehicles or robots before 2028.
  • This delay also ripples to other Samsung 2nm customers (e.g., South Korean startup DeepX pushed their own chips back to Q4 2027).

The $16.5B Samsung deal (through 2033, starting at 16k wafers/month and negotiating toward 40k) is unchanged — Tesla remains the anchor customer and is reportedly doubling orders to help fill the fab. But the ramp is clearly slower than hoped.


How this affects the overall situation


  • Short-term pain: Cybercab/robotaxi and Optimus Gen 2 will launch on current AI4 (or the upcoming AI5 on TSMC 3nm, now volume mid-2027). Real-world performance gains from AI6 (~2× AI5) are pushed out ~6–12 months. Tesla’s $20B+ 2026 AI capex (training clusters, vehicle compute, etc.) will still deliver volume, just on slightly older silicon.
  • Strategic acceleration: The slip is exactly why Elon Musk announced the Terafab Project on March 14, 2026, with the formal launch in 7 days (March 21). Terafab aims for 100–200 billion AI chips per year (logic + memory + advanced packaging in one U.S. mega-facility) to escape foundry bottlenecks and geopolitical risk (Taiwan + Samsung delays). Musk has said existing partners (TSMC, Samsung, even memory suppliers) simply cannot scale fast enough for Tesla/xAI/SpaceX needs in 3–4 years.
  • Samsung deal viability: Still a net win for now — discounted wafers, U.S. production, dedicated capacity. But repeated 2nm slips make it a “bridge” rather than permanent solution. Tesla’s dual-foundry insurance (AI5 on TSMC) provides breathing room.

Collaboration with Intel?

Highly plausible and already in active discussion — this is now the most credible near-term alternative.


  • Musk has repeatedly said “it’s probably worth having discussions with Intel” (Nov 2025 comments, still referenced in March 2026 reporting). No signed deal yet, but Terafab is explicitly structured to include licensing + capital from Tesla to partners like Intel (or TSMC). Intel’s 18A process is roughly competitive with 2nm/3nm and is U.S.-based (CHIPS Act fabs in Ohio/Arizona).
  • Pros vs. Samsung: Faster U.S. ramp potential, avoids Samsung’s current 2nm teething problems, full domestic control, possible co-optimization on Intel’s packaging tech. Could bridge AI6/AI7 until Terafab is at volume (realistically 2030+).
  • Cons: Intel’s foundry yields have historically lagged TSMC/Samsung; they’re still catching up on advanced nodes.
  • Likelihood: High. Terafab announcement language (“licensing agreements with modern-day chipmakers such as Intel”) points directly here. This would be a hybrid model: Tesla designs + funds capacity, Intel provides process tech and fab lines.


Bottom line

The latest 6-month Samsung 2nm slip is annoying but not catastrophic — AI5 on TSMC gives Tesla a solid 2027 bridge, and the $16.5B deal still locks in massive dedicated U.S. capacity. However, it directly fuels Musk’s urgency for Terafab (launching next week) and makes an Intel partnership the smartest near-term hedge: domestic, scalable, and already on the table. Pure in-house from scratch would still take 4–5+ years, so a Tesla+Intel hybrid is the pragmatic path until Terafab hits full stride. Expect more details March 21 — this is Tesla’s long-term answer to repeated foundry delays.
Can Musk speed up the process?
No, Elon's leadership style would not meaningfully speed up construction or ramp of Intel's or Micron's existing fabs. That's largely outside his (and Tesla's) core expertise right now, and the projects aren't under Tesla's control.


Why Not for Existing Fabs?​


  • Intel's situation: 18A process yields are still ramping (low double-digits earlier; improving but mass production now targeted for 2026, with Panther Lake chips delayed into 2026). Ohio fab construction has seen repeated slips — original 2025 production pushed to ~2030 for first meaningful output. New hiring is happening, but this is a multi-year journey per Intel CEO Lip-Bu Tan. Musk has zero operational say over Intel's tools, cleanrooms, or EUV process development.
  • Micron's situation: No major construction delays — new fabs (Idaho for DRAM/HBM starting mid-2027, Singapore NAND H2 2028, New York ~2030) are on their published timelines. The real bottleneck is demand, not building: entire 2026 HBM/DRAM capacity is sold out under binding contracts; supply will lag AI needs into 2028. Micron is actually ahead on HBM4 (high-volume since Q1 2026).

Elon excels at compressing assembly-style manufacturing (Tesla Shanghai Gigafactory in <1 year, Starbase rapid iteration, Fremont ramps). But leading-edge semiconductor fabs are fundamentally different: they hinge on proprietary process physics, atomic-level yields, and decades of specialized know-how from ASML/Applied Materials-level suppliers. Musk has openly said: “I don’t know how to build a fab yet. I’ll figure it out” while recruiting talent from TSMC, Intel, and Samsung. His aggressive deadlines and first-principles approach work best when he controls the full stack — not when injecting into someone else’s multi-billion, decade-deep operation.


Where Elon's Style Shines


Terafab (formal launch March 21, 2026) is explicitly designed as a hybrid — Tesla supplies capital, insane volume demand (100–200 billion chips/year target), and execution pressure; partners supply process tech and existing fab lines. This is where Musk’s leadership shines:


  • He can poach top talent, enforce brutal timelines, and co-invest to prioritize Tesla lines.
  • Intel is the prime candidate (Musk has repeatedly flagged “discussions with Intel” since late 2025; no signed deal yet, but Terafab details next week are expected to clarify licensing + co-funding). Intel’s U.S. fabs (Arizona, Ohio) + 18A/14A nodes could get Tesla-accelerated capacity far faster than Intel alone.
  • Result: Potentially shave 1–2 years off ramps versus pure foundry waits, because Tesla becomes the anchor customer with skin in the game.

Micron Collaboration (Your Clarified Scope: Memory Priority + Tech Tweaks)​


This is much more realistic and valuable than trying to speed Micron’s fab builds — and it fits perfectly with Terafab.


  • Priority access to advanced memory (HBM/DRAM): Very doable. Micron’s 2026 HBM is fully sold out, but big strategic customers (hyperscalers, AI fleets) are already locking 3–5-year contracts. Tesla’s scale (FSD, Optimus, xAI training) could secure preferential allocation or dedicated capacity expansions, especially in U.S. fabs. Musk is already recruiting Korean HBM talent; Micron could be brought in as the memory stack partner (no public deal yet, but logical next to Samsung logic).
  • Technological tweaks:
    • PHY optimizations & memory cubes: High potential. Micron’s new Singapore HBM advanced-packaging facility (2027 ramp) is perfect for co-designing tighter integration with AI6. Tesla could push custom PHY interfaces or stacked “memory cubes” for lower power/latency in vehicle/robot inference.
    • Parts of accelerators distributed in memory: Excellent fit. This points to Processing-in-Memory (PIM) or near-memory compute architectures. Micron has active R&D here; Tesla could embed portions of the AI6 neural-net accelerators directly into HBM dies. Benefits: massive bandwidth gains, lower energy (critical for edge fleets), and exactly the kind of hardware-software co-design Musk loves. It’s not full logic fab work — it’s advanced packaging + memory co-optimization, where Tesla’s in-house silicon team already excels.

Bottom line: Pure “Elon magic” won’t fix Intel’s Ohio ramp or Micron’s 2027+ timelines on their own — those are deep process-engineering problems outside his current domain. But Terafab’s hybrid model (launching in 6 days) lets his leadership style shine by funding + demanding + executing alongside partners. For Micron specifically, the win isn’t faster fab construction — it’s guaranteed memory supply + custom tweaks (PHY, cubes, distributed compute) that make AI6 dramatically better. Expect Terafab details on March 21 to spell out Intel for logic + memory partners (possibly including Micron). This is exactly how Tesla turns Samsung’s AI6 delays into long-term advantage.
Can he fill the Terafab with tooling, given that demand, especially for EUV, is already high?
No — not in any realistic near-term timeframe (2026–2028). Elon Musk cannot quickly “fill” Terafab with the required EUV (and especially High-NA EUV) tooling because ASML’s production capacity is physically capped and its backlog is already record-breaking due to AI demand from TSMC, Intel, Samsung, and SK hynix. The March 21 launch (announced yesterday) will reveal more details, but nothing public so far indicates Tesla has secured any EUV machines.


ASML Supply Reality (March 2026)​


  • Production rate: ASML ships only ~70–80 EUV tools per year today. Even with aggressive expansion (new factories, supplier scaling), they project “a little over 100” annually by the end of the decade. High-NA EUV (the next-gen $400M machines needed for efficient 2nm and below) is just becoming production-ready (500,000 wafers processed, 80% uptime as of Feb 2026; first commercial units at Intel and Samsung).
  • Backlog: Record ~€39–41B at end-2025 (EUV = 65% of it). 2026 revenue guidance raised to €34–39B purely on AI orders, but analysts openly question whether ASML can physically deliver. Lead times already stretch into 2028+ for new orders.
  • Who gets them first: Intel (first High-NA tool installed), Samsung (trials at Taylor fab), TSMC, SK hynix. These giants have multi-year frame agreements. A new mega-customer like Terafab jumps the queue only with enormous pre-payments and guaranteed long-term volume — which Tesla can offer, but it still doesn’t create new machines overnight.

A single 100,000+ wafer-starts-per-month fab (Terafab’s target scale) needs dozens of EUV scanners. Terafab’s ultimate ambition (scaling toward 1 million wpm) would require more tools than ASML produces in multiple years.
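As a rough sanity check on the "dozens of EUV scanners" claim above, here is a back-of-envelope sketch in Python. Every parameter (EUV layer count per wafer, raw scanner throughput, availability) is an illustrative assumption for this estimate, not a figure from the thread or from ASML.

```python
import math

# Back-of-envelope: EUV scanners needed for a given wafer-start rate.
# All parameters below are illustrative assumptions, not sourced figures.
def scanners_needed(wafer_starts_per_month: int,
                    euv_layers: int = 20,          # assumed EUV mask layers at ~2nm
                    wafers_per_hour: float = 160,  # assumed raw scanner throughput
                    availability: float = 0.80     # assumed uptime fraction
                    ) -> int:
    hours_per_month = 24 * 30
    # Each wafer start needs one exposure pass per EUV layer.
    exposures_per_month = wafer_starts_per_month * euv_layers
    capacity_per_scanner = wafers_per_hour * hours_per_month * availability
    return math.ceil(exposures_per_month / capacity_per_scanner)

print(scanners_needed(100_000))    # -> 22: "dozens" is plausible
print(scanners_needed(1_000_000))  # -> 218: roughly 3 years of ASML's ~70-80/yr output
```

Under these assumptions, a 100k-wpm fab needs on the order of twenty-plus scanners, and a 1M-wpm ambition needs a couple hundred — consistent with the "more tools than ASML produces in multiple years" point.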


How Musk/Terafab Could Still Pull It Off (Hybrid Model)​


Musk’s strategy is not pure greenfield Tesla-owned tools from scratch. From his own comments and reporting:


  • Terafab is explicitly a hybrid: Tesla supplies capital + insane volume demand + execution pressure; established partners supply process tech and secure tooling.
  • Intel partnership is the most discussed path (Musk: “probably worth having discussions with Intel” — still active). Intel already has High-NA EUV tools and CHIPS Act-funded U.S. lines; co-funding could allocate dedicated capacity to Tesla lines faster than a pure order.
  • Existing Samsung deal (Texas fab) and TSMC AI5 already give breathing room. Terafab starts as “capacity expansion” on partner tools before going fully internal.

Musk’s proven advantages:


  • Massive pre-commitments (he has done this with battery cells, rare-earths, etc.).
  • Vertical integration + software tweaks to squeeze more throughput from each tool.
  • U.S. location + national-security angle could unlock faster export approvals or new ASML lines.

But physics and supply-chain reality still win short-term: ASML cannot magic more machines. Industry experts (e.g., SemiAnalysis) call ASML the #1 hard constraint on AI scaling by 2028–2030.


Bottom Line for Tesla’s Situation


The Samsung 2nm/AI6 delays we discussed earlier are exactly why Terafab exists. It helps long-term by locking in dedicated U.S. capacity and pressuring partners, but it does not solve the immediate EUV tooling crunch. Expect:


  • 2026–2027: Mostly still on TSMC/Samsung/Intel tools (possibly some early Terafab lines).
  • 2028+: Real self-sufficiency if Intel collab works and ASML expands.
  • Risk: Multi-year slips (like every new fab), huge capex ($20B+ just for tools + build), and continued foundry dependence.

March 21 details will clarify the exact tooling plan (Intel tie-up? Pre-orders? High-NA timeline?). Until then, Terafab is a brilliant hedge and negotiating lever — not an instant fix for the EUV bottleneck everyone else is already fighting over. Musk can accelerate execution once the tools arrive, but he cannot create the tools themselves.
 
I would side with Daniel Nenni.
I’ve done a lot of MPW runs.
MPW should be used for concept verification, not pre-production.
Whenever there’s a good MPW opportunity, I can always find projects that need to be validated.
If there are no projects to run, it’s mostly the fab’s problem.
That is, of course, assuming we’re talking about a normal, properly run company.

The strange thing here is that even if Tesla canceled its portion of this multi‑project wafer (MPW) run, why couldn’t Samsung continue the project for the remaining fabless customers on the same wafer? Samsung definitely has the financial strength to do that. An MPW is supposed to be a low‑volume, controlled run for fabless customers, and for Samsung, to test their designs and the manufacturing process. How can Samsung allow one customer, Tesla, to make a last‑minute change that kills the rest of the unrelated projects on the same MPW?

It shouldn’t matter whether the chip is for a smart toilet or a smart Lego; the point is that these unrelated projects end up delayed or disrupted because of Tesla. How can Samsung allow this to happen?
 

The DeepX part of the MPW-story from Korean perspective:
https://www.thelec.net/news/articleView.html?idxno=5759

Industry officials say the delay stems from Tesla's schedule. DeepX's DX-M2 is the first external customer chip to use Samsung's 2-nanometer process. Tesla is also developing its AI chip, known as AI6, using the same 2-nanometer node at Samsung Foundry. According to industry sources, the postponement of Tesla's MPW run affected DeepX's schedule as well.

The specific reason for Tesla's MPW delay has not been disclosed. Industry observers speculate that factors including timelines for mass production of autonomous vehicles and humanoid robots, as well as supercomputer investment timelines, may have contributed.

A Samsung Electronics official declined to comment, saying the company cannot confirm matters related to its customers.


Source: THE ELEC, Korea Electronics Industry Media (http://www.thelec.net)
 
Just a bias check -- Electrek used to be fairly pro-Tesla a few years ago, but when the politics and company focus changed, the owner (Fred) sold all of his shares and has been strongly anti-Tesla ever since. Source: https://electrek.co/2024/09/05/i-sold-all-my-tesla-shares-tsla-why/

..

Electrek is correct about the shifts on AI5's delivery dates, the original ETA for delivery with customer vehicles was ~ Q4 2025, but now it's no earlier than end of 2026.

However, I think AI5 is the last "big leap" in architecture, with AI6, AI7 being fast follow-ons with smaller gains. The AI6 delay isn't without consequences, but it's a much smaller step-change than the previous architectures:

2014 - HW1.0 - 40nm Mobileye chip

2016 - HW2.0 (Nvidia PX2) - 12 TOPS, ~ 4GB memory usable per node

2017 - HW2.5 - Minor refresh of 2.0 - 12 TOPS, ~ 8GB memory, reliability improvements

2019 - HW3.0 - Tesla's first chip - 36 TOPS, 8GB memory, more redundancy, stronger CPUs (14nm)

2023 - HW4.0 - ~ 100-150 TOPS estimated, 16GB memory, more storage, significantly more bandwidth

2026 - HW4.5 - custom version of hardware 4 with 3 chips instead of 2, likely for Robotaxi/Cybercab use, may include more RAM

~2027 - HW 5.0 - rumors: "9X the memory of HW 4.0", and "8X more raw compute", with other improvements. Intended to "complete" self driving.

~ 2028+ - HW 6.0 - beginning of a 9-month iteration cycle, "fast follow-on to AI5". Intended for compute. "Clear path to 2X perf over AI5".
 

I'm no expert, but perhaps Musk is doing some restructuring and a serious redesign of AI6 that is taking more time, while many people are leaving the SpaceX/xAI/Tesla group working on his "super-chip" for cars/digital Optimus/AI?

https://electrek.co/2026/03/13/elon-musk-admits-xai-built-wrong-rebuild-tesla-spacex-investment/

https://www.teslarati.com/tesla-xai-digital-optimus-explained/#google_vignette
 
The talent exodus definitely hurts, but I think Daniel Nenni (as usual) hit it on the head: Musk is learning how complex it is to design leading-edge chips. Even with a highly talented and experienced team, you can run into situations you couldn't predict, necessitating rework, respins, or a redesign of the SoC.

It also appears likely that Elon changed requirements for AI6 (and possibly AI5), contributing to the delay. From 7 months ago:


"Elon: "Once it became clear that all paths converged to AI6, I had to shut down Dojo and make some tough personnel choices, as Dojo 2 was now an evolutionary dead end. Dojo 3 arguably lives on in the form of a large number of AI6 SoCs on a single board."
 