Any predictions on interposer pad pitch sweet spot in 2025

cliff

Active member
150um was an old standard sweet spot for pad pitches. I believe the new standard pad pitch is 110um. I assume that the 55-40um pad pitch is pretty pricey currently. It seems that HBMs will push the industry towards interposers with BoW connections (bunch of wires, with high-impedance links) and fewer CML connections (current-mode logic with 50 ohm pull-ups). This makes designs way faster and uses a lot less power. IMO the drawback is the packaging cost (us analog guys can deal with the simulation complications; we don't rely on STA).

Can somebody make a prediction on the tightest pitch to package, say a logic die connecting to an HBM3 (1024 connections) onto an old-process-pitch interposer, for 100 die, which would enable packaging MPWs with HBM3s for a total packaging price of $100-200K? We are prototyping an AI chip and we don't know what pad pitch to demonstrate. Based on Tanj's response in another thread today, it seems that I am underestimating what the industry will soon be capable of.
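To put rough numbers on the pitch question, here is a minimal back-of-the-envelope sketch. It assumes a square, signal-only bump array; a real HBM3 PHY adds many power/ground bumps, so treat these as optimistic lower bounds:

```python
# Back-of-the-envelope: footprint of ~1024 HBM-style signal connections at a
# few candidate pad pitches, assuming a simple square, signal-only bump array.
# Real HBM3 PHYs add many power/ground bumps, so these are lower bounds.
import math

signals = 1024
for pitch_um in (150, 110, 55, 40):
    side_bumps = math.ceil(math.sqrt(signals))      # 32 x 32 grid
    side_mm = side_bumps * pitch_um / 1000.0        # edge length in mm
    print(f"{pitch_um:>3} um pitch: ~{side_mm:.2f} x {side_mm:.2f} mm "
          f"({side_mm * side_mm:.1f} mm^2) bump field")
```

By this crude model the signal field alone is roughly 23 mm^2 per interface at 150um pitch and under 2 mm^2 at 40um, which is the pressure pushing HBM-class interfaces toward the finer pitches.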
 
Well, you have 40um to deal with if you work with HBM3. That seems like a reasonable general-purpose anchor for the next couple of years, since HBM3 will be common in the AI market.

The other way of looking at it is: what is your shoreline, the bandwidth you need across it, and the expected distance to the next chip? If you expect to be adjacent, then the arithmetic is simple enough and you choose the cheapest bumps that will fit. If the answer is really loose, then maybe you are not a demanding enough case to need to be adjacent, and BoW at up to 50mm is what you are looking for.

What are you expecting to connect to? Is it just the HBM3, where you have a geometry which is fairly rigid, or are there connections separate from your HBM? And there is no interposer around for 100 die which supports HBM of any generation that I am aware of. They were developed on Si interposers; you can probably mount them on some high-quality alternatives if you are careful, but the largest ones shown (not shipping) are things like Intel's Ponte Vecchio, with about 50 chiplets, roughly 8 of them HBM, and they are using an active interposer. Very custom. Nvidia has been using oversized silicon interposers.
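As a rough illustration of that shoreline arithmetic, a small sketch; the edge length, bump-row depth, and per-pin rate below are all illustrative assumptions, not figures from any particular standard:

```python
# Rough "shoreline" arithmetic: how much bandwidth fits across a die edge for
# a given pad pitch. Edge length, number of bump rows, and per-pin rate are
# illustrative assumptions only.
def shoreline_gbps(edge_mm, pitch_um, rows, gbps_per_pin):
    pins_per_mm = rows * (1000.0 / pitch_um)    # signal pins per mm of edge
    return edge_mm * pins_per_mm * gbps_per_pin

for pitch in (110, 55, 40):
    bw = shoreline_gbps(edge_mm=5, pitch_um=pitch, rows=4, gbps_per_pin=4)
    print(f"{pitch:>3} um pitch: ~{bw:,.0f} Gbps across a 5 mm shoreline")
```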
 
Sorry, I meant 100 separate packaged SiPs: 1 logic die, 1 HBM3, perhaps a SiGe 130nm transceiver. The cheapest SiP that I can get with HBM3. Is there a 40um => 90um spreader?

Designing with HBM would be amazing, but the packaging cost to prove out the system is my concern. I don't know what to expect 2 years from now. You are correct that flip-chipping at 40um would be ideal.

Edit: IMO, the NRE of the first handful of SiPs (engineering, MPW, package, testing) is the primary concern. Once a system is proven and the company is still in business, the cost was probably worth it and the recurring costs are not as worrisome.
 
ChatGPT said so:

"As an AI language model, I do not have access to specific information about the interposer pad pitch sweet spot in 2025. However, I can provide some general insights on trends and factors that may affect the interposer pad pitch sweet spot in the coming years.

Interposer technology is becoming increasingly important in semiconductor packaging as it enables higher bandwidth, lower power consumption, and smaller form factor. Interposer pad pitch refers to the distance between the pads on the interposer substrate, and the sweet spot is the optimal pitch for a given application in terms of performance, cost, and manufacturability.

One trend that may affect the interposer pad pitch sweet spot is the increasing demand for high-performance computing (HPC) and data center applications. These applications require high-speed interconnects and low latency, which may favor smaller pad pitches to reduce signal skew and crosstalk. However, smaller pad pitches also increase the complexity and cost of manufacturing, which may limit their adoption.

Another trend is the development of new materials and processes for interposer manufacturing. For example, some companies are exploring the use of new materials such as graphene or carbon nanotubes, which may enable smaller pad pitches and higher performance. However, these materials are still in the early stages of development, and their commercial viability and manufacturability remain uncertain.

Overall, the interposer pad pitch sweet spot in 2025 will depend on various factors such as the specific application requirements, technological advancements, and market demand. It is difficult to make specific predictions about the sweet spot, but it is likely that it will continue to evolve and adapt to new challenges and opportunities in the semiconductor packaging industry."
 
Sorry, I meant 100 separate packaged SiPs: 1 logic die, 1 HBM3, perhaps a SiGe 130nm transceiver. The cheapest SiP that I can get with HBM3. Is there a 40um => 90um spreader?

Designing with HBM would be amazing, but the packaging cost to prove out the system is my concern. I don't know what to expect 2 years from now. You are correct that flip-chipping at 40um would be ideal.
100 units, a short run, IIUC? Why do you need HBM3? HBM2 and 2E had 55/73um center-to-center pitches in a staggered arrangement (https://www.anandtech.com/show/9969/jedec-publishes-hbm2-specification); they are still active products and cheaper.

If that is overkill, then look at LPDDR4X. It has good reliability, throughput around 4x what DDR does for the same GB capacity, and it supports a lot of parallel access. That is why Apple loves it. It works fine over a few inches of fairly ordinary board, including organic interposers inside a SiP. Bare chips, but also options with very slim packaging and stacked chips if your thermals allow. LPDDR5 looking forward.
 
2 good answers. Tanj vs Robot (that was crazy!).

We are making an edge AI chip. HBM uses less power and is faster. Do you believe flip-chipping a couple of dies with 40um pitch pads 2 years from now will cost the same as flip-chipping with 110um pads?
 
40um will get cheaper but you still have a differential, and 110um is going to be a much cheaper interposer. Maybe more rugged, too?

LPDDR5 over a short distance is going to give you close to the same power as HBM, maybe better sleep if that matters, and a lot lower cost/GB. Each chip delivers 16 IO at up to 8 Gbps, for 128 Gbps per die, so by the time you get to a 4-chip x64 package you are overtaking HBM1, and with 2 packages you are faster than HBM2 (and much cheaper).
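Just to spell out that arithmetic, a tiny sketch using only the per-pin figures above (no HBM numbers assumed):

```python
# LPDDR5 arithmetic from the post above: 16 IO per chip at an assumed 8 Gbps
# per pin, then scaled to a 4-chip x64 package. No HBM figures included.
io_per_chip = 16
gbps_per_pin = 8
chips_per_package = 4

per_die_gbps = io_per_chip * gbps_per_pin           # 128 Gbps per die
per_pkg_gbps = per_die_gbps * chips_per_package     # 512 Gbps, x64 wide
print(f"per die:     {per_die_gbps} Gbps ({per_die_gbps / 8:.0f} GB/s)")
print(f"x64 package: {per_pkg_gbps} Gbps ({per_pkg_gbps / 8:.0f} GB/s)")
```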

Only you know your market, but it seems worth understanding the tradeoff in depth. Power and throughput will matter a lot for AI.
 
Thank you for those answers. Ed Sperling has amazing videos and the Rambus guys have discussed the differences in detail. We have designed CML and LVDS drivers and receivers and SerDes (1ps jitter, not down to Ian's level), so I understand how much power is wasted on I/O. Even if I am wrong about HBM vs DDR, our environment is focused on GDS2, not Gerber. Interposers are in our wheelhouse. Boards are not. I am trying to figure out when interposers will be practical for smaller companies working on the edge.
 
our environment is focused on GDS2, not Gerber. Interposers are in our wheelhouse. Boards are not.
Fascinating. I would have expected the tool flow for organic interposers to be much the same as for silicon interposers. But yeah, such assumptions can trip you up.
 
I am hopeful that Skywater and old foundries start packaging multiple flip-chipped die.
Intel, TSMC, GF. Ecosystem partners. Skywater would likely find it a distraction, and it is not a cheap or simple investment in plant.
 
I assumed that special packaging was Skywater's sweet spot.

 
While we are on the subject, how much would it cost to convert a fab, say ON 0.35 or Uchip .27, into a 2.5D packaging company? We have a lot of experts here. Some of us believe the 2.5D SiP is as valuable as EUV.
 
While we are on the subject, how much would it cost to convert a fab, say ON 0.35 or Uchip .27, into a 2.5D packaging company? We have a lot of experts here. Some of us believe the 2.5D SiP is as valuable as EUV.
I don't really get that logic (no pun intended). 2.5D packaging doesn't offer any cost-per-transistor reduction, and it seems to add far more design complexity than double patterning did. Unless you are pushing/breaking the reticle limit, I don't know why you wouldn't just design a cheaper, easier-to-design monolithic die on something like 8LPP rather than do some complex 40nm disaggregated thing. On, say, N5 I get that thought process, given the scaling walls that analog and SRAM started hitting on "7nm"-class nodes, but on "10nm"-class nodes these scaling issues had not yet fully reared their ugly heads. The only benefit I can think of is that the modular nature of an MCM design gives some benefits for reducing TTM for future products.
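One quantitative lens on the die-size side of this is yield. Here is a minimal sketch using the simple Poisson defect model; the defect density and die area are illustrative assumptions, and it ignores packaging/assembly cost and known-good-die testing, i.e. exactly the complexity argued above:

```python
# Die-yield side of the monolithic-vs-chiplet tradeoff, using the simple
# Poisson model Y = exp(-A * D0). Defect density and die area are assumed for
# illustration; packaging cost and known-good-die test cost are ignored.
import math

D0 = 0.1                   # assumed defects per cm^2
area_mono_cm2 = 6.0        # one ~600 mm^2 monolithic die
n_chiplets = 4
area_chiplet_cm2 = area_mono_cm2 / n_chiplets

y_mono = math.exp(-area_mono_cm2 * D0)
y_chiplet = math.exp(-area_chiplet_cm2 * D0)

print(f"monolithic die yield:    {y_mono:.1%}")
print(f"single chiplet yield:    {y_chiplet:.1%}")
print(f"silicon per good system: {area_mono_cm2 / y_mono:.1f} cm^2 monolithic "
      f"vs {area_mono_cm2 / y_chiplet:.1f} cm^2 chiplets (with KGD test)")
```

For a die well under the reticle limit the two numbers converge, which is consistent with the point above that disaggregation mostly pays off when you are pushing reticle-sized die.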

People with actual chip design experience please feel free to rip my argument apart if you see any flaws with this logic.
 
Cool! But that is making the interposer component, which is a relatively low-resolution Si part well matched to SkyWater's other capabilities. It is not at all the same as building the package using it and other chiplets, where mastery of the bonding technology and process is the requirement. Only a few places do that with good yield at fine pitch.
 
I don't know why you wouldn't just design a cheaper, easier-to-design monolithic die on something like 8LPP rather than do some complex 40nm disaggregated thing.
It is instructive to study what AWS did with the Graviton 3. They took an off-the-shelf reference design for a 64-core ARM v8 chip (cores, coherent fabric, layout, interface ports) and ran it for production on a suitable logic node (N5?), then picked up chiplets from other IP vendors for the PCIe Gen4 and DDR5 controllers (ARM does not provide memory or PCIe IP), which might have been the partners ARM used to prove their reference design worked, on older, cheaper logic nodes. They converted that into a 5-chiplet set that Intel could package on EMIB, and hey presto, first to market with low development cost and excellent performance. They also have some flexibility to upgrade/downgrade the memory and bus interfaces without redesigning the CPU, or to upgrade the CPU while keeping the same interfaces.

This was a fascinating teaching moment, showing the future economics of design, process mix-and-match, and Intel working as a packaging house.
 
It is instructive to study what AWS did with the Graviton 3. They took an off-the-shelf reference design for a 64-core ARM v8 chip (cores, coherent fabric, layout, interface ports) and ran it for production on a suitable logic node (N5?), then picked up chiplets from other IP vendors for the PCIe Gen4 and DDR5 controllers (ARM does not provide memory or PCIe IP), which might have been the partners ARM used to prove their reference design worked, on older, cheaper logic nodes. They converted that into a 5-chiplet set that Intel could package on EMIB, and hey presto, first to market with low development cost and excellent performance. They also have some flexibility to upgrade/downgrade the memory and bus interfaces without redesigning the CPU, or to upgrade the CPU while keeping the same interfaces.

This was a fascinating teaching moment, showing the future economics of design, process mix-and-match, and Intel working as a packaging house.
That's certainly an interesting angle for non-chip-design companies and for the people who actually made said IP. It also kind of fits into what I mentioned before on die size (even if G3 isn't a monster monolithic chip like NVIDIA GPUs or monolithic Xeons can be) and the poorer scaling of analog on leading-edge logic nodes (assuming the DRAM controller and PCIe dies use that stuff more than the compute die would).

I feel G3 also disputes the claim that MCM is a replacement for die shrinks, because otherwise you would have seen that compute die split up into many trailing-edge dies. There is of course the elephant in the room of how difficult designing/validating these MCM systems seems to be. But I can certainly see that for companies with little to no design expertise, being able to pull off-the-shelf IP and just worry about the MCM part might be easier and cheaper than having a full design team.
 
I'm viewing it more as a board/hybrid replacement. Circuits run faster and less power is dissipated. Cost-effectiveness in 2025 is my question.

The next obvious question is: in the year 2525, if TSMC is still alive, if Intel can survive, they may find... (ChatGPT, please)
 
I feel G3 also disputes the claim that MCM is a replacement for die shrinks, because otherwise you would have seen that compute die split up into many trailing-edge dies. There is of course the elephant in the room of how difficult designing/validating these MCM systems seems to be. But I can certainly see that for companies with little to no design expertise, being able to pull off-the-shelf IP and just worry about the MCM part might be easier and cheaper than having a full design team.
Look for ARM to provide examples of multi-die implementations, then Amazon to pick them up. Maybe even sponsor it. They don't seem eager to have an architectural license. Yet.

UCIe should help, when it arrives in force.
 