Durations of process steps?

jms_embedded

Well-known member
I keep reading that a good cycle time is 1.5 - 2 days per mask layer. How does that break down into different process steps? e.g. how much of that is cleaning / photoresist / lithography / etch / deposition / diffusion / CMP / etc? Are there any publications that cover this?

I'm not looking for exact numbers (or numbers from a specific fab), just a general idea.
 
Is this from writing the photomask to shipping to the customer?

Or is this the use of said mask in the fab?
 
Excuse the self-plagiarism, but...
From what I've seen, wafers spend a lot of time queued up on a tool (either docked to the tool or sitting in a storage robot with other FOUPs), or waiting for the other wafers in the lot to process. Depending on the exact tool, wafers can end up waiting a while for the wafer-handling robots to move other wafers in/out of the process modules. The longest waits are often for wafers to move in from the EFEM past the load locks (kind of like an airlock, and slow like one too) to get into the actual transfer units/process modules. At least for etchers, wafers also need to outgas after processing.

The exact layer can also have a widely varying number of process steps. You might have to go from one etcher, to another, to an asher/clean tool. Then go to ALD (depending on how much you are depositing, this can be really slow), deposition, CMP (I think this is slow), and then back to litho. This gets even worse if that layer has any multi-patterning.
Litho exposures and etches are often on the scale of minutes. Anneal is also pretty fast. Deposition, CMP, ovens, ALD, and etches that are either highly selective or highly anisotropic can be much slower. Best publication would be throughput quotes from tool vendors.
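For a rough sense of how vendor throughput quotes translate into raw per-wafer tool time for one layer, here is a minimal sketch; the wph figures below are placeholder assumptions in the ranges discussed in this thread, not quotes from any vendor.

```python
# Rough per-wafer tool time per step for one hypothetical layer, converted from
# assumed throughputs in wafers per hour (wph). The wph values are illustrative
# placeholders, not vendor quotes.
step_throughput_wph = {
    "clean": 300,
    "litho exposure": 250,
    "etch": 60,
    "ALD": 20,
    "CVD deposition": 90,
    "CMP": 70,
}

total_minutes = 0.0
for step, wph in step_throughput_wph.items():
    minutes = 60.0 / wph  # minutes of raw tool time per wafer
    total_minutes += minutes
    print(f"{step:15s} {minutes:5.1f} min/wafer")

print(f"raw tool time for this layer: {total_minutes:.1f} min")
# Even summed across all the steps in a layer, raw tool time is a small
# fraction of the quoted 1.5-2 days per mask layer; queueing makes up the rest.
```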
 
This was discussed recently in another thread: https://semiwiki.com/forum/index.ph...phone-15-gpu-price-increase.17097/#post-56510
 
It depends on the process flow: exposure tools run >250 wph, CMP 60 to 80 wph, implant can be up to 500 wph for some tools, and etch varies a lot; for example, 3D NAND channel etches can be an hour per wafer. I once plotted throughput by tool type for a whole variety of processes, and the data points made tall bars between low and high throughput for each tool.

The other thing is that wafers spend more time waiting than processing. If a wafer can process through an empty fab in a certain amount of time with no waiting (the ideal cycle time), the average cycle time will be 2.5 to 3.5 times that value. In fact, as tool utilization approaches 100% the X-factor goes to infinity (Little's law).

From a manufacturing efficiency perspective fabs are an excellent example of what not to do with reentrant flows and unreliable equipment.
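For readers who want the curve behind the 2.5-3.5x figure, here is a minimal sketch of the standard single-tool queueing approximation (the Kingman/VUT formula discussed in texts like "Factory Physics"); the variability parameters are assumptions for illustration, not measured fab data.

```python
# Minimal sketch of cycle time vs. utilization at a single tool, using the
# Kingman (VUT) approximation from queueing theory. ca2/cs2 (squared
# coefficients of variation of arrivals and service) are assumed values
# for illustration, not fab data.

def x_factor(utilization: float, ca2: float = 1.0, cs2: float = 1.0) -> float:
    """Cycle time at one tool divided by its raw process time."""
    queueing_multiple = (ca2 + cs2) / 2.0 * utilization / (1.0 - utilization)
    return 1.0 + queueing_multiple

for u in (0.5, 0.7, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {u:4.2f} -> X-factor ~ {x_factor(u):5.1f}")
# The multiplier diverges as utilization approaches 1.0, which is why fabs
# leave headroom instead of chasing the last few percent of tool utilization.
```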
 
Yes, that aspect I'm familiar with -- the general cycle-time vs. throughput curves pop up in some of the academic literature on fab operations research, as well as in "Factory Physics" (Hopp & Spearman).

But I do have some interesting questions on X-factor. What drives the choice of X-factor towards a certain range?

As I understand it, going too high doesn't leave a lot of margin in case something goes wrong and the traffic flow in the fab backs up due to gridlock; you can't go to, say, X = 20 to eke out a little more throughput because that's really risky. But why 3.5 as a practical maximum? Why not 5.0? or 2.0?

Is that something that fab operations managers consciously control to that range intentionally (2.5 - 3.5), for example by limiting wafer starts? Or does it vary with circumstance / fab strategy? --- for example in a glut when there's not a very high demand for throughput, keep cycle time lower by lowering the X factor; in a shortage, let it go higher; or Company X likes to run their fab at 2.0 and Company Y likes to run their fab at 4.0.

From a manufacturing efficiency perspective fabs are an excellent example of what not to do with reentrant flows and unreliable equipment.

Now I'm intrigued... is this a historical quirk that we're stuck with because of early fab decisions? (like the width between railroad rails) Or is it just the best we can do given the cost of equipment?

(sorry for the overload of questions, this stuff interests me)
 
Best publication would be throughput quotes from tool vendors.
Any recommendations for which tool vendor(s) might be willing to answer questions of this sort from a random engineering blogger?
(reply privately if you have any specific contacts you'd be willing to share)
 
Unfortunately, my guess would be none. ASML does publicly, although this is helped by the less layer-dependent nature of exposure. Nikon and Canon might, but I wouldn't know for sure.
 
Yes, that aspect I'm familiar with -- the general cycle time vs throughput curves pop up in some academic literature on fab operations research; as well as "Factory Physics" (Hopp & Spearman)

But I do have some interesting questions on X-factor. What drives the choice of X-factor towards a certain range?

As I understand it, going too high doesn't leave a lot of margin in case something goes wrong and the traffic flow in the fab backs up due to gridlock; you can't go to, say, X = 20 to eke out a little more throughput because that's really risky. But why 3.5 as a practical maximum? Why not 5.0? or 2.0?

Is that something that fab operations managers consciously control to that range intentionally (2.5 - 3.5), for example by limiting wafer starts? Or does it vary with circumstance / fab strategy? --- for example in a glut when there's not a very high demand for throughput, keep cycle time lower by lowering the X factor; in a shortage, let it go higher; or Company X likes to run their fab at 2.0 and Company Y likes to run their fab at 4.0.



Now I'm intrigued... is this a historical quirk that we're stuck with because of early fab decisions? (like the width between railroad rails) Or is it just the best we can do given the cost of equipment?

(sorry for the overload of questions, this stuff interests me)
It depends on what you make and who your customers are. Foundries need shorter cycle times to respond to changing market conditions. If you are making diodes, you might not care about cycle time and just load up the fab. If you are early in the yield curve for memory, you also need short cycle time, etc.

There can actually be a sweet spot depending on yield. If your yield is low, you want a shorter cycle time so you can get more learning cycles and drive up yield faster. If you make something like DRAM, where there is a price-versus-time curve, that also has a big impact. It is complex, but it is possible to calculate what cycle time produces the most good die.

In the late nineties I did a study with ISMI/SEMATECH on the economic value of cycle time that was the most comprehensive ever done. Shorter cycle time can be worth a lot of money in the right circumstances.
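As a rough illustration of the learning-cycle argument above, here is a toy sketch; the exponential learning model and every number in it are generic assumptions for illustration, not results from the ISMI/SEMATECH study.

```python
# Toy model of the yield-learning argument: shorter cycle time means more
# learning cycles per calendar quarter, so yield ramps faster. The exponential
# learning model and all numbers here are illustrative assumptions only.
import math

def yield_at(day: float, cycle_time_days: float,
             y0: float = 0.40, y_max: float = 0.95,
             gain_per_cycle: float = 0.15) -> float:
    """Yield after 'day' days, improving a fixed fraction toward y_max per fab pass."""
    learning_cycles = day / cycle_time_days
    return y_max - (y_max - y0) * math.exp(-gain_per_cycle * learning_cycles)

for ct in (45, 60, 90):  # days per complete fab pass
    print(f"{ct:2d}-day cycle time -> yield at day 180: {yield_at(180, ct):.2f}")
# Good die per wafer start scales with yield, so the fab with the shorter cycle
# time ships more sellable die from the same wafer starts early in the ramp.
```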
 
Unfortunately, my guess would be none. ASML does publicly, although this is helped by the less layer-dependent nature of exposure. Nikon and Canon might, but I wouldn't know for sure.
Well, it sounds like lithography isn't really the driver of cycle time since it's so short (but it is a primary driver of fab capital cost!), so that's less interesting to me... (The only tangible times for some of this stuff I could find so far in the published literature were in a 2001 Future Fab article by someone from ST, with an average litho time on the order of only 1 minute, but I wasn't sure if that was one pass out of several consecutive passes or the full lithography time.)

What about manufacturers of some of the lower-tech types of tools?
 
The other thing is that wafers spend more time waiting than processing. If a wafer can process through an empty fab in a certain amount of time with no waiting (the ideal cycle time), the average cycle time will be 2.5 to 3.5 times that value. In fact, as tool utilization approaches 100% the X-factor goes to infinity (Little's law).
Little's law is for random queueing. IEEE Transactions on VLSI Systems has had a number of articles on fab scheduling and optimization. The simplest case is when you make one product for a long time: you build a process pipeline and tune it so the most expensive processes run at ideal capacity, while other machines may be underutilized. You can approach 100% with little extra delay in those. A DRAM or NAND plant may be close to this.

The more complex case is an IFS fab with many tools and many products circulating at any one time, each needing a different mix of process steps. The fab is designed to approximate a balanced set of equipment for the first scenario (the paragraph above), but averaged over multiple expected product types and scaled up to allow for them all. Then a fancy piece of scheduling software, which understands all the latencies, the delays moving between machines, and the various places the FOUPs are allowed to queue, computes the best use of the tools it can plan; that plan is continually updated with changes in tools going down and up, issues found in metrology, etc. For schedulers like that, and assuming the fab is not in pain from some tool being down or a process excursion that needs adjustment, a slack of 2.5x should be easy to beat. But since any given product may be in the system for 2 months on average, some of those days are the slow days where a problem is being fixed.
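For a concrete (and heavily simplified) flavor of the decisions such a scheduler makes, here is a sketch of one common dispatching heuristic; it is a generic critical-ratio rule for illustration, not the actual fab software described above.

```python
# Minimal sketch of a dispatching rule in the spirit of the scheduling
# discussion above: when a tool frees up, run the queued lot with the lowest
# critical ratio (time left to its due date divided by processing time still
# needed). A generic heuristic for illustration, not a fab's real scheduler.
from dataclasses import dataclass

@dataclass
class Lot:
    name: str
    hours_to_due: float        # hours until the committed ship date
    process_hours_left: float  # raw tool hours still required

def critical_ratio(lot: Lot) -> float:
    return lot.hours_to_due / lot.process_hours_left

def pick_next(queue: list[Lot]) -> Lot:
    # Lowest ratio = furthest behind schedule = dispatch first.
    return min(queue, key=critical_ratio)

queue = [
    Lot("hot lot",    hours_to_due=48,  process_hours_left=40),
    Lot("standard A", hours_to_due=400, process_hours_left=120),
    Lot("standard B", hours_to_due=150, process_hours_left=90),
]
print(pick_next(queue).name)  # -> hot lot (ratio 1.2 vs. 3.3 and 1.7)
```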

Added to which, there is a cost model. As Scotten said, there are scenarios where a customer may see great value in getting the product back quickly. The fab may offer to do that at a premium. For customers who are not so stressed about it, who can plan ahead and accept a few weeks extra in delivery time, their wafers will end up sitting quietly (at little cost to the fab) in FOUPs awaiting each tool. That boosts the utilization of the tools, which optimizes the fab.
 
And what a premium it is -- we recently needed TSMC's ultra-stupid-fast TAT for a 5nm chip; the extra cost was $2.5M to get 10 days faster processing... :-(
 
Fabs try to be limited by just the photo module. This is the first rule: keep photo fed at all times. That's where the money is.

All the other rules apply to varying degrees, but it's in the same sense as Mike Tyson might say: everyone has a plan until you get punched in the face. Scott notes that fab tools are unreliable.

There is just no way to express how unreliable they are. It has to do with the precision and accuracy required. Tools don't break, they simply sneeze a little, and you have to intervene.
 
so the MTT"F" (mean time to "failure" = adjustment actually) is.... once an hour? day? week? month? year?
 
If you don't mind me asking, how many wafers were you expediting?
Not sure; it was a standard lot, so presumably 25 or 50 wafers. I don't think the fee depends on this; it's because it "queue-jumps" -- all the other products in the line literally stop to get out of the way of a lot like this -- so the productivity of the entire line drops. For the same reason, there are only a very small number of such slots per line available each month...
 
so the MTT"F" (mean time to "failure" = adjustment actually) is.... once an hour? day? week? month? year?
Depends on the tool. I hear ion implanters have a low time to fail. And then there is variation by individual tool model as well. Besides the obvious fact that some tool models are better than others, some work better with certain recipes/layers. There are also some dog tools that have problems for multiple PM cycles. In aggregate, we have counters that guide PM schedules to avoid the worst of this and minimize the amount of unscheduled downtime.
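As a small numeric aside on what frequent "sneezes" do to capacity, here is the standard availability arithmetic; the interrupt and recovery times below are made-up assumptions, not data for any real tool type.

```python
# Availability of a single tool from assumed interrupt and recovery times.
# The numbers are made-up assumptions to show the arithmetic, not real data.
def availability(mtbi_hours: float, mttr_hours: float) -> float:
    """Fraction of time the tool is up: mean time between interrupts divided by
    (mean time between interrupts + mean time to recover)."""
    return mtbi_hours / (mtbi_hours + mttr_hours)

scenarios = {
    "sneezes hourly, 5 min tweak": (1.0, 5 / 60),
    "daily interrupt, 1 h recovery": (24.0, 1.0),
    "weekly interrupt, 8 h repair": (168.0, 8.0),
}
for label, (mtbi, mttr) in scenarios.items():
    print(f"{label:32s} availability ~ {availability(mtbi, mttr):.1%}")
# Small, frequent interventions erode capacity just as surely as rare big
# failures, which is why the PM counters mentioned above matter.
```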
 
Those were some seriously expensive wafers.
Yeah, but if you really *really* need to get samples ASAP, it still "only"*** adds <20% to the 5nm mask costs that you're paying for anyway, so <5% to the total chip NRE cost -- and if it makes the difference between meeting and missing a timescale you've promised to a critical customer... ;-)

*** "only" because it's like complaining about the extra cost of giraffe-foreskin seats on a Rolls-Royce, if you have to worry about this then you should have picked a different car...)
 