
Fab manufacturing questions

jms_embedded

Active member
Some questions about manufacturing in semiconductor fabs:

1. Do fabs try to balance production flows, e.g. there is no single capacity bottleneck, and for each tool, its average throughput matches the throughput of both upstream and downstream tools? Or is there a reason to intentionally plan capacity in an unbalanced manner? If so, what kind of tools get purchased with intentionally higher or lower capacity than a balanced flow would require?

I can't imagine intentionally making lithography tool capacity higher than other tools, since they seem to get the spotlight as the most expensive equipment in the fab, with ASML's EUV machines showing up in newspaper articles as costing hundreds of millions of dollars each. (Aside from EUV, are they the most expensive? I found this quote from Levinson 2011: "Lithography tools are often the most expensive in the wafer fab. Even when they are not, the fact that lithography is required for patterning many layers in IC manufacturing processes, while most other tools are used for only a few steps, means that a large number of lithography tools are needed for each wafer fab, resulting in high total costs for lithography equipment.")

2. How are tool upgrades typically prioritized? Is there a list of planned upgrades, and someone identifies the most bang for the buck? Does this tend to prioritize small low-risk upgrades over larger high-risk upgrades that might have better payoff? I can imagine a case of greedy algorithm failure (the best short-term solution is a suboptimal long-term solution) where a bunch of small upgrades are chosen that add a total of 10-15% more capacity, but it might be more profitable in the long run to wait and spend more money on constructing a new building.
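(To illustrate what I mean by greedy failure, here's a toy comparison I put together; every number is invented purely for the sake of the example, not taken from any real fab.)

```python
# Hypothetical comparison of two capacity plans, to illustrate how the
# "best bang for the buck right now" choice can lose to a larger, slower
# investment over a longer horizon. All numbers are invented.

def npv(cashflows, rate=0.10):
    """Net present value of a list of (year, cashflow_in_$M) pairs."""
    return sum(cf / (1 + rate) ** year for year, cf in cashflows)

# Plan A: a bundle of small tool upgrades, ~12% more capacity, online almost immediately.
plan_a = [(0, -150)] + [(y, 90) for y in range(1, 11)]

# Plan B: a new building plus tools, ~60% more capacity, but nothing ships for 3 years.
plan_b = [(0, -400), (1, -400)] + [(y, 450) for y in range(3, 11)]

print(f"Plan A NPV over 10 years: ${npv(plan_a):5.0f}M")
print(f"Plan B NPV over 10 years: ${npv(plan_b):5.0f}M")
# Plan A looks better if you truncate the horizon to the first few years,
# but Plan B comes out well ahead over the full ten.
```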
 
1) There do tend to be bottlenecks for certain process steps. Sometimes this is caused by a large number of the tools for a certain layer being down. Sometimes it is just because steps are very slow. Take for example EUV tools, which are MUCH slower than DUV tools, and as a result you might need more EUV tools to make up the difference (not always true, because EUV can also speed up your process flow by not having to do multi-patterning). Another common area to act as a bottleneck for a fab is etch. Every single litho step is followed by an etch step, and "regular" dry etching and especially deep reactive ion etching can be very SLOW processes. As a result logic fabs have a large volume of etch tools to keep up with all of the litho tools that are much faster than them.

Of course this is for logic; due to the highly stacked nature of the structures they construct, memory fabs need even more etch tools than logic fabs. This is because they have to etch deeper (i.e., longer) to get down to the layer they want. As far as I know the tools are bought in a balanced manner (not by pure numbers, but by buying what they need to complement their other tools). Of course the problem a lot of the leading edge folks are struggling with is that they cannot get enough EUV tools to reach the capacity scale these companies would like to have.

2) I don't have the knowledge to answer that, sorry.
 
1) There do tend to be bottlenecks for certain process steps. Sometimes this is caused by a large number of the tools for a certain layer being down. Sometimes it is just because steps are very slow. Take for example EUV tools, which are MUCH slower than DUV tools, and as a result you might need more EUV tools to make up the difference (not always true, because EUV can also speed up your process flow by not having to do multi-patterning).
Interesting! So for example (just trying to follow your thought process): since EUV tools are $$$$$ and obtaining them from ASML is currently a long-lead constraint, and fabs using them may already have DUV, EUV may be a likely bottleneck either because they cost a lot or because the desired number cannot be obtained. (Or the factory construction may not physically support that many machines?)

Oh -- that brings up a third question:

3. Would the degree of "balancing" depend on the customer demand, if the fab supports a few different processes by using different tool mixes?

For example, suppose a single fab supports, say, 5nm and 7nm, sharing some tools in common... and suppose today the 7nm demand is very high but the 5nm demand isn't quite as high, then not only will some 5nm tools sit unused, but since the lack of 5nm demand frees up some shared tool capacity, the 7nm-only tools will be more of a bottleneck. Or vice-versa if 5nm demand is higher than 7nm. And the fab isn't going to want to keep converting tools between 5nm and 7nm to keep the production flows balanced, so instead they pick enough 5nm-only tools and 7nm-only tools to support foreseeable demand for both... and if there's a lopsided demand then there may be bottlenecks.

Is this a realistic situation?

Another common area to act as a bottleneck for a fab is etch. Every single litho step is followed by an etch step, and "regular" dry etching and especially deep reactive ion etching can be very SLOW processes. As a result logic fabs have a large volume of etch tools to keep up with all of the litho tools that are much faster than them.
Just to make sure I'm using the word correctly, if one type of tool T is a "bottleneck", then that would mean that there are fewer of tool T than necessary to keep downstream production flowing smoothly, and upstream work-in-process would tend to pile up before tool T if upstream tools have full utilization.

If so, then etch tools could be a bottleneck if there aren't enough of them? But with proper planning the fabs purchase enough of them so this is not the case?
 
Interesting! So for example (just trying to follow your thought process): since EUV tools are $$$$$ and obtaining them from ASML is currently a long-lead constraint, and fabs using them may already have DUV, EUV may be a likely bottleneck either because they cost a lot or because the desired number cannot be obtained. (Or the factory construction may not physically support that many machines?)
Correct. The main issue with EUV right now is that some process nodes for logic and memory require EUV for that process node to be financially competitive, but ASML/CZ are not able to produce them fast enough for everyone to fill up their current leading edge fabs or their future leading edge fabs. The EUV tools also require a taller ceiling, a crane built into the roof, and a stronger floor than DUV tools, as well as being more maintenance intensive. All of these factors make EUV tools a bottleneck for scaling capacity on advanced nodes at the rate many of these companies would like, as well as increasing the chance that a critical tool goes down (causing a wafer backlog to build up).

3) I don't think so, but I don't know for sure. Most fabs tend to only do one node and stick to it, with new fabs getting built for new nodes. Furthermore, chips take months to go from wafer start to getting shipped out for packaging; on these sorts of time frames it is hard to shift a tool back and forth due to a slow quarter (especially when you consider that some tools might not be in the process flow of other nodes). I think lithography tools might be able to flex a good bit, though, since all they need to change is the mask, photoresist application, and dose.

Just to make sure I'm using the word correctly, if one type of tool T is a "bottleneck", then that would mean that there are fewer of tool T than necessary to keep downstream production flowing smoothly, and upstream work-in-process would tend to pile up before tool T if upstream tools have full utilization.

If so, then etch tools could be a bottleneck if there aren't enough of them? But with proper planning the fabs purchase enough of them so this is not the case?
Etch tools often get very dirty and need to be cleaned frequently; this can result in unexpected outages or maintenance that takes longer than it should. As you mentioned, the right number of etch tools are purchased to deal with the slow speed and with potential and planned outages. However, there will always be issues or circumstances where x of the etching tools "T" that are configured to run layer y go down, and a queue starts to build up because the fab is only designed to tolerate x-2 of that layer's tools being down at any given time without an excessively large queue building up.
 
Could not resist adding a few words:
1
- Lithography is always made to be the wafer fab capacity bottleneck because it is the most expensive
- It just means that when rounding up and down, lithography may see more rounding down and other areas rounding up
2
- As soon as the fab starts running, the technologies and products start changing and bottlenecks can shift
- The fab then upgrades its equipment set by improving existing equipment or adding new equipment
3
- Given a stable set of technologies and products, the fab wafer starts are very rarely simply linear
- So on a given day, week, month, the mix of product will fluctuate and that will create dynamic bottlenecks
 
Building on Eric:
- All the factors of production, including the human ones, matter. All fabs have bonus incentive plans to motivate people, and it makes a difference. Staffing levels and experience can be influential.
- Machine generations matter. There are, very crudely, two levels, 300mm and 300mm Prime. 300mm Prime is more productive, but more costly. 300mm Prime came roughly 10 years after 300mm; it was a substitute for 450mm. So, very crudely, newer fabs can be 20% more productive if they utilize Primes.
- Fabs have a lot of WIP; 1 million wafers is typical. There are sort of two levels, the wafer level that individuals deal with, and the averages. They can sometimes tell very different stories.
 
Some questions about manufacturing in semiconductor fabs:

1. Do fabs try to balance production flows, e.g. there is no single capacity bottleneck, and for each tool, its average throughput matches the throughput of both upstream and downstream tools? Or is there a reason to intentionally plan capacity in an unbalanced manner? If so, what kind of tools get purchased with intentionally higher or lower capacity than a balanced flow would require?

I can't imagine intentionally making lithography tool capacity higher than other tools, since they seem to get the spotlight as the most expensive equipment in the fab, with ASML's EUV machines showing up in newspaper articles as costing hundreds of millions of dollars each. (Aside from EUV, are they the most expensive? I found this quote from Levinson 2011: "Lithography tools are often the most expensive in the wafer fab. Even when they are not, the fact that lithography is required for patterning many layers in IC manufacturing processes, while most other tools are used for only a few steps, means that a large number of lithography tools are needed for each wafer fab, resulting in high total costs for lithography equipment.")

2. How are tool upgrades typically prioritized? Is there a list of planned upgrades, and someone identifies the most bang for the buck? Does this tend to prioritize small low-risk upgrades over larger high-risk upgrades that might have better payoff? I can imagine a case of greedy algorithm failure (the best short-term solution is a suboptimal long-term solution) where a bunch of small upgrades are chosen that add a total of 10-15% more capacity, but it might be more profitable in the long run to wait and spend more money on constructing a new building.
This is a surprisingly complex subject.

I remember back in the late nineties at the Advanced Semiconductor Manufacturing Conference there was a lot of talk about the Theory of Constraints and everyone was reading "The Goal" by Eliyahu M. Goldratt. There was at least one company that designed a fab with a built-in constraint, with the idea that if they optimized the constraint they would optimize the fab. It has always surprised me how little fab operations were informed by manufacturing science, even in the nineties. We had these incredibly expensive manufacturing plants and we ran them really badly from a manufacturing efficiency perspective.

In 1997 Don Martin of IBM published a paper, "How the Law of Unanticipated Consequences Can Nullify the Theory of Constraints: The Case for Balanced Capacity in a Semiconductor Manufacturing Line". The problem with having a fixed constraint is that while the constraint is optimized, every other tool in the fab is underutilized. Even EUV tools aren't more expensive than everything else in the fab combined. The best way to design a fab is to have balanced capacity, but even that isn't straightforward.

Variability is the enemy of efficient manufacturing and semiconductor fabs are a worst case scenario when it comes to manufacturing principles.

If you look at an automotive plant, you put something in the front end of the manufacturing line, it progresses through a series of steps, and it comes out the back end complete. The equipment on the line has uptime >99% and everything moves in a straight line, step to step.

In contrast to that, in a fab a lot of wafers starts into the fab, gets cleaned, has films grown or deposited, and then goes into photo to be patterned, then into etch, back to deposition, and back to the same photo tools used the first time. This is called a reentrant flow and it causes collisions between new lots and existing lots. Then you add in the awful tool reliability of 85 to 95% and the fab bottleneck moves around day to day and even hour to hour. There are a lot of other subtleties, too: scheduled down time is better than unscheduled down time, and tools that take a long time to repair relative to the time to process a wafer exponentially increase cycle time. Mixing batch and single wafer tools also creates issues with how long you bank up lots to run a full batch. There is also a fundamental trade-off between utilization and cycle time (Little's law): as utilization approaches 100%, cycle time goes to infinity!
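To put rough numbers on that utilization/cycle time trade-off, here is a minimal sketch using Kingman's G/G/1 approximation (the "VUT" equation popularized by Factory Physics); the process time and variability values are made up for illustration:

```python
# Kingman's G/G/1 approximation: queue time ~ Variability x Utilization x Time.
# The 2-hour process time and the variability coefficients are hypothetical.

def queue_time(util, t_process, ca2=1.0, ce2=1.0):
    """Approximate queue time at a single tool.
    util      -- tool utilization (0 < util < 1)
    t_process -- mean effective process time per lot, in hours
    ca2, ce2  -- squared coefficients of variation of arrivals and process times
    """
    return ((ca2 + ce2) / 2.0) * (util / (1.0 - util)) * t_process

t = 2.0  # hours per lot at this tool
for u in (0.70, 0.85, 0.95, 0.99):
    wq = queue_time(u, t)
    print(f"utilization {u:.0%}: queue {wq:6.1f} h, X-factor {1 + wq / t:5.1f}")
```

Going from 85% to 99% utilization takes the queue at this one tool from about 11 hours to about 200 hours, which is the "cycle time goes to infinity" effect in miniature.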

I haven't actively worked on fab design and planning in many years, but when I was last involved the only way to accurately model a fab was with discrete event simulation. It could take hours or even days to run a single fab simulation. For one of the fabs I designed, I laid out all the tools in AutoCAD and ran a plug-in that overlaid all of the process flows, color coded, with the width of the lines representing how many times the wafer traveled that path. You could calculate the total travel distance for a particular mix of flows and optimize your layout.
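For anyone who hasn't seen one, here is a toy discrete event simulation of a single tool with random arrivals and process times; it only hints at what a real fab simulator does (hundreds of tools, re-entrant flows, downtime, batching, dispatch rules), and every parameter below is invented:

```python
# Toy discrete event simulation of one tool: exponential arrivals and process
# times, first-in-first-out dispatch. All parameters are invented.
import heapq, random

random.seed(0)
ARRIVAL_MEAN, PROCESS_MEAN, SIM_HOURS = 2.2, 2.0, 10_000  # hours

events = [(random.expovariate(1 / ARRIVAL_MEAN), "arrive")]  # (time, kind)
queue, busy_until, waits = [], 0.0, []

while events:
    t, kind = heapq.heappop(events)
    if t > SIM_HOURS:
        break
    if kind == "arrive":
        queue.append(t)  # lot joins the queue; remember when it arrived
        heapq.heappush(events, (t + random.expovariate(1 / ARRIVAL_MEAN), "arrive"))
    # whenever the tool is free and a lot is waiting, start the next lot
    if queue and t >= busy_until:
        waits.append(t - queue.pop(0))
        busy_until = t + random.expovariate(1 / PROCESS_MEAN)
        heapq.heappush(events, (busy_until, "tool_free"))

print(f"average queue time: {sum(waits) / len(waits):.1f} h at "
      f"~{PROCESS_MEAN / ARRIVAL_MEAN:.0%} utilization")
```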

Speaking of the nineties, it was around 1995 when Sematech introduced Overall Equipment Effectiveness (OEE) and did a study of fabs all over the industry. In a nutshell, you figure out how many good, shippable wafers per hour a tool is producing and divide it by the tool's theoretical throughput if it is always up and producing good wafers. Sematech found that industry wide, tools were only making about 30% of their theoretical capacity. This led to a big focus on OEE, and when 300mm tools were designed they were all built to support multiple input FOUPs of wafers to ensure the tool never ran out of wafers. The ironic part is that when 300mm fabs started coming on-line there was a lot of surprise that cycle times went up, something that should have been expected since there were now extra wafers waiting at every tool. Even today OEE is around 50% in logic fabs and in the 60-70% range in memory fabs.
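As a quick hypothetical example of that OEE arithmetic (the wafer counts below are made up, not from the Sematech study):

```python
# Back-of-the-envelope OEE: good, shippable wafers actually produced divided by
# what the tool would produce if it were always up and always making good wafers.
# All numbers are hypothetical.

theoretical_wph = 120          # spec throughput, wafers per hour
hours = 24 * 7                 # one week of calendar time
good_wafers = 6_000            # good wafers actually produced that week

oee = good_wafers / (theoretical_wph * hours)
print(f"OEE = {oee:.0%}")      # ~30%, the kind of number the Sematech study found
```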

At the end of the day fabs should be optimized for good die out or cycle time depending on the business needs. Good die out optimization actually drives different utilization rates depending on process maturity and yield. At low yield you want short cycle time to speed up yield learning and as yield reaches mature levels you can run higher utilization and longer cycle times.

Memory fabs typically run a single node and have only a few flow variants but also get upgraded to new nodes every few years. Tool upgrades will be driven by the needs of the new node with tools that can't meet the process requirement replaced. I know of older fabs that have been through over ten node changes!

On the logic side, a company like Intel will build a fab and upgrade it to a new node on a 3 to 4 year interval.

Foundries are the most complex, they may do upgrades to new nodes but they also typically have multiple flows running in a fab and sometimes multiple nodes.

TSMC is probably the most node focused of the foundries; they typically build a fab for a node and leave it there for the life of the fab, but even then there are multiple flows in the fab. At any given node there are options around the number of metal layers and various modules like MIM caps, resistors, number of threshold voltages, etc. They will also have, for example, a 7nm fab that runs 7nm and then the 6nm second- or third-generation process.

Most tool upgrades are either adding tools for capacity or better tools for a new node; just putting in a new tool on an existing node is pretty rare.
 
Good input, Scotten. Just to add to that: every fab I worked at was organized to have litho be the constraint, for obvious reasons. On the official capacity report, litho was the constraint. However, 75% of the time, on the daily ops report, the bubble was not at litho. The reason is variability, the number of tools, and focus on the constraint rather than on the cheap etch tool that always goes down and is actually the limiter for an entire year. Successful fabs plan for a litho constraint.... but ALL tools have to deliver or have backups, and not everyone should be subordinate to the "constraint". Also, all successful fabs make huge improvements in outs/tool every quarter, so the constraint can move from the designed constraint. Lots of stories to highlight how "The Goal" is overly simplified.
 
Good input, Scotten. Just to add to that: every fab I worked at was organized to have litho be the constraint, for obvious reasons. On the official capacity report, litho was the constraint. However, 75% of the time, on the daily ops report, the bubble was not at litho. The reason is variability, the number of tools, and focus on the constraint rather than on the cheap etch tool that always goes down and is actually the limiter for an entire year. Successful fabs plan for a litho constraint.... but ALL tools have to deliver or have backups, and not everyone should be subordinate to the "constraint". Also, all successful fabs make huge improvements in outs/tool every quarter, so the constraint can move from the designed constraint. Lots of stories to highlight how "The Goal" is overly simplified.
I disagree; a well designed fab will balance the tools across the fab to the greatest extent possible because the "constraint" will move, and designing a deliberate constraint underutilizes everything else. How well that can be done is highly dependent on the fab size; the bigger the fab, the better the balance. I don't agree with the "huge improvements in outs/tool every quarter" either.
 
I disagree; a well designed fab will balance the tools across the fab to the greatest extent possible because the "constraint" will move, and designing a deliberate constraint underutilizes everything else. How well that can be done is highly dependent on the fab size; the bigger the fab, the better the balance. I don't agree with the "huge improvements in outs/tool every quarter" either.
I think the "designed constraint" flaw depends on how literal you take it. There is always a limiter on the computer report and not all areas are equal. A balanced fab still needs to decide whether to delay or add a $20M (or $100M) tool install based on the performance of other tools and process steps. you are correct.... larger fabs minimize that underutilization and the impact.
 
I think the "designed constraint" flaw depends on how literal you take it. There is always a limiter on the computer report and not all areas are equal. A balanced fab still needs to decide whether to delay or add a $20M (or $100M) tool install based on the performance of other tools and process steps. you are correct.... larger fabs minimize that underutilization and the impact.
I would also say the state of the art fabs get to mature manufacturing performance in the first year or so and after that improvements are incremental, not "huge".
 
Variability is the enemy of efficient manufacturing and semiconductor fabs are a worst case scenario when it comes to manufacturing principles.

If you look at an automotive plant, you put something in the front end of the manufacturing line, it progresses through a series of steps, and it comes out the back end complete. The equipment on the line has uptime >99% and everything moves in a straight line, step to step.

In contrast to that, in a fab a lot of wafers starts into the fab, gets cleaned, has films grown or deposited, and then goes into photo to be patterned, then into etch, back to deposition, and back to the same photo tools used the first time. This is called a reentrant flow and it causes collisions between new lots and existing lots. Then you add in the awful tool reliability of 85 to 95% and the fab bottleneck moves around day to day and even hour to hour. There are a lot of other subtleties, too: scheduled down time is better than unscheduled down time, and tools that take a long time to repair relative to the time to process a wafer exponentially increase cycle time. Mixing batch and single wafer tools also creates issues with how long you bank up lots to run a full batch. There is also a fundamental trade-off between utilization and cycle time (Little's law): as utilization approaches 100%, cycle time goes to infinity!
Good stuff! Thanks for sharing your experiences. I've been digging through operations research papers occasionally over the last few months to understand some of the theory behind queues and operating curves, but of course all that is theory --- and if the reality is sensitive to many different factors, then you have to model all that stuff accurately enough to come up with what-if conclusions that are meaningful.

My biggest question on the queue behavior front is what are the most significant causes of process-time variability in real-world fabs? Unscheduled down time, or product mix, or hot lots, or something else? The May 2007 FabTime newsletter covers this topic somewhat, and lists the following causes in general, but I'm guessing that there are probably one or two big culprits that account for most of the variability. (If I had to bet today from what little I know, I'd probably put most of my money on unscheduled down time.)
- Different recipes run on the same tool (this stems from product mix and from the reentrant nature of our process flows)
- Setups
- Equipment failures and maintenance events
- Quals
- Engineering time on the tools
- Operators (not being available to load or unload)
- Operator decisions about what to process next (e.g. to drive up their own moves)
- Scrap (because it changes the lot size)
- Rework
- Other lot size variation (from product mix)
- Inspections
- Time constraints between process steps (because it can lead to the reprocessing of lots, requiring additional process time)
- Technicians (not being available to fix a problem)
- Hot lots (especially hand carry lots, when tools are held idle)

Then you add in the awful tool reliability of 85 to 95%
That's interesting... is that across-the-board reliability? or are some tools more reliable than others?

Did the trend in automation from 200mm -> 300mm improve or degrade reliability? (I'm going to guess that 180nm - 1 micron semiconductor equipment designed 20+ years ago has different reliability characteristics than leading-edge machinery today.)

Are there any reputable published studies that cover this sort of data?
 
And on the economic side...
This is a surprisingly complex subject.
I figured that out pretty quickly!!! :)

In 1997 Don Martin of IBM published a paper, "How the Law of Unanticipated Consequences Can Nullify the Theory of Constraints: The Case for Balanced Capacity in a Semiconductor Manufacturing Line". The problem with having a fixed constraint is that while the constraint is optimized, every other tool in the fab is underutilized. Even EUV tools aren't more expensive than everything else in the fab combined. The best way to design a fab is to have balanced capacity, but even that isn't straightforward.
Bah! When I had originally asked this question last August, I had convinced myself that balanced capacity was probably the best. Then I read about queueing and variation, and now, six months later, I had just about convinced myself that unbalanced capacity with a managed bottleneck was probably the best. Sounds like this is just a hard problem to deal with.

I had found the Don Martin paper about a month ago (it's on IEEExplore), I'll give it a closer read. I never know when to trust the conclusions of published papers from 15+ years ago; fundamental industry characteristics may have shifted since then, and as someone who isn't involved in fab operations I wouldn't be able to second-guess whether those conclusions are reasonable.

Here's a closely related question:

Is there a tradeoff in capacity management between cost and risk in unbalanced/balanced designs?

- unbalanced capacity with a managed bottleneck --- not optimal, but simple, and potentially lowers risk because the fab can focus its resources more on the managed bottleneck. Upside: less sensitive to downtime everywhere across the fab. Downside: underutilized tools outside the bottleneck.

- economically balanced capacity --- chooses the right mix of equipment to minimize cost for desired operating curve characteristics. (for example: if litho machines are $120M each and etch machines are $10M each and you get approximately the same cycle-time/throughput behavior from 4 litho tools + 20 etch tools = $680M, or 5 litho tools + 12 etch tools = $720M, then you're going to pick the $680M case.) Downside: may be more sensitive to downtime everywhere across the fab. Upside: better economic choice.
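(Purely to spell out the arithmetic of "pick the cheapest mix that meets the target", here's a toy sketch; the prices, throughputs, and the assumption that litho and etch counts can be traded against each other are all invented, and it only checks raw capacity, not cycle-time behavior.)

```python
# Toy "cheapest tool mix that meets a throughput target" search, to make the
# hypothetical arithmetic above concrete. All numbers and the tradability
# assumption are invented; this ignores cycle time, downtime, variability, etc.

LITHO_COST, ETCH_COST = 120, 10   # $M per tool (hypothetical)
LITHO_WPH, ETCH_WPH = 100, 20     # wafer passes per hour per tool (hypothetical)
TARGET_WPH = 400                  # required steady-state throughput

best = None
for litho in range(1, 10):
    for etch in range(1, 40):
        # the line can only run as fast as its slower section
        if min(litho * LITHO_WPH, etch * ETCH_WPH) >= TARGET_WPH:
            cost = litho * LITHO_COST + etch * ETCH_COST
            if best is None or cost < best[0]:
                best = (cost, litho, etch)
            break  # for this litho count, adding more etch tools only adds cost

cost, litho, etch = best
print(f"cheapest mix: {litho} litho + {etch} etch = ${cost}M")  # 4 + 20 = $680M here
```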
 
Good stuff! Thanks for sharing your experiences. I've been digging through operations research papers occasionally over the last few months to understand some of the theory behind queues and operating curves, but of course all that is theory --- and if the reality is sensitive to many different factors, then you have to model all that stuff accurately enough to come up with what-if conclusions that are meaningful.

My biggest question on the queue behavior front is what are the most significant causes of process-time variability in real-world fabs? Unscheduled down time, or product mix, or hot lots, or something else? The May 2007 FabTime newsletter covers this topic somewhat, and lists the following causes in general, but I'm guessing that there are probably one or two big culprits that account for most of the variability. (If I had to bet today from what little I know, I'd probably put most of my money on unscheduled down time.)
- Different recipes run on the same tool (this stems from product mix and from the reentrant nature of our process flows)
- Setups
- Equipment failures and maintenance events
- Quals
- Engineering time on the tools
- Operators (not being available to load or unload)
- Operator decisions about what to process next (e.g. to drive up their own moves)
- Scrap (because it changes the lot size)
- Rework
- Other lot size variation (from product mix)
- Inspections
- Time constraints between process steps (because it can lead to the reprocessing of lots, requiring additional process time)
- Technicians (not being available to fix a problem)
- Hot lots (especially hand carry lots, when tools are held idle)


That's interesting... is that across-the-board reliability? or are some tools more reliable than others?

Did the trend in automation from 200mm -> 300mm improve or degrade reliability? (I'm going to guess that 180nm - 1 micron semiconductor equipment designed 20+ years ago has different reliability characteristics than leading-edge machinery today.)

Are there any reputable published studies that cover this sort of data?
There are essentially no operators in a 300mm fab, too few to matter. Wafers are all moved in FOUPs by overhead transport systems, so a tool sitting idle because no operator is available is a thing of the past. Decisions on what to process next are all handled by automation too.

I believe unscheduled down time is the biggest issue. It does vary by tool type; processes that deposit films tend to be more maintenance intensive. They typically incorporate a chamber clean after every deposition, but that is lost production time.

Uptime is a big focus for the equipment companies and is getting better in general; OEE has certainly gotten better with each new generation of tools, but new technologies can set it back. Exposure tools generally run 95%+ but EUV has struggled to get to 90%.
 
And on the economic side...

I figured that out pretty quickly!!! :)


Bah! When I had originally asked this question last August, I had convinced myself that balanced capacity was probably the best. Then I read about queueing and variation, and now, six months later, I had just about convinced myself that unbalanced capacity with a managed bottleneck was probably the best. Sounds like this is just a hard problem to deal with.

I had found the Don Martin paper about a month ago (it's on IEEExplore), I'll give it a closer read. I never know when to trust the conclusions of published papers from 15+ years ago; fundamental industry characteristics may have shifted since then, and as someone who isn't involved in fab operations I wouldn't be able to second-guess whether those conclusions are reasonable.

Here's a closely related question:

Is there a tradeoff in capacity management between cost and risk in unbalanced/balanced designs?

- unbalanced capacity with a managed bottleneck --- not optimal, but simple, and potentially lowers risk because the fab can focus its resources more on the managed bottleneck. Upside: less sensitive to downtime everywhere across the fab. Downside: underutilized tools outside the bottleneck.

- economically balanced capacity --- chooses the right mix of equipment to minimize cost for desired operating curve characteristics. (for example: if litho machines are $120M each and etch machines are $10M each and you get approximately the same cycle-time/throughput behavior from 4 litho tools + 20 etch tools = $680M, or 5 litho tools + 12 etch tools = $720M, then you're going to pick the $680M case.) Downside: may be more sensitive to downtime everywhere across the fab. Upside: better economic choice.
"for example: if litho machines are $120M each and etch machines are $10M each and you get approximately the same cycle-time/throughput behavior from 4 litho tools + 20 etch tools = $680M, or 5 litho tools + 12 etch tools = $720M, then you're going to pick the $680M case."

I don't think that would ever be true, that you could trade etchers for litho tools.

Don Martin has written a lot of great papers, the best I have ever seen on Fab operations and I believe they are still valid.

I don't think anyone deliberately unbalances a leading edge 300mm fab design.
 
Optimizing Factory Performance by Ignizio is a pretty good book but not semiconductor specific and I have seen comments from him that show he doesn't understand the unique aspect of semiconductor manufacturing.
OK, thanks. I've gotten through most of Hopp & Spearman's "Factory Physics". My interest is primarily just in understanding some of the basic concepts behind cycle-time / throughput curves as they impact fabs.

I have a coworker who has been responsible for the architectural specification of several successful microcontrollers, and knew roughly what to expect for manufacturing cycle time --- but he was surprised when I told him that wafers typically spend 70-80% of their cycle time waiting in a queue to be processed, and only 20-30% of their cycle time actually being processed. (corresponding to X-factor in the 2.33 - 4 range which seems like it should comfortably cover most real-world cases from what I've read.) He is not involved in the manufacturing side, so I'm not too surprised... and the underlying concepts to explain this do not seem intuitive to me unless you have experience working with semiconductor fabrication or delve a little bit into the theory.

(whoops I meant 3.33 - 5, off by 1... so maybe it's more like 65-75% of the time waiting in a queue. Anyway the exact numbers aren't important for outsider-understanding purposes, it's the same ballpark.)
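For reference, the arithmetic tying the X-factor to the queue-time fraction is just

$$X = \frac{\text{total cycle time}}{\text{raw process time}}, \qquad \frac{\text{queue time}}{\text{cycle time}} = 1 - \frac{1}{X},$$

so, for example, $X = 4$ means 75% of the cycle time is spent waiting and $X = 5$ means 80%.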
 
Building on Scotten:
- Mature OEE could be somewhat higher than 50%
- Unbalancing strategically is both deliberate and necessary at my fab, to achieve targets which are customer imposed
 
OK, thanks. I've gotten through most of Hopp & Spearman's "Factory Physics". My interest is primarily just in understanding some of the basic concepts behind cycle-time / throughput curves as they impact fabs.
If you have access to IEEE Transactions on Semiconductor Manufacturing, you will find there have been a number of papers in recent years about applying mathematical optimization or stochastic modelling to fab scheduling. These have included real-time updating and rescheduling to match actual events in the fab. Mostly the papers seem to come from TSMC sponsorship, as I recall. Now, whether this is actually implemented in modern fabs is another question; the papers could be entirely academic, or they could be some insight into the latest work.

65-75% queue time is not really surprising in recent years where fabs could easily find enough customers to book full capacity. That roughly averages two FOUPs queued (or in transit) per machine for each one in process, which seems about right for keeping things humming. Machines with very long process times should have shorter queues and machines where processing is comparable or faster than transit times should have longer queues, as a rule of thumb.
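Spelling out that rule of thumb: with two lots waiting or in transit for every one being processed at a tool,

$$\frac{\text{queue time}}{\text{cycle time}} \approx \frac{2}{2 + 1} \approx 67\%,$$

which lands in the 65-75% range quoted above.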
 