Yes, that aspect I'm familiar with -- the general cycle time vs. throughput curves pop up in some of the academic literature on fab operations research, as well as in "Factory Physics" (Hopp & Spearman).
But I do have some questions about the X-factor. What drives the choice of X-factor towards a certain range?
As I understand it, going too high doesn't leave much margin if something goes wrong and traffic flow in the fab backs up into gridlock; you can't go to, say, X = 20 to eke out a little more throughput because that's really risky. But why 3.5 as the practical maximum? Why not 5.0, or 2.0?
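To make my confusion concrete, here's the toy picture I have in my head, assuming the single-station VUT (Kingman) approximation from Factory Physics rather than anything fab-specific -- a real fab is a re-entrant network of tools, so treat this as intuition only, and the `utilization_for_x` helper is just my own illustration:

```python
# Toy sketch of the operating curve, assuming the single-station VUT
# (Kingman) approximation from Factory Physics:
#   X = CT / te = 1 + V * u / (1 - u),  with V = (ca^2 + ce^2) / 2
# For a memoryless (M/M/1-like) station, V = 1 and X = 1 / (1 - u).

def utilization_for_x(x, v=1.0):
    """Invert X = 1 + v*u/(1-u) to get the utilization implied by a target X-factor."""
    return (x - 1.0) / (x - 1.0 + v)

for x in (2.0, 3.5, 5.0, 20.0):
    u = utilization_for_x(x)
    print(f"X = {x:4.1f}  ->  utilization ~ {u:.1%}")

# X =  2.0  ->  utilization ~ 50.0%
# X =  3.5  ->  utilization ~ 71.4%
# X =  5.0  ->  utilization ~ 80.0%
# X = 20.0  ->  utilization ~ 95.0%
```

If that simple picture is even roughly right, going from X = 3.5 to X = 5.0 only buys a handful of points of utilization while putting you on a much steeper part of the curve -- but I don't know if that's actually the reasoning behind the 2.5 - 3.5 rule of thumb.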
Is that something fab operations managers deliberately control to that range (2.5 - 3.5), for example by limiting wafer starts? Or does it vary with circumstance / fab strategy? -- for example, in a glut, when there's not much demand for throughput, keep cycle time low by running a lower X-factor; in a shortage, let it drift higher; or Company X likes to run its fab at 2.0 while Company Y likes to run its at 4.0.
Now I'm intrigued... is this a historical quirk that we're stuck with because of early fab decisions (like the width between railroad rails)? Or is it just the best we can do given the cost of equipment?
(sorry for the overload of questions, this stuff interests me)