
Which way the semiconductor industry is going?

The issue of rising cost is not new. It happened in the 1980s and was tackled: the fabless model was born. Now it is back. I heard in this discussion that the industry is maturing and repositioning itself into a commodity business (manufacturing) with more fragmentation (within design). There is talk of a "virtual IDM," which has to emerge if the previous statement is correct: deeper collaboration and alliances between all the players, deeper than ever before and stronger than in any other industry. This is easier said than done. There is a strong possibility of monopolies or oligopolies forming within the industry, and the current players would not want that. I read somewhere that this concern prompted the creation of the Common Platform Alliance (CPA) between Samsung, IBM, and GlobalFoundries, and that the alliance is not working the way it should, hence the issue of collaboration and alliances. Is anyone familiar with this angle of thought?
 
I have a similar background to yours. I have covered IC application, IC verification, system validation, and IC test for manufacturing, though I never had the chance to enter IC design. I am now in my second year of the MBA program.
As for the fabless vs. IDM question, I think it depends on the company; both models have their advantages and disadvantages. A fabless company needs less capital and can focus on IC integration and new design techniques, but its supply chain is constrained if business takes off and the foundry cannot support the capacity. So a fabless company must build a good relationship with its foundry, know it well, and hold long-term contracts for strategic alignment. IDMs have strong integration across design and manufacturing, but they must have strong sales to keep the fabs busy, like Intel, because the cost is huge.
BTW, I joined the MBA program because I want to move into technology and industry analysis, to understand how the industry and its technology develop.
 
I see a slow down in shrinking dimensions because of the lithography issues and a move to 3D-IC - i.e. system complexity will continue to rise, but we will be stuck at around 20nm for a while.

Power electronics is moving to use Gallium-Nitride and Silicon-Carbide, which may be driven by Smart-Grid applications and EVs.

The EDA tools are mostly old and dysfunctional (30% first spin success) so I think there may be opportunities to revamp them. That might be driven by more in-house work by the remaining large fabs who want to support IP reuse and "fabless" flows for others (e.g. Intel).

Another interesting piece hanging out there is the use of integrated optical interconnect to replace copper.
 
Wei Chen • The semiconductor industry is a mature industry and only a few large companies will survive. Niche markets will be served by small specialty companies.
 
Yes, consolidation will continue at an even faster pace now that design costs are climbing. Only the top semiconductor companies will be able to design at 20nm and below due to cost and complexity. Even EDA tools for 20nm and below will be more expensive than those for the higher nodes.
 
@Wei Chen - I would say the current players are mature rather than the industry itself: the manufacturing technology still moves fairly fast (per Moore's law), but the downstream industries are slow, with a tendency to "keep fighting the last battle".

@Dan - Since the laws of physics/electronics don't change from node-to-node there really shouldn't be that much re-tooling required. However, since the number of customers for some tools is decreasing I would expect the "upgrades" to be expensive regardless in order to keep EDA revenue up/growing. Mask-prep is the only area where I can see a need to do much rework.

Of course the industry could invest in some better open-source solutions and cut costs considerably ;)
 
Gordon Harling • Obviously physics is getting to be a bit of a problem now that we are down to the size of a few atoms and active gates are measured in single-digit nanometers; the back end of the process is much larger. My own impression is that we have been barreling down the Moore's Law path for a long time; even if process technology stopped dead, we could spend the next 50 years improving the efficiency of both technology and design and continue to pump out more and more sophisticated products without relying on brute-force process shrinks (sorry if I am understating the difficulty of that task).
 
@Gordon -

The brute force shrinkage stops here -

Single-atom transistor is end of Moore's Law; may be beginning of quantum computing

As such I would say there is plenty of opportunity to go smaller; however, the inability of the lithographic process to go much beyond UV means we are probably stalled in the short term. Having done physics, I can say that the lens systems currently used are impressive but probably a dead evolutionary branch, though you can skip past them to soft X-rays and computed holography to make masks with smaller features that are easier to use (only it's not available yet).

This is (of course) a problem for Intel, who have relied on being a node ahead to compensate for their lacklustre designs, since they are likely to find everyone converging on the same litho limit, at which point they won't have any advantage over the competition from being on their own silicon.

While the feature shrinking is stalled I expect Moore's law to continue through die-stacking (3D IC), i.e. the volumetric density of devices should keep going up.
 
Sadayuki Yoshitomi • Thanks, Daniel. I agree with you. The same background will lead Japanese makers to go fabless, too.

I think the huge fab makers are driving the semiconductor industry toward "COMMODITY". They pursue the development of ever-deeper submicron, more complex devices, backed by plenty of resources. That is good for the MEMORY/LOGIC industries, where device shrink directly influences performance.

The ANALOG/RF/POWER industries, which "COMMODITY" does not cover, are still happy with non-advanced process technology. They have no need for 2x-nm CMOS or FinFETs. Their factories are non-advanced, but the maintenance cost is low. I think this is the present situation for most of the non-huge players. One way for them to survive is to produce very original products and to find niche (but high-end) products that "COMMODITY" does not fit. High-quality customer service is important, too.

I think there are still fields of application where (non-advanced) CMOS technology can be applied. I am looking forward to finding such applications.
 
Daniel Tomaszewski • To Mr. Inpone (Nick) Phavorachith: the pm scale is the atomic scale. In this range, device/circuit operation (if such devices could be manufactured at all) would be random. Even in the range of tens of nm, variability is large and becomes a primary limitation for further device scaling. Also, compact modeling of such devices for design purposes (if possible at all) would have to be revised and restarted from scratch, I think.
 
Inpone (Nick) Phavorachith • Very well said, Mr. Tomaszewski, and thank you for your insight, as well as the others. The atomic scale is far beyond my understanding; as small as the picometer scale may be, I don't think we will even have the equipment to manufacture a process/device below 5nm. My opinion is that we may have reached the end of scalability and stability at around the 5nm process. Of course, I could be completely wrong as well.
 
Zvi Or-Bach • The industry is going to do very well and continue Moore's Law, but now by scaling up using monolithic 3D. The NAND flash vendors have already changed from scaling down dimensions to scaling up, and the rest of the industry will follow. Now that monolithic 3D is feasible and practical, it is surely the most natural path for the future.
 
Andy Turudic • Respectfully disagree, Daniel, unless you meant "packaged chip transistor count increases" instead of "Moore's Law".

Moore's law is a curve fit to a parabolic (square-law) function, and the capital, NRE, and recurring cost per transistor fabbed per year is VERY low, even at 10nm (do the math: for a 1B-transistor 28nm FPGA it works out to only $0.004 per transistor fabbed, per year, to build the fab that makes 1,000 chips a year). Those that need low cost per transistor, large transistor counts, or improvements in speed or thermal performance will continue "Moore's Madness". There's nothing less expensive, PER TRANSISTOR.
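[Editor's note: the cost-per-transistor figure above can be reproduced with a quick back-of-envelope calculation. This is only a sketch: the ~$4B fab cost is inferred by working backwards from the $0.004 figure and the stated 1B transistors and 1,000 chips/year; it is not stated in the post.]

```python
# Back-of-the-envelope: amortized fab cost per transistor fabbed per year.
# Assumed inputs: a ~$4B fab (inferred, not stated in the post), producing
# 1,000 chips per year, each a 1B-transistor 28nm FPGA.
fab_cost_usd = 4e9          # assumed fab construction cost
chips_per_year = 1_000      # annual output used in the post
transistors_per_chip = 1e9  # 1B-transistor FPGA

cost_per_transistor = fab_cost_usd / (chips_per_year * transistors_per_chip)
print(f"${cost_per_transistor:.3f} per transistor per year")  # -> $0.004
```

With these assumed numbers the arithmetic lands exactly on the $0.004-per-transistor figure quoted in the post.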

"3D" is a one-time doubler, assuming you can even get it to the equivalent of two layers, and if you're talking about die stacking using TSVs, 3D actually represents a cost INCREASE because of yield compounding. Then you're back to the area correlation again, because competition prioritizes cost over technologist amusement.

With stacked 3D, whether via (pun intended) TSV, wafer bonding, or whatever, it's a linear function of the number of layers. Thermal flux density limitations, again, will make 3D a fixed multiplier on the area correlation, and there's very little chance that layer counts will double every 18 months unless substrates change (compare SiC with Si, for instance, and the yield of its planar FETs/FinFETs... not a winning choice). I'd be surprised to see a doubling of layers every DECADE. We'll push multilayer to its limits quickly, then its progress will grind to a halt, or crawl along at a snail's-pace density curve. The ONLY economically viable, non-fantasy reasons for 3D are either a cost-tolerant packaging limitation or the coexistence of non-integrable processes and devices, IMO.
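[Editor's note: the yield-compounding argument above can be made concrete with a toy calculation. The 90% per-die yield is an illustrative assumption, not a figure from the thread, and the model ignores known-good-die testing and repair.]

```python
# Toy model of yield compounding in a TSV die stack: if each die yields
# independently at probability y, the stack is good only when ALL n dies
# are good, so stack yield falls as y**n.
def stack_yield(per_die_yield: float, n_layers: int) -> float:
    return per_die_yield ** n_layers

for n in (1, 2, 4, 8):
    print(f"{n} layer(s): {stack_yield(0.90, n):.3f}")
# Assuming 90% per-die yield: 1 -> 0.900, 2 -> 0.810, 4 -> 0.656,
# 8 -> 0.430 -- the cost per good stack rises correspondingly.
```

This is the compounding that makes naive TSV stacking a cost increase rather than a free density doubling, unless known-good-die screening or repair recovers the loss.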

Either Moore's law dies off to being a linear function instead of square, or we invent new, non FET, devices. My money's on the latter - you FET-heads need to dump the assumption it'll be a MOSFET in a few years, just like the vacuum tube people never envisioned FET-based scaling. That new technology will likely be borne out of CERN's work, IMO - the science has to come before the engineering can happen.

Time for something new...non-FET...."soon" - my crystal ball says two decades.

We also are stuck with a binary logic paradigm - there's a fair amount of innovation here that few, if any, are exploiting, particularly the somewhat unimaginative CAE providers. This may actually provide some relief until my predicted sub-atomic devices are developed....and those may be some form of analog/binary bastardization that will require entirely new computation constructs.

As far as analog goes, hang on to the thought that there's no room for process improvement or technology advances and you'll get run over by the big truck called innovation. As long as we invest in innovation instead of Wall St, human imagination, creativity, and resourcefulness are limitless. Sadly, we're in a phase where we are not, though kudos to the EU for CERN and ITER.

BTW, I'm now looking for new career opportunities in semiconductor product line roadmaps, strategic planning, chip architecture, product marketing, biz dev, etc. Please take a look at my profile and let me know if there's an area where I can make a significant contribution to a team you are working with, or if you know a team that would welcome my abilities and background..
 
Zvi Or-Bach • Hi Andy,
Nice and detailed, but very wrong!
Monolithic 3D provides all the benefits of dimensional scaling plus many more!
And before rebutting, please study why the NAND vendors are adopting monolithic 3D. As for the number of layers, some of them are talking about 128 layers and going further.
As for the heat, please read our recent blog <http://www.monolithic3d.com/blog.html>, which summarizes a joint paper with Stanford presented at the recent IEDM.
As for why cost is reduced with monolithic 3D, please read our blog, which was also published on EE Times:
<http://www.eetimes.com/electronics-blogs/other/4390409/Is-the-cost-reduction-associated-with-IC-scaling-over->
 
Inpone (Nick) Phavorachith • With due respect to all parties, if my understanding is correct, monolithic 3D technology appears to be very promising from various perspectives: smaller die size, better power consumption, more efficient device operation, and improved scalability. I am by no means an expert in nanotechnology, merely a curious learner. From the perspective of a monolithic 3D process with multiple layers stacked on top of each other, I think it may introduce even more complexity into the circuit design to compensate for hard-to-reach areas, so that device testing and failure analysis can be done effectively enough to sustain device quality and ROI. At least that's my opinion, and thank you all for posting.
 
Ze'ev Wurman • Nick,

There is no question that stacked monolithic layers introduce new elements of complexity, but I doubt those will be in routing (if I understood you correctly) -- from routability POV 3D is just better than 2D.

But you are correct that in 3D one needs to consider layer-based self-testing, possibly even before the whole stack is finalized, to achieve good yields and ROI. Techniques like contact-less powering and testing will help, and the fact that one layer of the stack may be dedicated to repairing the underlying layers may finally enable high yield with ultra-scale integration.
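[Editor's note: the effect of a dedicated repair/spare layer on stack yield can be sketched with a small binomial model. This is a toy model only: the 90% per-layer yield and whole-layer sparing are assumptions for illustration, not from the linked post, and real 3D repair schemes are far finer-grained than this.]

```python
from math import comb

# Toy redundancy model: a stack needs n working layers and is built with
# s interchangeable spares, so it works if at least n of the n+s layers
# are good. Each layer is assumed to yield independently at probability p.
def yield_with_spares(p: float, n: int, s: int) -> float:
    total = n + s
    return sum(comb(total, k) * p**k * (1 - p)**(total - k)
               for k in range(n, total + 1))

p = 0.90
print(f"8 layers, no spare : {yield_with_spares(p, 8, 0):.3f}")  # -> 0.430
print(f"8 layers, one spare: {yield_with_spares(p, 8, 1):.3f}")  # -> 0.775
```

Even one spare layer nearly doubles the good-stack probability in this toy setting, which is the intuition behind repair-layer proposals for high-yield stacks.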

You can read a bit more about these ideas here: http://www.monolithic3d.com/2/post/2012/01/repair-in-3d-stacks-the-path-to-100-yield-with-no-chip-size-limits.html
 