

2011: A Semiconductor Odyssey!
by Daniel Nenni on 04-15-2011 at 10:08 pm

Stanley Kubrick’s 2001: A Space Odyssey showed us a world where machine vision allowed a computer to watch and interact with its human colleagues. Yet after more than 40 years of incredible progress in semiconductor design, the technology needed to make computer-based image and video analysis a reality is still not practical.

While working with semiconductor IP companies in Silicon Valley I met Parimics, a very promising start-up focused on making real-time image and video analysis a reality. The company has created a unique architecture to make machine vision orders of magnitude more powerful, reliable and easy to use and deploy.

The key to Parimics’ image and video analysis subsystems is a novel image analysis processor chipset and accompanying software to deploy the technology. Vision systems based on Parimics’ technology are capable of processing image and video data in real time — right when the image and video data is taken. This is a huge advantage compared to current systems that perform analysis offline, i.e. after images have been captured and stored. Applications for the technology include automotive guidance; security and surveillance; medical and biotechnology applications such as f-MRI machines and high-throughput drug screening processes; advanced robotics; and of course generalized machine vision applications. Since these solutions are both deterministic and real-time capable, they can be used in scenarios that demand high reliability.

Image analysis solutions available today don’t have the speed or accuracy needed for complex machine vision applications. Speeds of 20 frames per second (FPS) or less are the rule. The ability to track specific objects is often limited to 3-5 individual items per frame. Today’s systems do a poor job of filtering out noise and objects that are not relevant to the video analysis. Finally, typical probabilistic methods greatly reduce system accuracy and trustworthiness; in some applications, probabilistic, unreliable, or non-real-time results simply cannot be tolerated.

In stark contrast, Parimics’ unique and patented architecture supports speeds from 50 to 20,000 FPS, and its results are deterministic and reliable. Even the cost-optimized version of the processor chipset can track 160 unique objects per VGA or HD frame. The computational horsepower of the chipset also allows it to filter out noise and irrelevant objects within a very small number of frames.
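
To put those frame rates in perspective, here is a quick back-of-the-envelope calculation (purely illustrative Python, not Parimics code or data) of the per-frame time budget a real-time pipeline must meet:

```python
# Per-frame processing budget at a given frame rate.
# Illustrative arithmetic only.

def frame_budget_us(fps: float) -> float:
    """Time available to fully process one frame, in microseconds."""
    return 1_000_000.0 / fps

for fps in (20, 50, 20_000):
    print(f"{fps:>6} FPS -> {frame_budget_us(fps):>10.1f} us per frame")

# 20 FPS leaves 50,000 us (50 ms) per frame; at 20,000 FPS the budget
# shrinks to 50 us, which is why offline, software-based analysis gives
# way to a deterministic, massively parallel hardware pipeline.
```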

Parimics’ architecture is specifically designed for advanced image analysis, much the same way that graphics processors are designed to create graphical images. The video analytics processors take in any type of image at any speed and use a massively parallel architecture to analyze all the objects in the field of view. Parimics’ scalable approach easily meets customers’ wide-ranging performance and cost requirements.

The company’s technical advisory board includes Stan Mazor, key inventor of Intel’s first microprocessor; John Gustafson, a leading expert in parallel architectures; John Wharton, inventor of the pervasive Intel 8051 microcontroller, and Dr. Gerard Medioni from USC, internationally known for his work on image recognition software.

Parimics’ founders, CEO Royce Johnson, and CTO Axel Kloth, have a combined experience of more than 50 years in chip design and management of cost-effective operations. See their web site at www.parimics.com for more details.



Chip-Package-System (CPS) Co-design
by Paul McLellan on 04-14-2011 at 5:13 pm

I can still remember the time, back in the mid-1980s, when I was at VLSI and we first discovered that we were going to have to worry about package pin inductance. Up until then we had been able to get away with a very simplistic model of the world since the clock rates weren’t high enough to need to worry about the package and PCB as anything other than a simple capacitive load. Put big enough transistors in the output pads and the signal would change acceptably fast. Of course, back then, 10K gates was a large chip so it was a rather simpler and more innocent time than we have today.

Those days are long gone. Now we have to analyze the chip, the package and the PCB as a single system; we can’t just worry about the chip separately from the package and then expect to put that package on any old PCB. In the old days, there was plenty of noise margin, we were less sensitive about trading off increased power against reliability, and there was simply less interaction between signals.

The full low-level details of the entire chip are clearly overkill for CPS analysis. Instead, a model of the die is necessary to model the noise generated by the chip. Chip power models include the die-level parasitics as well as instance switching noise. Since the model will be used for EMI emission analysis, simultaneously switching outputs (SSO) power analysis, noise analysis and more, the model must be accurate across a broad frequency range. Transistor switching noise is modeled by region-based current sources that capture the transient behavior. The power model is reduced to a SPICE format that allows long system simulations to be done using any circuit simulator.
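
To make the idea concrete, here is a minimal sketch of what "reduced to a SPICE format" can mean; the subcircuit structure, element names, and values below are hypothetical illustrations of the concept, not Apache's actual chip power model format:

```python
# Emit a toy chip power model as a SPICE subcircuit: die-level RC
# parasitics plus a piecewise-linear (PWL) current source standing in
# for a region's transistor switching activity. All values invented.

def chip_power_model(name, r_die, c_die, waveform):
    """Return SPICE text for a die model with a PWL switching current.

    waveform: list of (time_s, current_A) points for the PWL source.
    """
    pwl = " ".join(f"{t:g} {i:g}" for t, i in waveform)
    return "\n".join([
        f".SUBCKT {name} VDD VSS",
        f"Rdie VDD node1 {r_die}",     # on-die grid resistance
        f"Cdie node1 VSS {c_die}",     # die/well decoupling capacitance
        f"Ipwr node1 VSS PWL({pwl})",  # region switching-current profile
        ".ENDS",
    ])

print(chip_power_model("CHIP_PWR", "50m", "2n",
                       [(0, 0), (1e-9, 0.5), (2e-9, 1.2), (3e-9, 0.8)]))
```

Because the model is plain SPICE, it can run in long system-level simulations alongside package and board models in any circuit simulator, which is the point of the reduction.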

Chip power models represent the switching noise and parasitic network of the die. The next generation of chip power model has recently become available, enabling more advanced CPS analysis methodologies. Designers are now able to probe at lower metal layer nodes in the die, to observe transistor-level noise in CPS simulation. The transient behavior of the die can now capture advanced operational modes, which are of particular interest to system engineers, such as resonance-inducing activity and transitions from low-power to high-power modes. Chip designers can now create user-configurable chip power models that can be programmed to represent different modes, depending on the simulation.

The increasing importance of CPS analysis has led Apache to create a CPS user-group to bring some standardization and interoperability to the area and create a Chip-Power Model reference flow. Besides Apache, the founding members include representatives from the top 10 semiconductor companies.

Matt Elmore’s blog on CPS
CPS user-group created by Apache



DDR4 Controller IP, Cadence IP strategy… and Synopsys
by Eric Esteve on 04-14-2011 at 4:17 am


I will share with you some strategic information released by Cadence last week about their IP strategy, more specifically about the launch of their DDR4 Controller IP. I will also try to understand Cadence’s Interface IP strategy in general (USB, PCIe, SATA, DDRn, HDMI, MIPI…) and how Cadence is positioned with respect to their closest and more successful competitor in this field, Synopsys.

When Cadence acquired Denali for $315M, less than one year ago, one of the nuggets was the DDRn controller IP product line that Denali had built over the previous 10 years. Denali’s DDR controller IP was well known within the industry, doing pretty well with 2009 sales estimated at slightly less than $10M (even if Denali was one of the very few companies that consistently reported DDR Controller IP business results to Gartner below reality!). Their product was nice, but still based on a soft PHY, making life more complicated for the designer who had to integrate it. Synopsys’ DDR Controller IP (coming from the acquisition of MOSAID in 2007) was already based on a hard PHY, as was Virage’s product (coming from the acquisition of INGOT in 2005). That’s why Denali had to build a partnership with MOSYS (in fact Prism Circuit, before it was acquired in 2009) to offer a solution based on a hard PHY (from MOSYS) and their DDR3 Controller. Before the acquisition of Denali by Cadence (and of Virage by Synopsys, by the way) in May 2010, the DDR Controller IP market was growing fast and looked very promising, as we can see in the two figures.

In fact, even if the number of ASIC design starts is declining, the proportion of SoCs is growing faster, so the net count of ASICs integrating a processor (or controller, or DSP) core is growing. When you integrate a processor, you need to access external memory (the cost of embedded DRAM being prohibitive), so you need to integrate a DDRn Controller. Considering the ever-increasing memory interface frequencies, and the related difficulty of building a DDRn Controller in-house, the make-vs-buy question leads more and more frequently to an external solution, that is, to buying an IP. This is why even the most conservative forecast for the DDR Controller IP market shows a threefold increase over the next 3 years. And when we compare the DDR IP market with the other Interface IP markets, we expect it to be the fastest-growing, as we can see in the first figure.
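
As a quick sanity check on that forecast (simple arithmetic, not IPnest data): tripling over 3 years corresponds to a compound annual growth rate of roughly 44%.

```python
# Implied compound annual growth rate (CAGR) of an "x3 in 3 years"
# market forecast: growth_multiple ** (1 / years) - 1.

growth_multiple = 3.0   # market triples...
years = 3               # ...over three years
cagr = growth_multiple ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~44.2% per year
```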

With this history in mind, you can better understand why it was important for Cadence to be first to launch a DDR4 Controller IP. Proposing a hard PHY option is a way to catch up with Synopsys, who systematically offer a hard PHY with their DDR IP products. The lack of such a hard PHY was a weakness of the Denali DDRn IP product line, which explains why Denali had built a partnership with Prism Circuit in April 2009 to offer a complete solution based on a (soft) Controller from Denali and a (hard) PHY from Prism Circuit. Ironically, both companies have since been acquired…

If we look at the IP market, or at least at the Interface IP segments (USB, PCIe, SATA, HDMI, DDRn, MIPI…), we see that Cadence’s positioning is pretty weak compared with Synopsys. Cadence supports the DDRn segment, thanks to the Denali product line, and the PCIe segment (in fact restricted to PCIe Gen-3)… and that’s it. Synopsys, meanwhile, is active in all of the above-mentioned segments, with a dominant position (more than 50% market share) in USB, SATA, PCIe and DDRn, and a decent (but unknown) position in HDMI and MIPI. Moreover, the Cadence strategy presented during IP-SoC 2010 last December in Grenoble, which was to build a complete Interface IP offering through partnerships with existing IP vendors (if you prefer, offering solutions from 3rd parties instead of internally developed ones), has completely vanished. This will certainly leave the door open for Synopsys to consolidate their dominant position, and to build a product line of more than $250M on a $500M market (IPnest’s evaluation of the Interface IP market in 2015). To summarize: Mentor Graphics gave up on the IP market in 2005, and the latest announcement from Cadence means they will attack only 20% of this market (the DDRn IP) and give up on the remaining 80%, which will represent $400M in 2015.

The study of the Interface IP market, an analysis of 2004-2009 results and a forecast for 2010-2015, can be found at: http://www.ip-nest.com/index.php?page=wired

This survey is unique and very complete, looking at each interface (USB1/2, USB3, PCIe, SATA, DDRn, HDMI, MIPI, DisplayPort and more) and proposing a forecast for 2010-2015.
It has been sold to several IP vendors (Cadence (!), Mosys, Cast, Evatronix, HDL DH…) and to the KSIA in Korea. Please contact eric.esteve@ip-nest.com if you are interested…



AMD and GlobalFoundries / TI and UMC
by Daniel Nenni on 04-11-2011 at 11:38 am

There have been some significant foundry announcements recently that, if collated, will give you a glimpse into the future of the semiconductor industry. So let me do that for you here.

First the candid EETimes article about TI dumping Samsung as a foundry:

Taiwan’s UMC will take the "lead role" in making the OMAP 5 device on a foundry basis for TI, said Kevin Ritchie, senior vice president and manager of TI’s technology and manufacturing group.

"We have not been pleased with the results" at Samsung, he told EE Times in a telephone interview. Samsung has indeed built and shipped parts for TI. "I can’t complain about the yields," he said. "I can complain about everything else."

Regarding Samsung’s future as a foundry partner for TI, Ritchie said TI will rely on Samsung to a "lesser extent" at 45-nm. Samsung is "off our radar at 28-nm," he said.

As I have written before, Samsung is NOT a Foundry! And I absolutely agree with his statement about UMC, a company I have worked with for many years:

"UMC, for us, is everything that Samsung is not," he said. UMC "does not get enough credit."

UMC is by far the most humble foundry and gets much more respect from top-tier semiconductor companies than from the traditional press!

UMC (NYSE: UMC, TWSE: 2303) is a leading global semiconductor foundry that provides advanced technology and manufacturing services for applications spanning every major sector of the IC industry. UMC’s customer-driven foundry solutions allow chip designers to leverage the strength of the company’s leading-edge processes…

Foundry 2010 Revenue:
(1) TSMC $13B
(2) UMC $4B
(3) GFI $3.5B
(4) SMIC $1.5B



Second is the announcement about AMD and GlobalFoundries changing their manufacturing contract. The revised pricing model is a short-term measure to address AMD’s financial needs in 2011 as they aggressively ramp up and deliver 32nm technology to the marketplace. The intention is to return to a cost-plus model in 2012.

The agreement also provides an increased upside opportunity for GFI in securing a larger proportion of future chipset and GPU business from AMD. Why is this important to the foundry business? Because it absolutely supports the collaborative partnership model that GlobalFoundries has been pushing since day one. The question I have is: when will AMD and GFI announce a joint ARM-based mobile strategy? The insider talk was that AMD CEO Dirk Meyer was forced to resign due to the lack of a feasible mobile strategy, so a quick ARM-based strategy would make complete sense.

The other significant GlobalFoundries news is the partnership with IMEC on sub-22nm CMOS scaling and GaN-on-Si technology. IMEC, a long-time TSMC partner, is one of the largest semiconductor research consortia in the world, where IC manufacturers, fabless companies, equipment and material suppliers, foundries, and integrated device manufacturers (IDMs) collaborate on continued CMOS scaling into sub-22nm nodes. Today GFI relies on IBM process technology, which is limiting to say the least. IBM can’t even spell collaborate.

All in all a great week for the semiconductor industry! GlobalFoundries is definitely in for the long term and UMC finally gets credit where credit is due!



Who Needs a 3D Field Solver for IC Design?
by Daniel Payne on 04-07-2011 at 4:53 pm

Introduction
In the early days we made paper plots of an IC layout, measured the width and length of interconnect segments with a ruler, added up all of the squares, then multiplied by the resistance per square. It was tedious, error prone and took way too much time, but we were rewarded with accurate parasitic values for our SPICE circuit simulations.
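
For nostalgia’s sake, that manual method fits in a few lines; the sheet-resistance value below is purely illustrative:

```python
# Interconnect resistance by counting squares: R = (L / W) * Rsheet,
# where Rsheet is the layer's sheet resistance in ohms per square.

def wire_resistance(length_um: float, width_um: float,
                    sheet_res_ohm_sq: float) -> float:
    """Resistance of a straight wire segment from its square count."""
    squares = length_um / width_um
    return squares * sheet_res_ohm_sq

# Example: a 100 um long, 1 um wide line at 0.08 ohm/square.
print(f"{wire_resistance(100.0, 1.0, 0.08):.2f} ohms")  # 8.00 ohms
```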

Today we have many automated technologies to choose from when it comes to extracting parasitic values for an IC layout. These parasitic values ensure that our SPICE simulations provide accurate timing, detect glitches, and measure the effects of cross-talk.

Accuracy vs Speed


The first automated parasitic extraction tools used per-layer rules that model resistance and capacitance as a function of width, length and proximity to other layers. These tools are fast and reasonably accurate for nodes with wide, thin interconnect. As interconnect height has grown, the accuracy of these rules has diminished because of the complex 3D interactions with nearby layers.
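
To see why, consider a simple rule of the parallel-plate-plus-sidewall form (a sketch with invented constants, not a real rule deck): as wires get narrower and more tightly packed while staying tall, the sidewall coupling term, which depends on the 3D context of neighboring wires, comes to dominate the easily modeled plate term.

```python
# Area (plate) vs. sidewall (coupling) capacitance for a wire, using
# idealized parallel-plate formulas. All dimensions/constants invented.

EPS0_F_PER_UM = 8.854e-18  # vacuum permittivity in F/um
K_OXIDE = 3.9              # relative permittivity of SiO2

def plate_cap_fF(width_um, length_um, dielectric_um):
    """Capacitance to the plane below, in femtofarads."""
    return K_OXIDE * EPS0_F_PER_UM * width_um * length_um / dielectric_um * 1e15

def sidewall_cap_fF(thickness_um, length_um, spacing_um):
    """Coupling to a parallel same-layer neighbor, in femtofarads."""
    return K_OXIDE * EPS0_F_PER_UM * thickness_um * length_um / spacing_um * 1e15

# Wide, thin wire: the plate term dominates, so simple rules work well.
print(plate_cap_fF(1.0, 100.0, 0.5), sidewall_cap_fF(0.3, 100.0, 1.0))
# ~6.9 fF plate vs ~1.0 fF sidewall

# Narrow wire at tight spacing: sidewall coupling dominates instead.
print(plate_cap_fF(0.1, 100.0, 0.2), sidewall_cap_fF(0.3, 100.0, 0.1))
# ~1.7 fF plate vs ~10.4 fF sidewall
```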

3D field solvers have been around for over a decade and offer the ultimate in accuracy, with the major downside being slow run times. The chart above places traditional 3D field solvers in the upper left-hand corner: high accuracy, low performance.

Here’s a quick comparison of four different approaches to extracting IC parasitics:

| Approach | Plus | Minus |
|----------|------|-------|
| Rule-based/Pattern Matching | Status quo; familiar; full-chip | Unsuitable for complex structures; unable to reach within 5% of reference |
| Traditional Field Solver | Reference accuracy | Long run times; limited to devices |
| Random-Walk Field Solver | Improved integration | 3 to 4X slower than deterministic |
| Deterministic Field Solver | Reference-like accuracy; fast as rule-based | Multi-CPU required (4-8) |

What if you could find a tool that was in the upper right hand corner, offering high accuracy and fast run times?

That corner is the goal of a new breed of 3D field solvers, where the highest accuracy and fast run times co-exist.

Mentor’s 3D Field Solver
I learned more about 3D field solvers from Claudia Relyea, TME at Mentor for the Calibre xACT 3D tool, when we met last month in Wilsonville, Oregon. The xACT 3D tool is a deterministic 3D field solver where multiple CPUs are used to achieve faster run times. A white paper is available for download here.

Q: Why shouldn’t I try a 3D field solver with a random-walk approach?

A: Well, your results with a random-walk tool will have a higher error level. Let’s say you have 1 million nets in your design; with a sigma of 1% you will see roughly 3,000 nets where the accuracy is more than 3% off from a reference result. For sensitive analog circuits and data converters that level of inaccuracy will make your chip fail.
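
That figure follows from treating the extraction error as roughly Gaussian: a 3% bound on a 1% sigma is a 3-sigma event. A quick check (my own verification of the arithmetic, not Mentor’s):

```python
# Expected nets beyond a 3-sigma error bound, assuming normally
# distributed extraction error with sigma = 1%.
from statistics import NormalDist

nets = 1_000_000
sigma = 0.01   # 1% standard deviation of per-net error
bound = 0.03   # 3% accuracy threshold

tail = 2 * (1 - NormalDist(mu=0.0, sigma=sigma).cdf(bound))
print(f"{nets * tail:.0f} nets expected beyond 3% error")  # ~2,700
```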

Q: What is the speed difference between xACT 3D and random walk tools?

A: We see xACT 3D running about 4X faster.

Q: What kind of run times can I expect with your 3D field solver?

A: About 120K nets/hour when using 32 CPUs, and 65K nets/hour with 16 CPUs.

Q: How is the accuracy of your tool compared to something like Raphael?

A: On a 28nm NAND chip we saw xACT 3D numbers that were between 1.5% and -2.9% of Raphael results.

Q: Which customers are using xACT 3D?

A: Over a dozen; the ones we can mention are STARC, eSilicon and UMC.

Q: For a device level example, how do you compare to a reference field solver?

A: xACT 3D ran in 9 seconds versus 4.5 hours, and the error versus reference was between 4.5% and -3.8%.

Q: What kind of accuracy would I expect on an SRAM cell?

A: We ran an SRAM design and found xACT 3D was within 2.07% of reference results.

Q: How does the run time scale with the transistor count?

A: Calibre xACT 3D has linear run time performance with transistor count. Traditional field solvers have an exponential run time with transistor count making them useful for only small cells.

Q: What is the performance on a large design?

A: A memory array with 2 million nets runs in just 28 hours when using 16 CPUs.
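
As a consistency check on the figures quoted in this interview (simple arithmetic only):

```python
# 2M nets in 28 hours on 16 CPUs vs. the 65K nets/hour quoted earlier.
nets, hours = 2_000_000, 28
print(f"{nets / hours:,.0f} nets/hour on 16 CPUs")  # ~71,429 nets/hour

# Quoted 16 -> 32 CPU throughput (65K -> 120K nets/hour) is close to,
# but a bit below, ideal 2x linear scaling.
print(f"{120_000 / 65_000:.2f}x speedup from doubling CPUs")  # 1.85x
```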

Q: Can your tool extract inductors?

A: Yes, it’s just an option you can choose.

Q: How would xACT 3D work in a Cadence IC tool flow? Can I cross-probe parasitics in Cadence Virtuoso?

A: Yes, that uses Calibre RVE.

Q: Where would I use this tool in my flow?

A: Every place that you need the highest accuracy: device, cell and chip levels.

Summary
3D field solvers are not just for device-level IC parasitics; with multiple CPUs they can be used at the cell and chip levels as well. Mentor’s deterministic approach gives me more confidence than the random-walk method because I don’t need to worry about accuracy.

I’ve organized a panel discussion at DAC on the topic of 3D field solvers, so hope to see you in San Diego this June.



Wanted: FPGA start-up! …Dead or Alive?
by Eric Esteve on 04-05-2011 at 6:23 am

The recent announcement from Tabula about the $108 million raised in its Series D round of funding is putting the focus on FPGA technology, and on FPGA startups in particular. Who are these FPGA startups? What is their differentiation, and where is the innovation: in the product or in the business model?

When you say FPGA, you first think: customization. "Field Programmable" means any design engineer can do it (provided he has the right tool set). Almost immediately, the two brands "Xilinx" and "Altera" come to mind, illustrating the duopoly ruling the FPGA market. These two companies have been successful because they have been able to "standardize the customization" by creating numerous product lines, whether low cost, high density, DSP-centric and so on. The "Makimoto’s Wave" concept illustrates very well the expansion model of the customer-specific market (ASIC, SoC, FPGA, PLD…), oscillating between customization and standardization.

If we go further, we can say that innovation, brought by start-ups, is linked with customization, and maturity with standardization. When a new product finds a market because it offers valuable differentiation (customization), its success will pass through mass production, requiring a high level of standardization.

If we look at the history of PLD startups (published by EETimes in July 2009), we can see that almost all the startups since 1985 are dead, except QuickLogic and Atmel, neither doing particularly well. We can also see that a lack of money is probably not the reason for failure, as the list of parent companies includes AMD, Philips, TI, National Semiconductor, Samsung, SGS, Toshiba, IBM… That reminds me of the person in charge of TI’s FPGA product line for Europe in the mid-90s, a certain Warren East. Being in charge of an FPGA business may lead to success, as long as you get a chance to escape from the FPGA business, but I digress. The second observation is that the companies still alive were all born after 2004. Even if we could learn a lot from a post-mortem analysis, we will instead look at the startups that are still alive: at their products, differentiation and business models. And try to guess who has a chance of success…

Talking about FPGA start-ups: such a company has to offer a differentiated product, usually based on technical innovation, even if the differentiation can also be a new business model, like for example offering an FPGA "design block", or IP, to be integrated into a more traditional SoC (ASIC or ASSP). The list of "still alive" startups is short: Achronix, Menta, SiliconBlue, and Tabula.

Achronix is the first commercially launched FPGA company whose architecture departs from conventional ones. They have developed asynchronous FPGAs, allowing very high speed operation. Achronix claims to deliver the world’s fastest FPGAs, with frequencies up to 1.5GHz, the Speedster family being fabricated on TSMC’s 65nm process. They have hard blocks for memory, multipliers, SerDes and PLLs, and also for memory and communication controllers. Their CAD tool suite, ACE (Achronix CAD Environment), provides a classical RTL tool flow for the programmer by hiding all the effects of the asynchronous FPGA hardware. Target market segments are networking, telecommunications, DSP, high performance computing, military and aerospace, etc. The company got special industry attention in 2010 when they announced a partnership with Intel to build 22nm FPGAs on Intel’s process, being the first company to share such an advanced process technology.
Differentiation: The 3H: High Speed Logic, High density, High speed SerDes
Market: Networking, high performance computing
Challenge: directly compete with the Big-2 on the sweet spot (high ASP products)
Tech./Fab: TSMC 65nm


Menta licenses the world’s first pure soft FPGA IP core. A soft IP makes integration into an SoC very easy, since it is synthesized with the standard HDL design flow. Being a soft core, Menta’s eFPGA is technology independent, which greatly eases SoC manufacturing since it can be integrated in any process technology the SoC is designed for. Implementing systems in programmable logic is slower, bigger and more power-consuming than dedicated hardware. Menta is fully aware of this problem, and its ultra-compact architecture narrows this gap, giving the SoC designer the flexibility of an FPGA with ASIC-like performance.

Differentiation: Business Model (offering FPGA as an IP design block)
Market: ASSPs/ASIC, MCUs, Aerospace/Defense/Automotive
Challenge: Funding to strengthen technology and expand business
Tech./Fab: Technology independent! Current evaluation focus: ST 65nm, TSMC 45nm




SiliconBlue has a major focus on low-power FPGAs for battery-powered portable devices. Their iCE65 FPGA family is built on TSMC’s low-power 65nm process. They have made several innovations in packaging and configuration mechanisms to make their devices compact, low-power, single-chip solutions for this target market. Their FPGAs have very low static power and, compared to other vendors’ parts, are relatively small: logic cells (LUT4+FF) range from 1,200 to 16,000. They have embedded memory blocks and phase-locked loops (PLLs) as hard macros. They also offer their FPGAs in die form for SiP (System in Package) solutions. One of their most appreciated innovations is a single-chip solution using embedded non-volatile XPM memory from Kilopass, which loads the configuration into the FPGA’s SRAM on power-up.
Differentiation: Low power
Market: Mobile market
Challenge: meet ASIC/ASSP price point
Tech./Fab: TSMC 65nm

Tabula’s technology can be considered a masterpiece of dynamic reconfiguration. The device is not physically 3D in manufacturing; they treat time as the third axis. By this means, their ABAX 3PLD devices, fabricated on TSMC’s 40nm process, show gains over an equivalent classical 2D FPGA of around 2.5X in logic density, 2.0X in memory and 3.7X in DSP performance. More importantly, as stated before, despite an architecture that is physically quite unnatural, the programming model according to the company is purely standard RTL based. Their 3D Spacetime Compiler makes this possible. Tabula claims to have Cisco as a customer.
Differentiation: Time based reconfiguration
Market: networking
Challenge: directly compete with the Big-2 on the sweet spot (high ASP products) – Product introduction has been very long (5 years+)
Tech./Fab: TSMC 40nm

These four startups are well segmented in terms of differentiation: one concentrates on low-complexity, very low power products to target the mobile industry, while the second, offering high performance core logic (1.5 GHz peak performance) and supporting a wide range of communication protocols and SerDes up to 12.5 Gbps, targets the networking segment. The other two propose truly disruptive innovation: the third proposes to embed the FPGA as an IP design block (a business model innovation), targeting IDMs or fabless companies in multiple segments, while the fourth has created the "3D" FPGA concept where the third dimension is time.

To be honest, none of these startups should "lose", as each of them offers a key differentiator. Except if they fail to attract customers because of a too-complex design flow or a too-expensive toolset, or fail to keep them because of poor execution (in production). Or simply if the duopoly closes the technology gap. We will probably dig deeper and come back in a future blog, as this is a fascinating part of the SC industry.

I would like to thank Syed Zahid Ahmed (www.linkedin.com/in/SyedZahidAhmed), who helped me write this blog using the know-how of the emerging FPGA market acquired while doing research for his PhD. By the way, Zahid will defend his PhD in June, and is looking these days for his next adventure (job)!
Eric Esteve (eric.esteve@ip-nest.com)



Samsung is NOT a Foundry!
by Daniel Nenni on 04-01-2011 at 5:47 pm


Samsung is the #1 electronics company, the #2 semiconductor company, and for 20+ years the world’s largest memory chip maker. Analysts expect Samsung to catch Intel by the year 2014. In the foundry business, however, Samsung is a distant #9 after more than five years of investment, and here’s why:

Foundry 2010 Revenue:
(1) TSMC $13B
(2) UMC $4B
(3) GFI $3.5B
(4) SMIC $1.5B
(5) Dongbu $512M
(6) Tower/Jazz $509M
(7) Vanguard $508M
(8) IBM $430M
(9) Samsung $420M
(10) MagnaChip $405M

Dr. Kwang-Hyun Kim, Executive VP, Foundry Business, Samsung Electronics, keynoted the final day of the SNUG 2011 conference with "Consumer Driven Innovation in SoC Design". It was an excellent presentation on consumer electronics market trends and the associated challenges down to the semiconductor device level. It did not, however, contain anything that supports Samsung’s position that it is serious about the foundry business.

Samsung Electronics’ foundry business, Samsung Foundry, is dedicated to supporting fabless and IDM semiconductor companies, offering full-service solutions, from design kits and proven IP to fully turnkey manufacturing, to achieve market success with advanced IC designs through Foundry, ASIC and COT engagements.

The market trends Dr Kim covered were: high performance WITH low power, rapidly increasing bandwidth requirements (streaming mobile HD video), increasing design complexity (multicore everything), and short turn-around time (12 month product cycles versus 18 month SoC design cycles). Samsung Electronics has a nice Wikipedia page HERE. The Samsung Foundry site is HERE.

IDM 2010 Revenue:
(1) Intel $40B
(2) Samsung $32.6B
(3) Toshiba $13.4B
(4) TI $13B
(5) Renesas $11.8B
(6) Hynix $10B
(7) STMicro $10B
(8) Elpida $7B
(9) Infineon $6.2B
(10) Sony $5.6B

According to a 2010 interview with Ana Hunter, Samsung Semiconductor Vice President of Foundry Services, after years of trying, "Samsung’s share of the foundry business is not as big as we want, but it takes time to put the pieces in place and ramp designs." Hunter stated that "The foundry business is part of our core strategy" and highlighted 6 reasons why Samsung believes it will succeed:


  • Capacity – Samsung plans to double its production of chips for outside customers every year until it rivals market leader TSMC. I asked a Samsung executive what the Samsung foundry capacity was today and he declined to answer. All other foundries are very open about this.
  • Resources – Samsung is one of the few companies that has the resources to compete at the high-end of the foundry market. Other than Intel, IBM, TSMC, UMC, SMIC, and GFI of course.
  • Leading Edge Technology – Samsung was ramping 45-nm technology at a time when TSMC and others were struggling.
  • Leading Edge Technology part II – Samsung will be one of the first foundries to roll out a high-k/metal-gate solution. The technology will be offered at the 32- and 28-nm nodes. (TSMC and GFI will go straight to 28nm HKMG this year.)
  • Leading Edge Technology part III – Samsung is using gate-first HKMG technology while rival TSMC is going gate-last. News flash: at 20nm and below all foundries will use gate-last HKMG technology, which is a clear TSMC win.
  • Ecosystem – Samsung has put the EDA pieces in place for the design-for-manufacturing puzzle. Actually, GlobalFoundries has and Samsung is following their lead.

Now let me give the 6 reasons why I believe Samsung will continue to struggle as a foundry:

  • Business Model – The Foundry business is services centric, the IDM business is not. This is a serious paradigm shift for Samsung. GlobalFoundries made the transition though so it can be done.
  • Customer Diversity – Supporting a handful of customers/products is a far cry from supporting the hundreds of customers and thousands of products that TSMC does.
  • Ecosystem – An open ecosystem is required which includes supporting commercial EDA, Semiconductor IP, and Design Services companies of all shapes and sizes.
  • Conflict of Interest – Pure-play foundries will not compete with customers; non-pure-play foundries (Samsung/Intel) will. Would you share sensitive design, yield, and cost data with your competitor? Apple will move its iProduct ICs from Samsung to TSMC for this reason alone.
  • China – The Chinese market represents the single largest growth opportunity for the foundry business. TSMC has a fab in Shanghai and 10% control of SMIC (#4), UMC (#2) has control of China’s He Jian (#11), and Samsung does not even speak Mandarin.
  • Competition – The foundry business is ultra competitive, very sticky, and predatory pricing (product dumping) will not get you from #9 to #1.

DRC/DFM inside of Place and Route

by Daniel Payne on 03-31-2011 at 10:19 am

Intro
Earlier this month I drove to Mentor Graphics in Wilsonville, Oregon and spoke with Michael Buehler-Garcia, Director of Marketing, and Nancy Nguyen, TME, both part of the Calibre Design to Silicon Division. I’m a big fan of correct-by-construction thinking in EDA tools, and what they had to say immediately caught my attention.

The Old Way
In a sequential thinking mode you could run a P&R tool, do physical signoff (DRC/LVS), optimize for DFM, find & fix timing or layout issues, then continue to iterate while hoping for manufacturing closure.

The New Way
Instead of a sequential process, what about a concurrent one? Well, in this case it really works to reduce the headaches of manufacturing closure.

The concept is that while the P&R tool is running it has real-time access to DRC, LVS and DFM engines. This approach is what I term correct-by-construction, and it creates a tight, concurrent sub-flow of tools to get the job done smarter than with sequential tools.

The product is called Calibre InRoute and it includes the following four engines:

• P&R – Olympus SoC
• DRC – Calibre nmDRC
• LVS – Calibre nmLVS
• DFM – Calibre DFM

DRC Checking with the Sign-off Deck
Since Calibre is a golden sign-off tool at the foundries, it makes sense to use this engine and the sign-off rule deck while P&R is happening. Previous generations of P&R tools embedded some rudimentary DRC checks, but now that there are hundreds of rules at 28nm you cannot afford to rely on incomplete DRC checks during P&R.
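
To give a feel for what one such rule does, here is a toy minimum-spacing check between same-layer rectangles (a Python illustration of the concept, not Calibre’s SVRF rule language; the spacing value is invented):

```python
# Toy DRC rule: minimum spacing between same-layer rectangles.
# Real 28nm decks contain hundreds of rules with context-dependent
# exceptions, which is why embedding the real sign-off engine in P&R
# beats re-implementing a subset.
from dataclasses import dataclass

@dataclass
class Rect:
    x1: float  # lower-left x, microns
    y1: float  # lower-left y
    x2: float  # upper-right x
    y2: float  # upper-right y

def spacing(a: Rect, b: Rect) -> float:
    """Edge-to-edge distance between two rectangles (0 if they touch)."""
    dx = max(b.x1 - a.x2, a.x1 - b.x2, 0.0)
    dy = max(b.y1 - a.y2, a.y1 - b.y2, 0.0)
    return (dx * dx + dy * dy) ** 0.5

MIN_SPACE = 0.05  # hypothetical metal1 min-spacing rule, microns

shapes = [Rect(0.0, 0.0, 1.0, 0.1), Rect(1.03, 0.0, 2.0, 0.1)]
for i, a in enumerate(shapes):
    for b in shapes[i + 1:]:
        s = spacing(a, b)
        if s < MIN_SPACE:
            print(f"spacing violation: {s:.3f} um < {MIN_SPACE} um")
```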

What I saw in the demo was DRC checking results in both text and visual formats. There was even a detailed description of each DRC violation.

OK, it’s nice to have DRC checking, but what about fixing the DRC violations for me?

InRoute does just that. Here’s a DRC violation shown highlighted in yellow:

And the same area after the DRC violation has been automatically fixed for me:

DFM Issues – Find and Fix Automatically
Just like DRC issues, the InRoute tool can find and fix your DFM issues, which saves time and speeds you through manufacturing closure:

Litho Friendly Design
The perfectly rectangular polygons that we draw in our IC layouts are not what ends up on the masks or silicon. Lithography process effects can be taken into account while InRoute is running, to identify issues and even fix some of them. Here’s an example that shows a minimum-width violation and how it was identified and auto-fixed:

Who is Using This?
STMicroelectronics is a tier-one IC design company and one of the first to publicly talk about Calibre InRoute.

"We have used Calibre InRoute on a production 55nm SOC. InRoute has successfully corrected the DRC violations caused by several complex IPs, whose 'abstract' views did not fully match the underlying layout, as well as several detailed routing DRC violations," said Philippe Magarshack, STMicroelectronics Technology R&D Group Vice President and Central CAD and Design Solutions General Manager.

Conclusion
The old way of buying point tools (P&R, DRC, LVS, DFM, Litho) from multiple EDA vendors and creating your own sub-flows gets you through manufacturing closure only after multiple iterations, while this new concurrent approach, using EDA tools qualified by the foundries, reduces iterations significantly. Calibre InRoute looks well-suited to the challenges of manufacturing closure thanks to its concurrent approach.


Andrew Yang’s presentation at Globalpress electronic summit

by Paul McLellan on 03-30-2011 at 3:15 pm

Yesterday at the Globalpress electronic summit Andrew gave an overview of the Apache product line, carefully avoiding saying anything he cannot say due to the filing of Apache’s S-1. From a financial point of view the company has had 8 consecutive years of growth, has been profitable since 2008, and has no debt. During 2010, when the EDA industry grew 9%, Apache grew 27%. The detailed P&L can be found in Apache’s S-1.

Apache is focused on low power, and this is a problem that is going to be with us going forward. Per Gary Smith, Apache has a 73% market share in physical power analysis and is the sign-off solution for all 20 of the iSuppli top-20 semiconductor companies.

Apache’s business in power and noise is driven by some underlying industry trends. The number of transistors per chip is rising (more power density), power supply voltages are coming down and getting closer and closer to threshold voltages (less noise margin), I/O performance is increasing (more noise) and package pin counts are exploding.

These trends have driven Apache’s solutions in 3 areas: power budgeting, power delivery integrity and power-induced noise.

The really big underlying trend that keeps designers awake at night is the growing disparity between power budgets, which don’t really increase since we are not looking for shorter battery life on our devices, and power consumption. I’ve seen a similar worry from Mike Muller of ARM, looking at how many processors we’ll be able to put on a chip and worrying that we won’t be able to light them all up at once for power reasons.

Another growing problem is power noise from signals going out from the chip, through the package, onto the board, back into the package and onto the chip again. The only way to handle this is to look at the whole chip-package-system, including decoupling capacitors, the power grid, well capacitance and so on, all factors we’ve managed to avoid until now.


2011 Semiconductor Design Forecast: Partly Cloudy!

by Daniel Nenni on 03-29-2011 at 11:39 am

This was my first SNUG (Synopsys User Group) meeting as media, so it was a ground-breaking event. Media was still barred from some of the sessions but hey, it’s a start. The most blog-worthy announcement on day 1 was that Synopsys signed a deal with Amazon to bring the cloud to mainstream EDA!

Even more blog-worthy was a media roundtable with Aart de Geus. Aart more than hinted that cloud computing, or SaaS (software as a service), will also be used to change the EDA business model. The strategy is to offer cloud services for "peak" simulation requirements, when customers need hundreds of CPUs for short periods of time. Once customers get comfortable with EDA in the cloud they will switch completely and cut millions of dollars from their IT budgets (my opinion).

Yes, I know other EDA companies have dabbled in the cloud, but dabbling in something that will DISRUPT an industry does not cut it. Cadence is a cloud poser and so is EDA up to this point. Synopsys, and Aart de Geus specifically, can make this transition happen, absolutely. When Aart discussed it, he had passion in his voice and determination in his eyes; do not bet against Aart on this one.


Can EDA software take advantage of a big fat cloud? Yes, of course. Synopsys will start with VCS and add other tape-out-gating, compute-intensive applications like HSPICE and hopefully DRC. Is a cloud secure enough for EDA? Aart mentioned that the Amazon cloud is "Military Secure!" Not the best analogy! I would say that Amazon is more than military secure, and much more secure than any private data center. Traditional endpoint security is no longer enough. You must have truly massive perimeter control, and only cloud companies like Amazon can provide that.

It would also be nice to get all of those private, EDA-saturated CPUs off the Silicon Valley power grid and into a cloud powered by greener energy sources. Right?

How can a cloud help EDA grow? Well, clearly EDA360 has fizzled, so my bet is on the cloud. Not only will this free up millions of dollars in IT and facilities expense, it also presents EDA with the opportunity for a much needed business model change. I gave Aart business-model grief at the EDA CEO panel and in a follow-up blog, EDA / IP Business Model Debate: Daniel Nenni versus Aart de Geus. Hopefully this is his response. A cloud-based business model, or Software as a Service (SaaS), is much more collaborative and presents significant incremental revenue opportunities for EDA.


The other thing I questioned Aart on was his run for California Governor. The response to the Aart de Geus (Synopsys) for Governor! blog was overwhelming, with big views from Sacramento and even Washington DC. Unfortunately Aart told me that he is not up for the challenge and will continue to shape the world through semiconductors. Probably the only thing Aart and I have in common is determination, so this is not over by a long shot.

One final comment on SNUG: the vendor exhibition was one of the best I have seen and something I have asked for in the past. 60 or so vendors participated in uniform 10x10 booths, a level playing field for big companies and small. 2,000+ people attended (my guess). We were wined, dined, and demo’d, business as it should be. Only one thing was missing: John Cooley! I know times are tough John, but how could you miss the 21st birthday of the child you co-parented? Did the invitation get lost in your SPAM filter? Seriously, you were missed.