
Signoff Summit and Voltus
by Paul McLellan on 11-22-2013 at 10:21 am

Yesterday Cadence had an all-day Signoff Summit where they talked about the tools that they have for signoff in advanced nodes. Well, of course, those tools work just fine in non-advanced nodes too, but at 20nm and 16nm there are FinFETs, double patterning, timing impacts from dummy metal fill, a gazillion corners to be analyzed and so on.

The core of Cadence’s signoff environment consists of 3 tools, two of them new and one of them updated. These are:

  • Tempus, Cadence’s new timing engine announced in May
  • Voltus, Cadence’s new power grid analysis tool announced a couple of weeks ago
  • QRC, Cadence’s parasitic extraction tool

These tools are designed to interact, because at these process nodes signoff is increasingly like tuning a steel-drum for a Caribbean band, where every change you make alters every other note on the drum. Every change you make to the power network alters the parasitics and the timing, and adjustments to the timing change the power demands. You just have to cross your fingers and hope that the changes get smaller and smaller and eventually converge.
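The hope that the changes converge can be pictured as a fixed-point iteration between the timing and power-grid views of the design. Here is a toy sketch in Python, with invented sensitivity coefficients standing in for what real signoff tools compute; it is an illustration of the feedback loop, not any tool's actual algorithm:

```python
# Toy fixed-point iteration between timing and IR drop.
# All coefficients are invented for illustration only.

def iterate_signoff(max_iters=50, tol=1e-6):
    slack_ns = 0.0       # timing degradation from IR drop
    ir_drop_mv = 50.0    # initial IR-drop estimate
    for i in range(max_iters):
        # Timing degrades with IR drop (invented: 0.002 ns lost per mV of drop)
        new_slack = -0.002 * ir_drop_mv
        # Fixing timing adds buffers and upsizes cells, drawing more current
        # (invented: 100 mV of extra drop per ns of timing fixed)
        new_ir = 50.0 + 100.0 * abs(new_slack)
        if abs(new_ir - ir_drop_mv) < tol and abs(new_slack - slack_ns) < tol:
            return i + 1, new_slack, new_ir  # converged
        slack_ns, ir_drop_mv = new_slack, new_ir
    return max_iters, slack_ns, ir_drop_mv

iters, slack, ir = iterate_signoff()
# With these gentle couplings the loop settles in about a dozen iterations
```

Because each effect feeds back only a fraction of the other's change, the loop is a contraction and settles quickly. With stronger coupling, as at advanced nodes, convergence is not guaranteed, which is exactly the author's point about crossing your fingers.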

There is apparently an ancient Haida saying that “everything depends on everything else.” Sounds like the perfect metaphor for advanced node signoff!

The biggest effect is that voltage affects timing, so accurate analysis of the power grid (especially IR drop) is very important. But timing affects the power supply too: as changes are made to the design to meet timing, there are knock-on effects that incrementally change voltage and thermal behavior (and temperature affects timing and power dissipation, and not in a good way). And changes to the power net change all the parasitics. This all needs to be integrated with Allegro and Sigrity to take account of package and board effects, since major current changes, especially inrush current when powering up domains that were powered down, can cause huge transients that affect the whole on-chip power network and thus the timing and…you get the idea. Everything depends on everything else.
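To make the IR-drop piece concrete, here is a minimal sketch of static IR drop along a single power rail fed from one end, with evenly spaced current taps. All values are invented for illustration; a tool like Voltus solves the full network of the whole grid, not a one-dimensional rail:

```python
# Static IR drop along a 1-D power rail fed from the left end.
# Segment resistance and tap currents are illustrative, not from any real design.

def rail_ir_drop(r_seg_ohms, tap_currents_a):
    """Voltage drop at each tap of a rail fed from the left end.

    The current through segment k is the sum of all tap currents at or
    beyond tap k; the drop at tap k accumulates I*R over segments 1..k.
    """
    n = len(tap_currents_a)
    drops = []
    total = 0.0
    for k in range(n):
        seg_current = sum(tap_currents_a[k:])  # current still flowing in segment k
        total += seg_current * r_seg_ohms
        drops.append(total)
    return drops

# 4 taps, 10 milliohms per segment, 100 mA drawn at each tap
drops = rail_ir_drop(0.010, [0.1, 0.1, 0.1, 0.1])
# first tap sees 0.4 A * 10 mOhm = 4 mV; the far end sees the worst drop, 10 mV
```

The worst drop is at the far end of the rail, which is why grid topology and where you tap in the package bumps matter so much.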

I wrote about Tempus in detail on SemiWiki when it was announced here. It is a static timing analysis (STA) tool that has been designed from the ground up to be massively parallel. Yesterday Ruben Molina of Cadence said that the sweet spot on a single server is to use 8 cores, after which you should distribute the design across multiple servers. Note that Tempus doesn’t just do the easy distribution, analyzing different corners in parallel; it can also distribute timing analysis of a single large design.

Voltus carries much the same message, but for power analysis. For over a decade, Cadence’s analysis in this area has been based on the VoltageStorm technology that came with the Simplex acquisition and was later renamed EPS. Voltus, however, is a completely new tool. I don’t know how much code it shares with Tempus, but I’m betting quite a bit, given that it has the same massively parallel value proposition and the two tools are clearly tightly integrated. Cadence claims 10X the speed of other solutions on the market and support for designs of up to one billion instances. What does it do?

  • IR drop and electromigration analysis and optimization
  • Power consumption calculation and analysis
  • Analysis of power impact on design closure, from chip to package to PCB

Also during the day were several presentations by actual users of Cadence’s signoff tools: GlobalFoundries, nVidia, TI, Conexant and LSI Logic. In particular, nVidia is one of the lead customers for Voltus and presented some of its experience.

Information on Voltus is here. The Voltus white paper is here.


More articles by Paul McLellan…


Thermal Analysis for 3D SoC Integration
by Daniel Payne on 11-21-2013 at 7:01 pm

The first time I saw a DRAM in a ceramic package running on a tester, I made the mistake of touching the metal lid with my finger, scorching it and teaching me that ICs can run extremely hot. I’ve read a lot over the past few years about 3D IC design, and my mind immediately becomes curious about how an engineer would go about simulating or estimating thermal performance before building a prototype. Last month the Global Semiconductor Association (GSA) invited Gene Matter from Docea Power to talk about:

  • How to model a 3D IC for dynamic power and thermal analysis
  • Creating compact thermal models for fast simulation and acceptable accuracy
  • Performing “what-if” analysis on the floor plan while running real software loads


Gene Matter, Docea Power

The cross-section of a typical 3D IC may contain a substrate that stacks multiple ICs, such as a processor, DRAM, RF and non-volatile memory.

During the design process you want to know how temperature in such a 3D system impacts: power, peak performance, aging, and package costs. Here’s a thermal modeling flow used by Docea Power that creates a Compact Thermal Model (CTM):

The EDA tool from Docea is called Ace Thermal Modeler (ATM):

Once you’ve created a thermal model, the next step is to define your power model and use case, then run simulation with Aceplorer to understand the temporal and spatial effects:

By modeling thermal effects at the system level an engineer can now:

  • Analyze how IP leakage is temperature-dependent
  • Explore multiple power and thermal management strategies
  • Qualify the environment capacitive effect
  • Qualify the design of minimum cooling properties
  • Trade off and explore various floor plan and proximity dependencies
  • Find and fix spatial or temporal hot spots and gradients
  • Choose an optimal thermal sensor location
  • Manage costs across: Die, package, PCB, chassis, the complete system

3D IC Example – WIOMING

Here’s a Memory-on-Logic 3D stack example:


Source: CEA LETI, Pascal Vivet
A cross-section view shows how the SoC is connected to the DRAM memory:

Eight heaters and several thermal sensors were placed around the SoC in order to characterize it accurately to within about 1 degree Celsius. A compact thermal model was created; static simulation results were generated in milliseconds, while a dynamic simulation took only seconds to complete. The difference between simulated and measured results across all scenarios showed an average error of just 4.22%:

Transient simulation results also showed acceptable correlation between simulated and measured:

Summary

It’s possible to use thermal modeling at the system level with Ace Thermal Modeler to explore and model a 3D stack with TSVs. Compact thermal models allow for quick run times with decent accuracy. The simulated results correlate well with measured silicon values, as seen in the WIOMING example where a WideIO DRAM was added on top of an SoC.

You may read the complete presentation on the GSA web site.

More Articles by Daniel Payne …..



It’s about the mobile GPU memory bandwidth per watt, folks
by Don Dingee on 11-21-2013 at 4:00 pm

There has been a lot of huffing and puffing lately about 64-bit cores making it into the Apple A7 and other mobile SoCs, and we could probably dedicate a post to that discussion. However, there are a couple other wrinkles to the Apple A7 that should be getting a lot more attention.

There are two primary causes of user frustration in multimedia applications. Continue reading “It’s about the mobile GPU memory bandwidth per watt, folks”


QCOM delivers first TSMC 20nm mobile chips!
by Daniel Nenni on 11-21-2013 at 3:00 pm

QCOM is now sampling the TSMC 20nm version of its market-dominating Gobi LTE modem. The announcement also included a new turbocharged version of its 28nm Snapdragon 800 SoC with a Krait 450 quad-core CPU and an Adreno 420 GPU. Given the comparable benchmarks between the Intel 22nm SoC and the 28nm SoCs from Apple and QCOM, the new 20nm mobile products from the top fabless semiconductor companies will be well beyond Intel’s 22nm reach, absolutely.

The question is: When will Intel have a competitive 14nm SoC? The answer will hopefully come today at the Intel Analyst conference so stay tuned to SemiWiki. I will compare the conference info with what I have heard and see how they match up. Spoiler alert: Production Intel 14nm SoCs will not arrive until 2015, believe it.

TSMC’s 20nm process technology can provide 30 percent higher speed, 1.9 times the density, or 25 percent less power than its 28nm technology. The 20nm process has demonstrated double-digit yield on a 112Mb SRAM test vehicle. The high-performance device is equipped with second-generation gate-last HKMG and third-generation silicon-germanium (SiGe) strain technology. By leveraging its 28nm experience, TSMC can further optimize the 20nm Back-End-of-Line (BEOL) technology options and deepen collaboration with customers to continue down the Moore’s Law shrink path. Technology and design innovation keep production costs in check.

The new QCOM Krait 450 quad-core SoC is the first mobile CPU capable of running at speeds of up to 2.5GHz per core, with a memory bandwidth of 25.6GB/s, which will significantly increase the speed of running apps and browsing the internet. According to QCOM it is also capable of delivering Ultra HD (4K) resolution video, images, and graphics to mobile devices and HDTVs via the new Adreno graphics engine (the Adreno 420 GPU claims a 40% graphics boost over the Snapdragon 800). QCOM also claims to have integrated hardware-accelerated image stabilization, which would be an industry first. The quad-core processors are still 32-bit, which was a bit of a disappointment for me. If anyone can push Android to 64-bit it is QCOM. As it turns out, Apple really did pull a rabbit out of the hat with its 64-bit ARM-based A7 SoC for the iPhone 5s, which I have and am thoroughly enjoying!
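The 25.6GB/s figure is consistent with simple bus arithmetic. Assuming a 128-bit-wide LPDDR3 interface running at 1600 MT/s (a plausible configuration for this class of SoC, not one spelled out in the announcement):

```python
# Peak memory bandwidth = bus width in bytes x transfer rate.
# The 128-bit / 1600 MT/s configuration is an assumption for illustration.
bus_width_bits = 128       # e.g. 4 x 32-bit LPDDR3 channels
transfer_rate = 1600e6     # transfers per second (1600 MT/s)

bandwidth_gb_s = (bus_width_bits / 8) * transfer_rate / 1e9
# (128/8) bytes x 1.6e9 transfers/s = 25.6 GB/s, matching the quoted figure
```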

“Using a smartphone or tablet powered by Snapdragon 805 processor is like having an UltraHD home theater in your pocket, with 4K video, imaging and graphics, all built for mobile,” said Murthy Renduchintala, executive vice president, Qualcomm Technologies, Inc., and co-president, QCT. “We’re delivering the mobile industry’s first truly end-to-end Ultra HD solution, and coupled with our industry leading Gobi LTE modems and RF transceivers, streaming and watching content at 4K resolution will finally be possible.”

The FinFET versions of the Snapdragon and Gobi LTE modems are expected to sample one year from now with a 20% performance boost or a 35% power savings from the silicon alone. I also expect they will have a 64-bit ARM-based architecture for greater throughput. Apple’s next A8 SoC (iPhone 6) is also TSMC 20nm, which will mark the first time Apple has silicon competitive with rival tablets and smartphones. Apple’s A7, which just came out, is old-school 28nm, and last year’s A6 was 32nm. Exciting times in the fabless semiconductor ecosystem, absolutely!

See the Qualcomm presentation HERE.

More Articles by Daniel Nenni…..



Semiconductor market could grow 15% in 2014
by Bill Jewell on 11-20-2013 at 8:00 pm

The global semiconductor market grew 4% over the first three quarters of 2013 compared to a year ago, according to World Semiconductor Trade Statistics (WSTS). Guidance for 4Q 2013 revenue change versus 3Q 2013 varies widely among key semiconductor companies. Texas Instruments (TI), Broadcom, Infineon and Renesas all expect declines ranging from 7% to 10% based on the midpoints of their guidance. Intel, Qualcomm, STMicroelectronics (ST) and Advanced Micro Devices (AMD) guide toward flat or low single-digit growth. Micron Technology did not provide specific revenue guidance, but gave estimates of DRAM and flash memory bit growth and price changes for its quarter ending in late November; based on that guidance, Semiconductor Intelligence estimates revenue growth of 30%. Samsung did not provide revenue guidance, but expects solid demand and a tight market (meaning higher prices) for both DRAM and flash memory. Based on the table below, the 4Q 2013 semiconductor market should be flat or up low single digits from 3Q 2013. Thus full-year 2013 growth should be 5% to 6%.

Key Semiconductor Company Revenue Guidance
4Q 2013 versus 3Q 2013

  Company     Low end   Midpoint   High end
  Intel           -2%         2%         5%
  Qualcomm        -3%         2%         6%
  TI             -12%        -8%        -4%
  Micron                     30%*
  ST              -4%         0%         4%
  Broadcom       -11%        -8%        -5%
  Renesas                   -10%
  Infineon        -9%        -7%        -5%
  AMD              2%         5%         8%

  *estimate based on bit growth and price guidance
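Micron’s 30% estimate illustrates the arithmetic behind revenue guidance built from bit growth and price changes: revenue change is roughly (1 + bit growth) × (1 + price change) − 1. A sketch with hypothetical numbers (Micron’s actual bit-growth and price guidance figures are not reproduced here):

```python
def revenue_growth(bit_growth, price_change):
    """Revenue change implied by bit-shipment growth and average price change."""
    return (1 + bit_growth) * (1 + price_change) - 1

# Hypothetical example: 20% more bits shipped at 8% higher average prices
g = revenue_growth(0.20, 0.08)
# 1.20 * 1.08 - 1 = 0.296, i.e. roughly 30% revenue growth
```

Note how in a tight memory market both factors work in the same direction, which is why memory revenue can swing so much faster than the rest of the industry.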

What will semiconductor market growth be in 2014? We at Semiconductor Intelligence expect growth to accelerate from 2013 to 2014. One major factor driving the acceleration is the expectation of increasing global GDP growth in 2014. The table below shows the International Monetary Fund (IMF) November 2013 forecast for GDP growth. The IMF expects world GDP growth to accelerate from 2.9% in 2013 to 3.6% in 2014. Advanced economies are forecast to grow 2.0%, up from 1.2% in 2013. The key drivers in advanced economies are the U.S., with GDP growth accelerating by one percentage point, and the Euro Area, which should move from a 0.4% decline in 2013 to 1.0% growth in 2014. Developing economies are projected to grow 5.1% in 2014, up from 4.5% in 2013. China is forecast to have slightly lower growth in 2014 than in 2013, but other developing economies such as India, Mexico, Russia, Eastern Europe and Southeast Asia are all expected to see accelerating growth in 2014.

Real GDP Annual Percent Change (IMF, November 2013)

  Region                  2013    2014
  World                    2.9     3.6
  Advanced Economies       1.2     2.0
  U.S.                     1.6     2.6
  Euro Area               -0.4     1.0
  Japan                    2.0     1.2
  Developing Economies     4.5     5.1
  China                    7.6     7.3

Many factors affect the semiconductor market, but GDP growth is a key element. The components of GDP include business investment and consumer durable goods spending – both major drivers of semiconductors. We at Semiconductor Intelligence have developed a proprietary model of semiconductor market growth based on changes in GDP. The model is illustrated below for 2003 to 2014. The model is generally accurate in predicting the acceleration or deceleration of the semiconductor market. The only exception in the last 10 years is 2012, when the model predicted slight acceleration in semiconductor market growth while the market actually declined. In six of the last ten years the model has been within a couple of percentage points of the actual market change. Based on the IMF forecast of 3.6% GDP growth in 2014, the model predicts semiconductor market growth of 12%. Of course the accuracy of the model is dependent on the accuracy of the GDP forecast.


In November 2012 Semiconductor Intelligence forecast semiconductor market growth of 9% in 2013 and 12% in 2014. In May 2013 we revised this to 6% in 2013 and 15% in 2014. We are continuing to hold to this forecast. As stated earlier, 2013 will probably finish with 5% to 6% growth. Although the model calls for 12% growth in 2014, we believe there is upside potential for GDP and semiconductor market growth.

How does our 15% growth for 2014 compare to other semiconductor market forecasts? The optimists are Objective Analysis and Future Horizons. In June Jim Handy of Objective Analysis projected 2014 growth of over 20%. Malcolm Penn of Future Horizons recently called for 25% growth. Other forecasters expect 2014 growth to be similar to 2013, ranging from 2.9% from IDC to 8% from IC Insights.


More Articles by Bill Jewell…..


The Rosetta Stone of Lithography
by Paul McLellan on 11-20-2013 at 3:14 pm

At major EDA events, CEDA (the IEEE council on EDA, I guess you already know what that bit stands for) hosts a lunch and presentation for attendees and others. This week was ICCAD and the speaker was Lars Liebmann of IBM on The Escalating Design Impact of Resolution-Challenged Lithography. Lars decided to give us a whirlwind tour of the history of recent lithography. I’ll summarize things here and talk about some of the future technologies and challenges that he described in a later blog.

Lars started by presenting what he called the Rosetta Stone of lithography, which summarizes the past challenges survived and the future challenges to come in a single slide. Almost anything you need to know about lithography as an EDA professional is on this one slide. One important thing to realize is that process names are increasingly just names; what matters is the minimum pitch allowed on a layer. For example, at 22nm the minimum pitch is 80nm; at 10nm it is 48nm.

The fundamental equation of lithography is that the resolution (always talked about as the half-pitch) is k[SUB]1[/SUB] * lambda / NA, where

  • k[SUB]1[/SUB] is the Rayleigh parameter, a measure of lithography complexity. Yield suffers if it drops below 0.65, at which point we need to do something about it (such as OPC or double patterning, but that story is yet to come)
  • NA is the numerical aperture, the sine of the largest diffracted angle captured by the lens. It is hard to scale, since lens manufacture is hard for NA>0.5; worse, the depth of field scales as NA[SUP]-2[/SUP], making planarity of the wafer more and more critical
  • lambda is the wavelength of light, which for many years has been 193nm.

The actual pitch is twice this number. So if the half-pitch is 100nm, you can have metal (or whatever) at 100nm width and 100nm space (or two numbers that are close but add up to 200nm).
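Plugging representative numbers into the Rayleigh equation shows why the optical limit falls where it does. Assuming 193nm immersion lithography with NA = 1.35 and k[SUB]1[/SUB] near a practical floor of about 0.28 (illustrative values, not figures from the talk):

```python
# Rayleigh criterion: half-pitch = k1 * wavelength / NA
def half_pitch_nm(k1, wavelength_nm, na):
    return k1 * wavelength_nm / na

# Assumed values: 193 nm immersion lithography (NA = 1.35), k1 ~ 0.28
hp = half_pitch_nm(0.28, 193.0, 1.35)
min_pitch = 2 * hp
# hp comes out near 40 nm, so the minimum single-exposure pitch is near 80 nm
```

That 80nm result is consistent with the minimum pitch quoted for the 22nm node, the end of the line for single exposure.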

In the early days of semiconductor manufacturing, before this Rosetta Stone even begins, we scaled by scaling lambda, the wavelength of the light we used. First we used G-line at 436nm, and then in 1984 went to I-line at 365nm. In 1989 we switched to KrF light sources at 248nm, and in 2001 to ArF at 193nm. We then expected to go to F[SUB]2[/SUB] at 157nm, but that never happened: it was too difficult to build effective optics and masks. And by the time we thought about Ar[SUB]2[/SUB] at 126nm, that already required full vacuum and reflective optics, so why not go all the way towards X-rays (EUV is at a 13.5nm wavelength)? So we have been stuck with 193nm light since 2001, as you can see on the third line down on the Rosetta Stone, the one that has only one entry.

The slide starts at 130nm, which was the first node where we used 193nm light. At that point we could use conventional lithography without doing anything unusual: flash the light through the reticle onto the wafer with no more than rudimentary correction on the mask. Since then we have had to scale using NA and k[SUB]1[/SUB] down to 28nm, at which point scaling NA ran into a wall, since it was impossible to manufacture the lenses, and we were left with only being able to scale k[SUB]1[/SUB].

At 90nm we needed powerful optical proximity correction (OPC), essentially turning the mask into less of a mask and more of a diffraction grating, where the light that got through interfered in just the way we wanted to give us something approaching the required pattern. We couldn’t make square corners, since OPC is a sort of low-pass filter, but we could live with rounded corners and vias that were more circular than square. OPC couldn’t correct everything, though, so from an EDA point of view we needed tools to check the design, locate hot-spots that OPC would fail to correct, and get the designer to fix them.

From 65nm to 32nm we used off-axis and asymmetric illumination. Without going into all the details, one of the inputs to the equation for the angle at which to tilt the illumination is the pitch of the patterns on the wafer. For DRAM this was not such a big issue, but for logic we had to have a lot of rules about the dominant direction on a layer, and increasingly complicated design rules, since not all pitches were allowed any more. This was also when immersion lithography was introduced, which got us down to 32nm.

To get to the next process generation, 22nm (80nm pitch), off-axis illumination and immersion lithography were no longer enough. For layers that didn’t only have patterns in one direction, we needed double exposure: one mask for the horizontal patterns and one for the vertical, though still only one photoresist step and one etch step. The rules about prohibited pitches became more complex, leading to unbelievably huge design rule decks.

80nm pitch is the least we can get out of the optical system. To go further we need double patterning (DP), what lithographers call LELE (litho-etch-litho-etch). In principle this should take us down to 40nm, but since the two masks used in double patterning are not self-aligned, we need to give up 10nm for those alignment errors, so 50nm is the smallest pitch we can get with double patterning. I have written in detail about double patterning on SemiWiki here.

There is also triple patterning, TP (called LE[SUP]3[/SUP] by the lithographers). This is not used to increase resolution (it isn’t really possible to use it that way) but rather to get better 2D resolution. It leads to some big issues in EDA, though, such as how to communicate complex structures that cannot be 3-colored.
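The coloring problem can be made concrete: treat shapes as nodes of a conflict graph, with an edge between any two shapes closer than the single-mask pitch, and ask whether the nodes can be assigned to k masks so that no edge has both ends on the same mask. A brute-force sketch (illustrative only; real decomposition tools use far smarter algorithms):

```python
from itertools import product

def is_k_colorable(n_nodes, edges, k):
    """Brute-force check whether a conflict graph can be split across k masks."""
    for colors in product(range(k), repeat=n_nodes):
        if all(colors[u] != colors[v] for u, v in edges):
            return True
    return False

# Four shapes all too close to each other (a 4-clique) defeat three masks
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
assert not is_k_colorable(4, k4, 3)

# An odd cycle of 5 shapes cannot be split across 2 masks, but 3 suffice,
# which is one reason triple patterning appears at all
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert not is_k_colorable(5, c5, 2)
assert is_k_colorable(5, c5, 3)
```

The practical EDA difficulty is less about checking colorability than about reporting an uncolorable cluster back to the designer in a way that makes the fix obvious.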

Another type of double patterning is what IBM calls sidewall image transfer and what many people call SADP, for self-aligned double patterning. Here the two separate patterns of DP are constructed in a way that removes the 10nm penalty. A mandrel is constructed using a single mask and then used to build sidewalls on each side of the mandrel. The mandrel is removed, leaving everything at the desired pitch. Another wrinkle is that it is no longer possible to build anything other than gratings with no ends; a separate cut-mask is required to divide them up. In fact this approach is also used on some critical layers even with LELE DP. If you have ever seen any 20nm layout, that is why it looks so regular: only certain pitches are allowed, and the lines have to be continuous and then cut.

Another problem is that the area we need to inspect for interactions increases. Of course, the area itself remains the same, but the number of patterns drawn into it increases, so from the point of view of someone sitting in front of a layout editor, more and more polygons need to be considered. In particular, it is no longer just the nearest neighbor but the next one over too. This causes big problems when cells are placed next to each other, since the interaction area stretches deeper into the cell. Further, vias, which used to simply be colored the same as the metal they contacted, can interact over greater distances and so need to be actively colored, leading to more complexity in the routes.

So this is where we are today. First-generation multiple patterning required only a few layers using LELE (DP), and cell-to-cell interactions could be managed through simple rules. As we go to 10nm we will have more layers using LELE, a few layers using LE[SUP]3[/SUP] (TP), and a few layers needing SADP, with lots of complex cell-to-cell interactions.

That’s enough for one blog. More next week.

The presentation and a video of the talk should be here on the CEDA website when it eventually gets published.


More articles by Paul McLellan…


Revisiting Andy Grove’s "Only the Paranoid Survive"
by Ed McKernan on 11-19-2013 at 10:00 pm

Over the course of the last fifty years there have been two significant books that delivered emotional and operational clarity on the rise and fall of high-tech companies and industries: The Innovator’s Dilemma and Only the Paranoid Survive. Amazingly, these two books were released within a year of each other (1996, 1997), at the height of Andy Grove’s tenure as CEO of Intel. Today, The Innovator’s Dilemma is more often used as a shorthand catchphrase to describe the seemingly inevitable fall of an established player in a maturing industry, whereas Grove’s phrase is seen as a rallying cry to remain vigilant against competitors’ forays into one’s market. What is most remarkable about Grove’s book is that it really provides a roadmap for companies to avoid the Innovator’s Dilemma trap, and it would be a timely read today as Intel wrestles with its future.

Finished in 1996, “Only the Paranoid Survive” describes inflection points and 10X factors that can impact a company positively or negatively, launching it into high growth at the expense of its competitors or, conversely, into a downturn from which survival is in serious doubt. In fact, if a CEO and his team are not able to capitalize on an inflection point, a business exit is more than likely. Grove uses the case of Intel’s exit from the DRAM business in the mid 1980s and the response to the Pentium bug debacle in 1994 to highlight how the company moved off an inflection point in an upward, positive way.

Intel was founded in 1968 by Robert Noyce and Gordon Moore to develop the DRAM, an integrated circuit, as a low-cost, small-footprint replacement for the core memory used in mainframe computers. Andy Grove had been an assistant of Moore’s at Fairchild and was hired on as the first employee. Quite often he is referred to as the third founder, primarily because he became the most recognizable face of Intel as he transformed the company from a commodity memory player to a dominant microprocessor supplier, expanding revenue from roughly $2.7B in 1987 to nearly $21B when he stepped down in 1997.

The company relied mainly on DRAMs during its first decade as sales exploded to $400M by 1978; with growth, however, came many competitors, including nimble startups like Mostek and well-capitalized Japanese conglomerates like NEC, Toshiba and Hitachi. The field became overcrowded, and when the dollar soared in value relative to the yen, price-cutting and dumping ensued to the point that Intel’s market share crashed to a low of 1.3% in 1984. It would have been the end for Intel, the innovative Silicon Valley startup with some of the brightest minds in the industry, were it not for the early experimental work on a new memory called EPROM and a 4-bit calculator chip called the microprocessor. All three semiconductor building blocks of the modern computer were invented by 1971, three years into the company’s existence, and yet each would reach its prime importance at a different stage.

The often-recounted story of exiting the DRAM business occurs in mid 1985, a time that Grove describes as coming after a year of wandering aimlessly. He is meeting with then-CEO Gordon Moore in his office, discussing the quandary of remaining in the DRAM business. Emotionally, he and many of Intel’s employees are attached to the DRAM as the device they rode to success, and in many ways it was critical, since it was considered the technology driver for new process technologies given its uniformity and high volume. However, each generation of DRAM density invariably had a different leader. To be profitable meant being first to market, and without that assurance only deep pockets could guarantee survival across multiple generations. The Japanese had the advantage of cheap financing and the ability to employ multiple design teams in order to increase their chance of winning the next-generation design.

As Grove looks out the window of his office at the rotating Ferris wheel of the Great America amusement park, he turns to Moore and asks, “If we get kicked out and the board brought in a new CEO, what do you think he would do?” Gordon answers without hesitation, “He would get us out of memories.” And so the two walk out the door and reenter, convinced that they must execute on the plan to get out of the memory business and concentrate on microprocessors and EPROMs, which Intel would grow to dominate over the following years.

For those readers who are not close followers of Intel, there is sometimes an assumption that DRAMs made up the majority of the revenue and that microprocessors were a nascent business. In reality, the company was saved by IBM’s selection of the 8088 processor for its PC, which launched in 1981 and shipped roughly 400 thousand units in its first year, or, in the words of Bill Lowe, VP of the Personal Systems Group, more than the installed base of Big Blue’s mainframes. Also key was an investment by IBM in December 1982 to guarantee Intel had the resources to support the company’s growth and the development of new processors.

While Intel would take three years to exit the DRAM business, Grove notes that credit had to be given to the middle managers making resourcing decisions on a day-to-day basis, such as allocating more production wafers to microprocessors than to DRAM, as the critical part of the transition process. Still, plants had to be closed, and with them came mass layoffs. Intel’s future survival and dominance would require a roadmap out of commodity products and into a sole-source technology leadership position.

Andy Grove’s remaking of Intel would continue during the next dozen years during which time he pushed AMD out of a second sourcing agreement that originally was required by IBM; he outmaneuvered the RISC processor competitors and Microsoft; subsumed all the ancillary chipset logic of the PC, sans graphics controllers; led a dramatic branding campaign that made Intel a worldwide recognizable household name; and kept the PC market split amongst many rivals with none attaining even 30% market share. All of these tactics added up to market dominance and a market capitalization of $197B, up from $4B when he took over and more than 50% higher than today.

The story of Intel’s dominance from the 386 generation until the end of the century stands in stark contrast to the missed mobile inflection point of the past ten years, and to what is likely the next one: its leading-edge process technology, which enables high-margin x86 server and PC processors and could be shared with others in a foundry arrangement. Recounting the history of Intel allows one to view not only the inflection points but the mistakes and successes along the way. Survival, in Grove’s book, was more than paranoia; it was reinforcing a market trend, developing contingency plans, listening to the remote field-sales Cassandras, and proactively developing and testing for new markets. Grove was not mistake-free, and some initiatives started under his watch were not snuffed out in time to prevent damage to the company. All this is what makes looking at Intel uniquely interesting.


More articles by Ed McKernan…


IoT begets silicon, interoperability, and standards

IoT begets silicon, interoperability, and standards
by Don Dingee on 11-19-2013 at 5:00 pm

The Internet of Things is on every technology mind these days, but what does it mean for the EDA community? Dennis Brophy of Mentor Graphics says the billions of things we are hearing about will not happen unless we find a way to build a lot more things, efficient things, and connected things. He has more thoughts in our recent interview.
Continue reading “IoT begets silicon, interoperability, and standards”


Interface Protocols, USB3, PCI Express, MIPI, DDRn… the winners and losers in 2013

Interface Protocols, USB3, PCI Express, MIPI, DDRn… the winners and losers in 2013
by Eric Esteve on 11-19-2013 at 11:57 am

How best to forecast adoption of a specific protocol? One option is to look at IP sales, which give you a good idea of the number of SoCs or ICs offering the feature on the market in the next 12 months. Then again, if you wait until IP sales have reached their maximum, it will be too late, so you have to monitor the sales dynamic while volume is still low to make an efficient analysis, which can help you take the right decision just a little ahead of your competitors and benefit from a time-to-market advantage. That’s why we will mention the clear winners, demonstrating high market penetration (and becoming “de facto” standards in certain market segments), and also focus on the emerging protocols demonstrating fast-growing penetration.

The above table is extracted from the “Interface IP Survey” version 5, just completed. In short, you will discover in this survey:

  • IP vendor ranking, protocol by protocol, by IP license revenue, for USB, PCI Express, HDMI, SATA, MIPI, DisplayPort, Ethernet, and DDRn memory controllers
  • Competitive analysis by protocol
  • Controller and PHY IP license price (by technology node for the PHY)
  • Adoption rate and market trends, by protocol

In fact, IPnest is the only analyst offering such granularity, and this approach has allowed it to build a large customer base, including IP vendors, ASIC design houses, foundries, fabless companies, and IDMs. Ranking the numerous IP vendors by protocol is very useful, but not enough: IPnest has added market intelligence, not only raw data!

The winners in 2013

HDMI is again a very successful protocol this year, both in terms of market penetration in the consumer/HDTV segment and in terms of spread into various segments such as PCs, wireless handsets (smartphones), set-top boxes, DVD players and recorders, digital camcorders, digital still cameras, and even automotive (likely thanks to platforms like TI’s OMAP, as the chipmaker has to enter new segments after giving up on wireless). The analyst consensus is that almost 3 billion HDMI ports have shipped since the protocol’s inception. DisplayPort has become complementary to HDMI; its adoption was confirmed in 2012, after strong growth in 2011. The protocol is well tailored for interfacing a PC with a screen, so that is naturally where adoption is high. We clearly rank DisplayPort in the winner list.

According to Silicon Image, adoption of Mobile High-Definition Link (MHL) is growing very fast. MHL provides the same bandwidth capability (and compatibility) as HDMI 1.4, but with a micro-USB (5-pin) connector instead of the traditional HDMI connector. Just keep in mind that MHL will be used primarily in mobile electronic systems: smartphones, media tablets, and probably notebook and ultrabook PCs. That makes over one BILLION potential devices integrating MHL in 2013…

The semiconductor and electronics industry had some concerns with the HDMI protocol in the past: companies had to pay royalties to HDMI LLC, but they could not influence the specification. Until late 2011, HDMI LLC was a closed standards body that consisted of seven founders and over 1,000 adopters, and the HDMI specification was architected by the seven founders behind closed doors. In October 2011, the HDMI founders established a nonprofit corporation called the HDMI Forum, with the purpose of fostering broad industry participation in the development of future versions of the HDMI specification.

An efficient base protocol, plus two new releases, one addressing the connector form factor (MHL) and the second extending HDMI to higher definition with 2.0, sold to customers who are happier today than in the past: those are good reasons for HDMI IP sales to have jumped in 2012 and to grow again in 2013.

MIPI is a set of interface specifications initially tailored for wireless phone systems, defining almost every kind of chip-to-chip interface: camera to application processor (AP) with CSI, display to AP with DSI, baseband to AP with the Low Latency Interface (which allows an external DRAM to be shared), main SoC to RF chipset with DigRF, and another dozen specifications. I will come back to MIPI in a blog very soon, explaining why MIPI IP sales quadrupled from 2010 to 2012, including a 60% increase in 2012.

To give a complete picture of MIPI, it is important to note that the MIPI Alliance has consolidated MIPI’s position within the interface ecosystem by concluding very promising agreements with three standards organizations:

  • JEDEC: definition of Universal Flash Storage (UFS), to be used in conjunction with MIPI M-PHY within a mobile system
  • USB-IF: specification of SuperSpeed USB Inter-Chip (SSIC), where the USB 3.0 PHY can be replaced by MIPI M-PHY, offering high-bandwidth and low-power capabilities for chip-to-chip communication in a mobile system
  • PCI-SIG: definition of “Mobile Express,” delivering an adaptation of the PCI Express® (PCIe®) architecture to operate over the MIPI M-PHY® physical layer technology

We will use these agreements to introduce the next two Interface protocol winners: USB 3.0 and PCI Express.

SuperSpeed USB IP started selling well during 2012, with last year’s design-start count equal to the sum of design starts during 2009, 2010, and 2011. In the meantime, many IP vendors have given up (PLDA and Snowbush, to name a few), and the result is that there is a clear, undisputed winner in the USB 3.0 market: Synopsys! But Cadence made two acquisitions during 2013, Cosmic Circuits and Evatronix, the first bringing USB 3.0 PHY IP and the second a USB 2.0 integrated solution plus USB 3.0 controller IP, indicating that the EDA & IP vendor has not given up on USB. Fair competition is always good for the market! Moreover, USB-IF is launching USB 3.1, offering a doubled 10 Gbps data rate. A solid roadmap is always a good indication that a protocol will live for a long time; it could also be a good way to change the game, or the IP vendor landscape.

PCI Express penetration started in 2005 and has never stopped since. The technology has been adopted in many, many market segments, with the notable exceptions of consumer electronics and mobile wireless. In fact, PCIe’s success will go even further: at the end of 2011 the SATA-IO organization decided to offer “SATA Express,” the Non-Volatile Memory storage application interface is supported by NVM Express, and in 2012 the MIPI Alliance defined “Mobile Express.”

If we take a look at cumulative PCI Express IP revenues since the protocol’s inception, we realize that the technology has generated more than $300M of IP license business. We can mention four reasons why PCIe IP sales should continue to grow:

  • PCIe gen-3 (8 Gbps) is selling well, at a higher price than gen-2
  • PCIe gen-4 (16 Gbps) is probably in the pipeline to be finalized in early 2015; expect fewer IP sales than gen-3, BUT at much higher pricing (PHY IP over $1M)
  • Mobile Express IP sales are net growth, as the standard is new
  • SATA Express as well; NVMe is more questionable

The last protocol is seeing wide adoption, or more precisely a growing outsourcing rate and simply the fastest-growing and largest IP sales: DDRn. A DDRn controller is a means of interconnecting a SoC with memory, using a digital part (the controller) and a physical media access part (the PHY), so it is built like every other modern high-speed protocol. We have shown in the “Interface IP Survey” that, even as ASIC design starts decline year after year, the SoC proportion of those design starts keeps growing. Because there are more SoC design starts, a SoC being defined as a chip integrating one or more processors (CPU, GPU, DSP, or microcontroller), the net number of DDRn controllers is growing at the same rate. And because designing a DDRn controller from scratch is becoming harder to manage as DRAM frequencies increase, forcing a move from “soft PHY” to hardened PHY for example, the move to external sourcing of DDRn controller IP is growing faster than for any other interface IP. This may look like theory, but we can see actual sales of DDRn controller IP growing in line with it! Just look at these results from IPnest for 2008-2012. The DDRn IP segment is growing strongly, and the leadership is split between Synopsys (again), Cadence (thanks to the Denali acquisition), and ARM.

Misc… but not least: Network-on-Chip

Network-on-Chip (NoC) is not a protocol, nor an interface, but rather an interconnect function, buried inside a SoC, to connect, manage, and monitor the multiple IP blocks. As such, a NoC will be, by definition, connected to all the interface functions, from the DDRn memory controller to USB, PCIe, UFS, and so on. What we have seen in 2011-2013 is the strong penetration of NoC IP into various market segments (wireless, consumer electronics, automotive, and more), even though the NoC was only at the concept stage in the mid-2000s. This trend has been so effective that a NoC IP vendor like Arteris grew its upfront license revenue incredibly between 2010 and 2012. But trees don’t grow to the sky, as everybody knows… sometimes a big customer simply buys the vendor first, as Qualcomm did with Arteris for a supposed quarter of a billion dollars. SemiWiki told you last year how good Arteris was; thanks to Qualcomm for confirming our views.

The losers in 2013
Just take a look at the same article written last year, as there is no newcomer on this list!

Eric Esteve from IPNEST –

Table of Content for “Interface IP Survey 2008-2012 – Forecast 2013-2017” available here.

More Articles by Eric Esteve …..


The types of answers IPnest customers find in the “Interface IP Survey” include:

  • 2013-2017 forecast, by protocol, for USB, PCIe, SATA, HDMI, DDRn, MIPI, Ethernet, and DisplayPort, based on a bottom-up approach, by design start and by application
  • License price by type for the controller (host, device, or dual mode)
  • License price by technology node for the PHY
  • License price evolution: technology node shift for the PHY, controller pricing by protocol generation
  • By protocol, competitive analysis of the various IP vendors: when you buy an expensive and complex IP, price is important, but other issues count as well, such as:

    • Will the IP vendor stay in the market and keep developing the new protocol generations?
    • Is the PHY IP vendor linked to a single ASIC technology provider, or does it support various foundries?
    • Is one IP vendor “ultra-dominant” in this segment, such that my chance of success is weak if I plan to enter this protocol market?


Meeting the Challenges of Designing Internet of Things SoCs with the Right Design Flow and IP

Meeting the Challenges of Designing Internet of Things SoCs with the Right Design Flow and IP
by Daniel Nenni on 11-18-2013 at 7:00 pm

Connecting “things” to the Internet and enabling sensing and remote control, data gathering, transmission, and analysis improves many areas: safety and quality of life, healthcare, manufacturing and service delivery, energy efficiency, and the environment. The concept of the Internet of Things (IoT) is quickly becoming a reality. At this year’s IDC Smart TECHnology Conference, attendees learned that IoT connected devices could number 50 billion by 2020 and the data generated by these devices could reach 50 trillion gigabytes. Clearly, there is significant opportunity for system and semiconductor companies developing the connected technologies that are fueling this space.

A typical IoT node integrates one or multiple sensors, analog front-end (AFE) modules, micro-electro-mechanical systems (MEMS), analog-to-digital converters (ADC), communication interfaces, wireless receivers/transmitters, a processor, and memory. Therefore, the system on chip (SoC) embodying the IoT node function is a microcontroller integrated with analog peripherals, creating an inherently mixed-signal design. To design SoCs for IoT applications in a competitive landscape where differentiation in features and price is critical, designers must address some key challenges, including:

  • Integration of analog and digital functions
  • Software-hardware verification
  • Power consumption

Low Power: How Low Can You Go?
Power consumption is one of the most critical considerations for IoT applications because the devices typically operate on batteries for many years, ideally recharging by harvesting energy from the environment. To minimize power consumption, designers choose power-efficient processors, memory, and analog peripherals, and optimize the system so that only the necessary parts operate at a given time while the rest of the system remains shut down. For example, consider a device that senses pressure: if there are no changes, only the peripheral monitoring the sensor is powered on until a pressure change is detected, waking the rest of the system to process the information and send it to the host. Another example is a smart meter. Most of the time, this type of device will be in standby mode, waking up every so often to collect power usage data and sending this data, perhaps once daily, to the power company. Some parts of the design are off, others are on. There might be about a dozen different modes of operation within the system, and all of them need to be verified.
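The smart-meter arithmetic above is easy to sketch: average current is simply the duty-cycle-weighted sum of each mode's current, and battery life follows directly. The mode currents, durations, and battery capacity below are invented illustration values, not figures from any real design.

```python
modes = {
    # mode: (current_mA, seconds_per_day) -- hypothetical values
    "standby":  (0.005, 86400 - 48 - 30),  # deep sleep almost all day
    "measure":  (2.0,   48),               # wake hourly for ~2 s of sensing
    "transmit": (25.0,  30),               # one ~30 s radio report per day
}

seconds_per_day = sum(t for _, t in modes.values())
assert seconds_per_day == 86400  # sanity check: the modes cover a full day

# Charge drawn per day (mA*s), then the day-long average current.
charge_mas = sum(i * t for i, t in modes.values())
avg_current_ma = charge_mas / seconds_per_day

battery_mah = 2400  # hypothetical battery capacity
life_days = battery_mah / avg_current_ma / 24

print(f"average current: {avg_current_ma * 1000:.1f} uA")
print(f"estimated battery life: {life_days / 365:.1f} years")
```

Even with these made-up numbers the point is clear: the rare high-current modes, not the standby floor, dominate the average, which is why mode-by-mode power verification matters.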

To optimize power consumption, designers use many different techniques, including multiple supply voltages, power shutoff with or without state retention, adaptive and dynamic frequency scaling, and body biasing. In a pure digital design, implementation and verification of these low-power techniques are highly automated in a top-down methodology following common power specifications.

Analog content in IoT devices presents more challenges, since it is usually implemented bottom-up without explicit low-power specifications, leaving transistor-level simulation as the only verification option. Cadence has automated mixed-signal simulation using the Common Power Format (CPF) for specifying behavior at crossings between analog and digital domains in the case of power-domain changes and power shutoffs. Furthermore, Cadence® Virtuoso® Schematic Editor is able to capture power intent for a custom circuit and export it in CPF format for static low-power verification. The static method is much faster than simulation for discovering common low-power errors, like missing level shifters or isolation cells.
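To illustrate what such a static check does, here is a minimal sketch, in Python rather than CPF, that walks power-domain crossings and flags missing level shifters and isolation cells. The domains, nets, and rules are invented for illustration; a real CPF-driven tool works from the actual power intent and netlist.

```python
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    voltage: float      # supply voltage in volts
    switchable: bool    # True if the domain can be powered off

@dataclass
class Crossing:
    net: str
    src: Domain
    dst: Domain
    has_level_shifter: bool = False
    has_isolation: bool = False

def check_crossings(crossings):
    """Statically flag common low-power errors on domain crossings."""
    errors = []
    for c in crossings:
        # Different supply voltages require a level shifter on the crossing.
        if c.src.voltage != c.dst.voltage and not c.has_level_shifter:
            errors.append(f"{c.net}: missing level shifter "
                          f"({c.src.voltage}V -> {c.dst.voltage}V)")
        # A driver in a switchable domain must be isolated so it cannot
        # float the receiving side when its domain is shut off.
        if c.src.switchable and not c.has_isolation:
            errors.append(f"{c.net}: missing isolation cell "
                          f"(driver domain {c.src.name} is switchable)")
    return errors

aon = Domain("AON", 1.2, switchable=False)   # always-on digital domain
afe = Domain("AFE", 0.9, switchable=True)    # shut-off analog front-end

report = check_crossings([
    Crossing("adc_data", src=afe, dst=aon),                          # two errors
    Crossing("cfg_wr",   src=aon, dst=afe, has_level_shifter=True),  # clean
])
for err in report:
    print(err)
```

As the article notes, this kind of rule evaluation needs no simulation at all, which is why it finds missing cells so much faster than running vectors through the design.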

Hardware-Software Verification
Software plays a crucial role in IoT devices as sensor controls, data processing, and communication protocols are functions often implemented in software. Therefore, system verification must include both software and hardware. To reduce verification time, it is important to start software and hardware development and verification in parallel. For example, instead of waiting for silicon, software development and debugging should start earlier using a virtual prototyping methodology. Cadence Virtual System Platform provides the capability to create virtual models and integrate them into a virtual system prototype for early system verification, software development, and debugging.

When it comes to systems including analog, Cadence offers some unique capabilities. Incisive® Enterprise Simulator is capable of simulating an entire system, including register-transfer level (RTL), for a processor with a compiled instruction set, digital block in RTL, and analog modeled using real number models. This enables hardware and software engineers to start collaborating sooner on developing software and hardware concurrently, instead of sequentially.

High Level of Integration
To ease the development process and shorten the design cycle for IoT devices, designers re-use intellectual property (IP) blocks for a variety of functions. They either design these IP blocks in house, or acquire them from outside vendors, so they can focus on a few differentiating blocks and on integration. Getting the SoC integrated quickly and cost-effectively is the key to success.

Integrating analog IP requires special care. To verify that the system functions properly in all possible scenarios, designers use simulation. Simulating analog parts at the transistor level, although necessary for some aspects of performance verification, is not the most efficient way to incorporate analog into SoC functional verification. Cadence has developed a methodology based on very efficient real number models (RNM) for abstracting analog at a higher level and for SoC verification without a major performance penalty. Automated model generation and validation capabilities in the Virtuoso platform assist designers in overcoming traditional modeling challenges and in taking advantage of simulation using RNM, supported in Verilog-AMS or the recently standardized SystemVerilog (IEEE 1800) extensions.

Using RNM, designers can validate functionality of the design in many different scenarios more thoroughly and much faster, and leave only specific performance verification to transistor-level simulation.
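The core idea of a real number model, representing an analog block as a function over discrete-time real values instead of transistor-level waveforms, can be sketched in plain Python. This hypothetical ideal 8-bit ADC/DAC pair only illustrates the abstraction; it is not Cadence's actual RNM flow or any shipped model.

```python
def adc8(vin, vref=1.0):
    """Ideal 8-bit ADC model: map a real input voltage to a digital code."""
    code = int(vin / vref * 256)       # quantize to 1/256 of full scale
    return max(0, min(255, code))      # clamp to the 8-bit output range

def dac8(code, vref=1.0):
    """Matching ideal 8-bit DAC model, useful for closing a checking loop."""
    return code / 256 * vref

# Functional check: a full-scale ramp through ADC then DAC stays within 1 LSB.
lsb = 1.0 / 256
for step in range(256):
    v = step * lsb
    assert abs(dac8(adc8(v)) - v) < lsb
```

A functional model like this evaluates in nanoseconds per sample, which is the performance gap the article describes between RNM-based SoC verification and transistor-level simulation; detailed performance metrics (noise, linearity) still need the transistor-level view.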

Once an IoT design is verified, it is important to realize it in silicon, productively. To ensure design convergence throughout the physical implementation process, analog and digital designers must closely collaborate on deriving an optimal floorplan, full-chip integration, and post-layout performance and physical signoff. Cadence integrated its leading Virtuoso analog and Encounter® digital platforms on the industry-standard OpenAccess database to provide a unified flow for mixed-signal designs. The flow operates on the common database for analog and digital that requires no data translation and enables easier iteration between analog and digital designers in optimizing the floorplan, implementing engineering change orders (ECOs), and performing full-chip integration and signoff.

Fig 1. Cadence flow for ARM® Cortex™-M0 embedded IoT designs

Summary
The modern world will continue to get more connected, and the electronic products that make this possible, smarter. This creates not only more challenges but more opportunities for design engineers creating the complex SoCs that power these smart, connected products. Processors, analog components, IP blocks, tools, and methodologies all play important roles in addressing power, integration, and price challenges. With the right design solutions, engineers can deliver differentiated products that support what some experts say is a key enabler of the fourth industrial revolution: the Internet of Things.

By Mladen Nizic, Engineering Director, Mixed-Signal Solutions, Cadence
