
The Innovator’s Dilemma Dagger Aimed at AMD and nVidia’s Heart
by Ed McKernan on 01-13-2012 at 1:42 pm

There is one semiconductor company that for the last 3 years has outperformed ARM and more than doubled in stock price relative to Apple. They are everywhere but barely known to most. The success of this company in the coming year, though, could result in the leveling of AMD and nVidia as they try to adjust to the economics of the mobile tsunami, which means the commoditization of application processors. The company is Imagination Technologies, whose graphics IP is in many mobile processors, including Apple’s ARM family and Intel’s latest Atom processors.

In his book The Innovator’s Dilemma, Clayton Christensen described the difficulty and reluctance companies have in entering new markets that effectively cannibalize existing high-margin businesses. One of the examples Christensen focuses on is the disk drive industry, in which every few years a new leader would emerge to displace the previous one, which had been focused on maximizing its market share and profits instead of forging ahead with an eventual replacement. One of the key drivers of each replacement was a smaller form factor device (higher density was always understood to be a requirement).

What happens, however, to Christensen’s model when, in the midst of a massive increase in graphics capability, there is a clash with an even greater force called personal mobility (or ultra-mobility)? The shift from desktop to notebook happened over many years and was considerably more gradual than what we have seen in just the last 4 years, when the Internet was placed in the palm of one’s hand with the smartphone. The first order of business for Apple, Samsung, HTC and others has been to shoehorn all the electronics into an area that has no tolerance for excessive heat. Something had to give.

Open any souped-up desktop PC with the latest AMD or nVidia graphics card and you realize that the graphics cooling infrastructure has overshot that of the processor; it is, in effect, a supercomputer. The massive R&D budgets employed by AMD and nVidia are intended to win over the gamers and then, over a couple of Moore’s Law generations, trickle down to the notebook, tablet and then smartphone.

In typical Innovator’s Dilemma fashion, Imagination Technologies has come from the ground up to challenge AMD and nVidia from the rear, in an area where both are trying to catch up. This will be difficult for them because of the head start that Imagination Technologies has had in licensing its technology to Apple, Intel, Qualcomm, TI and Samsung. Indeed, ARM feels threatened. Against this array of competitors, nVidia sits alone. Imagination Technologies has seen revenue more than double over the past three years, and its operating profit margins exceed those of AMD and nVidia by a wide margin. You could say that the Innovator’s Dilemma formula has been extended to take into account how an IP business model is superior to a fabless business model.

Intel’s push into the very slim ultrabook form factor is already reducing nVidia’s and AMD’s share in the PC space. With Imagination Technologies licensing its graphics technology to the fab players (Intel, Samsung and, yes, Apple, which I consider a virtual fab), there is a squeeze on nVidia and AMD from above and below. All of this was driven by a major form factor shrinkage in PCs and smartphones that was unforeseen just a few years ago but is dramatically reshaping the industry.

For AMD to survive, I believe they have to become an IP design house for Google, Samsung, Qualcomm, Amazon, HTC or another major player. Pure fabless, with no shared investment, is no longer a model that survives against the Fab Titans: Intel and Samsung. Companies must move to one side or the other: IP house or fab focused. To make a play on Jerry Sanders’ famous quote: Real Men Have Fabs, or Real Men Live in IP Houses.

I find it interesting that, in all of this transformation, Intel has decided that it needs Imagination Technologies for its low-end Atom. It is another sign that Paul Otellini believes Intel’s future value is really based on process technology and not chip architectures. Intel has never been able to keep up with nVidia on graphics, but it far outperforms TSMC in process development. Imagination Technologies is able to give nVidia a run for its money in the graphics space and, as a result, has outperformed it financially. As a comparison, over the past three years nVidia’s revenue has been flat, and it is down since 2007 – perhaps a sign that the cliff is near.

FULL DISCLOSURE: I am Long AAPL, INTC, QCOM, ALTR


Needham growth conference
by Paul McLellan on 01-13-2012 at 6:00 am

One of the fun things when a company gets big but is still private, like Atrenta, is that you start to get invited to events like the Needham Growth Conference that took place earlier this week in New York. When I ran Compass Design Automation, which at the time was about $55M in revenue, I remember going to a couple of these events. At one level this seems like a pointless exercise since nobody can buy the stock. But there are actually two reasons that analysts should be interested. Firstly, when a company gets big enough, it can have an effect on the results of the other companies in the industry that are public. And secondly, when the company is big enough it starts to be plausible that it might have an IPO in the future, and an analyst who has a good understanding of the industry should not be hearing about it for the first time on the roadshow.

So this week Bert Clement, the CFO of Atrenta, was at the Needham conference for his 15 minutes of fame (plus 5 more for questions). Of course the audience is primarily financial types, so the focus is not so much on Atrenta’s technology. Just getting the audience to understand that you are in EDA, and which part you serve, is enough of a challenge.

So what did Bert say? Firstly, that Atrenta is focused on SoC Realization, where it is really the only company today, and SpyGlass is pretty much the standard. They have 170 customers, including 19 of the top 20 semiconductor companies. They have had eight consecutive years of revenue growth, and are profitable and growing. They should do about $45M this year, and margins are growing over time. So they are one of the largest and healthiest private EDA companies. They have over 300 employees, with over 200 in R&D.

SoC Realization actually occupies an interesting niche in the spectrum of EDA areas. Below SoC Realization is classic EDA, the tools to build the actual SoC. This has single-digit growth and is experiencing consolidation of suppliers (Synopsys/Magma being the most significant). Above SoC Realization is system design. It has double-digit growth, but the market is very fragmented and has a low TAM as a result. In the middle, SoC Realization has double-digit growth and an expanding supplier base of IP and IP tools, and is fuelled by the need for consumer products that incorporate a lot of IP to build very complex SoCs (think smartphones and tablets).


EDAC reports Q3
by Paul McLellan on 01-12-2012 at 7:49 pm

EDAC (the EDA Consortium) market statistics service announced the data for Q3 of 2011. Revenue increased 18.1% (versus Q3 2010) to $1,543.9 million. Sequentially (versus Q2), revenue increased 7.4%. Annualized, that puts EDA at over $6B for, I believe, the first time ever. Wally Rhines, who is EDAC chair (and CEO of Mentor), commented that “growth was exceptionally robust across the board, in every product category and every region.”

Breaking it down:

  • CAE revenue was $566.7 million (10.5% up on Q3 2010)
  • IC physical design and verification was $338.3 million (16% up on 2010)
  • PCB and MCM was $140.3 million (up 11.6% on 2010)
  • Semiconductor intellectual property, or what we usually just call IP, was $510 million (up a huge 37.4% from last year)
  • Services was $88.7 million (up 13.1% on 2010)

By region the numbers were all up too:

  • North America purchased $706.7M of products and services (up 22.4% on 2010)
  • Europe, Middle East and Africa (EMEA) was $257.9 million (up 14.9% on 2010)
  • Japan was $256.9 million (up 11.1% on 2010)
  • APAC was $322.4 million (up 17.6% on 2010)
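
As a quick sanity check, the four regional figures sum exactly to the reported quarterly total, and multiplying that quarter by four gives the annualized run rate of just over $6B mentioned above. Here is a minimal arithmetic sketch in Python (purely illustrative; the figures are copied from the bullets above):

    # Q3 2011 EDA revenue by region, in $ millions (from the bullets above)
    regions = {
        "North America": 706.7,
        "EMEA": 257.9,
        "Japan": 256.9,
        "APAC": 322.4,
    }

    total = sum(regions.values())
    print(f"Quarterly total: ${total:,.1f}M")                # $1,543.9M, matching the report
    print(f"Annualized run rate: ${total * 4 / 1000:.2f}B")  # ~$6.18B, i.e. over $6B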

Historically Q4 is the biggest quarter, with year-end budgets available and salespeople’s quota plans going into overdrive. Cadence is first to report, since its financial year ends with the calendar year; Synopsys, Mentor and Magma all have offset fiscal years.


The EDAC Market Statistics Service page is here.


Advanced Memory Cell Characterization with Calibre xACT 3D
by SStalnaker on 01-12-2012 at 7:18 pm

Advanced process technologies for manufacturing computer chips enable more functionality, higher performance, and lower power through smaller sizes. Memory bits on a chip are predicted to double every two years to keep up with the demand for increased performance.

To meet these new requirements for performance and power, memory designers must increase bit density while satisfying exacting specifications for fast data transfer and low power consumption. Unfortunately, higher density increases the interactions among interconnects and devices, making it harder to ensure that memories will meet all specifications and be manufacturable with high yield. Ultimately, this means that more accurate characterization than ever before is required at every step of memory design.

Traditional extraction methods used for memory designs have proven unable to address these challenges, either because they are too slow, or are not accurate enough, or both. Memory designers need tools that can help them analyze parasitic issues accurately and quickly at every stage of the physical design cycle, from basic building blocks to the full chip.

A fast field solver, such as Calibre xACT 3D, can be used to apply boundary conditions on a bit cell (Figure 1). By specifying a closed boundary for the cell, the designer can improve parasitic extraction and simulation accuracy, as well as performance, for a symmetric design. Using boundary conditions, the bit cell geometries are effectively modeled as a reflected or periodically repeated pattern on all sides of the boundary, at the same distance. This allows the designer to extract a single bit cell accurately without having to construct an array.
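
To make the boundary-condition idea concrete, here is a minimal conceptual sketch in Python. It is not Calibre xACT 3D’s interface (the white paper does not describe one); the Rect class, the dimensions and the mirroring helper are all hypothetical. The point is simply that reflecting a cell’s geometry across its boundary lets a solver treat a lone bit cell as if it were surrounded by identical neighbors, so no explicit array has to be drawn.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        # One conductor shape inside the bit cell (coordinates in microns, made up)
        x0: float
        y0: float
        x1: float
        y1: float

    def mirror_x(shape: Rect, boundary_x: float) -> Rect:
        """Reflect a shape across a vertical cell boundary, which is conceptually
        what a reflective boundary condition does for every neighboring cell."""
        return Rect(2 * boundary_x - shape.x1, shape.y0,
                    2 * boundary_x - shape.x0, shape.y1)

    cell_shape = Rect(0.00, 0.00, 0.03, 0.10)   # a single (hypothetical) bit-cell shape
    right_boundary = 0.05                       # the cell's right-hand boundary

    # The "virtual neighbor" the solver effectively accounts for, even though
    # the designer only ever drew one cell
    print(mirror_x(cell_shape, right_boundary))  # roughly Rect(x0=0.07, y0=0.0, x1=0.1, y1=0.1)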

Figure 1: Application of boundary conditions on a cell

This modeling technique enables designers to radically speed up their characterization process and realize a design that performs to their specification. For example, using Calibre xACT 3D, we extracted a bit cell in 4 seconds, whereas a popular reference-level field solver required 2.15 hours. The total capacitance of the nets extracted from the bit cell compared very closely to the reference results.
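
For a sense of scale, the quoted runtimes work out to roughly a 1,900x speedup, since 2.15 hours is 7,740 seconds. A trivial check:

    hours = 2.15
    speedup = hours * 3600 / 4   # reference solver runtime divided by xACT 3D runtime
    print(round(speedup))        # 1935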

Using fast field solver technology like Calibre xACT 3D at all stages of memory design, from bit cell design to full chip sign-off, ensures a robust design that will work to specification when it is manufactured.

To read the complete white paper, click here.

Leave a comment or contact Claudia Relyea if you would like to discuss how Calibre xACT 3D can help your company ensure the successful and timely development of high-performance, low-power memory designs at advanced nodes.


Memory Controller IP: the battlefield where Cadence and Synopsys are really fighting face to face. Today let’s have a look at Cadence’s strategy.
by Eric Esteve on 01-12-2012 at 9:45 am

Last year I shared with you some strategic information released by Cadence in April about their IP strategy, more specifically about the launch of their DDR4 controller IP, and tried to understand Cadence’s strategy for interface IP in general (USB, PCIe, SATA, DDRn, HDMI, MIPI…) and how Cadence is positioned with respect to their closest and more successful competitor in this field, Synopsys.


Speeding SoC timing closure
by Paul McLellan on 01-12-2012 at 1:42 am

As chips have become larger, one of the more challenging steps is full-chip signoff. Lots of other steps in the design process can work on just a part of the problem, but by definition full-chip signoff has to work on the full chip. It is not just that chips have gotten larger; the number of corners that need to be validated has also exploded. And, of course, signoff is the last step before tapeout, so it sits on the critical path under the most intense schedule pressure.

Over the last year or so Magma has introduced a suite of tools to address these issues. The first tool is the QCP extractor. You can’t have accurate timing without accurate parasitic data. The next tool is Tekton for delay calculation and static timing analysis. And thirdly there is Quartz DRC/LVS for physical verification.

These tools are multi-threaded, so they scale to very large designs and can take advantage of compute farms. A further optimization is multi-mode, multi-corner analysis and extraction, which allows a single server to concurrently analyze many scenarios and thus reduces the time and resources required. Magma’s place and route is now also built on top of these same basic extraction and analysis engines, removing the correlation problems that can arise when the place and route system uses an approximation that the subsequent verification flags as incorrect.
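
To illustrate why concurrent multi-mode, multi-corner analysis saves time, here is a conceptual sketch in Python only, not Magma’s implementation; the mode and corner names and the analyze() stub are hypothetical. Instead of running one timing analysis per corner serially, all mode/corner scenarios are farmed out to worker processes on the same server, so the shared setup work is not repeated once per scenario.

    from concurrent.futures import ProcessPoolExecutor

    # Hypothetical sign-off scenarios: every (mode, corner) pair must be clean
    scenarios = [(mode, corner)
                 for mode in ("functional", "test")
                 for corner in ("ss_0.9V_125C", "tt_1.0V_25C", "ff_1.1V_m40C")]

    def analyze(scenario):
        """Stand-in for one delay-calculation + STA run; a real flow would also
        share the netlist and parasitics across scenarios instead of reloading them."""
        mode, corner = scenario
        # ... extraction, delay calculation and timing analysis for this scenario ...
        return mode, corner, "timing met"

    if __name__ == "__main__":
        # Analyze all scenarios concurrently on one machine instead of one at a time
        with ProcessPoolExecutor() as pool:
            for mode, corner, status in pool.map(analyze, scenarios):
                print(f"{mode}/{corner}: {status}")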

There is a new webinar that explains how the sign-off technologies enhance the overall flow and are integrated together into a complete sign-off solution.


Medfield: ARM twisting
by Paul McLellan on 01-11-2012 at 2:53 pm

One of the most significant announcements at the consumer electronics show (CES) this week was Intel’s Medfield, an Atom-based smartphone SoC. The SoC itself is unremarkable, perhaps a little better than ARM Cortex-based SoCs in some areas, worse in others. The reason it is significant is that Motorola (soon to be Google, don’t forget) announced a multi-year partnership with the first products expected this summer, and Lenovo actually demoed a smartphone containing the chip.

I used to think that Intel had very little chance in the mobile marketplace because ARM was so entrenched and there just wasn’t any good reason to switch. Plus, Intel’s big weakness is that they are hopeless at software. They previously tried to get into the communications business with an XScale (ARM) strategy but gave up after investing over a billion dollars without really getting any customers. Last year they bought Infineon’s wireless business (also ARM-based), but they promptly lost their flagship customer, Apple, to Qualcomm. They also had an unsuccessful Atom-based phone SoC, Moorestown, that went nowhere.

But Android has leveled the playing field, so Intel doesn’t need to be good at software development. There is little lock-in of Android to ARM-based systems, and as more and more of the software sits further and further from the hardware, the details of the hardware matter less and less to the software developer. With a little care, an Android app should run on any Android phone without really even knowing what the processor is (Android apps are mostly written in Java and compiled to virtual machine bytecode, not to the underlying processor’s assembly, in any case).

The important aspect of the announcement is not that Intel is going to seriously impact ARM-based phones in the short term. It is not. It is simply that Intel is seriously in the game. And once it is seriously in the game, it will be able to leverage its lead in process technology, which will soon put it about two process generations ahead of TSMC (or anyone else, for that matter). Even if there are some inherent weaknesses in the Atom architecture versus ARM Cortex, two process generations is simply too big a chasm to get across, and a TSMC/ARM SoC will be inferior to an Intel/Atom SoC.

I wouldn’t be the least bit surprised if Intel has been making some trips across the valley to a famous Cupertino-based smartphone company.


Imera Virtual Fabric
by Paul McLellan on 01-10-2012 at 6:00 am

Virtual fabric sounds like something that would be good for making the emperor’s new clothes. I talked today to Les Spruiell of Imera to find out what it really is.

Anyone who has worked as either a designer or an EDA engineer has had the problem of a customer who has a problem but can’t send you the design because it is (a) too big, (b) the company’s crown jewels, and (c) there is no time to carve out a small test case. I once even had a bug reported from the NSA where they were not even allowed to tell us what the precise error message was (since it mentioned signal names).

But realistically, if the problem is going to be debugged, then either the design company’s crown jewels (the design source code) or the EDA company’s crown jewels (the tool source code) need to be transferred so that both can get together on the same machine. But wait… Imera has another approach: connect the EDA company to the design company in a way that keeps all the EDA company’s source code behind its firewall and all the design company’s proprietary design data behind theirs. Yet you can still step through a debuggable version of the code running on the problematic design.

For example, a major southern California communications company was having a problem with an EDA tool. Using the Imera Virtual Fabric, they put breakpoints in the code and found the problem within 5 hours. A complete fix was implemented, tested and delivered in 5 days. This compares to 35 or more days with the previous approach, in which a version of the code that logged internal progress would be created and mailed back to the EDA company, who would then create a new version and gradually home in on the problem.

It turns out that all of Cadence, Synopsys, Mentor and Magma are using this technology.

Another Imera technology that EDA companies are using is the capability to reach into their internal data center (or private cloud, which I guess is the new fashionable name for compute farms) and build a secure virtual vault with some number of machines siloed into the vault. These are then accessible only to authorized users. Interestingly, those users could include an EDA vendor. So it is possible for a design company to set up a specific set of machines that, say, Cadence also has access to, enabling collaborative work to debug a problem, training, beta testing and so on.

The approach is broadly applicable to other industries too. Volvo, for example, uses it to work with 3rd party vendors and thus ensure that the parts they are designing will fit in the space in the car where they need to go. Banks are using it to give very controlled access to sensitive data.

If you would like to learn more about Imera technology and how it is being used for remote debugging at Mentor Graphics, you might want to check into this seminar, “Effective, Secure Debugging in a Fabless Ecosystem”, Jan. 31, San Jose.


Samsung’s Regrettable Moment and the Coming of 3D Tick Tock!
by Ed McKernan on 01-10-2012 at 12:35 am

The might-have-beens. The shoulda’s, coulda’s and woulda’s are what launch a thousand Harvard Business School case studies meant to prepare a generation of business leaders to make decisions that impact the future direction of their companies. Right before the 2008 financial crisis (September 5, 2008), Samsung made a run at SanDisk in order to reduce its NAND flash royalty payments. A year later, SanDisk rejected Samsung’s final offer for what would be half the value of the company today. Samsung can look back and say that was a big fork in the road, and hopefully for them it wasn’t a “stick a fork in it” moment.

Winston Churchill, the man who saved Western civilization, was famous for saying “the farther backward you can look, the farther forward you are likely to see.” While he was forced to sit on the political sidelines during his “Wilderness Years” of the 1930s, he watched Britain and France run the same appeasement playbook while Germany re-armed with new tanks and planes. At age 65, he was called to lead Britain out of its darkest hour, with only the technology of radar and a 22-mile water barrier called the English Channel standing between him and defeat. The beaches of Dunkirk were left littered with the bulk of Britain’s military equipment. As one of the fathers of the WWI tank and a through-and-through military technologist, he could tie the nuts and bolts of capability to an overarching strategy. None was his equal. What’s this got to do with semiconductors?

There is only one semiconductor executive who has been there from the early, early days of the 1970s with Noyce, Moore and Grove. From the days of DRAM, the EPROM and the 8-bit microprocessor all the way to today’s multi-core 64-bit processors, there is only one who was personally tutored by Andy Grove and who has the ability to look farther back in semiconductors in order to see what lies farther ahead. This person is Paul Otellini, who has built an Intel that in 2012 will likely end up being three times as profitable as in the bubble year of 2000, the peak of the disastrous Craig Barrett “Itanium Era.”

Competing in the semiconductor industry is often a multi-front war. If you can’t visualize your enemies, or the enemies of your enemies, then you will die somewhere in the next turn of Moore’s Law. Intel has recognized this for a long time now. The compute platform is the actual battlefield. The array of tanks and airplanes offered by the ARM camp is built in factories that are NOT out of bombing range, unlike the American factories of WWII.

Imagine you are a processor architect and have been given a clean sheet of paper to define the next big thing. You have been told that there are three new parameters. The first one is that you have infinite cache SRAM. The second is that you have infinite, off-chip NAND Flash. And the third is that you have a pool of DRAM that is still quite large but has already shrunk by an order of magnitude in size relative to Flash (and is shrinking relative to SRAM) but its cost is FREE. How would you arrange the new architecture? Remember, the price of a loaf of bread in the old USSR was FREE before the breakup.

If you are designing something that will first be in production in 3-4 years and remain the basis for multiple product spinoffs for an additional 4 years, then you push the envelope on those resources that maximize performance/cost and performance/watt. This means that over the lifetime of the chip, Moore’s Law will expand the SRAM by a factor of at least 8 over where it is today. NAND flash is scaling even faster, and DRAM will likely scale at half that rate. The beauty of DRAM over its 40-year history is that there was nothing better sitting next to the processor as cheap, short-term storage of code and data. However, this advantage is fading fast relative to NAND flash, especially as the flash controllers of the world get smarter at maximizing the life of a bit. This sums up the trend that places NAND and SRAM as the dominating factors driving winners and losers for the next decade. How do Samsung, Intel, TSMC, Toshiba, SanDisk and the rest stack up?

The ramifications of infinite SRAM and infinite NAND are probably already incorporated in Intel’s roadmaps for the next 4 years. Many people know that Intel has a business operating model they call the Tick-Tock model, where Tick refers to a new process technology used on an existing architecture and Tock is a new microarchitecture running on an existing process technology. With SRAM and NAND about to be added in a stacked 3D configuration above the processor, I would like to suggest that the Intel business model will now effectively be known as “3D Tick-Tock.” In 3D Tick-Tock, a variable amount of cache or NAND will be stacked onto an underlying multi-core CPU to enhance the overall product for mobile, desktop or server applications. The timing of the 3D Tick-Tock product rollouts will be determined by the availability of the latest NAND flash or logic (SRAM) process technology. Imagine these new memory cores offering a mid-life kicker to the older, more complex processor core at higher prices but the same operating and thermal (TDP) power envelopes. And these will be introduced just as competitors offer their newest products, built on one- or two-generation-old technology. Now Intel’s competitors must think about how they keep up in a whole new dimension.

This scenario is why TSMC, Samsung, SanDisk and the ARM crew must think in a whole new way about competing with Intel. If the foundries lack world-class process technology in both NAND flash and logic, and if these same foundries cannot field high-performance, small-footprint SRAM blocks, then they will not be able to get ARM or any other processor architecture across the goal line in the market segments that Intel chooses to play in.

Samsung’s moment of lift-off was available three years ago, in early 2009, when the economic world was thought to be close to ending. If they had overpaid for SanDisk, they would have consolidated the NAND flash industry to themselves before Apple took off and before Intel made the decision to get back in. It was a moment to end Toshiba as a NAND foundry for Apple. Samsung would have owned half of the infinite-memory architecture playing field that will determine the winners of the semiconductor market in the coming decade.

FULL DISCLOSURE: I am Long AAPL, INTC, QCOM, ALTR



Kindle Touch – My Experience
by Daniel Payne on 01-09-2012 at 11:08 am

Mostly I blog about EDA software; however, the end objective of IC design is to produce an electronic system like the Kindle Touch, a popular e-book reader from Amazon introduced in late 2011.

Tear Down
This particular model has the following components (Source: Tech Republic):


Initial Use
I bought mine in November 2011 and was quickly impressed with what my $79 had purchased (originally $99, plus a $20 credit from Best Buy for opening a credit card). The packaging looked like a recycled container, and the only things inside the box were the Kindle Touch and a micro-USB cable; no user manual in sight.

There are only two buttons to push, and the On/Off button is on the very bottom of the device. When you press it, a small green LED lights up and the system boots with an image of a boy reading a book under a tree. Boot time is several seconds, and there is no annoying chime like the one Windows makes when booting.

The user manual is the first e-book that you see, so it was easy to read through the pages and learn about the capabilities. It took some getting used to the quirks of the e-ink display, because it literally flashes every time you turn a page, something I’ve never seen before and wish they could avoid in the future. Yes, you can read for hours on end with no eye fatigue, so it’s a better experience than viewing the LCD display on my laptop or smartphone.

Font size can be adjusted; however, you cannot rotate the Kindle Touch to see a landscape view of your book. I had expected that I could rotate my e-book like I can with my smartphone, but then decided it wasn’t a deal breaker for me.

The Home button is on the front and from that screen you can start to organize your book library into categories that make sense to you.

I already had an Amazon account, so I just linked my new Kindle to it and started looking for free books. There are a few million free books over at Google. Just as the success of Apple’s devices is tied to the iTunes store, the success of the Kindle family of e-book readers is tied to the online infrastructure for books and movies at Amazon (another reason I didn’t go with the B&N Nook device).

Finding Books
You can search for books; however, many of the books in my home library are not available. I really wanted to find We Seven, the book by the Mercury astronauts, but it isn’t in e-book format yet (maybe never). Finding a book on the Amazon network is easy, and downloading is over WiFi (they call it Whispernet).

Responsiveness
If you own a smartphone and then try using a Kindle Touch, you’ll find that the responsiveness is quite slow on the Kindle. Turning a page takes under a second; however, tapping the top of the screen to get a menu takes a few seconds. Likewise, pinching your fingers to zoom in or out takes several seconds on the Kindle, unlike on my Android phone, where pinch-to-zoom responds instantly (thank you, Samsung).

Document Support
To my delight I discovered that the Kindle allows me to email Word Documents and PDF files to my special kindle.com email account.

Recharging
Battery life is advertised as two months when reading for 30 minutes a day. My experience with WiFi turned on is that it lasts about three weeks between charges, and to me it’s more of a hassle to disable and re-enable WiFi just to save power. Recharging is through the micro-USB connector, which presumes that you have a computer with a USB port for charging (and transferring MP3 or other files).

Extras
Hidden away are some special beta features that are fun to play with:

  • web browser (greyscale only)
  • MP3 player

If you want to convert something into e-book format or between formats then check out the Calibre program (no relation to Mentor Graphics).

Amazon Return Policy
Just this week my Kindle Touch started to reboot itself, the page turning sometimes didn’t work, and the power button would turn the Kindle on-off-on-off. A quick visit to customer chat and I am now receiving a replacement; no hassles, no arguments.

Summary
The Kindle Touch lets you read books in a comfortable form factor that is easy on the wallet and the eyes. I recommend the Kindle Touch for all readers.