Advanced Memory Cell Characterization with Calibre xACT 3D
by SStalnaker on 01-12-2012 at 7:18 pm

Advanced process technologies for manufacturing computer chips enable more functionality, higher performance, and lower power through smaller feature sizes. Memory bits on a chip are predicted to double every two years to keep up with the demand for increased performance.

To meet these new requirements for performance and power, memory designers must increase bit density while satisfying exacting specifications for fast data transfer and low power consumption. Unfortunately, higher density increases the interactions among interconnects and devices, making it harder to ensure that memories will meet all specifications and be manufacturable with high yield. Ultimately, this means that more accurate characterization than ever before is required at every step of memory design.

Traditional extraction methods used for memory designs have proven unable to address these challenges, either because they are too slow, not accurate enough, or both. Memory designers need tools that can help them analyze parasitic issues accurately and quickly at every stage of the physical design cycle, from basic building blocks to the full chip.

A fast field solver, such as Calibre xACT 3D, can be used to apply boundary conditions on a bit cell (Figure 1). By specifying a closed boundary for the cell, the designer can improve parasitic extraction and simulation accuracy, as well as performance, for a symmetric design. Using boundary conditions, the bit cell geometries are effectively modeled as a reflected or periodically repeated pattern on all sides of the boundary, at the same distance. This allows the designer to extract a single bit cell accurately without having to construct an array.

Figure 1: Application of boundary conditions on a cell
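
To make the idea concrete, here is a minimal sketch (plain C with made-up geometry; this is not Calibre xACT 3D's interface) of what a reflective boundary condition implies: mirroring each conductor about the cell boundary produces the image geometry a neighboring cell would contribute, so the solver effectively sees an array while only one cell is described.

```c
#include <stdio.h>

/* A conductor's extent in x inside a bit cell spanning 0 <= x <= W.
 * A reflective boundary at x = b maps each point x to 2*b - x. */
typedef struct { double x0, x1; } Span;

Span reflect(Span s, double b) {
    Span m = { 2.0 * b - s.x1, 2.0 * b - s.x0 };
    return m;
}

int main(void) {
    const double W = 1.0;             /* cell boundary width (arbitrary units) */
    Span poly = { 0.2, 0.4 };         /* a conductor inside the cell */
    Span left  = reflect(poly, 0.0);  /* image cell across x = 0 */
    Span right = reflect(poly, W);    /* image cell across x = W */
    printf("conductor   [%.2f, %.2f]\n", poly.x0, poly.x1);
    printf("left image  [%.2f, %.2f]\n", left.x0, left.x1);
    printf("right image [%.2f, %.2f]\n", right.x0, right.x1);
    return 0;
}
```

A periodic (rather than reflected) condition would translate the geometry by the cell pitch instead of mirroring it; in either case the designer describes one cell and the solver accounts for the repeated neighbors.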

This modeling technique enables designers to radically speed up their characterization process and realize a design that performs to specification. For example, using Calibre xACT 3D, we extracted a bit cell in 4 seconds, whereas a popular reference-level field solver required 2.15 hours, a speedup of roughly 1,900x. The total capacitance of the nets extracted from the bit cell matched the reference results very closely.

Using fast field solver technology like Calibre xACT 3D at all stages of memory design, from bit cell design to full chip sign-off, ensures a robust design that will work to specification when it is manufactured.

To read the complete white paper, click here.

Leave a comment or contact Claudia Relyea if you would like to discuss how Calibre xACT 3D can help your company ensure the successful and timely development of high-performance, low-power memory designs at advanced nodes.


Memory Controller IP: the battlefield where Cadence and Synopsys are fighting face to face. Today let's have a look at Cadence's strategy.
by Eric Esteve on 01-12-2012 at 9:45 am

Last year I shared with you some strategic information released by Cadence in April about their IP strategy, more specifically about the launch of their DDR4 Controller IP, and tried to understand Cadence's strategy for Interface IP in general (USB, PCIe, SATA, DDRn, HDMI, MIPI…) and how Cadence is positioned with respect to their closest and most successful competitor in this field, Synopsys.


Speeding SoC timing closure
by Paul McLellan on 01-12-2012 at 1:42 am

As chips have become larger, one of the more challenging steps is full-chip signoff. Lots of other steps in the design process can work on just a part of the problem, but by definition full-chip signoff has to work on the full chip. And it is not just that chips have gotten larger; the number of corners that need to be validated has also exploded. Of course, signoff is the last step before tapeout, so it sits on the critical path under the most intense schedule pressure.

Over the last year or so Magma has introduced a suite of tools to address these issues. The first tool is the QCP extractor. You can’t have accurate timing without accurate parasitic data. The next tool is Tekton for delay calculation and static timing analysis. And thirdly there is Quartz DRC/LVS for physical verification.

These tools are multi-threaded, so they scale to very large designs and can take advantage of compute farms. A further optimization is multi-mode, multi-corner analysis and extraction, which allows a single server to concurrently analyze many scenarios and thus reduce the time and resources required. Magma's place and route is now also built on top of these same basic extraction and analysis engines, removing the correlation problems that can arise when the place-and-route system uses an approximation that the subsequent verification flags as incorrect.
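
The concurrency pattern behind that claim is straightforward to picture. Here is a minimal C sketch (with hypothetical corner names, nothing Magma-specific) of the one-thread-per-corner structure that lets a multi-core server work through several analysis scenarios in parallel:

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_CORNERS 4

/* Hypothetical process/voltage/temperature corners for illustration. */
static char *corners[NUM_CORNERS] = {
    "ss_0.9V_125C", "ff_1.1V_-40C", "tt_1.0V_25C", "sf_1.0V_85C"
};

/* Stand-in for one extraction + delay calculation + STA run. */
static void *analyze_corner(void *arg) {
    printf("analyzing corner %s\n", (char *)arg);
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_CORNERS];
    for (int i = 0; i < NUM_CORNERS; i++)   /* launch one thread per corner */
        pthread_create(&tid[i], NULL, analyze_corner, corners[i]);
    for (int i = 0; i < NUM_CORNERS; i++)   /* wait for all corners to finish */
        pthread_join(tid[i], NULL);
    return 0;
}
```

In a production flow each worker would be a full extraction and timing run sharing a common design database, but the scaling argument is the same: corners are largely independent, so adding cores (or farm machines) cuts wall-clock time almost linearly.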

There is a new webinar that explains how these sign-off technologies enhance the overall flow and integrate into a complete sign-off solution.


Medfield: ARM twisting
by Paul McLellan on 01-11-2012 at 2:53 pm

One of the most significant announcements at the Consumer Electronics Show (CES) this week was Intel's Medfield, an Atom-based smartphone SoC. The SoC itself is unremarkable, perhaps a little better than ARM Cortex-based SoCs in some areas, worse in others. The reason it is significant is that Motorola (soon to be Google, don't forget) announced a multi-year partnership with the first products expected this summer, and Lenovo actually demoed a smartphone containing the chip.

I used to think that Intel had very little chance in the mobile marketplace because ARM was so entrenched and there just wasn't any good reason to switch. Plus Intel's big weakness is that they are hopeless at software. They previously tried to get into the communications business with an XScale (ARM) strategy but gave up after investing over a billion dollars without really getting any customers. Last year they bought Infineon's wireless business (also ARM-based) but promptly lost their flagship customer, Apple, to Qualcomm. And they had an Atom-based phone SoC, Moorestown, that went nowhere.

But Android has leveled the playing field, so Intel doesn't need to be good at software development. There is little lock-in of Android to ARM-based systems, and as more and more of the software gets further from the hardware, the details of the hardware matter less and less to the software developer. With a little care, an Android app should run on any Android phone without really even knowing what the processor is (Android apps are mostly written in Java and so are actually compiled into Java Virtual Machine bytecode, not the underlying assembly, in any case).

The important aspect of the announcement is not that Intel is going to seriously impact ARM-based phones in the short term. It is not. It is simply that Intel is seriously in the game. And once it is seriously in the game, it will be able to leverage its lead in process technology, which will soon put it about two process generations ahead of TSMC (or anyone else, for that matter). Even if there are some inherent weaknesses in the Atom architecture versus ARM Cortex, two process generations is simply too big a chasm to get across, and a TSMC/ARM SoC will be inferior to an Intel/Atom SoC.

I wouldn’t be the least bit surprised if Intel hasn’t been making some trips across the valley to a famous Cupertino-based smartphone company.


Imera Virtual Fabric
by Paul McLellan on 01-10-2012 at 6:00 am

Virtual fabric sounds like something that would be good for making the emperor’s new clothes. I talked today to Les Spruiell of Imera to find out what it really is.

Anyone who has worked as either a designer or an EDA engineer has had the problem of a customer who has a problem but can't send you the design because it is (a) too big, (b) the company's crown jewels, and (c) there is no time to carve out a small test case. I once even had a bug reported from the NSA where they were not allowed to tell us what the precise error message was (since it mentioned signal names).

But realistically, if the problem is going to be debugged, then either the design company's crown jewels (the design source code) or the EDA company's crown jewels (the tool source code) has to be transferred so that both can come together on the same machine. But wait… Imera has another approach: connect the EDA company to the design company in such a way that all the EDA company's source code remains behind its firewall and all the design company's proprietary design data remains behind theirs. Yet you can still step through a debuggable version of the code running on the problematic design.

For example, a major southern California communications company was having a problem with an EDA tool. Using the Imera Virtual Fabric, they put breakpoints in the code and found the problem within 5 hours. A complete fix was implemented, tested, and delivered in 5 days. This compared to 35 or more days with the previous approach, in which a version of the code that logged internal progress would be created and mailed back to the EDA company, which would then create a new version and gradually home in on the problem.

It turns out that all of Cadence, Synopsys, Mentor and Magma are using this technology.

Another Imera technology that EDA companies are using is the capability to reach into their internal data center (or private cloud, I guess that is the new fashionable name for compute farms) and build a secure virtual vault with some number of machines siloed inside it. These are then accessible only to authorized users, and interestingly, those users could include an EDA vendor. So it is possible for a design company to set up a specific set of machines that, say, Cadence also has access to, enabling collaborative work to debug a problem, training, beta testing, and so on.

The approach is broadly applicable to other industries too. Volvo, for example, uses it to work with third-party vendors and thus ensure that the parts they are designing will fit in the space in the car where they need to go. Banks are using it to give very controlled access to sensitive data.

If you would like to learn more about Imera technology and how it is being used for remote debugging at Mentor Graphics, you might want to check into the seminar "Effective, Secure Debugging in a Fabless Ecosystem", Jan. 31 in San Jose.


Samsung’s Regrettable Moment and the Coming of 3D Tick Tock!
by Ed McKernan on 01-10-2012 at 12:35 am

The might-have-beens. The shoulda's, coulda's, and woulda's are what launch a thousand Harvard Business School case studies meant to prepare a generation of business leaders to make decisions that shape the future direction of companies. Right before the 2008 financial crisis (September 5, 2008), Samsung made a run at Sandisk in order to reduce its NAND flash royalty payments. A year later, Sandisk rejected Samsung's final offer, one that would amount to half the value of the company today. Samsung can look back and say that was a big fork in the road, and hopefully for them it wasn't a "stick a fork in it" moment.

Winston Churchill, the man who saved Western Civilization, was famous for saying "the farther backward you can look, the farther forward you are likely to see." While he was forced to sit on the political sidelines during his "Wilderness Years" of the 1930s, he watched Britain and France run the same appeasement playbook while Germany re-armed with new tanks and planes. At age 65, he was called to lead Britain out of its darkest hour with only the technology of radar and a 22-mile water barrier called the English Channel standing between Britain and defeat. The beaches of Dunkirk were left littered with the bulk of Britain's military equipment. As one of the fathers of the WWI tank and a through-and-through military technologist, he could tie the nuts and bolts of capability to an overarching strategy. None was his equal. What's this got to do with semiconductors?

There is only one semiconductor executive who has been there from the early, early days of the 1970s, the days of Noyce, Moore, and Grove. From the days of DRAM, the EPROM, and the 8-bit microprocessor all the way to today's multi-core 64-bit processors, there is only one who was personally tutored by Andy Grove and who has the ability to look farther back in semiconductors in order to see what lies farther ahead. This person is Paul Otellini, who has built an Intel that in 2012 will likely end up three times as profitable as in the bubble year of 2000, the peak of the disastrous Craig Barrett "Itanium Era."

Competing in the semiconductor industry is often a multi-front war. If you can't visualize your enemies, or the enemies of your enemies, then you will die somewhere in the next turn of Moore's Law. Intel has recognized this for a long time now. The compute platform is the actual battlefield. The array of tanks and airplanes offered by the ARM camp is built in factories that are NOT out of bombing range the way American factories were in WWII.

Imagine you are a processor architect and have been given a clean sheet of paper to define the next big thing. You have been told that there are three new parameters. The first one is that you have infinite cache SRAM. The second is that you have infinite, off-chip NAND Flash. And the third is that you have a pool of DRAM that is still quite large but has already shrunk by an order of magnitude in size relative to Flash (and is shrinking relative to SRAM) but its cost is FREE. How would you arrange the new architecture? Remember, the price of a loaf of bread in the old USSR was FREE before the breakup.

If you are designing something that will first be in production in 3-4 years and remain the basis for multiple product spinoffs for an additional 4 years, then you push the envelope on the resources that maximize performance/cost and performance/watt. This means that over the lifetime of the chip, Moore's Law will expand the SRAM by a factor of at least 8 over where it is today (three doublings at roughly one per two years). NAND flash is scaling even faster, and DRAM will likely scale at half that rate. The beauty of DRAM over its 40-year history was that there was nothing better to sit next to the processor as cheap, short-term storage for code and data. However, that advantage is fading fast relative to NAND flash, especially as the flash controllers of the world get smarter at maximizing the life of a bit. This sums up the trend that makes NAND and SRAM the dominant factors driving winners and losers for the next decade. How do Samsung, Intel, TSMC, Toshiba, Sandisk, and the rest stack up?

The ramifications of infinite SRAM and infinite NAND are probably already incorporated in Intel's roadmaps for the next 4 years. Many people know that Intel has a business operating model they call Tick-Tock, where a Tick is a new process technology applied to an existing microarchitecture and a Tock is a new microarchitecture running on an existing process technology. With SRAM and NAND about to be added in a stacked 3D configuration above the processor, I would like to suggest that the Intel business model will now effectively be known as "3D Tick-Tock." In 3D Tick-Tock, a variable amount of cache or NAND will be stacked onto an underlying multi-core CPU to enhance the overall product for mobile, desktop, or server applications. The timing of the 3D Tick-Tock product rollouts will be determined by the availability of the latest NAND flash or logic (SRAM) process technology. Imagine these new memory cores offering a midlife kicker to the older, more complex processor core at higher prices but the same operating and thermal (TDP) power envelopes. And these will be introduced just as competitors offer their newest products built on technology one or two generations old. Now Intel's competitors must think about how to keep up in a whole new dimension.

This scenario is why TSMC, Samsung, Sandisk, and the ARM crew must think in a whole new way about competing with Intel. If the foundries lack world-class process technology in both NAND flash and logic, and if these same foundries cannot field high-performance, small-footprint SRAM blocks, then they will not be able to get ARM or any other processor architecture across the goal line in the market segments Intel chooses to play in.

Samsung's moment of liftoff was available three years ago, in early 2009, when the economic world was thought to be close to ending. If they had overpaid for Sandisk, they would have consolidated the NAND flash industry around themselves before Apple took off and before Intel made the decision to get back in. It was a moment to end Toshiba's role as a NAND foundry for Apple. Samsung would have owned half of the infinite-memory playing field that will determine the winners of the semiconductor market in the coming decade.

FULL DISCLOSURE: I am Long AAPL, INTC, QCOM, ALTR



Kindle Touch – My Experience
by Daniel Payne on 01-09-2012 at 11:08 am

Mostly I blog about EDA software; however, the end objective of IC design is to produce an electronic system like the Kindle Touch, a popular e-book reader that Amazon introduced in late 2011.

Tear Down
Tech Republic has published a component-level teardown of this particular model.


Initial Use
I bought mine in November 2011 and was quickly impressed by what my $79 had purchased ($99 list price, less a $20 credit from Best Buy for opening a credit card). The packaging looked like a recycled container, and the only things inside the box were the Kindle Touch and a micro-USB cable; no user manual in sight.

There are only two buttons to push, and the On/Off button is on the very bottom of the device. When you press it, a small green LED lights up and the system boots with an image of a boy reading a book under a tree. Boot time is several seconds, and there is no annoying chime like Windows makes when booting.

The user manual is the first e-book that you see, so it was easy to read through the pages and learn about the capabilities. It took some getting used to the quirks of the e-ink display, because it literally flashes every time you turn a page, something I had never seen before and wish they could avoid in the future. Still, you can read for hours on end with no eye fatigue, so it's a better experience than viewing the LCD display on my laptop or smartphone.

Font size can be adjusted; however, you cannot rotate the Kindle Touch to see a landscape display of your book. I had expected that I could rotate my e-book like I can with my smartphone, but decided it wasn't a deal breaker for me.

The Home button is on the front and from that screen you can start to organize your book library into categories that make sense to you.

I already had an Amazon account, so I just linked my new Kindle to it and started looking for free books. There are a few million free books over at Google. Just as the success of Apple is tied to its iTunes store, the success of the Kindle family of e-book readers is tied to the online infrastructure for books and movies at Amazon (another reason I didn't go with the B&N Nook device).

Finding Books
You can search for books; however, many of the books in my home library are not available. I really wanted to find We Seven, the book by the Mercury astronauts, but it isn't in e-book format yet (maybe never). Finding a book on the Amazon network is easy, and downloading happens over WiFi (they call it Whispernet).

Responsiveness
If you own a smartphone and then try using a Kindle Touch, you'll find that the Kindle's responsiveness is quite slow. Turning a page takes under a second, but tapping the top of the screen to get a menu takes a few seconds. Likewise, pinching your fingers to zoom in or out takes several seconds on the Kindle, unlike on my Android phone, where pinch zooming is instantaneous (thank you, Samsung).

Document Support
To my delight, I discovered that the Kindle allows me to email Word documents and PDF files to my special kindle.com email account.

Recharging
Battery life is advertised as two months when reading 30 minutes a day, which works out to about 30 hours of reading. My experience with WiFi turned on is that it lasts about three weeks between charges; to me it's too much of a hassle to disable and re-enable WiFi just to save power. Recharging is through the micro-USB connector, which presumes that you have a computer with a USB port for charging (and for transferring MP3 or other files).

Extras
Hidden away are some special beta features that are fun to play with:

  • web browser (greyscale only)
  • MP3 player

If you want to convert something into e-book format, or between formats, then check out the Calibre program (no relation to Mentor Graphics).

Amazon Return Policy
Just this week my Kindle Touch started rebooting on its own, page turns sometimes didn't work, and the power button would cycle the Kindle on-off-on-off. A quick visit to customer chat and I am now receiving a replacement; no hassles, no arguments.

Summary
The Kindle Touch lets you read books in a comfortable form factor that is easy on the wallet and the eyes. I recommend the Kindle Touch for all readers.


HiFi audio…in all the devices
by Paul McLellan on 01-09-2012 at 6:00 am

The big challenge with audio is that there are so many standards. Some of this is for historical reasons, since audio for mobile (such as MP3), for the home (Dolby 5.1), and for cell-phone voice encoding/decoding have all had very different requirements, different standards bodies, and so on. But gradually everything is coming together: you will expect your smartphone to play a movie with the same DTS sound your Blu-ray player can provide.

The proliferation (and constant change) of standards has been an opportunity for Tensilica, whose HiFi product line has become the standard for many semiconductor and system companies. By designing a custom audio processor and then supplying software for each standard, it is fairly easy to support any portfolio of standards and even to change the code as standards evolve, possibly even updating devices after they are in customers' hands.

The technical challenge for any audio solution is to minimize MACs, watts, area, and price, which translates into designing a small processor that can implement the audio standards of interest at the lowest possible clock frequency. Power is a big driver, since a lot of audio is either in portable devices (battery powered) or living-room devices (no fans). The obvious solution of just running the code on the ARM processor that is (probably) already on the chip is too power hungry. Building an optimal silicon hardware implementation is too expensive in both design time and area (especially if different standards require different dedicated silicon).

HiFi 2 is a DSP with dual 24-bit MACs. HiFi EP is a version with improved memory and other optimizations. This week at CES, Tensilica is announcing HiFi 3, a quad-MAC solution that can be configured as dual 32-bit MACs, four 24-bit MACs, or other combinations. The hardware is a 3-slot VLIW processor, issuing three instructions on each clock cycle. The improvements are large. For example, a 32-tap FIR takes 2090 cycles on HiFi EP but only 997 on HiFi 3, less than half. DTS processing for movie sound needed 362 MHz with HiFi 2 but only 233 MHz with HiFi 3.
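
Those FIR numbers make sense once you count multiply-accumulates: a 32-tap FIR costs 32 MACs per output sample, so an ideal dual-MAC datapath needs about 16 inner-loop cycles per sample and a quad-MAC datapath about 8, before load/store and pipeline overhead. Here is a generic plain-C sketch of the kernel (illustrative only, not Tensilica code; a real HiFi port would use the processor's SIMD intrinsics):

```c
#include <stdint.h>
#include <stddef.h>

#define TAPS 32

/* y[i] = sum over k of h[k] * x[i-k]. The caller must provide TAPS-1
 * samples of history before x[0], e.g. by pointing x into the middle
 * of a larger buffer. Samples are Q1.23 fixed point held in int32_t,
 * as on a 24-bit audio datapath. */
void fir32(const int32_t *x, const int32_t *h, int32_t *y, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        const int32_t *xp = x + i;          /* newest input sample */
        int64_t acc = 0;                    /* wide accumulator, no overflow */
        for (int k = 0; k < TAPS; k++)
            acc += (int64_t)h[k] * xp[-k];  /* one MAC per tap */
        y[i] = (int32_t)(acc >> 23);        /* rescale back to Q1.23 */
    }
}
```

Each extra MAC unit lets the hardware retire more of those 32 products per cycle, which is essentially where the 2090-to-997 cycle reduction comes from.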

HiFi 3 is source compatible with HiFi 2 codecs and even shows a small performance improvement as-is, maybe 10%. But by updating the code to take better advantage of HiFi 3's capabilities, a typical codec improves by over 20%.

HiFi 3 will enable more post-processing for home entertainment, for example matching the processing to the room and speaker environment. Similarly, smartphones will require much more complex audio for immersive gaming experiences. On the voice-coding side, the requirements are actually outpacing what even Moore's Law can deliver, demanding ever more processing for better noise suppression, noise-dependent volume control, and so on.



Synopsys, the first 25 years
by Paul McLellan on 01-08-2012 at 8:00 pm

Synopsys was started in 1986 and so 2011 was its 25th anniversary. They created a little timeline with some of their history. As with most companies, the earlier history is the most interesting, before it was clear what the future would bring. From 1986 to 1990 they grew to $22M in revenue, which was explosive growth. So explosive that Geoffrey Moore used them as an example in his follow-up to Crossing the Chasm, called Inside the Tornado.

One area they list on the timeline is key acquisitions. It is interesting to look at these and see the focus of Synopsys broadening out from being a pure synthesis company, and of EDA in the wider sense broadening to address new challenges.

  • Zycad 1990 VHDL simulation (they didn’t acquire the hardware acceleration biz)
  • Logic Modeling 1994 high level functional modeling
  • Epic 1997 transistor level analysis
  • Viewlogic 1997 Motive timing (from Quad Design), Sunrise test, some system products
  • Avant! 2002 place and route, physical verification, circuit simulation, cell libraries
  • InSilicon 2003 silicon IP
  • Numerical Technologies 2003 DFM lithography technology
  • Synplicity 2008 FPGA synthesis
  • Virage Logic 2010 IP portfolio
  • Ora 2010 optical design

and, of course (probably):

  • Magma 2012 place and route, timing verification, circuit design, physical verification, analog design

When I was at Cadence we had a sort of family tree that showed all the acquisitions, not just the ones Cadence had made but also the prior ones that went into making up the companies that were acquired. It would be interesting to see a similar family tree for Synopsys, if one exists. We have some of the raw data here on the EDA Mergers and Acquisitions Wiki, but without the graphic design and the logos.


Interface Protocols, USB3, HDMI, MIPI… the winners and losers in 2011
by Eric Esteve on 01-07-2012 at 11:30 am

Releasing a new protocol like Thunderbolt, HDMI, or SuperSpeed USB is not just a matter of bandwidth or connector form factor guaranteeing success. Some non-scientific parameters also play a role in the alchemy, which is why forecasting the success of a protocol is such a hard task and can't be reduced to a feature-list comparison table. That is also why, even if a good marketing campaign can help (in fact, it is a necessary condition), properly marketing a technically attractive protocol is not sufficient to make it successful in the mass market. Do I know the magic recipe for cooking up an interface protocol with high mass-market penetration? I would be rich if I did! But I can at least try to understand which protocols were the winners in 2011, demonstrating high market penetration (and becoming "de facto" standards in certain market segments) or fast-growing penetration (which is even more useful if you want to grow a business developed around the protocol). And if we can identify winners, some losers must exist as well…

The winners in 2011

HDMI is certainly the most successful protocol, both in terms of market penetration in the consumer/HDTV segment and in terms of spread into various segments like PC, wireless handset (smartphone), set-top box, DVD players and recorders, digital video cameras, and more. The analyst consensus is that 2 billion HDMI ports have shipped since inception (2004 for the protocol definition, 2006 for the first devices in the market). That's a huge number, and a very fast growth rate, as 250 million ports shipped in 2007 and we can reasonably expect 1 billion ports to ship this year. If we look at the protocols competing with HDMI, we find DisplayPort and, to a certain extent, SuperSpeed USB.

DisplayPort was defined at the same time as HDMI, by the Video Electronics Standards Association (VESA), a non-profit organization. The important word here is profit: while Silicon Image was executing a worldwide and very aggressive strategy to push the HDMI standard, creating subsidiaries like HDMI Licensing (to license the technology) and Simplay Labs (to offer testing and certification, in fact a mandatory step to get the HDMI stamp), because the company wanted to be in a position to extract maximum benefit from the technology it had developed, VESA was… very quiet!

Things are changing now, as DisplayPort is starting to see real momentum, which is why the protocol is on the winner list. DisplayPort adoption grew strongly in 2011 for various reasons. First, the protocol is well tailored for interfacing a PC and a screen, so that is naturally where adoption is high. The second reason has to do with the marketing effort made by two heavyweights of the electronics industry, Intel and Apple. Apple has been using DisplayPort for some time now, and Intel has created a strong buzz around DisplayPort, not directly, but by promoting Thunderbolt (formerly Light Peak), which multiplexes DisplayPort and PCI Express on the same link. Anyway, the result is there: DisplayPort technology is starting to be used, and verification IP for the protocol is selling well (which is a sign, in advance of real mass-market sales, that developments are occurring, leading to product launches a few quarters later). SuperSpeed USB, or USB 3.0, has been mentioned in this paragraph only "a contrario" (I like Latin words; they make me feel highly educated), as USB 3.0 should be ranked among the losers, no doubt about it, as we will see below.

Another protocol seeing wide adoption, and very fast-growing IP sales, is DDRn. Even if it is questionable to rank it among the interface protocols, DDRn is a means of interconnecting a SoC with memory using a digital part (controller) and a physical media access (PHY), so it is built like every other modern high-speed interface technology, even if it is still parallel "stricto sensu". As we have shown in the "Interface IP Survey", DDRn adoption is now a matter of fact for any SoC design; the key point is that design teams now tend to source it externally, one reason being the growing difficulty of managing higher transfer rates (up to 2 GT/s for DDR4). IPNEST forecasts DDRn IP sales to be the largest of the interface IP categories, passing $100M in 2013.

MIPI is a specification (not a standard, according to the MIPI Alliance) that smartphone users are probably exercising every day (every second), though they don't know it. I don't think we will ever see a marketing campaign labeled "MIPI inside", as MIPI is more of a commodity, allowing various ICs to talk to each other inside the handset (for now, and inside other mobile electronic devices in the near future), than a flagship technology. Even so, the technology is very smart, standardizing many different interfaces to controller ICs for camera, display, flash, RF, and audio.

So far, IPNEST's evaluation is that 700 million ICs supporting MIPI shipped in 2011, exclusively in the handset/smartphone segment (more than 400 million smartphones shipped over the same period). Looking at the IP sales for MIPI is a bit disappointing, as they are far from the level of HDMI in 2011; the reason is simply that the first MIPI adopters, the application processor chip makers, have probably used internally designed IP rather than sourcing it from IP vendors. This will certainly change when the tier-2 application processor chip makers, and companies targeting other market segments like PC, media tablet, or mobile consumer electronic devices, adopt MIPI. What is funny here is that the end user will probably never know that he is using MIPI, whereas he certainly knows that his device supports HDMI or USB.

Still on the winner list, but to a lesser extent, we can rank PCI Express and SATA. PCI Express penetration has been strong in almost every segment (except wireless handsets, consumer electronics, and automotive), generating growing IP sales year after year to reach $40M in 2010, while SATA is obviously staying strong in storage equipment, but only there. Five years back, some people thought that SATA could be replaced by other protocols (USB 3.0 or PCI Express), but this will not happen. What is likely to occur is a merger between SATA and PCI Express into something called SATA Express, to serve the new needs created by flash-based storage.

And now the losers…

To me, one of the most disappointing events of 2011 was the failure of SuperSpeed USB to take off, coming after the same disappointment in 2010… and in 2009. The technology was ready (back in 2008), proven (at least the PHY, very similar to PCI Express gen-2), and expected by the market. But Intel decided to delay, over and over, support for USB 3.0 in their PC chipsets, now expected in April 2012! It seems that, five or six years back, the USB-IF badly missed their mission. The market was expecting a high-speed interface protocol that people could use to download or exchange video. At that time, HDMI was just about to be launched and High-Speed USB was already five years old. It was the right time to launch SuperSpeed USB with an attractive slogan, "10 times faster than USB HS". What did the USB-IF do? They launched USB On-The-Go, which may be a nice-to-have feature but is far from revolutionary! More an engineer's dream than a marketing vision, if you prefer.

I am afraid that, even if USB 3.0 will certainly see wide adoption in 2012 and 2013, with IP sales doubling in 2012, the USB standard has missed its window and will never recover (in terms of penetration), as in the meantime HDMI has become the "de facto" standard for imaging. Because integrating too many connectors is cost-prohibitive in the consumer market, OEMs will have to make a choice, and you can guess that they will not get rid of the HDMI connector (even if they are unhappy about paying 4 cents per port to Silicon Image). The SuperSpeed USB market will survive and generate IP sales, but it will never reach the level of ubiquity that USB reached in the past.

I realize that I did not talk about Thunderbolt, but what is there to say about it? It has been adopted by Apple for the high-end PC segment, and some digital still camera makers will offer it. But as of today the Thunderbolt controller can't be integrated into a SoC and can't be sold as an IP function, which means it will be more difficult to build an ecosystem around it. It's perceived as an Intel/Apple "proprietary" function, and that may not be the best route to wide adoption (think about FireWire).

There are still some high-speed serial differential protocol standards that I did not mention: Serial RapidIO, HyperTransport, and InfiniBand. If you don't know them and don't use them now, you can rest in peace, as all of these should stay in their niches, or even disappear…

Eric Esteve, from IPNEST. Graphic extracted from the "Interface IP Survey", available here.