Apple Creates Semiconductor Opportunities . . .
by Steve Moran on 03-09-2011 at 7:22 pm

There has been a lot of press this past week surrounding the release of the iPad 2. While it has some significant improvements, they are, for the most part, incremental. In my view the lack of Flash support, a USB port, and a memory card slot continues to be a huge deficit. Until this past week my reservations about the iPad were mostly theoretical, since I had never actually used one. Then, earlier this week, I got the chance to use an iPad for half a day while traveling. It was an OK experience and, in a pinch, it would serve my needs, but I found myself wanting to get back to my laptop as soon as possible. I would even have preferred my little Dell netbook.

I am already an Android user on my smartphone, and I think that when I make the leap to a pad I will look for an Android platform. That said, I give Apple all the credit for creating a new challenge for its competitors while delivering the ultimate user experience, continuing to lead the pack, and making buckets of money along the way.

In the very near term it is certain there will be a plethora of Android (and other) pads. The iPad will continue to set the standard that others attempt to meet or exceed. With this background, I recently came across an article in the EEherald that addresses the substantial opportunities for semiconductor companies to be a part of the Android pad ecosystem. You can read the whole article here, but these are the highlights:

– In a very short period of time price will be nearly as important as features.

– With streaming media and real time communication, multicore processing (more than 2) will be essential.

– The marketplace is likely to branch into application-specific pads for areas such as health care, engineering, and environmental monitoring. Much of the differentiation will be software, but hardware will play a role.

– Power consumption will continue to be a major problem and opportunity.

– There will likely be the need to include more radios.

The EEherald article concludes by pointing out that the odds of any one company getting its chips inside the Apple iPad are remote, but with so many companies preparing to leap into the Android pad market, this is a huge emerging opportunity.

I am looking forward to the burst of innovation that Apple has spawned.


TSMC 2011 Technology Symposium Theme Explained
by Daniel Nenni on 03-09-2011 at 6:49 pm

The 17th Annual TSMC Technology Symposium will be held in San Jose, California on April 5th, 2011. Dr. Morris Chang will again be the keynote speaker. The theme this year is “Trusted Technology and Capacity Provider” and I think it’s important to not only hear what people are saying but also understand why they are saying it, so that is what this blog is all about.

You can bet TSMC spent a lot of time on this theme, crafting every word. When dealing with TSMC you have to factor in the Taiwanese culture, which is quite humble and reserved. Add in the recent semiconductor industry developments that I have been tweeting, and I offer you an Americanized translation of “Trusted Technology and Capacity Provider”: the phrase made famous by the legendary rock band Queen, “We are the Champions!”

DanielNenni #TSMC said to make 40nm Chipset for #INTEL’s Ivy Bridge CPU:
http://tinyurl.com/46qk89b March 5

DanielNenni #AMD contracts #TSMC to make another CPU Product:
http://tinyurl.com/4lel5zy March 2

DanielNenni #Apple moves #SAMSUNG designs to #TSMC:
http://tinyurl.com/64ofq67 February 15

DanielNenni #TSMC 2011 capacity 2 rise 20%
http://tinyurl.com/4j5v6qt February 15

DanielNenni #SAMSUNG orders 1M #NVIDIA #TEGRA2 (#TSMC) chips:
http://tinyurl.com/4aa2xo6 February 15

DanielNenni #TSMC and #NVIDIA ship one-billionth GPU:
http://tinyurl.com/4juzdvd January 13

TRUST in the semiconductor industry is something you earn by telling people what you are going to do and then doing it. After inventing and leading the pure-play foundry business for the past 21 years, you have to give them this one: TSMC is the most trusted semiconductor foundry in the world today.

TECHNOLOGY today is 40nm, 28nm, and 20nm geometries. Being first to a semiconductor process node is a technical challenge, but also a valuable learning experience, and there is no substitute for experience. TSMC is the semiconductor foundry technology leader.

CAPACITY is manufacturing efficiency. Capacity is yield. Capacity is the ability to ship massive quantities of wafers. From MiniFab to MegaFab to GigaFab, TSMC is first in semiconductor foundry capacity.

Now that you have read the blog, let me tell you why I wrote it. The semiconductor foundry business is highly competitive, which breeds innovation. Innovation is good, I like innovation, and I would like to see more of it. Other foundries, take note: the foundry business is all about TRUST, TECHNOLOGY, and CAPACITY.

Right now I’m sitting across from Tom Quan, TSMC Design Methodology & Service Marketing, in the EVA Airways executive lounge. Both Tom and I are flying back from Taiwan to San Jose early to participate in tomorrow’s Design Technology Forum. Tom is giving a presentation on “Successful Mixed Signal Design on Advanced Nodes”. I will be moderating a panel on “Enabling True Collaboration Across the Ecosystem to Deliver Maximum Innovation”. I hope to see you there.


Essential signal data and Siloti
by Paul McLellan on 03-05-2011 at 3:24 pm

One of the challenges with verifying today’s large chips is deciding which signals to record during simulation so that you can work out the root cause when you detect something anomalous in the results. If you record too few signals, you risk having to re-run the entire simulation because you failed to record a signal that turns out to be important. If you record too many, or simply record all the signals to be on the safe side, the simulation time can get prohibitively long. In either case, re-running the simulation or running it very slowly, the time taken for verification increases unacceptably.

The solution to this paradox is to record the minimal essential set of data needed from the logic simulation to achieve full visibility. This guarantees that it will not be necessary to re-run the simulation, while avoiding any unnecessary slowdown. A trivial example: it is obviously not necessary to record both a signal and its inverse, since one value can easily be re-created from the other. Working out which signals form the essential set is not feasible to do by hand for anything except the smallest designs (where it doesn’t really matter, since the overhead of recording everything is not high).

SpringSoft’s Siloti automates this process, minimizing simulation overhead while ensuring that the necessary data is available for the Verdi system for debug and analysis.

There are two basic parts to the Siloti process. Before simulation there is a visibility analysis phase in which the essential signals are determined. These are the signals that will be recorded during the simulation. Then, after the simulation, there is a data expansion phase in which the values of the signals that were not recorded are calculated. This is done on demand, so values are only determined if and when they are needed. Additionally, the recorded signals can be used to re-initialize a simulation and re-run any particular simulation window without requiring the whole simulation to be restarted from the beginning.
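
To make the data-expansion idea concrete, here is a minimal Python sketch (purely illustrative, not SpringSoft’s algorithm; the toy netlist and signal names are invented). Only the essential signals are stored; any other signal’s value is recomputed from the netlist when a debug query asks for it:

```python
# Toy combinational netlist: signal -> (function, input signal names)
NETLIST = {
    "n1":  (lambda a, b: a & b, ("in_a", "in_b")),
    "n2":  (lambda a: a ^ 1,    ("n1",)),            # inverter
    "out": (lambda a, b: a | b, ("n2", "in_c")),
}

# Values recorded during simulation, per time step, for the essential
# signals only (here: just the primary inputs).
RECORDED = {
    0: {"in_a": 1, "in_b": 0, "in_c": 0},
    1: {"in_a": 1, "in_b": 1, "in_c": 0},
}

def expand(signal, time, cache=None):
    """Return the value of `signal` at `time`, recomputing it from the
    recorded essential signals if it was never dumped."""
    if cache is None:
        cache = dict(RECORDED[time])   # start from the recorded values
    if signal in cache:
        return cache[signal]
    func, inputs = NETLIST[signal]
    value = func(*(expand(s, time, cache) for s in inputs))
    cache[signal] = value              # memoize for later queries
    return value

if __name__ == "__main__":
    print(expand("out", 0))   # n1=0, n2=1, out=1
    print(expand("out", 1))   # n1=1, n2=0, out=0
```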

Using this approach results in dump file sizes of around 25% of what results from recording all signals, and simulation turnaround times around 20% of the original. With verification taking up to 80% of design effort, these are big savings.



Mentor Graphics 1 : Carl Icahn 0!
by Daniel Nenni on 03-04-2011 at 10:03 pm

This is just another blog about Carl Icahn and his quest to conquer EDA, when in fact EDA is conquering him. It includes highlights from my dinner with Mentor Graphics and physicist Brian Greene, the Mentor Q4 conference call, and meeting Mentor CEO Wally Rhines at DvCon 2011.

It wasn’t just the free food this time; dinner with Brian Greene was a big enough draw to lure me to downtown San Jose on a school night. Brian has his own Wikipedia page, so you know he is a big deal. Mentor executives and their top customers would also be there, and I really wanted to hear “open bar” discussions about the Carl Icahn saga. Unfortunately the Mentor executives did not show, and nobody really knew what was going on with Carl, nor did they care (except for me), so the night was a bust in that regard.

The dinner, however, was excellent, and the “who’s who” of the semiconductor world did show up, so it was well worth my time. Brian Greene’s lecture on parallel universes versus a finite universe was riveting. According to string theory there is nothing in modern-day physics that precludes a “multiverse”, which totally supports the subplot of the Men in Black movie.

The excuse used for the Mentor executives not coming to dinner was the Q4 conference call the next day. Turns out it was a very good excuse! No way could Wally sit through dinner with a straight face with these numbers in his pocket:

Walden Rhines: Well, Mentor’s Q4 ’11 and fiscal year set all-time records in almost every category. Bookings in the fourth quarter grew 45% and for the year, 30%. Revenue grew 30% in the fourth quarter and 14% for the year, the fastest growth rates of the big three EDA companies. Established customers continue to purchase more Mentor products than ever, with growth in the annualized run rate of our 10 largest contract renewals at 30%.

The transcript for the call is HERE. It’s a good read so I won’t ruin it for you but it definitely rained on Carl Icahn’s $17 per share offer. Carl would be lucky to buy Mentor Graphics at $20 per share now! After careful thought, after thorough investigation, after numerous debates, I have come to the conclusion that Carl has no big exit here. If someone reading this sees a big exit for Carl please comment because I just don’t see it.

Wally Rhines did make it to DvCon and he was all smiles, business as usual. His keynote “From Volume to Velocity” was right on target! Wally did 73 slides in 53 minutes; here are my favorites:

This is conservative compared to what I see with early access customers. Verification is a huge bottleneck during process ramping.

The mobile semiconductor market is driving the new process nodes for sure. Expect that to continue for years to come.

The SoC revolution is driving gate count. One chip does all and IP integration is key!

Purchased IP will continue to grow so gentlemen start your IP companies!

Sad but true.

Here is the bottom line, Mr. Icahn: Verification and test are not only the largest EDA growth applications, they are also very “sticky”. Mentor Graphics owns verification and test, so Mentor Graphics is not going anywhere but up in the EDA rankings. If you want to buy Mentor Graphics you will have to dig deep ($20+ per share).

Related blogs:
Personal Message to Carl Icahn RE: MENT
Mentor Acquires Magma?
Mentor – Cadence Merger and the Federal Trade Commission
Mentor Graphics Should Be Acquired or Sold: Carl Icahn
Mentor Graphics Should Be Acquired or Sold: Carl Icahn COUNTERPOINT


Clock Domain Crossing, a potted history
by Paul McLellan on 03-03-2011 at 11:23 am

Yesterday I talked to Shaker Sarwary, the senior product director for Atrenta’s clock-domain crossing (CDC) product SpyGlass-CDC. I asked him how it came about. The product was originally started nearly 8 years ago, around the time Atrenta itself got going. Shaker got involved about 5 years ago.

Originally this was a small, insignificant area of timing analysis. Back then few chips were failing from CDC problems, for two reasons. First, chips had few clock domains (many had only one, so CDC problems were impossible) and second, the chips were not that large. So CDC analysis was done by running static timing (typically PrimeTime, of course), which would flag the CDC paths as areas ignored by timing analysis. They could then be checked manually to make sure that they were correctly synchronized.

But like so many areas of EDA, a few process generations later the numbers all moved. The number of clocks soared, the size of chips soared, and more and more chips were failing due to CDC problems. To make things worse, CDC failures are typically intermittent: for example, a glitch that occasionally gets through a synchronizer. But there were no tools to deal with this issue in any sort of automated way.

Atrenta started by creating a tool that could extract the CDC paths and look for rudimentary synchronizers such as double flops (to guard against metastability). This structural analysis became more and more sophisticated, looking for FIFOs, handshakes and other approaches to synchronizing across clock domain boundaries.
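
As a toy illustration of what such a structural check does (this is not SpyGlass’s implementation; the netlist representation and the safety heuristic below are invented for the example), the sketch walks flop-to-flop connections, finds those that cross clock domains, and flags destinations that are not the first stage of a double-flop synchronizer:

```python
# Toy structural CDC check (illustrative only): flag clock-domain
# crossings whose destination is not the first stage of a double-flop
# synchronizer.  The netlist representation here is invented.

# flop name -> (clock domain, names of the loads it drives)
FLOPS = {
    "tx_reg": ("clk_a", ["sync1", "bad_rx"]),
    "sync1":  ("clk_b", ["sync2"]),      # stage 1 of a double flop
    "sync2":  ("clk_b", ["rx_comb"]),    # stage 2, feeds logic
    "bad_rx": ("clk_b", ["rx_comb"]),    # lone flop feeding logic directly
}

def looks_synchronized(dst):
    """Heuristic: the receiving flop drives exactly one load, and that
    load is another flop in the same clock domain (i.e. this flop is
    stage 1 of a double-flop synchronizer)."""
    domain, fanout = FLOPS[dst]
    return (len(fanout) == 1
            and fanout[0] in FLOPS
            and FLOPS[fanout[0]][0] == domain)

def check_cdc(flops):
    violations = []
    for src, (src_dom, fanout) in flops.items():
        for dst in fanout:
            if dst not in flops:
                continue                 # combinational load, not a flop
            if flops[dst][0] != src_dom and not looks_synchronized(dst):
                violations.append((src, dst, src_dom, flops[dst][0]))
    return violations

if __name__ == "__main__":
    for src, dst, d1, d2 in check_cdc(FLOPS):
        print(f"unsynchronized crossing: {src} ({d1}) -> {dst} ({d2})")
    # expected: unsynchronized crossing: tx_reg (clk_a) -> bad_rx (clk_b)
```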

Eventually this purely structural approach alone was not enough and a functional approach needed to be added. This uses Atrenta’s static formal verification engine to check that various properties remain true under all circumstances. For example, consider the simple case of a data bus crossing a clock domain along with a control signal; for this to be safe, the data-bus signals must be stable when the control signal indicates the data is ready. Or consider using a FIFO to create some slack between the two domains, so that data can be repeatedly generated by the transmitter domain and stored until the receiver domain can accept it. The FIFO pointers need to be Gray coded, so that only one signal changes when a pointer is incremented or decremented (a normal binary counter generates all sorts of intermediate values as carries propagate), and again, proving this cannot be done by structural analysis alone.
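
As an aside on why Gray coding matters here, the short Python sketch below (generic Gray-code arithmetic on 3-bit pointers, not any particular tool’s check) verifies that consecutive pointer values differ in exactly one bit, so a receiver that samples mid-transition sees either the old or the new pointer value, never a bogus intermediate one:

```python
# Why FIFO pointers that cross clock domains are Gray coded: consecutive
# values differ in exactly one bit, so a receiver sampling mid-transition
# sees either the old or the new value, never a spurious intermediate one.

def bin_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_bin(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

if __name__ == "__main__":
    for i in range(8):                                  # 3-bit pointer
        g_now, g_next = bin_to_gray(i), bin_to_gray((i + 1) % 8)
        changed_bits = bin(g_now ^ g_next).count("1")
        assert changed_bits == 1                        # single-bit change
        assert gray_to_bin(g_now) == i                  # round trip
    print("all 3-bit pointer transitions change exactly one bit")
```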

When CDC errors escape into the wild it can be very hard to determine what is going on. One company in the US, for example, had a multi-million gate chip connecting via USB ports. It would work most of the time but freeze every couple of hours. It took 3 months of work to narrow it down to the serial interfaces. After a further long investigation it turned out that it was a CDC problem generating intermittent glitches. There were synchronizers, but glitches must either be gated off (for data) or not generated (for control signals).

Another, even more subtle, case was a company in Europe with an intermittent problem that, again, took months to analyze. It turned out that the RTL was safe (properly synchronized) but the synthesis tool had modified the synchronizer, replacing a mux (glitch-free) with a complex AOI gate (not glitch-free).

In a big chip, which may have millions of CDC paths, the only approach is to automate the guarantees of correctness. If not, you are doomed to spend months working out what is wrong instead of ramping your design to volume. More information on SpyGlass-CDC here.


All you want to know about MIPI IP… don’t be shy, just ask IPnest
by Eric Esteve on 03-03-2011 at 10:14 am

According to the MIPI Alliance, 2009 was the year the MIPI specifications were developed, 2010 was dedicated to the marketing and communication effort to popularize MIPI technologies within the semiconductor industry, and 2011 is expected to see MIPI deployment in the mass market, at least in the wireless (handset) segment.
MIPI is an interconnect protocol offering several key advantages: strong modularity, allowing power to be minimized while still reaching high bandwidth when necessary (6 Gbps, 12 Gbps…), and interoperability between ICs coming from different sources: camera controllers (CMOS image sensors), display controllers, RF modems, audio codecs and an application processor. It takes advantage of the massive move from parallel to serial interconnect we have already seen, illustrated by PCI Express replacing PCI, SATA replacing PATA and so on, to bring similar technologies to mobile devices, where low power is key, along with the need for higher and higher bandwidth.

Even if MIPI as an IP market is still in its infancy, we expect it to grow strongly in 2011, at least in the wireless handset segment. We have started to build a forecast for MIPI-powered ICs in production for 2010-2015.

The number of ICs in production is a key factor pushing MIPI adoption as a technology: test issues have been solved, there is no impact on yield and, last but not least, IC pricing is going down, benefiting from the huge production quantities (almost 1 billion ICs in 2010) generated by the wireless handset segment. This is true for the application processors (OMAP5 and OMAP4 from TI, Tegra 2 from NVIDIA, U9500 from ST-Ericsson…), which can be reused for other mobile electronic devices, and also for the peripheral ASSPs supporting camera, display, audio codec or modem applications. These MIPI-powered, price-optimized ASSPs can also be used in segments like PC (media tablets) and for mobile devices in consumer electronics.

Tier 2 chip makers playing in the wireless segment could minimize their investment in this new technology (which calls for an AMS design team) and gain time to market by sourcing MIPI IP externally if they want to enter the lucrative high-end smartphone segment, which grew by 71% in 2010. For the same reason, MIPI pervasion in the above-mentioned new segments would then be eased by the availability of these ASSPs and by off-the-shelf MIPI IP. Finally, the pervasion of MIPI should also occur in mid-range wireless handsets (feature phones). The overall result for the 2010-2015 IP sales forecast is shown here:


Take a look at the “MIPI IP survey” from IPnest
See: http://www.ip-nest.com/index.php?page=MIPI
It gives information about:
– MIPI powered IC forecast 2010-2015
– IP market analysis: IP vendor competitive analysis, market trends
– MIPI IP 2010-2015 forecast by market segment:
  Wireless handset
  Portable systems in consumer electronics
  PC notebook & netbook, media tablets
– License price on the open market for D-PHY & M-PHY IP, by technology node
– Penetration rate by segment for 2010-2015 of MIPI PHY and Controller IP
– ASIC/ASSP worldwide design starts, and related IP sales from 2010 to 2015
From Eric Esteve
eric.esteve@ip-nest.com

(**) MIPI is a bi-directional, high-speed, serial, differential signaling protocol, optimized for power consumption and dedicated to mobile devices; it is used to interface chips within the system at the board level. It uses a controller (digital) and a mixed-signal PHY. There are numerous specifications (DSI, CSI, DigRF, LLI…), but all of them rely on one of the two PHY specifications (D-PHY and M-PHY). M-PHY supports different modes of operation:

  • LP (low power) mode, covering roughly 10 Kb/s to 600 Mb/s
  • HS (high speed) mode, at data rates of 1.25 to 6 Gb/s

The protocol is scalable, so you can implement one or more lanes in each direction. See: http://www.mipi.org/momentum
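
As a rough illustration of that lane scalability, the small Python sketch below estimates aggregate payload bandwidth for a few lane counts and HS line rates; it assumes 8b/10b line coding in HS mode and ignores any protocol overhead above the PHY, so the numbers are only indicative:

```python
# Rough, illustrative throughput estimate for a multi-lane M-PHY link.
# Assumes 8b/10b line coding in HS mode (10 line bits per payload byte)
# and ignores protocol overhead above the PHY layer.

def mphy_payload_gbps(line_rate_gbps: float, lanes: int) -> float:
    coding_efficiency = 8 / 10            # 8b/10b
    return line_rate_gbps * coding_efficiency * lanes

if __name__ == "__main__":
    # Example per-lane line rates within the HS range quoted above (Gb/s)
    for rate in (1.25, 2.5, 5.0):
        for lanes in (1, 2, 4):
            print(f"{rate:>4} Gb/s x {lanes} lane(s): "
                  f"{mphy_payload_gbps(rate, lanes):.1f} Gb/s payload")
```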


Semiconductor Power Crisis and TSMC!
by Daniel Nenni on 03-02-2011 at 8:48 pm

Power grids all over the world are already overloaded, even without the slew of new electronic gadgets and cars coming out this year. At ISSCC, Dr. Jack Sun, TSMC Vice President of R&D and Chief Technology Officer, made the comparison of a human brain to the closest thing available in silicon, a graphics processing unit (GPU).

Dr. Sun is talking about the NVIDIA GPU, I believe, as it is the largest 40nm die to come out of TSMC. The human brain has more than 100 billion neurocells (cells of the nervous system). Those 100 billion neurocells consume 20 watts of power, versus 200 watts for an equivalent number of transistors in silicon. The bottom line is that semiconductor technology is severely power constrained, and he suggests that we must learn from nature and look at technology from the bottom up.

“New transistor designs are part of the answer,” said Dr. Jack Sun. Options include a design called FinFET, which uses multiple gates on each transistor, and another design called the junctionless transistor. “Researchers have made great progress with FinFET, and TSMC hopes it can be used for the next generation of CMOS — the industry’s standard silicon manufacturing process,” Sun said.

According to Wikipedia:
The term FinFET was coined by University of California, Berkeley researchers (Profs. Chenming Hu, Tsu-Jae King-Liu and Jeffrey Bokor) to describe a nonplanar, double-gate transistor built on an SOI substrate,[5] based on the earlier DELTA (single-gate) transistor design.[6] The distinguishing characteristic of the FinFET is that the conducting channel is wrapped by a thin silicon “fin”, which forms the body of the device. The thickness of the fin (measured in the direction from source to drain) determines the effective channel length of the device.

So now you know as much about a FinFET as I do.

After the panel I had a conversation with Dr. Sun about power and about looking at it top down from the system level. TSMC is already actively working with ESL and RTL companies (Atrenta) to do just that. Designers can optimize for power consumption and efficiency early in the design cycle at the register transfer level (RTL). Using commercially available tools, information about power consumption is available at RTL and can provide guidance on where power can be reduced, in addition to detecting and automatically fixing key power management issues. Finding and fixing these types of problems later in the design cycle, during simulation or verification, can be costly and puts at risk finishing the design project on time and within its predetermined specifications.
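
As a back-of-the-envelope illustration of the kind of estimate such RTL power analysis builds on, here is a Python sketch using the classic dynamic-power formula P = α·C·V²·f; the block names, capacitances and activity factors are invented, and real tools of course do far more than this:

```python
# Back-of-the-envelope RTL power estimate using the classic dynamic
# power formula P = alpha * C * V^2 * f.  Block names, capacitances and
# activity factors below are made up for illustration only.

BLOCKS = [
    # name,       switched cap (F), activity factor, clock (Hz)
    ("cpu_core",     2.0e-9,          0.15,           1.0e9),
    ("dsp",          1.2e-9,          0.05,           5.0e8),
    ("idle_uart",    0.1e-9,          0.001,          1.0e8),
]
VDD = 0.9  # supply voltage in volts (assumed)

def dynamic_power(cap, activity, freq, vdd=VDD):
    """Dynamic switching power in watts for one block."""
    return activity * cap * vdd * vdd * freq

if __name__ == "__main__":
    for name, cap, act, freq in BLOCKS:
        p_mw = dynamic_power(cap, act, freq) * 1e3
        hint = "  <- clock-gating candidate" if act < 0.01 else ""
        print(f"{name:10s} {p_mw:8.3f} mW{hint}")
```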

TSMC’s current Reference Flow 11.0 was the first generation to host an ESL/RTL-based design methodology. It includes virtual platform prototyping built on TSMC’s proprietary performance, power, and area (PPA) model that evaluates the PPA impact of different system architectures. The ESL design flow also supports high-level synthesis (HLS) and ESL-to-RTL verification. TSMC also expanded its IP Alliance to include RTL-based (soft) IP with Atrenta and others. Atrenta is known for the SpyGlass product, which is the de facto standard for RTL linting (analysis). If you do a little digging on the Atrenta site you will find the Atrenta GuideWare page for more detailed information.

But of course TSMC can always do more to save power and they will.


With EUVL, Expect No Holiday
by Beth Martin on 03-02-2011 at 1:12 pm

For a brief time in the 1990s, when 4X magnification steppers suddenly made mask features 4X larger, there was a period in the industry referred to as the “mask vendor’s holiday.” The party ended before it got started with the arrival of sub-wavelength lithography, and we all trudged back to the OPC/RET mines. Since then, the demands for pattern correction have grown ever more onerous. But at long last, there may be an extreme ultraviolet light at the end of the lithography tunnel.

Extreme ultraviolet lithography (EUVL) is a strong candidate for 15nm node and below. While the 13.5nm EUV wavelength seems to imply a bit of an OPC vendor’s holiday, it’s shaping up to be quite the opposite situation. EUVL is introducing two major pattern-distorting effects at a magnitude never before seen: flare and mask shadowing. These effects combined create pattern distortions of several nanometers, and they vary in magnitude across the reticle in a complex yet predictable way. So, unpack your bags, there’s no holiday with EUVL.

EUV Lithography Patterning Distortions
There are several technical challenges for EUVL, but I’ll focus on just one: correcting pattern distortions unique to EUVL, specifically flare and mask shadowing.

Flare is incoherent light produced by scattering from imperfections in the optical system, and it can degrade image contrast and worsen CD control. Optical scanners suffer little from flare, but EUV scanners have flare levels of around 10% today, likely declining to around 5% in a few years. The magnitude of flare is driven by the roughness of the mirror surfaces in the scanner, the wavelength (flare is proportional to the square of the wavelength), and the local pattern density on the mask (clear, open, low-density regions suffer worse from flare).
The level of flare, and its impact on CD control, depends on local and global pattern density, which varies widely within a chip. A 1% change in the level of flare within a chip produces more than a 1nm change in printed resist dimension.

Compensating for Flare Effect with traditional rules-based OPC is not accurate enough for EUVL, so the EDA industry is developing model-based flare compensation. The biggest challenge in incorporating flare into OPC software is simulating the very long range of the flare effect within reasonable runtimes. New simulation engines and models designed to handle these long interaction distances have largely addressed the runtime issues. It is now possible to simulate an entire 26x33mm reticle layout with >15mm diameter flare models in about an hour.
Once a flare map is generated, we need to use this information to mitigate the flare’s impact on printing. This can be accomplished within existing model-based OPC software with minor enhancements to how the aerial image is computed. A simple formula, in which the image intensity is scaled by the total integrated scatter (TIS) and the local flare intensity is added to the image, should be an effective way of adding flare to the OPC simulator. This method is currently being validated with actual wafer data.
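
A minimal NumPy sketch of that correction scheme (illustrative only; the Gaussian stand-in for the scatter kernel, the TIS value, and the random density and image maps are assumptions, not production models) might look like this:

```python
# Sketch of the flare correction described above (illustrative only):
# the aerial image is scaled by (1 - TIS) and a long-range flare term,
# obtained by convolving local pattern density with a scatter kernel,
# is added back in.
import numpy as np
from scipy.signal import fftconvolve

TIS = 0.10                      # total integrated scatter (~10% today)

def flare_map(density, kernel):
    """Long-range flare: pattern density convolved with a normalized
    scatter point-spread function, scaled by the TIS."""
    kernel = kernel / kernel.sum()
    return TIS * fftconvolve(density, kernel, mode="same")

def image_with_flare(aerial_image, density, kernel):
    return (1.0 - TIS) * aerial_image + flare_map(density, kernel)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    density = rng.random((256, 256))     # stand-in mask density map
    aerial = rng.random((256, 256))      # stand-in aerial image
    # crude long-range Gaussian stand-in for the scatter PSF
    x = np.arange(-64, 65)
    g = np.exp(-(x / 32.0) ** 2)
    kernel = np.outer(g, g)
    corrected = image_with_flare(aerial, density, kernel)
    print(corrected.shape, float(corrected.mean()))
```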

Mask Shadowing is truly unique to EUV lithography. Because everything (all solids, liquids, and gases, anyway) absorbs the 13.5nm EUV wavelength, EUV exposure tools must use mirrors for all optical elements and for the mask. Unlike the telecentric refractive projection lens systems used in optical lithography, EUV systems rely on off-axis mask illumination. Currently, the angle of incidence is 6° from normal. Even more problematic is the complication of the azimuthal angle. This angle varies across the scanner’s illumination slit and ranges from 0° to +/- 22° [2].
Off-axis mask illumination wouldn’t be a problem if we lived in a perfect world where the mask has no topography and is infinitely thin. But unfortunately, the mask absorber has thickness, and the mask reflector is a distributed Bragg reflector composed of alternating thin-film layers. Thus, off-axis mask illumination results in an unevenly shadowed reflection, depending on the orientation of the mask shapes and the angle of incidence. This is shown in the figure below.

Figure 1: Mask topography and azimuthal angles for the left (a), center (b), and right (c) regions of the scan slit. Detailed view (d) showing how the mask in the right region of the scan slit will shadow both incoming light from the source and light reflected from the mask substrate.

The result is an orientation-dependent pattern bias and shift, which is also a function of the location within the scanner slit. Current estimates of the magnitude of this effect range from 1-2nm for longer lines to >5nm for small 2D shapes like contact and via holes. Variations in wafer CDs of this magnitude are obviously unacceptable.

Compensating for Mask Shadowing Effect requires new 3D mask topography models. Existing TCAD lithography simulators can already accurately predict the effects but cannot be directly adapted for use in OPC due to their very long simulation times. Existing OPC mask models, on the other hand, were designed to be very, very fast under certain assumptions: a thin mask with no topography (the Kirchhoff approximation) and normal illumination incidence at the mask plane. Neither assumption is valid in EUV exposure tools. The EDA industry has already introduced fast 3D mask topography models capable of simulating an entire chip with a minor runtime penalty and with an accuracy approaching that of TCAD simulators. New OPC models that correctly handle off-axis incidence at the mask plane are becoming available.

However, we won’t be able to accurately model the mask shadowing effects until the software can handle the fact that the angle of illumination incidence is now a function of the location within the scanner field, as shown in Figure 1. The best solution now may be a hybrid of rule-based and model-based mask shadow compensation: models to handle the field-location-invariant component, with rules to compensate for the field-dependent component.
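
Purely as a toy sketch of that hybrid idea (the bias values, zone boundaries, and function names below are invented for illustration, not taken from any production recipe), a small rule table keyed by feature orientation and slit position can sit on top of a field-invariant model term:

```python
# Toy sketch of hybrid mask-shadowing compensation (all numbers invented):
# a model-based correction handles the field-invariant component, and a
# rule table adds a bias that depends on feature orientation and on the
# feature's position within the scanner slit.

# (orientation, slit zone) -> extra edge bias in nm (illustrative values)
SHADOW_RULES = {
    ("horizontal", "left"):   0.6,
    ("horizontal", "center"): 1.0,
    ("horizontal", "right"):  0.6,
    ("vertical",   "left"):  -0.4,
    ("vertical",   "center"): 0.0,
    ("vertical",   "right"):  0.4,
}

def slit_zone(x_mm: float, slit_width_mm: float = 26.0) -> str:
    """Classify a feature's x position within the exposure slit."""
    third = slit_width_mm / 3.0
    if x_mm < third:
        return "left"
    return "center" if x_mm < 2 * third else "right"

def shadow_bias(orientation: str, x_mm: float,
                model_bias_nm: float) -> float:
    """Total edge bias = field-invariant model term + rule-based term."""
    return model_bias_nm + SHADOW_RULES[(orientation, slit_zone(x_mm))]

if __name__ == "__main__":
    print(shadow_bias("horizontal", 2.0, model_bias_nm=1.5))   # left zone
    print(shadow_bias("vertical", 20.0, model_bias_nm=1.5))    # right zone
```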

Conclusion
The upcoming availability of EUV scanners will enable further technology scaling. Initially, the features being imaged by EUV scanners will be larger than the 13.5nm illumination wavelength, and this would seem to provide some breathing room to the over-worked OPC engineers of the world. But as we’ve demonstrated here, the unique illumination source, reflective optics, and reflective masks are conspiring to ensure continued full employment amongst these engineers. So I think we can safely predict there will be no OPC engineer’s holiday for the foreseeable future.

–James Word, OPC product manager at Mentor Graphics.


Semiconductor Design Flows: Paranoia or Prudence
by Steve Moran on 02-28-2011 at 11:34 am

I was recently talking to a friend who works for a semiconductor company (I can’t tell you which one or how big it is. I am not even sure I can tell you that he/she works for a semiconductor company.) I was describing Semiwiki.com to him/her, and this person thought it was a great concept but wondered how much of the site would be focused on custom design. He/she wished there were more open discussion about the techniques and tools used in custom design. Here is the problem as he/she described it:

1. There are relatively few companies that do custom IC design. As a result, compared to the number of ASIC design teams, there are only a small number of custom design teams, which in turn means there are relatively few custom designers.

2. Custom designers and design teams typically work on the most important, most secretive, most costly and potentially the most profitable parts of a chip or system, in other words the crown jewels of the company.

3. In an effort to protect these highly competitive pieces of technology, almost universally, the companies that do custom designs have a complete embargo on sharing any information about their designs and even their design flows.

4. Typically these information embargoes are so restrictive that even mentioning that the company uses a given tool or technique becomes a firing offense.

I find myself thinking this is secrecy gone amok.

There is no question that this technology area is fiercely competitive, that the stakes run into billions of dollars, and that being first to market with something like an iPhone or iPad can be worth billions and needs to be protected. Yet the question still remains: do those companies really gain when they refuse to share basic information about the nuts and bolts of their design flows, their design bottlenecks and their design tools?

I would argue that the competitive advantage comes not from these flow “secrets” but rather from pure differentiated technology, which is a combination of hardware, software and concept or idea, plus brilliant demand-creation marketing. What ends up happening is that these companies waste millions of dollars recreating and reinventing the basics. Every time a fundamental, foundational problem comes up, each company spends hundreds to thousands of man-hours figuring out how to solve what is really a basic problem that ultimately everyone solves in one fashion or another. And each company evaluates multiple tools that solve the same problem without a clue as to whether they have worked or failed for other companies.

What I find so puzzling is that, given the enormous expense of building up a flow, evaluating and selecting tools, the direct and indirect engineering costs, plus the inherent risk of producing a new product or device, there is so little interest in sharing the basics. Do I have this right or wrong?


Will Thunderbolt kill SuperSpeed USB?
by Eric Esteve on 02-28-2011 at 11:09 am

And probably more protocols, like Firewire, eSATA, HDMI…
Some of you probably know it as Light Peak, the code name for the Intel proprietary interface just announced (last week) as Thunderbolt, which supports copper (or optical) interconnect where Light Peak was optical only. This will make Thunderbolt easier to implement, and more dangerous for USB 3.0 and the other interfaces it competes with.


Thunderbolt is a high-speed, differential, bi-directional serial interface offering 10 Gbps per port and per direction. Each Thunderbolt port on a computer is capable of providing the full bandwidth of the link in both directions, with no sharing of bandwidth between ports or between the upstream and downstream directions. See: http://www.intel.com/technology/io/thunderbolt/index.htm
Intel will provide the technology (protocol) and the silicon, the Thunderbolt controller. So far, Intel has provided the protocol controller on the host (PC) side, through the chipset, to support PCIe, USB, eSATA and so on, while other chip makers provided the controller on the peripheral side, sourcing the related protocol IP (PHY or controller) from IP vendors when necessary. Now, Intel will be present on both sides of the link.


The Thunderbolt controller natively supports two protocols: PCIe and DisplayPort. Intel claims that “Users can always connect to their other non-Thunderbolt products at the end of a daisy chain by using Thunderbolt technology adapters (e.g., to connect to native PCI Express devices like eSata, Firewire). These adapters can be easily built using a Thunderbolt controller with off-the-shelf PCI Express-to-“other technology” controllers.” What does this mean? That the BOM for a peripheral will increase: you add an Intel device to it, but you still need a PCI Express-to-other-technology controller (where there was a single controller before)!

What does this mean for the IP vendors? On the PC side, for mature technologies (like USB 2.0), they were not present, as the PC chipset supported the protocol (note that this was not true for USB 3.0, which Intel does not support for now). So, no big change. But on the peripheral side, the protocols previously supported will reduce to PCI Express (in terms of PHY IP sales) and to a PCIe-to-“other protocol” bridge on the controller side. In other words, it will probably kill USB 2.0, yet-to-come USB 3.0 and eSATA IP sales.

The other protocol natively supported by Thunderbolt is DisplayPort. Here the targeted competitor seems to be Silicon Image with HDMI. This makes sense on the PC side, as supporting only one connector will be cheaper. What is unclear is the willingness of Intel to be present in HDTV-type products. If that is the case, we can expect these products to support both HDMI and DisplayPort, at least until the market decides to reject one of them. If Thunderbolt finally wins in consumer electronics (which is far from certain), this will increase Intel’s chip sales and kill any IP sales for HDMI, USB 3.0… and DisplayPort in the consumer electronics segment.

Another big unknown is Thunderbolt penetration into the mobile internet device segments (smartphones, media tablets). So far, in the mobile handset segment, you find HDMI and USB 3.0 supported in high-end products. Thunderbolt could kick out these two, under two minimum conditions: controller price and power consumption. If you look at the application processor market for smartphones, you see that chip makers tend to integrate IP (internal or sourced from IP vendors), and letting Intel capture a part of the BOM is probably not their first priority!

From an IP market point of view, Thunderbolt does not look great, except maybe for the PCI Express PHY IP vendors, as well as the PCI Express-to-“other protocol” bridge controller IP developers, on the condition that this market is not cannibalized by off-the-shelf (ASSP) solutions. From an OEM point of view, this means that the PC peripheral manufacturers will have to pay a tribute to Intel (which they did not do before), as well as the consumer electronics and cell phone manufacturers. Will Thunderbolt be Intel’s Trojan horse into segments other than the PC?
Eric Esteve
www.ip-nest.com