

OTP based Analog Trimming and Calibration
by Eric Esteve on 03-01-2013 at 10:16 am

Embedded NVM-based functions can be implemented in large SoCs designed in advanced technology nodes down to 28nm, because unlike embedded NAND Flash they require no extra mask levels that would negatively impact the final cost. One Time Programmable (OTP) memory can also be integrated to store trim and calibration settings in an analog device, usually designed in a more mature technology node, so that the device powers up already calibrated for the system in which it is embedded. Variations in chip processing and packaging operations cause analog circuits and sensors to deviate from their target specifications. To optimize the performance of the systems in which these components are placed, it is necessary to “trim” the interface circuitry to match a specific analog circuit or sensor; a trimming operation compensates for the manufacturing variances of these components.
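As an illustration of that power-up calibration flow, here is a minimal firmware sketch, assuming a hypothetical memory-mapped OTP word and hypothetical analog trim registers; the addresses, field packing and names are illustrative only and are not Sidense's actual interface.

```cpp
// Hypothetical sketch: apply factory trim codes stored in OTP at power-up.
// Addresses, field layout and register names are illustrative only.
#include <cstdint>

constexpr uintptr_t OTP_TRIM_WORD = 0x40000000u;  // assumed OTP word address

struct TrimSettings {
    uint8_t vref_offset;   // bandgap reference offset code
    uint8_t osc_trim;      // RC oscillator frequency trim
    int8_t  adc_gain;      // signed ADC gain correction
};

// Read the packed 32-bit trim word burned during wafer/production test
// (or in the field) and unpack it into individual trim codes.
static TrimSettings read_trim_from_otp() {
    uint32_t w = *reinterpret_cast<volatile const uint32_t*>(OTP_TRIM_WORD);
    return TrimSettings{
        static_cast<uint8_t>(w & 0xFFu),
        static_cast<uint8_t>((w >> 8) & 0xFFu),
        static_cast<int8_t>((w >> 16) & 0xFFu),
    };
}

// Placeholder for writes to the device's own analog control registers.
static void write_reg(uintptr_t addr, uint32_t value) {
    *reinterpret_cast<volatile uint32_t*>(addr) = value;
}

// Called early in boot so the analog blocks start up already calibrated.
void apply_factory_trim() {
    const TrimSettings t = read_trim_from_otp();
    write_reg(0x40001000u, t.vref_offset);                       // VREF trim register (assumed)
    write_reg(0x40001004u, t.osc_trim);                          // oscillator trim register (assumed)
    write_reg(0x40001008u, static_cast<uint8_t>(t.adc_gain));    // ADC gain register (assumed)
}
```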

Sidense 1T-Fuse™ technology is based on a one-transistor non-volatile memory cell that does not rely on charge storage, rendering a secure cell that cannot be reverse engineered. The 1T-Fuse™ is smaller than any alternative NVM IP manufactured in a standard-logic CMOS process. The OTP can be programmed in the field or during wafer or production testing.

In fact, the trimming requirement becomes more important as process nodes shrink, because the variability of analog circuit performance parameters increases at smaller geometries due to both random and systematic variations in key manufacturing steps. This manifests itself as increasing yield loss when chips with analog circuitry migrate to smaller process nodes, since a larger percentage of analog blocks on a chip will not meet design specifications due to variability in process parameters and layout.

Examples where trimming is used include automotive and industrial sensors, display controllers, and power management circuits. If you look at the superb car at the top of the article, you realize that OTP technology can be implemented in several chips used to build “life critical” systems: brake calibration, tire-pressure monitoring, engine control, temperature sensing, or even steering calibration… The field-programmability of Sidense’s OTP allows these trim and calibration settings to be done in-situ in the system, thus optimizing the system’s operation. Other examples where automotive trimming and calibration operations occur include secure vehicle ID (VID) storage, in-car communications, and infotainment systems. The examples in the figure are for trimming and calibration of circuits such as analog amplifiers, ADCs/DACs and sensor conditioning. There are many other uses for OTP as well, both in automotive and in other market segments, including microcontrollers and PMICs.

The above picture is an SEM view of the 1-transistor OTP technology, illustrating some very interesting characteristics that help guarantee a high security level, which is pretty useful in the semiconductor industry today. In fact, the 1T-OTP bit-cell is very difficult to reverse engineer, as there is no visible difference between a programmed and an un-programmed bit. And, for applications requiring safe storage of secure keys, code and data, 1T-OTP macros incorporate other features for additional security, including a differential read mode (no power signature), and probably a bunch of features that should be discussed face to face with Sidense!

A wide range of 1T-OTP macros are now available in many variants at process nodes from 180nm down to 28nm, and the technology has been successfully tested in 20nm. The company’s focus looking ahead is on maintaining a leadership position with NVM at advanced process nodes and solutions focused on customer requirements in the major market segments, including mobile and handheld devices, automotive, industrial control, consumer entertainment, and wired and wireless communications.

Eric Esteve from IPNEST



When the lines on the roadmap get closer together
by Don Dingee on 02-28-2013 at 12:53 pm

Tech aficionados love roadmaps. The confidence a roadmap instills – whether using tangible evidence or just a good story – can be priceless. Decisions on “the next big thing”, sometimes years and a lot of uncertain advancements away, hinge on the ability of a technology marketing team to define and communicate a roadmap.

Any roadmap has three fundamental pieces: reality, probability, and fantasy. The first two, taken together, are critical to success. A good reality is better, but even a relatively dismal current product situation can be overcome, if there is some credibility left, on the strength of the probability story in the middle. (I actually created and told a crappy reality but good probability roadmap story once this way: “We took a vacation. We’re back, and here’s what we’re doing based on what we heard customers say they wanted.” It was true; we were a new marketing team with experience, and we spent a lot of time with hundreds of customers on the listening part to get the next thing we said right.) Companies that fail to execute on the probability story – the absolute must-have for customers that have bought in – risk losing credibility fast.

If both the reality and probability stories and execution hold up, attention turns to the fantasy portion. A fantasy has a lot of components: difficult enough to be interesting, achievable enough to look believable, and dramatic enough to get people excited. The fantasy part of the roadmap evolves: if successful, it becomes the probability portion, with more definition and firmer timeframes, and it gets triumphantly replaced by a new and improved fantasy. If not successful, it gets replaced anyway with a different, hopefully improved vision.

We are seeing one of the bigger roadmap marketing efforts of our time right now, weaving a story around the progression from 28nm, to 22/20nm, to 14nm and beyond.

We know 28nm processes are relatively solid now, having endured most of the transition woes involved in getting any process technology to volume. We’ve been able to get a fairly good estimate of the limits of the technology, with a 3GHz ARM Cortex-A9 the consensus for the fastest core we’ll see at 28nm. Foundries are churning out parts, more and more IP is showing up, and things are going relatively well.

At the other end, the industry went giddy when Altera and Intel recently announced they will work together on 14nm. There is some basis in their earlier cooperation on “Stellarton”, a primitive attempt at an Atom core and some FPGA gates in a single package. The most definite thing in this new announcement is that Intel is looking to have a 14nm process up “sometime in 2014”, which is usually code for December 32nd, with some slack. In a best case scenario, we’d probably see an Altera part – sampled, count them, there’s one – about two years from right now.

Difficult? Yep. Billion dollar fab, new FinFETs, big chips. Achievable? Sure. If there is any way to prove out a new process, it is with memory or programmable logic, mostly uniform structures that can be replicated easily. Dramatic? A mix of people saying Altera is now ahead, Xilinx is suddenly behind, and Intel is completely transforming themselves into a leading foundry. Wow. We’ll leave the discussion on high-end FPGA volumes for another time.

What we should be discussing more is the probability story, and that lies in the area of 20nm. It is what people are actually facing in projects now, and there are some changes from the 28nm practices that are extremely important to understand. Cadence has released a new white paper “A Call to Action: How 20nm Will Change IC Design” discussing some of these ideas.


Among the changes Cadence identifies, the first is obvious: double patterning, with a good discussion of what to do about it. Another area of concern is the increasing amount of mixed-signal integration, something designers have tended to avoid. That factors into the third area, layout-dependent effects and better design rule checking. An interesting quote:

At 20nm up to 30% of device performance can be attributed to the layout “context,” that is, the neighborhood in which a device is placed.

The final discussion is on better modeling and concurrent PPA optimization, dealing with the disparities in IP blocks from many sources – 85 blocks in a typical SoC today, and growing – in the clock and power domains. This is a key part of Cadence’s approach to 28nm, and becomes even more important at 20nm and beyond.

Dealing with the probabilities will tell us more than any press release on what might be “the next big thing.” If you’re looking at what you’ll face in moving to 20nm, the Cadence white paper is a good introduction. What other design issues are you seeing in the transition from 28nm to 20nm? Am I too pessimistic on the 14nm story, or just realistic that there are a lot of difficult things to solve between here and there? Thoughts welcome.



TSMC (Lincoln) vs Samsung (Clinton) vs Intel (Washington)
by Daniel Nenni on 02-28-2013 at 9:00 am

Usually I sleep on long flights; if not, I watch movies and read. The Lincoln movie was playing on EVA Air this week, which reminded me that Abraham Lincoln was one of the greatest U.S. Presidents. If I were asked to pick a U.S. President as a spokesperson for TSMC it would be Honest Abe Lincoln. Chairman Morris Chang said it best during his keynote: “We do not screw customers!” Samsung, on the other hand, chose Bill Clinton for their CES keynote, which is also a good fit in my opinion (Clinton was impeached for lying and cheating but he is still a very popular President). For Intel I would choose George Washington, our founding father of microprocessors, and GLOBALFOUNDRIES would be Barack Obama.

Associating with an American President is certainly good business since the U.S. market is the largest and Western culture is often emulated. Unfortunately, honesty and decency are not always the top business priorities in competitive markets, so Abe Lincoln for a CES keynote would be a tough sell. Even with the incredible amount of intellectual property contained in semiconductors and consumer electronics, trust does not seem to be a prevailing factor.

Look at the Apple relationship with Foxconn. Despite horrible working conditions in its China factories that resulted in riots, suicides and many technology “leaks”, Apple still manufactures its all-American iProducts at Foxconn. Look at Apple’s relationship with Samsung. Despite never-ending legal actions, Apple is still Samsung’s number one customer even though Samsung is Apple’s number one competitor. According to the press, Apple is moving away from Samsung, but I’m not convinced it is a result of the quest for honesty and decency. From what I have learned, Apple is using second-source suppliers to negotiate better pricing from Samsung. The true test will be Apple’s 14nm SoC. Will Apple go back to Samsung or stay with TSMC? Apple will go back to Samsung, absolutely.

My experience at Avant! is another example. Even though Avant! was under indictment, customers still purchased its P&R tool because it gave the best results, making their semiconductor designs faster, cheaper and more competitive. Profits over honesty and decency once again.

Sometimes our friends become our enemies, and sometimes our enemies become our friends

It is the classic tale of the scorpion and the frog. A scorpion asks a frog to carry him across the water. The frog is afraid, but the scorpion assures the frog that stinging him would be bad for both of them. The frog agrees and starts carrying the scorpion across the water, but the scorpion stings the frog anyway because it is his nature. Profits are the nature of business, so don’t be too surprised if you get stung by one of your friends.

Even with my bias against Samsung, my wife has informed me that we will be buying a new Samsung washer and dryer this year. She saw them at CES and they are clearly the smartest and best value, so honesty, decency, and my personal bias will not be a factor in the purchase decision. Unless, of course, I want to do the mountain of laundry my kids and I generate every week, and I certainly do not. Laundry over ethics for sure.



Wally Rhines: Name That Graph!
by Paul McLellan on 02-27-2013 at 4:04 pm

Wally Rhines gave the keynote at DVCon yesterday. He started out with a game of “name that graph” which was unfortunately a bit spoiled since when the names were revealed the first line was off the top of the screen. But he extrapolated several trends such as the decreasing number of fabs (the current trend is that there won’t be any left by 7nm) or the number of EDA companies left for Synopsys to acquire. Or that in a few more years, designers will spend 100% of their time on verification.

But then he had some more serious graphs. Mentor had run a third-party survey examining trends in verification. This survey has been done under various auspices several times in the last decade. But it was done blind: nobody who participated knew that Mentor was sponsoring the survey, and the fact that the question about simulators showed Cadence, Mentor and Synopsys pretty much equal suggests it didn’t get biased towards Mentor’s own customers.

I’m not going to try and recap all the graphs here, but pull out a few trends that Wally highlighted. Or is that highlit?


The first trend is that standardization has gone a long way, especially around SystemVerilog, which is almost universally used for large designs and is the only language that isn’t shrinking, apart from a little use of C++.


Accellera’s UVM is also seeing huge growth at the expense of everything else, so there is increasing standardization of the verification flow.


One thing that is clear is that adoption of advanced verification methodologies and a structured approach to verification results in much higher productivity. The old way of buying some tools and then using them in an ad hoc way just doesn’t cut it. Verification costs go up. But groups that have a good process for verification and the tools to support it see their costs decrease.


One particular area that has a huge impact is intelligent testbench technology, which significantly reduces the redundancy inherent in constrained random. Last time Wally gave a keynote at DVCon he offered a deal that anyone could send him their verification suite and if he didn’t get a 10X improvement then the software was free. First design they got…improvement only 9.5X. Aargh. But every other design was much faster, some by as much as 60X.

Formal verification has also advanced. Partly this is due to more powerful capabilities, but also because the technique no longer requires a degree in rocket science to use. Also, focusing the technology on smaller applications such as connectivity checking or unknown (X) analysis makes formal approaches much easier to apply, with much faster run-times.

The next area Wally feels is ripe for some standardization is the system level. This is currently at the point where everyone is jockeying for position, but something might emerge around transaction-level models (TLM) used to drive both high-level synthesis and virtual platforms for software development.

Maybe he’ll be giving the keynote again in a couple of years and be able to report how system level productivity has improved driven by standards, better processes and more powerful tools.



Shrinking audio creates issues and opportunities
by Don Dingee on 02-26-2013 at 6:00 pm

There is a lot more to sound than meets the ear, and there are a vast number of ways to deliver an audio experience. I recently trashed my gaming headset, replacing it with a Samson C03U mic and Audio-Technica ATH-PRO700MK2 headphones. It’s a huge upgrade, especially for podcasting, and I admit I was also motivated by research into digital music formats. Audio is fascinating, and I enjoy learning about how it works.



High and Low: High Level Synthesis and Low Power
by Paul McLellan on 02-26-2013 at 2:39 pm

It is so widely accepted that it is already a cliché to say that “power is the new timing.” For more and more chips, the major challenge is not so much to meet timing but to meet timing without blowing out the power budget. Otherwise, you could just crank up the clock rate.

I’m going to be lazy so you can insert your own sentences here about mobile, battery-life, datacenters, cloud computing and why this is making power so important. Or you can talk about “dark silicon”, putting a lot of functionality on a chip but then not being able to light it all up. Or how about FinFETs and reducing leakage power. Whatever your design is doing, keeping its power low is almost certainly one of the things you are having to worry about.

What goes for design in general also goes for high-level synthesis (HLS). Historically, HLS has traded off performance and area. Want higher performance? Instead of re-using that multiplier, put a second one on the chip. But that is no longer good enough, since performance doesn’t just come at the cost of area, it also comes at the cost of power. As is almost always the case, optimization at higher levels of abstraction can make larger gains than trying to recover later in the design cycle. So optimization for power at the C++/SystemC level offers more opportunity than at RTL, which in turn offers more than at the gate level.


Calypto’s Catapult LP was created to address this, enabling area and power optimization when synthesizing designs from C++ and SystemC. Catapult LP automatically inserts power saving techniques during high level synthesis, driven by the constraints the user provides.

So what sort of power saving techniques can be used?

  • Numerical refinement: by allowing all calculations to be optimized for the exact bit-widths necessary, the sizes of registers and buses can be reduced, saving both dynamic and static power (see the sketch after this list).
  • Interfaces: if a design is making repeated use of memory and buses, the interface can be made wider to do multiple reads and writes at once and store the data locally.
  • Pipeline architecture: many algorithms are highly dependent on pipeline architectures and the associated memory/register toggle rates.
  • Clock frequency: reducing the clock frequency and forcing HLS to find architectures that can live with the lower clock rate can result in significant power savings.
  • Multiple clocks: blocks with lower data rates can be run at lower clock rates. Catapult supports multiple clocks.
  • Latency and throughput: different tradeoffs between latency and throughput can also reduce power significantly.
  • Idle: Catapult can insert an idle signal into a block, set when the block is not doing any processing. This can be used as part of the system level power management to suppress or slow clocks.
  • Sequential clock gating: by analyzing flows of data, it is often possible to suppress the clock to registers when it is clear that their values are not changing.
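To make the first bullet (numerical refinement) concrete, here is a minimal C++ sketch in the style of an HLS input. It is only an illustration, assuming Mentor’s Algorithmic C datatypes (ac_fixed.h) are available; the tap count and bit-widths are made up for the example and are not taken from any Calypto material.

```cpp
// Minimal sketch of numerical refinement for HLS. Assumes the Algorithmic C
// datatypes header (ac_fixed.h) is on the include path; all widths are
// illustrative, chosen only to show the idea of exact-width arithmetic.
#include <ac_fixed.h>

typedef ac_fixed<12, 2, true> coef_t;  // 12-bit signed coefficient, 2 integer bits
typedef ac_fixed<16, 4, true> data_t;  // 16-bit signed sample
typedef ac_fixed<30, 8, true> acc_t;   // wide enough for the full 4-term sum

// 4-tap FIR written with exact bit-widths: the synthesized multipliers,
// adders and registers are only as wide as the algorithm needs, so they
// are smaller and toggle less than a version built from 32-bit ints.
data_t fir4(const data_t x[4], const coef_t c[4]) {
    acc_t acc = 0;
    for (int i = 0; i < 4; i++) {
        acc += x[i] * c[i];  // narrow multiply-accumulate
    }
    return acc;  // quantized back to the 16-bit output type on return
}
```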

Download the Calypto white-paper Catapult LP for a Power Optimized ESL Hardware Realization Flow on their website here. There is also a webinar How to Optimize for Power with High Level Synthesis on March 12th at 10am Pacific. Registration is here.

Or see Calypto at booth 705 at DVcon this week (Tuesday/Wednesday, 3.30 to 6.30).



Intel and Altera Sign on for 14nm
by Ed McKernan on 02-25-2013 at 5:00 pm

The announcement today that Intel will be a Foundry for Altera at 14nm is a significant turning point for the Semiconductor Industry and Intel’s Foundry fortunes, the full ramifications of which are not likely to be understood by analysts. As a long-time follower of Intel and a former co-founder of an FPGA startup (Cswitch), I have been surprised that Altera and Xilinx did not sign on with Intel at 22nm two years ago, after the two ended up sharing space at TSMC. Although perceived to be in balance, the two FPGA players have had opportunities to sway the market in their direction, and now I believe there is an opportunity for an Altera breakout.

For many months I have argued that as the world turns mobile Intel’s future is tied to its Foundry. The x86 legacy client and the Growing Data Center will fund this move as Intel remains cash flow positive and profitable. In addition, Intel appears to have unlimited access to the debt market to create deals that are attractive for new customers. The “Mobile Foundry” will ramp to billions of units across a multitude of suppliers which in the end could be an order of magnitude larger than the roughly 400MU PC TAM, albeit at smaller die sizes.

There is, however, another growth market whose significance is underappreciated and yet could reshape the semiconductor field on the High Speed Communications side of the equation. The roping in of Cisco and Altera as Intel’s first major foundry customers is intriguing in that neither is a competitor of Intel, and the combination could end up sharing key IP that accelerates growth at the expense of other communications suppliers.

As part of the 14nm bring-up, there will be a common focus on building high performance, low power, dense SRAM blocks as well as next-generation high speed SerDes, memory interfaces, Ethernet and PCI Express interfaces. Imagine all of them shared such that compatibility will be guaranteed between Intel, Cisco and Altera chips used to build 100G/400G systems as well as High Performance Computers. Also consider that early 14nm Altera FPGAs will be used by Cisco and Intel in the development of systems with a HardCopy path to production ASICs.

It has been the goal of Xilinx and Altera for several years not only to be the prototype vehicles for Cisco, Huawei, Ericsson and others but also to enter high volume production. However, this is usually reserved for ASICs or networking ASSPs from the likes of Broadcom. If, however, Intel provides a two-year-plus process lead over the likes of TSMC, then it can be argued that the economics tip more in the favor of Altera in some of these situations.

In addition to the above, I suspect that at 14nm Altera will be willing to lay down more hard IP blocks that are extremely area and power efficient. At 28nm both Xilinx and Altera were wary of making the move for fear of selling parts hampered with extra logic that went unused and was therefore not valuable to end customers. If Altera gains a 2X or more advantage, expect them to come out with more hard IP that is feature rich and yet lets them undersell their rivals.

The business model for the Altera-Intel foundry deal is intriguing to me. An analyst contacted me and asked if Intel would have to sell wafers at 40% margins. I think not. Rather, this model will allow Intel to sell wafers at 60%+ margins in year one of an FPGA ramp, which is typically low in volume, and then slide in scale to 50% over the product’s lifetime. One should remember that Altera and Xilinx sell their high-end parts for thousands of dollars during the first year of prototyping. If they end up paying Intel $300 or even $500 per chip, it is worth it, as they themselves usually gain over 70% margins and are targeting design wins that lead to higher volumes.
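To put rough, purely hypothetical numbers on that: if an early high-end 14nm FPGA sells for $2,000 during prototyping and Altera pays Intel $500 for the finished die, Altera still books (2,000 - 500) / 2,000 = 75% gross margin on that unit, while Intel collects far more per wafer than the 40% figure the analyst assumed.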

Finally, it is important to note again that the traditional legacy foundry model, where the IP is provided by the fab, is much different from the direction Intel is headed. In some cases the customer will bring their own IP, and in other cases Intel will gladly collaborate in order to get a lead on the same IP, as in the case of SerDes. The agreement with Altera is significant for the performance side of Intel’s foundry business; however, the level of capex that Intel has signed up for in 2013 suggests that there are more, much higher volume announcements to come, and these must come from the mobile side of the compute equation.

Full Disclosure: I am long AAPL, INTC, QCOM, ALTR



Can Japan Regain Semiconductor Leadership?
by Paul McLellan on 02-25-2013 at 1:14 pm

In the 1980s, Japan was seen as the leader in the semiconductor industry. Their quality was higher, especially in memories, and the US was worried about falling behind. In fact Sematech was created in 1987 by the US government and a consortium of 14 US-based semiconductor companies primarily to pool investment on common problems and regain competitiveness of the US semiconductor industry versus the Japanese. However, today, Japan is largely seen as an also-ran in the semiconductor business. It is very inwardly focused on the Japanese market, and consolidation has really only been allowed to happen within Japan, so that Hitachi and Mitsubishi merged to form Renesas, NEC’s semiconductor business was folded in too, and the result was then bailed out by the Japanese government. Meanwhile, Elpida became the focus for memory but eventually filed for bankruptcy and is (probably) being acquired by US-based Micron.

Earlier this month at the Nikkei Electronics’ World Semiconductor Industry Summit in Tokyo, GLOBALFOUNDRIES’ CEO Ajit Manocha presented “Reshaping the Foundry Industry: Welcome to Foundry 2.0,” where he outlined the evolution of the foundry model and, in particular, what it will take for Japanese IDM companies to regain their former glory. Specifically, he urged them to embrace the fabless/fab-lite model rather than continuing to invest in their own process technology development and their own manufacturing. Today, just 5% of the volume manufactured by foundries comes from Japan.

The challenge is huge. Japan no longer has leading edge capacity (20nm and below) and although there are 42 new fabs under construction worldwide in 2011-13, not one of them is in Japan (or owned by a Japanese semiconductor company). Mobile is an area where Japan has been especially inwardly focused, with their own standards. In many ways the Japanese mobile market is the most advanced in the world, but the suppliers compete aggressively inside Japan while they have largely (with the exception of Sony) given up on the global market.


What Ajit calls Foundry 2.0 is a collaborative virtual-IDM. The old foundry model will no longer work due to the complexity of the process and the accelerated adoption ramp. Instead the responsibilities need to be more cooperative, with the foundry taking the lead for technology architecture, PDKs, manufacturing etc, and the semiconductor company taking responsibility for system architecture, SoC design, IP, final test. But all these need to be developed in parallel so that when the process is ready, the SoCs (especially for mobile) are ready and the volume can ramp very fast.

A video of Ajit’s presentation is here (45 minutes).



The New "Mobile Foundry" Era: Whose Wheelhouse?
by Ed McKernan on 02-25-2013 at 1:12 pm

Nothing seems to raise the visceral ire of SemiWiki readers like the two words: Intel and Foundry. To get maximum steam coming out of the ears, make sure you combine the two words in a sentence, something along the lines of: Intel is Now Going to be a Leader in the Foundry Business. Pause….. Ok, catch your breath and now let’s move on :). After reading the comments to my column from last week, I have come to the conclusion that our understanding of “Foundry” is also about to change dramatically. As Dan Nenni points out, in addition to Intel, both TSMC and Samsung are adding massive amounts of leading edge capacity – and why not, there are billions of dollars and units at stake for as far as the eye can see. Our current view of Foundry is about to be encapsulated as “Legacy Foundry” as we enter the era of the high volume “Mobile Foundry.” If anyone else has a better term then let’s bring it to the table, because this will be significant.

Arguments made back and forth try to sway the opinions of readers that Intel can never play in the foundry game due to lack of IP, tools support and pricing. I totally agree when it comes to playing in TSMC’s 40nm and below sandbox or high volume commodity chips. This is completely out of the realm of Intel’s capabilities and interest, for that matter. But the billions of units a year mobile market is well within Intel’s wheelhouse, and they will claim their advanced, low power FinFET process technology is their competitive edge that can benefit all. The two major mobile horses, Apple and Samsung, who claim 80% of the market, are standardizing on a small number of components that drive the steep step function ramps that occur multiple times a year. Intel has a long history of high volume step function ramps and is living in a market (i.e. PCs) that has a nearly 400MU TAM based on larger die. Ramping capacity with high yield for what will be 6 month product cycles is key.

Before going forward, one has to come to the conclusion that Intel has only one path forward. Yes, it will continue to thrive in a $40+ x86 market serving PCs and servers. However, if they don’t pursue the larger mobile market, then over time TSMC and Samsung will overwhelm them and the ability to reinvest in process technology as they have in the past will be at risk.

As an example of how this plays out, let’s look at Cirrus Logic. In all the iPad and iPhone teardowns there are one or two components from Cirrus. They shipped over 200MU into Apple last year. Furthermore, they were a captive supplier, with 80% of their business tied to Apple. In this new mobile market, Apple will be driving Cirrus to reduce supply risk while lowering costs and power. If Cirrus blew up, Apple’s freight train would grind to a halt. In addition, the Cirrus parts come with custom features that only Apple gets to enable for their devices.

Last year it was rumored that Apple financed the ramp of Cirrus Logic’s components because the cash outlay required for the enormous production ramp in Q3 was, to put it mildly, a challenge. Cirrus grew revenue 100% from Q2 to Q3 and another 50% in Q4. One could see a scenario where Intel would offer a financing deal to have Cirrus move production to its fabs, especially since it has the ability to raise billions of dollars at interest rates below 3%. This is a sign of things to come in the foundry industry. Free financing will no longer be just the realm of auto companies (notice what heavy capex does to an industry). In addition, Intel would likely make the argument that hiding new products in an Intel fab, away from Android system players who live cross-town from TSMC, could be an added benefit, as confidential production ramp schedules could remain stateside. My bet is that the two are talking.

Apple has developed a plan of alternating processors in different platforms based on whether it is a premium device (iPhone 5) or last year’s model (iPad mini). This reduces risk in the supply chain. There is, however, an interesting twist in Apple’s processor development. They are experiencing what Intel saw in the 1990s, and that is an insatiable demand for more performance, albeit under a more strict power curve. They need to stretch their capabilities at the front end in order to justify the $650 price tag of their latest iPhone. If they leverage Intel’s fab process then they could conceivably pick up another 2X overnight at an even lower power. What if they paid Intel an extra $10-$15 – it would be worth it. Tools: let Intel engineers do the port. IP: courtesy of Apple Inc.

What about Qualcomm and Broadcom? They will be called on to make versions of their chips specifically for Apple and Samsung. Again it will be necessary to hide the different versions. Think of it this way. If Qualcomm makes one version of their 4G LTE chip and it is built at one fab, let’s say TSMC, then both Samsung and Apple know the schedule of production and build products around it. Therefore Apple and Samsung know that their product releases will be in sync, which reduces the advantage that one vendor can have over the other. If instead Apple has Qualcomm build a custom baseband chip at a different foundry than the one building the Qualcomm version for Samsung or the rest of the Android market, then it is possible for them to hit the market with a different feature set and timing than their competitors. This will be critical in maintaining higher hardware ASPs. Mere months can be the difference between high and mediocre profits.

As mentioned in a previous blog, Intel was operating under two business plans during the last three years. Andy Bryant was running the manufacturing footprint side while Paul Otellini was charged with winning mobile with internal products. Otellini succeeded in his efforts to win the PC market through a low-TDP cannibalization strategy; however, he lost the smartphone and tablet space to Samsung and Apple, who decided it was key to develop their own processors. Now Intel’s opportunity is to shift to the Bryant plan, which is to be a foundry to the billions of mobile devices outside of the PC market.

I expect that we will not fully understand how this new “Mobile Foundry” model shakes out until several years down the line, when the smartphone and tablet markets show growth of just a low double-digit rate. In some ways it is reminiscent of the early 1990s when PC growth took off and the investment in chip startups and fabs was at its peak. The first instance of a slowdown occurred in September 1995 when Microsoft launched Win 95 and companies like Micron and Cirrus hit the wall and the mothballs came out. TSMC, Samsung and Intel have probably developed a multi-year business plan that is based on growth rates that one day reach a drop off. Mothballs will come out. However, for now, to underinvest is to ensure losing early; to overinvest is to lay it all on the line and hope the other guy comes up short.



Who Allegedly Broke Tela’s Patents: Is Samsung or Qualcomm the Real Villain?
by Randy Smith on 02-25-2013 at 1:08 pm

I recently blogged about the actions filed by Tela Innovations at both the US International Trade Commission (USITC) and in federal district court. Those actions allege that five mobile phone manufacturers – HTC, LG, Motorola Mobility, Pantech, and Nokia – were importing handsets into the US which infringed on seven of Tela’s patents. After posting that blog there were a few follow-up comments and questions on that SemiWiki post which caused me to take a deeper look. This blog is a follow-up on those comments in two areas – “patent trolls” and the bigger story, who really may have broken those patents.

There have been allegations that Tela is a “patent troll”. Patent trolls are companies that typically collect patents at auction from bankrupt companies and then try to profit from owning the patents, rather than using them in their own products. Tela is most definitely not a patent troll. I believe Tela has raised upwards of $20M over at least three rounds of funding. They have also spent years developing their own products, services, and IP portfolio, with only one notable small (less than $5M) acquisition, Blaze Semiconductor. Tela’s engineers include several former layout and circuit design experts (several of whom worked at Artisan Components) and some lithography experts (with backgrounds such as KLA-Tencor). Patent trolls do not hire the top engineers, just accountants and lawyers, so we can stop discussing this topic as Tela is clearly not a patent troll.

This topic got more interesting to me when I looked more deeply into the documents filed with the USITC. All of the mobile phone products named in the complaint contain a Samsung package which appears to contain a Qualcomm Snapdragon design. There are more than 500 separate documents already on file in the case and I do not plan to read them all. But as far as I can tell so far, all of the alleged patent violations appear inside the Qualcomm design. Bear in mind that the patents are layout related – new techniques to lay out circuits in the most advanced semiconductor manufacturing processes. So, the question is “Who created those layouts?”

Before you jump to the conclusion that it must have been Qualcomm, consider two things: (1) the designs appear to have been implemented using a standard cell library approach; and (2) Qualcomm is an investor in Tela. If indeed Qualcomm designed the layouts violating Tela’s patents, wouldn’t you like to be a fly on the wall for the next several Tela board meetings? Qualcomm’s level of investment in Tela is not so high that it has a seat on the board. And the board is formidable, including EDA legends Don L. Lucas, Ray Bingham, and Jim Hogan; Spansion’s CEO John Kispert; and two former Artisan executives, Scott Becker and Dhrumil Gandhi. But if the violations are in a standard cell library design used in the Samsung-manufactured part – where did the standard cell library come from?

The documents filed show very modern-looking layout techniques like those I expect to see at 22nm and below. Samsung lists a 20nm HKMG process on their foundry website. Tela has been developing libraries on this process node for a while and made a public statement on this topic last summer. But I have not yet been able to determine who provided the library to Qualcomm or whether Qualcomm developed it on their own. If the USITC upholds Tela’s complaint, I would expect that these details will eventually come out.

In another interesting twist, according to a response by HTC’s lawyers, the mobile phones in question constitute all of the available Windows Phone 8 models sold in the US. In fact, both HTC and Nokia have asked the USITC to consider the impact on the Windows Phone 8 launch as a viable third player in the mobile phone market, and because their products are used for public health and safety. It is hard to imagine that these should be factors, as any phone, even one several generations old, can contribute to public health and safety. And I doubt AMD ever got anywhere by saying it should be able to violate Intel patents so we could have a second player in the PC microprocessor market. They also say the Qualcomm part is a small part of the overall phone – I bet Qualcomm loved that statement. The microprocessor chip takes up a small area of my PC’s footprint too – so what? How can you claim that this makes its contribution unimportant? This is just the beginning of the responses, though. This is very serious stuff, but I cannot help but think it will continue to provide a source of entertainment, kind of like a whodunit mystery, for months to come.