
Brian Krzanich Does Reddit AMA

by Paul McLellan on 02-19-2014 at 1:03 pm

Do you know what an AMA is on Reddit? It stands for “ask me anything”. The person answering is often someone famous like Bill Gates (last week), but sometimes just someone who does an interesting job (like astronaut) or was in an interesting situation (like the hijacking from Ethiopia last week).

Today, it is someone who arguably is all three: Brian Krzanich, CEO of Intel. Most of the people asking questions are not semiconductor professionals, so they didn’t ask the questions I would have (and it was over by the time I discovered it had taken place).

He is very upfront about Intel missing the mobile transition:

How did Intel miss out on the mobile/tablet market and what is it doing now to compensate?

BrianKrzanich 366 points 1 hour ago
we wanted the world of computing to stop at PC’s…and the world.. as it never does… didn’t stop innovating.. The new CEO of Microsoft Satya said it well the other day… our industry does not respect tradition, it respects innovation…. i think he was 100% right.. and it’s why we missed the mobile move.

Did you always want to be a CEO? Did you come from a technical background?

BrianKrzanich 59 points 1 hour ago
nope.. thought i wanted to stick with the pure technical path…. then i realized the best way to drive technology was to run the company…

He is also clearly a total geek who used to water-cool his PC and overclock it to over 4GHz:

Do you own a badass gaming PC with a crazy processor that isn’t on the open market?

BrianKrzanich 104 points 1 hour ago
no.. i used to have time to do that.. i used to build my own PC’s.. and actually had one of the first water cooled overclocked PC’s around. i ran it at over 4Ghz and this was back in 2001… but alas i do not have the time for that fun anymore

And what color socks is he wearing?

BrianKrzanich 38 points 1 hour ago
Brown

I knew that would be the first thing SemiWiki readers wanted to know.

Brian’s AMA is here.


More articles by Paul McLellan…


Carbon Design Systems – Secret of Success

by Pawan Fangaria on 02-19-2014 at 11:00 am


Last week, after learning from the press release of Carbon about its sustained growth, with record-breaking revenue and a thumping 46% increase in bookings, I was interested to know more about what drives Carbon to such an amazing performance in an EDA market that is generally prone to growth of only a few percentage points, provided economic circumstances remain favourable. I was lucky to talk to Bill Neifert of Carbon Design Systems, who provided great insights into the value-added business model driving this kind of growth at Carbon, and into the ESL (Electronic System Level) segment. Here is the conversation –

Q: Bill, I know Carbon is among the very early movers into ESL area with its virtual prototyping product, SoC Designer. What are the various offerings you have that are gaining traction?

Carbon has been in the virtual prototyping space for nearly twelve years now, first with our Carbon Model Studio product and then with Carbon SoCDesigner. We rolled out Carbon IP Exchange a few years ago to meet the huge demand for automated model creation via a web portal. It’s really our CPAKs (Carbon Performance Analysis Kits), though, which have been driving growth. While our other offerings are models and tools to create your own virtual prototypes, CPAKs are completely pre-built virtual prototypes and software which enable designers to be productive within minutes of download. If needed, they can then customize these very easily to more closely model the behaviour of their own system. We’ve found that by using CPAKs, we can enable users to be quickly productive across design tasks ranging from architecture analysis to system level performance optimization.

Q: I notice that you used the term virtual prototyping instead of ESL. What do you see as the difference?

ESL seems to mean something different to everyone. For some people it means high level synthesis (HLS). For others, it means a virtual prototype representation of the system. Another set of designers may use it to mean an architectural conception of the system as a starting point for successive refinement. Although I see lots of design teams using various parts of this definition, it’s pretty rare to see people using the entirety of design flows which can be lumped into the ESL term. We like to stick with using the term virtual prototype but even in that term there is some ambiguity since there can be virtual prototypes at differing levels of abstraction depending upon the models it contains. Our SoCDesigner-based virtual prototypes typically have multiple layers of abstraction depending upon the design need being addressed (ARM Fast Models for the software engineer, 100% accurate ARM models for the architect). Other virtual prototypes tend to stick to a single abstraction level and therefore, a single use case.

Q: Lately, I see that Carbon’s focus has been on providing a complete solution for SoC designs, such as the creation of accurate virtual models and web-based support through Carbon IP Exchange. I would like to hear your comments on this.

Carbon IP Exchange is really a software-as-a-service (SaaS) application of Carbon Model Studio targeted at specific IP. For example, our relationship with ARM provides Carbon with access to ARM’s RTL to compile and instrument 100% accurate models based upon their IP.

We rolled out the Carbon IP Exchange website to automate this model creation task. It walks the user through the configuration options for each piece of IP; as each option is chosen, the form updates itself to allow only valid choices for the remaining options. This way, when the form is complete it corresponds to a valid configuration of the RTL. The resultant model is therefore correct by construction, based upon the configuration options supported in the RTL.
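The self-updating form Bill describes can be sketched in a few lines. This is a hypothetical illustration of the “correct by construction” idea, not Carbon’s actual implementation: the option names, values, and dependency table below are invented.

```python
# Hypothetical sketch of a "correct by construction" IP configuration
# form: each choice narrows the valid options for the remaining
# parameters, so a completed form always maps to buildable RTL.
# All option names and valid combinations are invented for illustration.

VALID_CONFIGS = [
    # (cache_size_kb, bus_width_bits, ecc_supported)
    (16, 64, False),
    (32, 64, True),
    (32, 128, True),
    (64, 128, True),
]

FIELDS = ("cache_size_kb", "bus_width_bits", "ecc_supported")

def valid_choices(partial):
    """Return, for each still-open parameter, the values consistent
    with the choices made so far (mimicking the self-updating form)."""
    candidates = [cfg for cfg in VALID_CONFIGS
                  if all(cfg[FIELDS.index(k)] == v for k, v in partial.items())]
    return {f: sorted({cfg[i] for cfg in candidates})
            for i, f in enumerate(FIELDS) if f not in partial}

# Choosing a 32 KB cache restricts the remaining fields:
print(valid_choices({"cache_size_kb": 32}))
```

Because every remaining menu is filtered against the table of valid configurations, the completed form can only describe a configuration the RTL actually supports.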

Along with the models, we also have a growing collection of Carbon Performance Analysis Kits (CPAKs). These CPAKs are pre-built virtual prototypes and system software. These CPAKs are user extensible and shipped with source code to enable simple customization.

Carbon IP Exchange is a vital part of our customers’ design infrastructure; it’s available 24/7. It has already been used by designers to create over 5,000 models, not counting the several thousand additional models included in our CPAKs. This makes designers much more productive: they can concentrate their efforts on actual design and debug instead of worrying about model creation or the correctness of their models.

Q: How does web based Carbon IP Exchange work? Can any third party IP be modeled on-the-fly as per SoC design need? How about licensing?

Any third party IP could utilize Carbon IP Exchange. After all, we’re using the Carbon Model Studio compiler to generate the designs from the original RTL. That technology has been successfully used on projects for over a decade. We’ve tended to focus more on IP which our customers are using in their own SoC designs. This is why you see most of the top IP providers represented on the portal. We still of course also provide the Carbon Model Studio tool to design teams as well to enable them to compile models from their own RTL or from IP providers who aren’t currently on Carbon IP Exchange.

Q: I recollect from the press release that Carbon Performance Analysis Kits (CPAKs) are seeing rapid adoption by SoC design houses. How do they fit into the overall solution?

We rolled out CPAKs about 18 months ago and they are now in use at a majority of our customers. The reason for this is quite simple: they get designers up and running much more quickly. CPAKs are prebuilt virtual prototypes and software targeted at various use cases. The use cases vary: IP level optimization typically requires bare metal software for configuration and benchmarks; driver development is generally done on a more complete system with a ported version of Linux or Android; and a designer doing OS level performance optimization generally wants a complete system together with benchmarking applications. We have CPAKs targeted at each of these use cases and package them together with source code for both the virtual prototype and the software which runs on top of it. This means they can get you up and running quickly and can also be easily modified to represent the actual configuration of your system. Using CPAKs it’s not unusual for a designer to be productive within minutes of download. It would take much longer if they assembled that system themselves.

Q: My observation from our conversation is that this is a huge productivity gain for SoC designers, which is driving them to increasingly use Carbon’s solutions and products for fast and accurate virtual prototyping, leading to silicon success in a shorter time. Am I right in assuming that?

When you’re designing an SoC you typically have a bunch of questions: how fast will it run? What IP do I need? How much power will it consume? How do all the parts work together? Many more questions come to mind as well. The faster these questions can be answered, the sooner the project can be successful. The evolution of Carbon’s solutions is focused entirely on reducing the time needed to answer these design questions.

Q: O.K., looking at customers, I guess the U.S. is always leading in newer as well as mature businesses. You say there is strong growth in Asia as well; can you elaborate a little on that? Is it because of the many design and IP companies coming up in that region?

Virtual prototyping has always had a slightly different geographical sales mix than other EDA products. Japan was easily the strongest geography for many years but we’ve been seeing a dramatic shift in this in the past few years as the US takes on a more system-focused design flow and as companies in Korea, China and Taiwan do more and more designs. We grew so much last year by expanding dramatically in certain strategic accounts but also by acquiring accounts focused on additional vertical markets. ARM’s new 64 bit processors are being adopted by companies looking to build servers, storage and network devices. This is opening up a new set of customers to us as designers in those markets are adopting the same tools which have been used in the markets where ARM is already dominant.

Q: Would you like to further elaborate on continuing strategic partnership with Samsung and future roadmap?

Not yet, stay tuned!

Q: It’s interesting to know about such progress happening in this space. One last question: what’s your view on ESL (i.e. the use of virtual platforms, high level modeling etc.) entering the mainstream of semiconductor design practice, at least for SoC designs?

I honestly believe that virtual prototyping is finally in the mainstream of the design practices for most SoC designs. I used to spend a lot of time talking to designers about why they should design using virtual prototypes. Now, that time is focused more on discussing the ways in which they can be using the virtual prototypes they have. Most design teams seem to be using virtual prototypes in at least part of their design flow. The key now is expanding that use throughout the entire flow to enable even greater value. EDA companies are successful when they remove design bottlenecks and CPAKs have demonstrated that they are able to get designers productive more quickly than with any other solution.

This was an interesting discussion with Bill. Now I can truly visualize how virtual prototyping adds value to an SoC design and how Carbon has made the process of virtual prototyping simple enough for it to get into the mainstream of semiconductor designs.

More Articles by Pawan Fangaria…..



Verifying DRC Decks and Design Rule Specifications

by Daniel Nenni on 02-19-2014 at 8:00 am

DRVerify is part of the iDRM design rule compiler platform from Sage DA, something that I have been personally involved with for the past three years. DRVerify is mainly used to verify third party design rule check (DRC) decks and ensure that they correctly, completely and accurately represent the design rule specification. In addition, DRVerify can be used to clarify and validate the exact meaning of a complex design rule description, as well as to ensure design rule consistency and prevent potential conflicts between different rules.

Using the iDRM design rule definition pattern and its logical expression, DRVerify systematically generates variations of the design rule and creates a comprehensive set of layout test cases that manifest all meaningful combinations of the design rule parameters that can make it pass or fail.

DRC deck programmers use DRVerify to automatically generate an exhaustive set of pass and fail test cases, then run their DRC code on the generated GDS file to check that it flags all the fail cases and does not flag any of the passing ones. Design rule manual (DRM) teams use DRVerify to visualize their design rule expressions and check their boundary conditions, to ensure the formal expression accurately reflects their intent. DRVerify also indicates any possible conflicts with other existing design rules.

Verifying DRC Decks (DRC Runsets)
Creating a correct and accurate check of a complex design rule is extremely difficult: the probability of making errors is high, compounded by the ambiguity of the design rule description, the difficulty of fully covering all possible situations, and the complexity of evaluating each situation and determining its legality with respect to the design rule specification.

The fundamental challenge is that there is no formal way or methodology to verify the DRC code against the design rule definition or any other golden reference, and so to ensure its correctness. To minimize the risk and release higher quality runsets, PDK teams create QA test cases for each design rule. These test cases are layout snippets that manifest both violating (fail) and legal (pass) configurations. The DRC code is then run on these test cases, and the coders check that the code flags an error for each of the “fail” test cases and does not flag any of the “pass” cases.

DRVerify offers a formal and automated methodology to generate such test cases. Starting from the source, the design rule specification in iDRM, the DRVerify geometry engine exercises both the topological properties of the design rule pattern and the numerical values of its logical expression, and does so in a systematic and comprehensive manner. The result is an exhaustive set of layout test cases that manifest all meaningful combinations of the design rule parameters that can make it pass or fail. Each test case is tagged and annotated with the specific combination of parameters that makes it pass or fail. The entire collection is written to a GDS file, where all the violating test cases are grouped and placed below a dividing line and all passing cases are placed above it, to clearly distinguish between them.

Clear and formal design rule description in iDRM generates hundreds of DRC test cases
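As a toy illustration of this test-generation idea, consider a single minimum-spacing rule. A real tool like DRVerify works from the full rule pattern and logical expression; this sketch only sweeps one parameter around its boundary, with an invented rule value.

```python
# Toy illustration of pass/fail test-case generation for a single
# design rule, here a minimum-spacing rule "spacing >= MIN_SPACE".
# The rule value and sweep step are invented for illustration.

MIN_SPACE = 50  # nm, hypothetical minimum spacing

def generate_cases(min_space, step=10, span=2):
    """Emit (spacing, expected_pass) pairs straddling the rule
    boundary, including the exact boundary value itself."""
    cases = []
    for k in range(-span, span + 1):
        spacing = min_space + k * step
        cases.append((spacing, spacing >= min_space))  # True = pass
    return cases

for spacing, expected_pass in generate_cases(MIN_SPACE):
    tag = "PASS" if expected_pass else "FAIL"
    print(f"spacing={spacing}nm -> expected {tag}")
```

Note that the boundary value itself (50nm here) is deliberately included: off-by-one errors between `>` and `>=` are exactly the kind of discrepancy such test cases are meant to expose.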

The DRC deck under test is then run on this GDS file. A correct DRC deck must flag as errors all the test cases in the fail group below the line, and must not flag any of the passing test cases above the line. Any other result indicates a discrepancy between the deck and the design rule specification, which needs to be further evaluated and probably corrected. The annotated value combination of each test case pinpoints the specific discrepancy and helps the user debug and correct the DRC code.

Verifying DRC deck using the generated test cases (only error marker layer shown)
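The deck-checking step itself amounts to comparing the checker’s verdict against each case’s tag. Here is a hedged sketch of that comparison; `run_drc` is a stand-in for invoking a real DRC tool, written as a deliberately buggy checker (it uses `>` instead of `>=`) to show how a boundary discrepancy is caught.

```python
# Sketch of scoring a DRC deck against tagged pass/fail test cases.
# "run_drc" is a placeholder for a real DRC run; it is deliberately
# buggy here (">" instead of ">=") so the boundary case is caught.

def run_drc(spacing, min_space=50):
    return spacing > min_space  # True = clean, False = flagged

def score_deck(cases):
    """Return (missed_violations, spurious_flags) for the deck."""
    missed = [s for s, expected in cases if not expected and run_drc(s)]
    spurious = [s for s, expected in cases if expected and not run_drc(s)]
    return missed, spurious

cases = [(30, False), (40, False), (50, True), (60, True), (70, True)]
missed, spurious = score_deck(cases)
print("missed fails:", missed, "| wrongly flagged passes:", spurious)
```

In this example the buggy deck wrongly flags the exact boundary value (50) as a violation, the kind of discrepancy the annotated test cases make easy to trace back to a specific parameter combination.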

Validating Design Rule Descriptions: Design rule descriptions are written today in various forms using drawings, tables and free form text. Advanced technology (20nm and below) design rules have become so complex that it is very hard to verify that such a description completely and accurately represents the design rule intent.

To address this problem, DRVerify uses the iDRM formal design rule description and automatically creates layout embodiment snippets of how this rule can manifest itself given the specified drawing and logical conditions. DRVerify acts like an animator that acts out the written spec and creates a set of design rule instances that visualize the design rule expression and highlight the possible corner cases and boundary conditions that can flip a case from pass to fail and vice versa.

This visualization shows immediately what the design rule spec actually means, which cases pass, which fail, and where the possible hidden ‘gotcha’ situations are. The user can immediately spot unexpected aberrations, like violations that should have been legal and vice versa.

Design Rules Consistency: Advanced technology DRMs hold thousands of design rules, some of them very complex. Most layout configurations are subject to multiple design rules which can be overlapping, meaning that certain objects or distances in the layout need to satisfy different conditions that are specified in multiple design rules. All these rules need to be consistent, and one design rule cannot conflict with other rules. Without a formal automated system, this is almost impossible to verify by hand.

DRVerify addresses this problem: when a design rule is being exercised, DRVerify keeps all other design rules “fixed” in the background. If a specific layout instance requires setting values that conflict with any of the other rules, DRVerify will issue a warning indicating the conflict. The user can then review the conflicting conditions between the rules and determine how to resolve them. Note that conflicting conditions might also exist within a single complex design rule.

DRVerify is part of the iDRM platform. iDRM uses a formal and clear design rule description as a single source for everything: the design rule specification, the checker and the test structures. Using iDRM a design rule is defined only once, and all its other facets and derivative manifestations are generated automatically, correct by construction and are always in sync.

DRVerify enables the development of higher quality DRC decks, much faster and with less effort. In addition it can also be used to develop clear, unambiguous and consistent design rule specifications.

DRVerify is part of the iDRM Platform

Also read:

iDRM for Complex Layout Searches and IP Protection!

Analyze your physical design: avoiding layout related variability issues

iDRM Brings Design Rules to Life!

Sage Design Automation launched, with design rule compiler technology and products

More Articles by Daniel Nenni…..



The Future of Money is Digital – BitCoin Introduction

by Sam Beal on 02-18-2014 at 5:00 pm

By now most people who read or listen to the news know something about Bitcoin (BTC). Most people have the perception that it is either the currency of crime or of speculation. Put aside the perceptions and consider this: when was the last time you saw someone write a check in the grocery store, especially someone under 60? Today’s kids will shop with their smart gadget, not with a credit card, and certainly not with folding money, as my grandfather called it.
Continue reading “The Future of Money is Digital – BitCoin Introduction”


Smart cards hard for the US to figure out?

by Don Dingee on 02-18-2014 at 3:30 pm

Every once in a while, I just scratch my head and wonder just what in the wide, wide world of tech is going on. More than ever, it seems the big barriers to adoption aren’t a lack of technology – instead, barriers come from a system that staunchly defends the old way of doing things, even when the participants are battered, broken, and bleeding.

Consider smart cards, for instance. We have had both the international standardization and the microcontroller and RF technology available for some time. Smart cards are routinely used in 130 countries worldwide – but for the most part, not the US, outside of the well-known mobile phone SIM module. Financial transactions in the US still mostly rely on magnetic stripe technology. Why can’t we get on board?

Part of the answer lies in sheer size. By some estimates, changing the US financial system – card issuers, retailers, and almost every consumer – to a smart card transaction infrastructure could cost as much as $35B. Smart cards themselves, each carrying a microcontroller and non-volatile memory supporting encryption of stored information, are about five times more expensive to produce compared to trivial mag stripe versions (but maybe not the versions of cards with holographic logos, bearer photos, and other features).

There is also security to consider. Ironically, the recent Target breach could be the straw that finally breaks the camel’s back and lowers the resistance to smart cards. Mag stripe cards are trivial to counterfeit, but smart cards are a much more difficult nut to crack for forgers. However, there is controversy over whether the two-factor authentication method for smart cards should be chip-and-PIN, or chip-and-signature. As many pundits point out, while these cards provide more security in physical transactions, in an economy increasingly moving to online purchases, smart cards don’t create much of a change.

I remember when the buzz on NFC was that it was going to take over payments. On the trail, I wandered around CTIA Mobile in San Diego in the fall of 2011, asking a few vendors what they thought. The response was a bit startling: retailers won’t change their infrastructure. It’s a lesson we learned from RFID in the previous decade, where item-level tagging and seven-cent chips were going to sweep the universe and make everything “smart”, displacing that rotten old barcode technology.

Things didn’t happen that way. Smooth-talking marketers said it was all about the use case, that these new technologies didn’t create a big enough change to justify investment. Those same marketers also managed to siphon most of the energy out of the term “smart”, nullifying it with the suggestion that it offered nothing but extra cost for consumers. (See: smart grid.)

The other blowback is being “too far in front of the bus”. That usually comes from salespeople making a comfortable living selling mostly old stuff to their mostly old customer buddies. The problem: embedded life cycles are really long – two, three, sometimes five years. If you miss the bus, your best-case outcome is running like crazy for the next two years to catch the next one. The worst case is it goes by, and as Tom Peters succinctly put it, you wake up dead one day without understanding what happened. Sometimes, it’s better to be hit by the bus – it can be a great call to action.

The smart card bus is here, in a system riddled with hacking, fraud, and identity theft, and some big names are getting run over and hurt badly because switching from mag stripes was sold to a lot of people as too expensive. We got what we asked for by avoiding the fairly obvious. Now, how do we get what we really need?

It starts at the building block level, and dealing with the myth that smart cards have to be expensive to produce. An ultra-low power MCU and NVM with 10-year-plus data retention isn’t that costly any more, with 8-bit engines under 25 cents, and 32-bit engines under $1 and dropping thanks to IoT demand. Those costs are offset by reduced card replacement due to mag stripe erasure (like setting your phone on your wallet), and in response to all-too-frequent identity compromise. Some NVM technologies, like Sidense 1T-OTP, are also secure against physical inspection attacks – without visually revealing the state of programmed memory cells, hackers can’t reverse engineer the application code or encryption keys easily.

Next, we have to get over this “absolutely secure” excuse, postponing change waiting for a perfect solution. Signatures should have gone out with the Declaration of Independence, and are totally non-secure. Two-factor authentication schemes using PINs are pretty good. I actually like the two-factor NFC approach using a smartphone, but that’s another discussion. US banks and consumers need to just get behind the EMV smart card standards and chip-and-PIN, and get over the IT changes needed to make it happen. Nothing is bulletproof, but what we have now looks like Swiss cheese in comparison to what we should have – and attacks are only going to increase the longer we wait.

Of course, there are still lawyers to deal with, and they may be the ultimate barrier to progress. I enjoyed this analysis in the NY Times:

… Visa and MasterCard have both set forth timetables that attempt to institute the adoption of embedded-chips technology by the fall of 2015. Although the timetables are not mandatory, they would essentially shift the liability for card losses on to whichever side — the bank or the retailer — has the least secure technology.

That is world-class FUD if I’ve ever seen it, but unfortunately it is exactly the type of misunderstood risk a lawyer would use to stop change in its tracks. I may switch careers to become an expert witness: “Well, their card uses an MCU with known security vulnerabilities ….”

Seriously, the time has come for smarter payment systems – smart cards, NFC-enabled phones, anything but mag stripes – in the US. Technologists need to lead this charge and debunk some of the myths surrounding “smart”, communicating the benefits to consumers more clearly. I hope I’m part of that. What are your thoughts on this melee?


Verification of Power Delivery Networks

by Paul McLellan on 02-18-2014 at 2:43 pm

Power delivery networks (PDN) are the metal structures on a chip that deliver the power. In a high-end desktop SoC this can mean delivering as much as 150W, and with voltages around 1V that means over 150 amps of current. Clearly, getting the PDN correct is critical for a correctly functioning chip. One of the challenges in verifying the PDN is that early in the design the precise circuits are not finalized, and no vectors are available to perform the verification.

It is no longer possible to simply over-design the PDN since chips are increasingly routing area-limited. Instead, there is a requirement to efficiently perform PDN verification to ensure that all the currents and terminal voltages remain within specification, and that the line currents remain within limits protecting the grid from reliability issues.

The traditional approach to power grid verification is power grid analysis (PGA), which employs circuit simulation to check the grid voltages and currents in response to a current stimulus that represents the pattern of activity of the underlying circuit. The current stimulus ideally is the result of multiple simulation runs of the underlying transistor circuit, but that technique is prohibitively expensive. The more practical (and typical) approach is to generate the current stimulus based on specific workload scenarios (operational modes) of the underlying circuit.

Because of this need for a current stimulus, power grid analysis suffers from certain limitations. Naturally, to guarantee safety and reliability under all conditions, designers are interested in worst-case behavior on the grid, yet there is no known method for finding the true or realistic worst-case behavior without an exhaustive analysis, which is prohibitively expensive. In the real world, designers are reduced to generating the current stimulus based on typical or representative workload, neither of which are easy to define and, in any case, are insufficient to guarantee grid safety under all conditions. In fact, typical case analysis only provides a lower bound on the true worst-case voltage or current variations on the grid. To make things worse, it is extremely important to perform some type of grid verification during early design planning, but information on circuit workload is often simply unknown early in the design flow.

Existing power grid analysis, then, effectively shifts the burden onto the designer, who is required to provide the workload patterns for which the grid will be verified. A superior approach would obviously not burden designers to this extent. That is the promise of vectorless verification—a verification that does not require any user-input stimulus.


However, this notion of a truly vectorless verification approach is simply impossible to realize. Given any power grid that presumably has been checked and verified by such a hypothetical engine, designers can always envision an underlying circuit that would draw a large enough current to make the grid unsafe. We will never have an ideal vectorless verification approach for the power grid.

Given, then, that power grid verification must require some information about the current stimulus, the burden is on EDA developers to minimize the amount of information required. We describe an approach that aims to achieve this goal. Such an approach is not computationally cheap, but it gives a good upper bound on the true worst case, and it holds the promise of leading to practical approaches for certain scenarios, especially for early design planning.

One simple case is to assume all the currents are DC, also known as static verification. Only the grid resistance is relevant and the voltage drops can be calculated.
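The static case reduces to Ohm’s law on the resistive grid. A minimal worked example, with invented resistance and current values, for a one-dimensional power rail with two current sinks:

```python
# Minimal static (DC) IR-drop example for a single power rail:
#   Vdd --R1-- node1 --R2-- node2
# with DC current sinks i1 and i2 drawn at node1 and node2.
# All numeric values are invented for illustration.

VDD = 1.0             # volts
R1, R2 = 0.01, 0.02   # ohms (rail segment resistances)
i1, i2 = 5.0, 3.0     # amps (cell current draws)

# R1 carries the current of both sinks; R2 carries only i2.
drop_node1 = (i1 + i2) * R1
drop_node2 = drop_node1 + i2 * R2

v_node1 = VDD - drop_node1
v_node2 = VDD - drop_node2
print(f"node1: {v_node1:.2f} V, node2: {v_node2:.2f} V")
```

With these numbers the rail sags from 1.0 V to 0.92 V at node1 and 0.86 V at node2; static verification checks that such worst-of-grid voltages stay within the specified margin.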

The problem is much harder in the dynamic case, where transient currents are allowed and RLC parasitics all become relevant. Exact solutions become prohibitively expensive, and we see that bounds on the solution must be sought for practical use. The constraints remain DC, but all current and voltage signals are transient over time.
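In the DC-constrained setting, one way to see the shape of the vectorless formulation is as an optimization: maximize the voltage drop at a node subject to current constraints, rather than simulating specific workloads. The sketch below is an illustrative simplification, not the white paper’s actual formulation: it assumes each block contributes a known linear sensitivity to the drop, with per-block current bounds and a total current budget, in which case the linear program is solvable by a greedy (fractional-knapsack) allocation. All numbers are invented.

```python
# Illustrative sketch of a vectorless worst-case bound in its simplest
# (DC, linear) form: maximize sum(s[k]*i[k]) subject to
#   0 <= i[k] <= i_max[k]  and  sum(i) <= total_budget,
# where s[k] is block k's drop sensitivity (V/A) at the node of
# interest. With one budget constraint and box bounds, the greedy
# allocation by descending sensitivity is optimal. Numbers are invented.

def worst_case_drop(s, i_max, total_budget):
    """Greedy (fractional-knapsack) solution of the small LP above."""
    drop, remaining = 0.0, total_budget
    for sens, cap in sorted(zip(s, i_max), reverse=True):
        take = min(cap, remaining)   # spend budget on most sensitive block
        drop += sens * take
        remaining -= take
        if remaining <= 0:
            break
    return drop

# Three blocks: sensitivities in V/A, per-block caps in A, 3 A budget.
print(worst_case_drop([0.03, 0.01, 0.02], [2.0, 2.0, 2.0], 3.0))
```

The point of the exercise is that the result is an upper bound over every current waveform satisfying the constraints, which is exactly the guarantee that vector-based analysis of “typical” workloads cannot provide.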

Mentor has a new white paper, Vectorless Verification of IC Power Delivery Networks, which covers the topic in a lot more detail. The author is Farid Najm of the University of Toronto. It can be downloaded here.


More articles by Paul McLellan…


Is Smartphone Market Maturing?

by Pawan Fangaria on 02-17-2014 at 12:00 pm

Yes and no, in my view. Yes to a certain extent, considering that most people in the developed world have more than one phone (maybe with dual SIM cards); and no, considering the vast untapped market in the third-world countries of Asia and Africa. In India, much of the population (those who can afford a phone) has a phone, but not a smartphone, and the same may be the case in many other countries. So what these countries require is an affordable smartphone, a need that has been well recognized by Chinese companies (and by a few companies in India as well, though many of their components are sourced from China).
Continue reading “Is Smartphone Market Maturing?”


HDMI, DisplayPort, MHL IPs + Engineering Team = Good Move

by Eric Esteve on 02-17-2014 at 10:18 am

This news is certainly not as striking as the acquisition of MIPS by Imagination, or of Arteris by Qualcomm… but it shows that Cadence is building a complete interface IP portfolio, brick by brick. The result is a wall being built across Synopsys’ road to monopoly and complete success in the interface IP market. In the HDMI and DisplayPort IP segment, the two big names are Synopsys and Silicon Image, with Transwitch coming third, quite far behind the two leaders. Let’s hope, for Cadence’s sake, that this lagging position was due to a lack of investment rather than the quality of the engineering team. In that case, the strong motivation and deep pockets of Cadence should help the company compete head to head with Synopsys in the near future, in an IP market segment where Cadence so far had no product to offer… Thus, we think these asset acquisitions will generate new IP sales for Cadence. To forecast the volume of these IP sales, it is wise to look at the starting point: the latest available business figures for Transwitch.

In fact, Transwitch has been in Chapter 11 since November 21, 2013, and the company website has been hacked. But if you keep searching, you can find Transwitch’s latest quarterly and annual reports. I read the complete 2012 annual report and found that you have to look under the “Customer Premise Equipment” product line to see where the IP and services revenue is located:

It seems that in 2013, Transwitch finally decided to call a spade a spade and renamed this product line “IP and service revenue,” as we can see in the picture below, extracted from the last published quarterly report.

Thus, IP and services generated $3,676K of revenue in 2012, and $2,868K during the first half of 2013. Since we don’t know the split between IP and services (or what type of product the service revenue relates to), we have to dig into another source, also from the 2012 annual report: the “Consolidated Statement of Operations.” There we find the “Cost of service revenue” line ($1,274K in 2012). This is important for discriminating between “service” and “IP”: IP is developed by the R&D team, so the cost of IP development is classified as R&D cost, while service-related costs are reported separately. We then have to make an assumption, say that design services generate a 50% gross margin. This puts service revenue at roughly $2.5 million in 2012, leaving a little over $1.1 million, at most, for IP revenue. Both figures fit within the $3,676K ranked under Customer Premise Equipment, i.e. HDMI, DisplayPort, MHL, HDPlay, and Ethernet IP sales in 2012.
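The back-of-envelope split above can be sketched in a few lines of Python. The report figures are from Transwitch’s 2012 annual report; the 50% gross margin on services is purely our assumption:

```python
# Estimate Transwitch's 2012 IP vs. service revenue split.
# Reported figures are in dollars; the 50% GPM is an assumption, not a reported number.

cost_of_service = 1_274_000    # "Cost of service revenue" line, 2012 annual report
combined_revenue = 3_676_000   # "Customer Premise Equipment" line (IP + services), 2012
assumed_gpm = 0.50             # assumed gross margin on design services

# With margin = 1 - cost/revenue, revenue = cost / (1 - GPM):
# a 50% margin implies service revenue of roughly twice its cost.
service_revenue = cost_of_service / (1 - assumed_gpm)
ip_revenue_max = combined_revenue - service_revenue

print(f"Estimated service revenue: ${service_revenue / 1e6:.2f}M")
print(f"Implied ceiling on IP revenue: ${ip_revenue_max / 1e6:.2f}M")
```

A higher assumed margin on services would shrink the implied service revenue and leave more room for IP, so the IP figure is best read as an order-of-magnitude estimate.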

If we compare the revenue generated by these various IPs with Synopsys’ or Silicon Image’s HDMI revenue, Transwitch is clearly below, its HDMI IP revenue being four times lower than Synopsys’ (and six or seven times lower than Silicon Image’s). Nevertheless, the engineering team will now be part of a far healthier company, able to make the right investments to target the latest technology nodes, pay for shuttles, and develop demonstration boards. In short, to invest upfront to enhance product quality, then invest again to promote the IP so that license sales follow. It is no surprise that even the best product doesn’t really sell until you develop the right marketing plan, reposition the product if needed, and can rely on a strong sales network to reach customers worldwide.

If you take a look at the new Cadence IP portfolio, you can see an offering as wide as that of the company’s direct competitor. How long will it take Cadence to generate the same level of interface IP revenue as Synopsys? Some time… but maybe not that long.

Anyway: HDMI, DisplayPort, MHL IPs + Engineering Team = Good Move

Eric Esteve from IPNEST

More Articles by Eric Esteve…..



Dr. Cliff Hou, TSMC VP of R&D, Keynote
by Daniel Nenni on 02-16-2014 at 9:00 am

This will be my 30th Design Automation Conference. I know this because my first DAC was the same year I got married, and forgetting how many years you have been married can cost you half your stuff. I have known Cliff Hou for half of that time, and he has proven to be one of the most humble and honorable men I have worked with.

Cliff started at TSMC in the PDK group and produced the first TSMC Reference Flow, which really was the starting point for the fabless semiconductor ecosystem (the Grand Alliance) that we have today. Cliff then took over the TSMC IP group before becoming Senior Director of Design and Platform, which included the PDK, IP, and other design enablement groups inside TSMC. In 2011 Cliff was appointed TSMC’s Vice President of Research and Development. Clearly, Dr. Cliff Hou is a rising star in the semiconductor industry, and it has been an honor to work with him.

Cliff was our choice to write the foreword to the book, “Fabless: The Transformation of the Semiconductor Industry” as he and TSMC led this transformation. The foreword alone is worth the price of the book and I can’t wait to get Cliff to sign a copy for me at #51DAC where he will be keynoting:

Industry Opportunities in the Sub-10nm Era

The human thirst for connectivity and experience, as enabled by the electronics industry and the ongoing march of Moore’s Law, has already brought, and will bring even more, profound changes in the way we interact with the world and each other. This profound enhancement of the human experience enabled by constant mobile connectivity, the Cloud, and sensors, brought to an ever-widening worldwide audience, will bring untold opportunity to all of us here at DAC.

All of these changes demand continued chip and wafer-based scaling to deliver the power and performance necessary to enable wondrous, new applications. In less than two years we’ll be in production at 10nm, and shortly after 7nm, all made possible by a “Grand Alliance” of design ecosystem, equipment and material suppliers. At the same time, a new paradigm is being realized: heterogeneous silicon integration combining chips from multiple process technologies with 3D packaging to deliver compelling economics for a “System in a Si Superchip.”

New design techniques will be required for those applications to become reality, including how 10nm and 7nm will support those requirements, along with new manufacturing techniques and the benefits they will provide. The introduction of 10nm and 7nm processes will alter today’s ecosystem while opening greater EDA and IP opportunities, and will present new system and chip design challenges such as near-threshold design, thermal and battery limitations, and 3D IC considerations.

IC designers, ecosystem providers and foundries have been committed to open innovation and mutually beneficial teamwork for many process technology generations, but success in the sub-10nm era will require unprecedented levels of collaboration and cooperation between all of us here at DAC. Our teamwork will drive industry progress, and the more we “collaborate to innovate,” the more successful our customers and all of us will become.

More Articles by Daniel Nenni…..



Speeding Up AMS Verification by Modeling with Real Numbers
by Daniel Payne on 02-15-2014 at 7:00 pm

My first introduction to modeling AMS behavior with a language was back in the 1980s at Silicon Compilers, using the Lsim simulator. Around the same time, the VHDL and Verilog languages emerged to handle the modeling of both digital and some analog behaviors. The big reason to model analog behavior with a language is improved simulation speed, which boosts productivity in both the design and verification phases; the challenge is the trade-off in accuracy versus a reference like SPICE circuit simulation.
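To illustrate the idea behind real-number modeling, here is a minimal sketch in Python (rather than the SystemVerilog/Verilog-AMS real-number constructs an actual flow would use): an op-amp buffer reduced to a single-pole response, advanced in discrete time steps the way a real-number model evaluates in an event-driven simulator. The gain, bandwidth, and pulse timing are invented for the example:

```python
# Sketch of real-number modeling: an op-amp buffer abstracted to a one-pole
# transfer function, evaluated at discrete time points instead of being
# solved as a full transistor-level circuit. All parameters are hypothetical.
import math

GAIN = 1.0    # DC gain of the buffer (hypothetical)
F3DB = 1.0e6  # -3 dB bandwidth in Hz (hypothetical)
DT = 1.0e-8   # 10 ns evaluation time step

def step_model(vout, vin, dt=DT):
    """Advance the one-pole model by one time step (backward-Euler update)."""
    tau = 1.0 / (2 * math.pi * F3DB)   # pole time constant
    alpha = dt / (tau + dt)            # per-step smoothing factor
    return vout + alpha * (GAIN * vin - vout)

# Drive the model with a 1 V pulse and record the response.
vout = 0.0
trace = []
for i in range(400):
    t = i * DT
    vin = 1.0 if 0.5e-6 <= t < 2.5e-6 else 0.0  # 2 us input pulse
    vout = step_model(vout, vin)
    trace.append(vout)

# The output slews toward the pulse level with the pole's time constant,
# approximating what SPICE would show for the full circuit.
print(f"peak output: {max(trace):.3f} V")
```

The speed win comes from replacing the SPICE matrix solve at every time point with this kind of cheap algebraic update; the accuracy cost is that second-order effects (slew limiting, output loading, nonlinearity) are gone unless explicitly added to the model.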


Pulse response of an Operational Amplifier: SPICE in blue, Real Number model in orange