
Microprocessor Test and Verification 2012

by Paul McLellan on 12-05-2012 at 5:43 pm

Next week, December 10-12th, is the Microprocessor Test and Verification workshop (MTV 2012) in Austin, Texas (where DAC will be next year, of course). After lunch on Monday there is a panel session on the effectiveness of virtual prototyping entitled "When simulation suffices, who needs FPGA or emulation?" Bill Neifert, the CTO of Carbon Design Systems, is one of the panelists.

The panel will be moderated by Sanjay Gupta of Freescale. The other panelists besides Bill are Juergen Jaeger of Cadence, Andrew Dauman of Synopsys and Mike Pedneau of Jasper.

Bill will, of course, address the effectiveness of virtual prototypes. After all, when it comes to systems with a large software component as well as a large hardware component, it is clear that simulation doesn’t suffice and other approaches are essential. Booting Android on an RTL simulation is basically infeasible.


Formal Analysis of Security Data Paths

by Paul McLellan on 12-05-2012 at 5:07 pm

One challenge with security in systems is to ensure that there are no backdoors, whether accidentally or maliciously inserted. Intel, ARM and others offer various forms of trusted execution technology. Under the hood these are implemented by dividing the design into two parts, normal and secure, with physical separation on the chip. A minimal secure OS runs on the secure part of the chip, while a full-featured operating system runs on the normal part to deliver the main functionality of the system.

At the Haifa Verification Conference in Israel last month, Ziyad Hanna and Jamil Mazzawi of Jasper presented a paper on formal analysis of these types of design. The hardware is vulnerable if it turns out that secure areas can be accessed without going through the appropriate encryption-protected paths. A modern SoC is very complex and contains a lot of IP that may not be well understood by the designers, which can create unexpected paths into secure areas. Additionally, test logic creates paths to the outputs (so that the design can be tested) which may themselves represent security vulnerabilities.

Today, people look at these issues using ad hoc approaches and inspect them manually. This covers only a small subset of the possible traces, and there is no sense in which the analysis is complete.

A better approach is to use formal techniques to specify the secure area(s) and the illegal sources and destinations for the data. Waveforms from the formal tool then show any illegal propagations found. This reduces the analysis from months to weeks and exhaustively analyzes all paths.

For example, perhaps there is a secure register that contains data that may not be accessed outside the secure area without first being encrypted (DRM keys perhaps). It is necessary to prove that there is no path to the system bus that bypasses the encryption block, and that there is no path in the other direction whereby the secure register can be directly over-written (see diagram).

The heart of the approach is path sensitization technology, which takes a source signal and a destination signal. There is no SystemVerilog assertion equivalent for this property. To prove it, the data at the source of the path is tainted, and Jasper then formally checks whether tainted data can be observed at the destination; if it can, a waveform shows how the data propagates from source to destination. This identifies paths that carry data to or from secure areas through back-channels or other unapproved routes.
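
To make the idea concrete, here is a minimal sketch of taint-style path sensitization over a design abstracted to a signal graph, showing how an illegal route (for example through test logic) that bypasses the encryption block gets flagged. This is purely illustrative of the concept: Jasper's engine works formally on the actual RTL, not on a static graph, and all signal names here are hypothetical.

```cpp
// Conceptual sketch of taint propagation over a netlist-like signal graph.
#include <iostream>
#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

using Graph = std::map<std::string, std::vector<std::string>>;

// Returns true if taint injected at `source` can reach `dest` without
// passing through a node in `sanitizers` (e.g. the encryption block).
bool taintReaches(const Graph& g, const std::string& source,
                  const std::string& dest,
                  const std::set<std::string>& sanitizers) {
    std::set<std::string> visited{source};
    std::queue<std::string> work;
    work.push(source);
    while (!work.empty()) {
        std::string node = work.front();
        work.pop();
        if (node == dest) return true;        // illegal propagation found
        if (sanitizers.count(node)) continue; // taint cleared at this node
        auto it = g.find(node);
        if (it == g.end()) continue;
        for (const auto& next : it->second)
            if (visited.insert(next).second) work.push(next);
    }
    return false;                             // no path: property proven
}

int main() {
    Graph design = {
        {"secure_reg",   {"encrypt_in", "dft_scan_mux"}},  // test-logic path!
        {"encrypt_in",   {"encrypt_out"}},
        {"encrypt_out",  {"system_bus"}},
        {"dft_scan_mux", {"system_bus"}},
    };
    bool leak = taintReaches(design, "secure_reg", "system_bus",
                             {"encrypt_in"});  // encryption sanitizes taint
    std::cout << (leak ? "leak path found\n" : "no illegal path\n");
}
```

Run on this toy design, the check reports a leak: the DFT scan path reaches the system bus without passing through the encryption block, which is exactly the kind of test-logic vulnerability described above.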


This approach can be used to verify requirements that are not expressible as regular SVA assertions, and are therefore not possible to check with standard formal verification tools.

The slides from the presentation, which contain more detail and many more examples, are available from the Jasper website here.


Patents: Who to Sue?

by Paul McLellan on 12-05-2012 at 12:52 pm

In an interview (probably $) with the Wall Street Journal, Eric Schmidt, the chairman (and ex-CEO) of Google, said: “The adult way to run a business is to run it more like a country. They have disputes, yet they’ve actually been able to have huge trade with each other. They’re not sending bombs at each other. … It’s extremely curious that Apple has chosen to sue Google’s partners and not Google itself.”

This is actually quite a naive view both from an intellectual property (patent) point of view and also from a business point of view.

Many years ago I had to learn quite a lot about patents in mobile, since the GSM standard ended up containing a lot of patented technology. Indeed, there was the concept of “essential” and “non-essential” patents: an essential patent is one that you could not avoid infringing if you were compliant with the GSM standard. For example, Philips had a patent on the specific parameter values used in the vocoder, so it wasn’t possible to use those values, specified in the standard, without infringing Philips’s patent.

I was working for VLSI Technology and nobody came after us for patent infringement. There are two reasons for this, which Eric Schmidt doesn’t seem to understand. The first is that we built chips. Since the patents typically covered a mobile phone with certain capabilities, we didn’t infringe them directly: we didn’t make mobile phones. Potentially we infringed all sorts of semiconductor manufacturing patents, but that was a different issue. In the same way, although a mobile phone containing the Android operating system may infringe some Apple patent, almost certainly the operating system itself does not, since it is not a device.

The second reason nobody would come after VLSI Technology then or Google now is that you always want to go after the furthest downstream product you can. A royalty of 2% is worth more on a more expensive product, and even if the royalty is fixed (for instance, Philips wanted $1 from every mobile phone for the vocoder license and other patents) it is easier to get $1 from a $100 product than from a $10 product. Since Google gives away Android, it is not clear (perhaps even to Google) how much money it generates on which to base any royalty anyway.

Interestingly, one of the Android licensees is Samsung. Apple is suing Samsung, but Apple is also Samsung’s biggest semiconductor customer. The two companies would seem to completely embody Eric Schmidt’s statement that “they have disputes, yet they’ve actually been able to have huge trade with each other.”


Double Patterning Exposed!

by SStalnaker on 12-04-2012 at 7:15 pm

Wanna become the double patterning guru at your company? David Abercrombie, DFM Program Manager for Calibre, has written a series of articles detailing the multifaceted impacts of double patterning on advanced-node design and verification. For designers struggling to understand the complexity and nuances of double patterning, these articles provide a well-lit roadmap to not only how double patterning will change the design process, but also how to anticipate and mitigate its potentially unwelcome effects, such as lengthy debugging of double patterning errors or unforeseen influences on timing and performance. A must-read for every designer who needs to understand double patterning design requirements and verification.


SystemC vs C++ for High Level Synthesis

by Paul McLellan on 12-04-2012 at 4:00 pm

One of the decisions that needs to be made when using high-level synthesis (HLS) in general and Catapult in particular is what language to use as input. The choice is C++ or SystemC. Of course at some level SystemC is C++ with added libraries and templates, but in fact the semantics of the two languages end up being very different.

The biggest difference is in the area of hierarchy, concurrency, clocks and timing. C++ is an untimed approach with no explicit hierarchy, IO protocol, clocks or resets (that is, none of these appear in the code); during HLS they are added by the synthesis tool. C++ datatypes also work well for bit-accurate modeling of hardware. Because of the lack of explicit hierarchy, inter-block IO and clocks, C++ is actually a higher level of abstraction in coding style, with no need for such hardware details to be made explicit. The architecture is implicit in the coding style and derived through the HLS process and constraints. However, it is not possible to specify detailed pin-level signal behavior in the source, or to control on which precise clock cycle something occurs. Another benefit is that C++ simulation is much faster than SystemC (10-100 times).
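
As a concrete illustration, here is a minimal untimed C++ kernel in this coding style: no clocks, resets or hierarchy appear in the source, and bit widths are made explicit only through bit-accurate datatypes. It assumes the Algorithmic C integer header (ac_int.h); the function itself is a hypothetical sketch, not a Catapult-validated design.

```cpp
// Untimed C++ HLS-style kernel: no clocks, resets, or module hierarchy in
// the source -- the HLS tool adds those during synthesis.
#include <ac_int.h>

typedef ac_int<8, false>  pixel_t;   // 8-bit unsigned, bit-accurate
typedef ac_int<10, false> accum_t;   // 10 bits holds 3 x 255 = 765

// 3-tap average: a pure function of its arguments, untimed and unpinned.
pixel_t average3(pixel_t a, pixel_t b, pixel_t c) {
    accum_t sum = accum_t(a) + accum_t(b) + accum_t(c);
    return pixel_t(sum / 3);         // truncation back to 8 bits is explicit
}
```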

On the other hand, SystemC makes clocks and resets explicit. The design is represented as a hierarchy of modules using threads to express concurrency. When bit-accurate modeling of pins is needed, this is done by writing at the RTL level. All those clocks and threads make the simulation much slower, since it operates at a much lower level of granularity and there is a lot of overhead associated with thread context switching. Still, for most SystemC (provided you don’t write it entirely like RTL, which is very suboptimal for HLS since it overconstrains the tool), simulation speeds are thousands of times faster than RTL.
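
For comparison, here is the same small function written in SystemC style, with the module hierarchy, clock and reset explicit in the source. This is a minimal sketch under the usual synthesizable-subset conventions, not a complete Catapult design.

```cpp
// The same function in SystemC coding style: hierarchy, clock and reset
// are explicit in the source.
#include <systemc.h>

SC_MODULE(Average3) {
    sc_in<bool>          clk, rst;
    sc_in<sc_uint<8> >   a, b, c;
    sc_out<sc_uint<8> >  result;

    void run() {
        result.write(0);                   // reset behavior
        wait();
        while (true) {
            sc_uint<10> sum = a.read() + b.read() + c.read();
            result.write(sum / 3);         // one result per clock
            wait();                        // explicit clock-cycle boundary
        }
    }

    SC_CTOR(Average3) {
        SC_CTHREAD(run, clk.pos());        // concurrency via a clocked thread
        reset_signal_is(rst, true);        // explicit reset
    }
};
```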

For HLS, bit-accurate datatypes are a necessity: the functional simulation of the source must match the simulation of the generated RTL, or there is no way to verify the design. SystemC contains its own datatypes (part of the SystemC standard), and Calypto also provides algorithmic datatypes that are easier to use. These come in the form of a C++ header file that can be used in SystemC designs too. One big advantage of the algorithmic datatypes is that simulation is 30-60 times faster than with the native SystemC datatypes.

By using an appropriate coding style it is possible to write synthesizable code where the kernel of the algorithm is unchanged and can be switched between C++ and SystemC, and between different families of datatypes.
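
One common way to achieve this, sketched below under the assumption that both datatype headers are available, is to route all types through a single switchable header so that the kernel itself never names a datatype family. This is a hypothetical illustration; real projects may template the kernel instead.

```cpp
// Hypothetical sketch: select the datatype family at compile time so the
// algorithm kernel compiles unchanged against either family.
#ifdef USE_SYSTEMC_TYPES
  #include <systemc.h>
  typedef sc_uint<8>        pixel_t;
  typedef sc_uint<10>       accum_t;
#else
  #include <ac_int.h>
  typedef ac_int<8, false>  pixel_t;
  typedef ac_int<10, false> accum_t;
#endif

// The kernel never names a datatype family directly, so the same source can
// sit inside a plain C++ testbench or the body of a SystemC thread.
pixel_t average3(pixel_t a, pixel_t b, pixel_t c) {
    accum_t sum = accum_t(a) + accum_t(b) + accum_t(c);
    return pixel_t(sum / 3);
}
```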


Of course, when the design is synthesized, the core HLS capabilities are the same: Catapult supports both languages, as does the sequential equivalence checking tool SLEC. Depending on timing (and technology) constraints, resource mapping (scheduling, allocation etc.) is done to meet the target. This may involve unrolling loops to increase concurrency for high performance (at the cost of power and area), versus keeping everything serial so that the design is slow but does not require a lot of resources.
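
The loop below sketches how that trade-off shows up in practice: the same rolled source can be left serial (one shared adder over eight cycles) or fully unrolled by the tool (eight adders in one cycle). The unrolling decision is normally made through HLS tool directives rather than anything in the source; the code and comments here are illustrative only.

```cpp
// Rolled loop: as written, one adder can be shared across eight cycles
// (small area, low performance). Directed to unroll fully, the tool builds
// eight adders operating in one cycle (high performance, more area/power).
#include <ac_int.h>

typedef ac_int<16, false> sample_t;
typedef ac_int<19, false> total_t;   // 19 bits holds 8 x 65535

void sum8(const sample_t in[8], total_t &out) {
    total_t acc = 0;
    for (int i = 0; i < 8; i++) {    // candidate loop for unrolling
        acc += in[i];
    }
    out = acc;
}
```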

Calypto has a webinar coming up on December 13th at 11am to go over all this in more detail. For more information and to register go here.


Mixed-Signal Methodology Guide: Design Management

by Daniel Payne on 12-04-2012 at 11:04 am


I reviewed the book Mixed-Signal Methodology Guide, published by Cadence, in August of this year, and decided to follow up with one of the authors, Michael Henrie of ClioSoft, to learn more about the importance of design management for AMS. Michael is a Software Engineering Manager at ClioSoft and has worked at Zarlink Semi, Legerity, Agere Systems and Lucent Technologies.


Don’t miss this Panel! Platform & Subsystem IP: Trends and Realities

by Eric Esteve on 12-03-2012 at 10:01 am

If you pass by Grenoble tomorrow (Tuesday, December 4th) and go to IP-SoC 2012, then you should attend this panel at 4pm in the Auditorium (you can’t miss it: it’s the largest room at the registration level).

If you are a designer, architect, project manager, or in marketing… working for a chip maker, please prepare questions! The topic is hot, but at least 50% of the panel’s success will come from questions from the “customers”…

…the remaining 50% or less is up to us (Hal Barbour, CEO of CAST; Peter Hirt, Director of IP Procurement & Partnership at STM; Martin Lund, Senior VP, Research and Development, SoC Realization Group at Cadence; Jack Browne, Senior VP of Marketing at Sonics; Bill Finch of CAST; and me), and we are working on it, dropping in some ideas to start the discussion!

Panel: Platform & Subsystem IP: Trends and Realities

Since the mid-1990s, when the concept of reusable IP cores first came into being, the proposal has always been that it is more economical to use and reuse IP than to always design chips from a clean start. It is also faster to market and more resource efficient. Many 3rd-party IP companies came into being to supply IP that could be considered standard, or that was so complex to design that doing the same function over and over was simply not practical. An example of the former would be a function like an Ethernet MAC; an example of the latter would be something like a processor. Over the years this has proven to be a very successful practice for both chip designers and IP vendors and is one of the cornerstones of today’s business.

In the last few years, with the exponential growth in the gates available from the silicon suppliers, the pressure to use those gates to provide much more advanced functionality per chip has grown more and more intense. Functionality that is common in today’s smartphones, for example, was out of reach only a few short years ago. Getting to market in only a few months, at price points that are astoundingly low, is necessary for success. Many believe that this is the result of a shift to designing around reusable platforms and whole subsystems, which gives entirely new meaning to the reusable IP concept. The idea of building a platform around which several different sets of functionality could be brought to market was once viewed as something only the largest companies could engineer. Lately, there has been much hype about 3rd-party vendors expanding their offerings to at least the subsystem, if not the platform, level. The idea is to bring to the general market the advantages of higher levels of design reuse, in effect recreating the success of IP cores at the next level. Is this really coming to pass, or is it just industry hype and clever marketing? Do customers really want this, and can the industry really deliver the kinds of flexibility customers will demand?

This panel attempts to examine the trend and discuss the realities of today’s platform IP market in addressing the requirements of both ASIC and FPGA designers.


Arteris’ Answer to Sonics: Compare Actual NoC Performance (in Silicon-Proven SoCs), Not Potential, Unproven NoC Performance!

by Eric Esteve on 12-02-2012 at 5:06 am

It seems that the Arteris vs. Sonics war, initiated by Sonics in 2010 on the legal battlefield, is now continuing on the marketing field. As far as I am concerned, I prefer the latter, as I am an engineer and not a lawyer, and I must say that playing on the marketing field allows both companies to showcase the most attractive features of their products. Such a battle is good for design engineers and decision makers, as they can learn about the state of the art for products like crossbars, silicon fabrics and Networks-on-Chip (NoC).

A couple of weeks ago, Sonics board member Jim Hogan used deepchip.com as a marketing battlefield to mount quite a strong offensive: no fewer than 6 articles posted describing the Network-on-Chip (NoC) market, common definitions of terms, the make-or-buy decision, and more, including an article dedicated to a comparison of Sonics SGN vs. Arteris FlexNoC vs. ARM NIC-400. Today we will focus on this comparison table, as Kurt Shuler (VP of Marketing at Arteris) has proposed an answer to this specific article. Kurt has reviewed the comparison table and provided point-by-point corrections where needed… in fact on almost every table entry. I strongly suggest you take a close look at Kurt’s answer and the corrected table here, so you can make up your own mind. Here is a summary of the key points that Kurt highlighted in his article.

The first, and probably the most damning, is that “Jim Hogan’s NoC table compares silicon-proven Arteris FlexNoC to unproven Sonics SGN”. Arteris FlexNoC has been integrated in Systems on Chip (SoCs) developed by Texas Instruments, Qualcomm and Samsung (to name just a few from the long list of Arteris customers; see here), and these SoCs are now in production, while Sonics SGN, although a promising product, is still in the pre-adoption phase or in evaluation by potential customers. That means the comparison is made between potential performance (Sonics SGN) and actual performance (Arteris FlexNoC). If you prefer: Jim Hogan has built a comparison table using performance figures coming from a slide show on one hand, and from silicon in production on the other…

Then, Kurt explains “that NoC technology is now being adopted by all semiconductor makers creating SoCs with sufficient complexity. And it’s even clearer that Arteris FlexNoC is the gold standard for NoC interconnect fabric IP.”
And finally, he asks the right question:

“Why have Samsung, Qualcomm, TI and Freescale adopted Arteris FlexNoC as their corporate-standard interconnect fabric IP for their most important chips?”

The answer should not surprise SoC design engineers or project managers: “Innovative technology, excellent engineering, a robust product roadmap and customer satisfaction always speak louder than marketing!” May I add my two cents? Even if a company can legitimately claim to sell the best product, with best-in-class technical support, marketing can be useful to share those facts with the rest of the world, and not let the competition be the only one to occupy the field. Moreover, a good marketing campaign is always better than any kind of legal battle: a lot cheaper for the company, and definitely more useful, as it educates and ultimately convinces your potential customers…

Eric Esteve


ST Microelectronics: Strategic Options

by Paul McLellan on 12-01-2012 at 5:11 pm

STMicroelectronics announced yesterday that it would hold a conference call on December 10th to announce its strategy going forward. ST has been struggling for the last couple of years, with revenues down year over year. From 2010 to 2012 (the last figure an estimate, of course) it did $10.3B, $9.6B and $8.4B, so it has shrunk nearly 20% in three years. Last quarter alone it lost $500M. In September they announced planned production stoppages at their Crolles fab (just outside Grenoble) and their Catania fab (in Sicily).

ST has two big problems. The first is that its stronghold market is Europe and Europe has been an especially weak market for the last few years. There is not a lot that ST can do about that. They are not entirely European, with an international business, but they are perceived as the European semiconductor champion.

The second problem that they have is ST-Ericsson. This was a combination of Ericsson Mobile Platforms, Ericsson’s unsuccessful attempt to build a cellular IP licensing company, with ST’s own wireless business and Philips/NXP’s wireless business (some of which was the old VLSI Technology wireless business that I used to work with, absorbed into Philips Semiconductors when they bought VLSI). There has been a huge investment in this business since it was created a few years ago but it continues to be a big drain on profitability for the parent company.

As CEO Carlo Bozotti said recently when announcing Q3 results: “Our Wireless segment delivered strong progress during the third quarter; however, the segment’s operating loss and negative cash flows still remain significant.”

Tied up with the European aspect is that ST (and presumably ST-Ericsson) had major customers in Nokia and Sony-Ericsson. To put it mildly, Nokia has not been doing well lately. And Ericsson bailed out of Sony-Ericsson and now it is pure Sony, so no inside track there.

The other problem with the wireless business in general is that the merchant part of the market is not that attractive. Apple designs its own application processors and gets its baseband and wireless chips from Qualcomm. Samsung largely designs its own chips too. The remaining part of the market, smartphones from second-tier suppliers and non-smartphones (so-called feature phones), does not generate a lot of profit, so it is likely to be hard to make good margins there. It is a commodity business. Texas Instruments recently announced layoffs in the equivalent part of its business and is refocusing its OMAP strategy on other embedded markets such as automotive, industrial and medical.


The received wisdom seems to be that ST is going to put ST-Ericsson up for sale, although who would buy it is an interesting question. Simply changing the owner doesn’t change the fact that it is in a low-margin market with intense competition and a flawed lead customer. Maybe they bite the bullet and just shut it down. At least that would soon stop it hemorrhaging money, and the markets would probably give ST instant credit for doing so. As the CFO of VLSI put it to me when I was running Compass and trying to find a buyer for the business: “Wall Street will give me credit just for shutting you guys down. If we get some money too, that is icing on the cake.”