

Semiconductor IP Library QA Just Got Easier
by Daniel Payne on 10-17-2013 at 12:05 pm

Imagine that you’re working in a CAD group and have just received a new library of a few hundred IP blocks, and you need to know whether these blocks conform to your design and quality standards. There are many questions about library and IP quality:

  • Are all of the views consistent (layout, schematic, HDL, test, timing, SPICE)?
  • Are there any anomalies in any view?
  • How much time can I spend doing QA on this library?




How to Simplify Complexities in Power Verification?
by Pawan Fangaria on 10-17-2013 at 11:00 am

With multiple functionalities added into a single chip, be it an SoC or an ASIC, maintaining low power consumption has become critical for any design. Various techniques at both the technology and design level are employed to hit the low power target: thinner gate oxides in transistors, different sections of the design running at different voltage levels, the design architected into multiple power domains that can be switched on or off as needed, and so on. To make smooth transitions between different voltages, circuit elements such as level shifters are used. Similarly, retention cells are used to keep the state of a switched-off power domain intact. However, designs with multiple power domains are prone to subtle errors that are easy to commit but very difficult to detect with conventional tools and methodologies such as SPICE simulation or P&R flows (P&R tools mostly work at the gate level).


[A transistor connected to different VCCs – susceptible to performance degradation]

As an example, in the above figure, a transistor is connected to VCC1 and VCC2 which can be at the same voltage but in different domains, hence switching on and off at different times. This issue may not be detected by usual SPICE simulation. Again, if VCC2 has a higher voltage than VCC1, the gate oxide becomes vulnerable to breakdown. Such issues may not cause immediate failure of the circuit, but can lead to performance degradation and affect reliability in the long run.


[An IP connected incorrectly to external power terminals]

In the case of SoCs containing multiple IPs, things get further complicated because the IPs can be at different voltage levels and in different power domains, and they need to be hooked up correctly in the larger implementation. Power domain crossing errors are very prevalent in these cases. In the above figure, although the voltages internal to the IP block are consistent, externally it has been hooked up incorrectly. These kinds of mistakes can expose the device to multiple issues, such as shifted threshold voltages and switching times, ultimately leading to degradation of the whole chip.

To overcome these issues, Calibre PERC from Mentor Graphics provides robust transistor-level power intent verification by leveraging the power intent information captured in UPF (Unified Power Format), which is an integral part of the overall design flow. Calibre PERC identifies the right voltage combinations and assigns them as per the design’s power intent, thus improving verification coverage and robustness.

Calibre PERC is able to leverage design flows with or without UPF to understand the power intent down to the transistor level and then apply reliability verification at that level. It ensures correct implementation of low-power rules and correct use of level shifters and other protection circuitry. Calibre PERC also detects all thin-oxide transistors in the circuit and takes extra care to catch voltages that could lead to oxide breakdown.

Users can easily insert reliability verification into their existing design flows with Calibre PERC as part of the integrated Calibre platform for cell, block, and full-chip verification. The rules can be written in standard SVRF and TVF formats, maintaining compatibility with existing designs and foundries.

To reduce debugging complexity, Calibre PERC eliminates false errors by recognizing particular topologies (such as level shifters and isolation cells) used to enable power domain transitions at the right level. In the figure on the right, Calibre PERC identifies these structures as level shifters, and the errors on M2 and M3 can then be waived.


[Calibre RVE – Result viewing and debugging environment]

Calibre RVE is a novel result viewing and debugging environment which makes debugging reliability checks easy, quick and thorough.

I was impressed with the capabilities of Calibre PERC in power and reliability verification and with its easy integration into sign-off flows. A detailed analysis of power-related issues and how Calibre solves them is given in one of Mentor’s whitepapers here. An interesting read!




GSA hosting “Interface IP: Winners, Losers in 2013” from IPnest
by Eric Esteve on 10-17-2013 at 5:32 am

The GSA IP Working Group will meet today in San Jose, and the Group has asked IPnest to build a presentation dedicated to Interface IP. The timing was perfect, as I have just completed the “Interface IP Survey” version 5 and was able to use fresh market data. The IP Working Group will discover the IP vendor ranking, protocol by protocol, by IP license revenue, for USB, PCI Express, HDMI, SATA, MIPI, DisplayPort, Ethernet and DDRn Memory Controller. In fact, IPnest is the only analyst offering such granularity, and this approach has allowed it to build a large customer base, including IP vendors, ASIC design houses, foundries, fabless companies and IDMs. Ranking the numerous IP vendors by protocol is very useful, but not enough! Thus, IPnest has decided to also offer a competitive analysis, by protocol, as well as a five-year forecast. In other words, market intelligence, not just raw data!

Just take a look at these two pictures: the DDRn Memory Controller IP market analysis (2008-2012) and vendor ranking is complemented by a forecast (2013-2017) that splits the DDRn PHY and digital (memory controller) pieces, as this is the only way to understand this market segment’s dynamics. Because this survey is the fifth version, we think the forecast is now pretty accurate: IPnest has fine-tuned the various parameters year after year, using actual data from the past as a feedback loop to build the five-year forecast.

IPNEST has proceeded the same way with the various protocols surveyed, and we can tell that each of these protocols exhibits its own market dynamics. This justifies the initial approach: scrutinizing the Interface IP market protocol by protocol. Nevertheless, this market taken globally has grown continuously since 2005 (with a clear weakness in 2009, for a well-known reason) and will continue to grow until at least 2017. Moreover, IPnest thinks we are seeing the strongest growth rate now, in 2013, continuing into 2014 and 2015, with growth rates in the high teens. The IP market today is really amazing; the competition between Synopsys and Cadence is starting, as Cadence has decided to invest very seriously (Tensilica, Cosmic Circuits and Evatronix recently, Denali a while ago in 2010) to close the gap with Synopsys. How long it will take is a good question, but the only way to get a share of this fast-growing IP market was to invest resources and money, and the market size justifies acting this way.


The types of answers IPNEST customers find in the “Interface IP Survey” include:

  • 2013-2017 Forecast, by protocol, for USB, PCIe, SATA, HDMI, DDRn, MIPI, Ethernet, DisplayPort, based on a bottom-up approach, by design start by application
  • License price by type for the Controller (Host or Device, dual Mode)
  • License price by technology node for the PHY
  • License price evolution: technology node shift for the PHY, Controller pricing by protocol generation
  • By protocol, a competitive analysis of the various IP vendors: when you buy an expensive and complex IP, the price is important, but other issues count as well, such as:

    • Will the IP vendor stay in the market and keep developing new protocol generations?
    • Is the PHY IP vendor tied to a single ASIC technology provider, or does it support various foundries?
    • Is one IP vendor “ultra-dominant” in this segment, so that my chance of success is weak if I plan to enter this protocol market?

You probably understand better now why IPNEST is the leader in IP-dedicated surveys, enjoying this long customer list:

Synopsys, (US)
Cadence, (US)
Rambus, (US)
Arasan, (US)
Denali, (US) now Cadence
Snowbush, (Canada) now Semtech
MoSys, (US)
Cast, (US)
eSilicon, (US)
True Circuits, (US)
NW Logic, (US)
Analog Bits, (US)
Open Silicon, (US)
Texas Instruments, (US)
PLDA, (France)
Evatronix, (Poland)
HDL DH, (Serbia)
STMicroelectronics (France)
Inventure, (Japan) now Synopsys
“Foundry” (Taiwan)
GUC, (Taiwan)
KSIA, (Korea)
Sony, (Japan)
SilabTech, (India)
Fabless, (Taiwan)

Eric Esteve from IPNEST –
Table of Content for “Interface IP Survey 2008-2012 – Forecast 2013-2017” available here



How Asia Works, phase 2/3
by Paul McLellan on 10-17-2013 at 2:19 am

Success in manufacturing has two conditions: tariff barriers to shield the infant industries from external competition, and a rigorous focus on exports to ensure that manufacturers cannot just shelter behind the tariff barriers and reap monopoly profits inside the country. Each industry needs several companies to enter, so that it is possible to cull (or merge) the weak rather than have a national champion that must be supported for political reasons. So Toyota and Hyundai, TSMC and Huawei, Toshiba and Samsung are all international leaders these days, not many decades after they were nobodies, but originally they were not national champions, just one of many (well, mostly just a few). In the early days the focus is not so much on profitability as on learning, with the aim of becoming a global technology leader, which can only be measured on a global scale by export success; being successful internally usually just indicates being well-connected politically (sugar quotas, ethanol distillation, anyone?). Getting tax breaks, low-interest loans and foreign currency requires exports (which are also easy to measure), even though companies don’t want to export (it is far easier to sell low-quality goods internally if they can get away with it).

The third stage is handling finance. Don’t let the money in the country be invested abroad or leak into real estate. Force it into agriculture initially, then export-oriented manufacturing. Don’t deregulate and remove capital controls too soon. Otherwise, like Indonesia, everything goes into a real-estate and stock bubble, none of which serves the needs of the country as a whole. And eventually the bubble bursts and the banks are bankrupt. If the state controls the banks (through what it will let the banks do and what risks it will let the banks offload), it can direct investment.

The big reason that South East Asian countries are not successes is that they didn’t do the land reform, in most cases because the ruling elite owned a lot of the land and didn’t want to lose their cash cows. The alternative is the status quo, which is semi-permanent revolution and uprisings, since the peasants have nothing to lose and the landlords make all the money. That’s why you constantly read about riots in Thailand and the Philippines, for example. But there is no point in the peasants increasing productivity, since the landlords will simply capture the gains in increased rents, so agricultural productivity is low. That means at the country level there is no agricultural surplus both to feed the cities and to fund the transition to manufacturing.

Manufacturing is not always forced to be internationally competitive, or even internally competitive through multiple companies. So, for example, Malaysia’s Proton automobiles are not forced to be exported, and the steel from which they are made has to be imported, since Malaysia’s one steel company is also not internationally competitive and only makes low-grade steel good enough for building inside Malaysia (which is highly profitable due to the tariff barriers). The young Hyundai, by comparison, had to compete with Kia and some other Korean car companies you have never heard of. And they had to export. Hyundai’s plants are built at the side of the river, ready to load cars onto ships; Proton is just outside Kuala Lumpur, inland. Hyundai only got financing in proportion to how much it exported; Proton was a national champion and was not allowed to fail even though it was not competitive (aka failed).

Highly recommended. Amazon link here. I have no financial interest in this book.



Andes: the Biggest Microprocessor IP Company You’ve Never Heard Of
by Paul McLellan on 10-16-2013 at 3:06 pm

I wrote in April about Andes Technology, a microprocessor IP licensing company that even the person sitting next to me, a strategic marketing guy from Qualcomm, had never heard of. So, OK, if you read that earlier article you have at least heard of them.

Part of the reason you haven’t heard of them is that they are in Taiwan (in Hsinchu) and most of their initial business was with Taiwanese companies, especially Mediatek. But they have started to go global and now have 60 licensees including a couple each in Korea and Japan and some in China. They just recently closed their first US licensee. Their licensees have shipped over 300 million chips containing their cores.


They are not a one-trick pony either. There are cores from the low end, with a two-stage pipeline running at 250MHz, up to the high end, with an 8-stage pipeline running at close to a gigahertz. Their focus is on low power, and they claim better power numbers than ARM for equivalent performance.

The cores are not just RTL. They have another 60 partners in their ecosystem around the architecture. I won’t list them all, you’ll be glad to know, but they include many of the usual suspects: RTOSs, compilers, debuggers. And, of course, full Linux support.


The other reason that you haven’t heard of them is that their cores go into parts of a chip that are not visible. In a smartphone, all the apps are written to run on ARM, so it is not really feasible to use a different processor there. But all the other functions like wireless or GPS have processors in them, and that code is not visible to the end user, so it does not automatically have to be ARM. In fact, it is a level playing field in which the processor cores can compete on their merits. Also, the next big markets such as wearable computing and the internet of things (IoT) are even more focused on low power and, again, the actual architecture does not really matter (do you even know what the processor in Google Glass is?).

Another advantage that Andes has (along with some other competitors like Imagination’s MIPS) is that it is not ARM. “Can you imagine how hard it would be to negotiate with Xilinx if Altera did not exist?” one potential customer said. “That is like ARM.” At the MIPS event a couple of months ago, the MIPS CEO said much the same thing: customers want a competitor that produces technically good cores, rather than seeing ARM become another Intel. Although supporting multiple instruction set architectures (ISAs) is more expensive than supporting just one, it seems to be getting less so.

When I was at Ambit, customers would buy our synthesis software because it was good. But it was also effectively free to them: they figured that with competition they could get their Design Compiler license payments down by just as much. Andes is finding some of the same effect. By creating true competition to ARM, ARM cannot have a take-it-or-leave-it attitude.

So what sort of performance do they deliver? With a standard TSMC 40LP library (so not even 28nm) the N1337 delivers 908 MHz and 79 uW/MHz in 0.25mm2, which is 50% higher performance at one-third lower power and slightly smaller area than the competition. With a speed-optimized library it exceeds the gigahertz barrier (still in 40LP).

More information on the Andes website here.



Using HLS to Turbocharge Verification
by Paul McLellan on 10-16-2013 at 8:23 am

One of the benefits of using high-level synthesis is obviously the ease of writing some algorithms in SystemC since it is at a higher level than RTL (that’s why we call it high-level synthesis!). But a second benefit is at the verification level. Since a lot of the verification gets done at the SystemC level, less needs to be done at the RTL level.

The SystemC designs used in an HLS flow typically simulate 100-1,000 times faster than RTL, because the interfaces and timing are specified abstractly in the source rather than cycle by cycle. So verification done at the SystemC level is a lot more efficient than waiting until the RTL level (or, if you are not using HLS, you have no option, since RTL is then all you have).


UPDATE: webinar moved to November 5th

Calypto have a webinar on just this topic coming up next week. It is at 10am on Tuesday November 5th. The presenter is Bryan Bowyer who leads the product design team. Previously he was product manager for HLS at Mentor and has worked in the topic area for 13 years. The webinar is titled How to Maximize the Verification Benefit of High Level Synthesis with SystemC.

In the 50 minute webinar, Bryan will describe a verification approach that leverages SystemC simulation and HLS to reduce the RTL verification effort by 50%. He will describe how to write a bit-accurate SystemC design for HLS and how to use that model to improve specification functional coverage and avoid time-consuming debug at the RTL level.

Catapult did well in Cooley’s DAC report (here), where a dozen people picked it as a “best of DAC”, a lot more than the competition. I’m a bit more wary than John is of reading anything into this type of pop quiz. People who go to DAC are already a biased sample (more from the US, fewer from Japan and Europe, maybe a disproportionate number from Austin-based companies this year). Of the people who go to DAC, the people who send Cooley feedback are another small subset. But the comments from actual users are valuable. For example, one designer said: “I’ve had huge algorithms in SystemC and the RTL generated by Catapult simply PASSED in testbench verification FIRST time without any fixes!”

That is almost the perfect trailer for the webinar. Once again it is at 10am Pacific on November 5th. More details, including a link for registration, are here.



Always-on Context-aware Sensors in Your Phone
by Paul McLellan on 10-16-2013 at 8:00 am

Smartphones are smart but they are about to get smarter. The next big thing in mobile phones is to have a rich sensor environment: proximity, temperature and humidity, atmospheric pressure, light color, cover, gyroscope, magnetometer, accelerometer, ambient light, gesture and more. Some of these are already here, of course, and some don’t require much in the way of processing. But the big promise is to have always-on (so powered up even when the application processor is in standby mode) and context aware (pedestrian dead reckoning like Fitbit and Nike FuelBand but aware of whether you are running, walking, biking, on an escalator and so on). These will allow all sorts of new applications to be part of your smartphone, which since it is always with you means it can pull together data from lots of sensors to be even smarter: it knows when you are sleeping, it knows when you’re awake, it knows if you’ve been bad or good…well, maybe that’s a bit too far.

The challenge in making this a reality is that the sensor algorithm computation has to be really low power, consuming just 1-2% of battery life. For a typical smartphone battery with around 8,000 mWh of capacity, 1-2% is 80-160 mWh, which spread over a day of use works out to roughly 5-10mW.


A couple of years ago, QuickLogic decided to go after this market once they concluded that they could meet the power requirements. You can’t use the application processor’s main CPU (typically a high-end ARM these days) since it burns too much power and, to optimize the system design, it cannot always be on. But on the other hand, the sensor algorithms are in a state of flux, and even new capabilities using existing sensors may be required as an upgrade during the life of the phone (for new apps), so programmability is a must.

QuickLogic has been in the anti-fuse-based programmable area for 25 years and has had a lot of success. But anti-fuse is one-time programmable, and although it has lots of advantages for some systems, for sensors in smartphones “once is not enough”. Traditional programmable logic is designed for high performance and is not suitable for mobile, where power is king. So they designed their own 65nm SRAM-based reconfigurable substrate and then used it to build two products for the mobile sensor market that come in under the 2% battery limit that nobody else can reach.


The first product, in the PolarPro 3 family, is a 10-second sensor data buffer. This offloads the application processor so that it can basically sleep for 10 seconds, wake up, do the calculation and then sleep again, analogous to audio processors for playing MP3s, although the data is going in the other direction.


The more interesting product is the ArcticLink 3 S1, which contains a custom compute engine built in partnership with Sensor Platforms Inc to use their change detectors and algorithms. This offloads most of the sensor processing from the application processor. It can sense device motion (shake, rotate, translate), how the device is being carried (not on person, in hand facing front, in hand at the side, in a container), posture (sitting, standing, walking, running) and, soon, what sort of transport you are using (in a car, on a train, in an elevator, going up stairs). It also handles environmental sensors such as temperature and pressure.

This technology enables a step-function increase in the capabilities that a smartphone and its associated apps have, especially in the health and fitness area. I can’t wait to have one in my next phone. My Fitbit is good but it gets confused on a bike ride or an escalator.

More details on the QuickLogic website here.



How Asia Works, phase 1
by Paul McLellan on 10-15-2013 at 9:00 pm

This is not too much about semiconductors so consider this an “off-topic” warning. But I think you should read on anyway.

TSMC will show up eventually but not yet.

I was in Asia last week. Coincidentally, I had a book to read on the plane called How Asia Works by Joe Studwell. It looks at what has made Japan, Korea, Taiwan and China (North East Asia) successful, whereas Malaysia, Indonesia, Thailand and the Philippines (South East Asia) have not been. It is a fascinating book. It made me challenge so many preconceptions.

The answer is to ignore the Washington consensus (open borders, free trade, no tariff barriers, strong land property rights, etc.), just as every successful country has during its transition from being poor and agricultural to a manufacturing and, eventually, a service-based economy. Britain and the US had high tariff barriers and other less obvious barriers at that stage in their growth (for example, it was illegal to export textile manufacturing equipment from Britain in the 18th and much of the 19th centuries), and the US basically expropriated land from the American Indians (and by the way, I once had a girlfriend who was American Indian, and she told me they all hated the term Native American; to them that was just somebody born here).

The successful countries have all taken the same approach. The first step is land reform. Ensure that all the peasants have a small amount of land, maybe just an acre or two. One fallacy is that we (or at least I) assumed that highly mechanized farming on large farms is much more productive than small farms. If labor costs are high then this is true, measured in dollars. But the Berkeley professor’s organic vegetable garden is several times more productive per acre than the Midwest farmer’s, although it is highly labor-intensive (she just doesn’t cost her time at her professorial rate). Labor is the one thing poor countries have in almost unlimited amounts, so if the land is reformed so that every peasant has a few acres, they can farm it intensively with huge yields, perhaps 200 or 300% more than before. Large-scale gardening.

This generates a big surplus that has two outcomes: the peasants become rich enough to buy manufactured goods when there are some, and there is enough surplus at the country level to move into the second stage, which is manufacturing. Japan, Korea, Taiwan and China all had land reform whereby the land was basically expropriated from the landlords with minimal compensation and given to the peasants. The exception that proves the rule is that when China collectivized its small farms during the Great Leap Forward under Mao, productivity fell so drastically that 30-40 million people died of starvation. Tiny farms really do produce the maximum per acre if labor is essentially free. Mechanizing early reduces yields and leaves the rural population with nothing to do. But eventually, as standards of living increase, you have to let consolidation happen; it’s a strategy, not a religion. The peasants have to move to the factories.

Phase 2 soon. Manufacturing. But for export.

The book is here (Amazon Kindle). If you really want it on dead trees you can get there from the link.




Yes, Intel 14nm Really is Delayed…And They Lost $600M on Mobile
by Paul McLellan on 10-15-2013 at 6:01 pm

Intel server profits are growing, which isn’t a big surprise. But mobile losses are high. The Other Intel Architecture group lost $606M, which is actually down slightly from Q2 but up a lot from last year, when they lost “only” $235M. This group includes Atom, the Infineon Wireless unit they acquired (which finally seems to have an LTE modem, although I gather it is manufactured by TSMC, not Intel), and the set-top/gateway chip unit (which I confess I didn’t know existed). At around $2.5B/year, Intel is investing a lot in mobile. Even with some capital costs, that is a lot of people.

The enterprise PC market seems to have bottomed out (it is a lot less vulnerable to tablets than the home PC market), and although customer inventories (at the PC manufacturers) rose, they are below historic levels.

As Daniel speculated recently, 14nm is slipping, and Intel plans to start manufacturing the 14nm Broadwell CPUs in Q1 with a commercial launch in the second half of next year. Broadwell is the tick-tock shrink of Haswell, although they seem to have made some microarchitecture changes and added a few new instructions. Interestingly, as late as the Intel Developer Forum, CEO Brian Krzanich said it was not delayed, although the equipment industry simply took the delay as a done deal.


I know some people are interested in the Intel stock price, which has been basically flat for 10 years, but I’m not one of them (I own no Intel stock except in S&P500 index funds which presumably do).

To me the three big questions about Intel are process lead, mobile, and foundry.

Intel maintains it has a big process lead over everyone else (read TSMC) but as Intel has slipped out a bit and TSMC seems to have pulled in a bit there is not much light between them. But there is still everything to play for and how everything ramps next year is going to be critical.


Mobile is the next big question. Microsoft realized they had the same problem: desktop revenues are big and not going away, but they are not an engine for future growth, so they bought Nokia. We’ll have to wait and see if they can make a real success of that. Intel has the same problem. They bought StrongARM (from Digital), renamed it XScale, tried to use it for their first mobile strategy, and ended up selling that business to Marvell. They bought the Infineon Wireless group, which I believe struggled once Siemens (its old parent) got out of handset manufacture. Their current strategy is based around Atom, which is Intel architecture. They are assuming that in tablets binary compatibility with Office is important, but since Microsoft just wrote off nearly $1B in tablet inventory, that remains unproven. It is even less clear in smartphones. If they manage to ramp this business it will be in spite of being Intel architecture (instead of ARM) and not because of it. Of course this strategy has slipped too, partially because 14nm has slipped and partially, I think, because server microprocessors get all the early volume ramp.

And foundry. Intel has entered the foundry business. So far the only announced customers are FPGA companies, Altera being the big one, which has the big advantage that Intel doesn’t need to build an entire EDA toolchain and IP ecosystem to be successful. There are rumors they will be doing foundry business for Cisco, along with perennial rumors that they are talking to Apple and Qualcomm (which, at the talking level, I’m sure is true). It is hard to do well in the foundry business if you are also competing with your customers, so I don’t see how they can have a mobile strategy that has them competing full-on in the mobile market while building chips for Apple or Qualcomm. But Samsung has carried this trick off so far, again based on having a process and being able to ramp it, so it can be done. But to be a foundry for anyone other than the most leading-edge companies (who typically build their own IP, and Intel is a big customer of EDA, so the tools should work on Intel’s process as a side effect), they will need to build a foundry ecosystem as both TSMC and GF have done.

Intel’s stock price in a couple of years will depend on these mega-questions more than on whether their opex this quarter is $50M higher than forecast.



Assertions verifying blocks to systems at Broadcom
by Don Dingee on 10-15-2013 at 6:00 pm

Speaking from experience, it is very difficult to get an OEM customer to talk about how they actually use standards and vendor products. A new white paper co-authored by Broadcom lends insight into how a variety of technologies combine in a flow from IP block simulation verification with assertions to complete SoC emulation with assertions.

The basic problem Broadcom faces is the typical one for all SoC vendors: how to efficiently verify IP blocks from a variety of sources when integrated into a complex SoC design. Simulation technology used at the IP block level becomes problematic when interactions and real-world stimuli enter the picture at the system level.

Many vendors use different technologies to verify designs as they progress from low-level IP to system level implementation, but that often spawns inefficiency. Is it possible to use a unified assertion-based flow, whether on a simulator or an emulator, as a way to improve productivity and coverage? That is the question Broadcom has set out to answer, and their experience is valuable in illustrating what works and what still needs improving.

The fundamental approach of using assertions is to pass information, without gory levels of detail, from an IP block up to a subsystem and finally up to the complete system. The white paper cites four sources of assertions Broadcom uses:

1. Inline assertions supplied by designers
2. Assertions stitched in by the verification team using the SystemVerilog bind construct (a minimal sketch of this follows the list)
3. Protocol monitors supplied by EDA vendors for standard bus protocols and memories, developed internally for custom designs, or supplied by third-party IP providers
4. Automated checks added by simulation and emulation compilers
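
As a rough illustration of the second source, here is a minimal SystemVerilog sketch of a checker attached to an IP block with bind; the module and signal names (fifo_ctrl, push, pop, full, empty) are hypothetical, not taken from the whitepaper.

// Hypothetical checker module "stitched" onto an existing IP block via bind.
// Module and signal names are illustrative only.
module fifo_ctrl_checks (
  input logic clk,
  input logic rst_n,
  input logic push,
  input logic pop,
  input logic full,
  input logic empty
);
  // No push should arrive while the FIFO reports full
  no_push_when_full: assert property (
    @(posedge clk) disable iff (!rst_n) full |-> !push
  ) else $error("push asserted while FIFO is full");

  // No pop should arrive while the FIFO reports empty
  no_pop_when_empty: assert property (
    @(posedge clk) disable iff (!rst_n) empty |-> !pop
  ) else $error("pop asserted while FIFO is empty");
endmodule

// Attach the checker to every instance of fifo_ctrl without touching its RTL
bind fifo_ctrl fifo_ctrl_checks u_fifo_ctrl_checks (.*);

The point of bind is that the IP source stays untouched, so the same checker can travel with the block from simulation to emulation.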

The power of an emulator may be alluring, but the fact is that an emulator – such as Mentor Graphics Veloce – is an expensive piece of capital equipment. The interesting point Broadcom makes is to have the emulator target only coverage points and checks that were not previously covered, plus scenarios that are difficult to cover in simulation. By bringing forward assertions from the lower-cost simulation environment, duplication is removed and precious emulator time is focused on what the simulator cannot do – without a lot of rewriting of verification code, and within an overall view of coverage.

The rewriting of verification code was addressed by working with Mentor to develop a single assertion format. Coverage is addressed by using the Unified Coverage Database (UCDB) and its UCIF interchange format, so tools can read and stamp the coverage matrix at each stage. Interestingly, Broadcom merged their coverage data and assertions into a single UCDB file.

As we’ve seen in other discussions, assertions are only as good as they are written. Broadcom ended up with a set of guidelines for assertion writing, improving flow from simulation to emulation. They identified five rules that can help verification teams:

1. Label each assertion individually and descriptively – this label can be used to filter expected failures if you need to, and is significantly clearer; e.g., “acknowledge_without_request: assert” (a short sketch illustrating rules 1-3 follows this list)
2. Associate a message with each assertion along the lines of “assert else $error(…)” ($error is just like $display except that it sets the simulation return code and halts an interactive simulation)
3. Reduce the severity by using $warning or increase the severity by using $fatal
4. Exclude assertions that are informational in nature and occur frequently from emulation runs
5. Avoid open-ended or unbound assertions as these potentially have a very large hardware impact
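
To make rules 1 through 3 concrete, here is a minimal SystemVerilog sketch showing a descriptive label, an attached message, and severity control; the signal names (req, ack) are assumptions for illustration, not from the paper.

// Hypothetical examples of rules 1-3: labels, messages, and severity control.
module handshake_checks (input logic clk, rst_n, req, ack);
  // Rules 1 and 2: a descriptive label plus a message via $error
  acknowledge_without_request: assert property (
    @(posedge clk) disable iff (!rst_n) ack |-> req
  ) else $error("ack seen with no outstanding request");

  // Rule 3: reduce severity to a warning for a benign condition...
  late_ack_warning: assert property (
    @(posedge clk) disable iff (!rst_n) req |-> ##[1:8] ack
  ) else $warning("ack arrived later than expected");

  // ...or escalate to fatal when continuing the run makes no sense
  no_x_on_req: assert property (
    @(posedge clk) disable iff (!rst_n) !$isunknown(req)
  ) else $fatal(1, "req is X/Z");
endmodule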

They also found a couple of difficulties in passing assertions forward. One is that much of the black-box IP out there is not really SystemVerilog-compliant, using reserved keywords that interfere with assertions in emulation. Another relates to memory accesses outside the memory range, which can crash an emulator; these are handled with automatically inserted assertions (a sketch of such a range check follows below). There are also some suggestions on using saturation counters for coverage, rather than just go/no-go on the matrix.
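
As a sketch of what such an automatically inserted range check might look like (the parameter MEM_DEPTH and the signal names are assumptions for illustration):

// Hypothetical range check: flag any access whose address falls outside the
// modeled memory instead of letting it crash the emulation run.
module mem_range_check #(parameter int unsigned MEM_DEPTH = 16384) (
  input logic        clk,
  input logic        rst_n,
  input logic        mem_req,
  input logic [31:0] addr
);
  addr_in_range: assert property (
    @(posedge clk) disable iff (!rst_n) mem_req |-> (addr < MEM_DEPTH)
  ) else $error("memory access at 0x%08h is outside the %0d-word memory",
                addr, MEM_DEPTH);
endmodule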

The complete white paper is available from Mentor Graphics:

Localized, System-Level Protocol Checks and Coverage Closure Using Veloce

We don’t often get a look at real-world experience with a variety of tools and standards in play, and although some of the findings are specific to the Mentor Graphics Veloce platform, much of the Broadcom experience applies to the general problem of using assertions in verification of complex designs. Thanks to both Broadcom and Mentor for sharing the learning from their use.

How do these findings match with your experience in assertion-based verification? Are there other important considerations for developing a unified flow that works beyond simulation? What else should verification teams and vendors consider in streamlining an approach for both simulation and emulation environments?
