

Verifying Hardware at the C-level
by Paul McLellan on 09-09-2013 at 2:25 pm

As more people adopt high-level synthesis (HLS), they start to worry about the best design flow to use with it. This is especially true for verification, since verification forms such a large part of the effort on a modern SoC. The more teams rely on HLS to produce their RTL from C, the more they realize they had better do a good job of verifying the C code in the first place. Waiting to verify until RTL exists is inefficient, not to mention late in the flow. Issues are hard to locate and debug, and even when you find them you can’t simply patch the RTL (or at least you are asking for big problems if you do). Instead you need to return to the “golden” representation, which is the C, find the corresponding part of the code and fix it there.

Better by far is to do verification at the C level. If the C is correct, then HLS should produce RTL that is correct-by-construction. Of course “trust but verify” is a good maxim for IC design; otherwise we wouldn’t need DRCs after place and route. Similarly, you shouldn’t assume that just because your C is good the RTL needs no verification at all.

Unfortunately, tools for C and SystemC verification are rudimentary compared with the tools we have for RTL (software engineering methodology has changed a lot less than IC design methodology over the last couple of decades, but that is a story for another day). RTL engineers have code coverage, functional coverage, assertion-based verification, directed random and specialized verification languages. C…not so much.

But we can borrow many of these concepts and replicate them in the C and SystemC world. By inserting assertions and properties into your C models you can start to do functional coverage on the source code and formal property checking on the C-level design. An HLS tool like Catapult can actually make sense of these assertions and propagate them through into the RTL.

Catapult, along with SLEC, allows you to do this sort of analysis now. With SLEC you can check for defects in the C/SystemC source, do property checking on assertions in your C-level code and apply formal techniques to prove whether an assertion always holds. Catapult will then embed all these assertions into the RTL, automatically instantiating them as Verilog assertions that do the same job at the RTL level as the originals did at the C level. It also takes C cover points and synthesizes them into Verilog, so that when you run Verilog simulation you also get functional coverage.
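As a rough illustration of what source-level checks can look like (this is plain C++ with the standard assert macro and a hypothetical cover counter, not Catapult or SLEC syntax), consider a small saturating accumulator written for HLS:

```cpp
// Illustrative sketch only: a source-level assertion and a hypothetical cover
// point embedded in HLS-bound C++ code. Real Catapult/SLEC flows have their
// own assertion and coverage conventions; the point is simply that checks
// written against the C "golden" model can travel with it toward RTL.
#include <cassert>
#include <cstdint>

static unsigned cover_saturated = 0;   // hypothetical cover counter

// Saturating 16-bit accumulator intended for high-level synthesis.
uint16_t accumulate(uint16_t acc, uint16_t in) {
    uint32_t sum = static_cast<uint32_t>(acc) + in;
    if (sum > 0xFFFF) {
        ++cover_saturated;             // cover point: saturation path exercised
        sum = 0xFFFF;
    }
    assert(sum <= 0xFFFF);             // property: result always fits in 16 bits
    return static_cast<uint16_t>(sum);
}
```

A formal tool can try to prove the assertion holds for all inputs, while simulation of the C model reports whether the saturation branch (the cover point) was ever exercised.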

This hybrid approach leverages all the methodology that the RTL experts have put together, but pulls it up so that more of it can be applied at the C level. After all, the better the C, the better the RTL (and the gates…and the layout).

Calypto have a couple of upcoming webinars: one for India/Europe on another topic and one for the US on verification.

Dynamic/leakage power reduction in memories. Timed for Europe and India at 10am UK time, 11am central Europe, 12pm Finland and 2.30pm India. September 12th (this coming Thursday). Register here.

How to Maximize the Verification Benefit of High Level Synthesis with SystemC. 10am Pacific on October 22nd. Register here.



TSMC OIP: Soft Error Rate Analysis
by Paul McLellan on 09-09-2013 at 1:34 pm

Increasingly, end users in some markets are requiring soft error rate (SER) data. This is a measure of how resistant the design (library, chip, system) is to single event effects (SEE). These manifest themselves as single-event upsets (SEU), single-event transients (SET), single-event latch-up (SEL) and single-event functional interrupts (SEFI).

There are two main sources of these SEEs:

  • natural atmospheric neutrons
  • alpha particles

Natural neutrons (cosmic rays) have a spectrum of energies, which also affects how easily they can upset an integrated circuit. Alpha particles are stopped by almost anything, so they can only affect a chip if their sources contaminate the packaging materials, solder bumps, etc.


More and more partners need to get involved in this reliability assessment process. Historically it has started when end users (e.g. telecom companies) become unhappy. This means that equipment vendors (routers, base stations) need to do testing and qualification, and end up with requirements for the process, for cell design (especially flops and memories), and at the ASIC/SoC level.

Large IDMs such as Intel and IBM have traditionally done a lot of the work in this area internally, but the fabless ecosystem relies on specialist companies such as iROCtech. They increasingly work with all the partners in the design chain since it is a bit like a real chain. You can’t measure the strength of a real chain without measuring the strength of all the links.

So currently there are multi-partner SER efforts:

  • foundries: support SER analysis through technology specific SER data such as TFIT databases
  • IP and IC suppliers: provide SER data and recommendations
  • SER solution providers: SEE tools and services for improving design reliability, accelerated testing

Reliability has gone through several eras:

  • “reactive”: end users encounter issues and have to deal with them (product recalls, software hot-fixes, etc.)
  • “awareness”: system integrators pre-emptively acknowledge the issue (system and component testing, reliability specifications)
  • “exchanges”: requirements and targets are propagated up and down the supply chain, with SER targets for components, IP, etc.
  • “proactive”: objective requirements will drive the design and manufacturing flow towards SER management and optimization

One reason that many design groups have been able to ignore these reliability issues is that the issues depend critically on the end market. The big sexy application processors for smartphones in the latest process do not have major reliability issues: they will crash from time to time due to software bugs and you just reboot; your life is not threatened. And your phone only needs to last a couple of years before you throw it out and upgrade.

At the other end of the scale are automotive and implantable medical devices (such as pacemakers). They are safety critical and they are expected to last for 20 years without degrading.


iROCtech have been working with TSMC for many years and have even presented several joint papers on various aspects of SER analysis and reliability assessment.

iROCtech will be presenting at the upcoming TSMC OIP Symposium on October 1st. To register, go here. To learn more about TFIT and SOCFIT, iROCtech’s tools for analyzing the reliability of cells and of blocks/chips respectively, go here.



Rapid Yield Optimization at 22nm Through Virtual Fab
by Pawan Fangaria on 09-09-2013 at 10:00 am

Remember? During DAC 2013 I talked about a new kind of innovation: a virtual fabrication platform, SEMulator3D, developed by COVENTOR. Now, to my pleasant surprise, there are proven results to report from this platform. IBM, in association with COVENTOR, has successfully implemented a 3D virtual fabrication methodology to rapidly improve the yield of its high performance 22nm SOI CMOS technology.

COVENTOR’s CTO-Semiconductor, Dr. David M. Fried, was in attendance while IBM’s Ben Cipriany presented an interesting paper on this work at the International Conference on Simulation of Semiconductor Processes and Devices (SISPAD 2013). The paper is available through the link “IBM, Coventor present 22nm Virtual Fabrication Success at SISPAD” on the COVENTOR website.

Dr. Fried leads COVENTOR’s technology strategy, partnerships and external relationships. His expertise touches upon areas such as Silicon-on-Insulator (SOI), FinFETs, memory scaling, strained silicon, and process variability. He is a well-respected technologist in the semiconductor industry, with 45 patents to his credit and a notable 14-year career with IBM, where his latest position was “22nm Chief Technologist” for IBM’s Systems and Technology Group.

I had a nice opportunity to speak with Dr. Fried by phone during the conference in Glasgow, Scotland. From the conversation, all I can say is that SEMulator3D is a truly innovative and fascinating technology that will greatly ease the complex design and manufacturing process, not only at 22nm but also at lower nodes in the days to come, specifically in analyzing and predicting the key phenomenon of variability at these nodes, quickly and accurately.

The Conversation –

Q: I guess the first introduction of SEMulator3D to a wide public audience was at DAC 2013?

Actually COVENTOR is a 17-year-old company, with experience in MEMS software, device simulation and design-enablement tools. It has many years of 3D structural modelling experience from the MEMS field, which over the last 7 years has been migrated into the semiconductor space after encouragement by Intel, one of our original investors. I moved to COVENTOR about a year ago, and am now driving its technology roadmap and expanding SEMulator3D usage across the semiconductor industry. In 2012 we made a very important updated release of SEMulator3D. But in 2013 our newest release, developed in significant collaboration with IBM, adds lots of automation and advanced modelling and really takes a quantum leap forward in virtual fabrication.

Q: This is a new concept of a virtual fabrication platform. How does it relate to the design flow at fabless companies?

It provides a flexible process platform for fabless companies to explore the physical embodiment of their specific design IP; otherwise they have no visibility into what happens on the process side. In other words, it is a “Window into the Fab” for fabless companies. Variability mainly depends on physical parameters. Because SEMulator3D is 10-100 times faster than other process modelling tools, it can provide many more data points from physics-driven, highly predictive and accurate models, giving a more rigorous view of variability effects.

Q: Let’s look at some aspects of the paper presented at SISPAD. It’s meticulous work: the 3D model was matched to inline measurement and metrology data as well as to 2D SEM/TEM cross-section images of a nominal transistor. How did you arrive at the idea of modelling this way, so accurately?

In any modelling situation, we first need to build confidence by experimenting against past results and demonstrating a match with the existing data, then calibrate, and then predict. The modelling engine is physics-driven, so it will correctly predict many processes and effects without any calibration. For this detailed study, the nominal model was validated against a significant amount of inline measurement and metrology as well as TEMs and SEMs, and iterations of calibration were performed. After that, thousands of variation cases were run and the results were assembled and analysed quantitatively. SEMulator3D 2013 adds the first production availability of our Expeditor tool, which enables automated, spreadsheet-based process variation study and analysis. Our expanded Virtual Metrology capabilities can automatically collect and analyse the structure during the minutes of modelling, rather than the months these experiments could spend in the fab.

Q: It is observed that major process variation occurs at the n-p interface due to spatial exposure and overlay errors from the multiple lithography levels whose edges define that interface. So how difficult is it when multiple devices in a large layout are involved?

Right, the nfet-pfet interface is used in this work as a vehicle for the study, and the approach can easily be scaled to a larger design space. Expeditor enables massively parallel execution, so the same type of modelling can be applied to many different design elements, such as the cells in a design library. That’s why there is really no comparison between SEMulator3D and any TCAD tool out there: SEMulator3D is extremely fast, applicable to a larger physical design space, and provides yield-focused modelling.

Q: How large a design can SEMulator3D handle when modelling and evaluating process window tolerance?

There is effectively no limitation on design area; what matters is modelling time. SEMulator3D can do the typical job in minutes rather than weeks. The concept is to choose the smallest area of the design that can be modelled to solve a specific process integration challenge, apply the process description, define the resolution of the models, and then parallelize the job. Some users are investigating large memory arrays, some users are investigating single-transistor effects. Most of our typical users apply this modelling platform at the circuit scale.

Q: So, how do you choose that smallest sample, process description and resolution?

In our more collaborative engagements, like the one in this paper, we work with the customer to define these aspects and optimize the modelling exercise. The customer is always interested in their specific design IP, but we can help optimize the area selection and parallelization. The process description is where we do most of our collaboration. Each customer’s process is different, and developed independently, but we spend time working with “Modelling Best Known Methods” and many tricks of the trade we’ve learned in virtual fabrication. The modelling resolution is flexible based on the effects being studied and can be tailored to optimize modelling performance based on the granularity of the effects being analysed.

Q: What kind of different design structures have been modelled using SEMulator3D?

There are many, to name a few – DRAM, SRAM, logic, Flash memory, ESD circuitry, Analog circuits, TSVs, multi-wafer 3D integration structures, and so on.

Q: We know IBM is already working closely with COVENTOR. Who are the other customers using SEMulator3D?

Our primary user base is in the foundries, IDMs and memory manufacturers. We are engaged with the top 9 worldwide semiconductor manufacturers and also several fabless design houses. With Cadence we have provided a Virtuoso Layout Editor link to SEMulator3D. SEMulator3D also has its own full-function layout editor which serves as a conduit to any existing GDSII.

Q: One last question. From this conversation, what I perceive is that this tool has a bright future and can provide a solid foundation for designs at ultra sub-micron process nodes, but I would like to know your view on what’s in store for the future?

One of the things I spend a lot of time thinking about is 3D DRC. You know, most yield limitations come from a physical problem that is 3D in nature. That single 3D problem gets exploded into 100s of 2D rules to provide a DRC checking infrastructure. In many cases, it’s becoming cumbersome and inefficient to do it this way, especially in the development phase where these rules are just being explored. That’s an area to work on and optimize by doing 3D searches on structural models. There are some structures that will be better checked in 3D rather than with a massive set of translated 2D rules. So, in the future 3D Design Rule Checking will be a reality, and we are working hard on that.

This was a very exciting and passionate conversation with Dr. Fried, through which I was able to deepen my knowledge of process modelling. And I can see that SEMulator3D can provide a competitive advantage to design companies as well, because it facilitates their parallel interaction with the fab in a virtual world, which can be used to mitigate manufacturing-related issues at the design stage. A novel concept!



Searching an ADC (or DAC) at 28 nm may be such a burden…
by Eric Esteve on 09-09-2013 at 9:13 am

If you have ever sent out a Request For Quotation (RFQ) for an ASIC including a processor IP core, memories, interface IP like PCIe, SATA or USB, and analog functions like an Analog-to-Digital Converter (ADC) or Digital-to-Analog Converter (DAC), you have discovered, as I did a couple of years ago, that these analog functions may be the key pieces limiting your choice of technologies and foundries. Every System-on-Chip (SoC) will use a processor IP core (and most of the time it will be an ARM core, by the way), so you will realize that the processor is not a problem, as it will be available in most technologies and foundries, as will any single-port or dual-port memory, through memory compilers.

But, especially if you target advanced nodes like 28 nm, finding the right ADC or DAC may become a real issue. If you can’t find it in the technology you have to target, chosen for good reasons like benefiting from the smallest possible area, or device cost, or making sure that the power consumption stays within the allocated budget, the only possibility will be to order a custom design… not really the best approach to minimize risk and time-to-market (TTM)! So when a well-known and respected IP vendor like Synopsys announces that they have scaled their ADC architecture to support mobile and multimedia SoCs at 28 nm and beyond, that’s the type of news that makes your life easier when sending out an ASIC RFQ targeting TSMC 28HPM. If that’s your case, it makes sense to attend the next webinar:

Scaling ADC Architectures for Mobile & Multimedia SoCs at 28-nm and Beyond

If you are interested, you should register quickly, as this webinar will be held on Tuesday, the 10th of September…


Overview:
Data converters are at the center of every analog interface to systems-on-chips (SoCs). As SoCs move into 28-nm and smaller advanced process nodes, the challenges of integrating analog interfaces change due to the process characteristics, reduced supply voltages, and the area requirements of analog blocks. Synopsys offers a complete portfolio of analog interface IP in 28-nm that includes high-speed ADCs and DACs, PLLs, and general-purpose ADCs and DACs, implemented on a standard CMOS process with no additional process options.

I have noticed a few impressive features:

  • Parallel SAR 12-bit ADC architecture implementations for up to 320 MSPS sampling rates
  • Parallel assembly allows for greater architectural flexibility for specific applications
  • Very high performance 12-bit 600 MSPS DAC
  • Power consumption reduction of up to 3X and area use reduction of up to 6X over previous generations

This webinar will:

  • Compare the prevailing analog-to-digital converter (ADC) architectures in terms of speed, resolution, area, and power consumption trade-offs
  • Describe the benefits of the successive-approximation register (SAR)-based ADC architecture for medium- and high-speed ADCs (a behavioral sketch of the SAR principle follows this list)
  • Describe how implementations of the SAR ADC architecture can reduce power consumption and area usage for 28-nm process technologies
  • Present the 28-nm DesignWare® Analog ADCs, which use the SAR-based architecture, and explain how they achieve 3x lower power consumption and 6x smaller area compared to previous generations
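To make the SAR idea concrete before the webinar, here is a minimal behavioral sketch of the successive-approximation principle (my own illustration in plain C++, assuming an ideal internal DAC and comparator; it is not Synopsys DesignWare code and it ignores the parallel and pipelined refinements the webinar covers):

```cpp
// Behavioral sketch of a SAR conversion: binary-search for the code whose
// ideal DAC output best matches the input voltage, one bit per comparison.
#include <cstdint>
#include <cstdio>

uint16_t sar_convert(double vin, double vref, int bits = 12) {
    uint16_t code = 0;
    for (int bit = bits - 1; bit >= 0; --bit) {
        code |= (1u << bit);                        // tentatively set this bit
        double vdac = vref * code / (1 << bits);    // ideal internal DAC output
        if (vdac > vin) code &= ~(1u << bit);       // comparator says too high: clear it
    }
    return code;                                    // N comparisons for an N-bit result
}

int main() {
    // Example: 0.75 V input against a 1.2 V reference -> code near 0.625 * 4096
    printf("%u\n", static_cast<unsigned>(sar_convert(0.75, 1.2)));
}
```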


Who should attend:
SoC Design Engineers, Managers and System Architects

Presenter:

Carlos Azeredo-Leme, Senior Staff Engineer, DesignWare Analog IP, Synopsys
Carlos Azeredo-Leme has been a senior staff engineer for DesignWare Analog IP at Synopsys since 2009. Prior to joining Synopsys, he co-founded Chipidea Microelectronics in 1993, where he was a member of the Board of Directors and held the position of Chief Technical Officer, responsible for complete mixed-signal solutions, analog front-ends and RF. He has worked in the areas of audio, power management, cellular and wireless communications and RF transceivers. Since 1994 he has held a position as Professor at the Technical University of Lisbon (UTL-IST) in Portugal. His research interests are in analog and mixed-signal design, focusing on low power and low voltage. Carlos holds an MSEE from the Technical University of Lisbon (UTL-IST) in Portugal and a Ph.D. from ETH Zurich in Switzerland.

Eric Esteve from IPNEST




Test Compression and Hierarchy at ITC
by Daniel Payne on 09-09-2013 at 8:00 am

The International Test Conference (ITC) is this week in Anaheim and I’ve just learned what’s new at Synopsys with test compression and hierarchy. Last week I spoke with Robert Ruiz and Sandeep Kaushik of Synopsys by phone to get the latest scoop. There are two big product announcements today that cover:

Continue reading “Test Compression and Hierarchy at ITC”



SemiWiki Analytics Exposed 2013
by Daniel Nenni on 09-08-2013 at 7:15 pm

One of the benefits of blogging is that you put a stake in the ground to look back on and see how things have changed over the years. You can also keep a win/loss record of your opinions, observations, and experiences. Last year I posted the “SemiWiki Analytics Exposed 2012” blog, so here is a follow-up to that.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Between going online on January 1st, 2011 and June 30th, 2012, more than 300,000 unique visitors landed at www.SemiWiki.com, viewing more than 2.5M pages of blogs, wikis, and forum posts. WOW!

Wow indeed. We have now had more than 700,000 unique visitors and they just keep on coming. Original content and crowdsourcing wins every time, absolutely. There is a plethora of website rating companies but we use Alexa (owned by Amazon.com) as it has proven to have the most comprehensive and accurate analysis in comparison to Google and internal analytics, absolutely.

Rankings from the Alexa website:

EETimes: 2012 rank 24,870 → 2013 rank 28,584 (-13%)

SemiWiki: 2012 rank 431,594 → 2013 rank 249,989 (+73%)

Design And Reuse: 2012 rank 312,684 → 2013 rank 320,258 (-2%)

EDAcafe: 2012 rank 441,196 → 2013 rank 388,857 (+9%)

ChipEstimate: 2012 rank 861,483 → 2013 rank 1,169,187 (-26%)

DeepChip: 2012 rank 1,175,172 → 2013 rank 1,849,759 (-36%)

SoCCentral: 2012 rank 3,765,891 → 2013 rank 2,529,697 (+14%)

Yes, a 73% increase is a big number, but let me tell you why. First, you may have noticed we installed a new version of SemiWiki this summer, which is optimized for speed and search engine traffic. My son, the site co-developer, has the summer off (he teaches math) so we kept him busy with site optimization. Second, we enhanced our analytics and internal reporting so we know more about what is happening inside SemiWiki. A good friend of mine is a Business Intelligence (BI) expert for a Fortune 100 company and shares his expertise with me in exchange for food and drink. Okay, mostly drink. This enhanced data mining and reporting capability gives our subscribers a clear view of what does and doesn’t work, so we can manage the growth of their landing pages and of SemiWiki as a whole.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online on January 1st, 2011, more than 700,000 unique visitors have been recorded at www.SemiWiki.com, viewing more than 5M pages of blogs, wikis, and forum posts. WOW!

I would tell you more but the new SemiWiki 2.0 software release is so amazing it would sound like I was bragging and I’m much too humble to brag.




Xilinx At 28nm: Keeping Power Down
by Paul McLellan on 09-08-2013 at 2:26 pm

Almost without exception these days, semiconductor products face strict power and thermal budgets. Of course there are many issues with dynamic power, but one big area that has been getting increasingly problematic is static power. For various technical reasons we can no longer reduce the supply voltage as much as we would like from one process generation to the next, which means that transistors do not turn off completely. This is an especially serious problem for FPGAs since they have a huge number of transistors, many of which are not actually active in the design at all, simply due to the nature of how an FPGA is programmed.

Power dissipation is related to thermal behavior because the heat that must be dissipated rises with power. And temperature is related to reliability: every 10°C increase in temperature roughly doubles the failure rate. There are all sorts of complex and costly static power management schemes, but ideally the FPGA would simply have lower static power. TSMC, which manufactures Xilinx FPGAs, has two 28nm processes, 28HP and 28HPL. The 28HPL process has some big advantages over 28HP:

  • wider range of operating voltages (not possible with 28HP)
  • high-performance mode with 1V operation leading to 28HP-equivalent performance at lower static power
  • low-power mode at 0.9V operation, with 65% lower static power than 28HP. Dynamic power reduced by 20%.

The 28HPL process is used to manufacture the Xilinx 7 series FPGAs. The net result is that competing FPGAs built in the 28HP process have no performance advantage over the 7 series, and some of the competing products come with a severe penalty of over twice the static power (along with the associated thermal and reliability issues). In fact Xilinx’s primary competitor (I think we all know who that is) has had to raise their core voltage specification, resulting in a 20% increase in static power, and their power estimator has gradually crept the numbers up even more. Xilinx has invested resources so that the power specifications published when a product is first announced do not subsequently need to be revised, meaning that end users can plan their board designs confident that they do not need to worry about inaccurate power estimates.


The low power and associated thermal benefits mean that Xilinx FPGAs can operate at higher ambient temperatures without the FPGA itself getting too hot. The graph below shows junction temperature against ambient temperature for 7 series standard and low power devices compared to the competitor’s equivalent array. This is very important since the ability to operate at 50-60°C ambient while keeping the junction temperature at or below 100°C is essential in many applications, such as wired communications designs in rack-style environments like datacenters. It is no secret that routers and base stations are among the largest end markets for FPGAs.
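For a rough sense of scale, the 10°C doubling rule quoted above corresponds to an acceleration factor of

\[ \mathrm{AF} \approx 2^{\Delta T / 10\,^{\circ}\mathrm{C}} \]

so, by that rule of thumb alone (and it is only a rule of thumb, not vendor-published reliability data), a part whose junction runs 20°C cooler should see roughly a 4x lower failure rate.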


But wait, there’s more, as the old infomercials said.

Not specific to the 7 series, the Vivado Design Suite performs detailed power estimation at all stages of the design and has an innovative power optimization engine that identifies excessively power-hungry paths and reduces their power.

Reducing dynamic power depends on reducing voltage (which 28HPL allows), reducing frequency (usually not an option since that is set by the required system performance) or reducing capacitance. By optimizing the arrays for area density, meaning shorter wires with lower capacitance, power is further reduced compared to Xilinx’s competitors.
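For reference, the trade-off described in that paragraph follows from the standard first-order expression for CMOS switching power,

\[ P_{\mathrm{dyn}} \approx \alpha \, C \, V_{DD}^{2} \, f \]

where \(\alpha\) is the switching activity, \(C\) the switched capacitance, \(V_{DD}\) the supply voltage and \(f\) the clock frequency. The quadratic dependence on voltage is why the 0.9V low-power mode pays off so strongly, and the linear dependence on capacitance is why denser arrays with shorter wires help too.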

And there is even more than that. But to find out you need to download the Xilinx white paper Leveraging Power Leadership at 28nm with Xilinx 7 Series FPGAs available here.



Asian Embargoes
by Paul McLellan on 09-07-2013 at 8:00 pm

[This blog embargoed until 10am China time]

An interesting thing happened to me this week. I had two press briefings. No, that wasn’t the interesting thing, and if you have sat through a press briefing you will not regard them as recreation. I do it for you, SemiWiki readers. Even when, as this week, the briefings are given by friends. But there was something different about these two briefings.

On Monday evening, which is Tuesday morning in Asia, you will see the blogs. This is a sort of teaser. However, this is the first time, and I’m sure not the last, that the embargo time is early in Asia as opposed to early in the US. A typical embargo is 5am in California, 8am in New York: before ‘the’ markets open, which are on Wall Street. Nowhere else really matters.

One of these announcements is about strategy in Asia and one coincides with a user group meeting in Asia so you can’t read too much into it. But on the same day I got two announcements with Asian embargo times. That’s never happened before. In fact I can’t remember a press release with an Asian embargo before.

Like traffic down 101 (sorry, non-Silicon-Valley readers), something easy to observe may be a proxy for the health of the economy in general or a big trend that is just getting started. Or it may be nothing. My other proxy for business in the valley is Birks in Santa Clara: if you have no idea where that is, it is in those pink towers beside 101 at Great America Parkway. If you can get a reservation within a couple of days, Silicon Valley is not in good shape; if you need to wait two weeks, the valley is booming. Right now, on Friday as I write, you can get a reservation for Monday, for 2 people but not 4. Half-booming. During the downturn, both Birks and Parcel 104 added lots of cheap items to their menus. Not good. But you can’t get a hamburger any more. If you want the best hamburger going, go to Zuni Cafe on Market near where I live in San Francisco. But it is only available for lunch or after 10pm. The perfect finish to an evening.



SpyGlass: Focusing on Test
by Paul McLellan on 09-07-2013 at 5:51 pm

For decades we have used a model of faults in chips that assumes a given signal is stuck-at-0 or stuck-at-1. And when I say decades, I mean it: the D-algorithm was invented at IBM in 1966, the year after Gordon Moore made his now very famous observation about the number of transistors on an integrated circuit. We know that stuck-at faults are not the best model for everything that can go wrong on an IC, but the model works surprisingly well. If you can detect whether every signal on a chip is stuck, it turns out you can also detect a lot of other things that might go wrong (such as two signals bridging together).

But one particularly problematic area is detecting transition faults. These are faults that, due to excessive resistance or capacitance, or perhaps one of a number of transistors in parallel being faulty, cause a signal to transition slowly: slow-to-rise and slow-to-fall faults. If test is not run at the normal clock rate these might not be detected, since at the slower speed the circuit behaves correctly. These first became a big problem back in the era when manufacturing test used functional vectors, and running those vectors at speed would automatically pick up such faults. But that approach ran out of steam, and today all chips are tested using some sort of scan-test methodology during which the chip is in a special mode, even further from its normal behavior. Scan testing covers these faults by setting up an initial condition, pulsing the clock twice and then latching the result, or by similar approaches that exercise the transition at speed.

It turns out that most faults are easy to detect, but a few hard-to-detect faults can make a big difference to either the fault coverage or the time taken by ATPG (which has to work very hard on the hard-to-detect faults, by definition; otherwise they wouldn’t be hard to detect, duh!). We can make the concept of “hard to detect” more rigorous with the idea of random resistance. No, this isn’t resistance in the resistance-capacitance-inductance sense: it is how likely a fault is to be detected by a set of random vectors. If almost any set of random vectors will detect the fault, it is easy to detect; if very few (or even no) random sequences detect it, then it is hard to detect. A design (or a block) is random-resistant if random vectors do not automatically achieve high fault coverage because there are too many hard-to-detect faults.
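As a toy illustration of random resistance (my own sketch, not an Atrenta example), consider a stuck-at-0 fault on the output of an N-input AND cone: a random vector can only excite it if every input happens to be 1, so the per-vector probability is 2^-N and collapses as the cone widens:

```cpp
// Toy Monte Carlo sketch: how often do random vectors excite a stuck-at-0
// fault at the output of an N-input AND cone? Excitation requires driving the
// fault-free output to 1, i.e. all N inputs high, which a random vector does
// with probability 2^-N -- the essence of a random-resistant fault.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(1);
    std::bernoulli_distribution bit(0.5);
    const int vectors = 1000000;
    const int widths[] = {4, 8, 16, 24};            // AND-cone widths to try
    for (int n : widths) {
        int excited = 0;
        for (int v = 0; v < vectors; ++v) {
            bool all_ones = true;
            for (int i = 0; i < n; ++i) all_ones = all_ones && bit(rng);
            excited += all_ones;                    // this vector excites the fault
        }
        printf("%2d-input AND: %d of %d random vectors excite SA0 (expected ~2^-%d)\n",
               n, excited, vectors, n);
    }
}
```

For the 24-input cone the expected count over a million vectors is well below one, which is exactly what “hard to detect” means in practice.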

ATPG is done late in the design cycle, so we don’t want to discover test problems then, when it is very expensive to do anything about them (changing the RTL after place & route, known as an ECO, is orders of magnitude more time consuming than fixing the RTL before starting physical design). What we would really like is a tool at the RTL level that tells us whether we are creating hard-to-detect faults, so we can change the RTL to remove them. Atrenta’s SpyGlass DFT DSM product (SDSM from now on; the full name is too much of a mouthful) is such a tool.


SDSM gives feedback on four aspects of the design at RTL (a toy calculation of these probabilities follows the list):

  • the distribution of nodes (in sorted order) by their probability of being controlled to 0
  • the distribution of nodes by their probability of being controlled to 1
  • the distribution of nodes by their probability of being observed
  • the distribution of nodes by their probability of being detected
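As a toy illustration of those four quantities (my own back-of-the-envelope calculation, not SDSM output), take a node y = a AND b AND c AND d that feeds an output through an OR gate z = y OR e, with every primary input equally likely to be 0 or 1:

```cpp
// Toy probability propagation for one node under random stimulus:
// control-to-1 and control-to-0 are the chances a random vector drives the
// node to 1 or 0, observe is the chance its value propagates to an output,
// and a rough detect probability for a stuck-at fault is excite x observe.
#include <cstdio>

int main() {
    double p_in = 0.5;                        // each primary input is 1 half the time

    // Node y = a & b & c & d.
    double p_y1 = p_in * p_in * p_in * p_in;  // control-to-1 = 1/16
    double p_y0 = 1.0 - p_y1;                 // control-to-0 = 15/16

    // y reaches the output z = y | e only when the side input e is 0.
    double p_obs = 1.0 - p_in;                // observe probability = 1/2

    // Detecting y stuck-at-0 requires y = 1 (excite) and e = 0 (observe).
    double p_detect_sa0 = p_y1 * p_obs;       // = 1/32

    printf("control-to-1 %.4f  control-to-0 %.4f  observe %.2f  detect-SA0 %.4f\n",
           p_y1, p_y0, p_obs, p_detect_sa0);
}
```

Sorting every node in a block by numbers like these is what produces the distributions above, and nodes whose detect probability is tiny are the random-resistant trouble spots.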


This allows the designer to quickly zoom into blocks where the coverage is low. SDSM can then display a sort of thermal map showing where the hard-to-detect faults are hiding: typically in places with very wide logic cones (forcing ATPG to generate a huge number of vectors to cover all the possibilities) or similar structures, which are usually not hard to change once identified, or which can simply be fixed by adding additional test points.

SDSM can identify both scannability and low fault coverage issues early and help designers fix problems without requiring iterations during the implementation flow.

Atrenta’s new white paper on Analysis of Random Resistive Faults and ATPG Effectiveness at RTL is here.

If you are attending ITC in Anaheim, Atrenta is at booth 306.



Why I dumped my iPhone5 for a Samsung S4!
by Daniel Nenni on 09-07-2013 at 5:00 pm

A good friend and dog-walking partner was on the Apple/Android smartphone fence last year, so I pushed him over to Apple and the result was the infamous “8 Reasons Why I Hate My iPhone5” blog. After months of complaining I bought him a Samsung S4 and gave his iPhone5 to my very appreciative wife, so all’s well that ends well, maybe.

During our frequent walks on the Iron Horse Trail we sometimes have smartphone contests. Voice control is everything to us. Watching people furiously trying to type on smartphones cracks us up and the autocorrect blunders are just hilarious. This morning Siri won when we asked our phones what sex author Lee Childs is (male). In general though, Android beats Siri on voice commands.

Apple Maps, however, is a big loser in all of our contests. Google Maps is unbeatable; not even our new car navigation systems come close, so I don’t see that changing anytime soon. According to my friend:

Several weeks ago I was picking up my son at the Dublin BART station. I made the mistake of using Apple Maps, as it is so well integrated with Siri. It actually understood me when I said, “Navigate to Dublin BART”. Unfortunately Siri told me that no exit off 580 was required. It told me I had arrived as I was passing the station that sits between the eastbound and westbound lanes. I quickly jumped out my window and was able to meet my son on time.

Now he is trying to push me over to Android since my two-year contract is up and the new Apple phones look “uninspiring”. His Google-centric arguments include:

If you don’t use Google Voice you are missing out. You can easily forward your calls to multiple phones, you can block calls, you can get voicemail transcribed – all sorts of good stuff. However, the iPhone doesn’t enable Google Voice to use your Google Voice number for outgoing calls. My real cell number, which I don’t want people to see, was always used by the iPhone. Now that I’m back in the Android fold, my Google Voice number is used for outgoing calls, as it should be.

I save myself a ton of time by using Google Contacts. It’s well integrated with Gmail and Google Voice. Guess what? It’s not integrated with the iPhone. It made me crazy not being able to “call Dan” unless I explicitly added him as a contact on my iPhone. With Android and Google, all the syncing between your life on your Windows PC and your life on the phone happens for free. I like MacBooks but they aren’t used at work, so they are generally more trouble than they are worth.

If you haven’t checked out Google Now, it’s a treat. Remember Scott McNealy, one of the founders of Sun Microsystems? He famously stated, “There is no privacy on the internet, get over it”. Once you accept that fact you’ll appreciate Google Now snooping through your online life, including email, and making excellent recommendations. It will tell you when shipments you ordered from Amazon are en route, if your plane is leaving on time, how far you are from home – in traffic, all sorts of good stuff WITHOUT YOU ASKING!

He is also one of the many people who help with SemiWiki, so I started him on an iPad2 three years ago and later upgraded him to an iPad3 and an iPad Mini. Other than the questionable battery life of the iPad3, he has great respect for Apple tablets but now wants a Samsung to match his phone. As soon as it arrives his iPad3 will go to my very appreciative wife.

Hopefully the new iPhone5s and iOS7 release will help in our smartphone battles and make him regret his defection. If not, Android here I come!
