
Let the FinFET Yield Controversy Begin!
by Daniel Nenni on 11-03-2014 at 8:00 am

It never ceases to amaze me how people point fingers and create controversy to cover their mistakes. It happened at 40nm, 28nm, and again at 20nm, and now it is time for the regularly scheduled yield controversy. Of course, any conversation about semiconductor yield generates clicks for SemiWiki, so I’m happy to play along.

It generally starts with a semiconductor equipment manufacturer missing its quarterly numbers and then throwing its customers under the yield bus. Just once I would like to hear a CEO say, “Hey, we missed our number, my fault.” Of course they never name the customer, so all customers come under suspicion, which is exactly what is happening here. This time it is Art Zafiropoulo, CEO of Ultratech:

As we have discussed on past conference calls, the difficult implementation of 3D FinFET microprocessors to high production manufacturing. Once again a major logic manufacturer delayed their FinFET ramp. We had then requested to prepare LSA tools for shipment for the end of the third quarter which was delayed. These LSA shipments for the most part caused our third quarter revenue to be less than projected. These LSA systems have been rescheduled for shipment in the fourth quarter. Due to the continued low yield in FinFET devices for the past two years, we have seen a reduction in new LSA bookings in subsequent shipments…

I’m very sorry you missed your quarterly number, Art, and that your stock price is less than half of what it was in January of last year. I’m also very sorry you have to blame customers with misleading statements such as this. Ramping leading-edge process technologies gets more difficult with every new node, so delays should be expected. How does the CEO of an equipment manufacturer not know this?

Clearly Art is talking about Intel in regard to 3D FinFET microprocessors, which I understand. In September of last year, Intel CEO Brian Krzanich (BK) held up a laptop powered by a 14nm CPU and claimed silicon would ship by the end of 2013. That chip is now shipping (about two quarters late) with products due in time for the holiday season. It really is an impressive microprocessor, so congrats to Intel on this one: Intel’s 14-nm Parts are Finally Here! | Chipworks Blog

Now check out this interpretation of Art’s comments from the Motley Fool’s “Senior” Technology Specialist:

However, after listening to the earnings call of chip equipment vendor Ultratech (NASDAQ: UTEK), it’s clear to me that neither TSMC nor Samsung quite has the FinFET transistor structure (which promises higher performing transistors at lower power) figured out. This, as far as I can tell, strongly suggests that Intel’s manufacturing lead remains intact.

Comparing TSMC’s manufacturing capabilities to Samsung’s is absurd. These are two VERY different companies, so don’t be a fool and lump them together. This “Senior” Technology Specialist owns Intel stock, of course.

An interesting note: when comparing the density of Intel’s 14nm process against TSMC’s, it is always pointed out that 16nm uses the 20nm process with FinFETs instead of planar transistors. When talking about yield, however, this is not mentioned, especially now that 20nm is in full production with a better-than-expected yield ramp. Weird, huh?

Also read: Cliff Hou at TSMC OIP

Here are some FinFET notes from Dr. Mark Liu, TSMC President and Co-CEO, from the TSMC OIP Forum held earlier this month:

  • Today 20nm production has a monthly volume of 60,000 wafers with good defect density
  • The yield learning on 20nm production will directly benefit 16nm production
  • 20nm capacity can quickly support the coming 16nm ramp up
  • More than 90 percent of TSMC’s equipment for the established 20nm node is being reused at 16nm.
  • TSMC’s 16nm defect learning has reached a similar level as 20nm (they are less than six months apart)
  • 10 customer 16nm tape-outs in 2014 so far, more than 45 are expected in 2015
  • TSMC is already in production with a 16nm FinFET network processor for HiSilicon Technologies Co. Ltd.
  • TSMC is ahead of schedule on their 2014 CAPEX

Look at the papers that were presented, they are all about 16nm silicon:

TSMC 2014 OIP – Paper Abstracts

Bottom line: The “major logic manufacturer that pushed out an equipment order” is not TSMC, I’m sure of that. Nor do I think it’s Samsung or Intel, as they have already moved 14nm equipment in and are ramping production. If I had to pick one from the other possibilities it would be UMC. They licensed IBM’s 14nm process and I have not heard of any production equipment moving in yet. Just my opinion, of course. The truth will come out in two to three quarters, so let’s circle back then and see who is true to their word.



DAC Deadlines: Action This Day
by Paul McLellan on 11-02-2014 at 7:00 am

DAC is coming up. OK, it’s not actually until next June: June 7-11, 2015, at the Moscone Center here in San Francisco. But there are lots of important deadlines coming up for papers, panels and more. The 52nd DAC will focus on five key tracks:

  • automotive
  • IP design
  • embedded systems
  • hardware/software security
  • electronic design automation (EDA)

I bet we are going to hear a lot about the internet of things (IoT) given those focus areas. Each track will include invited talks, embedded tutorials, special sessions/panels, regular research papers, and designer tracks. This is where you come in, because those papers and panels don’t come from nowhere. But you also don’t just hand in something a few weeks before DAC; there are long lead times on various aspects of the conference. You need to get started now.

Research Manuscripts (for the conference itself)
Abstract due before 5:00pm MT, November 21, 2014
Manuscript due 5:00pm MT, December 2, 2014
A DAC research paper explores a specific technology problem and proposes a complete solution to it, with extensive experimental results. Submission includes a six-page paper and a short abstract clearly stating the significant contribution, impact, and results of the submission. Authors are encouraged to submit research manuscripts on all aspects of EDA, embedded systems and software as well as automotive design, hardware and software security, and IP design research topics.

“Work-In-Progress” (WIP) Abstracts
Abstract due before 5:00pm MT, November 21, 2014
Manuscript due 5:00pm MT, December 2, 2014

A DAC work-in-progress provides authors an opportunity for early feedback on current work and preliminary results. Authors have two different opportunities to be part of the Work-in-Progress Poster Session.
Option 1: If authors submit a research manuscript and it is not accepted as part of the regular technical program, there will be a second opportunity to have their submission reviewed as part of the DAC WIP poster session.
Option 2: Authors submit a 100-word abstract and a one-page manuscript to be reviewed as part of the DAC Work-in-Progress Poster session.

Special Session Proposals

Due before 5:00pm MT, November 13, 2014
A special session is devoted to one of the following topics: traditional core EDA, embedded systems and software (ESS), automotive, security, IP design, or a topic of future interest. The topic should be presented from an angle that does not overlap with content from traditional research manuscripts, having a more educational component. A complete submission should list at least three inspiring speakers who address the topic from different viewpoints. The special session submission form is streamlined this year, requiring an overall abstract for the special session plus a title, abstract, and speaker names (and contact info).

Technical Panel Proposals
Due before 5:00pm MT, November 13, 2014
A good panel session explores a single, high-level issue or question with representatives of differing viewpoints. Panel suggestions may include anything that might appeal to DAC’s broad audience as long as the topic is interesting, timely, informative, and enlightening. The topic should be relevant to one or more segments of DAC attendees. Controversy is appropriate and encouraged.

But wait, there’s more…

Workshop Proposals

Due before 5:00pm MT, November 13, 2014

Co-located Conference Proposals

Proposals due before 5:00pm MT, November 13, 2014

The designer track and the IP track have a little bit more time. More details on these later in the year.

Designer Track
Abstract due before 5:00pm MT, January 20, 2015

IP Track

Abstract due before 5:00pm MT, January 20, 2015

Full details of everything are on the DAC website.




Semiconductor IP Forecast 2014 – 2020
by Daniel Nenni on 11-01-2014 at 10:00 pm

Given that the majority of my 30+ years in Silicon Valley has revolved around semiconductor IP, it should come as no surprise that IP is a big part of SemiWiki and of our first book, “Fabless: The Transformation of the Semiconductor Industry”. That is also why one of my first-round blogger draft choices was IP expert Dr. Eric Esteve. Eric has written 211 IP blogs on SemiWiki thus far, garnering close to one million views. Eric had not blogged before SemiWiki, but he is the author of the industry-standard Interface IP Market Survey, which was just updated last month.

According to Eric, Design IP is a niche market worth less than 1% of the semiconductor market, but its significance for design enablement is unprecedented. Eric started working in the Interface IP segment in 2005 as marketing director for PLDA. At the same time, PLDA was launching its PCI Express gen-1 controller IP, and within three years the company’s revenue tripled (PLDA was already 10 years old). Next he worked for Snowbush, the IP division of Gennum, building a five-year business plan, which required deep knowledge of all the protocols (PCI Express, SATA, SuperSpeed USB, HDMI, and DDRn). In 2009 Eric started IPnest to better use his IP expertise, which was pretty unique at the time. Eric released the first annual “Interface IP Survey and Forecast” in Q2 2009.

Why is this survey unique, you may ask? Because you can find information that is not available elsewhere. For example, there is an IP vendor ranking by protocol: USB2, USB3, PCIe, DDRn, HDMI, SATA, MIPI, and Ethernet. Eric also compiled a competitive analysis by protocol. For every protocol you can find price information (for the controller and for the PHY) and an evaluation of the design start count: the number of PCIe (or USB2, USB3, HDMI, etc.) IP sales in 2013, and then the total number of ASIC/ASSP design starts that include that protocol. To be able to calculate such a number requires an intimate knowledge of the IP market, absolutely.

Also read: Cliff Hou at TSMC OIP

Before working in the IP business, Eric spent 20 years in the ASIC business participating in the IP buying process to support customers; add the 10 years since spent essentially on IP, and you end up with 30+ years of IP experience. During the last five years the Interface IP market segment has doubled in size, from $240 million in 2008 to $480 million in 2013. It’s a fast-growing market, which makes the analysis in this report even more important.

One thing I can tell you is that the foundries rely on this forecast. In regard to foundation and CPU/GPU IP, the foundries support the IP vendors that their customers work closely with, which means TSMC has thousands of IPs that need to be prioritized and silicon-proven for each new process node.

If you look at IP there is a paradox: Design IP is a niche market, weighing in at $2.5 billion in 2013, which is small if you compare it to the foundry business. But remove Design IP and probably 70% of the chips processed by the foundries would vanish, which is why foundries take great care in supporting Design IP, and not only hard IP but RTL IP as well. It’s interesting to see that the more successful a foundry is, TSMC for example, the greater the care it takes with external IP, investing time, money, and resources to make sure the IP ecosystem develops properly. In return, foundry customers can reach production faster, which sells more wafers.

After reading the report the only question you will probably have for Eric is this: Is the Interface IP Survey forecast up to 2020 realistic? And the answer is:

“I have built a five-year forecast since the very first release of the survey. This comes from my experience with Snowbush, as it was one of the key requirements. This year I have based the forecast on the number of commercial design starts (IP sales) by protocol. The first task is to evaluate the TOTAL design starts, and their evolution up to 2020, by protocol. For example, SATA and PCI Express don’t have the same growth behavior, so you need to use protocol granularity to calculate the ASIC or ASSP design starts. Thus you have to evaluate the pervasion potential for each protocol. Then you have to insert the magic parameter: the “externalization factor”. There is an industry consensus that IDMs and fabless companies tend to buy certain IP that they used to develop internally. This is certainly true for Interface IP: it is standards-based, so it’s pretty difficult for a chip maker to add differentiation. The evaluation is complex, and it’s exactly here that 30 years of experience add value! A couple of days ago, I re-read the first version of the “Interface IP Survey” written in 2009, including a forecast up to 2013. In 2009, I evaluated the IP market to weigh $440 million in 2013. And the result is… $421 million for up-front licenses only. A forecast with less than 5% error at five years is OK for me!”
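To make that bottom-up arithmetic concrete, here is a minimal sketch in Python with entirely made-up numbers; the protocols, design-start counts, externalization factors, and license prices below are hypothetical placeholders, not figures from the IPnest survey:

```python
# Hypothetical illustration of the bottom-up forecast arithmetic described above:
# design starts per protocol -> externalization factor -> commercial IP sales -> revenue.

total_design_starts = {"PCIe": 400, "USB3": 350, "SATA": 120}        # made-up counts
externalization_factor = {"PCIe": 0.65, "USB3": 0.70, "SATA": 0.50}  # share of designs buying the IP
avg_license_price_k = {"PCIe": 350, "USB3": 300, "SATA": 200}        # made-up average up-front license, $K

def forecast_revenue_m(protocol):
    """Revenue ($M) for one protocol = design starts x externalization factor x average price."""
    ip_sales = total_design_starts[protocol] * externalization_factor[protocol]
    return ip_sales * avg_license_price_k[protocol] / 1000.0

for p in total_design_starts:
    print(f"{p}: ~${forecast_revenue_m(p):.0f}M up-front license revenue")
```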

You can get the latest IP Survey HERE.


What Presentations to Attend During IP-SoC 2014?
by Eric Esteve on 11-01-2014 at 11:00 am

Will you go to Grenoble next week to attend IP-SoC? I will, and I will certainly listen to these keynote talks:

These keynotes are given by most of the IP vendor leaders: Imagination Technologies, Synopsys, Cadence, Sonics (with the exception of ARM Ltd., I agree). The presentation from STMicroelectronics is expected to clarify a key issue about FD-SOI: can we consider that a solid IP ecosystem is being built around the technology? As of today, we have posted many articles on SemiWiki, generating an incredible number of comments (from very positive to neutral, and from neutral to very critical). Our knowledge of the semiconductor technology has pushed us to write very enthusiastic posts. Even if we are not process development specialists, we can understand the major benefits coming from FD-SOI in terms of direct wafer cost, low power, and indirect cost (by using forward biasing to enhance a slow device instead of trashing it). There is just one step left and the technology will become a credible mainstream solution: the creation of a solid IP ecosystem. We expect this presentation to provide the expected answers about the FD-SOI IP ecosystem.

The keynote from Mark Ma (who organizes IP-SoC Shanghai on top of running Jiatao, an IP rep firm in China) will be interesting to listen to as well. We know that Chinese semiconductor companies are rocketing, trying to serve a domestic market that is still dominated by foreign chip vendors, but we expect to get much more information and to learn about the IP market in China. How many local suppliers are there? Is it also dominated by the big guys, or do we see Chinese suppliers for CPU or GPU IP?

Don’t forget to listen to the invited talks; I have picked two of these:

In the first talk, Gabriele Saucier will speak about a Design & Reuse facet that you are probably unaware of: the IP management tool developed by D&R and sold to (very) big names of the electronics industry, like some large European telecom companies or a very successful American OEM (sorry, I can’t share the name, but it’s BIG!).

I will certainly attend the second, as I will be giving the presentation, updated from the one given at CDN-Live this year, with additional information about design starts by protocol. I will review the winners (protocols rather than IP vendors, even if I will propose a ranking by protocol)… and the losers (once again talking about protocols or technologies rather than dropping vendor names). I welcome your questions during the presentation!

Let’s finish with the two Panels:

  • “IoT Wonderland for IP based Electronic Systems” with the participation of Drew Wingard (Sonics), Michel Depeyrot (Dolphin Integration), Nikos Zervas (CAST), John Koeter (Synopsys), Eklovya Sharma (Sibridge Technology), and Ian Dennison (Cadence Design Systems)
  • “What is the most efficient distribution scheme for IP Providers and IP Consumers?” organized by Gabriele Saucier (Design And Reuse) with the participation of Harold Barbour (CAST), Sébastien Rabou (Barco Silex), Howard Pakosh (Chipstart), Nigel Dixon (T2M UG), Mark Ma (Shanghai Jiatao Industrial Corp Ltd), Sanjeev Sharma (Terminus Circuits), and Amir Bar-Niv (Cadence Design Systems)

I will tell you more after IP-SoC, as it’s difficult to foresee the quality of a panel: it’s all about discussion, interaction and questions from the audience!

See you on Wednesday, November 5th, in Grenoble at 9am!

From Eric Esteve, IPnest


Noise & Reliability of FinFET Designs – Success Stories!
by Pawan Fangaria on 11-01-2014 at 7:00 am

I think by now there has been a good level of discussion about FinFET technology at sub-20nm process nodes; it is the answer for ultra-dense, high-performance, low-power, billion+ gate SoC designs within the same area. However, it comes with key challenges with respect to power, noise, and reliability. A FinFET typically operates at a lower supply voltage with higher drive strength, which reduces noise margins and increases transient noise. The higher current density (~25% more for a typical FinFET transistor) in the smaller, more fragile interconnects of a dense design severely impacts electromigration (EM), making EM sign-off critical. Additionally, the fin structure of a FinFET provides little space for heat to escape, leading to heat accumulation that further aggravates EM and ESD (electrostatic discharge) risks. Also, higher gate counts and additional metal layers significantly increase simulation runtime and memory requirements, especially for full-chip analyses (with package and PCB data for better accuracy). Tackling these issues requires multiple analysis methods and engines with silicon-level accuracy (involving multiphysics simulation), along with smart computational algorithms that can handle very large designs (including package and system information).
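For a first-order feel of why both the higher current density and the trapped heat matter, the classic electromigration lifetime model is Black’s equation (a general reliability relationship, not something specific to the ANSYS tools):

\[ \mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{k_B T}\right) \]

where \(J\) is the current density in the interconnect, \(T\) its temperature, \(E_a\) the activation energy, \(k_B\) Boltzmann’s constant, and \(n\) typically between 1 and 2. Increasing either \(J\) (the ~25% higher FinFET drive current) or \(T\) (fin self-heating) shortens the expected interconnect lifetime, which is why thermal-aware EM analysis becomes so important.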

To address these requirements, ANSYS has added new capabilities to its production-proven RedHawk platform (with thousands of designs in successful silicon) to cover FinFETs and 2.5D/3D ICs with TSVs (through-silicon vias).

To deliver high capacity and high performance, RedHawk’s DMP (Distributed Machine Processing) technology smartly distributes the design database across a network of machines, where each machine analyzes a portion of the design within the context of the entire chip, including the package. This provides flat sign-off accuracy with enhanced performance and reduced per-machine memory for full-chip voltage drop, EM, and ESD analyses.

RedHawk employs state-of-the-art engines to meet the challenges of next-generation SoCs: an integrated transient simulation solver that can handle RLC network matrices with more than two billion nodes, along with distributed and cross-coupled package models, and a high-performance ALP3D solver for power-up (rush current) analysis in complex power-gating architectures. An engine supporting stacked-die structures allows simulation of heterogeneous designs where each die and interposer can be based on a different process technology. The Apache Power Library (APL) remains a fundamental underlying technology through its ability to capture the nonlinear behavior of circuit elements in compact, linear models that enable full-chip simulation with sign-off quality comparable to SPICE. RedHawk handles ever more complex EM rules and quickly identifies EM hotspots on power and signal wires as well as ESD failures.

RedHawk uses APL and Custom Macro Models (CMM) to incorporate device-level RC parasitics and switching current waveforms for full-chip transient simulation at picosecond resolution, providing SPICE-level sign-off accuracy. It simulates all power and ground domains simultaneously to accurately predict the current drawn and the voltage seen at every cell in the design, as well as the noise coupling that can occur inside the chip. At sub-20nm, for FinFET-based designs, the modeling, extraction, and analysis capabilities are expanded to support special metal layers, complex via structures, dummy devices, vertical resistances, and double patterning.

RedHawk-CPA (in the RedHawk 2014 release) accurately analyzes the effect of package parasitics on dynamic voltage drop, taking into account the current flow inside the package and bumps, which is necessary for the higher accuracy required at advanced nodes and in FinFET-based designs. It provides comprehensive chip-package sign-off through its integrated extraction, model hook-up, and co-simulation capabilities. Its 3D FEM solver uses the package layout and material properties to generate a detailed per-bump parasitic network model that is appropriate for time-domain simulations. RedHawk-CPA enables seamless merging of the fully distributed package parasitic network (which provides increased granularity and accuracy) with the on-die PDN. It displays both the chip and the package layouts and analysis results, enabling co-analysis, debug, and simultaneous optimization.

RedHawk’s current flow-aware extraction techniques help achieve sign-off quality results for every wire and via, enabling accurate analysis of EM violations on power/ground and signal lines. PathFinder supports IP- and SoC-level ESD integrity analysis by providing connectivity and interconnect failure checks for all current-flow pathways from an ESD event (HBM or CDM).

As discussed earlier, heat generation and trapping in a FinFET-based design can significantly impact the lifetime of the device. ANSYS’s comprehensive chip-package-system thermal analysis flow takes in chip data along with package and system data to generate accurate on-chip thermal profiles, which are used by RedHawk to enable thermal-aware EM and ESD analysis. A Chip Thermal Model (CTM), created using temperature-aware power density and package thermal boundary conditions from a system simulation, provides the required thermal integrity for FinFET and 3D ICs.

The ANSYS solution provides comprehensive coverage of power integrity and reliability verification for RTL-to-GDS power noise closure, with and without input vectors. PowerArtist enables a physically aware RTL design-for-power methodology that creates an RTL Power Model (RPM) of a given design. RedHawk’s VectorLess engine, along with logic simulation (using RTL VCD), state propagation (using activity), and gate-level VCD simulation engines, enables a comprehensive analysis toolkit for dynamic power noise. It can work in mixed mode, for example with one block having gate-level VCD, another having RTL vectors, and the rest of the design using the VectorLess engine to generate the overall switching scenario. RedHawk also provides enhanced capabilities to accurately simulate an on-chip LDO (low-dropout regulator) during full-chip static and dynamic simulations, enabling sign-off confidence.

For easy and flexible analysis of results and comprehensive debugging (identifying design weaknesses and their fixes), RedHawk provides a multi-tab, multi-pane GUI that allows simultaneous display of multiple results and tables, as well as multiple chip layouts with their own power densities and their impact on each other.

ANSYS’s power noise and reliability solution also includes production-proven system-level simulation tools: SIwave for signal integrity, Icepak for thermal integrity, and HFSS for EMI and high-frequency analysis.

ANSYS provides a comprehensive platform for system-aware chip simulation as well as chip-package-aware system simulation. Read the whitepaper posted on the ANSYS website for more details about each of these capabilities.

RedHawk is certified by TSMC for 16nm FinFET-based design sign-off and is part of Intel’s Tri-Gate-based design reference flow at 14nm. These foundry certifications, along with a substantial record of successful production with ANSYS solutions, provide increased confidence in these advanced technologies. Read more on the certifications here –
Intel & ANSYS Enable 14nm Chip Production
ANSYS Tools Shine at FinFET Nodes!

A recent success story of chip production with the ANSYS power, noise, and reliability solution comes from NXP, where a complex RF-CMOS IC has shown very good correlation with simulation results. To learn more about how they used ANSYS tools for this IC, attend a free webinar to be presented by NXP; here is the schedule –

Webinar: Noise coupling analysis for advanced mixed-signal IC’s
Date: Tuesday, November 4, 2014 – 8:00am – 9:00am PST
Location: Online
Register here



Debugging a 10 bit SAR ADC
by Daniel Payne on 10-31-2014 at 4:00 pm

SMIC (Semiconductor Manufacturing International Corporation) is a China-based foundry with technology ranging from 0.35 micron to 28nm, and we’ve blogged about them before on SemiWiki. I’ve been reading about SMIC recently because they created a technical presentation for the MunEDA Technical Forum Shanghai in March. They will also present it at the MunEDA User Group meeting on Nov 17-18 in Munich, Germany, with the title: SAR ADC Debug Using WiCkeD. The acronym SAR ADC stands for Successive Approximation Register Analog-to-Digital Converter. These converters accept an analog input and produce a precise digital output value at a given sampling rate and bit resolution. Other converter architectures include pipeline, flash, and sigma-delta ADCs.


Simplified N-bit SAR ADC Architecture

A customer design fabricated at SMIC in 40nm technology contained a 10-bit SAR ADC with low performance yield, so SMIC engineers set about debugging the source of the low yield and improving it. The ADC was producing wrong code conversion values and only reached an Effective Number Of Bits (ENOB) of 6 instead of the intended 10. Running a full SNDR (Signal to Noise plus Distortion Ratio) simulation on this circuit with 3,500 devices required tens of hours, so it was not deemed efficient for debug purposes.


Low performance yield with ENOB of 6, instead of 10
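To put those numbers in signal terms, the standard textbook relationship between ENOB and measured SNDR (not something specific to the SMIC presentation) is:

\[ \mathrm{ENOB} = \frac{\mathrm{SNDR}_{\mathrm{dB}} - 1.76}{6.02} \]

so an ideal 10-bit converter corresponds to roughly 6.02 × 10 + 1.76 ≈ 62 dB of SNDR, while an ENOB of 6 corresponds to only about 38 dB, a loss of roughly 24 dB of dynamic range.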

Related – Transistor-level Sizing Optimization

To debug this low yield issue they used an EDA tool from MunEDA called WiCkeD, where the debug flow is based on a fail pattern and a testbench:

Differential Nonlinearity (DNL) error is defined as the difference between an actual step width and the ideal value of 1 LSB (Least Significant Bit).


DNL must be less than 1 LSB for no missing codes
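As an illustration of how DNL follows from that definition, here is a minimal sketch that computes per-code DNL from code transition voltages; the helper function and the 3-bit example values are hypothetical and are not part of the WiCkeD flow:

```python
def dnl_from_transitions(transition_levels, vref, n_bits):
    """DNL[k] = (actual width of code k) / (ideal LSB) - 1, from code transition voltages."""
    ideal_lsb = vref / (2 ** n_bits)
    dnl = []
    for k in range(len(transition_levels) - 1):
        actual_width = transition_levels[k + 1] - transition_levels[k]
        dnl.append(actual_width / ideal_lsb - 1.0)
    return dnl

# Hypothetical 3-bit example: a DNL value at or below -1 LSB would indicate a missing code
transitions = [0.125, 0.250, 0.370, 0.505, 0.625, 0.760, 0.875]  # volts
print(dnl_from_transitions(transitions, vref=1.0, n_bits=3))
```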

Next, a large number of key devices were selected to see how they impacted the deviation values while varying Vin. Results from this key-device analysis showed that devices in the capacitor array had the greatest impact on deviation, followed by devices in the second-stage comparator, and finally devices in the first-stage comparator.

Mismatch analysis was run on sub-blocks independently to identify sensitive blocks before analyzing single devices. This hierarchical approach greatly reduces the simulation effort.

Related – An IO Design Optimization Flow for Reliability in 28 nm CMOS

Plots were made to show the distortion behavior of each device to see if correlation was negative or positive:


Device variation: the deviation value for code 496 caused by distortion is negatively correlated


Device variation: the deviation value for code 504 caused by distortion is positively correlated

While verifying the size parameters and process parameters, it was found that mismatch in the MOM (Metal-Oxide-Metal) capacitors was the main cause of distortion. By changing the mismatch variation of the MOM capacitor, the simulated ENOB improves from 6 (red) to 9 (green):


ENOB improvement by modifying the mismatch variation of MOM capacitor

Conclusion

A 10 bit SAR ADC with low yield was debugged using the WiCkeD tool to pinpoint the source of deviations. Reducing the mismatch variation of MOM capacitors in simulation increased ENOB significantly, proving that the issue was actually caused by the local variation of these devices. The SMIC debug strategy was based on the designer’s circuit knowledge, and enabled by MunEDA’s flexible, interactive analysis tools.

To view the 19-page presentation by SMIC, request it online.


GNSS, dead reckoning, and MEMS IMUs
by Don Dingee on 10-31-2014 at 4:00 pm

GNSS is a wonderful invention, and low-cost receivers have crept into smartphones and other mobile devices. However, GNSS does not solve all problems, especially in urban environments. The canyon effect blocks signals at street level between tall buildings, and signals do not penetrate to the interior of parking garages, tunnels, basements, or the lower floors of large structures. Continue reading “GNSS, dead reckoning, and MEMS IMUs”


Effective Bug Tracking with IP Sub-systems
by Daniel Payne on 10-31-2014 at 7:00 am

Designing an SoC sounds way more exciting than bug tracking, but let’s face it – any bug has the potential to make your silicon fail, so we need to take a serious look at the approaches to bug tracking. When using an IP or an IP subsystem in a design, the SoC integrators require some critical knowledge about this IP. The actual design and test files are just one part of that information. It is also very useful to understand all of the outstanding bugs associated with this IP subsystem in real time so that the impact of using a specific version of the IP is clear.

Problems with traditional bug tools
The key problem is that most bug tools, like Jira or Bugzilla, are project-specific. Bugs discovered in IPs developed as part of a bigger system are associated with that system rather than with the IP itself. If that IP is then used elsewhere, there is usually no good way to track its open bugs.

Related – IP and Design Management Done Right

The other shortcoming of native bug tools is that bugs cannot be associated with multiple projects or systems. As a result, even if the bugs related to an IP were carved out into a separate project, the original project would no longer have access to those bugs.

Many teams have tried to implement IP hierarchy, bug portability, and so on by using custom fields, complex logical associations, and call-backs directly in the bug tools themselves. However, IP reuse and IP hierarchy form a dynamic, constantly evolving situation, which makes keeping bug hierarchy and IP reuse in sync an almost impossibly complex task.

Connecting bugs to an IP management system
Methodics takes the approach that the most scalable way to connect bugs to IP is to overlay IP metadata in the bug tracking system. Connecting bugs to IP means that the IP management platform itself will understand which bugs are associated with which IP by accessing this metadata. This allows multiple users to view IP bugs directly in their project hierarchy.

Related – Speeding up IP and Data Management

The actual project under which a bug is filed is no longer a limitation. Users can pick the project that they wish (the one they are currently working on) to file a bug. However, the tool allows users to also associate the bug with the IP itself. This allows the IP and IP subsystem hierarchy to be maintained separately in a scalable, fast database, removing the need for the bug tool to track the evolution of the project.

The other advantage of this method is that bugs are visible in the actual hierarchy of the project. For example, take a simple CPU project shown below. In this case, the CPU is made up of three subsystems, which also have additional IP of their own.

In order to make sense of all the bugs associated with each of these components, the user should be able to see the currently open bugs in a similar hierarchy – allowing the user to drill down into each level. A hierarchical bug display looks like:
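As a purely hypothetical illustration of the idea (not the Methodics implementation or its GUI), a hierarchical roll-up of open bugs over an IP tree can be sketched like this:

```python
from dataclasses import dataclass, field

@dataclass
class IP:
    """An IP block with its own open bugs and its child IP blocks."""
    name: str
    bugs: list = field(default_factory=list)      # bug IDs filed against this IP
    children: list = field(default_factory=list)  # sub-IPs / subsystems

    def open_bugs(self):
        """All open bugs for this IP, including everything below it in the hierarchy."""
        result = list(self.bugs)
        for child in self.children:
            result.extend(child.open_bugs())
        return result

# Hypothetical CPU project with three subsystems, each carrying its own bug list
cpu = IP("cpu_top", bugs=["BUG-101"], children=[
    IP("fetch_unit", bugs=["BUG-210"]),
    IP("load_store_unit"),
    IP("cache_subsystem", bugs=["BUG-305"], children=[IP("l2_controller", bugs=["BUG-322"])]),
])

print(cpu.open_bugs())  # ['BUG-101', 'BUG-210', 'BUG-305', 'BUG-322']
```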

Bug relationships – discovering predecessors and successors
As an IP develops, some bugs are fixed and others are discovered. Each released version of an IP has a set of outstanding bugs that are open against it.

Related – IP Management Update at DAC

It is quite useful to explore bugs in earlier and later releases of each IP so that the full context of the current IP’s bug profile is visible. This can be done by creating parent-child relationships between IP versions and branches, and having an easy discovery process for these bugs. A snapshot of how this is done is shown below:

Another way these parent/child relationships can be leveraged is between branches of an IP. Bugs found in child branches (variants) can be reflected upstream back to the parent branch (trunk) context and merged into the effective bug list. The same applies to bugs found in the original parent branch: new problems should be communicated to downstream child branches for analysis. Of course, not all of these upstream/downstream bugs are relevant in every context, so the ability to include or dismiss issues is important.
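Continuing the same hypothetical sketch (the function and fields are invented for illustration, not a Methodics API), merging bugs reflected from a parent or child branch while allowing reviewed, irrelevant ones to be dismissed might look like:

```python
def effective_bug_list(own_bugs, reflected_bugs, dismissed):
    """Merge bugs reflected from a parent or child branch into this branch's list,
    skipping any that have been reviewed and dismissed as not relevant here."""
    merged = list(own_bugs)
    for bug in reflected_bugs:
        if bug not in dismissed and bug not in merged:
            merged.append(bug)
    return merged

# Hypothetical trunk/variant example
trunk_bugs = ["BUG-401", "BUG-402"]
variant_bugs = ["BUG-402", "BUG-510"]   # found on a child (variant) branch
dismissed_on_trunk = {"BUG-510"}        # reviewed: does not apply to the trunk

print(effective_bug_list(trunk_bugs, variant_bugs, dismissed_on_trunk))  # ['BUG-401', 'BUG-402']
```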

Identifying the context
Another problem with managing bugs in an IP context is understanding at what level to apply the bug. Often the engineer who finds the problem doesn’t have the full context of the hierarchical IP where the problem was found, and the best they can do is file the issue against the IP level they have visibility into. This leaves the subsystem owner to refile the bug against the appropriate IP further down the hierarchy, and it requires a process for managing those changes that communicates with the relevant stakeholders.

Summary
We’ve seen how important it is for bugs to be managed in an IP-centric fashion in modern SoC designs. With IP reuse the norm, bugs must be managed at the resource level rather than the project level, and parent-child relationships must be tracked to notify downstream and upstream stakeholders when problems occur. Integrators must also have the ability to create placeholder issues at the subsystem level they have visibility into, along with a mechanism for moving those issues to the appropriate lower-level IP once they have been properly analyzed.

With these new tools, a true IP-centric design methodology can be embraced by our SoC design community.


Silvaco at the TSMC 2014 Open Innovation Platform
by Daniel Payne on 10-31-2014 at 7:00 am

The success of our semiconductor ecosystem depends on collaboration, and the annual TSMC OIP event, just held on September 30 at the San Jose Convention Center, was a prime example of that. I didn’t attend this year, but I did follow up with Amit Nanda of Silvaco this week to hear about what they presented. As a consultant I’ve worked with Amit before, when he was at Barcelona Design Inc., an interesting analog-compiler company, now part of Synopsys.

Related: EDA Mergers and Acquisitions Wiki

Many engineers think of Silvaco as supplying only TCAD tools, but that’s not really true, because they’ve assembled a custom IC design flow of tools that supports many PDKs at TSMC:


EDA Tools for Custom IC Design

AMS designers can build their next IoT products using schematic capture, SPICE circuit simulation, waveform viewing, layout editing, DRC, LVS, and parasitic extraction tools. Silvaco’s SPICE circuit simulator is called SmartSpice, and it has been upgraded with:

  • Faster simulation using a new parallel algorithm
  • Models certified at 16nm
  • ETMI reliability support
  • Soft error reliability


Improved capacity and performance

Related: Modeling and Analysis of Single Event Effects (SEE)

SmartSpice is used by circuit designers for library, memory and critical path characterization and it also has built-in optimizers.

In SPICE circuit simulation you need netlists with extracted parasitics to get the most accurate results, and the 3D RC extractor from Silvaco is called Clever. Memory cell design requires accurate parasitics in order to meet RAM performance goals and catch all of the capacitive coupling effects caused by 3D layout structures like FinFETs.


3D structure of an SRAM cell

SPICE models are created by the Utmost IV tool, and many different device types are supported: TFT, UOTFT, BSIM-CMG for FinFETs, HSIM-HV2 for high voltage devices, BJT, SOI, JFET, Diode, FRAM.


TFT example fit plot using the RPI a-Si TFT model

Related: SiC and Si Power Devices

On the TCAD side, engineers can virtually model diverse semiconductor technologies: displays, power devices, optical devices, FinFET, FD-SOI, and even soft error reliability. Victory is the product name for this TCAD modeling, and it supports 1D, 2D, and 3D simulation.

Related – TCAD to SPICE

I think that you’ll agree that Silvaco has a lot more than just TCAD tools to offer semiconductor engineers today, because circuit simulation and IC CAD tools are also included in their tool flow. Another factor that you need to know about Silvaco is that their tools are affordable compared to the big three in EDA. 2014 marks the 30th year in business for Silvaco, which is quite an accomplishment in this competitive industry.


Improving Verification by Combining Emulation with ABV
by Tom Simon on 10-30-2014 at 4:00 pm

Chip deadlines and the time to achieve sufficient verification coverage run continuously in a tight loop, like a dog chasing its tail. Naturally it is exciting when innovative technologies can be combined so that verification can gain an advantage. Software-based design simulators have been the mainstay of verification methodologies, and test benches are as old as the earliest designs. The advent of language-based design created an opportunity to insert code for functional checks right into the designs themselves, but this activity was initially an informal and inconsistent practice.

In the last decade, the formalization and standardization of these assertion-based checks has made them easier to implement and more effective as debugging and verification tools. Accellera has developed the Property Specification Language (PSL) for specifying assertions for a variety of HDLs. There are also SystemVerilog Assertions (SVA), which are part of SystemVerilog. Both are IEEE standards.

Assertion-based verification (ABV) brings with it many benefits. For one, as developers write HDL code they can add assertion checks in situ. The assertion checks added at this point can also serve as documentation, and they will be more efficient and potentially more complete than directed and constrained-random verification. Of course, ABV code can be added externally as well by using the “bind” feature. In some cases that is preferable because it makes the verification code reusable, and the verification team can easily work in parallel with designers during development.

At higher levels of design abstraction, ABV suites facilitate reuse. Having a set of assertion-based checks for SoC building blocks makes it easier to confidently reuse blocks from internal or external sources, with the knowledge that the blocks will more easily meet verification requirements. It turns out that there is now a niche market that provides Verification IP (VIP) for this very purpose. For example, companies such as Mentor have large libraries of VIP for many different standard design elements. Because of their large user base and productized development, commercially available VIP can be better tested and documented than in-house developed VIP.

Performing assertion-based checks with a software-based simulator affords complete visibility, which is invaluable for debugging. It is also easy to toggle each level of checks on or off: internal, block, signal, protocol, or chip level. Lastly, error reporting is of very high quality, including coverage information and data useful for debugging. Naturally this is a fully developed approach. Its main drawback is that as the verification problem moves up the hierarchy, the simulation data set grows rapidly, so performance and throughput become limiting factors. Compromises on the number of clock cycles that can be run or the amount of test-case data processed become necessary. Given that the verification team is always in a race with time, software-based approaches alone may not suffice.

In the worst-case scenario, a test chip might come back with an issue that only shows up after many clock cycles or that is very data-dependent. Despite high clock rates, debugging in silicon is hindered by a lack of observability. Unless critical signals or registers can be probed, tracking down problems can become an open-ended task.

What if there were a solution that supported large design sizes, fast execution, high design visibility, and full support for the assertion-based verification standards? Mentor recently put out a white paper that discusses using their Veloce emulators as the vehicle for performing ABV. Using an emulator can provide a huge speed advantage: tests can be run on more cases over more clock cycles. By fully supporting the assertion-based verification standards in the Veloce compilers, Mentor is making it easy to bring assertion checking to emulator users. Another big advantage of this approach is that visibility into the design is maintained, so coverage data and debugging information are readily available.

Mentor also reports that they are offering tight integration in the Veloce GUI for reporting and controlling what assertions are exercised. This allows for focused verification as the design moves from block to system level.

It is clear that combining the acceleration and flexibility of emulation with the power of assertion-based verification creates a synergistic effect that can boost design debug and product verification.