
Digital Design Trends – A Cadence Perspective
by Bernard Murphy on 04-21-2016 at 7:00 am

I talked with Paul Cunningham (VP front-end digital R&D) at CDNLive recently to get a Cadence perspective on digital design trends. He sees needs from traditional semiconductor companies evolving as usual, with disruption here and there from consolidation. But on the system side there is an explosion in demand – for wearables, furniture, grid and power delivery management, and many more domains. Since many of these teams are starting from a blank sheet, they’re looking (especially in Asia) for high-productivity front-to-back solutions that will get them running at full speed as fast as possible.

Everyone is squeezing cost and power, everyone needs to build in security, and delivery is (unsurprisingly) schedule-sensitive, but other factors counter traditional expectations of IoT design. Devices are complex (we have now realized you can’t push all the heavy compute to the cloud), so designers want to go to advanced nodes. System designers also want to build a diverse range of solutions, so they require flows that support fast turn-around without needing vast teams of engineers. In short, direct involvement from systems companies pushes the well-known problem even harder: design complexity and diversity continue to rise much faster than engineering resources, schedules are getting tighter, and cost-sensitivity is climbing.

On turn-around time, physical synthesis has to handle 3-5X the number of gates in the same time, which demands all kinds of fundamental changes: massive parallelism, coupling to physical design, and handling of advanced technologies. The Cadence Genus Synthesis Solution is now handling 3-5 million placeable instances in production designs and should already be scalable to 10+ million instances flat in overnight runs, making it practical to optimize all but the largest IPs, and even some sub-systems, in single physical synthesis runs.

On high-productivity solutions, physical synthesis requires very accurate correlation between the physical estimates made at this stage and what will actually be implemented in physical design. You have to use the same placer, the same global router, the same extractor, and the same delay calculator, which is exactly what Genus does, sharing the same engines with the Innovus Implementation System.

But productivity is not just about engine correlation. Who among us didn’t curse Microsoft Office when Word, PowerPoint, and Excel supported what should have been exactly the same features in different ways? Didn’t we feel more productive when those features became common? The same thing applies to design. Front-end and implementation designers need to be able to easily exchange timing and physical constraints, timing reports, and scripts without confusion between different flavors of format. Genus provides this, again through deep engine integration with Innovus, extending even to report formats.

On cost pressure, a major contributor to device unit cost is test time (which can be as much as 50% of unit cost). System and IoT applications are pushing to reduce this further by looking for even higher levels of test compression. Current compression approaches compress, in effect, linearly by splitting scan chains into multiple shorter chains. Unfortunately, this increases routability problems, since every scan chain must connect to the compression logic. The result can be increased die area, which puts the cost burden back on silicon. Effectiveness is also bounded by the need to keep chains sufficiently long to deliver patterns for challenging test cases.
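To see why the linear approach saturates, consider a toy model (illustrative numbers only, not Cadence data): splitting the scan flops into N chains cuts shift cycles per pattern by roughly N, but every added chain is one more connection that must route to the compression logic.

```python
# Toy model of linear scan compression (illustrative numbers, not Cadence data).
# Splitting the same flops across N chains shortens the longest chain, so each
# pattern loads in fewer shift cycles, but every added chain is one more
# connection that must route to the decompressor/compactor logic.

def shift_cycles(total_flops: int, num_chains: int) -> int:
    """Cycles to load one pattern = length of the longest scan chain."""
    return -(-total_flops // num_chains)  # ceiling division

TOTAL_FLOPS = 1_000_000
for n in (1, 10, 100, 400):
    print(f"{n:>4} chains: {shift_cycles(TOTAL_FLOPS, n):>9,} shift cycles/pattern, "
          f"{n} connections to compression logic")
```

Test time falls linearly with chain count, but the routing burden grows just as fast, and chains that get too short can no longer carry complete patterns for hard-to-test faults, which is exactly the bound described above.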

Cadence recently announced 2D-based elastic compression in the Modus Test Solution, which through a grid-based approach can greatly reduce routing overhead; the “elastic” part refers to the ability to borrow from previous test clock cycles to extend scan patterns for challenging test cases. Between these innovations, Modus allows for much higher levels of compression, reducing time on the tester without bloating die area. Experience with production designs shows a 2-3X reduction in test time with a 2.6X reduction in routing overhead.

You can read more detail about recent Genus and Modus advances HERE and HERE.

More articles by Bernard…


Cross-viewing improves ASIC & FPGA debug efficiency
by Don Dingee on 04-20-2016 at 4:00 pm

We introduced the philosophy behind the Blue Pearl Software suite of tools for front-end analysis of ASIC & FPGA designs in a recent post. As we said in that discussion, effective automation helps find and remedy issues as each re-synthesis potentially turns up new defects. Why do Blue Pearl users say their tool suite is easier to use? Continue reading “Cross-viewing improves ASIC & FPGA debug efficiency”


Top Mobile OEM Uses NetSpeed to Boost Its Next Gen Application Processor
by Eric Esteve on 04-20-2016 at 12:00 pm

The smartphone segment is certainly the most competitive market for chip makers today, and the yearly product launch cadence puts a lot of pressure on the application processor design cycle. End-users expect higher image definition, better sound quality, and ever faster and more complex applications, all of which push the limits of application processor performance in terms of higher frequency, lower latency, and reduced power consumption. The race for ever better performance also translates into ever more cores, CPU and GPU.

Optimizing processing by integrating cache memory is a well-known architecture, but core multiplication creates a new challenge: cache coherency. Because memory is shared between many cores (6 GPU and 2 CPU in the picture below), when one core reads a memory location after another core has written to that same location, the read must return the last written value, not an older one. You may define cache coherency as the ability to maintain consistency between the caches and memory. Cache coherency adds to design complexity (a specific function has to be developed), but it also severely impacts overall system performance; that’s why it is becoming a must-have function in complex multi-core SoCs, even in consumer and mobile applications, as it already is in networking and the data center.
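As a concrete illustration of the invariant, here is a minimal sketch in Python. Real coherent interconnects implement snoop- or directory-based hardware protocols such as MESI; nothing below represents NetSpeed’s design, it only shows the read-after-write behavior the paragraph describes.

```python
# Minimal sketch of the coherence invariant: a write by one core must
# invalidate stale copies so a later read by any core returns the new value.
# Illustrative directory-style model, not NetSpeed's protocol.

class CoherentMemory:
    def __init__(self, num_cores: int):
        self.mem = {}                                     # backing store
        self.cache = [dict() for _ in range(num_cores)]   # per-core caches
        self.sharers = {}                                 # addr -> caching cores

    def read(self, core: int, addr: int) -> int:
        if addr not in self.cache[core]:                  # miss: fetch, track sharer
            self.cache[core][addr] = self.mem.get(addr, 0)
            self.sharers.setdefault(addr, set()).add(core)
        return self.cache[core][addr]

    def write(self, core: int, addr: int, value: int) -> None:
        for other in self.sharers.get(addr, set()) - {core}:
            self.cache[other].pop(addr, None)             # invalidate stale copies
        self.sharers[addr] = {core}
        self.cache[core][addr] = value
        self.mem[addr] = value                            # write-through for simplicity

m = CoherentMemory(num_cores=8)
m.read(core=1, addr=0x40)               # core 1 caches the old value (0)
m.write(core=0, addr=0x40, value=7)     # core 0 writes; core 1's copy is invalidated
assert m.read(core=1, addr=0x40) == 7   # core 1 sees the last written value
```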

One of NetSpeed’s customers is a mobile OEM developing its own Application Processor (AP), which it then integrates into its flagship smartphone product. This latest-generation application processor was defined as a future-proof platform. To ensure that the processor would be adaptable for future generations, the spec required support for cache coherency. In light of a long list of stringent requirements (2X performance, lower power, complex QoS requirements), the team was relieved not to be locked into a legacy design or forced into using a low-bandwidth crossbar-based interconnect design.

There is only one commercially available on-chip interconnect solution capable of satisfying both coherent and non-coherent requirements, and that is NetSpeed’s Gemini NoC IP. By selecting NetSpeed IP, the company was able to implement a single solution today that satisfies current requirements for a non-coherent design and future requirements for coherent designs, and even designs with a mix of coherent and non-coherent traffic. This approach minimized risk for future SoC designs, because when the design team later needs to implement a new cache-coherent architecture, they will be working with an interconnect IP that is already known and well understood.

Not all interconnects (or NoCs) are created equal. NetSpeed provides a physically aware interconnect synthesis engine, an innovative solution that optimizes the interconnect architecture based on workload models and delivers the right topology within minutes. Implementation of NetSpeed’s NoC led to a new-generation SoC that delivers 20% lower latency and 15% higher maximum frequency than the targets set by the customer. Because NetSpeed synthesizes a pre-verified interconnect design within minutes, the direct impact on the design schedule is to shrink six months of analysis down to a few hours.

Designing a heterogeneous multi-core SoC for mobile requires meeting very aggressive targets for power consumption and for Quality of Service (QoS). QoS is not the same as performance (in terms of frames/second or MIPS), but mediocre QoS can drag down an excellent on-paper performance figure. For example, NetSpeed’s Gemini NoC allows building a real-time bandwidth allocation mechanism through automated virtual channel assignment. The number of wires after P&R directly impacts SoC power consumption, and also SoC performance itself due to wiring congestion. That is why obtaining 65% fewer wires next to the memory controller is such an important result: not only does power consumption decrease, but the easier routability in this critical area also helps meet more stringent timing constraints.

Using NetSpeed’s NoC solution to design this heterogeneous multi-core mobile Application Processor helped the customer meet or exceed the demanding TTM requirement for this kind of SoC, improve QoS, push the maximum frequency limit, prepare for the future by integrating a cache-coherent NoC, and finally launch an AP SoC with power consumption in line with mobile customers’ expectations.

This blog is extracted from NetSpeed’s “Mobile” Success Stories. You can read more about this story, along with Data Center AP, Automotive SoC, Networking, Digital Home SoC, and Data Center Storage stories, here

From Eric Esteve from IPNEST


The Semicon Industry Keeps Wafer Fabs Moving Up
by Pawan Fangaria on 04-20-2016 at 7:00 am

The worldwide revenue of the semiconductor industry has remained flat over the last few years; to be more precise, overall semiconductor revenue declined by 1.9% in 2015, and Gartner forecasts a further decline of 0.6% in 2016. Total revenue reached a record high of $340.3 billion in 2014.

Well, the semiconductor industry has matured. A market with growth between 0 and 5% for five years or more can be considered mature. However, the health of the semiconductor industry remains evergreen, which can best be judged by the way wafer fabs are moving up the chain. Although 450mm wafer volume production is not expected this decade, the number of 300mm wafer fabs has been increasing continuously since the beginning of 2002. Semiconductor companies are moving production facilities up from 150mm and 200mm wafers to 300mm wafers.

The number of 200mm wafer fabs in volume production reached a maximum of 210 and then started declining; by the end of 2015, it was down to 148.


An IC Insights report provides this chart depicting the number of 300mm IC wafer fabs in volume production since 2002. The chart shows steady growth in the number of 300mm IC wafer fabs and forecasts the count to reach 117 by 2020.

The only year the number declined slightly was 2013, due to the closure of two ProMOS fabs and schedule delays for some new fabs that were to open that year. In 2014, the number of 300mm fabs jumped from 81 to 90. Naturally, this was reflected in a handsome 8% increase in semiconductor revenue in 2014, from $315 billion in 2013.

In the semiconductor industry, growth of 0-5% in dollar terms means much higher growth in terms of wafer area produced, because increasing the scale of wafer production drastically reduces the price per transistor.
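A one-line computation makes the point; the 10% decline figure below is an assumption for illustration, not a number from the article.

```python
# Illustrative arithmetic (assumed numbers): if revenue per unit of wafer
# area falls as production scales, flat dollar revenue still implies
# substantial growth in wafer area shipped.

revenue_growth = 0.00            # flat revenue in dollar terms
dollars_per_area_change = -0.10  # assumed 10% yearly decline in $/area

area_growth = (1 + revenue_growth) / (1 + dollars_per_area_change) - 1
print(f"Implied wafer-area growth: {area_growth:.1%}")  # ~11.1% per year
```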

The arrival of 450mm wafer fabs will determine how the number of 300mm wafer fabs grows in the future. Also, the price advantage of 450mm wafers is yet to be proven because of their higher production cost. Lithography is a big challenge in 450mm wafer production; the equipment is not there yet and will be highly priced when available.

Within the semiconductor industry, the wafer fab segment is also maturing. The number of companies with fabs is declining, and there are fewer companies with higher-investment fabs such as 300mm. Today, 200mm fabs are spread across 61 companies worldwide, whereas 300mm fabs are concentrated among just 22 companies.

There will be still fewer companies with 450mm wafer fab facilities. In the last two years we saw large-scale consolidation among semiconductor companies. Amid the growing concentration of wafer fabs in fewer companies, reducing the price per transistor any further may prove challenging. However, wafer production is expected to remain healthy.

More Articles from Pawan


ARM and FD-SOI are like Peanut Butter and Jelly!
by Daniel Nenni on 04-19-2016 at 4:00 pm

When I first heard about a foundry possibly licensing FD-SOI I would have bet it was SMIC in China. What better market for a low-cost, low-power, easy-to-manufacture alternative to FinFETs? The foundry of course was Samsung, which also made complete sense since they have 28nm gate-first capacity that matches up nicely to 28nm FD-SOI. The same goes for GlobalFoundries and the Dresden fab, with gate-first 28nm capacity ready to be converted to 22nm FD-SOI.

Instead of taking the short road to FD-SOI, SMIC will take the long road to 14nm FinFETs and in my opinion they will fail miserably yet again. Seriously, how competitive will a first generation 14nm FinFET process be in 2020? Remember, third generation FinFETs will hit the new Nanjing TSMC fab in 2018.

The other bet I lost was on FD-SOI and ARM. For the life of me I have no idea why ARM did not champion FD-SOI from the very beginning. Seriously, what better fit for FD-SOI than ARM IP and IoT? That all changed at the FD-SOI Symposium in San Jose last week. Will Abbey of ARM presented “Realize the Potential of FD-SOI in Growing Markets” with the opening line “An honest confession is good for the soul, but not for the reputation”. The quote originated from Thomas Robert Dewar, 1st Baron Dewar, founder of the Scottish whisky distiller, by the way.

Will Abbey admitted that ARM had been on the FD-SOI sidelines before presenting data from a skunk-works project that had never been seen before. Unfortunately his slides are not on the FD-SOI Symposium website yet (hopefully they will be soon), but I did get a picture of the summary slide:


Another presentation worth viewing that is not up yet is “Enabling Next Generation Semiconductor Product Innovations with 22FDX” by Subramani Kengeri, VP CMOS Business Unit, GLOBALFOUNDRIES. Fortunately Subi and I go way back, so he gave me his slides. You can find them HERE.

I was at the 22FDX launch in Dresden last year and was very impressed not only by the technology but also by the conservative approach of the launch. Everything I was told in Dresden is right on schedule according to the PDK releases I have seen thus far, so congratulations to GF, and I look forward to the official 22FDX HVM launch in 1H 2017 as planned, absolutely.

An interesting note: looking at the many badges on the table, I did not see anyone from SMIC or UMC, which is a mistake on their part. I did see people from TSMC, so that was interesting, and lots of people from the fabless semiconductor ecosystem. More badges than I could easily count actually, it was standing room only.

Another interesting note: the EDA, IP, and ASIC companies I know and love are fully behind FD-SOI, so it is not just press-release fluff going around. I have counted close to 100 FD-SOI tape-outs in progress, and a dozen or so of those have already taped out (Samsung 28nm FD-SOI), so I hereby declare 2017 the year of FD-SOI HVM!


IOT Security Trends – Is the online world more dangerous?
by Bill McCabe on 04-19-2016 at 12:00 pm

Security threats are among the main concerns surrounding the Internet of Things. By its very nature, the IoT is a target of interest for those who want to commit industrial or national espionage. By hacking into these systems and putting them under denial-of-service or other attacks, an entire network of systems can be taken out. This has made cyber criminals very interested in the IoT and the possibilities that surround its misuse.

Fortunately, companies are realizing that there are many potential problems with their frameworks. This has started a trend of companies reviewing these areas and coming up with effective solutions. Until that happens, those using these devices should remain wary. The IoT allows devices to exchange contextual information and to execute certain decisions based on that information. This means cars, homes, power supplies, and even water supplies using the IoT could potentially be at risk. In these cases, physical security is irrelevant, as a simple change of data could affect the control of these systems and cause them to behave dangerously.

A security breach through the IoT isn’t merely a theoretical possibility, either; there are already cases of hackers breaking into these systems. In one test scenario, two cars were hacked, their brakes disabled, and their lights turned off, all without the driver having any ability to control them. A yacht taken off course by a hijacked GPS system is another example.

Even in the home, people are at risk. Devices with video cameras, children’s monitors, and similar devices that should be safe are actually giving hackers the chance to cause havoc in the home. Smart wired homes have had their temperature settings changed and their lights flickered on and off as hackers explore the possibilities that are out there. Even the latest digital electric power meters are allowing hackers to steal power with ease.

But these device annoyances aren’t where the heart and soul of the IoT threat lies. Instead, it is the possibility of what can be done with these systems at scale. Since everything is attached through the internet, these devices have the potential to mount a third-party attack on websites. If millions of devices hit a website at the same time, they can overwhelm its bandwidth and potentially take down a competitor’s website, effectively crippling them until they find a workaround. Corporate espionage becomes a real concern as competitors realize they can turn simple devices against their main competition and draw away their business.
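The arithmetic of scale is what makes this threatening. A quick sketch with assumed per-device numbers (illustrative only):

```python
# Illustrative only: modest per-device traffic from compromised IoT endpoints
# aggregates into link-saturating floods when the device count is large.

devices = 1_000_000
per_device_kbps = 128  # assumption: a trickle of traffic from each tiny device

total_gbps = devices * per_device_kbps / 1e6
print(f"Aggregate flood: {total_gbps:,.0f} Gbps")  # 128 Gbps against one target
```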

All this means that the virtual world has the ability to impact the physical world. The solution right now is to boost security on the devices that use the IoT. With added security tools and advanced APIs that can detect usage beyond what a system is designed to do, the risk is lowered.

With terrorism among the main concerns in the world, and dangers growing around us, we need to be smart about how we use technology. That’s why, when we look at the IoT, we shouldn’t write these devices off as nothing more than simple tools that make our lives easier, but should recognize them for the potential dangers they also pose.

For more information about IoT/security talent, check out our website at www.internetofthingsrecruiting.com


A Versatile Design Platform with Multi-Language APIs
by Pawan Fangaria on 04-19-2016 at 7:00 am

In one of my whitepapers, “SoCs in New Context – Look beyond PPA”, I mentioned several considerations that have become very important in addition to the power, performance, and area (PPA) of an SoC. That whitepaper was also posted in parts as blogs on SemiWiki (links below). Two important considerations mentioned in the whitepaper, from a design standpoint, were ‘Target Segment’ and ‘IP Integration’.

Considering the possibility of several segments within a single market (for example desktop, laptop, tablet, and even smartphone within the computing market), imagine how many variations of an SoC, or of an IP within it, may be required. A desktop processor can be less power-efficient than a tablet or smartphone processor, but it needs to be highly efficient in performance.

IP selection and integration can be more complex than we might perceive. Nowadays, within a suite of standard IP blocks for particular functionalities, each IP can be further customized to match a particular operating environment, thus introducing differentiation into the IP. Also, IP development environments can vary between vendors.

It’s evident that these modern design requirements for SoCs and IP necessitate design platforms versatile enough to support development and integration in multiple environments. Such platforms must be supported by multiple language APIs for the development of custom applications. It’s impressive to see an innovative design framework emerging to satisfy these critical requirements for the chip design industry; designers are no longer stuck with one proprietary or standard language.


Defacto Technologies’ STAR platform provides a unique multi-language API development environment that lets the user choose the language best suited to a particular task. While scripting languages such as Tcl, Perl, or Ruby are well-suited for tool integration, traditional programming languages, along with complex data structures and algorithms, are used for application development, performance enhancement, and optimization of resources. The STAR platform is fully equipped for the development of new applications as well as for integration with existing tools and flows.

The STAR platform provides a differentiated solution for custom design automation that can be used for the development of stand-alone applications as well as plug-ins. The key features of the STAR platform include the following (a toy sketch of the kind of task these APIs automate follows the list) –

  • Unified (HDL format agnostic) APIs for design exploration and editing

    • Extended support for SystemVerilog and VHDL data types
    • Mixed language support
    • Mixed gate-level and RTL support
  • Handling of design hierarchy with queries and editing
  • Advanced APIs for connectivity handling (bundle, slice, bit blasted views) through hierarchy
  • File list support for simplified HDL input management
  • Add-on libraries (Tcl, Perl, Java, etc.) with sample applications
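To make the idea concrete, here is a self-contained toy in Python illustrating the kind of hierarchy query and edit such APIs automate. Every class and method below is invented for illustration; this is not Defacto’s actual API.

```python
# Toy model of hierarchical design queries, illustrating the kind of task a
# multi-language design API automates. NOT Defacto's API -- a self-contained
# sketch of hierarchy traversal and editing in Python.

class Instance:
    def __init__(self, name, module, children=None):
        self.name, self.module = name, module
        self.children = children or []

    def leaves(self):
        """Yield leaf instances (e.g., gates or macros) below this point."""
        if not self.children:
            yield self
        for child in self.children:
            yield from child.leaves()

top = Instance("top", "soc_top", [
    Instance("u_cpu", "cpu", [Instance("u_alu", "alu")]),
    Instance("u_mem", "sram"),
])

# Query: count the leaves; edit: retarget a module, as a restructuring or
# IP-swap step might do.
print(sum(1 for _ in top.leaves()))        # 2
top.children[1].module = "sram_lowpower"   # swap in a PPA-optimized IP variant
```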


The STAR Application Development Environment in the STAR platform is an extensive software framework with multiple sets of libraries, including Perl, Python, and Java. It is supported by language-independent, persistent data structures.

By using the STAR platform, designers can create, explore, modify, and verify RTL designs within the same design flow. It is an ideal platform for fast design prototyping as well as for developing complete new applications in the shortest possible time.

Major semiconductor design companies across the globe have built custom tools on top of the STAR platform, enabling them to accelerate design completion through seamless design flows from start to finish. A typical flow of tasks integrated on the STAR platform by these design companies includes –

  • Design metrics and SoC topology extraction
  • IP assembly at sub-system or top level
  • Automatic design hierarchy restructuring based on physical design constraints
  • Power-aware design partitioning
  • Feedthrough insertion to accommodate channel-less physical implementation
  • Verification of structural connections including clock, I/O, and other connections

With Defacto’s solution, design houses can quickly build a customized yet versatile development and integration environment for a variety of SoCs catering to different target segments. Different versions of PPA-optimized IP from heterogeneous environments can be integrated with ease on a single development platform, and design restructuring can be done automatically to keep the RTL in sync with the physical implementation, thus reducing verification and debug cycles. The STAR RTL platform is sure to improve designers’ productivity and is proven through working silicon at several semiconductor companies.

Defacto Technologies will be at booth #1129 at the 53rd DAC in Austin, Texas. Visit them to learn more about their innovative technology in design automation with the STAR platform.

Refer to the following blogs to learn more about the new requirements of modern SoCs –
SoCs in New Context Look beyond PPA
SoCs in New Context Look beyond PPA – Part2

More Articles from Pawan


Get ready for hypergrade in automotive
by Don Dingee on 04-18-2016 at 4:00 pm

With use cases expanding, the meaning of “automotive qualified” semiconductors is changing. What we’re hearing about now goes beyond the AEC-Q100 Grade 0 upper end of 150°C, while still meeting other reliability, retention, and security requirements. What does hypergrade mean for complex digital chip designs moving forward? Continue reading “Get ready for hypergrade in automotive”


More on the Practical Uses of Automation
by Bernard Murphy on 04-18-2016 at 12:00 pm

There’s a good article in the March issue of the Communications of the ACM that follows a theme I commented on in my “One, Two Many” post. But the CACM article has a better title: “Automation should be like Iron Man, not Ultron”.

For anyone who hasn’t seen the movies, Iron Man is a man (Tony Stark) who has built a suit to enhance his abilities. Ultron is an automaton, made in a similar form to the Iron Man suit but not requiring a human operator. Of course the CACM article doesn’t use the movie as an argument – that would be silly. What the author (Tom Limoncelli) does is to talk about different classes of automation and why one class may ultimately prove superior to the others.

The leftover principle
This is the first class – automate the easy stuff and leave the hard stuff to the humans. A problem with this is that what’s left over gets progressively harder and falls within the abilities of only a small number of people. Also, we develop the skills to solve harder problems while we are solving easier problems; take away the easy problems and our general problem-solving skills deteriorate. In the limit, this is the Ultron approach. There’s another problem he didn’t mention: the economics of solving easy problems are not very attractive, because we don’t assign much value to solving them. Of course, as the problems get harder they should gain value, but difficulty is in the eye of the beholder. Ultimately this path is self-defeating, because it makes obsolete the things (us) it is supposed to be helping, and even then with a questionable rate of return for solution builders, at least in the early stages.

The compensatory principle
In the second class, you separate what is best done by machines (repetitive, data-driven tasks, tasks requiring 24-7 operation, dangerous tasks, tasks requiring more than human strength or precision, …) from what is best done by humans (improvisation, flexibility, adaptability, judgment, …). Man and machine compensate for each other’s respective weaknesses. This should be a good guide in principle, though it seems that many, perhaps most, tasks do not break down so cleanly into these two categories, as becomes clear when you consider recent progress in vision automation. This area has seen a lot of success, but that progress extends less clearly to complex action consequences. Braking to avoid hitting an object is comparatively easy, but if there isn’t sufficient stopping distance, choosing from the next set of options (collide anyway at lower speed, or turn to avoid and possibly hit something else, …) may not always be so easy to rank-order. This path is useful case by case but doesn’t provide a larger guiding philosophy.

The complementarity principle
In this final class in the article, you look at automation as a way to complement us, to extend what we can do, not as a way to replace us (unless whatever you do is easily automated). In other words, automation should be more like Iron Man, not Ultron. The author brings up an interesting illustration of the value of this approach: a system he and others created to take a (hardware) system out of a cloud, send it for repair, and later recommission the repaired system back into the cloud. In this case, the automation was treated as an extension of the team, working in areas it was told to work, avoiding systems it was told to leave alone, and filing problem tickets where it became confused and did not know how to proceed. Over time the team learned and refined the system and dealt with fewer cases manually, but they never fully disengaged – they continued to learn as they refined the system and perhaps (my view) could only asymptotically move towards full automation.

For tasks on an assembly line, whether in a factory or an office, you don’t need an Iron Man, you need an Ultron. But this automation chips away only at the lower rungs of labor, as has always been the case (there really is nothing truly new under the sun). For anything higher up, we need more Iron Man suits, not more Ultrons.

You can access the CACM article HERE. Sorry, you need a subscription or you have to buy the article.

More articles by Bernard…


A Better Way for Analog Designers to Perform Variation Analysis
by Tom Dillinger on 04-18-2016 at 7:00 am

The impact of process variation at advanced nodes is increasing — no surprise there. In recent years, the principal design emphasis to better reflect this variation has been the adoption of two new methodologies: (1) advanced on-chip variation (AOCV, as well as POCV/LVF) for digital static timing analysis, and (2) advanced statistical analysis and Monte Carlo methods, including high-sigma Monte Carlo analysis.

High-sigma Monte Carlo has been adopted for applications where circuit performance and reliability requirements necessitate results beyond a traditional 3-sigma yield. Although high-sigma Monte Carlo methods are most often used in memory array analysis, a growing proportion of analog designers are also using high-sigma Monte Carlo verification to meet their customers’ performance and yield requirements. Whereas memory circuits require 6-sigma analysis to estimate the failure rate and confirm a suitable yield for the vast number of bitcells integrated on-die, analog designs require 6-sigma analysis to achieve the extreme reliability and yield necessary for products upon which human life depends or where extreme environmental conditions are present — such as medical and automotive applications.

The team at Solido Design Automation has focused on the optimization of Monte Carlo methodologies, with sophisticated algorithms that provide process variation analysis results that are both accurate and efficient. Their parameter sampling approaches cover two regions: (1) the 3-sigma region, where historically analog designers have concentrated the majority of their statistical simulation efforts, and (2) the high-sigma region, which has become more accessible to designers of all types in recent years due to advanced technology like Solido’s High-Sigma Monte Carlo. Solido’s high-sigma approach minimizes the number of Monte Carlo circuit simulations required to provide extreme statistical distribution data, while providing specific testcase parameter detail for design exploration and optimization.
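The motivation for minimizing simulations is standard statistics rather than anything Solido-specific: the expected number of brute-force Monte Carlo samples needed to observe failures scales as the inverse of the failure probability, which collapses rapidly with sigma. A quick sketch:

```python
# Why brute-force Monte Carlo breaks down at high sigma (standard statistics,
# not Solido's algorithm): you need roughly k / p_fail samples to observe
# about k failures, and p_fail shrinks extremely fast with sigma.

from math import erf, sqrt

def fail_prob(sigma: float) -> float:
    """One-sided tail probability beyond `sigma` for a normal distribution."""
    return 0.5 * (1 - erf(sigma / sqrt(2)))

for s in (3, 4.5, 6):
    p = fail_prob(s)
    print(f"{s}-sigma: p_fail ≈ {p:.2e}, "
          f"~{10 / p:,.0f} samples for ~10 observed failures")
```

At 3 sigma, a few thousand SPICE runs suffice; at 6 sigma, brute force would need on the order of ten billion, which is why smarter sampling is essential.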

I recently spoke with members of the Solido team about features in the latest Version 4 release of Variation Designer. For memory designers, one of the key enhancements in this release is the Hierarchical Monte Carlo support for high-sigma array analysis — a brief summary of that discussion is available here.

Then, the Solido team discussed another new feature in Variation Designer — Statistical PVT.

They highlighted, “Digital library IP designers are used to dealing with process variation in terms of a fast/slow global corner, at best-case/worst-case voltage and temperature conditions. The (correlated) local variation around that global definition is used to verify setup/hold timing checks and array yield. Yet, that method won’t suffice for analog IP designers, whose designs can’t reliably be bounded by fixed fast/slow corners. What is a ‘fast’ gain, or a ‘slow’ bandwidth? Analog designers need design-specific, technology-specific, measurement-specific corners that correctly capture the bounds of their circuit specifications.”

Solido’s Variation Designer now offers Statistical PVT, in which fixed digital corners are supplanted by accurate analog statistical corners. Unlike the traditional approach of simulating combinations of fixed process, voltage, and temperature conditions and then also running Monte Carlo simulation, Statistical PVT combines Monte Carlo and PVT simulation to extract design-specific statistical corners and verify them across voltage, temperature, and other environmental conditions. The result is a more accurate and efficient analog variation analysis. (Actually, the “temperature inversion” characteristic of FET devices at advanced nodes increasingly impacts digital circuit performance as well — Statistical PVT may not be solely of interest to analog designers.)
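To illustrate the general concept of a design-specific statistical corner, here is a toy sketch with synthetic data. This is brute force on made-up numbers, not Solido’s algorithm, which extracts such corners with far fewer simulations.

```python
# Concept sketch: instead of assuming FF/SS bounds a measurement, run Monte
# Carlo on the measurement itself and take the sample at the desired sigma
# level as a design-specific corner. Synthetic data; not Solido's method.

import random
from math import erf, sqrt

random.seed(0)
# Stand-in for SPICE output: gain measurements from brute-force Monte Carlo.
gain_samples = [random.gauss(40.0, 1.5) for _ in range(100_000)]

def statistical_corner(samples, sigma=3.0):
    """Return the sample nearest the low-side quantile implied by `sigma`."""
    tail = 0.5 * (1 - erf(sigma / sqrt(2)))  # one-sided tail, ~0.135% at 3 sigma
    ordered = sorted(samples)
    index = max(0, round(tail * len(ordered)) - 1)
    return ordered[index]

print(f"3-sigma low-gain corner ≈ {statistical_corner(gain_samples):.2f} dB")
# Expect ≈ 40 - 3*1.5 = 35.5 dB for this synthetic distribution.
```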

The figure below illustrates a setup screenshot from the application, which includes an extended sampling space for Spice simulations.

Analog IP typically incorporates a diverse set of additional on-chip components — e.g., R’s, C’s, diodes. In addition to applying variation to digital FF/SS MOS corners, Statistical PVT determines design-specific, technology-specific, and measurement-specific variation corners for the desired yield using variation models for each of these components. The following figure highlights how important it is to consider (passive) component variation as well as device parameters: considering only MOS devices fails to capture the variation that can be present when other elements are included.

The same sampling expertise used in the Solido HSMC methods is applied to an alternate method in Statistical PVT, to derive results representative of 3-sigma performance with a reduced set of Monte Carlo simulations. For analog IP, the circuit simulation measurement specifications are vastly different from memory or cell library statistical characterization — e.g., gain, bandwidth, phase margin, duty cycle. The Solido team reminded me that their sampling optimization methodology is agnostic — “if you can measure it, we can analyze it” is their credo.

A key facet to statistical analysis is the evaluation and debug of post-simulation results.

The figure above illustrates the post-simulation user interface for a number of analog specification measurements. Note that the Solido results are not an extrapolation of a model or a brute-force exploration of the entire PVT space, but rather a specific set of simulation testcase parameters for the designer to examine in detail. Design optimization requires this testcase parameter detail, to help identify the circuit elements (and parasitics) to address. Although the description above uses an analog block for illustration, Statistical PVT is certainly applicable to RF circuits, too.

Key to analog IP design productivity is the Design History results view in Variation Designer, to provide the required perspective on the iterative optimization progress. The figure below illustrates the Design History user interface for a set of Statistical PVT runs and their corresponding revisions.

With an increasing breadth of application markets for advanced semiconductor processes — e.g., automotive, mil/aero, medical (and health-related IoT devices) — SoC reliability requirements are demanding. Analog/RF IP validation requires a simulation solution comparable to the High-Sigma Monte Carlo that array and cell library designs have successfully applied. Solido’s Statistical PVT application within Variation Designer fits that need.

For more information on Statistical PVT in Variation Designer, please follow this link.

-chipguy