
GlobalFoundries 7nm and EUV Update!

by Daniel Nenni on 06-13-2017 at 7:00 am

Scott Jones and I had the opportunity to talk again with Gary Patton, GlobalFoundries CTO and SVP of R&D, for a quick update on 7nm and EUV. Gary has been at GF for two years now, along with more than 500 other technologists from the IBM semiconductor acquisition. 7nm is the first IBM-based process from GF (14nm was licensed from Samsung), and it will also be the first time AMD has a process advantage over Intel.

“We are very pleased with the leading-edge technology that GF is bringing with its advanced 7nm process technology. Our collaborative work with GF is focused on creating high-performance products that will drive more immersive and instinctive computing experiences.” Mark Papermaster, CTO and senior vice president of technology and engineering, AMD.

Scott Jones will be updating his 14nm, 16nm, 10nm and 7nm – What we know now blog with the latest specs from GF 7nm in the next week or so. One thing you will notice is that the GF 7nm and TSMC 7nm processes are much more similar than previously thought. GF, however, is leading with a high-performance (LP equals Lead Performance in IBM speak) version of 7nm for AMD, while TSMC is first with a low-power version of 7nm for Apple, Qualcomm, MediaTek, and the other SoC vendors. The similarity between the TSMC and GF 7nm processes opens up the opportunity for GF to do some serious 2nd sourcing, which is a critical component of the pure-play foundry business model, absolutely.

GF 7LP will be in volume production in the second half of 2018 and is expected to provide a greater than 40% performance improvement and 2x the area scaling over Samsung 14nm. According to Gary, EUV tools will be installed in the second half of 2017, with the hope of inserting EUV into 7nm in 2019. My guess would be 2020 at the earliest, but the point here is that EUV is not holding up 7nm for TSMC or GF, and we should all be thankful for that.


Back to the AMD thing. Given that the new AMD Ryzen architecture was launched on 14nm in Q1 2017, it should be reasonable to predict that AMD could refresh Ryzen on 7nm in the second half of 2018, putting AMD 7nm just six months behind Intel 10nm. I certainly hope this is the case because I really want to see how Intel PR spins that one!

GLOBALFOUNDRIES on Track to Deliver Leading-Performance 7nm FinFET Technology


An Approach to TFT and FPD Design

by Daniel Payne on 06-12-2017 at 4:00 pm

Webinars are a powerful way for engineers to get updated on EDA and IC design approaches, so I’m sharing what I viewed last month at a Silvaco webinar on TFT and FPD design. You probably are using a TFT LCD display in your TV, monitor, mobile phone, video game system, GPS device or projector. The custom IC design flow offered by Silvaco is shown below for both front-end and back-end tools:
Continue reading “An Approach to TFT and FPD Design”


Design Rule Development Platform @ #54DAC!

by Daniel Nenni on 06-12-2017 at 12:00 pm

While some might have expected the exponential growth in the number and complexity of design rules to cool down a little, it looks as if these are only heating up more. The multiplicity of technology nodes, lithography options, fundamental technology choices (Bulk, FD-SOI, FinFET), different process flavors and specific applications has made design enablement, and design rules in particular, an even more painful issue than it used to be.

Sage-DA addresses this problem in a systematic way with iDRM, a complete end-to-end integrated Design Rule Management System. iDRM encompasses all steps of design rule development, from design rule capture by the process integration team to the delivery of a compiled and verified DRC deck that accurately matches and represents the design rule intent.

The system significantly shortens the turnaround time of every PDK release, reduces the engineering effort and most importantly – maintains consistency and eliminates errors.

The benefits of automation are apparent and the value in this case is enormous, since the cost of mistakes and delayed delivery is so high. However at the same time, automation can sometimes seem disruptive and intimidating and therefore not always quick to adopt. To facilitate a smooth and easy adoption of this new automation technology, Sage-DA has developed new features in the new iDRM system, which it will show this year at DAC.

The system puts an emphasis on interface and integration with existing tools and environments, so that iDRM can be easily integrated into current technology development and enablement flows. It includes features such as:

Reading from and synchronizing with design rule spreadsheets. This enables automatic import of design rules into the iDRM system and instant synchronization of any rule updates or new rule additions.
Using pre-defined rule templates. Most rules can be entered using a pre-defined rule type (or template); this makes adding rules and rule editing quicker, easier and more consistent.
Extraction of design rule values from existing layouts, so that they don’t need to be typed in manually.
Compilation of sign-off DRC code. Users can automatically generate DRC code for their signoff DRC tool by using rule templates.
QA and test of DRC code by automatic generation of pass/fail patterns with coverage measurement.
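The pass/fail pattern idea in the last item can be sketched in a few lines. This is a hypothetical illustration only; the rule values, the shape representation, and the toy checker are all invented for the example and are not iDRM's actual formats:

```python
# Hypothetical sketch of DRVerify-style QA for a minimum-width rule:
# generate one boundary pattern that should pass and one that should
# fail, then run both through a toy "DRC deck" to confirm it flags
# exactly the failing case.

MIN_WIDTH = 0.050  # invented 50nm minimum metal width
GRID = 0.001       # invented 1nm manufacturing grid

def check_min_width(shapes, min_width):
    """Toy DRC deck: flag every rectangle narrower than min_width."""
    return [s for s in shapes if min(s["w"], s["h"]) < min_width]

def generate_qa_patterns(min_width, grid):
    """Boundary patterns: pass exactly at the limit, fail one grid below."""
    passing = {"name": "pass_at_limit", "w": min_width, "h": 1.0}
    failing = {"name": "fail_below_limit", "w": min_width - grid, "h": 1.0}
    return passing, failing

passing, failing = generate_qa_patterns(MIN_WIDTH, GRID)
violations = check_min_width([passing, failing], MIN_WIDTH)
# A correct deck flags the failing pattern, and only that pattern.
assert [v["name"] for v in violations] == ["fail_below_limit"]
```

Coverage measurement in a real system would extend this by generating such boundary patterns for every rule and every rule parameter, not just one.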


Successful early customer deployments

The significant investment Sage has put into the new upgraded system is already paying off for Sage and its customers. Sage mentioned two recent successful customer use-cases:

1. An advanced technology semiconductor company uses iDRM to enforce consistency and to automatically generate 3rd party DRC code for its most advanced node technology (below 14nm). The results are faster turnaround times for DRC runsets and ensured consistency.

2. Another semiconductor company develops multiple and diverse IC technologies for different markets and applications. It uses iDRM to qualify and validate its DRC signoff runsets, using iDRM’s DRVerify tool. DRVerify automatically generates high-coverage QA test layouts to test the DRC runset. Using the system, the users were able to detect errors and gaps in their existing signoff DRC runsets, which until then had been traditionally coded and qualified.

Sage-DA will demo the new system and functionality at DAC in Austin next week. You can find them at booth #513 or see http://www.sage-da.com/news/1706-sage-dac2017.html


Visual Quality

by Bernard Murphy on 06-12-2017 at 7:00 am

A few years ago, I started looking at data visualization methods as a way to make sense of large quantities of complex data. This is a technique that has become very popular in big data analytics where it is effectively impossible to see patterns in data in any other way. There are vast numbers of different types of diagram – treemap, network and Sankey are just a few examples – each designed to highlight certain aspects of the data – concentration, connectivity, relative size and other characteristics. Given the right type of diagram, key attributes of mountains of data can become immediately obvious.


I didn’t get beyond an experimental stage in my work, so I was very happy to see that Rene Donkers, CEO at Fractal, had finished the job in delivering a production capability for data visualization around library analytics, which he calls error fingerprint visualization.

Library (Liberty) files can get pretty large, covering OCV timing models and power models among many other characteristics. Which raises an obvious question – how do you check that this stuff is correct? I’m not thinking here about the basics – whether each model has the right name and the right pins, or basic consistency checks between these and the tables. Questions around table data can become more challenging. Monotonicity and the correct sign of the slope are already covered in the Fractal Crossfire product, but whether values fall within reasonable bounds is no longer a binary question – there is no bright line separating reasonable from unreasonable.


Crossfire now provides help in analyzing these cases through visualization. For example, for rule 7201: “Range check for cell_rise/cell_fall delay values”, you start by specifying what you think is an acceptable range for these delays, say 0-10ns. Delay values outside this range will be flagged in the normal type of error listing, but that could amount to a lot of error messages. The trick in this or any other effective visualization is to present aspects of that information in a way that makes it easy to reach conclusions about root causes. In the example above, they present all violations in a network diagram, starting from the cell-name, with connections to associated tables and from there variously through pin names, min and max values and applicable range limits.
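As a rough illustration of this kind of range check, here is a hedged Python sketch. It assumes the Liberty tables have already been parsed into plain dictionaries, and every cell name, pin name, value and limit below is invented for the example (this is not Crossfire's rule 7201 implementation):

```python
# Sketch of a rule-7201-style range check on cell delay values, with the
# waiver classification described in the article: violations inside the
# waiver bound are "blue" (waivable), the rest are "red" (hard errors).
LIMIT_NS = (0.0, 10.0)    # user-chosen "reasonable" delay range
WAIVER_NS = (0.0, 12.0)   # temporarily relaxed bound for experiments

library = {               # toy data; a real Liberty file is far larger
    "AND2X1": {"A": [0.02, 0.15], "OAMOD": [11.0, 25.0]},
    "INVX1":  {"A": [0.01, 0.08]},
}

def classify(value, limit, waiver):
    if limit[0] <= value <= limit[1]:
        return "ok"
    # outside the limit but inside the waiver -> waivable, else hard error
    return "blue" if waiver[0] <= value <= waiver[1] else "red"

violations = [
    (cell, pin, v, classify(v, LIMIT_NS, WAIVER_NS))
    for cell, pins in library.items()
    for pin, values in pins.items()
    for v in values
    if classify(v, LIMIT_NS, WAIVER_NS) != "ok"
]
# Here: two violations, both on OAMOD; one waivable, one a hard error.
```

The point of the visualization is what happens after this step: rather than dumping the `violations` list as messages, it is drawn as a network so concentrations stand out.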

You can temporarily extend an allowed range through waivers. In the example above, blue lines show violations which fall within waiver limits, whereas red lines show cases falling outside those limits. Waivers provide a way to experiment with more relaxed bounds before committing to those changes.

What stands out from the diagram above (OK, you need to look closely; try magnifying or look at the white paper link below) is that a lot of errors are associated with the OAMOD pin. You immediately see that you need to drill down into problems with that pin. Maybe this is a design problem, maybe a characterization problem, either way it’s obvious that addressing this area can resolve a lot of the flagged errors.

This goes to the heart of the value of visualization methods. When looking at failures from any kind of pass/fail analysis (or indeed any binary division of data), it is unlikely that the data is randomly distributed, especially when effort has been made to reduce failures. It is probable that many failures can be attributed to a relatively small number of root-causes.
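That concentration around a few root causes can be surfaced programmatically as well as visually, by simple grouping. A minimal sketch, using an invented flat error list:

```python
# Cluster flagged errors by pin to expose the concentration the network
# diagram makes visible: most violations hang off one pin.
from collections import Counter

# Hypothetical error list as (cell, pin, value) tuples.
errors = [
    ("AND2X1", "OAMOD", 11.0), ("AND2X1", "OAMOD", 25.0),
    ("OR2X1",  "OAMOD", 13.5), ("INVX1",  "A",     10.4),
]
by_pin = Counter(pin for _, pin, _ in errors)
worst_pin, count = by_pin.most_common(1)[0]
# worst_pin == "OAMOD", count == 3: one root cause explains most errors
```

The diagram goes further than this tally by also showing how the offending values relate to the range limits, but the underlying "few causes, many failures" structure is the same.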


Similarly, the visual can help you decide if maybe the limits you set on values should be adjusted. If values beyond an upper limit increase at a modest pace, perhaps the upper bound should be increased. If they show signs of rising rapidly, perhaps that signals a design or characterization problem, or maybe an unavoidable characteristic of this cell in this usage, indicating that designers need to be warned not to stray into this area.

I’m a believer in visualization aids to analysis of complex data. We can only do so much with pass-fail metrics presented in lists and spreadsheets. Visualization provides a way to tap skills we already have that can be much more powerful than our limited ability to see patterns through text and number inspection. Until we have deep-learning to handle these problems, perhaps we should put our visual cortex to work. You can learn more about Crossfire error fingerprints HERE.


Cadence Design Systems @ #54DAC!

by Daniel Nenni on 06-11-2017 at 8:00 am

This year Cadence Design Systems is showcasing system design enablement in their booth, capitalizing on the industry shift from standalone chip design to system-level chip design. Apple started it by making the chips inside the iProducts part of the system, and now other systems companies are looking to take more control over their silicon. We can see it in the SemiWiki readership and the widening distribution of domains over the last six years. Companies we would never have expected to be reading about semiconductor design enablement are stopping by much more frequently, especially with IoT, Automotive, and Artificial Intelligence. It really is all about the system.

Historically Cadence does a great DAC and this year will be no different. Let’s start with the Cadence luncheon series because that involves FREE FOOD, the opportunity for some great table talk, more FREE FOOD, and some trending topics:

Towards Smarter Verification
Monday, June 19 – 12:00-1:30pm, Ballroom B/C

High-Performance Digital Design at 7nm
Tuesday, June 20 – 12:00-1:30pm, Ballroom B/C

Overcoming Mixed-Signal Design and Verification Challenges in Automotive and IoT Systems
Wednesday, June 21 – 12:00-1:30pm, Ballroom B/C

Speaking of 7nm, TSMC and GlobalFoundries are both shipping production-quality PDKs, so you will see a lot of 7nm design talk at DAC this year. GF is a quarter or two behind TSMC on 7nm, but it is close enough for some of the top semiconductor companies to hedge their bets and design to both, absolutely. I don’t even recall the last time two pure-play foundries had competitive leading-edge technology out at the same time, if ever.

Cadence is doing the expert bar again this year which is something I enjoy. It is kind of like the Apple Genius Bar except Cadence actually has geniuses. There are 60 different slots on a variety of topics which you can see HERE.

Cadence is hosting more than 20 technical sessions on the latest developments in digital, custom/analog, and verification that you can choose from HERE. The Cadence Theater is also packed with 30+ presentations from partners and customers HERE. The partner and customer talks are the most interesting to me. Partners and customers really do say a lot about a company, right? It is all about the ecosystem…

Last but not least for content there are the Cadence DAC sessions starting with the One-on-One with Lip-Bu Tan in the DAC Pavilion. This is a session I will be at because not only do I remember Cadence before Lip-Bu, I remember Cadence before Cadence was actually Cadence (the ECAD days…). After Lip-Bu’s arrival in 2009, Cadence went through an EDA transformation like no other I have witnessed in my 30+ years and I am interested to hear about what is next. It truly has been an honor covering Cadence the past 6 years on SemiWiki, absolutely.

And let’s not forget about the Denali Party. Space is limited so register today:

Mix, Mingle, and Enjoy!
The Denali Party by Cadence
Tuesday, June 20, 8:00pm
Palm Door on Sixth
508 E 6th Street, Austin, TX

Ready for a night to remember? Catch up with old friends and meet some new ones at the popular Denali® Party by Cadence. For your musical entertainment, Disco Inferno will be back to rock the house.

NOTE: You must pick up your wristband from the Cadence team in booth #107 before noon on Tuesday, June 20 or your reservation will be given to another guest.


Mentor, a Siemens Business @ #54DAC!

by Daniel Nenni on 06-11-2017 at 7:00 am

This year the Mentor booth will be quite interesting now that they are part of Siemens. I expect zero changes to their DAC presence, but we shall see. It will certainly be good to see Wally again. More importantly, I will be afforded the opportunity to talk personally with both Wally Rhines, CEO of Mentor, AND Chuck Grindstaff, executive chairman of Siemens PLM Software, about the recent acquisition and what happens moving forward. Inquiring minds want to know…


If you look at the Mentor DAC landing page you will see IoT front and center with a panel on Overcoming the Challenges of Creating Custom SoCs for IoT:

The hardware industry finds itself in a new wave of innovation driven by custom SoC development for mixed-signal IoT devices. These devices span an incredibly diverse set of markets, from medical to white goods, and everything in between. With the demand for IoT connectivity and secure data management, what are the best options for today’s systems designers? This panel will explore various options and viable solutions to help designers innovate and to provide unprecedented services at the personal, industrial, and societal levels.

Looking at the SemiWiki analytics I still see IoT as driving not only the most total traffic but also the most diverse collection of domains and crossing the most SemiWiki categories. In fact, as of today, we have published close to 400 IoT related articles that have been viewed more than 1,000,000 times.

The next highlighted panel is The Explosion of Emulation Use Models in Diverse Segments:

Hardware emulation is the star in today’s verification flow. Complexity and the rising cost of doing verification forces project groups to become more innovative about SoC debug and the validation of HW/SW system integration. Expanding emulation use models makes emulation an easy choice. It’s flexible, versatile and scalable. Data center access makes it a cost effective approach for global project groups. Stop by the Mentor, a Siemens Business, DAC booth (#947) to hear a panel of experts describe how to use emulation to build an effective verification and validation platform.

Emulation is certainly a trending topic in semiconductor and system design, in fact, Bernard Murphy and I are finishing up a book on emulation to be published later this year so stay tuned to SemiWiki.com.

The other highlighted panel is on the Impact of ISO 26262 on the fabless Ecosystem:

As fabless companies design products for the rapidly-expanding automotive semiconductor market, whether they are power window controllers or drive-by-wire systems, they have to meet the exacting requirements of the ISO 26262 standard for functional safety. Those requirements affect not only the end product itself, but the software that it runs and the software used to create, validate, and test it. As a result, ISO 26262 has far-reaching impact on the fabless ecosystem, including EDA tools, software, and IP. Come listen to leaders in the industry who have pioneered the way forward for ISO 26262 in the fabless ecosystem as they discuss the challenges that they have faced, what they have done, and what they think still needs to be done in the industry.

Personally, I’m a big fan of interactive panels and I have asked the SemiWiki bloggers to attend as many as possible to share this content with our readers who could not attend.

Mentor will also give us access to some of the best researchers and partners in their booth for you to meet and exchange ideas with (Experts at the Mentor Graphics Booth). There are literally dozens to choose from but you need to sign up for them. The foundries are well represented here as well as some of the top semiconductor companies.

Mentor is also hosting several networking sessions and lunches, which I find huge value in. Not only do they include FREE FOOD and FREE DRINK (I blog for food and drink), you can actually learn stuff and meet other semiconductor professionals to further your professional goals.

And of course there are Partner Activities because where would we be without partners to collaborate with? We would still be using rotary phones no doubt. There are 18 partners listed including the Foundries, Universities, Customers, EDA and IP companies.

I hope to see you there, absolutely!


Photonics at DAC – Integrated Electronic/Photonic Design Flow to be Presented at Cadence Theater

by Mitch Heins on 06-09-2017 at 12:00 pm


I recently wrote an article on SemiWiki talking about the integrated Electronic/Photonic Design Automation (EPDA) flow that is being developed by Cadence Design Systems, Lumerical Solutions and PhoeniX Software and how that flow is now expanding into the system level through SiP (system in package) techniques.

Until recently, demos were all being done using a theoretical PDK, but this changed last week (May 24th, 2017) when Cadence, Lumerical and PhoeniX presented a demonstration of the EPDA flow using a real foundry PDK. The PDK was from AIM Photonics, and the demonstration was given at the AIM Proposers Meeting in Rochester, NY. This is a key milestone for Cadence, Lumerical and PhoeniX Software, as this is the first public demo of the EPDA tool flow with a real foundry PDK.

There are other production PDKs also in the works for the flow, however it’s too early to drop names just yet. Suffice it to say that momentum continues to grow. Cadence’s partnership with Lumerical for circuit level photonic simulation and with PhoeniX Software for photonic physical design gives it a significant jump start towards bringing PDK support online for the full EPDA flow as both Lumerical and PhoeniX Software have extensive PDK support from existing photonics foundries. With only a modest amount of effort, these existing PDKs are now being synced up and used to populate PDKs for the entire EPDA flow.


If you haven’t seen the EPDA flow yet, it will be presented at this year’s Design Automation Conference in a presentation entitled, “Capture the Light. An Integrated Photonics Design Solution from Cadence, Lumerical and PhoeniX Software”. The presentation will be given in the DAC Cadence Theater at 10:00am on Tuesday, June 20th.

Momentum for the new EPDA flow continues to grow, as the three companies will also be engaging with more engineers at a five-day class entitled ‘Fundamentals of Integrated Photonics Principles, Practice and Applications’. This class is being put on by the AIM Photonics Academy and will be taking place the last week of July at the MIT campus in Cambridge, MA.

There are also multiple customer engagements underway for the EPDA flow. Again, it’s too early to release those customers’ names. It is, however, these same customers that are now pushing the trio to work on the advanced system-level flow that was alluded to in my last SemiWiki article (see link below).

As part of this effort, Cadence, Lumerical and PhoeniX Software are also planning to host a second photonics summit in the early September time frame. Like the 2016 photonic summit, this will be a two-day event hosted at the Cadence campus in San Jose. The first day will focus on technical presentations discussing challenges and progress towards implementing integrated photonics systems. The second day, like last year, will again be a hands-on session that will highlight progress made towards extending the existing EPDA flow for integrating the full system (electronics, lasers, and photonics) into a common package. Watch for more details on how to register for this summit in the upcoming weeks.

It’s still early days for integrated photonics but capabilities are rapidly being put into place. If it’s time for you to come up to speed on integrated photonics I would encourage you to attend one or more of these upcoming opportunities to learn.


Samsung Details Foundry Roadmap

by Scotten Jones on 06-09-2017 at 10:00 am

Samsung recently held a meeting where they laid out a detailed roadmap for their foundry business. On Tuesday June 1st, Daniel Nenni and I had an interview with Kelvin Low, senior director of foundry marketing and business development, to discuss the details of Samsung’s plans.
Continue reading “Samsung Details Foundry Roadmap”


Simplifying Requirements Tracing

by Bernard Murphy on 06-09-2017 at 7:00 am

Requirements traceability is a necessary part of any top-down system specification and design when safety or criticality expectations depend on tightly-defined requirements for subsystems. Traceability in this context means being able to trace from initial documented requirements down through specification and datasheet documents to the design implementation and to the testplan. Standards such as ISO 26262, DO-178C and IEC 61508 demand that critical requirements be verified by demonstrating traceability through these documents and design materials.


This is not so easy. The path to be traced contains in part documents requiring human interpretation, validation and cross-checking, and in part design data which lends itself to automated interpretation, validation and cross-checking. The human-dependent part of this tracing is a significant contributor to the cost overhead and incompleteness of requirements tracing efforts. Which raises the obvious question – isn’t there a better way? You can’t get humans out of the loop completely and, for that reason, you can’t get documentation out of the loop completely. But can dependence on human review and verification be reduced in some meaningful way?

Getting there is obviously more difficult if there aren’t machine-readable links between specification, design and documentation, a state of affairs that is still common today in many design shops. But it is possible to have quite good links between these components in SoC/subsystem design given an integrated methodology. Magillem calls this ISDD – integrated specification, design and documentation. Unfortunately you can’t get there by just adding a tool to whatever unstructured spec/design/doc flow you already have. You must switch to a more structured SoC/subsystem design process incorporating links to specification and documentation.

Which in this world means IP-XACT. I don’t believe anyone will contradict me when I say that Magillem has gone further than any other EDA vendor in delivering commercial products around IP-XACT. I competed with them for several years so I know a little about this area and what they can do. Moving to IP-XACT may require a big switch in methodology which can seem daunting, but I understand tools have evolved significantly to make this much simpler. So let’s assume you decide to make that transition – what can Magillem do for you in requirements traceability?

The mechanism they provide is called Magillem Link Tracer (MLT). This connects interdependencies in specifications, documents and the design through dynamic typed links (I assume connecting to vendor extensions in the IP-XACT schema). Their objective is not simply to push data out to documentation from the IP-XACT database but instead to provide a check and synchronization mechanism between all these views, such that a change in one can be followed through to dependent views, in each of which you can choose to accept or reject a change.


The tool displays links between dependent documents and design components. Note that here links aren’t just back to the design database; there can be links between documents also, allowing for smart reuse of content between different document views.


When you change a requirement, impacted resources are flagged and you can drill down to accept/reject changes. For specifications and documentation this can update appropriate fields, under your control since you want to see where changes are implied before you accept them. Of course a requirement change won’t make design changes automatically – there you will need to go into Magillem design tools to make the appropriate changes to match the new requirements. Changes don’t have to start from the requirements; you might choose to make a change in the IP-XACT design representation, in a register or address map for example, then use the same dependency computation to see where and how that will ripple up.
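The impact analysis described here amounts to walking a dependency graph from the changed item to everything linked to it. A minimal sketch, assuming the typed links have been exported as a plain edge list; all the names and the link schema below are hypothetical, not Magillem's actual representation:

```python
# Requirement-change impact analysis over a toy spec/design/doc link graph.
from collections import defaultdict, deque

links = [  # (item, what it depends on)
    ("datasheet.addr_map", "REQ-042"),
    ("rtl.reg_block",      "datasheet.addr_map"),
    ("testplan.reg_tests", "datasheet.addr_map"),
    ("user_guide.ch3",     "rtl.reg_block"),
]
dependents = defaultdict(list)
for item, dep in links:
    dependents[dep].append(item)

def impacted_by(changed_item):
    """Breadth-first walk: everything reachable from the changed item."""
    seen, queue = set(), deque([changed_item])
    while queue:
        node = queue.popleft()
        for child in dependents[node]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return sorted(seen)

# Changing REQ-042 flags the address map, the RTL block that implements
# it, the testplan entries, and the user-guide chapter that documents it.
impacted = impacted_by("REQ-042")
```

The accept/reject review step the article describes would then be driven from this impacted set, one flagged item at a time.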

This kind of analysis can be a valuable contribution to supporting automated requirements traceability. Of course the scope will be bounded to those parameters understood within IP-XACT and the Magillem tools. You will still need to manage requirements tracing for AC and DC characteristics, among others, through other means. And unlinked text in documents must be checked manually. But the items you will link (bitfield maps, for example) are often where disconnects between requirements, spec, design and test are most likely to happen. Automating the management and traceability of this data should be a big step forward in traceability support.

As a sidebar, some readers may note that there are other tools to connect individual parts of the design process to spreadsheet specifications and documentation. Indeed there are. But a meaningful contribution to requirements traceability needs more than a bundle of disconnected mechanisms, each supporting a limited set of individual requirements. The level of contribution needed for safety standards certification is better served by coverage of a significant subset of requirements through an auditable / verifiable standard representation. That’s what Magillem aims to offer through their MLT solution linked to their extensive range of IP-XACT-based design tools. You can read more HERE.


System Implementation Connectivity Verification and Analysis, Including Advanced Package Designs

by Tom Dillinger on 06-08-2017 at 4:00 pm

Regular SemiWiki readers are aware of the rapid emergence of various (multi-die) advanced package technologies, such as: FOWLP (e.g., Amkor’s SWIFT, TSMC’s InFO); 2D die placement on a rigid substrate (e.g., TSMC’s CoWoS); and 2.5D “stacked die” with vertical vias (e.g., any of the High Bandwidth Memory, or HBM, implementations).

Typically, one or more SoCs are under development concurrently with the advanced package design. A vexing issue often arises where the design platforms differ for chip and package, with different representations of system connectivity and circuit library models. As a result, there is no direct method for the SoC designer to build a correct connectivity model and simulate circuit paths between die (which potentially use different process PDKs). Circuit validation throughout the multi-die package involves (error-prone) manual netlist creation, pulling data from the package environment into the circuit designer’s cockpit.

Cadence has recently addressed this flow deficiency, providing the necessary bridges between their leading Virtuoso and Allegro tool platforms.

I had the opportunity to chat with John Park, Product Management Director for IC Packaging and Cross-Platform Solutions at Cadence, about their new product, Virtuoso System Design Platform. “We have enhanced and extended Virtuoso. A product design team is able to incorporate a full-system hierarchical schematic model resident in Virtuoso SDP. We have developed a bi-directional bridge between the IC, package, and PCB design environments. There are two corresponding flows enabled by the Virtuoso SDP model – a new implementation flow and an analysis flow,” John described.

The figure below provides an overall Cadence product architecture view, highlighting how Virtuoso System Design Platform provides the SoC designer with access to system connectivity and board/package parasitic data.

Virtuoso SDP Implementation Flow
As illustrated in the figure below, the implementation flow is invoked to automatically generate the die model data from Virtuoso SDP for use in Allegro SiP for package design – e.g., the schematic symbol, die physical footprint. In this example, three active SoC designs are underway – all the package (passive) components are represented in SDP, as well. Virtuoso SDP eliminates the issues associated with system connectivity data residing in different platforms and formats.

The SDP implementation flow generates the model exchange data between the Virtuoso and Allegro platforms, potentially on different operating systems – the “generate AllegroDB” operation imports the library and connectivity model into Allegro. The Virtuoso SDP cockpit is also used to maintain the techfile data used by Allegro – e.g., layer stackups, specific net implementation constraints. The SDP implementation flow provides the connectivity to verify the package/board LVS in Allegro (and for the connectivity use when building simulation models for analysis, discussed next).

Signal Integrity and Power Integrity Analysis
The analysis flow developed with the Virtuoso SDP offering provides several key validation features. The Sigrity family of signal integrity and power integrity tools is integrated into Virtuoso SDP.

Circuit Simulation
Additionally, SoC designers can use Sigrity to extract complex multi-port Touchstone (S-parameter) parasitic models and connect them into a detailed die-package-die simulation in Cadence ADE (Spectre), invoked from the SDP cockpit GUI. The figure below illustrates the extraction of a model by Sigrity (a selected set of nets is used to generate a multi-port model), followed by the creation of a Virtuoso SDP instance that can then be used in die-package-die simulation.
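For readers unfamiliar with the Touchstone format mentioned here, the sketch below reads a minimal one-port (.s1p) file in real/imaginary format into (frequency, complex S11) pairs. This is a simplified illustration only; a production flow would rely on Sigrity or a full parser, the sample data is invented, and multi-port files carry more columns per line:

```python
# Minimal Touchstone v1 (.s1p, real/imaginary) reader sketch.
def parse_s1p(text):
    points = []
    for line in text.splitlines():
        line = line.split("!")[0].strip()  # strip '!' comments
        if not line or line.startswith("#"):
            continue                       # skip the option line (# GHZ S RI R 50)
        freq, re_, im = map(float, line.split())
        points.append((freq, complex(re_, im)))
    return points

sample = """! invented one-port example
# GHZ S RI R 50
1.0  0.95 -0.10
2.0  0.90 -0.20
"""
pts = parse_s1p(sample)
# pts holds (frequency_in_GHz, S11) pairs, e.g. (1.0, 0.95-0.1j)
```

In the actual flow, such multi-port models are not parsed by hand but instantiated directly as simulation elements in ADE.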

Cadence has extended their Virtuoso platform to provide a unified cockpit for design teams to capture a system connectivity model. Additionally, implementation flow features are provided to generate the library, netlist, and design constraint data for advanced package and PCB design, to enable LVS connectivity verification in Allegro. The Sigrity model extraction features are integrated as well, for designers to run circuit analysis simulations. These features eliminate the tedious and error-prone tasks of constructing end-to-end circuit path models from disparate environments.

For additional information on the Cadence Virtuoso System Design Platform, please follow this link.

-chipguy