
The 4C’s of PCB Design
by Tom Dillinger on 04-20-2017 at 12:00 pm

The diamond jewelry industry encourages customers to focus on the 4C’s — cut, clarity, color, and carats. At the recent PCB Forum conducted by Mentor (a Siemens business) in Santa Clara, I learned that current system design flows also require an emphasis on the 4C’s — collaboration, concurrency, consistency, and a cloud environment. These capabilities need to span schematic design, constraint management, and physical PCB design and layout.

The complexity of current products requires attention to a plethora of details, to address the many optimization criteria:

  • cost (area, layer stackups)
  • routability
  • minimization of high-frequency signal reflections and losses
  • (differential pair and bus) signal topology matching
  • manufacturability
  • system thermal/EMI/mechanical packaging constraints (look for another article shortly from the PCB Forum on MCAD-ECAD collaboration)

The tasks of schematic design and physical implementation to achieve the goals above are tightly interdependent.

Collaboration among Applications

The conventional waterfall method for PCB development proceeds in a sequential manner — i.e., schematics and constraints are tossed “over the wall” to the physical design engineers to complete the implementation. This process simply does not address the demands of current PCB projects — a platform that enables an interactive, iterative, incremental flow is needed to support schematic and physical implementation designers collaborating in real-time.

Proposed edits to components or constraints need to be communicated among the design team for review and approve/reject decisions. A robust notification system is required to indicate when an update in one application impacts other dependencies.

Mentor’s Xpedition platform utilizes a “traffic light” indicator in each application to highlight that a change has been made in another interdependent application — green means in sync; amber indicates that an update in another application has been recorded. The Project Integrator pulldown provides detailed information on the specific change for review. Designers thus collaborate across applications in real-time.


The figure above illustrates three users working on the same project database — two in the constraint manager, and one in physical layout. An update in the constraint manager is shared concurrently with the other user, while the physical layout session is notified of the update.
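
As a rough illustration of the pattern (a minimal Python sketch with hypothetical names, not Mentor's implementation), the traffic-light mechanism behaves like a publish/subscribe hub: a change in one application flips every other application's indicator to amber until its user reviews the update.

```python
from enum import Enum

class Light(Enum):
    GREEN = "in sync"
    AMBER = "update pending review"

class ProjectHub:
    """Toy publish/subscribe hub: one change notifies all other apps."""
    def __init__(self):
        self.apps = {}

    def register(self, name):
        self.apps[name] = Light.GREEN

    def publish(self, source, change):
        # Every *other* application goes amber until the change is reviewed.
        for name in self.apps:
            if name != source:
                self.apps[name] = Light.AMBER
        print(f"{source}: {change}")

    def acknowledge(self, name):
        self.apps[name] = Light.GREEN

hub = ProjectHub()
for app in ("schematic", "constraints", "layout"):
    hub.register(app)
hub.publish("constraints", "diff-pair spacing changed to 4 mil")
print(hub.apps["layout"])   # Light.AMBER until the layout user reviews it
hub.acknowledge("layout")
```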

Concurrent Design within an Application
Another reality of current projects is that the optimization of a complex system will involve the skills of all team members, leveraging specific expertise. The PCB development platform needs to readily support concurrent design among team members working on different areas of the design within the same application.

There are brute-force “partition, work independently, and re-assemble” approaches to concurrent design. Yet designers require real-time visibility into the full design model — a true concurrent platform makes the full design data accessible.

Mentor’s Xpedition platform enables a fully concurrent set of users working in the same application on the full project database.


Consistency is a MUST
A development platform that enables collaborative, concurrent design MUST ensure the consistency of the “live” data being updated by the various team members. Xpedition maintains a single, consistent database model. For example, schematic sheets being edited are locked against edits by others until the sheet is closed — however, as schematic objects are modified, updates are visible in real-time to users viewing the schematic set. Design constraints being inserted/edited are locked until the edit is complete — the specific user performing the edit is displayed to other users viewing the constraint set. Once the edit is complete, updated objects are highlighted to other users. For multiple designers working concurrently on a board layout, Xpedition provides isolation through real-time “force fields” representing the active neighborhoods in which separate designers are working, visible to all clients.


The figure above illustrates multiple concurrent layout users, with the force field identifying an active edit area of one user, and thus, a keepout for the others.
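
The locking behavior described above can be thought of as per-object edit locks whose owner is visible to all clients. Here is a minimal sketch under that assumption (hypothetical names, not the Xpedition API):

```python
class DesignObject:
    """Toy model of edit locking: one editor at a time, many viewers."""
    def __init__(self, name):
        self.name = name
        self.editor = None          # user currently holding the edit lock

    def acquire(self, user):
        if self.editor is not None:
            raise RuntimeError(f"{self.name} is being edited by {self.editor}")
        self.editor = user

    def release(self, user):
        assert self.editor == user
        self.editor = None          # object is now flagged as updated to others

sheet = DesignObject("schematic_sheet_3")
sheet.acquire("alice")
try:
    sheet.acquire("bob")            # rejected: lock held, owner is displayed
except RuntimeError as err:
    print(err)
sheet.release("alice")
```

The layout “force fields” generalize this from single objects to regions: the lock is a neighborhood of the board rather than a schematic sheet or constraint.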

At the PCB Forum, Mentor provided a live demo of the unique features of the Xpedition product platform for concurrent, collaborative PCB design. It was one of those “Wow, that’s incredibly productive!” design methodology demonstrations.

A central Xpedition server manages the project data and concurrent access by multiple (up to 16) design clients. The demo highlighted how collaborative updates are dynamically reported to another application’s client using the traffic light indicator — e.g., the layout designer receives an amber notification when there is a component change in a schematic or an update to a property=value assignment in the constraint manager. The demo also highlighted how multiple clients work concurrently in the same application, with the appropriate locking of data objects.

Xpedition’s collaborative, concurrent environment supports both a lightweight data management/notification system, and a full enterprise-level DM application, which includes full user privilege and authentication controls, notification and signoff policies, and version/configuration management support. In either the Xpedition lightweight or xDM-based data management mode, client sessions are independent — the individual design workspaces are separate, supporting unique user preferences.

In addition to the use of a site-based server, the Xpedition collaboration features support a cloud-based project database.

Mentor’s Xpedition design platform addresses the 4C’s required by large, complex systems, enabling a Collaborative (multiple, dependent applications), Concurrent (same application) environment, with data management features ensuring Consistency, across a site or Cloud-based project database. The transition from a waterfall process to a more flexible methodology offers substantial project productivity benefits.

For more information on Xpedition’s design platform and collaboration features, please follow these links:

additional Mentor PCB Forum locations/dates

Concurrent Engineering landing page (with multiple video demonstrations)

Collaborative Management of Design Constraints blog

Concurrent Schematic Design blog

Real-time Concurrent PCB Layout blog

Xpedition datasheet

-chipguy


Webinar: Getting to Formal Coverage
by Bernard Murphy on 04-20-2017 at 10:00 am

Facing rapidly growing challenges in getting to respectable coverage, designers have been turning more and more to formal verification, not just to plug gaps but increasingly to take over verification of significant components of the testplan. Which is great, but at the end of the day any approach to verification must be measured against its contribution to coverage and most of us wrestle with how to do that for formal.

REGISTER HERE for Webinar on Tuesday April 25th at 10am PDT

We know that when we verify a feature formally we earn a very solid check mark for that particular feature, but how can we factor that into overall coverage, and how does that relate to the coverage we understand best – simulation-based coverage? A disciplined engineering management approach to verification signoff must answer this question for formal investment on a design, to ensure that effort adds up to more than a disaggregated set of point proofs.

Synopsys aims to answer that need in this webinar, providing ways to quantify formal coverage and particularly answering questions on how much of a design is covered by checkers and how much by full proofs, where design constraints might be unnecessarily limiting coverage and how to address coverage questions for inconclusive proofs.

REGISTER HERE

Web event: Boosting Confidence in Property Verification Results with VC Formal
Date: April 25, 2017
Time: 10:00 AM PDT
Duration: 60 minutes

Formal property verification has gained a lot of traction in recent years due to (a) the ever-increasing challenge of verifying all possible corner-case behaviors and (b) industry adoption and acknowledgement of the power of assertion-based verification.

The user base for property verification is no longer limited to a handful of formal experts but has extended into the realm of simulation-based verification users and designers. This increasingly diverse user base puts the spotlight on the most fundamental, “must-have” requirement for every verification engineer/manager — “How does one measure or quantify formal verification?” — a question answered in simulation-based verification using coverage metrics.

In this webinar, we will showcase VC Formal’s capabilities, which include allowing users to quantify formal progress at a granular level, in order to address the 4 basic questions leading to formal signoff:

  • How much of my design is covered by the list of checkers?
  • Is my formal test bench over-constrained?
  • Are proof depths from inconclusive results good enough to catch potential design bugs?
  • Do the full proofs cover the design logic they were intended to cover?

We will rely on existing simulation-based verification coverage targets (i.e., line coverage, condition coverage, FSM coverage) to measure the RTL targets that are hit by the formal test bench.
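
One simple way to think about merging the two coverage views (a toy model, not VC Formal's actual algorithm) is as a set union of targets hit in simulation and targets covered or proven formally:

```python
# Toy model: coverage targets hit in simulation vs. covered/proven formally.
sim_hits      = {"line_12", "line_13", "cond_7", "fsm_idle_to_run"}
formal_proven = {"line_13", "cond_7", "cond_8", "fsm_run_to_halt"}

all_targets = {"line_12", "line_13", "cond_7", "cond_8",
               "fsm_idle_to_run", "fsm_run_to_halt", "line_14"}

merged = sim_hits | formal_proven
print(f"merged coverage: {len(merged)}/{len(all_targets)} "
      f"= {100 * len(merged) / len(all_targets):.0f}%")   # 6/7 = 86%
print("uncovered:", all_targets - merged)                  # {'line_14'}
```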

REGISTER HERE

Speakers:

Kiran Vittal
Product Marketing Director, Verification Group

Kiran Vittal is a product marketing director at Synopsys, with 25 years of experience in EDA and semiconductor design. Prior to joining Synopsys, Kiran held product marketing, field applications and engineering positions at Atrenta, ViewLogic, and Mentor Graphics. He holds an MBA from Santa Clara University and a Bachelor’s in Electronics Engineering from India.


Abhishek Muchandikar
Staff Corporate Applications Engineer, Synopsys Verification Group

Abhishek Muchandikar is a Staff Corporate Applications Engineer in Synopsys’ Verification Group. He has over 11 years of experience in the verification domain, having worked on both formal and simulation-based methodologies. He has previously worked on telecom software protocols. He holds a Master’s degree in Microelectronics from Victoria University, Melbourne, Australia.


Virtual Reality
by Bernard Murphy on 04-20-2017 at 7:00 am

In the world of hardware emulators, virtualization is a hot and sometimes contentious topic. It’s hot because emulators are expensive, creating a lot of pressure to maximize return on that investment through multi-user sharing and 24×7 operation. And of course in this cloud-centric world it doesn’t hurt to promote cloud-like access, availability and scalability. The topic is contentious because vendor solutions differ in some respects and, naturally, their champions are eager to promote those differences as clear indication of the superiority of their solution.

Largely thanks to contending claims, I was finding it difficult to parse what virtualization really means in emulation, so I asked Frank Schirrmeister (Cadence) for his clarification. I should stress that I have previously talked with Jean Marie Brunet (Mentor) and Lauro Rizzatti (Mentor consultant and previously with Eve), so I think I’m building this blog on reasonably balanced input (though sadly not including input from Synopsys, who generally prefer not to participate in discussions in this area).

There’s little debate about the purpose of virtualization – global/remote access, maximized continuous utilization and 24×7 operation. There also seems to be agreement that hardware emulation is naturally moving towards becoming another datacenter resource, alongside other special-purpose accelerators. Indeed, the newer models are designed to fit datacenter footprints and power expectations (though there is hot debate on the power topic).

Most of the debate is around implementation, particularly regarding purely “software” (RTL plus maybe C/C++) verification versus hybrid setups where part of the environment connects to real hardware, such as external systems connecting through PCIe or HDMI interfaces for example. Pure software is appealing because it offers easy job relocation, which helps the emulator OS pack jobs for maximum utilization and therefore also helps with scalability (add another server, get more capacity).

In contrast, hybrid (ICE) modeling requires external hardware and cabling to connect to the emulator, hardware that is also specific to a particular verification task, which would seem to undermine the ability to relocate or scale jobs and therefore the whole concept of virtualization. In fact, this problem has been largely addressed in some platforms. You still need the external hardware and cabling, of course, but internal connectivity has been virtualized between those interfaces and jobs running on the emulator. Since many design teams want to ICE-model with a common set of interfaces (PCIe, USB, HDMI, SAS, Ethernet, JTAG), these resources can be shared and jobs remain relocatable, scalable and fully virtualized.
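
One way to picture that sharing (a toy scheduler sketch with invented port counts, not any vendor's implementation): jobs request an interface type from a shared pool rather than binding to a fixed cable, so they stay relocatable.

```python
from collections import Counter

pool = Counter({"PCIe": 2, "USB": 1, "Ethernet": 2})   # hypothetical shared ICE ports
in_use = Counter()

def start_job(job, needs):
    """Admit a job only if enough ports of each requested type are free."""
    need = Counter(needs)
    if any(in_use[kind] + n > pool[kind] for kind, n in need.items()):
        return f"{job}: queued (insufficient free ports)"
    in_use.update(need)
    return f"{job}: running"

print(start_job("soc_regression", ["PCIe", "Ethernet"]))     # running
print(start_job("usb_bringup", ["USB"]))                     # running
print(start_job("net_stress", ["Ethernet", "Ethernet"]))     # queued: 1 of 2 left
```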

Naturally external ICE components can also be virtualized, running on the software host, or the emulator or some combination of these. One appeal here is that there is no need for any external hardware (beyond the emulation servers), which could be attractive for deployment in general-purpose datacenters. A more compelling reason is to connect with expert 3rd-party software-based systems to model high levels and varying styles of traffic which would be difficult to reproduce in a local hardware system. One obvious example is in modelling network traffic across many protocols, varying rates and SDN. This is an area where solutions need to connect to testing systems from experts like Ixia.


You might wonder then if the logical endpoint of emulator evolution is for all external interfaces to be virtualized. I’m not wholly convinced. Models, no matter how well they are built, are never going to be as accurate as the real thing, in real-time and asynchronous behaviors and especially in modeling fully realistic traffic. Yet virtual models unquestionably have value in some contexts. I incline to thinking that the tradeoff between virtualized modeling and ICE modeling is too complex to reduce to a simple ranking. For some applications, software models will be ideal especially when there is external expertise in the loop. For others, only early testing in a real hardware system will give the level of confidence required, especially in the late stages of design. Bottom line, we probably need both and always will.

So that’s my take on virtual reality. You can learn more about the vendor positions HERE and HERE.


Autonomous Vehicles: Tesla, Uber, Lyft, BMW, Intel and Mentor Graphics
by Daniel Payne on 04-19-2017 at 12:00 pm

I read at least one hour of news every day to keep informed, and I’ve read so many stories about autonomous vehicles that the same, familiar company names continue to dominate the thought leadership. What really caught my attention this month was an announcement about autonomous vehicle technology coming from Mentor Graphics, now a Siemens Business. I knew that Mentor has been serving the automotive market with pieces of ADAS (Automated Driver Assistance Systems) technology over the years for tasks like:

  • Real Time Operating System
  • Embedded Systems design
  • Cabling and wiring harnesses
  • IC design, verification, emulation
  • Semiconductor IP

As an overview, consider the five levels of ADAS:

Level 0 – My previous car, a 1988 Acura Legend, where I control steering, brakes, throttle, power.

Level 1 – My present car, a 1998 Acura RL, where the car can control braking and acceleration (cruise control).

Level 2 – A system is used to automate steering and acceleration, so a driver can take hands and feet off the controls but is ready to take control.

Level 3 – Drivers are in the car, but safety-critical decisions are made by the vehicle.

Level 4 – Fully autonomous vehicle able to make an entire trip. Not all driving scenarios are covered.

Level 5 – Fully autonomous vehicle able to drive like a human in all scenarios, even using dirt roads.

One approach used in autonomous vehicles today is to use distributed sensors and data processing as shown below:

In this approach, shown on top, a radar sensor feeds data into a converter chip; that data is then used by an Adaptive Cruise Control (ACC) system, which communicates over the vehicle CAN network, finally reaching the actuators on your brakes to slow or stop the car. Shown on the bottom is a separate sensor for the front-facing camera, where raw data gets converted and used by the Lane Departure Warning System (LDWS). So each automotive sensor has its own independent filtering, conversion, and processing for streams of data, which then communicate on a bus to take some physical action. Some side effects of this distributed processing approach are:

  • System latency to transmit safety-critical info will be longer (see the toy latency sketch after this list)
  • Edge nodes don’t see all of the data, only snippets
  • Increased cost and power requirements
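
To see why the latency differs, consider a toy per-stage latency budget; the numbers below are invented for illustration only, not measured figures for any real system.

```python
# Toy latency budget in microseconds -- illustrative numbers only.
distributed = {
    "radar sensor": 50, "converter chip": 100,
    "ACC ECU processing": 500, "CAN transfer": 250, "brake actuator": 100,
}
centralized = {
    "radar sensor": 50, "high-speed link": 20,
    "central fusion processing": 500, "brake actuator": 100,
}

print("distributed:", sum(distributed.values()), "us")   # 1000 us
print("centralized:", sum(centralized.values()), "us")   # 670 us
```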

Related blog – Help for Automotive and Safety-critical Industries

Mentor Approach
With a centralized approach Mentor has created something called the DRS360 platform that is designed for the rigors of Level 5 autonomy. Here’s how data flows in the DRS360 platform:

Raw sensor data is connected to a centralized module, allowing all processing to occur in one place using high-speed communication lines. The DRS part of DRS360 stands for Direct, Raw Sensing. This low-latency communication architecture makes all of the sensor data, both raw and processed, visible and usable throughout the entire system at all times. A benefit of DRS360 is that decisions can be made more quickly and efficiently than with other approaches. Expect power consumption of around 100 watts through the use of neural networking in DRS360.

Related blog – Mentor Safe Program Rounds Out Automotive Position

Inside DRS360
Now that we have the big picture of this new platform from Mentor, what exactly is inside of it?

  • Xilinx Zynq UltraScale+ MPSoC FPGAs
  • Neural networking algorithms for machine learning
  • Integration services
  • Mentor IP

With DRS360 you won’t be using a separate chip to connect with each automotive sensor; instead, you’ll be connecting to an FPGA. Your ADAS system can use either x86 or ARM-based SoCs to perform common functions: sensor fusion, event detection, object detection, situational awareness, path learning and actuator control.

Next Steps
You can read more about DRS360 online, browse the press release, or watch an overview video. I’m really looking forward to watching the adoption of this DRS360 platform over the coming year, because the autonomous vehicle market is so popular and could be the next big driver for the semiconductor industry. Let’s see how soon we see a vehicle with some form of a Mentor Inside logo on it.


SPIE 2017 – ASML Interview and Presentations
by Scotten Jones on 04-19-2017 at 7:00 am

At the SPIE Advanced Lithography conference I sat down with Mike Lercel, Director of Strategic Marketing for ASML for an update. ASML also presented several papers at the conference and I attended many of these. In this article, I will discuss my interview with Mike and summarize the ASML presentations.
Continue reading “SPIE 2017 – ASML Interview and Presentations”


Making Cars Smarter And Safer
by Tom Simon on 04-18-2017 at 12:00 pm

The news media has naturally focused on the handful of deaths that have occurred while auto-pilot features were enabled. In reality, automobile deaths are occurring at a lower rate now than ever. In 2014 the rate was 1.08 deaths per 100 million miles driven. Compare that to 5.06 per 100M miles in 1960, or a whopping 24.09 in 1921 – the first year there were reliable data on miles driven. Still, around thirty-two thousand deaths per year in the US is a horrific number.

Autonomous driving promises to do a lot to further reduce these numbers. Automated systems containing electronics have contributed to reducing deaths. The airbag system contains accelerometers and controller circuitry to trigger deployment. Anti-lock brakes and anti-skid controls have certainly been effective in reducing accident rates or the severity of injuries. However, semi and fully autonomous driving could help eliminate the most frequent cause of injury accidents – human error. Per the NHTSA, 94% of auto accidents can be tied back to human choice or error.

The one big assumption made for autopilot systems is that they themselves do not suffer from ‘driver error’ – or in other words any kind of failure. Of course, no system can be made error proof. However, much can be done to design them to drastically reduce the likelihood of an error occurring. Going back to the 1970’s, the space shuttle relied on redundant systems and a voting mechanism to ensure no failures affected the mission. They had five computers to enable this. Four of them performed identical calculations and they voted to ensure the correct result. The fifth was a no-frills backup in case the fully configured systems failed.

Today, for commercial and consumer products, having a four or five-fold redundancy would be prohibitive. Even two-fold redundancy would present a competitive disadvantage. Let’s probe deeper into the control systems for autonomous driving. Neural networks are the core of autopilot systems. They rely heavily on hardware accelerators to perform compute intensive operations. In these systems, there is a combined need for functional safety, super-computer complexity and near real-time latency. See below for block diagrams of several leading auto-pilot compute systems.

The relevant standard for automotive functional safety is ISO 26262. It deals with every level of the supply chain. Functional safety risks include both random and systematic faults, and the systems it covers deal with accident prevention and accident mitigation (active and passive safety, respectively). For a higher-level unit or system to function properly, all its components (mechanical, hardware and software) must also adhere to the same safety process. Each identified potential failure has an Automotive Safety Integrity Level (ASIL) assigned to it based on the severity of the expected loss, the probability of the failure, and the degree to which it may be controllable if it occurs. ASIL is quite different from the more common Safety Integrity Level (SIL) used for other applications; ASIL relies on more subjective and comprehensive metrics.

Avoiding ASIL level B faults, as a rule of thumb, can be accomplished with fault detection such as ECC/parity plus software measures. Again, this is for faults that are associated with low levels of loss, or that can be easily recovered from. ASIL D, at the other end of the spectrum, calls for space-shuttle levels of protection. Common techniques for these faults involve duplication of key logic – albeit a costly undertaking.
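
As a sketch of the duplication idea (a software stand-in for what is really done in hardware lockstep; a hypothetical function, not an ISO 26262 implementation): run the critical computation twice and treat any mismatch as a detected fault.

```python
def lockstep(compute, inputs):
    """Toy ASIL-D-style duplication: run the computation twice and compare.
    A mismatch would signal a random hardware fault; both copies run in
    software here, so this only illustrates the structure, not real redundancy."""
    a = compute(inputs)
    b = compute(inputs)
    if a != b:
        raise RuntimeError("lockstep mismatch: enter safe state")
    return a

brake_pressure = lockstep(lambda v: min(100, v * 2), 30)
print(brake_pressure)   # 60
```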

Recently at the Linley Autonomous Hardware Conference in Santa Clara, Arteris announced an innovative solution for improving the reliability of ISO 26262 systems at the hardware level. As an alternative to duplicating all the elements of critical hardware to ensure reliable operation, their Ncore 2.0 with the Ncore Resilience Package makes it possible to improve the reliability of the existing memories and data links. This means that only hardware that affects the contents of data packets needs to be duplicated. The Ncore Resilience Package internally uses integrated checkers, ECC/parity and buffers to provide reliable and resilient data transport in SoCs. This goes a long way toward helping system designers and architects meet the requirements of ISO 26262.

One of the advantages of this approach is that system software is simplified. Ncore is scalable, so there is more flexibility, including the ability to add non-coherent elements and make them present a coherent interface to the SoC by using Ncore Proxy Caches. Building functional safety into on-chip networking makes complete sense in the context of automotive safety and reliability. More information about Arteris Ncore and the Ncore Resilience Package is available on their website.


An Ultra-Low Voltage CPU
by Bernard Murphy on 04-18-2017 at 7:00 am

A continuing challenge for large scale deployment of IoT devices is the need to minimize service/cost by extending battery life to decades. At these lifetimes, devices become effectively disposable (OK – a new recycling challenge) and maintenance may amount to no more than replacing a dead unit with a new unit. Getting to these levels requires effort to manage dynamic, leakage and sensor power consumption. Managing leakage has been covered in many articles on FinFET and FDSOI technologies and discussions on aggressive power switching, though this challenge is not as acute in legacy processes. I wrote recently about advances in managing sensor power where at least for some types of sensing, standby power can be reduced to zero.

That leaves dynamic power (the power the system burns when it is doing something other than sleeping) as the primary contributor to battery drain. Dynamic power varies with the square of the operating voltage, so reducing voltage has a major impact. We’re used to seeing modern devices running at 1 volt, in some cases at 0.8 volts and even at 0.6 volts. It’s easy to see why; at 0.6 volts dynamic power should be only 36% of the power consumed at 1 volt. So why not reduce the voltage to 0.1 volts or even lower? That isn’t so easy; digital circuits depend on transistors switching between a ‘0’ state and a ‘1’ state. The normal way they do this is to switch between a definitely-off (grounded) state and a definitely-on (saturated) state. But at very low voltages there isn’t enough voltage swing to get up to the saturated state; you can only get part way up the curve.
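
To connect that claim to the formula it rests on, the standard first-order model of dynamic power is

$$P_{dyn} = \alpha \, C \, V_{DD}^{2} \, f$$

where $\alpha$ is the switching activity, $C$ the switched capacitance and $f$ the clock frequency. Holding $\alpha$, $C$ and $f$ fixed while scaling the supply from 1.0 V to 0.6 V gives

$$\frac{P_{0.6}}{P_{1.0}} = \left(\frac{0.6}{1.0}\right)^{2} = 0.36,$$

the 36% figure quoted above.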

Of course getting part way up can still be enough to effectively switch, as long as the level you get to is sufficient to switch the next gate in the chain. But there’s a problem. Variabilities in process, temperature and other environmental factors make it difficult to exactly control the voltage swing for each transistor or how quickly the next transistor will respond. As these variations accumulate it can be very challenging to have a circuit operate reliably at very low voltages.

This is where EtaCompute comes in. I think they can safely claim without contradiction that they have developed the world’s lowest power microcontroller IP, since they can operate these as low as 0.25 volts. This might be mildly interesting (do we really need more processors?) were it not for the fact that they have built their IP based on ARM M0+ and M3 cores, in partnership with ARM. They are very cagey about how they make this work, mentioning only that they use self-timed technology and dynamic voltage scaling (DVS) and they say that this is insensitive to process variations, inaccurate device models and path delay variations. 0.25-volt operation has evidently been demonstrated in a 90LP process. They also offer supporting low voltage IP including a real-time clock, AES encryption, an ADC, a DSP and a PMIC to control DVS.

At these low levels of dynamic power consumption and operating in legacy low-power processes where leakage is presumably not a concern, they assert devices built around this logic can comfortably operate in always-on/always-aware mode (which should be easier to manage and lower cost), driven by small coin-cell batteries and energy harvested through e.g. solar power.
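
A quick back-of-the-envelope calculation shows why this matters for decade-scale battery life. Assuming a typical coin-cell capacity of about 220 mAh and a hypothetical 1 µA average current draw (illustrative numbers, not EtaCompute specifications):

$$t_{life} \approx \frac{C_{batt}}{I_{avg}} = \frac{220\ \text{mAh}}{1\ \mu\text{A}} = 2.2\times10^{5}\ \text{h} \approx 25\ \text{years},$$

ignoring battery self-discharge, which in practice sets the real upper bound.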

The company is quite new. It was founded in 2015 by co-founders and early execs of Inphi, is based in LA, and has raised ~$4.5M so far. It has very credible backing through its partnership with ARM, and some of the investment comes from Walden International (Lip-Bu Tan). Good idea, good backing; it should be an interesting company to watch.

You can visit the website HERE.


The Importance of EM, IR and Thermal Analysis for IC Design – Webinar
by Daniel Payne on 04-17-2017 at 4:00 pm

Designing an IC has both a logical and physical aspect to it, so while the logic in your next chip may be bug-free and meet the spec, how do you know if the physical layout will be reliable in terms of EM (electro-migration), IR (voltage drops) and thermal issues? EDA software once again comes to our rescue to perform the specific type of reliability analysis required to ensure that these physical issues are well understood, and to be gate-keepers before silicon tape-out occurs. So when should you start running this type of analysis, and how often do you need to run it? Great questions, and fortunately for us there’s a webinar on this specific topic from Silvaco.

Webinar Overview
This webinar introduces the best practices for ensuring robustness and ease-of-use in performing power, EM and IR drop analysis on various types of IC designs early in the design cycle using simple and minimalistic input data. Using industry-standard input and output file formats, power integrity analysis will be demonstrated early in the design cycle as well as at the sign-off, tape-out stage. We will show how to find and fix issues that are not detectable with regular DRC/LVS checks like missing vias, isolated metal shapes, inconsistent labeling, and detour routing. InVar Prime has been used to verify a broad range of designs including processors, wired and wireless network ICs, power ICs, sensors and displays.

Presenter

Kim Nguyen is a Senior Applications Engineer for Silvaco specializing in physical design and physical verification. Prior to joining Silvaco, Kim led physical design and tape-out teams at Intel Corporation.

He has also held key back-end applications engineering roles for various EDA companies.

Who Should Attend
Physical design and physical verification engineers who work on reliability verification, EM/IR, thermal and power analysis.

When: April 20, 2017

Time: 10AM – 11AM, PDT

Language: English

I will be attending this webinar and blogging about it in more detail, so look forward to another post next week with the details.

Register Online

About Silvaco, Inc.
Silvaco, Inc. is a leading EDA provider of software tools used for process and device development and for analog/mixed-signal, power IC and memory design. Silvaco delivers a full TCAD-to-signoff flow for vertical markets including: displays, power electronics, optical devices, radiation and soft error reliability and advanced CMOS process and IP development. For over 30 years, Silvaco has enabled its customers to bring superior products to market at reduced cost and in the shortest time. The company is headquartered in Santa Clara, California and has a global presence with offices located in North America, Europe, Japan and Asia.


A New Product for DRC and LVS that Lives in the Cloud
by Daniel Payne on 04-17-2017 at 12:00 pm

Back in the day, the Dracula tool from Cadence was king of the DRC and LVS world for physical IC verification; more recently, Calibre from Mentor Graphics has been the leader in this realm. Cadence wanted to reclaim its earlier prominence in physical verification, so it had to come out with something different to meet the ever-increasing challenges:

  • 10/7nm – 7,000 DRC rules, 40,000 operations
  • 16nm DRC signoff – more than 4 days to run
  • Poor scalability with physical verification tools

Instead of acquiring a start-up, management at Cadence decided to have engineering develop a new DRC/LVS tool from scratch to meet these challenges, and they named the new product Pegasus. If you’ve been keeping track of recent Cadence products there’s a common naming theme: Voltus, Innovus, Genus, Modus, Stratus, Tempus. Some non-conforming new product names: Xcelium, Joules, Indago.

Other DRC/LVS tools use multi-threading, however they really aren’t scaling well beyond a few hundred CPUs, so here are the three big differences with Pegasus:

  • Massively parallel architecture
  • Cloud-ready
  • Full-flow physical verification


Cadence’s 2nd-generation DRC/LVS tools were Assura and PVS, which handled hierarchy and used multi-threading; that worked acceptably in terms of turnaround time for many process nodes. With Pegasus you can expect to get runs back about 10X faster than with PVS, and it accomplishes this using three technologies:

  • Stream processing – don’t wait to read in the entire GDSII before starting to run DRC/LVS
  • Data flow architecture – scales well up to 960 cores
  • Massively parallel pipelined infrastructure – customer private cloud

What this means to the end user is much shorter DRC runtimes: runs that used to take days now complete in hours. Companies like Google made stream processing famous in their search engines, Facebook is the best-known user of a data flow architecture, and Amazon has mastered cloud computing. So Cadence took its new product development ideas for Pegasus from outside traditional EDA thinking.
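
A minimal sketch of the stream-processing idea (hypothetical record format and rule deck, not the Pegasus engine): checking starts on the first record instead of waiting for the full database to load.

```python
MIN_WIDTH = {"M1": 0.05, "M2": 0.07}   # hypothetical minimum-width rules (um)

def streaming_drc(records):
    """Yield violations as records arrive; no need to hold the full layout."""
    for i, (layer, width) in enumerate(records):
        if width < MIN_WIDTH.get(layer, 0.0):
            yield f"record {i}: {layer} width {width} below {MIN_WIDTH[layer]}"

# Records could come from a GDSII reader; a small in-memory stream here.
records = iter([("M1", 0.04), ("M1", 0.06), ("M2", 0.05)])
for violation in streaming_drc(records):
    print(violation)
```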

Related blog – Simulation Done Faster

With the old multi-threading approach you needed a huge master machine; with Pegasus you no longer do, and the Pegasus scheduler can set up 1,000,000 separate threads for use by all nodes in your cloud.

So just how fast can you expect to get results from Pegasus compared to PVS? The following chart shows three customer designs run on 360 cores:

How about scalability? This next chart shows a couple of designs run through Pegasus using 160, 320 and 640 CPUs:

Cadence customers using Virtuoso or Innovus will be pleased to learn that Pegasus works natively with each tool; benefits of Pegasus for Virtuoso users include:

  • In-memory integration, no stream out and stream in
  • Dynamic detection of the creation, editing and deletion of objects
  • Instantaneous DRC checks
  • Use of the standard foundry-certified PVS deck

Related blog – Making Functional Simulation Faster with a Parallel Approach

I asked Christen Decoin at Cadence about repeatability and rotated designs, and he assured me that results are consistent between runs and it doesn’t matter if you rotate the layout.

Summary
The EDA world never remains constant; there are always new challengers in each tool category, and the team at Cadence has achieved something noteworthy with the introduction of Pegasus for SoC designers who cannot afford to wait 4 days or more for their DRC runs to get through sign-off. Even if you just need blocks pushed through DRC faster, any new tool that promises a 10X improvement is certainly worth looking into. Texas Instruments and Microsemi have talked publicly about using Pegasus for their DRC/LVS tool.

The marketing folks at Cadence even got a bit artistic with their graphics for Pegasus and the tagline: Let Your DRC Fly


Live from the TSMC Earnings Call!
by Daniel Nenni on 04-17-2017 at 7:00 am

Last week I was invited to attend the TSMC earnings call at the Shangri-la Hotel in Taipei which was QUITE the experience. I generally listen in on the calls and/or read the transcripts but this was the first one I attended live. I didn’t really know what to expect but I certainly did NOT expect something out of Hollywood. Seriously, there were photographers everywhere taking hundreds of pictures. I was sitting front row center and as soon as the TSMC executives sat down there was a rush of paparazzi and the clicking sounds were deafening. It was a clear reminder of how important TSMC is in Taiwan, and the rest of the world for that matter.

The most interesting news for the day was that 10nm is progressing as planned with HVM in the second half of this year. In fact, 10nm should account for 10% of TSMC wafer revenue this year (Apple). There had been rumors that foundry 10nm was in trouble (fake news) but clearly that is not the case for TSMC. In fact, according to C. C. Wei:

Although N10 technology is very challenging, the yield learning progression has been the fastest as compared to the previous node such as the 20- and 16-nanometer. Our current N10 yield progress is slightly ahead of schedule. The ramp of N10 will be very fast in the second half of this year.

C.C. also gave an encouraging 7nm update:

TSMC N7 will enter risk production in second quarter this year. So far, we have more than 30 customers actively engaged in N7. And we expect about 15 tape-outs in this year with volume production in 2018. In just 1 year after our launch of N7, we plan to introduce N7+ in 2018. N7+ will leverage EUV technology for a few critical layers to save more immersion layers. In addition to process simplification, our N7+ provides better transistor performance by about 10% and reduces the chip size by up to 10% when compared with the N7. High volume production of N7+ is expected in second half 2018 — I’m sorry, in second half of 2019. Right now, our focus on EUV include power source stability, pellicle for EUV mask and stability of the photoresist. We continue to work with ASML to improve the tool productivity so that it can be ready for mass production on schedule.

And last but not least 5nm:

We have been working with major customers to define 5-nanometer specs and to develop technology to support customers’ risk production schedule in second quarter 2019, with volume ramp in 2020. Functional SRAM in our test vehicle has already been established. We plan to use more layers of EUV in N5 as compared to N7+.

The other interesting technology update was InFO:

First, we expect InFO revenue in 2017 will be about USD 500 million. Now we are engaging with multiple customers to develop next-generation InFO technology for smartphone application for their 2018, 2019 models. We are also developing various InFO technologies to extend the application into high-performance computing area, such as InFO on substrate, and we call it InFOoS; and InFO with memory on substrate, InFO-MS. These technologies will be ready by third quarter this year or first quarter next year.

If I remember correctly, InFO contributed $100M last year (Apple) so this is great progress. By the way, now that I have seen the facial expressions that go with the voices during the Q&A I can tell you that C.C. has a very quick wit. I had pity for the analysts who tried to trip up C.C. and get inappropriate responses from him.

Mark Liu talked about ubiquitous computing and AI, which reminded me why TSMC is in the dominant position it holds today. As a pure-play foundry, TSMC makes chips for all applications and devices. Ubiquitous computing means computing can appear anytime and anywhere, so all of those mobile devices TSMC has enabled over the past 30 years will continue to evolve, making the TSMC ecosystem worth its weight in silicon.

I also have a new perspective on the analysts that participate in the Q&A after sitting amongst them. I have no idea how much they get paid for what they do but I’m pretty sure it is too much.

Here is my favorite answer for Q1 2017:

Michael Chou Deutsche Bank AG, Research Division – Semiconductor Analyst Okay, the next question, sir, management mentioned the log scale comparison versus Intel, I think, the 2014, right? So since Intel came out to say that their technology seems to be 3 year ahead of the other competitor, including your company, so do you have any comment on your minimum metal pitch and the gate pitch comparison versus Intel? Or do you have any comment for your 5-nanometer versus Intel 10-nanometer, potential 7-nanometer?

C. C. Wei Taiwan Semiconductor Manufacturing Company Limited – Co-CEO and President Well, that’s a tough question. I think every company, right now, they have their own philosophy developing the next generations of technology. As I reported in the foundry, we work with our customer to define the specs that can fit their product well. So the minimum pitch to define the technology node, we are compatible to the market. But the most important is that we are offering the best solution to our customers’ product roadmap. And that’s what we care for. So I don’t compare that really what is the minimum pitch to define the technology node.

Absolutely!

A PDF of the meeting is HERE. The presentation materials are HERE. I have pages of notes from the event and the trip in general, so let’s talk more in the comments section and make these analysts green with envy!