
Webinar Alert – Embedded Monitoring of Process and Voltage in SoCs
by Daniel Payne on 03-13-2018 at 12:00 pm

In the old days, to learn about new semiconductor IP you would have to schedule a sales call, listen to the pitch, then decide whether the IP was promising or not. Today we have webinars, which offer a lot less drama than a sales call, plus you get to ask your questions by typing away from the comfort of your desk, hopefully wearing headphones so as not to disrupt your co-workers in the next cubicle. I’ll be attending a webinar from Moortec about their IP for monitoring process and voltage variations on April 25 at 10AM PDT, and I invite you to join the event online. After the webinar I’ll write up a summary of the salient points, saving you some time and effort if you cannot attend live.

Intro
Historically, temperature has been the first thing engineers think about when it comes to monitoring in-chip conditions. However, as we move into more complex designs on advanced nodes, process and voltage are becoming equally critical considerations. The associated challenges manifest in multiple ways, including process variability, exposure to timing violations, excessive power consumption, and the effects of aging. Each of these can lead to ICs failing to perform as expected.

Webinar Content
In this latest Moortec webinar we will look at how process and voltage monitoring combine to enhance the performance and reliability of the design and how they can be used to implement various power management control systems.

This webinar is aimed at IC developers and engineers working on advanced node CMOS technologies including 40nm, 28nm, 16nm, 12nm and 7nm. It will outline the two main pressures designers are grappling with today: i) the desire for lower supplies, enabling compelling power performance for products, especially consumer technologies; and ii) the risk that doing so places the functional operation of SoCs, and an entire product range, in jeopardy. The dilemma for the designer is that to maximize the former, the optimization schemes used today are algorithmically treading an increasingly thin line between robust operation and devices failing in the field.

Moortec provides complete PVT monitoring subsystem IP solutions on 40nm, 28nm, FinFET and 7nm. As advanced technology design poses new challenges to the IC design community, Moortec helps its customers understand more about the dynamic and static conditions on chip in order to optimize device performance and increase reliability. As the only dedicated PVT IP vendor, Moortec is now considered a centre-point for such expertise.

After registering, you will receive a confirmation email containing information about joining the webinar.

Webinar Registration
It’s easy to register online here for Wednesday, April 25 at 10AM PDT (US, Europe, Israel).

About Moortec Semiconductor

Established in 2005, Moortec provides compelling embedded subsystem IP solutions for Process, Voltage & Temperature (PVT) monitoring, targeting advanced node CMOS technologies from 40nm down to 7nm. Moortec’s in-chip sensing solutions support the semiconductor design community’s demands for increased device reliability and enhanced performance optimization, enabling schemes such as DVFS, AVS and power management control systems. Moortec also provides excellent support for IP application, integration and device test during production.

Another Application of Automated RTL Editing
by Bernard Murphy on 03-13-2018 at 7:00 am

DeFacto and their STAR technology are already quite well known among those who want to procedurally apply edits to system-level RTL. I’m not talking here about the kind of edits you would make with your standard edit tools. Rather these are the more convoluted sort of changes you might attempt with Perl (or perhaps Python these days). You know, changes that need to span multiple levels of hierarchy, looking for certain types of block, then adding, removing or changing connections which also cross hierarchy. Technically possible with custom scripts perhaps, but these can get really hairy, leaving you at times wondering if you’re battling on out of a stubborn refusal to quit or because that’s really the best way.


DeFacto originally got into this space in support of DFT teams who need to add and connect complex BIST logic which may have to be reconfigured on new RTL drops and reconfigured again on floorplan changes. Their big value-add is in making these edits easily scriptable without demanding that you tie yourself in knots figuring out hierarchy implications. Since they can edit RTL, and such needs are common beyond DFT, their customers have expanded use of these tools into many other applications, each of which needs at least a subset of those complex find, restructure, edit, re-stitch and similar operations.

One such use-model was announced recently – using scripted editing to trim down SoC RTL in order to greatly accelerate simulations. The design application in this case was in graphics, a domain which has lots of repeated block instances, in common with quite a lot of other applications like networking. Also in common with those applications, these designs tend to be huge. Now imagine you have to simulate this monster – yes, simulate, not emulate or prototype. Why on earth would you do that? Lots of reasons – you have to include AMS in your verification, you have to do 4-state modeling (0, 1, X, Z), you need to do on-the-fly debug, it’s faster to experiment in simulation, or maybe acceleration hardware is tied up on another project. But compile and simulation on the full core/chip will take forever.

Fortunately, a lot of verification objectives don’t require the simulator to swallow the whole design. You can trim repeated instances down to just one or a few instances, replacing the rest with shell models. But this isn’t quite as simple as black-boxing. First and most obviously, a black-box’s outputs will float at X, which will mess up downstream logic. So at minimum you have to tie these outputs off, inside the black-box.

But even that isn’t quite enough. Integration logic often depends on handshake acknowledgement. If I send you a req, you better respond with an ack at some point, otherwise everything locks up. So now you have to add a little logic (again inside the black-box) to fake that req-ack handling. And so on. The shell starts to accrete some logic structure just to make sure it behaves itself while you focus on the real simulation. This may extend to keeping whole chunks of a logic block while removing/tying off the rest. So much for simple black-boxing.
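To make the idea concrete, here is a minimal sketch of how a scripted flow might generate such a shell: outputs tied off instead of floating at X, plus a one-cycle fake req-ack so the integration logic doesn’t lock up. This is an illustration only, written in Python emitting Verilog text; the module and port names are hypothetical and it does not represent DeFacto’s STAR API, which works on the parsed design hierarchy rather than text templates.

```python
# Illustrative only: generate a Verilog "shell" replacement for a repeated block.
# Module/port names (req, ack, data_out) are hypothetical.

def make_shell(module_name: str, data_width: int = 32) -> str:
    """Emit a shell module: outputs tied off, req answered with a 1-cycle-delayed ack."""
    return f"""
module {module_name}_shell (
  input  wire                    clk,
  input  wire                    rst_n,
  input  wire                    req,       // request from the integration fabric
  output reg                     ack,       // fake acknowledge so nothing deadlocks
  output wire [{data_width-1}:0] data_out   // tied off instead of floating at X
);
  assign data_out = {{{data_width}{{1'b0}}}};   // tie-off, not a black-box X

  always @(posedge clk or negedge rst_n)
    if (!rst_n) ack <= 1'b0;
    else        ack <= req;                 // respond one cycle later
endmodule
"""

if __name__ == "__main__":
    print(make_shell("gpu_shader_core"))    # hypothetical repeated block
```

The point is not the template itself but that a scripted tool can apply this kind of substitution consistently across hundreds of instances and re-stitch the hierarchy afterwards.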

This is a perfect application for the DeFacto toolset, as Synapse Design observed in their endorsement of STAR. They found that some simulations they were running would take 3 weeks – each. But many of the sims only exercised subsets of the system, in some cases only needing certain instances in a repeated set, in others requiring only a part of the functionality of a module. By intelligently exploiting scripted edits over the design, they found they were able to reduce these simulation run-times by 4X (in one case by 5X). That’s a pretty huge advantage in getting to verification closure.

I’m a strong believer in this kind of scripted RTL editing/manipulation. There are many tasks through design and verification (and touching even implementation) which beg for automation but don’t easily fall into canned solutions. Many design teams hack scripts or simply accept they can’t do better when they hit these cases. There is a better way which doesn’t constrain your ingenuity and control but does automate the mechanical (and very painful) part of the job. Check it out.

Also Read

Analysis and Signoff for Restructuring

Design Deconstruction

Webinar: How RTL Design Restructuring Helps Meet PPA


Clock Domain Crossing in FPGA
by Alex Tan on 03-12-2018 at 12:00 pm

Clock Domain Crossing (CDC) is a common occurrence in a multiple-clock design. In the FPGA space, the number of interacting asynchronous clock domains has increased dramatically. It is normal to have not hundreds but over a thousand clock-domain interactions. Let’s assess why CDC is a lingering issue, what its impact is, and the available remedy guidelines to ensure a robust FPGA design.


CDC occurs whenever data is transferred from a flip-flop driven by one clock to a flip-flop driven by another clock. CDC issues can cause a significant number of failures in both ASIC and FPGA devices. The consequence of CDC is a metastability effect which leads to either functional non-determinism (unpredictability of downstream data, which can also lead to data loss) or data incoherency (when CDC-induced latency delays a subset of the bus signals being sent across, causing a non-uniform capture event).

Metastability and Synchronizer — As illustrated in Figure 1, metastability may be present in any design utilizing flip-flops. Any flip-flop can be driven into such a state by concurrent toggling of its input data and sampling clock (in the diagram, the concurrent switching window of the underlying gates introduces leakage current). The known approach to neutralize the effect of metastability is the use of a synchronizer. A synchronizer can be defined as a logical entity that samples an asynchronous signal and outputs a derivative signal synchronized to a local sampling clock. It is usually not synthesized; instead it is pre-instantiated in the design or presented as a macro. A good synchronizer should be reliable and have low latency, power and area impact. The simplest implementation uses two back-to-back flip-flops. The first flip-flop samples the asynchronous input signal into the new clock domain and waits for a full clock cycle to permit any metastability to settle down. The output signal of the first stage is sampled by the same clock into a second-stage flip-flop to produce a stable and synchronized output.
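As a rough illustration of the two-flop scheme, here is a simplified cycle-based model in Python (not an FPGA implementation; modeling a metastable resolution as a random 0/1 outcome is an assumption made purely for illustration). The key point it shows is that downstream logic only ever sees the second stage, which has had a full cycle to settle.

```python
import random

# Simplified cycle-based model of a two-flop synchronizer.
# If the async input changes too close to the clock edge, stage 1 may go
# "metastable" (modeled here as a random resolution); stage 2 re-samples it
# one receiver-clock cycle later, so downstream logic only ever sees 0 or 1.

class TwoFlopSync:
    def __init__(self):
        self.stage1 = 0
        self.stage2 = 0

    def clock(self, async_in: int, violates_setup_hold: bool) -> int:
        # Both flops sample on the same edge: stage 2 takes the settled stage-1 value.
        self.stage2 = self.stage1
        if violates_setup_hold:
            # Metastability: stage 1 resolves unpredictably to 0 or 1 ...
            self.stage1 = random.choice([0, 1])
        else:
            self.stage1 = async_in
        # ... but stage 2, and everything after it, always sees a clean level.
        return self.stage2

if __name__ == "__main__":
    sync = TwoFlopSync()
    for cycle in range(8):
        out = sync.clock(async_in=1, violates_setup_hold=(cycle == 2))
        print(f"cycle {cycle}: synchronized output = {out}")
```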

Data Synchronizers — Two basic methods are available for transferring data signals across clock domain boundaries. The first is based on enable-controlled data capture in the receiving domain, while the second is based on sequential writing and reading of data using a dual-port FIFO.

Control-Based Data Synchronizers – In this type of synchronizer, the enable signal informs the receiving domain that data is stable and ready to be captured. The transmitter is responsible for keeping the data stable while the data enable is asserted. The stability of all data bits during received-data capture guarantees an absence of the metastability effect and correct data capture. Figure 2 shows variations of control-based data synchronizers:
– Mux-based data synchronizer
– Enable-based data synchronizer
– Handshake-based data synchronizer

To achieve safe data capture, the control-based data synchronizer should keep the sender data stable not only during the period of enable-signal assertion, but also with sufficient data setup/hold margin. This is key to preventing glitches during data capture, and it is what the handshake-based data synchronizer employs, as sketched below.
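The sketch below is a simplified software model in Python of a four-phase req/ack handshake transfer, just to show the protocol ordering; the class and signal names are generic, and in real hardware both req and ack would each pass through their own two-flop synchronizer in the receiving domain.

```python
# Toy model of a four-phase (req/ack) handshake transfer across clock domains.

class Sender:
    def __init__(self):
        self.req = 0
        self.data = None

    def send(self, value):
        self.data = value      # data must be held stable while req is asserted
        self.req = 1

    def on_ack(self, ack):
        if ack:
            self.req = 0       # drop req only after ack is seen

class Receiver:
    def __init__(self):
        self.ack = 0
        self.captured = None

    def sample(self, req, data):
        if req and not self.ack:
            self.captured = data   # data is guaranteed stable here
            self.ack = 1
        elif not req:
            self.ack = 0           # complete the four-phase handshake

if __name__ == "__main__":
    tx, rx = Sender(), Receiver()
    tx.send(0xCAFE)
    for _ in range(4):             # alternate "clock" activity of the two domains
        rx.sample(tx.req, tx.data)
        tx.on_ack(rx.ack)
    print(hex(rx.captured))        # -> 0xcafe
```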

FIFO-based data synchronizer – A control-based data synchronizer has limited bandwidth, while a FIFO-based synchronizer can increase bandwidth across the interface and still maintain reliable communication. It also allows fast data communication across clock domain boundaries. Data is pushed into the FIFO with the transmitter clock and pulled out of the FIFO with the receiver clock. The FIFO_FULL control signal manages the transmitter’s write rate, while FIFO_EMPTY controls the receiver’s read rate.

Reset synchronizer — Reset signals must be synchronized at the de-assertion stage to prevent registers from going metastable with corrupted values. Either both reset signal edges can be synchronized (full synchronization) or only one (partial synchronization). Sequential elements with an asynchronous reset may receive either a fully or partially synchronized reset, but a full reset synchronizer should be targeted for sequential elements with a synchronous reset.

Synchronizer in FPGA design
In FPGA design, several safety guidelines should be observed when implementing synchronizers (for a more complete discussion, please refer to Aldec’s 17-page white paper here):

– Avoid the use of a half-cycle synchronizer, which usually relies on an inverted clock edge for the second-stage flop, as it adds extra resources and complexity to the clock implementation.

– The flip-flops (referred to as NDFF, signifying 2 or more flops) should be built from FPGA flip-flop resources only and should be preserved and marked dont-touch during synthesis, with no boundary retiming. It is preferred to use metastability-hardened macros for CDC. No shift registers or BRAMs are allowed, as they may induce glitches. The NDFF flops should be placed in the same slice to minimize inter-flop propagation delay, reducing potential metastability effects.

– In timing-critical, high-speed FPGA designs, avoiding combinational logic at either control or data CDC is key. For this reason mux-based data synchronizers should be avoided. Combinational logic should not be injected between synchronizer stages or at the CDC.

– Ensure there is no reconvergence in the receiving domain, even after one or more register stages. Also use synchronizers that match your data transfer speed needs. For IP developers, it is better to contain the CDC transition within the IP design, avoiding uncontrolled data latency from outside the block.

– FPGA vendor (Xilinx, Intel) flows use a reserved attribute to indicate the NDFF flip-flop structure. This attribute prompts the underlying tools in the flow to react accordingly: it triggers the synthesis tool to apply “dont_touch” on the synchronizer flops and instructs the placement tool to place these flops in close proximity, preferably in one slice (although not all synchronizers are necessarily implemented in slices). Apply key SDC constraints such as set_max_delay, instead of set_false_path, to the CDC-related timing paths at the interface. There are also variations in how the downstream tools respond to the attribute, such as different handling of X-state generation depending on which vendor solution is used. It is also necessary for timing analysis not to consider the path from the upstream driver flop to this NDFF structure, by setting the proper constraint.

For non-timing-critical FPGA designs, use BRAMs instead of a flip-flop array to drive the receiving flip-flops in the other clock domain directly. To avoid glitches during data transfer, the BRAM output should remain stable during enable-signal assertion (with sufficient margin for setup and hold).

Built-in FIFO generators such as the LogiCORE IP FIFO Generator from Xilinx can be used to implement safe FIFO-based data synchronizers for FPGAs. The generated FIFO should be configured with independent clocks for read and write operations. For custom-built FIFOs, it is important to check that the read and write pointers crossing clock domains are properly encoded, with only one bit changing at a time (i.e., Gray code), and validated by assertion.
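For example, the standard binary-to-Gray conversion used for such pointers guarantees exactly one bit changes between consecutive counter values. The quick standalone check below in Python is illustrative only and not tied to any particular FIFO generator.

```python
# Binary-to-Gray conversion as used for asynchronous FIFO read/write pointers,
# plus a check that consecutive code words differ in exactly one bit.

def bin_to_gray(b: int) -> int:
    return b ^ (b >> 1)

def gray_to_bin(g: int) -> int:
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

if __name__ == "__main__":
    depth = 16  # pointer wraps modulo a power-of-two FIFO depth
    for i in range(depth):
        g_now  = bin_to_gray(i)
        g_next = bin_to_gray((i + 1) % depth)
        changed_bits = bin(g_now ^ g_next).count("1")
        assert changed_bits == 1, "Gray pointers must change one bit at a time"
        assert gray_to_bin(g_now) == i
    print("All consecutive Gray-coded pointer values differ by exactly one bit.")
```

Because only one bit toggles per increment, a pointer sampled in the other clock domain is either the old value or the new value, never a corrupted mix, which is exactly what the assertion-based validation should confirm.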

CDC Sign-off — Achieving CDC sign-off in today’s FPGA designs is as crucial as functional correctness and timing closure. The dominant existing CDC verification methods and tools, designed for the ASIC flow, need to be retargeted to be efficient in the context of FPGAs. The ALDEC_CDC rule plug-in turns ALINT-PRO into a full-scale CDC and RDC verification solution capable of complex clock and reset domain crossing analysis and handling of metastability issues in multi-clock and multi-reset designs. The verification strategy in ALINT-PRO comprises static structural verification, design constraints setup, and dynamic functional verification. The first two steps are executed in ALINT-PRO, while dynamic checks are implemented via integration with simulators (Riviera-PRO™, Active-HDL™ and ModelSim® are supported) based on an automatically generated testbench. This approach reveals potential metastability problems during RTL simulation which would otherwise require lab tests to detect. Debugging CDC and RDC issues is achieved via rich schematic and cross-probing mechanisms, as well as comprehensive reports and a TCL-based API, which allows browsing through synthesis results, clock and reset structures, detected clock and reset domain crossings, and identified synchronizers.

For more info on Aldec’s static design verification tool ALINT-PRO, please refer to this link or download the white paper Clock Domain Crossings in the FPGA World.


What Car Will You Drive Tomorrow?
by Roger C. Lanctot on 03-11-2018 at 7:00 am

Today more than ever where you live may well determine what kind of car you drive. Federal governments and, lately, cities are stepping forward to determine what kinds of cars are available to consumers and how they will be built.

The latest such initiatives are efforts by the Trump Administration in the U.S. to explore lowering vehicle emissions standards while a German court decision has given German cities the right to ban diesel-powered cars.

These developments are part of the backdrop to the 13th edition of the Future Networked Car Symposium convening at the Geneva Motor Show in the Palexpo convention center this Thursday, March 8. It is fitting that the event is hosted in Geneva by the International Telecommunication Union (ITU) and the United Nations Economic Commission for Europe (UNECE), both of which have offices nearby and which are involved in standards setting and transportation regulation, respectively.

In a world of increasingly connected cars and transportation generally the rules are being rewritten every day regarding precisely what kind of cars will be available in the future. Regulators and government authorities are stepping in to steer auto makers toward making safer and cleaner connected, electrified and autonomous cars.

The Future Networked Car Symposium brings together regulators, standards-setting organizations, car makers and the broad supplier eco-system to discuss and debate the future of connected cars. Much is at stake including cybersecurity, privacy, data ownership and autonomous operation along with safety, efficiency and clean operation. This year’s presentations and discussions promise to be especially interesting in the context of recent technical and regulatory developments.

Some observers might be annoyed by all the regulatory attention focused on cars. U.S. President Donald Trump has made regulations his bete noire and has demonstrated his determination to remove any and all regulations. (Multiple auto industry suppliers have pushed back against lowering emissions and fuel efficiency standards.) Certainly car makers themselves have a long history of complaining about regulatory oversight of virtually all aspects of vehicle design.

Auto industry resistance suggests the industry doesn’t recognize good guidance when it gets it. There are good reasons for regulatory oversight. If car companies had been left to their own devices, we’d still have metal dashboards and soaring highway fatality levels throughout the world. It was government regulation that forced the adoption of safety measures from seatbelts to airbags.

Regulators have more recently turned their attention to the safety of pedestrians even as governments around the world continue to come to grips with deadly vehicle emissions. The latest efforts in Germany to limit the use of diesel vehicles in stifling cities such as Stuttgart are an ominous sign of more severe measures to come if automakers fail to respond.

Congestion charging in cities such as London and Stockholm, now being contemplated by New York City (again), is yet another example of local efforts to restrict the use of cars as vehicular traffic threatens to overwhelm the transportation infrastructure. Actually, that may be the wrong tense – it appears that vehicular traffic has already overwhelmed the ability of the network to support it.

If there is a single trend that is likely to speed the development and adoption of connected, autonomous and shared transportation resources it is the actions of regulators and Federal and local governments. The U.S. is facing runaway demand for SUVs and other large vehicles in the context of congested roadways and rising highway fatalities. The congestion and fatalities – to say nothing of the emissions – represent a vested interest in intervention for local politicians who must cope with the consequences of inaction.

I am no fan of government intrusion, but it is clear that inaction is not an option. Car makers now more than ever need guidance and legislative support for their efforts to adapt their designs to the transportation network of the future.

The Future Networked Car Symposium 2018 at the Geneva Motor Show will be the perfect platform to conduct that debate from all angles. There is something of an irony that FNC 2018 is taking place in Geneva where hotel visitors are provided with free access to the local transit system in order to discourage them from bringing their personal transportation to the city. The Salon de l’Auto Geneve, itself, is notorious for highlighting fuel guzzling, emission spewing muscle cars for auto enthusiasts uninterested in self-driving technology. The two events represent an amusing juxtaposition.

https://tinyurl.com/ycavfcsb – The 88th International Geneva Motor Show

https://tinyurl.com/y77dumgo – The Future Networked Car 2018

https://tinyurl.com/yd6htggm – This Geneva Motor Show Auto Makers Show Brand New Sides – Bloomberg

https://tinyurl.com/y89uog7p – Parts Suppliers Call for Cleaner Cars, Splitting with Their Main Customers: Automakers – NYTimes


An OSAT Reference Flow for Complex System-in-Package Design
by Tom Dillinger on 03-09-2018 at 12:00 pm

With each new silicon process node, the complexity of SoC design rules and physical verification requirements increases significantly. The foundry and an EDA vendor collaborate to provide a “reference flow” – a set of EDA tools and process design kit (PDK) data that have been qualified for the new node. SoC design methodology teams leverage these tool recommendations, when preparing their project plan, confident that the tool and PDK data will work together seamlessly.

The complexity of current package design is increasing dramatically, as well. The heterogeneous integration of multiple die as part of a “System-in-Package” (SiP) module design introduces new challenges to traditional package design methodologies. This has motivated both outsourced assembly and test (OSAT) providers and EDA companies to address how to best enable designers to adopt these package technologies. I was excited to see an announcement from Cadence and Advanced Semiconductor Engineering, or ASE, for the availability of a reference flow and design kit for SiP designs.

I recently had the opportunity to chat with John Park, Product Management Director, IC Packaging and Cross-Platform Solutions, at Cadence, about this announcement and the collaboration with ASE.

In preparation for our discussion, I tried to study up on some of the recent technical advances at ASE.

ASE SiP (and FOCoS) Technology

There is a growing market for advanced SiP offerings, spanning the mobile/consumer markets to very high-end compute applications. The corresponding packaging technology requirements share these characteristics:

  • integration of multiple, heterogeneous die (and passives) in complex 2.5D and 3D configurations
  • very high chip I/O count and package pin count
  • high-density and high-performance signal interconnections between die
  • compatibility with high volume manufacturing throughput
  • compatibility with thermal management packaging options for high-performance applications (e.g., attachment of thermal interface material (TIM) and a heat sink)

Traditionally, multi-chip modules have used sputtered thin film metallization on ceramic substrates or traces on laminate substrates for signal interconnects – e.g., 10-25um L/S traces are achievable. These SiP packages can be extremely complex, as illustrated below for a smart watch assembly.


Figure 1. SiP for smart watch – top view and cross-section. (From: Dick James, Chipworks, “Apple Watch and ASE Start New Era in SiP”.)

This package incorporates a laminate substrate with underfill, molding encapsulation, and EMI shielding, necessitating intricate Design for Assembly (DFA) rules.

Other SiP applications require high interconnect density between die and high SiP pin counts, as mentioned above – these requirements have necessitated a transition to the use of lithography and metal/dielectric deposition and patterning based on wafer level technology – e.g., < 2-3um L/S redistribution layers (RDL). The volume manufacturing (i.e., cost) requirement has driven development of a wafer-based, bump-attach technology for SiP.

The general class of these newer packages is denoted as fan-out wafer-level processing (FOWLP). ASE has developed a unique offering for high-performance SiP designs – Fan-Out Chip-on-Substrate (FOCoS).

Figure 2. Cross-section and assembly flow for ASE’s advanced SiP, FOCoS. (From: Lin, et al., “Advanced System in Package with Fan-out Chip on Substrate”, Int’l. Conference on Microsystems, Packaging, Assembly and Circuits Technology, 2015.)

The multiple die in the SiP are mounted face-down on an adhesive carrier, and presented to a unique molding process. The molding compound fills the volume between the dice – a replacement 300mm “wafer” of die and compound results, after the carrier is removed. RDL connectivity layers are patterned, underbump metal (UBM) is added, and solder balls are deposited. The multi-die configuration is then flip-chip bonded to a carrier, followed by underfill and TIM plus heat sink attach.

SiP-intelligent design

With that background, John provided additional insight on the Cadence-ASE collaboration.

“SiP technology leverages IC-based processing for RDL fabrication. Existing package design and verification tools needed to be supplanted. Cadence recently enhanced SiP Layout to provide a 2.5D/3D constraint-driven and rules-driven layout platform. Batch routing support for the signal density of advanced heterogeneous die integration is required,” John highlighted.

“To accelerate the learning curve for the transition to SiP design, Cadence and ASE collaborated on the SiP-id capability – System-in-Package-intelligent-design.”

The figure below illustrates the combination of design kit data, tools, and reference flow information encompassed by this partnership.

Figure 3. SiP-id overview. ASE-provided design kit data highlighted in red.

ASE provided the Design for Assembly (DFA) and DRC rules data, for Cadence SiP Layout and Cadence Physical Verification System (PVS).

Further, there are a couple of key characteristics of SiP-id that are truly focused on design enablement.

  • The DFA and DRC rules are used by SiP Layout for real time, interactive design checking (in 2D and 3D).
  • ASE provides environment setup and workflow support to SiP designers, for managing the data interfaces to ASE, as illustrated below.

and, very significantly,

  • As a result, this is a manufacturing sign-off based flow.

The figures below illustrate the SiP-id customer interface with ASE.

Figure 4. Customer interface with SiP-id.

SiP technology will continue to offer unique PPA (and cost) optimization opportunities, especially for designs integrating heterogeneous die. The collaboration between ASE and Cadence to provide assembly and verification design kit data and release-to-manufacturing reference flows is a critical enablement. ASE is clearly committed to assisting designers in pursuing the challenges of SiP integration – perhaps their SiP-id web site says it best:

“It is our intention to offer all ASE customers a set of efficient tools where designers can freely experiment with designs which can go beyond the current packaging limits… This is an ongoing effort by ASE, not only to develop fanout (such as Fan-Out Chip on Substrate, FOCoS), panel fanout, embedded substrates, 2.5D, but also to making design tools more user friendly, up-to-date and efficient.”

This is indeed an exciting time for the packaging technology industry.

For more information on Cadence SiP Layout, please follow this link. For more information on the SiP-id reference flow and customer interface to ASE, please follow this link.

-chipguy


Don’t Stand Between The Anonymous Bug and Tape-Out (Part 1 of 2)
by Alex Tan on 03-09-2018 at 7:00 am

In the EDA space, nothing seems to be more fragmented in terms of solutions than the Design Verification (DV) ecosystem. This was my impression from attending the four panel sessions plus numerous paper presentations given during DVCon 2018, held in San Jose. Both key management and technical leads from the DV user community (Intel, AMD, Samsung, Qualcomm, ARM, Cavium, HPE, and NVIDIA) as well as the EDA vendors (the triumvirate of Synopsys, Cadence and Mentor, plus Breker, Oski and Axiomise) were present on the panels.

There was some consensus captured during the panels, revolving around these four main questions:

What are the right tools for toughest verification tasks?
Is system coverage a big data problem?
Should formal go deep or broad?
What will fuel verification productivity: data analytics, ML?

Reviewing more of the discussion details, it is obvious that a few factors have constrained the pace of adoption of new solutions and of a potentially more integrated approach.

An array of verification methods spanning emulation, simulation, formal verification and FPGA prototyping is used to cover the verification space. The first panel covered users’ approaches to the new developments on the verification front.

Market dictates execution mode – The products users support serve markets that inherently require frequent product refreshes, which shorten development and thus verification schedules. Companies are in turn focused on delivering product fast, with no time to explore. As a result, some currently just keep pushing simulation and emulation instead of spending time exploring modeling, trying to manage the use of resources optimally.

Software injects complexity – In addition to growth in system size, programmable components such as security and encryption engines have also contributed to the added complexity. A question was raised on how to isolate non-determinism and debug if something has gone wrong. A tool is needed to verify S/W that bridges into the behavioral hardware space, along with a spectrum of tools covering full-system → system → block-level. Is S/W causing problems that we can’t verify? Simulation alone can’t be ramped up to cover this. For example, if a bug is found at a 64-bit counter, how do you catch it at the top level? A H/W-based approach is then needed. Software verification is difficult with standard tools, so emulation is needed. An example test such as running YouTube on Windows introduces system complexity.

Emulation and hybrid simulation – More software on board is causing increased emulation usage. In hybrid simulation, S/W is a big unknown, while so much can be done before shipping the product. Emulation has the technology to scale up, and hybrid simulation models are done before the SoC is constructed. Emulation is growing, but the problem space is also growing.

Simulation vs HW-Assisted Effort Ratio — In the past it used to be 80% simulation and 20% emulation; today, should it be considered 80% H/W-assisted vs 20% simulation? According to the panelists, simulation need has kept pace with IP growth, so it is not necessarily an 80/20 scenario.

FPGA vs Hybrid — Hybrid helps, but FPGA may be needed, for example to cover corner cases. Some argued there is actually no difference between an emulator and an FPGA: how much time is needed for the S/W model to be used seamlessly with the emulator or FPGA is key. In a hybrid environment with a lot of data and transactions (such as a graphics IP with lots of transactors), FPGA can’t address those and a hybrid emulator would be more suitable. Others still believe that FPGA and emulator share similar challenges: the emulator is faster but the design is more complex and bigger, so in the end they yield about the same speed (although FPGA could be faster). What about the size (scalability) of FPGA to prototype or emulate a product? How do you tackle the size issue on a design of more than 10 billion gates? Do targeted testing on a subset of instances. Can we mix and match, getting value now rather than waiting until the last minute?

Shift-left and Cultural Divide — Does the shift-left effort work? The answers were mostly a resounding yes, albeit a few with caveats. Yes, IP development at ARM involved software development before roll-out, anticipating usage although not doing system design. Shift-left has been both painful and effective. H/W emulation models are also used; the cost of using models and making them work all across involves hidden costs, but migration to shift-left can be made easier. Shift-left has been successful but with challenges (2 hours vs 2 weeks). We may need teams that oversee both sides. S/W folks have faster expectations than verification (which may take longer). How do you use the same stimulus to run simulation faster? Test intent is needed and may run faster if applied in the emulation realm.

Questions from the audience:
How to address the A/D interface? Panelists stated that clean boundaries (system–subsystem–IP) are key to partitioning the system into something more manageable. The use of a virtual interface (interface layer) could accommodate the need for A/D (e.g. Matlab, C), but the analog block usually has picky requirements.

When will we have a point tool to address this, versus being spread thin across different ones?
— The panel responded that the integration issue is always there (a handshake problem); it will shift problems somewhere else, hence not replacing jobs, which is good news. Vendors pointed out doing shift-left early on and possibly accelerating testbench analysis with ML.

S/W-friendly implementation need — Hybrid simulation may address S/W-centric needs in H/W design. The trend is toward more software focus: it used to be H/W first and then software (at ramp/kickoff); now it is S/W first, then H/W.

[To be continued in part 2 of 2.]

 


Is there anything in VLSI layout other than “pushing polygons”? (7)
by Dan Clein on 03-08-2018 at 12:00 pm

The time is 1995 and my mandate as Layout Manager is to grow my team. I advertised everywhere, but there were no experienced people in Canada that I could hire, so the solution was back to training. I had been the trainer a few times in Israel at MSIL, but there we had very organised material for layout, UNIX, software, etc. We had exercises, tests, some senior people as teaching assistants, a flow. I knew what was needed, so I started developing everything. From aptitude tests and teaching materials to schematics for cells and blocks of progressive complexity, everything had to be invented and generated from scratch. We had a layout team of 5 people and needed to double that, so everybody joined in to help. We did it, and all the students are still successful in layout 20+ years later. If you want to know more about this, read our next book revision coming out before the end of 2018. After so many training classes I was really tired of repeating myself and wanted a better solution. The idea that a book might help started to grow in my mind. I started to talk to layout schools in the US, IBT, Gered, etc., and received their curricula. Some enthusiastic instructors like Dan Asuncion just shared their training class materials with me. I inquired with my former team at MSIL, and Zehira Dadon-Sitbon, the layout manager at that time, helped me reinvent the aptitude test. I got a lot of materials from all over the world, but the table of contents for “the book” was still far from the comprehensive level I wanted. Many questions needed answers and nobody around could help. I did not know at that time what was needed to write a book.

To put a little gas on the fire, when I asked IEEE if they were interested in publishing a layout book, they told me that if such a book does not exist, it means it’s not needed!!! How wrong they proved to be… Check the attached pictures of a layout book translated into Chinese and Korean.

I was determined to move forward but I needed help, and luckily it came from a work colleague. Gregg Shimokura, a design engineer who decided that MOSAID needed a CAD group so he built one, volunteered to help me. The starting point was our internal training course, but we did not know if it was good enough for a book. Opportunity came to us as MOSAID was interested in increasing the number of engineers with memory expertise in Ottawa. They invited Carleton University professors to an internal DRAM course that was meant to be the base for a memory course at the university. This was my occasion to talk to Tad Kwasniewski (who passed away in February 2018). I wanted to know his thoughts: could such a book be of interest for his university curriculum? After some research he came back with a solution: I would prepare an introductory VLSI course for master’s students and teach at Carleton, and this way I could test my material’s viability live. Based on Tad’s guidance, in 1996 I worked with Martin Snelgrove on the fall sessions of the VLSI Design course. He was teaching circuits and I was teaching layout. The course was so successful that they invited me back, but Martin wanted to move to Toronto, so I needed a front-end person to teach design. Like a real partner, Gregg jumped on this opportunity and together we taught the VLSI Design 97.584 course at Carleton in fall 1997. We worked full time at MOSAID (!) and we worked nights and weekends to prepare materials and print them on overhead transparency film before each class (we invented Just In Time). Twice a week we were in class in front of 44 students for 3 months. At the end of the term we had to invent an exam, not multiple-choice but with real solutions, in 2 versions, including design and layout. We were lucky we had 2 good TAs. One of them, Rolando Ramirez Ortiz, worked with me at PMC-Sierra later.

Using the lessons learned and the course materials, plus a few other schools’ training materials, we started to write the book.

In 1998 Gregg Shimokura and I finished writing our book, CMOS IC Layout: Concepts, Methodologies and Tools. We sent the manuscript to the editors and worked with them on all the implementation details. With more than 150 VISIO graphics and about 200 pages of text, this was a gargantuan effort. We worked about 2,000 hours on this – that is 6 months of extra work for each of us, on top of our daily jobs! But the book came out in December 1999 and we became famous!

How is this for a NON “polygon pushing” assignment?

The last important “non-layout” activity at MOSAID was a training course for Mentor Graphics AEs and internal software developers. MOSAID decided at that time that it was time to extend from design services for memories into products with memory inside, meaning we wanted to go into ASICs. Suddenly we needed a lot of people trained in all digital design activities. Mentor had a good set of training courses and we decided to use them, but there was no budget for it. Gregg Shimokura and I had just finished the manuscript for the first edition of our book. Thinking “outside the box”, Roger Colbeck, our VP, and Dan Chapman, Mentor Graphics account manager, came up with a proposal: knowing that we had finished our book manuscript, maybe Gregg and I could transform our book into a 5-day training course and exchange this for a digital training course for our engineers. We worked hard for a few weeks (again) to create the training classes as PowerPoint slides, print the materials and put them in booklets. Then I organized the trip and classes with Janet (Scheckla) Petersen, marketing manager at Mentor at that time, and I travelled to Wilsonville. It was a tremendous experience to learn from the participants what the challenges of internal EDA teams are. It’s difficult to understand what a USER wants/needs from a document written by a technical marketing person who has never met a customer. Most of them had never done layout or circuit design, so it was an eye opener on both sides. It was very useful for my growth to learn from AEs what their challenges are when working with customers. I was on the other side of the wall! Afterwards, when a tool did not do what was expected, I was able to “imagine” the reasons why there was a difference between the manual and the tool’s performance and adjust my expectations. I became able to relate to developers and help them modify the tools to make them more user friendly. I gained a lot of friends in the EDA industry and found out that I like training again.


More to come while I worked in the next company…

Dan

Also read: Is there anything in VLSI layout other than “pushing polygons”? 1-6


An Advanced-User View of Applied Formal
by Bernard Murphy on 03-08-2018 at 7:00 am

Thanks to my growing involvement in formal (at least in writing about it), I was happy to accept an invite to this year’s Oski DVCon dinner / Formal Leadership Summit. In addition to Oski folks and Brian Bailey (an esteemed colleague at another blog site, to steal a Frank Schirrmeister line), a lively group of formal users attended from companies as diverse as Cisco, NVIDIA, Intel, AMD, Teradyne and Fungible (apologies to any I missed). I find what customers are really doing is an enlightening counterbalance to product company viewpoints, so kudos to Oski for inviting media representatives to the meeting.

Register here for Oski’s upcoming Decoding Formal session Tuesday March 20th 11:30am-5pm Pacific

Kamal Sekhon of Oski kicked off with a joint project they drove with Fungible. Fungible is an interesting company. Barely a year old and headed by a founder of Juniper Networks, it is a startup focused on compute / storage / networking functions for datacenters (details are hazy). They are building their own silicon, which is where formal comes in.

The presentation nicely prompted debate on a number of topics, one being formal-only signoff. Some of what was discussed here inevitably circled around what signoff means. Even for dynamic verification, signoff at RTL can be a qualified signoff these days since many product teams now require gate-level timing signoff for multiple reasons (to cover power-related transformations at implementation for example). I think everyone around the table also agreed that they still include formally-proven assertions in their RTL simulation signoffs – just in case.

This naturally led into a discussion on coverage. Within the formal domain, I was interested to hear that still not too many people know about formal core coverage. This is where you look at what part of the cone-of-influence (COI) was needed in proving an assertion – generally a smaller space than the full COI. Point being that if you think your assertions are ideal for (formal) signoff but the formal core coverage is low, you probably need to expand the set of assertions you are testing. Mutation was also mentioned and generated interest as a way to prompt further assertion tuning (or constraint de-tuning) to solidify coverage. (Mutation is a technique in which the tool introduces bugs in the RTL to test whether they are detected in proving. If such a bug escapes detection, that’s further evidence that your assertions and constraints need work.)
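The mutation principle can be illustrated outside any formal tool. The toy Python analogy below is not how RTL mutation analysis is actually implemented (real tools mutate the RTL and re-run the proofs); it only shows the underlying idea: inject a deliberate bug and confirm that at least one checker fails, otherwise the checker set is too weak.

```python
# Toy analogy of mutation analysis: if no checker fails on a mutated "design",
# the checkers (assertions) are not strong enough.

def design(a: int, b: int) -> int:          # stand-in for the logic under test
    return a + b

def mutated_design(a: int, b: int) -> int:  # injected bug: '+' became '-'
    return a - b

checkers = [
    lambda f: f(0, 0) == 0,                 # weak checker: passes for both versions
    lambda f: f(2, 3) == 5,                 # stronger checker: catches the mutation
]

def survives(mutant) -> bool:
    """A mutant 'survives' if every checker still passes."""
    return all(chk(mutant) for chk in checkers)

if __name__ == "__main__":
    assert survives(design)                 # the original design passes everything
    assert not survives(mutated_design), "mutant survived: strengthen the checkers"
    print("Mutation detected: checker set is doing its job.")
```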

There was a sidebar on how to combine formal and dynamic coverage in signoff. One of the group talked about their work using Cadence vManager. I get to talk to multiple vendors, so I’ll have some insight on this topic from Cadence folks in upcoming blogs. Meantime some confusion remains about merging versus combining formal and dynamic coverage. The state of the art still seems to be combining (showing side by side) rather than somehow folding coverage metrics together. Another interesting view at the dinner, related to coverage, is that dynamic coverage is (above some level) in a sense a kind of black-box testing whereas formal coverage is, at least as practiced by many, a form of white-box testing, and perhaps (my guess) this is why they have to be combined rather than merged? This question would make a good topic for another blog perhaps (hint to the formal tool suppliers)?

Another interesting area of debate, always popular with Oski, was around architectural formal verification. Quick recap – you have a verification problem at the system-level (not contained inside a component), there are no obvious long-pole candidates to abstract so instead you abstract everything, replacing components with smaller FSM models to enable verifying control interactions. A typical candidate is cache-coherency verification.

Some of the group are already doing (or have done) this, others had concerns about the validity of the method. There seems to be some sort of quantum barrier to acceptance in this area, not just at the dinner but in general. The folks who get over (or rather through :)) the barrier do it because they have no choice; the problem they face has no reasonable coverage solution in simulation. An interesting perspective for me is that this kind of architectural formal is nowhere near as complex as some of the theorem-proving techniques already used (Murphi, Spin, …) in validating floating-point units for example; so the issue can’t be “too complex to trust”. Maybe it’s just a question of familiarity (Kamal hinted at this).

One last topic on PSS and formal, on which I won’t spend a lot of time since Brian was very passionate at the dinner and I’m sure will expound at length on this topic in his blog. Brian feels that PSS and formal should naturally be very close (at least in the graph-based view of PSS). It wasn’t clear that many (or any) of the other diners had a view on this. In fairness some of them didn’t know what PSS was, which in itself may tell us something. One person said that it wasn’t clear to him that dynamic and formal needs always overlap. A deadlock check is one example – it makes sense in formal, not so much in simulation. As for me, I’m uncertain. From a perspective of graphs and pruning and so on, it does seem like there should be commonality. But I wonder if the commonality is more like that found in Berkeley ABC – common underlying data structures and engines to serve both formal verification and synthesis – rather than a commonality in application-level usage, particularly with other disjoint engines for verification. Just a thought.

Oski always provides entertaining and thought-provoking insights and debates around the formal verification domain. I strongly recommend you attend their next Decoding Formal session (Tuesday March 20th 11:30am-5pm Pacific), in which you’ll get to hear:

  • A talk from an expert on Meltdown/Spectre, including a discussion on how formal helped find these problems and could have helped prevent them,
  • Experiences in building formal specialists and a formal methodology in Cisco,
  • Applying formal in support of ISO 26262 needs

EDA and Semiconductor — Is There Growth In The Ecosystem?
by Alex Tan on 03-07-2018 at 12:00 pm


The semiconductor industry has gone through several major transitions driven by different dynamics, such as shifts in business models (fab-centric to fabless), product segmentation (system design houses, IP developers) and end-market applications (PC to cloud and, recently, to both automotive and the Internet of Things – IoT, or the Internet of Everything – IoE).

According to the management consulting firm McKinsey, seven out of twelve technology disruptors (from mobile internet to 3-D printing) would collectively leave an economic imprint of $16 trillion on the lower end, and up to $37 trillion, by 2025 (Figure 1). We have seen many assessments from various industry observers of how digital technology has injected momentum and is often found at the cross-paths of the current change spaces. One such confirmation was given by Silicon Valley luminary Jim Hogan, who categorized our current shift as a Cognitive Era, or Industrial Revolution 4.0, as illustrated in Figure 2. This 4th revolution – after agriculture (or, if one argues that a pre-agriculture/hunting era should be the starting point, that was not counted here), mechanical, electrical and electronics – is being driven by digital technology.

What about the impact on the semiconductor industry, and on Electronic Design Automation (EDA) in particular? To measure the monetization scale of this digital economy, we can assess the recent revenues generated by its key players. Based on data gathered by IC Insights, the estimated overall IC sales footprint is reported to top $100 billion, with the top ten companies accounting for almost 75% of the total figure (refer to Figure 3).

To gain better visibility into how the above numbers correlate to the electronics industry in general and the EDA space in particular, we can look at the market dichotomy captured by Synopsys, labeled as a global value chain (shown in Figure 4a). In the EDA & IP space, the total 2017 annual revenues amounted to $10.3B.

Diving one step deeper, we can see that the top four EDA companies (Synopsys, Cadence, Mentor, Ansys) contributed about 68% of the total estimated revenue space. In this comparison, Mentor’s revenue was estimated by taking the 2017 revenues of Siemens’ Digital Factory (the business segment Mentor is in), subtracting the contribution from this segment’s pre-acquisition revenue plus its corresponding constant 4-5% quarterly growth, then converting into US$ at the 12/31/17 conversion rate (Figure 4b). As a side note, Mentor’s estimated FY17 figure, given before the acquisition date, was $1.283B.

To gauge the amount of spending usually allocated to EDA/IP acquisition, we can look at the change in spending over the last two years (2017 vs 2016) by the semiconductor companies. Based on the IC Insights report, with the exception of two players in the top ten (i.e., Qualcomm and Toshiba), there was an increase of 6% in collective spending, i.e., $35.9B vs $34B in 2016.

Taking a top-down view of the global chain, during 2017 the global electronic systems market grew by 2%, the semiconductor segment by 15%, and EDA experienced an average 9% growth. From the EDA standpoint, based on their earnings reports, the anticipated 2018 growth of the big four is 7% on average (the range is 5 to 8%).

Recently, numerous blogs have captured the potential contribution of Artificial Intelligence (AI) and in particular Machine Learning (ML) across all segments dealing with data. In the early days of the Internet roll-out, when a web browser was a luxury, we were accustomed to pull technology to get data from centralized mainframes. Subsequently push technology became mainstream, coupled with more distributed computing resources.

Nowadays, data is increasingly streamed in both directions, potentially with constant monitoring plus some analytics, and is shaped over time. It is fair to anticipate that the projected impacts of AI and ML relate to these aspects:

  • Market Valuation – in contributing to the incremental growth of the semiconductor market size through enhancing the adoption pace of autonomous vehicles and medical therapeutics, the proliferation of IoT at the edge, as well as scaling up the cloud and super-computing facilities.
  • Design Methodology Upgrades – in driving increased needs for proper design and validation intent capture of multifaceted, data-centric applications; new ways to address more complex interface handling; and new metrics for design success criteria including those for ascertaining functional closure.

All of these are positive drivers of healthy demand in both the semiconductor and EDA spaces. Despite a single-digit growth projection for EDA this year, the market size is expanding, which offers a growth continuum.


Students Should Attend DAC in SFO
by Daniel Payne on 03-06-2018 at 12:00 pm

On LinkedIn I have some 2,116 connections and many of those are students looking to enter the field of EDA, IP or semiconductor design. What a wonderful opportunity these students have by attending the 55th annual DAC in San Francisco this summer from June 24-28. Technical sessions, keynote speeches, exhibitors, networking, poster presentations, and parties are some of the many activities that will show these students what our industry is all about so that they can soon contribute to its success.

Students have two opportunities when attending DAC: a fellow program and a scholarship.

The fellow program is named after Dr. A. Richard Newton, and each student selected gets in on seven events during DAC:

  • Participate in the DAC summer school on Sunday, June 24.
  • Kickoff breakfast meeting on Monday, June 25 sponsored by Cadence.
  • Selected conference sessions, including sessions with Best Paper Award nominations.
  • Poster presentation (either current research, or relevant coursework/projects) introducing each Fellow during the DAC student event.
  • Attendance at the awards ceremony.
  • Attendance at the closing session during the Thursday evening reception.
  • Social media postings (Twitter, Facebook, LinkedIn, etc.) to provide timely news, photos and feedback on events during DAC.

Many of us at SemiWiki will be attending and tweeting about #55DAC or blogging articles about what we discover, and these students in the fellow program will also be sharing what they learn on social media.

Seniors and Juniors at college will get some preference, and some 70 students have been part of the fellow program in past years. Submit your application by March 7th, and you’ll be notified by April 2.

On the scholarship side you can receive $4,000.00 per year as part of the P.O. Pistilli Scholarship Program, which aims to attract students from under-represented groups to electrical engineering, computer engineering and computer science degrees. Read the details about this scholarship online and submit your application by March 9.

If you are a student and want to meet me or any of the SemiWiki bloggers at DAC, just ask, we’d love to answer your questions at DAC or start the conversation earlier by connecting to me on LinkedIn.