Integrity, Reliability Shift Left with ICC
by Bernard Murphy on 06-26-2018 at 7:00 am

There is a nice serendipity in discovering that two companies I cover are working together. Good for them naturally, but it makes my job easier because I already have a good idea about the benefits of the partnership. Synopsys and ANSYS announced a collaboration at DAC 2017 for accelerating design optimization for HPC, mobile and automotive. In February 2018 they pulled the covers back a little, announcing a product launch integrating ICC II with RedHawk Analysis Fusion (see my earlier blog on Fusion). Now they’re pulling the covers back further, getting into more of the detail on what this Fusion integration provides.

Why is this important? Because it’s becoming more difficult to continue handling integrity and reliability as a (pre-)signoff step. The conventional approach is to agree on global margins for IR-drop, current surge and other power factors, to guide timing closure, electromigration (EM) robustness and so on. Then you design the power distribution network (PDN) to meet those objectives and sign off, most likely using RedHawk. That worked well for quite a while, but in advanced processes lower operating voltages reduce margin above threshold, increasing relative sensitivity to power noise. Meanwhile, increased dependence on power switching through reduced-width power rails and vias increases the risk of EM. Simply cranking the margins even higher to compensate becomes an untenable solution.
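
To make the margin squeeze concrete, here is a minimal back-of-the-envelope sketch (with purely illustrative supply and threshold values, not taken from the article or any specific process) of how a fixed percentage IR-drop budget eats into the voltage overdrive left for timing as supplies scale down:

```python
# Illustrative numbers only -- not from the article or any specific process.
# The point: the same percentage IR-drop budget leaves much less absolute
# headroom above the transistor threshold as supply voltages scale down.

def overdrive_after_drop(vdd, vth, drop_fraction):
    """Overdrive (Vdd - IR drop - Vth) left after a global IR-drop margin."""
    drop = vdd * drop_fraction
    return vdd - drop - vth

for vdd in (1.0, 0.8, 0.7):          # supply voltages (V), older to newer nodes
    vth = 0.35                        # assumed threshold voltage (V), held roughly flat
    od = overdrive_after_drop(vdd, vth, drop_fraction=0.10)  # 10% global margin
    print(f"Vdd={vdd:.2f} V -> overdrive left = {od*1000:.0f} mV")

# The overdrive shrinks from ~550 mV to ~280 mV, so a fixed global margin
# consumes a growing share of what is left for timing.
```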

If you can’t globally margin, what can you do? Obviously you don’t have to increase rail widths and add strapping everywhere; you can be more selective because (hopefully) not every part of the design will be subject to the worst stresses. RedHawk supports this kind of differential analysis across the design, but taking action on that guidance will clearly affect implementation. In older flows, you could do the RedHawk analysis outside of ICC and back-annotate to the implementation database, but that’s obviously cumbersome. More importantly, tuning IR-drop across the design versus other implementation objectives becomes a painful manual task or depends on complex in-house scripting. A better approach is to move this analysis into the implementation flow, where it can be used to guide local optimizations.

Kenneth Chang (Product Marketing Manager at Synopsys) tells me that this is more than simply displaying RedHawk results inside ICC II (though they do that too, through various heat maps). Synopsys provides newly developed features to take action on RedHawk feedback, adding straps where needed. In fact, they have reduced the task of doing this analysis to one command – analyze_rail – within IC Compiler II, so a designer simply runs this command to both analyze and optimize for power integrity.

Synopsys generally does a good job of validating customer needs before they build a solution, so I was interested to hear from their perspective what drove this integration. Kenneth told me the biggest problem for many of their customers was having to over-margin (sounds familiar). Being the good engineers that they are, customers have built their own solutions with scripting and loose tool integration, but they acknowledge these are not ideal and leave a lot of optimization opportunities untouched, particularly since they also have to worry about DRC-correctness and timing when scripting their fixes.

Hence the attractiveness of integrating RedHawk into IC Compiler II, complemented by automated fixes. RedHawk is dominant in power integrity/reliability analysis and clearly continuing to innovate. Integrating into physical design enables in-design analysis and optimization through these new functions which, being native, are designed to be DRC-aware and timing-aware.

Nice, but is this really essential? Kenneth told me of one design example he had seen recently where rail analysis at the end of design showed an unfixable problem (no reasonable late-stage ECO possible), which could have been fixed if it had been addressed earlier. That design had to be abandoned. If you’ve read any of my blogs on RedHawk, you know these challenges are increasing. More generally, when you consider that 30%+ of metal may be devoted to power in advanced geometry designs, this kind of solution is likely to become essential in delivering competitive products.


The integration makes sense; it simplifies the design task, it reduces the need to over-margin and it ensures correlation between in-design IR-drop optimization and final signoff. Kenneth mentioned that Synopsys also continues to work on other opportunities to leverage this integration, including more possibilities for IR-drop-driven optimizations. He tells me we should also stay tuned for updates on work they are doing together around thermal and EM analysis. You can learn more HERE, and you should check this out at the Synopsys booth at DAC, where they may have further updates.


7nm Networking Platform Delivers Data Center ASICs
by Daniel Nenni on 06-26-2018 at 7:00 am

We all know IP is critical for advanced ASIC design. Well-designed and carefully tested IP blocks and subsystems are the lifeblood of any advanced chip project. Those IP suppliers who can measure up to the need, especially at advanced process nodes, will do well, absolutely.

It is interesting to note that eSilicon now has a very large internal IP group that is both developing IP and qualifying external IP to ensure there are no design spins. Recently, eSilicon has taken the mandate for quality IP to a new level.

A few weeks ago they announced neuASIC™, a 7nm IP platform for AI/deep learning that was covered by SemiWiki. This platform aims to make it easier to track changing AI algorithms in silicon by offering a library of configurable subsystems that can be easily assembled in an “ASIC chassis” – a kind of pre-defined architecture. This kind of approach takes IP quality, compatibility and configurability to a new level.

eSilicon is expanding their IP platform strategy this week. This time it’s a 7nm IP platform targeted at networking and switching ASICs for the data center. In this market, algorithms don’t change that much since everyone is designing to a standard protocol. What is challenging for these designs is hitting ultra-high-performance demands at a commercially acceptable power and density. To get that done requires a lot of tuning and trade-offs and that’s where eSilicon’s networking platform comes in.

All the elements of the platform have configurability built in, making it easier to perform the balancing act required to hit the power, performance and area requirements for advanced networking applications. All IP in the platform is “plug and play,” using the same metal stack, reliability requirements, operating ranges, control interfaces and DFT methodology. That also helps with integration and configuration. eSilicon complements their extensive library of IP with third-party offerings for the more commoditized functions, such as PCI Express PHYs, controllers, PLLs and PVT monitors. What is interesting is that eSilicon claims all the third-party IP in the platform adheres to the same compatibility and integration rules.

So what’s in the networking platform? Here’s a summary of the key parts:

At the core of the platform is eSilicon’s SerDes technology – communicating between chips is critical for these applications and the SerDes block is what does that. eSilicon’s design is based on a novel DSP-based architecture. Two 7nm PHYs support 56G and 112G NRZ/PAM4 operation to provide the best power efficiency tradeoffs for server, fabric and line-card applications. The clocking architecture provides extreme flexibility to support multi-link and multi-rate operations per SerDes lane. A multitude of protocols are supported including Ethernet and Fibre Channel. The architecture allows scaling power consumption even further for shorter-reach channels. eSilicon claims a lot more capabilities and innovations for their SerDes technology. You can check out their website to find out more.
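
As a quick aside on the NRZ/PAM4 rates mentioned above, the sketch below shows the standard line-rate versus symbol-rate arithmetic; it is generic modulation math, not anything eSilicon-specific:

```python
# Back-of-the-envelope relationship between line rate, modulation and symbol
# (baud) rate for the NRZ/PAM4 rates mentioned above.

BITS_PER_SYMBOL = {"NRZ": 1, "PAM4": 2}

def baud_rate_gbd(line_rate_gbps, modulation):
    return line_rate_gbps / BITS_PER_SYMBOL[modulation]

for rate, mod in [(56, "NRZ"), (56, "PAM4"), (112, "PAM4")]:
    print(f"{rate}G {mod}: {baud_rate_gbd(rate, mod):.0f} GBaud on the wire")

# 112G PAM4 runs the channel at the same 56 GBaud as 56G NRZ, which is why
# PAM4 is the common way to double throughput without doubling the channel
# bandwidth (at the cost of reduced signal-to-noise margin).
```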

TCAMs are a big part of the platform and a big part of networking ASICs as well. Unlike a regular memory that returns the value stored in a given address, a TCAM returns all the addresses where a given value is stored. This comes in handy for packet processing applications. eSilicon has delivered 12 generations of TCAM technology and the current 7nm compiler supports low-power operation with partial-pipelined search, resulting in power savings. BIST enhancements allow faster design cycles and simulation through soft programming. A patented Duo architecture and two-cycle read/write architecture reduce area and power even further for large networking ASICs.
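
If the TCAM behavior is hard to picture, here is a tiny behavioral model; it is purely illustrative (ternary patterns with don't-care bits, returning every matching address) and not a description of eSilicon's implementation:

```python
# Minimal behavioral model of a TCAM: each entry stores a ternary pattern
# ('0', '1' or 'x' for don't-care); a search returns every address whose
# pattern matches the key. Conceptual only, not eSilicon's design.

class TCAM:
    def __init__(self, width):
        self.width = width
        self.entries = {}                       # address -> ternary pattern

    def write(self, address, pattern):
        assert len(pattern) == self.width
        self.entries[address] = pattern

    def search(self, key):
        assert len(key) == self.width
        return [addr for addr, pat in self.entries.items()
                if all(p in ('x', k) for p, k in zip(pat, key))]

# Example: simple prefix-style packet classification entries
tcam = TCAM(width=8)
tcam.write(0, "1010xxxx")   # matches any key starting 1010
tcam.write(1, "10100001")   # exact match
tcam.write(2, "11xxxxxx")

print(tcam.search("10100001"))   # -> [0, 1]  (both patterns match)
print(tcam.search("11000000"))   # -> [2]
```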

A lot of data center ASICs use HBM memory stacks to provide a large amount of storage that is easily accessible by the ASIC. These devices use a 2.5D integration scheme for the HBM memory stacks, so the PHY, or physical interface to those stacks, is a key element of the power and performance profile. eSilicon has an HBM PHY (gen 2) as part of the platform as well. eSilicon’s HBM2 PHY integrates unique features to minimize switching noise and duty cycle distortion to provide a risk-free, robust solution. The PHY is a self-contained, hardened macro that offers many programmable hooks to architects. Drive strength calibration and jitter reduction, as well as dedicated circuitry for training and lane repair, are offered as well. eSilicon also has a 2.5D HBM enablement package. Based on seven years of experience with 2.5D design, this package provides easy integration of the HBM2 PHY and associated HBM2 DRAM stacks.

Rounding out the platform is an array of unique, network-optimized, high-speed and ultra-high-density memory compilers, register files and latch-based compilers optimized for extreme density and performance.


Leveraging AI to help build AI SOCs
by Tom Simon on 06-25-2018 at 12:00 pm

When I first started working in the semiconductor industry back in 1982, I realized that there was a race going on between the complexity of the system being designed and the capabilities of the technology in the tools and systems used to design them. The technology used to design the next generation of hardware was always lagging behind while it was being used to build generationally larger and more complex systems. I liken it to a dragon chasing its own tail. Designers have always really wished they had the next generation computing power available to design the next generation of hardware.

The situation has been this way ever since those days so long ago. However, perhaps the advent of Artificial Intelligence may change that dynamic. AI has an uncanny ability to solve complex problems that cannot be addressed simply by more processors, more memory and more networking. It represents a fundamentally different way of solving problems that have large numbers of variables and complex performance surfaces.

It’s not surprising then to see machine learning making its way into the software and tools used to design SOCs and complex systems. The endgame of this is using machine learning to design machine learning systems. There you have it, AI inception.

One of the most complex and non-deterministic problems in SOC design is interconnect. The ability of hardwired interconnect to keep up slipped away long ago. As a result, the application of Network on Chip (NoC) for interconnecting blocks has become more prevalent in SOC designs. Still, even with top-down, requirement-driven tools for designing NoC structures, there are great challenges in designing efficient NoC interconnect implementations. These days NoCs have routers and they dynamically manage traffic.

I recently had a chance to talk to Anush Mohandass, VP of Business Development at Netspeed, a leading provider of NoC IP and development tools. We talked about their announcement of Orion AI, which is NoC technology that now incorporates machine learning algorithms. Right off the bat he pointed out that the challenges of designing AI chips have led to dramatically shifting requirements for SOC design. AI chips have significantly different interconnect needs. They have a large number of computing elements with their own local memory stores that are connected in a largely flat topology. No longer does data move to and from central memory to be processed by a central processor. This is a peer-to-peer system that requires low latency, high bandwidth and incredible flexibility.

The NoC for AI must support multicast and broadcast data transfers. It also needs to support non-posted and posted transactions. Comprehensive QoS is also necessary, and it must be non-blocking. In effect, AI applications require software-defined NoCs. Netspeed accomplishes this with a multi-layered protocol that creates levels of abstraction between the physical and functional implementations.
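
To give a feel for the kind of traffic pattern such a flat, peer-to-peer fabric has to handle, here is a toy sketch of multicast over a 2D mesh using simple XY routing; it illustrates the concept only and is not Netspeed's routing or multicast algorithm:

```python
# Toy model of multicast on a 2D-mesh NoC using XY (dimension-order) routing.
# Purely illustrative of the traffic pattern discussed above -- it is not
# Netspeed's routing or multicast implementation.

def xy_route(src, dst):
    """Hops visited from src to dst, routing along X first, then Y."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:                       # move along X first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                       # then along Y
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

def multicast(src, destinations):
    """Naive source-replicated multicast: one unicast path per destination."""
    return {dst: xy_route(src, dst) for dst in destinations}

# One compute tile sending a partial result to three peers
paths = multicast(src=(0, 0), destinations=[(2, 0), (2, 2), (0, 3)])
for dst, path in paths.items():
    print(dst, "->", path)

# A real NoC would fork the flit inside the routers where paths diverge,
# rather than injecting one copy per destination, to save bandwidth.
```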

Netspeed’s Orion AI is a leap forward in NoC technology. It offers scalable data widths that are significantly larger than its predecessor’s. It can operate at speeds of 2-3GHz with bus widths of 1024 bits. It can support the interconnection of thousands of elements. The AI algorithms built into Orion AI efficiently optimize the final implementation. This naturally means that it is the ideal technology to implement AI systems.

Maybe we haven’t reached the level of robots building robots, but we definitely have reached the age of using AI to help build SOCs. Netspeed’s Orion AI is an excellent example of how this technology can be applied. For more detailed information about Netspeed’s Orion AI visit their website.


Cadence in the Cloud!
by Daniel Nenni on 06-25-2018 at 9:45 am

The first clue was cloud vendors (Amazon, Google, IBM, etc…) at 55DAC for the first time ever with lots of cloud content including a Design on Cloud Pavilion. The second clue was the pre-briefing from Cadence last week. There has also been a lot of cloud chatter in the semiconductor ecosystem so yes, I saw this coming and EDA will get even more cloudy in the very near future.

Cadence disclosed that they have been actively working on cloud solutions with customers over the past ten years and feel they are now at a point where security is no longer an issue. Pricing is still a bit cloudy but that will be much easier to address based on specific customer needs.

Example: When I worked for Solido Design we experimented with token-based pricing, where the customer bought one time-based license plus usage tokens. As it turned out, it was like feeding a slot machine. Customers bought many more tokens than expected and Solido made much more money than originally forecast, all for the greater good of course!

Bottom line: My bet is that customers will actually spend more on EDA in the cloud and get better designs as a result, absolutely!

“We’ve delivered the Cadence Cloud portfolio to address the challenges our customers face—the unsustainable peak compute needs created by complex chip designs and exponentially increasing design data,” said Dr. Anirudh Devgan, president of Cadence. “By leading this industry shift to the cloud, we’re enabling our customers to adopt the cloud quickly and easily and are further executing upon our System Design Enablement vision, which enables our customers to be more productive and get to market faster.”

I have a 1:1 with Anirudh at DAC this week so I will talk cloud more with him then. The nice thing about the pre-brief this time was the people on it. Carl Siva is a long-time IT guy who is now the VP of IT for Cadence. Honestly, I think this is the first time I have been on a call with an IT executive. Craig Johnson was also on the call; Craig spent the first half of his 20+ year career at Intel and the second half at Cadence. I really liked these guys and got a lot from the call, which is not always the case.

Here are the Highlights from the press release:

  • The Cadence Cloud portfolio includes customer-managed and Cadence-managed cloud environments providing productivity, scalability, security and flexibility benefits that enable engineers to achieve electronic product design goals
  • Customers establishing and maintaining their own cloud environments can now use the Cadence Cloud Passport, a model that provides easy access to cloud-ready Cadence tools and a cloud-based license server for high reliability
  • Cadence offers the Cloud Hosted Design Solution, a managed, EDA-optimized cloud environment built on Amazon Web Services or Microsoft Azure that supports customers’ peak or entire design environments
  • Cadence introduces the Palladium Cloud, a fully-managed emulation solution that can be deployed in combination with other Cadence Cloud offerings, freeing customers from installation and operational responsibilities

Cadence also included two white papers and a slide deck. The first white paper, “Cadence Cloud—The Future of Electronic Design Automation”, is a nice 6-page overview written by Carl:

Design complexity and competitive pressures are driving electronics developers to seek innovative solutions to gain competitive advantage. A key area of investigation is applying the power of the cloud to electronic design automation (EDA) to dramatically boost productivity. Grounded in its long history of providing hosted design solutions (HDS) and internal experience with cloud-based design, Cadence has taken a leadership position in moving EDA to the cloud. Cadence has developed a deep expertise in the requirements and unique challenges of EDA cloud users. That expertise has resulted in Cadence® Cloud, a productive, scalable, secure, and flexible approach to design, and one that embodies the future of EDA.

The second one, “Accelerating SoC Time to Market with Cloud-Based Verification”, is a 7-page cloud case study written by Michael A. Lucente, Cadence Product Management Director:

This paper discusses the growing use of cloud and hybrid cloud environments among semiconductor design and verification teams. The schedule and efficiency benefits seen by verification teams using cloud are specifically highlighted, due to the considerable compute requirements associated with verification of advanced node SoCs, and the significant impact verification has on the overall SoC project schedule. The readiness of public cloud environments for use in semiconductor design and verification workflows is discussed, along with factors to consider when choosing EDA technology for use in the cloud. Cadence® offerings for self-managed and fully managed EDA cloud solutions are also outlined.

As soon as I get the links for the papers I will add them. In the meantime, you can request them directly from Cadence. They really are worth the read. An article I wrote was even referenced in the first one, which is a nice touch.


7nm, 5nm and 3nm Logic, current and projected processes
by Scotten Jones on 06-25-2018 at 7:00 am

There has been a lot of new information available about the leading-edge logic processes lately. Papers from IEDM in December 2017, VLSIT this month, the TSMC and Samsung Foundry forums, etc. have all filled in a lot of information. In this article I will summarize what is currently known.
Continue reading “7nm, 5nm and 3nm Logic, current and projected processes”


Semiconductor Cycles Always End the Same Way
by Robert Maire on 06-24-2018 at 3:00 pm

It appears the current cycle has rolled over?
The reason is memory & could be worsened by trade
Figuring out length, depth and impact of the downturn?

We had said that AMAT “called” the top of the cycle on their last conference call even though they may not think so. Semiconductor cycles always end the same way: the rate of increase slows to zero, then rolls over as business slows and the cycle goes down.
Continue reading “Semiconductor Cycles Always End the Same Way”


Mentor at the 55th Design Automation Conference
by Daniel Nenni on 06-22-2018 at 9:00 am

It’s hard to believe that this is the 55th DAC and even harder to believe that this will be my 35th. So much has changed in 35 years, with DAC back in San Francisco I expect a VERY big crowd and even bigger announcements, absolutely.

Not only is this an epic time for semiconductors, I would say that EDA is exciting again and the Mentor acquisition by Siemens is definitely a catalyst. Since this is the first DAC with the full backing of Siemens, it is definitely one to see:

Technical Conference Program Overview:

  • 7 paper presentations
  • 13 posters
  • 2 panels
  • 2 expert tutorials

Straight talk with Wally Rhines
Wally Rhines, President and CEO of Mentor, a Siemens Business, sits down with Semiconductor Engineering’s Ed Sperling to discuss the big shifts in technology, from AI to autonomous cars to the growth of the Internet of Things and the Industrial Internet of Things. What kinds of shifts can we expect to see in the future, who’s going to be best positioned to take advantage of them, and what will the semiconductor industry look like in five years as these changes begin taking hold? Who will be the winners and who will be the losers?

MENTOR ON THE EXHIBIT FLOOR
You’ll find Mentor experts on both exhibit floors of Moscone West. The main Mentor booth (2621) is on the second floor while you can also visit Mentor experts on the first floor in the booths for Verification Academy (1622), Tanner EDA (1337), and Solido Design Automation (1344).

Expert Panels
Mentor will host expert panels on Monday and Tuesday of DAC in booth 2621. Show up a little early and grab a free beer or glass of wine at our Happy Hour to enjoy during the panel!

Functional safety – where are we going and how do we get there?
Monday June 25, 4:00pm – 5:00pm
With everything from cars to factories to the world around us becoming more intelligent and increasingly automated, the decision making is shifting from humans to the machines. Semiconductors are at the center of this innovation but now the way these electronics are developed must evolve as humans put their lives in the hands of these transistor-based machines. The concept of functional safety is not new but with the move to autonomous driving, functional safety has been put in the spotlight for IC development teams. From requirements to fault injection, functional safety brings many new challenges for IC development but at scales and levels of automation not seen before.

Getting your tape-out done on time isn’t easy, but it can be easier
Tuesday June 26, 4:00pm – 5:00pm
More than 50% of tape-outs don’t occur on schedule. Which 50% do you want to be in? Come listen to Calibre customers talk about the challenges that they face and the steps that they take to get their tape-outs on time.

Daily Technical Sessions
The Mentor booth (2621) will host over 70 technical sessions across 7 technical focus areas:

IC Design & Test
Design & Functional Verification
Analog/Mixed-Signal and Custom IC Design
High-Level Synthesis, Low-Power, and SLEC
Emerging Markets
Packaging & PCB
Verification Academy

ANNOUNCEMENTS
Veloce on the Cloud was announced on June 8. Mentor is pioneering new licensing and use models to enable small companies as well as large companies (with geographically dispersed verification groups) to access emulation via Amazon Web Services. Make sure to check out the “Design-on-Cloud” pavilion on the DAC exhibit floor.

Calibre RealTime Digital was announced on June 18. Mentor not only maintains the leading position in physical verification but is expanding on that lead with new innovations. RealTime Digital was built in response to customer demand. Mentor released the first tool in the Calibre RealTime line six years ago, which won several awards and gained great popularity with layout folks doing custom design. The new tool targets the digital design market and helps designers fix local DRC errors in their designs without creating further DRC violations. It cuts weeks off of the signoff stage of the design – the last stage before tapeout.

Customer presentations on Calibre RealTime Digital in the Mentor booth:

  • Customer Presentation: Saving weeks off the physical design implementation cycle: Qualcomm’s experience using Calibre RealTime Digital
    • Monday 10:00am
    • Tuesday 3:00pm
    • Wednesday 2:00pm
  • Customer Presentation: How Inphi uses Calibre RealTime Digital to improve the time to tapeout digital designs
    • Tuesday 4:00pm
    • Wednesday 10:00am


About DAC
The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, and for electronic design automation (EDA) and silicon solutions. A diverse worldwide community representing more than 1,000 organizations attends each year, ranging from system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives to researchers and academicians from leading universities. Close to 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, methodologies and technologies. A highlight of DAC is its exhibition and suite area with approximately 175 of the leading and emerging EDA, silicon, intellectual property (IP) and design services providers. The conference is sponsored by the Association for Computing Machinery’s Special Interest Group on Design Automation (ACM SIGDA), the Electronic Systems Design Alliance (ESDA), and the Institute of Electrical and Electronics Engineers’ Council on Electronic Design Automation (IEEE CEDA).



Folklore Around the HP 35 LED Development and the Nobel Prize
by Daniel Nenni on 06-22-2018 at 7:00 am

This is the third in the series of “20 Questions with Wally Rhines”

In the early 1970s I was working on a PhD thesis based upon GaAs light-emitting diodes, or LEDs. Many of my predecessors in the Materials Science and Engineering Department at Stanford had worked on other aspects of III-V compounds and some of them went to work at HP after completing their PhDs. The story they told me seems credible so I’ll relate it here.

HP recognized that LEDs would be important for many types of instrumentation. The company pioneered LED research and eventually formed HP Associates to commercialize this business.

In the late 1960s development began on what became the HP-35 calculator. Logic chip contracts were let to both AMI and Mostek and chips developed by both companies ended up in production units of the product. Technology at that time made “reverse Polish” a more logical procedure for entering data and many engineers still prefer the elegance of this approach.
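
For readers who have never used reverse Polish entry, a minimal stack-based evaluator shows why it suited early calculator hardware; this is just an illustrative sketch, not HP's firmware:

```python
# A minimal reverse-Polish (RPN) evaluator, illustrating why stack-based entry
# was a natural fit for early calculator hardware: operands are pushed,
# operators pop their arguments, and no parentheses are ever needed.

def rpn_eval(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()   # note operand order
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 2 entered RPN-style, no parentheses required:
print(rpn_eval("3 4 + 2 *".split()))   # -> 14.0
```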

Choice of a display for the HP-35 logically fell to the most promising technology, GaAsP (Gallium Arsenide Phosphide), which could be tailored to alter the exact wavelength of light emission. GaAs emits light at about 1 micron, which is in the infrared and therefore not visible to humans. By alloying GaAs with phosphorus, the “band gap” can be tailored to emit at shorter wavelengths. Armed with this knowledge, the team of relatively young engineers attacked the development task of creating a suitable, reliable red LED for the HP-35 calculator. After analyzing various ratios of arsenic and phosphorus, they selected a combination that emitted a bright cherry red that was easily visible to the entire team of development engineers.
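
For the curious, the arithmetic behind that statement is the standard photon-energy relation, wavelength(nm) ≈ 1240 / Eg(eV); the band-gap values below are rough, illustrative numbers rather than figures from the original story:

```python
# The physics behind "alloying with phosphorus shifts emission to shorter
# wavelengths": photon energy ~ band gap, so wavelength(nm) ~= 1240 / Eg(eV).
# Band-gap values are rough, illustrative numbers for direct-gap alloys.

def emission_wavelength_nm(eg_ev):
    return 1240.0 / eg_ev            # hc ~= 1240 eV*nm

alloys = {
    "GaAs (no P)":        1.42,      # ~870 nm, infrared (the roughly 1 um emission above)
    "GaAsP, moderate P":  1.75,      # ~710 nm, deep red, hard for older eyes to see
    "GaAsP, more P":      1.90,      # ~650 nm, bright red, easily visible
}

for name, eg in alloys.items():
    print(f"{name:20s} Eg={eg:.2f} eV -> ~{emission_wavelength_nm(eg):.0f} nm")
```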

Detailed characterization and development of a manufacturing process followed. And then the day came to demonstrate their achievements. A presentation was put together for the Board of Directors of HP. The presentation included the multifaceted issues associated with light emission and culminated with a demonstration of an array of discrete LEDs that were arranged to spell the letters “HP”. When the switch was pulled, Bill Hewlett turned to David Packard and said, “I don’t see anything”. Packard agreed. What the “less than 40” year old engineers had overlooked was the natural narrowing of bandwidth perception that occurs with age. Eyesight, hearing, smell and almost all our senses deteriorate with age. As we become older, the range of frequencies we can perceive decreases. The particular choice of GaAsP alloy that the younger engineers had selected emitted red light near the long-wavelength edge of the visible spectrum (approaching 750nm). But visibility is relative. For 60+ year olds, it wasn’t visible. The entire project went back to the drawing board to reduce the wavelength of red light emission to one that would be visible to a much broader range of the population.

Red wasn’t the only LED color of interest at the time. The DARPA contract that funded the work I was doing with GaAs, along with Shang-yi Chiang (who later became VP of R&D for TSMC), included Craig Barrett (a faculty advisor who later became CEO of Intel) and Herb Maruska, who had worked on Gallium Nitride at RCA before coming to Stanford. Herb had worked with Jacques Pankove at RCA, trying a wide variety of materials for LEDs so that RCA could build solid-state televisions. The challenge of short-wavelength emitters remained; an efficient blue LED was still out of reach.

Herb tirelessly deposited thin films of GaN with various dopants. And then one day, we systematically analyzed his results. Element by element, I went through the periodic table while Herb told me who had tried various dopants and what the results had been. And then, miraculously, we focused on a group II element, magnesium, that was not well characterized with GaN. Herb headed to the lab and in a short period produced a film of Mg-doped GaN that emitted blue/violet light when a voltage was applied. We were all ecstatic and proceeded to apply for a patent with the help of the Stanford legal staff. The patent was granted later, in 1974, and Herb returned to RCA a hero.

Unfortunately, the blue LEDs were not very efficient. But we published papers and two researchers, Akasaki and Amano, talked to Herb about ten years later and were able to reproduce his results. The history is documented in the article:

A modern perspective on the history of semiconductor nitride blue light sources

Later, the critical missing piece emerged. Shuji Nakamura of UC Santa Barbara fabricated Mg-doped InGaN LEDs that operated with a quantum well structure, dramatically improving the efficiency. Nakamura’s advance was remarkable and he clearly deserved the Nobel Prize that he received (along with Akasaki and Amano, who were able to reproduce both Nakamura’s and our results, as well as achieve stimulated emission). Nakamura highlighted the Stanford work when the Nobel Prize was announced, saying he believed recognition for the blue LED should also extend to Herbert Paul Maruska, a researcher at RCA who created a functional blue LED prototype in 1972. Nakamura said he did not think his or Akasaki and Amano’s work would have been possible without Maruska’s contributions many years prior.

Nakamura Gives Some Credit to Maruska for Blue LED Invention

20 Questions with Wally Rhines Series


Achieving Clean Design Early with Calibre-RTD
by Alex Tan on 06-21-2018 at 4:00 pm

Functional and physical verification are easily the two long poles in most IC product development. During a design implementation cycle, design teams tend to push the physical verification (PV) step towards the end, as it is a time-consuming process and requires significant manual intervention.

PV Challenges
In the traditional physical design flow, design teams send their designs through a full DRC (Design Rule Check) verification run after completing the place and route step. This process can take several hours for a billion-transistor design and often uncovers problems that must be fixed to comply with foundry manufacturing rules. Fixing those errors necessitates a repeat of place-and-route and another full DRC run. It is quite common to find that the fixes introduce yet more errors, leading to even more iterations and delays before converging on a clean design, as illustrated in figure 1a.

The complexity of recent advanced process nodes has prolonged physical verification cycle time even further, as these nodes come with a longer list of complex DRC rules to satisfy. Advanced nodes have also introduced a finer layer-stack segregation, namely FEOL, MEOL and BEOL (front-, middle- and back-end-of-line). For example, DRC errors such as implant-related violations on FEOL layers now need to be handled by the place and route system, as they correlate with cell placement.

There have been prior attempts to remediate DRC fixing. One approach facilitates the steps needed to import and view DRC errors in the P&R environment. Another embeds a layout editor within the P&R environment to enable custom fixes at the end of a DRC run. However, neither of these addresses the overall cycle time reduction or the recurring iterations.

Shift Left and Tool Integration
The notion of shift left was initially popular in the verification domain and is becoming a mantra for most EDA tool providers. With ample availability of fast compute resources and more efficient algorithms, it is now practical to provide concurrent access to many solutions previously run as separate processes.

Like Berkeley’s SPICE and its derivatives in the circuit simulation domain, Calibre has been the de facto physical verification tool for over a decade. Now Mentor, a Siemens business, has launched a new Calibre-based solution dubbed Calibre® RealTime Digital (RTD) – a new physical verification tool that works in concert with popular commercial place-and-route environments.

As design teams use place-and-route to fix violations discovered after full DRC runs, they can use the Calibre RTD tool to make minor changes, thereby resolving DRC violations without causing additional violations — ergo “Correct by Calibre”. Calibre RTD achieves this by making the minor changes and performing customized, smaller and more localized DRC runs to help ensure the violations are removed.

As illustrated in figure 1b, shorter iterations during debug reduce the total number of full-chip pass iterations, allowing designers to dramatically shorten design cycles and get to market sooner. “Calibre RealTime Digital is a solution that was driven by customer requests,” said Joe Sawicki, vice president and general manager of Mentor’s Design-to-Silicon Division.
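
To illustrate the general idea of a localized, in-design check (as opposed to a full-chip run), here is a conceptual sketch that re-checks a minimum-spacing rule only within a halo around an edited shape; the rule value, halo size and code are illustrative assumptions, not how Calibre RealTime Digital is actually implemented:

```python
# Conceptual sketch of a "localized" DRC check: after a small edit, only shapes
# within a halo of the edited region are re-checked against a minimum-spacing
# rule, instead of re-running the rule on the whole chip. Illustrative only.

from itertools import combinations

MIN_SPACE = 0.05                      # assumed minimum spacing rule (um)
HALO = 0.2                            # re-check window around the edit (um)

def spacing(r1, r2):
    """Edge-to-edge spacing between two axis-aligned rectangles (0 if they touch/overlap)."""
    (x1a, y1a, x2a, y2a), (x1b, y1b, x2b, y2b) = r1, r2
    dx = max(x1b - x2a, x1a - x2b, 0.0)
    dy = max(y1b - y2a, y1a - y2b, 0.0)
    return (dx * dx + dy * dy) ** 0.5

def in_window(rect, window):
    x1, y1, x2, y2 = rect
    wx1, wy1, wx2, wy2 = window
    return not (x2 < wx1 or x1 > wx2 or y2 < wy1 or y1 > wy2)

def local_drc(shapes, edited_rect):
    """Check min-spacing only among shapes overlapping the halo around the edit."""
    x1, y1, x2, y2 = edited_rect
    window = (x1 - HALO, y1 - HALO, x2 + HALO, y2 + HALO)
    nearby = [s for s in shapes if in_window(s, window)]
    return [(a, b) for a, b in combinations(nearby, 2)
            if 0 < spacing(a, b) < MIN_SPACE]

# A tiny metal layer: the last rectangle is the new "fix" just inserted
metal = [(0.0, 0.0, 0.1, 1.0), (0.3, 0.0, 0.4, 1.0), (0.13, 0.2, 0.2, 0.8)]
print(local_drc(metal, edited_rect=metal[-1]))   # reports the violation against the first wire
```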

This roll-out complements the earlier 2011 release of the Calibre RealTime Custom tool for custom IC design flows. RTD targets full-chip and block-level digital designs and serves teams designing primarily ASICs and SoCs for various electronics end markets. According to early customer feedback reported by Mentor, RTD significantly cut the amount of time needed to reach a DRC-clean block, with time savings ranging from 40% for a design block up to 85% for an ECO’ed block.

“The tool can save time and headaches for design teams developing system chips using any digital process. By working in tandem with the place-and-route tool, Calibre RealTime Digital helps correct physical violation errors that cannot be corrected using a place-and-route system alone. As a result, customers have the potential to get designs to market weeks faster,” Joe added.

Endorsements were already given by several named customers such as Qualcomm and Inphi. “Calibre RealTime Digital is an accelerator to our existing physical verification strategies that fits seamlessly into our design flows. We expect the tool will allow us to cut weeks off of our signoff schedule,” said Weikai Sun, associate vice president of Engineering at Inphi.

RTD and P&R
Enabling RTD physical verification in the RTL-to-GDS2 flow includes the following usage scenarios. As illustrated in figure 2a, with RTD designers can run DRC early on, at the floorplanning stage, while exploring optimal IP or macro placement and analyzing data flow. Furthermore, this also provides a more concrete assessment of area versus performance trade-offs for an IP block during process retargeting. Metal stack selection and routability studies are commonly done during this stage, targeting a balance of routing resources between signal routes and global signals (power, ground and clock networks).

Another challenge during the P&R stage is dealing with preemptive placements (such as clock headers and special cells) and routing of critical nets (pre-routes), which are often performed by internal script-based tools bolted onto the formal flow. These preemptive placements or routes may not satisfy all the complex DRC requirements (for example, with respect to metal versus via allocation, cut-metal rules, etc.). The Calibre RTD interface lets designers interactively verify DRC, multi-patterning, and pattern matching fixes in P&R using the same sign-off Calibre decks. Hence, these pre-routes or pre-placements can be confirmed as DRC clean before any dont-touch attribute is set on them.

RTD Usage Models
With Calibre RTD, physical designers no longer need RVE or RealTime-RVE to interface with Calibre verification. Instead, physical verification can be done in the physical implementation environment of their choice. Designers who have used Mentor’s Olympus-SoC might be familiar with the earlier Calibre InRoute integration; this time the integration spans the major P&R tools.

For custom or mixed-signal IP development, interaction with either Cadence Virtuoso or Synopsys Custom Compiler is supported, as shown in figure 3a. On the other hand, for ASIC/SoC physical designers, integration with Cadence Innovus and Synopsys ICC2 is available, as shown in figure 3b.

With the Calibre RTD release, Mentor has upped the ante in tackling design cycle reduction by doing a shift-left and integrating Calibre physical verification into design implementation. Mentor reports no meaningful memory footprint impact; RTD should run on any design size that is routable by the designer’s P&R tool of choice.

Several customer DAC 2018 presentations are scheduled at Mentor’s booth #2621. For more detailed info on Calibre RTD, please check HERE.


What to Expect from Methodics at DAC
by Daniel Payne on 06-21-2018 at 12:00 pm

I’ve been visiting DAC for decades now, at first as an EDA vendor and since 2004 as a freelance EDA consultant. There’s always a buzz about what’s new, semiconductor industry trends, who is getting acquired and the latest commercial EDA and IP offerings. There’s so much vying for my attention at DAC each year that it can seem like a blur; however, I can give you some clarity about a company called Methodics by asking Simon Butler, the CEO, some questions:

What is Methodics all about?

At this year’s DAC, we’ll be showing a range of solutions for helping manage your IP portfolio, including the latest version of our Percipient IP Lifecycle Management (IPLM) platform. Percipient has evolved to be a real game changer for enterprise-wide coordination of your most critical design assets and a proven way to implement an IP-centric design methodology.

Many vendors talk about PLM, so what’s different with yours?

We’ll also be showcasing how we put the ‘I’ in PLM – our integration with enterprise-class PLM solutions from partners like Siemens that also include world-class version control systems such as Perforce Helix. Please be sure to stop by our booth to say “hello” to our Perforce and Siemens partners who will be joining us to showcase the latest Methodics integrations.

What industry trends do you see this year?

Another big focus for us has been the automotive industry, and the ISO 26262 functional safety requirement specifically. Traceability of designs is an important part of complying with the ISO standard and we’ve got you covered. You can read more about this in our latest white paper, and we have a demo dedicated to this topic at DAC.

We picked up even more automotive know-how at the recent ISO 26262 for Semiconductors conference in Detroit. A lot of the movers and shakers in the car business and their electronics suppliers were at this event. We had a chance to offer our thoughts on how IP management is an important consideration, sitting alongside ARM, NXP and Intel on a panel discussion.

How do your users share their best practices?

We held our annual Methodics User Group Meeting this month. Our friends at Maxim Integrated were kind enough to host our impressive gathering of customers and lots of great information was shared. Special thanks to Intel, Silicon Labs, Analog Devices, and Maxim for delivering really insightful presentations. The interaction among our users and our own engineering team was fantastic and extremely helpful as we evolve our IPLM solution.

Who is new at Methodics this year?

Vadim Iofis has joined us as VP of Engineering. Vadim brings great insights for implementing solutions on an enterprise level and we’re looking forward to him helping us move further up the value chain of managing our customers’ most important design assets.

What else will Methodics be doing at DAC this year?