Achieving Clean Design Early with Calibre-RTD
by Alex Tan on 06-21-2018 at 4:00 pm

Functional and physical verification are easily the two long poles in most IC product development. During a design implementation cycle, design teams tend to push the physical verification (PV) step towards the end, as it is a time-consuming process that requires significant manual intervention.

PV Challenges
In the traditional physical design flow, design teams send their designs through a full DRC (Design Rule Check) verification run after completing the place-and-route step. This process can take several hours for a billion-transistor design and often uncovers problems that must be fixed to comply with foundry manufacturing rules. Fixing those errors necessitates a repeat of place-and-route and another full DRC run. It is quite common for the fixes to introduce yet more errors, leading to further iterations and delays before converging on a clean design, as illustrated in figure 1a.

The complexity of recent advanced process nodes has prolonged the physical verification cycle further, as they come with an ever-longer list of complex DRC rules to satisfy. The advanced nodes have also introduced a finer layer stack segregation, namely FEOL, MEOL, and BEOL (Front-, Middle-, and Back-End-Of-Line). For example, DRC errors such as implant-related violations on FEOL layers now need to be handled by the place-and-route system, as they correlate with cell placement.

There have been prior attempts to ease DRC fixing. One approach facilitates the steps needed to import and view DRC errors in the P&R environment. Another embeds a layout editor within the P&R environment to enable custom fixing at the end of a DRC run. However, neither addresses the overall cycle time reduction nor the recurring iterations.

Shift Left and Tool Integration
The notion of shift-left was initially popular in the verification domain and is becoming a mantra for most EDA tool providers. With the ample availability of fast compute resources and more efficient algorithms, it is now practical to provide concurrent access to many solutions previously run as separate processes.

Like Berkeley's SPICE and its derivatives in the circuit simulation domain, Calibre has been the de facto physical verification tool for over a decade. Now Mentor, a Siemens business, has launched a new Calibre-based solution dubbed Calibre® RealTime Digital (RTD) – a physical verification tool that works in concert with popular commercial place-and-route environments.

As design teams use place-and-route to fix violations discovered after full DRC runs, they can use the Calibre RTD tool to make minor changes, thereby resolving DRC violations without causing additional ones – ergo "Correct by Calibre". Calibre RTD achieves this by making the minor changes and performing customized, smaller, more localized DRC runs to help ensure the violations are removed.
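To see why the localized runs matter, consider a toy cost model (my own illustration, not Calibre's actual behavior or API): if checker runtime scales roughly with the area checked, re-verifying only a small halo around each edit is orders of magnitude cheaper than repeating a full-chip pass.

```python
# Toy cost model for localized vs. full-chip DRC (illustrative numbers only).

def drc_runtime_s(area_um2, throughput_um2_per_s=1e4):
    """Assume checker runtime is proportional to the area checked."""
    return area_um2 / throughput_um2_per_s

die_area = 10_000 * 10_000   # hypothetical 10 mm x 10 mm die, in um^2
halo_area = 50 * 50          # 50 um x 50 um window around one small edit

print(f"one full-chip pass:   {drc_runtime_s(die_area):,.0f} s")         # 10,000 s
print(f"100 localized passes: {100 * drc_runtime_s(halo_area):,.0f} s")  # 25 s
```

The made-up numbers are not the point; the point is that each fix-and-recheck loop stops costing a full-chip run.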
As illustrated in figure 1b, shorter iterations during debug reduce the total number of full-chip pass iterations, allowing designers to dramatically shorten design cycles and get to market sooner. “Calibre RealTime Digital is a solution that was driven by customer requests,” said Joe Sawicki, vice president and general manager of Mentor’s Design-to-Silicon Division.

This roll-out complements the earlier 2011 release of the Calibre RealTime Custom tool for custom IC design flows. RTD targets full-chip and block-level digital designs, serving teams designing primarily ASICs and SoCs for various electronics end markets. According to Mentor's early customer feedback, RTD significantly cuts the time needed to reach a DRC-clean block, with savings ranging from 40% for a design block up to 85% for an ECO'ed block.

“The tool can save time and headaches for design teams developing system chips using any digital process. By working in tandem with the place-and-route tool, Calibre RealTime Digital helps correct physical violation errors that cannot be corrected using a place-and-route system alone. As a result, customers have the potential to get designs to market weeks faster,” Joe added.

Endorsements have already come from several named customers, such as Qualcomm and Inphi. "Calibre RealTime Digital is an accelerator to our existing physical verification strategies that fits seamlessly into our design flows. We expect the tool will allow us to cut weeks off of our signoff schedule," said Weikai Sun, associate vice president of Engineering at Inphi.

RTD and P&R
Enabling RTD physical verification in the RTL-to-GDS2 flow includes the following usage scenarios. As illustrated in figure 2a, with RTD designers can run DRC early on, at the floorplanning stage, while exploring optimal IP or macro placement and analyzing data flow. This also provides a more concrete assessment of area-versus-performance trade-offs for an IP block during process retargeting. Metal stack selection and routability studies are commonly made during this stage, targeting a balance of route resources between signal routes and global signals (power, ground, and clock networks).

Another challenge during the P&R stage is dealing with preemptive placements (such as clock headers and special cells) and routing of critical nets (pre-routes), which are often performed by augmenting the formal flow with internal script-based tools. These preemptive placements or routes may not satisfy all the complex DRC requirements (for example, with respect to metal versus via allocation, cut-metal, etc.). The Calibre RTD interface lets designers interactively verify DRC, multi-patterning, and pattern matching fixes in P&R using the same sign-off Calibre decks. Hence, these pre-routes or pre-placements can be ascertained as DRC clean before setting any dont-touch attribute on the entities.

RTD Usage Models
With Calibre RTD, physical designers no longer need RVE or RealTime-RVE to interface with Calibre verification. Instead, physical verification can be done in the physical implementation environment of choice. Some designers who have used Mentor's Olympus-SoC might be familiar with the earlier Calibre InRoute integration; this time, the integration spans the major P&R tools.

For custom or mixed-signal IP development, interaction with either Cadence Virtuoso or Synopsys Custom Compiler is supported, as shown in figure 3a. On the other hand, for ASIC/SoC physical designers, integration with Cadence Innovus and Synopsys ICC2 is available, as shown in figure 3b.

With the Calibre RTD release, Mentor has upped the ante in tackling design cycle reduction by shifting left and integrating Calibre physical verification into design implementation. Mentor reports no meaningful memory footprint impact; RTD should run on any design size that is routable by the designer's P&R tool of choice.

Several customer DAC 2018 presentations are scheduled at Mentor’s booth #2621. For more detailed info on Calibre RTD, please check HERE.


What to Expect from Methodics at DAC
by Daniel Payne on 06-21-2018 at 12:00 pm

I've been visiting DAC for decades now, at first as an EDA vendor and since 2004 as a freelance EDA consultant. There's always a buzz about what's new, semiconductor industry trends, who is getting acquired, and the latest commercial EDA and IP offerings. There's so much vying for my attention at DAC each year that it can seem like a blur. However, I can give you some clarity about a company called Methodics by asking Simon Butler, the CEO, some questions:

What is Methodics all about?

At this year’s DAC, we’ll be showing a range of solutions for helping manage your IP portfolio, including the latest version of our Percipient IP Lifecycle Management (IPLM) platform. Percipient has evolved to be a real game changer for enterprise-wide coordination of your most critical design assets and a proven way to implement an IP-centric design methodology.

Many vendors talk about PLM, so what’s different with yours?

We’ll also be showcasing how we put the ‘I’ in PLM – our integration with enterprise-class PLM solutions from partners like Siemens that also include world-class version control systems such as Perforce Helix. Please be sure to stop by our booth to say “hello” to our Perforce and Siemens partners who will be joining us to showcase the latest Methodics integrations.

What industry trends do you see this year?

Another big focus for us has been the automotive industry, and the ISO 26262 functional safety requirement specifically. Traceability of designs is an important part of complying with the ISO standard and we’ve got you covered. You can read more about this in our latest white paper, and we have a demo dedicated to this topic at DAC.

We picked up even more automotive know-how at the recent ISO 26262 for Semiconductors conference in Detroit. A lot of the movers and shakers in the car business and their electronics suppliers were at this event. We had a chance to offer our thoughts on how IP management is an important consideration, sitting alongside ARM, NXP, and Intel on a panel discussion.

How do your users share their best practices?

We held our annual Methodics User Group Meeting this month. Our friends at Maxim Integrated were kind enough to host our impressive gathering of customers and lots of great information was shared. Special thanks to Intel, Silicon Labs, Analog Devices, and Maxim for delivering really insightful presentations. The interaction among our users and our own engineering team was fantastic and extremely helpful as we evolve our IPLM solution.

Who is new at Methodics this year?

Vadim Iofis has joined us as VP of Engineering. Vadim brings great insights for implementing solutions at an enterprise level, and we're looking forward to him helping us move further up the value chain of managing our customers' most important design assets.

What else will Methodics be doing at DAC this year?


ANSYS at DAC
by Bernard Murphy on 06-21-2018 at 7:00 am

I’m not going to be at DAC this year because I scheduled a fishing trip at the end of June, assuming the show would stay true to form as an early/mid-June event. Still, having to endure salmon and halibut fishing in Alaska rather than slogging around Moscone Center, I can’t pretend to be too disappointed; I’ll be thinking of you all 😎.

One of the things I'll miss is the ANSYS status update which, from the information I have, is shaping up to be quite impressive. DAC has accepted 25 ANSYS-related customer papers/posters from all major geographies. The majority seem to come from the who's who of mobile, while the dominant topics are RedHawk-SC (the big-data version), RedHawk for 3DIC/InFO/CPM, and PathFinder (ESD analysis). Good to see these technologies, about which I have been writing for a while, are both trending and translating into successes. (I confess I haven't seen the papers, but I'm assuming no one wants to brag about failures.)

Vic Kulkarni (VP and Chief Strategist in the ANSYS SCBU) gave me a rundown on their theme and events for DAC. The headline is "Beyond Signoff" – getting past traditional margin-constrained analyses to more effective approaches. I wrote about this earlier (breaking out of the box) in my John Lee interview. They'll be highlighting applications in four main areas:

  • Mobile – a lot of innovation still, even though the smartphone market is flattening out (think 5G, basestations, AI, 3D-sensing, …). A lot of activity in advanced packaging.
  • High-performance computing (HPC) – CPUs and GPUs of course but also networking and crypto-currency (I learned the in-term for this is now simply crypto. I guess the encryption folks lost that tag.). Advanced packaging big here too.
  • 5G/AI – an odd pairing from a solution point of view but it seems both are pushing ultra-high performance, power and reliability hard. Advanced packaging is also big here.
  • And of course, automotive – where pretty much everything is critically important, use of advanced processes is becoming more common and there are some moves into advanced packaging, if not quite as aggressive (yet) as in other domains.

The ANSYS story across all these domains remains very consistent – the margin-based approach to design and signoff is breaking down, a point on which I have written multiple times. You don't have to be a semiconductor expert to understand this. In any type of engineering, the standard way to study the characteristics of a design is to vary one thing at a time and hold everything else constant, because we haven't known how to analyze with everything varying at the same time. We make allowance for variability in the other factors through margins – limits on how much each factor can vary – and we repeat the analysis at combinations of those extreme cases (the corners).
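As a concrete illustration (my own toy example, not ANSYS material), here is what corner enumeration looks like for three independent factors; the corner count doubles with every factor added, and no corner captures coupling between factors:

```python
# Enumerate the corners for a handful of margined factors.
# Factor names and ranges are illustrative, not from the article.
from itertools import product

factors = {
    "voltage": (0.72, 0.88),    # V, margin around a nominal 0.8 V
    "temperature": (-40, 125),  # degrees C
    "process": ("slow", "fast"),
}

corners = list(product(*factors.values()))
print(f"{len(corners)} corners for {len(factors)} factors")  # 2^3 = 8
for corner in corners:
    print(dict(zip(factors, corner)))
```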

This approach works fine in many cases, but obviously it is a simplification of a more complex problem. It’s not hard to imagine circumstances under which that simplification would break down, particularly where there may be strong coupling between different factors. In mechanical engineering this happened quite a while ago. Aircraft-engine design requires co-analysis of mechanical, heat and airflow at the same time because analyzing these independently is already known to be dangerously inaccurate. FYI, this co-analysis across multiple domains is commonly known as multi-physics analysis.

Semiconductor design is no different. The question is not if but when co-analysis becomes important in this domain. Perhaps we are so far away from those kinds of interdependency that we can comfortably continue to use our margin-based approaches? Customers using advanced processes and packaging appear to disagree. They're saying they have to look at multiple factors at the same time, and if they don't they lose pricing advantage, PPA, yield, and even reliability. But I admit I get my information through ANSYS, and I'm a sucker for reasonable physics explanations, so you should probably sit in on some of the customer papers at DAC to form your own opinion.

You'll have multiple chances at DAC to pick apart the story. ANSYS have four customer workshops: design for optimal PPA, early power analysis for IP and chips, accelerating SoC power signoff, and multi-physics reliability signoff. They have seven best-practices sessions, John Lee (GM) is speaking at a Synopsys special interest group dinner, Norman Chang (CTO) is speaking at an AI/ML workshop, and there will be customer presentations at the booth. You can learn more and sign up for events HERE. I'll tell you what I caught when I get back, and you can tell me what you thought of the ANSYS story.


HOT Party for a Cause at DAC 55
by Randy Smith on 06-20-2018 at 4:00 pm

The Design Automation Conference (DAC), now in its 55th year, always offers a lively mix of activities. For EDA vendors and their customers, the focus is on the exhibit floor and in booth suites where the latest technology is on display. For R&D engineers and academics, the technical sessions dig deeply into an increasingly wide range of topics. For every attendee, DAC offers plenty of time for networking and catching up with colleagues from all around the world.

This year’s show in San Francisco offers a unique opportunity for everyone to meet, party, and support a great cause at the same time. The traditional Sunday evening DAC kick-off reception has been combined with the “HOT” party to benefit Heart of Technology and the Gary Smith Memorial Scholarship Endowment at San Jose State University. That one sentence contains multiple hints why this will be one of the hottest EDA events this year; here are some details.

For a start, Sunday evening receptions have been kicking off DAC for many years. Industry luminaries offer their thoughts on the past year and predictions for the next, all accompanied by generous food and beverage service. Do you know what the “DAC glance” is? That’s when you greet a friend or colleague at the reception and immediately look down at his or her badge to check current employment status. Changes from the previous year are not uncommon in EDA.

Over the last few years, a new DAC tradition began with the parties hosted by Heart of Technology (HOT), a philanthropic organization founded by Silicon Valley venture capitalist Jim Hogan. Based in San Jose, HOT rallies the high-tech industry to aid charities throughout the Bay Area in their fundraising efforts. To date, the organization has netted approximately $185,000 to benefit local charities in need.

Last year’s HOT party at DAC in Austin was a benefit for the Gary Smith Memorial Scholarship Endowment and this year we will again support this very worthy cause. Gary Smith was one of the luminaries of EDA, the industry’s most followed analyst, and one heck of a nice guy. He defined market segments, tracked market share, offered revenue projections, published detailed research reports, and served as a tireless advocate for EDA.

There is no better way to celebrate Gary’s extraordinary life than to support the endowment, which offers an award to one undergraduate student annually participating in the San Jose State University Educational Opportunity Program’s Guardian Scholars Program. This program serves youth emancipated from foster care, Wards of the Court, and certified homeless individuals who are highly self-motivated to complete their college education at SJSU.

So now you know the backstory and can appreciate why this combined event at DAC is so significant. The evening starts at 6:00 p.m. on Sunday, June 24th, with the DAC Welcome Reception, including the presentation “EDA Industry Observations and Outlook” from Richard F. Valera, Managing Director, Equity Research at Needham & Company at 7:00 p.m. This is followed immediately by the “HOT 55” party, which will run until 10:30 p.m.

All this happens at Moscone West in San Francisco on the Third Floor Mezzanine. Back by popular demand, Vista Roads Band will provide the entertainment with special guests Methodics Ensemble. There is no ticket or pre-registration required. All DAC attendees and sponsored guests are welcome to the party at no charge, though HOT and the 18 co-sponsoring companies and organizations ask that you donate what you can to the Gary Smith Memorial Scholarship Endowment.

We are certainly looking forward to attending and hope that you can be there as well!


The Wolper Method
by Bernard Murphy on 06-20-2018 at 11:00 am

If you read around topics in advanced formal verification you’re likely to run into something called Wolper coloring, or what Vigyan Singhal (Chief Oski at Oski) calls the Wolper method. Many domains have specialized techniques but what’s surprising in this instance is a seeming absence of helpful on-line explanations (though there are plenty of resources which cite and use the method without explanation, as if we should already know what it is.) The original source is a paper by Pierre Wolper which may be a little heavy going for some (me too), so I asked Vigyan for help, which he happily provided, adding also some interesting background. What follows is my attempt to provide an explanation for those of us who aren’t CS theoreticians.

Let’s start with the problem the method aims to address. When you want to verify the correctness of data transport logic (in network switches or on-chip interconnects or memory subsystems, for example), checking the protocol is one part of the job, with well-understood dynamic and formal approaches to verification. Checking integrity of the payloads flowing through the network – can they be corrupted in some way – is a different task. At first glance, this could be extremely difficult. In simulation you can’t practically check all possible data values in potentially long sequences and yet it may be far from obvious what corner cases would provide good coverage. And formal methods, even using clever techniques, normally have problems with very long sequences.

Wolper’s contribution was to discover and prove that, as long as control behavior in the logic is independent of payload data, it isn’t necessary to test all possible payload values or long sequences. In fact it is sufficient in formal proofs to use one or two bits in a (payload data) sequence, from which you can provably infer behavior for sequences of arbitrary length. I’m not going to attempt a proof of this; you can read the paper or take it on trust. Instead I’ll give a little background on how this technique found its way into formal verification for hardware, along with a couple of examples of application.

Vigyan told me that when he was working on his doctorate at UC Berkeley, Prof. Bob Brayton directed his formal group to study each week certain papers he would recommend, looking for possible applications in formal methods. Based on the Wolper paper they developed a formal proof approach to checking properties related to a sequence of items transported through a design, in which the design is making decisions about routing, merging and other transport-related activities. So that’s where it all started in our world.

Quite generally they found that using this Wolper technique they could formally detect any possibility that a design could drop, duplicate, corrupt or reorder data, for any possible data sequences. Again, the naive formal approach would check all possibilities out to some sequential depth and then run out of gas. But thanks to Wolper’s insight, they could prove correct behavior using a few specific sequences composed of just a few bits of data, and from that infer correct behavior over arbitrary length sequences.

Using this technique in the simplest case, you would look at only a stream of single bits coming into the router. You constrain just two consecutive bits in the stream to 1, with all others constrained to zero; importantly, the position of those consecutive bits in the stream should be unconstrained. It's easy to get tripped up here by a simulation mindset (I did at first). Don't think of how to set up specific sequences. Think instead of what will be constrained in the proof – two consecutive bits somewhere (unconstrained) set to 1 and all others set to zero. The formal engine will take care of looking for any possible counter-example to this being undisturbed at the output.

These kinds of constraints are what is often referred to as Wolper coloring. You can add an assertion to check transmission at the output of the design using a small state machine. This state machine will accept any sequence of zeros, followed by two consecutive 1s, followed by any sequence of zeros. But it will error on a lone 1 (a bit dropped) or a 101 sequence (perhaps an erroneous data insertion or reordering). If the assertion triggers, you have a bug in the transport logic. And if you don't get an assertion trigger you know, thanks to Wolper, that the transport logic is bug-free for any sequence.
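To make the acceptor concrete, here is a small Python sketch of that checking FSM, plus an exhaustive walk over every position of the colored pair for one stream length. This is my own illustration of the idea, not Oski's code, and a real formal engine would of course prove the property symbolically rather than by enumeration:

```python
# Wolper-style check of a bit-transport element, sketched in Python.

def wolper_accepts(stream):
    """Acceptor FSM: any zeros, then two consecutive 1s, then any zeros."""
    state = "LEADING_ZEROS"
    for bit in stream:
        if state == "LEADING_ZEROS":
            if bit == 1:
                state = "SAW_ONE"
        elif state == "SAW_ONE":
            # a lone 1 means a bit was dropped somewhere
            state = "SAW_PAIR" if bit == 1 else "ERROR"
        elif state == "SAW_PAIR" and bit == 1:
            # another 1 after the pair means an insertion, duplication or reorder
            state = "ERROR"
        if state == "ERROR":
            return False
    return state == "SAW_PAIR"

def colored_inputs(length):
    """All streams of `length` bits: exactly two consecutive 1s, rest zeros."""
    for pos in range(length - 1):
        stream = [0] * length
        stream[pos] = stream[pos + 1] = 1
        yield stream

def check_transport(transport, length):
    """Return a counter-example input, or None if the property holds."""
    for stream in colored_inputs(length):
        if not wolper_accepts(transport(stream)):
            return stream
    return None

# A correct transport (pure pass-through) holds; a buggy one that
# zeroes slot 3 is caught immediately.
identity = lambda s: list(s)
drop_slot3 = lambda s: [b if i != 3 else 0 for i, b in enumerate(s)]

assert check_transport(identity, 16) is None
print("bug found for input:", check_transport(drop_slot3, 16))
```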

You can continue to refine this to handle more complex transport – if bytes are merged into words on the output, you have to adjust for two streams flowing into one. If the design is required to retry when there is no response after some time, the check has to allow for that possibility. And so on.

All of which makes it rather challenging to produce a canned application (app) to do Wolper checking. Each variant to handle product differentiation, merging options, better error handling, more complex routing, etc., requires modifications to the proof. Perhaps it's best to think of this as a powerful technique for validating transport correctness, to be used by full property-checking experts. Where, naturally, Oski would be happy to advise 😎

BTW I have also seen this method referred to as a data-abstraction technique. You are probably familiar with data abstractions when handling memories (reducing a large memory to a single word, byte or bit to simplify a proof). Think of the Wolper method as a way to do a similar thing with data streams – reducing an arbitrary-length stream to just a few bits in the stream.


TSMC OIP DAC Theater Schedule 2018
by Daniel Nenni on 06-20-2018 at 6:00 am

The TSMC OIP DAC Theater schedule is finalized and ready to go. It kicks off Monday at 10:15 am in booth #1629 and ends with a raffle at 5:45 pm each day (Mon-Tue-Wed). TSMC gives out some very nice prizes, so check in with the TSMC booth staff when you arrive. There are 66 coveted presentation spots representing the top ecosystem partners from around the world. The TSMC theater is one of the busiest, and if you look at the attached schedule you will see why.

TSMC OIP DAC:
Overview Schedule Raffle

Honorable mentions go to the presentations by companies that we work with:

  • Analog Bits: A Case Study of FinFET SERDES for AI
  • ANSYS: ADAS Reliability for Advanced FinFET Design
  • Cadence: Virtuoso Design Platform for Advanced Nodes
  • Cadence: Advanced Semiconductor Packaging
  • Cadence: IP Solutions for Advanced Nodes
  • Cadence: High Performance 7nm Digital Design
  • Flex Logix: Applications and Value Proposition of eFPGA by Market
  • Moortec: FinFET Optimization and Reliability Enhancement
  • Mentor: Verification Solutions for TSMC Advanced Packaging
  • Mentor: Verification and Advanced DRC
  • Mentor: Tessent DFT Yield Solutions for Advanced Nodes
  • SiFive: Enabling Access to Silicon
  • Silicon Creations: High Performance PLL Design on 5nm FinFET
  • Silvaco: Technology Behind the Chip
  • Synopsys: Silicon Proven DesignWare IP for TSMC Processes
  • Synopsys: Power ECOs with ANSYS RedHawk
  • Synopsys: Custom Platform for TSMC
  • TSMC OIP Update

Special mention goes to Open Silicon who sent abstracts for their TSMC theater presentations:

Topic: Turnkey 2.5D HBM2 ASIC SiP Solution for Deep Learning and Networking Applications
Presenter: Asim Salim / VP of Manufacturing Operations, Open-Silicon

The most common memory requirements for emerging deep learning and networking applications are high bandwidth and density, based on real-time random operations. High Bandwidth Memory (HBM2) meets these requirements and delivers unprecedented bandwidth, power efficiency and small form factor. Open-Silicon’s silicon proven HBM2 IP subsystem in TSMC’s FinFET and CoWoS® technologies is enabling next generation high bandwidth applications and the successful ramping of 2.5D HBM2 ASIC SiP designs into volume production.

Topic: IP Subsystem Solutions for Deep Learning and Networking Applications
Presenter: Kalpesh Sanghvi / Technical Manager of IP and Platforms, Open-Silicon

For deep learning and networking ASICs, the HBM IP subsystem and the networking IP subsystem are the main building blocks. Open-Silicon's first HBM2 IP subsystem, in 16FF+, is silicon-proven at a 2 Gbps data rate, achieving bandwidths up to 256 GBps. Open-Silicon's next-generation HBM2 IP subsystem supports 2.4 Gbps in 16FFC, achieving bandwidths over 300 GBps, and supports 3.2 Gbps and beyond in 7nm, achieving bandwidths over 400 GBps. Open-Silicon's networking IP subsystem includes high-speed chip-to-chip Interlaken interface IP, Ethernet Physical Coding Sublayer (PCS) IP, FlexE IP compliant with the OIF Flex Ethernet standard v1.0 and v2.0, and Multi-Channel Multi-Rate Forward Error Correction (MCMR FEC) IP.
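As a quick sanity check on those bandwidth figures (my own arithmetic, assuming the standard 1024-bit-wide HBM2 interface per stack):

```python
# bandwidth per stack (GB/s) = per-pin data rate (Gb/s) * 1024 bits / 8
for rate in (2.0, 2.4, 3.2):
    print(f"{rate} Gbps/pin -> {rate * 1024 / 8:.0f} GB/s per stack")
# 2.0 -> 256, 2.4 -> 307 (over 300), 3.2 -> 410 (over 400)
```

The arithmetic lines up with the 256 GBps, >300 GBps, and >400 GBps claims above.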

Topic: Package Design, Assembly and Test Strategies for Robust 2.5D HBM2 ASIC SiP Manufacturing
Presenter: Abu Eghan / Sr. Manager of Packaging & Assembly, Operations, Open-Silicon

2.5D HBM2 ASIC SiP manufacturing has unique challenges for package design, assembly, and testing, both at the wafer level and the SiP level. Open-Silicon has proven solutions and strategies available to mitigate these issues and successfully ramp ASIC SiP designs into volume production.

About DAC
The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, and for electronic design automation (EDA) and silicon solutions. A diverse worldwide community representing more than 1,000 organizations attends each year, ranging from system designers and architects, logic and circuit designers, validation engineers, CAD managers, and senior managers and executives to researchers and academics from leading universities. Close to 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, methodologies and technologies. A highlight of DAC is its exhibition and suite area with approximately 200 of the leading and emerging EDA, silicon, intellectual property (IP) and design services providers. The conference is sponsored by the Association for Computing Machinery's Special Interest Group on Design Automation (ACM SIGDA), the Electronic Systems Design Alliance (ESDA), and the Institute of Electrical and Electronics Engineers' Council on Electronic Design Automation (IEEE CEDA).


Billion Transistor Designs Need Faster Full Chip Tools
by Tom Simon on 06-19-2018 at 12:00 pm

During the design cycle, as tape-out approaches, time pressure usually goes up dramatically. To make matters worse, the design itself is much larger, because all the block-level work is done and there is a requirement to work with the entire database. It feels like it's time to put aside the garden trowel and start using a steam shovel. This is when whole-chip DRC runs are done and each change needs to be double-checked to ensure that no inadvertent changes to the design have been introduced. At this stage, all the tools that were used to initially create the design are most likely sagging under the weight of the fully assembled and nearly finished database. Fortunately, there is an EDA vendor working specifically on solving these design challenges. The Taiwanese company AnaGlobe, founded in 2000, has a solution for viewing, fixing, and comparing the largest design files in existence.

I caught up recently with Ted Chou, AnaGlobe's Corporate Applications Engineer, to go over their Thunder Integration Platform and Thunder LVL tool. One of their most significant advantages is high capacity with extremely fast database reading and writing. They have their own database format, Thunder DB, but can also read GDS, LEF/DEF, and OASIS. The Thunder DB is about one tenth the size of GDS. Thunder can read in a 17 GB database in around 8.5 minutes; Ted pointed out that other tools can take around 112 minutes for a design of this size, so the time saved is significant. Similar gains are seen for database writing. He also pointed out that their performance scales well when additional processors are used.
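Doing the arithmetic on those quoted figures (my own back-of-the-envelope, using only the numbers above):

```python
gb, thunder_min, other_min = 17, 8.5, 112
print(f"Thunder: {gb / thunder_min:.1f} GB/min vs. others: {gb / other_min:.2f} GB/min"
      f" -> roughly {other_min / thunder_min:.0f}x faster read-in")
```

That works out to about 2 GB/min versus 0.15 GB/min, a roughly 13x speedup.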

Their tools operate with the full design in memory, so no viewing or editing operation requires disk file access. This makes Thunder a great choice for viewing and fixing DRC errors found with Calibre; AnaGlobe offers integrations with Calibre and ICV. But what appears to be one of the most compelling motivations for using Thunder is its LVL capability. Unlike Calibre, Thunder LVL needs no rule deck. Another incentive is that you can use your Calibre license for something else while running Thunder LVL, and Thunder needs only one license to run on multiple CPUs, unlike Calibre.

Thunder LVL can run flat or hierarchically. Some of the runtimes that Ted shared with me were impressive: on a 170 GB design file with 393 layers, other tools took 15 hours with 12 CPUs, while Thunder LVL ran in 1.7 hours. Thunder LVL also features synchronized viewports for viewing the differences it reports in the design.

There is a lot to be said for companies that pick a niche and focus relentlessly on delivering a well-supported product. It helps that the founders come from TSMC and SpringSoft, both companies with excellent bona fides. From the looks of it, AnaGlobe is enjoying good adoption at a number of the largest semiconductor companies. This makes sense, because their market focus is on the largest designs. For more information on AnaGlobe and their Thunder products, be sure to look at their website.

In an interesting side note, I worked at Calma, the company that created the Stream format. In fact, GDS II was the name of their layout editor. Yes, there was a GDS I before GDS II, but Stream format was a GDS II utility and was never used with GDS I. I had the opportunity to meet Sheila Brady, the woman who actually wrote the very first Stream import and export utilities. Just to give a frame of reference, this was back in the early 1980s. It's a testament to good software design that GDS Stream is still in use today. Of course, it has been adapted to handle today's more complicated technologies, with the addition of more layers and datatypes and larger record sizes. However, even back then I remember her saying that she only intended it to be a tape archive format, not a database for design hand-off.

In one sense, it's amazing that it is still in common use. But this also makes the case for using newer, more efficient databases and formats for today's design challenges. Imagine if you were still using any other technology from 1980 for your daily tasks: floppy disks, 100 MHz processors, magnetic tape drives, dial-up modems…


The Starting Point of Functional Safety Analysis
by Bernard Murphy on 06-19-2018 at 7:00 am

In the course of building my understanding of functional safety, particularly with respect to ISO 26262, I have developed a better understanding of the design methods used to mitigate safety problems and the various tools and techniques that are applied to measure the impact of those diagnostics against ASIL goals. One area where I was struggling was failure mode effects analysis (FMEA), which seems to be treated as a given in many presentations. I didn't see any explanation of how the FMEA was developed in the first place. That was a problem for me, because everything else in functional safety is built on top of the FMEA; get that wrong and the rest of your effort is pointless, no matter how cleverly applied. So I asked Alexis Boutillier (Corporate Apps Manager and Safety Manager at Arteris IP) to explain.

Alexis started with a great question – how are you going to prove to your customer that your safety analysis is satisfactory? Obviously you can’t get away with “our experts checked it out and all possible failures are covered”. But you also can’t get away with “here’s a list of all the safety-critical features and related diagnostic logic”. The analysis has to be more objective and reviewable by someone not expert in your design. This is what the FMEA (within the standard) provides; a method to remove opinion, no matter how expert, from the loop.

Alexis demonstrated using a routing component in a NoC IP as an example (see the table figure). The table looks at one signal at a time and considers the potential impact of different types of error on that signal, both permanent and transient. Since this is a router, there can be impact from failures affecting the header and failures affecting the payload. Permanent (e.g. manufacturing) errors are modeled as stuck-at-zero or stuck-at-one, while transient errors (e.g. soft errors from neutron-induced ionization) might result in bit flips and possibly multi-bit errors.

Then, in each case, you determine the effect of that failure. An error in framing might lead to a transaction being lost or an unexpected transaction. An error in the address could lead to incorrect routing, and an error in the data naturally leads to bad data. You can also qualify these assessments with an explanation if they only happen in certain circumstances. Next, you describe a safety mechanism to either detect or correct that error, along with your estimate of the effectiveness of that mechanism. So now you know, for each possible failure, the coverage you have in mitigating it. So far, so good. Very systematic; you could share this with a customer and they would agree you have done a comprehensive and objective job of covering the possible failure modes.

The really interesting question is – how likely is each of these failure modes? This is where it would be easy to slide back into opinion-based debate, which would undermine the objectivity goal. One way to overcome this is to base the failure distribution on the number of signals in that mode as a percentage of the total number of signals in the design. In the first entry in the table, the mode is associated with a single signal out of 195 total signals, so it contributes just 0.5% to the distribution. This single-point fault analysis is in line with the standard and better meets an expectation of objectivity than expert judgement would. From a Tier-1 or OEM viewpoint, there's nothing objective about being asked to trust a semiconductor supplier's design expert for their opinion on probable distributions.
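Here is a minimal sketch of that signal-count-based distribution (my own illustration, not Arteris IP's tooling). The mode names and coverage numbers are hypothetical, but the first row reproduces the 1-of-195, roughly 0.5%, example from the table:

```python
# Failure-mode distribution weighted by signal count, as described above.

TOTAL_SIGNALS = 195  # total signals in the block (from the table example)

# Hypothetical failure modes: (name, signals involved, diagnostic coverage)
failure_modes = [
    ("header framing stuck-at", 1, 0.99),
    ("address bit flip",        32, 0.90),
    ("payload multi-bit error", 64, 0.60),
]

for name, signals, coverage in failure_modes:
    dist = signals / TOTAL_SIGNALS        # share of the failure distribution
    residual = dist * (1.0 - coverage)    # fraction left uncovered
    print(f"{name:26s} dist = {dist:6.1%}   residual = {residual:6.2%}")
```

The appeal is exactly what Alexis described: the weights come from countable structure, not from anyone's opinion.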

This would make building an FMEA very challenging if you started with a decent-sized IP which hadn't been designed from the outset around these objectives. A big chunk of flat RTL would be painful to analyze comprehensively (many more possible failure modes), as would logic where failure modes have interdependencies between sub-functions (which would have to be grouped together for FMEA, I would guess). Analyzing big, complex FMEAs would also make for challenging compliance reviews with customers (tell me again why this covers all possible failure modes? I got lost 20 pages back…). But if the IP is designed bottom-up for this kind of support, each component FMEA is easier to understand, and these will drop neatly into FMEA analysis in the larger system (where FMEAs can also be composed together more automatically). The net of this is that it's best to compose FMEAs in a modular and hierarchical fashion, which you can do if the IP has been micro-architected from the outset to support this analysis.

There's a lot more to safety and how IP suppliers need to support their customers in their safety activities. I touched on some of this in my last blog for Arteris IP (ISO 26262: My IP Supplier Checks the Boxes, So That's Covered, Right?). All of that support starts with the FMEA. You can learn more about how Arteris IP approaches safety in their designs HERE.


Apple and China to kill Intellectual Property?
by Eric Esteve on 06-18-2018 at 12:00 pm

The recent (since 2016) news about Apple, China, the FTC, and other organizations' positioning with respect to IP is concerning, as it seems to indicate that intellectual property in general (design IP and technology IP) is at risk. Let's consider several facts through different cases involving ARM, Qualcomm, and Imagination Technologies versus Apple, the Chinese government, or various organizations in Europe and the US.

ARM vs China
ARM is by far the #1 design IP vendor, with $1,660 million in revenues in 2017. China is becoming an important semiconductor market, where chip design activity is growing fast with the Chinese government's support. SoC design is characterized by CPU (or DSP, GPU) integration, and ARM's CPU market share is 86%. This makes ARM CPUs almost unavoidable, especially for wireless application processor design. In May 2017, ARM's owner SoftBank created a joint venture with "Chinese investors" (piloted by the Chinese government), split 49% (ARM) to 51% (China). ARM probably had no other choice if the company wanted to continue licensing IP in China, so they closed the deal.

We have learned from the press (EETimes) in June 2018 that “SoftBank Group, owner of microprocessor IP firm Arm, announced this week that the British firm will sell 51% stake of Arm’s China unit to Chinese investors and ecosystem partners for $775.2 million to form a joint venture for Arm’s business in China. Under the agreement, Arm will still receive a significant proportion of all license, royalty, software, and service revenues arising from Arm China.”

In other words, China has won control of ARM's IP business in China, thanks to a two-step maneuver… How would you describe this: official theft, or just weird business practice?

Apple vs Imagination Technologies
Apple was the #1 customer of IMG, licensing the company's GPU IP since 2007. This GPU was integrated into the Apple application processors used in the iPhone and iPad. For a licensing deal around such critical IP, both engineering teams must work very closely, sharing information about the GPU architecture, integration, and test strategies. That's why, when Apple announced in 2017 that they would develop their own GPU, IMG was not only desperate at losing about 50% of their GPU IP revenues, but also angry, because they thought it was almost impossible for Apple to develop their own GPU without using the architecture, test, or integration know-how acquired while working with IMG.

In terms of strategy, it makes sense for Apple to develop their own IP, like CPU, GPU, or even DSP. That was Qualcomm's strategy: the CPU was ARM-compatible (an architecture license), the DSP was 100% Qualcomm (thanks to an acquisition), as was the GPU. That's the best way to differentiate from the competition! The problem with the Apple/IMG case is that Apple didn't buy anything, and it is difficult to imagine that Apple's engineers will never use the know-how acquired while working with the IMG GPU IP… The result is that IMG may disappear from the IP market, and Apple's position is difficult to justify.


Qualcomm vs Apple
Another case is still unresolved: Apple has stopped paying royalties to Qualcomm for their wireless technology licensing. Qualcomm invented CDMA technology – not a surprise, as one of Qualcomm's founders is Andrew Viterbi (you probably know the algorithm better than the person: in 1967 he invented the Viterbi algorithm for decoding convolutionally encoded data). Without Qualcomm's inventions, today's smartphones wouldn't be able to integrate 100 Mbit/s modems, allowing smooth downloads of movies, pictures, and so on. In other words, without Qualcomm, a smartphone like the iPhone would probably remain an elegant but not very powerful object!

From the beginning, Qualcomm's business model has been to charge royalties as a percentage of the system selling price, not the chip (application processor or modem) price. Apple, selling iPhones for $800 to $1,000, certainly pays a higher royalty than it would on a $25-$50 chip. But Apple makes an incredibly high margin, evaluated at 70% to 80% of the iPhone price, depending on the amount of integrated flash. Apparently, paying a few tens of dollars per phone is too much for the company, so they have stopped paying royalties to Qualcomm. The case is currently being reviewed by the courts, but it shows how a large company like Apple, making huge profits on iPhone sales, can simply show contempt for intellectual property, invention, and design IP, which is a bit weird for a high-tech company…

Qualcomm vs China
Last but not least is this two-year-old case, now settled, between a Chinese organization (the NDRC) and Qualcomm. The cause was again the high royalty level that Qualcomm was charging Chinese companies for its phone-technology-related patents. After months of negotiation between Qualcomm and the NDRC, Qualcomm had no choice but to cut in half the royalties it was charging Chinese companies. The official result is that:

"Chinese enterprises will enjoy a lower SEP royalty rate (5% for 3G devices and 3.5% for 4G devices) and royalty base (65% of the net selling price of the device)."
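To put numbers on those terms (my own worked example, using the quoted rates and an illustrative $800 handset price):

```python
price = 800.00           # hypothetical net selling price of the device, USD
base = 0.65 * price      # royalty base: 65% of the net selling price
for gen, rate in (("3G", 0.05), ("4G", 0.035)):
    print(f"{gen}: {rate:.1%} of ${base:.0f} = ${rate * base:.2f} per device")
# -> 3G: 5.0% of $520 = $26.00, 4G: 3.5% of $520 = $18.20
```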

Who is the winner, and who is the victim? I'll let you decide, but across all these cases the loser is certainly intellectual property, whether it's design IP (ARM, Imagination Technologies) or technology patents (Qualcomm), and that is becoming a real concern!

Eric Esteve (IPnest), June 15th, 2018


Fractal Technologies Joins TSMC Open Innovation Platform EDA Alliance
by Daniel Nenni on 06-18-2018 at 7:00 am

In case you missed it, Fractal is now officially part of the TSMC EDA Alliance. Fractal Crossfire is the leading IP and library QA tool used by TSMC and many of TSMC's customers, so this is for the greater IP good, absolutely. Fractal has also released a new white paper, "Setup Generation for Fractal Crossfire," which we can talk about, but first let's check out the meat of the press release:

Fractal Technologies is proud to announce its acceptance as a partner in the TSMC EDA Alliance, a key component of the TSMC Open Innovation Platform® (OIP). Within this partnership Fractal Technologies will be cooperating with TSMC to support mutual customers. Fractal Crossfire provides a validation solution for the qualification of IP blocks prior to the integration of these components into final designs for manufacturing.

“TSMC recognizes the need of our customers to have a formal IP qualification handshake in place. This enables TSMC to deliver IP products that are compatible with customer-specific requirements on IP configuration. As an independent IP qualification solution, Fractal Crossfire is enabling this IP qualification capability,” said Suk Lee, senior director of the Design Infrastructure Marketing Division at TSMC.

Bottom line: leading-edge processes are breaking internal QA flows. If your QA strategy is "if it ain't broke, don't fix it," then you are in for a rude awakening.

If you are not familiar with Fractal, we have been covering them on SemiWiki for more than five years, so their landing page is a great place to start. You could also check out the IP Library and QA with Crossfire webinar we did last month.

The new white paper, "Setup Generation for Fractal Crossfire," is a quick 6 pages, and registration is not required, so it is open to all.

In this white paper, we discuss the customization process for the Fractal Crossfire IP qualification tool. We review the toolbox provided by Crossfire to automate the setup process, and the ways in which a design organization can further leverage a well-designed IP qualification setup by providing it as a standard to its suppliers.

You can also see Fractal at the Design Automation Conference next week in booth #2333. I will be there signing copies of “Fabless: The Transformation of the Semiconductor Industry” compliments of Fractal. I hope to see you there!

About Crossfire
Mismatches or modelling errors in libraries or IP can seriously delay an IC design project. Because of the increasing number of different views required to support a state-of-the-art deep submicron design flow, as well as the complexity of the views themselves, library and IP integrity checking has become a mandatory step before the actual design can start. Crossfire helps CAD teams and IC designers perform integrity validation for libraries and IP, ensuring that the information represented in the various views is consistent across them. Crossfire improves the quality of your design formats.

About Fractal Technologies
Fractal Technologies is a privately held company with offices in San Jose, California and Eindhoven, the Netherlands. The company was founded by a small group of highly recognized EDA professionals. For more information: http://www.fract-tech.com/.
