
A Brief History of Methodics
by Daniel Nenni on 04-29-2019 at 12:00 pm

Methodics has been a key player in IP management for over 10 years. In this section, Methodics shares their history, technology, and their role in developing IP Lifecycle Management (IPLM) solutions for the electronics industry.

Methodics is recognized as a premier provider of IP Lifecycle Management (IPLM) and traceability solutions for the enterprise. Its solutions enable high-performance analog/mixed-signal, digital, software, and SoC design collaboration across multi-site, multi-geography design teams, and allow those teams to track the usage of their important design assets.

The journey started in 2006, when Methodics was founded by two ex-Cadence experts in the custom IC design tools space, Simon Butler and Fergus Slorach. After leaving Cadence, they had started a consulting company called IC Methods, active in Silicon Valley from 2000 to 2006. As their consulting business grew, they needed to create a new company to service an engagement that had turned into a product for analog data management. With IP management in their DNA, they reused the IP in their consulting company name and Methodics was born!

Methodics' first customer was Netlogic Microsystems, which was later acquired by Broadcom. Netlogic used the first commercial product developed by Methodics, VersIC, which provides analog design data management for Cadence Virtuoso. The development of VersIC was unique in that Methodics did not have to also develop an underlying data management layer, as the first-generation design data management companies in the semiconductor industry had to. During the late 1990s and early 2000s, a number of data management solutions had entered the market. Some were open source, such as Subversion, and others were commercially available, like Perforce. These solutions had developed very robust data management offerings and were in use by hundreds of thousands of users across multiple industries.

To leverage these successful data management solutions, Methodics made the architectural decision to build a client layer on top of these products, allowing the team to focus its engineering efforts on developing a unique, full-featured client rather than developing and maintaining a design data management layer of its own. Customers benefited from this arrangement by having a full-featured client integrated directly into the Virtuoso environment on top of a robust, widely used data management layer, without necessarily having to concern themselves with the inner workings of the data management system.

It wasn’t too long before Methodics’ customers started asking for a solution that could be used in the digital domain as well. With more companies adopting design reuse methodologies and using third-party IP, Methodics decided to not only deliver a solution for digital design, but also one that could be used to manage and track IP reuse throughout their companies. This led to the development of ProjectIC, which could be used not only for digital design, but for analog design as well.

ProjectIC was an enterprise solution for releasing IPs, cataloging them for reuse and SoC integration, tracking bugs across IPs, and managing permissions. ProjectIC also allowed for comprehensive auditing of IP usage and user workspaces. With ProjectIC, managers could assemble configurations of qualified releases as part of the larger SoC and make them available for designers to build their workspaces. Workspace management was a key technology within ProjectIC as well, and Methodics created a caching function to allow data to be populated in minimal time. Like VersIC before it, ProjectIC was built on top of the growing number of solutions available for data management, which allowed customers to quickly integrate it into their development methodologies, especially if design teams had already adopted a commercially available system for data management.

In 2012, Methodics acquired Missing Link Software, which had developed Evolve, a test, regression, and release management tool focused on the digital space. Evolve tracked the entire design test history and provided audit capabilities on what tests were run, when, and by whom. These were associated with DM releases and provided a way to gate releases based on the required quality for that point in the design's schedule.

With the acquisition of Missing Link, Methodics began to focus on the traceability of design information throughout the entire development process. While the core Methodics solutions could keep track of who was developing IP, who was using which releases in which designs, and which designs had taped out using specific releases, customers wanted even more visibility into the life cycle of the IP. They wanted to know what requirements were used in developing an IP, whether it was internally developed or acquired, which versions of the IP incorporated which features based on those requirements, and how the IP was tested, verified, and integrated into the design. What customers needed was not only an IP management solution, but a methodology that could be adopted to track the lifecycle of an IP.

In 2017, Methodics released the Percipient platform, its second-generation IP Lifecycle Management solution. Percipient built on the success of ProjectIC, but also began to allow for integrations into other engineering systems. To fully track an IP's lifecycle, Percipient created integrations into requirements management systems, issue and defect systems, program and project management systems, and test management systems. These integrations allow for a fully traceable environment for the lifecycle of an IP, from requirements, through design, to verification. Users of the Percipient platform can now not only track where an IP is used and which version is being used, but can also see what requirements were used in the development of an IP, any outstanding issues that IP might have and which other projects are affected, and whether the IP is meeting requirements based on current verification information.

Today, Methodics continues to develop solutions for fully traceable IP lifecycle management, as well as solutions for mission-critical industries that require strict adherence to functional safety standards such as ISO 26262 for automotive and DO-254 for aerospace. Methodics is also working on solutions to increase engineering productivity. With workspaces growing exponentially, Methodics is developing solutions like WarpStor, which virtualizes engineering workspaces and drastically reduces data storage requirements while increasing effective network bandwidth. With the adoption of cloud computing by semiconductor companies, Methodics is also working on solutions to help customers work with hybrid compute environments of on-premise and cloud-based resources. Just as it was in 2006, Methodics' goal is to bring value to engineering teams by making the development environment more efficient, enabling close collaboration and the optimization of resources.


The Evolution of the Extension Implant Part I
by Daniel Nenni on 04-29-2019 at 7:00 am

The 3D character of FinFET transistor structures poses a range of unique fabrication problems that can make it challenging to get these devices to yield. This is especially true for the all-important Extension implant that is put in place just prior to the nitride spacer formation.

The Extension implant is a central component of any transistor because the physical distance between the two elements of this high-dose implant defines the speed of the transistor. In planar transistors the Extension implant is self-aligned to the edges of the gate electrode and is the chief reason why great effort has been made in the past to minimize the Length (Lg) of the gate electrode, and in so doing, improve transistor performance.

In planar devices the Extension implant is realized by implanting dopant at an angle of 90 degrees to the silicon surface on either side of the gate electrode (refer to figure #1).

However, since the channel in a FinFET device is perpendicular to the silicon surface, this methodology is not an option. Instead, an angled implant is employed that implants the top and both sides of the fin, usually at a steep angle as illustrated in figure #2.

The issue with implanting the Extension at such a steep angle is that a large percentage of the dopant is not retained on the fin, but is instead ricocheted off. The relationship between the Extension implant angle and dopant retained on the sidewall is illustrated in figure #3. As this figure indicates, the steeper the implant angle, the less dopant is retained on the fin sidewalls.


Figure #3

Unfortunately, the height of the photoresist used to shield the PMOS devices during the NMOS Extension implant (and vice versa) dictates that a steep double implant be used, at angles of +20 and -20 degrees. Figure #2 illustrates one of these two implants.

This problem was mitigated in the first iteration of the FinFET architecture by the fact that the fins were sloped. This increased the incident angle of the implant and allowed more dopant to be retained on the fin sidewalls. However, at the 14nm and 10nm nodes the fins were tall and vertical, and the mitigating effect of sloped fins was absent (refer to figure #4).

The solution to this problem was to replace the tall photoresist used to separate the NMOS Extension implant and the PMOS Extension implant with a short hard mask that would permit a greater implant angle, and in so doing, allow a greater percentage of the dopant to remain on the fin sidewalls. Figure #5 illustrates this approach. By employing a short hard mask material to shield the PMOS devices during the NMOS Extension implant (and vice versa), an implant angle of as much as +/-30 degrees can be used, and the majority of the dopant in the Extension implant is retained on the fin sidewalls, ensuring a high-performance device.


Figure #5

Of course the use of a hard mask in lieu of photoresist will only be advantageous in very dense regions where there would normally be numerous, tall photoresist lines. Since this is the most common situation on sub 14nm devices, this technique can be very useful.

Ultimately, however, the primary limiting factor in realizing the Extension implant into the fin is fin-to-fin shadowing. This is an increasing problem with the tall, closely spaced fins present at the 10nm node and below.
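To see why, a simple geometric bound helps (a back-of-the-envelope sketch; the fin dimensions below are illustrative assumptions, not values from this article). Ignoring any mask topography, a beam tilted at angle theta from the wafer normal can only reach the bottom of a fin sidewall if the neighboring fin does not block it, which requires roughly

    \theta_{\max} \approx \arctan\!\left(\frac{P - W_{\text{fin}}}{H_{\text{fin}}}\right)

where P is the fin pitch, W_fin the fin width, and H_fin the fin height. With an assumed 42nm pitch, 8nm fin width, and 45nm fin height, theta_max works out to about 37 degrees; for tighter, taller fins (say a 34nm pitch, 7nm width, and 53nm height) it drops to roughly 27 degrees, below the +/-30 degrees achievable with a short hard mask, which is why fin-to-fin shadowing becomes the dominant constraint.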

For more information on this topic, and for detailed information on the entire process flows for the 10/7/5nm nodes, attend the course “Advanced CMOS Technology 2019” to be held on May 22, 23, and 24 in Milpitas, California.


EDA Update 2019
by Daniel Nenni on 04-26-2019 at 12:00 pm

Over the last six years EDA has experienced yet another disruption, not unlike the Synopsys acquisition of Avant! in 2001, which positioned Synopsys for the EDA lead they still enjoy today, or the hiring of famed venture capitalist Lip-Bu Tan in 2009 to be the CEO of struggling EDA pioneer Cadence Design Systems. Under Lip-Bu’s command Cadence has prospered like no other company in the history of EDA, absolutely.


In 2017 Siemens acquired Mentor Graphics for $4.5B, representing a 21% stock premium. Acquisition rumors had been flying around the fabless semiconductor ecosystem, but no one would have guessed the buyer would be the largest industrial manufacturing company in Europe. At first the rumors were that Siemens would break up and sell off Mentor, keeping only the groups that were part of Siemens' core business; specifically, they would sell the Mentor IC Group. Those rumors were flatly denied during a CEO roundtable at the following Design Automation Conference, and now Mentor, including the IC group, is an integral part of the Siemens corporate strategy.

While Mentor was the biggest and most disruptive EDA acquisition, there were many others. EDA has always been focused on non-organic growth (acquisitions), which we track on SemiWiki with our EDA Merger and Acquisitions Wiki. Synopsys is the most acquisitive EDA company, scooping up EDA and IP companies as well as companies outside of the semiconductor ecosystem. In the last six years Synopsys has acquired 10 companies involved with software security and quality, including Black Duck Software in 2017 for $547M. In total Synopsys has acquired more than 88 companies, and we should expect the acquisition spree to continue.

Mentor's financials are no longer public, but inside sources say that revenue growth since the acquisition has far exceeded expectations, based on the extended reach of the Siemens workforce. Some estimate it to be as high as 25% growth. Synopsys and Cadence have also prospered since the Mentor acquisition was announced, with revenues and market caps jumping in a very unEDA way. The Synopsys (SNPS) stock price has almost doubled and the Cadence (CDNS) stock price has more than doubled. Clearly Wall Street has a renewed interest in EDA, as it should. After all, EDA is where electronics begins.

Another significant EDA change over the previous six years is the customer mix. Following Apple's lead, systems companies are now taking control of their silicon destiny and developing their own chips. We see this on SemiWiki in the domain additions of our expanding readership. Systems companies now dominate our audience, with rapid growth in the IP, AI, Automotive, and IoT market segments.

Systems companies are also changing the way EDA tools are purchased. Rather than buying point tools and assembling custom tool flows (a fabless tradition), systems companies can buy complete tool flows and IP from Synopsys, Cadence or Mentor. The “One throat to choke” concept of customer support is a very attractive business strategy for companies venturing into the world of chip design for the first time.

Systems companies are good candidates for EDA in the cloud which is finally coming to fruition after many failed attempts. Cadence has been in the cloud for many years starting with Virtual CAD (VCAD) more than 20 years ago, Hosted Design Solutions (HDS) 10 years ago, and the Cadence Cloud announcement in 2018 with TSMC, Amazon, Microsoft, and Google as partners. In 2019 they announced the Cloudburst Platform which is another important EDA step towards full cloud implementation.

System companies are also not bound by the margin challenges of traditional fabless semiconductor companies. Apple, for example, can pay a much higher price for premium tools and support without a noticeable effect on its bottom line. As a result, EDA companies are catering to system companies by providing IC tools integrated with system-level design tools. System-based software development is also an EDA target, as noted by the recent Synopsys acquisitions.

EDA has prospered in the last six years like no other time in EDA history and will continue to do so as semiconductors and electronic products continue to dominate modern life, absolutely.


A Quick TSMC 2019 Tech Symposium Overview
by Daniel Nenni on 04-26-2019 at 7:00 am

This year TSMC did a FinFET victory lap with the success of 16nm, 12nm, 10nm, and 7nm. It really is well deserved. Even though TSMC credits the ecosystem and customers, I credit TSMC and their relationship with Apple since it has pushed us all much harder than ever before. TSMC CEO C.C. Wei summed it up nicely in his keynote: Innovation, collaboration, and hard work.

Tom Dillinger also attended and he will be writing in more detail next week. Tom has been busy of late. He just finished his second textbook on VLSI Design Methodology Development for Prentice Hall. Remember, Tom started the FinFET discussion on SemiWiki in 2012 so you can bet FinFETs will be mentioned a time or two.

Here is an outline of what Tom will be writing about next week so stay tuned:

Advanced Technology Development and Materials Engineering:
1) N7/N7+ update
highlights: “Making 5G a Reality” — N7 as a technology enabler; D0 defect density improvement; 112Gbps PAM4 SerDes IP

2) N6 update
highlights: PPA comparisons to N7; ease of RTO/NTO migration from N7; new M0 routing (very unique!)

3) N5 update
highlights: schedule; D0 ramp; PPA comparisons to N7; “designed from the start for both mobile and HPC platforms”

4) Advanced process development / materials engineering
highlights: additional Vt devices for HPC in N5 and ULL/ULP technologies; high mobility (Ge) channel device; metal RIE (replacing damascene patterning); new metallization materials (graphene cap on Cu); future research into 2D semiconductor materials

5) Manufacturing excellence
highlights: focus on in-line process monitoring; maverick lot identification; “ink out good die in a bad zone” (very unique!); Continuous Process Improvement focused on wafer edge (very unique!); product-specific UpperSpecLimit + LowerSpecLimit statistical process control (also very unique!)

6) Roadmap for automotive platform
highlights: new “L2+” automotive grade introduced; focus on DPPM reduction; MCU’s in a vehicle transitioning from eFlash to MRAM memory offering

7) Roadmap for IoT platform
highlights: new eHVT device; unique analog device process engineering; new “dual-rail VDD” SRAM offerings (aggressive SRAM_Vmin scaling); MRAM roadmap

8) RF process development focus
highlights: device engineering to improve ft and fmax (in several processes), new thick metal to improve inductor Q factor; device model characterization

Although the specialty technologies presentation was very interesting, there’s probably not enough room in the article to cover MEMS, CIS (at near infrared wavelengths), etc.

“Front-End” and “Back-End” Advanced Packaging:
1) SoIC
highlights: diverse die size and stacking options (e.g., face-to-face and face-to-back bonding)

2) CoWoS
highlights: reticle size roadmap, embedded deep trench caps (DTC) in Silicon interposer

3) InFO
highlights: InFO_PoP through InFO via (TIV) scaling; InFO without substrate (2020)

4) 3DIC ecosystem support

History is always a part of semiconductor symposiums because semiconductors really have come a long way, fueled by a series of technological disruptions. When I went away to college my beautiful girlfriend (wife) used to write letters to me every day and call me on the weekends. My parents and grandparents had similar experiences. Shortly after we married, PCs and the internet landed on our desks and we emailed and Usenet-ed our way around the world. Then came smartphones and social media, probably the biggest disruption of them all. Phones are now in our hands and faces more than ever before, but that is going to change.

The next disruption will be fueled by 5G and AI which is just now beginning. If you think semiconductors are important today just wait another ten years because you will not be able to survive without them, absolutely.

TSMC and the semiconductor industry have been living a very mobile life since PCs and phones left our desks. Moving forward, AI enabled edge devices will continue to be a semiconductor industry driver but the real upside for the foundry business will be getting the many zettabytes of data into the cloud and processed. Today Intel CPUs and GPUs dominate the cloud. Tomorrow it will be custom AI processors built by the cloud companies themselves in close partnership with the fabless semiconductor ecosystem and that means TSMC.

From writing letters to “real-time thought processing” in one lifetime, simply amazing.

Also read: 2019 TSMC Technology Symposium Review Part I


Deep Learning, Reshaping the Industry or Holding to the Status Quo
by Daniel Payne on 04-25-2019 at 12:00 pm

AI, Machine Learning, Deep Learning and neural networks are all hot industry topics in 2019, but you probably want to know if these concepts are changing how we actually design or verify an SoC. To answer that question, what better source than a panel of industry experts who recently gathered at DVCon with moderator Jean-Marie Brunet from Mentor, a Siemens business.


Semiconductor Equipment Revenues To Drop 17% In 2019 On 29% Capex Spend Cuts
by Robert Castellano on 04-25-2019 at 7:00 am

The semiconductor equipment market grew 37.3% in 2017 on the heels of capex spending by memory companies aiming to increase bit capacity and move to more sophisticated products with smaller nanometer dimensions. Unfortunately, these companies overspent, resulting in an excessive oversupply of memory chips. As memory prices started dropping, these companies put a halt to capex spending, and global equipment revenues increased only 13.9% in 2018. With excess inventory continuing to build in 2019, capex spending by these companies is projected to drop 29%, which will result in a significant reduction in equipment revenues this year.

As a result, the global semiconductor equipment market is forecast to drop 17% in 2019, reaching revenues of $64.5 billion, according to The Information Network’s report, “Global Semiconductor Equipment: Markets, Market Shares, Market Forecasts.”

To analyze the equipment market in 2019, we need to look at revenues and market shares for previous years. For 2018, Applied Materials (AMAT) ended the year with a market share of 18.8%, down from 21.2% in 2017, as shown in Chart 1. Fellow U.S. supplier Lam Research (LRCX) held a 16.8% share in 2018, down from 16.9% in 2017.


Japanese supplier Tokyo Electron Ltd. (OTCPK:TOELY), a major competitor of AMAT and LRCX in the deposition and etch sectors, held a 16.7% share in 2018, an increase of 1.6 points from a 15.1% share in 2017.

Chart 2 illustrates the year-over-year change in revenue for the leading semiconductor equipment companies. As I said above, the overall market increased 13.9%, so Lam's growth of 13.7% accounts for its 0.1-point loss in share. AMAT's lackluster growth of only 1.1% in 2018 contributed to its loss of 1.6 share points; its growth was less than the composite growth of the companies ranked 8 through 75. The company has been losing market share to competitors year over year for the past three years.
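As a quick sanity check on the share arithmetic (a back-of-the-envelope sketch using only the percentages quoted above, not additional data from the report), a supplier's new share is just its old share scaled by its own growth relative to the market's growth:

    # Rough market-share check: new share = old share * (1 + company growth) / (1 + market growth)
    def new_share(old_share_pct, company_growth_pct, market_growth_pct):
        return old_share_pct * (1 + company_growth_pct / 100) / (1 + market_growth_pct / 100)

    # Lam Research: 16.9% share in 2017, grew 13.7% while the market grew 13.9%
    print(f"LRCX: {new_share(16.9, 13.7, 13.9):.2f}%")  # ~16.87%, consistent with the reported slip to 16.8%

    # Applied Materials: 21.2% share in 2017, grew only 1.1%
    print(f"AMAT: {new_share(21.2, 1.1, 13.9):.2f}%")   # ~18.82%, matching the reported 18.8%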

KLA-Tencor (KLAC) grew 26.1%. I noted in a January 31 Seeking Alpha article entitled “KLA-Tencor And Metrology/Inspection Peers Will Be Less Impacted By Memory Capex Cuts Than The Rest Of The Equipment Industry In 2019” that a shift to more inspection in the fab was underway.


AMAT sells equipment for nearly all the processes used to make a semiconductor chip. Its two major segments are deposition and etch. In 2015-2017, deposition made up 46% of AMAT's revenue, while etch made up 18-20%. Together, these two sectors represent about two-thirds of the company's revenues.

In 2018, the deposition market grew 3.9% and etch grew 4.4%. Since AMAT's total revenues grew just 1.1%, it is obvious that the company lost share in both sectors to its competitors, namely Lam Research, Tokyo Electron, and Hitachi High Technologies (OTC:HICTF).

2019 Analysis
Based on capex spend data as detailed in Table 1, The Information Network projects the semiconductor equipment market will drop 17% in 2019, compared to growth of 13.9% in 2018.


The main reason is the drop in capex spend by memory companies tied to the inventory overhang and oversupply of 3D NAND and DRAM chips. In total, capex spend is projected to drop 29.1% in 2019.


But there is another important factor investors need to consider: market share gains and losses. Why are market shares important?

Semiconductor manufacturers purchase equipment based on a “best of breed” strategy. Market share losses indicate equipment is not best of breed. When a customer decides to make additional equipment purchases to increase capacity, it will buy more from its current supplier. This means further market share gains.


A Brief History of IP Management
by Daniel Nenni on 04-24-2019 at 12:00 pm

As RTL design started to increase in the late 1980s and early 1990s, it was becoming apparent that some amount of management was needed to keep track of all the design files and their associated versions. Because of the parallels to software development, design teams looked to the tools and methodologies that were in use by software teams at the time.

Software teams had adopted Software Configuration Management solutions to handle the organization and versioning of their source code. RCS and CVS were two of the most popular revision control systems in use at the time, and semiconductor development teams began to adopt these for their development environment, eventually building methodologies around the use of these solutions.

It quickly became apparent that the differences between hardware and software design meant that more customized solutions needed to be developed for semiconductor development teams. Binary databases for analog design needed to be supported, integrations into the EDA environment were needed, and support for scripting and configuration files for EDA tool flows had to be developed.

In 1993, the consulting group at VIEWLogic began work on providing the first such environment for hardware teams. Building on top of RCS, they released ViewData, a plugin for the PowerView framework. This solution began to address the needs of managing configurations of files where RTL, schematics, and layout all made up the final design configuration.

In 1995, Dennis Harmon, Mitch Mastelone, Norm Sozio, and Eugene Connolly left VIEWLogic to form Synchronicity with the goal of providing the first true semiconductor design data management system that would manage design data across different development platforms and EDA tool environments. In 1996, they released DesignSync, which was built on top of a custom data management system that could handle the RTL and other ASCII data, and connectors into the solution that would interface with the EDA tools at the time. This solution became popular with analog designers, as now there was a way to handle the binary data and custom frameworks associated with Analog design.

Two years later, Srinath Anantharaman founded ClioSoft to continue to fill in the gaps that were not met by software SCM tools. ClioSoft launched the SOS design collaboration platform to target the challenges of hardware design. Like DesignSync, ClioSoft built SOS on top of a customized data management system, and developed technology to augment the traditional SCM approach to create a hardware configuration management (HCM) system while partnering with EDA companies to provide specific connectors into the EDA tools and methodologies.

In the ensuing years, there was a rise in the development of commercially available data management (DM) platforms. IBM Rational's ClearCase and Perforce's Helix were being adopted by development teams in many different industries. A new generation of open source solutions was also being developed, such as Subversion and, later, Git. This enabled a second generation of design management products to be introduced to the market, built on top of these commercially available platforms instead of running on proprietary data management systems.

In 2003, Shiv Shikland and Dean Drako founded IC Manage. Building on top of Perforce’s Helix data management solution, they released their Global Design Platform (GDP). By choosing to release their solution on top of a commercially available DM system, design teams were able to use a common DM system for software and hardware design, with the GDP client able to be customized for the needs of hardware designers.

Four years later, Simon Butler and Fergus Slorach founded Methodics. Methodics also chose to run on top of commercially available systems, but instead of limiting the solution to a single platform, they chose to allow users to run their choice of platforms, with Perforce and Subversion being the two most popular at the time. This further allowed customers to mix and match backend DM systems to fit their needs while having a common client, VersIC, running on top of the different systems for hardware design.

As design reuse began to gain traction in the early 2000s and the use of third-party IP began to grow, semiconductor designers were faced with the challenge of managing designs for reuse and managing the acquisition of third-party IP. Design teams needed to know where to find internal IP for reuse and be able to track which versions were being used, in which projects, and which products had taped out with which versions of IP. Third-party IP complicated the problem, as each acquired IP often came with a different contract that stipulated how the IP provider was to be paid for the IP's use. Often, users of this IP would have to track varying business terms: who looked at the IP, whether it was used once or many times in a design, how many different designs it was used in, or how many parts were ultimately shipped after tapeout.

Semiconductor design teams looked to the design management companies to provide solutions in this area. Synchronicity was first to market in the IP management space with IP Gear, Methodics released ProjectIC, IC Manage developed IP Central, and ClioSoft released DesignHub. Later, in 2004, Synchronicity was acquired by MatrixOne, developer of one of the first PLM systems, to bring semiconductor design management closer to systems development. MatrixOne was in turn acquired by Dassault Systemes in 2006. While DesignSync lives on as part of the ENOVIA PLM group inside Dassault, IP management has been integrated into the ENOVIA PLM platform itself. Methodics has released Percipient as a follow-on to ProjectIC, incorporating an IP Lifecycle Management (IPLM) methodology into the solution and providing integration with other engineering systems such as requirements management and issue and defect systems.

Today, SoCs continue to take advantage of reuse, with the number of IP cores in an SoC exceeding 100, and the challenges of managing IP are still increasing. Functional safety requirements, such as ISO 26262 for automotive and DO-254 for aerospace, push semiconductor companies to provide evidence of a traceable path from requirements through design to verification and to document all work that has been done to meet those requirements. The need for these traceable flows requires that IP management systems have links into requirements, verification, and document management systems. The increasing use of third-party IP is making designers look for robust IP portals with abundant IP metadata available so that they can accurately compare IP from different vendors. With the industry's dependence now on IP, IP management systems will remain core to the effective collaboration of design teams for years to come.

Also Read

Three things you should know about designHUB!

Data Management Challenges in Physical Design

Webinar: Tanner and ClioSoft Integration


Foundational Excellence in a Laid-Back Style
by Bernard Murphy on 04-24-2019 at 7:00 am

I recently had a call with Rob Dekker, Founder and CTO of Verific. If you’re in EDA or semiconductor CAD, chances are high that you know who they are. They’re king of the hill in parser software for SystemVerilog and VHDL. When you hear a line like that, you assume a heavy dose of marketing spin, but here it really is fact. I don’t know of anyone else in this line with their market presence. They’re used by all the EDA majors and by CAD groups in leading semiconductor and systems companies (Intel and Google, to drop a couple of names).

I have some familiarity with this space since I was for a short time in Interra Systems before we spun out as Atrenta, and Interra provided our Verilog and VHDL language parsers. I don’t know about the business side of that activity but I do know we were always struggling to keep up with the standards and, more challenging, vendor-specific wrinkles on those standards. When you’re low in the EDA value chain and you’re using your own parsers, that’s a constant headache in competing with the big tool providers. Using Verific for parsing eliminates those headaches and lets you focus on your differentiating value-add.

I asked Rob what got them started on this path. He had been responsible for language front-ends for the Exemplar logic synthesis software back in the 1990s. In 1999, after Exemplar was acquired by Mentor Graphics, he decided to start his own company. He was originally thinking about developing a formal verification tool (hence the name Verific), but of course had to start with language parsing and RTL elaboration, the front-end to any formal tool. So he built that and found several customers who were interested in licensing that software.

A couple of years later, a company developing an equivalence checker approached him wanting to license the parsers. That was one of those defining forks in the road for a small company: if he continued along the path he originally planned, he would be competing with a customer. Instead he decided to stay in the parser business and do the very best he could in that domain. The formal company became a customer, and fairly quickly after that most formal providers were using Verific parsers.

The business model is a little unusual but seems to work well for them and for their customers. They were clear from the outset that they wanted to be in the (software) IP business, not the services business, but that they would license source-code rather than compiled libraries. Customers can build on their favorite hardware/OS platforms, as best suits their needs. Of course if you have source code, you can change it. The model here seems to be that you’re likely only to make minor tweaks. Verific will support these changes, merging them on top of a standard release and re-regressing with their test suites before release back to the customer. Rob says that customers like this model. In the event of something bad happening to Verific, customers already have hands-on experience with the source-code, a possibility which remains theoretical in most software license agreements.

The pricing philosophy is equally simple: the price has to be 50% of what the customer thinks it would cost them to develop the software themselves. Customers are always optimistic when they do this calculation, so the real number ends up closer to 25% of Verific's development cost, which means they have to sell four copies before they start to make money. It turns out that customers find this very reasonable, so they don't run into a lot of resistance.
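Put as arithmetic (an illustrative sketch of that logic with a made-up development cost, not Verific's actual numbers):

    # Illustrative break-even sketch for the pricing logic described above
    dev_cost = 1_000_000                     # hypothetical cost for Verific to build a parser
    customer_estimate = 0.5 * dev_cost       # customers optimistically assume about half the real effort
    price = 0.5 * customer_estimate          # 50% of the customer's estimate, i.e. 25% of dev_cost
    copies_to_break_even = dev_cost / price
    print(copies_to_break_even)              # -> 4.0, the "sell four copies" point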

You’re probably wondering about the available market for products like this. Rob said that they originally mostly targeted EDA developers, for formal, synthesis, some simulators, hardware accelerators, even virtual prototypers. The Verific software is built in C++, with C++ interfaces, so is a natural fit for that type of development. They still find some new business in this area but have seen more growth over time in semiconductor CAD groups, in traditional semis and in design groups in the big systems houses. There’s still need for a lot of custom tooling in these groups and Verific provides a good turnkey front-end to RTL analysis.

However, in-house CAD groups are generally not as enthusiastic about C++ development; their development languages of choice tend to be Python or Perl. Verific’s first pass at meeting this need was to wrap underlying C/C++ APIs for these languages. I’ve been there, done that so can sympathize with Rob’s statement that this didn’t help so much. APIs for these kinds of applications tend to be overwhelming. You can do anything you want to do, but it takes forever to figure out how. In 2017 Verific solved this problem by acquiring the INVIO platform from Invionics. INVIO builds on top of the basic APIs with a much simpler object-based model and the kind of lookup functions you’d expect to have in a Tcl interface. I’d imagine this is a big hit with CAD developers and probably even with designers.

Asking Rob about long-term goals, I got an answer you’d never hear in Silicon Valley, perhaps because this is a company with strong European roots. Rob feels they are in a good niche market; they are already the industry standard with little competition, they like where they are and don’t feel the need to grow too fast. Which is just as well, because he doesn’t see massive room for growth. They have been able to manage double digit growth each year, which is fine by them, helped along now and again by a new parser, such as a recent introduction for UPF.

In an industry where CAGRs must be spectacular and competition is a blood sport, this is a refreshing change. Rob told me the reason they chose a giraffe as a logo was that it has a good overview of its surroundings, but at the same time has a gentle and non-aggressive nature both internally and with partners. Quality of life as a primary goal – an interesting differentiator.


Rambus Take on AI in the Era of Connectivity at Linley Processor Conference
by Camille Kokozaki on 04-23-2019 at 12:00 pm

Steven Woo, Fellow and Distinguished Inventor at Rambus, presented a talk about AI in the Era of Connectivity at the just-concluded Linley Spring Processor Conference. As he put it, the world is becoming increasingly connected, with a marked surge of digital data and a growing dependence on that data. The explosion of digital data and AI feed off each other. Consequently, architectures are evolving to more efficiently capture, secure, move, and process the growing volume of digital data.

Data centers are evolving, and data processing is moving to the edge. Data is increasingly valuable, sometimes more so than the infrastructure itself, so securing this data is essential. Power efficiency is also a key consideration. There is an interesting evolution/revolution in how data is captured, processed and moved. AI techniques have been around for decades, so why the sudden resurgence of interest? Faster compute and memory along with large training sets have enabled modern AI. With transistor feature size limits being reached and the increased need for performance coupled with energy efficiency mandates, new approaches are clearly needed and are indeed emerging.

AI relying on CNN (convolutional neural network) is suddenly taking off due to its increasing accuracy as the data and the model size increase. To support this evolution, Domain Specific Architectures (DSAs), have emerged with specialized processors targeted specifically for some tasks away from general purpose compute. Memory systems are critical in these systems and can range from On-Chip Memory to High Bandwidth Memory (HBM) and GDDR. On-Chip Memory provides the highest bandwidth and power efficiency, with HBM exhibiting very high bandwidth and density, while GDDR sits in the middle and provides a good trade-off between bandwidth, power efficiency, cost and reliability.



With data growing in value, security is challenged by the increased sophistication of intrusion attempts and the exploitation of vulnerabilities. The attack surface is also growing due to infrastructure diversity, pervasiveness and the variety of user interaction, with the spectre of a meltdown foreshadowing, pun notwithstanding, more havoc to come.

Rambus has a new approach called Siloed Execution that improves security where physically distinct CPUs separate secure operations from other ones that require fast performance. The security CPU can be simplified and armored for tighter security for secret keys, secure assets and apps, and privileged data, remaining uncompromised even if the general-purpose CPU is hacked. Rambus has such a secure CPU, the CryptoManager Root of Trust which provides secure functionality for secure boot, authentication, run-time integrity and a key vault. It includes a custom RISC-V CPU, secure memory and crypto accelerators such as AES and SHA. With a secure CPU integrated on the chip you can monitor run-time integrity in real time in the system and make software/hardware adjustments as needed.

The AI infrastructure connection comes from allowing cloud-shared neural network hardware to be used by multiple users, who can now encrypt their training sets and even their models, with the security CPU managing different keys to decrypt that information for each user. Rambus' CryptoManager Root of Trust allows a multi-root capability: one key decrypts the first user's data, giving access to the model parameters for training and inference, while a second user's data is decrypted with a separate set of keys and run on the same hardware.
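To make the multi-key idea concrete, here is a minimal software-only sketch (written against the open-source Python cryptography library purely for illustration; it is not Rambus hardware or the CryptoManager API) of how each tenant's training data can be sealed under its own key, so shared hardware only ever sees plaintext for the tenant whose key is currently in use:

    # Minimal illustration of per-user key isolation for shared AI hardware.
    # A hardware root of trust would hold these keys in a secure key vault,
    # not in Python variables as done here.
    from cryptography.fernet import Fernet

    # Each tenant gets its own key (in practice provisioned into the secure CPU's key vault)
    keys = {"user_a": Fernet.generate_key(), "user_b": Fernet.generate_key()}

    def encrypt_training_set(user, data: bytes) -> bytes:
        return Fernet(keys[user]).encrypt(data)

    def decrypt_for_training(user, blob: bytes) -> bytes:
        # Only the matching tenant key recovers this tenant's data or model parameters
        return Fernet(keys[user]).decrypt(blob)

    blob_a = encrypt_training_set("user_a", b"user A training samples")
    assert decrypt_for_training("user_a", blob_a) == b"user A training samples"
    # Decrypting with another tenant's key raises cryptography.fernet.InvalidToken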

On the memory side there is a wide range of solutions available, each appropriate for certain applications, with no one size fitting all. On the security side, the data itself is becoming in some ways more valuable than the infrastructure. It is important to secure not only the infrastructure but also the training data and models, as they can be your competitive advantage. Over time, users will need simpler ways to describe their models, with compilers transforming descriptions that are easy for the user to write into implementations that run extremely well on hardware. The focus will shift to describing the job; less and less knowledge of how neural networks work will be needed, and software will enable this transformation on top of whatever the latest hardware provides.

Dr. Woo stressed that AI is driving in a sense a computer architecture renaissance. Memory systems now offer multiple gradations of AI options for data centers, edge computing and endpoints. As the data value is increasing, with growing security challenges, security by design is imperative as complexity and data grows with no sign of slowing down. If you get AI, then get security, and get going with functional integration and task separation, all in one AIdea. Sounds like a good name, get the AIdea?


IC Implementation Improved by Hyperconvergence of Tools
by Daniel Payne on 04-23-2019 at 7:00 am

Physical IC design is a time-consuming and error-prone process that begs for automation in the form of clever EDA tools that understand the inter-relationships between logic synthesis, IC layout, test and sign-off analysis. There's even an annual conference called ISPD (International Symposium on Physical Design), and this year it was held in San Francisco, April 14-17. For the keynote this year they invited Shankar Krishnamoorthy from Synopsys to talk about “Fusion: The Dawn of the Hyper Convergence Era in EDA”. I was able to review his presentation to better understand the challenges and the EDA approach that Synopsys has undertaken.

Before I delve into EDA tools, let me first take a step back and review what’s happened in the datacenter recently, where three mostly separate technologies have morphed into a single, more optimized system (aka hyperconvergence):

  • Computation
  • Storage
  • Networking

So a hyper-converged infrastructure (HCI) uses software and virtualized elements running on commercial, off-the-shelf servers to improve performance and enable easier scaling. In the traditional datacenter server the networking could come from Cisco, the compute from HP and the storage from EMC, but the setup and maintenance were complex, a bit inefficient, and scaling was brute force.

By the 2010s we saw datacenter servers take a converged approach, where either Simplivity and HP partnered, or EMC and Dell partnered. This was easier to manage than the traditional data center, but still had issues with limited capabilities and reliability.

Since the mid-2010s we have seen the emergence of hyperconverged datacenters, with vendors like Nutanix fusing together the once separate components of storage, compute, networking and virtualization.

I’ve been an EDA tool user since 1978 and blogging about EDA tools for over 10 years, so I’ve seen many generations of tools being offered. Through the 1990s we saw many CAD groups combining multiple point tools into a traditional flow for nodes down to 90nm, as shown below. Sure, you could mix and match the best tool for each task, yet there would always be iterations to reach closure.

The converged approach has been in use since 2000 and has been applied to IC implementation down to 7nm, with EDA vendors typically providing more integration and links between the tools. The benefits of a converged approach are more coherency and an improvement in predictability, but the sheer size of IC designs and the unprecedented complexity driven by relentlessly advancing Moore's Law have made even this methodology unviable.

Going from RTL code to signoff while meeting the QoR and productivity targets is a much bigger task at 7nm and below, so creating an EDA tool flow to meet this challenge could take a couple of approaches: Loose coupling between multiple engines using separate data models, or a single data model with common engines.


Loose coupling between engines

With a loose coupling approach between engines there’s still an issue meeting PPA (Power, Performance, Area) and convergence, because you don’t always get a predictable improvement over time, and the runtimes are lengthened because there are still iterative recipes being used.

The hyperconverged “Fusion” approach is distinguished by a single data model, a single user cockpit and common interleaved engines:

The promise of this approach is a quicker convergence to optimal PPA. Just think about how an end-to-end physical implementation system unified on a single data model and using common synthesis, place-and-route and signoff engines could enable seamless optimization throughout the flow for superior QoR and signoff predictability:

OK, the theory of hyperconverged EDA tools sounds interesting, but what about actual results? One IC design with 2.5M instances and 5 power domains using a 16nm process was run in both converged and hyperconverged tools, showing the following improvements:

  • 2.4X faster full-flow turnaround time
  • 46% better timing
  • 11% less area

Engineers love data, so here are some more results using the hyperconverged approach on multi-million instance designs at 16nm and 7nm process nodes:

  • Mobile CPU

    • 10% Total Negative Slack (TNS) improvement
    • 10% Leakage improvement
    • 3% Smaller area
  • Automotive IC

    • 28% TNS improvement
    • 13% Smaller area
  • High performance server SoC

    • 56% Leakage reductions
    • 41% Faster runtime
    • 10% Smaller area

So this new hyperconverged Fusion approach from Synopsys uses many common optimization technologies throughout the flow to concurrently optimize across multiple metrics, including timing, power, IR drop, area and congestion. For instance, by using an integrated IR analysis engine in the flow, it can resolve IR violations without impacting timing closure. Look at one comparison versus the baseline flow:

The baseline flow had 3,136 IR violations where the threshold was >=8% IR drop, while the Fusion flow had just 137 IR violations; that's a whopping 95% reduction with the newer approach.
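For the record, the 95% figure is simply the relative drop in violation count (a quick check of the arithmetic, not additional data from the presentation):

    # IR-drop violation reduction, baseline flow vs. Fusion flow
    baseline_violations, fusion_violations = 3136, 137
    reduction_pct = (baseline_violations - fusion_violations) / baseline_violations * 100
    print(f"{reduction_pct:.1f}% fewer IR violations")  # -> 95.6%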

Summary
If you use the same EDA methodology from a 28nm flow on a 7nm or 5nm SoC, then there are going to be some big surprises as you iterate and attempt to reach an acceptable PPA value within the time budget allotted. Change with the times and consider the hyperconverged approach being offered by Synopsys in Fusion; the early numbers look promising to me.