Mentor Excitement at 56th DAC!
by Daniel Nenni on 05-23-2019 at 10:00 am

Mentor continues to invest in conferences such as DAC, no matter the location, for which I am very grateful. They have a long list of activities this year but I wanted to point out my top three:

Wally Rhines has a talk in the DAC Pavilion which is first on the list. Wally’s expert industry perspective is the result of tireless research and endless customer meetings around the world and should not be missed. Wally will also be signing “From Wild West to Modern Life” books (last on the activity list) at the Mentor booth Monday at 5:00pm and Tuesday at 10:00am. There is a limited supply so I would get there early on either day. This is Wally’s first book and his first book signing, and it is your chance to own a piece of EDA history.

FREE cappuccino from 9:00-2:00, and happy hour from 3:45-4:45 in the Mentor Booth. Hobnob with semiconductor professionals from around the world in the most casual setting. A great place to start and end your 56th DAC exhibition floor experience. I hope to see you there.

The 5G Myth vs. Reality Panel with Mentor, Synopsys and Cadence. Paul McLellan and I were chatting about 5G at the Samsung Foundry event last week. His AT&T iPhone said he had a 5G connection while my Verizon iPhone said 4G. Identical phones, different coverage. Marketing at its finest! This is an excellent opportunity to learn more about 5G from the semiconductor ecosystem, where electronics and 5G begin!

Activity List From Mentor Marketing:

The Design Automation Conference (DAC) is the premier conference for automated electronics design and verification technology. For 2019, DAC returns to sunny Las Vegas, Nevada at the Las Vegas Convention Center from June 2-5, 2019.

We’ve packed each day full of exciting activities and presentations featuring Mentor technical experts discussing the latest in cutting-edge design. You’ll find our experts in the conference program, in our booth (#334) hosting suite sessions and networking events, and in the Verification Academy booth (#617).

CONFERENCE PROGRAM

DAC Pavilion

Fundamental Shifts in the Electronics Ecosystem

MONDAY June 03, 10:30am – 11:15am | DAC Pavilion – Booth 871

Speaker: Wally Rhines – Mentor, a Siemens Business

Wally Rhines, CEO Emeritus of Mentor, a Siemens business, will examine major new market opportunities like AI/ML, automotive, 5G, etc. and how these markets will call for new design activity and the need for broader design tool innovation.  He will also explore whether we are heading into a period of stability after three years of disruption or if the revolution will continue.

Straight Talk with Tony Hemmelgarn, Siemens Digital Industries Software CEO

MONDAY June 03, 11:30am – 12:00pm | DAC Pavilion – Booth 871

Moderator: Ed Sperling

Myth vs. Reality: What 5G is Supposed to Be, And What it Will Take To Get There

TUESDAY June 04, 11:30am – 12:00pm | DAC Pavilion – Booth 871

5G is trumpeted as the big enabler, providing massive throughput and a massive upgrade path for the mobile and mobility markets. It is a way for cars, phones and other connected devices to stream massive amounts of data to the cloud and back again. But 5G signals don’t travel very far, and they don’t penetrate objects. Devices built for this market will require extreme power management so they aren’t searching for signals constantly. Parts of them will always be on, which has an impact on design and reliability. And some parts, such as the antenna arrays, cannot even be tested using conventional means.

Panelists:

Neill Mullinger – Mentor, a Siemens Business

Peter Zhang – Synopsys

Ian Dennison – Cadence Design Systems

Paper Presentations

MONDAY, June 03

4.4 Electromigration Signoff based on IR-drop Degradation Assessment

8.4 Local Layout Effect Aware Design Methodology for Performance Boost below 10nm FinFET Technology

TUESDAY, June 04

18.4 A Lightweight Hardware Architecture for IoT Encryption Algorithm

WEDNESDAY, June 05

66.4 Virtual Methodology For Performance and Power Analysis of AI/ML SoC Using Emulation

69.4 Efficient Verification of High-level Synthesis IP

Posters

123.21 Metric Driven Power Regression – A Methodology based Metric Driven Approach for Power Regressions

123.25 River Fishing: Leverage Simulation Coverage to Drive Formal Bug Hunting

124.2 Comprehensive Analog Layout Constraint Verification for Matching Devices

124.7 Enabling Exhaustive Reset Verification in Intel Design

124.16 A Smart RTL Linting Tool with Auto-correction

124.25 Configurable Multi-protocol AUTOSAR-based Secure Communication Accelerator

125.12 Faster PV Signoff Convergence in P&R using RTD

125.14 Hybrid Methodology- An Innovative Methodology for Hierarchical CDC Verification

125.17 Functional Safety on Arm CPUs

125.21 Tackling the Increasing Challenge of IR drop & EM Fails in Advanced Technologies with a Push Button Solution

EXHIBIT FLOOR

Mentor’s booth #334 is located on the west end of the exhibit floor. Check in daily for a host of technical sessions, networking events, panel discussions, a free cappuccino from 9:00-2:00, and happy hour from 3:45-4:45! You’ll also find Mentor verification experts in the Verification Academy booth (#617) for in-depth sessions on Portable Stimulus, UVM, and more.

Technical Sessions in the Mentor Booth

Each day, Mentor experts will be in the booth delivering technical sessions across:

  • Analog/Mixed-Signal (AMS) Verification
  • Design & Functional Verification
  • Digital Design & Implementation
  • IC Design & Test

You can view the complete list of technical sessions and pre-register here.

Expert Panel Discussions

Mentor experts will be moderating in-booth panels on both Monday and Tuesday directly following happy hour. Make sure to pick up a free glass of wine or beer before!

Design Smarter Innovations Faster using AI/ML and More with Mentor, a Siemens Business

MONDAY June 03, 4:00pm – 4:45pm | Mentor Booth #334

To enable customers to deliver smarter innovations to market faster, Mentor, a Siemens business, is actively delivering new solutions and use models that make it easier to develop AI-powered technologies. We are also integrating advanced machine learning algorithms into our existing tools so those tools deliver better results faster. Come hear experts from across Mentor’s IC solutions portfolio describe how Mentor helps customers bring smarter IC innovations to market faster.

Panelists:

Ellie Burns, director of marketing, Calypto Systems Division

Vijay Chobisa, product marketing director, Mentor Emulation Division

Geir Eide, product marketing director, D2S Tessent Division

Amit Gupta, general manager, Solido, IC Verification Solutions Division

Steffen Schulze, vice president product management, D2S Calibre Marketing

Functional Safety in Isolation – Can Safety Be Collaborative?

TUESDAY June 04, 4:00pm – 4:45pm | Mentor Booth #334

As companies strive for greater levels of autonomy, more capability will be required of automotive ICs living at the edge, and the challenge of ensuring functional safety is exacerbated. The mass public trusts companies to deliver safe products to the market, but can the industry deliver on that promise given the demand for rapid innovation and complexity within the automotive ecosystem and supply chain? The scope of functional safety extends beyond the product boundaries to systems of interlinked devices representing the complete transportation network. From IP to automobile, each product plays a role in the overall functional safety of the transportation network. New paradigms and methodologies are required to ensure functional safety across all levels of the automotive ecosystem.

Panelists:

Yves Renard, Functional Safety Manager, ON Semi

Ghani Kanawati, Technical Director of Functional Safety, Arm

Matt Blazy-Winning, Functional Safety Director, NXP

Book Signing with Wally Rhines

Wally Rhines will be at the Mentor booth signing copies of his new book, “From Wild West to Modern Life”, Monday at 5:00pm and Tuesday at 10:00am.


Mentor Extends AI Footprint
by Bernard Murphy on 05-23-2019 at 8:00 am

Mentor are stepping up their game in AI/ML. They already had a well-established start through the Solido acquisition in Variation Designer and the ML Characterization Suite, and through Tessent Yield Insight. They have also made progress in prior releases towards supporting design for ML accelerators using Catapult HLS. Now they’ve stepped up to better round out (in my view) Catapult support and to introduce new ML-enabled capabilities in Calibre.

Joe Sawicki (who needs no introduction but for completeness is EVP of IC EDA at Mentor/Siemens) kicked off this announcement with some background on AI/ML, starting with a nice infographic on startups in AI (over 2000 with $27B in funding) and the AI chip landscape, estimated to be $195B by 2027. Will all or even most of the startups make it? Of course not – startups have a significant fallout rate in any field. But the practical stuff – computer vision, keyword/phrase recognition, localization and mapping for robots, among others – this is real, and has massive potential in many markets. Siemens particularly is very interested in the Industry 4.0 opportunities. Joe also noted that over half the fabless venture funding since 2012 has gone into AI startups, most of it relatively recently, which is even more impressive.

Joe sees challenges in this area in four domains: optimizing ML accelerator architectures, managing power, dealing with huge designs (up to reticle size) and dealing with high speed I/O for fast memory access and communication. This is driven in part by winner-take-all competition in these application domains, demanding differentiation in hardware architecture towards application-specific goals at the edge versus ultimate performance in data-centers (DCs). Edge nodes need ultra-low power for long battery life and DCs still need manageable power (no-one wants to scale-out power hogs). Performance requirements in DC ML accelerators demand deeply intermixed logic with multiple levels of embedded memory, driving massive die sizes and need for fast access to off-die working memory through interfaces such as HBM2 and GDDR6.

For Joe, this maps onto design needs in top-down optimization through HLS, higher capacity and faster, scalable tools everywhere (he noted particularly that he sees this domain driving huge growth in emulation, particularly for power verification), power budget management and need for a flexible AMS flow, especially at the edge where you need to optimize from sensors straight into inference engines (aka smart sensors).

Ellie Burns (Mktg Dir for digital design implementation solutions) followed to describe the progress they have made in Catapult HLS for AI/ML design. I first wrote about what they are doing in this area about a year ago. The value proposition is pretty clear. HLS works well with neural net architectures, ML designers for edge applications want to functionally differentiate while also squeezing PPA as hard as they can (especially power, e.g. for wake-words/phrases), so fast analysis and verification through the HLS cycle is a great fit.

The Catapult team have been working with customers such as Chips and Media for a while, optimizing the architecture and flow, and they now have an updated release, including (again in my view) some important advances. First, they now have a direct link to TensorFlow. Earlier you had to figure out yourself how to map a trained network (trained almost certainly on TensorFlow) to your Catapult input; do-able but not for the timid. Now that’s automated – big step forward. Second, they now have HLS toolkits for four working AI applications. And finally, they provide an FPGA demonstrator kit compatible with a Xilinx Ultrascale board. You can check out and adapt the reference design and prove out your ML recognition changes from an HDMI camera through to an HDMI display. The kit provides scripts to build and download your design to the board; the board and Xilinx IP such as HDMI are not included.
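
To make the TensorFlow point concrete, here is a minimal sketch (my illustration, not Mentor's tooling) of the kind of manual extraction the new link automates: pulling layer structure and trained weights out of a TensorFlow/Keras model so they can be mapped onto an HLS implementation. The model file name is hypothetical.

```python
# Minimal sketch (illustration only, not Mentor's flow): extracting the structure
# and trained weights from a TensorFlow/Keras model, the data that previously had
# to be mapped by hand onto a Catapult HLS design.
import tensorflow as tf

model = tf.keras.models.load_model("trained_network.h5")  # hypothetical file name

for layer in model.layers:
    cfg = layer.get_config()           # layer type, kernel size, strides, activation, ...
    weights = layer.get_weights()      # list of numpy arrays: kernels, biases
    shapes = [w.shape for w in weights]
    print(f"{layer.__class__.__name__:<12} {cfg.get('name', ''):<20} {shapes}")
```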

Steffen Schulze (VP Calibre product management) followed to share the latest ML-driven release info for Calibre OPC and Calibre LFD. Almost anything in implementation is for me a natural for ML – analysis, optimization, accelerated time to closure – all good candidates for improvement through learning. Steffen said they have done a lot of infrastructure work under the Calibre hood, including adding APIs for the ML engine, seeing potential for other applications to also leverage this new capability.

On ML-enabled OPC, Steffen first presented an interesting trend graph – the predicted number of cores required to maintain a similar OPC turn-around-time versus feature size. The example he cites, critical-layer OPC on a 100mm² die using EUV and multiple patterning, starts at around 10k cores at 7nm and trends more or less linearly to around 50k cores at 2nm.

He said that, as always, scalability of the tools helps but customers are looking for more performance and increased accuracy through algorithmic advances to cope with these significantly diffraction-challenged feature sizes. As an interesting example of real-world application of ML in a critical application, they use the current OPC model to drive training, then in application to the real design they use one ML (inference) pass to get close, followed by two traditional OPC passes to resolve inconsistencies and problems with unexpected configurations (configs not encountered in the training, I assume). This approach is delivering a 3X runtime reduction and, better yet, improved edge placement error (a key metric in OPC accuracy).
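
As a way to picture that two-stage approach, here is a toy sketch (nothing here is the Calibre API; the "model" and "simulator" are stand-ins): an ML pass supplies most of the needed mask bias in one shot, and a couple of conventional correction iterations remove the residual edge placement error.

```python
# Toy sketch of the hybrid OPC flow described above (not the Calibre API).
# A stand-in ML model predicts most of the required edge bias in one pass;
# two conventional correction iterations then remove the residual EPE.
import numpy as np

def required_bias(edges):
    # stand-in for lithography simulation: the bias that would zero the EPE
    return 0.05 * np.sin(edges)

def ml_initial_bias(edges):
    # stand-in for the trained model: gets ~90% of the way there in one shot
    return 0.9 * required_bias(edges)

def hybrid_opc(edges, conventional_passes=2):
    bias = ml_initial_bias(edges)                  # fast ML inference pass
    for _ in range(conventional_passes):
        epe = required_bias(edges) - bias          # residual edge placement error
        bias = bias + epe                          # traditional correction step
    return bias

edges = np.linspace(0.0, np.pi, 8)
print(np.abs(required_bias(edges) - hybrid_opc(edges)).max())  # ~0 after two passes
```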

For Calibre LFD (lithography-friendly design), let me start with a quick explanation since I’m certainly no expert in this area. The dummies guide, at least as this dummy understands it, is that processes and process variability today are so complex that the full range of possibly yield-limiting constructions can no longer be completely captured in design rule decks. The details that fall outside the scope of DRC rules require simulation to model potential differences between as-drawn and as-built lithographies. The purpose of Calibre LFD is to do that analysis, based on an LFD kit supplied by the foundry.

The ML-based flow here is fairly similar, starting with labeled training followed by inference on target designs. The training is designed to identify high-risk layout patterns, passing only these through for detailed simulation. This delivers 10-20X improvement in performance over full-chip simulation. Steffen also said that using this approach they have been able to find yield limiters that were not previously detected. Here also, ML delivers greatly increased throughput and higher accuracy.
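
A minimal sketch of that filter-then-simulate idea (illustrative only, not the Calibre LFD interface): a trained risk classifier screens candidate layout patterns and only the high-risk ones are passed to the expensive lithography simulation. The risk model and simulator below are toy stand-ins.

```python
# Minimal sketch of the screening flow described above (not the Calibre LFD API):
# only patterns the trained classifier flags as high risk go to detailed simulation.
from collections import namedtuple

SimResult = namedtuple("SimResult", "fails margin")

def screen_and_simulate(patterns, risk_model, simulate, threshold=0.5):
    hotspots = []
    for pattern in patterns:
        if risk_model(pattern) >= threshold:       # cheap ML screening pass
            result = simulate(pattern)             # expensive lithography simulation
            if result.fails:
                hotspots.append((pattern, result))
    return hotspots

# Toy usage: the risk score and simulator are stand-ins, not foundry models.
patterns = [{"min_space_nm": s} for s in (28, 30, 34, 40)]
risk = lambda p: 1.0 if p["min_space_nm"] < 32 else 0.1
sim = lambda p: SimResult(fails=p["min_space_nm"] < 30, margin=p["min_space_nm"] - 30)
print(screen_and_simulate(patterns, risk, sim))
```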

To learn more about what Mentor is doing in AI/ML in Catapult and Calibre, see them at DAC or click HERE and HERE.


Webinar Recap: IP Life Cycle Management and Traceability
by Daniel Payne on 05-22-2019 at 10:00 am

Earlier this month I attended a webinar organized by Methodics on the topic of IP life cycle management and traceability, with three presenters and a Q&A session at the end. I’ve worked with Michael Munsey before and he was the first presenter. Semiconductor IP creation and re-use is the foundation of all modern IC designs, and keeping track of hundreds to thousands of IP blocks along with design scripts and verification results becomes a complicated process very quickly, especially if you’re still using a manual approach.

Methodics provides products in three major areas:

  • IP Lifecycle Management – percipient, versic
  • Enterprise Data Storage Acceleration – warpstor
  • Scalable, Massively Parallel Job Execution – arrow

This webinar was focused on IP Lifecycle Management, aka IPLM. The company has been around since 2006, is headquartered in San Francisco, and is staffed with 32 professionals in the USA, Europe and Pacific Rim. Their tools work with popular vendor tools like Perforce, Siemens, Cadence, Synopsys, Jama and neo4j.

The percipient tool has five layers of abstraction, as shown below, giving engineers a single place to access all of the information about their IC design and to track releases and versions.

Rien Gahlsdorf then gave us a live demo of percipient showing multiple ways to use the tool: command line, web, Cadence, API. Percipient is built on top of a data management (DM) system and manages both meta-data and releases. Users can recall all IPs for any release made earlier, manage all file types, manage IP hierarchy, attach meta-data to an IP, view layout, view schematics, and review the design state. Making a new release can automatically trigger scripts, so simulations run and requirements are checked.
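
As a rough illustration of what such a trigger could look like, here is a hypothetical post-release hook (not the percipient API): on a new IP release it launches a regression run and a requirements check and collects the outcome as metadata. The commands and script names are assumptions.

```python
# Hypothetical post-release hook (illustration only, not the percipient API).
# When a new IP release is made, run regression simulations and a requirements
# check, then return the outcome as metadata to attach to the release.
import subprocess

def on_release(ip_name: str, version: str) -> dict:
    sim = subprocess.run(["make", "regress", f"IP={ip_name}"],
                         capture_output=True, text=True)          # assumed make target
    req = subprocess.run(["python", "check_requirements.py", ip_name, version],
                         capture_output=True, text=True)          # assumed checker script
    return {
        "ip": ip_name,
        "version": version,
        "simulations_passed": sim.returncode == 0,
        "requirements_checked": req.returncode == 0,
    }
```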

Michael talked about functional safety (FuSa) and the challenges of complying with the ISO 26262 standard where traceability is a requirement from specification to design, verification and release.

The Methodics approach has a link from requirements through design and verification, enabling compliance with the ISO 26262 standard. Rien gave a second demonstration showing requirements in Jama, making a release with Perforce, checking in IP with the latest version, and how a release can trigger scripts to run.

To make ISO 26262 compliance easier the percipient tool comes with IP templates that are configured with properties and attributes. There are survey and document templates that automate the collection of FuSa interview responses.

In the final demo Rien showed how the percipient tool helps capture all meta-data throughout the entire design process, and automates release management. Documentation is even automated with percipient, where each IP gets a chapter in the design documentation, along with all meta-data entered, hyperlinks added and property values shown.

Q&A

Q: There are other traceability products, like from IBM, so why percipient?

A: percipient allows management of IP, traceability, FuSa compliance, etc. We know how to build a design BOM. Verification, design and requirements are all traceable. This was built from the ground up to achieve this.

Q: Is it possible to capture document and code reviews?

A: Usually, within Git, you would use its native code review feature.

Q: How do you track a family of data?

A: In the demo we showed data types, there are no restrictions, you can have tables, graphs, charts, families of related data, hierarchical tables.

Q: Is percipient DM agnostic?

A: Yes, we work with all the popular DM tools, plus we offer custom support as well: Git, Perforce, SharePoint, etc.

Summary

The percipient tool enables traceability from Design to Release to Verification. No more manual, error-prone engineering practices.

To view the webinar video archive visit here.


What are SOTIF and Fail-Operational and Does This Affect You?
by Bernard Murphy on 05-22-2019 at 7:00 am

Standards committees, the military and governmental organizations are drawn to acronyms as moths are drawn to a flame, though few of them seem overly concerned with the elegance or memorability of these handles. One such example is SOTIF – Safety of the Intended Function – more formally known as ISO/PAS 21448. This is a follow-on to the more familiar ISO 26262. While 26262 provides processes and definitions for safety standards of the hardware in electrical and electronic systems in automobiles, it has little to say about the high levels of automation that dominate debate around autonomous and semi-autonomous cars.


ISO 26262:2018 introduces the Emergency Operation Time Tolerance Interval to account for fail operational use cases

Safety at SAE level 2 and above automation is no longer simply a function of the safety of the hardware. When systems-on-chip are running complex software stacks, quite often multiple stacks, and those systems use probabilistic AI accelerators depending not only on software but also on arrays of trained weights, then there’s a lot more that can go wrong beyond the transient faults of 26262.

An SoC designer might assert “Yes these are problems, but they have nothing to do with my hardware. My responsibilities stop at ensuring that I meet the ISO 26262 requirements. All the rest is the responsibility of the system and software developers.” But that designer would be wrong, based on where SOTIF is heading. High levels of integration and non-deterministic compute elements (AI) in safety-critical applications raise a new question: how should the system respond when something goes wrong, and how do you test for this? Because inevitably something will go wrong.

When you’re zipping down a busy freeway at 70mph and a safety-critical function misbehaves, traditional corrective actions (e.g., reset the SoC) are far too clumsy and may even compound the danger. You need something the industry calls “fail operational”, an architecture in which the consequences of a failure can be safely mitigated, possibly with somewhat degraded support in a fallback state, allowing for the car to get to the side of the road and/or for the failing system to be restored to a working state. According to Kurt Shuler (Arteris VP of marketing and an ISO 26262 working group member), a good explanation of this concept is covered in ISO 26262:2018 Part 10 (chapter 12, clauses 12.1 to 12.3). The system-level details of how the car should handle failures of this type are decided by the auto OEMs (and perhaps tier 1s) and the consequences can reach all the way down into SoC design. Importantly, there are capabilities at the SoC-level that can be implemented to help enable fail operational.

Redundancy engineering is becoming more important in SoC functional safety mechanism design. In safety-critical areas in the design, you use two or more versions in parallel and compare the outputs. This is called static redundancy and sounds suspiciously like the TMR, lockstep computing and similar safety mechanisms you already use for ISO 26262. And to some extent they are. But as I understand it, there are a couple of key differences. First these requirements are likely to come from the OEM (or Tier 1), over and above anything you plan to add for redundancy. And second, in a number of redundancy configurations (called dynamic redundancy), these independent systems are expected to self-check their correctness. For example, there is a redundancy style called “1 out of 2 with diagnostics” (1oo2d) in which perhaps 2 cores would each compute a result in parallel, and also each provide a self-check diagnostic. The comparison step can then feed-forward a fail-operational result if both cores self-check positively and agree, or if one core self-checks positively and the other does not.
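
To make the 1oo2d idea concrete, here is a minimal sketch (my own illustration, not a standard-mandated implementation) of the comparison step, assuming each channel returns a result plus a pass/fail self-diagnostic.

```python
# Minimal sketch of 1-out-of-2-with-diagnostics (1oo2d) voting as described above.
# Each channel supplies (result, self_check_ok); the comparator forwards a
# fail-operational result as long as at least one channel can vouch for itself.
def vote_1oo2d(result_a, ok_a, result_b, ok_b):
    if ok_a and ok_b:
        if result_a == result_b:
            return result_a, "agree"
        return None, "disagree"          # both claim health but differ: escalate
    if ok_a:
        return result_a, "degraded"      # channel B failed its self-check
    if ok_b:
        return result_b, "degraded"      # channel A failed its self-check
    return None, "fail"                  # no trustworthy channel: enter safe state

print(vote_1oo2d(42, True, 42, True))    # (42, 'agree')
print(vote_1oo2d(42, True, 41, False))   # (42, 'degraded')
```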

Another major component of fail-operational support requires the ability to selectively reset/reboot subsystems in the SoC. A very realistic example in this context would be for a smart sensor SoC containing (among many subsystems) one or more vision subsystems (ISPs) and one or more machine learning (ML) subsystems. On a failure in one of these subsystems, rebooting selectively allows other object-recognition paths to continue working. This obviously requires a method to isolate individual subsystems so that the rest of the system can be insulated from anomalous behavior as the misbehaving subsystem resets. One SoC network-on-chip interconnect company, Arteris IP, is already pioneering technology to enable this.

Redundancy in ML subsystems as described above allows for one class of failures in recognition, but what about failures resulting from training problems? One idea that has been suggested (though I don’t know if anyone has put it into practice) is to use asymmetric redundancy between two ML systems trained on different training sets. It will be interesting to see how that debate evolves.

The system interconnect is the ideal place to manage a lot of this functionality in the SoC, from “M out of N” redundancy (maybe with diagnostics) to isolation for selective reset/reboot. Arteris IP have made significant and well-respected investments in this area. You should check them out.


20 Questions with John East
by John East on 05-21-2019 at 10:00 am

In 1967 I was a grad student at Cal Berkeley. In December of that year my wife-to-be and I got engaged to be married. I was supposed to get my master’s degree in December of ’68, but once we worked out all the details we realized that I’d have to go to school over the summer of ’68 and get the degree in September. We were broke and couldn’t afford the extra three months of expenses with little or no income. Berkeley was set up with two biannual college recruiting programs, during which corporations would come in to interview prospective new hires. One of the sessions was in April and one was in November. My original plan was to go through the college recruiting process in the November session, but the wedding plans changed that. Since I wouldn’t be ready to go to work until September, the April recruiting session seemed too early. So — how to get a job? That was the question.

I wrote 40 or 50 letters. There was a college placement handbook that had the addresses of the important companies. I wrote to them basically saying “Dear Sir, you don’t know me but I want a job.” I got back just three responses, which was a little depressing. One was from IBM, where I then interviewed and didn’t get a job offer. One was from HP, where I interviewed and didn’t get a job offer. But one was from Fairchild. All I knew about them — or thought I knew — was they made cameras. (The official company name was Fairchild Camera.) I interviewed with them and they were excited about me. They brought me back a short while later to have lunch with two of their executives: Jerry Briggs – an HR guy (called Personnel in those days) – and Gene Flath – a product line manager. That was my first business lunch. It turned out that in those days, business lunches involved large quantities of martinis and the like. They thought I was the greatest guy in the world (possibly because of the martinis) and they offered me a job on the spot. This was in roughly May of ’68. They knew that I wasn’t going to be done until September so they said, “That’s not a problem. We’ll wait for you. You’re going to be wonderful. In fact, you don’t even need to communicate with us in the interim. The day before you’re done, just call us and we’ll make arrangements for you to come and everything will be great.” Then they both gave me their business cards.

When I had one day to go — that is, I had just taken my last final and was ready to go to work — I picked up the phone and called Fairchild HR. A lady answered the phone. I asked, “Can I please speak to Jerry Briggs?” The lady who answered the phone said, “There’s no Jerry Briggs here and I’ve never even known a Jerry Briggs.” We debated for a while and after a bit I asked her, “Well, how long have you been there?” It had been a couple of months. The department had turned totally over between the time of the offer in May and my call in September. I thought to myself, ‘That’s not a problem because I’ve got Gene Flath’s card as well. I’ll just call Gene Flath.’ So — I called Gene Flath’s number and got a secretary. She said, “There’s no Gene Flath here and there’s never been a Gene Flath here in all of the time since I got here.” “Well, how long have you been here?” “A couple of months.” I asked myself, “What the heck is going on here?” I needed that job! Fortunately, I had the offer letter. I called the HR department again and told them so. Some guy who I had never met said, “Well, okay, we’ll honor it. Come in at 9:00 on Monday morning and we’ll figure out what to do with you.”

What the heck was going on? I found out later that Bob Noyce, the President of Fairchild, had just left to form Intel Corporation and taken a cadre of the really good people with him. Sherman Fairchild (the Chairman of the Fairchild board) had brought in Les Hogan from Motorola to be the new CEO. Hogan, then, brought in eight of his top lieutenants to help him run things. They were referred to as ‘Hogan’s Heroes’. (That was the name of a popular TV show in those days.) Hogan’s Heroes proceeded to fire about a third of the upper ranks. Roughly another third of the upper ranks said to themselves, “Well, wait a minute. If I stay around they’re going to fire me, too.” So they left as well. Everything had turned over in that four-month window. When I got there nobody knew what was going on. Nobody knew who their boss was. What a zoo it was, but that made it almost seem like fun.

One thing that was particularly noticeable was that in the other companies where I had interviewed the managers were 40-year-old or 50-year-old people. Today that doesn’t seem very old, does it? But then it seemed ancient. “You mean, I’ve got to be around twenty years before I can get a manager job? That’s terrible.” At Fairchild the managers were kids. They were 25 and 26 years old. And not only were they kids, they were kids viewed as being experts in their field because the field was that young. I thought, “I’m going to like this place.”

See the entire John East series HERE.

Biography
John East retired from Actel Corporation in November 2010 in conjunction with the transaction in which Actel was purchased by Microsemi Corporation. He had served as the CEO of Actel for 22 years at the time of his retirement. Previously, he was a senior vice president of AMD, where he was responsible for the Logic Products Group. Prior to that, Mr. East held various engineering, marketing, and management positions at Raytheon Semiconductor and Fairchild Semiconductor. In the past he has served on the boards of directors of Adaptec, Pericom and Zehntel (public companies), and MCC, Atrenta and Single Chip Systems (private companies). He currently serves on the boards of directors of SPARK Microsystems – a Canadian start-up involved in high speed, low power radios – and Tortuga Logic – a Silicon Valley start-up involved in hardware security. Additionally, he is presently an advisor to Silicon Catalyst – a Silicon Valley based incubator actively engaged in fostering semiconductor based start-ups. Mr. East holds a BS degree in Electrical Engineering and an MBA, both from the University of California, Berkeley. He has lived in Saratoga, California with his wife Pam for 46 years.

Breker on PSS and UVM
by Bernard Murphy on 05-21-2019 at 5:00 am

When PSS comes up, a lot of mainstream verification engineers are apt to get nervous. They worry that just as they’re starting to get the hang of UVM, the ivory tower types are changing the rules of dynamic verification again and that they’ll have to reboot all that hard-won UVM learning to a new language. The PSS community and tool makers work hard to dispel this fear by partitioning the roles of these languages (e.g. UVM for IP and PSS for SoC sequences and randomization) but questions remain: what about the grey areas between these two, and what about legacy UVM development? Also important, just how portable is PSS? In principle it’s perfectly portable but how does that work in practice? If I develop for one vendor’s platform will it work compatibly with another vendor’s?


Starting with the last question, Breker has a natural advantage as a neutral player among simulation platform providers, which should give them best access to validate their solution against each platform. It should also make it easier for them to validate equivalent behavior (within the scope of the PSS standard) across platforms – i.e. true portability.

That answers one concern, but what about the legacy question – how much do you have to reinvent, versus building on UVM you already have (and will continue to develop)? To set your mind at rest, Breker have released a white paper on just that topic. This elaborates in some detail how you can use the Breker tools to model and generate randomized sequences, and then generate the corresponding UVM sequences (along with automated scoreboard and coverage modeling), which can connect to your UVM testbench.

The PSS modeling stage works as you would expect; you define PSS models using either the DSL or C++, or through their graphical interface. The Breker TrekGen and Trek UVM tools read the model and synthesize tests based on flows and resource constraints (and even path constraints) defined in the model, then convert those to SystemVerilog tests. Generated score-boarding and coverage analysis will roll up test pass/fail, profiling and other details for analysis in the Breker debugger and/or a vendor-supplied debugger to guide further refinement in scenario modeling.

The point here is that with Breker a PSS-based testing flow works hand-in-hand with your native UVM environment as an easier way to define, randomize and check coverage on sequences. No need to start over on any test-building; this is an entirely complementary addition to your flow.

The white-paper points out a number of advantages to using this approach over using UVM-based sequence definition and control:

  • It’s a more efficient way to build useful sequence tests. Doing this in UVM is eminently possible, but it takes much more effort to build each sequence (or seed sequence with constraints) in a way that is guaranteed to connect meaningfully to real system behavior. PSS starts with expected system behavior, so each test is guaranteed to be meaningful. Which incidentally also accelerates test development and testing – always a desirable objective.
  • The PSS approach is white-box versus black-box. Figuring out how to drive a path test in UVM can be hard – very hard. PSS removes the need to think about these details in modeling and sequence generation, thanks to the internal smarts of the UVM generator.
  • The PSS-based flow makes it possible to define more complex tests with more (allowed) concurrency. VIP models (an alternative) run independently, making it difficult to build tests around system-level concurrency, whereas these are easy to generate in PSS and constrain based on available resources as defined in the model.
  • Score-boarding and checking is built-in – no extra effort on your part is required.
  • Coverage is also built-in and is directly related to coverage of paths through the model, a concept that you can’t easily define through traditional coverage metrics. This for me is one of the big motivators for PSS. Traditional coverage is more or less useless at the system level. The useful metric in this context is coverage of realistic sequences constrained by available resources.
  • You get automatic reusability both horizontally and vertically in design and verification flows – the “P” in PSS. Once you’ve defined models for a block, you can reuse those in higher-level subsystem or system testing; you can also reuse these models from simulation to emulation, FPGA prototyping, virtual prototyping and silicon testing.

There’s a lot more detail in the white paper that I won’t attempt to cover here, but I will add that Breker now includes a Portable Stimulus/UVM example with every software distribution. The design is a small representative SoC based on a couple of CPUs, a couple of UARTs, a DMAC and an AES encryption block. Most importantly, the white paper provides a detailed walk-through of the steps in integrating this into a UVM testbench and then executing these together. Well worth a read if you’ve been wondering about PSS but have been nervous about jumping in.

Also Read

Verification 3.0 Holds its First Innovation Summit

CEO Interview: Adnan Hamid of Breker Systems

Breker Verification Systems Unleashes the SystemUVM Initiative to Empower UVM Engineering


Uber Lyft and the Price of Greed
by Roger C. Lanctot on 05-20-2019 at 10:00 am

Uber and Lyft blew it with their initial public offerings over the past couple weeks. Both companies opted to cash out founders and early investors while tossing pennies to long-supportive drivers in the form of bonuses. The short-term cash out focus could sound the death knell of these market leaders.

Both companies extracted billions from investors in the process – but both companies also failed to attract the kind of money necessary to rebuild their business models and establish a path to long-term profitability. The one core fundamental issue they neglected: driver compensation and churn.

The U.S. is the home market for both companies and the employment environment in the U.S. is becoming increasingly hostile to ride hailing. Not only are local municipalities, like New York, forcing transportation network companies (TNCs) like Uber and Lyft to treat their drivers as employees – the available pool of drivers is shrinking along with the unemployment rate.

Strategy Analytics estimates that both Uber and Lyft will have to continue to recruit drivers globally and locally. At the same time, both companies are likely to be reducing driver compensation to address their debt and cash-flow challenges. All of these vectors point to ongoing driver recruitment and compensation challenges – especially in a market where competing services – including Amazon’s delivery operations – promise more reliable compensation with benefits.

What Uber and Lyft failed to recognize was the need to treat drivers, and maybe even passengers, as owners in the companies. Both drivers and passengers have been suspending their disbelief for the past five-plus years to make the service viable if not profitable.

Both Uber and Lyft seemed to recognize the importance of their drivers, but both companies missed the opportunity to set up a stock option plan of some sort to convert existing, and eventually new, drivers into vested owners of the company. What has always been missing from the Uber/Lyft experience has been a recognition or feeling among drivers that they were actually “owners,” representatives of the company and the brand.

The lack of this feeling is manifest in the fact that most, though not all, of the drivers I have met drive for both. The drive-for-both proposition feeds the overall gaming-the-system mentality of the Uber/Lyft experience – also manifest in rides periodically cancelled by drivers who may not want to drive to your preferred destination.

Uber and Lyft (and Yandex and Ola and DiDi etc.) have overcome the supply and demand challenges of getting rides to drivers on the fly in a reasonably reliable manner. But they have failed to create any loyalty among drivers or passengers – leaving the door open to any new entrant seeking to offer a superior experience.

One such player, Bounce, is offering an ownership experience for drivers and passengers – though the company is only active in a handful of markets. The struggle of starting up in a market saturated and dominated by huge competitors – in this case Lyft and Uber – is clear and daunting and is captured in this review by Will Preston for TheRideShareGuy:

https://therideshareguy.com/what-is-it-like-to-drive-for-bounce/

The most important takeaway from this review, for me, is the inclination of Uber and Lyft drivers and passengers, both, to complain about the experience. Bounce seeks to address the flaws in the system by creating a share vesting program for both drivers and passengers built around recruitment referrals for both. Bounce also seeks to leverage relationships with event operators to provide queue-based post-event transportation.

The Bounce model and strategy are detailed here: https://therideshareguy.com/bounce-rideshare/

The long-term prospects for Bounce are unclear to me. The long-term prospects for Uber and Lyft are even less clear in view of their decision not to enter into any longer-term relationship with their drivers – even though some of those drivers have entered into long-term multi-year relationships with Uber and Lyft.

By using the IPOs to reward investors and founders and shun drivers, Uber and Lyft have signed their own death warrants. The IPOs have all the earmarks of an exit strategy, not a survival plan. I may continue to use the services where and when they are convenient, but the IPOs have left a sour smell in my nose and a bad taste in my mouth. It’s telling that drivers did not celebrate the IPOs – to the extent they even knew the IPOs were taking place – instead, they went on strike.

These IPOs were not clever or game changing. They were nothing less than craven and mark the beginning of the end of ride hailing. It is only a matter of time. The current business model will not stand.


Learning on the Edge Investment Thesis
by Jim Hogan on 05-20-2019 at 5:00 am

It is said that it will cost as much as $600M to develop a 5nm chip. At that price, only a few companies can afford to play, and with that amount of cash in, innovation is severely limited.

At the same time, there is a stampede in the artificial intelligence (AI) market where around 60 startups have appeared, many of which have already raised $60M or more. $12B was raised for AI startups in 2017 and, according to International Data Corporation (IDC), is expected to grow to $57B by 2021. Most of these are going after the data center, which is necessary to get the required ROI when you have a big raise. The chance of success is slim, and the risks are high. There is an alternative for investors and startups.

In this investment thesis, I talk about large disruptive changes happening in the semiconductor industry and the opportunities this creates for innovative architectures and business models.

As an example I use a specific startup, Xceler, which is taking an alternative route to the development of an AI processor. A second organization, Silicon Catalyst, is enabling them to bring silicon to market at much lower cost and risk. In the interest of full disclosure, I am a director of and investor in both Xceler and Silicon Catalyst.

I love getting feedback and I share that feedback with people, so please let me know what you think. Thanks – Jim


Fig 1. Costs associated with SoC development at each manufacturing node. Source: IBS

Success with semiconductor investment in general, and AI specifically, is a multi-step process. At each stage, the goal is to reduce risk and maximize the potential for success at the lowest possible cost in dollars and time.

The low-risk structured approach comes down to executing the following steps:

  • The requirements in the chosen markets are distilled into the minimum functionality required, and target architectures identified.
  • The solutions are prototyped using FPGAs and proven in the market, creating initial revenue.

With these two steps, you have proof of technology proficiency and early market validation.

  • The solutions are then retargeted to silicon, adding further architectural innovations. An important element of this step is the utilization of silicon incubators that significantly reduce cost and risk. For an AI semiconductor startup, aside from people costs, the significant costs are EDA tools and silicon. Typically, these are in the range of $3M to $5M. If the company can avoid or reduce that expense, they will see a much higher enterprise valuation and retain more ownership for the founders and early investors.

Identify the Market Opportunity


Source: AI Insight, August 30th, 2018

The AI/ML market in the data center is large. For a lot of applications, data collected at various nodes will move back to the data center. These are the public clouds such as those run by large data center companies. The problem for a semiconductor company, or sub-system vendor, is the business model associated with AI/ML in the datacenter. Systems and semiconductor companies build hardware and tools that are powerful and can run a multitude of applications, but the question is what type of AI/ML and what specific applications? It is a solution looking for a market. You need something that people would want to use.

The Cloud provides the infrastructure that your accelerator is integrated into, and they sell repeated services/applications on top of it until the product reaches the end of life. That is not a sustainable business for the technology provider. You can only sell so many units and you can’t keep selling the same volume every year. Consider how NVIDIA’s GP/GPU sales are tapering off. Sales are typically tied to the silicon cycle where every few years more silicon at a lower price, lower power consumption and improved performance becomes available. The services to get customers on board are offered for free, which is the best price, or for cheap, as the service provider is relying on volume sales with a lot of customers/users. This drives the commoditization of the underlying infrastructure as the service providers want the infrastructure at a price that makes business sense for them. In addition, with the slowing of Moore’s law, this forced obsolescence of technology is no longer a driving factor.

The ideal target is a problem looking for a solution, and for Xceler, it was found in the industrial space. Everybody wants to deploy IIoT (Industrial IoT with AI/ML), but each company is looking for the right solution. Consumer IoT was considered but most of the solutions are nice or cool to have, as opposed to must-have. In the consumer space, there are multiple policy hurdles such as legal, privacy, security, or liability. The adoption of these solutions requires a socio-economic behavioral change in the consumer mindset. It is something that requires a generational adoption cycle and a lot of marketing/PR dollars. Conversely, the adoption of industrial solutions is faster because it is a must-have capability that directly affects their bottom line.

Even though the IIoT market is fragmented, fragmentation can be your friend. Each of the larger companies in this space has a top line revenue of $50 billion per year plus. Even with limited market penetration, it is possible to build a sizeable revenue base. There are many potential end-customers, collectively presenting an opportunity, unlike the data center space where there are only a few end customers.

Predictive training and learning on the edge – the dream is now a reality with AI.
Web-based solutions appear to be free. However, someone is paying for the services. In the case of the web, it is advertisers or people trying to sell products. In the industrial space, suppliers are selling both the hardware and the solution. They are directly providing value to the purchaser and can thus make money from that directly, plus there is the potential for future revenue as more capabilities are added.

Every edge-based application is different. This fragmentation is one reason why people are scared of the edge market. You must have the ability to personalize and adapt to the context in which you are deployed. That requires learning on the edge. Inferencing is not enough. If the link breaks, what do you do? Solutions that only perform inferencing could endanger lives.

Some people are trying to build edge-based platforms. These often contain custom edge-based processors for a specific vertical application. They have the characteristics of very high volume but relatively low average selling price in comparison to Cloud devices.

I asked the founder and CEO of Xceler, Gautam Kavipurapu, about his company’s experience in the development of an edge-based processor and application. “We are running a pilot program for a company that makes large gas turbines. They are instrumented in many ways. Fuel valves have flow controls, sensors record vibration and sound, rotation speed at different stages of the turbine is measured, combustion chamber temperature – about 1000 sensors in total. We process the data and do predictive maintenance analysis.”

When the processor is connected to the machine and without connectivity to the Cloud, which may not exist for security reasons, the profile for the normally functioning system is observed. It builds a basic model for the machine, and over time, that model is refined. When you get deviations, the data from the sensors is cross-correlated in real time to figure out what is causing the anomaly. A connection to the Cloud enables the heavy lifting to build a refined model and make micro-refinements to it. However, there are huge advantages in doing the initial processing at the edge in terms of latency and power.
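
Here is a toy sketch of that baseline-then-correlate idea (my illustration, not Xceler's implementation): learn per-sensor statistics during normal running, flag samples that deviate, and cross-correlate against the other sensors to suggest a likely cause.

```python
# Toy sketch of the edge flow described above (not Xceler's implementation):
# learn a per-sensor baseline during normal operation, flag deviations, and
# cross-correlate against the other sensors to suggest a likely cause.
import numpy as np

def fit_baseline(history):                        # history: samples x sensors
    return history.mean(axis=0), history.std(axis=0) + 1e-9

def detect(sample, mean, std, z_limit=4.0):
    z = (sample - mean) / std
    return np.where(np.abs(z) > z_limit)[0]       # indices of anomalous sensors

def related_sensors(history, anomalous_idx):
    corr = np.corrcoef(history.T)                 # sensor-to-sensor correlation
    return {int(i): np.argsort(-np.abs(corr[i]))[1:4] for i in anomalous_idx}

rng = np.random.default_rng(0)
normal = rng.normal(size=(10_000, 8))             # pretend: 8 sensors, normal running
mean, std = fit_baseline(normal)
sample = normal[-1].copy()
sample[3] += 10.0                                  # inject a vibration spike on sensor 3
idx = detect(sample, mean, std)
print(idx, related_sensors(normal, idx))
```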

Define the Right Architecture
Systems need to be architected for the problems that they are solving. “We look at problems as hard real-time, near real-time or user time,” explains Kavipurapu. “Hard real-time requires response times of 5 microseconds or less; near real-time requires response times within a few milliseconds, and lastly, user time can take hundreds of milliseconds or minutes. Consumer applications fall within the last category and usually do not have Service Level Agreements (SLAs) and performance commitments so they can work in concert with the Cloud. For problems that require hard or near real-time responses, relying on the Cloud is not viable as the round-trip time itself will take several milliseconds if it manages to complete at all.

“We have seen edge-processors evolve over time. Initially, machine learning on the edge meant collecting data and moving it to the cloud. Both the learning and inferencing were done in the Cloud. The next stage of advancement enabled some inferencing to be done on the edge, but the data and the model remain in the Cloud. Today, we need to move some of the learning to the edge, especially when real-time constraints exist, or you have concerns about security.”

Prototype and Create Revenue Stream
For this class of problem, it is possible to prototype it in an FPGA. For applications that do not need blistering performance, you can even go to market in the seed round with this solution. This offsets the need for more investment dollars and enables the concept to be validated.

“With Xceler, we started with an FPGA solution. They are acceptable in the market place we are targeting since they have good price/performance and power numbers. They are comparable to x86-based systems in price points and provide higher performance. The only downsides are that the margins are compressed, and certain architectural possibilities are not available in an FPGA solution.”

Migrate to Silicon
To capture more value, you do need a solution that is cheaper, faster and lower power. “That involves building a chip, or Edge-Based Processor (EBU),” adds Kavipurapu. “For the control processor, we are using a RISC-V implementation from SiFive. SiFive does the backend design implementation, reducing the risk for us. SiFive is also a Silicon Catalyst partner. We expect our FPGA solution to translate to between 20 million to about 36 million ASIC gates and so the proposed chip does not have to be that large.”

“The only risk that remains is silicon risk. By doing the chip in 28nm, a well-understood fabrication process, the manufacturing silicon risk is minimized. All that remains is closing the design and timing, which is a much-reduced problem. We have taken out most of the variability in the design element. In addition, we have restrained our design approach, using only simple standard cell design with no custom blocks and no esoteric attempts at power reduction.”

Refine architecture
The FPGA solution cannot run faster than about 100MHz. “With an FPGA, we are also constrained by the memory architecture,” explains Kavipurapu. “For the custom chip, we are deploying a superior memory sub-system. New processing techniques require memory for data movement and storage. For us, to execute each computation, it takes about 15 instructions on an FPGA, which will be reduced to 4 or 5 on the ASIC. In terms of clock frequency, we will be running at 500 MHz to 1 GHz in an ASIC at a much-reduced power.”
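
A quick back-of-envelope check on those figures (my arithmetic, not Xceler's claim) suggests roughly a 15x to 37x gain in computations per second from the move to an ASIC:

```python
# Back-of-envelope from the figures quoted above (my arithmetic, not Xceler's claim).
fpga_rate = 100e6 / 15           # FPGA: 100 MHz, ~15 instructions per computation
asic_low  = 500e6 / 5            # ASIC low end: 500 MHz, ~5 instructions per computation
asic_high = 1e9 / 4              # ASIC high end: 1 GHz, ~4 instructions per computation
print(asic_low / fpga_rate, asic_high / fpga_rate)   # ~15x and ~37.5x more computations/sec
```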

Usage of silicon incubator
The goal of Silicon Catalyst is to limit the friction for getting an IC startup to the point they can secure an institutional Series A round by reducing the barriers to innovation. This has provided Xceler with a significant advantage compared to potential competition. While the competition struggles with multiple tape outs to achieve working silicon, burning through their dollars before first revenue, Xceler had first revenue even before the tape out. This happened because of the lower risk strategy and help from Silicon Catalyst and others.

“Silicon Catalyst offers, through in-kind contributions from ecosystem partners, the capabilities for startups to get the tools and silicon they need,” says Kavipurapu. “It enables them to get an A round of funding that is nice in terms of valuation and raise. They bring the ability to prototype a chip at a very low cost. We get free shuttle services from TSMC. We do not have to pony up for chip design tools because there are in-kind partnerships for tools from Synopsys. We have licenses to each tool for 2 years. Silicon Catalyst also has a lot of chip industry veterans. I am not a chip guy and neither is my team. When it comes to silicon, Silicon Catalyst adds a lot of value.”

The result is that Xceler will have working samples for a little over $10M. They have customers and could be breakeven before they get to silicon.

Conclusions
We are in an era where innovation is more important than raw speed, the number of transistors or the amount of money invested. There are opportunities everywhere and getting to market with silicon no longer requires building silicon for extremely high volumes and margins. We are in the age of custom solutions designed to solve real problems and there are countless opportunities at the edge.

I’m convinced that Xceler’s opportunity is much less risky with the help of Silicon Catalyst and the use of an open source architecture (RISC-V). I believe there will be many companies that will follow a similar path – Jim.

A Final Note
Gautam Kavipurapu was critical to me in creating this post so I’d like to tell you a bit more about him. Gautam has 20 years of experience at both large and small companies in operations, technology and management roles, including setting up and running geographically distributed teams of over 150 employees (India, US, and Europe). Gautam has 14 issued and several pending patents covering systems, networking, and computer architecture. He also has several IEEE conference papers to his credit. Gautam’s team at IRIS Holdings in 2001 demonstrated a “Virtual Router,” a software router (modeled as a dataflow machine similar to Google Tensor) on a PC with NIC cards as part of the IRIS (Integrated Routing and Intelligent Switching) system development (today’s NFV). His inventions and innovations have provided major technology to various companies in the storage and computing industry worldwide for millions of dollars and are cited in more than 350 issued patents. Gautam has an Executive MBA from INSEAD and a BSEE from Georgia Institute of Technology.


Chips are the bleeding edge of China trade war Recovery
by Robert Maire on 05-17-2019 at 7:00 am

Last week we warned of a further down leg due to China trade. We were surprised by how quickly our prediction came true, as it appears we are now in the midst of giving back all the upside built into stocks based on a peaceful resolution of the trade conflict, which obviously isn’t happening.

Many of the semi stocks we cover were down anywhere from 3% to 5% or more on Monday and had a partial bounce back on Tuesday. There is likely more downside beyond that as details of China’s retaliation come out. If the US goes tit for tat and retaliates to the retaliation, it’s going to be a longer drop.

Chips are much more exposed than the average company
We think today’s stock action was more or less a knee-jerk reaction without a lot of analysis on specific names. As investors and analysts do the math, the impact on chip companies will be more than this initial reaction.

What is very difficult to calculate is the derivative impact from the many trade-related issues of upstream companies from the semiconductor industry. Given that chips are used in many, many applications and China is the biggest consumer of chips, with many of those bound for re-export to the US, there is a lot of derivative impact.

The iPhone supply chain example
iPhones are in the crosshairs of the trade dispute given that they are made in China. It sounds like they may not escape the tariffs no matter how much the administration talks about “Tim Apple”.

The iPhone is the pinnacle of the semiconductor industry, as it is driving the leading edge of Moore’s Law at TSMC, which is currently the technology leader in the industry. Intel and PCs are far from being the driver.

Apple has also been the biggest driver of TSMC’s conversion to EUV, due to Apple’s demands and competition from Samsung. The iPhone also sucks up a huge amount of NAND memory in an industry that is already oversupplied despite 256GB iPhones.

Obviously all the CPU, GPU, comms, and support chips in an iPhone add up to roughly $150 worth of chips in the bill of materials. Multiply that by 200 million phones a year and you get roughly $30B out of a chip industry of (at least) $400B.
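To make that back-of-the-envelope math explicit, here is a minimal sketch in Python; the per-phone chip content, unit volume, and industry size are the rough estimates quoted above, not audited figures:

# Back-of-the-envelope estimate of iPhone silicon content vs. the total chip market.
# All inputs are the rough figures quoted above, not audited data.
chip_bom_per_phone = 150            # USD of CPU, GPU, comms, and support chips per iPhone
phones_per_year = 200_000_000       # approximate annual iPhone unit volume
chip_industry_revenue = 400e9       # approximate total chip industry revenue in USD

iphone_chip_spend = chip_bom_per_phone * phones_per_year
print(f"iPhone chip spend: ${iphone_chip_spend / 1e9:.0f}B")                        # ~$30B
print(f"Share of chip industry: {iphone_chip_spend / chip_industry_revenue:.1%}")   # ~7.5%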

There is zero chance of Apple moving production out of China in the next 5 to 10 years so iPhones are hostage in the trade war.

A tax not a tariff
If we assume a $1000 iPhone has a 25% “tariff” placed on it, it likely becomes a more expensive iPhone, perhaps close to a $1250 iPhone. Foxconn, Apple’s manufacturer in China, lives on relatively small gross margins which don’t allow it to absorb the impact of a tariff. Apple is very concerned about anything that will hurt its margins and profitability and already occupies the high end of the smartphone market. This suggests that Apple will not absorb much of the price increase either.
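As a quick sanity check on that pass-through math, here is a minimal sketch; the pass-through fraction is an illustrative assumption, since the argument above is only that most of the increase lands on US consumers:

# Illustrative tariff pass-through on a $1000 iPhone with a 25% tariff.
# The pass-through fraction is an assumption; the argument above is that it is close to 1.
base_price = 1000.0     # USD, pre-tariff iPhone price
tariff_rate = 0.25      # 25% tariff
pass_through = 1.0      # fraction of the tariff passed on to US consumers (assumed)

tariff = base_price * tariff_rate
consumer_price = base_price + tariff * pass_through
print(f"Tariff per unit: ${tariff:.0f}")              # $250
print(f"US consumer price: ${consumer_price:.0f}")    # ~$1250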

The majority of the price increase will be passed on to US consumers as higher prices for iPhones. Apple’s competitors, like Samsung and LG, will then have an even bigger price advantage, as they can compete with phones made in Korea, Vietnam, or elsewhere. This means that Apple will likely lose even more share to Samsung and LG (and others) in the phone market as its products become less price-competitive.

Obviously this will have the impact of fewer phones being sold by a US company (Apple) and more being sold by foreign competition (just not the Chinese).

So at the end of the day it becomes a “tax” on US consumers, leading to less product sold by a US company and more product going to foreign competition.

Trickle down to chips and further down to chip equipment
The effect will be more like a bowling ball rolling downhill than a soft trickle-down effect. It is highly likely that the trade war will only strengthen China’s resolve for “Made in China 2025” to get out from under US dominance. China could easily take a very hard line and push, from the government on down, for Chinese companies to use any chips other than US chips and any equipment other than US equipment.

If the Chinese didn’t get the message with ZTE, and then with Jinhua, they certainly are getting the message loud and clear now.

We think the response will be for China to double down on “Made in China 2025”: perhaps financially helping out its chip industry even more, buying semiconductor equipment from outside the US, and developing more tools in China. If anything, IP could be more at risk as China needs to accelerate its independence through whatever means necessary. We don’t see China “knuckling under” in the trade war.

Downcycle lengthened?

The current chip downcycle, which many were hoping might start to ease in the second half of 2019, will clearly take much longer to recover from with a raging trade war. The downturn could last well into 2020 and perhaps well beyond that.

When the recovery happens, the upturn will be more muted, and we could see other countries not involved in the war benefiting and recovering first. Korea would likely gain share in both chips and equipment, as would Japan.

The stocks
What we saw in the stock market on Monday and Tuesday was no more than headline risk and reaction. The downturn and partial rebound in stock prices were not based on recalculating P/E or P/S ratios against lower levels of business due to the trade war. It will take several quarters of results to figure out how bad the impact will be.

We could start to see the negative impact in the June quarter, with companies missing the forecasts they just made at the end of the March quarter as shipments get held up or stopped. It’s likely that business will slow until the “tariffs” are sorted out.

There also may have been some “channel stuffing” in those sectors that could ship ahead of the tariff imposition, to get product across borders before tariffs changed. This could also cause a weaker June quarter. One of the main issues is that we just don’t know yet. Even if China and the US cease hostilities and bang out a deal ASAP, we will still see significant impact in the June quarter.

Right now, both sides appear to still be in escalation mode, so hopes for a resolution prior to the end of June seem weak at best. We don’t think the last two days’ stock action is a good predictor of where we are going; we still think the downside risk is higher than the upside risk. So far this assumption has been right, given the turn of events.

A general position in the semiconductor industry may be to be long non-US chip industry players, such as those in Korea and Japan, and generally short the US chip industry, especially those names with high China exposure.


IC to Systems Era

IC to Systems Era
by Daniel Nenni on 05-16-2019 at 12:00 pm

One of my favorite EDA disruptions is the Siemens acquisition of Mentor, pure genius. Joe Sawicki now runs the Mentor IC EDA business for Siemens so we will be seeing him at more conferences and events than ever before. Joe did a very nice keynote at the recent U2U conference that I would like to talk about before we head to the 56thDAC in Las Vegas. You will see more of Joe there for sure.


Joe covered quite a bit of material over 47 slides, so I will talk to my top 5:

Apple’s Domain-Specific Processor Evolution
The Apple Ax SoC progression slide. Apple started with TSMC in 2014 on the 20nm process, delivering the A8 inside the iPhone 6. I bought six of them for my family and it was a great Apple experience. Apple packed about 2B transistors inside an 89mm2 die. Four years later we have the iPhone XS with a TSMC 7nm A12 SoC: an 83mm2 die with close to 7B transistors. Simply amazing. And one thing I can tell you that Joe cannot is that Apple could not have done it without Mentor and the rest of EDA.
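To put that A8-to-A12 progression in perspective, here is a minimal density calculation using the approximate transistor counts and die sizes above (the A12 count is only “close to 7B”, so treat the result as a rough figure):

# Rough transistor-density comparison from the approximate figures above.
a8_transistors, a8_area_mm2 = 2e9, 89      # TSMC 20nm A8, iPhone 6 (2014)
a12_transistors, a12_area_mm2 = 7e9, 83    # TSMC 7nm A12, iPhone XS (2018), "close to 7B"

a8_density = a8_transistors / a8_area_mm2
a12_density = a12_transistors / a12_area_mm2
print(f"A8 density:  {a8_density / 1e6:.0f}M transistors/mm^2")               # ~22M/mm^2
print(f"A12 density: {a12_density / 1e6:.0f}M transistors/mm^2")              # ~84M/mm^2
print(f"Density improvement: {a12_density / a8_density:.1f}x in four years")  # ~3.8x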

Physical Design Complexity Continues
The average DRC operations and rules slide: 28/22nm brought context sensitivity and smart fill; 16/14nm brought FinFETs and double patterning; 10/7nm brought cutOD and pattern matching; and 5/3nm brings EUV + DP and SADP optimization. This right here is an example of EDA ingenuity. I cannot count how many times in my 35-year career I have heard that the next process node couldn’t be done or would be too expensive (7 customers for 7nm and 5 for 5nm). Complete and utter nonsense.

3nm Node Manufacturing Technology Challenges
EUV multi-patterning is required for achieving pattern resolution; gate-all-around transistors trigger new extraction requirements and physical failure modes; PPA metrics drive lithography process model accuracy below 0.3nm RMS; and multi-beam mask writing enables curvilinear masks for the most advanced lithography. We got a more detailed look at 3nm GAA technology at the Samsung Foundry Forum this week. Tom Dillinger will write about it in more detail, but Joe is right on the money here. Samsung announced PDK availability for 3nm and I can tell you that Mentor was a big part of that effort as well.

Systems Companies Growing Percent of Foundry Sales 5-Year CAGR +70%
Another significant EDA change that has evolved over the past six years is the customer mix. Following Apple, systems companies are now taking control of their silicon destiny and developing their own chips. In 2018, according to Joe, systems companies contributed 17% of foundry sales, IDMs 16%, and fabless companies 67%. Ten years ago it would have been all IDM and fabless companies.
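For readers unfamiliar with the metric in the slide title, here is a minimal sketch of how a five-year CAGR is computed; the starting value below is purely hypothetical and only illustrates what a +70% CAGR implies:

# Compound annual growth rate (CAGR) over n years, and what a +70% CAGR implies.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

start, years, rate = 1.0, 5, 0.70   # hypothetical starting value, 5-year horizon, +70% CAGR
end = start * (1 + rate) ** years
print(f"Implied growth over {years} years: {end / start:.1f}x")   # ~14.2x
print(f"Recovered CAGR: {cagr(start, end, years):.0%}")           # 70%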

Mentor Safe IC
The most complete functional safety IC solution, automating the path to compliance: lifecycle management, safety analysis, safety verification, and design for safety. Here is my concern: when I first started in semiconductors, TVs and minicomputers were the semiconductor drivers, and those products were built to last many years. The mobile era changed all that. We can reboot our phones with impunity or buy a new one just about anywhere if one stops working. Transportation-related semiconductors are a different story. The average ownership of a new car is more than 10 years, and you can’t reboot your semiconductor-laden auto with impunity. Truly autonomous cars will require massive amounts of validation and verification (another one of Joe’s slides) before I will trust my family’s lives to one.

Joe also gave an overview of Siemens’ software leadership and commitment to EDA. My former employer Solido was used as an example. Solido was the first EDA acquisition after Mentor became part of Siemens. I can tell you from personal experience that it was the best acquisition integration I have ever seen, absolutely.