
Upcoming Webinar: Optimized Chip Design with Main Processors and AI Accelerators

by Kalar Rajendiran on 02-08-2022 at 10:00 am

Expedera DLA IP Benefits

Using the right tool for the job can be extremely important. Well, maybe not in the case of the famed chef Martin Yan who is notorious for using just one knife—a razor sharp wide blade cleaver that doubles as a spatula—for preparing anything and everything he cooks. For the rest of us, though, the right tools can make all the difference.

The wrong choice of tool has stymied the prospects of many a product. Maybe there were justifiable reasons for the choice. Maybe the product concept was ahead of its time and too early for the market. Maybe the ecosystem at that time did not offer a better option. Perhaps you have your own list of such products. One in particular comes to mind that many of you may not be aware of.

More than a decade before Apple launched the iPad, a similar product was conceived at National Semiconductor (now part of TI). It was called the WebPad, an always-on wireless tablet device. For practical purposes, the intended use cases for the WebPad were similar to the iPad’s. National had developed the reference design and manufactured a large batch of samples for its OEM customers to test and evaluate. National’s goal was to create traction for this product so the company could sell more chips. There was serious interest from many customers. But the Achilles heel of the product was the processor. x86-based processors were available in-house; National had acquired Cyrix, an x86-architecture processor company, a few years earlier. So that was the processor of choice. From a PPA perspective for the intended application, it scored well on performance. But on power and area, not so well. The sample devices were power hungry and bulky. There are probably any number of reasons why the WebPad died on the vine, but the choice of processor makes for an interesting case study. For a product that is supposed to be an always-on mobile tablet, weight, form factor and battery life are of paramount importance and play a deciding role in the product’s market viability.

Could a different processor have been considered for the WebPad? Maybe. Arm was nascent at that time and just beginning its expansion into the mobile market. Arm may not have matched the x86 on performance in those days. But the applications were not that demanding, and x86 was likely overkill. And Arm would have done well on the power and area metrics. Fast forward to current times: applications are extremely demanding on all three PPA metrics. And AI-driven edge applications pose stringent requirements in terms of latency, deterministic response, energy efficiency, memory resources and maximum throughput. With so many options to choose from, there is no excuse for undermining a great product idea by making the wrong processor choice.

For today’s and future AI-enabled applications, is the main processor still the best fit in every case? Can custom instruction extensions breathe new life into main processing? When does it make sense to use a hybrid core architecture with a main processor alongside AI accelerators? You will find the answers to these questions at an upcoming webinar hosted by Expedera and Andes Technology.

Expedera

Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI-inference applications. Expedera’s Origin™ deep learning accelerator (DLA) products are easily integrated, readily scalable, and can be customized to application requirements. The solutions also reduce the memory requirement, which is very important for embedded devices at the edge.

While Expedera’s DLA products can work with any CPU architecture, they deliver better efficiency alongside processors that support custom instructions.

Andes Technology

Andes Technology is a leading supplier of embedded processor intellectual property. Andes offers high-performance, low-power 32/64-bit processors and associated SoC platforms to serve rapidly growing embedded systems applications.

Their processor cores, including RISC-V cores that support custom extensions, can fulfill the requirements of many AI applications. In other cases, an architecture combining RISC-V cores with an Expedera DLA core leads to a more optimal end solution.


Also read:

CEO Interview: Da Chuang of Expedera

A Packet-Based Approach for Optimal Neural Network Acceleration

The Roots Of Silicon Valley


Accellera at DVCon U.S. 2022 in the Metaverse!

by Daniel Nenni on 02-08-2022 at 6:00 am

Gather.Town

The premier verification conference and exhibition is coming up, and of course Accellera plays an important role. This year DVCon will again be virtual, which is unfortunate, but I must say, as a long-time attendee, that this year’s program really stands out. In fact, there is a new addition worth mentioning: the Metaverse without the headset. I tried the demo, and it’s very cool; it will be interesting to see it in play:

“DVCon U.S. 2022 is pleased to be partnering with Gather.Town to enhance the exhibit hall and networking experience for companies and attendees. The virtual pages used in 2021 will still be available for our sponsors/exhibitors to upload supplemental documents for on-demand viewing and to chat with attendees at any time. The addition of Gather.Town will make spending time with attendees just as easy as in real life, allowing attendees to walk in and out of conversations in a natural and seamless way.”

And here are the Accellera related events. I hope to virtually see you there!

Portable Stimulus Working Group Tutorial:

PSS in the Real World

Monday, February 28 9:00-11:00am

The tutorial will highlight the power and flexibility of Accellera’s Portable Stimulus Standard by walking through several real-world examples. Beginning with a brief overview of the standard, presenters will show how to use PSS to model stimulus for a variety of applications, from which multiple target-specific test implementations may be generated.

UVM-AMS Working Group Workshop:

An Update on the Accellera UVM-AMS Standard

Monday, February 28 11:30am-12:30pm

The UVM-AMS Working Group was formed with a charter to develop a standard that provides a unified analog/mixed-signal verification methodology based on UVM, with a major focus on transient analysis. The UVM-AMS standard will improve analog mixed-signal (AMS) and digital mixed-signal (DMS) verification of integrated circuits and systems. This will encourage support by tool and IP providers, offering ready-to-use analog/mixed-signal verification IP that can be integrated easily into a UVM-AMS testbench. It will raise the productivity and quality of analog/mixed-signal verification across projects and applications, thanks to the reuse of proven verification components and stimuli. In this workshop, the working group will share the findings, requirements and ideas collected so far, along with its plans for the next steps in developing the proposed standard. Aspects under consideration for the UVM-AMS standard will be discussed at a high level.

In addition, an example will be provided to illustrate how UVM-AMS may be deployed to easily augment an existing UVM environment to verify an Analog/Mixed-Signal device under test.

Presenters will conclude with an opportunity for attendees to ask questions and comment on the proposed standard.

IP Security Assurance Working Group Workshop:

An Overview of the Security Annotation for Electronic Design Integration (SA-EDI) Standard

Monday, February 28 1:00-2:00pm

The importance of security in the electronic systems many of us rely on has become obvious to semiconductor design and manufacturing companies, but most hardware security assurance practices in industry are still performed manually using proprietary methods. This approach is expensive, time consuming, and error prone due to the ever-increasing complexity of systems. To address the issue, the Accellera IP Security Assurance (IPSA) Working Group was formed in 2018 by a team of security and EDA experts to develop a general, portable IP security specification standard that describes an IP’s security concerns (threat model) and guides EDA vendors on how to produce security assurance collateral and use it to automate security verification. The specification was approved as an Accellera standard for Security Annotation for Electronic Design Integration (SA-EDI) in 2021.

During this workshop we will give an overview of this standard by going over the related collateral, methodology, a case study of the application of the standard and the roadmap of the standard.

Functional Safety Working Group Workshop:

An Update on Accellera’s Emerging Functional Safety Standard

Monday, February 28 2:30-3:30pm

This workshop presents an update on the work performed by Accellera’s Functional Safety Working Group over the past year and gives a preview of the white paper the group is planning to publish in 2022. The presentation first introduces the formalization of the Failure Modes, Effects, and Diagnostic Analysis (FMEDA) process and how it has led to the initial high-level definition of the data model, which will be the basis for the emerging functional safety standard.

The workshop will then provide detail on the data model and describe the necessary attributes to perform an FMEDA, followed by a description of some of the methodology discussions that are captured or assumed in the data model.

The workshop will also explore some directions connected to the development of the Functional Safety data format standard that the working group has identified and that will form the basis for the next steps for the working group.

UVM Working Group Birds of a Feather:

Wednesday, March 2 1:00-2:00pm

During the UVM Birds of a Feather meeting at DVCon U.S. 2021, the Accellera UVM Working Group heard from users how backward compatibility issues held back migration to the latest library. The Working Group is preparing to release a new library version (targeted for summer 2022) that greatly reduces these issues. At this meeting, the Working Group will present expectations for this library, including the few remaining situations that may require user code updates, to again get feedback from the user community. There should also be time remaining for an open Q&A. Attendance at the Birds of a Feather is free, but registration through DVCon is required to access the platform.

About Accellera

Accellera Systems Initiative is an independent, not-for-profit organization dedicated to creating, supporting, promoting, and advancing system-level design, modeling, and verification standards for use by the worldwide electronics industry. We are composed of a broad range of members that fully support the work of our technical committee to develop technology standards that are balanced, open, and benefit the worldwide electronics industry. Leading companies and semiconductor manufacturers around the world are using our electronic design automation (EDA) and intellectual property (IP) standards in a wide range of projects in numerous application areas to develop consumer, mobile, wireless, automotive, and other “smart” electronic devices. Through an ongoing partnership with the IEEE, standards and technical implementations developed by Accellera Systems Initiative are contributed to the IEEE for formal standardization and ongoing governance.

Also read:

Accellera Unveils PSS 2.0 – Production Ready

Functional Safety – What and How

An Accellera Update. COVID Accelerates Progress


Silicon Catalyst Fuels Worldwide Semiconductor Innovation

by Mike Gianfagna on 02-07-2022 at 10:00 am


Silicon Catalyst just announced the addition of six new companies to its semiconductor industry incubator. The focus areas for these companies are worth noting, as is the broad geographic footprint of the group. I’ll get to this detail in a moment, but first I’d like to step back a bit and take a closer look at this remarkable organization and what it has accomplished. We all know semiconductors are fueling the development of life-changing and world-changing technology. Without this innovation pipeline, many of the amazing new products we enjoy would simply not exist. If you want more on this, just Google “chip shortage.” I had the opportunity recently to speak with Pete Rodriguez, Silicon Catalyst CEO. Before I get into the recent announcement, let me share Pete’s perspective on how Silicon Catalyst fuels worldwide semiconductor innovation.

“Best Time Ever to Be in the Semiconductor Industry”

Pete Rodriguez

This is the comment that began my conversation with Pete. He was very pleased with Silicon Catalyst’s growth in 2021 and is very bullish about the outlook for the Incubator in 2022. The term “explosive growth” was used. Pete pointed out that Silicon Catalyst is the only incubator in the world (among thousands) that is focused exclusively on semiconductors. Given how hot semiconductors are across the globe, it’s great to have them as a unique source for startups.

Pete ran down some of the recent additions to the organization, including Matrix Capital Management and Sony Semiconductor Solutions as Strategic Partners. Beyond the current nine, there will be several more Strategic Partners in 2022. Pete also pointed out that the Silicon Catalyst Advisor Network is second to none, being two orders of magnitude larger than the leading incubators in the world (from a semiconductor perspective). This organization has grown to over 220 members and includes Wally Rhines, the recipient of this year’s Morris Chang Exemplary Achievement Award from the Global Semiconductor Alliance. Their ecosystem of In-Kind Partners grew from 14 four years ago to over 54 today. These are the organizations that provide preferred access to the tools, technologies and services semiconductor startups need to get to market. This is the primary mission of Silicon Catalyst. It’s what they do best. There will be more In-Kind Partners announced soon.

From an international perspective, there was a successful launch of Silicon Catalyst UK, covered by SemiWiki here. This adds to the joint venture in China and the team in Israel. Pete explained the organization also has a university program with 30 institutions that will increase this year to over 50. There are now 46 companies in the Silicon Catalyst incubator domestically and 35 in China. These organizations are getting funded and getting to a product. Pete reported that the gross market value of the portfolio is now over $1.25 billion, starting from zero about 6 ½ years ago. Fantastic progress in a very important area.

Six New Entrants to the Incubator

Now a bit about the recent announcement. Six new companies have been admitted to the Silicon Catalyst incubator. They are in Argentina, Canada, the US, Israel, Singapore, and Switzerland. Silicon Catalyst has quite a worldwide footprint. The application areas of these six companies also show a lot of diversity. Here is a summary:

  • ApLife Biotech – Argentina “Becoming World Leaders in Discovery for Biosensors”

ApLife Biotech manufactures synthetic DNA-derived molecules and large combinatorial libraries in predefined locations for mass-screening of important biological molecules.

  • Lemurian Labs – Canada “Building a next-gen AI Accelerator to enable deep learning on the edge”  

At Lemurian, our goal is to make deep learning affordable and available for everyone, from the individual researcher to industry.

  • NanoHydro Chem – USA “Energy Storage Solutions”

NanoHydroChem is an advanced materials company developing and commercializing nanomaterials for energy storage applications.

  • RAAAM – Israel “Providing the highest-density embedded memory in any standard CMOS technology”

RAAAM offers the highest-density embedded memory in any standard CMOS process, requiring no additional process steps or cost.

  • Siloxit – Singapore “Zero-touch security that works”

Siloxit was founded in 2020, focused on delivering IoT devices and systems for secure high-value, high-volume infrastructure applications.

  • Synthara.AI – Switzerland “Delivering server-class, rapidly-customizable AI accelerators for the next-generation of edge inference applications”   

Synthara offers highly scalable and rapidly customizable energy-efficient AI accelerators for extreme edge applications such as hearing aids, wearables and biomedical monitoring.

That’s quite a lineup. Now you know how Silicon Catalyst fuels worldwide semiconductor innovation. You can learn more about Silicon Catalyst and their unique programs here.

Also read: 

CEO Interview: Pete Rodriguez of Silicon Catalyst


Webinar: Investing in Semiconductor Startups



Future of Semiconductor Design: 2022 Predictions and Trends

by Kalar Rajendiran on 02-07-2022 at 6:00 am

IP Management Tools Survey

Predictions and trends create the forces that accelerate innovations and keep the industry moving forward. We are all used to hearing of important issues and challenges, usually in the context of solutions offered by various vendors. The SemiWiki forum plays its role in bringing awareness of all of the above to its audience. For example, many companies make presentations on a regular basis about design related challenges and solutions and SemiWiki covers many of those. But a recent webinar by the Methodics division of Perforce is different. It is different because it presented key insights gathered from a broad cross-section of the industry.

The webinar, titled “Future of Semiconductor Design: 2022 Predictions and Trends,” was presented by Robin Butler, General Manager of Methodics at Perforce. Robin reported the top issues, trends, challenges and solutions as learned from a survey of the industry. The value of a survey depends on how well the industry is represented in it. The following is the breakdown of the representation.

Roles: Engineering Management (32%), Design Engineering (34%), CAD Management (13%), IT Management (1%), Executive Management (10%) and others such as Functional Safety Managers.

Experience level: from 0-2 years (13%), 3-5 years (10%), 6-10 years (14%), and 11 years or more (62%).

Companies: under $500 million annual revenue (44%), $501 million to $5 billion (26%), and over $5 billion (30%).

No matter what role you play within the semiconductor ecosystem, you are likely to find the results of the survey interesting, whether as (a) inspiration to enhance an existing product, build a new product to bridge a gap, ride a trend or solve an important issue, or (b) a guide to adopting best design practices and asset management tools and techniques.

This post is a synthesis of the salient points I garnered from the webinar.

Most Important Issues

The two most important issues companies are facing are time-to-market pressures and effective reuse of IP. While reuse of IP is an effective way to accelerate time to market, companies need to implement formal IP reuse strategies. The lack of such formal reuse strategies, processes and supporting tools is impeding growth in semiconductor design productivity. This is compounded by the fact that semiconductor design capacity is increasing at a rapid rate. According to a recent study by the University of Michigan, semiconductor design productivity is increasing at a rate of 28% annually. But semiconductor capacity is increasing even faster, at 58% annually.

A formal IP reuse strategy is becoming a must to deliver on time-to-market demands and close that gap between design capacity and design productivity. 
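The compounding effect of that mismatch can be sketched with a quick back-of-the-envelope calculation. The 58% and 28% annual growth rates are the figures quoted above; the normalized starting values are an illustrative assumption:

```python
# Back-of-the-envelope sketch: how quickly 58%/year capacity growth
# outpaces 28%/year productivity growth (rates as quoted above).
capacity_growth = 1.58      # design capacity: +58% per year
productivity_growth = 1.28  # design productivity: +28% per year

capacity = productivity = 1.0  # normalized starting point (illustrative)
for year in range(1, 6):
    capacity *= capacity_growth
    productivity *= productivity_growth
    print(f"Year {year}: capacity is {capacity / productivity:.2f}x productivity")
```

At these rates the ratio widens by a factor of about 1.23 each year, so after five years capacity outruns productivity by nearly 2.9x, which is why a formal reuse strategy is needed to close the gap.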

Trends

Companies are increasingly required to meet ISO 26262, ITAR, and other compliance/functional safety standards. This is understandable given that many of the markets driving semiconductor growth are for consumer-oriented applications. A design and implementation tools suite that can enable, automate and ensure traceability for semiconductor design is becoming critical.

The global embedded system market is expected to grow 6.3% to $137.31 billion by 2027. Embedded software is becoming part and parcel of many of today’s products, requiring hardware and software to be bundled together. As hardware designers and software developers collaborate to create the product offering, configuration management is essential to handle the interdependencies.

A majority of the survey respondents stated that 40+% of the die in a typical chip consists of custom circuitry. And analog component reuse is becoming more common to expedite the design of complex mixed-signal SoCs. In other words, more analog is getting integrated into SoCs.

A significant portion of survey respondents indicated that more than half of their job requires IP integration. A comprehensive IP lifecycle management platform would make the IP integration job easier by helping find, qualify and integrate the optimal IP for the job.

What a difference a year or two makes. Implementation of 2.5D designs is trending upward. A little more than a third of the survey respondents are considering or already implementing 2.5D designs for their products. 2.5D designs are becoming more feasible and a way to maintain the level of SoC integration as Moore’s law is slowing.

Challenges

Finding relevant IP for reuse is a challenge. Many survey respondents are either reusing IP from a previous project or asking a coworker for recommendations. While this approach works, it may or may not yield the optimal IP for the project at hand. A more formal, data-driven approach to finding relevant IP would increase design productivity and deliver a better product.

Survey respondents stated that finding relevant IP for their design takes a day or longer. They then need to qualify the IP for inclusion in their design. Nearly 75% of survey respondents reported difficulty in determining the context of an IP and its quality. Tracking and determining the quality of IPs is important for traceability.

An efficient way of cataloging IP using metadata from various qualification tools within the design ecosystem is an area of opportunity. A platform that can determine if requirements are met and where an IP is used can provide teams with the quality metric and state of the IP.

Tools

IP Management

Although companies have embraced IP-centric design practices, use of a commercial IP Lifecycle Management (IPLM) platform is still in the early stages of adoption. As you can see from the figure below, 81% of the survey respondents are not using a commercial IPLM platform. While 19% stated they are using Methodics IPLM, 28% said they use internal/other tools. The 28% could contain other commercial IPLM platforms, since the survey was promoted within the Perforce customer base.

For IPLM platform vendors, this represents an opportunity with at least 53% of the pie below.
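The 53% figure follows directly from the survey shares quoted above; a sketch of the arithmetic, conservatively treating the internal/other group as already served:

```python
# Survey shares (percent of respondents), as quoted above.
using_methodics_iplm = 19  # already using Methodics IPLM
internal_or_other = 28     # internal/other tools (may include commercial IPLM)

# Conservative addressable share for IPLM vendors: exclude both groups,
# since the internal/other slice may hide other commercial platforms.
addressable = 100 - using_methodics_iplm - internal_or_other
print(f"Addressable share: at least {addressable}%")  # at least 53%
```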

Data Management and Version Control

Data management and version control solutions come from the software development space; for Perforce, they go back to its early days as a company. These solutions provide a backbone for IP management: they support the tracking of IPs and provide the metadata engineers need to make informed decisions. 36% of respondents indicated they are using Perforce Helix Core for data management, followed by 17% using Subversion (SVN) and another 17% using Git.

For data management/version control tools vendors, there is an opportunity with at least 16% of the pie below.

Summary

A formal IP reuse strategy is essential to make the most of one’s IP investments. It is a must to deliver on time-to-market demands and close that gap between design capacity and design productivity. With an increasing requirement for semiconductor products to meet compliance and/or functional safety standards, traceability represents a major challenge. An effective IP management platform helps designers locate, qualify and manage the release of IP. Using such a platform to manage the IP enables reuse across projects and also enables traceability.

The survey indicates that there is opportunity to maximize the potential of an IP-centric design approach with the use of the right management tools. And there is opportunity for tools vendors to tap into the prospective market potential for these tools.

To watch a recording of the webinar, click here.

The detailed results of the survey are included in a Perforce report titled “Semiconductor Report – The State of the Industry.” To get a copy of the report, click here.


Also read:

Webinar – SoC Planning for a Modern, Component-Based Approach

You Get What You Measure – How to Design Impossible SoCs with Perforce

Achieving Scalability Means No More Silos


The Semiconductor Ecosystem Explained

by Steve Blank on 02-06-2022 at 6:00 am

TSMC Ecosystem Explained

The last year has seen a ton written about the semiconductor industry: chip shortages, the CHIPS Act, our dependence on Taiwan and TSMC, China, etc.

But despite all this talk about chips and semiconductors, few understand how the industry is structured. I’ve found the best way to understand something complicated is to diagram it out, step by step. So here’s a quick pictorial tutorial on how the industry works.


The Semiconductor Ecosystem

We’re seeing the digital transformation of everything. Semiconductors – chips that process digital information — are in almost everything: computers, cars, home appliances, medical equipment, etc. Semiconductor companies will sell $600 billion worth of chips this year.

Looking at the figure below, the industry seems pretty simple. Companies in the semiconductor ecosystem make chips (the triangle on the left) and sell them to companies and government agencies (on the right). Those companies and government agencies then design the chips into systems and devices (e.g. iPhones, PCs, airplanes, cloud computing, etc.), and sell them to consumers, businesses, and governments. The revenue of products that contain chips is worth tens of trillions of dollars.

Yet, given how large it is, the industry remains a mystery to most. If you think of the semiconductor industry at all, you may picture workers in bunny suits in a fab clean room (the chip factory) holding a 12” wafer. Yet it is a business that manipulates materials an atom at a time, and its factories cost tens of billions of dollars to build. (By the way, that wafer has two trillion transistors on it.)

If you were able to look inside the simple triangle representing the semiconductor industry, instead of a single company making chips, you would find an industry with hundreds of companies, all dependent on each other. Taken as a whole it’s pretty overwhelming, so let’s describe one part of the ecosystem at a time.  (Warning –  this is a simplified view of a very complex industry.)

Semiconductor Industry Segments

The semiconductor industry has seven different types of companies. Each of these distinct industry segments feeds its resources up the value chain to the next until finally a chip factory (a “Fab”) has all the designs, equipment, and materials necessary to manufacture a chip. Taken from the bottom up these semiconductor industry segments are:

  1. Chip Intellectual Property (IP) Cores
  2. Electronic Design Automation (EDA) Tools
  3. Specialized Materials
  4. Wafer Fab Equipment (WFE)
  5. “Fabless” Chip Companies
  6. Integrated Device Manufacturers (IDMs)
  7. Chip Foundries

The following sections provide more detail about each of these seven semiconductor industry segments.

Chip Intellectual Property (IP) Cores

  • The design of a chip may be owned by a single company, or…
  • Some companies license their chip designs – as software building blocks, called IP Cores – for wide use
  • There are over 150 companies that sell chip IP Cores
  • For example, Apple licenses IP Cores from ARM as a building block of their microprocessors in their iPhones and Computers

Electronic Design Automation (EDA) Tools

  • Engineers design chips (adding their own designs on top of any IP cores they’ve bought) using specialized Electronic Design Automation (EDA) software
  • The industry is dominated by three U.S. vendors – Cadence, Mentor (now part of Siemens), and Synopsys
  • It takes a large engineering team using these EDA tools 2-3 years to design a complex logic chip like a microprocessor used inside a phone, computer or server. (See the figure of the design process below.)

  • Today, as logic chips continue to become more complex, all Electronic Design Automation companies are beginning to insert Artificial Intelligence aids to automate and speed up the process

Specialized Materials and Chemicals

So far our chip is still in software. But to turn it into something tangible we’re going to have to physically produce it in a chip factory called a “fab.” The factories that make chips need to buy specialized materials and chemicals:

  • Silicon wafers – and to make those they need crystal growing furnaces
  • Over 100 gases are used – bulk gases (oxygen, nitrogen, carbon dioxide, hydrogen, argon, helium), and other exotic/toxic gases (fluorine, nitrogen trifluoride, arsine, phosphine, boron trifluoride, diborane, silane, and the list goes on…)
  • Fluids (photoresists, top coats, CMP slurries)
  • Photomasks
  • Wafer handling equipment, dicing
  • RF Generators

Wafer Fab Equipment (WFE) Make the Chips

  • These machines physically manufacture the chips
  • Five companies dominate the industry – Applied Materials, KLA, Lam Research, Tokyo Electron, and ASML
  • These are some of the most complicated (and expensive) machines on Earth. They take a slice of an ingot of silicon and manipulate its atoms on and below its surface
  • We’ll explain how these machines are used a bit later on

“Fabless” Chip Companies

  • Systems companies (Apple, Qualcomm, Nvidia, Amazon, Facebook, etc.) that previously used off-the-shelf chips now design their own chips.
  • They create chip designs (using IP Cores and their own designs) and send the designs to “foundries” that have “fabs” that manufacture them
  • They may use the chips exclusively in their own devices e.g. Apple, Google, Amazon ….
  • Or they may sell the chips to everyone e.g. AMD, Nvidia, Qualcomm, Broadcom…
  • They do not own Wafer Fab Equipment or use specialized materials or chemicals
  • They do use Chip IP and Electronic Design Software to design the chips


Integrated Device Manufacturers (IDMs)

  • Integrated Device Manufacturers (IDMs) design, manufacture (in their own fabs), and sell their own chips
    • They do not make chips for other companies (this is changing rapidly – see here.)
    • There are three categories of IDMs – Memory (e.g. Micron, SK Hynix), Logic (e.g. Intel), and Analog (e.g. TI, Analog Devices)
  • They have their own “fabs” but may also use foundries
    • They use Chip IP and Electronic Design Software to design their chips
    • They buy Wafer Fab Equipment and use specialized materials and chemicals
  • The average cost of taping out a new leading-edge chip (3nm) is now $500 million

Chip Foundries

  • Foundries make chips for others in their “fabs”
  • They buy and integrate equipment from a variety of manufacturers
    • Wafer Fab Equipment and specialized materials and chemicals
  • They design unique processes using this equipment to make the chips
  • But they don’t design chips
  • TSMC in Taiwan is the leader in logic, Samsung is second
  • Other fabs specialize in making chips for analog, power, rf, displays, secure military, etc.
  • It costs $20 billion to build a new generation chip (3nm) fabrication plant

Fabs

  • Fabs are short for fabrication plants – the factory that makes chips
  • Integrated Device Manufacturers (IDMs) and Foundries both have fabs. The only difference is whether they make chips for other companies (foundries) or make them to sell themselves (IDMs).
  • Think of a Fab as analogous to a book printing plant (see figure below)
  1. Just as an author writes a book using a word processor, an engineer designs a chip using electronic design automation tools
  2. An author contracts with a publisher who specializes in their genre and then sends the text to a printing plant. An engineer selects a fab appropriate for their type of chip (memory, logic, RF, analog)
  3. The printing plant buys paper and ink. A fab buys raw materials; silicon, chemicals, gases
  4. The printing plant buys printing machinery, presses, binders, trimmers. The fab buys wafer fab equipment, etchers, deposition, lithography, testers, packaging
  5. The printing process for a book uses offset lithography, filming, stripping, blueprints, plate making, binding and trimming. Chips are manufactured in a complicated process manipulating atoms using etchers, deposition, lithography. Think of it as an atomic level offset printing. The wafers are then cut up and the chips are packaged
  6. The printing plant turns out millions of copies of the same book. The fab turns out millions of copies of the same chip

While this sounds simple, it’s not. Chips are probably the most complicated products ever manufactured.  The diagram below is a simplified version of the 1000+ steps it takes to make a chip.

Fab Issues

  • As chips have become denser (with trillions of transistors on a single wafer), the cost of building fabs has skyrocketed – now >$10 billion for one chip factory
  • One reason is that the cost of the equipment needed to make the chips has skyrocketed
    • Just one advanced lithography machine from ASML, a Dutch company, costs $150 million
    • There are 500+ machines in a fab (though not all as expensive as ASML's lithography systems)
    • The fab building is incredibly complex. The clean room where the chips are made is just the tip of the iceberg of a complex set of plumbing feeding gases, power, liquids all at the right time and temperature into the wafer fab equipment
  • The multi-billion-dollar cost of staying at the leading edge has meant most companies have dropped out. In 2001 there were 17 companies making the most advanced chips.  Today there are only two – Samsung in Korea and TSMC in Taiwan.
    • Given that China believes Taiwan is a province of China this could be problematic for the West.

What’s Next – Technology

It’s getting much harder to build chips that are denser, faster, and use less power, so what’s next?

  • Instead of making a single processor do all the work, logic chip designers have put multiple specialized processors inside of a chip
  • Memory chips are now made denser by stacking them 100+ layers high
  • As chips get more complex to design – meaning larger design teams and longer time to market – Electronic Design Automation companies are embedding artificial intelligence to automate parts of the design process
  • Wafer equipment manufacturers are designing new equipment to help fabs make chips with lower power, better performance, optimum area-to-cost, and faster time to market

What’s Next – Business

The business model of Integrated Device Manufacturers (IDMs) like Intel is rapidly changing. In the past there was a huge competitive advantage in being vertically integrated i.e. having your own design tools and fabs. Today, it’s a disadvantage.

  • Foundries have economies of scale and standardization. Rather than having to invent it all themselves, they can utilize the entire stack of innovation in the ecosystem and focus solely on manufacturing
  • AMD has proven that it's possible to shift from an IDM to a fabless model. Intel is trying: it plans to use TSMC as a foundry for some of its own chips while also setting up its own foundry business

What’s Next – Geopolitics

Controlling advanced chip manufacturing in the 21st century may well prove to be like controlling the oil supply in the 20th. The country that controls this manufacturing can throttle the military and economic power of others.

  • Ensuring a steady supply of chips has become a national priority. (China's largest import by dollar value is semiconductors – larger than oil)
  • Today, both the U.S. and China are rapidly trying to decouple their semiconductor ecosystems from each other; China is pouring $100+ billion of government incentives into building Chinese fabs, while simultaneously trying to create indigenous supplies of wafer fab equipment and electronic design automation software
  • Over the last few decades the U.S. moved most of its fabs to Asia. Today we are incentivizing bringing fabs and chip production back to the U.S.

An industry that previously was only of interest to technologists is now one of the largest pieces in great power competition.

https://steveblank.com/

Also read:

Samsung Keynote at IEDM

TSMC Earnings – The Handoff from Mobile to HPC

Intel Discusses Scaling Innovations at IEDM

 


Podcast EP60: Knowing your bugs can make a big difference to elevate the quality of verification

Podcast EP60: Knowing your bugs can make a big difference to elevate the quality of verification
by Daniel Nenni on 02-04-2022 at 10:00 am

Dan is joined by Philippe Luc, director of verification at Codasip. Philippe has spent over 20 years in verification, including an extensive and successful career at Arm, where his significant engineering achievements included:

  – Designed and verified coherent caches for the first multiprocessor core from Arm (Cortex-A9)

  – Led development of the random test bench for L1 & L2 caches, used on most A- and R-class processors

  – Initiated and led the development of one of the major random generators used on all application processors

  – Verification lead of Cortex-A17 core

Today, Philippe leads Codasip’s growing verification team from France, a key part of Codasip’s increasingly global team. His mission is to focus on boosting the quality of RISC-V processor IP, and to do so efficiently. Dan explores why bug tracking is so important with Philippe and how the process can impact the quality of designs.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Codasip on SemiWiki.com


CEO Interviews: Kurt Busch, CEO of Syntiant

CEO Interviews: Kurt Busch, CEO of Syntiant
by Daniel Nenni on 02-04-2022 at 6:00 am

Syntiant Busch Headshot

Named Ernst & Young’s Entrepreneur of the Year® 2021 Pacific Southwest – Orange County, Kurt Busch is a tech industry veteran with extensive experience in product development, having driven the successful launch of new products, ranging from SaaS and semiconductors for telecom and broadcast video to consumer electronics and data center systems. Prior to founding Syntiant Corp., Busch was president, CEO and a board director at Lantronix (NASDAQ: LTRX), a global provider of secure data access and management solutions for Internet of Things (IoT) and information technology (IT). He is an engineering hall of fame inductee of the University of California at Irvine, where he earned bachelor’s degrees in electrical engineering and biological science. He also holds an MBA from Santa Clara University.

Can you tell us a little about Syntiant?

We founded Syntiant in 2017 with the idea of building a new kind of processor that would bring artificial intelligence to most any edge device. At the time, AI was the domain of cloud computing, and no one was thinking of putting significant deep learning processing into devices that operated at the edge. Today, we have shipped more than 20 million of our Neural Decision Processors worldwide, making edge AI a reality for always-on voice, sensor and image applications in a range of consumer and industrial use cases – free from cloud connectivity, ensuring privacy and security.

What is unique about the company and its product technology?

We designed our technology as a complete turnkey system by combining purpose-built silicon with an edge-optimized data platform and training pipeline. Syntiant’s devices typically offer more than a 100x efficiency improvement, while providing a greater than 10x increase in throughput over current low-power MCU solutions, enabling larger networks at significantly lower power. Using at-memory compute built in standard CMOS processes, Syntiant devices directly process neural network layers from platforms such as TensorFlow without the need for any secondary compilers, which shortens time to market and offers unprecedented performance for solutions that require under 1mW power consumption.

What industries are Syntiant addressing? 

Syntiant’s deep neural network processors are being designed into all kinds of end uses, from earbuds to automobiles. We are working with about 80 customers globally across market segments including consumer, medical and industrial IoT. Our NDP100 and NDP101 are being used for always-on voice applications, the NDP102 for sensor processing, the NDP120 for speech and sensor fusion, and the NDP200 for vision and image recognition. We went from just offering voice to an expanded product line that includes sensor, audio and image processing, as well as offering the data and training too, providing customers with low-cost, low-latency, end-to-end solutions that quickly deliver production-grade deep learning models in a variety of domains.

What problems/challenges are you solving?

We’re moving AI from the cloud to the edge. Production deep learning models require significant data and training expertise, as well as significant processing power. The lack of clean data, training expertise and sufficient processing power has created fundamental blockers for mass edge AI deployments. Syntiant has tackled these fundamental challenges. First, with custom silicon delivering best in class performance while still meeting size, power and cost constraints for massive edge deployments. Second, the ability to collect, clean, align and generate data for ML training, and lastly, providing a training pipeline, optimized for edge applications, that can go from raw data to production quality machine learning models in an economical manner.

What’s new?

There is a lot of discussion about the democratization of AI, enabling most anyone to utilize the benefits of machine learning and not just the big Internet companies. While we usually deal with large volume customers, we also want to expand the reach and availability of AI. That’s why we launched our new TinyML Development Board for building low-power voice, acoustic event detection and sensor ML applications. This collaboration with Edge Impulse now enables anyone, from individual developers and hardware engineers to small companies, to design, build and deploy highly accurate ML applications that respond to speech, sounds and motion with minimal power consumption. Whether it is for a wearable, an industrial product or even to assist people with disabilities, the possibilities are endless with our new TinyML board that provides a full solution for bringing the power of artificial intelligence to almost any device.

What’s next for AI at the edge?

We’ve just begun to scratch the surface on how AI will impact people’s everyday lives. Using Syntiant technology, devices can hear, speak, see and feel, making natural interfaces the path to the future. Advances in AI already are having a profound impact on many societal issues, including how voice technology can help those with disabilities and the elderly, as well as those in remote parts of the world with limited or no Internet access. As AI pervasiveness grows globally, so do myriad applications for public health like our collaboration with Canary Speech, a leader in the voice digital biomarker industry. Our joint deep learning solution enables real-time patient monitoring to detect health conditions such as Alzheimer’s disease, anxiety, depression, as well as a complex voice energy measurement. We’ve also seen AI play a big part in the industrial IoT landscape. Until now, predictive maintenance and condition-based monitoring usually has been done in the cloud. That said, we just announced a collaboration with Ceramic Speed for their Bearing Brain project, which moves prediction and forecasting down to the battery-powered sensor device itself to reduce or eliminate unforeseen maintenance costs. Our technology can continuously monitor sounds, vibrations and even temperature with minimal drain on power consumption, extending battery life by months or years, while improving performance, productivity and efficiency across a wide range of manufacturing applications.

Also read:

CEO Interview: Mo Faisal of Movellus

CEO Interview: Fares Mubarak of SPARK Microsystems

CEO Interview: Pradeep Vajram of AlphaICs


Waymo Collides with Transparency

Waymo Collides with Transparency
by Roger C. Lanctot on 02-03-2022 at 10:00 am

Waymo Collides with Transparency

Anyone looking to U.S. Transportation Secretary Pete Buttigieg to forthrightly assert a path-setting policy vision to guide autonomous vehicle development in the U.S. during his CES 2022 keynote was sorely disappointed. There was no guidance from the Secretary.

The issue has gained new urgency now that Waymo has sued the California Department of Motor Vehicles for allegedly sharing some Waymo-specific operational data with an unspecified inquiring third party. Outraged, Waymo is seeking an end to the sharing of its data relevant to how its autonomous vehicles operate or cope with specific circumstances.

Waymo complaint: https://www.courthousenews.com/wp-content/uploads/2022/01/waymo-calif-dmv-complaint.pdf

The lawsuit represents an important turning point in autonomous vehicle regulation. California lays claim to some of the most rigorous reporting requirements in relation to what is likely the largest group of licensed AV operators in the world.

The primary philosophy behind California’s autonomous vehicle regulatory policy is one of disclosure. Operators are obliged to report all disengagement events – where the safety driver has had to take over from the AV system. This, in turn, has created a battle among licensed operators to show the greatest distance traveled, on average, between disengagement events.
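The miles-per-disengagement ranking that this reporting battle produces is simple arithmetic; a minimal sketch, using entirely hypothetical operator figures (the real numbers come from the DMV's annual reports):

```python
def miles_per_disengagement(miles_driven: float, disengagements: int) -> float:
    """Average autonomous miles driven between safety-driver takeovers."""
    if disengagements == 0:
        return float("inf")  # no takeovers reported in the period
    return miles_driven / disengagements

# Hypothetical report data: (autonomous miles, disengagement events)
reports = {
    "Operator A": (620_000, 21),
    "Operator B": (80_000, 110),
}

# Rank operators by the metric (higher looks better in marketing)
for name, (miles, events) in sorted(
    reports.items(),
    key=lambda kv: miles_per_disengagement(*kv[1]),
    reverse=True,
):
    print(f"{name}: {miles_per_disengagement(miles, events):,.0f} mi/disengagement")
```

The metric's weakness, as the article notes, is visible in the code: nothing in it accounts for where or when the miles were driven, so easy routes at quiet hours inflate the number.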

Waymo has used California’s reporting framework as a marketing tool to advertise its performance advantages over the numerous competitors operating in the state. Observers have grown frustrated that the disengagement-centric system is skewing AV development priorities toward favorable operating environments, including choice of location and time of day.

What is missing in the California regulatory regime is a minimum set of performance requirements, standards, or tests that operators must meet to receive their operating license. The AV regulation is performance based only in retrospect: it calls for mitigation in the event of failures, for which the State seeks the very functional disclosures that have allegedly been shared.

Ironically, since each licensed operator is generally pursuing its own bespoke path to autonomous operation it is unclear that any could benefit from learning about specific corrective measures that any other operator might have taken. All operators are presumably using similar mathematics, but each is using a unique portfolio of sensors and each has its own philosophical approach to writing its AV code.

The lawsuit highlights the lack of an adequate performance-based licensing or regulatory regime for AV operation on public roads. Each of the 50 U.S. states has pursued its own unique approach – as have countries around the world.

The U.S. came close to establishing an AV regulatory regime at the end of the Obama administration, but fell short after unresolved disputes emerged over the number of AVs that would be exempted from Federal Motor Vehicle Safety Standards requirements such as brake pedals and steering wheels.

It is fairly clear that the Federal government is not in a position to establish a single path to autonomous operation. In this regard it is worth noting that the first AV operator to be granted an FMVSS waiver was Nuro – the maker of delivery bots.

What might work, as part of a process of setting AV operational standards, would be a series of operational tests that AV prototypes will have to pass – such as recognizing and responding to obstacles and other vehicles. Such an approach can be calibrated to establish some basic performance characteristics without giving an advantage to any particular operator or strategic approach.

It is worth noting that in the current global environment characterized by the existing regulatory vacuum, Mobileye, alone, has a unique advantage in putting forth its Responsibility-Sensitive Safety (RSS) framework. Mobileye says RSS “has advanced its way into both IEEE and ISO standards efforts recently.  Intel Senior Principal Engineer and Mobileye VP of Automated Vehicle Standards Jack Weast is chairing the IEEE effort to adopt a formal technical standard known as IEEE P2846: A Formal Model for Safety Considerations in Automated Vehicle Decision Making.”

Alone among operators, Mobileye is working to turn transparency into a competitive advantage. No competing operator has yet come forward to offer an equivalent vision – though Nvidia tried, and failed, with its Safety Force Field (SFF) alternative, which was quickly set aside.

While Mobileye touts RSS, competitors are left with smoke and mirrors. And Waymo clearly wants to keep that smoke and those mirrors in place – resisting requirements that it share elements of its disengagement mitigation. Waymo may be getting something of a comeuppance in California, where General Motors’ Cruise may report some exceptionally low disengagement figures – surpassing even Waymo – after operating exclusively at night.

It’s time for U.S. regulators to put forward some minimum performance requirements. The U.S. DOT’s National Highway Traffic Safety Administration has spent decades crashing cars. Isn’t it about time they started figuring out how to prevent cars from crashing in the first place?

I think it is. The Waymo lawsuit is a sign of the times and the time has come for change. The framework for regulation should be less focused on disclosure than it is on performance testing. Regulators should define the objectives and measure and monitor their achievement – anything less is an abdication of responsibility.

Also read:

Apple and OnStar: Privacy vs. Emergency Response

Musk: Colossus of Roads, with Achilles’​ Heel

RedCap Will Accelerate 5G for IoT


Why It’s Critical to Design in Security Early to Protect Automotive Systems from Hackers

Why It’s Critical to Design in Security Early to Protect Automotive Systems from Hackers
by Mike Borza on 02-03-2022 at 6:00 am

Figure 2 Automotive Security Diagram

Remember when a pair of ethical hackers remotely took over a Jeep Cherokee as it was being driven on a highway near downtown St. Louis back in 2015? The back story is, those “hackers,” security researchers Charlie Miller and Chris Valasek, approached vehicle manufacturers several years before their high-profile feat, warning of the risks that security vulnerabilities posed for cars. However, manufacturers at the time didn’t consider cars to be targets for cyberattacks.

With the amount of hardware and software content enabling greater automation, vehicles actually have many potential points of security vulnerability—much like many of our other smart, connected IoT devices. Let’s take a look at key automotive areas that should be protected, why it’s important to keep security in mind starting early in the design cycle, and how you can protect the full car from bumper to bumper.

ECUs: Irresistible to Hackers

We can start our discussion with electronic control units (ECUs), the embedded systems in automotive electronics that control the electrical systems or subsystems in vehicles. It’s not uncommon for modern vehicles to have upwards of 100 ECUs running functions as varied as fuel injection, temperature control, braking, and object detection. Traditionally, ECUs were designed without the requirement that they validate the entities with which they communicate; instead, they simply accepted commands from and shared information with any entity on the same wiring bus. Vehicle networks were not considered to be communications networks in the sense of, say, the internet. However, this misconception has created the biggest vulnerability.

Going back to the Jeep hack, Miller and Valasek set out to demonstrate how readily ECUs could be attacked. First, they exploited a vulnerability in the software on a radio processor via the cellular network, then moved on to the infotainment system, and, finally, targeted the ECUs to affect braking and steering. That was enough to get the automotive industry to start paying more attention to cybersecurity.

Today, it’s common for ECUs to be designed with gateways, so that only those devices that ought to be talking to each other are doing so. This presents a much better approach than having a wide-open network in the vehicle.

How Infotainment Systems Can Be Exploited

In addition to ECUs, cars can include other vulnerabilities that can allow a bad actor to hopscotch from one device inside the vehicle to another. Consider the infotainment system, which is connected to cellular networks for activities such as:

  • Firmware updates to cars from vehicle manufacturers
  • Location-based roadside assistance and remote vehicle diagnostic services
  • Increasingly in the future, vehicle-to-vehicle and vehicle-to-everything functions

The thing is, infotainment systems also tend to be connected to various critical vehicle systems to provide drivers with operational data, such as engine performance information, as well as to controls, ranging from climate control and navigation systems to those that tie in to driving functions. Infotainment systems also increasingly have some level of integration with the dashboard—with modern dashboards becoming a component of the infotainment display. Given all the connections that exist in this vehicle subsystem and the powerful, full-featured software on them that performs these functions, it is probable that someone will exploit a vulnerability to hack into them.

Safeguarding In-Vehicle Networks

To prevent such attacks, it’s important to apply physical or logical access controls on what type of information gets exchanged between more and less privileged subsystems of the network. To ensure that communications are authentic, it is also critical for in-vehicle networks to tap into the security experience gained over the past 30 years in the networking world by combining strong cryptography with strong identification and authentication. All these measures should be planned early in the design cycle to provide a robust security foundation for the system. Doing so early is less labor intensive, less costly, and more effectively scrutinized for residual risk than incorporating security measures piecemeal to address problems that emerge later.

The increasing popularity of Ethernet for in-vehicle networks is a positive development. Ethernet comes with some cost savings and some powerful networking paradigms that support the speeds needed for applications like advanced driver assistance systems (ADAS) and autonomous driving, as well as increasingly capable infotainment systems. Part of the Ethernet standard provides for devices identifying themselves and proving their identity before they are allowed to join the network and perform any critical functions.

NHTSA Automotive Cybersecurity Best Practices

The National Highway Traffic Safety Administration (NHTSA) suggests a multilayered automotive cybersecurity approach, which better represents the in-vehicle system as a network of connected subsystems that may each be vulnerable to cyberattack. In its updated cybersecurity best practices report released this month, NHTSA provides various recommendations regarding fundamental vehicle cybersecurity protections. Many of these would seem to be common-sense practices for development of critical systems, but these practices have been (and even continue to be) surprisingly absent from many. Among the suggestions for a more cyber-aware posture:

  • Limit developer/debugging access in production devices. An ECU could potentially be accessed via an open debugging port or through a serial console, and often this access is at a privileged level of operation. If developer-level access is needed in production devices, then debugging and test interfaces should be appropriately protected to require authorization of privileged users.
  • Protect cryptographic keys and other secrets. Any cryptographic keys or passwords that can provide an unauthorized, elevated level of access to vehicle computing platforms should be protected from disclosure. Any key from a single vehicle’s computing platform shouldn’t provide access to multiple vehicles. This implies that a careful key management strategy based on unique keys and other secrets in each vehicle, and even subsystem, is needed.
  • Control vehicle maintenance diagnostic access. As much as possible, limit diagnostic features to a specific mode of vehicle operation needed to accomplish the intended purpose of the associated feature. Design such features to eliminate or minimize potentially dangerous ramifications should they be misused or abused.
  • Control access to firmware. Employ good security coding practices and use tools that support security outcomes in their development processes.
  • Limit ability to modify firmware, including critical data. Limiting the ability to modify firmware makes it more challenging for bad actors to install malware on vehicles.
  • Control internal vehicle communications. Where possible, avoid sending safety signals as messages on common data buses. If such safety information must be passed across a communication bus, the information should reside on communication buses that are segmented from any vehicle ECUs with external network interfaces. For critical safety messages, apply a message authentication scheme to limit the possibility of message spoofing.
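Two of these recommendations – unique per-vehicle keys and message authentication on internal buses – can be sketched together. This is an illustrative sketch only, not a production automotive design (real systems use standards such as AUTOSAR SecOC); the names `fleet_master_key` and the VIN-based identifier are assumptions for the example:

```python
import hmac
import hashlib

def derive_vehicle_key(fleet_master_key: bytes, vehicle_id: str) -> bytes:
    """Derive a unique key per vehicle, so a key extracted from one
    vehicle cannot authenticate messages on any other vehicle."""
    return hmac.new(fleet_master_key, vehicle_id.encode(), hashlib.sha256).digest()

def tag_message(key: bytes, payload: bytes) -> bytes:
    """Append a truncated HMAC tag to a safety-critical bus message."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return payload + tag

def verify_message(key: bytes, frame: bytes) -> bool:
    """Reject frames whose tag does not verify (possible spoofing)."""
    payload, tag = frame[:-8], frame[-8:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expected)

master = b"fleet-master-secret"  # hypothetical fleet-wide secret
key = derive_vehicle_key(master, "VIN-1234")

frame = tag_message(key, b"BRAKE:apply")
print(verify_message(key, frame))  # True

# An attacker who alters the payload cannot produce a valid tag
tampered = bytes([frame[0] ^ 1]) + frame[1:]
print(verify_message(key, tampered))  # False
```

In a real vehicle the tag length, key hierarchy, and replay protection (e.g. message counters) are dictated by the bus bandwidth and the applicable standard; the point here is only that spoofed safety messages fail verification.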

The NHTSA cybersecurity best practices report provides a good starting point to fortify automotive applications. However, it is neither a recipe book, nor is it comprehensive. NHTSA also recommends that the industry follow the National Institute of Standards and Technology’s (NIST’s) Cybersecurity Framework, which advises on developing layered cybersecurity protections for vehicles based around five principal functions: identify, protect, detect, respond, and recover. In addition, standards such as ISO SAE 21434 Cybersecurity of Road Vehicles, which in some ways parallels the ISO 26262 functional safety standard, also provide important direction.

Helping You Secure Your Automotive SoC Designs

Vehicle manufacturers have differing levels of in-house cybersecurity expertise. Some still opt to add a layer of security to their automotive designs near the end of the design process; however, waiting until a design is almost completed can leave points of vulnerability unaddressed and open to attack. Designing security in from the foundation can avoid creating vulnerable systems (see the figure below for a depiction of the layers of security needed to protect an automotive SoC). Moreover, it’s also important to ensure that the security will last as long as vehicles are on the road (11 years, on average).

Layers of security needed to protect an automotive SoC.

With our long history of supporting automotive SoC designs, Synopsys can help you develop the strategy and architecture to implement a higher level of security in your designs. In addition to our technical expertise, our relevant solutions in this area include:

Connected cars are part of this mix of things that should be made more resilient and hardened against attacks. While functional safety has become a familiar focus area for the industry, it’s time for cybersecurity to be part of the early planning for automotive silicon and systems, too. After all, you can’t have a safe car if it is not also secure.

To learn more visit Synopsys DesignWare Security IP.

Also read:

Identity and Data Encryption for PCIe and CXL Security

High-Performance Natural Language Processing (NLP) in Constrained Embedded Systems

Lecture Series: Designing a Time Interleaved ADC for 5G Automotive Applications


Are We Headed for a Semiconductor Crash?

Are We Headed for a Semiconductor Crash?
by Daniel Nenni on 02-02-2022 at 6:00 am

Malcolm Penn Webinar 2022

COVID was certainly a black swan event, but semiconductors have seen similar events over the past 50 years, some of which I have experienced personally. The Dot-com bubble comes to mind, but there were others. The question is: will history repeat itself? The answer, according to Malcolm Penn of Future Horizons, is yes.

Malcolm is a longtime friend, colleague, and one of my trusted few. I used to attend the live version of his Annual Industry Update and Forecast here in Silicon Valley but now it is virtual like everything else semiconductor. Malcolm has also been a guest on our Semiconductor Insiders Podcast: Podcast EP40: The Semiconductor Supply Chain and the Real Cause of Semiconductor Shortages.

For the 2022 update Malcolm spent an hour covering 33 slides in great detail, including his previous high-end prediction of a 24% increase in semiconductor revenue for 2021. It ended up closer to 26%, but he was closest (I was at 10-15%). The most important part of the presentation to me was the historical look at the semiconductor industry and his prediction for 2022. Spoiler alert: a crash may be coming.

You can get his complete slide deck HERE for 150 GBP, which is quite the deal if you consider the time invested. A highlights reel is at the bottom of this page.

If you look at his opening slide you can see the historical ups and downs, including the 2000 Dot-com bubble I mentioned earlier. One of his slides shows the previous up and down turns since 1961 in more detail. While the bust of 2019 and boom of 2021 don’t quite measure up to the Dot-com cycle, they are still significant. This sets up the Perfect Storm slide (#6 in the deck), and after a decade of single digit growth you really have to wonder.

Malcolm also mentioned EDA but let’s look at that in more detail. EDA is also a single digit growth industry but lately, as you have read on SemiWiki, EDA growth has been booming with double digit growth.

ESDA Reports Double-Digit Q3 2021 YOY Growth and EDA Finally Gets the Respect it Deserves

ESD Alliance Reports Double-Digit Growth – The Hits Just Keep Coming

Is EDA Growth Unstoppable?

The reasoning is twofold: First and foremost, systems companies are rushing to do their own chips and this now includes automobile companies. The chip shortage is a big driver but the increasing software burden of the systems companies is a close second. Automated cars now include millions of lines of code and this code can be developed and optimized in parallel with chip design. The smart phone companies figured this out a long time ago when Apple and others started doing their own SoC chips.

Second, venture capital has been pouring into the chip sector at record rates. AI is a big driver for startup chip companies, and electric vehicles are another bubble waiting to pop. Last I heard, there were 300 companies developing AV/EV related products. Again, déjà vu: the Dot-com bubble.

Malcolm then moves on to the key drivers, their impact, and their roles. The key drivers are the Economy, Unit Demand, Capacity, and ASPs. Malcolm goes into detail, but I will make an additional comment on capacity.

We have capacity; that has never been the problem. Utilizing that capacity is another story. For example, in 2019 TSMC saw a -7% downtick in automotive chips and another -7% in 2020. That is why the car companies did not have enough chips: they cancelled orders. In 2021 TSMC saw a 51% uptick in automotive, and 2022 will probably be the same since inventories are building like never before.

But the chip shortage narrative continues, and so does the CAPEX contest between Intel, TSMC, and Samsung. The biggest difference is that TSMC builds capacity based on customer orders with some big prepayments, and the IDM foundries do not. TSMC is also building big capacity for Intel, which complicates things a bit. The one saving grace is that the equipment companies are going to have a difficult time equipping all of these new fabs given the supply chain issues they are suffering, especially ASML with EUV. There is no way they will be able to outfit all of the new leading-edge logic and memory fabs that are in the press-release phase.

Malcolm continues with his agenda and has us approaching the top of a rollercoaster. He does not show how steep the drop-off is, but he is convinced it is coming, and he has a nice graphic for that one as well. His forecast for 2022 is 4% at the low end and 14% at the high end. I'm a bit more optimistic with a 10-15% industry forecast, with TSMC again hitting a 20%+ growth rate.

Malcolm finished with the key takeaways and Q&A. For me it's all about the supply chain, which Malcolm covers in detail. When the dust settles and COVID is under control, we will see a much stronger supply chain that will not be schooled again, just my opinion of course.

Also read:

The Roots Of Silicon Valley

The Semiconductor Ecosystem Explained

Are We Headed for a Semiconductor Crash?