
Expert Advice for the New Intel CEO

by Daniel Nenni on 02-15-2021 at 10:00 am


Intel is a semiconductor legend. Founded on July 18, 1968, the name Intel is short for Integrated Electronics. After leading Silicon Valley, the United States, and the world into the era of semiconductors through technical excellence, Intel has hit some challenging times. There has been quite a bit of CEO drama, which we will look at, but the root cause is the delay of new process technologies. After dominating semiconductor manufacturing for most of its corporate life, Intel has fallen behind Samsung and TSMC. I grew up with Intel here in Silicon Valley and it pains me to see this. The question is: can Intel regain the lead?

First the CEO drama:

After almost 40 years of technical leadership:
Robert Noyce (Ph.D. in physics, Massachusetts Institute of Technology)
Gordon Moore (Ph.D. in chemistry and physics, California Institute of Technology)
Andrew Grove (Ph.D. in chemical engineering, University of California, Berkeley)
Craig Barrett (Ph.D. in materials science, Stanford University)

Less technical CEOs followed:
Paul Otellini (MBA, University of California, Berkeley)
Brian M. Krzanich (BS in chemistry, San Jose State University)
Bob Swan (MBA, Binghamton University)

Now Intel has Pat Gelsinger (MS in EE & CS, Stanford University), with 30+ years at Intel working under Gordon Moore and Andy Grove, so some say, “Problem solved,” and I might agree.

The challenge I see now is that Intel is still a very top-heavy company (too many managers/MBAs) living in the past. Anyone who thinks an IDM can compete head-to-head with the foundries and their respective ecosystems of worldwide customers, partners, and suppliers is dead wrong. Do you remember when Intel said, “It is the beginning of the end for the fabless model” in 2012? I certainly do.

Moving forward, it’s VERY important that Intel change the rules of engagement to better compete with this new fast-paced fabless model.

So, today Pat takes the helm at Intel and here is the advice that I offer him for his first 100 days:

  1. Streamline the decision-making processes inside Intel. Yes, this means layoffs and re-orgs, but it has to be done. Intel needs to be optimized for a fast-paced, ultra-competitive semiconductor marketplace.
  2. Bring transparency to Intel. No more surprises. When you surprise us with delays and technical challenges, we doubt Intel. And there are no secrets in the semiconductor ecosystem. We now know the truth behind the 10nm delays so be transparent and earn the respect and trust that Intel deserves.
  3. Take a leadership position in process node naming. Read the blog by Scott Jones on “Equivalent Nodes” (EN) and bring technology back to node naming.
  4. Engage TSMC in an exclusive relationship and outsource power- and price-sensitive chips. I would also give the FPGA business back to TSMC to better compete with AMD/Xilinx. Stick with your core manufacturing competency and focus the Intel fabs on high-performance, high-margin CPUs. As they say: keep your friends close, keep your competition closer.
  5. Rid yourself of non-core competency business. Mobileye and the other distractions must go.
  6. Be a leader in the semiconductor ecosystem and not an outsider or follower. Roll up your corporate sleeves, get to work, and play well with others.

Bottom line: make Intel synonymous with innovation and leadership again and get back on top of the semiconductor leaderboard where Intel belongs, absolutely!


Intel Node Names

by Scotten Jones on 02-15-2021 at 6:00 am


There is a lot of interest right now in how Intel compares to the leading foundries and what the future may hold.

Several years ago, I published several extremely popular articles converting processes from various companies to “Equivalent Nodes” (EN). Nodes were at one time based on actual physical features of processes but had become uncoupled from those features, turning into a “marketing number”.

My original articles were based on some work ASML did and allowed me to extend and publish. Basically, they plotted node versus Contacted Poly Pitch (CPP) multiplied by Minimum Metal Pitch (MMP) for all the leading logic producers and came up with a curve fit that could be used to assign node numbers to processes using a consistent methodology. The problem with the original method of calculating EN is that scaling began to transition to include Track Height (TH) and single versus double diffusion breaks. I eventually adopted transistor density in millions of transistors per square millimeter, using a weighted average of 60% two-input NAND cells and 40% scan flip-flop cells based on an Intel metric. The resulting number more completely captures logic scaling but differs from the node numbers people are used to.
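As a sketch, the weighted-density metric described above reduces to a simple formula. The cell densities in the example call are made-up numbers for illustration, not figures from any real process:

```python
# Weighted logic transistor density (millions of transistors per mm^2),
# per the metric described above: 60% two-input NAND cells blended
# with 40% scan flip-flop cells.
def weighted_density(nand2_mtr_mm2: float, sff_mtr_mm2: float) -> float:
    """Return the blended density in MTr/mm^2."""
    return 0.6 * nand2_mtr_mm2 + 0.4 * sff_mtr_mm2

# Illustrative (invented) cell densities for a hypothetical process:
print(weighted_density(120.0, 80.0))  # 0.6*120 + 0.4*80 = 104.0
```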

If you look at the leading-edge logic landscape today, there are two foundries, Samsung and TSMC, and one IDM, Intel, still pursuing the state of the art in logic. The foundries are following a “foundry” node roadmap of 65nm, 40nm, 28nm, 20nm, 16nm/14nm, 10nm, 7nm, 5nm, 3nm. Intel, on the other hand, has stayed with a more classic node sequence of 65nm, 45nm, 32nm, 22nm, 14nm, 10nm, 7nm, 5nm. Furthermore, because the scaling from node to node is generally larger for Intel than for the foundries, the node names no longer align.

While considering this situation the other day, it occurred to me that I could resurrect EN by plotting node versus transistor density. I decided to use TSMC as the baseline since they are the clear logic density leader: I took TSMC’s nodes from 28nm to a projected 1.5nm, plotted the nodes versus transistor density, and fitted a curve, see Figure 1.

 

Figure 1. TSMC Nodes Versus Transistor Density.

The curve fit in Figure 1 has an excellent R-squared value of 0.9879. Using the equation for the curve fit, I can take Intel nodes and generate node numbers based on TSMC’s node scaling (EN).
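A minimal sketch of the methodology: fit a power law to a baseline (node, density) roadmap in log-log space, then invert the fit to assign an EN to any other company’s density. The data points below are illustrative placeholders, not the actual TSMC figures behind Figure 1:

```python
import math

# Sketch of the "Equivalent Node" (EN) idea: fit a power law,
# density = a * node^b, to a baseline roadmap in log-log space,
# then invert it to assign a node number to any density.
# The (node_nm, MTr/mm^2) pairs below are ILLUSTRATIVE placeholders,
# not the actual TSMC data behind Figure 1.
baseline = [(16, 28.0), (10, 52.0), (7, 91.0), (5, 171.0), (3, 292.0)]

xs = [math.log(node) for node, _ in baseline]
ys = [math.log(dens) for _, dens in baseline]
n = len(baseline)

# Ordinary least squares on the log-log points: slope b, intercept ln(a).
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
a = math.exp((sum(ys) - b * sum(xs)) / n)

def equivalent_node(density_mtr_mm2: float) -> float:
    """Invert the fit: map a transistor density onto the baseline node scale."""
    return (density_mtr_mm2 / a) ** (1.0 / b)
```

Feeding another company’s densities through `equivalent_node` is exactly how the EN values discussed below are derived, just with the real fitted curve in place of these placeholder points.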

TSMC has announced timing and density improvements through the 3nm node. Assuming TSMC stays on a two-year cadence for new nodes and continues to produce shrinks per generation like the 5nm and 3nm nodes, we can project transistor density versus node out to 1.5nm.

Intel has provided guidance on 7nm timing and density improvements. We then assume Intel gets back on a two-year cadence with 2x shrinks (the same as 7nm) and project transistor density for Intel. I should note here that Intel took 3 years to get 14nm into production, 5 years to get 10nm into production, and is now heading towards 3 to 4 years for 7nm. I would therefore view this as an aggressive roadmap for Intel.

Figure 2 provides our roadmap for node by year for TSMC and node and EN by year for Intel.

Figure 2. TSMC and Intel Node Roadmaps.

We are projecting that Intel’s 7nm node will have an EN value of 4.1nm (intermediate between TSMC’s 5nm and 3nm nodes), the Intel 5nm node will have an EN value of 2.4nm (intermediate between TSMC’s 3nm and 2nm nodes), and if Intel stays with a 2x-per-generation shrink, the Intel 3nm node could have an EN value of 1.3nm, or slightly better than TSMC’s 1.5nm. This of course presupposes Intel can execute 2x shrinks at a much faster pace than in the past.

This roadmap for Intel, while aggressive, still leaves them playing catch-up versus TSMC until at least mid-decade.

This roadmap is purely density based and Intel products generally require higher performance than most of TSMC’s customers. As best as we can benchmark Intel versus TSMC processes for performance, we believe Intel 10SF is competitive with TSMC 7nm. I would expect Intel 7nm to be competitive with TSMC 3nm and Intel 5nm to be competitive with TSMC 2nm.

If Intel is reading this, I would suggest they could do everyone a favor and rename 7nm to 4nm and 5nm to 2.5nm so the names are more consistent with how the processes actually compare to the other logic leaders.

In conclusion, this analysis provides a way to convert Intel nodes into equivalent TSMC nodes and provides roadmaps for both companies into the late 2020s. Even with aggressive execution, Intel will likely be at best competitive with TSMC, and likely trailing them until mid-decade even under the best-case scenario.

Also Read:

ISS 2021 – Scotten W. Jones – Logic Leadership in the PPAC era

IEDM 2020 – Imec Plenary talk

No Intel and Samsung are not passing TSMC


How SerDes Became Key IP for Semiconductor Systems

by Eric Esteve on 02-14-2021 at 10:00 am


We have seen that the interface IP category has shown an incredibly high growth rate over the last two decades, and we expect this category to generate an ongoing high source of IP revenues for at least another decade. But if we dig into the various successful protocols like PCI Express, Ethernet, or USB, we can detect a common function in the physical (PHY) layer: the Serializer/Deserializer (SerDes).

In 1998, advanced interconnects used in telecom applications were based on 622 MHz LVDS I/O. Telecom chip makers were building huge chips integrating 256 LVDS I/Os running at 622 MHz to support networking fabrics. Today, advanced PAM4 SerDes run at 112 Gbps over a single connection to support 100G Ethernet. In twenty years, SerDes speed has jumped by a factor of 180! Compare this with CPU technology: in 1998, Intel released the Pentium II Dixon processor, running at 300 MHz; by 2018, an Intel Core i3 ran at 4 GHz. CPU frequencies have grown by a factor of roughly 13 over a span of twenty years, while SerDes speeds have exploded by a factor of 180.

SerDes are now used in many more applications than just telecom, to interface chips and systems. At the end of the 2000s, smartphones integrated USB3, SATA, and HDMI interfaces, while telecom and PC/server chips integrated both PCIe and Ethernet. These trends caused the interface IP market to become a sizeable IP category, growing above $200 million at that time. That was small compared to the CPU category, which was four or five times larger. But since 2010, the interface category has grown at least 15% year over year, making it the fastest-growing category compared with all other semiconductor IP categories, such as CPU, GPU, DSP, library, etc. The reason is directly linked to the number of connected devices growing every year, each exchanging more data (more movies, pictures, etc.). Connectivity is the beginning of the communication chain, from the device to the internet modem or base station, the Ethernet switch, and the datacenter network.

Figure 1: Long Term Ethernet Switch Forecast (source: Dell’Oro)

During the 2010s the worldwide community became almost completely connected. Ethernet became the backbone of this connectivity as both the connectivity rates and the number of datacenters quickly increased over the decade. If we use SerDes rates as an indicator: 10 Gbps in 2010, 28 Gbps in 2013, 56 Gbps in 2016 (supporting 10G, 25G, and 50G Ethernet, respectively), and 112 Gbps in 2019.

Then, in 2017, exploding high-speed connectivity needs for emerging data-intensive compute applications such as machine learning and neural networks started to appear, adding to the growing need for high-bandwidth connectivity. At the same time, analog mixed-signal architectures, which had been the norm for SerDes design since its inception, became extremely difficult to manage and much more sensitive to process, voltage, and temperature variations, due to the evolution of CMOS technology toward advanced FinFET. In modern nanometer FinFET technologies, the dimensions of the transistors are so tiny that designers are effectively working with handfuls of individual electrons. Thus, the construction of precise analog circuits that can sustain stressful environmental variations is extremely difficult.

But the positive point with an advanced technology like 7nm is that you can integrate an incredible number of transistors per square millimeter (a density of 100 million transistors per sq. mm), so it’s now possible to develop new digital-based architectures leveraging Digital Signal Processing (DSP) to do the vast majority of the physical-layer work. A DSP-based architecture enables the use of higher-order Pulse Amplitude Modulation (PAM) schemes compared to the Non-Return-to-Zero (NRZ, or PAM-2) signaling used by previous analog mixed-signal approaches. PAM-4 doubles the data throughput of a channel without having to increase the bandwidth of the channel itself. As an example, a channel with 28 GHz of bandwidth can support a maximum data throughput of 56 Gbps using NRZ signaling. With PAM-4 DSP techniques, this same 28 GHz bandwidth channel can support a data rate of 112 Gbps! When you consider that analog SerDes architectures are limited to a maximum of 56 Gbps for physical reasons (and maybe less), DSP SerDes are the approach to scale rates to 200 Gbps and beyond, with the use of more sophisticated modulation schemes (e.g., PAM-6 or PAM-8).
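The NRZ-versus-PAM-4 arithmetic above is just bits-per-symbol times symbol rate; a quick sketch:

```python
import math

# Serial-link data rate: symbol (baud) rate times bits per symbol.
# An N-level PAM scheme carries log2(N) bits per symbol, so PAM-4
# doubles throughput over NRZ (PAM-2) at the same baud rate.
def data_rate_gbps(baud_gbaud: float, pam_levels: int) -> float:
    return baud_gbaud * math.log2(pam_levels)

# A ~28 GHz-bandwidth channel supporting ~56 Gbaud signaling:
print(data_rate_gbps(56, 2))  # NRZ (PAM-2) -> 56.0 Gbps
print(data_rate_gbps(56, 4))  # PAM-4      -> 112.0 Gbps
print(data_rate_gbps(56, 8))  # PAM-8      -> 168.0 Gbps at the same baud rate
```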

Using DSP-based SerDes is not only required for building robust interfaces in FinFET technologies; it is also the only way to double data rates above 56 Gbps, e.g., 112 Gbps with PAM-4 or 200 Gbps with PAM-8. And this need for more bandwidth is driven by emerging data-intensive applications like AI (to interconnect CPUs and accelerators) and ADAS, and by the data-centric trend of the connected human community, expected to grow steadily over the next decade.

Figure 2: Top 5 Interface IP Forecast & CAGR (source: IPnest 2020)

In the “Interface IP Survey,” IPnest has ranked the market share of IP vendor revenues by protocol since 2009. In the 2020 version of the report, we showed that the interface IP category will have a 15% CAGR from 2020-2024 to reach $1.57 billion, as shown in Figure 2. This is a wide IP market, including PCIe, Ethernet, and SerDes as well as USB, MIPI, HDMI, SATA, and memory controller IP. In 2019, Synopsys was the strong leader with 53% market share of the estimated $870 million IP market, followed by Cadence with 12%. Both EDA companies have defined a one-stop-shop business model addressing the mainstream market. This strategy is successful for these large companies as it targets a wide portion of various segments (smartphone, consumer, automotive, or datacenter), but not the most demanding high-end portion of those segments.

Nevertheless, another strategy can be successful in the IP market: strongly focus on one segment of the market (e.g., high-end) and provide the best experience to very demanding hyperscaler customers. If you can build an excellent engineering team able to develop top-quality products on the most advanced technologies, focusing on the high end of the market, the resulting business model can be rewarding.

We have seen that SerDes IP is the key to the interface IP market. Furthermore, if we concentrate on the PCIe and Ethernet protocols, Figure 3 illustrates the IP revenue forecast for 2020-2025, limited to high-end PCIe (Gen 5 and Gen 6) and high-end Ethernet (PHYs based on 56G, 112G, and 224G SerDes), including the D2D protocol for a reason that will be described shortly.

 

Figure 3: High-End Interface IP Forecast & CAGR (source: IPnest 2021)

This high-end interface IP forecast shows a 28% CAGR from 2020-2025 (compared with 15% for the total interface IP market) and a TAM of $806 million in 2025. One young company has demonstrated strong leadership in this high-end interface IP segment, thanks to its focus on high-end SerDes (112G since 2017 and soon 200G) targeting the most advanced technology nodes (7nm in 2017, then 5nm in 2019) offered by the two leading foundries, TSMC and Samsung. Alphawave, founded in 2017, is rumored to have booked $75 million in orders in 2020, thanks to its positioning targeting the most advanced rates and applications in the high-end segment of PCIe and Ethernet. In this portion of the market, they enjoyed 28% market share in 2019 and 36% in 2020. If Alphawave can keep their lead in the high-end SerDes market, it’s not unrealistic to foresee $300-400 million in IP revenues by 2024-2025!

Since 2019, a new sub-segment, the D2D interface, has emerged and is expected to grow at a 46% CAGR from 2020-2024. By definition, D2D protocols are used between two chips or dies within a common silicon package. Briefly, we consider two cases for D2D: i) disintegration of the master SoC, to keep the SoC area from hurting yield or exceeding the maximum reticle size, or ii) SoC interconnect with a “service” chiplet (an I/O chip, FPGA, accelerator, etc.).

At this point (February 2021), there are several protocols in use, with the industry trying to build formalized standards for many of them. Current leading D2D standards include i) the Advanced Interface Bus (AIB, AIB2), initially defined by Intel, which has offered royalty-free usage; ii) High Bandwidth Memory (HBM), where DRAM dies are stacked on each other on top of a silicon interposer and connected using TSVs; and iii) two interfaces defined by the Open Domain-Specific Architecture (ODSA) subgroup, an industry group: Bunch of Wires (BoW) and OpenHBI. All of these D2D standards are based on a DDR-like protocol, a parallel group of single-ended data wires accompanied by a forwarded clock, currently operating in the 2 GHz to 4 GHz range. By using literally hundreds of parallel wires over very short distances, these interfaces compete with very-short-reach SerDes NRZ, usually defined around 40 Gbps, while offering a strong advantage: much lower latency and lower power consumption compared to SerDes.
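To see why these parallel DDR-style interfaces can compete with SerDes over short reach, consider the aggregate-bandwidth arithmetic. The wire count and clock below are illustrative assumptions, in line with the “hundreds of wires at 2-4 GHz” description above:

```python
# Aggregate bandwidth of a parallel D2D bus: each single-ended wire
# transfers one bit per clock edge, so a DDR scheme moves 2 bits per
# wire per forwarded-clock cycle.
def d2d_bandwidth_gbps(wires: int, clock_ghz: float, ddr: bool = True) -> float:
    return wires * clock_ghz * (2 if ddr else 1)

# e.g. 256 wires with a 4 GHz forwarded clock, double data rate:
print(d2d_bandwidth_gbps(256, 4.0))  # 2048.0 Gbps aggregate
```

Matching that aggregate with ~40 Gbps NRZ SerDes lanes would take dozens of lanes, each adding serialization latency and power, which is the trade-off described above.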

There is now consensus in the industry that a maniacal focus on Moore’s Law is no longer valid at advanced technology nodes, e.g., 7nm and below. Chip integration is still happening, with more transistors being added per sq. mm at every new technology node. However, the cost per transistor is rising with every new node. Chiplet technology is a key initiative to drive increased integration for the main SoC while using older mainstream nodes for service chiplets. This hybrid strategy decreases both the cost and the design risk associated with integrating the service IP directly into the main SoC. IPnest believes this trend will have two main effects on the interface IP business: strong growth of D2D IP revenues soon (2021-2025), and the creation of a heterogeneous chiplet market to augment the high-end SerDes IP market.

We have forecasted the growth of the D2D interface IP category for 2020-2025, growing from less than $10 million in 2020 to $171 million in 2025 (87% CAGR). This forecast is based on the assumption that the service chiplet market will explode in 2023, when most advanced SoCs will be designed in 3nm. This will make integration of high-end IP like SerDes far too risky, leading to externalizing this functionality into a chiplet designed in a more mature node like 7nm or 5nm. While interface IP vendors will be major actors in this revolution, the silicon foundries addressing the most advanced nodes, like TSMC and Samsung, and manufacturing the main SoCs will have a key role. We don’t think they will design chiplets, but they could decide to support IP vendors and push them to design chiplets to be used with SoCs in 3nm, as they do today when supporting advanced IP vendors marketing their high-end SerDes as hard IP in 7nm and 5nm. Intel’s recent move toward third-party foundries is expected to also leverage third-party IP, as well as heterogeneous chiplet adoption by the semiconductor heavyweight. In that case, no doubt hyperscalers like Microsoft, Amazon, and Google will also adopt chiplet architectures, if they don’t precede Intel in chiplet adoption.
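As a sanity check on the quoted growth rate, the standard CAGR formula is (end/start)^(1/years) - 1. The 2020 starting value below is an assumed figure consistent with “less than $10 million,” not a number from the forecast itself:

```python
# Compound annual growth rate over a forecast window.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1.0 / years) - 1.0

# Assumed 2020 base of $7.4M (the text only says "less than $10 million")
# growing to the forecast $171M in 2025, i.e. over 5 years:
print(round(cagr(7.4, 171.0, 5) * 100, 1))  # ~87% per year
```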

By Eric Esteve (PhD), Analyst and Owner, IPnest

Also Read:

Interface IP Category to Overtake CPU IP by 2025?

Design IP Revenue Grew 5.2% in 2019, Good News in Declining Semi Market

#56thDAC SerDes, Analog and RISC-V sessions


Semiconductors up 6.5% in 2020, >10% in 2021?

by Bill Jewell on 02-14-2021 at 6:00 am


Semiconductor sales in 2020 were $439.0 billion, up 6.5% from $412.3 billion in 2019, according to World Semiconductor Trade Statistics (WSTS).
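The growth figure follows directly from the two WSTS annual totals; as a quick check:

```python
# Year-over-year growth implied by the WSTS annual totals quoted above
# (values in billions of dollars).
def yoy_growth_pct(current_b: float, prior_b: float) -> float:
    return (current_b / prior_b - 1.0) * 100.0

print(round(yoy_growth_pct(439.0, 412.3), 1))  # 2020 vs 2019 -> 6.5
```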

We at Semiconductor Intelligence have been tracking the accuracy of semiconductor market forecasts from various sources for several years. We look at publicly available projections made late in the prior year or early in the forecast year before the WSTS January data for the forecast year is released in early March. For 2020, we have a tie for most accurate forecast between IHS Markit with a 6% forecast made in January 2020 and ourselves at Semiconductor Intelligence with a 7% forecast made in February 2020. WSTS was also close with a 5.9% forecast in December 2019. Forecasts made during this time period ranged from 0% to 10%.

The forecasts made in late 2019 and early 2020 did not account for the impact of the COVID-19 global pandemic. As the severity of the pandemic became apparent by April 2020, forecasters dramatically lowered their expectations for the 2020 semiconductor market. Some projected a double-digit decline. By the middle of 2020, it became apparent the semiconductor industry would not be as impacted by the pandemic as other sectors of the economy. Most projections then shifted toward positive single-digit growth. Our Semiconductor Intelligence November 2020 forecast was 5.5%. Interestingly, our 7% forecast released in early 2020 was closer to the final number of 6.5% than our forecast in late 2020.

The 4Q 2020 semiconductor market was up 3.5% from 3Q 2020, according to WSTS. The major semiconductor companies generally had strong revenue gains in 4Q 2020. Qualcomm’s IC revenues were up 32% from 3Q 2020. AMD and NXP Semiconductors each had double-digit growth while Intel, Texas Instruments, and Infineon each had high single-digit growth. Micron Technology and STMicroelectronics had revenue declines. Three companies had revenue declines in 4Q 2020 versus 3Q 2020 measured in local currency (Samsung and SK Hynix in South Korean won; MediaTek in New Taiwan dollars) but grew revenue when converted to U.S. dollars.

The outlook for 1Q 2021 revenue is mixed. Micron Technology, MediaTek, Infineon and NXP Semiconductors expect revenue to grow in the low single-digits in 1Q 2021 versus 4Q 2020. Intel, Qualcomm, Texas Instruments, AMD, and STMicroelectronics expect single-digit revenue declines – largely due to normal seasonal trends. Automotive was cited as a growth driver by several companies. The memory companies (Samsung, SK Hynix and Micron Technology) all see an improving DRAM market. The weighted average guidance for the non-memory companies is a 5% decline in 1Q 2021 revenue.

What is the outlook for the semiconductor market for the year 2021? Three key market drivers are smartphones, PCs, and light vehicles (automobiles and light trucks). Smartphone shipments declined 11% in 2020, primarily due to pandemic related production delays. Gartner expects strong 11% growth in smartphone shipments in 2021 as production returns to normal and new 5G models drive consumer demand. PC unit shipments increased 11% in 2020 as more people depended on PCs for home-based working, learning and entertainment during the pandemic. IDC projects PCs will return to a more typical 1% growth in 2021. Light vehicle production dropped sharply by 17% in 2020 due to pandemic related production delays and caution by automakers. IHS Markit forecasts a strong bounce-back to 14% light vehicle production growth in 2021.

Smartphones are the single largest product driver for semiconductors, accounting for about $115 billion in semiconductor revenue in 2020, according to IDC. PCs are the second largest driver at about $70 billion. However, automotive is becoming an increasingly important market for semiconductors. IHS Markit estimates the automotive semiconductor market at about $40 billion. The average semiconductor content per vehicle is about $500, compared to less than $100 per smartphone and around $200 per PC. Most of the semiconductor value in smartphones and PCs is in relatively few components such as processors and memory. Automobiles contain a much wider range of semiconductor products including controllers, memory, mixed-signal ICs, power devices and sensors.

The automotive market is currently experiencing shortages in many semiconductor products. Reduced automobile production beginning in early 2020 led semiconductor companies to shift capacity to products for other applications. Fitch Ratings says the shortages could disrupt automotive production for several months, but expects most of the lost production will be made up in the second half of 2021.

Recent forecasts for the 2021 semiconductor market generally call for strong growth. They range from a low of 4.1% from the Cowan LRA model (which is based on past trends) to a high of 18% from the eternally optimistic Future Horizons. A strong consensus has emerged in the 11% to 12% range, with five of the eleven projections falling there. We at Semiconductor Intelligence are reconfirming our November 2020 forecast of 14% growth in 2021.

Our 2021 forecast is based on the following assumptions:

  1. Recoveries in automobiles, smartphones, and other end markets more than offset slower growth in PCs.
  2. Semiconductor pricing remains stable or increases slightly as demand exceeds supply in several areas.
  3. The global economy opens up in the second half of 2021 as COVID-19 vaccinations become widespread.

About
Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductor Boom in 2021

China Mobile and Computer Update 2020

Electronics Production Healthy


Podcast EP7: Signal Integrity and Killer Robots

by Daniel Nenni on 02-12-2021 at 10:00 am

Dan and Mike are joined by Matt Burns, technical marketing manager at Samtec. Matt discusses the signal integrity challenges faced by system designers. The materials and protocols used for channels on a board, between boards on a rack and even between racks are discussed. Matt also touches on the work Samtec is doing with BattleBots. There is sure to be a topic for everyone in this lively discussion.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Sathyam Pattanam

by Daniel Nenni on 02-12-2021 at 6:00 am


Sathyam has over 35 years of experience in company management, R&D management, and software development, mostly in Electronic Design Automation (EDA) and PCB manufacturing. He has headed companies and global engineering and marketing organizations at Fortune 500 and startup companies, introducing innovative and successful products. He has led startups such as Karthik Electronics, Atrenta, and ArchPro Design Automation to successful mergers and acquisitions. His prior management and technical roles were at Avery Design Systems, Atrenta, Synopsys, ArchPro Design Automation, Bluespec, Cadence Design Systems, AT&T Bell Labs, Karthik Electronics, and Hindustan Teleprinters. Sathyam’s experience in EDA includes SoC design, low-power verification, electronic system-level design and verification, logic design simulation/formal verification/emulation, fault simulation, integrated circuit layout extraction, design-rule checking, and symbolic layout compaction. For Sathyam’s complete bio, please visit his LinkedIn profile.

What brought you to electronics, semiconductors, and EDA?
I was fascinated by electronics in high school back in the mid-1970s, when I built a music stereo system. Later, I got a Bachelor’s in Electronics at IIT Madras, India, and after undergrad started a PCB manufacturing company in Chennai, India at the age of 24. Several local magazines in Chennai recognized my initiative and wrote articles about me as a promising entrepreneur of 1982. Later, I moved to the US and got an MS in Computer Engineering from Rutgers. I did a design project called “Content Addressable Memory” using UC Berkeley’s Magic layout tools. It was amazing to see the impact of EDA in helping do designs. That experience jump-started my journey with EDA, and after my Master’s in Computer Engineering/Computer Science at Rutgers University I joined the famed Electronic Design Automation department at AT&T Bell Laboratories.

What is the Anew Design Automation backstory, and when was it launched?
My co-founder, Dr. Rahul Razdan (LinkedIn profile), while working as the Senior VP of Strategy at Flextronics, realized that long-lifecycle electronic products faced enormous issues dealing with a semiconductor supply chain dominated by consumer markets. These products cover several market verticals such as aerospace, automobiles, defense, industrial equipment, medical, power and energy, IoT, and telecom. Specific issues included reliability, semiconductor obsolescence, and evolving maintenance functionality.

More recently, Rahul was awarded Hall of Fame honors by the ACM for his Ph.D. work at Harvard University on reconfigurable computing. He realized that AI/ML and reconfigurable computing techniques could be used to solve the issues of reliability, supply-chain obsolescence, and functionality obsolescence.

Rahul and I had worked together at Cadence Design Systems, where we successfully delivered flagship products such as the Incisive and AMS platforms to the marketplace. In July of 2020, we founded Anew Design Automation with me as President and CEO and Rahul as Chairman and Chief Technical Advisor. Interestingly, we discovered along the way that system board design also needed to be upgraded, and our critical IP could accelerate this task as well.

As CEO, I brought on board another EDA veteran, Faiq Fazal (LinkedIn profile), as VP of Engineering in October 2020 to lead the engineering efforts, and we have built a strong team of ten engineers with expertise in design, AI/ML, databases, GUIs, and of course EDA. Interestingly, the original NC-Verilog engineering team which I managed at Cadence Design Systems was about the same size, and that platform has delivered over $1 billion in revenue and counting.

What customer challenges are you addressing?
LLC (long lifecycle) customers face a design environment that has not changed very much over several decades, yet the design challenges have accelerated. Today, the role of programmable devices such as FPGAs, programmable processor cores, programmable analog cores, and microprocessors has significantly increased. Also, in most embedded designs, the dominant design element is the software stack and the associated ecosystem for the vertical market. Designers face a sea of information spread across websites, EDA databases, reference designs, and application notes. Amazingly, the central repository of information is still the datasheet, and PDF search is the EDA tool of choice. Our previous articles on SemiWiki describe this situation in detail. Anew will be addressing this problem with advanced semiconductor component selection through design intent, design abstraction, and design-related linting.

After the initial design, LLC customers face a couple of specific pain points:

  1. Say the product has been launched to market. Significant resources have been spent on qualification, certification, and system validation, and the product is being actively used by customers. There are active maintenance contracts and warranties connected to the system design. Five years later, there is a semiconductor obsolescence or reliability problem. The design team is long gone, and the leadership is left with difficult and expensive choices.
  2. The product is embedded in infrastructure or distributed out in the field. The cost of updating the hardware is high, or updating is even impossible (think satellites).

Anew addresses these issues through a Design for LLC flow. Details of the approach can be found on EPSNews Articles  and details of the company can be found at www.anew-da.ai. The leadership team consists of EDA (Cadence, Synopsys) veterans who have built highly differentiated solutions which were not available in the marketplace.  In summary, Anew solves the pain points for electronics design engineering and component management functions for Long Lifecycle Electronic products companies in vertical markets such as aerospace, automobiles, defense, industrial equipment, IoT, medical, power and energy and telecom.

What is your competitive positioning?
Interestingly, Anew seems to be in a gap of functionality between the major parts of the electronics ecosystem.  The key players include:

  1. PLM Companies: PLM (Product Lifecycle Management) is very important to LLC customers, and conventional PLM solutions are well integrated. However, PLM does not address the specific electronic design challenges described above.
  2. EDA Companies: EDA has an intense focus on semiconductor design. Even today, PCB design tools still operate on a component-structure model with little regard for the current issues of system board designers.
  3. Distributors: Distributors such as Digi-Key and Mouser serve as smart searchers for components, but of course this does not address the core design issues around soft IP, programmable hardware, or software.

Anew sits in the large and growing gap between these pillars of conventional functionality with a solution leveraging critical AI/ML and reconfigurable computing technology.

What is the funding situation for Anew?
The company has raised a seed round of around $100K, sufficient to develop the product ideas, functional specifications, and a prototype. Currently, we are working on raising $1M+ from institutional investors, angel investors, and US government grants to productize the first product, System Level Design (SLD) Explorer Smart Searcher.

What do the coming months/years have in store for Anew?
We have conceptualized a powerful EDA platform and plan a series of product releases working toward a solution that can solve these deep Long Lifecycle product issues. As you might imagine, in the coming months we are hyper-focused on building the first product: SLD Explorer Smart Searcher. The figure below should give you a sense of the product architecture and process flow. We have walked senior designers in the defense, medical, and energy markets through this product and gotten very positive feedback. We expect to release it to the marketplace in Q2 2021.

Finally, in terms of a call to action: we are very interested in engaging with potential partners such as distributors, semiconductor companies, EMS providers, or EDA companies who find our value statement of interest. We would also love to engage with design teams who want to work with an innovative company to accelerate their productivity.

Also Read:

CEO Interview: Pim Tuyls of Intrinsic ID

CEO Interview: Tuomas Hollman of Minima Processor

CEO Interview: Lee-Lean Shu of GSI Technology


“For Want of a Chip, the Auto Industry was Lost”

“For Want of a Chip, the Auto Industry was Lost”
by Robert Maire on 02-11-2021 at 10:00 am

Auto Assembly Line

Semiconductor production can’t be turned on and off like a switch

Semiconductor fabs have to run 24/7 to make money. They have to run full all the time to amortize their high operating costs and, in the case of new fabs, high capital costs.

Unlike a brake pad factory that can hire and fire people at will and source readily available materials on short notice, chip fabs are run non-stop by specialized employees.

It’s a long pipeline to design chips, build the mask set, start wafers, and then package and test the final die.

When auto makers hit the emergency stop button on their chip supply for Covid they didn’t realize how long it would take to restart production.

They also didn’t realize that semiconductor fabs can just as easily make microcontrollers for toaster ovens as they can for cars and that toaster oven demand is not as impacted by Covid as cars are.

Demand in other parts of the tech industry, such as IoT, was stronger in 2020, so why in the world would a foundry even consider going back to making chips for anti-lock brakes?

The $60B-plus “butterfly effect” on a $2T/year chaos-based auto industry

There have been reports in the media that over $60B in auto production has been lost due to semiconductor shortages.

We would be willing to bet that this shortfall and idling of auto workers and factories was likely caused by only a few tens of millions of dollars worth of semiconductor parts in critical areas.

Think about it… the lack of a $0.50 microcontroller can stop the production of a $50,000 car for months because the part is highly specialized and likely single-sourced.
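
To put that leverage in perspective, here is a quick back-of-envelope calculation. The ~$60B figure comes from the media reports cited above; the ~$30M value of the missing parts is our assumption for illustration only:

```python
# Back-of-envelope leverage implied by the article's rough numbers.
# The $30M figure for the missing chips is an assumption, not a
# reported statistic.
lost_production = 60e9   # reported lost auto production, USD
missing_chips = 30e6     # assumed value of the missing parts, USD
print(f"industry leverage: {lost_production / missing_chips:.0f}x")   # 2000x

chip_cost = 0.50         # one microcontroller, USD
car_price = 50_000       # one car, USD
print(f"per-car leverage: {car_price / chip_cost:,.0f}x")             # 100,000x
```

A three-orders-of-magnitude multiplier either way, which is exactly why a cheap part can idle an expensive factory.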

We think the auto industry has exposed the semiconductor industry as a single point of failure for the huge number of other industries that rely on it.

Trillions of dollars across many different industries rely on semiconductor chips that everyone assumes will always be there, freely available and at low cost, at a moment’s notice.

Old Fabs and Analog companies just saw a large jump in value

Older 8-inch equipment has been going up in price, and equipment companies are selling more 8-inch tools than they did at the peak of the 8-inch era.

Old fabs that were crated up, sold off by the pound, and shipped to China to make chips for toaster ovens and dishwashers all of a sudden look more valuable and important.

Semiconductor companies that make cheap analog components and microcontrollers which were viewed as trailing edge dinosaurs now look sexy.

We don’t think this is a one-time blip that people will forget about. The auto industry certainly won’t forget any time soon. Other industries had better figure it out, or they too will suffer the same fate some day soon.

We think both chip companies and foundries should be able to parlay this into higher pricing and better margins in exchange for a more guaranteed supply pipeline of key components.

We wonder if any auto companies or others will think about in-sourcing chip supply.

If we were Apple, run by a former supply chain guru like Tim Cook, maybe we would get a bit nervous looking at the auto industry and knowing that all our eggs are in TSMC’s basket, especially now after the launch of the M1.

Maybe TSMC and Apple are in a “deadly embrace” that neither can exit from.

We think Apple and many others in the tech industry and beyond have to take a long hard look at their chip supply chain, which has long been taken for granted and assumed to always be there for them.

The risks run into the trillions and are existential for the defense and intelligence arms of governments.

Even if the chip industry doesn’t go away on its own, there’s always the risk of it being taken away by external forces…

You don’t know what you’ve got till it’s gone…

Semiconductor Advisors

Also Read:

Will EUV take a Breather in 2021?

New Intel CEO Commits to Remaining an IDM

ASML – Strong DUV Throwback While EUV Slows- Logic Dominates Memory


Need Electromagnetic Simulations for ICs?

Need Electromagnetic Simulations for ICs?
by Daniel Nenni on 02-11-2021 at 6:00 am

RaptorH in Virtuoso

Electromagnetic (EM) simulations have been performed on die metal structures since the 1990s. Originally, the analysis was restricted to a single device (e.g., a spiral inductor). The number of on-die devices simulated simultaneously grew with the increasing capabilities of the computers performing the computations. This recently culminated with Ansys’ announcement that HFSS was used to solve an entire 5.5 mm x 5.5 mm 5G radio frequency integrated circuit (RFIC) in under 30 hours.

People have been using the gold-standard accuracy of HFSS for decades to solve on-die structures. But isn’t HFSS hard to use and only for electromagnetic simulation experts? What about the on-die designer who must be an expert in layout and SPICE simulation? Is it too much to ask designers to become experts in yet another simulator? Design cycles are getting shorter, and a die designer can no longer afford to wait in line for electromagnetic extraction from a dedicated core group of EM simulation experts.

To address the needs of circuit designers, Ansys developed RaptorH. Powered by Ansys HFSS, RaptorH combines the HFSS solver with RaptorX’s established integration with Cadence Virtuoso. This means that die designers can now run their own HFSS simulations from within the familiar Cadence Virtuoso environment without having to learn a new software interface. Furthermore, RaptorH brings many benefits to the simulation of on-die structures.

Figure 1. RaptorH shown integrated with Cadence Virtuoso

The first benefit is that RaptorH fulfills all foundry requirements, ranging from compliance with techfile encryption standards to support for advanced layout-dependent effects (LDE) down to 3nm nodes. This has many implications. A user no longer has to guess at the proprietary material properties and thicknesses of the backend metallization to get accurate models, and the foundries no longer need to worry about disclosing their intellectual property. In addition, the LDE modifications to the metal are automatically implemented in the model generation, so a user does not have to read, interpret, and modify the geometry manually. This means the user can accurately simulate the true manufactured device performance.

Figure 2. Image of layout-dependent effects (LDE)

The black shapes on the left of the figure are the as-drawn shapes. The red shapes on the right show how the lines are actually manufactured. RaptorH reads the techfiles and automatically applies LDE for the most accurate model possible.

Geometry simplification is also automated in the RaptorH flow. Old workflows for simulating on-die structures with HFSS included creating a reduced layout file of just the die metal that needed to be simulated, filling in slots and holes in large metal planes, and simplifying the vias into a structure that is fast and efficient to simulate. RaptorH now automates this work for the user. You can select which cells of the hierarchy to include and how large a hole HFSS will automatically fill for you, significantly reducing engineering time spent on model creation.
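
The hole-filling rule can be pictured with a tiny sketch: any hole in a metal plane whose area falls below a user-set threshold is "filled" (dropped) before meshing. This is an illustrative model only, not Ansys’ actual algorithm or API:

```python
# Toy model of threshold-based hole filling in a metal plane.
# Holes smaller than the threshold are dropped ("filled") so the
# mesher does not waste effort resolving them.
from dataclasses import dataclass

@dataclass
class Hole:
    width_um: float
    height_um: float

    @property
    def area_um2(self) -> float:
        return self.width_um * self.height_um

def fill_small_holes(holes, max_fill_area_um2):
    """Keep only holes too large to be filled automatically."""
    return [h for h in holes if h.area_um2 > max_fill_area_um2]

holes = [Hole(0.5, 0.5), Hole(2.0, 2.0), Hole(10.0, 10.0)]
kept = fill_small_holes(holes, max_fill_area_um2=5.0)
print(len(kept))  # 1: only the 10x10 um hole survives the filter
```

The real tool works on arbitrary polygons from the layout database, but the trade-off is the same: a larger fill threshold means a simpler, faster model at some cost in fidelity.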

Figure 3. Full layout shown on the left

The automatically reduced layout is shown on the right. Notice that the active device metallization is not included, and that some inductors were intentionally excluded.

Not only does RaptorH read the foundry techfiles and simplify the geometry for you, it also automatically exports an S-parameter model and creates a Spectre netlist file and symbol. This makes it very easy to use the EM model in circuit simulations to verify circuit performance.
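
For readers unfamiliar with S-parameter models, here is a toy snippet that emits a two-port file in the industry-standard Touchstone (.s2p) format that such exports typically use. The values are invented; a real file would come from the field solver, and this is not RaptorH’s actual export code:

```python
# Minimal illustration of the Touchstone (.s2p) S-parameter format
# that EM tools export for use in circuit simulation.
rows = [
    # freq_GHz, then (magnitude, angle_deg) for S11, S21, S12, S22
    (1.0, (0.10, -10.0), (0.90, -45.0), (0.90, -45.0), (0.10, -12.0)),
    (2.0, (0.20, -20.0), (0.80, -90.0), (0.80, -90.0), (0.20, -25.0)),
]
lines = ["# GHz S MA R 50"]  # option line: unit, S-params, mag/angle, 50-ohm reference
for freq, *sparams in rows:
    pairs = " ".join(f"{m:.4f} {a:.2f}" for m, a in sparams)
    lines.append(f"{freq:.3f} {pairs}")
print("\n".join(lines))
```

A circuit simulator such as Spectre reads a file like this as a frequency-domain black box, which is what lets the extracted EM model drop straight into a testbench.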

As system complexity increases, it is no longer adequate for the die designer to model the die metal alone as the die is always placed in a package of some kind. Today’s system-on-chip (SoC) designs are typically placed in ball grid array (BGA) packages and the metal on the die couples electromagnetically to the metal on the BGA. RaptorH allows the designer to import parts of the BGA package for co-simulation to derive the true manufactured performance. This eliminates surprises at the end of the design cycle or during product testing.

Because RaptorH uses the distributed memory matrix (DMM) solver technology that is part of Ansys HFSS, there is no need to limit the size of the problem to what fits in the RAM of a single machine. DMM allows engineers to efficiently utilize existing compute infrastructure to solve the most demanding problems. Beyond using DMM for large problems, RaptorH can also use multiple cores on each machine through high-performance computing (HPC) licensing to finish your simulations quickly.

Want to learn more? Check out the Ansys blog at https://www.ansys.com/blog

Also Read

Webinar: Electrothermal Signoff for 2.5D and 3D IC Systems

Best Practices are Much Better with Ansys Cloud and HFSS

System-level Electromagnetic Coupling Analysis is now possible, and necessary


Do You Care About What You’re Measuring? Part 3: Industrial Condition Monitoring

Do You Care About What You’re Measuring? Part 3: Industrial Condition Monitoring
by Steve Logan on 02-10-2021 at 10:00 am

Industrial Condition Monitoring

Mountains over 10,000 feet capped with snow in the winter. Some of the deepest, clearest blue sky you’ll find in the United States. Farmlands of green in the spring. That was the view looking out the second-story window of the most awesome conference room I’ve ever taken a customer meeting in. Even if I didn’t immediately understand their application of eddy current sensors for vibration analysis of a turbine, that conference room view is one to remember.

At this industrial automation giant, precision sensing brings in the revenue that keeps them viable in a high-desert location strikingly different from the typical suburban customer sites I usually visited. The company utilizes eddy current proximity sensors, pressure sensors, and velocity sensors for vibration analysis of turbines across a variety of condition monitoring applications: power plants, windmills, hydroelectric, and oil & gas.

I loved the idea of potentially selling one of my ADCs into an application such as a windmill or turbine. I’m not a motor control expert, but I was amazed by the technology of windmills and turbines. These condition monitoring applications utilize a series of sensors to detect whether a blade is “slightly off” its ideal control loop, producing worse efficiency and therefore less energy. For a windmill in the ocean, or a turbine running 99.999% of the time in a power plant, breaking down too soon meant a large loss to the industrial giant’s end customer. This customer’s ability to perform vibration analysis across a wide frequency range was mind-blowing.

A slight digression if you’ll allow me… a certain prolific podcaster and author wrote about “the signal and the noise” eight years ago. I’ve always liked that phrase. One of the ideas was being able to get to the truths of real-world circumstances (the signal) amidst a large amount of random, inconsequential data points (the noise). I’ve always thought what this industrial customer did, pulling out the most minute vibration differences in a sea of complex data, all while accurately monitoring over time, was the true definition of analyzing the signal amongst the noise. They definitely care about what they’re measuring.

At the heart of these measurements, the incumbent socket was held by a 24-bit, 105ksps delta-sigma ADC: 109dB signal-to-noise ratio, and 12uV rms noise, maximum. Not typical, maximum! I really wanted to unseat this competitor’s ADC. We put together an impressive proposed product definition that we called JG17: 24 bits, 300ksps, and multiple channels versus their 1-channel ADC. Plus, ours offered customized synchronization control.
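
For readers who like to sanity-check datasheet numbers, the standard effective-number-of-bits conversion shows how that 109dB SNR compares against the 24-bit ideal:

```python
# Datasheet sanity math for the incumbent 24-bit delta-sigma ADC.
# ENOB = (SNR_dB - 1.76) / 6.02 is the standard conversion.
snr_db = 109.0
enob = (snr_db - 1.76) / 6.02
print(f"ENOB: {enob:.1f} bits")                     # ~17.8 effective bits

ideal_snr_db = 6.02 * 24 + 1.76                     # ideal 24-bit SNR
print(f"ideal 24-bit SNR: {ideal_snr_db:.1f} dB")   # 146.2 dB
```

In other words, even a part this good delivers roughly 18 effective bits out of the 24 on the label, which is why the noise floor spec, not the bit count, is what the vibration analysis actually lives on.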

But alas, it didn’t work out in our favor. It turns out there aren’t cell-phone-level volumes in vibration analysis and industrial condition monitoring. There aren’t many companies in the world with the engineering craft and capability to make these kinds of products. Their volumes are in the tens of thousands versus hundreds of millions. But that’s also what makes it so intriguing. Designing systems around eddy current sensors isn’t something most engineers do right after getting their BSEE.

In the end, we couldn’t build the business case or get the customer commitment to switch from their existing solution to ours. But that can never take away my memory of that conference room, or the fun of exploring the product definition challenge.


A New ML Application, in Formal Regressions

A New ML Application, in Formal Regressions
by Bernard Murphy on 02-10-2021 at 6:00 am

A New ML Application

Machine learning (ML) is a once-in-a-generation innovation that seems like it should be applicable almost everywhere. It has certainly revolutionized automotive safety, radiology, and many other domains. In our neck of the woods, SoC implementation is advancing by learning to reduce total negative slack and better optimize floorplans. But functional verification has been curiously resistant to the charms of ML. I know this is not for lack of trying. Some of the superficially “obvious” candidates, such as improving constrained random test generation, have proven not to be as easy or as effective as you might think.

ML for orchestration

That doesn’t mean there aren’t ways to use ML in this field. We just have to think more creatively. Formal verification has already breached the barrier by using ML to orchestrate the use of 30 or more formal engines to prove (or disprove) assertions. Formal isn’t just one technique; there are many engines, and many methodologies for applying those engines to work towards a proof. There’s no fixed way to know in advance what will work. You try something for a while; if that isn’t getting you anywhere, you try something else. Orchestration manages this process automatically. Knowing how to do this efficiently is quite dependent on experience, and therefore amenable to ML training.
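
To make the orchestration idea concrete, here is a deliberately simplified sketch that treats engine selection as a learning problem: try engines, observe outcomes, and steer effort toward what has worked. The engine names, success rates, and the epsilon-greedy policy are all invented for illustration; this is not Synopsys’ actual method:

```python
import random

# Toy model of the orchestration problem: choose among formal engines
# whose success rates are unknown, and learn from outcomes.
random.seed(7)
true_rate = {"bdd": 0.2, "sat_bmc": 0.6, "k_induction": 0.4}  # hidden from the policy
wins = {e: 0 for e in true_rate}
tries = {e: 0 for e in true_rate}

def pick_engine(eps=0.1):
    """Mostly exploit the best-observed engine, sometimes explore."""
    if random.random() < eps or not any(tries.values()):
        return random.choice(list(true_rate))
    return max(true_rate, key=lambda e: wins[e] / max(tries[e], 1))

for _ in range(500):
    engine = pick_engine()
    tries[engine] += 1
    wins[engine] += random.random() < true_rate[engine]  # simulated proof attempt

most_used = max(tries, key=tries.get)
print(most_used, tries)
```

Over many attempts the policy concentrates its budget on the engines that have actually concluded proofs, which is the essence of what an experience-trained orchestrator does at far larger scale.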

ML for regression acceleration

Another application is accelerating regression runs. Regression is a natural fit for ML because the whole process is a continuous refinement over a growing database of results (until you make big changes). Synopsys recently posted a webcast detailing how they now offer ML-based regression mode acceleration (RMA) in VC Formal. The image above gives a simplified explanation of how this works. In the first run, proving/disproving progresses through multiple paths until proofs or counterexamples are found. On a subsequent run, those conclusive paths can be checked first to re-verify. If the checks are good, regression moves on to the next step. If not, the search can be expanded to find new proofs or counterexamples.
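
A toy model of that replay idea: cache which strategy concluded each property, and try it first on the next regression. The property and strategy names here are invented, and the "engine" is a trivial stand-in; VC Formal’s internals are far more sophisticated and not public:

```python
# Toy sketch of replay-based regression acceleration: remember which
# strategy concluded each property, and try it first next time.
cache = {}  # property name -> strategy that concluded it last run

def prove(prop, strategy):
    # Stand-in for a formal engine: a property is "proved" iff the
    # strategy name appears in its (made-up) name.
    return strategy in prop

def run_regression(props, strategies):
    engine_calls = 0
    for prop in props:
        ordered = strategies[:]
        if prop in cache:  # replay last run's winner first
            ordered.remove(cache[prop])
            ordered.insert(0, cache[prop])
        for strat in ordered:
            engine_calls += 1
            if prove(prop, strat):
                cache[prop] = strat
                break
    return engine_calls

props = ["fifo_bmc", "arb_induction", "lock_bmc"]
strategies = ["bmc", "induction", "abstraction"]
first = run_regression(props, strategies)
second = run_regression(props, strategies)
print(first, second)  # prints "4 3": the replayed run spends fewer engine calls
```

The saving here is small because the example is tiny; in a real regression, where each "engine call" can be hours of proof effort across deep search paths, skipping the unproductive ones is where the reported speedups come from.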

The impact is obvious. Regression runs don’t have to start from “zero knowledge” each time; they can build on what they already know. The caveat is that where logic changed and certain proofs no longer work as before, the engines need to back off and generate new proofs, which then become the basis for new learning and the starting point for subsequent regressions. This isn’t just theory. Synopsys shows examples in which they get very impressive speedups (24-65X) simply by re-running regressions. In some cases they are also able to complete additional assertions which were previously inconclusive. Speedups in those cases are not as impressive, but hey, you completed more proofs than before. And next time around you should get those big speedups again!

ML for bug hunting

The presenter (Sai Karthik Madabhushi, Sr Apps Engineer) also talked about applying this capability to bug hunting. This is a neat application for RTL developers while still in design. Testbench development at this stage is uncommon; however, bug hunting is a very productive way to look for bugs early on. Here you create assertions you think should hold true, then run formal to see if you can find counterexamples. This can be very productive, but without intelligence you keep retracing unproductive paths in subsequent runs as you try to extend the depth of your search. RMA can help here too, by following successful traces from previous runs to reach further out and find deeper failures.

You can watch the webcast HERE.

Also Read:

Change Management for Functional Safety

What Might the “1nm Node” Look Like?

EDA Tool Support for GAA Process Designs