
Arteris IP folds in Magillem. Perfect for SoC Integrators

by Bernard Murphy on 02-18-2021 at 6:00 am


Arteris IP and Magillem recently tied the knot, creating a merger of Network-on-Chip (NoC) and related Intellectual Property (IP) with a platform known for IP-XACT based SoC integration and related support. This is interesting to me because I’m familiar with products and people in both companies. I talked to Kurt Shuler, vice president of marketing, to understand the rationale behind the acquisition.

The value Arteris IP provides

Kurt put the top-level reasoning to me this way. First, Arteris IP has always been about making it easier to build systems-on-chip (SoCs): integrating IPs you acquired from various providers, together with your own IP, to produce an SoC that delivers top-notch performance, Quality of Service (QoS), power, safety and so on. In fact, as he said (and I agree), the on-chip interconnect provides the “knobs and dials” for engineers to define the architecture of an SoC. You put a lot of IP that anyone can buy around that interconnect, and you make it yours, partly through your special sauce and partly through how you optimize your architecture using your unique interconnect configuration. Making this possible, each IP interfaces to the bus through a network interface unit (NIU) adapter, so that all IPs are speaking a common language on the bus.

The value Magillem provides

Magillem has a closely related and complementary goal. They also aim to make it easy for you to build your SoC, but they are doing it at the data management level. You acquire all these IPs from multiple sources. Each has Register Transfer Level (RTL) source files, SystemC models, register maps, bus interfaces, controls and more, together with all the configurability offered by most IPs. This could create a nightmare for an integrator if IP vendors couldn’t agree on a standard way to package all that information. Fortunately, all the main providers have already agreed on the IP-XACT standard. This means that at the integration level, when you’re connecting IPs and busses together, reconfiguring them and so on, they all speak the same language for packaging information too. Sound familiar?
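To make the “same language” point concrete, here is a toy Python sketch of reading bus-interface information out of an IP-XACT-style component description. The XML below is deliberately simplified, and the component and interface names are invented for illustration; real IP-XACT follows the IEEE 1685 schema with namespaces.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical IP-XACT-style component description.
# Real IP-XACT uses the IEEE 1685 schema and XML namespaces.
IPXACT = """<?xml version="1.0"?>
<component>
  <vendor>example.com</vendor>
  <name>uart_lite</name>
  <busInterfaces>
    <busInterface><name>s_axi</name><busType>AXI4-Lite</busType></busInterface>
    <busInterface><name>irq</name><busType>interrupt</busType></busInterface>
  </busInterfaces>
</component>"""

def bus_interfaces(xml_text):
    """Return (name, busType) pairs for every bus interface in a component."""
    root = ET.fromstring(xml_text)
    return [(bi.findtext("name"), bi.findtext("busType"))
            for bi in root.iter("busInterface")]

print(bus_interfaces(IPXACT))
# [('s_axi', 'AXI4-Lite'), ('irq', 'interrupt')]
```

In a real flow, an integration tool walks descriptions like this for every IP in the design and stitches matching bus interfaces together, regardless of which vendor supplied each IP.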

These capabilities have existed since the early days of both companies. Over the years they have acquired many joint customers, some using Arteris and Magillem together, some using one or the other as best meets their immediate needs. In either case, they’re looking for low-friction development solutions in their interconnect design and/or in SoC assembly, avoiding complex rework in multi-tier interconnects or in importing and updating IP with incompatible interfaces.

An obvious synergy

There is obvious synergy between these goals. Why not further reduce friction by having the NoC layer and the data management layer work hand-in-hand? Import a new rev. of an IP, and the NoC is ready to reconfigure against that new package. A new derivative can be created with some additional IPs and some removed. The NoC can easily sync up with that new configuration. Or optimize the NoC to meet a QoS goal and the corresponding IPs can automatically reconfigure.

Kurt tells me they will continue to support the standalone Magillem and Arteris IP products. They have already started engineering and architectural work to take advantage of the technical synergies between the two product lines. With so many top-tier shared customers, I expect we’ll start to see significant advantages and innovations in their joint capabilities soon.

You can read the press release HERE.

Also Read:

The Reality of ISO 26262 Interpretation. Experience Matters

Cache Coherence Everywhere may be Easier Than you Think

AI in Korea. Low-Key PR, Active Development


Techniques and Tools for Accelerating Low Power Design Simulations

by Kalar Rajendiran on 02-17-2021 at 10:00 am


I recently watched a webinar titled “How to accelerate power-aware simulation debug with Synopsys’ VC LP” that was presented by Ashwani Kumar Dwivedi, senior applications engineer at Synopsys. Watching the webinar made me reminisce about how design verification has evolved over the years. A long time ago, static verification started gaining attention as a way to address some of the challenges of those times. Performance and memory capacity of computers couldn’t meet the turnaround time demands of simulating complex designs. And after long turnaround time simulations, if design bugs were identified, it was a gargantuan task to debug.

Static verification tools were developed to pre-verify designs, a way to catch design bugs early on and minimize the need for running elaborate dynamic simulations as a requirement for signoff. Results were amazing, and there was even a push for signing off simple designs without running dynamic simulations at all, or running dynamic simulations with only a small set of test vectors just to exercise the complex portions of the designs. Nowadays, with increased design complexity, SoC designs, the incorporation of extreme low power design techniques and access to high-performance compute/memory capacity, the tendency may be to rely more heavily on dynamic simulation.

This webinar showcases how a judicious combination of static verification and dynamic simulation can provide immense benefits. The presenter provides lots of examples to highlight each area of benefit and quantifies the benefits using results from some case studies. I recommend you watch the webinar to get the complete details.

I’ll synthesize below what I gathered from watching the webinar.

First things first. The webinar sheds light on a lot more than its title may suggest. It addresses more than just accelerating simulation debugging: it covers issues that lead to downstream bugs that show up during simulation and discusses ways to prevent those bugs in the first place.

The presenter sets the stage (Figure 1) by listing the many challenges faced during low power functional verification, which fall into the bring-up, simulation and debug stages. The fewer iterations that happen in each of these stages, the faster the turnaround time to design signoff.

Figure 1 (Source: Synopsys)

He highlights the importance of running a design-independent UPF check (DIUC). Many UPF issues with the potential to cause downstream problems can be caught independent of the design, and it is very important to fix these even before running the design through Synopsys VC LP.

The presenter then discusses many examples of situations where custom design rules are needed and, if not implemented, will cause bigger debug challenges downstream. He walks through some examples and describes VC LP’s custom design rule writing and checking functionality, which goes beyond what is possible with UPF alone.

He then addresses how LP architecture checks can detect both structural and functional issues. Debugging simulation failures due to X-propagation is very challenging and time consuming. The presenter details the many sources of X-propagation in simulation that can be prevented from ever reaching the simulation stage.

The integrated flow between Synopsys VCS® and VC LP makes running VC LP an almost zero-effort task for the simulation team. The presenter advises that, whenever possible, the same team should run both static verification and simulation. This, combined with an integrated flow through Synopsys’ Verdi® HW SW Debug software (screenshot in Figure 2), enables easy tracing of debug situations to locate their root causes.

Figure 2 (Source: Synopsys)

By leveraging VC LP with VCS, a lot can be gained in terms of time and cost savings. The presenter reports stats from some case studies where a 38% reduction in simulation run time and an 81% reduction in design related issues were achieved after running VC LP ahead of simulation.

Check out the full webinar to see detailed examples and how they are applicable to your particular role within the design cycle.

Also Read:

A New ML Application, in Formal Regressions

Change Management for Functional Safety

What Might the “1nm Node” Look Like?


SmartDV Shines in 2020!

by Daniel Nenni on 02-17-2021 at 6:00 am


After an incredible year for SemiWiki I spent much of January breaking down 2020 with our sponsoring companies. Some had a down year, some had a flat year, but quite a few had remarkable years. One standout company is SmartDV which recorded a 51% revenue increase, so the important question is why?

Reasons:
SmartDV covers one of the more explosive market segments and that is IP. SmartDV also covers one of the fastest growing EDA segments and that is verification. This is one of those 1+1=3 situations, absolutely.

SmartDV closed multi-year agreements and added 26 new customers, a 126% increase, across North America, Japan, Europe and Asia.

SmartDV saw licensing demand for Verification IP increase by close to 70% and demand for Design IP solutions increase by 300%. Much of this can be attributed to its expanding Design and Verification IP portfolio. Engineering groups purchasing SmartDV’s IP range from well-funded startups and mid-size chip makers to well-known diversified companies.

Will that growth continue in 2021?
The acquisition of a portfolio of silicon-realized controller Design IP for MIPI and USB interfaces last year will help fuel growth this year. In fact, 2021 revenue to date has already reached 22% of 2020’s full year revenue. 2021’s strong growth so far comes from both design and verification IP in the 5G, consumer and HPC markets, though design IP is considerably stronger.

“Our 2021 revenue projections are strong in all regions and show increasing interest in our products, especially Design IP,” says Deepak Kumar Tala, SmartDV’s managing director. “As we grow our business and expand our offerings, our customer commitment and service will not waver, nor will our ability to rapidly customize Design and Verification IP for specific applications and customer requirements.”

So yes SmartDV is hiring:
SmartDV offers a unique opportunity for ambitious ASIC engineers. As an ASIC design and verification expert you will have a range of projects to work on and the opportunity to work with the industry’s best talent.

At SmartDV you will get to work on very innovative technologies and have the chance to contribute to them. If you think you know the next big thing in verification, or believe you can solve the next big issue in verification, then SmartDV is the right place for you. Send your resume to jobs@smart-dv.com

About SmartDV
SmartDV™ Technologies is the Proven and Trusted choice for Verification and Design IP with the best customer service from more than 250 experienced ASIC and SoC design and verification engineers. SmartDV offers high-quality standard protocol Design and Verification IP for simulation, emulation, field programmable gate array (FPGA) prototyping, post-silicon validation, formal property verification and RISC-V CPU verification. Any of its Design and Verification IP solutions can be rapidly customized to meet specific customer design needs. The result is Proven and Trusted Design and Verification IP used in hundreds of networking, storage, automotive, bus, MIPI and display chip projects throughout the global electronics industry. SmartDV is headquartered in Bangalore, India, with U.S. headquarters in San Jose, Calif.

Connect with SmartDV at:
Website: www.Smart-DV.com
Linkedin: https://www.linkedin.com/company/smartdv-technologies/about/
Twitter: @SmartDV

Also Read:

SmartDV Expands Its Design IP Portfolio with an Acquisition

CEO Interview: Deepak Kumar Tala of SmartDV

The Quiet Giant in Verification IP and More


Synopsys Delivers a Brief History of AI chips and Specialty AI IP

by Mike Gianfagna on 02-16-2021 at 10:00 am

Cloud AI Accelerator SoC

Let’s face it, AI is everywhere. From the cloud to the edge to your pocket, there is more and more embedded intelligence fueling efficiency and features. It’s sometimes hard to discern where human interaction ends and machine interaction begins. The technology that underlies all this is quite complex and daunting to understand. Sometimes low power is critical, sometimes it’s all about raw throughput, and it’s always about flexibility and programmability. Putting all this in perspective is hard; just cataloging the disciplines and suppliers involved is a challenge. When I saw a recent White Paper from Synopsys on the topic of AI, SoCs and IP, I took notice. After reading the piece I came away with a much better grasp of everything going on in the area of chips and AI. Indeed, Synopsys delivers a brief history of AI chips and specialty AI IP that covers a lot of ground.

If you are interested in AI, no matter what your job is, you’re going to want to read this White Paper. A link is coming. Let’s review some of the topics covered first. The author is Ron Lowman, DesignWare IP Strategic Marketing Manager at Synopsys. Before Synopsys, Ron had long stints at Motorola and then Freescale. Ron clearly understands the space and does a great job explaining how all the pieces fit together. The breadth of application of AI is truly remarkable. The White Paper references a helpful Semico report, including a quite telling quote that “some level of AI function in literally every type of silicon is strong and gaining momentum.” That says a lot.

The various types of silicon for AI are discussed, with two of note: stand-alone accelerators that connect in some fashion to an application processor, and application processors with added on-device neural network hardware acceleration. The market segments served by these kinds of chips are detailed. There are many, and they exhibit various levels of three key parameters, depending on the application:

  • Performance in TOPS
  • Performance in TOPS/Watt
  • Model compression

If you want to know how these parameters fit the various markets and what process technologies are used, download the White Paper. An interesting view of market growth is also provided that looks at expansion across high (>100 W), medium (5–100 W) and low (<5 W) power requirements. As you might imagine, the growth is extremely large. The split between the various power regimes may surprise you.

The design challenges associated with AI chip design are then discussed. There are many, including balancing power/latency/performance, special memory architectures, data connectivity and security.  A useful set of core challenges that span all markets is provided:

  • Adding specialized processing capabilities that are much more efficient performing the necessary math such as matrix multiplications and dot products
  • Efficient memory access for processing of unique coefficients such as weights and activations needed for deep learning
  • Reliable, proven real-time interfaces for chip-to-chip, chip-to-cloud, sensor data, and accelerator-to-host connectivity
  • Protecting and securing data against hackers or data corruption

Typical architectures for various types of AI devices are then discussed. Applications here include cloud, edge and on-device. The figure at the top of this post is a representation of a cloud AI accelerator SoC.  The White Paper then outlines the DesignWare IP that is available for AI SoCs. The categories discussed include:

  • Specialized processing
  • Memory performance
  • Real-time data connectivity
  • Security

Synopsys has quite a large footprint here; there’s a lot to browse. I’ve covered Synopsys IP support for AI/ML in this prior post as well. The White Paper concludes with a very important discussion regarding the tools and support Synopsys provides to help you get your AI application out the door. This includes tools for software development, verification, and benchmarking as well as expertise and IP customization. You can also review several customer successes, including Tenstorrent, Black Sesame, Nvidia, Himax, Infineon and Orbbec. This White Paper has something for everyone. Synopsys really delivers a brief history of AI chips and specialty AI IP that covers a lot of ground. The White Paper title is “The Growing Market for Specialized Artificial Intelligence IP in SoCs”, and you can download your copy here.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

The Heart of Trust in the Cloud. Hardware Security IP

Synopsys is Extending CXL Applications with New IP

Webinar on Synopsys MIPI IP


Happy Birthday UVM! A Very Grown-Up 10-Year-Old

by Bernard Murphy on 02-16-2021 at 6:00 am


The UVM standard was first released by Accellera 10 years ago this month and is now by far the leading methodology for functionally verifying logic designs, especially at the block level. As I write, DVCon fast approaches, so I talked to Tom Fitzpatrick, Verification Technologist at Siemens EDA (Mentor Graphics), for a perspective. Tom has been intimately involved in committee activity on defining UVM, representing different EDA vendors at different times, so he knows whereof he speaks. We talked about how the standard emerged, where it is today and where the committee is planning further growth.

History

Successful standards most often build on successful proprietary/pseudo-open implementations. Back in 2004 Mentor had AVM and Cadence had eRM. They agreed in principle to partner on a common standard, OVM, which took a little while to get off the ground as some tools were still completing their SystemVerilog support. When both were ready, they locked themselves in a hotel room in the Bay Area for a week. Agreeing this had to be based on SystemVerilog, they pulled in concepts from AVM and eRM, kicked around and refined ideas, converged on an OVM definition and donated it to Accellera.

Synopsys was already doing very nicely with VMM but acknowledged that momentum was building behind OVM. They decided to jump in as well, participating in the definition and donating the register abstraction language from VMM. To satisfy honor all round, the evolved standard was renamed UVM. And that’s how UVM 1.0 was born, in February of 2011.

I asked Tom if he could share a little behind the scenes insight, for those of us who wonder why such collaborations don’t seem to produce definitions everywhere grounded in pure reason. He chuckled and admitted that some choices are made for technical reasons and some for political reasons. Even SystemVerilog has features defined to carry across some contributor-preferred methodologies. The same applies with the UVM definition. In fairness to vendors, I’m sure each is looking ahead to how they’re going to migrate their customers from legacy investments. Some level of accommodation is going to be needed for that reason alone. Then again, sometimes I’m sure it’s simply partisan pride in their own inventions. That’s an unavoidable reality in collaboration. Wise standards leaders know how to progress while keeping all participants reasonably happy.

Adoption

Mentor does regular blind surveys on verification through the Wilson Research Group, so they can say with confidence that almost 80% of ASIC design projects are using UVM, as are 50% of FPGA projects. (FPGAs are now so complex that verification – as opposed to burn and churn – has become very important, for prototyping and coverage reasons.) Tom acknowledged that most users focus on block-level verification. That said, big design houses have already built extensive libraries on top of this stack. Which I take as a measure of a truly successful standard.

Of the 20% who aren’t using UVM, Tom felt that the majority who are still on legacy standards rely on cost-of-switching arguments – “what we have works and we don’t have time to change”. An argument we can all relate to, but then again… All IP vendors have switched to UVM because they have to service customers who demand UVM support. And a high percentage of job postings for verification engineers explicitly list UVM experience as a requirement. Momentum is heavily behind UVM. Being a holdout is only going to get lonelier.

Looking ahead

I’ve covered some of this recently in my update with Lu Dai of Accellera. A few points particularly struck me in the current discussion with Tom. First, as a UVM guy through and through, he acknowledged that UVM doesn’t do a good job of running software. You can create UVM sequences and virtual sequences that mimic software, but when it comes to replacing the UVM agent with an actual processor model, you have to rewrite the sequence as software. That’s where PSS can take over. UVM still reigns supreme at the block level, but not for modeling software at the system level. PSS tasks still connect to UVM underneath, and a continuing focus is on connections between PSS and UVM.

Another fascinating direction is Python implementations of UVM. Tom told me that he and Ray Salemi talked about how many new college hires know nothing about Verilog or VHDL, but they all know Python. Ray (who at Siemens supports mil-aero FPGA users) also mentioned that that class of users has a natural resistance to using a standard based on SystemVerilog. Switch them to perceived neutral territory in Python and the resistance evaporates. I imagine the same may be true for FPGA applications in datacenters (reconfigurable networking, for example). Python could lower the barrier to adoption for CS grads hired to deal with these strange FPGA beasts.
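For flavor, here is a minimal plain-Python sketch of the sequence/driver handshake at the heart of UVM. This is illustrative only: it does not use the actual pyuvm or cocotb APIs, and a real driver would talk to a simulated DUT rather than append to a list.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class BusItem:
    """A 'sequence item': one stimulus transaction."""
    addr: int
    data: int

class Sequence:
    """Generates items and pushes them to the driver, UVM-style."""
    def __init__(self, sequencer: Queue):
        self.sequencer = sequencer

    def body(self):
        for i in range(3):
            self.sequencer.put(BusItem(addr=0x1000 + 4 * i, data=i * i))

class Driver:
    """Pops items off the sequencer and 'drives' them (here: records them)."""
    def __init__(self, sequencer: Queue):
        self.sequencer = sequencer
        self.driven = []

    def run(self):
        while not self.sequencer.empty():
            self.driven.append(self.sequencer.get())

seqr = Queue()               # stands in for the sequencer/driver connection
Sequence(seqr).body()
drv = Driver(seqr)
drv.run()
print([(hex(it.addr), it.data) for it in drv.driven])
# [('0x1000', 0), ('0x1004', 1), ('0x1008', 4)]
```

The structure is what carries over from SystemVerilog UVM; in Python the boilerplate shrinks, which is a large part of the appeal for new hires.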

Tom is an entertaining guy to talk to. He tells me they are going to be pretty busy at (virtual) DVCon this year. It sounds like they have already planned tutorials and updates on UVM. I haven’t seen the agenda yet but I’m sure we will soon. Meantime you can learn about Siemens EDA views on and support for UVM HERE.

Also Read:

The Five Pillars for Profitable Next Generation Electronic Systems

Probing UPF Dynamic Objects

Calibre DFM Adds Bidirectional DEF Integration


Expert Advice for the New Intel CEO

by Daniel Nenni on 02-15-2021 at 10:00 am


Intel is a semiconductor legend. Founded on July 18, 1968, the name Intel is short for Integrated Electronics. After leading Silicon Valley, the United States, and the world into the era of semiconductors through technical excellence, Intel has hit some challenging times. There has been quite a bit of CEO drama, which we will look at, but the root cause is the delay of new process technologies. After dominating semiconductor manufacturing for most of its corporate life, Intel has fallen behind Samsung and TSMC. I grew up with Intel here in Silicon Valley and it pains me to see this. The question is: can Intel regain the lead?

First the CEO drama:

After almost 40 years of technical leadership:
Robert Noyce (Ph.D. in physics, Massachusetts Institute of Technology)
Gordon Moore (Ph.D. in chemistry and physics, California Institute of Technology)
Andrew Grove (Ph.D. in chemical engineering, University of California, Berkeley)
Craig Barrett (Ph.D. in materials science, Stanford University)

Less technical CEOs followed:
Paul Otellini (MBA, University of California, Berkeley)
Brian M. Krzanich (BS in chemistry, San Jose State University)
Bob Swan (MBA, Binghamton University)

Now Intel has Pat Gelsinger (MSEE, Stanford University) with 30+ years at Intel working under Gordon Moore and Andy Grove, so some say, “Problem solved,” and I might agree.

The challenge I see now is that Intel is still a very top-heavy company (too many managers/MBAs) living in the past. Anyone who thinks an IDM can compete head-to-head with the foundries and their respective ecosystems of worldwide customers, partners, and suppliers is dead wrong. Do you remember when Intel said, “It is the beginning of the end for the fabless model” in 2012? I certainly do.

Moving forward, it’s VERY important that Intel change the rules of engagement to better compete with this new fast-paced fabless model.

So, today Pat takes the helm at Intel and here is the advice that I offer him for his first 100 days:

  1. Streamline the decision-making processes inside Intel. Yes, this means layoffs and re-orgs, but it has to be done. Intel needs to be optimized for a fast-paced, ultra-competitive semiconductor marketplace.
  2. Bring transparency to Intel. No more surprises. When you surprise us with delays and technical challenges, we doubt Intel. And there are no secrets in the semiconductor ecosystem. We now know the truth behind the 10nm delays so be transparent and earn the respect and trust that Intel deserves.
  3. Take a leadership position in process node naming. Read the blog by Scott Jones on “Equivalent Nodes” (EN) and bring technology back to node naming.
  4. Engage TSMC in an exclusive relationship and outsource power and price sensitive chips. I would also give the FPGA business back to TSMC to better compete with AMD/Xilinx. Stick with your core manufacturing competency and focus the Intel fabs on high performance high margin CPUs. As they say, keep your friends close, keep your competition closer.
  5. Rid yourself of non-core competency business. Mobileye and the other distractions must go.
  6. Be a leader in the semiconductor ecosystem and not an outsider or follower. Roll up your corporate sleeves, get to work, and play well with others.

Bottom line: Make Intel synonymous with innovation and leadership again and get back on top of the semiconductor leader board where Intel belongs, absolutely!


Intel Node Names

by Scotten Jones on 02-15-2021 at 6:00 am


There is a lot of interest right now in how Intel compares to the leading foundries and what the future may hold.

Several years ago, I published some extremely popular articles converting processes from various companies to “Equivalent Nodes” (EN). Nodes were at one time based on actual physical features of processes but have since become uncoupled from physical features, turning into a “marketing number”.

My original articles were based on some work ASML did and allowed me to extend and publish. Basically, they plotted node versus Contacted Poly Pitch (CPP) multiplied by Minimum Metal Pitch (MMP) for all the leading logic producers and came up with a curve fit that could be used to assign node numbers to processes using a consistent methodology. The problem with the original method of calculating EN is that scaling began to also depend on Track Height (TH) and single versus double diffusion break. I eventually adopted transistor density in millions of transistors per square millimeter, using a weighted average of 60% two-input NAND cells and 40% scan flip-flop cells based on an Intel metric. The resulting number more completely captures logic scaling but differs from the node numbers people are used to.
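As a quick illustration of that weighted-average metric, with hypothetical per-cell density numbers (the inputs below are placeholders, not published figures):

```python
# Weighted transistor-density metric described above: 60% two-input NAND
# cells, 40% scan flip-flop cells, both expressed in millions of
# transistors per square millimeter (MTx/mm^2).
def weighted_density(nand2_mtx_mm2, sff_mtx_mm2):
    """Blend NAND2 and scan flip-flop densities 60/40, per the Intel metric."""
    return 0.6 * nand2_mtx_mm2 + 0.4 * sff_mtx_mm2

# Illustrative inputs only: a process whose NAND2 region packs 100 MTx/mm^2
# and whose scan flip-flop region packs 80 MTx/mm^2.
print(weighted_density(100.0, 80.0))  # 92.0 MTx/mm^2
```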

If you look at the leading edge logic landscape today, there are two foundries, Samsung and TSMC, and one IDM, Intel, still pursuing the state of the art in logic. The foundries follow a “foundry” node roadmap of 65nm, 40nm, 28nm, 20nm, 16nm/14nm, 10nm, 7nm, 5nm, 3nm. Intel, on the other hand, has stayed with a more classic node sequence of 65nm, 45nm, 32nm, 22nm, 14nm, 10nm, 7nm, 5nm. Furthermore, because the node-to-node scaling is generally larger for Intel than for the foundries, the node names no longer align.

While considering this situation the other day, it occurred to me that I could resurrect EN by plotting node versus transistor density. My approach was to use TSMC as the baseline, since they are the clear logic density leader: I took TSMC’s nodes from 28nm to a projected 1.5nm, plotted the nodes versus transistor density and fitted a curve, see Figure 1.

 

Figure 1. TSMC Nodes Versus Transistor Density.

The curve fit in Figure 1 has an excellent R-squared value of 0.9879. Using the equation for the curve fit, I can take Intel nodes and generate node numbers based on TSMC’s node scaling (EN).
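Lacking the actual data behind Figure 1, here is a sketch of the approach in Python with invented (node, density) points standing in for TSMC’s: fit a power law on a log-log scale, then use it to map any process’s density back to an equivalent node.

```python
import math

# Hypothetical (node in nm, density in MTx/mm^2) points, for illustration
# only; the real values behind Figure 1 are not reproduced here.
points = [(28, 3.8), (16, 28.9), (10, 52.5), (7, 91.2), (5, 171.3)]

# Least-squares fit of log(node) = log(a) + b*log(density), i.e. a power
# law node = a * density**b, mirroring the curve-fit idea in the article.
xs = [math.log(d) for _, d in points]
ys = [math.log(n) for n, _ in points]
n = len(points)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
log_a = (sum(ys) - b * sum(xs)) / n

def equivalent_node(density):
    """Map a transistor density (MTx/mm^2) onto the fitted node scale."""
    return math.exp(log_a) * density ** b

# Example: where a 100 MTx/mm^2 process lands on this illustrative scale.
print(round(equivalent_node(100.0), 1))
```

With a fit like this, any company’s measured density, Intel’s included, can be converted to an “equivalent node” on the baseline company’s naming scale.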

TSMC has announced timing and density improvements through the 3nm node. Assuming TSMC stays on a two-year cadence for new nodes and continues to produce shrinks per generation like the 5nm and 3nm nodes, we can project transistor density versus node out to 1.5nm.

Intel has provided guidance on 7nm timing and density improvements; we then assume Intel gets back on a two-year cadence with 2x shrinks (the same as 7nm) and project transistor density for Intel. I should note here that Intel took 3 years to get 14nm into production, 5 years to get 10nm into production and is now heading towards 3 to 4 years for 7nm. I would therefore view this as an aggressive roadmap for Intel.

Figure 2 provides our roadmap for node by year for TSMC and node and EN by year for Intel.

Figure 2. TSMC and Intel Node Roadmaps.

We are projecting that Intel’s 7nm node will have an EN value of 4.1nm (between the TSMC 5nm and 3nm nodes), the Intel 5nm node will have an EN value of 2.4nm (between the TSMC 3nm and 2nm nodes), and if Intel stays with a 2x per-generation shrink, the Intel 3nm node could have an EN value of 1.3nm, slightly better than TSMC’s 1.5nm. This of course presupposes Intel can execute 2x shrinks at a much faster pace than in the past.

This roadmap for Intel, while aggressive, still leaves them playing catch-up versus TSMC until at least mid-decade.

This roadmap is purely density based and Intel products generally require higher performance than most of TSMC’s customers. As best as we can benchmark Intel versus TSMC processes for performance, we believe Intel 10SF is competitive with TSMC 7nm. I would expect Intel 7nm to be competitive with TSMC 3nm and Intel 5nm to be competitive with TSMC 2nm.

If Intel is reading this, I would suggest they could do everyone a favor and rename 7nm to 4nm and 5nm to 2.5nm, so the names are more consistent with how the processes actually compare to the other logic leaders.

In conclusion, this analysis provides a way to convert Intel nodes into equivalent TSMC nodes and provides roadmaps for both companies into the late 2020s. Even with aggressive execution, Intel will at best be competitive with TSMC, and will likely trail them until mid-decade.

Also Read:

ISS 2021 – Scotten W. Jones – Logic Leadership in the PPAC era

IEDM 2020 – Imec Plenary talk

No Intel and Samsung are not passing TSMC


How SerDes Became Key IP for Semiconductor Systems

by Eric Esteve on 02-14-2021 at 10:00 am


We have seen the interface IP category post an incredibly high growth rate over the last two decades, and we expect it to generate an ongoing high source of IP revenues for at least another decade. But if we dig into the various successful protocols like PCI Express, Ethernet or USB, we can detect a common function in the Physical (PHY) part: the Serializer/Deserializer (SerDes).

In 1998, advanced interconnects used in telecom applications were based on 622 MHz LVDS I/O. Telecom chip makers were building huge chips integrating 256 LVDS I/Os running at 622 MHz to support networking fabrics. Today, advanced PAM4 SerDes run at 112 Gbps over a single connection to support 100G Ethernet. In twenty years, SerDes technology efficiency jumped by a factor of 180. Compare that with CPU technology: in 1998 Intel released the Pentium II Dixon processor, running at 300 MHz; in 2018, an Intel Core i3 ran at 4 GHz. CPU frequencies grew by a factor of roughly 15 over a span of twenty years, while SerDes speeds exploded by a factor of 180.
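The speedup factors fall straight out of the quoted line rates and clock frequencies:

```python
# Back-of-the-envelope check on the speedup factors quoted above.
serdes_1998_bps = 622e6    # 622 MHz LVDS I/O lane, 1998
serdes_today_bps = 112e9   # 112 Gbps PAM4 SerDes lane, today

cpu_1998_hz = 300e6        # Pentium II "Dixon", 1998
cpu_2018_hz = 4e9          # Core i3 class, 2018

serdes_gain = serdes_today_bps / serdes_1998_bps
cpu_gain = cpu_2018_hz / cpu_1998_hz

print(round(serdes_gain))  # 180
print(round(cpu_gain))     # 13  (rounded up to ~15x in the text)
```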

SerDes are now used in many more applications than just telecom to interface chips and systems. At the end of the 2000s, smartphones integrated USB3, SATA and HDMI interfaces, while telecom and PC/server chips integrated both PCIe and Ethernet. These trends turned interface IP into a sizeable category, growing beyond $200 million at the time. It was small compared to the CPU category, which was four or five times larger. But since 2010, the interface category has grown at least 15% year over year, faster than every other semiconductor IP category, such as CPU, GPU, DSP, library, etc. The reason is directly linked to the number of connected devices growing every year, each exchanging more data (more movies, pictures, etc.). Connectivity is the first link in the chain of communication: to the internet modem or base station, through the Ethernet switch, and into the datacenter network.

Figure 1: Long Term Ethernet Switch Forecast (source: Dell’Oro)

During the 2010s the worldwide community became almost completely connected. Ethernet became the backbone of this connectivity as both connectivity rates and the number of datacenters quickly increased over the decade. If we use SerDes rates as an indicator: 10 Gbps in 2010, 28 Gbps in 2013, 56 Gbps in 2016 (supporting 10G, 25G and 50G Ethernet respectively) and 112 Gbps in 2019.

Then, in 2017, exploding high-speed connectivity needs for emerging data-intensive compute applications such as machine learning and neural networks started to appear, adding to the growing need for high-bandwidth connectivity. At the same time, analog mixed-signal architectures, the norm for SerDes design since its inception, became extremely difficult to manage and much more sensitive to process, voltage and temperature variations, due to the evolution of CMOS technology toward advanced FinFET. In modern nanometer FinFET technologies, transistor dimensions are so small that device behavior is governed by a handful of electrons. Thus, building precise analog circuits that can sustain stressful environmental variations is extremely difficult.

But the positive point with an advanced technology like 7nm is that you can integrate an incredible number of transistors per sq. mm (a density of 100 million transistors per sq. mm), so it's now possible to develop new digital-based architectures leveraging Digital Signal Processing (DSP) to do the vast majority of the physical layer work. A DSP-based architecture enables the use of higher-order Pulse Amplitude Modulation (PAM) schemes compared to the Non-Return to Zero (NRZ, or PAM-2) signaling used by previous analog mixed-signal approaches. PAM-4 doubles the data throughput of a channel without having to increase the bandwidth of the channel itself. As an example, a channel with 28 GHz of bandwidth can support a maximum data throughput of 56 Gbps using NRZ signaling. With the PAM-4 DSP technique, this same 28 GHz channel can now support a data rate of 112 Gbps! When you consider that analog SerDes architectures are limited to a maximum of 56 Gbps for physical reasons (and maybe less…), DSP SerDes are the approach to scale rates to 200 Gbps and beyond, with the use of more sophisticated modulation schemes (e.g. PAM-6 or PAM-8).
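The NRZ vs. PAM-4 arithmetic above can be sketched with a simplified Nyquist-style model (a rough illustration, not a channel simulation; the function and its names are mine, not from the article):

```python
import math

def max_data_rate_gbps(bandwidth_ghz: float, pam_levels: int) -> float:
    """Rough Nyquist-style estimate: symbol rate ~ 2 x channel bandwidth,
    and bits per symbol = log2(number of amplitude levels)."""
    symbol_rate_gbaud = 2 * bandwidth_ghz
    bits_per_symbol = math.log2(pam_levels)
    return symbol_rate_gbaud * bits_per_symbol

print(max_data_rate_gbps(28, 2))  # NRZ (PAM-2) on a 28 GHz channel -> 56.0 Gbps
print(max_data_rate_gbps(28, 4))  # PAM-4 on the same channel -> 112.0 Gbps
```

The model shows why PAM-4 doubles throughput: twice the bits per symbol at the same symbol rate, at the cost of tighter amplitude margins that the DSP must recover.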

Using DSP-based SerDes is not only required for building robust interfaces in FinFET technologies; it is also the only way to push data rates above 56 Gbps, e.g. 112 Gbps with PAM-4 and 200 Gbps with PAM-8. This need for more bandwidth is driven by emerging data-intensive applications like AI (to interconnect CPUs and accelerators) and ADAS, and by the data-centric trend of the connected human community, expected to grow steadily over the next decade.

Figure 2: Top 5 Interface IP Forecast & CAGR (source: IPnest 2020)

In the “Interface IP Survey,” IPnest has ranked IP vendor revenues and market share by protocol since 2009. In the 2020 version of the report, we have shown that the interface IP category will grow at a 15% CAGR from 2020 to 2024 to reach $1.57 billion, as shown in Figure 2. This is a wide IP market including PCIe, Ethernet and SerDes as well as USB, MIPI, HDMI, SATA and memory controller IP. In 2019, Synopsys was a strong leader with 53% market share of the estimated $870 million market, followed by Cadence with 12%. Both EDA companies have defined a one-stop-shop business model, addressing the mainstream market. This strategy is successful for these large companies as it targets a wide part of various segments (smartphone, consumer, automotive or datacenter), but not the most demanding high-end portion of those segments.

Nevertheless, another strategy can be successful in the IP market: strongly focus on one segment (e.g. the high end) and provide the best experience to very demanding hyperscaler customers. If you can build an excellent engineering team able to develop top-quality products on the most advanced technologies, focusing on the high end of the market, the resulting business model can be rewarding.

We have seen that SerDes IP is the key to the interface IP market. Furthermore, if we concentrate on the PCIe and Ethernet protocols, Figure 3 illustrates the IP revenue forecast for 2020-2025, limited to high-end PCIe (Gen 5 and Gen 6) and high-end Ethernet (PHY based on 56G, 112G and 224G SerDes), including the D2D protocol for a reason that will be described shortly.

 

Figure 3: High-End Interface IP Forecast & CAGR (source: IPnest 2021)

This high-end interface IP forecast shows a 28% CAGR from 2020 to 2025 (compared with 15% for the total interface IP market) and a TAM of $806 million in 2025. One young company has demonstrated strong leadership in this high-end interface IP segment, thanks to its focus on high-end SerDes (112G since 2017 and soon 200G) targeting the most advanced technology nodes (7nm in 2017, then 5nm in 2019) offered by the two leading foundries, TSMC and Samsung. Alphawave, founded in 2017, is rumored to have booked $75 million in orders in 2020, thanks to positioning that targets the most advanced rates and applications in the high-end segment of PCIe and Ethernet. In this portion of the market, they enjoyed 28% market share in 2019 and 36% in 2020. If Alphawave can keep their lead in the high-end SerDes market, it's not unrealistic to foresee $300-400 million in IP revenues… by 2024-2025!
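For readers who want to reproduce the compounding behind CAGR figures like these, two generic helpers suffice (illustrative only; the sample numbers below are round examples, not IPnest data):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two endpoints."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value: float, rate: float, years: int) -> float:
    """Project a value forward at a constant annual growth rate."""
    return start_value * (1 + rate) ** years

# A market growing at 28% per year for five years grows ~3.4x:
print(round(project(100.0, 0.28, 5), 1))  # -> 343.6
# Recovering the rate from the two endpoints gives 28% back:
print(round(cagr(100.0, 343.6, 5), 3))    # -> 0.28
```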

Since 2019, a new sub-segment, the D2D interface, has emerged and is expected to grow at a 46% CAGR from 2020 to 2024. By definition, D2D protocols are used between two chips or die within a common silicon package. Briefly, we consider two cases for D2D: i) disaggregation of the main SoC, so that die area does not hurt yield or exceed the maximum reticle size, or ii) SoC interconnect with “service” chiplets (which can be I/O chips, FPGAs, accelerators…).

At this point (February 2021), there are several protocols in use, with the industry trying to build formalized standards for many of them. Current leading D2D standards include: i) the Advanced Interface Bus (AIB, AIB2), initially defined by Intel, which has offered royalty-free usage; ii) High Bandwidth Memory (HBM), where DRAM dies are stacked on each other on top of a silicon interposer and connected using TSVs; and iii) two interfaces defined by the Open Domain-Specific Architecture (ODSA) industry subgroup, Bunch of Wires (BoW) and OpenHBI. All of these D2D standards are based on a DDR-like protocol: a parallel group of single-ended data wires accompanied by a forwarded clock, currently operating in the 2 GHz to 4 GHz range. By using literally hundreds of parallel wires over very short distances, these interfaces compete with very-short-reach NRZ SerDes, usually defined around 40 Gbps, and offer a strong advantage: much lower latency and lower power consumption compared to SerDes.

There is now consensus in the industry that a maniacal focus on pursuing Moore's law is no longer valid for advanced technology nodes, e.g. 7nm and below. Chip integration is still happening, with more transistors being added per sq. mm at every new technology node. However, the cost per transistor grows with every new node. Chiplet technology is a key initiative to drive increased integration for the main SoC while using older mainstream nodes for service chiplets. This hybrid strategy decreases both the cost and the design risk associated with integrating the service IP directly into the main SoC. IPnest believes this trend will have two main effects on the interface IP business: strong growth in D2D IP revenues in the near term (2021-2025), and the creation of a heterogeneous chiplet market that augments the high-end SerDes IP market.

We have forecasted the growth of the D2D interface IP category for 2020-2025, passing from less than $10 million in 2020 to $171 million in 2025 (87% CAGR). This forecast is based on the assumption that the service chiplet market should explode in 2023, when most advanced SoCs will be designed in 3nm. This will make integration of high-end IP like SerDes far too risky, leading to externalizing this functionality into a chiplet designed in a more mature node like 7nm or 5nm. While interface IP vendors will be major actors in this revolution, the silicon foundries addressing the most advanced nodes, like TSMC and Samsung, and manufacturing the main SoC will also play a key role. We don't think they will design chiplets, but they could decide to support IP vendors and push them to design chiplets to be used with SoCs in 3nm, as they do today when supporting advanced IP vendors to market their high-end SerDes as hard IP in 7nm and 5nm. Intel's recent openness to third-party foundries is expected to bring third-party IP, as well as heterogeneous chiplet adoption, to the semiconductor heavyweight. In that case, no doubt hyperscalers like Microsoft, Amazon and Google will also adopt chiplet architectures… if they don't precede Intel in chiplet adoption.

By Eric Esteve (PhD.) Analyst, Owner IPnest

Also Read:

Interface IP Category to Overtake CPU IP by 2025?

Design IP Revenue Grew 5.2% in 2019, Good News in Declining Semi Market

#56thDAC SerDes, Analog and RISC-V sessions


Semiconductors up 6.5% in 2020, >10% in 2021?

Semiconductors up 6.5% in 2020, >10% in 2021?
by Bill Jewell on 02-14-2021 at 6:00 am


Semiconductor sales in 2020 were $439.0 billion, up 6.5% from $412.3 billion in 2019, according to World Semiconductor Trade Statistics (WSTS).

We at Semiconductor Intelligence have been tracking the accuracy of semiconductor market forecasts from various sources for several years. We look at publicly available projections made late in the prior year or early in the forecast year before the WSTS January data for the forecast year is released in early March. For 2020, we have a tie for most accurate forecast between IHS Markit with a 6% forecast made in January 2020 and ourselves at Semiconductor Intelligence with a 7% forecast made in February 2020. WSTS was also close with a 5.9% forecast in December 2019. Forecasts made during this time period ranged from 0% to 10%.

The forecasts made in late 2019 and early 2020 did not account for the impact of the COVID-19 global pandemic. As the severity of the pandemic became apparent by April 2020, forecasters dramatically lowered their expectations for the 2020 semiconductor market. Some projected a double-digit decline. By the middle of 2020, it became apparent the semiconductor industry would not be as impacted by the pandemic as other sectors of the economy. Most projections then shifted toward positive single-digit growth. Our Semiconductor Intelligence November 2020 forecast was 5.5%. Interestingly, our 7% forecast released in early 2020 was closer to the final number of 6.5% than our forecast in late 2020.

The 4Q 2020 semiconductor market was up 3.5% from 3Q 2020, according to WSTS. The major semiconductor companies generally had strong revenue gains in 4Q 2020. Qualcomm’s IC revenues were up 32% from 3Q 2020. AMD and NXP Semiconductors each had double-digit growth while Intel, Texas Instruments, and Infineon each had high single-digit growth. Micron Technology and STMicroelectronics had revenue declines. Three companies had revenue declines in 4Q 2020 versus 3Q 2020 measured in local currency (Samsung and SK Hynix in South Korean won; MediaTek in New Taiwan dollars) but grew revenue when converted to U.S. dollars.

The outlook for 1Q 2021 revenue is mixed. Micron Technology, MediaTek, Infineon and NXP Semiconductors expect revenue to grow in the low single-digits in 1Q 2021 versus 4Q 2020. Intel, Qualcomm, Texas Instruments, AMD, and STMicroelectronics expect single-digit revenue declines – largely due to normal seasonal trends. Automotive was cited as a growth driver by several companies. The memory companies (Samsung, SK Hynix and Micron Technology) all see an improving DRAM market. The weighted average guidance for the non-memory companies is a 5% decline in 1Q 2021 revenue.

What is the outlook for the semiconductor market for the year 2021? Three key market drivers are smartphones, PCs, and light vehicles (automobiles and light trucks). Smartphone shipments declined 11% in 2020, primarily due to pandemic related production delays. Gartner expects strong 11% growth in smartphone shipments in 2021 as production returns to normal and new 5G models drive consumer demand. PC unit shipments increased 11% in 2020 as more people depended on PCs for home-based working, learning and entertainment during the pandemic. IDC projects PCs will return to a more typical 1% growth in 2021. Light vehicle production dropped sharply by 17% in 2020 due to pandemic related production delays and caution by automakers. IHS Markit forecasts a strong bounce-back to 14% light vehicle production growth in 2021.

Smartphones are the single largest product driver for semiconductors, accounting for about $115 billion in semiconductor revenue in 2020, according to IDC. PCs are the second largest driver at about $70 billion. However, automotive is becoming an increasingly important market for semiconductors. IHS Markit estimates the automotive semiconductor market at about $40 billion. The average semiconductor content per vehicle is about $500, compared to less than $100 per smartphone and around $200 per PC. Most of the semiconductor value in smartphones and PCs is in relatively few components such as processors and memory. Automobiles contain a much wider range of semiconductor products including controllers, memory, mixed-signal ICs, power devices and sensors.

The automotive market is currently experiencing shortages of many semiconductor products. Reduced automobile production beginning in early 2020 led semiconductor companies to shift capacity to products for other applications. Fitch Ratings says the shortages could disrupt automotive production for several months but expects most of the lost production to be made up in the second half of 2021.

Recent forecasts for the 2021 semiconductor market generally call for strong growth. They range from a low of 4.1% from the Cowan LRA model (which is based on past trends) to a high of 18% from the eternally optimistic Future Horizons. A strong consensus has emerged in the 11% to 12% range with five of the eleven projections. We at Semiconductor Intelligence are reconfirming our November 2020 forecast of 14% growth in 2021.

Our 2021 forecast is based on the following assumptions:

  1. Recoveries in automobiles, smartphones, and other end markets more than offset slower growth in PCs.
  2. Semiconductor pricing remains stable or increases slightly as demand exceeds supply in several areas.
  3. The global economy opens up in the second half of 2021 as COVID-19 vaccinations become widespread.

About
Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductor Boom in 2021

China Mobile and Computer Update 2020

Electronics Production Healthy


Podcast EP7: Signal Integrity and Killer Robots

Podcast EP7: Signal Integrity and Killer Robots
by Daniel Nenni on 02-12-2021 at 10:00 am

Dan and Mike are joined by Matt Burns, technical marketing manager at Samtec. Matt discusses the signal integrity challenges faced by system designers. The materials and protocols used for channels on a board, between boards on a rack and even between racks are discussed. Matt also touches on the work Samtec is doing with BattleBots. There is sure to be a topic for everyone in this lively discussion.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.