2020 Retrospective. Innovation in Verification
by Bernard Murphy on 01-20-2021 at 6:00 am

Paul Cunningham (GM, Verification at Cadence), Jim Hogan and I launched our series on Innovation in Verification at the beginning of last year. We wanted to explore basic innovations and new directions researchers are taking for hardware and system verification. Even we were surprised to find how rich a seam we had tapped. We plan to continue the series, starting with a retrospective on what we found last year and how that might direct our discovery this year.

The 2020 Picks

These are the blogs in order, January to December. All did well in views but the first one and the last two really blew the roof off. We’d be curious to know which were your favorites.

Optimizing Random Test Constraints Using ML

Learning to Produce Direct Tests for Security Verification using Constrained Process Discovery

End-to-End Concolic Testing for Hardware/Software Co-Validation

Metamorphic Relations for Detection of Performance Anomalies

Is Mutation Testing Worth the Effort?

Predicting Bugs. ML and Static Team Up

Using AI to Locate a Fault

Quick Error Detection

Bug Trace Minimization

Covering Configurable Systems

ML Plus Formal for Analog

More on Bug Localization

Paul’s view

It has been such fun reading all these papers and discussing them with Jim and Bernard. I have been so impressed by the quality of work from the various authors and it is wonderful to see that innovation in verification is truly thriving. A very big thank you to all the universities and governments that are sponsoring and funding this research!

Probably the biggest theme that shone through from our cogitations last year is fault localization – helping engineers quickly and efficiently work out why tests fail and where the bugs are in their designs. There are a lot of ideas gaining traction in the software verification world that have not yet fully permeated hardware verification. Also, it’s clear that ML is a key enabler behind this wave of innovation in fault localization.

Another theme which stands out is how great results nearly always come from combining multiple techniques together – simulation with formal, mutation with static, ML with deductive.  As a computer scientist and lover of algorithms, this has made for wonderfully enjoyable reading throughout the year.

A very happy new year to all our readers.

Jim’s view

First, I have to agree that there are a lot of creative people out there, imagining new ways to improve verification. Some are immediately interesting. Factors that always attract me here are:

  • Innovations directed at big market transitions. In semiconductor we think of new process nodes, but it could equally be in OpenRAN for 5G, car electrification, improvements to public health infrastructure, big return AI applications – you get the idea.
  • I’m not looking for incremental advances. I want disruptive ideas, unique, enough IP to be patentable, to preserve an advantage for ~5 years until the product is established in its market.
  • It’s important to realize that most investors these days are pretty seasoned, even a little cynical. They know what big ideas look like. Anything else will be a really tough sell.

I see metamorphic testing in this class, bug prediction using ML, and using AI or other methods to localize faults. I see ML/AI as an extension of statistics, a way of improving and speeding up our guessing. Potential applications here have been barely touched. I’m always a fan of anything to do with analog, a market underserved by automation. I’d want to get my experts to do more due diligence on that paper, but it is immediately intriguing.

I’m not suggesting the other topics are unworthy. Among the other papers are several advances which could be very valuable incremental enhancements to existing verification flows. Perhaps these could be self-funded startups to prove a prototype then slip straight into acquisition?

My view

As the screener of candidate papers for our little group, I thought you might be interested in my methods for selection. I bias towards fundamental research, which tends to be posted in a great variety of national and international conferences and is best consolidated through platforms like the ACM and IEEE digital libraries. The ACM library provided more help initially because the IEEE didn’t yet support personal accounts for the library – now they do.

I still like to look in both libraries, because they provide a lot of complementary coverage. Also, we have a lot to learn from our software brothers and sisters. Beyond that, I’m looking for anything topical and relevant to verification. I like to look at fairly recent papers, though Paul (rightly) prods me now and again to look towards the start of the millennium. Sometimes I find hidden gems! We’re all eager to get feedback. If you think we should look harder at some problem or research area, please let us know!

Also Read

ML plus formal for analog. Innovation in Verification

Cadence is Making Floorplanning Easier by Changing the Rules

Verification IP for Systems? It’s Not What You Think.


The Spartan flow for custom silicon: when losing is NOT an option.
by Raul Perez on 01-19-2021 at 10:00 am

Every so often a custom silicon socket comes up at a system company that you simply cannot afford to lose if you’re a silicon supplier. These are the types of custom silicon sockets that last for generations of a product, in huge and predictable volumes, and for whatever reason they may become available. It’s not easy to predict when a strategic change by a system company will force this to happen, or when your silicon supplier competitor simply screws up. But when it does happen you have to throw every possible tactic and strategy at it to earn that spot.

There is no single “spartan flow” because there is no single type of silicon product. There is also no single type of system product. So what I want to do in this article is not to prescribe an exact formula for what a spartan flow is, but to communicate a mentality of how to create a spartan flow for your engagement.

The ancient Spartans were famous for having laws in their society configured to maximize military proficiency at all costs, focusing all social institutions on military training and physical development. The Spartans were willing to make the sacrifices needed to excel in battle, and break the will of their opponent. Just thinking about going against them was a burden on the minds of their enemies.

While there is no single spartan flow, there are areas of focus common to any spartan flow plan.

They are the following:

  • Communication.

You need to establish effective lines of communication ASAP. It is critical that from day zero the silicon supplier develops the best communication systems possible between its own teams, and that communication with the system company is equally strong. This enables the teams to focus on quick and effective action and reaction. The classic sales force engagement, with the customer communicating through the sales channel via an FAE, is usually nowhere near enough if you want to win like a Spartan. You need to go as far as possible: provide a ticket system hosted by you, or suggest the system company stand up their ticket system early if they are willing to do it. The latter is more common, since the system company usually wants to retain records of the tickets on their servers. You need to identify as soon as possible which stakeholders on the system company side are critical to designing in your silicon product, and you need to get them in touch with your stakeholders/engineers. This usually means identifying the EE, the SW engineer, the FW engineer, etc. who need to be in close collaboration with your engineers. Then make sure there is a ticket system and collaboration tools (like Google Docs and Google Sheets) set up for both teams to collaborate easily. You want their system engineers to get to know your engineers, and invest time with them working on issues and ideas.

  • Pre-silicon engagement.

The silicon supplier needs to find ways to help the system company engineers integrate their silicon product into the system as soon as possible. Right away the silicon supplier needs to engage the system company to come up with a plan for what types of deliverables the system company engineers would like to receive to get system development started, even while the silicon is being designed. You need to enable the software and firmware engineers to start writing and debugging code so that when your silicon shows up they have made good progress towards a usable build image. The silicon supplier needs to evaluate whether building an FPGA board with analog peripherals would be beneficial for this opportunity. Propose delivering development boards, manually built prototype parts, etc. The bottom line is this: get them working on your solution and investing time in it early.

  • Ensure you have a process to manage the custom silicon engagement.

CustomSilicon.com implements a process, and manages both the system and silicon companies, such that there is a really strong connection between all teams, the deliverables are clearly communicated, and the development phases are signed off by all stakeholders. The intent is that there is cross-functional and intra-company alignment at every stage of the project. It is vital that this is implemented to avoid issues that could mean the silicon supplier loses the opportunity due to some miscommunication about a spec, a bug, etc.

  • Verification.

In most integration projects, you need to specialize the verification function into dedicated AMS and DV engineers who start writing models, test benches and tests as soon as the spec is started. Companies that try to re-use designers as they come off the design of their portions of the chip will not be able to beat a spartan flow company that has those functions specialized and working in parallel with the designers.

  • Quality of first samples, and timing.

The quality of first silicon samples is literally a matter of life or death. If two or more silicon suppliers are competing for a socket, there will be a strong preference to focus system company engineers on whoever delivers samples first. The next selector is who has the fewest bugs, or the least severe ones. If you submit silicon samples later than the other supplier, unless the other supplier has some major bugs in their silicon, you may have already lost the race. So you need to think about how you can deliver samples early and with good quality. This usually means you need to think about how to bring up your ATE quickly. One option is a “functional only silicon” tape out: a tape out of the chip when it is functionally good but not yet meeting all specs, whose silicon you then use to bring up your ATE earlier. Of course this means that your design team needs to be larger, so that you can split the development at some point in time and keep working on the fully spec-compliant tape out while the “functional only silicon” is being taped out. It also means that your mask set cost may double, or at least you’ll usually need a full extra metal mask set. Other things to keep in mind are parallelizing your verification (see the previous point) to speed up the development process and increase your ability to catch bugs before silicon. You could develop ways to wafer probe without bumping to get data quicker and start debugging your silicon sooner. You could hold wafers at various stages of processing so you can quickly release new masks to fix bugs found at wafer probe and provide those samples quickly to replace previous versions. You can use OTP/MTP to develop clever ways to quickly spin new samples that fix issues. The tactics used will depend on the silicon product, but you get the idea: you need to shorten design and test time while simultaneously increasing the quality of samples.

  • Validation.

The quicker you can validate your silicon and find the bugs, the quicker you can start working on ECOs or workarounds to fix them, and the quicker you can tape out to converge on final silicon. Automating bench testing, using ATE, developing FPGA test platforms, stress testing units with asynchronous combinations of inputs, and validating samples from process corner wafers to check for any weakness over process are all critical to ensure you have the highest chance of success and don’t fall off the horse mid-race when some critical bug is found that is not present in your competitor’s silicon. Team specialization is key here, so setups keep running regardless of the availability of the designers.

There are many ways to optimize your processes and come up with a spartan flow. This is certainly not the type of development that would usually be economically desirable for standard opportunities. But when you get a chance at a huge-volume system, with a revenue stream likely to continue generation after generation, deploying this spartan flow mentality is certainly worth the money. However, it’s hard to switch teams from a normal mentality to a spartan mentality. So you probably need to create a team that will always work in spartan mode and find them opportunities to execute. It would be wise to rotate some or all of the people on the spartan team periodically so that others in your company can learn the spartan methodology without burning out.

About CustomSilicon.com by Digital Papaya Inc.

CustomSilicon.com is the leading consulting firm in the custom silicon strategy and project management space for AR/VR, automotive, mobile, server, crypto, sensors, security, medical, space and more.

Raul has 20 years of combined experience in the system electronics and silicon industries. He is currently responsible for a major system company’s custom silicon and sensor projects, approaching 30 total chips. Raul was the directly responsible silicon manager for 18 chips ramped to mass production at Apple for iPhone and iPad, and 23 total chips ramped to mass production counting projects where he was an expert reviewer. Raul was directly responsible for the development of mobile processor system PMICs for the iPad 2, New iPad, iPad mini, iPad 4 and iPhone 5s. Other silicon included backlight/display power for iPhone 5 and iPhone 5s, Lightning connector silicon and video buffers. He managed supplier teams across the globe.

Our network of experts provides our clients with an A+ silicon management team from day one.


Automotive SoCs Need Reset Domain Crossing Checks
by Tom Simon on 01-19-2021 at 6:00 am

When the number of clock domain crossings (CDCs) in SoCs proliferated it readily became apparent that traditional verification methods were not well suited to ensuring that they were properly handled in the design. This led to the creation of new methods and tools to check for correct interfaces between domains. Now, in automotive designs a similar issue is arising. Due to functional safety requirements, such as ISO 26262, automotive ICs need to be able to recover from faults and failures while continuing operation. This has led to the use of reset domains, which can be reset selectively as necessary while the surrounding blocks or the top level continue operation.

There are a number of cases where Reset Domain Crossings (RDCs) can be problematic and lead to functional issues. Siemens EDA, in conjunction with NXP, has a white paper titled “Systematic Methodology to Solve Reset Challenges in Automotive SOCs” that focuses on finding and fixing issues with RDCs. It’s worth mentioning that Siemens EDA is the new name for Mentor.

Reset domain crossing verification

The white paper starts out by describing the problem generally and discussing the prominent techniques used to ensure proper behavior. The first is reset sequencing, where the receiver’s flop is reset before the asynchronous reset in the transmitter’s domain. The second technique is to use isolation on the data or clock of the receiver’s flop prior to the occurrence of a reset in the transmitter’s domain. This can be done with a reset-gating signal that isolates either the data or clock.

Yet, the real challenge is to find where there are specific issues that can cause problems. The second part of the white paper covers several examples that cause reset domain crossing issues. Specifically, in automotive designs the authors cite some common real-world problems that can occur.

In many designs there are configuration registers that need to hold values during warm resets. The registers are loaded at power on reset (POR), but if a warm reset occurs simultaneously with POR, the values in the POR registers can become corrupted. Another case they mention is where a clock gating signal can glitch due to a reset on the controlling flop. If this happens to a flop in another reset domain that is active, it can cause failures. They also include several examples where there is combinational logic on the reset path. There are legitimate reasons for wanting this, but it complicates RDCs.

The authors outline a methodology that employs Siemens EDA’s Questa to formally, and exhaustively, verify RDCs. Designers identify the primary reset signals to help the tool better understand the reset structure of the design. Then the RDC tool generates a digital model of the reset structure, including locating local reset signals. Each reset is categorized according to its type. This model is elaborated with information such as reset polarity, reset source and output value of the sequential element upon reset.

This model allows structural checks that can reveal problems such as polarity usage mismatches or always-asserted registers driving reset signals. Then there is a user-aided step that involves grouping reset domains according to their asynchronous relationships and specifying information about reset sequencing.
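To make the structural-check idea concrete, here is a toy sketch of the simplest possible RDC check: flag any flop-to-flop path whose transmit and receive flops sit in asynchronously related reset domains with no isolation. This is purely illustrative and in no way Questa’s actual algorithm; the netlist representation, domain grouping, and all names are invented for the example.

```python
# Toy illustration of a structural reset-domain-crossing (RDC) check.
# A simplified sketch of the general idea, not Questa's implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flop:
    name: str
    reset_domain: str  # e.g. "por", "warm_rst"

@dataclass(frozen=True)
class Path:
    tx: Flop
    rx: Flop
    isolated: bool  # data/clock gated by a reset-ordering signal

# Reset domains grouped by asynchronous relationship (user-supplied in a
# real flow); domains in different groups can assert independently.
ASYNC_GROUP = {"por": 0, "warm_rst": 1}

def rdc_violations(paths):
    """Return paths whose tx/rx resets are asynchronous and unisolated."""
    return [p for p in paths
            if ASYNC_GROUP[p.tx.reset_domain] != ASYNC_GROUP[p.rx.reset_domain]
            and not p.isolated]

cfg = Flop("cfg_reg", "por")
ctrl = Flop("ctrl_reg", "warm_rst")
for p in rdc_violations([Path(ctrl, cfg, isolated=False)]):
    print(f"RDC violation: {p.tx.name} -> {p.rx.name}")
```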

The white paper closes with a summary of a case study. Here they explain how the tool classified each of the resets and identified inferred resets. Possible structural issues in the RTL coding are identified at this point. Of the 90k RDCs in the design, the tool narrowed the list of potential issues down to ~1.6% that needed a closer look. The outstanding issues were easily fixed using the methods presented in the white paper.

Questa RDC appears to offer a high-value, low-pain methodology for adding RDC checks to the verification process. Because it is built on the Questa platform, it shares a common language front-end and uses the same formal-based algorithms for its analysis. With the added reliability requirements of the automotive space, designers will probably be glad to have a tool like this to ensure that the now-necessary reset domains are implemented properly. The full white paper is available for reading here.

Also Read:

Siemens EDA is Applying Machine Learning to Back-End Wafer Processing Simulation

CDC, Low Power Verification. Mentor and Cypress Perspective

Multicore System-on-Chip (SoC) – Now What?


Siemens EDA is Applying Machine Learning to Back-End Wafer Processing Simulation
by Mike Gianfagna on 01-18-2021 at 6:00 am

There’s a lot to unpack in the title of this post. First, Siemens EDA is the new name for Mentor, a Siemens Business. The organization continues to operate as part of Siemens Digital Industries Software. It has released a white paper that describes research done with the American University of Armenia. The work examines how to use machine learning (ML) modeling techniques to predict wafer surface topography after a back-end-of-line metal deposition step. It’s critical to get these predictions right so chemical mechanical polishing (CMP) can do its job effectively. Read on to see how Siemens EDA is applying machine learning to back-end wafer processing simulation.

Smoothing Out the Bumps

This is a story about smoothing out the bumps in wafer processing. Many of the process steps in chip manufacturing need a smooth surface on the wafer to ensure patterns are printed correctly during photolithography. These patterns form the devices for the chip, and they’re very sensitive to distortion that can be caused by a non-planar (bumpy) surface. I’ll get into where these bumps come from in a moment. The surface is smoothed by the previously mentioned CMP step, which is quite complex.

During CMP, a polishing pad is pressed against a rotating wafer and a mixture containing abrasive particles and chemicals is injected between the pad and the wafer. Mechanical and chemical interactions occur at the wafer pad contact area, removing material from the wafer surface to smooth out the bumps. If you’re thinking this sounds like polishing your car, it’s a whole lot more complex than that. The pressure applied by the pad and the makeup of the chemicals injected all influence the quality of the result. Material is being manipulated at a microscopic level across the wafer surface to achieve maximum planarity (i.e., smoothness).

Back to the bumps. Where do they come from? This part of the story is about metallization, that is, creating the interconnect for the chip. Copper is deposited on the wafer surface using an electrochemical deposition process (ECD). This is another highly complex process that uses a variety of chemicals to minimize bumps, based on the underlying surface that the metallization is being deposited on. While these approaches can help, bumps happen, and the CMP step is required. The figure below illustrates a typical profile after ECD and before CMP.

Schematic plot of post ECD surface profile

For all this to work, an accurate prediction (through simulation) of the post-ECD surface is needed to drive the simulation that is used to set up the CMP process. During the first part, i.e., the simulation of the post-ECD surface, machine learning finds useful application.

How Machine Learning Helps

To model and simulate the ECD process, the design is first divided into fixed-size tiles. For each tile, average values of geometrical characteristics like width, space, pattern density, and perimeter are extracted. A series of complex mathematical models is then applied to each tile. After the analysis completes, the surface profile data for each tile are used as input for the CMP model. This is a long and complex process, and this is where neural networks from the field of machine learning find application.

A fully connected neural network-based full-chip deposition model for predicting the post-deposition surface profile is described in the white paper. Multiple ML algorithms are used to address the complicated surface variation that is typically seen. A key part of ML is applying large data sets of known results to “train” the neural networks. The white paper describes a series of experiments using Calibre® CMP ModelBuilder and CMPAnalyzer tools. First, physics-based models using data collected at the factories are created. These physics-based models are then used to generate the training, validation, and test data for ML model building.
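As a heavily simplified illustration of this flow, the sketch below trains a small fully connected network to map per-tile features (width, space, pattern density, perimeter, as listed above) to a surface-height value. The network size, the random stand-in for the physics-model training data, and all constants are assumptions for illustration only, not the models from the white paper.

```python
# Minimal sketch: per-tile geometric features in, post-deposition surface
# height out. All data here is synthetic; a real flow would use output
# from the physics-based ECD models as training labels.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_tiles = 10_000

# Per-tile features: average line width, space, pattern density, perimeter.
X = rng.uniform(size=(n_tiles, 4))

# Stand-in for physics-model labels (surface height per tile).
y = 0.5 * X[:, 2] + 0.2 * X[:, 0] * X[:, 3] + 0.05 * rng.normal(size=n_tiles)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on held-out tiles: {model.score(X_te, y_te):.3f}")
```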

The white paper then describes the application of various methods to find the best combination of run time and accuracy. The best methods exhibited a much shorter training time (a couple of hours) compared to other methods (several hours or several days). The ultimate results showed improved accuracy with lower run times compared to more traditional approaches. The post-ECD surface is also influenced by shapes that are not close by and ML approaches helped to model these longer-range effects as well.

ML is clearly finding its way into many applications. This is another example. You can access the full white paper, entitled MODELING ECD WITH MACHINE LEARNING FOR CMP SIMULATION here. Check it out to see how Siemens EDA is applying machine learning to back-end wafer processing simulation.

Also Read:

CDC, Low Power Verification. Mentor and Cypress Perspective

Multicore System-on-Chip (SoC) – Now What?

Smoother MATLAB to HLS Flow


Uber v Alto Ride Hail Streetfight
by Roger C. Lanctot on 01-17-2021 at 10:00 am

Uber, Lyft, Postmates, Instacart and Doordash were successful in their nearly $200M effort to pass California’s Proposition 22 in November – to allow gig operators to avoid treating their drivers as full-time employees with all of the associated employee benefits and legal protections. In the midst of a devastating pandemic that has placed a premium on safety and driven away both passengers and driver/contractors, a new threat to these operators is arriving in the shape of Alto to contest the juicy L.A. market opportunity.

Uber and Lyft have taken on ride hailing competition before, but Alto is an entirely new kind of rival offering a reasonably priced, premium level of service enabled by an employee-based platform using company-owned and outfitted vehicles with partitions. All of Alto’s vehicles are five-star crash rated SUVs piloted by a team of vetted and trained uniformed drivers.

While Uber and Lyft have hemorrhaged both drivers and passengers – to the tune of more than a third of their revenue – during the COVID-19 pandemic, Alto is pursuing its slightly delayed plans to expand to Los Angeles, California, and Austin and Houston, Texas. The key for Alto is its intense commitment to quality control and safety – something that makes a difference during a pandemic that has devastated the ad hoc transportation sector including everything from ride hailing to rental cars.

Uber and Lyft have both groped for a solution to the revenue crisis by leveraging micromobility and delivery services – but both have either resisted a shift toward deploying partitions in their vehicles or have made only half-hearted efforts to do so. Lyft, at least, has made partitions available – but has not required them.  Uber has done nothing.

To its credit, Alto recognized from the beginning that the fundamental value proposition of ride hailing was delivering a luxury service. As such, during a pandemic, it was no longer sufficient to offer convenience and an inexpensive ride. True luxury during COVID time requires a powerful statement regarding driver and passenger safety.

Uber and Lyft both require masks for drivers and passengers. But we all know, by now, that enforcing mask wearing on passengers is a dicey and difficult proposition for drivers. And masks alone, in the confined space of an automobile cabin, are insufficient protection.

Alto has removed this concern by installing partitions in all of its cars. This is also easier to do for Alto because it acquires and owns all its cars and can therefore deliver a predictable, reliable, guaranteed quality of service. With Uber and Lyft both the driver and the passenger get no such guarantees – particularly as regards safety. (We all know that ride hailing with Uber and Lyft, to quote Forrest Gump, is like a box of chocolates – “you never know what you’re going to get.”)

Alto says each driver is a “W2-employee and undergoes an extensive background check in addition to a thorough safety and defensive driving training program. Drivers are empowered with the tools needed for a successful ride every time, with technical support embedded into Alto’s dashboard including destination confirmation, navigation preferences and more. Riders can feel confident knowing that any time they request an Alto, it will be the same experience down to the vehicle model, amenities, and even the scent.”

Alto says its drivers are available for “corporate or personal courier service, customer or employee courtesy rides, and food pickup and delivery. Alto’s uniformed employee drivers are available to shop, purchase, and deliver a variety of needs from anywhere.”

Alto’s L.A. market entry press release states:  “In addition to the daily cleaning and maintenance procedures, each vehicle’s high-touch surfaces such as the interior and exterior door handles and headrests are disinfected between every trip. To ensure maximum safety for its drivers and riders in the midst of COVID-19, Alto has double-downed on its safety efforts by equipping each vehicle with custom plexiglass barriers between the driver and passenger compartments while also installing HEPA cabin air filters that remove 99.9% of airborne particles.”

Alto is not alone in adopting in-vehicle partitions. Bolt, DiDi Chuxing, Yandex, and Cabify have either fully or partially equipped their fleets with partitions.

With operations in Spain, Portugal, and Latin America, Cabify has taken some unique pandemic-related measures, including installing partitions in 7,000 of its vehicles in Colombia and administering COVID-19 tests to some drivers. Cabify has reported increases in business during the pandemic, which may in part be due to its safety measures.

Cabify says it has been able to support safe mobility with more than 38,000 taxis in Colombia that have been operating since the beginning of the quarantine, complying with all government safety protocols.

The company noted in a report that most of the new users of its service are people who previously used mass public transport (40%), their private car (15%) or took taxi services on the street (15%). A company spokesman said: “During the pandemic, 60% of new users said that the reason they used the app was due to the good availability of vehicles and the high safety standards. The three main reasons why they needed to mobilize were: work 50%, health 10% and family visits 9%.”

Partitions are seeing wider adoption throughout the transportation sector, including new partitions in public buses throughout the world. It’s worth noting that partitions are likely to become a permanent aspect of public, shared, and semi-private transportation services.

While most ride hail operators have added partitions without waiting for regulations, it is possible that regulatory pressure may grow for requiring partitions for taxis and other ad hoc transportation service providers. The next step is likely to include regular driver testing – though funding and reporting will have to be addressed.

It will be interesting to see how Uber responds to competitors adopting superior pandemic-related safety and sanitation measures like Alto in L.A. and Cabify in Latin America.  Alto is a particularly serious threat, though, in offering both a safer operating proposition for drivers and passengers – while also offering full employment and benefits. There is no doubt that hundreds of former Uber and Lyft drivers in L.A. will be lining up to join the Alto platform.

Alto is currently recruiting driver employees in L.A. and making the necessary arrangements to serve area airports.  Alto’s value proposition for both drivers and passengers is tailor-made to capitalize on the negative industry impacts from COVID-19.  L.A. is about to see a ride hailing street fight capable of restoring broader consumer confidence in the ride hailing sector. If the terms of engagement are safety, Alto will win hands down – and partitions up.


CES 2021 Goes All Digital
by Bill Jewell on 01-17-2021 at 8:00 am

CES, the massive consumer technology show put on by the Consumer Technology Association (CTA), was held this week. Due to the global COVID-19 pandemic, CES 2021 was all digital. Last year, CES 2020 had over 170,000 attendees from over 160 countries and 4,400 exhibiting companies.

CES 2020 was held January 7-10, 2020 in Las Vegas, Nevada. The first cases of COVID-19 were found in Wuhan, China in December 2019. The first known case in the U.S. was on January 15, 2020. According to the World Health Organization (WHO), COVID-19 cases were found in 19 countries by the end of January 2020, 54 countries by the end of February, and almost every country in the world by the end of March. If the timing of COVID-19 had been a few weeks earlier, CES 2020 would have been the ultimate super-spreader event accelerating the spread of COVID-19 throughout the world.

CES 2021 had 1,960 on-line exhibitors, less than half the live exhibitors at CES 2020. Virtual attendance was 69,523, about 40% of the 2020 in-person attendance. Some Chinese companies with a major presence in past CES shows were absent in 2021 – including Huawei and Haier. Huawei is currently banned from working with U.S. companies, so their absence is not surprising. Haier, self-described as China’s largest consumer electronics and home appliance producer, has had one of the largest booths at previous CES shows, so its absence is notable.

The CTA projected overall U.S. retail sales revenue for the technology industry in 2021 will reach $461 billion, up 4.3% from 2020. This includes electronics hardware, software, and services. In terms of hardware, smartphones remain the largest category, $73 billion in 2021, up 5%. 5G smartphone revenues should triple in 2021, accounting for over half of smartphone revenues and over 40% of units. Laptop PCs had strong growth in 2020 due to large numbers of people working and learning from home. In 2021, laptops are expected to be $38 billion, down 2%. Televisions are projected at $22 billion in 2021, down 1%.

One of the faster growing electronics hardware categories is smart home devices, growing 3% in 2021 to $15 billion. Connected health and fitness products should grow 6% to $9 billion. Within this category, connected health monitoring devices revenues should grow 34% in 2021 as more people check for COVID-19 symptoms and manage chronic conditions from home.

The two fastest growing categories in 2021 are wireless earbuds and gaming consoles, each up 16%. Wireless earbud growth is aided by the increase in video and audio conferencing. Gaming consoles benefit from more people seeking entertainment at home and by the introduction of new gaming systems by Sony and Microsoft in late 2020.

The impact of people staying a home more in 2020 (and for at least several months in 2021) is reflected in the strong growth of software and streaming services in the U.S. This category grew 31% in 2020 and the CTA forecasts growth of 11% in 2021. Video gaming software and services is the largest segment at $47 billion in 2021, up 8% after 20% growth in 2020. Video streaming is set at $41 billion in 2021, up 15% after 60% growth in 2020.

A major theme of CES 2021 was how the COVID-19 pandemic impacted the technology industry in 2020 and how it will influence future trends. CTA’s session on “Tech Trends to Watch” highlighted emerging technology trends which have been accelerated during the pandemic. These include e-commerce, telemedicine, streaming video, remote learning, AI & machine learning, natural language processing, cloud computing and remote health monitoring devices. To limit human contact, robots are increasingly being used for tasks such as cleaning & disinfecting, delivery, stocking, food processing, and health monitoring.

The session “Robotics to the Rescue” highlighted how COVID-19 is driving contactless delivery. Wing, an Alphabet subsidiary, delivers packages by drones. Starship (despite its name) uses ground-based robots for deliveries within a 4-mile radius. Intel has a division focused on autonomous transportation. A desire for ride-sharing services (such as Uber) without potentially disease-carrying human drivers will boost demand for self-driving cars.

The highlight of CES is the introduction of new products. The digital CES 2021 lacked the excitement of previous in-person CES shows, but many interesting new products were introduced.

TVs continue to get larger and larger – exceeding the available wall space in many homes. Samsung demonstrated its MicroLED 110 inch television. The TV can display up to four separate screens at once. The TV will be available in the U.S. in March 2021. Pricing was not disclosed, but the price at its launch in South Korea was over US$150,000.

Samsung also showed its Bot Handy robot, which has an arm attached to its movable body. A video demonstrated the Bot Handy doing tasks such as loading a dishwasher, picking up clothes, and pouring wine. If successful, the Bot Handy could be the first robot to perform household tasks. It is in development, and Samsung did not announce when it could become available.

Panasonic revealed an augmented reality heads-up display (AR HUD) for automobiles. The display will project important information such as speed and turn directions on the windshield.

Sony demonstrated its new Airpeak – a drone with a Sony Alpha camera for professional photography and video production.

Lenovo introduced its ThinkPad X1 Fold, which it says is the world’s first foldable PC. It has a 13.3 inch OLED display but folds to half that size. The display can show one screen or two. It has an on-screen keyboard and an optional external keyboard. ThinkPad X1 Fold pricing starts at $2,500.

Numerous connected health products were introduced at CES 2021. Biotricity demonstrated its Bioheart wearable personal heart monitor for cardiac patients. The Bioheart continuously collects ECG data and monitors other critical functions. Neuvana offers its Xen headphones which send an electrical signal to the vagus nerve in the ear. The company claims this tones the nerve to improve sleep, relaxation, focus and memory.

Toto showed a prototype of its Wellness Toilet which uses sensors to analyze human waste and skin. The Wellness Toilet will provide health information and diet recommendations via a smartphone app. Toto said the toilet will be on the market in the next several years.

CTA made a great effort to put on its CES 2021 digital show. CES 2021 offered informative presentations and intriguing new products. Still, nothing can match the excitement of an in-person CES show. Let us hope CES 2022 (January 5-8, 2022) will match previous live shows. The Las Vegas Convention Center has nearly completed a $980 million expansion which adds 1.4 million square feet, with 600,000 square feet of exhibit space and 150,000 square feet of meeting rooms. The expansion includes an underground public transportation system using autonomous electric vehicles. Completion was initially targeted for CES 2021, but the facility will be available for CES 2022.


CES 2021 and all things Cycling Technology
by Daniel Payne on 01-17-2021 at 6:00 am

It’s January, so it’s time to give you another summary of what I’ve found at CES 2021 about new cycling products that have electronic content. During the pandemic in 2020 we saw a surge in sales of bicycles, e-bikes, spin bikes and trainers as people wanted a simple way of getting around town running errands, or of keeping fit. I’ve continued my schedule of four rides per week, reaching 13,384 miles on a road bike, so follow me on Strava.com to keep fit and stay in touch. On rainy days I cycle indoors and use a Tacx Neo 2T smart trainer running the Zwift app and talk with my buddies using the Discord app.

Tacx Neo 2T

The Tacx smart trainer uses rare-earth neodymium magnets and electricity to create variable resistance, while most other trainers use a belt connected to a weighted flywheel for resistance. The Neo is calibrated one time at the factory for power readings, while other trainers require calibration on a regular basis. With the Neo you remove the rear wheel of your own bike, then place the bike onto the trainer, a quick process taking under a minute of time.

Smart Spin Bikes

Peloton is probably the best-known brand out there for what I call smart spin bikes, because they allow you to stay fit at home while being connected via the Internet with a live or recorded instructor to stay accountable while pursuing fitness goals. But if you’ve ever jumped onto a spin bike, you quickly realize that it feels nothing like a real bike, because it’s stationary and you don’t get to steer or balance or sway.

Nautilus has their own smart spin bike called the Bowflex VeloCore, and on the surface it looks a bit like the Peloton, but with one big difference: the bike actually sways side to side as you ride, just like in real life, so the feeling is more natural.

Bowflex VeloCore

Myx Fitness has a spin bike similar to Peloton called the Star Trac, along with heart rate monitoring. After paying $1,199 for the bike, you subscribe to online classes for $29/month.

Star Trac from Myx Fitness

e-bikes

This category remains the fastest growing segment again in cycling for 2020, so expect more choices in 2021 if the supply chain can actually deliver enough units during the pandemic shortages.

From Italy we have VAIMOO, an e-bike sharing system that is a CES 2021 Innovation Awards Honoree, and they offer e-bikes, charging stations and an app to control the service.

VAIMOO

Every e-bike needs to be charged, and most companies provide you with a wall charger unit that is quite large and clunky, except that this new company called Wise-Integration is using GaN technology for their ultra-compact e-bike charger.

Wise-Integration

Ever want to convert your existing bike into an e-bike? Well, there’s a company called CYC Motor Ltd. that has an electric motor assembly add-on that could work with your frame, and it’s called the X1 Pro Gen 2, priced at $1,093, but the complete conversion will be closer to $2,000.

CYC Motor, X1 Pro Gen 2

Bike Computers

I’ve used many models over the years: CatEye Stealth 50, Garmin 520, Garmin 820, and my latest is the Wahoo Elemnt Bolt. Even automotive supplier Bosch has entered this field with the Nyon computer, sporting a big color display, all refined for the e-bike experience.

Bosch eBike Systems Nyon

Mio has two GPS-based, color bike computers dubbed the Cyclo Discover and Cyclo Discover Plus, with a unique feature called NeverMiss to show you something worth stopping for. The maps are only ready for European customers to start out with, so stay tuned for maps in the rest of the world.

Mio Cyclo Discover

Bike Locks

How about a bike lock that you open with your fingerprint, so no more keys or remembering a sequence of numbers to open up your lock? Well, Hampton Products has the BenjiLock that reads your fingerprint to open it up. Cool.

BenjiLock

Smart Helmets

One of my cycling buddies rides with a helmet that has rear-facing flashing lights, and it really is more visible than a light in the typical seatpost location. LIVALL has gone beyond that by adding automatic fall detection, which triggers a text message to emergency contacts, and brake warning lights. Garmin has had crash detection and text messaging in their Edge series of bike computers for several years now, but the one complaint that I hear is that their system produces false triggers if you suddenly brake. These smart systems rely on MEMS sensors to detect g-force changes.

LIVALL Smart Helmets
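As a toy illustration of what such a MEMS-based detector does at its core, the sketch below thresholds the g-force magnitude of an accelerometer sample. The threshold and the algorithm itself are illustrative guesses, not LIVALL’s or Garmin’s implementation; hard braking producing a spike above a naive threshold is one plausible source of the false triggers mentioned above.

```python
# Toy crash/fall check on a 3-axis MEMS accelerometer sample.
# CRASH_G is an illustrative threshold, not a real product's value.
import math

CRASH_G = 4.0  # acceleration magnitude, in g

def crash_suspected(sample_g):
    """sample_g: (x, y, z) accelerometer reading in g units."""
    magnitude = math.sqrt(sum(a * a for a in sample_g))
    return magnitude > CRASH_G

print(crash_suspected((0.1, -3.9, 2.2)))  # True: spike above threshold
```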

When I cycle indoors on my smart trainer, I will often use the free Discord app to talk with my buddies. For outdoors there’s a company called Sena with a smart cycling helmet called the R1 that also supports group intercom.

Other Stuff

Let’s say you’re cycling, listening to music in one ear (for safety), and want to change the volume, skip a song, or answer a phone call. ArcX has a new device: a ring that fits on your index finger and is controlled by your thumb, so your phone stays in a pocket or jersey.

ArcX

Listening to music while cycling and still hearing the sounds of approaching cars is important for safety, so with the JBuds Frames you can attach this pair of wireless speakers to your existing cycling glasses.

JLab Jbuds Frames

Knowing your heart rate is important for fitness types and racers who like to know which zone they are exercising in, whether to train in certain zones as part of a program or to know how close to maximum they are during a competition. Scosche has a heart rate band that fits on your arm, while I use the more traditional chest strap style.

Scosche

My road bike has a computer mounted out front of the handlebars displaying lots of real-time data: speed, RPM, power, heart rate, distance, time of day, elevation. The thing is that I have to glance downwards to view it, so my eyes are momentarily off the road, a safety issue. Well, Julbo has new eyewear with a heads-up display, so no more glancing down at the bike computer.

EVAD-1 by Julbo

Cyclists share the road with motorists, so being seen on a bike is a big deal for avoiding collisions and staying alive. Thanks to Panasonic Automotive there’s a new 4K-resolution Augmented Reality (AR) Heads-Up Display (HUD) that identifies cyclists and places a bright yellow icon on top of them:

Panasonic AR HUD

EyeNet is an app for motorists, cyclists and others on the road that alerts them when a collision may happen; each person just needs a smartphone and a cellular connection.

https://youtu.be/AoAohl2_As8

In the opening paragraph I talked about virtual cycling indoors with a smart trainer, and from Taiwan comes a virtual cycling app called WhiizU that looks a lot like Zwift.com, the leading indoor platform. You connect your bike to a smart trainer, run the app, customize your avatar, select a route, and enjoy virtual scenery along the way.

WhiizU – virtual cycling

Tuya helps OEMs build electric scooters, e-bikes, etc. quicker by providing the Bluetooth, NB-IoT, GPRS and LTE Cat 1 infrastructure in one place.

Tuya

Tome Software, Ford, Trek Bicycle, Hammerhead, Specialized, SRAM, Shimano and Bosch are trying to form a bicycle-to-vehicle (B2V) group with technology using Bluetooth 5 to alert motorists of bicycles nearby. Read their Press Release.

Bicycle-to-Vehicle (B2V)

The Transformer from Stride combines biking, running, rowing and strength training in one indoor machine. I just use my Tacx Neo 2T for indoor cycling, but this new product looks like a serious home gym foundation.

Transformer from Stride

I ride using 10 battery-powered devices, so the WheelSwing-VOLT caught my attention, because it’s a way to dynamically generate electricity while you’re moving on the bike. The generator fits onto your front wheel, but doesn’t make direct contact with the rotating wheel, using magnets instead. As a weight-weenie I wouldn’t use it, but I can see commuters using it.

WheelSwing-VOLT

Previous CES Posts about Cycling


Podcast EP3: Tomorrow’s Semiconductors with Jim Hogan
by Daniel Nenni on 01-15-2021 at 10:00 am

Dan and Mike are joined by industry luminary Jim Hogan. In a rare interview, Jim talks about his life – how he got into semiconductors, EDA and venture investing. Jim’s time at Cadence as well as his work at ARM are explored. Jim also provides a concise and informative overview of how venture investing works. The podcast concludes with a discussion of the current and future state of the processor wars and what Jim does in his spare time.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Arun Iyengar of Untether AI
by Daniel Nenni on 01-15-2021 at 8:00 am

I had a chance to catch up with Arun Iyengar, CEO of Untether AI.  Untether AI recently unveiled its tsunAImi accelerator cards powered by the company’s runAI devices. Using at-memory computation, Untether AI breaks through the barriers of traditional von Neumann architectures, offering industry-leading compute density with power and price efficiency.

What brought you to Untether AI? (After almost 20 years in the FPGA business)

I spent a long time with FPGA companies and processor companies during a period when the industry viewed hardware as important but not critical. Artificial intelligence changed all that and made hardware a critical component in solving difficult machine learning requirements. As I was considering the impact of AI on existing chip companies, I realized that it would fundamentally alter the chip landscape. I wanted to fully realize the impact of such a change by being part of a pure-play AI company rather than a larger company that was going to go through the painful process of migrating existing silicon to have AI capability. So that meant being in a startup. However, it was important to me to look for a technology and architecture that would be differentiated and would scale readily, both for production and for targeting various end markets. Untether AI, with its at-memory compute architecture, fulfilled these criteria. Untether AI is well positioned to scale across technology nodes as well as to scale the size of the die to target various end markets.

Neural Net Inference is an exciting but competitive market, how will you differentiate? (Who do you really compete with?)

Available chips for neural net inference are mostly based on the von Neumann architecture. As a quick aside, von Neumann described a computer architecture in 1945 that is still the mainstream approach for silicon. It is very well suited to general purpose compute, but ill-suited to neural net inference. With expected exponential growth in power consumption for AI processing, this leads to an untenable situation. When Untether AI looked at the von Neumann architecture, we found that 90% of the power is wasted in data movement. We set about solving that with the company’s at-memory architecture, which reduces data movement by a factor of 6. The resulting product can run at 8 TOPS/W and offers over 2,000 TOPS per PCIe card. There are few companies that can match this compute density and performance.
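As a quick back-of-the-envelope check on those two quoted figures (my arithmetic, not a company spec): 2,000 TOPS at 8 TOPS/W implies roughly 250 W of compute power per card, before any host or board overheads.

```python
# Back-of-the-envelope arithmetic from the figures quoted above; it
# ignores host, memory, and board overheads, so treat it as rough.
tops_per_card = 2000   # quoted per-card throughput
tops_per_watt = 8      # quoted energy efficiency
print(f"Implied compute power: {tops_per_card / tops_per_watt:.0f} W per card")
```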

What can you tell me about your silicon? (Availability? Foundry partner? Process node? Benchmarks?)

We use standard CMOS technology with redundancy incorporated for high yields. We use TSMC’s 16nm process to produce our runAI200 chips. The product is sampling now and is sold in two form factors:

  1. tsunAImi accelerator PCIe card with 4 runAI200 devices
  2. standalone runAI200 devices.

As inference benchmark examples, the tsunAImi accelerator card is capable of computing 80,000 ResNet-50 images per second and 12,000 BERT-base queries per second, both of which are at least 3 times better than the closest competitor’s numbers. On a total cost of ownership basis (using benchmark/W/sq mm of die area), the 16nm runAI200 is an impressive 8X better than the GPU competitor’s 7nm part.

What type of software effort will be required? 

While our tsunAImi cards will be deployed in servers and the cloud, we consider our customer to be the data scientist. The data scientist is great at modeling and proficient in machine learning frameworks like TensorFlow and PyTorch. As such, we use these popular frameworks as our entry point. From that point our goal is to make the implementation of the neural network as pain-free as possible. Therefore our imAIgine software development kit requires no knowledge of how we translate the neural network into code running on our devices. The imAIgine compiler does the automated graph lowering and has sophisticated optimization and allocation algorithms. The imAIgine toolkit provides extensive feedback to the modeler, highlighting resource allocation and congestion and providing cycle-accurate simulation. The imAIgine runtime engine does the hardware abstraction, communication and health monitoring as it places the net on the chip(s). So the overall vision is a software development flow that allows the data scientist to stay at the ML framework level, while providing more advanced capabilities to power users if they choose to go deeper.

Software will be the metric by which any AI startup will succeed or fail. At Untether AI, we have more than half the company as software engineers, with a large number of them having advanced degrees.

Can you talk about your relationship with Intel? 

Intel Capital has been an investor in Untether AI from the early days. Along with Radical Ventures, they have been a huge supporter of the company, providing guidance and connections to help us access technology that would be hard for a startup to do on their own. Intel Capital has a good network across their portfolio companies and Untether AI taps into that network as we have specific questions to resolve. For example, as we were looking to bring up our runAI200 silicon, we wanted to get some specific questions answered and were able to talk to another AI company from the Intel Capital network.

Additional comments?

I am excited about how silicon can change and enable AI usage. We are truly back in a golden age if you are a silicon enthusiast.

Untether AI’s goal is to have sustainable AI that does not consume the world’s energy resources in order to have humanity get the benefits of AI. With this, we will have the best combination of the golden age of silicon with democratization of AI.

Please visit Untether AI for more information and to view their presentation at the Linley Fall Processor Conference https://www.untether.ai/technology

Also Read:

CEO Interview: Tony Pialis of Alphawave IP

CEO Interview: Dr. Chouki Aktouf of Defacto

CEO Interview: Andreas Kuehlmann of Tortuga Logic


ISS 2021 – Scotten W. Jones – Logic Leadership in the PPAC era
by Scotten Jones on 01-15-2021 at 6:00 am

I was asked to give a talk at the 2021 ISS conference and the following is a write up of the talk.

The title of the talk is “Logic Leadership in the PPAC era”.

The talk is broken up into three main sections:

  1. Background information explaining PPAC and Standard Cells.
  2. A node-by-node comparisons of companies running leading edge logic processes.
  3. PPAC trend charts by company and year.

Background Information

Historically, new processes have targeted Power, Performance and Area (PPA); for example, during TSMC’s 2020-Q1 conference call they stated that their 3nm process would provide 25-30% lower power at the same speed relative to 5nm, 10-15% better speed at the same power, and a 70% increase in density.

With rising costs and challenges to produce cost effective leading edge processes the need to target cost during process development has become apparent. For example, both Imec and Applied Materials have discussed PPAC in recent presentations.

Figure 1. Power, Performance, Area and Cost (PPAC).

Logic designs are created using standard cells: inverters, NAND gates, scanned flip-flops, etc.

The size of a standard cell is determined by the cell type and the design rules of the process the cell is run on. Process minimum dimensions can be used to calculate cell sizes. The height of a standard cell is determined by the minimum metal pitch multiplied by the number of tracks. The cell width is some number of contacted poly pitches, plus an extra contacted poly pitch at the edge of the cell for a double diffusion break cell.

In recent years, difficulties shrinking pitches have led to track reductions to scale down cell sizes. However, as track heights are reduced, fin depopulation results: in a 9-track cell each transistor can have 4 fins, in a 7.5-track cell only 3 fins fit per transistor, and in the 6-track cells that are the current state-of-the-art, only 2 fins fit per transistor. All other things being equal, a 6-track cell with 2 fins per transistor will have one-half the drive current of a 9-track cell with 4 fins per transistor. This has led to Design-Technology Co-Optimization (DTCO), where a new process is developed to support a 6-track cell with 2 fins per transistor and the fins are designed to provide higher drive current per fin, for example by making them taller.

When comparing process density, we use the smallest cell available on each process (fewest tracks) to calculate millions of transistors per millimeter squared. We assume a design with 60% NAND cells and 20% scanned flip-flops.

A lot of people try to compare processes based on transistor density for an actual design. The problem with this is that processes support multiple cell heights, for example 6-track and 9-track cells. A design that targets high performance would use a lot of 9-track cells, while a design that targets minimum size at lower performance would use a lot of 6-track cells; even on the same process, two designs targeting different performance levels would have different densities. We therefore use the minimum available cells to do a fair comparison.
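As an illustration of this metric, here is a minimal sketch assuming the weighted-density formula implied above (60% NAND, 40% scanned flip-flop); the cell areas and transistor counts are invented examples, not measured values for any process.

```python
# Weighted logic transistor density, per the 60% NAND / 40% scanned
# flip-flop assumption above. All inputs are illustrative placeholders.

def mtx_per_mm2(transistors: int, cell_area_um2: float) -> float:
    """Millions of transistors per mm^2 for one cell type.
    transistors/um^2 is numerically equal to MTx/mm^2."""
    return transistors / cell_area_um2

nand2 = mtx_per_mm2(transistors=4, cell_area_um2=0.040)   # 4-T NAND2
sff = mtx_per_mm2(transistors=22, cell_area_um2=0.300)    # ~22-T scan FF

density = 0.6 * nand2 + 0.4 * sff
print(f"weighted density: {density:.1f} MTx/mm^2")
```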

Figure 2. Standard Cell.

Another key density comparison for logic processes is SRAM cell size, since many designs incorporate significant amounts of SRAM cache.

I have written an article on design effects on process density that is available here.

Node-by-Node Comparison

The node-by-node comparison begins with 28nm foundry processes versus Intel’s 22nm process. This comparison represents a moment in time, as opposed to comparing like-named nodes, where the foundry 20nm nodes might be more appropriate.

In 2011 Intel introduced their 22nm process with the world’s first FinFET production; at the same time the foundries were producing 28nm planar devices. From a device-technology perspective, 28nm represented the foundries’ introduction of High-K Metal Gate (HKMG), a technology Intel had introduced in 2007, and while Intel was introducing FinFETs, the foundries would not introduce them for another three years. At this point in time Intel was the clear logic process technology leader.

Interestingly, the Intel 22nm process has the best SRAM cell size but for logic has lower transistor density than the foundry 28nm processes, although presumably better performance. Intel was conservative on some process dimensions presumably because this was their first FinFET generation.

Figure 3. Foundry 28nm and Intel 22nm Nodes.

Moving forward to 2014, Intel introduced their second-generation FinFET process with an aggressive shrink that put them in the lead on both logic density and SRAM cell size. In 2014 Samsung introduced their first-generation FinFET with their 14nm process, and in 2015 TSMC introduced their first-generation FinFET with their 16nm process.

Figure 4. 16nm/14nm Nodes.

A key point at this node is that Intel 14nm was originally due in 2013, and even when it was introduced it suffered from a slow yield ramp. This was the beginning of a chain of Intel delays and yield problems that persists today.

Another thing that stands out at this node is that Apple designed their A9 processor based on Samsung’s 14nm process but then also ported the design to TSMC’s 16nm process. Tom’s Hardware compared the PPA for the A9 on both processes and found power to be slightly better on the Samsung process, performance the same for both and die area to also be slightly smaller on the Samsung process. The Samsung power and area advantage may just be because the part was originally designed for Samsung and later ported to TSMC, but it gives us a unique opportunity to compare the two processes. We will use this data point later as a starting point for some of the trend analysis we will present.

The next step in time is the introduction of foundry 10nm nodes in 2016, when both Samsung and TSMC took the process density lead from Intel. This is the beginning of a key difference between Intel and the foundries: Intel takes bigger density jumps with each successive process generation, but the foundries introduce new generations faster and pass Intel for process leadership.

Figure 5. Foundry 10nm and Intel 14nm Nodes.

Stepping forward again, TSMC introduced their 7nm process in 2017, Samsung introduced their 7nm process in 2018, and Intel’s 10nm process finally entered production in 2019, although even today Intel is struggling with yield on 10nm. Intel’s 10nm process did move them into relative logic density parity with the foundry 7nm processes, but with larger SRAM cell sizes. It should also be noted that, as we will see in a moment, in 2019 the foundries began production of 5nm processes that once again moved them ahead.

At 7nm Samsung’s process has several EUV layers and, for their internal production, was the first production EUV process, although TSMC’s 7nm+ process, which added EUV on several layers, may have been the first generally available foundry process with EUV. Total EUV layers at 7nm were between 5 and 7.

 Figure 6. Foundry 7nm and Intel 10nm Nodes.

In late 2019 we saw the foundries begin risk starts of 5nm processes, and those processes reached high-volume production in 2020. At the Intel 10nm/foundry 7nm node the three companies had similar logic densities. Moving to 5nm, TSMC delivered an approximately 1.8x density improvement while Samsung delivered only 1.33x; this leaves TSMC with a substantial logic density advantage and the smallest SRAM cell size. 5nm also saw an increase in EUV layers, to 10-15, and TSMC introduced a pFET with a high-mobility silicon-germanium fin. While the foundries were once again delivering a new node, Intel was still working on ramping up 10nm yields.

Figure 7. Foundry 5nm and Intel 10nm Nodes.

Now we step forward into the future, with foundry 3nm processes starting risk starts in 2021 ahead of 2022 production, and Intel’s 7nm process entering production in 2022. Intel’s 7nm was originally due in 2021, so 2022 represents another delay, and there are rumors it will slip beyond 2022. There have also been reports of delays for Samsung and TSMC 3nm; our checks indicate Samsung may be delayed but TSMC is on track.

Intel 7nm will represent Intel’s first use of EUV and Samsung’s 3nm will see the industry’s first use of Gate-All-Around (GAA) in the form of stacked Horizontal-Nano-Sheets (HNS). TSMC is continuing to utilize FinFETs at 3nm.

For 7nm Intel has announced a 2x density increase over 10nm, Samsung has announced 3nm will be 1.35x denser than 5nm, and TSMC has announced 3nm will be 1.7x denser than 5nm. Based on these announced density improvements, TSMC will have the densest process by a wide margin, with Intel passing Samsung for second place. We expect 15 to 30 EUV layers at this node, with TSMC at the upper end due to their denser process.
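Compounding those announced factors from a common, normalized Intel 10nm/foundry 7nm-class baseline reproduces this ranking; a back-of-the-envelope sketch (the baseline normalization is my own simplification):

```python
# Compounding the announced density factors quoted above from a
# normalized Intel 10nm / foundry 7nm-class baseline (= 1.0).

baseline = 1.0
tsmc_3nm = baseline * 1.8 * 1.7       # 7nm -> 5nm (1.8x) -> 3nm (1.7x)
samsung_3nm = baseline * 1.33 * 1.35  # 7nm -> 5nm (1.33x) -> 3nm (1.35x)
intel_7nm = baseline * 2.0            # 10nm -> 7nm (2.0x)

for name, d in [("TSMC 3nm", tsmc_3nm), ("Intel 7nm", intel_7nm),
                ("Samsung 3nm", samsung_3nm)]:
    print(f"{name}: {d:.2f}x baseline density")
```

As computed, TSMC lands at roughly 3.1x the baseline, Intel at 2.0x, and Samsung at 1.8x, consistent with the ordering above.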

Figure 8. Foundry 3nm and Intel 7nm Nodes.

There has been a lot of speculation about whether Intel will outsource production of their microprocessors to the foundries, given that the foundries now have the process lead. At the Credit Suisse conference in December 2020, Intel CEO Robert Swan announced that Intel will continue to develop leading-edge processes, with Intel 5nm and 3nm processes still planned. I wouldn’t be surprised to see Intel gradually outsource more of their needs, but it doesn’t currently look like any radical change will take place any time soon. I should also point out that, given Intel’s volumes, it would take the foundries years to ramp up enough capacity to accommodate them.

Figure 9. Intel Status

PPAC Trends

Now we will compare PPAC by company and time.

One key take-away from our analysis is that although Intel tends to make bigger logic density improvements with each new node, the foundries are introducing new nodes faster and ultimately driving density faster. In fact, between 2014 and 2022 the foundries will have introduced five new nodes in the time it took Intel to introduce three, and this counts only major nodes; the foundries have introduced a lot of half-nodes as well. Intel also introduces “half-nodes” with its +, ++, and +++ variants, but those are performance enhancements, not shrinks.

Figure 10. Nodes Versus Times.

Comparing power and performance between companies and processes is nearly impossible. Ideally someone would run a consistent product, such as an Arm core with a set amount of SRAM cache, on each process and publish power and performance metrics, but this is far too expensive to be practical. In the chart in Figure 11 I have created the best estimated comparison I can produce.

I started the power comparisons at the 16nm/14nm node, where we have the A9 on both Samsung 14nm and TSMC 16nm. I have given Samsung a slight advantage, as previously discussed, even though this may be a design issue. I have then taken the power improvement for each subsequent node from the companies’ announced improvements. As can be seen, TSMC takes a significant lead at 10nm; Samsung does largely catch up at 3nm, presumably reflecting their switch to HNS, although TSMC is still competitive with their highly scaled FinFET. I am unable to place Intel on this chart with any confidence.

For the performance comparison I once again start with the A9 at the Samsung 14nm/TSMC 16nm node and use the companies’ announced performance improvements by node to project forward. TSMC develops a performance advantage over Samsung at 10nm and increases their lead at each successive node. To place Intel on this chart, I looked at the Intel microprocessors made on their 10nm SuperFin process and the AMD microprocessors made on TSMC’s 7nm process and concluded they have similar performance. I also used published Intel performance comparisons between their base 14nm process and their 10nm SuperFin process to back-project how Intel would compare at the 14nm/16nm node. TSMC and Intel are competitive at the Intel 10nm/foundry 7nm node, with Samsung likely having the lowest performance. I don’t have performance estimates for Intel 7nm, but my “best guess” would be that TSMC 3nm will be as good or better.
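The projection method itself is simple: anchor on the measured A9 data point and multiply through each company’s announced node-to-node improvement factors. A minimal sketch, with placeholder factors rather than the actual announcements:

```python
# Trend projection: start from a measured baseline (the A9 data point)
# and compound announced node-to-node improvement factors. The factors
# below are placeholders, not actual company announcements.

def project(baseline: float, node_factors: list[float]) -> list[float]:
    """Metric value at the baseline node and each successive node."""
    values = [baseline]
    for factor in node_factors:
        values.append(values[-1] * factor)
    return values

# e.g. relative performance anchored at 1.00 for the 16nm/14nm-era A9:
print(project(1.00, [1.15, 1.10, 1.12]))  # hypothetical per-node gains
```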

I do want to stress that these are “best estimates” with a lot of uncertainty.

Figure 11. Power and Performance Trends.

This finally brings us to cost.

My company, IC Knowledge LLC, is the world leader in cost and price modeling of semiconductors and MEMS. Our commercially available Strategic Cost and Price Model is a company-specific industry roadmap beginning with the first 300mm processes and projecting out into the late 2020s for 3D NAND, 3D XPoint, DRAM, and logic. The Strategic Cost and Price Model produces equipment, materials, and manufacturing cost and selling price estimates by company, time, and even specific wafer fabs. Using the Strategic Cost and Price Model I have produced the three trend plots on the next slide.

On the left is the normalized wafer cost by node. Some key points on this chart:

  • The wafer costs do not include mask set amortization. For foundries, masks are typically purchased by the customer and are not part of the wafer price when the wafers are sold. For Intel, mask amortization costs would typically be included, but to make the comparisons consistent company to company we have omitted mask amortization. An important point is that mask costs are increasing rapidly, and wafer costs with mask set amortization are highly sensitive to the volume the masks are amortized over. Rising mask costs have resulted in a situation where leading-edge processes only make sense for high-volume designs.
  • The wafer costs also don’t consider design costs. This is another area where costs are rapidly increasing, pricing all but the highest-volume products out of leading-edge processes.
  • For this analysis we have assumed new greenfield fabs for each node, with Intel fabs located in the United States, Samsung in South Korea, and TSMC in Taiwan.

The resulting wafer cost plot shows rising wafer costs, with Intel having the highest wafer costs until the Intel 7nm/foundry 3nm node, where TSMC has the highest costs. This reflects TSMC having the densest process and Intel having fewer interconnect layers.

The middle graph provides normalized logic transistor density based on the values presented in the node-by-node analysis section of this presentation. As previously noted, we expect TSMC to have the densest process at the Intel 7nm/foundry 3nm node.

Finally, the graph on the right side combines wafer cost and transistor density to produce a relative logic transistor cost trend. What is clear in this chart is that although higher transistor density may require a more expensive wafer process, the density improvements, at least in the cases studied, overcome the higher wafer cost and deliver lower transistor cost.
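In other words, cost per transistor scales as wafer cost divided by transistor density, so a denser node can carry a pricier wafer and still come out ahead. A toy example with made-up normalized numbers, not outputs of our model:

```python
# Relative transistor cost ~ wafer cost / transistor density.
# Both inputs normalized; values are illustrative, not model outputs.

def relative_transistor_cost(wafer_cost: float, density: float) -> float:
    return wafer_cost / density

old_node = relative_transistor_cost(wafer_cost=1.0, density=1.0)
new_node = relative_transistor_cost(wafer_cost=1.3, density=1.8)
print(f"baseline: {old_node:.2f}, next node: {new_node:.2f}")  # 1.00 vs 0.72
```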

Another key take-away is that for logic transistors Moore’s law is alive and well. In his seminal 1965 Electronics Magazine article “Cramming more components onto integrated circuits”, Gordon Moore stated what became known as Moore’s law: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year”. The key to me in this “law” is that it is as much an economic observation as it is a technology observation. In my opinion, the purest measure of Moore’s law is whether we are continuing to decrease cost per transistor, and as this plot shows, we are, although once again this is purely logic transistor manufacturing cost and these economics only work for high-volume products.

Figure 12. Wafer Cost, Transistor Density, Transistor Cost.

Conclusion

The key points in this presentation around PPAC and logic leadership are summarized in Figure 13.

Figure 13. Conclusion.

TSMC’s continued rapid execution of moderate shrinks has led them to a leadership position and we expect them to maintain leadership through the 3nm node and beyond.

Also Read:

IEDM 2020 – Imec Plenary talk

No Intel and Samsung are not passing TSMC

Leading Edge Foundry Wafer Prices