
CES 2021 Goes All Digital

by Bill Jewell on 01-17-2021 at 8:00 am

2021 US Consumer Electronics

CES, the massive consumer technology show put on by the Consumer Technology Association (CTA), was held this week. Due to the global COVID-19 pandemic, CES 2021 was all digital. Last year, CES 2020 had over 170,000 attendees from over 160 countries and 4,400 exhibiting companies.

CES 2020 was held January 7-10, 2020 in Las Vegas, Nevada. The first cases of COVID-19 were found in Wuhan, China in December 2019. The first known case in the U.S. was on January 15, 2020. According to the World Health Organization (WHO), COVID-19 cases were found in 19 countries by the end of January 2020, 54 countries by the end of February, and almost every country in the world by the end of March. If the timing of COVID-19 had been a few weeks earlier, CES 2020 would have been the ultimate super-spreader event accelerating the spread of COVID-19 throughout the world.

CES 2021 had 1,960 online exhibitors, fewer than half the live exhibitors at CES 2020. Virtual attendance was 69,523, about 40% of the 2020 in-person attendance. Some Chinese companies with a major presence at past CES shows were absent in 2021 – including Huawei and Haier. Huawei is currently banned from working with U.S. companies, so its absence is not surprising. Haier, self-described as China’s largest consumer electronics and home appliance producer, has had one of the largest booths at previous CES shows, so its absence is notable.

The CTA projected overall U.S. retail sales revenue for the technology industry in 2021 will reach $461 billion, up 4.3% from 2020. This includes electronics hardware, software, and services. In terms of hardware, smartphones remain the largest category, $73 billion in 2021, up 5%. 5G smartphone revenues should triple in 2021, accounting for over half of smartphone revenues and over 40% of units. Laptop PCs had strong growth in 2020 due to large numbers of people working and learning from home. In 2021, laptops are expected to be $38 billion, down 2%. Televisions are projected at $22 billion in 2021, down 1%.

One of the faster growing electronics hardware categories is smart home devices, growing 3% in 2021 to $15 billion. Connected health and fitness products should grow 6% to $9 billion. Within this category, connected health monitoring devices revenues should grow 34% in 2021 as more people check for COVID-19 symptoms and manage chronic conditions from home.

The two fastest growing categories in 2021 are wireless earbuds and gaming consoles, each up 16%. Wireless earbud growth is aided by the increase in video and audio conferencing. Gaming consoles benefit from more people seeking entertainment at home and by the introduction of new gaming systems by Sony and Microsoft in late 2020.

The impact of people staying at home more in 2020 (and for at least several months in 2021) is reflected in the strong growth of software and streaming services in the U.S. This category grew 31% in 2020 and the CTA forecasts growth of 11% in 2021. Video gaming software and services is the largest segment at $47 billion in 2021, up 8% after 20% growth in 2020. Video streaming is set at $41 billion in 2021, up 15% after 60% growth in 2020.

A major theme of CES 2021 was how the COVID-19 pandemic impacted the technology industry in 2020 and how it will influence future trends. CTA’s session on “Tech Trends to Watch” highlighted emerging technology trends which have been accelerated during the pandemic. These include e-commerce, telemedicine, streaming video, remote learning, AI & machine learning, natural language processing, cloud computing and remote health monitoring devices. To limit human contact, robots are increasingly being used for tasks such as cleaning & disinfecting, delivery, stocking, food processing, and health monitoring.

The session “Robotics to the Rescue” highlighted how COVID-19 is driving contactless delivery. Wing, an Alphabet subsidiary, delivers packages by drones. Starship (despite its name) uses ground-based robots for deliveries within a 4-mile radius. Intel has a division focused on autonomous transportation. A desire for ride-sharing services (such as Uber) without potential disease-carrying human drivers will boost demand for self-driving cars.

The highlight of CES is the introduction of new products. The digital CES 2021 lacked the excitement of previous in-person CES shows, but many interesting new products were introduced.

TVs continue to get larger and larger – exceeding the available wall space in many homes. Samsung demonstrated its MicroLED 110 inch television. The TV can display up to four separate screens at once. The TV will be available in the U.S. in March 2021. Pricing was not disclosed, but the price at its launch in South Korea was over US$150,000.

Samsung also showed its Bot Handy robot which has an arm attached to its movable body. A video demonstrated the Bot Handy doing tasks such as loading a dishwasher, picking up clothes, and pouring wine. If successful, the Bot Handy could be the first robot to perform household tasks. It is in development and Samsung did not announce when it could become available.

Panasonic revealed an augmented reality heads-up display (AR HUD) for automobiles. The display will project important information such as speed and turn directions on the windshield.

Sony demonstrated its new Airpeak – a drone with a Sony Alpha camera for professional photography and video production.

Lenovo introduced its ThinkPad X1 Fold, which it says is the world’s first foldable PC. It has a 13.3 inch OLED display but folds to half that size. The display can show one screen or two. It has an on-screen keyboard and an optional external keyboard. ThinkPad X1 Fold pricing starts at $2500.

Numerous connected health products were introduced at CES 2021. Biotricity demonstrated its Bioheart wearable personal heart monitor for cardiac patients. The Bioheart continuously collects ECG data and monitors other critical functions. Neuvana offers its Xen headphones, which send an electrical signal to the vagus nerve in the ear. The company claims this tones the nerve to improve sleep, relaxation, focus and memory.

Toto showed a prototype of its Wellness Toilet which uses sensors to analyze human waste and skin. The Wellness Toilet will provide health information and diet recommendations via a smartphone app. Toto said the toilet will be on the market in the next several years.

CTA made a great effort to put on its CES 2021 digital show. CES 2021 offered informative presentations and intriguing new products. Still, nothing can match the excitement of an in-person CES show. Let us hope CES 2022 (January 5-8, 2022) will match previous live shows. The Las Vegas Convention Center has nearly completed a $980 million expansion which adds 1.4 million square feet, including 600,000 square feet of exhibit space and 150,000 square feet of meeting rooms. The expansion includes an underground public transportation system using autonomous electric vehicles. Completion was initially targeted for CES 2021, but the facility will be available for CES 2022.


CES 2021 and all things Cycling Technology

by Daniel Payne on 01-17-2021 at 6:00 am

Bowflex VeloCore bike

It’s January, so it’s time for another summary of what I’ve found at CES 2021 in new cycling products with electronic content. During the pandemic in 2020 we saw a surge in sales of bicycles, e-bikes, spin bikes and trainers as people wanted a simple way to get around town, run errands, or stay fit. I’ve continued my schedule of four rides per week, reaching 13,384 miles on a road bike, so follow me on Strava.com to keep fit and stay in touch. On rainy days I cycle indoors on a Tacx Neo 2T smart trainer running the Zwift app and talk with my buddies using the Discord app.

Tacx Neo 2T

The Tacx smart trainer uses rare-earth neodymium magnets and electricity to create variable resistance, while most other trainers use a belt connected to a weighted flywheel for resistance. The Neo is calibrated one time at the factory for power readings, while other trainers require calibration on a regular basis. With the Neo you remove the rear wheel of your own bike, then place the bike onto the trainer, a quick process taking under a minute of time.

Smart Spin Bikes

Peloton is probably the best-known brand out there for what I call smart spin bikes, because they allow you to stay fit at home while being connected via the Internet with a live or recorded instructor to stay accountable while pursuing fitness goals. But if you’ve ever jumped onto a spin bike, you quickly realize that it feels nothing like a real bike, because it’s stationary and you don’t get to steer or balance or sway.

Nautilus has their own smart spin bike called the Bowflex VeloCore, and on the surface it looks a bit like the Peloton, but with one big difference: the bike actually sways side to side as you ride, just like in real life, so the feeling is more natural.

Bowflex VeloCore

Myx Fitness has a spin bike similar to Peloton called the Star Trac, along with heart rate monitoring. After paying $1,199 for the bike, you subscribe to online classes for $29/month.

Star Trac from Myx Fitness

e-bikes

This remained the fastest-growing segment in cycling for 2020, so expect more choices in 2021 if the supply chain can actually deliver enough units during the pandemic shortages.

From Italy we have VAIMOO, an e-bike sharing system that is a CES 2021 Innovation Awards Honoree, and they offer e-bikes, charging stations and an app to control the service.

VAIMOO

Every e-bike needs to be charged, and most companies provide you with a wall charger unit that is quite large and clunky. A new company called Wise-Integration is using GaN technology for an ultra-compact e-bike charger.

Wise-Integration

Ever want to convert your existing bike into an e-bike? Well, there’s a company called CYC Motor Ltd. that has an electric motor assembly add-on that could work with your frame, and it’s called the X1 Pro Gen 2, priced at $1,093, but the complete conversion will be closer to $2,000.

CYC Motor, X1 Pro Gen 2

Bike Computers

I’ve used many models over the years: CatEye Stealth 50, Garmin 520, Garmin 820 and my latest is the Wahoo Elemnt Bolt. Even automotive supplier Bosch has entered this field with the Nyon computer, sporting a big, color display, all refined for the e-bike experience.

Bosch eBike Systems Nyon

Mio has two GPS-based, color bike computers dubbed the Cyclo Discover and Cyclo Discover Plus, with a unique feature called NeverMiss to show you something worth stopping for. Maps are available only for European customers at launch, so stay tuned for maps in the rest of the world.

Mio Cyclo Discover

Bike Locks

How about a bike lock that you open with your fingerprint, so no more keys or remembering a sequence of numbers to open up your lock? Well, Hampton Products has the BenjiLock that reads your fingerprint to open it up. Cool.

BenjiLock

Smart Helmets

One of my cycling buddies rides with a helmet that has rear-facing flashing lights, and it really is more visible than a rear light in the typical seatpost location. LIVALL has gone beyond that by adding automatic fall detection, which triggers a text message to emergency contacts, plus brake warning lights. Garmin has had crash detection and text messaging in their Edge series of bike computers for several years now, but the one complaint I hear is that the system produces false triggers if you brake suddenly. These smart systems rely on MEMS sensors to detect g-force changes.

LIVALL Smart Helmets

When I cycle indoors on my smart trainer, I will often use the free Discord app to talk with my buddies. For outdoors there’s a company called Sena with a smart cycling helmet called the R1 that also supports group intercom.

Other Stuff

Let’s say you’re cycling, listening to music in one ear (for safety), and want to change the volume, skip a song, or answer a phone call. ArcX has a new device, a ring that fits on your index finger and is controlled by your thumb, so your phone stays in a pocket or jersey.

ArcX

Listening to music while cycling and still hearing the sounds of approaching cars is important for safety, so with the JBuds Frames you can attach this pair of wireless speakers to your existing cycling glasses.

JLab Jbuds Frames

Knowing your heart rate is important for fitness types and racers who like to know which zone they are exercising in, whether to train in certain zones as part of a program or to know during a competition how close to maximum they are. Scosche has a heart rate band that fits on your arm, while I use the more traditional chest-strap style.

Scosche

My road bike has a computer mounted out front of the handlebars displaying lots of real-time data: speed, RPM, power, heart rate, distance, time of day, elevation. The thing is that I have to glance downwards to view it, so my eyes are momentarily off the road, which is a safety issue. Well, Julbo has new eyewear with a heads-up display, so no more glancing downwards at the bike computer.

EVAD-1 by Julbo

Cyclists share the road with motorists, so being seen on a bike is a big deal for avoiding collisions and staying alive, and thanks to Panasonic Automotive there’s a new 4K-resolution Augmented Reality (AR) Heads Up Display (HUD) that identifies cyclists and places a bright yellow icon on top of them:

Panasonic AR HUD

EyeNet is an app for motorists, cyclists and others on the road that alerts them when a collision may happen; each person just needs a smartphone and a cellular connection.

https://youtu.be/AoAohl2_As8

In the opening paragraph I talked about virtual cycling indoors with a smart trainer, and from Taiwan comes a virtual cycling app called WhiizU that looks a lot like Zwift.com, the leading indoor platform. You connect your bike to a smart trainer, run the app, customize your avatar, select a route, and enjoy virtual scenery along the way.

WhiizU – virtual cycling

Tuya helps OEMs build electric scooters, e-bikes, etc. quicker by providing the Bluetooth, NB-IoT, GPRS and LTE Cat 1 infrastructure in one place.

Tuya

Tome Software, Ford, Trek Bicycle, Hammerhead, Specialized, SRAM, Shimano and Bosch are trying to form a bicycle-to-vehicle (B2V) group with technology using Bluetooth 5 to alert motorists of bicycles nearby. Read their Press Release.

Bicycle-to-Vehicle (B2V)

The Transformer from Stride combines biking, running, rowing and strength training in one indoor machine. I just use my Tacx Neo 2T for indoor cycling, but this new product starts looking like a serious home gym foundation.

Transformer from Stride

I ride using 10 battery-powered devices, so the WheelSwing-VOLT caught my attention, because it’s a way to dynamically generate electricity while you’re moving on the bike. The generator fits onto your front wheel but doesn’t make direct contact with the rotating wheel, using magnets instead. As a weight-weenie I wouldn’t use it, but I can see commuters using it.

WheelSwing-VOLT

Previous CES Posts about Cycling


Podcast EP3: Tomorrow’s Semiconductors with Jim Hogan

by Daniel Nenni on 01-15-2021 at 10:00 am

Dan and Mike are joined by industry luminary Jim Hogan. In a rare interview, Jim talks about his life – how he got into semiconductors, EDA and venture investing. Jim’s time at Cadence as well as his work at ARM are explored. Jim also provides a concise and informative overview of how venture investing works. The podcast concludes with a discussion of the current and future state of the processor wars and what Jim does in his spare time.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Arun Iyengar of Untether AI

by Daniel Nenni on 01-15-2021 at 8:00 am

Arun Iyengar, Chief Executive Officer

I had a chance to catch up with Arun Iyengar, CEO of Untether AI.  Untether AI recently unveiled its tsunAImi accelerator cards powered by the company’s runAI devices. Using at-memory computation, Untether AI breaks through the barriers of traditional von Neumann architectures, offering industry-leading compute density with power and price efficiency.

What brought you to Untether AI? (After almost 20 years in the FPGA business)

I spent a long time with FPGA companies and processor companies during a period when the industry viewed hardware as important but not critical. Artificial intelligence changed all that and made hardware a critical component in solving difficult machine learning requirements. As I was considering the impact of AI on existing chip companies, I realized that it would fundamentally alter the chip landscape. I wanted to fully realize the impact of such a change by being part of a pure-play AI company versus being in a larger company that was going to go through the painful process of migrating existing silicon to have AI capability. So that meant being in a startup. However, it was important to me to look for a technology and architecture that would be differentiated and scale readily, both for production and for targeting various end markets. Untether AI, with its at-memory compute architecture, fulfilled these criteria. Untether AI is well positioned to scale across technology nodes as well as scale the size of the die to target various end markets.

Neural Net Inference is an exciting but competitive market, how will you differentiate? (Who do you really compete with?)

Available chips for neural net inference are mostly based on von Neumann architecture. As a quick aside, von Neumann described a computer architecture in 1945 that is still the mainstream approach for silicon. It is very well suited for general purpose compute, but ill-suited for neural net inference. With expected exponential growth in power consumption for AI processing, this leads to an untenable situation. When Untether AI looked at the von Neumann architecture, we found that 90% of the power is wasted in data movement. We set about to solve that with the company’s at memory architecture which reduces data movement by a factor of 6. The resulting product can run at 8 TOPS/W and offers over 2,000 TOPS per PCIe card. There are few companies that can match this compute density and performance.
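A back-of-the-envelope reading of those two numbers (90% of power spent on data movement, movement reduced by a factor of 6) shows the implied savings; this is a sketch of the arithmetic, not Untether AI’s own accounting:

```python
# Sketch of the power savings implied by the figures quoted above:
# 90% of a von Neumann design's power goes to data movement, and the
# at-memory architecture reduces that movement by a factor of 6.

baseline_compute = 0.10   # fraction of baseline power doing useful compute
baseline_movement = 0.90  # fraction of baseline power moving data

new_total = baseline_compute + baseline_movement / 6  # 0.10 + 0.15 = 0.25
power_saving = 1 - new_total                          # 75% of power saved
efficiency_gain = 1 / new_total                       # ~4x ops per watt

print(new_total, power_saving, efficiency_gain)
```

In other words, if those two figures hold, overall power per operation drops to roughly a quarter of the baseline.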

What can you tell me about your silicon? (Availability? Foundry partner? Process node? Benchmarks?)

We use standard CMOS technology with redundancy incorporated into it for high yields. We use TSMC 16 nm process to produce our runAI200 chips. The product is sampling now and is sold in 2 form factors:

  1. tsunAImi accelerator PCIe card with 4 runAI200 devices
  2. standalone runAI200 devices.

As inference benchmark examples, the tsunAImi accelerator card is capable of computing 80,000 ResNet-50 images per second and 12,000 BERT-base queries per second, both of which are at least 3 times better than the closest competitor’s numbers. On a total cost of ownership approach (using benchmark/W/sq mm of die area), the 16nm runAI200 is an impressive 8X better than the GPU competitor’s 7nm part.
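As a quick sanity check on the two headline figures above, the stated throughput and efficiency together imply the card’s board power (a rough sketch, not a published spec):

```python
# Arithmetic implied by the figures quoted above: 2,000 TOPS per tsunAImi
# PCIe card at 8 TOPS/W. This is a sketch, not a spec-sheet number.

card_tops = 2000            # stated peak throughput per PCIe card
efficiency_tops_per_w = 8   # stated energy efficiency

implied_card_power_w = card_tops / efficiency_tops_per_w
print(implied_card_power_w)  # 250.0
```

About 250 W is in line with what a high-end PCIe accelerator card can draw, which makes the two numbers mutually consistent.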

What type of software effort will be required? 

While our tsunAImi cards will be deployed in servers and the cloud, we consider our customer to be the data scientist. The data scientist is great at modeling and proficient in machine learning frameworks like TensorFlow and PyTorch. As such we use these popular frameworks as our entry point. From that point our goal is to make the implementation of the neural network as pain-free as possible. Therefore our imAIgine software development kit requires no knowledge of specifically how we translate the neural network into code running on our devices. The imAIgine compiler does the automated graph lowering and has sophisticated optimization and allocation algorithms. The imAIgine toolkit provides extensive feedback to the modeler, highlighting resource allocation and congestion and providing cycle-accurate simulation. The imAIgine runtime engine does the hardware abstraction, communication and health monitoring as it places the net on the chip(s). So the overall vision is a software development flow that allows the data scientist to stay at the ML-framework level, while providing more advanced capabilities to power users if they choose to use them.

Software will be the metric by which any AI startup succeeds or fails. At Untether AI, more than half of the company is software engineers, a large number of them with advanced degrees.

Can you talk about your relationship with Intel? 

Intel Capital has been an investor in Untether AI from the early days. Along with Radical Ventures, they have been a huge supporter of the company, providing guidance and connections to help us access technology that would be hard for a startup to do on their own. Intel Capital has a good network across their portfolio companies and Untether AI taps into that network as we have specific questions to resolve. For example, as we were looking to bring up our runAI200 silicon, we wanted to get some specific questions answered and were able to talk to another AI company from the Intel Capital network.

Additional comments?

I am excited about how silicon can change and enable AI usage. We are truly back in a golden age if you are a silicon enthusiast.

Untether AI’s goal is sustainable AI: humanity gets the benefits of AI without consuming the world’s energy resources. With this, we will have the best combination of the golden age of silicon and the democratization of AI.

Please visit Untether AI for more information and to view their presentation at the Linley Fall Processor Conference https://www.untether.ai/technology

Also Read:

CEO Interview: Tony Pialis of Alphawave IP

CEO Interview: Dr. Chouki Aktouf of Defacto

CEO Interview: Andreas Kuehlmann of Tortuga Logic


ISS 2021 – Scotten W. Jones – Logic Leadership in the PPAC era

by Scotten Jones on 01-15-2021 at 6:00 am


I was asked to give a talk at the 2021 ISS conference and the following is a write up of the talk.

The title of the talk is “Logic Leadership in the PPAC era”.

The talk is broken up into three main sections:

  1. Background information explaining PPAC and Standard Cells.
  2. A node-by-node comparison of companies running leading-edge logic processes.
  3. PPAC trend charts by company and year.

Background Information

Historically, new processes have targeted Power, Performance and Area (PPA). For example, during TSMC’s 2020-Q1 conference call the company stated that their 3nm process would provide 25-30% lower power at the same speed relative to 5nm, 10-15% better speed at the same power and a 70% increase in density.

With rising costs and challenges to produce cost effective leading edge processes the need to target cost during process development has become apparent. For example, both Imec and Applied Materials have discussed PPAC in recent presentations.

Figure 1. Power, Performance, Area and Cost (PPAC).

Logic designs are created using standard cells: inverters, NAND gates, scan flip-flops, etc.

The size of a standard cell is determined by the cell type and the design rules of the process the cell is run on. Process minimum dimensions can be used to calculate cell sizes. The height of a standard cell is the minimum metal pitch multiplied by the number of tracks. The cell width is some number of contacted poly pitches; an extra contacted poly pitch is required at the edge of the cell for a double-diffusion-break cell.

In recent years, difficulties shrinking pitches have led to track-count reductions to scale down cell sizes. However, as track heights are reduced, fin depopulation follows: a 9-track cell can have 4 fins per transistor, a 7.5-track cell fits only 3 fins per transistor, and the current state-of-the-art 6-track cells fit only 2 fins per transistor. All other things being equal, a 6-track cell with 2 fins per transistor will have one-half the drive current of a 9-track cell with 4 fins per transistor. This has led to Design-Technology Co-Optimization (DTCO), where a new process is developed to support a 6-track cell with 2 fins per transistor and the fins are designed to provide higher drive current per fin, for example by making them taller.
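The cell-size relationships above can be made concrete with a small sketch; the pitch values used here are hypothetical round numbers for illustration, not any foundry’s actual design rules:

```python
# Standard-cell geometry per the relationships described above. The pitch
# numbers below are hypothetical, chosen only to illustrate the arithmetic.

def cell_height_nm(metal_pitch_nm, tracks):
    """Cell height = minimum metal pitch x number of routing tracks."""
    return metal_pitch_nm * tracks

def cell_width_nm(cpp_nm, gate_pitches, double_diffusion_break=True):
    """Cell width in contacted poly pitches (CPP); a double-diffusion-break
    cell needs one extra CPP at the cell edge."""
    extra = 1 if double_diffusion_break else 0
    return cpp_nm * (gate_pitches + extra)

# Fin depopulation: fewer tracks leave room for fewer fins per transistor.
fins_per_transistor = {9.0: 4, 7.5: 3, 6.0: 2}

# All else equal, a 6-track cell has half the drive current of a 9-track cell.
relative_drive = fins_per_transistor[6.0] / fins_per_transistor[9.0]  # 0.5

print(cell_height_nm(40, 6))  # hypothetical 40 nm metal pitch -> 240 nm tall
print(cell_width_nm(57, 2))   # hypothetical 57 nm CPP, 2 gates + DDB -> 171 nm
```

This is why track-count reduction shrinks cells even when pitches stall, and why the drive-current penalty then has to be recovered through DTCO.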

When comparing process density, we use the smallest cell available on each process (fewest tracks) to calculate millions of transistors per square millimeter, assuming a design mix of 60% NAND cells and 20% scan flip-flops.

Many people try to compare processes based on transistor density for an actual design. The problem is that processes support multiple cell heights, for example 6-track and 9-track cells. A design targeting high performance would use many 9-track cells, while a design targeting minimum size at lower performance would use many 6-track cells; even on the same process, two designs targeting different performance levels would have different densities. We therefore use the minimum available cells to make a fair comparison.

Figure 2. Standard Cell.

 Another key density comparison for logic processes is SRAM cell size since many designs incorporate significant amounts of SRAM cache.

I have written an article on design effects on process density that is available here.

Node-by-Node Comparison

The node-by-node comparison begins with 28nm foundry processes versus Intel’s 22nm process. This comparison represents a moment in time rather than equivalent nodes; the foundry 20nm nodes might be the more appropriate comparison.

In 2011 Intel introduced their 22nm process with the world’s first production FinFETs, while the foundries were producing 28nm planar devices. From a device-technology perspective, 28nm represented the foundries’ introduction of High-K Metal Gate (HKMG), a technology Intel had introduced in 2007; now Intel was introducing FinFETs, and the foundries would not introduce FinFETs for three more years. At this point in time Intel was the clear logic process technology leader.

Interestingly, the Intel 22nm process has the best SRAM cell size but lower logic transistor density than the foundry 28nm processes, although presumably better performance. Intel was conservative on some process dimensions, presumably because this was their first FinFET generation.

Figure 3. Foundry 28nm and Intel 22nm Nodes.

Moving forward to 2014, Intel introduced their second-generation FinFET process with an aggressive shrink that put them in the lead on both logic density and SRAM cell size. In 2014 Samsung introduced their first-generation FinFET with their 14nm process, and in 2015 TSMC introduced their first-generation FinFET with their 16nm process.

Figure 4. 16nm/14nm Nodes.

A key point at this node is that Intel 14nm was originally due in 2013, and even when it was introduced it suffered from a slow yield ramp; this was the beginning of a chain of Intel delays and yield problems that persists today.

Another thing that stands out at this node is that Apple designed their A9 processor based on Samsung’s 14nm process but then also ported the design to TSMC’s 16nm process. Tom’s Hardware compared the PPA for the A9 on both processes and found power to be slightly better on the Samsung process, performance the same for both and die area to also be slightly smaller on the Samsung process. The Samsung power and area advantage may just be because the part was originally designed for Samsung and later ported to TSMC, but it gives us a unique opportunity to compare the two processes. We will use this data point later as a starting point for some of the trend analysis we will present.

The next step in time is the introduction of foundry 10nm nodes in 2016, when both Samsung and TSMC took the process density lead from Intel. This is the beginning of a key difference between Intel and the foundries: Intel takes bigger density jumps with each successive process generation, but the foundries introduce new generations faster and pass Intel for process leadership.

Figure 5. Foundry 10nm and Intel 14nm Nodes.

Stepping forward again, TSMC introduced their 7nm process in 2017, Samsung introduced their 7nm process in 2018, and Intel’s 10nm process finally entered production in 2019, although even today Intel is struggling with 10nm yield. Intel’s 10nm process did move them into rough logic-density parity with the foundry 7nm processes, but with larger SRAM cell sizes. It should also be noted that, as we will see in a moment, in 2019 the foundries began production of 5nm processes that once again moved them ahead.

At 7nm, Samsung’s process has several EUV layers and, for their internal products, was the first production EUV process, although TSMC’s 7nm+ process, which added EUV for several layers, may have been the first generally available foundry process with EUV. Total EUV layers at 7nm were between 5 and 7.

 Figure 6. Foundry 7nm and Intel 10nm Nodes.

In late 2019 we saw the foundries begin risk starts of 5nm processes, and those processes reached high-volume production in 2020. At the Intel 10nm/foundry 7nm node the three companies had similar logic densities. Moving to 5nm, TSMC delivered approximately a 1.8x density improvement while Samsung delivered only 1.33x, leaving TSMC with a substantial logic-density advantage and the smallest SRAM cell size. 5nm also saw an increase in EUV usage to 10 to 15 layers, and TSMC introduced a pFET with a high-mobility silicon-germanium fin. While the foundries are once again delivering a new node, Intel is still working on ramping up 10nm yields.

Figure 7. Foundry 5nm and Intel 10nm Nodes.

Now we step forward into the future, with foundry 3nm processes starting risk starts in 2021 for 2022 production, and Intel’s 7nm process entering production in 2022. Intel’s 7nm was originally due in 2021, so 2022 represents another delay, and there are rumors it will be delayed beyond 2022. There have also been reports of delays for Samsung and TSMC 3nm; our checks indicate Samsung may be delayed but TSMC is on track.

Intel 7nm will represent Intel’s first use of EUV and Samsung’s 3nm will see the industry’s first use of Gate-All-Around (GAA) in the form of stacked Horizontal-Nano-Sheets (HNS). TSMC is continuing to utilize FinFETs at 3nm.

For 7nm, Intel has announced a 2x density increase over 10nm; Samsung has announced 3nm will be 1.35x denser than 5nm; and TSMC has announced 3nm will be 1.7x denser than 5nm. Based on these announced density improvements, TSMC will have the densest process by a wide margin, Intel will pass Samsung for second place, and Samsung will be third. We expect 15 to 30 EUV layers at this node, with TSMC at the upper end due to their denser process.

Figure 8. Foundry 3nm and Intel 7nm Nodes.
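That ranking follows from chaining the announced multipliers against the rough logic-density parity noted earlier at the Intel 10nm/foundry 7nm node; this is a back-of-the-envelope sketch using only the figures quoted above, not measured densities:

```python
# Chain each company's announced density gains from the Intel 10nm /
# foundry 7nm parity point (a sketch based on the multipliers quoted above).

from math import prod

announced_gains = {
    "TSMC":    [1.8, 1.7],    # 7nm -> 5nm, 5nm -> 3nm
    "Samsung": [1.33, 1.35],  # 7nm -> 5nm, 5nm -> 3nm
    "Intel":   [2.0],         # 10nm -> 7nm
}

relative_density = {co: prod(g) for co, g in announced_gains.items()}
ranking = sorted(relative_density, key=relative_density.get, reverse=True)

print(relative_density)  # TSMC ~3.06x, Intel 2.0x, Samsung ~1.80x
print(ranking)
```

The cumulative factors make the ordering clear: TSMC’s two moderate steps compound to more than Intel’s single 2x jump, and Samsung’s smaller steps leave them third.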

There has been a lot of speculation about whether Intel will outsource production of their microprocessors to the foundries given that the foundries now have the process lead. At the Credit Suisse conference in December 2020, Intel CEO Robert Swan announced Intel will continue to develop leading-edge processes, with Intel 5nm and 3nm processes still planned. I wouldn’t be surprised to see Intel gradually outsource more of their needs, but it doesn’t currently look like any radical change is going to take place any time soon. I should also point out that, given Intel’s volumes, it would take the foundries years to ramp up to accommodate them.

Figure 9. Intel Status

PPAC Trends

Now we will compare PPAC by company and time.

One key take-away from our analysis is that although Intel tends to make bigger logic density improvements with each new node, the foundries are introducing new nodes faster and ultimately driving density faster. In fact, between 2014 and 2022 the foundries will have introduced five new nodes in the time it took Intel to introduce three, and this counts only major nodes; the foundries have introduced many half-nodes as well. Intel introduces "half-nodes" too, with its +, ++, and +++ variants, but those are performance half-nodes, not shrinks.

Figure 10. Nodes Versus Times.

Comparing power and performance between companies and processes is nearly impossible. Ideally someone would run a consistent product, such as an Arm core with a set amount of SRAM cache, on each process and publish power and performance metrics, but this is far too expensive to be practical. In the chart in Figure 11 I have created the best estimated comparison I can produce.

I started the power comparisons at the 16nm/14nm node, where we have the A9 on both Samsung 14nm and TSMC 16nm. I have given Samsung a slight advantage as previously discussed, even though this may be a design issue. I have then taken the power improvement for each subsequent node from the companies' announced improvements. As can be seen, TSMC takes a significant lead at 10nm; Samsung largely catches up at 3nm, presumably reflecting their switch to HNS, although TSMC remains competitive with their highly scaled FinFET. I am unable to place Intel on this chart with any confidence.

For the performance comparison I once again start with the A9 at the Samsung 14nm and TSMC 16nm node and use the companies' announced performance improvements by node to project forward. TSMC develops a performance advantage over Samsung at 10nm and increases the lead at each successive node. To place Intel on this chart I looked at Intel microprocessors made on their 10nm SuperFin process and AMD microprocessors made on TSMC's 7nm process and concluded they have similar performance. I also used published Intel performance comparisons between their base 14nm process and the 10nm SuperFin process to back-project how Intel would compare at the 14nm/16nm node. TSMC and Intel are competitive at the Intel 10nm/foundry 7nm node, with Samsung likely having the lowest performance. I don't have 7nm performance estimates from Intel, but my "best guess" is that TSMC 3nm will be as good or better.

I do want to stress that these are “best estimates” with a lot of uncertainty.

Figure 11. Power and Performance Trends.

This finally brings us to cost.

My company, IC Knowledge LLC, is the world leader in cost and price modeling of semiconductors and MEMS. Our commercially available Strategic Cost and Price Model is a company-specific industry roadmap beginning with the first 300mm processes and projecting out into the late 2020s for 3D NAND, 3D XPoint, DRAM, and logic. The Strategic Cost and Price Model produces equipment, materials, and manufacturing cost and selling price estimates by company, time, and even specific wafer fab. Using the Strategic Cost and Price Model I have produced the three trend plots on the next slide.

On the left is the normalized wafer cost by node. Some key points on this chart:

  • The wafer costs do not include mask set amortization. For foundries, masks are typically purchased by the customer and are not part of the wafer price when the wafers are sold. For Intel, mask amortization would typically be included, but to make the company-to-company comparisons consistent we have omitted it. Importantly, mask costs are rising rapidly, and wafer costs with mask set amortization are highly sensitive to the volume over which the masks are amortized. Rising mask costs have created a situation where leading-edge processes only make sense for high-volume designs.
  • The wafer costs also don't consider design costs. This is another area where costs are rising rapidly, pricing all but the largest-volume products out of leading-edge processes.
  • For this analysis we have assumed new greenfield fabs for each node, with Intel fabs located in the United States, Samsung in South Korea, and TSMC in Taiwan.

The resulting wafer cost plot shows rising wafer costs, with Intel having the highest costs until the Intel 7nm/foundry 3nm node, where TSMC takes over as the most expensive. This reflects TSMC having the densest process and Intel having fewer interconnect layers.

The middle graph provides normalized logic transistor density based on the values presented in the node-by-node analysis section of our presentation. As previously noted, we expect TSMC to have the densest process at the Intel 7nm/foundry 3nm node.

Finally, the graph on the right side combines wafer cost and transistor density to produce a relative logic transistor cost trend. What is clear in this chart is that although higher transistor density may require a more expensive wafer process, the density improvements, at least in the cases studied, overcome the higher wafer cost and deliver lower transistor cost.

Another key take-away is that for logic transistors Moore's law is alive and well. In his seminal 1965 Electronics Magazine article, "Cramming more components onto integrated circuits," Gordon Moore stated what became known as Moore's law: "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year." The key to me in this "law" is that it is as much an economic observation as a technology observation. In my opinion the purest measure of Moore's law is whether we are continuing to decrease cost per transistor, and as this plot shows, we are, although once again this is purely logic transistor manufacturing cost and these economics only work for high-volume products.

Figure 12. Wafer Cost, Transistor Density, Transistor Cost.

Conclusion

The key points in this presentation around PPAC and logic leadership are summarized in Figure 13.

Figure 13. Conclusion.

TSMC’s continued rapid execution of moderate shrinks has led them to a leadership position and we expect them to maintain leadership through the 3nm node and beyond.

Also Read:

IEDM 2020 – Imec Plenary talk

No Intel and Samsung are not passing TSMC

Leading Edge Foundry Wafer Prices


HFSS – A History of Electromagnetic Simulation Innovation

HFSS – A History of Electromagnetic Simulation Innovation
by Daniel Nenni on 01-14-2021 at 10:00 am

ansys HFSS electric field distribution in coax to waveguide adapter

In the 155 years since James Clerk Maxwell introduced the world to Maxwell's Equations in "A Dynamical Theory of the Electromagnetic Field," there have been some amazing breakthroughs and avenues of insight. As young electrical engineering students we are introduced to the set of equations describing electromagnetic waves, but it is often difficult to visualize and understand wave propagation and how it pertains to high-speed electronic design. It is in this area that Ansys HFSS stands out as a pioneer in the industry.

From the time HFSS was introduced in 1990, it has provided a unique type of insight into electromagnetic problems. Diagnosing design issues in a lab requires measuring time-domain waveforms or S-parameters on components and test boards. Often, observations of these waveforms defy a basic understanding of electromagnetic phenomena. From TDR (Time Domain Reflectometry) "glitches" to mysterious "suck-outs" in frequency-domain S-parameters, debugging these designs can be extremely challenging. Modeling these structures in all their 3D complexity in a full-wave electromagnetic solver like HFSS can very quickly uncover the source of these problems. The ability to excite a problem with real-world signals, watch the field propagate through the model, and quickly uncover hidden discontinuities or coupling mechanisms is invaluable.

On the capacity front of electromagnetic simulation, HFSS has been an industry leader. Engineers have always wanted to simulate ever larger and more complex designs. From the early days of HFSS, with the solution of the coax-to-waveguide adapter shown in the image, designers have clamored for the ability to include larger 3D models, more detailed mechanical and electrical CAD, and more complicated material properties. When HFSS was introduced, a then-complex coax-to-waveguide adapter took approximately 10,000 matrix elements and 10 hours to solve at just a single frequency point. That same model today solves an entire band of frequencies in under 30 seconds on a laptop. The original full-wave FEM solution has grown from solving simple waveguide components to entire microwave systems, complex antenna arrays, and entire printed circuit boards.

Many algorithmic innovations have led to the unprecedented scale users can solve in HFSS today. We have tackled problems ranging from multi-processing matrix solutions to distributing solutions across dozens of compute nodes and hundreds of cores. HFSS introduced the first commercially available Domain Decomposition Method solver for full-wave electromagnetics, enabling pieces of a large problem to be meshed and solved separately, then brought together into a full-model field solution. Whether by creating, developing, and commercializing new computational electromagnetic algorithms or by storing information more efficiently, these enhancements have exponentially increased the speed and capacity of HFSS over the years. One recent example of large-scale IC simulation in HFSS can be seen here.

Some would claim that the increase in capacity is largely due to hardware innovation over the thirty-year history of HFSS. These hardware improvements, described in 1965 by Gordon Moore and colloquially known as Moore's law, have magnified the algorithmic developments. Floating-point operations have improved in speed by almost 500 times since HFSS was first introduced. Combining the raw clock speed improvements of CPUs with the increased size and speed of cache and main memory, larger simulations can be performed in less time.

It is not much of a stretch to say that engineers have very little patience for waiting on simulations. In my experience, the happiest engineer would be one who could solve an entire complex electromagnetic system within seconds from the comfort of their living room on a tablet. For some, the reality of working from home has recently been realized, but we are still not at the point of solving these systems in seconds. However, using the Ansys Cloud and HFSS, these simulations can be accessed and monitored from the comfort of your phone or tablet.

Ansys Distinguished Engineer, Dr. Larry Williams notes that “the numerous electromagnetic method innovations in Ansys HFSS have enabled solutions that now propel the 5G, RF, Wireless, and high-speed industries.” To find out more about the innovations in the HFSS solvers and the history of our HPC computing advancements, take a look at these videos:

https://www.youtube.com/watch?v=N7v4fgDyxB4&list=PLQMtm0_chcLx5STq8Q_p1m79PyjkylvR8&index=3

https://www.youtube.com/watch?v=DC-SA4hloHQ&list=PLQMtm0_chcLx5STq8Q_p1m79PyjkylvR8&index=6

Also Read

HFSS Performance for “Almost Free”

The History and Significance of Power Optimization, According to Jim Hogan

The Gold Standard for Electromagnetic Analysis


Developing Drivers For The Automotive Industry

Developing Drivers For The Automotive Industry
by Daniel Nenni on 01-14-2021 at 6:00 am

mcal

Autonomous driving, connected vehicles, power electronics, infotainment, and shared mobility are some of the developments driving the revolution within the automotive industry in recent years. Combined, they are not only disrupting the automotive value chain and impacting all stakeholders involved, but are also a significant driver of growth in the automotive software market, which is expected to cross $450 billion by 2030. Unfortunately, these rapid changes are making it difficult for automotive OEMs and other industry stakeholders to keep pace, partly because a large share of these automotive innovations depends more on software quality, execution, and integration than on mechanical ingenuity.

Despite the importance of software in the cars of today, the embedded software modules are often developed in isolation or in partnerships, or are bought from suppliers. These modules are then stitched together into a proprietary platform. But proprietary platforms are difficult to support, as the supplied hardware needs to work seamlessly with them.

To ensure configurability, flexibility, and maintainability for the cars of today, there is a growing need for standardized, software-defined transportation platforms. With the increasing complexity of hardware and software, it also becomes necessary to find common solutions across product lines. AUTOSAR (AUTomotive Open System ARchitecture) is a big step in this direction. It was founded by companies such as Toyota, BMW, VW, Ford, Daimler, and GM with the aim of standardizing the software architecture for the automotive industry. AUTOSAR enables hardware and software development to proceed independently of each other and makes it easier to manage the growing complexity, quality, and reliability demands of automotive electronic systems. This standardization lets automotive embedded developers focus primarily on innovation in product features rather than on accommodating different architectures, making it easier to reuse applications between ECUs. The open AUTOSAR architecture has been adopted by several companies who in turn have given it their own flavor, including Vector, Elektrobit, and Bosch, to name a few.

To provide flexibility to software developers, AUTOSAR uses a layered approach, splitting the complete software stack into the Application layer, the Runtime Environment, and the Basic Software. The Basic Software is in turn broken into the Services layer, the ECU Abstraction layer, and the Microcontroller Abstraction Layer (MCAL).

The device drivers for the MCU are developed according to the standard specified for the Microcontroller Abstraction Layer, which tells the software developer the name of each API, the parameters of that API, and the overall functionality of the driver. The primary benefit of standardizing the APIs is that MCAL drivers can be reused by OEMs across multiple customer projects as well as across Basic Software (BSW) stacks provided by multiple vendors. This reduces development time for the OEM and integration effort for the software integrator, leading to reduced time to market and significant cost reductions for OEMs.

How are MCAL device drivers any different?

In the automotive industry there is a general understanding that suppliers will supply the MCAL layer, which sits directly on the register level of the microcontroller. While some suppliers may provide a reference implementation of the MCAL, there is a big chasm to cross to reach a production-ready implementation. One of the major challenges in developing the device drivers is the rather complex interaction between specifically implemented hardware features and standardized software requirements. Driver solutions also need to map different software modules onto the same microcontroller resources and manage the complex dependencies between driver configurations.

Moreover, anyone who has developed or used a driver knows that a driver comprises APIs which take parameters defined by the driver developer. The application developer uses these APIs to achieve a pre-defined functionality. While this is true of AUTOSAR as well, and most of the APIs accept parameters passed by the application developer, there is another side to the AUTOSAR development process: the configuration of the MCAL.

The MCAL is configured using a Module Configuration Generator (MCG), which produces a configuration that is handed to the driver so the driver can set up the hardware as mandated during the configuration process. The configuration can then be used across multiple hardware platforms based on the same microcontroller.

The MCAL also requires adherence to strict quality standards and defines how the drivers should be written. They have to be MISRA compliant and meet the necessary safety requirements (ISO 26262). These include, for example, rules on when to clear allocated memory, or ensuring parameters are not passed via pointers, in support of safety-critical programming.

Developing the MCAL for different ASIL levels also poses its own challenges. ISO 26262 specifies particular processes and methodologies to be followed during the development of MCAL drivers for each ASIL level, and these often end up modifying the architecture of the MCAL driver. Design changes to the MCAL driver can include:

  • The number of safety checks done in the driver, such as monitoring for errors during register accesses, monitoring for memory corruption, and adding safety markers to the configuration data.
  • Implementation of safety mechanisms needed in the event of errors during register accesses.
  • Memory partitioning, such that variables of an ASIL-D driver (the highest ASIL level) do not occupy or overlap the memory space of variables of an ASIL-A driver (the lowest ASIL level). This partitioning is done during system and software design.

Complex drivers

Complex device drivers are also part of the AUTOSAR architecture and involve developing a driver which does not need to conform to the AUTOSAR standard and integrating it into the BSW. The developer is free to architect their own driver, choosing the APIs, parameters, and functionality as they see fit. But developing a complex driver is easier said than done, as the implementation often requires awareness of overall system constraints such as timing and latency requirements.

Our expertise

Vayavya Labs, with its considerable experience in the embedded domain, AUTOSAR, and MCUs, provides an avenue for design companies to leverage its expertise for custom driver and software development for the automotive industry. It has successfully implemented MCALs across multiple OEMs for hardware platforms such as Synopsys's ARC HS3x and EM6 processors, Calterah's Alps 77GHz radar SoC, and NXP's MPC5748 controller, to name a few. It has also developed MISRA-compliant drivers for the Spi, Qspi, Ethernet, Can, Can-FD, Mcu, Port, Dio, Pwm, Gpt, and Adc modules.

Vayavya Labs also provides a software platform, DDGEN, to automatically generate MISRA-compliant device drivers for commonly used peripherals, enabling rapid development of automotive applications.

Also Read:

CEO Interview: R.K. Patil of Vayavya Labs

A Blanche DuBois Approach Won’t Resolve Traffic Trouble

Chip Shortage Killed the Radio in the Car


2020 was a Mess for Intel

2020 was a Mess for Intel
by Robert Maire on 01-13-2021 at 10:00 am

Intel 2020 Mess

Understanding Intel’s future means understanding Intel’s past

Yes, there are two paths you can go by, but in the long run, there's still time to change the road you're on.

Intel is at a crossroads. The road they have been on since inception, the road that has differentiated them from the rest of the pack and made them great, was their manufacturing prowess. They are the last "real man" standing that owns their own fabs.

Now we are at a point where Intel has to decide whether to continue trying to be the last CPU IDM, patch up their mistakes and stumbles in manufacturing, and recover their greatness (although maybe not Moore's Law leadership); or throw in the towel and follow AMD and everyone else into TSMC's warm embrace; or settle on some half-baked compromise between the two extremes.

There is no easy way out nor clear decision to be had. And you may ask yourself, “Well, how did I get here?”

AMD was taking its last dying gasps, and Apple had given up on PowerPC and gone all in with Intel. Intel was flying high, even more so than its partner Microsoft.

Then a few years ago, it seemed as if something started happening and it all started unraveling.

It was not a single point in time, nor was there a single inflection-point event that signaled a change in Intel; it came about much more slowly, much more insidiously.

Let's get rid of our most experienced people

A few years ago, back in 2016, Intel did a "RIF" (reduction in force) of about 11%. Intel had previously done a significant reduction back in 2006 of about 10%. At the time we noted that Intel seemed to be trying to soften the blow by offering early retirement and other "packages" to older employees with the most seniority (and experience). It seemed to us, as casual observers with some friends at Intel, that the RIF went well beyond its original intent and Intel lost real, experienced talent who took the attractive "package" and left prematurely without transition.

In an industry that runs on "tribal knowledge," "copy exact," and the experience of running a very, very complex multi-billion-dollar fab, much of the most experienced, best talent walked out the door at Intel's behest, with years of knowledge in their collective heads.

Let's go buy some stuff with the money burning a hole in our pockets

Intel has also been on a bit of a shopping spree over the past few years, buying all manner of companies at very high valuations. Though we won't go through all the acquisitions, they all seemed to have some sort of legitimate justification or logic, even though they may not have been anywhere near Intel's wheelhouse. In the end, when we add up the price of the acquisitions and try to calculate the value added to Intel, we come up very short.

While we would never criticize M&A as a method to grow, since properly applied acquisitions can propel a company well beyond its peers and into new, faster-growing markets, badly done M&A with weak logic can sink a company.

Mobile Phones are toys that will never amount to anything

Intel famously balked at making CPUs for Apple's iPhone and essentially "whiffed" on the smartphone and tablet markets while TSMC embraced them. (This reminds me of a software analyst I knew when we were both at Morgan Stanley telling Bill Gates that microcomputers wouldn't amount to anything while Morgan was pitching for Microsoft's IPO business, which Goldman got, putting them on the tech map.) Though this was a single key mistake, there was never a significant recovery effort until the game was already over.

Forgetting your roots/ Taking your eye off the ball

Perhaps the peak of my concerns about Intel came a number of years ago at an Intel event. The CEO of Intel at the time (name withheld to protect the innocent…) was presenting all the myriad new markets Intel was getting into and looking at.

It was a litany of multi-billion-dollar opportunities and amazing technologies. He spoke for an hour on a host of topics, and I never heard him use the word "semiconductor" (or "chip") once. You would not have known from the speech that Intel was in the semiconductor business at all, let alone that it was the "leader" from which all its profits came. When I walked out of the room I had the urge to short Intel on the spot…but didn't.

Deserting a sinking ship

When we heard that the legendary Jim Keller was leaving Intel at the beginning of 2020, it was clearly an ominous sign. Jim is the semiconductor design genius/guru who has had stints at Tesla, Apple, AMD, and finally Intel. He has been a bit of a turnaround SEAL team that parachutes in for a few years, pulls off a miracle, and then hops to the next lily pad to come to the next company's rescue. He has since moved on to lead an AI chip startup.

His departure around the time that Apple also abandoned the Intel ship was a very clear indication that things were already sinking. Here we are, almost a year later still without a rescue plan.

A plane crash is never just "one thing" going wrong

My point in all this is that Intel's problem is not just bad yields and delays at 10nm and 7nm caused by a handful of esoteric technology issues that let them fall behind TSMC in Moore's Law.

Sure, that’s the manifestation of other underlying issues that taken together have caused those symptoms to come to a head to cause problems. Its a bit like an airline trying to decide whether to outsource its airplane maintenance after a plane crashed due to a loose screw after years of neglect, mistakes and laying off the most experienced mechanics. Maybe its not the mechanics faults.

Maybe it's a managerial fault that needs fixing first.

For inspiration, look at what the semiconductor winners are doing

It is interesting to note that Intel used to be the biggest capex spender in the industry and was passed by both TSMC and Samsung around the time its issues started to manifest.

We are now looking at Samsung potentially spending a record $30B a year and TSMC spending a record $20B, both more than Intel.

While randomly throwing money at an issue is not a solution, a focused spend on both the machinery and the people key to your leadership position is well warranted and should not be subject to cuts made to come in on budget or hit a quarterly earnings number. It is clear that TSMC, Samsung, and now the Chinese view long-term spend and commitment in semiconductors as crucial to long-term success despite the short-term impact. Focusing on the stock price, buybacks, and acquisitions while using capex and R&D to balance the budget is the quintessential sacrifice of the long term for the short term.

The problem is much bigger than just Intel

While we love Intel as an American technology pioneer and former leader, we are also worried about Intel being the last US semiconductor maker standing (not counting Micron…which doesn't quite count). As we have written extensively for many years, the risk to the greater US, its national defense and economy, of losing semiconductor leadership is well beyond what most people can even begin to understand.

The numbers are many times Intel’s revenues and go well beyond just dollars and cents to national security.

While Intel and its shareholders are not responsible for the US national economy and security, the alignment between Intel’s long term success and the US’s best interest is clear and synergistic.

AMD’s example is not a good one

Many investors and analysts would take the easy position that splitting the company in two, between the fabs and the rest of the company, worked for AMD and will work for Intel. We think this is not a good comparison.

AMD did not have the minimum critical mass needed to support a fab and all the R&D that goes along with it. Intel has the size, scope, and market needed to support the associated spend. The basis of the problem is not economic, as it was with AMD; it is an execution and technical problem that Intel has encountered. The divorce between AMD and its fab did not work out too well for the fab (now GlobalFoundries).

AMD did find a buyer (sucker) in Abu Dhabi, which thought it was buying its way into high tech but didn't anticipate the years of endless, multi-billion-dollar spending needed to keep up in semiconductors, especially without the requisite revenue and profitability to support it. The math simply didn't work, and they wound up bailing out of the Moore's Law race. GloFo is now hiding in a "specialized" corner of the semiconductor industry, hoping to avoid being trampled by the bigger players before they can IPO or otherwise unload the company.

Investors will point to AMD's recent success, but we would point out that that success is more an example of TSMC's success than AMD's, much as the success of Apple, Nvidia, and others is directly linked to TSMC. While we take nothing away from Lisa Su's management of AMD, she did have the luck of being in charge when the supply deal with the millstone that was GloFo expired and AMD was free to use TSMC as its fab, which was somewhat of a "no-brainer." Intel does not have the luxury of the same luck, nor as easy a decision.

Intel’s situation is much more complicated. In addition, AMD honestly didn’t have much of a choice at the time.

Putting toothpaste back in the tube is not easy

The other end of the decision spectrum is doubling down and fixing the existing issues with Intel's manufacturing. Regaining leadership in Moore's Law is likely a lost cause, as recovering it would probably require TSMC to stumble, something we have never seen happen.

We are also at a bit of a transition, as Moore's Law is indeed slowing and becoming more difficult, and multiple cores, multi-die packages, 3D stacking, and other alternatives attempt to make up for the geometric slowing.

This means it isn't just about catching up to TSMC on Moore's Law by leapfrogging a generation or miraculously fixing the yield issues; it also means doing a lot on the many alternative technology fronts. Intel does have the resources to fight a multi-front war. It would likely mean increased spend and duplicate costs, as Intel would have to outsource to TSMC while at the same time spending to fix and improve its existing processes and technologies.

This larger incremental cost would likely not sit well with investors, as the additional costs would squeeze profitability in the short run (likely a number of years).

Outsourcing only works if you have multiple potential manufacturers…

We would point out that outsourcing only works in the long run if you have multiple, equally competent outsource partners to play off one another to keep them honest and pricing reasonable. If both Intel and AMD outsource to TSMC without Samsung as a viable alternative, they will both lose, as TSMC will be in the driver's seat and able to determine winners, losers, and pricing. AMD being at TSMC has worked because the real competition has been TSMC versus Intel.

We would also point out that Apple is in an even more vulnerable position with TSMC than Intel, now that it has decided to move all its laptop/desktop CPU business away from Intel and put all its eggs in TSMC's basket. Apple has even fewer alternatives, as going to Samsung foundry as a backup/stalking horse against TSMC is not very viable. Even though Samsung is pouring oodles of money into its hugely profitable chip business, that doesn't mean they will be competitive in foundry, and where they make their money is memory anyway.

Outsourcing is a "burn the boats"/"roach motel" one-way decision with no recovery if it doesn't work, and it would likely put Intel at the mercy of others. Both Intel and AMD would be in the exact same boat.

Thinking outside the box

We would imagine that there should be alternative, unique solutions to Intel’s manufacturing issue.

Could Intel buy or rent TSMC's process know-how? Could TSMC help fix or run Intel's advanced fabs? Could Intel become TSMC's presence in Arizona?

What alternative arrangements can get Intel’s manufacturing back on track more quickly while using TSMC in the interim as a fill in the gap in manufacturing?

Could the US government get involved through the “CHIPS for America” Act? The threat of losing Intel’s manufacturing would be a clear case for that legislation.

Maybe Apple would be willing to chip in some money to get onshore manufacturing that is not dependent on an isolated island off the coast of mainland China.

There should be a way to help Intel out in the near term with manufacturing and help TSMC out with presence and diversity outside of its island.

Maybe Samsung could step in as a white knight and team up with Intel so that both get a synergistic effect from their logic manufacturing efforts.

We think this complex situation requires a complex, out-of-the-box solution, which will require some compromise.

The Stocks

Unfortunately, we don’t see an easy way out for Intel that doesn’t hurt either the short-term or long-term valuation. If Intel increases outsourcing and abandons being a fab leader, it will sacrifice long-term value for short-term profits and a quick, easy fix.

If they choose to stay in the game, short term profits and the stock will be hurt by the extra expense of outsourcing while at the same time continuing to invest even more to fix the problem as an independent manufacturer.

If Intel chooses the outsource approach, we would likely look to get out of the stock after it had run up on the news of the cost cutting and on investors thinking the same thing will happen to Intel that happened to AMD.

If Intel chooses to fight on independently, we might be tempted to wait for a lower entry point as the impact on profitability would increase.

The wild card is some sort of in between solution that is a unique hybrid that is difficult to game out, but that may be the best hope for a good outcome.

From the perspective of semiconductor equipment makers, Intel giving up the ghost would be very bad, as they would not only lose Intel’s spend but would have to deal with an ever more dominant TSMC. Industry spending would go from three players down to two oppressive giants, Samsung and TSMC. More customers are always better for equipment makers.

Tokyo Electron and Hitachi do a lot of Intel business, as does KLA and, to a slightly lesser extent, Applied. Lam is more exposed to Samsung. ASML already does the vast majority of its business with TSMC and less with Intel, so it would see less impact.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

SMIC Blacklist puts ASML in Jam

Noose tightens on SMIC- Dead Fab Walking?

China Semiconductor Bond Bust!


CDC, Low Power Verification. Mentor and Cypress Perspective

CDC, Low Power Verification. Mentor and Cypress Perspective
by Bernard Murphy on 01-13-2021 at 6:00 am

CDC Low Power

Clock domain crossing (CDC) analysis is unavoidable in any modern SoC design and is challenging enough to verify in its own right. CDC plus low power management adds more excitement to your verification task. I wrote on this topic for another solution provider last year. This time I want to introduce an interesting twist on the problem, revealed in a Mentor/Cypress white paper presented at DVCon San Jose last year.

Scoping down CDC analysis reduces noise, so Mentor and Cypress focused their attention on interactions between low power and CDC. Obvious candidates are around isolation management and retention registers. Since they’re working with an RTL design in which power management is not yet implemented, they read the current UPF (power management constraints) together with the RTL.

CDC and isolation control

One obvious CDC-plus-low-power problem, illustrated in the figure, would not necessarily be obvious in a CDC analysis of the RTL alone, where the isolation signal is not yet connected. In this case there is a clock domain crossing between clk2 and clk1, which could lead to metastability in the B2 register when isolation is enabled (or, I think, when it is disabled). This means you must either harden B2 against metastability or synchronize the isolation enable signal. Either way, this is a hazard that only becomes apparent when you look at the RTL and the UPF together.
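The flavor of such a combined RTL+UPF check can be sketched in a few lines. This is a toy illustration only: the signal names (iso_enable, B2) and data structures are hypothetical, and this is in no way the actual Questa CDC data model or algorithm.

```python
# Toy cross-check of RTL clock domains against UPF isolation controls.
# Signal names and data structures are hypothetical illustrations.

# Clock domain of each signal, as extracted from the RTL (hypothetical)
rtl_clock_domain = {
    "iso_enable": "clk2",   # isolation control arrives on clk2
    "B2": "clk1",           # target register is clocked by clk1
}

# Isolation strategies read from the UPF: control signal -> isolated register
upf_isolation = {"iso_enable": "B2"}

# Signals the RTL already passes through a synchronizer (none here)
synchronized = set()

def find_iso_cdc_issues():
    """Flag isolation enables that cross into a different clock domain
    without a synchronizer - the metastability hazard described above."""
    issues = []
    for ctrl, target in upf_isolation.items():
        if (rtl_clock_domain[ctrl] != rtl_clock_domain[target]
                and ctrl not in synchronized):
            issues.append((ctrl, target))
    return issues

print(find_iso_cdc_issues())  # the iso_enable -> B2 crossing is reported
```

Note that the check finds nothing looking at the RTL alone (the isolation signal isn’t connected there) or at the UPF alone (which carries no clock information); only the combination exposes the crossing.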

CDC and retention

Retention registers provide another tricky example. In RTL a register is just a register – no power considerations of any kind. If you power off the block containing the register, the register powers off along with everything else in the block. But that can introduce a lot of latency and rework when you want to power back on. When your phone automatically powers down and a little later you start it up again, you don’t want to have to restart all your apps and re-find the last things you were looking at. You expect the phone to jump right back to where you left off. Retention registers play a role here. These registers hang on to their last state, even when you power down the domain around them. Not every register has to be retention, just enough to allow jumping back quickly to the last state when power is restored. To my knowledge, this typically works by saving state to a separate part of a special register, where that separate part sits in an always-on power domain. When you’re ready to restore, you copy that saved state back to the main register.
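The save/restore behavior described above can be modeled behaviorally in a few lines. This is purely an illustrative sketch of the concept under my stated assumption (a shadow copy in an always-on domain); real retention cells are library-specific and not modeled this way.

```python
# Minimal behavioral model of a retention register: a main register that
# loses state on power-down, plus an always-on shadow that preserves it.
# Illustrative only; not a real standard-cell implementation.

class RetentionRegister:
    def __init__(self):
        self.q = 0          # main register, lost on power-down
        self.shadow = 0     # shadow storage in the always-on domain
        self.powered = True

    def write(self, value):
        if self.powered:
            self.q = value

    def power_down(self):
        self.shadow = self.q  # save state before the domain goes dark
        self.q = None         # main register contents are lost
        self.powered = False

    def power_up_restore(self):
        self.powered = True
        self.q = self.shadow  # jump straight back to the saved state

r = RetentionRegister()
r.write(0xA5)
r.power_down()
r.power_up_restore()
print(hex(r.q))  # 0xa5 - state survives the power cycle
```

The restore step is exactly where the CDC concern discussed next comes in: the signal that triggers that copy-back must be safe relative to the register’s clock.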

The CDC challenge again starts with the fact that designers flag these decisions in the UPF, not in the RTL. They also choose signals to trigger state restoration, and those signals must be synchronized to the register clock. This is another case where a check has to look at both the RTL and the UPF: first to find the retention candidates, and then to ensure that no restore signal has a CDC issue with its target register.

Mentor solution

Interesting stuff. I wonder if at some point we will be seeing white papers on CDC and security verification. Or maybe CDC and low power and security! You can access the Mentor white paper HERE.

Also Read:

Multicore System-on-Chip (SoC) – Now What?

Smoother MATLAB to HLS Flow

A Fast Checking Methodology for Power/Ground Shorts


ESD Alliance Report for Q3 2020 Presents an Upbeat Snapshot That is Up and to the Right

ESD Alliance Report for Q3 2020 Presents an Upbeat Snapshot That is Up and to the Right
by Mike Gianfagna on 01-12-2021 at 10:00 am

ESD Alliance Report for Q3 2020 Presents an Upbeat Snapshot That is Up and to the Right

The Electronic System Design (ESD) Alliance (a SEMI Technology Community) recently released its regular report on EDA revenue for Q3 2020. While the report is a normal occurrence, the numbers in this particular report are anything but normal. I have been reviewing these reports for many years, and I honestly can’t remember a more positive result. There have been plenty of things to remember from 2020 that evoke sadness. If you care about the EDA industry, take heart. There is something to be happy about. Read on to see how the ESD Alliance report for Q3 2020 presents an upbeat snapshot that is up and to the right.

Let’s start with some of the basics. EDA revenue increased 15 percent in Q3 2020 to $2,953.9 million, compared to $2,567.7 million in Q3 2019, with all categories logging significant gains. The four-quarter moving average, which compares the most recent four quarters to the prior four quarters, rose by 8.3 percent. The companies tracked in the report employed 47,087 people in Q3 2020, a 4.8 percent increase over the Q3 2019 headcount of 44,950 and up 1.1 percent compared to Q2 2020. Substantial revenue growth and higher employment. Happy days are here again. Before I got too gleeful, I wanted to get some more detail – the story behind the numbers. For that, I was fortunate to be able to spend some time with Wally Rhines, Executive Sponsor, SEMI EDA Market Statistics Service.
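The headline percentages follow directly from the dollar and headcount figures quoted above; a quick sanity check:

```python
# Sanity check of the growth figures quoted in the ESD Alliance report.
q3_2020_rev, q3_2019_rev = 2953.9, 2567.7   # EDA revenue, $ millions
q3_2020_emp, q3_2019_emp = 47087, 44950     # headcount

rev_growth = (q3_2020_rev / q3_2019_rev - 1) * 100
emp_growth = (q3_2020_emp / q3_2019_emp - 1) * 100

print(f"Revenue growth:   {rev_growth:.1f}%")   # ~15.0%, matching the report
print(f"Headcount growth: {emp_growth:.1f}%")   # ~4.8%, matching the report
```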

Wally always seems to have a substantial amount of detailed information and broad perspective at his fingertips. This discussion was no different. Wally started by explaining that there have been three times in the past 15 years when EDA growth has reached 15 percent. So, this is about as good as it gets for recent EDA history. Wally pointed out that perhaps you could see a weakness in services revenue, but that’s about it. We talked about that category a bit. Services revenue is defined as support for tool deployment, training and design work. Looking at support and training for tool deployment, however, a positive story can be seen in the numbers. EDA tool flows are becoming more user-driven. The result is a reduced need for expert AE support and training, so a reduction in services can signal a good trend.

Looking at the bigger picture, Wally explained that the semiconductor industry will likely grow five to six percent this year. He went on to explain that EDA typically grows one point more than semiconductor. In the last three years, EDA has been growing about four points faster. EDA is exhibiting accelerating momentum. More good news. IP is another bright spot. As a newer market category for EDA, growth has been large, almost 26 percent year on year. IP is now about 35 percent of the total EDA revenue number.

Another somewhat counter-intuitive aspect of the numbers is the impact of consolidation. M&A activity typically brings synergy to the merged company, resulting in lower EDA spend. Yet we’re seeing a healthy increase. The influx of new players (think Google, Amazon, Facebook and other system companies like that) is neutralizing the consolidation effect. As Wally pointed out, in spite of M&A and the associated headcount reduction, designers are still designers. They find opportunity somewhere else, and thanks to the trend of system design companies bringing semiconductor design in house, design folks find a good job market.

Overall, a very positive story for EDA and semiconductors in the face of a challenging year. It was very helpful to get some of the backstory from Wally about how the ESD Alliance report for Q3 2020 presents an upbeat snapshot that is up and to the right.