
The EUV Divide and Intel Foundry Services

by Scotten Jones on 03-23-2022 at 10:00 am

Intel IDM 2.0 Process Roadmap
The EUV Divide

I was recently updating an analysis I did last year of EUV system supply and demand, and while doing so I started thinking about Intel and their fab portfolio.

If you look at Intel’s history as a microprocessor manufacturer, they are typically ramping up their newest process node (n), in volume production on their previous node (n-1), and ramping down the node before that (n-2). They don’t typically keep older nodes in production; last year, for example, 10nm was n, 14nm was n-1, and 22nm was n-2. Intel had some 32nm capacity in Fab 11X, but that has now been converted to a packaging fab. This contrasts with TSMC, which built its first 130nm 300mm fab in 2001 and is still running it, along with its 90nm, 65nm, 40nm, and 28nm fabs.

By the end of 2022 Intel should be ramping up their 4nm node, then in 2023 their 3nm node, and in 2024 their 20A (2nm) and 18A (1.8nm) nodes should ramp up. All of these are EUV-based nodes, and it would seem reasonable that by the end of 2024 Intel would have little use for non-EUV processes for microprocessor production, since their non-EUV 10nm/7nm nodes would be n-4/n-5 depending on how you treat 10nm versus 7nm.

If I look at Intel’s current and planned fab portfolio, there are EUV-capable fabs and older fabs that are unlikely to ever be used for EUV. EUV tools require an overhead crane, and many of Intel’s older fabs would likely need significant structural modifications to accommodate one. Meanwhile, Intel is building 9 EUV-based production fabs.

The following is a site by site look at Intel’s fabs:

  • New Mexico – Fab 11X phases 1 and 2 are Intel’s oldest production fabs, and they are being converted to packaging fabs. 11X-3D may continue to operate for 3D XPoint; Intel recently discussed two more generations of 3D XPoint, and this is currently the only place it is made.
  • Oregon – Fab D1X phases 1, 2 and 3 now lead all of Intel’s EUV-based development and early production. Fabs D1C/25 and D1D are older development/production fabs that are unlikely to be converted to EUV and are currently being used for non-EUV production.
  • Arizona – Fabs 52 and 62 are EUV fabs under construction. Fab 42 is currently running non-EUV nodes, but it was built as an EUV-capable fab and will likely be used for EUV someday. Fabs 12 and 32 are production fabs running non-EUV nodes and will likely never be converted to EUV.
  • Ireland – Fab 34 is an EUV fab under construction with equipment currently being moved in; this will likely be Intel’s first 4nm EUV node production site. Fab 24 phases 1 and 2 are non-EUV production sites and will likely never be used for EUV (unless they get combined with Fab 34 at some point).
  • Israel – Fab 38 is an EUV fab under construction and will be a 4nm EUV node production site. Fab 28 phases 1 and 2 are non-EUV node production sites and will likely never be used for EUV (unless they get combined with Fab 38 at some point).
  • Ohio – Silicon Heartland EUV based fabs 1 and 2 are in the planning stage.
  • Germany – Silicon Junction EUV based fabs 1 and 2 are in the planning stage.

In summary, Intel is in various stages of running, building, or planning the following EUV-based fabs: D1X phases 1, 2 and 3; Fabs 42, 52, and 62; Fab 34; Fab 38; Silicon Heartland 1 and 2; and Silicon Junction 1 and 2. That is 3 development fabs/phases and 9 EUV-based production fabs.

For non-EUV fabs still running, Intel has D1C/25, D1D, Fabs 12 and 32, Fab 24 phases 1 and 2, and Fab 28 phases 1 and 2. That is 8 non-EUV production fabs. This really puts into perspective why Intel would want to get into the foundry business and support trailing-edge processes. All of these fabs can be used to produce any of Intel’s non-EUV 10nm/7nm and larger processes, plus, likely with reasonable changes in equipment sets, any of the processes they will be acquiring through the Tower acquisition.

Déjà vu all over again

Yogi Berra is famous for being humorously quotable, and one of his best-known quotes is “It’s déjà vu all over again.”

The last time Intel tried to get into the foundry business, they failed to gain much traction. Foundry was a second-class citizen at Intel, they didn’t have the design ecosystem, and they eventually exited the business. One thing that bothered me about that effort, and in my opinion sent the message to foundry customers that foundry was second class, was that Intel would develop a new process node, 32nm for example, introduce a high-performance version for internal use, and then a year later introduce the foundry (SoC) version.

Recently I saw an interview with Pat Gelsinger in which he talked about 4nm being an internal process for Intel and 3nm being the foundry version. 3nm is currently expected to come out approximately a year after 4nm. He then talked about 20A as an internal process and 18A as the foundry version; 18A is due to come out 6 to 9 months after 20A. I don’t think foundry customers will accept always being 6 to 12 months behind the leading edge, and I think it sends the wrong message. He did say that if a foundry customer really wanted to use 4nm they could, but he seemed to view 4nm and 20A as processes that should be proven internally before the next version is released more widely.

I do think Intel has an interesting opportunity. There is a shortage of foundry capacity at the trailing edge, where Intel will likely be freeing up a lot of fab capacity, and there is a shortage at the leading edge as well. In addition, there is a need for a second source at the leading edge. Samsung has a long history of over-promising and under-delivering on technology and yield. Companies like Qualcomm have repeatedly tried to work with Samsung so they aren’t wholly dependent on TSMC, and have been repeatedly forced back to TSMC. The latest example is Qualcomm’s Snapdragon 8 Gen 1, which is reported to have only 35% yield on Samsung’s 4nm node. If Intel can execute on their technology roadmap on a consistent basis with good yield, they can likely pick up a lot of second-source and maybe even some primary-source leading-edge business, particularly at Samsung’s expense. I could even see a company like Apple giving Intel some designs to strengthen their negotiating position with TSMC. I wouldn’t expect MediaTek, a Taiwanese company located near TSMC, or AMD or NVIDIA, due to competitive concerns, to work with Intel, but never say never.

EUV Shortage

As I mentioned at the outset, it is an EUV supply and demand analysis I have been doing that triggered these EUV-gap ideas. As outlined above, Intel plans to build out and equip 9 EUV-based fabs. At the same time, TSMC 5nm is widely believed to have ended 2021 at 120 thousand wafers per month of capacity, and TSMC has announced they expect to double that end-of-2021 5nm capacity by the end of 2024, before the Arizona 5nm fab comes online. TSMC has talked about 3nm being an even bigger node than 5nm and has also started planning a four-phase 2nm fab, with a second site in discussion. Samsung started using EUV for one layer on their 1z DRAM and then 5 layers on their 1a DRAM; Samsung is planning a new EUV-based logic fab in Texas and is building out logic and DRAM capacity in Pyeongtaek. SK Hynix has started using EUV for DRAM, Micron has pulled in their DRAM EUV use from the delta to the gamma generation, and even Nanya is talking about using EUV for DRAM. This raises the question: will there be enough EUV tools available to support all of these needs? My analysis is that there won’t.

In fact, I believe there will be demand for 20 more EUV tools than ASML can produce in each of the next 3 years. To put that in perspective, ASML shipped 42 EUV systems in 2021 and is forecasting 55 systems in 2022. Interestingly, I saw a story today in which Pat Gelsinger commented that he is personally talking to the CEO of ASML about system availability and admitted that EUV system availability will likely gate the ability to bring up all the new fabs.

I think another impact of the EUV system shortage will be a different view of which layers to use EUV on. If a layer is currently done with multi-patterning more complex than double patterning, EUV is generally cheaper. EUV also enables simpler design rules, more compact layouts, and potentially better performance, and it will be even more important as the switch is made to horizontal nanosheets. I believe companies will be forced to prioritize EUV for the layers where it has the most impact and continue to use multi-patterning for the others.

Also Read

Intel Evolution of Transistor Innovation

Intel Discusses Scaling Innovations at IEDM

Intel 2022 Investor Meeting


Webinar: Simulate Trimming for Circuit Quality of Smart IC Design

by Daniel Nenni on 03-23-2022 at 6:00 am


Advanced semiconductor nanometer technology nodes, together with smart IC design applications, today enable very complex and powerful systems for communication, automotive, data transmission, AI, IoT, medical, industrial, and energy harvesting applications, and many more.

However, ever more aggressive time-to-market and performance requirements force IC designers to look for advanced and seamless design flows, with tools and methodologies to overcome these challenges. In this context, for high-precision circuit applications, quality trimming is becoming a very important step before tape-out, because the increased performance variation induced by process statistics cannot be reduced at the design level alone.

Most of today’s trimming applications are based on Monte Carlo analysis, ensuring that a trimming step is executed for each simulation sample. Unfortunately, most of the time this task requires custom scripts to set up the right sequence of multiple simulations and is not reliable for high-sigma robustness applications (beyond 3 sigma) with long-tail distributions. MunEDA provides an enhanced Dependent Test feature for circuit trimming within its EDA design tool suite WiCkeD. For each simulation sample, this ensures an easy-to-use and seamless trimming procedure as well as controlled switching of operating conditions, suitable for circuit verification, high-sigma robustness analysis, and circuit sizing/optimization.
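The per-sample trimming idea can be sketched in a few lines. The following Python sketch is not MunEDA’s tool flow, just an illustration of the concept, with made-up trim step and sigma values: each Monte Carlo sample gets its own trim code chosen from that sample’s measurement, and the post-trim spread collapses accordingly.

```python
import random
import statistics

random.seed(0)

TRIM_STEP = 1.0e-3            # hypothetical 1 mV correction per trim code
TRIM_CODES = range(-16, 17)   # hypothetical 5-bit signed trim range

def measured_offset(process_offset, trim_code=0):
    """Stand-in for a circuit simulation: residual offset after trimming."""
    return process_offset - trim_code * TRIM_STEP

pre_trim, post_trim = [], []
for _ in range(1000):                       # Monte Carlo samples
    offset = random.gauss(0.0, 5.0e-3)      # process-induced offset, sigma = 5 mV
    pre_trim.append(measured_offset(offset))
    # Dependent test: the trim code is chosen from this sample's own measurement
    best = min(TRIM_CODES, key=lambda c: abs(measured_offset(offset, c)))
    post_trim.append(measured_offset(offset, best))

print(statistics.stdev(pre_trim), statistics.stdev(post_trim))
```

In a real flow the measurement would be a SPICE simulation and the trim rule could be a script or Verilog model, but the dependency is the same: the second simulation of each sample is a function of the first.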

In this webinar we’ll discuss typical applications of trimming methods and show how to set up sequences of multiple dependent simulations in MunEDA’s WiCkeD circuit sizing and verification tool suite. We’ll then discuss how process variation, local variation, and temperature and Vdd variation have to be treated differently for a correct analysis result. The measurements before trimming are usually taken at fixed operating conditions, whereas after trimming the circuit has to work at multiple operating conditions.

For documentation purposes, the performance variation with and without trimming has to be simulated. Simulating the trimming procedure involves different methods of calculating trimming settings from the initial measurement results. We’ll discuss ways to use scripts, Verilog code, or simulator outputs to decide on trimming settings.

After setting up the simulation procedure, a typical analysis step is standard Monte Carlo. But since the simulation setup in MunEDA WiCkeD is general and not limited to Monte Carlo, it can run sensitivity, optimization, and high-sigma robustness analyses with the trimmed circuit as well.

The topic of parametric high-sigma analysis is especially interesting for post-trimming performance analysis, because the distribution shape of trimmed performance metrics often deviates significantly from the normal distribution.

Here is the REPLAY

The speaker is Michael Pronath. I have known Michael for many years and enjoy working with him. He has a PhD in electronics and has been with MunEDA since the beginning; today he is VP of Products and Solutions. With 20+ years of field experience, Michael is an excellent speaker and worth listening to, absolutely.

Also Read

Webinar: AMS, RF and Digital Full Custom IC Designs need Circuit Sizing

WEBINAR: Using Design Porting as a Method to Access Foundry Capacity

Numerical Sizing and Tuning Shortens Analog Design Cycles


Co-Developing IP and SoC Bring Up Firmware with PSS

by Kalar Rajendiran on 03-22-2022 at 10:00 am


With ever more challenging time-to-market requirements, co-developing IP and firmware is imperative for all system development projects. But that doesn’t make the task any easier, and depending on the complexity of the system being developed, it only gets tougher. For example, different pieces of IP may be the output of various teams distributed geographically, and some may be sourced from third-party IP suppliers. SoC integration testing not only has to contend with the quality and reliability of the IP blocks but also with testing various operating scenarios. And then there is the matter of verification.

Verification of complex SoC designs involves multiple platforms, including emulation and FPGA prototyping. Each platform usually requires its own way of specifying the tests, which results in a lot of time and effort spent recreating the same test information for each platform on the same project. What if there were a way to describe the verification intent as a single abstract specification? Accellera Systems Initiative (Accellera) developed the Portable Test and Stimulus Standard (PSS) to address this need. Accellera is an independent, not-for-profit organization dedicated to creating, supporting, promoting, and advancing system-level design, modeling, and verification standards for use by the worldwide electronics industry.

Portable Test and Stimulus Standard

The following is a description from the Accellera website.

“The Portable Test and Stimulus Standard (PSS) defines a specification to create a single representation of stimulus and test scenarios usable by a variety of users across many levels of integration under different configurations. This representation facilitates the generation of diverse implementations of a scenario that run on a variety of execution platforms, including, but not necessarily limited to, simulation, emulation, FPGA prototyping, and post-silicon. With this standard, users can specify a set of behaviors once and observe consistent behavior across multiple implementations.”

Can PSS help with co-developing IP and SoC bring-up firmware? This is the topic of a talk by Matthew Ballance from Siemens EDA at the 2022 DVCon. He presents details of leveraging PSS for co-developing and co-verifying IP and firmware that eases SoC integration testing challenges. Matthew is a senior principal product engineer and portable stimulus technologist. The following is a synopsis of the salient points from his talk.

While it is straightforward to write directed and constrained-random UVM sequences to exercise IP behavior, integration tests at the SoC level are more complicated. With many different pieces of IP being integrated, there are lots of scenarios to exercise, and ensuring the interoperability of multiple drivers becomes a challenge.

Interoperability Framework

While the need for interoperability exists, there is no need to reinvent the wheel to implement a framework. A real-time operating system (RTOS) provides such a framework, since it already has to deal with multiple drivers from different sources. An RTOS is also designed to work at very low power with constrained resources. Matthew uses Zephyr, an open-source RTOS, in his presentation. For a typical test configuration, the memory footprint of the Zephyr RTOS image is around 8KB, which is attractive for an SoC verification environment.

Creating Drivers

Creating device drivers in C code calls upon the same knowledge and skills that UVM sequence writers use with SystemVerilog. The Zephyr RTOS specifies a format for drivers in the system and defines the data types and structures that fit into the driver framework, which makes it easy to configure the various aspects of the drivers. Zephyr defines several standard APIs for common devices such as DMAs or timers but can support custom APIs as well. Refer to the Figure below for a sample piece of driver code.
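Zephyr drivers themselves are written in C as structs of function pointers conforming to a device API. The Python sketch below only mirrors that pattern (the class and method names are illustrative, not the actual Zephyr API), to show how a fixed API lets the framework treat drivers from different sources uniformly:

```python
class DmaApi:
    """Illustrative 'standard API' every DMA driver must implement
    (in Zephyr this is a C struct of function pointers)."""
    def config(self, channel, cfg):
        raise NotImplementedError
    def start(self, channel):
        raise NotImplementedError

class FakeDmaDriver(DmaApi):
    """A concrete driver filling in the standard API for one device."""
    def __init__(self):
        self.channels = {}
    def config(self, channel, cfg):
        self.channels[channel] = dict(cfg, started=False)
    def start(self, channel):
        self.channels[channel]["started"] = True

# The framework binds devices by name and only ever sees the standard API,
# so drivers from different teams or vendors interoperate automatically.
devices = {"dma0": FakeDmaDriver()}

dma = devices["dma0"]
dma.config(0, {"src": 0x1000, "dst": 0x2000, "len": 64})
dma.start(0)
print(dma.channels[0]["started"])  # True
```

The design point is that test code is written against the API, never against a particular driver, which is what makes drivers from different IP teams composable at SoC integration time.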

PSS Building Blocks

There are two parts to a PSS model: the first addresses the test intent and the second handles the test realization. This structure allows scenarios to be modeled in terms of high-level abstractions in the test intent section, reserving the low-level details for the test realization section. It also makes it easier to create multiple tests, just as is done in SystemVerilog, by changing the seed to create new test variants. What is handed off to the SoC team includes not just RTL deliverables but also the PSS code along with the driver code. At the SoC level, all IP blocks and firmware are integrated and managed under the Zephyr RTOS framework.

Refer to the Figure below for a sample piece of PSS code. The bottom portion of this code segment is a call to the driver code. In this example, it is an action for doing a memory-to-memory transfer on a DMA channel. The call contains the details of the channel id and spells out the resource that the transfer requires.
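PSS is its own declarative language, so the following Python sketch is only an analogy (all names here are invented): the “intent” part randomizes an action’s fields within constraints, the “realization” part is the low-level call into driver code, and changing the seed yields new test variants from the same model.

```python
import random

# Test realization: the low-level call into (fake) driver firmware
def dma_mem2mem(channel_id, size):
    return f"dma[{channel_id}] copy {size} bytes"

# Test intent: an abstract action whose fields randomize within constraints
class Mem2MemAction:
    CHANNELS = 4                 # constraint: the DMA has 4 channels
    SIZES = (64, 128, 256)       # constraint: legal transfer sizes
    def __init__(self, rng):
        self.channel_id = rng.randrange(self.CHANNELS)
        self.size = rng.choice(self.SIZES)
    def run(self):
        return dma_mem2mem(self.channel_id, self.size)

# A new seed = a new test variant generated from the same model
rng = random.Random(1)
log = [Mem2MemAction(rng).run() for _ in range(3)]
for line in log:
    print(line)
```

A real PSS tool additionally solves resource and scheduling constraints across many such actions and can emit the scenario for simulation, emulation, or silicon, which is where the portability comes from.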

Summary

Verification at the SoC level can be performed efficiently using PSS. A key requirement for this is, of course, making lower-level driver firmware available to be called via PSS actions. More details about creating device drivers and connecting PSS to them can be found in a technical paper authored by Matthew. Zephyr RTOS is one way to enable multiple firmware modules to interoperate. Co-developing IP and firmware this way expands the list of IP deliverables: in addition to the usual RTL, driver firmware and reusable PSS code are also handed over to the SoC verification team.

Matthew’s DVCon talk can be accessed here by registering at the DVCon website.

Matthew’s presentation slides can be downloaded here.

Matthew’s technical paper can be downloaded here.

Also read:

Leveraging Virtual Platforms to Shift-Left Software Development and System Verification

Using a GPU to Speed Up PCB Layout Editing

Dynamic Coherence Verification. Innovation in Verification


Optimizing AI/ML Operations at the Edge

by Tom Simon on 03-22-2022 at 6:00 am


AI/ML functions are moving to the edge to save power and reduce latency. This enables local processing without the overhead of transmitting large volumes of data over power hungry and slow communication links to servers in the cloud. Of course, the cloud offers high performance and capacity for processing the workloads. Yet, if these workloads can be handled at the edge, albeit with reduced processing power, there is still likely to be a net advantage in power and latency. In the end it boils down to the performance of the edge based AI/ML processor.

As a Codasip white paper points out, embedded devices are typically resource constrained. Without the proper hardware resources, AI/ML at the edge will not be feasible. The white paper titled “Embedded AI on L-Series Cores” states that conventional microcontrollers, even when they have FP and DSP units are hard pressed to run AI/ML. Even with SIMD instructions there is still more required to achieve good results.

Google’s introduction of TensorFlow Lite for Microcontrollers (TFLite-Micro) in 2021 has opened the door to edge-based inference on hardware targeted at IoT and other low-power, small-footprint devices. TFLite-Micro uses an interpreter with a static memory planner and, most importantly, supports vendor-specific optimizations. It runs out of the box on just about any embedded platform, delivering operations such as convolution, tensor multiplication, resize, and slicing. But the domain-specific optimizations it offers mean that further improvements are possible through embedded processor customization.

Codasip offers configurable application specific processors that can make good use of the TensorFlow Lite-Micro optimization capability. The opportunity for customization arises because each application will have its own neural network and training data. This makes it advantageous to tailor the processor for the particular needs of its specific application.

All of Codasip’s broad spectrum of processors can run TFLite-Micro. The white paper focuses on their L31 embedded core running the well-known MNIST handwritten digit classification dataset with a neural net comprising two convolutional and pooling layers, at least one fully-connected layer, vectorized nonlinear functions, and data resize and normalization operations.

During the early stages of the system design process, designers can use Codasip Studio to profile the code and see where things can be improved. In Codasip’s example, ~84% of the time is spent in the image convolution function. Looking at the source code, they identify the code using the most CPU time, and from the disassembler output they determine that creating a new instruction combining the heavily repeated mul and c.add operations will improve performance. Another change they evaluate is replacing vector loads with byte loads using an immediate address increment.
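To see why fusing the repeated mul and c.add pair pays off, consider the inner loop of a convolution. The back-of-the-envelope count below is illustrative arithmetic, not a cycle-accurate model of the L31: every tap currently costs one multiply plus one accumulate, so a fused multiply-accumulate instruction halves the arithmetic instruction count in that hot loop.

```python
# Operation count for a 3x3 valid convolution over a 28x28 image
# (MNIST-sized), before and after fusing mul + add into one MAC.

image_h = image_w = 28
kernel = 3
out_h = out_w = image_h - kernel + 1    # 26x26 output
taps = kernel * kernel                  # 9 taps per output pixel
pixels = out_h * out_w

muls = pixels * taps                    # one multiply per tap
adds = pixels * taps                    # one accumulate per tap
fused_macs = pixels * taps              # one fused instruction per tap

print(muls + adds, fused_macs)          # the hot-loop instruction count halves
```

Since ~84% of runtime sits in this function, halving its arithmetic instruction stream is exactly the kind of win profiling is meant to surface.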


The Codasip Studio profiler can provide estimates of the processor’s power and area, which helps designers choose between standard variants of the L31 core. In this case they explored the effects of removing the FPU. TFLite-Micro supports quantization of neural network parameters and input data, and with integer-only data the FPU can be dispensed with. There is of course a trade-off in accuracy, but this can be evaluated at this stage of the process as well. The table below shows the benefits of moving to integer data and a quantized neural model.
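The FPU-removal trade can be illustrated with a toy quantized dot product. This sketch is not TFLite-Micro’s actual kernel (its int8 scheme uses per-tensor scales and zero points), and the numbers are made up, but it shows the idea: scale floats into int8, accumulate with integers only, and dequantize once at the end, at a small cost in accuracy.

```python
# Toy int8 quantized inference: integer-only multiply-accumulate with a
# single dequantization at the end (values invented for illustration).

weights = [0.12, -0.51, 0.33, 0.08, -0.27]
inputs  = [0.90, 0.10, -0.40, 0.70, 0.20]

def quantize(xs):
    """Symmetric int8 quantization: map the largest magnitude to 127."""
    scale = max(abs(x) for x in xs) / 127
    return [round(x / scale) for x in xs], scale

qw, w_scale = quantize(weights)
qx, x_scale = quantize(inputs)

# What an FPU-less core runs: pure integer MACs
acc = sum(w * x for w, x in zip(qw, qx))
approx = acc * w_scale * x_scale        # dequantize once

exact = sum(w * x for w, x in zip(weights, inputs))
print(exact, approx, abs(exact - approx))
```

The accumulation itself never touches floating point, which is what lets the FPU be dropped; the residual error is the accuracy trade-off the table quantifies.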

The Codasip white paper concludes with a closer look at how the L31 operates in this use case with the new instructions and compares it to the baseline before the instructions were added. Using their software tools, it is possible to see the precise savings, and having this kind of control over the performance of an embedded processor can provide a large advantage in the final product. The white paper also shows how Codasip’s CodAL language is used to easily create the assembly encoding for new instructions, making it easy to iterate while defining new instructions to achieve the best results.

To move AI/ML operations to the edge, designers must look at every avenue to optimize the system. To gain the latency improvements and overall power savings that edge-based processing promises, every effort must be made to make the power and performance profile of the embedded processor as good as possible. Codasip demonstrates an effective approach to these challenges. The white paper is available for download on the Codasip website.

Also read:

Webinar: From Glass Break Models to Person Detection Systems, Deploying Low-Power Edge AI for Smart Home Security

Getting to Faster Closure through AI/ML, DVCon Keynote

Unlocking the Future with Robotic Precision and Human Creativity


Webinar Series: Learn the Foundation of Computational Electromagnetics

by Matt Commens on 03-21-2022 at 10:00 am


The electromagnetism problems we spent many hours laboring over in college homework have a mathematical formulation originally developed by Maxwell, Lorentz, Gauss, Faraday, and others. In their full form, these are partial differential equations that come in both differential and integral versions.
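For reference, the four equations in differential (SI, vacuum) form are:

```latex
\begin{align}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} && \text{(Gauss's law)} \\
\nabla \cdot \mathbf{B} &= 0 && \text{(no magnetic monopoles)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} && \text{(Faraday's law)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} && \text{(Amp\`ere--Maxwell law)}
\end{align}
```

Each has an equivalent integral form via the divergence and Stokes theorems, which is what the integral-equation solver families discretize.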

The resulting set of equations is elegant in form but complex in usage. Only the simplest systems have closed-form solutions, yet a multitude of systems that could potentially be analyzed with Maxwell’s Equations are beyond the scope of pen and paper. The need to solve these more complex problems ultimately led to the birth of Computational Electromagnetics, a field of engineering and applied physics in which Ansys has been at the forefront for over three decades.

Since joining the HFSS team in 2001, I have personally benefitted from working with the very engineers and developers who have built HFSS over these decades. By understanding the many challenges they have overcome, such as scale and speed, I have gained a deeper knowledge of computational electromagnetics as well as a solid understanding of how to effectively leverage these solutions. No doubt others would benefit as well from the deeper background knowledge I was fortunate to accumulate over the years.

In order to provide this deeper insight to a broader audience, the leading technologists in Computational Electromagnetics at Ansys will publicly discuss, in both breadth and depth, the work they’ve done over the years. A five-part Electromagnetics Foundation webinar series launches on March 22nd. Ansys experts will pull back the curtain on the EM solvers they’ve developed over the last 30 years. The discussions will detail numerical methods (time vs. frequency approaches, integral vs. differential, etc.), analytical methods such as full-wave, asymptotic, quasi-static, and shooting and bouncing rays (SBR), the differences between high- and low-frequency analysis (from electric motors up to photonic wavelengths), modeling and simulation scalability from micro to macro, the applicability of distributed computing to EM simulation, and numerous application examples.

The webinars are spaced roughly 1-2 weeks apart, and are available for registration here:

Electromagnetics Foundation Webinar Series | Ansys

And here’s the lineup:

Foundations of Computational Electromagnetics

An overview of different approaches used for computational electromagnetics and the trade-offs for choosing the best solution method.

An Overview of the Foundations of HFSS and Maxwell Solver Technologies

Figure 1 – HFSS 3D Layout

A review of the various numerical methods (finite elements, integral equations, etc.) included in HFSS and Maxwell.

The Foundation of Domain Decomposition Technologies in HFSS

A theoretical overview of domain decomposition formulations and a dive into how the HFSS solvers have evolved over the past decade.

Learning Ray Tracing Methods Foundations for Electromagnetics

Figure 2 – 5G Antenna Array

Foundations of shooting and bouncing rays (SBR) as a computational electromagnetic (CEM) methodology.

The Foundation of Computational Optics and Photonics

Covers the basics of ray tracing, surface and volume scattering models, full-wave time and frequency domain electromagnetic solvers for optics, photonics and quantum photonic effects.

This is a unique opportunity to learn how electromagnetic simulation is done at the very bleeding edge.

Also read: 

The Clash Between 5G and Airline Safety

The Hitchhiker’s Guide to HFSS Meshing

The 5G Rollout Safety Controversy


Alphawave IP and the Evolution of the ASIC Business

by Daniel Nenni on 03-21-2022 at 6:00 am


Alphawave IP has agreed to acquire OpenFive, a SiFive business unit (formerly Open-Silicon), for $210M in cash. Having spent many years in the ASIC business, which included working with Open-Silicon, Alphawave, and OpenFive, here is my perspective on the acquisition:

This acquisition accomplishes two things. First, it trims down SiFive as they march to IPO. In concert with this acquisition, SiFive raised an additional $175M at a more than $2.5B valuation, double-unicorn status, which is a first for a semiconductor IP company.

Last year it was rumored that SiFive was in acquisition discussions with Intel, which I can confirm, but the valuation was too high. Intel CEO Pat Gelsinger has a strong acquisition background and many opportunities. He also passed on GlobalFoundries for the much smaller Tower Semiconductor, which I think was an excellent move for both companies: GF’s IPO is booming, and Israel-based Tower Semi is a perfect fit to run the Intel Foundry business. The same goes for this acquisition. SiFive will successfully IPO, and Alphawave IP will do quite well with the ASIC experts at OpenFive. This is one of the rare 1+1=3 semiconductor acquisitions, absolutely.

With OpenFive, Alphawave now competes in the multibillion-dollar ASIC business with the likes of Marvell, which acquired the ASIC businesses of GF and eSilicon, and Broadcom, which has the Avago/LSI Logic ASIC business.

You will also see Alphawave come out with standard products (my opinion) like Marvell and Broadcom, putting them in the chip big leagues. Thanks to OpenFive, Alphawave expects to hit $500M in revenue in 2024, and I expect them to hit $1B not long after that. Yes, this acquisition is that good, and I am sure more acquisitions will follow.

Alphawave and OpenFive did a much more detailed press release than we usually see for events like this so it is definitely worth a read. Here is the link to the PDF and some highlights:

  • This acquisition will nearly double the number of connectivity-focused IPs available to Alphawave customers from 80 to over 155 and will provide customers with a one-stop-shop for their bundled connectivity needs in the most advanced technologies at 5nm, 4nm, 3nm and beyond. This will include an expanded die-to-die connectivity portfolio that will accelerate chiplet delivery capabilities to customers. Alphawave has also licensed RISC-V processor IPs from SiFive as part of the transaction.
  • OpenFive’s proven silicon development team enables Alphawave to offer leading edge data centre and networking custom silicon solutions as well as enhancing its chiplet design capabilities. This accelerates Alphawave’s strategic goal to scale revenues by monetising its leading connectivity IP not only through IP licensing but advanced custom silicon design.
  • The combination of Alphawave’s leading high-speed connectivity with OpenFive’s IP portfolio is expected to generate material revenue synergies through bundling of IP and integrated IP sub-systems as well as leveraging the two companies’ respective strengths to win complex custom silicon design wins at leading edge process nodes.
  • The transaction will be immediately EPS accretive to Alphawave. Forecast FY 2023 revenue for the combined group is anticipated to reach between US$325m and US$360m, with a path to a yearly revenue run rate of over US$500m in 2024. 2023 adjusted EBITDA margins for the group are expected to be between 32% and 36%, with 2025 adjusted EBITDA margins between 40% and 45% as revenues exceed US$500m.

My good friend Paul McLellan and I wrote up the history of the ASIC business in our book “Fabless: The Transformation of the Semiconductor Industry”. Chapter two, “The ASIC Business”, includes a brief history of two pioneering companies: VLSI Technology (now part of NXP) and eSilicon (now part of Marvell). It is interesting to note that, like many semiconductor market segments, the ASIC business has come full circle and will boom again. But instead of the fabless transformation powering the ASIC business, it will be domain-specific chips by system companies, absolutely.

Also read:

Demand for High Speed Drives 200G Modulation Standards

The Path to 200 Gbps Serial Links

Enabling Next Generation Silicon In Package Products


No Traffic at the Crossroads

by Roger C. Lanctot on 03-20-2022 at 10:00 am

NoTraffic Safety Impact 2022

The Federal Highway Administration in the U.S. tells us that “each year roughly one–quarter of all traffic fatalities and about one–half of all traffic injuries in the United States are attributed to intersections.” Intersections are clearly a challenge for human drivers, and the dirty little automotive industry secret is that intersections are an even bigger problem for computer-driven vehicles.

While humans run red lights – resulting in 700-800 fatalities annually – computer-driven cars struggle to accurately identify the presence of traffic signals and stop signs and proceed appropriately without human assistance. Intersections are the great Achilles heel of autonomous driving – aside from all of the other weaknesses and unsolved problems these vehicles face – and remain an enormous challenge for transportation engineers.

One company, NoTraffic, has brought new thinking and new technology to municipalities. The two key insights that NoTraffic has employed are:

  1. Developing proper metrics for categorizing and quantifying the types of crashes that occur at intersections: left turn, rear-end, red-light running, or right turn.
  2. Providing tools for communicating with drivers who are approaching intersections, alerting them to the signal phase and timing of the light AND the potential for pedestrians in the crosswalk or a red-light runner.

We hear a lot about connected cars these days. What we don’t hear much about is connected traffic lights and infrastructure. This is where NoTraffic comes in.

NoTraffic takes traditional disconnected infrastructure solutions – manifest in all those road-side boxes visible near traffic light installations – and digitizes them, thereby allowing them to become part of a managed, always-connected grid.

Nowhere is this transformation and its impact clearer than in NoTraffic’s approach to red-light runner mitigation. Rather than implementing enforcement cameras and the corresponding terror, confusion, and anger of impacted drivers, NoTraffic takes a more sophisticated, calibrated approach.

NoTraffic starts with measuring the extent of the problem. You can’t mitigate what you can’t measure.

NoTraffic breaks red-light runners into three “tiers”:

  • Tier 1: A vehicle in the intersection during the yellow phase, with no conflicting green.
  • Tier 2: A vehicle in the intersection during the red phase, but the intersection is in an all-red interval (no one has a green light).
  • Tier 3 (most dangerous): A vehicle in the intersection during the red phase while there is a conflicting green.
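The tier logic above is simple enough to express directly. Here is a minimal Python sketch of the classification – the function name and inputs are illustrative assumptions, not NoTraffic’s actual implementation:

```python
def classify_red_light_runner(phase, conflicting_green):
    """Tier (1-3) for a vehicle detected in the intersection, else None.

    phase: the signal phase facing the vehicle ("green", "yellow", "red").
    conflicting_green: True if any crossing movement currently has a green.
    """
    if phase == "yellow" and not conflicting_green:
        return 1  # Tier 1: in the intersection on yellow, no conflicting green
    if phase == "red" and not conflicting_green:
        return 2  # Tier 2: all-red interval, no one has a green light
    if phase == "red" and conflicting_green:
        return 3  # Tier 3 (most dangerous): conflicting traffic has a green
    return None   # not a red-light-running event
```

The ordering matters for risk analytics: a Tier 3 event involves live conflicting traffic, which is why it is the most dangerous category.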

Significantly, NoTraffic both measures and mitigates the problem – identifying troublesome times of the day when spikes in red-light running tend to occur – and then identifying solutions that typically do not require red light enforcement cameras.

Based on cameras plus Wi-Fi and cellular connectivity along with cloud-based analytics, NoTraffic’s solution provides three layers of safety benefits while also leveraging existing installations:

  1. A grid-level view into dangerous intersections, on a real-time basis, allowing cities to take precise measures – i.e. deploy local police to deter red-light runners at specific intersections and hours of the day (for example, intersections 1, 3, and 5, during rush hour traffic).
  2. Reduction in the potential for red-light-running crashes: by shortening delay times and queue lengths – in the example illustrated above, NoTraffic reduced the number of red-light runners, potentially minimizing the number of life-threatening crashes.
  3. Real-time notifications: by connecting urban intersections to a managed grid via V2X-enabled IoT sensors, alerts can be sent to road users – connected vehicles, pedestrians, or cyclists – warning of a vehicle about to cross an intersection on red, again potentially minimizing the number of life-threatening crashes.

The data in the chart (above) represents a two-week window, gathered in April and May 2021 (two weeks in each month) in a major U.S. city where NoTraffic was deployed along a 1.8-mile corridor across five intersections. One of the intersections shows a slight increase during the mitigation period, which might be noise or a shorter cycle length for the reversible lane.

Most important of all, NoTraffic’s connected infrastructure solution is capable of communicating via cellular wireless technology with approaching vehicles or nearby pedestrians to warn of an identified red-light runner.

This is the future of connected infrastructure – with intersections connected to one another and to approaching vehicles and pedestrians. NoTraffic is more than a little ahead of its time – or maybe it’s right on time. The NoTraffic solution offers the promise of reducing the number of red-light runners or at least warning drivers and pedestrians when they occur. Perhaps just as important, the NoTraffic approach will lend a helping hand to hopeless autonomous vehicles.

Also read:

GM’s Super Duper Cruise

Emergency Response Getting Sexy

Waymo Collides with Transparency


Facebook or Meta: Change the Head Coach

by Ahmed Banafa on 03-20-2022 at 6:00 am


The title of this article shows one side of the problem with #Meta or Facebook: people still say the old name and then add “or whatever their name is now….” But let me get down to the main points with an example: Facebook changing its name to Meta is like repainting an old house with cracks and an outdated design. The paint doesn’t make it new; the minute people look carefully or walk through, they know it’s old.

This is the story of Facebook (I will use this name, as it has been the well-known brand for years; forget about Meta for now). I compare #Facebook to an NFL team hit with bad news after bad news: losing daily visitors, shutting down its cryptocurrency project, admitting defeat by TikTok, and losing the last ounces of user trust with the whistleblower in 2021. Keeping all that in mind, it’s time to change the head coach – and I mean both Mark Zuckerberg and Sheryl Sandberg. They have run their course; there are no more new tricks, the magic is gone, and Facebook needs new faces and a new direction. You can’t do the same thing and expect different results (yes, there is a name for that).

Let’s see how Facebook can be saved:

First, Mark Zuckerberg is hitting his 18th year of running the company; it is time for him to step down like many other founders/CEOs of top Silicon Valley companies. Instead of trying new products, Facebook should try a new CEO, and both Mark and Sheryl can enjoy their time doing something else besides running the company’s daily business.

Second, spin off Reels, the underperforming competitor of TikTok. I checked both and the differences are clear: #TikTok is a happy place with all the action, filters, and funny videos, while Reels is a stripped-down version of Instagram – no excitement, no vibes – and it’s still part of Instagram, not to mention the confusing steps for uploading a video. So, as I said, spin it off and change the name to something Gen-Z and Millennials will love to talk about.

Third, make the Metaverse a separate company away from Facebook. Mark can run it and experiment with it, but away from the social media business (Facebook, Instagram, WhatsApp, Messenger) – like someone trying a new idea in the lab. If it fails, as the cryptocurrency project did, there is no impact on the other parts of the business and no damage to the brand. It is a very expensive experiment, but if that is what it takes to keep Mark away from the rest of the company, it’s worth it.

It’s 2022, and the tricks and techniques of the past will not work now. This is a new era, with generations who see their lives through apps that define their personalities and open opportunities for them, not through vague terms like #Metaverse that will only take shape years in the future.

Also, instead of invading privacy at every turn, Facebook’s new management should try to be honest and clear with users about their data. Apple figured out that “privacy” is the key to blocking competition and gaining customers; TikTok talked to young people and gained access to their mindsets; but Facebook is still serving the drinks and food of the early 2000s.

Let me say this one more time: Mark Zuckerberg was the driving force behind Facebook. Now he is the brake holding back any progress at the company. Ten years ago, no one could imagine Facebook without Mark; now many want to envision the company without him so it can do better.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing


Podcast EP67: Corigine Combines Emulation and Prototyping

by Daniel Nenni on 03-18-2022 at 10:00 am

Dan is joined by Jeff Critten, VP of sales at Corigine. They discuss the unique capabilities of Corigine that allow support of both emulation and prototyping in one platform.

Jeff Critten has been in the EDA industry for over 25 years. He started with Cadence as a verification AE in 1997 and moved into a sales role, where he was promoted to Sales Director running major accounts such as Intel, Broadcom, Marvell, and numerous others. He left Cadence to try a cloud startup, and that gamble didn’t bear fruit. He recently joined Corigine in Q4 as VP of Sales for their new Mimic product line, which unifies the functions of an emulator and a prototyping board into a single platform priced like a prototyping board. Jeff has an MSc from the University of Waterloo and a BSc from the University of British Columbia.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Aki Fujimura of D2S

by Daniel Nenni on 03-18-2022 at 6:00 am


Curvilinear Design Primer for Design, Packaging Communities

This interview was done by Bob Smith, Executive Director, ESD Alliance, a SEMI Technology Community.

Previously, Fujimura served as CTO at Cadence Design Systems and returned to Cadence for the second time through the acquisition of Simplex Solutions where he was President/COO and inside board member. He was also an inside board member and VP at Pure Software. Simplex and Pure both went public during his tenure. Fujimura was a founding member of Tangent Systems, subsequently acquired by Cadence Design Systems. He was a board member of HLDS, RTime, Bristol, S7, and Coverity, Inc., all of which were successfully acquired. Fujimura received Bachelor of Science and Master of Science degrees in Electrical Engineering from MIT.

Semiconductor packaging and photomask segments of our industry have undergone some major technology changes in the past few years after relatively minor changes for many years. In the case of photomasks, new technologies such as multi-beam mask writers and extreme ultraviolet (EUV) lithography are major breakthroughs in the news as they ramp into high-volume manufacturing. A new trend related to these technologies is the use of curvilinear features on photomasks.

Curvilinear photomasks are here today, and they are particularly interesting to the ESD Alliance because they open the door to “curvy” design. Aki Fujimura, CEO of D2S and a member of the ESD Alliance Governing Council, speaks to me about curvilinear photomasks and what they mean for design and packaging.

BS: What are the advantages of curvilinear photomasks?

AF: First let me explain what we mean by curvilinear photomasks. Shapes consisting of axis-parallel edges are sometimes referred to as Manhattan geometries. Shapes that do not need to be Manhattan geometries are considered curvilinear in the context of our discussion.

Curvilinear mask features have been shown not only to print more accurately – largely because 90-degree corners cannot be accurately reproduced anyway – but also to print more reliably, with less variation. This is good for both mask and wafer quality.
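To make the Manhattan/curvilinear distinction concrete, here is a small Python sketch – an illustrative test of my own, not any production EDA check – that flags a polygon as Manhattan only if every edge is axis-parallel:

```python
def is_manhattan(polygon):
    """True if every edge of the closed polygon is axis-parallel.

    polygon: list of (x, y) vertices; the last vertex connects back to the first.
    """
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]  # wrap around to close the polygon
        if x1 != x2 and y1 != y2:
            return False  # a slanted (or piecewise-curved) edge: not Manhattan
    return True

# A rectangle is Manhattan; cut one corner at 45 degrees and it no longer is.
```

In this framing, any shape that fails the test – slanted edges, arcs approximated by many short segments – falls into the curvilinear category discussed here.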

BS: What breakthroughs enabled curvilinear photomasks?

AF: Multi-beam mask writing and GPU-acceleration of pixel-based computing including curvilinear inverse lithography technology (ILT) are enabling curvilinear masks. With multi-beam mask writers available in all leading-edge mask shops now, the mask write times are no longer affected by the number of shapes on the mask or their complexity. This is principally because multi-beam mask writers write with pixels, similarly to how TVs, monitors, and digital projection machines work.

The economics of mask writing is dominated by the mask writing time. The fact that a multi-beam mask writer, given a resist and writing method, writes any shapes of any shape count in constant time is economically and logistically very attractive to the mask shop. Once a mask shop has a multi-beam mask writer, curvilinear masks take no more time to write than any other mask.
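The constant-write-time point can be illustrated with a toy raster model (purely illustrative – real multi-beam writers are vastly more complex): the writer visits every pixel of a fixed grid exactly once, so pixels visited – the stand-in for write time here – does not depend on how many shapes are on the mask or how curvy they are:

```python
def raster_write(shapes, width=64, height=64):
    """Toy pixel-based writer: expose a fixed grid, one visit per pixel.

    shapes: list of predicates f(x, y) -> bool marking pixels to expose.
    Returns (dose_map, pixels_visited).
    """
    dose = [[0] * width for _ in range(height)]
    visited = 0
    for y in range(height):          # the beam sweeps every pixel once,
        for x in range(width):       # regardless of shape count or complexity
            dose[y][x] = 1 if any(s(x, y) for s in shapes) else 0
            visited += 1
    return dose, visited

# One rectangle or a stack of curvy blobs: pixels_visited is always width * height.
```

Contrast this with a shot-based (VSB) writer, where write time grows with the number of shots and curvilinear shapes fracture into many more shots than Manhattan ones.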

BS: What is ILT and how does it contribute?

AF: ILT is a mathematically rigorous inverse version of optical proximity correction (OPC) known to produce the best wafer results for both optical (193i) and EUV lithography. Many studies have demonstrated that curvilinear ILT mask shapes produce the best “process windows,” a measure of resilience to manufacturing variation.

Until multi-beam mask writers became available in leading-edge mask shops, it hadn’t been practical to use curvilinear shapes as the desired mask shapes provided to the mask writers. In addition, the runtimes associated with this computational technique limited its practical application to critical “hotspots” on chips.

Applying GPU acceleration to the ILT problem paved the way in the past few years for some breakthroughs in runtime roadblocks to ILT. In 2019, an entirely new approach systematically designed for multi-beam mask writers and GPU acceleration by D2S made full-chip ILT a practical reality in production for the first time.

BS: Will curvilinear masks be used for 193i lithography, EUV or both?

AF: In annual surveys conducted by the eBeam Initiative (see Figure 1), industry experts anticipate that curvilinear ILT shapes are already in use, or will be before 2023, for hotspots in some leading-edge layers on both 193i and EUV masks. They clearly indicate that the primary reason to purchase multi-beam mask writers is EUV masks, and that writing curvilinear masks is also a strong reason to purchase them.

Given that EUV masks are being written with multi-beam mask writers already, there is no penalty in the mask write time to write curvilinear shapes. Whether for 193i or for EUV, curvilinear mask shapes produce better wafer quality. With sufficient supply of multi-beam writers, leading-edge masks are likely to be written with them in the future.

Figure 1 caption: 2020 eBeam Initiative Survey result in answer to the question: “How extensively will curvilinear shapes be used for leading-edge (EUV, 193i) masks intended for high-volume manufacturing (HVM) by 2023?” (a) 94% believe curvilinear shapes will be used for 193i for HVM by 2023, (b) 85% expect that EUV also needs curvilinear shapes for HVM.

Source: eBeam Initiative

BS: Where is the industry in terms of adoption of curvilinear photomasks?

AF:  With multi-beam mask writing being widely available for the leading-edge nodes, manufacturing curvilinear ILT shapes is now possible.

And the rest of the mask-making infrastructure shown in Figure 2? According to leading authorities, a limited number of curvilinear shapes can already be handled by leading-edge mask shops today. For widespread use, more streamlined solutions are likely needed for metrology, inspection, dispositioning, and repair.

Figure 2 caption: A typical photomask manufacturing flow follows a specific pattern.

Source: D2S

BS: How do curvilinear photomasks unlock new opportunities for design?

AF: As we anticipate this exciting transition to curvilinear mask making and “curvy” design, some are studying the upstream effects of this change. Figure 3(a) below shows an image from a 2019 Imec paper that highlighted potential improvements in compacting cell designs, decreasing load, and decreasing interconnect delay through the use of curvy design. Figure 3(b), from a Micron presentation, illustrates the use of manual manipulation to jog multi-bit busses using non-Manhattan, curvilinear shapes of varying angles. Manual manipulation is resource intensive – a clear indication that the benefits are significant enough to be worth the trouble, at least for a memory maker. Meanwhile, the entire chip design infrastructure is based on the Manhattan assumption.

In my previous life in EDA, I had something to do with that, so I know this very well and it is not going to change any time soon. At the same time, though, is there any doubt that a curvilinear chip, if magically made possible, would be smaller, faster, and use less power?

Figure 3 caption: (a) An Imec paper showing “curvy” designs are feasible with the reliable manufacturing of curvy masks, (b) an example wafer image from Micron with non-Manhattan design.

Source: D2S

Also Read

CEO Interview: Frankwell Lin, Chairman and CEO of Andes Technology

CEO Interview: Tamas Olaszi of Jade Design Automation

CEO Interview: John Mortensen of Comcores