
IC Layout Symmetry Challenges
by Daniel Payne on 08-29-2022 at 10:00 am

1D symmetry

Many types of designs, including analog designs, MEMS, and image sensors, require electrically matched configurations. This matching has a huge impact on the robustness of the design across process variations, and on its performance. Having an electrically matched layout basically means having a symmetric layout, so to check the box on electrical matching during verification, designers must also check the symmetry of their design.

Design symmetry is defined as either 1D or 2D. 1D symmetry is the symmetry around the x-axis or the y-axis, while 2D symmetry is the symmetry around the center of gravity.
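
To make the two definitions concrete, here is a minimal, purely illustrative Python sketch (not how the Calibre checks are implemented) that tests a toy set of axis-aligned rectangles for mirror symmetry about the y-axis and computes their common centroid; the coordinates and tolerance are invented for the example.

```python
# Illustrative only: naive 1D/2D symmetry checks on axis-aligned rectangles,
# each given as (x1, y1, x2, y2). Real layout data and checks are far richer.

def mirror_y(rect):
    """Mirror a rectangle about the y-axis (x -> -x)."""
    x1, y1, x2, y2 = rect
    return (-x2, y1, -x1, y2)

def is_symmetric_about_y(rects, tol=0.001):
    """1D check: the mirrored set must match the original set within tolerance."""
    def key(r):
        return tuple(round(v / tol) for v in r)
    return {key(mirror_y(r)) for r in rects} == {key(r) for r in rects}

def centroid(rects):
    """2D check helper: area-weighted center of gravity of the whole set."""
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    total = sum(area(r) for r in rects)
    cx = sum(area(r) * (r[0] + r[2]) / 2 for r in rects) / total
    cy = sum(area(r) * (r[1] + r[3]) / 2 for r in rects) / total
    return cx, cy

# Hypothetical quarter-cell style placement: four copies around the origin.
cell = [(1.0, 1.0, 3.0, 2.0)]
layout = cell + [mirror_y(r) for r in cell]                    # left/right halves
layout += [(x1, -y2, x2, -y1) for x1, y1, x2, y2 in layout]    # top/bottom halves

print(is_symmetric_about_y(layout))   # True for this toy layout
print(centroid(layout))               # (0.0, 0.0): common centroid at the origin
```

A production flow, of course, works on real polygons with device and net awareness; the sketch only captures the geometric idea behind 1D and 2D (common centroid) symmetry.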

One approach to achieving 1D symmetry is the quarter cell method, in which a cell is placed four times around a common X and Y axis.

Quarter cell layout method

In 2D layout symmetry, there is a common centroid, and this common-centroid symmetry can exist between multiple devices, or even between groups of devices. Here’s an example: the layout on the left has common centroid symmetry, while the one on the right does not:

Common centroid symmetry

You can just eyeball an IC layout to see if it’s symmetric, but that’s not precise at all. Another approach is to write specific rule checks, based on your design experience, to catch symmetry violations. A more scalable approach is to use the Calibre nmPlatform with one of three symmetry checking methods:

  • Batch symmetry checks
  • Calibre PERC reliability platform for electrically-aware symmetry checks
  • A new approach that leverages the Calibre RealTime tools to allow for an interactive symmetry checking experience

Taking an interactive approach, which avoids custom rule coding and supports iterative runs to immediately check changes, can offer the highest accuracy and greatest time savings. This solution includes four kinds of interactive symmetry checks:

  • 1D symmetry
    • Symmetrical about X-axis
    • Symmetrical about Y-axis
  • 2D symmetry (aka common centroid)
    • 90° symmetry
    • 180° symmetry

With Calibre’s interactive solution, the symmetry checks give live feedback to pinpoint any violations for fixing, and DRC checks can also be run in parallel to add even more efficiency. An example of the graphical feedback is shown for a magnetic actuator, where the metal layer on the right of this MEMS circuit has a symmetry violation highlighted in cyan color:

Symmetry violation in Cyan color

Verifying that two devices in a differential-pair op-amp are symmetrical involves selecting the device area, running the symmetry check, and viewing the results to pinpoint any violations.

Selecting device area inside green rectangle
Metal 1 symmetry violations in Red

The error markers tell the layout designer where the symmetry violation is, and whether it is caused by missing or extra polygons, shifted polygons, or a difference in polygon size. Fixing symmetry violations may create new DRC violations, making it advantageous to run both kinds of checking (symmetry and DRC) at the same time, as is possible with the Calibre toolsuite, to reduce the time needed to achieve a symmetric, DRC-clean design.
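
As a rough illustration of how such markers can be categorized (a hypothetical sketch, not the Calibre results format), the routine below compares an expected, mirrored set of rectangles against the actual set and labels each mismatch:

```python
# Hypothetical sketch: label differences between an expected set of rectangles
# and the actual set, in the spirit of the error markers described above.

def classify_mismatches(expected, actual, tol=0.001):
    def size(r):
        return (round((r[2] - r[0]) / tol), round((r[3] - r[1]) / tol))

    def close(a, b):
        return all(abs(p - q) <= tol for p, q in zip(a, b))

    reports, unmatched = [], list(actual)
    for exp in expected:
        match = next((a for a in unmatched if close(a, exp)), None)
        if match is not None:                       # matches within tolerance
            unmatched.remove(match)
            continue
        shifted = next((a for a in unmatched if size(a) == size(exp)), None)
        if shifted is not None:                     # same size, different position
            reports.append(("shifted polygon", exp, shifted))
            unmatched.remove(shifted)
        else:                                       # nothing comparable found
            reports.append(("missing or resized polygon", exp, None))
    reports.extend(("extra polygon", None, a) for a in unmatched)
    return reports

# Toy example: one rectangle shifted by 0.5 in x, one rectangle missing entirely.
expected = [(0.0, 0.0, 2.0, 1.0), (3.0, 0.0, 5.0, 1.0)]
actual   = [(0.5, 0.0, 2.5, 1.0)]
for kind, exp, act in classify_mismatches(expected, actual):
    print(kind, exp, act)
```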

Summary

Analog IC layout design can be a long, manual, and error-prone process, leading to lengthy schedules, especially when symmetry is required to achieve optimum performance. Traditional approaches, like coding custom symmetry checks or doing visual inspections, can take up valuable time and resources.

The approach of using a commercial tool like the Calibre nmPlatform to perform interactive symmetry checking is a welcome relief. Because the interactive results happen so quickly, the layout designer is alerted to the specific areas that are violating symmetry rules and is then able to apply fixes, saving valuable time. Being able to check for both symmetry and DRC violations interactively is a nice combination to ensure a symmetric DRC clean design.

Read the complete white paper online here.

Related Blogs


Verifying 10+ Billion-Gate Designs Requires Distinct, Scalable Hardware Emulation Architecture
by Daniel Nenni on 08-29-2022 at 6:00 am


In a two-part series, Lauro Rizzatti examines why three kinds of hardware-assisted verification engines are a must have for today’s semiconductor designs. To do so, he interviewed Siemens EDA’s Vijay Chobisa and Juergen Jaeger to learn more about the Veloce hardware-assisted verification systems.

What follows is part one, a condensed version of his discussion with Vijay Chobisa, Product Marketing Director in the Scalable Verification Solution division at Siemens EDA. They talk about why verification of 10+ billion-gate designs requires a distinct architecture.

LR: The most recent announcement from Siemens EDA in the hardware-assisted verification area included an expansion of the capabilities of the Veloce Strato emulator with a new Veloce Strato+, Veloce Primo for enterprise prototyping, and Veloce proFPGA for desktop type prototyping. What has been the customer response to these new capabilities?

VC: Last year, we announced Veloce Strato+ emulation platform, Veloce Primo Enterprise prototyping system, and Veloce proFPGA traditional prototyping system. The response from our customers has been fantastic. Let me briefly go through what the new capabilities consist of.

When we announced Veloce Strato+, we specified the ability to emulate a single instance of an SoC design as big as 12-billion gates. Today, we have several customers emulating 10- to 12-billion gate designs on Strato+. Such designs are becoming popular in the areas of AI and machine learning, networking, CPUs/GPUs, and other leading-edge industries. What customers really like about Veloce Strato+ is the ability to move their entire verification environment from Veloce Strato by just flipping a single compile switch and get 1.5X capacity in the same footprint.

A unique feature of Veloce Primo is the interoperability with Veloce Strato+. A customer owning both platforms can switch from emulation to prototyping when designs reach stability, to run at a much faster speed, and in the process get more verification cycles at lower cost. They can also switch back from prototyping to emulation when they run into a design bug and need to root cause the issue, correct the design and verify the issue has been removed. Veloce Strato+ excels in design turnaround time (TAT) by supporting 100% native visibility, like in simulation, with rapid waveform generation, as well as fast and reliable compilation.

Our Veloce proFPGA prototyping delivers ultra-high speed in the ballpark of 50-100 MHz to support software teams performing software validation.

LR: How do you describe the advantages of the Veloce platform approach and how it differs from other hardware assisted verification approaches?

VC: Let me start by saying that we are working very closely with our partner customers and we attentively monitor their roadmaps to make sure that we are providing verification and validation tools to address their challenges. Let me touch on some of those.

We have seen three common design trends across several industries. First, designs are becoming very large. Second, thorough verification and validation of such designs before tapeout requires executing vast amounts of software workloads to run different use cases. Third, power and performance have become critical. To deal with all of the above, we had to address three fundamental aspects of the Veloce platform: design capacity, design compilation, and design debug.

When I say that Veloce Strato+ can emulate a 12-billion gate design, that’s not by luck. We designed a unique architecture that can scale in all three aspects. Veloce Strato+ is a robust system that can map and emulate monolithic designs of up to 12-billion gates, executing large workloads for a very long time with reliability and repeatability.

The Veloce Strato+ compiler can map 12-billion-gate designs in less than a full day. Customers would like to get a minimum of one or two turns a day in order to use emulation effectively for verification. Our compiler takes advantage of design structures assembled by customers and advancements in processor/server topology. We developed technologies such as template processing, distributed Velsyn, and ECO compile. Our goal is to allow users to perform a couple of compilations per day, even for very large designs.

The final aspect is design debug. Customers coming from simulation would like to see exactly the same debugging environment in emulation just running faster at larger scale. We support the same Visualizer GUI across simulation, emulation and prototyping.

A unique differentiator of the Veloce architecture is fast waveform generation regardless of the design size. For example, Veloce can capture the data for every node in the design for one-million cycles and generate waveforms within five minutes, regardless of the design size. Whether the design is half a billion gates or four-billion gates or 12-billion gates, the time to generate the entire set of waveforms for one-million cycles is the same.

The bottom line is that the Strato+ architecture scales not only in terms of capacity, but also in terms of infrastructure to provide an efficient environment for compiling, running and debugging large designs, rapidly, accurately, and reliably. Users can run emulation, find a bug, fix the design and validate the change all within a day.

All of the above are advantages of Veloce Strato+ vis-à-vis our competition. As of today, Strato+4M is the only emulator on the market that can emulate 10- to 12-billion-gate monolithic designs efficiently and consistently.

LR: Let’s look into the future. What are you hearing from customers and potential Veloce users about additional challenges and needs for hardware-assisted verification tools for the next three years?

VC: As I mentioned, we work closely with our customers to design better hardware-assisted verification products that meet customer requirements. Let me give you some examples of how we solve customer issues.

Traditionally emulation in the storage market is used in in-circuit-emulation (ICE) mode where the design is connected to and driven by physical devices. This use mode is inherently limited when it comes to measuring bandwidth or I/O traffic performance, debug and access by teams from different geographies.

Instead, we built a solution based on virtual devices in close collaboration with top storage customers. We have many customers using Veloce to verify their design and software with ICE setups. However, our customers are increasingly adopting a VirtuaLAB-based use model due to the ICE limitations described above. Today, storage customers using Veloce Strato+ can do exactly what they were doing in the ICE environment, plus measure I/O bandwidth and traffic very accurately and use error injection to test corner-case scenarios. They can also perform power analysis, a critical objective in the storage industry.

Our approach is to work with customers in each vertical market segment to understand their challenges and build an efficient and effective solution.

As already mentioned, power and performance are becoming critical not only for customers designing smart devices, but also for the semiconductors powering HPC and data centers. Again, we work closely with these customers to enable the “shift-left” in power profiling and power analysis, and to generate accurate power numbers by running software applications, customer workloads and benchmarks. This early power proofing allows them to impact RTL code and software to ensure that their power budget is within the envelope and that they can deliver the required performance.

Another aspect is functional safety. In some market segments today, functional safety is becoming very important. People designing autonomous cars or chips for the mil-aero industry consider functional safety verification a critical need. Customers are looking for the ability to inject a fault and verify how the design, in hardware or software, responds. They also need end-to-end FuSa solutions where they can do the analysis, generate fault campaigns, and output hardware metrics to see whether the design is ISO 26262 compliant or not. That is the focus in our organization. We are delivering end-to-end FuSa solutions where customers can fully rely on an ISO 26262 certified solution coming from Siemens EDA to validate their chips.

Looking into the future, power, performance and functional safety will continue to grow in importance.

LR: To conclude, you have been working in the hardware-assisted verification domain for quite a while. What are some of the aspects of the job that continue to motivate and fascinate you most?

VC: Siemens is a great company to work for. The Siemens culture, the processes, and the open-door policy are highly motivating for me. Open door means that you can approach anyone, people interact and cooperate with each other as a team. We do not pursue our individual success, rather we aim to achieve our division success, our company success and, by reflection, that makes us successful. We all recognize each other.

Just as important for me is that at Siemens we are able to drive our own roadmaps and not depend on anybody else. I love talking to customers, learning what they are doing and where they are going five years down the road. Understanding that and feeding it back to the division to build products around it pleases me greatly.

LR: Thank you, Vijay.

VC: Thank you, Lauro.

Also read:

UVM Polymorphism is Your Friend

Delivering 3D IC Innovations Faster

Digital Twins Simplify System Analysis


GM: Where an Option is Not an Option
by Roger C. Lanctot on 08-28-2022 at 10:00 am


How does a General Motors executive react when they get a transfer to work at OnStar? “What am I going to tell my partner?”

Twenty-six years after its founding, OnStar remains an appendage to GM – a team set apart from the heart and soul of the larger company. Team members assert that the group is profitable, thanks to millions of GM subscribers, but it has less control over its destiny as hardware responsibilities were removed years ago.

Though profitable, the group’s revenue is not sufficiently material to GM’s results to merit a mention on earnings calls. Not a one. Sure, Cruise may be burning $550M a quarter gumming up traffic in San Francisco, but a profitable OnStar? Ghosted.

The “otherness” of the OnStar organization was made apparent during GM’s journey through chapter 11 bankruptcy after the “great recession.” As the company looked to potentially sell assets the one division that attracted avid attention was, in fact, OnStar – with Verizon being one of the potentially interested buyers.

OnStar is one of the most powerful brands – globally – in the connected car space and, yet, its parent company continues to hold the group at arm’s length. A service that should long ago have become synonymous with GM and a brand defining gem somehow retains the status of an albatross.

The latest evidence of this is rife in recent announcements. OnStar announced a new “cleaner” logo – whatever that means – and is venturing beyond cars to offer safety and security services to pedestrians, hikers, and motorcyclists while offering in-home services via Alexa.

Meanwhile, news arrives this week that GM is making OnStar a non-optional $1,500 three-year subscription on Buick and GMC vehicles. According to the GM Authority newsletter, “the automaker will equip all new 2022 and 2023 model year Buick and GMC vehicles with a three-year OnStar and Connected Services Plan. The plans cost between $905 and $1,675, depending on the chosen trim level.”

The newsletter indicates that the cost is to be included in the vehicles’ MSRP, “however the online configurator tools for the Buick and GMC brands suggest these charges are added on top of the MSRP, for the time being.” The services include remote keyfob, Wi-Fi data, and OnStar safety services.

Making OnStar a non-optional option sends some powerful and unfortunate marketing messages to the car-buying public including:

A)    GM is removing the power of choice from your connected car decision-making.

The news is a sad echo of GM’s announcement in 2011 under then-OnStar president Linda Marshall that the company would reserve the right to compile and sell information about drivers’ habits even after users discontinue the service – unless a user explicitly opts out. The announcement led to Congressional calls for an investigation by the Federal Trade Commission and likely contributed to Marshall’s early exit from her leadership role at OnStar in the following year.

B)    GM has failed to find a sufficiently compelling application or combination of applications to drive OnStar adoption organically.

In the early days of OnStar, before smartphones, the fear factor was a powerful motivator for selling the service to consumers. If your OnStar-equipped GM vehicle was involved in a crash, OnStar would automatically summon assistance. It is a capability that was ultimately mandated in Europe as so-called eCall in all cars.

In a post-smartphone world, the average driver doesn’t think they are going to be involved in a crash and, even if they are, they are convinced they’ll be able to call for assistance on their own – provided they are conscious. OnStar has tried to enhance this automated crash functionality with built-in Wi-Fi services and, more recently, access to Alexa. But the enhancements are insufficiently compelling.

C)    The built-in connection in the car is some sort of add-on device that must be paid for separately.

It is no mystery that building connectivity into cars is an expensive business. The hardware and software are expensive, and the back-end secure network operating center is a further source of cost and liability. To that can be added the dedicated call center and, of course, the wireless service itself.

The reality is that no one can buy a new car in the U.S. today – or Europe or China, for that matter – that isn’t equipped with a wireless connection. Wireless connectivity is a comes-with proposition in the auto industry today. Some amount of cost for the hardware, software, service, and infrastructure has to be built into every car. GM is the first auto maker brazenly putting a price to it and shoving it in the customer’s face. It’s not a promising strategy.

The value proposition of the in-car connection long ago shifted from the customer to the car maker and the dealer. Auto makers stand to benefit mightily from being connected to their cars and their customers. Auto makers should never give consumers any reason to think twice about the connectivity devices in their cars.

Via connectivity, car companies can anticipate vehicle maintenance issues and possibly prevent failures; they can respond to crashes and breakdowns in a timely manner; and maybe they can more readily identify and remedy vehicles with outstanding recalls. Car makers are entitled to compensation for providing vehicle-centric cell service and the vast majority of consumers are willing to pay.

SOURCE: Strategy Analytics consumer survey results from upcoming report.

Soon-to-be-published Strategy Analytics research shows a strong inclination among consumers to pay for service packages associated with their cars. In fact, a majority of those consumers across a range of demographics and regions are willing to pay upfront. But not all.

GM’s announcement is an inelegant approach to solving a problem facing the entire industry. Simply put, car companies can no longer afford to sell cars for a one-time price and be done. Cars must be connected. Software must be protected and updated. A long-term subscription-centric strategy is unavoidable.

GM is putting all the onus for subscription collection on OnStar which is more or less walled off from the rest of GM. OnStar is not intimately integrated into the customer-dealer relationship and the latest initiatives from OnStar – focused on extra-vehicular use cases – suggest further straying from a focus on connected cars.

A big question looms over GM and the industry: What is the strategy for monetizing vehicle connectivity? Will it be jacking up OnStar/telematics subscriptions, building the cost into cars upfront, charging for features on demand?

Achieving long-term vehicle-based revenue production will necessitate more smoothly integrating OnStar into the GM vehicle ownership and dealer experience. The entire purpose of vehicle connectivity is customer retention. GM needs better marketing and messaging to keep the company at the forefront of the connected car industry. This latest messaging is an amazing marketing failure and a non-starter for most consumers.

One company can take heart from GM’s stumble. After becoming the poster child for features-on-demand by announcing plans to charge a subscription for access to heated seats, BMW will have a bit of schadenfreude at GM’s expense – this time.

Also read:

C-V2X: Talking Cars: Toil & Trouble

Automotive Semiconductor Shortage Over?

Auto Makers Face Existential Crisis

Time for NHTSA to Get Serious


C-V2X: Talking Cars: Toil & Trouble
by Roger C. Lanctot on 08-28-2022 at 6:00 am


Last year, the U.S. Federal Communications Commission sought to resolve the lingering dispute over the use of 75MHz of spectrum in the 5.9GHz band, previously allocated to the automotive industry for safety applications, by designating 45MHz of that spectrum for unlicensed Wi-Fi use while preserving 30MHz for automotive safety. The move opened the door to cellular-based C-V2X deployments – kicking aside the DSRC (dedicated short range communications) technology favored by many auto makers.

ITS America and the American Association of State Highway and Transportation Officials (AASHTO) filed suit to block the action in court. Last week the court ruled in favor of the FCC.

Prior to that ruling, three auto makers (Jaguar Land Rover, Audi of America, and Ford Motor Company), nine hardware manufacturers (of roadside and in-vehicle devices), along with several state transportation authorities sought waivers from the FCC to proceed to deploy C-V2X technology and/or to replace older so-called DSRC V2X technology. Weeks ago, comments supporting the waiver requests poured in and action is expected within weeks.

Several dozen entities filed comments with the FCC on the C-V2X waiver filings, overwhelmingly supporting immediate deployments of C-V2X in the 5905-5925 MHz band. Opposition from pro-DSRC parties was muted. The DOTs of California, Colorado, South Dakota, and Wyoming, along with Tampa, FL, and Atlanta and Alpharetta, GA, all asked the FCC to expeditiously approve the waiver requests.

Notably, ITS America and AASHTO – the groups that challenged the FCC decision allocating 45 MHz of the ITS band to unlicensed use and the remaining 30 MHz to C-V2X – filed comments strongly supporting the waivers.

The judge’s ruling against ITS America and AASHTO, in favor of the FCC, ought to be the last word. But perhaps not.

The battle continues over V2X technology, which began with an FCC allocation of 75MHz of spectrum in 1999 for DSRC. In a recent Automotive News “Shift” podcast, manager of vehicle technology at Consumer Reports Kelly Funkhouser spoke out in favor of DSRC as a technology that could dramatically reduce vehicle collisions and the related fatalities. Funkhouser displayed a frightening lack of awareness of C-V2X tech and an unfortunate partiality to older, now defunct DSRC tech.

Similarly, a podcast produced by the Alliance for Automotive Innovation – which included participants from the U.S. DOT and the National Transportation Safety Board (NTSB) – appeared to promote DSRC as a technology that could eliminate 90% of highway crashes and related fatalities. The NTSB representative, in particular, advocated DSRC – the older technology that will be compromised by the FCC re-allocation.

The AAI podcast was especially disturbing for its emphasis on the risk of signal interference within the more limited 30MHz of allocated spectrum and the unlicensed use of the nearby 45MHz. This concern verged on disinformation in association with C-V2X technology.

The comments on the AAI podcast were reminiscent of similar skepticism and resistance toward C-V2X technology expressed at the ITS America summit in Charlotte, N.C., last November. During a panel discussion at the event a U.S. DOT executive did his best to raise questions regarding the efficacy of C-V2X.

The AAI has a 10-point plan for the promotion and adoption of V2X technology, notably not distinguishing C-V2X and still appearing to put a thumb on the scale in favor of DSRC. You can judge for yourself here: https://www.autosinnovate.org/about/advocacy/V2X%20Policy%20Agenda.pdf

The sad reality vis-à-vis the AAI is that the Infrastructure Bill already passed through Congress is filled with a wide array of funding and initiatives to promote the adoption of C-V2X technology specifically – again, reflecting the reality that DSRC is a dead letter.

With waivers expected to be granted and the recent judicial victory for the FCC, the path is finally clear in the U.S. for widespread C-V2X adoption and deployment. In this context, it is time for the DSRC crowd to dismount from the barricades and accept reality and, really, fundamentally, stop standing in the path of progress.

Also read:

Automotive semiconductor shortage over?

Auto Makers Face Existential Crisis

Time for NHTSA to Get Serious


Podcast EP103: A Look at the Game-Changing Technology Being Built by Luminous Computing
by Daniel Nenni on 08-26-2022 at 10:00 am

Dan is joined by Michael Hochberg, president at Luminous Computing. His career has spanned the space between fundamental research and commercialization for over 20 years. He founded four silicon photonics companies garnering a total exit value of over a billion dollars.

Dan explores the computing technology being built by Luminous and the fundamental impact it will have on the growth of AI applications and the AI market in general.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


2022 Semiconductor Supercycle and 2023 Crash Scenario
by Daniel Nenni on 08-26-2022 at 6:00 am

2021 Semiconductor Forecast Summary

Charles Shi, semiconductor analyst at Needham & Company, a US-based investment bank and asset management firm, hosted an expert call on semiconductor cycles with Malcolm Penn, Founder and CEO of Future Horizons, on 18 August 2022, with over 100 financial analysts in attendance. The following bulletin is a summary of the discussions.

Charles:  In January 2021, you forecasted 2021 semiconductor growth was going to be 18 percent as the base case and 24 percent as the bull case. Around the same time, Gartner, your former employer, was forecasting 11.6 percent growth. Now we know the actual number was 26.3 percent and you were probably the closest. But at the same time when you gave people that strong double-digit forecast, here is what you said: “Enjoy the super cycle for now, because it will crash in 2023!”

Being a financial analyst myself, I always find it both refreshing and frightening to hear the level of conviction in your statements about the future. Also, “crash” would not be the word I use in my research reports, but we will touch upon this later. Having said that, you were spot on about the industry downturn that is just getting started, even though your statement was made a full 18 months ago. So now, my first question: What’s your forecasting methodology that gives you this level of confidence? And what are the key metrics that you look at?

Malcolm:  Yes, I’m actually very proud of this achievement, but both forecasts were very scary and I was publicly derided for being ‘ever-optimistic’ on the impending double-digit super-cycle (the glory days are over) and ‘off my rocker’ re the 2023 crash (the cycles were dead, it’s different this time).

I spent a lot of sleepless nights checking and double checking my assumptions and scenarios but kept coming back to the same answers and conclusions.  In the end I decided … “If that’s what my research shows then it’s my professional responsibility to publish and be damned”.

I’ve always believed in the need for integrity and accountability so one of the tenets of my forecasting methodology has always been transparency … to explain the ‘how and why’ behind the forecasts and then be judged by what actually transpired.

I also see it my role to provide timely and insightful information at the start of each year, not when the outcome’s painfully obvious, also not to follow the fashion or consensus just because these are the safe options … anyone can do this, but it’s doing the industry a disservice.  Being independent helps … I don’t have a corporate position to defend, which means I can take a contrarian view if needed and I don’t need ‘wiggle room’ … I can live or die on my own integrity.

A third fundamental tenet is not to keep constantly changing the goal posts … a forecast, if properly considered, ought to be good for the whole year with just interim ‘progress report’ updates on the way. The maths will affect the final forecast number but never the assumptions.

I also don’t believe in ‘models’ or ‘secret sauce’ rather a ‘balance of probabilities’ methodology based on data analysis and experience.  I see pictures not numbers, a bit like a jigsaw puzzle but with half the pieces missing … It won’t show all the detail but I can still see the picture.

I look at four key data trends, the global economy (which sets the overall appetite for demand), unit sales (how many parts are sold), capacity (how many parts could be made) and ASPs (what price they were sold at).  There’s lots of good economic data from e.g. the IMF (my prime ‘go-to’ source); WSTS provides good data on unit sales and ASPs and SEMI provides good data on semiconductor equipment capital expenditure.  Unfortunately, since 2011 capacity data is no longer published by SICAS, but SEMI’s CapEx spend data is a very good proxy to it.

I liken these metrics to the biblical four horsemen of the apocalypse because, whilst they are all inter-related (but independent), their impact on the market could be either off-setting or supporting.  For example, over-investment and excess capacity in a strong economy will be quickly absorbed whereas excess capacity in a weak economy and slowing demand will have a stronger and longer-lasting impact.

Charles:  Now I want to double click on each of the metrics that you look at. First, the economy. You said the semiconductor industry can have a bad year in a good economy but not a good year in a bad economy. Is it though? Maybe this is a bit U.S.-centric, but in 2020, the U.S. GDP declined by 3.5 percent but it was a healthy growth year for the semiconductor industry. Is that the exception to the rule?

Malcolm:  That’s a really good question.  The GDP took a hit in 2020 because the world went into lockdown in March of that year, but it was more like someone hit the pause button on the world, not a ‘real’ economic downturn, and we hadn’t all suddenly gone back to living in caves … we still had all of our creature comforts, governments all over the world were pumping cash into their economies and firms were furloughing staff or paying their employees to work from home.

My original forecast for 2020 was for 10 percent growth, against a tight capacity background.  In the end it grew 6.6 percent, slowed down by the Q2 economic downturn, but it quickly rebounded driven by a spurt in home office and home entertainment demand.

Charles:  So we talked about the economy, Covid and 2020. What was the chip shortage issue that was in the headlines through 2021 all about? What caused it? More specifically, how did you manage to correctly call 2021 a strong double-digit growth year as early as October 2020?

Malcolm:  I had been cautioning for some time that the chip industry was too finely tuned, running on empty with no slack in the system, meaning any slight increase in demand would very quickly escalate into shortage.  A cyclical market upturn was always inevitable and would have hit home in 2020 had Covid not initially reined back demand in Q2.

That push back was short-lived though and by October wafer fab capacity was maxed out against a historically low level of CapEx investment. With inventory levels low, shortages were already starting to bite and lead-times were extending. Three out of four of my metrics were fundamentally sound with only the economic outlook fragile, hostage to the ongoing Covid uncertainty, but with a lot of fiscal policy support to do ‘whatever it takes’ to avoid a recession.

With no tight capacity relief possible before 2022 at the earliest, based on the industry momentum and the systemic supply-side tightness, it was impossible for 2021 to show anything less than strong double-digit growth, hence my 18 percent number, with a 24 percent upside.

Charles:  Second, let me ask you about the unit growth rate. You said long-term unit growth has always been 8 percent. Right now people are talking about electric vehicles and autonomous driving, and expect the semiconductor content in electric vehicles to be six times more than in internal combustion engine vehicles. How is this not bending the long-term growth curve to the upside?

Malcolm:  The industry is currently shipping around 250 billion semiconductor devices every quarter – one trillion units per year – and these parts both increase their penetration in existing markets and constantly create new markets, such as the ones you mentioned, markets that were either impossible or too expensive using the previous levels of technology.  That’s the real economic impact of Moore’s Law.

Without these new markets, the long-term growth in the industry would falter, but all these do is keep the industry on its 8 percent growth track; offsetting other applications that are slowing down or falling by the wayside, they are not altering the overall status quo.  The long-term average IC unit growth rate has been averaging 8 percent since records are available so I believe it’s a very reliable statistic.

Charles:  Next I want to ask you about capacity. We may have just touched upon it a bit. Through this semiconductor super cycle, you hear foundries claiming that they are sold out through 2022, 2023. Some even stuck their necks out and said that supply is no longer unlimited and supply will chase demand going forward. What’s your response to these statements, as you just said that the supply/demand balance has always been bumping between under- and over-supply?

Malcolm:  Balancing supply with demand has always been the industry’s Achilles’ heel, driven by the inherent mismatch between short term volatility in demand and the long (one to two-year) lead time for new capacity, exacerbated by the long (typically four month) production cycle time.

In the past, the industry used to forecast what capacity was needed two years out, an impossible task to get right, resulting in systemic over capacity but that all changed when TSMC elected to stop building this speculative capacity.  Instead, it would only build new capacity against firm customer commitment.

Right now, given the shortages, supply is no longer unlimited but as new capacity comes on stream, that status quo will flip. It’s fundamentally inherent in the economics … economists call it the hog cycle. That’s never going to change until the industry as a whole takes collective responsibility for the supply chain, which inevitably means the end customers taking more fiscal responsibility, which in turn is going to impact their bottom line. There is absolutely no indication any end customer, their shareholders, or the financial community has any stomach for that.

Charles:  Lastly, before we move onto forecasting, I want to ask you about ASP. You said the long-term IC ASP growth is zero. But through 2021 and 2022, we saw foundries raising prices by as much as 20 percent, and even OSATs, the packaging houses, are also raising prices. I’ll give you an example… Even wire bonding, the most legacy packaging service, the price of it is now 20 percent higher than where it was in 2020. Some say those ASP increases are here to stay. What do you say?

Malcolm:  This is one of the more controversial and counter-intuitive industry statistics but it’s the culmination of a lot of associated facts.  For example, the average revenue per square inch of silicon processed has been constant since the beginning of time; likewise, the average number of ICs per wafer area.  It’s the corollary of Moore’s Law … for every new more complex IC we design, another existing one gets cost reduced or shrunk.  Moore’s Law enables us to either get more bang for the same buck or pay less bucks for the same bang.

If ASPs are 20 percent higher now than in 2020, you can virtually stake your life on them being 20 percent lower down the road.  Interestingly, and something that’s never grabbed the world’s attention, Gordon Moore had a second law, namely “The long-term IC ASP trend is $1.”  My data analysis has shown this observation both true and equally prophetic, but largely denied.

Charles: Now let’s look at the upcoming downturn more closely. Malcolm, I’ll be honest with you. Wally Rhines, the former CEO of Mentor Graphics, Daniel Nenni, Co-Founder of SemiWiki, and I got together at an industry event here in California back in May and we talked about your forecast of a semiconductor crash as early as Q4 2022. Back then, most of the semiconductor companies had reported their Q1 results and guided their Q2. The numbers were not bad. The three of us were talking behind your back, saying that Malcolm might be too pessimistic. But over the last month, we’ve seen Micron, Intel, AMD, Nvidia, and Micron again massively reduce their outlooks for the second half of this year. Earlier on this call, I said you warned people to “enjoy the super cycle, but it’ll crash”. Where the numbers are trending does look like a “crash”. So allow me to ask, is the worst news in? Are we going to have a soft landing this time?

Malcolm:  The short answer to both questions is ‘no’ and ‘no’.  June’s WSTS data was just the start of the bad news, I fully expect Q3 data to be worse, it’s inevitable, and the forward-looking company statements are already paving the ground.  Once the correction starts, it quickly escalates into a downward spiral.  Orders start to soften, some get delayed or even cancelled, lead times start to shrink and bloated inventories get purged softening fab demand even more and so on and so forth.  Unit shipments inevitably decline.

Faced with depleted backlogs and spare fab capacity, ‘bomb the price … any order’s a good order’ becomes the order of the day, meaning ASPs quickly plummet soon after. The combination of reduced unit demand and falling ASPs is lethal, driving sales growth sharply negative.

Charles:  So you don’t think there will be a soft landing. What’s your forecast for 2023?

Malcolm:  Well, let’s just say if there is, it will be an industry first.  Once you get into the situation of severe product shortages and 50-week long lead times, customers have no option but to increase buffer stocks and inventory over and above what they actually need.  Stockpiling is inevitable and orders, as seen by the supplier, are inflated by this excess demand.

Once supply catches up with this now inflated level of demand, products become more available, lead times start to shrink and customers pare back their orders and start to liquidate their bloated inventories.  This frees up even more supply availability, reducing lead times further, triggering even more inventory liquidation and paring back of orders.  This positive feedback fuels a downward spiral which only stops once lead times reach a sustainably low level (typically 4-6 weeks) and customer inventory levels have been purged.

Simultaneous with this slowdown in unit shipments, ASPs fall, hence the plummeting nature of sales value.  It’s a deluge, soft landings just don’t happen.  This process started in June 2022 and it typically lasts four quarters.  Our forecast was based on negative quarter on quarter growth for the second half of 2022 and first half of 2023, with the market bottoming in Q2-2023 and recovering from Q3-2023 onwards.  Rolling these quarterly growth numbers forward led to a 22 percent year-on-year decline, 2023 vs. 2022.

Charles:  Wow 22 percent decline in 2023. How do we go from here to a 22 percent decline in 2023? Maybe let me ask from a different angle… You said there is no soft landing. Is the path forward more like a V, or a U, or a K?

Malcolm:  As mentioned before, the market collapse typically lasts four quarters and the ensuing recovery spans a three to five-year period, so it’s more the shape of a hockey stick on its side. There is nothing to indicate why this current market correction and recovery will be different.

Charles:  Based on your current forecast, on a quarterly basis, when will the rate of decline be the greatest? And when will we start to recover?

Malcolm:  When I made the forecast back in January 2022, it was based on a Q3-2022 decline of 3.2 percent followed by 10 percent declines in Q4 2022 and Q1-2023, both being, in addition to the correction, seasonally weak quarters, before bottoming out in Q2-2023, with a further 8 percent decline, followed by 2.5 percent growth in Q3 and a 2.5 percent seasonal decline in Q4.

Granted, a 22 percent decline sounds pretty bad but it’s really just a compensation adjustment from 2021’s 26.6 percent growth. Also, when you look historically back at the magnitude of previous swings, from the end of the boom to the start of the bust, you see swings ranging from 69 to 21 percentage points, so going from plus 6 or even 10 percent growth in 2022 to minus 22 percent in 2023 is only a swing of 28-32 percentage points, at the historical low end of the range. Corrections, when they do happen, are always very steep.
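
For readers who want to reproduce the arithmetic, the short sketch below rolls the quarterly growth assumptions quoted above into annual totals. The flat first half of 2022 is a placeholder assumption (only the second-half figures are quoted), so the output lands at roughly a low-twenties percent decline rather than exactly 22 percent.

```python
# Back-of-envelope: roll quarterly growth rates forward and compare annual totals.
# Rates for 2H-2022 through 2023 are the ones quoted above; the flat first half
# of 2022 is a placeholder assumption for illustration only.

quarterly_growth = {
    "Q1-22": 0.0,    "Q2-22": 0.0,     # placeholder: flat first half of 2022
    "Q3-22": -0.032, "Q4-22": -0.10,
    "Q1-23": -0.10,  "Q2-23": -0.08,
    "Q3-23": 0.025,  "Q4-23": -0.025,
}

level, levels = 100.0, {}             # arbitrary index level for Q4-2021
for quarter, growth in quarterly_growth.items():
    level *= 1.0 + growth
    levels[quarter] = level

year_2022 = sum(v for q, v in levels.items() if q.endswith("22"))
year_2023 = sum(v for q, v in levels.items() if q.endswith("23"))
print(f"2023 vs 2022: {100 * (year_2023 / year_2022 - 1):.1f}%")   # about -23%
```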

Charles:  I believe there are a good number of people on the call who want to push back on your very grim view about the road ahead. Let me ask my follow-up questions first. My first follow-up… Many foundries this time have signed LTAs with their customers. Could LTAs smooth the curve a bit there?

Malcolm:  Well, that’s certainly the hope, but I think their impact will be disappointing, a negotiating tool at best.  They can never compensate for the fact the customers no longer need the product on order so, at best, suppliers may manage to enforce a price penalty and offset the early impact on ASPs but this will only kick the can down the road and drag out the downturn.  You cannot force customers to take product they do not need so the impact on unit shipments will be negligible and fab capacity will be underutilized which will trigger an ASP decline.

Firms will also need to think about the long-term marketing impact on their supplier-customer relationship.  Forcing their customers too hard now might just result in a lost customer tomorrow.

Charles:  My second follow-up… you touched upon this a bit… equipment demand remains very high and equipment supply is still very tight… we hear lead time for equipment is now 12-18 months. Now we are seeing some cancellations and pushouts from the memory companies, but not so much on the non-memory side. This elevated level of capacity investment… what do you think it is going to do to the upcoming downcycle if nobody cancels equipment orders?

Malcolm:  No firm ever sees the downturn coming yet, after it happens, it’s always seen as having been inevitable! Push outs and cancellations will eventually follow through, but there’s always a ‘two quarter lag’, up and down. Memory is always the fastest to react, but the other sectors will also soon feel the pain, except at the extreme leading-edge where capacity is always tight.

If firms are part way through a capacity expansion, there is always a point of no return whereby they have no choice but to finish what’s been started, a bit like V1 speed for an aircraft, even though it will exacerbate the excess capacity situation. The spend thus continues after the market has collapsed but eventually capacity expansion plans are adjusted to the new demand reality, which means push outs, cancellations and delays; the bull-whip effect is inevitable.

Charles:  My last follow-up… CHIPS Act. The CHIPS Act is now signed into law. What’s your general thought on this government subsidized on-shoring? The funding is going to get allocated and get into capacity investments. What’s your view on that?

Malcolm:  This is one area where my European location really comes to the fore and my first piece of advice is to look carefully at the European experience … copy the bits that have worked well and learn from the areas that were less successful.

It’s good that governments around the world have woken up, at least temporarily, to the strategic nature of semiconductors but in general, these initiatives fail to grasp the cause of the problem, namely why did the off-shoring occur in the first place?

What inevitably happens in these situations is that the proposed programmes start with where we currently are and focus on essentially ‘band aid’ solutions … patching up the current system, rather than starting the other way around, focusing on what we want the industry to look like in 20 or so years’ time in the face of all the financial and geo-political issues that need to be addressed.

None of the current industry and political thinking or initiatives even start to address these issues and, whilst government investment in the cash-hungry chip industry is to be welcomed, it will have only limited long-term success unless the reasons chip firms off-shored production in the first place are addressed, especially for those one-time IDM firms that elected to go fabless. A sizable amount of offshored chip production is from firms who have no desire whatsoever to own and operate fabs … anywhere.

In addition, none of these programmes have the support of the end market customer, they are all bottom-up initiatives.  From Apple down, there is not a single end-customer championing any on-shoring initiative, nor any customer willing to pay a potential price premium for on-shored product.

Industry needs to collectively address this problem and then tell its respective governments what supportive financial and policy incentives are needed to encourage this top-down demand-driven industry market pull.

For monthly updates, see Future Horizons’ Semiconductor Monthly Update Report:

https://www.futurehorizons.com/page/137/

For the latest industry outlook, join Future Horizons’ 13 September 2022 Webinar

https://us02web.zoom.us/webinar/register/9216604665338/WN_4vgIAPA9TW6co4Wl-z5zRw

Also read:

EDA Product Mix Changes as Hardware-Assisted Verification Gains Momentum

WEBINAR: A Revolution in Prototyping and Emulation

An EDA AI Master Class by Synopsys CEO Aart de Geus


Automotive Semiconductor Shortage Over?
by Bill Jewell on 08-25-2022 at 5:00 pm

Global Light Vehicle Production 2022

Several signs point to an easing of the shortages of semiconductors for automotive applications. However, the production of light vehicles will likely remain below full potential through at least 2023. LMC Automotive’s July forecast of light vehicle production called for 81.7 million units in 2022, up 2% from 2021. LMC’s January forecast was for 13% unit growth in 2021, over 4 million more units than the current forecast. The July forecast calls for production growth of 5% in 2023 and 7% in 2024. LMC’s April projection for 2023 and 2024 was 4 million units higher each year than the July projection. 2024 production of 91.7 million units should still be below the pre-pandemic level of 94 million units in 2018, five years earlier.

The key reasons for the shortages of automotive semiconductors are:

  • Automotive manufacturers cut back on semiconductor orders severely at the beginning of the COVID-19 pandemic in early 2020. The auto companies were fearful of being stuck with excess inventories of cars if demand fell significantly due to the pandemic. When the automakers tried to increase orders, they had lost their place in line and were behind other industries such as PCs and smartphones.
  • Many automakers used a just-in-time ordering system to avoid excess inventories. This left them with almost no buffer inventories. Also, most semiconductors used in automotive are bought by the companies supplying the systems (engine controls, dashboard electronics, etc.) rather than the automakers, leading to a more complex supply chain.
  • Semiconductors used in automobiles have a long design-in cycle and must meet qualification standards. Thus, it is difficult for an automaker to change suppliers in the short term.
  • Due to the long design cycles and long product life, semiconductors used in automotive applications use older process nodes than most other applications. As shown in the table below, McKinsey estimated that 72% of the semiconductor wafers for automotive in 2021 used process nodes of 90 nanometers or greater, compared to 52% for all applications. Only 6% of automotive demand is for process nodes of 14 nanometers or less, compared to 21% for all applications. Semiconductor manufacturers concentrate their capital spending on the more advanced process nodes, with only modest expansion of capacity in older nodes. TSMC, the dominant wafer foundry, makes 65% of its revenue from advanced process nodes and only 12% of its revenue from nodes of 90 nanometers or greater. Only 5% of TSMC’s revenue is from automotive, compared to 38% from smartphones. Thus, automakers are generally a lower priority for foundries.

Given all the above factors, it will take time for all the supply issues to be resolved. Recent comments from major automakers reveal mixed trends in resolving the semiconductor shortages.

  • Toyota – shortages at least through 3Q 2022
  • Volkswagen – shortages easing
  • Hyundai – shortages easing
  • General Motors – shortage impact into 2023
  • Stellantis – shortages through the second half of 2022
  • Honda – outlook uncertain due to shortages
  • Nissan – recovery in next few months
  • Ford – shortages still an issue
  • Mercedes-Benz – no significant supply issues
  • BMW – no production delays due to shortages
  • Volvo – back at full supply
  • Bosch (parts supplier) – shortages into 2023

The table below lists the major suppliers of semiconductors to the automotive market. The top ten suppliers accounted for 46% of the total automotive semiconductor market of $69 billion in 2021 (according to WSTS). Automotive is a significant portion of total semiconductor revenues for each of these companies, ranging from 17% to 50%.

The five largest suppliers of automotive semiconductors also had varied outlooks on the shortages in their recent financial reports for 2Q 2022:

  • Infineon Technologies – gradual easing of shortages throughout 2022
  • NXP Semiconductors – demand will still exceed supply in 3Q 2022
  • Renesas Electronics – inventory back to planned levels at end of 2Q 2022.
  • Texas Instruments – inventories still below desired levels
  • STMicroelectronics – capacity sold out through 2023

Shortages of automotive semiconductors will likely remain through at least the year 2023. Although a few automakers indicate they are back at full production, most report continuing shortages. The shortages will prevent automakers from producing enough vehicles to meet demand in 2022 and 2023, resulting in continued high prices for most vehicles.

Automakers and semiconductor suppliers are working to try to prevent such severe shortages in the future. Automakers are adjusting their just-in-time inventory models. Automakers are also working more closely with semiconductor suppliers to communicate their short-term and long-term needs. Semiconductors will become even more crucial to automakers as trends toward electric vehicles and driver-assist technologies continue.

Also read:

Electronics is Slowing

Semiconductors Weakening in 2022

Semiconductor CapEx Warning


Enhanced X-NAND flash memory architecture promises faster, denser memories
by Dave Bursky on 08-25-2022 at 10:00 am

X-NAND timing vs. SLC

Although the high-performance X-NAND memory cell and architecture were first introduced in 2020 by Neo Semiconductor, designers at Neo haven’t rested on that accomplishment and recently updated the cell and the architecture in a second-generation implementation to achieve 20X the performance of conventional quad-level-cell (QLC) 3D NAND memories. The Gen2 X-NAND, unveiled at this month’s Flash Memory Summit, achieves that improvement through an enhanced architecture that allows the 3D NAND flash programming (i.e., data writes) to occur in parallel using fewer planes. Able to deliver a 2X performance improvement over the first generation X-NAND technology, the Gen2 architecture remains compatible with current manufacturing technologies and processes thus giving adopters of the technology a competitive advantage over existing 3D NAND flash memory products.

According to Andy Hsu, Neo’s CEO, the Gen2 technology incorporates zero-impact architectural and design changes that do not increase manufacturing costs while offering improvements in throughput and reductions in latency. The X-NAND technology can be implemented with all flash cell structures – SLC, MLC, TLC, QLC, and PLC – while delivering higher performance at comparable manufacturing costs. Neo is currently looking to partner with memory manufacturers who will license the X-NAND technology and then design and manufacture the high-performance memory chips.

NAND-flash memories based on the QLC cell and 3D stacking have been widely adopted in many applications thanks to the high storage capacities possible and their relatively low cost per bit. The one drawback is their slow write speed. The X-NAND technology overcomes that limitation and improves the QLC NAND read and write speeds threefold and the sequential read/write throughput by 15 to 30X. Further improvements in the Gen2 technology let memories deliver SLC-like (single-level cell) performance but with the higher capacity and lower cost of QLC implementations. Transfer rates of up to 3.2 Gbytes/s are possible with the Gen2 X-NAND technology.

In Neo’s Gen1 design the company employs a unique SLC/QLC parallel programming scheme that allows data to be programmed to QLC pages at SLC speed across the entire memory capacity. This also solves the conventional NAND’s SLC cache-full problem (see the figure). When the SLC cache of a conventional NAND is full, data is written directly to QLC cells and the write speed drops to below 12% of the SLC write speed. X-NAND solves this problem, explains Hsu, and it also provides an excellent solution for heavy-write systems such as data centers with NAS systems.

Furthermore, the X-NAND’s 16-64 plane architecture provides parallelism at the chip level. When compared to a conventional NAND that uses 2 to 4 planes, one X-NAND chip can provide the same parallelism as 4 to 16 NAND chips. This would allow small-form-factor packaging such as in M.2 and eMMC memory modules. Additionally, X-NAND’s bit-line capacitance is only 1/4 – 1/16 that of a conventional NAND, and thus the bit line’s power consumption for read and write operations can be reduced to about 1/4 – 1/16 of conventional levels (a reduction of roughly 75% – 94%). This significantly increases the battery life for smartphones, tablets, and IoT devices.
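
A quick back-of-envelope calculation makes the plane-count and bit-line claims easier to see; the absolute numbers below are illustrative assumptions, with only the ratios taken from the article.

```python
# Illustrative arithmetic only: ratios from the article, absolute values assumed.

conventional_planes = 4          # conventional NAND: 2 to 4 planes per chip
xnand_planes = 64                # X-NAND: 16 to 64 planes per chip
print("Chip-level parallelism gain:", xnand_planes // conventional_planes, "x")

# Bit-line switching energy scales with capacitance (E ~ C * V^2 at a fixed
# voltage), so a bit line with 1/16 the capacitance burns ~1/16 the energy.
cap_ratio = 1 / 16
print(f"Relative bit-line energy per access: {cap_ratio:.3f} "
      f"(a reduction of about {100 * (1 - cap_ratio):.0f}%)")
```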

www.neosemic.com

Also read:

WEBINAR: A Revolution in Prototyping and Emulation

ARC Processor Summit 2022 Your embedded edge starts here!

WEBINAR: Design and Verify State-of-the-Art RFICs using Synopsys / Ansys Custom Design Flow


Getting Ahead with Semiconductor Manufacturing Equipment and Related Plasma Reactors
by Kalar Rajendiran on 08-25-2022 at 6:00 am

Figure 1 Dry Etching Process Classification

Advanced semiconductor fabrication technology is what makes it possible to pack more and more transistors into every square millimeter of a wafer. The rapidly increasing demand for chips built on advanced processes has created huge market opportunities for semiconductor manufacturing equipment vendors. According to SEMI, worldwide sales of semiconductor manufacturing equipment rose 44% in 2021 to an all-time record of $102.6 billion. While the opportunities are big, delivering cost-effective equipment optimized for mass production means overcoming a number of challenges.

In June 2022, SemiWiki published an article titled “Leveraging Simulation to Accelerate the Design of Plasma Reactors for Semiconductor Etching Processes.” That article was a brief introduction leading up to a webinar on the same topic, presented by Richard Cousin of Dassault Systèmes and now available for on-demand viewing. This article covers some salient points from that webinar.

Benefits of Simulation When Designing Plasma Reactors

Simulation helps designers understand how a device will behave before it is ever manufactured. Whether the device being designed is a chip or a plasma reactor, the time and money saved make investing in a good simulation tool worthwhile.

In the case of plasma reactors, the design has to balance many different parameters to accommodate various requirements and plasma characteristics, including the density profiles, the ionization rate, the effects of pressure and the type of gas used, and the influence of the geometry on preventing damage. Simulation can, for example, help designers experiment with different numerical and physical settings for a plasma reactor to improve the uniformity of ion density profiles. It can also support thermal coupling analysis aimed at eliminating vulnerabilities that could damage the reactor, and it allows experimenting with different gases, pressures, and power levels.

The benefits of simulation fall into three categories.

  • Predicting and explaining experimental results, particularly when no diagnostics are available in advance
  • Reducing cost of development by optimizing performance and reliability before actually manufacturing the plasma reactor (or modifying an existing reactor)
  • Accelerating device validation and decision-making on the right manufacturing processes

Focus of Dassault Systèmes’ SIMULIA Tools

Depending on the pressure level, a dry etching process can be predominantly physical or predominantly chemical. Refer to Figure 1 below for the dry etching classification spectrum and the portion of that spectrum the SIMULIA tools focus on.

Figure 1: Dry Etching Process Classification as a function of the Pressure level

The anisotropic etching process is well suited to nanoscale features. It also allows more physical parameters to be controlled to characterize the plasma, such as the input power and the pressure of the neutral gas, which controls the plasma density.

Simulation Techniques Deployed by SIMULIA Tools

The SIMULIA tools help analyze not only the steady state after the plasma has formed but also the transient behavior of how and when it forms. SIMULIA can take either a microscopic or a macroscopic approach to the simulations.

Microscopic Approach

The microscopic approach uses a time-domain kinetic method with a Poisson-based Particle-In-Cell (PIC) code to analyze space-charge interactions. Several particle interactions are taken into account in a global Monte-Carlo Collision (MCC) model: the ionization of the neutral gas, its excitation, and elastic collisions are considered simultaneously to compute the plasma kinetics.
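As a rough, hedged illustration of what an MCC step does for each simulated electron, the sketch below uses constant, made-up cross sections for elastic scattering, excitation, and ionization; a production PIC-MCC code uses energy-dependent cross-section tables and couples this choice to the field solve and particle push.

```python
import math
import random

# Hypothetical, energy-independent cross sections in m^2 (illustrative only;
# real MCC models interpolate measured, energy-dependent tables).
CROSS_SECTIONS = {"elastic": 1.0e-19, "excitation": 3.0e-20, "ionization": 1.0e-20}

def mcc_collision_event(speed_m_s, neutral_density_m3, dt_s, rng=random.random):
    """Pick the collision (if any) an electron suffers during one time step.

    The probability of colliding at all is 1 - exp(-n * sigma_total * v * dt);
    if a collision occurs, its type is chosen in proportion to its cross section.
    Returns "none", "elastic", "excitation", or "ionization".
    """
    sigma_total = sum(CROSS_SECTIONS.values())
    p_collide = 1.0 - math.exp(-neutral_density_m3 * sigma_total * speed_m_s * dt_s)
    if rng() >= p_collide:
        return "none"
    pick = rng() * sigma_total
    for kind, sigma in CROSS_SECTIONS.items():
        pick -= sigma
        if pick <= 0.0:
            return kind
    return "elastic"  # numerical fallback

# Example: 200 mTorr of argon at 300 K corresponds to a neutral density of
# roughly 6.4e21 m^-3 (see the GEC CCP example later in this article).
print(mcc_collision_event(speed_m_s=1.0e6, neutral_density_m3=6.4e21, dt_s=1.0e-11))
```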

Macroscopic Approach

Under the macroscopic approach, the SIMULIA tools treat the plasma as a bulk medium for RF plasma analysis and matching-network optimization. Both linear and non-linear Drude dispersion models are available as options, which suits the design of capacitively coupled plasma (CCP) reactors. With the application of a bias magnetic field, an electric gyrotropic dispersion model is available, for example when designing inductively coupled plasma (ICP) reactors.
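For the linear case, the bulk plasma is described by the textbook Drude relative permittivity ε_r(ω) = 1 − ωp²/(ω² + iνω), where ωp is the electron plasma frequency set by the electron density and ν is the electron collision frequency. The short sketch below simply evaluates that expression for assumed density and collision-frequency values; it illustrates the dispersion model the webinar refers to and is not SIMULIA code.

```python
import math

E0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE = 1.602176634e-19    # elementary charge, C
ME = 9.1093837015e-31   # electron mass, kg

def drude_relative_permittivity(freq_hz, electron_density_m3, collision_freq_hz):
    """Linear Drude model: eps_r(w) = 1 - wp^2 / (w^2 + i*nu*w).

    wp is the electron plasma frequency; the sign of the imaginary part
    follows the exp(-i*w*t) time convention.
    """
    w = 2.0 * math.pi * freq_hz
    wp2 = electron_density_m3 * QE**2 / (E0 * ME)   # plasma frequency squared
    return 1.0 - wp2 / (w**2 + 1j * collision_freq_hz * w)

# Assumed values: electron density 1e16 m^-3, collision frequency 100 MHz,
# evaluated at the usual 13.56 MHz RF drive frequency.
print(drude_relative_permittivity(13.56e6, 1.0e16, 1.0e8))
```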

SIMULIA Tool Use Cases

The SIMULIA tool can be used to simulate various types of plasma reactors. The following are three examples that were presented.

The DC Magnetron Sputtering Reactor Example

The design setup is as follows:

  • Pressure range: 1 to 5 mTorr
  • Target voltage: -400 V
  • Target materials: Al, Cu or Ti
  • Target thickness: 1 mm to 3 mm
  • Goal: estimate the target erosion profile to understand long-term sputtering efficiency

Figure 2 below shows the close agreement between the simulated and measured results for the target erosion profile prediction.

Figure 2: Very good agreement between the predicted results and the experimental data for the Target erosion profile in a DC Magnetron Sputtering Device example

GEC CCP Reactor (Capacitively Coupled Plasma) Example

The design setup is as follows:

  • Pressure: 200 mTorr
  • Temperature: 300 K
  • Gas: argon (neutral gas)
  • RF voltage: 60 V peak-to-peak at the 13.56 MHz discharge frequency
  • Goal: control of plasma homogeneity

Figure 3 below shows the ion density profile as predicted by the simulator. This compares very well to density profiles presented in the technical literature for CCP reactors.

Figure 3: Well-known GEC Cell CCP Reactor. Ion Density Profile in good agreement with the published results for this Device 
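For a setup like this, a useful first sanity check before any kinetic simulation is the neutral argon density implied by the pressure and temperature through the ideal gas law, n = p/(kB·T). The one-liner below computes it for the 200 mTorr, 300 K conditions above; it is a back-of-the-envelope check, not part of the SIMULIA flow.

```python
KB = 1.380649e-23          # Boltzmann constant, J/K
MTORR_TO_PA = 0.13332237   # 1 mTorr expressed in pascal

def neutral_density_m3(pressure_mtorr, temperature_k):
    """Ideal-gas number density n = p / (kB * T)."""
    return pressure_mtorr * MTORR_TO_PA / (KB * temperature_k)

print(f"{neutral_density_m3(200, 300):.2e} m^-3")   # ~6.4e21 m^-3
```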

The VHF ICP Reactor (Inductively Coupled Plasma) Example

The design setup is as follows:

  • Pressure: 30 mTorr
  • Gas: argon (neutral gas)
  • Input power: 300 W
  • EM-field distribution: 13.56 MHz
  • Goal: characterize the physical parameters, understand the physical principles, and identify potential issues and damage

Figure 4 below shows the plasma homogeneity issue and the potential for damage caused by localized electron energy enhancement.

Figure 4: Localized electron energy enhancement, which affects plasma homogeneity and could lead to damage of the ICP reactor

 


Automating and Optimizing an ADC with Layout Generators

Automating and Optimizing an ADC with Layout Generators
by Daniel Payne on 08-24-2022 at 10:00 am

Layout generator tool flow

I first got involved with layout generators back in 1982 while at Intel, when about 10% of a GPU was automatically generated using code that I wrote. It was an easy task for one engineer to complete, because the circuits were digital and no optimization was required. At the 2022 18th International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design, experts from Fraunhofer IIS/EAS, MunEDA, IMST GmbH and Dresden University of Technology presented an IEEE paper titled “A Multi-Level Analog IC Design Flow for Fast Performance Estimation Using Template-based Layout Generators and Structural Models.” I’ll share what I learned from their work.

Analog designs like an ADC require that you start with a transistor-level schematic, do an initial IC layout, extract the parasitics, simulate, then measure the performance against the specifications. This manual process is well understood, yet it requires iterations that can take weeks to complete, so there has to be a better approach. In the paper, the authors describe a more automated flow built upon three combined techniques:

  • Template-based generator
  • Parasitic estimation from object-oriented templates
  • Fast model-based simulation

The following diagram shows the interaction and flow between optimization, performance estimation and layout generators for an ADC:

Generator Template, Model, Performance Estimation

A SystemC AMS model describes the pipeline ADC circuit: its parameters define things like the number of device rows, while the model itself captures the behavior of non-ideal capacitors and OpAmp offsets. The flow executes iteratively until the optimization reaches an acceptable performance criterion.

The inner loop estimates the layout parasitics in about 5 seconds, and the ADC is then optimized and a layout generated in about 1 minute. The layout generator takes the best parameter set and generates the capacitor structures. Device and wire capacitance values were pre-characterized to enable fast estimates in the template approach, so the optimization step uses estimated rather than extracted parasitics, saving time.
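A minimal sketch of what such a pre-characterized estimate can look like is shown below: per-unit capacitance coefficients (the numbers here are placeholders, not values from the paper) are multiplied by geometry parameters the template already knows, so no extraction run is needed inside the optimization loop.

```python
# Hypothetical pre-characterized coefficients for one technology corner
# (placeholder values; the paper's templates use characterized layout data).
CAP_PER_UNIT_DEVICE_FF = 1.85   # capacitance of one unit capacitor, fF
WIRE_CAP_PER_UM_FF     = 0.21   # routing capacitance per micron, fF/um
CROSSOVER_CAP_FF       = 0.05   # extra capacitance per wire crossover, fF

def estimate_node_capacitance_ff(n_unit_devices, wire_length_um, n_crossovers):
    """Template-style parasitic estimate: a weighted sum of pre-characterized
    per-unit capacitances, evaluated in microseconds instead of an extraction run."""
    return (n_unit_devices * CAP_PER_UNIT_DEVICE_FF
            + wire_length_um * WIRE_CAP_PER_UM_FF
            + n_crossovers * CROSSOVER_CAP_FF)

# Example: 8 unit capacitors on a node, 42 um of routing, 3 crossovers.
print(f"{estimate_node_capacitance_ff(8, 42.0, 3):.2f} fF")
```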

A SystemC AMS model of a pipeline ADC carries both behavioral and structural detail, so engineers can trade off accuracy against runtime. Using an analytical model enables a thousand runs in just a few minutes. The outer loop adds the ADC model, and that run takes about 50 seconds to complete.
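To make the behavioral modeling concrete, here is a minimal, generic Python model of a 1-bit-per-stage pipeline ADC with the two non-idealities mentioned above, a capacitor-ratio (gain) error and an OpAmp offset. It is a textbook sketch for intuition only; the paper's model is written in SystemC AMS and includes structural detail this omits.

```python
def pipeline_adc_model(vin, vref=1.0, n_stages=8,
                       cap_ratio_error=0.0, opamp_offset_v=0.0):
    """Behavioral 1-bit-per-stage pipeline ADC with two non-idealities:
    a capacitor-ratio error (which perturbs the ideal residue gain of 2)
    and an OpAmp offset added at every stage.  Returns the digital code.
    """
    code = 0
    residue = vin
    gain = 2.0 * (1.0 + cap_ratio_error)   # ideal inter-stage gain is exactly 2
    for _ in range(n_stages):
        bit = 1 if residue >= 0.0 else 0
        code = (code << 1) | bit
        d = 1.0 if bit else -1.0
        residue = gain * residue - d * vref + opamp_offset_v
    return code

# Sweep the input to get transfer curves; the non-idealities bend the real
# curve away from the ideal staircase, which is what the optimizer minimizes.
ideal  = [pipeline_adc_model(v / 100.0) for v in range(-100, 101)]
skewed = [pipeline_adc_model(v / 100.0, cap_ratio_error=0.01,
                             opamp_offset_v=0.005) for v in range(-100, 101)]
print(ideal[:5], skewed[:5])
```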

This generator template approach even estimates layout parasitics, capacitor variation and device mismatch. Both global and local process variations were taken into account.

Results

Starting from transistor-level schematics for the ADC, a parameterized model was built. Having a model enabled fast simulation and optimization, with the goals of:

  • Reduced layout area
  • Specific layout aspect ratio
  • Minimal error in the effective capacitor ratio
  • Robustness against process variations and mismatch

Layout-level optimization used an EDA tool from MunEDA called WiCkeD, which takes a simulator-in-the-loop approach; here the template played the role of the simulator:

Optimization with WiCkeD

Whenever the optimizer needs the performance for a set of design parameters, it asks the template to evaluate them, then determines the direction in which to change the parameters to improve the layout. A template evaluation takes under 5 seconds, so the optimization quickly converges on an optimal set of layout parameters.
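The loop structure itself is easy to sketch. Below is a generic illustration of a template-as-simulator optimization loop, with SciPy's Nelder-Mead standing in for WiCkeD; the parameter names, cost weights, and template arithmetic are invented for the example and are not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

TARGET_ASPECT_RATIO = 1.0    # desired capacitor-array aspect ratio (illustrative)
UNIT_CAP_AREA_UM2   = 25.0   # placeholder area of one unit capacitor, um^2

def template_evaluate(params):
    """Stand-in for the fast layout template: given design parameters, return
    estimated area, aspect ratio and capacitor-ratio error from a few
    arithmetic operations instead of a layout-plus-extraction run."""
    rows, unit_w = params
    rows = max(int(round(rows)), 1)
    n_units = 64                          # fixed capacitor-array size for the example
    cols = -(-n_units // rows)            # ceiling division
    width, height = cols * unit_w, rows * unit_w
    area = n_units * UNIT_CAP_AREA_UM2 + 0.1 * (width + height)   # toy routing overhead
    aspect = width / height
    ratio_error = 0.002 * abs(unit_w - 5.0)   # toy mismatch model, best near W = 5 um
    return area, aspect, ratio_error

def cost(params):
    """Scalar cost the optimizer minimizes: a weighted sum of the goals above."""
    area, aspect, ratio_error = template_evaluate(params)
    return (1e-3 * area
            + 10.0 * (aspect - TARGET_ASPECT_RATIO) ** 2
            + 1e4 * ratio_error ** 2)

result = minimize(cost, x0=np.array([4.0, 4.0]), method="Nelder-Mead")
print("best (rows, unit W):", result.x, "-> cost:", result.fun)
```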

To get the best capacitor-array aspect ratio, they selected an input parameter range for the unit devices’ W and L and for the number of rows in the array. They then simulated the worst-case performance, including offsets, across 114 individual parameterizations in under two hours. The worst-case transfer functions of the ADC model for various row counts and W/L values are shown below, with the ideal curve dashed:

Worst-case transfer functions

Summary

Analog design and optimization are more difficult than digital design because more interdependencies and trade-offs are involved. A new approach combining layout generators, template-based layout estimates, and optimization, using MunEDA’s WiCkeD optimization technology, has been demonstrated successfully on an ADC circuit. Instead of taking days to weeks, this approach met the ADC’s specifications in just minutes.

Related Blogs