
Blending Finite Element Methods and ML
by Bernard Murphy on 01-23-2024 at 6:00 am


Finite element methods for analysis crop up in many domains in electronic system design: mechanical stress analysis in multi-die systems, thermal analysis as a counterpart to both cooling and stress analysis (e.g., warping), and electromagnetic compliance analysis. (Computational fluid dynamics – CFD – is a different beast which I might cover in a separate blog.) I have covered topics in this area with another client and continue to find the domain attractive because it resonates with my physics background and my inner math geek (solving differential equations). Here I explore a recent paper from Siemens AG together with the Technical Universities of Munich and Braunschweig.

The problem statement

Finite element methods are techniques to numerically solve systems of 2D/3D partial differential equations (PDEs) arising in many physical analyses. These can extend from how heat diffuses in a complex SoC, to EM analyses for automotive radar, to how a mechanical structure bends under stress, to how the front of a car crumples in a crash.

For FEM, a mesh is constructed across the physical space as a discrete framework for analysis: finer grained around boundaries, especially where boundary conditions vary rapidly, and coarser grained elsewhere. Skipping the gory details, the method optimizes linear superpositions of simple basis functions across the mesh by varying the coefficients in the superposition. Using linear algebra and other techniques, the optimization seeks a best fit, within some acceptable tolerance, consistent with discrete proxies for the PDEs together with the initial and boundary conditions.
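
To make the method concrete, here is a minimal sketch of a 1D finite element solve in Python. It is illustrative only (my own toy example, not anything from the paper): the Poisson equation -u''(x) = f(x) on [0, 1] with zero boundary values, linear "hat" basis functions, and a uniform mesh standing in for the adaptive 2D/3D meshes used in practice.

```python
# Toy 1D FEM solve of -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0,
# using linear "hat" basis functions on a uniform mesh. Real FEM codes
# handle 2D/3D meshes, adaptive refinement, and general boundary conditions.
import numpy as np

n = 100                              # number of elements
h = 1.0 / n                          # element size
x = np.linspace(0.0, 1.0, n + 1)
f = np.sin(np.pi * x)                # example source term

# Assemble the stiffness matrix K and load vector b for the interior nodes.
K = (np.diag(2.0 * np.ones(n - 1)) +
     np.diag(-1.0 * np.ones(n - 2), 1) +
     np.diag(-1.0 * np.ones(n - 2), -1)) / h
b = h * f[1:-1]                      # simple (lumped) approximation of the load

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K, b)      # coefficients of the hat-function expansion

# The analytic solution of -u'' = sin(pi x) is sin(pi x) / pi^2; check the fit.
print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))
```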

Very large meshes are commonly needed to reach acceptable accuracy, leading to very long run times for FEM solutions on realistic problems; this becomes even more onerous when running multiple analyses to explore optimization possibilities. Each run essentially starts from scratch with no learning leverage between runs, which suggests an opportunity to use ML methods to accelerate analysis.

Ways to use ML with FEM

A widely used approach to accelerate FEM analyses (FEAs) is to build surrogate models. These are like abstract models in other domains – simplified versions of the full complexity of the original model. FEA experts talk about Reduced Order Models (ROMs), which still provide a good approximation of the (discretized) physical behavior of the source model while running much faster, bypassing the need to run full FEA, at least in the design optimization phase.

One way to build a surrogate is to start with a batch of FEA runs and use their inputs and outputs as a training database. However, this still requires lengthy analyses to generate the training set. The authors also point out another weakness in such an approach: ML has no native understanding of the physics constraints important in all such applications and is therefore prone to hallucination if presented with a scenario outside its training set.
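
As a hedged illustration of that data-driven surrogate idea (my own toy, not the approach in the paper), the sketch below fits a cheap regression model to parameter/result pairs gathered from prior solver runs, then queries the model instead of the solver. The expensive_fea function and all numbers are invented stand-ins.

```python
# Hedged illustration only (not the paper's method): fit a cheap regression
# surrogate to (design parameter -> FEA result) pairs so that later design
# exploration can skip the expensive solver. The "FEA" here is faked with an
# analytic function purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def expensive_fea(p):
    # Stand-in for a long FEA run, e.g. peak temperature vs. a design parameter.
    return 20.0 + 50.0 * np.exp(-3.0 * p) + 5.0 * p**2

p_train = rng.uniform(0.0, 2.0, 40)            # design points from prior FEA runs
y_train = expensive_fea(p_train)               # their (expensive) results

coeffs = np.polyfit(p_train, y_train, deg=4)   # degree-4 least-squares surrogate
surrogate = np.poly1d(coeffs)

p_new = 1.3
print(surrogate(p_new), expensive_fea(p_new))  # fast prediction vs. "true" value
```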

Conversely, replacing FEM with a physics-informed neural network (PINN) incorporates the physical PDEs into the loss function calculation, in essence introducing physical constraints into gradient-based optimization. This is a clever idea, though subsequent research has shown that while the method works on simple problems, it breaks down in the presence of high-frequency and multi-scale features. Also disappointing is that the training time for such methods can be longer than FEA runtimes.
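
A minimal sketch of what "PDE in the loss function" means, assuming PyTorch is available (again a toy, not the paper's setup): the residual of a simple 1D Poisson equation is evaluated on the network output via automatic differentiation and combined with a boundary-condition penalty.

```python
# Sketch of the "PDE in the loss" idea, assuming PyTorch is available (a toy,
# not the paper's setup): the residual of -u''(x) = f(x) is evaluated on the
# network output via automatic differentiation and added to the loss together
# with a boundary-condition penalty.
import math
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def pinn_loss(x_collocation):
    x = x_collocation.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    f = torch.sin(math.pi * x)                # example source term
    residual = -d2u - f                       # PDE residual at collocation points
    bc = net(torch.tensor([[0.0], [1.0]]))    # enforce u(0) = u(1) = 0
    return (residual**2).mean() + (bc**2).mean()

x_pts = torch.rand(200, 1)                    # random collocation points in [0, 1]
loss = pinn_loss(x_pts)
loss.backward()                               # gradients flow to the network weights
```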

This paper suggests an intriguing alternative: combine FEA and ML training more closely, so that the ML loss function trains on the FEA error calculations made while fitting trial solutions across the mesh. There is some similarity with the PINN approach but with an important difference: this neural net runs together with the FEA to accelerate convergence to a solution during training, which apparently results in faster training. In inference, the neural net model runs without need for the FEA. By construction, a model trained in this way should conform closely to the physical constraints of the real problem, since it has been trained very closely against a physically aware solver.
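
My loose reading of that idea, expressed as a sketch (my interpretation only, not the authors' code): score the network's predicted solution with the FEM residual itself, built from the same assembled stiffness matrix and load vector a solver would use for the 1D toy problem above, so the solver's own error measure drives training.

```python
# Sketch of my reading of the hybrid approach (not the authors' code): the
# network predicts the nodal solution and is scored by the FEM residual
# K u_pred - b, so the solver's own error measure drives training.
import numpy as np
import torch

n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
K = (np.diag(2.0 * np.ones(n - 1)) +
     np.diag(-1.0 * np.ones(n - 2), 1) +
     np.diag(-1.0 * np.ones(n - 2), -1)) / h
b = h * np.sin(np.pi * x[1:-1])

K_t = torch.tensor(K, dtype=torch.float32)
b_t = torch.tensor(b, dtype=torch.float32)
x_t = torch.tensor(x[1:-1, None], dtype=torch.float32)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    u_pred = net(x_t).squeeze(-1)     # predicted interior nodal values
    residual = K_t @ u_pred - b_t     # FEM residual as the training signal
    loss = (residual**2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```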

I think my interpretation here is fairly accurate. I welcome corrections from experts!



Chiplet Summit 2024 Preview
by Daniel Nenni on 01-22-2024 at 10:00 am


The second annual Chiplet Summit is coming up and if it is anything like the first one it will not disappoint. Chiplets are a disruptive semiconductor technology that is already being used by top semiconductor companies like Intel, Nvidia, AMD and others. These companies design their own chiplets, so they are blazing the trail for us all.

The next phase of adoption will be commercial chiplets designed by outside sources (IP and ASIC companies) for all to use. Some say the chiplet market could quickly reach $6B and I definitely see that happening. The opportunity for IP and ASIC companies to sell die is just the start. I see a huge upside here, absolutely!

The Chiplet Summit is February 6-8 at my favorite location, the Santa Clara Convention Center. The introduction and the keynotes have been posted:

2024 will be a growth year – especially for generative AI and chiplets! Start it off right by meeting with the leading chiplet executives and technologists at the 2nd annual Chiplet Summit. You will hear the latest ideas and breakthroughs, see the new products, learn about generative AI acceleration, and exchange ideas with the industry’s innovators.

We have a new venue with lots of room for conversations, meetings, demonstrations, and posters. Please join us for pre-conference tutorials (including new ones on working with foundries and AI in chiplet design), a superpanel on accelerating generative AI applications, our popular “Chat with the Experts” event, presentations, and exhibits.

Chiplet Summit is the place where the entire ecosystem meets to share ideas across disciplines and keep the chiplet industry moving ahead. Please join us at this must-attend event!

You will hear about industry trends at keynotes from Applied Materials, Synopsys, Micron, Alphawave Semi, Hyperion Technologies, and the Open Compute Project:

Enabling an Open Chiplet Ecosystem at the Package Level

Brian Rea, UCIe™ Consortium

About UCIe™ Consortium: The UCIe Consortium is an industry consortium dedicated to advancing UCIe™ (Universal Chiplet Interconnect Express™) technology, an open industry standard that defines the interconnect between chiplets within a package, enabling an open chiplet ecosystem and ubiquitous interconnect at the package level. UCIe Consortium is led by key industry leaders Advanced Semiconductor Engineering, Inc. (ASE), Alibaba, AMD, Arm, Google Cloud, Intel Corporation, Meta, Microsoft Corporation, NVIDIA, Qualcomm Incorporated, Samsung Electronics, and Taiwan Semiconductor Manufacturing Company. For more information, visit www.UCIexpress.org.

Multi-Die Systems Set the Stage for Innovation

Abhijeet Chakraborty, VP Engineering, Synopsys

Abstract: So far, the only design teams able to handle multi-die systems are bleeding-edge ones accustomed to breaking new ground with every step. Now the ecosystem is providing the tools, IP, standards, connectivity, and manufacturing needed to allow many more teams to switch to this new approach. Multi-die systems are now the mainstream and open up innovation in AI, security, transaction systems, virtual reality, and other areas. They continue the trend established by Moore’s Law to provide more compute power, more memory and storage, and faster I/O in less space and at lower cost.

Creating the Connectivity Required for AI Everywhere

Tony Chan Carusone, CTO, Alphawave Semi

Abstract: All major semiconductor companies now use chiplets for developing devices at leading-edge nodes. This approach requires a die-to-die interface within packages to provide very fast communications. Such an interface is particularly important for AI applications which are springing up everywhere, including both large systems and on the edge. AI requires high throughput, low latency, low energy consumption, and the ability to manage large data sets. The interface must handle needs ranging from enormous clusters requiring optical interconnects to portable, wearable, mobile, and remote systems that are extremely power-limited. It must also work with platforms such as the widely recognized ChatGPT and others that are on the horizon. The right interface with the right ecosystem is critical for the new world of AI everywhere.

New Packaging Technology Accelerates Major Compute Tasks

Sam Salama, Hyperion Technologies

Abstract: Many rapidly emerging compute applications (especially generative AI) need vast computing power and memory capacity. A new 3D packaging technology (QCIA) offers a highly economical solution. It allows larger packages, much higher power dissipation (up to 1000 watts per package), and substrates that exceed 100 mm by 100 mm (beyond the limitations of silicon interposers and without the warpage issues). For example, a single package could hold compute and SRAM devices plus many high-bandwidth memory (HBM) stacks for AI acceleration. Even more should be possible soon as research into new technologies employing < 1-micrometer line/space redistribution layers and panel-processing technologies for bigger packages continues. The development of materials for systems with even higher power dissipation is also ongoing. The QCIA technology can both help meet thermal challenges and deliver fine-pitch connections. It can provide some of the smaller-better-cheaper progress that Moore’s Law can no longer offer.

Creating a Vibrant Open Chiplet Economy

Bapi Vinnakota, Open Compute Project

Abstract: Chiplets have arrived as the way to design very large chips at leading-edge nodes. But how can we take full advantage of the drop-in approach they offer, allowing designers to easily include existing designs at older nodes, IP, and chiplets from outside sources? The OCP believes that an open chiplet economy is the way to go. It will serve the needs of chiplet creators, ASIC designers, and those providing support such as design tools, test facilities, and professional services. Such an economy requires standards, tools, and best practices. The OCP is already pursuing projects that standardize design models, help establish 3rd party testing, improve supply chain methods, define best practices for assembly, and create a standard high-performance, low-power die-to-die interface. The open chiplet economy will benefit large and small organizations alike, and will create huge opportunities for economic growth worldwide.

Many of the Chiplet Summit exhibitors are on SemiWiki so I will definitely be there. 2024 will be a big semiconductor growth year and that will include conference attendance, in my opinion.

Also Read:

How Disruptive will Chiplets be for Intel and TSMC?

Synopsys Geared for Next Era’s Opportunity and Growth

Will Chiplet Adoption Mimic IP Adoption?



Luc Burgun: EDA CEO, Now French Startup Investor
by Lauro Rizzatti on 01-22-2024 at 6:00 am


When we last saw Luc Burgun’s name in the semiconductor industry, he was CEO and co-founder of EVE (Emulation and Verification Engineering), creator of the ZeBu (Zero Bugs) hardware emulator. EVE was acquired by Synopsys in 2012.

After the acquisition, Luc moved out of EDA and became an investor. Join me as I catch up with Luc and learn more about his activities and investments and what’s interesting to him today.

What did you do once EVE became part of Synopsys?
After the acquisition, Synopsys offered me an opportunity to join the team. Even though Synopsys is a great company to work for, two years later I craved a change. An opportunity to join a startup designing FPGA-based market data processing systems for the financial markets as CEO accelerated my departure. NovaSparks, that’s the startup’s name, is still in business and growing. Recently, we entered the APAC market, opening and staffing an office in Bangkok, Thailand.

As you indicated in the intro, I’m also an investor.

Did you consider doing another startup?
I must say that the idea resonated with me at the time, but the NovaSparks opportunity cut short my planning.

Is there one investment area currently more interesting to you than others?
I am in favor of diversification. I do not put all my eggs in one basket. Roughly speaking, I split my investments evenly across three buckets: real estate, private equity and startups. For private equity, I consult with my network of financial advisors. Over the years, I established a network of trusted advisors in the private equity community who possess a broad and deep knowledge of the market.

As for investing in startups, my focus is on French small enterprises mostly doing B2B in semiconductor, AI and financial trading.

What made you decide to invest in French startups?
Parbleu!* I am a Frenchman (smiling).

On a more serious note, I always thought that France enjoys first-class engineers by virtue of top-notch education at excellent universities. However, high-tech investment is a high-risk proposition that fits well with Silicon Valley but less so with France.

Semiconductor development, and AI even more, requires rather substantial investments before reaching a return on investment, and sometimes institutional investors don’t see that. They may make an initial investment, but then they realize that it will be necessary to continue to invest and they drop out.

My success at EVE was valuable training for me and I learned a lesson. When I invest, I am in for the long run.

*English translation: Of course!

Where is your investment focus now?
So far, I have invested in several high-tech startups, mostly in the semiconductor business. Not all have been successful. On average, they have generated a substantial return on investment. Some of them are still in business, boosting my expectations for a higher payoff down the road.

Is there a company that stands out? Why?
Among the active startups, obviously, NovaSparks is my #1. Then I look forward to VSORA, a startup that devised an exciting and novel semiconductor architecture ideal for processing a spectrum of leading-edge AI applications. It has been implemented on two families of devices.

The Tyr family addresses autonomous driving (AD) vehicles at Levels 4 and 5; the Jotunn family delivers generative AI (GenAI) acceleration. Both applications are demanding in terms of computing power, measured in multiple Petaflops. High throughput in absolute terms is just one of several critical requirements. As critical is the efficiency of the processing cores. Today, the most popular AI computing core is the GPU, created decades ago to accelerate graphics rendering. When applied to AI algorithmic acceleration, the GPU efficiency drops dramatically. In processing AI algorithms like transformers, the GPU efficiency hovers around 1%. The VSORA architecture is 50x more efficient. Other attributes include low latency and low power consumption. For edge applications, low cost is essential.

Why do you consider VSORA such an important investment?
Because I believe in their creation and in the team behind it. I have known the team since 2002 when EVE’s headquarters shared the same building with DibCom, the precursor to VSORA.

To put my trust in perspective, let me highlight the main attributes of the VSORA architecture.

The Tyr device boasts up to 1,600 Teraflops at efficiencies of 60% or more. It can process the most advanced AD algorithms like transformers and retentive nets, realizing perception stage contextual awareness in less than 20msec. The Tyr1 has a peak power consumption of only 10W.

The Jotunn8 generative AI accelerator delivers up to six Petaflops, with efficiency in the range of 50% for very large and complex LLMs like GPT-4, while consuming a maximum of 180 Watts.

VSORA’s attributes have been confirmed in early customer evaluations.

That is only part of what makes a successful product. Another is the unique VSORA development software, built from the ground up along with the creation of the hardware. Porting new complex algorithms like incremental transformers onto the VSORA computing processors is a straightforward process. Users only deal with the algorithmic language, never having to bother with low level code such as RTL. The tight integration of the software with the hardware optimizes the hardware resources based on customer profiling without manual tuning and simplifies the entire design process, reducing cost and time to market.

A VSORA device can be deployed rapidly and efficiently with highly competitive overall performance.

What advice do you offer startup founders?
As we say in French, I like “to bring my stone to the building.” My advice is twofold. First, I like to coach and motivate startup founders, especially in times of stress. Second, I am available to help them by getting involved in some very specific projects when necessary. It could be business, marketing, legal, HR, finance or even M&A. Any aspect where founders are not comfortable.

Also Read:

Long-standing Roadblock to Viable L4/L5 Autonomous Driving and Generative AI Inference at the Edge

The Corellium Experience Moves to EDA

EDA Product Mix Changes as Hardware-Assisted Verification Gains Momentum



Podcast EP204: A Broad View of Design Architectures and the Role of the NoC with Arteris’ Michal Siwinski
by Daniel Nenni on 01-19-2024 at 10:00 am

Dan is joined by Michal Siwinski, Chief Marketing Officer at Arteris. He brings more than two decades of experience in technology strategy, marketing and growth acceleration. Prior to joining Arteris, he served as Corporate Vice President of Marketing and Business Development at Cadence, where through his leadership the company’s brand and digital image were transformed. Mr. Siwinski directed the strategy and marketing for Cadence’s entry into the intellectual property, embedded and system domains. Prior to Cadence, he worked at Verplex Systems on new product innovation and at Mentor Graphics on IP and SoC design services.

In this far-reaching discussion, Dan explores new chip design architectures with Michal. The unique design requirements for trends such as AI, inference, and multi-die design are discussed in detail. The pivotal role of specialized IP such as the Arteris NoC is also discussed, with an assessment of future trends and impact.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



2024 Outlook with Chris Morrison of Agile Analog
by Daniel Nenni on 01-19-2024 at 6:00 am


Agile Analog focuses on providing analog and mixed-signal IP (Intellectual Property) solutions for the semiconductor industry. They specialize in developing analog and mixed-signal designs that can be integrated into various electronic devices and systems. Composa™ is their unique technology that automatically generates analog IP to your exact specifications. Applications include HPC, IoT, AI, automotive and aerospace.

Tell us a little bit about yourself and your company.
I am an electronics engineer with over 15 years of experience working in the semiconductor industry, delivering innovative analog, digital, power management and audio solutions. In that time I have been fortunate to have had some really interesting technical, product and project roles, including during my 10 years at Dialog Semiconductor (acquired by Renesas). Now, I am the Director of Product Marketing at Agile Analog, the customizable analog IP company.

At Agile Analog, we are transforming the world of analog IP with our highly configurable, multi-process analog IP technology. The company has developed a unique way to automatically generate analog IP that meets the customer’s exact specifications, for any foundry and on any process, from legacy nodes right up to the leading edge. We provide a wide range of novel analog IP solutions and subsystems, covering data conversion, power management, IC monitoring, security and always-on IP. Applications include data centers/HPC (High Performance Computing), IoT, AI, quantum computing, automotive and aerospace.

What was the most exciting high point of 2023 for your company?
2023 was a busy year for Agile Analog. There were several significant company milestones, including product launches for our 12-bit ADC and RISC-V subsystem. There were also important industry partner announcements – when we joined the Intel Foundry Services Accelerator IP Alliance Program in March and then the TSMC Open Innovation Platform (OIP) IP Alliance Program in September.

If I had to single out just one exciting high point for Agile Analog in 2023, it would be becoming a TSMC OIP member. This was a great team achievement and it has opened up many new doors for us, for example enabling us to take part in the TSMC OIP Ecosystem Forums last September and October. Collaborating closely with the major foundries means that we can better serve our growing array of global customers, especially those in sectors like HPC (High Performance Computing) working on the most advanced nodes.

What was the biggest challenge your company faced in 2023?
2023 was a challenging year for the semiconductor industry as a whole, with supply chain disruptions, component shortages, staff layoffs, the global economic downturn and geopolitical turmoil. As a result of these complex issues there was a lack of confidence across the industry. This impacted most organisations in our market, particularly in the first half of 2023. By the end of 2023, however, and coming into 2024, at Agile Analog we have seen a really strong pick-up with a lot of new projects.

How is your company’s work addressing this biggest challenge?
Over the last 12 months, the Agile Analog team has continued to develop our product portfolio and strengthen partner relationships, whilst the industry has slowly regained its confidence. Our core mission is to reduce chip design complexity, risk and cost, enabling a faster time-to-market for the very latest semiconductor devices.

As our innovative Composa™ platform automatically generates customizable and process agnostic analog IP, analog circuits can be designed more quickly and to a higher quality. Our digitally wrapped solutions simplify the semiconductor integration process. Using our technology, chip designers can add analog features to provide product differentiation without needing to worry about specialist analog engineers or high costs. This will help to accelerate innovation in semiconductor design.

In 2023 we saw great advances within our Composa tool, with multiple customer deliveries generated and verified through our Composa engine. The advances here allow us to generate sized transistor-level schematics in a matter of minutes. In 2024 we will continue to extend the functionality of our Composa tool, in particular adding layout functionality, initially around component placement and then on to full routing.

What do you think the biggest growth area for 2024 will be, and why?
Analysts are predicting that there will be a return to growth in the semiconductor industry in 2024. I expect that this will be driven in part by increased demand for HPC, especially with continued growth in ICs for AI. At some point we must see some recovery in the consumer space too!

Away from the more mainstream areas there is also mounting interest in quantum computing and quantum sensing technology. In 2024 we will begin to see more companies try to take these solutions out of the science labs and work towards commercializing the technology. This is largely thanks to the huge amounts of government investment worldwide in this field.

How is your company’s work addressing this growth?
There is growing demand for our highly configurable, high-quality and high-performance analog IP products – especially our data conversion, security, and power management solutions. In terms of quantum computing technology, one of the reasons that the Agile Analog team worked on the sureCore CryoCMOS project last year was to gain more experience in this area. We are working hard to extend our product portfolio further to support the increasing range of innovative applications.

In 2024 we will have a particular focus on building out our data converter roadmap, bringing higher resolution and higher data rate converters to market and delivering our chip health and monitoring, security, and power management IP on the latest nodes.

What conferences did you attend in 2023 and how was the traffic?
I was on the road quite a lot in 2023! I attended an interesting range of industry events across the world – including the TSMC Symposium in Amsterdam in May, the RISC-V Summit Europe in Barcelona in June, the GSA Forum in Munich in June, and then DAC (Design Automation Conference) in Santa Clara in July.

During the second half of the year we focused more on foundry events – such as the TSMC OIP Ecosystem Forums in Santa Clara in September and in Amsterdam in October that I mentioned earlier. The event world seemed to be beginning to come back to life in 2023, with increased traffic compared to 2022, so here’s hoping that this momentum continues in 2024.

Will you attend conferences in 2024? Same or more?
In 2024, it is likely that we will maintain our focus on strengthening our foundry relationships by participating in more of the foundry-related events, the first of these being the new Intel IFS event in San Jose in February. This year I will be joined on the road by a new colleague, Christelle Faucon, our new VP of Sales. Together we will spread the word about the benefits of our unique analog IP technology to accelerate the adoption of our analog IP products across the globe.

Also Read:

Agile Analog Partners with sureCore for Quantum Computing

Agile Analog Visit at #60DAC

Counter-Measures for Voltage Side-Channel Attacks



Non-EUV Exposures in EUV Lithography Systems Provide the Floor for Stochastic Defects in EUV Lithography
by Fred Chen on 01-18-2024 at 10:00 am


EUV lithography is a complicated process with many factors affecting the production of the final image. The EUV light itself doesn’t directly generate the images, but acts through secondary electrons which are released as a result of ionization by incoming EUV photons. Consequently, we need to be aware of the fluctuations of the electron number density as well as the scattering of electrons, leading to blur [1,2].

In fact, these secondary electrons need not come from direct EUV absorption in the resist either. Secondary electrons can come from absorption underneath the resist, which includes a certain amount of defocus. Moreover, there is an EUV-induced plasma in the hydrogen ambient above the resist [3]. This plasma can be a source of hydrogen ions, electrons, and vacuum ultraviolet (VUV) radiation [4,5]. The VUV radiation, the electrons and even the ions constitute separate blanket resist exposure sources. These outside sources of secondary electrons and other non-EUV radiation all basically lead to non-EUV exposures of resists in EUV lithography systems.

Defocused images have reduced differences between maximum and minimum dose levels, and also add an offset to the minimum dose level (Figure 1). Thus, when incorporated with the EUV-electron dose profile, the overall image is more sensitive to stochastic fluctuations, since the defocused doses are everywhere closer to the printing threshold. The blanket exposures from the EUV-induced plasma further increase sensitivity to stochastic fluctuations at the minimum dose regions.

Figure 1. Defocus reduces the peak-valley difference and adds an offset to the minimum dose level. This tends to increase vulnerability to stochastic fluctuations.

Thus, stochastic defect levels are expected to be worse when including the contributions from these non-EUV sources. The effect is equivalent to adding a reduced incident EUV dose and adding an extra background electron dose.

Figure 2. 30 nm pitch, 30 mJ/cm2 absorbed, 3 nm blur, without non-EUV sources. Pixel-based smoothing (rolling average of 3×3 0.6 nm x 0.6 nm pixels) is applied. Numbers plotted are electrons per 0.6 nm x 0.6 nm pixel.

Figure 3. 30 nm pitch, 40 mJ/cm2 absorbed, 3 nm blur, 33 e/nm^2 from non-EUV sources. Pixel-based smoothing (rolling average of 3×3 0.6 nm x 0.6 nm pixels) is applied. Numbers plotted are electrons per 0.6 nm x 0.6 nm pixel.

Figures 2 and 3 show that including non-EUV exposure sources can lead to prohibitive stochastic defects, regardless of where the printing threshold is set in the resist development process. In particular, the nominally unexposed regions are more vulnerable to the non-EUV exposure sources. The nominally exposed regions, on the other hand, are more sensitive to the dose levels and blur. The non-EUV exposure sources therefore contribute to providing a floor for stochastic defect density.
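
To see the mechanism in toy form, here is a sketch with entirely made-up numbers (not the 30 nm pitch simulation parameters used in the figures): a 1D dose profile whose peak-to-valley swing is reduced and whose minimum is offset, as defocus plus blanket non-EUV exposure would do, sampled with Poisson shot noise to count how often nominally unexposed pixels cross the printing threshold.

```python
# Toy model with made-up numbers (not the article's simulation): reduce the
# peak-valley swing of a 1D dose profile and add an offset, as defocus plus
# blanket non-EUV exposure would, then apply Poisson shot noise per pixel and
# count how often nominally unexposed pixels cross the printing threshold.
import numpy as np

rng = np.random.default_rng(1)
pitch_px = 50                                         # pixels per pitch
x = np.arange(10 * pitch_px)                          # ten pitches of a 1D image
ideal = 0.5 * (1 + np.cos(2 * np.pi * x / pitch_px))  # normalized aerial image
dark = ideal < 0.25                                   # nominally unexposed pixels

def defect_rate(contrast, offset, mean_counts=40, threshold=0.5, trials=2000):
    dose = offset + contrast * ideal                  # reduced swing + background
    counts = rng.poisson(dose * mean_counts, size=(trials, dose.size))
    # fraction of trials where any nominally dark pixel exceeds the threshold
    return (counts[:, dark] > threshold * mean_counts).any(axis=1).mean()

print(defect_rate(contrast=1.0, offset=0.0))          # in focus, no background
print(defect_rate(contrast=0.6, offset=0.3))          # defocused + blanket exposure
```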

Thus, it is necessary to include the electrons emitted from underneath the resist as well as the radiation from the EUV-induced plasma as exposure sources in EUV lithography systems.

References

[1] P. Theofanis et al., Proc. SPIE 11323, 113230I (2020).

[2] Z. Belete et al., J. Micro/Nanopattern. Mater. Metrol. 20, 014801 (2021).

[3] J. Beckers et al., Appl. Sci. 9, 2827 (2019).

[4] P. De Schepper et al., J. Micro/Nanolith. MEMS MOEMS 13, 023006 (2014).

[5] P. De Schepper et al., Proc. SPIE 9428, 94280C (2015).

This article first appeared in LinkedIn Pulse: Non-EUV Exposures in EUV Lithography Systems Provide the Floor for Stochastic Defects in EUV Lithography

Also Read:

Application-Specific Lithography: Sense Amplifier and Sub-Wordline Driver Metal Patterning in DRAM

BEOL Mask Reduction Using Spacer-Defined Vias and Cuts

Predicting Stochastic Defectivity from Intel’s EUV Resist Electron Scattering Model



Transformative Year for Sondrel
by Daniel Nenni on 01-18-2024 at 6:00 am


This is our third year working with Sondrel and it has been a great experience. I have always been fascinated with the ASIC business and put a full chapter about it in our first book, “Fabless: The Transformation of the Semiconductor Industry.” Companies like Sondrel enabled our move to the fabless model and now they are enabling systems companies to transform into fabless system companies that own their own silicon.

If you have ever walked the exhibits at CES you will have seen ASIC companies at work, bringing ideas to silicon to systems, absolutely.

Founded in 2002, Sondrel provides a full turnkey ASIC service from architecture through silicon supply using the most advanced tools, IP, process technology, and packaging. This is the low-risk way to competitive electronic products for companies big and small. Sondrel is 150+ engineers strong with hundreds of tape-outs under their belts across a wide range of market segments.

The folks at Sondrel posted a bit of a brag last month but it is well deserved. The ASIC business is a crucial part of the semiconductor ecosystem and with 21 years of experience they are one to watch:

We have had a transformative year in 2023 now that we are a listed company on the London Stock Exchange (LON:SND). This has enabled us to expand our global reach and forge new partnerships to support our ambitious growth plans. Here are just a few of the many highlights of 2023:

  • Prototypes of the silicon for each of our three major ASIC chip projects have been delivered to the customers and are now progressing through validation and qualification to production.
  • The first of these, a next-generation AI chip from a leading semiconductor company, has already moved into production. Receipt of the production order was a significant milestone for Sondrel and demonstrated the company’s ability to deliver high-performance, complex IC designs.
  • We have continued to constantly innovate with our own methodologies for design and verification for these advanced nodes, which is why our customers continue to engage with us in the most complex designs and advanced nodes down to 3nm. For example, we were part of the engineering team that taped out a 5nm design for a network chip of over 600mm2.
  • We are a Founding Member of Arm Total Design Ecosystem, a collaborative initiative that brings together leading semiconductor companies, design houses, and software developers to accelerate the development and deployment of powerful Arm processors. This will enable us to create ultra-complex ASIC solutions for exciting and rapidly growing applications such as Artificial Intelligence (AI), Machine Learning (ML) and Edge computing.
  • North America has been a focus for significant investment in our US sales organisation, increasing the headcount and opening an office in Santa Clara, CA. As a result, we have just received an order with a new customer in Silicon Valley with several more, new customers in negotiations for our turnkey design and supply service.
  • Our innovative Architecting the Future approach to design with its set of IP frameworks enabled us to close new contracts as customers can see how it reduces risks and speeds time to market. Having two of the SFA frameworks specifically designed for automotive and ISO 26262 compliance enabled us to engage with several exciting new customers in this area.
  • Reflecting continued strong customer demand, our graduate hiring programme has been enlarged for 2024.  We are looking forward to supporting the growth in the semiconductor communities in the UK, Morocco and India, and welcoming these new members to our team.
Looking Ahead

“Sondrel is committed to providing its customers with the best possible IC design and manufacturing services. The company’s focus on innovation and its commitment to customer service will ensure that it continues to be a leader in the industry,” said Graham Curren, Sondrel’s Founder and CEO.

Also Read:

Sondrel Extends ASIC Turnkey Design to Supply Services From Europe to US

Integrating High Speed IP at 5nm

NoC-Based SoC Design. A Sondrel Perspective

Closing the Communication Chasms in the SoC Design and Manufacturing Supply Chain



Mastering Mixed-Signal Verification with Siemens Symphony Platform
by Daniel Payne on 01-17-2024 at 10:00 am


Digital design and verification is well understood by EDA vendors and IC designers; however, mixed-signal design and verification is more challenging, because the continuous nature of analog signals requires more compute resources and specialized design skills. Siemens EDA has a unique offering in what they call Symphony, as it connects the most popular commercial digital simulators to their Analog FastSPICE (AFS) circuit simulator to enable mixed-signal simulation quickly and accurately. I attended a webinar on Symphony last month and describe my findings in this blog. If you missed it, you can now view the webinar on-demand: Symphony for Mixed-Signal Verification.

Symphony is a mixed-signal simulator that fits into the Siemens EDA Custom IC Verification platform, consisting of a design environment, characterization suite, and IP validation.

Symphony

Mixed-signal design accounts for 85% of IC design starts today, per IBS research, and example product categories include: 5G, automotive, AI, HPC, IoT. SPICE circuit simulators provide the highest simulation accuracy at the transistor-level, yet the run times are long. Digital simulators are fast and high capacity, but don’t account for analog. Mixed-signal simulation combines high precision analog blocks with digital blocks, enabling verification.

Speed vs Accuracy

In mixed-signal verification it’s important to detect top-level connectivity specification errors, find electrical behavior errors, and to ultimately verify that performance metrics were met. Mixed-signal designers want verification tools that have golden SPICE accuracy, are easy to use and setup, enable debug and coverage goals, and that reuse verification infrastructure.

The Symphony platform from Siemens EDA uses AFS XT as the analog solver, is scalable up to 16 cores, supports post-layout simulations, and offers advanced verification and debug. First announced 5 years ago, Symphony continues to grow in popularity with over 100 customers to date. Symphony benchmarks well versus competitors, showing speed improvements of 2X to 8X for mixed-signal circuits like HF oscillators, PLLs, SerDes, video TxRx, audio ADCs and CMOS image sensors.

All the major foundries (TSMC, ST, Samsung, Intel, GlobalFoundries, UMC) have certified the silicon accuracy of AFS across process nodes from 0.5µm to 2nm. Symphony works with digital simulators that support VHDL, Verilog and SystemVerilog: Questa, VCS, Incisive, Xcelium. Engineers can run Symphony in batch mode, from the command line or interactively with a GUI.

Combining a digital HDL netlist with an analog netlist requires a Boundary Element (BE) for unidirectional or bi-directional connections, and with Symphony the BEs are built-in and can be parameterized, so engineers are not writing any code. Analog signals can be accessed in Verilog or SystemVerilog modules using analog access functions. Hi-Z states are also detected during mixed-signal simulation.
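
As a purely conceptual sketch of what a digital-to-analog boundary element does (my own illustration, not Symphony's implementation or API): logic values crossing the domain boundary are translated into electrical quantities, parameterized by things like the supply level, with special handling for states such as Hi-Z.

```python
# Conceptual sketch only (not Symphony's implementation or API): a
# digital-to-analog boundary element translates a logic value crossing the
# domain boundary into a drive voltage, parameterized by the supply rails,
# and flags special states such as Hi-Z.
def d2a_boundary_element(logic_value, vdd=1.2, vss=0.0):
    """Map a logic value at the digital/analog boundary to a drive voltage."""
    if logic_value == '1':
        return vdd
    if logic_value == '0':
        return vss
    if logic_value == 'z':
        return None               # Hi-Z: no drive; a real BE models a high impedance
    return 0.5 * (vdd + vss)      # 'x'/unknown: an arbitrary mid-rail choice here

print(d2a_boundary_element('1'), d2a_boundary_element('z'))
```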

Simulations with Symphony can have a snapshot taken, then restored for subsequent simulations, saving your team valuable simulation resources. To improve debugging there are features in Symphony to browse all the BEs and even control simulation using Tcl code interactively.

NVIDIA shared at the 2018 Siemens user conference how they used Symphony to perform mixed-signal verification of a high-speed GPU interface PHY, seeing simulation speed improvements between 2X and 12X over previous tools. Invensense reported that Symphony performed noise simulation for a multi-slope ADC with a speedup of 2X to 5X compared to other simulators. Analog Value presented at DVCon 2021 how they did an ADC mixed-mode simulation with Monte Carlo results using 6,590 fewer simulations than brute force.

The final part of the webinar was a live demo from Mina Zaki of Siemens EDA, first using the command line with a digital on top netlist, then another example with analog on top. The Solido Waveform Analyzer results were shown for a PLL example.

PLL example

In the third demo example they showed how BE values could be changed from 2.6V to 1.2V to get the proper simulation results.

Summary

Yes, mixed-signal design and verification is a difficult engineering task, yet with the right tool environment like Symphony and the ability to choose your preferred digital simulator running with AFS, it looks like Siemens EDA has an attractive offering. Their simulation technology has been proven over the past five years and has plenty of tier-one adopters, so why not give it a try to improve your mixed-signal turnaround times.

Related Blogs



IEDM 2023 – Imec CFET
by Scotten Jones on 01-17-2024 at 6:00 am


At IEDM 2023, Naoto Horiguchi presented on CFETs and Middle of Line integration. I had a chance to speak with Naoto about this work and this write-up is based on his presentation at IEDM and our follow-up discussion. I always enjoy talking to Naoto; he is one of the leaders in logic technology development, explains the technology in an easy-to-understand way, and is responsive and easy to work with.

Why Do We Need CFETs

As CMOS scaling has transitioned from purely pitch-based scaling to pitch plus track-based scaling, fin depopulation has become necessary, see figure 1. Each time you reduce the number of fins, performance is reduced.

Figure 1. Standard Cell Scaling

By moving from FinFETs to stacked Horizontal NanoSheets (HNS), performance can be improved/recovered through wider nanosheet stacks and by stacking multiple nanosheets vertically, see figure 2.

Figure 2. Nanosheet Advantage

But, as we have seen with FinFETs, nanosheet scaling eventually leads to reduced performance, see figure 3.

Figure 3. Nanosheet Scaling Limitations

CFETs (Complementary FETs) stack the nFET and pFET, see figure 4.

Figure 4. CFET

CFETs once again reset the scaling constraints because the nFET and pFET are stacked and the n-p spacing between the devices becomes vertical instead of horizontal; this enables wider sheets, see figure 5.

Figure 5. CFET Improved Scaling

Figure 6 presents a comparison of HNS and CFET performance versus cell height highlighting the CFET advantage.

Figure 6. HNS vs CFET Performance Versus Cell Height

Monolithic Versus Sequential CFET

There are two fundamentally different approaches to CFET fabrication. In a monolithic flow the CFETs are fabricated on a wafer in a continuous process flow. In a sequential flow the bottom device is fabricated on one wafer, then a second wafer is bonded to the first wafer and the top device is fabricated in the second wafer.

In a sequential flow a bonding dielectric is present between the two devices, see figure 7.

Figure 7. Monolithic Versus Sequential CFET

Because of the bonding dielectric, the structure is taller and has higher capacitance, degrading performance, see figure 8.

Figure 8. Monolithic/Sequential CFET Performance Comparison

Sequential CFETs are also more expensive to fabricate than monolithic CFETs; between that and the performance degradation, the industry appears to be focused on monolithic CFETs.

Monolithic CFET Processing

The monolithic CFET process is illustrated in figure 9.

Figure 9. Monolithic CFET Process Flow

The steps in bold are particularly challenging:

  • Horizontal Nanosheet Stacks (fins) are already high aspect ratio; to make a CFET you then stack the nFET and pFET stacks on top of each other with a relatively thick layer in between, more than doubling the height.
  • The gate formation is also high aspect ratio, as described in the previous point.
  • The epitaxial source/drains must be vertically isolated from each other.
  • Not explicitly called out: the bottom device source/drain is fabricated and then the top device source/drain is fabricated. The thermal processing of the top device and subsequent steps must be done at low enough temperatures to not degrade the bottom device.

One particularly interesting part of this presentation was the Middle Dielectric Isolation (MDI) section; I hadn’t seen this issue before. The MDI improves inner spacer and Work Function Material (WFM) patterning.

Figure 10 illustrates the MDI effect on inner spacer formation (left side) and WFM patterning (right side).

Figure 10. Middle Dielectric Isolation Impact

Figure 11 illustrates the MDI integration flow.

Figure 11. MDI Integration Flow

By integrating MDI the vertical spacing between the nFET and pFET can be increased without impacting the inner spacer formation.

As mentioned previously the bottom device source/drain is fabricated and then the top device source/drain is fabricated. After formation of the bottom source/drain, an isolation dielectric is deposited and etched back to expose the top device for source/drain epitaxial formation. The isolation etch back has to be controlled with the MDI height, see figure 12.

Figure 12. MDI for Vertical Edge Placement Alignment

In order to minimize thermal degradation of device performance, new WFM options with dipole-first processing and no anneal, and low temperature interlayer formation processes, are needed, see figure 13.

Figure 13. Low Temperature Gate Stack Options

Low temperature source/drain growth and low temperature silicides for contact formation are also needed, see figure 14.

Figure 14. Low Temperature Source/Drain and Contact Options

The low temperature silicide will be particularly important for backside direct contact to the bottom device. CFET interconnect requires contacts to the bottom and top device and with the advent of backside power delivery the top device will be contacted from the front side interconnect stack and the bottom device will be contacted from the backside. Molybdenum (Mo) and Niobium (Nb) are promising for pFET and Scandium (Sc) is promising for the nFET, although Sc is hard to deposit with ALD.

Backside and Middle of Line Interconnect

As I have written about previously here, Back Side Power Delivery Network (BSPDN) is expected to be introduced this year by Intel and by Samsung and TSMC in 2026. Splitting interconnect into frontside signal connections and backside power connections reduces IR drop (power loss) by an order of magnitude, see figure 15.

Figure 15. BSPDN Reduction in IR Drop
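
As a back-of-the-envelope illustration of why that matters (my own assumed numbers, not Imec's data), the sketch below just applies V = I × R to a hypothetical power-delivery path before and after moving power to thick, short backside rails with roughly 10x lower effective resistance.

```python
# Back-of-the-envelope with assumed numbers (not Imec's data): IR drop is just
# current times the effective resistance of the power-delivery path, so a ~10x
# lower-resistance backside path gives roughly a 10x smaller drop.
current_a = 2.0            # assumed block current, amperes
r_frontside_ohm = 0.050    # assumed effective frontside PDN resistance
r_backside_ohm = 0.005     # assumed backside PDN resistance (thick, short rails)

print("frontside IR drop:", current_a * r_frontside_ohm, "V")  # 0.1 V
print("backside IR drop: ", current_a * r_backside_ohm, "V")   # 0.01 V
```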

BSPDN also improves track scaling supporting a reduction from a 6-track to 5-track cell, see figure 16.

Figure 16. BSPDN Track Scaling

 The integration of BSPDN with CFET can provide a 20% to 40% power reduction versus Horizontal stacked NanoSheets (HNS), see figure 17.

Figure 17. CFET with BSPDN

In order to go beyond a 5-track cell to a 4-track cell, interconnect challenges must be overcome, see figure 18.

Figure 18. 4-track Cell Interconnect Challenges

 Vertical-Horizontal-Vertical layout with additional Middle of Line (MOL) layers can enable 4-track cells, see figure 19.

Figure 19. VHV Routing and Second MOL Layer

I have previously written about Imec’s work in this area here so I will not repeat that information.

I asked Naoto what it would take to go beyond a 4-track cell to a 3-track cell. He replied that Imec is working on that optimization now, that it may require additional MOL layers and possibly a top-to-bottom connection next to the device that would impact standard cell layout.

I also asked Naoto when he thought we might see CFETs implemented and he said possibly the A10 logic generation or A7 generation.

Author’s note: Intel, Samsung, and TSMC all published work on CFETs at IEDM this year and both Intel and TSMC have technology option maps showing FinFETs giving way to HNS and then CFETs.

Conclusion

Imec continues to show excellent progress on the development of CFETs as a next generation option after HNS. In this work device integration options as well as BSPDN and MOL options have all been described.

Also Read:

IEDM 2023 – Modeling 300mm Wafer Fab Carbon Emissions

SMIC N+2 in Huawei Mate Pro 60

ASML Update SEMICON West 2023



WEBINAR: Mastering DC-DC Converters: Your Guide to Better Hardware
by Daniel Nenni on 01-16-2024 at 10:00 am


Renie Ananthakumar, Principal Engineer at TenXer Labs, invites System Design Engineers globally to a webinar that’s all about practical insights—From Bench to Board: A Comprehensive Guide to DC-DC Converter Selection and Remote Testing.

See the Replay Here

In this session, we are breaking down the complexity of DC-DC converters. Learn the essentials, like how to choose the right one considering efficiency, voltage ranges, form factor, noise, and why reference designs matter. It’s not just theory; we’re throwing in real-world applications to make it stick. Facing hurdles in hardware evaluations? The session has got you covered.

Discover strategies to manage budget limitations, procurement timelines, and the need for expert guidance. We are not just talking about problems; we are talking about solutions. Ever wondered about remote test setups? We will spill the beans on how these setups can fast-track your evaluations, providing step-by-step guidance. It’s like having your own hardware lab accessible from anywhere.

But here’s the real takeaway—a hands-on example that ties theory to application. The goal is simple: equip you with practical knowledge to excel in DC-DC converters, testing setups, and remote evaluations.

Join us for a straightforward journey from the bench to the board, where the focus is on your expertise. Don’t miss out—this webinar is your ticket to mastering DC-DC converters and taking your hardware game to the next level!

See the Replay Here
ABSTRACT

Explore the world of DC-DC converters with Renie Ananthakumar, Principal Engineer at TenXer Labs.

This webinar dives deep into selecting the best converter and building the essential testing environment. Gain a comprehensive understanding of crucial DC-DC converter considerations, including efficiency, voltage ranges, form factor, noise, and the significance of reference designs. Learn to identify and overcome obstacles such as budget limitations, procurement timelines, and the necessity for expert guidance in seamless configuration.
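
To make two of those selection factors concrete, here is a small illustrative sketch (my own, not material from the webinar), with assumed voltages and losses, computing the ideal buck converter duty cycle and an efficiency figure at one operating point.

```python
# Illustrative only (not material from the webinar); all values are assumed.
# Two quick numbers when short-listing a DC-DC converter: the duty cycle needed
# for the input/output voltages and the efficiency at the operating point.
def buck_duty_cycle(v_in, v_out):
    """Ideal (lossless) duty cycle of a buck converter."""
    return v_out / v_in

def efficiency(p_out_w, p_loss_w):
    """Efficiency from output power and total losses (switching + conduction)."""
    return p_out_w / (p_out_w + p_loss_w)

print(buck_duty_cycle(v_in=12.0, v_out=3.3))    # ~0.275
print(efficiency(p_out_w=10.0, p_loss_w=0.8))   # ~0.926
```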

Discover the power of remote test setups to accelerate the evaluation process and enable step-by-step guided evaluation. Dive into a practical example bridging theory and actual application, enriching your expertise in DC-DC converters. Gain exclusive insights to excel in DC-DC converters, testing setups, and remote evaluation capabilities.

SPEAKER

Renie Ananthakumar is a seasoned hardware engineering professional with a decade-long track record in hardware system design. Renie has led the creation of many cutting-edge cloud-based remote testing environments. At TenXer Labs, Renie drives the pioneering initiative of onboarding Hardware Solution or Evaluation Kits (“EVK”) on the “LiveBench – Your Lab on Cloud” platform as a Device Under Test (DUT). The LiveBench platform helps System Designers gain access to an IC Evaluation Kit or a subsystem over the internet from anywhere in the world.

As Principal Engineer, Renie innovates to blend traditional testing methods with real-time virtual environments. Renie excels in the codification of physical hardware, revolutionizing hardware evaluations. His forward-thinking approach merges practical experience and visionary ideas, shaping a future of seamless remote testing. Renie’s commitment to advancing hardware engineering through technology positions him as a key influencer in hardware evaluation methodologies and innovation.

See the Replay Here
Also Read:

CEO Interview: Sridhar Joshi of TenXer