
Application-Specific Lithography: Via Separation for 5nm and Beyond
by Fred Chen on 08-02-2023 at 8:00 am


With metal interconnect pitches shrinking in advanced technology nodes, the center-to-center (C2C) separations between vias are also expected to shrink. For a 5/4nm node minimum metal pitch of 28 nm, we should expect vias separated by 40 nm (Figure 1a). Projecting to 3nm, a metal pitch of 24 nm should lead us to expect vias separated by 34 nm (Figure 1b).

Figure 1. (a) Left: 4nm 28 nm pitch M2 and M3 via connections may be expected to have center-to-center distance of 40 nm. (b) Right: 3nm 24 nm pitch M2 and M3 via connections may be expected to have center-to-center distance of 34 nm.
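These separations follow directly from the layout geometry. Assuming the two vias land on adjacent routing tracks, offset by one minimum metal pitch in both the horizontal and vertical directions (as Figure 1 suggests), the minimum C2C distance is simply the diagonal of a pitch-by-pitch square:

```latex
d_{C2C} = \sqrt{p^2 + p^2} = p\sqrt{2}
\quad\Rightarrow\quad
28\,\text{nm}\times\sqrt{2} \approx 40\,\text{nm},
\qquad
24\,\text{nm}\times\sqrt{2} \approx 34\,\text{nm}
```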

Is it really straightforward to do this by EUV?

Conventional EUV Patterning

Conventional EUV patterning would use a current 0.33NA EUV system to image spots smaller than 20 nm. However, for such an optical system the spot is already at the limit set by the point spread function (PSF), whose image, after resist absorption (e.g., 20%), has a highly stochastic cross-section profile (Figure 2).

Figure 2. Cross-section of point spread function in 20% absorbing resist. The stochastic characteristic is very apparent. The red dotted line indicates the classical non-stochastic image.

The real limitations of the point spread function emerge when two of them are placed close together [1]. At close enough distances, the merging manifests as image slope reduction (degraded contrast) between the two spots (Figure 3). It is also accompanied by a shift in the distance between the expected spot centers in the image, and by stochastic printing between the two spots.

Figure 3. (a) Left: Absorbed photon number per sq. nm. for two point spread functions placed 36 nm apart. Note that the actual image C2C distance is 40 nm. (b) Right: Absorbed photon number per sq. nm. for two point spread functions placed 34 nm apart. The red dotted lines indicate the classical, non-stochastic images.
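The merging behavior is easy to reproduce qualitatively. Below is a minimal sketch, not the author's model: it assumes Gaussian-shaped PSFs and Poisson photon statistics, with all parameter values chosen only for illustration, to show how shot noise degrades the contrast between two closely spaced spots.

```python
import numpy as np

# Minimal sketch (not the article's model): two Gaussian-shaped "point spread
# functions" separated by 36 nm, sampled per nm^2 with Poisson shot noise to
# mimic stochastic photon absorption. All parameter values are illustrative.
x = np.arange(-40, 81, 1.0)             # nm, 1 nm sampling
sigma = 12.0                            # nm, assumed PSF width
peak = 30.0                             # assumed mean absorbed photons/nm^2 at peak
separation = 36.0                       # nm, center-to-center placement

ideal = peak * (np.exp(-x**2 / (2 * sigma**2)) +
                np.exp(-(x - separation)**2 / (2 * sigma**2)))
stochastic = np.random.poisson(ideal)   # one shot-noise realization

# Contrast between the peaks and the valley between them (classical image)
valley_idx = int(np.argmin(np.abs(x - separation / 2)))
print(f"classical peak/valley ratio: {ideal.max() / ideal[valley_idx]:.2f}")
print(f"stochastic sample at the valley: {stochastic[valley_idx]} photons/nm^2")
```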

This basically means that although two spots will appear, the chance of defects is too high to print them reliably in a high-throughput exposure. It would be safer to treat such placements as a forbidden layout.

Alternative Patterning

Going to 0.55NA EUV worsens the stochastic behavior because of much lower resist absorption (e.g., 10%), due to the requirement for much thinner resist from severely limited depth of focus [2]. Such systems are also not currently available, so the only remaining alternative is to print the two spots individually in separate exposures, i.e., double patterning [3]. Moreover, given that the EUV point spread function already has significant stochastic distortion (see Figure 2), it would be better to print a wider spot for each exposure (even by DUV) and apply a post-litho shrink [4].

References

[1] F. Chen, Stochastic Behavior of the Point Spread Function in EUV Lithography, https://www.youtube.com/watch?v=2tgojJ0QrM8

[2] D. Xu et al., “Feasibility of logic metal scaling with 0.55NA EUV single patterning,” Proc. SPIE 12494, 124940M (2023).

[3] F. Chen, Lithography Resolution Limits: Paired Features, https://www.linkedin.com/pulse/lithography-resolution-limits-paired-features-frederick-chen/

[4] H. Yaegashi et al., “Enabled Scaling Capability with Self-aligned Multiple patterning process,” J. Photopolym. Sci. Tech. 27, 491 (2014), https://www.jstage.jst.go.jp/article/photopolymer/27/4/27_491/_pdf

This article originally appeared in LinkedIn Pulse: Application-Specific Lithography: Via Separation for 5nm and Beyond

Also Read:

NILS Enhancement with Higher Transmission Phase-Shift Masks

Assessing EUV Wafer Output: 2019-2022

Application-Specific Lithography: 28 nm Pitch Two-Dimensional Routing



Qualitative Shift in RISC-V Targets Raises Verification Bar
by Bernard Murphy on 08-02-2023 at 6:00 am

[Tables: example System VIP capabilities]

I had grown comfortable thinking about RISC-V as a cost-saving and more flexible alternative to Intel/AMD or Arm in embedded applications, where it is clearly already doing very well. But following a discussion with Dave Kelf and Adnan Hamid of Breker, it is clear RISC-V goals have become much more ambitious, chasing the same big system applications where the major processor players currently claim dominance. Differentiation may now be the driving factor, in cloud and communications infrastructure for example. I am happy to hear that RISC-V users and the ecosystem are spreading their wings; however, these systems inevitably imply a new level of complexity in system and system-level core verification. This is further compounded by the naturally disaggregated nature of RISC-V core and system development.

What changed?

In principle whatever you can do with Arm you should be able to do with RISC-V, right? In the US, Tenstorrent, Esperanto and Condor Computing are active in building many-core CPUs and AI accelerators to serve HPC and more general needs. In processors, SiFive, Codasip and Andes among others are already familiar. In other regions there are active programs both at the regional level and at company level to develop independence from dominant IP/device suppliers, with echoes of recent anxieties around independence in semiconductor manufacturing.

In Europe, the European Processor Initiative wants to establish European independence in HPC and beyond, with end-to-end security. NXP and Infineon are both involved in RISC-V and Open Hardware initiatives though cagey about what they are actually doing. In China, the XiangShan open project provides a China-centric spin on the ISA together with a microarchitecture and implementation and workflow/tools. Alibaba, Tencent, Huawei and ZTE already have active programs in HPC, AI and communications. I would guess all these developers are eager to decouple from embargo risks.

What is common between all these objectives is big, many-core systems applied to big applications in HPC, communications and AI infrastructure. Very understandable goals but there is a high verification hurdle they must all clear in stepping up to that level of RISC-V integration.

What makes big systems different from embedded systems?

The short answer is a mountain of system-level verification. All those tests to verify that multiple cores communicate accurately with each other, that interconnects, cache and I/O honor coherency requirements, that interrupts are handled correctly, that writebacks and address translation services work as expected. As security is progressively standardized for RISC-V applications (critical for servers), implementation won’t be any easier to validate than for other platforms.

Then there’s the OS connection – the ability to boot an unmodified target OS (Windows or Linux) without customization. OS and application suppliers have no interest in developing branches for a proliferation of independent hardware platforms. Neither should platform providers want to maintain their own branches.

Arm has estimated that they spend $150M per year on their own system-level verification/validation. I have no idea what comparable numbers would be for Intel and AMD, but I have to believe these would run to billions of dollars for each. Multiply those numbers by the years of accumulated wisdom in their regression suites and it is clear that getting close to a comparable level of signoff quality for RISC-V-based systems will be a heavy lift.

What will it take?

There is already very active collaboration in the RISC-V ecosystem, both generally and within each of the regional organizations. How can that collaboration best coordinate to tackle the mountain of system-level testing? There is a growing trend toward organizing the task around System VIPs, each providing a baseline for components of a specific system-level check through traffic generation, testing and profiling. System VIPs have the same general intent as more conventional VIPs, though they are inevitably more configurable around system-level parameters and objectives.

The tables at the beginning of this blog show examples of capabilities you would expect system VIPs to support. Accelerating development of all necessary system verification components seems essential to quickly maturing verification quality to the same level as mainstream processor providers yet is beyond the reach of any but the largest verification teams, at least in the near term. The VIP model lends itself to collaborative standardization together with open source and commercial development. It will take a village to build this essential foundation for big RISC-V systems. We’ll all need to pitch in!

Breker tells me they would be happy to share their ideas in more detail. Check them out HERE.

Also Read:

Breker’s Maheen Hamid Believes Shared Vision Unifying Factor for Business Success

Scaling the RISC-V Verification Stack

Breker Verification Systems Unleashes the SystemUVM Initiative to Empower UVM Engineering



A Bold View of Future Product Development with Matt Genovese
by Mike Gianfagna on 08-01-2023 at 10:00 am


Matt Genovese is the founder of Planorama Design, a software requirements and user experience design professional services company.  The company designs simple and intuitive software and IoT product user interfaces for complex, technical applications and systems.  Its unique and proven approach reduces client development and support costs while accelerating achievement of key delivery timelines.  This new and bold type of company comes from Matt and his seasoned team’s deep experience in both chip design and software development.  Planorama can see many sides of the product development problem.

In his recent podcast interview, AI is Changing Software Development, Matt provided a paradigm-shifting perspective on the interplay of generative AI and future application creation. This innovative foresight presents a world where software is produced at a pace that is unprecedented, both in terms of speed and cost. While this may seem like a futuristic thought, the evidence is gradually emerging in our present reality.  Continue reading for a bold view of future product development with Matt Genovese, for both software and hardware products.

Matt’s Views:  AI-Accelerated Co-Design

Artificial Intelligence is behind a generational disruption in the software development cycles of record.  This is not about chatbots and simple Q&A prevalent in the popular literature and news.  Rather, this is about large language models (LLMs) actually interpreting product requirements and generating code implementation.  A variety of open-source software projects have already sprung up, demonstrating that software generation from product requirements, at least on a small scale, is feasible.  Matt emphasizes,

“Quite suddenly, we’ve seen a shift from software that has been developed purely by human software developers, to developers ‘collaborating’ with tools like Github Copilot and OpenAI’s GPT4 to write code.  Many companies, including my internal R&D team at Planorama are reporting very positive experiences and greatly increased development efficiency.  This trend will continue to shift towards additional AI involvement in this process.”

Now consider that this same type of generative AI can also write RTL in the HDL of your choice, derived from product requirements.  While early and limited in capabilities, HDL generation is possible today.  In recent short videos from Planorama, Matt demonstrates both OpenAI’s GPT4 and Github Copilot generating and supporting creation of behavioral RTL in Verilog, and even supporting testbenches.

Couple these nascent capabilities with projects like OpenROAD and OpenLane that aim for “no human in the loop” from RTL to GDSII. We are beginning to see natural points of connection emerge between various technologies that would enable end-to-end acceleration of both software and hardware from unified product specifications.

Matt explains that the original vision of hardware-software co-design manifests a future where the intertwined development processes are not only conceivable but integral to the hardware-software generation landscape. He anticipates powerful, specialized AI technologies will deftly navigate the complexities of partitioning between hardware and software, enablement of dynamic decision-making based on power, performance, cost, area constraints, and other variables to make a type of end-to-end co-design an achievable reality.  The realization of such a system could maximize efficiency and deliver novel solutions.  In this environment, AI tools would scrutinize the intricate trade-offs, optimizing the assignment of each function to either hardware or software.  Matt believes that as technology continues to innovate at a rapid pace, the hardware-software co-design paradigm’s potential is becoming a tangible game-changer for the industry.

To Learn More

You can listen to the audio of Matt’s original Futurati podcast here, with a video version of the same podcast here.

You can watch Matt’s short video demo using GPT4 to code an ALU and testbench in Verilog.

And here is a video showing how Github Copilot can be used to write and edit Verilog.

About the Company

Planorama Design is a unique company, as their mission is to accelerate time to market for software and IoT products.  How do they do this?  As a strategic partner for their technology business clients, Planorama drives solid product requirements, designing the software visual requirements (user interface designs), and delivering all the assets every product development team member needs to execute efficiently.  The results yield products that are easy to use, require less customer support, are more maintainable, and ensure customers experience the value of the solution.

Matt brings over 25 years of experience in high-tech, spanning semiconductors, hardware, IoT, IT, and software product development.  He has a forward-looking view of how to bring these skills together to accomplish the mission of Planorama Design.  AI plays an important role in his thinking, and that makes for a great podcast.  You can learn more about Planorama Design here.  And that’s a bold view of future product development with Matt Genovese.



Agile Analog Visit at #60DAC
by Daniel Payne on 07-31-2023 at 10:00 am


Chris Morrison, Director of Product Marketing at Agile Analog, met with me on Tuesday at DAC this year, and I asked what has changed in the last year for their analog IP business. The short answer is that the company initially built up foundation IP for Analog Mixed-Signal (AMS) uses, then recently added new IP for data conversion, power management, and chip monitoring and health.

I was surprised to learn just how much AMS IP they have to offer:

Data Conversion

  • 8/10-bit DAC
  • 8/10-bit SAR ADC
  • 12-bit SAR ADC

Security

  • Voltage glitch detector

Power Management

  • Linear Regulator
  • Power-On-Reset
  • General purpose bandgap
  • Power Management (PMU) subsystem
  • Sleep Management (SMU) subsystem

Sensing

  • Temperature sensor
  • Programmable threshold comparator
  • PVT sensor subsystem
  • IR drop detector
  • Sensor interface subsystem

Always-on Domains

  • Digital standard cell library
  • RC oscillator

There are four initial subsystems, and they may be combined to build even bigger systems along with RISC-V support. Interface protocols for Arm’s AMBA APB and SiFive’s TileLink are also supported, covering two of the most popular processor ecosystems out there today.

Chris also talked about some current IC design issues like mechanical stress, aging and reliability. What sets Agile Analog apart is the use of their Composa methodology, a rapid way to create new IP blocks based upon customer requirements, Agile Analog’s design recipe and the foundry PDK. This automates the front-end of the analog design process quite well, and further automation of the back-end is in the works, so stay tuned. New, sized schematics are created for IP blocks in under 10 minutes, which is quite a bit faster than the traditional, manual methods that require weeks of engineering effort. Foundry support for these analog IP blocks includes TSMC, GlobalFoundries, Intel, Samsung, SMIC, and UMC.

The Composa tool knows how to combine all of the analog transistors and circuits, just like an expert analog designer would. Connecting your Agile Analog IP blocks is made even easier by wrapping digital logic around the AMS portions, making verification a simpler task. About 50 IP blocks have been delivered to customers, so they are scaling up quickly and efficiently to meet demand.

Last year the company headcount was about 30-35 people, and now it has grown to over 55, so that says something about their success in meeting a challenging market. With headquarters in Cambridge in the UK, the company also has sales offices in Asia and the US to help with your questions.

In March Agile Analog joined the Intel Foundry Services (IFS) Accelerator IP Alliance Program, they’re part of Samsung SAFE, and they have just joined the TSMC OIP program too. In 2024 you can expect to see the company continue their growth path across the globe, and even more AMS IP blocks being added to the portfolio, ready to be customized to meet your unique requirements.

Summary

Analog IP has traditionally been a limiting factor in getting new SoCs to market on time and within spec; however, with the more automated approach used at Agile Analog you can expect to use their AMS IP at your favorite foundry to speed time to market. Both RISC-V and Arm-based designs can quickly add AMS IP by using subsystems that have been digitally wrapped.

New IP blocks in development include a 12-bit DAC, clock monitor, ultra-low power LDO, ultra-low power bandgap, capless LDO, process sensor and free-running clock. I look forward to talking with Agile Analog again to give you another update.




Automated Code Review. Innovation in Verification
by Bernard Murphy on 07-31-2023 at 6:00 am


A little thinking outside the box this time. Microsoft is adding automation to their (and LinkedIn’s) code reviews; maybe we should consider this option also? Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Automating Code Review Activities by Large-Scale Pre-training, published at the 2022 European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). The authors are from Microsoft, LinkedIn, and Peking University.

This paper is interesting on two counts: first that it is a method to automate code change review and second that it uses a transformer model, very appropriate to text analysis. HuggingFace reports availability of CodeReviewer based on work by the same authors. Training is based on (past) real-world code change fragments, together with reviewer comments where available.

Changes are measured first on quality as judged by reviewer comments. Changes without comments are judged to be minor and of sufficient quality; changes with comments suggest suspect quality. In training, comments are interpreted through natural language processing, looking for common patterns which can then be used to suggest comments for new code changes. Finally, this learning is combined with observed changes from the training set to suggest potential code changes that satisfy review comments.
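Since the model is an encoder-decoder (T5-style) network released on HuggingFace, the standard transformers seq2seq API should apply. The sketch below is only a rough illustration of exercising such a checkpoint for review generation: the model identifier and the exact diff prompt format are assumptions, not details confirmed by the paper summary above.

```python
# Hedged sketch: exercising a CodeReviewer-style seq2seq model for review
# generation. The model identifier below is an assumption (check HuggingFace
# for the actual checkpoint name); the "+"/"-" diff prompt format used in the
# paper is only approximated here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "microsoft/codereviewer"  # assumed HuggingFace identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

diff = (
    "-    if user == None:\n"
    "+    if user is None:\n"
    "+        log.warning('missing user')\n"
)
inputs = tokenizer(diff, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```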

Paul’s view

Our first blog on generative AI in verification and wow does it pack some punch! A global team of authors from Microsoft/LinkedIn and a few universities in China look at automatically generating code reviews. The paper was published late last year and describes a generative AI model called CodeReviewer that is based on a Transformer Large Language Model of similar complexity to OpenAI’s GPT-1.

Like any AI system, good training data is vital, and quite a bit of the paper is devoted to how the authors mine GitHub to create an impressive dataset covering 9 different programming languages and over 7.9M code review tickets.

I still find the whole process of training a Transformer super cool: you basically teach it different skills to build up to the desired generative capability. The paper eloquently walks us through the training steps used for CodeReviewer, teaching it first to understand the “+” and “-“ line prefix syntax for source code change diffs, then to “understand” code changes, then to “speak” the English language used to write a code review, and then finally to do the actual job of generating a code review in plain English from a code diff.

To benchmark CodeReviewer the authors split their dataset into two buckets: projects with 2.5k or more code reviews are used as training data and the remaining projects for benchmarking. Results are rock solid: 8% more accurate (72-74% vs. 64-66%) than the best of prior works at determining if a code change is good quality (meaning no review comments needed, it can be committed as is). For code review benchmarking the authors ask 6 expert programmers to personally inspect 100 randomly selected reviews and score them 1-5 for both relevance and informativeness. The average score for CodeReviewer is 3.2 compared to 2.5 for the best of prior works. Nice. And for a bit of fun the authors also do some qualitative comparisons of CodeReviewer with GitHub CoPilot, showing a few examples where CodeReviewer generates much better reviews than CoPilot.

Wonderful paper, well written and easy to read. Expect more from us on generative AI in future blogs – it’s going to transform (no pun intended) verification as well as so many other things in our daily lives!

Raúl’s view

This month we review a recent paper on automating code review. The results and the available CodeReviewer model are relevant and useful for anyone writing code in C, C++, C#, Go, Java, JavaScript, PHP, Python, and Ruby (covering much of EDA software).

The code review process as modeled in this paper consists of proposing a code change (a diff) to an original code C0, resulting in code C1, and then (1) estimating the quality of the code change, (2) generating a review comment RNL in natural language, and finally (3) code refinement, in which a new version of the code is generated taking C1 and RNL as inputs. The authors construct a model called CodeReviewer for tasks 1, 2 and 3: an encoder-decoder model based on the Transformer, with 12 encoder layers and 12 decoder layers, 12 attention heads in each layer, and a hidden size of 768. The total parameter count of the model is 223M. The paper goes into great detail on how to obtain the data to pre-train and fine-tune the model. The dataset is collected from GitHub and the pre-training set consists of 1,161 projects with a total of 7,933,000 pull requests.

Results are compared with three baselines: a state-of-the-art (SOTA) Transformer architecture trained from scratch and two pre-trained models, T5 for code review and CodeT5 [43]. Table 4 shows that CodeReviewer is superior to all three networks for quality estimation (1) in terms of precision (true positives / (true + false positives)), recall (true positives / (true positives + false negatives)), F1 (the harmonic mean of precision and recall) and accuracy ((true positives + true negatives) / total). Performance on review generation (2) is also better in terms of BLEU scores (bilingual evaluation understudy, which evaluates the quality of machine translation on a scale of 0-100) and human evaluations. The BLEU score is still lower than 10, indicating it is a hard task. In terms of code refinement (3), CodeReviewer generates the repaired code exactly matching ground truth in more than 30% of cases, which is twice the result of T5 and relatively 25% more than CodeT5. Interestingly, Table 8 gives results on the influence of the multilingual dataset, showing that for Java, C# and Ruby, training with all languages improves accuracy by 2.32% and the F1 score by 1.10% on average.
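For quick reference, here is a tiny sketch of the quality-estimation metrics listed above, computed from a hypothetical confusion matrix; the counts are made up purely for illustration and are not from the paper.

```python
# Illustrative only: the quality-estimation metrics, computed from a
# hypothetical confusion matrix (tp, fp, fn, tn are made-up counts).
tp, fp, fn, tn = 740, 180, 260, 820

precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)   # harmonic mean
accuracy  = (tp + tn) / (tp + fp + fn + tn)

print(f"precision={precision:.3f} recall={recall:.3f} "
      f"f1={f1:.3f} accuracy={accuracy:.3f}")
```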

The presented results are better than the state of the art. They hinge on collecting and organizing a large-scale dataset from GitHub. Unfortunately, to my knowledge, there are no comparable data collections for hardware designs written in Verilog, VHDL, SystemC, etc., so it is an open question whether CodeReviewer can be used for hardware design. Perhaps closer to home, whether a code review of EDA software would yield results similar to the ones reported, given that CodeReviewer was trained so carefully with different kinds of software, is an interesting question which EDA companies can try to answer. Given that the “multilingual dataset benefits the CodeReviewer for understanding specific languages significantly… It also proves the broad applicability of CodeReviewer in different programming languages”, there is reason to expect broad applicability across different kinds of software.



Podcast EP174: Expanding Application Horizons with OpenLight’s Silicon Photonics Platform
by Daniel Nenni on 07-28-2023 at 10:00 am

Dan is joined by Dr. Adam Carter, CEO of OpenLight. Adam has over 25 years of experience in the semiconductor industry, including a variety of roles in Networking, Optical Communication Systems, Optical Components and Modules markets.

Adam describes OpenLight’s unique silicon photonics platform – what makes it different and how it impacts the development of advanced applications.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



A Look at the Winners of the Silicon Catalyst/Arm Silicon Startups Contest
by Mike Gianfagna on 07-28-2023 at 6:00 am


Silicon Catalyst is the world’s only incubator focused exclusively on semiconductor solutions. This unique position puts the organization in the center of many new technology innovations. Recently, a Semiconductor Startups Contest was announced in collaboration with Arm. You can learn more about the details of the contest here.  Entrants to the contest represented the most interesting emerging applications using Arm technologies, including quantum computing, consumer products, massively parallel AI, cryptography and wireless communications. Silicon Catalyst recently announced the winners of the contest. The winning companies are located in Ireland, Germany and Scotland, emphasizing the global footprint of Silicon Catalyst. Let’s take a look at the winners of the Silicon Catalyst/Arm Silicon Startups Contest.

The Contest

Arm is both a Silicon Catalyst Strategic Partner and an In-Kind Partner, so the company was a natural fit for this contest. Winners receive valuable commercial, technical and marketing support from Arm and Silicon Catalyst.

The overall top winner receives Arm credit worth $150,000. In addition, all winners receive:

  • Access to the full Arm Flexible Access for Startups program, which includes:
    • No cost, easy access to an extensive SoC design portfolio including a wide range of Cortex processors, Mali graphics, Corstone reference systems, CoreLink and CoreSight system IP
    • Free tools, training, and support
    • $0 license fee to produce prototypes
  • Cost-free Arm Design Check-in Review with Arm’s experienced support team
  • Entry to the invitation-only Arm ecosystem event, with the opportunity to be featured and to network and connect with Arm’s broad portfolio of silicon, OEM and software partners
  • Investor pitch review and preparation support by Silicon Catalyst, and an opportunity to present to the Silicon Catalyst Angels group and their investment syndication network

Quite a list of very useful swag. And the winners are…

Top Winner – Equal1

Based in Ireland, Equal1 is a pioneering silicon quantum computing company dedicated to making the technology affordable and accessible. Equal1’s pioneering Quantum System-on-a-Chip (QSoC) processors, now in their third generation, integrate entire quantum computing systems onto a single chip, merging millions of qubits, control systems, and real-time error correction capabilities. The company is one of the top patent holders in quantum silicon and is indeed opening a path to the future of quantum computing.

To learn more about Equal1 you can view a short video from the CEO and CTO here.

Runner Up – SpiNNcloud

Based in Germany, SpiNNcloud delivers a unique solution combining deep learning, symbolic AI, and neuromorphic computing. The company’s platform delivers a real-time, low-latency, and energy-efficient cognitive AI capability that leverages cutting-edge research from the Human Brain Project. By combining statistical AI and neuromorphic computing in a massively-parallel scale with world-class energy efficiency and real-time response, brain-like capabilities can be enabled. The company aims to deliver Large-Scale AI in Real-time.

SpiNNcloud’s system is the only real-time AI cloud with brain inspiration, powering instantaneous robotics control, sensing, prediction and insights, enabling the most intelligent and capable robots and the most effective cognitive services.

Runner-Up – weeteq

Based in Scotland, weeteq is pioneering a new approach to circuit design that defines a new technology category of ‘circuit-level machine learning’. Called Ultra Edge®, it enables circuit-level, sensor-independent, predictive performance planning and unsupervised performance improvement for every closed-loop control system.

The company is developing embedded software, silicon, modules and enterprise software, allowing other technology manufacturers to seamlessly integrate Ultra Edge® into their solutions. weeteq holds four patents to protect the technology.

To Learn More

You can learn more about the contest and the winners here. And that’s a look at the winners of the Silicon Catalyst/Arm Silicon Startups Contest.



ASML Update SEMICON West 2023
by Scotten Jones on 07-27-2023 at 10:00 am


At SEMICON West I had a chance to catch up with Mike Lercel of ASML. In this article I am going to combine ASML presentation material from the SPIE Advanced Lithography Conference, Mike’s SEMICON presentation, my discussions with Mike at SEMICON and a few items from ASML’s recent earnings call.

DUV

ASML continues to improve its DUV systems. The new NXT:2100i has four new features to improve overlay and edge placement error for future logic and DRAM.

  1. A distortion manipulator for improved lens and cross matching provides more overlay correction control.
  2. A conditioned reticle library and new reticle heating control improve reticle overlay and throughput.
  3. PARIS optical sensors improve overlay.
  4. 12-color alignment also improves overlay.

The net result is machine-matched overlay improved to well under 1.3 nm (see Figure 1) and cross-matched overlay of just over 1.1 nm.

Figure 1. Machine matched overlay.

0.33NA EUV

From the just-completed quarterly financial call, ASML has now shipped over 200 NXE:3400/3600 systems. My count is 45 NXE:3400B, 76 NXE:3400C and 75 NXE:3600D systems, so I am missing a few. My count is based on ASML sales numbers, and there is some delay between shipment and counting a sale. The NXE:3600D either is, or shortly will be, the system with the most units shipped.

From Q1-2014 to Q4-2019 system throughput increased by more than 17x. The NXE:3400C has achieved around 140 wafers per hour (wph) at a 30 mJ/cm2 dose at customer sites, the NXE:3600D has achieved just over 160 wph at a 30 mJ/cm2 dose at a customer site and 185 wph at ASML, and the NXE:3800E is targeting >220 wph. See Figure 2.

Figure 2. EUV System Throughput.
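To put the dose and throughput numbers in perspective, here is a back-of-envelope sketch. This is my own arithmetic, not an ASML figure, and it ignores exposure-field overhead, stage moves, wafer exchange and all optical losses upstream of the wafer.

```python
import math

# Back-of-envelope only: EUV energy delivered to resist at a 30 mJ/cm^2 dose,
# and the average optical power at the wafer that implies at 160 wph.
dose_mj_per_cm2 = 30.0
wafer_diameter_cm = 30.0
wph = 160.0

wafer_area_cm2 = math.pi * (wafer_diameter_cm / 2) ** 2      # ~707 cm^2
energy_per_wafer_j = dose_mj_per_cm2 * wafer_area_cm2 / 1000.0
avg_power_at_wafer_w = energy_per_wafer_j * wph / 3600.0

print(f"energy per wafer ~ {energy_per_wafer_j:.1f} J")          # ~21 J
print(f"average power at wafer ~ {avg_power_at_wafer_w:.2f} W")  # ~0.9 W
```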

NXE production keeps improving: in 2020 there was only one system in the world that produced over 0.5 million wafers in a calendar year, in 2021 that number increased to 15, and in 2022 to 51. See Figure 3.

Figure 3. EUV System Productivity.

The NXE:3800E targets >220 wph at 0.9nm Matched Machine Overlay, see figure 4.

Figure 4. NXE:3800E Targets.

The first NXE:3800E shipment is targeted for Q4, see figure 5.

Figure 5. NXE:3800E Shipment Status.

One big concern around EUV has always been the tremendous power draw of the systems. ASML continues to improve energy efficiency, reducing energy per wafer by 3x. See Figure 6.

Figure 6. EUV Energy Efficiency.

0.33NA EUV systems are now firmly established as the tool of choice for the most critical layers on leading edge logic and DRAM parts with more layers changing to EUV with each new node.

High NA EUV

Single-exposure patterning with 0.33NA EUV systems currently reaches a pitch of approximately 30 nm, with further improvements expected as the process matures, but some EUV multi-patterning has been used at the 5nm and 3nm logic processes. A higher-NA tool improves the achievable single-exposure pitch limit.
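The scaling follows from the standard Rayleigh resolution relation. The k1 value below is a representative assumption on my part, not an ASML number, chosen so that the 0.33NA case lands near the ~30 nm pitch quoted above:

```latex
p_{\min} = 2\,k_1\,\frac{\lambda}{\mathrm{NA}}
\quad\Rightarrow\quad
p_{\min}^{0.33\,\mathrm{NA}} \approx 2(0.37)\,\frac{13.5\,\text{nm}}{0.33} \approx 30\,\text{nm},
\qquad
p_{\min}^{0.55\,\mathrm{NA}} \approx 2(0.37)\,\frac{13.5\,\text{nm}}{0.55} \approx 18\,\text{nm}
```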

The first 0.55NA EUV system, the EXE:5000, is due to ship in early 2024, with volume manufacturing in 2025. The EXE:5000 is a development system that will be built in limited numbers. The status is shown in Figure 7.

Figure 7. EXE:5000 Status.

There will be a High NA EUV demo lab at the ASML factory in Veldhoven in conjunction with imec later in 2023 with a tool running in early 2024.

The production High NA exposure tool will be the EXE:5200 with shipments due early 2025.

Hyper NA EUV

If pitches continue to shrink, even the 0.55NA High NA exposure tools will eventually require multi-patterning, and ASML is seriously discussing a “Hyper NA” tool with an NA of around 0.75; the specific NA has not been determined yet. A key question is when, or if, such a tool would be needed.

Conclusion

ASML continues a relentless program of improvement across their product line: faster, more precise DUV and 0.33NA EUV tools, development of the forthcoming 0.55NA High NA EUV tools, and even looking beyond High NA to a possible Hyper NA tool.

Also Read:

Intel Internal Foundry Model Webinar

Applied Materials Announces “EPIC” Development Center

SPIE 2023 – imec Preparing for High-NA EUV



Xcelium Safety Certification Rounds Out Cadence Safety Solution
by Bernard Murphy on 07-27-2023 at 6:00 am


While fully autonomous driving may now be a distant dream, ADAS continues to be a very active industry driver as much for its safety advantages as for other features. Today in the hierarchy of SAE levels, SAE 2+ may represent the most active area of development rather than levels 3 through 5. This range of options still requires a human driver in the loop yet is bubbling with ideas and products: adaptive merging when entering or exiting a highway, further enhanced automatic emergency braking, driver monitoring systems (for when you aren’t paying sufficient attention), automated parking, intelligent rear- and side-view mirrors. All clever stuff which must also meet appropriate ISO 26262 safety standards, ASIL-A through ASIL-D according to the criticality of the application.

Increasing prominence of ASIL-D

ASIL-D is the most exacting standard, requiring for example better than 99% single point fault metric coverage, compared with say ASIL-B which will let you slide by with merely better than 90% coverage. For example, antilock brakes, self-steering and airbag deployment require ASIL-D coverage, whereas controls for brake lights and rear-view cameras may only require ASIL-B.
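For reference, the single-point fault metric (SPFM) those percentages refer to is defined in ISO 26262-5 along the following lines; this is my paraphrase of the standard's definition, so treat it as a summary rather than the normative text:

```latex
\mathrm{SPFM} = 1 - \frac{\sum \lambda_{\mathrm{SPF}} + \sum \lambda_{\mathrm{RF}}}
{\sum \lambda_{\mathrm{safety\text{-}related}}}
\qquad
\mathrm{SPFM}_{\mathrm{ASIL\text{-}B}} \ge 90\%,\quad
\mathrm{SPFM}_{\mathrm{ASIL\text{-}D}} \ge 99\%
```

Here the lambdas are failure rates: single-point faults (SPF) and residual faults (RF) in the numerator, all safety-related hardware faults in the denominator.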

As systems become more complex and more highly integrated, an increasing number of SoCs now require some level of ASIL-D certification. This is triggered when a failure in such a system could be life-threatening or fatal, combined with a high risk of exposure since the system is used in the normal course of driving. Failures in aspects of a collision avoidance system would be an example. However, raising a whole SoC to ASIL-D is effectively impossible without abandoning pre-packaged IP and reuse methods. Instead, a “hybrid ASIL-D” approach has emerged. A “safety island” IP is certified to ASIL-D and charged with regularly testing and supervising other functions in the SoC, which are allowed to meet lower ASIL standards. The safety island provides the ability to force selective IP reboots or isolation if needed while signaling driver alerts through a central control system.

This approach provides more flexibility in using a wider range of IP but adds more complexity to the certification strategy (the safety island IP must meet ASIL-D but the GPU IP perhaps only needs to meet ASIL-B for example). This mixture demands a clear safety plan from architecture onwards and a fault campaign to match that strategy in all its complexity. The Cadence MIDAS Safety Platform provides that management and control across digital and analog safety verification and safety mitigation implementation.

Xcelium Safety in the MIDAS platform

The Xcelium safety app builds on Xcelium native serial and concurrent fault simulation to provide a common mechanism both for debug and for high-throughput fault analysis. This is further accelerated through a combination of formal methods to filter out untestable or unobservable faults, and with machine learning methods to accelerate throughput on successive runs. The complete Xcelium safety system has been certified by TÜV-SÜD to be used in safety-related development for any ASIL level.

This Xcelium capability integrates with the MIDAS platform, an impressive answer to total SoC certification support from my perspective, managing FMEDA starting from early architectural analysis. This is tracked through fault campaign management across digital, analog, and AMS functions and insertion, optimization, and verification of safety mitigation techniques.

Support includes automotive Functional Safety Documentation Kits satisfying documentation requirements that the automotive component supplier must provide for their tools and flow to achieve ASIL certification. The kits also reduce effort required to evaluate tool use cases within each of the supplier’s automotive design projects and help automotive component suppliers avoid the costly efforts of tool-qualification activities.

The result is front-to-back ISO 26262 compliance management for all ASIL levels or a mix of levels. Pretty impressive. You can learn more about Cadence Safety solutions HERE.



Wally Rhines Predicts the Future of AI at #60DAC
by Mike Gianfagna on 07-26-2023 at 10:00 am


Dr. Walden Rhines has appeared many times on SemiWiki. His discussions touch on a variety of topics, most recently on the health of EDA and IP. His knowledge of our industry is substantial, and he always seems to have a new take on the trends and technologies that are unfolding around us. So, when Wally took the stage for a keynote address at the recent Design Automation Conference in San Francisco it was standing room only. Wally took everyone on a scenic tour of how technology has impacted chip design over the years, ending with a very real view of how all this will change the future of the planet. Read on to understand how Wally Rhines predicts the future of AI at DAC.

The Early Days

Wally began with a look back at the emerging technologies of the 1980s and the technology leaders of the day. He pointed out that Lip-Bu Tan’s first major VC deal was with Creative Labs. Lip-Bu clearly saw the future. There were other forward-looking folks in that time frame, and Wally reminded us of what some of them looked like back in the day.

Wally took us further back than the 1980s to uncover some interesting predictions. In the summer of 1956, members of the Dartmouth Summer Research Project on Artificial Intelligence made some interesting comments:

  • “Within ten years a digital computer will be the world’s chess champion” (A. Newell)
  • “In from three to eight years we will have a machine with the general intelligence of an average human being” (Marvin Minsky)
  • “Within ten years a digital computer will discover and prove an important new mathematical theorem” (H. A. Simon)

Current Day Trends

Bold and optimistic to say the least. But why hasn’t AI taken off until recently? The answer sets the stage for the future. According to Dr. Rhines:

  • Lack of big data to analyze
    • No Internet or IoT to collect sizable data sets
  • Limited computing power
    • Limitation of traditional computer chip architectures
  • Need for more advanced algorithms
  • Lack of ‘killer’ applications to make money

Many of these limitations are going away, opening the door for new chapters of innovation. The last point is key – this will be elaborated on by Wally in a bit. But first, Wally took a look at what IS making money these days. It’s kind of a mixed bag.

OpenAI’s ~$540 million ChatGPT investment last year is accompanied by losses of $700,000 every day. The $75B automotive industry bet on autonomous vehicles has no meaningful return yet. However, Nvidia made a big bet on AI and has become the first chipmaker to join the $1 trillion club. Dramatic, but inconsistent results so far.

Before looking to the future, Wally spent some time examining the impact AI is having on chip design. Recall the limited computing power issue – better chip design can have a big impact on that. Wally did a great job summarizing many EDA innovations into three buckets. This is a great way to watch innovation and judge impact. One picture can explain the views presented, and that picture is included below.

This is a great model. Take a look at the latest announcements from your favorite EDA supplier. You will likely be able to put them in one of the columns, above.

Looking to the Future

Wally concluded his talk with a view of what the future holds for AI, specifically how it will be monetized. The data used to power the sophisticated and complex models of AI is becoming the “currency” for the future of the technology, and possibly the future of the planet.

Wally pointed out that the world creates 2.5 quintillion bytes of data every day, yet only a fraction of it is utilized. In case you’re wondering how many bytes that is, here is the full expression of one quintillion:

1,000,000,000,000,000,000

Data indeed is the new oil in the next-generation economy, and controlled, secure sharing of this data will be the engine for profit. But protecting all this data and sharing it in a controlled, secure way presents many challenges. Data that is driving many emerging AI systems today can be unreliable and have baked-in biases. Theft is widespread, and the ability to hack things like autonomous driving systems presents existential threats.

Wally explained that the world needs a way to share and protect sensitive information at the same time. To make matters worse, once quantum computers reach 10,000 qubits most Internet security will break. This may happen in the next few years.

So, the question becomes how the power of all the available data can be unlocked to revolutionize everything from financial services to healthcare to manufacturing in a predictable, secure way. For this seemingly impossible problem, Wally offered a way forward.

It turns out there is a technology called fully homomorphic encryption (FHE) that can enable secure data sharing. This unique technology keeps data encrypted at all times. Computation of all kinds can be performed directly on the encrypted data, so machine learning models can be built from encrypted data.

This approach essentially hides all sensitive information in plain sight. Since the data is always encrypted, it needs no protection – stealing it gives you no useful information. So, what prevents the widespread use of such innovation?
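To make the idea of computing on ciphertexts concrete, here is a toy sketch. It uses textbook RSA’s multiplicative homomorphism with tiny, insecure parameters purely for illustration; real FHE schemes (BGV, CKKS, TFHE and the like) support arbitrary computation on encrypted data and are vastly more complex, which is exactly the performance problem Wally describes next.

```python
# Toy illustration only: textbook RSA is multiplicatively homomorphic, i.e.
# Enc(a) * Enc(b) mod n = Enc(a * b). Parameters are tiny and insecure; real
# FHE schemes allow arbitrary computation on ciphertexts, not just products.
p, q = 61, 53
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (2753), Python 3.8+

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 12
ca, cb = encrypt(a), encrypt(b)

# Multiply the ciphertexts without ever seeing a or b in the clear...
c_product = (ca * cb) % n

# ...and the decrypted result is the product of the plaintexts.
print(decrypt(c_product))      # 84
print(a * b)                   # 84
```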

Wally explained that FHE is very hard to implement from a computational standpoint. Using current technology, an unencrypted operation that takes one second will take about 11 days using encryption; 11 days is roughly 950,000 seconds, a slowdown approaching a million-fold. Hardware for FHE will accordingly require nearly one million times faster performance than current Intel and Nvidia servers.

We are now at the home stretch of Wally’s keynote, and here is where his long-term vision shines. It turns out there is a company that is working on this problem and aims to make FHE available to all. The company’s name is Cornami, and Wally is its CEO. You can learn more about Cornami here.  And that’s how Wally Rhines predicts the future of AI at DAC.