TSMC 2023 North America Technology Symposium Overview Part 3
by Daniel Nenni on 04-27-2023 at 6:00 am

3DFabric Technology Portfolio

TSMC’s 3DFabric initiative was a big focus at the symposium, as it should be. I remember when TSMC first went public with CoWoS, the semiconductor ecosystem, including yours truly, let out a collective sigh, wondering why TSMC was venturing into the comparatively low-margin world of packaging. Now we know why, and it is absolutely brilliant!

In 2012 TSMC introduced, together with Xilinx, by far the largest FPGA available at that time, composed of four identical 28nm FPGA slices mounted side by side on a silicon interposer. They also developed through-silicon vias (TSVs), micro-bumps, and redistribution layers (RDLs) to interconnect these building blocks. Based on its construction, TSMC named this IC packaging solution Chip-on-Wafer-on-Substrate (CoWoS).

This building-block-based, EDA-supported packaging technology has become the de facto industry standard for high-performance, high-power designs. Interposers up to three stepper fields in size allow multiple die, die stacks, and passives to be combined side by side, interconnected with sub-micron RDLs. The most common applications have been combinations of a CPU/GPU/TPU with one or more high bandwidth memories (HBMs).

In 2017 TSMC announced the Integrated Fan-Out (InFO) technology. Instead of the silicon interposer used in CoWoS, it uses a polyimide film, reducing unit cost and package height, both important success criteria for mobile applications. TSMC has already shipped tens of millions of InFO designs for use in smartphones.

In 2019 TSMC introduced the System on Integrated Chip (SoIC) technology. Using front-end (wafer-fab) equipment, TSMC can very accurately align, and then compression-bond, designs with many narrowly pitched copper pads to further minimize form factor, interconnect capacitance, and power.

Today TSMC has 3DFabric, a comprehensive family of 3D Silicon Stacking and Advanced Packaging Technologies. Here are the TSMC related accomplishments from the briefing:

  • TSMC 3DFabric consists of a variety of advanced 3D Silicon Stacking and advanced packaging technologies to support a wide range of next-generation products:
    • On the 3D Si stacking portion, TSMC is adding a micro bump-based SoIC-P in the TSMC-SoIC® family to support more cost-sensitive applications.
    • The 2.5D CoWoS® platform enables the integration of advanced logic and high bandwidth memory for HPC applications, such as AI, machine learning, and data centers. InFO PoP and InFO-3D support mobile applications and InFO-2.5D supports HPC chiplet integration.
    • SoIC stacked chips can be integrated in InFO or CoWoS packages for ultimate system integration.
  • CoWoS Family
    • Aimed primarily for HPC applications that need to integrate advanced logic and HBM.
    • TSMC has supported more than 140 CoWoS products from more than 25 customers.
    • All CoWoS solutions are growing in interposer size so they can integrate more advanced silicon chips and HBM stacks to meet higher performance requirements.
    • TSMC is developing a CoWoS solution with up to 6X reticle-size (~5,000mm2) RDL interposer, capable of accommodating 12 stacks of HBM memory.
  • InFO Technology
    • For mobile applications, InFO PoP has been in volume production for high-end mobile since 2016 and can house larger and thicker SoC chips in smaller package form factor.
    • For HPC applications, the substrateless InFO_M supports up to 500 square mm chiplet integration for form factor-sensitive applications.
  • 3D Silicon stacking technologies
    • SoIC-P is based on 18-25μm pitch μbump stacking and is targeted for more cost-sensitive applications, like mobile, IoT, client, etc.
    • SoIC-X is based on bumpless stacking and is aimed primarily at HPC applications. Its chip-on-wafer stacking schemes feature 4.5 to 9μm bond pitches and have been in volume production on TSMC’s N7 technology for HPC applications.
    • SoIC stacked chips can be further integrated into CoWoS, InFO, or conventional flip chip packaging for customers’ final products.
  • 3DFabric™ Alliance and 3Dblox Standard
    • At last year’s Open Innovation Platform® (OIP) Forum, TSMC announced the new 3DFabric™ Alliance, the sixth OIP alliance after the IP, EDA, DCA, Cloud, and VCA alliances, to facilitate ecosystem collaboration for next-generation HPC and mobile designs by:
      • Offering 3Dblox Open Standard,
      • Enabling tight collaboration between memory and TSMC logic, and
      • Bringing Substrate and Testing Partners into Ecosystem.
    • TSMC introduced 3Dblox™ 1.5, the newest version of its open standard design language to lower the barriers to 3D IC design.
      • The TSMC 3Dblox is the industry’s first 3D IC design standard to speed up EDA automation and interoperability.
      • 3Dblox™ 1.5 adds automated bump synthesis, helping designers deal with the complexities of large dies with thousands of bumps and potentially reducing design times by months.
      • TSMC is working on 3Dblox 2.0 to enable system prototyping and design reuse, targeting the second half of this year.

Above is an example of how TSMC 3DFabric technologies can enable an HPC chip. It also supports my opinion that one of the big values of the Xilinx acquisition by AMD was the Xilinx silicon team. No one knows more about implementing advanced TSMC packaging solutions than Xilinx, absolutely.

Also Read:

TSMC 2023 North America Technology Symposium Overview Part 1

TSMC 2023 North America Technology Symposium Overview Part 2

TSMC 2023 North America Technology Symposium Overview Part 4

TSMC 2023 North America Technology Symposium Overview Part 5

 


TSMC 2023 North America Technology Symposium Overview Part 2
by Daniel Nenni on 04-26-2023 at 8:00 pm

TSMC N3 Update 2023

The next topic I would like to cover is an update to the TSMC process node roadmap, starting with N3. As predicted, N3 will be the most successful node in the TSMC FinFET family. The first version of N3 went into production at the end of last year (Apple) and will roll out with other customers in 2023. A reported record number of N3x design starts are in process, and from what I have heard from the IP ecosystem, that will continue.

Not only is N3 easy to design to, the PPA and yield are exceeding expectations. While I’m hearing good things about N2, I still think mainstream chip designers will stick with N3 for quite some time, and the ecosystem agrees.

Meanwhile, the competition is still working on 3nm. Intel 3 for foundry customers is still in process, and Samsung 3nm was skipped by all. I still have not heard of a successful tape-out on Samsung 3nm from a customer name that I recognize.

Here are the TSMC N3 accomplishments from the briefing:

  • N3 is TSMC’s most advanced logic technology and entered volume production in the fourth quarter of 2022 as planned; N3E follows one year after N3 and has passed technology qualification and achieved the performance and yield targets.
  • Compared with N5, N3E offers 18% speed improvement at the same power, 32% power reduction at the same speed, a logic density gain of around 1.6X, and a chip density gain of around 1.3X.
  • N3E has received the first wave of customer product tape-outs and will start volume production in the second half of 2023.
  • Today, TSMC is introducing N3P and N3X to enhance technology values and offer additional performance and area benefits while preserving design rule compatibility with N3E to maximize IP reuse.
  • For the first 3 years since inception, the number of new tape-outs for N3 and N3E is 1.5 to 2X that of N5 over the same period, because of TSMC’s technology differentiation and readiness.
  • N3P: Offers additional performance and area benefits while preserving design rule compatibility with N3E to maximize IP reuse. N3P is scheduled to enter production in the second half of 2024, and customers will see 5% more speed at the same leakage, 5-10% power reduction at the same speed, and 1.04X more chip density compared with N3E.
  • N3X: Expertly tuned for HPC applications, N3X provides extra Fmax gain to boost overdrive performance at a modest trade-off with leakage. This translates to 5% more speed versus N3P at drive voltage of 1.2V, with the same improved chip density as N3P. N3X will enter volume production in 2025.
  • Today, TSMC introduced the industry’s first Auto Early technology on 3nm, called N3AE. Available in 2023, N3AE offers automotive process design kits (PDKs) based on N3E and allows customers to launch designs on the 3nm node for automotive applications, leading to the fully automotive-qualified N3A process in 2025.

TSMC N3 will be talked about for many years. Not only did TSMC execute as promised, the competition did not, so it really is a perfect semiconductor storm. The result being a very N3 focused industry ecosystem that will be impossible to beat, absolutely.

Here are the TSMC N2 accomplishments from the media briefing:

  • N2 volume production is targeted for 2025; N2P and N2X are planned for 2026.
  • Performance of the nanosheet transistor has exceeded 80% of TSMC’s technology target while demonstrating excellent power efficiency and lower Vmin, which is a great fit for the energy-efficient compute paradigm of the semiconductor industry.
    • TSMC has exercised N2 design collateral in the physical implementation of a popular ARM A715 CPU core to measure PPA improvement: Achieved a 13% speed gain at the same power, or 33% power reduction at the same speed at around 0.9V, compared to the N3E high-density 2-1 fin standard cell.
  • Part of the TSMC N2 technology platform, a backside power rail provides additional speed and density boost on top of the baseline technology.
    • The backside power rail is best suited for HPC products and will be available in the second half of 2025.
    • Improves speed by more than 10-12% from reducing IR drop and signal RC delays.
    • Reduces logic area by 10-15% from more routing resources on the front side.

Remember, N2 uses nanosheets, which, unlike FinFETs, are not open-source technology, so this is really going to be a challenge for design and the supporting ecosystem, which gives TSMC a very strong advantage. TSMC also mentioned what follows nanosheets, which I found quite interesting. I’m sure we will hear more about this at IEDM 2023:

  • Transistor architecture has evolved from planar to FinFET and is about to change again to nanosheet.
  • Beyond nanosheet, TSMC sees vertically stacked NMOS and PMOS, known as CFET, as one of the key process architecture choices going forward.
    • TSMC estimates the density gain would fall between 1.5 and 2X after factoring in routing and process complexity.
  • Beyond CFET, TSMC made breakthroughs in low dimensional materials such as carbon nanotubes and 2D materials which could enable further dimensional and energy scaling.

For the record, TSMC has deployed 288 distinct process technologies and manufactured 12,698 products for 532 customers and counting. There is no stopping this train so you might as well jump on with the rest of the semiconductor industry.

Also Read:

TSMC 2023 North America Technology Symposium Overview Part 1

TSMC 2023 North America Technology Symposium Overview Part 3

TSMC 2023 North America Technology Symposium Overview Part 4

TSMC 2023 North America Technology Symposium Overview Part 5


TSMC 2023 North America Technology Symposium Overview Part 1
by Daniel Nenni on 04-26-2023 at 6:00 pm

Advanced Technology Roadmap

The TSMC 2023 North America Technology Symposium happened today, so I wanted to start writing about it, as there is a lot to cover. I will do summaries, and other bloggers will do more in-depth coverage on the technology side in the coming weeks. Having worked in the fabless semiconductor ecosystem for the majority of my 40-year semiconductor career, and having written about it since 2009, I may have a different view of things than other media sources, so stay tuned.

First, some items from the opening presentation. As I have mentioned before, AI is driving the semiconductor industry, and North America is leading the way with a reported 43% of the worldwide AI business. With AI you also have 5G, since tremendous amounts of data have to be both processed and communicated from the edge to the cloud and back, again and again.

Due to this tremendous industry driver, TSMC expects the global semiconductor market to approach $1 trillion by 2030 as demand surges, with HPC-related applications at 40% of the market, smartphone at 30%, automotive at 15%, and IoT at 10%.

Of course, in 2023 we will experience a revenue pothole, which C.C. Wei joked about. C.C. said he would not give a forecast this year since he was wrong in saying TSMC would again experience double-digit growth in 2023. It is now expected to be a single-digit decline, and it could be even worse than that if you believe other industry sources. Since the TSMC forecast is derived from customer forecasts, the customers were wrong too, so there is plenty of blame to share and joke about, which C.C. did.

I still blame the pandemic for the horrible forecasting of late, truly a black swan event. Personally, I think the foundry business, and TSMC specifically, is in the strongest position today, so I have no worries whatsoever.

During the C.C. Wei presentation I had flashbacks to when Morris Chang spoke at these symposiums. I see a lot of Morris in C.C., but I also see a very focused man who is not afraid to ask for purchase orders. I also see a much stronger competitive nature in C.C., and I would never want to be on the wrong side of that, absolutely.

“Our customers never stop finding new ways to harness the power of silicon to create innovations that shall amaze the world for a better future,” said Dr. C.C. Wei, CEO of TSMC. “In the same spirit, TSMC never stands still, and we keep enhancing and advancing our process technologies with more performance, power efficiency, and functionality so their pipeline of innovation can continue flowing for many years to come.”

I sometimes tell my family that I don’t want to talk about my accomplishments because it will seem like bragging and I’m much too humble to brag. This is actually true with TSMC so here are some of their accomplishments from the briefing:

  • Together with partners, TSMC created over 12,000 new, innovative products on approximately 300 different TSMC technologies in 2022.
  • TSMC continues to invest in advanced logic technologies, 3DFabric, and specialty technologies to provide the right technologies at the right time to empower customer innovation.
  • As our advanced nodes evolve from 10nm to 2nm, our power efficiency has grown at a CAGR of 15% over a span of roughly 10 years to support the semiconductor industry’s incredible growth (see the compounding arithmetic after this list).
  • The CAGR of TSMC’s advanced technology capacity growth will be more than 40% during the period of 2019 to 2023.
  • As the first foundry to start volume production of N5 in 2020, TSMC continues to improve its 5nm family offerings by introducing N4, N4P, N4X, and N5A.
  • TSMC’s 3nm technology is the first in the semiconductor industry to reach high-volume production, with good yield, and the Company expects a fast and smooth ramping of N3 driven by both mobile and HPC applications.
  • In addition, to push scaling to enable smaller and better transistors for monolithic SoCs, TSMC is also developing 3DFabric technologies to unlock the power of heterogeneous integration and increase the number of transistors in a system by 5X or more.
  • TSMC’s specialty technology investment experienced more than 40% CAGR from 2017 to 2022. By 2026, TSMC expects to grow specialty capacity by nearly 50%.
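
To put that 15% power-efficiency CAGR in perspective, the compounding arithmetic over the stated ten-year span works out as follows; the aggregate figure is from the briefing, while the node-by-node split is not broken out:

\[(1 + 0.15)^{10} \approx 4.05\]

In other words, a sustained 15% annual improvement corresponds to roughly a 4X cumulative gain in power efficiency across the 10nm-to-2nm span.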

The two customer CEO presentations that followed C.C. were quite a contrast. ADI has been a long and trusted TSMC customer, whereas Qualcomm has been foundry hopping since the beginning of fabless time. I remember working with QCOM on a 40nm design that was targeted to four different fabs. TSMC did the hard work first, then it went to UMC, SMIC, and Chartered for high-volume manufacturing. QCOM has a new CEO and TSMC has C.C. Wei, so that may change. The benefits of being loyal to TSMC have grown dramatically since the planar days, so we shall see.

Also Read:

TSMC 2023 North America Technology Symposium Overview Part 2

TSMC 2023 North America Technology Symposium Overview Part 3

TSMC 2023 North America Technology Symposium Overview Part 4

TSMC 2023 North America Technology Symposium Overview Part 5


Podcast EP157: The Differentiated Role Andes Plays in the US with Charlie Cheng
by Daniel Nenni on 04-26-2023 at 10:00 am

Dan is joined by Charlie Cheng, Managing Director of Polyhedron. Prior to that, Charlie was the CEO of Kilopass Technology, where he grew the core memory business into a successful acquisition by Synopsys. Before that, Charlie was an Entrepreneur in Residence at US Venture Partners and a Corporate VP at Faraday Technology, a Taiwanese semiconductor company. He joined Faraday after he co-founded Lexra, a CPU IP company. Charlie started his career at General Electric and IBM before focusing on the microprocessor, semiconductor, EDA, and IP businesses.

Charlie joins this podcast in his capacity as Board Advisor for Andes Technology. Dan explores the market position Andes occupies in the US, which is focused on higher-end applications compared to its position in other parts of the world. They also discuss how some of Andes’ unique qualities are leveraged in the US market.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


AI and the Future of Work
by Ahmed Banafa on 04-26-2023 at 8:00 am

Artificial Intelligence (AI) is a rapidly growing field that has the potential to revolutionize the way we work, learn, and interact with technology. The term AI refers to the ability of machines to perform tasks that would typically require human intelligence, such as decision-making, problem-solving, and natural language processing. As AI technology continues to advance, it is becoming increasingly integrated into various aspects of the workplace, from automating repetitive tasks to helping professionals make more informed decisions.

The impact of AI on the future of work is a topic of much discussion and debate. Some experts believe that AI will lead to the displacement of human workers, while others argue that it will create new opportunities and lead to increased productivity and economic growth. Regardless of the outcome, it is clear that AI will have a profound effect on the job market and the skills needed to succeed in the workforce.

In this context, it is crucial to understand the potential benefits and risks of AI in the workplace, as well as the ethical implications of using AI to make decisions that affect human lives. As AI continues to evolve, it is essential that individuals and organizations alike stay informed and adapt to the changing landscape of work.

AI is set to transform the future of work in a number of ways. Here are some possible angles:

·      The Impact of AI on Jobs: One of the biggest questions surrounding AI and the future of work is what impact it will have on employment. Will AI create new jobs or displace existing ones? What types of jobs are most likely to be affected?

·      The Role of AI in Workforce Development: As AI becomes more prevalent in the workplace, it’s likely that workers will need to develop new skills in order to keep up. How can companies and organizations help workers develop these skills?

·      The Future of Collaboration Between Humans and AI: Many experts believe that the future of work will involve collaboration between humans and AI. What might this collaboration look like? How can companies and organizations foster effective collaboration between humans and AI?

·      AI and Workforce Diversity: AI has the potential to reduce bias and increase diversity in the workplace. How can organizations leverage AI to improve workforce diversity?

·      The Ethical Implications of AI in the Workplace: As AI becomes more prevalent in the workplace, there are a number of ethical considerations that need to be taken into account. How can companies and organizations ensure that their use of AI is ethical and responsible?

·      AI and the Gig Economy: AI has the potential to transform the gig economy by making it easier for individuals to find work and for companies to find workers. How might AI impact the future of the gig economy?

·      AI and Workplace Automation: AI is likely to automate many routine tasks in the workplace, freeing up workers to focus on higher-level tasks. What types of tasks are most likely to be automated, and how might this change the nature of work?

Advantages and disadvantages of AI in the context of the future of work:

Advantages:

  • Increased Efficiency: AI can automate many routine tasks and workflows, freeing up workers to focus on higher-level tasks and increasing productivity.
  • Improved Accuracy: AI systems can process large amounts of data quickly and accurately, reducing the risk of errors.
  • Better Decision-Making: AI can analyze data and provide insights that humans may not be able to identify, leading to better decision-making.
  • Cost Savings: By automating tasks and workflows, AI can reduce labor costs and improve the bottom line for businesses.
  • Enhanced Customer Experience: AI-powered chatbots and other tools can provide fast, personalized service to customers, improving their overall experience with a company.

Disadvantages:

  • Job Displacement: As mentioned earlier, AI and automation could displace many workers, particularly those in low-skill jobs.
  • Skill Mismatch: As AI and automation become more prevalent, workers will need to develop new skills in order to remain competitive in the workforce.
  • Bias and Discrimination: AI systems are only as unbiased as the data they are trained on, which could lead to discrimination in hiring, promotion, and other workplace practices.
  • Ethical Concerns: As AI and automation become more prevalent, there are a number of ethical concerns that need to be addressed, including issues related to privacy, transparency, and accountability.
  • Cybersecurity Risks: As more and more data is collected and processed by AI systems, there is a risk that this data could be compromised by cybercriminals.
  • Loss of Human Interaction: AI systems may replace some forms of human interaction in the workplace, potentially leading to a loss of social connections and collaboration between workers.
  • Uneven Access: As mentioned earlier, not all workers and organizations have equal access to AI and automation technology, which could widen the gap between those who have access to these tools and those who do not.

These are just a few of the advantages and disadvantages of AI and the future of work. As AI continues to evolve, it’s likely that new advantages and disadvantages will emerge as well.

In conclusion, the impact of AI on the future of work is a complex and multifaceted issue that requires careful consideration and planning. While AI has the potential to revolutionize the way we work and improve productivity, it also poses significant challenges, including job displacement and ethical concerns.

To prepare for the future of work, individuals and organizations must prioritize upskilling and reskilling to ensure that they have the skills and knowledge necessary to thrive in an AI-driven world. Additionally, policymakers must address the potential impacts of AI on employment and work towards creating policies that ensure the benefits of AI are shared equitably.

Ultimately, the successful integration of AI into the workplace will require collaboration and dialogue between industry, academia, and government to ensure that AI is used in a way that benefits society as a whole. By staying informed and proactive, we can navigate the changes brought about by AI and create a future of work that is both efficient and equitable.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

Also Read:

AI is Ushering in a New Wave of Innovation

Narrow AI vs. General AI vs. Super AI

10 Impactful Technologies in 2023 and Beyond


Reality Checks for High-NA EUV for 1.x nm Nodes
by Fred Chen on 04-26-2023 at 6:00 am

The “1.x nm” node on most roadmaps indicates a 16-18 nm metal line pitch [1]. The center-to-center spacing may be expected to be as low as 22-26 nm (sqrt(2) times the line pitch). The EXE series of EUV (13.5 nm wavelength) lithography systems from ASML features a 0.55 “High” NA (numerical aperture), targeted toward enabling these dimensions. The only justification for this projected resolution is that it exceeds about one-third wavelength/NA. Some reality checks are in order to confirm the realism of this expectation.
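
To make those numbers concrete, here is the arithmetic implied by the figures above (16-18 nm line pitch, 13.5 nm wavelength, 0.55 NA):

\[\sqrt{2} \times 16\,\mathrm{nm} \approx 22.6\,\mathrm{nm}, \qquad \sqrt{2} \times 18\,\mathrm{nm} \approx 25.5\,\mathrm{nm}\]

\[\frac{\lambda}{3\,\mathrm{NA}} = \frac{13.5\,\mathrm{nm}}{3 \times 0.55} \approx 8.2\,\mathrm{nm}\ \text{half-pitch, i.e., a pitch of} \approx 16.4\,\mathrm{nm}\]

A 16-18 nm line pitch therefore sits right at this one-third wavelength/NA figure, which is why the reality checks below matter.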

1. Plane of Incidence Rotation Across Slit

The plane of incidence is known to rotate across the EUV exposure arc-shaped slit [2]. Consequently, the optimized illumination distribution for a given pattern actually rotates across slit, potentially giving unoptimized results at slit positions toward the edge compared to the center.

Figure 1. The optimized source (red) for a 25.5 nm center-to-center array needs to be trimmed down (blue) to be safe against rotation across slit.

As shown in Figure 1, trimming the optimized source allows it to be safe against slit rotation effects, but this also reduces pupil fill, i.e., the range of illumination angles used divided by the full range of possible angles. Below 20% pupil fill, the EUV illumination system itself begins absorbing the EUV energy, which is undesirable not only due to system wear but also due to reduced throughput [3].
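
Since pupil fill is defined above as the occupied fraction of the allowed illumination angles, the bookkeeping can be sketched in a few lines of Python. The pixelized dipole source, the trim, and the threshold check below are purely illustrative assumptions, not any vendor's source optimization flow.

import numpy as np

def pupil_fill(source_map, sigma_max=1.0):
    """Fraction of the allowed pupil (radius sigma_max) covered by 'on' source pixels."""
    n = source_map.shape[0]
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    inside = x**2 + y**2 <= sigma_max**2            # pixels within the allowed pupil
    return source_map[inside].sum() / inside.sum()

# Illustrative dipole-like source, then trimmed (as in Figure 1) to guard against slit rotation
n = 201
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
optimized = (np.abs(x) > 0.6) & (x**2 + y**2 <= 1.0)
trimmed = optimized & (np.abs(y) < 0.25)

for name, src in (("optimized", optimized), ("trimmed", trimmed)):
    fill = pupil_fill(src)
    note = "below the ~20% threshold" if fill < 0.20 else "acceptable"
    print(f"{name}: pupil fill = {fill:.1%} ({note})")

In this toy example the trim needed to guard against slit rotation pushes the pupil fill below the 20% threshold, which is exactly the throughput and system-wear trade-off described above.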

2. Low NILS

The 1.x nm node may be expected to feature 22-25 nm center-to-center vias with sizes <10 nm. Such small vias (already less than the Rayleigh resolution) at relatively wide spacings will have a low normalized image log-slope (NILS) without a slower exposure [4]. Phase-shift masks need to be designed for EUV use, but this is still under development.
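
For reference, NILS is conventionally defined from the aerial-image intensity I(x), normalized to the feature width w and evaluated at the nominal feature edge:

\[\mathrm{NILS} = w \left| \frac{d \ln I(x)}{dx} \right|_{x = \text{feature edge}}\]

A low NILS means a shallow intensity transition at the edge, which magnifies CD sensitivity to dose and, for EUV, worsens photon shot-noise statistics unless the dose (i.e., exposure time) is increased.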

3. Polarization

As line pitches shrink to 18 nm and below, polarization begins to become important, since the angle between interfering waves is larger [5,6]. Moreover, the multilayer in an EUV system preferentially reflects the TE polarization, i.e., perpendicular to the plane of reflection; this is (mostly) perpendicular to the scan direction [6]. This will degrade the NILS of lines which are aligned with the scan direction, i.e., horizontal lines.
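
A rough two-beam picture shows why this matters: for plane waves interfering at a full angle 2θ between them, the TE fringe visibility stays near unity while the TM visibility falls off with the dot product of the field vectors, roughly

\[V_{\mathrm{TE}} \approx 1, \qquad V_{\mathrm{TM}} \approx \cos 2\theta\]

so as the pitch shrinks and 2θ grows, any TM component that the multilayer does pass directly erodes the image contrast, and hence the NILS, of the affected lines.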

4. Thinner Resist

Resist thickness is expected to be reduced to below 30 nm in order to be used with High-NA EUV systems. This is due to the reduced depth of focus [7]. Even with improved depth of focus, however, the aspect ratio of smaller features is another likely limiting factor. A 10 nm wide feature with a 20 nm resist thickness already has an aspect ratio of 2:1. A reduced resist thickness must be compensated for by an inversely proportionally higher absorption coefficient in order to preserve the absorbed photon density for a given dose.
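
The inverse-proportionality statement follows from the thin-film absorption approximation. Assuming a weakly absorbing resist of thickness t and absorption coefficient α exposed at dose D, the photons absorbed per unit area are roughly

\[D_{\mathrm{abs}} \approx D \left(1 - e^{-\alpha t}\right) \approx D\,\alpha\,t \quad (\alpha t \ll 1)\]

so halving t at a fixed dose requires roughly doubling α to keep the same number of absorbed photons available to print the feature.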

5. Electron Blur and Stochastics

Finally, focusing EUV to a smaller spot would be of no use in the presence of over 3 nm blur [8]. However, the randomness of secondary electrons [9,10] prevents blur from being a fixed number, ultimately leading to the possibility of stochastic defects [11].

All the above factors should serve as reminders that advancing lithography is no longer a simple matter of reducing wavelength and/or increasing numerical aperture.

References

[1] International Roadmap for Devices and Systems (Lithography), 2022 edition.

[2] M. Antoni et al., “Illumination optics design for EUV lithography,” Proc. SPIE 4146, 25 (2000).

[3] F. Chen, High NA EUV Design Limitations for sub-2nm Nodes, https://www.youtube.com/watch?v=IgYJfLyDYos

[4] F. Chen, Phase-Shifting Masks for NILS Improvement – A Handicap for EUV?, https://www.linkedin.com/pulse/phase-shifting-masks-nils-improvement-handicap-euv-frederick-chen

[5] H. Levinson, “High-NA EUV lithography: current status and outlook for the future,” Jpn. J. Appl. Phys. 61 SD0803 (2022).

[6] F. Chen, The Growing Significance of Polarization in EUV Lithography, https://www.linkedin.com/pulse/growing-significance-polarization-euv-lithography-frederick-chen; F. Chen, Polarization by Reflection in EUV Lithography Systems, https://www.youtube.com/watch?v=agMx-nuL_Qg

[7] B. J. Lin, “The k3 coefficient in nonparaxial λ/NA scaling equations for resolution, depth of focus, and immersion lithography,” J. Micro/Nanolith. MEMS MOEMS 1(1), 7–12 (April 2002).

[8] T. Allenet et al., “Image blur investigation using EUV-Interference Lithography,” Proc. SPIE 11517, 115170J (2020), https://www.dora.lib4ri.ch/psi/islandora/object/psi%3A38930/datastream/PDF/Allenet-2020-Image_blur_investigation_using_EUV-interference_lithography-%28published_version%29.pdf

[9] Q. Gibaru et al., Appl. Surf. Sci. 570, 151154 (2021), https://hal.science/hal-03346074/file/DPHY21090.1630489396_postprint.pdf

[10] H. Fukuda, J. Micro/Nanolith. MEMS MOEMS 18, 013503 (2019).

[11] F. Chen, Secondary Electron Blur Randomness as the Origin of EUV Stochastic Defects, https://www.linkedin.com/pulse/secondary-electron-blur-randomness-origin-euv-stochastic-chen

This article first appeared in LinkedIn Pulse: Reality Checks for High-NA EUV for 1.x nm Nodes

Also Read:

Can Attenuated Phase-Shift Masks Work For EUV?

Lithography Resolution Limits: The Point Spread Function

Resolution vs. Die Size Tradeoff Due to EUV Pupil Rotation


How to Enable High-Performance VLSI Engineering Environments
by Kalar Rajendiran on 04-25-2023 at 10:00 am

Very Large Scale Integration (VLSI) engineering organizations are known for their intricate workflows that require high-performance simulation software and an abundance of simulation licenses to create cutting-edge chips. These workflows involve complex dependency trees, where one task depends on the completion of another, and collaboration among team members is vital for successful project completion. The design process involves different stages such as architectural design, exploration, implementation, and verification. Each stage requires specialized tools and licenses, making license management and resource planning a critical factor for successful VLSI projects.
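
To make the dependency-tree point concrete, here is a minimal sketch of how such a flow can be represented and scheduled using Python's standard library. The task names and dependencies are hypothetical, and this is not FlowTracer's actual data model.

from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical VLSI flow: each task lists the tasks it depends on
flow = {
    "rtl_lint":      set(),
    "synthesis":     {"rtl_lint"},
    "gate_sim":      {"synthesis"},
    "place_route":   {"synthesis"},
    "sta":           {"place_route"},
    "signoff_drc":   {"place_route"},
    "tapeout_check": {"sta", "signoff_drc", "gate_sim"},
}

ts = TopologicalSorter(flow)
ts.prepare()
while ts.is_active():
    ready = list(ts.get_ready())          # tasks whose dependencies are all complete
    print("can run in parallel:", ready)  # candidates for dispatch to the compute farm
    ts.done(*ready)                       # mark them finished for this sketch

Each printed group is a set of tasks whose prerequisites are complete and which could therefore be dispatched in parallel, license and compute availability permitting.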

To optimize design processes for maximum performance and efficiency, engineering teams must adopt a carefully curated tool chain that addresses critical success factors. These factors include collaboration, efficient sharing of compute and license resources, clear visibility of progress and project status, and reproducibility of results.

Altair recently hosted a webinar in which Stuart Taylor, Senior Director of Product Management, presented an approach for how engineering teams can achieve the above. Stuart highlighted various tools from Altair, such as Altair® Monitor™ and Altair® FlowTracer™, that could be used to implement the methodology he presents. The webinar, titled “How to Enable High-Performance VLSI Engineering Environments,” includes a demo of some of these tools and is available for viewing on-demand. The following are excerpts from the webinar.

License Resource Planning

License resource planning involves both qualitative and quantitative usage analysis of licenses. Qualitative analysis involves understanding which licenses are being used for what purpose, while quantitative analysis involves analyzing license utilization. Engineering teams must understand which licenses are being used, how much, and for what purpose. This understanding will help teams plan for future license requirements and optimize license usage.

A license management details report can provide a graphical dashboard displaying license capacity, utilization, and denials. The report can also include a forecast of future license requirements based on current usage trends. This report will help engineering teams plan for future license requirements and avoid license denials.
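
As a rough illustration of the quantitative side, the sketch below derives utilization, denials, and a naive trend forecast from a hypothetical usage log. The log format, the numbers, and the forecasting method are assumptions made for illustration, not Altair Monitor's report format.

# Hypothetical daily snapshots for one license feature: (day, peak_in_use, owned, denials)
usage = [(1, 80, 100, 0), (2, 92, 100, 3), (3, 97, 100, 11), (4, 99, 100, 25)]

peaks = [in_use for _, in_use, _, _ in usage]
owned = usage[-1][2]
denials = sum(d for *_, d in usage)
print(f"peak utilization: {max(peaks) / owned:.0%}, total denials: {denials}")

# Naive linear trend of the daily peaks, to flag when demand would exceed owned capacity
days = [day for day, *_ in usage]
slope = (peaks[-1] - peaks[0]) / (days[-1] - days[0])
forecast = peaks[-1] + slope * 7  # one week out
print(f"7-day peak-demand forecast: ~{forecast:.0f} of {owned} licenses")

A real report would of course draw on far richer data (per user, per feature, per site), but the same capacity-versus-peak-demand comparison is the core of any license forecast.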

License Operations

Each stage of the VLSI design process requires different licenses and tools, and as such, license management is critical for engineering organizations. License management includes tracking the current usage of licenses, expiration dates, and license availability. For example, a license may expire during a critical phase of the design process, causing delays and impacting project timelines. Therefore, keeping track of license expiration dates and renewing licenses in advance is critical.
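
A minimal sketch of the expiration-tracking idea, using hypothetical feature names and an assumed renewal lead time rather than any real license-file format:

from datetime import date, timedelta

# Hypothetical license features and their expiration dates
licenses = {"sim_feature": date(2023, 6, 30), "signoff_feature": date(2023, 12, 31)}
renewal_lead = timedelta(days=90)  # start the renewal process a quarter ahead

today = date(2023, 4, 25)
for feature, expires in licenses.items():
    if expires - today <= renewal_lead:
        print(f"renew '{feature}' now (expires {expires})")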

Operational Efficiency

One of the critical factors for operational efficiency is collaboration across different disciplines, geographies, and time zones. Collaboration tools that enable easy sharing of designs, code, and simulations are critical for efficient collaboration. Simple visual communication is also essential for complex workflows. Altair FlowTracer is an example of a tool that quickly communicates workflow status in a visual manner. This tool can provide a simple graphical representation of the VLSI design process, enabling team members to understand the current status of the workflow at a glance.

Collaboration

Collaboration is a key factor in VLSI engineering workflows since multiple engineers work on the same project simultaneously. Communication between team members is vital to ensure that everyone is working towards the same goal. It’s important to choose a tool chain that allows for easy communication and collaboration.

Efficient Sharing of Compute and License Resources

One of the major constraints for VLSI engineering teams is the availability of simulation licenses and compute resources. At the same time, an unlimited supply of all needed licenses is not economically sensible or even feasible. A well-curated tool chain can help optimize resource usage and reduce license costs. One solution is to use a cloud-based simulation platform that allows for the efficient sharing of compute resources and licenses. Cloud-based simulation platforms can be used to run simulations on a large scale without the need for expensive hardware, and can also provide access to the latest software versions.

Clear Visibility of Progress and Project Status

VLSI engineering projects involve many tasks that are interdependent and must be completed in a specific order. A well-curated tool chain can help provide clear visibility of project status and progress, allowing team members to see where they stand and what tasks need to be completed. A clear real-time view of the project progress and status is essential for project management and planning.

Reproducibility of Results and Concepts

Reproducibility is a crucial factor in VLSI engineering workflows since design concepts and simulation results need to be reproducible. Using a common framework for design methodologies, libraries and standards along with a carefully curated tool chain can help with reproducibility of results, ensuring that designs are manufacturable and meet requirements.

Summary

Altair provides a suite of tools that increase engineering productivity, optimize costs, and accelerate chip development timeframes to achieve quick time to market. The tools optimize EDA environments and improve the design-to-manufacturing process by reducing iteration cycles. The entire webinar is available for viewing on-demand.

Altair semiconductor design solutions are built to optimize EDA environments and to improve the design-to-manufacturing process, eliminate design iterations, and reduce time-to-market.

To learn more about Altair’s tools offering, visit here.

Also Read:

Optimizing Return on Investment (ROI) of Emulator Resources

Measuring Success in Semiconductor Design Optimization: What Metrics Matter?

Load-Managing Verification Hardware Acceleration in the Cloud


Configurable RISC-V core sidesteps cache misses with 128 fetches
by Don Dingee on 04-25-2023 at 6:00 am

Modern CPU performance hinges on keeping a processor’s pipeline fed so it executes operations on every tick of the clock, typically using abundant multi-level caching. However, a crop of cache-busting applications is looming, like AI and high-performance computing (HPC) applications running on big data sets. Semidynamics has stepped in with a new highly configurable RISC-V core, Atrevido, including a novel approach to cache misses – the ability to kick off up to 128 independent memory fetches for out-of-order execution.

Experience suggests a different take on moving data

Vector processing has always been a memory-hungry proposition. Various attempts at DMA and gather/scatter controllers had limited success where data could be lined up just right. More often than not, vector execution still ends up being bursty, with a fast vector processor having to pause while its pipeline reloads. Big data applications often introduce a different problem: the data is sparse and can’t be assembled into bigger chunks without expensive moves and lots of waiting. Conventional caching can’t hold all the data being worked on, and what it does hold encounters frequent misses – increasing the wait further.

Roger Espasa, CEO and Founder of Semidynamics, has seen the evolution of vector processing firsthand, going back to his days on the DEC Alpha team, followed by a stint at Intel working on what became AVX-512. Their new memory retrieval technology is Gazzillion™, which can dispatch up to 128 simultaneous requests for data anywhere in memory. “It’s tough to glean exactly how many memory accesses some other cores can do from documentation, but we’re sure it’s nowhere near 128,” says Espasa. Despite the difficulty in discovery, his team assembled a comparison of some competitive cores.

There are three important points here. The first is that Gazzillion doesn’t eliminate the latency of any single fetch, but it does hide it once transactions get rolling and subsequent fetches overlap earlier ones in progress. The second is that the vector unit in the Atrevido core is an out-of-order unit, which Espasa thinks is a first in the industry. Put those two points together; the result is that whichever fetches arrive soonest will be processed next. Finally, the 128 figure is per core. It’s not hard to project this to a sea-of-cores strategy that would provide large numbers of execution units with the improved fetching needed for machine learning, recommendation systems, or sparse-dataset HPC processing.
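
The arithmetic behind hiding latency with many outstanding fetches is essentially Little's Law: the data in flight equals bandwidth times latency. The latency, line size, and bandwidth used below are illustrative assumptions, not Semidynamics specifications.

def outstanding_requests_needed(bandwidth_gb_s, latency_ns, line_bytes=64):
    """Little's Law: bytes in flight = bandwidth x latency; divide by the line size."""
    bytes_in_flight = bandwidth_gb_s * latency_ns  # 1 GB/s is 1 byte/ns
    return bytes_in_flight / line_bytes

# Example: sustaining 100 GB/s against 150 ns of memory latency with 64-byte lines
needed = outstanding_requests_needed(bandwidth_gb_s=100, latency_ns=150)
print(f"~{needed:.0f} outstanding fetches to sustain 100 GB/s")  # ~234

# Conversely, what 128 fetches in flight can sustain at the same latency
sustained = 128 * 64 / 150  # bytes per ns, i.e., GB/s
print(f"128 in-flight fetches sustain ~{sustained:.0f} GB/s at 150 ns latency")  # ~55

This is why a core limited to a handful of outstanding misses stalls long before it approaches memory bandwidth on sparse, cache-unfriendly data, and why raising the in-flight limit matters more than raw cache capacity for these workloads.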

Way beyond tailorable, fully configurable RISC-V cores match the requirements

Most RISC-V vendors offer a good list of tailorable features for their cores. Atrevido has been envisioned from the ground up as a fully customizable core where everything is on the table. A customer interview phase determines the application’s needs, and then the core is optimized for power-performance-area (PPA). Don’t need a vector unit? No problem. Change the address space? Sure. Need custom instructions? Straightforward. Coherency, scheduling, or tougher needs? Semidynamics has carved out a unique space apart from the competition, providing customers with better differentiation as they can open up the core for changes – Open Core Surgery, as Espasa enthusiastically terms it. “We can include unique features in a few weeks, and have a customized core validated in a few months,” says Espasa.

An interesting design choice enables more capability. Instead of just an AXI interface, Semidynamics included CHI, allowing Atrevido to plug directly into a coherent network-on-chip (NoC). It’s also process agnostic. Espasa says they have shipped on 22nm and are working on 12nm and 5nm.

Upfront NRE in the interview and optimization phase also has another payoff. Semidynamics can deliver a core for an FPGA bitstream, allowing customers to thoroughly evaluate their customizations before committing a design to a foundry, saving time and reducing risk. Using Semidynamics expertise this way also speeds up exploration without the learning curve of customers having to become RISC-V architectural gurus.

This level of customization means Atrevido fits any RISC-V role, small or large, single or multicore. The transparency of the process helps customers improve their first-pass results and get the most processing in the least area and power. There’s more on the Atrevido announcement, how configurable RISC-V core customization works, and other Semidynamics news at:

https://semidynamics.com/newsroom

Also Read:

Semidynamics: A Single-Software-Stack, Configurable and Customizable RISC-V Solution

Gazzillion Misses – Making the Memory Wall Irrelevant

CEO Interview: Roger Espasa of Semidynamics


LAM Not Yet at Bottom Memory Worsening Down 50%
by Robert Maire on 04-24-2023 at 10:00 am

-Lam reported in line results on reduced expectations
-Guidance disappoints as memory decline continues
-Memory capex down 50% but still sees “further declines”
-Lam ties future to EUV maybe not good idea after ASML report

Lam comes in above already grossly reduced expectations and misses on guidance

We always find it amusing when companies greatly reduce expectations and guidance and then try to act like it was a “beat” of the numbers. Lam came in at revenues of $3.87B and EPS of $6.99, which was down 27% sequentially. Revenue guidance was for $3.1B ±$300M versus street expectations of $3.47B, and EPS guidance is for $5.00 ±$0.75 versus $5.63. Lam continues to guide down more than the street is looking for as conditions worsen.

Backorders back to “normal”

As supply chain problems have more or less cleared up and demand has fallen off, Lam has used up most of its backlog and is now in a more “normal” backlog position.

The company still has $2B of deferred revenue to keep it warm at night from pre-payments on shipments to Chinese or Japanese customers awaiting acceptance, so a buffer still remains, somewhat.

Is Lam still a memory shop if foundry is top revenue segment??

Memory business at Lam dropped to a low that we haven’t seen in a very long time, as memory was 32% of overall business, with foundry at 46%. Probably the bigger piece of the memory business is service, which also declined sequentially, as new tool shipments to memory customers are likely falling to near-zero levels.

China tied Korea at 22% of business. Service was a huge 40% of business even with the decline, as new tool sales obviously kept dropping.

Expecting “further declines” in memory

Lam made it clear that memory capex spend has not yet hit bottom as they commented that “further declines” in memory spending are expected. Memory capex spending was estimated by the company to be down 50% already as memory makers continue to reduce bit growth and bit output to try to shore up declining pricing.

It seems fairly clear to us that, given the trajectory and momentum, memory will not recover before the end of the year, and the recovery, when it does eventually happen, will be weak and slow.

Memory makers will have a ton of existing capacity to bring back online before they buy a single new piece of equipment or even think about expanding or adding new fabs.

In addition to the idled memory-making equipment sitting turned off in fabs, there are also technology shrinks that will add to capacity without the need to buy new equipment.

The bottom line is that with all the idled equipment, reduced output, and potential technology changes, memory makers could easily survive a year or two based on current demand trajectory without any incremental spending.

The bigger problem is that perhaps only Samsung will be financially able to spend after another year in the current loss environment.

Whenever the current down cycle is over, it will certainly not be memory that leads the way out.

China seems to be buying any chip equipment not nailed down

One of our other concerns that we continue to see is that China has been on a huge spending spree for non-leading-edge equipment. It’s hard to figure out where all the equipment is going, as it seems there aren’t that many fabs in China (that we know of).

It has all the makings of the famous toilet paper shortage as people bought in expectation of a shortage.

China seems to be buying any and all equipment it can, as it likely fears being cut off from even non-leading-edge tools. We saw this in ASML’s report this morning, where 45-50% of DUV sales were into China.

This demand from China feels artificial and runs the additional risk of slowing because of increased sanctions or the stampede/herd mentality simply running out of steam. This obviously adds to the risk of a longer/deeper downturn.

Lam hitches wagon to EUV future (maybe not such a great idea right now…)

During the call, Lam went to lengths to show how it will benefit from the EUV transition and success. They claimed a 75% share of “EUV patterning” related technology (we assume etch) and also spoke about continued wins in dry resist technology.

While this certainly is good, the news from ASML this morning seemed to call into question how much EUV sales would slow, be delayed, or change over the next year or two. As we previously pointed out, this has a negative effect on tools associated with EUV, including those made by Lam. It was probably too late for Lam to change their prepared script after the ASML call….

Is this a “second leg down” in the semiconductor down cycle??

As we have been saying, and repeated this morning after ASML, we remain concerned that there is a second leg down in the current down cycle, or at the very least we are certainly not at a bottom at which we feel comfortable buying chip equipment stocks at already inflated valuations for a down cycle.

Lam clearly called for further declines in memory and guided to further lower revenues in their business. There was no talk on the call about being at or near a bottom in business. The amount of China business is a double-edged sword in that it helps soften the blow of the downturn but creates an additional risk in exposure to a politically unstable market.

The stocks

We have zero motivation to buy Lam or any other equipment stock after the run that they have had. We would expect some sort of rationalization of valuation after earnings season, once investors and analysts figure out that we haven’t yet hit bottom and there is further unknown downside.

Macroeconomic conditions certainly don’t give us any comfort either.
From a Lam-specific perspective, memory will likely be the last to recover, and very slowly at that.

With today’s setup from both ASML and LRCX, we would expect that KLAC will have similar comments, and AMAT as well, though a month later.
We can’t imagine anyone making positive commentary about the industry trajectory any time soon.

Bullish analysts who called the bottom a bit too soon will likely be out tomorrow defending their positions, or defending that Lam “beat” their numbers (if you can describe being down 27% sequentially as a “beat”).

The bottom line is that "it ain't over till it's over" (Yogi Berra), and it's not over yet…

The light at the end of the tunnel could be an oncoming train….

Also Read:

ASML Wavering- Supports our Concern of Second Leg Down for Semis- False Bottom

Gordon Moore’s legacy will live on through new paths & incarnations

Sino Semicap Sanction Screws Snugged- SVB- Aftermath more important than event


Synopsys Accelerates First-Pass Silicon Success for Banias Labs’ Networking SoC
by Kalar Rajendiran on 04-24-2023 at 8:00 am

Banias Labs is a semiconductor company that develops infrastructure solutions for next-generation communications. Its target market is the high-performance computing infrastructure market, including hyperscale data center, networking, AI, optical module, and Ethernet switch SoCs for emerging high-performance computing designs. These SoCs require high-speed Ethernet designs and low-latency solutions to provide increased system performance and accelerate time-to-market. The company has developed an optical DSP SoC on 5nm process technology to address the requirements of this market.

An optical DSP SoC is a specialized type of system-on-chip (SoC) designed for use in high-speed optical communication systems. In addition to the DSP, the optical DSP SoC typically includes high-speed interface IP blocks, such as Ethernet PHY IP, PCIe IP, and DDR memory controllers. These types of SoCs enable high-speed data transfers at low latencies for real-time signal processing. They are also designed to minimize power consumption, making them ideal for applications that require efficient operation with reduced thermal issues. With the advantages come challenges too. The specialized requirements of optical communication systems make designing an optical DSP SoC more challenging than designing a regular SoC.

Implementation Challenges

The challenges revolve around the complexity of the design, the tight power and performance requirements, and the need to meet various industry standards. The integration of multiple IP blocks including the DSP processor, Ethernet PHY IP, and other custom blocks requires careful design and validation. Additional high-speed interfaces such as PCIe and DDR add further to the complexity of the design. The high-speed interfaces and multiple IP blocks in the system can create signal distortion, crosstalk, and electromagnetic interference, which can impact system performance and reliability. Signal and power integrity analysis and optimization must be performed early in the design cycle to ensure that the system can meet its performance and reliability requirements. Finally, meeting time-to-market requirements can be challenging. The high-performance computing infrastructure market is rapidly evolving, and SoC development teams need to deliver their designs quickly to stay ahead of the competition.

Getting to First Pass Silicon Success

Overcoming the above-mentioned challenges requires a comprehensive approach. One of the critical components of high-performance, low-latency solutions is the Ethernet PHY IP, which is responsible for the physical-layer interface between the SoC and the Ethernet network. The IP must support high-speed Ethernet interfaces, including 10G, 25G, 40G, 50G, 100G, 200G, 400G, and 800G, and provide low latency and low power consumption. Additionally, the IP must support various standards, including IEEE 802.3 and Ethernet Alliance specifications. Another important component is the EDA design suite, which must provide a comprehensive solution for designing and verifying the SoC, including power optimization, performance analysis, area optimization, and yield analysis. To the extent the EDA design suite includes advanced features such as artificial intelligence (AI) and machine learning (ML), the better for enhanced productivity and reduced time-to-market.

Synopsys Accelerates First Pass Silicon Success

Synopsys offers solutions that address the unique challenges of developing SoCs for the high-performance computing infrastructure market. The company provides a comprehensive IP solution that includes a routing feasibility study, packaging substrate guidelines, signal and power integrity models, and thorough crosstalk analysis. This is imperative to address the signal and power integrity challenges faced when developing an optical DSP SoC. Synopsys’ 112G Ethernet PHY IP offers low latency, flexible reach lengths, and maturity on 5nm process technology, making it an ideal solution for hyperscale data center, networking, AI, optical module, and Ethernet switch SoCs. In addition, Synopsys offers an EDA Design Suite that delivers high-quality results with optimized power, performance, area, and yield. Synopsys’ AI-driven EDA Design Suite provides solutions to boost system performance and accelerate time-to-market, making it an essential component of a successful solution for the high-performance computing infrastructure market.

Summary

Synopsys provides high-performance, low-latency solutions that accelerate the development of advanced Ethernet switch and networking SoCs. To learn more about Synopsys’ comprehensive IP solutions, their comprehensive EDA Design Suite and their AI-Enhanced EDA Suite, visit the following pages.

Synopsys’ comprehensive IP solutions

Synopsys’ comprehensive EDA Suite

Synopsys’ AI-driven EDA Design Suite

Also Read:

Multi-Die Systems: The Biggest Disruption in Computing for Years

Taking the Risk out of Developing Your Own RISC-V Processor with Fast, Architecture-Driven, PPA Optimization

Feeding the Growing Hunger for Bandwidth with High-Speed Ethernet

Takeaways from SNUG 2023