
Lowering the DFT Cost for Large SoCs with a Novel Test Point Exploration & Implementation Methodology
by Daniel Nenni on 10-03-2023 at 6:00 am


With increasing on-chip integration capabilities, large-scale electronic systems can now be integrated into a single System-on-Chip, or SoC. Advanced technology nodes raise new manufacturing test challenges that affect both test quality and test cost. A typical parameter is test coverage, which directly impacts yield.

Test point exploration is becoming key for large SoC designs, especially for automotive and security applications where test coverage is expected to reach 99% and above. Reaching such high coverage usually comes with significant area overhead. EDA tools in general are still conservative, and any analysis of “test coverage vs. test logic implementation” requires tedious manual work, especially when done post-synthesis.

In summary, Test Point Exploration & Implementation (TPI&E) is nowadays a key problem for DFT engineers.

Beyond exploring different test point configurations, the implementation impact of thousands of test points is an important problem to consider as early as possible in the design flow. As with most test logic today, such as test compression, self-test, and IEEE 1500, test point implementation is expected to occur pre-synthesis, at the RTL level.

Another important aspect relates to design constraints and design collateral, which should be taken into account during the test point exploration and implementation steps. Typical questions are: which power domain, which clock domain, and which physical partition should each test point belong to? Managing design collateral usually means automated handling of the related design data such as UPF, SDC, and LEF/DEF.

Finally, beyond the TPI&E process itself, once test points are implemented at RTL the impact on the design verification process is key. This includes RTL simulation, netlist elaboration, etc.

Defacto Technologies has been providing EDA DFT solutions for more than 20 years and has recently developed a new methodology to make this test point exploration and implementation process easy and straightforward. The proposed EDA TPI&E solution is structured into the following steps:

Step 1: Capture user requirements and constraints

The idea here is to provide DFT engineers with the ability to specify the expected test coverage and the affordable cost (area overhead), together with various design constraints. Typical constraints include test points that need to be excluded from certain scan chains, or test points that should not be placed on timing-critical paths.

Step 2: Explore ATPG test point configurations as early as RTL!

Because the exploration and insertion run pre-synthesis, Defacto tools fully support an automated ATPG-based RTL analysis. For mainstream ATPGs, the goal is to leverage the ATPG-generated list of test points and explore their efficiency as early as RTL. Design constraints from SDC, UPF, LEF/DEF, or similar formats are easily taken into account as part of the test point exploration process.

Step 3: For the list of TPI configurations suggested by the ATPG, measure the quality of the test points and the related cost through the generated coverage/area reports.

At a glance, the test point configuration with the best trade-off is obtained.
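
To make the trade-off concrete, here is a minimal, hypothetical sketch in Python of how a coverage/area report could be ranked to pick the configuration that clears a coverage target at the lowest area cost. The report fields, configuration names, and selection rule are illustrative assumptions, not Defacto's actual tool interface or report format.

```python
# Illustrative only: rank test-point configurations by coverage/area trade-off,
# assuming the exploration step emits a (coverage, area overhead) pair per config.
from dataclasses import dataclass

@dataclass
class TPConfig:
    name: str
    coverage: float       # stuck-at test coverage, in percent
    area_overhead: float  # added cell area, in percent of the block

def pick_best(configs, min_coverage=99.0):
    """Return the configuration meeting the coverage target at the smallest
    area overhead; fall back to the highest-coverage one if none qualifies."""
    ok = [c for c in configs if c.coverage >= min_coverage]
    if ok:
        return min(ok, key=lambda c: c.area_overhead)
    return max(configs, key=lambda c: c.coverage)

# Hypothetical numbers for three ATPG-suggested configurations
candidates = [
    TPConfig("tp_128",  98.7, 0.4),
    TPConfig("tp_512",  99.1, 1.1),
    TPConfig("tp_1024", 99.4, 2.3),
]
print(pick_best(candidates))  # -> tp_512: first to clear 99% at the lowest area
```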

Step 4: Once the best configuration is selected, the test points are implemented at RTL and the final SoC top-level RTL files are generated, ready for both logic synthesis and design verification.

Step 5: Run DFT verification tasks at RTL, including RTL simulation of ATPG vectors, testability design rule checks, linting, etc., in full compliance with design collateral.

The above process, with fully automated test point exploration, implementation, and verification capabilities, is part of Defacto’s SoC Compiler V10. This flow was shown to be efficient in checking and fixing test point-related problems for large IPs, subsystems, and SoCs. It has been used by several major semiconductor companies to quickly obtain a significant coverage improvement with a moderate area increase.

It is worth mentioning that this solution is vendor-agnostic and fully interoperates with mainstream DFT implementation flows. As a result, the overall DFT throughput is drastically improved, starting at RTL. The results obtained so far clearly show the potential of this DFT flow to help address the key DFT challenges.

The Defacto team will be attending the International Test Conference in Anaheim on October 9th to present this Test Point Exploration & Implementation solution. Feel free to contact them to request a meeting, a demo, or an evaluation: info_req@defactotech.com.

Also Read:

Defacto Celebrates 20th Anniversary @ DAC 2023!

WEBINAR: Design Cost Reduction – How to track and predict server resources for complex chip design project?

Defacto’s SoC Compiler 10.0 is Making the SoC Building Process So Easy


Cyber-Physical Security from Chip to Cloud with Post-Quantum Cryptography
by Kalar Rajendiran on 10-02-2023 at 10:00 am


In our interconnected world, systems ranging from smart cities and autonomous vehicles to industrial control systems and healthcare devices have become everyday components of our lives. This fusion of physical and digital systems has given rise to the term cyber-physical system (CPS). Ubiquitous connectivity exposes these systems to a wide range of cyber threats, and robust security measures are needed to safeguard them.

Securing the Chip, the Cloud and the Communications

At the heart of cyber-physical systems are the chips that control various functions. Securing these chips is crucial because compromised hardware can lead to vulnerabilities that are nearly impossible to detect. Cryptography plays a role at this level by ensuring that the communication between different components of a chip is secure and the data is protected. One of the exciting developments in chip-level security is the integration of hardware-based security features that work in tandem with cryptographic protocols. These features can include physically unclonable functions (PUFs), secure enclaves, and tamper-resistant designs. Such measures not only protect data at rest but also help prevent attacks on the chip’s operation.

In the context of CPS, secure communication between devices and the cloud is paramount. Whether it’s transmitting data from sensors in an industrial setting or updating the software on an autonomous vehicle, the integrity of this communication is imperative.

The cloud is where data from CPS is often processed, stored, and analyzed, making it a prime target for cyberattacks. Leveraging cryptography in cloud security protocols is vital to safeguarding the data and services hosted on the cloud. This ensures that even if bad actors gain access to the cloud infrastructure, they won’t be able to decipher sensitive information.

The field of cryptography is quite advanced, and time-tested encryption algorithms such as RSA and ECC have been widely deployed to ensure the cybersecurity of connected systems. But dedicated hardware support is very important, as secure processing and management cannot be accomplished solely in software.

But the onset of quantum computing poses a significant threat to current cryptographic methods.

Post-Quantum Cryptography (PQC)

To counter the quantum threat, researchers are developing post-quantum cryptography, a new generation of cryptographic techniques that are designed to withstand attacks from quantum computers. These cryptographic methods are being designed to be quantum-resistant, ensuring that the confidentiality and integrity of data remain intact in a quantum-powered world. Post-quantum cryptographic algorithms can be employed to secure the communication channels, ensuring that data remains confidential and protected from quantum eavesdropping. Techniques like lattice-based cryptography, code-based cryptography, and hash-based cryptography offer robust security even against quantum adversaries. While PQC is in various stages of standardization currently, what customers need now are solutions that can be easily upgraded to incorporate PQC.
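
To give a flavor of the hash-based family mentioned above, the sketch below implements a Lamport one-time signature in Python, whose security rests only on the strength of the hash function. It is a teaching example, not a standardized PQC scheme such as SPHINCS+, and certainly not Secure-IC's implementation.

```python
# Minimal Lamport one-time signature: no number-theoretic assumptions, only a
# hash function, which is why hash-based schemes are considered quantum-resistant.
# Teaching sketch only; real deployments use standardized schemes (e.g. SPHINCS+).
import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg: bytes, sk):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]  # reveal one secret per bit

def verify(msg: bytes, sig, pk) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))

sk, pk = keygen()
sig = sign(b"firmware image v1.2", sk)
assert verify(b"firmware image v1.2", sig, pk)  # each key pair must be used only once
```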

The ease of upgrading CPS to incorporate PQC protection is essential for rapid adoption of cyber-threat safeguards across a very wide base of systems. This means deploying embedded security systems that are PQC-ready. Secure-IC offers solutions that fit this criterion.

Secure-IC’s Securyzr: Ultimate Solution for Embedded Security

Securyzr delivers state-of-the-art hardware and software security solutions, designed to meet the demands of a wide range of applications, from IoT devices and automotive systems to critical infrastructure and smart cards. With a focus on adaptability, scalability, and efficiency, Securyzr seamlessly integrates into existing systems while ensuring they remain impervious to attacks. Securyzr’s quantum-resistant cryptographic algorithms ensure data remains safe and confidential even in the era of quantum computing. Its hardware-based Root of Trust protects against unauthorized access and tampering, and its secure boot and firmware update mechanisms prevent malicious code injection. Evolving threats can be handled with ease by employing dynamic security policies that can be updated remotely, ensuring that devices stay secure throughout their lifecycle. Embedded systems can also meet evolving industry-specific security standards and certifications easily.

Secure-IC’s Laboryzr: Unlock the Power of Security Testing

Laboryzr is a comprehensive security testing platform that enables thorough evaluation of the robustness of integrated circuits (ICs), embedded systems, and software applications. With a powerful suite of testing modules and an intuitive user interface, Laboryzr simplifies the complex process of security assessment. Laboryzr provides in-depth security analysis and identifies vulnerabilities and weaknesses in your products, allowing you to address them proactively. By simulating real-world cyberattacks, Secure-IC performs penetration testing to evaluate a system’s resistance to external threats. It also performs side-channel analysis to detect potential information leakage and helps mitigate such attacks, securing sensitive data from unauthorized access. Laboryzr also provides detailed, actionable reports that help you understand the security posture of your systems and make informed decisions to enhance protection.

Secure-IC’s Expertyzr: Elevate Your Expertise in Embedded Security

Expertyzr is a cutting-edge educational platform, crafted to empower professionals, engineers, and security enthusiasts with the expertise needed to navigate the complex landscape of embedded security. With a wealth of resources, hands-on labs, and expert insights, Expertyzr is a gateway to mastering the art of secure embedded systems. The educational platform covers a wide scope from standardization to design principles, evaluation, and certification schemes among many other topics of interest.

Summary

As we continue to embrace cyber-physical systems in our daily lives, the security of these interconnected systems becomes paramount. While quantum computing poses a significant threat to current cryptographic methods, PQC offers a promising solution to protect CPS, from the hardware level in chips to the cloud.

To learn more about solutions for implementing secure CPS systems for the post-quantum era, visit www.secure-ic.com.

Also Read:

How Do You Future-Proof Security?

Points teams should consider about securing embedded systems


Extension of DUV Multipatterning Toward 3nm
by Fred Chen on 10-02-2023 at 8:00 am


China’s recent achievement of a 7nm-class foundry node using only DUV lithography [1] raises the question of how far DUV lithography can be extended by multipatterning. A recent publication at CSTIC 2023 indicates that Chinese groups are currently looking at extending DUV-based multipatterning to 5nm, going so far as to consider using 6 masks for one layer [2]. Comparing the DUV-based and EUV-based approaches going towards 3nm leads to an interesting conclusion.

LELE Patterning

The most basic form of multipatterning is the so-called “Litho-Etch-Litho-Etch” (LELE) approach, which is essentially doing the basic lithography followed by etching twice. This enables a halving of pitch, as a second feature is inserted between two printed first features. By extension, LE3 (3xLE) and LE4 (4xLE) may follow. However, using these approaches for getting to less than half the original pitch is no longer favored, with the arrival of self-aligned spacer patterning.

Self-Aligned Spacer Patterning

Self-aligned spacer patterning has the advantage over LELE of not requiring an extra lithography step, thereby saving that cost. Spacer deposition and subsequent etchback, followed by gapfill and subsequent etchback, replace the coat, bake, expose, bake, develop lithography sequence. While much cheaper, this approach still requires precise process control, for example of spacer thickness and etch rate selectivity. A one-time spacer application leads to feature doubling within a given pitch, hence this is often referred to as self-aligned double patterning (SADP). Re-application leads to self-aligned quadruple patterning (SAQP), as may be expected.
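
As a rough illustration of how these schemes compose, the sketch below divides an assumed DUV immersion single-exposure pitch limit (roughly 76 nm, an illustrative figure) by the density multiplication factor of each scheme. Real limits are set by overlay, cut-mask count, and etch/stochastic control rather than this simple arithmetic.

```python
# Back-of-the-envelope pitch math for the multipatterning schemes discussed above.
# The ~76 nm ArF immersion single-exposure limit is an illustrative assumption.
single_exposure_pitch = 76.0  # nm

density_multiplier = {
    "LELE": 2,  # second litho/etch pass interleaves features
    "LE3":  3,
    "LE4":  4,
    "SADP": 2,  # one spacer pass doubles feature density
    "SAQP": 4,  # two spacer passes quadruple it
}

for scheme, factor in density_multiplier.items():
    print(f"{scheme}: ~{single_exposure_pitch / factor:.0f} nm final pitch")
# On paper, SAQP or LE4 on DUV reaches ~19 nm pitch; in practice the number of
# cut/block masks and process control determine how usable that pitch is.
```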

Subtractive Patterning

While LELE and SADP both naturally add features to a pattern, it is sometimes necessary to remove parts of those features for the final layout. Cut masks indicate areas where line segments are to be removed. These are also called block locations when the line-forming etch is blocked; the inverse mask is called a keep mask. Restricting a line break to a single line width has placement issues if the adjacent line can also be etched. When alternate lines can be arranged to be made from different materials with different etches, line breaks can be made with better tolerances (Figure 1).

Figure 1. Self-aligned block/cut only removes sections of alternate lines.

For a given interconnect line, the distance between breaks is expected to be at least two metal pitches. Thus, two masks per line are expected when the metal pitch is from 1/4 to 1/2 of the resolution limit.

Figure 2. Two sets of block/cut masks are required for the two sets of etch.

Alternate Line Arrangement

Arranging the alternate lines is natural by LELE, SADP, SAQP or a hybrid of LELE and SADP known as SALELE (self-aligned LELE) [3]. SALELE has already been considered the default use for EUV for the tightest metal pitches [2, 4].

DUV vs. EUV Cost Assessment

One of the expectations for multipatterning with DUV has been burgeoning cost, relative to EUV. It is time for an updated re-assessment. First, we use the latest (2021) normalized patterning cost estimates [5] (Figure 3).

Figure 3. Normalized costs for patterning, from Reference 5.

Next, we use representative patterning styles for DUV and EUV for the various nodes (Figure 4).

Figure 4. DUV vs. EUV patterning costs vs. node

Several comments are in order:

  1. For 7nm DUV, 40 nm pitch is at a point where the only features that can be resolved are lines, so these lines have to be cut in a separate exposure.
  2. For 7nm EUV, a separate line cut is used since at 40 nm pitch, the required resolution (~20 nm) is less than the point spread function of the EUV system (~25 nm). A High-NA EUV system is also not advantageous for this pitch, because of depth of focus and pupil fill limitations [6].
  3. For 3/5nm DUV, LELE SADP is more flexible than SAQP for sub-40 nm pitch [7].
  4. For 3/5nm EUV, the driving force of using LELE is the stochastic behavior at <17 nm half-pitch and <20 nm isolated linewidth [8,9]. As we approach 10 nm dimensions, the electron scattering dose-dependent blur [10-12] will also become prohibitive. The optical resolution of the system, i.e., NA, is no longer relevant.
  5. Pattern shaping is not considered as a way to eliminate cuts, as it would make the pre-shaping lithography much more difficult (Figure 5). Also, angled ion beam etching has generally been used to flatten pre-existing topography, reducing the etch mask height [13].

Figure 5. For pattern shaping, the pattern before shaping is very litho-unfriendly.

For the most part, we can make the direct judgment that DUV LELE is much cheaper than EUV single exposure (SE). Also, DUV LE4 is cheaper than EUV double patterning. Although LELE requires extra steps over SE, there is also the consideration of EUV system maintenance vs. DUV system maintenance, as well as energy consumption. DUV LELE uses half as much energy as EUV SE, DUV SADP about 2/3, and even DUV LE4 uses just under 85% of the energy for EUV SE [14].
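
The quoted energy figures can be lined up directly; the short sketch below simply normalizes them to EUV single exposure, using only the ratios stated above from Reference 14.

```python
# Relative per-layer energy consumption, normalized to EUV single exposure (1.0),
# using the ratios quoted in the text above (from Reference 14).
energy_vs_euv_se = {
    "EUV single exposure": 1.00,
    "DUV LELE":            0.50,  # "half as much energy"
    "DUV SADP":            0.67,  # "about 2/3"
    "DUV LE4":             0.85,  # "just under 85%"
}

for scheme, ratio in sorted(energy_vs_euv_se.items(), key=lambda kv: kv[1]):
    print(f"{scheme:22s} {ratio:.2f}x")
```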

All this serves to highlight that moving to advanced nodes requires facing growing cost, regardless of DUV or EUV choice.

References

[1] https://www.techinsights.com/blog/techinsights-finds-smic-7nm-n2-huawei-mate-60-pro

[2] Q. Wu et al., CSTIC 2023.

[3] Y. Drissi et al., Proc. SPIE 10962, 109620V (2019).

[4] R. Venkatesan et al., Proc. SPIE 12292, 1229202 (2022).

[5] S. Snyder et al., 2021 EUVL Workshop, https://www.euvlitho.com/2021/P2.pdf

[6] F. Chen, When High NA is Not Better Than Low NA in EUV Lithography, 2023, https://www.youtube.com/watch?v=10K5i4QdLBU

[7] S. Sakhare et al., Proc. SPIE 9427, 94270O (2015).

[8] L. Meli et al., J. Micro/Nanolith. MEMS MOEMS 18, 011006 (2019).

[9] D. De Simone and G. Vandenberghe, Proc. SPIE 10957, 109570Q (2019).

[10] A. Narasimhan et al., Proc. SPIE 9422, 942208 (2015).

[11] I. Bespalov et al., ACS Appl. Mater. Interfaces 12, 9881 (2020).

[12] F. Chen, Modeling EUV Stochastic Defects With Secondary Electron Blur, https://www.linkedin.com/pulse/modeling-euv-stochastic-defects-secondary-electron-blur-chen

[13] M. Ulitschka et al., J. Europ. Opt. Soc. – Rapid Pub. 17:1 (2021).

[14] L-A. Ragnarsson et al., 2022 Electron Dev. Tech. Manuf., 82 (2022).

This article first appeared in LinkedIn Pulse: Extension of DUV Multipatterning Toward 3nm

Also Read:

Stochastic Model for Acid Diffusion in DUV Chemically Amplified Resists

Advancing Semiconductor Processes with Novel Extreme UV Photoresist Materials

Modeling EUV Stochastic Defects with Secondary Electron Blur


The True Power of the TSMC Ecosystem!
by Daniel Nenni on 10-02-2023 at 6:00 am


The 15th TSMC Open Innovation Platform® (OIP) was held last week. In preparation we did a podcast with one of the original members of the TSMC OIP team, Dan Kochpatcharin. Dan and I talked about the early days before OIP, when we did reference flows together. Around 20 years ago I did a career pivot and focused on Strategic Foundry Relationships. The importance of the foundries was clear to me and I wanted to be an integral part of that ecosystem. As it turns out, it was a great career move, absolutely.

Before I get to the importance of the early TSMC reference flow days, let’s talk about the recent OIP event. It was held at the Santa Clara Convention Center and it was a full house. For other semiconductor event coordinators: if you want full semiconductor attendance, use the Santa Clara Convention Center. Local hotels or the San Jose Convention Center are not convenient, and convenience means attendance. TSMC switched to the Santa Clara Convention Center from San Jose a few years back and the rest, as they say, is history; TSMC hosts the best semiconductor networking events.

This year OIP was all about packaging and rightly so. It is the next foundry battleground and TSMC is once again building a massive ecosystem appropriately named the 3D Fabric Alliance:

TSMC Announces Breakthrough Set to Redefine the Future of 3D IC: New 3Dblox 2.0 and 3DFabric Alliance Achievements Detailed at 2023 OIP Ecosystem Forum

“As the industry shifted toward embracing 3D IC and system-level innovation, the need for industry-wide collaboration has become even more essential than it was when we launched OIP 15 years ago,” said Dr. L.C. Lu, TSMC fellow and vice president of Design and Technology Platform. “As our sustained collaboration with OIP ecosystem partners continues to flourish, we’re enabling customers to harness TSMC’s leading process and 3DFabric technologies to reach an entirely new level of performance and power efficiency for the next-generation artificial intelligence (AI), high-performance computing (HPC), and mobile applications.”

L.C. Lu has been part of the TSMC OIP since the beginning; he worked for Dr. Cliff Hou. From 1997 to 2007 Cliff established the TSMC PDK and reference flow development organizations, which then led to the OIP. Cliff Hou is now TSMC Senior Vice President, Europe & Asia Sales and Research & Development / Corporate Research.

L.C. updated us on the progress of the 3DFabric Alliance and 3Dblox, which is an incredible piece of technology that is open to all customers, partners and competitors alike. It is an industry standard in the making for sure. We covered 3Dblox HERE and TSMC gave us this update:

Introduced last year, the 3Dblox open standard aims to modularize and streamline 3D IC design solutions for the semiconductor industry. With contribution from the largest ecosystem of companies, 3Dblox has emerged as a critical design enabler of future 3D IC advancement.

The new 3Dblox 2.0, launched today, enables 3D architecture exploration with an innovative early design solution for power and thermal feasibility studies. The designer can now, for the first time in the industry, put together power domain specifications and 3D physical constructs in a holistic environment and simulate power and thermal for the whole 3D system. 3Dblox 2.0 also supports chiplet design reuse features such as chiplet mirroring to further improve design productivity.

3Dblox 2.0 has won support from key EDA partners to develop design solutions that fully support all TSMC 3DFabric offerings. Those comprehensive design solutions provide designers with key insights to make early design decisions, accelerating design turnaround time from architecture to final implementation.

TSMC also launched the 3Dblox Committee, organized as an independent standards group, with the goal of creating an industry-wide specification that enables system design with chiplets from any vendor. Working with key members including Ansys, Cadence, Siemens, and Synopsys, the committee has ten technical groups covering different subjects, which propose enhancements to the specs and maintain the interoperability of EDA tools. Designers can now download the latest 3Dblox specifications from the 3dblox.org website and find more information about 3Dblox and its tool implementation by EDA partners.

Back to the reference flows: at the time I was the Strategic Foundry Relationship Advisor for Solido Design Automation out of Saskatoon, Canada and Berkeley Design Automation (BDA) in Silicon Valley. Back then EDA included a lot of point tools inside the design flow since no one company could do it all. So all of the point tool companies looked to TSMC for guidance on how to interoperate inside a customer’s design flow. This was not only valuable experience, it provided much needed exposure for EDA start-ups to the TSMC customer base. In the case of Solido and BDA, it not only led to rapid adoption by TSMC’s top customers, TSMC itself licensed the tools for internal use, which is the ultimate seal of approval. Solido and BDA were both acquired by Siemens EDA and their close relationships with TSMC were a big part of those transactions, believe it.

A similar process was developed for silicon-proven IP. I am also a Foundry Relationships Advisor for IP companies, and not only do we get access to TSMC’s top customers, TSMC also provides access to PDKs and taught us how to silicon prove our products. Notice on the TSMC OIP partner list that the biggest market segment is IP companies, for these exact reasons. IP is a critical enabler for the foundry business and getting silicon right the first time is what OIP is all about.

Bottom line:  In the foundry business it’s all about collaboration and TSMC built this massive ecosystem from the ground up. Not only does it reduce customer risk of designing to new processes, the close collaboration between TSMC and the ecosystem partners multiplies the total annual ecosystem R&D investments exponentially.

Also Read:

TSMC’s First US Fab

The TSMC OIP Backstory

The TSMC Pivot that Changed the Semiconductor Industry!


Micron Chip & Memory Down Cycle – It Ain’t Over Til it’s Over Maybe Longer and Deeper
by Robert Maire on 10-01-2023 at 6:00 pm

  • The memory down cycle is longer/deeper than many thought
  • The recovery will be slower than past cycles- a “U” rather than “V”
  • AI & new apps don’t make up for macro weakness
  •  Negative for overall semis & equip- Could China extend downcycle?
Micron report suggests a longer deeper down cycle than expected

The current memory downcycle started in the spring of 2022, over a year ago, with Micron first reporting weakness. We had suggested that the current memory downturn would be longer and deeper than previous downturns given the unique circumstances and were roundly criticized as too pessimistic.

It now looks like the memory downturn will last at least two years (if not longer) and it’s clearly worse and longer than most prior cycles. It seems fairly clear that there will be no recovery in 2023, as we are already past the peak season for memory sales, and at best maybe sometime in 2024.

Typically memory peaks in the summer prior to the fall busy selling season of all things electronic. We then go through a slow Q1 due to post-partum depression after the holiday sales, coupled with Chinese holidays in Q1. Thus it looks like summer 2024 is our next opportunity for better pricing.

The problem is that “analysts” always kick the can down the road in 6-month increments, saying that things will get better in H1, or better in H2, etc. So don’t listen to someone who now calls for an H1 2024 recovery, as it’s just another kick of the can without hard facts to back it up.

A “Thud” rather than a “Boing”- sounds of the cycle

The last memory downcycle several years ago seemed more like a “one quarter wonder” with things quickly bouncing back to normal after a short spending stop by Samsung.

This led investors to believe that we were in a “V” shaped bottom when it’s obviously a “U” or, worse yet, an “L” shaped bottom.

The downturn this time is not just oversupply created by overspending; it is also coupled with reduced demand due to macro issues.

We have cut back on supply by holding product off the market in inventory, slowing down fabs, and cutting capex, none of which can fix the demand issue. Perhaps the bigger problem is that product held off the market eventually needs to be sold, and factories running at less than full capacity beg to be turned back up to increase utilization and profitability, so any near-term uptick in demand will quickly be offset by the existing excess capacity, thus slowing a recovery.

We haven’t even started talking about the potential capacity increases related to Moore’s Law density improvements, which increase the number of bits per wafer produced through ongoing technology advances alone.

Bottom line:  There is a ton of excess memory capacity and weak demand to sop it all up; it’s gonna take a while.

China can and will likely stifle a memory recovery

The other 800lb gorilla problem that most in the industry haven’t spoken about is China’s entry into the memory market and what that will do to the current down cycle and the resulting market share impacts.

Most in the industry look at the supply/demand balance in memory chips as a static market share model. But it’s not.

China has been spending tons of money, way more than everyone else, on semiconductor equipment. Not just for foundry and trailing edge but for memory as well. While China is not a big player in memory right now they are spending their way into a much bigger role.

All that equipment shipped into China over the last few years will eventually come on line and further increase the already existing oversupply in memory chips.

Many would argue that China is not competitive in memory due to higher-cost, less efficient technology, but we would argue that China is not a semi-rational player like Samsung or Micron and will price its product at whatever money-losing price it needs to gain market share and crush competition. Kind of like what Samsung has done in the memory market, but with state-sponsored infinite money behind it.

China is a “wild card” in the memory market that could easily slow or ruin any recovery and take share from more rational or weaker players, such as Micron, who don’t have the financial resources to lose as much money and survive.

In short, China can screw up any potential memory chip recovery and delay it further.

AI and other new apps are not enough to offset weakness & oversupply

High bandwidth memory needed for AI applications is obviously both hot and under supplied. Capacity will shift to high bandwidth memory but not enough to reduce the currently very oversupplied market. The somewhat limited supply of AI processors will also limit high bandwidth memory demand because there aren’t enough processors available and you are not going to buy memory if you can’t get processors.

$7B in capex keeps Micron treading water

Micron talked about $7B in capex for 2024 which likely is just enough to keep their existing fabs at “maintenance” levels.

With the current excess capacity in the memory market coupled with technology based capacity improvements and the threat of China, building new fabs in Boise or New York is a distant dream as it would be throwing gasoline on an already raging bonfire of excess capacity.

We don’t see a significant change in capex on the horizon and most will continue to be maintenance spend.

Both Huawei/SMIC and Micron go “EUV-less” into next gen chips

Further proof of the ability to continue on the Moore’s Law path without EUV has recently been provided by Micron.

It would appear that the latest and greatest memory chip, the LPDDR5 16GB D1b device, which made its debut in the iPhone 15, was made without $150M EUV tools, just like the 7nm Huawei/SMIC chip.

Where there’s a will there’s a way…….Micron has always been a bunch of very cheap and very resourceful people who think outside the box and they have done so with this latest generation device without EUV that others are using.

In this case, doing it without EUV at Micron likely means producing it at lower cost.

Link to article on EUV-less 16GB D1b chip from Micron

This just underscores our recent article about China’s ability to skirt around the semiconductor sanctions that ban EUV. They will be able to do it in memory as well.

The Stocks

Obviously this is not great news for the stock of Micron. We were even somewhat surprised that there wasn’t a worse reaction and the broader semiconductor market was positive today.

Memory oversupply/demand weakness is coincident with broader semiconductor malaise. The weak capex predictions are certainly a negative for the chip equipment providers.

For Micron specifically we remain concerned about continued losses and what that does to their balance sheet and ability to recover when the time comes. They are certainly burning through a lot of cash and, if we do the math, aren’t going to have a lot left at the end of the downcycle, assuming we get an end to the downcycle soon (which is not clear).

There is an old joke about Micron that if you totaled up all the profits and losses over the life of the company it would be a negative number. We haven’t revisited that exercise of late but wonder where we are…..and getting worse.

We don’t see any reasonable reason to own the shares of Micron especially at current levels. The stock is well off the bottom yet business is not and we don’t have a definitive recovery in sight.

Risks remain quite high, China is a risk to Micron in several ways and their financial strength, which is important in the chip business, is dwindling fast.

At this point there are less risky semiconductor investments, even at higher valuations, that seem more comfortable.

But then again, Micron stock has never been for the faint of heart…for a reason

Also Read:

Has U.S. already lost Chip war to China? Is Taiwan’s silicon shield a liability?

ASML-Strong Results & Guide Prove China Concerns Overblown-Chips Slow to Recover

SEMICON West 2023 Summary – No recovery in sight – Next Year?


Podcast EP185: DRAM Scaling, From Atoms to Circuits with Synopsys’ Dr. Victor Moroz
by Daniel Nenni on 09-29-2023 at 10:00 am

Dan is joined by Dr. Victor Moroz, a Synopsys Fellow engaged in a variety of projects on leading edge modeling Design-Technology Co-Optimization. He has published more than 100 technical papers and over 300 US and international patents. Victor has been involved in many technical committees and is currently serving as an Editor of IEEE Electron Device Letters.

Dan discusses the challenges of advanced DRAM scaling with Victor, who explores the strategies being used today and what is on the horizon. Victor discusses the aspects of process, package design and stress analysis, security, and next-generation structures and materials and how to address those challenges with advanced design tools.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Stephen Rothrock of ATREG
by Daniel Nenni on 09-29-2023 at 6:00 am


Stephen Rothrock founded ATREG in 2000 to help global advanced technology companies divest and acquire infrastructure-rich manufacturing assets. Over the last 25 years, his firm has completed more than 100 transactions, representing over 40% of all global operational wafer fab sales in the semiconductor industry for operational, warm, and cold shells. Prior to founding ATREG, Stephen established Colliers International’s Global Corporate Services initiative and headed the company’s U.S. division based in Seattle, Wash. Before that, he worked as Director for Savills International commercial real estate brokerage in London, UK, also serving on the UK-listed property company’s international board. He also spent four years near Paris, France working for an international NGO.

Tell us about how ATREG came about.
In the late 90s, Japan was heavily divesting from U.S. semiconductor manufacturing assets due to falling memory prices and the high exchange rate of the Yen against the U.S. Dollar. Having had some cleanroom experience through work I had done with AT&T in Europe, several Japanese companies approached me when I was with Colliers International asking if I could help them divest some of their wafer fabs located in the Pacific Northwest. That’s how I ended up selling two 200mm fabs, including Matsushita Puyallup, WA and Fujitsu Gresham, OR, to Microchip. Then we sold a facility for Sony down in Eugene, OR and NEC invited us to sell its 200mm facility in Scotland. After closing these fab transactions for Japanese companies, I recognized a gap in the market and decided to create a special internal division named Advanced Technology Real Estate Group (ATREG), dedicated exclusively to transactions focused on infrastructure-rich semiconductor cleanroom and manufacturing assets. We realized that if we sold a facility with an operational tool line, workforce, and an ongoing supply agreement, there would be a market for wafer fab divestment and acquisition services to other chip makers at a time when the industry was consolidating, not just in Asia, but also in the U.S. and Europe. The business took off through assignments with IBM, Infineon, Micron, and a number of Silicon Valley firms such as Maxim. Eventually, I spun the division out of Colliers International and ATREG was born.

What factors do you attribute ATREG’s success to?
After operating for 25 years, ATREG is still the only premier global firm in the world dedicated to initiating, brokering, and executing the exchange of advanced technology cleanroom manufacturing assets. ATREG has served as an objective intermediary in the transfer of over $30 billion in assets so far, acting as an indispensable conduit for the growth of its partners and the industry as a whole. There was a real need to help advanced technology companies with their global manufacturing disposition strategies because they didn’t know where to start.

As we continued to conduct fab transactions, we collected significant data on global cleanroom assets and critical deal points. Most companies didn’t have the internal staff, knowledge, or ability to allocate the time and resources necessary to execute these types of transactions. Trust and integrity were key to discussing these very sensitive issues given the financial and balance sheet effect. Over time, ATREG has built trusted relationships with many of the high-level C-suite executives in the semiconductor industry to facilitate these transactions. Our key objective is to work hand in hand with sellers and buyers alike to find the right asset strategy while simultaneously retaining as much human capital as possible when fabs change hands. CEOs call us when they need to respond to ever-changing market conditions and adjust their manufacturing strategy to reposition themselves in the global marketplace, ensure capacity, and meet customer needs.

What does your competitive landscape look like and how do you differentiate?
What ATREG offers is unique and there isn’t a firm like us anywhere else in the world. We are the go-to partner in the semiconductor industry to identify opportunities, find creative solutions, and drive competitive demand for the exchange of holistic advanced technology facilities. We facilitate the comprehensive sale and purchase of everything our clients need to be fully operational from day one – including supply agreements, human capital, tool lines, and intellectual property. We have an entrenched ability to evolve amid ever-changing global market conditions, based on 25 years of global experience. We are also very committed to human capital retention – and the significant value it adds – across all the transactions we are involved in.

What things keep your customers up at night?
The semiconductor manufacturing industry is a multifaceted, highly complex and competitive environment, subject to constant geopolitical tensions and unexpected global events (pandemics, natural disasters, etc.) Chip makers bear a lot on their shoulders. On top of having to keep up with the latest technological advances to meet ever-pressing customer demand, they need to accelerate time-to-market while keeping manufacturing costs down. The situation has worsened since the Covid pandemic, with costs and lead times spinning out of control. Add other considerations such as the labor shortage to staff greenfield fabs, protecting intellectual property, supply chain issues, sustainability compliance, or shorter product cycles, all of which impact manufacturing assets. That’s when ATREG comes in to alleviate some of that load by providing expert advice on how to address some of these strategic issues.

What are the strategic benefits of selling and buying brownfield fabs for chip makers?
On the sell side, they include fabless or fab-lite strategic initiatives​, gross margin pressure, and underutilization whose root cause is often demand based. In addition, we have products coming to their end of life, CapEx requirements to continue advancing technology capabilities (there are infrastructure limitations for a site to move from 200mm to 300mm), and consolidation into other fabs, often for cleanroom shell sales. Examples include onsemi who needed to consolidate its U.S. fab portfolio to come out of low-margin businesses. Buying the East Fishkill 300mm fab gave the company 2.5 times more capacity. In Asia, Allegro MicroSystems sold a cleanroom in Thailand to consolidate all of its production to its site in the Philippines where it had extra space.

On the buy side, what makes fabs valuable is different from the driving force behind a disposition. Core reasons include geopolitical de-risking, allocation and manufacturing control, scaling geometry requirements, and product demand. Examples include Diodes and their desire for increased internal manufacturing control, Texas Instruments and scaling geometry requirements, or VIS and increased demand for products requiring additional capacity​.

What are some of the most notable fab transactions that have taken place recently?
On August 31st, Bosch announced the completion of the acquisition of TSI Semiconductors’ operational 200mm fab in Roseville, CA. Following a retooling phase beginning in 2026, the company will start producing its first SiC chips on 200mm wafers. Attracting one of Europe’s largest manufacturers to U.S. soil who has only ever produced front-end chips in Germany is a massive win for the U.S. semiconductor industry as Bosch plans to invest $1.5 billion in the Roseville site over the next few years. In Europe, the German government just granted their approval for the sale of Elmos’ Dortmund fab to U.S. company Littelfuse. In June, both companies had signed a purchase agreement for a net purchase price of approximately €93 million. In both these transactions facilitated by ATREG on behalf of TSI and Elmos respectively, both buyers committed to continue to employ both fabs’ staff, saving hundreds of jobs in an already extremely tight labor market.

What is the best advice you would give U.S. chip makers to ensure a successful manufacturing strategy in 2023 and beyond?
If there was one piece of advice I could give U.S. semiconductor manufacturers to ensure capacity and supply chain resilience, it would be to leave no stone unturned by looking at all semiconductor manufacturing options at their disposal. Greenfield fabs with support from CHIPS Act funding are one avenue, but it will take years before these new facilities yield wafers at volume. Until permit, certification, and entitlement procedures are reformed in the U.S., this will be a cumbersome process. Plus the competition for those public funds will be fierce. The other alternative to consider is brownfield. These facilities are obviously few and far between at any one time, but as chip makers who wish to go fab-lite or fabless transition their production out to foundries, some operational fab assets will become available on the market all over the world and there might just be one out there that’s right for you. For example, companies in compound semi, GaN, GaAs, SiC, and MEMS want fabs, and greenfield is not necessarily the answer for them because it takes too long to yield.

Also Read:

CEO Interview: Dr. Tung-chieh Chen of Maxeda

CEO Interview: Koen Verhaege, CEO of Sofics

CEO Interview: Harry Peterson of Siloxit

Breker’s Maheen Hamid Believes Shared Vision Unifying Factor for Business Success


AI for the design of Custom, Analog Mixed-Signal ICs
by Daniel Payne on 09-28-2023 at 10:00 am


Custom and Analog Mixed-Signal (AMS) IC design is used when the highest performance is required and digital standard cells just won’t meet the requirements. Manually sizing schematics, doing IC layout, extracting parasitics, then measuring the performance only to go back and continue iterating is a long, tedious approach. Siemens EDA offers EDA tools that span a wide gamut, including: High Level Synthesis, IC design, IC verification, physical design, physical verification, manufacturing and test, packaging, electronic systems design, electronic systems verification and electronic systems manufacturing. Zooming into the categories of IC design and IC verification is where tools for Custom IC come into focus, like the Solido Design Environment. I had a video conference with Wei Tan, Principal Product Manager for Solido, to get an update on how AI is being used.

Designing an SoC at 7nm can cost up to $300 Million, and 5nm can reach $500 Million, so having a solid design and verification methodology is critical to the financial budget, and the goal of first pass silicon success. With each smaller process node the number of PVT corners required for verification only goes up.

The general promise of applying AI to the IC design and verification process is to improve or reduce the number of brute-force calculations, assist engineers to be more productive, and to help pinpoint root causes for issues like yield loss. Critical elements of using AI in EDA tools include:
  • Verifiability - the answers are correct
  • Usability - non-experts can use the tools without a PhD in statistics
  • Generality - it works on custom IC, AMS, memory and standard cells
  • Robustness - all corner cases work properly
  • Accuracy - same answers as brute-force methods
Wei talked about three levels of AI: the first is Adaptive AI, which accelerates an existing process using AI techniques; the next is Additive AI, which retains previous model results for use in new runs; and the final level is Assistive AI, which uses generative AI to help circuit designers be more productive with new insights.
Solido has some 15 years of experience applying AI techniques to EDA tools used by circuit designers at the transistor level. For Monte Carlo simulations using Adaptive AI there’s up to a 10,000X speedup, so you can get 3 to 6+ sigma results at all corners that match brute-force accuracy. Here’s an example of Adaptive AI where a 7.1 sigma verification that would have required 10 trillion brute-force simulations used only 4,000 simulations, about 2,500,000,000X faster, with SPICE accuracy.
Figure: High-Sigma Verifier example
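
For context on why brute force is hopeless here, a short sketch (generic Gaussian tail statistics, not Solido's algorithm) shows how the required simulation count explodes with the target sigma level.

```python
# Expected Monte Carlo sample counts needed just to observe ~10 failures at a
# given sigma level, assuming a Gaussian-distributed performance metric.
import math

def tail_prob(sigma: float) -> float:
    """One-sided Gaussian tail probability at the given sigma level."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for sigma in (3, 4.5, 6, 7.1):
    p_fail = tail_prob(sigma)
    print(f"{sigma:>4} sigma: p_fail ~ {p_fail:.1e}, "
          f"~{10 / p_fail:.1e} simulations to see ~10 failures")
# At 7.1 sigma the failure probability is ~6e-13, i.e. on the order of 10^13
# (ten trillion) brute-force simulations, which is why AI-guided sampling is used.
```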

The Solido Design Environment also scales well in the cloud to speed up simulation runs using AWS or Azure vendors to meet peak demands.

An example of Additive Learning employs AI model reuse for when there are multiple PDK revisions and you want to characterize your entire standard cell library for each new PDK version. The traditional approach would require 600 hours to do the initial PVT runs using Monte Carlo, covering five revisions.
Figure: Traditional PVT Monte Carlo (PVTMC) jobs
With AI model reuse this scenario takes much less time to complete, while also saving many megabytes to gigabytes of data on disk.
Figure: AI model reuse saves time
Assistive AI is applied to the sizing of transistors: it identifies optimization paths to improve PPA, determines the optimal sizing of transistors to achieve the target PPA goals, and has friendly reports to visualize the progress. You can expect your IC team to save days to weeks of engineering time by using AI-assisted optimization.
Figure: Assistive AI for circuit sizing

Summary

Custom and AMS IC designers can now apply AI-based techniques in their EDA tool flows during both design and verification stages. Adaptive AI speeds up brute-force Monte Carlo simulation, Additive learning uses retained AI models to speed up runs, and Assistive AI is applied to circuit optimization and analysis.
Yes, you still need circuit designers to envision transistor-level circuits, but they won’t have to wait so long for results when using EDA tools that have AI techniques under the hood.

Related Blogs


Keysight EDA 2024 Delivers Shift Left for Chiplet and PDK Workflows
by Don Dingee on 09-28-2023 at 8:00 am


Much of the recent Keysight EDA 2024 announcement focuses on high-speed digital (HSD) and RF EDA features for Advanced Design System (ADS) and SystemVue users, including RF System Explorer, DPD Explorer (for digital pre-distortion), and design elements for 5G NTN, DVB-S2X, and satcom phased array applications. Two important new features in the Keysight EDA 2024 suite may prove crucial in EDA workflows for chiplets and PDKs (process design kits).

A quick introduction to chiplet interconnects

Chiplets are the latest incarnation of modular chip design tracing back through multi-chip module (MCM), system-in-package (SiP), package-on-package (PoP), and others, targeting improved cost-effective design, performance, yield, power efficiency, and thermal management. Chiplets decompose what would otherwise be a complex SoC, with an expensive and maybe unrealizable single-die solution, into smaller pieces designed and tested independently and then packaged together. Chip designers can grab chiplets from different process nodes in a heterogeneous approach – say, putting a 3nm digital logic chiplet alongside a 28nm mixed-signal chiplet.

Until recently, there has been no specification for die-to-die (D2D) interconnects, leaving chiplet designers with two significant challenges. First is the speed of today’s interconnects, often with gigabit clocks, where the bit error rate (BER) starts creeping up enough to affect performance. Second is the difficulty of modeling and simulating interconnects in digital EDA tools, usually in a do-it-yourself approach, trying to match precise time-domain measurements of eye patterns from high-speed oscilloscopes.

UCIe (Universal Chiplet Interconnect Express) fills the gap for D2D interconnects. It defines three layers: a PHY layer with data paths on physical bumps grouped into lanes by signal exit ordering; a D2D adapter coordinating link states, retries, power management, and more; and a protocol layer building on CXL and PCIe specifications. The Standard Package (2D) drives low-cost, long-reach (up to 25mm) interconnects. Advanced Package (2.5D) variants optimize performance on short-reach (less than 2mm) interconnects with tighter bump pitch, enabling improved BER at higher transfer rates. Bump maps and signal exit routing, combined with scalable diagonal bump pitch requirements, ensure that a UCIe-compliant chiplet places on a substrate with controlled interface characteristics, making interoperable connections possible.

A shift left with Chiplet PHY Designer for UCIe

Keysight EDA teams have been working on modeling and simulating HSD interfaces aligned with industry specifications for some time. Their first major product release was ADS Memory Designer, with an IBIS-AMI modeler for DDR5/LPDDR5/GDDR7 memory interfaces and statistical and single-ended bus bit-by-bit simulations. Its rigorous and genuine JEDEC compliance testing handles over 100 test IDs with the same test algorithms found in the Keysight Infiniium oscilloscope family.

According to Hee-Soo Lee, DDR/SerDes Product Owner and HSD Segment Lead at Keysight, the HSD R&D squad leveraged four years of effort developing Memory Designer in the creation of Chiplet PHY Designer, the industry’s first chiplet interconnect simulation tool ready for introduction as part of ADS 2024 Update 1.0 in the Keysight EDA 2024 suite. “We saw an opportunity to speed up designs using chiplets by simulating a chiplet subsystem, from one D2D PHY through interconnect channels to another D2D PHY, much earlier in the cycle,” says Lee. “Chiplet PHY Designer precisely computes a voltage transfer function (VTF) to ensure specification compliance and analyzes system BER down to 1e-27 or 1e-32 levels.” Chiplet PHY Designer can also measure eye height, eye width, skew, mask margin, and BER contour.
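
For readers unfamiliar with statistical BER levels like 1e-27, the generic relationship between a Gaussian Q-factor (roughly, eye opening divided by noise) and BER gives a sense of scale. This is textbook link math, not a description of how Chiplet PHY Designer computes its VTF-based BER.

```python
# Statistical BER estimate for a Gaussian-noise-limited link: BER = 0.5*erfc(Q/sqrt(2)).
import math

def ber_from_q(q_factor: float) -> float:
    """Bit error rate implied by a given Q-factor under Gaussian noise."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2))

for q in (7, 9, 11, 12):
    print(f"Q = {q:>2}: BER ~ {ber_from_q(q):.1e}")
# Q ~ 7 corresponds to ~1e-12; the 1e-27-class levels quoted above correspond to
# Q-factors around 11, which is why statistical extrapolation is used rather than
# brute-force bit-by-bit simulation of that many bits.
```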

Keysight teams adapted the single-ended bus simulation technology to deal with the single-ended signaling and forwarded clocking used in UCIe. They then incorporated the UCIe signal naming convention and connection rules for handling smart wiring in the schematic. “After placing two dies along with interconnect channels, we can now tell Chiplet PHY Designer to make the automated wiring connections between chiplet components, and the design is ready for simulation right away,” continues Lee. The upcoming November 2023 release of Chiplet PHY Designer puts Keysight ahead of competing EDA environments for chiplet design. Interestingly, Lee hints that support for Bunch of Wires (BoW) and Advanced Interface Bus (AIB) is coming in future releases.

Adapt existing PDK models to new process specifications

Creating accurate and high-quality transistor models can be time-consuming and affect the on-time delivery of PDKs. “In the traditional modeling approach, extracting a transistor model card from mass measurement data takes at least several days, often weeks,” says Ma Long, Manager of Device Modeling and Characterization at Keysight.

Keysight IC-CAP now incorporates a new product for model recentering, where models from prior processes are adjusted using figure-of-merits (FOMs) on a new process. “The biggest challenge is addressing the trend plots in real-time, simulating data points for different geometries and temperatures,” says Long. “From threshold voltage, cutoff frequency, and other FOMs, modeling engineers can modify an existing model to new specifications and save 70% compared to traditional step-by-step model extraction.”
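
Conceptually, recentering means adjusting a small number of parameters of an existing compact model until its figures of merit hit the new-process targets. The toy sketch below does this for a single hypothetical threshold-voltage offset by bisection; the FOM function and numbers are invented for illustration, and IC-CAP's actual recentering flow is more sophisticated and not shown here.

```python
# A toy version of "model recentering": nudge one parameter of an existing model
# until a figure-of-merit matches the new-process target. All values are hypothetical.

def fom_vth(delta_vth: float) -> float:
    """Hypothetical FOM: old model's extracted threshold voltage plus an offset (V)."""
    old_model_vth = 0.42
    return old_model_vth + delta_vth

def recenter(target: float, lo=-0.2, hi=0.2, tol=1e-4) -> float:
    """Bisect the offset so the simulated FOM hits the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fom_vth(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

offset = recenter(target=0.38)                   # new-process Vth target
print(f"apply delta_vth ~ {offset * 1e3:.1f} mV")  # about a -40 mV shift
```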

Earlier model quality check reduces later iterations

Keysight has a full-featured model quality assurance (QA) tool, MQA, used for final SPICE model library sign-off and documentation. A newly developed light version of MQA, QA Express, is now integrated into Keysight Model Builder (MBP), allowing modeling engineers to apply a quick model QA check during parameter extraction.

Binning model QA is complicated and can also take days or weeks, and issues showing up late in the process can send teams back to the beginning. “QA Express gives easy-to-use, quick results providing a high-confidence check,” Long continues. A faster result is beneficial when simulators toss warnings over parameter effective ranges or bin discontinuity is detected. QA Express enables modeling engineers to find QA issues earlier with one-click ease.

Learn more at the Keysight EDA 2024 product launch event

Keysight has packed many new capabilities into the Keysight EDA 2024 release. For a brief introduction to trends in the EDA market driving these improvements, watch the video featuring Keysight EDA VP and GM Niels Faché below.

To help current and future users understand the latest enhancements in Keysight EDA 2024, including workflows for chiplets and PDKs, Keysight is hosting an online product introduction event on October 10th and 11th for various time zones.

Registration page:

Keysight EDA 2024 Product Launch Event

Press release for Keysight EDA 2024:

Keysight EDA 2024 Integrated Software Tools Shift Left Design Cycles to Increase Engineering Productivity

Also Read:

Version Control, Data and Tool Integration, Collaboration

Keysight EDA visit at #60DAC

Transforming RF design with curated EDA experiences


Assertion Synthesis Through LLM. Innovation in Verification
by Bernard Murphy on 09-28-2023 at 6:00 am


Assertion-based verification is a very productive way to catch bugs; however, assertions are hard enough to write that assertion-based coverage is not as extensive as it could be. Is there a way to simplify developing assertions to aid in increasing that coverage? Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Towards Improving Verification Productivity with Circuit-Aware Translation of Natural Language to SystemVerilog Assertions. The paper was presented at the First International Workshop on Deep-Learning Aided Verification in 2023 (DAV 2023). The authors are from Stanford.

While a lot of attention is paid to LLMs for generating software or design code from scratch, this month’s focus is on generating assertions, in this case as an early view into what might be involved in such a task. The authors propose a framework to convert a natural language check into a well-formed assertion in the context of the target design which a designer can review and edit if needed. The framework also provides for formally checking the generated assertion, feeding back results to the designer for further refinement. The intent looks similar to prompt refinement in prompt-based chat models, augmented by verification.

As a very preliminary paper our goal this month is not to review and critique method and results but rather to stimulate discussion on the general merits of such an approach.

Paul’s view

A short paper this month – more of an appetizer than a main course, but on a Michelin star topic: using LLMs to translate specs written in plain English into SystemVerilog assertions (SVA). The paper builds on earlier work by the authors using LLMs to translate specs in plain English into linear temporal logic (LTL), a very similar problem, see here.

The authors leverage a technique called “few shot learning” where an existing commercial LLM such as GPT or Codex is asked to do the LTL/SVA translation, but with some additional coaching in its input prompt: rather than asking the LLM to “translate the following sentence into temporal logic” the authors ask the LLM to “translate the following sentence into temporal logic, and remember that…” followed by a bunch of text that explains temporal logic syntax and gives some example translations of sentences into temporal logic.

The authors’ key contribution is to come up with the magic text to go after “remember that…”. A secondary contribution is a nice user interface to allow a human to supervise the translation process. This interface presents the user with a dialog box showing suggested translations of sub-clauses in the sentence and asks the user to confirm these sub-translations before building up the final overall temporal logic expression for the entire sentence. Multiple candidate sub-translations can be presented in a drop-down menu, with a confidence score for each candidate.
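
For readers unfamiliar with few-shot prompting, the sketch below shows the general pattern: a syntax primer plus worked examples is prepended to the new sentence before querying the LLM. The primer text, example translation, and signal names here are stand-ins, not the authors' actual "magic text", which is documented in the paper.

```python
# Bare-bones illustration of the few-shot prompting pattern described above.
# Everything in PRIMER and the example call is illustrative placeholder content.

PRIMER = (
    "Translate English hardware checks into SystemVerilog Assertions (SVA).\n"
    "Remember that: '|->' is overlapping implication, '##N' delays N cycles,\n"
    "and properties are sampled on a clock with 'disable iff' for reset.\n\n"
    "Example:\n"
    "English: grant must follow request within 3 cycles\n"
    "SVA: assert property (@(posedge clk) disable iff (rst)\n"
    "       req |-> ##[1:3] gnt);\n"
)

def build_prompt(spec_sentence: str, circuit_meta: dict) -> str:
    """Compose primer + circuit meta information + the sentence to translate."""
    signals = ", ".join(circuit_meta["signals"])
    return (f"{PRIMER}\n"
            f"Circuit signals available: {signals}\n"
            f"English: {spec_sentence}\nSVA:")

prompt = build_prompt(
    "Unless reset, the output signal is assigned to the last input signal",
    {"signals": ["clk", "rst", "in_sig", "out_sig"]},
)
# The prompt would then be sent to an LLM (e.g. GPT-4) and the returned SVA
# handed to a model checker such as JasperGold for validation.
print(prompt)
```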

There are no results presented in the SVA paper, but the LTL paper shows results on 36 “challenging” translations provided by industry experts. Prior art correctly translates only 2 of the 36, where the authors’ approach succeeds on 21 of 36 without any user supervision and on 31 of 36 with user supervision. Nice!

Raúl’s view

The proposed framework, nl2sva, “ultimately aims to utilize current advances in deep learning to improve verification productivity by automatically providing circuit-aware translations to SystemVerilog Assertions (SVA)”. It does so by extending a recently released tool, nl2spec, to interactively translate natural language to temporal logic (SVA is based on temporal logic). The framework requires an LLM (they use GPT-4) and a model checker (they use JasperGold). The LLM reads the circuit in SystemVerilog and the assertions in natural language, and generates SVAs plus circuit meta information (e.g., module names, input and output wire names) and sub-translations in natural language. These are presented to the developer for review, and the SVAs are run through a model checker for validation. The authors describe how the framework is trained (few-shot prompting) and include two complete toy examples (Verilog listings), showing a correctly generated SVA for each of them (“Unless reset, the output signal is assigned to the last input signal”).

As pointed out, this is preliminary work. Using AI to generate assertions seems a worthy enterprise. It is a hard problem in the sense that it involves translation; we briefly hit translation back in July when reviewing Automatic Code Review using AI; translation is a hard problem to score, often done with the BLEU score (bilingual evaluation understudy which evaluates quality of machine translation on a scale of 0-100) involving human evaluation. The authors use GPT-4 stating that they have “up to 176 billion parameters” and “supports up to 8192 tokens of context memory”, which is limiting. Using GPT-5 (1.76 trillion parameters, not clear why they quote only 8192 tokens) will remove these limits.

In any case, this is a really easy paper, with a paragraph long introduction to both SVA and to LLMs, with two complete toy examples – fun to read!

Also Read:

Cadence Tensilica Spins Next Upgrade to LX Architecture

Inference Efficiency in Performance, Power, Area, Scalability

Mixed Signal Verification is Growing in Importance