
Prioritize Short Isolation for Faster SoC Verification

by Ritu Walia on 10-17-2024 at 10:00 am


Improve productivity by shifting left LVS
In modern semiconductor design, technology nodes continue to shrink and the complexity and size of circuits increase, making layout versus schematic (LVS) verification more challenging. One of the most critical errors designers encounter during LVS runs are shorted nets. Identifying and isolating these shorts early in the process is crucial to meeting deadlines and ensuring a high-quality design. However, isolating shorts in early design cycles can be a time-consuming and resource-intensive task because the design is “dirty” with numerous shorted nets.

To tackle this challenge, designers need an LVS solution for rapid short isolation that enhances productivity by addressing shorts early in the design flow. This article explores the key difficulties designers face with short isolation, and a novel solution that integrates LVS runs with a debug environment to make the verification process faster and more efficient.

The challenge of shorted nets in LVS verification

Design size, component density, and advanced nodes like 5 nm and below all contribute to the growing complexity of SoC designs. With layouts containing billions of transistors, connectivity issues like shorted nets can proliferate. Shorts can occur between power/ground networks or signal lines and may result from misalignment, incorrect placement, or simply the close proximity of electrical connections in densely packed areas of the chip.

As shown in industry conference surveys, the number of shorts in “dirty” early-stage designs has skyrocketed as process nodes have shrunk, leading to an increased need for comprehensive short isolation (Figure 1). While earlier nodes like 7 nm might see a manageable number of shorts, modern 5 nm designs can produce over 15,000 short paths that need to be investigated, analyzed, and corrected. Identifying the specific short paths causing the issue is not just difficult—it’s overwhelming.

Figure 1. Short path analysis statistics from industry conference surveys show the increase in shorts across process nodes.

Traditional LVS verification and short debugging approaches require designers to switch between a graphical user interface (GUI) for short inspection and a command-line environment for LVS reruns, resulting in longer design cycle times and less efficient workflows. Furthermore, manually inspecting and debugging each short path is an incredibly tedious process, especially when designers need to pinpoint shorts in hierarchical designs where components and interconnects are densely packed across multiple layers.

Debugging shorts: common pitfalls

The key challenges designers face during short isolation and debugging include:

  • Locating the exact short path: Each short is composed of multiple paths, and identifying the specific path responsible for the short can be time-consuming.
  • Extended LVS cycle times: Running a full LVS verification after each short fix significantly lengthens the process.
  • Tedious visual inspection: Manually inspecting and analyzing short paths across the entire chip layout can take several days, especially in large, complex designs.

With these challenges in mind, having an efficient short isolation solution can drastically improve the speed and accuracy of the LVS process.

A comprehensive solution for interactive short isolation

To address these challenges, Siemens EDA has developed the Calibre RVE Interactive Short Isolation (ISI) flow, which integrates short analysis directly into the Calibre RVE environment. This solution allows designers to quickly identify and debug shorts without leaving the familiar layout viewing and debugging interface.

The flow lets designers visualize short paths in their design layouts after running LVS verification. With the addition of the “SI” keyword (short isolation) in the Mask SVDB Directory statement of the rule file, designers can isolate and inspect shorts in real time. The flow automatically highlights shorted segments in the layout and organizes them in an intuitive tree view, making it easier to manage and debug shorts (Figure 2).
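Based on the article's description, the SI keyword is added to the Mask SVDB Directory statement in the SVRF rule file. The placement below is illustrative only; the exact statement syntax and option order should be confirmed against the Calibre documentation.

```
// Illustrative sketch -- confirm exact syntax in the Calibre SVRF manual
MASK SVDB DIRECTORY "svdb" QUERY SI
```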

Figure 2. Older Summary View of the shorted paths for each texted-short in Calibre RVE (top). Updated comprehensive Summary View of the shorted paths for each texted-short in Calibre RVE (bottom).

The ability to simulate short fixes without making actual changes to the design layout is a key feature. Designers can perform virtual fixes, verify them, and save the results in a separate database. This means they can debug multiple short paths simultaneously, reducing the overall LVS cycle time and minimizing disruptions to their workflow.

Benefits of short isolation in early-stage design

By running partial LVS checks targeted at specific nets, designers can quickly isolate and fix shorts on power/ground or signal nets, significantly reducing the number of shorts before running a full LVS signoff extraction.

With the integration of LVS runs into a graphical debug environment, designers no longer need to switch between different tools for verification and debugging. Instead, they can invoke LVS runs directly from the debug GUI (Figure 3). This push-button feature allows for quick, targeted LVS runs, with options for multithreading and distributed processing to further accelerate runtimes.

Figure 3. Designers can quickly prioritize and fix critical shorts using the Calibre RVE background verify shorts functionality.

This short isolation flow helps designers simulate short fixes and verify them without requiring full-chip LVS runs. This targeted, parallel processing reduces overall verification time, allows for early identification of critical issues, and helps design teams stay on schedule.

Boosting designer productivity with integrated short isolation

The tight integration between Calibre tools enables a much more efficient LVS process by providing a unified toolset for short isolation, debugging, and verification. Designers can now:

  • Run targeted partial LVS checks for shorts without waiting for full-chip LVS runs.
  • Perform interactive short isolation and virtual fixes in the same environment.
  • Automatically update results in the debug interface, eliminating the need for manual context switching.
  • Leverage parallel processing and multithreading options to speed up debugging.

This seamless flow significantly reduces the time spent on short isolation and debugging, enabling designers to focus on optimizing other aspects of their design.

Conclusion: Faster SoC verification with early short isolation

As SoC designs become larger and more complex, early-stage short isolation and verification are critical to keeping projects on schedule. By allowing designers to simulate short fixes and verify them in parallel, this flow helps reduce the number of full LVS iterations required, leading to shorter design cycles and improved productivity. With the combined LVS and debug environments, design teams can tackle the most critical LVS violations early, ensuring higher-quality designs and faster time to market.

To learn more about how Calibre nmLVS Recon can streamline your verification process, download the full technical paper here: Siemens EDA Technical Paper.

Ritu Walia is a product engineer in the Calibre Design Solutions division of Siemens EDA, a part of Siemens Digital Industries Software. Her primary focus is the development of Calibre integration and interface tools and technologies.

Also Read:

Navigating Resistance Extraction for the Unconventional Shapes of Modern IC Designs

SystemVerilog Functional Coverage for Real Datatypes

Automating Reset Domain Crossing (RDC) Verification with Advanced Data Analytics


The Perils of Aging, From a Semiconductor Device Perspective

by Mike Gianfagna on 10-17-2024 at 6:00 am


We’re all aware of the challenges aging brings. I find the older I get, the more in touch I feel with those challenges. I still find it to be true that aging beats the alternative; I think most would agree. Human factors aside, I’d like to discuss the aging process as applied to the realm of semiconductor device physics. Here, as with humans, there are degradations to be reckoned with. But, unlike much of human aging, the forces causing the problems can be better understood and even avoided. A recent high-profile news story concerns issues with the 13th and 14th generations of the Intel ‘Raptor Lake’ core processors. After a fair amount of debugging and analysis, the observed problems highlight the perils of aging from a semiconductor device perspective. Let’s look at what happened, and what it means going forward.

What Went Wrong?

Back in August, PC Magazine reported that unstable 13th and 14th Gen Intel Core processors are raising lots of concerns for desktop owners. The article went on to say that:

An unusual number of the company’s latest 14th Gen “Raptor Lake Refresh” chips, which debuted late in 2023, are proving to be prone to crashes and blue screens. Intel’s older 13th Gen “Raptor Lake” processors are, similarly, showing the same distressing traits.

What was particularly vexing was the incidence of stability issues so early in the life of these chips, the fact that not everyone was seeing the problems, and that the problems were not always of the same form or frequency. News such as this about a part that sees widespread use can cause a lot of angst.

Root Cause Analysis

After much analysis, research, and code updates, Intel has homed in on the root cause and developed a plan. Dubbed the Vmin Shift Instability issue, the problem was traced to a clock tree circuit within the IA core that is particularly vulnerable to reliability aging under elevated voltage and temperature. These conditions can lead to a duty cycle shift of the clocks and the observed system instability.

Intel has identified four operating scenarios that lead to the observed issues. In a recent communication from the company, details of these four scenarios and mitigation plans were published. The company is releasing updated documentation, microcode, and BIOS to modify the clock/supply voltage behavior, so the rapid aging behavior is mitigated. Intel is working with its partners to roll out the relevant BIOS updates to the public.

This issue manifested in the desktop version of the part. Intel also affirmed that both the Intel Core 13th and 14th Gen mobile processors and future client product families – including the codename Lunar Lake and Arrow Lake families – are unaffected by the Vmin Shift Instability issue.

These fixes are taking a substantial amount of resources, both to mitigate the problem and deal with the impact in the market.

How to Avoid Problems Like This

This is a highly visible example of what happens when clock trees go out of specification, particularly when N- and P-channel devices age differently, leading to asymmetrical changes in clock signals. As these performance shifts accumulate, circuits stop working reliably or stop working altogether.

The solution to such aging degradation lies in the use of precise, high-resolution analysis tools throughout the design process. It turns out there is a company that targets the identification and mitigation of clock anomalies. Infinisim’s ClockEdge® solution offers a powerful approach, simulating how clock signals degrade over time due to factors like Negative Bias Temperature Instability (NBTI) and Hot Carrier Injection (HCI). By performing comprehensive aging simulations across entire clock domains over multiple PVT corners, Infinisim’s technology allows designers to predict signal degradation and mitigate its impact, effectively extending the operational lifespan of high-performance clocks.
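Infinisim’s models are proprietary, but the general shape of aging analysis can be illustrated with the textbook power-law NBTI model: threshold voltage shift grows with stress time, temperature, and overdrive voltage. All coefficients below are illustrative assumptions, not Infinisim’s values.

```python
import math

def nbti_vth_shift(t_seconds, temp_k, v_stress,
                   a=1e-3, n=0.16, ea_ev=0.05, gamma=2.0, v_ref=1.0):
    """Textbook power-law NBTI model (illustrative coefficients only):
    delta-Vth increases with stress time (t^n), temperature (Arrhenius
    term), and overdrive voltage (power law in V/Vref)."""
    k_b = 8.617e-5  # Boltzmann constant in eV/K
    return (a * (v_stress / v_ref) ** gamma
              * math.exp(-ea_ev / (k_b * temp_k))
              * t_seconds ** n)

# Shift after 1 year vs. 5 years of stress at 125 C and 1.2 V:
year = 3.15e7  # seconds
shift_1y = nbti_vth_shift(year, 398.0, 1.2)
shift_5y = nbti_vth_shift(5 * year, 398.0, 1.2)
```

The asymmetry the article describes arises because N- and P-channel devices see different effective stress, so the rising and falling edges of a clock drift apart, shifting the duty cycle.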

My gut tells me Intel’s problems could have been tamed before they reached the field with tools like this. By identifying potential clock aging failures early, Infinisim’s solutions reduce the risk of expensive field failures and costly silicon re-spins. Their proven track record demonstrates how they enable customers to achieve exceptional design robustness before tape-out, providing fast, accurate analysis that helps optimize performance without compromising reliability.

You can learn more about clock aging and Infinisim’s approach in this blog on SemiWiki. And that’s a look at the perils of aging from a semiconductor device perspective.

Also Read:

A Closer Look at Conquering Clock Jitter with Infinisim

Afraid of mesh-based clock topologies? You should be

Power Supply Induced Jitter on Clocks: Risks, Mitigation, and the Importance of Accurate Verification


ASML surprise not a surprise (to us)- Bifurcation- Stocks reset- China headfake

by Robert Maire on 10-16-2024 at 10:00 am

  • Investors finally realize the upcycle isn’t as strong as stocks indicated
  • Industry Bifurcation between AI & rest of industry continues
  • China spending risk/overhang finally kicks in
  • AI is super strong, majority of chips remain weak- Invest accordingly
ASML simply states chip industry reality that investors have long ignored

Chip stocks and most specifically ASML got crushed yesterday when ASML mistakenly pre-announced their quarter a day early.

Quarter results were fine and in line as expected but orders, at $2.6B, were absolutely horrible, coming in at less than half of expectation.

As a result, ASML’s expectations for 2025 will come down. Q4 is in good shape and will include 2 High NA systems (Intel), but the longer term, 2025 and beyond, remains the issue.

Basically, ASML pointed to a more gradual recovery of the semiconductor industry: while AI remains very strong, the rest (the majority) of the industry remains weak and has gotten weaker in recent months (Intel & Samsung).

This should certainly NOT be a surprise, as we have been talking extensively, for a year, about a slower recovery than past recoveries. Both trailing edge, non-AI logic and memory (non-HBM) have been weak, and it’s difficult, if not impossible, to have a full recovery based on the strength of a small part of the overall chip market, AI.

We told you so….and so did the chip companies themselves

Aside from the fact that we have been warning about overdone chip stocks for quite some time, if we go back to the equipment companies’ earnings reports themselves, we have not seen a huge uptick in numbers nor a pronouncement by managements that the industry is off to the races again.

Instead we have heard only slow improvements and modest investments (with the exception of China).

If you look at the revenues of equipment companies like AMAT, KLAC and LRCX you will note that there has not been a sharp uptick in revenues as in cycles past even though the stock valuations have seen a sharp uptick reflective of a stronger upturn than has actually been happening….so the disconnect between reality and the stocks has widened…..

Mistakenly taking AI as an indicator of the broader, entire chip market

The mistake that many investors and inexperienced analysts have made is to take the huge, rip roaring success of AI and translate it into an indicator of the entirety of the chip market….Wrong! We have reminded investors many times that HBM memory is a single digit percentage of the overall memory market and Nvidia AI chips are at the very bleeding edge of overall chip capacity; neither one nor both taken together is an indicator of chip market health.

Yet many investors and analysts have mistakenly gotten caught up in the wave of AI excitement and have dragged the valuation of the entire sector up along with it.

Bifurcation of chip industry into haves (AI) and have nots (everything else)

We have spoken extensively about the bifurcation of the chip industry into AI and everything else. This bifurcation will clearly continue as Nvidia is sold out for a year while regular memory, trailing edge logic and most of the rest of the chip industry remains weak.

The fact is that AI will remain on fire while the rest of the industry languishes….investors and analysts have a hard time wrapping their heads around that dichotomy.

It is also equally important to distinguish between AI stocks and the rest of the industry, as we saw Nvidia’s shares get sold off, in the selling frenzy, along with other chip stocks even though they remain the standout.

China head fake/risk finally coming in to play

ASML mentioned on their call that China sales will return to a more “normalized” rate. Obviously China has been spending as fast as possible to avoid possible sanctions and clearly spending more than rationally makes sense.

Anyone who thought China would continue to spend at the current pace is naive or doesn’t understand the market.

The spend can’t go on forever, and even if China wanted to keep trying to buy up all the scanners it could, the US will eventually get sanctions in place.

In essence, the head fake is that many investors/analysts mistakenly thought that Chinese spend was real and would continue as is and did not discount for an eventual slowing.

Rest of chip equipment will follow ASML

ASML remains the monopolistic engine that drives the train of semiconductor equipment stocks. They typically see things first as scanners have a long lead time and need to be ordered long before dep and etch which is more of a turns business. If China slows ordering litho scanners, they certainly don’t need dep (AMAT), etch (LRCX) and yield management (KLAC).

So the sell off in the chip equipment industry overall seems justified.

Obviously Samsung & Intel should have been a stronger warning to investors in the group but most chose to ignore those key warnings.

The Stocks

We have said that the stocks were overvalued for a while. We have repeatedly said that AI was essentially the sole and only driver and the rest of the industry was at best, weak.

The stocks have been trading at the high end of the overall PE range seemingly unbothered by reality and egged on by the AI tidal wave.

We think the reset we saw yesterday should make investors much more selective and not just buy any chip stock that has a less than ridiculous valuation.

We maintain our positive view on Nvidia and feel that it has essentially zero to do with yesterday’s sell off, as it is in a much different universe.

TSMC is doing very well as Nvidia’s essentially sole supplier, but slightly more tempered as its trailing edge capacity is weaker. However, we would point out that the vast majority of TSMC’s profitability comes from leading edge customers such as Nvidia and Apple; trailing edge does little more than squeeze additional revenue out of already depreciated fabs.

ASML is still a great, monopolistic, bleeding edge company, which leads all other semiconductor equipment companies in importance. Clearly the shine has come off as expectations are reduced and the China risk is more apparent than before. The valuation was overheated and will now more accurately reflect the reality of the industry.

Going forward we think that investors have to be much more selective between the haves and have nots. Continue to focus on AI (Nvidia) and HBM but Do Not assume it drives the entire chip market. Look for those who benefit more directly from AI and related trends.

We would remind investors that we remain fearful of the eventual chip capacity coming online in China from all the huge numbers of tools they have been buying. When this capacity eventually comes online, it could crush what is an already weak trailing edge chip foundry market. So far that has not yet happened, but it’s a matter of time.

We hope that the large dose of reality yesterday, resets investors discipline in investing in the sector.

“Reality, What a concept!”
Robin Williams (RIP)

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.

We have been covering the space longer and been involved with more transactions than any other financial professional in the space.

We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.

We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

SPIE Monterey- ASML, INTC – High NA Readiness- Bigger Masks/Smaller Features

Samsung Adds to Bad Semiconductor News

AMAT Underwhelms- China & GM & ICAP Headwinds- AI is only Driver- Slow Recovery


Mobile LLMs Aren’t Just About Technology. Realistic Use Cases Matter

by Bernard Murphy on 10-16-2024 at 6:00 am


Arm has been making noise about running large language models (LLMs) on mobile platforms. At first glance that sounds wildly impractical, other than Arm acting as an intermediary between a phone and a cloud-based LLM. However, Arm has partnered with Meta to run Llama 3.2 on-device or in the cloud, apparently seamlessly. Running in the cloud is not surprising, but running on-device needed more explaining, so I talked to Ian Bratt (VP of ML Technology and Fellow) at Arm to dig deeper.

Start with what’s under the hood

I think we’re conditioned now to expect every new (hardware) announcement to signal a new type of accelerator, but that is not what Arm is claiming. First, they are starting from Llama 3.2 lightweight models built for edge deployment, with not just a smaller parameter count but also pruning (zeroing parameters that have low impact on result accuracy) and something Meta calls knowledge distillation:

… uses a larger network to impart knowledge on a smaller network, with the idea that a smaller model can achieve better performance using a teacher than it could from scratch.

The Arm demonstration platform uses 4 CPU cores on a middle-of-the-road phone. Let me repeat that – 4 CPUs, no added NPU. Arm then put a lot of (repeatable) work into optimization. Starting from a trained model, they heavily compress the weights from Bfloat16 down to 4-bit. They compile operations through their hand-optimized Kleidi libraries and run on CPUs hosting the ISA extensions for matrix operations they have had in place for years.
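Arm has not published the exact compression recipe, but the general idea of squeezing Bfloat16 weights down to 4 bits can be sketched as group-wise quantization with a per-group scale. This is a minimal illustration; the group size, symmetric rounding, and scale choice are all assumptions, not Arm’s scheme.

```python
import numpy as np

def quantize_int4(weights, group_size=32):
    """Group-wise symmetric 4-bit quantization: each group of weights
    shares one float scale; values map to integers in [-8, 7]."""
    w = weights.reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0                       # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale, shape):
    """Recover approximate float weights from int4 codes and scales."""
    return (q * scale).reshape(shape).astype(np.float32)

np.random.seed(0)
w = np.random.randn(4, 64).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s, w.shape)
err = np.abs(w - w_hat).max()   # per-weight error bounded by scale / 2
```

The payoff is a roughly 4x smaller memory footprint and far less memory bandwidth per token, which is what makes CPU-only inference plausible on a phone.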

No magic other than aggressive optimization, in a way that should be repeatable across applications. Ian showed me a video of a demo they ran recently for a chatbot running on that same phone. He typed in “Suggest some birthday card greetings” and it came back with suggestions in under a second. All running on those Arm CPU cores.

Of course this is just running inference (repeated next token prediction) based on a prompt. It’s not aiming to support training. It won’t be as fast as a dedicated NPU. It’s not aiming to run big Llama models on-device, though apparently it can seamlessly interoperate with cloud-based deployments to handle such cases. And it will sacrifice some accuracy through aggressive compression. But how important are those limitations?
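The “repeated next token prediction” loop itself is simple. A minimal greedy-decoding sketch, where the `logits_fn` callable is a stand-in for the compressed on-device model (not Arm’s API):

```python
import numpy as np

def greedy_decode(logits_fn, prompt_ids, max_new_tokens=8, eos_id=None):
    """Minimal greedy next-token loop: repeatedly ask the model for
    logits over the vocabulary and append the highest-scoring token."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        next_id = int(np.argmax(logits_fn(ids)))
        ids.append(next_id)
        if eos_id is not None and next_id == eos_id:
            break
    return ids

# Toy "model" over a 10-token vocabulary: always predicts last token + 1.
toy = lambda ids: np.eye(10)[(ids[-1] + 1) % 10]
out = greedy_decode(toy, [3], max_new_tokens=4)   # → [3, 4, 5, 6, 7]
```

Each iteration is dominated by matrix-vector work over the weights, which is exactly what the Kleidi libraries and matrix ISA extensions accelerate on the CPU.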

The larger question in mobile AI

We’ve seen unbounded expectations in what AI might be able to do, chased by innovation in foundation models from CNNs to DNNs to transformers to even newer fronts, and innovation in hardware to accelerate those models in the cloud and mobile applications.

While now-conventional neural nets have found real applications in automotive, building security, and other domains, LLM applications in mobile are still looking for a winner. Bigger, faster, better is great in principle but only if it is useful. Maybe it is time for the pendulum to swing from performance to utility, and to explore first, at relatively low cost, what new features will attract growth.

Adding an AI accelerator to a design adds cost, power drain and complexity to system design and support. Arm’s argument for sticking to familiar CPU-based platforms for relatively modest inference tasks (with a path to cloud-based inference if needed) sounds like a sensible low-risk option until we consumers figure out what we find appealing as killer apps.

Not all edge devices are phones, so there will still be opportunity for NPUs at the edge. Predictive maintenance support for machines, audio personalization in earbuds, voice-based control for systems lacking a control surface, are examples where product innovators will start with a real world need in consumer, industry, office, hospital applications and then need to figure out how to apply AI to that need.

Interesting twist to the mobile AI story. You can learn more from Ian’s blog.


Electron Beam Probing (EBP): The New Sheriff in Town for Security Analysis of Sub-7nm ICs with Backside PDN

by Navid Asadizanjani on 10-15-2024 at 10:00 am


Outsourcing the fabrication of ICs makes them vulnerable to hardware security threats. Threats such as reverse engineering, insertion of hardware Trojans, and backside contactless probing to steal cryptographic information can cause financial loss to IP owners and security risks to the systems in which these ICs are deployed.

Physical inspection techniques have become more advanced for debugging sub-7 nm advanced technology node systems on chip (SoCs) and heterogeneous integration (HI) packaging for failure analysis. However, these advanced inspection techniques developed for debugging can also be used maliciously by an adversary to uncover intellectual property, keys, and memory content.

Previous research has demonstrated that these physical inspection techniques have invasive, non-invasive, and semi-invasive capabilities to perform hardware-level attacks such as reverse engineering, probing, or circuit editing a chip with the intent to change or decipher its content. These techniques typically perform the attack by scanning and reconstructing netlists or by manipulating the chip circuitry.

Electron-beam probing (EBP) has emerged as a powerful method, as shown in Figure 1, offering 20x better spatial resolution than optical probing, and it applies to sub-7nm flip-chips and advanced 3D architecture systems. In this work, two semi-invasive physical inspection methods, optical probing and e-beam probing, are discussed and compared in their capabilities at sub-7nm technology nodes for failure analysis and hardware assurance.


Nowadays, EBP takes advantage of the improved beam resolution of modern SEMs, enabling analysis of FinFETs with nanometer resolution and allowing for scaling into future process generations. The results show that the sample preparation process for EBP, such as bulk silicon removal and shallow trench isolation (STI) exposure, has little influence on circuit performance, which makes EBP suitable for semiconductor failure analysis and isolation. Logic states can be read from both memory cell devices and metal lines.

Researchers have successfully performed EBP on active transistors at advanced technology nodes. However, adversaries can take advantage of EBP to attack sub-7nm devices by extracting valuable information. An adversary needs only traditional failure analysis de-processing tools and an SEM with electrical feed-throughs to prepare and complete unauthorized data extraction in less than a few days. Given these results, the e-beam approach has proven to be an answer to a long-standing industry need, and it can continue to inspire ambitious goals, such as achieving a footprint as small as 1 nm. With that said, this paper aims to answer the most fundamental questions regarding every aspect of EBP and to aid in paving the future roadmap of the semiconductor inspection trade.

With the motivation of inspecting lower technology nodes, this paper aims to highlight the importance of and need for EBP, mainly focusing on nodes at 7 nm and below. The focus is on clarifying how all the conventional physical inspection techniques used in the IC segment to date, including optical inspection, fail to meet the resolution required at lower nanometer nodes. Furthermore, backside EBP offers all the advantages frontside EBP did in the 90s: fast signal acquisition, linear VDD signal scaling, and superior signal-to-noise ratio. Our work delves into the principles behind EBP, its capabilities, the challenges for this technique as shown in Figure 2, and its potential applications in failure analysis and potential attacks. It highlights the need to develop effective countermeasures to protect sensitive information on advanced node technologies.


Credits:

Navid Asadizanjani
Associate Professor, Alan Hastings Faculty Fellow, Director, Security and Assurance lab (virtual walk), Associate director, FSIMEST, Department of Electrical and Computer Engineering, Department of Material Science and Engineering, University of Florida

Nitin Vershney
Research Engineer, Florida Institute for Cybersecurity Research

Also Read:

Navigating Frontier Technology Trends in 2024

PQShield Builds the First-Ever Post-Quantum Cryptography Chip

Facing challenges of implementing Post-Quantum Cryptography


Navigating Resistance Extraction for the Unconventional Shapes of Modern IC Designs

by Nada Tarek on 10-15-2024 at 6:00 am


The semiconductor industry is experiencing rapid evolution, driven by the proliferation of IoT applications, image sensors, photonics, MEMS applications, 3DIC and other emerging technologies. This growth has dramatically increased the complexity of integrated circuit (IC) design. One aspect of this complexity is the use of unconventional, non-Manhattan layout structures to achieve optimal functionality and performance.

Non-Manhattan routing examples

3DICs—As technology advances and the limits of Moore’s Law approach, 3DICs allow designers to decompose architecture into smaller chiplets. These are integrated into a single package, offering higher integration density, faster interconnect speeds, lower power consumption, higher bandwidth data movement, better thermal management, and overall reduced costs. A key feature of 3DIC design is the use of curvilinear shapes for non-Manhattan routing in interconnects and redistribution layers (RDL) among through-silicon vias (TSVs) and micro-bumps, which enhances flexibility and performance but poses new challenges for design tasks like resistance extraction.

Image sensors and MEMS designs—Image sensors, utilized in devices like digital cameras, smartphones, and surveillance systems, demand high image recognition capabilities. Designers incorporate wide polygons in layout designs to achieve this. These wide photodiode polygons gather more light, resulting in high-resolution, low-noise images with high dynamic range and low power consumption. MEMS designs utilize curvilinear shapes and unconventional geometries for a broad range of applications in mechanical, optical, magnetic, fluidic, and biomedical fields. An example of a MEMS layout is shown in figure 1.

Figure 1. MEMS design with unconventional structures.

However, the use of wide polygons in image sensors and complex structures in MEMS introduces significant challenges for resistance extraction, which is an important part of ensuring the design’s reliability. Traditional field solvers, though effective, often require long run times and are unsuitable for full-chip verification due to computational complexity. Designers need to leverage newer fracturing techniques that have emerged to improve accuracy and efficiency in resistance extraction for different applications.

Resistance extraction for design reliability

Resistance extraction is crucial for IC design physical verification. Accurate resistance modeling is essential for predicting circuit behavior through simulation, ensuring the overall reliability and accuracy of circuit performance. As interconnect sizes decrease, the impact of parasitic resistance becomes more pronounced.

Accurate resistance extraction ensures the reliability, performance, and functionality of ICs across various downstream flows, including timing analysis, power analysis, electromigration, signal integrity analysis, thermal analysis, and noise analysis. Timing analysis, for instance, relies on accurate parasitic resistance extraction to estimate signal delays, identify critical paths, and ensure proper timing closure.
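As a minimal illustration of this dependency, the classic first-order Elmore model estimates net delay directly from extracted parasitics: each stage of an RC ladder contributes its resistance times the total capacitance downstream of it. The function name and values below are hypothetical, for illustration only:

```python
# Illustrative sketch: first-order (Elmore) delay for an RC ladder,
# showing why extracted resistance feeds directly into timing analysis.
def elmore_delay(rc_stages):
    """rc_stages: list of (resistance, capacitance) pairs, driver to load."""
    delay = 0.0
    for i, (r, _) in enumerate(rc_stages):
        # Each stage's resistance charges all capacitance downstream of it.
        downstream_c = sum(c for _, c in rc_stages[i:])
        delay += r * downstream_c
    return delay

# Three identical 10-ohm / 1-fF stages:
d = elmore_delay([(10.0, 1e-15)] * 3)   # 10*(3+2+1)*1e-15 = 6e-14 s
```

An error in any extracted resistance value propagates linearly into every such delay estimate, which is why extraction accuracy matters for timing closure.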

Parasitic resistance extraction faces significant challenges, such as increasing design complexity, aggressive shrinking in feature sizes, and the presence of non-standard geometries requiring special handling through advanced fracturing techniques.

Evolving fracturing techniques for accurate resistance extraction

For precise resistance extraction, it is essential to divide geometries into smaller fragments, allowing detailed analysis and accurate estimation of parasitic resistances. This process, known as fracturing, breaks down complex geometries into smaller parts. The extraction process captures the complexities and details of individual components and interconnects, leading to a more precise determination of parasitic resistances. These techniques include 1D fracturing, 2D fracturing, and more advanced methods designed to handle curvilinear shapes.

  • 1D fracturing:

1D fracturing involves dividing a route into multiple fractures in one dimension based on the current direction. Resistance is calculated for each fracture from its length, width, and the sheet resistance of the layer. While efficient for standard geometries, 1D fracturing may introduce inaccuracies for shapes with non-uniform current flow or irregular cross-section profiles (figure 2).

Figure 2. 1D resistance extraction inaccuracy for irregular section profile.

Electronic design automation (EDA) vendors address non-uniform current distribution in 1D fracturing by using more sophisticated models and algorithms, improving resistance accuracy even in these cases.
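The 1D scheme described above can be sketched in a few lines: each fragment contributes R = Rs·L/W (Rs being the layer's sheet resistance in ohms/square), and fragments along the current direction add in series. The function and values are hypothetical, for illustration only:

```python
# Hypothetical sketch of 1D fracturing: a route is cut into fragments along
# the current direction, and each fragment contributes R = Rs * L / W.
def resistance_1d(segments, sheet_res):
    """segments: list of (length, width) tuples; series sum of fragments."""
    return sum(sheet_res * length / width for length, width in segments)

# A tapered route approximated by three fragments of decreasing width:
r = resistance_1d([(2.0, 1.0), (2.0, 0.5), (2.0, 0.25)], sheet_res=0.1)
# 0.1*(2/1.0 + 2/0.5 + 2/0.25) = 1.4 ohms
```

The taper here is exactly the kind of irregular profile where a coarse 1D cut loses accuracy: the true current crowding at the width steps is not captured, only the average per-fragment geometry.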

  • 2D fracturing:

2D fracturing handles planar structures and slotted metals by fracturing shapes into smaller polygons that cover the planar regions. This enables accurate parasitic resistance extraction for planar and slotted structures.

For slotted conductors, 2D fracturing creates a mesh of resistors around the slots, offering more accurate resistance extraction than simple 1D fracturing (figure 3).

Figure 3. 2D fracturing for slotted metal.
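Once the shape has been meshed into a resistor network, point-to-point (P2P) resistance follows from standard nodal analysis: inject 1 A at one terminal, extract it at the other, and solve G·v = i. The sketch below is a hypothetical minimal version of that step, not any particular tool's implementation:

```python
import numpy as np

# Hypothetical sketch of the 2D-fracturing solve: the conductor is meshed
# into nodes joined by resistors, and P2P resistance is found by nodal
# analysis with 1 A injected at src and removed at dst.
def p2p_resistance(num_nodes, edges, src, dst):
    """edges: list of (node_a, node_b, resistance) for the mesh."""
    G = np.zeros((num_nodes, num_nodes))
    for a, b, res in edges:
        g = 1.0 / res
        G[a, a] += g; G[b, b] += g      # conductance stamps
        G[a, b] -= g; G[b, a] -= g
    i = np.zeros(num_nodes)
    i[src], i[dst] = 1.0, -1.0
    keep = [n for n in range(num_nodes) if n != dst]   # ground dst
    v = np.linalg.solve(G[np.ix_(keep, keep)], i[keep])
    return v[keep.index(src)]            # with v(dst)=0, v(src) = R_p2p

# Two equal 1-ohm paths around a "slot" between nodes 0 and 3:
edges = [(0, 1, 0.5), (1, 3, 0.5), (0, 2, 0.5), (2, 3, 0.5)]
r = p2p_resistance(4, edges, 0, 3)       # two 1-ohm branches in parallel
```

The parallel paths around the slot are exactly what 1D fracturing cannot represent, and why the mesh gives the more accurate answer for slotted metal.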

  • Advanced fracturing:

Curvilinear shapes are critical in applications such as analog and RF designs, antenna designs, MEMS devices, 3DIC and optical waveguides. Advanced fracturing techniques handle the complexity of these structures more effectively than traditional methods.

Advanced fracturing methods, such as fracturing in the direction of the polygon, break structures into smaller elements that align with the polygon boundaries, allowing more accurate resistance extraction. Figure 4 illustrates polygon-direction fracturing for a curved conductor.

Figure 4. Fracturing in the direction of the polygon for a curved conductor.
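A worked example of why boundary-aligned fracturing pays off for a curved conductor: for an annular arc carrying current circumferentially, slicing into concentric strips that follow the polygon gives parallel 1D fragments (each of length θ·r and width Δr), and refining the mesh converges to the closed-form R = Rs·θ / ln(r_outer/r_inner). The function below is a hypothetical sketch for illustration:

```python
import math

# Hypothetical sketch of fracturing a curved (annular) conductor along the
# polygon direction: concentric strips carry current in parallel; each strip
# is a 1D fragment, so its conductance is dr / (Rs * theta * r).
def curved_resistance(sheet_res, theta, r_inner, r_outer, n_strips):
    dr = (r_outer - r_inner) / n_strips
    g = 0.0
    for k in range(n_strips):
        r_mid = r_inner + (k + 0.5) * dr   # strip centerline radius
        g += dr / (sheet_res * theta * r_mid)
    return 1.0 / g

# Quarter-circle arc, Rs = 0.1 ohms/sq, radii 1.0 to 2.0:
approx = curved_resistance(0.1, math.pi / 2, 1.0, 2.0, 200)
exact = 0.1 * (math.pi / 2) / math.log(2.0)   # closed-form reference
```

A single rectangular 1D fragment for the same arc would use one average length and width and miss the 1/r current crowding toward the inner radius; the boundary-aligned strips capture it by construction.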

Best practices for next-generation extraction tools

To ensure accurate resistance extraction, consider design automation tools that integrate newer fracturing techniques that accommodate non-Manhattan routes.

A rule-based extraction engine applies heuristics to determine the current direction and then performs 1D fracturing. It generates nodes across the current direction and calculates the resistance between them, producing precise P2P resistance values for the uniform structures that are common in most designs.

To handle unconventional structures effectively, parasitic extraction tools should allow users to specify the application of 2D fracturing for a conductor layer under a specified marker layer. This approach applies a 2D mesh to the conductor layer to enable accurate P2P results for unconventional structures.

Additionally, an advanced flow should be introduced to handle curvilinear shapes and complex structures, eliminating the need for a field solver and maintaining reasonable runtimes. This approach ensures accurate resistance extraction for various design scenarios, providing reliable performance and efficiency in IC designs.

Summary

Accurate measurement of interconnect resistance is fundamental for ensuring circuit performance and reliability. Designers need advanced tools to handle curvilinear shapes and complex structures, enabling quick and accurate point-to-point resistance extraction for the full layout.

For more information download the full technical paper HERE.

Also Read:

SystemVerilog Functional Coverage for Real Datatypes

Automating Reset Domain Crossing (RDC) Verification with Advanced Data Analytics

Smarter, Faster LVS using Calibre nmLVS Recon



Alchip Technologies Sets Another Record
by Daniel Nenni on 10-14-2024 at 10:00 am


The ASIC business has always been a key enabler of the semiconductor industry but it is a difficult business. In my 40 years I have seen many ASIC companies come and go but I have never seen one like Alchip.

Alchip Technologies Ltd. was founded more than 20 years ago, about halfway through my career. I know one of the founders, a fiercely competitive man equally matched in intelligence and charm. The founding Alchip team came from Simplex Solutions, a design and verification company that was acquired by Cadence for $300M, a very big number in 2002.

Simplex had a close relationship with Sony (the Playstation 2 ASIC) and that relationship continued with Alchip. TSMC was also a key relationship for Alchip as an investor and manufacturing partner. TSMC at one time owned 20% of Alchip. At the same time (2002/2003) TSMC also invested in another ASIC provider Global Unichip (GUC) and is now the largest shareholder. As I mentioned, ASICs are a key semiconductor enabler and TSMC is a big reason why.

Bottom line: Alchip has passed the test of time with flying colors and is the one to watch for complex ASICs and SoCs, absolutely.

Here is their latest press release:

Taipei, Taiwan August 31, 2024 – Alchip Technologies’ Q2 2024 financial results set second-quarter records for revenue, operating income, and net income.

Second-quarter 2024 revenue notched a record $421 million, up 62.8% from Q2 2023 revenue of $258.5 million and up 26.2% over Q1 2024 revenue of $333.6 million. Operating income for the second quarter of 2024 was a record $51.2 million, representing an 80.2% increase over Q2 2023 operating income of $28.4 million, and a 32.8% increase over Q1 2024 operating income of $38.5 million.

At the same time, second-quarter 2024 net income set a record of $49.3 million, 105.8% higher than Q2 2023 net income of $23.9 million, and up 26.3% compared to Q1 2024 net income of $39 million. Earnings per share for Q2 2024 were NTD 20.1.

Commenting on the record results, Alchip President and CEO Johnny Shen cited revenue growth driven by higher-than-expected AI ASIC shipments to a major customer; in particular, shipments of AI ASICs to a North America service customer and the ramp-up of a 5nm AI accelerator for a North America IDM customer.

In total, AI and high-performance computing applications accounted for 91% of Q2 2024 revenue, with networking contributing 6%, niche applications adding 2%, and consumer uses accounting for the remaining 1%. 

On a process technology basis, revenue derived from designs at 7nm and more advanced nodes accounted for 96% of Q2 2024 revenue and 95% of first-half 2024 revenue. The North America region accounted for 78% of Q2 2024 revenue, while the Asia Pacific region contributed 8%, with Japan and other regions making up the remaining 14%.

About Alchip

Alchip Technologies Ltd., founded in 2003 and headquartered in Taipei, Taiwan, is a leading global provider of silicon design and production services for system companies developing complex and high-volume ASICs and SoCs. Alchip provides faster time-to-market and cost-effective solutions for SoC design at mainstream and advanced process technologies. Alchip has built its reputation as a high-performance ASIC leader through its advanced 2.5D/3DIC design, CoWoS/chiplet design, and manufacturing management. Customers include global leaders in AI, HPC/supercomputers, mobile phones, entertainment devices, networking equipment, and other electronic product categories. Alchip is listed on the Taiwan Stock Exchange (TWSE: 3661).

http://www.alchip.com

Also Read:

Collaboration Required to Maximize ASIC Chiplet Value

Synopsys and Alchip Accelerate IO & Memory Chiplet Design for Multi-Die Systems

The First Automotive Design ASIC Platform



Hearing Aids are Embracing Tech, and Cool
by Bernard Murphy on 10-14-2024 at 6:00 am


You could be forgiven for thinking of hearing aids as the low end of tech, targeted to a relatively small and elderly audience. Commercials seem unaware of advances in mobile consumer audio, and white-haired actors reinforce the intended audience. On the other hand, the World Health Organization has determined that at least 6% of people worldwide have at least some hearing loss, and it expects this number to grow to 9% by 2050. Even more remarkably, in the US 20% of people in their 20s are already reported to have noise-induced hearing loss.

Image courtesy of Jeff Miles

Given the rapid technology and consumerization advances we are seeing in consumer audio, particularly in earbuds, slow progress in hearing aids seems odd, since you would think these devices should share similar growth opportunities. A welcome shift, therefore, is a recent FDA announcement that the Apple AirPods Pro (with an added software feature) are authorized for over-the-counter sale as a hearing aid. If Apple sees meaningful business expansion in this direction, then I assume other earbud and hearing aid makers will follow quickly.

A different kind of consumerization

In fairness to traditional hearing aid makers there are reasons they didn’t jump on this opportunity immediately. These are partly cost and partly reliability – elderly users on a fixed income don’t want to pay thousands of dollars for devices which may not immediately fit their needs, and they certainly don’t want to upgrade every 2 years.

But more than that, the abundance of features we expect in regular consumer devices can be an active negative for non-tech-aware users, judging by high return rates on early attempts. Such users will certainly enjoy the benefits: increased clarity, TWS support, making phone calls. But don’t ask them to install apps, deal with a bewildering range of options, or figure out what steps to take when something doesn’t work.

This presents a challenge because configuring the aids to a user’s personal hearing needs is critical in making this technology effective. An audiologist can tune a configuration the first time, but hearing needs evolve over time and audiologist visits are inconvenient and perhaps expensive. Even more inconvenient when the aids stop working, especially if you wanted to save money by buying over the counter (OTC).

For this reason, there’s an active trend towards hearing aids that self-calibrate through AI, with minimal user feedback. In fact, AI-based model opportunities are sparking the growth of an early software ecosystem around these devices. According to Casey Ng (Audio Product Marketing Director at Cadence), this third-party software market has come as a surprise to some product builders who are not used to the demands of an ecosystem expecting increased memory, APIs, and developer support.

Calibration, or more popularly now personalization, has important value for the rest of us. Wherever we are on the hearing disability spectrum, our ears are unique enough that our hearing experience can benefit from a personalized configuration. This is good news for younger users who need help. Rather than suffer the stigma of wearing classic aids they can use their earbuds (almost a fashion statement these days) which they can optimize to their hearing needs.

Cadence Tensilica Audio DSPs are ready to help

In support of audio applications, the Cadence Tensilica HiFi DSP family offers a range of DSP options, from the ultra-low-power HiFi 1 DSP to the HiFi 5s, which is most interesting to me in the context of this hearing aid discussion. This platform is designed to manage TWS and active noise cancellation, as well as automatic speech recognition for voice-based commands, plus support for more general AI-based applications. Cadence also offers access to a rich supporting audio and voice ecosystem (300 packages), which should accelerate time to market for OEMs.

Among these, noise cancellation options go beyond conventional noise/echo suppression. HiFi 5s offers AI to distinguish and select speech over other audio sources, an important refinement for the hearing impaired.

Importantly, these advances (including AI) demand enhanced processing capability. Cadence’s power-efficient HiFi platforms are able to offer that performance yet still extend battery life between recharges, a very important benchmark when users need their hearing aids to last for a complete workday.

Casey tells me that the Cadence Tensilica group are also working closely with OEMs who are building their own learning-based personalization models and software. They have also joined a venture with Hoerzentrum Oldenburg GmbH, Leibniz University Hannover, and Global Foundries to build a prototype Smart Hearing Aid Processor (SmartHeAP).

Looks to me like serious commitment to advancing technology in hearing aids! You can learn more about this topic HERE.



Podcast EP253: Democratization of AI with Christopher Vick of Lemurian Labs
by Daniel Nenni on 10-11-2024 at 10:00 am

Dan is joined by Christopher Vick, the Vice President of Engineering at Lemurian Labs, bringing over three decades of experience from top tech companies such as Qualcomm, Oracle, and Sun Microsystems. Throughout his distinguished career, Christopher has played a key role in developing technologies used by billions.

Notably, Vick led the creation of software development tools for Qualcomm’s 5G modems and was one of the original developers of the HotSpot Java Virtual Machine at Sun Microsystems. In his role at Lemurian Labs, he drives engineering strategy, focusing on creating innovative solutions to make AI development more efficient, accessible, and performance-driven.

In this very informative podcast, Christopher provides an overview of new development approaches that can be applied to AI training and algorithm development to make the process more efficient and predictable. The result is a democratization of AI, making the technology available to a wide range of applications.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



CEO Interview: Tobias Ludwig of LUBIS EDA
by Daniel Nenni on 10-11-2024 at 6:00 am


Tobias began his journey with a strong academic foundation in electronic design automation, studying at a leading university in Germany that specialized in formal verification. After graduating, Tobias gained hands-on experience in the semiconductor industry, where he quickly recognized the challenges and inefficiencies in the traditional formal verification process.

Driven by a passion for innovation, Tobias co-founded LUBIS EDA with a clear vision: to automate and simplify formal verification, making it more accessible to companies of all sizes.  Under his leadership, LUBIS EDA integrated a dedicated team of software developers focused on automating verification processes. He also pioneered the adoption of AI techniques to enhance debugging and setup.

Acknowledging the shortage of skilled formal verification engineers, Tobias launched the Formal Bootcamp, a training program designed to bridge the talent gap and prepare the next generation of experts.

Today, as CEO, Tobias leads LUBIS EDA in helping semiconductor companies around the world overcome the most challenging aspects of formal verification, driving both innovation and industry standards.

Tell us about your company?

At LUBIS EDA, we specialize in uncovering simulation-resistant and corner-case bugs in high-risk silicon designs by automating the formal verification process. We go beyond the traditional, labor-intensive methods, integrating cutting-edge tools and AI techniques to make formal verification more accessible and effective.

Our team is based in Germany, close to one of the few universities in the country that teach formal verification, ensuring we stay at the forefront of industry knowledge. We don’t just provide solutions; we also empower our customers through consulting-based formal sign-off and our specialized Formal Bootcamp training program.

Whether it’s working on complex IPs like caches, routers, and controllers, or addressing the talent gap in formal verification, we’re committed to helping our customers succeed. Our goal is to simplify the verification process, allowing our customers to focus on innovation, not bugs.

What problems are you solving?

First, finding bugs at the Sub-IP level is notoriously difficult. Simulation often falls short in achieving comprehensive coverage, especially for detecting deadlocks and livelocks. Our automated formal verification tools simplify this process, making it more accessible and effective.

Second, there’s a significant shortage of skilled professionals capable of executing complex formal verification tasks. Traditionally seen as a “PhD-level technique,” formal verification requires years of practice to master. At LUBIS EDA, we are dedicated to formal verification, and over the past year, we’ve built one of the largest talent pools for formal verification worldwide.

What application areas are your strongest?

We excel in various types of IPs, but where we truly shine is helping our customers maximize the potential of formal verification. We’ve added the most value in working on caches, routers, controllers, and all sorts of pipelines.

What keeps your customers up at night?

Imagine you’re a lead engineer at a small semiconductor company, and you’re nearing the final stages of a critical chip design. The pressure is mounting—deadlines are looming, and you can’t afford any last-minute surprises. What keeps you up at night? It’s the fear that somewhere, buried deep in the complexity of your design, there’s a lurking bug—one that could derail months of hard work.

What does the competitive landscape look like and how do you differentiate?

In the realm of formal verification, much of the work remains manual and time-consuming, with a few other consulting firms offering similar services. However, we’ve taken a different approach. Rather than simply adding more people to tackle the problem, we’ve integrated a dedicated team of software developers into our company from the start. Their mission is to automate every aspect of the formal verification process. Recently, we’ve also begun incorporating AI techniques to streamline debugging and setup, further setting us apart from the competition.

What new features/technology are you working on?

We are currently focused on two main development areas:

Browser-Based Automated Verification Solution:
We’re working on a web-based tool that simplifies formal verification to the point where users require no training or prior knowledge. Our initial release will feature a RISC-V formal verification app, where users simply click a “run” button. The tool handles everything, including licensing and resource management. If a bug is detected, the user receives a comprehensive, AI-generated description of the issue directly from the waveform. This app is just the beginning—we plan to expand its capabilities to include protocol and cache checks in the coming months.

C++ Based FV AppGenerator:
We have developed a tool that generates SVA assertions from a SystemC/C++ model. This tool, which has been years in the making and proven invaluable in our consulting projects, allows users to run assertions in simulation or formal verification without any prior expertise.

How do customers normally engage with your company?

The easiest way to connect with us is via LinkedIn or email, and we’re committed to responding promptly. We also schedule regular sync-ups with all our customers to address questions and provide ongoing support throughout their formal verification journey.

Also Read:

CEO Interview: Nikhil Balram of Mojo Vision

CEO Interview: Doug Smith of Veevx

CEO Interview: Adam Khan of Diamond Quanta