Electrical Rule Checking in PCB Tools
by Daniel Payne on 12-10-2024 at 10:00 am

I’ve known about DRC (Design Rule Checking) for IC design, and the same approach can also be applied to PCB design. The continuous evolution of electronics has led to increasingly intricate PCB designs that require Electrical Rule Checking (ERC) to ensure that performance goals are met. This complexity poses several challenges in design verification, often resulting in errors, inefficiencies, and increased costs. This blog post examines these challenges and introduces HyperLynx DRC, an EDA tool from Siemens, to address them.

Modern electronic products demand enhanced functionality and performance, directly impacting the complexity of PCB design and verification. The use of complex components, high-speed interfaces, and advanced materials requires thorough PCB checks to guarantee optimal performance and reliability. This level of complexity often stretches the capabilities of traditional verification methods. 

Several factors contribute to the challenges in PCB design and verification:

  • Error-Prone Processes: The intricate nature of complex PCBs makes the design process susceptible to errors. Oversights and mistakes during layout, component placement, and routing can compromise product functionality and reliability. Undetected errors lead to revisions, rework, and possibly complete redesigns, impacting project timelines and budgets.
  • Infrequent Checks: The labor-intensive nature of PCB checking processes discourages frequent checks throughout the design cycle. Delays in verification lead to accumulated errors and inconsistencies, making fixes challenging and time-consuming.
  • Late-Stage Error Detection: Detecting design errors in later stages of development is inefficient, leading to more modifications, increased development time and costs, and delayed time-to-market. This is particularly critical in industries with rapid technological advancements.
  • Simulation Challenges: Traditional signal and power integrity simulations involve analyzing numerous objects, including nets, planes, and area-fills. Collecting simulation models and running simulations for each object is labor-intensive and time-consuming, often exceeding the benefits gained.

HyperLynx DRC

To face these challenges, Siemens developed HyperLynx DRC, a rule-based checker that identifies potential PCB design errors using geometrical calculations. The key features are:

  • Predefined Rules: The software comes with over 100 predefined rules addressing various aspects of PCB design, including signal integrity, power integrity, electromagnetic interference, electrostatic discharge, analog circuits, creepage, clearance, and IC package-specific checks.
  • Efficient Embedded Engines: HyperLynx DRC utilizes various embedded engines, such as the geometry engine, graph engine, field solver, and creepage engine, for efficiently checking diverse technical challenges.
  • Management of False Violations: The tool provides a feature for managing false violations, allowing users to create object lists, apply rules to specific objects, and eliminate unnecessary checks, significantly reducing checking time.
  • Enhanced Filtering Capability: HyperLynx DRC enables the creation of object lists manually or automatically, offering filtering capabilities to focus on relevant objects. A simplified sketch of this rule-plus-filter pattern follows below.
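
To make the rule-plus-object-list idea concrete, here is a minimal Python sketch of a clearance-style geometric check restricted to an object list. The Segment class, function names, and threshold are invented for illustration; they do not represent HyperLynx DRC's actual rule language or API.

```python
import math

class Segment:
    """A routed trace segment, reduced to a start/end point pair."""
    def __init__(self, net, p1, p2):
        self.net, self.p1, self.p2 = net, p1, p2

def point_to_segment_distance(p, a, b):
    """Shortest distance from point p to segment ab (2D)."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby or 1e-12
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

def clearance_violations(segments, object_list, min_clearance_mil):
    """Flag pairs of different-net segments that run too close.

    Restricting the check to an object list (e.g., only high-speed
    nets) is what keeps geometric checking tractable on large boards.
    Endpoint-based distance is a simplification; a production checker
    also handles crossing segments and true pad/plane geometry.
    """
    targets = [s for s in segments if s.net in object_list]
    violations = []
    for i, s1 in enumerate(targets):
        for s2 in targets[i + 1:]:
            if s1.net == s2.net:
                continue  # clearance rules apply between different nets
            d = min(point_to_segment_distance(s1.p1, s2.p1, s2.p2),
                    point_to_segment_distance(s1.p2, s2.p1, s2.p2),
                    point_to_segment_distance(s2.p1, s1.p1, s1.p2),
                    point_to_segment_distance(s2.p2, s1.p1, s1.p2))
            if d < min_clearance_mil:
                violations.append((s1.net, s2.net, round(d, 2)))
    return violations

segs = [Segment("CLK", (0, 0), (100, 0)),
        Segment("DATA0", (0, 4), (100, 4)),
        Segment("GND", (0, 50), (100, 50))]   # GND is filtered out below
print(clearance_violations(segs, {"CLK", "DATA0"}, min_clearance_mil=5))
# -> [('CLK', 'DATA0', 4.0)]
```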

The extensive capabilities of HyperLynx DRC can lead to long rule-based geometrical run times for large and complex designs. To address this, HyperLynx DRC provides the area-crop function, allowing users to isolate and analyze specific areas of the design. 

The area-crop function streamlines the verification process through:

  • User-Friendly Interface: Users can quickly specify an area by selecting nets or components using a wizard.
  • Automated Cropping: The wizard automatically crops the design with predefined merging from the selected objects and creates a new project for checking.

This function enables users to concentrate on specific design areas, reducing complexity, enhancing accuracy, and speeding up run times during verification.
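
As a rough illustration of why cropping pays off, the model below assumes a rule whose runtime grows roughly quadratically with the number of objects checked (pairwise geometric comparisons). The exponent, the even four-way split, and the overlap fraction are assumptions for illustration, not measured HyperLynx behavior.

```python
# Back-of-the-envelope model of why area cropping helps, assuming a
# rule whose cost grows about quadratically with objects checked.

def relative_cost(n_objects, exponent=2.0):
    return n_objects ** exponent

n = 11_000          # nets on the board in the case study below
whole = relative_cost(n)

# Four quadrants plus two overlap zones at the virtual cut lines;
# assume objects split evenly and overlaps re-check ~10% of them.
sections = [n / 4] * 4 + [n * 0.10] * 2
cropped = sum(relative_cost(s) for s in sections)

print(f"speedup ~ {whole / cropped:.1f}x")   # ~3.7x for exponent 2
```

Note that the 46x speedup reported in the case study below far exceeds this simple pairwise model, since cropping also simplified the intricate GND net topology that the Long Stub rule must traverse.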

Case Study

MediaTek, a leading semiconductor company, used HyperLynx DRC’s area-crop function on a highly complex board. The board specifications were:

  • Layout file size: Over 300 MB
  • Layers: Over 40
  • Layout size: Over 22,000 mil × 16,000 mil
  • Components: Over 16,000
  • Nets: Over 11,000

The area-crop function was used as follows:

  • Segmentation of the Board: The board was divided into four sections using vertical and horizontal virtual cuts, creating top-left, top-right, bottom-left, and bottom-right areas. Two additional overlap zones were added at the intersecting regions to ensure thoroughness.
  • Accelerated Verification: Checking each section individually significantly reduced the overall run time, particularly for the complex GND signal Long Stub rule.
  • Reduced Complexity: Dividing the board into smaller sections simplified the intricate GND nets, enhancing performance and allowing for efficient error identification and resolution.

Figure: PCB layout with four areas selected

The implementation of the area-crop function yielded impressive results:

  • Time Reduction: Total checking time was reduced from 33 hours, 51 minutes, 53 seconds to just 44 minutes, a speedup of roughly 46x.
  • Enhanced Efficiency and Precision: Focusing on segmented areas allowed for more precise verification, ensuring design reliability and integrity without compromising the project timeline.
  • Optimized Resource Allocation: Large time savings and enhanced focus enabled optimized resource allocation, ensuring critical areas received proper scrutiny and facilitated a smoother design refinement process.
Figure: Run times per area under the Long Stub rule

Conclusion

HyperLynx DRC’s area-crop function is a powerful tool for PCB design verification. By enabling focused verification, reducing complexity, and significantly accelerating the checking process, HyperLynx DRC ensures project success and meets the challenges of modern PCB designs. This innovative solution ensures advancements in electronic products are characterized by reliability, precision, and efficiency.

Read the complete, 12-page white paper online.

Synopsys Brings Multi-Die Integration Closer with its 3DIO IP Solution and 3DIC Tools
by Mike Gianfagna on 12-10-2024 at 6:00 am

There is ample evidence that technologies such as high-performance computing, next-generation servers, and AI accelerators are fueling unprecedented demands in data processing speed with massive data storage, lower latency, and lower power. Heterogeneous system integration, more commonly called 2.5D and 3D IC design, promises to address these demands. As there is no “free lunch”, these new design approaches create similarly unprecedented demands associated with manufacturability and cost. It turns out the solution to this dilemma requires a combination of advanced design tools and purpose-built IP. One company stands apart with deep technology in both areas. Let’s explore how Synopsys brings multi-die integration closer with its 3DIO IP Solution and 3DIC tools.

Framing the Problem

There are two fundamental challenges to be met in order to bring heterogeneous system integration closer to reality – packaging and interconnect. Let’s examine the key requirements of each.

The need to process massive quantities of data is a driver for advanced packaging. There are many approaches here. 2.5D and 3D packaging have gained popularity as prominent solutions. In the 2.5D approach, two or more chips are side by side with an interposer connecting them. The interposer acts as a high-speed communication interface, creating greater flexibility to combine functions in one package.

For 3D IC, chips are connected with vertical stacking. This improves performance and functionality, allowing the integration of chiplets with multiple layers. A key trend is to shrink the bump pitch between the chiplets. This shortens interconnect distances and reduces the related parasitics.

All of these new design requirements and advanced packaging approaches have driven a significant change in interconnect strategies, from traditional copper uBUMP to advanced uBUMP at 40um pitch, with scaling continuing down to 10um.

For 2.5D design, the connection between chips is made through redistribution layers on the interposer.  The distance between chips is usually around 100um. For 3D, the use of vertical stacking allows for direct connection between two chips, reducing the distance to less than 40um. The result is a much smaller substrate.

With this approach, IO no longer needs to be placed at the edge of the chip. Also, by using hybrid bond technology the vertical connection between chips is even tighter. Hybrid bonding connects dies in packages using tiny copper-to-copper connections (<10um).  
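
A quick worked calculation makes these pitch numbers concrete. On a regular grid, IO density scales with the inverse square of the bump pitch, so shrinking the pitch from 40um to 10um yields a 16x density increase; the one-IO-per-grid-cell assumption is a simplification.

```python
def ios_per_mm2(pitch_um):
    """IO sites per square millimeter on a regular bump grid."""
    pitch_mm = pitch_um / 1000.0
    return 1.0 / (pitch_mm ** 2)

for pitch in (100, 40, 10):   # RDL-era, advanced uBUMP, hybrid bond
    print(f"{pitch:>3} um pitch -> {ios_per_mm2(pitch):>7.0f} IOs/mm^2")
# 100 um ->     100 IOs/mm^2
#  40 um ->     625 IOs/mm^2
#  10 um ->   10000 IOs/mm^2 (16x the 40 um density)
```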

Synopsys has released an informative technical bulletin on all these trends; a link is provided at the end of this post. The figure below is taken from that document and shows these significant scaling trends.

Addressing the Problem

Taming these design challenges requires a combination of advanced EDA tools and specialty IP. Together, they form a winning design approach. Synopsys is well-known for its 2.5/3D design tools. Its 3D IC Compiler is a key enabler for multi-die integration. It turns out the design methodology required spans many disciplines. More on that in a moment. First, let’s examine how Synopsys brings multi-die integration closer with its 3DIO IP Solution.

This IP is specially tuned for multi-die heterogeneous integration, enabling the optimal balance of power, performance and area to address the packaging demands of 3D stacking. It turns out the 3DIO IP enables faster timing closure as well.

To better understand how it works, here are the key components of the solution:

  • Synopsys 3DIO includes a synthesis friendly Tx/Rx cell compatible with Synopsys standard cell libraries and a configurable charged device model (CDM) scheme for optimal ESD protection. As the number of IO channels increases, the optimized Synopsys 3DIO solution leverages the automatic place and route environment to place and route the IOs directly on the BUMP. The solution supports both 2.5D and 3D packaging using uBUMP and hybrid BUMP. The Synopsys 3DIO cell supports a high data rate and offers the lowest power solution, with an optimal area that fits within the hybrid BUMP area.
  • Synopsys Source Synchronous 3DIO (SS3DIO) extends the synthesizable 3DIO cell solution with a clock forwarding functionality to aid in lower bit error rate and ease timing closure between dies. The SS3DIO offers scalability to create custom-sized macros with optimal PPA and ESD. The TX, RX, and clock circuits support matched data and clock path, with data launched at the transmitting clock edge and captured at the corresponding receiving clock edge. A simplified timing-margin model follows this list.
  • Synopsys Source Synchronous 3DIO PHY is a 64-bit hardened PHY module with inbuilt redundancy, optimized for the highest performance. The 3DIO PHY with CLK forwarding reduces bit error rate and eases implementation along with optimal placement of POWER/CLK/GND BUMP.
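
As a rough illustration of why clock forwarding eases timing closure, the sketch below compares the margin when only the data/clock path mismatch matters (forwarded clock) against the case where the full die-to-die delay must be absorbed by an independent receive clock. All delay numbers are placeholders, not Synopsys SS3DIO specifications.

```python
ui_ps         = 250.0   # unit interval, e.g., 4 Gbps per IO
data_delay_ps = 180.0   # die-to-die data path delay
clk_delay_ps  = 172.0   # forwarded-clock path delay (matched routing)
aperture_ps   = 40.0    # receiver setup + hold sampling aperture

# With a forwarded clock, only the data/clock mismatch eats margin,
# because path-delay variation is common to both and largely cancels:
skew_ps = abs(data_delay_ps - clk_delay_ps)
margin_forwarded = ui_ps - aperture_ps - skew_ps          # 202 ps

# Without forwarding (worst case, ignoring training and FIFOs), the
# full path delay counts against an independent receive clock:
margin_independent = ui_ps - aperture_ps - data_delay_ps  # 30 ps

print(f"forwarded-clock margin:   {margin_forwarded:.0f} ps")
print(f"independent-clock margin: {margin_independent:.0f} ps")
```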

The figure below, also taken from the Synopsys technical bulletin, provides an overview of how the Synopsys 3DIO IP Solution helps with a variety of design challenges.

With new packaging technologies and increased density of interconnects, there is a significant rise in the IO channels for a given die area. The corresponding decrease in IO channel length increases performance but gives rise to the need for a more streamlined interface. The Synopsys 3DIO IP Solution provides a way to implement tunable, integrated multi-die design structures.

To Learn More

Addressing the challenges of heterogeneous system integration requires a combination of advanced design tools and IP that is optimized for this new design style. Synopsys provides strong offerings in both areas. As mentioned, a cornerstone of the tool offering is the Synopsys 3DIC Compiler. You can learn more about Synopsys 3DIC Compiler here. In the area of overall design flow, there is an excellent webinar that Synopsys recently presented with Ansys that delves into all the aspects of multi-die design. You can catch the replay of that webinar here.

You can access the technical bulletin that provides more detail on Synopsys 3DIO Solution here. And you can explore more about this IP, including access to the Synopsys 3DIO IP Solution datasheet on the Synopsys website here.  And that’s how Synopsys brings multi-die integration closer with its 3DIO IP Solution and 3DIC tools.


What is Wrong with Intel?
by Daniel Nenni on 12-09-2024 at 10:00 am

One of the most popular topics on the SemiWiki forum is Intel, which I understand. Many of us grew up with Intel, some of us have worked there, and I can say that the vast majority of us want Intel to succeed. The latest Intel PR debacle is the abrupt departure of CEO Pat Gelsinger. To me this confirms the answer to the question, “What is wrong with Intel?”. But first let’s look at the big picture.

Hopefully we can all agree that AI will change the world over the next decade. It has already started: AI has made it down from the cloud into our cars, homes, laptops, and phones. AI is also being weaponized and is a critical technology powering conflicts around the world. Clearly the leaders of the AI race will also be the leaders of the new AI-infused world.

We should also be able to agree on the importance of semiconductor technology and of controlling the semiconductor supply chain. The pandemic-fueled semiconductor shortages should still be fresh in our minds, and if you think they could not happen again, you are wrong.

When speaking about semiconductors you can separate them into two categories: logic and memory. Currently the majority of the leading-edge logic chips come from Taiwan (TSMC) with Intel and Samsung a distant second and third. The majority of the memory chips come from South Korea (Samsung and SK Hynix) with a distant third being U.S. based Micron and relatively new memory chip makers in China. To be clear, without memory logic is useless and without logic memory is useless.

My personal mantra has always been to plan for the worst and hope for the best so you will not be disappointed. Best case is that Taiwan and South Korea continue as they have for the last 50+ years. Worst case is that they won't, the semiconductor supply chain fractures, and life as we know it is over. We may not go back to prehistoric days, but to the younger generations it will seem like it.

There are two companies that are critical to semiconductor manufacturing in the United States: Intel (logic) and Micron (memory). Both are semiconductor legends, and both are critical to the survival of semiconductor manufacturing in the United States.

We can discuss Micron another time but a recent podcast with Intel’s Dr. Tahir Ghani (Mr. Transistor) reminded me of how important Intel is to the semiconductor industry. This week I am at IEDM, the premier semiconductor conference that showcases semiconductor innovation, and Intel is again front and center. This is a much longer technology discussion so I will simply say that Intel is critical to the semiconductor industry and the United States of America. If you think otherwise post it in the SemiWiki forum and thousands of working semiconductor professionals will explain it to you in painful detail.

This brings us back to the question: What is wrong with Intel? In my opinion Intel has had the worst board of directors in the history of the semiconductor industry. The hiring/firing of the three previous CEOs is a clear example. Seriously, we are talking about 20 years of bad decisions that have destroyed a semiconductor legend.

I posted a blog about The Legend of Intel CEOs in 2014 and updated it after Pat Gelsinger was hired, and I will have to update it yet again. To me the bad board decisions started when they hired Paul Otellini (finance/sales/marketing) and then made an even worse pick with Brian Krzanich (manufacturing). The firing of Krzanich was even worse. How could the board have not properly vetted a man whose entire career was at Intel? The stated reason for the firing was absolute nonsense. Krzanich was the worst Intel CEO of all time and that is why he was fired. I would liken it to Pat Gelsinger’s “refirement” announcement. Why are boards of directors allowed to reimagine CEO firings? They are heavily compensated employees of a publicly traded company. Intel pays these people millions of dollars in salary and stock options to safeguard the company. Where are the activist investors now?!?!

I also questioned the hiring of Robert Swan (finance) as Intel CEO. As it turns out, Swan signed the megadeal with TSMC that saved the company from the Intel 10nm debacle, and he was later fired for it. I do believe that if Swan had stayed as CEO, Intel would be fabless today, which would be a very bad idea for the reasons stated above.

In regards to Pat Gelsinger, I was a big fan at the beginning, but I told my contacts at Intel that the strategy should be to “speak softly and carry a big stick”. Intel’s culture was based on being a virtual monopoly for so many years that it really got the best of them. Making overly optimistic statements is a very risky proposition. At some point those statements will come back to haunt you unless you have the revenue to back them up. Intel did not, so Pat was out, just my opinion.

Let’s be clear, Intel is an IDM foundry like Samsung. TSMC is a pure-play foundry with hundreds of customers and partners collaborating on R&D and business strategy. No one company is going to compete with that. If you compare Intel Foundry to Samsung Foundry you get a very favorable result. The strategy of challenging TSMC head-to-head has been tried before (by Samsung and GlobalFoundries) and billions of dollars were wasted. How did a seasoned board of directors allow this to happen?

As for the rumors of Intel being acquired, in my opinion Broadcom is the only company that qualifies. I’m confident Hock Tan could turn Intel around. I do not know how the finances would work but Hock’s management style would get Intel back into the green without a doubt.

Selling off the manufacturing part of Intel is ridiculous. Do you really think Intel Design can compete with Nvidia or even AMD without intimate ties to manufacturing? I was really excited when Intel signed the agreement with TSMC because it was a head-to-head design shootout with AMD, Nvidia, and Intel on the same process technology for the very first time. You tell me how that turned out. Are the new Intel products disruptive? The entire leading edge semiconductor industry is using TSMC N3. Will Intel really be relevant without manufacturing?

The quick fix for Intel is to be acquired by Broadcom. Bringing back Pat 2.0 and replacing the board of directors is another option. A third option is for the U.S. Government to step in and make semiconductor manufacturing a priority. Maybe Elon Musk can help Intel sort things out (kidding/not kidding).

Bottom line: Some very difficult decisions have to be made by some very qualified people. Take a look at the current Intel Board of Directors and convince me that they are the right ones to do it. You have an open invitation to be a guest on our podcast or post a written response to this blog.

I started SemiWiki 14 years ago to give semiconductor professionals a voice, a platform to participate in conversations for the greater good of the semiconductor industry. Let’s use it to help Intel become an industry leader again.


Enhancing System Reliability with Digital Twins and Silicon Lifecycle Management (SLM)
by Kalar Rajendiran on 12-09-2024 at 6:00 am

As industries become more reliant on advanced technologies, the importance of ensuring the reliability and longevity of critical systems grows. Failures in components, whether in autonomous vehicles, high performance computing (HPC), healthcare devices, or industrial automation, can have far-reaching consequences. Predicting and preventing failures is essential, and technologies like Digital Twins and Silicon Lifecycle Management (SLM) are key to achieving this. These tools provide the ability to monitor, analyze, and predict failures, thereby improving the dependability and performance of systems.

“The reliability, availability, and serviceability (RAS) of complex systems such as data center infrastructure has never been more complex or critical,” said Jyotika Athavale, director of Engineering Architecture at Synopsys. “By integrating silicon health with digital twin simulations, we unlock powerful new capabilities for predictive modeling. This enables technology leaders to optimize system design and performance in new, impactful ways.”

Athavale addressed this topic in a recent talk at the Supercomputing Conference 2024. She leads quality, reliability, and safety research, pathfinding, standards, and architectures for SLM solutions across RAS-sensitive application domains.

Why Digital Twins Are Good for Prognostics

A Digital Twin is a virtual replica of a physical asset, created by combining real-time sensor data with simulation models. Digital twins enable continuous monitoring of system health and provide valuable insights for prognostics, which is the process of predicting future failures. By simulating different scenarios, digital twins can predict Remaining Useful Life (RUL), helping operators plan maintenance or replacements before a failure occurs. RUL refers to the time a device or component is expected to function within its specifications before failure. This proactive approach reduces downtime and optimizes system resources.

Types of Failures in Modern Systems

Failures in modern systems are categorized into permanent, transient, and intermittent faults. Permanent faults, such as Time-Dependent Dielectric Breakdown (TDDB), Negative Bias Temperature Instability (NBTI), and Hot Carrier Injection (HCI), develop over time and lead to errors resulting in failure. Transient faults are temporary disruptions caused by external factors like radiation, which do not result in lasting damage, while intermittent faults appear and disappear sporadically, typically due to marginal or degrading hardware.

In sub-20nm process technologies, degrading defects continue to evolve into the useful life phase of the bathtub curve, leading to issues like Silent Data Corruption (SDC), which can go unnoticed until critical failure occurs.

Why Failures Are Increasing

Despite technological advancements, failures are rising due to several factors. As devices shrink in size and increase in complexity, they become more vulnerable to failure. Smaller transistors, particularly below 20nm, are more susceptible to intrinsic wearout. Moreover, the demand for higher performance leads to greater stress on semiconductors. With interconnected systems in critical applications, even a single failure can have serious consequences, making predictive maintenance even more essential.

“To keep pace with these challenges, it’s essential to shift from reactive to predictive maintenance strategies,” said Athavale. “By integrating real-time monitoring and predictive insights at the silicon level, we can better manage the complexities of modern systems, helping avoid potential failures and making maintenance more manageable.”

How to Monitor Silicon Health

Monitoring the health of semiconductor devices is crucial for identifying early signs of degradation. With embedded monitors integrated during the design phase, data on key performance metrics—such as voltage, temperature, and timing—can be continuously collected and analyzed. Silicon Lifecycle Management (SLM) systems include PVT monitors to track process, voltage, and temperature variations, path margin monitors to ensure signal paths remain within safe operating margins, and clock delay monitors to detect timing deviations. SLM also includes in-field analytics, which enables real-time monitoring and proactive decision-making throughout the device lifecycle.

Analyzing and Predicting Failures

Once the data is collected, it is analyzed to detect potential failures. Prognostic systems use advanced algorithms to analyze degradation patterns, such as those caused by TDDB, NBTI, and HCI, to predict when a component might fail. Predicting RUL is vital for managing system reliability, as early identification of failure allows for corrective actions like maintenance or replacement before the failure occurs.
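
As a minimal illustration of the principle, the sketch below fits a linear degradation trend to hypothetical path-margin monitor readings and extrapolates to a failure threshold. The readings and threshold are invented, and production SLM analytics use far richer models than a straight-line fit.

```python
import numpy as np

hours  = np.array([0, 1000, 2000, 3000, 4000], dtype=float)
margin = np.array([120, 113, 105, 99, 91], dtype=float)  # ps of slack

slope, intercept = np.polyfit(hours, margin, 1)  # ps lost per hour
fail_threshold_ps = 20.0                         # margin exhausted

# Solve intercept + slope * t = threshold for t, then subtract "now".
t_fail = (fail_threshold_ps - intercept) / slope
rul_hours = t_fail - hours[-1]
print(f"degradation: {slope * 1000:.1f} ps per 1000 h; "
      f"estimated RUL: {rul_hours:.0f} h")
# -> degradation: -7.2 ps per 1000 h; estimated RUL: 9889 h
```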

RUL Prediction Using Synopsys SLM Data Solution

Synopsys’ SLM solution enables accurate RUL predictions through advanced monitoring and analytics, ensuring predictive maintenance and enhanced device reliability.

Key components of the Synopsys SLM solution include SLM PVT Monitors, which track process, voltage, and temperature variations to assess wear; SLM Path Margin Monitors, which detect timing degradation in critical paths; SLM Clock Delay Monitors, which identify clock-related performance anomalies; and SLM In-Field Analytics, which analyzes real-time data to predict failure trends.

The benefits of RUL prediction with Synopsys SLM include predictive maintenance, optimized reliability vs. performance, lifecycle and end-of-life planning, outlier detection, and catastrophic failure prevention. Corrective actions based on RUL analysis can include early decisions on recalls, implementing lifetime-extending mitigation strategies, and transitioning devices to a safe state to prevent further damage. Synopsys SLM provides actionable insights to minimize downtime, extend device lifespan, and ensure reliable performance throughout the lifecycle of semiconductor devices.

Summary

The combination of digital twins and Silicon Lifecycle Management (SLM) provides a powerful approach to managing the health and reliability of semiconductor devices. By enabling continuous monitoring, accurate failure prediction, and timely corrective actions, these technologies offer organizations tools to improve dependability, optimize performance, and reduce downtime. As electronic systems grow more complex and mission-critical, digital twins and SLM are becoming essential for predictive maintenance, ensuring long-term system reliability, and preventing costly failures.

Also Read:

A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design

The Immensity of Software Development and the Challenges of Debugging Series (Part 4 of 4)

Synopsys-Ansys 2.5D/3D Multi-Die Design Update: Learning from the Early Adopters

 


Podcast EP265: The History of Moore’s Law and What Lies Ahead with Intel’s Mr. Transistor
by Daniel Nenni on 12-08-2024 at 6:00 am

Dan is joined by Dr. Tahir Ghani, Intel senior fellow and director of process pathfinding in Intel’s Technology Research Group. Tahir has a 30-year career at Intel working on many innovations, including strained silicon, high-K metal gate devices, FinFETs, RibbonFETs, and backside power delivery (BSPD), among others. He has filed more than 1,000 patents over his career at Intel and was honored as Intel’s 2022 Inventor of the Year. He has the nickname of “Mr. Transistor” since he’s passionate about keeping Moore’s Law alive.

In this very broad discussion, Tahir outlines the innovations over the past 60 years of Moore’s Law and how these advances will pave the way to a trillion transistor device in this decade. Tahir explains how transistor scaling, interconnect advances, chiplet-based design and advanced packaging all work together to keep Moore’s Law scaling alive and continue to deliver exponential increases in innovation.

Tahir will present an invited paper at a special session of the upcoming 70th IEDM called The Incredible Shrinking Transistor – Shattering Perceived Barriers and Forging Ahead. IEDM will be held from December 7-11, 2024 in San Francisco.  You can learn more about IEDM and register to attend here. His presentation will be Tuesday, December 10 at 2:20 PM. Tahir also reviews several other significant Intel papers that will be presented at IEDM.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP264: How Sigasi is Helping to Advance Semiconductor Design with Dieter Therssen
by Daniel Nenni on 12-06-2024 at 10:00 am

Dan is joined by Dieter Therssen, CEO of Sigasi. Dieter started his career as a hardware design engineer, using IMEC’s visionary tools and design methodologies in the early days of silicon integration. Today, being CEO of Sigasi, a fast-growing, creative technology company, is a perfect fit for Dieter. Having worked in that space for several companies, and well-rooted in the field of semiconductors, he forever enjoys the magic of a motivated team.

Dan explores the changing landscape of semiconductor design with Dieter. The demands of higher complexity and multi-technology systems are discussed. The impact of AI and specifically generative AI are also explored with a view toward how the unique front-end design tools offered by Sigasi can move technology forward.

ASIC/FPGA design and safety/security requirements are also reviewed in this broad discussion. Dieter explains how Sigasi is helping these trends and also discusses the new and unique community version of the Sigasi tools.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: GP Singh from Ambient Scientific
by Daniel Nenni on 12-06-2024 at 6:00 am

Gajendra Prasad Singh, also known as GP Singh, is a seasoned tech professional with over 26 years of experience in advanced semiconductor chips. With a zeal to solve the most complex technical problems, he embarked on a difficult journey to create programmable AI microprocessors that provide high performance in a cost-effective manner while still consuming low power. To realize this vision, he co-founded Ambient Scientific along with a team of visionaries from California’s Silicon Valley. GP’s extensive technical experience and successful leadership record building cutting-edge chips at prestigious global companies contributed to his deep understanding of not only the scientific first principles required for such breakthrough innovations but also the business acumen to ensure practical feasibility. With an innate passion for everything electronics and computers, GP Singh is a fierce advocate of using semiconductors for the betterment of human lives.

Tell us about your company?

Ambient Scientific is a fabless semiconductor company born in Silicon Valley, pioneering ultra-low power AI processors that are fully programmable to enable endless AI applications.

Our breakthrough Analog In-Memory Compute technology called DigAn® is making AI computing more powerful and efficient than ever before, without compromising on flexibility and programmability. Compared to traditional AI hardware, our processors deliver thousands of times more AI performance at the same power consumption, or thousands of times less power consumption for the same AI performance.

Our first product GPX10 leverages the DigAn® architecture to bring battery-powered, cloud-free, on-device AI applications to life, something considered nearly impossible before. From always-on voice detection to FaceID to predictive maintenance, GPX10 is enabling endless applications in various industries, all while running on as little as a coin cell battery with no dependence on the cloud or an internet connection.

With a full stack SDK designed to support industry standard AI frameworks (TensorFlow, Keras, etc.) and an AI compiler to enable custom neural networks, we enable rapid time to market for your AI applications. Order our DVK today and bring the power of AI out of the cloud, right to your fingertips.

What problems are you solving?

While the AI application and software landscape has exploded in complexity, hardware has failed to keep up. Current chips used for AI processing (GPUs) were designed with graphics processing, not AI computing, in mind, making them inefficient and extremely expensive. This is clearly visible in the rising compute costs and power consumption for all AI, ranging from gigantic LLMs to edge AI for smaller electronic devices. We at Ambient Scientific have solved these problems by inventing not just analog in-memory computing but also a new instruction set architecture designed specifically for AI computing. Our analog matrix multiplication engines deliver 40X AI performance at 70X lower power consumption compared to equivalent GPUs. Built with scalability and flexibility in mind, our architecture enables AI processors all the way from cloud and server level to MCU level for a wide variety of applications across several industries. Ambient Scientific’s mission is to make AI computing powerful, energy efficient, and affordable for everyone alike.
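
For readers new to the concept, the sketch below shows the textbook analog in-memory compute idea: weights stored as conductances G, inputs applied as voltages V, and each column wire summing currents I = G·V by Kirchhoff's current law, so a matrix-vector multiply happens in one analog step. This illustrates the general principle only, not Ambient Scientific's proprietary DigAn architecture; the noise term is a stand-in for real analog non-idealities.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 8))   # conductances encode weights
V = rng.uniform(0.0, 0.5, size=8)        # input activations as voltages

I_ideal = G @ V                          # wire currents sum the products

# The energy win comes with analog imperfections; model them as noise:
I_real = I_ideal + rng.normal(0.0, 0.01, size=4)
print(np.round(I_ideal, 3), np.round(I_real, 3))
```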

What application areas are your strongest?

Our first product GPX10 is an AI processor targeted at on-device AI applications for the tiniest of battery-powered devices. It helps move AI processing from the confines of the cloud directly onto the device, even if it is running on as little as a coin cell battery. This improves application reliability, latency, data security, and total cost of ownership. Some of our strongest application areas popular with customers are industrial predictive maintenance at the edge, anomaly detection in MedTech devices, and cloud-free voice control for consumer products. While these applications would commonly struggle with latency, reliability, or minuscule battery lives due to AI processing, our processor solves all these problems without forcing compromises, even on affordability.

What keeps your customers up at night?

With the widespread utility of AI, product makers have realized the importance of incorporating AI features into their product roadmap to remain competitive and maintain differentiation. These product makers are now faced with a difficult choice:

  1. Run AI processing in the cloud and sacrifice latency, data privacy and reliability due to complete dependence on a network connection.
  2. Run AI on device and sacrifice accuracy and power efficiency which translates into significantly compromised battery life.

These limitations, which ultimately translate into higher costs or compromised product quality, are a direct function of the current processors available in the market, none of which were designed for AI processing. They force debilitating sacrifices that keep product makers up at night, stuck between a rock and a hard place.

What does the competitive landscape look like and how do you differentiate?

The AI compute market for small electronic devices includes either MCUs, entry-level GPUs, or new-age NPUs. While MCUs cannot deliver enough performance for meaningful AI compute, entry-level GPUs consume too much power, occupy too much area, and are not affordable enough to fit within the boundaries of commercial viability for battery-powered, on-device AI applications. Several new-age NPUs claim to deliver low-power AI solutions, but with a heavy price to pay in lack of programmability. They tend to be fixed-function, with pre-defined neural networks and minimal room for customization. Our ultra-low power AI chips not only deliver the highest performance per unit of power consumption (>7 TOPS/W), they are smaller than a fingernail, affordable, and, most importantly, completely programmable. Product makers care about programmability so they can differentiate their products from competitors’ by owning the software, such as their proprietary AI algorithms. Programmability also makes their products future-proof, with the ability to push updates over the air as the software and application landscape evolves. Compared to fixed-function or application-specific NPUs, our processors offer a versatile and flexible platform for product makers to differentiate themselves with ultra-low power AI features well into the future.

What new features/technology are you working on?

Our claim to fame is a breakthrough in analog in-memory computing technology that enables us to leverage a combination of high-speed digital and analog circuits designed specifically for AI computing. By leveraging a cubic in-memory architecture and the analog matrix multiplication circuit, we have solved the bottlenecks of AI computing while minimizing energy consumption to a fraction of contemporary architectures. Not only this, we have also created a custom instruction set architecture from the ground up to enable flexibility and scalability in AI computing. This means we can build a wide range of processors, from AI MCUs to high-speed computer vision processors. Similarly, our end-to-end software stack scales with our processors to adapt to the application needs of software developers for a wide variety of applications in several industries.

Also Read:

CEO Interview: Ollie Jones of Sondrel

CEO Interview: Dr. Yunji Corcoran of SMC Diode Solutions

CEO Interview: Rajesh Vashist of SiTime

CEO Interview: Dr. Greg Newbloom of Membrion


SystemReady Certified: Ensuring Effortless Out-of-the-Box Arm Processor Deployments
by Lauro Rizzatti on 12-05-2024 at 10:00 am

When contemplating the Lego-like hardware and software structure of a leading system-on-chip (SoC) design, a mathematically inclined mind might marvel at the tantalizing array of combinatorial possibilities among its hardware and software components. In contrast, the engineering team tasked with its validation may have a more grounded perspective. Figuratively speaking, the team might be more concerned with calculating how much midnight oil will need to be burned to validate such a complex system.

The numerous interactions of hardware components, such as large arrays of various processor types, memory types, interconnect networks, and a wide assortment of standard and custom peripherals and logic, with software components, like bare-metal software, drivers, and OS hardware-dependent layers, demand exhaustive functional verification. This process is computationally intensive, requiring billions of cycles to establish confidence in a bug-free design before manufacturing. The challenge is magnified by the relentless pace of technology, with new hardware and software versions constantly emerging while support for older iterations persists.

The Economies of Design Debug

A well-known axiom in the field of electronic design emphasizes that the cost of fixing a design bug soars an order of magnitude at each successive stage of the verification process. What might cost a mere dollar to correct at the basic block-level verification stage can skyrocket to a million dollars when the issue surfaces at the full SoC level, where hardware and software tightly interact.
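
Taking the axiom literally, the ladder from a $1 block-level fix to a $1M full-SoC fix spans six orders of magnitude; the stage names in the short calculation below are illustrative, not a standard taxonomy.

```python
# One order of magnitude per verification stage, per the axiom above.
stages = ["block", "unit", "subsystem", "chip", "chip + firmware",
          "SoC + software", "full-SoC validation"]
for n, stage in enumerate(stages):
    print(f"{stage:>20}: ${10 ** n:>9,}")
# block: $1 ... full-SoC validation: $1,000,000
```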

The stakes become even higher if a design flaw goes undetected until after silicon fabrication. Post-silicon bug detection not only challenges engineering teams but can also lead to exorbitant costs that may drain a company’s financial resources. For small enterprises, such a scenario could be catastrophic, potentially leading to bankruptcy due to the redesign expenses and missed revenues caused by delayed product launches.

In the fiercely competitive semiconductor industry, the margin for error is razor thin. Therefore, rigorous verification at each stage of the design process is not just a best practice—it’s a critical safeguard against the potentially ruinous consequences of post-silicon bug detection.

On the bright side, the electronic design automation (EDA) industry has been investing heavily in resources and innovation to tackle the challenge of pre-silicon verification. The Shift-Left verification methodology is a testament to the industry’s commitment to addressing this challenge.

Arm: Linchpin Example of the Hardware/Software Integration Challenges

Among processor companies, Arm is a case study because of its vast catalog of IP solutions. Arm offers a wide range of IPs, platforms, and solutions, including CPUs, GPUs, memory controllers, interconnects, security, automotive, AI, IoT, and other technologies, each designed to meet the needs of different markets and applications. While the exact number is not publicly known, when adding updates and new releases, it amounts to thousands of different parts.

SoC designers using Arm components face an uphill verification challenge. Once they have selected the IP components, they must integrate them into complex SoC designs, add a software stack to bring the design to life, and ensure compliance, that is, compatibility or interoperability of the software with the hardware.

This process is fraught with uncertainties and risks.

Often, root causes of integration issues can be traced to non-compliant hardware, such as non-standard PCIe ECAM, PCIe ghost devices, or customized components like universal asynchronous receiver-transmitters (UARTs) or generic interrupt controllers (GICs). These issues can lead to design malfunctions and potentially to serious failures. For instance, systems with complex PCIe hierarchies may lack firmware workarounds, custom OS distributions may receive limited security updates, and Windows servers and clients may be incompatible with non-compliant PCIe ECAM.
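
A concrete example of why ECAM compliance matters: generic operating systems compute PCIe configuration-space addresses directly from bus/device/function numbers using the fixed layout defined by the PCIe specification (4 KB of configuration space per function), as sketched below with an arbitrary base address. Hardware that deviates from this layout needs firmware quirks that stock OS distributions do not carry.

```python
def ecam_address(base, bus, device, function, offset=0):
    """Standard PCIe ECAM layout: bus << 20 | dev << 15 | fn << 12."""
    assert bus < 256 and device < 32 and function < 8 and offset < 4096
    return base | (bus << 20) | (device << 15) | (function << 12) | offset

# Config space of bus 1, device 2, function 0 under a 256 MB ECAM window:
print(hex(ecam_address(0xE000_0000, bus=1, device=2, function=0)))
# -> 0xe0110000
```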

To address these issues, a widely used but increasingly outdated method in the electronics industry is post-silicon testing. While it serves the purpose of debugging hardware flaws after fabrication, it is inherently inefficient. This approach contradicts the well-established principle of exponential cost increase, summarized by the phrase “the sooner, the cheaper.” By delaying the detection of design flaws until after silicon manufacturing, companies incur costly silicon re-spins and face extended timelines.

Fortunately, these issues can be mitigated much earlier in the development cycle through pre-silicon design verification. Pre-silicon verification, which includes simulation, emulation, formal and timing verification, allows engineers to identify and resolve problems before chips are fabricated, significantly reducing both costs and risks.

Arm’s Game-Changing Solution: From ServerReady to SystemReady

To mitigate this challenge, specifically to eliminate or at least reduce design re-spins and accelerate time-to-market, Arm introduced the SystemReady Certification Program in 2022. Building on the success of the ServerReady program, which was launched in 2018 and targeted server applications, SystemReady expands the coverage to include designs like edge devices, IoT applications, automotive systems, and more.

In general, hardware platforms provided by semiconductor partners come with their own software stacks, i.e., firmware, drivers, and operating systems. These are often siloed, creating challenges for OS vendors and independent software vendors (ISVs) who need to run applications across different platforms, as these setups tend to be highly specific and fragmented. SystemReady aims to break down these silos, enabling software portability and interoperability across all Arm-based A-Class devices. When third-party operating systems are run on devices complying with a minimum set of hardware and firmware requirements based on Arm specifications, they boot seamlessly, and applications run smoothly.

SystemReady Program Foundation

The foundation of Arm’s SystemReady program lies in two key specifications. First, the Base System Architecture (BSA), a formal set of compute platform definitions encompassing a variety of systems from the cloud to the IoT edge, ensures that in-house developed or 3rd-party sourced software works seamlessly across a universe of Arm-based hardware. Second, a set of accompanying firmware specifications called the Base Boot Requirements (BBR) complements the BSA definitions. These sets of rules are encapsulated in the BSA Compliance Suite, accessible on GitHub.

The suite is designed to run compliance tests during pre-silicon validation, eliminating the need for executing full operating systems to validate the environment. This early-stage validation prevents costly silicon respins, expedites system-level debugging, and accelerates time-to-market.

Arm’s Thriving SystemReady Partner Ecosystem

To reach a vast and diverse customer base while considerably enhancing the value of the Arm ecosystem, Arm has strategically partnered with a wide array of companies, including leaders in EDA, IP, and silicon providers. These collaborations play a critical role in driving the success of Arm’s SystemReady program, a certification initiative that ensures seamless compatibility across hardware platforms and software stacks.

Leading EDA Firms Accelerate SystemReady Certification Success

The pre-silicon validation of software stacks on newly designed hardware platforms demands hardware-assisted verification (HAV) platforms, such as emulation and FPGA prototyping. These platforms are crucial for ensuring that new designs function correctly across the range of real-world conditions they will face. Best-in-class emulators and FPGA prototypes support comprehensive verification and validation processes, including hardware debugging, hardware-software co-verification, power and performance analysis, and even post-silicon testing for final checks.

Prominent suppliers of hardware-assisted verification platforms have joined Arm’s SystemReady program to enable their customers developing Arm SoCs and components to validate BSA compliance on HAV platforms using Transactors and Verification IPs. By participating in this program, EDA companies enable developers to validate software before silicon is even taped out, significantly reducing risks and development costs while accelerating time-to-market. The “PCIe SystemReady Certification Case Study” is an example of how a collaborative approach to pre-silicon validation can lead to successful certification and market-ready products.

Case Study: PCIe SystemReady Certification

The PCIe protocol is one of the most widely adopted and popular interfaces in the electronics industry, supporting a broad spectrum of applications, including networking, storage, GPU accelerators, and network accelerators. Each of these applications has distinct workload profiles that interact uniquely with system components, making PCIe a versatile yet complex protocol to integrate into hardware platforms.

Arm’s SystemReady certification program for Arm architecture implementations, including their complex PCIe subsystems, is designed to ensure that these diverse applications can run seamlessly across various hardware environments. Achieving this certification requires adherence to a stringent set of compliance rules. These rules involve injecting specific sequences into the PCI port and monitoring responses at the PCI protocol layer, ensuring that the system can handle different types of workloads in real-world scenarios.

Synopsys and PCIe SystemReady Compliance

To streamline this process, Synopsys provides a PCI endpoint model specifically designed to meet Arm’s BSA certification standards. As shown in Figure 1, the SystemReady compliance program is a collaborative effort between Arm, Synopsys, and silicon providers. While the silicon partner focuses on developing the boot code, Synopsys contributes the Platform Abstraction Layer (PAL), a crucial software component that ensures smooth execution of Arm’s Compliance Suite tests on the SoC.

Figure 1: Block diagram describing how Arm and partners (Arm, Synopsys, Silicon Providers) work together

The PAL acts as an intermediary, enabling the Compliance Suite to communicate effectively with Synopsys’ transactors and Verification IPs (VIP), thus maximizing test coverage and capturing corner cases that may otherwise be overlooked. This integration ensures thorough testing of PCIe subsystems, providing developers with the confidence that their designs meet the highest standards of compatibility and performance.

Performance Verification and PCIe Protocol Evolution

In addition to compliance testing, performance verification is a critical aspect of pre-silicon design validation for PCIe interfaces. Upgrading to a newer PCIe protocol generation, such as moving from PCIe Gen 5 to PCIe Gen 6, involves significant investment, so it is vital to verify that the system is fully equipped to handle the additional bandwidth and performance enhancements offered by the newer protocol. Performance validation helps determine whether a developing SoC can manage various workloads and uncovers any potential bottlenecks that might prevent the system from realizing the full benefits of the upgrade.

Synopsys’ support for integrating the Compliance Suite adds an additional layer of performance validation, allowing users to run comprehensive performance scenarios, particularly focused on the PCI subsystem. This ensures that the PCIe subsystem not only complies with Arm architectural requirements but also achieves optimal performance across a range of SoC applications.

Conclusion

By ensuring that software stacks are portable and interoperable across a diverse range of platforms—from cloud servers to edge devices and IoT applications—Arm’s SystemReady program plays a pivotal role in minimizing design risks. This standardization significantly reduces design costs and accelerates time-to-market, enabling companies to deliver products that function seamlessly out-of-the-box.

SystemReady not only enhances design efficiency but also opens new avenues for Total Addressable Market (TAM) expansion. By ensuring compatibility and reducing development complexity, the program allows Arm’s partners to target a broader range of industries and applications, providing them with a distinct competitive advantage.

These efforts underscore Arm’s commitment to empowering its ecosystem and driving innovation across the industry.

Also Read:

The Immensity of Software Development the Challenges of Debugging (Part 1 of 4)

The Immensity of Software Development and the Challenges of Debugging Series (Part 2 of 4)

The Immensity of Software Development and the Challenges of Debugging (Part 3 of 4)

The Immensity of Software Development and the Challenges of Debugging Series (Part 4 of 4)


PDF Solutions Hosts Executive Conference December 12 on AI’s Power to Transform Semiconductor Design and Manufacturing
by Daniel Nenni on 12-05-2024 at 6:00 am

PDF Solutions, Inc. will host an AI Executive Conference Thursday, December 12, in San Francisco featuring keynotes, presentations, panels and demonstrations offering insights into the power of AI to transform semiconductor design and manufacturing. The conference immediately follows the 70th Annual IEEE International Electron Devices Meeting (IEDM).

Talks will cover the state of the art and best practices to design, deploy, scale, and manage AI/ML solutions across the global semiconductor industry, from PDF Solutions executives, other industry thought leaders, solutions experts, partners, and users.

Three keynote presentations will look at how AI is currently being deployed in semiconductor manufacturing. Aziz Safa, Vice President and General Manager at Intel, will describe “How Analytics and AI are helping to transform a leading semiconductor company.” Smitha Mathews from ADI will discuss how semiconductor companies can “Get ready for AI” and the lessons learned from a real-life deployment. John Kibarian, PDF Solutions’ CEO, will explain how AI is the next evolution of the PDF Solutions portfolio.

Five panels will appraise use cases for GenAI, AI for 3D device test, trust, AI-enabled digital transformation, and digital twins, including:

  • “GenAI for semiconductor: use cases, solutions and demonstrations” with panelists from PDF Solutions, SAP, Voltai and Yurts.
  • “AI for test in a world of hybrid 3D devices” includes Advantest, Siemens, Teradyne, a leading foundry and Outsourced Semiconductor Assembly and Test (OSAT) service spokespersons.
  • Panelists from PDF Solutions, Yurts, and an enterprise applications independent software vendor discuss “Revisiting the notion of trust in an AI solutions world.”
  • “How can semiconductor companies accelerate their digital transformation with AI” has spokespersons from ADI, PDF Solutions, a Foundry and IDMs.
  • A final panel “AI enabled digital twin for semiconductor manufacturing equipment” has panelists from PDF Solutions and Equipment OEMs.

Additional speakers are:

Mike Campbell, Vice President of Engineering at Qualcomm; Shyam Gooty, Microsoft’s Senior Director Product Engineering; Jean Philippe Fricker, Founder and Chief System Architect at Cerebras; Anton Devilliers, TEL’s Vice President of R&D; and Siemens’ Jayant D’Souza, Principal Technical Product Manager, and Marc Hunter, Director Product Management.

Also speaking are Ken Butler, Senior Director of Applications Marketing with Advantest; Eli Roth, Product Manager at Teradyne; SAP’s Sunil Gandhi, Senior Director, Industry Executive, High Tech; and Yurts’ Jason Schnitzer, CTO, and Steve Mahoney, Vice President of Product Management. Handel Jones, Founder and CEO of International Business Strategies (IBS), will be the dinner keynote speaker.

As part of the program, PDF Solutions will demonstrate its ModelOps product portfolio, the AI infrastructure for the global semiconductor supply chain.

Registration

The one-day Executive Conference will take place Thursday, December 12, at the St. Regis Hotel in San Francisco starting with 8 a.m. registration. The conference begins at 9 a.m. and concludes at 5:30 p.m. A reception and dinner follow. Registration is open.

Date: December 12, 2024, following the 70th Annual IEEE International Electron Devices Meeting.

Location: St. Regis Hotel, 125 3rd St., San Francisco, Calif. 94103

About PDF Solutions

PDF Solutions (Nasdaq: PDFS) provides comprehensive data solutions designed to empower organizations across the semiconductor and electronics ecosystems to improve the yield and quality of their products and operational efficiency for increased profitability. The Company’s products and services are used by Fortune 500 companies across the semiconductor ecosystem to achieve smart manufacturing goals by connecting and controlling equipment, collecting data generated during manufacturing and test operations, and performing advanced analytics and machine learning to enable profitable, high-volume manufacturing.

Founded in 1991, PDF Solutions is headquartered in Santa Clara, California, with operations across North America, Europe, and Asia. The Company (directly or through one or more subsidiaries) is an active member of SEMI, INEMI, TPCA, IPC, the OPC Foundation, and DMDII. For the latest news and information about PDF Solutions or to find office locations, visit https://www.pdf.com/.

Also Read:

WEBINAR: Elevate Your Analog Layout Design to New Heights

Silicon Creations is Fueling Next Generation Chips

I will see you at the Substrate Vision Summit in Santa Clara


Accelerating Electric Vehicle Development – Through Integrated Design Flow for Power Modules
by Kalar Rajendiran on 12-04-2024 at 10:00 am

The development of electric vehicles (EVs) is key to transitioning to sustainable transportation. However, designing high-performance EVs presents significant challenges, particularly in power module design. Power modules, including inverters, bulky DC capacitors, power management ICs (PMICs), and battery packs, are critical in managing the high voltage and current systems in EVs. These modules often operate at over 1,000V and can supply hundreds of amperes, generating substantial heat, with temperatures potentially rising to 200-250°C. As power distribution systems shrink, effective thermal management becomes essential. Power modules also must meet strict safety standards, making a system-level approach to integrating ICs, packages, and PCBs crucial for avoiding safety risks and delays.

Cadence recently sponsored a webinar on the topic of integrated design flow for power modules for electric vehicles. The webinar was hosted by Amlendu Shekhar Choubey, Director of Product Management; Athar Kamal, Lead Product Engineer; and Ritabrata Bhattacharya, Senior Principal Product Engineer, all from Cadence.

Current Challenges in Power Module Design

The design process for power modules is often fragmented, with insufficient integration between electrical, mechanical, and thermal design. This leads to miscommunication, delays, and increased costs. Simulation tools are limited, especially for electromagnetic (EM) analyses, requiring specialized expertise. Many designers resort to lab testing, which can be too late to address critical issues impacting safety, performance, and reliability.

Thermal management and parasitic effects are significant challenges in power module design. High power requirements generate heat that must be managed to avoid component failure. Parasitic inductances from bondwires, copper traces, and other components can lead to overshoot during switching, causing performance degradation and electromagnetic interference (EMI). Addressing these issues early in the design cycle is crucial to avoid critical system failures later in the process.
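
The overshoot mechanism follows directly from the voltage across a parasitic loop inductance, V = L * di/dt. The short calculation below uses representative numbers for a fast-switching EV inverter leg; they are illustrative, not taken from the webinar.

```python
L_stray_nH = 20.0     # bondwires + busbar loop inductance (assumed)
di_A       = 400.0    # current swing turned off by the power switch
dt_ns      = 50.0     # switching transition time

# V = L * di/dt, converted to SI units:
v_overshoot = (L_stray_nH * 1e-9) * (di_A / (dt_ns * 1e-9))
print(f"overshoot ~ {v_overshoot:.0f} V")   # ~160 V on top of the DC bus
```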

The Ideal Power Module Design Flow

An ideal power module design flow integrates electrical, mechanical, and thermal considerations from the start. Schematic-driven layouts with SPICE-enabled simulations ensure functionality is validated before layout. Quick extraction of parasitics using the 3D-Quasi-Static solver and integration back into simulations is essential to understand their impact. Auto-generating post-layout schematics aligns the design with the original schematic, reducing errors. Thermal analysis tools, such as Celsius Thermal Solver, help optimize cooling solutions early on. 3D EM tools like Clarity 3D Solver help with management of electromagnetic effects.

Cadence’s Advanced Tools for Optimization

Cadence’s advanced solutions, including Allegro X, PSpice, Clarity 3D Solver, and Celsius Thermal Solver, offer an integrated and thermally aware design flow that ensures both functional safety and reliability. Allegro X enables PCB layout, with advanced capabilities for component placement, routing, and thermal management, integrated with other Cadence tools. PSpice allows for electrical simulations and parasitic effect analysis, ensuring the design meets functional and safety requirements.

Clarity 3D Solver provides EM simulation, optimizing the power module’s electromagnetic characteristics and reducing overshoot to improve reliability. Celsius Thermal Solver predicts temperature distribution, identifies hot spots, and optimizes cooling solutions to mitigate thermal issues early. Together, these tools create an integrated design process, reducing thermal runaway risk and addressing EMI and parasitic effects before they affect the final product.

Seamless Integration of Cadence’s Tools

Cadence’s platform supports collaboration across engineering disciplines. Designers do not need to be experts in every area but must understand how their expertise fits within the larger ecosystem. They can execute the complete design flow without leaving their preferred environment to complete all the analyses needed for a reliable design. This approach improves decision-making and ensures optimized designs. Thermal and warpage analysis can be triggered from within the layout tool, streamlining workflows and reducing errors.

The Future of Power Module Design and Reliability

Looking ahead, the next challenge for EV power module design will be estimating Mean Time Between Failures (MTBF) based on data generated during the design process. Predictive analytics will be key for assessing reliability and preventing failures, ensuring the durability of EV systems.

Summary

An integrated approach to power module design is essential for addressing the complex challenges in EV development. Advanced tools that combine electrical, thermal, mechanical, and EM simulations within a unified platform help streamline the design process, reduce costs, and accelerate time-to-market. Cadence’s design flow bridges traditional gaps, enabling the creation of safer, more efficient, and reliable power modules for the next generation of EVs. With tools like Allegro X, PSpice, Clarity 3D Solver, and Celsius Thermal Solver, the EV industry can benefit from a thermally aware, end-to-end integrated design solution that enhances functional safety and reliability.

For more details, refer to the following:

Cadence whitepaper titled “Power Module Design for Electric Vehicles – Addressing Reliability and Safety.”

Cadence Automotive Solutions page.

Also Read:

Compiler Tuning for Simulator Speedup. Innovation in Verification

Cadence Paints a Broad Canvas in Automotive

Analog IC Migration using AI