Enhancing System Reliability with Digital Twins and Silicon Lifecycle Management (SLM)
by Kalar Rajendiran on 12-09-2024 at 6:00 am

Synopsys SLM Solution Components

As industries become more reliant on advanced technologies, the importance of ensuring the reliability and longevity of critical systems grows. Failures in components, whether in autonomous vehicles, high-performance computing (HPC), healthcare devices, or industrial automation, can have far-reaching consequences. Predicting and preventing failures is essential, and technologies like Digital Twins and Silicon Lifecycle Management (SLM) are key to achieving this. These tools provide the ability to monitor, analyze, and predict failures, thereby improving the dependability and performance of systems.

“The reliability, availability, and serviceability (RAS) of complex systems such as data center infrastructure has never been more complex or critical,” said Jyotika Athavale, director of Engineering Architecture at Synopsys. “By integrating silicon health with digital twin simulations, we unlock powerful new capabilities for predictive modeling. This enables technology leaders to optimize system design and performance in new, impactful ways.”

Athavale addressed this topic in a talk she recently delivered at the Supercomputing Conference 2024. She leads quality, reliability, and safety research, pathfinding, standards, and architectures for SLM solutions across RAS-sensitive application domains.

Why Digital Twins Are Good for Prognostics

A Digital Twin is a virtual replica of a physical asset, created by combining real-time sensor data with simulation models. Digital twins enable continuous monitoring of system health and provide valuable insights for prognostics, which is the process of predicting future failures. By simulating different scenarios, digital twins can predict Remaining Useful Life (RUL), helping operators plan maintenance or replacements before a failure occurs. RUL refers to the time a device or component is expected to function within its specifications before failure. This proactive approach reduces downtime and optimizes system resources.

Types of Failures in Modern Systems

Failures in modern systems are categorized into permanent, transient, and intermittent faults. Permanent faults, arising from wearout mechanisms such as Time-Dependent Dielectric Breakdown (TDDB), Negative Bias Temperature Instability (NBTI), and Hot Carrier Injection (HCI), develop over time and lead to errors that ultimately result in failure. In contrast, transient faults are temporary disruptions caused by external factors like radiation, which do not result in lasting damage.

In sub-20nm process technologies, degradation caused by latent defects continues to evolve into the useful-life phase of the bathtub curve, leading to issues like Silent Data Corruption (SDC), which can go unnoticed until a critical failure occurs.

Why Failures Are Increasing

Despite technological advancements, failures are rising due to several factors. As devices shrink in size and increase in complexity, they become more vulnerable to failure. Smaller transistors, particularly below 20nm, are more susceptible to intrinsic wearout. Moreover, the demand for higher performance leads to greater stress on semiconductors. With interconnected systems in critical applications, even a single failure can have serious consequences, making predictive maintenance even more essential.

“To keep pace with these challenges, it’s essential to shift from reactive to predictive maintenance strategies,” said Athavale. “By integrating real-time monitoring and predictive insights at the silicon level, we can better manage the complexities of modern systems, helping avoid potential failures and make maintenance more manageable.”

How to Monitor Silicon Health

Monitoring the health of semiconductor devices is crucial for identifying early signs of degradation. With embedded monitors integrated during the design phase, data on key performance metrics—such as voltage, temperature, and timing—can be continuously collected and analyzed. Silicon Lifecycle Management (SLM) systems include PVT monitors to track process, voltage, and temperature variations, path margin monitors to ensure signal paths remain within safe operating margins, and clock delay monitors to detect timing deviations. SLM also includes in-field analytics, which enables real-time monitoring and proactive decision-making throughout the device lifecycle.
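
As a rough illustration of what such in-field monitor data can be used for, the sketch below (in C++) compares periodic voltage, temperature, and timing-margin readings against design-time operating limits and flags excursions. It is a minimal, hypothetical example: the structure names, limits, and readings are invented for illustration and do not represent Synopsys SLM APIs or data formats.

    // Minimal sketch (not Synopsys SLM code): flag monitor readings that leave
    // their design-time operating window. All names and limits are placeholders.
    #include <iostream>
    #include <string>
    #include <vector>

    struct MonitorSample {
        double voltage_v;      // supply voltage from a voltage monitor
        double temperature_c;  // die temperature from a thermal sensor
        double path_margin_ps; // slack reported by a path margin monitor
    };

    struct OperatingLimits {
        double v_min = 0.72, v_max = 0.88; // volts
        double t_max = 105.0;              // degrees C
        double margin_min_ps = 15.0;       // minimum acceptable timing slack
    };

    // Return human-readable violations for one sample.
    std::vector<std::string> checkSample(const MonitorSample& s, const OperatingLimits& lim) {
        std::vector<std::string> issues;
        if (s.voltage_v < lim.v_min || s.voltage_v > lim.v_max)
            issues.push_back("supply voltage out of range");
        if (s.temperature_c > lim.t_max)
            issues.push_back("die temperature above limit");
        if (s.path_margin_ps < lim.margin_min_ps)
            issues.push_back("timing margin below safe threshold");
        return issues;
    }

    int main() {
        OperatingLimits limits;
        std::vector<MonitorSample> telemetry = {
            {0.80, 72.0, 42.0},  // healthy sample
            {0.79, 108.5, 13.0}, // hot and low on timing margin
        };
        for (size_t i = 0; i < telemetry.size(); ++i)
            for (const auto& issue : checkSample(telemetry[i], limits))
                std::cout << "sample " << i << ": " << issue << "\n";
        return 0;
    }

In a real deployment, flagged excursions would feed the in-field analytics layer described above rather than being printed to a console.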

Analyzing and Predicting Failures

Once the data is collected, it is analyzed to detect potential failures. Prognostic systems use advanced algorithms to analyze degradation patterns, such as those caused by TDDB, NBTI, and HCI, to predict when a component might fail. Predicting RUL is vital for managing system reliability, as early identification of failure allows for corrective actions like maintenance or replacement before the failure occurs.
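
To make the RUL idea concrete, the sketch below fits a straight line to a monitored degradation signal, such as a shrinking path timing margin, and extrapolates to a failure threshold. This is a deliberately simplified illustration in C++ with made-up numbers, not the algorithm of any commercial prognostic system; production SLM analytics combine many monitors and far richer degradation models.

    // Minimal sketch (illustrative only): estimate remaining useful life (RUL)
    // by fitting a line to a degrading metric and extrapolating to the point
    // where it crosses a failure threshold.
    #include <cstdio>
    #include <vector>

    struct Reading { double hours; double margin_ps; }; // time, monitored margin

    // Ordinary least-squares fit: margin = a + b * hours.
    void fitLine(const std::vector<Reading>& r, double& a, double& b) {
        double n = r.size(), sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (const auto& p : r) {
            sx += p.hours; sy += p.margin_ps;
            sxx += p.hours * p.hours; sxy += p.hours * p.margin_ps;
        }
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        a = (sy - b * sx) / n;
    }

    int main() {
        // Periodic in-field readings of a path timing margin (hypothetical data).
        std::vector<Reading> history = {
            {0, 40.0}, {1000, 38.4}, {2000, 36.9}, {3000, 35.5}, {4000, 33.8}};
        const double failure_threshold_ps = 15.0; // margin at which the path fails

        double a, b;
        fitLine(history, a, b);
        if (b >= 0) { std::puts("no degradation trend detected"); return 0; }

        double fail_time = (failure_threshold_ps - a) / b; // hours at threshold
        double rul = fail_time - history.back().hours;     // hours remaining
        std::printf("estimated RUL: %.0f hours\n", rul);
        return 0;
    }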

RUL Prediction Using Synopsys SLM Data Solution

Synopsys’ SLM solution enables accurate RUL predictions through advanced monitoring and analytics, ensuring predictive maintenance and enhanced device reliability.

Key components of the Synopsys SLM solution include SLM PVT Monitors, which track process, voltage, and temperature variations to assess wear; SLM Path Margin Monitors, which detect timing degradation in critical paths; SLM Clock Delay Monitors, which identify clock-related performance anomalies; and SLM In-Field Analytics, which analyzes real-time data to predict failure trends.

The benefits of RUL prediction with Synopsys SLM include predictive maintenance, optimized reliability vs. performance, lifecycle and end-of-life planning, outlier detection, and catastrophic failure prevention. Corrective actions based on RUL analysis can include early decisions on recalls, implementing lifetime-extending mitigation strategies, and transitioning devices to a safe state to prevent further damage. Synopsys SLM provides actionable insights to minimize downtime, extend device lifespan, and ensure reliable performance throughout the lifecycle of semiconductor devices.

Summary

The combination of digital twins and Silicon Lifecycle Management (SLM) provides a powerful approach to managing the health and reliability of semiconductor devices. By enabling continuous monitoring, accurate failure prediction, and timely corrective actions, these technologies offer organizations tools to improve dependability, optimize performance, and reduce downtime. As electronic systems grow more complex and mission-critical, digital twins and SLM are becoming essential for predictive maintenance, ensuring long-term system reliability, and preventing costly failures.

Also Read:

A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design

The Immensity of Software Development and the Challenges of Debugging Series (Part 4 of 4)

Synopsys-Ansys 2.5D/3D Multi-Die Design Update: Learning from the Early Adopters

 


Podcast EP265: The History of Moore’s Law and What Lies Ahead with Intel’s Mr. Transistor
by Daniel Nenni on 12-08-2024 at 6:00 am

Dan is joined by Dr. Tahir Ghani, Intel senior fellow and director of process pathfinding in Intel’s Technology Research Group. Tahir has a 30-year career at Intel working on many innovations, including strained silicon, high-K metal gate devices, FinFETs, RibbonFETs, and backside power delivery (BSPD), among others. He has filed more than 1,000 patents over his career at Intel and was honored as Intel’s 2022 Inventor of the Year. He has the nickname of “Mr. Transistor” since he’s passionate about keeping Moore’s Law alive.

In this very broad discussion, Tahir outlines the innovations over the past 60 years of Moore’s Law and how these advances will pave the way to a trillion transistor device in this decade. Tahir explains how transistor scaling, interconnect advances, chiplet-based design and advanced packaging all work together to keep Moore’s Law scaling alive and continue to deliver exponential increases in innovation.

Tahir will present an invited paper, “The Incredible Shrinking Transistor – Shattering Perceived Barriers and Forging Ahead,” at a special session of the upcoming 70th IEDM. IEDM will be held December 7-11, 2024 in San Francisco. You can learn more about IEDM and register to attend here. His presentation will be Tuesday, December 10 at 2:20 PM. Tahir also reviews several other significant Intel papers that will be presented at IEDM.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP264: How Sigasi is Helping to Advance Semiconductor Design with Dieter Therssen
by Daniel Nenni on 12-06-2024 at 10:00 am

Dan is joined by Dieter Therssen, CEO of Sigasi. Dieter started his career as a hardware design engineer, using IMEC’s visionary tools and design methodologies in the early days of silicon integration. Today, being CEO of Sigasi, a fast-growing, creative technology company, is a perfect fit for Dieter. Having worked in that space for several companies, and well-rooted in the field of semiconductors, he continues to enjoy the magic of a motivated team.

Dan explores the changing landscape of semiconductor design with Dieter. The demands of higher complexity and multi-technology systems are discussed. The impact of AI, and specifically generative AI, is also explored with a view toward how the unique front-end design tools offered by Sigasi can move technology forward.

ASIC/FPGA design and safety/security requirements are also reviewed in this broad discussion. Dieter explains how Sigasi is helping these trends and also discusses the new and unique community version of the Sigasi tools.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: GP Singh from Ambient Scientific
by Daniel Nenni on 12-06-2024 at 6:00 am

Gajendra Prasad Singh, also known as GP Singh, is a seasoned tech professional with over 26 years of experience in advanced semiconductor chips. With a zeal to solve the most complex technical problems, he embarked on a difficult journey to create programmable AI microprocessors that provide high performance in a cost-effective manner while still consuming low power. To realize this vision, he co-founded Ambient Scientific along with a team of visionaries from California’s Silicon Valley. GP’s extensive technical experience building cutting-edge chips and his record of successful leadership at prestigious global companies gave him not only a deep understanding of the scientific first principles required for such breakthrough innovations but also the business acumen to ensure practical feasibility. With an innate passion for everything electronics and computers, GP Singh is a fierce advocate of using semiconductors for the betterment of human lives.

Tell us about your company?

Ambient Scientific is a fabless semiconductor company born in Silicon Valley, pioneering ultra-low power AI processors that are fully programmable to enable endless AI applications.

Our breakthrough Analog In-Memory Compute technology, called DigAn®, is making AI computing more powerful and efficient than ever before, without compromising on flexibility and programmability. Compared to traditional AI hardware, our processors deliver thousands of times more AI performance at the same power consumption, or thousands of times less power consumption for the same AI performance.

Our first product GPX10 leverages the DigAn® architecture to bring battery-powered, cloud-free, on-device AI applications to life, something considered nearly impossible before. From always-on voice detection to FaceID to predictive maintenance, GPX10 is enabling endless applications in various industries, all while running on as little as a coin cell battery with no dependence on the cloud or an internet connection.

With a full-stack SDK designed to support industry-standard AI frameworks (TensorFlow, Keras, etc.) and an AI compiler to enable custom neural networks, we enable rapid time to market for your AI applications. Order our DVK today and bring the power of AI away from the cloud, right to your fingertips.

What problems are you solving?

While the AI application and software landscape has exploded in complexity, hardware has failed to keep up. Current chips used for AI processing (GPUs) were designed with graphics processing, not AI computing, in mind, making them inefficient and extremely expensive. This is clearly visible in the rising compute costs and power consumption for all AI, ranging from gigantic LLMs to edge AI for smaller electronic devices. We at Ambient Scientific have solved these problems by inventing not just analog in-memory computing but also a new instruction set architecture designed specifically for AI computing. Our analog matrix multiplication engines deliver 40X AI performance at 70X lower power consumption compared to equivalent GPUs. Built with scalability and flexibility in mind, our architecture enables AI processors all the way from the cloud and server level to the MCU level for a wide variety of applications across several industries. Ambient Scientific’s mission is to make AI computing powerful, energy efficient, and affordable for everyone alike.

What application areas are your strongest?

Our first product, GPX10, is an AI processor targeted at on-device AI applications for the tiniest of battery-powered devices. It helps move AI processing from the confines of the cloud directly onto the device, even if it’s running on as little as a coin cell battery. This improves application reliability, latency, data security, as well as total cost of ownership. Some of our strongest application areas popular with customers are industrial predictive maintenance at the edge, anomaly detection on MedTech devices, and cloud-free voice control for consumer products. While these applications would commonly struggle with latency, reliability, or minuscule battery lives due to AI processing, our processor solves all these problems without forcing any compromises, even on affordability.

What keeps your customers up at night?

With the widespread utility of AI, product makers have realized the importance of incorporating AI features into their product roadmap to remain competitive and maintain differentiation. These product makers are now faced with a difficult choice:

  1. Run AI processing in the cloud and sacrifice latency, data privacy and reliability due to complete dependence on a network connection.
  2. Run AI on device and sacrifice accuracy and power efficiency which translates into significantly compromised battery life.

These limitations, which ultimately translate into higher costs or compromised product quality, are a direct consequence of the processors currently available in the market, none of which were designed for AI processing. They force debilitating sacrifices on product makers that keep them up at night, stuck between a rock and a hard place.

What does the competitive landscape look like and how do you differentiate?

The AI compute market for small electronic devices includes either MCUs, entry-level GPUs, or new-age NPUs. While MCUs cannot deliver enough performance for meaningful AI compute, entry-level GPUs consume too much power, occupy too much area, and are not affordable enough to fit within the boundaries of commercial viability for battery-powered on-device AI applications. Several new-age NPUs claim to deliver low-power AI solutions, but with a heavy price to pay in lack of programmability. They tend to be fixed-function, with pre-defined neural networks and minimal room for customization. Our ultra-low-power AI chips not only deliver the highest performance per unit of power consumption (>7 TOPS/W), they are smaller than a fingernail, affordable, and, most importantly, completely programmable. Product makers care about programmability so they can differentiate their products from competitors’ by owning the software, such as their proprietary AI algorithms. Programmability also makes their products future-proof, with the ability to push updates over the air as the software and application landscape evolves. Compared to fixed-function or application-specific NPUs, our processors offer a versatile and flexible platform for product makers to differentiate themselves with ultra-low-power AI features well into the future.

What new features/technology are you working on?

Our claim to fame is a breakthrough in analog in-memory computing technology that enables us to leverage a combination of high-speed digital and analog circuits designed specifically for AI computing. By leveraging a cubic in-memory architecture and the analog matrix multiplication circuit, we’ve solved all the bottlenecks for AI computing while minimizing energy consumption to a fraction of contemporary architectures. Not only this, we’ve also created a custom instruction set architecture from the ground up to enable flexibility and scalability in AI computing. This means we can build a wide range of processors, from AI MCUs to high-speed computer vision processors. Similarly, our end-to-end software stack scales with our processors to adapt to the application needs of software developers for a wide variety of applications in several industries.

Also Read:

CEO Interview: Ollie Jones of Sondrel

CEO Interview: Dr. Yunji Corcoran of SMC Diode Solutions

CEO Interview: Rajesh Vashist of SiTime

CEO Interview: Dr. Greg Newbloom of Membrion


SystemReady Certified: Ensuring Effortless Out-of-the-Box Arm Processor Deployments
by Lauro Rizzatti on 12-05-2024 at 10:00 am

When contemplating the Lego-like hardware and software structure of a leading system-on-chip (SoC) design, a mathematically inclined mind might marvel at the tantalizing array of combinatorial possibilities among its hardware and software components. In contrast, the engineering team tasked with its validation may have a more grounded perspective. Figuratively speaking, the team might be more concerned with calculating how much midnight oil will need to be burned to validate such a complex system.

The numerous interactions between hardware components such as large arrays of various processor types, memory types, interconnect networks, and a wide assortment of standard and custom peripherals and logic, with those of the software, like bare-metal software, drivers, and OS hardware-dependent layers, demand exhaustive functional verification. This process is computationally intensive, requiring billions of cycles to establish confidence in a bug-free design before manufacturing. The challenge is magnified by the relentless pace of technology, with new hardware and software versions constantly emerging while support for older iterations persists.

The Economies of Design Debug

A well-known axiom in the field of electronic design emphasizes that the cost of fixing a design bug soars an order of magnitude at each successive stage of the verification process. What might cost a mere dollar to correct at the basic block-level verification stage can skyrocket to a million dollars when the issue surfaces at the full SoC level, where hardware and software tightly interact.

The stakes become even higher if a design flaw goes undetected until after silicon fabrication. Post-silicon bug detection not only challenges engineering teams but can also lead to exorbitant costs that may drain a company’s financial resources. For small enterprises, such a scenario could be catastrophic, potentially leading to bankruptcy due to the redesign expenses and missed revenues caused by delayed product launches.

In the fiercely competitive semiconductor industry, the margin for error is razor thin. Therefore, rigorous verification at each stage of the design process is not just a best practice—it’s a critical safeguard against the potentially ruinous consequences of post-silicon bug detection.

On the bright side, the electronic design automation (EDA) industry has been investing heavily in resources and innovation to tackle the challenge of pre-silicon verification. The Shift-Left verification methodology is a testament to the industry’s commitment to addressing this challenge.

Arm: Linchpin Example of the Hardware/Software Integration Challenges

Among processor companies, Arm is a case study because of its vast catalog of IP solutions. Arm offers a wide range of IPs, platforms, and solutions, including CPUs, GPUs, memory controllers, interconnects, security, automotive, AI, IoT, and other technologies, each designed to meet the needs of different markets and applications. While the exact number is not publicly known, once updates and new releases are added, it amounts to thousands of different parts.

SoC designers using Arm components face an uphill verification challenge. Once they have selected the IP components, they must integrate them into complex SoC designs, add a software stack to bring the design to life, and ensure compliance, that is, compatibility or interoperability of the software with the hardware.

This process is fraught with uncertainties and risks.

Often, root causes of integration issues can be traced to non-compliant hardware, such as non-standard PCIe ECAM, PCIe ghost devices, or customized components like universal asynchronous receiver-transmitters (UARTs) or generic interrupt controllers (GICs). These issues can lead to design malfunctions and potentially to serious failures. For instance, systems with complex PCIe hierarchies may lack firmware workarounds, custom OS distributions may receive limited security updates, and Windows servers and clients may be incompatible with non-compliant PCIe ECAM.

To address these issues, a widely used but increasingly outdated method in the electronics industry is post-silicon testing. While it serves the purpose of debugging hardware flaws after fabrication, it is inherently inefficient. This approach contradicts the well-established principle of exponential cost increase, summarized by the phrase “the sooner, the cheaper.” By delaying the detection of design flaws until after silicon manufacturing, companies incur costly silicon re-spins and face extended timelines.

Fortunately, these issues can be mitigated much earlier in the development cycle through pre-silicon design verification. Pre-silicon verification, which includes simulation, emulation, formal and timing verification, allows engineers to identify and resolve problems before chips are fabricated, significantly reducing both costs and risks.

Arm’s Game-Changing Solution: From ServerReady to SystemReady

To mitigate this challenge, specifically to eliminate or at least reduce design re-spins and accelerate time-to-market, Arm introduced the SystemReady Certification Program in 2022. Building on the success of the ServerReady program, which was launched in 2018 and targeted server applications, SystemReady expands the coverage to include designs like edge devices, IoT applications, automotive systems, and more.

In general, hardware platforms provided by semiconductor partners come with their own software stacks, i.e., firmware, drivers, and operating systems. These are often siloed, creating challenges for OS vendors and independent software vendors (ISVs) who need to run applications across different platforms, as these setups tend to be highly specific and fragmented. SystemReady aims to break down these silos, enabling software portability and interoperability across all Arm-based A-Class devices. When third-party operating systems are run on devices complying with a minimum set of hardware and firmware requirements based on Arm specifications, they boot seamlessly, and applications run smoothly.

SystemReady Program Foundation

The foundation of Arm’s SystemReady program lies in two key specifications. First, the Base System Architecture (BSA), a formal set of compute platform definitions encompassing a variety of systems from the cloud to the IoT edge, ensures that in-house developed or third-party sourced software works seamlessly across a universe of Arm-based hardware. Second, a set of accompanying firmware specifications called the Base Boot Requirements (BBR) complements the BSA definitions. These sets of rules are encapsulated in the BSA Compliance Suite, accessible on GitHub.

The suite is designed to run compliance tests during pre-silicon validation, eliminating the need for executing full operating systems to validate the environment. This early-stage validation prevents costly silicon respins, expedites system-level debugging, and accelerates time-to-market.

Arm’s Thriving SystemReady Partner Ecosystem

To reach a vast and diverse customer base while considerably enhancing the value of the Arm ecosystem, Arm has strategically partnered with a wide array of companies, including leaders in EDA, IP, and silicon providers. These collaborations play a critical role in driving the success of Arm’s SystemReady program, a certification initiative that ensures seamless compatibility across hardware platforms and software stacks.

Leading EDA Firms Accelerate SystemReady Certification Success

The pre-silicon validation of software stacks on newly designed hardware platforms demands hardware-assisted verification (HAV) platforms, such as emulation and FPGA prototyping. These platforms are crucial for ensuring that new designs function correctly across the range of real-world conditions they will face. Best-in-class emulators and FPGA prototypes support comprehensive verification and validation processes, including hardware debugging, hardware-software co-verification, power and performance analysis, and even post-silicon testing for final checks.

Prominent suppliers of hardware-assisted verification platforms have joined Arm’s SystemReady program to enable their customers developing Arm SoCs and components to validate BSA compliance on HAV platforms using Transactors and Verification IPs. By participating in this program, EDA companies enable developers to validate software before silicon is even taped out, significantly reducing risks and development costs while accelerating time-to-market. The “PCIe SystemReady Certification Case Study” is an example of how a collaborative approach to pre-silicon validation can lead to successful certification and market-ready products.

Case Study: PCIe SystemReady Certification

The PCIe protocol is one of the most widely adopted and popular interfaces in the electronics industry, supporting a broad spectrum of applications, including networking, storage, GPU accelerators, and network accelerators. Each of these applications has distinct workload profiles that interact uniquely with system components, making PCIe a versatile yet complex protocol to integrate into hardware platforms.

Arm’s SystemReady certification program for the Arm architecture implementation including the complex PCIe subsystems is designed to ensure that these diverse applications can run seamlessly across various hardware environments. Achieving this certification requires adherence to a stringent set of compliance rules. These rules involve injecting specific sequences into the PCI port and monitoring responses at the PCI protocol layer, ensuring that the system can handle different types of workloads in real-world scenarios.

Synopsys and PCIe SystemReady Compliance

To streamline this process, Synopsys provides a PCI endpoint model specifically designed to meet Arm’s BSA certification standards. As shown in Figure 1, the SystemReady compliance program is a collaborative effort between Arm, Synopsys, and silicon providers. While the silicon partner focuses on developing the boot code, Synopsys contributes the Platform Abstraction Layer (PAL), a crucial software component that ensures smooth execution of Arm’s Compliance Suite tests on the SoC.

Figure 1: Block diagram describing how Arm, Synopsys, and silicon providers work together

The PAL acts as an intermediary, enabling the Compliance Suite to communicate effectively with Synopsys’ transactors and Verification IPs (VIP), thus maximizing test coverage and capturing corner cases that may otherwise be overlooked. This integration ensures thorough testing of PCIe subsystems, providing developers with the confidence that their designs meet the highest standards of compatibility and performance.

Performance Verification and PCIe Protocol Evolution

In addition to compliance testing, performance verification is a critical aspect of pre-silicon design validation for PCIe interfaces. Upgrading a system to a newer PCIe protocol generation, such as moving from PCIe Gen 5 to PCIe Gen 6, involves significant investment, so it is vital to verify that the system is fully equipped to handle the additional bandwidth and performance enhancements offered by the newer protocol. Performance validation helps determine whether a developing SoC can manage various workloads and uncovers any potential bottlenecks that might prevent the system from realizing the full benefits of the upgrade.

Synopsys’ support for integrating the Compliance Suite adds an additional layer of performance validation, allowing users to run comprehensive performance scenarios, particularly focused on the PCI subsystem. This ensures that the PCIe subsystem not only complies with Arm architectural requirements but also achieves optimal performance across a range of SoC applications.

Conclusion

By ensuring that software stacks are portable and interoperable across a diverse range of platforms—from cloud servers to edge devices and IoT applications—Arm’s SystemReady program plays a pivotal role in minimizing design risks. This standardization significantly reduces design costs and accelerates time-to-market, enabling companies to deliver products that function seamlessly out-of-the-box.

SystemReady not only enhances design efficiency but also opens new avenues for Total Addressable Market (TAM) expansion. By ensuring compatibility and reducing development complexity, the program allows Arm’s partners to target a broader range of industries and applications, providing them with a distinct competitive advantage.

These efforts underscore Arm’s commitment to empowering its ecosystem and driving innovation across the industry.

Also Read:

The Immensity of Software Development and the Challenges of Debugging (Part 1 of 4)

The Immensity of Software Development and the Challenges of Debugging Series (Part 2 of 4)

The Immensity of Software Development and the Challenges of Debugging (Part 3 of 4)

The Immensity of Software Development and the Challenges of Debugging Series (Part 4 of 4)


PDF Solutions Hosts Executive Conference December 12 on AI’s Power to Transform Semiconductor Design and Manufacturing
by Daniel Nenni on 12-05-2024 at 6:00 am

PDF Solutions, Inc. will host an AI Executive Conference Thursday, December 12, in San Francisco featuring keynotes, presentations, panels and demonstrations offering insights into the power of AI to transform semiconductor design and manufacturing. The conference immediately follows the 70th Annual IEEE International Electron Devices Meeting (IEDM).

Talks will cover the state of the art and best practices to design, deploy, scale, and manage AI/ML solutions across the global semiconductor industry, delivered by PDF Solutions executives, other industry thought leaders, solutions experts, partners, and users.

Three keynote presentations will look at how AI is currently being deployed in semiconductor manufacturing. Aziz Safa, Vice President and General Manager at Intel, will describe “How Analytics and AI are helping to transform a leading semiconductor company.” Smitha Mathews from ADI will discuss how semiconductor companies can “Get ready for AI” and the lessons learned from a real-life deployment. John Kibarian, PDF Solutions’ CEO, will explain how AI is the next evolution of the PDF Solutions portfolio.

Five panels will appraise use cases for GenAI, AI for 3D device test, trust, AI-enabled digital transformation, and digital twins, including:

  • “GenAI for semiconductor: use cases, solutions and demonstrations” with panelists from PDF Solutions, SAP, Voltai and Yurts.
  • “AI for test in a world of hybrid 3D devices” includes Advantest, Siemens, Teradyne, a leading foundry and Outsourced Semiconductor Assembly and Test (OSAT) service spokespersons.
  • Panelists from PDF Solutions, Yurts, and an enterprise applications executive from an independent software vendor discuss “Revisiting the notion of trust in an AI solutions world.”
  • “How can semiconductor companies accelerate their digital transformation with AI” has spokespersons from ADI, PDF Solutions, a Foundry and IDMs.
  • A final panel “AI enabled digital twin for semiconductor manufacturing equipment” has panelists from PDF Solutions and Equipment OEMs.
Additional speakers are:

Mike Campbell, Vice President of Engineering at Qualcomm; Shyam Gooty, Microsoft’s Senior Director Product Engineering; Jean Philippe Fricker, Founder and Chief System Architect at Cerebras; Anton Devilliers, TEL’s Vice President of R&D; and Siemens’ Jayant D’Souza, Principal Technical Product Manager, and Marc Hunter, Director Product Management.

Also speaking are Ken Butler, Senior Director of Applications Marketing with Advantest; Eli Roth, Product Manager at Teradyne; SAP’s Sunil Gandhi, Senior Director, Industry Executive, High Tech; and Yurts’ Jason Schnitzer, CTO, and Steve Mahoney, Vice President of Product Management. Handel Jones, Founder and CEO of International Business Strategies (IBS), will be the dinner keynote speaker.

As part of the program, PDF Solutions will demonstrate its ModelOps product portfolio, the AI infrastructure for the global semiconductor supply chain.

Registration

The one-day Executive Conference will take place Thursday, December 12, at the St. Regis Hotel in San Francisco starting with 8 a.m. registration. The conference begins at 9 a.m. and concludes at 5:30 p.m. A reception and dinner follow. Registration is open.

Date: December 12, 2024, following the 70th Annual IEEE International Electron Devices Meeting.

Location: St. Regis Hotel, 125 3rd St., San Francisco, Calif. 94103

About PDF Solutions

PDF Solutions (Nasdaq: PDFS) provides comprehensive data solutions designed to empower organizations across the semiconductor and electronics ecosystems to improve the yield and quality of their products and operational efficiency for increased profitability. The Company’s products and services are used by Fortune 500 companies across the semiconductor ecosystem to achieve smart manufacturing goals by connecting and controlling equipment, collecting data generated during manufacturing and test operations, and performing advanced analytics and machine learning to enable profitable, high-volume manufacturing.

Founded in 1991, PDF Solutions is headquartered in Santa Clara, California, with operations across North America, Europe, and Asia. The Company (directly or through one or more subsidiaries) is an active member of SEMI, INEMI, TPCA, IPC, the OPC Foundation, and DMDII. For the latest news and information about PDF Solutions or to find office locations, visit https://www.pdf.com/.

Also Read:

WEBINAR: Elevate Your Analog Layout Design to New Heights

Silicon Creations is Fueling Next Generation Chips

I will see you at the Substrate Vision Summit in Santa Clara


Accelerating Electric Vehicle Development – Through Integrated Design Flow for Power Modules
by Kalar Rajendiran on 12-04-2024 at 10:00 am

Existing EV Power Module Flow

The development of electric vehicles (EVs) is key to transitioning to sustainable transportation. However, designing high-performance EVs presents significant challenges, particularly in power module design. Power modules, including inverters, bulky DC capacitors, power management ICs (PMICs), and battery packs, are critical in managing the high-voltage and high-current systems in EVs. These modules often operate at over 1,000V and can supply hundreds of amperes, generating substantial heat, with temperatures potentially rising to 200-250°C. As power distribution systems shrink, effective thermal management becomes essential. Power modules also must meet strict safety standards, making a system-level approach to integrating ICs, packages, and PCBs crucial for avoiding safety risks and delays.

Cadence recently sponsored a webinar on the topic of an integrated design flow for power modules for electric vehicles. The webinar was hosted by Amlendu Shekhar Choubey, Director of Product Management, along with Athar Kamal, a Lead Product Engineer, and Ritabrata Bhattacharya, a Senior Principal Product Engineer, all from Cadence.

Current Challenges in Power Module Design

The design process for power modules is often fragmented, with insufficient integration between electrical, mechanical, and thermal design. This leads to miscommunication, delays, and increased costs. Simulation tools are limited, especially for electromagnetic (EM) analyses, requiring specialized expertise. Many designers resort to lab testing, which can be too late to address critical issues impacting safety, performance, and reliability.

Thermal management and parasitic effects are significant challenges in power module design. High power requirements generate heat that must be managed to avoid component failure. Parasitic inductances from bondwires, copper traces, and other components can lead to overshoot during switching, causing performance degradation and electromagnetic interference (EMI). Addressing these issues early in the design cycle is crucial to avoid critical system failures later in the process.

The Ideal Power Module Design Flow

An ideal power module design flow integrates electrical, mechanical, and thermal considerations from the start. Schematic-driven layouts with SPICE-enabled simulations ensure functionality is validated before layout. Quick extraction of parasitics using the 3D-Quasi-Static solver and integration back into simulations is essential to understand their impact. Auto-generating post-layout schematics aligns the design with the original schematic, reducing errors. Thermal analysis tools, such as Celsius Thermal Solver, help optimize cooling solutions early on. 3D EM tools like Clarity 3D Solver help with management of electromagnetic effects.

Cadence’s Advanced Tools for Optimization

Cadence’s advanced solutions, including Allegro X, PSpice, Clarity 3D Solver, and Celsius Thermal Solver, offer an integrated and thermally aware design flow that ensures both functional safety and reliability. Allegro X enables PCB layout, with advanced capabilities for component placement, routing, and thermal management, integrated with other Cadence tools. PSpice allows for electrical simulations and parasitic effect analysis, ensuring the design meets functional and safety requirements.

Clarity 3D Solver provides EM simulation, optimizing the power module’s electromagnetic characteristics and reducing overshoot to improve reliability. Celsius Thermal Solver predicts temperature distribution, identifies hot spots, and optimizes cooling solutions to mitigate thermal issues early. Together, these tools create an integrated design process, reducing thermal runaway risk and addressing EMI and parasitic effects before they affect the final product.

Seamless Integration of Cadence’s Tools

Cadence’s platform supports collaboration across engineering disciplines. Designers do not need to be experts in every area but must understand how their expertise fits within the larger ecosystem. They can execute the complete design flow without leaving their preferred environment to complete all the analyses needed for a reliable design. This approach improves decision-making and ensures optimized designs. Thermal and warpage analysis can be triggered from within the layout tool, streamlining workflows and reducing errors.

The Future of Power Module Design and Reliability

Looking ahead, the next challenge for EV power module design will be estimating Mean Time Between Failures (MTBF) based on data generated during the design process. Predictive analytics will be key for assessing reliability and preventing failures, ensuring the durability of EV systems.

Summary

An integrated approach to power module design is essential for addressing the complex challenges in EV development. Advanced tools that combine electrical, thermal, mechanical, and EM simulations within a unified platform help streamline the design process, reduce costs, and accelerate time-to-market. Cadence’s design flow bridges traditional gaps, enabling the creation of safer, more efficient, and reliable power modules for the next generation of EVs. With tools like Allegro X, PSpice, Clarity 3D Solver, and Celsius Thermal Solver, the EV industry can benefit from a thermally aware, end-to-end integrated design solution that enhances functional safety and reliability.

For more details, refer to the following:

Cadence whitepaper titled “Power Module Design for Electric Vehicles – Addressing Reliability and Safety.”

Cadence Automotive Solutions page.

Also Read:

Compiler Tuning for Simulator Speedup. Innovation in Verification

Cadence Paints a Broad Canvas in Automotive

Analog IC Migration using AI


A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design
by Mike Gianfagna on 12-04-2024 at 6:00 am

2.5D and 3D multi-die design is rapidly moving into the mainstream for many applications. HPC, GPU, mobile, and AI/ML are application areas that have seen real benefits. The concept of “mix/match” for chips and chiplets to form a complex system sounds deceptively simple. In fact, the implementation and analysis techniques required to achieve success are substantial.

For many years, Synopsys and Ansys have been creating design flows that escort design teams through early exploration, implementation, and final signoff. The two companies are deeply engaged with many customers on advanced multi-die projects and have helped bring many successful designs to market. Synopsys and Ansys recently teamed up to present a webinar on the latest technology for multi-die design. The result is nothing short of a master class. Let’s explore the latest advances in multi-die design.

The Presenters

The effectiveness of a webinar is heavily influenced by the capabilities of the presenters. A slick, polished, but shallow presentation may entertain you, but you won’t learn much. A very detailed but scattered presentation may deliver a lot of information, but it’s often hard to find it amidst the noise. The presenters for this webinar delivered a perfect blend of professional polish and deep technical knowledge. The event runs for 45 minutes, but it will seem more like 20 minutes given how engaging both speakers are. The presenters for this webinar are:

Marc Swinnen

Marc Swinnen, Product Marketing Director for semiconductor products at Ansys. Before joining Ansys, Marc was Director of Product Marketing at Cadence Design Systems and has worked in Marketing and Technical Support positions at Synopsys, Azuro, and Sequence Design, where he gained experience with a wide array of digital and analog design tools.

 

 

 

Keith Lanier

Keith Lanier, Product Management Director at Synopsys. Keith focuses on multi-die and 3D heterogeneous integration (3DHI) solutions involving the latest advanced packaging technology. He brings over 30 years of experience in custom design, analog/mixed-signal (AMS), and RF/mmWave products, including 8 years designing high-speed data converters and amplifiers at Analog Devices.

A link to the webinar replay is provided at the end of this article. I highly recommend you watch it if 2.5/3D is in your future. First, let’s look at the structure of the webinar and a few key takeaways.

The Topics Covered

Here is the agenda for the webinar:

  • Multi-Die Design Motivation, Adoption, and Challenges
  • Combined Synopsys-Ansys Solutions for Implementation and System Analysis
  • Multi-Die Design Implementation
  • System Analysis
    • Power Integrity: Electromigration / IR Drop
    • Thermal integrity: Multi-Die Design
    • Signal Integrity: High Frequency Electromagnetic Analysis
  • Golden Sign-off Analysis
  • Customer Successes
  • Summary

This is a lot to cover, but Marc and Keith do a great job covering it all in under 30 minutes. What follows is about 15 minutes of Q&A from the webinar audience. The questions are deep and insightful, and the responses are concise and on-point. You will learn a lot.

Some Takeaways

Some macro-trend motivations for multi-die design are worth repeating. Here are the ones mentioned in the webinar:

  • Accelerated scaling of system functionality at a cost-effective price (>2X reticle limits)
  • Reduced risk & time-to-market by re-using proven designs/die
  • Lower system power while increasing throughput (up to 30%)
  • Rapid creation of new product variants for flexible portfolio management

The size, projected growth and application footprint for this design style are also covered. The numbers and scope will surprise you. Some recent examples of completed multi-die designs were covered. Some of these details surprised me. Below are the results presented during the webinar.

Examples of Recent Commercial Multi Die Designs

What followed was a deep dive into the combined Synopsys-Ansys implementation, analysis and optimization techniques used by both companies to deliver a production, unified flow. A lot of the webinar goes into the details of the complete exploration to signoff flow for multi-die designs, how to use the flow with the multi-physics models and how the tools work together. Case studies are also provided to show application on real designs. Below is a high-level overview of what was covered.

Combined Synopsys Ansys Implementation, Analysis and Optimization Techniques

Real customer success stories are then presented. This really crystallized for me how advanced this flow is and what kind of impact is being achieved. You need to see the results for yourself, but here are the projects covered:

  • Sanechips Builds Comprehensive Ansys Thermal Signoff Flow for Multi-Die Design
  • GUC Leverages Synopsys 3DIC Compiler to Enable 2.5D/3D Multi-Die Designs

To Learn More

What I’ve covered here is a very small subset of the content of this important webinar. As I mentioned, if 2.5/3D design is in your future it’s a must-see event. You can access the webinar replay here. And that’s a master class with Ansys and Synopsys to explore the latest advances in multi-die design.


SystemC Update 2024
by Daniel Payne on 12-03-2024 at 10:00 am

SystemC version 1.0 came out in 2000 as a C++ class library for system-level modeling and simulation, and on SemiWiki.com there are some 497 references to the language. I wanted to provide an update in this blog so that engineering teams can become more efficient in using SystemC on their SoC projects, saving time and improving product quality.
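
For readers new to the language, a minimal SystemC module and testbench looks like the sketch below (assuming a SystemC 2.3-or-later installation; the module and signal names are arbitrary). It shows the basic ingredients the rest of this update builds on: modules, ports, processes, and a simulation run driven from sc_main.

    // Minimal SystemC sketch: a counter clocked by sc_clock.
    #include <systemc>
    #include <iostream>
    using namespace sc_core;

    SC_MODULE(Counter) {
        sc_in<bool> clk;        // clock input
        sc_out<unsigned> count; // current count value
        unsigned value;

        void tick() {           // increment on every rising clock edge
            value++;
            count.write(value);
        }

        SC_CTOR(Counter) : value(0) {
            SC_METHOD(tick);
            sensitive << clk.pos();
            dont_initialize();
        }
    };

    int sc_main(int, char*[]) {
        sc_clock clk("clk", 10, SC_NS); // 10 ns clock
        sc_signal<unsigned> count;

        Counter counter("counter");
        counter.clk(clk);
        counter.count(count);

        sc_start(100, SC_NS);           // simulate 100 ns
        std::cout << "Final count: " << count.read() << std::endl;
        return 0;
    }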

SystemC Evolution Day

This one day workshop was co-located with DVCon Europe on October 17th in Munich and had participants from the user community, EDA vendors, and Accellera Working Groups. The keynote from Alex Bennee, Linaro, talked about synergies between QEMU – an open-source machine emulator for running OS and apps for a guest on a host, and SystemC. With co-simulation both software and hardware components can be simulated together.

A panel discussed what SystemC 4.0 should do to widen the simulation community and what the standardization cycle looks like, with panelists from Qualcomm, Infineon, Arteris, MachineWare and Robert Bosch.

The five working groups presented updates:

  • Language – SystemC 3.0.1 update published
  • AMS – LRM for IEEE 1666.1 update started, call to participate in P1666.1
  • CCI (Configuration, Control & Inspection) – latest developments, established regression/CI flow in GitHub
  • Verification – UVM-SystemC library 1.0beta6 released July 2024
  • Synthesis – restarted in early 2024, with plans and development from the Fika in May

SystemC ecosystem

Engineers from Intel and MachineWare gave a CCI update on an Inspection Proposal in draft form, plus a live demo.

GUI for controlling, inspecting simulation

Chapman University presented on using SystemC TLM 2.0 for loosely-timed contention-aware modeling, looking at trade-offs between simulation speed and timing accuracy. The final hour was an open room discussion about having SystemC multi-kernel support and thread safety, with presenters from MachineWare and COSEDA.
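
For context on what a loosely-timed model looks like, the sketch below shows a generic TLM-2.0 initiator/target pair in which the target annotates a delay on the blocking transport call instead of synchronizing with the kernel on every access; that decoupling is the speed-versus-accuracy trade-off discussed in the talk. This is a minimal illustration, not the Chapman University model; module names and latencies are invented.

    // Minimal loosely-timed TLM-2.0 sketch (illustrative only).
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>
    #include <iostream>
    using namespace sc_core;

    struct Memory : sc_module {
        tlm_utils::simple_target_socket<Memory> socket;
        unsigned char mem[256] = {};

        SC_CTOR(Memory) : socket("socket") {
            socket.register_b_transport(this, &Memory::b_transport);
        }

        void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
            unsigned addr = static_cast<unsigned>(trans.get_address());
            if (trans.is_write()) mem[addr] = *trans.get_data_ptr();
            else                  *trans.get_data_ptr() = mem[addr];
            delay += sc_time(10, SC_NS); // annotate access latency, no wait()
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

    struct Cpu : sc_module {
        tlm_utils::simple_initiator_socket<Cpu> socket;

        SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

        void run() {
            tlm::tlm_generic_payload trans;
            sc_time delay = SC_ZERO_TIME;
            unsigned char data = 42;

            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(0x10);
            trans.set_data_ptr(&data);
            trans.set_data_length(1);
            trans.set_streaming_width(1);
            socket->b_transport(trans, delay); // delay accumulates locally

            wait(delay);                       // sync with simulated time once
            std::cout << "write done at " << sc_time_stamp() << std::endl;
        }
    };

    int sc_main(int, char*[]) {
        Cpu cpu("cpu");
        Memory mem("mem");
        cpu.socket.bind(mem.socket);
        sc_start();
        return 0;
    }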

SystemC Fikas

Two to three times per year there are free, virtual workshops, called Fikas – in the Swedish tradition of sharing a coffee and talking with the community. The most recent Fika was May 30th, where three of the working groups presented updates: Language, CCI, and Synthesis. You can view the latest presentations and watch the recording online.

SystemC 3.0.1

The latest release of the SystemC Class Library, SystemC 3.0.1, represents a significant step forward in aligning with the IEEE 1666-2023 Language Reference Manual. This update introduces a variety of enhancements, bug fixes, and expanded platform support.

Key bug fixes and improvements include:

  • Alignment with IEEE 1666-2023:
    • Completed the remaining changes to match sc_bind with the revised IEEE 1666-2023 definition
    • Updated the implementation of reset event notification in sc_process_b::trigger_reset_event to comply with IEEE 1666-2023
  • Performance Enhancements:
    • Refactored integer tracing and file writing for improved performance.
    • Enhanced the non-regression test suite
  • Compiler and Build Improvements:
    • Cleaned up various compiler warnings and improved support for sanitizers
    • Addressed issues in autotools and enhanced CMake build flows
    • Updated the list of supported operating systems and compilers
    • Removed configurations that are no longer supported from build flows
  • Datatype Management:
    • Fixed various issues in datatype management, resulting in better performance

These updates ensure that SystemC 3.0.1 is more robust, efficient, and compliant with the latest standards, providing a better experience for developers and users alike.

SystemC Synthesis

At DVCon US 2024, the Synthesis Working Group leaders asked for your participation to further high-level synthesis and high-level verification.

Synthesis Working Group

The Synthesis Working Group posts all of its status online and invites you to join the group.

SystemC Community Portal

The best place to visit on the web for all things SystemC is https://systemc.org/, where each working group keeps you up to date, the annual SystemC Evolution Day presentations and Fikas are archived, and there are publications, libraries, and projects that let you be part of the community.

Summary

In the past 24 years the SystemC language has grown in scope and acceptance for system-level modeling, so that engineering teams can design and verify both hardware and software components together at a higher level of abstraction than RTL. SystemC is also useful as a verification framework and even for mixed-signal modeling. Follow up by clicking the links to get more details and improve your engineering skillset.

View presentations from the SystemC Evolution Day online.

Related Blogs


Innexis Product Suite: Driving Shift Left in IC Design and Systems Development
by Kalar Rajendiran on 12-03-2024 at 6:00 am

At the heart of the shift-left strategy is the goal of moving traditionally late-stage tasks—such as software development, validation, and optimization—earlier in the design process. This proactive approach allows teams to identify and resolve issues before they escalate, reducing costly rework and shortening the overall development timeline. As IC designs become more complex and software demands increase, shifting left becomes critical. Early defect detection, quicker iterations, and the ability to validate performance and power early in the design process help prevent delays and reduce costs. Ultimately, this approach ensures a higher quality product, faster time-to-market, and a more efficient development cycle.

Siemens EDA recently launched its Innexis Product Suite, a comprehensive set of tools designed to reshape the development and validation of integrated circuits (ICs) and complex systems. Building on the success of its Veloce™ hardware-assisted verification and validation system, the Innexis product suite is engineered to support shift-left software development. And its integration with Veloce ensures that both hardware and software are validated in parallel throughout the development cycle. By enabling early software testing, continuous validation, and rapid debugging across virtual and hardware environments, Innexis complements Veloce to optimize the entire verification process.

The following insights were gained from the various talks at the Innexis Product launch event.

The Innexis Product Suite

The product suite is specifically built to enable this shift-left methodology across various stages of the design process. The suite includes several components, each offering unique capabilities and use cases but all aligned with the goal of accelerating development and enabling early validation of both hardware and software.

Innexis Developer Pro

The Innexis Developer Pro plays a pivotal role in supporting the continuous development flow from virtual models to hybrid systems and eventually to full RTL simulations. This tool offers a seamless platform for hardware-software co-development, validation, and analysis. Developers can work across virtual, hybrid, and RTL environments, ensuring that designs are continuously tested and optimized from the very start. By enabling early power and performance analysis, Innexis Developer Pro helps teams identify issues early in the cycle, preventing rework later on. It supports a wide range of use cases, such as enabling pre-silicon validation and accelerating the creation of complex SoCs with heterogeneous cores.

Samsung shared with the audience how the Innexis suite has accelerated their software development by providing a configurable reference platform that mimics a Samsung A75-based CPU subsystem and integrates Samsung GPU IP. With the Innexis stack, Android boots in under 10 minutes, compared to 20+ hours on traditional emulators, and Veloce Strato enables faster pre-silicon performance analysis by executing GPU RTL. Samsung’s successful shift-left with Innexis has streamlined their development process, enabling software development as early as the first RTL milestone, with RTL-to-Innexis readiness in just one week, pre-verified software stacks, and a configurable testbench for efficient custom driver integration and testing.

Innexis Architecture Native Acceleration (ANA)

For teams looking to develop software early in the process, Innexis ANA provides a high-speed, cloud-based platform. By utilizing Arm-based servers, ANA enables the execution of software workloads at speeds up to 2-4 GHz, significantly faster than traditional simulation-based platforms. The cloud-native environment offers scalable resources and enables team collaboration by allowing the sharing of models and workloads across different locations. With Innexis ANA, engineers can develop and test software long before RTL or silicon are available, optimizing performance and identifying software defects early. It also integrates seamlessly with other parts of the suite, enhancing the shift-left workflow and ensuring continuous development without delays.

Arm shared with the audience Innexis ANA benchmark numbers that demonstrate a 50-100X boot time performance improvement when using realistic software workloads, compared to a QEMU-based Instruction Set Simulator (ISS) virtual platform.

Innexis Virtual System Interconnect (VSI)

Another key component, Innexis VSI, facilitates the creation of system-level digital twins. This tool integrates multi-behavioral models of various subsystems, such as sensors, ECUs, and environmental models, to simulate the interactions within a complete system. By providing visualization and analysis capabilities, VSI helps engineers understand system behavior before physical prototypes are available. It is especially useful in industries like automotive, where system-level validation is critical for complex designs such as autonomous driving systems or electric powertrains. VSI can also be cloud-enabled, offering scalable simulations and real-time collaboration, which accelerates the design process and ensures all system components function together as intended.

Innexis Product Suite Benefits

The Innexis suite’s benefits are far-reaching. First, it helps accelerate time-to-market by enabling earlier testing and identification of defects, thus reducing design iterations and re-spins. Second, it offers cost savings by allowing issues to be addressed early, preventing expensive last-minute fixes. Third, it fosters collaboration by enabling teams to work seamlessly across geographic locations, sharing models, data, and workflows in real time. Finally, Innexis contributes to performance optimization by providing tools to run realistic software workloads early, ensuring that power and performance benchmarks are met before hardware is finalized.

Shifting Left Using AWS

The shift-left approach using Software and Digital Twin through virtual Hardware in Loop (vHIL) testing in the cloud accelerates the development cycle by enabling silicon virtualization before target hardware is available.

AWS highlighted to the audience how Arm’s validated IP subsystems and AWS’s scalable cloud infrastructure ensure that teams have access to high-performance, cloud-native tools, enabling them to scale their development efforts quickly and efficiently. With Innexis ANA offering cloud-based benchmarking and software profiling, these capabilities ensure that developers can test and validate their designs in real-world conditions long before physical hardware is available. By utilizing Arm64-based Graviton instances on AWS, teams can execute embedded software natively, gaining performance and efficiency over traditional emulation and enabling early software development before silicon is available. This approach reduces reliance on upfront HIL testing, enables early issue discovery, and offers scalable cloud-based resources for improved software quality and faster development cycles.

Summary

The Innexis Product Suite represents a paradigm shift in IC and systems development. By enabling shift-left in hardware/software co-design, early defect detection, and comprehensive system-level validation, Innexis empowers engineers to meet the challenges of modern IC design and accelerate the development of complex systems. With its cloud-native capabilities, powerful simulation tools, and integration with Veloce, Innexis provides the tools necessary to deliver high-quality products faster, more cost-effectively, and with higher reliability.

To learn more, visit

The Innexis solution page.

Press announcement page.

Also Read:

Relationships with IP Vendors

Handling Objections in UVM Code

Next Generation of Systems Design at Siemens