
A Complete Guidebook for PCB Design Automation
by Kalar Rajendiran on 11-29-2023 at 8:00 am


Printed Circuit Boards (PCBs) are the foundation of modern electronics, and designing them efficiently is complex. Design automation and advanced PCB routing have transformed the process, making it faster and more reliable. Design automation streamlines tasks, reduces errors, and ensures consistency. Advanced PCB routing combines auto-routing and manual routing for efficiency, optimizes layer stacking, controls via placement, and handles differential pair routing.

Siemens EDA has published an eBook on PCB design automation covering the problems with legacy PCB design methodologies, constraint management in PCB design, PCB component placement and routing, design reuse, and automating manufacturing output for PCB board fabrication. The following is an overview of the eBook.

Problems with Legacy PCB Design Methodologies

Legacy PCB design methodologies are struggling to meet the demands of modern electronics development. They are ill-suited for complex products, shorter timelines, reduced budgets, and limited resources. Manual data manipulation is error-prone and time-consuming, hindering integration between design tools. Communication bottlenecks between engineering and other disciplines, involving physical document exchange, no longer work in today’s fast-paced design environment. To keep up with evolving industry standards, more efficient and streamlined PCB design processes are essential.

Constraint Management in PCB Design

Constraint management is a vital practice in PCB design, streamlining the process and reducing the need for extensive back-and-forth communication between engineers and designers. Constraint-driven methodology has become a best practice, allowing for the systematic management of constraints and introducing standardization. Constraint templates, which can be reused and adjusted for specific projects, save time and maximize existing data utilization.

This approach offers control over electrical and physical rules, aligning the design with the final product’s requirements. Design constraints ensure quality is integrated from the outset, eliminating the need for costly post-design quality checks. Automated constraint entry in Siemens EDA’s Xpedition simplifies the process and ensures adherence to predefined parameters, enhancing the potential for design success by consistently meeting specified requirements and constraints.

PCB Component Placement

3D component planning and placement are pivotal in achieving a “correct-by-construction” PCB layout while considering electro-mechanical constraints. Clusters, defined groups of components within a circuit, play a crucial role in simplifying and optimizing placement. They enable efficient extraction, version control, and reuse of component groups, enhancing connectivity and flow. Clusters also support nested structures, allowing for unique rules within groups, streamlining component placement. In the Xpedition environment, clusters can be further enhanced with additional elements like mounting holes and seed vias, providing greater visibility and control over the PCB design process and improving design quality.

PCB Routing

Modern PCB design tools offer various routing approaches, including manual, interactive, semi-automated, and fully automated methods, improving the design process’s efficiency. In the Xpedition flow, advanced routing features like Sketch Routing and Sketch Planning provide user-friendly automation for high-quality, fast routing. These tools mimic human decision-making, allowing designers to experiment with autorouting and modifications until they achieve the desired outcome, enhancing PCB routing efficiency.

Additionally, advanced routing tools like the “hug router” help manage stubborn nets without overhauling the design; the hug router is particularly useful for routing single net lines in pre-routed designs. The “plow routing” feature aids in handling challenging remaining nets, reducing time and effort. For specialized signal requirements in analog and RF traces, “hockey stick” or segment routing offers fine-grained control over routing paths, improving precision and efficiency in PCB design.

Design Reuse in PCB design

Efficiency in PCB design can be greatly improved through the practice of PCB design reuse. This strategy involves leveraging previously approved circuitry or IP in various designs, saving time and reducing project risks. It eliminates redundant efforts, allowing the reuse of reliable components and layouts. True design reuse is more than traditional copy-pasting; it involves applying entire layouts, like multi-layer circuit stacks, saved in the library for future use, saving significant time compared to manual recreation. In platforms like Xpedition, creating and managing reuse modules is seamless, simplifying sharing and tracking deployment, making PCB design reuse an invaluable strategy in electronics design.

Automating Manufacturing Output for PCB Board Fabrication

Once a PCB design is fully complete and successfully passes various assessments, the focus turns to preparing for board fabrication and assembly manufacturing. Automation is key in this phase, eliminating redundancy and saving time in generating output files like ODB++, Gerber data, GENCAD data, and more. It ensures consistency in output generation, customizability to meet standards, and correctness and quality in the content. In contrast to manual methods, automation streamlines the process and provides a reliable foundation for successful printed circuit assembly production by fabricators and manufacturers.

Summary

In the world of PCBs, the cost of board respins due to human errors is significant in terms of both time and money. Early error detection is crucial, as errors discovered later become more expensive to rectify. Adopting a “correct-by-construction” design approach and leveraging automation tools such as Siemens EDA’s Xpedition Enterprise are essential to catching errors early and avoiding respins.

To learn more, download the eBook guide for PCB Design Automation.

Getting educated on PCB design automation tools is also essential to avoid schedule disruptions due to a lack of knowledge on how to effectively use the tools. Siemens EDA offers training sessions, including on-demand training, expert-led webinars, and on-site visits with application engineers, to empower designers to harness the full potential of automation and streamline PCB design processes efficiently.

For more information on PCB design automation, visit:

https://eda.sw.siemens.com/en-US/pcb/engineering-productivity-and-efficiency/design-automation/

To request an on-site training, visit:

https://resources.sw.siemens.com/en-US/talk-to-an-expert-about-xpedition

Also Read:

Uniquely Understanding Challenges of Chip Design and Verification

Successful 3DIC design requires an integrated approach

Make Your RISC-V Product a Fruitful Endeavor


ML-Guided Model Abstraction. Innovation in Verification
by Bernard Murphy on 11-29-2023 at 6:00 am


Formal methods offer completeness in proving functionality but are difficult to scale to system level without abstraction and cannot easily incorporate system aspects outside the logic world such as in cyber-physical systems (CPS). Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Next-Generation Software Verification: An AI Perspective, an article published in the May-June 2021 issue of IEEE Software. The author is from the University of Ottawa.

The author presents the research described in this paper as an adaptation of the CEGAR method for developing abstractions to be used in system-level analysis. A key difference between the methods is that CEGAR uses model checking (formal methods) to build and refine an abstraction, whereas the author’s flow (ARIsTEO) uses simulation under ML supervision for this purpose. This is of course an interesting and complementary approach for abstracting logic, but it has the added merit of being able to abstract analog, mechanical, or other non-logic systems that can be simulated in some other manner, for example through Simulink.

Paul’s view

Last month we looked at generating abstractions for analog circuits to simulate much faster while still being reasonably accurate. This month we take the analog abstraction theme further into the world of cyber-physical systems. These are essentially software-level models of analog control systems with sensors and actuators defined in Matlab Simulink, for example, a smart home thermostat, automotive controllers (powertrain, transmission etc.), or navigation systems (e.g. satellite).

Complexity of these cyber-physical systems is rising, with modern commercial systems often consisting of thousands of individual Simulink building blocks, so simulation times for verification become problematic even at this level of abstraction. The author of this month’s paper proposes using machine learning to address the problem, realized in a verification tool called Aristeo. The paper is more of an editorial piece drawing some parallels between Aristeo and model checking. To understand Aristeo itself, I found it best to read her ICSE’20 publication.

Aristeo works by building an abstraction for the cyber-physical system, called a “surrogate”, that is used as a classifier on randomized system input sequences. The goal of the surrogate is to predict if a randomized input sequence is likely to find a bug. Sequences selected by the surrogate are applied to the full model. If the full model passes (false positive) then the model is incrementally re-trained, and the process continues.

The surrogate is built and trained using the Matlab system identification toolbox. This toolbox supports a variety of abstractions, both discrete and continuous time, and provides a system to train model parameters based on a set of example inputs and outputs. Models can range from simple linear functions or time-domain transfer functions to deep neural networks.

Aristeo results are solid: 20% more bugs found with 30% less compute than not using any surrogate. Interestingly, the most effective surrogate across a range of credible industrial benchmarks was not a neural network; it was a simple function where the output at timestep t is a linear function of all input and output values from t-1 to t-n. The author makes a passing comment that the purpose of the surrogate is not to be accurate but to predict if an input sequence is buggy. These results and observations align with our own experience at Cadence using machine learning to guide randomized UVM-based logic simulations: our goal is not to train a model that predicts circuit behavior, it’s to train a model that predicts if some randomized UVM sequence will find more bugs or improve coverage. So far, we have likewise found that complex models do not outperform simple ones.
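The simple surrogate described above can be made concrete. Below is a minimal pure-Python sketch of fitting an ARX-style model, where the output at timestep t is a linear function of the previous output and input, by ordinary least squares. This is illustrative only: Aristeo builds its surrogates with Matlab’s system identification toolbox and typically uses more lags, and all function names here are hypothetical.

```python
# Minimal ARX(1) surrogate: y[t] ≈ a*y[t-1] + b*u[t-1], fitted with
# ordinary least squares. Illustrative sketch only, not Aristeo's
# Matlab-based implementation.

def fit_arx1(u, y):
    """Fit (a, b) in y[t] = a*y[t-1] + b*u[t-1] by solving the 2x2
    normal equations of ordinary least squares."""
    syy = suu = syu = sy = su = 0.0
    for t in range(1, len(y)):
        yp, up, yt = y[t - 1], u[t - 1], y[t]
        syy += yp * yp           # sum of y[t-1]^2
        suu += up * up           # sum of u[t-1]^2
        syu += yp * up           # cross term
        sy += yp * yt            # correlation of y[t-1] with y[t]
        su += up * yt            # correlation of u[t-1] with y[t]
    det = syy * suu - syu * syu
    return (sy * suu - su * syu) / det, (syy * su - syu * sy) / det

def rollout(a, b, u, y0=0.0):
    """Roll the surrogate forward over an input sequence."""
    y = [y0]
    for ut in u:
        y.append(a * y[-1] + b * ut)
    return y

# Recover the coefficients of a known system from a simulated trace.
u = [1.0, -1.0, 2.0, 0.5, -2.0, 1.5, 0.0, 1.0]
y = rollout(0.8, 0.5, u)
a_hat, b_hat = fit_arx1(u, y)
```

On noiseless data generated from a known system the fit recovers the coefficients exactly; on real simulation traces it would only approximate them, which is acceptable here since the surrogate’s job is merely to classify input sequences as likely buggy, not to predict behavior precisely.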

Raúl’s view

For a second month in a row, we review a paper which is quite different from what we have covered before in this blog. This time, the topic is a new artificial intelligence (AI)-based perspective on the distinctions between formal methods and testing techniques for automated software verification. The paper is conceptual, using the ideas it presents to offer a high-level perspective rather than new experimental results.

The author starts by observing that “for the most part, software testing and formal software verification techniques have advanced independently” and argues that “we can design new and better adaptive verification schemes that mix and match the best features of formal methods and testing”. Both formal verification and testing are posed as search problems, and their virtues and shortcomings are briefly discussed in the familiar terms of exhaustiveness and flexibility. The proposed framework is based on two systems, CEGAR (counterexample-guided abstraction refinement) and ARIsTEO (approximation-based test generation). In CEGAR, the model of the software being verified is abstracted and then refined iteratively using model checking to find bugs; if a bug is spurious, it is used to refine the abstract model, until the model is sufficiently precise to be used by a model checker to verify or refute a property of interest. ARIsTEO works similarly, but it uses a model approximation and then search-based testing to find bugs. Again, if a bug is spurious it is used to refine the model; refinement is simply retraining with additional data, and the refinement iterations continue until a non-spurious failure is found.
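The two refinement loops share a common skeleton, which can be sketched as follows. This is an illustrative sketch only; neither CEGAR tools nor ARIsTEO expose this API, and every name in it is hypothetical. A cheap surrogate (or abstraction) proposes inputs likely to fail, the full model confirms them, and spurious failures are fed back as refinement (retraining) data.

```python
# Shared abstraction-refinement skeleton behind CEGAR-style and
# ARIsTEO-style loops (all names illustrative).

def refine_until_real_bug(full_model, build_surrogate, find_candidate,
                          max_iters=100):
    """Return an input that truly fails the full model, or None."""
    data = []                                  # spurious samples seen so far
    surrogate = build_surrogate(data)
    for _ in range(max_iters):
        candidate = find_candidate(surrogate)  # input flagged as likely buggy
        if candidate is None:
            return None                        # surrogate has no more suspects
        if full_model(candidate):              # True => real, non-spurious bug
            return candidate
        data.append((candidate, False))        # spurious: refine (retrain)
        surrogate = build_surrogate(data)
    return None

# Toy instantiation: the "full model" fails for inputs above 10, and the
# surrogate is simply a worklist of suspects minus those already refuted.
suspects = [3, 7, 12]
def full_model(x): return x > 10
def build_surrogate(data):
    tried = {x for x, _ in data}
    return [s for s in suspects if s not in tried]
def find_candidate(surr): return surr[0] if surr else None
bug = refine_until_real_bug(full_model, build_surrogate, find_candidate)
```

The essential difference between the two systems lives inside `build_surrogate` and `find_candidate`: model checking over a logical abstraction for CEGAR, versus retraining an approximate model and search-based testing for ARIsTEO.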

This work was done in the context of, and was inspired by, cyber-physical systems (CPS): complex industrial CPS models that existing formal verification and software testing tools could not handle properly. The author concludes by expressing her hope that “the testing and formal verification communities will eventually merge to form a bigger and stronger community”. Mixing formal and simulation-based techniques to verify hardware has been common practice for a long time.


WEBINAR : Avoiding Metastability in Hardware Software Interface (HSI) using CDC Techniques
by Daniel Nenni on 11-28-2023 at 10:00 am


This webinar looks at the challenges design engineers face when various IP blocks within an SoC are required to work in different clock domains to satisfy power constraints.

Abstract:
Various IP blocks within an SoC are often required to work in different clock domains in order to satisfy the power constraints. Clock domain crossing (CDC) challenges faced by design engineers include:

– Speed and power requirements lead to designs with multiple asynchronous clock domains on different I/O interfaces and data being transferred from one clock domain to another.
– Transferring signals between asynchronous clock domains may lead to setup or hold timing violations of the flip-flops in the receiving clock domain.
– These violations may cause CDC signals to be metastable.
– Metastability may also arise from jitter between asynchronous clock domains, resulting in functional failures if the appropriate clock synchronizers are not present.

This webinar will examine the techniques used to avoid metastability as signals cross from one clock domain to another:
– Mux Synchronizer
– Two-Flip-Flop Synchronizer
– Handshake Synchronization
– Write operation
– Read operation
– Pulse
– Reset Domain Crossing
– Custom Synchronizer

Speaker Bio:
Freddy Nunez, a Senior Application Engineer at Agnisys since 2021, holds a degree in Computer Engineering from California State University, Northridge. He’s been instrumental in Agnisys’ success, leveraging his technical expertise to assist customers in navigating complex challenges and optimizing their systems. Freddy has experience with SystemRDL, IP-XACT, Verilog and SystemVerilog.

REGISTER HERE
Background:

The Agnisys IDesignSpecTM (IDS) Suite supports clock domain crossings (CDCs) from both the software (SW) and hardware (HW) sides. Techniques used to avoid metastability as signals cross from one clock domain to another include:

    • Two-Flip-Flop Synchronizer
    • Mux Synchronizer
    • Handshake Synchronization
      • Write
      • Read
      • Pulse
    • Custom Synchronizer

In a CDC design, one clock is either asynchronous to, or has a variable phase relationship with respect to, another clock. Speed and power requirements lead to designs with multiple asynchronous clock domains employed at different I/O interfaces and data being transferred from one clock domain to another. Transferring signals between asynchronous clock domains may lead to setup or hold timing violations of the flip-flops in the receiving clock domain. These violations may cause CDC signals to be metastable. “Metastability” refers to a state of indecision in which the flip-flop’s output has not yet settled to its final expected value.
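To make the behavior concrete, here is a toy Python model of the two-flip-flop synchronizer listed above (illustrative only, not the IDS-generated RTL). A setup/hold violation is modeled as the first flop capturing an unknown 'X'; the second flop gives that value a full clock period to settle, so downstream logic never sees 'X'. The sketch assumes the metastable value always resolves within one cycle, which real hardware guarantees only with very high probability (the MTBF grows exponentially with the settling time allowed).

```python
# Toy model of a two-flip-flop synchronizer. 'X' marks a value captured
# during a setup/hold violation; by the next clock edge it is assumed to
# have resolved to a random but stable 0 or 1.

import random

def two_ff_sync(samples):
    """samples: values seen at the async input on each destination-clock
    edge (0, 1, or 'X' when the input changed too close to the edge).
    Returns the stream observed by downstream logic after two flops."""
    ff1, out = 0, []
    for d in samples:
        # ff2 samples ff1; a metastable ff1 has resolved to a random
        # stable value by this edge
        ff2 = random.choice([0, 1]) if ff1 == 'X' else ff1
        ff1 = d          # ff1 may capture 'X' on a timing violation
        out.append(ff2)
    return out

# A single flop would forward the 'X' values directly; after two flops
# downstream logic only ever sees stable 0/1.
stream = [0, 'X', 1, 1, 'X', 0, 1]
synced = two_ff_sync(stream)
```

The price of this safety is latency: each crossing value arrives one destination-clock cycle later than it would through a single flop.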

A typical register RTL block can be accessed either by a register bus on the SW interface or by application logic on the HW interface.

Various techniques are used to avoid metastability as signals cross from one clock domain to another.

IDS supports multiple CDC synchronization techniques to synchronize data and control signals on the register block HW interface between the register block clock domain and the HW clock domain. These include simple 2-FF synchronization techniques with or without handshake, Mux Synchronizer and HW Write Pulse Synchronizer.

IDS also supports a custom synchronizer flow in which the “2-FF wr_req Synchronizer” and “2-FF wr_ack Synchronizer” are replaced with back-to-back synchronizer flip-flop modules, giving users the flexibility to add custom implementations for their flip-flop chain.

IDS also supports SW Interface CDC when it is required that the Register RTL block operates on a clock different from the Register bus clock. In this case, transactions coming from the bus will first get translated into a custom bus and then custom control and data signals are synchronized into the Register block bus domain using appropriate Handshake synchronization techniques.
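The handshake synchronization referenced above follows a four-phase req/ack pattern, which can be sketched as two cooperating state machines. This illustrative Python elides the actual 2-FF synchronizers that would sit between the domains and uses hypothetical names; it shows only the control flow of the protocol.

```python
# Four-phase req/ack handshake for passing data words across clock
# domains (illustrative sketch; in real hardware req and ack would each
# pass through a 2-FF synchronizer in the receiving domain).

def handshake_transfer(data_words):
    """Each loop iteration advances both state machines one step:
    sender drives data and raises req; receiver latches data and raises
    ack; sender drops req; receiver drops ack (back to idle)."""
    received = []
    req = ack = 0
    data_bus = None
    send_iter = iter(data_words)
    pending = next(send_iter, None)
    while pending is not None or req or ack:
        # sender domain
        if not req and not ack and pending is not None:
            data_bus, req = pending, 1      # hold data stable, assert req
        elif req and ack:
            req, pending = 0, next(send_iter, None)  # drop req, next word
        # receiver domain
        if req and not ack:
            received.append(data_bus)       # data is stable: latch it
            ack = 1
        elif not req and ack:
            ack = 0                         # req dropped: return to idle
    return received
```

Because the data bus is held stable for the whole req/ack exchange, only the single-bit req and ack signals need synchronizers; the multi-bit data never crosses the domain boundary unguarded.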

REGISTER HERE
Also Read:

An Update on IP-XACT standard 2022

WEBINAR: Driving Golden Specification-Based IP/SoC Development

The Inconvenient Truth of Clock Domain Crossings


Siemens Digital Industries Software Collaborates with AWS and Arm To Deliver an Automotive Digital Twin
by Mike Gianfagna on 11-28-2023 at 6:00 am


According to McKinsey & Company, a digital twin is a digital representation of a physical object, person, or process, contextualized in a digital version of its environment. Digital twins can help an organization simulate real situations and their outcomes, ultimately allowing it to make better decisions. Anyone who is a fan of the shift-left development methodology will immediately see the benefits a digital twin offers. The shift-left approach aims to take tasks that are typically done later in the design process and perform them earlier, creating forward visibility and less chance of re-work. While the shift-left strategy has been around a while, commercial deployment of digital twin models robust enough to make an impact is newer. A recent announcement illustrates a high-impact deployment of a digital twin for automotive design and the collaboration required for success. Read on to see how Siemens Digital Industries Software collaborates with AWS and Arm to deliver an automotive digital twin.

Building a Digital Twin – PAVE360™

Announced in 2019, PAVE360 from Siemens aims to deliver a “revolutionary new validation program to accelerate autonomous vehicle development.” Today, the product delivers digital twin technology intended to put a virtual car on every engineer’s desk. The goal is robust pre-silicon validation for software-defined vehicle designs – the Holy Grail of the shift-left methodology.

PAVE360 delivers some important capabilities to enable this goal, including:

  • An open platform – allowing configuration and connection of mixed fidelity domains, protocols, systems and tools from multiple sources. This allows adapting digital twin models for each phase of the vehicle design cycle.
  • A consistent environment – facilitating full “system-of-system” results with in-depth performance metrics.
  • The ability to start with your existing tools – you can connect available models and then test the system. This makes it easier to analyze metrics to improve and iterate on the design for faster decisions.
  • Access to mixed-fidelity analysis – using hybrid simulation, mixing virtual and register transfer level (RTL) code to deliver greater accuracy.
  • The ability to create a virtual car – you can model SoCs, ECUs or full system-of-systems. This allows you to maintain consistency and break down silos as multiple teams work on the same digital twin.

The figure below is an overview of PAVE360.

PAVE360 Overview

You can get a full overview of PAVE360 here.

The Power of Collaboration

A digital twin has a lot of moving parts. Items like accurate, mainstream models and robust compute infrastructure to support all the analysis required are two good examples. Recognizing this, Siemens Digital Industries got to work with a couple of their partners to address the holistic requirements of digital twin technology.  A recent announcement outlined the details of the company’s plan.

Expanding on the strong partnership between Siemens and AWS, PAVE360 is now available on the cloud. By using the AWS technology, developers can experience near real-time simulation speeds which are significantly faster than conventional on-premises modeling and simulation infrastructures. This partnership improves both time-to-market and quality of results since the right compute infrastructure can be deployed when needed to ensure fast, robust and accurate results.

Siemens has also collaborated with Arm to help enable developers to access Arm®-based technology running on Siemens’ PAVE360 Digital Twin solution via AWS cloud services. Automakers are now able to develop software and evaluate key Arm-based system and software components earlier in their IP selection and design cycles, without the burden of conventional on-premises software.

The program not only helps address the technology and commercial challenges ahead but also helps empower developers to gain a competitive advantage by shifting left hardware and software development, with unprecedented simulation speeds, enabling them to meet shrinking time-to-market requirements. Executives from AWS and Arm also weighed in:

“The proliferation of digital twin methodologies throughout the automotive industry uses the compute capabilities and world-class infrastructure of AWS,” said Wendy Bauer, Vice President of Automotive and Manufacturing, AWS. “With PAVE360 mapping accurate embedded environments to optimal AWS instances while using Arm automotive enhanced IP, OEMs and suppliers are enabling software defined vehicle solutions and methodologies that were previously impractical.”

“The software defined vehicle is survival for the automotive industry, requiring new technologies and methodologies for faster and more agile development,” said Dipti Vachani, Senior Vice President and General Manager, Automotive Line of Business, Arm. “The innovative Siemens’ PAVE360 solution is helping to accelerate the automotive system development required to address the increasingly demanding consumer expectations. Together with Siemens and AWS, we are enabling a breadth of use cases on the Arm automotive platform across the entire supply chain, from IP evaluation to fleet management.”

To Learn More

You can hear more about the impact of digital twin technology on automotive design at the upcoming IESF Automotive E/E Design & Engineering Conference. The event will take place in Munich, Germany on November 30, 2023. You can learn more about the conference and register here. And that’s how Siemens Digital Industries Software collaborates with AWS and Arm to deliver an automotive digital twin.


Synopsys.ai Ups the AI Ante with Copilot
by Bernard Murphy on 11-27-2023 at 10:00 am


Last week Synopsys announced their next step in generative AI (GenAI) in Synopsys.ai Copilot based on a collaboration with Microsoft. This integrates Azure OpenAI together with existing Synopsys.ai GenAI capabilities to extend Copilot concepts to the EDA world. For those of you unfamiliar with Copilot, this is a development by GitHub/Microsoft and OpenAI to aid software developers in writing code.

… GitHub Copilot includes assistive features for programmers, such as the conversion of code comments to runnable code, and autocomplete for chunks of code, repetitive sections of code, and entire methods and/or functions. … GitHub states that Copilot’s features allow programmers to navigate unfamiliar coding frameworks and languages by reducing the amount of time users spend reading documentation. (source Wikipedia)

Microsoft has since significantly expanded the scope of Copilot to help in all capabilities provided by Microsoft 365 (Word, etc.) and has introduced solutions to support sales, service, and security along with assistance for software developers. Copilot is technology with broad reach, so it is not surprising that Synopsys has jumped on the potential to extend that technology to EDA. I should add that regular Copilot already supports RTL (thanks Matt Genovese for the demo!) but as a proof of concept in my view. Synopsys is aiming for a more robust version. 😊

Why the Continued Emphasis on AI?

OK, so AI is hot: a new technology with a lot of potential applications, though some might suspect it is becoming a solution in search of a problem. In fact, new design challenges abound, and this continued emphasis on AI is an important direction that the design and EDA communities are exploring to help product designers jump ahead.

Trends to domain-specific architectures and to multi-die systems are ways that product teams are overcoming Moore’s Law limitations in performance and power scaling for advanced processes. However, both approaches greatly amplify complexity in design, verification, and implementation, yet teams must deliver products on even more compressed schedules. All this while staffing needs across the industry are expected to fall short by tens of thousands of engineers by 2030.

Incremental improvements in tools and methodologies will continue to be important, but clearly we need a turbo-charge to overcome problems at this scale. We need ways for engineers to deliver better results in a shorter time without additional staffing. That’s where AI comes in: in part through artificial intelligence capabilities in development flow tools, as in DSO.ai, VSO.ai and TSO.ai, and in part through assistive (copilot) methods to guide designers more quickly to optimal solutions, the subject of Synopsys’ Copilot announcement.

Synopsys.ai Copilot Working with Designers

Shankar Krishnamoorthy, general manager of the Synopsys EDA Group, told me that the first rollout to early customers (AMD, Intel, Microsoft and other leading companies) is testing capabilities including:

  • Collaborative capabilities to provide engineers with guidance on tool knowledge, analysis of results, and enhanced EDA workflows, and
  • Generative capabilities to expedite development of RTL, formal verification assertion creation, and UVM testbenches.

In addition to these first new developments, Shankar said his team plans to develop autonomous capabilities across the Synopsys.ai suite, which will enable end-to-end workflow creation from natural language, spanning architecture to design and manufacturing.

I’ll expand on a couple of points, starting with formal verification assertions, where I claim at least informed novice status thanks to working with the Synopsys VC Formal group. Formal verification is a very powerful technology but historically has been limited by the high level of expertise demanded of practitioners, especially when it comes to developing assertions and constraints. Packaging standard tests as apps has greatly simplified many applications and has increased adoption but hasn’t done anything to simplify non-standard/specialized property checks.

Part of the problem is the complexity of the property language (SVA). Copilot should be able to help by converting natural language requirements written by a verification engineer into the correct formal syntax and by providing recommendations for the RTL.

Next, RTL generation. This for me is a more complex picture with different potential use-cases. One is to autocomplete a chunk of code given sufficient context, or to create a code snippet from a natural language description, while at the same time creating assertions to check the correctness of the generated code. Both could be valuable accelerators for a junior developer. Another use-case might extend generation to small blocks from a natural language description (remember the 4096-token limit on GPT-3).

Shankar said the distinction here between proofs of concept and the Synopsys.ai implementation is that Synopsys.ai guard-rails and checks generated code using the many technologies they have at their disposal (even PPA analyses) and of course many years of learning and expertise.

In all cases, I expect experts will code-review generated code to guide further training. No one wants to see hallucinations creep in as added challenges for late-stage verification.

Availability

Shankar tells me that this technology is currently in early customer evaluation and refinement, which especially in this case makes complete sense. Which use-cases are going to make a difference in practice, and how big a difference (lines of code per day, bugs created or found per day, …), will emerge. Add in the extensive scope of the goal and I expect more general releases to come a feature or two at a time as these are refined and proven robust in a wide range of use-cases.

Looks like Synopsys is following the Sundar Pichai maxim of “You will see us be bold and ship things, but we are going to be very responsible in how we do it.” Good for them for taking the first step! You can read the press release HERE.


Synopsys 224G SerDes IP’s Extensive Ecosystem Interoperability
by Kalar Rajendiran on 11-27-2023 at 6:00 am


Hyperscale data centers are evolving rapidly to meet the demands of high-bandwidth, low-latency applications, ranging from AI and high-performance computing (HPC) to telecommunications and 4K video streaming. The increasing need for faster data transfer rates has prompted a scaling of Ethernet switch capacity from 51.2Tb/s to 102.4Tb/s. Numerous suppliers offer IP and components such as switches, backplane connectors, pluggables, cable assemblies, and other networking infrastructure elements. Extensive ecosystem interoperability is a must to build robust systems.

Synopsys 224G SerDes IP

The Synopsys 224G SerDes IP is designed to provide exceptional performance, power efficiency, and configurability, making it a versatile solution for a wide range of applications, including high-speed networking, data centers, and artificial intelligence. This IP is engineered to support multiple industry-standard protocols, enabling it to seamlessly interface with a variety of communication standards. Its compatibility with protocols such as PCIe (Peripheral Component Interconnect Express), Ethernet, and Common Electrical I/O (CEI) showcases the SerDes IP’s versatility across different applications. The 224G SerDes IP offers a high degree of configurability, allowing designers to tailor its parameters to specific system requirements. Functional demonstrations highlight how the IP can adapt to different data rates, channel lengths, and signal integrity conditions, showcasing its flexibility in meeting the unique needs of various applications.

What might not be as widely known is Synopsys SerDes IP’s remarkable interoperability within an extensive ecosystem.

Attention to Ecosystem Interoperability

Synopsys recognizes the diversity of the modern semiconductor ecosystem, with various vendors providing critical components such as switches, pluggables, and other networking infrastructure elements. Synopsys collaborates with industry partners in the development and validation of its silicon proof points. This collaborative approach ensures that the technology is not developed in isolation but is tested and refined in conjunction with the broader semiconductor ecosystem. Synopsys invests heavily in rigorous testing and validation procedures to ensure that its solutions work seamlessly in real-world scenarios. This involves comprehensive testing with components from various vendors to simulate diverse networking environments. It includes stress testing under challenging conditions, demonstrating the SerDes IP’s reliability and stability in real-world applications.

Extensive Ecosystem Interoperability Demonstrations

Synopsys 224G SerDes IP continues showcasing extensive ecosystem interoperability through multiple tradeshows

Synopsys has actively demonstrated the interoperability of its 224G and 112G SerDes solutions in various settings, establishing its commitment to creating technology that integrates seamlessly into diverse environments. Notable demonstrations include those at ECOC 2023, the TSMC Symposium 2023, OIF & OFC 2023, DesignCon 2023, ECOC 2022, and other industry events.

ECOC 2023:

Synopsys showcased the performance of its 224G TX and RX with Keysight test equipment and Keysight’s latest software for key 224G TX and RX characterization parameters. OIF interop demonstrations included the Synopsys 224G RX equalizing a 224G C2M channel driven by a third-party 224G SerDes TX, achieving a BER orders of magnitude better than the IEEE and OIF 224G specifications require.

TSMC Symposium 2023:

The demo highlighted interoperability between Synopsys 224G hardware, connectivity, mechanicals, signal integrity, and power integrity, demonstrating superior performance in real-time.

OIF & OFC 2023:

Synopsys demonstrated the interoperability of its 224G and 112G Ethernet PHY IP solutions. The demonstrations featured wide-open PAM4 eyes, very low jitter, and excellent linearity, underscoring the robustness of Synopsys’ SerDes technology.

DesignCon 2023:

Synopsys presented seven demonstrations of the Synopsys 224G and 112G Ethernet PHY IP and the Synopsys PCIe 6.0 IP interoperating with third-party channels and SerDes.

ECOC 2022:

Synopsys showcased the performance and interoperability of its 224G and 112G Ethernet PHY IP solutions at ECOC 2022. Demonstrations included the world’s first 224G Ethernet PHY IP interop with Keysight AWG and ISI channel.

Synopsys Commitment to Furthering SerDes Technology

Synopsys’ commitment to pushing the boundaries of high-speed serial interface technology is evident in its multiple silicon proof points across various data rates, including 56G, 112G and 224G to implement 400G and 800G data connectivity. By successfully implementing and validating their SerDes IP across different data rates, Synopsys has showcased the robustness and adaptability of its core technology. This not only instills confidence in the current implementations but also suggests that the technology is well-prepared for the challenges posed by the upcoming 1.6Tbps speeds.
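As a rough illustration of the lane arithmetic behind these link speeds (the payload rates below are nominal assumptions: a 112G-class lane carries on the order of 100G of payload and a 224G-class lane about 200G, with the extra signaling rate covering coding and FEC overhead):

```python
import math

def lanes_needed(link_gbps: int, lane_gbps: int) -> int:
    """Lanes required to aggregate a link from equal-rate SerDes lanes."""
    return math.ceil(link_gbps / lane_gbps)

# 400G and 800G links from 112G-class (~100G payload) lanes:
print(lanes_needed(400, 100))   # 4
print(lanes_needed(800, 100))   # 8
# 224G-class (~200G payload) lanes halve the lane count per link:
print(lanes_needed(800, 200))   # 4
# 1.6T becomes practical at 224G: 8 lanes instead of 16,
# which is where the cable and switch-port savings come from.
print(lanes_needed(1600, 200))  # 8
print(lanes_needed(1600, 100))  # 16
```

Halving the lane count per link is what reduces cable and connector counts in high-density data centers, as noted in the summary below.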

For details about Synopsys 224G Ethernet IP, visit the product page.

Summary

Synopsys 224G Ethernet PHY IP showcasing 224Gbps TX PAM-4 eyes in TSMC N3E

Synopsys 224G Ethernet PHY IP is available to help streamline the transition to the 1.6T Ethernet data transfer rate. In addition to doubling 112G data rates, the Synopsys 224G Ethernet PHY IP consumes one-third less power per bit than its predecessor while optimizing network efficiency by reducing cable and switch counts in high-density data centers. Synopsys was the first company to demonstrate 224G Ethernet PHY IP.

By emphasizing multivendor interoperability at 224G, Synopsys has positioned itself as a key enabler of the global data center ecosystem. Its solutions seamlessly integrate with a multitude of components, contributing to the efficiency and reliability of high-speed networking infrastructure.

Also Read:

Synopsys Debuts RISC-V IP Product Families

A Fast Path to Better ARC PPA through Fusion Quickstart Implementation Kits and DSO.AI

100G/200G Electro-Optical Interfaces: The Future for Low Power, Low Latency Data Centers


Podcast EP195: A Tour of Mythic’s Unique Analog Computing Capabilities with Dave Fick
by Daniel Nenni on 11-24-2023 at 10:00 am

Dan is joined by Dave Fick, co-founder and CEO of Mythic. Dave leads Mythic in bringing groundbreaking analog computing to the AI inference market. With a PhD in Computer Science & Engineering from the University of Michigan, he brings a wealth of knowledge and expertise to the industry.

Dan explores Mythic’s unique analog computing capability with Dave. The tradeoffs between edge and cloud processing are discussed. Dave explains the benefits of Mythic’s approach for edge computing in many demanding AI applications. Speed, power density, cost, form factor, training efficiency and latency are all discussed, highlighting the substantial benefits of the Mythic approach.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Meghali Chopra of Sandbox Semiconductor
by Daniel Nenni on 11-24-2023 at 6:00 am


Dr. Meghali Chopra is co-founder and CEO of SandBox Semiconductor. She is responsible for SandBox’s vision and strategy and oversees the development of SandBox’s software products and technologies. Dr. Chopra received her PhD in Chemical Engineering from the University of Texas at Austin where her research focused on computational algorithms for plasma process optimization. She has her B.S. with Honors in Chemical Engineering from Stanford University. Dr. Chopra is an industry expert with publications in leading peer-reviewed journals and patents in the areas of semiconductor processing and computational optimization.

Tell us about your company?
Founded in 2016, SandBox Semiconductor is a pioneer in developing AI-based software solutions to accelerate process development for semiconductor manufacturing. Our fully integrated, no-code AI tool suite gives process engineers the ability to build their own physics-based, AI-enabled models to solve challenges during process definition, ramp-up, and high-volume manufacturing.

Using SandBox’s physics-based models and machine learning tools, process engineers in the semiconductor industry can virtually simulate, predict, and measure process outcomes. Even with small sets of experimental data, SandBox’s tools can extract valuable insights and patterns, helping engineers to gain a deeper understanding of manufacturing processes and to make informed decisions about recipe adjustments. SandBox leverages expertise in numerical modeling, machine learning, and manufacturing optimization to develop its proprietary toolsets, which are used by the world’s leading chip manufacturers and semiconductor equipment suppliers.

What problems are you solving?
At SandBox, we reduce cycles of learning for next-generation advanced manufacturing technologies. To optimize a recipe, a process engineer must specify a process window for tens of process conditions, including pressure, temperature, and gas flow rates. Determining the best process conditions is so challenging that a recipe will often take over two years to develop, or worse, the chip is dropped from production because process development becomes too expensive. This technology gap and cycle time are significant barriers to the deployment of novel microelectronic devices and impose a substantial economic burden on semiconductor manufacturers, who must make significant R&D investments to stay afloat.

SandBox provides computational modeling software that accelerates process development and enables semiconductor manufacturers to reduce costs, get to market faster, and commercialize new processes not possible before.
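To illustrate why brute-force experimentation over a process window is impractical (the knob and level counts below are hypothetical, not SandBox's figures), consider the size of even a coarse recipe grid:

```python
def recipe_search_space(n_params: int, levels_per_param: int) -> int:
    """Size of an exhaustive grid over a process recipe's knobs."""
    return levels_per_param ** n_params

# 10 knobs (pressure, temperature, gas flows, ...) at only 5 levels each:
combos = recipe_search_space(10, 5)
print(combos)  # 9765625 candidate recipes

# At one wafer run per recipe and a generous ~100 runs per week,
# an exhaustive sweep would take on the order of 1,900 years --
# hence surrogate models trained on small experimental datasets.
years = combos / (100 * 52)
print(round(years))  # 1878
```

The numbers make the point: any practical flow must predict outcomes from a handful of experiments rather than enumerate the space.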

What application areas are your strongest?
SandBox works on leading-edge logic and memory manufacturing processes. Our users are typically performing technology development or high-volume manufacturing recipe optimization. Our technologies have been used on a range of optimization applications including feature-level, die-to-die, across-wafer, chamber-to-chamber, and tool-to-tool.

What keeps your customers up at night?
The process engineers we work with must figure out how to optimize many process conditions to manufacture billions of features across the wafer with nano-scale precision and at high throughput. These process engineers are extremely knowledgeable and arguably the single most important individuals within each of our semiconductor customers. Unfortunately, they are often overworked, as they must continually push the envelope in advancing to the next node. We developed our tools with these process engineers in mind: our mission is to provide meaningful leverage to the process engineer as he or she works to bring new microelectronics to market faster.

What does the competitive landscape look like and how do you differentiate?
Our proprietary modeling pipeline enables users to make process predictions with a small number of experimental data points. The competitive landscape for process-engineer-focused computational modeling tools is very limited. Many of our customers have internal modeling groups, but our observation is that process engineering users most frequently rely, rightfully, on their own expertise and intuition to drive critical changes in recipe development. To that end, the most common recipe optimization approach is the process engineer’s intuition. We seek to support these engineers in their role, particularly as advanced manufacturing nodes increasingly push the limits of physics and chemistry while the demands on a process engineer’s 24-hour day keep growing.

What new features/technology are you working on?
SandBox recently released a new product for its technology suite called Weave™. Weave significantly improves metrology accuracy and precision by leveraging advanced machine learning to extract and analyze profiles from SEM and TEM data. Process development engineers can spend up to 20% of their time manually measuring SEM and TEM images. With Weave, process engineers minimize tedious manual tasks and increase metrology accuracy, resulting in more insights, quicker experimentation, and reduced costs during process definition, ramp-up, and high-volume manufacturing. The introduction of Weave advances our platform vision as we work to bring a comprehensive, easy-to-use suite of physics-based AI tools to market with the goal of enabling the process engineer.
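As a toy illustration of the kind of manual measurement such tools replace (this is a simple threshold-crossing sketch, not Weave's actual machine-learning approach), measuring a feature width from a single line scan through a SEM image might look like:

```python
def feature_width(profile, threshold):
    """Width (in pixels) of the first region where intensity exceeds threshold.

    A toy stand-in for automated CD metrology: find the rising and
    falling threshold crossings along a line scan through a SEM image.
    """
    start = end = None
    for i, v in enumerate(profile):
        if start is None and v >= threshold:
            start = i                      # rising crossing
        elif start is not None and v < threshold:
            end = i                        # falling crossing
            break
    if start is None:
        return 0                           # no feature found
    return (end if end is not None else len(profile)) - start

# Synthetic line scan: dark background, one bright feature 5 px wide.
scan = [10, 12, 11, 90, 95, 93, 92, 94, 13, 11]
print(feature_width(scan, 50))  # 5
```

Doing this by hand across thousands of images is exactly the 20% time sink the article describes; real tools must also cope with noise, rough edges, and 2-D/3-D profiles, which is where the machine learning comes in.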

How do customers normally engage with your company?
Customers can reach out to us at info@sandboxsemiconductor.com or through our website at www.sandboxsemiconductor.com.

Also Read:

CEO Interview: Dr. J Provine of Aligned Carbon

CEO Interview: Vincent Bligny of Aniah

Executive Interview: Tony Casassa, General Manager of METTLER TOLEDO THORNTON


Generative AI for Silicon Design – Article 4 (Hunt for Bugs)
by Anshul Jain on 11-23-2023 at 8:00 am


In the complex world of silicon design, ensuring the accuracy and reliability of our designs is paramount. As our chips become more sophisticated, the process of bug hunting—identifying and rectifying errors in our designs—becomes even more critical. Generative AI has the potential to revolutionize the silicon design industry by enhancing our bug-hunting capabilities. How? Let us experiment!

Experiment 4: Hunt for Bugs

To demonstrate the efficacy of Generative AI in silicon design, I conducted an experiment with OpenAI’s GPT-4. I provided the system with a design specification (sourced from https://mathcircle.berkeley.edu/sites/default/files/archivedocs/2010_2011/contest/1011contestpdf/comp8.pdf) alongside an implementation of the design. The goal? To see if GenAI could identify any bugs.

Design Specification:

On a 5 × 5 chessboard, a king moves according to the following rules:
1. It can move one square at a time, horizontally, vertically, or diagonally.
2. It can move in each of the eight allowable directions at most three times in its entire route.

Design Implementation in Verilog:

module chess (
    input clk,
    input reset,
    input [2:0] direction,      // 0-7 to indicate the 8 directions
    input move_request,         // Input to request a move
    output reg [2:0] cur_x_pos, // X position of the king
    output reg [2:0] cur_y_pos, // Y position of the king
    output move_ack,            // Acknowledgement of the move
    output reg error            // Error when a move in a direction exceeds limit
);

    // Define the directions: 0-7
    // 0: Right
    // 1: Left
    // 2: Up
    // 3: Down
    // 4: UpRight
    // 5: UpLeft
    // 6: DownRight
    // 7: DownLeft
    
    reg [2:0] cur_moves_count[7:0]; // Array to hold move counts for each direction
    reg [2:0] nxt_moves_count[7:0]; // Array to hold move counts for each direction
    always @(posedge clk) begin
	for (int dir = 0; dir < 8; dir++) begin
            if (reset) cur_moves_count[dir] <= 3'd0;
	    else cur_moves_count[dir] <= nxt_moves_count[dir];
	end
    end

    reg [2:0] nxt_x_pos, nxt_y_pos;
    always @(posedge clk) begin
        if (reset) begin
            cur_x_pos <= 3'd0;
            cur_y_pos <= 3'd0;
	end
	else begin
	    cur_x_pos <= nxt_y_pos;
	    cur_y_pos <= nxt_y_pos;
	end
    end

    always @(*) begin
        nxt_x_pos = cur_x_pos;
        nxt_y_pos = cur_y_pos;
        nxt_moves_count = cur_moves_count;
	error = 1'b0;
        if (move_request) begin
            case (direction)
                3'b000: if (cur_x_pos < 3'd4 && cur_moves_count[0] < 3'd3) begin
                            nxt_x_pos = cur_x_pos + 1'b1;
                            nxt_moves_count[0] = cur_moves_count[0] + 1'b1;
                        end
			else error = 1;
                3'b001: if (cur_x_pos > 3'd0 && cur_moves_count[1] < 3'd3) begin
                            nxt_x_pos = cur_x_pos - 1'b1;
                            nxt_moves_count[1] = cur_moves_count[1] + 1'b1;
                        end
			else error = 1;
                3'b010: if (cur_y_pos < 3'd4 && cur_moves_count[2] < 3'd3) begin
                            nxt_y_pos = cur_y_pos + 1'b1;
                            nxt_moves_count[2] = cur_moves_count[2] + 1'b1;
                        end
			else error = 1;
                3'b011: if (cur_y_pos > 3'd0 && cur_moves_count[3] < 3'd3) begin
                            nxt_y_pos = cur_y_pos - 1'b1;
                            nxt_moves_count[3] = cur_moves_count[3] + 1'b1;
                        end
			else error = 1;
                3'b100: if (cur_x_pos < 3'd4 && cur_y_pos < 3'd4 && cur_moves_count[4] < 3'd3) begin
                            nxt_x_pos = cur_x_pos + 1'b1;
                            nxt_y_pos = cur_y_pos + 1'b1;
                            nxt_moves_count[4] = cur_moves_count[4] + 1'b1;
                        end
			else error = 1;
                3'b101: if (cur_x_pos > 3'd0 && cur_y_pos < 3'd4 && cur_moves_count[5] < 3'd3) begin
                            nxt_x_pos = cur_x_pos - 1'b1;
                            nxt_y_pos = cur_y_pos + 1'b1;
                            nxt_moves_count[5] = cur_moves_count[5] + 1'b1;
                        end
			else error = 1;
                3'b110: if (cur_x_pos < 3'd4 && cur_y_pos > 3'd0 && cur_moves_count[6] < 3'd3) begin
                            nxt_x_pos = cur_x_pos + 1'b1;
                            nxt_y_pos = cur_y_pos - 1'b1;
                            nxt_moves_count[6] = cur_moves_count[6] + 1'b1;
                        end
			else error = 1;
                3'b111: if (cur_x_pos > 3'd0 && cur_y_pos > 3'd0 && cur_moves_count[7] < 3'd3) begin
                            nxt_x_pos = cur_x_pos - 1'b1;
                            nxt_y_pos = cur_y_pos - 1'b1;
                            nxt_moves_count[7] = cur_moves_count[7] + 1'b1;
                        end
			else error = 1'b1;
            endcase
        end
    end

    assign move_ack = move_request && !error;
	
endmodule

Identify functional bugs present in the Verilog implementation, create a counterexample for each functional bug with detailed input scenario, design transitions and a comparison of actual outcome versus expected outcome as per design specification.

Following is the output from GenAI model:

1. Bug: Overwriting the cur_x_pos with cur_y_pos

else begin
	cur_x_pos <= nxt_y_pos;
	cur_y_pos <= nxt_y_pos;
end

This section will cause the current x position of the king (cur_x_pos) to always take the value of nxt_y_pos, which is incorrect.

Counterexample:

Input Scenario: Assume the king is at position (2,2) and the move_request is provided with direction = 3'b010 (Up).
Design Transition:
   Expected: cur_x_pos remains 2 and cur_y_pos becomes 3.
   Actual: Both cur_x_pos and cur_y_pos become 3.
Outcome:
   Expected Outcome: King moves to (2,3).
   Actual Outcome: King moves to (3,3).
2. Bug: cur_moves_count[] initialization to zero

if (reset) cur_moves_count[dir] <= 3'd0;

All directions' move counts are initialized to zero upon reset. However, the condition to check for maximum moves uses < 3'd3, meaning the king can move 4 times in each direction before an error is raised.

Counterexample:

Input Scenario: Assume the king is at position (0,0) and four consecutive move_requests are provided with direction = 3'b000 (Right).
Design Transition:
   Expected: After 3 moves to the right, the 4th move should trigger an error.
   Actual: King moves four times to the right without triggering an error.
Outcome:
   Expected Outcome: King moves to (3,0) and then error is triggered.
   Actual Outcome: King moves to (4,0) without an error.
3. Bug: Lack of Boundary Check
Although the code contains boundary checks for the movement of the king on the chessboard, it does not account for the possibility that adding or subtracting from the 3-bit values might result in roll-over errors. For instance, if cur_x_pos is 3'd7 (which shouldn't happen based on the current design, but might due to external influences or other design issues), subtracting 1 might result in 0 rather than an expected 6 due to bit roll-over.

Counterexample:
This might be a theoretical bug as the current design does not allow for such a scenario to happen. However, it's essential to consider this for robust design.

Recommendation: Boundary checks can be improved, and additional sanity checks can be added to ensure that the position of the king never goes outside the valid range.

Intriguing Observations
  1. Bug Identification: The GenAI model pinpointed a specific discrepancy between the design specification and its implementation, with a proper counterexample including the scenario, transitions, and actual-vs-expected outcome analysis. This was indeed a genuine bug that needed rectification.
  2. False Positive: Like any tool, Generative AI is not infallible. It also highlighted an issue which, upon closer inspection, turned out not to be a bug. This serves as a reminder that while AI can be a valuable assistant, human expertise and judgment remain crucial in the validation process.
  3. Enhancement Suggestion: Beyond bug hunting, the GenAI model also proposed an enhancement, a suggestion that wasn’t necessarily fixing a bug but rather optimizing the design further. This unexpected benefit demonstrates how AI can help not only in finding mistakes but also in suggesting improvements.

Tomorrow’s Landscape – GenAI-powered HDL Code Editors/Analyzers!

The implications of this experiment are profound. By integrating Generative AI tools into our HDL code development flow, we can:

  • Accelerate the bug-hunting process: Automating the initial stages of bug detection can reduce the hours spent debugging long simulation waveforms.
  • Reduce verification cost: With fewer bugs left for verification tools and engineers to find, verification closure can be achieved sooner, leading to cost savings in the long run.
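A minimal sketch of how such an integration might begin, assuming a hypothetical helper that assembles the spec and RTL into a review prompt (the real flow would send this to an LLM API and parse the response into lint-style findings):

```python
def build_bughunt_prompt(spec: str, rtl: str) -> str:
    """Assemble a bug-hunting prompt that pairs a spec with its RTL.

    Illustrative only: names and structure here are assumptions, not
    part of any existing tool's API.
    """
    return (
        "Identify functional bugs in the Verilog implementation below. "
        "For each bug, give a counterexample with the input scenario, "
        "design transitions, and actual vs. expected outcome.\n\n"
        "### Design Specification\n" + spec + "\n\n"
        "### Implementation\n" + rtl + "\n"
    )

prompt = build_bughunt_prompt("A king moves on a 5x5 board ...",
                              "module chess (...); endmodule")
print("Design Specification" in prompt)  # True
```

An editor plug-in would regenerate this prompt on save and surface the model's findings inline, the same way lint results appear today.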
In Conclusion

As the semiconductor industry continues to evolve, leveraging tools like Generative AI can give us an edge, ensuring that our designs are not only error-free but also pushing the boundaries of what’s possible. It is only a matter of time before such co-pilots and plug-ins gain wide adoption across the industry.

Also Read:

Generative AI for Silicon Design – Article 3 (Simulate My Design)

Generative AI for Silicon Design – Article 2 (Debug My Waveform)

Generative AI for Silicon Design – Article 1 (Code My FSM)


Handling metastability during Clock Domain Crossing (CDC)
by Daniel Payne on 11-22-2023 at 10:00 am


SoC designs frequently have many different clock domains to help manage power more efficiently. One side effect is that where clock domains meet, i.e., at a Clock Domain Crossing (CDC), setup and hold time violations can cause a flip-flop to become metastable. Synchronizer circuits are employed to mitigate metastability, although they add non-deterministic delays to the logic path. The challenge for an RTL simulator is to take these non-deterministic delays into account.

CDC Synchronizer with 2 DFF

Siemens EDA wrote a white paper about this topic, and I’ll cover the traditional models for metastability effects, along with the model inside Questa CDC-FX.

In the synchronizer example above, metastability happens when signal S changes too close to the rising edge of rx_clk: the output of R1 glitches toward a 1 but settles down to a 0 in hardware, while an RTL simulator shows R1 transitioning to a 1 much more quickly. This hardware metastability at R1 then makes R2 switch one clock cycle later in hardware than in the RTL simulation.

Metastability adds extra-cycle delay

With the same synchronizer, when signal S arrives slightly after the rising edge of rx_clk, the output of R1 follows it to a 1 in hardware, while the RTL simulator waits until the next clock cycle for R1 to reach a 1, an effect called bleed-through. In this case the hardware transitions one cycle before the RTL simulator for both R1 and R2.

Metastability and bleed-through
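Both effects can be captured in a small behavioral model (a simplified sketch, not the white paper's implementation): per-edge flags mark a setup/hold violation and whether the metastable register happens to resolve to its old value.

```python
def sync_2dff(samples, metastable, resolve_old):
    """Cycle-accurate model of a 2-DFF synchronizer in the rx domain.

    samples:     value of S seen at each rx_clk edge
    metastable:  per-edge flag -- S changed inside the setup/hold window
    resolve_old: per-edge flag -- if metastable, R1 settles to the OLD
                 value (extra-cycle delay) instead of the new one
    """
    r1 = r2 = 0
    out = []
    for s, meta, old in zip(samples, metastable, resolve_old):
        nxt_r1 = r1 if (meta and old) else s
        r2, r1 = r1, nxt_r1   # R2 samples previous R1; R1 loads resolved value
        out.append(r2)
    return out

# S rises at edge 1 but violates setup there; hardware resolves to the
# old 0, so the 1 reaches R2 one cycle later than ideal RTL predicts.
ideal = sync_2dff([0, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0])
hw    = sync_2dff([0, 1, 1, 1], [0, 1, 0, 0], [0, 1, 0, 0])
print(ideal)  # [0, 0, 1, 1]
print(hw)     # [0, 0, 0, 1]
```

The one-cycle difference between the two traces is exactly the extra-cycle delay an ideal metastability-aware simulator must inject; bleed-through is the mirror case, where hardware ends up a cycle early.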

Both the extra-cycle delay and bleed-through cases occur only when the two clock domains, rx_clk and tx_clk, have rising edges very close to each other in time. An ideal RTL simulator will match silicon for the extra-cycle and bleed-through cases when:

  • Violations of setup constraints cause the simulator to add a cycle delay
  • Violations of hold constraints cause the simulator to subtract a cycle delay
  • CDC signals are modeled independently
  • Registers with the same boundaries of clock domains have the same metastability effects

The RTL simulator industry has used several modeling methods to account for metastability effects.

  • Clock jittering – random changes to timing of clock edge, globally or locally.
  • 3 DFF synchronizer model – three registers instead of two.
  • Propagation delay-based method – a random delay inserted before the synchronizer.

The white paper shows pitfalls for each of these modeling methods and their variations, which leads into how the Questa CDC-FX method was developed to accurately simulate metastability effects.

CDC-FX circuit model

When the two clock edges, tx_clk and rx_clk, are closely aligned in time and signal S is changing, this circuit pseudo-randomly inverts the value of S that gets loaded into register R1. In RTL simulation, that possibly-inverted value is what gets forced into R1:

Instance of a CDC-FX

Both of the metastability effects introduced – extra-cycle delay and bleed-through – are modeled by inverting the value of S that gets loaded into R1. This CDC-FX circuit model can be automatically applied to every register that samples CDC signals.
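The injection decision can be sketched as follows (an interpretation of the white paper's description, not Questa CDC-FX's actual implementation):

```python
def cdcfx_sample(s_new, s_prev, edges_aligned, rand_bit):
    """CDC-FX-style injection, sketched: when the tx/rx edges are close
    together and S is changing, pseudo-randomly invert the value loaded
    into R1; otherwise sample S normally."""
    changing = s_new != s_prev
    if edges_aligned and changing and rand_bit:
        return 1 - s_new   # inject: R1 loads the inverted value
    return s_new           # normal sampling

# Quiet data or well-separated edges: normal sampling.
print(cdcfx_sample(1, 1, True, 1))   # 1
print(cdcfx_sample(1, 0, False, 1))  # 1
# Aligned edges on a changing signal: the injector may flip the sample.
print(cdcfx_sample(1, 0, True, 1))   # 0
print(cdcfx_sample(1, 0, True, 0))   # 1
```

Flipping the sampled value in exactly these conditions reproduces both extra-cycle delay and bleed-through, which is why the model can be applied mechanically to every CDC-sampling register.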

Only the CDC-FX method satisfies all criteria for RTL matching silicon timing results.

Metastability methods comparison

Siemens recommends a complete CDC verification flow in which the designer first runs a static CDC re-convergence analysis to remove any suspected re-convergence in the RTL. This netlist analysis generates three things that are then used in subsequent steps:

  • CDC assertions
  • Coverage monitors
  • Metastability effects injectors

Verification engineers then simulate tests as a second step, using the CDC assertions and coverage monitors from step 1. Tests identified as effective in step 2 are run with the metastability effects injectors enabled in step 3. The coverage monitors track activity at each bit of every CDC register and provide statistics:

  • Metastable cycles
  • Delayed transitions
  • Advanced transitions
  • Inverted bits bitmap

If the design under test fails with metastability modeling in simulation, then the design must be debugged to identify the source of improperly re-converging CDC signals.

Summary

CDC verification for metastability effects is a necessary step for all designs using multiple clock domains. Siemens EDA has developed the Questa CDC-FX method to be complete and accurate, working on all CDC signals. The extra-cycle delay and bleed-through delay effects are injected so that RTL simulation matches silicon behavior.

Read the complete white paper online here.

Related Blogs