
Rugged Security Solutions For Evolving Cybersecurity Threats
by Kalar Rajendiran on 11-30-2023 at 6:00 am


Secure-IC is a global leader in end-to-end cybersecurity solutions, specializing in embedded systems and connected devices. With an unwavering commitment to pushing the boundaries of security innovation, Secure-IC has established a remarkable track record. Its credentials include active involvement in new standards development, extensive thought leadership, and a portfolio of more than 200 patents. The company's expertise spans the entire spectrum of cybersecurity, from cutting-edge cryptographic solutions to comprehensive testing frameworks, ensuring the safeguarding of digital assets against evolving threats. Secure-IC's flagship product, Securyzr, exemplifies its dedication to adaptable solutions that meet dynamic cybersecurity demands, making it a trusted partner in securing the digital sphere. Secure-IC recently made two exciting announcements, one relating to eShard and the other to MediaTek's flagship smartphone chip, the Dimensity 9300.

Ruggedness of a Security Technology

The ruggedness of a security solution is directly related to the ruggedness of the security technology upon which it is based. It goes without saying that this ruggedness can only be established and ascertained through comprehensive testing of the security technology itself. The recent agreement between eShard and Secure-IC highlights how committed Secure-IC is to offering rugged security technology. The company announced the strategic acquisition of a patent portfolio from eShard, a renowned pioneer in advanced security testing.

eShard

Known for its state-of-the-art solutions, software tools, and expert services, eShard specializes in providing comprehensive testing frameworks that enable the scalability of security testing. eShard’s expertise extends across various domains, including chip security testing, mobile application security testing, and system security testing. With a legacy marked by innovation and a robust patents portfolio, eShard continues to be a trailblazer in the field of advanced security, contributing significantly to the defense against cyber threats.

Secure-IC’s Acquisition of eShard Patents Portfolio

This acquisition marks a significant milestone in Secure-IC's commitment to pushing the boundaries of security innovation and reinforces its leading position in the embedded cybersecurity industry. With it, Secure-IC has expanded its portfolio to more than 250 patents across approximately fifty international patent families. The eShard patent portfolio substantially reinforces Secure-IC's existing tunable cryptography product offering. Cybersecurity is a very dynamic field where adaptability and innovation are essential, and this partnership will help secure the entire lifecycle of connected devices. This takes on even more importance as the European Union Cyber Resilience Act (EU CRA) comes into force, mandating that critical products be designed securely and kept secure for an extended duration.

Secure-IC’s Securyzr iSE 900 as Trusted Anchor and Root of Trust

Secure-IC recently announced that its embedded cybersecurity solution Securyzr iSE (integrated secure element) 900 has been integrated into MediaTek's new flagship smartphone chip, the Dimensity 9300. This collaboration represents a significant leap forward in the realm of embedded systems and connected devices, setting new standards for security and performance. What sets Securyzr apart is its dual function as the Trusted Anchor and Root of Trust, allowing sensitive processes and applications to run in an isolated, secure area. This Secure Enclave plays a pivotal role in safeguarding critical operations throughout a device's lifecycle, including secure boot, firmware updates, key management, and cryptographic services. Its continuous monitoring capabilities ensure resilience against potential disruptions, such as cyber-physical attacks, thereby mitigating potential threats with utmost reliability. As a result, the Dimensity 9300 is able to guard against evolving cybersecurity threats, setting a new standard for secure mobile devices.

Summary

Embedded within the main System-on-Chip (SoC), Securyzr offers a comprehensive suite of services to its host system, ranging from secure boot and cryptographic services to key isolation and anti-tampering protection. What sets Securyzr iSE 900 apart is its combination of dedicated computation and strong isolation, providing an additional layer of security that surpasses traditional trusted execution environments.

Secure-IC’s expertise spans the entire spectrum of cybersecurity, from cutting-edge cryptographic solutions to comprehensive testing frameworks, ensuring the safeguarding of digital assets against evolving threats. The company’s flagship product, Securyzr, spotlights its dedication to adaptable solutions that meet dynamic cybersecurity demands, making it a trusted partner in securing the digital sphere.

For more details, visit the Securyzr product page.

Also Read:

Cyber-Physical Security from Chip to Cloud with Post-Quantum Cryptography

How Do You Future-Proof Security?

Points teams should consider about securing embedded systems


SystemVerilog Has Some Changes Coming Up
by Daniel Payne on 11-29-2023 at 10:00 am


SystemVerilog came to life in 2005 as a superset of Verilog-2005. The last IEEE technical committee revision of the SystemVerilog LRM was completed in 2016 and published as IEEE 1800-2017.

Have the last seven years revealed any changes or enhancements that maintain SystemVerilog's relevance and effectiveness in the face of rapidly evolving technology? Why yes! Engineers continually want more features, improved clarity in the specification, and fixes to previous versions.

In 2019, the technical committee started work on the proposed standard P1800-2023, with a plan for final publication in 2024. The 1800-2023 standard benefits from hundreds of corrections, clarifications, and enhancements to the LRM to keep the language current. Dave Rich from Siemens EDA wrote a nine-page paper going into the details of some of these changes. In this article, I'll highlight just a few of the enhancements discussed in his paper.

Enhancements

Covergroups are being extended to support inheritance, so a covergroup in a derived class can override or add to the coverpoints of its base class. The new syntax will allow you to write classes like this:

class pixel; // original base class
 bit [7:0] level;
 enum {OFF,ON,BLINK,REVERSE} mode;
 covergroup g1;
   a: coverpoint level;
   b: coverpoint mode;
 endgroup
 function new(); 
   g1 = new;
 endfunction
endclass

class colorpixel extends pixel; // extended covergroup in extended class
 enum {red,blue,green} color;
 covergroup extends g1;
  b: coverpoint mode { // override coverpoint 'b' from the base class
   ignore_bins ignore = {REVERSE};
  }
  cross a, color; // 'a' comes from the base class
 endgroup
endclass
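
As a minimal usage sketch (my own illustration, not from Dave Rich's paper), the extended class is created and sampled like any other class with an embedded covergroup; the base-class constructor builds the extended version of g1:

module tb;
 colorpixel cp;
 initial begin
  cp = new();       // pixel::new constructs the extended g1
  cp.level = 8'h40;
  cp.mode  = pixel::ON;
  cp.g1.sample();   // samples coverpoints 'a', 'b' and the added cross
 end
endmodule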

Array methods are being extended with map(), which lets you cast each element to a new type or operate on each element, as in these examples:

int A[3] = {1,2,3};
byte B[3];
int C[3];
// assign and cast an array of int to an array of byte
B = A.map() with ( byte'(item) );
// increment each element of the array (use b instead of item)
B = B.map(b) with ( b + 8'b1 );
// B becomes {2,3,4}
// add two arrays
C = A.map(a) with ( a + B[a.index] );
// C becomes {3,4,5}

The `ifdef directive will support Boolean expressions in parentheses, reducing the number of lines required, like this:

`ifdef (A && B)
 // code for AND condition 
`endif 
`ifdef (A || B) 
// code for OR condition 
`endif
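
For comparison, here is a sketch of how the same conditions are typically written today with only single-identifier directives; the OR case either duplicates the guarded code or defines an intermediate macro:

`ifdef A
 `ifdef B
  // code for AND condition
 `endif
`endif

`ifdef A
 // code for OR condition
`elsif B
 // code for OR condition
`endif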

Multi-line strings are supported using a triple quote syntax:

string x = """
This is one continuous string.
Single ' and double " can be placed
throughout, and only a triple quote will end it.
""";

Real number coverage has been added to better support modeling of AMS designs. The syntax for a coverpoint over a real variable looks like:

coverpoint r {
 type_option.real_interval = 0.02;
 bins b[] = {[0.75:0.85]};
 // 10 bins
 // b[0] 0.75 to less than 0.76
 // b[1] 0.76 to less than 0.77
 // . . .
 // b[9] 0.84 to less than or equal to 0.85
}
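
A minimal context sketch (my own wrapper, assuming the new real-coverage syntax above) shows the coverpoint embedded in a covergroup and sampled explicitly:

module ams_cov;
 real r;
 covergroup cg_r;
  coverpoint r {
   type_option.real_interval = 0.02;
   bins b[] = {[0.75:0.85]};
  }
 endgroup
 cg_r cov = new();
 initial begin
  r = 0.78;
  cov.sample(); // falls in bin b[3] (0.78 to less than 0.79)
 end
endmodule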

With method call chaining, you can use a function call's result directly to select a member of that result. Here's an example:

class A;
 int member = 123;
endclass
module top;
 A a;
 function A F(int arg = 0);
  int member; // static variable, uninitialized value 0
  a = new();
  return a;
 endfunction
 initial begin
  $display(F.member);   // 0 - no "()", Verilog hierarchical reference
  $display(F().member); // 123 - with "()", implicit variable
 end
endmodule

There's now support for adding a static qualifier to a formal ref argument, ensuring that the actual argument has a static lifetime.

module top;
 function void monitor(ref static logic arg);
  fork // the reference to arg only becomes legal with a static qualifier
   forever @(arg) $display("arg changed at time %t", $realtime);
  join_none
 endfunction
 logic C;
 initial monitor(C);
endmodule

Summary

SystemVerilog users will reap the benefits of staying current with the proposed changes coming for the language. If your favorite features weren't proposed for this release, then why not get involved with the technical committee to have your voice heard and make SystemVerilog even better for the next version?

Read the complete nine-page paper from Dave Rich at Siemens EDA.


A Complete Guidebook for PCB Design Automation
by Kalar Rajendiran on 11-29-2023 at 8:00 am


Printed Circuit Boards (PCBs) are the foundation of modern electronics, and designing them efficiently is complex. Design automation and advanced PCB routing have transformed the process, making it faster and more reliable. Design automation streamlines tasks, reduces errors, and ensures consistency. Advanced PCB routing combines auto-routing and manual routing for efficiency, optimizes layer stacking, controls via placement, and handles differential pair routing.

Siemens EDA has published an eBook on PCB design automation covering problems with legacy PCB design methodologies, constraint management in PCB design, PCB component placement and routing, design reuse, and automating manufacturing output for board fabrication. The following is an overview of the eBook.

Problems with Legacy PCB Design Methodologies

Legacy PCB design methodologies are struggling to meet the demands of modern electronics development. They are ill-suited for complex products, shorter timelines, reduced budgets, and limited resources. Manual data manipulation is error-prone and time-consuming, hindering integration between design tools. Communication bottlenecks between engineering and other disciplines, involving physical document exchange, no longer work in today’s fast-paced design environment. To keep up with evolving industry standards, more efficient and streamlined PCB design processes are essential.

Constraint Management in PCB Design

Constraint management is a vital practice in PCB design, streamlining the process and reducing the need for extensive back-and-forth communication between engineers and designers. Constraint-driven methodology has become a best practice, allowing for the systematic management of constraints and introducing standardization. Constraint templates, which can be reused and adjusted for specific projects, save time and maximize existing data utilization.

This approach offers control over electrical and physical rules, aligning the design with the final product’s requirements. Design constraints ensure quality is integrated from the outset, eliminating the need for costly post-design quality checks. Automated constraint entry in Siemens EDA’s Xpedition simplifies the process and ensures adherence to predefined parameters, enhancing the potential for design success by consistently meeting specified requirements and constraints.

PCB Component Placement

3D component planning and placement are pivotal in achieving a “correct-by-construction” PCB layout while considering electro-mechanical constraints. Clusters, defined groups of components within a circuit, play a crucial role in simplifying and optimizing placement. They enable efficient extraction, version control, and reuse of component groups, enhancing connectivity and flow. Clusters also support nested structures, allowing for unique rules within groups, streamlining component placement. In the Xpedition environment, clusters can be further enhanced with additional elements like mounting holes and seed vias, providing greater visibility and control over the PCB design process and improving design quality.

PCB Routing

Modern PCB design tools offer various routing approaches, including manual, interactive, semi-automated, and fully automated methods, improving the design process’s efficiency. In the Xpedition flow, advanced routing features like Sketch Routing and Sketch Planning provide user-friendly automation for high-quality, fast routing. These tools mimic human decision-making, allowing designers to experiment with autorouting and modifications until they achieve the desired outcome, enhancing PCB routing efficiency.

Additionally, an advanced routing tool like the "hug router" helps manage stubborn nets without overhauling the design. It's particularly useful for routing single net lines in pre-routed designs. The "plow routing" feature aids in handling challenging remaining nets, reducing time and effort. For specialized signal requirements in analog and RF traces, "hockey stick" or segment routing offers precise control over routing paths, improving routing precision and efficiency in PCB design.

Design Reuse in PCB design

Efficiency in PCB design can be greatly improved through the practice of PCB design reuse. This strategy involves leveraging previously approved circuitry or IP in various designs, saving time and reducing project risks. It eliminates redundant efforts, allowing the reuse of reliable components and layouts. True design reuse is more than traditional copy-pasting; it involves applying entire layouts, like multi-layer circuit stacks, saved in the library for future use, saving significant time compared to manual recreation. In platforms like Xpedition, creating and managing reuse modules is seamless, simplifying sharing and tracking deployment, making PCB design reuse an invaluable strategy in electronics design.

Automating Manufacturing Output for PCB Board Fabrication

Once a PCB design is fully complete and successfully passes various assessments, the focus turns to preparing for board fabrication and assembly manufacturing. Automation is key in this phase, eliminating redundancy and saving time in generating output files like ODB++, Gerber data, GENCAD data, and more. It ensures consistency in output generation, customizability to meet standards, and correctness and quality in the content. In contrast to manual methods, automation streamlines the process and provides a reliable foundation for successful printed circuit assembly production by fabricators and manufacturers.

Summary

In the world of PCBs, the cost of board respins due to human errors is significant in terms of both time and money. Early error detection is crucial, as errors discovered later become more expensive to rectify. Adopting a “correct-by-construction” design approach and leveraging automation tools such as Siemens EDA’s Xpedition Enterprise are very important.

To learn more, download the eBook guide for PCB Design Automation.

Getting educated on PCB design automation tools is also essential to avoid schedule disruptions due to a lack of knowledge on how to effectively use the tools. Siemens EDA offers training sessions, including on-demand training, expert-led webinars, and on-site visits with application engineers, to empower designers to harness the full potential of automation and streamline PCB design processes efficiently.

For more information on PCB design automation, visit:

https://eda.sw.siemens.com/en-US/pcb/engineering-productivity-and-efficiency/design-automation/

To request an on-site training, visit:

https://resources.sw.siemens.com/en-US/talk-to-an-expert-about-xpedition

Also Read:

Uniquely Understanding Challenges of Chip Design and Verification

Successful 3DIC design requires an integrated approach

Make Your RISC-V Product a Fruitful Endeavor


ML-Guided Model Abstraction. Innovation in Verification
by Bernard Murphy on 11-29-2023 at 6:00 am


Formal methods offer completeness in proving functionality but are difficult to scale to system level without abstraction and cannot easily incorporate system aspects outside the logic world such as in cyber-physical systems (CPS). Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month's pick is Next-Generation Software Verification: An AI Perspective, an article published in the May-June 2021 issue of IEEE Software. The author is from the University of Ottawa.

The author presents the research described in this paper as an adaptation of the CEGAR method for developing abstractions to be used in system-level analysis. A key difference between the methods is how the abstraction is built: CEGAR uses model checking (formal methods) to build and refine an abstraction, whereas the author's flow (ARIsTEO) uses simulation under ML supervision for this purpose. This is an interesting and complementary approach for abstracting logic, of course, but it has the added merit of being able to abstract analog, mechanical, or other non-logic systems that can be simulated in some other manner, for example through Simulink.

Paul’s view

Last month we looked at generating abstractions for analog circuits to simulate much faster while still being reasonably accurate. This month we take the analog abstraction theme further into the world of cyber-physical systems. These are essentially software-level models of analog control systems with sensors and actuators defined in Matlab Simulink, for example, a smart home thermostat, automotive controllers (powertrain, transmission etc.), or navigation systems (e.g. satellite).

Complexity of these cyber-physical systems is rising, with modern commercial systems often consisting of thousands of individual Simulink building blocks, resulting in simulation times for verification even at this level of abstraction becoming problematic. The author of this month’s paper proposes using machine learning to address the problem, realized in a verification tool called Aristeo. The paper is more of an editorial piece drawing some parallels between Aristeo and model checking. To understand Aristeo itself, I found it best to read her ICSE’20 publication.

Aristeo works by building an abstraction for the cyber-physical system, called a “surrogate”, that is used as a classifier on randomized system input sequences. The goal of the surrogate is to predict if a randomized input sequence is likely to find a bug. Sequences selected by the surrogate are applied to the full model. If the full model passes (false positive) then the model is incrementally re-trained, and the process continues.

The surrogate is built and trained using the Matlab system identification toolbox. This toolbox supports a variety of abstractions, both discrete and continuous time, and provides a system to train model parameters based on a set of example inputs and outputs. Models can range from simple linear functions or time-domain transfer functions to deep neural networks.

Aristeo results are solid: 20% more bugs found with 30% less compute than not using any surrogate. Interestingly, the most effective surrogate across a range of credible industrial benchmarks was not a neural network, it was a simple function where the output at timestep t is a linear function of all input and output values from t-1 to t-n. The authors make a passing comment that the purpose of the surrogate is not to be accurate but to predict if an input sequence is buggy. These results and observations align with our own experience at Cadence using machine learning to guide randomized UVM-based logic simulations: our goal is not to train a model that predicts circuit behavior, it’s to train a model that predicts if some randomized UVM-sequence will find more bugs or improve coverage. So far, we have likewise found that complex models do not outperform simple ones.

Raúl’s view

For a second month in a row, we review a paper which is quite different from what we have covered before in this blog. This time, the topic is a new artificial intelligence (AI)-based perspective on the distinctions between formal methods and testing techniques for automated software verification. The paper is conceptual, using the ideas presented to draw a high-level perspective.

The author starts by observing that "for the most part, software testing and formal software verification techniques have advanced independently" and argues that "we can design new and better adaptive verification schemes that mix and match the best features of formal methods and testing". Both formal verification and testing are posed as search problems, and their virtues and shortcomings are briefly discussed in the familiar terms of exhaustiveness and flexibility. The proposed framework is based on two systems, CEGAR (counterexample-guided abstraction and refinement) and ARIsTEO (approximation-based test generation). In CEGAR, the model of the software being verified is abstracted and then refined iteratively using model checking to find bugs; if a bug is spurious, it is used to refine the abstract model, until the model is sufficiently precise to be used by a model checker to verify or refute a property of interest. ARIsTEO works similarly, but it uses a model approximation and then search-based testing to find bugs. Again, if a bug is spurious it is used to refine the model; refinement is simply retraining with additional data, and the refinement iterations continue until a nonspurious failure is found.

This work was done in the context of, and inspired by, cyber-physical systems (CPS): complex industrial CPS models that existing formal verification and software testing could not handle properly. The author concludes by expressing her hope that "the testing and formal verification communities will eventually merge to form a bigger and stronger community". Mixing formal and simulation-based techniques to verify hardware has been common practice for a long time.


WEBINAR : Avoiding Metastability in Hardware Software Interface (HSI) using CDC Techniques
by Daniel Nenni on 11-28-2023 at 10:00 am


This webinar looks at the challenges a design engineer can face when various IP blocks within an SoC are required to work in different clock domains to satisfy power constraints.

Abstract:
Various IP blocks within an SoC are often required to work in different clock domains in order to satisfy the power constraints. Clock domain crossing (CDC) challenges faced by design engineers include:

– Speed and power requirements lead to designs with multiple asynchronous clock domains on different I/O interfaces and data being transferred from one clock domain to another.
– Transferring signals between asynchronous clock domains may lead to setup or hold timing violations of the flip-flops in the receiving clock domain.
– These violations may cause CDC signals to be metastable.
– Metastability may also arise from jitter between asynchronous clock domains, resulting in functional failures if the appropriate clock synchronizers are not present.

This webinar will examine the techniques used to avoid metastability as signals cross from one clock domain to another:
– Mux Synchronizer
– Two-Flip-Flop Synchronizer
– Handshake Synchronization
  – Write operation
  – Read operation
  – Pulse
– Reset Domain Crossing
– Custom Synchronizer

Speaker Bio:
Freddy Nunez, a Senior Application Engineer at Agnisys since 2021, holds a degree in Computer Engineering from California State University, Northridge. He’s been instrumental in Agnisys’ success, leveraging his technical expertise to assist customers in navigating complex challenges and optimizing their systems. Freddy has experience with SystemRDL, IP-XACT, Verilog and SystemVerilog.

REGISTER HERE
Background:

The Agnisys IDesignSpec™ (IDS) Suite supports clock domain crossings (CDCs) from both the software (SW) and hardware (HW) sides. Techniques used to avoid metastability as signals cross from one clock domain to another include:

    • Two-Flip-Flop Synchronizer
    • Mux Synchronizer
    • Handshake Synchronization
      • Write
      • Read
      • Pulse
    • Custom Synchronizer

In a CDC design, one clock is either asynchronous to, or has a variable phase relation with respect to, another clock. Speed and power requirements lead to designs with multiple asynchronous clock domains employed at different I/O interfaces and data being transferred from one clock domain to another. Transferring signals between asynchronous clock domains may lead to setup or hold timing violations of the flip-flops in the receiving clock domain. These violations may cause CDC signals to be metastable. “Metastability” refers to a state of indecision where the flip-flop’s output has not yet settled to the final expected value.
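
For reference, a generic two-flip-flop synchronizer for a single-bit control signal looks roughly like the sketch below (a hand-written illustration, not IDS-generated RTL):

module two_ff_sync (
 input  logic clk_dst,   // receiving (destination) clock
 input  logic rst_n,     // active-low reset in the destination domain
 input  logic d_async,   // signal arriving from another clock domain
 output logic q_sync     // synchronized output, safe to use in the clk_dst domain
);
 logic meta;             // first stage, may go metastable
 always_ff @(posedge clk_dst or negedge rst_n) begin
  if (!rst_n) begin
   meta   <= 1'b0;
   q_sync <= 1'b0;
  end else begin
   meta   <= d_async;    // may capture a metastable value
   q_sync <= meta;       // extra cycle lets metastability resolve
  end
 end
endmodule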

A typical register RTL block can be accessed either by a register bus on the SW interface or by application logic on the HW interface.

Various techniques are used to avoid metastability as signals cross from one clock domain to another.

IDS supports multiple CDC synchronization techniques to synchronize data and control signals on the register block HW interface between the register block clock domain and the HW clock domain. These include simple 2-FF synchronization techniques with or without handshake, Mux Synchronizer and HW Write Pulse Synchronizer.
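
Conceptually, a mux (enable-based) synchronizer passes only the control signal through a 2-FF chain and uses the synchronized enable to capture the multi-bit data, which the source domain holds stable. A minimal hand-written sketch of the idea (again, not IDS-generated code):

module mux_sync #(parameter W = 8) (
 input  logic         clk_dst,
 input  logic         rst_n,
 input  logic         load_async,   // control signal from the source domain
 input  logic [W-1:0] data_bus,     // multi-bit data, held stable by the source
 output logic [W-1:0] data_sync
);
 logic load_meta, load_dst;
 always_ff @(posedge clk_dst or negedge rst_n) begin
  if (!rst_n) begin
   load_meta <= 1'b0;
   load_dst  <= 1'b0;
   data_sync <= '0;
  end else begin
   load_meta <= load_async;         // synchronize only the control through 2 FFs
   load_dst  <= load_meta;
   if (load_dst)
    data_sync <= data_bus;          // capture data once the control is stable
  end
 end
endmodule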

IDS also supports a custom synchronizer flow where the "2-FF wr_req Synchronizer" and "2-FF wr_ack Synchronizer" are replaced with back-to-back sync flop modules, giving users the flexibility to add custom implementations for their FF chain.

IDS also supports SW Interface CDC when it is required that the Register RTL block operates on a clock different from the Register bus clock. In this case, transactions coming from the bus will first get translated into a custom bus and then custom control and data signals are synchronized into the Register block bus domain using appropriate Handshake synchronization techniques.

REGISTER HERE
Also Read:

An Update on IP-XACT standard 2022

WEBINAR: Driving Golden Specification-Based IP/SoC Development

The Inconvenient Truth of Clock Domain Crossings


Siemens Digital Industries Software Collaborates with AWS and Arm To Deliver an Automotive Digital Twin
by Mike Gianfagna on 11-28-2023 at 6:00 am


According to McKinsey & Company, a digital twin is a digital representation of a physical object, person, or process, contextualized in a digital version of its environment. Digital twins can help an organization simulate real situations and their outcomes, ultimately allowing it to make better decisions. Anyone who is a fan of the shift-left development methodology will immediately see the benefits a digital twin offers. The shift-left approach aims to take tasks that are typically done later in the design process and perform them earlier, creating forward visibility and less chance for re-work. While the shift-left strategy has been around a while, commercial deployment of digital twin models that are robust enough to make an impact is newer. A recent announcement illustrates a high-impact deployment of a digital twin for automotive design and illustrates the collaboration required for success.  Read on to see how Siemens Digital Industries Software collaborates with AWS and Arm to deliver an automotive digital twin.

Building a Digital Twin – PAVE360™

Announced in 2019, PAVE360 from Siemens aims to deliver a “revolutionary new validation program to accelerate autonomous vehicle development.” Today, the product delivers digital twin technology intended to put a virtual car on every engineer’s desk. The goal is robust pre-silicon validation for software-defined vehicle designs – the Holy Grail of the shift-left methodology.

PAVE360 delivers some important capabilities to enable this goal, including:

  • An open platform – allowing configuration and connection of mixed fidelity domains, protocols, systems and tools from multiple sources. This allows adapting digital twin models for each phase of the vehicle design cycle.
  • A consistent environment – facilitating full “system-of-system” results with in-depth performance metrics.
  • The ability to start with your existing tools – you can connect available models and then test the system. This makes it easier to analyze metrics to improve and iterate on the design for faster decisions.
  • Access to mixed-fidelity analysis – using hybrid simulation, mixing virtual and register transfer level (RTL) code to deliver greater accuracy.
  • Create a virtual car – you can model SoCs, ECUs or full system-of-systems. This allows you to maintain consistency and break down silos as multiple teams work on the same digital twin.

The figure below is an overview of PAVE360.

PAVE360 Overview

You can get a full overview of PAVE360 here.

The Power of Collaboration

A digital twin has a lot of moving parts. Accurate, mainstream models and robust compute infrastructure to support all the analysis required are two good examples. Recognizing this, Siemens Digital Industries Software got to work with a couple of its partners to address the holistic requirements of digital twin technology. A recent announcement outlined the details of the company's plan.

Expanding on the strong partnership between Siemens and AWS, PAVE360 is now available on the cloud. By using the AWS technology, developers can experience near real-time simulation speeds which are significantly faster than conventional on-premises modeling and simulation infrastructures. This partnership improves both time-to-market and quality of results since the right compute infrastructure can be deployed when needed to ensure fast, robust and accurate results.

Siemens has also collaborated with Arm to help enable developers to access Arm®-based technology running on Siemens’ PAVE360 Digital Twin solution via AWS cloud services. Automakers are now able to develop software and evaluate key Arm-based system and software components earlier in their IP selection and design cycles, without the burden of conventional on-premises software.

The program not only helps address the technology and commercial challenges ahead but also helps empower developers to gain a competitive advantage by shifting left hardware and software development, with unprecedented simulation speeds, enabling them to meet shrinking time-to-market requirements. Executives from AWS and Arm also weighed in:

“The proliferation of digital twin methodologies throughout the automotive industry uses the compute capabilities and world-class infrastructure of AWS,” said Wendy Bauer, Vice President of Automotive and Manufacturing, AWS. “With PAVE360 mapping accurate embedded environments to optimal AWS instances while using Arm automotive enhanced IP, OEMs and suppliers are enabling software defined vehicle solutions and methodologies that were previously impractical.”

“The software defined vehicle is survival for the automotive industry, requiring new technologies and methodologies for faster and more agile development,” said Dipti Vachani, Senior Vice President and General Manager, Automotive Line of Business, Arm. “The innovative Siemens’ PAVE360 solution is helping to accelerate the automotive system development required to address the increasingly demanding consumer expectations. Together with Siemens and AWS, we are enabling a breadth of use cases on the Arm automotive platform across the entire supply chain, from IP evaluation to fleet management.”

To Learn More

You can hear more about the impact of digital twin technology on automotive design at the upcoming IESF Automotive E/E Design & Engineering Conference. The event will take place in Munich, Germany on November 30, 2023. You can learn more about the conference and register here. And that's how Siemens Digital Industries Software collaborates with AWS and Arm to deliver an automotive digital twin.


Synopsys.ai Ups the AI Ante with Copilot
by Bernard Murphy on 11-27-2023 at 10:00 am


Last week Synopsys announced its next step in generative AI (GenAI): Synopsys.ai Copilot, based on a collaboration with Microsoft. This integrates Azure OpenAI with existing Synopsys.ai GenAI capabilities to extend Copilot concepts to the EDA world. For those of you unfamiliar with Copilot, this is a development by GitHub/Microsoft and OpenAI to aid software developers in writing code.

… GitHub Copilot includes assistive features for programmers, such as the conversion of code comments to runnable code, and autocomplete for chunks of code, repetitive sections of code, and entire methods and/or functions. … GitHub states that Copilot’s features allow programmers to navigate unfamiliar coding frameworks and languages by reducing the amount of time users spend reading documentation. (source Wikipedia)

Microsoft has since significantly expanded the scope of Copilot to help in all capabilities provided by Microsoft 365 (Word, etc.) and has introduced solutions to support sales, service, and security along with assistance for software developers. Copilot is technology with broad reach, so it is not surprising that Synopsys has jumped on the potential to extend that technology to EDA. I should add that regular Copilot already supports RTL (thanks Matt Genovese for the demo!) but as a proof of concept in my view. Synopsys is aiming for a more robust version. 😊

Why the Continued Emphasis on AI?

OK, so AI is hot, a new technology with a lot of potential applications, but some might suspect it is becoming a solution in search of a problem. In fact, new design challenges abound, and this continued emphasis on AI is an important direction that the design and EDA communities are exploring to help product designers jump ahead.

Trends to domain-specific architectures and to multi-die systems are ways that product teams are overcoming Moore's Law limitations in performance and power scaling for advanced processes. However, both approaches greatly amplify complexity in design, verification, and implementation, yet teams must deliver products on even more compressed schedules. All this while industry staffing is expected to fall short by tens of thousands of engineers by 2030.

Incremental improvements in tools and methodologies will continue to be important but clearly we need a turbo-charge to overcome problems at this scale. We need ways for engineers to be able to deliver better results in a shorter time without need for additional staffing. That’s where AI comes in. In part through artificial intelligence capabilities in development flow tools, as in DSO.ai, VSO.ai and TSO.ai. In part through assistive (copilot) methods to guide designers more quickly to optimal solutions, the subject of Synopsys’ Copilot announcement.

Synopsys.ai Copilot Working with Designers

Shankar Krishnamoorthy, general manager of the Synopsys EDA Group, told me that the first rollout to early customers (AMD, Intel, Microsoft and other leading companies) is testing capabilities including:

  • Collaborative capabilities to provide engineers with guidance on tool knowledge, analysis of results, and enhanced EDA workflows, and
  • Generative capabilities to expedite development of RTL, formal verification assertion creation, and UVM testbenches.

In addition to these first new developments, Shankar said his team plans to develop autonomous capabilities across the Synopsys.ai suite, which will enable end-to-end workflow creation from natural language, spanning architecture to design and manufacturing.

I'll expand on a couple of points, starting with formal verification assertions, where I claim at least informed-novice status thanks to working with the Synopsys VC Formal group. Formal verification is a very powerful technology but historically has been limited by the high level of expertise demanded of practitioners, especially when it comes to developing assertions and constraints. Packaging standard tests as apps has greatly simplified many applications and has increased adoption, but it hasn't done anything to simplify non-standard or specialized property checks.

Part of the problem is the complexity of the property language (SVA). Copilot should be able to help by converting natural language requirements written by a verification engineer into the correct formal syntax and by providing recommendations for the RTL.
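
As a purely hypothetical illustration (my own example, not Synopsys.ai Copilot output), a requirement such as "every request must be acknowledged within four cycles" would map to an SVA property along these lines, where clk, rst_n, req and ack are assumed signal names:

// Hand-written equivalent of what such a generated assertion could look like
property req_ack_within_4;
 @(posedge clk) disable iff (!rst_n)
  req |-> ##[1:4] ack;
endproperty
a_req_ack: assert property (req_ack_within_4);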

Next, RTL generation. This for me is a more complex picture with different potential use-cases. One is to autocomplete a chunk of code given sufficient context, or to create a code snippet from a natural language description, while at the same time creating assertions to check the correctness of the generated code. Both could be valuable accelerators for a junior developer. Another use-case might extend generation to small blocks from a natural language description (remember the 4096-token limit on GPT-3).

Shankar said the distinction here between proofs of concept and the Synopsys.ai implementation is that Synopsys.ai guard-rails and checks generated code using the many technologies they have at their disposal (even PPA analyses) and of course many years of learning and expertise.

In all cases, I expect experts will code-review generated code to guide further training. No one wants to see hallucinations creep in as added challenges for late-stage verification.

Availability

Shankar tells me that this technology is currently in early customer evaluation and refinement, which especially in this case makes complete sense. Which use-cases are going to make a difference in practice, and how big a difference (lines of code/day, bugs created or found per day, …), will emerge from that process. Add in the extensive scope of the goal, and I expect more general releases to come a feature or two at a time as these are refined and proven robust in a wide range of use-cases.

Looks like Synopsys is following the Sundar Pichai maxim of “You will see us be bold and ship things, but we are going to be very responsible in how we do it.” Good for them for taking the first step! You can read the press release HERE.


Synopsys 224G SerDes IP’s Extensive Ecosystem Interoperability
by Kalar Rajendiran on 11-27-2023 at 6:00 am


Hyperscale data centers are evolving rapidly to meet the demands of high-bandwidth, low-latency applications, ranging from AI and high-performance computing (HPC) to telecommunications and 4K video streaming. The increasing need for faster data transfer rates has prompted a scaling of Ethernet from 51Tb/s to 100Tb/s. Numerous suppliers offer IP and components such as switches, backplane connectors, pluggables, cable assemblies, and other networking infrastructure elements. Extensive ecosystem interoperability is a must to build robust systems.

Synopsys 224G SerDes IP

The Synopsys 224G SerDes IP is designed to provide exceptional performance, power efficiency, and configurability, making it a versatile solution for a wide range of applications, including high-speed networking, data centers, and artificial intelligence. This IP is engineered to support multiple industry-standard protocols, enabling it to seamlessly interface with a variety of communication standards. Its compatibility with protocols such as PCIe (Peripheral Component Interconnect Express), Ethernet, and Common Electrical I/O (CEI) showcases the SerDes IP's versatility across different applications. The 224G SerDes IP offers a high degree of configurability, allowing designers to tailor its parameters to specific system requirements. Functional demonstrations highlight how the IP can adapt to different data rates, channel lengths, and signal integrity conditions, showcasing its flexibility in meeting the unique needs of various applications.

What might not be as widely known is Synopsys SerDes IP’s remarkable interoperability within an extensive ecosystem.

Attention to Ecosystem Interoperability

Synopsys recognizes the diversity of the modern semiconductor ecosystem, with various vendors providing critical components such as switches, pluggables, and other networking infrastructure elements. Synopsys collaborates with industry partners in the development and validation of its silicon proof points. This collaborative approach ensures that the technology is not developed in isolation but is tested and refined in conjunction with the broader semiconductor ecosystem. Synopsys invests heavily in rigorous testing and validation procedures to ensure that its solutions work seamlessly in real-world scenarios. This involves comprehensive testing with components from various vendors to simulate diverse networking environments. It includes stress testing under challenging conditions, demonstrating the SerDes IP’s reliability and stability in real-world applications.

Extensive Ecosystem Interoperability Demonstrations

Synopsys 224G SerDes IP continues showcasing extensive ecosystem interoperability through multiple tradeshows

Synopsys has actively demonstrated the interoperability of its 224G and 112G SerDes solutions in various settings, establishing its commitment to creating technology that seamlessly integrates into diverse environments. Some notable demonstrations include those at ECOC 2023, the TSMC Symposium 2023, OIF & OFC 2023, DesignCon 2023, ECOC 2022, and other industry events.

ECOC 2023:

Synopsys showcased the performance of its 224G TX and RX with Keysight test equipment and Keysight's latest software for key 224G TX and RX characterization parameters. OIF interop demonstrations included the Synopsys 224G RX equalizing a 224G C2M channel driven by a third-party 224G SerDes TX, showcasing a BER orders of magnitude better than the IEEE or OIF 224G specifications indicate.

TSMC Symposium 2023:

The demo highlighted interoperability between Synopsys 224G hardware, connectivity, mechanicals, signal integrity, and power integrity, demonstrating superior performance in real-time.

OIF & OFC 2023:

Synopsys demonstrated the interoperability of its 224G and 112G Ethernet PHY IP solutions. The demonstrations featured wide-open PAM4 eyes, very low jitter, and excellent linearity, underscoring the robustness of Synopsys’ SerDes technology.

DesignCon 2023:

This video clip shows seven demonstrations of the Synopsys 224G and 112G Ethernet PHY IP, and the Synopsys PCIe 6.0 IP interoperating with third-party channels and SerDes.

ECOC 2022:

Synopsys showcased the performance and interoperability of its 224G and 112G Ethernet PHY IP solutions at ECOC 2022. Demonstrations included the world’s first 224G Ethernet PHY IP interop with Keysight AWG and ISI channel.

Synopsys Commitment to Furthering SerDes Technology

Synopsys’ commitment to pushing the boundaries of high-speed serial interface technology is evident in its multiple silicon proof points across various data rates, including 56G, 112G and 224G to implement 400G and 800G data connectivity. By successfully implementing and validating their SerDes IP across different data rates, Synopsys has showcased the robustness and adaptability of its core technology. This not only instills confidence in the current implementations but also suggests that the technology is well-prepared for the challenges posed by the upcoming 1.6Tbps speeds.

For details about Synopsys 224G Ethernet IP, visit the product page.

Summary

Synopsys 224G Ethernet PHY IP showcasing 224Gbps TX PAM-4 eyes in TSMC N3E

Synopsys 224G Ethernet PHY IP is available to help streamline the transition to the 1.6T Ethernet data transfer rate. In addition to doubling 112G data rates, the Synopsys 224G Ethernet PHY IP consumes one-third less power (per bit) compared to its predecessor while optimizing network efficiency by reducing cable and switch counts in high-density data centers. Synopsys is the first company to demonstrate 224G Ethernet PHY IP.

By emphasizing multivendor interoperability at 224G, Synopsys has positioned itself as a key enabler of the global data center ecosystem. Its solutions seamlessly integrate with a multitude of components, contributing to the efficiency and reliability of high-speed networking infrastructure.

Also Read:

Synopsys Debuts RISC-V IP Product Families

A Fast Path to Better ARC PPA through Fusion Quickstart Implementation Kits and DSO.AI

100G/200G Electro-Optical Interfaces: The Future for Low Power, Low Latency Data Centers


Podcast EP195: A Tour of Mythic's Unique Analog Computing Capabilities with Dave Fick
by Daniel Nenni on 11-24-2023 at 10:00 am

Dan is joined by Dave Fick, co-founder and CEO of Mythic. Dave leads Mythic to bring groundbreaking analog computing to the AI inference market. With a PhD in Computer Science and Engineering from the University of Michigan, he brings a wealth of knowledge and expertise to the industry.

Dan explores Mythic’s unique analog computing capability with Dave. The tradeoffs between edge and cloud processing are discussed. Dave explains the benefits of Mythic’s approach for edge computing in many demanding AI applications. Speed, power density, cost, form factor, training efficiency and latency are all discussed, highlighting the substantial benefits of the Mythic approach.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Meghali Chopra of Sandbox Semiconductor
by Daniel Nenni on 11-24-2023 at 6:00 am


Dr. Meghali Chopra is co-founder and CEO of SandBox Semiconductor. She is responsible for SandBox’s vision and strategy and oversees the development of SandBox’s software products and technologies. Dr. Chopra received her PhD in Chemical Engineering from the University of Texas at Austin where her research focused on computational algorithms for plasma process optimization. She has her B.S. with Honors in Chemical Engineering from Stanford University. Dr. Chopra is an industry expert with publications in leading peer-reviewed journals and patents in the areas of semiconductor processing and computational optimization.

Tell us about your company?
Founded in 2016, SandBox Semiconductor is a pioneer in developing AI-based software solutions to accelerate process development for semiconductor manufacturing. Our fully integrated, no-code AI tool suite gives process engineers the ability to build their own physics-based, AI-enabled models to solve challenges during process definition, ramp-up, and high-volume manufacturing.

Using SandBox’s physics-based models and machine learning tools, process engineers in the semiconductor industry can virtually simulate, predict, and measure process outcomes. Even with small sets of experimental data, SandBox’s tools can extract valuable insights and patterns, helping engineers to gain a deeper understanding of manufacturing processes and to make informed decisions about recipe adjustments. SandBox leverages expertise in numerical modeling, machine learning, and manufacturing optimization to develop its proprietary toolsets, which are used by the world’s leading chip manufacturers and semiconductor equipment suppliers.

What problems are you solving?
At SandBox, we reduce cycles of learning for next-generation advanced manufacturing technologies.  To optimize a recipe, a process engineer must specify a process window for tens of process conditions including pressure, temperature, and gas flow rates. Determining the best process conditions is so challenging that oftentimes a recipe will take over two years to develop, or worse, the chip is dropped from production because the cost of the process development becomes too expensive. This technology gap and cycle time is a significant barrier to the deployment of novel microelectronic devices and imposes a substantial economic burden on semiconductor manufacturers who must make significant R&D investments to stay afloat.

SandBox provides computational modeling software that accelerates process development and enables semiconductor manufacturers to reduce costs, get to market faster, and commercialize new processes not possible before.

What application areas are your strongest?
SandBox works on leading-edge logic and memory manufacturing processes. Our users are typically performing technology development or high-volume manufacturing recipe optimization. Our technologies have been used on a range of optimization applications including feature-level, die-to-die, across-wafer, chamber-to-chamber, and tool-to-tool.

What keeps your customers up at night?
The process engineers we work with must figure out how to optimize many process conditions to manufacture billions of features across the wafer with nano-scale precision and at high throughput.  These process engineers are extremely knowledgeable and arguably the single most important individuals within each of our semiconductor customers. Unfortunately, these process engineers are often over-worked as they must continually push the envelope in advancing to the next node.  We developed our tools with these process engineers in mind – our mission is to provide meaningful leverage to the process engineer as he or she works to enable manufacturers to bring new microelectronics to market faster.

What does the competitive landscape look like and how do you differentiate?
Our proprietary modeling pipeline enables users to make process predictions with a small number of experimental data points.  The competitive landscape for process engineer-focused computational modeling tools is very limited.  Many of our customers have internal modeling groups, but our observation is that most frequently our process engineering users rightfully rely on their expertise and intuition to drive critical changes in recipe development.  To that end, the most common recipe optimization approach is the process engineer’s intuition.  We seek to help these process engineers in their role, particularly as the advanced manufacturing nodes increasingly push the limits of physics and chemistry in conjunction with the process engineer’s demands in a 24-hour day.

What new features/technology are you working on?
SandBox recently released a new product for its technology suite called Weave™. Weave™ significantly improves metrology accuracy and precision by leveraging advanced machine learning capabilities to extract and analyze profiles from SEM and TEM data. Process development engineers can spend up to 20% of their time manually measuring SEM and TEM images. With Weave, process engineers minimize tedious manual tasks and increase metrology accuracy, resulting in more insights, quicker experimentation, and reduced costs during process definition, ramp-up, and high-volume manufacturing.  The introduction of Weave continues on our platform vision as we work to provide a comprehensive tool-suite to bring easy to use physics-based AI tools to market with the goal of enabling the process engineer.

How do customers normally engage with your company?
Customers can reach out to us at info@sandboxsemiconductor.com or through our website at www.sandboxsemiconductor.com.

Also Read:

CEO Interview: Dr. J Provine of Aligned Carbon

CEO Interview: Vincent Bligny of Aniah

Executive Interview: Tony Casassa, General Manager of METTLER TOLEDO THORNTON