Scalable Verification Solutions at Siemens EDA
by Daniel Nenni on 02-24-2022 at 6:00 am

Lauro Rizzatti recently interviewed Andy Meier, product manager in the Scalable Verification Solutions Division at Siemens EDA. Andy has held positions in the electronics and high-tech fields during his 20-year career, including Sr. Product Marketing Manager at Siemens EDA, Product Marketing Manager at Mentor Graphics, Solution Product Manager at Hitachi Data Systems, Director of Application Engineering at Carbon Design Systems, and Sr. Verification Engineer at SiCortex. He holds a Bachelor of Science degree in Engineering and Computer Engineering from Worcester Polytechnic Institute in Worcester, Mass.

Thank you for meeting with me, Andy. It is a pleasure to talk with you. One of the major challenges SoC verification and validation teams face is ensuring correct system operation with real workloads. What is at the core of this challenge?

The core of this challenge comes down to the fact that hardware and software teams have different perspectives and different debug needs when tasked with ensuring correct system operation. Hardware and RTL design teams often rely on waveforms to debug, while software developers need a full-functioning software debugger. The problem arises when an issue requires both groups to be involved in the debug: each set of users is speaking a different debug language. They need a way to speak a common debug language and correlate their views between the hardware teams and the software teams.

How is it possible for them to, so to speak, speak a common language?

In our Codelink product, we have a correlation engine that allows our customers to do just that. It is one of the greatest strengths of Codelink. As I said, the SW team is looking for a full-functioning SW debugger. Being able to single-step the SW execution while looking at a source code view, CPU registers, and memory views of what is happening in the SoC is invaluable. Being able to then correlate that to exactly what is happening at the RTL level by looking at the waves is extremely powerful. This is what truly enables the HW/SW co-verification use case.
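
Codelink's correlation engine is proprietary, so the sketch below is only a conceptual illustration of the idea, not its actual API. It assumes a hypothetical program-counter trace captured from an emulation run and a debug-symbol map, and shows how a source-level step can be tied to a simulation timestamp in the RTL waveform, and vice versa.

```python
# Hypothetical illustration of HW/SW debug correlation -- not Codelink's API.
# Maps a software source line to the emulation time at which the CPU executed
# it, so the same moment can be inspected in the RTL waveform, and back again.

from bisect import bisect_left

# Assumed inputs: a (cycle, PC) trace from the emulator and a symbol table
# mapping instruction addresses to (source file, line).
pc_trace = [
    (1000, 0x8000_0000),
    (1001, 0x8000_0004),
    (1050, 0x8000_0120),  # e.g. entry of an interrupt handler
]
symbols = {
    0x8000_0000: ("boot.c", 12),
    0x8000_0004: ("boot.c", 13),
    0x8000_0120: ("irq.c", 40),
}

CLOCK_PERIOD_NS = 10  # assumed emulation clock period

def source_to_sim_time(file, line):
    """Return simulation times (ns) at which a given source line executed."""
    return [cycle * CLOCK_PERIOD_NS
            for cycle, pc in pc_trace
            if symbols.get(pc) == (file, line)]

def sim_time_to_source(time_ns):
    """Return the source location being executed at a waveform timestamp."""
    cycle = time_ns // CLOCK_PERIOD_NS
    cycles = [c for c, _ in pc_trace]
    idx = max(bisect_left(cycles, cycle + 1) - 1, 0)
    return symbols.get(pc_trace[idx][1])

print(source_to_sim_time("irq.c", 40))   # -> [10500]
print(sim_time_to_source(10500))         # -> ('irq.c', 40)
```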

Can you share a real-world customer example of the HW/SW co-verification use case?

Recently, a customer came to us with a unique challenge. This customer had a six-stage boot process that jumped from CPU to CPU through the power-on sequence, where the CPUs came from different vendors. Ultimately, they were tasked with integrating the IP as well as validating the multi-stage boot process of their SoC. The customer's existing verification and validation methodologies didn't have a unified way to debug this scenario. They would look at waveforms from one simulation or emulation run at a specific stage, and then use different tool sets from different CPU vendors to debug the following stage. There was no common, unified debug. To make matters more challenging, the team involved in this verification and validation effort was the SoC integration team. This team didn't have domain expertise in the hardware design, and they didn't have the software expertise to know which individual blocks were responsible for what. Still, they were tasked with validating and making sure that the SoC booted properly. They were interested in using a new solution to address their needs.

Using standard features from Codelink, like the source code view, the register view, and the correlation engine, as well as RTL waves from an emulation run, the customer created and adopted a new methodology focused on unifying their debug. Using this new methodology, they were able to look at the multi-core capabilities of their SoC and debug the software execution as it jumped through the various stages to ensure things were operating as expected.

That’s quite interesting. Beyond what you just described, what are some current industry trends that present other challenges?

In terms of trends, we see the use cases expanding beyond traditional SW debug and HW/SW co-validation. Customers have expanding SoC requirements, and those requirements are driving new opportunities to extend SW-enabled verification.

For example, customers are trying to understand how their software is performing. They are trying to understand the execution behavior of different event handlers and whether those handlers are executing within the budgeted amount of time. To address this additional use case in an emulation environment, we have added software performance profiling to our Codelink tool. This allows customers to identify where the time is being spent and which functions are called most frequently. This is key for customers working on hardware/software partitioning or trying to tune their SoC performance. One can imagine a case where an event handler is supposed to execute within a certain amount of time but, for one reason or another, doesn't. The customer can now isolate where the time is being spent from a software perspective, and then tune the performance.
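
To illustrate the kind of analysis such profiling enables (a hypothetical sketch under assumed inputs, not Codelink's actual output or interface), the example below aggregates per-function cycle counts from a trace and flags an event handler that exceeds its assumed time budget.

```python
# Hypothetical sketch of software performance profiling on an emulation trace.
# It only shows the kind of per-function time aggregation and budget check
# described above; function names and budgets are made up for illustration.

from collections import Counter

CLOCK_PERIOD_NS = 10  # assumed emulation clock period

# Assumed input: one trace sample per cycle, attributed to a function name.
trace_samples = ["main"] * 500 + ["uart_irq_handler"] * 1200 + ["main"] * 300

# Assumed per-handler execution-time budgets in nanoseconds.
budgets_ns = {"uart_irq_handler": 10_000}

def profile(samples):
    """Return (function, cycles, time_ns, share of run) sorted by time spent."""
    counts = Counter(samples)
    total = sum(counts.values())
    return [(fn, cycles, cycles * CLOCK_PERIOD_NS, cycles / total)
            for fn, cycles in counts.most_common()]

for fn, cycles, time_ns, share in profile(trace_samples):
    over = fn in budgets_ns and time_ns > budgets_ns[fn]
    flag = "  <-- exceeds budget" if over else ""
    print(f"{fn:20s} {cycles:6d} cycles  {time_ns:8d} ns  {share:5.1%}{flag}")
```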

We've also recently seen, from both simulation and emulation customers, cases where providing SW code coverage helped the SW verification and validation effort. One example was a customer whose SW-enabled verification methodology included randomly generated SW. The randomly generated SW acted like test vectors for their SoC, and due to its random nature, the customer needed to know exactly which SW test scenarios had been covered and executed. We've added SW code coverage to Codelink to address this need. All that was needed from the customer was the software executable and debug symbols, which they already needed to execute the workload. The validation team can now look at the SW code coverage report and see which statements, functions, conditions, or branches were covered during the execution.
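
A hypothetical sketch of the underlying idea (again, not Codelink's actual mechanism): program-counter addresses observed during the run are mapped back to source statements through the debug symbols, and the covered set is compared against all known statements. Addresses and file names below are illustrative only.

```python
# Hypothetical sketch of statement-level SW code coverage derived from an
# execution trace plus debug symbols -- not Codelink's implementation.

# Assumed debug-symbol mapping from instruction address to source statement.
addr_to_stmt = {
    0x8000_0000: ("test.c", 10),
    0x8000_0004: ("test.c", 11),
    0x8000_0008: ("test.c", 12),
    0x8000_000C: ("test.c", 14),
}

# Assumed set of PC values observed while the randomly generated SW ran.
executed_pcs = {0x8000_0000, 0x8000_0004, 0x8000_000C}

def statement_coverage(addr_map, pcs):
    """Return (covered statements, missed statements, coverage ratio)."""
    all_stmts = set(addr_map.values())
    covered = {addr_map[pc] for pc in pcs if pc in addr_map}
    missed = all_stmts - covered
    return covered, missed, len(covered) / len(all_stmts)

covered, missed, ratio = statement_coverage(addr_to_stmt, executed_pcs)
print(f"Statement coverage: {ratio:.0%}")   # -> 75%
print("Missed:", sorted(missed))            # -> [('test.c', 12)]
```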

What other industry trends are you seeing?

With different SoC market verticals, such as automotive, IoT, 5G, and enterprise data centers, we have seen a need to increase our collaboration with our CPU IP partners. Over time, we've built strong partnerships with different IP suppliers to provide Codelink debug capabilities for their CPU IPs. Recently, we have seen a significant increase in requests for RISC-V CPU support from the RISC-V community. In collaboration with SiFive, we have been able to add Codelink support for several SiFive CPUs. This is just one example. Being CPU IP vendor agnostic allows us to work with all the IP vendors to meet customers' needs.

There is some interesting work going on in this space, Andy. Thank you again for your time. Maybe we can catch up in a year and look at your continued progress.

Also read:

Power Analysis in Advanced SoCs. A Siemens EDA Perspective

Faster Time to RTL Simulation Using Incremental Build Flows

SIP Modules Solve Numerous Scaling Problems – But Introduce New Issues
