ML-Guided Model Abstraction. Innovation in Verification
by Bernard Murphy on 11-29-2023 at 6:00 am

Formal methods offer completeness in proving functionality but are difficult to scale to system level without abstraction and cannot easily incorporate system aspects outside the logic world such as in cyber-physical systems (CPS). Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Next-Generation Software Verification: An AI Perspective, an article published in the May-June 2021 issue of IEEE Software. The author is from the University of Ottawa.

The author presents her research described in this paper as an adaptation of the CEGAR method for developing abstractions to be used in system-level analysis. A key difference is that CEGAR uses model checking (formal methods) to build and refine an abstraction, whereas the author’s flow (ARIsTEO) uses simulation under ML supervision for this purpose. This is an interesting and complementary approach for abstracting logic of course, but it has the added merit of being able to abstract analog, mechanical, or other non-logic systems that can be simulated by other means, for example through Simulink.

Paul’s view

Last month we looked at generating abstractions for analog circuits that simulate much faster while still being reasonably accurate. This month we take the analog abstraction theme further, into the world of cyber-physical systems. These are essentially software-level models of analog control systems with sensors and actuators, defined in Matlab Simulink: for example a smart home thermostat, automotive controllers (powertrain, transmission, etc.), or navigation systems (e.g. satellite).

The complexity of these cyber-physical systems is rising, with modern commercial systems often consisting of thousands of individual Simulink building blocks, so simulation times for verification become problematic even at this level of abstraction. The author of this month’s paper proposes using machine learning to address the problem, realized in a verification tool called Aristeo. The paper is more of an editorial piece drawing some parallels between Aristeo and model checking; to understand Aristeo itself, I found it best to read her ICSE’20 publication.

Aristeo works by building an abstraction of the cyber-physical system, called a “surrogate”, which is used as a classifier on randomized system input sequences. The goal of the surrogate is to predict whether a randomized input sequence is likely to find a bug. Sequences selected by the surrogate are applied to the full model. If the full model passes (a false positive), the surrogate is incrementally re-trained and the process continues.
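
To make the loop concrete, here is a minimal Python sketch of this surrogate-guided search. All of the function names, the toy oracle, and the threshold-based surrogate are illustrative stand-ins of my own, not Aristeo’s actual implementation.

import random

# Minimal sketch of the surrogate-guided loop described above. Every name and
# the toy oracle below are illustrative stand-ins, not Aristeo's actual API.

def random_input_sequence(length=50):
    # Randomized input sequence for the system under test.
    return [random.uniform(-1.0, 1.0) for _ in range(length)]

def run_full_model(seq):
    # Expensive full-model simulation; True means a requirement was violated.
    # A toy oracle stands in for the real Simulink model here.
    return max(seq) > 0.99

def surrogate_score(params, seq):
    # Cheap surrogate prediction of how "buggy" a sequence looks.
    return max(seq) - params["threshold"]

def retrain(params, seq):
    # Refine the surrogate on a false positive (here: just nudge a threshold).
    params["threshold"] = min(0.99, params["threshold"] + 0.01)
    return params

params = {"threshold": 0.5}                # toy initial surrogate
for trial in range(1000):
    seq = random_input_sequence()
    if surrogate_score(params, seq) <= 0.0:
        continue                           # surrogate deems the sequence unpromising: skip the costly run
    if run_full_model(seq):
        print("non-spurious failure found at trial", trial)
        break
    params = retrain(params, seq)          # false positive: refine the surrogate and keep searching

The point of the sketch is the control flow: the expensive full-model run is only paid for sequences the cheap surrogate flags, and every false positive feeds back into the surrogate.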

The surrogate is built and trained using the Matlab system identification toolbox. This toolbox supports a variety of abstractions, both discrete and continuous time, and provides machinery to fit model parameters from a set of example inputs and outputs. Models can range from simple linear functions or time-domain transfer functions to deep neural networks.

Aristeo results are solid: 20% more bugs found with 30% less compute than not using any surrogate. Interestingly, the most effective surrogate across a range of credible industrial benchmarks was not a neural network; it was a simple model in which the output at timestep t is a linear function of the input and output values from t-1 to t-n. The authors make a passing comment that the purpose of the surrogate is not to be accurate but to predict whether an input sequence is buggy. These results and observations align with our own experience at Cadence using machine learning to guide randomized UVM-based logic simulations: our goal is not to train a model that predicts circuit behavior, it is to train a model that predicts whether some randomized UVM sequence will find more bugs or improve coverage. So far, we have likewise found that complex models do not outperform simple ones.
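
For readers who want to see what such a linear surrogate looks like, here is a hedged Python sketch of an ARX-style least-squares fit, standing in for what the Matlab system identification toolbox would do. The model order, toy data, and function names are invented for the example.

import numpy as np

# Illustrative ARX-style surrogate: y[t] is a linear function of the previous n
# inputs and outputs. This stands in for a system identification fit; the data
# and model order below are made up for the example.

def fit_arx(u, y, n=2):
    # Least-squares fit of y[t] = sum_i a_i*y[t-i] + sum_i b_i*u[t-i], i = 1..n.
    rows, targets = [], []
    for t in range(n, len(y)):
        rows.append(np.concatenate([y[t-n:t][::-1], u[t-n:t][::-1]]))
        targets.append(y[t])
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coeffs

def predict_arx(coeffs, u, y_init, n=2):
    # Roll the fitted model forward from the first n known outputs.
    y = list(y_init)
    for t in range(n, len(u)):
        regressors = np.concatenate([np.array(y[t-n:t][::-1]), u[t-n:t][::-1]])
        y.append(float(coeffs @ regressors))
    return np.array(y)

# Toy data: a first-order lag responding to a random input signal.
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, size=200)
y = np.zeros_like(u)
for t in range(1, len(u)):
    y[t] = 0.9 * y[t-1] + 0.1 * u[t-1]

coeffs = fit_arx(u, y, n=2)
y_hat = predict_arx(coeffs, u, y[:2], n=2)
print("max prediction error:", float(np.max(np.abs(y_hat - y))))

Roughly speaking, the surrogate’s predicted outputs would then be checked against the requirements to decide which input sequences are worth running on the full model.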

Raúl’s view

For a second month in a row, we review a paper which is quite different from what we have covered before in this blog. This time, the topic is a new artificial intelligence (AI)-based perspective on the distinctions between formal methods and testing techniques for automated software verification. The paper is conceptual, presenting its ideas from a high-level perspective.

The author starts by observing that “for the most part, software testing and formal software verification techniques have advanced independently” and argues that “we can design new and better adaptive verification schemes that mix and match the best features of formal methods and testing”. Both formal verification and testing are posed as search problems, and their virtues and shortcomings are briefly discussed in the familiar terms of exhaustiveness and flexibility. The proposed framework is based on two systems, CEGAR (counterexample-guided abstraction refinement) and ARIsTEO (Approximation-Based Test Generation). In CEGAR, the model of the software being verified is abstracted and then refined iteratively using model checking to find bugs; if a bug is spurious, it is used to refine the abstract model, until the model is sufficiently precise for a model checker to verify or refute the property of interest. ARIsTEO works similarly, but it uses a model approximation and then search-based testing to find bugs. Again, if a bug is spurious, it is used to refine the model; here refinement is simply retraining with additional data, and the iterations continue until a nonspurious failure is found.
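
For readers less familiar with CEGAR, the toy Python sketch below illustrates the shared abstract/check/refine loop structure. The “concrete model”, the property, and the predicates are invented purely for illustration and bear no relation to a real model checker.

# Toy CEGAR-style loop, purely illustrative. The "concrete model" is a counter
# x -> (x + 2) % 10 starting at 0; the property is "x never equals 7". The
# abstraction over-approximates the reachable set as every value in 0..9 that
# satisfies all currently tracked predicates.

def concrete_reachable():
    seen, x = set(), 0
    while x not in seen:
        seen.add(x)
        x = (x + 2) % 10
    return seen                                   # {0, 2, 4, 6, 8}

def abstract_reachable(predicates):
    return {v for v in range(10) if all(p(v) for p in predicates)}

def cegar(bad_value=7):
    predicates = [lambda v: 0 <= v < 10]          # coarse initial abstraction
    refinements = [lambda v: v % 2 == 0]          # candidate predicates to add
    while True:
        if bad_value not in abstract_reachable(predicates):
            return "property holds on the abstraction"
        # Abstract counterexample: the abstraction thinks bad_value is reachable.
        if bad_value in concrete_reachable():
            return "property refuted: real counterexample"
        # Spurious counterexample: refine the abstraction and try again.
        if not refinements:
            return "no refinement available"
        predicates.append(refinements.pop())

print(cegar())                                    # -> property holds on the abstraction

ARIsTEO swaps the model checker for search-based testing on the approximation and replaces predicate refinement with retraining on additional data, but the overall loop is the same.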

This work was done in the context of, and inspired by, cyber-physical systems (CPS): complex industrial CPS models that existing formal verification and software testing techniques could not handle properly. The author concludes by expressing her hope that “the testing and formal verification communities will eventually merge to form a bigger and stronger community”. Mixing formal and simulation-based techniques to verify hardware has been common practice for a long time.
