
Innovation in Verification – February 2020
by Bernard Murphy on 02-11-2020 at 6:00 am

This blog is the next in a series in which Paul Cunningham (GM of the Verification Group at Cadence), Jim Hogan and I pick a paper on a novel idea in verification and debate its strengths and opportunities for improvement.

Our goal is to support and appreciate further innovation in this area. Please let us know what you think, and send any suggestions on papers or articles for us to discuss in future blogs. Ideas must be published in a peer-reviewed forum and be available (free or through a popular site) to all readers.

The Innovation
Our next pick is “Learning to Produce Direct Tests for Security Verification using Constrained Process Discovery”. This was presented at DAC 2017. The authors are Kuo-Kai Hsieh, Li-C. Wang and Wen Chen (all from UCSB), and Jayanta Bhadra from NXP.

Security verification is a challenging part of any complete test plan. Black hats know that general testing tends to go broad rather than deep in order to bound the scope of tests, so they look especially for complex penetration attacks. This paper offers a method to learn from deep penetration tests developed by security experts, in order to generate more penetration attacks of a similar type.

All tests in this method are based on sequences, in the paper sequences of calls to C operations; these could equally be C or Portable Stimulus Standard (PSS) tests. The authors start with tests developed by security experts and, through grammatical inference (a type of machine learning), build an automaton model representing a complete grammar of all such tests.

Training also develops constraints, based on limitations observed in sub-sequences of the training examples. The authors say the automaton model matures relatively quickly, while the constraints continue to mature with more examples.
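
For a flavor of the inference step, here is a Python sketch of the classic k-tails approach to grammatical inference: build a prefix tree acceptor from the expert test sequences, then merge states with matching short “tails” to generalize into an automaton. This is my illustration with invented operation names, not the authors' constrained process-discovery algorithm, which is more involved.

```python
# Sketch of grammatical inference over expert test sequences (k-tails
# style state merging). Not the paper's algorithm; ops are invented.
from collections import defaultdict

def build_prefix_tree(tests):
    """Prefix tree acceptor: state 0 is the root, edges are operations."""
    trans = defaultdict(dict)          # state -> {op: next_state}
    accepting = set()
    nxt_id = 1
    for test in tests:
        s = 0
        for op in test:
            if op not in trans[s]:
                trans[s][op] = nxt_id
                nxt_id += 1
            s = trans[s][op]
        accepting.add(s)               # end of an observed expert test
    return trans, accepting

def k_tail(trans, state, k):
    """All op-strings of length <= k leaving `state` (its k-tail)."""
    tails = set()
    def walk(s, path):
        if path:
            tails.add(tuple(path))
        if len(path) == k:
            return
        for op, nxt in trans[s].items():
            walk(nxt, path + [op])
    walk(state, [])
    return frozenset(tails)

def ktails_merge(trans, accepting, k=2):
    """Merge states with identical k-tails (toy resolution: overwrite)."""
    states = set(trans) | {t for d in trans.values() for t in d.values()}
    rep, merged = {}, {}
    for s in sorted(states):
        sig = (k_tail(trans, s, k), s in accepting)
        merged[s] = rep.setdefault(sig, s)
    out = defaultdict(dict)
    for s, edges in trans.items():
        for op, dst in edges.items():
            out[merged[s]][op] = merged[dst]
    return out, {merged[s] for s in accepting}

# Toy expert tests: sequences of security-relevant operations.
expert_tests = [
    ["load_key", "decrypt", "store_secure", "enable_bus"],
    ["load_key", "decrypt", "enable_bus"],
]
tree, acc = build_prefix_tree(expert_tests)
automaton, acc_states = ktails_merge(tree, acc, k=2)
print(dict(automaton))
```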

Once trained, the model plus constraints can be used to generate new sequence tests through a SAT solver. Generated tests are run in conventional verification environments. The authors show the results of their analysis, presenting improved coverage of coverage points (CPs) defined by experts.
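
For a concrete picture of the generation step, here is a sketch using the Z3 solver (pip install z3-solver) standing in for the paper's SAT formulation. The toy automaton, operation names and the single ordering constraint are my assumptions for illustration: the automaton is unrolled over a bounded sequence length, and each solution found is blocked so that repeated solves enumerate distinct tests.

```python
# Sketch: generate new sequence tests from a learned automaton plus
# constraints. Z3 stands in for the paper's SAT solver; the automaton,
# ops and constraint below are illustrative assumptions.
from z3 import And, Int, Or, Solver, sat

OPS = {"load_key": 0, "decrypt": 1, "store_secure": 2, "enable_bus": 3}
TRANS = {(0, 0): 1, (1, 1): 2, (2, 2): 3, (3, 3): 4, (2, 3): 4}  # (state, op) -> state
ACCEPT = {4}
N = 4  # bounded test length

s = Solver()
ops = [Int(f"op_{i}") for i in range(N)]
states = [Int(f"st_{i}") for i in range(N + 1)]
s.add(states[0] == 0)
for i in range(N):
    # every step must follow some edge of the learned automaton
    s.add(Or([And(states[i] == src, ops[i] == op, states[i + 1] == dst)
              for (src, op), dst in TRANS.items()]))
s.add(Or([states[N] == a for a in ACCEPT]))  # end in an accepting state
# example learned constraint: enable_bus never precedes store_secure
for i in range(N):
    s.add(Or([ops[i] != OPS["enable_bus"]] +
             [ops[j] == OPS["store_secure"] for j in range(i)]))

names = {v: k for k, v in OPS.items()}
while s.check() == sat:
    m = s.model()
    print([names[m[o].as_long()] for o in ops])  # a new direct test
    s.add(Or([o != m[o] for o in ops]))          # block it, find another
```

The blocking clause at the end of the loop is what turns a single satisfying assignment into an enumeration of distinct new tests, which is the role the SAT-based generator plays over the learned model and constraints in the paper.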

Paul
I really liked the exposition of the problem in this paper, particularly the concepts of confidentiality, integrity and availability. The authors combine two ideas: constraint solving (a standard verification method) and grammatical inference. This is a nice follow-on from our previous blog (which used genetic learning to improve coverage).

Another thing I found intriguing was using machine learning to generate attacks rather than to detect them, since AI is more often used to detect behavioral anomalies in real time. Their method, grammatical inference on a state machine, is something that would be interesting to apply on top of PSS engines. If someone were interested in doing this – for research – I’d be happy to support them with a Perspec license.

The test example shows promise. It would be interesting to see how the method scales over a range of test cases and sizes. I’d also like to see more discussion on metrics for assessing the effectiveness of security verification methods. This is a tricky area, I know, but all insights are useful.

For example, using coverage points as a metric is certainly useful to increase general coverage, but doesn’t give a clear sense of impact on security. Is it possible to adapt the approach to consider impact on the attack surface (for example), a metric more directly tied to security?

Overall, I think there were some nice ideas prompting this work. I would like to see them developed further.

Jim
I’m going to take a bit of a different tack here, more on where I think security may be heading and on the current market.

First, directions. At the high end (servers), security is becoming a game of whack-a-mole. Before a vulnerability has been fixed, a new one has been found. I don’t think our current approaches are sustainable. We need to be looking at more vulnerability-tolerant architectures.

At the low end (IoT), decent security is better than none, so there is still plenty of opportunity for methods of this type.

In adoption, there’s a gap between the security must-haves and the security nice-to-haves. Must-haves are the DoD, nuclear reactors, automotive and payment cards – places where security is not negotiable and liability is huge if you get it wrong. There’s a middle ground where the same probably applies but there’s no organizational or political will to invest in upgrades. For everything else, regulation may be the only path.

Me
I think an example attack would have helped. One I remember attacks a hardware root of trust (HRoT). Inside the HRoT, a master key is decrypted onto the HRoT data bus, then stored in a secure location. External access to the bus is disabled during this phase.

The HRoT then decrypts data for external use, for which external access to the internal bus must be enabled. If access is enabled too soon, the key is still on the internal bus and can be read from outside the HRoT. A small coding error exposes the key for a short time. Would this method have found such a bug?
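
To make the window concrete, here is a toy Python model of that ordering bug (my own construction, not from the paper, and nothing like a real HRoT implementation): each micro-operation takes one “cycle”, and an attacker probe samples the bus after every cycle. Swapping two steps opens the snoop window.

```python
# Toy model of the exposure window: not from the paper and not a real
# HRoT design. One micro-op per "cycle"; the probe is the attacker.

def decrypt(blob):
    return blob[::-1]              # stand-in for the real crypto

class HRoT:
    def __init__(self):
        self.bus = None            # value on the internal data bus
        self.ext_access = False    # external access gate to that bus
        self.secure_store = None

def run(steps, probe):
    """Execute one micro-op per cycle, sampling the bus after each."""
    leaks = []
    for step in steps:
        step()
        seen = probe()
        if seen is not None:
            leaks.append(seen)
    return leaks

h = HRoT()
probe = lambda: h.bus if h.ext_access else None       # attacker's view

correct = [
    lambda: setattr(h, "ext_access", False),          # lock the bus
    lambda: setattr(h, "bus", decrypt("YEK_RETSAM")), # key transits bus
    lambda: setattr(h, "secure_store", h.bus),        # store securely
    lambda: setattr(h, "bus", None),                  # scrub the bus
    lambda: setattr(h, "ext_access", True),           # then re-enable
]
buggy = correct[:3] + [correct[4], correct[3]]        # enable before scrub

print("correct run leaks:", run(correct, probe))      # []
h = HRoT()                                            # reset state
print("buggy run leaks:", run(buggy, probe))          # ['MASTER_KEY']
```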

On coverage, incremental improvement isn’t very compelling. I would like to see more discussion on how to determine that some class of penetration attacks could be prevented completely. Expert-defined coverage points don’t seem like the right place to start.

To see the next paper click HERE.

To see the previous paper click HERE.
