Symbolic Trojan Detection. Innovation in Verification
by Bernard Murphy on 03-30-2022 at 6:00 am

We normally test only for correctness of the functionality we expect. How can we find functionality (e.g. Trojans) that we don’t expect? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Trojan Localization Using Symbolic Algebra, published at the 2017 Asia and South Pacific Design Automation Conference (ASP-DAC). The authors are from the University of Florida and the paper has 30 citations.

Methods in this class run an equivalence check between an RTL specification, presumed Trojan-free, and a gate-level implementation that may contain Trojans. Through equivalence checking, the authors can not only detect the presence of a Trojan but also localize the added logic. Unlike traditional BDD-based equivalence checking, this approach extracts and compares polynomial representations of the specification and gate-level views.

Implementation polynomials are easy to construct, e.g. NOT(a) → 1-a and OR(a,b) → a+b-a*b. One writes specification polynomials to resolve to 0, e.g. 2*Cout+Sum-(A+B+Cin) for a full adder, with one polynomial per output. In sequential logic, flop inputs act as outputs. Checking then proceeds by “reducing” the specification polynomials, tracing backwards in a defined order from the outputs. At each backward step, the method replaces output/intermediate terms with the implementation polynomials that create those values. On completion, each polynomial remainder should be zero if specification and implementation are equivalent. Any non-zero remainder flags suspect logic.
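
To make the reduction step concrete, here is a minimal sketch in Python using sympy for a single-bit full adder. The gate models, the intermediate signal names (t1, t2, t3) and the reverse-order substitution are illustrative assumptions on my part, not the paper’s actual tool flow.

    import sympy as sp

    A, B, Cin, Sum, Cout, t1, t2, t3 = sp.symbols('A B Cin Sum Cout t1 t2 t3')

    # Polynomial models of gates over {0,1}
    NOT = lambda a: 1 - a
    AND = lambda a, b: a * b
    OR  = lambda a, b: a + b - a * b
    XOR = lambda a, b: a + b - 2 * a * b

    # Hypothetical gate-level full adder, listed in reverse topological order
    impl = [
        (Cout, OR(t2, t3)),
        (Sum,  XOR(t1, Cin)),
        (t3,   AND(Cin, t1)),
        (t2,   AND(A, B)),
        (t1,   XOR(A, B)),
    ]

    # Specification polynomial: resolves to zero for a correct full adder
    spec = 2 * Cout + Sum - (A + B + Cin)

    # Trace backwards from the outputs, replacing each signal by the
    # polynomial of the gate that creates it
    poly = spec
    for out, expr in impl:
        poly = sp.expand(poly.subs(out, expr))

    # Reduce with the Boolean idempotence rule x**2 = x on the primary inputs
    for v in (A, B, Cin):
        poly = sp.expand(poly.replace(lambda e: e.is_Pow and e.base == v, lambda e: v))

    print(poly)  # 0 means equivalent; a non-zero remainder flags suspect logic

Swapping one gate in the impl list for a different function leaves a non-zero remainder, and the signals surviving in that remainder are what point at the suspect region.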

Since the method localizes suspect areas quite closely, the authors can prune out all non-suspect logic, leaving a much smaller region to analyze. They then run ATPG to generate vectors that trigger the Trojan.

Paul’s view

This is a fun paper and an easy read. The basic idea is to use logical equivalence checking (LEC) between specification and implementation (e.g. RTL vs. gates) to see if malicious trojan logic has been added to the design. The authors do the equivalence check by forming arithmetic expressions to represent the logic rather than using a more traditional BDD- or SAT-based approach. As with commercial LEC tools, their approach first matches sequential elements in the specification and implementation, which reduces the LEC problem to a series of Boolean equivalence checks on the logic cones driving each sequential element.
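
A toy sketch of that decomposition, with made-up compare points and cone functions; the exhaustive check stands in for the BDD/SAT engines a real LEC tool would use.

    from itertools import product

    # Hypothetical compare points: each matched flop gets an independent
    # Boolean check of its specification cone vs. its implementation cone
    compare_points = {
        "flop_a": (("x", "y"),
                   lambda x, y: x ^ y,                            # spec cone
                   lambda x, y: (x | y) & (1 - (x & y))),         # impl cone, same function
        "flop_b": (("x", "y", "z"),
                   lambda x, y, z: x & y,                         # spec cone
                   lambda x, y, z: (x & y) | (x & (1 - y) & z)),  # extra trojan-like term
    }

    for name, (inputs, spec, impl) in compare_points.items():
        same = all(spec(*v) == impl(*v) for v in product((0, 1), repeat=len(inputs)))
        print(name, "equivalent" if same else "NON-EQUIVALENT")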

The key insight for me in this paper is the observation that if a non-equivalent “suspicious” logic cone overlaps with an equivalent logic cone (i.e. they share some common logic), then this overlapping logic cannot be suspicious – i.e. it can be removed from the set of potential trojan logic.
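
A minimal illustration of that pruning rule, with invented gate and flop names: start from every gate in a failing cone, then drop any gate that also feeds a passing cone.

    # Hypothetical fan-in cones per compare point, and their equivalence results
    cones = {
        "flop_a": {"g1", "g2", "g3"},
        "flop_b": {"g3", "g4", "g5", "g_trojan"},   # fails equivalence
        "flop_c": {"g5", "g6"},
    }
    equivalent = {"flop_a": True, "flop_b": False, "flop_c": True}

    # Every gate in a non-equivalent cone starts out suspect
    suspect = set().union(*(c for p, c in cones.items() if not equivalent[p]))

    # Overlap with any equivalent cone cannot be trojan logic, so remove it
    for p, c in cones.items():
        if equivalent[p]:
            suspect -= c

    print(sorted(suspect))   # ['g4', 'g_trojan']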

Having used this insight to identify a minimal set of suspicious gates, the authors then use an automatic test pattern generation (ATPG) tool on this set to identify a design state that activates the trojan logic.

To be honest I didn’t quite follow this part of the paper. Once the non-equivalent logic is identified, commercial LEC tools just perform a SAT operation on the non-equivalent logic itself. This generates a specific example of design state where the specification and implementation behave differently. It isn’t necessary to use an ATPG tool for this purpose.
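A sketch of that point, with an invented non-equivalent cone and brute-force enumeration standing in for the SAT call: one assignment on which the specification and implementation cones disagree is itself the activating design state.

    from itertools import product

    # Hypothetical non-equivalent cone: the implementation adds a term that
    # only fires under one rare input combination (a trojan-style trigger)
    spec = lambda a, b, c: (a & b) | c
    impl = lambda a, b, c: ((a & b) | c) ^ (a & (1 - b) & c)

    # A SAT solver would do this directly on the miter of the two cones;
    # here we simply enumerate the cone inputs
    for a, b, c in product((0, 1), repeat=3):
        if spec(a, b, c) != impl(a, b, c):
            print("distinguishing state:", (a, b, c))   # (1, 0, 1)
            break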

As another fun side note, Cadence has a sister product to our Conformal LEC tool called Conformal ECO. It uses this same overlapping logic cone principle (together with a bunch of other neat tricks) to identify a minimal set of non-equivalent gates from an LEC compare. A designer can use the tool to automatically map last-minute functional RTL patches onto an existing post-layout netlist late in the tapeout process. This is a big advantage when re-running the whole implementation flow is not feasible.

Raúl’s view

Detecting Trojans is difficult because the trigger conditions are deliberately designed to be satisfied only in very rare situations. Random or other test pattern generation will likely not activate the Trojan and the circuit will appear to be functioning correctly. If a specification is available, formal verification, i.e., equivalence checking, of this specification against an implementation will uncover that they are not equivalent.

This paper uses a method based on extracting polynomials from an implementation and comparing them to a specification polynomial. This works only for combinational circuits; retiming through movement of flip-flops would presumably break this assumption. The basics are explained in the introduction to this blog; the approach is an application of Gröbner basis theory.

The paper claims that the algorithms scale linearly, which would mean that equivalence checking is linear (unlikely at best). I am not clear on what feature they call linear. As Paul points out, most of this is state of the art in commercial tools. However, the ability to narrowly identify the part of the circuit that contains the Trojan is a nice result.

My view

I understood that this was an LEC problem, approached in a different way. However, it didn’t occur to me that it was still subject to the same bounds. Paul (an expert in this area) set me straight!

Also read:

Leveraging Virtual Platforms to Shift-Left Software Development and System Verification

Using a GPU to Speed Up PCB Layout Editing

Dynamic Coherence Verification. Innovation in Verification
