Side Channel Analysis at RTL. Innovation in Verification
by Bernard Murphy on 08-26-2021 at 6:00 am

Roots of trust can’t prevent attacks through side-channels which monitor total power consumption or execution timing. Correcting weakness to such attacks requires pre-silicon vulnerability analysis. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is RTL-PSC: Automated Power Side-Channel Leakage Assessment at Register-Transfer Level. The paper appeared in the 2017 VLSI Test Symposium. The authors are from the University of Florida, Gainesville and NIST, MD.

This paper is one of several exploiting statistical profiles of power or other factors to determine vulnerability to side-channel attacks. Statistical analysis is an established way to extract keys post-silicon. But discovering vulnerabilities post-silicon is too late to guide design improvements to mitigate problems. Here, the authors use simulation toggle activity at RTL as a proxy for power. Their test case is an AES block. Side-channel attacks look for intermediate calculations in the algorithm that are sensitive to input data, so the authors' method applies statistical tests to detect differences between the activity distributions for a pair of trial keys. They run this across a range of pairwise trial keys and plaintext inputs.
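To make the idea concrete, here is a toy sketch (not the authors' flow) of comparing per-trace activity for two trial keys with Welch's t-statistic. The `toggle_trace` model is entirely hypothetical, a stand-in for real RTL simulation output:

```python
import math
import random

def welch_t(a, b):
    """Welch's t-statistic between two samples of per-trace toggle counts."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def toggle_trace(key, plaintexts):
    # Hypothetical stand-in for RTL simulation: one toggle count per
    # encryption, modeled as Hamming weight of key XOR plaintext plus noise.
    rng = random.Random(key)
    return [bin(key ^ pt).count("1") + rng.gauss(0, 0.5) for pt in plaintexts]

plaintexts = [random.Random(i).getrandbits(16) for i in range(200)]
t = welch_t(toggle_trace(0x1234, plaintexts), toggle_trace(0xBEEF, plaintexts))
# A large |t| flags data-dependent activity, i.e. potential leakage.
```

In the paper's setting the same comparison is repeated across many key pairs and sub-blocks, and the maximum deviation is what flags a vulnerability.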

Since the goal is not to find a key but to find potential vulnerabilities, they look for maximum deviation in their selected statistical tests across their test data. They found real vulnerabilities in both a Galois Field implementation and a LUT implementation of an AES128 encryption engine. They were also able to isolate weaknesses to specific sub-blocks, which some other methods cannot, providing more insight into potential design improvements.

Paul’s view

The use of Differential Power Attacks (DPA) to crack crypto keys is intriguing, and something I've always wanted to understand better but never got to. This paper is an easy read, and the references are very helpful too – I especially enjoyed ref [20] discussing DPA simulation of crypto algorithms implemented in software on a CPU rather than with dedicated hardware.

It’s amazing and scary to learn how probing only the power supply to a crypto algorithm can be sufficient to crack its private key. We have a collective social responsibility to find and correct weaknesses wherever we can.

The DPA premise is that if power consumed is sensitive to a choice of private key, then this relationship can be used to crack the key. Specifically, if encrypting the same data with two different keys shows a difference in power profile then this difference might be exploitable to work out the key. DPA also requires some insight into the nature of the relationship between power and the key. In this case, AES has multiple sub-steps progressively transforming the plaintext data. If the power for each sub-step can be measured and this sub-step power also varies with the number of 1’s in its output (more 1’s consumes more power), then such insight is sufficient, over a large enough number of power traces, to determine the key.
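The Hamming-weight premise Paul describes can be sketched as a toy correlation-style attack on a single XOR step (no S-box). Everything here is illustrative, not the paper's setup: the secret, the leakage model, and the simulated "power" traces are all made up. The correct key guess is the one whose predicted Hamming weights correlate best with the measured power:

```python
import random

def hw(x):
    """Hamming weight: the number of 1 bits in x."""
    return bin(x).count("1")

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

secret = 0x3C  # hypothetical key byte the "device" is leaking
rng = random.Random(1)
plaintexts = [rng.randrange(256) for _ in range(500)]
# Simulated power per trace: HW of the intermediate value plus noise.
traces = [hw(pt ^ secret) + rng.gauss(0, 0.5) for pt in plaintexts]

# Rank all 256 key guesses by correlation of predicted HW with "power".
best = max(range(256),
           key=lambda g: pearson([hw(pt ^ g) for pt in plaintexts], traces))
```

With enough traces, `best` recovers `secret`, which is exactly why a measurable key-dependence in the power profile is a vulnerability.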

The paper presents a flow to score the sensitivity of a crypto algorithm's power profile to different keys, and does so early in the design phase based only on an RTL description of the algorithm. The authors show how their sensitivity analysis on an AES128 engine correlates closely to power profiling at the gate level and to oscilloscope-measured power profiles when the RTL is compiled and run on an FPGA. The total time needed to profile the RTL is less than an hour, opening the door to massive exploration of different RTL designs across a farm of servers, even machine-generated RTL variants implementing different types of counter-measures to reduce power sensitivity. Equally, it implies scalability to much larger system-level RTL power sensitivity analysis.

Overall, tight paper, well written, and on an important topic. I’m grateful for the opportunity to spend time on it this month!

Raúl’s view

As Paul suggested, it is useful to first read reference 20, "Use of Simulators for Side-Channel Analysis," as an introduction to the use of simulators for side-channel attacks (SCA) based on power analysis. The survey IMO yields modest results: only two such open-source tools were available in 2017, and their own simulator barely identified the leak of the value of the MSB of an intermediate state. Here the authors showed that the Kullback-Leibler (KL) divergence metric shows high correlation between the RTL, gate-level and FPGA implementations. This provides strong support for their concept.
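For reference, a KL divergence between two empirical activity distributions can be computed along these lines. This is a minimal sketch with add-one smoothing to avoid zero counts, not the authors' implementation:

```python
import math
from collections import Counter

def kl_divergence(p_samples, q_samples, bins):
    """D_KL(P || Q) between two empirical distributions over shared bins,
    with add-one (Laplace) smoothing so no bin has zero probability."""
    pc, qc = Counter(p_samples), Counter(q_samples)
    n_p = len(p_samples) + len(bins)
    n_q = len(q_samples) + len(bins)
    d = 0.0
    for b in bins:
        p = (pc[b] + 1) / n_p
        q = (qc[b] + 1) / n_q
        d += p * math.log(p / q)
    return d
```

Identical distributions score zero; the larger the divergence between the distributions for two keys, the stronger the evidence of exploitable leakage.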

From an investment point of view, I see this being interesting for the DoD and national security organizations, with the possibility of attracting SBA and Air Force research grants, for example. Possibly DARPA might be interested in folding this in as a component of a larger program. I'm a bit more skeptical about the commercial opportunity. The direction is intriguing, though I suspect hackers will stick to simpler, higher-return software and phishing exploits.

My view

Following Raúl, I would like more discussion on the influence of uncertainty in pre-silicon power estimates on the accuracy of results. The authors are measuring vulnerability rather than cracking codes, yet this analysis depends on fine-grained comparison between distributions. Pre-silicon power estimates can have quite significant standard deviations, which could challenge accuracy. Maybe the narrow application makes most variability largely irrelevant. Perhaps, like stuck-at fault grading for test, the authors' method is a proxy, sufficiently accurate for this purpose. Either position would benefit from some explicit defense.
