Fuzzing is to software verification what randomization is to hardware verification. Can a fuzzing approach improve hardware security testing? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO) and I continue our series on research ideas. As always, feedback welcome.
This month’s pick is HyperFuzzing for SoC Security Validation, presented at ICCAD 2020 by authors from IIT Kanpur.
This is an intriguing approach to fuzzing, adapted specifically to modern SoC design. It builds on hyper-property checking in dynamic simulations. These hyper-properties reason about behaviors over sets of traces, an approach well suited to security checking. The authors offer as examples information flow checks (privileged data cannot leak from A to B, say) and non-interference checks (adversarial actions must not interfere with protected computations). Security is then checked by comparing bundles of simulation traces with and without tampering.
Tampering in this approach can model different types of vulnerabilities to untrusted sources: randomized firmware instructions, randomized writes from a component to the NoC, or bit flips in a memory. The authors also propose several novel coverage metrics, designed to guide successive tampering iterations toward cases most influenced by previous tampering runs.
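To make the idea concrete, here is a minimal sketch (in Python, with hypothetical names; the paper's actual tooling and coverage metrics differ) of a coverage-guided tampering loop: mutate a memory image with a random bit flip, keep mutants that reach new coverage, and record any runs that violate a security check.

```python
import random

def tamper_bitflip(memory, rng):
    """Tamper model: flip one random bit in a memory image (illustrative only)."""
    addr = rng.randrange(len(memory))
    bit = rng.randrange(8)
    mutated = bytearray(memory)
    mutated[addr] ^= 1 << bit
    return bytes(mutated)

def fuzz_loop(run_simulation, seed_memory, iterations=100):
    """Coverage-guided tampering: keep mutants that expose new behavior.

    run_simulation(memory) is assumed to return (trace, coverage_set, violated),
    where violated is True if a security property failed on this run.
    """
    rng = random.Random(0)
    corpus = [seed_memory]          # seed test plus interesting mutants
    seen_coverage = set()
    violations = []
    for _ in range(iterations):
        parent = rng.choice(corpus)
        mutated = tamper_bitflip(parent, rng)
        trace, coverage, violated = run_simulation(mutated)
        if violated:
            violations.append((mutated, trace))
        if not coverage <= seen_coverage:   # reached new coverage points
            seen_coverage |= coverage
            corpus.append(mutated)          # bias future tampering toward this case
    return violations
```

The key design point mirrored from the paper is the feedback loop: coverage from earlier tampered runs steers which cases get tampered next, rather than mutating blindly.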
Their testcase is a small though representative SoC (details on GitHub) running firmware tests against cryptographic blocks, checking for non-interference and other vulnerabilities. They also run secure boot with data block checks. They found multiple security violations in the crypto blocks, except where blocks include ECC protection.
Security verification is such an important topic, and there is much work ongoing here both in academia and in industry. This paper nicely brings together randomized mutation-based coverage with “hyper-properties” over sets of simulation traces to create an innovative solution that is both scalable and effective at demonstrating security flaws.
Some security properties can be formally defined only over a set of simulation traces. For example, “non-interference” means that an attacker cannot interfere with certain protected computations in a design. To demonstrate interference, you need to compare two traces, identical in input stimulus except for the presence of some attacker actions in one trace. If any protected computations in the attacked trace differ from those in the golden trace, then there has been interference.
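As a toy illustration of this two-trace comparison (names and trace structure are mine, not the paper's), a non-interference check might walk a golden trace and an attacked trace in lockstep and flag the first cycle where a protected signal diverges:

```python
def non_interference_violated(golden_trace, attacked_trace, protected_signals):
    """Compare a golden trace against a tampered trace cycle by cycle.

    Each trace is a list of per-cycle dicts mapping signal names to values.
    If any protected signal ever differs between the two runs, the attacker
    has interfered with a protected computation.
    """
    for cycle, (g, a) in enumerate(zip(golden_trace, attacked_trace)):
        for sig in protected_signals:
            if g.get(sig) != a.get(sig):
                return True, cycle, sig   # interference detected at this cycle
    return False, None, None
```

Note that this property cannot be stated over a single trace: it only has meaning relative to the matched golden run with identical stimulus, which is exactly why hyper-properties are needed.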
The authors create their own special flavor of language for assertions over multiple traces and use it to formulate security properties for non-interference and confidentiality. They build a custom flow to randomly tamper with simulations and check their security properties between tampered and non-tampered simulations. Their random tampering algorithm also has an elegant coverage-based learning heuristic to guide it to more efficiently find security flaws.
The idea of assertions over multiple simulations is very powerful. I wonder if it would be possible to cleanly extend SystemVerilog to support these kinds of assertions. This could open the door to some compelling native extensions to commercial simulation and formal tools. Another possibility might be to extend the new Portable Stimulus Standard (PSS) to include assertions that span across multiple generated tests.
This paper is an easy and enjoyable read, although I do wish for some more detail on the results. The authors claim their solution finds security holes in their open-source SoC testcase, but there are no details on what these holes are or how their approach compares to other approaches in the literature that could be applied to finding the same holes.
I’m going to look at this first from a technology maturity angle. I like the idea in general, a very interesting approach to grading the security of a design. That said, each design requires designers to provide seed tests, tamperers and security specs in a novel assertion language. For me this bounds the approach firmly to the academic domain for now. Great for dissertations and papers, not yet close to something that could make the jump to commercial application.
I’ll put my investor hat on for the second challenge. Security is an important topic, no question. But outside of a few domains we already know (aerospace, defense, payment systems and processors/servers, for example), it’s still not an existential problem for most OEMs and component builders. They’re willing to check a box if generally expected, but only if the impact on cost or time to market is small, because their customers generally won’t pay more for security. That leaves security for most markets still dependent on turnkey IP, such as hardware roots of trust, and easy-to-use apps. Solutions packaged in one of these ways will be investable; otherwise not so much.
Paul and Raúl covered most of what I might have suggested. I like Paul’s idea of extending SVA, at least to encourage experimentation with hyper-properties. This could open up a new class of interesting tests, leading eventually to new bundled verification methods.