When it comes to security, we’re all outraged at the manifest incompetence of whoever was most recently hacked, leaking personal account details for tens of millions of clients, and everyone firmly believes that “they” ought to do better. Yet as a society there’s little evidence, beyond our clickbait Pavlovian responses, that we’re becoming more sensitized to a wider responsibility for heightened security. We’d rather ignore security options in social media, connect to insecure sites, or click on links in phishing emails, because convenience and curiosity provide instant gratification, easily outweighing barely understood and distant risks which may not even affect us directly.
This underlines the distributed nature of security threats and the need for each link in the chain to assume that other links may have been compromised, often by the weakest link of all – us. Initial countermeasures, which added a variety of security techniques on top of existing implementations, proved easy to hack because the attack surface in such approaches is huge, and it is difficult to imagine all possible attacks, much less defend against them.
Hardware roots of trust are the new “in” technology, stuffing all security management into a tightly guarded center. Google now has its Titan root of trust for the Google Cloud and Microsoft has its Cerberus root of trust, both implemented in hardware. These aren’t marketing gimmicks. A security flaw discovered this year in baseboard management controller firmware stacks and hardware allows remote unauthenticated access and almost any kind of malfeasance following an attack. If a major cloud service provider were hacked and it wasn’t clearly a user problem, the reputational damage could be unbounded. Imagine what would happen to Amazon if we stopped trusting AWS security.
However, just because you built a hardware root of trust (HRoT) into your system, that doesn’t automatically make you secure. Tortuga Logic recently hosted a webinar in which they provided a nice example of what can go wrong even inside an HRoT. This is illustrated in the opening graphic. An AES (encryption/decryption) block inside the HRoT first reads the encrypted key, decrypts it and stores it in a safe location inside the HRoT in preparation for data decryption. To decrypt data for use outside the HRoT, two things have to happen: the data has to be run through the AES core, and the demux on the right has to be flipped from storing internally to sending outside. Makes sense to flip the switch first, then start decrypting, right?
But realize that the state from the key decryption persists on that path until other data is run through the AES. If you flip the demux switch first, the plaintext key can be read outside the HRoT. Oops. A seemingly reasonable and harmless software choice just gave away your most precious secret. Why not hardwire this kind of thing instead? Because for most embedded systems users expect some level of configurability even in the HRoT (which areas of memory should be treated as secure for example). You can’t hardwire your way out of all security risks and even if you tried, you’d just replace possibly firmware-fixable bugs with definitely unfixable HW bugs.
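To make the ordering bug concrete, here is a minimal toy model in Python of the scenario described above. All names and the XOR stand-in for AES are purely illustrative assumptions (this is not Tortuga’s code or any real HRoT design); the point is only that the AES output path retains its last value, so whatever the demux points at sees that residual value.

```python
class HRoTModel:
    """Toy model of the demux-ordering bug: the AES output path keeps
    its last value until new data is run through the AES core."""

    def __init__(self):
        self.aes_out_reg = 0       # residual state on the AES output path
        self.stored_key = 0        # decrypted key kept inside the HRoT
        self.to_external = False   # demux select: False = internal store
        self.external_bus = []     # what the outside world can observe

    def _route(self):
        # The demux is combinational: whatever sits on the AES output
        # path is driven to wherever the select currently points.
        if self.to_external:
            self.external_bus.append(self.aes_out_reg)
        else:
            self.stored_key = self.aes_out_reg

    def decrypt_key(self, encrypted_key, root_key):
        # XOR stands in for AES decryption of the key.
        self.aes_out_reg = encrypted_key ^ root_key
        self._route()

    def set_demux(self, external):
        self.to_external = external
        self._route()  # residual output value now drives the new target

    def decrypt_data(self, ciphertext):
        self.aes_out_reg = ciphertext ^ self.stored_key
        self._route()


hrot = HRoTModel()
hrot.decrypt_key(encrypted_key=0xDEAD, root_key=0xBEEF)  # key stays internal
# BUG: flip the demux *before* running new data through the AES core.
hrot.set_demux(external=True)
assert hrot.stored_key in hrot.external_bus  # plaintext key leaked outside
```

Flipping the demux after loading fresh ciphertext through the AES core (the reverse ordering) would have exposed only ciphertext-derived data, which is exactly why this looks like a harmless software choice until you consider the residual state.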
Bottom line: to run serious security checks, you have to check the operation of the software on the hardware – booting Linux on the hardware, for example. But how do you figure out where to check for problems like this? And how do you trigger such cases? A standard software test probably won’t trigger this kind of problem. Exposing it likely depends on some unusual event which might happen almost anywhere in the hardware + software stack, making it close to impossible to find.
Tortuga’s Radix tool takes a different approach. It runs within your standard functional simulations/emulations, looking for sensitizable paths representing potential security problems. These are captured in fairly easy-to-understand security assertions – not SVA, but assertions unique to the Tortuga tools (they can help you develop these if you want the help).
I like a couple of things about this approach. First, the collection of assertions represents your threat model for the system. Which means that once you understand the assertions, which in my view have a declarative flavor, you can easily assess how complete that model is, rather than trying to wrap your brain around all the details of the RTL implementation and how it might be attacked.
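To give a feel for what a declarative security assertion might look like, here is a small sketch of a “no-flow” property checked against a simulation trace. This is emphatically not Radix syntax (Tortuga’s assertion language is their own, and real information-flow checking tracks taint through the design rather than comparing values); all signal names and the trace format are invented for illustration.

```python
def no_flow(trace, secret_signal, public_signal):
    """Declarative property: the value of `secret_signal` must never
    appear verbatim on `public_signal` at any cycle of the trace.
    (A crude value-matching stand-in for real information-flow tracking.)"""
    secrets = {cycle[secret_signal] for cycle in trace}
    return all(cycle[public_signal] not in secrets for cycle in trace)


# Each entry is one cycle's signal values, as dumped from an ordinary
# functional simulation -- no dedicated security testbench required.
leaky_trace = [
    {"hrot.key_reg": 0x6042, "chip.external_bus": 0x0000},
    {"hrot.key_reg": 0x6042, "chip.external_bus": 0x6042},  # leak cycle
]
clean_trace = [
    {"hrot.key_reg": 0x6042, "chip.external_bus": 0x1111},
    {"hrot.key_reg": 0x6042, "chip.external_bus": 0x2222},
]

assert no_flow(leaky_trace, "hrot.key_reg", "chip.external_bus") is False
assert no_flow(clean_trace, "hrot.key_reg", "chip.external_bus") is True
```

The appeal of the declarative form is visible even in this toy: each property names a secret and an observation point, so a reviewer can audit the list of properties as a threat model without reading the RTL.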
Second, this runs with your existing testbenches. You don’t need to generate dedicated testbenches, so you or your assigned security expert can start testing immediately and regress right alongside your functional regressions. A common question that comes up here is how complete the security signoff can be if it is simulation based. Jason Oberg (the Tortuga CEO) answered this in the webinar. It’s not a formal guarantee, but then no known method (including formal) can provide a guarantee for most security threats. However, if your testbench coverage is good enough for functional signoff, Radix routinely finds more problems than other methods such as directed testing.
Tortuga is already partnered with Xilinx, Rambus, Cadence, Synopsys, Mentor and Sandia National Labs, so they’ve obviously impressed some important people. You can register to watch the WEBINAR REPLAY HERE.