We have become accustomed to the idea that safety can't be reduced to any one thing you do in design. Safety expectations pervade all aspects of design, from the overall process through analysis, redundancy in design, fault analysis and mitigation, and on-board reliability monitors, among other requirements and techniques.
Why shouldn't similar concepts apply to security? Here we don't have an ISO 26262; we do have security IP and software, but design tools and methods in this space are somewhat piecemeal, leaving me, at least, feeling that our coverage of design best practices is rather patchy. Conversely, the concept of best practices in design for software security is already quite well established at certain large software companies, Microsoft for example.
Tortuga Logic aims to correct this. A venture that grew out of joint research between UCSD and UCSB, Tortuga takes a top-down approach to security development. This starts with a threat model for the system, defining potential attack entry points, assumptions about the resources attackers may have (money, time, etc.), and their ability to find exploits via those entry points given those resources. The threat model can be developed either by Tortuga's hardware security engineers or by the chip design / architecture / security team. Factors considered in the threat model include memory isolation, key management and secure-boot configuration, from a high-level view all the way down to individual circuit components. This should be a living document, updated regularly through the design lifecycle.
The threat model is then a key input, along with the design RTL, to Tortuga's analysis / augmentation process. Using patented techniques based on a concept they call information flow, their products analyze the design for potentially harmful data leakage. The analysis does not require advance knowledge of suspected architectural vulnerabilities.
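To make the information-flow idea concrete, here is a minimal sketch of taint tracking over a gate-level netlist: label secret signals, propagate the labels conservatively through gates, and flag any observable output the labels reach. This is not Tortuga's implementation, just an illustration of the general technique; the netlist and signal names are hypothetical.

```python
# Toy information-flow (taint) analysis over a combinational netlist.
# Hypothetical example -- not Tortuga's actual product or algorithm.

def propagate_taint(netlist, secrets):
    """Propagate taint labels to a fixed point.
    netlist: list of (output_signal, gate_type, input_signals) tuples.
    secrets: set of initially tainted (secret) signal names.
    Returns the full set of signals influenced by any secret."""
    tainted = set(secrets)
    changed = True
    while changed:
        changed = False
        for out, _gate, ins in netlist:
            # Conservative rule: if any input is tainted, the output is.
            if out not in tainted and any(i in tainted for i in ins):
                tainted.add(out)
                changed = True
    return tainted

# A tiny hypothetical design where a key bit is mixed into a debug port.
netlist = [
    ("mix",    "XOR", ["key_bit", "data_in"]),
    ("dbg",    "AND", ["mix", "debug_en"]),    # key influence reaches debug
    ("status", "OR",  ["ready", "debug_en"]),  # no key influence here
]

leaks = propagate_taint(netlist, {"key_bit"})
print("dbg leaks key?", "dbg" in leaks)        # True: potential leak path
print("status leaks key?", "status" in leaks)  # False
```

A real flow analyzes RTL rather than a hand-written tuple list, and uses more precise rules (e.g. a gated signal may declassify), but the core question is the same: can a labeled secret ever influence an observable point?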
Jason Oberg, the CEO of Tortuga, pointed particularly to the Meltdown/Spectre issue and said that these problems are symptomatic of a broader lack of design-for-security methodologies, which he believes Tortuga can address with this solution. He mentioned a number of markets where this will be important:
- Aerospace and defense – Apparently this domain was a significant component in their research activity, especially around two topics: information assurance, where it is critical to ensure that you can properly contain secrets, and microelectronics trust, where there is always a question around the trust you can place in 3rd-party cores, a very pressing concern when you're building electronics to go in a missile, for example.
- Mobile applications – where security concerns are probably much more familiar to many of us: ensuring that secure boot cannot be compromised, correctly managing access control so that a general-purpose CPU, for example, cannot read boot image authentication keys, and protecting customer content (passwords, mobile payment data, etc.)
- Datacenters – where there is a proven concern around effective isolation of different customer processes potentially running on the same hardware in different VMs or in other shared resources in the datacenter. There have already been multiple reports of techniques to run side-channel attacks in these cases through cache probes and timing analyses.
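The cache-probe attacks mentioned above can be illustrated with a toy prime+probe simulation: the attacker fills every cache set, lets the victim run, then re-accesses its own data and sees which set was evicted, revealing which set the victim's secret-dependent access touched. The cache model, addresses and victim function below are entirely hypothetical; this sketches the attack class, not any specific exploit.

```python
# Toy prime+probe side-channel simulation on a direct-mapped cache.
# Illustrative only -- hypothetical cache model and victim.

NUM_SETS = 8  # one line per set in this toy cache

class ToyCache:
    def __init__(self):
        self.lines = [None] * NUM_SETS

    def access(self, addr):
        """Access an address; return True on hit, False on miss."""
        s = addr % NUM_SETS
        hit = self.lines[s] == addr
        self.lines[s] = addr  # fill on miss, keep on hit
        return hit

def victim(cache, secret):
    # Victim's memory access pattern depends on a secret value:
    # it touches table entry `secret`, evicting attacker data in that set.
    cache.access(0x100 + secret)

def prime_probe(secret):
    cache = ToyCache()
    attacker_addrs = list(range(NUM_SETS))  # one address per cache set
    for a in attacker_addrs:                # PRIME: fill every set
        cache.access(a)
    victim(cache, secret)                   # victim runs in between
    # PROBE: a miss means the victim evicted that set
    misses = [a for a in attacker_addrs if not cache.access(a)]
    return misses

print(prime_probe(secret=3))  # the missed set reveals secret mod NUM_SETS
```

On real hardware the attacker distinguishes hit from miss by timing the probe accesses rather than asking the cache directly, which is why these are called timing side channels; effective isolation (set partitioning, flushing on context switch) aims to break exactly this inference.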
From what I can deduce, to detect security weaknesses Tortuga's analysis effectively instruments your design so that potential security problems surface in your downstream analysis, whether that be through simulation, emulation or formal methods (again, my guess). This seems to me to be a very powerful complement to security-aware design flows. You should probably check them out. You can learn more HERE.