Everyone knows that the best way to keep a secret is never to share it with anyone. That works fine for your most personal secrets, but it’s not much use when at least one other party has to know, as in cyber-security. One such need, of enormous importance in the IoT, is authentication: are you who you claim to be? Seas of (digital) ink have been spilled over the manifest insecurity of the IoT; a Jeep, a Boeing 757, kids’ watches, pacemakers, and Philips Hue lights have all been the subject of published white-hat attacks. Some attacks have not been so benevolent, such as the hacks on the Ukrainian power grid and recent attempts on US infrastructure.
Authentication is the first wall in preventing (or at least limiting) such attacks. If you, an IoT device, want to communicate with me, you must prove that you are allowed to do so. There are a variety of ways to accomplish this, all dependent on some kind of key stored on the IoT device – the secret. Setting up this key is part of provisioning, the first-time setup of the device. The key might be created during chip manufacture (in ROM, for example), generated in the cloud, communicated to the device and stored in flash memory, or generated by the device itself using a true random number generator (TRNG) and then stored in flash. Whichever method is used, the device can share its key (suitably encrypted) with the cloud, which compares it against a stored inventory of allowed keys to verify and approve the device.
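A minimal sketch of that bookkeeping might look like the following. The class and function names are my own invention, not any vendor’s API, and a real deployment would encrypt the transport and store keys in protected memory; this only models the registration-then-verification flow.

```python
import hashlib
import secrets

def provision_device() -> bytes:
    """Generate a device key at first-time setup (stand-in for a TRNG)."""
    return secrets.token_bytes(32)

class CloudInventory:
    """Cloud side: keeps a digest of each allowed key, not the key itself."""
    def __init__(self):
        self._allowed = set()

    def register(self, device_key: bytes) -> None:
        # Record the device during provisioning.
        self._allowed.add(hashlib.sha256(device_key).digest())

    def verify(self, presented_key: bytes) -> bool:
        # Later, approve only devices whose key is in the inventory.
        return hashlib.sha256(presented_key).digest() in self._allowed

cloud = CloudInventory()
key = provision_device()
cloud.register(key)
assert cloud.verify(key)                 # genuine device is approved
assert not cloud.verify(b"\x00" * 32)    # unknown key is rejected
```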
All of these methods work well in defending against “conventional” threats, primarily software-based attacks. But the really bad actors – states and criminal enterprises – are already capable of much more, particularly attacks on fabrication and/or the hardware itself. These actors can plant spies in fabs, phish for key databases, or use focused ion-beam equipment to read stored keys directly from flash memory. All of this requires serious organization and/or special equipment, but it is well within the capabilities of a government or a major criminal enterprise.
A better approach would be a secret that is generated on the device but never stored and never exchanged. At first sight this looks like a useless secret, but stay with me. You want to generate the key on the device (sounds like a TRNG) but not store it (not like a TRNG), so it has to be generated on-the-fly yet reliably consistent. That’s what physically unclonable functions (PUFs) provide: something (not storage) from which you can read a random string that is nevertheless consistent each time you read it. A good example is the power-up state of an on-chip SRAM. Small manufacturing variations ensure that each bit in the SRAM initializes to 0 or 1 in a pattern unique to that device. There’s your secret key, at least for a decent-size SRAM: in principle, for N bits of SRAM there is only a 1-in-2^N chance that the pattern will match that of another device.
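The two properties – repeatable on one device, different across devices – can be shown with a toy model. This is purely illustrative: the seed stands in for per-device manufacturing variation, and real PUF responses are noisy (more on that next).

```python
import random

def sram_fingerprint(device_seed: int, n_bits: int = 128) -> int:
    """Toy model: a chip's power-up pattern, fixed by 'manufacturing'."""
    rng = random.Random(device_seed)
    return rng.getrandbits(n_bits)

# Reading the same device twice yields the same key...
assert sram_fingerprint(101) == sram_fingerprint(101)
# ...while two different devices collide with probability only 1 in 2^128.
assert sram_fingerprint(101) != sram_fingerprint(102)
```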
At least that’s the theory; reality is always more complicated. There’s some noise. Not all bits fall readily into a 0 or 1 state; some could go either way and may do so on each power up, depending on temperature, voltage and other factors. Then there’s aging. SRAMs get older, just as we do. Even while the SRAM continues to function, the subtle factors on which the PUF depends can change noticeably. A reliable SRAM PUF has to factor out all of these sources of indeterminacy and degradation without compromising the uniqueness and stability of the generated device key.
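A toy illustration of one way to cope with that noise – emphatically not Intrinsic ID’s proprietary algorithm, just a sketch of the idea: model each cell as strongly biased toward 0 or 1 with a few “flaky” cells near 50/50, discard the flaky cells during enrollment, and majority-vote repeated reads at reconstruction.

```python
import random

rng = random.Random(7)
N = 256
# Strong cells flip with probability ~0.001; every 10th cell is flaky (50/50).
bias = [0.5 if i % 10 == 0 else (0.001 if i % 2 else 0.999) for i in range(N)]

def power_up():
    """One noisy power-up read of the whole SRAM."""
    return [1 if rng.random() < p else 0 for p in bias]

K = 15
# Enrollment: keep only cells whose value never changed across K reads.
reads = [power_up() for _ in range(K)]
stable = [i for i in range(N) if all(r[i] == reads[0][i] for r in reads)]
enrolled = [reads[0][i] for i in stable]

# Reconstruction later: majority-vote K fresh reads over the stable cells.
fresh = [power_up() for _ in range(K)]
recovered = [1 if sum(r[i] for r in fresh) > K // 2 else 0 for i in stable]
mismatches = sum(a != b for a, b in zip(enrolled, recovered))
print(len(stable), "stable cells,", mismatches, "mismatches")
```

Real products must also handle aging and environmental drift, typically with proper error-correcting codes rather than this simple filtering, but the principle – separate the reliable entropy from the unreliable – is the same.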
OK, we have our secret, but how do we avoid sharing it, even with our authentication partner? One clever mathematical technique is the zero-knowledge proof (ZKP): you share a piece of public information with a partner once, and on subsequent requests you can prove to that partner that you know the (correct) secret without ever revealing the secret itself.
Intrinsic ID have developed very interesting technology in this class, based on research originating in Leuven. The details differ somewhat from what I outlined above. They first put a lot of work into managing noise and aging, using proprietary algorithms which they have tested across many foundries and processes. The shared data, which they call an Activation Code, appears to follow a similar philosophy to ZKP, though I can’t speak to how close it might be. What is important is that they have run a lot of very detailed analyses to demonstrate the stability and reliability of their approach.
The solution is offered in software-only and RTL options. The software-based solution is particularly interesting to many customers because it allows them to retrofit a higher level of security with no change to the hardware. One interesting real case mentioned in a webinar (see below) involved anti-counterfeiting. A company making a radio, along with batteries to work with that radio, found they were losing revenue and running into reliability/reputation problems because customers were replacing original batteries with grey-market copies. By adding authentication from the radio to the batteries, they were able to detect and disallow counterfeits.
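A hypothetical sketch of such a check, using HMAC-based challenge-response; the protocol and class names here are illustrative, not the company’s actual design, and in the real product the battery’s key would be reconstructed from its PUF rather than held in a variable.

```python
import hashlib
import hmac
import secrets

class Battery:
    def __init__(self, key: bytes):
        self._key = key   # in a PUF design, regenerated on demand, never stored

    def answer(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Radio:
    def __init__(self, key: bytes):
        self._key = key

    def check(self, battery: Battery) -> bool:
        challenge = secrets.token_bytes(16)   # fresh nonce defeats replay
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, battery.answer(challenge))

shared_key = secrets.token_bytes(32)
radio = Radio(shared_key)
assert radio.check(Battery(shared_key))          # genuine battery accepted
assert not radio.check(Battery(b"grey-market"))  # counterfeit rejected
```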
This isn’t a startup with one customer. They have multiple customers, including Intel, NXP, Renesas and Samsung at the chip/module level, and they are deployed in multiple IoT, secure transaction (e.g. payment) and government/defense applications. You can check out a more comprehensive description in this webinar.