It’s a hostile world we live in, and cybersecurity of connected devices is a big concern. Attacks are rising rapidly, and vulnerabilities get exploited immediately. Supply chains are complex. Regulations are proliferating. Secrets don’t stay secrets for long – in fact, the only secret in a system with open-source algorithms may be the secret key. What if the root secrets were never stored in a device? Hiding keys with a physically unclonable function (PUF) plus a hardware root-of-trust makes this possible – the subject of a recent webinar from Intrinsic ID and Rambus.
Defeating prying eyes with sophisticated tools
It might seem secure to embed a root secret in a chip using non-volatile memory (NVM). It’s an improvement over old-school methods like jumpers and DIP switches, and putting the secret under the chip lid makes it harder for anyone to change from the outside. But extra mask steps add processing and test time, and there may be redundancy requirements. Most process questions are solvable, although NVM is running into difficulty at advanced process nodes.
But a bigger issue is that times have changed. Cracking a device key can lead to huge rewards. Physical attackers are no longer armed with just soldering irons, diagonal cutters, and magnifying glasses. X-rays, ion beams, lasers, and other scanning technologies can reveal chip features, exposing a key stored in NVM. Side-channel attacks monitoring tiny power-supply fluctuations can uncover data patterns transmitted within a chip.
The bottom line is if the secret key is stored somewhere, there are ways to see it. Making the key harder to get may deter the drive-by amateur, but not the well-funded professional hacker with time and the right equipment. If the key can’t be stored, and it can’t be transmitted, how can a device get it?
Entropy plus tamper-resistance for the win
Out of chaos comes order. PUFs put entropy to good use. Anything that varies on chip – such as nanoscale differences in transistors and parasitics – can be an entropy source. For example, an SRAM cell is a bi-stable cell built from cross-coupled transistors. In theory its power-up state is random, but once fabricated, a given SRAM cell powers up quite repeatably into one of its two states. A sea of these entropy-driven cells creates a repeatable power-on pattern unique to each chip, like a fingerprint. Those prying eyes with sophisticated tools can see a PUF’s structure, but that’s all. The PUF output doesn’t exist in the off state and is tough to predict in the on state. Cloning the structure results in a different output pattern.
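A toy simulation can illustrate this fingerprint behavior. The model below is a sketch, not real silicon: each SRAM cell gets a fixed, fabrication-set bias toward one power-up state, with a small fraction of “noisy” cells near 50/50. Re-powering the same chip yields nearly the same pattern, while a second chip with the same layout yields a very different one.

```python
import random

def make_chip(n_cells: int, seed: int) -> list:
    """Model one chip: each cell gets a fixed power-up bias (probability of
    settling to '1') determined by fabrication variation."""
    rng = random.Random(seed)
    # Most cells are strongly biased toward 0 or 1; ~10% are "noisy" near 0.5.
    return [rng.choice([0.02, 0.98]) if rng.random() > 0.1
            else rng.uniform(0.4, 0.6)
            for _ in range(n_cells)]

def power_up(chip: list, rng: random.Random) -> list:
    """One power-up event: each cell settles per its bias probability."""
    return [1 if rng.random() < bias else 0 for bias in chip]

rng = random.Random(1)
chip_a = make_chip(256, seed=42)
chip_b = make_chip(256, seed=99)  # same layout, different silicon

read1 = power_up(chip_a, rng)
read2 = power_up(chip_a, rng)
read_b = power_up(chip_b, rng)

same_chip_diff = sum(a != b for a, b in zip(read1, read2))
other_chip_diff = sum(a != b for a, b in zip(read1, read_b))
print(f"bits differing, same chip re-powered: {same_chip_diff}/256")
print(f"bits differing, different chip:       {other_chip_diff}/256")
```

The handful of noisy cells means the same-chip pattern is close but not bit-exact between power-ups – which is exactly why the helper-data error correction described next is needed.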
Next comes a NIST-certified key derivation function (KDF). The PUF output is essentially a pseudo-random passphrase. Combined with encrypted “helper data” – usually stored in NVM – that error-corrects the noisy bits between power-ups, the KDF derives a reliable, truly random secret key whenever it is needed. Most attacks go after either the NVM value, which doesn’t reveal the PUF output, or the circuit computing the KDF.
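As a rough sketch of how helper data and a KDF fit together (illustrative only – real products use certified error-correcting codes and NIST-approved KDF constructions such as SP 800-108, and all names below are invented for the example), a simple repetition code can correct noisy PUF bits before key derivation:

```python
import hmac, hashlib, random

REP = 5  # repetition-code length: each secret bit spread across 5 PUF bits

def enroll(puf_bits, secret_bits):
    """One-time enrollment: helper data = PUF response XOR repetition-coded
    secret. The helper data alone reveals nothing about the PUF response."""
    code = [b for b in secret_bits for _ in range(REP)]
    return [p ^ c for p, c in zip(puf_bits, code)]

def reconstruct(noisy_puf_bits, helper):
    """Every power-up: XOR out the PUF reading, then majority-vote each
    5-bit group to correct flipped cells, recovering the secret bits."""
    noisy_code = [p ^ h for p, h in zip(noisy_puf_bits, helper)]
    return [int(sum(noisy_code[i:i + REP]) > REP // 2)
            for i in range(0, len(noisy_code), REP)]

def kdf(bits, context=b"demo-key"):
    """Stand-in KDF: HMAC-SHA256 over the corrected bits (a real design
    would use a NIST-approved construction, e.g. SP 800-108)."""
    return hmac.new(bytes(bits), context, hashlib.sha256).hexdigest()

rng = random.Random(7)
secret = [rng.randint(0, 1) for _ in range(32)]
puf = [rng.randint(0, 1) for _ in range(32 * REP)]
helper = enroll(puf, secret)

# Later power-up: a few cells flip (here, three in separate groups),
# but the error correction still recovers the identical key.
noisy = [b ^ 1 if i in (3, 40, 77) else b for i, b in enumerate(puf)]
assert reconstruct(noisy, helper) == secret
print(kdf(reconstruct(noisy, helper))[:16])
```

Note the key material is derived fresh on each power-up and never stored: only the helper data sits in NVM, and on its own it is useless to an attacker.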
This is where a hardware root-of-trust comes in: a tamper-resistant engine that securely processes the PUF output. In a finishing touch of synergy, the hardware root-of-trust cooperates in creating the helper data stored in NVM, adding another layer of security. Effectively, this extends the unclonable nature of a PUF to an entire SoC. Here’s one of the final quotes in the webinar:
“Even a perfectly cloned SoC cannot perfectly clone the PUF’s transformation function.”
Readers may have already seen this technology in action – anyone who has tried to stuff a counterfeit ink cartridge into a printer and found it doesn’t work has run into it. We’ve simplified this discussion, and the details are interesting. Want to see more about how a PUF plus a hardware root-of-trust work together? Watch the presentation from experts Dr. Roel Maes of Intrinsic ID and Scott Best of Rambus. To view this archived webinar, please visit: