I’m going to get to low power and RISC-V, but first a word on virtual DAC, which I’m trying out this year. It seems to be working smoothly, aside from some glitches in registration – though that may just be me, since I switched email addresses in the middle of the process. Some sessions are live and many are pre-recorded, so it’s not quite the same interactive experience as live talks. There’s a live Q&A chat window during the recordings, which continues after you’ve watched the recording. It’s a slightly strange experience if you listen to multiple papers in a session, with Q&A chats running asynchronously from the videos you’re watching. Then again, being able to go back and replay pre-recorded sessions is very cool.
Machine learning for building power models
This was a very good session. Dave Rich from Mentor chaired a mixed group of topics, some on low power, some on innovative directions around RISC-V. He kicked off with a presentation from Caaliph Andriamisaina of CEA-LIST on learning-based power modeling. Power models are useful in a variety of contexts: for IPs/subsystems to accelerate estimation in larger systems, for power distribution network design and for thermal modeling. The challenge of course is to extract a sensible and accurate model from hundreds of gigabytes of detailed power analysis at the RTL level. Caaliph does this through machine learning on the simulation traces, using K-means clustering to group windows of the data into similar profiles. From this he extracts ~2% of the signals to come up with a model which demonstrates 95% accuracy versus the original power data.
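Caaliph’s exact implementation wasn’t shown, but a minimal sketch of the general idea (window the trace, cluster windows with K-means, then fit a compact model on a small, power-correlated subset of signals) might look like the following. The signal counts, window size, selection heuristic and per-cluster linear model are all my assumptions, not details from the talk.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical inputs: per-cycle RTL signal activity and measured power.
rng = np.random.default_rng(0)
n_cycles, n_signals, win = 20_000, 500, 100
activity = rng.integers(0, 2, size=(n_cycles, n_signals))  # toggle bits
power = rng.random(n_cycles)                               # per-cycle power

# 1. Slice the trace into fixed-size windows, averaging within each window.
n_win = n_cycles // win
act_w = activity[:n_win * win].reshape(n_win, win, n_signals).mean(axis=1)
pow_w = power[:n_win * win].reshape(n_win, win).mean(axis=1)

# 2. Group windows into similar activity profiles with K-means.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(act_w)

# 3. Keep only the most power-correlated ~2% of signals as model features.
corr = np.abs([np.corrcoef(act_w[:, i], pow_w)[0, 1] for i in range(n_signals)])
keep = np.argsort(corr)[-max(1, n_signals // 50):]

# 4. Fit a simple least-squares power model per cluster on those features.
models = {}
for c in np.unique(labels):
    idx = labels == c
    coeffs, *_ = np.linalg.lstsq(act_w[np.ix_(idx, keep)], pow_w[idx], rcond=None)
    models[c] = coeffs
```

The payoff is that evaluating a few small linear models over ~2% of the signals is vastly cheaper than re-running full RTL power analysis, which is how a 95%-accurate model can still accelerate estimation in larger systems.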
Retention verification
Next up was Lakshmanan Balasubramanian from TI to talk about accurately verifying retention behavior in low power designs. This can be quite complex, especially around sequence dependencies between state retention, clock and reset controls, e.g. on wakeup. According to Lakshmanan, these behaviors are not handled thoroughly in standard low power verification flows. TI have built a simulation overlay to handle verification more comprehensively, and found this flow enabled them to address potential retention issues early in the design cycle.
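To make the sequencing issue concrete, here’s a toy trace checker for one plausible ordering rule: state must be saved before the domain powers down, and restored before reset is released on wakeup. This is purely my illustration of the class of check involved, not TI’s overlay; the event names and the rule itself are assumptions.

```python
# Toy retention-sequence checker over a list of (time, event) tuples.
def check_retention_sequence(trace):
    errors = []
    saved, powered = False, True
    for t, ev in trace:
        if ev == "save":
            saved = True
        elif ev == "power_down":
            if not saved:
                errors.append(f"t={t}: power_down before save")
            powered = False
        elif ev == "power_up":
            powered = True
        elif ev == "restore":
            if not powered:
                errors.append(f"t={t}: restore while powered down")
            saved = False  # retained state has been consumed
        elif ev == "reset_release":
            if saved:  # retained state not yet restored
                errors.append(f"t={t}: reset released before restore")
    return errors

# A wakeup where reset is released before restore completes is flagged.
print(check_retention_sequence(
    [(0, "save"), (10, "power_down"), (50, "power_up"),
     (55, "reset_release"), (60, "restore")]))
```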
UPF macro models
Rohit Sinha from Intel followed with a discussion on using hard IP power models to simplify hierarchical design. The guts of UPF far exceed my meagre understanding, but one thing I do get, at least intuitively, is the massive complexity of creating and managing UPF for very large SoC designs. Rohit talked about more than 2000 IPs, both soft and hard, 30-40 power domains and 4 main levels of hierarchy from IPs up to the SoC. His point is that power macro models for hard IPs can greatly reduce this complexity, reducing the chance of errors and cutting simulation time. Please check out the replay of his talk for a more intelligent understanding than I can provide.
Google testbench generator for RISC-V
Tao Liu of Google presented next on an open-source configurable testbench generator for RISC-V-based systems. This aims to provide full-feature support, an end-to-end flow integrated with mainstream simulators, and a standalone functional coverage model. Testbenches are generated in SV/UVM (no plans so far for PSS support) as complete programs with randomization. Google first released this to GitHub in the first half of 2019, and in 2020 they formed a working group under CHIPS Alliance together with a bunch of companies. No details were provided on how well this integrates with standard verification flows. Nevertheless, an interesting direction.
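For flavor, the core idea of a configurable instruction-stream generator can be sketched in a few lines. This is not the Google generator itself (which is SV/UVM-based, per the talk); the opcode list, register constraint and toy coverage metric below are my own simplifications.

```python
import random

RV32I_ALU = ["add", "sub", "and", "or", "xor", "sll", "srl"]
REGS = [f"x{i}" for i in range(1, 32)]  # exclude x0 as a destination

def gen_program(n_instr, seed=0):
    """Emit a random RV32I ALU instruction sequence with toy coverage."""
    random.seed(seed)
    prog, covered = [], set()
    for _ in range(n_instr):
        op = random.choice(RV32I_ALU)
        rd, rs1, rs2 = (random.choice(REGS) for _ in range(3))
        prog.append(f"{op} {rd}, {rs1}, {rs2}")
        covered.add(op)  # standalone functional coverage, reduced to opcodes
    print(f"opcode coverage: {len(covered)}/{len(RV32I_ALU)}")
    return prog

print("\n".join(gen_program(10)))
```

The real generator of course layers constrained randomization and much richer program scenarios on top of a loop like this, which is where the SV/UVM machinery earns its keep.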
Post-quantum cryptography
Markku-Juhani Saarinen of PQShield in Oxford followed, talking about prototyping accelerator architectures for post-quantum cryptography (PQC). As Markku was quick to point out, these are not based on quantum computing; they use conventional computing methods to build cryptographic algorithms which are quantum resistant. I wrote about this some time ago. These algorithms use new mathematical approaches, such as lattice-based methods, and like more conventional crypto algorithms they can benefit from acceleration. The trick is to figure out architectures for those accelerators. Markku uses RISC-V software emulators as a starting point, since emulation allows near-real-time performance. When the architecture is reasonably settled, they move to an FPGA implementation with multiple algorithms embedded, ranging from ISA extensions in the RISC-V core to dedicated accelerators. Well-timed development, since NIST standards for PQC are expected to be released this year.
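To see why acceleration matters here: lattice-based schemes spend much of their time on polynomial arithmetic in a ring such as Z_q[x]/(x^n + 1), exactly the kind of kernel you would push into an ISA extension or a dedicated block. A naive sketch follows (schoolbook multiplication, with q and n borrowed from the Kyber parameter set; production implementations use the number-theoretic transform instead):

```python
# Schoolbook multiplication in Z_q[x]/(x^N + 1), a core kernel in
# lattice-based PQC. Q and N here match the Kyber parameter set.
Q, N = 3329, 256

def ring_mul(a, b):
    """Multiply two degree < N polynomials mod (x^N + 1) and mod Q."""
    res = [0] * N
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                res[k] = (res[k] + a[i] * b[j]) % Q
            else:
                # x^N is congruent to -1 in this ring: wrap with a sign flip
                res[k - N] = (res[k - N] - a[i] * b[j]) % Q
    return res
```

This inner loop is O(n^2); an NTT-based accelerator or vectorized ISA extension collapses it to O(n log n), which is the sort of win a dedicated architecture is after.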
Advanced security support
Professor Sylvain Guilley, CTO at Secure-IC, wrapped up (he also holds academic positions at Telecom Paris and elsewhere). He introduced their Cyber Escort system, a module that sits to the side of the main processing path, monitoring both software and hardware behaviors. Sylvain particularly called out the DARPA System Security Integration Through Hardware and Firmware (SSITH) program as a model with which they aim to align. That program is ambitious, targeting anomalous state detection, meta-data tagging, and churning of the electronic attack surface. Secure-IC already has a partnership with Andes Technology (hence the RISC-V connection), and their technology is already in the root of trust for a number of automotive-grade chips for car-to-car (V2X) communication.
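As one concrete example of the kind of check such a side-by-side module can run, consider control-flow integrity: validate every indirect branch the core retires against a precomputed set of legal targets, and flag anything else as anomalous. This sketch is my generic illustration of the technique, not Secure-IC’s actual design; the addresses are made up.

```python
# Toy control-flow integrity monitor: each indirect branch is checked
# against a statically derived set of legal destinations.
VALID_TARGETS = {
    0x1000: {0x2000, 0x3000},  # call site -> legal destinations
    0x2040: {0x1004},          # return site -> legal return address
}

def escort_check(branch_pc, target):
    """Return True if a retired branch matches the static control-flow graph."""
    return target in VALID_TARGETS.get(branch_pc, set())

# A corrupted return address (e.g. from stack smashing) is flagged.
assert escort_check(0x2040, 0x1004)
assert not escort_check(0x2040, 0xDEADBEEF)
```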
Quite a broad span of topics! If you have access to the DAC 2020 site, you can learn more HERE.