At the Cadence front-end summit last week, Jay Roy presented the Cadence Joules solution for RTL (and gate-level) power estimation. Jay is ex-Apache, so he knows his way around RTL power estimation, which should make Joules a product to watch. Joules connects natively to Palladium for power characterization under realistic software loads, which I'll cover in a separate blog. Here I want to focus on Joules as a characterization competitor to Apache/Ansys, Atrenta/Synopsys and other products.
Jay’s claim, and I think he’s right today, is that Joules has all the pieces needed for high accuracy in RTL power estimation. Cadence has Genus for synthesis and Innovus for implementation, so they can do (somewhat) production-quality fast estimation straight from RTL and know it will (somewhat) correlate with the real implementation. The result is that power estimates from RTL simulations should correlate within ~15% of gate-level estimation. Jay showed a comparison table which indeed supports this assertion.
You may notice I am (somewhat) hedging my support for the attainable level of accuracy. I also know a little about this domain and some of the challenges in RTL estimation. Part of the problem is indeed addressed by using the same tools for fast physical synthesis that you use for production implementation. But that’s not all of the problem. Fast physical synthesis is fast because it cuts corners, and that can lead to correlation problems between RTL and gate-level estimates, even if you use the same physical synthesis tools you use for production.
It seems obvious that the way to understand this problem is through a detailed analysis of the sources of miscorrelation between RTL and gate-level estimates. But I have yet to see such an analysis from any provider, and that’s a problem because it leads to unscientific trial-and-error approaches to improving correlation, with no deep understanding. A scientific approach (you know: start with a hypothesis, test against data) would provide a credible basis for knowing how to repeatably improve correlation or, just as important, for knowing that perhaps 15% is as low as you can go and you cannot repeatably improve on that. This would be a lot of work, but whoever does it first will be able to claim the laurels of true expertise in this domain.
I don’t think it is necessary to test every conceivable design – that would not be a scientific approach. Useful hypotheses are simple – I’ll offer a couple to get the ball rolling. First, I believe the harder you push performance, the worse the correlation will become. The harder you push, the more buffers have to be upsized; also there are implications for routing in the presence of factors not considered in fast estimation (DFT, detailed routing, signal integrity, …), leading to yet more buffer upsizing, further impacting power. A related but not identical hypothesis is that accuracy will negatively correlate with the number of near-critical paths. As you get into implementation, some of these will become critical, requiring (probably) buffer upsizing; the more of these you have, the more implemented circuit power will deviate from initial estimates. Cadence has a running start with a fully integrated solution which should minimize known systematic sources of error from the estimation tool – they could lead the field with a detailed correlation analysis.
None of this is intended to diminish the role Joules can play today. As far as I know, they currently have the only full in-house flow for estimation based on implementation-class physical synthesis, so they are likely to be best in class for estimation until Synopsys inevitably releases something similar. And then both will have a significant edge in accuracy over Apache and Calypto for the foreseeable future.
To learn more about Joules, click HERE.