Big Data Analytics in Early Power Planning
by Bernard Murphy on 12-13-2018 at 7:00 am

ANSYS recently hosted a webinar on how the big-data analytics available in RedHawk-SC can be used for early power grid planning with static analysis, providing better coverage than would be possible through pure simulation-based approaches. The paradox here is that late-stage analysis of voltage drops in the power distribution network (PDN), when you can do accurate analysis, may highlight violations you have no time left to fix. But if you want to start early, say at floorplanning, where you can allow time to adjust for problems, you don’t have enough information about cell placement (and therefore possible current draw) to do accurate analysis.


ANSYS has a solution based on something they call Build Quality Metrics (BQM), and in the webinar they talk about the general methodology. There are multiple ways to approach BQM; one starts with a static analysis of the design (no simulation) and doesn’t require placement info. For this you build heatmaps based on simultaneous-switching (SS) calculations, likely issues in the planned power grid, and likely timing criticality. For SS, you calculate peak current per cell based on library parameters and operating voltage. You then combine these values for nearby instances which have overlapping timing windows (taken from STA analysis), summing these currents to generate an SS heatmap.
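As a rough illustration of the SS calculation, here is a minimal Python sketch. The Instance fields, the spatial binning, and the event sweep are all my assumptions for the example; only the inputs (per-cell peak current derived from library data and operating voltage, plus timing windows from STA) come from the description above.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Instance:            # hypothetical record, not a RedHawk-SC type
    x: float               # placement-region coordinates (um)
    y: float
    i_peak: float          # peak switching current (mA), library + Vdd
    t_start: float         # STA timing window start (ps)
    t_end: float           # STA timing window end (ps)

def ss_heatmap(instances, bin_um=10.0):
    """Per spatial bin, find the worst-case sum of peak currents among
    instances whose timing windows overlap: sweep window start/end
    events and track the running total of simultaneous switchers."""
    bins = defaultdict(list)
    for inst in instances:
        bins[(int(inst.x // bin_um), int(inst.y // bin_um))].append(inst)
    heat = {}
    for key, cells in bins.items():
        events = []
        for c in cells:
            events.append((c.t_start, +c.i_peak))
            events.append((c.t_end, -c.i_peak))
        running = worst = 0.0
        for _, delta in sorted(events):  # ends sort before starts at ties
            running += delta
            worst = max(worst, running)
        heat[key] = worst
    return heat
```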

Next you want to look at where you may have excessive IR drop in the planned grid. In BQM, since you don’t yet have cell instance placements, you fake it by placing constant current sources at a regular pitch on the low metal segments and then do a static solve to generate an IR-drop heatmap. The evenly-spaced current draw won’t match exact per-instance current draws, but it should be a reasonable proxy, allowing these heatmaps to be generated early in implementation and refined as placement data becomes available.
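A minimal sketch of what such a static solve looks like, assuming a uniform resistive mesh as a stand-in for the low-metal grid, equal current sinks at every node (the regular-pitch proxy above), and Vdd pads at the four corners. None of this reflects RedHawk-SC internals; it just shows the G·v = i computation behind an IR-drop heatmap.

```python
import numpy as np

def ir_drop_heatmap(n=20, r_seg=0.05, i_tap=1e-4):
    """n x n resistive mesh, r_seg ohms per segment, i_tap amps drawn
    at every node; Vdd pads (zero drop) pinned at the four corners."""
    N = n * n
    idx = lambda r, c: r * n + c
    g = 1.0 / r_seg
    G = np.zeros((N, N))                 # nodal conductance matrix
    for r in range(n):
        for c in range(n):
            k = idx(r, c)
            for rr, cc in ((r + 1, c), (r, c + 1)):  # right/down edges
                if rr < n and cc < n:
                    m = idx(rr, cc)
                    G[k, k] += g; G[m, m] += g
                    G[k, m] -= g; G[m, k] -= g
    i = np.full(N, -i_tap)               # every node sinks i_tap
    for k in (idx(0, 0), idx(0, n - 1), idx(n - 1, 0), idx(n - 1, n - 1)):
        G[k, :] = 0.0; G[k, k] = 1.0; i[k] = 0.0  # pin pad nodes to 0 V
    v = np.linalg.solve(G, i)            # node voltage relative to pads
    return (-v).reshape(n, n)            # positive IR drop per node
```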

You can further refine this analysis using timing slack from the same STA run to prioritize timing-critical cases. Combining all these heatmaps generates the final BQM heatmaps. ANSYS and their customers have shown excellent correlation in observed hotspots between these and heatmaps generated through the traditional RedHawk (non-SC) path.
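One plausible way to combine the maps, purely as an illustration: normalize the SS and IR-drop maps so neither dominates by units alone, then scale by a criticality factor that grows as slack shrinks. The weighting and the slack_ref parameter below are my choices, not the ANSYS formula.

```python
import numpy as np

def bqm_heatmap(ss_map, ir_map, slack_map, slack_ref=50.0):
    """All inputs are same-shape 2D arrays; slack_map in ps. Bins with
    slack at or below zero get full weight; slack >= slack_ref gets
    none. Returns a composite heatmap in [0, 1]."""
    crit = np.clip(1.0 - slack_map / slack_ref, 0.0, 1.0)
    ss_n = ss_map / max(ss_map.max(), 1e-12)   # normalize SS currents
    ir_n = ir_map / max(ir_map.max(), 1e-12)   # normalize IR drops
    return crit * (ss_n + ir_n) / 2.0
```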

All of this analysis leverages the ANSYS Seascape architecture underlying RedHawk-SC to elastically distribute compute to build heatmaps, which means the analysis can run very quickly, allowing for an iterative flow through block place and route. That speed is really the whole point of the exercise. Instead of building a PDN based on early crude analyses like shortest-path resistance checks, then doing detailed analysis with real vectors on the finished PnR to find where you missed problems, the BQM approach provides high coverage earlier in the flow, with no need for vectors or cell placement, enabling incremental refinement of the PDN as you approach final PnR.
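Seascape itself is proprietary, but the shape of the idea — heatmap bins are independent, so the work farms out trivially — can be sketched with the Python standard library. The tile decomposition, analyze_tile callback, and worker count here are illustrative assumptions, not the actual Seascape API.

```python
# Illustrative only: farm independent heatmap tiles out to worker
# processes and merge the per-tile results (an elastic map-reduce
# pattern in miniature).
from concurrent.futures import ProcessPoolExecutor

def build_heatmap_parallel(tiles, analyze_tile, workers=8):
    """Map analyze_tile (tile -> {bin: value}) over independent tiles,
    then reduce the partial dicts into one heatmap."""
    heat = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(analyze_tile, tiles):
            heat.update(partial)
    return heat
```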

ANSYS reports that runtime of the BQM approach can be 3X faster than a dynamic analysis based on just a single vector. Note that the static approach in BQM provides essentially complete instance coverage (all instances are effectively toggled), whereas dynamic coverage is inevitably lower. You can raise dynamic coverage by adding more vectors, but then runtime climbs even higher. Overall, you can build and refine your PDN early, avoiding late-stage surprises, and you can do this quickly enough that it makes sense as an iterative step in the PnR flow. You’ll still do signoff at the end with whatever method you feel comfortable with, just without nasty surprises. What’s not to like?

ANSYS tells me they have scripts to automatically set up the SC flow from your RedHawk setup, so it seems like there’s really no excuse not to give this a whirl 🙂 You can register to watch the webinar HERE.
