Shifting Low Power Verification to an IP to SoC Flow
by Ellie Burns on 10-28-2015 at 7:00 am

One of the most exciting recent developments in low power design and verification is the successive refinement flow developed by ARM® and Mentor Graphics®.

Successive refinement partitions the UPF into layers so that IP providers and implementers can add new information as they go. This establishes a much more effective flow for using low power IP correctly and retargeting it to multiple different implementation strategies. Because it is built into UPF 2.0, any IP provider, whether internal or external, and any IP integrator can use it.

It is generating a lot of buzz. For that reason, I wanted to get the inside scoop from someone who has been working very closely with ARM on its creation; so I looked up my colleague Gabriel Chidolue.

Gabriel Chidolue

First of all, Gabriel, what is the primary motivation for ARM to create this flow with Mentor Graphics?

As you know, Ellie, low power IP vendors have a big stake in making sure it is easy for SoC integrators to use their IP correctly. But it wasn't always clear to the IP integrator what could and couldn't be done from a power architecture point of view. This meant there were too many support questions and too many problems. That's not only frustrating, it also drives up the cost for everyone. It comes down to the sheer complexity of the IP and figuring out how to use it in multiple different contexts. ARM wanted to simplify that process with a new methodology that is easier for them to support and easier for their customers to use.

Because it is based on UPF 2.0, you can update and define things as you go along and learn more; for example, as you progress from the subsystem to the system level, or from system configuration to implementation and on down the design chain. This methodology also makes verification easier and quicker by starting it much earlier and supporting configuration and implementation in parallel.

That sounds like really good motivation for the SoC designers to want this too. So what does this flow look like?

It’s important to understand that even though there is only one UPF language/standard, this methodology partitions the use of UPF into three layers: Constraint UPF, Configuration UPF, and Implementation UPF.

The IP vendor delivers the Constraint UPF along with the RTL for their IP. This gives the SoC team an executable way to verify that they are using the IP correctly. The Constraint UPF captures four key things: atomic power domains, which are the lowest-level division of power domains that the IP can sustain; constraints on retention strategies; constraints on isolation strategies; and the specification of fundamental power states for the power domains. Flexibility is built into these constraints so the IP can be used in different power management schemes and contexts.
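
To make this concrete, here is a minimal sketch of what a Constraint UPF might look like for a hypothetical CPU IP. All names (PD_CPU, u_core, and so on) are invented for illustration, and the exact options depend on the UPF version in use (the -atomic option, for example, was added in IEEE 1801-2013 in support of this methodology):

    # Constraint UPF (sketch) -- delivered by the IP provider with the RTL.
    # Atomic power domain: the finest-grained domain the IP can sustain.
    create_power_domain PD_CPU -atomic -elements {u_core}

    # Registers that must be retained if this domain is ever powered down.
    set_retention_elements CPU_RET_ELEMENTS -elements {u_core/u_regfile}

    # Isolation constraint: outputs of the domain must clamp to 0 when off.
    set_isolation CPU_ISO -domain PD_CPU -applies_to outputs -clamp_value 0

    # Fundamental power states of the domain's primary supply.
    add_power_state PD_CPU.primary \
        -state RUN {-simstate NORMAL} \
        -state OFF {-simstate CORRUPT}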

The Configuration UPF is created by the IP integrator, who has a logical view of the power architecture of the system being designed. It includes the actual power domains and their supply sets, and it adds the logic controls that describe how the power domains, isolation, and retention strategies will be controlled. In addition, the power states of the power domains are updated with the appropriate logic control signals. The Configuration UPF itself is technology independent and has no implementation detail.
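
Continuing the same hypothetical example, a Configuration UPF might refine those constraints as sketched below. The supply set and control signal names (SS_CPU, iso_cpu, pwr_req, and so on) are invented, and note that everything here is still technology independent:

    # Configuration UPF (sketch) -- written by the SoC integrator.
    create_supply_set SS_CPU

    # Refine the IP's atomic domain with a logical supply set.
    create_power_domain PD_CPU -update -supply {primary SS_CPU}

    # Complete the isolation strategy with an actual control signal.
    set_isolation CPU_ISO -domain PD_CPU -update \
        -isolation_signal iso_cpu -isolation_sense high -location parent

    # Retention strategy driven by explicit save/restore signals.
    set_retention CPU_RET -domain PD_CPU \
        -save_signal {save_cpu high} -restore_signal {restore_cpu low}

    # Tie the fundamental power states to the logic that controls them.
    add_power_state PD_CPU.primary -update \
        -state RUN {-logic_expr {pwr_req}} \
        -state OFF {-logic_expr {!pwr_req}}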

Finally, the Implementation UPF adds all the details that the target technology supports, including the actual underlying supply nets, power switching information, and much more. The implementation team creates it because they know what the target technology will be.
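
And to finish the hypothetical example, an Implementation UPF might bind the logical supply set to real nets and add power switching; the net, port, and switch names are again invented:

    # Implementation UPF (sketch) -- added by the implementation team.
    create_supply_port VDD
    create_supply_net VDD
    create_supply_net VDD_CPU   ;# switched supply feeding PD_CPU
    create_supply_net VSS
    connect_supply_net VDD -ports {VDD}

    # Resolve the logical supply set onto the actual nets.
    create_supply_set SS_CPU -update \
        -function {power VDD_CPU} -function {ground VSS}

    # Power switch that gates VDD down to VDD_CPU.
    create_power_switch SW_CPU -domain PD_CPU \
        -input_supply_port {vin VDD} \
        -output_supply_port {vout VDD_CPU} \
        -control_port {ctrl pwr_req} \
        -on_state {on_state vin {ctrl}}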


Figure 1: Successive Refinement of Power Intent

Do you have to throw away the Configuration UPF when you go to implementation, or are these things reused or incorporated as you move through the flow?

Nothing is thrown away. Each team leverages what it needs from the prior UPF in its own context. Then, using a feature of UPF 2.0 called -update, it adds any new information to what has already been written.

For example, once you've configured the IP and created the Configuration UPF, you verify that the RTL plus your Configuration UPF do not violate the constraints defined in the IP's Constraint UPF; you also verify that the power management architecture is functioning and logically correct. You can then update the Configuration UPF by adding implementation details to create the Implementation UPF. ARM refers to the RTL + Constraint UPF + Configuration UPF as the golden source. You can now use the Implementation UPF to drive the implementation flow (i.e., synthesis, P&R, etc.) while verifying its additional details in parallel.
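
In tool terms, the layering might be assembled as sketched below, with each later file refining the earlier ones via -update rather than replacing them (the file names are hypothetical):

    # Verify the golden source: RTL + Constraint UPF + Configuration UPF.
    load_upf cpu_ip_constraints.upf    ;# from the IP provider, with the RTL
    load_upf soc_configuration.upf     ;# logical power intent

    # Later, for implementation verification, additionally:
    load_upf soc_implementation.upf    ;# technology detail added via -update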

So you see, all the players get to add in what they know because of the way the UPF has been partitioned into these three layers.

And thus it’s a “successive refinement.” So what are the major benefits of this flow?

One: it establishes, in an executable spec, clear communication of how a piece of IP can be used in any given system context from a power architecture point of view. This greatly reduces the level of support required of the IP provider and the number of mistakes made by IP users.

Two: it allows you to reuse the power intent, applying the same constraints in any system configuration and the same system configuration to different target technologies.

Three: once the constraints and configuration are done, verification can start much earlier. Separating the logical aspects from the technology-dependent ones lets you begin verifying before the implementation is specified, and once the power management architecture is verified, it does not have to be verified again for subsequent implementations.

Four: it makes debug easier in general. If you verify the golden source and find a bug, it is most likely an issue with the logical power management architecture. If instead you verify the Implementation UPF plus RTL and find a bug, it is most likely an issue in the Implementation UPF. Knowing this makes it easier to find the source of a problem.


Figure 2: Enabling Successive Refinement with Questa

Great stuff! So what was Mentor’s motivation to help develop and promote this flow?

We saw that everyone desperately needs to be able to do their power verification earlier, and we’re trying to help them figure out how to do that. So we asked ourselves, how do you automate tools to actually get the kind of productivity that users need?

The answer, as we discovered with SystemVerilog and the UVM, is that you need to standardize on a methodology. In the case of low power, the biggest drag on productivity we see is that verification is put off until very late in the design cycle. So we are looking to help create a methodology that automates verification, shifts it earlier, and makes it easier and faster to get done.


Once the methodology is standardized, we can automate the tools around it. We can build simulation tools, checking tools, and rule sets, supply documentation on how to use it all, and provide help. Just like with UVM, we need that core foundation methodology in order to build effective productivity tools on top of it.

Thanks for the inside scoop, Gabriel. I know this is just scratching the surface. People can learn more about the successive refinement flow by reading the DVCon paper co-authored by ARM and Mentor Graphics at: https://www.mentor.com/products/fv/verificationhorizons/volume11/issue1/successive-refinement?cmpid=10165
