The Future of Logic Equivalence Checking
by Bernard Murphy on 08-07-2024 at 6:00 am

Logic equivalence checking (LEC) is an automated process to verify that modified versions of a design, evolving through implementation, remain logically equivalent to the functionally signed-off RTL. This becomes important for retiming optimizations and for necessary implementation-stage ECOs, which must be proven not to break equivalence. LEC eliminates the need to rerun expensive functional verification cycles, which would be impossible to contemplate as tapeout deadlines approach.
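To make the idea concrete, here is a minimal sketch of what an equivalence check proves, using a hypothetical 4-bit add/subtract block rather than any vendor tool: a miter compares the golden RTL behavior against a restructured implementation and must find no input on which they differ. Production LEC discharges this obligation with formal engines (BDD/SAT) rather than enumeration, but the obligation itself is the same.

```python
from itertools import product

# "Golden" RTL-style behavior: 4-bit add or subtract, selected by sel.
def rtl_model(a, b, sel):
    return (a + b) & 0xF if sel else (a - b) & 0xF

# Restructured "post-synthesis" version: subtraction rewritten as addition
# of the two's complement, so the logic structure differs from the RTL.
def impl_model(a, b, sel):
    operand = b if sel else ((~b + 1) & 0xF)
    return (a + operand) & 0xF

def miter_check():
    """Return a distinguishing input if the two models ever differ, else None."""
    for a, b, sel in product(range(16), range(16), (0, 1)):
        if rtl_model(a, b, sel) != impl_model(a, b, sel):
            return (a, b, sel)
    return None

assert miter_check() is None  # equivalent over the full input space
```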

The scalability problem for LEC

To deliver a signoff-worthy guarantee that a circuit modified in implementation is logically equivalent to the unmodified version, LEC must provide absolute certainty in its results. The only way to do that is through formal analysis. LEC programs in production today have become so effective at this analysis that their use is now mandatory in final signoff.

However, these methods are not without their challenges. First, expanding design complexity is a problem for LEC as much as for any other design automation tool. According to David Stratman (Product Management Director at Cadence), the blocks to which this technology is applied (5-10 million instances today) are 100X larger than in the early days of LEC. Advanced synthesis options back then were used on maybe 1-2 critical blocks, whereas now they are used everywhere in the design. Power domains, which complicate equivalence checking, are everywhere. And functional ECO counts have grown by 100X (driven in part, no doubt, by post-P&R CDC fixes). Yet while LEC was the first formal technology to see mainstream adoption in chip design, the fundamentals of the underlying formal methods haven’t changed much over that period.

ML-based optimizations add scalability by intelligently partitioning designs for massive parallelization and by orchestrating proofs. This is formal analysis, after all, with a choice of proof algorithms (BDD, SAT in multiple flavors) and a range of proof simplification options, techniques familiar in the property checking world for bounding complex problems. Still, even with these accelerators, LEC analyses on large blocks can now run for multiple days.
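A sketch of what such orchestration looks like in principle, with hypothetical engine names and a stubbed-out prover (not the Cadence implementation): comparison points are farmed out in parallel, each escalates through a ladder of engines with a growing time budget, and anything no engine settles is reported as an abort.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def prove_point(point, engine, timeout_s):
    """Placeholder for a per-comparison-point proof call (SAT, BDD, ...).
    A real flow would invoke a formal engine here; this stub reports EQ."""
    return "EQ"

def try_engines(point, engines):
    """Escalate one comparison point through an engine ladder,
    giving each successive engine a larger time budget."""
    for i, engine in enumerate(engines):
        verdict = prove_point(point, engine, timeout_s=60 * (i + 1))
        if verdict in ("EQ", "NEQ"):
            return verdict
    return "ABORT"

def orchestrate(points, engines=("sat", "bdd", "sat-rewrite")):
    """Prove comparison points in parallel; anything no engine can settle
    is collected as an abort for manual debug."""
    results, aborts = {}, []
    with ProcessPoolExecutor() as pool:
        futures = {pool.submit(try_engines, p, engines): p for p in points}
        for fut in as_completed(futures):
            point, verdict = futures[fut], fut.result()
            if verdict == "ABORT":
                aborts.append(point)
            else:
                results[point] = verdict
    return results, aborts

if __name__ == "__main__":
    results, aborts = orchestrate([f"cmp_out[{i}]" for i in range(128)])
    print(len(results), "points proven,", len(aborts), "aborts")
```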

Another scalability problem is related to “aborts”. LEC isn’t perfect and from time to time runs into instances where it is unable to conclude whether sub-circuit comparisons match or don’t match. There can be multiple reasons for an abort: complexity of the sub-circuit under comparison (formal methods don’t like problems which get too big), or aggressive synthesis/PPA optimizations creating big structural differences between the RTL and the post-synthesis circuit, especially in datapath elements. It’s worth remembering that such elements are no longer confined to ALUs. Much of design innovation today revolves around datapath components in more diverse applications: in the MAC arrays common in AI accelerators, in wireless basebands in phones and wireless infrastructure for MIMO management, and in headphones and earbuds for 3D audio.
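The MAC case illustrates why: once synthesis has restructured the arithmetic, the implementation shares almost no internal structure with the RTL, so only functional analysis can relate the two. The toy comparison below (plain Python, 4-bit operands so exhaustive checking is feasible) contrasts a behavioral multiply-accumulate with a shift-and-add rebuild of the same function.

```python
from itertools import product

WIDTH = 4
MASK = (1 << WIDTH) - 1

def mac_reference(a, b, acc):
    """Behavioral MAC: multiply-accumulate, truncated to WIDTH bits."""
    return (a * b + acc) & MASK

def mac_shift_add(a, b, acc):
    """Same function rebuilt as shift-and-add, the kind of restructuring
    synthesis applies; the bit-level structure no longer resembles the RTL."""
    total = acc
    for i in range(WIDTH):
        if (b >> i) & 1:
            total += a << i
    return total & MASK

# Equivalent on every input, despite having no structural resemblance.
assert all(
    mac_reference(a, b, acc) == mac_shift_add(a, b, acc)
    for a, b, acc in product(range(1 << WIDTH), repeat=3)
)
```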

The issue here is that each abort must be debugged manually; you can’t sign off a design with open aborts. That debug cycle can easily consume more time than the LEC run itself. Some solutions suggest extracting logic cones around the problem area in hopes that an equivalence run may be more successful when working on a smaller circuit without surrounding logic. But this is a temporary hack. Alternatively, synthesis options may need to be dialed back to prioritize verification over circuit optimization. David describes such cases as “LEC-ability problems”. Making such choices up-front while minimally compromising PPA optimization requires designer insight and experience. Given these tradeoffs, LEC must now be considered an additional cost function in the implementation process.
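The cone-extraction hack is essentially a transitive fan-in traversal from the failing compare point. A minimal sketch, assuming a netlist represented as a plain mapping from each signal to its driver inputs (a made-up fragment, not any tool's data model):

```python
def fanin_cone(netlist, compare_point):
    """Collect the transitive fan-in of one compare point so it can be
    re-checked in isolation. `netlist` maps each signal to the signals
    driving it; primary inputs map to an empty list."""
    cone, stack = set(), [compare_point]
    while stack:
        sig = stack.pop()
        if sig in cone:
            continue
        cone.add(sig)
        stack.extend(netlist.get(sig, []))
    return cone

# Hypothetical fragment: out1 is driven by n1 and n2, which fan in to a, b, c.
netlist = {"out1": ["n1", "n2"], "n1": ["a", "b"], "n2": ["b", "c"], "out2": ["c", "d"]}
print(fanin_cone(netlist, "out1"))   # {'out1', 'n1', 'n2', 'a', 'b', 'c'}
```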

Learning through the evolution of a design project

Designers learn what works and what causes problems as they run analyses over the course of a project and between projects. That isn’t the case for a typical EDA analysis tool, which is optimized to be effective within a single analysis run or a group of related runs (such as an ML orchestration) but otherwise remembers nothing between runs.

AI is a natural way to address this need by learning through project evolution, in part perhaps to reduce runtimes through insight into what does not need to be redundantly re-checked. Such learning could also infer how to grade ECOs up-front for LEC-ability, and then automatically implement those changes within an optimization framework. Clearly such a capability would not be as basic as the wrappers offered by current ML orchestration flows (though those will continue to be valuable). A project-scope AI must be supported by a project-scope learning database able to support exhaustive exploration of equivalence-based recipes to identify and optimize functional ECOs based on changes in RTL.
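Purely as an illustration of the persistence today’s flows lack (the file name, schema, and recipe fields below are all invented, and this is in no way Cadence’s implementation), a project-scope store could start as simply as a log of which proof recipe closed which block, consulted before the next ECO run:

```python
import json
import pathlib

DB = pathlib.Path("lec_history.json")   # hypothetical project-scope store

def record_run(block, recipe, outcome, runtime_s):
    """Append one LEC result so later runs can learn from it."""
    history = json.loads(DB.read_text()) if DB.exists() else []
    history.append({"block": block, "recipe": recipe,
                    "outcome": outcome, "runtime_s": runtime_s})
    DB.write_text(json.dumps(history, indent=2))

def suggest_recipe(block):
    """Return the fastest recipe that previously closed this block, if any."""
    history = json.loads(DB.read_text()) if DB.exists() else []
    wins = [h for h in history if h["block"] == block and h["outcome"] == "proven"]
    return min(wins, key=lambda h: h["runtime_s"])["recipe"] if wins else None

record_run("mac_array", {"engine": "sat", "datapath_solver": True}, "proven", 5400)
print(suggest_recipe("mac_array"))
```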

One thing that stands out for me here is how the center of design optimization must shift from isolated tool analysis (in this context LEC) to a central AI-based optimization framework surveying variations in a cost function across a multi-dimensional state space (LEC, STA, DRC, …). Shades of multiphysics analysis, but here multianalytics 😀

The future of logic equivalence checking must account for these fundamental shifts: new design trends, and higher efficiency in support of the architectures underpinning those trends. A role for generative AI approaches in design optimization seems like a natural direction.
