Veriest Meetup Provides Insights on Safety, Deadlocks
by Bernard Murphy on 10-01-2020 at 6:00 am

I wasn’t familiar with Veriest, and I’m guessing you may not be either. They are a design and verification services company based in the Tel Aviv area of Israel. The CEO, Moshe Zalcberg, was earlier GM for Cadence in Israel, which for me echoes the early days of working with Ajoy Bose at Interra. Veriest has a strong emphasis on verification, for example jump-starting formal verification at Valens and teaming with Nuvoton to accelerate tight dynamic verification schedules. A Veriest meetup highlighted another, delightfully free, service: sharing verification know-how. They host meetup events for verification knowledge sharing, inviting experts from other companies to speak on areas of their expertise. I attended one of these events in September, in which Jyotika Athavale of Intel spoke on functional safety and Laurent Arditi of Arm spoke on using formal for deadlock detection.

Veriest meetup

Jyotika on soft error-rate modeling

Jyotika is a Principal Engineer at Intel based in Chandler. She shared insight into soft error rate modeling from a neural network perspective. This is a highly topical area; we know about functional safety in “standard” logic in an SoC, but AI accelerators are still treated as black boxes for safety, which of course isn’t a long-term solution. She quoted an MIT paper in which an approaching large truck could be mis-identified as a bird, potentially leading to a fatal collision. That this is possible shouldn’t be a surprise. We’ve already seen many papers on how pixel-level hacking can completely change image identification. Soft errors can do the same thing. Jyotika’s talk was on modeling that soft error rate.

She observed that errors can occur in image memory, in weights, and even in computation. The consequence, based on their empirical studies, is that fatal errors (crash/hang) are more likely than silent data corruption (such as the bird-versus-truck example). Here I gather she is talking about NNs implemented on multicore processors; it would be interesting to know how this might apply to systolic array architectures. She also observed that the training phase is inherently error-tolerant since it is a slow iterative process, though I don’t think the same claim could be made for inference.
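To make the weight-corruption case concrete, here is a toy Python sketch (my own illustration, not from the talk) of why a single-event upset in a stored weight can have wildly different consequences depending on which bit flips. It emulates a bit-flip in the IEEE-754 float32 encoding of a weight:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0..31) in the IEEE-754 float32 encoding of `value`,
    emulating a single-event upset in a stored NN weight."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

weight = 0.5
# A low-order mantissa bit barely changes the weight: likely silent,
# possibly even masked entirely by the network's own error tolerance.
low = flip_bit(weight, 0)
# A high exponent bit changes the magnitude by many orders of magnitude:
# the kind of corruption that can saturate a layer or crash the run.
high = flip_bit(weight, 30)
print(low, high)
```

This also hints at why crash/hang outcomes can dominate: the catastrophic flips are the ones most likely to be noticed, while small perturbations are often absorbed.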

Laurent on improving deadlock detection, debug

Laurent is a Senior Principal Engineer at Arm in the Nice area of France, with particular expertise in formal verification. His topic was a refinement to deadlock hunting, a popular application for formal since these problems may be difficult to find in directed or random simulation. This is one of those formal topics that takes a while to wrap your head around, but I think I’ve got it.

First, the standard approach to checking for a deadlock in formal is a liveness assertion – checking that something good eventually happens, say getting an ack following a req. The problem is that such assertions are based on Linear Temporal Logic (LTL), which checks along just one time-path. You may prove there’s no deadlock along that path (it escapes), or there may only appear to be a deadlock along that path. In the second case, maybe you didn’t wait long enough, or maybe the design needed a nudge to get out of a local trap. You can add fairness constraints to fix either problem, but constraints can also introduce bugs. They can also expose real problems, which is a good thing, but the process is pretty hit and miss. The real problem is that you didn’t check exhaustively along all temporal paths. To do that you need Computation Tree Logic (CTL). That’s not a problem for the core engines, but assertion languages such as SVA have no way to represent a CTL assertion.
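The distinction is easier to see on a toy transition graph. The following Python sketch (my own illustration, not from Laurent’s talk) separates a state that merely *can* stall forever – which an LTL liveness check can only dismiss with fairness constraints – from a state where the ack is genuinely unreachable, the “definitely not escapable” deadlock a CTL-style check flags:

```python
# Toy protocol states: from 'req' we can stall in 'wait' or progress to 'ack'.
# 'wait' is a trap: once there, no path ever reaches 'ack'.
GRAPH = {
    "req":  ["wait", "ack"],
    "wait": ["wait"],
    "ack":  [],
}

def can_reach(graph, start, goal):
    """CTL-style EF(goal): does *some* path from start reach goal?"""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s == goal:
            return True
        if s not in seen:
            seen.add(s)
            stack.extend(graph[s])
    return False

def hard_deadlock_states(graph, goal):
    """States from which goal is unreachable on *every* path --
    real deadlocks, with no escape any fairness constraint could enable."""
    return {s for s in graph if s != goal and not can_reach(graph, s, goal)}

print(hard_deadlock_states(GRAPH, "ack"))
# 'wait' is flagged as a true deadlock. 'req' is not, even though the
# single path req -> wait -> wait -> ... never acks: an escape exists,
# which is exactly the case a one-path LTL check cannot distinguish.
```

Model checkers of course do this symbolically over vastly larger state spaces, but the reachability question is the same.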

CTL checks generated by Questa formal

Laurent used Mentor Questa Formal for verification. Given a standard (LTL) liveness assertion, the tool automatically generates and runs a CTL check under the hood, bypassing the problem of not being able to write these checks directly. You get both checks: in Laurent’s terms, a maybe-escapable deadlock check (the LTL version) and a definitely-not-escapable check (the CTL version). Between these you can figure out whether you have a real problem, then decide whether to add constraints or pass it back to design as a real bug. He makes the point that reaching this conclusion is much easier when you have both LTL and CTL checks.

Good insights. To register for future Veriest events, and to access the presentation video and slides of this last event, click here.

Moshe’s opening comments to the event are also available here.
