Breker Hosts an Energetic Panel on Spec-Driven Verification
by Bernard Murphy on 03-18-2026 at 6:00 am


I was fortunate to be asked to moderate an evening panel adjacent to the first day of DVCon 2026, on AI-Driven SoC Verification starting from specs. You know my skepticism about panels; I find they rarely generate insights or controversy. This panel was quite different. Panelists were Shelley Henry (CEO, Moores Lab AI), Adnan Hamid (CTO, Breker Verification Systems), Deepak Manoharan (Senior Director of Engineering, Arm), and Michael Chin (Senior Principal Engineer, Intel Corp). If you want to know more about the reality of AI deployment in functional verification, these panelists have opinions. I summarize my takeaways below.

Energetic panel on AI in verification

Why automate spec-driven verification?

The purpose of verification is to certify that what is being built is (functionally) consistent with the design spec (or a test spec; here I assume the design spec). This spec is generated by an experienced in-house architect, crossing what they currently know about customer requirements with what they know about available in-house baseline designs, IP options and expertise.

There are unavoidable challenges in specs. They remain moving targets some way into the design schedule. Architects deliver a first pass so that design and DV teams can shift left, understanding that updates will continue. Customers themselves may not yet have frozen their requirements as they continue to gauge market expectations; they too are shifting left with respect to the architect. Add to this that neither writing, reading nor understanding is error-free.

If you have ever been on the review cycle for a document, you will understand how mistakes happen. A first-pass review is diligently read and checked; second and later passes are skimmed, with a high chance of missing small but important changes. Revision tracking is an outdated solution for locating and understanding changes.

Extracting relevant updates is a perfect application for AI. Also perfect is dynamically discovering topic-relevant sections in the spec: “Show me all mentions of fence in this spec.” A related challenge, similarly addressed, is dealing with specs which back-reference earlier specs.
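As a toy illustration of that kind of query (my own sketch, not anything shown on the panel), the fragment below does a literal keyword scan over a spec file. A real AI assistant would use semantic retrieval rather than exact matching, but the shape of the request and the result is the same. The file name soc_spec.txt is hypothetical.

```python
import re

def find_mentions(spec_text: str, term: str, context: int = 2):
    """Return (line_number, snippet) pairs for every line mentioning `term`,
    with a little surrounding context for each hit."""
    lines = spec_text.splitlines()
    hits = []
    for i, line in enumerate(lines):
        if re.search(rf"\b{re.escape(term)}\b", line, flags=re.IGNORECASE):
            lo, hi = max(0, i - context), min(len(lines), i + context + 1)
            hits.append((i + 1, "\n".join(lines[lo:hi])))
    return hits

# "Show me all mentions of fence in this spec"
with open("soc_spec.txt") as f:          # hypothetical spec file
    for line_no, snippet in find_mentions(f.read(), "fence"):
        print(f"--- line {line_no} ---\n{snippet}\n")
```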

Here I should acknowledge a good question from the audience: “What if you don’t have a spec?” This is startup territory: no baseline design and not enough time to create a spec. I’ve been in this position myself. A startup has a core differentiating idea and should be busy creating a proof of concept around that idea before it runs out of money. Creating a spec is a very low priority. That said, pre-funding and in flight, they must create documents and mail/text threads to communicate internally. Perhaps these could be sucked into a spec generator (Shelley, any comments?).

Automation experiences in production design

How accurately can AI generate, from a raw spec, an intermediate representation, say a table of opcodes or a flow diagram of operation? The sense here is 90-95%, but closing that last few percent is hard. No ideas were shared on how to characterize this shortfall, though Shelley has thoughts on how the gap could be narrowed.

There was good discussion around hallucinations and over-enthusiastic claims from AI, with general agreement that we should never accept early responses. Instead, repeat the question to elicit different responses, pick your favorites and iterate to a good solution. An interesting perspective was the importance of figuring out where in the AI process it is best to provide feedback, guiding/training the system onto a better path when it looks like it is headed somewhere unproductive. Live experience has shown this can evolve correctness from 40% on a first pass to 100% over successive learning passes. Impressive!

Why not automate this process by setting different agents to work on an answer and judging between their answers with a critique agent? An interesting idea, though there were mixed feelings on how ready we are for that step (or whether we will be given the option).
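A minimal sketch of that pattern, under my own assumptions: generate(question, feedback) and critique(question, answer) are hypothetical wrappers around whatever models a team deploys, with critique returning a score plus notes. This is an illustration of the idea raised on the panel, not anyone's actual tool.

```python
def ask_with_critique(question, generate, critique,
                      n_candidates=3, max_rounds=4, threshold=0.9):
    """Generate several candidate answers, have a critique agent score them,
    and feed its notes back into the next round until one clears the bar.
    `generate(question, feedback)` returns an answer; `critique(question,
    answer)` returns (score, notes). Both are assumed wrappers, not real APIs."""
    feedback, best_score, best_answer = "", 0.0, None
    for _ in range(max_rounds):
        candidates = [generate(question, feedback) for _ in range(n_candidates)]
        scored = [(critique(question, ans), ans) for ans in candidates]
        (score, notes), answer = max(scored, key=lambda item: item[0][0])
        if score > best_score:
            best_score, best_answer = score, answer
        if score >= threshold:
            break
        # Steer the next round before it heads down an unproductive path.
        feedback = notes
    return best_answer, best_score
```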

Which brought us to trust. This boils down to decomposing tasks between checkpoints with easily human-checkable output. PSS generated from a spec is easy to read, and therefore errors are easy to catch. Going all the way from spec to UVM is a more challenging jump, though Shelley suggests there is a market for that too, perhaps based more on UVM familiarity than on ease of checking.

Threat or opportunity for DV engineers?

Will DV engineers train AI and then be out of a job? One response was that AI will simply make us more productive. We are nowhere near maxing out the appetite for new semiconductor devices. Using AI, more of these will be in reach and we’ll need all our engineers to satisfy that need.

A rather different viewpoint noted that design execs are now pushing for a shorter design lifecycle per chip, to be competitive on time to market and cost. AI will play a larger part in that lifecycle than we may find comfortable, but we may have to adapt. Engineers who are curious, who can ask good questions and have a learning mindset will thrive in an AI-centric process. Those who are stuck in their old ways will not do so well.

A third panelist told his kids that they should not plan to do what he does, because that job won’t exist when they graduate. Current DV roles will have been replaced by verification architects. (Shout out here also to Abhi Kolpekwar at Siemens who calls them “verification scientists”.)

In closing, we do indeed need to train the AI, just as we currently train junior engineers. I’m sure existing training collateral would be a good start. We also need to develop more systematized methods to assess the performance of AI verification. Today these may be captured in spreadsheets and communal know-how; now we must define metrics which critique agents can check, together with processes to support periodic human review and update.
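As one hedged illustration of what “metrics a critique agent can check” might look like, here is a sketch with invented field names (my own, not a proposal from the panel): each metric carries a machine-checkable threshold plus a record of the last human review.

```python
from dataclasses import dataclass

@dataclass
class VerificationMetric:
    """One systematized check, in place of a spreadsheet row or tribal know-how."""
    name: str               # e.g. "spec-to-PSS translation accuracy"
    value: float            # measured on the current run, 0.0 to 1.0
    target: float           # threshold a critique agent can enforce automatically
    last_human_review: str  # date of the most recent periodic human sign-off

def needs_human_attention(metrics):
    """Return the metrics a critique agent cannot sign off on by itself."""
    return [m for m in metrics if m.value < m.target]
```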

Breker and MooresLab have partnered to create the first commercial AI-driven SoC verification solution, I assume addressing a number of these areas. You can learn more HERE, including a recording of the panel discussion.

Exciting times!

Also Read:

Verifying RISC-V Platforms for Space

A Principled AI Path to Spec-Driven Verification

Breker Verification Systems at the 2025 Design Automation Conference #62DAC
