Verification Futures with Bronco AI Agents for DV Debug

by Daniel Nenni on 01-16-2026 at 6:00 am

(Image: Bronco AI Verification Futures 2025)

Verification has become the dominant bottleneck in modern chip design. As much as 70% of the overall design cycle is now spent on verification, a figure driven upward by increasing design complexity, compressed schedules, and a chronic shortage of design verification (DV) engineering bandwidth. Modern chips generate thousands of tests per night, producing massive volumes of logs and waveforms. Within this flood of data, engineers must find the rare, chip-killing bug hidden among hundreds of failures. Verification today is fundamentally a large-scale data analysis problem, repeated daily under intense time pressure.
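To make the "large-scale data analysis" framing concrete, a common first triage step is to bucket the night's failures by a normalized error signature, so that hundreds of failing tests collapse into a handful of distinct problems. This is a minimal illustrative sketch (the log lines and test names are invented, not from any real regression):

```python
import re
from collections import defaultdict

def normalize_signature(error_line: str) -> str:
    """Collapse run-specific details (addresses, timestamps) so that
    failures caused by the same bug bucket together."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<ADDR>", error_line)  # hex addresses first
    sig = re.sub(r"\b\d+\b", "<N>", sig)                   # then timestamps/counts
    return sig.strip()

def bucket_failures(failures):
    """Group (test_name, error_line) pairs by normalized signature."""
    buckets = defaultdict(list)
    for test, line in failures:
        buckets[normalize_signature(line)].append(test)
    return dict(buckets)

failures = [
    ("cpu_smoke_001", "UVM_ERROR @ 125000: AXI resp timeout at 0xDEAD0040"),
    ("cpu_smoke_007", "UVM_ERROR @ 98700: AXI resp timeout at 0xBEEF0100"),
    ("noc_rand_042",  "UVM_FATAL @ 4410: credit underflow on port 3"),
]
buckets = bucket_failures(failures)
# The two AXI timeouts collapse into one bucket; the NoC fatal stands alone.
```

Even this crude normalization turns three raw failures into two distinct investigations, which is the kind of reduction that has to happen daily at regression scale.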

Traditional approaches struggle to scale with this reality. Human engineers are exceptionally strong at deep, creative reasoning about a single complex failure, but they cannot efficiently process thousands of datasets simultaneously. Classical machine learning techniques, while powerful in narrow contexts, face severe limitations in DV. They often fail to generalize across architectures such as CPUs, GPUs, NoCs, or memory subsystems. Training data is difficult to collect due to IP sensitivity, labeling requires expert engineers, and constant design evolution creates distribution shifts between chip versions. These constraints limit the long-term impact of conventional ML in verification.

Bronco AI agents for DV represent a step change. Instead of relying on narrow models trained for specific tasks, agent-based systems leverage large reasoning models combined with tool use, memory, and decision-making loops. These agents generalize more effectively because they are trained on internet-scale code and problem-solving data rather than proprietary design specifics. They can be steered through natural language, allowing DV engineers to guide investigations intuitively. Crucially, agents learn from metadata and patterns rather than memorizing raw data, reducing overfitting and mitigating IP and security concerns by selectively handling and discarding context.

In DV workflows, Bronco AI agents operate much like a highly scalable junior-to-senior engineer hybrid. When a simulation fails, the agent autonomously decides how to investigate, executes standard DV actions such as log parsing and waveform inspection, and iterates until it identifies a likely root cause. If the issue exceeds its confidence threshold, the agent escalates with a well-formed ticket for a human engineer. This approach allows routine debug work to be handled automatically while preserving human expertise for the hardest problems.
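The investigate-then-escalate loop described above can be sketched in a few lines. Everything here is a hypothetical illustration: the tool interface, the confidence threshold, and the stub toolset are assumptions for the sketch, not Bronco AI's actual API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    hypothesis: str
    confidence: float  # 0.0 .. 1.0

def debug_failure(failure, tools, max_steps=10, threshold=0.8):
    """Iterate tool calls until a root-cause hypothesis clears the
    confidence bar; otherwise escalate to a human with the evidence
    gathered so far, packaged as a ticket."""
    evidence = []
    for _ in range(max_steps):
        tool = tools.choose_next(failure, evidence)  # decide what to look at next
        evidence.append(tool.run(failure))           # e.g. parse a log, query a wave
        finding = tools.assess(evidence)             # form or refine a hypothesis
        if finding.confidence >= threshold:
            return {"status": "root_caused", "hypothesis": finding.hypothesis}
    return {"status": "escalated",
            "ticket": {"failure": failure, "evidence": evidence}}

class StubTools:
    """Toy toolset: each tool run reveals a clue and confidence grows."""
    def choose_next(self, failure, evidence):
        return self
    def run(self, failure):
        return "clue"
    def assess(self, evidence):
        return Finding("fifo overflow in dma_wr path", 0.3 * len(evidence))

result = debug_failure("dma_rand_013 timeout", StubTools())
```

The key design point is the explicit escalation path: the loop is bounded, and anything that does not converge becomes a well-formed ticket rather than a silent failure.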

The impact of this agentic approach is measurable. In real subsystem-level UVM test failures on next-generation ASICs, Bronco AI agents were able to index new regressions within minutes, adapt to unfamiliar error signatures, and build an understanding of designs containing hundreds of thousands of lines of RTL.

In one case, an agent analyzed over 100,000 lines of logs and approximately 20 GB of waveform data to identify a deeply nested root cause in under 10 minutes, work that a DV lead estimated would have taken an experienced engineer hours, and a less experienced one days.

AI agents also fundamentally change how waveform debug is performed. Traditional waveform analysis forces engineers to scroll through laggy GUIs, manually correlating thousands of signals across time windows and following one hypothesis at a time. Agents, by contrast, can examine many signals, hierarchies, and failure modes simultaneously. They can correlate errors across CPU, memory controllers, fabrics, and accelerators, classify failures, and recognize recurring patterns across regressions.
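The breadth-first scan an agent performs over a waveform can be approximated by ranking every signal by how recently it toggled before the error. This is a hedged sketch under invented data; the signal names and sampled traces are illustrative only:

```python
def transitions(trace):
    """Return the times at which a sampled (time, value) trace changes value."""
    return [t for (t, v), (_, prev) in zip(trace[1:], trace) if v != prev]

def correlate(error_time, traces, window=50):
    """Rank signals by how close their last transition lands to the error.
    Signals that toggled just before the failure are the first suspects."""
    suspects = []
    for name, trace in traces.items():
        near = [t for t in transitions(trace)
                if error_time - window <= t <= error_time]
        if near:
            suspects.append((error_time - max(near), name))
    return [name for _, name in sorted(suspects)]

traces = {
    "mem_ctrl.rvalid":   [(0, 0), (90, 1), (120, 0)],
    "fabric.credit_cnt": [(0, 4), (60, 3), (95, 0)],
    "cpu.irq":           [(0, 0)],  # quiet signal: no transitions
}
ranked = correlate(error_time=125, traces=traces)
```

A human in a GUI follows one such hypothesis at a time; the point of the agentic approach is that this scan runs over thousands of signals and many failure modes in parallel.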

Perhaps most importantly, these systems improve over time. By learning from past failures, tickets, and human feedback, AI agents build reusable debug playbooks, discover efficient shortcuts, and develop generalized intuition—such as recognizing which issue types tend to appear in certain subsystems. This continuous learning enables faster time-to-value without custom AI training and allows seamless integration into existing EDA flows.
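A debug playbook amounts to a memory keyed by failure signature: once an investigation succeeds, the steps and root cause are stored so that the next similar regression starts from a known recipe. A minimal sketch, with invented entries:

```python
class PlaybookStore:
    """Maps a failure signature to the investigation that worked last time."""
    def __init__(self):
        self._plays = {}

    def record(self, signature, steps, root_cause):
        """Save a successful investigation for reuse."""
        self._plays[signature] = {"steps": steps, "root_cause": root_cause}

    def suggest(self, signature):
        """Return a known playbook for this signature, or None."""
        return self._plays.get(signature)

store = PlaybookStore()
store.record(
    "AXI resp timeout at <ADDR>",
    steps=["check outstanding txn table", "inspect arbiter grant waveform"],
    root_cause="arbiter starvation under back-to-back writes",
)
play = store.suggest("AXI resp timeout at <ADDR>")
```

In practice the lookup would be fuzzy rather than an exact key match, but the shape is the same: accumulated debug wisdom becomes a queryable asset instead of tribal knowledge.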

Bottom line: AI agents deliver value in verification not by replacing human insight, but by amplifying it through scale, speed, and learning. As verification complexity continues to grow, agentic AI offers a practical path to closing the verification gap.

Also Read:

Superhuman AI for Design Verification, Delivered at Scale

AI RTL Generation versus AI RTL Verification

Scaling Debug Wisdom with Bronco AI
