Key Takeaways
- Bronco AI develops AI agents for DV debugging that investigate failures in real-time, significantly reducing the manual workload for verification engineers.
- The company addresses the major bottleneck in chip development by alleviating the pressure on DV engineers, who face overwhelming debugging tasks due to limited resources.
- Bronco differentiates itself from competitors by focusing specifically on complex DV debugging rather than general design tasks, allowing for enhanced performance and efficiency.
David Zhi LuoZhang is Co-Founder and CEO of Bronco AI, with extensive experience in building AI systems for mission-critical, high-stakes applications. Previously, while at Shield AI, he helped train AI pilots that could beat top human F-15 and F-16 fighter pilots in aerial combat. There, he created techniques to improve ML interpretability and reliability, so the system could explain why it flew the way it did. He gave up a role at SpaceX working on the algorithms to coordinate constellations of Starlink satellites in space, and instead founded Bronco to bring AI to semiconductors and other key industries.
Tell us about your company.
We do AI for design verification. Specifically, we’re building AI agents that can do DV debugging.
What that means is the moment a DV simulation fails, our agent is already there investigating. It looks at things like the waveform, the run log, the RTL and UVM, and the spec to understand what happened. From there, it works until it finds the bug or hands off a ticket to a human.
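To make that flow concrete, here is a minimal, purely illustrative sketch of what an automated failure-triage hook around a regression run might look like. The directory layout, the `collect_artifacts` helper, and the disposition logic are hypothetical assumptions for illustration only, not Bronco’s actual product or API.

```python
# Hypothetical sketch of a post-regression triage hook (not Bronco's actual API).
# When a simulation fails, gather the artifacts a DV engineer would normally open
# and hand them to an investigating step that either names a suspect or files a ticket.

from dataclasses import dataclass
from pathlib import Path

@dataclass
class FailureArtifacts:
    run_log: Path   # simulator log for the failing seed
    waveform: Path  # dumped waves (e.g. VCD/FSDB) around the failure window
    rtl_dir: Path   # design source under test
    tb_dir: Path    # UVM testbench source
    spec: Path      # relevant spec document

def collect_artifacts(run_dir: Path) -> FailureArtifacts:
    """Assume a conventional regression layout; real layouts vary per project."""
    return FailureArtifacts(
        run_log=run_dir / "sim.log",
        waveform=run_dir / "waves.vcd",
        rtl_dir=run_dir / "rtl",
        tb_dir=run_dir / "tb",
        spec=run_dir / "spec.pdf",
    )

def triage(run_dir: Path) -> str:
    """Return a short disposition: a suspected bug class plus a hand-off."""
    artifacts = collect_artifacts(run_dir)
    # Placeholder for the agent call: in practice this is where an AI agent would
    # read the log, trace the waveform, and cross-reference the RTL, UVM, and spec.
    if "UVM_ERROR" in artifacts.run_log.read_text(errors="ignore"):
        return "testbench issue suspected; ticket routed to the verification owner"
    return "design bug suspected; ticket routed to the block designer"
```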
We’re live-deployed with fast-moving chip startups and are working with large public companies to help their engineers get a jump on debugging. And we’re backed by tier-1 Silicon Valley investors and advised by leading academics and semiconductor executives.
What problems are you solving?
If you look at chip projects, verification is the largest, most time-consuming, and most expensive part. And if you look at how each DV engineer spends their time, most of it goes to debug. They are watching over these regressions and stomping out issues as they show up over the course of the project.
Every single day, the DV engineer gets to work and they have this stack of failures from the night’s regression to go through. They have to manually figure out if it’s a design bug or a test bug, if they’ve seen the bug before, what the root cause might be, and who to give it to. And this is quite a time-consuming and mundane debugging process.
This creates a very large backlog in most companies, because the task of understanding what’s happening in each failure typically falls onto a select few key people on the project who are already stretched thin. Bronco is helping clear this bottleneck and take the pressure off those key folks, and in so doing unblock the rest of the team.
What application areas are your strongest?
We focus on DV debug. We chose DV debug because it is the largest pain point in chip development, and because from a technical standpoint it is a very strong motivating problem.
To do well at DV debug, we need to cover all the bases of what a human DV is looking at and using to solve their problems today. For example, we’re not just assisting users in navigating large codebases or in reading big PDF documents. We’re also talking about making sense of massive log files and huge waveforms and sprawling design hierarchies. Our agent has to understand these things.
This applies at all levels of the chip. With customers, we’ve deployed Bronco everywhere from individual math blocks up to full chip tests with heavy NoC and numerics features. One beautiful thing about the new generation of Generative AI tools is that they can operate at different levels of abstraction the same way humans can, which greatly improves their scalability compared to more traditional methods that would choke on higher gate counts.
What keeps your customers up at night?
It’s a well-known set of big-picture problems that trickle into day-to-day pains.
Chip projects need to get to market faster than ever, and the chips need to be A0 ready-to-ship, but there just aren’t enough DV engineers to get the job done.
That manifests in there not being enough engineers to handle the massive amount of debugging that needs to go into getting any chip closed out. So the engineers are in firefighting mode, attacking bugs as they come up, and being pulled away from other important work – work that could actually make them more productive in the long run or could surface bigger issues with uncovered corner cases.
Moreover, this burden falls most heavily on the experts on the project. During crunch time, it’s these more experienced engineers who get inundated with review requests, and because of institutional knowledge gaps, the rest of the team is blocked by them.
What does the competitive landscape look like and how do you differentiate?
There are the large EDA giants, and there are a few other startups using AI for design and verification. Most of their work focuses on common DV tasks like document understanding and code help. These are general, high surface area problems that aren’t too far from the native capabilities of general AI systems like GPT.
No other company is taking the focused approach we are to AI for DV. We are focused on getting agents that can debug very complex failures in large chip simulations. We actually use that as the yardstick for what it means to be good at the more general tasks in the DV context, like understanding spec docs or helping with hardware codebases.
For example, it’s one thing to answer basic questions from a PDF or write small pieces of code. It’s another thing to use all that information while tracing through a complex piece of logic. By taking this focused approach, we’re seeing huge spillover benefits. We almost naturally have a great coding assistant and a great PDF assistant because they’ve been battle-tested in debug.
What new features or technology are you working on?
All of our tech is meant to give your average DV engineer superpowers.
On the human-in-the-loop side, we are making a lot of AI tools that automate the high-friction parts of manual debug. For example, our tool will go ahead and set up the waveform environment to focus on the right signals and windows, so engineers don’t have to spend ridiculous amounts of time clicking through menus.
On the agent side, we want to allow each DV engineer to spin up a bunch of AI DVs to start debugging for them. That requires a really smart AI agent with the right tools and memory, but also really good ways for users to transfer their knowledge to the AI. And of course, we are doing all this in a safe way that stays on-premise at the customer’s site to put their data security first.
And we’re doing all these things on ever-larger and more sophisticated industry-scale chip designs. In the long term, we see a large part of the Bronco Agent being like a scientist or architect, able to do very large system-level reasoning about things like performance bottlenecks, where the Agent has to connect some super high-level observation to some super low-level root cause.
How do customers normally engage with your company?
Customers have a very easy time trying our product, since we can deploy on-prem and can leverage their existing AI resources (e.g., Enterprise ChatGPT). First, the customer chooses a smaller, lower-risk block to deploy Bronco on. Bronco deploys on-premise with the customer, typically via a safe, sandboxed system to run our app. Then, Bronco works with the block owner to onboard our AI to their chip and to onboard their DVs to our AI.
From there, it’s a matter of gauging how much time our AI is saving the DV team on tasks they were already doing, and seeing what new capabilities our tool unlocks for their team.