The 2025 Design and Verification Conference (DVCon) was a four-day event packed with insightful discussions, cutting-edge technology showcases, and thought-provoking debates. The conference agenda included a rich mix of tutorial sessions, a keynote presentation, a panel discussion, and an exhibit hall with Electronic Design Automation (EDA) vendors demonstrating their latest tools and engaging with customers.
AI Dominates the Discussion
A dominant theme throughout the event was Artificial Intelligence (AI), which was featured in over 60 technical papers, 18 technical posters, a dedicated keynote, and a high-profile panel discussion. The tutorial sessions included real-world customer case studies and pioneering university research, demonstrating how AI is reshaping verification methodologies and challenging traditional workflows.
Prime Time for Hardware-Assisted Verification
Hardware-assisted verification (HAV) was also a prominent topic across the technical sessions, keynote, and panel. As AI drives innovation across virtually every industry, from industrial automation and agriculture to banking, medicine, automotive, mobile, and more, verification engineers are grappling with the complexity of increasingly sophisticated AI processing hardware. The surge in AI-specific accelerators, custom chips, and groundbreaking computing architectures has amplified verification challenges, pushing traditional software-based verification methods to their limits.
In response, HAV platforms have become a cornerstone of modern verification strategies. Their ability to manage massive workloads, expedite test cycles, facilitate shift-left methodologies, and provide comprehensive system-level validation and debugging is increasingly vital. The strong attendance at the conference's technical sessions underscores this surge in user interest. As AI hardware continues to advance, the need for HAV solutions will only intensify, cementing their role in ensuring the performance, accuracy, and reliability of next-generation computing systems.
The Rise of Portable Stimulus Technology
Following AI and HAV, portable stimulus technology emerged as another significant subject of interest. This methodology, now adopted by multiple companies, was explored in-depth, with discussions on internally developed frameworks and EDA vendor-driven solutions. Attendees witnessed how the industry is increasingly leveraging portable stimulus to improve test coverage and verification efficiency.
Other Key Topics
Beyond AI, HAV and portable stimulus, DVCon also highlighted significant advancements in:
- UVM Deployment Case Studies: The industry continues to refine and expand the Universal Verification Methodology (UVM) framework, with companies sharing their successes and lessons learned.
- A Broad Spectrum of Verification Topics: From formal verification techniques to new methodologies in functional safety and security, DVCon showcased a diverse array of technical advancements in design verification.
Keynote: The AI-Driven Revolution in Chip Design and the Rise of the AI Factory
The much-anticipated keynote, “AI Factories Will Drive the Re-invention of Chip Design, Verification, and Optimization,” delivered by Ravi Subramanian, Chief Product Management Officer and leader of the Product Management & Markets Group (PMG) at Synopsys, and Artour Levin, Vice President of AI Silicon at Microsoft, provided a compelling analysis of how artificial intelligence is fundamentally reshaping the semiconductor industry.
The speakers emphasized that AI is no longer just an enabler of innovation; rather, it has become the driving force behind a radical transformation in chip design and verification. This shift is fueled by the relentless expansion of large language models (LLMs) and their insatiable demand for high-performance AI accelerators. Ravi framed the magnitude of this evolution by comparing Moore’s Law, which historically predicted the doubling of transistor density approximately every 18 months, to the explosive growth in LLM parameters, which now double, or even quadruple, within just three to six months.
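To make that comparison concrete, here is a quick back-of-the-envelope calculation. The cadences are assumptions for illustration (an 18-month transistor doubling, and a 3-month parameter doubling at the aggressive end of the range Ravi cited), not figures from the keynote:

```python
# Back-of-the-envelope growth comparison (assumed cadences, illustrative only):
# transistor density doubling every 18 months vs. LLM parameters every 3 months.

window_months = 18
transistor_growth = 2 ** (window_months / 18)   # 1 doubling  -> 2x
llm_param_growth  = 2 ** (window_months / 3)    # 6 doublings -> 64x

print(f"Over {window_months} months: transistors ~{transistor_growth:.0f}x, "
      f"LLM parameters ~{llm_param_growth:.0f}x")
# Silicon delivers 2x while model sizes grow 64x, so the gap must be closed
# by architecture, parallelism, and design/verification productivity.
```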
The presentation featured detailed charts and striking data points that underscored the seismic changes underway. These insights illustrated the mounting complexity in verification, design optimization, and system-level architecture, highlighting how the industry is contending with an era where traditional methodologies are losing steam. The increasing demand for processing throughput presents one of the biggest engineering challenges, as AI workloads continue to scale exponentially. At the same time, memory bandwidth and capacity are struggling to keep pace with the ever-growing model sizes that demand faster access and larger storage capabilities. The tsunami of data required for AI training and inference is estimated to double the total amount of data traversing the Internet each year, adding pressure to an already strained infrastructure.
Another critical issue is interconnect bandwidth, which has become a major bottleneck as AI workloads require ultra-high-speed data movement between compute nodes.
The challenge of energy efficiency looms large, as the industry strives to balance performance gains with power constraints for sustainable scaling. Artour emphasized, “Managing power while scaling performance is critical. If power isn’t controlled, deploying these chips in data centers becomes infeasible. The industry must figure out ways to exponentially grow compute, memory bandwidth, and interconnect efficiency while keeping power consumption sustainable.” He further noted, “Historically, software entered the chip development cycle late, but AI accelerators are fundamentally software accelerators. This shift requires software modeling to begin at the architectural phase. Understanding workloads early enables more efficient hardware design, optimizing transistors and silicon resources to maximize performance while minimizing power. Additionally, today’s software stacks are highly complex. Waiting for silicon to develop software is no longer viable; pre-silicon software development is essential, adding another layer of design challenges.”
Compounding these technical challenges is the massive financial burden of developing next-generation AI hardware. The capital expenditure (CapEx) required to sustain innovation in this space is reaching unprecedented levels, forcing companies to make strategic, long-term investments in infrastructure.
Ravi summed up the momentous shift by declaring that we are on the cusp of a new industrial revolution, one defined not by traditional manufacturing but by AI-powered computation at an unprecedented scale. The AI Factory, a paradigm where AI not only designs chips but optimizes, verifies, and accelerates the next generation of semiconductor breakthroughs, is no longer a vision for the future. It is happening now.
With AI taking center stage in the reinvention of chip design, verification engineers, system architects, and semiconductor companies must adapt to a landscape that is evolving faster than ever before. The keynote left attendees with a powerful message: embracing AI is no longer optional; it is essential for those looking to stay ahead in the age of AI-driven silicon innovation.
Panel: Are AI Chips Harder to Verify?
One of the conference highlights was the panel discussion titled “Are AI Chips Harder to Verify?”
Moderated by Moshe Zalcberg, CEO of Veriest Solutions, the discussion brought together a distinguished panel of industry experts: Harry Foster, Chief Scientist, Verification at Siemens EDA; Ahmad Ammar, Technical Lead, AI, Infrastructure, and Methodology (AIM) at AMD; Stuart Lindsay, Principal Hardware EDA Methodology Engineer at Groq; Shuqing Zhao, Formal Verification Lead at Meta; and Shahriar Seyedhosseini, Generalist Engineer at MatX.
The panel unanimously acknowledged the substantial hurdles in verifying AI chips, but their detailed analysis revealed nuanced perspectives.
Harry Foster (Siemens EDA) highlighted a crucial divergence from traditional SoC verification. He emphasized that AI architectures operate on probabilistic principles, contrasting sharply with the deterministic nature of conventional designs. This shift implies that AI chips aim for “approximate correctness” within acceptable thresholds, rather than strict pass/fail outcomes, thereby necessitating a fundamental recalibration of verification methodologies. How, he questioned, do you effectively verify a system that inherently operates on non-deterministic principles?
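What such a recalibration might look like in a scoreboard is sketched below. This is a minimal illustration of threshold-based checking, not a method presented at the panel; the function name, tolerances, and mismatch budget are all hypothetical:

```python
import numpy as np

def check_approximate(dut_out, ref_out, rtol=1e-2, atol=1e-3, max_mismatch_frac=1e-3):
    """Hypothetical scoreboard check: pass if all but a tiny fraction of
    elements fall within tolerance of the reference model, instead of
    requiring a bit-exact match."""
    within_tol = np.isclose(dut_out, ref_out, rtol=rtol, atol=atol)
    return np.mean(~within_tol) <= max_mismatch_frac

# Stand-in for a DUT whose output carries rounding/approximation noise:
rng = np.random.default_rng(0)
ref = rng.normal(size=1024)
dut = ref + rng.normal(scale=1e-4, size=ref.shape)

assert check_approximate(dut, ref)            # "approximately correct" passes
assert not check_approximate(dut + 1.0, ref)  # a real functional bug still fails
```

The design point is that "pass" becomes a statistical statement about closeness to a golden model, which in turn forces teams to agree on what thresholds constitute correctness.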
The panel also debated the limitations of existing EDA tools. While these tools have significantly advanced the verification of traditional chips through formal verification, simulation, and emulation, they struggle to adapt to the dynamic, adaptive behavior of AI accelerators. These accelerators, driven by constantly evolving learning models, diverse data distributions, and statistical inference, present a moving target for verification.
Ahmad Ammar brought attention to the scale of deployment, emphasizing that AI chips are typically deployed in massive clusters to handle demanding AI workloads. He pinpointed the difficulty of scaling those massive workloads down to individual chips or small subsets while still achieving realistic verification.
Stuart Lindsay focused on the complexities of AI chip data paths, where the flow of data is not static but dynamically changes based on the parameter values being processed. This variability, coupled with mixed-precision operations where precision levels shift throughout the pipeline, adds significant complexity to modeling and prediction. Furthermore, the dynamic evolution of system states and the presence of feedback loops further complicate the verification process.
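A toy numerical example makes the mixed-precision point concrete. The sketch below is purely illustrative (not a Groq flow): it accumulates the same dot product at two precisions and shows that the "expected" answer itself shifts with the pipeline's precision choices:

```python
import numpy as np

# Purely illustrative: the "golden" answer shifts with accumulation precision.
rng = np.random.default_rng(1)
a = rng.normal(size=50_000).astype(np.float16)
b = rng.normal(size=50_000).astype(np.float16)

dot_fp16 = np.sum(a * b, dtype=np.float16)                       # low-precision accumulate
dot_fp32 = np.sum(a.astype(np.float32) * b.astype(np.float32))   # high-precision accumulate

print(f"fp16 accumulate: {dot_fp16}")
print(f"fp32 accumulate: {dot_fp32}")
# The two results differ, so a reference model that ignores where the
# pipeline switches precision will flag false mismatches (or miss real ones).
```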
Shuqing Zhao championed the role of formal verification in AI chip validation, while acknowledging the need for adaptation to handle the probabilistic and approximation-driven nature of AI workloads.
The panel collectively recognized the imperative of adopting a “divide and conquer” strategy to manage the sheer complexity of AI chip verification.
A final, provocative question from DVCon Panel Chair Ambar Sarkar asked the panelists to rate the added difficulty of AI chip verification on a scale of 0 to 100% (0 meaning no harder than a traditional chip, 100% meaning twice as hard), and proposed his own estimate of just 5%. The panelists’ responses varied, reflecting their diverse perspectives. While most leaned toward the higher end of the scale, acknowledging the increased difficulty, Shahriar Seyedhosseini offered a contrasting view. He pointed out that, unlike general-purpose processors, AI workloads are statically compiled, which simplifies fine-tuning and coverage. This, he argued, offsets some of the added complexity, limiting the verification challenge to only 5% more than that of a traditional SoC. He also noted that AI chip verification is, in many ways, more enjoyable.
The panel concluded with a resounding acknowledgment that the increasing complexity of AI accelerators necessitates a fundamental rethinking of hardware verification. The industry must adapt and innovate to ensure the performance and correctness of these critical components in the evolving landscape of AI-driven computing.
Conclusion
DVCon 2025 delivered a comprehensive look at the future of design verification, with AI at the forefront of innovation. As verification engineers navigate new challenges in AI hardware, portable stimulus, and hardware-assisted verification, DVCon continues to be the premier platform for knowledge sharing and industry collaboration.
Also Read:
Synopsys Expands Hardware-Assisted Verification Portfolio to Address Growing Chip Complexity
How Synopsys Enables Gen AI on the Edge