Key Takeaways
- Jason Cong highlighted the transformative role of AI and machine learning in semiconductor design during his keynote at DAC 2025, reflecting on over 30 years in the DAC community.
- The evolution from the Intel 386 processor to modern chips like NVIDIA's B200 and Micron's 3D NAND showcases the increasing complexity and speed of integrated circuit design.
- AI/ML is being leveraged for optimization in EDA, particularly in placement and routing, verification, lithography, and analog design, significantly improving processes and outcomes.
- Hybrid approaches that integrate ML with traditional methods are emphasized, allowing for real-time feedback and better decision-making in design flows.
- The future of chip design aims for democratization through multi-agent systems, enabling easier design processes while addressing challenges like data scarcity and model integration.
In a keynote at the 62nd Design Automation Conference (#62DAC) on July 8, 2025, Jason Cong, who holds the Volgenau Chair for Engineering Excellence at UCLA, reflected on more than 30 years in the DAC community, highlighting the transformative role of AI and machine learning (ML) in semiconductor design. Cong, whose first DAC paper, in 1988, addressed two-layer channel routing, contrasted the era of Intel’s 386 processor (275,000 transistors at 1.5 microns) with today’s marvels such as NVIDIA’s B200 (208 billion transistors at 4nm) and Micron’s 3D NAND (5.3 trillion transistors across more than 200 layers). This evolution makes integrated circuits among the most complex man-made objects, yet they are designed in just 12-18 months, compared with decades for projects like the International Space Station.
The design flow begins with system specifications in languages like C++ or SystemC, synthesized to RTL (VHDL/Verilog), then to Boolean equations. Logic synthesis optimizes for area, power, and timing, followed by physical design stages: floorplanning, placement, clock tree synthesis, routing, and sign-off verification. Challenges include exploding complexity—trillions of transistors, 3D stacking, and heterogeneous integration—coupled with power constraints and shrinking timelines. Traditional methods struggle with NP-hard problems like placement and routing, where exhaustive search is infeasible.
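The stages above can be pictured as a staged pipeline from RTL to sign-off. The sketch below uses the stage names from the text; the function, design name, and report fields are purely illustrative placeholders, not any real tool's API:

```python
# Illustrative sketch of the physical-design flow as a staged pipeline.
# Stage names follow the text; everything else is a made-up placeholder.

FLOW_STAGES = [
    "logic_synthesis",       # RTL -> gate-level netlist, optimized for area/power/timing
    "floorplanning",         # block-level arrangement of macros and I/O
    "placement",             # standard-cell placement
    "clock_tree_synthesis",  # balanced clock distribution
    "routing",               # wire the placed netlist
    "signoff_verification",  # final timing/power/DRC checks
]

def run_flow(design: str) -> dict:
    """Run each stage in order, collecting a (fictional) status report."""
    report = {"design": design, "completed": []}
    for stage in FLOW_STAGES:
        # A real flow would invoke synthesis and P&R engines here.
        report["completed"].append(stage)
    report["signed_off"] = report["completed"][-1] == "signoff_verification"
    return report

result = run_flow("cnn_accelerator")
print(result["signed_off"])  # → True
```

Each stage consumes the previous stage's output, which is why late-stage surprises (e.g., routing congestion) are so costly: they force loops back to earlier stages.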
Enter AI/ML as game-changers. The speaker advocated treating EDA problems as data-driven, leveraging ML for optimization. Key applications include:
- Placement and Routing: ML models predict wirelength, congestion, and timing, outperforming heuristics. Techniques like graph neural networks (GNNs) and reinforcement learning (RL) guide macro placement, achieving 20-30% better PPA (power, performance, area). Tools like Google’s Circuit Training use RL for chip floorplanning.
- Verification: ML aids bug detection in RTL, predicting coverage gaps and generating stimuli. Analog verification uses surrogate models for faster simulations, reducing runtime from days to minutes.
- Lithography and Manufacturing: ML corrects optical proximity effects, predicts hotspots, and optimizes masks. Generative models design resolution enhancement techniques, while RL tunes process parameters.
- Analog Design: Traditionally manual, ML automates sizing and layout. Bayesian optimization and generative adversarial networks (GANs) create layouts, with RL fine-tuning for performance.
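The placement predictors mentioned above are typically trained against physical-design cost metrics; a standard one is half-perimeter wirelength (HPWL), the sum over all nets of the half-perimeter of the bounding box of each net's pins. A minimal sketch (the cells, coordinates, and nets are invented for illustration):

```python
def hpwl(placement, nets):
    """Half-perimeter wirelength: for each net, the half-perimeter of the
    bounding box of its cells' (x, y) positions, summed over all nets.
    ML placement models are often trained to predict this metric cheaply."""
    total = 0.0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Toy example: three cells, two two-pin nets (hypothetical data).
placement = {"a": (0.0, 0.0), "b": (3.0, 4.0), "c": (1.0, 1.0)}
nets = [["a", "b"], ["b", "c"]]
print(hpwl(placement, nets))  # (3+4) + (2+3) = 12.0
```

HPWL is a lower bound on actual routed wirelength, which is why a fast learned predictor of it can steer placement long before routing is run.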
The speaker emphasized hybrid approaches: ML augments, not replaces, traditional methods. For instance, in logic synthesis, ML predicts post-synthesis metrics to guide transformations. In physical design, ML-based predictors integrate into flows for real-time feedback.
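One way to read "ML augments, not replaces" is that a cheap learned predictor screens candidate transformations and only the best-ranked one is handed to the expensive traditional tool. The sketch below fits a toy linear predictor in place of a real ML model; the training data, transformation names, and "aggressiveness" feature are all invented for illustration:

```python
# Hedged sketch of a hybrid flow: a learned predictor ranks candidate
# logic-synthesis transformations so the expensive tool runs only once.

def fit_linear(xs, ys):
    """Least-squares fit y ≈ a*x + b, a stand-in for a real ML predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Fictional training data: transformation "aggressiveness" vs. post-synthesis delay.
train_x = [0.0, 1.0, 2.0, 3.0]
train_y = [10.0, 8.0, 6.0, 4.0]
a, b = fit_linear(train_x, train_y)

# Candidate transformations (names borrowed loosely from logic-synthesis jargon).
candidates = {"resubstitution": 0.5, "rewrite": 1.5, "refactor": 2.5}
predicted = {name: a * x + b for name, x in candidates.items()}
best = min(predicted, key=predicted.get)  # lowest predicted delay wins
print(best)  # → refactor
```

The prediction replaces a full synthesis run per candidate, which is the real-time-feedback loop the keynote described: the traditional engine still does the final work, but ML decides where to spend it.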
Looking ahead, the outlook is multi-agent systems combining human and machine intelligence. The goal: enable software programmers to design chips as easily as writing PyTorch libraries. Undergraduate courses already demonstrate this, with students building CNN accelerators on AWS F1 cloud instances using high-level synthesis.
Challenges remain: data scarcity, model generalization, and integration into existing tools. The speaker stressed deep problem understanding over superficial AI applications, urging cross-disciplinary collaboration. This work, funded by NSF and PRISM centers, shows how partnerships with domain experts yield innovations in ML for EDA.
In conclusion, AI/ML is revolutionizing chip design, addressing complexity and accelerating innovation. As Moore’s Law evolves into “More than Moore” with 3D and heterogeneous systems, this DAC community-driven synergy promises a future where chip design is democratized, efficient, and impactful.