Key Takeaways
- Siemens EDA and Nvidia highlighted the transformative impact of AI on semiconductor and PCB design during their presentation at the DACtv event.
- The newly launched Siemens EDA AI system integrates advanced AI technologies to enhance designer productivity and address challenges in chip design, including complexity, talent shortages, and rising costs.
- The partnership aims to redefine chip design efficiency by combining AI-driven tools with flexible, open platforms that support third-party integrations and custom workflows.
On July 18, 2025, Siemens EDA and Nvidia presented a compelling vision for the future of electronic design automation (EDA) at a DACtv event, emphasizing the transformative role of artificial intelligence (AI) in semiconductor and PCB design. Amit Gupta, Vice President and General Manager at Siemens EDA, and John Lynford, head of Nvidia’s CAE and EDA product team, outlined how their partnership leverages AI to address the escalating complexity of chip design, shrinking talent pools, and rising development costs, as captured in the session’s YouTube recording.
Gupta opened by contextualizing the semiconductor industry’s evolution, noting its growth from a $100 billion market in the 1990s to a projected $1 trillion by 2030, driven by AI, IoT, and mobile revolutions. He highlighted the challenges of designing advanced chips at 2nm nodes, where complexity in validation, software, and hardware has surged, while the talent base dwindles and costs soar. Siemens EDA’s response is its newly announced Siemens EDA AI system, a comprehensive platform integrating machine learning (ML), reinforcement learning (RL), generative AI, and agentic AI across tools like Calibre, Tessent, Questa, and Expedition. This system, launched that morning, aims to boost designer productivity by enabling natural language interactions, reducing simulation times, and lowering barriers for junior engineers.
The Siemens EDA AI system is purpose-built for industrial-grade applications, emphasizing verifiability, usability, generality, robustness, and accuracy—qualities absent in consumer AI models prone to hallucinations. For instance, Gupta demonstrated how consumer AI fails at domain-specific tasks like compiling EDA files in Questa, underscoring the need for specialized solutions. The system supports multimodal inputs like RTL, Verilog, netlists, and GDSII, and integrates with vector databases for retrieval-augmented generation, ensuring precise, sign-off-quality results. It also allows customers to fine-tune large language models (LLMs) on-premises, prioritizing data security and compatibility with diverse hardware, including Nvidia GPUs.
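For readers unfamiliar with the pattern, the sketch below is a minimal, generic illustration of retrieval-augmented generation: reference documents are embedded, the best match for a query is retrieved, and the retrieved text grounds the prompt handed to the language model. The tool snippets and the TF-IDF embedding are invented stand-ins for illustration only; the Siemens system uses its own vector database and domain-tuned models rather than anything shown here.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern described above,
# NOT the Siemens EDA AI system itself. The "documents" are invented stand-ins for
# tool manuals; a production system would use a vector database and neural embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Questa: use vlog to compile Verilog/SystemVerilog sources before simulation.",
    "Calibre: DRC runs are driven by a rule deck supplied with the foundry PDK.",
    "Tessent: insert scan chains and generate ATPG patterns for manufacturing test.",
]

query = "How do I compile my SystemVerilog files for simulation?"

# 1. Embed documents and the query (TF-IDF here; real systems use learned embeddings).
vectorizer = TfidfVectorizer().fit(documents)
doc_vecs = vectorizer.transform(documents)
query_vec = vectorizer.transform([query])

# 2. Retrieve the most relevant document.
scores = cosine_similarity(query_vec, doc_vecs)[0]
best_doc = documents[scores.argmax()]

# 3. Ground the LLM prompt in the retrieved text so answers stay tied to tool reality.
prompt = f"Answer using only this reference:\n{best_doc}\n\nQuestion: {query}"
print(prompt)  # the grounded prompt would then be sent to a domain-tuned LLM
```

Grounding the prompt in retrieved, tool-specific text is what separates this pattern from the consumer-AI failure Gupta demonstrated, where the model simply guesses at EDA commands.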
Nvidia’s contribution, as Lynford explained, centers on its hardware and software ecosystem, tailored for accelerated computing. Nvidia uses Siemens EDA tools internally to design its AI chips, creating a feedback loop in which its GPUs in turn power Siemens’ AI solutions. The CUDA-X libraries, with over 450 domain-specific tools, accelerate tasks such as computational lithography (cuLitho), sparse matrix operations (cuSPARSE), fast Fourier transforms (cuFFT), and physical simulation (Warp). cuLitho, for example, is widely adopted in production, while Warp enables differentiable simulations for TCAD workflows at companies like TSMC. Nvidia’s NeMo framework supports enterprise-grade generative and agentic AI, with features like data curation, training, and guardrails to ensure accuracy and privacy. Nvidia Inference Microservices (NIMs) further simplify deployment, offering containerized AI models for cloud or on-premises use, integrated into Siemens’ platform.
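As a concrete, if simplified, illustration of the GPU-accelerated sparse linear algebra that CUDA-X exposes, the snippet below solves a large sparse system with CuPy, whose sparse routines are backed by cuSPARSE. The tridiagonal matrix is a toy stand-in for the matrices that circuit and thermal solvers assemble; nothing here is taken from a Siemens or Nvidia tool.

```python
# Illustrative only: a GPU-resident sparse solve of the kind that dominates circuit
# and thermal simulation, using CuPy (which dispatches to cuSPARSE/cuBLAS under the
# hood). The 1-D Laplacian below is a toy stand-in, not an EDA netlist matrix.
import cupy as cp
from cupyx.scipy.sparse import diags
from cupyx.scipy.sparse.linalg import cg

n = 100_000
main = cp.full(n, 2.0)
off = cp.full(n - 1, -1.0)
# Tridiagonal symmetric positive-definite system in CSR format.
A = diags([off, main, off], offsets=[-1, 0, 1], format="csr")

b = cp.ones(n)
x, info = cg(A, b, tol=1e-8)  # conjugate gradient runs entirely on the GPU

residual = float(cp.linalg.norm(A @ x - b))
print("converged" if info == 0 else f"cg stopped with code {info}", residual)
```

The same pattern, swapping in cuFFT for spectral steps or Warp kernels for custom physics, is how domain libraries let EDA workloads exploit the GPU without hand-written CUDA.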
A standout example of their collaboration is AI physics, where Nvidia’s PhysicsNeMo framework accelerates simulations for digital twins, such as optimizing wind farm designs at Siemens Energy with a 65x reduction in cost and 60x less energy than traditional methods. In EDA, this translates to faster thermal, flow, and electrothermal simulations for chip and data center design. The partnership also explores integrating foundry PDKs into LLMs, enabling easier access to design rules and automating tasks like DRC deck creation, as discussed in response to audience questions about foundry collaboration and circuit analysis.
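The AI-physics idea can be pictured with a deliberately tiny sketch: train a neural surrogate on precomputed simulation samples, then evaluate new design points at negligible cost. The synthetic data and plain PyTorch model below are assumptions for illustration; PhysicsNeMo trains far richer physics-informed models on real solver output, and the speedups quoted above come from that scale of modeling, not from this toy.

```python
# Conceptual sketch only: a tiny neural surrogate trained on synthetic "simulation"
# samples. This is the general AI-physics idea, not the Siemens/Nvidia implementation.
import torch
import torch.nn as nn

# Synthetic data: 4 design parameters -> 1 scalar response (e.g., peak temperature).
# In practice these pairs would come from a physics solver, not a formula.
X = torch.rand(2048, 4)
y = (X ** 2).sum(dim=1, keepdim=True) + 0.1 * torch.sin(10 * X[:, :1])

surrogate = nn.Sequential(
    nn.Linear(4, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(surrogate(X), y)
    loss.backward()
    opt.step()

# Once trained, the surrogate evaluates new design points in microseconds,
# which is where the orders-of-magnitude cost and energy savings originate.
candidate = torch.rand(1, 4)
print(surrogate(candidate))
```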
The Siemens EDA AI system and Nvidia’s infrastructure promise a “multiplicative” productivity boost, combining front-end natural language interfaces with backend ML/RL optimizations. This open platform supports third-party tools and custom workflows, ensuring flexibility. While specific licensing details were not disclosed, Gupta invited attendees to explore demos at Siemens’ booth, highlighting practical applications like script generation, testbench acceleration, and error log analysis. This collaboration signals a paradigm shift toward AI-driven EDA, positioning Siemens and Nvidia to redefine chip design efficiency and accessibility.
Also Read:
Insider Opinions on AI in EDA. Accellera Panel at DAC
Caspia Focuses Security Requirements at DAC
Building Trust in Generative AI