
IBM Cloud: Enabling World-Class EDA Workflows
by Admin on 08-02-2025 at 1:00 pm


On July 9, 2025, Derren Dunn of IBM Research’s T.J. Watson Research Center delivered a DACtv presentation, available on YouTube, detailing IBM’s EDA-as-a-Solution platform. The offering leverages IBM’s high-performance computing (HPC) cloud to deliver hybrid and cloud-only infrastructure for electronic design automation (EDA) workflows, addressing the semiconductor industry’s escalating computational demands. By integrating AI, machine learning (ML), and infrastructure-as-code (IaC) practices, IBM supports dynamic chip design and foundry-facing processes with scalability, security, and efficiency.

Dunn introduced EDA-as-a-Solution as a value-added layer atop IBM’s HPC cloud, designed to handle compute-intensive EDA tasks like simulation, verification, and physical design. The platform supports both hybrid cloud (integrating on-premises and cloud resources) and cloud-native environments, offering flexibility for diverse design needs. Key components include IaC tools like Terraform and Ansible, customized kernel and service parameter settings, and vendor-specific configurations. IBM’s proprietary design flows incorporate AI and ML, enhancing tasks such as design space exploration and optimization. This infrastructure is built on robust foundations, including IBM’s LSF (Load Sharing Facility) for job scheduling, Spectrum Scale for storage, and Aspera for high-speed data transfers, all fortified by enterprise-grade security for data protection.
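To make the scheduling layer concrete, here is a minimal sketch of how a compute-intensive EDA task might be dispatched to an LSF queue from Python. The queue name, resource values, and simulator command are illustrative placeholders, not IBM-documented defaults.

```python
import subprocess

def submit_eda_job(cmd, cores=16, mem_mb=32000, queue="eda_batch"):
    """Submit an EDA batch task (e.g., a regression simulation) to an
    LSF cluster via bsub. Queue name and resource values are
    placeholders for a site-specific configuration."""
    bsub = [
        "bsub",
        "-q", queue,                    # target LSF queue
        "-n", str(cores),               # number of job slots
        "-R", f"rusage[mem={mem_mb}]",  # memory reservation per host
        "-o", "run.%J.log",             # job log file (%J = LSF job ID)
        cmd,
    ]
    result = subprocess.run(bsub, capture_output=True, text=True, check=True)
    print(result.stdout.strip())        # e.g. "Job <12345> is submitted ..."

if __name__ == "__main__":
    # Hypothetical simulator invocation, just for illustration
    submit_eda_job("vsim -batch -do run_regression.do")
```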

The platform’s architecture ensures seamless integration across client data centers, on-premises systems, and IBM’s cloud. A critical feature is its ability to create Active File Management (AFM) caches from any NFS-exportable mount, addressing an audience question about storage compatibility. Dunn clarified that IBM’s solution works with non-IBM storage systems like NetApp filers or kernel-based NFS servers, enabling efficient data access without costly migrations. For object storage, IBM supports scenarios where it serves as the master data repository, ensuring flexibility for varied enterprise setups. This eliminates data transfer bottlenecks, a common pain point in EDA workflows, by allowing seamless bursting to the cloud as if submitting to a local queue.
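As a rough illustration of the AFM setup Dunn described, the sketch below issues the two Spectrum Scale commands that create and link a read-only cache fileset backed by an NFS export. The filesystem, filer, and path names are hypothetical; exact options should be checked against the Spectrum Scale documentation.

```python
import subprocess

# Hypothetical names: Spectrum Scale filesystem "fs1", a NetApp export
# nfs://filer01/vol/pdk, and a cache fileset "pdk_cache".
commands = [
    # Create an AFM read-only cache fileset backed by the NFS export
    ["mmcrfileset", "fs1", "pdk_cache",
     "-p", "afmMode=ro", "-p", "afmTarget=nfs://filer01/vol/pdk",
     "--inode-space", "new"],
    # Link the fileset into the namespace so jobs see a local path
    ["mmlinkfileset", "fs1", "pdk_cache", "-J", "/gpfs/fs1/pdk"],
]
for cmd in commands:
    subprocess.run(cmd, check=True)
```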

AI and ML integration is central to IBM’s offering. By embedding these technologies into design flows, the platform optimizes tasks like timing analysis, power estimation, and layout generation, reducing design cycles. For foundry-facing workflows, IBM’s cloud supports process design kit (PDK) integration and tape-out preparation, ensuring compatibility with major foundries like TSMC and GlobalFoundries. The platform’s linear scaling capabilities, achieved through LSF or Kubernetes-based workflows, cater to both traditional and cloud-native EDA environments, positioning IBM as a forward-looking solution for future-proofing chip design.

Security and scalability are paramount, especially for sensitive IP in semiconductor design. IBM’s high-security data protection chambers safeguard proprietary data, critical for industries like automotive and defense. Dunn emphasized the platform’s ability to handle compute-intensive workloads, addressing the industry’s challenge of insufficient compute resources. By bursting to the public cloud, companies can prioritize design projects without hardware constraints, a significant advantage in an era of trillion-gate SoCs and AI accelerators.

The presentation underscored IBM’s strategic vision: providing a scalable, secure, and AI-enhanced EDA platform that adapts to the semiconductor industry’s evolving needs. By supporting hybrid and cloud-native workflows, IBM ensures designers can leverage existing infrastructure while scaling seamlessly. The session concluded with Dunn inviting attendees to explore IBM’s offerings and join them at DAC 2026 in Long Beach, signaling confidence in their role in driving the next wave of chip design innovation.

Also Read:

AI Infrastructure: Silicon Innovation in the New Gold Rush

Large Language Models: A New Frontier for SoC Security on DACtv

AI and VLSI: A Symbiotic Revolution at DAC 2025


AI-Driven Chip Design: Navigating the Future
by Admin on 08-02-2025 at 1:00 pm


On July 9, 2025, a DACtv session by Dr. Peter Levin, available on YouTube, explored the transformative impact of artificial intelligence (AI) on chip design. Levin examined how AI is reshaping electronic design automation (EDA), addressing the escalating complexity of modern chips and the need for innovative tools to keep pace with market demands. The talk highlighted AI’s role in optimizing design workflows, enhancing productivity, and tackling challenges like power efficiency and time-to-market in the semiconductor industry.

The semiconductor landscape is undergoing a seismic shift, driven by AI’s integration into design processes. With chip complexity soaring—modern systems-on-chip (SoCs) now encompass billions of transistors—traditional EDA tools struggle to keep up. The speaker emphasized that AI, particularly machine learning (ML) and large language models (LLMs), is revolutionizing tasks like synthesis, placement, routing, and verification. For instance, AI-driven tools can predict optimal circuit layouts, reducing iterations and accelerating design cycles by up to 30%. This is critical as time-to-market pressures intensify, with the global semiconductor market projected to reach $1 trillion by 2030, fueled by AI, IoT, and 5G.

A key focus was AI’s ability to enhance productivity. By automating repetitive tasks, such as generating testbenches or optimizing power consumption, AI frees engineers to focus on creative problem-solving. The speaker highlighted tools like reinforcement learning-based optimizers, which iteratively improve chip designs by learning from simulation data, achieving up to 15% better power-performance-area (PPA) metrics compared to manual methods. These advancements are vital for AI accelerators, where high computational throughput and energy efficiency are paramount, as seen in chips powering generative AI models.
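The talk did not disclose specific algorithms, but the core loop of such optimizers can be sketched in a few lines: an agent repeatedly proposes placement parameters, observes a simulated PPA cost, and shifts its choices toward settings that score well. The toy cost model below stands in for a real placement-and-analysis run; nothing here reflects a particular vendor tool.

```python
import random

def simulate_ppa(utilization, aspect_ratio):
    """Toy stand-in for a simulator: returns a PPA cost for a parameter
    choice. A real flow would run placement plus timing/power analysis."""
    return (utilization - 0.72) ** 2 + (aspect_ratio - 1.1) ** 2 \
        + random.gauss(0, 0.01)

params = [(u / 100, a / 10) for u in range(50, 95, 5) for a in range(8, 15)]
value = {p: 0.0 for p in params}    # learned estimate of -cost per setting
counts = {p: 0 for p in params}

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best-known setting, sometimes explore
    if random.random() < 0.1:
        p = random.choice(params)
    else:
        p = max(params, key=lambda q: value[q])
    reward = -simulate_ppa(*p)
    counts[p] += 1
    value[p] += (reward - value[p]) / counts[p]   # incremental mean update

best = max(params, key=lambda q: value[q])
print(f"learned best setting: utilization={best[0]:.2f}, aspect={best[1]:.1f}")
```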

The talk also addressed challenges in adopting AI-driven EDA. Legacy workflows, often reliant on outdated tools like Perl scripts or manual verification, hinder scalability. The speaker advocated for modernizing IT infrastructure to support AI tools, citing the need for cloud-based platforms with scalable compute resources. IBM’s EDA-as-a-Solution and similar platforms were referenced as examples, enabling hybrid cloud workflows that integrate on-premises and cloud resources for seamless bursting during peak design phases. This approach mitigates compute shortages, a bottleneck as AI workloads demand massive parallelism.

Sustainability emerged as a critical theme. AI data centers consume gigawatts, raising environmental concerns. The speaker stressed designing energy-efficient chips, leveraging AI to optimize power at the architecture level. For example, AI-driven thermal modeling can reduce hotspot issues in 3D chiplet designs, improving reliability and cutting energy use by 10-20%. Collaborative efforts with foundries to integrate process design kits (PDKs) into AI workflows ensure manufacturability, aligning with sustainability goals by minimizing waste during fabrication.

In response to audience queries, the speaker addressed integrating AI into academic research, suggesting federated learning to overcome limited dataset access. This approach allows universities to train models locally while aggregating insights globally, protecting intellectual property. The session underscored the need for industry-academia partnerships to cultivate talent, echoing initiatives like NYDesign’s experiential learning programs to inspire young engineers.
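Federated averaging, the canonical algorithm behind this idea, is simple to sketch: each site trains on its private data and ships only model weights, which a coordinator averages. A minimal illustration with a linear model (all data and parameters synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y, w, lr=0.1, epochs=20):
    """One university's local update: plain gradient descent on a linear
    model. Raw data never leaves the site; only weights are shared."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three sites with private datasets drawn from the same underlying model
true_w = np.array([1.5, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

global_w = np.zeros(2)
for round_ in range(10):
    # Each site trains locally from the current global model ...
    local_ws = [local_train(X, y, global_w.copy()) for X, y in sites]
    # ... and the coordinator aggregates by equal-weight averaging (FedAvg)
    global_w = np.mean(local_ws, axis=0)

print("aggregated weights:", global_w)   # approaches [1.5, -2.0]
```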

The presentation concluded with a forward-looking vision: AI is not just a tool but a catalyst for redefining chip design. By enhancing EDA with AI, the industry can tackle complexity, improve efficiency, and drive sustainable innovation. The speaker urged attendees to embrace this paradigm shift, leveraging the “brain power” at DAC to shape a future where AI and semiconductors symbiotically advance technology.

Also Read:

IBM Cloud: Enabling World-Class EDA Workflows

AI Infrastructure: Silicon Innovation in the New Gold Rush

Large Language Models: A New Frontier for SoC Security on DACtv


AI Infrastructure: Silicon Innovation in the New Gold Rush
by Admin on 08-02-2025 at 12:00 pm

On July 18, 2025, Jeff Wittich, Chief Product Officer at Ampere Computing, delivered a compelling DACtv presentation, available on YouTube, likening the current AI boom to the 1848 California Gold Rush. Speaking in San Francisco, just 100 miles from Sutter’s Mill, Wittich drew parallels between the historical rush that transformed the region and today’s AI-driven technological revolution, emphasizing the need for silicon innovation to sustain this growth while addressing environmental and societal impacts.

Wittich began with the Gold Rush analogy, noting how it created winners like Samuel Brannan, who amassed wealth selling supplies, and Levi Strauss, whose jeans empire endures. San Francisco’s population surged 25x in a year, leading to California’s statehood in 1850. However, the rush also had losers: many miners left penniless, indigenous populations suffered, and environmental devastation from mercury and hydraulic mining scarred the landscape. Similarly, today’s AI boom offers immense opportunities but risks repeating past mistakes if not managed responsibly.

The AI landscape is evolving rapidly, driven by generative AI, multimodal large language models (LLMs), and agentic AI systems that reason and act autonomously. These advancements demand specialized silicon AI accelerators like GPUs, TPUs, and custom ASICs to handle massive computational loads. Wittich highlighted the shift from general-purpose CPUs to domain-specific architectures, critical for scaling AI applications in data centers and edge devices. For instance, training LLMs requires thousands of GPUs, while inference at the edge demands low-power, high-efficiency chips.

From a silicon perspective, Wittich outlined key challenges. AI systems are massively parallel and non-deterministic, complicating traditional modeling approaches. Race conditions, deadlocks, and livelocks are harder to detect due to non-linear responses to system changes. EDA tools must evolve to model these complex, parallel architectures effectively, incorporating advanced verification techniques to ensure reliability. Additionally, power efficiency is paramount, as AI data centers consume gigawatts, rivaling small cities. Innovations like chiplet-based designs and advanced packaging (e.g., 2.5D/3D integration) are crucial for optimizing performance and reducing energy footprints.

Sustainability is a pressing concern. The Gold Rush’s environmental toll—deforestation, mercury pollution—mirrors AI’s potential to exacerbate climate issues through energy-intensive computing. Wittich urged the industry to prioritize efficient silicon designs, leveraging AI to optimize power usage in chips for data centers and edge applications. For example, low-power MCUs for IoT devices can reduce data transmission to the cloud, cutting energy costs. Collaborative efforts among EDA vendors, foundries, and chip designers are essential to build sustainable infrastructure that mitigates long-term environmental impacts.

The financial investment in AI is unprecedented, with billions poured into startups and infrastructure. Wittich encouraged leveraging this capital to advance EDA tools and silicon design methodologies, ensuring they keep pace with AI’s rapid evolution. The innovations developed today, he noted, will shape society for decades, much like the Gold Rush’s lasting impact on San Francisco’s landscape and economy. Responsible design—balancing performance, cost, and sustainability—is critical to avoid the pitfalls of past technological booms.

Wittich concluded optimistically, confident in the industry’s brainpower to navigate this “new golden era of AI.” By fostering innovation in silicon design, from advanced verification to energy-efficient architectures, the semiconductor industry can drive AI’s transformative potential while minimizing its downsides. The session underscored the urgency of adapting EDA tools and silicon strategies to meet AI’s demands, ensuring a sustainable, impactful future for technology and society.

Also Read:

Large Language Models: A New Frontier for SoC Security on DACtv

AI and VLSI: A Symbiotic Revolution at DAC 2025

AI’s Transformative Role in Semiconductor Design and Sustainability


Large Language Models: A New Frontier for SoC Security on DACtv
by Admin on 08-02-2025 at 11:00 am

On July 18, 2025, Mark Tehranipoor, chair of the Electrical and Computer Engineering Department at the University of Florida and co-founder of Caspia Technologies, delivered a compelling DACtv talk, available on YouTube, on leveraging large language models (LLMs) for system-on-chip (SoC) security. Addressing the growing complexity of modern chip designs, Tehranipoor highlighted how LLMs are transforming security verification, a critical yet often overlooked aspect of semiconductor design, amidst a verification market exceeding $2.2 billion.

Tehranipoor emphasized that while functional correctness and power, performance, and area optimization dominate verification efforts, security remains underaddressed. The intricate interplay of intellectual property (IP) blocks in SoCs introduces vulnerabilities, exemplified by high-profile incidents like the Meltdown and Spectre attacks, which exploited Intel’s out-of-order execution and triggered a 7-8% drop in the company’s stock in a single day. Similarly, the GhostWrite attack on Alibaba’s T-Head processors exposed faulty RTL instructions, enabling unauthorized memory writes. These incidents underscore the urgency of integrating security into verification flows, as traditional functional verification tools like SpyGlass may detect issues but lack specificity for security violations.

Caspia Technologies’ solution, Codax, leverages LLMs to address this gap. Unlike conventional tools, Codax identifies confidentiality and integrity violations with 100% accuracy, eliminating the tedious manual analysis required by tools like SpyGlass. By focusing on security-specific linting, Codax ensures precise detection of vulnerabilities, such as those enabling data leaks or unauthorized access. Tehranipoor noted that seven of the top ten semiconductor companies, alongside automotive, data center, and military prime contractors, have adopted Codax, reflecting its broad applicability and industry trust.

Responding to an audience question about detecting hardware Trojans—malicious circuits causing confidentiality or integrity breaches—Tehranipoor explained that Codax excels by targeting these violations directly, whether introduced intentionally or unintentionally. The solution’s ability to pinpoint such threats is critical, as Trojans can compromise chip integrity, especially in sensitive applications like defense or automotive systems. Another question addressed the challenge of applying LLMs to large SoC designs, given their limited input token capacity. Tehranipoor acknowledged this constraint, explaining that Caspia employs a divide-and-conquer strategy, breaking down designs into manageable segments for analysis. This approach, combined with open-source LLMs, ensures scalability while maintaining accuracy, though token limitations remain a hurdle.
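A divide-and-conquer pass of this kind might look like the sketch below, which splits an RTL file at module boundaries and batches modules under a token budget before querying the model. The whitespace-based token estimate and the analyze() stub are simplifications for illustration, not Caspia’s implementation.

```python
import re

def split_rtl_by_module(rtl_text, max_tokens=4000):
    """Split RTL source into per-module chunks that fit an LLM context
    window. The whitespace count is a crude proxy for tokens; a real
    flow would use the model's own tokenizer."""
    modules = re.findall(r"module\b.*?\bendmodule", rtl_text, flags=re.S)
    chunks, current, size = [], [], 0
    for mod in modules:
        n = len(mod.split())          # rough token estimate
        if current and size + n > max_tokens:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(mod)
        size += n
    if current:
        chunks.append("\n".join(current))
    return chunks

def analyze(chunk):
    """Placeholder for an LLM security query over one chunk."""
    return f"[would send ~{len(chunk.split())} tokens to the LLM]"

rtl = "module a(input clk); endmodule\nmodule b(output x); assign x = 1; endmodule"
for chunk in split_rtl_by_module(rtl):
    print(analyze(chunk))
```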

The integration of LLMs into security verification offers a paradigm shift. By processing vast amounts of design data, including RTL and netlists, LLMs can identify patterns indicative of vulnerabilities that traditional tools miss. This is particularly vital as SoC complexity grows, with billions of gates and diverse IPs increasing attack surfaces. Tehranipoor stressed that the industry’s slow adoption of security-focused verification—despite its critical importance—must change, as vulnerabilities can lead to catastrophic financial and reputational losses, as seen with Intel’s market cap hit.

Caspia’s approach also aligns with broader industry trends, where AI-driven tools are enhancing design and verification. By automating security checks, Codax reduces human error and accelerates time-to-market, crucial in competitive sectors like AI accelerators and automotive chips. Tehranipoor’s vision positions LLMs as a cornerstone for future-proofing SoC security, urging companies to prioritize it alongside functional verification.

The session concluded with a call to action, emphasizing that security verification is no longer optional. With Caspia’s Codax gaining traction across major industries, Tehranipoor’s presentation highlighted a pivotal moment for semiconductor security, where LLMs offer a robust, scalable solution to safeguard increasingly complex SoCs against evolving threats.

Also Read:

AI and VLSI: A Symbiotic Revolution at DAC 2025

AI’s Transformative Role in Semiconductor Design and Sustainability

From Atoms to Tokens: Semiconductor Supply Chain Evolution


Chip Agent: Revolutionizing Chip Design with Agentic AI
by Admin on 08-02-2025 at 10:00 am


On July 18, 2025, ChipAgents AI, a Santa Barbara-based startup, showcased its agentic AI platform for chip design and verification at a DACtv session. Led by CEO William Wang, a CMU PhD and UC Santa Barbara professor, Chip Agent is redefining electronic design automation (EDA) by leveraging advanced AI agents to tackle the escalating complexity of modern chip design, from millions to trillions of gates. Wang, alongside head of research Koshin and head of engineering Meheer Aurora, presented how their technology addresses critical industry challenges, delivering significant productivity gains and accuracy improvements.

Wang began by highlighting the semiconductor industry’s struggles with soaring design complexity, project delays (up from 60% to 75% last year), and declining first-time silicon success rates (from 20% to 10%). Traditional EDA tools, designed decades ago, struggle to scale for today’s billion-gate designs, exacerbated by communication gaps between design and verification teams. Chip Agent’s solution is a suite of specialized AI agents that streamline the entire design-to-verification flow, from datasheet analysis to RTL code generation, testbench creation, and debugging. Deployed at top-10 semiconductor companies, their platform has already caught bugs in production chips, saving millions and demonstrating real-world impact.

Chip Agent’s AI agents, built on state-of-the-art large language models (LLMs), excel in domain-specific tasks. Unlike general-purpose AI, which may hallucinate on technical queries, the agents are fine-tuned for EDA, achieving 97-99% accuracy on Nvidia’s VerilogEval benchmark for specification-to-RTL tasks and leading results on the SWE-bench benchmark for Python-based high-level synthesis. Their SWE-search algorithm, which combines Monte Carlo tree search with iterative refinement, was presented at the International Conference on Learning Representations in 2025, showcasing the team’s research prowess. These agents support tasks like generating SystemVerilog assertions, UVM testbenches, and readable finite-state-machine documentation, reducing development time by up to 80%.
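The SWE-search internals are proprietary, but the Monte Carlo tree search at its core is a standard algorithm. The sketch below runs generic MCTS with UCB1 selection on a toy sequential decision problem (recovering a hidden bit sequence), with a random rollout standing in for a real evaluator; nothing here is ChipAgents-specific.

```python
import math, random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # hidden "best design"; toy objective
DEPTH = len(TARGET)

class Node:
    def __init__(self, prefix):
        self.prefix = prefix          # partial decision sequence
        self.children = {}            # bit -> Node
        self.visits = 0
        self.value = 0.0              # mean rollout reward

def rollout(prefix):
    """Finish the sequence randomly and score it (fraction of matching
    bits). Stands in for running a testbench or other evaluator."""
    seq = prefix + [random.randint(0, 1) for _ in range(DEPTH - len(prefix))]
    return sum(a == b for a, b in zip(seq, TARGET)) / DEPTH

def select(node):
    """UCB1: balance exploiting good branches with exploring rare ones."""
    return max(node.children.values(),
               key=lambda c: c.value + 1.4 * math.sqrt(
                   math.log(node.visits) / (c.visits + 1e-9)))

root = Node([])
for _ in range(3000):
    node, path = root, [root]
    # Selection: descend while the node is fully expanded
    while len(node.children) == 2 and len(node.prefix) < DEPTH:
        node = select(node)
        path.append(node)
    # Expansion: add one unexplored child if not at full depth
    if len(node.prefix) < DEPTH:
        bit = random.choice([b for b in (0, 1) if b not in node.children])
        node.children[bit] = Node(node.prefix + [bit])
        node = node.children[bit]
        path.append(node)
    # Simulation + backpropagation of the rollout reward
    reward = rollout(node.prefix)
    for n in path:
        n.visits += 1
        n.value += (reward - n.value) / n.visits

# Greedy readout of the most-visited path
node, best = root, []
while node.children:
    bit, node = max(node.children.items(), key=lambda kv: kv[1].visits)
    best.append(bit)
print("best sequence found:", best, "target:", TARGET)
```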

Koshin explained the technical foundation, noting that LLMs are next-word predictors trained on vast datasets, including Verilog code, enabling them to handle EDA-specific tasks like predicting correct code syntax or answering domain-specific queries. Agentic AI enhances this by incorporating feedback loops, allowing agents to learn from user inputs and past results, unlike static traditional tools. This adaptability ensures continuous improvement, critical for handling dynamic design changes. For instance, Chip Agent can analyze updated design specs, identify necessary codebase modifications, and implement them rapidly, often completing the process end-to-end for smaller changes.
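In pseudocode terms, that feedback loop might look like the following sketch: generate code, run a compile check, and feed any diagnostics back into the next prompt. The llm_complete() function is a hypothetical placeholder, and iverilog serves only as a stand-in compile check; this is not ChipAgents’ actual flow.

```python
import os, subprocess, tempfile

def llm_complete(prompt):
    """Hypothetical placeholder for a fine-tuned code-LLM call; a real
    agent would query a hosted model here. Returns a canned module so
    the sketch is self-contained."""
    return ("module counter(input clk, input rst, output reg [3:0] q);\n"
            "  always @(posedge clk) q <= rst ? 4'd0 : q + 1;\n"
            "endmodule\n")

def agentic_codegen(spec, max_iters=5):
    """Generate HDL from a spec, compile it, and feed tool errors back to
    the model until it compiles: the feedback loop that distinguishes
    agentic flows from one-shot generation."""
    prompt = f"Write synthesizable Verilog for: {spec}"
    for _ in range(max_iters):
        code = llm_complete(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".v", delete=False) as f:
            f.write(code)
            path = f.name
        # iverilog used as a stand-in lint/compile check
        result = subprocess.run(["iverilog", "-o", os.devnull, path],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return code               # compiles: accept
        # Otherwise append the tool diagnostics and retry
        prompt += f"\nPrevious attempt failed to compile:\n{result.stderr}\nFix it."
    raise RuntimeError("no compilable design within budget")
```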

The live demo by Aurora showcased Chip Agent’s ability to generate thousands of lines of high-quality, compilable SystemVerilog code in minutes, a feat enabled by their “self-stabilization” process within the agentic loop. This ensures robustness against design changes, a key advantage for complex system-on-chip (SoC) integration. The platform supports multimodal inputs, including block diagrams in formats like MermaidJS, facilitating hierarchical design tasks. While currently focused on front-end design up to synthesis, Wang noted interest in back-end tasks like placement and routing, drawing parallels to their success in generating CUDA kernels competitive with human experts.

Addressing audience questions, Wang detailed their infrastructure, running on elastic clusters of Nvidia H200 GPUs via AWS, offering flexibility through software-as-a-service, single-tenant, or bespoke deployments. This cloud-centric approach keeps pace with rapid hardware advancements, avoiding obsolescence. On future directions, Wang emphasized improving agentic workflows with “light informal proofs” to simplify code review, a novel research area to enhance trust in AI-generated outputs. Chip Agent’s open-source compatibility and focus on correctness ensure broad applicability, positioning them to scale toward subsystem-level designs with complex protocols like CMN. With a Series A-funded team of 10-15, Chip Agent invites collaboration at their DAC booth, signaling a bold vision for fully agentic tape-out processes in the coming years.

Also Read:

Siemens EDA and Nvidia: Pioneering AI-Driven Chip Design

Insider Opinions on AI in EDA. Accellera Panel at DAC

Caspia Focuses Security Requirements at DAC


AI and VLSI: A Symbiotic Revolution at DAC 2025
by Admin on 08-02-2025 at 9:00 am

On July 18, 2025, a DACtv panel discussion titled “AI and VLSI: A Symbiotic Revolution” explored the transformative interplay between artificial intelligence (AI) and very-large-scale integration (VLSI) design. Moderated by Ramune Nagisetty of Natcast, the panel featured Arijit Raychowdhury (Georgia Tech), Dr. Rob Aitken (National Advanced Packaging Manufacturing Program), Sydney Tsai (IBM Research), Manoj Selva (Intel), and Priya Panda (Yale University). The session delved into how AI is reshaping VLSI design, from circuit optimization to manufacturing, while addressing challenges like outdated tools and sustainability.

Arijit Raychowdhury, a professor and chair at Georgia Tech, highlighted his experience in circuit design, having worked at Intel and Texas Instruments. His research leverages EDA tools for digital and mixed-signal circuits, emphasizing AI’s role in enhancing design efficiency. Rob Aitken, with a background at Synopsys, ARM Research, and HP’s internal EDA, discussed advanced packaging’s role in VLSI, noting its importance for AI accelerators. Sydney Tsai from IBM Research detailed their dual focus: designing AI accelerators (e.g., systolic arrays, in-memory computing) and applying AI to manufacturing (e.g., defect analysis) and large language models (LLMs) for design, such as an “Ask EDA” chatbot.

The panel underscored the symbiotic relationship between AI and VLSI. AI optimizes VLSI design by automating tasks like placement, routing, and verification, reducing design cycles for complex chips. Conversely, VLSI advancements enable specialized AI hardware, such as accelerators, to handle growing computational demands. For instance, IBM’s work on near-memory computing addresses AI’s data-intensive needs, while their LLM-driven tools streamline design workflows, leveraging data from IBM’s Albany Research Center for defect root cause analysis.

A key discussion point was the semiconductor industry’s lag in adopting modern AI tools due to outdated IT infrastructure. An audience member questioned the reliance on legacy tools like Notepad++ and VNC, which are incompatible with advanced AI platforms like Copilot or Cursor, particularly for SystemVerilog. Manoj emphasized the need for a return-on-investment (ROI) approach, combining incentives like code introspection, generation, and automated testing with mandates to shift to modern IDEs. He noted that large semiconductor companies are cautious due to high failure costs, with hardware engineers inherently skeptical of new tools. This conservatism, while risk-averse, hinders AI integration, as companies prioritize stability over innovation.

Sustainability emerged as a critical theme. Arijit stressed the environmental cost of computing, advocating for educating younger generations about resource usage to minimize waste. AI-driven VLSI design can optimize power efficiency in chips, crucial for data centers consuming gigawatts of power. Aitken highlighted advanced packaging’s role in reducing energy losses in AI chips, while Tsai noted IBM’s efforts in using AI to enhance manufacturing yield, reducing waste. The panel agreed that AI and VLSI must evolve together to address climate challenges, with efficient chip designs enabling greener technologies.

The discussion also touched on workforce dynamics, with an audience member humorously questioning why hardware engineers aren’t paid more than software engineers given the complexity. The panel acknowledged the high stakes in hardware design, where errors are costly, but didn’t delve into compensation specifics. Ramune Nagisetty wrapped up by emphasizing the need for industry-academia collaboration to bridge the gap between cutting-edge AI tools and VLSI applications, urging the audience to drive adoption despite resistance.

This panel highlighted the transformative potential of AI in VLSI, from accelerating design to enabling sustainable, high-performance chips, while candidly addressing barriers like legacy infrastructure and cultural inertia, setting the stage for a collaborative push toward innovation.

Also Read:

AI’s Transformative Role in Semiconductor Design and Sustainability

From Atoms to Tokens: Semiconductor Supply Chain Evolution

The Future of Mobility: Insights from Steve Greenfield


The Future of Mobility: Insights from Steve Greenfield
by Admin on 08-02-2025 at 8:00 am


On July 18, 2025, Steve Greenfield, an early-stage investor and author, delivered a compelling 45-minute talk at DACtv on the future of mobility. Quoting futurist William Gibson—“The future is already here. It’s just not evenly distributed”—Greenfield explored how emerging technologies and business models are reshaping transportation across land, sea, air, and space, drawing from his book, The Future of Mobility. His presentation, structured in 12 sections, highlighted key trends, challenges, and opportunities in the mobility sector, emphasizing the role of innovative entrepreneurs in addressing them.

Greenfield began by defining mobility as the movement of humans or cargo across various modalities. His venture capital firm, on its third fund with 42 investments over four and a half years, seeks differentiated startups tackling mobility’s pressing issues. In ground transport, he noted U.S. trends since 1975: vehicle weight has remained stable, but horsepower and fuel efficiency have doubled, while tailpipe emissions have halved. However, a shift from passenger cars to heavier SUVs and pickup trucks has dominated, with franchise dealers maintaining consistent profitability (1.5-3% pre-tax margins) through complex business models, including used car sales, insurance, and service operations.

Electrification, a major focus, is growing but faces hurdles. U.S. battery electric vehicle (EV) sales have plateaued at 8-9% of new vehicles, with non-plug-in hybrids gaining traction. Greenfield identified three barriers to EV adoption: range anxiety, charger availability (one in five public chargers fails), and slow charging speeds. Advances in battery chemistry and inductive charging could resolve these, potentially tipping EVs toward dominance globally, though U.S. tax incentive cuts in 2025 may dampen demand short-term. Greenfield sees a future where EVs’ lower ownership costs drive widespread adoption once these issues are addressed.

China’s automotive rise was a key theme, with its manufacturers outpacing legacy automakers due to lax regulations, enabling faster innovation cycles. Greenfield warned that Chinese OEMs building U.S. factories will face stricter standards, like ISO 26262, potentially slowing their advantage but increasing costs. This regulatory gap currently handicaps Western manufacturers, who must comply with rigorous safety and environmental rules.

In response to audience questions, Greenfield addressed Intel’s exit from its automotive compute business, suggesting legacy automakers struggle with software competency and must partner with startups like Rivian or tier-one suppliers to compete. On autonomous vehicles, he tackled the LIDAR debate, advocating for redundancy over Tesla’s camera-only approach. Citing edge cases like bright sunlight or fogged lenses, he argued LIDAR’s incremental cost is justified for safety, despite Tesla’s cost-driven resistance.

Greenfield also touched on broader mobility trends, including urban air mobility (e.g., eVTOLs), maritime electrification, and space transport innovations. He highlighted the potential of fusion energy to eliminate fossil fuel reliance, noting significant venture capital interest in supporting such breakthroughs. His investment thesis focuses on backing entrepreneurs solving these challenges, from battery advancements to autonomous systems.

Drawing from Mike Maples’ Pattern Breakers, Greenfield illustrated how transformative ideas often hide in plain sight, using the pottery wheel’s 1,800-year delay in becoming a wagon wheel as an analogy. He urged attendees to connect the dots at conferences like DAC to identify and fund the next mobility breakthroughs. His optimism stems from the visible “seeds of the future” in current innovations, encouraging collaboration with smart entrepreneurs to drive progress. Greenfield invited attendees to request his presentation or book via email (steve@automotive.com), emphasizing the urgency of supporting visionary startups to shape a sustainable, efficient mobility future.

Also Read:

Chip Agent: Revolutionizing Chip Design with Agentic AI

Siemens EDA and Nvidia: Pioneering AI-Driven Chip Design

Insider Opinions on AI in EDA. Accellera Panel at DAC


From Atoms to Tokens: Semiconductor Supply Chain Evolution
by Admin on 08-02-2025 at 7:00 am

On July 18, 2025, a DACtv session titled “From Atoms to Tokens,” available on YouTube, explored the semiconductor supply chain’s transformation. The speaker tackled the challenges and innovations spanning the atomic level of chip fabrication to the tokenized ecosystems of AI-driven data centers, emphasizing the critical role of interconnect scaling, advanced packaging, and fault-tolerant computing in meeting the demands of modern AI and high-performance chips.

At the atomic level, interconnect scaling remains a bottleneck. Since the shift from aluminum to copper in 1998, progress has stalled, with Intel’s failed cobalt experiment at 10nm underscoring the difficulty. Emerging materials like tungsten (already used in contacts and vias), molybdenum, and ruthenium are being explored for 1.6nm nodes, necessitating new subtractive manufacturing processes that overhaul traditional dual damascene methods. These changes significantly alter design rules, impacting AI chips where interconnect performance is critical, unlike mobile chips with less stringent requirements. Backside power delivery, set to enter high-volume production within years, further complicates design-for-test strategies, as metal layers on both wafer sides hinder metrology, demanding new EDA tools to ensure reliability.

Lithography advancements, once expected to simplify design rules, have disappointed. High-NA EUV, anticipated to streamline processes, is being delayed, with TSMC opting against adoption because multi-patterning with standard EUV remains more cost-effective than High-NA single-patterning. Scaling boosters like pattern shaping (ion-beam manipulation) and directed self-assembly (DSA), adopted by Intel at 14A, introduce further design-rule complexity but promise cost savings. These shifts challenge EDA workflows, requiring tools to adapt to rapidly evolving fabrication techniques.

Packaging innovations are driving transistor density. While Moore’s Law cost-per-transistor trends have plateaued, advanced packaging like 2.5D (silicon interposers) and 3D (RDL interposers, local silicon bridges) has skyrocketed transistor counts per package. Techniques like TSMC’s SoIC and hybrid bonding achieve finer pitches (down to 9 microns), critical for chiplet-based AI accelerators. However, these require sophisticated thermal and stress modeling, as chiplets strain EDA tools for signal integrity and power delivery. The speaker highlighted the need for new design flows to handle these complexities, especially for AI chips where performance is paramount.

At the system level, AI data centers demand unprecedented scale. Clusters with millions of GPUs, like Meta’s 2-gigawatt Louisiana facility, consume power rivaling major cities. Optical interconnects, with a five-year mean time between failure, pose reliability issues; a 500,000-GPU cluster could fail every five minutes. Co-packaged optics (CPO) exacerbate this, despite improved reliability, necessitating fault-tolerant strategies like Meta’s open-sourced training library. Inter-data-center connectivity, with Microsoft’s $10 billion fiber deals and Corning’s massive orders, underscores the infrastructure challenge. Projects like Stargate, with $50-100 billion investments, aim for multi-gigawatt clusters, pushing EDA to model reliability and power at unprecedented scales.
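The five-minute figure follows from straightforward MTBF scaling: assuming independent failures, the cluster-level mean time between failures is the per-link MTBF divided by the number of links.

```python
# Cluster-level MTBF under independent link failures:
#   MTBF_cluster = MTBF_link / N_links
mtbf_link_minutes = 5 * 365 * 24 * 60   # five-year link MTBF, in minutes
n_links = 500_000                       # one optical link per GPU (simplification)
print(mtbf_link_minutes / n_links)      # ~5.3 minutes between failures
```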

The China market, addressed in the Q&A, presents unique challenges. Huawei and its foundry partner SMIC are ramping production, but reliance on foreign EDA tools and materials (e.g., Japanese photoresists and etchants) persists. China’s open-source contributions, like ByteDance’s Triton library, contrast with restricted GPU access, complicating analysis. The speaker noted that tracking China’s semiconductor progress via WeChat and forums is fraught with noise, similar to Reddit or Twitter, highlighting the difficulty of accurate market assessment.

This session underscored the semiconductor industry’s pivot toward AI-driven design, from atomic-level material innovations to tokenized, fault-tolerant data center ecosystems. EDA tools must evolve to handle new design rules, packaging complexities, and massive-scale reliability, ensuring the industry meets the soaring demand for AI and high-performance computing.

Also Read:

The Future of Mobility: Insights from Steve Greenfield

Chip Agent: Revolutionizing Chip Design with Agentic AI

Siemens EDA and Nvidia: Pioneering AI-Driven Chip Design


Siemens EDA and Nvidia: Pioneering AI-Driven Chip Design
by Admin on 08-02-2025 at 6:00 am


On July 18, 2025, Siemens EDA and Nvidia presented a compelling vision for the future of electronic design automation (EDA) at a DACtv event, emphasizing the transformative role of artificial intelligence (AI) in semiconductor and PCB design. Amit Gupta, Vice President and General Manager at Siemens EDA, and John Linford, head of Nvidia’s CAE and EDA product team, outlined how their partnership leverages AI to address the escalating complexity of chip design, shrinking talent pools, and rising development costs, in a session available on YouTube.

Gupta opened by contextualizing the semiconductor industry’s evolution, noting its growth from a $100 billion market in the 1990s to a projected $1 trillion by 2030, driven by AI, IoT, and mobile revolutions. He highlighted the challenges of designing advanced chips at 2nm nodes, where complexity in validation, software, and hardware has surged, while the talent base dwindles and costs soar. Siemens EDA’s response is its newly announced Siemens EDA AI system, a comprehensive platform integrating machine learning (ML), reinforcement learning (RL), generative AI, and agentic AI across tools like Calibre, Tessent, Questa, and Expedition. This system, launched that morning, aims to boost designer productivity by enabling natural language interactions, reducing simulation times, and lowering barriers for junior engineers.

The Siemens EDA AI system is purpose-built for industrial-grade applications, emphasizing verifiability, usability, generality, robustness, and accuracy—qualities absent in consumer AI models prone to hallucinations. For instance, Gupta demonstrated how consumer AI fails at domain-specific tasks like compiling EDA files in Questa, underscoring the need for specialized solutions. The system supports multimodal inputs like RTL, Verilog, netlists, and GDSII, and integrates with vector databases for retrieval-augmented generation, ensuring precise, sign-off-quality results. It also allows customers to fine-tune large language models (LLMs) on-premises, prioritizing data security and compatibility with diverse hardware, including Nvidia GPUs.
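Retrieval-augmented generation of this kind is easy to sketch: documentation snippets are embedded into vectors, and the snippets nearest a query are prepended to the LLM prompt. The toy example below uses a hashed bag-of-words embedding purely to stay self-contained; it is not the Siemens implementation.

```python
import numpy as np

def embed(text):
    """Toy embedding: hashed bag-of-words. A production system would use
    a real embedding model; this keeps the sketch self-contained."""
    v = np.zeros(256)
    for tok in text.lower().split():
        v[hash(tok) % 256] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

# A miniature "vector database" of design-flow documentation snippets
docs = [
    "vlog compiles Verilog source files into a Questa work library",
    "vsim loads a compiled design into the Questa simulator",
    "Calibre DRC checks layout geometry against foundry design rules",
]
index = np.stack([embed(d) for d in docs])

def retrieve(query, k=2):
    """Return the top-k snippets by cosine similarity; these would be
    prepended to the LLM prompt (retrieval-augmented generation)."""
    scores = index @ embed(query)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve("how do I compile my Verilog files?"))
```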

Nvidia’s contribution, as Linford explained, centers on its hardware and software ecosystem, tailored for accelerated computing. Nvidia uses Siemens EDA tools internally to design AI chips, creating a feedback loop where its GPUs power Siemens’ AI solutions. The CUDA-X libraries, with over 450 domain-specific tools, accelerate tasks like lithography (cuLitho), sparse matrix operations (cuSPARSE, cuFFT), and physical simulations (Warp). cuLitho, for example, is widely adopted in production, while Warp enables differentiable simulations for TCAD workflows at companies like TSMC. Nvidia’s NeMo framework supports enterprise-grade generative and agentic AI, with features like data curation, training, and guardrails to ensure accuracy and privacy. Nvidia Inference Microservices (NIMs) further simplify deployment, offering containerized AI models for cloud or on-premises use, integrated into Siemens’ platform.

A standout example of their collaboration is in AI physics, where Nvidia’s PhysicsNeMo framework accelerates simulations for digital twins, such as optimizing wind farm designs at Siemens Energy with 65x cost reduction and 60x less energy compared to traditional methods. In EDA, this translates to faster thermal, flow, and electrothermal simulations for chip and data center design. The partnership also explores integrating foundry PDKs into LLMs, enabling easier access to design rules and automating tasks like DRC deck creation, as discussed in response to audience questions about foundry collaboration and circuit analysis.

The Siemens EDA AI system and Nvidia’s infrastructure promise a “multiplicative” productivity boost, combining front-end natural language interfaces with backend ML/RL optimizations. This open platform supports third-party tools and custom workflows, ensuring flexibility. While specific licensing details were not disclosed, Gupta invited attendees to explore demos at Siemens’ booth, highlighting practical applications like script generation, testbench acceleration, and error log analysis. This collaboration signals a paradigm shift toward AI-driven EDA, positioning Siemens and Nvidia to redefine chip design efficiency and accessibility.

Also Read:

Insider Opinions on AI in EDA. Accellera Panel at DAC

Caspia Focuses Security Requirements at DAC

Building Trust in Generative AI


Materials Selection Methodology White Paper
by Daniel Nenni on 08-02-2025 at 6:00 am


The Granta EduPack White Paper on Materials Selection, authored by Harriet Parnell, Kaitlin Tyler, and Mike Ashby, presents a practical and educational guide to selecting materials in engineering design. Developed by Ansys and based on Ashby’s well-known methodologies, the paper outlines a four-step process to help learners and professionals select materials that meet performance, cost, and functional requirements using the Granta EduPack software.

The materials selection methodology begins with translation, the process of converting a design problem into precise engineering terms. This involves four components: function, constraints, objectives, and free variables. The function describes what the component must do—such as support loads, conduct heat, or resist pressure. Constraints are strict conditions that must be met, such as minimum strength, maximum service temperature, or corrosion resistance. Objectives define what is to be minimized or maximized—typically weight, cost, energy loss, or thermal conductivity. Finally, free variables are parameters the designer is allowed to adjust, such as the material choice itself or geometric dimensions. Defining these clearly is essential for identifying suitable materials later in the process.

The second step is screening, which eliminates materials that do not meet the basic constraints identified during translation. If a material doesn’t meet the required temperature, stiffness, or conductivity, it is screened out. Screening can be done manually by checking material databases, but the Granta EduPack software provides tools for a more visual approach. Using property charts with logarithmic scales, users can apply filters and quickly identify which materials fall outside the necessary limits. These visualizations make it easier to compare large material datasets and help narrow down potential candidates.

After unsuitable options are removed, the ranking step evaluates the remaining materials based on how well they meet the design objectives. This involves using performance indices, which are combinations of material properties that reflect the overall performance for a given function. For instance, if the goal is to design a lightweight and stiff beam, the relevant performance index could be the square root of Young’s modulus divided by density. The better this index, the more suitable the material. These indices can be plotted on property charts within EduPack to show which materials perform best. Materials above the selection line, or toward a defined optimal region, are considered the top choices.
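That beam index is easy to evaluate from handbook data. The sketch below ranks a few familiar materials by M = sqrt(E)/rho using approximate property values (illustrative only, not EduPack data); it reproduces the classic Ashby result that wood and CFRP outperform metals for light, stiff beams.

```python
import math

# Approximate handbook values: Young's modulus E in Pa, density rho in kg/m^3
materials = {
    "steel":     (200e9, 7850),
    "aluminum":  ( 69e9, 2700),
    "CFRP":      (100e9, 1600),   # representative quasi-isotropic laminate
    "pine wood": ( 10e9,  500),
}

# Performance index for a light, stiff beam in bending: M = sqrt(E) / rho.
# Higher M means less mass for the same bending stiffness.
ranked = sorted(materials.items(),
                key=lambda kv: math.sqrt(kv[1][0]) / kv[1][1],
                reverse=True)
for name, (E, rho) in ranked:
    print(f"{name:10s}  M = {math.sqrt(E) / rho:6.1f}")
```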

The final step is documentation, where the designer further investigates the top candidates. Even if a material performs well according to data, real-world concerns like manufacturing limitations, environmental impact, availability, and historical reliability must also be considered. This step emphasizes broader engineering judgment and the importance of context in final decision-making.

Following the methodology section, the white paper explains how performance indices are derived. They come from the performance equation, which relates the function of the component, its geometry, and the material properties. If the variables in the equation can be separated into those three groups, the material-dependent part becomes the performance index. This index can then be used universally across different geometries and loading scenarios, simplifying the selection process early in design.
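For the light, stiff beam, the separation works out as follows (a standard derivation in Ashby’s notation, assuming a square cross-section and the area A as the free variable):

```latex
% Objective: minimize beam mass        m = A L \rho
% Constraint: bending stiffness        S = C_1 E I / L^3, \quad I = A^2/12
% Free variable: cross-sectional area A. Eliminate A using the constraint:
A = \left(\frac{12\,S\,L^3}{C_1 E}\right)^{1/2}
\quad\Longrightarrow\quad
m = \left(\frac{12\,S}{C_1}\right)^{1/2} L^{5/2}\,\frac{\rho}{E^{1/2}}
% The function/geometry factors (S, L, C_1) separate from the material
% factor, so minimizing mass is equivalent to maximizing the index
M = \frac{E^{1/2}}{\rho}
```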

Two examples demonstrate how performance indices are formed. In the first, a thermal storage material must store maximum heat per unit cost. The index becomes heat capacity divided by material cost. In the second, a beam must be light and stiff under bending. The derived performance index combines modulus and density. These examples show how specific requirements and constraints lead to practical, optimized material choices.

Granta EduPack supports these concepts through its interactive features. Users can plot performance indices as selection lines with defined slopes on charts or use indices as axes to rank materials visually. The Performance Index Finder tool automates the index derivation process by letting users input their function, constraints, objectives, and free variables directly. The software then produces a relevant performance index and displays suitable materials accordingly.

The paper concludes with a list of references and educational resources. Ashby’s textbook Materials Selection in Mechanical Design is cited as the foundational source. Additional resources include Ansys Innovation Courses, video tutorials, and downloadable case studies focused on mechanical, thermal, and electromechanical applications. These are intended to reinforce the material and support both independent learning and classroom instruction.

In summary, this white paper offers a clear, structured, and practical approach to materials selection. It not only teaches the methodology behind choosing the right materials but also integrates powerful software tools that make the process faster and more intuitive. By combining theoretical rigor with real-world practicality, the Granta EduPack methodology equips students and engineers with the skills to make informed, optimized, and sustainable material choices.

You can download the paper here.

Also Read:

ML and Multiphysics Corral 3D and HBM

A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design

Ansys and eShard Sign Agreement to Deliver Comprehensive Hardware Security Solution for Semiconductor Products