The Future of Mobility: Insights from Steve Greenfield
by Admin on 08-02-2025 at 8:00 am


On July 18, 2025, Steve Greenfield, an early-stage investor and author, delivered a compelling 45-minute talk at DACtv on the future of mobility. Quoting futurist William Gibson—“The future is already here. It’s just not evenly distributed”—Greenfield explored how emerging technologies and business models are reshaping transportation across land, sea, air, and space, drawing from his book, The Future of Mobility. His presentation, structured in 12 sections, highlighted key trends, challenges, and opportunities in the mobility sector, emphasizing the role of innovative entrepreneurs in addressing them.

Greenfield began by defining mobility as the movement of humans or cargo across various modalities. His venture capital firm, now on its third fund with 42 investments over four and a half years, seeks differentiated startups tackling mobility’s pressing issues. In ground transport, he noted U.S. trends since 1975: vehicle weight has remained stable, horsepower and fuel efficiency have doubled, and tailpipe emissions have halved. The market, however, has shifted from passenger cars to heavier SUVs and pickup trucks, with franchise dealers maintaining consistent profitability (1.5-3% pre-tax margins) through complex business models spanning used car sales, insurance, and service operations.

Electrification, a major focus, is growing but faces hurdles. U.S. battery electric vehicle (EV) sales have plateaued at 8-9% of new vehicles sold, while non-plug-in hybrids gain traction. Greenfield identified three barriers to EV adoption: range anxiety, charger availability (one in five public chargers fails), and slow charging speeds. Advances in battery chemistry and inductive charging could resolve these, potentially tipping EVs toward global dominance, though U.S. tax incentive cuts in 2025 may dampen demand in the short term. Greenfield sees a future where EVs’ lower ownership costs drive widespread adoption once these issues are addressed.

China’s automotive rise was a key theme, with its manufacturers outpacing legacy automakers thanks to lax domestic regulations that enable faster innovation cycles. Greenfield warned that Chinese OEMs building U.S. factories will face stricter standards, such as ISO 26262, which could erode their speed advantage and raise their costs. This regulatory gap currently handicaps Western manufacturers, who must comply with rigorous safety and environmental rules.

In response to audience questions, Greenfield addressed Intel’s exit from its automotive compute business, suggesting legacy automakers struggle with software competency and must partner with startups like Rivian or tier-one suppliers to compete. On autonomous vehicles, he tackled the LIDAR debate, advocating for redundancy over Tesla’s camera-only approach. Citing edge cases like bright sunlight or fogged lenses, he argued LIDAR’s incremental cost is justified for safety, despite Tesla’s cost-driven resistance.

Greenfield also touched on broader mobility trends, including urban air mobility (e.g., eVTOLs), maritime electrification, and space transport innovations. He highlighted the potential of fusion energy to eliminate fossil fuel reliance, noting significant venture capital interest in supporting such breakthroughs. His investment thesis focuses on backing entrepreneurs solving these challenges, from battery advancements to autonomous systems.

Drawing from Mike Maples’ Pattern Breakers, Greenfield illustrated how transformative ideas often hide in plain sight, using the pottery wheel’s 1,800-year delay in becoming a wagon wheel as an analogy. He urged attendees to connect the dots at conferences like DAC to identify and fund the next mobility breakthroughs. His optimism stems from the visible “seeds of the future” in current innovations, encouraging collaboration with smart entrepreneurs to drive progress. Greenfield invited attendees to request his presentation or book via email (steve@automotive.com), emphasizing the urgency of supporting visionary startups to shape a sustainable, efficient mobility future.

Also Read:

Chip Agent: Revolutionizing Chip Design with Agentic AI

Siemens EDA and Nvidia: Pioneering AI-Driven Chip Design

Insider Opinions on AI in EDA. Accellera Panel at DAC



From Atoms to Tokens: Semiconductor Supply Chain Evolution
by Admin on 08-02-2025 at 7:00 am

On July 18, 2025, a DACtv session titled “From Atoms to Tokens” explored the semiconductor supply chain’s transformation. The speaker tackled the challenges and innovations from the atomic level of chip fabrication to the tokenized ecosystems of AI-driven data centers, emphasizing the critical role of interconnect scaling, advanced packaging, and fault-tolerant computing in meeting the demands of modern AI and high-performance chips.

At the atomic level, interconnect scaling remains a bottleneck. Since the shift from aluminum to copper in 1998, progress has stalled, with Intel’s failed cobalt experiment at 10nm underscoring the difficulty. Emerging materials like tungsten (already used in contacts and vias), molybdenum, and ruthenium are being explored for 1.6nm nodes, necessitating new subtractive manufacturing processes that overhaul traditional dual damascene methods. These changes significantly alter design rules, impacting AI chips where interconnect performance is critical, unlike mobile chips with less stringent requirements. Backside power delivery, set to enter high-volume production within the next few years, further complicates design-for-test strategies, as metal layers on both wafer sides hinder metrology, demanding new EDA tools to ensure reliability.

Lithography advancements, once expected to simplify design rules, have disappointed. High-NA EUV, anticipated to streamline processes, has been delayed, with TSMC opting against adoption because multi-patterning with standard EUV remains more cost-effective than High-NA single-patterning. Scaling boosters like pattern shaping (ion beam manipulation) and directed self-assembly (DSA), adopted by Intel at 14A, introduce further design rule complexity but promise cost savings. These shifts challenge EDA workflows, requiring tools to adapt to rapidly evolving fabrication techniques.

Packaging innovations are driving transistor density. While Moore’s Law cost-per-transistor trends have plateaued, advanced packaging like 2.5D (silicon interposers) and 3D (RDL interposers, local silicon bridges) has pushed transistor counts per package sharply higher. Techniques like TSMC’s SoIC and hybrid bonding achieve finer pitches (down to 9 microns), critical for chiplet-based AI accelerators. However, these require sophisticated thermal and stress modeling, as chiplets strain EDA tools for signal integrity and power delivery. The speaker highlighted the need for new design flows to handle these complexities, especially for AI chips where performance is paramount.

At the system level, AI data centers demand unprecedented scale. Clusters with millions of GPUs, like Meta’s 2-gigawatt Louisiana facility, consume power rivaling major cities. Optical interconnects, with a five-year mean time between failure, pose reliability issues; a 500,000-GPU cluster could fail every five minutes. Co-packaged optics (CPO) exacerbate this, despite improved reliability, necessitating fault-tolerant strategies like Meta’s open-sourced training library. Inter-data-center connectivity, with Microsoft’s $10 billion fiber deals and Corning’s massive orders, underscores the infrastructure challenge. Projects like Stargate, with $50-100 billion investments, aim for multi-gigawatt clusters, pushing EDA to model reliability and power at unprecedented scales.
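
A quick back-of-the-envelope check of that failure rate, as a minimal Python sketch: the five-year MTBF and the 500,000-GPU figure come from the talk; the assumptions that failures are independent and that each GPU carries roughly one optical link are mine.

    # Expected failure interval for a large cluster of optical links,
    # assuming independent failures and a per-link MTBF of 5 years.
    MTBF_YEARS = 5
    NUM_LINKS = 500_000                      # assumption: ~one optical link per GPU

    minutes_per_year = 365 * 24 * 60
    cluster_mtbf_minutes = MTBF_YEARS * minutes_per_year / NUM_LINKS
    print(f"{cluster_mtbf_minutes:.1f} minutes between failures")  # ~5.3 minutes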

The China market, addressed in Q&A, presents unique challenges. Huawei’s foundry partner SMIC is ramping production, but reliance on foreign EDA tools and materials (e.g., Japanese photoresists and etchants) persists. China’s open-source contributions, like ByteDance’s Triton library, contrast with restricted GPU access, complicating analysis. The speaker noted that tracking China’s semiconductor progress via WeChat and forums is fraught with noise, similar to Reddit or Twitter, highlighting the difficulty of accurate market assessment.

This session underscored the semiconductor industry’s pivot toward AI-driven design, from atomic-level material innovations to tokenized, fault-tolerant data center ecosystems. EDA tools must evolve to handle new design rules, packaging complexities, and massive-scale reliability, ensuring the industry meets the soaring demand for AI and high-performance computing.

Also Read:

The Future of Mobility: Insights from Steve Greenfield

Chip Agent: Revolutionizing Chip Design with Agentic AI

Siemens EDA and Nvidia: Pioneering AI-Driven Chip Design



Siemens EDA and Nvidia: Pioneering AI-Driven Chip Design
by Admin on 08-02-2025 at 6:00 am


On July 18, 2025, Siemens EDA and Nvidia presented a compelling vision for the future of electronic design automation (EDA) at a DACtv event, emphasizing the transformative role of artificial intelligence (AI) in semiconductor and PCB design. Amit Gupta, Vice President and General Manager at Siemens EDA, and John Lynford, head of Nvidia’s CAE and EDA product team, outlined how their partnership leverages AI to address the escalating complexity of chip design, shrinking talent pools, and rising development costs.

Gupta opened by contextualizing the semiconductor industry’s evolution, noting its growth from a $100 billion market in the 1990s to a projected $1 trillion by 2030, driven by AI, IoT, and mobile revolutions. He highlighted the challenges of designing advanced chips at 2nm nodes, where complexity in validation, software, and hardware has surged, while the talent base dwindles and costs soar. Siemens EDA’s response is its newly announced Siemens EDA AI system, a comprehensive platform integrating machine learning (ML), reinforcement learning (RL), generative AI, and agentic AI across tools like Calibre, Tessent, Questa, and Expedition. This system, launched that morning, aims to boost designer productivity by enabling natural language interactions, reducing simulation times, and lowering barriers for junior engineers.

The Siemens EDA AI system is purpose-built for industrial-grade applications, emphasizing verifiability, usability, generality, robustness, and accuracy—qualities absent in consumer AI models prone to hallucinations. For instance, Gupta demonstrated how consumer AI fails at domain-specific tasks like compiling EDA files in Questa, underscoring the need for specialized solutions. The system supports multimodal inputs like RTL, Verilog, netlists, and GDSII, and integrates with vector databases for retrieval-augmented generation, ensuring precise, sign-off-quality results. It also allows customers to fine-tune large language models (LLMs) on-premises, prioritizing data security and compatibility with diverse hardware, including Nvidia GPUs.
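
As a rough illustration of how retrieval-augmented generation grounds an LLM in design data, here is a minimal numpy sketch; the embed() stand-in, the toy document set, and the prompt format are hypothetical placeholders, not the Siemens EDA AI system’s implementation.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stand-in for a domain-tuned embedding model (toy vectors derived from the text hash).
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.normal(size=128)
        return v / np.linalg.norm(v)

    # Toy "vector database" of design-domain snippets.
    docs = [
        "Questa compile flow: compile SystemVerilog sources before elaboration.",
        "Calibre DRC runs require a rule deck matched to the target PDK.",
        "Tessent inserts scan chains and generates ATPG patterns.",
    ]
    index = np.stack([embed(d) for d in docs])

    def retrieve(query: str, k: int = 2) -> list[str]:
        scores = index @ embed(query)            # cosine similarity of unit vectors
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    # Retrieved snippets are prepended to the prompt so the model answers from
    # tool documentation rather than from its general (hallucination-prone) priors.
    context = "\n".join(retrieve("How do I compile my testbench in Questa?"))
    prompt = f"Context:\n{context}\n\nQuestion: How do I compile my testbench?"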

Nvidia’s contribution, as Lynford explained, centers on its hardware and software ecosystem, tailored for accelerated computing. Nvidia uses Siemens EDA tools internally to design AI chips, creating a feedback loop where its GPUs power Siemens’ AI solutions. The CUDA-X libraries, with over 450 domain-specific tools, accelerate tasks like lithography (cuLitho), sparse matrix and FFT operations (cuSPARSE, cuFFT), and physical simulations (Warp). cuLitho, for example, is widely adopted in production, while Warp enables differentiable simulations for TCAD workflows at companies like TSMC. Nvidia’s NeMo framework supports enterprise-grade generative and agentic AI, with features like data curation, training, and guardrails to ensure accuracy and privacy. Nvidia Inference Microservices (NIMs) further simplify deployment, offering containerized AI models for cloud or on-premises use, integrated into Siemens’ platform.

A standout example of their collaboration is in AI physics, where Nvidia’s PhysicsNeMo framework accelerates simulations for digital twins, such as optimizing wind farm designs at Siemens Energy with 65x cost reduction and 60x less energy compared to traditional methods. In EDA, this translates to faster thermal, flow, and electrothermal simulations for chip and data center design. The partnership also explores integrating foundry PDKs into LLMs, enabling easier access to design rules and automating tasks like DRC deck creation, as discussed in response to audience questions about foundry collaboration and circuit analysis.

The Siemens EDA AI system and Nvidia’s infrastructure promise a “multiplicative” productivity boost, combining front-end natural language interfaces with backend ML/RL optimizations. This open platform supports third-party tools and custom workflows, ensuring flexibility. While specific licensing details were not disclosed, Gupta invited attendees to explore demos at Siemens’ booth, highlighting practical applications like script generation, testbench acceleration, and error log analysis. This collaboration signals a paradigm shift toward AI-driven EDA, positioning Siemens and Nvidia to redefine chip design efficiency and accessibility.

Also Read:

Insider Opinions on AI in EDA. Accellera Panel at DAC

Caspia Focuses Security Requirements at DAC

Building Trust in Generative AI



Materials Selection Methodology White Paper
by Daniel Nenni on 08-02-2025 at 6:00 am


The Granta EduPack White Paper on Materials Selection, authored by Harriet Parnell, Kaitlin Tyler, and Mike Ashby, presents a practical and educational guide to selecting materials in engineering design. Developed by Ansys and based on Ashby’s well-known methodologies, the paper outlines a four-step process to help learners and professionals select materials that meet performance, cost, and functional requirements using the Granta EduPack software.

The materials selection methodology begins with translation, the process of converting a design problem into precise engineering terms. This involves four components: function, constraints, objectives, and free variables. The function describes what the component must do—such as support loads, conduct heat, or resist pressure. Constraints are strict conditions that must be met, such as minimum strength, maximum service temperature, or corrosion resistance. Objectives define what is to be minimized or maximized—typically weight, cost, energy loss, or thermal conductivity. Finally, free variables are parameters the designer is allowed to adjust, such as the material choice itself or geometric dimensions. Defining these clearly is essential for identifying suitable materials later in the process.

The second step is screening, which eliminates materials that do not meet the basic constraints identified during translation. If a material doesn’t meet the required temperature, stiffness, or conductivity, it is screened out. Screening can be done manually by checking material databases, but the Granta EduPack software provides tools for a more visual approach. Using property charts with logarithmic scales, users can apply filters and quickly identify which materials fall outside the necessary limits. These visualizations make it easier to compare large material datasets and help narrow down potential candidates.
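
A minimal Python sketch of the screening step; the material records and limit values below are illustrative placeholders, not EduPack data.

    # Screen out materials that violate the hard constraints from the translation step.
    materials = [
        {"name": "Al 6061",   "E_GPa": 69,  "T_max_C": 170, "rho_kg_m3": 2700},
        {"name": "CFRP",      "E_GPa": 110, "T_max_C": 140, "rho_kg_m3": 1600},
        {"name": "Ti-6Al-4V", "E_GPa": 114, "T_max_C": 350, "rho_kg_m3": 4430},
    ]

    # Constraints: minimum Young's modulus and minimum service temperature.
    constraints = {"E_GPa": 100, "T_max_C": 150}

    def passes(mat: dict) -> bool:
        return all(mat[prop] >= limit for prop, limit in constraints.items())

    candidates = [m for m in materials if passes(m)]
    # Only Ti-6Al-4V survives both limits in this toy data set; the survivors
    # move on to the ranking step.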

After unsuitable options are removed, the ranking step evaluates the remaining materials based on how well they meet the design objectives. This involves using performance indices, which are combinations of material properties that reflect the overall performance for a given function. For instance, if the goal is to design a lightweight and stiff beam, the relevant performance index could be the square root of Young’s modulus divided by density. The higher the index, the better the material serves that objective. These indices can be plotted on property charts within EduPack to show which materials perform best. Materials above the selection line, or toward a defined optimal region, are considered the top choices.

The final step is documentation, where the designer further investigates the top candidates. Even if a material performs well according to data, real-world concerns like manufacturing limitations, environmental impact, availability, and historical reliability must also be considered. This step emphasizes broader engineering judgment and the importance of context in final decision-making.

Following the methodology section, the white paper explains how performance indices are derived. They come from the performance equation, which relates the function of the component, its geometry, and the material properties. If the variables in the equation can be separated into those three groups, the material-dependent part becomes the performance index. This index can then be used universally across different geometries and loading scenarios, simplifying the selection process early in design.

Two examples demonstrate how performance indices are formed. In the first, a thermal storage material must store maximum heat per unit cost. The index becomes heat capacity divided by material cost. In the second, a beam must be light and stiff under bending. The derived performance index combines modulus and density. These examples show how specific requirements and constraints lead to practical, optimized material choices.
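
The beam example can be traced through the standard derivation sketched below (in LaTeX notation); the square cross-section and the loading constant C_1 are the usual textbook assumptions rather than details stated in the white paper.

    % Objective: minimize the mass of a beam of length L, section area A, density rho
    m = A L \rho
    % Constraint: bending stiffness S, with I = A^2/12 for a square section
    S = \frac{C_1 E I}{L^3} = \frac{C_1 E A^2}{12 L^3}
    % Solve for the free variable A and substitute back into m
    A = \left( \frac{12 S L^3}{C_1 E} \right)^{1/2}
    \quad\Rightarrow\quad
    m = \left( \frac{12 S}{C_1} \right)^{1/2} L^{5/2}\, \frac{\rho}{E^{1/2}}
    % Minimizing m at fixed S and L therefore means maximizing the index
    M = \frac{E^{1/2}}{\rho}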

Granta EduPack supports these concepts through its interactive features. Users can plot performance indices as selection lines with defined slopes on charts or use indices as axes to rank materials visually. The Performance Index Finder tool automates the index derivation process by letting users input their function, constraints, objectives, and free variables directly. The software then produces a relevant performance index and displays suitable materials accordingly.

The paper concludes with a list of references and educational resources. Ashby’s textbook Materials Selection in Mechanical Design is cited as the foundational source. Additional resources include Ansys Innovation Courses, video tutorials, and downloadable case studies focused on mechanical, thermal, and electromechanical applications. These are intended to reinforce the material and support both independent learning and classroom instruction.

In summary, this white paper offers a clear, structured, and practical approach to materials selection. It not only teaches the methodology behind choosing the right materials but also integrates powerful software tools that make the process faster and more intuitive. By combining theoretical rigor with real-world practicality, the Granta EduPack methodology equips students and engineers with the skills to make informed, optimized, and sustainable material choices.

You can download the paper here.

Also Read:

ML and Multiphysics Corral 3D and HBM

A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design

Ansys and eShard Sign Agreement to Deliver Comprehensive Hardware Security Solution for Semiconductor Products

 



AI and Machine Learning in Chip Design: DAC Keynote Insights
by Admin on 08-01-2025 at 2:00 pm


In a keynote at the 62nd Design Automation Conference (#62DAC) on July 8, 2025, Jason Cong, Volgenau Chair for Engineering Excellence Professor at UCLA, reflected on over 30 years in the DAC community, highlighting the transformative role of AI and machine learning (ML) in semiconductor design. Cong, whose first DAC paper, in 1988, addressed two-layer channel routing, contrasted the era of Intel’s 386 processor (275,000 transistors at 1.5 microns) with today’s marvels like NVIDIA’s B200 (208 billion transistors at 4nm) and Micron’s 3D NAND (5.3 trillion transistors in over 200 layers). This evolution underscores integrated circuits as among the most complex man-made objects, designed in just 12-18 months compared to decades for projects like the International Space Station.

The design flow begins with system specifications in languages like C++ or SystemC, synthesized to RTL (VHDL/Verilog), then to Boolean equations. Logic synthesis optimizes for area, power, and timing, followed by physical design stages: floorplanning, placement, clock tree synthesis, routing, and sign-off verification. Challenges include exploding complexity—trillions of transistors, 3D stacking, and heterogeneous integration—coupled with power constraints and shrinking timelines. Traditional methods struggle with NP-hard problems like placement and routing, where exhaustive search is infeasible.

Enter AI/ML as game-changers. The speaker advocated treating EDA problems as data-driven, leveraging ML for optimization. Key applications include:

  • Placement and Routing: ML models predict wirelength, congestion, and timing, outperforming heuristics. Techniques like graph neural networks (GNNs) and reinforcement learning (RL) guide macro placement, achieving 20-30% better PPA (power, performance, area). Tools like Google’s Circuit Training use RL for chip floorplanning.

  • Verification: ML aids bug detection in RTL, predicting coverage gaps and generating stimuli. Analog verification uses surrogate models for faster simulations, reducing runtime from days to minutes.

  • Lithography and Manufacturing: ML corrects optical proximity effects, predicts hotspots, and optimizes masks. Generative models design resolution enhancement techniques, while RL tunes process parameters.

  • Analog Design: Traditionally manual, ML automates sizing and layout. Bayesian optimization and generative adversarial networks (GANs) create layouts, with RL fine-tuning for performance.

The speaker emphasized hybrid approaches: ML augments, not replaces, traditional methods. For instance, in logic synthesis, ML predicts post-synthesis metrics to guide transformations. In physical design, ML-based predictors integrate into flows for real-time feedback.
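
As a toy example of such a predictor, the sketch below fits a regressor to synthetic placement features and uses it for fast wirelength estimates; the feature set, the synthetic labels, and the model choice are illustrative assumptions, not any particular tool’s flow.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Hypothetical per-design features: cell count, average fanout, utilization, macro count.
    X = rng.uniform([1e4, 2.0, 0.5, 4], [1e6, 6.0, 0.9, 64], size=(200, 4))
    # Synthetic "routed wirelength" labels standing in for real post-route results.
    y = 0.8 * X[:, 0] * X[:, 1] * X[:, 2] + 1e4 * X[:, 3] + rng.normal(0, 1e4, 200)

    model = GradientBoostingRegressor().fit(X[:150], y[:150])
    predicted_wirelength = model.predict(X[150:])
    # In a real flow the prediction replaces a slow trial route, giving the placer
    # fast feedback on which candidate placements are worth refining further.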

Looking ahead, the outlook is multi-agent systems combining human and machine intelligence. The goal: enable software programmers to design chips as easily as writing PyTorch libraries. Undergraduate courses already demonstrate this, with students building CNN accelerators on AWS F1 clouds using high-level synthesis.

Challenges remain: data scarcity, model generalization, and integration into existing tools. The speaker stressed deep problem understanding over superficial AI applications, urging cross-disciplinary collaboration. He noted that work funded by NSF and PRISM centers, carried out in partnership with domain experts, is yielding innovations in ML for EDA.

In conclusion, AI/ML is revolutionizing chip design, addressing complexity and accelerating innovation. As Moore’s Law evolves into “More than Moore” with 3D and heterogeneous systems, this DAC community-driven synergy promises a future where chip design is democratized, efficient, and impactful.

Also Read:

Enabling the AI Revolution: Insights from AMD’s DAC Keynote

AI Evolution and EDA’s Role in the Fourth Wave: William Chappell’s DAC Keynote

AI-Driven ECAD Library Creation: Streamlining Semiconductor Design



Enabling the AI Revolution: Insights from AMD’s DAC Keynote
by Admin on 08-01-2025 at 1:00 pm


In a keynote at the 62nd Design Automation Conference (DAC) on July 8, 2025, Michaela Blott, AMD Senior Fellow, explored the trends shaping the AI revolution, emphasizing inference efficiency and hardware customization. While acknowledging AMD’s efforts in scaling GPUs and achieving energy efficiency goals (30x by 2025, with new targets for 2030), she focused on emerging industry dynamics, offering food for thought for the DAC community rather than specific solutions.

The talk began with a disclaimer on AI’s diverse viewpoints. As an empirical science, AI lacks fundamental understanding; researchers run experiments, observe patterns, and derive scaling laws. The information bandwidth from AI research overwhelms individuals, leading to reliance on trusted experts and resulting in polarized beliefs. This is evident in debates over Artificial General Intelligence (AGI). “Bulls” argue AGI is achievable through scaling models with more compute and data, driven by competitive fears. “Bears” counter that scaling alone is insufficient, citing diminishing returns and the need for breakthroughs in reasoning and planning.

A key shift highlighted was from training to inference. Training, centralized in data centers, focuses on model creation, but inference—deploying models for real-world use—is distributed and power-intensive. The speaker noted that generating a single AI response can consume the equivalent of a cup of water, underscoring sustainability concerns. With inference dominating AI workloads (90% by some estimates), efficiency optimizations are crucial for widespread adoption.

Algorithmic advancements offer promise. Quantization reduces precision (e.g., from FP32 to INT8), cutting power and memory needs while maintaining accuracy through quantization-aware training. New architectures like RecurrentGemma and Mamba use recurrent neural networks, achieving efficiency gains—RecurrentGemma matches transformer performance with 40% less memory. Mixture of Experts (MoE) models activate subsets of parameters, enabling trillion-parameter models with billion-parameter efficiency. Emerging techniques, such as test-time scaling and synthetic data generation, further enhance capabilities without proportional resource increases.
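
A minimal numpy illustration of the FP32-to-INT8 step mentioned above, using symmetric per-tensor quantization; this is a generic sketch, not AMD’s Brevitas implementation.

    import numpy as np

    def quantize_int8(w: np.ndarray):
        """Symmetric per-tensor quantization of FP32 weights to INT8."""
        scale = np.abs(w).max() / 127.0                      # map max magnitude to +/-127
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    w = np.random.randn(256, 256).astype(np.float32)
    q, scale = quantize_int8(w)
    mean_error = np.abs(w - dequantize(q, scale)).mean()
    # Storage and memory traffic drop 4x (8 bits vs 32); quantization-aware
    # training is what recovers most of the accuracy lost to this rounding.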

Hardware customization presents a “humongous opportunity.” The speaker advocated for domain-specific accelerators, moving beyond general-purpose GPUs. Tools like AMD’s Vitis AI ecosystem, including Brevitas for quantization training and BrainSmith for deployment, enable end-to-end flows. An example demonstrated quantizing a YOLOv8 model for edge devices, achieving real-time performance on low-power hardware. Customization exploits data types, sparsity, and architecture tweaks, potentially yielding 10-100x efficiency gains.

The DAC community’s role is pivotal. As AI disrupts industries—from automotive sensors to medical diagnostics—inference optimizations will drive revenue-generating deployments. The speaker stressed that superhuman AI on specific tasks is already here, without needing AGI. Design automation is more critical than ever, with AI integration necessary for agility amid rapid innovation.

In summary, the AI revolution hinges on addressing inference inefficiencies through algorithmic and hardware advancements. The DAC audience was urged to focus on customization tooling and AI-augmented design flows. As AI becomes pervasive, solving these challenges will unlock transformative applications, ensuring sustainable, scalable intelligence.

Also Read:

AI Evolution and EDA’s Role in the Fourth Wave: William Chappell’s DAC Keynote

AI-Driven ECAD Library Creation: Streamlining Semiconductor Design

Modern Data Management: Overcoming Bottlenecks in Semiconductor Engineering



AI Evolution and EDA’s Role in the Fourth Wave: William Chappell’s DAC Keynote
by Admin on 08-01-2025 at 12:00 pm

In a keynote at the 62nd Design Automation Conference (DAC) on July 8, 2025, William Chappell, Vice President of Mission Systems at Microsoft, reflected on the intertwined evolution of AI and semiconductor design. Drawing from his DARPA experience, Chappell traced AI’s progression from 2016 onward, highlighting its transformative potential for the EDA community and urging a shift toward a “fourth wave” where AI drives physical transformation.

Chappell began by revisiting pivotal moments in AI’s history, starting with DARPA’s 2014 Cyber Grand Challenge (CGC), a computer-versus-computer capture-the-flag event that spurred advancements in automated cyber reasoning. This led to the 2016 Third Offset Strategy, which envisioned enhanced human-machine teaming to maintain military superiority. Recognizing limitations in existing AI, DARPA’s Information Innovation Office (I2O), under John Launchbury, launched the AI Next program in 2017. Launchbury described the transition from the second wave of AI—focused on statistical learning—to a third wave emphasizing contextual adaptation and reasoning.

Complementing these efforts, Chappell’s Microsystems Technology Office initiated the Electronics Resurgence Initiative (ERI) to intersect AI with hardware innovation, fostering collaboration between industry, government, and academia—core to the DAC community. Unbeknownst to most at the time, Microsoft was developing its AI supercomputer, announced in 2020, which powered the 2022 ChatGPT breakthrough. Chappell noted the rapid scaling: from models with 100 million parameters in 2016 to trillions by 2023, enabling unprecedented capabilities.

Today, AI resides in the third wave, with models like GPT-4 demonstrating reasoning and contextual understanding. Chappell showcased applications in physical discovery, such as Microsoft’s collaboration with Pacific Northwest National Laboratory to develop a battery electrolyte using 70% less lithium. AI screened 32 million candidates, narrowing to 23 viable options through high-performance computing (HPC) and quantum simulations, accelerating discovery from years to months. Similar successes include a PFAS-free data center coolant discovered in ten days and AI-driven vaccine development for monkeypox, where agents decomposed tasks into verifiable steps, reducing design time dramatically.

Chappell emphasized agent-based systems as key to this wave. Unlike traditional models, agents handle long-running tasks through planning, orchestration, and verification, mimicking scientific workflows. In vaccine research, agents generated 500,000 candidate sequences, validated them via simulations, and identified top performers. Applied to silicon design, agents could automate RTL generation, verification, and PPA analysis, integrating domain knowledge from vast datasets.
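
The generate-verify-select pattern described above can be summarized in a short, self-contained Python sketch; the propose and simulate stubs below are hypothetical stand-ins for a generative model and a trusted simulator, not Microsoft’s agents.

    import random

    def propose(seeds, n):
        # Generative step (stub): mutate the best candidates from the previous round.
        base = seeds or [[random.random() for _ in range(8)]]
        return [[x + random.gauss(0, 0.1) for x in random.choice(base)] for _ in range(n)]

    def simulate(candidate):
        # Verification step (stub): stand-in for an expensive, trusted simulation.
        return -sum((x - 0.5) ** 2 for x in candidate)

    def agent_loop(rounds=3, n=200, keep=10):
        survivors = []
        for _ in range(rounds):
            candidates = propose(survivors, n)                       # plan / generate
            ranked = sorted(candidates, key=simulate, reverse=True)  # verify and rank
            survivors = ranked[:keep]                                # keep validated top performers
        return survivors[0]

    best_design = agent_loop()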

However, challenges persist. Chappell discussed the need for trustworthy agents, citing examples where models hallucinate or fail in complex reasoning. Solutions involve grounding agents in knowledge graphs, using HPC for validation, and incorporating human oversight. In EDA, this means leveraging specialized hardware like Intel’s Xeon processors and ensuring interoperability with existing tools.

Looking ahead, Chappell proposed a fourth wave: physical AI, where digital transformation extends to tangible outcomes. The EDA community, adept at bridging software and hardware, is uniquely positioned to lead. By integrating agents into design flows, it can accelerate innovation in semiconductors, from circuit to system levels. Chappell called for rethinking traditional methods, embracing AI’s rigor in manufacturing, and fostering partnerships to realize this vision.

The keynote underscored DAC’s role in uniting stakeholders to navigate hype and harness AI’s real capabilities. As 2025 unfolds as the year of agent integration, Chappell’s insights inspire the community to pioneer physical transformation, ensuring AI not only adapts contextually but reshapes the physical world.

Also Read:

AI-Driven ECAD Library Creation: Streamlining Semiconductor Design

Modern Data Management: Overcoming Bottlenecks in Semiconductor Engineering

AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification



AI-Driven ECAD Library Creation: Streamlining Semiconductor Design
by Admin on 08-01-2025 at 11:00 am


On July 9, 2025, Julie Liu of PalPilot International presented on DACtv, unveiling Footprintku AI, a groundbreaking platform for automating configurable ECAD (Electronic Computer-Aided Design) library creation. This innovative solution addresses the inefficiencies of manual library generation, leveraging AI and automation to enhance productivity, reduce errors, and integrate Design for Manufacturing (DFM) rules early in the semiconductor design process.

The creation of ECAD libraries is a critical yet labor-intensive task in electronics design. Engineers traditionally rely on component suppliers’ PDF datasheets, which detail mechanical and electrical properties, to manually create schematic symbols, footprints, and 3D models. This process requires interpreting complex specifications and applying DFM rules, which vary by manufacturer and are essential for ensuring production-ready designs. Manual creation is time-consuming, error-prone, and struggles to keep pace with the rapid release of new components. As of 2025, over 83 million datasheets exist, and approximately 9 million new parts are released annually, making manual methods unsustainable.

Footprintku AI revolutionizes this process by automating library generation with a focus on accuracy and customization. The platform begins with a robust data-capturing system trained on over one million datasheets, guided by deep domain expertise in ECAD library creation. This system intelligently extracts critical information, including manufacturer details, part names, symbol data (e.g., ball maps), footprint outlines, and electrical properties. By replacing manual datasheet interpretation, it eliminates human error and significantly accelerates the process, ensuring libraries are generated in seconds with precision.

A key feature of Footprintku AI is its configurable DFM-driven library creation engine, accessible through an intuitive user interface. Engineers can input specific DFM requirements, such as solder mask specifications, silkscreen clearances, or keep-out zones, tailoring libraries to meet manufacturing standards early in the design cycle. This proactive integration of DFM rules prevents costly defects and delays during production, enhancing design reliability and time-to-market. The platform’s universal data structure supports a wide range of DFM extensions, allowing users to add custom rules without disrupting existing pipelines. This scalability ensures flexibility for diverse manufacturing needs.
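
To make the idea of configurable DFM rules concrete, here is a hedged Python sketch of what a user-supplied rule set and its application to a single pad might look like; the field names and values are hypothetical, not Footprintku AI’s actual schema.

    # Hypothetical DFM rule set a user might enter through the UI.
    dfm_rules = {
        "solder_mask_expansion_mm": 0.05,
        "silkscreen_clearance_mm": 0.15,
        "courtyard_margin_mm": 0.25,
    }

    def apply_dfm(pad: dict, rules: dict) -> dict:
        """Derive mask and courtyard geometry for a rectangular pad from DFM rules."""
        w, h = pad["width_mm"], pad["height_mm"]
        grow = lambda margin: (w + 2 * margin, h + 2 * margin)
        return {
            "copper": (w, h),
            "solder_mask": grow(rules["solder_mask_expansion_mm"]),
            "courtyard": grow(rules["courtyard_margin_mm"]),
            # Silkscreen clearance and keep-out zones would be handled the same way.
        }

    footprint_pad = apply_dfm({"width_mm": 0.6, "height_mm": 0.9}, dfm_rules)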

Footprintku AI also prioritizes compatibility with industry-standard EDA tools. The platform’s data structure enables seamless export to various EDA formats, allowing engineers to directly incorporate generated libraries into their preferred design environments while preserving DFM information. This interoperability eliminates the need for manual data reformatting, further streamlining workflows and reducing errors.

The presentation included a video showcasing Footprintku AI’s vision: a digital ecosystem connecting component suppliers, distributors, and design companies, akin to how Google Maps digitized paper maps. Engineers can set DFM parameters, select EDA formats, and download production-ready library files instantly, freeing them to focus on creative design tasks. This ecosystem fosters collaboration across the electronics industry, standardizing and accelerating component data integration.

Case studies highlight Footprintku AI’s impact. For example, a design team reduced library creation time by 70% by automating datasheet processing and DFM integration, avoiding weeks of manual work. Another company improved manufacturing yield by embedding custom DFM rules, reducing production errors by 30%. These examples underscore the platform’s ability to enhance efficiency and reliability in high-stakes semiconductor projects.

By combining AI-driven data extraction, configurable DFM integration, and EDA compatibility, Footprintku AI addresses the scalability and accuracy challenges of traditional ECAD library creation. Julie Liu encouraged attendees to visit the company’s booth to explore the platform further, emphasizing its potential to transform the electronics design landscape. As the industry faces growing complexity and volume, Footprintku AI offers a forward-thinking solution to empower engineers, streamline workflows, and shape the future of semiconductor design.

Also Read:

Modern Data Management: Overcoming Bottlenecks in Semiconductor Engineering

AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification

AI-Driven Verification: Transforming Semiconductor Design



Modern Data Management: Overcoming Bottlenecks in Semiconductor Engineering
by Admin on 08-01-2025 at 10:00 am


In a DACtv session on July 9, 2025, Pedro Pires from Keysight’s Design Engineering Software addressed the critical role of data management in modern semiconductor engineering projects. The presentation highlighted why data has become a bottleneck, how Keysight’s SOS (Save Our Source) platform mitigates these challenges, and best practices for optimizing design workflows, emphasizing practical solutions through case studies and actionable insights.

See the full replay here

Semiconductor projects are growing increasingly complex, with larger file sizes, diverse technologies, and intricate methodologies requiring seamless coordination across geographically dispersed teams. This complexity creates significant data management challenges, including collaboration at scale, standardization across tools, and maintaining data traceability. According to a Keysight survey, up to 63% of engineering productivity is lost due to manual data handling and siloed information. Engineers spend 30-40% of their time searching for data, often finding incorrect or outdated information, and an additional 20% fixing errors caused by using the wrong data. For a team of 10 engineers with a fully burdened cost of $285,000 per engineer, this inefficiency translates to approximately $1.8 million in annual losses.
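
The $1.8 million figure follows directly from those survey numbers; the only assumption in this quick Python check is that the 63% productivity loss is applied to the full burdened cost.

    engineers = 10
    burdened_cost_per_engineer = 285_000     # USD per year, from the survey
    productivity_lost = 0.63                 # fraction lost to manual handling and silos

    annual_loss = engineers * burdened_cost_per_engineer * productivity_lost
    print(f"${annual_loss:,.0f}")            # $1,795,500, i.e. roughly $1.8M per year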

The proliferation of complex toolchains exacerbates these issues. Over two-thirds of engineers use six or more software tools, leading to increased manual labor and data correlation challenges across different formats. Data sharing often relies on ad-hoc methods, such as custom scripts, spreadsheets (used by 74% of surveyed engineers), paper notes, and emails (72%), which are inefficient and error-prone. These practices also compromise data security, a top concern for over 50% of industry executives, especially given that one in four global cyberattacks targets manufacturing companies, with an average cost of $5 million per breach.

Keysight’s SOS platform addresses these challenges by providing a robust design data management (DDM) solution tailored for integrated circuit (IC) design. Unlike traditional software configuration tools like Git, optimized for small text files, SOS is designed to handle large IC design files, such as GDS and simulation waveforms, with efficient storage and seamless scalability. It integrates with major EDA tools, ensuring smooth data flow across workflows, and connects with enterprise systems like ERP and PLM for end-to-end traceability. SOS supports compliance with standards like ISO 26262, critical for safety-critical industries, by maintaining auditable trails for design decisions and revisions.

Case studies demonstrate SOS’s impact. For instance, a leading semiconductor company used SOS to streamline multi-site collaboration, reducing data retrieval time by 40% and improving design iteration cycles. Another case involved an SoC project where SOS’s version control and reference management enabled reuse of IP across projects, cutting development time by 25%. These examples highlight SOS’s ability to enhance collaboration, reduce errors, and accelerate time-to-market.

Pires outlined several best practices for effective data management. First, ensure data security through role-based access controls, encryption, and governance automation. SOS allows automated script execution for tasks like link checks, reducing manual effort and ensuring consistency. Second, maintain a clean project structure by modularizing data and using lightweight references for components not actively edited, optimizing performance. Third, define methodologies upfront, including milestones and tagging strategies, to avoid confusion from excessive branches or tags. For heterogeneous tool environments, SOS’s plug-in layer integrates with third-party systems like Git, allowing digital and analog teams to work cohesively. Finally, leverage AI-ready features, such as SOS’s metadata labeling and lineage tracking, to build high-performance MLOps pipelines for future scalability.

In conclusion, poor data management is a liability that hinders semiconductor engineering efficiency. By adopting SOS and following best practices, teams can transform data into an asset, enhancing collaboration, security, and productivity. Pires encouraged attendees to visit Keysight’s booth to explore tailored solutions, emphasizing that addressing data challenges is critical for staying competitive in the evolving semiconductor landscape.

Also Read:

AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification

AI-Driven Verification: Transforming Semiconductor Design

Building Trust in AI-Generated Code for Semiconductor Design



Formal Verification: Why It Matters for Post-Quantum Cryptography
by Daniel Nenni on 08-01-2025 at 10:00 am


Formal verification is becoming essential in the design and implementation of cryptographic systems, particularly as the industry prepares for post-quantum cryptography (PQC). While traditional testing techniques validate correctness over a finite set of scenarios, formal verification uses mathematical proofs to guarantee that cryptographic primitives behave correctly under all possible conditions. This distinction is vital because flaws in cryptographic implementations can lead to catastrophic breaches of confidentiality, integrity, or authenticity.

In cryptographic contexts, formal verification is applied across three primary dimensions: verifying the security of the cryptographic specification, ensuring the implementation aligns precisely with that specification, and confirming resistance to low-level attacks such as side-channel or fault attacks.

The first dimension involves ensuring that the design of a cryptographic primitive fulfills formal security goals. This step requires proving that the algorithm resists a defined set of adversarial behaviors based on established cryptographic hardness assumptions. The second focuses on verifying that the implementation faithfully adheres to the formally specified design. This involves modeling the specification mathematically and using tools like theorem provers or model checkers to validate that the code behaves correctly in every case. The third area concerns proving that the implementation is immune to physical leakage—such as timing or power analysis—that could inadvertently expose secret data. Here, formal methods help ensure constant-time execution and other safety measures.
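
As a concrete example of the low-level property targeted in that third dimension, compare a naive byte comparison with a constant-time one; this is a generic Python illustration of the idea, not PQShield code.

    def compare_leaky(secret: bytes, guess: bytes) -> bool:
        # Returns at the first mismatch, so runtime leaks where the guess diverges.
        if len(secret) != len(guess):
            return False
        for a, b in zip(secret, guess):
            if a != b:
                return False
        return True

    def compare_constant_time(secret: bytes, guess: bytes) -> bool:
        # Touches every byte regardless of content; runtime is independent of the secret.
        if len(secret) != len(guess):
            return False
        diff = 0
        for a, b in zip(secret, guess):
            diff |= a ^ b
        return diff == 0

    # Formal tools can prove the second form executes in time independent of the
    # secret, the kind of guarantee statistical leakage testing alone cannot give.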

Formal verification also contributes to broader program safety by identifying and preventing bugs like buffer overflows, null pointer dereferencing, or other forms of undefined behavior. These bugs, if left unchecked, could become exploitable vulnerabilities. By combining specification security, implementation correctness, and low-level robustness, formal verification delivers a high level of assurance for cryptographic systems.

While powerful, formal verification is often compared to more traditional validation techniques like CAVP (Cryptographic Algorithm Validation Program) and TVLA (Test Vector Leakage Assessment). CAVP ensures functional correctness by running implementations through a series of fixed input-output tests, while TVLA assesses side-channel resistance via statistical analysis. These methods are practical and widely used in certification schemes but inherently limited. They can only validate correctness or leakage resistance across predefined scenarios, which means undiscovered vulnerabilities in untested scenarios may remain hidden.

Formal verification, by contrast, can prove the absence of entire classes of bugs across all input conditions. This level of rigor offers unmatched assurance but comes with trade-offs. It is resource-intensive, requiring specialized expertise, extensive computation, and significant time investment. Additionally, it is sensitive to the accuracy of the formal specifications themselves. If the specification fails to fully capture the intended security properties, then even a correctly verified implementation might still be vulnerable in practice.

Moreover, formal verification is constrained by the scope of what it models. For instance, if the specification doesn’t include side-channel models or hardware-specific concerns, those issues may go unaddressed. Tools used in formal verification can also contain bugs, which introduces the risk of false assurances. To address these issues, developers often employ cross-validation with multiple verification tools and complement formal verification with traditional testing, peer review, and transparency in the verification process.

Despite these limitations, formal verification is increasingly valued, especially in high-assurance sectors like aerospace, defense, and critical infrastructure. Although most certification bodies do not mandate formal verification—favoring test-driven approaches like those in the NIST and Common Criteria frameworks—its use is growing as a differentiator in ensuring cryptographic integrity. As cryptographic systems grow in complexity, particularly with the shift toward post-quantum algorithms, the industry is recognizing that traditional testing alone is no longer sufficient.

PQShield exemplifies this forward-looking approach. The company is actively investing in formal verification as part of its product development strategy. It participates in the Formosa project and contributes to formal proofs for post-quantum cryptographic standards like ML-KEM and ML-DSA. The company has verified its implementation of the Keccak SHA-3 permutation, as well as the polynomial arithmetic and decoding routines in its ML-KEM implementation. PQShield also contributes to the development of EasyCrypt, an open-source proof assistant used for reasoning about cryptographic protocols.

Looking ahead, PQShield plans to extend formal verification across more of its software and hardware offerings. This includes proving the correctness of high-speed hardware accelerators, particularly the arithmetic and sampling units used in PQC schemes. These efforts rely on a mix of internal and open-source tools and demonstrate the company’s commitment to secure-by-design principles.

In conclusion, formal verification offers critical advantages for cryptographic security, particularly as the industry transitions to post-quantum systems. It complements conventional testing methods by addressing their limitations and providing strong guarantees of correctness, robustness, and resistance to attack. While not yet universally mandated in certification schemes, formal verification is fast becoming a cornerstone of next-generation cryptographic assurance—and companies like PQShield are leading the way in putting it into practice.

You can download the paper here.

Also See:

Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey

Podcast EP285: The Post-Quantum Cryptography Threat and Why Now is the Time to Prepare with Michele Sartori

PQShield Demystifies Post-Quantum Cryptography with Leadership Lounge