AI and Machine Learning in Chip Design: DAC Keynote Insights
by Admin on 08-01-2025 at 2:00 pm

In a keynote at the 62nd Design Automation Conference (#62DAC) on July 8, 2025, Jason Cong, Volgenau Chair for Engineering Excellence at UCLA, reflected on more than 30 years in the DAC community, highlighting the transformative role of AI and machine learning (ML) in semiconductor design. Cong, whose first DAC paper, in 1988, was on two-layer channel routing, contrasted the era of Intel’s 386 processor (275,000 transistors at 1.5 microns) with today’s marvels like NVIDIA’s B200 (208 billion transistors at 4nm) and Micron’s 3D NAND (5.3 trillion transistors in over 200 layers). This evolution underscores integrated circuits as among the most complex man-made objects, designed in just 12-18 months compared to decades for projects like the International Space Station.

The design flow begins with system specifications in languages like C++ or SystemC, synthesized to RTL (VHDL/Verilog), then to Boolean equations. Logic synthesis optimizes for area, power, and timing, followed by physical design stages: floorplanning, placement, clock tree synthesis, routing, and sign-off verification. Challenges include exploding complexity—trillions of transistors, 3D stacking, and heterogeneous integration—coupled with power constraints and shrinking timelines. Traditional methods struggle with NP-hard problems like placement and routing, where exhaustive search is infeasible.

Enter AI/ML as game-changers. The speaker advocated treating EDA problems as data-driven, leveraging ML for optimization. Key applications include:

  • Placement and Routing: ML models predict wirelength, congestion, and timing, outperforming hand-tuned heuristics. Techniques like graph neural networks (GNNs) and reinforcement learning (RL) guide macro placement, achieving 20-30% better PPA (power, performance, area). Tools like Google’s Circuit Training use RL for chip floorplanning. (A toy wirelength-driven placement loop is sketched after this list.)

  • Verification: ML aids bug detection in RTL, predicting coverage gaps and generating stimuli. Analog verification uses surrogate models for faster simulations, reducing runtime from days to minutes.

  • Lithography and Manufacturing: ML corrects optical proximity effects, predicts hotspots, and optimizes masks. Generative models design resolution enhancement techniques, while RL tunes process parameters.

  • Analog Design: Traditionally manual, ML automates sizing and layout. Bayesian optimization and generative adversarial networks (GANs) create layouts, with RL fine-tuning for performance.
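
To make the underlying optimization problem concrete, here is a minimal, self-contained sketch (in Python, with a made-up four-cell netlist) of the classic wirelength-driven placement loop that the learned predictors above guide or replace: simulated annealing over a small grid, minimizing half-perimeter wirelength (HPWL). It illustrates the search problem only; production placers and the ML methods described operate on millions of cells.

```python
import random, math

# Toy macro placement: minimize total half-perimeter wirelength (HPWL)
# with simulated annealing. The netlist and grid size are hypothetical.

NETS = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
CELLS = ["a", "b", "c", "d"]
GRID = 8  # 8x8 placement grid

def hpwl(pos):
    """Half-perimeter wirelength summed over all nets."""
    total = 0
    for net in NETS:
        xs = [pos[c][0] for c in net]
        ys = [pos[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(steps=20000, t0=5.0):
    pos = {c: (random.randrange(GRID), random.randrange(GRID)) for c in CELLS}
    cost = hpwl(pos)
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-3        # linear cooling schedule
        c = random.choice(CELLS)
        old = pos[c]
        pos[c] = (random.randrange(GRID), random.randrange(GRID))
        new = hpwl(pos)
        if new > cost and random.random() > math.exp((cost - new) / t):
            pos[c] = old                        # reject uphill move
        else:
            cost = new                          # accept move
    return pos, cost

if __name__ == "__main__":
    placement, wl = anneal()
    print("final HPWL:", wl, placement)
```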

The speaker emphasized hybrid approaches: ML augments, not replaces, traditional methods. For instance, in logic synthesis, ML predicts post-synthesis metrics to guide transformations. In physical design, ML-based predictors integrate into flows for real-time feedback.
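
As a hedged illustration of that predictor-in-the-loop idea, the sketch below ranks candidate logic-synthesis transformation sequences with a stand-in cost model before any expensive synthesis run. The move names, features, and weights are hypothetical, not those of any real tool.

```python
import itertools

MOVES = ["balance", "rewrite", "refactor", "resub"]  # abstract transform names

def features(seq):
    # Hypothetical features: count of each move plus sequence length.
    return [seq.count(m) for m in MOVES] + [len(seq)]

def predicted_delay(seq, w=(-0.8, -1.2, -0.5, -0.9, 0.3), bias=100.0):
    # Stand-in for a trained regression model mapping features to a
    # post-synthesis delay estimate; weights are invented for illustration.
    return bias + sum(wi * fi for wi, fi in zip(w, features(seq)))

candidates = [list(s) for s in itertools.product(MOVES, repeat=3)]
ranked = sorted(candidates, key=predicted_delay)
print("top-3 sequences to actually synthesize:", ranked[:3])
```

Only the top-ranked sequences would then be handed to the real synthesis tool, which is where the runtime savings come from.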

Looking ahead, the outlook is multi-agent systems combining human and machine intelligence. The goal: enable software programmers to design chips as easily as writing PyTorch libraries. Undergraduate courses already demonstrate this, with students building CNN accelerators on AWS F1 clouds using high-level synthesis.

Challenges remain: data scarcity, model generalization, and integration into existing tools. The speaker stressed deep problem understanding over superficial AI applications, urging cross-disciplinary collaboration; he noted that work funded by NSF and the PRISM center shows how partnerships with domain experts yield innovations in ML for EDA.

In conclusion, AI/ML is revolutionizing chip design, addressing complexity and accelerating innovation. As Moore’s Law evolves into “More than Moore” with 3D and heterogeneous systems, this DAC community-driven synergy promises a future where chip design is democratized, efficient, and impactful.

Also Read:

Enabling the AI Revolution: Insights from AMD’s DAC Keynote

AI Evolution and EDA’s Role in the Fourth Wave: William Chappell’s DAC Keynote

AI-Driven ECAD Library Creation: Streamlining Semiconductor Design


Enabling the AI Revolution: Insights from AMD’s DAC Keynote
by Admin on 08-01-2025 at 1:00 pm

In a keynote at the 62nd Design Automation Conference (DAC) on July 8, 2025, Michaela Blott, AMD Senior Fellow, explored the trends shaping the AI revolution, emphasizing inference efficiency and hardware customization. While acknowledging AMD’s efforts in scaling GPUs and achieving energy-efficiency goals (30x by 2025, with new targets for 2030), she focused on emerging industry dynamics, offering food for thought for the DAC community rather than specific solutions.

The talk began with a disclaimer on AI’s diverse viewpoints. As an empirical science, AI lacks fundamental understanding; researchers run experiments, observe patterns, and derive scaling laws. The information bandwidth from AI research overwhelms individuals, leading to reliance on trusted experts and resulting in polarized beliefs. This is evident in debates over Artificial General Intelligence (AGI). “Bulls” argue AGI is achievable through scaling models with more compute and data, driven by competitive fears. “Bears” counter that scaling alone is insufficient, citing diminishing returns and the need for breakthroughs in reasoning and planning.

A key shift highlighted was from training to inference. Training, centralized in data centers, focuses on model creation, but inference—deploying models for real-world use—is distributed and power-intensive. The speaker noted that generating a single AI response can consume the equivalent of a cup of water in data-center cooling, underscoring sustainability concerns. With inference dominating AI workloads (90% by some estimates), efficiency optimizations are crucial for widespread adoption.

Algorithmic advancements offer promise. Quantization reduces precision (e.g., from FP32 to INT8), cutting power and memory needs while maintaining accuracy through quantization-aware training. New architectures like RecurrentGemma and Mamba use recurrent neural networks, achieving efficiency gains—RecurrentGemma matches transformer performance with 40% less memory. Mixture of Experts (MoE) models activate subsets of parameters, enabling trillion-parameter models with billion-parameter efficiency. Emerging techniques, such as test-time scaling and synthetic data generation, further enhance capabilities without proportional resource increases.
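
A minimal sketch of the affine quantization mechanism behind those FP32-to-INT8 savings, using NumPy; real flows add per-channel scales, calibration data, and quantization-aware training.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine (asymmetric) quantization of a float tensor to signed ints."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(4, 4).astype(np.float32)   # stand-in weight tensor
q, s, z = quantize(w)
err = np.abs(w - dequantize(q, s, z)).max()
print(f"max reconstruction error: {err:.4f} (scale={s:.4f}, zp={z})")
```

The INT8 tensor needs a quarter of the memory of FP32 and maps onto cheaper integer arithmetic, which is exactly the trade the paragraph above describes.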

Hardware customization presents a “humongous opportunity.” The speaker advocated for domain-specific accelerators, moving beyond general-purpose GPUs. Tools like AMD’s Vitis AI ecosystem, including Brevitas for quantization training and BrainSmith for deployment, enable end-to-end flows. An example demonstrated quantizing a YOLOv8 model for edge devices, achieving real-time performance on low-power hardware. Customization exploits data types, sparsity, and architecture tweaks, potentially yielding 10-100x efficiency gains.

The DAC community’s role is pivotal. As AI disrupts industries—from automotive sensors to medical diagnostics—inference optimizations will drive revenue-generating deployments. The speaker stressed that superhuman AI on specific tasks is already here, without needing AGI. Design automation is more critical than ever, with AI integration necessary for agility amid rapid innovation.

In summary, the AI revolution hinges on addressing inference inefficiencies through algorithmic and hardware advancements. The DAC audience was urged to focus on customization tooling and AI-augmented design flows. As AI becomes pervasive, solving these challenges will unlock transformative applications, ensuring sustainable, scalable intelligence.

Also Read:

AI Evolution and EDA’s Role in the Fourth Wave: William Chappell’s DAC Keynote

AI-Driven ECAD Library Creation: Streamlining Semiconductor Design

Modern Data Management: Overcoming Bottlenecks in Semiconductor Engineering


AI Evolution and EDA’s Role in the Fourth Wave: William Chappell’s DAC Keynote
by Admin on 08-01-2025 at 12:00 pm

In a keynote at the 62nd Design Automation Conference (DAC) on July 8, 2025, William Chappell, Vice President of Mission Systems at Microsoft, reflected on the intertwined evolution of AI and semiconductor design. Drawing from his DARPA experience, Chappell traced AI’s progression from 2016 onward, highlighting its transformative potential for the EDA community and urging a shift toward a “fourth wave” where AI drives physical transformation.

Chappell began by revisiting pivotal moments in AI’s history, starting with DARPA’s 2014 Cyber Grand Challenge (CGC), a computer-versus-computer capture-the-flag event that spurred advancements in automated cyber reasoning. This led to the 2016 Third Offset Strategy, which envisioned enhanced human-machine teaming to maintain military superiority. Recognizing limitations in existing AI, DARPA’s Information Innovation Office (I2O), under John Launchbury, launched the AI Next program in 2017. Launchbury described the transition from the second wave of AI—focused on statistical learning—to a third wave emphasizing contextual adaptation and reasoning.

Complementing these efforts, Chappell’s Microsystems Technology Office initiated the Electronics Resurgence Initiative (ERI) to intersect AI with hardware innovation, fostering collaboration between industry, government, and academia—core to the DAC community. Unbeknownst to most at the time, Microsoft was developing its AI supercomputer, announced in 2020, which powered the 2022 ChatGPT breakthrough. Chappell noted the rapid scaling: from models with 100 million parameters in 2016 to trillions by 2023, enabling unprecedented capabilities.

Today, AI resides in the third wave, with models like GPT-4 demonstrating reasoning and contextual understanding. Chappell showcased applications in physical discovery, such as Microsoft’s collaboration with Pacific Northwest National Laboratory to develop a battery electrolyte using 70% less lithium. AI screened 32 million candidates, narrowing to 23 viable options through high-performance computing (HPC) and quantum simulations, accelerating discovery from years to months. Similar successes include a PFAS-free data center coolant discovered in ten days and AI-driven vaccine development for monkeypox, where agents decomposed tasks into verifiable steps, reducing design time dramatically.

Chappell emphasized agent-based systems as key to this wave. Unlike traditional models, agents handle long-running tasks through planning, orchestration, and verification, mimicking scientific workflows. In vaccine research, agents generated 500,000 candidate sequences, validated them via simulations, and identified top performers. Applied to silicon design, agents could automate RTL generation, verification, and PPA analysis, integrating domain knowledge from vast datasets.
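
The generate-validate-select funnel Chappell described can be sketched schematically. In the toy Python below, random sequence generation stands in for a generative model and a trivial scoring function stands in for HPC or simulation-based validation; both are assumptions for illustration only.

```python
import random

def generate(n):
    """Cheap, massively parallel candidate generation (toy stand-in)."""
    return ["".join(random.choice("ACGT") for _ in range(20)) for _ in range(n)]

def validate(candidate):
    """Stand-in for an expensive, verifiable simulation step."""
    return candidate.count("GC")   # hypothetical stability proxy

def select(candidates, k=5):
    """Keep only the top performers after validation."""
    return sorted(candidates, key=validate, reverse=True)[:k]

pool = generate(10000)   # wide funnel in
winners = select(pool)   # validation gates the funnel
print(winners)
```

The same decomposition—plan a wide search, validate each step verifiably, keep the survivors—is what makes the pattern transferable to RTL generation and PPA analysis.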

However, challenges persist. Chappell discussed the need for trustworthy agents, citing examples where models hallucinate or fail in complex reasoning. Solutions involve grounding agents in knowledge graphs, using HPC for validation, and incorporating human oversight. In EDA, this means leveraging specialized hardware like Intel’s Xeon processors and ensuring interoperability with existing tools.

Looking ahead, Chappell proposed a fourth wave: physical AI, where digital transformation extends to tangible outcomes. The EDA community, adept at bridging software and hardware, is uniquely positioned to lead. By integrating agents into design flows, it can accelerate innovation in semiconductors, from circuit to system levels. Chappell called for rethinking traditional methods, embracing AI’s rigor in manufacturing, and fostering partnerships to realize this vision.

The keynote underscored DAC’s role in uniting stakeholders to navigate hype and harness AI’s real capabilities. As 2025 unfolds as the year of agent integration, Chappell’s insights inspire the community to pioneer physical transformation, ensuring AI not only adapts contextually but reshapes the physical world.

Also Read:

AI-Driven ECAD Library Creation: Streamlining Semiconductor Design

Modern Data Management: Overcoming Bottlenecks in Semiconductor Engineering

AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification


AI-Driven ECAD Library Creation: Streamlining Semiconductor Design
by Admin on 08-01-2025 at 11:00 am

On July 9, 2025, Julie Liu of PalPilot International presented for DACtv, unveiling Footprintku AI, a groundbreaking platform for automating configurable ECAD (Electronic Computer-Aided Design) library creation. This innovative solution addresses the inefficiencies of manual library generation, leveraging AI and automation to enhance productivity, reduce errors, and integrate Design for Manufacturing (DFM) rules early in the semiconductor design process.

The creation of ECAD libraries is a critical yet labor-intensive task in electronics design. Engineers traditionally rely on component suppliers’ PDF datasheets, which detail mechanical and electrical properties, to manually create schematic symbols, footprints, and 3D models. This process requires interpreting complex specifications and applying DFM rules, which vary by manufacturer and are essential for ensuring production-ready designs. Manual creation is time-consuming, error-prone, and struggles to keep pace with the rapid release of new components. In 2025 alone, over 83 million datasheets exist, with approximately 9 million new parts released annually, making manual methods unsustainable.

Footprintku AI revolutionizes this process by automating library generation with a focus on accuracy and customization. The platform begins with a robust data-capturing system trained on over one million datasheets, guided by deep domain expertise in ECAD library creation. This system intelligently extracts critical information, including manufacturer details, part names, symbol data (e.g., ball maps), footprint outlines, and electrical properties. By replacing manual datasheet interpretation, it eliminates human error and significantly accelerates the process, ensuring libraries are generated in seconds with precision.

A key feature of Footprintku AI is its configurable DFM-driven library creation engine, accessible through an intuitive user interface. Engineers can input specific DFM requirements, such as solder mask specifications, silkscreen clearances, or keep-out zones, tailoring libraries to meet manufacturing standards early in the design cycle. This proactive integration of DFM rules prevents costly defects and delays during production, enhancing design reliability and time-to-market. The platform’s universal data structure supports a wide range of DFM extensions, allowing users to add custom rules without disrupting existing pipelines. This scalability ensures flexibility for diverse manufacturing needs.
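
Footprintku AI’s internals are proprietary, but the configurable-rules idea can be illustrated with a hypothetical sketch: a small set of DFM parameters applied during footprint generation. All names and default values below are illustrative assumptions, not the platform’s API.

```python
from dataclasses import dataclass

@dataclass
class DFMRules:
    # Hypothetical per-manufacturer DFM parameters (millimeters).
    solder_mask_expansion_mm: float = 0.05
    silkscreen_clearance_mm: float = 0.15
    keepout_margin_mm: float = 0.25

@dataclass
class Pad:
    x: float
    y: float
    w: float
    h: float

def mask_opening(pad: Pad, rules: DFMRules) -> Pad:
    """Grow the copper pad by the solder-mask expansion on each side."""
    e = rules.solder_mask_expansion_mm
    return Pad(pad.x, pad.y, pad.w + 2 * e, pad.h + 2 * e)

strict_fab = DFMRules(solder_mask_expansion_mm=0.03)  # per-fab override
print(mask_opening(Pad(0, 0, 0.5, 0.6), strict_fab))
```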

Footprintku AI also prioritizes compatibility with industry-standard EDA tools. The platform’s data structure enables seamless export to various EDA formats, allowing engineers to directly incorporate generated libraries into their preferred design environments while preserving DFM information. This interoperability eliminates the need for manual data reformatting, further streamlining workflows and reducing errors.

The presentation included a video showcasing Footprintku AI’s vision: a digital ecosystem connecting component suppliers, distributors, and design companies, akin to how Google Maps digitized paper maps. Engineers can set DFM parameters, select EDA formats, and download production-ready library files instantly, freeing them to focus on creative design tasks. This ecosystem fosters collaboration across the electronics industry, standardizing and accelerating component data integration.

Case studies highlight Footprintku AI’s impact. For example, a design team reduced library creation time by 70% by automating datasheet processing and DFM integration, avoiding weeks of manual work. Another company improved manufacturing yield by embedding custom DFM rules, reducing production errors by 30%. These examples underscore the platform’s ability to enhance efficiency and reliability in high-stakes semiconductor projects.

By combining AI-driven data extraction, configurable DFM integration, and EDA compatibility, Footprintku AI addresses the scalability and accuracy challenges of traditional ECAD library creation. Julie Liu encouraged attendees to visit PalPilot’s booth to explore the platform further, emphasizing its potential to transform the electronics design landscape. As the industry faces growing complexity and volume, Footprintku AI offers a forward-thinking solution to empower engineers, streamline workflows, and shape the future of semiconductor design.

Also Read:

Modern Data Management: Overcoming Bottlenecks in Semiconductor Engineering

AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification

AI-Driven Verification: Transforming Semiconductor Design


Modern Data Management: Overcoming Bottlenecks in Semiconductor Engineering
by Admin on 08-01-2025 at 10:00 am

In a DACtv session on July 9, 2025, Pedro Pires from Keysight’s Design Engineering Software addressed the critical role of data management in modern semiconductor engineering projects. The presentation highlighted why data has become a bottleneck, how Keysight’s SOS (Save Our Source) platform mitigates these challenges, and best practices for optimizing design workflows, emphasizing practical solutions through case studies and actionable insights.

See the full replay here

Semiconductor projects are growing increasingly complex, with larger file sizes, diverse technologies, and intricate methodologies requiring seamless coordination across geographically dispersed teams. This complexity creates significant data management challenges, including collaboration at scale, standardization across tools, and maintaining data traceability. According to a Keysight survey, up to 63% of engineering productivity is lost to manual data handling and siloed information. Engineers spend 30-40% of their time searching for data, often finding incorrect or outdated information, and an additional 20% fixing errors caused by using the wrong data. For a team of 10 engineers at a fully burdened cost of $285,000 per engineer, this inefficiency translates to approximately $1.8 million in annual losses (63% of 10 × $285,000 ≈ $1.8 million).

The proliferation of complex toolchains exacerbates these issues. Over two-thirds of engineers use six or more software tools, leading to increased manual labor and data correlation challenges across different formats. Data sharing often relies on ad-hoc methods, such as custom scripts, spreadsheets (used by 74% of surveyed engineers), paper notes, and emails (72%), which are inefficient and error-prone. These practices also compromise data security, a top concern for over 50% of industry executives, especially given that one in four global cyberattacks targets manufacturing companies, with an average cost of $5 million per breach.

Keysight’s SOS platform addresses these challenges by providing a robust design data management (DDM) solution tailored for integrated circuit (IC) design. Unlike traditional software configuration management tools such as Git, which are optimized for small text files, SOS is designed to handle large IC design files, such as GDS and simulation waveforms, with efficient storage and seamless scalability. It integrates with major EDA tools, ensuring smooth data flow across workflows, and connects with enterprise systems like ERP and PLM for end-to-end traceability. SOS supports compliance with standards like ISO 26262, critical for safety-critical industries, by maintaining auditable trails for design decisions and revisions.

Case studies demonstrate SOS’s impact. For instance, a leading semiconductor company used SOS to streamline multi-site collaboration, reducing data retrieval time by 40% and improving design iteration cycles. Another case involved an SoC project where SOS’s version control and reference management enabled reuse of IP across projects, cutting development time by 25%. These examples highlight SOS’s ability to enhance collaboration, reduce errors, and accelerate time-to-market.

Pires outlined several best practices for effective data management. First, ensure data security through role-based access controls, encryption, and governance automation. SOS allows automated script execution for tasks like link checks, reducing manual effort and ensuring consistency. Second, maintain a clean project structure by modularizing data and using lightweight references for components not actively edited, optimizing performance. Third, define methodologies upfront, including milestones and tagging strategies, to avoid confusion from excessive branches or tags. For heterogeneous tool environments, SOS’s plug-in layer integrates with third-party systems like Git, allowing digital and analog teams to work cohesively. Finally, leverage AI-ready features, such as SOS’s metadata labeling and lineage tracking, to build high-performance MLOps pipelines for future scalability.
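
To illustrate the metadata-labeling and lineage-tracking idea in that last practice, here is a generic sketch (not the SOS API) that records a content hash plus provenance labels for a design artifact, the kind of record an MLOps pipeline could consume. File names and labels are hypothetical.

```python
import hashlib, json, time

def lineage_record(path, parents, labels):
    """Record what an artifact is, where it came from, and how to find it."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "artifact": path,
        "sha256": digest,            # content hash for reproducibility
        "parents": parents,          # upstream artifacts this was derived from
        "labels": labels,            # searchable metadata for pipelines
        "timestamp": time.time(),
    }

open("top.gds", "wb").write(b"dummy gds contents")  # stand-in design file
rec = lineage_record("top.gds", parents=["top.v", "floorplan.def"],
                     labels={"block": "top", "milestone": "tapeout_rc1"})
print(json.dumps(rec, indent=2))
```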

In conclusion, poor data management is a liability that hinders semiconductor engineering efficiency. By adopting SOS and following best practices, teams can transform data into an asset, enhancing collaboration, security, and productivity. Pires encouraged attendees to visit Keysight’s booth to explore tailored solutions, emphasizing that addressing data challenges is critical for staying competitive in the evolving semiconductor landscape.

Also Read:

AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification

AI-Driven Verification: Transforming Semiconductor Design

Building Trust in AI-Generated Code for Semiconductor Design


Formal Verification: Why It Matters for Post-Quantum Cryptography
by Daniel Nenni on 08-01-2025 at 10:00 am

Formal verification is becoming essential in the design and implementation of cryptographic systems, particularly as the industry prepares for post-quantum cryptography (PQC). While traditional testing techniques validate correctness over a finite set of scenarios, formal verification uses mathematical proofs to guarantee that cryptographic primitives behave correctly under all possible conditions. This distinction is vital because flaws in cryptographic implementations can lead to catastrophic breaches of confidentiality, integrity, or authenticity.

In cryptographic contexts, formal verification is applied across three primary dimensions: verifying the security of the cryptographic specification, ensuring the implementation aligns precisely with that specification, and confirming resistance to low-level attacks such as side-channel or fault attacks.

The first dimension involves ensuring that the design of a cryptographic primitive fulfills formal security goals. This step requires proving that the algorithm resists a defined set of adversarial behaviors based on established cryptographic hardness assumptions. The second focuses on verifying that the implementation faithfully adheres to the formally specified design. This involves modeling the specification mathematically and using tools like theorem provers or model checkers to validate that the code behaves correctly in every case. The third area concerns proving that the implementation is immune to physical leakage—such as timing or power analysis—that could inadvertently expose secret data. Here, formal methods help ensure constant-time execution and other safety measures.
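
The third dimension can be made concrete with a small Python example: a naive byte comparison leaks, through its running time, how many leading bytes of a secret match, while a constant-time comparison does not.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Functionally correct, but NOT constant-time."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:        # early exit: runtime depends on secret data
            return False
    return True

secret = b"\x13\x37" * 16
guess = b"\x13\x00" * 16

print(naive_equal(secret, guess))           # timing varies with match length
print(hmac.compare_digest(secret, guess))   # stdlib constant-time comparison
```

Formal side-channel analysis aims to prove that no execution path or memory access pattern depends on secret values, ruling out the naive version by construction.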

Formal verification also contributes to broader program safety by identifying and preventing bugs like buffer overflows, null pointer dereferencing, or other forms of undefined behavior. These bugs, if left unchecked, could become exploitable vulnerabilities. By combining specification security, implementation correctness, and low-level robustness, formal verification delivers a high level of assurance for cryptographic systems.

While powerful, formal verification is often compared to more traditional validation techniques like CAVP (Cryptographic Algorithm Validation Program) and TVLA (Test Vector Leakage Assessment). CAVP ensures functional correctness by running implementations through a series of fixed input-output tests, while TVLA assesses side-channel resistance via statistical analysis. These methods are practical and widely used in certification schemes but inherently limited. They can only validate correctness or leakage resistance across predefined scenarios, which means undiscovered vulnerabilities in untested scenarios may remain hidden.

Formal verification, by contrast, can prove the absence of entire classes of bugs across all input conditions. This level of rigor offers unmatched assurance but comes with trade-offs. It is resource-intensive, requiring specialized expertise, extensive computation, and significant time investment. Additionally, it is sensitive to the accuracy of the formal specifications themselves. If the specification fails to fully capture the intended security properties, then even a correctly verified implementation might still be vulnerable in practice.
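
A toy contrast makes the “all input conditions” point tangible. The sketch below plants a bug in a saturating 8-bit adder that fires on exactly one of 65,536 input pairs: random testing will almost surely miss it, while an exhaustive check (feasible here only because the domain is tiny, and standing in for a formal proof) finds it immediately.

```python
import random

def spec(a, b):
    """The intended behavior: saturate at 0xFF."""
    return min(a + b, 0xFF)

def sat_add8(a, b):
    if a == 0x7F and b == 0x01:   # planted bug: one bad pair in 65,536
        return 0
    return min(a + b, 0xFF)

# Random testing: 1,000 trials almost surely miss the single failing pair.
found_by_testing = False
for _ in range(1000):
    a, b = random.randrange(256), random.randrange(256)
    if sat_add8(a, b) != spec(a, b):
        found_by_testing = True

# Exhaustive check over the whole input space: the spirit of proving
# "no violations for any input".
violations = [(a, b) for a in range(256) for b in range(256)
              if sat_add8(a, b) != spec(a, b)]

print("random testing found the bug:", found_by_testing)
print("exhaustive check found:", violations)
```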

Moreover, formal verification is constrained by the scope of what it models. For instance, if the specification doesn’t include side-channel models or hardware-specific concerns, those issues may go unaddressed. Tools used in formal verification can also contain bugs, which introduces the risk of false assurances. To address these issues, developers often employ cross-validation with multiple verification tools and complement formal verification with traditional testing, peer review, and transparency in the verification process.

Despite these limitations, formal verification is increasingly valued, especially in high-assurance sectors like aerospace, defense, and critical infrastructure. Although most certification bodies do not mandate formal verification—favoring test-driven approaches like those in the NIST and Common Criteria frameworks—its use is growing as a differentiator in ensuring cryptographic integrity. As cryptographic systems grow in complexity, particularly with the shift toward post-quantum algorithms, the industry is recognizing that traditional testing alone is no longer sufficient.

PQShield exemplifies this forward-looking approach. The company is actively investing in formal verification as part of its product development strategy. It participates in the Formosa project and contributes to formal proofs for post-quantum cryptographic standards like ML-KEM and ML-DSA. The company has verified its implementation of the Keccak SHA-3 permutation, as well as the polynomial arithmetic and decoding routines in its ML-KEM implementation. PQShield also contributes to the development of EasyCrypt, an open-source proof assistant used for reasoning about cryptographic protocols.

Looking ahead, PQShield plans to extend formal verification across more of its software and hardware offerings. This includes proving the correctness of high-speed hardware accelerators, particularly the arithmetic and sampling units used in PQC schemes. These efforts rely on a mix of internal and open-source tools and demonstrate the company’s commitment to secure-by-design principles.

In conclusion, formal verification offers critical advantages for cryptographic security, particularly as the industry transitions to post-quantum systems. It complements conventional testing methods by addressing their limitations and providing strong guarantees of correctness, robustness, and resistance to attack. While not yet universally mandated in certification schemes, formal verification is fast becoming a cornerstone of next-generation cryptographic assurance—and companies like PQShield are leading the way in putting it into practice.

You can download the paper here.

Also See:

Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey

Podcast EP285: The Post-Quantum Cryptography Threat and Why Now is the Time to Prepare with Michele Sartori

PQShield Demystifies Post-Quantum Cryptography with Leadership Lounge


CEO Interview with Bob Fung of Owens Design
by Daniel Nenni on 08-01-2025 at 10:00 am

Bob Fung is the CEO of Owens Design, a Silicon Valley company specializing in the design and build of complex equipment that powers high-tech manufacturing. Over his 22-year tenure, Bob has led the development of more than 200 custom systems for world-class companies across the semiconductor, biomedical, energy, and emerging tech sectors, solving their most demanding equipment challenges. Under his leadership, Owens achieved a 10x revenue increase while maintaining a 100% delivery record, reflecting its engineering excellence and unwavering customer commitment.

Tell us about your company

At Owens Design, our story began in 1983 with a bold vision: to build a company where world-class engineers and technicians collaborate seamlessly to design and manufacture the next generation of high-tech equipment. Today, that vision is realized through our legacy of delivering over 3,000 custom tools across various industries, including semiconductors, renewable energy, medical devices, hard disk drives, and emerging technologies. We are proud to maintain a 100% on-time delivery record, a testament to our culture of precision, partnership, and performance.

What sets Owens Design apart is our deep understanding of complex equipment engineering and our ability to design production-ready prototypes and rapidly scale manufacturing to meet our customers’ growth. Our customers don’t just come to us to design equipment; they come to us to co-develop future-proof solutions. We offer turnkey services that span custom design, precision engineering, prototyping, pilot builds, and scalable manufacturing. This integrated approach minimizes risk, compresses development cycles, and enables rapid production ramp-up in fast-evolving markets.

What problems are you solving?

At Owens Design, we help high-tech innovators turn intellectual property (IP) into fab-ready systems and enable new processes with production-capable equipment that scales. We focus on sectors that require precision, speed, and scalability, where standard solutions often don’t suffice.

Across the semiconductor and electronics manufacturing sectors, the increasing complexity of products is reshaping equipment requirements. Advanced technologies such as chiplet integration, 3D packaging, and heterogeneous system design demand highly customized tools that can meet exact standards for precision, reliability, and integration capability.

At the same time, companies face compressed development timelines and pressure to bring new solutions to market faster. Owens Design addresses these needs by engineering application-specific equipment that enables breakthrough innovations, which are tightly aligned with each customer’s performance and production goals.

As the industry shifts toward more regionalized manufacturing and supply chain resilience, companies are reevaluating their approach to equipment strategy. There is a growing need for agile, scalable platforms that can adapt to rapidly changing product roadmaps and evolving production environments. We support that transition by working closely with our clients’ R&D and operations teams, acting as an extension of their organization to deliver complex tools on accelerated timelines while maintaining the engineering rigor that’s core to their success.

What are your strongest application areas?

Being rooted in Silicon Valley, we’ve grown alongside the semiconductor industry and built deep expertise in the design of complex equipment. Many of our customers are semiconductor OEMs, ranging from early-stage startups to Tier-1 equipment manufacturers, who are looking to bring advanced, high-precision systems to market quickly. They turn to us not just because we understand the technical demands of semiconductor tools but because we consistently deliver on tight timelines with the engineering depth, domain knowledge and execution reliability they need.

As Silicon Valley has evolved into a hub for a broader range of advanced technologies, Owens Design has evolved with it. Today, we apply the same level of rigor and creativity to automation challenges in renewable energy, data storage, medical devices, and emerging tech. These projects often involve highly specialized needs, such as advanced laser processing, precision robotics, or the automated handling of fragile materials. In many cases, standard solutions don’t meet the requirements, and that’s where our broad technical experience becomes essential.

What connects all of our work is the ability to take on complex programs while moving quickly and maintaining high quality. Our development process is designed to compress timelines and give customers confidence from concept through production. For over 40 years, we’ve maintained a 100% delivery record by focusing on areas where we can deliver exceptional results and by committing to meet our customers’ needs. That combination of discipline and engineering versatility is what continues to set Owens Design apart.

What keeps your customers up at night?

For semiconductor OEMs, whether they’re early-stage startups or established Tier-1 OEMs, the pressure to move fast while getting it right the first time is intense. They’re trying to bring highly complex systems to market on aggressive timelines, and every delay or design misstep can have real commercial consequences. What we hear most often is concern about bridging the gap between a promising concept and a reliable, production-ready tool, especially when resources are limited, and there is no room for second chances.

These teams aren’t just looking for a contract manufacturer; they’re seeking a partner who thoroughly understands semiconductor equipment. Someone who can engage early, ask the right questions and design a system that meets both performance specs and production realities. Reputation matters in this space. If you’re a startup, getting into the fab with a tool that doesn’t perform can be a deal breaker. And if you’re a Tier-1 company, quality and consistency are non-negotiable across your entire roadmap.

That’s why we place such a strong emphasis on early alignment. We work closely with customers to de-risk development from day one, bringing decades of domain expertise, a proven process, and a sense of urgency that matches theirs. Ultimately, it’s about giving them the confidence that they’ll reach the market quickly with a system that works, scales, and earns the trust of their end users.

What does the competitive landscape look like, and how do you differentiate?

The equipment development space is becoming increasingly specialized as technologies grow more complex and timelines get tighter. While there are many players offering contract manufacturing or niche engineering services, very few are structured to provide proper end-to-end support from early design definition through to production-ready delivery. That’s where Owens Design stands apart.

What differentiates us is our ability to engage early in the product lifecycle, even when requirements are still evolving, and carry that design through to a production-ready, scalable tool. Many other service providers are either focused on early-stage prototyping or late-stage manufacturing. However, few can bridge both sides with the same level of technical depth and delivery reliability. Owens Design has the ability to close the gap.

What new features/technology are you working on?

We’re focused on adding value for our customers and finding new ways to address what they need most. In the semiconductor capital equipment space, we’re hearing a strong message that customers need to get to market even faster and want help navigating their customers’ expectations and fab requirements, including SEMI spec compliance, particle control, vibration control, and fab interface software. With artificial intelligence accelerating the advanced packaging market, we’re seeing many new technologies being developed to help improve yield. There is high interest in our experience developing inspection and metrology equipment to help new technologies get into production quickly, which has led to our most recent initiative, PR:IME™. This new platform accelerates the commercialization of these technologies.

What is the PR:IME platform, and how does it accelerate the development of semiconductor inspection and metrology tools?

The idea behind PR:IME came from a recurring challenge we’ve observed over the years: the time it takes to bring inspection and metrology tools from concept to something ready for deployment in a fab. For most tool developers, every new system starts as a clean sheet: custom mechanics, software, controls, and wafer handling. That means long lead times and a lot of engineering effort spent on non-differentiating components.

We asked ourselves: what if we could take some of that burden off their plate? PR:IME is our answer. It’s a modular platform with standardized mechanical, electrical, and software interfaces, a flexible foundation that lets customers plug in their core IP while we handle the rest. That way, teams can focus on what makes their technology unique, not on reinventing basic infrastructure.

One of the things we’re most excited about is its scalability. For R&D environments, a manual wafer loading option is available to get up and running quickly. Then, as the tool matures and heads toward volume production, there is a clear path to fully automated wafer handling without changing the process hardware. That kind of flexibility makes it easier to iterate early and scale later without having to start over from scratch. It’s really about helping innovators move faster and with more confidence.

How do customers usually engage with your company?

Our engagements typically begin with a collaborative discovery process. We work closely with customers to understand their technical challenges, commercial objectives, and long-term vision for the project. This includes discussing key performance objectives, test methods, cost and schedule constraints, system complexity, and barriers to success. By gaining a clear understanding of both the engineering and business context, we’re able to align early on around what success looks like.

If the opportunity is a strong technical and commercial fit, we partner with customers through a phased development approach. This model offers a structured, low-risk pathway for transitioning from concept to implementation, starting with system architecture and feasibility, then progressing through detailed design, prototyping, and ultimately, scalable production. Each phase is designed to validate assumptions and refine the scope, giving customers confidence in both the technical viability and the business case.

This process enables us to build trust and deliver value at every step, whether we’re designing a new tool from scratch or helping an existing system evolve for the next stage of production.

Contact Owens Design

Also Read:

CEO Interview with Dr. Avi Madisetti of Mixed-Signal Devices

CEO Interview with Andrew Skafel of Edgewater Wireless

CEO Interview with Jutta Meier of IQE


Podcast EP301: Celebrating 20 Years of Innovation with yieldHUB’s John O’Donnell
by Daniel Nenni on 08-01-2025 at 10:00 am

Dan is joined by John O’Donnell, Founder and CEO of yieldHUB, a pioneering leader in advanced data analytics for the semiconductor industry. Since establishing the company in 2005, he has transformed it from a two-person startup into a trusted multinational partner that empowers some of the world’s leading semiconductor companies to improve yield, reduce test costs, boost engineering efficiency, and enhance quality.

yieldHUB recently celebrated its 20th anniversary and has received national recognition for its accomplishments in Ireland. Dan explores yieldHUB’s history and future plans with John, including the company’s worldwide expansion and its new R&D focus areas. John describes the company’s new yieldHUB Live system, a test-agnostic, real-time capability with an AI recommendation system and digital-twin models. He explains that AI is a new development focus for the company and that this system is having a significant impact on improving yield, reducing test costs, and increasing product quality. John also describes a new API-native platform that is in development.

Dan also explores the four pillars of yieldHUB with John, which are the previously mentioned improve yield, reduce test cost, boost engineering efficiency, and enhance quality. John describes the importance of each pillar and explains the approach yieldHUB takes to achieve these goals with its customers.

Contact yieldHUB here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


AI-Powered Waveform Debugging: Revolutionizing Semiconductor Verification
by Admin on 08-01-2025 at 9:00 am

On July 9, 2025, a DACtv session with Zackary Glazewski of ChipAgents AI introduced Waveform Agents, an AI-driven solution designed to tackle the complex challenge of waveform debugging in semiconductor design. The speaker highlighted the difficulties of traditional waveform debugging and demonstrated how AI agents, leveraging large language models (LLMs), offer a scalable, autonomous approach to streamline verification, addressing the “needle in a haystack” problem inherent in analyzing massive waveform datasets.

Waveform debugging is notoriously challenging due to the sheer volume of data involved, often ranging from tens of gigabytes to terabytes in large systems. Bugs typically manifest within a narrow temporal range, making their identification akin to finding a needle in a haystack. Beyond localization, fixing these bugs requires deep knowledge of signal interactions, their roles in design and testbench files, and their interconnections. This dual challenge—locating and resolving issues—demands significant time and expertise, as engineers must navigate intricate relationships and interpret signal behaviors.

Traditional methods fall short in addressing these issues. Waveform viewing software, while useful for visualizing signal toggles and protocols, struggles with the scale of modern datasets, requiring engineers to manually comb through vast amounts of data. Signal tracing features, though helpful, rely heavily on engineer guidance, adding to the time-intensive process. Anomaly detection using traditional machine learning can identify patterns but lacks flexibility for novel issues, as it depends on pre-trained datasets that may not cover new anomalies. Formal tools, while powerful for finding counterexamples, do not scale well for large systems and still require manual effort to pinpoint and fix errors.

Waveform Agents, powered by LLMs, offer a transformative solution by autonomously handling end-to-end waveform debugging. Unlike traditional LLMs that process entire datasets and risk exceeding context limits, these agents employ intelligent context selection. They traverse design and testbench files, log files, and waveforms, selectively analyzing relevant data to identify bugs without overwhelming computational resources. This approach ensures scalability and efficiency, even for terabyte-scale datasets. The agents can detect a wide range of issues, from functional failures to assertion violations, whether in the design or testbench, without requiring specific error syntax or predefined signatures.

The process begins with the agent analyzing regression logs and waveforms to localize bugs, leveraging its pre-trained understanding of the codebase and specifications. It then proposes fixes by interpreting signal relationships and referencing design intent, reducing the need for manual intervention. For instance, in a regression with thousands of tests, the agent autonomously parses logs, identifies failure signatures, and maps them to specific design or testbench issues, eliminating the need for repeated training per project. This pre-trained model adapts dynamically, pulling relevant context as needed, ensuring flexibility across diverse scenarios.
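
The first step of that pipeline, extracting and clustering failure signatures from regression logs, can be sketched in a few lines of Python. The log format below is hypothetical, not ChipAgents’ actual output.

```python
import re
from collections import defaultdict

# Hypothetical regression log: test name, status, failure message.
LOG = """\
test_axi_burst  FAIL  UVM_ERROR @ 1200ns: resp_q empty on read
test_axi_wrap   FAIL  UVM_ERROR @ 1800ns: resp_q empty on read
test_irq_storm  FAIL  ASSERT irq_onehot failed @ 950ns
test_dma_basic  PASS
"""

clusters = defaultdict(list)
for line in LOG.splitlines():
    parts = line.split(None, 2)            # name, status, rest of message
    if len(parts) == 3 and parts[1] == "FAIL":
        # Normalize volatile fields (timestamps) so identical root causes
        # share one signature.
        signature = re.sub(r"@ \d+ns", "@ <t>", parts[2])
        clusters[signature].append(parts[0])

for signature, tests in clusters.items():
    print(f"{len(tests)} test(s) | {signature} -> {tests}")
```

A real agent then maps each cluster to candidate design or testbench files by pulling relevant context, rather than handing an engineer thousands of raw log lines.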

The benefits are significant: Waveform Agents drastically reduce debugging time, enhance scalability, and improve accuracy by understanding complex signal interactions. By automating both bug localization and resolution, they free engineers from tedious manual tasks, allowing focus on higher-value design work. The session addressed concerns about context limits, with the speaker noting that intelligent context selection avoids processing entire waveforms, making the solution practical for real-world applications.

ChipAgents’ approach marks a paradigm shift in verification, aligning with the industry’s need for smarter, data-driven tools to handle growing design complexity. By integrating AI into waveform debugging, Waveform Agents promise to accelerate verification cycles, improve first-time silicon success rates, and empower engineers to tackle the challenges of modern semiconductor design with greater confidence and efficiency.

Also Read:

AI-Driven Verification: Transforming Semiconductor Design

Building Trust in AI-Generated Code for Semiconductor Design

Microsoft Discovery Platform: Revolutionizing Chip Design and Scientific Research


AI-Driven Verification: Transforming Semiconductor Design
by Admin on 08-01-2025 at 8:00 am

In a DACtv session on July 9, 2025, Abhi Kolpekwar, Vice-President & General Manager at Siemens EDA, illuminated the transformative role of artificial intelligence (AI) in addressing the escalating challenges of semiconductor design verification. The presentation underscored the limitations of traditional methods and introduced a smarter, AI-driven approach to enhance efficiency, scalability, and insight in the verification process.

The semiconductor industry is grappling with unprecedented complexity. In 2024, only 14% of chips achieved first-time silicon success, highlighting the difficulty in managing intricate designs. Additionally, 75% of chips are behind schedule due to tight time-to-market pressures, compounded by a critical shortage of skilled professionals, with 80% of the industry’s demand for talent unmet. These challenges—complexity, time constraints, and resource scarcity—render traditional rule-based and database-driven verification methods inadequate. The speaker emphasized that siloed, manual processes, such as testbench creation and debugging, consume 40% of the verification workforce’s time, yet fail to deliver the necessary confidence in tape-out readiness.

The industry is undergoing a profound shift, moving from the electrification era, which automated physical labor, to the computation era, and now to the “cognification” era, where cognitive tasks are delegated to AI. By 2030, the semiconductor market is projected to exceed a trillion dollars, with AI as a dominant driver. Generative AI, expected to grow at an 85% compound annual growth rate through 2027, is reshaping infrastructure, services, and software. Meanwhile, the rise of 3D integrated circuits (3D ICs) is set to propel the market from $200 billion to $1 trillion by 2030, introducing complexities in heterogeneous integration, interconnects, power, and thermal management. Data security is another concern, with breaches exposing some 8.2 billion records in 2023, at an average cost of $4.5 million per incident.

Traditional verification, reliant on disconnected simulation, formal, and static methods, struggles to scale. Coverage reports often lack actionable insights, and manual debugging is inefficient. The speaker proposed an AI-driven solution, exemplified by Siemens’ Questa One approach, which integrates connected workflows, data-driven insights, and scalable tools. This smarter verification leverages AI in three key areas: specification generation, debugging, and coverage analysis.

In specification generation, AI tools like Property Assist convert plain English or PDF-based design specifications into standardized SystemVerilog assertions, eliminating the need for engineers to master complex languages. These assertions integrate seamlessly with formal verifiers and netlists, while test plans can be generated from the same specifications, ensuring alignment with design intent. This streamlines the process, reducing human effort and maintaining a single source of truth.
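
As a purely illustrative sketch of the input/output shape of spec-to-assertion generation (the Property Assist flow described above uses LLMs on natural-language specs, not templates), a structured requirement can be rendered as a SystemVerilog assertion string:

```python
# Hypothetical requirement schema: "after `req`, `ack` must follow
# within 1 to 4 cycles". All field names are illustrative.
req = {"clock": "clk", "trigger": "req", "response": "ack", "min": 1, "max": 4}

sva = (f"assert property (@(posedge {req['clock']}) "
       f"{req['trigger']} |-> ##[{req['min']}:{req['max']}] {req['response']});")
print(sva)
# -> assert property (@(posedge clk) req |-> ##[1:4] ack);
```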

For debugging, AI transforms regression testing into “smart regression.” By prioritizing test cases to trigger failures quickly, AI minimizes resource waste. It clusters failures, maps them to specific design changes, and identifies the responsible commits, significantly reducing debugging time. This targeted approach contrasts with traditional methods, where engineers manually sift through extensive logs.
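
A minimal sketch of the prioritization idea behind smart regression: order tests by historical failure rate plus whether they exercise recently changed code. The statistics and weighting below are invented for illustration.

```python
# Per-test statistics a smart-regression system might maintain (hypothetical).
tests = {
    "test_axi_burst": {"fail_rate": 0.30, "touches_change": True},
    "test_dma_basic": {"fail_rate": 0.02, "touches_change": False},
    "test_irq_storm": {"fail_rate": 0.10, "touches_change": True},
    "test_pcie_link": {"fail_rate": 0.05, "touches_change": False},
}

def priority(stats):
    # Boost tests that cover the files changed in the commit under test.
    return stats["fail_rate"] + (0.5 if stats["touches_change"] else 0.0)

order = sorted(tests, key=lambda t: priority(tests[t]), reverse=True)
print("run order:", order)   # likely failures surface first
```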

In coverage analysis, analytical AI processes large datasets to identify common expression patterns, creating configurable graphs that highlight critical coverage gaps. This enables engineers to write targeted testbenches, improving coverage quality without relying on arbitrary test additions. By providing actionable insights, AI moves beyond mere numbers to deliver meaningful verification outcomes.

The Questa One approach embodies this shift, offering connected workflows that unify tools across domains, data-driven wisdom to uncover insights, and scalable tools to handle growing verification loads. The speaker urged the verification community to adapt, questioning siloed and static processes to embrace AI’s potential. By fostering adaptability and innovation, AI-driven verification promises to enhance productivity, reduce time-to-market, and ensure robust chip designs in an increasingly complex semiconductor landscape.

Also Read:

Building Trust in AI-Generated Code for Semiconductor Design

Microsoft Discovery Platform: Revolutionizing Chip Design and Scientific Research

Google Cloud: Optimizing EDA for the Semiconductor Future