Building Trust in AI-Generated Code for Semiconductor Design
by Admin on 08-01-2025 at 7:00 am

On July 9, 2025, a compelling session at DACtv by Vishal Moondrha of Perforce addressed a critical challenge in the semiconductor industry: building trust in AI-generated code. The speaker highlighted the unique hurdles of integrating generative AI into semiconductor design, emphasizing issues like data provenance, quality, and intellectual property (IP) management. As AI becomes integral to designing analog circuits, RTL, and other components, these concerns must be tackled to unlock its full potential.

The semiconductor industry faces distinct challenges compared to other sectors. First, there is heightened sensitivity to liability and data provenance. Unlike software development, where errors can often be patched quickly, mistakes in chip design are costly and difficult to rectify, especially as designs progress. A single error in generated code can lead to expensive rework, making reliability paramount. Additionally, data quality is a concern, as every company has unique standards and workflows developed over years. Ensuring that AI-generated code adheres to these standards is critical to maintaining design integrity.

Another significant issue is the legal and ethical use of data for training AI models. Semiconductor designs often incorporate external IP, which may carry royalty obligations or export control restrictions. Using such IP to train AI models without proper authorization risks legal violations and data leakage. For instance, training a model on proprietary IP could inadvertently produce outputs that violate licensing agreements, exposing companies to liability. Furthermore, highly sensitive IPs, such as those with geographical restrictions, must be excluded from training datasets to comply with regulations.

To address these challenges, the speaker proposed a robust IP lifecycle management system. By treating every design block as an IP, whether internally developed, purchased, or reused from prior projects, companies can attach traceability and metadata to each block. This approach ensures clear data provenance, allowing teams to track the origin, ownership, and usage rights of each IP. For example, metadata might include technical specifications, workflow outputs, or verification results, providing context that enhances the AI model’s understanding of the design.

The proposed workflow involves breaking down designs into hierarchical IPs, each with a defined lifecycle. This modular approach enables companies to set rules for which IPs can be used for training. For instance, IPs with export controls or proprietary restrictions can be flagged to prevent their inclusion in datasets. Tools like those from Siemens, in collaboration with Perforce, generate metadata through IP validation and quality assurance processes, which can be linked to specific IP versions. This ensures that AI models are trained on compliant, high-quality data.
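As a minimal sketch of what such rules could look like in practice (our illustration, not a Perforce or Siemens workflow; the class and field names are hypothetical), training eligibility can be expressed as a simple policy over the metadata attached to each IP:

    from dataclasses import dataclass, field

    @dataclass
    class IPBlock:
        """One design block treated as an IP, with provenance metadata attached."""
        name: str
        version: str
        origin: str                    # e.g. "internal", "purchased", "reused"
        owner: str
        export_controlled: bool = False
        royalty_bearing: bool = False
        metadata: dict = field(default_factory=dict)  # specs, verification results, ...

    def training_eligible(ip: IPBlock) -> bool:
        """Policy check applied before an IP's data may enter a training set."""
        return not (ip.export_controlled or ip.royalty_bearing)

    blocks = [
        IPBlock("serdes_phy", "2.3", "purchased", "vendor_x", export_controlled=True),
        IPBlock("uart_ctrl", "1.0", "internal", "team_a",
                metadata={"verified": True, "lint_clean": True}),
    ]
    train_pool = [b for b in blocks if training_eligible(b)]
    print([b.name for b in train_pool])  # -> ['uart_ctrl']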

The speaker also emphasized the importance of traceable data pipelines. A typical training workflow starts with raw design files and metadata, which are pre-processed, cleaned, and split into training, validation, and testing sets. By maintaining traceability throughout this pipeline, companies can audit the data’s journey, ensuring no unauthorized IPs are included. This is particularly crucial as design data evolves, with incremental updates like bug fixes or new features requiring continuous tracking to maintain accuracy.
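A small, hedged illustration of such a traceable split is sketched below; the manifest format and helper names are invented for this example rather than taken from any specific tool:

    import hashlib
    import json
    import random

    def fingerprint(text: str) -> str:
        """Content hash used to trace a sample back to its source IP version."""
        return hashlib.sha256(text.encode()).hexdigest()[:16]

    def split_with_manifest(samples, seed=42, ratios=(0.8, 0.1, 0.1)):
        """Split cleaned samples into train/val/test and record an audit manifest.

        Each sample is a dict like {"ip": "uart_ctrl", "version": "1.0", "text": "..."}.
        The manifest records which IP version landed in which split, so the
        dataset's provenance can be audited later.
        """
        rng = random.Random(seed)
        shuffled = samples[:]
        rng.shuffle(shuffled)
        n = len(shuffled)
        cut1, cut2 = int(n * ratios[0]), int(n * (ratios[0] + ratios[1]))
        splits = {"train": shuffled[:cut1], "val": shuffled[cut1:cut2], "test": shuffled[cut2:]}
        manifest = {
            name: [{"ip": s["ip"], "version": s["version"], "hash": fingerprint(s["text"])}
                   for s in part]
            for name, part in splits.items()
        }
        return splits, manifest

    samples = [{"ip": "uart_ctrl", "version": "1.0", "text": f"module m{i}; endmodule"}
               for i in range(10)]
    splits, manifest = split_with_manifest(samples)
    print(json.dumps({name: len(entries) for name, entries in manifest.items()}))

Keeping such a manifest under version control alongside the dataset makes it possible to audit, after the fact, exactly which IP versions contributed to a given model.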

Implementing such a system offers multiple benefits. It mitigates data leakage risks, simplifies data management by modularizing IPs, and supports compliance with legal and regulatory requirements. By using data management tools like SQL databases, Perforce, or Git, companies can avoid ad-hoc data handling, ensuring that training datasets are sourced from managed repositories. Additionally, modeling AI models themselves as IPs allows tracking of which LLM version was used, enhancing transparency.

This approach fosters innovation by providing a trusted framework for AI-driven design, enabling semiconductor companies to leverage generative AI confidently while safeguarding IP and ensuring compliance.

Also Read:

Microsoft Discovery Platform: Revolutionizing Chip Design and Scientific Research

Google Cloud: Optimizing EDA for the Semiconductor Future

Synopsys FlexEDA: Revolutionizing Chip Design with Cloud and Pay-Per-Use


CEO Interview with Andrew Skafel of Edgewater Wireless
by Daniel Nenni on 08-01-2025 at 6:00 am

As the demand for high-capacity, low-latency wireless networks explodes across residential, enterprise, and industrial environments, a Canadian innovator is quietly reshaping the way Wi-Fi works—from the silicon up. Edgewater Wireless (TSXV:YFI/OTC:KPIFF), headquartered in Ottawa, is pioneering a transformative approach to wireless connectivity with its patented Wi-Fi Spectrum Slicing™ technology and an AI-enhanced roadmap that’s drawing attention from industry heavyweights.

We sat down with Andrew Skafel, CEO of Edgewater Wireless, to discuss the company’s impressive recent milestones—including selection to Canada’s prestigious FABrIC semiconductor program, backing from Silicon Catalyst, and entry into Arm’s Flexible Access (AFA) Program. We also dove into Edgewater’s vision for AI-powered spectrum optimization and how their “standards-leading” strategy is carving out a unique path in a crowded wireless landscape.

Andrew, thanks for joining us. Edgewater Wireless has had a banner year. Let’s start with the basics—what makes Edgewater different from the rest of the Wi-Fi ecosystem?

Thanks—it’s great to be here. At Edgewater, we’ve taken a fundamental departure from traditional Wi-Fi design. Instead of incrementally optimizing the existing single-channel-per-radio architecture, we pioneered Wi-Fi Spectrum Slicing, which enables multiple concurrent channels in the same frequency band—essentially “slicing” the spectrum to maximize capacity and performance.

This isn’t just theoretical. Our patented silicon architecture can deliver 10x performance gains and up to 50% lower latency—even on legacy client devices. We’re not asking the world to adopt a new standard or wait for new device rollouts. We’re enhancing performance from the infrastructure side, and we’re fully aligned with current and evolving Wi-Fi standards.

That’s impressive. Let’s talk about the AI element. You recently announced the initiation of prototyping an AI subsystem using Arm® technology. What does that mean for Wi-Fi?

It’s a big step for us—and for Wi-Fi, more broadly. We’ve entered the Arm Flexible Access Program, which gives us cost-effective access to industry-proven Arm IP. With this, we’re prototyping our next-generation Wi-Fi baseband chip using Arm Cortex® CPUs, and we’re exploring Arm Ethos™ NPUs for on-chip AI acceleration.

Why does this matter? Because we’re bringing real-time, AI-driven spectrum engineering directly to the network edge. Our patent-pending machine learning algorithms are designed to do things Wi-Fi’s never done before—mitigate congestion, manage spectrum allocation in real-time, and autonomously reconfigure channel/link density for optimal throughput—all within the chipset. This is a paradigm shift that will allow networks to self-optimize based on real-time user and device behavior.

Wi-Fi is notoriously messy in dense environments. Are you saying this AI-driven approach can solve that?

Exactly. Traditional Wi-Fi is reactive: when collisions and interference occur, it has limited ability to mitigate them, and often does so ineffectively. Our AI subsystem is proactive, acting on data received from the coverage-area environment—delivering a significant boost to overall QoS and transforming the user experience. Imagine your Wi-Fi recognizing a spike in video traffic and re-allocating spectrum capacity to handle the load—or rerouting traffic away from interference without any human intervention.

This is particularly crucial in environments like MDUs (multi-dwelling units), enterprise campuses, industrial IoT deployments, and even in smart homes. As device counts continue to explode, intelligent spectrum management becomes the key differentiator.

You will get unparalleled Wi-Fi QoS for all devices with Wi-Fi Spectrum Slicing.

You mentioned prototyping in Arm’s ecosystem. What advantages does that bring?

Arm has created an incredibly robust platform for innovation. Their CPUs and NPUs are energy-efficient, scalable, and deeply supported, which accelerates our time to market and de-risks development. More importantly, being part of the Arm ecosystem validates our approach and opens doors for deeper industry collaboration.

We’re not building a niche chipset—we’re creating a standards-aligned, AI-enabled platform that can be licensed by leading silicon players or deployed by OEMs, service providers, and enterprises. Working with Arm helps us do that at scale.

That leads us to another major milestone—your inclusion in the Canadian government’s FABrIC program. What does that support mean for Edgewater?

The Government of Canada’s FABrIC initiative is a game-changer. For Edgewater, it represents a national vote of confidence in our vision to lead the next wave of intelligent, AI-enabled wireless innovation. Managed by CMC Microsystems, FABrIC is a five-year $223M program to accelerate the commercialization of semiconductor-based processes and products. We’re proud to be one of the first recipients. The program provides strategic funding that allows us to de-risk R&D and accelerate commercialization of our next-gen AI-enabled Wi-Fi chipsets.

This isn’t just about money—it’s about national competitiveness. Canada has world-class talent and ideas, but turning those into commercially viable silicon solutions for the worldwide market requires deep support. FABrIC recognizes that and is helping position Canada as a global player in intelligent connectivity.

You’ve also secured support from Silicon Catalyst, the world’s only incubator focused exclusively on semiconductors. What has that partnership unlocked?

Joining Silicon Catalyst has given us access to a global network of mentors, corporate partners, and critical infrastructure for silicon development. They’ve helped hundreds of semiconductor startups navigate the journey from concept to production—and their backing speaks volumes about our technology and market potential.

The Silicon Catalyst ecosystem is already helping us refine our product-market fit, prepare for mass manufacturing, and navigate relationships with fabs and IP providers. It’s a massive accelerant for our roadmap.

Let’s step back a bit. Where do you see Edgewater Wireless headed over the next 12–24 months?

Over the next 12 to 24 months, Edgewater Wireless is laser-focused on commercializing our next-generation Wi-Fi Spectrum Slicing silicon under the PrismIQ™ product family. This family will span three high-value, global market segments—residential, enterprise, and Industrial Internet of Things (IIoT)—each with tailored variants designed to optimize performance, capacity, and reliability for dense device environments.

We’re already well through the ‘beta’ prototyping phase. Key upcoming milestones in our silicon realization roadmap include:

  • Prototyping and silicon realization of our next-gen AI-enabled baseband is underway, with sampling beginning in early 2026.  For those parties interested in early access to the PrismIQ product family, development platforms will be made available to select partners in the first quarter of 2026.
  • Partnering with Tier-1 industry players to accelerate market deployment through licensing and joint development initiatives, ensuring broad industry alignment and faster paths to integration.
  • Scaling our software stack and AI-driven algorithms to ensure seamless compatibility with evolving standards like Wi-Fi 7—and laying the groundwork for next-gen protocols including Wi-Fi 8.
  • Leveraging support from strategic partners and government-backed programs, including Silicon Catalyst, Arm, CableLabs, and FABrIC, to fast-track innovation and market readiness.

Interested and qualified service providers and equipment vendors are encouraged to contact us to gain early access to the PrismIQ product family. Let’s shape the future of wireless, together.

There’s a lot of buzz around “standards-leading” versus “standards-following.” How does Edgewater view its role in the Wi-Fi standards ecosystem?

We’re deeply involved in the standards community, and we build our technology with alignment in mind. But we’re also pushing the boundaries of what those standards enable.

Spectrum Slicing is a great example—it fits within existing standards yet unlocks next-level performance by reimagining how the physical layer handles channelization and interference. That’s what we mean by “standards-leading.” We respect the standards, we align with them—but we’re also creating new categories of capability within them.

It’s not about breaking compatibility—it’s about enhancing what’s possible, without requiring changes on the client side. That’s key to enabling broad adoption.

Final question—what’s the big vision here? What does success look like for Edgewater Wireless?

Success for us means redefining Wi-Fi—not just making it faster, but smarter. Our goal is to embed AI-enabled Spectrum Slicing into the heart of Wi-Fi infrastructure and make intelligent wireless connectivity the norm, not the exception.

We believe Wi-Fi should be as dynamic as the environments it serves, and with our technology, it can be. Whether it’s in a smart home, an enterprise network, or a factory floor, we want to be the intelligence layer that ensures every device gets the performance it needs, when it needs it.

We’re not just solving for today—we’re building the wireless fabric of the future.

About Edgewater Wireless (TSXV: YFI / OTC: KPIFF)

We make Wi-Fi. Better.

Edgewater Wireless delivers unmatched Wi-Fi QoS—bar none—by intelligently mitigating congestion, managing spectrum allocation in real-time, and autonomously reconfiguring channel and link density—driving economic gains for service providers and their customers through reduced churn, improved efficiency, and high-performance connectivity in dense environments. Redefining Wi-Fi from the silicon up, Edgewater’s patented, AI-powered Spectrum Slicing platform—delivered through the PrismIQ™ product family—breaks the limits of legacy Wi-Fi by enabling multiple concurrent channels in a single band. This delivers 10x performance and up to 50% lower latency, even for legacy devices. With 26 patents and a fabless model, Edgewater is transforming the economics of Wi-Fi for service providers, OEMs, and enterprises—powering scalable, standards-aligned and standards-leading connectivity across residential, enterprise, and Industrial IoT markets. Edgewater is building the intelligent wireless foundation for the next era of global connectivity.

Visit https://edgewaterwireless.com

Andrew Skafel is a recognized leader in wireless and next-generation Wi-Fi, driving innovation as President and CEO of Edgewater Wireless. Under his leadership, the company has pioneered Wi-Fi Spectrum Slicing, revolutionizing high-density wireless performance. With over two decades in telecom and technology, Andrew has led product development, strategy, and key industry partnerships. His customer-focused vision ensures Edgewater’s patented solutions address real-world connectivity challenges. A sought-after expert on Wi-Fi innovation, Andrew continues to shape the future of wireless communications, positioning Edgewater Wireless as a global leader in scalable, high-performance networking solutions.

Also Read:

CEO Interview with Jutta Meier of IQE

Executive Interview with Ryan W. Parker of Photonic Inc.

CEO Interview with Jon Kemp of Qnity


Microsoft Discovery Platform: Revolutionizing Chip Design and Scientific Research
by Admin on 08-01-2025 at 6:00 am

At a recent conference session on July 9, 2025, Prashant Varshney, head of the Silicon and Physics Industry Vertical for Microsoft’s Discovery and Quantum Division, unveiled the transformative potential of the Microsoft Discovery Platform. This innovative platform, announced at Microsoft’s Build event, aims to redefine chip design and scientific research by integrating generative AI, quantum computing, and high-performance computing (HPC) into a cohesive ecosystem. With its focus on scalability, interoperability, and IP security, the platform addresses the evolving needs of R&D teams across industries, particularly in semiconductors.

The rapid pace of technological innovation, driven by generative AI, has shifted the landscape from experimental to functional applications. Varshney highlighted three key trends reshaping engineering: the pervasive influence of generative AI, the rise of domain-specific models, and the imminent impact of quantum computing. These advancements are pushing R&D teams to meet heightened expectations, balancing short-term cost efficiencies with long-term goals like market disruption and new revenue streams. However, current systems often fall short, relying on bespoke, loosely integrated tools that are difficult to scale or secure.

The Microsoft Discovery Platform tackles these challenges by offering a unified, AI-centric solution. Built on Azure’s scalable infrastructure, it integrates AI services, HPC, and data management to support industries like chemistry, material science, life sciences, and semiconductors. At its core is the Science Co-Pilot, a natural language interface that transcends traditional chatbot functionality. This orchestration layer understands domain-specific requirements and design goals, enabling context-aware decision-making and execution plans. By leveraging a knowledge graph termed the Science Bookshelf and an HPC-optimized Science Supercomputer, the platform empowers researchers to manage complex workflows efficiently.

A key strength of the platform is its ecosystem approach. Microsoft collaborates with EDA vendors like Synopsys, research labs, and system integrators to ensure seamless integration of existing tools and workflows. This interoperability is critical for semiconductor design, where commercial EDA tools are indispensable. The platform also supports specialized agents, such as those for planning, orchestration, or data processing, which can be customized by partners or users. These agents, grounded in domain-specific data like product documentation and prior designs, enable precise, context-driven insights.

Varshney showcased practical applications, including a project with Pacific Northwest National Laboratory to develop a battery electrolyte with 70% less lithium, achieved by narrowing 32 million candidate combinations through AI-driven simulations. Another example involved discovering a novel data center coolant, free of harmful PFAS chemicals, in just ten days. For chip design, the platform demonstrated its prowess by generating a microcontroller for an aircraft instrument landing system. Agents handled tasks like writing specifications, creating RTL, verifying designs, and conducting power, performance, and area (PPA) analysis, all while incorporating insights from the platform’s knowledge graph.

The platform’s ability to abstract hardware complexities ensures future-proofing, accommodating classical compute today and quantum workloads tomorrow. Specialized virtual machines, like EV6 and FXV2, powered by Intel’s fifth-generation Xeon processors, cater to EDA workloads’ unique demands, such as large memory and optimized core counts. Partnerships with Intel and NetApp further enhance performance and storage capabilities.

Despite its autonomous capabilities, human oversight remains integral. Each agent-driven step includes human-in-the-loop validation to ensure accuracy and alignment with project goals. Addressing concerns about training data for advanced nodes, Varshney emphasized that newer models require minimal data, with encouraging results at 16-18 nm nodes. Future challenges, particularly for 2-5 nm designs, will require ecosystem collaboration with foundries.

The Microsoft Discovery Platform is poised to accelerate innovation, reduce time-to-market, and enable breakthroughs in chip design and beyond. By fostering collaboration and integrating cutting-edge technologies, it offers a scalable, secure, and extensible solution for the next generation of scientific discovery.

Also Read:

Google Cloud: Optimizing EDA for the Semiconductor Future

Synopsys FlexEDA: Revolutionizing Chip Design with Cloud and Pay-Per-Use

Perforce and Siemens: A Strategic Partnership for Digital Threads in EDA


CAST Webinar About Supercharging Your Systems with Lossless Data Compression IPs
by Mike Gianfagna on 07-31-2025 at 10:00 am

Much of advanced technology is data-driven. From the cloud and AI accelerators to automotive processing and edge computing, data storage and transmission efficiency are of critical importance. It turns out that lossless data compression is a key ingredient to deliver these requirements.

While there are both software and hardware solutions, hardware-based approaches offer the best balance of throughput, latency, and power. CAST recently presented an informative webinar on this topic with SemiWiki. In case you missed it, a replay is now available; the link appears below, but first let’s examine what is discussed in the CAST webinar about supercharging your systems with lossless data compression IPs.

The Presenter

Dr. Calliope-Louisa Sotiropoulou

The webinar is presented by Dr. Calliope-Louisa Sotiropoulou, an Electronics Engineer who holds the position of Sales Engineer & Product Manager at CAST. Dr. Sotiropoulou specializes in image, video and data compression, and IP stacks. Before joining CAST, she worked as a research and development manager and an FPGA systems developer for the Aerospace and Defense sector. Dr. Sotiropoulou is very knowledgeable on the topic of data compression and has an easy-to-understand style.

She has a long academic record as a researcher, working on various projects, including the trigger and data acquisition system of the ATLAS experiment at CERN. She received her PhD from the Aristotle University of Thessaloniki.

What is Covered

The full title of the webinar is Unpacking System Performance: Supercharge Your Systems with Lossless Compression IPs. The topics discussed during the 35-minute webinar include:

  • What is lossless data compression, and why do we use it?
  • Differences between software and hardware implementations
  • How to choose the right algorithm for your application
  • Real-life examples: Integration and implementation
  • The CAST approach and IP cores portfolio

The webinar is followed by about 10 minutes of Q&A from the webinar audience that covers some very relevant and interesting topics.

Some Highlights

There is a lot of great information shared by Dr. Sotiropoulou in this webinar. She touches on the pros and cons of various approaches and backs them up with details from real examples. To whet your appetite, here is her summary of why a hardware approach is preferred:

Software is flexible, but:

  • It creates a high CPU load
  • It scales poorly, especially when the high throughput in today’s applications is considered
  • It has unpredictable latency

Hardware:

  • Offers deterministic performance: latency, throughput, power
  • Scales to multi-Gbps throughput
  • Lower power consumption
  • Real-time ready

She goes into detail on multiple approaches to data compression, illustrating the pros and cons of each. This discussion gets into topics such as optimizing memory size and file size for various problem sets.  
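As a rough, software-only illustration of the ratio-versus-time tradeoffs she discusses (general-purpose codecs running on a CPU, emphatically not CAST’s hardware cores), consider:

    import lzma
    import time
    import zlib

    # Repetitive sample payload; real sensor or log data will compress differently.
    payload = b"sensor_id=42,temp=21.5,status=OK;" * 20000

    for name, compress in (("zlib (DEFLATE)", lambda d: zlib.compress(d, 6)),
                           ("lzma (LZMA2)",   lambda d: lzma.compress(d))):
        start = time.perf_counter()
        out = compress(payload)
        elapsed_ms = (time.perf_counter() - start) * 1e3
        print(f"{name:15s} ratio={len(payload) / len(out):6.1f}x  time={elapsed_ms:7.2f} ms")

In hardware, the equivalent tradeoff shows up as gate count and memory versus throughput and latency, which is where the webinar’s ASIC and FPGA results come in.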

She then discusses the lossless data compression delivered by CAST. Target applications and system integration details are presented, along with specific results for various ASIC and FPGA technologies. She ends with a summary of the CAST approach and key takeaways.

To Learn More

I have just presented a high-level summary of the webinar content. If you are dealing with data-intensive applications, approaches to lossless data compression will definitely be important, and I highly recommend experiencing the complete webinar. You will be glad you did. You can access the webinar replay here.

You can also learn more about CAST’s lossless data compression IP here. And that’s the CAST webinar about supercharging your systems with lossless data compression IPs.

Also Read:

WEBINAR Unpacking System Performance: Supercharge Your Systems with Lossless Compression IPs

Podcast EP273: An Overview of the RISC-V Market and CAST’s unique Abilities to Grow the Market with Evan Price

CAST Advances Lossless Data Compression Speed with a New IP Core


Architecting Your Next SoC: Join the Live Discussion on Tradeoffs, IP, and Ecosystem Realities
by Daniel Nenni on 07-31-2025 at 8:00 am

Designing a system-on-chip (SoC) has never been more complex—or more critical. With accelerating demands across AI, automotive, and high-performance compute applications, today’s SoC architects face a series of high-stakes tradeoffs from the very beginning. Decisions made during the earliest phases of design—regarding architecture, IP selection, modeling, and system integration—can make or break a project’s success.

That’s why SemiWiki is proud to host a live webinar:

“What to Consider When Architecting Your Next SoC: Architectural Tradeoffs, IP Selection, and Ecosystem Realities”

Thursday, August 14, 2025 | 9:00 AM PDT

This session will feature a practical, fast-paced conversation between two seasoned experts in SoC architecture and IP design:

  • Paul Martin, Global Director of SoC Architecture, Aion Silicon

  • Darren Jones, Distinguished Engineer & Solutions Architect, Andes Technology

Together, they’ll walk through real-world scenarios, decision frameworks, and lessons learned from working with some of the most demanding silicon customers in the world.

Rather than a static presentation, the format is designed as a fireside chat—highlighting the nuance and complexity of early-stage architecture decisions through dialog. Expect candid insights, live Q&A, and audience engagement—not a canned marketing pitch.

Watch the replay

What You’ll Learn:

  • How to weigh architectural tradeoffs when performance, flexibility, and schedule are in tension

  • What questions to ask when selecting IP across multiple vendors

  • The role of modeling, simulation, and emulation in derisking “works-first-time” silicon

  • How system-level decisions (like interconnect width or coherency models) impact overall architecture

  • Where ecosystem support—toolchains, deliverables, and foundry alignment—can determine downstream success

You’ll also gain a deeper understanding of performance, power, and area (PPA) metrics—how to interpret them, and how to avoid common traps when comparing IP blocks. This session goes beyond datasheets to explore how real design teams validate assumptions and make decisions that hold up under pressure.

Whether you’re leading architecture for your next chip, evaluating IP options, or supporting teams through SoC integration, this webinar will sharpen your perspective and provide actionable strategies.

Why Attend Live:

This is a live-only event, and attendees will have the chance to ask questions directly. If your team is facing architectural decisions this quarter—or simply wants to learn how top-tier firms approach system tradeoffs—this is a valuable opportunity to hear from peers in the trenches.

Watch the replay

Speaker Bios:

Darren Jones, Distinguished Engineer and Solutions Architect, Andes Technology

Darren Jones is a seasoned engineering leader with more than three decades of experience in processor architecture, SoC design, and IP integration. Currently a Distinguished Engineer and Solutions Architect at Andes Technology, he helps customers develop high-performance RISC-V–based solutions tailored to their systems, drawing on his deep expertise in system-on-chip design and verification.

Prior to Andes, Darren held senior leadership roles at Esperanto Technologies, Wave Computing, Xilinx, MIPS Technologies, and LSI Logic, where he led teams through multiple successful chip tapeouts—from 7nm inferencing accelerators to complex multi-core and multithreaded architectures. His experience spans architecture definition, RTL design, IP delivery, and full-chip integration.

Darren holds more than 25 patents in processor design and multithreading. He earned his M.S. in electrical engineering from Stanford University and his B.S. with highest honors from the University of Illinois Urbana-Champaign.

Paul Martin, Global Director of SoC Architecture, Aion Silicon

Paul Martin is the Global Director of SoC Architecture at Aion Silicon, where he leads international engineering teams and drives customer engagement across complex semiconductor design projects. With decades of experience in commercial, technical, and strategic roles at companies including ARM and NXP, he has helped bring cutting-edge SoC technologies to market. Martin is known for his ability to bridge technical innovation with business value across Europe, North America, and Asia.

Watch the replay

Also Read:

The Sondrel transformation to Aion Silicon!

2025 Outlook with Oliver Jones of Sondrel

CEO Interview: Ollie Jones of Sondrel


cHBM for AI: Capabilities, Challenges, and Opportunities
by Kalar Rajendiran on 07-31-2025 at 6:00 am

cHBM Panelists at Synopsys Executive Forum

AI’s exponential growth is transforming semiconductor design—and memory is now as critical as compute. Multi-die architecture has emerged as the new frontier, and custom High Bandwidth Memory (cHBM) is fast becoming a cornerstone in this evolution. In a panel session at the Synopsys Executive Forum, leaders from AWS, Marvell, Samsung, SK Hynix, and Synopsys discussed the future of cHBM, its challenges, and the collective responsibility to shape its path forward.

Pictured above: Moderator: Will Townsend, Moor Insights & Strategy; Panelists: Nafea Bshara, VP/Distinguished Engineer, AWS; Will Chu, SVP & GM Custom Cloud Solutions Business Unit, Marvell; Harry Yoon, Corporate EVP, Products and Solutions Planning, Samsung; Hoshik Kim, SVP/Fellow, Memory Systems Research, SK Hynix; John Koeter, SVP, IP Group, Synopsys.

The Rise of cHBM in Multi-Die Design

HBM has already propelled AI performance to new heights. Without it, the AI market wouldn’t be where it is today. The evolution from 2.5D to 3D packaging brings a nearly 8x improvement in bandwidth and up to 5x power efficiency improvement—transformative gains for data-intensive workloads. Custom HBM further optimizes performance, reducing I/O area by at least 25% compared to standard HBM implementations. But with this comes a familiar tension: performance versus interoperability.

The Customization Dilemma

While hyperscalers may benefit from custom configurations—given they control the entire stack—the broader industry risks fragmentation. As panelists noted, every time memory innovation went too custom (e.g., HBM1, RDRAM), interoperability and industry adoption suffered. Without shared standards, memory vendors face viability issues, and smaller players risk exclusion. Custom HBM must not become a barrier to collaboration.

The Need for Speed in Standardization

A major takeaway from the panel was the urgency for faster-moving standards bodies. JEDEC has traditionally led HBM standardization, but with the pace of AI, panelists discussed how a new and more agile standards body could accelerate interoperable HBM frameworks. The industry also needs validated, standardized die-to-die (D2D) IP to preserve ecosystem harmony while scaling performance. The UCIe standard is fast establishing itself as that D2D IP standard.

Implications for Memory Vendors

Memory vendors are in a tough spot. On one hand, custom HBM demands more features and integration flexibility; on the other, it erodes volume leverage and introduces supply chain risks. To stay competitive, vendors must support both standard and semi-custom memory—while collaborating more deeply with SoC architects, EDA tool providers, and packaging experts.

The Power of the Ecosystem

To unlock cHBM’s full potential, ecosystem-wide collaboration is non-negotiable. Synopsys is playing a central role for system-level enablement—offering integrated IP, power modeling, thermal simulation, and system-level co-design tools. Only through coordinated efforts can companies navigate packaging complexity, ensure test coverage, and deliver performant, scalable AI systems.

The 3D Packaging Imperative

3D packaging is the physical foundation of multi-die design. Compared to 2.5D solutions, 3D integration supports significantly higher bandwidth and tighter physical proximity. However, the benefits come with challenges: thermal hotspots, TSV congestion, and signal integrity must be carefully managed. Architects must co-design silicon and packaging to meet AI’s escalating demands.

From Custom to Computational HBM

The panel reached consensus on one transformative idea: the future isn’t just custom HBM. Computational HBM may be a better way to describe cHBM. This paradigm emphasizes workload-aware partitioning across logic and memory, where memory becomes an active participant in AI processing, not just a passive storage layer. Right now, hyperscalers may be the main drivers for cHBM. But, unlike proprietary custom approaches, computational HBM can scale across markets—cloud, edge, automotive—and thrive through standardization and reuse.

Summary

cHBM holds tremendous promise—but how the industry moves forward will determine whether it accelerates innovation or impedes it. With standardization, agile packaging integration, and coordinated ecosystem efforts, computational HBM can power the next generation of intelligent systems. With an ecosystem-aligned vision and execution, the industry will be well on its way to realizing the full potential of multi-die design.

The “cHBM for AI” panel session recording can be accessed on-demand from here.

Also Read:

Podcast EP299: The Current and Future Capabilities of Static Verification at Synopsys with Rimpy Chugh

Design-Technology Co-Optimization (DTCO) Accelerates Market Readiness of Angstrom-Scale Process Technologies

SNUG 2025: A Watershed Moment for EDA – Part 2


How Channel Operating Margin (COM) Came to be and Why It Endures
by Admin on 07-30-2025 at 10:00 am

According to a recent whitepaper by Samtec, Channel Operating Margin (COM) didn’t start as an algorithm; it started as a truce. In the late 2000s and early 2010s, interconnect designers and SerDes architects were speaking past each other. The former optimized insertion loss, return loss, and crosstalk against frequency-domain masks; the latter wrestled with real receivers, equalization limits, and power budgets. As data rates jumped from 10 Gb/s per line to 25 Gb/s—and soon after to 50 Gb/s and beyond—the old “mask” paradigm broke. Guard-banding everything to make frequency masks work would have over-constrained designs and under-informed transceivers. The industry needed a shared language that tied physical channel features to receiver behavior. That shared language became COM.

Figure 1. The use of eye diagrams fundamentally changed how compliance testing was done.

Two technical insights catalyzed the shift. First, insertion loss alone was not predictive once vias, connectors, and packages crept into electrically long territory. Ripples in the insertion-loss curve, codified as insertion loss deviation (ILD), were the visible fingerprints of reflections that eroded eye openings. Second, crosstalk budgeting matured from ad hoc limits to constructs like integrated crosstalk noise (ICN) and insertion-loss-to-crosstalk ratio (ICR), recognizing that noise must be weighed against the channel’s fundamental attenuation. These realizations coincided with the industry’s pivot from NRZ at 25 Gb/s to PAM4 at 50 Gb/s per line, raising complexity and appetite for a more realistic, end-to-end figure of merit.

Enter the time domain. The breakthrough was to treat the channel’s pulse response as the “Rosetta Stone” between interconnect physics and SerDes design. Because a random data waveform is just a symbol sequence convolved with the pulse response, you can sample that pulse at unit intervals to quantify intersymbol interference (ISI), then superimpose sampled crosstalk responses as noise. That reframed compliance from static frequency limits to a statistical signal-quality calculation anchored in what receivers actually see. Early debates over assuming Gaussian noise led to a pragmatic conclusion: copper channel noise is not perfectly IID Gaussian; using “real” distributions derived from pulse-response sampling avoids chronic over-design.
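In rough symbols (our notation, not the text of the standard), the decomposition that paragraph describes looks like this:

    % received waveform: transmitted symbols a_k convolved with pulse response p(t), unit interval T
    y(t) = \sum_k a_k \, p(t - kT)
    % sampling at the main cursor t_0 separates the wanted term from intersymbol interference
    y(t_0) = a_0 \, p(t_0) + \underbrace{\sum_{k \neq 0} a_k \, p(t_0 - kT)}_{\text{ISI}}

Crosstalk is folded in the same way: each aggressor’s pulse response is sampled at the victim’s decision instant and those samples are treated as additional noise terms.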

COM formalized this flow. Practically, you feed measured or simulated S-parameters into a filter chain (including transmitter/receiver shaping), generate a pulse response via iDFT, and compute ISI and crosstalk contributions as RMS noise vectors relative to the main cursor. Equalization (CTLE/FFE/DFE) and bandwidth limits are captured explicitly. Crucially, minimum transceiver capabilities are not left to inference; they’re parameterized in tables maintained inside IEEE 802.3 projects. The result is a single operating-margin number that reflects both the channel’s impairments and a realistic, baseline SerDes. MATLAB example code and configuration spreadsheets iterated alongside each project made the method transparent, debuggable, and rapidly adoptable across companies.
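As a toy, NumPy-only caricature of that flow (not the Annex 93A algorithm, with invented function and parameter names), the following sketch samples a pulse response on the unit-interval grid and reports a margin-like number from the main-cursor signal versus RMS ISI, crosstalk, and a noise floor:

    import numpy as np

    def toy_operating_margin(pulse, xtalk_pulses, samples_per_ui, noise_rms=0.001):
        """Toy margin in the spirit of COM: main-cursor signal versus RMS ISI,
        crosstalk, and a noise floor. Not the IEEE 802.3 Annex 93A procedure."""
        main = int(np.argmax(np.abs(pulse)))                    # main cursor location
        cursors = pulse[main % samples_per_ui::samples_per_ui]  # one sample per UI
        signal = np.max(np.abs(cursors))
        isi = np.sqrt(np.sum(cursors**2) - signal**2)           # RMS of non-main cursors
        xtk = np.sqrt(sum(np.sum(xp[::samples_per_ui]**2) for xp in xtalk_pulses))
        return 20 * np.log10(signal / np.sqrt(isi**2 + xtk**2 + noise_rms**2))

    # Synthetic single-pole-ish pulse response and one weak aggressor, 32 samples per UI.
    t = np.arange(40 * 32, dtype=float)
    victim = np.exp(-t / 60.0) - np.exp(-t / 20.0)
    aggressor = 0.02 * np.exp(-t / 80.0)
    print(f"toy margin: {toy_operating_margin(victim, [aggressor], 32):.1f} dB")

In the real method, the transmitter/receiver filter chain, equalizer adaptation, and the parameter tables in the relevant IEEE 802.3 annex determine the pulse responses and noise terms that enter this kind of calculation.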

Process also mattered. Publishing representative channel models into IEEE working groups was an industry first. Instead of “ouch tests” where SerDes vendors reacted to whatever channels interconnect teams produced, standards bodies curated public S-parameter libraries that mirrored real backplanes and cables at 25, 50, and 100+ Gb/s per lane. That transparency let COM evolve collaboratively—tuning assumptions, refining parameter sets, and aligning equalization budgets—with evidence that mapped to shipping hardware. Over time, COM was adopted and extended across many Ethernet projects (802.3bj, bm, by, bs, cd, ck, df, and the ongoing dj effort), and it influenced parallel work in OIF and InfiniBand.

Why has COM endured? Three reasons. It aligns incentives by giving interconnect and SerDes designers a single scoreboard. It scales: the same pulse-response/statistical framework accommodates NRZ and PAM-N, evolving to higher baud rates with updated parameter tables and annexes (e.g., Annex 93A and 178A). And it’s verifiable: open example code and published channels shrink the gap between compliance and system bring-up. Looking ahead, the pressures are familiar—denser packages, rougher loss at higher Nyquist, more aggressive equalization, and tighter power. COM’s core idea—evaluate channels in the space where receivers actually operate—remains the right abstraction. It turns negotiation into engineering, replacing guesswork with a metric both sides can build to.

See the full Samtec whitepaper here.

Also Read:

Visualizing System Design with Samtec’s Picture Search

Webinar – Achieving Seamless 1.6 Tbps Interoperability with Samtec and Synopsys

Samtec Advances Multi-Channel SerDes Technology with Broadcom at DesignCon


Podcast EP300: Next Generation Metalization Innovations with Lam’s Kaihan Ashtiani
by Daniel Nenni on 07-30-2025 at 10:00 am

Dan is joined by Kaihan Ashtiani, Corporate Vice President and General Manager of atomic layer deposition and chemical vapor deposition metals in Lam’s Deposition Business Unit. Kaihan has more than 30 years of experience in technical and management roles, working on a variety of semiconductor tools and processes.

Dan explores the challenges of metallization for advanced semiconductor devices with Kaihan, where billions of connections must be patterned reliably to counteract heat and signal integrity problems. Kaihan describes the move from chemical vapor deposition to the atomic layer deposition approach used for advanced nodes. He also discusses the motivations for the move from tungsten to molybdenum for metallization.

He explains that thin film resistivity challenges make molybdenum a superior choice, but working with this material requires process innovations that Lam has been leading. Kaihan describes the ALTUS Halo tool developed by Lam and the ways this technology addresses the challenges of metallization patterning for molybdenum, both in terms of quality of results and speed of processing.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Prompt Engineering for Security: Innovation in Verification
by Bernard Murphy on 07-30-2025 at 6:00 am

We have a shortage of reference designs to test detection of security vulnerabilities. An LLM-based method demonstrates how to fix that problem with structured prompt engineering. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick, Empowering Hardware Security with LLM: The Development of a Vulnerable Hardware Database, was published at the 2024 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST) and has 12 citations. The authors are from the University of Florida, Gainesville.

The authors use LLMs to create a large database (Vul-FSM) of FSM designs vulnerable to a set of 16 weaknesses, documented either in the CWE-MITRE database or in separate guidelines, by inserting these weaknesses into base designs. The intent is to use this dataset as a reference for security analysis tools or security mitigations; the dataset is available on GitHub. They also provide an LLM-based mechanism to detect such vulnerabilities.

The core of the method revolves around a structured approach to prompt engineering to generate (they claim) high integrity test cases and methods for detection. Their prompt engineering methods, such as in-context learning, appear relevant to a broader set of verification problems.

Paul’s view

Hardware security verification is still a somewhat niche market today, but it is clearly on the rise. Open databases to check for known vulnerabilities are making good progress – for example, CWE (cwe.mitre.org) is often used by our customers. However, availability of good benchmark suites of labeled testcases with known vulnerabilities is limited, which in turn limits our ability to develop good EDA tools to check for them.

This month’s paper uses LLM prompt engineering with GPT 3.5, via OpenAI’s APIs, to create a labeled benchmark suite of 10k Verilog designs for simple control circuit state machines with 3 to 10 states. Each of these designs contains at least one of 16 different known vulnerabilities and has been created from a base set of 400 control circuits that do not contain any vulnerabilities. The paper also describes an LLM-based vulnerability detection system for these same 16 vulnerabilities using prompt engineering, which is surprisingly effective – 80% likely on average to detect the vulnerability.

One of the best parts of the paper is Figure 6, which shows an example of an actual complete LLM prompt clearly divided into sections showing chain-of-thought (giving the LLM step-by-step instructions on how to solve the problem), reflexive verification (giving the LLM instructions on how to check that its response is correct), and exemplary demonstration (giving the LLM an example of a solution to the problem for another circuit). There are some decent charts elsewhere in the paper that show how much these prompt engineering techniques improve the quality of response from the LLM – about 10-20% depending on the vulnerability.

I’m grateful to the authors for their contribution to the security verification community here!

Raúl’s view

This paper introduces SecRT-LLM, a novel framework for generating and detecting security vulnerabilities in hardware designs, specifically finite state machines (FSMs), that leverages large language models (LLMs). SecRT-LLM uses vulnerability insertion to create a benchmark of 10,000 small RTL FSM designs with 16 types of embedded vulnerabilities (Table II), many based on CWE (Common Weakness Enumeration) classes. It also does vulnerability detection, identifying security issues in RTL on this benchmark.

One of the key contributions is the integration of prompt engineering, LLM inference, and fidelity checking. Prompting strategies in particular are quite elaborate aimed at guiding the LLM to perform the target task. Six tailored prompt strategies greatly improve LLM performance:

  • Reflexive Verification Prompting (self-scrutiny, e.g., indicate where and how have the instructions in the prompt been followed)
  • Sequential Integration Prompting (chain-of-thought, dividing a task into sub-tasks)
  • Exemplary Demonstration Prompting (example designs)
  • Contextual Security Prompting (inserting and identifying security vulnerabilities and weaknesses)
  • Focused Assessment Prompting (emphasize detailed examination of a specific design element such as a deadlock)
  • Structured Data Prompting (systematic arrangement of extensive data for example as a table).

A prompt example is given in Fig. 6.
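Figure 6 itself is not reproduced here, but a hypothetical Python sketch of how such a sectioned prompt might be assembled is shown below; the wording of each section is our own illustration and does not quote the paper.

    def build_detection_prompt(rtl: str, weakness: str, example: str) -> str:
        """Assemble a sectioned prompt in the style the paper describes.
        Section wording is invented for illustration; the real text is in Fig. 6."""
        chain_of_thought = (
            "Analyze the FSM step by step: list the states and transitions, then "
            f"check each transition for the weakness '{weakness}'."
        )
        reflexive_verification = (
            "Before answering, verify that every reported issue cites a specific "
            "state or transition in the code, and re-check your conclusion."
        )
        exemplary_demonstration = "Worked example on another circuit:\n" + example
        return "\n\n".join([
            "You are a hardware security reviewer examining Verilog RTL.",
            chain_of_thought,
            reflexive_verification,
            exemplary_demonstration,
            "Design under review:\n" + rtl,
            "Report each weakness found, or state that none is present.",
        ])

    print(build_detection_prompt("module fsm(...); /* ... */ endmodule",
                                 "missing default state transition",
                                 "State S3 has no reset path, so ..."))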

Experimental validation shows high accuracy in both insertion (~82% pass@1 and ~97% pass@5) and detection (~80% pass@1 and ~99% pass@5) of vulnerabilities. Automating this process drastically reduces time and cost compared to manual efforts.
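For readers unfamiliar with the metric, pass@k is commonly computed with the unbiased estimator below (that SecRT-LLM uses this exact estimator is our assumption):

    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k estimator (Chen et al., 2021): probability that at least
        one of k samples, drawn from n attempts of which c succeeded, is correct."""
        if n - c < k:
            return 1.0
        return 1.0 - comb(n - c, k) / comb(n, k)

    # e.g. 10 attempts per design with 4 correct detections:
    print(round(pass_at_k(10, 4, 1), 2))  # 0.4
    print(round(pass_at_k(10, 4, 5), 2))  # 0.98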

The paper applies AI capabilities to hardware security needs. Two major contributions are generating a benchmark of FSMs with embedded vulnerabilities, which serves as a resource for training and evaluating vulnerability detection tools and using prompt engineering for security-centric tasks to guide LLMs. Most commercial tools today focus on verification, threat modeling, and formal methods—but do not yet deeply leverage LLMs for RTL vulnerability tasks. Research such as SecRT-LLM addresses this gap and may influence future commercialization of AI in this field.

Also Read:

New Cooling Strategies for Future Computing

Reachability in Analog and AMS. Innovation in Verification

A Novel Approach to Future Proofing AI Hardware


Calibre Vision AI at #62DAC
by Daniel Payne on 07-29-2025 at 10:00 am

Calibre is a well-known EDA tool from Siemens that is used for physical verification, but I didn’t really know how AI technology was being used, so I attended a Tuesday session at #62DAC to get up to speed. Priyank Jain from Calibre Product Management presented slides and finished with a Q&A session.

In the semiconductor world we’ve seen a hardware-centric viewpoint starting with PCs in the 80s and 90s, where software ran on general-purpose hardware. Today, it’s more of a software-defined world, where the software architecture drives the hardware implementation.

The vision with Calibre is to shift left and reduce Turn Around Time (TAT) by running the tools earlier in the design and implementation flows and using AI techniques. A huge challenge of running full-chip integration earlier is that it produces billions of DRC errors, which makes the tool load slowly and increases debug time, all with little collaboration between engineering team members on what to fix first.

This challenge led to a new product called Calibre Vision AI, which enables full-chip analysis earlier in the implementation process by adding intelligent debug and user collaboration. With this new tool, engineers can quickly make sense of a DRC run that has billions of errors, as the AI feature clusters similar errors together, making it easier to identify systematic issues such as block overlap, bad vias, fill overlap and more, and lets you prioritize which errors should be fixed first.

Calibre Vision AI has a modern, multi-threaded foundation for fast operation, a GUI with dynamic panels for quick debug, and navigation features to pinpoint the source of errors.

The GUI helps visualize a heat map showing the density of DRC errors. AI is used to cluster similar errors, and it works across all IC layout technologies with no model training required from tool users. Common failure causes are easily identified so that you can be more productive in fixing DRC errors. As engineers use the tool, they can place dynamic bookmarks on the layout to capture issues, assign work, and write notes for other team members to collaborate on the fixes.
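Calibre Vision AI’s clustering is proprietary and needs no model training by the user, but the general idea of grouping violations by check and proximity can be illustrated with a simple grid-bucketing sketch (hypothetical code, not the product’s method):

    from collections import defaultdict

    def cluster_by_location(errors, cell_um=50.0):
        """Group DRC violations into grid cells by check name and location.
        Purely illustrative of proximity grouping, not Calibre Vision AI's algorithm."""
        grid = defaultdict(list)
        for check, x, y in errors:  # (check_name, x_um, y_um)
            grid[(check, int(x // cell_um), int(y // cell_um))].append((x, y))
        # Largest groups first: more likely to be systematic issues than one-off errors.
        return sorted(grid.items(), key=lambda kv: len(kv[1]), reverse=True)

    errors = [("M1.S.1", 10.0 + i * 0.5, 12.0) for i in range(100)]
    errors.append(("VIA2.E", 900.0, 450.0))
    for (check, gx, gy), points in cluster_by_location(errors)[:3]:
        print(check, f"cell=({gx},{gy})", f"count={len(points)}")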

It’s recommended that you run the Calibre RVE tool at the block level and for tapeouts, and run Calibre Vision AI for chip-level analysis at early stages, as the two tools complement each other. Using Calibre Vision AI for full-chip analysis accelerates full-chip debug through its high capacity and multi-threaded technology. Heat maps of errors show the entire die so that you can pinpoint the areas of highest interest. Results are visualized instantly, even when there are millions of errors. One comparison showed that for 790 million DRC errors, a traditional ASCII flow loaded in 15 minutes, while a Vision AI flow using OASIS loaded in just 45 seconds.

Early users of Vision AI reported that it was faster to identify systematic issues and that DRC debug iterations were cut in half. For example, one run with 600M errors from 3,400 checks was reduced to just 381 signal groups, or clusters.

Siemens has many EDA tools using AI techniques.

There are three places where AI is used in Calibre Vision AI:

  • Chatbot – EDA knowledge using prompts
  • Reasoning – Data analysis and summarization
  • Tool Operations – Performing complex tool functions from prompts

Summary

AI-based clustering can now reduce DRC analysis and debug tasks that once required hours to just minutes. Teams doing physical design can collaborate and communicate more efficiently by using bookmarks, block debug, and attached reports.

Q&A

Q: Is there any plan to Auto-fix DRC errors?

A: AI quickly groups similar DRC violations for easier root-cause analysis, but we still need a human in the loop to fix the violations.

Q: Can I create new Signals?

A: Vision AI comes with a set of Signals out of the box, and users can also create their own custom signals from their own checks (e.g., M1 checks first).

Q: What’s the difference between RVE classifier and AI?

A: AI elevates the classifier by 100X, analyzing the results, locations, proximity, and root causes, and clustering them into groups. RVE is good for fewer errors, but AI works on billions of errors and earlier in the process.

Q: Can you aggregate AI across multiple designs, trends, library cells in common, broad trends?

A: It’s under development, stay tuned for a future release.

Q: Are signal groups an AI classification?

A: We use unsupervised learning to create the groups by location and proximity.

Related Blogs