Formal Verification: Why It Matters for Post-Quantum Cryptography
by Daniel Nenni on 08-01-2025 at 10:00 am

Formal verification is becoming essential in the design and implementation of cryptographic systems, particularly as the industry prepares for post-quantum cryptography (PQC). While traditional testing techniques validate correctness over a finite set of scenarios, formal verification uses mathematical proofs to guarantee that cryptographic primitives behave correctly under all possible conditions. This distinction is vital because flaws in cryptographic implementations can lead to catastrophic breaches of confidentiality, integrity, or authenticity.

In cryptographic contexts, formal verification is applied across three primary dimensions: verifying the security of the cryptographic specification, ensuring the implementation aligns precisely with that specification, and confirming resistance to low-level attacks such as side-channel or fault attacks.

The first dimension involves ensuring that the design of a cryptographic primitive fulfills formal security goals. This step requires proving that the algorithm resists a defined set of adversarial behaviors based on established cryptographic hardness assumptions. The second focuses on verifying that the implementation faithfully adheres to the formally specified design. This involves modeling the specification mathematically and using tools like theorem provers or model checkers to validate that the code behaves correctly in every case. The third area concerns proving that the implementation is immune to physical leakage—such as timing or power analysis—that could inadvertently expose secret data. Here, formal methods help ensure constant-time execution and other safety measures.
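
To make the constant-time requirement concrete, here is a minimal, illustrative sketch in Python (real constant-time analyses target C, assembly, or RTL rather than Python): a naive tag comparison that exits at the first mismatching byte leaks timing information, while a comparison such as hmac.compare_digest inspects every byte regardless of where the mismatch occurs.

    import hmac

    def leaky_equal(tag: bytes, expected: bytes) -> bool:
        # Early exit on the first mismatching byte: the running time depends on
        # how many leading bytes match, which an attacker can measure remotely.
        if len(tag) != len(expected):
            return False
        for a, b in zip(tag, expected):
            if a != b:
                return False
        return True

    def constant_time_equal(tag: bytes, expected: bytes) -> bool:
        # hmac.compare_digest examines every byte regardless of where the first
        # mismatch occurs, removing the data-dependent timing signal.
        return hmac.compare_digest(tag, expected)

    print(leaky_equal(b"secret-tag", b"secret-tag"), constant_time_equal(b"secret-tag", b"wrong-tag!"))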

Formal verification also contributes to broader program safety by identifying and preventing bugs like buffer overflows, null pointer dereferencing, or other forms of undefined behavior. These bugs, if left unchecked, could become exploitable vulnerabilities. By combining specification security, implementation correctness, and low-level robustness, formal verification delivers a high level of assurance for cryptographic systems.

While powerful, formal verification is often compared to more traditional validation techniques like CAVP (Cryptographic Algorithm Validation Program) and TVLA (Test Vector Leakage Assessment). CAVP ensures functional correctness by running implementations through a series of fixed input-output tests, while TVLA assesses side-channel resistance via statistical analysis. These methods are practical and widely used in certification schemes but inherently limited. They can only validate correctness or leakage resistance across predefined scenarios, which means undiscovered vulnerabilities in untested scenarios may remain hidden.
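
To illustrate the fixed input-output style of testing that CAVP relies on, here is a rough sketch of a known-answer test in Python; the expected digests below are generated from Python's hashlib as a stand-in reference, whereas a real CAVP run uses NIST-supplied vectors. A pass says nothing about inputs outside the list, which is exactly the gap formal verification is meant to close.

    import hashlib

    # Fixed input/output pairs in the spirit of a CAVP known-answer test. The
    # expected digests are generated here from Python's hashlib as a stand-in
    # reference; a real CAVP run uses NIST-supplied test vectors.
    MESSAGES = [b"", b"abc", b"The quick brown fox jumps over the lazy dog"]
    EXPECTED = {m: hashlib.sha3_256(m).hexdigest() for m in MESSAGES}

    def implementation_under_test(msg: bytes) -> str:
        # Placeholder for the SHA3-256 implementation being validated.
        return hashlib.sha3_256(msg).hexdigest()

    def run_known_answer_test() -> bool:
        # Passing only demonstrates correctness on these fixed inputs; formal
        # verification, by contrast, argues about every possible input.
        return all(implementation_under_test(m) == EXPECTED[m] for m in MESSAGES)

    print("KAT passed:", run_known_answer_test())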

Formal verification, by contrast, can prove the absence of entire classes of bugs across all input conditions. This level of rigor offers unmatched assurance but comes with trade-offs. It is resource-intensive, requiring specialized expertise, extensive computation, and significant time investment. Additionally, it is sensitive to the accuracy of the formal specifications themselves. If the specification fails to fully capture the intended security properties, then even a correctly verified implementation might still be vulnerable in practice.

Moreover, formal verification is constrained by the scope of what it models. For instance, if the specification doesn’t include side-channel models or hardware-specific concerns, those issues may go unaddressed. Tools used in formal verification can also contain bugs, which introduces the risk of false assurances. To address these issues, developers often employ cross-validation with multiple verification tools and complement formal verification with traditional testing, peer review, and transparency in the verification process.

Despite these limitations, formal verification is increasingly valued, especially in high-assurance sectors like aerospace, defense, and critical infrastructure. Although most certification bodies do not mandate formal verification—favoring test-driven approaches like those in the NIST and Common Criteria frameworks—its use is growing as a differentiator in ensuring cryptographic integrity. As cryptographic systems grow in complexity, particularly with the shift toward post-quantum algorithms, the industry is recognizing that traditional testing alone is no longer sufficient.

PQShield exemplifies this forward-looking approach. The company is actively investing in formal verification as part of its product development strategy. It participates in the Formosa project and contributes to formal proofs for post-quantum cryptographic standards like ML-KEM and ML-DSA. The company has verified its implementation of the Keccak SHA-3 permutation, as well as the polynomial arithmetic and decoding routines in its ML-KEM implementation. PQShield also contributes to the development of EasyCrypt, an open-source proof assistant used for reasoning about cryptographic protocols.

Looking ahead, PQShield plans to extend formal verification across more of its software and hardware offerings. This includes proving the correctness of high-speed hardware accelerators, particularly the arithmetic and sampling units used in PQC schemes. These efforts rely on a mix of internal and open-source tools and demonstrate the company’s commitment to secure-by-design principles.

In conclusion, formal verification offers critical advantages for cryptographic security, particularly as the industry transitions to post-quantum systems. It complements conventional testing methods by addressing their limitations and providing strong guarantees of correctness, robustness, and resistance to attack. While not yet universally mandated in certification schemes, formal verification is fast becoming a cornerstone of next-generation cryptographic assurance—and companies like PQShield are leading the way in putting it into practice.

You can download the paper here.

Also See:

Podcast EP290: Navigating the Shift to Quantum Safe Security with PQShield’s Graeme Hickey

Podcast EP285: The Post-Quantum Cryptography Threat and Why Now is the Time to Prepare with Michele Sartori

PQShield Demystifies Post-Quantum Cryptography with Leadership Lounge


Podcast EP301: Celebrating 20 Years on Innovation with yieldHUB’s John O’Donnell
by Daniel Nenni on 08-01-2025 at 10:00 am

Dan is joined by John O’Donnell, Founder and CEO of yieldHUB, a pioneering leader in advanced data analytics for the semiconductor industry. Since establishing the company in 2005, he has transformed it from a two-person startup into a trusted multinational partner that empowers some of the world’s leading semiconductor companies to improve yield, reduce test costs, boost engineering efficiency, and enhance quality.

yieldHUB has recently celebrated its 20th anniversary. The company has also received national recognition for its accomplishments in Ireland. Dan explores yieldHUB’s history and future plans with John, including the company’s expansion worldwide and its new R&D focus areas. John describes the company’s new yieldHUB Live system, a test-agnostic, real-time capability with an AI recommendation system and digital twin models. John explains that AI is a new development focus for the company, and that this new system is having a significant impact on improving yield, reducing test costs, and increasing product quality. John also describes a new API-native platform that is in development.

Dan also explores the four pillars of yieldHUB with John, which are the previously mentioned goals of improving yield, reducing test cost, boosting engineering efficiency, and enhancing quality. John describes the importance of each pillar and explains the approach yieldHUB takes to achieve these goals with its customers.

Contact yieldHUB here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Architecting Your Next SoC: Join the Live Discussion on Tradeoffs, IP, and Ecosystem Realities
by Daniel Nenni on 08-01-2025 at 8:00 am

Designing a system-on-chip (SoC) has never been more complex—or more critical. With accelerating demands across AI, automotive, and high-performance compute applications, today’s SoC architects face a series of high-stakes tradeoffs from the very beginning. Decisions made during the earliest phases of design—regarding architecture, IP selection, modeling, and system integration—can make or break a project’s success.

That’s why SemiWiki is proud to host a live webinar:

“What to Consider When Architecting Your Next SoC: Architectural Tradeoffs, IP Selection, and Ecosystem Realities”

 Thursday, August 14, 2025 | 9:00 AM PDT

This session will feature a practical, fast-paced conversation between two seasoned experts in SoC architecture and IP design:

  • Paul Martin, Global Director of SoC Architecture, Aion Silicon

  • Darren Jones, Distinguished Engineer & Solutions Architect, Andes Technology

Together, they’ll walk through real-world scenarios, decision frameworks, and lessons learned from working with some of the most demanding silicon customers in the world.

Rather than a static presentation, the format is designed as a fireside chat—highlighting the nuance and complexity of early-stage architecture decisions through dialog. Expect candid insights, live Q&A, and audience engagement—not a canned marketing pitch.

Register now to reserve your spot and be part of the conversation. 

What You’ll Learn:

  • How to weigh architectural tradeoffs when performance, flexibility, and schedule are in tension

  • What questions to ask when selecting IP across multiple vendors

  • The role of modeling, simulation, and emulation in derisking “works-first-time” silicon

  • How system-level decisions (like interconnect width or coherency models) impact overall architecture

  • Where ecosystem support—toolchains, deliverables, and foundry alignment—can determine downstream success

You’ll also gain a deeper understanding of performance, power, and area (PPA) metrics—how to interpret them, and how to avoid common traps when comparing IP blocks. This session goes beyond datasheets to explore how real design teams validate assumptions and make decisions that hold up under pressure.

Whether you’re leading architecture for your next chip, evaluating IP options, or supporting teams through SoC integration, this webinar will sharpen your perspective and provide actionable strategies.

Why Attend Live:

This is a live-only event, and attendees will have the chance to ask questions directly. If your team is facing architectural decisions this quarter—or simply wants to learn how top-tier firms approach system tradeoffs—this is a valuable opportunity to hear from peers in the trenches.

Register now to reserve your spot and be part of the conversation. 

Speaker Bios:

Darren Jones, Distinguished Engineer and Solutions Architect, Andes Technology

Darren Jones is a seasoned engineering leader with more than three decades of experience in processor architecture, SoC design, and IP integration. Currently a Distinguished Engineer and Solutions Architect at Andes Technology, he helps customers develop high-performance RISC-V–based solutions tailored to their systems, drawing on his deep expertise in system-on-chip design and verification.

Prior to Andes, Darren held senior leadership roles at Esperanto Technologies, Wave Computing, Xilinx, MIPS Technologies, and LSI Logic, where he led teams through multiple successful chip tapeouts—from 7nm inferencing accelerators to complex multi-core and multithreaded architectures. His experience spans architecture definition, RTL design, IP delivery, and full-chip integration.

Darren holds more than 25 patents in processor design and multithreading. He earned his M.S. in electrical engineering from Stanford University and his B.S. with highest honors from the University of Illinois Urbana-Champaign.

Paul Martin, Global Director of SoC Architecture, Aion Silicon

Paul Martin is the Global Director of SoC Architecture at Aion Silicon, where he leads international engineering teams and drives customer engagement across complex semiconductor design projects. With decades of experience in commercial, technical, and strategic roles at companies including ARM and NXP, he has helped bring cutting-edge SoC technologies to market. Martin is known for his ability to bridge technical innovation with business value across Europe, North America, and Asia.

Register now to reserve your spot and be part of the conversation. 

Also Read:

The Sondrel transformation to Aion Silicon!

2025 Outlook with Oliver Jones of Sondrel

CEO Interview: Ollie Jones of Sondrel


CEO Interview with Andrew Skafel of Edgewater Wireless
by Daniel Nenni on 08-01-2025 at 6:00 am

As the demand for high-capacity, low-latency wireless networks explodes across residential, enterprise, and industrial environments, a Canadian innovator is quietly reshaping the way Wi-Fi works—from the silicon up. Edgewater Wireless (TSXV:YFI/OTC:KPIFF), headquartered in Ottawa, is pioneering a transformative approach to wireless connectivity with its patented Wi-Fi Spectrum Slicing™ technology and an AI-enhanced roadmap that’s drawing attention from industry heavyweights.

We sat down with Andrew Skafel, CEO of Edgewater Wireless, to discuss the company’s impressive recent milestones—including selection to Canada’s prestigious FABrIC semiconductor program, backing from Silicon Catalyst, and entry into Arm’s Flexible Access (AFA) Program. We also dove into Edgewater’s vision for AI-powered spectrum optimization and how their “standards-leading” strategy is carving out a unique path in a crowded wireless landscape.

Andrew, thanks for joining us. Edgewater Wireless has had a banner year. Let’s start with the basics—what makes Edgewater different from the rest of the Wi-Fi ecosystem?

Thanks—it’s great to be here. At Edgewater, we’ve taken a fundamental departure from traditional Wi-Fi design. Instead of incrementally optimizing the existing single-channel-per-radio architecture, we pioneered Wi-Fi Spectrum Slicing, which enables multiple concurrent channels in the same frequency band—essentially “slicing” the spectrum to maximize capacity and performance.

This isn’t just theoretical. Our patented silicon architecture can deliver 10x performance gains and up to 50% lower latency—even on legacy client devices. We’re not asking the world to adopt a new standard or wait for new device rollouts. We’re enhancing performance from the infrastructure side, and we’re fully aligned with current and evolving Wi-Fi standards.

That’s impressive. Let’s talk about the AI element. You recently announced the initiation of prototyping an AI subsystem using Arm® technology. What does that mean for Wi-Fi?

It’s a big step for us—and for Wi-Fi, more broadly. We’ve entered the Arm Flexible Access Program, which gives us cost-effective access to industry-proven Arm IP. With this, we’re prototyping our next-generation Wi-Fi baseband chip using Arm Cortex® CPUs, and we’re exploring Arm Ethos™ NPUs for on-chip AI acceleration.

Why does this matter? Because we’re bringing real-time, AI-driven spectrum engineering directly to the network edge. Our patent-pending machine learning algorithms are designed to do things Wi-Fi’s never done before—mitigate congestion, manage spectrum allocation in real-time, and autonomously reconfigure channel/link density for optimal throughput—all within the chipset. This is a paradigm shift that will allow networks to self-optimize based on real-time user and device behavior.

Wi-Fi is notoriously messy in dense environments. Are you saying this AI-driven approach can solve that?

Exactly. Traditional Wi-Fi is reactive: when collisions and interference occur, it has limited ability to mitigate them, and often does so ineffectively. Our AI subsystem is proactive, acting on data received from the coverage-area environment, delivering a significant boost to overall QoS and transforming the user experience. Imagine your Wi-Fi recognizing a spike in video traffic and re-allocating spectrum capacity to handle the load, or rerouting traffic away from interference without any human intervention.

This is particularly crucial in environments like MDUs (multi-dwelling units), enterprise campuses, industrial IoT deployments, and even in smart homes. As device counts continue to explode, intelligent spectrum management becomes the key differentiator.

You will get unparalleled Wi-Fi QoS for all devices with Wi-Fi Spectrum Slicing.

You mentioned prototyping in Arm’s ecosystem. What advantages does that bring?

Arm has created an incredibly robust platform for innovation. Their CPUs and NPUs are energy-efficient, scalable, and deeply supported, which accelerates our time to market and de-risks development. More importantly, being part of the Arm ecosystem validates our approach and opens doors for deeper industry collaboration.

We’re not building a niche chipset—we’re creating a standards-aligned, AI-enabled platform that can be licensed by leading silicon players or deployed by OEMs, service providers, and enterprises. Working with Arm helps us do that at scale.

That leads us to another major milestone—your inclusion in the Canadian government’s FABrIC program. What does that support mean for Edgewater?

The Government of Canada’s FABrIC initiative is a game-changer. For Edgewater, it represents a national vote of confidence in our vision to lead the next wave of intelligent, AI-enabled wireless innovation. Managed by CMC Microsystems, FABrIC is a five-year $223M program to accelerate the commercialization of semiconductor-based processes and products. We’re proud to be one of the first recipients. The program provides strategic funding that allows us to de-risk R&D and accelerate commercialization of our next-gen AI-enabled Wi-Fi chipsets.

This isn’t just about money—it’s about national competitiveness. Canada has world-class talent and ideas, but turning those into commercially viable silicon solutions for the worldwide market requires deep support. FABrIC recognizes that and is helping position Canada as a global player in intelligent connectivity.

You’ve also secured support from Silicon Catalyst, the world’s only incubator focused exclusively on semiconductors. What has that partnership unlocked?

Joining Silicon Catalyst has given us access to a global network of mentors, corporate partners, and critical infrastructure for silicon development. They’ve helped hundreds of semiconductor startups navigate the journey from concept to production—and their backing speaks volumes about our technology and market potential.

The Silicon Catalyst ecosystem is already helping us refine our product-market fit, prepare for mass manufacturing, and navigate relationships with fabs and IP providers. It’s a massive accelerant for our roadmap.

Let’s step back a bit. Where do you see Edgewater Wireless headed over the next 12–24 months?

Over the next 12 to 24 months, Edgewater Wireless is laser-focused on commercializing our next-generation Wi-Fi Spectrum Slicing silicon under the PrismIQ™ product family. This family will span three high-value, global market segments—residential, enterprise, and Industrial Internet of Things (IIoT)—each with tailored variants designed to optimize performance, capacity, and reliability for dense device environments.

We’re already well through the ‘beta’ prototyping phase. Key upcoming milestones in our silicon realization roadmap include:

  • Prototyping and silicon realization of our next-gen AI-enabled baseband is underway, with sampling beginning in early 2026.  For those parties interested in early access to the PrismIQ product family, development platforms will be made available to select partners in the first quarter of 2026.
  • Partnering with Tier-1 industry players to accelerate market deployment through licensing and joint development initiatives, ensuring broad industry alignment and faster paths to integration.
  • Scaling our software stack and AI-driven algorithms to ensure seamless compatibility with evolving standards like Wi-Fi 7—and laying the groundwork for next-gen protocols including Wi-Fi 8.
  • Leveraging support from strategic partners and government-backed programs, including Silicon Catalyst, Arm, CableLabs, and FABrIC, to fast-track innovation and market readiness.

Interested and qualified service providers and equipment vendors are encouraged to contact us to gain early access to the PrismIQ product family. Let’s shape the future of wireless, together.

There’s a lot of buzz around “standards-leading” versus “standards-following.” How does Edgewater view its role in the Wi-Fi standards ecosystem?

We’re deeply involved in the standards community, and we build our technology with alignment in mind. But we’re also pushing the boundaries of what those standards enable.

Spectrum Slicing is a great example—it fits within existing standards yet unlocks next-level performance by reimagining how the physical layer handles channelization and interference. That’s what we mean by “standards-leading.” We respect the standards, we align with them—but we’re also creating new categories of capability within them.

It’s not about breaking compatibility—it’s about enhancing what’s possible, without requiring changes on the client side. That’s key to enabling broad adoption.

Final question—what’s the big vision here? What does success look like for Edgewater Wireless?

Success for us means redefining Wi-Fi—not just making it faster, but smarter. Our goal is to embed AI-enabled Spectrum Slicing into the heart of Wi-Fi infrastructure and make intelligent wireless connectivity the norm, not the exception.

We believe Wi-Fi should be as dynamic as the environments it serves, and with our technology, it can be. Whether it’s in a smart home, an enterprise network, or a factory floor, we want to be the intelligence layer that ensures every device gets the performance it needs, when it needs it.

We’re not just solving for today—we’re building the wireless fabric of the future.

About Edgewater Wireless (TSXV: YFI / OTC: KPIFF)

We make Wi-Fi. Better.

Edgewater Wireless delivers unmatched Wi-Fi QoS—bar none—by intelligently mitigating congestion, managing spectrum allocation in real-time, and autonomously reconfiguring channel and link density—driving economic gains for service providers and their customers through reduced churn, improved efficiency, and high-performance connectivity in dense environments. Redefining Wi-Fi from the silicon up, Edgewater’s patented, AI-powered Spectrum Slicing platform—delivered through the PrismIQ™ product family—breaks the limits of legacy Wi-Fi by enabling multiple concurrent channels in a single band. This delivers 10x performance and up to 50% lower latency, even for legacy devices. With 26 patents and a fabless model, Edgewater is transforming the economics of Wi-Fi for service providers, OEMs, and enterprises—powering scalable, standards-aligned/leading connectivity across residential, enterprise, and Industrial IoT markets. Edgewater is building the intelligent wireless foundation for the next era of global connectivity.

Visit https://edgewaterwireless.com

Andrew Skafel is a recognized leader in wireless and next-generation Wi-Fi, driving innovation as President and CEO of Edgewater Wireless. Under his leadership, the company has pioneered Wi-Fi Spectrum Slicing, revolutionizing high-density wireless performance. With over two decades in telecom and technology, Andrew has led product development, strategy, and key industry partnerships. His customer-focused vision ensures Edgewater’s patented solutions address real-world connectivity challenges. A sought-after expert on Wi-Fi innovation, Andrew continues to shape the future of wireless communications, positioning Edgewater Wireless as a global leader in scalable, high-performance networking solutions.

Also Read:

CEO Interview with Jutta Meier of IQE

Executive Interview with Ryan W. Parker of Photonic Inc.

CEO Interview with Jon Kemp of Qnity


CAST Webinar About Supercharging Your Systems with Lossless Data Compression IPs
by Mike Gianfagna on 07-31-2025 at 10:00 am

Much of advanced technology is data-driven. From the cloud and AI accelerators to automotive processing and edge computing, data storage and transmission efficiency are of critical importance. It turns out that lossless data compression is a key ingredient to deliver these requirements.

While there are both software and hardware solutions, hardware-based approaches offer the best balance for throughput, latency, and power demands. CAST recently presented an informative webinar on this topic with SemiWiki. In case you missed it, there is now a replay available. A link is coming but first let’s examine what is discussed in the CAST webinar about supercharging your systems with lossless data compression IPs.

The Presenter

The webinar is presented by Dr. Calliope-Louisa Sotiropoulou, an Electronics Engineer who holds the position of Sales Engineer & Product Manager at CAST. Dr. Sotiropoulou specializes in image, video and data compression, and IP stacks. Before joining CAST, she worked as a research and development manager and an FPGA systems developer for the Aerospace and Defense sector. Dr. Sotiropoulou is very knowledgeable on the topic of data compression and has an easy-to-understand style.

She has a long academic record as a researcher, working on various projects, including the trigger and data acquisition system of the ATLAS experiment at CERN. She received her PhD from the Aristotle University of Thessaloniki.

What is Covered

The full title of the webinar is Unpacking System Performance: Supercharge Your Systems with Lossless Compression IPs. The topics discussed during the 35-minute webinar include:

  • What is lossless data compression, and why do we use it?
  • Differences between software and hardware implementations
  • How to choose the right algorithm for your application
  • Real-life examples: Integration and implementation
  • The CAST approach and IP cores portfolio

The webinar is followed by about 10 minutes of Q&A from the webinar audience that covers some very relevant and interesting topics.

Some Highlights

There is a lot of great information shared by Dr. Sotiropoulou in this webinar. She touches on the pros and cons of various approaches and backs them up with details from real examples. To whet your appetite, here is her summary of why a hardware approach is preferred:

Software is flexible, but:

  • It creates a high CPU load
  • It scales poorly, especially at the high throughputs today’s applications demand
  • It has unpredictable latency

Hardware:

  • Offers deterministic performance: latency, throughput, power
  • Scales to multi-Gbps throughput
  • Lower power consumption
  • Real-time ready

She goes into detail on multiple approaches to data compression, illustrating the pros and cons of each. This discussion gets into topics such as optimizing memory size and file size for various problem sets.  
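
Purely as an illustration of the software-side tradeoffs she describes (and not material from the webinar itself), the short Python sketch below compresses a repetitive payload with zlib at several levels, showing how compression ratio is traded against CPU time; the payload and levels are arbitrary choices for the example, and a hardware IP core moves this work off the CPU with deterministic latency.

    import time
    import zlib

    # A compressible payload: repetitive text standing in for log or telemetry data.
    payload = b"sensor,timestamp,value\n" * 50_000

    for level in (1, 6, 9):  # zlib tradeoff: 1 = fastest, 9 = smallest output
        start = time.perf_counter()
        compressed = zlib.compress(payload, level)
        elapsed_ms = (time.perf_counter() - start) * 1000
        assert zlib.decompress(compressed) == payload  # lossless: exact round trip
        ratio = len(payload) / len(compressed)
        print(f"level {level}: {ratio:.1f}x smaller in {elapsed_ms:.1f} ms on the CPU")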

She then discusses the lossless data compression delivered by CAST. Target applications and system integration details are presented, along with specific results for various ASIC and FPGA technologies. She ends with a summary of the CAST approach and key takeaways.

To Learn More

I have just presented a high-level summary of the webinar content. If you are dealing with data intensive applications, approaches to lossless data compression will definitely be important and I highly recommend experiencing the complete webinar. You will be glad you did. You can access the webinar replay here.

You can also learn more about CAST’s lossless data compression IP here. And that’s the CAST webinar about supercharging your systems with lossless data compression IPs.

Also Read:

WEBINAR Unpacking System Performance: Supercharge Your Systems with Lossless Compression IPs

Podcast EP273: An Overview of the RISC-V Market and CAST’s unique Abilities to Grow the Market with Evan Price

CAST Advances Lossless Data Compression Speed with a New IP Core


cHBM for AI: Capabilities, Challenges, and Opportunities
by Kalar Rajendiran on 07-31-2025 at 6:00 am

cHBM Panelists at Synopsys Executive Forum

AI’s exponential growth is transforming semiconductor design—and memory is now as critical as compute. Multi-die architecture has emerged as the new frontier, and custom High Bandwidth Memory (cHBM) is fast becoming a cornerstone in this evolution. In a panel session at the Synopsys Executive Forum, leaders from AWS, Marvell, Samsung, SK Hynix, and Synopsys discussed the future of cHBM, its challenges, and the collective responsibility to shape its path forward.

Pictured above: Moderator: Will Townsend, Moor Insights & Strategy; Panelists: Nafea Bshara, VP/Distinguished Engineer, AWS; Will Chu, SVP & GM Custom Cloud Solutions Business Unit, Marvell; Harry Yoon, Corporate EVP, Products and Solutions Planning, Samsung; Hoshik Kim, SVP/Fellow, Memory Systems Research, SK Hynix; John Koeter, SVP, IP Group, Synopsys.

The Rise of cHBM in Multi-Die Design

HBM has already propelled AI performance to new heights. Without it, the AI market wouldn’t be where it is today. The evolution from 2.5D to 3D packaging brings a nearly 8x improvement in bandwidth and up to 5x power efficiency improvement—transformative gains for data-intensive workloads. Custom HBM further optimizes performance, reducing I/O area by at least 25% compared to standard HBM implementations. But with this comes a familiar tension: performance versus interoperability.

The Customization Dilemma

While hyperscalers may benefit from custom configurations—given they control the entire stack—the broader industry risks fragmentation. As panelists noted, every time memory innovation went too custom (e.g., HBM1, RDRAM), interoperability and industry adoption suffered. Without shared standards, memory vendors face viability issues, and smaller players risk exclusion. Custom HBM must not become a barrier to collaboration.

The Need for Speed in Standardization

A major takeaway from the panel was the urgency for faster-moving standards bodies. JEDEC has traditionally led HBM standardization, but with the pace of AI, panelists discussed how a new and more agile standards body could accelerate interoperable HBM frameworks. The industry also needs validated, standardized die-to-die (D2D) IP to preserve ecosystem harmony while scaling performance. The UCIe standard is fast establishing itself as that D2D IP standard.

Implications for Memory Vendors

Memory vendors are in a tough spot. On one hand, custom HBM demands more features and integration flexibility; on the other, it erodes volume leverage and introduces supply chain risks. To stay competitive, vendors must support both standard and semi-custom memory—while collaborating more deeply with SoC architects, EDA tool providers, and packaging experts.

The Power of the Ecosystem

To unlock cHBM’s full potential, ecosystem-wide collaboration is non-negotiable. Synopsys is playing a central role for system-level enablement—offering integrated IP, power modeling, thermal simulation, and system-level co-design tools. Only through coordinated efforts can companies navigate packaging complexity, ensure test coverage, and deliver performant, scalable AI systems.

The 3D Packaging Imperative

3D packaging is the physical foundation of multi-die design. Compared to 2.5D solutions, 3D integration supports significantly higher bandwidth and tighter physical proximity. However, the benefits come with challenges: thermal hotspots, TSV congestion, and signal integrity must be carefully managed. Architects must co-design silicon and packaging to meet AI’s escalating demands.

From Custom to Computational HBM

The panel reached consensus on one transformative idea: the future isn’t just custom HBM. Computational HBM may be a better way to describe cHBM. This paradigm emphasizes workload-aware partitioning across logic and memory, where memory becomes an active participant in AI processing, not just a passive storage layer. Right now, hyperscalers may be the main drivers for cHBM. But, unlike proprietary custom approaches, computational HBM can scale across markets—cloud, edge, automotive—and thrive through standardization and reuse.

Summary

cHBM holds tremendous promise—but how the industry moves forward will determine whether it accelerates innovation or impedes it. With standardization, agile packaging integration, and coordinated ecosystem efforts, computational HBM can power the next generation of intelligent systems. With an ecosystem-aligned vision and execution, the industry will be well on its way to realizing the full potential of multi-die design.

The “cHBM for AI” panel session recording can be accessed on-demand from here.

Also Read:

Podcast EP299: The Current and Future Capabilities of Static Verification at Synopsys with Rimpy Chugh

Design-Technology Co-Optimization (DTCO) Accelerates Market Readiness of Angstrom-Scale Process Technologies

SNUG 2025: A Watershed Moment for EDA – Part 2


Podcast EP300: Next Generation Metalization Innovations with Lam’s Kaihan Ashtiani
by Daniel Nenni on 07-30-2025 at 10:00 am

Dan is joined by Kaihan Ashtiani, Corporate Vice President and General Manager of atomic layer deposition and chemical vapor deposition metals in Lam’s Deposition Business Unit. Kaihan has more than 30 years of experience in technical and management roles, working on a variety of semiconductor tools and processes.

Dan explores the challenges of metallization for advanced semiconductor devices with Kaihan, where billions of connections must be patterned reliably to counteract heat and signal integrity problems. Kaihan describes the move from chemical vapor deposition to the atomic layer deposition approach used for advanced nodes. He also discusses the motivations for the move from tungsten to molybdenum for metallization.

He explains that thin film resistivity challenges make molybdenum a superior choice, but working with this material requires process innovations that Lam has been leading. Kaihan describes the ALTUS Halo tool developed by Lam and the ways this technology addresses the challenges of metallization patterning for molybdenum, both in terms of quality of results and speed of processing.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Prompt Engineering for Security: Innovation in Verification
by Bernard Murphy on 07-30-2025 at 6:00 am

We have a shortage of reference designs to test detection of security vulnerabilities. An LLM-based method demonstrates how to fix that problem with structured prompt engineering. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick, Empowering Hardware Security with LLM: The Development of a Vulnerable Hardware Database, was published at the 2024 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST) and has 12 citations. The authors are from the University of Florida, Gainesville.

The authors use LLMs to create a large database (Vul-FSM) of FSM designs vulnerable to a set of 16 weaknesses, documented either in MITRE’s CWE database or in separate guidelines, by inserting these weaknesses into base designs. The intent is to use this dataset as a reference for security analysis tools or security mitigations; the dataset is available on GitHub. They also provide an LLM-based mechanism to detect such vulnerabilities.
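
To make the notion of inserting a weakness into a base design concrete, here is a toy Python sketch; it is not the paper's LLM-based method, just an illustration. Starting from a small Verilog FSM that has a safe default branch, the mutation strips that branch, leaving the unused state encoding with no recovery path, in the spirit of the weaknesses the Vul-FSM database catalogs.

    import re

    # A tiny "base design": a three-state Verilog FSM with a safe default branch.
    BASE_FSM = """
    always @(posedge clk) begin
        case (state)
            2'b00: state <= start ? 2'b01 : 2'b00;
            2'b01: state <= done  ? 2'b10 : 2'b01;
            2'b10: state <= 2'b00;
            default: state <= 2'b00;  // recovery path for undefined encodings
        endcase
    end
    """

    def insert_undefined_state_weakness(rtl: str) -> str:
        # Strip the default branch so the unused encoding (2'b11) has no recovery path.
        return re.sub(r"^\s*default:.*\n", "", rtl, flags=re.MULTILINE)

    print(insert_undefined_state_weakness(BASE_FSM))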

The core of the method revolves around a structured approach to prompt engineering to generate (they claim) high-integrity test cases and methods for detection. Their prompt engineering methods, such as in-context learning, appear relevant to a broader set of verification problems.

Paul’s view

Hardware security verification is still a somewhat niche market today, but it is clearly on the rise. Open databases to check for known vulnerabilities are making good progress – for example, CWE (cwe.mitre.org) is often used by our customers. However, availability of good benchmark suites of labeled testcases with known vulnerabilities is limited, which in turn limits our ability to develop good EDA tools to check for them.

This month’s paper uses LLM prompt engineering with GPT-3.5 via OpenAI’s APIs to create a labeled benchmark suite of 10k Verilog designs for simple control-circuit state machines with 3 to 10 states. Each of these designs contains at least one of 16 different known vulnerabilities and has been created from a base set of 400 control circuits that do not contain any vulnerabilities. The paper also describes an LLM-based vulnerability detection system for these same 16 vulnerabilities using prompt engineering, which is surprisingly effective – 80% likely on average to detect the vulnerability.

One of the best parts of the paper is Figure 6, which shows an example of an actual complete LLM prompt clearly divided into sections showing chain-of-thought (giving the LLM step-by-step instructions on how to solve the problem), reflexive verification (giving the LLM instructions on how to check that its response is correct), and exemplary demonstration (giving the LLM an example of a solution to the problem for another circuit). There are some decent charts elsewhere in the paper that show how much these prompt engineering techniques improve the quality of response from the LLM – about 10-20% depending on the vulnerability.
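
To give a feel for how such a structured prompt could be assembled programmatically, here is a hedged Python sketch; the section names follow the paper's terminology, but the wording and the build_prompt helper are illustrative inventions rather than the authors' actual prompt text.

    # Section names follow the paper's terminology; the wording and the
    # build_prompt helper are illustrative inventions, not the authors' text.
    CHAIN_OF_THOUGHT = (
        "Step 1: List the FSM states and transitions in the RTL below.\n"
        "Step 2: Check each state for missing, unreachable, or undefined successors.\n"
        "Step 3: Report any weakness found and name the closest matching CWE entry."
    )
    REFLEXIVE_VERIFICATION = (
        "Before answering, re-read your report and confirm that every line you "
        "cite actually appears in the RTL below."
    )
    EXEMPLARY_DEMONSTRATION = (
        "Example: an FSM whose case statement has no default branch can lock up "
        "in an undefined state, a deadlock-style weakness."
    )

    def build_prompt(rtl: str) -> str:
        # Assemble the structured sections ahead of the design under analysis.
        return "\n\n".join([CHAIN_OF_THOUGHT, REFLEXIVE_VERIFICATION,
                            EXEMPLARY_DEMONSTRATION, "RTL under analysis:\n" + rtl])

    print(build_prompt("module fsm(...);  // design text goes here\nendmodule"))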

I’m grateful to the authors for their contribution to the security verification community here!

Raúl’s view

This paper introduces SecRT-LLM, a novel framework for generating and detecting security vulnerabilities in hardware designs, specifically finite state machines (FSMs), that leverages large language models (LLMs). SecRT-LLM uses vulnerability insertion to create a benchmark of 10,000 small RTL FSM designs with 16 types of embedded vulnerabilities (Table II), many based on CWE (Common Weakness Enumeration) classes. It also does vulnerability detection, identifying security issues in RTL on this benchmark.

One of the key contributions is the integration of prompt engineering, LLM inference, and fidelity checking. Prompting strategies in particular are quite elaborate, aimed at guiding the LLM to perform the target task. Six tailored prompt strategies greatly improve LLM performance:

  • Reflexive Verification Prompting (self-scrutiny, e.g., indicate where and how have the instructions in the prompt been followed)
  • Sequential Integration Prompting (chain-of-thought, dividing a task into sub-tasks)
  • Exemplary Demonstration Prompting (example designs)
  • Contextual Security Prompting (inserting and identifying security vulnerabilities and weaknesses)
  • Focused Assessment Prompting (emphasize detailed examination of a specific design element such as a deadlock)
  • Structured Data Prompting (systematic arrangement of extensive data for example as a table).

A prompt example is given in Fig. 6.

Experimental validation shows high accuracy in both insertion (~82% pass@1 and ~97% pass@5) and detection (~80% pass@1 and ~99% pass@5) of vulnerabilities. Automating this process drastically reduces time and cost compared to manual efforts.

The paper applies AI capabilities to hardware security needs. Two major contributions are generating a benchmark of FSMs with embedded vulnerabilities, which serves as a resource for training and evaluating vulnerability detection tools and using prompt engineering for security-centric tasks to guide LLMs. Most commercial tools today focus on verification, threat modeling, and formal methods—but do not yet deeply leverage LLMs for RTL vulnerability tasks. Research such as SecRT-LLM addresses this gap and may influence future commercialization of AI in this field.

Also Read:

New Cooling Strategies for Future Computing

Reachability in Analog and AMS. Innovation in Verification

A Novel Approach to Future Proofing AI Hardware


Calibre Vision AI at #62DAC
by Daniel Payne on 07-29-2025 at 10:00 am

Calibre is a well-known EDA tool from Siemens that is used for physical verification, but I didn’t really know how AI technology was being used, so I attended a Tuesday session at #62DAC to get up to speed. Priyank Jain of Calibre Product Management presented slides and finished up with a Q&A session.

In the semiconductor world we’ve seen a hardware-centric viewpoint starting with PCs in the 80s and 90s, where software ran on general-purpose hardware. Today, it’s more of a software-defined world, where the software architecture drives the hardware implementation.

The vision with Calibre is to shift left and reduce turnaround time (TAT), accomplished by running the tools earlier in the design and implementation flows and by using AI techniques. A huge challenge of trying to run full-chip integration earlier is that it produces billions of DRC errors, which makes the tool load slowly and increases debug time, all with little collaboration between engineering team members on what to fix first.

This challenge led to a new product called Calibre Vision AI, which enables full-chip analysis earlier in the implementation process by adding intelligent debug and user collaboration. With this new tool, engineers can quickly make sense of a DRC run that has billions of errors, as the AI feature clusters similar errors together, making it easier to identify systematic issues such as block overlaps, bad vias, fill overlaps and more, and lets you prioritize which errors should be fixed first.

Calibre Vision AI has a modern, multi-threaded foundation for fast operation, a GUI with dynamic panels for quick debug, and navigation features to pinpoint the source of errors.

The GUI helps visualize a heat map, showing the density of DRC errors. AI is used to cluster similar errors, and the AI works across all IC layout technologies with no model training required for tool users. Common failure causes are easily identified so that you will be more productive in fixing DRC errors. As engineers use the tool, they can place dynamic bookmarks on the layout to capture issues, assign work, and write notes for other team members to collaborate on the fixes.

It’s recommended that you run the Calibre RVE tool at the block level and for tapeouts, and run Calibre Vision AI for chip-level analysis at early stages, as the two tools complement each other. Using Calibre Vision AI for full-chip analysis accelerates full-chip debug through its high-capacity, multi-threaded technology. Heat maps of errors show the entire die so that you can pinpoint the areas of highest interest. Results are visualized instantly, even when there are millions of errors. One comparison showed that for 790 million DRC errors, a traditional ASCII flow would load in 15 minutes, while a Vision AI flow using OASIS loaded in just 45 seconds.

Early users of Vision AI reported that it was faster to identify systematic issues and that DRC debug iterations were cut in half. For example, one run had 600M errors from 3,400 checks, which were reduced to just 381 signal groups, or clusters.
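
Purely as an illustration of the underlying idea, and not Siemens' implementation, the sketch below clusters synthetic DRC error coordinates with an off-the-shelf density-based method, collapsing a thousand-plus markers into a handful of spatial groups plus outliers; the coordinates and the eps and min_samples values are arbitrary.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Synthetic DRC "errors": (x, y) marker locations with two dense hotspots
    # plus scattered noise standing in for isolated violations.
    rng = np.random.default_rng(0)
    hotspot_a = rng.normal(loc=(100, 200), scale=2.0, size=(500, 2))
    hotspot_b = rng.normal(loc=(800, 650), scale=2.0, size=(500, 2))
    scattered = rng.uniform(low=0, high=1000, size=(50, 2))
    errors = np.vstack([hotspot_a, hotspot_b, scattered])

    # Density-based clustering groups nearby violations; label -1 marks outliers.
    labels = DBSCAN(eps=5.0, min_samples=10).fit_predict(errors)
    n_groups = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"{len(errors)} error markers reduced to {n_groups} spatial groups")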

Siemens has many EDA tools using AI techniques.

There are three places where AI is used in Calibre Vision AI:

  • Chatbot – EDA knowledge using prompts
  • Reasoning – Data analysis and summarization
  • Tool Operations – Performing complex tool functions from prompts

Summary

DRC analysis and debug tasks that once required hours can now be reduced to just minutes by using AI-based clusters. Teams doing physical design can collaborate and communicate more efficiently by using bookmarks, block debug, and attached reports.

Q&A

Q: Is there any plan to Auto-fix DRC errors?

A: AI quickly groups similar DRC violations for easier root-cause analysis, but we still need a human in the loop to fix the violations.

Q:  Can I create new Signals?

A: Vision AI comes with a set of Signals out of the box, and users can also create their own custom signals from their own checks (for example, running M1 checks first).

Q: What’s the difference between RVE classifier and AI?

A: AI takes the classifier and elevates it by 100X, analyzing the results, locations, proximity, and root causes, and clustering them into groups. RVE is good for fewer errors, but AI works on billions of errors and earlier in the process.

Q: Can you aggregate AI across multiple designs, trends, library cells in common, broad trends?

A: It’s under development, stay tuned for a future release.

Q: Are signal groups an AI classification?

A: We use unsupervised learning to create the groups by location and proximity.

Related Blogs


Musk’s new job as Samsung Fab Manager – Can he disrupt chip making? Intel outside
by Robert Maire on 07-29-2025 at 6:00 am

– Musk chip lifeline to Samsung comes with interesting strings attached
– Musk chose Samsung over Intel-What does that say about Intel?
– Musk will hold sway over Samsung much as Apple/NVDA over TSMC
– Will Musk do a “DOGE” on chip tool makers? How much influence?

Tesla/Samsung $16.5B deal has many, many ramifications to rile chips

Samsung has been flailing for quite some time in the foundry business. TSMC is running away with the foundry industry leaving both Samsung and Intel far behind eating dust. Samsung just got a huge lifeline in the form of an endorsement from none other than Elon himself.

The Taylor, Texas fab, which had been on hold due to a lack of customers, now has a customer big enough to fill the whole fab and then some. It puts the fab back on track overnight and puts Samsung back in the foundry business.

We are 100% certain that Musk got a super sweetheart deal that exacted a few pounds of flesh from Samsung which was backed up against a wall.

Certainly a way better deal than he could have gotten from TSMC which is up to its eyeballs in demand especially from the likes of Apple and Nvidia.

This also clearly puts Samsung in the middle of the AI business in a way they never could have by themselves.

“Intel Outside” – What does Musk’s choice say about Intel?

We are sure that Intel would have given away the farm to get this deal from Musk. It would have been the deal they needed to justify 14A and beyond. It would have been the deal of the century to rescue Intel……but it wasn’t……

The real question is why not Intel? Did they not offer Musk enough? I doubt it. Maybe there is not enough faith in Intel’s ability to execute. Maybe concern about viability.

Maybe Musk just wanted to thumb his nose at the US chip company (Intel) and the current Trump administration trying to come up with a post CHIPS Act strategy that works.

Maybe Musk just likes the short commute to the Taylor fab in Texas….

Maybe it’s all of the above……

But what it clearly is, is very bad for Intel to be the last person standing at the chip industry dance without a partner…..

Musk’s new role: “Samsung Fab Manager”

A few Musk words that should strike fear into every semiconductor equipment maker:

“Samsung agreed to allow Tesla to assist in maximizing manufacturing efficiency. This is a critical point, as I will walk the line personally to accelerate the pace of progress. And the fab is conveniently located not far from my house”

Read that line again: ….. “I will walk the line (fab production line) personally to accelerate the pace of progress”

Imagine seeing Musk in a bunny suit inside the fab talking to tool operators…..It just blows my mind….and the fact that Samsung agreed to this shows just how desperate they are.

Is Musk going to personally negotiate with tool makers about the price/performance of their tools? Don’t be surprised, as he has very clearly and completely disrupted other industries. He is certainly smart enough and rich enough to turn the chip industry on its head….

  • Electrify autos
  • Reusable spaceships
  • Global high speed internet
  • Tunnels
  • Robots
  • AI
  • Drive Ins
  • Flamethrowers
  • DOGE
  • A third political Party
  • The semiconductor industry is a piece of cake

Tesla over other auto makers and other AI suppliers

Tesla now has guaranteed bleeding-edge, US-sourced, tariff-resistant chip capacity for its cars, versus GM stuck with ancient, outdated GlobalFoundries and unreliable, tariffed foreign fabs……

Tesla/Musk can now get critical AI chip capacity for its robots, cars, etc., and not be beholden to Nvidia/TSMC.

Quite a stroke of genius……

The stocks

Obviously a big positive for both Samsung and Tesla.

Obviously a negative for Intel and TSMC.

A negative for GM, Ford, BMW, Mercedes, Toyota, etc., left in the analog dust….

Gotta love Texas……

Positive for chip tool makers in that Samsung’s Texas fab is back on but negative given the potential involvement of Musk in running the fab and impacting decisions.

Positive for all those former DOGE Musk minions (including “Big Balls”) who will now have jobs “accelerating the pace of progress” in Samsung’s Taylor fab.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.
We have been covering the space longer and been involved with more transactions than any other financial professional in the space.
We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.
We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.
Also Read:

Elon Musk Given CHIPS Act & AI Oversight – Mulls Relocation of Taiwanese Fabs

CHIPS Act dies because employees are fired – NIST CHIPS people are probationary

Trump whacking CHIPS Act? When you hold the checkbook, you make up the new rules