
Podcast EP329: How Marvell is Addressing the Power Problem for Advanced Data Centers with Mark Kuemerle

by Daniel Nenni on 01-30-2026 at 10:00 am

Daniel is joined by Mark Kuemerle, Vice President of Technology, Custom Cloud Solutions at Marvell. Mark is responsible for defining leading-edge ASIC offerings and architecting system-level solutions. Before joining Marvell, Mark was a Fellow in Integrated Systems Architecture at GLOBALFOUNDRIES and held multiple engineering positions at IBM. He has authored numerous articles on die-to-die connectivity and multichip systems and holds several patents related to low-power technologies and package integration.

Mark begins this far-reaching and informative discussion with the observation that the parameter driving the overall budget for advanced data centers is no longer money, but rather the available power to drive the massive connectivity of AI accelerators. Dan explores the significant work Marvell is doing to address this power constraint with its advanced die-to-die technology. Mark describes the impact and benefits of technologies such as bi-directional interfaces, redundancy, and methods to power down parts of the interface when they are not needed. He explains how these techniques lower power budgets, improve yield, and reduce total cost of ownership.

Mark discusses how Marvell balances its custom interface technology with popular standards such as UCIe. He also comments on the outlook for future die-to-die interfaces with the addition of integrated optics.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Taming Advanced Node Clock Network Challenges: Jitter

by Mike Gianfagna on 01-30-2026 at 6:00 am


Clock jitter rarely fails in obvious ways. In advanced-node designs, its impact is often indirect, emerging through subtle timing uncertainty, interaction with power delivery noise, and compounding effects across large clock networks. These behaviors can quietly erode margin and predictability, even when conventional sign-off checks appear to pass. As a result, jitter that was previously absorbed through conservative margining now directly determines whether designs meet silicon targets or require costly late-stage rework, including frequency derates, ECO churn, or delayed product ramps.

ClockEdge focuses on this class of problem with a unique approach that delivers deep insight, helping teams balance the often subtle and conflicting requirements of building reliable clock networks across all operating conditions. ClockEdge is publishing a series of white papers that examine real clock failure mechanisms and practical ways to address them. This installment focuses on the unique behaviors and risks associated with clock jitter.

Clock Jitter – What Changed?

Jitter was once dominated by isolated sources such as PLL phase noise or local buffer variation. In advanced design nodes today, it manifests as time-varying, distributed electrical behavior that evolves as the clock propagates through deep, heterogeneous clock networks. Power delivery noise, interconnect parasitics, non-linear device behavior, and topology-dependent amplification increasingly shape clock edge uncertainty over time across the system. The varied and locally influenced sources of jitter are illustrated in the figure below.

Clock jitter is a distributed electrical phenomenon

The figure illustrates the distinct jitter profiles that emerge at different locations in the clock network due to local loading, power integrity conditions, and downstream topology. It is important to note that worst-case jitter often appears far from the original source.

As clocks traverse regions with different power domains, clock edges can be amplified, attenuated, or reshaped in non-intuitive ways. In multi-voltage designs, clock gating and localized activity bursts further exacerbate this behavior, creating correlated, time-dependent jitter patterns that cannot be captured through simple budgeting or worst-case assumptions.

Why Traditional Methods Fall Short

Conventional jitter metrics such as absolute jitter, period jitter, and cycle-to-cycle jitter provide useful measurements at individual observation points, but they do not capture how jitter evolves dynamically and spatially across a clock network. These metrics implicitly assume uniform propagation and limited interaction with the surrounding electrical environment.
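To make the distinction concrete, here is a minimal sketch of how these three conventional metrics are computed from measured clock-edge timestamps at a single observation point. The function name `jitter_metrics`, the 1 GHz nominal period, and the sample edge times are illustrative assumptions, not values from the white paper:

```python
import numpy as np

T_NOM = 1e-9  # assumed nominal period for a 1 GHz clock (illustrative)

def jitter_metrics(edges):
    """Compute the three conventional jitter metrics from rising-edge
    timestamps (in seconds) captured at one observation point."""
    edges = np.asarray(edges, dtype=float)
    n = np.arange(len(edges))
    # Absolute jitter (time interval error): deviation of each edge from
    # its ideal position on a perfect clock grid anchored at the first edge.
    absolute = edges - (edges[0] + n * T_NOM)
    # Period jitter: deviation of each measured period from nominal.
    periods = np.diff(edges)
    period_j = periods - T_NOM
    # Cycle-to-cycle jitter: change between adjacent periods.
    c2c = np.diff(periods)
    return absolute, period_j, c2c

# Example: four edges with small timing errors at one point in the network.
absolute, period_j, c2c = jitter_metrics([0.0, 1.02e-9, 1.99e-9, 3.01e-9])
```

Each of these is a legitimate single-point measurement; the article's argument is that none of them, individually or together, describes how edge uncertainty evolves as the clock propagates through the network.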

Approaches such as static timing analysis (STA) and margin-based methodologies struggle to close jitter in advanced-node clock networks. It is important to understand that STA relies on delay abstractions and fixed uncertainty values that infer clock behavior rather than computing it directly, implicitly assuming that jitter is spatially uniform and temporally stable.

At advanced nodes, where jitter is dominated by localized power distribution effects, non-linear device behavior, and topology-dependent amplification, this assumption no longer holds. Accurate jitter closure requires moving beyond global budgets and worst-case assumptions to waveform-level, electrically accurate analysis across multiple levels of the clock network. The white paper examines these challenges in detail across per-gate, per-path, and per-noise-profile analysis.

How ClockEdge Addresses the Latest Jitter Challenges

The key capability offered by ClockEdge is to move beyond budgeting and inference to direct time-domain electrical computation.  The size of modern clock networks and the need to observe behavior over many cycles put conventional SPICE analysis out of reach. With its Veridian™ suite, ClockEdge delivers SPICE-accurate, full-clock electrical verification at scale.  Within the suite, vJitter directly computes clock waveforms over many cycles, capturing how jitter propagates, correlates, and interacts with real electrical effects across the entire clock network.

This large-scale, highly accurate analysis exposes subtle but highly impactful time-varying timing behavior that clock jitter can create. The white paper explains how these behaviors can be identified and addressed early, before they surface as late-stage design headaches. As described in the white paper:

By computing actual clock behavior rather than assuming it, ClockEdge allows design teams to achieve confident sign-off, preserve performance targets, and reduce the risk of late-stage surprises in advanced-node systems.

To Learn More

The clock network is the largest and most consequential network in most advanced designs. Improving visibility into clock behavior can directly impact performance, reliability and overall product predictability.  If you are involved in advanced chip design, this white paper explains how waveform-accurate jitter analysis enables more confident clock sign-off and reduces late-stage risk. You can access your copy of the white paper here. And that’s how to tame advanced node clock network jitter challenges.

Also Read:

Taming Advanced Node Clock Network Challenges: Duty Cycle

How vHelm Delivers an Optimized Clock Network

ClockEdge Delivers Precision, Visibility and Control for Advanced Node Clock Networks


Synopsys and AMD Honored for Generative and Agentic AI Vision, Leadership, and Impact

by Daniel Nenni on 01-29-2026 at 12:00 pm


Synopsys and AMD were recently selected by the World Economic Forum for inclusion in the WEF’s MINDS (Meaningful, Intelligent, Novel, Deployable Solutions) AI program, recognizing their leadership and real-world impact in applying generative and agentic AI to semiconductor design and engineering. This distinction places them among a distinguished global cohort of organizations pioneering AI innovation with measurable outcomes in complex technical domains.

The MINDS program is part of the Forum’s broader AI Global Alliance initiative, which seeks to identify and amplify AI solutions that are not just technologically advanced but have tangible, deployable impact. Rather than focusing on pilot projects or theoretical research, MINDS highlights implementations that are already making a difference in how industries operate, and Synopsys and AMD’s work in semiconductor design stood out as a clear example of this shift.

Why This Recognition Matters

Traditionally, semiconductor design has relied on manual, labor-intensive workflows anchored in expert engineers creating and verifying designs line by line. But as chips become more complex, with billions of transistors and multi-disciplinary integration requirements, these workflows have faced scaling limits. Generative and agentic AI (AI that can autonomously perform multi-step tasks and adapt workflows) offers a powerful new paradigm for accelerating these processes while preserving quality and reducing costs.

By honoring Synopsys and AMD, the WEF is acknowledging that AI is not just a future promise for chip design; it’s already producing real, measurable business and engineering outcomes. Their work demonstrates how AI can amplify human expertise instead of replacing it, enabling engineers to explore design spaces faster, automate repetitive tasks, and focus on higher-value decisions.

Synopsys: AI-Driven EDA and Agentic Workflows

Synopsys, a global leader in EDA software and services used by the world’s semiconductor companies to design, verify, and optimize chips, has been integrating generative AI and reinforcement learning deeply into its toolset. Through its Synopsys.ai suite, the company has introduced AI-assisted capabilities that help engineers at various phases of the design flow, from RTL development and verification to signoff and optimization.

These AI capabilities include AI “copilots” that assist with code and script generation, knowledge assistants that expedite learning and problem resolution, and agentic systems that can manage multi-step workflows. In collaboration with partners like Microsoft, Synopsys is also advancing toward more autonomous EDA workflows under the concept of AgentEngineer™,  a vision for AI agents capable of executing complex, multi-agent tasks that previously required extensive human intervention.

This focus on agentic AI marks a departure from simple generative tools to systems that can coordinate iterative tasks, make decisions across multiple steps, and adapt to evolving design contexts, a capability that is especially valuable in semiconductor development where design constraints, tradeoffs, and verification requirements are highly intricate.

AMD: Systems-Level AI Integration

For its part, AMD has been applying these advanced AI workflows in real semiconductor product development. By partnering with Synopsys, AMD has incorporated reinforcement learning and generative AI tools directly into its chip design and verification processes, delivering substantial benefits in productivity and performance. According to the WEF case study on the MINDS award, this collaboration has enabled AMD to double productivity across design stages, expand design exploration, reduce overall design costs significantly, and shrink time to signoff, all outcomes that directly impact competitiveness in a fast-moving market.

These gains are especially notable given the rising pressures facing the semiconductor industry. Global demand for advanced chips continues to grow rapidly while the pool of experienced engineers has not kept pace. AI-augmented design workflows provide a way to leverage expert knowledge at scale, enabling more efficient use of human talent and AI assistants working together.

Looking Ahead: AI as a Strategic Enabler

The recognition from the World Economic Forum underscores a broader shift in how AI is perceived in high-technology sectors, from a promising research topic to a strategic enabler of real-world innovation and competitive advantage. By spotlighting Synopsys and AMD, the WEF is highlighting that complex fields like chip design can benefit from AI not just conceptually, but with quantifiable improvements in engineering workflow efficiency, product quality, and time to market.

Bottom line: As AI technologies continue to mature, other organizations in semiconductor design, manufacturing, and systems engineering will likely follow similar paths, combining human ingenuity with scalable AI workflows to tackle the ever-increasing complexity of next-generation computing systems.

Also Read:

Synopsys’ Secure Storage Solution for OTP IP

Curbing Soaring Power Demand Through Foundation IP

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!


The Foundry Model Is Morphing — Again

by Jonah McLeod on 01-29-2026 at 10:00 am


When Morris Chang left Texas Instruments in 1983 to found TSMC, he was not merely starting a new company—he was proposing a new industrial logic. Chang recognized that semiconductor manufacturing had become so capital-intensive that it could no longer survive as just one function inside a vertically integrated company. His answer was radical for its time: specialize. Let foundries manufacture, let fabless companies design, and let scale economics determine who survives. Over the next four decades, that model reshaped the entire semiconductor industry.

But the conditions that made the pure-play foundry model dominant are changing again.

As process scaling slows, product lifetimes lengthen, and differentiation shifts away from raw transistor density, manufacturing alone is no longer sufficient to define competitive advantage—especially outside the consumer and cloud-compute markets. Increasingly, value is migrating up the stack, toward system enablement: hardened IP, software continuity, platform longevity, and predictable lifecycle support. In this environment, foundries are no longer just wafer suppliers. They are becoming infrastructure providers.

Seen in that light, GlobalFoundries’ recent acquisition of Synopsys’ CPU IP business—including the ARC processor family—marks something more significant than a portfolio expansion. It signals a second evolution of the foundry model itself. Where Morris Chang separated manufacturing from design to survive the economics of scaling, today’s foundries are selectively reintegrating high-value hard IP to survive the economics of maturity.

This is not a retreat to the IDM era. It is a recognition that in long-lifecycle markets—automotive, industrial, infrastructure, and embedded systems—customers increasingly demand platform certainty, not just wafers. The foundry model is not being abandoned. It is being adapted.

As traditional scaling economics weaken and capital intensity rises, competitive advantage—especially outside consumer and hyperscale compute—shifts from raw transistor density to lifecycle execution and design enablement. “In embedded and industrial markets, leading suppliers explicitly commit to 10–15+ year availability and continuity-of-supply, reflecting customer demand for long-lived platforms rather than frequent redesigns.”

At the same time, the semiconductor IP business has become a multi-billion-dollar market opportunity (roughly $7–8B annually, according to MarketsandMarkets), with the Asia Pacific region, predominantly China, holding the largest share. Moreover, it is still growing, with recent industry tracking showing strong IP revenue acceleration. Foundries are responding by expanding from wafer output into “design infrastructure”—ecosystems of qualified IP, certified EDA flows, PDKs, DFM, packaging, and services—exemplified by programs like TSMC’s OIP. In that environment, foundries are no longer just wafer suppliers; they are becoming platform and infrastructure providers for whole product lifecycles.

A Foundry Built by Accumulation

GlobalFoundries did not arise from a single founding moment. It was assembled over time through a sequence of structural decisions, each responding to a different pressure in the semiconductor industry.

The process began in 2008–2009, when AMD spun off its manufacturing arm to escape the escalating capital demands of the IDM model. The move preserved AMD’s access to advanced production while freeing its design organization from an unsustainable cost structure.

That spinout would not have been possible without Mubadala Investment Company, which supplied the long-horizon capital and ownership stability AMD lacked. Mubadala ultimately took an 82% stake in the new foundry, insulating it from short-term financial pressures and enabling a strategy focused on durability rather than speed.

Mubadala then expanded GlobalFoundries’ footprint by acquiring Chartered Semiconductor in Singapore. This move transformed GF from an AMD carve-out into a geographically distributed foundry with real scale, adding multiple fabs, a diverse customer base, and specialty-process breadth.

The final foundational piece arrived in 2014, when IBM exited semiconductor manufacturing and transferred its East Fishkill and Essex Junction fabs—along with its advanced R&D organization and long-term POWER and Z supply agreements—to GlobalFoundries, even paying the company to assume the operations. This infusion of U.S. engineering talent and SOI-driven process technology elevated GF from a mid-range player to a foundry with world-class capabilities in specialized logic.

Taken together, these moves produced a company built by accumulation rather than invention: AMD contributed manufacturing DNA, Abu Dhabi provided capital and industrial patience, Chartered added global scale, and IBM delivered advanced technology. The result is a foundry whose strategy diverges sharply from firms built solely to chase leading-edge nodes. Ironically, AMD relies on TSMC for its leading-edge FinFET process.

Two Electronics Universes

At the highest level, the global electronics industry has organized itself around two distinct competitive universes.

The first is consumer and cloud compute: smartphones, PCs, data centers, and hyperscale AI. This universe is driven by peak performance per watt, rapid product cycles, and relentless transistor density scaling. Capital intensity is extreme, product lifetimes are short, and a small number of customers account for a disproportionate share of demand. Manufacturing here has converged on a narrow set of players—TSMC foremost, with Samsung and Intel as the only credible peers at the leading edge.

The second universe, where GlobalFoundries squarely operates, is physical, embedded, and infrastructure electronics. This includes automotive systems, industrial automation, RF connectivity, power electronics, aerospace and defense, medical devices, energy systems, and embedded control and inference. Success in this universe is defined not by headline performance but by reliability, deterministic behavior, qualification, and supply continuity. Product lifetimes stretch over decades, not years.

Comparing GF to TSMC Misses the Point

TSMC, Samsung, and Intel compete in a three-player arena where scale, redundancy, and geopolitical resilience are existential requirements. Each new node demands tens of billions of dollars in capital investment, and the penalty for execution failure is severe.

GlobalFoundries deliberately exited this race. In doing so, it avoided the “middle squeeze” that eliminated many other foundries—companies that were neither large enough to win at the leading edge nor differentiated enough to command durable customers.

Instead, GF rebuilt around markets that value stability over novelty. Automotive, industrial, RF, power, and infrastructure customers do not want to requalify silicon every two years. They want predictability, long-term availability, and conservative process evolution. For these customers, a mature node that improves steadily over time is often more valuable than a bleeding-edge node with a short commercial half-life.

This is why GF appears to “sit by itself” while TSMC has Intel and Samsung as peers. That asymmetry reflects different market physics—not competitive weakness.

Process Innovation Without Density Obsession

Exiting the leading-edge race did not mean exiting process innovation. GlobalFoundries exited the transistor-density race, not the process-generation race that matters to its customers.

GF continues to advance performance, power efficiency, and reliability within existing nodes while evolving specialty platforms such as FD-SOI, RF-SOI, and power processes. These advances are often invisible in consumer-centric narratives, but they are decisive in automotive and industrial systems, where leakage, analog behavior, and qualification margins dominate real-world performance.

FD-SOI illustrates this philosophy particularly well. While it continues to scale geometrically, it does so on a different cadence and with different objectives than FinFET. Strong electrostatic control enables gains through body biasing, voltage scaling, and system integration, reducing pressure for aggressive geometry shrinks. This controlled evolution aligns naturally with long-lifecycle markets.

A Dual-Process Strategy by Design

A critical—and often overlooked—aspect of GlobalFoundries’ strategy is that it operates both FD-SOI and FinFET process families in volume. Unlike leading-edge foundries that concentrate almost exclusively on shrinking FinFET nodes, GF maintains two complementary process pillars optimized for different workloads and lifetimes.

FD-SOI platforms such as 22FDX and 28FD-SOI are optimized for ultra-low leakage, deterministic timing, and wide operating ranges, making them well suited for safety-critical, mixed-signal, and always-on domains. Mature FinFET nodes such as 12LP and 14LPP deliver higher performance for Linux-capable embedded systems, automotive domain controllers, and infrastructure silicon—without the churn of leading-edge scaling.

The coexistence of these platforms is not transitional; it is structural. Together, they allow GF to support a full spectrum of physical-world electronics without dependence on the 7-nm, 5-nm, or 3-nm race.

RISC-V and the Logic of Owning Compute IP

The strategic coherence of this model becomes clearer when viewed alongside the accelerating adoption of RISC-V. RISC-V’s fastest growth is occurring not in consumer compute, but in the same embedded, automotive, and industrial markets GF already serves.

Market estimates place the global RISC-V ecosystem at roughly $1.6–2.6 billion today, with projected growth of 25–33% CAGR, reaching $8–9 billion by 2030 and potentially $20–26 billion by the mid-2030s. While still small relative to Arm and x86, its trajectory is unmistakable.
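For readers who want to sanity-check compound-growth projections like these, the arithmetic is straightforward. In the sketch below, the $2.0B base and 30% rate are illustrative midpoints of the ranges above, not figures from any cited estimate:

```python
def compound_growth(base_billions, cagr, years):
    """Project a market size forward at a constant compound annual growth rate."""
    return base_billions * (1.0 + cagr) ** years

# A market of roughly $2.0B growing at 30% CAGR for five years nearly
# quadruples, landing in the neighborhood of the cited 2030 range.
projected_2030 = compound_growth(2.0, 0.30, 5)  # ~7.4 ($B)
```

The same function run over a decade at the same rate shows why mid-2030s projections reach well into the tens of billions despite a modest starting base.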

GF’s acquisitions of MIPS and Synopsys’ ARC IP should be understood in this context. These moves anchor GF more deeply in the RISC-V ecosystem at the level that matters most: system enablement. ARC-V aligns naturally with FD-SOI; MIPS RISC-V aligns with mature FinFET. The goal is not ISA evangelism, but infrastructure—reducing friction between compute IP, tools, and manufacturing.

The Markets That Matter

High-end consumer AR/VR often benefits from leading-edge nodes, but it is economically narrow and consumer-driven. Robotics, by contrast, is a real-time systems problem where deterministic latency and power stability outweigh peak throughput. Automotive provides the clearest validation: long lifetimes, functional safety, and supply continuity dominate, and only a narrow slice of workloads truly require leading-edge silicon.

GlobalFoundries remains relevant because it aligned itself with the durable half of the electronics industry. While consumer and cloud compute dominate headlines, physical and infrastructure electronics reward capital discipline, stability, and long-term execution. GF exited the density arms race early and rebuilt around those realities. In an industry where many companies vanished by chasing scale they could not sustain, GlobalFoundries survived by choosing a different battlefield—and building a business model matched to its economics.

Also Read:

TSMC’s 6th ESG AWARD Receives over 5,800 Proposals, Igniting Sustainability Passion

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!

Why TSMC is Known as the Trusted Foundry


2026 Outlook with Mathew Burns of Samtec

by Daniel Nenni on 01-29-2026 at 8:00 am


We have been working with Matt and Samtec for the past 5 years. Samtec is a billion-dollar, privately held manufacturer with a wide range of electronic interconnect products that are critical in high-speed and high-reliability systems.

Matt has been with Samtec for more than 10 years, developing go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 25+ years, he has been a leader in design, applications engineering, technical sales, and marketing in the telecommunications, medical, and electronic components industries.

Tell us a little bit about your company.

It’s good to talk with you again, Dan. We are looking forward to a strong year.

Samtec designs and manufactures high-performance copper, optical, and RF interconnect solutions across many industries, including datacom, computing, telecommunications, and semiconductors. We are also celebrating our 50th anniversary in 2026.

Personally, I lead a team of experienced content developers evangelizing Samtec’s Silicon-to-Silicon™ solutions.

What was the most exciting high point of 2025 for your company?

Well, Dan, everything in the interconnect world is getting smaller, faster, and denser. 200+ Gbps throughput is now reality, and 400+ Gbps is right around the corner. In 2025, Samtec launched our Si-Fly® HD CPC cable systems enabling 102.4T in a 95 x 95 mm chip substrate. That’s 512 lanes running 200G each. It’s also the smallest, densest solution on the market. We also demonstrated several real-world implementations of Si-Fly HD with market leaders like Broadcom and Marvell Semiconductor. Intermateable CPO solutions are coming in 2026 as well.

What was the biggest challenge your company faced in 2025?

In short, scaling. We’ve chatted several times in recent months that the pace of innovation, especially in AI, is accelerating. Companies small and large must do more with less. Plus, there is constant churn in the industry, especially in Silicon Valley.

How is your company’s work addressing this challenge?

We are ramping up investments where it makes sense. A good example of that is our manufacturing capabilities. Our technologists innovate smaller, flexible equipment that minimizes changeover from one product to another and supports minimum downtimes. On the personnel side, Samtec has a round peg, round hole philosophy. We are laser-focused on finding the right person for the right role while maximizing productivity using the latest AI and digital tools.

What do you think the biggest growth area for 2026 will be, and why?

That’s easy. AI. Next question. Joking aside, AI infrastructure and the data center space are not slowing down anytime soon. Capital expenditure and investment from the largest hyperscalers and the emerging neoscalers are still growing. We see more growth there in ’26 vs. ’25. Additionally, we expect to see growth in mil/aero given unfortunate instability around the world. Commercial space is highly competitive, with LEO/MEO satellite providers popping up globally.

How is your company’s work addressing this growth?

In a few areas. We are growing our product roadmap with new interconnect solutions while also rounding out product families for new applications. New industry standards like VITA™ 90 VNX+™ and VITA 93 QMC™ are quickly being adopted in mil/aero and commercial space applications.

On the sales side, we continue to ramp up as well. Let’s use India as an example. We are growing the sales, marketing, and technical support teams in key geographies. We are expanding into a new office in Bengaluru, and we are adding to our sales channels with new regional and industry-specific distribution partners.

Are you incorporating AI into your products?

Well, Dan, since most of Samtec’s interconnect solutions are passive in nature, it’s hard to incorporate AI directly into our products. Yet, products like Si-Fly HD are essential to advancing XPU scale-up and scale-out in AI infrastructure.

Is AI affecting the way you develop your products?

Yes, without a doubt. Samtec has leveraged AI in machine vision for testing and inspection of high-volume interconnect solutions for several years. AI has advanced the accuracy and speed of several MCAD and ECAD EDA tools we use for simulating new products individually and in situ. Scripting optimization is obviously easier in these EDA tools when using AI.

Additionally, Samtec is developing closed loop additive manufacturing processes using AI vision systems to ensure printed part quality during production. AI systems also increase automation in the adoption and scaling of additive manufacturing processes.

What conferences did you attend in 2025 and how was the traffic?

Samtec sponsors and attends many industry-specific, regional, and local conferences all over the globe. Example conferences include DesignCon, OFC, embedded world, PCI-SIG DevCons, IMS, European Microwave, ECOC, AI Infra, SuperComputing, New Tech, AOC, Electronica globally and more. With few exceptions, conference and trade show attendance and engagement remain strong. We really haven’t noticed any declines in the past years. If anything, I think conference attendance is still growing.

Will you participate in conferences in 2026? Same or more than in 2025?

We will probably be on par in 2026 compared to 2025. There’s always some churn in conferences we sponsor annually. However, we typically support the same large events as mentioned each year.

How do customers normally engage with your company?

Dan, that’s a great question. There are many different paths to engage with Samtec. We have a global team of factory-trained Field Sales Engineers (FSEs) that are typically the front door to Samtec. They are supported technically by our growing team of FAEs, who average >10 years of industry design experience. For self-service, Samtec has several design tools on samtec.com. We also extend our reach with global, regional, local, and specialty distribution partners who support our >100K direct and indirect customers in 125+ countries.

Additional comments?

Thanks, Dan, for the opportunity to talk with you and your team. We love being a part of the curated semiwiki.com ecosystem. We look forward to a strong 2026.

Also Read:

Webinar – The Path to Smaller, Denser, and Faster with CPX, Samtec’s Co-Packaged Copper and Optics

Samtec Practical Cable Management for High-Data-Rate Systems

How Channel Operating Margin (COM) Came to be and Why It Endures

Visualizing System Design with Samtec’s Picture Search


2025 Retrospective. Innovation in Verification

by Bernard Murphy on 01-29-2026 at 6:00 am


As usual in January we start with a look back at the papers we reviewed last year. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

Looking back at 2025

We decided on a new way to present our findings this year. I’ll start with a list of blogs sorted by major topic (e.g. AI), then by popularity. Then a quick summary of Paul and Raúl’s takeaways, closing with some insights into who was reading our posts through 2025.

Our motivation for topic selections this year was obviously influenced by AI. It is amazing to realize how quickly our selections in AI have evolved over the several years we have been posting: from CNNs to RNNs to LLMs and reinforcement learning. You can be confident that we will continue to track this topic. Quantum simulation was surprisingly popular, maybe suggesting a follow-on. Hardware acceleration is always hot; we’ll find more. Analog continues to be important, given growing complexity and embedded roles in digital systems. So is inspiration from software engineering innovations. And, from what we hear, interest in new methods to tackle verification problems in multicore systems remains high.

Category — Topic
AI — Agentic Bug Localization. November; Stanford, Yale, USC
AI — Neurosymbolic Code Generation. September; Google, MIT, UT Austin, Cornell
AI — Prompt Engineering for Security. July; U Florida
AI — LLMs Raise Game in Assertion Gen. April; Princeton
Quantum — Simulating Quantum Computers. December; ETH, U Toronto, U Mass
HW Accel — Emulator-Like Simulation Acceleration on GPUs. October; NVIDIA, U Beijing
Analog — Reachability in Analog and AMS. June; Texas A&M
Analog — Metamorphic Test in AMS. March; U Bremen, JK U Austria
SW inspired — Cocotb for Verification. August; U Tamilnadu
SW inspired — Optimizing an IR for Hardware Design. May; ETH and Cambridge U
Multi-core Verif — Bug Hunting in Multi-Core Processors. February; IBM


Paul and Raúl’s combined takeaways

An AI focus is natural, yet we also aim to emphasize grounded research. Our April paper from Princeton on assertion generation is an example, highlighting challenges in this area. Building an agentic verification engineer is going to take more than an off-the-shelf LLM and a bit of prompting. Still, research in AI for verification continues to push forward: November’s agentic bug localization paper echoes the top focus area in industry and academia, while February’s paper explores intermediate representations and knowledge graphs. Expect to see more this year on fine-tuned models, transfer learning, LLM long-term memory, etc.

We’re intentionally optimizing for broad and topical coverage: digital, analog, GPU acceleration, quantum, LLVM. That said, our selections have leaned heavily toward academic papers. We will try to find more industry-sponsored papers this year. Academic research topics are also influenced by industry needs, but it would be useful to see more insights into early deployment experiences, either directly authored or co-authored by semiconductor or systems enterprises.

It is also useful to explore overlaps and differences with software verification, especially for RTL/SystemVerilog/SystemC. Two papers touched on this, and it will continue to be a topic of interest in our selections.

Paul would like to add that we’re grateful and pleased to see research in verification continue to be so vibrant. Stepping back from it all (see below), there really are many quality advances being published, offering a wealth of interesting ideas from around the world!

Who is reading our blogs?

LinkedIn (LI) provides some revealing demographic insights. We now see 20k-30k views per blog. These include not only engineers in ASIC communities but also FPGA communities and software developers.

Top readership groups registered under LI are from Intel, Synopsys, Cadence, Qualcomm, AMD and Apple. Readers are based in the San Francisco Bay Area, the Bengaluru area, Austin (TX) and Portland (OR), with additional interest from Munich (Germany), Paris (France) and Ankara (Turkey).

We greatly appreciate your support and the contributions made by the authors of papers we review. We would still like to see more feedback: suggestions for topics to cover, or support, or comments on topics we have reviewed. If you want to provide private feedback, contact me through my LinkedIn page HERE.

Also Read:

Simulating Quantum Computers. Innovation in Verification

Agentic Bug Localization. Innovation in Verification

Emulator-Like Simulation Acceleration on GPUs. Innovation in Verification


The Chronicle of TSMC CoWoS

The Chronicle of TSMC CoWoS
by Daniel Nenni on 01-28-2026 at 10:00 am

Chronicle of CoWoS

As semiconductor scaling slowed and system performance became increasingly constrained by data movement rather than raw compute, advanced packaging emerged as a decisive lever. Among these technologies, TSMC’s CoWoS (Chip-on-Wafer-on-Substrate) represents a turning point in how high-performance systems are architected, manufactured, and scaled. Its evolution mirrors the industry’s shift from transistor-centric progress to system-level optimization.

At its core, CoWoS is a 2.5D integration technology. Logic dies and memory stacks are placed side by side on a silicon interposer, which is then mounted onto an organic substrate. The silicon interposer enables wiring densities far beyond what organic substrates can support, with metal line pitches in the low single-digit microns. Through-silicon vias pass signals and power vertically through the interposer, connecting it to the package substrate below.

The technical motivation for CoWoS was bandwidth density. Traditional off-package memory interfaces, such as DDR, rely on relatively long traces and limited I/O counts, resulting in high power consumption and latency. In contrast, CoWoS allows logic dies to interface with HBM stacks using thousands of parallel connections. Modern CoWoS implementations support memory bandwidth exceeding 3–5 TB/s per package, with energy efficiency measured in a few picojoules per bit, orders of magnitude better than conventional memory systems.
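To see why a few picojoules per bit matters, a quick back-of-the-envelope calculation (using illustrative values from the ranges above, not published TSMC or HBM specifications) shows the interface power implied by multi-terabyte-per-second bandwidth:

```python
# Rough estimate of memory-interface power for a CoWoS-class package.
# The bandwidth and energy-per-bit values are illustrative assumptions
# drawn from the ranges cited in the text, not vendor specifications.
bandwidth_tb_s = 4.0        # assumed aggregate memory bandwidth, TB/s
energy_pj_per_bit = 2.0     # assumed interface energy, pJ/bit

bits_per_second = bandwidth_tb_s * 1e12 * 8      # TB/s -> bits/s
interface_power_w = bits_per_second * energy_pj_per_bit * 1e-12

print(f"Memory interface power: ~{interface_power_w:.0f} W")
```

At DDR-class energies of tens of picojoules per bit, the same bandwidth would cost hundreds of watts for the interface alone, which is why on-interposer links are essential at this scale.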

Early CoWoS deployments paired a single large logic die with two to four HBM stacks. Over time, the technology scaled aggressively. Interposer sizes expanded beyond 800–900 mm², pushing reticle and yield limits. Advanced packages now routinely integrate six to eight HBM stacks, each consisting of 8–12 DRAM dies bonded using TSVs and microbumps. Signal integrity across these short interposer traces allows operation at several gigabits per second per pin with minimal equalization overhead.

Manufacturing CoWoS is fundamentally different from traditional back-end packaging. Interposer fabrication resembles front-end wafer processing, using silicon wafers with multiple redistribution layers. TSV formation, wafer thinning, and precise die placement introduce yield sensitivities not seen in simpler packages. Assembly tolerances are tight: microbump pitches around 40 µm (and shrinking) require sub-micron alignment accuracy. Thermal management is equally critical, as large logic dies and dense memory stacks generate heat in close proximity.

As demand grew, CoWoS evolved beyond its original role as a memory enabler. The rise of chiplet-based architectures turned the interposer into a system fabric. Multiple logic dies (compute tiles, I/O dies, accelerators) could be interconnected with wide, low-latency links. This enabled designers to overcome reticle size limits while improving yield and design flexibility. CoWoS became a platform for heterogeneous integration rather than a single-purpose solution.

The AI acceleration boom of the 2020s elevated CoWoS from a niche capability to a strategic bottleneck. Training large neural networks requires massive parallel compute tightly coupled to enormous memory bandwidth. In many leading accelerators, performance scaling is limited less by transistor count than by HBM availability and interposer capacity. As a result, CoWoS production capacity became as strategically important as advanced logic nodes, with packaging throughput directly constraining system shipments.

Technically, CoWoS continues to push boundaries. Interposer routing layers have increased, power delivery networks have been reinforced to handle hundreds of watts per package, and mechanical designs have improved to manage warpage and stress. Variants have emerged to balance cost and performance, while coexistence with newer technologies such as hybrid bonding and 3D stacking is shaping next-generation systems.

Bottom Line: The chronicle of CoWoS is ultimately the story of how packaging became architecture. It demonstrated that performance, power efficiency, and scalability increasingly depend on microns of interconnect and millimeters of proximity. In an era where monolithic scaling alone can no longer carry progress, CoWoS stands as a defining example of how integration, not just miniaturization, drives the future of computing.

CONTACT TSMC

Also Read:

TSMC’s CoWoS® Sustainability Drive: Turning Waste into Wealth

TSMC’s 6th ESG AWARD Receives over 5,800 Proposals, Igniting Sustainability Passion

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!


2026 Outlook with Shelly Henry of MooresLabAI

2026 Outlook with Shelly Henry of MooresLabAI
by Daniel Nenni on 01-28-2026 at 8:00 am

Shelly Henry MooresLabAI

Tell us a little bit about yourself and your company.

I’m Shelly Henry, CEO and co-founder of MooresLabAI. After two decades of building chips for Xbox, HoloLens, and Azure, I reached a point where I knew the industry needed a reset. So I teamed up with fellow engineers Shashank Chaurasia and Sirish Munipalli to create MooresLabAI—a company built on the belief that silicon design should move at the speed of software. Our flagship product, VerifAgent™, is more than a tool—it’s a co-engineer that reads specs and builds testbenches, helping teams deliver chips faster, smarter, and with less burnout.

What was the most exciting high point of 2025 for your company?

The moment we launched VerifAgent™ at DAC 2025 was electric. We weren’t just unveiling a product—we were showing the world a new way to build chips. Watching engineers light up as our AI seamlessly built test plans, testbenches, and test cases was unforgettable. One team told us, “We didn’t believe it until we saw it find a bug we missed.” That shift—from doubt to belief—was the real win.

What was the biggest challenge your company faced in 2025?

Trust was our biggest hurdle. Asking seasoned engineers to let AI into their verification flow felt like asking them to hand over the keys to a race car. They worried about hallucinated code, broken flows, and losing control. We had to earn their confidence, one pilot, one compile, one simulation at a time.

How is your company’s work addressing this challenge?

We built VerifAgent to be transparent and accountable. Every output is simulation-verified, every testbench compile-ready. It fits into existing flows without disruption and runs securely on-prem. We didn’t just ask for trust—we proved ourselves. And when engineers saw our AI catch bugs and hit coverage goals faster than their own teams, the trust followed.

What do you think the biggest growth area for 2026 will be, and why?

2026 will be the year AI becomes indispensable in chip design. The demand for custom silicon is exploding, and traditional methods can’t keep up. Time and talent are stretched thin. AI isn’t just a nice-to-have anymore—it’s the only way forward. The companies that embrace it will lead the next wave of innovation.

How is your company’s work addressing this growth?

We’re building beyond verification. SpecAgent and DesignAgent are coming—AI modules that formalize specs and generate RTL. Together with VerifAgent, they’ll form MooreCoreX, our full-stack platform for IP development. We’re already piloting these tools, and by mid-2026, we’ll offer a complete AI-powered flow from spec to silicon.

What conferences did you participate in in 2025?

DAC 2025 was our breakout moment—nonstop traffic, live demos, and real excitement. At the AI Infra Summit, we connected with partners shaping the future of infrastructure. Verification Futures gave us deep, technical conversations with engineers who live and breathe verification. Each event reminded us: we’re solving real problems for real people.

What are your 2026 Conference Plans?

We’re going bigger in 2026. DAC and DVCon are locked in, with DVCon marking our debut as Gold Sponsors. We’re exploring new verticals like chiplets and CPUs. Conferences aren’t just marketing—they’re where we meet the people who challenge us, inspire us, and help us grow.

How do customers normally engage with your company?

Our journey with customers starts with a conversation, then a demo, and often a pilot. We deploy VerifAgent in their environment, run it on real IP, and measure results. When they see 90%+ productivity gains and bugs caught early, the pilot becomes a partnership. We stay close, offering support, training, and a shared commitment to success.

How are you incorporating AI into your products?

AI isn’t a feature—it’s our foundation. VerifAgent uses LLMs to generate verification assets from specs, and MooreLLM is our own model tuned for hardware. Our multi-agent system handles everything from parsing to debugging. It’s prompt-free, simulation-verified, and built for engineers—not marketers.

Is AI affecting the way you develop your products?

We use our own AI to build and test itself. VerifAgent helps us debug, analyze logs, and validate new features. Our development is fast, iterative, and deeply collaborative. Engineers and AI scientists work side-by-side, learning from data and refining models. It’s not just efficient—it’s energizing.

Additional Comments?

Reflecting on 2025, I’d add that it was as much about building credibility as it was about building technology. One big milestone for us was expanding our advisory board with industry veterans who lent us their experience and legitimacy. In September 2025, for instance, we brought on multiple high-profile advisors: Tom Fitzpatrick, a globally recognized authority in electronic design automation (EDA); K. Balasubramanian, former President of TI Japan; Sanjay Lall, who has 30+ years in EDA sales leadership (Cadence, etc.); Ashutosh Saxena, a noted AI entrepreneur and Stanford AI PhD; and Vijay Chandrasekharan, a two-time founder and seasoned software architect. Having these folks in our corner was a big confidence boost – not just internally, but for customers and investors.

BOOK A DEMO

Also Read:

We Need to Turn Specs into Oracles for Agentic Verification

Moores Lab(AI): Agentic AI and the New Era of Semiconductor Design

CEO Interview with Shelly Henry of Moores Lab (AI)


Synopsys’ Secure Storage Solution for OTP IP

Synopsys’ Secure Storage Solution for OTP IP
by Kalar Rajendiran on 01-28-2026 at 6:00 am

Synopsys Secure Storage Solution for OTP IP

For decades, One-Time Programmable (OTP) memory has been viewed as a foundational element of hardware security. Because OTP can be written only once and cannot be modified afterward, it has traditionally been trusted to store cryptographic keys, secure boot code, device identity, and configuration data. Permanence was often equated with security. In today’s threat environment, however, that assumption no longer holds. While OTP is extremely effective at preventing modification, it does not inherently prevent extraction. As attackers shift their focus from changing data to reading it, OTP must evolve from a permanent storage mechanism into part of a broader, hardware-rooted secure storage architecture.

Why Hardware-Rooted Secure Storage Matters for OTP IP

OTP excels at protecting integrity: once data is programmed, it cannot be altered. What it does not guarantee on its own is confidentiality. If secrets are stored directly in OTP in plaintext form, a sufficiently capable attacker with physical access may still be able to observe or extract those bits using advanced techniques. This distinction is critical. OTP prevents rewriting, but it does not automatically prevent reading. In modern systems, where physical access is often assumed and attacks routinely target hardware, permanence alone is no longer enough. Hardware-rooted secure storage addresses this gap by ensuring that even if memory contents are accessed, the underlying secrets remain protected and unusable.

What Next-Generation OTP Must Address

As SoCs become more complex, valuable, and widely deployed, next-generation OTP solutions must explicitly address the growing gap between immutability and secrecy. Storing static secrets directly in OTP creates an increasingly attractive target for attackers. Security must instead be device-specific, resilient against physical and invasive attacks, and scalable across large, heterogeneous SoCs. OTP can no longer be treated as “secure by default”; it must be embedded within an architecture that assumes attackers may eventually reach the hardware and still prevents meaningful compromise.

Closing the Security Gap with a Layered Defense Model

The most effective way to protect OTP in modern designs is through a layered defense model that combines multiple hardware-based security mechanisms. In this approach, secrets are not stored directly in OTP. Instead, a hardware-generated, device-unique root key is derived from the intrinsic physical characteristics of the chip itself and is never stored anywhere on the device. Cryptographic engines then use this root key to encrypt and decrypt data stored in OTP, while secure control logic manages access and policy enforcement. As a result, OTP holds only encrypted assets, not usable secrets. Even if an attacker succeeds in reading OTP contents, the data remains unintelligible without the regenerated, chip-specific key. This layered model fundamentally changes the security posture of OTP, transforming it from static memory into a protected vault anchored in silicon.
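The flow described above can be sketched in a few lines. This is a conceptual toy, not the Synopsys implementation: the "PUF" is simulated with a hash of a stand-in silicon fingerprint, and the hash-based XOR stream stands in for a real cryptographic engine. All names are hypothetical.

```python
import hashlib

def puf_root_key(silicon_fingerprint: bytes) -> bytes:
    # A real SRAM PUF regenerates this key from the chip's intrinsic
    # physical variation at power-up; here we hash a stand-in value.
    # Crucially, the key is derived on demand and never stored.
    return hashlib.sha256(silicon_fingerprint).digest()

def keystream(key: bytes, n: int) -> bytes:
    # Toy hash-based keystream standing in for a real cipher engine.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def protect(data: bytes, key: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

fingerprint = b"device-unique-silicon-variation"   # per-chip, not a stored value
secret = b"boot-signing-key"

# Only the ciphertext is burned into OTP; the plaintext secret never is.
otp_contents = protect(secret, puf_root_key(fingerprint))

# Reading OTP without the chip-specific key yields unusable bits...
assert otp_contents != secret
# ...but the same chip regenerates the key and recovers the secret.
assert protect(otp_contents, puf_root_key(fingerprint)) == secret
```

The point of the sketch is the asymmetry: an attacker who dumps the OTP array gets only `otp_contents`, while the decryption key exists only transiently, regenerated inside the chip that owns it.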

Building In Security for Scaling, Mission-Critical SoCs

Industries such as AI and high-performance computing, automotive, IoT, and aerospace increasingly deploy SoCs in environments where attacks are assumed and failures can have serious safety, financial, or national-security consequences. These systems demand security that is present from the very first instruction and remains effective throughout the product lifecycle. Hardware-rooted secure storage enables secure boot, prevents device cloning and counterfeiting, binds identity and policy enforcement to individual chips, and locks down debug and configuration pathways after manufacturing. Most importantly, it allows security to scale alongside SoC complexity, ensuring consistent protection across subsystems without relying on software-only assumptions.

Synopsys’ Secure Storage Solution for OTP IP

Synopsys addresses the limitations of traditional OTP with its Secure Storage Solution for OTP IP, which is designed specifically to solve the problem of permanent but readable memory. The solution integrates antifuse-based OTP with Synopsys SRAM Physical Unclonable Function (PUF) technology, an on-chip cryptographic engine, and a secure controller. At power-up, the SRAM PUF regenerates a unique root key derived from the silicon itself. This key is never stored and exists only transiently within the hardware. It is used internally to encrypt and decrypt data stored in OTP, ensuring that secrets are never exposed in plaintext at any point in the system.

Advantages, Ease of Integration, and Flexible Configuration

Beyond strong security, the Secure Storage Solution for OTP IP is designed for practical deployment in real-world SoCs. Delivered as a pre-integrated subsystem with a standard AMBA APB interface, it minimizes integration effort and risk. Flexible configuration options allow designers to protect OTP alone or extend hardware-rooted protection across the entire chip, depending on system requirements. The solution abstracts cryptographic complexity behind a simple software interface and automatically initializes keys and protections at power-up, reducing provisioning complexity and deployment risk. By eliminating the need for custom security architectures, the solution helps teams accelerate time-to-market while scaling robust, consistent security across product lines.

Summary

OTP remains a critical component of SoC security, but the industry can no longer assume that data which cannot be changed also cannot be compromised. Modern threat models require security that protects confidentiality as well as integrity, even under physical attack. Hardware-rooted secure storage closes this gap by ensuring that secrets are never stored directly and are instead derived from the silicon itself. By combining OTP with device-unique key generation and cryptographic protection, designers can establish a true root of trust that scales with modern SoCs and meets the demands of mission-critical applications. In today’s systems, OTP provides permanence, but hardware-rooted secure storage provides protection. Both are required to build lasting trust in silicon.

To learn more, visit Synopsys’ Secure Storage Solution for OTP IP page.

Also Read:

Curbing Soaring Power Demand Through Foundation IP

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!


Weebit Nano Reports on 2025 Targets

Weebit Nano Reports on 2025 Targets
by Daniel Nenni on 01-27-2026 at 10:00 am

Weebit Nano 2025 Success

In early January 2026, Weebit Nano Ltd. (ASX: WBT) released a comprehensive report detailing its performance against the 2025 commercial and technical targets the company had set at its 2024 Annual General Meeting. The announcement highlighted significant progress in both business development and technology qualification, underpinning the company’s transition from R&D phase toward broader commercialization of its embedded Resistive RAM (ReRAM) technology.

At the core of Weebit’s 2025 achievements were licensing agreements with major industry players. The company successfully secured technology licensing contracts with two Tier-1 semiconductor manufacturers — onsemi and Texas Instruments (TI) — marking a notable endorsement of its ReRAM IP by global leaders in semiconductor manufacturing. These deals enable Weebit’s embedded ReRAM technology to be integrated into advanced process technologies and future product platforms, significantly expanding the company’s commercial footprint.

Commercial traction was further demonstrated by Weebit’s expansion of design agreements with multiple product companies. By year-end 2025, Weebit had exceeded its target of three product customers integrating its ReRAM into next-generation products. These engagements span a range of application areas, notably in security, smart battery management systems, and other embedded devices, showcasing ReRAM’s versatility across diverse markets.

Technology qualification, a critical milestone for semiconductor IP adoption, also featured prominently in Weebit’s 2025 success. In December, the company announced that its ReRAM had achieved qualification based on JEDEC industry standards for non-volatile memories (NVM) at leading foundry DB HiTek. This qualification process involved rigorous testing across multiple wafer lots and represents a key step toward volume production readiness in DB HiTek’s 130nm Bipolar-CMOS-DMOS (BCD) process. Weebit noted that customers are already preparing for production using the qualified technology.

Although Weebit did not complete its third target foundry/IDM agreement within 2025, the company indicated that negotiations remain active and that the third agreement is now expected in early 2026, reflecting ongoing industry interest and pipeline momentum.

Weebit’s 2025 progress also built on earlier technical achievements that year, such as successfully qualifying its ReRAM modules to AEC-Q100 automotive standards for high-temperature operation, critical for automotive and industrial applications, and progressing towards technology transfer milestones with partners like onsemi. These milestones helped reinforce the reliability and robustness of ReRAM for demanding markets.

Looking ahead, Weebit’s leadership outlined ambitious 2026 priorities, including targeting revenue of at least A$10 million, delivering its first AI customer win, and achieving the first tape-out for a product company. These goals aim to build on 2025’s momentum and further strengthen Weebit’s position as a leading independent provider of ReRAM technology.

Bottom line: Weebit Nano’s 2025 report shows that the company largely met or exceeded its key commercial and technical targets through strategic partnerships, industry-standard technology qualification, and expanded customer engagements. While a few objectives remain open, these results set a solid foundation for growth in FY26 and beyond, reinforcing the relevance of ReRAM as a next-generation memory solution across a wide array of semiconductor markets.

Also Read:

Weebit Nano Moves into the Mainstream with Customer Adoption

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

Weebit Nano is at the Epicenter of the ReRAM Revolution