The Foundry Model Is Morphing — Again
by Jonah McLeod on 01-29-2026 at 10:00 am

When Morris Chang left Texas Instruments in 1983 to found TSMC, he was not merely starting a new company—he was proposing a new industrial logic. Chang recognized that semiconductor manufacturing had become so capital-intensive that it could no longer survive as just one function inside a vertically integrated company. His answer was radical for its time: specialize. Let foundries manufacture, let fabless companies design, and let scale economics determine who survives. Over the next four decades, that model reshaped the entire semiconductor industry.

But the conditions that made the pure-play foundry model dominant are changing again.

As process scaling slows, product lifetimes lengthen, and differentiation shifts away from raw transistor density, manufacturing alone is no longer sufficient to define competitive advantage—especially outside the consumer and cloud-compute markets. Increasingly, value is migrating up the stack, toward system enablement: hardened IP, software continuity, platform longevity, and predictable lifecycle support. In this environment, foundries are no longer just wafer suppliers. They are becoming infrastructure providers.

Seen in that light, GlobalFoundries’ recent acquisition of Synopsys’ CPU IP business—including the ARC processor family—marks something more significant than a portfolio expansion. It signals a second evolution of the foundry model itself. Where Morris Chang separated manufacturing from design to survive the economics of scaling, today’s foundries are selectively reintegrating high-value hard IP to survive the economics of maturity.

This is not a retreat to the IDM era. It is a recognition that in long-lifecycle markets—automotive, industrial, infrastructure, and embedded systems—customers increasingly demand platform certainty, not just wafers. The foundry model is not being abandoned. It is being adapted.

As traditional scaling economics weaken and capital intensity rises, competitive advantage—especially outside consumer and hyperscale compute—shifts from raw transistor density to lifecycle execution and design enablement. In embedded and industrial markets, leading suppliers explicitly commit to 10–15+ year availability and continuity of supply, reflecting customer demand for long-lived platforms rather than frequent redesigns.

At the same time, the semiconductor IP business has become a multi-billion-dollar market opportunity (roughly $7–8B annually, according to MarketsandMarkets), with the Asia Pacific region, predominantly China, holding the largest share. Moreover, it is still growing, with recent industry tracking showing strong IP revenue acceleration. Foundries are responding by expanding from wafer output into “design infrastructure”—ecosystems of qualified IP, certified EDA flows, PDKs, DFM, packaging, and services—exemplified by programs like TSMC’s OIP. In that environment, foundries are no longer just wafer suppliers; they are becoming platform and infrastructure providers for whole product lifecycles.

A Foundry Built by Accumulation

GlobalFoundries did not arise from a single founding moment. It was assembled over time through a sequence of structural decisions, each responding to a different pressure in the semiconductor industry.

The process began in 2008–2009, when AMD spun off its manufacturing arm to escape the escalating capital demands of the IDM model. The move preserved AMD’s access to advanced production while freeing its design organization from an unsustainable cost structure.

That spinout would not have been possible without Mubadala Investment Company, which supplied the long-horizon capital and ownership stability AMD lacked. Mubadala ultimately took an 82% stake in the new foundry, insulating it from short-term financial pressures and enabling a strategy focused on durability rather than speed.

Mubadala then expanded GlobalFoundries’ footprint by acquiring Chartered Semiconductor in Singapore. This move transformed GF from an AMD carve-out into a geographically distributed foundry with real scale, adding multiple fabs, a diverse customer base, and specialty-process breadth.

The final foundational piece arrived in 2014, when IBM exited semiconductor manufacturing and transferred its East Fishkill and Essex Junction fabs—along with its advanced R&D organization and long-term POWER and Z supply agreements—to GlobalFoundries, even paying the company to assume the operations. This infusion of U.S. engineering talent and SOI-driven process technology elevated GF from a mid-range player to a foundry with world-class capabilities in specialized logic.

Taken together, these moves produced a company built by accumulation rather than invention: AMD contributed manufacturing DNA, Abu Dhabi provided capital and industrial patience, Chartered added global scale, and IBM delivered advanced technology. The result is a foundry whose strategy diverges sharply from firms built solely to chase leading-edge nodes. Ironically, AMD relies on TSMC for its leading-edge FinFET process.

Two Electronics Universes

At the highest level, the global electronics industry has organized itself around two distinct competitive universes.

The first is consumer and cloud compute: smartphones, PCs, data centers, and hyperscale AI. This universe is driven by peak performance per watt, rapid product cycles, and relentless transistor density scaling. Capital intensity is extreme, product lifetimes are short, and a small number of customers account for a disproportionate share of demand. Manufacturing here has converged on a narrow set of players—TSMC foremost, with Samsung and Intel as the only credible peers at the leading edge.

The second universe, where GlobalFoundries squarely operates, is physical, embedded, and infrastructure electronics. This includes automotive systems, industrial automation, RF connectivity, power electronics, aerospace and defense, medical devices, energy systems, and embedded control and inference. Success in this universe is defined not by headline performance but by reliability, deterministic behavior, qualification, and supply continuity. Product lifetimes stretch over decades, not years.

Comparing GF to TSMC Misses the Point

TSMC, Samsung, and Intel compete in a three-player arena where scale, redundancy, and geopolitical resilience are existential requirements. Each new node demands tens of billions of dollars in capital investment, and the penalty for execution failure is severe.

GlobalFoundries deliberately exited this race. In doing so, it avoided the “middle squeeze” that eliminated many other foundries—companies that were neither large enough to win at the leading edge nor differentiated enough to command durable customers.

Instead, GF rebuilt around markets that value stability over novelty. Automotive, industrial, RF, power, and infrastructure customers do not want to requalify silicon every two years. They want predictability, long-term availability, and conservative process evolution. For these customers, a mature node that improves steadily over time is often more valuable than a bleeding-edge node with a short commercial half-life.

This is why GF appears to “sit by itself” while TSMC has Intel and Samsung as peers. That asymmetry reflects different market physics—not competitive weakness.

Process Innovation Without Density Obsession

Exiting the leading-edge race did not mean exiting process innovation. GlobalFoundries exited the transistor-density race, not the process-generation race that matters to its customers.

GF continues to advance performance, power efficiency, and reliability within existing nodes while evolving specialty platforms such as FD-SOI, RF-SOI, and power processes. These advances are often invisible in consumer-centric narratives, but they are decisive in automotive and industrial systems, where leakage, analog behavior, and qualification margins dominate real-world performance.

FD-SOI illustrates this philosophy particularly well. While it continues to scale geometrically, it does so on a different cadence and with different objectives than FinFET. Strong electrostatic control enables gains through body biasing, voltage scaling, and system integration, reducing pressure for aggressive geometry shrinks. This controlled evolution aligns naturally with long-lifecycle markets.
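
As a rough first-order illustration of that lever, using generic device equations rather than GF-published FD-SOI data: back-gate bias shifts the threshold voltage approximately linearly, and subthreshold leakage depends exponentially on that threshold,

$$\Delta V_T \approx -\gamma_{bb}\,V_{BB}, \qquad \frac{I_{\mathrm{leak}}(V_T+\Delta V_T)}{I_{\mathrm{leak}}(V_T)} \approx 10^{-\Delta V_T/S},$$

where $\gamma_{bb}$ is the back-gate body factor (typically tens of mV per volt for thin-BOX FD-SOI) and $S$ is the subthreshold swing (roughly 70–100 mV per decade). Under those assumptions, about a volt of reverse bias raises $V_T$ by on the order of 80 mV and cuts leakage by close to a decade, while forward bias spends the same mechanism on speed when the workload demands it.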

A Dual-Process Strategy by Design

A critical—and often overlooked—aspect of GlobalFoundries’ strategy is that it operates both FD-SOI and FinFET process families in volume. Unlike leading-edge foundries that concentrate almost exclusively on shrinking FinFET nodes, GF maintains two complementary process pillars optimized for different workloads and lifetimes.

FD-SOI platforms such as 22FDX and 28FD-SOI are optimized for ultra-low leakage, deterministic timing, and wide operating ranges, making them well suited for safety-critical, mixed-signal, and always-on domains. Mature FinFET nodes such as 12LP and 14LPP deliver higher performance for Linux-capable embedded systems, automotive domain controllers, and infrastructure silicon—without the churn of leading-edge scaling.

The coexistence of these platforms is not transitional; it is structural. Together, they allow GF to support a full spectrum of physical-world electronics without dependence on the 7-nm, 5-nm, or 3-nm race.

RISC-V and the Logic of Owning Compute IP

The strategic coherence of this model becomes clearer when viewed alongside the accelerating adoption of RISC-V. RISC-V’s fastest growth is occurring not in consumer compute, but in the same embedded, automotive, and industrial markets GF already serves.

Market estimates place the global RISC-V ecosystem at roughly $1.6–2.6 billion today, with projected growth of 25–33% CAGR, reaching $8–9 billion by 2030 and potentially $20–26 billion by the mid-2030s. While still small relative to Arm and x86, its trajectory is unmistakable.

GF’s acquisitions of MIPS and Synopsys’ ARC IP should be understood in this context. These moves anchor GF more deeply in the RISC-V ecosystem at the level that matters most: system enablement. ARC-V aligns naturally with FD-SOI; MIPS RISC-V aligns with mature FinFET. The goal is not ISA evangelism, but infrastructure—reducing friction between compute IP, tools, and manufacturing.

The Markets That Matter

High-end consumer AR/VR often benefits from leading-edge nodes, but it is economically narrow and consumer-driven. Robotics, by contrast, is a real-time systems problem where deterministic latency and power stability outweigh peak throughput. Automotive provides the clearest validation: long lifetimes, functional safety, and supply continuity dominate, and only a narrow slice of workloads truly require leading-edge silicon.

GlobalFoundries remains relevant because it aligned itself with the durable half of the electronics industry. While consumer and cloud compute dominate headlines, physical and infrastructure electronics reward capital discipline, stability, and long-term execution. GF exited the density arms race early and rebuilt around those realities. In an industry where many companies vanished by chasing scale they could not sustain, GlobalFoundries survived by choosing a different battlefield—and building a business model matched to its economics.

Also Read:

TSMC’s 6th ESG AWARD Receives over 5,800 Proposals, Igniting Sustainability Passion

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!

Why TSMC is Known as the Trusted Foundry


2026 Outlook with Mathew Burns of Samtec
by Daniel Nenni on 01-29-2026 at 8:00 am

We have been working with Matt and Samtec for the past 5 years. Samtec is a billion-dollar privately-held manufacturer with a wide range of electronic interconnect products that are critical in high-speed and high-reliability systems.

Matt has been with Samtec for more than 10 very important years, developing go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 25+ years, he has been a leader in design, applications engineering, technical sales and marketing in the telecommunications, medical and electronic components industries.

Tell us a little bit about your company.

It’s good to talk with you again, Dan. We are looking forward to a strong year.

Samtec designs and manufactures high-performance copper, optical, and RF interconnect solutions across industries including datacom, computing, telecommunications, and semiconductors, among many others. We are also celebrating our 50th anniversary in 2026.

Personally, I lead a team of experienced content developers evangelizing Samtec’s Silicon-to-Silicon™ solutions.

What was the most exciting high point of 2025 for your company?

Well, Dan, everything in the interconnect world is getting smaller, faster, and denser. 200+ Gbps throughput is now reality, and 400+ Gbps is right around the corner. In 2025, Samtec launched our Si-Fly® HD CPC cable systems enabling 102.4T in a 95 x 95 mm chip substrate. That’s 512 lanes running 200G each. It’s also the smallest, densest solution on the market. We also demonstrated several real-world implementations of Si-Fly HD with market leaders like Broadcom and Marvell Semiconductor. Intermateable CPO solutions are coming in 2026 as well.

What was the biggest challenge your company faced in 2025?

In short, scaling. We’ve chatted several times in recent months that the pace of innovation, especially in AI, is accelerating. Companies small and large must do more with less. Plus, there is constant churn in the industry, especially in Silicon Valley.

How is your company’s work addressing this challenge?

We are ramping up investments where it makes sense. A good example of that is our manufacturing capabilities. Our technologists develop smaller, flexible equipment that minimizes changeover from one product to another and keeps downtime to a minimum. On the personnel side, Samtec has a round-peg, round-hole philosophy. We are laser-focused on finding the right person for the right role while maximizing productivity using the latest AI and digital tools.

What do you think the biggest growth area for 2026 will be, and why?

That’s easy. AI. Next question. Joking aside, AI infrastructure and the data center space are not slowing down anytime soon. Capital expenditure and investment from the largest hyperscalers and the emerging neoscalers are still growing. We see more growth there in ’26 vs. ’25. Additionally, we expect to see growth in mil/aero given unfortunate instability around the world. Commercial space is highly competitive, with LEO/MEO satellite providers popping up globally.

How is your company’s work addressing this growth?

In a few areas. We are growing our product roadmap with new interconnect solutions while also rounding out product families for new applications. New industry standards like VITA™ 90 VNX+™ and VITA 93 QMC™ are quickly being adopted in mil/aero and commercial space applications.

On the sales side, we continue to ramp up as well. Let’s use India as an example. We are growing the sales, marketing, and technical support teams in key geographies. We are expanding into a new office in Bengaluru, and we are adding to our sales channels with new regional and industry-specific distribution partners.

Are you incorporating AI into your products?

Well, Dan, since most of Samtec’s interconnect solutions are passive in nature, it’s hard to incorporate AI directly into our products. Yet, products like Si-Fly HD are essential to advancing XPU scale-up and scale-out in AI infrastructure.

Is AI affecting the way you develop your products?

Yes, without a doubt. Samtec has leveraged AI in machine vision for testing and inspection of high-volume interconnect solutions for several years. AI has advanced the accuracy and speed of several MCAD and ECAD EDA tools we use for simulating new products individually and in situ. Scripting optimization is obviously easier in these EDA tools when using AI.

Additionally, Samtec is developing closed loop additive manufacturing processes using AI vision systems to ensure printed part quality during production. AI systems also increase automation in the adoption and scaling of additive manufacturing processes.

What conferences did you attend in 2025 and how was the traffic?

Samtec sponsors and attends many industry-specific, regional, and local conferences all over the globe. Example conferences include DesignCon, OFC, embedded world, PCI-SIG DevCons, IMS, European Microwave, ECOC, AI Infra, SuperComputing, New Tech, AOC, Electronica, and more. With few exceptions, conference and trade show attendance and engagement remain strong. We really haven’t noticed any declines in recent years. If anything, I think conference attendance is still growing.

Will you participate in conferences in 2026? Same or more as 2025?

We will probably be on par in 2026 compared to 2025. There’s always some churn in conferences we sponsor annually. However, we typically support the same large events as mentioned each year.

How do customers normally engage with your company?

Dan, that’s a great question. There are many different paths to engage with Samtec. We have a global team of factory-trained Field Sales Engineers (FSEs) who are typically the front door to Samtec. They are supported technically by our growing team of FAEs who average >10 years of industry design experience. For self-service, Samtec has several design tools on samtec.com. We also extend our reach with global, regional, local, and specialty distribution partners who support our >100K direct and indirect customers in 125+ countries.

Additional comments?

Thanks, Dan, for the opportunity to talk with you and your team. We love being a part of the curated semiwiki.com ecosystem. We look forward to a strong 2026.

Also Read:

Webinar – The Path to Smaller, Denser, and Faster with CPX, Samtec’s Co-Packaged Copper and Optics

Samtec Practical Cable Management for High-Data-Rate Systems

How Channel Operating Margin (COM) Came to be and Why It Endures

Visualizing System Design with Samtec’s Picture Search


2025 Retrospective. Innovation in Verification
by Bernard Murphy on 01-29-2026 at 6:00 am

As usual in January we start with a look back at the papers we reviewed last year. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

Looking back at 2025

We decided on a new way to present our findings this year. I’ll start with a list of blogs sorted by major topic (e.g. AI), then by popularity. Then a quick summary of Paul and Raúl’s takeaways, closing with some insights into who was reading our posts through 2025.

Our motivation for topic selections this year was obviously influenced by AI. It is amazing to realize how quickly our selections in AI have evolved over the several years we have been posting: from CNNs to RNNs to LLMs and reinforcement learning. You can be confident that we will continue to track this topic. Quantum simulation was surprisingly popular, maybe suggesting a follow-on. Hardware acceleration is always hot; we’ll find more. Analog continues to be important, given growing complexity and embedded roles in digital systems. So is inspiration from software engineering innovations. And, from what we hear, interest in new methods to tackle verification problems in multicore systems remains high.

Category | Topic
AI | Agentic Bug Localization (November; Stanford, Yale, USC)
AI | Neurosymbolic Code Generation (September; Google, MIT, UT Austin, Cornell)
AI | Prompt Engineering for Security (July; U Florida)
AI | LLMs Raise Game in Assertion Gen (April; Princeton)
Quantum | Simulating Quantum Computers (December; ETH, U Toronto, U Mass.)
HW Accel | Emulator-Like Simulation Acceleration on GPUs (October; NVIDIA, U Beijing)
Analog | Reachability in Analog and AMS (June; Texas A&M)
Analog | Metamorphic Test in AMS (March; U Bremen, JKU Austria)
SW inspired | Cocotb for Verification (August; U Tamilnadu)
SW inspired | Optimizing an IR for Hardware Design (May; ETH and Cambridge U)
Multi-core Verif | Bug Hunting in Multi-Core Processors (February; IBM)

Paul and Raúl’s combined takeaways

An AI focus is natural, yet we also aim to emphasize grounded research. Our April paper from Princeton on assertion generation is an example, highlighting challenges in this area. Building an agentic verification engineer is going to take more than an off-the-shelf LLM and a bit of prompting. Still, research in AI for verification continues to push forward: November’s agentic bug localization paper echoes the top-focus area in industry and academia, while February’s paper explores intermediate representations and knowledge graphs. Expect to see more this year on fine-tuned models, transfer learning, LLM long-term memory, etc.

We’re intentionally optimizing for broad and topical coverage: digital, analog, GPU acceleration, quantum, LLVM. That said, our selections have leaned heavily toward academic papers. We will try to find more industry-sponsored papers this year. Academic research topics are also influenced by industry needs, but it would be useful to see more insights into early deployment experiences, either directly authored or co-authored by semiconductor or systems enterprises.

It is also useful to explore overlaps and differences with software verification, especially for RTL/SystemVerilog/SystemC. Two papers touched on this, and it will continue to be a topic of interest in our selections.

Paul would like to add that we’re grateful and pleased to see research in verification continue to be so vibrant. Stepping back from it all (see below), there really are many quality advances being published, offering a wealth of interesting ideas from around the world!

Who is reading our blogs?

LinkedIn (LI) provides some revealing demographic insights. We now see 20k-30k views per blog. These include not only engineers in ASIC communities but also FPGA communities and software developers.

Top readership groups registered under LI are from Intel, Synopsys, Cadence, Qualcomm, AMD and Apple. Readers are based in the San Francisco Bay Area, the Bengaluru area, Austin (TX) and Portland (OR), with additional interest from Munich (Germany), Paris (France) and Ankara (Turkey).

We greatly appreciate your support and the contributions made by the authors of papers we review. We would still like to see more feedback: suggestions for topics to cover, or support, or comments on topics we have reviewed. If you want to provide private feedback, contact me through my LinkedIn page HERE.

Also Read:

Simulating Quantum Computers. Innovation in Verification

Agentic Bug Localization. Innovation in Verification

Emulator-Like Simulation Acceleration on GPUs. Innovation in Verification


The Chronicle of TSMC CoWoS
by Daniel Nenni on 01-28-2026 at 10:00 am

As semiconductor scaling slowed and system performance became increasingly constrained by data movement rather than raw compute, advanced packaging emerged as a decisive lever. Among these technologies, TSMC’s CoWoS (Chip-on-Wafer-on-Substrate) represents a turning point in how high-performance systems are architected, manufactured, and scaled. Its evolution mirrors the industry’s shift from transistor-centric progress to system-level optimization.

At its core, CoWoS is a 2.5D integration technology. Logic dies and memory stacks are placed side by side on a silicon interposer, which is then mounted onto an organic substrate. The silicon interposer enables wiring densities far beyond what organic substrates can support, with metal line pitches in the low single-digit microns. Through-silicon vias pass signals and power vertically through the interposer, connecting it to the package substrate below.

The technical motivation for CoWoS was bandwidth density. Traditional off-package memory interfaces, such as DDR, rely on relatively long traces and limited I/O counts, resulting in high power consumption and latency. In contrast, CoWoS allows logic dies to interface with HBM stacks using thousands of parallel connections. Modern CoWoS implementations support memory bandwidth exceeding 3–5 TB/s per package, with energy efficiency measured in a few picojoules per bit, orders of magnitude better than conventional memory systems.
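
A back-of-the-envelope calculation shows where such package-level numbers come from. The interface width, per-pin rate, and stack count below are typical public HBM3-class figures assumed for illustration, not TSMC-published specifications.

```python
# Back-of-the-envelope CoWoS memory bandwidth and I/O power estimate.
# HBM interface width, per-pin rate, and stack count are typical public
# HBM3-class figures, assumed here for illustration only.
bits_per_stack = 1024          # interface width per HBM stack
gbps_per_pin   = 6.4           # per-pin data rate, Gb/s
stacks         = 6             # stacks per package in modern CoWoS designs

bw_per_stack = bits_per_stack * gbps_per_pin / 8 / 1000   # TB/s
bw_package   = bw_per_stack * stacks
print(f"~{bw_per_stack:.2f} TB/s per stack, ~{bw_package:.1f} TB/s per package")

# At a few picojoules per bit, full-bandwidth memory I/O stays around 100 W.
pj_per_bit = 3.0
io_power_w = bw_package * 1e12 * 8 * pj_per_bit * 1e-12   # bytes/s -> bits/s -> W
print(f"~{io_power_w:.0f} W of memory-interface power at {pj_per_bit} pJ/bit")
```

Six stacks of roughly 0.8 TB/s each land squarely in the 3–5 TB/s range quoted above, at an interface power no off-package DDR-style bus could approach at that bandwidth.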

Early CoWoS deployments paired a single large logic die with two to four HBM stacks. Over time, the technology scaled aggressively. Interposer sizes expanded beyond 800–900 mm², pushing reticle and yield limits. Advanced packages now routinely integrate six to eight HBM stacks, each consisting of 8–12 DRAM dies bonded using TSVs and microbumps. Signal integrity across these short interposer traces allows operation at several gigabits per second per pin with minimal equalization overhead.

Manufacturing CoWoS is fundamentally different from traditional back-end packaging. Interposer fabrication resembles front-end wafer processing, using silicon wafers with multiple redistribution layers. TSV formation, wafer thinning, and precise die placement introduce yield sensitivities not seen in simpler packages. Assembly tolerances are tight: microbump pitches around 40 µm (and shrinking) require sub-micron alignment accuracy. Thermal management is equally critical, as large logic dies and dense memory stacks generate heat in close proximity.

As demand grew, CoWoS evolved beyond its original role as a memory enabler. The rise of chiplet-based architectures turned the interposer into a system fabric. Multiple logic dies (compute tiles, I/O dies, accelerators) could be interconnected with wide, low-latency links. This enabled designers to overcome reticle size limits while improving yield and design flexibility. CoWoS became a platform for heterogeneous integration rather than a single-purpose solution.

The AI acceleration boom of the 2020s elevated CoWoS from a niche capability to a strategic bottleneck. Training large neural networks requires massive parallel compute tightly coupled to enormous memory bandwidth. In many leading accelerators, performance scaling is limited less by transistor count than by HBM availability and interposer capacity. As a result, CoWoS production capacity became as strategically important as advanced logic nodes, with packaging throughput directly constraining system shipments.

Technically, CoWoS continues to push boundaries. Interposer routing layers have increased, power delivery networks have been reinforced to handle hundreds of watts per package, and mechanical designs have improved to manage warpage and stress. Variants have emerged to balance cost and performance, while coexistence with newer technologies such as hybrid bonding and 3D stacking is shaping next-generation systems.

Bottom Line: The chronicle of CoWoS is ultimately the story of how packaging became architecture. It demonstrated that performance, power efficiency, and scalability increasingly depend on microns of interconnect and millimeters of proximity. In an era where monolithic scaling alone can no longer carry progress, CoWoS stands as a defining example of how integration, not just miniaturization, drives the future of computing.

CONTACT TSMC

Also Read:

TSMC’s CoWoS® Sustainability Drive: Turning Waste into Wealth

TSMC’s 6th ESG AWARD Receives over 5,800 Proposals, Igniting Sustainability Passion

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!


2026 Outlook with Shelly Henry of MooresLabAI
by Daniel Nenni on 01-28-2026 at 8:00 am

Tell us a little bit about yourself and your company.

I’m Shelly Henry, CEO and co-founder of MooresLabAI. After two decades of building chips for Xbox, HoloLens, and Azure, I reached a point where I knew the industry needed a reset. So I teamed up with fellow engineers Shashank Chaurasia and Sirish Munipalli to create MooresLabAI—a company built on the belief that silicon design should move at the speed of software. Our flagship product, VerifAgent™, is more than a tool—it’s a co-engineer that reads specs and builds testbenches, helping teams deliver chips faster, smarter, and with less burnout.

What was the most exciting high point of 2025 for your company?

The moment we launched VerifAgent™ at DAC 2025 was electric. We weren’t just unveiling a product—we were showing the world a new way to build chips. Watching engineers light up as our AI seamlessly built test plans, testbenches, and test cases was unforgettable. One team told us, “We didn’t believe it until we saw it find a bug we missed.” That shift—from doubt to belief—was the real win.

What was the biggest challenge your company faced in 2025?

Trust was our biggest hurdle. Asking seasoned engineers to let AI into their verification flow felt like asking them to hand over the keys to a race car. They worried about hallucinated code, broken flows, and losing control. We had to earn their confidence, one pilot, one compile, one simulation at a time.

How is your company’s work addressing this challenge?

We built VerifAgent to be transparent and accountable. Every output is simulation-verified, every testbench compile-ready. It fits into existing flows without disruption and runs securely on-prem. We didn’t just ask for trust—we proved ourselves. And when engineers saw our AI catch bugs and hit coverage goals faster than their own teams, the trust followed.

What do you think the biggest growth area for 2026 will be, and why?

2026 will be the year AI becomes indispensable in chip design. The demand for custom silicon is exploding, and traditional methods can’t keep up. Time and talent are stretched thin. AI isn’t just a nice-to-have anymore—it’s the only way forward. The companies that embrace it will lead the next wave of innovation.

How is your company’s work addressing this growth?

We’re building beyond verification. SpecAgent and DesignAgent are coming—AI modules that formalize specs and generate RTL. Together with VerifAgent, they’ll form MooreCoreX, our full-stack platform for IP development. We’re already piloting these tools, and by mid-2026, we’ll offer a complete AI-powered flow from spec to silicon.

What conferences did you participate in in 2025?

DAC 2025 was our breakout moment—nonstop traffic, live demos, and real excitement. At the AI Infra Summit, we connected with partners shaping the future of infrastructure. Verification Futures gave us deep, technical conversations with engineers who live and breathe verification. Each event reminded us: we’re solving real problems for real people.

What are your 2026 Conference Plans?

We’re going bigger in 2026. DAC and DVCon are locked in, with DVCon marking our debut as Gold Sponsors. We’re exploring new verticals like chiplets and CPUs. Conferences aren’t just marketing—they’re where we meet the people who challenge us, inspire us, and help us grow.

How do customers normally engage with your company?

Our journey with customers starts with a conversation, then a demo, and often a pilot. We deploy VerifAgent in their environment, run it on real IP, and measure results. When they see 90%+ productivity gains and bugs caught early, the pilot becomes a partnership. We stay close, offering support, training, and a shared commitment to success.

How are you incorporating AI into your products?

AI isn’t a feature—it’s our foundation. VerifAgent uses LLMs to generate verification assets from specs, and MooreLLM is our own model tuned for hardware. Our multi-agent system handles everything from parsing to debugging. It’s prompt-free, simulation-verified, and built for engineers—not marketers.

Is AI affecting the way you develop your products?

We use our own AI to build and test itself. VerifAgent helps us debug, analyze logs, and validate new features. Our development is fast, iterative, and deeply collaborative. Engineers and AI scientists work side-by-side, learning from data and refining models. It’s not just efficient—it’s energizing.

Additional Comments?

Reflecting on 2025, I’d add that it was as much about building credibility as it was about building technology. One big milestone for us was expanding our advisory board with industry veterans who lent us their experience and legitimacy. In September 2025, for instance, we brought on multiple high-profile advisors: Tom Fitzpatrick, a globally recognized authority in electronic design automation (EDA); K. Balasubramanian, former President of TI Japan; Sanjay Lall, who has 30+ years in EDA sales leadership (Cadence, etc.); Ashutosh Saxena, a noted AI entrepreneur and Stanford AI PhD; and Vijay Chandrasekharan, a two-time founder and seasoned software architect. Having these folks in our corner was a big confidence boost – not just internally, but for customers and investors.

BOOK A DEMO

Also Read:

We Need to Turn Specs into Oracles for Agentic Verification

Moores Lab(AI): Agentic AI and the New Era of Semiconductor Design

CEO Interview with Shelly Henry of Moores Lab (AI)


Synopsys’ Secure Storage Solution for OTP IP
by Kalar Rajendiran on 01-28-2026 at 6:00 am

For decades, One-Time Programmable (OTP) memory has been viewed as a foundational element of hardware security. Because OTP can be written only once and cannot be modified afterward, it has traditionally been trusted to store cryptographic keys, secure boot code, device identity, and configuration data. Permanence was often equated with security. In today’s threat environment, however, that assumption no longer holds. While OTP is extremely effective at preventing modification, it does not inherently prevent extraction. As attackers shift their focus from changing data to reading it, OTP must evolve from a permanent storage mechanism into part of a broader, hardware-rooted secure storage architecture.

Why Hardware-Rooted Secure Storage Matters for OTP IP

OTP excels at protecting integrity: once data is programmed, it cannot be altered. What it does not guarantee on its own is confidentiality. If secrets are stored directly in OTP in plaintext form, a sufficiently capable attacker with physical access may still be able to observe or extract those bits using advanced techniques. This distinction is critical. OTP prevents rewriting, but it does not automatically prevent reading. In modern systems, where physical access is often assumed and attacks routinely target hardware, permanence alone is no longer enough. Hardware-rooted secure storage addresses this gap by ensuring that even if memory contents are accessed, the underlying secrets remain protected and unusable.

What Next-Generation OTP Must Address

As SoCs become more complex, valuable, and widely deployed, next-generation OTP solutions must explicitly address the growing gap between immutability and secrecy. Storing static secrets directly in OTP creates an increasingly attractive target for attackers. Security must instead be device-specific, resilient against physical and invasive attacks, and scalable across large, heterogeneous SoCs. OTP can no longer be treated as “secure by default”; it must be embedded within an architecture that assumes attackers may eventually reach the hardware and still prevents meaningful compromise.

Closing the Security Gap with a Layered Defense Model

The most effective way to protect OTP in modern designs is through a layered defense model that combines multiple hardware-based security mechanisms. In this approach, secrets are not stored directly in OTP. Instead, a hardware-generated, device-unique root key is derived from the intrinsic physical characteristics of the chip itself and is never stored anywhere on the device. Cryptographic engines then use this root key to encrypt and decrypt data stored in OTP, while secure control logic manages access and policy enforcement. As a result, OTP holds only encrypted assets, not usable secrets. Even if an attacker succeeds in reading OTP contents, the data remains unintelligible without the regenerated, chip-specific key. This layered model fundamentally changes the security posture of OTP, transforming it from static memory into a protected vault anchored in silicon.
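
A minimal software sketch of that control flow, assuming a generic AEAD cipher and key-derivation step, may help make the idea concrete. This is a conceptual illustration only, not Synopsys’ Secure Storage implementation: the PUF response is faked with random bytes, HKDF stands in for the hardware key-derivation and error-correction logic, and a real device performs all of this in silicon without ever exposing the root key to software.

```python
# Conceptual sketch of the layered model described above: a root key is
# re-derived from a device-unique (PUF-like) response at power-up and used
# only transiently; the OTP array holds ciphertext, never the secret itself.
# Hypothetical illustration -- not Synopsys' Secure Storage implementation.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
import os

def derive_root_key(puf_response: bytes) -> bytes:
    # In silicon this is dedicated logic with helper-data error correction;
    # HKDF stands in for that key-derivation step here.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"otp-root-key").derive(puf_response)

puf_response = os.urandom(32)        # stand-in for the chip-unique SRAM PUF pattern
root_key = derive_root_key(puf_response)

# Provisioning: the secret is encrypted before it is burned into OTP.
secret = b"device-identity-or-boot-key"
nonce = os.urandom(12)
otp_contents = nonce + AESGCM(root_key).encrypt(nonce, secret, None)

# Field use: the same PUF response regenerates the same key, so the ciphertext
# read back from OTP can be unwrapped; reading OTP alone yields nothing usable.
key_again = derive_root_key(puf_response)
recovered = AESGCM(key_again).decrypt(otp_contents[:12], otp_contents[12:], None)
assert recovered == secret
```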

Building In Security for Scaling, Mission-Critical SoCs

Industries such as AI and high-performance computing, automotive, IoT, and aerospace increasingly deploy SoCs in environments where attacks are assumed and failures can have serious safety, financial, or national-security consequences. These systems demand security that is present from the very first instruction and remains effective throughout the product lifecycle. Hardware-rooted secure storage enables secure boot, prevents device cloning and counterfeiting, binds identity and policy enforcement to individual chips, and locks down debug and configuration pathways after manufacturing. Most importantly, it allows security to scale alongside SoC complexity, ensuring consistent protection across subsystems without relying on software-only assumptions.

Synopsys’ Secure Storage Solution for OTP IP

Synopsys addresses the limitations of traditional OTP with its Secure Storage Solution for OTP IP, which is designed specifically to solve the problem of permanent but readable memory. The solution integrates antifuse-based OTP with Synopsys SRAM Physical Unclonable Function (PUF) technology, an on-chip cryptographic engine, and a secure controller. At power-up, the SRAM PUF regenerates a unique root key derived from the silicon itself. This key is never stored and exists only transiently within the hardware. It is used internally to encrypt and decrypt data stored in OTP, ensuring that secrets are never exposed in plaintext at any point in the system.

Advantages, Ease of Integration, and Flexible Configuration

Beyond strong security, the Secure Storage Solution for OTP IP is designed for practical deployment in real-world SoCs. Delivered as a pre-integrated subsystem with a standard AMBA APB interface, it minimizes integration effort and risk. Flexible configuration options allow designers to protect OTP alone or extend hardware-rooted protection across the entire chip, depending on system requirements. The solution abstracts cryptographic complexity behind a simple software interface and automatically initializes keys and protections at power-up, reducing provisioning complexity and deployment risk. By eliminating the need for custom security architectures, the solution helps teams accelerate time-to-market while scaling robust, consistent security across product lines.

Summary

OTP remains a critical component of SoC security, but the industry can no longer assume that data which cannot be changed also cannot be compromised. Modern threat models require security that protects confidentiality as well as integrity, even under physical attack. Hardware-rooted secure storage closes this gap by ensuring that secrets are never stored directly and are instead derived from the silicon itself. By combining OTP with device-unique key generation and cryptographic protection, designers can establish a true root of trust that scales with modern SoCs and meets the demands of mission-critical applications. In today’s systems, OTP provides permanence, but hardware-rooted secure storage provides protection. Both are required to build lasting trust in silicon.

To learn more, visit Synopsys’ Secure Storage Solution for OTP IP page.

Also Read:

Curbing Soaring Power Demand Through Foundation IP

Acceleration of Complex RISC-V Processor Verification Using Test Generation Integrated with Hardware Emulation

TSMC based 3D Chips: Socionext Achieves Two Successful Tape-Outs in Just Seven Months!


Weebit Nano Reports on 2025 Targets
by Daniel Nenni on 01-27-2026 at 10:00 am

In early January 2026, Weebit Nano Ltd. (ASX: WBT) released a comprehensive report detailing its performance against the 2025 commercial and technical targets the company had set at its 2024 Annual General Meeting. The announcement highlighted significant progress in both business development and technology qualification, underpinning the company’s transition from R&D phase toward broader commercialization of its embedded Resistive RAM (ReRAM) technology.

At the core of Weebit’s 2025 achievements were licensing agreements with major industry players. The company successfully secured technology licensing contracts with two Tier-1 semiconductor manufacturers — onsemi and Texas Instruments (TI) — marking a notable endorsement of its ReRAM IP by global leaders in semiconductor manufacturing. These deals enable Weebit’s embedded ReRAM technology to be integrated into advanced process technologies and future product platforms, significantly expanding the company’s commercial footprint.

Commercial traction was further demonstrated by Weebit’s expansion of design agreements with multiple product companies. By year-end 2025, Weebit had exceeded its target of three product customers integrating its ReRAM into next-generation products. These engagements span a range of application areas, notably in security, smart battery management systems, and other embedded devices, showcasing ReRAM’s versatility across diverse markets.

Technology qualification, a critical milestone for semiconductor IP adoption, also featured prominently in Weebit’s 2025 success. In December, the company announced that its ReRAM had achieved qualification based on JEDEC industry standards for non-volatile memories (NVM) at leading foundry DB HiTek. This qualification process involved rigorous testing across multiple wafer lots and represents a key step toward volume production readiness in DB HiTek’s 130nm Bipolar-CMOS-DMOS (BCD) process. Weebit noted that customers are already preparing for production using the qualified technology.

Although Weebit did not complete its third target foundry/IDM agreement within 2025, the company indicated that negotiations remain active and that the third agreement is now expected in early 2026, reflecting ongoing industry interest and pipeline momentum.

Weebit’s 2025 progress also built on earlier technical achievements that year, such as successfully qualifying its ReRAM modules to AEC-Q100 automotive standards for high-temperature operation, critical for automotive and industrial applications, and progressing towards technology transfer milestones with partners like onsemi. These milestones helped reinforce the reliability and robustness of ReRAM for demanding markets.

Looking ahead, Weebit’s leadership outlined ambitious 2026 priorities, including targeting revenue of at least A$10 million, delivering its first AI customer win, and achieving the first tape-out for a product company. These goals aim to build on 2025’s momentum and further strengthen Weebit’s position as a leading independent provider of ReRAM technology.

Bottom line: Weebit Nano’s 2025 report shows that the company largely met or exceeded its key commercial and technical targets through strategic partnerships, industry-standard technology qualification, and expanded customer engagements. While a few objectives remain in progress, the results set a solid foundation for growth in FY26 and beyond, reinforcing the relevance of ReRAM as a next-generation memory solution in a wide array of semiconductor markets.

Also Read:

Weebit Nano Moves into the Mainstream with Customer Adoption

Relaxation-Aware Programming in ReRAM: Evaluating and Optimizing Write Termination

Weebit Nano is at the Epicenter of the ReRAM Revolution


2026 Outlook with Steve Roddy of Quadric
by Daniel Nenni on 01-27-2026 at 8:00 am

Tell us a little bit about yourself and your company.

I am the Chief Marketing Officer at Quadric, where I have spent the past four years helping scale the company’s market presence and customer engagement. Quadric is a pure-play IP licensing company that has been operating for more than seven years. We specialize in a truly unique, fully programmable AI inference processor designed for edge and device-level inference, enabling customers to deploy advanced AI workloads without sacrificing flexibility or efficiency.

What was the most exciting high point of 2025 for your company? 

2025 was a breakout growth year for Quadric. Revenue expanded dramatically, reaching the eight-figure range, and multiple customers progressed deep into their tape-out cycles, positioning us to see customer silicon in 2026. We capped off the year by closing a very strong Series C funding round, which further validates both our technology and our long-term market opportunity (announced Jan 14, 2026).

What was the biggest challenge your company faced in 2025? 

The biggest challenge was managing the pace of growth—both in terms of team expansion and customer demand. We roughly doubled our team size and scaled our sales organization to engage with an order of magnitude more prospective customers. It was a classic “good news / bad news” scenario: rapidly growing interest in our technology required more people, more demos, more benchmarks, and more infrastructure, fast.

On the technology side, the most significant shift was the explosive demand for running LLMs and SLMs directly on devices. In 2025, the conversation changed almost overnight from “Is it possible to run an LLM on device?” to “We must run LLMs on device.” On-device LLMs moved from experimental to mainstream far faster than most of the industry anticipated.

How is your company’s work addressing this challenge? 

In 2025, we made major investments in our software infrastructure to enable efficient execution of LLMs on the Chimera processor platform. Unlike traditional CNN- or vision-centric models, modern language models require advanced techniques such as key-value caching (KV cache), which go well beyond simple graph compilation.

Our Chimera Graph Compiler (CGC) ingests AI models, generates optimized C++ representations of those graphs, and targets efficient execution on our processor. However, enabling high-performance LLM inference required additional application-level C++ code beyond graph execution alone. This is where Chimera is fundamentally different from conventional NPU “accelerators.” Chimera runs full C++ applications—not just fragments of an AI model—entirely on the processor.

As a result, we now support a complete software stack for token-based models, including launch, prefill, and KV caching, all running natively on Chimera with no reliance on a companion CPU.
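
For readers less familiar with why KV caching matters here, the sketch below shows the generic prefill-then-decode pattern in plain Python/NumPy. It illustrates the technique itself and is not Quadric’s Chimera code (which, as noted above, runs as compiled C++ on the processor); the dimensions and weights are arbitrary stand-ins.

```python
# Generic single-head attention with a KV cache: prefill once over the prompt,
# then decode token by token, appending to the cache instead of recomputing it.
# Illustrative only -- not Quadric's Chimera implementation.
import numpy as np

d = 64
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def attend(q, K, V):
    # q: (d,), K and V: (t, d); returns the attention-weighted value vector
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return V.T @ w

# Prefill: run the whole prompt once, caching K and V for every position.
prompt = rng.standard_normal((16, d))          # 16 prompt-token embeddings
K_cache = prompt @ Wk
V_cache = prompt @ Wv

# Decode: each new token attends over the cached keys/values plus its own entry,
# so per-token cost grows with sequence length instead of re-running the prompt.
x = rng.standard_normal(d)                     # embedding of the latest token
for _ in range(8):
    K_cache = np.vstack([K_cache, x @ Wk])
    V_cache = np.vstack([V_cache, x @ Wv])
    x = attend(x @ Wq, K_cache, V_cache)       # stand-in for the rest of the layer
```

Managing this cache, launch, and prefill flow is exactly the kind of application-level control code that sits outside a pure graph compiler, which is the point the paragraph above makes about running full C++ applications on the processor.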

What do you think the biggest growth area for 2026 will be, and why?

The biggest growth area in 2026 will be edge-resident generative AI—particularly LLMs, VLMs, and agent-based models running locally on devices. Market drivers such as latency, power efficiency, data privacy, cost control, and system resilience are pushing intelligence out of the datacenter and onto devices across automotive, industrial, consumer, and infrastructure markets. Customers are no longer willing to compromise between performance and programmability, and that shift strongly favors architectures designed for long-term flexibility.

How is your company’s work addressing this growth? 

Quadric is uniquely positioned to support this growth because Chimera is fully programmable and future-proof by design. As models evolve—and they are evolving rapidly—customers can deploy new networks, operators, and software techniques without changing hardware. Our ability to run complete AI applications in C++, including complex control flow and memory management, enables customers to deploy sophisticated generative AI workloads today and adapt them over time. This dramatically reduces risk for silicon designers planning products with long lifecycles.

What conferences did you attend in 2025 and how was the traffic?

In 2025, Quadric participated in a range of leading AI, semiconductor, and embedded systems conferences. Across the board, traffic and engagement were exceptionally strong. Booth conversations were deeper and more technically informed than in prior years, reflecting a more mature market where customers are actively evaluating deployment strategies rather than simply exploring concepts.

Will you participate in conferences in 2026? Same or more as 2025?

Yes—2026 will be a significant expansion year for us. We started the year with a big presence at CES in Las Vegas, and throughout the full year we plan to attend more events, increase sponsorships, and focus more on vertical-specific conferences.

How do customers normally engage with your company?

Engagement typically begins with technical briefings and application discussions, followed by hands-on evaluations and benchmarking. Because Chimera™ is highly programmable, customer engagements are often collaborative and long-term.

Additional comments? 

The pace of change in AI is unprecedented, but it is also creating tremendous opportunity for companies willing to rethink traditional hardware and software boundaries. At Quadric, we believe programmability is the key to sustainable AI innovation, and we are excited to help our customers bring advanced intelligence to the edge—without compromise.

Also Read:

Quadric: Revolutionizing Edge AI

Legacy IP Providers Struggle to Solve the NPU Dilemma

Recent AI Advances Underline Need to Futureproof Automotive AI


Hierarchical Device Planning as an Enabler of System Technology Co-Optimization
by Kalar Rajendiran on 01-27-2026 at 6:00 am

AI, hyperscale data centers, and data-intensive workloads are driving unprecedented demands for performance, bandwidth, and energy efficiency. As the economic returns of traditional transistor scaling diminish, advanced IC packaging and heterogeneous integration have become the primary levers for system-level scaling. Chiplet-based architectures now dominate this transition, enabling modular design and process optimization but introducing dramatic increases in system and package complexity.

Package pin counts have grown from fewer than 100,000 to tens of millions in leading-edge designs, with further growth expected. This complexity far exceeds what flat design methodologies or manual approaches can manage. System Technology Co-Optimization (STCO) has therefore emerged as a necessary framework for aligning architecture, silicon technology, and packaging. Hierarchical device planning provides the structural foundation required to make STCO effective at scale.

Recently, Siemens EDA published a whitepaper on this very topic; the following is a synthesized overview. You can download the entire whitepaper from here.

Composable Systems: Reconfiguring 3D ICs for Early System Exploration

Modern systems are increasingly composed of multiple chiplets developed asynchronously and integrated using advanced 3D packaging technologies. While this composability improves flexibility and reuse, it makes early partitioning and integration decisions critical. Hierarchical device planning enables designers to assemble and de-assemble 3D IC systems early in the design process, allowing alternative partitioning, stacking, and interface strategies to be explored before physical commitments are made.

Although detailed layout information is unavailable at this stage, rapid approximate analyses enabled by hierarchical planning provide valuable insight into power integrity, signal integrity, thermal behavior, and mechanical risk. These early insights guide system exploration and prevent costly issues from becoming embedded in the design.

System Technology Co-Optimization (STCO) Begins at Package Planning

Packaging has become a primary determinant of system performance, power, cost, and reliability. Yet silicon design teams often lack early visibility into packaging constraints, leading to partitioning decisions that complicate integration. Hierarchical device planning addresses this gap by enabling fast creation of early package prototypes that support multi-domain analysis.

By generating preliminary bump maps, defining power and signal regions, and placing chiplets in 3D space, designers can evaluate packaging implications early and feed results back to silicon teams. This establishes a continuous, bidirectional feedback loop between architecture, silicon, and packaging, transforming STCO from a sequential handoff into a concurrent optimization process.

Taming Explosive Package Complexity with Hierarchical Planning

The exponential growth in package pin counts has rendered traditional flat planning approaches impractical. Managing millions of pins manually introduces unacceptable risk and inefficiency. Hierarchical device planning overcomes this challenge by decomposing complex package assemblies into structured, hierarchical elements such as chiplets, interfaces, and interconnect regions.

This hierarchical organization enables full-package connectivity tracking and verification across the entire 3D assembly. By providing structure and scalability, hierarchical planning allows designers to manage complexity without losing visibility or control.

Smart Pin Regions and Parametric Abstraction at Scale

A key innovation of hierarchical device planning is the use of parameterized pin regions to abstract connectivity. Instead of defining individual pins, designers work with regions that encapsulate pin patterns, power and ground assignments, and interface characteristics. Pins are automatically synthesized from these parameters, ensuring consistency while dramatically reducing design effort.

Figure: Connectivity in a hierarchical IC package floorplan

This parametric abstraction enables rapid iteration. Designers can adjust bump pitch, patterns, or net assignments and instantly regenerate connectivity, supporting fast design-space exploration and efficient response to changing requirements.

Figure: A design of arrayed blocks with parameterized pins
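
As a rough illustration of what “pins synthesized from parameters” means in practice, here is a minimal sketch of a parameterized pin region. It is a hypothetical data model for illustration only, not Siemens EDA’s actual tool or API; the region fields and the example net-assignment pattern are assumptions.

```python
# Hypothetical sketch of a parameterized pin region: instead of enumerating
# every bump, store the pattern parameters and synthesize pins when needed.
# Not Siemens EDA's data model -- an illustration of the abstraction only.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class PinRegion:
    origin: Tuple[float, float]          # lower-left corner, micrometers
    pitch: float                         # bump pitch, micrometers
    rows: int
    cols: int
    net_of: Callable[[int, int], str]    # net-assignment pattern per (row, col)

    def synthesize(self) -> List[Tuple[str, float, float]]:
        x0, y0 = self.origin
        return [(self.net_of(r, c), x0 + c * self.pitch, y0 + r * self.pitch)
                for r in range(self.rows) for c in range(self.cols)]

# Example: a signal region with every fourth column reserved for ground.
region = PinRegion(origin=(0.0, 0.0), pitch=40.0, rows=8, cols=16,
                   net_of=lambda r, c: "VSS" if c % 4 == 3 else f"D{r}_{c}")
pins = region.synthesize()
print(len(pins), pins[:3])

# Changing a parameter and regenerating replaces what would otherwise be a
# manual edit of thousands of individual pins.
region.pitch = 25.0
pins = region.synthesize()
```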

Cross-Domain Abstraction and Early Multi-Physics Insight

Effective STCO requires operating at the right abstraction level across electrical, thermal, mechanical, and manufacturing domains. Hierarchical device planning provides this abstraction while enabling early multi-physics analysis. Although early-stage analyses are approximate, they provide sufficient directional guidance to compare alternatives and identify high-risk design choices.

Integrated data management further supports this flow by ensuring consistency across rapidly evolving designs and preventing costly errors caused by outdated or mismatched data.

Summary

As semiconductor systems move toward increasingly heterogeneous and 3D-integrated architectures, managing complexity and cross-domain interaction becomes paramount. Hierarchical device planning enables early system assembly, scalable abstraction, and rapid multi-domain analysis, forming the foundation of effective System Technology Co-Optimization.

By enabling a shift-left design methodology and supporting informed decision-making when flexibility is highest and cost is lowest, hierarchical device planning transforms STCO into a practical, scalable engineering discipline for next-generation electronic systems.

Figure: A design completed using the hierarchical device planning methodology

You can download the entire whitepaper from here.

Also Read:

Siemens EDA Illuminates the Complexity of PCB Design

Siemens and NVIDIA Expand Partnership to Build the Industrial AI Operating System

Automotive Digital Twins Out of The Box and Real Time with PAVE360


SiFive to Power Next-Gen RISC-V AI Data Centers with NVIDIA NVLink Fusion
by Daniel Nenni on 01-26-2026 at 10:00 am

In a strategic move that could reshape the future of AI data center design, SiFive, a leading developer of RISC-V processor IP and compute subsystems, has announced plans to integrate NVIDIA’s NVLink Fusion interconnect technology into its high-performance data center platforms. This collaboration bridges the open-architecture innovation of the RISC-V ecosystem with NVIDIA’s industry-leading high-bandwidth interconnects, creating new opportunities for scalable, efficient, and customizable AI infrastructure.

At its core, the partnership is about unlocking seamless, high-speed communication between SiFive’s RISC-V CPUs and NVIDIA’s GPUs and accelerators. NVLink Fusion, NVIDIA’s rack-scale interconnect technology, enables coherent linking of CPUs, GPUs, and other accelerators with extremely high bandwidth. By adopting NVLink Fusion, SiFive’s compute platforms will be able to connect directly to NVIDIA accelerators, eliminating the traditional bottlenecks of PCIe-based CPU-to-GPU communication and enabling data center architects to build tightly coupled heterogeneous systems optimized for the demands of AI workloads.

Why This Matters

Artificial intelligence workloads, especially large language models (LLMs), recommendation engines, and real-time analytics, are rapidly outpacing conventional data center designs. These AI workloads demand not only high throughput, but also efficient data movement and power-optimized compute architectures. Traditional x86- or Arm-based CPUs paired with discrete accelerators over PCIe can struggle to deliver the low latency and high bandwidth required at scale, especially as models grow and power costs skyrocket.
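
To put the bandwidth gap in rough perspective, the figures below are approximate public numbers assumed for illustration; they are not taken from the SiFive or NVIDIA announcement.

```python
# Rough link-bandwidth comparison; approximate public figures, for illustration only.
pcie5_x16 = 32 * 16 / 8        # PCIe 5.0: 32 GT/s x 16 lanes ~= 64 GB/s per direction
nvlink_per_gpu = 900           # Hopper-generation NVLink: ~900 GB/s aggregate per GPU
print(f"PCIe 5.0 x16: ~{pcie5_x16:.0f} GB/s per direction")
print(f"NVLink per GPU: ~{nvlink_per_gpu} GB/s aggregate, roughly an order of magnitude more")
```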

SiFive’s RISC-V IP is prized for its configurability and power efficiency. Customers can tailor processor designs to specific workload requirements, tuning for performance per watt and overall system efficiency, advantages that are increasingly valuable in hyperscale environments. Integrating NVLink Fusion expands this value proposition by giving RISC-V CPUs a direct, coherent high-performance path to the acceleration layer of modern AI systems.

NVLink Fusion itself is designed to address the needs of next-generation AI “factories”: data centers that treat AI compute as a first-class workload rather than a specialized add-on. The technology offers rack-scale performance and a unified interconnect fabric that scales across hundreds of compute units, significantly improving the performance-per-watt equation for AI training and inference across distributed systems.

Strategic Implications for RISC-V

For years, RISC-V has been touted as the future of open-source compute architecture, offering an alternative to proprietary instruction sets like x86 and Arm. However, one of the obstacles RISC-V has faced in the high-performance AI space is ecosystem maturity, especially around high-speed interconnects and software support that large data center players demand.

By aligning with NVIDIA’s NVLink Fusion ecosystem, SiFive helps overcome those barriers. Now, RISC-V processors can participate as “first-class citizens” in rack-scale designs with compute-intensive accelerators, supported by the broader NVIDIA stack including CUDA-based libraries and orchestration tools. This increases RISC-V’s attractiveness for cloud providers, hyperscalers, and custom silicon designers who previously might have defaulted to x86 or Arm platforms due to ecosystem inertia.

In the announcement, SiFive President and CEO Patrick Little emphasized the shift toward co-design in AI infrastructure where open, customizable CPUs are built from the ground up alongside accelerators and interconnects. NVIDIA CEO Jensen Huang echoed this sentiment, framing the partnership as a way to bring coherent, high-bandwidth NVLink into the RISC-V world and enable flexible, scalable AI systems.

Broader Industry Context

This collaboration also signals a broader trend in the semiconductor and data center industries: a move away from one-size-fits-all hardware toward heterogeneous, domain-optimized architectures. Hyperscalers and enterprise data centers alike are investing in bespoke solutions that match compute resources to specific workload profiles, whether that’s training next-generation AI models, delivering low-latency inference, or supporting mixed-use enterprise services.

In addition, NVIDIA’s strategy with NVLink Fusion, licensing the interconnect for integration with third-party CPUs, expands its ecosystem beyond systems built entirely in-house. By bringing partners like SiFive into the fold, NVIDIA strengthens the adoption of its rack-scale architecture as a de facto standard for high-performance AI infrastructure.

Bottom Line: The integration of NVIDIA NVLink Fusion into SiFive’s RISC-V data center platforms represents a key milestone for open-architecture AI computing. It combines the flexibility and power efficiency of customizable RISC-V designs with the high-throughput, low-latency fabric needed to unify CPUs and accelerators in modern AI systems. As AI models continue to grow in complexity and scale, such innovations may redefine how data centers are architected — enabling not just faster performance, but smarter, more efficient infrastructure tailored to the real-world needs of AI.

Also Read:

Tiling Support in SiFive’s AI/ML Software Stack for RISC-V Vector-Matrix Extension

RISC-V Extensions for AI: Enhancing Performance in Machine Learning

SiFive Launches Second-Generation Intelligence Family of RISC-V Cores