
Why TSMC is Known as the Trusted Foundry

by Daniel Nenni on 12-26-2025 at 6:00 am

TSMC Ivey Fab

Taiwan Semiconductor Manufacturing Company (TSMC) is widely regarded as the world’s most trusted semiconductor foundry, a reputation built over decades through technological leadership, business model discipline, operational excellence, and reliability. In an industry where trust is as critical as transistor density, TSMC has become the backbone of the global digital economy.

First and foremost, TSMC’s pure-play foundry model is the foundation of its trustworthiness. Unlike integrated device manufacturers (IDMs) such as Intel and Samsung, which design and manufacture their own chips, TSMC does not compete with its customers. It manufactures chips exclusively for third parties and maintains a strict firewall around each customer’s designs. This neutrality reassures customers, from Apple and NVIDIA to AMD, Qualcomm, and countless startups, that their intellectual property will not be used against them. Over time, this consistency has created deep confidence across a vast ecosystem, making TSMC the default manufacturing partner for the world’s most valuable chip designers.

Second, TSMC’s technological leadership reinforces that trust. The company has consistently been first, or decisively best, to mass-produce advanced process nodes such as 7nm, 5nm, and 3nm at high yields. In semiconductor manufacturing, reliability is not just about innovation, but about delivering that innovation at scale, on schedule, and with predictable silicon. TSMC’s ability to translate cutting-edge research into stable, high-volume production has made it indispensable for customers whose product cycles depend on certainty. When companies commit billions of dollars to a chip design, they need confidence that the foundry can deliver exactly as promised, and TSMC has repeatedly proven it can.

Third, manufacturing excellence and yield consistency distinguish TSMC from competitors. Advanced chips are extraordinarily complex, and small variations can destroy profitability or product viability. TSMC’s laser focus on process control, defect reduction, and continuous improvement results in industry-leading yields. High yields mean lower costs for customers, faster ramp-ups, and fewer surprises after tape-out. This operational discipline is a major reason customers trust TSMC with their most advanced and sensitive designs.

Fourth, TSMC has built a reputation for strong intellectual property protection and confidentiality. Semiconductor designs represent years of research and billions in investment. TSMC has demonstrated, across thousands of customers, that it can securely handle highly confidential data without leaks or misuse. This trust is reinforced by TSMC’s internal culture, strict access controls, and long-standing customer relationships. In an era of increasing cyber and industrial espionage, this reliability is invaluable.

Fifth, TSMC’s scale and ecosystem integration create trust through inevitability. The company has invested hundreds of billions of dollars in fabrication plants, equipment, and talent, creating manufacturing capabilities that few others can match. Its close collaboration with equipment suppliers (such as ASML and Applied Materials), EDA vendors (Synopsys, Cadence, Siemens EDA), and IP companies (Synopsys, Arm, Analog Bits), collectively known as the Grand Alliance, allows customers to design within a mature, silicon-proven, and well-supported ecosystem. This reduces risk and shortens time-to-market, further cementing TSMC as the safest choice.

Sixth, TSMC’s long-term strategic thinking strengthens customer confidence. The company invests aggressively ahead of demand, often years before returns are guaranteed. This willingness to absorb risk ensures that capacity is available when customers need it, even during industry upcycles or shortages. During recent global chip shortages, TSMC’s capacity planning and prioritization reinforced its image as a stable, responsible industry steward.

Finally, TSMC’s global credibility and governance matter. While geopolitical risks exist, TSMC has demonstrated transparency, regulatory compliance, and cooperation with governments and customers worldwide. Its expansion into the United States, Japan, and Europe reflects a commitment to supply chain resilience and global trust.

Bottom line: TSMC is the trusted foundry not because of a single advantage, but because of a rare combination: neutrality, technological supremacy, manufacturing reliability, IP protection, scale, and long-term vision. In an industry where failure is catastrophic and trust is earned slowly, TSMC has become the gold standard and the cornerstone of modern semiconductor manufacturing.

Also Read:

TSMC’s Customized Technical Documentation Platform Enhances Customer Experience

A Brief History of TSMC Through 2025

Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium


Journey Back to 1981: David Patterson Recounts the Birth of RISC and Its Legacy in RISC-V

by Daniel Nenni on 12-25-2025 at 10:00 am

RISC V Summit 2025 David Patterson

In a warmly received keynote at the RISC-V Summit, computer architecture legend David Patterson took the audience on a captivating trip back to 1981, using scanned versions of his original overhead transparencies to recount the birth of Reduced Instruction Set Computing (RISC) at UC Berkeley.

Patterson began with humor, noting his wife’s love for time-travel movies and explaining how age allows easy travel backward. Digging through old files, he rediscovered those classic plastic slides, artifacts unfamiliar to much of the younger audience, and used them to recreate the talk he gave across campuses over four decades ago.

The computing landscape of February 1981 was starkly different: mainframes and minicomputers dominated serious work, with IBM as the undisputed leader. DEC’s VAX, a refrigerator-sized 32-bit minicomputer running at 5 MHz with a 2 KB cache, represented the pinnacle. The IBM PC had not yet launched, and Intel’s 8086 was the cutting-edge 16-bit microprocessor. Cultural markers included Ronald Reagan’s presidency, disco music, and the release of Raiders of the Lost Ark.

Against this backdrop, Complex Instruction Set Computers (CISC) reigned supreme. The prevailing philosophy held that richer, more varied instructions would close the “semantic gap” between high-level languages and hardware. Microprogramming, enabled by growing memory densities under Moore’s Law, made complex instructions seemingly inexpensive. Marketing reinforced the idea that sophistication meant smaller programs and greater reliability, while registers were dismissed as old-fashioned.

Reality proved otherwise. High-level language programming used only a small subset of instructions. Complex operations, such as the VAX’s array-indexing instruction or IBM 370’s multi-register moves, were often slower than sequences of simpler ones. Design cycles lengthened dramatically, and microcode bugs were rampant; Patterson’s 1979–1980 sabbatical at DEC exposed constant patching of VAX microcode.

These observations crystallized into RISC principles: favor simplicity unless a compelling reason exists; prioritize fast clock cycles, easy decoding, and pipelining over instruction count or program size; recognize that microcode offers no magic; and rely on advancing compiler technology.

To illustrate, Patterson, a car enthusiast, likened CISC to an over-ornamented 1950s Cadillac and RISC to a sleek, agile sports car.

The ideas gained traction with Patterson and student David Ditzel’s 1980 paper, “The Case for the Reduced Instruction Set Computer,” published alongside a rebuttal from VAX architects—sparking immediate controversy and lively RISC vs. CISC debates at conferences.

Berkeley proved the concept through graduate courses. Leveraging DARPA-funded CAD tools, a simplified instruction set, and sheer beginner’s luck, roughly a dozen students designed, laid out, fabricated, and tested RISC-I in under two years. Remarkably, the RISC-I instruction set closely resembles today’s RISC-V core—Patterson called RISC-V’s version slightly more elegant.

Porting Berkeley UNIX was straightforward, and early benchmarks showed the student-built RISC-I roughly twice as fast as the professional, multi-year VAX effort—a stunning validation.

Patterson closed by honoring the original team, including faculty Carlo Séquin and John Ousterhout, and graduate students. He shared photos from a 2015 ceremony installing a plaque for the first RISC microprocessor, where RISC-I pioneers met RISC-V leaders, creating a touching cross-generational moment.

Bottom line: Forty-five years later, the simplicity and elegance born in those Berkeley classrooms power billions of devices worldwide and live on vibrantly in the open RISC-V ecosystem.

Also Read:

Google’s Road Trip to RISC-V at Warehouse Scale: Insights from Google’s Martin Dixon

Bridging Embedded and Cloud Worlds: AWS Solutions for RISC-V Development

The RISC-V Revolution: Insights from the 2025 Summits and Andes Technology’s Pivotal Role


Assertion-First Hardware Design and Formal Verification Services

by Kalar Rajendiran on 12-25-2025 at 6:00 am

LUBIS EDA Modelling

Generative AI has transformed software development, enabling entire applications to be built in minutes. But despite similar progress in AI-generated RTL, hardware verification remains a major bottleneck. RTL can be produced quickly, yet proving its correctness is extraordinarily difficult. This has revived a long-standing but historically unattainable idea: hardware design in RTL should begin with Assertion IP, a complete set of formal properties that precisely defines the intended behavior of the design, rather than generating assertions after the fact. For decades, this approach was out of reach. Today, the landscape has shifted, making assertion-first hardware design increasingly viable.

Tobias Ludwig, CEO of LUBIS EDA, addressed this very topic at the Verification Futures Conference, recently held in Austin, Texas. His talk covered how the company is moving the industry toward this long-awaited direction.

AI Can Generate RTL But Verification Is Still the Bottleneck

AI-generated RTL may look plausible, but correctness in hardware is a hard requirement. Chips must work under every possible condition, and AI systems trained on similar datasets often share similar failure patterns. Using AI to verify AI does not eliminate risk but rather compounds it. Verification continues to dominate engineering cost and schedule because ensuring correctness requires precise, formalized intent. Assertion IP provides that precision, but historically it has been too difficult to produce at scale.

Assertion IP: What Hardware Design Should Have Started With

Assertion IP captures design intent in its most accurate form. It describes how the design must behave across states, cycles, inputs, and transitions. In an ideal process, assertions would come first, serving as the specification against which RTL is implemented and proven. This would eliminate ambiguity and allow mathematical verification throughout development.
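As a toy illustration of the assertion-first idea (hypothetical code, not LUBIS EDA's actual flow), the sketch below writes two properties for a small wrap-around counter before implementing it, then checks every transition exhaustively, a miniature stand-in for what a formal engine does symbolically over far larger state spaces:

```python
# Assertion-first sketch: properties are written first, then a candidate
# implementation is checked against them. All names are hypothetical.

MAX = 7  # counter counts 0..MAX, then wraps

def prop_wraps_at_max(prev_state, state):
    # After reaching MAX, the counter must return to 0.
    return state == 0 if prev_state == MAX else True

def prop_in_range(prev_state, state):
    # The counter never leaves [0, MAX].
    return 0 <= state <= MAX

PROPERTIES = [prop_wraps_at_max, prop_in_range]

def counter_step(state):
    # Candidate "RTL" implementation, written after the properties.
    return (state + 1) % (MAX + 1)

def check_exhaustive(step, properties):
    # The state space is tiny, so every transition can be enumerated.
    for s in range(MAX + 1):
        nxt = step(s)
        for prop in properties:
            if not prop(s, nxt):
                return False, (prop.__name__, s, nxt)
    return True, None

ok, failure = check_exhaustive(counter_step, PROPERTIES)
print(ok)  # True: every transition satisfies every property
```

A real formal engine proves such properties symbolically rather than by enumeration, but the ordering, spec first, implementation second, is the point.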

Why Hardware Has Not Started From Assertions

Creating a complete assertion set manually was impractical. Writing hundreds or thousands of assertions by hand was slow and error-prone. High-level modeling languages were inconsistent, lacked structure, and were difficult to analyze. Property generation tools did not exist. And formal verification engines lacked the computational strength to handle the depth and complexity of real IP blocks.

While waiting for tools capable of generating Assertion IP automatically, the industry’s culture centered on “RTL-first,” with assertions treated as an afterthought rather than the foundation.

What Has Changed Now

The situation has changed dramatically. High-level model analysis engines can now extract states, transitions, invariants, and dataflow from C++/SystemC models. Automated property generation tools can transform these models into complete assertion suites that capture timing behavior and correctness requirements. Formal verification engines have grown powerful enough to handle deep pipelines, cryptographic algorithms, and large state spaces. AI assistance now makes creating structured models easier, allowing engineers to translate natural-language intent into analyzable code. Together, these breakthroughs make assertion-first design far more practical than ever before.

Where LUBIS EDA Fits In: Opening the Window to Assertion-First Design

LUBIS EDA is turning this renewed possibility into a practical methodology. Its technology automatically generates comprehensive Assertion IP from high-level executable models, bridging the gap between abstract model and RTL implementation. Through refinement techniques that align abstract models with cycle-accurate RTL, LUBIS ensures that properties reflect bit-level reality.

Alongside the company’s formal verification services, LUBIS EDA provides training that helps teams adopt assertion-driven workflows and achieve formal sign-off on complex blocks. As AI accelerates RTL generation, LUBIS EDA’s model-first, property-driven approach becomes essential for ensuring correctness and preventing hidden bugs.

Summary

For the first time, hardware teams can move toward a design process where intent is explicit, properties are complete and RTL correctness is provable from the start. This paradigm is now within reach thanks to advances in modeling, property generation, formal tools, and AI support. LUBIS EDA is helping the industry make this transition, prying open the door to a future where hardware design can begin with formal assertion IP.

To learn more, visit www.lubis-eda.com

Also Read:

Assertion IP (AIP) for Improved Design Verification

LUBIS EDA at the 2025 Design Automation Conference #62DAC

Automating Formal Verification


TSMC’s Customized Technical Documentation Platform Enhances Customer Experience

by Daniel Nenni on 12-24-2025 at 10:00 am

TSMC Online 2025

Taiwan Semiconductor Manufacturing Company, the world’s leading dedicated semiconductor foundry, has long prioritized customer-centric innovation to maintain its competitive edge in a rapidly evolving industry. TSMC is known as “The Trusted Foundry” for this reason.

Amid increasing complexity in chip design and manufacturing, driven by advanced nodes like 3nm and beyond, TSMC recognized the need for more efficient digital tools to support its global clientele. In response, the company upgraded its flagship customer self-service portal, TSMC-Online™, transforming it into a sophisticated Customized Technical Documentation Platform that significantly enhances user experience and operational efficiency.

Launched with upgrades beginning in April 2021, this platform embodies TSMC’s philosophy of being “Everyone’s Foundry.” The core objective was to address challenges posed by escalating technology and design intricacies, where customers from fabless designers to major tech giants require seamless access to vast amounts of technical data. Previously, navigating extensive documentation and production updates often demanded significant time and occasional support from TSMC’s customer service teams. The revamped TSMC-Online™ introduces a customer-oriented architecture, creating a truly customized service environment that empowers users to manage information as if operating their own fabrication facility.

Key enhancements revolve around three innovative methods: a standard operation interface, a personalized workspace, and an intelligent guidance service.

The standard operation interface provides a unified, intuitive layout that simplifies navigation across diverse functions, reducing learning curves and minimizing errors. Users benefit from consistent workflows, whether querying process design kits (PDKs), foundation IPs, or real-time wafer status updates.

The personalized workspace stands out as a hallmark of customization. Customers can tailor their dashboards with widgets aligned to specific roles and project stages—such as design verification, tape-out preparation, or manufacturing monitoring. For instance, engineers focused on advanced packaging like 3DFabric™ can prioritize relevant tools, while those in automotive applications highlight AEC-Q100 qualified resources. This flexibility accommodates varied stakeholder needs within a single organization, streamlining collaboration and boosting productivity.

Complementing these is the intelligent guidance service, which leverages smart features like contextual tutorials, animated walkthroughs, and real-time assistance modules. Previously, over half of users reportedly sought external help for document-related tasks due to platform complexity. Now, embedded animations and AI-driven prompts enable self-guided exploration, allowing instant access to tutorials without waiting for support, crucial across time zones.

Security remains paramount, with robust confidential information protection mechanisms ensuring sensitive data, including proprietary designs and production metrics, stays secure. Customers gain comprehensive visibility into the entire lifecycle, from design enablement through wafer manufacturing to shipment, fostering trust and operational transparency.

The impact has been profound. By January 2023, TSMC-Online™ averaged over 3,000 daily logins, reflecting high adoption and reliance. This digital collaboration tool accelerates product success by shortening time-to-market, reducing dependency on manual interventions, and enabling innovative outcomes in high-growth sectors like mobile, high-performance computing, AI, automotive electronics, and IoT.

TSMC’s commitment extends beyond this platform; it integrates with broader initiatives like the Open Innovation Platform® (OIP), which includes ecosystems for EDA tools, IPs, and cloud-based design environments. However, the Customized Technical Documentation Platform within TSMC-Online™ directly tackles day-to-day pain points, exemplifying how digital transformation can elevate service quality in semiconductor manufacturing.

Bottom Line: In an era where speed and precision define success, TSMC’s platform not only optimizes customer experience but also strengthens partnerships. By continuing to refine these tools, TSMC reinforces its role as a trusted enabler of global technological advancement, ensuring customers can focus on innovation while the foundry handles the complexities.

Also Read:

Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium

TSMC Formally Sues Ex-SVP Over Alleged Transfer of Trade Secrets to Intel

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival


The 10 Practical Steps to Model and Design a Complex SoC: Insights from Aion Silicon

by Daniel Nenni on 12-24-2025 at 6:00 am

10 Practical SoC Steps AION Silicon

In the fast-evolving world of semiconductor design, creating a complex System-on-Chip (SoC) requires meticulous planning to ensure performance, power efficiency, and cost-effectiveness. Aion Silicon’s white paper, authored by Piyush Singh, outlines a streamlined methodology that leverages advanced modeling to bridge the gap from abstract concepts to silicon-ready specifications. Drawing on proprietary tools built atop EDA solutions from Arm and Synopsys, the paper emphasizes early architectural validation to mitigate risks and accelerate time-to-market. This approach is particularly vital for domains like AI, automotive, and high-performance computing, where SoCs integrate diverse components such as CPUs, GPUs, DSPs, and custom IP.

The white paper begins with an overview of SoC modeling’s role in preempting design flaws. Before placing any transistors, accurate models assess key metrics like bandwidth, latency, and Network-on-Chip (NoC) configuration. Aion Silicon’s custom modeling flow enhances vendor tools with extensive tweakable settings, enabling rapid iterations. Unlike traditional spreadsheet-based methods that take months, this flow delivers insights in days, allowing quick evaluation of variants to match customer use cases.

A core section explains how an ASIC evolves from an abstract view, depicted as application tasks on initiators (hardware blocks generating traffic) and targets (memory receivers), to a detailed specification. Central to this is the interconnect fabric, which connects compute and memory elements. Its design remains fluid until floor planning, influenced by timing constraints and die layout. Modeling provides a starting point, refining the NoC iteratively.

The paper highlights modeling’s advantages: estimating architecture, playing “what-if” scenarios, and enabling early software development. It categorizes modeling types, from dataflow (MATLAB / Python / C++ algorithms without timing) to loosely timed (for software prototyping) and approximately timed/fast timed models (ideal for exploration with transaction-level tracing). Cycle-accurate RTL simulations, while precise, are too slow for initial analysis.
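The untimed dataflow category can be sketched in a few lines: each algorithm stage is a generator, and data simply flows from stage to stage with no notion of cycles or latency (a hypothetical toy, not Aion Silicon's flow):

```python
# Untimed dataflow sketch: stages are chained generators; only functional
# behavior is modeled, with no clocks or latency. Stage names are
# hypothetical placeholders.

def source(n):
    # Produce raw samples (stand-in for a sensor or input stream).
    for i in range(n):
        yield i

def decode(stream):
    # First algorithm stage: a placeholder transform.
    for x in stream:
        yield x * 2

def analyze(stream):
    # Second stage: another placeholder transform.
    for x in stream:
        yield x + 1

pipeline = analyze(decode(source(5)))
results = list(pipeline)
print(results)  # [1, 3, 5, 7, 9]
```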

Performance exploration is deemed essential because IP blocks, validated in isolation, face real-world constraints when integrated. Other blocks’ traffic patterns impact interconnect and memory, necessitating simulations to size subsystems appropriately.

The heart of the white paper is the “10 Steps to Architecture Success,” a phased approach breaking down complexity. The first four steps are analytical, often spreadsheet-based:

  1. System I/O Analysis: Examine input/output dataflows, including burstiness, latency, timing, and formatting, to determine buffer needs.
  2. Processing Analysis: Decompose algorithms into sub-tasks, grouping functionalities (e.g., MPEG decode and image analysis).
  3. IP Analysis: Identify third-party IP blocks, incorporating datasheet details on memory and compute requirements for accurate modeling.
  4. Data Interchange Analysis: Decide data exchange methods—on-chip SRAM/FIFO for small data or external DDR for large—based on size and access frequency.
The remaining steps shift to simulation:
  5. Workflow Model (Transactional): Create software representations of algorithm stages as simulation objects (e.g., green boxes in diagrams) with latency/processing settings, connected by channels for sequencing.
  6. Simulate to Verify Processing: Run simulations to visually confirm algorithm sequencing using modeling tools’ visualization features.
  7. Quantify Data Interchange: Model hardware with Virtual Processor Units (VPUs) and local memory, defining communication domains and verifying configurations.
  8. Data Physical Exchange: Remodel memory as external via a common controller, enhancing connectivity accuracy.
  9. Implement Interconnect: Add NoC fabric, replacing direct connections, and evaluate timing/performance impacts, iterating as needed.
  10. Optimize Performance: Adjust settings to identify bottlenecks, reduce latency, and improve throughput through quick simulations (minutes to hours).
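To make concrete the kind of question these simulation steps answer, here is a toy transaction-level model (hypothetical code, not Aion Silicon's tooling): two initiators issue reads to one shared memory target, and because the target serves one transaction at a time, the second initiator sees queuing delay on top of the service latency.

```python
import heapq

# Toy transaction-level sketch: initiators generate traffic toward a
# single memory target with a fixed service latency; contention shows
# up as extra queuing latency. All parameters are illustrative.

MEM_LATENCY = 10  # time units per memory access

def simulate(requests):
    """requests: iterable of (issue_time, initiator_name) tuples."""
    pending = list(requests)
    heapq.heapify(pending)       # process in issue-time order
    mem_free_at = 0              # when the shared target is next available
    completions = []
    while pending:
        issue, who = heapq.heappop(pending)
        start = max(issue, mem_free_at)   # wait if the target is busy
        done = start + MEM_LATENCY
        mem_free_at = done
        completions.append((who, issue, done, done - issue))
    return completions

# Both initiators issue at t=0: the second one sees queuing latency.
for who, issue, done, lat in simulate([(0, "cpu"), (0, "dsp")]):
    print(f"{who}: issued t={issue}, done t={done}, latency={lat}")
```

Real exploration tools model arbitration, burstiness, and NoC topology, but the principle is the same: size subsystems by observing contention under realistic traffic.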

These steps progressively refine models, eliminating dead ends and focusing resources logically. The paper concludes with Aion Silicon’s profile: founded in 2002, it offers end-to-end ASIC/SoC services across global centers, emphasizing first-time-right silicon.

Bottom line: This methodology underscores modeling’s transformative power in SoC design, reducing project risks and fostering innovation. By integrating custom flows with established EDA tools, Aion Silicon empowers designers to deliver optimized chips efficiently, proving that a structured path from abstraction to reality is key to semiconductor success.

See more AION Silicon Whitepapers here.

Also Read:

Live Webinar: Considerations When Architecting Your Next SoC: NoC with Arteris and Aion Silicon

Architecting Your Next SoC: Join the Live Discussion on Tradeoffs, IP, and Ecosystem Realities

The Sondrel transformation to Aion Silicon!


PDF Solutions’ AI-Driven Collaboration & Smarter Decisions

by Kalar Rajendiran on 12-23-2025 at 10:00 am

11 Commandments of AI Application Adoption

When most people hear the term PDF, they immediately think of a PDF file, a universal, platform-independent way to share electronic documents.

There is, however, another PDF that many outside the semiconductor industry may not be familiar with. And this PDF actually predates the PDF file format. It is short for PDF Solutions, a company whose software platforms are widely used across the semiconductor ecosystem, from design through manufacturing and test.

While the company does not formally expand the initials “PDF,” they can be logically interpreted as Process, Design, and Fabrication, reflecting its long-standing mission to connect design intent with manufacturing reality using silicon data. As Qualcomm’s Mike Campbell has described them, the software tools offered by PDF Solutions have become “the industry’s tools,” underscoring how deeply embedded they are across leading semiconductor companies.

PDF Solutions held its annual User Conference earlier in December, and the following article provides a synthesis of the key themes, technologies, and customer perspectives presented at the event.

The industry’s new reality

Explosive demand for AI, data center, and edge workloads is driving aggressive investment in advanced nodes, chiplets, and 3D integration, pushing semiconductor revenues toward the low-trillion-dollar range before 2030. At the same time, the move to sub-2 nm processes, multi-die systems, and globally distributed manufacturing has shifted the industry’s core challenge from linear scaling to systemic complexity.

Yield, cost, and cycle time are no longer determined within a single fab or toolset. They depend on tightly coupled decisions across design, front-end manufacturing, advanced packaging, final test, and a supply chain that spans continents. More fabs are coming online, more test insertions are required per device, and more data is being generated at every step. Yet only a small fraction of that data is actively analyzed using conventional approaches.

Traditional economies of scale are increasingly difficult to realize in this environment. Operations are fragmented across sites and companies, experienced talent is constrained, and data lives in incompatible schemas and tools that make holistic analysis slow and expert-dependent.

Why AI-driven collaboration is becoming essential

Across the industry, AI is now applied to yield optimization, predictive maintenance, planning, and logistics. However, as discussed at the User Conference, its impact remains limited when data is locked in silos or when AI initiatives stall at the proof-of-concept stage rather than becoming operational systems.

In semiconductor manufacturing, a disproportionate amount of data-science effort is still consumed by data preparation rather than modeling or decision-making. Many AI projects fail not because algorithms are inadequate, but because they lack robust data foundations, governance, and deployment pathways.

Source: Adapted from Michael Campbell’s presentation at the 2025 PDF Solutions Users Conference.

From John Kibarian’s perspective, the next major wave of value, potentially in the hundreds of billions of dollars annually at an industry level, will come from AI-driven collaboration across the semiconductor ecosystem, not isolated point optimizations. Realizing that value requires secure, standards-based connectivity, AI-ready data platforms, and workflows that link design, manufacturing, test, and supply-chain decisions into a continuous, data-backed loop.

PDF Solutions’ AI strategy

The strategy rests on three reinforcing pillars: enterprise-grade ModelOps, deep data integration with semiconductor-specific semantics, and a secure, LLM-based agentic workflow platform designed for high-stakes manufacturing environments. The goal is to deliver not only actionable insights, but prescriptive guidance while de-risking AI adoption at production scale.

Source: PDF Solutions 2025

Enterprise ModelOps: closing the last mile

At the core is Exensio AI / Studio AI, an enterprise ModelOps layer designed to train, deploy, and manage thousands of models using the massive legacy datasets already resident in Exensio. The focus is on closing the “last mile” between analytics and sustained production deployment.

Roadmap capabilities discussed include reusable analytics workflows, bring-your-own-model support, a secure enterprise model registry, and full traceability between training data, models, and outcomes. This approach addresses long-standing challenges around governance, reproducibility, and operational trust.
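A minimal sketch of the traceability idea, assuming a hypothetical registry class (this is not Exensio's actual API): each registered model records a hash of its training data alongside its metrics, so a production outcome can be traced back to both.

```python
import hashlib
import json
import time

# Hypothetical ModelOps registry sketch: ties each model version to a
# content hash of its training data and its evaluation metrics, the raw
# ingredients of governance and reproducibility.

class ModelRegistry:
    def __init__(self):
        self._entries = {}

    def register(self, name, version, training_data, metrics):
        # Canonical JSON so the same data always hashes identically.
        data_hash = hashlib.sha256(
            json.dumps(training_data, sort_keys=True).encode()
        ).hexdigest()
        self._entries[(name, version)] = {
            "data_hash": data_hash,
            "metrics": metrics,
            "registered_at": time.time(),
        }
        return data_hash

    def trace(self, name, version):
        # Retrieve lineage for audit/governance queries.
        return self._entries[(name, version)]

reg = ModelRegistry()
h = reg.register("yield_predictor", "1.0",
                 training_data=[{"wafer": "W1", "yield": 0.93}],
                 metrics={"r2": 0.91})
print(reg.trace("yield_predictor", "1.0")["data_hash"] == h)  # True
```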

Data integration and semantic understanding

A second pillar is breaking down silos between yield, design, and equipment data to accelerate root-cause analysis. Conference discussions highlighted persistent gaps, including limited integration with design-diagnosis ecosystems, incomplete equipment telemetry, and AI tools that force engineers to context-switch.

To address this, PDF Solutions is integrating with partners such as Siemens Tessent YieldInsight, combining design-for-test diagnosis data and layout context with Exensio’s manufacturing and test datasets. On top of this, the company is building a semiconductor semantic knowledge graph layer spanning lots, wafers, dies, packages, test structures, and yield metrics.

This pragmatic, evolving semantic layer allows analytics and language models to interpret data in context, selectively incorporating design information where it materially improves diagnosis and decision-making.
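A toy sketch of such a semantic layer (hypothetical schema, not PDF Solutions' implementation): containment edges link lots, wafers, and dies, so a query can walk from a failing die back to its lot for root-cause context.

```python
# Minimal knowledge-graph sketch: entities are (type, id) nodes and the
# edge map records containment. All identifiers are made up.

edges = {
    ("lot", "LOT42"): [("wafer", "W7"), ("wafer", "W8")],
    ("wafer", "W7"): [("die", "D3"), ("die", "D4")],
    ("wafer", "W8"): [("die", "D9")],
}

def parent_of(node):
    # Invert containment edges to walk upward in the hierarchy.
    for parent, children in edges.items():
        if node in children:
            return parent
    return None

def lineage(node):
    # Return the chain die -> wafer -> lot for a given entity.
    chain = [node]
    while (p := parent_of(chain[-1])) is not None:
        chain.append(p)
    return chain

print(lineage(("die", "D9")))
# [('die', 'D9'), ('wafer', 'W8'), ('lot', 'LOT42')]
```

A production semantic layer adds packages, test structures, and yield metrics as further node types, which is what lets analytics and language models interpret a query like "all dies on wafers from this lot" in context.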

Agentic LLM infrastructure with governance by design

The third pillar is an LLM-driven, agentic workflow platform designed for secure, high-stakes environments. PDF Solutions is implementing on-premise and air-gapped LLM infrastructure so sensitive IP and yield data remain behind the customer’s firewall.

Built on the Model Context Protocol, Exensio is evolving into an engineer-directed agentic system, assisting with multi-step workflows, from ingesting raw logs to generating diagnostics and guiding engineers toward likely root causes. A synthetic validation engine stress-tests these workflows before production deployment.

Customer value in practice

A customer deep dive illustrated how PDF Solutions’ strategy translates into value during yield ramp and failure analysis. In modern GPUs and complex SoCs, scan testing generates rich defect information, but scan diagnosis, layout review, and volume data analysis have traditionally lived in fragmented tools.

By integrating Siemens Tessent Diagnosis and YieldInsight directly into Exensio MA, PDF Solutions provides a single environment for analytics, visualization, machine learning, and layout-aware investigation.

From New Product Introduction (NPI) to High Volume Manufacturing (HVM)

This unified platform turns what was historically an NPI-focused, expert-driven flow into an HVM-ready capability that scales across products, sites, and teams. Engineers gain faster access to diagnosis results and layout context, increasing the likelihood of pinpointing failure locations and shortening the path to corrective action.

Because it is built on Exensio’s scalable data and AI infrastructure, the solution can be deployed across on-premise and cloud environments and extended with agentic capabilities as customers mature. This enables applications such as automated triage, pattern-based excursion detection, and conversational copilots to be grounded in governed data.

Summary

Across John Kibarian’s industry framing, the company’s platform roadmap, and the Siemens use-case deep dive, a consistent theme emerged. The next era of semiconductor manufacturing will be won by companies that treat data, AI, and collaboration as a unified operating system, not a collection of disconnected tools.

PDF Solutions’ mission is to provide a backbone that is semantic, agentic, and secure. With this, customers can turn the growing avalanche of manufacturing and design data into smarter, faster, and more collaborative decisions at scale.

The content of the 2025 PDF Solutions Users Conference can be found here:

https://www.pdf.com/company/pdf-solutions-users-conference/

In January, look out for a post featuring excerpts from a PDF Solutions Conference dinner chat between John Kibarian, CEO of PDF Solutions, and Tom Caufield, Executive Chairman of GlobalFoundries.

Also Read:

PDF Solutions Charts a Course for the Future at Its User Conference and Analyst Day

PDF Solutions Calls for a Revolution in Semiconductor Collaboration at SEMICON West

PDF Solutions Adds Security and Scalability to Manufacturing and Test


Bringing Low-Frequency Noise into Focus

Bringing Low-Frequency Noise into Focus
by Admin on 12-23-2025 at 6:00 am

1. Primarius offers a one-stop RTN solution

Key takeaways

  • Primarius’ wafer-level low-frequency noise characterization solution makes it practical to acquire the high-quality, reproducible noise data that advanced nodes demand.
  • The Primarius 981X family raises the bar for low-frequency noise measurement metrology with its unique advantages demonstrated below, along with recent innovative solutions such as the 9812HF and 9812AC.
  • Process engineers should use low-frequency noise as a key figure of merit for process quality monitoring and improvement during technology development. Design teams also require accurate low-frequency noise models to characterize devices and enable effective design optimization.

Noise is any unwanted signal superimposed on an ideal one. Low-frequency noise is dominated by 1/f (flicker) noise and random telegraph noise (RTN), both directly linked to crystalline defects in semiconductor materials. As semiconductor devices continue shrinking, even a single trap can cause measurable current fluctuations. Consequently, this noise is emerging as a latent failure mechanism that threatens system performance.
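As a toy illustration of single-trap RTN (a minimal sketch, not Primarius’ methodology), a two-state Markov chain reproduces the characteristic two-level current trace: the trap captures and emits a carrier with fixed per-step probabilities, toggling the current between two levels.

```python
import random

def simulate_rtn(n_steps, p_capture, p_emit, seed=0):
    """Simulate a two-level random telegraph noise (RTN) trace.

    A single trap toggles the (normalized) drain current between a
    'high' (trap empty) and 'low' (trap occupied) level with fixed
    per-step transition probabilities: a discrete-time Markov chain.
    The probabilities here are illustrative, not fitted to any device.
    """
    rng = random.Random(seed)
    occupied = False
    trace = []
    for _ in range(n_steps):
        if occupied:
            if rng.random() < p_emit:
                occupied = False
        else:
            if rng.random() < p_capture:
                occupied = True
        trace.append(0.0 if occupied else 1.0)  # normalized current level
    return trace

trace = simulate_rtn(100_000, p_capture=0.02, p_emit=0.05)
# Long-run fraction of time spent in the high state approaches
# p_emit / (p_capture + p_emit) for this two-state chain.
high_fraction = sum(trace) / len(trace)
```

Averaging over an ensemble of such traps with a broad distribution of capture/emission times is one standard route from individual RTN to the aggregate 1/f spectrum.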

Since the 28 nm technology node, low-frequency noise models have become a standard component of PDKs. Accurate RTN modeling not only supports circuit design and process optimization but also opens new avenues for statistical 1/f noise modeling and circuit simulation. High quality noise data is therefore a key modeling requirement.

Primarius offers a one-stop RTN solution
Practical challenges vs. technical breakthroughs

Accurate low-frequency noise characterization demands more than a sensitive amplifier.

You need:

  • An instrument with ultra-low background noise.
  • Wide impedance matching to cover different types of DUTs.
  • High dynamic range to capture tiny fluctuations.
  • The ability to report both time-domain RTN signatures and frequency-domain PSDs with traceable statistics.
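The last requirement above, reporting frequency-domain PSDs from time-domain traces, can be sketched with a minimal periodogram (illustrative stdlib Python only; real instruments use FFTs plus windowing and averaging, e.g. Welch's method):

```python
import cmath
import math

def periodogram(x, fs):
    """One-sided periodogram PSD estimate of a real signal x sampled at fs.

    Direct DFT, O(N^2): fine for short illustrative traces, far too slow
    for production noise testing.
    """
    n = len(x)
    freqs, psd = [], []
    for k in range(n // 2 + 1):
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        p = abs(X) ** 2 / (fs * n)
        if 0 < k < n // 2:  # fold negative frequencies into the one-sided PSD
            p *= 2
        freqs.append(k * fs / n)
        psd.append(p)
    return freqs, psd

# A pure tone placed exactly on DFT bin 32 (f0 * n / fs = 125 * 256 / 1000)
fs, n, f0 = 1000.0, 256, 125.0
x = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
freqs, psd = periodogram(x, fs)
peak_bin = max(range(len(psd)), key=psd.__getitem__)  # expect bin 32
```

For RTN work the same trace feeds both views: the raw samples carry the two-level signatures, while the PSD exposes the Lorentzian and 1/f components.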

One practical breakthrough for noise test systems is the automatic, signal-aware switching between AC and DC coupling. With Primarius’ advanced low-frequency noise filtering and amplification technologies, the 981X Series Noise Testing System achieves high speed, high precision, and stable data acquisition. This core technology elevates noise testing to a new performance tier.

981X series supports automatic AC/DC coupling

9812DX: Gold standard

The Primarius 981X series Noise Testing System has been the gold standard for low-frequency noise testing in the semiconductor industry for decades. It sets new records in measurement speed, system resolution, and coverage of flicker noise and RTN measurement requirements. The 9812DX, successor to the 9812D, has been adopted by leading foundries as the new gold-standard tool for low-frequency noise testing. The 9812DX stands out for its rare ability to measure both high-impedance and low-impedance devices with consistent accuracy.

Notable 9812DX highlights:

  • Wide voltage and current range: 200 V, 200 mA
  • Broad impedance matching range: 3 Ω to 30 MΩ
  • Broad device coverage: MOSFETs, BJTs, FinFETs, LDMOS, JFETs, HBTs, diodes, resistors, etc.
  • High resolution &amp; bandwidth: 10e-27 A²/Hz @ 20 kHz
  • Ready for advanced process nodes: 7/5/3 nm and beyond

9812HF: VHF reach

Launched as the high bandwidth extension of the 981X family, the 9812HF is designed to bridge traditional low-frequency noise analysis and higher frequency noise concerns. This extension matters for applications where noise mechanisms span large frequency ranges.

Notable 9812HF highlights:

  • VHF bandwidth: Extends low-frequency noise measurement into the 100 MHz range so designers can observe how low-frequency mechanisms interact with front-end up-conversion and wideband circuits.
  • Maintains series sensitivity: Keeps the same low-current sensitivity and wide impedance coverage as the 9812DX while extending frequency reach, so you don’t trade sensitivity for bandwidth.
  • Application fit: Meets growing demands in satellite communications, automotive electronics, and other advanced applications requiring wideband noise analysis.


9812AC: Noise under realistic dynamic drive

Historically, most noise research and device qualification have focused exclusively on DC bias conditions. Yet, recent studies reveal that RTN under AC bias exhibits statistical properties distinct from DC-based measurements. While DC-based analysis provides a safe and conservative reliability estimate, it may unnecessarily constrain the circuit design space for advanced CMOS technologies.

To address this industry gap, Primarius launched the commercial-grade 9812AC test system in 2023. The 9812AC enables designers to measure noise under dynamic operating conditions, delivering a more comprehensive understanding of device behavior. This insight is crucial for making smarter design and verification trade-offs.

Noise data from 9812AC: behaviors not observable under DC bias
M9800: Industry parallel system

To meet wafer-level production demands for higher throughput and lower cost, the M9800 system has become the preferred choice for large-scale testing. Compared with conventional solutions, the M9800 increases overall test efficiency by 2-4×, accelerating process development for advanced technology nodes.

M9800 system
9812 & FS-Pro: Integration to speed up noise measurement

FS-Pro, Primarius’ all-in-one semiconductor parameter analyzer, is another flagship product in the portfolio. Compared with a traditional IV meter or SMU, FS-Pro’s compact size and light weight (under 8 kg) make it easy to integrate with the 9812DX as a single system. While maintaining measurement accuracy and resolution, FS-Pro significantly speeds up DC measurement, enabling more efficient overall measurements with the 9812DX.

FS-Pro delivering high accuracy and wide dynamic range for device characterization
Summary

Low-frequency noise has evolved from a niche laboratory characterization to a critical process and system-level metric in silicon development. Great low-frequency noise metrology can shorten the feedback loop between process engineers, device physicists, and circuit designers: it turns poorly quantified risk into actionable defect localization, process optimization, and realistic design margins.

The Primarius 981X family gives teams a practical, wafer-scale toolkit to measure what matters—across time, frequency and operating conditions—so they can design and qualify chips with less guesswork and more physics-backed evidence.

Also Read:

Primarius Technologies at the 2024 Design Automation Conference

Free Webinar on SPICE Simulation

Low Frequency Noise Challenges IC Designs


A Brief History of TSMC Through 2025

A Brief History of TSMC Through 2025
by Daniel Nenni on 12-22-2025 at 10:00 am

About TSMC 2025

Taiwan Semiconductor Manufacturing Company, the world’s largest dedicated semiconductor foundry, has transformed from a modest startup into a global technology powerhouse. Founded on February 21, 1987, by Morris Chang, a veteran of Texas Instruments, TSMC pioneered the “pure-play” foundry model. This innovative approach separated chip design from manufacturing, allowing fabless companies to outsource production without competing directly with integrated device manufacturers (IDMs).

Historically, IDMs dominated the industry before the 1980s rise of specialization. Companies like Intel pioneered this model, optimizing processes for proprietary designs. Major IDMs today include Intel, Samsung Electronics, Texas Instruments, Infineon, STMicroelectronics, and Renesas.

Chang, recruited by Taiwan’s government to head the Industrial Technology Research Institute (ITRI), envisioned TSMC as a neutral partner. Initial capital came from the Taiwanese government (48%), Philips (28%, with technology transfer), and private investors. Located in Hsinchu Science Park, TSMC’s early focus was on mature processes like 1-micron CMOS, serving fabless startups.

The 1990s brought rapid growth. TSMC went public on the Taiwan Stock Exchange in 1994 and listed ADRs on the NYSE in 1997, the first Taiwanese company to do so. Technological milestones included 0.5-micron in 1994, 0.35-micron copper interconnects, and overseas ventures like WaferTech in the U.S. (1996). Despite challenges like the 1997 Asian financial crisis and 1999 earthquake, revenue soared, reaching over 50% foundry market share by 2000.

Entering the 2000s, TSMC advanced to 0.13-micron (2002) and 90nm (2004), powering the PC and mobile booms. The Open Innovation Platform (OIP) launched in 2008 fostered ecosystem partnerships. Leadership saw Chang retire briefly (2005-2009) before returning amid the global financial crisis. By 2010, revenue hit $13.9 billion, with nodes like 28nm ramping.

The 2010s marked dominance in advanced nodes. Co-CEOs Mark Liu and C.C. Wei took over in 2013, with Chang as chairman until 2018. Breakthroughs included 16nm FinFET (2015), 10nm (2017), 7nm with EUV (2019), and capturing Apple’s A-series chips from Samsung. Revenue reached $45 billion by 2020, market share ~55%. Geopolitical tensions emerged, including U.S. sanctions halting Huawei shipments.

The 2020s accelerated amid COVID shortages and AI surge. 5nm (2020), 4nm (2021), and 3nm (2022) debuted, powering Apple’s M-series and Nvidia’s GPUs. In 2025, 2nm (N2) entered mass production late in the year, offering 10-15% speed gains or 25-30% power savings over 3nm, using gate-all-around transistors. A16 (1.6nm) is slated for 2026/2027 with backside power delivery.

Global expansion diversified risks. Arizona’s Fab 21 began N4 production in Q4 2024, with yields matching Taiwan. By 2025, investment swelled to $165 billion for six fabs, two packaging sites, and R&D—third fab groundbreaking in April for N2/A16. Japan’s JASM (Kumamoto) started mature nodes in 2024, expanding to advanced. Germany’s ESMC (Dresden) progresses for automotive/specialty.

Financially, TSMC is thriving on AI demand. Q3 2025 revenue grew 30% YoY, with 72% foundry share. TTM revenue ~$88 billion, market cap ~$1.5 trillion. Advanced nodes (<7nm) drive ~74% wafer revenue.

Bottom Line: TSMC’s “trusted foundry” status stems from IP protection, neutrality, and innovation. From 1987’s vision to 2025’s AI linchpin, it powers global tech while navigating geopolitics. With C.C. Wei as CEO, TSMC targets net-zero by 2050 and continued leadership in the angstrom era.

Also Read:

Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

TSMC’s Customized Technical Documentation Platform Enhances Customer Experience


Quantum Advantage is About the Algorithm, not the Computer

Quantum Advantage is About the Algorithm, not the Computer
by Bernard Murphy on 12-22-2025 at 6:00 am

Conductor orchestrating waves inside a quantum computer

Of course there is a minimum requirement for the computer: enough qubits, fault-tolerant computing, support for hundreds of millions or more computations before a reset, that sort of thing. We’re still on that journey, but even after we reach this goal it is important to have a sense of what delivers advantage, since the computer itself simply enables quantum calculations. My research suggests a foundational principle underlying current algorithms, enabling a range of high-value operations, though it is unclear to me how these would generalize more broadly. This is my brief attempt to summarize how quantum computing benefits are determined by algorithms, and what these algorithms must look like, at least in the near future.

What do we mean by advantage?

In the geeky world of computational complexity, there is a very clear separation in performance between two main classes of algorithms: those with polynomial complexity, whose running times grow roughly as some fixed power of the problem size, and those with exponential complexity, whose running times grow as some number raised to the power of the problem size. Algorithms that scale polynomially are practical in principle on classical computers; problems that scale exponentially are not. (I have over-simplified, but this is good enough for my argument.)

If quantum algorithms can demonstrate a polynomial advantage over classical computers that is of course good, though not necessarily unicorn-exciting for investors or customers. Grover’s algorithm, searching for some specified type of element in an unsorted list, is of this type, showing a quadratic advantage over classical algorithms. Grover has the advantage that the limit on best performance for a classical algorithm has been formally proven, so the quantum method will always be superior. However, best performance for almost all classical algorithms is merely the best we have discovered so far, as has been embarrassingly demonstrated on several occasions when claims of quantum advantage were upstaged by announcements of new classical algorithms with comparable performance.
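To make the quadratic separation concrete, here is a back-of-the-envelope sketch (plain Python arithmetic, not tied to any quantum SDK) comparing expected query counts for classical random search against Grover’s roughly (π/4)·√N oracle calls:

```python
import math

def classical_expected_queries(n):
    """Expected queries to find one marked item among n by random probing:
    (n + 1) / 2 on average."""
    return (n + 1) / 2

def grover_queries(n):
    """Oracle calls for Grover's algorithm: roughly (pi/4) * sqrt(n)."""
    return math.ceil(math.pi / 4 * math.sqrt(n))

# Quadratic separation: at a billion items, Grover needs on the order of
# tens of thousands of oracle calls versus hundreds of millions classically.
n = 10**9
classical = classical_expected_queries(n)
quantum = grover_queries(n)
```

The gap widens as √N versus N, which is exactly why a merely quadratic advantage can still matter once problem sizes are large enough.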

(In fairness a polynomial advantage could signal a unicorn if the polynomial difference with classical alternatives is big enough.)

I’m sure there are more Grover algorithms waiting to be discovered which can demonstrate true polynomial superiority. But given the difficulty in proving upper limits for classical polynomial complexity algorithms and high expectations for quantum advantage, another way to go is to focus on beating classical algorithms with proven exponential complexity. There are already some examples in this class including Shor’s algorithm, so it’s worth digging into what makes these algorithms different and whether we can discern characteristics that might be extended to other problems.

Shout-out to the video tutorial series from Artur Ekert (professor of quantum physics at Oxford), from which I learned much of what I share here.

Going a bit deeper in Shor’s algorithm

The target problem is to find the factors of a large integer. An efficient solution would open up attacks on secure communications: most public-key encryption relies on the difficulty of factoring an integer N, the product of two very large (and unknown) primes, a problem for which no known classical algorithm runs in polynomial time. Shor’s algorithm starts with a classical setup step to generate a seed value a, with 2 &lt; a &lt; N. The quantum stage leverages modular arithmetic (clock arithmetic), computing successive powers of a mod N. When a^R ≡ 1 (mod N) for some power R, the calculation can stop and R appears as the output of the quantum stage. Classical computation takes over again to reconstruct a divisor of N from R. The complexity of this algorithm is on the order of (log N)³, vastly faster than the best-known classical methods.
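To see how the classical post-processing works, here is a toy Python sketch that brute-forces the period R (the step the quantum stage accelerates) and then reconstructs a factor from it. The brute-force loop is illustrative only; at cryptographic sizes it is exactly the exponential cost Shor’s quantum stage avoids.

```python
import math

def find_period(a, n):
    """Brute-force the multiplicative order R with a^R = 1 (mod n).

    Assumes gcd(a, n) == 1. This is the step Shor's quantum stage
    performs efficiently; here we do it classically just to illustrate
    the surrounding post-processing.
    """
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_from_period(a, n):
    """Recover a nontrivial factor of n from the period of a mod n."""
    r = find_period(a, n)
    if r % 2 == 1:
        return None          # odd period: pick another seed a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None          # trivial square root of 1: retry with new a
    return math.gcd(y - 1, n)

# Textbook example: n = 15, seed a = 7 has period 4, yielding factor 3.
factor = factor_from_period(7, 15)
```

The `None` branches are why the full algorithm loops: a randomly chosen seed succeeds with good probability, and a failed seed simply triggers another attempt.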

The core principle behind this method is what I now see as an elaborate quantum interference experiment. Start with a list of qubits initialized to zero, run these through a QFT (Quantum Fourier Transform) to initialize for interference, then run through a phase gate which will modify phases between these qubits. Finally run through an inverse QFT to complete the interference. A function to determine phase gate operation sits outside this flow (though still within the quantum algorithm). This function computes successive a^R mod N values, through which it informs, in a very quantum-like way, the phase gating in the interference flow.
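A one-qubit toy model captures this prepare/phase/interfere pattern. The sketch below (plain Python statevector arithmetic, with Hadamard standing in for the QFT) sends |0⟩ through a Hadamard, a phase gate, and a second Hadamard; the two paths interfere, giving P(|0⟩) = cos²(φ/2):

```python
import cmath
import math

def h_phase_h(phi):
    """Send |0> through Hadamard -> phase gate -> Hadamard; return P(|0>).

    A one-qubit stand-in for the QFT / phase-gate / inverse-QFT pattern:
    the two paths acquire a relative phase phi and interfere, giving
    amplitude (1 + e^{i*phi}) / 2 on |0>, i.e. P(0) = cos^2(phi / 2).
    """
    s = 1 / math.sqrt(2)
    # State after the first Hadamard: (|0> + |1>) / sqrt(2)
    a0, a1 = s, s
    # Phase gate: the |1> path picks up e^{i*phi}
    a1 *= cmath.exp(1j * phi)
    # Second Hadamard recombines (interferes) the two paths
    b0 = s * (a0 + a1)
    return abs(b0) ** 2

p_constructive = h_phase_h(0.0)      # paths in phase: P(0) = 1
p_destructive = h_phase_h(math.pi)   # paths out of phase: P(0) = 0
```

The full algorithms do the same thing over many qubits, with the problem-specific function (such as successive powers of a mod N) steering the phases so that only the answer survives the interference.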

Simon’s algorithm, a precursor to Shor, follows a similar if simpler pattern.

Quantum application to linear equation analysis

Linear equation problems come up everywhere in science, engineering, even business applications. The HHL algorithm can provide expectation values for x in the matrix equation Ax=b, under certain constraints. Such equations are classically solvable in low polynomial time in the matrix size, even linear time in some cases, but HHL solves them in time polylogarithmic (polylog) in the matrix size, which qualifies as an exponential speedup. It may also be significantly better than classical methods in space complexity.

The essential principle behind the method looks to me like Shor, except that interference starts with quantum phase estimation (QPE) (rather than QFT) and ends with inverse QPE. The phase gate in between these QPE gates translates matrix entries into phase values and is of course unique to this algorithm.

Whether or not HHL will prove to have commercial value I don’t know. Even Shor, while very well founded, has only been demonstrated on toy examples. Production value waits on sufficiently powerful and fault tolerant quantum computers to test production readiness.

Takeaway

There may be other possible quantum algorithm architectures, but what is more quantum than interference? The basic algorithm structure prepares superpositions/entanglement of qubit states (through operators such as Hadamard, QFT, QPE, …), then applies phase gating which will control interference. It then realizes the interference through an inverse of the opening transform, delivering measured qubit output as an integer which may be the desired result or can be fed into further classical computation to generate the result.

This interference is the heart of any exponential advantage a quantum algorithm can realize. All qubits in this path are evaluated simultaneously, a feat no classical computer can hope to match, not even massively parallel ones, since strong coupling between qubits undermines any parallelism speedup.

Exploiting this advantage in other algorithms will require new kinds of ingenuity. It will be fascinating to see how these evolve!

Also Read:

Quantum Computing Technologies and Challenges

Quantum Computing Algorithms and Applications

An Insight into Building Quantum Computers


CEO Interview with Gopi Sirineni of Axiado

CEO Interview with Gopi Sirineni of Axiado
by Daniel Nenni on 12-21-2025 at 12:00 pm

Gopi Sirineni Axiado

Gopi Sirineni is a Silicon Valley veteran with four startups and over 25 years of experience in the semiconductor, software and systems industries. As a senior executive, he has demonstrated exceptional skills in building highly efficient, cost-effective organizations, managing them in rapidly changing environments, and bringing industry-changing technologies to market.

Tell us about your company?

Axiado is a semiconductor company redefining the way modern platforms are secured and managed. We develop hardware-anchored security and control solutions that are designed specifically for the demands of today’s AI-driven infrastructure. Our mission is to stop cyberattacks at their earliest point: before they impact systems. By delivering platform security that begins at the silicon level, our technology identifies threats instantly, strengthens platform reliability, and gives organizations a scalable, future-proof foundation for secure compute.

What problems are you solving?

For years, the industry has depended almost entirely on software-only security at the port of entry. These tools work hard to filter threats, but once malware slips through, there is nothing left to stop an attack in progress. This gap leaves high-value infrastructure dangerously exposed. Axiado closes that gap with a hardware-based security architecture. Our Trusted Control/Compute Unit (TCU) sits alongside the system hardware as a persistent, intelligent last line of defense. It collaborates with existing software solutions while independently detecting threats they may overlook. Inside the TCU, autonomous AI agents continuously analyze behavior against known attack patterns, enabling real-time detection before damage occurs. No other company currently offers AI-driven cybersecurity integrated directly into hardware in this way.

What application areas are your strongest?

Our strength lies in combining platform management, platform security, and hardware-resident AI. Some core focus areas include platform security and management control, data-center-grade infrastructure, especially AI infrastructure, and hardware-anchored AI agents that continuously monitor system behavior. In addition, we are in the process of strengthening our Dynamic Thermal Management (DTM) and Dynamic Voltage and Frequency Scaling (DVFS), which is driven by customized AI models. Because our AI agents operate beside the hardware, they can continuously learn normal behavior patterns, identify anomalies instantly, and act autonomously. Beyond security, these same agents improve system efficiency by reducing power consumption and optimizing thermal and performance characteristics in real time.

What keeps your customers up at night?

CISOs and infrastructure leaders worry most about zero-day threats, which are attacks that slip past software defenses and act invisibly until it is too late. Today’s data centers rely heavily on software-based port-of-entry tools that are unable to see what happens once malware bypasses them. Our customers want a trusted, proactive way to predict, detect, and stop these attacks before they unfold. Axiado’s hardware-level AI learning provides exactly that: we monitor systems continuously, learn their patterns, flag deviations instantly, and identify active attacks in real time. Today, we are the only solution in the market capable of detecting attacks as they are happening at the hardware layer.

What does the competitive landscape look like and how do you differentiate?

Axiado does not have any direct competitors building a unified hardware security and platform management architecture. Legacy solutions are currently fragmented, with one piece for management, another partial add-on for boot-time security, and none built with holistic AI-driven protection in mind. Axiado’s solution is different because it is architected from the ground up with security baked in, not bolted on. We have reimagined platform control, efficiency, and end-to-end protection to work together as a single silicon-anchored system. This integration, coupled with hardware-resident AI agents, sets us apart from any legacy or discrete offerings on the market.

What new features/technology are you working on?

We are expanding our autonomous AI agent framework, enabling even more intelligent detection, efficiency tuning, and infrastructure automation. Some of these new capabilities include various advanced AI agents, enhanced Dynamic Voltage and Frequency Scaling (DVFS), next-generation Dynamic Thermal Management (DTM), and continued growth of our hardware-anchored AI compute environment. These innovations strengthen both the security posture and operational efficiency of modern compute infrastructure.

How do customers normally engage with your company?

Customers typically reach us through our website, direct outreach from our sales team, and in-person discussions at industry events. We actively participate in major conferences and trade shows, such as SC25, AI Infra Summit, and OCP Global, where we have seen strong engagement from enterprises, hyperscalers and ecosystem partners. From there, we work closely with customers to integrate our hardware, software stack, and APIs into their existing platforms.

Also Read:

CEO Interview with Masha Petrova of Nullspace

CEO Interview with Eelko Brinkhoff of PhotonDelta

CEO Interview with Pere Llimós Muntal of Skycore Semiconductors