Why TSMC is Known as the Trusted Foundry
by Daniel Nenni on 12-26-2025 at 6:00 am

TSMC Ivey Fab

Taiwan Semiconductor Manufacturing Company (TSMC) is widely regarded as the world’s most trusted semiconductor foundry, a reputation built over decades through technological leadership, business model discipline, operational excellence, and reliability. In an industry where trust is as critical as transistor density, TSMC has become the backbone of the global digital economy.

First and foremost, TSMC’s pure-play foundry model is the foundation of its trustworthiness. Unlike integrated device manufacturers (IDMs) such as Intel and Samsung, which design and manufacture their own chips, TSMC does not compete with its customers. It manufactures chips exclusively for third parties and maintains strict firewalls between customers’ designs. This neutrality reassures customers, from Apple and NVIDIA to AMD, Qualcomm, and countless startups, that their intellectual property will not be used against them. Over time, this consistency has created deep confidence across a vast ecosystem, making TSMC the default manufacturing partner for the world’s most valuable chip designers.

Second, TSMC’s technological leadership reinforces that trust. The company has consistently been first, or decisively best, to mass-produce advanced process nodes such as 7nm, 5nm, and 3nm at high yields. In semiconductor manufacturing, reliability is not just about innovation, but about delivering that innovation at scale, on schedule, and with predictable silicon. TSMC’s ability to translate cutting-edge research into stable, high-volume production has made it indispensable for customers whose product cycles depend on certainty. When companies commit billions of dollars to a chip design, they need confidence that the foundry can deliver exactly as promised, and TSMC has repeatedly proven it can.

Third, manufacturing excellence and yield consistency distinguish TSMC from competitors. Advanced chips are extraordinarily complex, and small variations can destroy profitability or product viability. TSMC’s laser focus on process control, defect reduction, and continuous improvement results in industry-leading yields. High yields mean lower costs for customers, faster ramp-ups, and fewer surprises after tape-out. This operational discipline is a major reason customers trust TSMC with their most advanced and sensitive designs.

Fourth, TSMC has built a reputation for strong intellectual property protection and confidentiality. Semiconductor designs represent years of research and billions in investment. TSMC has demonstrated, across thousands of customers, that it can securely handle highly confidential data without leaks or misuse. This trust is reinforced by TSMC’s internal culture, strict access controls, and long-standing customer relationships. In an era of increasing cyber and industrial espionage, this reliability is invaluable.

Fifth, TSMC’s scale and ecosystem integration create trust through inevitability. The company has invested hundreds of billions of dollars in fabrication plants, equipment, and talent, creating manufacturing capabilities that few others can match. Its close collaboration with equipment suppliers (such as ASML and Applied Materials), EDA vendors (Synopsys, Cadence, Siemens EDA), and IP companies (Synopsys, Arm, Analog Bits), collectively known as the Grand Alliance, allows customers to design within a mature, silicon-proven, and well-supported ecosystem. This reduces risk and shortens time-to-market, further cementing TSMC as the safest choice.

Sixth, TSMC’s long-term strategic thinking strengthens customer confidence. The company invests aggressively ahead of demand, often years before returns are guaranteed. This willingness to absorb risk ensures that capacity is available when customers need it, even during industry upcycles or shortages. During recent global chip shortages, TSMC’s capacity planning and prioritization reinforced its image as a stable, responsible industry steward.

Finally, TSMC’s global credibility and governance matter. While geopolitical risks exist, TSMC has demonstrated transparency, regulatory compliance, and cooperation with governments and customers worldwide. Its expansion into the United States, Japan, and Europe reflects a commitment to supply chain resilience and global trust.

Bottom line: TSMC is the trusted foundry not because of a single advantage, but because of a rare combination: neutrality, technological supremacy, manufacturing reliability, IP protection, scale, and long-term vision. In an industry where failure is catastrophic and trust is earned slowly, TSMC has become the gold standard and the cornerstone of modern semiconductor manufacturing.

Also Read:

TSMC’s Customized Technical Documentation Platform Enhances Customer Experience

A Brief History of TSMC Through 2025

Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium


TSMC’s Customized Technical Documentation Platform Enhances Customer Experience
by Daniel Nenni on 12-24-2025 at 10:00 am

TSMC Online 2025

Taiwan Semiconductor Manufacturing Company, the world’s leading dedicated semiconductor foundry, has long prioritized customer-centric innovation to maintain its competitive edge in a rapidly evolving industry. TSMC is known as “The Trusted Foundry” for this reason.

Amid increasing complexity in chip design and manufacturing, driven by advanced nodes like 3nm and beyond, TSMC recognized the need for more efficient digital tools to support its global clientele. In response, the company upgraded its flagship customer self-service portal, TSMC-Online™, transforming it into a sophisticated Customized Technical Documentation Platform that significantly enhances user experience and operational efficiency.

Launched with upgrades beginning in April 2021, this platform embodies TSMC’s philosophy of being “Everyone’s Foundry.” The core objective was to address challenges posed by escalating technology and design intricacies, where customers from fabless designers to major tech giants require seamless access to vast amounts of technical data. Previously, navigating extensive documentation and production updates often demanded significant time and occasional support from TSMC’s customer service teams. The revamped TSMC-Online™ introduces a customer-oriented architecture, creating a truly customized service environment that empowers users to manage information as if operating their own fabrication facility.

Key enhancements revolve around three innovative methods: a standard operation interface, a personalized workspace, and an intelligent guidance service.

The standard operation interface provides a unified, intuitive layout that simplifies navigation across diverse functions, reducing learning curves and minimizing errors. Users benefit from consistent workflows, whether querying process design kits (PDKs), foundation IPs, or real-time wafer status updates.

The personalized workspace stands out as a hallmark of customization. Customers can tailor their dashboards with widgets aligned to specific roles and project stages—such as design verification, tape-out preparation, or manufacturing monitoring. For instance, engineers focused on advanced packaging like 3DFabric™ can prioritize relevant tools, while those in automotive applications highlight AEC-Q100 qualified resources. This flexibility accommodates varied stakeholder needs within a single organization, streamlining collaboration and boosting productivity.
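To make the idea concrete, here is a minimal sketch of how a role-based workspace configuration could be expressed in code; the roles, widget names, and data structure are hypothetical illustrations, not actual TSMC-Online features.

```python
# Hypothetical sketch of a role-based dashboard configuration.
# Role and widget names are invented for illustration; they are not
# actual TSMC-Online identifiers.
DEFAULT_WIDGETS = {
    "design_verification": ["pdk_downloads", "drm_updates", "ip_qualification"],
    "tapeout_preparation": ["tapeout_checklist", "mask_status", "schedule_alerts"],
    "manufacturing_monitoring": ["wafer_status", "yield_reports", "shipment_tracking"],
}

def build_workspace(role: str, extras: list[str] | None = None) -> dict:
    """Assemble a personalized dashboard from role defaults plus user-chosen widgets."""
    widgets = list(DEFAULT_WIDGETS.get(role, []))
    widgets += extras or []
    return {"role": role, "widgets": widgets}

# Example: an advanced-packaging engineer adds a 3DFabric-related widget.
print(build_workspace("design_verification", extras=["3dfabric_tools"]))
```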

Complementing these is the intelligent guidance service, which leverages smart features like contextual tutorials, animated walkthroughs, and real-time assistance modules. Previously, over half of users reportedly sought external help for document-related tasks due to platform complexity. Now, embedded animations and AI-driven prompts enable self-guided exploration, allowing instant access to tutorials without waiting for support, which is crucial across time zones.

Security remains paramount, with robust confidential information protection mechanisms ensuring sensitive data, including proprietary designs and production metrics, stays secure. Customers gain comprehensive visibility into the entire lifecycle, from design enablement through wafer manufacturing to shipment, fostering trust and operational transparency.

The impact has been profound. By January 2023, TSMC-Online™ averaged over 3,000 daily logins, reflecting high adoption and reliance. This digital collaboration tool accelerates product success by shortening time-to-market, reducing dependency on manual interventions, and enabling innovative outcomes in high-growth sectors like mobile, high-performance computing, AI, automotive electronics, and IoT.

TSMC’s commitment extends beyond this platform; it integrates with broader initiatives like the Open Innovation Platform® (OIP), which includes ecosystems for EDA tools, IPs, and cloud-based design environments. However, the Customized Technical Documentation Platform within TSMC-Online™ directly tackles day-to-day pain points, exemplifying how digital transformation can elevate service quality in semiconductor manufacturing.

Bottom Line: In an era where speed and precision define success, TSMC’s platform not only optimizes customer experience but also strengthens partnerships. By continuing to refine these tools, TSMC reinforces its role as a trusted enabler of global technological advancement, ensuring customers can focus on innovation while the foundry handles the complexities.

Also Read:

Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium

TSMC Formally Sues Ex-SVP Over Alleged Transfer of Trade Secrets to Intel

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival


A Brief History of TSMC Through 2025
by Daniel Nenni on 12-22-2025 at 10:00 am

About TSMC 2025

Taiwan Semiconductor Manufacturing Company, the world’s largest dedicated semiconductor foundry, has transformed from a modest startup into a global technology powerhouse. Founded on February 21, 1987, by Morris Chang, a veteran of Texas Instruments, TSMC pioneered the “pure-play” foundry model. This innovative approach separated chip design from manufacturing, allowing fabless companies to outsource production without competing directly with integrated device manufacturers (IDMs).

Historically, IDMs dominated the industry before the 1980s rise of specialization. Companies like Intel pioneered this model, optimizing processes for proprietary designs. Major IDMs today include Intel, Samsung Electronics, Texas Instruments, Infineon, STMicroelectronics, and Renesas.

Chang, recruited by Taiwan’s government to head the Industrial Technology Research Institute (ITRI), envisioned TSMC as a neutral partner. Initial capital came from the Taiwanese government (48%), Philips (28%, with technology transfer), and private investors. Located in Hsinchu Science Park, TSMC’s early focus was on mature processes like 1-micron CMOS, serving fabless startups.

The 1990s brought rapid growth. TSMC went public on the Taiwan Stock Exchange in 1994 and listed ADRs on the NYSE in 1997, the first Taiwanese company to do so. Technological milestones included 0.5-micron in 1994, 0.35-micron copper interconnects, and overseas ventures like WaferTech in the U.S. (1996). Despite challenges like the 1997 Asian financial crisis and the 1999 earthquake, revenue soared, and TSMC captured over 50% of the foundry market by 2000.

Entering the 2000s, TSMC advanced to 0.13-micron (2002) and 90nm (2004), powering the PC and mobile booms. The Open Innovation Platform (OIP), launched in 2008, fostered ecosystem partnerships. Chang stepped back from the CEO role (2005-2009) before returning amid the global financial crisis. By 2010, revenue hit $13.9 billion, with nodes like 28nm ramping.

The 2010s marked dominance in advanced nodes. Co-CEOs Mark Liu and C.C. Wei took over in 2013, with Chang as chairman until 2018. Breakthroughs included 16nm FinFET (2015), 10nm (2017), 7nm with EUV (2019), and capturing Apple’s A-series chips from Samsung. Revenue reached $45 billion by 2020, market share ~55%. Geopolitical tensions emerged, including U.S. sanctions halting Huawei shipments.

The 2020s accelerated amid COVID shortages and AI surge. 5nm (2020), 4nm (2021), and 3nm (2022) debuted, powering Apple’s M-series and Nvidia’s GPUs. In 2025, 2nm (N2) entered mass production late in the year, offering 10-15% speed gains or 25-30% power savings over 3nm, using gate-all-around transistors. A16 (1.6nm) is slated for 2026/2027 with backside power delivery.

Global expansion diversified risks. Arizona’s Fab 21 began N4 production in Q4 2024, with yields matching Taiwan’s. By 2025, investment had swelled to $165 billion for six fabs, two packaging sites, and R&D, with the third fab breaking ground in April for N2/A16. Japan’s JASM (Kumamoto) started mature-node production in 2024 and is expanding toward more advanced nodes. Germany’s ESMC (Dresden) is progressing for automotive and specialty applications.

Financially, TSMC is thriving on AI demand. Q3 2025 revenue grew 30% year over year, and the company holds roughly 72% of the foundry market. Trailing-twelve-month revenue is ~$88 billion, with a market cap of ~$1.5 trillion. Advanced nodes (7nm and below) drive ~74% of wafer revenue.

Bottom Line: TSMC’s “trusted foundry” status stems from IP protection, neutrality, and innovation. From 1987’s vision to 2025’s AI linchpin, it powers global tech while navigating geopolitics. With C.C. Wei as CEO, TSMC targets net-zero by 2050 and continued leadership in the angstrom era.

Also Read:

Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

TSMC’s Customized Technical Documentation Platform Enhances Customer Experience


AI-Driven DRC Productivity Optimization: Insights from Siemens EDA’s 2025 TSMC OIP Presentation
by Daniel Nenni on 12-09-2025 at 10:00 am

AI Driven DRC Productivity Optimization Siemens AMD TSMC

 

In the rapidly evolving semiconductor industry, Design Rule Checking (DRC) remains a critical bottleneck in chip design workflows. Siemens EDA’s presentation at the 2025 TSMC Open Innovation Platform Forum, titled “AI-Driven DRC Productivity Optimization,” showcases how artificial intelligence is revolutionizing this process. Delivered by David Abercrombie, Sr. Director of Calibre Product Management at Siemens EDA, alongside AMD experts Stafford Yu and GuoQin Low, the talk highlights collaborative advancements with TSMC and AMD to enhance productivity across understanding, fixing, debugging, and collaboration in DRC sign-off.

The presentation opens with an overview of Siemens EDA’s new AI Workflow System, designed to boost the entire EDA ecosystem. This system integrates knowledge capture, next-gen debug platforms, AI debug assistance, and automated fixing, ultimately optimizing DRC sign-off. Central to this is the Siemens EDA AI System, an open, secure platform deployable on-premises or in the cloud. It features a GenAI interface, a knowledge base, and a data lake that amalgamates Siemens EDA data, Calibre-specific data, and customer inputs. Powered by LLMs, ML models, and data query APIs, it enables intelligent solutions across tools like Calibre, Aprisa, and Solido. Key benefits include a single installation process, flexibility for customer integrations, and support for assistants, reasoners, and agents. This infrastructure ensures AI tools are hosted on customer hardware, maintaining data security while accelerating workflows.

A major focus is on boosting user understanding through AI Docs Assistant and Calibre RVE Check Assist. The AI Docs Assistant allows users to query Siemens EDA tool documentation via browser or integrated GUIs, providing instant answers with RAG-generated citations. It supports specific tools and versions, includes company documentation, and collects feedback for continuous improvement. Integrated with Calibre’s Results Viewing Environment (RVE) and Vision AI, it streamlines access to knowledge. Complementing this, Calibre RVE Check Assist leverages TSMC’s Design Rule Manual (DRM) data, embedding precise rule descriptions and specialized images directly into the RVE. This enhances designers’ comprehension of rule checks, improving debugging experiences and productivity. Additionally, RVE Check Assist User Notes facilitate in-house knowledge sharing: designers capture fixing suggestions and images in RVE, storing them in a central database within the EDA AI Datalake. This shared repository allows organization-wide review, enhancing DRC-fixing flows by leveraging collective expertise.
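As a rough illustration of the retrieval-augmented pattern described above, and not Siemens EDA’s actual implementation, the sketch below shows how a documentation query could be answered with citations; the corpus format, scoring, and function names are assumptions.

```python
# Minimal retrieval-augmented generation (RAG) sketch for a docs assistant.
# Illustrates the general pattern only; it is not the Siemens EDA AI Docs
# Assistant implementation, and the corpus and scoring are placeholders.
from dataclasses import dataclass

@dataclass
class DocChunk:
    doc_id: str      # e.g. "calibre_user_guide_v2025"
    section: str     # section heading used for the citation
    text: str

def score(query: str, chunk: DocChunk) -> int:
    """Toy lexical relevance score (real systems use vector embeddings)."""
    terms = set(query.lower().split())
    return sum(t in chunk.text.lower() for t in terms)

def answer_with_citations(query: str, corpus: list[DocChunk], top_k: int = 3):
    """Retrieve the most relevant chunks, then build an LLM prompt with citations."""
    ranked = sorted(corpus, key=lambda c: score(query, c), reverse=True)[:top_k]
    context = "\n\n".join(f"[{c.doc_id} / {c.section}]\n{c.text}" for c in ranked)
    prompt = f"Answer using only the context below and cite sources.\n{context}\n\nQ: {query}"
    citations = [(c.doc_id, c.section) for c in ranked]
    return prompt, citations  # the prompt would then go to the hosted LLM
```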

Shifting to automated fixing, the presentation details Calibre DesignEnhancer, an analysis-based tool for sign-off DRC-clean modifications on post-routed designs. It includes modules like DE Via, which maximizes via insertion to reduce IR drop and boost robustness, and DE Pge, which enhances power grids by adding Calibre nmDRC-clean interconnects for better EM and IR performance. The engine supports LEF/DEF formats and outputs incremental, full, or ECO DEF files, integrating seamlessly with Place-and-Route tools. Its infrastructure handles simple and complex metal rules, such as spacing (M.S., V#.S.), enclosure (M.EN., V.EN.), and forbidden patterns (EFP.M., EFP.V.), considering connectivity and rule dependencies. Examples illustrate fixes like expanding or trimming edges to resolve end-to-end spacing violations, demonstrating its precision in layout contexts.
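To give a feel for the geometric reasoning an automated fixer performs, here is a simplified sketch, not the Calibre DesignEnhancer algorithm, that trims a wire edge to restore an end-to-end spacing rule; the shapes, units, and rule value are invented.

```python
# Simplified sketch of an end-to-end spacing fix on two horizontal wires.
# Illustrative only, not the Calibre DesignEnhancer algorithm; shapes,
# units, and the rule value are invented for the example.

def fix_end_to_end_spacing(wire_a, wire_b, min_space):
    """Trim the right edge of wire_a so the gap to wire_b meets min_space.

    Wires are axis-aligned rectangles (x1, y1, x2, y2), with wire_a to the
    left of wire_b on the same routing track.
    """
    ax1, ay1, ax2, ay2 = wire_a
    bx1, _, _, _ = wire_b
    gap = bx1 - ax2
    if gap >= min_space:
        return wire_a                      # already DRC clean
    trim = min_space - gap                 # amount to pull the edge back
    return (ax1, ay1, ax2 - trim, ay2)     # trimmed wire_a

# Example: a 60 nm gap against an (assumed) 80 nm end-to-end spacing rule.
a = (0, 0, 500, 40)
b = (560, 0, 900, 40)
print(fix_end_to_end_spacing(a, b, min_space=80))  # -> (0, 0, 480, 40)
```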

For debugging, Calibre Vision AI addresses full-chip integration challenges, such as handling billions of violations with sluggish navigation and limited perspectives. It enables “shift left” strategies, identifying issues early for Calibre-clean resolutions. Features include intelligent debug via check grouping (e.g., bad via arrays or fill overlaps), full-chip analysis at 20x speed (reducing 71GB databases to 1.4GB with instant loading), and cross-team collaboration through bookmarks, ASCII RDB exports, and HTML reports. Integration with the Siemens EDA AI System adds natural language capabilities for tool operations, data reasoning, and knowledge access.
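Conceptually, check grouping collapses a flat list of violations into a few actionable signals. The sketch below illustrates that idea only; it is not Calibre Vision AI’s method, and the record format and grouping key are assumptions.

```python
# Conceptual sketch of grouping DRC violations into "signals" for triage.
# Not Calibre Vision AI's algorithm; the record format and grouping key
# (check name plus cell) are assumptions for illustration.
from collections import Counter

def group_violations(violations):
    """violations: iterable of dicts like {"check": "M1.S.1", "cell": "clk_buf"}.

    Returns a Counter keyed by (check, cell) so a huge flat error list is
    summarized into a small number of signals, largest first.
    """
    return Counter((v["check"], v["cell"]) for v in violations)

raw = [
    {"check": "FILL.OVL", "cell": "clk_buf"},
    {"check": "FILL.OVL", "cell": "clk_buf"},
    {"check": "CM0.MISSING", "cell": "pwr_breaker"},
]
for (check, cell), count in group_violations(raw).most_common():
    print(f"{check} in {cell}: {count} violations")
```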

AMD’s testimonial underscores real-world impact: on a design with 600 million errors across 3400 checks, Vision AI grouped them into 381 signals, enabling 2x faster root-cause analysis. Heatmaps revealed systematic issues like fill overlaps with clock cells or missing CM0 in breaker cells, compressing cycle times.

Bottom line: This collaboration between Siemens EDA, TSMC, and AMD exemplifies AI’s transformative role in DRC. By boosting workflows, understanding, fixing, debugging, and collaboration, these tools promise significant productivity gains, potentially shortening design cycles and improving chip reliability. As semiconductor nodes advance, such innovations are essential for maintaining competitive edges in high-stakes industries.

Also Read:

An Assistant to Ease Your Transition to PSS

Accelerating SRAM Design Cycles: MediaTek’s Adoption of Siemens EDA’s Additive AI Technology at TSMC OIP 2025

Why chip design needs industrial-grade EDA AI


Cerebras AI Inference Wins Demo of the Year Award at TSMC North America Technology Symposium
by Daniel Nenni on 12-07-2025 at 2:00 pm

Cerebras TSMC OIP 2025

This is a clear reminder of how important the semiconductor ecosystem is and how closely TSMC works with customers. The TSMC Symposium started 30 years ago and I have been a part of it ever since.  This event is attended by TSMC’s top customers and partners and is the #1 semiconductor networking event of the year, absolutely.

Cerebras Systems, the pioneer in wafer-scale AI acceleration, today announced that its live demonstration of the CS-3 AI inference system received the prestigious Demo of the Year award at the 2025 TSMC North America Technology Symposium in Santa Clara.

The winning demonstration showcased the Cerebras CS-3, powered by the industry’s largest chip, the 4-trillion-transistor Wafer-Scale Engine 3 (WSE-3), delivering real-time, multi-modal inference on Meta’s Llama 3.1 405B model at over 1,800 tokens per second for a single user, and sustaining over 1,000 tokens per second even under heavy concurrent multi-user workloads. Running entirely in memory with no external DRAM bottlenecks, the CS-3 processed complex reasoning, vision-language, and long-context tasks with sub-200-millisecond latency, performance previously considered impossible at this scale.

TSMC’s selection committee, composed of senior executives and technical fellows, cited three decisive factors:
  1. Unprecedented single-chip performance on frontier models without multi-node scaling
  2. True real-time interactivity on models larger than 400 billion parameters
  3. Seamless integration of TSMC’s most advanced 5 nm technology with Cerebras’ revolutionary wafer-scale architecture

During the live demo, the CS-3 simultaneously served dozens of concurrent users running Llama 3.1 405B with 128k context windows, answering sophisticated multi-turn questions, generating images from text prompts via integration with Flux.1, and performing real-time document analysis—all while maintaining conversational latency indistinguishable from smaller cloud-based models.

“Wafer-scale computing was considered impossible for fifty years, and together with TSMC we proved it could be done,” said Dhiraj Mallick, COO, Cerebras Systems. “Since that initial milestone, we’ve built an entire technology platform to run today’s most important AI workloads more than 20x faster than GPUs, transforming a semiconductor breakthrough into a product breakthrough used around the world.”

“At TSMC, we support all our customers of all sizes—from pioneering startups to established industry leaders—with industry-leading semiconductor manufacturing technologies and capacities, helping turn their transformative ideas into realities,” said Lucas Tsai, Vice President of Business Management, TSMC North America. “We are glad to work with industry innovators like Cerebras to enable their semiconductor success and drive advancements in AI.”

The CS-3’s memory fabric provides 21 petabytes per second of bandwidth and 44 gigabytes of on-chip SRAM—equivalent to the memory of over 3,000 GPUs—enabling entire 405B-parameter models to reside on a single processor. This eliminates the inter-GPU communication overhead that plagues traditional GPU clusters, resulting in dramatically lower latency and up to 20x higher throughput per dollar on large-model inference.

The recognition comes as enterprises increasingly demand cost-effective, low-latency access to frontier-scale models. Independent benchmarks published last month by Artificial Analysis confirmed the CS-3 as the fastest single-accelerator system for Llama 3.1 70B and 405B inference, outperforming NVIDIA H100 and Blackwell GPU clusters on both tokens-per-second and time-to-first-token metrics.

TSMC’s annual symposium attracts thousands of engineers and executives from across the semiconductor ecosystem. The Demo of the Year award has previously gone to groundbreaking advancements in 3 nm and 2 nm process technology; this year marks the first time an AI systems company has claimed the honor.

Cerebras is now shipping CS-3 systems to customers in healthcare, finance, government, and scientific research. The company also announced general availability of Cerebras Inference Cloud, offering developers instant API access to Llama 3.1 405B at speeds up to 1,800 tokens/second—the fastest publicly available inference for models of this scale.

Bottom line: With this award from TSMC, Cerebras solidifies its position as the performance leader in generative AI inference, proving that wafer-scale computing has moved from bold vision to deployed reality.

Also Read:

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design

Exploring TSMC’s OIP Ecosystem Benefits

Breaking the Thermal Wall: TSMC Demonstrates Direct-to-Silicon Liquid Cooling on CoWoS®


Accelerating SRAM Design Cycles: MediaTek’s Adoption of Siemens EDA’s Additive AI Technology at TSMC OIP 2025
by Daniel Nenni on 12-02-2025 at 10:00 am

Siemens MediaTek TSMC OIP 2025

In the competitive vertical of mobile System-on-Chip development, SRAM plays a pivotal role, occupying nearly 40% of chip area and directly impacting yield and performance. The presentation “Accelerating SRAM Design Cycles With Additive AI Technology,” co-delivered by Mohamed Atoua of Siemens EDA and Deepesh Gujjar of MediaTek at TSMC’s Open Innovation Platform, addresses the verification challenges in advanced nodes like TSMC’s N2P process. As mobile SoCs push for lower minimum operating voltages (Vmin) to enhance power efficiency, device variations intensify, necessitating rigorous statistical yield qualification: 6-sigma for bitcells and 4-4.5 sigma for periphery logic. Traditional brute-force Monte Carlo simulations, while accurate, are computationally intensive and time-consuming, often leading to iterative design cycles that delay production.
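To see why brute-force Monte Carlo becomes impractical at these sigma targets, here is a quick back-of-envelope check using standard Gaussian tail probabilities (my own arithmetic, not figures from the presentation).

```python
# Back-of-envelope: Monte Carlo samples needed to observe failures at a
# given sigma target. Standard Gaussian tail math; these are not numbers
# from the OIP presentation.
import math

def tail_probability(sigma: float) -> float:
    """One-sided Gaussian tail probability P(X > sigma)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

for sigma in (4.0, 4.5, 6.0):
    p_fail = tail_probability(sigma)
    # A crude estimate of p_fail needs on the order of 10 observed failures.
    samples = 10 / p_fail
    print(f"{sigma}-sigma target: p_fail ~ {p_fail:.2e}, "
          f"~{samples:.1e} brute-force samples for ~10 failures")
```

At a 6-sigma target this works out to roughly 10^10 SPICE runs per measurement, which is exactly the cost that high-sigma and additive-learning approaches are designed to avoid.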

The core motivation stems from these iterative workflows. Failures in verification prompt design fixes, PDK revisions, simulator updates, or additional PVT corners, each requiring full re-runs. MediaTek, leveraging Siemens EDA’s Solido tools, sought a more efficient approach. Enter Additive Learning technology, an AI-driven methodology integrated into the Solido Design Environment. This innovation retains and reuses AI models and simulation data from prior jobs, drastically reducing simulations in subsequent iterations without compromising SPICE-level accuracy.

Solido’s suite includes the High-Sigma Verifier (HSV) and PVTMC Verifier, both enhanced by Additive Learning. HSV enables verifiable high-sigma analysis, achieving 6-sigma yield verification in thousands of simulations, from 1,000x up to 1,000,000,000x faster than brute force. PVTMC provides full-coverage verification across PVT corners plus Monte Carlo, runs 2-10x faster than traditional methods, and excels at outlier detection. In traditional flows, five iterations might consume 50 hours; with Solido’s iterative workflow, this drops to 5 hours, saving days or weeks in chip schedules.

The Additive Learning Engine automatically detects reuse opportunities, drawing from a lightweight, optimized Reusable AI Datastore. This datastore supports multi-user access, parallel read/write, and small disk footprints, allowing deletion of full DE results while preserving speedup potential. It stores AI models and past data for fast lookups, ensuring seamless integration into workflows like design sizing changes or PDK updates.
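As a conceptual sketch of the reuse idea, not Solido’s Additive Learning engine, one could key a stored surrogate model by design, PDK, and corner identifiers and pull it back on the next iteration instead of starting from scratch; the key scheme, file format, and interface below are hypothetical.

```python
# Hypothetical sketch of reusing stored AI models between verification runs.
# Not Solido's Additive Learning engine: the key scheme, file format, and
# surrogate-model interface are invented to illustrate the reuse concept.
import hashlib, os, pickle

def run_key(design_rev: str, pdk_rev: str, corner: str) -> str:
    """Stable lookup key for a verification context."""
    return hashlib.sha256(f"{design_rev}|{pdk_rev}|{corner}".encode()).hexdigest()

def load_prior_model(store_dir: str, key: str):
    """Return a previously saved surrogate model, or None on a first run."""
    path = os.path.join(store_dir, f"{key}.pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return None

def save_model(store_dir: str, key: str, model) -> None:
    os.makedirs(store_dir, exist_ok=True)
    with open(os.path.join(store_dir, f"{key}.pkl"), "wb") as f:
        pickle.dump(model, f)

# Usage idea: after a small design fix, load the prior model, run only the
# few simulations it flags as most informative, update it, and save it back.
```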

MediaTek’s results demonstrate tangible benefits. In Case 1, verifying 5-sigma bitcell write margin on N2P (clock-to-bitcell flip), the base run required 2,500 simulations, yielding a mean of 120.1 ps and 5-sigma of 131.2 ps. Post-design fix (Vt changes in write driver and column mux), Additive Learning completed verification in just 29 simulations, a 67x speedup, with mean 121.8 ps and 5-sigma 132.5 ps. Case 2 involved 4-sigma instance-level verification (clock-to-data out), where the original 300 simulations gave mean 167.4 ps and 4-sigma 173.1 ps. After Vt updates in control/IO blocks, Additive Learning used 15 simulations (20x faster) matching full re-run results (mean 198 ps, 4-sigma 204.6-204.8 ps).

This technology’s broader adoption underscores its production-grade maturity. NVIDIA, for instance, employs Additive Learning in AI-powered standard cell verification, achieving speedups on incremental runs amid rising design complexity beyond 5nm. Siemens EDA highlights up to 100x boosts to existing AI techniques for verification efficiency. As nodes shrink, such tools are essential for maintaining accuracy while compressing cycles, enabling faster time-to-market for high-yield SoCs.

Bottom line: Additive Learning transforms SRAM design from a bottleneck into an agile process: fast, accurate, and automatic. By reusing models across iterations, whether for PDK revisions, sizing tweaks, or tool updates, it exemplifies AI’s role in EDA, as evidenced by MediaTek’s 20-67x gains. This collaboration between Siemens EDA and MediaTek not only accelerates mobile innovation but also sets a benchmark for AI integration in semiconductor workflows, promising even greater efficiencies in future nodes.

Also Read:

Transforming Functional Verification through Intelligence

Why chip design needs industrial-grade EDA AI

Hierarchically defining bump and pin regions overcomes 3D IC complexity

CDC Verification for Safety-Critical Designs – What You Need to Know


TSMC Formally Sues Ex-SVP Over Alleged Transfer of Trade Secrets to Intel
by Daniel Nenni on 11-28-2025 at 6:00 am

TSMC vs Wei Jen Lo

The big semiconductor news this week is the legal action TSMC is taking against former Senior Vice President Wei-Jen Lo. This looks to be a serious game of 3D chess between CC Wei and Lip-Bu Tan, so it is worth a look. I got this notice in my inbox early Tuesday morning:

Immediately following, there was an interesting discussion on SemiWiki. Since this story is still unfolding I thought it would be worthwhile to take a closer look and share perspectives.

Wei-Jen Lo immigrated to the US from Taiwan and earned his PhD in Solid State Physics & Surface Chemistry from U.C. Berkeley in 1979. He joined TSMC in 2004 after 18 years at Intel, where his positions included Director of Technology Development and Factory Manager of a development facility in Santa Clara, CA.

Joining TSMC after Intel was not uncommon back then. Intel and TSMC did not compete directly, and that is how Silicon Valley worked. We changed jobs in pursuit of equity, so Wei-Jen Lo working for only two companies in 40 years is quite remarkable. It also speaks to the depth of knowledge he had at both companies.

Here is the TSMC hiring announcement from 2004:

Approved the appointment of Dr. Wei-Jen Lo as Vice President of TSMC Operations II. Both Dr. Wei-Jen Lo and Dr. Mark Liu, who is also a Vice President of TSMC Operations II, report directly to Dr. Rick Tsai, President and Chief Operating Officer for TSMC. Dr. Wei-Jen Lo recently joined TSMC from Intel Corporation where he held various positions in technology development and management. Prior to joining Intel, Dr. Lo served in the IT and semiconductor industries, and he also held teaching and research position in the university in the US.

TSMC and many other semiconductor companies have hired former Intel employees. In fact, you will be hard pressed to find a semiconductor company that does not have Intel experience inside, so this is nothing new. Dr. Mark Liu also worked for Intel prior to joining TSMC and later became CEO and Chairman of the Board.

The problem as I see it is two-fold:

Wei-Jen Lo told TSMC he was retiring, which provided a completely different exit than if he had said he would go to work for a competitor. For example, it was reported that Wei-Jen Lo was allowed to take 20 boxes of handwritten notes he had compiled over the last 20 years while working at TSMC. That would not have happened if it was known that he was going to work for Intel.

The second problem is that Intel did not publicly announce Wei-Jen Lo’s arrival, which is normally done for executive staff. This supports the argument of deception. Wei-Jen is said to be a Senior VP at Intel reporting directly to Lip-Bu Tan. He will work in Intel’s manufacturing group and its packaging business, which directly competes with TSMC.

On Wednesday morning Lip-Bu Tan sent a message to Intel employees defending the hiring:

“Based on everything we know today, we see no merit to the allegations involving Wei-Jen, and he continues to have our full support. Intel has welcomed back Wei-Jen Lo, who previously spent 18 years at Intel working on the development of Intel’s wafer processing technology before joining TSMC, where he continued his work in their wafer processing technology development.”

On Thursday it was reported that Taiwan prosecutors had raided the homes of the former senior TSMC executive and seized computers after the company accused him of leaking trade secrets. This is both a criminal and civil investigation.

The big question is why?

Wei-Jen Lo is 75 years old and has had a remarkable career working for two semiconductor giants that changed the world. His legacy is one semiconductor professionals like myself honor greatly. Wei-Jen has also worked for some of the most decorated people in the semiconductor industry including Andy Grove, Craig Barrett, Morris Chang, and CC Wei.

Additionally, Wei-Jen Lo owns significant TSMC stock. I found a source that said as of February 28, 2025, Lo held 1,282,328 shares, valued at about US$63 million. Could that be true? And now he risks losing it as a result of a lawsuit? Not to mention the shame of betraying Taiwan’s most valued company? Seriously, why would he do this?

Bottom Line: Hopefully this story has a happy ending. TSMC is still an important supplier to Intel and this will be yet another test of leadership for Lip-Bu Tan. Lip-Bu needs to get in front of this situation before it does irreparable harm to Intel, absolutely.

Also Read:

TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival

Exploring TSMC’s OIP Ecosystem Benefits

TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging


TSMC Kumamoto: Pioneering Japan’s Semiconductor Revival
by Daniel Nenni on 11-07-2025 at 6:00 am

TSMC Kumamoto Fab 2

In the lush landscapes of Kumamoto Prefecture, on Japan’s Kyushu Island, TSMC is etching a new chapter in global chip production. The TSMC Kumamoto facility, operationalized through its majority-owned subsidiary Japan Advanced Semiconductor Manufacturing (JASM), represents the Taiwanese giant’s bold foray into Japan and its first dedicated wafer fab in the country.

Announced in 2021, this project embodies a strategic pivot toward diversified, resilient manufacturing. With an initial investment exceeding $8.6 billion, Kumamoto underscores TSMC’s commitment to serving key Asian clients while bolstering Japan’s domestic semiconductor ecosystem.

The journey began with groundbreaking in 2022, transforming a sprawling industrial site in Kikuyo Town into a state-of-the-art cleanroom. Construction progressed swiftly, with Taiwanese engineers relocating en masse to oversee integration. By late 2024, JASM’s first fab commenced mass production, focusing on mature yet vital process nodes: 22/28nm and 12/16nm. These nodes are ideal for automotive semiconductors, image sensors, and microcontrollers, sectors where Japanese powerhouses like Sony and Denso dominate.

The facility’s monthly capacity targets 55,000 wafers, powered entirely by renewable energy sources, aligning with Japan’s green manufacturing mandates. As of April 2025, JASM’s workforce has swelled to around 2,400, including 527 new local hires, fostering a blend of Taiwanese expertise and Japanese precision. This infusion has not only created jobs but also sparked a skills renaissance in Kyushu, a region long synonymous with electronics but starved of cutting-edge fabs. This fab was dubbed the “Knight Castle” by locals since construction was done 24 hours a day.

Kumamoto’s strategic imperative is twofold: Geopolitically, it mitigates risks from Taiwan’s exposure to cross-strait tensions, diversifying TSMC’s footprint amid U.S.-China frictions. Economically, it taps into Japan’s resurgence under the “Semiconductor Revival Plan,” backed by subsidies from the Ministry of Economy, Trade and Industry.

This funding is part of a national initiative which aims to reclaim 10% of global chip production by 2030. For TSMC, Kumamoto secures proximity to loyal customers: Sony’s image sensors for smartphones and cameras, Denso’s automotive chips for electric vehicles, and emerging AI peripherals. In an era where automotive semis face slumps, exacerbated by delayed EV adoption, the fab’s specialization offers stability. Industry watchers project profitability by late 2025, with yields already performing robustly.

Yet, expansion hasn’t been seamless. The second fab, earmarked for advanced 6/7nm processes on a 321,000-square-meter plot adjacent to the first, has encountered headwinds. Initially slated for a Q1 2025 groundbreaking, construction was deferred to mid-year due to severe traffic congestion from the initial site’s operations (commutes ballooned from 15 minutes to an hour), which irked residents. TSMC Chairman C.C. Wei cited these local pains during the June 2025 shareholders’ meeting, emphasizing dialogues with Japanese authorities for infrastructure upgrades. Further delays in 2025 stemmed from softer automotive demand and a pivot toward U.S. investments, pushing mass production to late 2027. TSMC builds fabs closely tied to customer demand, so this is a good example of intelligent semiconductor business decision-making.

Despite these challenges, TSMC reaffirmed its commitment in August 2025 with board member Paul Liu quashing rumors of diminished Japanese focus. The second fab promises elevated capabilities, including 40nm variants for industrial applications, potentially doubling output and attracting more clients.

Beyond wafers, Kumamoto catalyzes regional transformation. Kyushu’s IC production value hit ¥1 trillion in 2024 for the first time in 16 years, fueled by TSMC’s ripple effects. Local suppliers, from equipment makers to materials firms, now furnish 60% of needs, nurturing a self-sustaining cluster. Governor Takashi Kimura has championed community buy-in, securing promises for green spaces and training programs amid wastewater monitoring starting January 2025.

Bottom line: Kumamoto could spawn a third fab post-2030, embedding TSMC deeper in the “semiconductor triangle” of Taiwan, Japan, and the U.S. As AI and EVs propel chip demand, this outpost fortifies supply chains, blending Eastern innovation with Western resilience. In Kumamoto, silicon flows not just as commerce, but as a bridge across borders, proving that in the chip wars, collaboration outpaces isolation. For TSMC, it’s a testament to enduring partnerships; for Japan, it is a revival etched in silicon, absolutely.

Also Read:

AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design

Exploring TSMC’s OIP Ecosystem Benefits

Breaking the Thermal Wall: TSMC Demonstrates Direct-to-Silicon Liquid Cooling on CoWoS®

TSMC’s Push for Energy-Efficient AI: Innovations in Logic and Packaging


Memory Matters: The State of Embedded NVM (eNVM) 2025
by Daniel Nenni on 11-06-2025 at 6:00 am

NVM Survey 25 Square Banner for SemiWiki 400x400 px

Make a difference and take this short survey. It asks about your experience with embedded non-volatile memory technologies. The survey is anonymous, and the results will be shared in aggregate to help the industry better understand trends: 2025 Embedded Non-Volatile Memory Survey.

We are now in the AI era where data is the lifeblood of innovation, and embedded NVM stands as a cornerstone technology, retaining information without using power and enabling everything from MCUs and IoT SoCs to automotive controllers and secure elements.

As of November 2025, embedded NVM is moving fast. Edge data is surging, AI features are landing on MCUs and SoCs, and power budgets are tighter than ever. Memory is central to the devices we build. This survey looks at where eNVM stands today in terms of markets, technology, and adoption, and where it’s heading next.

Market Overview and Growth

Embedded emerging NVM, including MRAM, RRAM/ReRAM, and PCM, is entering a broader adoption phase across MCUs, connectivity, and edge-AI devices, with momentum building in automotive and industrial markets. Research firm Yole Group indicates the embedded emerging segment should exceed $3B by 2030, reflecting wider availability in mainstream process nodes and stronger pull where eFlash is no longer a good fit at ≤28 nm.

Technological Advancements

Embedded flash remains foundational, but scaling limits at advanced nodes have pushed MRAM, ReRAM, and embedded PCM to the foreground. Foundries and IDMs are extending embedded options beyond 28/22 nm planar CMOS toward 10–12 nm-class platforms, including FinFET. Yole highlights aggressive foundry roadmaps: TSMC has established high-volume MRAM/ReRAM and is preparing 12nm FinFET ReRAM/MRAM for 2025 and beyond. Samsung, GlobalFoundries, UMC, and SMIC are accelerating embedded MRAM/ReRAM/PCM across general-purpose MCUs and high-performance automotive designs. STMicroelectronics stands out as the IDM fully committed to embedded PCM, ramping xMemory solutions for industrial and automotive MCUs, with 18nm FD-SOI extending reach after 2025.

In parallel, BCD and HV-CMOS flows are incorporating embedded NVM as practical replacements for EEPROM/OTP in analog, power management, and mixed-signal designs. On the IP side, suppliers are qualifying embedded NVM technologies for these platforms, giving designers more options where cost, endurance, and retention outweigh legacy choices. Beyond code and data storage, in-/near-memory compute concepts using eNVM are gaining interest for low-power edge AI inference.

Drivers, Challenges, and Use Cases

Automotive remains the center of gravity for embedded emerging NVM, and 2025 brings a noticeable uptick in secure ICs and industrial MCUs as more products reach production. In practice, ReRAM, MRAM, and PCM each have a role: ReRAM is gaining traction in several high-volume categories; MRAM and PCM are attractive where speed and endurance dominate. The mix varies by node, application, and vendor roadmap.

The challenges are familiar: integrating eNVM at advanced logic nodes, trading off endurance and retention, qualifying to automotive-grade reliability, and achieving cost-effective density as embedded code and AI parameters grow. The trend line is positive, with PDK/IP availability growing and capacity ramping, so these issues are being addressed rather than deferred.

Outlook

By 2030, embedded NVM will underpin more on-chip AI features and practical in-/near-memory compute blocks, with broader use in neuromorphic-inspired accelerators at the edge. Yole’s projections indicate that the embedded emerging segment is now the primary engine of growth, led by ReRAM in high-volume MCUs and analog ICs, while MRAM and embedded PCM consolidate in performance-critical niches. As edge data grows, eNVM’s role expands from “just storage” to part of the computing fabric, redefining efficiency and making embedded memory more central than ever to device intelligence.

Bottom line: In 2025, embedded NVM isn’t just memory, it’s the enabler of intelligent, persistent systems on chip. With accelerating adoption across MCUs and edge SoCs, and clear roadmaps from leading foundries and IDMs, the trajectory is set: embedded memory matters more than ever. Let us know your opinion by taking the short survey.

Take the 2025 Embedded Non-Volatile Memory Survey Here.

Also Read:

Chiplets: Powering the Next Generation of AI Systems

Podcast EP312: Approaches to Advance the Use of Non-Volatile Embedded Memory with Dave Eggleston

Podcast EP311: An Overview of how Keysom Optimizes Embedded Applications with Dr. Luca TESTA


AI-Driven DRC Productivity Optimization: Revolutionizing Semiconductor Design
by Daniel Nenni on 10-28-2025 at 10:00 am

AI Driven DRC Productivity Optimization Siemens AMD TSMC

The semiconductor industry is undergoing a transformative shift with the integration of AI into DRC workflows, as showcased in the Siemens EDA presentation at the 2025 TSMC OIP. Titled “AI-Driven DRC Productivity Optimization,” this initiative, led by Siemens EDA’s David Abercrombie alongside AMD’s Stafford Yu and GuoQin Low, highlights a collaborative effort to enhance productivity and efficiency in chip design. The presentation outlines a comprehensive AI system that revolutionizes the entire EDA workflow, from knowledge sharing to automated fixing and debugging.

At the core of this innovation is the Siemens EDA AI System, which leverages a GenAI interface, knowledge base, and data lake to integrate AI tools across the portfolio. This system, deployable on customer hardware or cloud environments, supports a unified installation process and offers flexibility to incorporate customer data and models. Tools like the AI Docs Assistant and Calibre RVE Check Assist boost user understanding by providing instant answers and leveraging TSMC design rule data, respectively. The AI Docs Assistant, accessible via browser or integrated GUIs, uses retrieval-augmented generation to deliver relevant citations, while Calibre RVE Check Assist enhances debugging with specialized images and descriptions from TSMC.

Collaboration is a key pillar, with features like Calibre RVE Check Assist User Notes enabling in-house knowledge sharing. Designers can capture fixing suggestions and images, creating a shared knowledge base that enhances DRC-fixing flows across organizations. Meanwhile, Calibre DesignEnhancer automates the resolution of DRC violations on post-routed designs, using analysis-based modifications to insert sign-off DRC-clean interconnects and vias. This tool’s ability to handle complex rules and dependencies makes it a standalone DRC fixing solution.

Calibre Vision AI addresses the unique challenges of full-chip integration by offering AI-guided DRC analysis. It provides lightning-fast navigation through billions of errors, intelligent debug clustering, and cross-user collaboration tools like bookmarks and HTML reports. AMD’s testimonial underscores a 2X productivity boost in systematic error debugging, with Vision AI reducing OASIS database sizes and load times significantly. Signals analysis, such as identifying fill overlaps with clock cells or CM0 issues in breaker cells, accelerates root-cause identification.

This AI-driven approach, bolstered by AMD and TSMC collaborations, optimizes DRC sign-off productivity by boosting workflows, understanding, fixing, debugging, and collaboration. As the industry moves toward more complex designs, Siemens EDA’s AI system sets a new standard, promising faster cycle times and enhanced design robustness, paving the way for future innovations in semiconductor technology.

For more information contact Siemens EDA

Great presentation, absolutely.

Also Read:

Visualizing hidden parasitic effects in advanced IC design 

Protect against ESD by ensuring latch-up guard rings

Something New in Analog Test Automation