Bronco AI Webinar: Full-Chip SoC Debug in 15 Minutes
by Daniel Nenni on 05-01-2026 at 10:00 am


A single bug on a full-chip SoC can pull engineers off roadmap work for days or even weeks. Debugging it means wading through massive waveforms, thousands of files of RTL and UVM, and dense specs that aren’t always perfect. Finding these bugs has always been a matter of engineer-hours and how well knowledge diffuses through the organization.

Bronco changes that equation. At major public chip companies, Bronco Debug works through SoC-level bugs on a regular basis and delivers root-cause analyses in under 15 minutes, hands-free. These same failures take engineering teams multiple days to solve. This is what a routine Bronco debug session looks like.

Time to Value

Bronco Debug plugs directly into your overnight regression. The moment a simulation fails, the agent is already there — pulling the run log, waveform, code, spec, and relevant project history. By the time your DV engineer sits down in the morning, a Jira-ready ticket is waiting: root cause, evidence to back it up, and specific files or fixes to look at.
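To make the flow concrete, here is a minimal sketch of what a post-regression hook of this shape could look like, assuming a simple run-directory layout. Every name in it (collect_artifacts, on_simulation_failure, the sim.log and dump.fsdb paths) is a hypothetical illustration, not Bronco's actual API.

import json
import pathlib

def collect_artifacts(run_dir: pathlib.Path) -> dict:
    # Gather the inputs a debug agent would need from a failing run.
    log_path = run_dir / "sim.log"
    return {
        "log": log_path.read_text(errors="ignore") if log_path.exists() else "",
        "waveform": str(run_dir / "dump.fsdb"),   # path only; parsed downstream
        "rtl_manifest": str(run_dir / "files.f"),
        "spec_index": str(run_dir / "spec"),
    }

def on_simulation_failure(run_dir: pathlib.Path) -> dict:
    # Package the artifacts into the Jira-ready ticket shape described above.
    artifacts = collect_artifacts(run_dir)
    return {
        "title": f"Regression failure in {run_dir.name}",
        "root_cause": "<agent analysis goes here>",
        "evidence": sorted(artifacts),
        "suspect_files": [],
    }

if __name__ == "__main__":
    print(json.dumps(on_simulation_failure(pathlib.Path("runs/nightly_0142")), indent=2))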

Initial deployment takes days, not months. Bronco has been up and running on full-chip SoCs within a week of onboarding.

A Platform Across the Full Verification Journey

Debug is what we’re demoing here, but Bronco covers the full DV lifecycle:

  • Bronco Spec Intelligence — ingests massive specs (tables, diagrams, natural language) and automatically flags ambiguities, inconsistencies, and untested requirements
  • Bronco Planning — takes specs and builds or enriches a verification plan and test plan, checking for coverage holes early to avoid surprises and oversights later in the project
  • Bronco Bring-Up — specialized agents for UVM and RTL bring-up at industry scale
  • Bronco Debug — from the first regression failure through root cause, across block-level designs and full-chip SoCs

What Makes It Work

Bronco runs on a three-layer stack:

  • State-of-the-art AI — large reasoning models with tool use, memory, and decision-making loops that generalize across companies, designs, and tasks
  • Proprietary AI-native EDA — purpose-built for agents; this is what lets Bronco navigate 10,000-plus-file SoCs, process massive waveforms, and run parallel debug threads across an entire regression in minutes
  • Bronco Knowledge Library — captures and indexes every bug, insight, and debug session into a customer-specific knowledge store; Bronco gets better the longer it runs on your project

How We Deploy

Bronco runs fully on-prem. Customer design data never leaves the secure environment, and Bronco never trains on customer data. The platform supports bring-your-own-model, whether it’s a third-party Enterprise AI or an on-prem self-hosted model, and integrates natively with standard EDA flows.

For teams with existing AI infrastructure, Bronco also supports bring-your-own-agent deployment: connect your existing agents or orchestration harnesses to the Bronco EDA stack and Knowledge Library, and use Bronco as the DV infrastructure layer underneath.

See It For Yourself

At an upcoming SemiWiki webinar, Bronco AI will demonstrate the Debug Agent end-to-end on a representative full-chip SoC — from a failing regression to an annotated root cause, in real time. Attendees can ask questions on deployment, security posture, and integration with existing flows.

Register now

Bronco AI pairs state-of-the-art AI agents with a proprietary EDA suite purpose-built for agent-driven chip development. The platform is deployed at large public chip companies and startups alike, automating Design Verification from spec review and verification planning through post-regression debug.

Also Read:

Bronco Debug Stress Tested Measures Up

Verification Futures with Bronco AI Agents for DV Debug

Superhuman AI for Design Verification, Delivered at Scale


Podcast EP344: An Overview of the Upcoming Sensors Converge Event with David Drain
by Daniel Nenni on 05-01-2026 at 6:00 am

Daniel is joined by David Drain, show director for Questex’s Sensors Converge and Broadband Nation Expo, where he leads strategy, content, and industry engagement for two of the company’s flagship technology events. Prior to joining Questex, David spent more than 15 years with Networld Media Group, most recently as senior vice president of events and managing director of the Interactive Customer Experience Association.

Dan explores the details of the upcoming Sensors Converge conference with David, who explains how AI is bringing sensors, connectivity and compute together to form a new ecosystem. The upcoming conference provides a venue for this integrated focus to grow. Beyond a specific engineering focus, David notes that there is now wider participation from system developers, reflecting the convergence of multiple technologies to address the needs of the market.

The conference will host about 5,000 attendees, 200 exhibitors and about 100 speakers, making it a significant event. If you need to run AI on new applications and balance items such as power, latency, and cost, this event could be quite beneficial. It will be held May 5-7, 2026 at the Santa Clara Convention Center in Santa Clara, CA. You can learn more about the show and register to attend here.


Dr. L.C. Lu on TSMC Advanced Technology Design Solutions
by Daniel Nenni on 05-01-2026 at 6:00 am

Dr. L.C. Lu is Vice President of Research & Development / Design & Technology Platform at Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC) and a TSMC Senior Fellow.

L.C. leads efforts in design enablement, ensuring that the company can meet the diverse and evolving requirements of its global customer base. Prior to this, he headed the Design and Technology Platform organization starting in 2018.

Since joining TSMC in 2000, Dr. Lu has held multiple leadership positions in design services. He has worked closely with process R&D teams to pioneer Design and Technology Co-Optimization (DTCO), improving speed, power efficiency, and density in advanced process technologies. He has also collaborated extensively with ecosystem partners through the TSMC Open Innovation Platform (OIP), helping deliver comprehensive design solutions and intellectual property for a wide range of applications, including high-performance computing, automotive, RF, and advanced 2.5D and 3D designs.

Dr. Lu’s contributions have earned him significant recognition. He received Taiwan’s National Outstanding Manager Award in 2012 and was named a TSMC Senior Fellow in 2025. He is also one of the company’s most prolific inventors, holding more than 100 patents worldwide.

He earned his bachelor’s degree in electrical engineering from National Taiwan University, a master’s degree in computer science from National Tsing Hua University, and a Ph.D. in computer science from Yale University.

L.C.’s presentation focuses on advanced design-technology co-optimization (DTCO), packaging innovations, and AI-driven methodologies that enable continued scaling in performance, power, and area (PPA) for next-generation semiconductor systems. The discussion highlights how tightly coupled design and process innovations, along with system-level integration, are critical to sustaining Moore’s Law in the era of AI and HPC.

At the device and design level, TSMC emphasizes DTCO and design-driven cell library (DDCL) innovations to achieve node-to-node scaling from N5 through N2 and into A14. The introduction of NanoFlex and NanoFlex Pro architectures enables flexible standard cell design with significant gains in efficiency. N2 NanoFlex achieves up to 50% speed improvement at constant voltage or 50% power reduction at constant performance compared to traditional cells. Building on this, A14 NanoFlex Pro introduces a 1.5× cell height merged oxide diffusion (OD) architecture, significantly improving OD utilization and enabling tighter placement of high-speed and low-power cells. This results in 10–15% speed gains and ~20% area reduction relative to N2, effectively delivering multi-node scaling benefits within a single generation.


Further enhancements in N2P and N2U nodes incorporate advanced DTCO and power delivery optimizations. Hybrid dual-rail architectures reduce minimum operating voltage (Vmin) by over 200 mV compared to single-rail designs, achieving approximately 40% energy savings. N2U extends N2P with incremental improvements—3–4% higher performance or 8–10% lower power—while maintaining full compatibility with existing design rules and IP, ensuring smooth adoption for customers.
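The roughly 40% figure is consistent with first-order dynamic-energy scaling, where energy per operation goes as the square of supply voltage. As an illustration (the 0.9 V baseline is an assumed number for the arithmetic, not one quoted in the talk), dropping Vmin by 200 mV from 0.9 V to 0.7 V gives

$E_{0.7}/E_{0.9} = (0.7/0.9)^2 \approx 0.60$

or about 40% energy savings.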

EDA readiness and AI integration are key enablers of these advanced nodes. TSMC collaborates closely with electronic design automation (EDA) partners to ensure tool readiness and to incorporate AI-enhanced workflows. Agentic AI systems are being deployed across design cycles to optimize block placement, routing, and performance, improving both productivity and design quality. These AI techniques are also applied to analog and RF design, enabling efficient migration across process nodes and accelerating time-to-market.

At the system level, TSMC’s advanced packaging technologies—particularly CoWoS, SoIC, and 3D Fabric—play a central role in enabling AI scaling. CoWoS technology continues to scale reticle size and integration capacity, allowing significant increases in compute density. From 2024 to 2029, the number of transistors in a single CoWoS system is projected to increase by 48×, driven by larger package sizes, increased system-on-chip (SoC) counts, and transition to advanced nodes such as TSMC A14.

Memory bandwidth scaling is similarly aggressive, with high-bandwidth memory (HBM) integration increasing both capacity and throughput. HBM stacks are expected to grow from 8 to 24, while I/O bandwidth per stack doubles and data rates increase significantly, resulting in an overall 34× bandwidth improvement. This scaling is supported by advancements in both DRAM technology and logic-based base dies fabricated on advanced nodes.

Interconnect performance is improved through finer pitch scaling in both 2.5D and 3D integration. In CoWoS, micro-bump pitch reduction enhances bandwidth density and energy efficiency, while in SoIC, scaling to ~4.5 µm bump pitch delivers up to 4× bandwidth density and substantial energy savings. Additionally, silicon photonics integration via CUPE optical engines provides high-speed, low-latency interconnects, achieving 5–10× power efficiency improvements and 10–20× latency reduction compared to traditional electrical links.
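The 4× bandwidth-density figure is what simple areal scaling predicts: connections per unit area go as the inverse square of bond pitch, so halving the pitch (taking the prior SoIC pitch as roughly 9 µm, an assumption for illustration) yields

$(9\,\mu\mathrm{m} / 4.5\,\mu\mathrm{m})^2 = 4\times$

the connection density, before any per-connection speed gains are counted.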

Power delivery and thermal management are identified as critical challenges in AI systems due to increasing compute density. TSMC addresses these through advanced capacitance solutions such as metal-insulator-metal (MIM) capacitors and embedded deep trench capacitors (eDDC), achieving over 10× improvements in capacitance density and reducing voltage droop significantly. Thermal optimization techniques—including improved packaging materials, hotspot spreading, and structural enhancements—reduce thermal resistance by up to 40%, ensuring reliable operation under high power conditions.

Bottom line: TSMC is advancing design methodologies through 3D IC design standardization and AI-driven automation. The introduction of “3D Blocks” as a modular design language aims to streamline 3D IC workflows and enhance collaboration across the ecosystem, with ongoing efforts toward IEEE standardization. Combined with generative AI and agent-based design optimization, these innovations promise substantial improvements in productivity and scalability for complex chip-package co-design.

Also Read:

Dr. Cliff Hou and the TSMC N2 Process Technology

TSMC Technology Symposium 2026 Overview


Solving the EDA tool fragmentation crisis
by Admin on 04-30-2026 at 10:00 am


By Samar Abd El-Hady and Wael ElManhawy

Design teams today face an uncomfortable truth: the specialized tools they need to verify modern ICs can’t reliably share the same design data. As geometries shrink below five nanometers and designs incorporate billions of transistors across multiple dies, no single Electronic Design Automation (EDA) tool can address every verification, analysis and modeling challenge.

Design teams routinely use specialized tools for parasitic extraction, power integrity analysis, electromagnetic simulation and soft error rate prediction. Each tool excels in its domain, but this creates a fundamental problem: how do you make sure that all these tools work from the same verified design data without manual translation, reformatting or error-prone data transfers?

This interoperability crisis demands a solution that can bridge the gap between verification and analysis tools. The Calibre Connectivity Interface (CCI) does this by transforming Layout vs. Schematic (LVS) verification data into a universal data source that downstream tools can query with precision and confidence.

Mining the SVDB: How CCI extracts verified design data

At its core, CCI operates on the Standard Verification Database (SVDB) generated during a Calibre nmLVS verification run. This database contains far more than simple pass or fail verification results. The SVDB captures the complete connectivity graph of the design, including layout geometry coordinates, net topology, device parameters, hierarchical relationships and the critical mapping between layout elements and their corresponding schematic or source names.

CCI provides a structured query interface to this rich dataset. Through the Query Server Tcl shell and Calibre YieldServer implementations, downstream tools can extract precisely the information they need. A typical CCI workflow begins with a completed LVS run that generates the SVDB. The CCI command file then specifies what data to extract and in what format. The interface processes these commands against the SVDB and outputs files tailored to the requirements of specific third-party tools.
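The exact CCI command syntax is not reproduced here, so the sketch below models the flow conceptually in Python: a database produced by LVS, a small command file selecting what to extract, and a query routine that resolves layout names to source names. SVDB, run_query, and GET_NET are hypothetical stand-ins for illustration, not Calibre's actual API.

from dataclasses import dataclass, field

@dataclass
class SVDB:
    # Stand-in for the Standard Verification Database produced by an LVS run.
    nets: dict = field(default_factory=dict)      # net name -> layout geometry
    devices: dict = field(default_factory=dict)   # instance -> device parameters
    xref: dict = field(default_factory=dict)      # layout name -> schematic/source name

def run_query(db: SVDB, commands: list[str]) -> dict:
    # Interpret a tiny command file against the database, one command per line.
    out = {}
    for cmd in commands:
        if cmd.startswith("GET_NET "):
            name = cmd.split(maxsplit=1)[1]
            out[name] = {
                "geometry": db.nets.get(name),
                "source_name": db.xref.get(name, name),  # fall back to layout name
            }
    return out

db = SVDB(nets={"VDD": [(0, 0, 10, 2)]}, xref={"VDD": "vdd_core"})
print(run_query(db, ["GET_NET VDD"]))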

Figure 1 illustrates this flow, showing how layout, source and rules feed into Calibre nmLVS, which generates the SVDB. CCI then acts as the bridge between this verified database and the diverse ecosystem of analysis tools.

Figure 1. The Calibre Connectivity Interface flow. The interface connects verification, analysis and design tools through a common protocol.

Feeding parasitic extraction tools with accurate connectivity data

Third-party parasitic extraction tools represent one of the most demanding integration scenarios. These tools need comprehensive access to geometric layouts, detailed connectivity information, net and instance names, device characteristics and port definitions. The accuracy of parasitic RC models depends entirely on the fidelity of this input data.

CCI is specifically engineered to provide all this essential data through flexible application programming interfaces (APIs). Each parasitic extraction tool can precisely query and retrieve the specific data it needs. Here’s how different tools leverage CCI:

Empyrean’s PEX tool uses CCI data to generate layout analysis with parasitic RC extraction and critical path netlists with RC annotation.

Phlexing’s GloryEX extraction tool leverages CCI to support advanced 3D modeling for planar gate, FinFET, gate-all-around and other complex device structures. GloryEX also handles sophisticated process modeling including chemical mechanical planarization, etch effects and multi-patterning, while providing high-speed capacitance table generation and pattern matching for 2.5D flows at both gate and transistor levels.

Synopsys StarRC and Cadence QRC demonstrate CCI’s ability to interface with industry-standard sign-off tools. Both tools benefit from dedicated APIs that provide real-time access to device-level layout data, robust SPICE model correlation, geometry-to-schematic mapping, automated net hierarchy tracing and seamless integration into full-chip sign-off flows.

Correlating electromagnetic analysis for high-frequency designs

For high-speed designs operating at multi-gigahertz frequencies, electromagnetic effects in critical signal paths can determine whether a design meets timing and signal integrity requirements. Siemens collaborated with Lorentz Solution, Inc. to integrate Calibre nmLVS with Lorentz PeakView products using CCI.

Together, the tools create a high-frequency design flow that delivers ease of use while enabling IC and 3D IC designers to develop post-layout solutions correlated with source and schematic names, devices and hierarchy. This correlation throughout the electromagnetic analysis workflow means you can trace results back to specific design elements for debugging and optimization.

Streamlining power integrity analysis with comprehensive grid data

Power delivery network analysis has become critical as voltage margins shrink and current densities increase. CCI integrates with mPower, the Siemens power integrity solution that provides comprehensive analysis for digital, analog and complex 3D IC architectures across all design flows.

This integration enables high-resolution voltage drop (IR) and electromigration (EM) analysis, full-chip power grid modeling and accurate power pin annotation with connectivity tracing. The key enabler is CCI’s ability to seamlessly provide all essential input data to the mPower flow—Annotated Geometry Files (AGF), detailed device data and cross-reference files. Figure 2 illustrates how CCI feeds this critical data into the mPower design import flow, ensuring accurate and efficient execution of power integrity analyses.

Figure 2. CCI provides all essential input data to the mPower flow.

Automating soft error analysis for radiation-hardened designs

Many semiconductor devices operate in harsh environments, from automotive applications to aerospace systems, making soft error analysis essential. CCI successfully interfaces with IROC Technologies, a leader in enhancing electronic system reliability through specialized EDA solutions.

IROC’s cell-level soft error detector, TFIT (Transistor Failure in Time), needs precise transistor drain and source diffusion coordinates from GDS files to perform its analysis. The output consists of detailed sensitivity maps identifying vulnerable zones within the design. Before integrating with CCI, IROC relied on a custom LVS module with limited technology support and error-prone workflows.

By integrating with Calibre nmLVS through CCI, a new reliable and automated flow emerged. Figure 3 compares the previous TFIT design import flow with the new automated flow using Calibre nmLVS and CCI. The new flow extracts accurate drain and source locations, executes the TFIT flow with precise input data, eliminates previous technical limitations and streamlines the entire analysis process.

Figure 3. Comparison of the previous flow for TFIT design import (left) and the new flow that uses Calibre nmLVS and CCI.

Deploying CCI across multi-tool verification workflows

To understand the practical value of CCI, consider these real-world applications across multi-tool workflows. In automotive IC sign-off, design teams combine parasitic extraction using StarRC with soft error rate analysis using TFIT. CCI makes sure both tools get consistent, verified design data, so you can count on functional correctness and reliability over extended temperature and voltage ranges.

For 2.5D and 3D IC integration, a single design stack benefits from CCI feeding both mPower and GloryEX simultaneously. This lets you run comprehensive interposer parasitic analysis and package power analysis from a common verified database, eliminating potential inconsistencies from using different data sources.

Analog and mixed-signal designers leverage CCI for electromagnetic validation, parasitic-aware simulation and noise coupling prediction with various third-party tools. The ability to maintain correlation between layout and schematic throughout these analyses proves crucial for sensitive analog circuits where small parasitic differences can affect performance.

Building a foundation for seamless multi-tool integration

In today’s complex IC design landscape, seamless collaboration between EDA tools has evolved from a convenience to an absolute necessity. The Calibre Connectivity Interface serves as a critical integration hub, enabling efficient data exchange and communication across diverse design and verification workflows.

By transforming LVS verification data from a simple pass or fail check into a comprehensive, queryable design database, CCI provides a robust foundation for the specialized tool ecosystem that modern IC design requires. As design complexity continues to increase and new analysis requirements emerge, this foundational integration technology proves indispensable for enhancing design accuracy, streamlining verification cycles and accelerating time-to-market for cutting-edge semiconductor innovation.

Samar Abd El-Hady is an Advanced Product Engineer in the Calibre Design to Silicon Division at Siemens EDA, a part of Siemens Digital Industries Software, supporting Calibre LVS, layers promotion, CCI, V2LVS and ML activities. Samar has been with Siemens EDA for over six years, joining as a product engineer supporting Calibre LVS after receiving her BS in electronics and communication engineering from Ain Shams University in Cairo, Egypt, in 2019.

Wael ElManhawy is a Director in Calibre Management at Siemens EDA, responsible for leading the Calibre LVS product line. He brings 29 years of experience in VLSI and EDA, specializing in physical and circuit verification, including 26 years at Siemens EDA and 21 years working on Calibre, where he has played a key role in shaping the strategy, technology, and customer adoption of various Calibre products at the most advanced nodes.

Also Read:

Exploring the Hidden Complexity of Modern Power Electronics Design – A Siemens White Paper

Siemens Wins Best in Show Award at Chiplet Summit and Targets Broad 3D IC Design Enablement

Siemens Fuse EDA AI Agent Releases to Orchestrate Agentic Semiconductor and PCB Design


Dr. Y.J. Mii on TSMC Technology Leadership in 2026
by Daniel Nenni on 04-30-2026 at 8:00 am

Dr. Y.J. Mii is Executive Vice President and Co-Chief Operating Officer at Taiwan Semiconductor Manufacturing Co. Ltd. (TSMC).

Dr. Y.J. Mii joined TSMC in 1994 as a manager at Fab 3 before moving into the company’s research and development organization in 2001. He was appointed Vice President of R&D in 2011 and later advanced to Senior Vice President in November 2016.

Over more than 20 years at TSMC, Dr. Mii has played a central role in advancing and manufacturing cutting-edge CMOS technologies across both fab operations and R&D. He led the successful development of key process nodes, including 90nm, 40nm, and 28nm. In addition, he has driven innovation in more advanced technologies—such as 16nm, 7nm, 5nm, and 3nm—helping sustain TSMC’s leadership position in the global semiconductor foundry industry.

In recognition of his leadership in research and development, Dr. Mii received the IEEE Frederik Philips Award in 2022. Prior to joining TSMC, he worked as a research staff member at the IBM Research Center.

Dr. Mii holds 34 patents worldwide, including 25 granted in the United States. He earned his bachelor’s degree in electrical engineering from National Taiwan University, and both his master’s and Ph.D. in electrical engineering from the University of California, Los Angeles.

Dr. Y.J. Mii’s presentation outlines the company’s continued leadership in semiconductor technology and its roadmap for future innovation across advanced logic, system integration, and specialty platforms. The talk emphasizes TSMC’s commitment to delivering cutting-edge technologies that support next-generation applications such as AI, high-performance computing (HPC), and mobile devices.

TSMC is introducing several new advanced nodes, including A14, A13, and A12, which extend its leadership into what is described as the “Angstrom era.” The A14 node represents a second-generation nanosheet transistor technology and incorporates NanoFlex Pro, achieving significant improvements in performance, power, and area (PPA). Compared to the 2nm (N2) node, A14 delivers 10–15% speed improvement or 25–30% power reduction, along with notable density gains. Production is expected by 2028. Building on this, A13 offers further optimization, including a 6% die size reduction through optical shrink and improved efficiency, while maintaining backward compatibility with A14 designs.

TSMC’s 2nm family is also expanding, including N2, N2P, N2X, and N2U. These technologies are already seeing strong customer adoption, particularly driven by AI and HPC demands. N2 entered production recently, with N2P and A16 progressing toward volume production. The N2U variant further enhances performance and efficiency while maintaining compatibility with N2P, offering incremental speed and power improvements. The rapid increase in customer tapeouts highlights the strong industry demand for these advanced nodes.

Beyond nanosheet transistors, TSMC is investing in future innovations such as complementary field-effect transistors (CFET), which stack nFET and pFET vertically to enable continued scaling. The company has already demonstrated early CFET implementations and advanced SRAM designs with reduced footprint. Additionally, research into two-dimensional materials shows significant improvements in transistor performance, suggesting further opportunities for scaling and energy efficiency.

Interconnect technology is another key focus area. TSMC is improving copper-based interconnects by reducing resistance and capacitance through new materials and structures. It is also exploring alternative materials and air-gap techniques to further enhance performance. Long-term research includes novel 2D conductors that could dramatically reduce contact resistance compared to existing solutions.

In system integration, TSMC is advancing its HPC platform through technologies such as CoWoS, SoIC, and SoW. CoWoS remains a central platform for scaling, with increasing reticle sizes and high-bandwidth memory (HBM) integration planned through 2030. SoW technology aims to integrate entire systems on a wafer, enabling massive computing capabilities for AI workloads. Meanwhile, SoIC 3D stacking continues to evolve, improving interconnect density and power efficiency.

The company is also developing photonic integration technologies like the Compact Universal Photonic Engine (COUPE), which enables high-speed, low-power optical data transmission. These solutions significantly outperform traditional copper interconnects in both power efficiency and latency, and future advancements aim to further increase bandwidth and scalability.

In the specialty technology segment, TSMC highlights advancements in automotive, RF, memory, and display technologies. The N3A node is now fully automotive-qualified, while future nodes like N2A are in development. RF technologies such as N4C RF deliver improved power efficiency and performance for edge AI applications. In memory, embedded flash is being replaced by alternatives like resistive RAM (RRAM) and MRAM, which offer better scalability and performance. Display innovations, including high-voltage platforms, enable more efficient and compact designs for smartphones and smart glasses.

Bottom line: TSMC’s roadmap demonstrates a comprehensive approach to semiconductor innovation, spanning advanced nodes, new transistor architectures, system integration, and specialized technologies. The company aims to empower customers with industry-leading solutions that drive future computing advancements and enable emerging applications across multiple industries.

Also Read: 

Enabling Next-Generation AI Through Advanced Packaging and 3D Fabric Integration

Dr. Cliff Hou and the TSMC N2 Process Technology

The Shift to System-Level AI Drives Next-Generation Silicon

TSMC Technology Symposium 2026 Overview


Advanced Microelectronics Paving the Way for 6G with Alphacore
by Daniel Nenni on 04-30-2026 at 6:00 am


The world stands at the threshold of a new era in wireless communication as research communities, standards bodies, and technology companies begin shaping what will become sixth generation mobile networks. While fifth generation systems are still expanding across global markets, attention has already shifted toward defining the capabilities, performance targets, and architectural principles of 6G. This transition is not merely about increasing data rates. It represents a broader transformation in how networks sense, compute, and interact with the physical world. At the heart of this transformation lies microelectronics, whose progress will determine whether ambitious visions can become practical realities.

The development of 6G is guided in part by the International Telecommunication Union through its IMT-2030 framework. This framework outlines performance expectations that extend beyond traditional metrics such as throughput and latency. Future networks are expected to integrate sensing and communication, embed artificial intelligence deeply into their operation, and provide seamless connectivity across terrestrial and non-terrestrial domains. In parallel, industry groups such as the Third Generation Partnership Project are preparing study items and future specifications that will eventually formalize these goals into implementable standards. Yet standards alone cannot create a new generation of wireless systems. The feasibility of 6G depends on the capabilities of semiconductor technologies that must support higher frequencies, wider bandwidths, and tighter integration than ever before.

One of the most visible shifts in 6G research is the exploration of new spectrum regions, including upper millimeter-wave and sub-terahertz bands. These frequencies promise extremely wide channel bandwidths and unprecedented peak data rates. However, operating above one hundred gigahertz introduces formidable challenges. Signal attenuation increases, power amplifier efficiency declines, and maintaining linearity becomes more difficult. Thermal constraints intensify as devices attempt to deliver greater output power in compact form factors. In this regime, the physical properties of semiconductor materials, device geometries, and packaging techniques become decisive factors in system performance.
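A free-space path-loss estimate gives a sense of scale. With $\mathrm{FSPL} = 20\log_{10}(4\pi d f / c)$, raising the carrier from 3.5 GHz to 140 GHz (illustrative frequencies chosen for the arithmetic) adds

$20\log_{10}(140/3.5) \approx 32\ \mathrm{dB}$

of path loss at any fixed distance, before atmospheric absorption and reduced amplifier efficiency are even counted.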

Massive antenna arrays and advanced beamforming further amplify the demands placed on microelectronics. Future base stations and terminals may rely on dense phased arrays that require precise timing, calibration, and mixed signal processing. Each antenna element must be supported by radio frequency circuitry and data converters capable of handling wide instantaneous bandwidths. As arrays grow larger, integration density and power efficiency become critical. The burden on silicon is not only to process signals but to do so within strict energy budgets that align with sustainability goals and practical deployment constraints.

Another defining feature of 6G is the integration of artificial intelligence into network operation. Rather than treating intelligence as an overlay, future systems are expected to incorporate learning and optimization directly into the air interface, resource management, and service orchestration layers. This shift increases the need for specialized accelerators and efficient digital processing units at the edge of the network. Delivering high compute performance per watt will require careful co design of logic, memory, and radio frequency components. Heterogeneous integration techniques that combine complementary semiconductor processes within a single package are likely to play a central role.

Advanced packaging is emerging as a key enabler rather than a secondary consideration. At very high frequencies, interconnect parasitics and package losses can significantly degrade signal integrity. Shortening signal paths through two and three dimensional integration can reduce these effects while enabling tighter coupling between antennas and radio circuitry. By combining silicon based logic with silicon germanium or compound semiconductor devices optimized for high frequency operation, designers can exploit the strengths of multiple technologies within one system.

Bottom line: The journey to 6G will be shaped by the pace of innovation in microelectronics research and development. Achieving the goals envisioned for IMT-2030 requires more than incremental improvements. It calls for breakthroughs in device efficiency, converter performance, thermal management, and system integration. As the industry moves from conceptual studies to formal specifications, the collaboration between standards experts and semiconductor engineers will be essential. Only by aligning ambitious performance targets with the realities of physics and manufacturing can 6G evolve from aspiration to deployment, delivering networks that are not only faster but also more intelligent, reliable, and deeply integrated into the fabric of society.

For more detailed information about the importance of microelectronics in 5G/6G and Alphacore’s role in this development, download the whitepapers “Microelectronics Paving the Way for 6G” and “ML in Microelectronics”. You can also explore analog, mixed-signal and RF solutions on Alphacore’s website at https://alphacoreinc.com/analog-mixed-signal-rf-solutions/ and reach out for further information via https://alphacoreinc.com/analog-mixed-signal/contacts/#Form

Reference terms:
  1. Massive Machine Type Communications (mMTC) is a 5G technology designed to support connectivity for billions of IoT devices.
  2. Integrated Sensing and Communication (ISAC) is a cornerstone 6G technology that merges wireless communication with radar-like sensing, allowing network infrastructure to detect, track, and image objects in real-time.
  3. Enhanced Mobile Broadband (eMBB)-centric private networks are tailored 5G deployments designed specifically to provide high-speed, high-capacity connectivity within a defined, private area.
  4. SMART: Scalable Modular Architecture for RF Transceivers
Also Read:

A Tour of Advanced Data Conversion with Alphacore

Analog to Digital Converter Circuits for Communications, AI and Automotive

High-speed, low-power, Hybrid ADC at IP-SoC

CEO Interview: Dr. Esko Mikkola of Alphacore


Enabling Next-Generation AI Through Advanced Packaging and 3D Fabric Integration
by Kalar Rajendiran on 04-29-2026 at 10:00 am

CoWoS Enables AI Compute Scaling

The rapid rise of artificial intelligence is fundamentally reshaping computing architectures. As AI models scale toward trillions of parameters, traditional approaches to performance improvement are no longer sufficient. Instead, the industry is entering a new era where system-level innovation, advanced packaging, and 3D integration are becoming the primary drivers of progress. This shift reflects a broader transition in computing, where performance gains increasingly depend on how well entire systems are designed and integrated, rather than how small individual transistors can become.

The End of One-Dimensional Scaling

AI compute demand is growing at an exponential rate, creating a widening gap between required performance and what conventional silicon scaling can deliver. Bridging this gap requires innovation beyond the chip itself. The most important shift is that AI performance is now determined at the system level rather than purely at the silicon level. Future gains will depend on how effectively compute, memory, interconnect, and power systems are integrated into a cohesive whole. This marks a transition from device-centric optimization to full-stack co-design, extending from transistor technology all the way to data center architecture.

Data Movement Is the New Bottleneck

A critical constraint in modern AI systems is no longer computation, but data movement. Transporting data across chips can consume up to 50 times more energy than moving data within a single chip. At the same time, data transfer can account for the majority of system activity, significantly reducing accelerator utilization due to communication delays. This shift makes interconnect efficiency a central design priority. Improving bandwidth, reducing latency, and minimizing energy per bit are now essential to unlocking overall system performance.
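A quick energy budget makes the stakes concrete. Using illustrative figures of about 5 pJ/bit for a chip-to-chip link versus 0.1 pJ/bit on-die (assumed numbers for the arithmetic, consistent with the 50x ratio above), a single 1 TB/s link dissipates

$P = 5\,\mathrm{pJ/bit} \times 8\times10^{12}\,\mathrm{bit/s} = 40\ \mathrm{W}$

versus roughly 0.8 W if the same traffic stayed on-chip.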

The Memory Wall Is Getting Worse

As AI models continue to scale, memory demands are increasing even faster than compute capabilities. Emerging workloads, such as long-context processing and multimodal AI, are driving exponential growth in both memory capacity and bandwidth requirements. Systems are transitioning from gigabyte-scale memory to terabyte-scale configurations, while also demanding lower latency. However, memory technology is not advancing at the same pace as compute, creating a widening imbalance. Overcoming this “memory wall” is therefore essential for sustaining AI progress, and it is driving rapid innovation in high-bandwidth memory and memory integration strategies.

Power and Thermal Constraints Are Critical

The increase in compute density, particularly with the adoption of 3D stacking technologies, has led to a corresponding rise in power density and heat generation. These factors are quickly becoming limiting constraints for AI system scaling. Without significant advancements in power delivery, energy efficiency, and thermal management, performance gains cannot be sustained. As a result, power and cooling are no longer secondary considerations but have become central to system design and overall performance.

3D Fabric Technologies: The New Foundation

To address these challenges, advanced 3D fabric technologies are emerging as the foundation of next-generation AI systems. These technologies enable the integration of multiple chips and components into highly efficient, high-performance systems. Innovations such as 3D chip stacking allow for dramatically higher interconnect density, reducing both data movement distance and energy consumption. Advanced packaging platforms make it possible to combine logic and memory in close proximity, enabling massive bandwidth and capacity scaling. At the same time, high-bandwidth memory continues to evolve, delivering higher throughput and improved energy efficiency. Together, these advancements position packaging not merely as a supporting technology, but as a primary driver of system performance.

Co-Packaged Optics: Rethinking Interconnects

As electrical interconnects approach their physical limits, co-packaged optics is emerging as a promising solution for high-speed data transfer. By integrating photonics directly with compute hardware, this approach enables significant improvements in both power efficiency and latency. It also provides a scalable path forward for data center networking, where the need for higher bandwidth and lower energy consumption continues to grow. This evolution signals a broader shift toward optical technologies as a key enabler of future AI infrastructure.

System-on-Wafer and Wafer-Scale Integration

Looking further ahead, system integration is advancing toward wafer-scale architectures, where entire systems are built on a single substrate. This approach enables unprecedented levels of integration density while reducing the overhead associated with traditional interconnects. By minimizing communication distances and improving efficiency, wafer-scale integration offers a powerful pathway for scaling AI performance beyond the limits of conventional packaging methods.

The Rise of System Technology Co-Optimization (STCO)

As AI systems grow more complex, optimizing individual components in isolation is no longer sufficient. The industry is increasingly adopting System Technology Co-Optimization, an approach that simultaneously considers chip design, packaging, interconnects, power delivery, and thermal behavior. This holistic methodology ensures that all parts of the system are designed to work together efficiently, enabling better overall performance and energy efficiency. It represents a fundamental shift in how hardware systems are conceived and developed.

Summary

The future of AI hardware will not be defined by silicon scaling alone. Instead, it will be shaped by advances in packaging, interconnects, memory systems, and power efficiency, all brought together through system-level design. In this new paradigm, the system itself becomes the primary unit of innovation. Success will depend on the ability to integrate across multiple domains and optimize them collectively. As this transformation continues, it is clear that the “system” has effectively become the new chip, redefining how performance is achieved in the age of AI.

Also Read:

Dr. Cliff Hou and the TSMC N2 Process Technology

The Shift to System-Level AI Drives Next-Generation Silicon

All in One Bluetooth Audio: A Complete Solution on a TSMC 12nm Single Die

TSMC Technology Symposium 2026 Overview


WAVE-N Specialized Video Processing NPU for Edge AI Systems
by Daniel Nenni on 04-29-2026 at 6:00 am


The rapid growth of AI applications in edge devices has created a strong demand for specialized hardware capable of performing high-performance neural network inference under strict power and latency constraints. Traditional CPUs and GPUs often struggle to meet the efficiency requirements of embedded and mobile systems. As a result, dedicated neural processing units (NPUs) have emerged as a key technology for accelerating deep learning workloads. The WAVE-N specialized video processing NPU, developed by Chips&Media, represents a modern approach to integrating AI acceleration with video processing pipelines for next-generation edge devices.

At the core of the WAVE-N architecture is the need to address the computational demands of deep learning models used in computer vision and video analytics. Recent trends in AI development demonstrate that increasing model size and complexity often leads to improved accuracy and performance. However, this scaling law significantly increases computational requirements. Edge devices such as smart cameras, drones, autonomous robots, and automotive systems cannot rely on cloud infrastructure due to latency, privacy, and connectivity constraints. Therefore, local processing with highly optimized hardware is essential.

The WAVE-N NPU is designed specifically to accelerate neural network workloads related to video and image analysis. These workloads include object detection, motion tracking, image classification, super-resolution, and other computer vision tasks. Unlike general-purpose processors, an NPU implements specialized hardware units optimized for matrix multiplication, convolution operations, and tensor processing, which are the fundamental building blocks of deep neural networks. By implementing these operations in dedicated hardware, the NPU achieves significantly higher throughput and energy efficiency compared with CPU-based processing.

One of the key architectural features of WAVE-N is its parallel processing capability. Neural network inference involves executing a large number of arithmetic operations on multidimensional data structures known as tensors. WAVE-N uses a highly parallel compute engine that distributes these operations across multiple processing elements, allowing simultaneous execution of convolution and activation functions. This massively parallel design dramatically reduces inference latency and increases throughput for real-time video applications.

Another important component of the WAVE-N system is its optimized memory architecture. Memory bandwidth and data movement are critical bottlenecks in AI accelerators. Large neural network models require frequent access to weights, feature maps, and intermediate results. WAVE-N addresses this challenge by integrating high-efficiency on-chip memory buffers and intelligent data reuse mechanisms. These features minimize external memory access and reduce energy consumption while maintaining high computational performance.
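A toy example of the reuse idea: compute a convolution tile by tile so that the weights and one input patch stay resident in a small buffer while a whole output tile is produced, amortizing each external fetch over many operations. This is a generic sketch of the technique, not WAVE-N's actual microarchitecture.

import numpy as np

def conv2d_tiled(x, w, tile=16):
    # Valid-mode 2-D convolution computed one output tile at a time.
    # Each tile reuses the weights and a single cached input patch,
    # mimicking how an NPU minimizes external memory traffic.
    (H, W), (kh, kw) = x.shape, w.shape
    oh, ow = H - kh + 1, W - kw + 1
    y = np.zeros((oh, ow))
    for ti in range(0, oh, tile):
        for tj in range(0, ow, tile):
            # One external fetch per tile: the input patch covering it.
            patch = x[ti:min(ti + tile, oh) + kh - 1,
                      tj:min(tj + tile, ow) + kw - 1]
            for i in range(patch.shape[0] - kh + 1):
                for j in range(patch.shape[1] - kw + 1):
                    y[ti + i, tj + j] = np.sum(patch[i:i + kh, j:j + kw] * w)
    return y

x, w = np.random.rand(64, 64), np.random.rand(3, 3)
# Tiling changes the schedule, not the result.
assert np.allclose(conv2d_tiled(x, w, tile=16), conv2d_tiled(x, w, tile=64))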

Software support also plays a vital role in the usability of hardware accelerators. The WAVE-N platform includes a software simulation and development package that enables developers to design, test, and optimize neural network models before deployment on hardware. This simulation environment allows engineers to evaluate performance characteristics, estimate throughput, and refine model architecture without requiring physical silicon. Such tools significantly shorten development cycles and facilitate integration into complex embedded systems.

In addition to raw performance, scalability and flexibility are critical design goals. The WAVE-N architecture supports various neural network frameworks and can be configured for different performance targets depending on the application. For example, lightweight configurations may be used in low-power IoT devices, while larger configurations can support high-resolution video analytics in smart surveillance systems or automotive platforms.

The applications of specialized video processing NPUs extend across many industries. In smart security systems, WAVE-N can enable real-time object detection and behavioral analysis directly on edge cameras. In automotive environments, the NPU can accelerate driver assistance features such as pedestrian detection, lane recognition, and traffic monitoring. Similarly, robotics and industrial automation systems can leverage the hardware for rapid visual perception and decision-making.

Bottom line:  The WAVE-N specialized video processing NPU represents a significant advancement in edge AI hardware design. By combining parallel computation, optimized memory management, and dedicated neural network acceleration, it delivers high performance while maintaining power efficiency. As AI models continue to grow in complexity and edge computing becomes increasingly important, specialized NPUs like WAVE-N will play a critical role in enabling intelligent, real-time processing directly on embedded devices.

CONTACT CHIPS&MEDIA

Also Read:

Chips&Media and Visionary.ai Unveil the World’s First AI-Based Full Image Signal Processor, Redefining the Future of Image Quality

CEO Interview with Steve Kim of Chips&Media

Complex PCB signoff challenges


Complex PCB signoff challenges
by Daniel Payne on 04-28-2026 at 10:00 am


Many complex PCB designs carry high data-rate signals like USB, PCIe, DDR and HDMI, which call for more thorough verification methods to ensure compliance and to mitigate signal integrity, power integrity and EMI/EMC issues. Siemens has a methodology that uses automated rule-based electrical verification with an EDA tool, HyperLynx DRC; this blog stems from reading their white paper on it. The old method of manual verification is simply too slow and incomplete to ensure no respins.

The complexity and density of PCBs have increased significantly over the last 20 years, creating the need for multiple specialized verification experts, which can add more bottlenecks in the design process. Design teams require detailed knowledge of protocols and new verification techniques to be successful.

Electrical verification can take significant time for tasks like model set up and validation, often leading to delays. Models range from datasheets to complex S-parameters and even extracted 3D structures. EDA tool complexity can reduce engineering efficiency, so using automation helps improve productivity. Stitching together multiple point tools from different vendors increases CAD integration efforts. The goal should be automating tasks and shifting verification to earlier in the design process.

Point EDA Tools

Traditional manual inspection and verification are both time-consuming and prone to human error, especially when performed only at the end of design. The manual approach relies on visual checks done layer-by-layer and net-by-net, with only critical nets and corner cases manually simulated. This leads to only partial inspection, risking missed issues. In contrast, automated DRCs can be run throughout the design cycle, saving time and reducing errors.

Proper targeting of PCB areas via object lists and parameter settings is crucial for efficient rule checks using HyperLynx DRC. There are system-generated object lists that filter components automatically.

System object lists

In addition, there are user-defined object lists to target specific signals or protocols, where parameters reflect actual design choices, such as high-speed net names or voltage levels. Proper setup reduces false violations and streamlines verification.

DDR4 net naming user list

Rules in HyperLynx are organized into groups within .hldset and .hldproj files, enabling reuse across projects. The default .hldset file provides a starting point for saving and capturing user-defined object lists and rule groups. Custom rule libraries can be created for different technologies and shared between PCB projects, and their hierarchical organization allows inheritance. Reusable rule setups improve consistency and efficiency.

HyperLynx DRC detects EMI/EMC issues like metal islands and return path breaks rapidly, typically in seconds, whereas visual inspection takes 30 minutes to an hour. As an example, one metal island detection run completed in just 2 seconds.

There are even rule checks for return path continuity during layer changes, so violations such as reference plane breaks are highlighted with correction advice. Detection time for EMI issues is under a minute, enabling quick fixes with HyperLynx.

Automated signal integrity (SI) rules efficiently identify issues like impedance discontinuities and nets crossing gaps, so you can focus on easy fixes first and reduce the modeling workload. Key rules include impedance and differential impedance checks. Nets crossing gaps are automatically checked for impedance changes and reflection risks.

Power delivery is verified through rules that ensure proper decoupling and grounding. The out-of-the-box rules cover decoupling capacitor placement and coverage. Checks include minimum distance from IC power pins to decoupling capacitors. These rules help prevent AC analysis failures and validate layout spacing and component placement for effective power delivery.

Power integrity rules

HyperLynx DRC offers scripting environments for creating your own tailored rules, which you can learn more about through Siemens training. Custom rules can be written in VBScript or Python to address complex or proprietary design needs. There are over 100 pre-defined rules spanning SI, power integrity (PI), EMI/EMC, and high-voltage safety, forming a foundation for effective verification. Siemens also offers a Getting Started Workshop to get you up to speed quickly.
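The white paper confirms Python as a rule-scripting option, but its object model isn't shown here, so the following is a minimal sketch of the shape such a custom rule could take. Segment, Net, and check_impedance are invented stand-ins, not the actual HyperLynx scripting API.

from dataclasses import dataclass

@dataclass
class Segment:
    layer: str
    impedance_ohms: float

@dataclass
class Net:
    name: str
    segments: list

def check_impedance(net: Net, target: float = 50.0, tol: float = 0.10) -> list[str]:
    # Flag any segment whose impedance strays more than tol from the target.
    violations = []
    for seg in net.segments:
        if abs(seg.impedance_ohms - target) / target > tol:
            violations.append(
                f"{net.name}: {seg.layer} segment is {seg.impedance_ohms:.1f} ohms "
                f"(target {target} within {tol:.0%})")
    return violations

net = Net("PCIE_TX0_P", [Segment("L1", 50.4), Segment("L3", 62.0)])
for v in check_impedance(net):
    print(v)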

Summary

PCB verification and sign-off involve multiple steps and require an understanding of the SI, PI and EMI/EMC issues that cause board respins. An automated, rule-based verification approach speeds up PCB sign-off, reduces manual effort, and minimizes the risk of respins.

Instead of using visual inspection only once at the end of a project, the automated DRC approach in HyperLynx DRC enables continuous verification during the design process.

Read the entire 27-page white paper online.



Dr. Cliff Hou and the TSMC N2 Process Technology
by Daniel Nenni on 04-28-2026 at 8:00 am

Dr. Cliff Hou is Senior Vice President, Deputy Co-COO, and Chief Information Security Officer at TSMC, where he also serves as deputy to Y.P. Chyn. Over a long career with the company since joining in 1997, he has played a pivotal role in advancing TSMC’s design technology and ecosystem strategy.

Before assuming his current position, Dr. Hou held several key leadership roles. He served as Vice President of Design and Technology Platform from 2011 to 2018, and later as Vice President of Technology Development starting in August 2018. Earlier in his career, from 1997 to 2007, he established TSMC’s technology design kit and reference flow development organizations, laying the foundation for its design enablement infrastructure.

Over the past decade, Dr. Hou has been instrumental in building TSMC Open Innovation Platform (OIP), which has grown into one of the most comprehensive design ecosystems in the global semiconductor industry. His work in reference flows and design-for-manufacturing (DFM) has significantly lowered barriers to IC design and improved accessibility for customers.

In recognition of his contributions, Dr. Hou received the National Manager Excellence Award in 2010. He also led TSMC’s OIP project team to win the National Industry Innovation Award in 2011, presented by the Ministry of Economic Affairs in Taiwan.

Prior to joining TSMC, Dr. Hou worked at the Industrial Technology Research Institute (ITRI/CCL) as a section manager focused on design environments. He also served as an associate professor at I-Shou University (formerly Kaohsiung Polytechnic Institute).

Dr. Hou holds 44 U.S. patents and serves on the board of directors of Global Unichip Corp. He earned his bachelor’s degree in control engineering from National Chiao Tung University and a Ph.D. in electrical and computer engineering from Syracuse University.

Cliff’s presentation outlined the significant progress and achievements made by TSMC over the past year in semiconductor manufacturing, focusing on technology advancement, capacity expansion, advanced packaging, global footprint, and sustainability initiatives.

In 2025, TSMC made strong strides in both cutting-edge technology and production capacity. The company’s most advanced node, TSMC N2, has already entered volume production. Despite its increased complexity compared to previous generations, TSMC has achieved an improved yield learning curve, demonstrating its manufacturing excellence. The next iteration, featuring backside power delivery, remains on track and is progressing according to schedule.

TSMC has also made advancements in automotive technology, with its N3A node now production-ready and capable of meeting stringent quality requirements. Across all advanced nodes, including 3nm, 5nm, and 7nm, the company continues to refine performance and reliability to support a wide range of applications. Additionally, TSMC is aggressively expanding its advanced packaging technologies to meet growing demand for HPC and AI applications.

A major highlight is the rapid expansion of 2nm production capacity. TSMC is ramping up five phases of 2nm fabs within a single year—an unprecedented pace. As a result, first-year output for 2nm is projected to be 45% higher than that of the previous 3nm generation. Looking ahead, the company plans to further increase 2nm capacity by approximately 70% between 2026 and 2028. Meanwhile, combined capacity for 3nm and 5nm technologies is expected to grow steadily by about 25% over several years.

To address the time constraints associated with building new fabs, TSMC is leveraging artificial intelligence and digital transformation to optimize existing facilities. AI-driven systems improve scheduling, equipment efficiency, and process optimization, enabling higher throughput and reduced production cycle times. Generative AI is also used to fine-tune process parameters, while data analytics helps minimize downtime and maximize tool utilization. These innovations allow TSMC to extract greater productivity from existing capacity while new fabs are under construction.

Demand for AI and HPC applications is a key driver of growth. From 2022 to 2026, the number of wafers shipped for AI accelerators is expected to increase elevenfold. Notably, large-die chips (over 500 mm²) are also seeing strong growth, with shipments increasing sixfold. TSMC’s accumulated experience across multiple generations has enabled consistent improvements in yield and defect density, even for these complex designs.

Beyond leading-edge technologies, TSMC continues to invest in mature nodes, including specialty processes such as radio frequency, high-voltage, analog, embedded memory, and image sensors. The company aims to remain the leading provider in this segment while expanding capacity in a measured and strategic manner.

In advanced packaging, TSMC is pushing the boundaries of 3D integration technologies, such as CoWoS and SoIC. These technologies are critical for enabling chiplet-based architectures and high-bandwidth memory integration. The company has reduced the time required to transition from development to high-volume manufacturing—by 30% for CoWoS and 75% for SoIC—helping customers bring products to market faster. Collaboration with ecosystem partners, including material suppliers and testing providers, has further improved yield and manufacturing efficiency. Packaging capacity is also expanding aggressively, with significant growth projected through 2027.

TSMC’s global expansion strategy is another key focus. The company is doubling its pace of fab construction, with nine new or converted phases planned annually in 2025 and 2026—twice the historical average. This expansion extends beyond Taiwan to include major investments in the United States, Japan, and Germany.

In Arizona, TSMC’s first fab is already in production, with additional phases under construction targeting advanced nodes such as 3nm and 2nm. The company is also planning advanced packaging facilities and acquiring additional land to support long-term growth. In Japan, the Kumamoto fab has entered production and is expanding capacity, while a second fab is being developed with a revised focus on 3nm technology. In Germany, a new fab in Dresden is under construction, targeting automotive and industrial applications. Across these regions, TSMC has demonstrated the ability to replicate high yields comparable to its Taiwan operations.

Sustainability and green manufacturing are central to TSMC’s long-term vision. The company aims to achieve net-zero carbon emissions by 2050 and has already reduced emissions by 3.8 million tons in 2025 alone. Resource recycling is another priority, with goals of 70% internal recycling and up to 98% total recycling by 2030. Water stewardship initiatives target 100% water positivity by the 2040s, with significant progress already made through reclaimed water usage and conservation efforts.

Bottom line: TSMC is aggressively advancing semiconductor technology while scaling capacity to meet surging demand, particularly in AI and HPC. Through innovation in manufacturing, packaging, and AI-driven optimization, combined with global expansion and sustainability commitments, the company is positioning itself to remain a leader in the semiconductor industry for years to come.

Also Read:

TSMC Technology Symposium 2026 Overview

TSMC to Elon Musk: There are no Shortcuts in Building Fabs!

TSMC Technology Symposium 2026: Advancing the Future of Semiconductor Innovation