Cost-Effective and Scalable: A Smarter Choice for RISC-V Development
by Daniel Nenni on 04-25-2025 at 6:00 am

The RISC-V ecosystem is witnessing remarkable growth, driven by increasing industry adoption and a thriving open-source community. As companies and developers seek customizable computing solutions, RISC-V has become a top choice. Providing a scalable and cost-effective ISA foundation, RISC-V enables high-performance and security-enhanced implementations, making it ideal for next-generation digital infrastructure.

RISC-V’s modular ISA and growing ecosystem support a wide range of configurations, making it highly adaptable across applications. Designers have options to integrate extensions such as vector processing, floating-point, atomic operations, and compressed instructions. Furthermore, its scalability spans from single-core to multi-core architectures and can incorporate optimizations like out-of-order execution to enhance performance. To achieve an optimal balance of performance, power efficiency, and scalability, selecting the right RISC-V microarchitecture and system integration strategy is crucial.
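
To make that modularity concrete, here is a small illustrative Python sketch that decodes a RISC-V ISA string of the kind used when configuring cores and toolchains. The extension letters follow the standard RISC-V naming convention; the decode_isa function itself is invented for illustration:

```python
# Illustrative: decode a RISC-V ISA string into its modular extensions.
# Letter meanings follow the standard RISC-V naming convention.
EXTENSIONS = {
    "i": "base integer instructions",
    "m": "integer multiply/divide",
    "a": "atomic operations",
    "f": "single-precision floating-point",
    "d": "double-precision floating-point",
    "c": "compressed instructions",
    "v": "vector processing",
}

def decode_isa(isa: str) -> None:
    isa = isa.lower()
    assert isa[:4] in ("rv32", "rv64"), "expected an rv32* or rv64* string"
    print(f"base width: {isa[2:4]}-bit")
    for letter in isa[4:]:
        print(f"  {letter}: {EXTENSIONS.get(letter, 'other extension')}")

decode_isa("rv64imafdcv")  # a fully featured configuration with vectors
```

A minimal microcontroller-class core might be rv32imc, while the larger configurations discussed here add atomics, floating-point, and vectors.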

For entry-level RISC-V development, single-core open-source implementations are well-suited for small to medium capacity FPGA-based platforms. Among them, Xilinx VU9P-based solutions, such as the VCU118 development board, have been widely adopted by engineers for their balanced capabilities and accessibility. The S2C Prodigy S7-9P Logic System takes this foundation even further. Built on the same powerful VU9P FPGA with 14M ASIC gates, it enhances usability, expandability, and cost-efficiency. With seamless integration of daughter cards and an advanced toolchain, the S7-9P offers an ideal fit for small to medium-scale RISC-V designs, empowering developers to accelerate their innovation with confidence.

Media-Ready Prototyping: MIPI and HDMI for Real-World Applications

As multimedia processing becomes increasingly integral to RISC-V applications, the demand for high-speed data handling and versatile prototyping tools has never been greater. S2C prototyping systems meet this need with support for MIPI and HDMI via optional external daughter cards, making them an ideal choice for smart displays, AR/VR systems, and AI-powered cameras. For example, if you’re developing a RISC-V-based smart camera, a complete prototyping environment, from capturing images via MIPI D-PHY to driving display outputs through HDMI, can be deployed with ease. Flexible expansion options allow developers to experiment with various configurations, refine their designs, and push the boundaries of RISC-V media applications.

High-Speed Connectivity: QSFP28 Ethernet for Next-Gen Networking

With networking requirements becoming more demanding, high-speed connectivity is crucial for RISC-V-based applications. The S7-9P rises to this challenge with built-in QSFP28 Ethernet support, enabling 100G networking applications. This makes it an optimal choice for prototyping and testing RISC-V-based networking solutions, including routers, switches, and edge AI processing units.

Need More Scalability?

While the S7-9P is an excellent choice for entry-level to mid-range RISC-V prototyping, more complex designs may require greater capacity. For high-end verification and large-scale projects, S2C also offers advanced solutions like the VU440 (30M ASIC gates), VU19P (49M ASIC gates), and VP1902 (100M ASIC gates), providing the scalability needed for RISC-V subsystems, multi-core, AI, and data-intensive applications.

Special Offer: Save 25% on the Prodigy S7-9P Bundle

For a limited time, get the Prodigy S7-9P bundle, which includes a free Vivado license (valued at $5,000+), for just $14,995, a 25% savings! Visit the S7-9P product page for more information or contact our team to find the perfect fit for your project.

Also Read:

S2C: Empowering Smarter Futures with Arm-Based Solutions

Accelerating FPGA-Based SoC Prototyping

Unlocking SoC Debugging Challenges: Paving the Way for Efficient Prototyping


High-speed PCB Design Flow
by Daniel Payne on 04-24-2025 at 10:00 am

PCB design phases

High-speed PCB designs are complex, often requiring a team of design engineers, PCB designers and SI/PI engineers working together to produce a reliable product, delivered on time and within budget. Cadence has been offering PCB tools for many years, and they recently wrote a 10-page white paper on this topic, so I’ll share what I learned. The promise is that early identification and resolution of SI and PI challenges will shorten the overall time to market.

The three PCB design steps are schematic, layout, and post-layout signoff. If your EDA tool flow includes in-design analysis, then the team can find and fix SI and PI issues earlier and with more accuracy.

Collaboration across teams means that an EE can define the high-speed constraints at the schematic stage with little need for an SI expert. Layout designers use visualization capabilities to spot SI/PI issues quickly within their own tools. Handoffs between team members are made efficient by in-tool feedback.

The Power Distribution Network (PDN) can be analyzed for issues like IR drop under DC operating conditions, enabling decisions on current density and on copper weight and thickness. You can visualize DC drop analysis in Cadence tools.

DC drop analysis
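
As a back-of-the-envelope illustration of what a DC drop tool is computing (my own textbook arithmetic, not Cadence's solver, with made-up geometry):

```python
# Back-of-the-envelope DC IR drop estimate for a rectangular copper plane segment.
RHO_CU = 1.68e-8       # copper resistivity, ohm*m
OZ_THICKNESS = 35e-6   # 1 oz/ft^2 copper is ~35 um thick

def ir_drop(current_a, length_m, width_m, oz=1.0):
    r_sheet = RHO_CU / (OZ_THICKNESS * oz)   # ohms per square
    squares = length_m / width_m             # geometry expressed in "squares"
    return current_a * r_sheet * squares

# 20 A drawn through a 50 mm long x 10 mm wide segment of 1 oz copper:
drop = ir_drop(current_a=20, length_m=0.05, width_m=0.01)
print(f"{drop*1000:.1f} mV drop")  # ~48 mV, already 6% of a 0.8 V rail
```

That 48 mV is why the analysis feeds directly into copper weight decisions: doubling to 2 oz copper halves the drop.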

During transient operation the PCB design encounters high-frequency switching currents that couple with inductance to create voltage noise. Adding decoupling capacitors and minimizing inductance are two ways to mitigate this noise. AC power analysis tools simulate the PCB's transient response, along with power noise and impedance profiles, so that each component has stable and clean power.

AC Analysis
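
The first-order budget behind those decoupling decisions is the target impedance, Z_target = V_rail x ripple / delta_I, with each capacitor modeled as a series RLC. A hedged sketch with illustrative values (not from the white paper):

```python
import math

# First-order PDN budget: keep |Z(f)| below Z_target over the transient band.
def z_target(v_rail, ripple_frac, delta_i):
    return v_rail * ripple_frac / delta_i

def z_capacitor(c_farads, esr, esl_henries, f_hz):
    # Series RLC model of a real decoupling capacitor.
    x = 2 * math.pi * f_hz * esl_henries - 1 / (2 * math.pi * f_hz * c_farads)
    return math.hypot(esr, x)

zt = z_target(v_rail=0.8, ripple_frac=0.03, delta_i=10.0)
print(f"target impedance: {zt*1e3:.1f} mOhm")   # 2.4 mOhm

# A 100 uF bulk cap with 2 mOhm ESR and 1 nH ESL, checked at 1 MHz:
print(f"|Zc| at 1 MHz: {z_capacitor(100e-6, 2e-3, 1e-9, 1e6)*1e3:.1f} mOhm")
```

The single bulk capacitor comes in above the 2.4 mOhm target at 1 MHz, which is why real PDNs parallel many capacitors of mixed values to hold the impedance profile down across the band.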

High-speed data links are common in PCIe, Ethernet, USB and UCIe designs, so care is required to manage channel losses, control via effects and pass compliance testing. Vias can introduce unwanted discontinuities: they create impedance mismatches, degrade signals through inductance and capacitance effects, cause stub resonance and even add return-path discontinuities. Engineers can now design, view and validate via structures early on with the Aurora Via Wizard.

Aurora Via Wizard
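
To see why via stubs matter, treat the stub as a quarter-wave resonator: f_res ≈ c / (4 x L_stub x sqrt(Dk_eff)). A quick sketch with typical FR-4 numbers (illustrative, not Aurora output):

```python
# Quarter-wave stub resonance: f_res = c / (4 * stub_length * sqrt(Dk_eff))
C0 = 3.0e8  # speed of light, m/s

def stub_resonance_ghz(stub_len_mm, dk_eff=4.0):
    return C0 / (4 * stub_len_mm * 1e-3 * dk_eff**0.5) / 1e9

for stub_mm in (2.5, 1.0, 0.3):   # before and after back-drilling
    print(f"{stub_mm} mm stub -> notch near {stub_resonance_ghz(stub_mm):.1f} GHz")
# 2.5 mm -> ~15 GHz, 1.0 mm -> ~37.5 GHz, 0.3 mm -> ~125 GHz
```

A 2.5 mm stub through a thick board notches around 15 GHz, uncomfortably close to the Nyquist frequency of 32 Gbaud SerDes, which is why back-drilling or blind vias are used to shorten the stub.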

Traces at high frequencies exhibit losses from conductor resistance, dielectric absorption and the roughness of copper traces. Designers can choose low-loss dielectrics, optimize the trace geometry and maintain a continuous ground plane under signal traces to mitigate these losses. To simulate different dielectric materials, the Sigrity X Topology Workbench comes into play. For SerDes interfaces, the Compliance Analysis tools validate a design early, help adjust signal paths and confirm protocol specifications are met.
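
Both loss mechanisms follow simple textbook scaling: conductor loss grows as current crowds into a skin depth that shrinks as 1/sqrt(f), while dielectric loss grows linearly with frequency. A sketch using generic FR-4 values (Dk 4.0 and loss tangent 0.02 are assumptions):

```python
import math

RHO_CU = 1.68e-8        # copper resistivity, ohm*m
MU0 = 4e-7 * math.pi    # vacuum permeability

def skin_depth_um(f_hz):
    # Current crowds into a layer ~delta thick, raising resistance with sqrt(f).
    return math.sqrt(RHO_CU / (math.pi * f_hz * MU0)) * 1e6

def dielectric_loss_db_per_cm(f_hz, dk, tan_d):
    # alpha_d = pi * f * sqrt(Dk) * tan_d / c  (Np/m), converted to dB/cm
    alpha_np_m = math.pi * f_hz * math.sqrt(dk) * tan_d / 3.0e8
    return alpha_np_m * 8.686 / 100

for f in (1e9, 10e9, 28e9):
    print(f"{f/1e9:>4.0f} GHz: skin depth {skin_depth_um(f):.2f} um, "
          f"FR-4 dielectric loss {dielectric_loss_db_per_cm(f, 4.0, 0.02):.3f} dB/cm")
```

The linear-in-frequency dielectric term dominating at tens of GHz is exactly why low-loss laminates and smooth copper pay off on long SerDes routes.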

Designing DDR5 interfaces at multi-gigabit speeds is enabled by using the Sigrity X Topology Explorer Workbench for parameter sweeps to find the best termination configuration and optimal routing solutions while flagging any timing violations. DDR memory buses can have hundreds of signals, and Sigrity X Aurora helps automate impedance validation, crosstalk analysis and return-path optimization.

Signal quality
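
Conceptually, the parameter sweep described above is a loop over termination and routing parameters with a channel simulation inside. The skeleton below is a toy version: eval_eye_height is a stand-in scoring function invented for this sketch, where a real flow would call into Sigrity's solvers:

```python
from itertools import product

# Toy parameter sweep in the spirit of a topology-explorer run.
def eval_eye_height(odt_ohms, stub_mm):
    # Placeholder metric: favors ~40 ohm termination and shorter stubs.
    return 220 - 0.8 * abs(odt_ohms - 40) - 15 * stub_mm

best = max(
    product((34, 40, 48, 60), (0.5, 1.0, 2.0)),   # ODT values x stub lengths
    key=lambda cfg: eval_eye_height(*cfg),
)
print(f"best config: ODT={best[0]} ohm, stub={best[1]} mm, "
      f"eye ~{eval_eye_height(*best):.0f} mV (toy numbers)")
```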

Another high-speed design issue is Simultaneous Switching Noise (SSN), which causes ground bounce, increased jitter and timing errors. Cadence has power-aware IBIS models and advanced PDN analysis tools to quickly identify these vulnerabilities, guide decoupling capacitor placement and accurately simulate SSN effects. For via-to-via crosstalk issues, there are 2.5D and 3D analysis tools for via modeling, along with design recommendations for via shielding and optimized layer transitions.

Cadence Tools

The full high-speed PCB flow is covered by tools that work together from schematic to signoff: Allegro X Design Platform, Sigrity X Platform, Sigrity X Aurora Via Wizard, Sigrity X Topology Explorer Workbench, Clarity 3D.

Summary

High-speed PCB design teams can successfully navigate the challenges of signal integrity and power integrity by using in-design analysis tools. This approach shortens time to market through tool automation, distributed computing, and making complex concepts easier to understand.

Read the complete white paper from Cadence online.

Related Blogs


ESD Alliance Executive Outlook Features View of How Multi-Physics is Reshaping Chip Design and EDA Tools
by Bob Smith on 04-24-2025 at 6:00 am

Every spring, the ESD Alliance, a SEMI Technology Community, organizes a get-together where industry executives and experts gather to network and talk about trends in the electronic design automation industry.

The theme of this year’s event, once again co-hosted by Keysight, is “How Multi-Physics is Reshaping Chip Design and EDA Tools.” It will be held Thursday, May 22, starting at 5:30 p.m. at Keysight’s office in Santa Clara, Calif.

Our event speakers and panelists are all technically involved with multi-physics and will share their experiences, as well as the opportunities and challenges still ahead. Moderated by Ed Sperling, Editor-in-Chief of Semiconductor Engineering, the panel includes Bill Mullen, Ansys Fellow at Ansys; John Ferguson, Sr. Director, Product Management at Siemens EDA; Chris Mueth, Sr. Director, New Markets and Strategic Initiatives at Keysight; and Albert Zheng, Sr. Engineering Group Director at Cadence.

Registration is open: members attend free, and non-member pricing is $49. Register at https://tinyurl.com/bucunc7j.

In a perfect world, multi-physics analysis would be seamlessly integrated within the chip/system design flow resulting in early detection and correction of physical issues.

While the industry isn’t fully there yet, there is rapid adoption of these new technologies. With system complexities ever-increasing, chip designers are being required to expand their scope and responsibilities beyond “the chip.” Modern semiconductor-based systems often include novel packaging of devices and substrates in form factors that minimize system area, while simultaneously optimizing for performance, power and reliability.

While traditional analysis tools continue to play an important role in the design process, multi-physics tools are rapidly being adopted to address system-level issues that must be considered to bring new products to market. To achieve market success, these products must meet broad specifications including reliability and safety, in addition to typical chip metrics such as performance, size, energy and throughput.

The term multi-physics covers the range of physical effects that are typically not within the scope of traditional chip design analysis tools. These effects include (but are not limited to) mechanical stress, heat, electromagnetic interference and even packaging and cooling.

Mechanical stress must be considered in situations where discrete devices (such as chiplets) are interconnected by stacking or sharing a substrate. During operation, heat may cause the components to undergo thermal expansion that can lead to mechanical strain that impacts functioning or leads to system failure.

Heat generated must be analyzed to understand how it propagates through the system. Hot spots may lead directly to device performance issues and induce mechanical stress that causes further issues.

Electromagnetic interference typically arises due to high-speed signals within the system that can interact with other signals within the system or other nearby system components. These interactions can lead to performance issues or even system failures.

Packaging and cooling analysis is necessary to understand how to mitigate the effects of heat and mechanical stress on the chip/system and to other nearby components.

Join us at the 2025 Executive Outlook event and gain insight into how these rapidly evolving technologies are changing chip and system design flows. The event will be held at Keysight, Building 5, 5301 Stevens Creek Blvd in Santa Clara.

About the ESD Alliance

The ESD Alliance, a SEMI Technology Community, offers initiatives and activities that bring value to our entire industry including:

  • Coordinating and amplifying the collective and regional voices of our industry.
  • Continually promoting the value our industry delivers to the global semiconductor and electronics industry.
  • Addressing and defending threats and reducing risks to our industry.
  • Achieving efficiencies for our industry.
  • Marketing the attractiveness of the design ecosystem as an ideal industry for pursuing a career.
  • Enabling networking, sharing and collaboration across our industry.

If your company is not currently a member, shouldn’t it consider joining the ESD Alliance and SEMI? Contact me at bsmith@semi.org to get the discussion started.

Also Read:

Andes RISC-V CON in Silicon Valley Overview

SNUG 2025: A Watershed Moment for EDA – Part 1

DVCon 2025: AI and the Future of Verification Take Center Stage


The Growing Importance of PVT Monitoring for Silicon Lifecycle Management
by Kalar Rajendiran on 04-24-2025 at 6:00 am

SLM IP Target Applications

In an era defined by complex chip architectures, ever-shrinking technology nodes and very demanding applications, Silicon Lifecycle Management (SLM) has become a foundational strategy for optimizing performance, reliability, and efficiency across the lifespan of a semiconductor device. Central to effective SLM are Process, Voltage, and Temperature (PVT) monitors—silicon-proven, highly accurate sensors embedded into chips to provide real-time, in-silicon visibility. As devices grow in complexity and adopt technologies like 2.5D/3D IC packaging, GAA (Gate-All-Around), and BSP (Backside Power), traditional design-time assumptions and margining techniques are no longer sufficient. PVT monitoring is now indispensable for ensuring that chips operate efficiently, safely, and predictably under real-world conditions.

At the recent IPSoC Conference in Silicon Valley, Rohan Bhatnagar gave a talk on the growing importance of PVT Monitoring for effective SLM. Rohan is the product manager for Synopsys’s SLM PVT Monitor IPs. A synthesis of the salient points from his talk follows.

The Four Pillars of SLM

SLM is built on four core pillars: Monitor, Transport, Analyze, and Act. The process begins with embedding monitors during the chip design phase. These monitors collect critical data about process variation, voltage fluctuations, and thermal behavior. Next, this data is transported to a centralized SLM database, where it is analyzed on-chip, at the edge, or in the cloud. In the design phase, engineers act on insights from test silicon to fine-tune their designs. During in-field operation, automated response routines run in real time to tune performance, reduce power, and extend reliability and device longevity. This continuous loop of visibility and action forms the backbone of modern chip lifecycle optimization.
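
In code form, that loop looks something like the sketch below. Everything here is hypothetical (sensor values, thresholds, and actions are invented for illustration); a real deployment reads the monitors through the vendor's controller IP and driver stack:

```python
import time

# Hypothetical closed-loop SLM sketch: Monitor -> Transport -> Analyze -> Act.
T_LIMIT_C = 105.0        # assumed junction temperature limit
V_DROOP_LIMIT = 0.03     # assumed tolerable droop from nominal, volts

def read_pvt():
    # Stand-in for reading the on-die PVT controller; returns (temp C, volts).
    return 92.0, 0.744

def act(temp_c, v_core, v_nominal=0.75):
    if temp_c > T_LIMIT_C:
        return "throttle: reduce clock or migrate workload"
    if v_nominal - v_core > V_DROOP_LIMIT:
        return "raise regulator setpoint / ease switching activity"
    return "nominal: no action"

while True:
    temp, volts = read_pvt()                              # Monitor
    sample = {"t": time.time(), "temp": temp, "v": volts}
    # telemetry_db.append(sample)                         # Transport to SLM database
    print(act(temp, volts))                               # Analyze + Act
    break  # single iteration for the sketch; real loops run continuously
```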

Why Real-Time Monitoring Is Essential

The motivation for adopting PVT monitors stems from the growing need to maximize performance and energy efficiency while extending reliability. Today’s semiconductor devices face significant challenges: large die areas introduce more spatial process variation, high-performance workloads lead to thermal hotspots and voltage droops, and modern packaging technologies introduce thermal and electrical complexity. Additionally, unpredictable application workloads make static margining approaches inadequate. PVT monitors allow for real-time tracking and response to these variables, enabling more intelligent decisions throughout the chip’s life, from silicon bring-up to field deployment.

Synopsys’ Comprehensive PVT IP Subsystem

The company’s PVT IP subsystem includes a full suite of monitors, a dedicated central controller and supporting infrastructure. Key components include the Process Detector (PD), Voltage Monitor (VM), and Glitch Detectors (GD, Digital GD). For thermal monitoring, the subsystem features a range of sensors such as the Temperature Sensor (TS), Distributed Temperature Sensor (DTS and Digital DTS), Catastrophic Temperature Sensor (CTS), and Thermal Diode (TD). With the exception of the Thermal Diode and the CTS, which are asynchronous, the sensors are managed by a central PVT Controller with a software driver and are interfaced via dedicated serial buses.

The PVT IPs are validated on a broad spectrum of foundries and process nodes, ensuring future-ready compatibility. Recent support includes TSMC nodes such as N6, N5A, N4P, N3E/P, N3A, and N2P. In-development support includes Intel Foundry Services (IFS) 18A and Samsung SF4X. As the IP portfolio continues to grow, new offerings such as the Digital DTS and Digital Glitch Detector provide enhanced digital compatibility and lower integration overhead.

Synopsys’ PVT IP solutions are also tailored for automotive-grade applications, where reliability and safety are paramount. The hard IP is tested to AEC-Q100 Grade 2, while soft IP components are ISO 26262 ASIL B ready for functional safety compliance.

Use Cases for PVT IP Solutions

AI Processors

In AI SoCs, thermal and power challenges are significant due to high compute density and bursty workloads. PVT monitors enable low-latency thermal management, dynamic IR drop control, and supply margin optimization for critical logic. This results in improved core utilization, higher performance-per-watt, and reduced operational expenses.

Data Centers and HPC

In cloud and high-performance computing environments, reliability and power efficiency are vital. Embedded monitors support real-time power optimization, multi-core thermal analysis, and predictive reliability, helping data centers scale confidently while reducing CO₂ emissions and lowering total cost of ownership (TCO).

5G and Consumer Devices

For 5G smartphones and consumer electronics, PVT monitors address thermal challenges and enhance battery efficiency. They enable core voltage scaling, real-time thermal management, and performance tuning to improve user experiences in scenarios like video streaming, gaming, and multitasking. These optimizations also contribute to longer battery life and more responsive devices.

Summary

As semiconductor designs become more customized and operate in increasingly unpredictable environments, real-time in-silicon visibility is essential. The integration of SLM and PVT IP transforms chip design and lifecycle management into a dynamic, intelligent, and adaptive process. From development through deployment, SLM empowers engineers to make data-driven decisions that enhance performance, reduce power, and increase reliability.

Whether optimizing AI processors, powering hyperscale data centers, or enhancing mobile user experiences, SLM with PVT IP is enabling the next era of smart, efficient, and resilient semiconductor design. PVT monitors offer a scalable path forward for future-ready silicon.

Learn more about Synopsys SLM PVT Monitor IP Solutions here.

Also Read:

Achieving Seamless 1.6 Tbps Interoperability for High BW HPC AI/ML SoCs: A Technical Webinar with Samtec and Synopsys

SNUG 2025: A Watershed Moment for EDA – Part 1

Synopsys Webinar: The Importance of Security in Multi-Die Designs – Navigating the Complex Landscape


TSMC Brings Packaging Center Stage with Silicon
by Mike Gianfagna on 04-23-2025 at 11:45 am

The worldwide TSMC 2025 Technology Symposium recently kicked off with the first event in Santa Clara, California. These events typically focus on TSMC’s process technology and vast ecosystem. These items were certainly a focus for this year’s event as well. But there is now an additional item that shares the spotlight: packaging technology. Thanks to the increase in heterogeneous integration, driven in large part by AI, the ability to integrate multiple dies in sophisticated packages has become another primary driver for innovation. So, let’s look at what Dr. Kevin Zhang shared at the pre-briefing and how TSMC brings packaging center stage with silicon.

A Growing Palette of Options

TSMC has taken advanced packaging well beyond the 2.5D interposer approach that is now quite familiar. The diagram above was provided by TSMC to illustrate the elements that comprise the TSMC 3DFabric® technology portfolio. According to TSMC, transistor technology and advanced packaging integration technology go hand-in-hand to provide its customers with a complete product-level solution.

On the left are the options for stacking, or die-level/wafer-level integration. SoIC-P (below) uses microbump technology to deliver pitches down to 16um. Using bumpless technology (SoIC-X), you can achieve a pitch of a few microns. TSMC started with 9um and is now in production at 6um with more improvements to come, creating a monolithic-like integration density.

For 2.5D/3D integration, there are many options available. Chip on Wafer on Substrate (CoWoS) technology supports both the familiar silicon interposer and CoWoS-L, which uses an organic interposer with a local silicon bridge for high-density interconnect. CoWoS-R provides a pure organic interposer.

Integrated Fan-Out (InFO) technology began in 2016 for mobile applications. The platform has been expanded to support automotive applications as well.

There is also the newer System-on-Wafer (TSMC-SoW™) packaging. This technology broadens the integration scale to the wafer level. There is a chip-first approach (SoW-P), where the chip is put on the wafer and then an integrated RDL is built to bring the dies together.  Or, there is a chip-last approach (SoW-X), where you first build the interposer at the wafer level and then add the chips across the wafer. This last approach can produce a design that is 40X larger than the standard reticle size.

High-performance computing for AI is clearly a major driver for advanced packaging technology. The first diagram below, provided by TSMC, illustrates a typical AI accelerator application today that integrates a monolithic SoC with HBM memory stacks through a silicon interposer. Major improvements coming for this type of architecture are shown in the next diagram.

The monolithic SoC is now replaced with a 3D stack of chips to address high-density compute requirements. HBM memory stacks are integrated with an RDL interposer. Integrated silicon photonics will also be part of the design to improve communication bandwidth and power. Integrated voltage regulators will also help to optimize power for this type of application.

Regarding power optimization, future AI accelerators can require thousands of watts of power, creating a huge challenge in terms of power delivery into the package. Integrated voltage regulators will help to tame this class of problem. TSMC has developed a high-density inductor, a key component required to build this class of regulator. A monolithic PMIC plus this inductor can provide 5X the power delivery density of PCB-level regulation.

There are many exciting new technologies on the horizon that will require all the packaging innovation discussed here. Augmented reality glasses are one example of a product that will require everything discussed. A device like this will require, among other things, an ultra-low-power processor, a high-resolution camera for AR sensing, eNVM for code storage, a large main processor for spatial computing, a near-eye display engine, Wi-Fi/Bluetooth for low-latency RF, and a digital-intensive PMIC for low-power charging. This kind of product will set a new bar for complexity and efficiency.

While autonomous vehicles get a lot of attention, the demands of humanoid robots were also discussed. TSMC provided the graphic below to illustrate the significant amount of advanced silicon required. And the ability to integrate all of this into dense, power efficient packages is critical as well.

To Learn More

It was clear at the TSMC Technology Symposium that advanced processing and advanced packaging will need to work as one going forward to achieve the type of product innovation on the horizon. TSMC has clearly taken this challenge and is developing unified offerings to address the coming requirements.

You can learn more about TSMC’s 3DFabric Technology here. And that’s why TSMC brings packaging center stage with silicon.

UPDATE: TSMC is sharing recordings of the presentations HERE.

Also Read:

TSMC 2025 Technical Symposium Briefing

IEDM 2025 – TSMC 2nm Process Disclosure – How Does it Measure Up?

TSMC Unveils the World’s Most Advanced Logic Technology at IEDM

IEDM Opens with a Big Picture Keynote from TSMC’s Yuh-Jier Mii


TSMC 2025 Technical Symposium Briefing
by Daniel Nenni on 04-23-2025 at 11:40 am

TSMC Advanced Technology Roadmap 2025

At the pre-conference briefing, Dr. Kevin Zhang gave quite a few of us media types an overview of what will be highlighted at the 2025 TSMC Technical Symposium here in Silicon Valley. Since most of the semiconductor media are not local this was a very nice thing to do. I will be at the conference and will write more tomorrow after the event. TSMC was also kind enough to share Kevin’s slides with us.

The important thing to note is that TSMC is VERY customer driven, so this presentation is based on interactions with the largest semiconductor manufacturing customer base the industry has ever seen, absolutely.

As you can imagine, AI is driving the semiconductor industry now, not unlike what smartphones did for the last two decades. The difference is that AI consumes leading-edge silicon at an alarming rate, which is a good thing for the semiconductor industry. While AI is very performance centric, it must also be power sensitive. This puts TSMC in a very strong position after all of those years of manufacturing mobile SoCs for smartphones and other battery-operated devices.

Kevin started with the AI revolution and how AI will be infused into almost every electronic device from the cloud to the edge, enabling many new applications. Personally, I think AI will transform the world in a similar fashion to smartphones, but on a much grander scale.

Not long ago the mention of the semiconductor industry hitting $1T seemed like a dream. It is one thing for industry observers like myself to say it but it is quite another when TSMC does. There is little doubt in my mind that it will happen based on my observations inside the semiconductor ecosystem.

There have been some minor changes to the TSMC roadmap. It has been extended out to 2028 adding N3C and A14. The C is a compressed version meaning the yield learning curve is at a point where the process can be further optimized for density.

A14 will certainly be a big topic of discussion at the event. A14 is TSMC’s second generation of nanosheet transistor and is considered a full node (PPA) improvement versus N2: 10-15% speed improvement at the same power, 25-30% power reduction at the same speed, and a 1.2X logic density improvement. The first iteration of A14 does not have backside power delivery. It was the same with N2, which was followed by A16 with Super Power Rail (SPR). SPR for A14 is expected in 2029.

The TSMC A16 specs were updated as well. A16 is the first version with SPR, for reduced IR drop and improved logic density, moving the transistor power connections to the backside. SPR is targeted at AI/HPC designs with improved signal routing and power delivery. A16 is on track for production in the second half of 2026. In comparison to N2P, A16 provides an 8-10% speed improvement at the same power and a 15-20% power reduction at the same speed.

From what I have heard, TSMC N2 is yielding quite well and is on track for production later this year. The big question is who will be the first customer to ship an N2 product? Usually it is Apple, but word on the street is the iPhones this year will again be using N3. I already have an N3 iPhone so I will skip this generation if that is the case. If Apple does an N2-based iPhone Pro Max this year then count me in!

TSMC N2P is also on track for production in the second half of 2026. Compared to N3E, N2P offers an 18% speed improvement at the same power, a 36% power reduction at the same speed, and a 1.2X density improvement.

The most interesting thing about N2 is the rapid growth of tape-outs from N5 to N3 to N2. It really is astounding. Given that TSMC N3 was an absolute landslide for customer tape-outs, I had serious doubts we would ever see a repeat of that success, but here we are. Again, in the past mobile was the driver for early tape-outs, but now we have AI/HPC as well.

Finally, as Kevin said, TSMC N3 is the last and best FinFET technology available at such a massive scale, with N3, N3E, N3P, N3X, N3A, and now N3C. Yet N2 tape-outs beat N3’s in the first year, and the second year even more so. Simply amazing. I guess the question is who is NOT using TSMC N2?

The second part of the presentation was on packaging which will be covered in another blog. After the event I can provide even more details and get a feeling for the vibe at the event from the ecosystem. Exciting times!

UPDATE: TSMC is sharing recordings of the presentations HERE.

Also Read:

TSMC Brings Packaging Center Stage with Silicon

IEDM 2025 – TSMC 2nm Process Disclosure – How Does it Measure Up?

TSMC Unveils the World’s Most Advanced Logic Technology at IEDM

IEDM Opens with a Big Picture Keynote from TSMC’s Yuh-Jier Mii


Perspectives from Cadence on Data Center Challenges and Trends
by Bernard Murphy on 04-23-2025 at 6:00 am

From my vantage point in the EDA foxhole it can be easy to forget that Cadence also has interests in much broader technology domains. One of these is in data center modeling and optimization, through their Cadence Reality Digital Twin Platform. This is an area in which they already have significant track record collaborating with companies like Switch, NV5, Nvidia and others, with focus on design modeling for giant data centers. Cadence recently released a report based on a survey of hundreds of IT, facility, and business leaders on what priorities and concerns they see for future data center evolution. The report covers a lot of detail. I will highlight here just a few of the points I found intriguing, especially from my limited understanding of the data center world.

Modeling data centers

While the Cadence report jumps straight into business perspectives, I want to take a couple of paragraphs to elaborate on the purpose of modeling.  Electronic systems (servers, storage, and networking in a data center) dissipate power in the form of heat. Excess heat causes loss of performance and, in extreme cases, damage, which must be mitigated through cooling. Forced air cooling (like AC) has been the standard approach for traditional heat density found in most data centers. However, with the emergence of AI compute, where power densities are climbing towards 100X, there is growing demand for more energy-efficient liquid-based cooling. Heat and energy/sustainability problems have become even more pressing as these power-hungry AI servers consume more of the data center footprint.
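
A quick way to see the problem is the basic heat-transport relation: mass flow = P / (c_p x delta_T). With textbook fluid properties and an illustrative 100 kW AI rack, the air-versus-water comparison looks like this:

```python
# How much coolant does it take to carry P watts away with a dT kelvin rise?
#   mass_flow = P / (cp * dT), then divide by density for volume flow.
AIR   = {"cp": 1005.0, "rho": 1.2}     # J/(kg*K), kg/m^3
WATER = {"cp": 4186.0, "rho": 1000.0}

def volume_flow_m3_s(power_w, dt_k, fluid):
    return power_w / (fluid["cp"] * dt_k) / fluid["rho"]

rack_kw = 100  # an AI rack, versus ~10 kW for a traditional rack
air = volume_flow_m3_s(rack_kw * 1e3, 10, AIR)
h2o = volume_flow_m3_s(rack_kw * 1e3, 10, WATER)
print(f"air:   {air:.1f} m^3/s (~{air*2119:.0f} CFM) per rack")
print(f"water: {h2o*60000:.0f} liters/min per rack")
```

Carrying 100 kW on air at a 10 K rise takes roughly 17,000 CFM per rack, versus about 140 liters per minute of water; that is the arithmetic behind the shift to liquid cooling.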

Planning compute resource (racks, etc.) placements together with cooling resources (fans, vents, liquid cooling support, plumbing and heat exchangers) is rather like a place-and-route problem, except that the objects to place are racks and the routing is convective/conductive/radiative heat flow away from hot devices in three dimensions, with flows through vents, up, around and above racks, and through cold plates and piping for liquid cooling. Hence the need for modeling.

The need to evolve

Whether for on-premises data centers, cloud services, or colocated services (guaranteed capacity on my hardware in your data center and you take care of service), almost all users/ suppliers of data center services want to see continued innovation. (The report focuses on hardware, the stuff that consumes capital cost and power, not software layers enabling hybrid cloud for example.)

Unsurprisingly, how much that wish translates into action depends on whether the service provider is a profit center (a cloud or colocation service) or a cost center (on-premises). Profit centers must keep moving ahead to stay competitive, whereas cost centers must justify investments against imperfectly quantified future benefits. Also no surprise, much of the demand to evolve comes from a need to increase energy efficiency and a need to add or increase AI support, two requirements fighting against each other.

Adding to the challenge, while some innovation can be introduced incrementally, significant improvements demand more significant capital investment, for example investments in local renewable energy options, high-density servers, and liquid cooling which may require major infrastructure rework. A big step forward when implemented but a big cost to get there.

Innovation at this level almost certainly demands adding expert staff, for AI certainly, and for digital twin planning – these kinds of improvements must be grounded in certainty that they will deliver in practice.

It’s easy to see how, for smaller data centers, an ardent desire to improve can run into a brick wall of budget and staffing constraints. Even hyperscale centers must plan carefully to maximize the continuing value of existing assets. One interesting insight from the report is that data centers in South America claim high confidence in their ability to innovate, I would assume because many of these are quite new and have been able to design and hire from scratch to meet state-of-the-art objectives.

Meaningful improvements must start with digital twins

For small enterprises, AI needs may already be pushing some compute loads (at least training loads) to the cloud or colocation services.

For larger enterprises there are good reasons to maintain and expand on-premises (and perhaps colocation) options, but they must also step up to some of the kinds of investments already being made by profit-driven data centers, if not at the same scale. Meanwhile profit-based data center enterprises are already there and fully familiar with these needs.

Whatever the business motivation, any enterprise planning new or significantly upgraded capability is doing so using digital twin modeling. Automakers, aircraft makers, factory builders and others are all moving to digital twins to optimize and continue to refine the efficiency of their businesses. This is an unavoidable component of planning today.

Interesting white paper. You can access it HERE.

Also Read:

Designing and Simulating Next Generation Data Centers and AI Factories

How Cadence is Building the Physical Infrastructure of the AI Era

Big Picture PSS and Perspec Deployment


Semiconductor Tariff Impact
by Bill Jewell on 04-22-2025 at 3:00 pm

US Semiconductor Imports 2024

President Donald Trump has initially excluded semiconductors from his latest round of U.S. tariffs. However, he could put tariffs on semiconductors in the future. If tariffs are placed on semiconductors imported to the U.S., how would that affect U.S.-based semiconductor companies? The chart below shows U.S. semiconductor imports for the year 2024. 64% of imports are from just four countries: Malaysia, Taiwan, Thailand, and Vietnam. China accounts for only 3% of imports. Semiconductors made in China generally come into the U.S. as components in finished electronics equipment such as PCs and smartphones.

Why are these four countries such a significant portion of U.S. semiconductor imports? Except for Taiwan, they do not have significant wafer fabs. However, they account for a major portion of semiconductor assembly and test (A&T) facilities. These facilities take wafers from fabs, assemble them into packages, and test to see if they meet specifications. These A&T facilities may belong to the semiconductor manufacturer (IDM) or may be owned by outsourced assembly and test (OSAT) companies. The chart below from SEMI shows the distribution of these facilities. China, Taiwan and Southeast Asia account for 70% of A&T facilities.

The major U.S.-based IDMs all have most of their A&T facilities outside of the U.S., as shown below:

Fabless U.S.-based companies such as Nvidia, Qualcomm, Broadcom and AMD primarily use TSMC’s foundry services. TSMC mainly uses its own A&T facilities located in Taiwan.

Thus, if tariffs are placed on semiconductor imports to the U.S., it would drive up costs for U.S.-based semiconductor companies. The U.S. companies with their own fabs have most of their fab capacity in the U.S., but have the vast majority of their A&T capacity outside of the U.S. TSMC is building fabs in the U.S., but currently has no A&T facilities in the U.S.

A solution to avoid tariffs would be for companies to build more A&T facilities in the U.S. However, it will take significant time and money to build these facilities. Below are proposed new A&T facilities announced in the last few years.

Based on these projects, it takes two to three years to build a new A&T facility. The cost could be over $4 billion. Only two of these new facilities are in the U.S. Amkor, an OSAT company, is building an A&T facility to support TSMC’s fabs in Arizona. Integra Technologies, an OSAT company, announced plans for an A&T facility in Kansas, but the project has been delayed. Intel’s A&T facility in Poland has been delayed at least two years.

Significant cost differences exist in building facilities in the U.S. and Europe versus Asia. The three A&T facilities planned for the U.S. and Europe have an average estimated cost of $3 billion and average employment of 1,900 people. The three facilities planned for Asia have an average cost of $840 million and average employment of 3,500 people.
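
Restating those averages as capital cost per job makes the gap explicit (simple arithmetic on the figures quoted above):

```python
# Average planned A&T facility figures quoted above.
us_eu = {"capex_usd": 3.0e9,  "employees": 1900}
asia  = {"capex_usd": 0.84e9, "employees": 3500}

for name, f in (("US/Europe", us_eu), ("Asia", asia)):
    per_head = f["capex_usd"] / f["employees"]
    print(f"{name}: ${f['capex_usd']/1e9:.2f}B capex, "
          f"{f['employees']} jobs, ${per_head/1e6:.2f}M per employee")
# US/Europe: ~$1.58M per employee versus Asia: ~$0.24M, roughly a 6-7x gap.
```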

Any tariffs placed on semiconductor imports to the U.S. would raise costs for most U.S.-based semiconductor companies as well as foreign companies. If U.S. companies decide to build more A&T facilities in the U.S., it will take several years and drive up A&T costs.

Also Read:

Weak Semiconductor Start to 2025

Thanks for the Memories

Semiconductors Slowing in 2025


Designing and Simulating Next Generation Data Centers and AI Factories
by Kalar Rajendiran on 04-22-2025 at 10:00 am

Digital Twin and the AI Factory Lifecycle

At NVIDIA’s recent GTC conference, a Cadence-NVIDIA joint session provided insights into how AI-powered innovation is reshaping the future of data center infrastructure. Led by Kourosh Nemati, Senior Data Center Cooling and Infrastructure Engineer from NVIDIA and Sherman Ikemoto, Sales Development Group Director from Cadence, the session dove into the critical challenges of building AI Factories, which are next-generation data centers optimized for accelerated computing. The talks showcased how digital twins and industrial simulation are transforming the design, deployment, and operation of these complex systems.

The Rise of the AI Factory

Kourosh opened the session by spotlighting how the data centers of the future aren’t just racks and power; they should be treated as AI Factories. These next-generation data centers are highly dynamic, compute-dense environments built to run massive AI workloads, support real-time inference, and train foundational models that power everything from drug discovery to autonomous systems. But with this transformation comes a new set of challenges. Designing, building, and operating an AI Factory requires multidisciplinary coordination between power, cooling, networking, and compute, with significantly tighter tolerances and higher performance demands than traditional data centers.

This level of complexity requires a new approach to design, construction and operation. An AI Factory Digital Twin is needed to simulate and manage everything from physical infrastructure to real-time operations.

Simulation: A New Era in Data Center Design

The centerpiece of this vision is NVIDIA’s AI Factory Digital Twin, built on the Omniverse platform and powered by the OpenUSD (Universal Scene Description) framework. It’s more than just a virtual replica; it’s a continuously updating simulation engine that integrates mechanical, electrical, and thermal data into a single, unified environment. AI factories demand extreme levels of optimization to manage power density, thermal load, and operational efficiency. By using simulation in the design phase, engineers can evaluate “what-if” scenarios, test control logic, and spot failure points long before equipment is installed.

This approach helps accelerate deployment timelines and reduce operational risk.

Cadence: Technology Behind the Digital Twin

Following Kourosh’s talk, Sherman detailed how multiphysics simulation powers the AI Factory Digital Twin. He described the need to move beyond siloed workflows and bring disciplines such as electrical, thermal and structural engineering, which often operated independently in the past, into a coordinated, data-driven design process. With traditional design tools, each team optimizes its own domain, but the pieces may not work efficiently together as a system. This is why digital twins become mission critical.

Cadence’s advanced simulation technology is a core enabler of NVIDIA’s digital twin strategy. From detailed power integrity models to dynamic thermal simulations, these tools provide the physical accuracy needed to make decisions early, fast, and confidently.

Sherman also emphasized Cadence’s commitment to interoperability. As a founding member of the OpenUSD-based digital twin standards group, Cadence is helping define how simulation data integrates with 3D models, real-time telemetry, and operational software.

Ecosystem Collaboration

Another major theme of the joint session was the importance of collaboration across the broader data center ecosystem. AI factories are not just a design challenge but a supply chain and operational challenge as well.

To this end, partners like Foxconn and Vertiv are playing a critical role. Foxconn, with its global manufacturing capabilities, is helping accelerate the production of modular AI factory components. Vertiv, a leader in power and cooling infrastructure, is working closely with NVIDIA and Cadence to simulate real-world behavior of critical equipment within the twin, ensuring that systems behave predictably under peak AI loads.

By simulating components like Coolant Distribution Units (CDUs), Power Distribution Units (PDUs), and Heating, Ventilation and Air Conditioning (HVAC) systems as part of the broader twin, these partners enable end-to-end validation of system behavior. This is a huge step forward in building next generation data centers that are both resilient and responsive to changing demands.

Summary

AI factories are complex, multidisciplinary systems that require new tools, new thinking, and deep collaboration. By integrating simulation, open standards, and a robust partner ecosystem, Cadence is collaborating with NVIDIA and the broader ecosystem to lay the foundation for a new era of AI infrastructure. Their tools not only reduce costs and risk, but also accelerate the delivery of AI-powered innovation across industries from healthcare and manufacturing to energy, finance, and scientific research.

Also Read:

How Cadence is Building the Physical Infrastructure of the AI Era

Big Picture PSS and Perspec Deployment

Metamorphic Test in AMS. Innovation in Verification


CEO Interview with Dr. Michael Förtsch of Q.ANT
by Daniel Nenni on 04-22-2025 at 6:00 am

Dr. Michael Förtsch, CEO of Q.ANT, is a physicist and innovator driving advancements in photonic computing and sensing technologies. With a PhD from the Max Planck Institute for the Science of Light, he leads Q.ANT’s development of Thin-Film Lithium Niobate (TFLN) technology, delivering groundbreaking energy efficiency and computational power for AI and data center applications.

Tell us about your company.

Q.ANT is a deep-tech company pioneering photonic computing and quantum sensing solutions to address the challenges of the next computing era. Our current, primary area of focus is developing and industrializing photonic processing technologies for advanced applications in AI and high-performance computing (HPC).  

Photonic computing is the next paradigm shift in AI and HPC. By using light (photons) instead of electrons, we overcome the limitations of traditional semiconductor scaling and can deliver radically higher performance with lower energy consumption. 

Q.ANT’s first photonic AI processors are based on the industry-standard PCI Express interface and include the associated software control (a plug-and-play solution), so they integrate easily into HPC environments as they are currently set up. We know this is critical for any new technology to get adopted.

At the core of our technology is Thin-Film Lithium Niobate (TFLN), a breakthrough material that enables precise and efficient light-based computation. Our photonic Native Processing Units (NPUs) operate using the native parallelism of light, processing multiple calculations simultaneously and dramatically improving efficiency and performance for AI and data-intensive applications.

Founded in 2018, Q.ANT is headquartered in Stuttgart, Germany, and is at the forefront of scaling photonic computing for real-world adoption. 

What problems are you solving?

The explosive growth of AI and HPC is starting to exceed what is possible with conventional semiconductors, creating four urgent challenges: 

1. Unmanageable energy consumption – AI data centers require vast amounts of power, with GPUs consuming over 1.2 kW each. The industry is now considering mini nuclear plants just to meet demand.

2. Processing limitations – Traditional chips rely on electronic bottlenecks that struggle to keep up with AI’s growing complexity.

3. Space limitations – Physical space constraints pose a significant challenge as data centers struggle to accommodate the increasing number of server racks needed to meet rising performance demands.

4. Escalating manufacturing costs – Shrinking traditional GPU technology to the next smaller nodes demands massive investments in cutting-edge manufacturing, often reaching billions.

Q.ANT is tackling these challenges head-on, redefining computing efficiency by transitioning from electron-based to photon-based processing. Our photonic processors not only enhance performance but also dramatically reduce operational costs and energy consumption—delivering 30X energy savings and 50X greater computational density.  

Also in manufacturing, Q.ANT has recently showcased a breakthrough: its technology can be produced at significantly lower costs by repurposing existing CMOS foundries that operate with 1990s-era technologies. These breakthroughs pave the way for more sustainable, scalable AI and high-performance computing. 

What application areas are your strongest?

Q.ANT’s photonic processors dramatically enhance efficiency and processing power, making them ideal for compute-intensive applications such as AI model training, data center optimization, and advanced scientific computing. By leveraging the inherent advantages of light, Q.ANT enables faster, more energy-efficient processing of complex mathematical operations. Unlike traditional architectures, our approach computes non-linear functions natively rather than approximating them with linear operations, unlocking substantial gains in computational performance and redefining how businesses tackle their most demanding workloads.

What keeps your customers up at night?

Q.ANT’s customers are primarily concerned with the increasing energy demands and limitations of current data processing and sensing technologies. Data center managers, AI infrastructure providers, and industrial innovators face mounting challenges in performance, cost, and scalability, while traditional semiconductor technology struggles to keep pace. This raises a number of urgent concerns:

  • Increasing power demands – The cost and energy required to run AI at scale are becoming unsustainable. 
  • Processing power – AI workloads require ever-increasing computational power, but scaling with conventional chips is costly and inefficient. 
  • Scalability – Businesses need AI architectures that can grow without exponential increases in power consumption and operational expenses. 
  • Space  – Expanding data centers to accommodate AI’s growing infrastructure is becoming increasingly difficult. 

Q.ANT’s photonic computing solutions directly address these challenges by introducing a more efficient, high-performance approach that provides the following benefits:  

  • Radical energy efficiency – Our photonic processors reduce AI inference energy consumption by a factor of 30. 
  • Faster, more efficient processing – Native non-linear computation accelerates complex workloads with superior efficiency in cost, power, and space utilization. 
  • Seamless integration – Our PCIe-based Native Server Solution is fully compatible with x86 architectures and integrates easily into existing data centers. 
  • Optimized for data center space – With significantly fewer servers required to achieve the same computational power, Q.ANT’s solution helps alleviate space constraints while delivering superior performance. 

By rethinking computing from the ground up, Q.ANT enables AI-driven businesses to scale sustainably, reduce operational costs, and prepare for the future of high-performance computing. 

What does the competitive landscape look like, and how do you differentiate?

The competitive landscape is populated by companies exploring photonic and quantum technologies. However, many competitors remain in the research phase or focus on long-term promises. Q.ANT differentiates itself by focusing on near-term, high-impact solutions. We use light for data generation and processing, which enhances energy efficiency and creates opportunities that traditional semiconductor-based solutions cannot achieve.  

Q.ANT is different. We are: 

  • One of the few companies delivering real photonic processors today 
  • Developing photonic chips using TFLN, a material that allows ultra-fast, precise computations without generating excess heat 
  • TFLN experts – with six years of expertise in TFLN-based photonic chips and our own pilot line, we have a significant first-mover advantage in commercial photonic computing.

What new features/technology are you working on?

We are focused on revolutionizing AI processing in HPC data centers by developing an entirely new technology: analog computing units packaged as server solutions called Native Processing Servers (NPS). These servers promise to outperform today’s chip technologies in both performance and energy efficiency. They also integrate seamlessly with existing infrastructure, using the standardized PCI Express interface for easy plug-and-play compatibility with HPC server racks. When it comes to data center deployment requirements, our NPS meets the standards you’d expect from leading vendors. The same ease applies to software: existing source code can run on our systems without modification.

How do customers normally engage with your company?

Businesses, researchers, and AI leaders can contact Q.ANT via email at native-computing@qant.gmbh and engage with the company in a number of ways:

  • Direct purchase of photonic processors – Our processors are now available for purchase, for experimentation and integration into HPC environments.
  • Collaborative innovation – We work with industry and research partners to develop next-gen AI applications on our photonic processors.
  • Community outreach – We participate in leading tech events to advance real-world photonic computing adoption. 

With AI and HPC demand growing exponentially, Q.ANT is a key player in shaping the next era of computing. 

Also Read:

Executive Interview with Leo Linehan, President, Electronic Materials, Materion Corporation

CEO Interview with Ronald Glibbery of Peraso

CEO Interview with Pierre Laboisse of Aledia