

Siemens EDA Illuminates the Complexity of PCB Design
by Mike Gianfagna on 01-19-2026 at 6:00 am


As heterogeneous multi-die design becomes more prevalent, the focus on advanced analysis has predictably shifted in that direction. While these challenges are important to overcome, we shouldn’t lose sight of how complete systems are built. Short- and long-reach communication channels, system-level power management, and the all-important PCB are still fundamental building blocks for every complex system.

Siemens Digital Industries Software takes a broad and holistic view of system design, and a recent white paper is a great example of that perspective at work. The paper is titled How long is that trace? and it illustrates the complexity of PCB analysis and why it’s so important to get it right. If you are engaged in delivering complex systems, this white paper provides important information to ensure a successful project. A download link is coming but first let’s examine some of the topics covered when Siemens EDA illuminates the complexity of PCB design.

Getting it Right – Signal Analysis

Measuring and matching propagation delay for complex signal traces is both critical for performance and quite challenging to accomplish. The white paper points out that:

To match the signal propagation time of two traces, PCB designers make the length of the two traces match down to a few thousandths of an inch (mils). While this is a good place to start, other factors influence the delay of the signal.

How high-frequency signals and vias affect propagation delay is discussed in some detail. For example, the piece explains how to use phase angle to calculate trace delay. The question is posed:

Since different frequencies propagate at different speeds, how does that speed difference affect a digital signal that is not a sine wave?

Fourier analysis is used to show how digital signals containing high frequency components are affected by the interconnect. The relationship of magnitude and phase is discussed across a spectrum of the harmonic frequencies of the signals involved. The figure below is an example of a plot to examine the composition of a digital signal. There is a lot more to getting this right than you may think. This white paper does a great job explaining what’s involved.

A digital signal and the harmonics that create the edge rate
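To make the harmonic picture concrete, here is a small, self-contained sketch. It is illustrative only and not based on the white paper's data: it rebuilds a digital edge from a square wave's fundamental plus odd harmonics, then applies a made-up attenuation and extra delay to the higher harmonics. Each harmonic's phase shift θ corresponds to a delay of θ/(2πf), the phase-angle-to-delay relationship the piece refers to, and the degraded harmonics visibly slow the reconstructed edge.

```python
import numpy as np

# Illustrative only: reconstruct a 1 GHz square wave from its fundamental plus
# odd harmonics, then degrade the higher harmonics to mimic a lossy trace.
f0 = 1e9                                   # fundamental frequency (Hz)
t = np.linspace(0, 2 / f0, 2000)           # two bit periods
harmonics = [1, 3, 5, 7, 9]                # odd harmonics of a square wave

ideal = np.zeros_like(t)
lossy = np.zeros_like(t)
for n in harmonics:
    amp = 4 / (np.pi * n)                  # Fourier coefficient of a square wave
    ideal += amp * np.sin(2 * np.pi * n * f0 * t)
    # Assumed (made-up) channel: higher harmonics attenuated and delayed more.
    atten = np.exp(-0.05 * n)
    extra_delay = 5e-12 * n                # 5 ps of added delay per harmonic order
    lossy += amp * atten * np.sin(2 * np.pi * n * f0 * (t - extra_delay))

def rise_time(sig):
    """10-90% rise time of the first edge (first quarter of the window)."""
    half = sig[: len(sig) // 4]            # region containing the first rising edge
    lo, hi = 0.1 * sig.max(), 0.9 * sig.max()
    t_lo = t[np.argmax(half > lo)]
    t_hi = t[np.argmax(half > hi)]
    return t_hi - t_lo

print(f"ideal edge rise time : {rise_time(ideal)*1e12:.1f} ps")
print(f"lossy edge rise time : {rise_time(lossy)*1e12:.1f} ps")
```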

Getting it Right – Via Design

The white paper also discusses how vias impact the edge rate and thus the trace delay. The piece explains an important point related to this issue:

If vias passed all frequencies of a signal equally, their impact would not be as significant. But vias impact some frequencies more than others, so via characteristics also affect signal delay.

There is a lot of rich and relevant detail presented regarding how via design impacts trace delay. Slightly different via geometries are analyzed in detail. It turns out that via geometry can have significant and non-intuitive impact on overall trace delay and thus overall system performance.

Again, frequency analysis and harmonics play a role in finding the right answers. The impact of various via return paths is also examined. The detail presented will get your attention.

To Learn More

After reading this white paper you will realize that copper length is not the only factor impacting the performance of a PCB trace. It is pointed out that vias have an inherent delay due to their span, but other characteristics add delay and distortion to the signal. The bottom line is the time from when the signal crosses the switching threshold at the driver to when it crosses the switching threshold at the receiver.
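As a simple illustration of that bottom-line metric, the sketch below measures delay as the time between the driver and receiver waveforms crossing an assumed switching threshold. The waveforms are synthetic stand-ins for simulator output, not data from the white paper.

```python
import numpy as np

# Hypothetical waveforms standing in for simulator output: driver and receiver
# voltages sampled on a common time base.
t = np.linspace(0, 2e-9, 4001)                      # 2 ns window, 0.5 ps step
v_drv = 0.8 / (1 + np.exp(-(t - 0.3e-9) / 20e-12))  # driver edge at ~0.3 ns
v_rcv = 0.8 / (1 + np.exp(-(t - 0.48e-9) / 35e-12)) # slower edge at ~0.48 ns

def threshold_crossing(time, volts, vth):
    """Return the (interpolated) time at which 'volts' first rises past vth."""
    idx = np.argmax(volts >= vth)                   # first sample at/above threshold
    # Linear interpolation between the two samples bracketing the crossing.
    t0, t1 = time[idx - 1], time[idx]
    v0, v1 = volts[idx - 1], volts[idx]
    return t0 + (vth - v0) * (t1 - t0) / (v1 - v0)

vth = 0.4                                            # assumed switching threshold (V)
delay = threshold_crossing(t, v_rcv, vth) - threshold_crossing(t, v_drv, vth)
print(f"driver-to-receiver delay at Vth={vth} V: {delay*1e12:.1f} ps")
```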

Edge rate is key. The piece points out that the signal edge is composed of a fundamental sine wave and multiple higher-frequency harmonics, all of which must have a certain amplitude and phase to reproduce the signal. Using a simulator is the best way to determine what the final performance will be. To find out more about this important system-level analysis and optimization process, download your copy of the Siemens Digital Industries Software white paper here. And that’s how Siemens EDA illuminates the complexity of PCB design.

Also Read:

Siemens and NVIDIA Expand Partnership to Build the Industrial AI Operating System

Automotive Digital Twins Out of The Box and Real Time with PAVE360

Addressing Silent Data Corruption (SDC) with In-System Embedded Deterministic Testing



Accelerating Advanced FPGA-Based SoC Prototyping With S2C
by Daniel Nenni on 01-18-2026 at 12:00 pm

Prodigy S8 100 Logic Systems

Having spent a significant amount of my career in EDA and IP, I can tell you firsthand how important picking the right prototyping partner is. I have known S2C since my interview with CEO Toshio Nakama in 2017. It has been a pleasure working with them and I look forward to seeing an S2C update at DVCon the first week of March here in Silicon Valley. Specifically, I am looking forward to seeing the new Prodigy Logic System.

The Prodigy S8-100 Logic System-VP1902 is a high-performance FPGA-based prototyping platform designed to accelerate advanced System-on-Chip (SoC) and ASIC development in demanding applications such as AI, high-performance computing (HPC), networking, and RISC-V processor design. Built and marketed by S2C Inc., a leader in FPGA prototyping solutions, the S8-100 series harnesses the latest AMD Versal™ Premium VP1902 adaptive SoC as its core building block. This integration enables developers to prototype large-scale digital logic designs with unprecedented gate capacity and I/O flexibility compared to previous generations of prototyping systems.

At its core, the Prodigy S8-100 platform is about bridging the gap between RTL design and full hardware validation before production silicon is available. FPGA prototyping has become essential because software development, validation, and system-level debugging often cannot wait for the final ASIC to be manufactured. The S8-100’s modular architecture enables hardware teams to test, refine, and validate entire SoCs—right down to peripheral interfaces—on reconfigurable hardware. Unlike simple simulation environments, this FPGA-based approach allows real execution of logic under real timing conditions, enabling much earlier detection of integration bugs and performance bottlenecks.

A defining feature of the Prodigy S8-100 is its massive logic capacity, with each VP1902 FPGA supporting up to 100 million ASIC equivalent gates.

Systems can be configured in three variants. In multi-FPGA configurations, the total effective capacity scales up to 400 million gates, providing headroom for extremely complex designs that incorporate multiple cores, accelerators, memory hierarchies, and communication fabrics.
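As a rough back-of-the-envelope illustration of what that capacity means for planning, and not an S2C sizing methodology, the sketch below estimates how many FPGAs a hypothetical design might need under an assumed utilization headroom; the design size and headroom figure are made up for the example.

```python
import math

# Back-of-the-envelope capacity planning, not an S2C sizing methodology.
# Assumptions (made up for illustration): 100 Mgate capacity per FPGA and a
# 60% usable-capacity target to leave room for partitioning/routing overhead.
design_gates    = 240e6     # hypothetical SoC size in ASIC-equivalent gates
gates_per_fpga  = 100e6
usable_fraction = 0.60      # assumed headroom for partitioning overhead

fpgas_needed = math.ceil(design_gates / (gates_per_fpga * usable_fraction))
print(f"Estimated FPGAs needed: {fpgas_needed}")   # -> 4 for this example
```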

Resource-wise, the S8-100 offers rich internal capabilities including millions of logic cells, megabits of on-chip RAM, and thousands of DSP slices per FPGA. It also boasts advanced I/O support with high-speed transceivers and contemporary interface standards such as PCIe Gen5, enabling real-world connectivity with host systems and other devices. The result is a prototyping system capable of both high throughput and real-world system integration testing.

Beyond raw hardware, the Prodigy S8-100 ecosystem includes a suite of productivity tools to streamline prototyping workflows. S2C’s software, including PlayerPro-CT for partitioning and ProtoBridge for co-simulation, helps automate complex multi-FPGA design partitioning and bitstream generation. An extensive catalog of “Prototype Ready IP” daughter cards further expands the platform’s usability, offering pre-validated interface modules (for memory, Ethernet, GPIO, and more) that plug into the system without consuming valuable FPGA logic. These tools together reduce setup time, simplify board bring-up, and allow teams to concentrate on verification and software development instead of hardware plumbing.

The Prodigy S8-100 is also gaining traction in emerging markets such as RISC-V SoC development, where developers need to validate not just CPU cores but entire subsystems that include custom extensions and accelerators. In a recent collaboration with Andes Technology, the S8-100 platform has been used to prototype advanced RISC-V designs with custom instructions and high-bandwidth interfaces, demonstrating its value in next-generation CPU and SoC workflows.

Bottom line: The Prodigy S8-100 Logic System-VP1902 represents a state-of-the-art prototyping solution that addresses the challenges of modern digital design: huge logic capacity, flexible I/O, scalable configurations, and robust toolchains. For semiconductor developers working on cutting-edge chips—from AI accelerators to custom processors—platforms like the S8-100 make it possible to validate complex designs thoroughly, accelerate software readiness, and reduce the risk associated with first-silicon prototypes. As design complexity continues to grow, FPGA-based prototyping systems like the Prodigy S8-100 will remain essential tools in the semiconductor development cycle.

 

REQUEST A QUOTE

Also Read:

S2C, MachineWare, and Andes Introduce RISC-V Co-Emulation Solution to Accelerate Chip Development

FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

S2C Advances RISC-V Ecosystem, Accelerating Innovation at 2025 Summit China



CEO Interview with Moshe Tanach of NeuReality
by Daniel Nenni on 01-16-2026 at 2:00 pm

Moshe Tanach

Moshe Tanach is co-founder and CEO of NeuReality. Prior to founding the company, he held senior engineering leadership roles at Marvell and Intel, where he led complex wireless and networking products from architecture through mass production. He also served as AVP of R&D at DesignArt Networks (later acquired by Qualcomm), where he led development of 4G base station technologies.

Tell us about your company. What problems are you solving?

NeuReality was established by industry veterans from Nvidia-Mellanox, Intel, and Marvell, united by a vision to transform datacenter infrastructure for the AI era. As computational focus shifts from CPUs to GPUs and specialized AI processors, we recognized that general-purpose legacy CPU and NIC architectures had become bottlenecks, limiting high-end GPU performance and efficiency. Our mission is to redefine these system components, prioritizing efficiency and cost-effectiveness for next-generation AI infrastructure.

We address a critical challenge in today’s AI datacenters — underutilized GPUs idling while waiting for data. Whether in distributed training of large language models or in disaggregated inference pipelines, the network connecting these GPUs is increasingly vital both in bandwidth and latency. The core challenge is to move large volumes of data between GPUs instantly, enabling continuous computation. Failure to do so results in significant cost inefficiencies and undermines the profitability of AI applications.

Our purpose-built heterogeneous compute architecture, advanced AI networking, and software-first philosophy led to the launch of our NR1 product. NR1 integrates an embedded AI-NIC and is delivered with comprehensive inference-serving and networking software stacks. These are natively integrated with MLOps, orchestration tools, AI frameworks, and xCCL libraries, ensuring rapid innovation and optimal GPU utilization. We are now developing our second-generation of products starting with the NR2 AI-SuperNIC, focused exclusively on GPU-direct, east-west communication for large-scale AI factories.

What is the biggest pain point that you are solving today for customers?

The paradigm in datacenter design has shifted from optimizing individual server nodes to architecting entire server racks and clusters, scaling up to hundreds or thousands of GPUs. The interconnect between these GPU nodes and racks must match the performance of in-node connectivity, delivering maximum bandwidth and minimal latency.

Our customers’ primary pain point is that current networking solutions, such as those from Nvidia and Broadcom, are neither wide nor fast enough, resulting in wasted GPU resources and increased operational costs due to power inefficiencies. To address this, we developed the NR2 AI-SuperNIC, purpose-built for scale-out AI systems. Free from legacy constraints, NR2 offers 1.6Tbps bandwidth, sub-500ns latency, and native support for GPU-direct interfaces over RoCE and UET. A flexible control plane and full hardware offload to the data plane supports all distributed collective libraries, orchestration, and MLOps protocols. By eliminating unnecessary overhead, NR2 achieves industry-leading power efficiency, a critical advantage as the number of NIC ports and wire speeds continue to rise.

Once you secure the best GPUs and XPUs for AI, network performance and integration into AI workflows become the ultimate differentiator for AI datacenters and multi-site “AI brains.”

What keeps your customers up at night?

Our customers are focused on three core challenges:

  • Maximizing the ROI of their GPU investments
  • Managing AI infrastructure growth in a cost-effective, sustainable manner
  • Avoiding lock-in to proprietary, closed solutions

From the outset, we addressed these concerns with a software-first, open-standards approach. This gives customers the flexibility to mix accelerators, adapt architectures, and scale without overhauling their entire system, while leveraging the power of developer communities. Customers recognize that superior hardware alone is insufficient. Robust, open software that leverages community-driven innovation and supports new algorithms and deployment models is essential to unlocking the full value of their infrastructure. Our software-first strategy has earned significant trust and respect from customers using our NR1 AI-NIC with our Inference Serving Stack (NR-ISS) and our Scale-out Networking Stack (NR-SONS), and from those preparing to adopt the NR2 AI-SuperNIC.

What does the competitive landscape look like and how do you differentiate? What new features/technology are you working on?

The competitive landscape is dominated by Nvidia’s ConnectX and Broadcom’s Thor general-purpose NIC products. While these solutions are advancing in bandwidth, their latency remains above 1 microsecond, which becomes a significant bottleneck as speeds increase to 800G and 1.6T. Hyperscalers and other leading customers are demanding faster, more efficient networking to pair with Nvidia GPUs and their own custom XPUs. Without such solutions, they are compelled to develop their own NICs to overcome current limitations, a task that has proven long and complex.

NeuReality differentiates itself by delivering double the bandwidth and less than half the latency of competing products. We then deliver exclusive AI features in the core network engines, such as the packet processors and the hardened transport layers, and in the integrated system functions, such as the PCIe switch and peripheral interfaces.

We defined and designed the NR2 AI-SuperNIC die, package, and board in collaboration with market leaders to accommodate diverse system topologies. Features include:

  • Integrated UALink for high-performance in-node connectivity between CPUs and GPUs, bridging scale-up and scale-out networks
  • Embedded PCIe switch for flexible system architectures
  • xCCL acceleration for both mathematical and non-mathematical collectives, a unique capability
  • Exceptional power efficiency—2.5W per 100G, setting a new industry benchmark
  • Comprehensive, open-source software stack with native support for all major AI frameworks and libraries.

Compared side by side with today’s solutions and with our competitors’ future roadmap products, the advantage of the NR2 AI-SuperNIC is clear.

How do customers normally engage with your company?

We work directly with hyperscalers, neocloud customers, and enterprises, providing support both directly and through system integrators and OEMs. Our engineering team invests in understanding each customer’s unique needs, collaborating closely to deliver tailored solutions. Most customers approach us not simply seeking a new networking solution but aiming to maximize the value of their GPU investments.

Engagements often begin with proof-of-concept (POC) projects. With our NR1 AI-CPU product, we established a robust ecosystem of partners, channels, and lead customers to ensure early product validation and customer satisfaction. For NR2, we are inviting partners to join the AI-SuperNIC Partnership and validate interoperability with their hardware, software stacks, and communication libraries well before full-scale deployment.

What is next in the evolution of AI infrastructure?

Looking ahead, we anticipate two key trends will shape customer focus and industry direction.

First, as AI workloads become increasingly dynamic and distributed, customers will demand even greater flexibility and automation in their infrastructure. This will drive the adoption of intelligent orchestration platforms that can optimize resource allocation in real time, ensuring maximum efficiency and responsiveness across diverse environments. To me, it’s crystal clear that rack-scale design is not enough. Scale-out must evolve together with scale-up to support ease of deployment that is less dependent on the location of GPUs in the node, server, rack, or cluster of racks.

Second, we expect sustainability and energy efficiency to become central decision factors for enterprises building or using large-scale AI infrastructure. Organizations will seek solutions that not only deliver top tier performance but also minimize environmental impact and operational costs. As a result, power-efficient networking and hardware offload will become critical differentiators in the market.

CONTACT NEUREALITY

Also Read:

2026 Outlook with Paul Neil of Mach42

CEO Interview with Scott Bibaud of Atomera

CEO Interview with Rabin Sugumar of Akeana



Podcast EP327: Third Quarter 2025 Electronic Design Market Data Report Overview and More with Dr. Walden Rhines
by Daniel Nenni on 01-16-2026 at 10:00 am

Daniel is joined by Wally Rhines, CEO of Silvaco, to discuss the Electronic Design Market Data report that was just released. Wally is the industry coordinator for the EDA data collection program called EDMD. SEMI and the Electronic System Design Alliance collect data from almost all of the electronic design automation companies in the world and compile it by product category and the region of the world where the sales occurred. It’s the most reliable data for the EDA industry and provides insight into which design tools and IP are in highest demand around the world.

Dan explores the results of the current report in detail with Wally, who explains that the current report documents another good quarter for EDA, with 8.8% overall growth compared to last year. Total revenue was $5.6B for the quarter, with EDA now solidly delivering over a $20B annual run rate. Dan and Wally explore the details of the report and discuss worldwide trends in EDA, IP and services across various regions. Some of the insights are surprising. Worldwide EDA employment is also discussed; it grew 17.3% compared to last year to approximately 73,000 employees.

Dan also discusses Wally’s recent decision to join Silvaco as CEO. Wally offers some excellent insights into what drove that decision and what the future looks like.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



Verification Futures with Bronco AI Agents for DV Debug
by Daniel Nenni on 01-16-2026 at 6:00 am

Bronco AI Verification Futures 2025

Verification has become the dominant bottleneck in modern chip design. As much as 70% of the overall design cycle is now spent on verification, a figure driven upward by increasing design complexity, compressed schedules, and a chronic shortage of design verification (DV) engineering bandwidth. Modern chips generate thousands of tests per night, producing massive volumes of logs and waveforms. Within this flood of data, engineers must find the rare, chip-killing bug hidden among hundreds of failures. Verification today is fundamentally a large-scale data analysis problem, repeated daily under intense time pressure.

Traditional approaches struggle to scale with this reality. Human engineers are exceptionally strong at deep, creative reasoning about a single complex failure, but they cannot efficiently process thousands of datasets simultaneously. Classical machine learning techniques, while powerful in narrow contexts, face severe limitations in DV. They often fail to generalize across architectures such as CPUs, GPUs, NoCs, or memory subsystems. Training data is difficult to collect due to IP sensitivity, labeling requires expert engineers, and constant design evolution creates distribution shifts between chip versions. These constraints limit the long-term impact of conventional ML in verification.

Bronco AI agents for DV represent a step change. Instead of relying on narrow models trained for specific tasks, agent-based systems leverage large reasoning models combined with tool use, memory, and decision-making loops. These agents generalize more effectively because they are trained on internet-scale code and problem-solving data rather than proprietary design specifics. They can be steered through natural language, allowing DV engineers to guide investigations intuitively. Crucially, agents learn from metadata and patterns rather than memorizing raw data, reducing overfitting and mitigating IP and security concerns by selectively handling and discarding context.

In DV workflows, Bronco AI agents operate much like a highly scalable junior-to-senior engineer hybrid. When a simulation fails, the agent autonomously decides how to investigate, executes standard DV actions such as log parsing and waveform inspection, and iterates until it identifies a likely root cause. If the issue exceeds its confidence threshold, the agent escalates with a well-formed ticket for a human engineer. This approach allows routine debug work to be handled automatically while preserving human expertise for the hardest problems.
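The sketch below is a heavily simplified illustration of that investigate-act-escalate loop and is not Bronco's implementation; the tool functions, log messages, and confidence values are placeholders for where a real agent would invoke a reasoning model and actual log and waveform tooling.

```python
# Heavily simplified sketch of the agent loop described above -- not Bronco's
# actual implementation. The tool functions and confidence values are placeholders.
from dataclasses import dataclass, field

@dataclass
class DebugState:
    test_name: str
    observations: list = field(default_factory=list)
    confidence: float = 0.0          # agent's confidence in its root-cause theory
    root_cause: str = "unknown"

def parse_log(state: DebugState):
    # Placeholder: scan the failing test's log for the first error signature.
    state.observations.append("UVM_ERROR: response timeout on axi_mon")
    state.confidence = max(state.confidence, 0.4)

def inspect_waveform(state: DebugState):
    # Placeholder: examine the signals implicated by the log observations.
    state.observations.append("awvalid asserted, awready never returns")
    state.root_cause = "arbiter deadlock blocks AXI write address channel"
    state.confidence = max(state.confidence, 0.8)

def triage(test_name, confidence_threshold=0.7):
    state = DebugState(test_name)
    actions = [parse_log, inspect_waveform]          # standard DV debug actions
    for action in actions:                           # agent picks the next tool to run
        action(state)
        if state.confidence >= confidence_threshold:
            return f"root cause: {state.root_cause}"
    # Still below threshold after all actions: escalate with a well-formed ticket.
    return f"ESCALATE to human: {test_name}, evidence: {state.observations}"

print(triage("axi_burst_stress_seed_42"))
```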

The impact of this agentic approach is measurable. In real subsystem-level UVM test failures on next-generation ASICs, Bronco AI agents were able to index new regressions within minutes, adapt to unfamiliar error signatures, and build an understanding of designs containing hundreds of thousands of lines of RTL.

In one case, an agent analyzed over 100,000 lines of logs and approximately 20 GB of waveform data to identify a deeply nested root cause in less than 10 minutes, work that a DV lead estimated would have taken hours, or days for a less experienced engineer.

AI agents also fundamentally change how waveform debug is performed. Traditional waveform analysis forces engineers to scroll through laggy GUIs, manually correlating thousands of signals across time windows and following one hypothesis at a time. Agents, by contrast, can examine many signals, hierarchies, and failure modes simultaneously. They can correlate errors across CPU, memory controllers, fabrics, and accelerators, classify failures, and recognize recurring patterns across regressions.
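One simple ingredient of that kind of pattern recognition is bucketing failures by a normalized error signature. The sketch below shows the idea with made-up log lines; it is illustrative only and far cruder than what an agent with full log and waveform context can do.

```python
import re
from collections import defaultdict

# Minimal sketch of failure bucketing: group regression failures whose error
# messages match after masking run-specific details (times, addresses).
# The log lines below are made up for illustration.
failures = [
    ("test_a_seed_11", "UVM_ERROR @ 1280ns: rd data mismatch addr 0x4f00"),
    ("test_a_seed_72", "UVM_ERROR @ 9644ns: rd data mismatch addr 0x1a80"),
    ("test_b_seed_03", "UVM_FATAL @ 300ns: watchdog timeout on pkt_gen"),
]

def signature(message):
    msg = re.sub(r"0x[0-9a-fA-F]+", "<addr>", message)   # mask addresses
    msg = re.sub(r"\d+ns", "<time>", msg)                # mask timestamps
    return msg

buckets = defaultdict(list)
for test, message in failures:
    buckets[signature(message)].append(test)

for sig, tests in buckets.items():
    print(f"{len(tests)} failure(s): {sig}  <- {tests}")
```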

Perhaps most importantly, these systems improve over time. By learning from past failures, tickets, and human feedback, AI agents build reusable debug playbooks, discover efficient shortcuts, and develop generalized intuition—such as recognizing which issue types tend to appear in certain subsystems. This continuous learning enables faster time-to-value without custom AI training and allows seamless integration into existing EDA flows.

Bottom line: AI agents deliver value in verification not by replacing human insight, but by amplifying it through scale, speed, and learning. As verification complexity continues to grow, agentic AI offers a practical path to closing the verification gap.

Contact Bronco for a Demo

Also Read:

Superhuman AI for Design Verification, Delivered at Scale

AI RTL Generation versus AI RTL Verification

Scaling Debug Wisdom with Bronco AI



AI Bubble?
by Bill Jewell on 01-15-2026 at 12:00 pm


The currently strong semiconductor market is being driven by AI applications. A McKinsey survey showed 88% of businesses used AI in 2025, compared to just 55% in 2023. According to Inc.com, 306 of the S&P 500 companies mentioned AI in their third quarter 2025 earnings conference calls, up from only 53 three years earlier in the third quarter of 2022.

The WSTS December 2025 forecast called for 22% semiconductor market growth in 2025 and 26% in 2026, following 20% growth in 2024. This growth has been driven by the memory and logic categories. Memory is forecast to grow 28% in 2025 and 39% in 2026. Logic is predicted to grow 37% in 2025 and 32% in 2026. Excluding the memory and logic categories, the remainder of the semiconductor market declined 3% in 2024 and is projected to grow only 6% in 2025. The memory companies have all cited AI as their major growth driver in the last two years. Nvidia, the largest AI semiconductor company, grew its revenue 114% in 2024 and is guiding for 63% growth in 2025. Most Nvidia AI semiconductors are included in the logic category.

How long will the AI boom last? What will happen to the semiconductor market? We can look at previous bubbles in the semiconductor market for clues.

PC Bubble

The introduction of the IBM PC in 1981 led to a boom in the PC market. With IBM setting a standard for PC hardware and software, businesses felt safe investing in PCs. PC unit shipments grew 85% in 1983 and 29% in 1984. The PC boom led to a boom in the semiconductor market, especially for memory and Intel microprocessors. However, the PC boom came to an abrupt halt in 1985, as PC unit shipments fell 11%. The weakness in 1985 was due to several factors. Clones of IBM PCs from Compaq, Dell and HP disrupted the market. U.S. GDP growth slowed from 7% in 1984 to 4% in 1985.

The PC bust in 1985 led to a 17% decline in the semiconductor market. Memory fell 38% and Intel revenues dropped 16%. The decline was short-lived. In 1986, PC unit shipments grew 22% and semiconductors grew 23%. Memory and Intel revenues returned to strong growth in 1987.

Internet Bubble

Internet use began to explode in the 1990s as major businesses established links and the World Wide Web created standards and enabled browsing. The number of Internet users roughly doubled each year in 1995 and 1996. Users grew around 50% a year from 1997 through 2000. In 2001, growth in Internet users slowed to 21%. The rapid growth of the Internet led to the creation of numerous dot-com companies fueled by venture capital. By early 2000, rising interest rates and the lack of profitability of most dot-com startups led to a slump in investment. The NASDAQ-100 index, which was heavily weighted with dot-coms, dropped 78% from March 2000 to October 2002.

The collapse of many dot-com companies resulted in telecommunications companies having over capacity in Internet infrastructure. Cisco, the largest provider of Internet infrastructure hardware, saw revenue change from 50% plus growth in 1999 and 2000 to a 23% decline in 2001. In 2001, the semiconductor market dropped 32% with memory down 49%. The semiconductor and memory markets returned to double-digit growth in 2003.

AI Bubble?

The chart below shows the change in the semiconductor market during the PC bubble (blue line), the Internet bubble (red line) and the current AI period (green line). In the PC and Internet bubbles, the semiconductor market had two years of strong growth in the 19% to 46% range followed by major declines when the bubbles burst. In the current cycle, semiconductor growth was 20% in 2024 and is projected at 23% in 2025 and 26% in 2026.

The question is not if the AI bubble will burst, but when. Basically, all major new technologies go through a period of strong growth in the first few years. Many new companies emerge to try to take advantage of the new technology, largely driven by investments from venture capital funds. Eventually, the growth of the new technology slows or declines. Investment funds then begin to dry up. The revenues of hardware companies enabling the new technology fall, leading to semiconductor market declines. History suggests a possible bursting of the AI bubble in the next year or two.

The bubbles are not the end of the new technologies, but an adjustment. Certainly, PCs and the Internet are major economic drivers which have transformed the way of life for both business and consumers. AI also promises a major transformation. How smoothly the transformation is implemented remains to be seen.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductors Up Over 20% in 2025

U.S. Electronics Production Growing

Semiconductor Equipment Spending Healthy



There is more to prototyping than just FPGA: See how S2C accelerates SoC bring-up with a high-productivity toolchain
by Daniel Nenni on 01-15-2026 at 10:00 am


System-on-Chip designs continue to grow in scale and interface diversity, placing greater demands on prototype capacity, interconnect planning, and bring-up efficiency. These challenges arise not only in large multi-FPGA programs but also in smaller designs implemented on a single device or a small FPGA cluster. In all cases, teams must build a representative verification environment, manage logic operating at different rates, and isolate functional issues with minimal iteration time.

S2C’s FPGA solution addresses these needs through a structured prototyping ecosystem that combines automation software, implementation flows, system IP, and hardware expansion options to support efficient and predictable SoC bring-up across a wide range of design scales.

Building a Scalable System-Level Prototyping Methodology

A predictable and efficient bring-up process depends on tight coordination between software automation and hardware infrastructure. S2C’s PlayerPro™ CT software supports both automatic and guided partitioning, including interconnect planning for designs that span multiple FPGAs. Timing-driven and congestion-aware algorithms help improve partition quality and stability. For designs that do not require partitioning, PlayerPro CT also enhances gated-clock conversion and memory mapping, improving overall implementation robustness.

The RTL Compile Flow (RCF) further streamlines implementation by reducing memory footprint, improving iteration turnaround, and maintaining RTL-level visibility for downstream debug. These capabilities are valuable not only for large multi-FPGA designs, but also for projects that ultimately fit into a single FPGA yet still require controlled timing convergence and manageable compile cycles during early architectural exploration.

Clock-domain and rate matching are common requirements when integrating subsystems with different clock frequencies or operating characteristics. In practical SoC bring-up, many IP blocks—such as memory controllers, external interfaces, or third-party subsystems—are often unable to operate at their final target frequencies during early prototyping stages.

S2C addresses this challenge by providing Memory Models and Speed Adapters that decouple functional validation from frequency constraints. These mechanisms allow subsystems to run at reduced or independent rates while preserving correct transaction ordering, protocol behavior, and system-level interactions.
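To illustrate the general rate-decoupling idea, and not S2C's Speed Adapter or Memory Model IP, here is a toy transaction-level sketch in which a producer and a consumer advance at different rates, joined by a FIFO that preserves transaction ordering. The rates and transaction names are made up; real speed adapters are hardware and also handle backpressure and protocol details.

```python
from collections import deque

# Toy transaction-level sketch of rate decoupling: a fast producer and a slow
# consumer advance at different "clock" rates, with a FIFO between them that
# preserves transaction order. Purely illustrative.
fifo = deque()
producer_period, consumer_period = 2, 5       # consumer runs 2.5x slower
sent, received = [], []

for cycle in range(40):
    if cycle % producer_period == 0:          # fast side issues a transaction
        txn = f"txn{len(sent)}"
        fifo.append(txn)
        sent.append(txn)
    if cycle % consumer_period == 0 and fifo: # slow side drains when it can
        received.append(fifo.popleft())

print("in-order so far:", received == sent[: len(received)])
print(f"{len(sent)} sent, {len(received)} received, {len(fifo)} still buffered")
```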

A representative system environment also depends on access to the appropriate peripheral interfaces without extensive custom hardware development. S2C offers a broad portfolio of daughter cards covering high-speed connectivity, memory, storage, display, and general-purpose interfaces. PCIe EP/RC, Mini-SAS, USB PHY, and SFP+/QSFP+ modules support high-bandwidth links; DDR4, LPDDR4, eMMC, and Flash modules enable memory subsystem evaluation; HDMI, DisplayPort, and MIPI D-PHY daughter cards support video and imaging use cases. GPIO headers, JTAG modules, and SerDes extensions enable signal probing and low-speed peripheral access. Together, these hardware options help teams reproduce system-level conditions that closely reflect the target deployment environment.

System-Level Debug Visibility

Debug is a critical part of prototype validation, and S2C provides mechanisms that deliver visibility at multiple levels of the system.

At the I/O level, engineers can validate basic functionality using push buttons, DIP switches, GPIOs, and UART interfaces. PlayerPro also enables virtual access to these controls, supporting remote operation and simplifying early functional checks.

For bus-level visibility, S2C offers ProtoBridge, which uses a PCIe connection to provide high-throughput transaction access suitable for software-driven stimulus generation and data movement. NTBus provides an alternative lower-bandwidth access path over embedded Ethernet.

Signal-level visibility is supported through probe insertion and waveform capture. MDM Pro enables concurrent capture of up to 16K signals across as many as eight FPGAs, with deep trace storage and support for both IP-mode and compile-time configurations—often without requiring a full recompile.

Conclusion

With a structured prototyping ecosystem and a comprehensive debug infrastructure, S2C’s Prodigy prototyping solution provides a stable foundation for building, scaling, and validating FPGA-based prototypes. Whether used for single-FPGA bring-up or large multi-board configurations, S2C enables teams to create representative verification environments, balance subsystem operation, and efficiently isolate functional issues throughout the SoC development cycle.

Contact S2C

Also Read:

S2C, MachineWare, and Andes Introduce RISC-V Co-Emulation Solution to Accelerate Chip Development

FPGA Prototyping in Practice: Addressing Peripheral Connectivity Challenges

S2C Advances RISC-V Ecosystem, Accelerating Innovation at 2025 Summit China



Last Call: Why Your Real‑World Lessons Belong in DAC 2026’s Engineering Track
by Admin on 01-15-2026 at 6:00 am

DAC Call for Contributions 2026

By Frank Schirrmeister, Synopsys
Disclaimer: This article is written in my role as Engineering Track Chair for DAC 2026

If you’ve ever walked out of DAC with a handful of practical ideas you could put to work as soon as you got back to the office, you already know the value of the Engineering Track. It’s where practitioners talk to practitioners – front‑end to back‑end, IP to systems & software – about what actually shipped, what almost derailed, and what really worked.

With DAC 2026 heading to Long Beach this year (July 26–29), the submission window gives you one more week to get your story into the program: the Engineering Track Call for Contributions closes on January 20, 2026.

DAC’s Engineering Track = The Workbench of the Conference

Think of the Engineering Track as the place where the industry checks claims against data. The four pillars are well‑known:

Front‑End Design, Back‑End Design, IP, and Systems & Software.

The bar for acceptance is pragmatic impact reviewed by your industry peers: integration experience, deployment lessons, measurable outcomes. This is curated by a large Technical Program Committee – we are about 150 reviewers strong this year – precisely to keep sessions practical, concise, and dense with takeaways.

For 2026, DAC is doubling down on that ethos. The official call emphasizes practical insights and real‑world experiences across the design ecosystem, and you have until January 20th to submit your experiences.

What’s Hot – and Why Your Experience Matters Now

First, Agentic AI is moving into mainstream workflows. At DAC62, the buzz wasn’t just “AI in EDA,” it was agentic AI – multi‑agent systems reasoning over flows, data, and constraints. The Microsoft Monday keynote spotlighted reasoning agents, and live demonstrations showed agents collaborating autonomously on parts of a chip design flow.

The consensus: agents can tame complexity, but humans remain firmly in the loop for sign‑off and safety. If you’ve piloted LLMs or agents in verification, synthesis exploration, debug, or collateral generation, the community wants the gritty details—wins, misses, cost curves, and guardrails.

Zooming in, last year’s Engineering Track special session “AI‑enabled EDA” assembled startup and industry voices (ChipStack→Cadence, Silimate, Rise, ChipAgents, VerifAI, Bronco AI, VerifAIX).

The through‑line? Move beyond script helpers to agent‑driven reasoning across the silicon lifecycle. If you’ve tried that jump (even partially), your evidence—coverage movement, regression stability, debug time, data hygiene—will land well for the DAC 2026 Engineering Track.

Second, Chiplets and multi‑die aren’t hypothetical anymore. From the Chiplet Pavilion to tool vendor panels, DAC62 made clear: heterogeneous packaging, 3D‑IC, and chiplets are scaling fast – and the bottlenecks are integration discipline and automation.

The industry’s discussions around “SoC Cockpit” automation, dedicated chiplet content, and panel takeaways all point to flows that must span system spec to signoff with tighter feedback loops.

What has your team learned about interposer modeling, package‑aware verification, DFT for multi‑die, or HBM timing closure under realistic workloads? Bring the receipts.

Third, Software‑defined systems reshape verification and bring‑up. Automotive and aerospace teams at DAC62 described accelerating pre‑silicon software bring‑up using emulation, virtual models, and scenario‑based validation – because lifetime validation and OTA‑driven features demand it.

If you’ve connected emulation/prototyping to CI/CD, instrumented performance/power at scale, or fused virtual platforms with lab rigs, those “how we wired it” details are gold for peers as part of the Engineering Track in 2026.

Finally, the AI imperative touches the whole stack. Arm’s SKYTalk at DAC62 framed AI as a systems problem: technology leadership, a complete systems approach, and a robust ecosystem. That systems lens maps perfectly to the Engineering Track: cross‑discipline integration stories beat single‑point heroics every time.

If you navigated cross‑org collaboration (IP vendors, foundry, cloud, toolchains) to make an AI workload viable, that’s exactly the content the Engineering Track is built to surface.

What a Strong Engineering Track Submission Looks Like

SemiWiki readers gravitate to specifics: numbers, tooling, and lessons learned. So do DAC’s reviewers. Across the four categories, consider framing your six-page abstract submission using this structure:

  • Problem & context. One page: node, die count, workload class, volume/latency/latency‑jitter, safety/security constraints. If applicable, cite DAC alignment – AI agents, chiplets, SW‑defined systems.
  • Approach & toolchain. Call out the stack – simulation + emulation, prototyping, virtual platforms, physical signoff, package/thermal analysis, LLM/agent frameworks, data backplane. Reviewers will assess architectural clarity.
  • Evidence & metrics. Be precise: % timing slack improvement post‑package‑aware optimization; coverage delta via agent‑generated tests; regression throughput speedups (x‑fold) after moving to hardware‑assisted verification; power/perf shifts under production workloads. The various DAC62 recaps made clear: people are quantifying their results.
  • Pitfalls & playbook. What failed first? Data cleanliness, model fidelity, agent drift, spec/versioning? Turn your scar tissue into a 3–5 step “do this first” list.

Remember, DAC 2026 is Chips to Systems end‑to‑end.

The conference is explicitly inviting contributions that span disciplines, not just point optimizations.

Special Sessions: Curate the Cross‑Cutting Conversations

Beyond standard talks and posters, Engineering Track Special Sessions can stitch together themes like Agentic AI across the flow, From executable spec to multi‑die signoff, or Software‑defined validation at scale. The 2025 panel on AI‑enabled EDA startups drew strong interest because it packed diverse, hands‑on perspectives. If you can convene 3–5 voices (user + tool + ecosystem), you’ll help frame where practice is heading in 2026.

Final Nudge: Your Story Will Help Someone Ship

The best DAC Engineering Track talks I’ve seen aren’t victory laps; they’re honest accounts where a team wrestled with modern complexity – agents in the loop, multi‑die realities, software‑defined validation – and found a pattern worth sharing.

DAC62 proved the appetite is huge: agentic AI is advancing quickly, chiplets are mainstreaming, and the community is aligning on systems‑level thinking.

Now it’s your turn to add to that collective playbook.

Submit by January 20 and bring your hard‑won lessons to the people who will put them to work. Start here for the Engineering Track Submissions, and here for the Engineering Track Special Sessions.

See you in Long Beach!

Disclaimer: This article is written in my role as Engineering Track Chair for DAC 2026

Also Read:

CES 2026 and all things Cycling

Podcast EP326: How PhotonDelta is Advancing the Use of Photonic Chip Technology with Jorn Smeets

Webinar: Why AI-Assisted Security Verification For Chip Design is So Important



2026 Outlook with Randy Caplan of Silicon Creations
by Daniel Nenni on 01-14-2026 at 10:00 am

Randy Caplan

Randy Caplan is co-founder and CEO of Silicon Creations, and a lifelong technology enthusiast. For almost two decades, he has helped grow Silicon Creations into a leading mixed-signal semiconductor IP company with 500+ customers spanning almost every major market segment. He has driven the development of key technologies for over 700 unique SerDes and PLL IP products, in all mainstream manufacturing nodes from 350nm down to 2nm. He is well known for his promotion of an organic growth model for business and has helped guide Silicon Creations to an average annual growth rate of 25% for the past decade, with nearly 100% employee retention. Prior to Silicon Creations he was a design engineer for PLLs and SerDes at Agilent Technologies, Virtual Silicon, and MOSAID.

Tell us a little bit about yourself and your company.

Silicon Creations is a mixed-signal IP provider, specializing in low-risk, high performance clocking solutions (PLLs), high speed data interfaces (SerDes), and accurate temperature and voltage sensors. We offer designs in every foundry (TSMC, Samsung, GF, Intel, Rapidus, UMC, SMIC, and more…), and every process node (below 2nm up to 180nm, all inclusive). We were founded in 2006, are ISO9001 certified, and after 19 years, still have close to 100% employee retention.

What was the most exciting high point of 2025 for your company?

Passing 14 million (TSMC) wafers shipped using our IP. Also, we passed our 1000th production license of our Fractional SoC PLL IP.

We developed, taped out, and tested a portfolio of 2nm TSMC IP including multiple PLLs, free-running oscillators, low-noise IOs, and temperature sensor IP. We also developed select 2nm PLLs with multiple other foundries including Samsung, Intel, and Rapidus.

What was the biggest challenge your company faced in 2025?

Simultaneously developing IP in all the leading process nodes (TSMC, Samsung, Intel Foundry, Rapidus). This required new design techniques to support core-device-only requirements in GAA (gate-all-around) nano-sheet / nano-wire processes.

How is your company’s work addressing this biggest challenge?

Chip development costs in advanced nodes have gone up exponentially. We provide a substantial portfolio of low-risk, proven, foundational IP which helps enable our customers to get their designs right the first time. Our advanced design and verification flow enables fast iteration and re-verification in processes where PDKs are frequently changing. This ensures we’re not the bottleneck in our customers’ development schedule.

What do you think the biggest growth area for 2026 will be, and why?

We’re seeing growth in many parallel market segments including AI accelerator chips, consumer electronics (AR/VR/mobile), and automotive. Customer tapeouts in advanced nodes (5nm and below) are picking up. We’re even seeing active development in sub-2nm nodes. Due to long chip development schedules and fab times in advanced nodes, our IP sales tend to be a strong leading indicator of chip sales one to two years in the future. We don’t have first-hand knowledge of the end market forces, but we can infer from our IP sales which semiconductor market segments are seeing increased investment now.

How is your company’s work addressing this growth?

We use an advanced IP design flow, leveraging the latest EDA flows from Synopsys, Cadence, Siemens, Silvaco, and others. This helps to reduce IP development time, improve simulation-to-silicon correlation, and ensure our customers have the foundational blocks they need to build their high-performance chips.

What conferences did you attend in 2025 and how was the traffic?

All TSMC shows, ICCAD (China), DAC and many other foundry and EDA events (Samsung, GF, Intel, Siemens U2U, CadenceLive). Traffic was especially strong at TSMC and ICCAD, and we had a good turn-out at our booth for the other events as well.

Will you attend conferences in 2026? Same or more?

Silicon Creations attended over 30 trade shows / conferences in 2025 and expects to attend a similar number in 2026.

How do customers engage with your company?

We already work with 18 of the top 20 TSMC customers, and over 550 companies overall. For new inquiries, please send an email to sales@siliconcr.com, or come to our booth at any trade show.

Also Read:

Silicon Creations Company Update 2025

Silicon Creations at the 2025 Design Automation Conference #62DAC

Silicon Creations Presents Architectures and IP for SoC Clocking



Where is Quantum Error Correction Headed Next?
by Bernard Murphy on 01-14-2026 at 6:00 am

Quantum computer and QEC coprocessor

I have written earlier in this series that quantum error correction (QEC), a concept parallel to ECC in classical computing, is a gating factor for production quantum computing (QC). Errors in QC accumulate much faster than in classical systems, requiring QEC methods that can fix errors fast enough to permit production applications. I have read of leaders in the field using FPGAs or GPUs to support QEC, which to me sounded intriguing but also difficult to scale for several reasons. Qubits can’t exist in the classical regime, seeming to imply that they must be collapsed prior to transfer, which would destroy carefully constructed superpositions and entanglement. Communication to an external chip (and back again after calculation) must travel through bulky and constrained channels with significant latencies, particularly in superconducting QC. On the return trip into the QC, the corrected qubit would need to be reconstructed. Would all this added latency undermine performance expectations for a production algorithm?

What’s really happening in QEC today

It took quite a bit of digging, including an excursion into ChatGPT (surprisingly helpful in response to a very technical question), to figure out what is really happening. There is a more refined partitioning than I had understood of the information communicated off-chip, and an acknowledgement that FPGA and GPU methods, while widely used, are temporary expedients: they allow QC builders to research and refine QEC techniques but are not expected to survive as part of production fault-tolerant systems.

First, the primary data qubits on chip aren’t measured. The additional (ancilla) qubits used for QEC are measured, and that syndrome data can be communicated to an off-chip device. That device figures out what corrections must be made per qubit and communicates them back to the QC, where quantum circuitry takes over again to apply those corrections by a mechanism which does not break coherence.

Second, latencies in this path can be significant. High noise rates require frequent correction, but the correction frequency is limited by those latencies. Equally, this limits the number of qubits that can be managed, since syndrome data for every qubit must travel out to the coprocessor and back within each correction cycle. This is why, useful though FPGAs and GPUs are as QEC co-processors today, they are not seen as long-term solutions for production QEC algorithms.
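For a sense of what the off-chip device actually computes each correction cycle, here is a minimal sketch of the classical decode step for a three-qubit bit-flip repetition code. Production decoders target far larger codes, such as surface codes, and much tighter latency budgets, but the measure-decode-correct structure is the same.

```python
# Minimal sketch of the classical decode step for a 3-qubit bit-flip repetition
# code -- the kind of computation the off-chip FPGA/GPU (or a future cryo-CMOS
# ASIC) performs. Real decoders target much larger codes such as surface codes.
# Syndrome bits are parities read out via the ancilla qubits:
#   s0 = parity(q0, q1), s1 = parity(q1, q2). The data qubits are never measured.

CORRECTION = {
    (0, 0): None,   # no error detected
    (1, 0): "q0",   # flip data qubit 0
    (1, 1): "q1",   # flip data qubit 1
    (0, 1): "q2",   # flip data qubit 2
}

def decode(syndrome):
    """Return which data qubit (if any) should receive an X correction."""
    return CORRECTION[syndrome]

# Example: the ancilla measurements report parities (1, 1), implicating the middle qubit.
print(decode((1, 1)))   # -> q1
```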

From prototypes to production QEC support

All prototyping systems eventually evolve to ASICs unless prototype performance is adequate and volumes are not expected to be high. Since QC vendors aim for high performance and (eventually) high qubit counts, they too are planning ASICs for QEC. But these must sit very close to the QC core to minimize communication overhead, which puts them in or very close to the deep cooling cavity for superconducting QCs. IBM plans a QEC core built with cryogenic CMOS, I’m guessing on the bottom layer in their 3D-stack architecture: qubits on the top layer, resonators on the middle layer, and QEC on the layer under that.

This is a very nice technology advance. I don’t know where other QC vendors and technologies stand in this race, but I have to believe IBM, already a dominant player in the QC market, is aiming to further widen its lead.

Usual caveat that this is a fast-evolving market and promises aren’t yet proven deliverables. Still, keep a close eye on IBM!

Also Read:

2026 Outlook with Nilesh Kamdar of Keysight EDA

Verifying RISC-V Platforms for Space

2026 Outlook with Paul Neil of Mach42