
CEO Interview with Russ Garcia of Menlo Micro
by Daniel Nenni on 08-17-2025 at 8:00 am


Russell (Russ) Garcia is a veteran technology executive with over 30 years of leadership experience in semiconductors, telecommunications, and advanced electronics. As CEO of Menlo Microsystems, he has led the commercialization of disruptive MEMS switch technology across RF, digital, and power systems.

Previously, Russ founded the advisory firm nGeniSys, served as an Executive in Residence at GE Ventures, and held senior leadership roles at Microsemi, Texas Instruments, and Silicon Systems. He also served as CEO of WiSpry and u-Nav Microelectronics, where he oversaw the launch of the industry’s first single-chip GPS device. Russ remains active as a board member and industry advisor.

Tell us about your company

Menlo Micro is setting a new standard in chip-based switch technology with its Ideal Switch by addressing the limitations of traditional electromechanical relays (EMRs) and solid-state (SS) switches.

Following the path of other disruptive innovators, our RF products are enabling customers in high-growth markets – particularly AI, aerospace, and defense – to miniaturize their systems while enhancing performance, capability, and reliability. Driven by AI-fueled growth in data and the xPUs supporting that expansion, we’re responding to customer demand by delivering best-in-class miniature RF switch products – necessary for testing existing and future generations of high-speed digital data buses – and scaling to support increased adoption among the top semiconductor manufacturers. We’re also expanding our high-speed, high-performance RF switch products into the aerospace and defense sectors through engagements with top defense, radar, and radio OEMs. With product adoption accelerating in the RF segment, the company is developing and positioning a new smart power control product platform to expand into AC/DC power distribution and control, meeting growing demand in microgrids, data centers, and factory automation.

Our technology overcomes system-level bottlenecks caused by traditional switching, enabling customers to push performance boundaries such as accelerating AI GPU testing, delivering step-function improvements in size, weight, power, and performance for satellite communications beamforming, enhancing filtering in mobile radios, reducing energy consumption in factory automation, and improving fault detection and power distribution in energy infrastructure.

What problems are you solving?

Across industries, engineers face critical limits with traditional switching technologies. EMRs are large, slow, and prone to mechanical wear. SS switches suffer from high on-resistance, leakage, and heat generation, which limits scalability, reliability, and efficiency.

In semiconductor testing, switch performance directly affects test speed, accuracy, and cost. Traditional switches degrade signals and limit bandwidth, increasing complexity and slowing time-to-market. Aerospace and defense systems demand rugged, reliable switches that meet tight size, weight, and power constraints, yet traditional options lack durability or require bulky protection. Power systems, from industrial automation to energy grids, face thermal inefficiencies that drive overdesign, and slow switching speed limits responsiveness to system faults.

Menlo’s technology is unique because, as a true metallic conductor rather than a semiconductor, it delivers near-zero on-resistance, ultra-low power loss, and minimal heat generation. This eliminates the need for heat sinks and complex cooling, significantly improving thermal and power efficiency.

Built on a MEMS process, it achieves chip-scale integration, enabling up to 10x or more reductions in footprint and higher channel density for compact, scalable designs. It maintains reliable operation across extreme environments, from cryogenic to +150°C, and withstands shock and vibration, making it ideal for mission-critical applications.

With billions of cycles and no mechanical degradation, its long life combined with low power consumption and minimal thermal management reduces total cost of ownership through fewer replacements, simpler designs, and lower maintenance.

By solving the longstanding trade-offs between speed, power, size, and reliability, Menlo enables engineers to build smaller, faster, more energy-efficient, and reliable RF and power control systems.

What application areas are your strongest?

Our platform is strongest in high-performance industries, thanks to its broadband linearity, from DC to mmWave, and ultra-low contact resistance.

While our platform supports a wide range of demanding applications, from RF to power switching, one of Menlo’s fastest-growing areas is high-speed digital test. We’ve built a unique position by enabling high-integrity test solutions for advanced interfaces like PCIe Gen6 at up to 64 Gbps. Our switches offer a rare combination of broadband linearity from DC to mmWave, a compact footprint, and low on-resistance, ideal for both DC and high-speed environments. This dual capability allows customers to consolidate hardware, reduce signal distortion, and improve test density, boosting ROI and lowering total cost of ownership. With proven reliability across billions of cycles, our solutions also minimize maintenance and system downtime – driving our growing market share in semiconductor test, especially among companies working on the next wave of AI processors, GPUs, and data center chipsets.

Looking ahead, Menlo is actively developing its next generation of switches to support PCIe Gen7 and Gen8, scaling to data rates of 128 and 256 Gbps. This roadmap is driven in close collaboration with our customers to align with their next-gen test infrastructure needs.

Beyond test, our innovations in high-speed switching are creating leverageable product platforms for adjacent markets. In aerospace and defense, for example, we’re applying this same high frequency control switching capability to ruggedized environments, where high performance, fast actuation, and extreme reliability are critical, such as phased array radar, electronic warfare, and advanced power protection systems.

How do customers normally engage with your company?

Collaboration is core to our approach. Because our technology supports everything from testing to deployment and optimization, we engage early and often, working not just to meet needs, but to anticipate them. Our team strives to “see around corners,” aligning our innovations with where the industry is headed. To do this, we create strong working partnerships with our customers – when our customers succeed through Menlo product integration, we succeed.

A strong example of this model is the development of the MM5620 switch. In 2023, we partnered with leading GPU and AI chip manufacturers to understand the growing challenges in semiconductor testing. As demands on AI chips, xPUs, and custom ASICs surged, legacy switching became a clear bottleneck, resulting in longer test cycles, increased complexity, and delayed time-to-market.

These insights led to the MM5620: a high-speed, high-linearity switch array delivering near-zero insertion loss, ultra-low contact resistance, and exceptional linearity from DC to mmWave. This allows next-gen device testing without compromising signal integrity or accuracy. It is a step-change in test efficiency, with customers reporting 2x faster test times, simplified hardware, lower overhead, and reduced consumables – key reasons top semiconductor companies choose to collaborate with us. Building on this success, we continue to partner with AI and high-performance computing leaders to help them stay ahead in a fast-moving, competitive market.

What keeps your customers up at night?

Our customers operate in industries like the semiconductor supply chain, aerospace & defense, communications infrastructure, and energy infrastructure, where any failure or signal degradation can lead to significant financial impact, operational downtime, and safety risks. The increasing complexity and miniaturization of modern electrical systems amplify the vulnerabilities inherent in legacy switching technologies.

As system architectures demand higher bandwidth, faster switching speeds, and tighter thermal budgets, the tolerance for insertion loss, contact resistance, and thermal dissipation issues is rapidly diminishing. Consequently, customers are under significant pressure to mitigate these risks without compromising performance and reliability, while reducing total cost of ownership. This dynamic is driving customers to collaborate with us on adoption of current product offerings as well as on their next-generation electronic systems.

What does the competitive landscape look like and how do you differentiate?

The promise of a true MEMS switch, i.e., a tiny, fast, efficient mechanical conductor rather than a semiconductor, has long been recognized. However, scalability has been the major barrier. Over 30 companies have attempted to commercialize MEMS switches, only to fail due to material and manufacturing challenges. Semiconductor fabs rely on materials like silicon (a partial conductor) or soft metals, which cannot deliver the durability and reliability required for high-cycle mechanical elements.

Backed by R&D at GE, we developed a proprietary metal alloy system engineered to be highly conductive and mechanically robust for the device actuator, and further integrated the alloy with a metal material system for reliable conductive contacts. This breakthrough in metallurgy enables the production of ultra-conductive, highly reliable switches capable of billions of actuations, delivering unmatched linearity from DC to mmWave with the highest power density per chip on the market. Processed on glass substrates with metal-filled hermetic vias, our MEMS device delivers best-in-class RF and power performance. This core construction differentiates us from competitors, who either rely on semiconductor switches, limited by non-linearities, high losses, and heat, or on EMR technologies that lack scalability and ruggedness. It’s the integrated system that delivers the combined best-in-class performance, at both high power and high frequency, in a miniature chip-scale package.

What new features/technology are you working on?

In April 2025, we launched the MM5230, a high-performance RF switch developed with key customers to meet the demands of next-gen systems. Combining ultra-high RF performance with manufacturability, it supports advanced military communications and high-density IC parallel testing, delivering the performance, reliability, and versatility critical to today’s most demanding applications.

In June, we followed with the MM5625, engineered to dramatically increase test throughput with increased channel density in high-speed, high-volume environments such as AI GPU testing. It enables faster test cycles, greater parallelism, and improved data processing, empowering leading semiconductor manufacturers to expand testing capacity, accelerate time-to-market, and reduce total cost of ownership.

Looking ahead, Menlo Micro is working with customers on next-gen switches for PCIe Gen7 and beyond, as well as mmWave products up to 80 GHz to support advanced aerospace and defense RF systems. We’re also advancing a robust power control roadmap for AI IC testing, high-voltage DC in data centers, and smart grid and industrial automation.

In parallel, we’re partnering with the U.S. Navy and the Defense Innovation Unit (DIU) to develop 1000VDC/125A modules for 10MWe advanced circuit breaker systems in micro-nuclear reactors. These compact, low-heat modules offer 5–6X reductions in size and weight and will extend to mission-critical commercial sectors like data centers, industrial automation, and EVs.

Also Read:

CEO Interview with Karim Beguir of InstaDeep

CEO Interview with Dr. Avi Madisetti of Mixed-Signal Devices

CEO Interview with Bob Fung of Owens Design


Video EP9: How Cycuity Enables Comprehensive Security Coverage with John Elliott
by Daniel Nenni on 08-15-2025 at 10:00 am

In this episode of the Semiconductor Insiders video series, Dan is joined by John Elliott, security applications engineer from Cycuity. With 35 years of EDA experience, John’s current focus is on security assurance of hardware designs.

John explains the importance of security coverage in the new global marketplace. He describes what’s needed to perform deep security verification of a design for both known and potentially unknown threats, and why it’s important to achieve good coverage. He also describes how Cycuity’s tools help perform the deep analysis and verification tasks to ensure a design is secure.

Contact Cycuity

The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Podcast EP303: How Lattice Semiconductor is Addressing Security Threats From the Ground Up with Mamta Gupta
by Daniel Nenni on 08-15-2025 at 6:00 am

Dan is joined by Mamta Gupta, who leads the Security Product Marketing, Datacenter, and Communications Segment Marketing teams at Lattice. She brings over 20 years of FPGA experience in product development, with a special focus on the security and aerospace and defense segments.

Dan explores the growing area of cybersecurity with a focus on silicon-level security with Mamta. She describes the importance of silicon-level security to ensure devices, software and the systems they implement are not compromised in manufacturing or in the field. She explains that attacks can take the form of direct assault on a system but can also be accomplished by corrupting the training data used in AI algorithms and insertion of weaknesses during manufacturing.

She explains how the FPGA technology from Lattice is creating the ability to design for security from the ground up in a proactive way. She describes the move from “bolt on” to “built in” for systems by using Lattice FPGAs. She also discusses how to deal with post quantum security and how nation states are now becoming more involved in this area as a matter of national security.

Contact Lattice Semiconductor

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Semiconductors Still Strong in 2025
by Bill Jewell on 08-14-2025 at 2:00 pm

Semiconductor Market Change 2025

The global semiconductor market in 2Q 2025 was $180 billion, up 7.8% from 1Q 2025 and up 19.6% from 2Q 2024, according to WSTS. 2Q 2025 marked the sixth consecutive quarter with year-to-year growth of over 18%.

The table below shows the top twenty semiconductor companies by revenue. The list includes companies which sell devices on the open market. This excludes foundry companies such as TSMC and companies which only produce semiconductors for their internal use such as Apple. The revenue in most cases is for the total company, which may include some non-semiconductor revenue. In cases where revenue is broken out separately, semiconductor revenue is used.

Nvidia remains the largest semiconductor company based on its forecast of $45 billion in 2Q 2025 revenue. Memory companies Samsung and SK Hynix are second and third. Broadcom is fourth and long-time number one Intel has dropped to fifth.

Most companies reported solid growth in 2Q 2025 revenues versus 1Q 2025, with a weighted average increase of 7%. Memory companies showed the largest increases, with SK Hynix up 26%, Micron Technology up 16%, and Samsung up 11%. The healthiest revenue gains among the non-memory companies were Microchip Technology at 11%, STMicroelectronics at 10%, and Texas Instruments at 9.3%. Five companies saw revenue decline from 1Q 2025.

Almost all the companies providing guidance expect healthy growth in 3Q 2025 revenues versus 2Q 2025. Again, the biggest gains are from memory companies, with Micron projecting 20% and Kioxia projecting 30%. Both companies cited demand from AI applications as the key driver.

STMicroelectronics guided 15% revenue growth with all its end markets up except auto. AMD projects a 13% increase driven by AI. The other six companies providing revenue growth guidance range from 1.7% to 7.7%. The only company expecting a revenue decline is MediaTek, with a drop of 10% in 3Q 2025 due to a weak mobile market.

AI remains the strongest growth driver. Many companies are seeing upticks in their traditional markets. Some companies are experiencing growth in automotive revenues while others see automotive continuing to be weak. In their conference calls with financial analysts, most companies cited the uncertainties around tariffs and global trade as areas of concern.

The strong semiconductor market growth in the first half of 2025 practically guarantees double-digit full year growth. Recent forecasts are generally in a narrow range of 14% to 16%. WSTS revised its June forecast from 11.2% to 15.4% based on the 2Q 2025 data. We at Semiconductor Intelligence (SC IQ) remain cautious due to the uncertainty about global trade. But based on the strong first half of 2025, we are raising our 2025 forecast to 13% from the May forecast of 7%.

Projecting the impact of U.S. tariffs on global trade is difficult due to the frequent changes in threatened and implemented tariffs. In the case of China, the Trump administration in April threatened tariffs as high as 145%. In May, the administration put a 90-day pause on the higher tariffs and set tariffs on China at 30%. This week, the pause was extended until November.

Direct tariffs on semiconductors are very uncertain. Earlier this month, President Trump announced the U.S. will impose a 100% tariff on imports of semiconductors. He said companies that commit to building semiconductors in the U.S. will not face tariffs. Details of the plan have yet to be announced.

This month the Trump administration reached an agreement to provide export licenses for Nvidia and AMD to ship certain AI chips to China. The companies will be required to pay 15% of the revenue from these sales to the U.S. government. The legality of this agreement is questionable. The U.S. Constitution prohibits Congress from putting taxes or duties on exports. EE Times describes the deal as “unique”.

One area which has already seen an impact from tariffs is smartphones. As we have noted in previous newsletters, U.S. imports of smartphones have been dropping dramatically in recent months. 2Q 2025 U.S. smartphone imports dropped 58% in dollars and 47% in units from 1Q 2025. Smartphone unit imports from China declined 85%. Although there are currently no tariffs on smartphone imports, the threat of tariffs has had a significant impact. Canalys estimated 2Q 2025 U.S. smartphone sales were down about 20% from 1Q 2025. Many of the 2Q 2025 sales came from existing inventory. However, U.S. smartphone sales should drop significantly in the second half of 2025. Despite the drop in exports to the U.S., China smartphone manufacturing has remained strong, with unit production in 2Q 2025 up 5% from 1Q 2025.

The current semiconductor market is strong. Ongoing global trade disputes are a significant concern, but so far have not had a meaningful impact. The Trump administration tariff threats may become, to quote Shakespeare’s Macbeth, “sound and fury, signifying nothing.”

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

U.S. Imports Shifting

Electronics Up, Smartphones down

Semiconductor Market Uncertainty


Moving Beyond RTL at #62DAC
by Daniel Payne on 08-14-2025 at 10:00 am


Hardware designers have been using RTL and hardware description languages since the 1980s, yet many attempts at moving beyond RTL have failed to gain a foothold. At the #62DAC event I spent some time with Mike Fingeroff, the Chief High-Level Synthesis Technologist, to understand what his company Rise Design Automation is up to. Mike has two decades of experience in High-Level Synthesis (HLS) and even authored a book in 2010 on HLS.

One major theme at DAC this year was using GenAI to create RTL faster. At RISE they support a methodology using several higher-level languages like SystemVerilog, C++ or SystemC. Verilog designers gravitate towards using SystemVerilog with loose timing for control flow designs, while C++ is an appropriate language for dataflow designs. Mike thinks that you should use the best language for each block, then mix abstractions as needed.

All of the popular EDA simulators support multiple languages for design descriptions, spanning from RTL to transaction level. Many tier-one companies have proven that HLS flows are more productive than RTL: Google, NVIDIA, Qualcomm. The new challenge is providing a complete HLS tool chain that uses AI agents, instead of requiring experts to run the tools.

With RISE there are AI advisors and agents to help you generate high-level code using LLMs that already understand high-level coding from Python and C repositories. Their AI works with engineers to create the code easily by using chat prompts. Traditional LLMs are being used either on-premise or in the cloud, your choice, and they are pre-trained for you.

An LLM doesn’t really know HW design, so RISE had to show the models how to make HW from C++ code. They have an Agent Orchestrator that calls the RISE tools, views the results, and continues to iterate to explore the design space. This iteration loop can also contain logic synthesis and P&R tools.

Rise.ai Adviser is a generative AI advisor aimed at high-level design with natural language input, creating designs in SystemVerilog, SystemC and C++. Test benches are created in both C++ and UVM. You can analyze your design then optimize for area, power or speed. This all runs on a local processor or something larger if you really want to. During design exploration you can call your own tools, like VCS for power numbers, or Open ROAD tools for synthesis and P&R.

Verification speed ups with higher abstraction levels range from 100X to 1,000X faster. RISE verification has automatic channel capture for waveforms, automatic high-level to RTL comparisons, and utilities for sub-system assembly and verification testbenches.

Summary

RISE Design Automation did create a buzz at DAC this year, because their message was something that RTL designers want – becoming more productive by raising the design and verification abstractions, using faster toolchains and benefitting from generative AI integration. You can learn more about RISE by visiting their website and then think about starting an evaluation to produce better design and verification results from your team.

Related Blogs


Streamlining Functional Verification for Multi-Die and Chiplet Designs
by Daniel Nenni on 08-14-2025 at 6:00 am


As multi-die and chiplet-based system designs become more prevalent in advanced electronics, much of the focus has been on physical design challenges. However, verification—particularly functional correctness and interoperability of inter-die connections—is just as critical. Interfaces such as UCIe or custom interconnects must be rigorously tested to ensure the entire system performs as intended.

Traditional verification methods face serious challenges when applied to multi-die systems. Creating a unified top-level simulation that includes all dies is computationally demanding. Memory utilization often exceeds the capabilities of typical compute servers, which are geared more toward verifying individual IP blocks or subsystems. Although premium emulation and prototyping platforms like Palladium and Protium can manage such large-scale simulations, they are generally reserved for later stages of validation and software bring-up, not early-stage design.

Most early and mid-cycle verification relies on simulators like Xcelium Simulator, which perform power regressions across thousands of runs. These use existing compute farms, but the capacity limitations of typical servers prevent full-system simulations from being practical. Another bottleneck is the time and effort needed to build and debug a new top-level testbench for the integrated system, which can take weeks even when each die has already been verified independently.

A serial approach to interoperability testing is risky. In modern development flows, the goal is always to “shift left” to detect and fix issues as early as possible. Waiting until interposer designs are finalized and all die models are complete delays verification unnecessarily. There’s a better path forward: begin interoperability testing as soon as two or more die models are available, even if other parts of the system are still in development.

This is where the Xcelium Distributed Simulation Verification App offers a game-changing solution. Rather than simulating the entire system as one monolithic design, the Xcelium App enables each die to be simulated in its own process, running independently but connected through Xcelium Virtual Channels that abstract away RTL-level bus interfaces. These distributed simulations use the existing testbenches created for individual dies, significantly reducing the time and effort needed to verify multi-die systems.

Customer experience with the App shows that adapting to this distributed approach typically takes just a few days. Once connected, these simulations enable a wide range of interoperability testing scenarios, including register access, concurrency, die-to-die CRC and retry mechanisms, protocol interactions, and physical-layer behaviors like scrambling and lane repair. These tests are essential for signoff quality assurance in multi-die environments.

Importantly, distributed simulation allows verification activities to begin up to three months earlier than traditional methods, well before the interposer layout is finalized. The simulation model is constructed with only minor changes: conditional compile switches to handle traffic generation and memory maps, along with API calls to configure Xcelium Virtual Channels. From there, the Xcelium App handles the distributed communication and synchronization.

Performance is a key concern, but real-world testing has shown distributed simulations to be up to 3X faster than integrated top-level simulations, even with inter-process communication overhead. This is because Xcelium Virtual Channels minimize synchronization needs, allowing each simulation to run at optimal speed except during necessary transaction updates.

The potential of distributed simulation isn’t limited to multi-die systems. As individual dies grow in complexity, the same methodology could be applied to partition large single-die designs into independently simulated blocks, each with its own testbench. With the right communication strategy—favoring asynchronous transaction-based links over tightly coupled cycle-by-cycle synchronization—distributed simulation can scale to manage increasing design sizes efficiently.
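The asynchronous, transaction-based synchronization described above can be illustrated with a toy sketch using plain Python threads and queues. This models only the concept, not Cadence’s Xcelium API; the names (`run_die`, the payloads) are invented for illustration.

```python
import queue
import threading

def run_die(name, inbox, outbox, payloads):
    """Toy model of one die's simulation process.

    Each die advances its local cycles freely and synchronizes with its
    neighbor only when a transaction crosses the die-to-die link, instead
    of locking step on every clock cycle. Returns the log of received
    transactions as (sender, data) pairs.
    """
    log = []
    for local_cycle, data in enumerate(payloads):
        # ...many local cycles would be simulated independently here...
        outbox.put((name, local_cycle, data))   # transaction update out
        sender, cycle, rx = inbox.get()         # sync point: one transaction in
        log.append((sender, rx))
    return log

def run_pair(payloads_a, payloads_b):
    """Run two 'dies' concurrently, connected by two one-way channels."""
    a_to_b, b_to_a = queue.Queue(), queue.Queue()
    results = {}
    t_a = threading.Thread(
        target=lambda: results.update(A=run_die("A", b_to_a, a_to_b, payloads_a)))
    t_b = threading.Thread(
        target=lambda: results.update(B=run_die("B", a_to_b, b_to_a, payloads_b)))
    t_a.start(); t_b.start()
    t_a.join(); t_b.join()
    return results
```

Because each iteration performs exactly one put and one get per side, the two threads never deadlock, and neither thread waits except at the transaction boundary, which is the property that lets distributed simulations outrun a monolithic top-level run.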

Bottom line: Multi-die systems are becoming a foundational part of modern electronics, yet functional verification has struggled to keep pace with physical integration. The Xcelium Distributed Simulation Verification App provides a robust, scalable, and early-deployable solution. It enables full-system functional verification using existing testbenches and compute infrastructure, advancing shift-left strategies and accelerating development cycles without sacrificing quality or confidence in design correctness.

You can view the whitepaper here.

Also Read:

Chiplets and Cadence at #62DAC

Prompt Engineering for Security: Innovation in Verification

New Cooling Strategies for Future Computing


S2C Advances RISC-V Ecosystem, Accelerating Innovation at 2025 Summit China
by Daniel Nenni on 08-13-2025 at 10:00 am


Shanghai, July 19, 2025 — S2C, a leader in functional verification, showcased its latest digital EDA solutions and key partnerships with BOSC, Xuantie, and Andes Technology at RISC-V Summit China 2025, highlighting its contributions to the ecosystem. The company also played a leading role in the EDA sub-forum, with VP Ying J Chen co-chairing and Senior Engineer Dehao Yang delivering insights on accelerating RISC-V adoption through practical strategies.

Showcasing Diverse RISC-V Applications with Ecosystem Partners

Leveraging its comprehensive digital EDA portfolio, S2C delivers matching verification solutions across the RISC-V ecosystem—addressing verification needs ranging from IP validation to system-level verification. Through close partnerships with leading RISC-V vendors, S2C provides high-performance, scalable prototyping solutions that accelerate time-to-market from early design bring-up to full-system deployment.

At the summit, S2C showcased its FPGA prototyping solutions with live demos across multiple RISC-V applications – including the Xiangshan processor running a graphical Linux interface. S2C has collaborated with the Beijing Open Source Chip Research Institute (BOSC) since the first-generation Xiangshan CPU. For the recent validation of its third-generation Kunminghu processor – a 16-core RISC-V design with NoC interconnect – two S8-100Q Logic Systems (each with 4 VP1902 FPGAs) were deployed, achieving static timing closure at 12 MHz. BOSC recognized S2C as a “Strategic Contributor” for its critical role in accelerating Xiangshan’s development cycle.

Additionally, Xuantie R908—a high-efficiency processor designed for real-time performance—was demonstrated live running on the S2C S7-19P Logic System. The demo highlighted its low-latency operation and field-ready reliability.

Equally notable was Andes Technology’s 64-bit RISC-V vector processor IP core, the AX45MPV, running Linux and large language models easily and efficiently on S2C’s S8-100 Logic System through the Andes Custom Extension (ACE) framework.

Overcoming Simulation Bottlenecks with Transaction-Based Acceleration

The RISC-V Verification Interface (RVVI) provides a standardized framework to ensure ISA compliance and functional correctness. Yet, as RISC-V designs grow in complexity—especially with custom extensions—traditional simulation methods encounter challenges like slow execution speeds, limited debug visibility, and difficulties scaling to full system-level verification.

To address these challenges, the keynote by Dehao Yang focused on Transaction-Based Acceleration (TBA), a verification methodology that enhances RVVI by decomposing test scenarios into reusable transaction flows. TBA leverages co-simulation between virtual prototyping platforms and hardware emulators—using tools such as S2C’s Genesis Architect and OmniArk/OmniDrive—to significantly improve verification speed and observability at scale, while maintaining RVVI compliance.

This approach exemplifies how advanced verification methodologies, combined with powerful prototyping tools, can accelerate the path from RTL validation to full-chip system verification.

Building on this, VP of marketing Ying J Chen highlighted S2C’s continued commitment to ecosystem collaboration and innovation:

“It is exciting to see thousands of engineers at the summit, and the manifestation of our partners’ RISC-V cores drawing a large crowd to our booth,” stated Ying J Chen, VP of Marketing at S2C. “We don’t just see ourselves as tool providers—we’re also advocates for innovation and customers’ success. We’re committed to deepening our efforts in the RISC-V community and broadening the ecosystem.”

S2C Inc. is a global provider of FPGA prototyping solutions for SoC (System on Chip) and ASIC (Application-Specific Integrated Circuit) designs. The company offers hardware, software, and system-level design verification tools to accelerate the development process. S2C’s solutions are used for design exploration, IP development, hardware verification, system validation, software development, and compatibility testing.

Also Read:

Double SoC prototyping performance with S2C’s VP1902-based S8-100

Enabling RISC-V & AI Innovations with Andes AX45MPV Running Live on S2C Prodigy S8-100 Prototyping System

Cost-Effective and Scalable: A Smarter Choice for RISC-V Development


Breaking the Sorting Barrier for Directed Single-Source Shortest Paths

Breaking the Sorting Barrier for Directed Single-Source Shortest Paths
by Admin on 08-13-2025 at 8:00 am

Dijkstra's Algorithm

Problem & significance.
Single-source shortest paths (SSSP) on directed graphs with non-negative real weights is a pillar of graph algorithms. For decades, the textbook gold standard has been Dijkstra’s algorithm with good heaps, running in O(m + n log n) time in the comparison-addition model (only comparisons and additions on weights). Because Dijkstra effectively maintains a total order of tentative distances, many believed its “sorting cost” of n log n was an inherent barrier on sparse graphs. The new paper “Breaking the Sorting Barrier for Directed Single-Source Shortest Paths” overturns that belief with a deterministic algorithm for directed graphs, strictly faster on sparse inputs and the first to beat Dijkstra’s bound in this model.
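For context, the Dijkstra baseline being improved upon can be sketched in a few lines; the graph below is a toy example of my own, not from the paper:

```python
import heapq

def dijkstra(adj, source):
    """Textbook Dijkstra with a binary heap: O((n + m) log n).

    adj: dict mapping vertex -> list of (neighbor, non-negative weight).
    Returns a dict of shortest distances from source.
    """
    dist = {source: 0.0}
    pq = [(0.0, source)]  # (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Tiny directed example
adj = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 2.0)], "b": []}
print(dijkstra(adj, "s"))  # {'s': 0.0, 'a': 1.0, 'b': 3.0}
```

The priority queue is exactly where the implicit sorting of tentative distances happens, which is the cost the paper sets out to avoid.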

Model & setup.
The result holds in the comparison-addition model with real, non-negative edge weights. The authors adopt standard simplifications that preserve shortest paths: (i) make the graph constant-degree via a classical vertex-expansion gadget, and (ii) ensure uniqueness of path lengths by a lexicographic tie-breaking scheme that can be implemented with some overhead. These choices streamline the analysis without changing correctness.

Core idea in one line.
Blend the ordering discipline of Dijkstra with the bulk-relaxation power of Bellman–Ford, but never fully sort the frontier. Instead, shrink the frontier so only a fraction of “pivots” need attention at each scale, thereby skirting the sorting bottleneck.

From barrier to breakthrough: two levers.

  1. Frontier reduction via “pivots.”
    At any moment, Dijkstra’s priority queue reflects a frontier S of vertices whose outgoing edges can unlock progress. If one naively kept selecting the minimum, sorting resurfaces. The paper instead introduces a FindPivots subroutine that runs k rounds of bounded relaxations (Bellman–Ford style) and classifies vertices according to whether their shortest path crosses at most k frontier-interval vertices. Those that finish become complete. The others are “covered” by root vertices whose shortest-path subtrees are large. Only these roots, the pivots, must persist.

  2. Recursive bounded multi-source SSSP (BMSSP).
    Rather than run one monolithic Dijkstra, the algorithm performs a divide-and-conquer over distance scales.
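The pivot idea from step 1 can be illustrated with a deliberately simplified sketch; `find_pivots_sketch`, its root-attribution scheme, and the size-k threshold are my own approximation of the concept, not the paper’s actual FindPivots routine:

```python
from collections import Counter

def find_pivots_sketch(adj, dist, frontier, k):
    """Illustrative take on frontier reduction via pivots (not the paper's routine).

    Runs k rounds of Bellman-Ford-style relaxations starting from `frontier`,
    attributing every improved vertex to the frontier root it descends from.
    Frontier roots whose relaxation subtree reaches size >= k are kept as
    pivots; the rest of the frontier need not persist.
    """
    root_of = {u: u for u in frontier}  # which frontier root reached each vertex
    active = set(frontier)
    for _ in range(k):
        nxt = set()
        for u in active:
            for v, w in adj.get(u, []):
                nd = dist[u] + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    root_of[v] = root_of[u]
                    nxt.add(v)
        active = nxt
    # Pivots: frontier roots responsible for a large subtree of updates.
    subtree = Counter(root_of.values())
    return [r for r in frontier if subtree[r] >= k]

# "a" relaxes three vertices, "b" relaxes none, so only "a" survives as a pivot.
adj = {"a": [("x", 1.0), ("y", 1.0), ("z", 1.0)], "b": []}
dist = {"a": 0.0, "b": 0.0}
print(find_pivots_sketch(adj, dist, ["a", "b"], 2))  # ['a']
```

The point of the sketch is the asymmetry: only a fraction of the frontier produces large subtrees, so only those roots need ordered attention at the next scale.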

A partial-sorting data structure.
To orchestrate sources across recursive calls without full sorting, the authors design a block-based structure supporting: Insert, BatchPrepend (efficiently add a batch of strictly smaller keys), and Pull (extract up to M smallest keys plus a separating threshold). This lets the algorithm “sip” from the smallest distance ranges as needed, while updates from relaxations are appended efficiently, avoiding global reordering.
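A minimal stand-in for that interface might look as follows; the heap-backed `PartialSorter` class illustrates the three operations only and does not reproduce the paper’s block-based design or its amortized bounds:

```python
import heapq

class PartialSorter:
    """Illustrative stand-in for the paper's block-based structure.

    Supports the same three operations (Insert, BatchPrepend, Pull) but is
    backed by a plain binary heap, so it only demonstrates the interface,
    not the paper's complexity guarantees.
    """
    def __init__(self):
        self._heap = []

    def insert(self, key):
        heapq.heappush(self._heap, key)

    def batch_prepend(self, keys):
        # In the paper these keys are strictly smaller than everything stored,
        # which is what makes a prepend cheap; a heap simply absorbs them.
        for k in keys:
            heapq.heappush(self._heap, k)

    def pull(self, m):
        """Extract up to m smallest keys plus a separating threshold."""
        out = [heapq.heappop(self._heap) for _ in range(min(m, len(self._heap)))]
        threshold = self._heap[0] if self._heap else float("inf")
        return out, threshold

ps = PartialSorter()
for k in [5, 2, 9]:
    ps.insert(k)
ps.batch_prepend([0, 1])
print(ps.pull(3))  # ([0, 1, 2], 5)
```

Pull returning a separating threshold is the key design point: callers learn a distance bound below which everything has been handed out, without the structure ever maintaining a global order.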

Context & novelty.
Earlier, it was shown that Dijkstra is optimal if one insists on outputting the full ordering of vertices by distance. This paper sidesteps that constraint: it outputs distances without maintaining a total order, breaking the “sorting barrier.” It also delivers the first deterministic improvement even relative to prior randomized gains on undirected graphs, strengthening the case that the barrier is not fundamental.

Takeaways:

  • Theory: A clean, deterministic path past n log n for directed SSSP in a realistic algebraic model.

  • Technique: A reusable template—bounded multi-source recursion, pivot selection, and partial sorting—that may inform faster routines for other path problems under comparison constraints.

  • Outlook: Extending these ideas to richer weight domains or dynamic settings could unlock further speedups where sorting once seemed inevitable.

The full paper:  Breaking the Sorting Barrier for Directed Single-Source Shortest Paths 


Samtec Practical Cable Management for High-Data-Rate Systems

Samtec Practical Cable Management for High-Data-Rate Systems
by Daniel Nenni on 08-13-2025 at 6:00 am

Samtec cable management SemiWiki

According to a recent Samtec whitepaper, in high-data-rate (HDR) architectures, where signals traverse tens to hundreds of gigabits per second, “cable management” isn’t a housekeeping chore, it’s a first-order design variable. The mechanical path a cable takes directly influences channel loss, crosstalk, reliability, rework costs, and even thermal performance. The most successful programs treat cable routing, bend strategy, strain relief, and labeling as part of early architecture, co-optimizing them alongside signal integrity (SI), thermals, and assembly. That mindset unlocks cleaner channels, faster bring-up, and fewer surprises in environmental or HALT testing.

Figure 1. Samtec High-Data-Rate Cable Assemblies can be used in mid-board, front panel, and backplane flyover applications.

Bend control is foundational. Every cable has a static minimum bend radius below which impedance discontinuities, jacket damage, or conductor fatigue become likely. Minimums vary with construction and gauge; designers should consult product-specific data rather than assume generic rules. As one illustrative datum, 34-AWG twinax/coax commonly specifies a 3.1 mm (0.125″) minimum bend radius—tight by eye, but still large enough to demand discipline in dense layouts. Two practical rules follow: avoid bunching (it effectively increases the required minimum), and allow cables to splay as they leave the connector so the first bend is gentle and not levering the termination. When routing along the connector’s length, the best practice is “bend, then twist”: first introduce the desired bend, then apply a controlled 90° twist over ~1.5″ to re-orient the exit without over-stressing the bundle.
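The bend-radius discipline above reduces naturally to a simple design-rule check. The 3.1 mm figure comes from the whitepaper’s 34-AWG example; the `bundle_factor` penalty for bunching is a placeholder assumption of mine, not a Samtec specification, so product-specific data should always govern:

```python
def bend_radius_ok(route_radius_mm, min_bend_radius_mm, bundled=False, bundle_factor=1.5):
    """Check a routed bend against a cable's static minimum bend radius.

    `bundle_factor` models the warning that bunching effectively increases
    the required minimum; 1.5x is an illustrative placeholder only.
    """
    required = min_bend_radius_mm * (bundle_factor if bundled else 1.0)
    return route_radius_mm >= required

# 34-AWG twinax/coax example from the text: 3.1 mm minimum bend radius
print(bend_radius_ok(4.0, 3.1))                # True: 4.0 mm clears 3.1 mm
print(bend_radius_ok(4.0, 3.1, bundled=True))  # False: 4.0 < 3.1 * 1.5
```

Even a toy check like this makes the text’s point concrete: a route that clears the minimum for a single strand can fail once cables are bunched.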

Sleeving is valuable when used judiciously. Its role is protective—guarding jackets from nicks, abrasion, and errant edges—while offering light organizational benefits in cable-rich builds. Oversized, stretchable sleeves are preferred because they let strands splay naturally at bends and turns; tight sleeves (or large labels placed on them) add stiffness right where compliance is needed. Keep sleeves away from tight-radius regions, and use edge tape only where necessary. Strain is the other half of the story: always design for slack. Select lengths to accommodate both the final mated condition and the act of insertion/removal; short cables (<10 in / 25 cm) are especially sensitive. Where slack cannot be guaranteed, consult qualification data on allowable normal and side loads and, if needed, transfer load paths from the connector into the chassis or PCB via brackets or tie-downs. If compressive forces are unavoidable, preserve degrees of freedom so cables can splay and dissipate stress rather than prying at the interface.

Labeling and identification deserve more attention than they usually get. In production, field service, and RMA workflows, smart labels pay for themselves, yet they can become unintended stiffeners if applied indiscriminately. Keep labels minimal, place them on individual strands rather than wrapped around a bundle, and keep them clear of bend zones. Color coding can accelerate assembly while reducing handling time (and the handling damage that comes with it). Surround these mechanical practices with a robust set of enablement tools: full-channel SI models and evaluation kits for pre-layout what-ifs; thermal analysis to compute pressure drops and airflow interaction in cabled systems; and physical mock-ups to validate touch-labor ergonomics before freezing the design. Mature vendors also offer application-engineering support, custom sleeves and labels, and solution finders that map needs to qualified assemblies, particularly relevant for mid-board, front-panel, backplane, and Flyover® use cases.

In sum, HDR cable management is about treating the cable path as part of the channel, not an afterthought to be “neatened up” at the end. Respect bend radius by design; route to avoid bunching and leverage bend-then-twist to re-orient without stress; use sleeves for protection, not constriction; preserve slack and manage loads into structures that can bear them; label intelligently without adding stiffness; and anchor the whole effort with SI, thermal, and assembly analyses up front. Do these things, and you’ll ship systems that are faster to validate, more robust in the field, and easier to service—outcomes that matter just as much as the headline data rate.

Read the full Samtec white paper here

Also Read:

How Channel Operating Margin (COM) Came to be and Why It Endures

Visualizing System Design with Samtec’s Picture Search

Webinar – Achieving Seamless 1.6 Tbps Interoperability with Samtec and Synopsys

 


A Quick Tour Through Prompt Engineering as it Might Apply to Debug

A Quick Tour Through Prompt Engineering as it Might Apply to Debug
by Bernard Murphy on 08-13-2025 at 6:00 am

Prompt ENgineering example

The immediate appeal of large language models (LLMs) is that you can ask any question using natural language in the same way you would ask an expert, and it will provide an answer. Unfortunately, that answer may be useful only in simple cases. When posing a question we often implicitly assume significant context and skate over ambiguities. Then we are surprised when the LLM completely misses our expectation in the answer it provides.

The reason for the miss is that initial guidance was insufficient. Rather than trying to stuff all the necessary context into a new prompt, standard practice is to refine initial guidance through added prompts, as in the following (fake) example. No longer a simple prompt, this looks more like an algorithm, though still expressed in natural language. Welcome to prompt engineering, a new discipline requiring user training and familiarity with a range of prompt engineering techniques in order to craft effective queries for LLM applications.

Techniques in prompt engineering

This domain is still quite new, as seen in the great majority of papers reported in Google Scholar which appear from 2023 onwards. Outside of Google Scholar I have found multiple papers on use of LLMs in support of software debug and to a lesser extent hardware debug, but I have found very little on prompt engineering in these domains. There are also Freemium apps to help optimize prompts (I’ll touch on these later) though I’m not sure how much these could help in engineering debug given their more likely business-centric client base.

Lacking direct sources, I will synthesize my view from a variety of sources (list at the end), extrapolating to how I think these methods might apply in hardware debug. I would love to see responses to disprove or confirm these assumptions.

In debug, context is important even though in conventional algorithmic debug it is unclear how this might play a role. LLM-based debug could in principle help bridge between high-level context and low-level detail, for example, requiring that the LLM answer with an expert engineering viewpoint. Yes, that should be a default but isn’t when you are starting with a general-purpose model, trained on a wide spectrum of expertise in many domains. Less obvious is value in including information about the design function. This might narrow context somewhat within general training, maybe more so through in-house fine-tuning/legacy in-context training. Either way providing this information might help more than you expect, even though your prompt suggestion may appear very high level.

Chain of Thought (CoT) prompting, telling the LLM to reason through a question in steps, has proved to be one of the more popular prompting techniques. We ourselves don’t reason through a complex problem in one shot, and LLMs likewise struggle when a simple question/prompt is addressed to a complex problem. We humans break such a problem down into simpler steps and attack those steps in sequence. The same approach can work with LLMs. For example, in trying to trace to a failure root-cause we might ask the LLM to apply conventional (non-AI) methods to grade possible fault locales, then rank that list based on factors like suspiciousness, and then provide a reasoning for each of the top 5 candidates. One caution is that this method apparently doesn’t work so well on more advanced LLMs like GPT-4o, which prefer to figure out their own CoT reasoning.
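A CoT debug prompt along these lines might be assembled programmatically; the wording, the step list, and the `build_cot_debug_prompt` helper are an illustrative sketch of mine, not any published flow:

```python
def build_cot_debug_prompt(failure_log, suspects):
    """Assemble a chain-of-thought prompt for fault localization.

    `suspects` would come from a conventional (non-AI) suspiciousness
    ranking, as described in the text; the prompt wording is illustrative.
    """
    steps = [
        "1. Summarize what the failure log tells us about the failing behavior.",
        "2. Rank each candidate below by suspiciousness.",
        "3. For the top 5 candidates, explain your reasoning step by step.",
    ]
    candidates = "\n".join(f"- {s}" for s in suspects)
    return (
        "You are an expert hardware verification engineer.\n"
        f"Failure log:\n{failure_log}\n"
        f"Candidate fault locales:\n{candidates}\n"
        "Work through the following steps in order:\n" + "\n".join(steps)
    )

print(build_cot_debug_prompt("assertion fifo_overflow fired at cycle 1042",
                             ["fifo_ctrl.sv", "arbiter.sv"]))
```

Note how the expert-role line and the explicit step list encode both the context-setting and the step-decomposition points made above in one reusable template.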

Another technique is in-context learning/few-shot learning: learning from a few samples provided in a prompt. I see this method mentioned often in code creation applications, but I have yet to find a published example for verification except for code repair in debugging. However, there is recent work on code summarization driven by few-shot learning which I think could be a starting point for augmenting prompts with semantic hints in support of fault localization.
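The shape of a few-shot prompt for code repair can be sketched as follows; the bug/fix pairs and the `build_few_shot_prompt` helper are invented placeholders meant only to show the structure:

```python
def build_few_shot_prompt(examples, query):
    """Build an in-context (few-shot) prompt from (buggy, fixed) pairs.

    The pairs are demonstrations embedded directly in the prompt; the model
    is expected to continue the pattern for the final, unanswered query.
    """
    shots = "\n\n".join(f"Buggy code:\n{bug}\nFix:\n{fix}" for bug, fix in examples)
    return f"{shots}\n\nBuggy code:\n{query}\nFix:"

# Invented demonstration pairs, purely for illustration
examples = [
    ("if (a = b)", "if (a == b)"),
    ("always @(posedge clk or rst)", "always @(posedge clk or posedge rst)"),
]
print(build_few_shot_prompt(examples, "assign y = a & b | c;"))
```

The trailing unanswered “Fix:” is the essential part: the demonstrations establish the input/output pattern, and the model completes it for the new case.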

There are other techniques, such as chain-of-tables for reasoning on table-structured data and tree-of-thoughts to explore multiple paths, but in this brief article it is best to turn next to methods that automate this newfound complexity.

Automated prompt engineering

A general characteristic of ad hoc prompt engineering seems to be high sensitivity to how prompts are worded/constructed. I can relate. I use a DALL-E 3 tool to generate images for some of my blogs and find that beyond my initial attempt and a few simple changes it is very difficult to tune prompts predictably towards something that better matches my goal.

Leading AI providers now offer prompt generators, such as this one for ChatGPT, which will tell you what essentials you should add to your request, then generate a new and very detailed prompt with the added benefit of optimizing to the host model’s preferences (e.g. whether to spell out step-based reasoning or not). I tried this for image generation. It built an impressively complex prompt which unfortunately was too long to be accepted by my free subscriptions to either of two image generators. Google DeepMind’s OPRO has a somewhat similar objective, though as far as I can tell it directly front-ends the LLM, optimizing your input “meta-prompt” and then feeding that into the LLM.

There is also an emerging class of prompt engineering tools, though I wonder how effective these can be given rapid evolution in LLM models and the generally opaque/emerging characteristics of those systems. In prompt engineering perhaps the best options may still be those offered by the big model builders, augmented by your own promptware.

Happy prompting!

References

CACM: Tools for Prompt Engineering

Promptware Engineering: Software Engineering for LLM Prompt Development

Prompt Engineering: How to Get Better Results from Large Language Models

Empirical Evaluation of Large Language Models for Novice Program Fault Localization

Automatic Semantic Augmentation of Language Model Prompts (for Code Summarization)

Also Read:

What is Vibe Coding and Should You Care?

DAC TechTalk – A Siemens and NVIDIA Perspective on Unlocking the Power of AI in EDA

Architecting Your Next SoC: Join the Live Discussion on Tradeoffs, IP, and Ecosystem Realities