CEO Interview with Vamshi Kothur of Tuple Technologies
by Daniel Nenni on 06-27-2025 at 6:00 am

It was my pleasure to meet with Vamshi Kothur and the Tuple team at #62DAC for a briefing on their Tropos platform and Omni, a new multi-cloud optimizer. The conferences this year have been AI-infused with exciting new technologies, but one lingering question is: how will the existing semiconductor design IT infrastructure support AI-infused EDA tools and complex AI chip designs? And has there ever been a time when time-to-market was more critical?

Vamshi Kothur is a New Jersey-based cloud and DevSecOps veteran with over 20 years of experience leading large-scale IT transformations at Fortune 100 financial firms and high-growth technology startups. In 2017, he founded Tuple Technologies to close the infrastructure and security gaps that chip-design startups face when racing to tape out next-generation ICs, FPGAs, and AI accelerators on tight budgets.

A frequent speaker at DAC and presenter at Cadence Live, Vamshi mentors early-stage semiconductor founders on building secure, license-aware, elastic cloud infrastructures that protect IP and accelerate innovation.

Tell us about your company

Tuple Technologies is a managed services and cloud automation company focused exclusively on the semiconductor industry. Our flagship platform, Tropos, automates IT infrastructure provisioning and DevSecOps for IC, FPGA, and system design workloads—across cloud, hybrid, or on-prem environments.

Tropos simplifies the most complex and error-prone aspects of semiconductor IT—like infrastructure-as-code, CAD license orchestration, job scheduling, and security hardening—so design teams can focus on tapeout, not IT & toolchain configurations. Whether you’re running RTL simulation on a few hundred CPUs or AI/ML workloads on massive GPU clusters, Tropos ensures performance, security, and cost-efficiency at scale.

At DAC 2025 we launched Omni, a multi-cloud optimizer for deep learning and HPC-based workloads, which brings 70–90% cost savings on GPU-heavy jobs through dynamic orchestration and predictive scaling.

What problems are you solving?

We’re solving the invisible but critical infrastructure challenges that slow down chip innovation. At a high level:

– Data Security & Compliance: We protect design IP at every layer, whether in cloud or on-prem environments.

– Cloud Assets & Sprawl: Our clients save up to 90% on compute-heavy jobs through intelligent resource scheduling, especially GPU-heavy AI/EDA workloads.

– Vendor Lock-In: The Tropos platform frees teams from rigid, EDA-vendor-constrained compute environments by supporting heterogeneous toolchains and license models.

– Operational Inefficiencies: We reduce setup time from weeks to hours through automation, and improve turnaround time with real-time telemetry and burst scheduling across clouds.

Put simply, we let design teams focus on building chips—not scripts, servers, or cloud invoices.

What application areas are you strongest in?

We specialize in end-to-end infrastructure automation for semiconductor design pipelines. This includes:

– Provisioning, monitoring, and optimization of compute resources for Frontend, Backend, and Post-Tapeout Flows (PTOF).

– ECAD license management, including real-time analytics to help reduce over-provisioning and maintain compliance.

– DevSecOps services for CI/CD, with embedded security protocols tailored to IP-sensitive workflows.

– Multi-cloud orchestration that automatically routes workloads to the most cost-efficient and performant resources—AWS, GCP, or Azure (see the sketch after this list).

– GPU workload optimization, especially for AI/ML-driven verification and simulation flows.
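To make the multi-cloud routing idea concrete, here is a minimal sketch of cost-aware workload placement. Everything in it (the instance names, the prices, and the route function) is a hypothetical illustration, not Tropos or Omni internals:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    cloud: str             # e.g. "aws", "gcp", "azure"
    instance: str          # hypothetical instance name
    price_per_hour: float  # illustrative price, USD
    gpus: int

def route(gpus_needed: int, est_hours: float, offers: list[Offer]) -> Offer:
    """Pick the cheapest offer that satisfies the job's GPU requirement."""
    viable = [o for o in offers if o.gpus >= gpus_needed]
    if not viable:
        raise ValueError("no offer satisfies the GPU requirement")
    return min(viable, key=lambda o: o.price_per_hour * est_hours)

offers = [
    Offer("aws", "g5.12xlarge", 5.70, 4),       # all prices made up
    Offer("gcp", "a2-highgpu-4g", 4.80, 4),
    Offer("azure", "NC64as_T4_v3", 6.10, 4),
]
best = route(gpus_needed=4, est_hours=12.0, offers=offers)
print(f"route to {best.cloud}/{best.instance}: ~${best.price_per_hour * 12:.2f} total")
```

A production scheduler would also weigh spot-interruption risk, data gravity, and license availability; the point here is only that the routing decision reduces to a constrained cost minimization.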

What keeps your customers up at night?

Our customers are semiconductor engineers, CTOs, and startup founders—people who want to innovate but are being bogged down by:

– The complexity of managing hybrid cloud infrastructure with a limited IT, CAD & DevOps team.

– Security and compliance risks, especially around proprietary RTL or AI models.

– CAD license sprawl, where costs balloon and usage data is opaque.

– Unscalable DevOps practices, often relying on hand-crafted scripts that break under load.

– And perhaps most urgently, rising IT costs that outpace budgets.

Tropos and Omni are our answers to these concerns—automation, optimization, and visibility in a single platform.

What does the competitive landscape look like and how do you differentiate?

We see two primary types of competitors:

– Generic IT service providers – skilled in infrastructure, but not tuned for semiconductor workflows, licensing, or toolchains.

– EDA vendors (Synopsys, Cadence, Siemens, etc.) – strong on tools, but limited when it comes to customizable infrastructure and multi-cloud strategies.

Tuple is different because:

– We bring deep cloud-native and DevSecOps expertise.

– We are hyper-focused on semiconductor design workflows.

– Tropos is not a generic platform—it’s purpose-built for IC/FPGA/system development flows.

– Our “Pay-as-you-go” model fits lean startups just as well as growing mid-sized design teams.

– We also maintain deep technical partnerships and integrations across the cloud and EDA ecosystem.

What new features/technology are you working on?

We’re constantly pushing the envelope on intelligent infrastructure for silicon R&D. Some of our current focus areas include:

– AI/ML-driven automation for workload profiling and infrastructure scaling—especially for deep learning-based PPA optimization.

– Predictive multi-cloud scheduling to optimize across Spot and Reserved capacity in AWS, GCP, Azure and Neoclouds.

– Advanced security automation—integrating zero-trust networking and runtime compliance checks.

– Sustainability analytics to help customers reduce energy use and carbon footprint from large compute jobs.

We want to make cloud infrastructure programmable, secure, and efficient.

How do customers normally engage with your company?

Our customers typically come from the semiconductor ecosystem—startups, fabless companies, and design services firms. Engagement usually starts in one of three ways:

– Direct use of the Tropos platform to automate and optimize their infrastructure for IC design.

– Adopting Omni to control and reduce cloud spend on Compute & GPU AI/EDA jobs.

– Managed service partnerships—where we take ownership of infrastructure, DevSecOps, and cost governance, letting design teams focus on innovation.

We offer tiered service models, so whether you’re a lean startup or scaling up to multiple tapeouts per year, there’s a Tuple Technologies solution that fits.

Closing Thoughts?

Tuple Technologies is quietly powering a wave of semiconductor innovation by making infrastructure invisible. With deep roots in IT automation and a sharp focus on chip design, Tuple is enabling design teams to move faster, spend less, and stay secure—without the pain of managing infrastructure manually.

Contact Tuple Technologies

Tuple Case Study: Startup Best Practices for Semiconductors

A Practical Guide to Building Scalable, Secure, and Efficient IC Design Workflows for Start-Ups.

Also Read:

CEO Interview with Yannick Bedin of Eumetrys

The Sondrel transformation to Aion Silicon!

CEO Interview with Krishna Anne of Agile Analog


Webinar – Power is the New Performance: Scaling Power & Performance for Next Generation SoCs
by Mike Gianfagna on 06-26-2025 at 10:00 am

What if you could reduce power and extend chip lifetime without compromising performance? We all know the importance of power optimization for advanced SoCs. Thanks to the massive build-out of AI workloads, power consumption has gone from a cost and cooling headache to an existential threat to the planet if current consumption trends can’t be managed. Against this backdrop, proteanTecs recently held an informative webinar on the topic of power and performance optimization.

The discussion goes beyond adaptive techniques, better design strategies, and enhanced technology. A method of performing per-chip, real-time optimization is described, and the impact on power consumption and device reliability is dramatic. A replay link is coming, but first let’s explore an overview of the proteanTecs webinar, “Power is the New Performance: Scaling Power & Performance for Next Generation SoCs.”

What Was Discussed

Noam Brousard, vice president of solutions engineering at proteanTecs, begins the webinar with a summary of industry best practices and the compromises they represent. He explains that applying a single Vmin to all chips has significant drawbacks. Thanks to effects such as process variation, usage intensity, system quality, and operating conditions, a single Vmin will often waste power because the value is excessive. It can also lead to performance problems if the operational voltage is set too low.

Current best practices cannot accurately accommodate precise individual chip requirements. He explains that designs must take worst-case conditions into account, and finding the minimal required voltage per chip is very costly from a test-time perspective. Beyond that, the optimal Vmin will change over the chip’s lifetime due to effects such as changing workloads, aging, and stress.

Noam then introduces proteanTecs power reduction applications. These technologies provide a strategy to reduce power based on personalized device assessment and real-time visibility of actual voltage guard bands. The figure below summarizes some of the strategies used and the benefits achieved.

proteanTecs Power Reduction Applications

proteanTecs offers power reduction applications that personalize each chip’s minimum voltage requirement efficiently, including real-time visibility of the actual margins from design through production and during lifetime operation. As conditions change over time, adjustments to the supply voltage can be applied instantaneously.

A key element of the approach is embedded on-chip monitors that deliver real-time information on chip operation and performance. AVS Pro provides reliability- and workload-aware adaptive voltage scaling; prediction-based VDDmin optimization per chip (during production testing) and VDDmin margin-based optimization per system are also part of the overall offering.

VDDmin is the minimum voltage required to achieve correct digital functionality. Voltage applied to a product running in the field, however, needs to account for effects such as aging, temperature, workloads, and environmental conditions. Typically, these require “worst case” guard bands to be built in. In reality, not all of these conditions occur, so these guard bands waste critical power and energy. By employing proteanTecs applications, the guard bands can be minimized and adjusted dynamically in real time. proteanTecs’ unique in-chip Agents provide high-coverage, in-situ monitoring of the actual performance of limiting paths, in mission mode. The workload-aware power reduction application has a built-in safety net for dynamic adjustment should the timing margin fall critically low. Several examples of how the system works are shown, along with a live demonstration.
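As a conceptual illustration only (not proteanTecs’ implementation), the closed loop described above can be sketched in a few lines; the thresholds, step size, and monitor readings below are all hypothetical:

```python
V_MIN, V_MAX = 0.55, 0.90    # hypothetical supply limits (volts)
STEP = 0.005                 # hypothetical adjustment granularity
TARGET_MARGIN = 0.08         # desired timing margin (normalized slack)
SAFETY_MARGIN = 0.02         # safety net threshold

def adjust_voltage(vdd: float, timing_margin: float) -> float:
    """One iteration of a margin-based adaptive voltage scaling loop.

    timing_margin stands in for the normalized slack reported by
    on-chip monitors on the limiting paths.
    """
    if timing_margin < SAFETY_MARGIN:
        # Safety net: margin critically low, step voltage up immediately.
        return min(vdd + 4 * STEP, V_MAX)
    if timing_margin > TARGET_MARGIN:
        # Excess guard band: shave voltage to recover wasted power.
        return max(vdd - STEP, V_MIN)
    return vdd  # within the target band, hold steady

vdd = 0.80
for margin in (0.15, 0.12, 0.09, 0.07, 0.015):  # simulated monitor readings
    vdd = adjust_voltage(vdd, margin)
    print(f"margin={margin:.3f} -> vdd={vdd:.3f} V")
```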

There is a lot of important detail shared in the webinar, and I highly recommend you watch the event replay. To whet your appetite, Noam shows power savings of 8–14% for designs at advanced nodes. The live demonstration shows how the system adapts Vmin based on current conditions and how it reacts to an under-voltage situation.

Dr. Joe McPherson, CEO of McPherson Reliability Consulting, speaks next, discussing the reliability impact of power reduction. He explains how chip temperature is reduced with power reduction, then explores how reduced temperature extends chip lifetime. He presents some eye-popping statistics about lifetime increase through power-induced temperature reduction. The improvement depends on the failure mechanism (e.g., hot carrier, bias temperature instability, interconnects). He explains how a 15% chip power reduction can translate to a 20–90% improvement in chip lifetime. This should get your attention.
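For readers who want the intuition behind those numbers, the standard Arrhenius acceleration model (a textbook approximation, not the webinar’s exact math) relates a junction temperature drop from $T_2$ to $T_1$ to a lifetime acceleration factor:

$$\mathrm{AF} = \exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_1}-\frac{1}{T_2}\right)\right]$$

With an illustrative activation energy $E_a = 0.7\,\mathrm{eV}$ and $k_B = 8.617\times10^{-5}\,\mathrm{eV/K}$, cooling from 378 K to 368 K gives $\mathrm{AF} \approx e^{0.58} \approx 1.8$, roughly 80% longer lifetime from a 10 °C drop, squarely in the range quoted.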

Going beyond temperature, McPherson describes the impact of lower voltage and the associated thinner oxides. Here, the impact on device lifetime can be measured in hundreds of percent improvement. Quite impressive.

Alex Burlak, vice president of Test & Analytics at proteanTecs, concludes the webinar with details of how proteanTecs implements prediction-based VDDmin optimization for chip production and VDDmin margin-based optimization for system production. ATE testing typically requires multiple passes to identify the minimum voltage at which the chip will still pass. Getting the most accurate voltage makes test time long and expensive, so customers must compromise: either run long tests or apply excessive VDD without an ideally optimized voltage. proteanTecs offers a technique in which per-chip VDDmin assignment leverages a trained ML prediction model, eliminating the need to run the full test while still assigning the correct minimum voltage for the best power optimization.

Additionally, at system-level testing, functional workloads often differ from the assumptions made during ATE testing, which usually relies on structural tests; guard bands are therefore added to keep in-field operation safe. Leveraging the Agents to measure timing margin during functional workloads, proteanTecs can recommend a voltage optimization per chip or per system to further optimize VDDmin. Alex’s presentation is supplemented with live demonstrations of how proteanTecs delivers these capabilities.

Noam concludes with the following compelling points about proteanTecs power reduction solutions:

  • This approach goes beyond traditional AVS by leveraging embedded agents and real-time margin monitoring
  • Optimized power and performance during production and lifetime operation
  • Ensured reliability without risk
  • Lifetime extension of devices – less power, less heat, longer lifespan
  • Already deployed in custom systems, demonstrated up to 14% power savings

To Learn More 

There is a lot of relevant and useful information presented in this webinar. If power, heat, and device lifetime are important for your next design, this webinar will provide many new strategies and approaches. You can access the webinar replay here, or learn more about AVS Pro™ at this link.

Webinar Presenters

The topics covered in this webinar go deep into the testing, characterization and performance of advanced semiconductor devices. The graphic at the top of this post illustrates some of the significant challenges that are discussed. The team of presenters is highly qualified to discuss these details and did a great job explaining what is possible with the right approach.

Noam Brousard, vice president of solutions engineering at proteanTecs. With over 20 years of experience in system/hardware/software product development and management across consumer electronics, telecom, mobile, IoT systems, and silicon, Noam joined proteanTecs in August 2017, soon after it was founded. Before joining proteanTecs, Noam held the VP of R&D position at Vi and senior technical positions at Intel Wireless Innovation Solutions, Orckit, and ECI Telecom. Noam holds an M.Sc. in Electrical Engineering from Tel Aviv University and a B.Sc. in Electrical Engineering from Ben Gurion University.

Dr. Joe McPherson, CEO, McPherson Reliability Consulting. Dr. McPherson is an internationally renowned expert in the field of reliability physics and engineering. He has published over 200 scientific papers, authored the reliability chapters for 4 books, and has been awarded 20 patents. Dr. McPherson was formerly a Texas Instruments Senior Fellow and past General Chairman of the IEEE International Reliability Physics Symposium (IRPS), and he still serves on its Board of Directors. He is an IEEE Fellow and Founder/CEO of McPherson Reliability Consulting, LLC. Dr. McPherson holds a PhD in Physics.

Alex Burlak, Vice President of Test & Analytics at proteanTecs. With combined expertise in production testing and data analytics of ICs and system products, Alex joined proteanTecs in October 2018. Before joining the company, Alex held a Senior Director of Interconnect and Silicon Photonics Product Engineering position at Mellanox. Alex holds a B.Sc. in Electrical Engineering from the Technion – Israel Institute of Technology.

Jennifer Scher, Content and Communications Manager at proteanTecs. Jennifer moderated the event, including an informative live Q&A session from the audience. Jennifer spent over 20 years in product and solution marketing at Synopsys.

Reachability in Analog and AMS. Innovation in Verification
by Bernard Murphy on 06-26-2025 at 6:00 am

Can a combination of learning-based surrogate models plus reachability analysis provide first-pass insight into extrema in circuit behavior more quickly than would be practical through Monte Carlo analysis? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO, and lecturer at Stanford, EE292A) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick, HFMV: Hybridizing Formal Methods and Machine Learning for Verification of Analog and Mixed-Signal Circuits, was published at DAC 2018 and has 2 citations. The authors are from Texas A&M University.

While in this series we usually look at innovations around digital circuitry, the growing importance of mixed-signal in modern designs cannot be overlooked. This month’s pick considers machine learning together with formal methods in AMS to explore reachability of out-of-spec behaviors. Such behavior is normally hunted through Monte Carlo analysis, which becomes very expensive across a high-dimensional parameter space. Instead, an ML-based surrogate model plus reachability analysis could be an effective aid in quick-turnaround design exploration, while still turning back to standard methods for full/signoff validation.

My interest in this paper was triggered by a question in a DVCon keynote a couple of years ago, looking for a better way to check CDC (clock domain crossings) in mixed signal designs. Might this approach be relevant? Reachability in that case would be finding a case where a register data input doesn’t settle until inside the setup window.

Paul’s view

Great paper on combining machine learning with formal methods for high-sigma verification of small analog circuits. The authors test their method, hybrid formal / machine-learning verification (HFMV), on a differential amplifier, an LDO, and a DC-DC converter across a range of performance specifications (e.g. gain, GBW, CMRR, …). Compared to scaled-sigma sampling (SSS), a relatively state-of-the-art importance sampling method, HFMV finds multiple failures with fewer than 1k samples across all three circuits, whereas SSS is unable to find any failure after 4–9k samples. Impressive.

HFMV works by first building a predictor, using Bayesian machine learning methods, for a sample being a failure with some given probability. An SMT problem (a Boolean expression which can include numerical expressions and inequalities within it, e.g. a>b AND x+a<y) is constructed using this predictor. This expression is satisfied by any sample point that is predicted to have a probability of being a failure greater than some threshold, P. An existing state-of-the-art SMT solver, Z3, is then used to try to satisfy the expression with a value of P close to 1.0, i.e. to find a sample point that has a high probability of being a failure. Neat trick! After a batch of 350 samples has been generated from the SMT solver, real simulations are run to determine the ground-truth results for these samples. The Bayesian model is updated with the new results, and the process repeats. If the SMT solver fails to converge on a solution, then P is decreased in small steps until it converges.
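Before moving on, here is a toy sketch of that SMT step in Python with the Z3 solver; the linear “predictor” weights, bounds, and thresholds are stand-ins for the paper’s (linearized) Bayesian model:

```python
from z3 import Real, Solver, sat  # pip install z3-solver

# Toy linear failure predictor over two process parameters,
# standing in for HFMV's linearized Bayesian model.
w, b = [1.8, -2.3], -0.4

x0, x1 = Real("x0"), Real("x1")
score = w[0] * x0 + w[1] * x1 + b   # higher score = more likely failure

s = Solver()
s.add(x0 >= -3, x0 <= 3, x1 >= -3, x1 <= 3)  # bounded parameter space

# Ask for a point scored above confidence threshold P; as in the paper,
# back the threshold off in steps if the solver returns unsat.
for P in (0.95, 0.9, 0.8, 0.7):
    s.push()
    s.add(score >= P)   # proxy for "failure probability >= P"
    if s.check() == sat:
        m = s.model()
        print(f"P={P}: candidate failure point x0={m[x0]}, x1={m[x1]}")
        # HFMV would now ground-truth this point with a real simulation
        # and feed the result back to retrain the model (active learning).
        break
    s.pop()
```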

The other key innovation in this paper is a set of clever math tricks that modify the Bayesian model to make it more amenable to SMT solvers. The first applies a mapping function to the input parameters to make the model behave more linearly. The second removes a quadratic term in the model, again to make it more linear. This paper is a wonderful example of how disruptive innovation often happens at the intersection of different perspectives. Blending Bayesian with SMT, as the authors do here, is a brilliant idea, and the results speak for themselves.

Raúl’s view

Verifying that an analog circuit meets its specifications as design parameters vary (transistor channel length and width, temperature, supply voltage, …) requires simulating the circuit at numerous points within a defined parameter space. This is essentially the same problem addressed by commercial tools such as Siemens’ Solido, Cadence’s Spectre FMC analysis, Synopsys’ PrimeSim, and Silvaco’s Varman, all of which are crucial for applications such as library characterization and analog design. A common metric is the probability of identifying rare failures, presuming a Gaussian distribution with a specified width of σ; for instance, 6σ refers to approximately 2 parts per billion (2×10⁻⁹) being undetected. Employing “brute force” Monte Carlo simulation would require around 1.5 billion samples to identify at least one 6σ failure with 95% confidence, which is infeasible. Commercial tools address this in different ways, e.g., statistical learning, functional margin computation, worst-case distance estimation (WCD), and scaled-sigma sampling (SSS). The paper in this blog introduces a novel technique, not yet commercialized, called “Hybrid Formal/Machine learning Verification” (HFMV). This verification framework integrates the scalability of machine learning with the precision of formal methods to detect extremely rare failures in analog/mixed-signal (AMS) circuits.
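The 1.5 billion figure follows from a standard calculation, shown here for completeness. The probability of seeing no failure in $n$ independent samples is $(1-p)^n$, so to observe at least one failure of probability $p$ with confidence $1-\alpha$:

$$ n \;\ge\; \frac{\ln\alpha}{\ln(1-p)} \;\approx\; \frac{-\ln\alpha}{p} \;=\; \frac{\ln 20}{2\times 10^{-9}} \;\approx\; 1.5\times 10^{9} \quad (\alpha = 0.05). $$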

The paper focuses heavily on mathematics and may be challenging to read, but the main ideas are as follows. HFMV exploits commonly used probabilistic ML models such as Bayesian additive regression trees, the relevance vector machine (RVM), and the sparse relevance kernel machine (SRKM), trained on limited simulation or measurement data to probabilistically predict failure. Points in the parameter space can be characterized as strong failure, weak failure, weak good, or strong good, depending on prediction confidence. Formal verification using Satisfiability Modulo Theories (SMT) is applied to identify high-probability failure points (confirmed as true failures by a single simulation), and to prove that all points within a space have a very low probability of being failure points. Since rare failures might not appear in the initial training set, active learning refines the model iteratively. SMT solving is accelerated by several orders of magnitude by input-space remapping and linear approximation.

HFMV is tested on a differential amplifier, a low-dropout regulator (LDO), and a DC-DC converter, evaluated against specs like GBW, gain, CMRR, overshoot, and quiescent current, and compared to Monte Carlo and SSS. HFMV hits the first true failure point using 600–1,500 samples, about 10x and up to 1,000x fewer than used by SSS and MC respectively. Yet neither MC nor SSS can find any true failure in the bounded parameter space.

Despite being published in 2018, the paper shows that HFMV outperforms techniques used in current commercial tools for high-σ rare failure detection, effectively bridging the gap between accuracy and scalability (ML models) and rigor and completeness (formal methods). In addition, given the significant advancements in machine learning since its publication, implementing such capabilities today could be very interesting.

Also Read:

A Novel Approach to Future Proofing AI Hardware

Cadence at the 2025 Design Automation Conference #62DAC

Anirudh Fireside Chats with Jensen and Lip-Bu at CadenceLIVE 2025


CEO Interview with Yannick Bedin of Eumetrys
by Daniel Nenni on 06-25-2025 at 10:00 am

Yannick founded EUMETRYS in 2012. He began his engineering career with Schlumberger in 1998 in West Africa, then worked as an applications engineer for the company in the semiconductor sector from 2000 to 2004. In 2004, he joined Soluris as a field service engineer until 2006, then Nanometrics as a technical product support specialist.

Tell us about your company.

Founded in 2012, EUMETRYS is a global integrator of turnkey metrology, inspection, and robotics solutions for semiconductor manufacturers. Our headquarters are located in Gaillac in the south of France with our operations center in Meylan near Grenoble in France’s Silicon Valley. We also have subsidiaries in Germany and in the United States. EUMETRYS sells measurement and inspection equipment with associated installation, maintenance, and technical support services, as well as spare parts and robots to help its customers worldwide increase the lifespan of all their production equipment.

What problems are you solving?

Defective silicon wafers could cost the global semiconductor industry between $10 billion and $20 billion annually, depending on yield rates, technology nodes, and fab efficiency. This is why inspection plays such a vital role in semiconductor manufacturing, ensuring chip reliability and performance in critical sectors such as aerospace, healthcare, and automotive, where even the slightest failure can have serious consequences. A rigorous quality control process helps detect defects at the earliest stages of production, optimize yields, and reduce costs related to returns or repairs. By ensuring consistent quality, chip makers strengthen customer confidence and position themselves in a highly competitive market.

What are your strongest application areas?

We participate in the value chain of inline production control, sample qualification, and processed substrates in compound semiconductor manufacturing. We bring a very high level of expertise, acquired in the opto-photonic, MEMS, and compound semiconductor sectors since 2012, to supporting our customers, process engineers, and cluster leaders in maintaining their line yield.

What keeps your customers up at night?

In an industry as competitive as the global semiconductor sector, it is all about making sure that chip makers meet their wafer fab financial and productivity objectives. Yield is the one metric where we can help them move the needle. Metrology and inspection are last in line when it comes to customer CapEx, though the philosophy has changed – key users are now focusing on improving manufacturing yield and right-sizing their fabs, with the aim of producing more efficiently than before. We understand that very well and bring each customer a dedicated solution to help them meet their challenges.

What does the competitive landscape look like and how do you differentiate?

There are many companies on the market that offer all kinds of different solutions, but to implement them, customers often have to interface with different suppliers, making the process lengthy, cumbersome, and complex. The way we stand out from the pack is that we provide a one-stop-shop turnkey solution – equipment/hardware, integration services, spare parts, and after-sales service – a comprehensive approach from A to Z with only one partner to interface with. But that’s not all: an important part of our solution is customization. Each customer has their own concerns, and our support team of highly skilled technicians and engineers will tailor training and support to meet each individual customer requirement.

What new features/technology are you working on?

We just made an exciting announcement at CS ManTech in New Orleans, LA – we have been awarded exclusive global distribution of the YPI – Clear Scanner manufactured by the Japanese company YGK. This laser-scanning particle inspection tool for unpatterned compound semiconductor substrates greatly enhances semiconductor quality control by inspecting the surface of a variety of opaque and transparent wafers from 2 to 12 inches, including silicon carbide (SiC), gallium nitride (GaN), indium phosphide (InP), sapphire, gallium arsenide (GaAs), silicon, and glass. The scanner offers unparalleled functionality at a very affordable price for the compound semiconductor market – unmatched substrate flexibility, advanced surface inspection capabilities, robust and proven engineering for longevity, and optimized cost efficiency and uptime. With this reliable and fully customizable scanner developed specifically for the compound semiconductor market, chip makers can quickly respond to the stringent quality standards dictated by increasingly complex manufacturing processes. In addition, by July of this year we will be launching a new complementary offering for all of our customers, so watch this space for more information!

What would be your best advice to semiconductor fab owners on how to reach the best efficiencies for their manufacturing facilities?

I would recommend that fab owners always adjust their metrology and inspection investment to their fab’s design size requirements. It is really not necessary to own the most expensive metrology tools if they are overspecified relative to actual product design needs. For me, device manufacturing efficiency starts with adapting control to fab capability. If not, spending on CapEx and resources can negatively impact the cost of chip manufacturing. Semiconductor fabs need to be lean to achieve the best yield and to remain fit for purpose over the long term.

How can customers engage with your company?

Also Read:

The Sondrel transformation to Aion Silicon!

CEO Interview with Krishna Anne of Agile Analog

CEO Interview with Kit Merker of Plainsight


Visualizing System Design with Samtec’s Picture Search
by Mike Gianfagna on 06-25-2025 at 6:00 am

Visualizing System Design with Samtec’s Picture Search

If you’ve spent a lot of time in the chip or EDA business, “design” typically means chip design. These days it means heterogeneous multi-chip design. If you’ve spent time developing end products, “design” has a much broader meaning. Chips, subsystems, chassis and product packaging are in focus. This is just a short list if you consider all the aspects of system design and its many disciplines, both hardware and software. Samtec lives in the system design world. The company helps connect the chiplets, chips, subsystems and racks that comprise a complete system.

There is a huge array of products from Samtec that needs to be considered in any system design project. The screenshot above gives you a sense of the diversity involved. Choosing a particular connector will impact other choices, and not all combinations of Samtec products are viable as an integrated solution. The company has developed tools to help system designers navigate all this. Last year, I described something called Solution Blocks that helped identify compatible choices. Samtec has now taken that concept to the next level by adding visualization, along with more control and a broader perspective. If the entire semiconductor ecosystem worked this way, design would be a lot easier. Let’s take a short tour of visualizing system design with Samtec’s Picture Search.

Seeing is Believing

A link is coming so you can try this out for yourself. Here are a couple of examples of what’s possible. We’ll start with one of the many edge connectors available from Samtec. I chose the vertical Edge Rate® High-Speed Edge Card Connector. The initial configuration is shown below, including a high-resolution image that can be rotated for different views.

Vertical Edge Rate® High Speed Edge Card Connector

I decided to change the number of positions per row from 10 to 50. I also specified a polyimide film pad. In seconds, I got an updated image with detailed dimensions; see below.

Updated Edge Card Connector

The system summarized the features as shown below, along with detailed specs, volume pricing/availability, and extensive compliance data.

  • Optional weld tab for mechanical strength
  • 00 mm pitch, up to 140 positions
  • Accepts .062″ (1.60 mm) thick cards
  • Current rating: 2.2 A max
  • Voltage rating: 215 VAC/304 VDC max

This all took seconds to do. The ability to perform what-if experiments to converge on the best solution is certainly enabled by a system like this.

For one more experiment, I decided to try the Solutionator instead of browsing categories. Here, I chose active optics.  I then invoked the Solutionator interface for Active Optics, with its promise to “design in a minute.”

With the Active Optics Cable Builder interface, I could quickly browse the various options available. I chose a 12-channel, 16.1 Gbps, unidirectional AOC and instantly received the 3D diagram and all the specs as before. See below.

Active Optics Cable Builder

I could go on with more examples of how this new technology from Samtec makes it easier to pick the right components to implement your next system. The communication, latency, and power requirements for advanced semiconductor systems continue to get more demanding. The technology delivered by Samtec is a key ingredient in developing channels that meet all those requirements. And the process just got easier with Samtec’s Picture Search.

To Learn More

If high-performance channels matter to you, you must try this new technology from Samtec. If power, performance, and form factor are key care-abouts, you must try it as well. You can access the starting point for your journey here. Have fun visualizing system design with Samtec’s Picture Search.


Flynn Was Right: How a 2003 Warning Foretold Today’s Architectural Pivot
by Jonah McLeod on 06-24-2025 at 10:00 am

In 2003, legendary computer architect Michael J. Flynn issued a warning that most of the industry wasn’t ready to hear. The relentless march toward more complex CPUs—with speculative execution, deep pipelines, and bloated instruction handling—was becoming unsustainable. In a paper titled “Computer Architecture and Technology: Some Thoughts on the Road Ahead,” Flynn predicted that the future of computing would depend not on increasingly intricate general-purpose processors, but on simple, parallel, deterministic, and domain-specific designs.

Two decades later, with the cracks in speculative execution now exposed and the rise of AI accelerators reshaping the hardware landscape, Flynn’s critique looks prophetic. His call for architectural simplicity, determinism, and specialization is now echoed in the design philosophy of industry leaders like Google, NVIDIA, Meta, and emerging players like Simplex Micro. Notably, Dr. Thang Tran’s recent patents—Microprocessor with Time-Scheduled Execution for Vector Instructions and Microprocessor with a Time Counter for Statically Scheduled Execution—introduce a deterministic vector processor design that replaces out-of-order speculation with time-based instruction scheduling.

This enables predictable high-throughput execution, reduced power consumption, and simplified hardware verification. These innovations align directly with Flynn’s assertion that future performance gains would come not from complexity, but from disciplined simplicity and explicit parallelism.
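As a loose illustration of the time-counter idea (a sketch of the concept, not the patented microarchitecture), each instruction can be assigned an issue cycle at decode from known operand-ready times and fixed latencies, so execution needs no speculation or replay. The opcodes, latencies, and program below are hypothetical:

```python
# Toy statically time-scheduled issue: result times are computed up front
# from fixed latencies and dependences, then a time counter releases issue.
LATENCY = {"vload": 4, "vmul": 3, "vadd": 2}

# (op, dest, srcs): a tiny dependent instruction chain
program = [
    ("vload", "v1", []),
    ("vmul",  "v2", ["v1"]),
    ("vadd",  "v3", ["v2", "v1"]),
]

ready_at = {}  # register -> cycle its value is guaranteed valid
for i, (op, dest, srcs) in enumerate(program):
    issue = max([i] + [ready_at[s] for s in srcs])  # wait for operands
    ready_at[dest] = issue + LATENCY[op]            # result time known statically
    print(f"cycle {issue:2d}: issue {op:5s} -> {dest} (result at cycle {ready_at[dest]})")
```

Because every issue and completion time is fixed before execution, there is nothing to mispredict, which is exactly the verification and power argument made above.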

The Spectre of Speculation

Flynn’s critique of speculative execution came well before the industry was rocked by the Spectre and Meltdown vulnerabilities in 2018. These side-channel attacks exploited speculative execution paths in modern CPUs to leak sensitive data across isolation boundaries—an unintended consequence of the very complexity Flynn had warned against. The performance gains of speculation came at a steep cost: not just in power and verification effort, but in security and trust.

In hindsight, Flynn’s warnings were remarkably prescient. Long before Spectre and Meltdown exposed the dangers of speculative execution, Flynn argued that speculation was a fragile optimization: it introduced deep design disruption, made formal verification more difficult, and consumed power disproportionately to its performance gains. The complexity it required—branch predictors, reorder buffers, speculative caches—delivered diminishing returns as workloads became increasingly parallel and memory-bound.

Today, a quiet course correction is underway. Major chipmakers like Intel are rethinking their architectural priorities. Intel’s Lunar Lake and Sierra Forest cores prioritize efficiency over aggressive speculation, optimizing for throughput per watt. Apple’s M-series chips use wide, out-of-order pipelines, but they increasingly emphasize predictable latency and compiler-led optimization over sheer speculative depth. In the embedded space, Arm’s Cortex-M and Neoverse lines have trended toward simplified pipelines and explicit scheduling, often foregoing speculative logic entirely to meet real-time and power constraints.

Perhaps most significantly, the open RISC-V ecosystem is enabling a new generation of CPU and accelerator designers to build from first principles—often without speculative baggage. Vendors like Simplex Micro are championing deterministic, low-overhead execution models, leveraging vector and matrix extensions or predictive scheduling in place of speculation. These choices directly reflect Flynn’s thesis: when correctness, power, and scalability matter more than peak IPC, simplicity wins.

It’s worth noting that Tenstorrent, while often associated with RISC-V innovation, does not currently implement deterministic scheduling in its vector processor. Their architecture incorporates speculative and out-of-order execution to optimize throughput, resulting in higher control complexity. While this boosts raw performance, it diverges from Flynn’s call for simplicity and predictability. Nonetheless, Tenstorrent’s use of domain-specific acceleration and parallelism aligns with other aspects of Flynn’s vision.

A Parallel Future: AI Chips and Flynn’s Vision

Nowhere is Flynn’s vision more alive than in the rise of AI accelerators. From Google’s Tensor Processing Units (TPUs) to NVIDIA’s Tensor Cores, from Cerebras’ wafer-scale engines to Groq’s dataflow processors, the trend is clear: ditch speculative complexity, and instead embrace massively parallel, deterministic computing.

Google’s TPU exemplifies this shift. It forgoes speculative execution, out-of-order logic, and deep control pipelines. Instead, it processes matrix operations through a systolic array—a highly regular, repeatable architecture ideal for AI workloads. This approach delivers high throughput with deterministic latency, matching Flynn’s call for simple and domain-optimized hardware.

Cerebras Systems takes this concept even further. Its Wafer Scale Engine integrates hundreds of thousands of processing elements onto a single wafer-sized chip. There’s no cache hierarchy, no branch prediction, no speculative control flow—just massive, uniform parallelism across a tightly connected grid. By optimizing for data locality and predictability, Cerebras aligns directly with Flynn’s argument that regularity and determinism are the keys to scalable performance.

Groq, co-founded by TPU architect Jonathan Ross, builds chips around compile-time scheduled dataflow. Their architecture is radically deterministic: there are no instruction caches or branch predictors. All execution paths are defined in advance, eliminating the timing variability and design complexity of speculative logic. The result is a predictable, software-driven execution model that reflects Flynn’s emphasis on explicit control and simplified verification.

Even Meta (formerly Facebook), which once relied entirely on off-the-shelf GPUs, has embraced Flynn-style thinking in its custom MTIA (Meta Training and Inference Accelerator) chips. These processors are designed for inference workloads like recommendation systems, emphasizing predictable throughput and energy efficiency over raw flexibility. Meta’s decision to design in-house hardware tailored to specific models echoes Flynn’s assertion that different computing domains should not be forced into one-size-fits-all architectures.

Domain-Specific Simplicity: The DSA Revolution

Flynn also predicted the fragmentation of computing into domain-specific architectures (DSAs). Rather than a single general-purpose CPU handling all workloads, he foresaw that servers, clients, embedded systems, and AI processors would evolve into distinct, streamlined architectures tailored for their respective tasks.

That prediction has become foundational to modern silicon design. Today’s hardware ecosystem is rich with DSAs:

  • AI-specific processors (TPUs, MTIA, Cerebras)
  • Networking and storage accelerators (SmartNICs, DPUs)
  • Safety-focused microcontrollers (e.g., lockstep RISC-V cores in automotive)
  • Ultra-low-power edge SoCs (e.g., GreenWaves GAP9, Kneron, Ambiq)

These architectures strip out unnecessary features, minimize control complexity, and focus on maximizing performance-per-watt in a given domain—exactly the design goal Flynn outlined.

Even GPUs have evolved in this direction. Originally designed for graphics rendering, GPUs now incorporate tensor cores, sparse compute units, and low-precision pipelines, effectively becoming DSAs optimized for machine learning rather than general-purpose parallelism.

The Legacy of Simplicity

Flynn’s 2003 message was clear: Complexity is not scalable. Simplicity is. Today’s leading architectures—from TPUs to RISC-V vector processors—have adopted that philosophy, often without explicitly crediting the foundation he laid. The resurgence of dataflow architectures, explicit scheduling, and deterministic pipelines shows that the industry is finally listening.

And in an era where security, power efficiency, and real-time reliability matter more than ever—especially in AI inference, automotive safety, and edge computing—Flynn’s vision of post-speculation computing is not just relevant, but essential.

He was right.

References

  1. Flynn, M.J. (2003). “Computer Architecture and Technology: Some Thoughts on the Road Ahead.” Keynote at the Computing Frontiers Conference.
  2. Spectre and Meltdown vulnerability disclosures (2018).
  3. Google TPU: Jouppi, N. et al., “In-Datacenter Performance Analysis of a Tensor Processing Unit,” ISCA 2017.
  4. Cerebras Wafer Scale Engine (WSE).
  5. Groq: “A Software-defined Tensor Streaming Multiprocessor.”
  6. Meta MTIA v2 chip.
  7. Tenstorrent overview: products and software.
  8. WO2024118838A1: Latency-tolerant scheduling and memory-efficient RISC-V vector processor. https://patents.google.com/patent/WO2024118838A1/en
  9. WO2024015445A1: Predictive scheduling method for high-throughput vector processing. https://patents.google.com/patent/WO2024015445A1/en

Also Read:

Andes Technology: Powering the Full Spectrum – from Embedded Control to AI and Beyond

From All-in-One IP to Cervell™: How Semidynamics Reimagined AI Compute with RISC-V

Andes Technology: A RISC-V Powerhouse Driving Innovation in CPU IP


Enabling RISC-V & AI Innovations with Andes AX45MPV Running Live on S2C Prodigy S8-100 Prototyping System
by Daniel Nenni on 06-24-2025 at 6:00 am

Qualifying an AI-class RISC-V SoC demands proving that wide vectors, deep caches, and high-speed I/O operate flawlessly long before tape-out. At the recent Andes RISC-V Conference, Andes Technology and S2C showcased this by successfully booting a lightweight large language model (LLM) inference on a single S2C Prodigy™ S8-100 logic system powered by AMD’s Versal™ Premium VP1902 FPGA.

Capacity and Timing — Solved in One Device
Prototyping an SoC traditionally requires partitioning the design across multiple FPGAs, complicating timing closure and increasing development risk. S2C’s S8-100 Logic System, with roughly 100 million usable gates on a single FPGA, removes these hurdles. The dual-core AX45MPV cluster from Andes—featuring 64-bit in-order cores and a powerful 512-bit vector processing unit—together with an AE350 subsystem occupies less than 40% of the FPGA capacity. This generous margin allows designers to add custom instructions, additional accelerators, debug logic, or secret sauce without a second board. More importantly, the entire design can now reside within a single FPGA in the S8-100, eliminating time-consuming partitioning and avoiding the cross-chip latency that would otherwise throttle performance. Freed from the architectural compromises of multi-FPGA systems, the design can be operated at speeds sufficient to run large-scale software—enabling faster iterations, more realistic validation, and a dramatically simpler prototyping flow.

Robust Memory Bandwidth Without Board Spins
LLM inference workloads require stable, high-throughput memory subsystems to continuously feed vector engines. By leveraging S2C’s pre-validated DDR4 memory module and a plug-and-play auxiliary I/O card that handles JTAG and UART, the LLM demo is easily deployed. This modular approach allowed the hardware platform to be operational within days of receiving RTL code, accelerating design iterations and debugging cycles.

Modularity That Adapts to Changing Needs
The S8-100 excels at flexibility. Developers can rapidly pivot across use cases — whether AI inference, video processing, networking, or safety-critical industrial control — by swapping daughtercards to match the desired interfaces. S2C provides a vast library of over 90 daughter cards covering interfaces from MIPI-DPHY, HDMI and 10/100/1000/100G/400G Ethernet to fieldbus protocols. When a single FPGA isn’t enough, multi-FPGA partitioning is available.

Real Hardware Data Cuts Time-to-Market
Running Linux bring-up, driver stacks, and model benchmarks on cycle-accurate hardware transforms estimates into actionable insights. Teams using this approach typically save six to twelve months on critical paths, working from quantified risks rather than assumptions, which improves confidence in first-silicon success and yields ready-to-integrate software.

With the Andes–S2C collaboration, developers now have a better platform to innovate, explore ideas, and evaluate system architectures. By providing the capacity and flexibility to explore, the S8-100 enables teams to quickly build and iterate on proof-of-concept designs at more realistic system performance—paving the way for faster, more confident RISC-V and AI development.

Visit s2cinc.com to request an evaluation. Our experts will provide detailed feedback in days—not months—helping you streamline your prototyping journey.

Also Read:

Cost-Effective and Scalable: A Smarter Choice for RISC-V Development

S2C: Empowering Smarter Futures with Arm-Based Solutions

Accelerating FPGA-Based SoC Prototyping


DAC News – A New Era of Electronic Design Begins with Siemens EDA AI
by Mike Gianfagna on 06-23-2025 at 10:00 am

AI is the centerpiece of DAC this year: how to design chips to bring AI algorithms to life, how to prevent AI from hacking those chips, and of course how to use AI to design AI chips. In this latter category there were many presentations, product announcements, and demonstrations, and I was impressed by many of them. But an important observation is the focused nature of most of this work: methods to use AI to accelerate the design flow, converge on timing faster, and so on. Siemens took a different approach to addressing the requirements of impossible-to-design chips, however. In its own words, Siemens introduced a comprehensive generative and agentic AI system for semiconductor and PCB design. This approach has significant implications. Let’s take a look at how a new era of electronic design begins with Siemens EDA AI.

The Big Picture

Stepping back a bit, I was struck with a bit of Déjà vu when examining the Siemens announcement and diving into some of the details. Those who have been around EDA for a while will remember The Framework Concept. The idea was to develop an EDA framework that allowed all tools to work off a common data structure and use model. Sharing the user interface meant the best concepts would find their way to all tools. Sharing data models meant all tools could work off the same design description and collectively improve the design in synergistic ways.

It sounded great on paper, but sadly the technology wasn’t mature enough so many years ago. Most, if not all, of the CAD Framework ideas failed. I recall folks saying, “don’t use the F-word (Framework), or I’ll walk out of your presentation.” Today, we take all this for granted. Every mainstream design flow shares both data and the user experience effectively. The Framework promise was finally delivered.

Fast-forward to DAC 2025 and Siemens is taking this concept to the next level. What if a broad spectrum of AI technologies could be delivered to all development groups in the company? And what if each group could benefit from the substantial infrastructure delivered this way to then add tool-specific capabilities on top of it to create a truly consistent and AI-enabled design infrastructure? This is what Siemens announced at DAC. Let’s take a closer look.

Introducing the Siemens EDA AI System

The starting point for all this is a focus on something Siemens calls industrial-grade AI. The approach defines what’s important to harness AI for chip and PCB design: industrial-grade problems. This is in contrast to consumer AI, the ubiquitous version we all see every day. The figure below illustrates the differences.

Siemens Industrial Grade AI

In my opinion, this important analysis sets up the project for success. Most AI algorithms have a well-defined use model and scope of application. But the way the technology is deployed makes a huge difference. With regard to AI algorithms, the following chart will help to set the scope of application of the Siemens EDA AI system. In the company’s words, “a powerful hybrid AI system emerges when these AI capabilities are integrated together.”

Spectrum of AI Use Models

The Siemens EDA AI system is being deployed across the company to many development groups. Based on what I saw at DAC, many teams have embraced the technology and there are already many new capabilities as a result. The general deployment model is to leverage generative and agentic AI for front-end tasks and machine learning and reinforcement learning for back-end tasks. The strategy and the benefits are summarized in the figure below.

Siemens EDA focuses on the development of powerful hybrid AI systems

There are some guiding principles for this work. They are summarized as follows. I particularly like the last one. The customer base is doing a lot of work to harvest its own unique AI models and strategies. It’s critically important to recognize this and enable it. Siemens seems to have it right.

  • Enables generative and agentic AI capabilities across Siemens EDA tools
  • Strong data flywheel effect enabled by a centralized multimodal data lake
  • Secure with full custom access controls & on-premise / cloud deployment options
  • Open and customizable with multiple large language model (LLM) support, ability to add customer data and build custom workflows

First Results Across Key Tools

There was ample proof on display at DAC of the impact of this new approach across the product line. Here is a quick summary of some examples. There will be many more for you to explore.

Aprisa™ AI software: Aprisa AI is a fully integrated technology in the Aprisa digital implementation solution. It enables next-generation AI features and methodologies across RTL-to-GDS capabilities including AI design exploration that adaptively optimizes for power / performance / area. Integrated generative AI-assist is also included, delivering ready-to-run examples and solutions. Aprisa AI delivers 10x productivity, 3x improved compute-time efficiency, and 10 percent better PPA for digital designs across all process technologies.

Calibre® Vision AI software: Calibre Vision AI offers a revolutionary advance in chip integration signoff, helping design teams identify and fix critical design violations in half the time of existing methods by instantly loading violations and organizing them into intelligent clusters. Designers can then prioritize their activity based on this clustering and achieve a higher level of productivity. Calibre Vision AI also improves workflow efficiency with the addition of “bookmarks” that let designers capture the current analysis state, including notes and assignments, fostering enhanced collaboration between chip integrators and block owners during physical verification.

Solido™ generative and agentic AI: Solido now harnesses Siemens’ EDA AI system to deliver advanced generative and agentic AI capabilities throughout the Solido Custom IC platform to transform next generation design and verification. Tailored to each phase of the custom IC development process, including schematic capture, simulation, variation-aware design and verification, library characterization, layout and IP validation, Solido’s new generative and agentic AI empowers engineering teams to achieve orders-of-magnitude productivity gains. It appears that Solido is leading the charge with the application of advanced agentic AI technology.

A Growing Ecosystem

As you would expect, successful deployments like this one facilitate expansion to other technologies in the ecosystem. At DAC, Siemens also announced support for NVIDIA NIM microservices and NVIDIA Llama Nemotron models. NVIDIA NIM enables the scalable deployment of inference-ready models across cloud and on-premises environments, supporting real-time tool orchestration and multi-agent systems. Llama Nemotron adds high context reasoning and robust tool-calling for more intelligent automation across the EDA workflow.  

To Learn More

The work Siemens presented at DAC was comprehensive, well thought out, and widely adopted by development teams across the company. These are the elements of a very successful deployment of AI. If you’re thinking of adding AI to your design flow (and you should), you must learn more about what Siemens is up to. Here are some places to start:

And that’s how a new era of electronic design begins with Siemens EDA AI.


IP Surgery and the Redundant Logic Problem
by Bernard Murphy on 06-23-2025 at 6:00 am

It’s now difficult to remember when we didn’t reuse our own IP and didn’t have access to extensive catalogs of commercial IP. But reuse comes with a downside: without modification we can’t fine-tune IP specs to exactly what we want in a current design. We’re contractually limited in how we can adapt commercial IP; however, vendors compensate through a good deal of heavily tested configurability, which seems to satisfy most needs. For in-house IP, the ROI to bring IP to commercial reuse standards is tough to justify, but here there is much more freedom to get creative with a design. It’s more common to see copy-paste reuse, where you start with an IP already proven in silicon, then adapt through selective surgery to meet current needs. The challenge, of course, is that getting this right is never as simple as carving out chunks of RTL and adding in new chunks. Dependencies ripple through the design, and what had been proven in silicon is, under surgery, probably going to break in interesting ways. This, according to Ashish Darbari (CEO of Axiomise), introduces new opportunities to apply formal methods in pursuit of that verification.

Is there really anything new here?

Of course you need to re-verify, but how far do you want to go down the path to complete re-verification? For argument’s sake, let’s suppose the original IP was built on some 8-channel subsystem from which you want to drop 4 channels. You’ll start with the easy checks: formal Linting and coverage. Unless the design/testbench is already parameterized to take care of that possibility, around the area you cut out, Lint might show bus size mismatches, inaccessible states, all the usual problems. From coverage you will see new coverage holes. Also not surprising.

These are the obvious must-fix problems, but more issues will be lurking in the rest of the design. The original IP was scaled to handle 8 channels; if you drop 4 channels it may still work fine, but it won’t necessarily be efficient. The NoC was tuned for an 8-channel load, so FIFOs in the NoC are now bigger than needed for the reduced traffic. More generally, when you remove logic, is there any other logic you should also trim to reduce area and power? Synthesis can optimize away unnecessary logic to a limited extent, but not complex sequential logic.
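
As a back-of-the-envelope illustration of the efficiency point (not an Axiomise flow; the sizing model and numbers are invented), FIFO depth is typically provisioned for worst-case burst traffic across all channels, so halving the channel count roughly halves the depth actually needed:

    # Toy illustration: NoC FIFO depth sized for worst-case bursts.
    # All parameters are invented for the example.
    def fifo_depth(channels, burst_len=16, drain_rate=0.5):
        """Worst case: every channel bursts at once while the FIFO
        drains drain_rate entries per entry written."""
        peak = channels * burst_len       # entries arriving in a burst window
        drained = int(peak * drain_rate)  # entries consumed in the same window
        return peak - drained

    print(fifo_depth(8))  # depth tuned for the original 8-channel IP -> 64
    print(fifo_depth(4))  # the 4-channel derivative only needs     -> 32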

Redundant logic

The idea that there could be redundant logic in a design might seem odd. In some cases you can’t take it out without voiding a warranty, but otherwise, if it’s not needed, why not remove it? That makes perfect sense when you plan to invest in detailed verification of that IP anyway. But what if you are doing surgery on a known-good IP? You started with that IP because it would save you time and resources. If you have to go back to square one in verification and rediscover the IP microarchitecture in depth, how much time and effort did you really save?

First-pass checking with lint and coverage will help, but these analyses are not as painless or complete as you might think. Take an unreachable state in an FSM. Formal lint will find it without problems, but it won’t tell you about the downstream logic made redundant because that state is unreachable. Maybe the state should be reachable (you created a bug), in which case that logic wouldn’t be redundant. Or maybe the state being unreachable is OK, a consequence of the surgery, in which case that downstream logic should also be cut out.
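
The core of that analysis is reachability. Here is a minimal sketch (a toy, not the Footprint algorithm) that walks an FSM’s transition graph from the reset state, reports unreachable states, and flags logic that only those states drive; the data structures are invented for the example:

    # Toy reachability check on an FSM transition graph.
    # States, transitions, and the "drives" map are invented examples.
    from collections import deque

    transitions = {            # state -> possible next states
        "IDLE":  ["RUN"],
        "RUN":   ["IDLE", "DRAIN"],
        "DRAIN": ["IDLE"],
        "CH5_7": ["IDLE"],     # only entered from the channels we removed
    }

    drives = {                 # state -> downstream logic it enables
        "DRAIN": ["drain_ctr"],
        "CH5_7": ["ch_hi_arb", "ch_hi_fifo"],
    }

    def reachable(reset_state):
        seen, queue = {reset_state}, deque([reset_state])
        while queue:
            for nxt in transitions.get(queue.popleft(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    live = reachable("IDLE")
    dead_states = set(transitions) - live
    print("unreachable states:", dead_states)
    # Logic driven only by dead states is a redundancy candidate.
    for s in dead_states:
        print("candidate redundant logic:", drives.get(s, []))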

Similar redundancies can appear after surgery around counters, FIFOs and other sequential logic, all because logic that was previously useful now has no purpose or should be modified. Axiomise has developed a formal-based app which identifies redundant logic without requiring the user to have expertise in formal. They call this Footprint.

Ashish adds that what lint and coverage tools do, they do well, but they are designed to highlight the problems they target, not collateral problems like redundant logic. Lint will warn about stuck signals, and coverage will warn about missed coverage bins (though only for cover points the testbench is checking).

Footprint, on the other hand, automates all these checks and is in production use at some of the biggest design houses, for good reason. Removing redundant logic can have a real impact on area (and power): a million gates in one case. When you’re under pressure to sign off and fielding tough questions about why you are spending so much time on a proven IP, you might appreciate the help.

Check it out along with other Axiomise capabilities HERE.

Also Read:

Podcast EP274: How Axiomise Makes Formal Predictable and Normal with Dr. Ashish Darbari

How I learned Formal Verification

The Convergence of Functional with Safety, Security and PPA Verification

Podcast EP246: How Axiomise Provides the Missing Piece of the Verification Puzzle


Electronics Up, Smartphones down
by Bill Jewell on 06-22-2025 at 10:00 am


Electronics production in key Asian countries has been steady to increasing over the last several months. In April 2025, China’s electronics production, measured as the three-month-average change versus a year ago (3/12), was 11.5%, up from 9.5% in January but below the 2024 average 3/12 of 12%. India showed the strongest growth, with a 3/12 of 15% in March, up from 3% six months earlier. South Korea, Vietnam and Malaysia also showed accelerating 3/12 growth in April.
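
For readers unfamiliar with the 3/12 notation, it compares a three-month moving average with the same three-month average a year earlier. A minimal sketch of the calculation, using made-up monthly index values, looks like this:

    # 3/12: three-month average change versus the same period a year ago.
    # The monthly production index values below are made up for illustration.
    def three_twelve(series, month_idx):
        """series: list of monthly values; month_idx: index of latest month."""
        recent = sum(series[month_idx - 2 : month_idx + 1]) / 3
        year_ago = sum(series[month_idx - 14 : month_idx - 11]) / 3
        return 100 * (recent / year_ago - 1)

    index = [100, 101, 99, 102, 103, 104, 105, 103, 106, 107, 108, 110,
             111, 112, 113, 115, 116]      # 17 months of fake data
    print(f"3/12 = {three_twelve(index, len(index) - 1):.1f}%")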

U.S. electronics production growth has been accelerating over the last six months, with 3/12 growth of 4.6% in April 2025, up from 0.4% in October 2024 and the highest since November 2022. Some of this U.S. production growth is likely due to companies ramping output at U.S. factories as imports are threatened by tariffs. Japan’s 3/12 averaged 4.5% over the three months ended February 2025 after being below 1% or negative for most of 2024. The 27 countries of the European Union (EU 27) showed a 3/12 of 2.8% in March 2025, while the United Kingdom (UK) 3/12 was zero in April 2025.

Although China’s total electronics production measured in local currency (yuan) has shown 3/12 growth of 10% or higher for the first four months of 2025, unit production data for specific equipment show different trends. PC unit production 3/12 was 4.2% in April 2025; although lower than the previous two months, it has been trending up from minus 2% in November 2024. Color TV 3/12 was minus 2.2% in April, a sharp decline from 12.5% in December 2024. Smartphone unit production 3/12 has been negative since January 2025 after averaging 10% across 2024.

Smartphone imports to the U.S. dropped sharply in April 2025 to 7.6 million units, down 45% from 14 million units in March. Imports from China dropped 61% to 2.1 million units in April from 5.4 million units in March. Imports from India dropped 47% and imports from Vietnam dropped 14%. In April, India ranked first as a source of U.S. smartphone imports at 3.0 million units, followed by Vietnam at 2.4 million units and China at 2.1 million units. Apple has been ramping up iPhone production in India to replace China production. Samsung produces most of its smartphones in Vietnam.

The drop in smartphone imports to the U.S. and the production shift away from China are primarily due to tariffs either imposed or threatened by the Trump administration. The proposed tariffs have been wildly inconsistent. In 2025, President Trump has made the following announcements on tariffs on Chinese imports and on smartphones:

March 4: enacts 20% tariffs on imports from China
April 2: raises tariffs to 34%
April 9: raises tariffs to 145%
April 11: exempts smartphones from tariffs
May 12: reduces tariffs on China to 30%
May 23: proposes a 25% tariff on smartphone imports by the end of June

These trends in smartphone production and imports will soon have a significant impact on the U.S. smartphone market. Counterpoint Research estimated Apple’s U.S. iPhone sales in April-May 2025 were up 27% from a year ago. Counterpoint questions whether the strong U.S. sales reflect consumers buying now out of fear of future tariffs.

Inventories of smartphones in the U.S. are likely to run low soon, resulting in shortages and price hikes. We should see these effects in the next few months.

Also Read:

Semiconductor Market Uncertainty

Semiconductor Tariff Impact

Weak Semiconductor Start to 2025