Visualizing System Design with Samtec’s Picture Search

by Mike Gianfagna on 06-25-2025 at 6:00 am

Visualizing System Design with Samtec’s Picture Search

If you’ve spent a lot of time in the chip or EDA business, “design” typically means chip design. These days it means heterogeneous multi-chip design. If you’ve spent time developing end products, “design” has a much broader meaning. Chips, subsystems, chassis and product packaging are in focus. This is just a short list if you consider all the aspects of system design and its many disciplines, both hardware and software. Samtec lives in the system design world. The company helps connect the chiplets, chips, subsystems and racks that comprise a complete system.

Samtec offers a huge array of products that need to be considered in any system design project. The screenshot above gives you a sense of the diversity involved. Choosing a particular connector will impact other choices, and not all combinations of Samtec products are viable as an integrated solution. The company has developed tools to help system designers navigate all this. Last year, I described something called Solution Blocks, which helped designers identify compatible combinations. Samtec has now taken that concept to the next level by adding visualization, along with more control and a broader perspective. If the entire semiconductor ecosystem worked this way, design would be a lot easier. Let’s take a short tour of visualizing system design with Samtec’s Picture Search.

Seeing is Believing

A link is coming so you can try this out for yourself. Here are a couple of examples of what’s possible. We’ll start with one of the many edge connectors available from Samtec. I chose the vertical Edge Rate® High-Speed Edge Card Connector. The initial configuration is shown below, including a high-resolution image that can be rotated for different views.

Vertical Edge Rate® High Speed Edge Card Connector

I decided to change the number of positions per row from 10 to 50. I also specified a polyimide film pad. In seconds, I got an updated image with detailed dimensions, as shown below.

Updated Edge Card Connector

The system summarized the features as shown below, along with detail specs, volume pricing/availability and extensive compliance data.

  • Optional weld tab for mechanical strength
  • 00 mm pitch, up to 140 positions
  • Accepts .062″ (1.60 mm) thick cards
  • Current rating: 2.2 A max
  • Voltage rating: 215 VAC/304 VDC max

This all took just seconds. A system like this makes it practical to run what-if experiments and converge on the best solution.

For one more experiment, I tried the Solutionator instead of browsing categories. I chose Active Optics and invoked the corresponding Solutionator interface, with its promise to “design in a minute.”

With the Active Optics Cable Builder interface, I could quickly browse the various options available. I chose a 12-channel, 16.1 Gbps, unidirectional AOC and instantly received the 3D diagram and all the specs as before. See below.

Active Optics Cable Builder

I could go on with more examples of how this new technology from Samtec makes it easier to pick the right components for your next system. The communication, latency and power requirements of advanced semiconductor systems continue to get more demanding. The technology delivered by Samtec is a key ingredient in developing channels that meet all those requirements, and the process just got easier with Samtec’s Picture Search.

To Learn More

If high-performance channels matter to you, you should try this new technology from Samtec. The same goes if power, performance and form factor are key concerns. You can access the starting point for your journey here. Have fun visualizing system design with Samtec’s Picture Search.


Flynn Was Right: How a 2003 Warning Foretold Today’s Architectural Pivot

by Jonah McLeod on 06-24-2025 at 10:00 am


In 2003, legendary computer architect Michael J. Flynn issued a warning that most of the industry wasn’t ready to hear. The relentless march toward more complex CPUs—with speculative execution, deep pipelines, and bloated instruction handling—was becoming unsustainable. In a paper titled “Computer Architecture and Technology: Some Thoughts on the Road Ahead,” Flynn predicted that the future of computing would depend not on increasingly intricate general-purpose processors, but on simple, parallel, deterministic, and domain-specific designs.

Two decades later, with the cracks in speculative execution now exposed and the rise of AI accelerators reshaping the hardware landscape, Flynn’s critique looks prophetic. His call for architectural simplicity, determinism, and specialization is now echoed in the design philosophy of industry leaders like Google, NVIDIA, Meta, and emerging players like Simplex Micro. Notably, Dr. Thang Tran’s recent patents—Microprocessor with Time-Scheduled Execution for Vector Instructions and Microprocessor with a Time Counter for Statically Scheduled Execution—introduce a deterministic vector processor design that replaces out-of-order speculation with time-based instruction scheduling.

This enables predictable high-throughput execution, reduced power consumption, and simplified hardware verification. These innovations align directly with Flynn’s assertion that future performance gains would come not from complexity, but from disciplined simplicity and explicit parallelism.
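
To make the idea concrete, here is a minimal Python sketch of time-counter scheduling, a conceptual illustration rather than the patented microarchitecture: because instruction latencies are assumed fixed and known, the cycle at which each result becomes available can be computed at issue time with a simple counter, so no reorder buffer or speculation is needed. The opcodes, latencies and program below are hypothetical.

```python
# Conceptual sketch only: result times are fixed at issue using known latencies
# and a global time counter, so dependencies resolve without reordering or
# speculation. Opcodes and latencies below are hypothetical.

LATENCY = {"vload": 4, "vmul": 3, "vadd": 2}   # assumed cycle counts

def schedule(program):
    time = 0          # global time counter (issue cycle)
    ready = {}        # register -> cycle its value becomes available
    plan = []
    for op, dst, srcs in program:
        # Issue no earlier than the cycle when all source operands are ready.
        start = max([time] + [ready.get(r, 0) for r in srcs])
        finish = start + LATENCY[op]
        ready[dst] = finish          # known deterministically at issue time
        plan.append((op, dst, start, finish))
        time = start + 1             # strictly in-order, one issue per cycle
    return plan

program = [
    ("vload", "v1", ["a0"]),
    ("vmul",  "v2", ["v1", "v1"]),
    ("vadd",  "v3", ["v2", "v1"]),
]
for op, dst, start, finish in schedule(program):
    print(f"{op:6s} {dst}: issue @ cycle {start}, result @ cycle {finish}")
```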

The Spectre of Speculation

Flynn’s critique of speculative execution came well before the industry was rocked by the Spectre and Meltdown vulnerabilities in 2018. These side-channel attacks exploited speculative execution paths in modern CPUs to leak sensitive data across isolation boundaries—an unintended consequence of the very complexity Flynn had warned against. The performance gains of speculation came at a steep cost: not just in power and verification effort, but in security and trust.

In hindsight, Flynn’s warnings were remarkably prescient. Long before Spectre and Meltdown exposed the dangers of speculative execution, Flynn argued that speculation was a fragile optimization: it introduced deep design disruption, made formal verification more difficult, and consumed power disproportionately to its performance gains. The complexity it required—branch predictors, reorder buffers, speculative caches—delivered diminishing returns as workloads became increasingly parallel and memory-bound.

Today, a quiet course correction is underway. Major chipmakers like Intel are rethinking their architectural priorities. Intel’s Lunar Lake and Sierra Forest cores prioritize efficiency over aggressive speculation, optimizing for throughput per watt. Apple’s M-series chips use wide, out-of-order pipelines, but they increasingly emphasize predictable latency and compiler-led optimization over sheer speculative depth. In the embedded space, Arm’s Cortex-M and Neoverse lines have trended toward simplified pipelines and explicit scheduling, often foregoing speculative logic entirely to meet real-time and power constraints.

Perhaps most significantly, the open RISC-V ecosystem is enabling a new generation of CPU and accelerator designers to build from first principles—often without speculative baggage. Vendors like Simplex Micro are championing deterministic, low-overhead execution models, leveraging vector and matrix extensions or predictive scheduling in place of speculation. These choices directly reflect Flynn’s thesis: when correctness, power, and scalability matter more than peak IPC, simplicity wins.

It’s worth noting that Tenstorrent, while often associated with RISC-V innovation, does not currently implement deterministic scheduling in its vector processor. Their architecture incorporates speculative and out-of-order execution to optimize throughput, resulting in higher control complexity. While this boosts raw performance, it diverges from Flynn’s call for simplicity and predictability. Nonetheless, Tenstorrent’s use of domain-specific acceleration and parallelism aligns with other aspects of Flynn’s vision.

A Parallel Future: AI Chips and Flynn’s Vision

Nowhere is Flynn’s vision more alive than in the rise of AI accelerators. From Google’s Tensor Processing Units (TPUs) to NVIDIA’s Tensor Cores, from Cerebras’ wafer-scale engines to Groq’s dataflow processors, the trend is clear: ditch speculative complexity, and instead embrace massively parallel, deterministic computing.

Google’s TPU exemplifies this shift. It forgoes speculative execution, out-of-order logic, and deep control pipelines. Instead, it processes matrix operations through a systolic array—a highly regular, repeatable architecture ideal for AI workloads. This approach delivers high throughput with deterministic latency, matching Flynn’s call for simple and domain-optimized hardware.
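
As a rough illustration of the systolic idea (a toy Python model, not Google’s implementation), the sketch below streams one matrix in from the left edge and the other from the top edge, skewed in time, while each processing element performs one multiply-accumulate per cycle using only values handed over by its neighbors.

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy cycle-level model of an output-stationary systolic array:
    PE(i, j) accumulates C[i, j]. Row i of A enters at the left edge with
    a delay of i cycles, column j of B enters at the top edge with a delay
    of j cycles, and every value hops one PE per cycle."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n))
    a_reg = np.zeros((m, n))   # A operand held in each PE (moves right)
    b_reg = np.zeros((m, n))   # B operand held in each PE (moves down)
    for t in range(m + n + k):            # enough cycles to drain the array
        # Sweep from the bottom-right so each PE still sees its neighbor's
        # value from the previous cycle (models the pipeline registers).
        for i in reversed(range(m)):
            for j in reversed(range(n)):
                if j == 0:
                    a_in = A[i, t - i] if 0 <= t - i < k else 0.0
                else:
                    a_in = a_reg[i, j - 1]
                if i == 0:
                    b_in = B[t - j, j] if 0 <= t - j < k else 0.0
                else:
                    b_in = b_reg[i - 1, j]
                C[i, j] += a_in * b_in     # one multiply-accumulate per PE
                a_reg[i, j], b_reg[i, j] = a_in, b_in
    return C

rng = np.random.default_rng(0)
A = rng.integers(0, 5, size=(3, 4)).astype(float)
B = rng.integers(0, 5, size=(4, 2)).astype(float)
assert np.allclose(systolic_matmul(A, B), A @ B)   # matches a direct matmul
```

The point is the regularity: no branches, no cache hierarchy, just a fixed grid of multiply-accumulate units with fully deterministic timing.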

Cerebras Systems takes this concept even further. Its Wafer Scale Engine integrates hundreds of thousands of processing elements onto a single wafer-sized chip. There’s no cache hierarchy, no branch prediction, no speculative control flow—just massive, uniform parallelism across a tightly connected grid. By optimizing for data locality and predictability, Cerebras aligns directly with Flynn’s argument that regularity and determinism are the keys to scalable performance.

Groq, co-founded by TPU architect Jonathan Ross, builds chips around compile-time scheduled dataflow. Their architecture is radically deterministic: there are no instruction caches or branch predictors. All execution paths are defined in advance, eliminating the timing variability and design complexity of speculative logic. The result is a predictable, software-driven execution model that reflects Flynn’s emphasis on explicit control and simplified verification.

Even Meta (formerly Facebook), which once relied entirely on off-the-shelf GPUs, has embraced Flynn-style thinking in its custom MTIA (Meta Training and Inference Accelerator) chips. These processors are designed for inference workloads like recommendation systems, emphasizing predictable throughput and energy efficiency over raw flexibility. Meta’s decision to design in-house hardware tailored to specific models echoes Flynn’s assertion that different computing domains should not be forced into one-size-fits-all architectures.

Domain-Specific Simplicity: The DSA Revolution

Flynn also predicted the fragmentation of computing into domain-specific architectures (DSAs). Rather than a single general-purpose CPU handling all workloads, he foresaw that servers, clients, embedded systems, and AI processors would evolve into distinct, streamlined architectures tailored for their respective tasks.

That prediction has become foundational to modern silicon design. Today’s hardware ecosystem is rich with DSAs:

  • AI-specific processors (TPUs, MTIA, Cerebras)
  • Networking and storage accelerators (SmartNICs, DPUs)
  • Safety-focused microcontrollers (e.g., lockstep RISC-V cores in automotive)
  • Ultra-low-power edge SoCs (e.g., GreenWaves GAP9, Kneron, Ambiq)

These architectures strip out unnecessary features, minimize control complexity, and focus on maximizing performance-per-watt in a given domain—exactly the design goal Flynn outlined.

Even GPUs have evolved in this direction. Originally designed for graphics rendering, GPUs now incorporate tensor cores, sparse compute units, and low-precision pipelines, effectively becoming DSAs optimized for machine learning rather than general-purpose parallelism.

The Legacy of Simplicity

Flynn’s 2003 message was clear: Complexity is not scalable. Simplicity is. Today’s leading architectures—from TPUs to RISC-V vector processors—have adopted that philosophy, often without explicitly crediting the foundation he laid. The resurgence of dataflow architectures, explicit scheduling, and deterministic pipelines shows that the industry is finally listening.

And in an era where security, power efficiency, and real-time reliability matter more than ever—especially in AI inference, automotive safety, and edge computing—Flynn’s vision of post-speculation computing is not just relevant, but essential.

He was right.

References

  1. Flynn, M. J. (2003). “Computer Architecture and Technology: Some Thoughts on the Road Ahead.” Keynote, Computing Frontiers Conference.
  2. Spectre and Meltdown vulnerability disclosures (2018).
  3. Jouppi, N. et al., “In-Datacenter Performance Analysis of a Tensor Processing Unit,” ISCA 2017 (Google TPU).
  4. Cerebras Wafer Scale Engine (WSE).
  5. Groq: “A Software-defined Tensor Streaming Multiprocessor…”
  6. Meta MTIA v2 chip.
  7. Tenstorrent overview: products and software.
  8. WO2024118838A1: Latency-tolerant scheduling and memory-efficient RISC-V vector processor. https://patents.google.com/patent/WO2024118838A1/en
  9. WO2024015445A1: Predictive scheduling method for high-throughput vector processing. https://patents.google.com/patent/WO2024015445A1/en

Also Read:

Andes Technology: Powering the Full Spectrum – from Embedded Control to AI and Beyond

From All-in-One IP to Cervell™: How Semidynamics Reimagined AI Compute with RISC-V

Andes Technology: A RISC-V Powerhouse Driving Innovation in CPU IP


Enabling RISC-V & AI Innovations with Andes AX45MPV Running Live on S2C Prodigy S8-100 Prototyping System

by Daniel Nenni on 06-24-2025 at 6:00 am


Qualifying an AI-class RISC-V SoC demands proving that wide vectors, deep caches, and high-speed I/O operate flawlessly long before tape-out. At the recent Andes RISC-V Conference, Andes Technology and S2C showcased this by successfully booting a lightweight large language model (LLM) inference on a single S2C Prodigy™ S8-100 logic system powered by AMD’s Versal™ Premium VP1902 FPGA.

Capacity and Timing — Solved in One Device
Prototyping an SoC traditionally requires partitioning the design across multiple FPGAs, complicating timing closure and increasing development risk. S2C’s S8-100 Logic System, with roughly 100 million usable gates on a single FPGA, removes these hurdles. The dual-core AX45MPV cluster from Andes, featuring 64-bit in-order cores and a powerful 512-bit vector processing unit, together with an AE350 subsystem occupies less than 40% of the FPGA capacity. This generous margin allows designers to add custom instructions, additional accelerators, debug logic or secret sauce without a second board. More importantly, the entire design can now reside within a single FPGA on the S8-100, eliminating time-consuming partitioning and avoiding the cross-chip latency that would otherwise throttle performance. Freed from the architectural compromises of multi-FPGA systems, the design can run fast enough to execute large-scale software, enabling faster iterations, more realistic validation, and a dramatically simpler prototyping flow.

Robust Memory Bandwidth Without Board Spins
LLM inference workloads require stable, high-throughput memory subsystems to continuously feed the vector engines. By leveraging S2C’s pre-validated DDR4 memory module and a plug-and-play auxiliary I/O card that handles JTAG and UART, the LLM demo was easily deployed. This modular approach allowed the hardware platform to be operational within days of receiving the RTL code, accelerating design iterations and debugging cycles.

Modularity That Adapts to Changing Needs
The S8-100 excels at flexibility. Developers can rapidly pivot across use cases, whether AI inference, video processing, networking, or safety-critical industrial control, by swapping daughter cards to match the desired interfaces. S2C provides a vast library of over 90 daughter cards covering interfaces from MIPI D-PHY and HDMI to 10/100/1000 Mbps, 100G and 400G Ethernet and fieldbus protocols. When a single FPGA isn’t enough, multi-FPGA partitioning is available.

Real Hardware Data Cuts Time-to-Market
Running Linux bring-up, driver stacks, and model benchmarks on cycle-accurate hardware turns estimates into actionable insight. Teams using this approach typically save six to twelve months on critical paths, replacing assumptions with quantified risks and improving confidence in first-silicon success and ready-to-integrate software.

Through the Andes–S2C collaboration, developers now have a better platform to innovate, explore ideas, and evaluate system architectures. By providing the capacity and flexibility to explore, the S8-100 enables teams to quickly build and iterate on proof-of-concept designs at realistic system performance, paving the way for faster, more confident RISC-V and AI development.

Visit s2cinc.com to request an evaluation. Our experts will provide detailed feedback in days—not months—helping you streamline your prototyping journey.

Also Read:

Cost-Effective and Scalable: A Smarter Choice for RISC-V Development

S2C: Empowering Smarter Futures with Arm-Based Solutions

Accelerating FPGA-Based SoC Prototyping


DAC News – A New Era of Electronic Design Begins with Siemens EDA AI

by Mike Gianfagna on 06-23-2025 at 10:00 am


AI is the centerpiece of DAC this year: how to design chips to bring AI algorithms to life, how to prevent AI from hacking those chips, and of course how to use AI to design AI chips. In this latter category, there were many presentations, product announcements and demonstrations, and I was impressed by many of them. An important observation, though, is the focused nature of most of this work: methods to use AI to accelerate the design flow, converge on timing faster, and so on. Siemens took a different approach to addressing the requirements of impossible-to-design chips. In its own words, Siemens introduced a comprehensive generative and agentic AI system for semiconductor and PCB design. This approach has significant implications. Let’s take a look at how a new era of electronic design begins with Siemens EDA AI.

The Big Picture

Stepping back a bit, I was struck by a sense of déjà vu when examining the Siemens announcement and diving into some of the details. Those who have been around EDA for a while will remember the Framework concept. The idea was to develop an EDA framework that allowed all tools to work off a common data structure and use model. Sharing the user interface meant the best concepts would find their way to all tools. Sharing data models meant all tools could work off the same design description and collectively improve the design in synergistic ways.

It sounded great on paper, but sadly the technology wasn’t mature enough all those years ago. Most, if not all, of the CAD Framework ideas failed. I recall folks saying, “don’t use the F-word (Framework), or I’ll walk out of your presentation.” Today, we take all this for granted. Every mainstream design flow shares both data and the user experience effectively. The Framework promise was finally delivered.

Fast-forward to DAC 2025 and Siemens is taking this concept to the next level. What if a broad spectrum of AI technologies could be delivered to all development groups in the company? And what if each group could benefit from the substantial infrastructure delivered this way to then add tool-specific capabilities on top of it to create a truly consistent and AI-enabled design infrastructure? This is what Siemens announced at DAC. Let’s take a closer look.

Introducing the Siemens EDA AI System

The starting point for all this is a focus on something Siemens calls industrial-grade AI. The approach defines what’s important to harness AI for chip and PCB design – industrial grade problems. This is in contrast to consumer AI, the ubiquitous version we all see every day. The figure below illustrates the differences.

Siemens Industrial Grade AI

In my opinion, this important analysis sets up the project for success. Most AI algorithms have a well-defined use model and scope of application. But the way the technology is deployed makes a huge difference. With regard to AI algorithms, the following chart will help to set the scope of application of the Siemens EDA System. In the company’s words, “a powerful hybrid AI system emerges when these AI capabilities are integrated together.”

Spectrum of AI Use Models

The Siemens EDA System is being deployed across the company to many development groups. Based on what I saw at DAC, many teams have embraced the technology and there are already many new capabilities as a result. The general deployment model is to leverage generative and agentic AI for front-end tasks and machine and reinforcement learning for back-end tasks. The strategy and the benefits are summarized in the figure below.

Siemens EDA focuses on the development of powerful hybrid AI systems

There are some guiding principles for this work. They are summarized as follows. I particularly like the last one. The customer base is doing a lot of work to harvest its own unique AI models and strategies. It’s critically important to recognize this and enable it. Siemens seems to have it right.

  • Enables generative and agentic AI capabilities across Siemens EDA tools
  • Strong data flywheel effect enabled by a centralized multimodal data lake
  • Secure with full custom access controls & on-premise / cloud deployment options
  • Open and customizable with multiple large language model (LLM) support, ability to add customer data and build custom workflows

First Results Across Key Tools

There was ample proof on display at DAC of the impact of this new approach across the product line. Here is a quick summary of some examples. There will be many more for you to explore.

Aprisa™ AI software: Aprisa AI is a fully integrated technology in the Aprisa digital implementation solution. It enables next-generation AI features and methodologies across RTL-to-GDS capabilities including AI design exploration that adaptively optimizes for power / performance / area. Integrated generative AI-assist is also included, delivering ready-to-run examples and solutions. Aprisa AI delivers 10x productivity, 3x improved compute-time efficiency, and 10 percent better PPA for digital designs across all process technologies.

Calibre® Vision AI software: Calibre Vision AI offers a revolutionary advance in chip integration signoff, helping design teams identify and fix critical design violations in half the time of existing methods by instantly loading and organizing violations into intelligent clusters. Designers can then prioritize their work based on this clustering and achieve a higher level of productivity. Calibre Vision AI also improves workflow efficiency with the addition of “bookmarks” that let designers capture the current analysis state, including notes and assignments, fostering closer collaboration between chip integrators and block owners during physical verification.

Solido™ generative and agentic AI: Solido now harnesses Siemens’ EDA AI system to deliver advanced generative and agentic AI capabilities throughout the Solido Custom IC platform to transform next generation design and verification. Tailored to each phase of the custom IC development process, including schematic capture, simulation, variation-aware design and verification, library characterization, layout and IP validation, Solido’s new generative and agentic AI empowers engineering teams to achieve orders-of-magnitude productivity gains. It appears that Solido is leading the charge with the application of advanced agentic AI technology.

A Growing Ecosystem

As you would expect, successful deployments like this one facilitate expansion to other technologies in the ecosystem. At DAC, Siemens also announced support for NVIDIA NIM microservices and NVIDIA Llama Nemotron models. NVIDIA NIM enables the scalable deployment of inference-ready models across cloud and on-premises environments, supporting real-time tool orchestration and multi-agent systems. Llama Nemotron adds high context reasoning and robust tool-calling for more intelligent automation across the EDA workflow.  

To Learn More

The work Siemens presented at DAC was comprehensive, well thought out and widely adopted by development teams across the company. These are the elements of a very successful deployment of AI. If you’re thinking of adding AI to your design flow (and you should), you must learn more about what Siemens is up to. Here are some places to start:

And that’s how a new era of electronic design begins with Siemens EDA AI.


IP Surgery and the Redundant Logic Problem

by Bernard Murphy on 06-23-2025 at 6:00 am


It’s now difficult to remember when we didn’t reuse our own IP and didn’t have access to extensive catalogs of commercial IP. But reuse comes with a downside: without modification we can’t fine-tune IP specs to exactly what we want in a current design. We’re contractually limited in how we can adapt commercial IP; however, vendors compensate through a good deal of heavily tested configurability, which seems to satisfy most needs. For in-house IP, the ROI to bring IP up to commercial reuse standards is tough to justify, but here there is much more freedom to get creative with a design. It’s more common to see copy-paste reuse, where you start with an IP already proven in silicon and then adapt it through selective surgery to meet current needs. The challenge, of course, is that getting this right is never as simple as carving out chunks of RTL and adding in new chunks. Dependencies ripple through the design, and what had been proven in silicon is, after surgery, probably going to break in interesting ways. Which, according to Ashish Darbari (CEO of Axiomise), introduces new opportunities to apply formal methods in pursuit of that verification.

Is there really anything new here?

Of course you need to re-verify, but how far do you want to go down the path to complete re-verification? For argument’s sake, let’s suppose the original IP was built on some 8-channel subsystem from which you want to drop 4 channels. You’ll start with the easy checks: formal lint and coverage. Unless the design/testbench is already parametrized to take care of that possibility, lint might show bus size mismatches, inaccessible states, and all the usual problems around the area you cut out. From coverage you will see new coverage holes. Also not surprising.

These are the obvious must-fix problems, but there will be more issues lurking in the rest of the design. The original IP was scaled to handle 8 channels; if you drop 4 channels it may still work fine, but it won’t necessarily be efficient. The NoC was tuned for an 8-channel load, so FIFOs in the NoC are now bigger than needed for the reduced traffic. More generally, when you remove logic, is there any other logic you should also trim to reduce area and power? Synthesis can optimize away unnecessary logic to a limited extent, but not complex sequential logic.

Redundant logic

The idea that there could be redundant logic in a design might seem odd. In some cases you can’t take it out without voiding a warranty, but otherwise, if it’s not needed, why not remove it? That makes perfect sense when you plan to invest in detailed verification of that IP anyway. But what if you are doing surgery on a known good IP? You started with that IP because it would save you time and resources. If you have to go back to square one in verification and rediscover the IP microarchitecture in depth, how much time and effort did you really save?

First-pass checking with Lint and coverage will help but these analyses are not quite as painless or complete as you might think. Take an unreachable state in an FSM. Lint (formal) will find this without problems, but it won’t tell you about the downstream logic made redundant because that state is unreachable. Maybe the state should be reachable (you created a bug) in which case that logic wouldn’t be redundant. Or maybe the state now being unreachable is OK and a function of the surgery, in which case that downstream logic should also be cut out.
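
Here is a tiny sketch of that reasoning, illustrative only and in Python rather than RTL, not how any commercial tool works: a reachability walk from the reset state shows which states are dead after the edit, and any output driven only from a dead state is redundant logic even though nothing about the output itself looks wrong. The FSM, the signals and the removed transition are hypothetical.

```python
from collections import deque

# FSM as {state: {input_condition: next_state}}; reset state is "IDLE".
# After "surgery" the edit removed the "ch4": "CH4_ACTIVE" arc from RUN.
transitions = {
    "IDLE":       {"start": "CFG"},
    "CFG":        {"done": "RUN"},
    "RUN":        {"stop": "IDLE"},
    "CH4_ACTIVE": {"stop": "IDLE"},
}

# Outputs and the states that can assert them.
outputs = {
    "fifo_push":     {"RUN"},
    "ch4_fifo_push": {"CH4_ACTIVE"},   # only driven from the now-dead state
}

def reachable(transitions, reset):
    # Breadth-first walk over the state graph from the reset state.
    seen, work = {reset}, deque([reset])
    while work:
        for nxt in transitions.get(work.popleft(), {}).values():
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return seen

live = reachable(transitions, "IDLE")
for sig, driving_states in outputs.items():
    if not driving_states & live:
        print(f"{sig}: driven only from unreachable states -> redundant logic")
```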

Similar redundancies can appear after surgery around counters, FIFOs and other sequential logic, all because logic that was previously useful now has no purpose or should be modified. Axiomise has developed a formal-based app that identifies redundant logic without requiring the user to have expertise in formal. They call it Footprint.

Ashish adds that what Lint and coverage tools do, they do well but they are obviously designed to highlight the problems they target, not collateral problems like redundant logic. Lint will warn about stuck signals and coverage will warn about missed coverage bins (though only for cover points the testbench is checking).

Footprint on the other hand automates all these checks and is in production use in some of the biggest design houses, for good reason. Removing redundant logic can have real impact on area (and power), a million gates in one case. When you’re under pressure to signoff and fielding tough questions about why you are spending so much time on a proven IP, you might appreciate the help.

Check it out along with other Axiomise capabilities HERE.

Also Read:

Podcast EP274: How Axiomise Makes Formal Predictable and Normal with Dr. Ashish Darbari

How I learned Formal Verification

The Convergence of Functional with Safety, Security and PPA Verification

Podcast EP246: How Axiomise Provides the Missing Piece of the Verification Puzzle


Electronics Up, Smartphones down

by Bill Jewell on 06-22-2025 at 10:00 am


Electronics production in key Asian countries has been steady to increasing in the last several months. In April 2025, China electronics production three-month-average change versus a year ago (3/12) was 11.5%, up from 9.5% in January but below the average 3/12 of 12% in 2024. India showed the strongest growth, with 3/12 of 15% in March, up from 3% six months earlier. South Korea, Vietnam and Malaysia also showed accelerating 3/12 growth in April.
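
For readers unfamiliar with the notation, the short Python sketch below shows how a 3/12 figure is computed as defined above: the trailing three-month average of a production index compared with the three-month average ending twelve months earlier. The index values are made up, not actual production data.

```python
def growth_3_12(monthly, i):
    """Percent change of the 3-month average ending at month i versus the
    3-month average ending at month i - 12. `monthly` is a list of index
    values ordered oldest to newest; needs at least 15 months of history."""
    cur = sum(monthly[i - 2:i + 1]) / 3        # months i-2, i-1, i
    prior = sum(monthly[i - 14:i - 11]) / 3    # same window one year earlier
    return 100.0 * (cur / prior - 1)

# Hypothetical index values, not actual production data.
index = [100, 101, 99, 102, 103, 104, 105, 104, 106, 107, 108, 110,
         111, 112, 113, 114, 116]
print(f"3/12 growth, latest month: {growth_3_12(index, len(index) - 1):.1f}%")
```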

U.S. electronics production growth has been accelerating over the last six months, with 3/12 growth at 4.6% in April 2025, up from 0.4% in October 2024 and the highest growth since November 2022. Some of this growth in U.S. production is likely due to companies ramping production at U.S. factories as imports are threatened by tariffs. Japan 3/12 has averaged 4.5% in the last three months ended February 2025 after being below 1% or negative for most of 2024. The 27 countries in the European Union (EU 27) showed 3/12 of 2.8% in March 2025 while the United Kingdom (UK) 3/12 was zero in April 2025.

Although China’s total electronics production as measured in local currency (yuan) has shown 3/12 growth of 10% or higher for the first four months of 2025, unit production data for specific equipment show different trends. PC unit production 3/12 was 4.2% in April 2025; although lower than the previous two months, PC 3/12 has been trending up from minus 2% in November 2024. Color TV 3/12 was minus 2.2% in April, a sharp decline from 12.5% in December 2024. Smartphone unit production 3/12 has been negative since January 2025 after averaging 10% over the course of 2024.

Smartphone imports to the U.S. dropped sharply in April 2025 to 7.6 million units, down 45% from 14 million units in March. Imports from China dropped 61% to 2.1 million units in April from 5.4 million units in March. Imports from India dropped 47% and imports from Vietnam dropped 14%. In April, India ranked first as a source of U.S. smartphone imports at 3.0 million units, followed by Vietnam at 2.4 million units and China at 2.1 million units. Apple has been ramping up iPhone production in India to replace China production. Samsung produces most of its smartphones in Vietnam.

The drop in smartphone imports to the U.S. and the production shift from China to other countries is primarily due to tariffs either imposed or threatened by the Trump administration. The proposed tariffs have been wildly inconsistent. In 2025, President Trump has made the following announcements on tariffs on Chinese imports and tariffs on smartphones:

March 4: enacts 20% tariffs on imports from China
April 2: raises tariffs to 34%
April 9: raises tariffs to 145%
April 11: exempts smartphones from tariffs
May 12: reduces tariffs on China to 30%
May 23: proposed 25% tariff on smartphone imports by the end of June

These trends in smartphone production and imports will soon have a significant impact on the U.S. smartphone market. Counterpoint Research estimated Apple’s U.S. iPhone sales in April-May 2025 were up 27% from a year ago. Counterpoint questions whether the strong U.S. sales are the result of consumers buying early out of fear of future tariffs.

Inventories of smartphones in the U.S. are likely to run low soon, resulting in shortages and price hikes. We should see these effects in the next few months.

Also Read:

Semiconductor Market Uncertainty

Semiconductor Tariff Impact

Weak Semiconductor Start to 2025


Arteris at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-22-2025 at 8:00 am


Key Takeaways:

  • Expanded Multi-Die Solution: Arteris showcases its foundational technology for rapid chiplet-based innovation. Check out the multi-die highlights video.
  • Ecosystem compatibility: Supported through integration with products from major EDA and foundry partners, including Cadence, Synopsys, and global semiconductor fabs.
  • Optimized SoC Integration: Magillem Connectivity automates assembly from IP and chiplets, reducing risks associated with manual, error-prone tasks.
  • Consistent HW/SW Development: Magillem Registers automates integration from system map definition to validation and documentation, ensuring alignment through a single source of truth.
  • Unified Interconnect IP: FlexGen, FlexNoC, and Ncore work together to manage coherent and non-coherent data flow across dies, enabling seamless multi-die integration.
  • Live at Booth #2529: Meet Arteris experts at DAC 2025 to learn how its silicon-proven technologies accelerate multi-die SoC design.
  • Presentation at EETimes Chiplet Pavilion: Join Arteris product experts on Tuesday, June 24th, at 4:15 p.m. in the Chiplet Pavilion Booth #2308 for “Chiplets: Opportunities and Challenges.”

Read the latest Arteris SemiWiki blog: Arteris Expands Their Multi-Die Support – SemiWiki

Arteris, a leading provider of system IP, recently announced an expansion of its multi-die solution and will showcase its latest technology, including network-on-chip (NoC) interconnect IP and SoC integration automation solutions that provide foundational technology for chiplet-based innovation.

The company offers a product portfolio that supports interoperability and efficient communication among disparate chiplets, streamlining integration across a broad range of implementations. Their recently expanded solution enables multi-die systems to operate as a unified platform that appears monolithic to software developers.

Magillem Connectivity

Magillem Connectivity provides optimized automation for SoC assembly from IP blocks and chiplets. It reduces project risks by eliminating manual, error-prone integration tasks. Based on the IEEE 1685 (IP-XACT) standard, it enables hierarchical integration views, consistent interface definitions, and system-level connectivity. In multi-die systems, it coordinates the integration process across dies, improving design accuracy and accelerating assembly.

Magillem Registers

Magillem Registers delivers optimized automation for integrating hardware and software from system map definition to validation and documentation. It operates from a single source of truth to ensure consistency and traceability across the development flow. The tool automates the management of control and status registers and supports multi-die environments by synchronizing register specifications across dies to streamline hardware/software integration.
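
As a generic illustration of the single-source-of-truth idea, not Magillem’s actual format or flow, a single register description can drive both a hardware address-map check and a C header for the software team; the register map below is hypothetical.

```python
registers = [  # hypothetical register map for one die
    {"name": "CTRL",   "offset": 0x00, "width": 32, "desc": "block control"},
    {"name": "STATUS", "offset": 0x04, "width": 32, "desc": "block status"},
    {"name": "IRQ_EN", "offset": 0x08, "width": 32, "desc": "interrupt enable"},
]

def check_no_overlap(regs):
    # The same description that documents the map also checks it.
    used = {}
    for r in regs:
        for byte in range(r["offset"], r["offset"] + r["width"] // 8):
            assert byte not in used, f"{r['name']} overlaps {used[byte]}"
            used[byte] = r["name"]

def emit_c_header(regs, block="BLOCK"):
    # ...and generates the software view, so hardware and software stay aligned.
    lines = ["/* generated from the register spec - do not edit by hand */"]
    for r in regs:
        lines.append(
            f"#define {block}_{r['name']}_OFFSET 0x{r['offset']:02X}  /* {r['desc']} */"
        )
    return "\n".join(lines)

check_no_overlap(registers)
print(emit_c_header(registers))
```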

FlexGen Interconnect IP

FlexGen is a highly configurable, smart interconnect IP that supports a wide range of protocols and traffic types. It enables flexible topology generation and protocol-aware integration, helping accelerate system design. Using industry standard protocols such as AMBA, FlexGen supports integration with third-party die-to-die controllers and PHYs through standard interfaces.

FlexNoC Non-Coherent Interconnect IP

FlexNoC shares a common subset of features with FlexGen, enabling manual NoC configuration, and is used widely in AI chiplet-based designs today.

Ncore Cache Coherent Interconnect IP

Ncore enables seamless cache coherent reads and writes across chiplets, allowing multi-die systems to behave as a unified platform. It supports standard protocols including CHI and ACE, ensuring correct memory ordering and coherent communication between CPUs, GPUs, and accelerators. Ncore is designed for scalability, low latency, and functional safety with ISO 26262 certification.

Arteris invites you to meet the team at booth #2529 at DAC 2025.

To learn more about Arteris, visit www.arteris.com.

Also Read:

Arteris Expands Their Multi-Die Support

How Arteris is Revolutionizing SoC Design with Smart NoC IP

Podcast EP277: How Arteris FlexGen Smart NoC IP Democratizes Advanced Chip Design with Rick Bye


Secure-IC at the 2025 Design Automation Conference #62DAC

by Daniel Nenni on 06-22-2025 at 6:00 am


Secure-IC at DAC 2025: Building Trust into Tomorrow’s Chips and Systems

As semiconductor innovation accelerates, the chiplet-based design paradigm is redefining the landscape of advanced electronic systems. At DAC 2025, Secure-IC (booth #1208) will present a comprehensive suite of technologies engineered to address the security challenges arising from this evolution—highlighting scalable solutions for secure chiplet integration, Post-Quantum Cryptography, MACsec IP for secure automotive and infrastructure Ethernet, and cybersecurity-by-design methodologies.

Securing the Chiplet Revolution

With the increasing disaggregation of SoCs into modular chiplets within a System-in-Package (SiP), the need for secure interoperability becomes critical. At the heart of Secure-IC’s offering is the Securyzr™ iSE s500 neo, a hardware Root of Trust (RoT) tailored for multi-die architectures. Designed to secure inter-chiplet communication and lifecycle management, it supports key security services like chiplet hardware and software bill-of-materials (HBOM/SBOM) verification, remote attestation, and cryptographic isolation.

This architecture will be explored in depth during the DAC 2025 session titled Securing Chiplet Integration: A System-in-Package Security Architecture (Monday, June 23, 3:30pm PDT), co-presented by Secure-IC’s Sylvain Guilley and Cadence’s Junie Um. The talk outlines a protection profile framework to formalize security requirements across the chiplet lifecycle—from enrollment and key provisioning to post-quantum key renewal.

MACsec IP high-speed and automotive-grade

Secure-IC’s MACsec IP solution further extends the company’s portfolio by delivering robust, silicon-proven communication security. Validated with Cadence VIP, the IP achieves throughput scaling up to 1.6 Tbps, positioning it as a ready-to-integrate solution for next-generation automotive and infrastructure-grade systems.

Designed with automotive compliance in mind, the MACsec IP supports AES-GCM 256-bit encryption, ISO 26262 (ASIL-B), and enables secure high-speed links between ECUs, domain controllers, and central compute nodes. The fully integrated stack—MAC, MACsec, and Cadence PHY—provides a streamlined approach for designers to embed trusted hardware communication into their systems.
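
For context on the underlying cipher, here is a short sketch of the AES-GCM-256 authenticated-encryption primitive that MACsec builds on, shown with the third-party Python cryptography package. This is not Secure-IC’s IP or the MACsec frame format, just an illustration of the algorithm itself.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustration of AES-GCM-256 only; real MACsec defines its own frame format,
# key hierarchy and packet-number handling on top of this primitive.
key = AESGCM.generate_key(bit_length=256)   # 256-bit session key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per protected frame
payload = b"example Ethernet payload"
header = b"example header, sent in the clear but still authenticated"

ciphertext = aesgcm.encrypt(nonce, payload, header)   # tag appended to output
assert aesgcm.decrypt(nonce, ciphertext, header) == payload
```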

System-Level Security and Certification Support

Secure-IC’s Securyzr™ platform spans from embedded RoTs to software and cloud-based monitoring, offering full-stack protection. Beyond silicon, the platform includes standardized software interfaces, security lifecycle management, and services that simplify certification under frameworks like SESIP, PSA Certified, Common Criteria, and ISO 26262.

This integrated approach ensures that security is not just a hardware feature, but an architectural foundation—supporting developers from pre-silicon design to compliance and deployment.

Preparing for the Post-Quantum Era

The growing urgency to protect systems against future quantum threats is also addressed through Secure-IC’s PQC-ready solutions. Featuring CAVP-certified PQC algorithms, these technologies provide a pathway to quantum-resilient designs, ensuring long-term data protection and compliance with emerging standards.

Security Verification in the Design Flow

To reinforce security at the earliest stages of development, Secure-IC’s toolchain includes Catalyzr™ and Virtualyzr™. These solutions enable vulnerability analysis, virtual fault injection, and simulation-based verification—empowering design teams to evaluate physical attack resistance and compliance from day one.

Visit Secure-IC at DAC 2025 (Booth #1208) to explore how its cutting-edge technologies are securing the next generation of chiplets and connected systems.

Learn more or book a meeting with Secure-IC at DAC >>

Also Read:

Anirudh Keynote at CadenceLIVE 2025 Reveals Millennium M2000

A Timely Update on Secure-IC

Certification for Post-Quantum Cryptography gaining momentum


Podcast EP293: 3DIC Progress and What’s Coming at DAC with Dr. John Ferguson and Kevin Rinebold of Siemens EDA

by Daniel Nenni on 06-20-2025 at 8:00 am

Dan is joined by Dr. John Ferguson, Director of Product Management for the Calibre nmDRC and 3DIC related products for Siemens EDA. John has worked extensively in the area of physical design verification. Holding several patents, he is also a frequent author in the physical design and verification domain. Current activities include efforts to extend physical verification and design kit enablement for 3DIC design, Silicon Photonics, Quantum Compute and other HPC architectures. Also joining the podcast is Kevin Rinebold, a 3DIC Technologist at Siemens EDA. Kevin has multiple decades of experience in IC design and packaging.

Dan explores the current state of 3DIC design with John and Kevin. They explain that the most popular applications today are hyperscaler and AI designs using silicon interposers. Future expansion is also discussed, including a move to other substrate materials to achieve higher density. Dan explores what the future holds in terms of challenges with John and Kevin. The ability to aggregate, track and analyze massive amounts of data across the design process is discussed. Digital twin technology is described as a way to develop a single source of truth for this process.

Dan then discusses some of the innovations that Siemens EDA will unveil at next week’s DAC. Streamlined workflows are discussed that focus on several areas. These include floor planning support for new substrates, protocol compliance, work-in-process data management and multiphysics analysis.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Sondrel transformation to Aion Silicon!

by Daniel Nenni on 06-20-2025 at 7:00 am


Ollie is a commercially astute senior leader with over 20 years of experience in strategy, business development and sales across the technology and engineering sectors, with a strong track record in scaling businesses and driving growth. He has held commercial leadership roles in FTSE 100, private equity-backed, and startup companies, working extensively across Europe, North America, and Asia.

Daniel Nenni, Founder of SemiWiki: Your recent re-brand as Aion Silicon has caught our attention. Can you share the background on the re-brand and what the future holds?

Oliver Jones, CEO of Aion Silicon: Our name change is the culmination of our strategic reset last year. When we went private, tightened our focus and refreshed our leadership team, we realized the brand also needed to reflect where we’re heading—high-performance, AI-centric silicon delivered through a high-touch consultative model. The name Aion Silicon signals that new chapter: the same 20+ years of engineering pedigree, renewed commercial ambition and a mandate to grow in North America. Over the next 18 months, we’re increasing design capacity, upgrading our delivery model and further developing ecosystem programs with foundry and IP partners. The goal is clear: become the go-to design partner for companies that need custom silicon but don’t want the bureaucracy of a mega-vendor.

SemiWiki: What is the significance of the new name? How did you decide on it?

Jones: The word “Aion” evokes endurance—an era rather than a moment—while “ion” is the fundamental charge carrier in semiconductor physics. Together they capture long-term partnership and deep technical roots. We weighed dozens of options but kept returning to a name that couples scientific precision with forward momentum. It also distances us from any carry-over perceptions tied to pure design-services work and underlines our continued evolution into full lifecycle ASIC delivery.

SemiWiki: Were there market or customer changes that drove the re-brand?

Jones: Absolutely. The rise of domain-specific AI accelerators and edge devices compressed product cycles and raised risk. Customers told us they needed earlier architectural help, tighter program management and foundry-agnostic guidance—services that go beyond traditional RTL handoffs. Rebranding allowed us to frame that expanded offer and signal to U.S. and EMEA prospects that we’re more than a UK design house; we’re a global partner with a global footprint, built for the AI era.

SemiWiki: How will Aion Silicon’s products and services differ from those offered by Sondrel?

Jones: The legacy pillars—SoC architecture, front-end and back-end design—remain, but we’ve layered on turnkey supply-chain orchestration, chiplet capabilities and reference AI platforms derived from our Architecting the Future libraries. We now lead engagements from concept through tape-out and volume ramp, offer system-level architecture consulting, and license select IP such as security firewalls and power management. We can agnostically offer best-in-class third-party IP via our partner ecosystem, provide design-for-security reviews and run early package/co-design workshops.

SemiWiki: AI appears to be a key driver. What capabilities did Sondrel have here, and how will you leverage them as Aion Silicon?

Jones: In the recent past, Sondrel delivered 30-billion-transistor SoCs and video processing engines long before AI became table stakes. That gave us experience with high-bandwidth fabrics, memory hierarchies and performance-power optimization that transfer directly to today’s AI accelerators. As Aion Silicon we’re scaling those flows to below 3 nm, adding model-based verification for sparsity and quantization schemes, and partnering with tool vendors for hardware–software co-simulation so customers see workload performance before tape-out.

SemiWiki: Looking ahead, what markets and customers will you serve?

Jones: We’re focused on four verticals where custom silicon moves the needle: data-center AI and HPC, automotive ADAS, advanced networking/5G and edge inference. Our sweet spot is the “Tier 1.5” player—companies big enough to ship volume but too lean to dedicate their own 100-person team to a new chip.

SemiWiki: Which geographies will you emphasize?

Jones: North America is priority one—major AI innovators are here and expect close collaboration. However, we continue to support and expand long-standing customers in Europe and Israel where next-gen automotive and networking projects are heating up.

SemiWiki: How will your offerings evolve to address these changes?

Jones: Two tracks. First, deeper ecosystem ties—we recently joined Intel Foundry Services’ Accelerator program, adding to relationships with TSMC and Samsung Foundry. Second, more vertical reference platforms so customers can prototype quickly, then hand off to our turnkey flow for tape-out and supply-chain management.

SemiWiki: Anything else you’d like to share with our readers?

Jones: Only that the name may be new, but the ethos is not: integrity, engineering rigor and customer success remain our yardsticks. The industry is moving from “one-size-fits-all” silicon to highly specialized devices. That shift rewards combining agility with SoC architectural depth. Aion Silicon was built for this moment, and we invite engineers who need differentiated chips—without the overhead and risk of doing it alone—to engage with us early, while requirements are still fluid. If your roadmap needs custom silicon without custom headaches, Aion Silicon is ready to help.

Contact Aion Silicon

Also Read:

2025 Outlook with Oliver Jones of Sondrel

CEO Interview: Ollie Jones of Sondrel

Sondrel Redefines the AI Chip Design Process