
Rise Design Automation Webinar: SystemVerilog at the Core: Scalable Verification and Debug in HLS
by Daniel Nenni on 09-18-2025 at 10:00 am



Key Takeaways

– High-Level Synthesis (HLS) delivers not only design productivity and quality but also dramatic gains in verification speed and debug – and it delivers them today.
– Rise Design Automation uniquely enables SystemVerilog-based HLS and SystemVerilog verification, reusing proven verification infrastructure.
– The webinar features expert insights from verification methodology architect Mark Glasser, with HLS expert Mike Fingeroff presenting the technical content and a live demonstration.
– Attendees will learn how to unify design, verification, and debug across abstraction levels without duplicating effort.

Register Here For Replay

High-Level Synthesis (HLS) and raising design abstraction have been proven to deliver significant productivity and value to design teams — faster design entry, improved architectural exploration, and tighter system-level integration. These benefits are real, but experienced users and teams often cite a different advantage as the most valuable: verification.

By enabling earlier testing, running regressions 30×–1000× faster than RTL, and simplifying debug, HLS can dramatically accelerate verification. The challenge, however, is that existing HLS flows rely on C++ or SystemC, often leaving verification disconnected from established SystemVerilog/UVM environments. This gap forces teams to bridge methodologies on their own and uncover problems only after RTL is generated — slowing adoption and raising risk.

Rise Design Automation addresses this directly by making SystemVerilog a first-class citizen in HLS. In collaboration with SemiWiki, Rise will host a webinar that demonstrates how teams can apply familiar SystemVerilog and UVM methodologies consistently from high-level models through RTL, simplify debug, and unify design and verification across abstraction levels. The live event takes place on Wednesday, October 8, 2025, from 9–10 AM Pacific Time.

The Webinar Presenters:

The session begins with Mark Glasser, a distinguished verification architect and methodology expert. Mark co-invented both OVM and UVM and is the author of the recently published book, Next Level Testbenches: Design Patterns in SystemVerilog and UVM (2024). He will provide historical and forward-looking context on how verification methodology has evolved and the needs driving the move to higher abstraction.

The majority of the session will be presented by Mike Fingeroff, Chief of HLS at Rise DA. With over 25 years of experience and as the author of The High-Level Synthesis Blue Book, Mike specializes in HLS, SystemVerilog, SystemC, and performance modeling. He will deliver the technical deep dive and a live demonstration of Rise’s flow.

Key Topics

The webinar will address how Rise enables:

  • SystemVerilog for HLS – untimed and loosely timed modeling and the constructs synthesized into RTL.
  • Verification continuity – applying SystemVerilog methodologies consistently from high-level models through RTL.
  • Mixed-language and mixed-abstraction simulation – the automatically generated adapters that bridge high-level and RTL models, and how to mix and match abstractions in verification, including UVM.
  • Advanced debug features – HL↔RTL correlation, transaction-level waveforms, RTL signal visibility, and synthesized assertions and coverage.
  • Familiar debug practices – including $display support and line-number annotations for RTL signals.

A highlight of the session will be a live demonstration, where attendees will see a design example progress from high-level verification through RTL, showcasing methodology reuse and debug continuity.

To Learn More

If you’re looking to accelerate verification, reduce duplicated effort, and understand how to apply your existing SystemVerilog/UVM expertise in an HLS context, this webinar will show you how.

Register Here For Replay

Don’t miss the opportunity to see how SystemVerilog at the core of HLS can streamline your design process and verification flow.

About Rise Design Automation

Our mission at Rise Design Automation is to raise the level of abstraction of design and verification beyond RTL and have it adopted at scale across the industry, transforming how designs are done for years to come: Adoption at Scale with Innovation at Scale.

Also Read:

Moving Beyond RTL at #62DAC

Generative AI Comes to High-Level Design

Upcoming Webinar: Accelerating Semiconductor Design with Generative AI and High-Level Abstraction



CEO Interview with Barun Kar of Upscale AI
by Daniel Nenni on 09-18-2025 at 10:00 am


Barun Kar is CEO of Upscale AI. He is also the co-founder of Auradine and previously served as COO. Barun has over 25 years of experience leading R&D organizations to deliver disruptive products and solutions, resulting in multi-billion-dollar revenue. Barun was on the founding team at Palo Alto Networks and served as the company’s Senior Vice President of Engineering where he spearheaded two acquisitions and led five post-merger integrations. Prior to that, Barun oversaw the entire Ethernet portfolio at Juniper Networks.

Tell us about your company

Upscale AI is developing open-standard, full-stack turnkey solutions for AI networking infrastructure. We’re redesigning the entire AI networking stack for ultra-low latency networking, offering next-level performance and scalability for AI training, inference, generative AI, edge computing, and cloud-scale deployments. Upscale AI just raised $100 million of funding for our seed round, so we look forward to using this funding to bring our solutions to market and define the future of scalable, interoperable AI networking.

What problems are you solving?

It’s becoming more challenging for network infrastructure to keep up with AI/ML model sizes, inferencing workloads, token generation rates, and frequent model tuning with real-time data. To meet today’s networking challenges, the industry needs scalable, high-performance networking infrastructure built on open standards. Upscale AI’s open standard solutions meet the latest bandwidth requirements and low latency needs of AI workloads, while also offering customers more scalability. Plus, Upscale AI is providing companies with much more flexibility and interoperability compared to the closed, proprietary solutions that dominate the market today.

What application areas are your strongest?

Upscale AI’s silicon, systems, and software are specifically optimized to meet AI requirements today and in the future. Our ultra-low latency and high bandwidth networking fabric will not only drive the best xPU performance, but will also offer a huge reduction in total cost of ownership at the data center level. Our unified NOS, which is based on SAI/SONiC open standards, makes it easy for companies to scale their infrastructure as needed and perform in-service network upgrades to maximize uptime. Additionally, our networking and rack scale solutions enable companies to host an array of AI compute without vendor lock-in.

What keeps your customers up at night?

Increasing network bandwidth demands have put a lot of pressure on infrastructure to deliver high bandwidth, low latency, and reliable interconnectivity. While you often hear about how powerful AI applications are now accessible to anyone with an internet connection, AI network infrastructure remains limited to companies with a lot of capital.  Furthermore, even hyperscalers and AI neocloud providers with deep pockets are limited to closed, proprietary solutions for AI network infrastructure. These companies don’t like being locked into a closed ecosystem. Upscale AI is giving its customers a new level of flexibility with our portfolio that is built using UALink, Ultra Ethernet, SONiC, SAI, and other cutting-edge open source technologies and open standards.

What does the competitive landscape look like and how do you differentiate?

Today there is no established AI network player offering an alternative to proprietary solutions. AI networking innovation should not be locked into a closed ecosystem. We strongly believe at Upscale AI that open standards are the future, and we’re working to democratize AI network infrastructure by pioneering open-standard networking technology. Our portfolio gives companies bring-your-own-compute flexibility to help realize the full potential of AI. Ours is a truly differentiated full-stack solution: engineered for AI-scale networking, vertically integrated from product to support, diversified for optionality, and built on open standards to power the next wave of AI infrastructure growth.

What new features/technology are you working on?

We’re working to bring to market full stack AI networking infrastructure, including robust silicon, systems, and software. Stay tuned for more updates on what’s coming out next.

How do customers normally engage with your company?

Upscale AI has a direct salesforce working with hyperscalers, neocloud providers, and other companies in the AI networking space. Prospective customers can reach out to our team via our website: https://upscaleai.com/.

Also Read:

CEO Interview with Adam Khan of Diamond Quanta
CEO Interview with Nir Minerbi of Classiq
CEO Interview with Russ Garcia with Menlo Micro



Eric Xu’s Keynote at Huawei Connect 2025: Redefining AI Infrastructure
by Daniel Nenni on 09-18-2025 at 8:00 am


At Huawei Connect 2025, held in Shanghai, Eric Xu, the Rotating Chairman of Huawei, delivered a keynote speech that laid out the company’s ambitious roadmap for AI infrastructure, computing power, and ecosystem development. His speech reflected Huawei’s growing focus on building high-performance systems that can support the next generation of artificial intelligence while advancing self-reliant technology development.

Setting the Stage

Xu began his keynote by reflecting on the rapid evolution of AI models and how breakthroughs over the past year have pushed the boundaries of computing. He noted that the increasing complexity of large models, particularly in inference and recommendation workloads, demands not just more powerful chips, but fundamentally new computing architectures. According to Xu, AI infrastructure needs to be both scalable and efficient—capable of handling petabyte-scale data and millisecond-level inference.

He also reminded the audience of the five key priorities he had previously outlined, such as the need for sustainable compute power, better interconnect systems, and software-hardware co-optimization. This year’s keynote built upon those principles and introduced Huawei’s vision for its next-generation systems.

New Products and Roadmap

One of the most significant parts of Xu’s speech was the unveiling of Huawei’s updated roadmap for chips and AI computing platforms. Over the next three years, Huawei will roll out several generations of Ascend AI chips and Kunpeng general-purpose processors. Each generation is designed to increase performance and density while supporting the growing needs of training and inference workloads.

Xu introduced the TaiShan 950 SuperPoD, a general-purpose computing cluster based on Kunpeng processors. It offers pooled memory, high-performance storage, and support for mission-critical workloads such as databases, virtualization, and real-time analytics. The design is intended to support diverse computing needs, with significant improvements in memory efficiency and processing speed.

On the AI side, Xu announced the Atlas 950 and Atlas 960 SuperPoDs. These are high-density AI compute systems capable of scaling to tens of thousands of AI processors. The upcoming Atlas 960 SuperCluster will combine over one million NPUs and deliver computing power measured in zettaFLOPS. This marks a shift toward ultra-large-scale AI systems, designed to handle foundation models, search, recommendation, and hybrid workloads.

To enable this, Huawei developed UnifiedBus, a proprietary interconnect that supports high-bandwidth, low-latency communication between nodes. It also supports memory pooling and intelligent task coordination. According to Xu, this interconnect is critical for scaling AI systems efficiently and supporting hybrid PoDs that combine AI, CPU, and specialized compute.

Open Source and Ecosystem Strategy

Another core element of the keynote was Huawei’s strong push toward openness. Xu announced that the company will fully open-source its core AI software stack, including its CANN compiler and virtual instruction set. Toolchains, model kits, and the openPangu foundation models will also become available to developers and partners by the end of the year.

This move toward open-source infrastructure is part of Huawei’s strategy to lower adoption barriers and encourage collaboration across the AI ecosystem. Xu emphasized that AI innovation cannot happen in silos, and by opening up its tools and platforms, Huawei hopes to enable more organizations to build on its technology.

Strategic Implications

Xu’s keynote also carried strategic overtones, reflecting Huawei’s response to geopolitical challenges and technology restrictions. With limited access to advanced semiconductor manufacturing, Huawei is shifting its focus toward system-level innovation—building powerful infrastructure using available nodes while maximizing performance through architecture and software.

The message was clear: Huawei is betting on large-scale infrastructure, hybrid compute systems, and interconnect innovation to maintain competitiveness in AI. The company aims to provide alternatives to traditional U.S.-centric AI platforms and chip providers, especially in markets seeking greater technological independence.

Bottom line: Eric Xu’s keynote at Huawei Connect 2025 outlined a bold vision for the future of AI infrastructure. From SuperPoDs and interconnect breakthroughs to open-source initiatives, Huawei is positioning itself as a central player in the next phase of AI development. If the company can execute its ambitious roadmap and foster a strong ecosystem, it may reshape the global AI landscape—especially in regions looking to build homegrown compute capabilities.

The full transcript is here.

Also Read:

MediaTek Dimensity 9500 Unleashes Best-in-Class Performance, AI Experiences, and Power Efficiency for the Next Generation of Mobile Devices

AI Revives Chipmaking as Tech’s Core Engine

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

 



Global Semiconductor Industry Outlook 2026
by Admin on 09-18-2025 at 6:00 am


PwC’s comprehensive report, “Semiconductor and Beyond,” released in 2026, provides a strategic outlook on the global semiconductor industry amid rapid transformations driven by AI, geopolitical tensions, and supply chain shifts. Structured into four sections—Foreword, Demand Analysis, Supply Analysis, and What’s Next?—the 104-page document offers insights for industry leaders, policymakers, and businesses navigating this critical sector. Authored by experts like Global Semiconductors Leader Glenn Burm, it emphasizes semiconductors’ role in powering innovation across automotive, server/network, home appliances, computing devices, and industrial applications.

The Foreword sets the stage, highlighting the industry’s evolution shaped by AI advancements, increased government investments in domestic production, and efforts to enhance supply chain resilience. Semiconductors are deemed indispensable for sectors like automotive, healthcare, and energy, with priorities shifting toward technology sovereignty and risk mitigation amid export controls and trade alliances.

In the Demand Analysis, titled “Semiconductors Power Innovation and Everyday Life,” PwC projects the semiconductor market to grow from $0.6 trillion in 2024 to over $1 trillion by 2030, at a CAGR of 8.6%. This growth is fueled by end-market dynamics across five key sectors. Server and Network leads with an 11.6% CAGR, driven by generative AI services and data center expansions. Automotive follows at 10.7%, propelled by electric vehicles (EVs), autonomous driving, and advanced driver-assistance systems (ADAS), with EV semiconductor content growing at 11.3% CAGR versus -3.1% for internal combustion engines. Home Appliances anticipates a 5.6% CAGR, boosted by AI-powered devices like smart refrigerators and TVs, with connectivity chips growing at 11.6%. Computing Devices expects 8.8% growth, led by AI-capable chips in high-end smartphones (75.6% CAGR) and PCs (29.3% CAGR). Industrial applications project 8.8% growth, supported by renewable energy (13.4% CAGR), smart agriculture (17.3%), and medical devices (5.3%). The analysis uses bubble charts to categorize applications by market CAGR and content growth, identifying “leap” opportunities in areas like automotive HPC and AI connectivity.
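As a quick sanity check on those headline numbers (my arithmetic, not PwC’s), compounding the 2024 base at the stated rate does land at roughly the $1 trillion mark by 2030:

$$ \$0.6\,\mathrm{T} \times (1 + 0.086)^{6} \approx \$0.6\,\mathrm{T} \times 1.64 \approx \$0.99\,\mathrm{T} $$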

The Supply Analysis, “The Race for Semiconductor Supremacy,” examines value chain dynamics, noting U.S. dominance in design, Asia’s fabrication edge, and Southeast Asia’s packaging leadership. Geopolitical shifts are reshaping chains, with investments focusing on leading nodes. Foundry capacity is expected to rise 1.7x by 2030, with ≤7nm nodes at 8% CAGR. High-bandwidth memory (HBM) penetration surges from 14% to 40%, market size at 27.8% CAGR. Materials market grows to $97 billion at 5.1% CAGR, while advanced packaging reaches $76 billion at 10.6%. Regional shifts show China at 7.5% CAGR, USA at 12.9%, and ASEAN at 9.9%. Wide-bandgap materials like GaN (53.5% CAGR) and SiC (27.0%) highlight power electronics growth.

The “What’s Next?” section explores opportunities beyond 2030, evaluating technologies like AI integration and advanced materials. It assesses feasibility via maturity scores, market potential by 2030 size and CAGR, and investment scales over five years. Quantitative insights guide entrants on uncertainties and future dynamics.

Overall, PwC’s report underscores the semiconductor industry’s pivotal role in global innovation and economic security. With AI accelerating demand and geopolitics disrupting supply, companies must prioritize resilience, diversification, and R&D. The outlook warns of disruptions but highlights growth in AI, EVs, and renewables, urging strategic adaptations for supremacy. As semiconductors evolve, PwC positions itself as a partner for sustainable growth, offering expertise in supply chain optimization and innovation.

Here is the full report.



SiFive Launches Second-Generation Intelligence Family of RISC-V Cores
by Kalar Rajendiran on 09-18-2025 at 6:00 am


SiFive, founded by the original creators of the RISC-V instruction set, has become the leading independent supplier of RISC-V processor IP. More than two billion devices already incorporate SiFive designs, ranging from camera controllers and SSDs to smartphones and automotive systems. The company no longer sells its own chips, choosing instead to license CPU IP and collaborate with silicon partners on development boards. This pure-play IP model allows SiFive to focus on innovation across its three core product families: Performance for high-end applications, Essential for embedded control, and Intelligence for AI-driven compute. The company also has an Automotive family of products with auto-grade safety and quality certifications.

The company recently announced the second generation of its Intelligence Family of processor IP cores, a complete update of its AI-focused X-Series. The new portfolio introduces the X100 series alongside upgrades to the X200, X300, and XM lines, designed for low power and high performance in a small footprint across applications from the far edge to the data center.

On the eve of the AI Infra Summit 2025, I chatted with SiFive’s Martyn Stroeve, Vice President of Corporate Marketing, and Marisa Ahmad, Product Marketing Director, to gain some deeper insights.

Two Popular X-Series Use Cases

While very flexible and versatile, the second-generation X-Series targets two distinct use cases. The first is as a standalone vector CPU, where the cores handle complex AI inference directly without the need for an external accelerator. A leading U.S. semiconductor company has already licensed the new X100 core for its next-generation edge-AI system-on-chips, relying on the core’s high-performance vector engine to process filters, transforms, and convolutions efficiently.

The second and increasingly critical application is as an Accelerator Control Unit. In this role, the X-Series core replaces the discrete DMA controllers and fixed-function state machines that traditionally orchestrate data movement in accelerators. Another top-tier U.S. semiconductor customer has adopted the X100 core to manage its industrial edge-AI accelerator, using the processor’s flexibility to control the customer’s matrix engine accelerator and to handle corner-case processing.

The Rising Importance of Accelerator Control

AI systems are becoming more complex, with vast data sets moving across heterogeneous compute fabrics. Conventional accelerators deliver raw performance but lack flexibility, often suffering from high-latency data transfers and complicated memory access hardware. SiFive’s Accelerator Control Unit concept addresses these pain points by embedding a fully programmable scalar/vector CPU within the accelerator itself. This design simplifies programming, reduces latency, and makes it easier to adapt to new AI models without extensive hardware redesign—an area where competitors such as Arm have scaled back their investment. Here is a link to a video discussing how Google leverages SiFive’s first-generation X280 as an AI compute host to provide flexible programming combined with the Google MXU accelerator in the data center.

Four Key Innovations in the Second Generation

SiFive’s new Intelligence cores introduce four standout enhancements. First are the SSCI and VCIX co-processing interfaces, high-bandwidth links that provide direct access to scalar and vector registers for extremely low-latency communication with attached accelerators.

Second is a hardware exponential unit, which reduces the common exp() function operation from roughly fifteen instructions to a single instruction, an especially valuable improvement given that exponential function operations are second only to multiply–accumulate in AI compute workloads.

Third is a new memory-latency tolerance architecture, featuring deeper configurable vector load data queues and a loosely coupled scalar–vector pipeline to keep data flowing even when memory access is slow. Finally, the family adopts a more efficient memory subsystem, replacing private L2 caches with a customizable hierarchy that delivers higher capacity while using less silicon area.

Performance Compared to Arm Cortex-M85

SiFive highlighted benchmark data showing that the new X160 core delivers roughly twice the inference performance of Arm’s Cortex-M85 at comparable silicon area. Using MLPerf Tiny v1.2 workloads such as keyword spotting, visual wake-word detection, image classification, and anomaly detection, the X160 demonstrated performance gains ranging from about 148% to over 230% relative to the Cortex-M85 while maintaining the same footprint. This two-times advantage underscores SiFive’s claim that its second-generation Intelligence cores can outpace the best current Arm microcontroller-class AI processors without demanding more die area or power budget.

A Complete AI Software Stack

The hardware is supported by a robust, RISC-V-based AI software ecosystem. The stack includes an MLIR-based compiler toolchain, a SiFive-tuned LLVM backend, and a neural-network graph analyzer. A SiFive Kernel Library optimized for vector and matrix operations integrates with popular frameworks such as TensorFlow Lite, ONNX, and PyTorch. Customers can prototype on QEMU, FPGA, or RTL/SystemC simulators and seamlessly transition to production silicon, allowing rapid deployment of AI algorithms on SiFive’s IP.

Summary

By marrying a mature software platform with cutting-edge vector hardware, SiFive’s second-generation Intelligence Family positions RISC-V as a compelling alternative for next-generation AI processing. These new products all feature enhanced scalar and vector processing and, specifically with the XM series, matrix processing capabilities designed for modern AI workloads. All of these cores build on the company’s proven fourth-generation Essential architecture, providing the reliability valued by automotive and industrial customers while adding advanced features for AI workloads from edge to data center.

With initial design wins at two leading U.S. semiconductor companies and momentum across industries from automotive to data centers, the Intelligence Gen 2 products stand ready to power everything from tiny edge devices to massive training clusters—while setting a new performance bar by outclassing Arm’s Cortex-M85 in key AI inference tasks.

Access the press announcement here.

To learn more, visit SiFive’s product page.

Also Read:

Podcast EP197: A Tour of the RISC-V Movement and SiFive’s Contributions with Jack Kang

Enhancing RISC-V Vector Extensions to Accelerate Performance on ML Workloads

Enabling Edge AI Vision with RISC-V and a Silicon Platform



Simulating Gate-All-Around (GAA) Devices at the Atomic Level
by Daniel Payne on 09-17-2025 at 10:00 am


Transistor fabrication has spanned the gamut from planar devices to FinFETs to Gate-All-Around (GAA) as silicon dimensions have decreased in the quest for higher density, faster speeds, and lower power. Process development engineers use powerful simulation tools to predict and even optimize transistor performance for GAA devices. Dr. Philippe Blaise, Principal AE at Silvaco, delivered a webinar on the topic of simulating GAA devices using their tool, Victory Atomistic, with quantum-level precision.

An overview diagram shows the different FET device structures in 3D cross-sections.

The big challenge is how to simulate a GAA FET and predict its performance at the nanoscale with quantum effects, in a way that is easy for a TCAD engineer to use, fast enough to enable exploration and optimization, and accurate enough to produce trusted SPICE models. Silvaco uses the Non-Equilibrium Green’s Function (NEGF) approach in their device simulation.

Using Victory Atomistic looked straightforward: the user chooses a material from a database and defines the device structure, then the simulator solves for carrier density using the NEGF equations and for potential using Poisson’s equation, producing an output IV curve.
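To make that flow concrete, here is a minimal, purely illustrative Python sketch of the self-consistent NEGF–Poisson loop such a simulator iterates at each bias point. The physics functions are toy stand-ins, not Silvaco’s algorithms: a real NEGF step builds the device Hamiltonian with contact self-energies and integrates Green’s functions over energy, and a real Poisson step solves the discretized electrostatics with proper boundary conditions.

```python
import numpy as np

N = 50  # grid points along the channel (toy resolution)

def negf_density(potential, v_gate):
    # Stand-in for the NEGF solve: returns a carrier density profile for a
    # given electrostatic potential. Real solvers compute Green's functions.
    return np.exp(-(potential - v_gate) ** 2)

def poisson_potential(density, v_source, v_drain):
    # Stand-in for the Poisson solve with contact boundary conditions.
    ramp = np.linspace(v_source, v_drain, N)          # contact-to-contact drop
    return ramp + 0.05 * (density - density.mean())   # toy charge feedback

def bias_point(v_gate, v_drain, tol=1e-8, max_iter=200):
    potential = np.linspace(0.0, v_drain, N)          # initial guess
    for _ in range(max_iter):
        density = negf_density(potential, v_gate)            # quantum transport step
        updated = poisson_potential(density, 0.0, v_drain)   # electrostatics step
        if np.max(np.abs(updated - potential)) < tol:        # self-consistency reached
            return density, potential
        potential = 0.7 * potential + 0.3 * updated          # damped update for stability
    return density, potential

# Sweep the gate to trace out one toy "IV-style" characteristic.
for vg in (0.2, 0.5, 0.8):
    n, phi = bias_point(v_gate=vg, v_drain=0.5)
    print(f"Vg = {vg:.1f} V, mean channel density (arb. units) = {n.mean():.3f}")
```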

Dr. Blaise showed the details of the device template for a nanowire using a diamond crystal structure in silicon, with lengths, number of gates, doping, oxide, and contacts. Users also choose the type of solver, voltage ranges, temperature, and output options. The goal of Victory Atomistic is to make atomic-scale TCAD simulations easy to use by keeping the complexity inside the tool.

Simulation results showed the desired accuracy, along with options for silicon bands and orientation. Victory Atomistic uses the Low-Rank Approximation (LRA) optimization technique to make the complex, quantum-level simulations run in a short period of time on conventional computers.

SiGe materials were simulated using the Virtual Crystal Approximation (VCA). Transition-metal dichalcogenide (TMD) monolayers were also simulated for MoS2, and electron-phonon scattering effects were modeled next. Victory Visual was demonstrated as a graphical post-processing tool, showing 90,000 atoms loaded.

Simulation results for a GAA FET device showed accurate results across a temperature range of 300K to 2K.

Summary

GAA FET devices can be accurately modeled and simulated using the NEGF algorithm, so that TCAD engineers can predict and optimize a new process technology. Silvaco has made their Victory Atomistic tool easy to use, and the results are produced quickly with the option to employ multiple CPUs. Victory Visual helps to graphically show results, making analysis more intuitive. Simulating a process technology before running wafers saves time and money, so why not give modern TCAD tools a try?

View the entire webinar after registering online here.




AI Revives Chipmaking as Tech’s Core Engine
by Daniel Nenni on 09-17-2025 at 8:00 am


A century ago, 391 San Antonio Road in Mountain View, California, housed an apricot-packing shed. Today, it’s marked by sculptures of diodes and a transistor, commemorating the 1956 founding of Shockley Semiconductor Laboratory—the birthplace of Silicon Valley. William Shockley, co-inventor of the transistor, aimed to craft components from silicon, but his firm flopped. Yet, his “traitorous eight” employees defected in 1957 to launch Fairchild Semiconductor nearby. This group included Gordon Moore and Robert Noyce, who later co-founded Intel, and Eugene Kleiner, pioneer of venture capital firm Kleiner Perkins. Most Silicon Valley giants trace their lineage to these early innovators.

Semiconductors revolutionized computing. Pre-semiconductor era computers relied on bulky, unreliable vacuum tubes. Semiconductors, solid materials controlling electrical flow, offered durable, versatile alternatives. Silicon enabled mass production of transistors, diodes, and integrated circuits on single chips, processing and storing data efficiently.

In 1965, Moore observed that transistor density on chips doubled annually (later adjusted to every two years), an observation dubbed Moore’s Law. This drove exponential progress: from 200 transistors per square millimeter in 1971 to 150 million in AMD’s 2023 MI300 processor. Smaller transistors switched faster, fueling breakthroughs like personal computers, the internet, smartphones, and artificial intelligence.

Chipmakers once dominated tech. In the 1970s, IBM integrated chips, hardware, and software into unparalleled dominance. The 1980s saw Microsoft’s software-only model thrive, but Intel’s processors remained vital. By 2000, Intel ranked sixth globally by market cap. Post-dotcom bust, however, firms like Google and Meta overshadowed hardware, commoditizing chips. Venture capitalist Marc Andreessen famously declared software was “eating the world” in 2011.

AI’s surge has reversed this. Training LLMs demands immense computation. Pre-2010, AI training compute doubled every 20 months, aligning with Moore’s Law. Since then, it doubles every six months, spiking chip demand. Nvidia, specializing in AI-suited GPUs, now ranks third-most valuable globally. Since late 2023, chipmaker stocks have outpaced software firms for the first time in over a decade.

Beyond training, AI inference, responding to queries, requires efficient, bespoke chips. General-purpose processors fall short, prompting tech giants to design custom silicon. Apple, Amazon, Microsoft, and Meta invest heavily; Google deploys more proprietary data-center processors than anyone except Nvidia and Intel. Seven of the world’s top ten firms now make chips.

Sophistication hinges on process nodes—feature sizes under 7nm define cutting-edge AI chips. Yet, over 90% of manufacturing uses 7nm or larger nodes for everyday devices like TVs, fridges, cars, and tools.

The 2021 COVID-19 chip shortage exposed vulnerabilities in global supply chains: design in America, equipment in Europe/Japan, fabrication in Taiwan/South Korea, packaging in China/Malaysia. Governments responded with subsidies—America’s $50 billion CHIPS Act in 2022, followed by $94 billion from the EU, Japan, and South Korea. Geopolitics complicates matters: U.S. export bans limit China’s access to advanced tech, prompting Beijing’s restrictions on key materials like gallium and germanium.

Yet, technological hurdles loom larger than political ones, argues Economist Global Business Writer Shailesh Chitnis. For decades, shrinking transistors boosted performance without proportional energy hikes. Now, denser chips and massive AI models drive soaring power use. Data centers could consume 8% of U.S. electricity by 2030.

To sustain exponential gains, innovations are essential. Incremental steps include hardware-software integration, like optimizing algorithms for specific chips. Radical shifts involve alternatives to silicon, such as gallium nitride for efficiency, or neuromorphic computing mimicking brain analog processes over digital ones. Optical computing, using light for faster data transfer, and quantum chips for complex simulations also promise breakthroughs.

AI’s demands are putting silicon back at tech’s heart, echoing Valley origins. As computation needs explode, chipmaking’s evolution will dictate future innovation, balancing efficiency, geopolitics, and sustainability. The apricot shed’s legacy endures—silicon’s story is far from over.

You can read the full article here.

Also Read:

Advancing Semiconductor Design: Intel’s Foveros 2.5D Packaging Technology

Revolutionizing Processor Design: Intel’s Software Defined Super Cores

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability



The IO Hub: An Emerging Pattern for System Connectivity in Chiplet-Based Designs
by Bernard Murphy on 09-17-2025 at 6:00 am


In chiplet-based design we continue the march of Moore’s Law by scaling what we can put in a semiconductor package beyond the boundaries of what we can build on a single die. This style is already gaining traction in AI applications, high performance computing, and automotive, each of which aims to scale out to highly integrated systems meeting performance, power, cost, and reliability goals. The technology challenge then is to build effective communications infrastructure between those chiplets.

UCIe is mentioned frequently as the standard for connectivity between chiplets. That standard is important, but it is only the bottom layer of the communication stack. In the modern era of networks-on-chip (NoCs), networks must also handle packetized communication, congestion, and quality of service, both within chiplets and between chiplets. This prompts a deeper dive into Arteris’ recently announced collaboration with AMD, in which FlexGen smart NoC IP cooperates with AMD’s Infinity Fabric. Commercial and proprietary network co-existence has arrived.

Commercial NoC IP jumps a hurdle

For a long time, NoC IP choices were an either/or decision: build and use your own in-house NoC IP, or buy a commercial IP. The choice was easy for new ventures or design teams who had outgrown earlier in-house options and who prioritized differentiation in core functions rather than in the communication fabric: buy a commercial option from an IP supplier and work with that vendor to ensure it keeps up as requirements evolve. Still, some semiconductor houses, particularly the big compute vendors like AMD and Intel, continued to see value in their proprietary NoC IP and were not obvious candidates for a commercial option.

Chiplet-based design seems to have scaled that wall. Highly optimized in-house communication IPs such as Infinity Fabric continue to be central in coherent compute chiplets but now AMD’s endorsement of using FlexGen smart NoC IP between chiplets has shown co-existence to be a very viable option.

Andy Nightingale (VP Product Management and Marketing at Arteris) told me that the term IO Hub is now commonly used to represent an emerging architectural pattern, a structure to bridge coherent fabrics (such as Infinity Fabric) with heterogeneous subsystems. It reflects a new realization that one unified fabric architecture may not be the optimal strategy across chiplet-based systems.

I asked Andy why some of their top-tier customers are turning to Arteris for this capability. Why not just build their own IO Hub? His answer reflects what you’ll often hear when design houses choose a commercial solution over an in-house option: they want to prioritize in-house resources towards their core competencies, using a proven partner to handle off-chiplet communication. A co-existence solution meets that objective.

Digging a little deeper

The physical connectivity between chiplets will most commonly be wires rather than active elements (active connectivity is also possible, but I am told that style is more expensive). Traffic management through the IO Hub is therefore handled through distributed control from chiplet interfaces (network interface units for Arteris IPs) to the hub. In the IO Hub use case, FlexGen is optimized for non-coherent, high-bandwidth data flows like HBM, PCIe, and AI accelerators.

Effectively managing this structure – off-chip interconnect topology, adding buffers and adapters, scaling wide datapaths, managing QoS – is a complex task that will probably demand iteration as the larger design evolves. That task usually falls to a senior NoC designer, consuming weeks of effort. Arteris’ FlexGen smart NoC technology acts as a virtual senior NoC engineer to automate this function, providing, compared with manual design, up to 10x productivity improvement, 30% shorter wirelength, and 10% reduced latency according to announced customer benchmarks.

Expanding Arteris reach

Arteris’ FlexGen NoC IP for non-coherent networks and Ncore for coherent networks are already well-established, particularly in new ventures, AI, automotive and many other applications. Arteris have announced a range of co-existence collaborations including:

  • Adoption by AMD to augment Infinity Fabric for AI chiplets
  • Accepted in the Intel Foundry AI + Chiplet Alliance
  • Adopted by Samsung LSI to use FlexNoC/FlexGen alongside proprietary fabric in mobile and AI SoCs
  • Adopted by NXP using Ncore and FlexNoC for automotive, integrating safety and accelerators in ASIL D systems
  • Publicly announced benchmarks from SiMa.ai showing increased productivity and reduced wirelengths

It’s now quite clear that Arteris can augment proprietary NoC solutions in addition to providing comprehensive NoC fabric solutions, as already demonstrated by widespread adoption in multiple markets. Impressive. You can learn more about Arteris HERE.

Also Read:

Arteris Simplifies Design Reuse with Magillem Packaging

Arteris at the 2025 Design Automation Conference #62DAC

Arteris Expands Their Multi-Die Support



Future Horizons Industry Update Webinar IFS 2025
by Daniel Nenni on 09-16-2025 at 2:00 pm


The Future Horizons Industry Update Webinar, presented today by Malcolm Penn, provides a comprehensive analysis of the semiconductor industry’s current state and future trajectory. Founded in 1989, Future Horizons leverages over 300 man-years of experience, emphasizing impartial insights from facts (e.g., IMF economy data, WSTS units/ASPs, SEMI capacity), sentiment (hype vs. reality), and decades of expertise. The agenda covers industry updates, outlooks on economy, unit demand, capacity, and ASPs, market forecasts for 2025-2026, key takeaways, and Q&A.

Malcolm opens by highlighting “truly extraordinary times,” driven by Trump 2.0’s “America First” agenda, which abandons free trade norms, raises tariffs to 1930s levels, and pivots to national interests. Geopolitical tensions—Netanyahu’s Gaza actions, Putin’s Ukraine aggression, Xi’s Taiwan threats—and China’s rise as a superpower challenge U.S. dominance, shattering post-WWII peace.

The market outlook is mixed: 1H-2025 growth relies on ASPs from the AI data center boom, masking non-AI weaknesses, excess CAPEX, and bloated inventories. Doubts emerge on AI’s ROI amid “insane” spending, with YoY revenue plateauing and a fragile U.S. economy. For 2H-2025, expect more of the same plus worsening economics, urging preparation for an AI “hangover” in a head (fundamentals) vs. heart (frenzy) battle.

Framing the analysis around the “Four Horsemen of the Semiconductor Apocalypse”:
  • Economy: Precariously unclear, with “tenuous resilience amid persistent uncertainty” (IMF July 2025). U.S. defies gloom, but EU/China stagnate; Trump shocks absorbed but risks linger. U.S. cost-of-living fears (70% worry income lags inflation) and Fed’s dual mandate conundrum (employment vs. prices) add pressure, with a September 17 rate cut a “dead certainty” but damned either way.
  • Unit Shipments: Yet to recover fully; July at 7.6b/week, 8% below peak, with unmeasured excess inventory choking supply chains (20b repaid of January’s 58b excess). Real recovery awaits unit growth resumption.
  • Capacity: CapEx stubbornly high at ~15% of SC sales vs. 11% trend, abnormal post-2022 crash. China is the culprit, accelerating to 34% global share (decoupling/tariffs-driven), 3x justifiable levels, focusing on non-leading edge but advancing (e.g., Huawei/SMIC 5/7nm). Domestic WFE vendors rise, closing markets; turning CapEx to capacity proves challenging (US$50-100b lost). India now ramps up ambitiously.
  • ASPs: Strong recovery from June 2022 crash ($1.11 to $1.85 peak), but plateauing/oscillating since December 2023. All sectors retreat except logic (TSMC’s SoC pseudo-monopoly holding prices). Long-term trend reverts to $1 (Moore’s Second Law), with disruptions temporary.

Forecasts: 2025 at +16% ($731.6b, range 15-17%), ASP-driven with early recovery in discretes/opto/analog; 2026 at +12% ($813.1b, range 6-18%), assuming no AI slowdown, stable geopolitics/economy, and unit rebound. Risks include AI crash, China dumping, overcapacity on mature nodes.

Bottom line: Technology roadmap to 2039 (GAA transition challenging, four-horse race: TSMC N2, Intel 18A, Samsung SF2, Rapidus 2nm). Power semis favor silicon over SiC/GaN/diamond; Makimoto’s Wave holds through 2037 (chiplets next). Packaging evolves from begrudging to enabling (ASATs vs. foundries). Next disruption: quantum computing, not AI (just improved tools). AI data center “madness” (e.g., 10GW plants, underwater/floating) and smart glasses echo 1970s watch disasters—ego over wisdom, fruition decades away. Chip drivers remain Moore’s Law, legislation, A/D conversion, geographic shifts—entertaining, drudgery-removing, enabling impossibles.

Malcolm also warns of structural risks (excess capacity, low utilization, an economic tilt to the bear case), urging caution amid AI hype, and promotes Future Horizons’ monthly reports for tracking fundamentals. The next webinar, on January 20, 2026, will be a must-see round-up of 2025.

Also Read:

MediaTek Develops Chip Utilizing TSMC’s 2nm Process, Achieving Milestones in Performance and Power Efficiency

Advancing Semiconductor Design: Intel’s Foveros 2.5D Packaging Technology

Rapidus, IBM, and the Billion-Dollar Silicon Sovereignty Bet



Something New in Analog Test Automation
by Daniel Payne on 09-16-2025 at 10:00 am


Digital design engineers have used DFT automation technologies like scan and ATPG for decades. Analog blocks embedded within SoCs, however, have historically required a test engineer to write tests that demand specialized expertise and can take man-months to debug. Siemens has a long history in DFT, SPICE circuit simulation, and AMS simulation, so it was a natural fit for them to announce analog component testing as part of a new product dubbed Tessent AnalogTest. I had a video call with Etienne Racine, Product Manager for Tessent, to understand what’s new.

Testing analog components inside an SoC drives up test time and cost, so there’s impetus to reduce both through automation techniques. Scan for digital has now been extended into the analog realm: Tessent AnalogTest builds on a novel DFT and ATPG technique that has minimal area and performance impact.

Using such scan-based analog tests reduces defect simulation times from days to minutes and brings to analog much the same benefits that scan and ATPG brought to digital designs several decades ago. Leveraging evolving standards thus becomes important for broad adoption and EDA industry support.

IEEE P2427 is both a working group and a draft standard that gives a standardized approach for analog defect modeling and coverage in analog/mixed-signal (AMS) devices, where the defect universe contains all likely defects; it also defines detectable defect coverage. The idea is to inject a defect into the netlist, run a SPICE circuit simulation, and then measure the effects to see whether the resulting fault is detected.
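As a rough illustration of that loop (my own sketch, not Siemens code and not the P2427 text), consider a toy resistor divider with a single DC output measurement. The “simulator” below is a self-contained stand-in; a real flow would invoke a SPICE engine such as AFS or Symphony on the modified netlist.

```python
GOLDEN = {"R1": 1e3, "R2": 1e3, "C1": 1e-12}   # toy divider: R1 from in to out, R2 and C1 from out to ground

def simulate_dc_out(components, vin=2.5):
    # Stand-in SPICE run: DC voltage at "out". A DC measurement ignores the
    # capacitor, which is exactly why it cannot detect a capacitor defect.
    r1, r2 = components["R1"], components["R2"]
    return vin * r2 / (r1 + r2)

# Defect universe: every likely defect, modeled here as a component value change.
DEFECT_UNIVERSE = {
    "R1_open":  ("R1", 1e9),    # open modeled as a very large resistance
    "R2_short": ("R2", 1e-3),   # short modeled as a very small resistance
    "C1_open":  ("C1", 1e-18),  # capacitor open
}

def detectable_defect_coverage(tolerance=0.05):
    golden_out = simulate_dc_out(GOLDEN)                # defect-free reference
    detected = 0
    for name, (comp, value) in DEFECT_UNIVERSE.items():
        faulty = dict(GOLDEN, **{comp: value})          # inject one defect
        out = simulate_dc_out(faulty)                   # re-simulate
        hit = abs(out - golden_out) > tolerance * abs(golden_out)
        detected += hit
        print(f"{name:9s}  out = {out:.4g} V  detected = {bool(hit)}")
    return detected / len(DEFECT_UNIVERSE)

print(f"detectable-defect coverage: {detectable_defect_coverage():.0%}")
```

In this toy case the capacitor defect goes undetected by the DC-only measurement, which is the kind of gap a coverage report is meant to expose so the test set can be extended.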

Users of Tessent AnalogTest can also use the IJTAG framework to enable portable and retargetable AMS function tests, defined in the IEEE P1687.2 standard. There’s a learning curve with the Instrument Connectivity Language (ICL) and Procedural Description Language (PDL) to describe the analog test access, instruments and analog test mechanisms. The automation helps users to create those files when they don’t already exist.

Figure 1: Example of a digital IJTAG network used to access an analog block during tests.

Test engineers write PDL for the AMS block – for example, force 2.5V on one pin, then measure the current on another. Tessent AnalogTest reads the PDL file, then creates a simulation test bench automatically. The Siemens tool also reads in the SPICE netlist for the AMS blocks, runs the SPICE simulator to detect injected defects, then reports the coverage achieved. Two Siemens simulators are supported for analog defect/fault simulation and detection: AFS and Symphony.

This new approach with Tessent AnalogTest, combining digital scan-based tests and analog IJTAG measurements, improves AMS test coverage and reduces test development and application times. When silicon arrives, your team can optimize defect coverage or yield, eventually extending this to automated defect analysis. Safety-critical applications that use ISO 26262 functional safety metrics will benefit from this approach’s consistent, simulated, automated test description.

Learning and using the high-level PDL language to describe intended test sequences is a big time saver, freeing up engineering resources. IJTAG is well understood by test teams, so expanding that to include analog blocks is an easy process. The Tessent AnalogTest tool automates the creation of DFT circuitry along with test patterns to test most analog circuits in under 1ms on digital-only testers. Even the test times get reduced 10X-100X while providing similar defect coverage to specification tests.

Structural test waveforms, multiple outputs tested concurrently

AMS designs now have new automation technology to dramatically improve analog test development and reach coverage goals, while being connected with IJTAG scan chains and an analog test bus. Siemens has introduced something not seen before, so it’s exciting times. Following the IEEE standards P2427 and P1687.2 ensures that this technology will be supported by the EDA industry going forward.

Related Blogs