Smart Verification for Complex UCIe Multi-Die Architectures
by Admin on 09-08-2025 at 10:00 am


By Ujjwal Negi – Siemens EDA

Multi-die architectures are redefining the limits of chip performance and scalability through the integration of multiple dies into a single package to deliver unprecedented computing power, flexibility, and efficiency. At the heart of this transformation is the Universal Chiplet Interconnect Express (UCIe) protocol, which enables high-bandwidth, low-latency communication between dies. But with innovation comes complexity: inter-die interactions, evolving protocol features, and the need to verify behavior across multiple abstraction layers create significant verification challenges.

This article examines the core verification hurdles in UCIe-based multi-die systems and explains how Questa™ One Avery™ Verification IP (Avery VIP) provides a protocol-aware, automated, and layered verification framework. It supports block-level to full system-level validation—combining automation, advanced debug tools, and AI-driven coverage analysis to accelerate verification closure.

Verification Hurdles in Multi-Die Systems

In traditional verification, individual design blocks are validated in isolation. Multi-die architectures require a shift toward verifying the entire system, ensuring that inter-die communication, synchronization, and interoperability between protocols, such as PCIe, CXL, CHI, and UCIe, function seamlessly. The challenge grows as the UCIe specification evolves—UCIe 2.0 and later versions introduce features such as the management transport protocol, lane repair, autonomous link training, and advanced sideband handling—all of which require verification environments to remain adaptable and up to date.

The test environment must also offer flexibility to simulate both standard operating conditions (such as steady-state data transfers and control signaling) and edge cases (like misaligned lanes or failed link training). Fine-grained controllability is essential, allowing errors to be injected and traffic patterns manipulated to test robustness—especially since faults can propagate across dies and protocol layers.

In addition, system-level performance metrics like cross-die latency, throughput, and bandwidth must be monitored in real time to ensure the design meets its performance targets under diverse workloads. This performance validation needs to be continuous and consistent across various traffic patterns and system states.

Configuration space management presents another challenge, as multi-die systems require synchronized register updates across dies, including real-time error reporting and runtime reconfiguration. Finally, verification must be able to scale from simulation, where deep debug visibility is possible, to emulation and hardware prototyping, for speed and real-world validation.

Accelerating Multi-Die Verification with Questa One Avery UCIe VIP

The Questa One Avery UCIe VIP is built to handle the full spectrum of multi-die verification needs through a layered, configurable framework.

Figure 1. Layered UCIe verification framework.

It supports multiple bus functional models (BFMs) across diverse DUT types. At the block level, Avery VIP can verify standalone components such as the Logical PHY (LogPHY), the D2D adapter, and the protocol layers. At the die or system level, it operates in two modes: a full-stack mode for end-to-end testing of the D2D adapter, LogPHY, mainband, and sideband (with or without the raw die-to-die interface, RDI), and a No-LogPHY/RDI2RDI mode for direct, higher-level protocol testing without physical link dependencies.

Figure 2. Avery VIP use models.

To accelerate customer testbench deployment, the Questa One VIP Configurator can generate a UVM-compliant testbench in just a few clicks, enabling verification to begin almost immediately. It allows visual DUT–VIP connectivity mapping with automatic signal binding, intuitive GUI-based BFM configuration, and selection of pre-built sequences aligned with the DUT’s use model. This eliminates extensive manual setup and provides a ready-to-run environment from the outset.

Figure 3. GUI-based Avery VIP Configurator.

Avery VIP also provides layer-specific and feature-specific callbacks, enabling on-the-fly traffic manipulation and precise error injection, whether that means corrupting FLITs, altering parameter exchanges, or testing abnormal traffic patterns. Real-time monitoring and dynamic scoreboarding help track system reactions to these injected conditions.
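
To make the callback idea concrete, here is a minimal Python sketch of a callback-driven FLIT corrupter. It is purely illustrative: the class names (Flit, MainbandDriver), fields, and methods are invented for this example and are not the Avery VIP API, which is delivered as SystemVerilog/UVM components.

```python
import random
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Flit:
    """Simplified stand-in for a UCIe FLIT: payload bytes plus a CRC field."""
    payload: bytearray
    crc: int = 0

# A callback receives a FLIT and may mutate it before it is driven to the DUT.
FlitCallback = Callable[[Flit], None]

@dataclass
class MainbandDriver:
    """Toy driver that runs every registered callback on each FLIT it sends."""
    callbacks: List[FlitCallback] = field(default_factory=list)
    sent: List[Flit] = field(default_factory=list)

    def register_callback(self, cb: FlitCallback) -> None:
        self.callbacks.append(cb)

    def send(self, flit: Flit) -> None:
        for cb in self.callbacks:
            cb(flit)            # callbacks may inspect or corrupt the FLIT
        self.sent.append(flit)  # a real VIP would drive the interface here

def corrupt_one_bit(flit: Flit) -> None:
    """Error-injection callback: flip one random payload bit, leave the CRC stale."""
    if flit.payload:
        idx = random.randrange(len(flit.payload))
        flit.payload[idx] ^= 1 << random.randrange(8)

driver = MainbandDriver()
driver.register_callback(corrupt_one_bit)
driver.send(Flit(payload=bytearray(b"\x00" * 8), crc=0x1234))
print(driver.sent[0].payload.hex())  # one byte now differs from all-zero
```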

For compliance and coverage, Avery VIP includes a comprehensive Compliance Test Suites (CTS) framework. Questa™ One Avery CTS is organized by DUT type, protocol mode, and direct specification references. The protocol test suite contains more than 500 checks for UCIe core layers and approximately 3,000 checks for PCIe, CXL, and AXI interactions. Runtime configurability allows protocol features, FLIT formats, or topology to be adjusted on the fly without rebuilding the environment.

The Avery CTS smart API layer further simplifies test creation, offering high-level functions to monitor link training, access configuration or memory spaces, control link state transitions, generate placeholder protocol traffic, and inject packets at precise protocol states.

Debugging and performance tracking are built in. Avery VIP features protocol-aware transaction recording, correlating transactions across layers and highlighting errors with associated signal activity. FSM recording logs internal state transitions such as LTSSM, FDI, and RDI events, complete with timestamps and annotations. Layer-specific debug trackers focus on LogPHY, the D2D adapter, FDI/RDI drivers, parity checks, and configuration space accesses. The performance logger provides real-time throughput, latency, and bandwidth measurements, and can focus on specific simulation phases for targeted analysis.
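
As a rough illustration of the bookkeeping a performance logger performs, the sketch below computes throughput, bandwidth, and latency statistics from timestamped transactions over a chosen simulation window. It is a conceptual stand-in with invented names (Txn, report), not the Avery VIP logger itself.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Txn:
    start_ns: float   # time the transaction was issued
    end_ns: float     # time it completed
    bytes_moved: int  # payload size in bytes

def report(txns: List[Txn], window_start_ns: float, window_end_ns: float) -> dict:
    """Summarize throughput, bandwidth, and latency over one simulation window."""
    in_window = [t for t in txns
                 if t.start_ns >= window_start_ns and t.end_ns <= window_end_ns]
    if not in_window:
        return {"transactions": 0}
    duration_s = (window_end_ns - window_start_ns) * 1e-9
    total_bytes = sum(t.bytes_moved for t in in_window)
    latencies = [t.end_ns - t.start_ns for t in in_window]
    return {
        "transactions": len(in_window),
        "throughput_tps": len(in_window) / duration_s,
        "bandwidth_GBps": total_bytes / duration_s / 1e9,
        "avg_latency_ns": sum(latencies) / len(latencies),
        "max_latency_ns": max(latencies),
    }

txns = [Txn(0, 120, 64), Txn(100, 260, 64), Txn(300, 410, 256)]
print(report(txns, window_start_ns=0, window_end_ns=1000))
```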

Figure 4. Protocol-aware transaction association.

Finally, an AI-driven verification co-pilot is integrated through Questa One Verification IQ, acting as an intelligent assistant throughout the verification cycle. The Questa One Verification IQ Coverage Analyzer leverages machine learning to automatically identify and prioritize coverage gaps, rank tests based on their contribution to overall coverage, and detect recurring failure signatures for faster root-cause analysis. The system visualizes coverage data through intuitive heatmaps and bin distribution charts, enabling teams to make data-driven decisions, focus effort on high-impact areas, and continuously refine their verification strategy for maximum efficiency.
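
The test-ranking idea can be approximated with a simple greedy selection over per-test coverage data: repeatedly pick the test that closes the most still-open bins. The sketch below uses made-up test and bin names and a hand-rolled heuristic, not the machine-learning models inside Verification IQ.

```python
from typing import Dict, List, Set

def rank_tests(coverage: Dict[str, Set[str]]) -> List[str]:
    """Greedy ranking: repeatedly pick the test covering the most not-yet-covered
    bins, a rough proxy for 'contribution to overall coverage'."""
    remaining = dict(coverage)
    covered: Set[str] = set()
    ranking: List[str] = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:   # every remaining test is redundant
            break
        ranking.append(best)
        covered |= remaining.pop(best)
    return ranking

# Hypothetical per-test functional coverage bins.
coverage = {
    "link_training_basic": {"ltssm.reset", "ltssm.active", "flit.fmt68"},
    "lane_repair_stress":  {"ltssm.repair", "ltssm.active", "sideband.msg"},
    "sideband_only":       {"sideband.msg"},
}
print(rank_tests(coverage))  # e.g. ['link_training_basic', 'lane_repair_stress']
```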

Figure 5. Questa One Verification IQ Coverage Analyzer.

Driving the Future of Multi-Die with UCIe Verification

As an open standard, UCIe is rapidly becoming the cornerstone of heterogeneous integration, allowing chipmakers to combine chiplets from different vendors into a single package with seamless interoperability. By standardizing the interconnect layer, UCIe unlocks a new wave of innovation in data centers, AI accelerators, and high-performance computing, paving the way for designs that are more scalable, energy-efficient, and cost-optimized than traditional monolithic approaches.

The Questa One Avery UCIe VIP is built to handle the demands of this rapidly evolving landscape. It scales effortlessly from block-level to full system-level verification, streamlines environment setup, allows precise error injection, and brings in AI-driven coverage analysis to speed up closure. With its mix of compliance suites, flexible runtime configuration, and powerful debug tools, it gives verification teams the confidence to hit performance, compliance, and reliability goals, helping the industry move toward a future where multi-die systems aren’t the exception, but the standard.

For a deeper dive into the full verification architecture, download the complete whitepaper: Accelerating UCIe Multi-Die Verification with a Scalable, Smart Framework.

Ujjwal Negi is a senior member of the technical staff at Siemens EDA based in Noida, specializing in storage and interconnect technologies. With over two years of hands-on experience, she has contributed extensively to NVMe, NVMe over Fabrics, and UCIe Verification IP solutions. She graduated with a Bachelor of Technology degree from Bharati Vidyapeeth College of Engineering in 2023.

Also Read:

Orchestrating IC verification: Harmonize complexity for faster time-to-market

Perforce and Siemens at #62DAC

Breaking out of the ivory tower: 3D IC thermal analysis for all

 


PDF Solutions Adds Security and Scalability to Manufacturing and Test
by Mike Gianfagna on 09-08-2025 at 6:00 am


Everyone knows design complexity is exploding. What used to be difficult is now bordering on impossible. While design and verification challenges occupy a lot of the conversation, the problem is much bigger than this. The new design and manufacturing challenges of 3D innovations and the need to coordinate a much more complex global supply chain are examples of the breadth of what lies ahead. The opportunity of ubiquitous use of AI creates even more challenges. I’m not referring to designing AI chips and systems, but rather how to use AI to increase the chances of design success for those systems.

Data is the new oil, and the foundation of AI. That statement is quite relevant here. What is needed to tame these challenges is a way to collect data upstream in that complex global supply chain and find ways to extract insights that will be used to drive decisions and actions further down the supply chain. Doing that across a complex supply chain requires a highly scalable and secure platform. It turns out PDF Solutions has been quietly building what’s needed to make all this happen. Let’s examine what they’ve been doing and what comes next as PDF Solutions adds security and scalability to manufacturing and test solutions.

Building the Foundation

There are two key products from PDF Solutions that build an infrastructure for the future. A bit of background will set the stage.

First is the Exensio® Analytics Platform, which automatically collects, analyzes and detects statistical changes in manufacturing test, assembly, and packaging operations that negatively affect product yield, quality, or reliability, wherever those operations occur.

The suite of modules in this product enables fabless companies, IDMs, foundries, and OSATs to securely collect and manage test data, assure high outgoing quality, deploy machine learning to the edge, establish controls around the test process, and improve efficiency across the manufacturing supply chain.

The machine learning capabilities will be key to what comes next. More on that in a moment.  The product has quite a large footprint across manufacturing and test operations worldwide.

Of course, a massive data analytics platform like this needs many sources of data across the worldwide supply chain to power it. To enable this part of the platform there is DEX, PDF’s data exchange network. DEX is an infrastructure deployed globally at OSAT locations that connects to potentially every test cell or piece of test and assembly equipment across the worldwide supply chain. It automatically delivers test rules, models, and recipes, and brings all of that test data back to Exensio in real time. There, the data is normalized using PDF Solutions’ semantic model, which ensures it is always complete, consistent, and ready for analysis. DEX is, if you will, the plumbing for test rules, models, and data for Exensio.

I began my career in test data analytics. Back then, the number and type of testers that were part of that process was extremely small compared to what exists today. In spite of that, I can tell you that interfacing to each source of test data, acquiring the data and ensuring it was accurate and timely was a huge challenge. The problem has gotten much, much bigger. DEX is a combination of hardware and software developed by PDF Solutions that interfaces to all data sources, validates the information and transports it to the required location for timely analysis. This is a very complex process.

The system is optimized to handle the unique challenges of semiconductor data acquisition, enabling PDF Solutions to manage petabytes of information in the cloud. The diagram below illustrates how DEX feeds critical data to Exensio.

How DEX feeds critical data to Exensio

What’s Next

Systems the size and scope of Exensio and DEX have already created a substantial impact on the semiconductor industry. The graphic at the top of this post illustrates some of the areas that have benefited from these systems. The next chapter of this story will focus on a concept called Data Feed Forward or DFF. PDF sees DFF providing an enterprise solution to collect, transform and distribute data across a customer’s supply chain to enable advanced test methodologies utilizing feed-forward data.

Using PDF’s global infrastructure, test data can be captured upstream and transported to the edge to be used to inform optimized strategies for downstream testing. The concept of feed forward data enablement isn’t new. It has been used successfully to optimize physical implementation by using early data to inform estimates for late-stage implementation.  In the case of manufacturing and test, a globally available, secure and scalable data infrastructure can be used to optimize test in the chiplet and advanced packaging era.

We are seeing the dawn of AI-driven test from PDF Solutions based on this technology. The goal of AI-driven test is to save time, reduce cost and improve quality. The diagram below summarizes the current three focus areas for this capability.

Digging Deeper

Dr. Ming Zhang

I had the opportunity recently to speak with Ming Zhang, vice president of Fabless Solutions at PDF. I’ve worked with Ming in the past and I’ve always found him to be insightful with a strong view of the big picture and what it all means. So, getting his explanation of AI for test was particularly valuable. Here’s what I learned about the three AI-driven test solutions summarized above.

Ming explained that Predictive Test has an optimization focus. As we all know, test insertions are expensive in terms of manufacturing time and tester cost. Most test strategies apply the same test program to all chips. But what if that could be optimized? Using feed-forward data, the parts of a design that are robust and the parts that are potentially weak can be identified. Using this kind of unit-specific data, a test program can be developed that is optimized for the specific part. Ming explained that this means testing of the parts of the design demonstrated to be robust could be minimized or even skipped.

On the other hand, parts of the design that showed potential weakness could be tested more rigorously. In the end, the actual test time might be the same as before, but the tests applied would be more effective at finding good and bad die, so quality would improve for the same, or lower test cost.

We then discussed Predictive Binning. Ming described this approach as a straight cost-saving measure. He explained that, based on early data, chips that are likely to fail a test step can be removed, or binned out, before that step. This avoids spending tester time on parts that would fail anyway, so test cost is reduced.

We then discussed Predictive Burn-in. This is another cost saving measure. In this case, the devices that have exhibited robust behavior can be identified as not requiring burn-in. Since this process requires hundreds to thousands of hours using expensive measurement and ambient control equipment, a significant cost saving can be realized by avoiding burn-in where it’s not necessary.
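
A toy decision rule illustrates how feed-forward data might drive all three techniques for a single unit. The thresholds, field names, and risk score below are invented for illustration and do not represent PDF Solutions’ actual models.

```python
from dataclasses import dataclass

@dataclass
class UnitHistory:
    """Upstream (feed-forward) data for one die, e.g. from wafer sort."""
    parametric_margin: float    # normalized margin to spec limits, higher is safer
    predicted_fail_prob: float  # model-estimated probability of failing final test

def plan_downstream_test(unit: UnitHistory) -> dict:
    """Return a per-unit test plan using illustrative, made-up thresholds."""
    plan = {"run_full_test": True, "bin_out_early": False, "burn_in": True}
    if unit.predicted_fail_prob > 0.90:
        # Predictive Binning: don't spend tester time on a near-certain failure.
        plan.update(run_full_test=False, bin_out_early=True, burn_in=False)
    elif unit.parametric_margin > 2.0 and unit.predicted_fail_prob < 0.01:
        # Predictive Test / Predictive Burn-in: robust unit, trim tests, skip burn-in.
        plan.update(run_full_test=False, burn_in=False)
    return plan

print(plan_downstream_test(UnitHistory(parametric_margin=2.5, predicted_fail_prob=0.005)))
print(plan_downstream_test(UnitHistory(parametric_margin=0.3, predicted_fail_prob=0.95)))
```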

Ming pointed out that all of these technologies apply advanced AI algorithms to the massive worldwide manufacturing database managed by PDF Solutions. The initial AI models are developed by PDF. Ming went on to explain that some customers also want to build proprietary models to drive test decisions. PDF supports this process as well, creating an environment where certain customers can build their own AI-for-test models and algorithms to enhance their competitiveness.

I was quite energized after my discussion. Ming painted a rather upbeat and exciting view of what’s ahead. By the way, if you happen to be in Taiwan on September 12, Ming will be presenting this work at the SEMICON event there.

The Last Word

Dr. John Kibarian

PDF Solutions’ president, CEO and co-founder Dr. John Kibarian made a comment recently that steps back a bit and takes a bigger picture view of where this can all go. He said:

“Collaboration from the system company, all the way back to the equipment vendor, is important today in order to get out new products. Then once those products are launched, the collaboration for the ongoing maintenance of that production flow is requiring a lot more effort. The collaboration required is going to go up. It will be doable at scale only because the industry is going to require it to be doable.”

 The vision painted by John is quite exciting as PDF Solutions adds security and scalability to manufacturing and test.

Also Read:

PDF Solutions and the Value of Fearless Creativity

Podcast EP259: A View of the History and Future of Semiconductor Manufacturing From PDF Solution’s John Kibarian


Revolutionizing Processor Design: Intel’s Software Defined Super Cores
by Admin on 09-07-2025 at 2:00 pm


In the ever-evolving landscape of computing, Intel’s patent application for “Software Defined Super Cores” (EP 4 579 444 A1) represents a groundbreaking approach to enhancing processor performance without relying solely on hardware scaling. Filed in November 2024 with priority from a U.S. application in December 2023, this innovation addresses the inefficiencies of traditional high-performance cores, which often sacrifice energy efficiency for speed through frequency turbo boosts. By virtually fusing multiple cores into a “super core,” Intel proposes a hybrid software-hardware solution that aggregates instructions-per-cycle (IPC) capabilities, enabling energy-efficient, high-performance computing. This essay explores the concept, mechanisms, benefits, and implications of Software Defined Super Cores (SDC), highlighting how they could transform modern processors.

The background of this patent underscores persistent challenges in processor design. High-IPC cores, while powerful, depend heavily on process technology node scaling, which is becoming increasingly difficult and costly. Larger cores also reduce overall core count, limiting multithreaded performance. Hybrid architectures, like those blending performance and efficiency cores, attempt to balance single-threaded (ST) and multithreaded (MT) needs but require designing and validating multiple core types with fixed ratios. Intel’s SDC circumvents these issues by creating virtual super cores from neighboring physical cores—typically of the same class, such as efficiency or performance cores—that execute portions of a single-threaded program in parallel while maintaining original program order at retirement. This gives the operating system (OS) and applications the illusion of a single, larger core, decoupling performance gains from physical hardware expansions.

At its core, SDC operates through a synergistic software and hardware framework. The software component—potentially integrated into just-in-time (JIT) compilers, static compilers, or even legacy binaries—splits a single-threaded program into instruction segments, typically around 200 instructions each. Flow control instructions, such as conditional jumps checking a “wormhole address” (a reserved memory space for inter-core communication), steer execution: one core processes odd segments, the other even ones. Synchronization operations ensure in-order retirement, with “sync loads” and “sync stores” enforcing global order. Live-in loads and live-out stores handle register dependencies, transferring necessary data via special memory locations without excessive overhead (estimated at under 5%). For non-linear code, like branches or loops, indirect branches or wormhole loop instructions dynamically re-steer cores, using predicted targets or stored program counters to maintain parallelism.
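
As a loose mental model of this odd/even steering (not Intel’s implementation), the sketch below splits an instruction stream into fixed-size segments, assigns alternating segments to two cores, and shows that interleaved retirement reproduces the original program order. The helper names and the string-based instruction representation are invented for the example.

```python
from typing import List, Tuple

SEGMENT_SIZE = 200  # the patent describes segments of roughly 200 instructions

def split_segments(instructions: List[str], size: int = SEGMENT_SIZE) -> List[List[str]]:
    """Chop a single-threaded instruction stream into fixed-size segments."""
    return [instructions[i:i + size] for i in range(0, len(instructions), size)]

def steer(segments: List[List[str]]) -> Tuple[List[List[str]], List[List[str]]]:
    """Core 0 executes the even-numbered segments, core 1 the odd-numbered ones."""
    return segments[0::2], segments[1::2]

def retire_in_order(core0: List[List[str]], core1: List[List[str]]) -> List[str]:
    """Retirement interleaves the two cores' segments back into program order."""
    merged: List[str] = []
    for i in range(max(len(core0), len(core1))):
        if i < len(core0):
            merged.extend(core0[i])
        if i < len(core1):
            merged.extend(core1[i])
    return merged

program = [f"insn_{i}" for i in range(1000)]
core0, core1 = steer(split_segments(program))
assert retire_in_order(core0, core1) == program  # original program order preserved
print(len(core0), "segments on core 0,", len(core1), "segments on core 1")
```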

Hardware support is minimal yet crucial, primarily enhancing the memory execution unit (MEU) with SDC interfaces. These interfaces manage load-store ordering, inter-core forwarding, and snoops, using a shared “wormhole” address space for fast data transfers. Cores may share caches or operate independently, but the system guarantees memory ordering and architectural integrity. The OS plays a pivotal role, provisioning cores based on hardware-guided scheduling (HGS) recommendations, migrating threads to SDC mode when beneficial (e.g., for high-IPC phases), and reverting if conditions change, such as increased branch mispredictions or system load demanding more independent cores.

The benefits of SDC are multifaceted. Energy efficiency improves by allowing longer turbo bursts or operation at lower voltages, as aggregated IPC reduces the need for frequency scaling. Flexibility is a key advantage: platforms can dynamically adjust between high-ST performance (via super cores) and high-MT throughput (via individual cores), adapting to workloads without fixed hardware ratios. Unlike prior multi-threading decompositions, which incurred 25-40% instruction overheads from replication, SDC minimizes redundancy, focusing on explicit dependencies. This could democratize high-performance computing, reducing reliance on advanced process nodes and enabling scalable designs in data centers, mobile devices, and AI accelerators.

However, challenges remain. Implementation requires precise software splitting to minimize communication overhead, and hardware additions, though small, must be validated for reliability. Compatibility with diverse instruction set architectures (ISAs) via binary translation is mentioned, but real-world deployment may face OS integration hurdles.

In conclusion, Intel’s Software Defined Super Cores patent heralds a paradigm shift toward software-centric processor evolution. By blending virtual fusion with efficient inter-core communication, SDC promises to bridge the gap between performance demands and hardware limitations, fostering more adaptable, efficient computing systems. As technology nodes plateau, innovations like this could define the next era of processors, empowering applications from AI to everyday computing with unprecedented dynamism.

You can see the full patent application here.

Also Read:

Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability

Intel Unveils Clearwater Forest: Power-Efficient Xeon for the Next Generation of Data Centers

Intel’s IPU E2200: Redefining Data Center Infrastructure

Revolutionizing Chip Packaging: The Impact of Intel’s Embedded Multi-Die Interconnect Bridge (EMIB)


TSMC’s 2024 Sustainability Report: Pioneering a Greener Semiconductor Future
by Admin on 09-07-2025 at 8:00 am


TSMC, the world’s most trusted semiconductor foundry, released its 2024 Sustainability Report, underscoring its commitment to embedding environmental, social, and governance principles into its operations. Founded in 1987 and headquartered in Hsinchu Science Park, TSMC employs 84,512 people globally and operates facilities across Taiwan, China, the U.S., Japan, and Europe. The report, spanning 278 pages, highlights TSMC’s role as an “innovation pioneer, responsible purchaser, practitioner of green power, admired employer, and power to change society.” Amid rising global risks like extreme weather, as noted in the World Economic Forum’s Global Risks Report, TSMC emphasizes multilateral cooperation to advance sustainability, aligning with UN Sustainable Development Goals (SDGs).

In letters from ESG Steering Committee Chairperson C.C. Wei and ESG Committee Chairperson Lora Ho, TSMC reaffirms sustainability as core to its resilience and competitiveness. Wei stresses that ESG is embedded in every decision, driving net zero emissions by 2050 and carbon neutrality. The company saved 104.2 billion kWh globally in 2024 through efficient chips, equivalent to 44 million tons of reduced carbon emissions. By 2030, each kWh used in production is projected to save 6.39 kWh worldwide. Ho highlights collaborations across five ESG directions: green manufacturing, responsible supply chains, inclusive workplaces, talent development, and care for the underprivileged.

Environmentally, as a “practitioner of green power,” TSMC focuses on climate and energy (pages 108-123), water stewardship (pages 124-134), circular resources (pages 135-146), and air pollution control (pages 147-153). It deployed 1,177 energy-saving measures, achieving 810 GWh in annual savings and 13% renewable energy usage, targeting 60% by 2030 and RE100 by 2040. Scope 1-3 emissions reductions follow SBTi standards, with 2025 as the baseline for absolute cuts by 2035. A new carbon reduction subsidy for Taiwanese tier-1 suppliers and the GREEN Agreement for 90% of raw material emitters aim to slash Scope 3 emissions. Water-positive goals by 2040 include a 2.7% reduction in unit consumption and 100% reclaimed water systems. Circular efforts recycled 97% of waste globally, transforming 9,400 metric tons into resources, while volatile organic compounds and fluorinated GHGs saw 99% and 96% reductions, respectively.

Socially, TSMC positions itself as an “admired employer” (pages 155-202), fostering an inclusive workplace with a Global Inclusive Workplace Statement and campaigns on action, equity, and allyship. It conducted a global Workplace Human Rights Climate Survey and expanded human rights due diligence to suppliers, incorporating metrics into long-term goals. Women comprise 40% of employees, with targets for over 20% in management. Talent development averaged 90 learning hours per employee, with programs like the Senior Manager Learning and Development achieving 90-point satisfaction. Occupational safety maintained an incident rate below 0.2 per 1,000 employees, enhanced by 24/7 ambulances and diverse protective gear. As a force for societal change (pages 204-232), TSMC’s foundations benefited 1,391,674 people through 171 initiatives, investing NT$2.441 billion. Social impact assessments using IMP and IRIS+ frameworks supported STEM education, elderly care, and SDG 17 partnerships.

Governance-wise (pages 234-251), TSMC reported NT$2.95 trillion in revenue and NT$1.17 trillion in net income, with 69% from advanced 7nm-and-below processes. R&D spending hit US$6.361 billion, up 3.1-fold in a decade. The ESG Performance Summary (pages 263-271) details metrics like 100% supplier audits and top rankings in DJSI and MSCI ESG.

Bottom line: The report showcases TSMC’s 2024 achievements: 11,878 customer innovations, 96% customer satisfaction, and NT$2.45 trillion in Taiwanese economic output, creating 358,000 jobs. Despite challenges like geopolitical tensions, TSMC’s net zero roadmap and inclusive strategies position it as a sustainability leader, driving shared value for stakeholders and a resilient future.

Also Read:

TSMC 2025 Update: Riding the AI Wave Amid Global Expansion

TSMC Describes Technology Innovation Beyond A14

TSMC Brings Packaging Center Stage with Silicon


Intel’s Commitment to Corporate Responsibility: Driving Innovation and Sustainability
by Admin on 09-07-2025 at 6:00 am


Introduction by Lip-Bu Tan:

I’m an engineer at heart. Nothing motivates me more than solving hard problems. Our teams across Intel are driven by this same mindset—inspired by the power of technology to enable new solutions to our customers’ toughest challenges.

I fundamentally believe this will be a catalyst for innovation throughout our company for years to come—and every day brings new opportunities for us to improve.

In our 2024-2025 Corporate Responsibility Report, you will see we have made important progress in many areas. We are driving greater compute performance in our products while improving their power efficiency. Our water conservation and use of renewable energy is supporting a resilient and sustainable manufacturing footprint. And our close collaboration with partners across our value chain is helping customers to achieve their own sustainability goals.

But this work is never done—and we have a lot of hard work ahead as we take actions to reshape our company, strengthen our culture, and empower our engineers to do what they do best.

Underpinning this work is a consistent focus on technology, sustainability, and talent investments aligned with our long-term goals. At the end of the day, our work in these areas drives innovation and growth—because when great people engineer great products to delight our customers, we strengthen our business and help meet the needs of a changing world.

I am looking forward to the work ahead as we build a new Intel. Thank you for your feedback and your partnership.

Lip-Bu Tan

Chief Executive Officer

In the 2024-25 Corporate Responsibility Report, Intel outlines a refreshed strategy under new CEO Lip-Bu Tan, emphasizing transformation amid global challenges. Founded on principles of transparency, ethics, and human rights, the report highlights Intel’s integrated approach to corporate responsibility, aligning with frameworks like the UN Sustainable Development Goals and Global Reporting Initiative. With a targeted workforce of 75,000 employees worldwide, Intel’s efforts span people, sustainability, and technology, demonstrating how a tech giant can balance business growth with societal impact.

Central to Intel’s people-focused initiatives is fostering an inclusive, safe culture. The company invests in talent development, aiming to retain top performers through consistent indicators across talent systems. In 2024, Intel’s undesired turnover rate was 5.9%, reflecting ongoing economic pressures but also progress in employee engagement. Programs like Inclusive Leaders delivered 120 workshops to 2,271 participants, promoting dignity and respect, with 91% of employees reporting respectful treatment. Safety remains paramount, with a recordable injury rate of 0.71 per 100 employees, below the industry average. Intel’s global volunteer program, Intel Involved, saw employees contribute over 830,000 hours, amplified by $7.8 million in Foundation matches. Philanthropy totaled $79.5 million, supporting STEM education and humanitarian relief. Respecting human rights, Intel conducted 252 supplier audits, addressing forced labor and returning over $200,000 in fees to workers. Responsible minerals sourcing achieved 99% conformance, expanding to include aluminum and copper.

Sustainability efforts underscore Intel’s environmental stewardship. Achieving 98% renewable electricity globally, Intel reduced Scope 1 and 2 GHG emissions by 24% from 2019, avoiding 84% of cumulative emissions over the decade. The Climate Transition Action Plan targets net-zero Scope 1 and 2 by 2040. Energy conservation saved 2.4 billion kWh since 2020, with $104 million invested yielding $150 million in savings. Water stewardship restored net positive water in the US, India, Costa Rica, and Mexico, conserving 10.5 billion gallons and restoring 2.9 billion through projects like Oregon’s McKenzie River enhancement. Waste management upcycled 66% of manufacturing streams, diverting 74,000 tons from landfills via circular economy practices. Supply chain sustainability engaged 140 suppliers on GHG reductions, with 99% responding to CDP questionnaires. Responsible chemistry addressed PFAS through collaborations like the NSTC’s PRISM program, promoting safer alternatives.

Technology initiatives leverage Intel’s expertise for societal good. Product energy efficiency improved 4.0X for clients and 2.7X/3.0X for servers from 2019 baselines, reducing Scope 3 emissions. Responsible AI evolved with a new “Protect the Environment” principle, operationalizing governance for risks like bias. Intel’s Digital Readiness Programs trained 8 million in AI skills across 29 countries, emphasizing inclusion. The Intel Responsible Technology Initiative funded 465 projects in 42 countries, addressing health, education, and climate via solutions like AI for veterans’ transitions and water monitoring in mining.

Intel’s report reflects resilience amid challenges like economic pressures and geopolitical risks. By integrating responsibility into operations, Intel not only mitigates risks but also drives innovation, as seen in CHIPS Act awards bolstering US manufacturing. Looking ahead, Intel’s ambitions—net-zero emissions, inclusive AI, and resilient supply chains—position it as a leader in sustainable tech. This holistic approach ensures Intel enriches lives while building a stronger future for all stakeholders.

See the full report here.

Also Read:

Revolutionizing Chip Packaging: The Impact of Intel’s Embedded Multi-Die Interconnect Bridge (EMIB)

Intel’s Pearl Harbor Moment

Should the US Government Invest in Intel?


RANiX Employs CAST’s TSN IP Core in Revolutionary Automotive Antenna System
by Daniel Nenni on 09-06-2025 at 8:00 am


This press release from CAST announces a significant collaboration with RANiX Inc., highlighting the integration of CAST’s TSN Switch IP core into RANiX’s new Integrated Micro Flat Antenna System (IMFAS) SoC. This development underscores the growing adoption of Time-Sensitive Networking (TSN) in the automotive sector, particularly for enhancing in-vehicle communication efficiency. As someone tracking advancements in automotive electronics and IP cores, I find this release both timely and insightful, though it leans heavily on promotional language typical of industry announcements.

At its core, the release details how RANiX, a South Korean leader in automotive and IoT communication chips, has leveraged CAST’s TSN technology to synchronize and route signals from a multi-protocol antenna array. The IMFAS SoC handles diverse protocols like 5G, WiFi, GNSS/GPS, BLE, and UWB, funneling them through an Ethernet backbone to the vehicle’s Telematics Control Unit (TCU). By replacing lengthy RF cable runs with TSN-enabled Ethernet, the system promises reduced complexity, lower costs, improved signal integrity, and seamless integration into Software-Defined Vehicles (SDVs). This is a smart evolution, aligning with the industry’s shift toward zonal architectures where centralized processing dominates.

CAST’s TSN-SW Multiport Ethernet Switch IP core is positioned as the enabler here, boasting ultra-low latency, standards compliance (e.g., IEEE 802.1Q), and configurability for applications beyond antennas, such as sensor fusion in automated parking or environmental sensing. The release quotes RANiX’s CTO, No Hyoung Lee, praising CAST’s forward-thinking approach to evolving TSN standards and their Functional Safety features, crucial for ISO 26262 compliance in automotive designs. Alexander Mozgovenko, CAST’s TSN Product Manager, reciprocates by lauding RANiX’s innovative use of the core, noting their long-standing partnership since 2011. This mutual endorsement adds credibility, but it also feels somewhat scripted, as press releases often do.

From a technical standpoint, the announcement is compelling. TSN’s ability to ensure deterministic timing and prioritization in Ethernet networks addresses a key pain point in modern vehicles, where real-time data from multiple sources must coexist without interference. RANiX’s IMFAS exemplifies this by creating a “flat” antenna system that minimizes physical cabling, potentially slashing weight and assembly costs—vital for electric vehicles aiming for efficiency. Moreover, CAST’s broader IP portfolio, including CAN-FD, LIN, and ASIL-D ready cores, positions them as a one-stop shop for automotive bus controllers, which could appeal to designers seeking integrated solutions.
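
To illustrate the prioritization half of that claim in the simplest terms, here is a toy strict-priority egress selector over the eight IEEE 802.1Q traffic classes. Real TSN switches, including the core described here, add time-aware shaping, frame preemption, and other mechanisms far beyond this sketch; the class and method names are invented for illustration.

```python
import collections
from typing import Deque, Dict, Optional, Tuple

NUM_TCS = 8  # IEEE 802.1Q defines eight traffic classes (PCP values 0-7)

class StrictPriorityEgress:
    """Toy egress port: one FIFO per traffic class; highest non-empty class sends first."""

    def __init__(self) -> None:
        self.queues: Dict[int, Deque[str]] = {tc: collections.deque() for tc in range(NUM_TCS)}

    def enqueue(self, pcp: int, frame: str) -> None:
        self.queues[pcp].append(frame)

    def dequeue(self) -> Optional[Tuple[int, str]]:
        """Pick the next frame to transmit, scanning classes from highest to lowest."""
        for tc in range(NUM_TCS - 1, -1, -1):
            if self.queues[tc]:
                return tc, self.queues[tc].popleft()
        return None  # port idle

port = StrictPriorityEgress()
port.enqueue(1, "bulk WiFi payload")
port.enqueue(6, "GNSS timing frame")
print(port.dequeue())  # -> (6, 'GNSS timing frame'): high-priority traffic goes out first
```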

However, the release has limitations. It lacks quantitative data, such as specific latency figures, cost savings percentages, or performance benchmarks, which would strengthen its claims. While it mentions “proven reliability” and “cost-effectiveness,” these are vague without metrics or third-party validations. Additionally, the focus on RANiX’s 80% market share in South Korean tolling chipsets feels tangential, perhaps included to bolster their credentials but not directly tied to IMFAS. In a broader context, with TSN still emerging in automotive (as the release notes some firms are “still contemplating” adoption), this could be a bellwether for wider implementation, especially amid the push for autonomous driving.

Overall, this press release showcases a practical TSN application, signaling progress in Ethernet-based vehicle networks. It’s well-structured, with clear sections on the technology, quotes, and company backgrounds, making it accessible to both technical audiences and investors. For CAST, it reinforces their expertise in IP cores since 1993; for RANiX, it highlights their innovation in a competitive field. If executed as described, the IMFAS could indeed simplify in-vehicle communications, paving the way for smarter, more efficient cars. That said, I’d love to see follow-up data on real-world deployments to gauge its impact. In an era of rapid automotive electrification and connectivity, announcements like this are exciting harbingers of what’s next.

Link to Press Release

Also Read:

CAST Webinar About Supercharging Your Systems with Lossless Data Compression IPs

WEBINAR Unpacking System Performance: Supercharge Your Systems with Lossless Compression IPs

Podcast EP273: An Overview of the RISC-V Market and CAST’s unique Abilities to Grow the Market with Evan Price

CAST Advances Lossless Data Compression Speed with a New IP Core


Podcast EP306: The Challenges of Advanced AI Data Center Design with Josue Navarro
by Daniel Nenni on 09-05-2025 at 10:00 am

Dan is joined by Josue Navarro, product marketing engineer for Microchip’s dsPIC business unit. He began his career as a process engineer at Intel and has since transitioned into product marketing with Microchip Technology where he supports customers developing system designs utilizing Microchip’s Digital Signal Controllers.

Dan and Josue explore AI-focused data centers and the challenges they present. What is needed to address energy and thermal efficiency, reliability, and sustainability are some of the topics covered in this broad and informative discussion. Josue discusses where technologies such as liquid cooling and real-time thermal monitoring fit, and he explains that next-generation AI data centers present a 3X increase in power density. Josue also provides an overview of the far-reaching environmental impacts that must be addressed and how Microchip is working with the industry to help address these challenges.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Cadence’s Strategic Leap: Acquiring Hexagon’s Design & Engineering Business
by Admin on 09-05-2025 at 8:00 am


In a bold move that underscores the accelerating convergence of electronic design automation (EDA) and mechanical engineering, Cadence Design Systems announced its agreement to acquire Hexagon AB’s Design & Engineering (D&E) business for approximately €2.7 billion, equivalent to about $3.16 billion. This transaction, expected to close in the first quarter of 2026, represents a significant expansion for Cadence, integrating advanced simulation technologies into its portfolio and positioning the company at the forefront of physical AI and complex system design.

Cadence Design Systems, headquartered in San Jose, California, is a global leader in EDA software, providing tools essential for designing integrated circuits, systems-on-chips, and printed circuit boards. The company has long been a staple in the semiconductor industry, aiding giants like Intel and NVIDIA in bringing cutting-edge chips to market. Over the years, Cadence has pursued aggressive growth through acquisitions, such as its 2023 purchase of Intrinsix for aerospace and defense expertise, and the 2024 integration of BETA CAE Systems to bolster its multiphysics simulation capabilities. This latest deal with Hexagon fits seamlessly into that strategy, enhancing Cadence’s offerings in system-level analysis where electronic and mechanical domains intersect.

Hexagon AB, a Stockholm-based multinational technology group, specializes in digital reality solutions that combine sensor, software, and autonomous technologies. Its D&E business, which generated around €500 million in revenue in 2024, includes flagship products like MSC Software, a pioneer in computer-aided engineering (CAE) simulations. MSC’s tools excel in structural analysis, multibody dynamics, and acoustics, serving industries from automotive to aerospace. By divesting this unit, Hexagon aims to streamline its portfolio, focusing on its core strengths in metrology, geospatial software, and manufacturing intelligence. The sale aligns with Hexagon’s ongoing efforts to optimize operations, as stated in their press release, allowing them to invest more heavily in high-growth areas like smart manufacturing and sustainability solutions.

Under the terms of the agreement, Cadence will fund 70% of the purchase price in cash and the remaining 30% in stock, providing Hexagon with a stake in Cadence’s future success. This hybrid payment structure not only mitigates immediate cash outflow for Cadence but also signals confidence in the synergies ahead. The acquisition is poised to accelerate Cadence’s Intelligent System Design strategy, which emphasizes AI-driven workflows for faster, more efficient product development. By incorporating Hexagon’s mechanical simulation expertise, Cadence can offer end-to-end solutions for multidomain systems—think electric vehicles where battery electronics must integrate flawlessly with structural components, or drones requiring precise aerodynamics alongside embedded software.

The strategic implications are profound. In an era where products are increasingly “smart” and interconnected, the boundaries between hardware disciplines are blurring. Cadence’s CEO, Anirudh Devgan, highlighted in the announcement that this move will “accelerate our expansion in physical AI and system design and analysis,” enabling customers to tackle unprecedented complexity in product engineering. For instance, automotive manufacturers could simulate vehicle crashes with integrated electronics behavior, reducing prototyping costs and time-to-market. Aerospace firms might optimize aircraft designs for fuel efficiency while ensuring electronic systems withstand vibrations. This integration is particularly timely amid the rise of Industry 4.0, where digital twins—virtual replicas of physical assets—demand sophisticated multiphysics modeling.

Market analysts have reacted positively, viewing the deal as a catalyst for Cadence’s growth in non-traditional EDA sectors. Shares of Cadence rose modestly in after-hours trading following the announcement, reflecting investor optimism about revenue diversification. Hexagon’s stock also saw gains, as the divestiture is seen as unlocking value for shareholders. However, challenges loom: integrating Hexagon’s 2,000+ employees and ensuring cultural alignment will be key. Regulatory approvals, especially in Europe and the U.S., could pose hurdles given the deal’s size and the strategic importance of simulation technologies in defense applications.

Looking ahead, this acquisition could reshape the CAE landscape, intensifying competition with rivals like Ansys (recently acquired by Synopsys) and Siemens Digital Industries Software. Cadence’s enhanced portfolio might spur innovation in emerging fields like sustainable energy systems and biomedical devices, where precise engineering simulations are critical.

Bottom line: Cadence’s acquisition of Hexagon’s D&E business is more than a financial transaction—it’s a visionary step toward unified engineering platforms in a hyper-connected world. As industries demand faster iteration and greater reliability, this union promises to deliver tools that bridge electronic and mechanical worlds, fostering breakthroughs that could define the next decade of technological advancement.

Also Read:

Cocotb for Verification. Innovation in Verification

A Big Step Forward to Limit AI Power Demand

Streamlining Functional Verification for Multi-Die and Chiplet Designs


TSMC 2025 Update: Riding the AI Wave Amid Global Expansion
by Daniel Nenni on 09-05-2025 at 6:00 am


Welcome to the second half of a very exciting year in semiconductors. While Intel and Samsung Foundry have made quite a few headlines, TSMC continues to execute flawlessly at 3nm and 2nm. With the TSMC OIP Ecosystem Forums starting later this month let’s take a look at how we got to where we are today.

The TSMC OIP Ecosystem Forum is the second of TSMC’s two annual event series. At the TSMC Technology Symposium last April we were told that N2 design starts were exceeding those of N3, which was quite a statement. From what I have learned from the ecosystem over the last few months, that may have been an understatement. TSMC N2 is absolutely dominating the foundry business, and for good reasons, the most important of which is trust. TSMC’s market share at 3nm and 2nm is upwards of 90%, while its total market share is now between 60% and 70%. Simply amazing, but well deserved.

Financially, TSMC has delivered stellar results. In the second quarter of 2025, revenue reached a record $30.1 billion, marking a 44% year-over-year increase. Gross margins climbed to 59%, up 5 percentage points from the previous year, reflecting strong pricing power and efficiency gains from previous nodes. Net profit surged, with earnings per share hitting NT$15.36, beating analyst forecasts. For the first half of the year, total sales hit $60.5 billion, a 40% jump from 2024. Buoyed by this momentum, TSMC raised its full-year 2025 revenue growth guidance to approximately 30%, up from 25%. Personally I believe TSMC is once again being conservative. My guess would be 35% revenue growth but that depends on China business (Nvidia) which seems to be constrained.  Either way it will be another great year for TSMC.

My optimism stems from unrelenting AI-related demand with revenue from AI accelerators expected to double in 2025. TSMC capital expenditures for the year are projected at $38 billion to $42 billion, focusing on advanced process technologies and overall capacity expansion.

On the technology front, TSMC is still pushing boundaries. The company plans to start high-volume manufacturing of its N2 chips in the fourth quarter of 2025, earlier than anticipated, which suggests yields are higher than expected. Trial production at its Kaohsiung and Hsinchu fabs has already begun, with Apple, Nvidia, AMD, Qualcomm, and MediaTek leading customer demand. Looking further ahead, TSMC broke ground on a 1.4nm facility in Taiwan, with mass production targeted for the second half of 2028, promising 15% performance gains and 30% power savings. Additionally, advanced packaging (CoWoS) capacity has already doubled to 75,000 WPM, six months ahead of schedule, through partnerships with ASE and Amkor.

Expansion remains a key strategy amid geopolitical tensions. TSMC’s Arizona subsidiary turned profitable in the first half of 2025, reporting a $150.1 million net profit after initial losses. The company is also advancing fabs in Europe and Japan to strengthen supply chains. In Taiwan, new facilities like Fab 25 in the Central Taiwan Science Park will house 1.4nm and 1nm plants with trial production starting in 2027. A new Taiwanese law ensures cutting-edge tech stays on the island, keeping overseas fabs one generation (N-1) behind. This move addresses U.S.-China trade frictions and potential tariffs, which TSMC has flagged as potential risks.

Despite headwinds like currency fluctuations and rising operational costs, TSMC’s outlook is bullish. Third-quarter revenue is forecast at $31.8 billion to $33 billion, supported by AI and high-performance computing demand. Monthly revenues through June 2025 showed consistent growth, with June alone up 39.6% year-over-year. Analysts maintain a “Buy” rating, citing sustained AI momentum, and even Jensen Huang (Nvidia CEO) has a “Buy” rating of sorts on TSMC (“anybody who wants to buy TSMC stock is a very smart person”). Never in the 30+ year history of Nvidia and TSMC have I seen Jensen so complimentary of TSMC, which tells you how closely the two companies are working together.

From 2025 to 2030, TSMC’s investments will reshape sectors like AI, automotive, and consumer electronics, reinforcing its ecosystem for a competitive landscape. As my semiconductor bellwether, TSMC’s trajectory signals a thriving semiconductor industry, though vigilance on geopolitics remains essential. Dr. C.C. Wei has proven to be a politically savvy leader, so I have no concerns here at this point in time. Go TSMC and go semiconductor industry: $1 trillion by 2030, absolutely!

Also Read:

TSMC Describes Technology Innovation Beyond A14

TSMC Brings Packaging Center Stage with Silicon

TSMC 2025 Technical Symposium Briefing


Alchip’s 3DIC Test Chip: A Leap Forward for AI and HPC Innovation
by Daniel Nenni on 09-04-2025 at 10:00 am


Today Alchip Technologies, a Taipei-based leader in high-performance and AI computing ASICs, announced a significant milestone with the successful tape-out of its 3D IC test chip. This achievement not only validates Alchip’s advanced 3D IC ecosystem but also positions the company as a frontrunner in the rapidly evolving field of three-dimensional integrated circuits. By proving the readiness of an integrated 3DIC solution, including critical components like CPU/NPU cores, UCIe and PCIe PHY, Lite-IO infrastructure, and third-party IP, Alchip is paving the way for faster, more efficient designs in artificial intelligence (AI) and high-performance computing (HPC).

The importance of this test chip lies in its validation of Alchip’s comprehensive 3DIC ecosystem, which is tailored to meet the demands of complex ASIC designs. Unlike traditional 2D ICs, 3DICs involve stacking multiple dies vertically, introducing unique challenges in power density, thermal dissipation, and interconnectivity. Alchip’s test chip, integrating a 3nm top die and a 5nm base die using TSMC’s SoIC-X packaging technology, was designed to tackle these challenges head-on. The successful tape-out confirms the company’s ability to deliver a robust design flow, die-to-die IP, and interconnect solutions, ensuring accuracy and reduced time-to-market for AI and HPC developers.

“What these test chip results should tell hyperscalers and other AI/HPC designers is that they don’t have to settle for less-than-design-optimal advanced packaging solutions. We’ve just proven an advanced packaging ecosystem that can meet the most precise 3DIC requirements,” explained Dr. Dave Hwang, Sr. Vice President and North America General Manager, Alchip Technologies.

A key aspect of the test chip is its demonstration of advanced 3DIC capabilities. The top die incorporates a CPU, NPU core, and high-power logic, while the base die features a network-on-chip, L3 cache, and interface IP. These components are connected via APLink-3D Lite IO, enabling seamless communication between dies. The tape-out validated critical features, including cross-die synchronous IP, design-for-test strategies with redundancy and repair, signal and power integrity analysis, thermal and mechanical simulations, and 3D physical design implementation. These validations are crucial, as 3DIC designs require precise coordination to ensure performance and reliability across stacked dies.

Alchip’s achievement is particularly noteworthy due to the complexity of 3DIC design. Unlike 2D counterparts, 3DICs demand new approaches to physical and logical integration. Alchip updated its electronic design automation (EDA) tools and methodologies to support co-design across both dies, ensuring electrical, timing, and mechanical integrity. The company also tested custom interface IP tailored for 3DICs, addressing interoperability challenges for protocols like UCIe and PCIe. A standout feature is the 3DI/O timing, which limits die-to-die latency to just 40 picoseconds. This low latency, combined with a fully integrated 3D clocking structure, minimizes timing skew and ensures coherent operation across layers, a critical factor for high-performance applications.

The test chip program also highlighted Alchip’s collaborative approach. Four IP vendors participated, with two providing proven hard macros and two others testing new IP on the platform. An EDA flow vendor ensured tool and methodology readiness, reinforcing the ecosystem’s strength. This collaboration underscores the scarcity of 3DIC-proven IP, making Alchip’s validated solutions highly valuable for developers seeking reliable components for next-generation ASICs.

The implications of this tape-out extend beyond the test chip itself. By stress-testing power density and thermal dissipation, Alchip has gathered insights that will inform future 3DIC designs, including those using TSMC N2, N3 and N5 stacked chiplets.

Bottom line: Alchip’s 3DIC test chip tape-out marks a pivotal moment for the semiconductor industry. By validating its 3DIC ecosystem, Alchip not only demonstrates technical prowess but also provides a reliable pathway for AI and HPC innovation. This milestone sets a new standard for 3DIC design, promising faster, more efficient, and scalable solutions for the future of computing.

The official news release is here.

About Alchip

Alchip Technologies Ltd., founded in 2003 and headquartered in Taipei, Taiwan, is a leading global High-Performance Computing and AI infrastructure ASIC provider of IC and packaging design, and production services for system companies developing complex and high-volume ASICs and SoCs. Alchip provides faster time-to-market and cost-effective solutions for SoC design at mainstream and advanced process technology. Alchip has built its reputation as a high-performance ASIC leader through its advanced 2.5D/3D CoWoS packaging, chiplet design, and manufacturing management. Customers include global leaders in artificial intelligence, high-performance computing, supercomputing, mobile communications, entertainment devices, networking equipment, and other electronic product categories.

Also Read:

Alchip Launches 2nm Design Platform for HPC and AI ASICs, Eyes TSMC N2 and A16 Roadmap

Alchip’s Technology and Global Talent Strategy Deliver Record Growth

Alchip is Paving the Way to Future 3D Design Innovation