Chiplet integration solutions from Keysight at Chiplet Summit
by Don Dingee on 01-15-2025 at 10:00 am

Chiplets continue gaining momentum, fueled in large part by applications for AI and 5G/6G RFICs. Keysight has a strong presence at this year’s Chiplet Summit in Santa Clara, which includes Simon Rance in a super panel discussing “Chiplets: The Key to Solving the AI Energy Gap” and Nilesh Kamdar with a keynote on “Using Design-to-Test Workflows and Managing the IP Lifecycle in Chiplet Designs”, as well as several technical talks. Keysight previewed some of its material for the conference, focusing on three aspects of its chiplet integration solutions: system-level electrical layer analysis, a compliance testing strategy to characterize golden die and die-to-die channels, and engineering lifecycle management for chiplet design.

System-level analysis of signal integrity crucial for chiplets

Chiplet designers are facing many challenges in creating die-to-die interconnects. Do teams align with a specification, such as Universal Chiplet Interconnect Express (UCIe), Bunch of Wires (BoW), or others? Do they follow the details of a chosen specification closely, seeking to capture the economic benefits of interoperability, or do they customize some features, tuning interconnects to optimize performance and power consumption? In any chiplet design scenario, system-level chiplet signal integrity analysis determines a design’s outcome. Packaging for 3D heterogeneous interconnect (3DHI), using standard or advanced packages specified in UCIe, becomes more than a mechanical convenience, affecting high-bandwidth signals.

Preventing the optimization of one signal metric at the expense of others calls for detailed, system-level electrical layer analysis that evaluates all metrics simultaneously. Keysight’s Chiplet PHY Designer, an extension to Keysight Advanced Design System (ADS), provides robust system-level signal integrity analysis for chiplet interconnects, including UCIe and BoW. It updates its analysis incrementally as schematic and layout details change.

Tim Wang Lee, Ph.D., Signal Integrity Application Scientist at Keysight, presents the topic “Fast Track Chiplet Integration with Streamlined UCIe Electrical Layer Analysis” in the Integration sessions. “UCIe specifies the voltage transfer function (VTF), forward clocking, eye mask equalization, loss, and crosstalk level,” he says. “Our software does the work by setting up TX and RX termination equalization and analyzing and visualizing VTF crosstalk and loss masks for different data rates.”

Dr. Tim also says the visualization tools in Chiplet PHY Designer help spot the root cause of issues, such as insufficient trace spacing creating coupling and excessive crosstalk or channel lengths contributing to losses. He emphasizes that while Chiplet PHY Designer is UCIe-aware, it also handles analysis for BoW or customized die-to-die interconnect links.
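To make the mask-checking idea concrete, below is a minimal Python sketch that compares a channel’s insertion loss against a loss mask for a given data rate. It is illustrative only: the mask corner points and channel model are hypothetical placeholders rather than UCIe specification values, and this is not Chiplet PHY Designer’s API.

```python
# Minimal sketch: flag frequencies where channel insertion loss exceeds a loss mask.
# The mask corners and channel model below are hypothetical placeholders, not UCIe values.
import numpy as np

def check_loss_mask(freq_ghz, loss_db, mask_points):
    """Return the frequencies (GHz) where the channel loss is worse than the mask."""
    mask_f, mask_limit = zip(*mask_points)
    limit = np.interp(freq_ghz, mask_f, mask_limit)   # interpolate mask onto the grid
    return freq_ghz[loss_db < limit]                  # loss is negative dB; lower = worse

# Hypothetical channel response and mask, for illustration only
freq = np.linspace(0.1, 16.0, 200)                    # GHz
channel_loss = -0.8 * freq - 0.05 * freq**2           # dB, toy roll-off model
mask = [(0.1, -2.0), (8.0, -6.0), (16.0, -10.0)]      # (GHz, dB) placeholder corners

violations = check_loss_mask(freq, channel_loss, mask)
print("mask violations at", len(violations), "frequency points")
```

In a real flow the loss curve would come from extracted S-parameters of the package or interposer channel, and the mask limits would follow from the targeted data rate.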

Die-to-die interconnect test and characterization

UCIe leans heavily upon work done for its board-level predecessor, PCIe, with modifications recognizing its die-to-die context. However, there is one significant difference – compliance testing. PCIe is famous for its plugfests, where teams can take chips designed with mature PCIe IP blocks, mount them on circuit boards, install them in systems, and stage physical and protocol measurements using signal generators, oscilloscopes, and bit error rate testers. UCIe presents a different challenge since fabricating a chiplet prototype is much more expensive than making a board prototype, and limited test point visibility can make probing infeasible.

Pedro Merlo, Manager of Strategic Planning at Keysight, comes from the test instrumentation side of the house and has been focusing on die-to-die interconnects for the past two years. “Keysight is a proud contributor to UCIe and BoW, the two most prominent chiplet interconnect standards,” says Merlo. “The vision is to base compliance testing at the physical and protocol layers on a ‘golden’ die.” But does such a die exist, especially for customized link designs? For an open chiplet ecosystem to develop, at least three components – one die, one interconnect structure, and another die that completes the link – must be testable standalone and swappable for another test article.

Merlo’s tutorial in the Pre-Con D: Introduction to Die-to-Die Interfaces session shares some in-progress thinking where Keysight’s unified measurement science – equivalent methods between test hardware and EDA software – comes into play. “We can use our high-precision benchtop test equipment to characterize a chiplet’s built-in self-test (BIST) and measure transmitter amplitude, slew rate, jitter, skew, equalization, sensitivity, and more,” he begins. “Our objective is to test as much as possible to reduce the uncertainty, especially after packaging, when test points are no longer accessible.”

“We’re proposing extracting channel characteristics and factoring them into measurements made at the far end of the transmitter, then using those characteristics in a model for end-to-end tests,” Merlo continues. “The approach adds test points to a substrate, makes measurements there, then subtracts out channel effects to see what the signal looks like at the receiver microbump.”

Merlo points out that BIST results can tell designers that a link works but not how much margin it has or where a problem might be. Decomposing the BIST into two phases provides a method for examining links closely.

More importantly, decomposing the BIST paves the way for simulations with robust models to produce results equivalent to measurements. “By breaking the link into pieces and ensuring each one works and meets specifications, we get better information,” Merlo concludes.
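As a conceptual illustration of “subtracting out channel effects,” the sketch below divides a measured waveform’s spectrum by a known channel transfer function to estimate the signal at the far end. It assumes the channel response has already been extracted (for example, from S-parameters of the added test structures); the channel model here is a toy placeholder, and this is not Keysight’s measurement methodology.

```python
# Conceptual de-embedding sketch: remove a known channel response from a waveform
# measured at a substrate test point to estimate the signal at the receiver microbump.
import numpy as np

def de_embed(measured_waveform, channel_freq_response, eps=1e-3):
    """Divide out the channel transfer function in the frequency domain."""
    spectrum = np.fft.rfft(measured_waveform)
    corrected = spectrum / (channel_freq_response + eps)   # eps guards near-zero bins
    return np.fft.irfft(corrected, n=len(measured_waveform))

# Toy example: a step-like signal filtered by a placeholder low-pass "channel"
n = 1024
signal = np.repeat([0.0, 1.0], n // 2)
h = 1.0 / (1.0 + 1j * np.linspace(0, 8, n // 2 + 1))      # placeholder channel H(f)
measured = np.fft.irfft(np.fft.rfft(signal) * h, n=n)

recovered = de_embed(measured, h)
print("max error vs. original signal:", np.max(np.abs(recovered - signal)))
```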

Engineering lifecycle management for chiplet design

Considering both electrical and mechanical nuances, chiplet design presents a more significant enterprise opportunity for engineering lifecycle management (ELM) than SoC design. While today’s focus on chiplet design is internal, teams may soon be considering whether third-party chiplets are candidates for integration. There’s also reuse of chiplets to consider, and if an organization has more than one design team, there needs to be an easy way to store and share design information across teams.

“Think of chiplets as systems,” says Pedro Pires, ELM Solutions Engineer at Keysight. “Chiplet design carries system engineering connotations – there are die, interposers, packages, and artifacts around each component, some internally sourced and some procured from third parties.” Pires says a common chiplet design scenario will see teams starting with specifications, diving into a catalog with parameterized search and compare of components, and creating a bill of material (BOM) with built-in traceability of assets.

Metadata around chiplets in Keysight Design Data Management (formerly known as Cliosoft SOS) can include what versions exist, where they are in use, and their technical characteristics, including electrical, mechanical, and application programming interface (API) information, making it easy to highlight differences. “HUB can provide critical alerts such as release conflicts, where one bill of material contains different versions of a chiplet somewhere in its hierarchy, or notify designers of a newer version,” adds Pires.

There may be more steps for designers than just grabbing an asset from the catalog. Pires points out that selecting third-party chiplets may require team members to execute non-disclosure and licensing agreements. Keysight ELM implements workflows as a mechanism to automate approval and enforcement policies, with built-in activity reporting for internal and third-party audits. Beyond these customizable workflows, granular access controls can gate access to resources based on user characteristics, roles, or locations, which is essential for export control scenarios.

Keysight ELM introduces a data-agnostic methodology for managing the entire lifecycle of chiplets and offers creative ways for various roles beyond design teams to share and manage information. It integrates seamlessly with many chiplet design tools, including Keysight ADS and Cadence Virtuoso, via a REST API. It also connects with familiar tools like Jama, JIRA, Bugzilla, Confluence, and more.

Learn more at Chiplet Summit 2025

The complete technical presentations will reveal more details for Chiplet Summit 2025 conference attendees. In addition to the speaking slots on the program, attendees can see Keysight’s chiplet integration solutions at booth #307, including the presentation on ELM. For more information on Chiplet PHY Designer, Keysight Design Data and IP Management, and the Chiplet Summit program and registration, please visit the following:

Keysight W3650B Chiplet PHY Designer

Keysight Design Data and IP Management

Chiplet Summit 2025

Also Read:

GaN HEMT modeling with ANN parameters targets extensibility

Keysight EDA 2025 launches AI-enhanced design workflows

Webinar: When Failure in Silicon Is Not an Option


A Deep Dive into SoC Performance Analysis: What, Why, and How
by Lauro Rizzatti on 01-15-2025 at 6:00 am

Part 1 of 2 – Essential Performance Metrics to Validate SoC Performance Analysis

Part 1 provides an overview of the key performance metrics across three foundational blocks of System-on-Chip (SoC) designs that are vital for success in the rapidly evolving semiconductor industry and presents a holistic approach to optimize SoC performance, highlighting the need for balancing these metrics to meet the demands of cutting-edge applications. 

Prolog – SoC Performance Validation: Neglect It, Pay the Price!

In today’s technology-driven world, where electronics and software are deeply intertwined, the ability to estimate performance (as well as power consumption) ahead of tape-out has become a crucial factor in determining a product’s success or failure. Below are some real-world examples of failures that could have been avoided by pre-silicon performance validation:

  • A hardware bug in the coherent interconnect fabric of an Android smartphone chip slipped through into the delivered product, causing all caches to flush and forcing the Android system to reboot. This oversight led to a recall with significant financial losses for the developer.
  • A hidden firmware bug triggered sudden and random drops in datacenter utilization in a range of 10% to 15%, resulting in sizable financial losses.
  • An expert analysis of the October 22, 2023, GM Cruise autonomous vehicle accident in San Francisco revealed that the AD controller failed to detect a pedestrian lying on the asphalt after a hit-and-run accident. The failure occurred because latency exceeded the specified limits for detecting and responding to moving targets. As a result, GM suspended driverless operations of its Cruise vehicles for several months and incurred significant penalties.
  • After multiple years of unsuccessful attempts to design a high-performance GPU for a mobile chip, a major semiconductor company was forced to adopt a competitor’s solution, incurring substantial costs.

These costly malfunctions and/or missed target specifications could have been avoided through comprehensive pre-silicon performance analysis.

SoC Performance Metrics Critical for Success

Achieving performance targets in today’s cutting-edge SoC designs is critical to determining the success or failure of a product. When looking at design trends, in particular for AI, designs are running into the memory and interconnect walls as shown in the following figures.

Fig 1: Memory Wall (Source: AI and Memory Wall)
Fig 2: Interconnect Wall (Source: AI and Memory Wall)

For instance, when evaluating SSD storage, developers focus on two metrics: the read/write speed of the SSD and its total capacity. These figures are crucial for memory companies, especially as the volume of data being transferred continues to grow exponentially. The ability to quickly move data off the chip is vital. In the data center market, memory accounts for roughly 50% of resource usage, acquisition cost, and power consumption; that is, half of the investment goes into purchasing memory, keeping it supplied with sufficient power, and ensuring enough capacity to support parallel memory accesses. As a result, memory performance often becomes a bottleneck.

As another example, AI market performance hinges on the rapid processing of algorithms, which is largely determined by how quickly input data can be retrieved from memory and results offloaded back to memory. This is crucial for inference tasks, where real-time decisions depend on quickly retrieving input data and storing results. In automotive applications, for instance, swift data access is essential for split-second decisions like pedestrian detection. Conversely, the faster the data offloading occurs, the more efficient the data handling becomes, especially in training environments where large datasets must be constantly moved between cores and memory.

Regardless of the application, ultimately three key metrics define performance in SoC designs:

Latency

In SoC designs, latency refers to the time elapsed between the request for a data transfer or an operation and the delivery of the data or the completion of the operation.

In any SoC design, the architectures of three fundamental functional blocks determine the latency of the entire design:

  1. Memory Latency: The delay in accessing data in memory. Memory latency is influenced by memory type, memory hierarchy structure, and the clock cycles required for access, factors that are typically interdependent.
  2. Interconnect Latency: The time it takes for data to travel across the SoC’s fabric interconnecting one component to another. Critical attributes in an interconnect architecture include the number of hops, congestion dependency, and protocol overhead.
  3. Interface Protocols Latency: The delay in communicating with external devices through peripheral interfaces. Latency in interface protocols ranges widely, from several nanoseconds (PCIe) to microseconds (Ethernet).

High latency in data transfer or response time degrades system performance, especially in real-time or high-performance applications such as AI processing, automotive systems, real-time communications, and high-speed computing.
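As a back-of-the-envelope illustration of how these three contributions combine, the short sketch below converts cycle counts and per-hop delays into nanoseconds. All of the numbers are hypothetical and chosen only to show the arithmetic, not to describe any particular SoC.

```python
# Illustrative arithmetic only: combining the three latency contributions above.
# All values are hypothetical.
MEM_CLK_GHZ = 1.6          # memory controller clock
MEM_ACCESS_CYCLES = 28     # cycles to complete a DRAM read
HOPS = 4                   # interconnect hops between core and memory controller
PER_HOP_NS = 1.5           # per-hop fabric delay
INTERFACE_NS = 150.0       # e.g., an off-chip protocol round trip

memory_latency_ns = MEM_ACCESS_CYCLES / MEM_CLK_GHZ      # cycles / GHz -> ns
interconnect_latency_ns = HOPS * PER_HOP_NS
total_ns = memory_latency_ns + interconnect_latency_ns + INTERFACE_NS

print(f"memory: {memory_latency_ns:.1f} ns, fabric: {interconnect_latency_ns:.1f} ns, "
      f"total with interface: {total_ns:.1f} ns")
```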

Bandwidth

In SoC designs, bandwidth refers to the maximum data transfer rate between different components within a chip or between a chip and external devices.

As with latency, the architectures of the same three fundamental functional blocks determine the bandwidth of the entire design:

  1. Memory Bandwidth: The data transfer rate between processing units and memories. Measured in gigabytes per second (GB/s), it is influenced by factors such as memory type, bus width, and clock speed.
  2. Interconnect Bandwidth: The data transfer rate between various blocks and subsystems through interconnect fabric like buses or crossbars.
  3. Interface Protocols Bandwidth: The data transfer rate in the communication channels with external devices through peripheral interfaces.

Low bandwidth in an SoC design leads to data congestion. Conversely, high bandwidth improves performance, particularly for data-intensive tasks such as AI/ML processing. Achieving high bandwidth involves optimizing the architecture of memory subsystems, interconnects, and communication protocols within the SoC.

Latency optimization is often balanced with bandwidth optimizations to achieve optimal overall performance.
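The relationship between bus width, transfer rate, and peak bandwidth can be made concrete with a small calculation. The LPDDR5 and HBM3 figures below are commonly quoted peak per-channel and per-stack numbers, used here only to illustrate the formula rather than to characterize any specific product.

```python
# Peak memory bandwidth = (bus width in bits / 8) x transfer rate in GT/s.
def peak_bandwidth_gb_s(bus_width_bits: int, gigatransfers_per_s: float) -> float:
    return bus_width_bits / 8 * gigatransfers_per_s

# LPDDR5 channel: 6400 MT/s on a 64-bit bus -> ~51.2 GB/s
print("LPDDR5 x64 @ 6400 MT/s:", peak_bandwidth_gb_s(64, 6.4), "GB/s")

# HBM3 stack: 6.4 Gb/s per pin on a 1024-bit interface -> ~819.2 GB/s
print("HBM3 stack @ 6.4 Gb/s/pin:", peak_bandwidth_gb_s(1024, 6.4), "GB/s")
```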

Accuracy

In SoC designs, data transfer accuracy refers to the correctness and reliability of data as it moves between different components within the SoC or between the SoC and external devices.

Data transfer errors can be caused by several factors: congestion, overflow, underflow, incorrect handshakes, noise, interference, crosstalk, signal degradation, or electromagnetic interference (EMI) impacting signal integrity.

Inaccuracies in data transfer can lead to system failures, crashes, loss of data, and incorrect computations, which are especially critical in systems requiring high reliability, such as automotive, medical, or AI-based systems. Ensuring data transfer accuracy is a fundamental aspect of SoC design.

Holistic Approach to SoC Performance Optimization

In today’s highly competitive and demanding SoC design landscape, optimizing the three core attributes that define performance—latency, bandwidth, and accuracy—has become an absolute necessity. This optimization extends beyond hardware alone to include firmware as well. This is especially true for leading-edge SoC designs, from AI-driven systems to high-performance computing and real-time applications such as self-driving vehicles.

Achieving optimal performance in SoCs requires a holistic approach that balances hardware and software considerations.

Addressing Performance in Memory Architectures

Memory architecture is central to SoC performance, directly affecting both latency and bandwidth. Memory access speeds and capacity are critical for ensuring that processing cores are not starved of data, particularly in high-throughput applications.

Advanced memory architectures are designed to strike a balance between low-latency access and high-bandwidth memory operations, delivering both speed and capacity required for today’s demanding workloads. For example, LPDDR5 (Low Power Double Data Rate 5) and HBM3 (High Bandwidth Memory 3) represent cutting-edge DRAM technologies that have been engineered to maximize performance in power-constrained environments, such as mobile devices, as well as high-performance computing applications.

LPDDR5 offers improvements in power efficiency and data throughput, enabling mobile SoCs and embedded systems to access memory faster while consuming less power. Meanwhile, HBM3 delivers unparalleled bandwidth with stacked memory dies and a wide bus interface, making it ideal for high-performance applications like AI accelerators, GPUs, and data center workloads. By integrating memory closer to the processor and using wide memory buses, HBM reduces the distance that data must travel, minimizing latency while enabling vast amounts of data to be accessed concurrently, and ensures that multiple processing cores or accelerators can access data simultaneously without bottlenecks.

Shared memory architectures enable different processing units—such as CPUs, GPUs, and specialized accelerators—to access the same pool of data without duplicating it across separate memory spaces. This is especially important for heterogeneous computing environments where different types of processors collaborate on a task.

Cache coherency protocols in multi-core systems guarantee that data remains consistent across different cores accessing shared memory. Coherency protocols like MESI (Modified, Exclusive, Shared, Invalid) and MOESI (Modified, Owned, Exclusive, Shared, Invalid) are commonly used for this purpose, ensuring that when one core updates a piece of data, other cores working on the same data are immediately notified and updated. Storage interface standards such as NVMe (Non-Volatile Memory Express) and UFS (Universal Flash Storage), by contrast, govern how data moves to and from external storage rather than cache coherency.
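For readers unfamiliar with MESI, the sketch below encodes a simplified state-transition table for a single cache line, just to make the coherency idea concrete. Real implementations add transient states, write-back handling, and snoop or directory machinery that are omitted here.

```python
# Simplified MESI state transitions for one cache line (illustrative only).
MESI_TRANSITIONS = {
    # (current_state, event) -> next_state
    ("I", "local_read_no_sharers"):   "E",
    ("I", "local_read_with_sharers"): "S",
    ("I", "local_write"):             "M",
    ("E", "local_write"):             "M",
    ("E", "remote_read"):             "S",
    ("S", "local_write"):             "M",   # other sharers are invalidated first
    ("S", "remote_write"):            "I",
    ("M", "remote_read"):             "S",   # dirty data is written back / forwarded
    ("M", "remote_write"):            "I",
}

def next_state(state: str, event: str) -> str:
    return MESI_TRANSITIONS.get((state, event), state)

# One core reads a line nobody else holds, writes it, then another core reads it
state = "I"
for event in ("local_read_no_sharers", "local_write", "remote_read"):
    state = next_state(state, event)
    print(event, "->", state)        # E, then M, then S
```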

Addressing Performance in Interface Protocols Architectures

Interface protocols manage the data traffic between the SoC and its external world, playing a vital role in maintaining SoC performance by directly influencing both latency and bandwidth.

High-performance interface protocols like PCIe and Ethernet are designed to maximize data transfer rates between the SoC and external devices, preventing data congestion and ensuring that high-performance applications can consistently meet performance requirements without being throttled by communication delays.

Emerging standards like CXL (Compute Express Link) and Infinity Fabric are designed to enhance the interconnectivity between heterogeneous computing elements. CXL, for instance, enables high-speed communication between CPUs, GPUs, accelerators, and memory, improving not only data bandwidth but also interconnection efficiency. Infinity Fabric, used extensively in AMD architectures, provides a cohesive framework for connecting CPUs and GPUs, ensuring high-performance data sharing and coordination across computing resources.

In the AI acceleration space, Nvidia currently dominates in part due to its superior interface protocols, such as InfiniBand and NVLink. InfiniBand, known for its low-latency, high-bandwidth performance, is widely used in data centers and high-performance computing (HPC) environments. Nvidia’s NVLink, a proprietary protocol, takes data transfer to the next level by achieving rates of up to 448 gigabits per second. This allows for fast data movement between processors, memory, and accelerators, which is essential for training complex AI models and running real-time inference tasks.

Addressing Performance in Interconnect Networks

Interconnect networks play a pivotal role in SoC designs, acting as the highways that transport data between different components. As SoCs become more complex, with multiple cores, accelerators, and I/O components, the interconnect architecture must be capable of supporting massive parallelism and enabling efficient workload distribution across multiple processors and even distributed systems.

To achieve optimal performance, the interconnect must be designed with throughput maximization and low-latency communication in mind. As data movement becomes increasingly complex, the interconnect network must handle not only the sheer volume of data but also minimize bottlenecks and contention points that can slow down performance.

High-performance interconnect architectures such as the network-on-chip (NoC) and the Advanced Microcontroller Bus Architecture (AMBA) have been developed to meet these demands, reducing contention and minimizing communication delays between cores, memory, and peripheral components.

NoCs are designed to scale with the growing complexity of SoCs, offering high levels of parallelism and flexible routing to reduce contention and enable a modular design approach in which multiple components can communicate simultaneously without overloading shared buses or memory channels, thereby preventing data congestion and minimizing latency.

Similarly, AMBA (Advanced Microcontroller Bus Architecture) has become a standard for connecting processors, memory, and peripherals within an SoC. By incorporating advanced features such as burst transfers, multiple data channels, and arbitration mechanisms, AMBA helps reduce communication delays and ensures that data is routed efficiently across the SoC.

In addition to NoC and AMBA, newer interconnect solutions are emerging to address the growing demand for more advanced performance optimization. Coherent interconnects, such as Arm’s Coherent Mesh Network (CMN), allow multiple processors to share data seamlessly and maintain cache coherence across cores, reducing the need for redundant data transfers and improving overall system efficiency.

Addressing Performance in Firmware

Performance tuning goes beyond optimizing the three essential hardware blocks, and it must also encompass the lower layers of the software stack, including bare-metal software and firmware. These software layers are integral to the system’s overall performance, as they interact with the hardware in a tightly coupled, symbiotic relationship. Bare-metal software and firmware act as the interfaces between the hardware and higher-level software applications, enabling efficient resource management, power control, and hardware-specific optimizations. Fine-tuning these layers is crucial because any inefficiencies or bottlenecks at this level can significantly hinder the performance of the entire system, regardless of how well-optimized the hardware may be.

One of the key benefits of firmware optimization is its ability to unlock performance gains without requiring changes to the physical hardware. For instance, firmware updates can be deployed to improve resource allocation, reduce latency, or enhance power efficiency, leading to significant performance improvements with minimal disruption to the system. This is especially critical in embedded systems, IoT devices, and SSDs, where firmware governs how efficiently data is processed and managed.

Beyond storage devices, firmware optimization is beneficial in domains such as networking equipment, GPUs, and embedded systems.

Conclusions

SoC designs are the backbone of numerous technologies, from smartphones and autonomous vehicles to data centers and IoT devices, each requiring precision-tuned performance to function optimally. Falling short of performance goals can lead to higher costs, delayed time-to-market, and compromised product quality, ultimately affecting a company’s competitiveness in the market. Conversely, hitting these targets means faster, more reliable products that stand out in a crowded tech landscape.

As design cycles shorten and market pressures intensify, achieving performance metrics is no longer optional—it’s a necessity for any successful SoC project.

Read Part 2 of this series – Performance Validation Across Hardware Blocks and Firmware in SoC Designs

Also Read:

A Deep Dive into SoC Performance Analysis: Optimizing SoC Design Performance Via Hardware-Assisted Verification Platforms

Synopsys Brings Multi-Die Integration Closer with its 3DIO IP Solution and 3DIC Tools

Enhancing System Reliability with Digital Twins and Silicon Lifecycle Management (SLM)

A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design


2025 Outlook with Dr. Chouki Aktouf of Innova
by Daniel Nenni on 01-14-2025 at 10:00 am

Chouki Aktouf is Founder & CEO of Defacto Technologies and Co-Founder of Innova Advanced Technologies. Prior to founding Defacto in 2003, Dr. Aktouf was an associate professor of Computer Science at the University of Grenoble, France, and leader of a dependability research group. He holds a PhD in Electrical Engineering from Grenoble University.

Tell us a little bit about yourself and your company. 

Innova is a startup with a unique software offering to manage design flows and, more generally, design resources (EDA tools, computing servers, IP cores, etc.). Innova’s PDM tool helps predict, manage, and report on design resources, targeting not only cost but also eco-design compliance.

What was the most exciting high point of 2024 for your company? 

Innova validated its unique methodology, showing how procurement and design teams can benefit from its PDM software to predict EDA tool license and computing server needs for new projects, and how to track and optimize design resource access by users.

What was the biggest challenge your company faced in 2024? 

In 2024, the challenge for the startup was to convince first users that Innova is a real alternative to old and traditional design flow management tools, with far greater capabilities, including customization, at a much lower cost.

How is your company’s work addressing this biggest challenge? 

We work closely with leading R&D teams in France and across Europe to help solve technical challenges.

What do you think the biggest growth area for 2025 will be, and why?

We believe AI-based EDA will be the main topic for the coming year.

How is your company’s work addressing this growth? 

For several years, the company has engaged with leading research labs in Europe to work closely on advanced AI-based technologies for EDA.

What conferences did you attend in 2024 and how was the traffic?

We were a key sponsor of DSD/Euromicro in Paris in August 2024, where we presented our new methodology around eco-design and sustainability analysis. It is a small conference compared to DAC, but traffic was fairly good. We were also invited to the Defacto Technologies booth at DAC, where we were happy to present our technology, and the traffic was really good.

Will you attend conferences in 2025? Same or more?

Of course, since we are based in Grenoble, we’ll be attending DATE. Defacto renewed its invitation to have us on their booth, so we will also be present at DAC in 2025.

How do customers engage with your company?

The best way is to contact us through our website (https://www.innova-advancedtech.com/formulaire-de-contact). We also have an evaluation package that can be sent anytime. The installation is fast, and we provide close support from our AEs to optimize the use of our solution and enable users to see the benefits we can bring.

Also Read:

WEBINAR: Reconcile Design Cost Reduction & Eco-design Criteria for Complex Chip Design Projects

Innova at the 2024 Design Automation Conference

INNOVA PDM, a New Era for Planning and Tracking Chip Design Resources is Born


WEBINAR: Reconcile Design Cost Reduction & Eco-design Criteria for Complex Chip Design Projects
by Daniel Nenni on 01-14-2025 at 6:00 am

As chip design complexity keeps increasing, the challenge of managing costs becomes a pressing concern for companies of all sizes. Efficient resource management is emerging as a critical lever for controlling design expenses and ensuring project success.

The chip design market increasingly demands automated solutions for resource prediction, planning, and analysis. Among these, AI-based technologies hold great promise for transforming resource management, enabling companies to make data-driven decisions that optimize their processes.

INNOVA will explore these topics in an upcoming webinar, “How to Track and Predict Design Resources for Complex Chip Design Projects by Including Jointly Cost and Sustainability,” on January 21st at 10:00 AM PST.

The Shift to Cloud-Based Computing: Is It Predictable?

Modern chip design trends show an increasing reliance on cloud-based computing servers. Yet, a vital question arises: Can companies accurately predict when to transition from on-premise to cloud-based resources?

INNOVA provides a clear, AI-powered answer through its innovative Project Design Manager (PDM) tool. This solution simplifies three essential steps in resource management:

  1. Model Training – Using historical data to create predictive models.
  2. Model Selection – Identifying the most suitable model for a specific context.
  3. Resource Time Prediction – Forecasting CPU, memory, and disk requirements with precision.

With its robust tracking capabilities, INNOVA’s PDM monitors the usage of critical resources such as EDA tools, servers, libraries, and engineering assets. It seamlessly integrates with existing IT environments and interoperates with standard project, license, and server management tools, ensuring secure and effective operations.

Streamlined Implementation and AI-Driven Predictions

Once installed, INNOVA’s PDM makes resource prediction straightforward. Its intuitive interface allows users—even those without deep AI expertise—to select appropriate ML models, execute predictions, and generate comprehensive reports. These reports compare real-world data with predictions, enabling teams to make informed adjustments to their resource strategies.
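As a rough illustration of such a train-select-predict workflow, the sketch below uses scikit-learn on made-up project data. It is not INNOVA’s PDM implementation; the feature names, data, and candidate models are hypothetical placeholders meant only to show the shape of the three steps.

```python
# Hypothetical sketch of a train / select / predict loop for design-resource forecasting.
# Not INNOVA's implementation; data and features are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Past projects: [design size (Mgates), engineers, weeks to tapeout]
X = np.array([[5, 10, 40], [12, 25, 52], [8, 15, 45],
              [20, 40, 60], [3, 6, 30], [15, 30, 55]])
y = np.array([120, 340, 190, 520, 70, 410])     # peak concurrent EDA licenses observed

# Step 1: model training, and Step 2: model selection via cross-validation
candidates = {"linear": LinearRegression(),
              "forest": RandomForestRegressor(n_estimators=50, random_state=0)}
scores = {name: cross_val_score(model, X, y, cv=3,
                                scoring="neg_mean_absolute_error").mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
best_model = candidates[best_name].fit(X, y)

# Step 3: resource prediction for a new project
new_project = np.array([[10, 20, 48]])
print(best_name, "predicts peak licenses:", round(float(best_model.predict(new_project)[0])))
```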

Unified Project and Design Management for SoC Design

INNOVA’s Project and Design Management (PDM) platform combines project management, design flows, and resource optimization into a single, unified software environment. Designed for multi-user accessibility, PDM is suited to design project managers, engineers, purchasing teams, and HR departments. Key features include:

  • Scalability and Integration: Easily interfaces with existing information systems and software tools, ensuring consistent data throughout the project lifecycle.
  • Real-Time Synchronization: Keeps design data and flows up to date, offering traceability of resource usage.
  • Interoperability: Bridges software and hardware needs, managing both design licenses and computational servers effectively.

By offering real-time insights and seamless compatibility with existing tools, PDM simplifies the complexities of managing design entities such as projects, data, servers, and licenses.

Driving Sustainability in Chip Design

INNOVA extends its value by integrating sustainability metrics into the design process. PDM evaluates design configurations—encompassing workflows, resources, and power consumption—to ensure eco-compliance. Its automated measures enable users to identify configurations that fulfill sustainability criteria, providing a clear differentiation between eco-friendly and less efficient options.

Through these capabilities, INNOVA empowers organizations to reduce environmental impact while optimizing resource allocation, ensuring that modern chip designs are not only innovative but also sustainable.

Conclusion

INNOVA’s PDM represents a revolutionary step forward in managing the complexity of chip design projects. By combining AI-driven predictions with unified project management and sustainability tools, it addresses the critical challenges of cost reduction, resource optimization, and environmental compliance. With INNOVA, design teams can confidently navigate the demands of modern chip development while achieving their strategic goals.

To explore these advancements further, join INNOVA’s upcoming webinar:

“How to Track and Predict Design Resources for Complex Chip Design Projects by Including Jointly Cost and Sustainability.”

On January 21st (10:00AM PST).

Don’t miss this opportunity to gain valuable insights into sustainable chip design practices. The webinar is held in partnership with SemiWiki and INNOVA.

Register now and the replay will be sent to you if you are not able to attend live.

Also Read:

2025 Outlook with Dr. Chouki Aktouf of Innova

Build a 100% Python-based Design environment for Large SoC Designs

Defacto Technologies and ARM, Joint SoC Flow at #61DAC


Averting Hacks of PCIe® Transport using CMA/SPDM and Advanced Cryptographic Techniques
by Kalar Rajendiran on 01-13-2025 at 10:00 am

Figure: CMA/SPDM flow for establishing a secure connection

In today’s digital landscape, data security has become an indispensable feature for any data transfer protocol, including Peripheral Component Interconnect Express (PCIe). With the rising frequency and sophistication of digital attacks, ensuring data integrity, confidentiality, and authenticity during PCIe transport is crucial. To address these concerns, technologies like the Security Protocol and Data Model (SPDM) have emerged as key enablers for secure communication. Component Measurement and Authentication (CMA) defines how SPDM is applied to PCIe systems. By employing these frameworks alongside advanced cryptographic techniques, such as elliptic curve cryptography (ECC), PCIe systems can safeguard sensitive data against potential threats. Siemens EDA recently published a whitepaper on this very topic.

CMA and SPDM: Foundations of PCIe Security

CMA and SPDM play a vital role in fortifying PCIe connections. Together, they establish secure sessions, authenticate communication endpoints, and facilitate encrypted data exchanges. SPDM achieves these goals by defining a series of messages that enable secure connections between devices. These messages negotiate protocol versions, advertise device capabilities, and determine supported cryptographic algorithms. Handshake secrets are generated using hash functions such as HMAC and HKDF, which are critical for encrypting and decrypting communication. After the successful exchange of SPDM requests, a secure session is established.

Symmetric vs. Asymmetric SPDM Flows

Symmetric Encryption: This method relies on a Pre-Shared Key (PSK) known to both parties before initiating communication.  It is computationally efficient but requires secure key distribution beforehand.

Asymmetric Encryption: Public/private key pairs are used to eliminate the need for pre-shared keys. This approach enables stronger authentication mechanisms, such as digital signatures, ensuring that communication endpoints are verified.

SPDM supports both symmetric and asymmetric encryption flows to establish secure connections. While symmetric encryption is faster and less resource-intensive, asymmetric encryption provides superior security and solves key distribution challenges.

Key Generation and Authentication Techniques

Key generation is an essential part of the SPDM flow, with the Diffie-Hellman Key Exchange (DHE) algorithm playing a central role. The algorithm facilitates secure key exchanges by leveraging shared secrets generated during the handshake process.

Elliptic Curve Cryptography (ECC) enhances the DHE algorithm by performing complex operations on elliptic curves, enabling faster and more secure key generation. Authentication within SPDM is achieved through digital signatures, which validate the origin and integrity of transmitted data. By combining message hashing with encryption using a private key, digital signatures ensure that only authorized entities can participate in communication.
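The sketch below shows the primitives named in this section working together: an ephemeral ECDH exchange on the P-256 curve, HKDF-based derivation of a handshake secret, and an ECDSA signature over a transcript, using the generic Python cryptography package. It is not an SPDM or CMA implementation and omits the protocol’s message formats, certificate chains, and sequencing.

```python
# Illustrative use of ECDHE, HKDF, and ECDSA with the 'cryptography' package.
# Not an SPDM/CMA implementation; transcript contents are placeholders.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Ephemeral ECDH: each endpoint generates a key pair and exchanges public keys
requester_eph = ec.generate_private_key(ec.SECP256R1())
responder_eph = ec.generate_private_key(ec.SECP256R1())
shared_secret = requester_eph.exchange(ec.ECDH(), responder_eph.public_key())

# Derive a handshake secret from the shared secret (HKDF with SHA-256)
handshake_secret = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                        info=b"illustrative handshake").derive(shared_secret)

# Authentication: the responder signs a transcript with its identity key,
# and the requester verifies it (verify() raises if the signature is invalid)
responder_identity = ec.generate_private_key(ec.SECP256R1())
transcript = b"negotiated versions, capabilities, and algorithms ..."
signature = responder_identity.sign(transcript, ec.ECDSA(hashes.SHA256()))
responder_identity.public_key().verify(signature, transcript, ec.ECDSA(hashes.SHA256()))
print("signature verified; handshake secret length:", len(handshake_secret), "bytes")
```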

Advantages of Elliptic Curve Cryptography

Elliptic curve cryptography has gained prominence due to its ability to deliver equivalent security to traditional algorithms like RSA while requiring significantly smaller key sizes. This efficiency makes ECC particularly suited for resource-constrained environments like PCIe systems. For example, an ECC 256-bit key provides the same level of security as a 3072-bit RSA key, reducing computational overhead and improving performance. The smaller key sizes also simplify key management and accelerate cryptographic operations, making ECC an attractive choice for modern PCIe security.

Strengthening Security with ECC Algorithms

Elliptic curve cryptography further strengthens the security of PCIe transport by offering computational advantages over conventional methods. Its reliance on solving complex elliptic curve equations makes it resistant to cryptanalysis while reducing processing requirements. This efficiency allows for faster encryption and decryption, as well as quicker digital signature generation. Additionally, ECC’s smaller key sizes make it easier to maintain and manage cryptographic keys, ensuring seamless integration into PCIe systems.

Verification with Siemens VIP for PCIe

To ensure the successful implementation of CMA/SPDM, Siemens Verification IP (VIP) for PCIe provides a comprehensive framework for design verification. This solution is fully compliant with CMA Revision 1.1 and SPDM Version 1.3.0, offering robust testing capabilities for secure PCIe communication. Siemens VIP supports the generation of SPDM messages required to establish secure connections, along with APIs that enable flexible stimulus generation. Users can modify fields to create diverse test cases, covering both positive and negative scenarios.

Error injection and debugging are additional strengths of Siemens VIP, allowing designers to simulate fault conditions and analyze system behavior. The solution also supports both symmetric and asymmetric encryption flows, enabling a wide range of testing scenarios. Algorithms such as secp256r1 and secp384r1 are supported for Diffie-Hellman key generation, while digital signature algorithms like TPM_ALG_ECDSA_ECC_NIST_P256 ensure robust authentication. Moreover, Siemens VIP accommodates various device configurations, making it adaptable to the diverse capabilities advertised by different PCIe components.

Summary

CMA and SPDM provide a robust framework for PCIe security, enabling encrypted communication and authentication between devices. The integration of advanced cryptographic techniques, such as ECC, enhances these protocols by offering efficient and secure key generation, digital signatures, and encryption. Siemens Verification IP for PCIe ensures compliance with these security standards, offering extensive testing and debugging capabilities. Together, these technologies establish a new benchmark for PCIe transport security, protecting sensitive data against emerging threats in the digital age.

The whitepaper can be accessed here.

Also Read:

Reset Domain Crossing (RDC) Challenges

Electrical Rule Checking in PCB Tools

Innexis Product Suite: Driving Shift Left in IC Design and Systems Development


2025 Outlook with Mahesh Tirupattur of Analog Bits
by Daniel Nenni on 01-13-2025 at 6:00 am

Tell us a little bit about yourself and your company. 

I’m Mahesh Tirupattur. I’ve been with the company for over 20 years. Recently I took the role of CEO, where I drive business partnerships, IP licensing, and joint venture development. This change was a mutual decision between Alan Rogers and me. Alan wants to focus on technology innovation, and he will be able to do that as President and CTO. I have a vision for taking the company to the next level, and I will focus on that in my new role as CEO.

Analog Bits is truly a unique company. Through many customer and foundry partnerships we’ve become the leader at developing and delivering low-power integrated clocking, sensor and interconnect IP that are pervasive in virtually all of today’s semiconductors.

What was the most exciting high point of 2024 for your company? 

This is a difficult one. There were many exciting achievements this past year, both with our foundry partners and our customers. If I had to pick one, I would say the introduction of advanced analog and mixed signal IP at cutting edge technologies. We presented proven results at 3nm, and we are moving to 2nm next.

For many years, analog IP was typically developed in older nodes. There have been many advances over the past few years that have changed this paradigm. Today, sensing and communication must be integrated on-chip with cutting edge technology. Analog Bits has met this challenge with a broad range of IP to address the needs of the latest AI and data center chip designs.

What was the biggest challenge your company faced in 2024? 

The best way to describe this is a multi-dimensional balancing act. We need to deliver high-speed, high-precision IP that runs at the most advanced nodes with the lowest possible power. Achieving that combination requires a lot of analyses and tradeoffs.

How is your company’s work addressing this biggest challenge?  

There are technical achievements that are certainly needed. For example, thermal considerations are top of mind for many design teams. To help with that we’ve developed a comprehensive on-die sensing IP portfolio. This technology helps to manage power, enhance reliability, and improve yield. Timing glitches are also becoming more prevalent in advanced designs. We also have a portfolio of glitch detection IP to address this growing problem. There are many more areas we cover, you get the idea.

But there is also an industry-level shift in thinking that is coming. For a long time, analog IP choices were made at the end of the design process. It was something of an afterthought to finish the design. I liken this to package design. For many years, the package for a monolithic chip was done near the end of the design to finish things up. With the growth of multi-die design, the package team is now an integral part of the system development team – the choices made impact and enable the entire project.

In the new multi-die environment, enabling IP that unlocks optimal power distribution, high-speed communication and eases thermal stress becomes a cornerstone item for system design. This is the IP that Analog Bits provides, and I am making changes to the company’s structure to allow us to be part of the system architecture team, ensuring all demands can be met early in the architectural definition phase. You will be hearing more details of how Analog Bits is moving upstream to address substantial challenges as early as possible.

What do you think the biggest growth area for 2025 will be, and why?

Thanks to the huge increase in data center expansion and AI application development, energy efficiency, together with superior performance and latency, is an absolute requirement. To achieve these requirements, superior clocking, sensing, I/O and SerDes communication are all needed. This will be a big growth area, and these are all sweet spots for Analog Bits.

How is your company’s work addressing this growth? 

Beyond design excellence, we focus on partnering with foundries and leading suppliers. A good example of this is the work we’ve done with Arm.

We worked on several integrated power management and clocking IPs with the company. Arm’s customers can readily use these solutions in N3P and soon in N2P. LDO regulator IPs were also part of the effort to efficiently manage the large absolute and dynamic current supplies to Arm CPU cores.

A case study of how CPU cores seamlessly integrate with Analog Bits LDO and Power Glitch Detector IPs, along with integrated clocking capabilities, was presented at TSMC OIP in 2024. The implications of this work are substantial for advanced data center applications.

And our focus on working with system design teams early will clearly have a positive impact as well.

What conferences did you attend in 2024 and how was the traffic?

Beyond the usual industry trade shows such as DAC, Analog Bits supports many of the events of our foundry partners. We attend all the worldwide events for the TSMC Technology Symposium, TSMC OIP Ecosystem Forum, Samsung SAFE Forum, GlobalFoundries Technology Summit and Intel Foundry Services Direct Connect events. Each event brings us closer to key customers and our foundry partners, so we view them as all quite valuable.

Will you attend conferences in 2025? Same or more?

Each event we attended last year allowed us to reach an important segment of our customer base and partner network. I expect we will have a similar program this coming year. You can view the current plans on our website at https://www.analogbits.com/events/.

How do customers engage with your company?

As discussed, we are at a lot of shows. You can come by our booth, get the latest information and start a conversation with us. We also joined the Silicon Catalyst In-Kind Partner Ecosystem last year, so if you’re a startup in that incubator it’s easier to work with us. We also opened a new design center in Prague last year. You can also get things started by dropping a note to info@analogbits.com.

Additional questions or final comments? 

2024 was a great year for Analog Bits and we’re excited to see the expansion on the horizon in 2025. If you’re working on advanced data center or AI applications, things just got a bit easier. If power management, performance, or communication are challenges, we can help.

Also Read:

Analog Bits Builds a Road to the Future at TSMC OIP

Analog Bits Momentum and a Look to the Future

Analog Bits Enables the Migration to 3nm and Beyond


Podcast EP269: A Broad View of Semiconductor Market Dynamics with Rajiv Khemani
by Daniel Nenni on 01-10-2025 at 10:00 am

Dan is joined by Rajiv Khemani, a serial entrepreneur and an industry leader with 25 years of experience in building and scaling businesses. He is currently co-founder & CEO of Auradine. He is also an investor and board member in the data infrastructure, AI, and software/platforms space. Previously, Rajiv was co-founder & CEO of Innovium, a leading provider of cloud-optimized network switching solutions that was acquired by Marvell for over $1B in 2021. Prior to that, Rajiv was COO at Cavium; he also worked at Intel, Sun Microsystems, and Network Appliance.

In this comprehensive discussion, Dan explores the market dynamics and evolution of the semiconductor ecosystem with Rajiv. They discuss how monopolistic tendencies impact the market, the growth of AI, and the importance of an open ecosystem. Rajiv provides some growth projections as well as advice to entrepreneurs on how to work with the semiconductor ecosystem. The role of government in maintaining semiconductor innovation in the US is also discussed, with an overview of the benefits and challenges of the CHIPS Act.

Rajiv concludes with a summary of his company, Auradine, which provides semiconductors, systems, and software for blockchain and AI infrastructure. Auradine builds products that support innovation, sustainability, and scalability. You can learn more about Auradine here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual


CEO Interview: Dr. Zeynep Bayram of 35ELEMENTS
by Daniel Nenni on 01-10-2025 at 6:00 am

Zeynep is a co-founder and the Chief Executive Officer of 35ELEMENTS Corp., the GaN solutions company, which spun off from the University of Illinois at Urbana-Champaign. She worked in small and large semiconductor companies and managed operations of an equipment manufacturing business. She completed her B.S. and M.S. in EE from Cornell and Columbia Universities, respectively.

Why did you start 35ELEMENTS?

To help us reach carbon neutrality, we (Zeynep, CEO/CFO, B.S. in EE, Cornell University, M.S. in EE, Columbia University, & Can, CTO, Ph.D. in EECS, Northwestern University) spun off 35ELEMENTS from the University of Illinois at Urbana-Champaign. We discovered that the best semiconductor material for assisting us in achieving carbon neutrality is cubic gallium nitride, a III-V compound semiconductor material whose development Can has been leading as a faculty member at ECE Illinois for the past eleven years. The purpose of 35ELEMENTS is to expand this novel semiconductor materials technology platform.

Why did you join Silicon Catalyst?

One of our goals is to form strategic alliances with foundries to scale our novel gallium nitride material solutions on CMOS-compatible Si (100) substrates within our target of two years. We plan to accomplish this by working with Silicon Catalyst’s in-kind and strategic partners. The tremendous experience and wide knowledge that the Silicon Catalyst team has is also what drew us to working with them.

What differentiates your company?

The foundation of 35ELEMENTS is the exclusive portfolio of cubic gallium nitride patents covering materials synthesis as well as photonic and electronic devices. We are the only company that can epitaxially hetero-integrate gallium nitride materials on Si (100) substrates that are compatible with CMOS processes and foundries. We make use of an industry-proven epitaxy technique called metalorganic chemical vapor deposition (MOCVD).

What is 35ELEMENTS’ goal?

35ELEMENTS will offer the fastest and highest efficiency light emitters in the world, while remaining on the largest substrate platforms. To facilitate the rapid adoption of solid-state lighting and contribute to the development of ideal light sources, which will result in even greater energy and environmental savings sooner, our first product will be a technological advancement in solid-state lighting: innovative direct-emitting green light-emitting diodes. Approximately $1 trillion in energy savings, in addition to other health advantages like improved mood regulation and eye safety, will be an immediate benefit of our solution.

What new features/technology are you working on?

We see direct applications in augmented and virtual reality displays, digital lighting, optical interconnects, wide bandgap complementary logic functions, and power electronics. The strategic partnerships with CMOS-centric companies will enable our gallium nitride and their silicon solutions on one platform.

What keeps your customers up at night?

In the process of creating human-centered lighting solutions, our customers currently sacrifice form factor, cost, efficiency, and light quality. Consider any lighting application such as headlights, flashlights, displays, signage, lamps, etc. Today, when more light output is needed, such devices’ efficiency is cut in half, their color spectrum shifts, and thermal issues arise. The existing photonic solutions are not customer-centric.

Also Read:

CEO Interview: Subi Krishnamurthy of PIMIC

CEO Interview: Dr Josep Montanyà of Nanusens

CEO Interview: Marc Engel of Agileo Automation


2025 Outlook with Matt Burns of Samtec
by Daniel Nenni on 01-09-2025 at 10:00 am

My good friend Matthew Burns develops go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 25 years, he has been a leader in design, applications engineering, technical sales and marketing in the telecommunications, medical and electronic components industries. Matt holds a B.S. in Electrical Engineering from Penn State University.

Tell us a little bit about yourself and your company.

Samtec is a privately held, global manufacturer of a broad line of high-performance copper, optical, and RF interconnect solutions. Our technical experts around the globe optimize the signal path from the bare die to an interface 100 meters away, and all interconnect points in between. I lead an experienced team of professionals who evangelize the capabilities of Samtec’s Silicon-to-Silicon solutions.

What was the most exciting high point of 2024 for your company?

Throughout the year, Samtec was able to demonstrate the high-speed capabilities of several next-gen interconnect platform solutions. Even at 224 Gbps PAM4 speeds, copper isn’t dead. Samtec’s Flyover® Next-Gen Systems route data from the ASIC to the front panel (or backplane) via our Eye Speed® Cable Technology. At SC24, we exhibited the latest versions of our Si-Fly® HD co-packaged and near chip systems driven by Synopsys IP with a pre-FEC BER of e-9 over a 40 dB channel. On the optical side, we demonstrated 56 Gbps PAM4 performance with our Halo™ next-gen mid-board optical transceivers.

What was the biggest challenge your company faced in 2024? 

I talked with several colleagues throughout the year. All of us agreed that innovation is speeding up. We anticipate that will continue going forward. The near insatiable demand for GPUs, XPUs, and AI accelerators by the hyperscalers remains the key driving force here. Semiconductor suppliers, IP providers, EDA vendors and interconnect companies like Samtec must meet their design requirements on time and under budget. That sounds commonsensical, but the combined technology required to scale AI at the pace the industry demands is unprecedented.

How is your company’s work addressing this biggest challenge? 

Samtec is innovating faster as well. We are ramping up our engineering hiring. It’s just a necessity. Unique design challenges demand unique interconnect solutions. Our technical experts are improving the SI performance across the new signal channels. We are creating next-gen mated contact systems to enable 224 Gbps PAM4 performance in dense, small footprints. We constantly tweak our twinax cable technology by testing new dielectric materials, improving cable manufacturing, or developing new cable testing techniques. Our SI engineers are always researching the latest laminates to recommend for high-speed or high-frequency design. We work with our partners to squeeze more performance out of simulation tools. The list goes on.

What do you think the biggest growth area for 2025 will be, and why?

In short, we can’t manufacture micro coax and twinax copper cables fast enough. The adoption of Samtec’s Flyover® technology across networking, computing, storage, and AI acceleration platforms throughout data center, supercomputing systems, and semiconductor testing and manufacturing applications continues to be the driver here. We are also seeing increased demand for our growing portfolio of mid-board optical transceivers across several applications.

How is your company’s work addressing this growth? 

As mentioned, we continue to invest in innovation. On the copper cable side, we need to find materials with the lowest Dk available. That has led us to researching, developing, and finally manufacturing twinax cable based on uniformly foamed dielectrics. Cable diameters need to be smaller, so finding thinner cable wraps is a necessity. We are expanding cable manufacturing globally. Additionally, we have standard cable assemblies, but our customers usually require something unique. We need to balance supporting emerging R+D opportunities while handling the high-volume needs of programs already in production. On the optical side, it’s more of the same story: ramp innovation and ramp production.

What conferences did you attend in 2024 and how was the traffic?

Samtec sponsors, exhibits, and presents at more than 50 tradeshows and conferences annually around the globe. Some of the shows I attended included OFC, the OCP Global Summit, MemCon, various PCI-SIG DevCons, ECOC, embedded world, SuperComputing (SC24), and the AI Hardware and Edge AI Summit. From Samtec's perspective and personally, attendance at tradeshows is still on an upward trend, as it has been for the last few years coming out of the pandemic. I think the accelerating pace of innovation in AI, semiconductors, EDA, optical connectivity, and other high-growth areas is now defining this trend, and I expect it to continue into 2025.

Will you attend conferences in 2025? Same or more?

Yes, without a doubt. Conferences and events are still a great way to meet luminaries, thought leaders, influencers, design engineers, and the like. We are still finalizing our 2025 event strategy and scheduling. Overall, we will probably attend more events in 2025. We will likely be on par in the Americas and EMEA, while we strategically invest a bit more across Asia.

How do customers engage with your company?

That’s a great question. As I just mentioned, we still meet plenty of new customers at conferences. Engineers can engage with us directly via our global sales team or our global network of approved distributors. Technically, our FAEs, AEs, and SI engineers are only a phone call or e-mail away. Our website – www.samtec.com – is a treasure trove of product information. We are also accessible via our social media channels.

Additional questions or final comments? 

It’s always nice to engage with you and your team, Dan. We always appreciate the opportunity. I am sure the year ahead will pose many opportunities and challenges. Samtec looks forward to working with our customers and partners to solve their next-gen interconnect challenges.

About Samtec

Founded in 1976, Samtec is much more than just another connector company. We put people first, along with a commitment to exceptional service, quality products and innovative technologies that take the industry further faster. This is enabled by our unique, fully integrated business model, which allows for true collaboration and innovation without the limits of traditional business models.

Also Read:

Samtec Paves the Way to Scalable Architectures at the AI Hardware & Edge AI Summit

Samtec Demystifies Signal Integrity for Everyone

Samtec Simplifies Complex Interconnect Design with Solution Blocks


2025 Outlook with Christelle Faucon of Agile Analog

2025 Outlook with Christelle Faucon of Agile Analog
by Daniel Nenni on 01-09-2025 at 6:00 am

Agile Analog Christelle Faucon headshot

Tell us a little bit about yourself and your company. 

I was born in France, but I have been living in the Netherlands for two decades. I have worked in the global semiconductor industry for over 25 years. After my Master’s Degree in Electronics Engineering, I started my career as a Design Engineer. Since then I have held senior product and commercial positions, including 10 years at TSMC and 10 years as President of GUC (Global Unichip) Europe. Currently I am the VP of Sales at Agile Analog, the customizable analog IP company.

Agile Analog is revolutionizing the analog IP sector with our expanding portfolio of highly configurable, multi-process analog IP products. The company has developed a unique way to automatically generate analog IP that meets the customer’s exact specifications, for any foundry and on any process. We provide a wide range of customizable analog IP solutions and subsystems, covering data conversion, power management, IC monitoring, security and always-on IP. Applications include HPC (High Performance Computing), IoT, AI and security.

What was the most exciting high point of 2024 for your company? 

2024 was an extremely busy time at Agile Analog. Our main focus was on implementing and delivering customer projects. Throughout the year we saw a significant increase in demand for our novel analog IP and we ramped up the number of customer deliveries. There are tier 1 companies that we work with that unfortunately we can’t talk about due to confidentiality, but in March 2024 we were able to announce the completion of our first always-on IP subsystem for XMOS.

We have also strengthened relationships with the major foundries. Partnering with these foundries enables us to access advanced technology PDKs, so we can support customers across the globe who need solutions on advanced nodes. Agile Analog has been a member of the TSMC OIP IP Alliance Program and the Intel Foundry IP Alliance Program since 2023. In July 2024 we announced that we had joined the GlobalFoundries GlobalSolutions Ecosystem and delivered our IP to customers on FinFET and FDX processes.

Other company highlights in 2024 included being on the EE Times Silicon 100 list for semiconductor startups worth watching and being selected as a WIRED Trailblazer.

What was the biggest challenge your company faced in 2024? 

2024 was another challenging year across the semiconductor sector, with the ongoing geopolitical turmoil and economic downturn leading to more uncertainty. The impact of this was felt across the entire industry, with many companies frustrated and restricted by reduced budgets. The automotive sector in Europe was particularly badly hit, although Agile Analog’s exposure to this market is small.

How is your company’s work addressing this biggest challenge? 

Agile Analog has continued to drive forward to accelerate the adoption of our unique analog IP. Despite the challenges, we are proud that Agile Analog has grown as a business, achieving our highest number of IP sales and bookings. We work closely with our foundry and industry partners across the globe, and we have seen a surge in demand, especially for our data conversion IP, power management IP and security IP. Our aim is for Agile Analog to be the first company that comes to mind when chip designers are looking for customizable analog IP. Our reach is truly global, with increased levels of interest from customers in North America and Asia. In October we announced a collaboration to support the work of the Southern Taiwan IC Design Industry Promotion Center.

What conferences did you attend in 2024 and how was the traffic?

Over the last 12 months the Agile Analog team has taken part in more global semiconductor foundry events than ever before – including the TSMC Technology Symposiums, TSMC OIP Ecosystem Forums, GlobalFoundries Technology Summits, Samsung Foundry Forums and Intel Connect. The audience and flow of traffic at these events have been encouraging. We have enjoyed showcasing our expanding range of analog IP solutions, as well as talking with customers and partners about market trends and challenges. These discussions are invaluable as they form part of the decision-making process as we develop our product roadmap.

The Global Semiconductor Alliance (GSA) events have also been very interesting, including those focused on the Women’s Leadership Initiative (WLI). In March I attended the first GSA WLI EMEA event – Women in Semiconductors Conference – at the GSA International Semiconductor Conference in London. Then in October there was the first GSA WLI EMEA lunch and learn event in Munich. It’s great to see such a strong community that supports the career development of women working in the semiconductor industry.

What do you think the industry’s biggest growth areas will be in 2025?

Despite ongoing global challenges, there are still reasons for cautious optimism in the semiconductor industry. AI and data centers were key areas of interest in 2024, and we expect these will continue to be the main growth areas in 2025. Indeed, the potential of generative AI has been a recurring talking point, and its future impact on the world looks set to be game-changing.

Will you attend conferences in 2025?

In 2025, we will further strengthen our foundry relationships by participating in more of the foundry events. We may also review sector-related events, such as those focused on AI and big data. We really enjoy meeting existing and potential customers and partners face-to-face, so events are important for us.

What will be the main product focus areas for your company in 2025? 

At Agile Analog, our key product related priorities in 2025 will be working on advanced nodes and our security IP. Until now we have not been able to focus enough attention on developing our technology on advanced nodes. In 2025, we are keen to change this. We have exciting plans to collaborate with major foundries, such as TSMC and Samsung Foundry. There is also growing demand for our security IP solutions, especially for anti-tamper applications, so this range of our products will be at the forefront in 2025. As always at Agile Analog, meeting the needs of our customers comes first. We will continue to listen to and support our customers, and share our extensive expertise and experience in order to ensure that we can deliver the very best solutions possible.

Also Read:

Overcoming obstacles with mixed-signal and analog design integration

CEO Interview: Barry Paterson of Agile Analog

International Women’s Day with Christelle Faucon VP Sales Agile Analog