Alchip Technologies Ltd., a global leader in high-performance computing (HPC) and artificial intelligence (AI) ASIC design and production services, continues its trajectory of rapid growth and technical leadership by pushing the boundaries of advanced-node silicon, expanding its global design capabilities, and building customer-centric solutions that differentiate at the packaging level. In a candid update from CEO Johnny Shen, three pillars emerged as central to Alchip’s strategy: technology leadership, talent deployment, and customer-driven business execution.
TECHNOLOGY: 2nm and 3nm
Alchip is preparing for a significant technology inflection with the introduction of 2nm design enablement, the first gate-all-around (GAA) transistor node. While 3nm (the final FinFET-based node) will dominate most production designs in 2025, a select few projects are advancing into 2nm, which introduces unique design complexities. These include significantly higher compute power requirements for final sign-off and verification.
During peak 3nm workloads, Alchip leveraged more than 500 servers; for 2nm, even larger compute infrastructures will be required. The company’s 2nm test chip taped out in 2024, with silicon results expected soon. These results will help quantify the PPA (power, performance, area) delta between 3nm and 2nm. While pure 2nm designs might be rare, hybrid approaches—with compute logic in 2nm and analog/mixed-signal components in 3nm chiplets—are becoming common among customers.
Alchip’s early 2nm work is already being validated by one of its more significant customers, who plans to kick off both a test chip and a product chip within 2025. This underscores Alchip’s credibility as a first-choice ASIC partner for leading-edge silicon.
TEAM: Strategic Global Expansion of Engineering Resources
With 86% of 2024 revenue originating from North America and global expansion in mind, Alchip is aggressively shifting its design workforce to Taiwan, Japan, and Southeast Asia. In Vietnam, where the company already employs 30 engineers, headcount is expected to grow to 70–80 by the end of 2025. Similarly, Malaysia’s team is expanding from 20 to approximately 50 engineers. By year-end, over half of Alchip’s engineering workforce will reside outside China.
This distributed R&D model not only ensures IP security and compliance with international regulations but also enables proximity to foundries, customers, and local talent pools. In the United States, Alchip is scaling up its Field Application Engineers (FAEs), Program Managers (PMs), and senior R&D experts to support a customer base that demands nuanced understanding of compute architecture, PPA trade-offs, and roadmap alignment.
For package and assembly support, much of the technical interface remains US-based, with Taiwan-based experts frequently dispatched to co-locate with customers when needed. Testing and product engineering disciplines remain centralized in Taiwan, where Alchip’s reputation as a top-tier semiconductor employer provides a strong pipeline of experienced hires.
BUSINESS: Record-Breaking Growth Driven by Differentiated Solutions
In 2024, Alchip delivered its seventh consecutive year of record financials, with revenue of $1.62 billion and net income of $200.8 million—each marking new highs. These numbers translate into a revenue-per-employee ratio of approximately $2.5 million, placing Alchip among the most productive companies in the semiconductor industry.
Core to this growth is the company’s differentiated package engineering. While customers rarely question Alchip’s ability to deliver on the compute side, most customer inquiries now revolve around packaging strategy. These include determining the optimal HBM stack configuration, interposer design, chiplet integration, thermal modeling, and overall system optimization.
Alchip has completed 18 CoWoS (Chip-on-Wafer-on-Substrate) designs, the most of any ASIC partner, according to TSMC. These designs have varied significantly by customer, each requiring unique interposer geometries, memory bandwidth targets, and form factor considerations. Johnny attributes this capability to Alchip’s focus on emerging, high-tech startups, whose need to innovate quickly forces the company to stay ahead of the technology curve.
This flexibility and deep design experience have made Alchip a go-to partner not only for startups, but also for established tech giants pursuing the next wave of AI and HPC performance.
Outlook: Enabling Tomorrow’s Compute Platforms
With 20–30 tape-outs per year, Alchip maintains a rapid feedback loop that continuously hones its methodology, toolchains, and cross-functional workflows. As customers move toward 2nm GAA, 3DIC architectures, and multi-die systems, Alchip is positioning itself as a turnkey provider of silicon, packaging, and system-level integration expertise.
Its tight alignment with TSMC’s roadmap, along with a strategic pivot toward a distributed global engineering footprint, ensures that Alchip will remain a critical player in enabling the future of AI and HPC workloads. The company’s ability to combine advanced silicon design with deep system integration know-how is what makes it not just a service provider—but a true innovation partner.
Part 2 examines the transformation of the interface protocols industry from a fragmented market of numerous specialized vendors to a more consolidated one dominated by a few major solutions providers as driven by the increasing complexity of modern protocols. It highlights the importance of rigorous validation of interface protocols and underscores the pivotal role of security in ensuring robust and reliable protocol design. It concludes by presenting an ideal roadmap for developing cutting-edge interface protocol solutions.
The Interface Protocol Landscape: Adoption, Security and Validation
Traditionally, interface protocol development followed a structured, sequential process that typically began with the formation of a consortium, followed by the creation of draft specifications. Once the specifications approached version 1.0, early adopters would begin implementation, and within a two-year timeframe, products would start to emerge.
Today, this timeline has been radically disrupted. Case in point: in early 2024, Broadcom commented on its roadmap for AI-system switches slated for release in 2025, based on protocols that have not even seen the formation of a consortium. This represents a monumental shift, driven by market demands that far outpace the current capabilities of existing protocols. The pressure on the industry to keep up is unprecedented.
Evolving Approaches to Interface Protocol Adoption
The way protocols are adopted and verified has also undergone significant transformation. Designers are increasingly sourcing third-party IPs instead of building them in-house. For example, where designers previously developed PCIe Gen4 protocols themselves, most now turn to external providers for PCIe Gen6 solutions.
The adoption process itself has become much more streamlined. Historically, protocol implementation followed a prolonged step-by-step approach. Teams would start by using verification IP (VIP) of the required interface protocol for functional verification in simulation, then evaluate its performance on emulation for a few months, and eventually proceed to hardware implementation for prototyping or post-silicon testing. Now, all pre-silicon steps are compressed into a single quarter, reflecting the urgency and integration demanded by modern workflows.
In today’s project planning meetings, design teams expect a comprehensive roadmap from their IP providers right at the start of their interface protocol journey. Questions about VIP, virtual models in the form of transactors, and the availability of hardware implementations are no longer deferred—they arise simultaneously with the interface protocol decision. This shift marks a significant trend: designers are no longer solely focused on verifying protocols. Instead, they prioritize building a robust, fully integrated verification ecosystem centered around the interface. This approach ensures thorough validation while enabling a swift transition to software development, where real differentiation and value are created.
The Impact of Industry Consolidation
The interface protocol landscape is undergoing a paradigm shift. Market demands have accelerated the development timeline, compressing processes and driving designers to seek comprehensive, ready-to-integrate solutions. As a result, the industry is consolidating to meet these challenges. Supporting cutting-edge protocols requires immense expertise, resources, and investment—factors that only larger IP companies can consistently deliver.
Smaller IP vendors are finding it increasingly difficult to compete at the high end of the market. Many are merging, consolidating, or being acquired. While these smaller players may continue to find opportunities in mid-tier markets, high-end protocol development and support are rapidly becoming the domain of a few large, well-established companies.
The ability to adapt to this fast-evolving environment will determine who leads the future of protocol development and adoption.
Interface Protocol Security: Critical Imperative
In an increasingly interconnected world, security is essential to safeguarding data and systems against a wide array of threats. To achieve robust and reliable security, the foundation must begin at the hardware level, forming the bedrock of a comprehensive security framework.
The Layered Approach to Security
Security is best visualized as a series of interdependent layers, starting with a hardware root-of-trust and extending to encompass the system, application, software, and services. Hardware security serves as the cornerstone, providing a strong, reliable base to support and protect all subsequent layers. Any weakness in the security chain, regardless of where it occurs, can compromise the entire solution. Therefore, while hardware is the starting point, its design and implementation must be flawless to ensure the security of all operational phases.
The Hardware Root-of-Trust in SoC Designs
The hardware root-of-trust in System-on-Chip (SoC) designs represents a critical security mechanism built directly into the hardware. It establishes a trusted foundation for all secure operations within the system, enabling the verification of software integrity and safeguarding sensitive data. This embedded security ensures that the system can be trusted from the very beginning of its operation, acting as the first line of defense against potential breaches.
Security in Interface Protocols
In recent years, security considerations have become integral to interface protocol design. Modern interface protocols, including PCIe, CXL, UCIe, Display, and Memory, now feature comprehensive security hardware blocks, including the root-of-trust, embedded within their specifications.
From the design perspective, the hardware root-of-trust is typically implemented as a Register Transfer Level (RTL) component. It integrates with the RTL describing the protocol controller, ensuring seamless operation and robust security. For example, the PCIe and CXL Protocols include security features such as Integrity and Data Encryption (IDE), designed to ensure data authenticity and confidentiality.
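As a conceptual illustration of what IDE provides, the sketch below uses Python's cryptography package to apply authenticated encryption (AES-GCM) to a packet payload while integrity-protecting its header. It is a software analogy only, under assumed packet framing, and not the PCIe/CXL IDE key hierarchy, FLIT handling, or RTL implementation.

```python
# Conceptual analogy for IDE (Integrity and Data Encryption) in PCIe/CXL:
# AES-GCM provides confidentiality for the payload and integrity for both
# payload and header. This is a software sketch only -- real IDE is implemented
# in RTL with its own key management and packet framing.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

nonce = os.urandom(12)                  # per-packet nonce (hypothetical framing)
header = b"example packet header"       # integrity-protected, sent in the clear
payload = b"example packet payload"     # encrypted and integrity-protected

ciphertext = aead.encrypt(nonce, payload, header)
recovered = aead.decrypt(nonce, ciphertext, header)
assert recovered == payload             # any tampering raises InvalidTag
```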
The hardware root-of-trust must undergo rigorous RTL-level verification and validation, similar to other hardware components.
Interface Protocol Verification and Validation
Modern interface protocols implement complex specifications with unique data integrity, throughput, latency and security requirements. For example:
PCI Express (PCIe) demands high-bandwidth, low-latency communication for connecting GPUs and NVMe storage devices.
Ethernet must ensure compliance with networking standards for interoperability.
USB introduces complexities with hot-swapping and varying power requirements.
Each protocol requires rigorous validation to avoid costly errors in production silicon. Protocol malfunctions span a wide range, from basic hardware failures that may lead to data corruption, deadlocks, or reduced performance, to design-intent errors that miss target specifications and are particularly challenging to debug post-silicon.
While hardware-description-language (HDL) simulation has traditionally been the cornerstone of SoC validation, providing high visibility into design behavior, it is increasingly limited by performance constraints. Because it runs several orders of magnitude slower than real hardware, it becomes impractical for validating new generations of interface protocols.
These limitations necessitate a move towards vastly faster and extensively scalable validation solutions.
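To put "several orders of magnitude" in perspective, here is a rough back-of-envelope comparison; the throughput figures are illustrative assumptions, not measured tool benchmarks.

```python
# Rough, illustrative comparison of wall-clock time for a deep protocol workload.
# Throughput numbers are assumptions for the sake of the estimate, not benchmarks.
workload_cycles = 5e9          # billions of verification cycles for one protocol test
sim_cps = 1e3                  # assumed HDL simulation throughput (cycles/second)
emu_cps = 1e6                  # assumed emulation throughput (cycles/second)

sim_days = workload_cycles / sim_cps / 86_400
emu_hours = workload_cycles / emu_cps / 3_600

print(f"Simulation: ~{sim_days:.0f} days")    # ~58 days at 1 kHz
print(f"Emulation:  ~{emu_hours:.1f} hours")  # ~1.4 hours at 1 MHz
```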
Hardware-Assisted Verification: The Optimal Approach
Hardware-assisted verification platforms, including hardware emulators and FPGA prototypes, offer a compelling solution to address the challenges of SoC interface protocol validation. These platforms bridge the gap between pre-silicon and post-silicon validation, providing near-real-time performance with hardware accuracy unparalleled by software-based simulation. Key benefits include:
Performance Acceleration: Emulation and prototyping platforms execute SoC designs orders of magnitude faster than HDL simulators, enabling comprehensive validation of high-speed protocols such as PCIe Gen 5 and Ethernet 400G by processing workloads of billions of verification cycles.
Real-World Testing: Hardware-assisted platforms allow integration with real-world devices, providing a realistic environment for protocol validation.
Early Bug Detection: By enabling early and iterative testing, hardware-assisted solutions help catch protocol violations and interoperability issues before they propagate into later stages of development.
Scalability: These platforms can validate multiple protocols in the context of the entire SoC design, ensuring that complex interactions in heterogeneous SoCs are thoroughly tested.
Case Study: PCI Express Validation with Hardware-Assisted Platforms
For example, the validation of a PCIe Gen 5 interface in a modern SoC introduces challenges such as stringent signal integrity requirements and complex error recovery mechanisms. A hardware-assisted platform can simulate real-world scenarios, such as sudden link drops or varying traffic patterns, at full protocol speed. Debugging tools integrated into the platform can provide deep insights into packet-level transactions and protocol compliance, significantly accelerating the validation process.
Key Considerations for a Comprehensive Interface Protocol Solution
Choosing the right partner for advanced interface protocols requires careful evaluation of several critical factors:
Active Participation in Industry Consortiums
The foundation of effective protocol development lies in active involvement with key industry consortiums. Leading providers play significant roles in these organizations, contributing to the creation and evolution of standards. Early participation—such as donating technology and collaborating closely with protocol developers—ensures that providers stay at the forefront of protocol formation and adoption. Leadership within these consortiums is a strong indicator of an innovative and forward-thinking IP provider.
Proven Performance: The Eye Diagram Speaks Volumes
Trust in a provider’s capabilities is built on a proven track record of silicon-verified IP. This requires thorough testing with industry-standard test equipment to ensure robust performance and reliability.
A critical benchmark is the eye diagram, which visually represents the ability of the IP to transmit data reliably at high speeds. Achieving a clean and well-measured eye diagram demonstrates superior analog performance and adherence to protocol standards. Beyond analog precision, a provider’s experience in implementing and verifying protocol functionality is crucial.
Leading vendors validate their designs through extensive testing, including numerous cycles of the digital controller and PHY test chips on prototyping systems, ensuring the interface operates seamlessly.
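For readers unfamiliar with how an eye diagram is built, the sketch below folds a synthetic, noisy NRZ waveform onto a two-unit-interval window and overlays the traces. It is a toy illustration of the measurement concept, with made-up channel parameters, not a model of any vendor's PHY or lab test setup.

```python
# Toy eye-diagram construction: overlay successive unit intervals (UIs) of a
# noisy NRZ waveform. Real eye measurements come from lab test equipment; this
# only illustrates why a wide-open eye indicates reliable data transmission.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
samples_per_ui, n_bits = 32, 400
bits = rng.integers(0, 2, n_bits) * 2.0 - 1.0     # random +/-1 NRZ symbols
signal = np.repeat(bits, samples_per_ui)

# Crude channel: low-pass filtering (inter-symbol interference) plus noise.
kernel = np.ones(12) / 12
signal = np.convolve(signal, kernel, mode="same")
signal += rng.normal(0, 0.08, signal.size)

# Fold the waveform onto a two-UI window and overlay the traces.
window = 2 * samples_per_ui
traces = signal[: (signal.size // window) * window].reshape(-1, window)
t = np.linspace(0, 2, window)
for trace in traces:
    plt.plot(t, trace, color="tab:blue", alpha=0.05)
plt.xlabel("Unit intervals")
plt.ylabel("Amplitude")
plt.title("Toy NRZ eye diagram")
plt.show()
```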
Verification Support Across the Lifecycle
Success in protocol development extends beyond design to encompass leadership in design verification, validation, certification, and compliance testing. High-quality solutions require robust support across the development lifecycle, from virtualization and simulation-based verification to hardware-accelerated testing with components such as transactors and speed adapters.
Advanced customers increasingly demand comprehensive verification ecosystems as part of the IP package. These solutions are no longer optional add-ons but integral components of the chip design process. Advanced verification solutions require large, specialized teams—resources that not all suppliers can provide.
Examples of Advanced Verification Solutions
SoC Verification Kit (SVK), a ready-to-use package combining IP and VIP (Verification IP). It streamlines IP integration into SoC testbenches, reducing setup time and allowing design teams to focus on differentiation. Surveys show that teams spend up to 20% of verification cycles on initial setup and sanity testing. Often, IP design and validation teams work in silos, leading to duplicated effort and limited reuse of components. This inefficiency can consume approximately three person-months per IP, a considerable cost given the multiple complex IPs in modern systems. A state-of-the-art SVK can cut setup time by at least 50%, significantly accelerating time-to-market while reducing costs. A rough estimate of what these figures add up to follows the examples below.
IP Verification Kit (IPK), a package combining an FPGA prototype of the interface IP with a PHY interface card ready-to-use in system environments. It validates the interoperability of the IP with the rest of the SoC, ensuring it performs as intended in a real-world system prototype.
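Taking the SVK figures above at face value, here is a rough estimate of what a 50% setup reduction could mean across one SoC program; the count of interface IPs is a hypothetical input, not vendor data.

```python
# Rough estimate of setup-effort savings from an SoC Verification Kit (SVK),
# using the figures quoted above (~3 person-months of setup per IP, cut by at
# least 50%). The number of interface IPs is a hypothetical example.
n_interface_ips = 8        # hypothetical count of complex interface IPs in one SoC
setup_pm_per_ip = 3.0      # person-months of setup/sanity effort per IP (quoted above)
svk_reduction = 0.5        # "at least 50%" setup-time reduction

baseline_pm = n_interface_ips * setup_pm_per_ip
saved_pm = baseline_pm * svk_reduction
print(f"Baseline setup effort: {baseline_pm:.0f} person-months")
print(f"Estimated savings:     {saved_pm:.0f}+ person-months per SoC program")
```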
Design teams face the dual challenge of achieving differentiation in competitive, accelerated design cycles while pushing beyond the limits of existing interface standards. To meet these demands, they must partner with suppliers capable of aligning the development of advanced interface IP and verification solutions with project timelines.
Taking a holistic approach to verification is essential for maintaining SoC schedules and ensuring product success. Advanced tools like SVKs and IPKs provide measurable efficiency gains, helping teams reduce costs, accelerate time-to-market, and focus on innovation rather than repetitive verification tasks.
As I have said before, there is a foundry market segment that I call the “NOT TSMC” market, companies that want an alternative to TSMC. My guess is this is a $5B+ market, which is what Samsung Foundry has tried to leverage for the past 20 years. Unfortunately, working with Samsung proved to be a much higher risk than expected, so the NOT TSMC market came crashing down at 3nm (N3) and 2nm (N2).
Last week the media jumped on a JP Morgan conference where Intel CFO Dave Zinsner spoke. Of course it was spun every which way but the truth for cheap clicks, but there were some interesting points, and one was risk. Pat Gelsinger was not a low-risk CEO. He could certainly rally the troops, but expectations were set much too high. If any foundry thinks they are going to beat TSMC, they are wrong. Samsung has spent the last 20 years and hundreds of billions of dollars trying to unseat the #1 TSMC and has failed spectacularly.
Lip-Bu Tan however is a low-risk CEO. When he says he can deliver something you can bet on it. He is also very good with customers, he listens, he does not pontificate. Lip-Bu will make some very difficult decisions in the coming weeks but when the dust settles you will see a shiny new Intel, absolutely.
The transcript of the discussion is available HERE, and below are my observations, experiences, and opinions as a 40-year semiconductor professional working in the trenches.
Lip-Bu will stay with the current strategy and not split up the company and I agree with this 100%. Intel Design and Intel Foundry must be closely coupled and work together side-by-side, not one in front of the other. That is the only way Intel products will stay ahead of the competition and that is the only way Intel Foundry can stay with the competition. Dave said it is all about execution and that is a fact and that is what Lip-Bu Tan does.
Another great comment by Dave is that Intel 14A is being developed from the ground up as a foundry node. It will have PDKs that are comparable to what the industry would expect (TSMC-like). Typically, an IDM foundry was more focused on process development for internal products, then adapted the process to the foundry business. It is very difficult to compete with TSMC if that is your strategy. Unfortunately, 18A and BSPD were not originally built for a foundry, but they did get big interest from customers. Hopefully Lip-Bu can turn that into revenue moving forward.
Dave did slip up on this one:
Dave Zinsner, Vice President and Chief Financial Officer, Intel: “Well, the first 18A customer is going to be Intel products. Yeah. It’s Panther Lake, and the first SKU is expected to be out by the end of the year. So, is our first win, so to speak, if you put on the Intel foundry hat with 18A.”
Sorry Dave, you just said 18A was not developed as a foundry process so it is a bit early to put your foundry hat on. Keep your foundry hat in hand until you have an external 18A customer with product, otherwise you are still an IDM.
If anyone thought that Intel 18A would be a blockbuster foundry node they clearly do not understand how foundry customers work. It is all about mitigating risk. IF 18A proves out AND 14A PDKs are early and competitive THEN big customers will come to Intel Foundry.
In regards to 14A and HNA-EUV:
Dave Zinsner, Vice President and Chief Financial Officer, Intel: “14A, you know, obviously, gets more expensive. At present, it’s expected to have high NA, and, you know, that’s a more expensive tool. So, you know, I think we do need to see more external volume come from 14a versus versus 18a.”
Here is the thing about HNA-EUV: I do not feel it is close to being ready for high-volume manufacturing, and foundry customers will not run to an unproven process with HNA-EUV. The added value of HNA-EUV is just not there yet.
I think Lip-Bu will recognize this and be more cautious with 14A. He should start with EUV, then move it over to HNA-EUV, like TSMC did with EUV at 7nm: get N7 into HVM, then add a light dose of EUV (N7+) before going full EUV at N6. There is no shame in following TSMC on this. Remember, TSMC followed Intel for many years.
The foundry business aims for breakeven by 2027, helped by revenue from external sources.
Dave Zinsner, Vice President and Chief Financial Officer, Intel: “Yeah. Okay. So, we still feel on track to to hit breakeven sometime in 2027. You know, I think when we committed to it in ’24, we said, hey. It’s gonna be somewhere between ’24 and 2030. Most people kind of settled in that that must mean ’27, and that’s generally kind of what we’re thinking is we can be breakeven.”
From ChatGPT:
“In 2024, Intel’s foundry business incurred significant financial losses, totaling approximately $13.4 billion for the year. This figure represents a substantial increase from the $7 billion operating loss reported in 2023.”
I’m not a finance guy but are we talking about breaking even for a full fiscal year or just one quarter? For a full fiscal year that will not happen in 2027 without dramatic cuts. Remember, HNA-EUV systems are $380M each and a foundry will need dozens of them if we are talking full HNA-EUV and not just a layer or three. I doubt Lip-Bu Tan will allow Intel financial hand waving moving forward but we shall see.
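To put that in perspective, here is a quick capex calculation using the $380M-per-tool figure; the tool counts are hypothetical assumptions, not Intel or ASML numbers.

```python
# Back-of-envelope High-NA EUV capex, using the ~$380M-per-tool figure above.
# The tool counts are hypothetical assumptions, not any company's disclosure.
tool_cost_usd = 380e6
for tool_count in (6, 12, 24):    # "a layer or three" vs. fuller HNA-EUV adoption
    capex = tool_count * tool_cost_usd
    print(f"{tool_count:>2} tools -> ${capex / 1e9:.1f}B in lithography capex alone")
```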
Harlan Sur, Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan: “So, is the right way to think about the (external foundry) mix maybe 20 to 30% ?”
Dave Zinsner, Vice President and Chief Financial Officer, Intel: “Yeah, something like that.”
Let me remind you that Intel has outsourced wafers from foundries for many years due to acquisitions (Altera, Mobileye, Habana Labs, etc…), so 20-30% is not really that much. Nothing to worry about, but I do feel that moving forward Intel should be more focused on 18A and 14A. It is all about gaining manufacturing experience and economies of scale. Cost is everything in the foundry business.
All-in-all a great conversation with more transparency on the foundry side for us semiconductor professionals. Thank you Dave.
The explosive growth of large language models (LLMs) has created substantial new requirements for chip-to-chip interconnects. These very large models are trained in high-performance data centers. Multiple accelerators need to work seamlessly to make all this possible as the bandwidth between accelerators directly impacts the size of trainable LLMs. It is accurate to say that this new era of AI is driven by new levels of bandwidth and low latency. A critical enabler for all this is 224G PHY technology. But working IP isn’t enough. The IP needs to be interoperable with other parts of the system. Synopsys has held a strong position here, both in terms of high-quality IP and proven interoperability. Let’s take a closer look at the road to innovation with Synopsys 224G PHY IP.
What it Takes to Enable Innovation
I have first-hand experience regarding what it takes to enable innovation through my work at eSilicon. As a fabless ASIC provider, we found that enabling IP and knowing how to design with it was a significant differentiator. We did a lot of work on 56G PHYs, the state-of-the-art at the time. Getting the IP to work across voltage and temperature was only the beginning, however. How the IP interacted with other parts of the system was also a critical care-about for our customers. This included all forms of processing, storage and communication channels. Proving interoperability was no easy task.
It turns out Synopsys has been at this for a while with great results. The company’s 56G IP has been proven in multiple designs down to 12nm. Its 112G IP has been proven in even more designs between 7nm and 3nm. And over the last three years, its 224G PHY has been put to the test.
In September 2022, Synopsys showcased the world’s first 224G SerDes IP demonstration with an ecosystem partner at ECOC 2022 in Basel, Switzerland. This milestone marked the birth of tangible 224G PHY IP. Since then, significant progress has been made with partner demonstrations at shows such as DesignCon, the Optical Fiber Communication Conference (OFC), ECOC, and the TSMC Technology Symposium. Synopsys IP has become the industry’s most widely interoperable 224G SerDes, supporting VSR, LR, and optical channels.
As shown in the graphic at the top of this post, the characterization report for the Synopsys 224G PHY on TSMC’s 3nm process is now available.
Digging Deeper
There are very useful resources available to understand more about the progress Synopsys has made and the implications of this work. There is a great interactive video entitled How Synopsys 224G IP is Enabling the Future of 1.6T Networking and UALink 200G. In this video, Magaly Sandoval, product & solutions program manager at Synopsys, poses some probing questions to Priyank Shukla, product director for Ethernet and UAL IP Portfolio at Synopsys.
Magaly begins with an overview of the market forces at play that have driven speeds to 224G. With that background, she asks Priyank what the Synopsys experience has been with the design and deployment of high-speed Ethernet as speeds have moved from 56G to 112G to 224G. Priyank goes into significant detail regarding the challenges faced, the varied types of designs being developed and the milestones achieved along the way.
Magaly then summarizes the significant milestones achieved on designs at advanced nodes and asks Priyank to comment on when these designs will be taped out. Priyank goes into the details about the status of several 224G designs in 3nm and 2nm. Magaly ends by asking Priyank about standards and how complete they are.
There is also a technical article written by Magaly entitled Leading the Charge in High-Bandwidth Interconnects with 224G PHY IP. This report provides substantial detail on the interoperability work Synopsys has been doing with its 224G PHY IP. The details of many designs from 16nm down to 3nm are provided. There is also an embedded video that takes you on a tour of DesignCon 2025, where partners showcase their products working with Synopsys 224G PHY IP. The video includes detailed overviews of the demos presented at the Keysight, Samtec, Yamaichi and Foxconn Interconnect Technology booths.
To Learn More
If you are working to solve interconnect challenges posed by advanced AI, you will need to master 224G channel speeds, and proven, interoperable Synopsys IP is an important ingredient for success. You can watch the interactive video with Magaly Sandoval and Priyank Shukla here. And you can access the informative technical article written by Magaly Sandoval here. And that’s what the road to innovation with Synopsys 224G PHY IP looks like.
Sudhanshu Misra is an experienced executive with over 25 years of experience in the semiconductor industry with a focus on advanced materials innovation and commercialization. He has held leadership-level roles with companies such as NthDegree, Entegris, Marubeni, and NexPlanar (a CMP pad company he co-founded) and technical roles at Bell Labs, Lucent Technologies, and Texas Instruments. His responsibilities have included overall business and financial performance, product development and innovation, trading, risk management, fundraising, and building successful start-ups.
Tell us about your company?
ChEmpower Corporation is an advanced materials and specialized chemistry company headquartered in Portland, Oregon. The company develops and supplies chemically reactive pads for planarization, offering an abrasive-free alternative to traditional polishing processes. ChEmpower is dedicated to eliminating abrasives from the polish process, improving chip yields, and advancing sustainability standards within the semiconductor industry.
Serving a range of semiconductor companies globally, ChEmpower is driving the future of semiconductor manufacturing, supporting the production of next-generation devices, and empowering innovation in high-performance technologies.
What problems are you solving?
A key enabling technology for chip manufacturing is planarization, which is required to create the flat surfaces on which chip circuitry is built.
Our abrasive-free technology is revolutionary, providing a cleaner, more efficient, and sustainable method for achieving planarization. By eliminating the abrasive component, we significantly reduce the potential for defects and micro scratches, which are common issues with traditional CMP techniques. This reduction in defects not only improves chip yields but also enhances the overall performance and reliability of the semiconductor devices.
Moreover, our technology offers a cost-effective solution for chip manufacturers. Without the need for expensive abrasives, the operational costs are lowered, making the manufacturing process more economical. Additionally, our approach aligns with the growing emphasis on sustainability within the industry. By minimizing waste and reducing the environmental impact associated with traditional planarization methods, ChEmpower is setting new standards for eco-friendly manufacturing practices.
The potential market for our technology is vast. While the current market size for conventional CMP solutions is estimated to be around $3 billion, the opportunity for our innovative, abrasive-free technology is projected to be between $10 billion and $12 billion. This significant market potential is driven by the benefits of improved chip yield, enhanced sustainability, simplified process flows, and reduced costs.
What application areas are your strongest?
Our initial product is designed to enhance copper interconnects for integrated circuits (ICs) and advanced packaging applications. ChEmpower’s technology platform is versatile and can be adapted to other metals, including Molybdenum and Ruthenium, which are upcoming metals for next-generation chips. Additionally, our technology is applicable to silicon polishing and exotic materials such as glass and polymers, which require extremely smooth surface finishes in advanced packaging. By leveraging chemical action during the Chemical Mechanical Planarization (CMP) process, ChEmpower’s technology ensures a more systematic, predictable, and controllable approach.
What keeps your customers up at night?
Yields and precise process control. Scalability to new materials and future advances are crucial concerns for our customers. They need solutions that not only address current manufacturing challenges but also adapt to the evolving demands of the semiconductor industry. Our technology provides the precision and control required to process these new materials, ensuring that manufacturers can meet stringent requirements for AI chips and HBMs.
What does the competitive landscape look like and how do you differentiate?
The competitive landscape is dominated by chemical companies on the polish slurry side and materials companies on the polish pad side. The slurry market is highly fragmented, with numerous chemical companies such as Fujifilm, Entegris, EMD and Fujimi, to name a few. On the pad side, the dominant player is DuPont, followed by Entegris, a distant second.
What differentiates ChEmpower is that an abrasive-free technology requires innovation on both the materials and the chemistry side, and none of the chemical or materials companies has both, nor the motive to pursue it without jeopardizing their existing businesses. ChEmpower has a unique technology platform that amalgamates chemistry and materials to create an abrasive-free technology that eliminates random defects, lowers operating costs, and enables sustainability. ChEmpower’s technology platform is so differentiated that no player in the market can foreseeably attempt to compete with it.
What new features/technology are you working on?
ChEmpower is targeting sub-10nm technology nodes with a first product launch for copper this year. We already have a silicon product in development that will be launched early next year. Our next product is for molybdenum, followed by ruthenium interconnects. As you can see, ours is a technology platform that is chemistry driven, and we envision extending our product offerings to both metals and non-metals. Our first product for copper is well suited for hybrid bonding in advanced packaging applications. Other substrate polishes in advanced packaging are also attractive targets for abrasive-free ChEmpower technology.
How do customers normally engage with your company?
Customer engagement is a critical element in setting the right expectations for our product. ChEmpower has collaborative engagements with alpha customers, OEMs (equipment manufacturers) and strategic partners to establish customer requirements and thus the solution we provide with our technology. Our immediate approach is to engage customers both directly and via partners, from the pre-sales phase all the way through qualification, which typically takes 12-18 months. We will also insert ourselves early into the R&D phase for the newer advanced technology nodes, which are opportunities for us to be adopted right from the onset of those nodes. Our partnership strategy will be critical to our go-to-market strategy as well as to customer support, enabling a surgical approach to gaining customer interest.
We will continue to deepen our engagement with customers on a broad basis to ensure we have the right market intelligence to drive alignment with their technology roadmaps. For this effort, we will work closely with our strategic sales partners to drive internal development strategies with a laser focus to target specific customers and applications.
Dan is joined by Dr. Andreas Kuehlmann, Executive Chairman and CEO at Cycuity. He has spent his career across the fields of semiconductor design, software development, and cybersecurity. Prior to joining Cycuity, he helped build a market-leading software security business as head of engineering at Coverity, which was acquired by Synopsys. He also worked at IBM Research and Cadence Design Systems, where he made influential contributions to hardware verification. Andreas also served as an adjunct professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences for 14 years.
Dan explores the growing and maturing field of hardware security with Andreas, who provides relevant analogies of how the software security sector has developed and what will likely occur in hardware security. Andreas describes the types of security checks and enhancements needed for chip hardware and suggests ways for organizations to begin modifying workflows to address the coming requirements. The work going on in the industry to develop metrics to characterize various threats is also discussed, along with an overview of the hardware security verification offered by Cycuity.
The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
In this episode of the Semiconductor Insiders video series, Dan is joined by Muhammad Umar Khan, Product Manager for Photonics Design Automation Solutions at Keysight Technologies. Muhammad describes for Dan the primary challenges of photonic design, including accurate models and a uniform methodology for chip and photonic design. The benefits of a hybrid digital twin model that combines simulation and fabrication data are discussed, along with the overall impact of the Keysight Photonic Designer solution.
The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
Thar Casey is a serial entrepreneur focused on disruptive, game-changing technology architecture. Today, he is the CEO of AmberSemi, a young California-based fabless semiconductor company advancing next-generation power management (conversion, control and protection) that revolutionizes electrical products and semiconductor device performance, energy efficiency and reliability. AmberSemi is delivering a generational architecture upgrade to power infrastructure globally.
Tell us about your company?
AmberSemi is a fabless semiconductor company developing next-generation power conversion, control and protection solutions. Its patented innovations aim to improve power management across industries, including data centers, networking, telecom, industrial manufacturing, and consumer electronics. AmberSemi develops power architectures that move beyond traditional passive-analog technologies, incorporating embedded intelligence for greater safety and reliability. Headquartered in Dublin, California, the company is a member of the National Electrical Manufacturers Association and the Global Semiconductor Alliance. AmberSemi has received industry recognition, including Time Magazine’s Best Inventions (2021), Fast Company’s Next Big Thing in Tech (2022), the Edison Award for Innovation (2023 Gold & 2024 Silver), recognition as one of the top four Global Semiconductor Alliances’ Startups to Watch (2023) and a spot on EE Times’ 100 Startups Worth Watching (2024).
With 49 granted US patents and over 50 pending, the company’s breakthroughs in digital control of electricity yielded two core technologies:
Next-gen power conversion enabled by an active architecture that utilizes digital control and precision sensing, delivering superior conversion efficiency, power density, and performance.
A first-of-its-kind power switching and protection technology enabled by direct digital control and precision sensing, delivering intelligent power control and faster protection for higher reliability and better uptime.
These technologies are being productized to upgrade power architecture across trillions of devices globally and enable disruptive “killer app” enhancements to AI, EV, data center power and more.
The company’s first two commercial semiconductor lines for AC-to-DC Conversion and Switch Control and Protection will be available later this year and next, respectively.
What problems are you solving?
AI data centers are a quintessential example of the problem with today’s electrical architecture. The industry is driven by the development and use of large language models, and advanced GPU/CPU chips enable this processing to be done faster and more effectively than ever before. The challenge is that these new chips require much more power. Generation of power is a major issue, and many new technologies are being developed to fix this through new sources of power generation, such as nuclear, fusion, fuel cells, etc. This focus is pushing hyperscalers to build new facilities that better secure the energy needed to power their processors. Yet the current power conversion architecture in every server is highly inefficient, like a leaky pipe that never gives the water pressure you want.
Industries like data centers, telecommunications, and networking require advanced power management to keep pace with technological developments, like AI. Current power architectures are unable to scale efficiently to meet the increasing power demands of these advanced processors, and these issues further multiply as power demand increases. Amber’s unique approach incorporates semiconductor physics, materials science, and advanced packaging to solve these issues in a revolutionary approach to power delivery.
AmberSemi’s announced 50VDC-to-0.8VDC conversion for AI data centers removes conversion steps, reducing inefficiency by half, delivering 9% more power from the rack directly to the AI chip, and saving an estimated $4 billion annually. (Each 1% efficiency improvement represents half a billion dollars in savings in the US and a reduction of 1.7M tons of CO2.)
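For a quick sanity check, here is simple arithmetic on the company's stated figures; the CO2 line assumes the per-point reduction scales linearly.

```python
# Sanity check of the quoted savings, using the figures stated above:
# each 1% efficiency improvement ~= $0.5B/year US savings and 1.7M tons of CO2.
efficiency_gain_pct = 9
savings_per_pct_usd = 0.5e9
co2_per_pct_tons = 1.7e6          # assumed to scale linearly with efficiency gain

print(f"~${efficiency_gain_pct * savings_per_pct_usd / 1e9:.1f}B per year")       # ~$4.5B
print(f"~{efficiency_gain_pct * co2_per_pct_tons / 1e6:.1f}M tons CO2 avoided")   # ~15.3M tons
```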
As industries move toward more modern fully solid-state power systems and smarter electrical integration, AmberSemi’s innovations sit at the center of this transition, contributing to a more sustainable, higher-performance electrical infrastructure.
What application areas are your strongest?
AmberSemi’s breakthroughs in power management strongly serve multiple sectors, including data centers, industrial, automotive, commercial/residential building applications and more.
Data Centers / Industrial Protection: Data centers and industrial factories benefit from digitized, more intelligent power. Data center or factory operating downtime triggered by transient events, like short circuits, is a major concern. A short circuit event in one location cascades down the line, tripping all breakers, taking down the system and costing thousands to millions of dollars per hour. AmberSemi’s extremely fast (3,000x faster than today’s standard) arc-free solid-state switching control and protection platform isolates the event to a single, initial circuit breaker/motor location, directly stopping cascade effects.
AmberSemi’s breakthrough technologies are being applied to solve an even more critical problem today: the accelerating power crisis driven by AI data centers’ power demands. Amber’s power conversion solutions remove conversion steps from the data center wall, through the rack, down to the CPU/GPU. By removing steps, AmberSemi lifts efficiency rates dramatically – an estimated 50% improvement over current losses. Most importantly, AmberSemi will deliver increases in power density, enabling more AI computing.
Commercial/Residential: Buildings can integrate smart circuit breakers and connected electrical systems and products to improve safety and efficiency and add significant IoT features. These Amber-powered products support access control, fire control, smart building automation and security systems, delivering not just reliable power management but also enhanced environmental and energy awareness and more. Essentially, buildings gain a network of intelligent monitoring points powered by Amber, enhancing security, automation management and operational oversight.
What keeps your customers up at night?
Often, internal innovation at established companies is limited by headwinds from current business priorities, such as existing technologies and business models with established financials.
Our customers are hungry for new innovation from technology providers, such as semiconductor companies, to give them solutions they can utilize. If innovation is limited by legacy approaches to problems, there won’t be the revolutionary leaps forward that the industry desperately seeks. Amber’s out-of-the-box thinking, unencumbered by history, allows us to work with these innovative, leading-edge customers to deliver the solutions they need.
AmberSemi’s disruptive opportunity is to deliver generational power architecture innovations that modernize legacy, established power systems:
Protection Products: Semiconductors provide tremendous benefits, yet the challenge is making the transition from manufacturing mechanical products to PCB-based electronic products. Amber’s new solid-state power architecture dramatically simplifies this development effort through a single, flexible IC that acts as the heart of these systems for sensing and control, without compromising the customer’s own core IP.
Data-Center Products: The industry seeks more AI computing from CPUs/GPUs in data centers, yet current power infrastructure is highly inefficient, creating cooling issues in today’s servers. Simultaneously, AI GPU/CPU manufacturers are working on faster processing, but the new chips’ power demands aren’t supported by existing infrastructure.
Amber solves these issues through its high-density vertical power delivery solution that drastically decreases wasted energy, while increasing power available to AI chips for more computing.
What does the competitive landscape look like and how do you differentiate?
Protection Products: There is no direct competition for what Amber is providing to this market – semiconductor-based switch control and protection.
The market is currently focused on electromechanical architectures, but there is already a shift toward using semiconductors instead. Amber’s products don’t compete directly with any existing product, as they are purpose-built devices for specific markets and applications that simplify and fix some of the native challenges of standard parts, such as microcontrollers.
In fact, Amber is partnering with major semiconductor companies that manufacture power devices such as Si / SiC MOSFETs, IGBTs and more. We provide a strategic pathway for them to increase sales in general and tap into new blue ocean markets.
Data Centers: We are currently at the limit of today’s power delivery architecture due to fundamental issues, such as inefficiencies from pushing so much current across the server board. Vertical power delivery is the only viable solution recognized by the market to meet the ever-increasing power demand.
Currently there are no vertical solutions available on the market. While we see some young companies investigating these opportunities and developing new technologies, from what we can see and hear from our customers, there are no complete solutions on the horizon from these companies – except from AmberSemi.
Amber is developing a very disruptive, complete 50V-to-0.8V solution with vertical power delivery that aims both to transform power efficiency from the rack to the load and to deliver power directly to the CPU/GPU for substantially more AI computing.
What new features/technology are you working on?
AmberSemi recently announced our new third product line, which is now under development, targeting enhanced power efficiency and power density for AI data centers. The category is one of the hottest – if not the hottest – tech categories today, due to the demand for more AI computing. Yet the inherent challenge of supplying enough power within data centers limits AI computing. CPUs and GPUs in AI data center server racks require from 20x to as much as 40x the power for their high-performance computing, which current data centers can’t provide. Getting to more AI computing today essentially means building new data centers, as retrofittable solutions for current data centers are not available.
AmberSemi’s complete solution to convert high-voltage DC to low-voltage DC vertically at the processors solves the current efficiency issues and provides a scalable path for the future. We enhance efficiency at the server rack level, which translates to increased power density and more power delivered directly to the chips, enabling more AI computing.
The announcement has garnered significant market attention from the largest tech players on earth, because of the very disruptive nature of this recent breakthrough. In addition, this comes on the heels of the successful validation and commercialization of Amber’s first two product lines: dramatically smaller AC-DC conversion and the first-of-its-kind solid-state switch control and protection. Both will be available on the market this year and next, respectively.
How do customers normally engage with your company?
The Amber team’s significant experience across key markets helps identify weaknesses in current semiconductor offerings and opportunities across the range of electrical product applications. Amber uses this expertise to define how our parts perform and how best to simplify and improve designs. We work directly with our customers on these requirements in developing our solutions to deliver significant improvements in their products. This process allows them to differentiate and improve their products versus current offerings and better protect their IP.
Today, AmberSemi has a pool of 50 or so Alpha / Beta customers who are actively engaged with us, have tested and validated our technologies or plan to in the near future.
Author: Niranjan Sitapure, AI Product Manager, Siemens EDA
We are at a pivotal point in Electronic Design Automation (EDA), as the semiconductors and PCB systems that underpin critical technologies, such as AI, 5G, autonomous systems, and edge computing, grow increasingly complex. The traditional EDA workflow, which includes architecture design, RTL coding, simulation, layout, verification, and sign-off, is fragmented, causing error-prone handoffs, delayed communication loops, and other roadblocks. These inefficiencies prolong cycles, drive up costs, and intensify pressure on limited engineering talent. The industry urgently needs intelligent, automated, parallelized EDA workflows to overcome these challenges to keep up with increasingly aggressive market demands.
AI as the Critical Enabler of Next-Gen EDA
AI is becoming essential across industries, and EDA is no exception. Historically, Machine Learning (ML) and Reinforcement Learning (RL) have enhanced tasks like layout optimization, simulation acceleration, macro placement, and others, in addition to leveraging GPUs for performance boosts. Generative AI has recently emerged, enhancing code generation, test bench synthesis, and documentation, yet still requiring significant human oversight. True transformation in EDA demands more autonomous AI solutions capable of reasoning, acting, and iterating independently. Fully autonomous AI agents in EDA will evolve progressively in waves, each building upon prior innovations and demanding multi-domain expertise. We have identified three distinct waves of EDA AI agents, each unlocking greater autonomy and transformative capabilities.
Wave 1: Task-specific AI agents
The first wave of AI-powered transformation in EDA introduces task-specific AI agents designed to manage repetitive or technically demanding tasks within the workflow. Imagine an AI agent acting as a log file analyst, capable of scanning vast amounts of verification data, summarizing results, and suggesting corrective actions. Or consider a routing assistant that works alongside designers to optimize floor plans and layout constraints iteratively. These agents are tightly integrated with existing EDA tools and often leverage large language models (LLMs) to interpret instructions, standard operating protocols, and other comments in natural language. Due to their specialized nature, each phase in the EDA workflow might require multiple such agents. Consequently, orchestration becomes key: a human engineer oversees this fleet of AI agents and must intervene frequently to guide these specialized agents.
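As a minimal illustration of what a Wave 1 task-specific agent might look like, the sketch below implements a toy log-file analyst: it scans verification logs for known failure signatures, tallies them, and suggests next steps from a small rule table. The directory name, patterns, and suggested actions are hypothetical placeholders; a production agent would typically wrap an LLM and the EDA tool's own interfaces rather than bare regular expressions.

```python
# Toy "log-file analyst" agent (Wave 1): scan verification logs, summarize
# findings, and suggest next steps from a rule table. File paths, patterns, and
# suggestions are hypothetical placeholders, not any vendor's tool interface.
import re
from collections import Counter
from pathlib import Path

RULES = {  # message pattern -> suggested corrective action (illustrative only)
    r"UVM_ERROR": "Re-run the failing seed with waveform dumping enabled.",
    r"timeout": "Check for a hung handshake or a missing clock/reset.",
    r"assertion .* failed": "Inspect the named assertion and the driving stimulus.",
}

def analyze_logs(log_dir: str) -> dict:
    counts, suggestions = Counter(), set()
    for log_file in Path(log_dir).glob("*.log"):
        for line in log_file.read_text(errors="ignore").splitlines():
            for pattern, action in RULES.items():
                if re.search(pattern, line, re.IGNORECASE):
                    counts[pattern] += 1
                    suggestions.add(action)
    return {"findings": dict(counts), "suggested_actions": sorted(suggestions)}

if __name__ == "__main__":
    print(analyze_logs("./regression_logs"))   # hypothetical log directory
```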
Wave 2: Increasingly autonomous agentic AI
Wave two introduces agentic AI, which are solutions that are no longer just reactive but proactive, capable of sophisticated reasoning, planning, and self-improvement. These differ from AI agents, which specialize in a specific task. Agentic AI solutions can handle entire EDA phases independently. For example, an agentic AI assigned to physical verification can conduct DRC and LVS checks, identify violations, and autonomously correct them by modifying the design layers—all with minimal human input. These Agentic AI solutions also communicate with one another, passing along design changes, iterating in real time, and aligning the downstream steps accordingly, making these solutions even more powerful as they enable an iterative improvement cycle.
Furthermore, Agentic AI solutions can deploy multiple task-specific AI agents described in Wave 1, thereby not requiring fine-tuning for every specific task and instead requiring fine-tuning of reasoning, planning, reflection, and orchestration capabilities. An EDA engineer or a purpose-built supervisory agentic AI typically oversees this orchestration, adapting the workflow, resolving conflicts, and optimizing outputs. Essentially, imagine a 24/7 design assistant that never sleeps, constantly refining designs for performance, power, and area. This is not a future vision—it is a near-term possibility.
Wave 3: Collective power of multiple AI agents
The third wave isn’t just about making individual AI agents or an agentic AI solution smarter—it’s about scaling this intelligence. A powerful multiplying effect is unlocked when we move from a 1:1 relationship between engineer and AI solution to a 1:10 or even 1:100 ratio. Imagine if, instead of relying on a single instance of an AI agent or even an agentic AI to solve a problem, you could deploy dozens or hundreds of agents, each exploring different architectures, optimizations, or verification strategies in parallel. Each instance follows its own plan, guided by different trade-offs (e.g., power vs. area in PPA optimization). At this stage, the human role evolves from direct executor to strategic supervisor, reviewing outcomes, selecting the optimal solution, or suggesting fundamentally different design approaches. The result is exponential acceleration in both new product innovation and design cycles for new and existing products. Problems that took weeks to debug can now be explored from multiple angles in hours. Design ideas previously dismissed due to resource constraints can now be pursued in parallel, uncovering groundbreaking opportunities that would have otherwise remained undiscovered.
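To make the 1:100 scaling idea concrete, here is a minimal sketch that launches many hypothetical exploration agents in parallel, each biased toward a different PPA trade-off, and leaves the final selection to the human supervisor. The evaluate_design() function is a random stand-in for real synthesis, place-and-route, and verification runs, not an actual tool interface.

```python
# Minimal sketch of Wave 3 scaling: run many exploration "agents" in parallel,
# each with a different PPA trade-off, then pick the best outcome. The
# evaluate_design() function is a placeholder for real tool runs driven by
# agentic AI, not an actual EDA tool interface.
import random
from concurrent.futures import ProcessPoolExecutor

def evaluate_design(weights: tuple) -> dict:
    w_power, w_perf, w_area = weights
    rng = random.Random(hash(weights) & 0xFFFF)        # deterministic per agent
    power, perf, area = rng.uniform(1, 2), rng.uniform(1, 2), rng.uniform(1, 2)
    score = w_perf * perf - w_power * power - w_area * area
    return {"weights": weights, "ppa": (power, perf, area), "score": score}

if __name__ == "__main__":
    # 100 agents, each biased toward a different corner of the PPA space.
    rng = random.Random(0)
    agent_weights = [tuple(rng.uniform(0.1, 1.0) for _ in range(3)) for _ in range(100)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(evaluate_design, agent_weights))
    best = max(results, key=lambda r: r["score"])
    print("Best candidate:", best)               # human reviews and selects
```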
Both individuals and organizations will reap the transformative benefits of AI
As we enter this new EDA era, agentic solutions are going to shape how individuals and organizations work. At the individual level, chip designers gain productivity as AI takes over repetitive tasks, freeing them to focus on creative problem-solving. Micro-teams of AI agents will enable rapid exploration of multiple design scenarios, uncovering superior designs and speeding up tape-outs, thereby augmenting human expertise with faster AI-led execution. At an organizational level, AI-driven EDA solutions reduce time-to-market, accelerate innovation, and foster rapid development of cutting-edge products. More importantly, AI democratizes expertise across teams and the entire organization, ensuring consistent, high-quality designs regardless of individual experience and enhancing competitiveness. In conclusion, EDA AI will transform workflows in the next few years, greatly boosting productivity and innovation. Discover how Siemens EDA’s AI-powered portfolio can help you transform your workflow HERE.
It is well-known that AI is upending conventional wisdom for system design. Workload-specific processor configurations are growing at an exponential rate. Along with this comes exponential growth in data bandwidth needs, creating urgency for 1.6T Ethernet. A recent SemiWiki webinar dove into these issues, with Synopsys and Samtec exploring many of the challenges that must be faced on the road to extreme data bandwidth.
An example is the critical role of 224G SerDes in enabling high-speed data transfers and the importance of rigorous interoperability testing across all parts of the channel. Other topics are covered, including a look at what comes after 1.6 Terabits per second (Tbps). A replay link is coming but first let’s look at what’s discussed in the webinar – Achieving Seamless 1.6 Tbps Interoperability with Samtec and Synopsys.
The Speakers
The quality of most webinars is heavily influenced by the expertise of the speakers. For this webinar, there are two knowledgeable, articulate presenters who provide a great deal of valuable information.
Madhumita Sanyal
Madhumita Sanyal is the director of technical product management for the high-speed Ethernet, PCIe, and D2D IP portfolio at Synopsys. She has over 20 years of experience in ASIC design and the application of logic libraries, embedded memories, mixed-signal IPs, and design methodology for SoCs in high-performance computing, automotive, and mobile markets.
Matthew Burns
Matt Burns develops go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 25 years, he has been a leader in design, applications engineering, technical sales and marketing in the telecommunications, medical and electronic components industries. He currently serves as Secretary at PICMG.
Topics Covered
The webinar focuses on the requirements for SoC design to achieve interoperability for the high-bandwidth, high-performance computing required by AI/ML workloads. Madhumita and Matt cover a lot of ground, but they get through all of it in about 40 minutes. Very efficient. This is followed by approximately 15 minutes of live questions from the webinar audience. The topics covered are:
Introduction
Triggering new protocols like UAL for scale-up and UEC for scale-out, with an underlying line rate of 224 Gbps
Why ecosystem enablement is important
What capabilities in 224G SerDes can help achieve industry requirements for scale-up and scale-out
Interconnect needed to build 224G data center system topologies
Interop setup and demo
448G channel look-ahead
Synopsys summary
Samtec summary
Q&A
Before I get into some more details, a definition of scale-up and scale-out would be useful.
Scale-up, also known as vertical scaling, involves adding more resources to existing infrastructure to handle increased workloads. Scale-out, also known as horizontal scaling, involves distributing workloads across multiple resources.
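As a toy illustration of the difference, with entirely made-up capacity numbers, scale-up grows a single node while scale-out grows the number of nodes:

```python
# Toy illustration with made-up numbers: scale-up grows one node,
# scale-out grows the number of nodes.

# Scale-up (vertical): add resources to the existing node.
node = {"accelerators": 8, "memory_tb": 2}
node["accelerators"] *= 2   # e.g., double the accelerators in the same chassis
node["memory_tb"] *= 2

# Scale-out (horizontal): add more nodes and spread the workload across them.
cluster = [{"accelerators": 8, "memory_tb": 2} for _ in range(16)]
total_accelerators = sum(n["accelerators"] for n in cluster)

print("Scaled-up node:", node)
print("Scaled-out cluster accelerators:", total_accelerators)
```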
What follows are some highlights of the topics covered in the webinar.
Madhumita began with an overview of the substantial bandwidth demands of new AI architectures. She referred to the bandwidth gap, as shown in the figure on the right. This increase cannot be addressed by doing more of what was done before. New architectures supported by new protocols are required as systems are both scaled-up and scaled-out.
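Some rough, back-of-the-envelope arithmetic (illustrative figures only) shows why 224G-class SerDes are the natural building block: a 1.6T port decomposes into eight lanes carrying roughly 200 Gbps each, whereas the previous 112G-class lanes would need sixteen.

```python
# Back-of-the-envelope lane math with illustrative, rounded figures.

port_rate_gbps = 1600        # target Ethernet port rate (1.6T)
lane_rate_gbps = 200         # approximate net rate of a 224G-class SerDes lane
print("Lanes per 1.6T port:", port_rate_gbps // lane_rate_gbps)        # 8

prev_lane_rate_gbps = 100    # approximate net rate of a 112G-class lane
print("Lanes with 112G-class SerDes:", port_rate_gbps // prev_lane_rate_gbps)  # 16
```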
She then provides a lot of detail on how approaches such as Ultra-Ethernet and UALink_200 can help address the challenges ahead, describing various architectures that can meet both scale-up and scale-out requirements. She also discusses the characteristics of passive copper cable, active copper cable, and optical modules, and where each fits.
Both short and long reach capabilities for 224G channels are explored in detail, with specific requirements for SerDes technology. Architectural details and waveforms are shared. She also covers the specific requirements of the simulation environment and of the models that drive the process. Madhumita concludes with an overview of the interoperability requirements of the Synopsys SerDes and the palette of solutions offered by Samtec to complete the implementation. She shares a list of the technologies used for interoperability validation (a short illustrative sketch of the pass/fail math behind these loopback checks follows the list):
224G loopback with Samtec Si-Fly® HD Near Chip Cable Assembly 64 port, 40dB+ channels
224G loopback with 1m DAC + MCBs
224G electrical loopback with Samtec Si-Fly® HD Near Chip Cable Assembly 32 port, 40dB+ channels
224G electrical loopback with Samtec Bulls Eye® ISI Evaluation Boards
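For context on what such loopback validation is ultimately checking, here is a toy sketch of the pass/fail math: estimate a raw bit error ratio from error and bit counters and compare it to a pre-FEC budget. The counter values and threshold are illustrative only, not figures from the webinar or from any specification.

```python
# Toy pass/fail check behind a loopback test: compare an estimated raw bit
# error ratio against a pre-FEC budget. All numbers here are illustrative.

def ber(errors, bits):
    """Estimate bit error ratio from hardware error/bit counters."""
    return errors / bits

pre_fec_budget = 1e-4                       # illustrative pre-FEC BER budget
observed = ber(errors=3_200, bits=1e12)     # hypothetical counter readings
print("PASS" if observed < pre_fec_budget else "FAIL", f"(BER = {observed:.2e})")
```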
Matt then discusses the changes Samtec is seeing in system topologies in the data center, including disaggregated memory. Scalable, flexible, high-performance interconnect becomes a critical requirement, and this is where Samtec is focused. Matt begins with the diagram below, which summarizes the various Samtec interconnect products that facilitate 224 Gbps PAM4 operation from the front panel to the backplane.
He spends some time explaining the various components in this diagram, both optical and copper as well as the benefits of Samtec Flyover® technology. The benefits of Samtec’s Si-Fly® HD co-packaged copper interconnects are also discussed. Some of the features Matt discusses in detail include:
Ultra-high-density co-packaged substrate-to-cable
Highest density interconnect supporting 224 Gbps PAM4 (170 DP/in²)
Designed for high density interconnect (HDI) & package substrates
Matt then provides detailed performance data for various Samtec interconnect configurations along with the architectural benefits of each approach. Both short and long reach configurations are discussed. Matt describes some of the work Samtec is doing with its OEM and OSAT partners to prove out various configurations.
Matt then provides details of a recent live demonstration with Synopsys to illustrate the interoperability of Synopsys communication IP and Samtec channel solutions.
Joint Synopsys/Samtec Demo @ SC24
Matt concludes with a discussion of the work underway for 448 Gbps channels. While still in development, Matt shares some details of what to expect going forward. Both Matt and Madhumita then finish with an overview of the capabilities of each company to address high-speed channels, both now and in the future. This is followed by a spirited Q&A session with questions from the live audience.
It Takes a Village
I had the opportunity to chat with Matt Burns a bit after the webinar. I’ve known Matt for a long time and always enjoy hearing his perspectives on the industry since Samtec typically looks at system design challenges a bit differently than a chip company. Matt began our discussion with this statement:
“If I’m an OEM or an ODM and I’m trying to implement 1.6T ports, there’s no one solution provider I can go to for the whole thing. It takes a village.”
Matt went on to describe the types of IP, physical channels, simulation models and methodology required to get the job done. In this situation, interoperability is key and that’s why working with leading companies like Synopsys is so important to Samtec. This is how real-world solutions are proven and adopted in the market. Matt felt a lot of the details of this formidable task are covered in the webinar, so now it’s time to access the webinar replay.
To Learn More
If AI is part of your next design, you will face the need for high-performance channels and the Samtec/Synopsys webinar delivers a lot of the details you will need. You can access the webinar replay here. And that’s the webinar – Achieving Seamless 1.6 Tbps Interoperability with Samtec and Synopsys.