For more than 20 years, Jason has delivered high-impact results in executive management. Prior to Equal1, he held several executive management positions at Analog Devices, Hittite and Farren Technology. In these roles, Jason built and managed large teams comprising sales, operations and finance and was responsible for significant revenue targets. He also drove AI for predictive maintenance, making the Internet of Things (IoT) a reality by using cutting-edge sensors and Artificial Intelligence (AI) to save factory owners millions of dollars in unplanned downtime.
Tell us about your company.
Equal1 is a global leader in silicon-based quantum computing. With a world-class team spanning Ireland, the US, Canada, Romania, and the Netherlands, we are focused on building the future of scalable quantum computing through our patented Quantum System-on-Chip (QSoC) technology.
Our UnityQ architecture is the world’s first hybrid quantum-classical chip, integrating all components of a quantum computer onto a single piece of silicon. This breakthrough enables cost-effective, energy-efficient quantum computing in a compact, rack-mountable form, ready for deployment in existing data center infrastructure.
Our approach radically reduces the size and cost of quantum systems, preparing them to unlock real-world applications across sectors such as pharmaceuticals and finance.
Our first commercial product, the Bell-1 Quantum Server, is now shipping to customers worldwide.
What problems are you solving?
Quantum computing today faces two major barriers – scalability and deployability. Most current systems are bulky, power-hungry and confined to lab environments, making them impractical for real-world adoption.
At Equal1 we’re solving this by delivering a fully integrated, silicon-based quantum computing platform that is compact, energy-efficient and manufacturable using standard semiconductor processes. Our UnityQ QSoC architecture eliminates the need for complex infrastructure, enabling quantum computing to be deployed in existing data centers and HPC environments.
We’re also addressing the challenge of hybrid processing by embedding quantum and classical components on a single chip. This allows for real-time quantum-classical interaction, which is a critical capability for solving complex problems in data-sensitive sectors like pharmaceuticals and finance.
What application areas are your strongest?
Our technology is designed for application areas where quantum-classical hybrid computing has the potential to deliver significant future impact. While commercial applications are still in early stages industry-wide, our QSoC platform is especially well-positioned to support:
Pharmaceuticals: simulating molecular interactions and accelerating early-stage drug discovery.
Finance: exploring advanced optimisation and risk modelling techniques.
Data centre efficiency: enabling more energy-efficient computation and reducing the environmental footprint of large-scale infrastructure.
These sectors share common characteristics of data intensity, high computational complexity and sensitivity, making them prime areas for early hybrid quantum-classical exploration. Equal1’s compact, energy-efficient systems are built to integrate easily into existing infrastructure, enabling customers to prepare for and experiment with quantum workloads today.
What keeps your customers up at night?
Our customers – along with many closely watching the evolution of quantum computing – are excited about the promise of this transformative technology, but they also face uncertainty about when and how that transformation will take place. There are differing views on when quantum computing will begin to deliver real-world value, and on how quantum systems will fit into existing operations. Many current quantum solutions are far from enterprise-ready. For organisations in highly regulated, data-sensitive industries like finance and pharmaceuticals, the stakes are even higher.
At Equal1, we’re taking a grounded, practical approach. We see quantum not as a replacement for classical computing, but as a complement – an accelerator within high-performance computing environments. And importantly, quantum is scaling faster than classical computing ever did, bringing us closer to practical applications than many once thought possible.
What does the competitive landscape look like and how do you differentiate?
The quantum computing space is full of exciting and diverse approaches, but what makes Equal1 stand out is our focus on silicon-based quantum technology. We’re proud to be the first company to launch a silicon-based quantum server, Bell-1 – a compact, rack-mounted system designed for deployment in real-world data centers.
We’ve also recently achieved a major milestone by validating a commercial CMOS process for quantum devices. This proves that quantum computing can be built using the same mature, scalable technology behind classical semiconductors, paving the way for what we call Quantum 2.0: practical, integrated and ready to scale. While others are still working with complex, custom platforms, we’re focused on delivering quantum solutions that fit into today’s data centers and tomorrow’s high-performance computing infrastructure.
What new features/technology are you working on?
We’re focused on pushing the boundaries of what’s possible with our silicon-based architecture. Our UnityQ Quantum System on Chip Processor roadmap will deliver millions of physical qubits, uniting quantum and classical components on a single chip using commercial semiconductor manufacturing processes.
To bring these next-generation capabilities to life, we’re working closely with industry and research partners who share our vision for practical, scalable quantum computing. These collaborations are critical in helping us accelerate development and ensure our technology addresses real-world needs from day one. We’re very excited for what’s to come for Equal1.
Dan is joined by Bharat Tailor who is responsible for the Alphawave standard connectivity products portfolio focused on DSP chipsets enabling AI data center interconnects. He is a veteran of the high-speed connectivity semiconductor industry having participated in the evolution of connectivity technologies from 10Gbps to the current discussions on 3.2T and beyond.
Dan explores the many demands of high-speed, high-density and low-power connectivity with Bharat. Driven by ever-growing AI deployment, Bharat explains some of the many constraints that must be met for next-generation systems. He describes Alphawave’s move to becoming a semiconductor supplier and how that has facilitated a very broad product portfolio to address the many needs ahead.
He describes how Alphawave’s silicon IP, chiplets, custom silicon and connectivity products work together to address the demands of applications such as those found in hyperscale data centers. He explains that three out of four engagements are now focused on chip delivery at Alphawave Semi. Bharat also discusses the path to 448 Gbps channels and some of the technical problems that must be solved.
The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
In this episode of the Semiconductor Insiders video series, Dan is joined by Zackary Glazewski, an ML Engineer at Alpha Design AI. Dan explores the challenges of waveform debugging with Zack, who explains how the process is done today and the shortcomings of existing approaches. He explains why current approaches are time-consuming and error-prone. A key required element is linking observed waveform behavior to the actual circuit to find the real issue. Zack describes how Alpha Design AI’s unique AI Agents address these challenges.
The views, thoughts, and opinions expressed in these videos belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.
Improved profitability and competitiveness are at the very heart of every enterprise. Achievements like this are usually attributed to corporate culture. Sometimes, it’s just being in the right place at the right time. Some organizations make huge investments with top-tier consulting companies to help find their way.
Recently Infinisim published a white paper about clock jitter, why it’s a problem and how to minimize its effects. One would expect this kind of information to help with first-time silicon success, and it does. But the white paper also explains how a good clocking strategy will pave the way to broader corporate success. A link is coming for this important white paper, but first let’s examine how Infinisim enables a path to greater profitability and a competitive edge.
The Technology Story
Decreasing supply voltages and increasing operating frequencies create substantial design challenges. As the stakes go up and the margins become smaller, challenges that were once manageable now pose significant risks for performance, yield, and long-term reliability. In the middle of all this is a subtle but significant disruptor: clock jitter.
Clock jitter refers to the deviation of a clock signal from its ideal timing. In digital systems, clocks are essential for synchronizing operations and ensuring reliable logic propagation. Even minor variations can lead to timing violations and catastrophic failures in high-performance designs. This white paper explains the various contributors to clock jitter, which include timing variations from the PLL and the power delivery network (PDN). PDN-induced jitter is the larger problem, as it can vary across the chip.
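The idea of "deviation from ideal timing" can be made concrete with a toy calculation. The sketch below is purely illustrative (it is not Infinisim's methodology, and the edge timestamps are hypothetical): given rising-edge times for a nominal 1 GHz clock, it computes per-cycle periods and derives peak-to-peak and RMS period jitter.

```python
# Illustrative period-jitter calculation (toy example, hypothetical data).
# A 1 GHz clock has an ideal period of 1 ns; jitter is the deviation of
# each measured cycle's period from that ideal.

ideal_period_ns = 1.0  # assumed 1 GHz clock

# Hypothetical rising-edge timestamps in nanoseconds, with small deviations
edges_ns = [0.000, 1.002, 1.998, 3.005, 3.997, 5.001]

# Per-cycle periods: the difference between consecutive edges
periods = [b - a for a, b in zip(edges_ns, edges_ns[1:])]
deviations = [p - ideal_period_ns for p in periods]

# Peak-to-peak jitter: spread between the longest and shortest cycle
peak_to_peak_jitter = max(periods) - min(periods)
# RMS jitter: root-mean-square of the deviations from the ideal period
rms_jitter = (sum(d * d for d in deviations) / len(deviations)) ** 0.5

print(f"peak-to-peak jitter: {peak_to_peak_jitter * 1e3:.1f} ps")
print(f"RMS jitter: {rms_jitter * 1e3:.1f} ps")
```

In a real design flow these statistics would come from SPICE-accurate simulation of the full clock network under PDN noise, across many scenarios, which is exactly the scale problem the white paper describes.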
The ways this happens, and their implications, are explained in detail. The main impacts of clock jitter are slower chip performance and lower yield. More on these effects in a moment.
The white paper also explains why traditional solutions to managing clock jitter fall short. A lot of detail and analysis is shared here. The fundamental point is that finding and managing clock jitter with conventional tools is impractical. Detailed SPICE-level accuracy is needed across many, many scenarios. There is simply not enough time to do the work needed with the required accuracy in a typical design schedule.
And so, the answer to this problem has been to develop design margins. If the team stays within these margins, the chances of catastrophic timing variation due to clock jitter are low. But, as they say, there is no free lunch. As more advanced technology puts higher demands on performance and timing, design margins tend to grow to the point where substantial compromises are made. This is the second part of the story.
The Business Story
The white paper explores several ways clock jitter impacts the overall competitiveness and profitability of an enterprise. An analysis is offered that explores what happens to the lifetime profitability of a design when clock jitter creeps in. You will be able to see the details, but the overall impact is measured in millions of dollars.
Expanding design margins are also discussed. Overly pessimistic design margins leave performance and profitability on the table. Impacts include lower speed, resulting in a lack of competitiveness and lost market share. Paying high fees for advanced technology and not using all its capabilities also impacts the bottom line. The white paper offers many details that are important to consider.
Solving clock jitter with new technology is a major focus of Infinisim. The company is the industry leader in SoC clock verification for high-performance designs. At advanced process nodes, where nanometer-scale effects dominate, Infinisim enables design teams to push clock performance further than traditional tools can reach. The white paper goes into the details of how Infinisim’s platform delivers game-changing technology, opening up greater profitability and competitiveness.
You will be able to read all the details in the white paper. Below is a graphic that provides a high-level view of the capabilities Infinisim’s platform delivers.
Infinisim’s Comprehensive Clock Solution
To Learn More
I’ve just provided a high-level overview of what this new white paper from Infinisim offers. There is much more to learn. If improved competitiveness and profitability appeal to you, you need to get your own copy of this white paper.
As it celebrates its 20th anniversary in 2025, Andes Technology stands as a defining force in the RISC-V movement—an open computing revolution. What began in 2005 as a bold vision to deliver high-efficiency Reduced Instruction Set Computing (RISC) processor IP has evolved into a company whose innovations power billions of devices worldwide. And Andes’ impact extends well beyond hardware. Through sustained investment in open-source software and active leadership in industry initiatives, the company has played a key role in strengthening the RISC-V ecosystem and ensuring seamless integration between architecture and software.
Frankwell Lin, Chairman and CEO of Andes Technology, gave a keynote presentation at Andes RISC-V CON 2025. This annual conference, convened by Andes, has grown steadily in both attendance and speaker participation since its inception, reflecting the company’s increasing market adoption and growing influence on the technology front. The following is a synthesis of Lin’s talk.
Early Commitment to RISC-V
From the outset, Andes focused on delivering high-efficiency CPU IP based on the RISC philosophy. Its proprietary AndeStar™ ISA and development platform, AndeSight™ IDE, formed the technological bedrock for its early success, offering customers a flexible and scalable foundation for SoC design.
In 2016, Andes became a founding member of RISC-V International, well before the open ISA gained mainstream industry momentum. That same year, it was recognized by TSMC with a “New Partner of the Year” award, signaling the broader ecosystem’s confidence in Andes’ direction.
By 2018, Andes had crossed the threshold of one billion SoCs shipped with its CPU IP embedded. The company has since shipped more than 16 billion chips through its customers’ designs, with over two billion units delivered in 2024 alone.
From Innovation to Market Leadership
Andes introduced its first RISC-V processor IP in 2017, marking the beginning of a wave of technical innovation. It soon followed up with the industry’s first RISC-V Packed-SIMD CPU. The NX27V, the first commercially available RISC-V Vector CPU IP, was adopted by Meta shortly after the IP was launched in 2019. These breakthroughs demonstrated that RISC-V could address high-performance computing needs as effectively as proprietary architectures.
The company’s momentum continued with the release of ISO 26262–compliant safety cores in 2022, followed by a multicore vector processor in 2023. In 2024, Andes became the first to introduce a RISC-V processor compliant with ISO 26262’s stringent ASIL-D safety requirements.
Beyond its product offerings, Andes plays a leading role in the governance and evolution of the RISC-V ISA. As a member of the Board of Directors and Technical Steering Committee at RISC-V International, the company has helped shape major architecture extensions, including packed SIMD, matrix multiplication, fast interrupt handling, and IOPMP for memory protection.
A significant part of Andes’ commitment to standardization is its full alignment with the RVA23 profile, the recently ratified standard that defines a consistent set of features for 64-bit RISC-V application processors. This profile mandates support for key extensions such as vectors and virtualization, helping ensure consistency across platforms. Andes has already incorporated RVA23 compliance into its roadmap, further cementing its position as a leader in forward-compatible, ecosystem-ready processor IP.
Building the Software Foundation
Understanding that hardware alone isn’t enough, Andes has long invested in enabling software. The company is an active contributor and maintainer of several core open-source projects, including GCC, glibc, Linux, U-Boot, and SIMDe. It also supports broader ecosystem efforts through involvement in the RISE project, the Linux Foundation, and the Civil Infrastructure Platform, all aiming to strengthen RISC-V software infrastructure.
By engaging deeply with both the hardware and software layers, Andes ensures that its IP not only meets performance and power goals but also integrates seamlessly into modern toolchains and operating systems.
Sustained Growth and Global Reach
Following its IPO in 2017, Andes has delivered consistent growth with a compound annual revenue growth rate of 26.8 percent. In 2024, it recorded an impressive 30.6 percent growth—largely fueled by RISC-V, which now accounts for 92 percent of its license revenue. The company has signed more than 200 RISC-V commercial license agreements and operates six global design centers, supporting customers across North America, Asia, and Europe. It is the global market leader in RISC-V IP, with over 30% market share, as reported by the market research firm The SHD Group.
A New Identity for a New Era
In April 2025, Andes Technology unveiled a refreshed corporate logo as part of its 20th-anniversary celebrations. The new logo represents the company’s evolution and its commitment to innovation in the RISC-V ecosystem. The “S” in the new logo, stylized as a “5,” reflects its alignment with the RISC-V open-standard movement. Alongside the rebranding, Andes announced the expansion of its headquarters, signaling its continued growth and dedication to serving a global customer base.
Looking Ahead: Innovation with Focus
Andes is a pure-play CPU IP provider with a broad impact across the semiconductor landscape. With deep engagement in safety-certified applications, artificial intelligence, and cloud-edge computing convergence, the company is well poised to shape the future of RISC-V in markets that demand both performance and trust.
Sébastien Dauvé has been CEO of CEA-Leti since July 1, 2021. With over 20 years of experience in microelectronics and their applications—including clean mobility, future medicine, cybersecurity, and power electronics—he has played a key role in advancing digital innovation. A graduate of École Polytechnique and ISAE-SUPAERO, he began his career in defense radar systems before joining CEA-Leti in 2003. There, he led R&D activities in sensors, the Internet of Things, and system integration.
He has successfully designed and implemented strategies to accelerate technology transfer and support startup creation, always promoting cross-disciplinary collaboration to address key societal challenges through innovation. From sensors to wireless communications, his work focuses on combining energy efficiency with high performance to enable responsible, high-impact technological progress.
Tell us about your company.
Located in Europe’s deeptech hotspot of Grenoble, France, CEA-Leti has been pioneering innovation in micro and nanotechnologies since 1967. It is part of the CEA, a Clarivate Top 100 Global Innovator.
CEA-Leti’s model is designed to support companies throughout the innovation lifecycle. Companies of all types and sizes, from startups to major global corporations, have entered into long-term partnerships with CEA-Leti at various stages of their R&D journeys, from proof of concept to scaling new solutions up for volume manufacturing.
Today, CEA-Leti employs 2,000 people and boasts state-of-the-art technology platforms. These include clean rooms capable of processing 300 and 200 mm wafers.
What problems are you solving?
At CEA-Leti, we are addressing some of today’s most pressing technological and societal challenges by developing innovative solutions in microelectronics and nanotechnology. Expanding the “lab-to-fab” concept is at the heart of our mission — we accelerate the industrial adoption of cutting-edge technologies to strengthen Europe’s technological sovereignty. Our work goes beyond performance—we are increasingly placing sustainability at the core of innovation. By combining scientific excellence with a strong industry focus, we help shape a more responsible and competitive future for the tech sector.
What application areas are your strongest?
We are fortunate to master a number of technologies in which we have developed strong expertise, including photonics, healthcare technologies, quantum computing, sensors, advanced wireless communications and power electronics. This expertise enables us to prepare for longer-term technological breakthroughs, for example:
– Neuromorphic computing, in-memory computing and advanced FD-SOI technologies for high-performance, energy-efficient AI;
– Advanced chip-to-chip interconnections, leveraging photonics and heterogeneous integration to enable next-generation computing architectures;
– Miniaturization of systems for organs-on-a-chip, opening up new possibilities for personalized medicine and reducing reliance on animal experimentation.
In all cases, we are increasingly focused on developing solutions with a low environmental impact.
What does the competitive landscape look like and how do you differentiate?
Clearly, the semiconductor field is highly competitive and, above all, fast-paced in terms of innovation. There are only a handful of institutes at this level in the world, and it’s no coincidence that Europe has chosen us to be a key player in the European Chips Act. For over a year now, we have been coordinating a pilot line called FAMES, which has enabled us to expand our resources and, above all, to run large-scale programs for industry. This further strengthens our position.
But that’s still not quite enough. More than ever, we need to strengthen our ties and collaborations with other technological institutes and research centers. We share many common topics with our colleagues in fundamental research at the CEA, have numerous collaborations with CNRS, INRIA, IMT, etc. via the PEPR, and with many other European and even international RTOs. This capacity for collaboration is invaluable and truly enables us to remain at the cutting edge of technology.
There is no IoT without wireless. If you have a great idea for a world-beating IoT device, you must integrate a wireless module (Bluetooth, Zigbee, Wi-Fi, UWB…) into that device. A ready-made companion wireless chip connected to an MCU is one approach, but it will significantly increase unit cost and reduce profits. Products in most IoT markets depend on Systems-on-Chip (SoCs) with embedded wireless to meet cost and low-power targets, a high barrier to entry for non-wireless experts. Prepackaged MAC and modem IP addresses part of the problem but not the RF part. That is why a turnkey solution combining MAC, modem and RF is an important step toward unleashing a flood of new products from the great majority of potential innovators whose only shortcoming is a lack of wireless expertise.
What new products?
If you want to sell billions of products, sell to consumers. It’s a tough market, but differentiated products can be very successful. Wireless earbuds provide an obvious example. You might think that market has already been locked down, but you’d be wrong. The Bluetooth SIG continues to innovate at a furious pace, now offering Auracast for an unprecedentedly easy audio-sharing experience, and soon introducing High Data Throughput (HDT) for even better audio quality with lossless multi-channel capability. Add 3D audio for a more realistic and immersive sound experience. In gaming, low latency improves the gaming experience whether through earbuds or headphones. Consider also that medical, payment, keyless entry, retail, hearing aid, and other applications build on the same low-power Bluetooth communications links.
Meanwhile ZigBee, Thread and Matter, all underpinned by IEEE 802.15.4, are central to home automation. ZigBee is already well established for smart lights, smart thermostats and more. Thread and Matter are emerging options to extend interoperability between devices from various ecosystems. A smart home is likely to require support for some if not all these options and there are boundless opportunities to imagine more devices to further enhance convenience around the home.
Together the Bluetooth and 802.15.4 markets account for over $100B per year today and are expected to grow between 7% and 10% CAGR through 2029. This has to be a very appealing target for product builders.
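As a rough sanity check on those figures, the compound growth works out as follows. Note the base year and horizon are assumptions for illustration (the article says only "over $100B per year today" and "through 2029"); here a $100B base in 2025 and four years of growth are used.

```python
# Back-of-envelope compound growth for the combined Bluetooth/802.15.4 market.
# Assumptions (not from the article): $100B base in 2025, four years to 2029.

base_usd_b = 100.0   # assumed starting market size in billions of USD
years = 4            # assumed growth horizon, 2025 -> 2029

low = base_usd_b * (1.07 ** years)    # 7% CAGR scenario
high = base_usd_b * (1.10 ** years)   # 10% CAGR scenario

print(f"2029 market at 7% CAGR:  ${low:.0f}B")
print(f"2029 market at 10% CAGR: ${high:.0f}B")
```

Under these assumptions the quoted CAGR range implies a combined market of roughly $131B to $146B by 2029.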
Ceva-Waves Links200
If you have both wearables and home automation, you’re probably controlling them through your phone or tablet, maybe even your watch, all through either Bluetooth or 802.15.4. As a consumer you don’t care about protocols; you just want to turn on a light while listening to a playlist through your wireless earbuds. Which means that the wireless component in some of your devices must support all these options. Such multi-protocol support is not uncommon today, in some cases even adding Wi-Fi, cellular and UWB. Here, let’s focus on Bluetooth and 802.15.4.
Ceva pretty much owns the embedded Bluetooth business, providing solutions for many years and shipping over a billion devices per year. They stay very current with Bluetooth SIG releases, and even provide next-generation features ahead of official ratification, now supporting up to Bluetooth 6.0 with LE Audio and Channel Sounding. They were also one of the first providers to qualify for Auracast compatibility. Next-generation additions include low latency for gaming and High Data Throughput (HDT), which will more than triple the speed of traditional Bluetooth LE, allowing far better audio quality in Bluetooth LE than in Bluetooth Classic. The Links200 platform supports all of these capabilities, together with Bluetooth 6.0 Channel Sounding for accurate and secure ranging. The platform also provides IEEE 802.15.4 MAC support for ZigBee, Thread and Matter compliance and allows for concurrent multi-link communication through Ceva’s smart coexistence techniques.
All impressive but the big add in my view, to truly make this solution turnkey, is an included RF stage implemented in a TSMC 12nm FFC+ process for lower power, high performance (and low leakage). Equally important, this stage includes a class 1.5 power amplifier. All that is left for the integrator is to connect that output to an antenna. This unique turnkey IP platform is the only solution in the market enabling the design of innovative fully integrated Edge AIoT System on Chip with advanced processing power capability for headsets, earbuds, speakers, smart watches, smart glasses and more.
Integrating Links200
The Links200 IP is delivered with Verilog RTL source files for the digital part, C software source code running on the platform processor, and GDSII for the RF stage, plus a complete software framework to run on a host MCU. It also includes protocol stacks with a comprehensive list of profiles to address a broad range of applications.
Ceva optionally supplies several complementary IPs to further enhance functionality, including:
The NeuPro-Nano, a unique Edge Neural Processor Unit IP that efficiently executes embedded ML workloads for always-on audio, voice, vision, and sensing use cases
RealSpace, a spatial audio software solution offering 3D audio rendering and low latency head tracking for gaming
ClearVox, a software solution for noise reduction and keyword spotting
MotionEngine, a software solution for tap and in-ear detection and activity classification
You can read more about the Links200 capability HERE.
Alchip Technologies Ltd., a global leader in high-performance computing (HPC) and artificial intelligence (AI) ASIC design and production services, continues its trajectory of rapid growth and technical leadership by pushing the boundaries of advanced-node silicon, expanding its global design capabilities, and building customer-centric solutions that differentiate at the packaging level. In a candid update from CEO Johnny Shen, three pillars emerged as central to Alchip’s strategy: technology leadership, talent deployment, and customer-driven business execution.
TECHNOLOGY: 2nm and 3nm
Alchip is preparing for a significant technology inflection with the introduction of 2nm design enablement, the first gate-all-around (GAA) transistor node. While 3nm (the final FinFET-based node) will dominate most production designs in 2025, a select few projects are advancing into 2nm, which introduces unique design complexities. These include significantly higher compute power requirements for final sign-off and verification.
During peak 3nm workloads, Alchip leveraged more than 500 servers; for 2nm, even larger compute infrastructures will be required. The company’s 2nm test chip taped out in 2024, with silicon results expected soon. These results will help quantify the PPA (power, performance, area) delta between 3nm and 2nm. While pure 2nm designs might be rare, hybrid approaches—with compute logic in 2nm and analog/mixed-signal components in 3nm chiplets—are becoming common among customers.
Alchip’s early 2nm work is already being validated by one of its more significant customers, who plans to initiate both a test chip and a product chip kickoff within 2025. This underscores Alchip’s credibility as a first-choice ASIC partner for leading-edge silicon.
TEAM: Strategic Global Expansion of Engineering Resources
With 86% of 2024 revenue originating from North America, and with global expansion considerations, Alchip is aggressively shifting its design workforce to Taiwan, Japan, and Southeast Asia. In Vietnam, where the company already employs 30 engineers, headcount is expected to grow to 70–80 by the end of 2025. Similarly, Malaysia’s team is expanding from 20 to approximately 50 engineers. By year-end, over half of Alchip’s engineering workforce will reside outside China.
This distributed R&D model not only ensures IP security and compliance with international regulations but also enables proximity to foundries, customers, and local talent pools. In the United States, Alchip is scaling up its Field Application Engineers (FAEs), Program Managers (PMs), and senior R&D experts to support a customer base that demands nuanced understanding of compute architecture, PPA trade-offs, and roadmap alignment.
For package and assembly support, much of the technical interface remains US-based, with Taiwan-based experts frequently dispatched to co-locate with customers when needed. Testing and product engineering disciplines remain centralized in Taiwan, where Alchip’s reputation as a top-tier semiconductor employer provides a strong pipeline of experienced hires.
BUSINESS: Record-Breaking Growth Driven by Differentiated Solutions
In 2024, Alchip delivered its seventh consecutive year of record financials, with revenue of $1.62 billion and net income of $200.8 million—each marking new highs. These numbers translate into a revenue-per-employee ratio of approximately $2.5 million, placing Alchip among the most productive companies in the semiconductor industry.
Core to this growth is the company’s differentiated package engineering. While customers rarely question Alchip’s ability to deliver on the compute side, most customer inquiries now revolve around packaging strategy. These include determining the optimal HBM stack configuration, interposer design, chiplet integration, thermal modeling, and overall system optimization.
Alchip has completed 18 CoWoS (Chip-on-Wafer-on-Substrate) designs, the most of any ASIC partner, according to TSMC. These designs have varied significantly by customer, each requiring unique interposer geometries, memory bandwidth targets, and form factor considerations. Johnny attributes this capability to Alchip’s focus on emerging, high-tech startups, whose need to innovate quickly forces the company to stay ahead of the technology curve.
This flexibility and deep design experience have made Alchip a go-to partner not only for startups, but also for established tech giants pursuing the next wave of AI and HPC performance.
Outlook: Enabling Tomorrow’s Compute Platforms
With 20–30 tape outs per year, Alchip maintains a rapid feedback loop that continuously hones its methodology, toolchains, and cross-functional workflows. As customers move toward 2nm GAA, 3DIC architectures, and multi-die systems, Alchip is positioning itself as a turnkey provider of silicon, packaging, and system-level integration expertise.
Its tight alignment with TSMC’s roadmap, along with a strategic pivot toward a distributed global engineering footprint, ensures that Alchip will remain a critical player in enabling the future of AI and HPC workloads. The company’s ability to combine advanced silicon design with deep system integration know-how is what makes it not just a service provider—but a true innovation partner.
Part 2 examines the transformation of the interface protocols industry from a fragmented market of numerous specialized vendors to a more consolidated one dominated by a few major solutions providers, a shift driven by the increasing complexity of modern protocols. It highlights the importance of rigorous validation of interface protocols and underscores the pivotal role of security in ensuring robust and reliable protocol design. It concludes by presenting an ideal roadmap for developing cutting-edge interface protocol solutions.
The Interface Protocol Landscape: Adoption, Security and Validation
Traditionally, interface protocol development followed a structured, sequential process that typically began with the formation of a consortium, followed by the creation of draft specifications. Once the specifications approached version 1.0, early adopters would begin implementation, and within a two-year timeframe, products would start to emerge.
Today, this timeline has been radically disrupted. Case in point: in early 2024, Broadcom commented on its roadmap for switches for AI systems slated for release in 2025, based on protocols that had not even seen the formation of a consortium. This represents a monumental shift, driven by market demands that far outpace the capabilities of existing protocols. The pressure on the industry to keep up is unprecedented.
Evolving Approaches to Interface Protocol Adoption
The way protocols are adopted and verified has also undergone significant transformation. Designers are increasingly sourcing third-party IPs instead of building them in-house. For example, where designers previously developed PCIe Gen4 protocols themselves, most now turn to external providers for PCIe Gen6 solutions.
The adoption process itself has become much more streamlined. Historically, protocol implementation followed a prolonged step-by-step approach. Teams would start by using verification IP (VIP) of the required interface protocol for functional verification in simulation, then evaluate its performance on emulation for a few months, and eventually proceed to hardware implementation for prototyping or post-silicon testing. Now, all pre-silicon steps are compressed into a single quarter, reflecting the urgency and integration demanded by modern workflows.
In today’s project planning meetings, design teams expect a comprehensive roadmap from their IP providers right at the start of their interface protocol journey. Questions about VIP, virtual models in the form of transactors, and hardware implementations availability are no longer deferred—they arise simultaneously with the interface protocol decision. This shift marks a significant trend: designers are no longer solely focused on verifying protocols. Instead, they prioritize building a robust, fully integrated verification ecosystem centered around the interface. This approach ensures thorough validation while enabling a swift transition to software development, where real differentiation and value are created.
The Impact of Industry Consolidation
The interface protocol landscape is undergoing a paradigm shift. Market demands have accelerated the development timeline, compressing processes and driving designers to seek comprehensive, ready-to-integrate solutions. As a result, the industry is consolidating to meet these challenges. Supporting cutting-edge protocols requires immense expertise, resources, and investment—factors that only larger IP companies can consistently deliver.
Smaller IP vendors are finding it increasingly difficult to compete at the high end of the market. Many are merging, consolidating, or being acquired. While these smaller players may continue to find opportunities in mid-tier markets, high-end protocol development and support are rapidly becoming the domain of a few large, well-established companies.
The ability to adapt to this fast-evolving environment will determine who leads the future of protocol development and adoption.
Interface Protocol Security: Critical Imperative
In an increasingly interconnected world, security is essential to safeguarding data and systems against a wide array of threats. To achieve robust and reliable security, the foundation must begin at the hardware level, forming the bedrock of a comprehensive security framework.
The Layered Approach to Security
Security is best visualized as a series of interdependent layers, starting with a hardware root-of-trust and extending to encompass the system, application, software, and services. Hardware security serves as the cornerstone, providing a strong, reliable base to support and protect all subsequent layers. Any weakness in the security chain, regardless of where it occurs, can compromise the entire solution. Therefore, while hardware is the starting point, its design and implementation must be flawless to ensure the security of all operational phases.
The Hardware Root-of-Trust in SoC Designs
The hardware root-of-trust in System-on-Chip (SoC) designs represents a critical security mechanism built directly into the hardware. It establishes a trusted foundation for all secure operations within the system, enabling the verification of software integrity and safeguarding sensitive data. This embedded security ensures that the system can be trusted from the very beginning of its operation, acting as the first line of defense against potential breaches.
Security in Interface Protocols
In recent years, security considerations have become integral to interface protocol design. Modern interface protocols, including PCIe, CXL, UCIe, Display, and Memory, now feature comprehensive security hardware blocks, including a root-of-trust, embedded within their specifications.
From the design perspective, the hardware root-of-trust is typically implemented as a Register Transfer Level (RTL) component. It integrates with the RTL describing the protocol controller, ensuring seamless operation and robust security. For example, the PCIe and CXL Protocols include security features such as Integrity and Data Encryption (IDE), designed to ensure data authenticity and confidentiality.
The hardware root-of-trust must undergo rigorous RTL-level verification and validation, similar to other hardware components.
Interface Protocol Verification and Validation
Modern interface protocols implement complex specifications with unique data integrity, throughput, latency and security requirements. For example:
PCI Express (PCIe) demands high-bandwidth, low-latency communication for connecting GPUs and NVMe storage devices.
Ethernet must ensure compliance with networking standards for interoperability.
USB introduces complexities with hot-swapping and varying power requirements.
Each protocol requires rigorous validation to avoid costly errors in production silicon. Protocol malfunctions span a wide range, from basic hardware failures that may lead to data corruption, deadlocks, or reduced performance, to design-intent errors that affect target specifications and are particularly challenging to debug post-silicon.
While hardware-description-language (HDL) simulation has traditionally been the cornerstone of SoC validation, providing high visibility into design behavior, it is increasingly limited by performance constraints. Running several orders of magnitude slower than real hardware, it becomes impractical for validating new generations of interface protocols.
These limitations necessitate a move towards vastly faster and extensively scalable validation solutions.
Hardware-Assisted Verification: The Optimal Approach
Hardware-assisted verification platforms, including hardware emulators and FPGA prototypes, offer a compelling solution to address the challenges of SoC interface protocol validation. These platforms bridge the gap between pre-silicon and post-silicon validation, providing near-real-time performance with hardware accuracy unparalleled by software-based simulation. Key benefits include:
Performance Acceleration: Emulation and prototyping platforms execute SoC designs orders of magnitude faster than HDL simulators, enabling comprehensive validation of high-speed protocols such as PCIe Gen 5 and Ethernet 400G by processing payloads of billions of verification cycles.
Real-World Testing: Hardware-assisted platforms allow integration with real-world devices, providing a realistic environment for protocol validation.
Early Bug Detection: By enabling early and iterative testing, hardware-assisted solutions help catch protocol violations and interoperability issues before they propagate into later stages of development.
Scalability: These platforms can validate multiple protocols in the context of the entire SoC design, ensuring that complex interactions in heterogeneous SoCs are thoroughly tested.
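To put the "orders of magnitude" claim in perspective, here is a back-of-the-envelope comparison of the time needed to push a billion verification cycles through each platform class. The throughput figures are illustrative assumptions of typical effective clock rates, not vendor benchmarks:

```python
# Rough time to run 1 billion verification cycles on each platform class.
# Effective clock rates are order-of-magnitude assumptions for illustration.
CYCLES = 1_000_000_000

platforms = {
    "HDL simulation": 1_000,          # ~1 kHz effective clock
    "Hardware emulation": 1_000_000,  # ~1 MHz
    "FPGA prototyping": 10_000_000,   # ~10 MHz
}

for name, hz in platforms.items():
    hours = CYCLES / hz / 3600
    print(f"{name:>18}: {hours:>8,.2f} hours")
```

At these assumed rates, a billion-cycle payload that takes well over a week in simulation completes in well under an hour on emulation or prototyping hardware, which is why payloads of this size are only practical on hardware-assisted platforms.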
Case Study: PCI Express Validation with Hardware-Assisted Platforms
For example, the validation of a PCIe Gen 5 interface in a modern SoC introduces challenges such as stringent signal integrity requirements and complex error recovery mechanisms. A hardware-assisted platform can simulate real-world scenarios, such as sudden link drops or varying traffic patterns, at full protocol speed. Debugging tools integrated into the platform can provide deep insights into packet-level transactions and protocol compliance, significantly accelerating the validation process.
Key considerations for a comprehensive interface protocol solution
Choosing the right partner for advanced interface protocols requires careful evaluation of several critical factors:
Active Participation in Industry Consortiums
The foundation of effective protocol development lies in active involvement with key industry consortiums. Leading providers play significant roles in these organizations, contributing to the creation and evolution of standards. Early participation—such as donating technology and collaborating closely with protocol developers—ensures that providers stay at the forefront of protocol formation and adoption. Leadership within these consortiums is a strong indicator of an innovative and forward-thinking IP provider.
Proven Performance: The Eye Diagram Speaks Volumes
Trust in a provider’s capabilities is built on a proven track record of silicon-verified IP. This requires thorough testing with industry-standard test equipment to ensure robust performance and reliability.
A critical benchmark is the eye diagram, which visually represents the ability of the IP to transmit data reliably at high speeds. Achieving a clean and well-measured eye diagram demonstrates superior analog performance and adherence to protocol standards. Beyond analog precision, a provider’s experience in implementing and verifying protocol functionality is crucial.
Leading vendors validate their designs through extensive testing, including numerous cycles of the digital controller and PHY test chips on prototyping systems, ensuring the interface operates seamlessly.
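As a rough illustration of what an eye measurement quantifies, the sketch below estimates the vertical eye opening at the center of the unit interval from sampled data. The traces here are synthetic with an assumed noise level; real data would come from scope captures or a PHY's on-die eye monitor:

```python
import random

# Sketch: estimate the vertical eye opening at the center of the unit
# interval. Samples are synthetic (+/-1 levels plus assumed Gaussian
# noise); real measurements come from test equipment or eye-monitor logic.
random.seed(0)

def sample_at_center(bit):
    """Ideal +/-1 signal level at mid-UI plus additive noise."""
    return (1.0 if bit else -1.0) + random.gauss(0, 0.05)

bits = [random.randrange(2) for _ in range(500)]
centers = [sample_at_center(b) for b in bits]

ones = [v for v, b in zip(centers, bits) if b]
zeros = [v for v, b in zip(centers, bits) if not b]

# Eye height: gap between the lowest "1" sample and the highest "0" sample.
eye_height = min(ones) - max(zeros)
print(f"eye height at UI center: {eye_height:.2f} (normalized)")
```

A wide positive gap means the receiver can slice the data reliably; as noise or jitter grows, the gap shrinks and eventually closes, which is what a degraded eye diagram shows visually.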
Verification Support Across the Lifecycle
Success in protocol development extends beyond design to encompass leadership in design verification, validation, certification, and compliance testing. High-quality solutions require robust support across the development lifecycle, from virtualization and simulation-based verification to hardware-accelerated testing with components such as transactors and speed adapters.
Advanced customers increasingly demand comprehensive verification ecosystems as part of the IP package. These solutions are no longer optional add-ons but integral components of the chip design process. Advanced verification solutions require large, specialized teams—resources that not all suppliers can provide.
Examples of Advanced Verification Solutions
SoC Verification Kit (SVK), a ready-to-use package combining IP and VIP (Verification IP). It streamlines IP integration into SoC testbenches, reducing setup time and allowing design teams to focus on differentiation. Surveys show that teams spend up to 20% of verification cycles on initial setup and sanity testing. Often, IP design and validation teams work in silos, leading to duplicated effort and limited reuse of components. This inefficiency can consume approximately three person-months per IP, a considerable cost given the multiple complex IPs in modern systems. A state-of-the-art SVK can cut setup time by at least 50%, significantly accelerating time-to-market while reducing costs.
IP Verification Kit (IPK), a package combining an FPGA prototype of the interface IP with a PHY interface card ready-to-use in system environments. It validates the interoperability of the IP with the rest of the SoC, ensuring it performs as intended in a real-world system prototype.
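The setup-cost figures quoted above can be turned into a quick worked estimate. The IP count per SoC and the loaded cost per person-month below are illustrative assumptions; the three person-months per IP and the 50% reduction come from the survey figures cited in the text:

```python
# Back-of-the-envelope savings from adopting an SVK, using the figures
# quoted above (~3 person-months of setup per IP, >=50% reduction).
# IP count and cost-per-person-month are illustrative assumptions.
IPS_PER_SOC = 10         # assumed number of complex interface IPs per SoC
SETUP_PM_PER_IP = 3      # person-months of setup effort, per the article
SVK_REDUCTION = 0.5      # "at least 50%" setup-time cut
COST_PER_PM = 20_000     # assumed fully loaded cost per person-month, USD

baseline_pm = IPS_PER_SOC * SETUP_PM_PER_IP
saved_pm = baseline_pm * SVK_REDUCTION
print(f"baseline setup effort: {baseline_pm} person-months")
print(f"saved with SVK: {saved_pm:.0f} person-months "
      f"(~${saved_pm * COST_PER_PM:,.0f})")
```

Even under these conservative assumptions the savings run to double-digit person-months per SoC, which is why verification kits are increasingly treated as part of the IP deliverable rather than an add-on.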
Design teams face the dual challenge of achieving differentiation in competitive, accelerated design cycles while pushing beyond the limits of existing interface standards. To meet these demands, they must partner with suppliers capable of aligning the development of advanced interface IP and verification solutions with project timelines.
Taking a holistic approach to verification is essential for maintaining SoC schedules and ensuring product success. Advanced tools like SVKs and IPKs provide measurable efficiency gains, helping teams reduce costs, accelerate time-to-market, and focus on innovation rather than repetitive verification tasks.
As I have said before, there is a foundry market segment that I call the “NOT TSMC” market: companies that want an alternative to TSMC. My guess is this is a $5B+ market, which is what Samsung Foundry has tried to leverage for the past 20 years. Unfortunately, working with Samsung proved to be a much higher risk than expected, so the NOT TSMC market came crashing down at 3nm (N3) and 2nm (N2).
Last week the media jumped on a JP Morgan conference where Intel CFO Dave Zinsner spoke. Of course it was spun every which way but truth for cheap clicks, but there were some interesting points, and one was risk. Pat Gelsinger was not a low-risk CEO. He could certainly rally the troops but expectations were set much too high. If any foundry thinks they are going to beat TSMC they are wrong. Samsung has spent the last 20 years and hundreds of billions of dollars trying to unseat the #1 TSMC and has failed spectacularly.
Lip-Bu Tan however is a low-risk CEO. When he says he can deliver something you can bet on it. He is also very good with customers, he listens, he does not pontificate. Lip-Bu will make some very difficult decisions in the coming weeks but when the dust settles you will see a shiny new Intel, absolutely.
The transcript of the discussion is available HERE and below are my observations, experiences, and opinions as a 40 year semiconductor professional working in the trenches.
Lip-Bu will stay with the current strategy and not split up the company and I agree with this 100%. Intel Design and Intel Foundry must be closely coupled and work together side-by-side, not one in front of the other. That is the only way Intel products will stay ahead of the competition and that is the only way Intel Foundry can stay with the competition. Dave said it is all about execution and that is a fact and that is what Lip-Bu Tan does.
Another great comment by Dave is that Intel 14A is being developed from the ground up as a foundry node. It will have PDKs that are comparable to what the industry would expect (TSMC like). Typically, an IDM foundry was more focused on process development for internal products, then adapted the process to the foundry business. It is very difficult to compete with TSMC if that is your strategy. Unfortunately, 18A and BSPD were not originally built for a foundry but they did get big interest from customers. Hopefully Lip-Bu can turn that into revenue moving forward.
Dave did slip up on this one:
Dave Zinsner, Vice President and Chief Financial Officer, Intel: “Well, the first 18A customer is going to be Intel products. Yeah. It’s Panther Lake, and the first SKU is expected to be out by the end of the year. So, is our first win, so to speak, if you put on the Intel foundry hat with 18A.”
Sorry Dave, you just said 18A was not developed as a foundry process so it is a bit early to put your foundry hat on. Keep your foundry hat in hand until you have an external 18A customer with product, otherwise you are still an IDM.
If anyone thought that Intel 18A would be a blockbuster foundry node they clearly do not understand how foundry customers work. It is all about mitigating risk. IF 18A proves out AND 14A PDKs are early and competitive THEN big customers will come to Intel Foundry.
In regards to 14A and HNA-EUV:
Dave Zinsner, Vice President and Chief Financial Officer, Intel: “14A, you know, obviously, gets more expensive. At present, it’s expected to have high NA, and, you know, that’s a more expensive tool. So, you know, I think we do need to see more external volume come from 14a versus versus 18a.”
Here is the thing about HNA-EUV, I do not feel it is close to being ready for high volume manufacturing and foundry customers will not run to an unproven process with HNA-EUV. The added value of HNA-EUV is just not there yet.
I think Lip-Bu will recognize this and be more cautious with 14A. He should start with EUV then move it over to HNA-EUV, like TSMC did with EUV at 7nm. Get N7 into HVM then add EUV light (N7+) before going full EUV at N6. There is no shame in following TSMC on this. Remember, TSMC followed Intel for many years.
The foundry business aims to break even by 2027, with revenue from external sources.
Dave Zinsner, Vice President and Chief Financial Officer, Intel: “Yeah. Okay. So, we still feel on track to to hit breakeven sometime in 2027. You know, I think when we committed to it in ’24, we said, hey. It’s gonna be somewhere between ’24 and 2030. Most people kind of settled in that that must mean ’27, and that’s generally kind of what we’re thinking is we can be breakeven.”
From ChatGPT:
“In 2024, Intel’s foundry business incurred significant financial losses, totaling approximately $13.4 billion for the year. This figure represents a substantial increase from the $7 billion operating loss reported in 2023.”
I’m not a finance guy but are we talking about breaking even for a full fiscal year or just one quarter? For a full fiscal year that will not happen in 2027 without dramatic cuts. Remember, HNA-EUV systems are $380M each and a foundry will need dozens of them if we are talking full HNA-EUV and not just a layer or three. I doubt Lip-Bu Tan will allow Intel financial hand waving moving forward but we shall see.
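To make the capex point concrete, here is the quoted $380M-per-tool figure multiplied out over a few fleet sizes. The fleet sizes themselves are illustrative ("dozens," per the text), not a claim about Intel's actual purchasing plans:

```python
# Rough capex for a High-NA EUV scanner fleet at the $380M-per-tool
# figure quoted above. Fleet sizes are illustrative assumptions.
TOOL_COST = 380_000_000  # USD per High-NA EUV scanner

for tools in (3, 12, 24, 36):
    capex_b = tools * TOOL_COST / 1e9
    print(f"{tools:>2} tools: ${capex_b:,.2f}B")
```

At "dozens" of tools the scanner bill alone runs well into the billions, before fabs, reticles, and everything else, which is why a full-year foundry breakeven in 2027 looks hard without dramatic cuts.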
Harlan Sur, Semiconductor and Semiconductor Capital Equipment Analyst, JPMorgan: “So, is the right way to think about the (external foundry) mix maybe 20 to 30% ?”
Dave Zinsner, Vice President and Chief Financial Officer, Intel: “Yeah, something like that.”
Let me remind you that Intel has outsourced wafers from foundries for many years due to acquisitions (Altera, Mobileye, Habana Labs, etc…), so 20-30% is not really that much. Nothing to worry about, but I do feel that moving forward Intel should be more focused on 18A and 14A. It is all about gaining manufacturing experience and economies of scale. Cost is everything in the foundry business.
All-in-all a great conversation with more transparency on the foundry side for us semiconductor professionals. Thank you Dave.