Intel and Cadence Collaborate to Advance the All-Important UCIe Standard
by Mike Gianfagna on 09-02-2024 at 10:00 am

The Universal Chiplet Interconnect Express™ (UCIe™) 1.0 specification was announced in early 2022, and a UCIe 1.1 update was released on August 8, 2023. This open standard facilitates the heterogeneous integration of die-to-die link interconnects within the same package. This is a fancy way of saying the standard opens the door to true multi-die design, sourced from an open ecosystem that can be trusted and validated. This standard is very important to the future of semiconductor system design. It’s also quite complex and presents many technical hurdles to practical adoption. Intel and Cadence recently published a white paper that details how the two companies are working together to get to the promised land of a chiplet ecosystem. If multi-die design is in your future, you will want to get your own copy. A link is coming, but let’s first examine some history and innovation as Intel and Cadence collaborate to advance the all-important UCIe standard.

Some History

It turns out Cadence and Intel have a history of collaborating to bring emerging standards into the mainstream. In 2021, the companies collaborated on simulation interoperability between an Intel host and Cadence IP for the Compute Express Link™ (CXL™) 2.0 specification. Like UCIe, this work aimed to have a substantial impact on chip and system design.

The CXL 2.0 specification, along with the then-latest PCI Express® (PCIe®) 5.0 specification, provided a path to high-bandwidth, cache-coherent, low-latency transport for applications such as artificial intelligence, machine learning, and hyperscale computing, with specific use cases in newer memory architectures such as disaggregated and persistent memories.

The ecosystem to support this standard was rapidly evolving. Design IP, verification IP, protocol analyzers, and test equipment were all advancing simultaneously. This situation could lead to design issues not being discovered until prototype chips became available for interoperability testing. Finding the problem this late in the process would delay product introduction for sure.

So, Intel and Cadence collaborated on interoperability testing through co-simulation as the first proof point for successfully running complex cache-coherent flows. This “shift-left” approach demonstrated the ability to confidently build host and device IP, while also providing essential feedback to the CXL standards body.

You can read about this project here.

Addressing Present Day Challenges

In 2023, Cadence and Intel began collaborating again, this time to advance the UCIe standard and help achieve on-package integration of chiplets from different foundries and process nodes – the promise of an open chiplet ecosystem. UCIe is expected to enable power-efficient and low-latency chiplet solutions as heterogeneous disaggregation of SoCs becomes mainstream. This work is critical to keeping the exponential complexity growth of Moore’s Law alive and well. Monolithic strategies won’t be enough.

To achieve a chiplet ecosystem, design IP, verification IP, and testing practices for compliance will be needed, and that is the focus of the work summarized in this white paper. Here are the topics it covers – a link is coming so you can get the whole story.

UCIe Compliance Challenges. Topics include electrical, mechanical, die-to-die adapter, protocol layer, and physical layer compliance, as well as integration of the golden die link with the vendor device under test. The PHY electrical and adapter compliance covers the die-to-die high-speed interface as well as the RDI and FDI interfaces. The mechanical compliance of the channel is tightly coupled with the type of reference package used for integration. Many technical and design-specific challenges are discussed in this section.

The Role of Pre-Silicon Interoperability. There are many parts to each of the standards involved in multi-die design. The entire system is designed concurrently, resulting in all layers going through design and debug at the same time. As with the work done on CXL, “shift-left” strategies are explored here to allow testing and validation to be done before fabrication. The figure below illustrates the relationships among the various specifications.

UCIe – A Multi-Layered Subsystem

UCIe Verification Challenges. Some of the unique challenges for the verification environment are discussed here. Topics covered include the following (an illustrative sizing sketch follows the list):

  • D2C (data-to-clk) Point Testing
  • PLL Programming Time
  • Length of D2C Eye Sweep Test
  • Number of D2C Eye Sweep Tests
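
To give a rough sense of why these items drive verification cost, here is a minimal sizing sketch. It is not taken from the white paper: the lane count matches the x64 advanced-package model mentioned below, but every other parameter value, and the eye_sweep_hours helper itself, is an illustrative assumption rather than a Cadence or Intel figure.

    # Illustrative estimate of D2C eye-sweep simulation cost (hypothetical numbers).
    # Runtime grows with the number of lanes, the number of phase points swept per
    # eye, the PLL programming/lock time paid before sampling, and how many times
    # the sweep is repeated.
    def eye_sweep_hours(lanes=64,            # x64 advanced-package module
                        phase_points=64,     # D2C points sampled per eye (assumed)
                        sweeps=4,            # number of eye-sweep tests (assumed)
                        pll_lock_us=50.0,    # PLL programming/lock time per sweep (assumed)
                        dwell_us=2.0,        # functional time dwelled per phase point (assumed)
                        sim_slowdown=1e6):   # wall-clock seconds per simulated second (assumed)
        functional_us = sweeps * (pll_lock_us + lanes * phase_points * dwell_us)
        return functional_us * 1e-6 * sim_slowdown / 3600.0

    print(f"~{eye_sweep_hours():.1f} simulation hours")  # about 9.2 hours with these assumptions

Even with toy numbers like these, a full sweep runs into hours of simulation time, which is why the length and number of D2C eye sweeps, and the PLL programming time paid on each one, show up as explicit verification challenges.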

UCIe Simulation Logistics. For this project, the Cadence UCIe advanced package PHY model with x64 lanes was used for pre-silicon verification with Intel’s UCIe vectors. Topics covered include:

  • Initial Interoperability
  • Simulation – Interoperability over UCIe
  • Controller Simulation Interoperability

The piece concludes with UCIe Benefits to the Wider Community.

To Learn More

If multi-die design is in your future, you need to understand the UCIe standard. And more importantly, you need to know what strategies exist for early interoperability validation. The white paper from Cadence and Intel is a must read. You can get your copy here. And that’s how Intel and Cadence collaborate to advance the all-important UCIe standard.

Also Read:

Overcoming Verification Challenges of SPI NAND Flash Octal DDR

The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation

The Future of Logic Equivalence Checking


WEBINAR: Workforce Armageddon: Onboarding New Hires in Semiconductors
by Daniel Nenni on 09-02-2024 at 6:00 am

The semiconductor industry is undergoing an unprecedented inflection—not in its technology, but in its very structure. This transformation is happening at a time of phenomenal growth, presenting both opportunity and crisis. The ingredient most critical to meeting the growth demands, but which also poses the greatest risk, is the workforce. There will not be nearly enough skilled workers to fill all roles. The history of such industrial inflections suggests many companies will under-prepare, and then over-react. To their detriment.

This webinar addresses a key, often overlooked, and perhaps unexpected ingredient for weathering such crises—employee onboarding.

Join us at 13:00 EST on September 5th, 2024, hosted by Chipquest. Register here.

The Compounding Forces Behind the Workforce Crisis

This workforce crisis is not driven by one or two independent factors, but by several compounding forces that are reshaping the industry landscape:

Expanded Demand Across Multiple Fronts: The demand for semiconductors is skyrocketing across various sectors:

  • Consumer Electronics: More laptops, smartphones, and other devices are being produced than ever before.
  • Data Centers: The surge in digital transformation during the COVID-19 pandemic has increased the need for server farms to support cloud computing, e-commerce, and streaming platforms.
  • IoT and Automotive: The proliferation of IoT devices and the shift toward electric and autonomous vehicles are driving an exponential increase in use cases.
  • Artificial Intelligence: AI and machine learning applications are generating a new wave of need for advanced, high-performance chips.

Supply Chain Redundancy and Geopolitical Tensions: Geopolitical tensions have led to a push for on-shoring, reshoring, or near-shoring semiconductor manufacturing:

  • Companies like TSMC and Amkor are expanding their manufacturing footprint to countries where they never had a presence before.
  • This duplication of infrastructure requires additional skilled workers, further stretching the already limited talent pool.

Technology Sovereignty as National Security: The global race for semiconductor supremacy has become a matter of national security:

  • Governments are investing heavily in domestic semiconductor capabilities. Newcomers like India and Vietnam are entering the semiconductor race, intensifying competition for talent.
  • The CHIPS and Science Act and similar initiatives in other nations aim to secure technology sovereignty, further escalating the need for skilled professionals.

Workforce Dynamics and a Changing Labor Landscape: The semiconductor workforce is already greatly reduced from its earlier peak, and the industry faces a significant workforce gap due to early retirements, layoffs, and competition from other tech sectors:

  • A net exodus of workers due to layoffs, early retirements and pilfering of key talent by adjacent industries.
  • Declining interest in manufacturing roles, particularly among younger demographics.

What’s Being Done—and What’s Missing

Public-private partnerships, government funding, and renewed focus on education and apprenticeships are all steps in the right direction. While these initiatives do create a more knowledgeable pool to draw from, they do not serve to integrate new workers into the actual workplace, where the immensity of systems, procedures, and policies can readily overwhelm new workers.

A New Approach: Modernized Onboarding and Training

One critical aspect that continues to be overlooked is the effectiveness of onboarding and training within individual companies. Traditional methods—relying on static PDFs and uninspiring safety training—fail to engage new employees. This not only leads to costly mistakes but also impacts retention rates.

To address these challenges, the semiconductor industry needs innovative solutions that can modernize onboarding and training. Methods like gamification and microlearning offer a glimpse into how training can become more engaging and effective, better aligning with the expectations of today’s digital-native workforce.

Join Us to Learn More

The semiconductor industry is transforming, and companies must adapt their workforce strategies to stay competitive. Join Chipquest’s upcoming webinar, “Workforce Armageddon: Onboarding New Hires in Semiconductors,” to explore these critical challenges and the innovative solutions that can help your organization thrive.

Register now to secure your spot!

Also Read:

Elevate Your Analog Layout Design to New Heights

Introducing XSim: Achieving high-quality Photonic IC tape-outs

Synopsys IP Processor Summit 2024


Podcast EP244: A Review of the Coming Post-Quantum Cryptography Revolution with Sebastien Riou
by Daniel Nenni on 08-30-2024 at 10:00 am

Dan is joined by Sebastien Riou, Director of Product Security Architecture at PQShield. Sebastien has more than 15 years of experience in the semiconductor industry, focusing on achieving “banking grade security” on resource-constrained ICs such as smart cards and mobile secure elements. Formerly of Tiempo-Secure, he helped create the world’s first integrated secure element IP achieving CC EAL5+ certification.

Sebastien discusses post-quantum cryptography and why the US Government’s National Institute of Standards and Technology (NIST) is pushing for implementation of new, quantum-resistant security now. Sebastien explains how the new standards are evolving and what dynamics are at play to deploy those standards across a wide range of systems, both large and small. The special considerations for open source are also discussed.

Sebastien describes the broad hardware and software offerings of PQShield and the rigorous verification and extensive documentation that are available to develop systems that are ready for the coming quantum computing threat to traditional security measures.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Wendy Chen of MSquare Technology
by Daniel Nenni on 08-30-2024 at 6:00 am

Wendy Chen, CEO of MSquare Technology

Wendy Chen, MBA from the University of Manchester, has been the Founder and CEO of MSquare Technology since 2021. With over 23 years in the semiconductor industry, Wendy’s career includes roles as Sales Director at Synopsys Technology, Vice President at TF-AMD, and Vice President at Alchip Asia Pacific. Her extensive experience and leadership have been key to MSquare Technology’s growth and innovation.

Tell us about your company?
Our company, MSquare Technology, was incorporated in 2021 as a leading provider of integrated circuit IPs and Chiplets, dedicated to addressing the challenges of chip interconnectivity and vertical integration in the AI era. Currently we operate offices in Taipei, Shanghai, Sydney, and San Jose, with a team of over 150 employees, 80% of whom are dedicated to research and development. We strive to foster an open ecosystem service platform for AI and Chiplets, providing comprehensive support for innovation and development within the IC and Chiplet industry. MSquare’s IP products have been successfully validated on notable foundries’ process nodes and brought to mass production, spanning 5nm to 180nm and covering over 400 different process nodes across 5 leading foundries. The R&D team has launched interconnect interface IPs including HBM, LPDDR, ONFI, UCIe, eDP, PCIe, USB, as well as Chiplet solutions represented by M2LINK.

What problems are you solving?
In the current climate of tight semiconductor supply chains and rising costs, we leverage our robust portfolio, substantial supply chain resources, and system integration capabilities to provide customers with cutting-edge technology, a shortened time to market, and reduced design cost. We offer clients validated IP products equipped with the latest technological advancements.

What application areas are your strongest?
We possess distinct strengths in high-speed interface IPs, Chiplets, foundation IPs and integrated services. These technologies find widespread application in sectors like AI, Data Centers, Automotive Electronics, the IoT, and Consumer Electronics. In scenarios that require extensive data processing and rapid data transmission, our solutions substantially improve efficiency and performance.

  • AI & Data Center: We specialize in providing advanced interface IP and Chiplet products for High Performance Computing and AI applications. Our product portfolio is designed to meet the demands of AI and Data Centers for high-speed, high-bandwidth memory and interconnect technologies, ensuring the efficiency and security of data processing.
  • Automotive: Our products have obtained ISO 26262 functional safety certification, ensuring the most advanced functionality and reliable, safe operation.
  • Internet of Things: Our high-performance, low-power IP solutions are designed to enhance the security and communication efficiency of IoT devices, facilitating safe and efficient data transmission across applications such as smart homes, industrial automation, and smart cities.
  • Consumer Electronics: Our high-performance, low-power IP solutions enable devices such as smartphones, tablets, and smartwatches to achieve faster processing speeds, richer multimedia capabilities, and extended battery life, providing robust momentum for creating the next generation of smart devices.

What keeps your customers up at night?
In the post-Moore era, customers across various industries—such as AI, data centers, automotive, and consumer electronics—face significant challenges related to memory bandwidth/density and system costs. As computational demands and model complexities increase, traditional memory solutions often fall short in several key areas:

  1. High Memory Costs: With the growing need for larger memory capacities to handle complex and memory-intensive applications, costs associated with high-bandwidth memory (HBM) can be prohibitive. Many customers struggle with the high cost per unit bandwidth and the limited availability of advanced memory solutions.
  2. System Integration and Scalability: Integrating large memory capacities into computing systems traditionally requires complex and costly silicon interposers, which increase system costs and complicate design.
  3. Performance Bottlenecks: The need for higher memory bandwidth to improve inference throughput is critical, yet existing solutions often face limitations in achieving the necessary performance levels.

To address these challenges, MSquare Technology offers our innovative HBM3 IO-Die solution. This approach provides several key benefits:

  • Cost Efficiency: By decoupling the HBM host IP from SoCs and utilizing a separate IO-Die that converts the HBM protocol to the UCIe protocol, we reduce the need for expensive silicon interposers. This integration allows for a more cost-effective solution with broader process coverage and improved availability.
  • Enhanced Performance: Our HBM3 IO-Die incorporates the latest 32Gbps UCIe IP, which significantly increases memory bandwidth and supports larger memory capacities within a single computing node. This reduces the need for synchronization across multiple nodes and enhances overall system performance.
  • Flexibility and Scalability: The UCIe-based approach enables customers to integrate various Chiplets and memory types more flexibly. This modularity not only lowers SoC development and packaging costs but also allows for greater customization to meet specific application requirements.
  • Advanced Technology: MSquare’s commitment to standardizing Chiplet interfaces and our early adoption of the UCIe standard ensure that our solutions are at the cutting edge of technology. Our HBM3 IO-Die, expected for mass production by the end of 2024, represents a significant advancement in addressing the memory and performance needs of modern computing systems.

By offering these advanced solutions, MSquare helps our customers overcome the limitations of traditional memory solutions, manage costs effectively, and achieve superior performance in a rapidly evolving technological landscape.

What does the competitive landscape look like and how do you differentiate?
The interface IP and Chiplet sectors are experiencing rapid growth, with fierce competition dominated by major global corporations. By 2030, the market size for IPs is projected to reach $10 billion, while the Chiplet market could expand to ten times that of the IP market. We believe Chiplets represent a revolutionary shift in the semiconductor industry, succeeding IDM and Fabless models, with high-speed interconnects being essential for fulfilling end-application requirements. We hold essential capabilities and resources for Chiplet production—including comprehensive, one-stop solutions and strong supply chain integration. These assets enable us to swiftly adapt to the fast-changing market landscape and provide tailored solutions to our clients.

What new features/technology are you working on?
Our latest technology, the M2LINK solution, decouples the HBM host IP from SoCs by developing a separate IO-Die that converts the HBM protocol to the UCIe protocol. This IO-Die is packaged with the HBM stack into a single module, allowing direct connectivity on a common substrate without using a silicon interposer. The module is compatible with UCIe 1.1 Die-to-Die technology, which offers a high clock frequency of up to 16GHz, provides a data transfer rate of up to 32Gbps per lane, and delivers 1Tbps (512Gbps TX + 512Gbps RX) bandwidth per module for standard packages. This capability significantly supports the efficient computation of complex AI models.
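
As a quick sanity check on those numbers (my arithmetic, not MSquare’s), the quoted figures are consistent with a UCIe standard-package module of 16 lanes per direction running at 32 Gbps per lane; the x16 width is an assumption drawn from the UCIe standard-package definition, since it is not stated in the answer above.

    # Back-of-envelope check of the quoted M2LINK bandwidth figures.
    lane_rate_gbps = 32        # stated: up to 32 Gbps per lane (UCIe 1.1, 16 GHz clock)
    lanes_per_direction = 16   # assumption: standard-package module width (x16)

    tx_gbps = lane_rate_gbps * lanes_per_direction   # 512 Gbps TX
    rx_gbps = lane_rate_gbps * lanes_per_direction   # 512 Gbps RX
    total_tbps = (tx_gbps + rx_gbps) / 1000          # 1.024 Tbps, quoted as "1 Tbps"
    print(tx_gbps, rx_gbps, total_tbps)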

How do customers normally engage with your company?
Our customers typically engage with us in the following ways:

  • IP Licensing: Customers can license various high-speed interface IPs, such as HBM, UCIe, PCIe, LPDDR, ONFI, etc. We provide thorough technical support and enable customers to integrate these IPs into their chip designs.
  • Chiplet Design: We offer Chiplet design services, which follow a process from design specifications through to sampling and verification. This engagement is for customers who need customized solutions beyond standard IP licensing.
  • One-stop Chip Design Service: We provide a comprehensive service that covers the entire lifecycle of chip development, including chip design, fabrication, packaging, and finally, testing.

We can be reached through our sales, technical support, and marcom teams.

Our contact page: https://en.m2ipm2.com/AboutStd.html#s50019

Our website: https://en.m2ipm2.com/

Our LinkedIn: https://linkedin.com/company/m2ipm2

Also Read:

CEO Interview: BRAM DE MUER of ICsense

CEO Interview: Anders Storm of Sivers Semiconductors

CEO Interview: Zeev Collin of Semitech Semiconductor


Keysight EDA and Engineering Lifecycle Management at #61DAC
by Daniel Payne on 08-29-2024 at 10:00 am

Entering the exhibit area of DAC on the first floor I was immediately faced with the Keysight EDA booth, and it was even larger than either the Synopsys or Cadence booths. They had a complete schedule of partners presenting in their theatre that included: Microsoft Azure, Riscure, Fermi Labs, BAE Systems, Alphawave, Intel Foundry, Sarcina Technology, AWS, TSMC, Allegro Microsystems, Microsoft, UCIe. My visit was with Simon Rance, Director of Product Management and Strategy. The theme this year was Elevate Your Design Intelligence.

Engineering Lifecycle Management

What’s new for 2024 is the focus on Engineering Lifecycle Management (ELM), a new acronym for a new era. Keysight started with its Design Data Management (SOS) tool, aimed at just design data management, but the industry needed a way to fill the gap between that and Product Lifecycle Management (PLM) for back-end manufacturing, including things like FuSa and ISO 26262, adding traceability all the way throughout the lifecycle. So ELM is PLM-like for engineers, letting them focus on their project requirements and including project management, where Keysight Engineering Lifecycle Management (HUB) is the single source of truth.

ELM will be used by design engineers and managers as they perform project management tasks and do digital and AMS design work; even the legal department uses it to verify that all IP in a project conforms to export controls, and the IT department can define the security required for the data in each IP. ELM also connects to your requirements tools and bug-tracking tools.

Key customers using ELM today include aerospace, aeronautics, and automotive companies, where safety and traceability are paramount requirements. Both external IP and internally developed IP require tracking through the lifecycle. If a company has an ARM license and gets acquired by another company, that triggers an event to redo the ARM contract, so you really need to know where all of these IP blocks are being used. The ESD Alliance reports quarterly revenues of both EDA software and IP, where IP revenue is now larger than EDA software revenue.

SoC design teams can take months to locate all of the IP required for a new or derivative project, negotiate licenses, then start to use the IP. So, having a catalog of IP can help speed that process by enabling re-use across a corporation. ELM is a strategic approach being advocated top-down by management, then adopted by engineering teams.

Stephen Slater, EDA Product Management – Integrating Manager, talked about the needs of AI and ML for the simulation process as the tools generate so much data, creating a need to tag and store the data. With ELM there’s a central hub to store and organize this kind of simulation data. Even within HUB there’s a knowledge base, creating an incentive to share your project knowledge with others. Once data is stored in HUB then it can start to make correlations. With the growing number of industries mandating traceability, it makes using an ELM more feasible, and besides – adding meta-data is good for you.

Alphawave: UCIe Compliance

Letizia Giuliano, VP Product Marketing at Alphawave Semi, shared how their engineering team validates their IP and chips for UCIe compliance using the Keysight Chiplet PHY Designer tool. They created their IBIS AMI model in collaboration with Keysight, validating their 3 nm UCIe IP for both standard and advanced packages.

Source: Alphawave

YouTube video, 10:49 length.

Lawrence Berkeley National Laboratory

Carl Grace was part of a team that designed custom cryogenic ASICs for neutrino science, and they used Keysight ELM, IP, and data management tools in their flow for data sharing, team collaboration, and security. Their ADC needed to digitize 16 channels at 12-bit resolution and a 2 MS/s sampling rate per channel, with low noise, while operating for 30 years at -184°C.
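
As a rough back-of-envelope check (my arithmetic, not the LBNL team’s), those figures imply the following raw aggregate output rate per ASIC, before any framing or readout overhead:

    # Rough raw data-rate estimate from the figures quoted above.
    channels = 16
    resolution_bits = 12
    sample_rate_msps = 2   # 2 MS/s per channel

    raw_rate_mbps = channels * resolution_bits * sample_rate_msps
    print(raw_rate_mbps, "Mb/s aggregate, before framing or readout overhead")  # 384 Mb/s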

Source: Lawrence Berkeley National Laboratory

YouTube video, 14:46 length.

Sarcina Technology, Advanced Packaging

Bump pitch transformers were presented by Larry Zhu, PhD, of Sarcina, who used Keysight ADS and Memory Designer for advanced packaging design, plus simulation for Fan-Out Chip-on-Substrate with Si Bridge (FOCoS-B).

Source: Sarcina

YouTube video, 11:38 length.

Keysight Tools on Azure

The Director of Customer Engagements – Silicon Collaboration, Joe Tostenrude, presented on how to scale design team collaboration by using Keysight ELM, IP and data management tools running on the Azure Modeling and Simulation Workbench platform as a service.

Source: Keysight

YouTube video, 13:14 length.

IP Security

Serge Leef of Microsoft spoke about how they are helping meet government requirements by creating secure design IP repositories while using the Keysight ELM, IP and data management capabilities.

Source: Microsoft

YouTube video, 8:13 length.

Security Validation during the Design and Development Cycle

From Riscure (acquired by Keysight), Erwin in’t Veld presented how security at the hardware level is a requirement for modern electronic systems. Hardware exploits like side channels and fault injection need to be verified pre-silicon.

Source: Keysight

YouTube video, 16:40 length.

Hybrid SaaS Cloud for EDA

Using a hybrid of on-premises and cloud infrastructure was presented by Ravi Poddar, Principal Semiconductor Industry Advisor, AWS. Detailed use cases for FPGA prototyping and emulation verification were shown. Nupur Bhonge, Sr. Solutions Engineer, Keysight talked about requirements for IP and data management in a hybrid cloud flow.

Source: AWS

YouTube video, 14:52 length.

DuPont, PCB Hybrid Boards

DuPont sent Kalyan Rapolu, Principal Engineer, to DAC, and he described the design, simulation, and characterization of PCB hybrid boards. Their team did layout in Keysight ADS, EM simulation, insertion loss measurements, and channel simulations with Keysight ADS.

Source: DuPont

YouTube video, 12:48 length.

Intel, UCIe Consortium

The co-chair of UCIe’s marketing work group, Brian Rea, talked about the latest chiplet interconnect specification, UCIe 1.1, which is fully backward compatible with UCIe 1.0. The new specification adds automotive enhancements, streaming protocols on the full stack, and bump map optimization.

Source: UCIe

YouTube video, 8:53 length.

Keysight Labs

Alex Stameroff of Keysight Labs described how their group delivers solutions to their customers by using Keysight EDA tools (SystemVue, Genesys, ADS, EMPro, HeatWave) to validate and then manufacture a variety of products.

Source: Keysight

YouTube video, 13:32 length.

SOC/Chiplet Design

From the Solutions Engineering group of Keysight, Prathna Sekar talked about how to optimize the IP-driven approach using IP and data management tools.

Source: Keysight

YouTube video, 16:00 length.

Allegro Microsystems, ISO26262

The EDA Design Methodology Manager, Ravia Shankar Gaddam of Allegro Microsystems, talked about IP management and meeting ISO26262 compliance using Keysight ELM, IP, and data management tools. The company has both corporate IPs and community IPs.

Source: Allegro Microsystems

YouTube video, 10:06 length.

Summary

2024 was a big growth year at DAC for the Keysight EDA team and I was able to see the increased awareness from attendees at the many theatre presentations from partners. ELM is a new acronym to keep track of in our EDA lexicon, and it will continue to grow in usage by teams around the globe. I did attend the UCIe presentation in the Keysight EDA theatre presented by Microsoft. I cannot wait to see what Keysight EDA develops in the next 12 months.

View all 12 of the theatre presentations on YouTube from this playlist.



The Chip 4: A Semiconductor Elite
by KADEN CHUANG on 08-29-2024 at 6:00 am

Can a 4-member alliance reshape the semiconductor industry?
Photo by Harrison Mitchell on Unsplash

Semiconductors are ubiquitous in electronics and computing devices, making them essential to developments in AI, advanced military technology, and the world economy. As such, it is unquestionable that nations attain considerable geopolitical and economic leverage from controlling large portions of the global semiconductor value chain, granting them access to key technological and commercial resources while providing them the ability to restrict the same access from other nations. For this reason, competition between major powers such as the United States and China has largely manifested itself in efforts to attain and restrict access to semiconductor technology. For example, under the CHIPS and Science Act, the United States government offers subsidies to global manufacturers on the condition that the companies do not establish fabrication facilities in countries that pose a national security threat. The United States has also established export controls on advanced semiconductor equipment to China and reached a deal for the Netherlands and Japan to undertake similar measures. China, on the other hand, is a net importer of semiconductors and deems its reliance on competing nations for semiconductor access a weakness; to counter this, it aims to establish a fully independent value chain, investing billions in its “Made in China 2025” policy to do so.

Perhaps the most ambitious venture to establish greater control over the semiconductor value chain emerged in 2022 under the Biden administration. Prior to the enactment of the CHIPS and Science Act, Biden proposed the Chip 4 alliance, a semiconductor collective comprising the United States, Japan, South Korea, and Taiwan. The four member states are essential to the semiconductor value chain, with each member specializing in the components necessary to develop semiconductors as a collective. Under the Chip 4, the four member states would coordinate policies on supply chain security, research and development, and subsidy use. The alliance would hold considerable influence over the distribution of semiconductors and could be utilized to significantly limit the chip access of geopolitical rivals. Despite its potential influence, the Chip 4 has yet to be realized, and it is unclear whether the prospective members will make clear commitments toward the alliance. In this article, we will provide a closer examination of the Chip 4 coalition and assess how it may influence the semiconductor industry. We will also observe the numerous challenges that prevent the prospective member states from forming the alliance.

A Closer Look at the Global Semiconductor Value Chain

Figure #1, designed by The CHIPS Stack

The semiconductor value chain is composed of three central parts: design, fabrication, and assembly. In the design process, the chip architecture’s blueprint is mapped out to fulfill a particular need. The process is facilitated by design software known as electronic design automation (EDA) and by intellectual property (core IP), which serves as the basic building blocks for chip design. The semiconductor development process continues in the fabrication stage, where the integrated circuits are manufactured for use. Since these integrated circuits are built at the nano scale, the fabrication process requires highly specialized inputs, both in the form of materials and manufacturing equipment. In the final step, the wafers are assembled, packaged, and tested to be usable within electronic devices. The silicon wafers are sliced into individual chips, placed within resin shells, and undergo testing before being delivered back to the manufacturers.

Figure #2, Data taken from the SIA and BCG

In recent decades, the semiconductor global value chain has become increasingly specialized, with much of the value chain contributions split between the United States and East Asia. The United States possesses arguably the most important position within the semiconductor industry, having strong footholds in the design, software, and equipment domains. Its position in the design sector is especially essential, hosting key businesses such as Intel, Nvidia, Qualcomm, and AMD, which account for roughly half of the design market. On the other hand, much of the fabrication market is concentrated in East Asia, where Taiwan and South Korea play major roles. Taiwan and South Korea account for much of the world’s leading-edge fabrication, with TSMC producing the world’s most advanced semiconductors and Samsung following closely behind. In addition, Taiwan holds a well-established ecosystem for semiconductor manufacturing, with numerous sites for materials, chemicals, and assembly. Japan, along with the United States and the Netherlands, accounts for most of the industry’s equipment manufacturing, providing an essential function to the fabrication process. Lastly, China occupies the largest share of the assembly and testing processes and is also a major supplier of gallium and germanium, two materials central to semiconductor manufacturing.

As seen by the distribution of the value chain, the semiconductor industry relies on an interdependent network– no state can source semiconductors without the contributions of other states. Yet, the positioning of nations along the different components of the value chain creates imbalances in the degree of influence a nation has within the semiconductor industry. This, in turn, creates power dynamics that can be leveraged by nations with higher degrees of influence.

Weaponizing the Global Value Chain

Since the global semiconductor value chain operates under an interdependent network of states, states with access to exclusive resources can create chokepoints for rivals, diminishing their semiconductor capacities by withholding essential elements for its production. Hence, export controls operate as central weapons within the realm of global technology development, enabling dominant states to decelerate the growth of rising states.

The United States’ semiconductor-related export control measures against China provide valuable insights into how this principle has affected developments within the industry. In 2019, for instance, the Trump administration enacted export controls against the Chinese telecommunications company Huawei, employing a twofold measure to do so. Firstly, it banned Huawei from purchasing American-made semiconductors for its devices. Secondly, it banned its subsidiary semiconductor company, HiSilicon, from purchasing American-made software and manufacturing equipment. Initially, the measure proved to be ineffective in stunting Huawei’s business operations. Taiwan and South Korea held stronger positions within the semiconductor manufacturing space, and Huawei simply sought their services when American sources were unavailable. The American design firms, which provided blueprints for Huawei chips, outsourced their manufacturing to foreign shores. Here, the export measures damaged American chipmaking firms to a greater extent than they did Huawei, depriving the domestic businesses of a lucrative client.

However, in an update of the export control policy, the Trump administration extended the export control efforts to third-party suppliers, potentially cutting their access to software, core IP, and manufacturing equipment should they continue to engage in business with Huawei. The United States, by controlling much of the software and core IP sources, could indirectly restrict Huawei’s access to chip design by denying essential inputs to third party design firms. Similarly, its dominant position within the manufacturing equipment industry gave it considerable leverage within the fabrication space, indirectly cutting Huawei’s access to semiconductor manufacturing. By threatening to cut off critical resources for design and fabrication, the United States effectively disincentivized third-party engagement with Huawei. Huawei soon lost crucial access to advanced semiconductors and trailed behind in the smartphone market in the subsequent years, with a report stating that the United States’ efforts cost the company roughly $30 billion a year.

The United States’ policy on semiconductor export control illustrates how having control over fundamental components of the global value chain enables an agent to produce rippling effects downstream. Specifically, the influence the United States was able to exert on China derived from its control over critical chokepoints; the earlier export control measures executed by the United States demonstrated that export controls enacted without sufficient leverage are largely ineffective.

Even so, there are inherent risks associated with a frequent tightening of chokepoints, especially if conducted unilaterally. Since the semiconductor industry is highly competitive and dynamic, companies are frequently producing new innovations within the market. Hence, while withholding critical technology and resources may be effective in the short run, a sustained use of export controls provides opportunities for competitors to produce reliable substitutes and fill the gap within the market. These risks are mitigated by multilateral export controls, where multiple producers along the same chokepoint collectively enact export controls, making it much more difficult for substitutes to be sourced or replaced. Indeed, the Biden administration has increasingly engaged in multilateral efforts in export controls– the Dutch-Japanese-American ban on equipment exports to China is a clear example. More importantly, the proposed Chip 4 alliance provides another critical avenue where multilateral action can be taken.

The World under Chip 4

The stated purpose of the Chip 4 alliance is to provide the four member states with a platform to coordinate policies relating to chip production, research and development, and supply chain management. The United States has outlined the arrangement as one that is fundamentally distinct from its export control policies against China, deeming it a necessary multilateral coordination mechanism rather than an alliance driven by geopolitical competition. Yet, what would happen if the four member states were to operate under complete coordination and utilize their significant leverage? Acting in a coordinated effort, the Chip 4 would possess unprecedented control over the semiconductor industry, creating an extremely powerful inner circle. In many ways, the formation of the Chip 4 could lead to an extensive weaponization of the global value chain.

As a collective, the United States, Japan, South Korea, and Taiwan would act as the most dominant force within the semiconductor industry, given the capability to exercise significant leverage across almost all areas within the global value chain. When combining their expertise, the Chip 4 would have a majority share in all aspects of the global value chain except for assembly and testing:

Figure #3, with data adapted from Figure #2

As seen above, the Chip 4 could engage in chipmaking processes with minimal engagement from outside sources. More critically, the coordination of resources provides the Chip 4 with a much stronger grasp on chokepoints than the members would have been able to acquire individually. In the design sector, for instance, the United States possesses a 49% share of the market. While significant, the Chip 4 would enhance this market dominance to 84% by combining the capabilities of Japan, South Korea, and Taiwan – a multilateral effort to restrict design exports would severely limit the number of reliable substitutes needed for semiconductor production. The Chip 4 would also hold 63% of the market share within the fabrication industry. While a significant figure, it underestimates the actual strength of the Chip 4 within advanced manufacturing; TSMC, Samsung, and Intel have been able to produce logic chips within 10 nanometers, providing the alliance with near-exclusive access to leading logic technologies. Within the equipment industry, the United States and Japan can still provide essential resources to the leading fabrication firms, which are concentrated in Taiwan, South Korea, and the United States, while the Chip 4’s ability to restrict tooling to outside states could also be used to enhance the position of existing fabrication firms. Imaginably, the Netherlands’ ASML would also be a close ally to the Chip 4, providing essential equipment with its EUV tooling. Hence, the Chip 4 would inevitably act as a dominant force within the design, fabrication, and equipment industries, greatly shifting the dynamics of the global semiconductor industry.

Conceivably, then, the Chip 4 can be used as an instrument to advance the United States’ technological race against China. Since the Chip 4 holds expertise across almost all aspects of the global value chain, it can rearrange the supply chain in a way that heavily reduces Chinese involvement and access, preventing the country from establishing a strong foothold within the industry. So far, China has been reliant on the technological prowess of its Far Eastern neighbors for its own development – it houses both Taiwanese and Korean fabrication facilities, providing it with access to logic and memory-based manufacturing. Korean companies such as Samsung and Hynix have been especially involved in China’s semiconductor ecosystem, providing a critical access point to the nation’s technological development; here, China can utilize technological leakages from more advanced fabrication sites to conduct essential knowledge-based transfers. Yet, under American leadership, members of the Chip 4 alliance may opt to reduce further investments within Chinese borders, effectively stalling Chinese progress.

Given the advantages the members would attain by forming a coalition, the prospect of establishing the Chip 4 appears highly attractive. However, the current state of the alliance suggests that its formation remains a far-reaching ideal. Although plans of a coalition have been in discussion since March of 2022, the prospective member states have been slow to set the groundwork for a coordinated policy. So far, only two meetings have been held to discuss the nascent coalition. The first occurred in September of 2022 and was attended only by working-level officials. A more recent meeting occurred virtually in February of 2023 between senior officials, though more concrete plans for the coalition have yet to be laid out. Despite its salient benefits, the establishment of the coalition presents significant risk factors and challenges for its member states to confront, prompting a more cautious approach to the alliance. These pressing obstacles serve as the greatest source of inertia for the alliance’s progression.

The Geopolitical Challenge

The principal obstacle to a formal declaration of Chip 4 membership stems from the geopolitical implications it carries. Since the Chip 4 can be leveraged to impede China’s semiconductor development, a commitment towards the alliance will undoubtedly be interpreted antagonistically by the Chinese government. For Asian members who have complicated economic and geopolitical ties to China, this can serve as a significant barrier to entry. Unsurprisingly, the Chinese government has voiced concerns against the coalition, with a spokesman specifically urging the South Korean government to reconsider its long-term interests before making formal commitments. Diplomatically, South Korea has maintained stronger relations with China compared to other Chip 4 states and therefore has a weaker interest in slowing China’s semiconductor progress. While Japan and Taiwan have demonstrated strong interest in following the United States’ multilateral initiative even at the cost of worsening diplomatic ties with China, South Korea has indeed been more reluctant to act– of the four member states, South Korea was the last to commit to a preliminary meeting discussing the Chip 4. Within the semiconductor industry, South Korea’s tie to the Chinese market is significant; Samsung and Hynix have built numerous fabrication facilities in China, and the Chinese market accounted for 48% of South Korea’s memory chip exports in 2021. In addition, the Chinese government has demonstrated a willingness to engage in retaliatory action when its interests are placed under threat. In 2017, for instance, the Chinese government implemented policies to restrict trade with Korea as a response to its adoption of THAAD anti-missile technology. More recently, it restricted the export of gallium and germanium following the Dutch-Japanese-American export ban on semiconductor equipment. As such, any steps taken to restrict Chinese access to technology will likely lead to an escalation of trade restrictions, inflicting high economic costs on all involved parties. Attaining membership in the Chip 4 therefore carries a fundamental risk, and South Korea appears to be the most disinclined to act under such circumstances.

There are also geopolitical tensions among the prospective Chip 4 members that make a formal coalition difficult to establish. While Japan, South Korea, and Taiwan each have strong diplomatic ties with the United States, the relationship between the East Asian member states tends to rest on more tentative grounds. South Korea and Japan’s foreign relations have not fully recovered from their wartime past, which remains a source of diplomatic friction; in 2018, South Korea’s Supreme Court ruled that Japanese companies must compensate for their forced use of Korean labor in their wartime factories, prompting the Japanese government to retaliate by restricting the export of essential semiconductor-related chemicals to Korea. On a different note, a South Korean official raised questions about establishing a formal alliance with Taiwan, seeking assurance from the U.S. government that Taiwan’s membership would not constitute a violation of the One China policy. These concerns indicate that diplomatic tensions concerning the Chip 4 are not only manifesting externally, but internally as well. Clearly, the United States must play a role in ameliorating these tensions for a seamless establishment of the Chip 4. The arrangement of the trilateral summit between the United States, Japan, and South Korea in August of 2023 demonstrates the United States’ willingness to forge stronger ties between the Asian states, but it remains to be seen whether its efforts will be sufficient for the formation of the chip alliance.

The Business Challenge

When discussing the Chip 4, some have likened the alliance to OPEC in the oil business, observing that the centralized coordination of the semiconductor industry among the 4 member states could produce a cartel-like presence within the market. While there may be some similarities between the two coalitions, it is important to point out key differences: the coordination efforts of the Chip 4 would be conducted in the interest of national security, coming at the expense of private firms by depriving them of essential markets. The conflict between national security and business interests thus serves as another point of friction for the Chip 4’s establishment. Already, American firms have demonstrated increasing resistance to the tightening of sanctions against Chinese firms. When the U.S. announced export bans in 2022, Lam Research, Applied Materials, and KLA, U.S.-based equipment manufacturers, stated that they could lose up to $5 billion in revenue from China. Following the enforcement of the bans, Applied Materials came under criminal probe for supplying shipments to China, reportedly selling to Chinese fabrication firms through disguised third parties. The realization of the Chip 4 would likely signify an escalation of trade restrictions against China, meaning businesses that have typically relied on Chinese consumption for their revenue would have much to lose from the maneuver. A sustained exclusion of exports to China would thus be received negatively by semiconductor firms, which rely on its large market for their business.

One must also consider the possible effects the formation of the Chip 4 may have on competition and chipmaking innovation. A coordination of semiconductor manufacturing would be a source of concern for leading fabrication firms, which may be wary of the prospect of sharing technologies with potential rivals. As noted by U.S. government officials, the South Korean leadership has expressed apprehensions that companies such as TSMC and Samsung may be encouraged to engage in knowledge exchange. Similarly, there are worries that the Chip 4 initiative may be used by the United States to place its chipmaking firms under more favorable conditions within the market. Indeed, if the semiconductor firms were to engage in explicit coordination efforts regarding manufacturing and distribution, some firms would undoubtedly benefit more than others; it would be a challenge for the Chip 4 to reach an agreement that accommodates the competing interests of all governments and private firms. More importantly, the introduction of governmental intervention could greatly reduce the competitiveness of the industry, stalling the pace of innovation in the process. Here, an overextension of governmental control could reshape the semiconductor industry for the worse, depriving the industry of its most valuable innovations. To alleviate these business concerns, the Chip 4 must assure firms that it will strive toward geopolitical objectives while maintaining the integrity of the industry’s operations and practices. A failure to do so will be highly costly not only to the industry, but also to the various other industries that rely on semiconductor development.

The Future of Chip 4

Overall, it remains uncertain what will become of the Chip 4. The two preliminary meetings indicate that there is a nascent interest in the coalition among the Asian states, but the inner mechanisms of the alliance have yet to be fully articulated. Additionally, the scarcity of official statements regarding the alliance indicates that the dialogue surrounding it remains highly tentative; these developments suggest that the Chip 4’s formation will not be realized in the coming years but may take much longer to complete. In truth, if the Chip 4 were to reshape the semiconductor industry as outlined above, it would be wise for the member states to approach the opportunity with careful deliberation. While a potent concept, the prospective alliance remains held back by the geopolitical and business concerns that greatly damage its appeal. The threat of an escalation of trade-related conflicts, coupled with the challenges of business coordination, raises questions about the effectiveness of the coalition. The American leadership must demonstrate that the benefits of the alliance clearly outweigh the risks before other prospective members take any substantial steps.

Even if the Chip 4 fails to form, however, the very discussion of its concept signifies a decisive shift in the state of the industry: geopolitical concerns have leaked into the semiconductor world, fundamentally transforming business practices across regions. The United States will continue to tighten its semiconductor exports to China and prompt many of its allies to engage in similar efforts. China will continue to look for avenues of innovation that circumvent its rival’s technology restrictions. The remaining players within the field will find it increasingly difficult to engage with one global power without displeasing another. As technological advancements raise the stakes of attaining semiconductor access, the industry will likely split even in the absence of the Chip 4. With or without it, the globalized era of chipmaking is nearing its end, ushering in a fragmented landscape in its stead.

Also Read:

The State of The Foundry Market Insights from the Q2-24 Results

Application-Specific Lithography: Patterning 5nm 5.5-Track Metal by DUV

3D IC Design Ecosystem Panel at #61DAC


AI: Will It Take Your Job? Understanding the Fear and the Reality
by Ahmed Banafa on 08-28-2024 at 10:00 am

In recent years, artificial intelligence (AI) has emerged as a transformative force across industries, driving both optimism and anxiety. As AI continues to evolve, its potential to automate tasks and improve efficiency raises an inevitable question: Will AI take our jobs? This fear is compounded by frequent reports of layoffs, both in technology and other sectors, leading many to worry that AI might be accelerating job losses. But is this fear justified? In this essay, we will explore the impact of AI on the job market, the factors contributing to recent layoffs, and whether people should genuinely be afraid of AI’s growing presence in the workplace.

The Historical Context of Technological Disruption

To understand the current anxiety surrounding AI, it’s essential to place it within the broader context of technological disruption throughout history. Technological advancements have always had profound effects on employment. The Industrial Revolution, for example, dramatically changed the landscape of work, replacing manual labor with machines and shifting economies from agrarian to industrial. This period saw widespread fear and resistance, with movements like the Luddites destroying machinery they believed threatened their livelihoods.

However, history also shows that technological advancements can lead to the creation of new industries and jobs. The rise of automobiles, for instance, displaced jobs related to horse-drawn carriages but created new opportunities in car manufacturing, road construction, and automotive services. Similarly, the advent of computers and the internet revolutionized nearly every industry, leading to the rise of entirely new job categories like software development, IT support, and digital marketing.

AI represents the latest chapter in this ongoing story of technological disruption. But unlike previous technologies, AI has the potential to automate not just manual labor but also cognitive tasks, leading to concerns that it could replace a broader range of jobs, including those traditionally considered safe from automation.

Understanding AI and Its Capabilities

Artificial intelligence is a broad field encompassing various technologies designed to mimic human intelligence. These technologies include machine learning, natural language processing, computer vision, and robotics. AI systems can analyze data, recognize patterns, make decisions, and even learn from experience, allowing them to perform tasks that once required human intelligence.

Key Areas of AI Impact:
  1. Manufacturing and Production: AI-powered robots and automation systems have been integral to modern manufacturing. These machines can work tirelessly, performing repetitive tasks with precision and speed. In industries like automotive manufacturing, robots handle everything from welding to assembly, significantly reducing the need for human labor on production lines.
  2. Customer Service: AI has made significant inroads into customer service through chatbots and virtual assistants. These tools can handle a wide range of customer inquiries, from answering frequently asked questions to processing orders, reducing the need for large customer service teams.
  3. Healthcare: AI is revolutionizing healthcare by assisting in diagnosis, treatment planning, and even surgery. AI algorithms can analyze medical images, identify patterns, and suggest potential diagnoses, often with greater accuracy than human doctors. In surgical settings, AI-powered robots assist surgeons, improving precision and outcomes.
  4. Finance: In the financial industry, AI is used for algorithmic trading, fraud detection, and risk assessment. AI systems can analyze vast amounts of financial data in real-time, making decisions faster than any human could, which has transformed trading floors and back offices.
  5. Creative Industries: Even creative fields are not immune to AI’s reach. AI tools can generate music, write articles, design logos, and even create visual art. While these tools are often used to assist human creators rather than replace them, they raise questions about the future of creative jobs.
  6. Software Engineers and Developers: AI is increasingly automating parts of software development, such as code generation and bug detection, which could reduce the need for entry-level developers. However, fully replacing software engineers is unlikely, as the field requires critical thinking, creativity, and a deep understanding of complex problems that AI cannot yet replicate. Instead, AI is expected to enhance the work of engineers, allowing them to focus on higher-level tasks while improving overall efficiency.

The Reality of AI-Induced Layoffs

The fear of AI taking jobs is not unfounded, particularly as reports of layoffs in both tech and non-tech sectors dominate the news. However, it’s important to recognize that layoffs are rarely caused by a single factor. Economic conditions, shifts in consumer behavior, and organizational restructuring all play significant roles.

Economic Factors: The global economy has faced significant challenges in recent years, including the COVID-19 pandemic, inflation, and supply chain disruptions. These factors have led companies to reassess their operations, often resulting in cost-cutting measures such as layoffs. In such cases, AI may be seen as a way to maintain productivity with a reduced workforce, but it is not the sole cause of job losses.

Technological Disruption: As companies strive to remain competitive in an increasingly digital world, they are investing in AI and automation. This investment can lead to workforce reductions, particularly in roles that can be easily automated. For example, in retail, self-checkout systems and automated inventory management have reduced the need for cashiers and stock clerks. In finance, AI-driven trading algorithms and robo-advisors are displacing traditional roles in investment banking and financial advising.

Shifts in Business Models: The pandemic accelerated the shift toward digital and remote work, prompting companies to reevaluate their business models. Some jobs, particularly those tied to physical office spaces or traditional retail, have become redundant as companies adapt to new ways of working. AI has played a role in enabling this transition by providing tools for remote collaboration, customer service, and logistics.

However, it’s crucial to note that while AI contributes to job displacement in some areas, it also creates new opportunities. The demand for AI specialists, data scientists, and machine learning engineers is growing rapidly. These roles require skills in AI development, data analysis, and cybersecurity, offering new career paths for those willing to adapt and reskill.

The Fear of AI: Is It Justified?

The fear of AI taking jobs is often rooted in the perception that AI is an unstoppable force that will render human workers obsolete. While AI is undoubtedly powerful and capable of performing tasks that were once thought to require human intelligence, this fear may be overstated for several reasons.

Human Creativity and Emotional Intelligence: AI excels at tasks that involve data processing, pattern recognition, and decision-making based on predefined criteria. However, it struggles with tasks that require creativity, empathy, and nuanced understanding—areas where humans excel. Jobs that involve human interaction, emotional intelligence, and creative problem-solving are less likely to be fully automated. For example, while AI can assist in diagnosing diseases, the human touch is still essential in patient care, where empathy and communication are crucial.

New Job Creation: Just as previous technological revolutions created new industries and jobs, AI is expected to do the same. The rise of AI is leading to the creation of entirely new job categories, such as AI ethics specialists, data privacy officers, and AI trainers. These roles involve overseeing AI systems, ensuring they operate ethically and legally, and training AI models to perform specific tasks. Additionally, AI is likely to create demand for jobs in industries that do not yet exist, much like the internet gave rise to social media management and e-commerce.

Collaborative Work: Rather than replacing human workers, AI is increasingly seen as a tool that can augment human capabilities. In many fields, AI is being used to assist humans rather than replace them. For instance, in healthcare, AI can help doctors analyze medical images and suggest potential diagnoses, but the final decision is still made by a human doctor. In creative industries, AI tools can generate ideas or draft content, but the human touch is needed to refine and personalize the output.

Regulatory and Ethical Considerations: Governments and organizations are becoming increasingly aware of the ethical implications of AI. There is growing recognition of the need for regulations to ensure that AI is used responsibly and that its impact on the workforce is managed. Some countries are already implementing policies to protect workers from the negative effects of automation, such as retraining programs and social safety nets. These measures can help mitigate the impact of AI on employment and ensure that workers are not left behind in the AI-driven economy.

Preparing for the AI-Driven Future

While the fear of AI taking jobs is understandable, widespread job loss is not inevitable. The key to navigating the AI-driven future lies in preparation and adaptability. Workers, companies, and governments all have roles to play in ensuring that the transition to an AI-driven economy is as smooth and inclusive as possible.

Reskilling and Upskilling: One of the most effective ways for workers to prepare for the AI-driven future is to invest in reskilling and upskilling. As AI continues to evolve, the demand for skills in AI development, data science, and cybersecurity is growing. Workers who acquire these skills will be well-positioned to take advantage of new job opportunities in the AI-driven economy. Additionally, workers should focus on developing skills that are difficult for AI to replicate, such as creativity, critical thinking, and emotional intelligence.

Lifelong Learning: In an AI-driven world, the concept of lifelong learning becomes increasingly important. Workers must be willing to continuously learn and adapt to new technologies and processes. This may involve taking online courses, attending workshops, or participating in on-the-job training programs. Companies can support lifelong learning by offering training and development opportunities to their employees, helping them stay competitive in a rapidly changing job market.

Adapting to Change: Workers should stay informed about technological advancements and be willing to adapt to new tools and processes that can enhance their work. For example, in industries like marketing, AI-driven tools are being used to analyze customer data, optimize ad campaigns, and personalize content. By embracing these tools, marketers can improve their effectiveness and remain valuable to their employers.

Focusing on Uniquely Human Skills: As AI continues to automate routine and repetitive tasks, workers should focus on developing skills that are uniquely human. These include creativity, emotional intelligence, problem-solving, and communication. Jobs that require these skills are less likely to be automated, as AI struggles to replicate the nuances of human interaction and creativity.

Government and Corporate Responsibility: Governments and companies also have a role to play in preparing for the AI-driven future. Policymakers should implement measures to protect workers from the negative effects of automation, such as retraining programs, social safety nets, and policies that encourage job creation in emerging industries. Companies, on the other hand, should invest in their employees by offering training and development opportunities and creating a culture of continuous learning.

Embracing the Future

The rise of AI is undeniably transforming the job market, leading to both challenges and opportunities. While it is natural to fear the unknown, the key to thriving in an AI-driven world lies in preparation, adaptability, and a willingness to embrace change. Rather than fearing AI, workers should focus on developing skills that are in demand, staying informed about technological advancements, and being open to new opportunities.

AI is not an unstoppable force that will render all human workers obsolete. Instead, it is a tool that, when used responsibly, can enhance human capabilities and create new opportunities. By focusing on uniquely human skills, investing in lifelong learning, and staying adaptable, workers can not only survive but thrive in the AI-driven future. The fear of AI may be understandable, but with the right approach, it can also be an opportunity for growth, innovation, and a brighter future for all.

Ahmed Banafa’s books

Covering: AI, IoT, Blockchain and Quantum Computing

Also Read:

The State of The Foundry Market Insights from the Q2-24 Results

AMAT Underwhelms- China & GM & ICAP Headwinds- AI is only Driver- Slow Recovery

The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation


Bug Hunting in NoCs. Innovation in Verification

Bug Hunting in NoCs. Innovation in Verification
by Bernard Murphy on 08-28-2024 at 6:00 am


Despite NoCs being finely tuned in legacy subsystems, when subsystems are connected in larger designs or even across multi-die structures, differing traffic policies and system-level delays between NoCs can introduce new opportunities for deadlocks, livelocks and other hazards. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is NoCFuzzer: Automating NoC Verification in UVM, published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2024). The authors are from Peking University, Hong Kong University, and Alibaba.

Functional bugs should be relatively uncommon in production-grade NoCs, but performance bugs are highly dependent on expected traffic and configuration choices. By their nature, NoCs will almost unavoidably include cycles; the mesh and toroidal topologies common in many-core servers and AI accelerators are obvious examples. Under sufficient load, traffic in such cases may be subject to deadlock or livelock problems. Equally, weaknesses in scheduling algorithms can lead to resource starvation. Such hazards need not block traffic in a formal sense (never clearing) to undermine product success. If they take sufficiently long to clear, they will still fail to meet the expected service level agreements (SLAs) for the system.

There are traffic routing and scheduling solutions to mitigate such problems – many such solutions. These work fine within one NoC designed by one system integration team, but what happens when you must combine multiple legacy/3rd-party subsystems, each with a NoC designed according to its own policy preferences and connected through a top-level NoC with its own policies? This issue takes on even more urgency in chiplet-based designs, which add interposer NoCs to connect between chiplets. Verification solutions become essential to tease out potential bugs between these interconnected networks.

Paul’s view

A modern server CPU can have 100+ cores all connected through a complex coherent mesh-based network-on-a-chip (NOC). Verifying this NOC for correctness and performance is a very hard problem and a hot topic with many of our top customers.

This month’s paper takes a concept called “fuzzing” from the software verification world and applies it to UVM-based verification of a 3×3 OpenPiton NOC. The results are impressive: line and branch coverage hit 95% in 120hrs with the UVM bench vs. 100% in 2.5hrs with fuzzing; functional covergroups reach 89-99% in 120hrs with the UVM bench vs. 100% across all covergroups in 11hrs with fuzzing. The authors also try injecting a corner-case starvation bug into the design. The baseline UVM bench was not able to hit the bug after 100M packets, whereas fuzzing hit it after only 2M packets.

To achieve these results the authors use a fuzzing tool called AFL – check out its Wikipedia page here. A key innovation in the paper is the way the UVM bench is connected to AFL: the authors invent a simple 4-byte XYLF format to represent a packet on the NOC. XY is the destination location, L the length, and F a “free” flag. The UVM bench reads a binary file containing a sequence of 4-byte chunks and injects the packets into the NOC round-robin style, the first packet from CPU 00, then CPU 01, 02, 10, 11, and so on. If F is below some static threshold T, the UVM bench has that CPU put nothing into the NOC for the equivalent length of that packet. The authors set T for a 20% chance of a “free” packet.
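
To make the mapping concrete, here is a minimal Python sketch (not the authors’ code) of how a fuzz-generated binary file could be decoded into stimulus using the 4-byte XYLF format described above. The field clamping, the packet structure, and the maximum length are assumptions for illustration; in the paper this translation happens inside the UVM bench itself.

```python
# Minimal sketch of decoding a fuzzer-generated file into 4-byte XYLF packets
# and assigning them round-robin to the nodes of a 3x3 mesh.
from dataclasses import dataclass
from typing import List

MESH_SIZE = 3          # 3x3 OpenPiton mesh
FREE_THRESHOLD = 51    # ~20% of the 0-255 byte range: F below this means an idle slot

@dataclass
class Packet:
    src: int      # injecting node, assigned round robin by the testbench
    dst_x: int    # destination X coordinate
    dst_y: int    # destination Y coordinate
    length: int   # payload length in flits (max length is an assumption)
    free: bool    # if True, the source stays idle for 'length' cycles instead of sending

def decode_stimulus(raw: bytes) -> List[Packet]:
    """Split the fuzzer's binary output into 4-byte XYLF chunks and assign
    them to CPUs 00, 01, 02, 10, 11, ... in round-robin order."""
    packets = []
    nodes = MESH_SIZE * MESH_SIZE
    for i in range(len(raw) // 4):
        x, y, length, f = raw[4 * i: 4 * i + 4]
        packets.append(Packet(
            src=i % nodes,
            dst_x=x % MESH_SIZE,          # clamp fuzzed bytes onto the mesh
            dst_y=y % MESH_SIZE,
            length=max(1, length % 16),
            free=(f < FREE_THRESHOLD),
        ))
    return packets
```

The UVM sequence would then drive each decoded packet (or idle gap) from its assigned source node into the NOC.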

AFL is given an initial seed set of binary files taken from a non-fuzzed UVM bench run, applies them to the UVM bench, and is provided back with coverage data from the simulator – each line, branch, and covergroup is just considered a coverpoint. AFL then starts applying mutations, randomly modifying bytes, splicing and re-stitching binary files, etc. A genetic algorithm guides the mutation towards increasing coverage. It’s a wonderfully abstract, simple, and elegant utility, completely blind to what the coverage it is driving up actually represents.
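
For readers unfamiliar with coverage-guided fuzzing, the toy Python loop below sketches the core idea: keep a corpus of inputs, mutate them, and retain any mutant that reaches new coverpoints. It is only a caricature of AFL, which adds compile-time instrumentation, a fork server, deterministic and havoc mutation stages, splicing, and genetic scheduling; the run_sim callback, which would launch a simulation and return the coverpoints it hit, is a placeholder assumption.

```python
import random
from typing import Callable, List, Set

def coverage_guided_fuzz(seeds: List[bytes],
                         run_sim: Callable[[bytes], Set[str]],
                         iterations: int = 1000) -> List[bytes]:
    """Toy coverage-guided loop: mutate corpus entries and keep any mutant
    that hits coverpoints (lines, branches, covergroup bins) not seen before."""
    corpus = [s for s in seeds if s]          # assume non-empty seed files
    seen: Set[str] = set()
    for inp in corpus:
        seen |= run_sim(inp)                  # coverage from the un-mutated seeds

    for _ in range(iterations):
        mutant = bytearray(random.choice(corpus))
        # Simple byte-level mutations; real AFL also flips bits, splices files, etc.
        for _ in range(random.randint(1, 4)):
            mutant[random.randrange(len(mutant))] = random.randrange(256)
        cov = run_sim(bytes(mutant))          # e.g. run the sim, parse its coverage DB
        if cov - seen:                        # new coverage: keep this input
            corpus.append(bytes(mutant))
            seen |= cov
    return corpus
```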

Great paper. Lots of potential to take this further commercially!

Raúl’s view

Fuzzing is a technique for automated software testing where a program is fed malformed or partially malformed data. These test inputs are usually variations on valid samples, modified either by mutation or according to a defined grammar. This month’s paper uses AFL (American Fuzzy Lop, named after a breed of rabbit), which employs mutation; its description offers a good understanding of fuzzing. Note that fuzzing differs from the random or constrained-random verification commonly applied in hardware verification.

The authors apply fuzzing techniques to hardware verification, specifically targeting Network-on-Chip (NoC) systems. The paper details the development of a UVM-based environment connected to the AFL fuzzer within a standard industrial verification process. They utilized Verilog, the Synopsys VCS simulator, and conventional coverage metrics, predominantly code coverage. To interface the AFL fuzzer to the UVM test environment, the test output of the fuzzer must be translated into a sequence of inputs for the NoC. Every NoC packet is represented as a 40-bit string containing the destination address, length, port (each node in the NoC has several ports), and a control flag that determines whether the packet is executed or the port remains idle. These strings are mutated by AFL, and a simple grammar converts them into inputs for the NoC. This is one of the main contributions of the paper. The fuzzing framework is adaptable to any NoC topology.

NoCs are the communication fabric of choice for digital systems containing hundreds of nodes and are hard to verify. The paper presents a case study of a compact 3×3 mesh NoC element from OpenPiton. The results are impressive: fuzz testing achieved 100% line coverage in 2.6 hours, while constrained random verification (CRV) only reached 97.3% in 120 hours. For branch coverage, fuzz testing achieved full coverage in 2.4 hours, while CRV only reached 95.2% in 120 hours.

The paper is well written and offers impressive detail, with a practical focus that underscores its relevance in an industrial context. While occasionally somewhat verbose, it is certainly an excellent read.


Alphawave Semi Unlocks 1.2 TBps Connectivity for HPC and AI Infrastructure with 9.2 Gbps HBM3E Subsystem

Alphawave Semi Unlocks 1.2 TBps Connectivity for HPC and AI Infrastructure with 9.2 Gbps HBM3E Subsystem
by Kalar Rajendiran on 08-27-2024 at 10:00 am

9.2Gbps HBM3E Subsystem

In the rapidly evolving fields of high-performance computing (HPC) and artificial intelligence (AI), reducing time to market is crucial for maintaining competitive advantage. HBM3E systems play a pivotal role in this regard, particularly for hyperscaler and data center infrastructure customers. Alphawave Semi’s advanced HBM3E IP subsystem significantly contributes to this acceleration by providing a robust, high-bandwidth memory solution that integrates seamlessly with existing and new architectures.

The 9.2 Gbps HBM3E subsystem, combined with Alphawave Semi’s innovative silicon interposer, facilitates rapid deployment and scalability. This ensures that hyperscalers can quickly adapt to the growing data demands, leveraging the subsystem’s 1.2 TBps connectivity to enhance performance without extensive redesign cycles. The modular nature of the subsystem allows for flexible configurations, making it easier to tailor solutions to specific application needs and accelerating the development process.

Micron’s HBM3E Memory

Micron’s HBM3E memory stands out in the competitive landscape due to its superior power efficiency and performance. While all HBM3E variants aim to provide high bandwidth and low latency, Micron’s version offers up to 30% lower power consumption compared to its competitors. This efficiency is critical for data centers and AI applications, where power usage directly impacts operational costs and environmental footprint.

Micron’s HBM3E memory achieves this efficiency through advanced fabrication techniques and optimized design, ensuring that high-speed data transfer does not come at the cost of increased power usage. This makes it a preferred choice for integrating with high-performance systems that demand both speed and sustainability.

Alphawave Semi’s Innovative Silicon Interposer

At the heart of Alphawave Semi’s HBM3E subsystem is their state-of-the-art silicon interposer. This interposer is crucial for connecting HBM3E memory stacks with processors and other components, enabling high-speed, low-latency communication. In designing the interposer, Alphawave Semi addressed the challenges of increased signal loss due to longer interposer routing. By evaluating critical channel parameters such as insertion loss, return loss, intersymbol interference (ISI), and crosstalk, the team developed an optimized layout. Signal and ground trace widths, along with their spacing, were analyzed using 2D and 3D extraction tools, leading to a refined model that integrates microbump connections to signal traces. This iterative approach allowed the team to effectively shield against crosstalk between layers.

Detailed analyses of signal layer stack-ups, ground trace widths, vias, and the spacing between signal traces enabled the optimization of the interposer layout to mitigate adverse effects and boost performance. To achieve higher data rates, a jitter decomposition and analysis were performed on the interposer to budget for random jitter, power supply induced jitter, duty cycle distortion, and other factors. This set the necessary operating margins.
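
As a rough illustration of what such a jitter budget looks like, the sketch below rolls up deterministic and random jitter terms against the unit interval at 9.2 Gbps. All component values are assumptions chosen for illustration, not Alphawave figures; only the structure – deterministic terms add linearly, random jitter scales with the Q factor for the target bit error rate – reflects standard budgeting practice.

```python
# Illustrative jitter-budget roll-up (component numbers are assumptions, not Alphawave data).
DATA_RATE_GBPS = 9.2
UI_PS = 1e3 / DATA_RATE_GBPS          # unit interval ~= 108.7 ps

rj_rms_ps = 1.5                        # random jitter, rms (assumed)
psij_pp_ps = 8.0                       # power-supply-induced jitter, peak-to-peak (assumed)
dcd_pp_ps = 5.0                        # duty-cycle distortion, peak-to-peak (assumed)
other_dj_pp_ps = 6.0                   # ISI, crosstalk, etc., peak-to-peak (assumed)

Q_BER_1E12 = 7.03                      # Q factor for a 1e-12 target BER

tj_pp_ps = (psij_pp_ps + dcd_pp_ps + other_dj_pp_ps) + 2 * Q_BER_1E12 * rj_rms_ps
margin_ps = UI_PS - tj_pp_ps

print(f"UI = {UI_PS:.1f} ps, total jitter = {tj_pp_ps:.1f} ps, eye margin = {margin_ps:.1f} ps")
```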

In addition, the interposer’s stack-up layers for signals, power, and decoupling capacitors underwent comprehensive evaluations for both CoWoS-S and CoWoS-R technologies in preparation for the transition to the upcoming HBM4 standard. The team engineered advanced silicon interposer layouts with excess margin, ensuring these configurations can support the elevated data rates required by future HBM4 enhancements and varying operating conditions.

Alphawave Semi’s HBM3E IP Subsystem

Alphawave Semi’s HBM3E IP subsystem, comprising both PHY and controller IP, sets a new standard in high-performance memory solutions. With data rates reaching 9.2 Gbps per pin and a total bandwidth of 1.2 TBps, this subsystem is designed to meet the intense demands of AI and HPC workloads. The IP subsystem integrates seamlessly with Micron’s HBM3E memory and Alphawave’s silicon interposer, providing a comprehensive solution that enhances both performance and power efficiency.
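
The headline numbers are easy to sanity-check: assuming the standard 1024-bit HBM3E data interface per stack, 9.2 Gbps per pin works out to roughly 1.2 TBps, as the short calculation below shows.

```python
# Back-of-envelope check of the headline bandwidth.
pins = 1024                    # HBM3E DQ width per stack
gbps_per_pin = 9.2

total_gbps = pins * gbps_per_pin            # 9420.8 Gb/s
total_tbytes_per_s = total_gbps / 8 / 1000  # ~1.18 TB/s, i.e. the quoted ~1.2 TBps

print(f"{total_tbytes_per_s:.2f} TB/s per stack")
```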

The subsystem is highly configurable, adhering to JEDEC standards while allowing for application-specific optimizations. This flexibility ensures that customers can fine-tune their systems to achieve the best possible performance for their unique requirements, further reducing the time and effort needed for deployment.

Summary

Alphawave Semi’s HBM3E IP subsystem, powered by their innovative silicon interposer and Micron’s efficient HBM3E memory, represents a significant advancement in high-performance memory technology. By offering unparalleled bandwidth, enhanced power efficiency, and flexible integration options, this subsystem accelerates time to market for hyperscaler and data center infrastructure customers.

For more details, visit

https://awavesemi.com/silicon-ip/subsystems/hbm-subsystem/

Also Read:

Alphawave Semi Tapes Out Industry-First, Multi-Protocol I/O Connectivity Chiplet for HPC and AI Infrastructure

Driving Data Frontiers: High-Performance PCIe® and CXL® in Modern Infrastructures

AI System Connectivity for UCIe and Chiplet Interfaces Demand Escalating Bandwidth Needs


Analog Bits Momentum and a Look to the Future

Analog Bits Momentum and a Look to the Future
by Mike Gianfagna on 08-27-2024 at 6:00 am

Analog Bits Momentum and a Look to the Future

Analog Bits is aggressively moving to advanced nodes. On SemiWiki, Dan Nenni covered new IP in 3nm at DAC here. I covered the new Analog Bits 3nm IP presented at the TSMC Technology Symposium here. And now, there’s buzz about 2nm IP to be announced at the upcoming TSMC OIP event in September. I recently got a briefing from Mahesh Tirupattur, the master of analog IP, enology, and viticulture. The momentum is quite exciting, and I will cover that in this post. There is another aspect to the story – the future impact of all this innovation. Mahesh touched on some of that, and I will add my interpretation of what’s next. Let’s examine Analog Bits momentum and a look to the future.

The Momentum Builds

The Analog Bits catalog continues to grow, with a wide array of data communication, power management, sensing and clocking technology. Here is a partial list of IP that is targeted at TSMC N2:

Glitch Detector (current IP): Instant voltage excursion reporting with high bandwidth and voltage fluctuation detection. Delivers circuit protection and enhances system security in non-intended operation modes. The IP can be cascaded to function similarly to a flash ADC.

Synchronous Glitch Catcher (new IP):  Multi-output synchronized glitch detection. Reports voltage excursions above and below threshold during the clock period with high bandwidth. Improved detection accuracy with system clock alignment that also facilitates debugging and analysis.

Droop Detector (enhanced IP): Extended voltage range 0.495 – 1.05V with higher maximum bandwidth of 500MHz. Differential sensing and synchronous voltage level reporting. Precision in monitoring with continuous observation and adaptive power adjustment. A pinless version that operates at the core voltage is in development.

On-Die Low Dropout (LDO) Regulator (enhanced IP): Improved power efficiency. Fast transient response and efficient regulation and voltage scalability. Offers integration, space savings, and noise reduction. Use cases include high-performance CPU cores and high lane count, high-performance SerDes.

Chip-to-Chip (C2C) IOs (enhanced IP): Supports core voltage signaling. Best suited for CoWoS, with 2GHz+ speed of operation and 10GHz+ in low-loss media.

High-Accuracy PVT Sensor (enhanced IP): Untrimmed temperature accuracy was originally +/- 8 degrees C.  An improved version has been developed that delivers +/- 3.5 degrees C. Working silicon is available in TSMC N5A, N4 & N3P. The figure below summarizes performance.

PVT Sensor Temp Performance

Looking ahead, accuracy of +/- 1 degree C is possible with trimming. The challenge is that trimming is itself affected by die temperature, making this accuracy difficult to achieve. Analog Bits has developed a way around this issue and will be delivering high-accuracy PVT sensors for any die temperature.

This background sets the stage for what’s to come at the TSMC OIP event. In September, Analog Bits will tape out a test chip in TSMC N2. Here is a summary of what’s on that chip:

  • Die Size: 1.43×1.43mm
  • Wide-range PLL
  • 18-40MHz Xtal OSC
  • HS Differential Output Driver and Clock Receiver – Power Supply Droop Detector
  • High Accuracy PVT Sensors
  • Pinless High Accuracy PVT Sensor
  • LCPLL
  • Metal Stack – 1P 15M

The graphic at the top of this post is a picture of this test chip layout. In Q1 2025, there will be another 2nm test chip with all the same IP plus:

  • LDO
  • C2C & LC PLL’s
  • High Accuracy Sensor

The momentum and excitement will build.

A Look to the Future

Let’s recap some of the headaches analog designers face today. A big one is optimization of performance and power in an on-chip environment that is constantly changing, is prone to on-chip variation, and is faced with all kinds of power-induced glitches. As everyone moves toward multi-die design, these problems are compounded across lots of chiplets that now also need a high-bandwidth, space-efficient, and power-efficient way to communicate.

If we take an inventory of the innovations being delivered by Analog Bits, we see on-chip technology that addresses all of these challenges head-on. Just review the list above and you will see a catalog of capabilities that sense, control and optimize pretty much all of it. 

So, the question becomes, what’s next? Mahesh stated that he views the mission of Analog Bits as making life easier for the system designer. The solutions that are available and those in the pipeline certainly do that. But what else can be achieved? What if all the information being sensed, managed and optimized by the Analog Bits IP could be processed by on-chip software?

And what if that software could deliver adaptive control based on AI technology? This sounds like a new killer app to me. One that can create self-optimizing designs that will take performance and power to the next level.  I discussed these thoughts with Mahesh. He just smiled and said the future will be exciting.

I personally can’t wait to see what’s next.  And that’s my take on Analog Bits momentum and a look to the future.