
The Chip 4: A Semiconductor Elite
by KADEN CHUANG on 08-29-2024 at 6:00 am

Can a 4-member alliance reshape the semiconductor industry?
Photo by Harrison Mitchell on Unsplash

Semiconductors are ubiquitous in electronics and computing devices, making them essential to developments in AI, advanced military systems, and the world economy. As such, nations gain considerable geopolitical and economic leverage from controlling large portions of the global semiconductor value chain: control grants access to key technological and commercial resources while providing the ability to restrict that same access for other nations. For this reason, competition between major powers such as the United States and China has largely manifested itself in efforts to attain, and to restrict, access to semiconductor technology. For example, under the CHIPS and Science Act, the United States government offers subsidies to global manufacturers on the condition that they do not establish fabrication facilities in countries that pose a national security threat. The United States has also established export controls on advanced semiconductor equipment bound for China and reached a deal for the Netherlands and Japan to undertake similar measures. China, on the other hand, is a net importer of semiconductors and deems its reliance on competing nations for semiconductor access a weakness; to counter this, it aims to establish a fully independent value chain, investing billions through its “Made in China 2025” policy.

Perhaps the most ambitious venture to establish greater control over the semiconductor value chain emerged in 2022 under the Biden administration. Prior to the enactment of the CHIPS and Science Act, Biden proposed the Chip 4 alliance, a semiconductor collective comprising the United States, Japan, South Korea, and Taiwan. The four member states are essential to the semiconductor value chain, with each specializing in components the collective needs to develop semiconductors end to end. Under the Chip 4, the four member states would coordinate policies on supply chain security, research and development, and subsidy use. The alliance would hold considerable influence over the distribution of semiconductors and could be used to significantly limit the chip access of geopolitical rivals. Despite its potential influence, the Chip 4 has yet to be realized, and it is unclear whether the prospective members will make firm commitments to the alliance. In this article, we will take a closer look at the Chip 4 coalition and assess how it may influence the semiconductor industry. We will also examine the numerous challenges that have kept the prospective member states from forming the alliance.

A Closer Look at the Global Semiconductor Value Chain
Figure #1, designed by The CHIPS Stack

The semiconductor value chain is composed of three central parts: design, fabrication, and assembly. In the design process, the chip architecture’s blueprint is mapped out to fulfill a particular need. The process is facilitated by design software known as electronic design automation (EDA) and by intellectual property (core IP), which serves as the basic building blocks of chip design. Development continues in the fabrication stage, where the integrated circuits are manufactured. Since these integrated circuits are built at the nanoscale, the fabrication process requires highly specialized inputs, both in materials and in manufacturing equipment. In the final step, the wafers are assembled, packaged, and tested for use in electronic devices: the silicon wafers are sliced into individual chips, placed within resin shells, and put through testing before being delivered back to the manufacturers.

Figure #2, Data taken from the SIA and BCG

In recent decades, the global semiconductor value chain has become increasingly specialized, with contributions split largely between the United States and East Asia. The United States possesses arguably the most important position within the industry, with strong footholds in the design, software, and equipment domains. Its position in the design sector is especially essential: it hosts key businesses such as Intel, Nvidia, Qualcomm, and AMD, which together account for roughly half of the design market. Much of the fabrication market, on the other hand, is concentrated in East Asia, where Taiwan and South Korea play major roles. The two account for much of the world’s leading-edge fabrication, with TSMC producing the world’s most advanced semiconductors and Samsung following closely behind. In addition, Taiwan holds a well-established ecosystem for semiconductor manufacturing, with numerous sites for materials, chemicals, and assembly. Japan, along with the United States and the Netherlands, accounts for most of the industry’s equipment manufacturing, an essential input to the fabrication process. Lastly, China occupies the largest share of the assembly and testing processes and is also a major supplier of gallium and germanium, two materials central to semiconductor manufacturing.

As the distribution of the value chain shows, the semiconductor industry relies on an interdependent network: no state can source semiconductors without the contributions of others. Yet the positioning of nations along the different components of the value chain creates imbalances in the degree of influence each nation has within the industry. This, in turn, creates power dynamics that can be leveraged by nations with greater influence.

Weaponizing the Global Value Chain

Since the global semiconductor value chain operates through an interdependent network of states, states with access to exclusive resources can create chokepoints for rivals, diminishing their semiconductor capacities by withholding elements essential to production. Hence, export controls operate as central weapons within the realm of global technology development, enabling dominant states to decelerate the growth of rising ones.

The United States’ semiconductor-related export control measures against China provide valuable insights into how this principle has played out within the industry. In 2019, for instance, the Trump administration enacted export controls against the Chinese telecommunications company Huawei, employing a twofold measure. First, it banned Huawei from purchasing American-made semiconductors for its devices. Second, it banned Huawei’s subsidiary chip design company, HiSilicon, from purchasing American-made software and manufacturing equipment. Initially, the measure proved ineffective in stunting Huawei’s business operations. Taiwan and South Korea held stronger positions within the semiconductor manufacturing space, and Huawei simply sought their services when American sources were unavailable; the American design firms that provided blueprints for Huawei chips outsourced their manufacturing to foreign shores anyway. Here, the export measures damaged American chipmaking firms more than they did Huawei, depriving domestic businesses of a lucrative client.

However, in an update to the export control policy, the Trump administration extended the measures to third-party suppliers, threatening to cut their access to software, core IP, and manufacturing equipment should they continue to do business with Huawei. The United States, by controlling much of the software and core IP sources, could indirectly restrict Huawei’s access to chip design by denying essential inputs to third-party design firms. Similarly, its dominant position within the manufacturing equipment industry gave it considerable leverage within the fabrication space, indirectly cutting Huawei’s access to semiconductor manufacturing. By threatening to cut off critical resources for design and fabrication, the United States effectively disincentivized third-party engagement with Huawei. Huawei soon lost crucial access to advanced semiconductors and trailed behind in the smartphone market in the subsequent years, with one report stating that the United States’ efforts cost the company roughly $30 billion a year.

The United States’ policy on semiconductor export control illustrates how having control over fundamental components of the global value chain enables an agent to produce rippling effects downstream. Specifically, the influence the United States was able to exert on China derived from its control over critical chokepoints; the earlier export control measures executed by the United States demonstrated that export controls enacted without sufficient leverage are largely ineffective.

Even so, there are inherent risks associated with a frequent tightening of chokepoints, especially if conducted unilaterally. Since the semiconductor industry is highly competitive and dynamic, companies frequently bring new innovations to market. Hence, while withholding critical technology and resources may be effective in the short run, sustained use of export controls gives competitors the opportunity to produce reliable substitutes and fill the gap in the market. These risks are mitigated by multilateral export controls, where multiple producers along the same chokepoint collectively enact export controls, making it much more difficult for substitutes to be sourced or developed. Indeed, the Biden administration has increasingly engaged in multilateral export control efforts – the Dutch-Japanese-American ban on equipment exports to China is a clear example. More importantly, the proposed Chip 4 alliance provides another critical avenue for multilateral action.

The World under Chip 4

The stated purpose of the Chip 4 alliance is to provide the four member states with a platform to coordinate policies relating to chip production, research and development, and supply chain management. The United States has framed the arrangement as fundamentally distinct from its export control policies against China, describing it as a necessary multilateral coordination mechanism rather than an alliance driven by geopolitical competition. Yet what would happen if the four member states were to operate in complete coordination and utilize their significant leverage? Acting in concert, the Chip 4 would possess unprecedented control over the semiconductor industry, creating an extremely powerful inner circle. In many ways, the formation of the Chip 4 could lead to an extensive weaponization of the global value chain.

As a collective, the United States, Japan, South Korea, and Taiwan would be the most dominant force within the semiconductor industry, able to exercise significant leverage across almost all areas of the global value chain. By combining their expertise, the Chip 4 would hold a majority share in every aspect of the global value chain except assembly and testing:

Figure #3, with data adapted from Figure #2

As seen above, the Chip 4 could engage in chipmaking with minimal input from outside sources. More critically, coordinating resources gives the Chip 4 a much stronger grasp on chokepoints than any member could acquire individually. In the design sector, for instance, the United States possesses a 49% share of the market. While significant on its own, the Chip 4 would extend this dominance to 84% by adding the capabilities of Japan, South Korea, and Taiwan – a multilateral effort to restrict design exports would severely limit the reliable substitutes available for semiconductor production. The Chip 4 would also hold 63% of the fabrication market. Even that figure understates the alliance’s strength in advanced manufacturing: TSMC, Samsung, and Intel have all produced logic chips at or below 10 nanometers, giving the alliance near-exclusive access to leading logic technologies. Within the equipment industry, the United States and Japan can supply essential tooling to the leading fabrication firms, which are concentrated in Taiwan, South Korea, and the United States, and the Chip 4’s ability to withhold tooling from outside states could further entrench those firms’ positions. Conceivably, the Netherlands’ ASML would also be a close ally of the Chip 4, providing essential EUV tooling. Hence, the Chip 4 would inevitably act as a dominant force within the design, fabrication, and equipment industries, greatly shifting the dynamics of the global semiconductor industry.

Conceivably, then, the Chip 4 could be used as an instrument to advance the United States’ technological race against China. Since the Chip 4 holds expertise across almost all aspects of the global value chain, it could rearrange the supply chain in a way that heavily reduces Chinese involvement and access, preventing the country from establishing a strong foothold within the industry. So far, China has been reliant on the technological prowess of its Far Eastern neighbors for its own development: it houses both Taiwanese and Korean fabrication facilities, giving it access to logic and memory manufacturing. Korean companies such as Samsung and Hynix have been especially involved in China’s semiconductor ecosystem, providing a critical access point for the nation’s technological development; here, China can exploit technological leakage from more advanced fabrication sites to drive essential knowledge transfers. Yet, under American leadership, members of the Chip 4 alliance may opt to reduce further investments within Chinese borders, effectively stalling Chinese progress.

Given the advantages the members would attain by forming a coalition, the prospect of establishing the Chip 4 appears highly attractive. However, the current state of the alliance suggests that its formation remains a distant prospect. Although plans for a coalition have been in discussion since March of 2022, the prospective member states have been slow to lay the groundwork for a coordinated policy. So far, only two meetings have been held to discuss the nascent coalition: the first occurred in September of 2022 and was attended only by working-level officials; a more recent meeting occurred virtually in February of 2023 between senior officials, though more concrete plans have yet to be laid out. Despite its salient benefits, establishing the coalition presents significant risks and challenges for its member states, prompting a more cautious approach to the alliance. These obstacles are the greatest source of inertia in the alliance’s progression.

The Geopolitical Challenge

The principal obstacle to a formal declaration of Chip 4 membership stems from the geopolitical implications it carries. Since the Chip 4 could be leveraged to impede China’s semiconductor development, a commitment to the alliance will undoubtedly be interpreted antagonistically by the Chinese government. For Asian members with complicated economic and geopolitical ties to China, this is a significant barrier to entry. Unsurprisingly, the Chinese government has voiced concerns about the coalition, with a spokesman specifically urging the South Korean government to weigh its long-term interests before making formal commitments. Diplomatically, South Korea has maintained stronger relations with China than the other Chip 4 states and therefore has a weaker interest in slowing China’s semiconductor progress. While Japan and Taiwan have demonstrated strong interest in following the United States’ multilateral initiative even at the cost of worsening diplomatic ties with China, South Korea has been more reluctant to act: of the four member states, it was the last to commit to a preliminary meeting on the Chip 4. Within the semiconductor industry, South Korea’s ties to the Chinese market are significant; Samsung and Hynix have built numerous fabrication facilities in China, and the Chinese market accounted for 48% of South Korea’s memory chip exports in 2021. In addition, the Chinese government has demonstrated a willingness to retaliate when its interests are threatened. In 2017, for instance, it restricted trade with Korea in response to Korea’s deployment of the THAAD anti-missile system. More recently, it restricted the export of gallium and germanium following the Dutch-Japanese-American export ban on semiconductor equipment. As such, any steps taken to restrict Chinese access to technology will likely escalate trade restrictions, inflicting high economic costs on all involved parties. Attaining membership in the Chip 4 therefore carries a fundamental risk, and South Korea appears the most disinclined to act under such circumstances.

There are also geopolitical tensions among the prospective Chip 4 members that make a formal coalition difficult to establish. While Japan, South Korea, and Taiwan each have strong diplomatic ties with the United States, relations between the East Asian member states rest on more tentative grounds. South Korea and Japan’s foreign relations have not fully recovered from their wartime past, which remains a source of diplomatic friction; in 2018, South Korea’s Supreme Court ruled that Japanese companies must compensate Koreans subjected to forced labor in their wartime factories, prompting the Japanese government to retaliate by restricting the export of essential semiconductor-related chemicals to Korea. On a different note, a South Korean official raised questions about establishing a formal alliance with Taiwan, seeking assurance from the U.S. government that Taiwan’s membership would not constitute a violation of the One China policy. These concerns indicate that diplomatic tensions around the Chip 4 are manifesting not only externally but internally as well. Clearly, the United States must play a role in easing these tensions for the Chip 4 to come together smoothly. The trilateral summit between the United States, Japan, and South Korea in August of 2023 demonstrates the United States’ willingness to forge stronger ties between the Asian states, but it remains to be seen whether its efforts will suffice for the formation of the chip alliance.

The Business Challenge

When discussing the Chip 4, some have likened the alliance to OPEC in the oil business, observing that centralized coordination of the semiconductor industry among the four member states could produce a cartel-like presence in the market. While there are some similarities between the two coalitions, there is a key difference: the Chip 4’s coordination would be conducted in the interest of national security, and it would come at the expense of private firms by depriving them of essential markets. The conflict between national security and business interests thus serves as another point of friction for the Chip 4’s establishment. Already, American firms have shown increasing resistance to the tightening of sanctions against Chinese firms. When the U.S. announced export bans in 2022, the U.S.-based equipment manufacturers Lam Research, Applied Materials, and KLA stated that they could lose up to $5 billion in revenue from China. Following enforcement of the bans, Applied Materials came under criminal probe for shipments to China, reportedly selling to Chinese fabrication firms through disguised third parties. The realization of the Chip 4 would likely mean an escalation of trade restrictions against China, so businesses that have typically relied on Chinese consumption for their revenue would have much to lose. A sustained exclusion of exports to China would thus be received negatively by semiconductor firms, which depend on its large market.

One must also consider the possible effects the formation of the Chip 4 may have on competition and chipmaking innovation. Coordinated semiconductor manufacturing would be a source of concern for leading fabrication firms, which may balk at the prospect of sharing technologies with potential rivals. As noted by U.S. government officials, the South Korean leadership has expressed apprehension that companies such as TSMC and Samsung might be pushed into knowledge exchange. Similarly, there are worries that the Chip 4 initiative may be used by the United States to place its own chipmaking firms in more favorable market positions. Indeed, if the semiconductor firms were to engage in explicit coordination of manufacturing and distribution, some firms would undoubtedly benefit more than others; it would be a challenge for the Chip 4 to reach an agreement that balances the competing interests of all governments and private firms. More importantly, the introduction of governmental intervention could greatly reduce the competitiveness of the industry, stalling the pace of innovation in the process. Here, an overextension of governmental control could reshape the semiconductor industry for the worse, depriving it of its most valuable innovations. To alleviate these business concerns, the Chip 4 must assure firms that it will pursue geopolitical objectives while maintaining the integrity of the industry’s operations and practices. A failure to do so would be highly costly not only to the semiconductor industry, but to the many other industries that rely on semiconductor development.

The Future of Chip 4

Overall, it remains uncertain what will become of the Chip 4. The two preliminary meetings indicate a nascent interest in the coalition among the Asian states, but the inner mechanisms of the alliance have yet to be fully articulated. Additionally, the scarcity of official statements regarding the alliance indicates that the dialogue surrounding it remains highly tentative; these developments suggest that the Chip 4’s formation will not be realized in the coming years and may take much longer. In truth, if the Chip 4 were to reshape the semiconductor industry as outlined above, it would be wise for the member states to approach the opportunity with careful deliberation. While a potent concept, the prospective alliance remains held back by geopolitical and business concerns that greatly damage its appeal. The threat of escalating trade conflicts, coupled with the challenges of business coordination, raises questions about the effectiveness of the coalition. The American leadership must ensure that the benefits of the alliance clearly outweigh the risks before other prospective members take any substantial steps.

Even if the Chip 4 fails to form, however, the very discussion of its concept signifies a decisive shift in the state of the industry: geopolitical concerns have leaked into the semiconductor world, fundamentally transforming business practices across regions. The United States will continue to tighten its semiconductor exports to China and prompt many of its allies to engage in similar efforts. China will continue to look for avenues of innovation that circumvent its rival’s technology restrictions. The remaining players within the field will find it increasingly difficult to engage with one global power without displeasing another. As technological advancements raise the stakes of attaining semiconductor access, the industry will likely split even in the absence of the Chip 4. With or without it, the globalized era of chipmaking is nearing its end, ushering in a fragmented landscape in its stead.

Also Read:

The State of The Foundry Market Insights from the Q2-24 Results

Application-Specific Lithography: Patterning 5nm 5.5-Track Metal by DUV

3D IC Design Ecosystem Panel at #61DAC


AI: Will It Take Your Job? Understanding the Fear and the Reality
by Ahmed Banafa on 08-28-2024 at 10:00 am


In recent years, artificial intelligence (AI) has emerged as a transformative force across industries, driving both optimism and anxiety. As AI continues to evolve, its potential to automate tasks and improve efficiency raises an inevitable question: Will AI take our jobs? This fear is compounded by frequent reports of layoffs, both in technology and other sectors, leading many to worry that AI might be accelerating job losses. But is this fear justified? In this essay, we will explore the impact of AI on the job market, the factors contributing to recent layoffs, and whether people should genuinely be afraid of AI’s growing presence in the workplace.

The Historical Context of Technological Disruption

To understand the current anxiety surrounding AI, it’s essential to place it within the broader context of technological disruption throughout history. Technological advancements have always had profound effects on employment. The Industrial Revolution, for example, dramatically changed the landscape of work, replacing manual labor with machines and shifting economies from agrarian to industrial. This period saw widespread fear and resistance, with movements like the Luddites destroying machinery they believed threatened their livelihoods.

However, history also shows that technological advancements can lead to the creation of new industries and jobs. The rise of automobiles, for instance, displaced jobs related to horse-drawn carriages but created new opportunities in car manufacturing, road construction, and automotive services. Similarly, the advent of computers and the internet revolutionized nearly every industry, leading to the rise of entirely new job categories like software development, IT support, and digital marketing.

AI represents the latest chapter in this ongoing story of technological disruption. But unlike previous technologies, AI has the potential to automate not just manual labor but also cognitive tasks, leading to concerns that it could replace a broader range of jobs, including those traditionally considered safe from automation.

Understanding AI and Its Capabilities

Artificial intelligence is a broad field encompassing various technologies designed to mimic human intelligence. These technologies include machine learning, natural language processing, computer vision, and robotics. AI systems can analyze data, recognize patterns, make decisions, and even learn from experience, allowing them to perform tasks that once required human intelligence.

Key Areas of AI Impact:
  1. Manufacturing and Production: AI-powered robots and automation systems have been integral to modern manufacturing. These machines can work tirelessly, performing repetitive tasks with precision and speed. In industries like automotive manufacturing, robots handle everything from welding to assembly, significantly reducing the need for human labor on production lines.
  2. Customer Service: AI has made significant inroads into customer service through chatbots and virtual assistants. These tools can handle a wide range of customer inquiries, from answering frequently asked questions to processing orders, reducing the need for large customer service teams.
  3. Healthcare: AI is revolutionizing healthcare by assisting in diagnosis, treatment planning, and even surgery. AI algorithms can analyze medical images, identify patterns, and suggest potential diagnoses, often with greater accuracy than human doctors. In surgical settings, AI-powered robots assist surgeons, improving precision and outcomes.
  4. Finance: In the financial industry, AI is used for algorithmic trading, fraud detection, and risk assessment. AI systems can analyze vast amounts of financial data in real-time, making decisions faster than any human could, which has transformed trading floors and back offices.
  5. Creative Industries: Even creative fields are not immune to AI’s reach. AI tools can generate music, write articles, design logos, and even create visual art. While these tools are often used to assist human creators rather than replace them, they raise questions about the future of creative jobs.
  6. Software Engineers and Developers: AI is increasingly automating parts of software development, such as code generation and bug detection, which could reduce the need for entry-level developers. However, fully replacing software engineers is unlikely, as the field requires critical thinking, creativity, and a deep understanding of complex problems that AI cannot yet replicate. Instead, AI is expected to enhance the work of engineers, allowing them to focus on higher-level tasks while improving overall efficiency.

The Reality of AI-Induced Layoffs

The fear of AI taking jobs is not unfounded, particularly as reports of layoffs in both tech and non-tech sectors dominate the news. However, it’s important to recognize that layoffs are rarely caused by a single factor. Economic conditions, shifts in consumer behavior, and organizational restructuring all play significant roles.

Economic Factors: The global economy has faced significant challenges in recent years, including the COVID-19 pandemic, inflation, and supply chain disruptions. These factors have led companies to reassess their operations, often resulting in cost-cutting measures such as layoffs. In such cases, AI may be seen as a way to maintain productivity with a reduced workforce, but it is not the sole cause of job losses.

Technological Disruption: As companies strive to remain competitive in an increasingly digital world, they are investing in AI and automation. This investment can lead to workforce reductions, particularly in roles that can be easily automated. For example, in retail, self-checkout systems and automated inventory management have reduced the need for cashiers and stock clerks. In finance, AI-driven trading algorithms and robo-advisors are displacing traditional roles in investment banking and financial advising.

Shifts in Business Models: The pandemic accelerated the shift toward digital and remote work, prompting companies to reevaluate their business models. Some jobs, particularly those tied to physical office spaces or traditional retail, have become redundant as companies adapt to new ways of working. AI has played a role in enabling this transition by providing tools for remote collaboration, customer service, and logistics.

However, it’s crucial to note that while AI contributes to job displacement in some areas, it also creates new opportunities. The demand for AI specialists, data scientists, and machine learning engineers is growing rapidly. These roles require skills in AI development, data analysis, and cybersecurity, offering new career paths for those willing to adapt and reskill.

The Fear of AI: Is It Justified?

The fear of AI taking jobs is often rooted in the perception that AI is an unstoppable force that will render human workers obsolete. While AI is undoubtedly powerful and capable of performing tasks that were once thought to require human intelligence, this fear may be overstated for several reasons.

Human Creativity and Emotional Intelligence: AI excels at tasks that involve data processing, pattern recognition, and decision-making based on predefined criteria. However, it struggles with tasks that require creativity, empathy, and nuanced understanding—areas where humans excel. Jobs that involve human interaction, emotional intelligence, and creative problem-solving are less likely to be fully automated. For example, while AI can assist in diagnosing diseases, the human touch is still essential in patient care, where empathy and communication are crucial.

New Job Creation: Just as previous technological revolutions created new industries and jobs, AI is expected to do the same. The rise of AI is leading to the creation of entirely new job categories, such as AI ethics specialists, data privacy officers, and AI trainers. These roles involve overseeing AI systems, ensuring they operate ethically and legally, and training AI models to perform specific tasks. Additionally, AI is likely to create demand for jobs in industries that do not yet exist, much like the internet gave rise to social media management and e-commerce.

Collaborative Work: Rather than replacing human workers, AI is increasingly seen as a tool that can augment human capabilities. In many fields, AI is being used to assist humans rather than replace them. For instance, in healthcare, AI can help doctors analyze medical images and suggest potential diagnoses, but the final decision is still made by a human doctor. In creative industries, AI tools can generate ideas or draft content, but the human touch is needed to refine and personalize the output.

Regulatory and Ethical Considerations: Governments and organizations are becoming increasingly aware of the ethical implications of AI. There is growing recognition of the need for regulations to ensure that AI is used responsibly and that its impact on the workforce is managed. Some countries are already implementing policies to protect workers from the negative effects of automation, such as retraining programs and social safety nets. These measures can help mitigate the impact of AI on employment and ensure that workers are not left behind in the AI-driven economy.

Preparing for the AI-Driven Future

While the fear of AI taking jobs is understandable, widespread job loss is not inevitable. The key to navigating the AI-driven future lies in preparation and adaptability. Workers, companies, and governments all have roles to play in ensuring that the transition to an AI-driven economy is as smooth and inclusive as possible.

Reskilling and Upskilling: One of the most effective ways for workers to prepare for the AI-driven future is to invest in reskilling and upskilling. As AI continues to evolve, the demand for skills in AI development, data science, and cybersecurity is growing. Workers who acquire these skills will be well-positioned to take advantage of new job opportunities in the AI-driven economy. Additionally, workers should focus on developing skills that are difficult for AI to replicate, such as creativity, critical thinking, and emotional intelligence.

Lifelong Learning: In an AI-driven world, the concept of lifelong learning becomes increasingly important. Workers must be willing to continuously learn and adapt to new technologies and processes. This may involve taking online courses, attending workshops, or participating in on-the-job training programs. Companies can support lifelong learning by offering training and development opportunities to their employees, helping them stay competitive in a rapidly changing job market.

Adapting to Change: Workers should stay informed about technological advancements and be willing to adapt to new tools and processes that can enhance their work. For example, in industries like marketing, AI-driven tools are being used to analyze customer data, optimize ad campaigns, and personalize content. By embracing these tools, marketers can improve their effectiveness and remain valuable to their employers.

Focusing on Uniquely Human Skills: As AI continues to automate routine and repetitive tasks, workers should focus on developing skills that are uniquely human. These include creativity, emotional intelligence, problem-solving, and communication. Jobs that require these skills are less likely to be automated, as AI struggles to replicate the nuances of human interaction and creativity.

Government and Corporate Responsibility: Governments and companies also have a role to play in preparing for the AI-driven future. Policymakers should implement measures to protect workers from the negative effects of automation, such as retraining programs, social safety nets, and policies that encourage job creation in emerging industries. Companies, on the other hand, should invest in their employees by offering training and development opportunities and creating a culture of continuous learning.

Embracing the Future

The rise of AI is undeniably transforming the job market, leading to both challenges and opportunities. While it is natural to fear the unknown, the key to thriving in an AI-driven world lies in preparation, adaptability, and a willingness to embrace change. Rather than fearing AI, workers should focus on developing skills that are in demand, staying informed about technological advancements, and being open to new opportunities.

AI is not an unstoppable force that will render all human workers obsolete. Instead, it is a tool that, when used responsibly, can enhance human capabilities and create new opportunities. By focusing on uniquely human skills, investing in lifelong learning, and staying adaptable, workers can not only survive but thrive in the AI-driven future. The fear of AI may be understandable, but with the right approach, it can also be an opportunity for growth, innovation, and a brighter future for all.

Ahmed Banafa’s books

Covering: AI, IoT, Blockchain and Quantum Computing

Also Read:

The State of The Foundry Market Insights from the Q2-24 Results

AMAT Underwhelms- China & GM & ICAP Headwinds- AI is only Driver- Slow Recovery

The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation


Bug Hunting in NoCs. Innovation in Verification
by Bernard Murphy on 08-28-2024 at 6:00 am


Despite NoCs being finely tuned in legacy subsystems, when subsystems are connected in larger designs or even across multi-die structures, differing traffic policies and system-level delays between NoCs can introduce new opportunities for deadlocks, livelocks and other hazards. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is NoCFuzzer: Automating NoC Verification in UVM, published in the 2024 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. The authors are from Peking University, Hong Kong University, and Alibaba.

Functional bugs should be relatively uncommon in production-grade NoCs, but performance bugs are highly dependent on expected traffic and configuration choices. By their nature NoCs will almost unavoidably include cycles; the mesh and toroidal topologies common in many-core servers and AI accelerators are obvious examples. Traffic in such cases may be subject to deadlock or livelock problems under enough load. Equally, weaknesses in scheduling algorithms can lead to resource starvation. Such hazards need not block traffic in a formal sense (never clearing) to undermine product success: if they take sufficiently long to clear, they will still fail to meet the expected service level agreements (SLAs) for the system.

There are traffic routing and scheduling solutions to mitigate such problems – many such solutions – which work fine within one NoC designed by one system integration team. But what happens when you must combine multiple legacy/3rd-party subsystems, each with a NoC designed according to its own policy preferences, all connected through a top-level NoC with its own policies? This issue takes on even more urgency in chiplet-based designs adding interposer NoCs to connect between chiplets. Verification solutions become essential to tease out potential bugs between these interconnected networks.

Paul’s view

A modern server CPU can have 100+ cores all connected through a complex coherent mesh-based network-on-chip (NOC). Verifying this NOC for correctness and performance is a very hard problem and a hot topic with many of our top customers.

This month’s paper takes a concept called “fuzzing” from the software verification world and applies it to UVM-based verification of a 3×3 OpenPiton NOC. The results are impressive: line and branch coverage hit 95% in 120hrs with the UVM bench vs. 100% in 2.5hrs with fuzzing; functional covergroups reach 89-99% in 120hrs with the UVM bench vs. 100% across all covergroups in 11hrs with fuzzing. The authors also try injecting a corner-case starvation bug into the design: the baseline UVM bench was not able to hit the bug after 100M packets, whereas fuzzing hit it after only 2M packets.

To achieve these results the authors use a fuzzing tool called AFL – check out its Wikipedia page here. A key innovation in the paper is the way the UVM bench is connected to AFL: the authors invent a simple 4-byte XYLF format to represent a packet on the NOC. XY is the destination location, L the length, and F a “free” flag. The UVM bench reads a binary file with a sequence of 4-byte chunks and then injects each packet in the sequence into the NOC round-robin style: first packet from cpu 00, then cpu 01, 02, 10, 11, and so on. If F is below some static threshold T, the UVM bench just has that cpu put nothing into the NOC for the equivalent length of that packet. The authors set T for a 20% chance of a “free” packet.
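To make the format concrete, here is a minimal Python sketch of that binary-to-stimulus translation. The field layout and round-robin injection follow the description above; the coordinate clamping and the exact threshold value are my own assumptions, not code from the paper.

```python
import struct

MESH_SIZE = 3          # 3x3 OpenPiton mesh, as in the paper
FREE_THRESHOLD = 51    # ~20% of the 0-255 byte range marks a "free" (idle) slot

def decode_xylf(chunk: bytes) -> dict:
    """Decode one 4-byte XYLF chunk: destination X, Y, length, free flag."""
    x, y, length, free = struct.unpack("4B", chunk)
    return {
        "dest": (x % MESH_SIZE, y % MESH_SIZE),  # clamp to valid mesh coordinates
        "length": max(1, length),
        "idle": free < FREE_THRESHOLD,           # idle: inject nothing for 'length'
    }

def stimulus_from_file(path: str):
    """Yield (source_node, packet) pairs, assigning sources round-robin."""
    nodes = [(r, c) for r in range(MESH_SIZE) for c in range(MESH_SIZE)]
    with open(path, "rb") as f:
        data = f.read()
    for i in range(0, len(data) - 3, 4):
        yield nodes[(i // 4) % len(nodes)], decode_xylf(data[i:i + 4])
```

Because the whole stimulus is just a flat byte stream, AFL can mutate it without knowing anything about NoC semantics, which is exactly what makes the coupling so clean.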

AFL is given an initial seed set of binary files taken from a non-fuzzed UVM bench run, applies them to the UVM bench, and is provided back with coverage data from the simulator – each line, branch, and covergroup is simply treated as a coverpoint. AFL then starts applying mutations: randomly modifying bytes, splicing and re-stitching binary files, and so on. A genetic algorithm guides the mutation toward increasing coverage. It’s a wonderfully abstract, simple, and elegant utility that is completely blind to the goals for which it is aiming to improve coverage.
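For readers new to coverage-guided fuzzing, a toy version of that loop might look like the sketch below. This is illustrative only: AFL’s real mutation stages and genetic scheduling are far more sophisticated, and here `run_bench` is a hypothetical stand-in for a full simulation run that returns the set of coverpoints hit.

```python
import random

def mutate(seed: bytes) -> bytes:
    """One random byte-level mutation, loosely mimicking AFL's havoc stage."""
    data = bytearray(seed)
    op = random.choice(["flip", "splice", "trim"])
    if op == "flip" and data:
        data[random.randrange(len(data))] ^= 1 << random.randrange(8)
    elif op == "splice" and data:
        cut = random.randrange(len(data))
        data = data[:cut] + data[cut:] * 2      # duplicate a tail segment
    elif data:
        data = data[: max(4, len(data) // 2)]   # trim, keeping at least one packet
    return bytes(data)

def fuzz(seeds, run_bench, max_iters=10_000):
    """Keep any mutant that reaches new coverpoints."""
    corpus, seen = list(seeds), set()
    for s in corpus:
        seen |= run_bench(s)
    for _ in range(max_iters):
        candidate = mutate(random.choice(corpus))
        cov = run_bench(candidate)
        if cov - seen:                          # new line/branch/covergroup hit
            corpus.append(candidate)
            seen |= cov
    return corpus, seen
```

The essential property is the feedback edge: inputs that discover new coverage are retained and become parents for further mutation, so the search concentrates on the rare packet sequences that reach deep states.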

Great paper. Lots of potential to take this further commercially!

Raúl’s view

Fuzzing is a technique for automated software testing where a program is fed malformed or partially malformed data. These test inputs are usually variations on valid samples, modified either by mutation or according to a defined grammar. This month’s paper uses AFL (named after a breed of rabbit), which employs mutation; its description offers a good understanding of fuzzing. Note that fuzzing differs from the random or constrained-random verification commonly applied in hardware verification.

The authors apply fuzzing techniques to hardware verification, specifically targeting Network-on-Chip (NoC) systems. The paper details the development of a UVM-based environment connected to the AFL fuzzer within a standard industrial verification process. They utilized Verilog, the Synopsys VCS simulator, and conventional coverage metrics, predominantly code coverage. To interface the AFL fuzzer to the UVM test environment, the test output of the fuzzer must be translated into a sequence of inputs for the NoC. Every NoC packet is represented as a 40-bit string containing the destination address, length, port (each node in the NoC has several ports), and a control flag that determines whether the packet is executed or the port remains idle. These strings are mutated by AFL, and a simple grammar converts them into inputs for the NoC. This is one of the main contributions of the paper. The fuzzing framework is adaptable to any NoC topology.

NoCs are the communication fabric of choice for digital systems containing hundreds of nodes, and they are hard to verify. The paper presents a case study of a compact 3×3 mesh NoC element from OpenPiton. The results are impressive: fuzz testing achieved 100% line coverage in 2.6 hours, while constrained random verification (CRV) only reached 97.3% in 120 hours. For branch coverage, fuzz testing achieved full coverage in 2.4 hours while CRV reached only 95.2% in 120 hours.

The paper is well written and offers impressive detail, with a practical focus that underscores its relevance in an industrial context. While it is occasionally somewhat verbose, it is certainly an excellent read.


Alphawave Semi Unlocks 1.2 TBps Connectivity for HPC and AI Infrastructure with 9.2 Gbps HBM3E Subsystem
by Kalar Rajendiran on 08-27-2024 at 10:00 am


In the rapidly evolving fields of high-performance computing (HPC) and artificial intelligence (AI), reducing time to market is crucial for maintaining competitive advantage. HBM3E systems play a pivotal role in this regard, particularly for hyperscaler and data center infrastructure customers. Alphawave Semi’s advanced HBM3E IP subsystem significantly contributes to this acceleration by providing a robust, high-bandwidth memory solution that integrates seamlessly with existing and new architectures.

The 9.2 Gbps HBM3E subsystem, combined with Alphawave Semi’s innovative silicon interposer, facilitates rapid deployment and scalability. This ensures that hyperscalers can quickly adapt to the growing data demands, leveraging the subsystem’s 1.2 TBps connectivity to enhance performance without extensive redesign cycles. The modular nature of the subsystem allows for flexible configurations, making it easier to tailor solutions to specific application needs and accelerating the development process.

Micron’s HBM3E Memory

Micron’s HBM3E memory stands out in the competitive landscape due to its superior power efficiency and performance. While all HBM3E variants aim to provide high bandwidth and low latency, Micron’s version offers up to 30% lower power consumption compared to its competitors. This efficiency is critical for data centers and AI applications, where power usage directly impacts operational costs and environmental footprint.

Micron’s HBM3E memory achieves this efficiency through advanced fabrication techniques and optimized design, ensuring that high-speed data transfer does not come at the cost of increased power usage. This makes it a preferred choice for integrating with high-performance systems that demand both speed and sustainability.

Alphawave Semi’s Innovative Silicon Interposer

At the heart of Alphawave Semi’s HBM3E subsystem is their state-of-the-art silicon interposer. This interposer is crucial for connecting HBM3E memory stacks with processors and other components, enabling high-speed, low-latency communication. In designing the interposer, Alphawave Semi addressed the challenges of increased signal loss due to longer interposer routing. By evaluating critical channel parameters such as insertion loss, return loss, intersymbol interference (ISI), and crosstalk, the team developed an optimized layout. Signal and ground trace widths, along with their spacing, were analyzed using 2D and 3D extraction tools, leading to a refined model that integrates microbump connections to signal traces. This iterative approach allowed the team to effectively shield against crosstalk between layers.

Detailed analyses of signal layer stack-ups, ground trace widths, vias, and the spacing between signal traces enabled the optimization of the interposer layout to mitigate adverse effects and boost performance. To achieve higher data rates, a jitter decomposition and analysis were performed on the interposer to budget for random jitter, power supply induced jitter, duty cycle distortion, and other factors. This set the necessary operating margins.
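As context for readers, the article doesn't publish Alphawave's actual budget numbers, but a conventional jitter budget at a target bit error rate takes the familiar form

$$T_j(\mathrm{BER}) = D_j + 2\,Q(\mathrm{BER})\,\sigma_{RJ}, \qquad 2\,Q(10^{-12}) \approx 14.07$$

where $D_j$ sums the bounded deterministic terms (duty cycle distortion, power-supply-induced jitter, ISI) and $\sigma_{RJ}$ is the RMS random jitter; whatever fraction of the unit interval remains after subtracting $T_j$ is the operating margin.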

In addition, the interposer’s stack-up layers for signals, power, and decoupling capacitors underwent comprehensive evaluations for both CoWoS-S and CoWoS-R technologies in preparation for the transition to upcoming HBM4. The team engineered advanced silicon interposer layouts that provide excess margin, ensuring these configurations can support the elevated data rates required by future enhancements in HBM4 technology and varying operating conditions.

Alphawave Semi’s HBM3E IP Subsystem

Alphawave Semi’s HBM3E IP subsystem, comprising both PHY and controller IP, sets a new standard in high-performance memory solutions. With data rates reaching 9.2 Gbps per pin and a total bandwidth of 1.2 TBps, this subsystem is designed to meet the intense demands of AI and HPC workloads. The IP subsystem integrates seamlessly with Micron’s HBM3E memory and Alphawave’s silicon interposer, providing a comprehensive solution that enhances both performance and power efficiency.
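The two headline figures are mutually consistent. Here is a quick back-of-envelope check, assuming the standard 1024-bit HBM data interface per stack (a JEDEC convention, not an Alphawave-specific detail):

```python
# Back-of-envelope check of the headline bandwidth figure.
pin_rate_gbps = 9.2           # per-pin data rate
dq_width_bits = 1024          # data pins per HBM stack (JEDEC HBM3 convention)
bandwidth_gb_per_s = pin_rate_gbps * dq_width_bits / 8
print(f"{bandwidth_gb_per_s:.1f} GB/s")  # 1177.6 GB/s, i.e. ~1.2 TB/s
```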

The subsystem is highly configurable, adhering to JEDEC standards while allowing for application-specific optimizations. This flexibility ensures that customers can fine-tune their systems to achieve the best possible performance for their unique requirements, further reducing the time and effort needed for deployment.

Summary

Alphawave Semi’s HBM3E IP subsystem, powered by their innovative silicon interposer and Micron’s efficient HBM3E memory, represents a significant advancement in high-performance memory technology. By offering unparalleled bandwidth, enhanced power efficiency, and flexible integration options, this subsystem accelerates time to market for hyperscaler and data center infrastructure customers.

For more details, visit

https://awavesemi.com/silicon-ip/subsystems/hbm-subsystem/

Also Read:

Alphawave Semi Tapes Out Industry-First, Multi-Protocol I/O Connectivity Chiplet for HPC and AI Infrastructure

Driving Data Frontiers: High-Performance PCIe® and CXL® in Modern Infrastructures

AI System Connectivity for UCIe and Chiplet Interfaces Demand Escalating Bandwidth Needs


Analog Bits Momentum and a Look to the Future
by Mike Gianfagna on 08-27-2024 at 6:00 am


Analog Bits is aggressively moving to advanced nodes. On SemiWiki, Dan Nenni covered new IP in 3nm at DAC here. I covered the new Analog Bits 3nm IP presented at the TSMC Technology Symposium here. And now, there’s buzz about 2nm IP to be announced at the upcoming TSMC OIP event in September. I was recently able to get a briefing from the master of analog IP, enology, and viticulture, Mahesh Tirupattur. The momentum is quite exciting, and I will cover that in this post. There is another aspect to the story – the future impact of all this innovation. Mahesh touched on some of that, and I will add my interpretation of what’s next. Let’s examine Analog Bits momentum and a look to the future.

The Momentum Builds

The Analog Bits catalog continues to grow, with a wide array of data communication, power management, sensing and clocking technology. Here is a partial list of IP that is targeted at TSMC N2:

Glitch Detector (current IP): Instant voltage excursion reporting with high bandwidth and voltage fluctuation detection. Delivers circuit protection and enhances system security in non-intended operation modes. The IP can be cascaded to function similarly to a flash ADC.

Synchronous Glitch Catcher (new IP):  Multi-output synchronized glitch detection. Reports voltage excursions above and below threshold during the clock period with high bandwidth. Improved detection accuracy with system clock alignment that also facilitates debugging and analysis.

Droop Detector (enhanced IP): Extended voltage range 0.495 – 1.05V with higher maximum bandwidth of 500MHz. Differential sensing and synchronous voltage level reporting. Precision in monitoring with continuous observation and adaptive power adjustment. A pinless version that operates at the core voltage is in development.

On-Die Low Dropout (LDO) Regulator (enhanced IP): Improved power efficiency. Fast transient response and efficient regulation and voltage scalability. Offers integration, space savings, and noise reduction. Use cases include high-performance CPU cores and high lane count, high-performance SerDes.

Chip-to-Chip (C2C) IO’s (enhanced IP): Supports core voltage signaling. Best suited for CoWoS with 2GHz+ speed of operation and 10GHz+ in low-loss media.

High-Accuracy PVT Sensor (enhanced IP): Untrimmed temperature accuracy was originally +/- 8 degrees C.  An improved version has been developed that delivers +/- 3.5 degrees C. Working silicon is available in TSMC N5A, N4 & N3P. The figure below summarizes performance.

PVT Sensor Temp Performance

Looking ahead, accuracy of +/- 1 degree C is possible with trimming. The challenge is that trimming is itself affected by die temperature, making this accuracy difficult to achieve. Analog Bits has developed a way around this issue and will be delivering high-accuracy PVT sensors for any die temperature.

This background sets the stage for what’s to come at the TSMC OIP event. In September, Analog Bits will tape out a test chip in TSMC N2. Here is a summary of what’s on that chip:

  • Die Size: 1.43×1.43mm
  • Wide-range PLL
  • 18-40MHz Xtal OSC
  • HS Differential Output Driver and Clock Receiver – Power Supply Droop Detector
  • High Accuracy PVT Sensors
  • Pinless High Accuracy PVT Sensor
  • LCPLL
  • Metal Stack – 1P 15M

The graphic at the top of this post is a picture of this test chip layout. In Q1 2025, there will be another 2nm test chip with all the same IP plus:

  • LDO
  • C2C & LC PLL’s
  • High Accuracy Sensor

The momentum and excitement will build.

A Look to the Future

Let’s recap some of the headaches analog designers face today. A big one is optimizing performance and power in an on-chip environment that is constantly changing, prone to on-chip variation, and subject to all kinds of power-induced glitches. As everyone moves toward multi-die design, these problems are compounded across lots of chiplets that now also need a high-bandwidth, space-efficient, and power-efficient way to communicate.

If we take an inventory of the innovations being delivered by Analog Bits, we see on-chip technology that addresses all of these challenges head-on. Just review the list above and you will see a catalog of capabilities that sense, control and optimize pretty much all of it. 

So, the question becomes, what’s next? Mahesh said he views the mission of Analog Bits as making life easier for the system designer. The solutions that are available and those in the pipeline certainly do that. But what else can be achieved? What if all the information being sensed, managed, and optimized by the Analog Bits IP could be processed by on-chip software?

And what if that software could deliver adaptive control based on AI technology? This sounds like a new killer app to me. One that can create self-optimizing designs that will take performance and power to the next level.  I discussed these thoughts with Mahesh. He just smiled and said the future will be exciting.
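To make that vision slightly more concrete, here is a purely hypothetical sketch of such a telemetry-driven control loop. None of these interfaces are Analog Bits APIs; the voltage limits simply echo the droop detector's stated 0.495 – 1.05V range, and a real AI-based policy would replace the hand-written rules below with a learned model.

```python
def read_telemetry():
    # Hypothetical stand-in for on-die sensor reads; a real system would poll
    # PVT sensors and droop detectors over an internal bus.
    return {"temp": 78.0, "vdd": 0.75, "droops": 0, "freq_mhz": 3000.0}

def adaptive_step(s, target_temp=85.0, vmin=0.495, vmax=1.05):
    """One control iteration: back off on droop, reclaim headroom otherwise."""
    if s["droops"] > 0:
        s["freq_mhz"] *= 0.97                    # droop detected: ease frequency
    elif s["temp"] >= target_temp:
        s["vdd"] = max(vmin, s["vdd"] - 0.005)   # too hot: shed power
    elif s["vdd"] < vmax:
        s["freq_mhz"] *= 1.01                    # headroom: claw back performance
    return s
```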

I personally can’t wait to see what’s next.  And that’s my take on Analog Bits momentum and a look to the future.


Spatial audio concepts targeted for earbuds and soundbars
by Don Dingee on 08-26-2024 at 10:00 am


Spatial audio technologies deliver more realistic sound by manipulating how the listener perceives sounds virtually sourced from different directions and distances in a 3D space. Where traditional surround sound technology uses various sound channels through many speakers positioned around a listener, spatial audio can deliver immersive experiences from fewer speakers in smaller packages, such as in a pair of earbuds or a compact soundbar. Kaushik Sethunath, Audio Test Engineer at Ceva, shared some thoughts leading into his series of blog posts explaining spatial audio concepts and parameters that help define innovative designs.

Better sound is intensely subjective for each listener

Audio has been the subject of intense scrutiny from expert reviewers since the initial development of high-fidelity analog recordings on 33⅓ rpm vinyl in 1948. Studio engineers became proficient at mixing multiple recorded tracks into stereo formats. At the peak of the vinyl format, 1970s bands like Steely Dan and Pink Floyd produced albums renowned for their complex yet crisp sound, becoming benchmarks for consumer stereo systems.

What constituted “stereo” sound was relatively simple, with left and right speakers standard and optional center and subwoofer channels on higher-end gear. If one spent more money on equipment – sensitive, mechanically smooth turntables, amplifiers with lower distortion and noise and higher dynamic range, and larger, more powerful speakers with improved response – the sound was, at least in theory, perceptibly better.

However, with so many variables in analog audio, including differences in the frequency sensitivity of each listener’s ears, better sound was a subjective measure. Vinyl records would degrade with handling and excessive play, altering even great experiences. Then, audio went digital, first on physical CDs, then in file formats such as MP3. Digital recordings don’t degrade over time, and new delivery mechanisms appeared.

Perhaps more importantly, digital audio technology ushered in significant engineering changes. Users moved from large, fixed stereo equipment and the 12” vinyl format to smaller, less expensive portable gear playing CDs or files. Some audio engineers responded by recording content for listening through lower-quality headphones in noisy ambient settings, using higher sound levels with less dynamic range, leaving the sound good enough for most listeners.

Use cases drive a need for an audio parameter framework

In the last few years, the pendulum has swung back: consumers can now buy digital audio technology rivaling high-end surround sound systems in affordable soundbars and earbuds, with pervasive streaming technology delivering more sophisticated audio formats like Dolby Atmos and DTS:X. The low-quality approaches to content are leaving listeners wanting more, and they are willing to spend incrementally more to get better quality they can hear.

“Trying to preserve the integrity of the original artist’s vision is really important,” says Sethunath. “We think the best way to experience sound is with different settings for different content. A podcast heard while commuting is a very different use case from a movie in the comfort of a home theater, and gamers have other needs, so there is no one-size-fits-all. Accordingly, based on the content, the parameters of the spatial audio processing need to be tuned, to create the appropriate spatial experience.”

Sethunath sees a more complex landscape where the industry lacks a framework to compare and quantify audio performance in different use cases. He proposes eight technical parameters in two broad categories to guide both spatial audio device design and content curation:

  • Spatialization
    • Degree of Externalization
    • Room Character and Presets
    • Maximum Number of Channels Rendered
    • Mono and Stereo Rendering
    • Artifacts
  • Head Tracking
    • Latency
    • Degrees of Freedom
    • Artifacts

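To make the framework concrete, here is a minimal, purely illustrative sketch of how these eight parameters might be captured as a structured evaluation record. All field names, types, and units are hypothetical assumptions for illustration, not a Ceva API or specification.

```python
from dataclasses import dataclass

# Hypothetical record of the eight evaluation parameters; names and
# units are illustrative only, not a Ceva specification.
@dataclass
class SpatializationScores:
    externalization_degree: float   # e.g. 0..1 out-of-head rating
    room_character_preset: str      # e.g. "studio", "hall", "cinema"
    max_channels_rendered: int      # e.g. 12 for 7.1.4 content
    mono_stereo_rendering: bool     # is legacy content handled gracefully?
    artifacts: str                  # notes on audible processing artifacts

@dataclass
class HeadTrackingScores:
    latency_ms: float               # motion-to-sound latency
    degrees_of_freedom: int         # typically 3 (yaw/pitch/roll) or 6
    artifacts: str                  # notes on tracking-related artifacts

@dataclass
class SpatialAudioEvaluation:
    device: str
    use_case: str                   # "podcast", "movie", "gaming", ...
    spatialization: SpatializationScores
    head_tracking: HeadTrackingScores
```

Recording results per device and per use case, rather than as a single score, mirrors Sethunath’s point that there is no one-size-fits-all tuning.
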
There are tradeoffs and design decisions with host-based rendering (using the power of phones and tablets to do the heavy lifting of spatial audio processing) and embedded rendering on the headset (lowest latency, but without direct multi-channel support due to Bluetooth bandwidth limitations). Ceva provides optimized solutions for both architectures, including head tracking technology to enhance realism in affordable devices.

“I think creating a smoother onboarding process to spatial audio, walking people through what it can do and content that highlights the experience, will be compelling,” says Sethunath. He’s created a new series of three blog posts on spatial audio concepts, explaining the parameters in more detail and describing how designers can evaluate implementations. Links to the posts:

Evaluating Spatial Audio – Part 1 – Criteria & Challenges

Evaluating Spatial Audio – Part 2 – Creating and Curating Content for Testing

Evaluating Spatial Audio – Part 3 – Creating a Repeatable System to Evaluate Spatial Audio

 

For readers interested in Ceva’s IP with solutions for head tracking, more info is also online:

Ceva-RealSpace: Spatial Audio & Head Tracking Solution


A Closer Look at Conquering Clock Jitter with Infinisim

A Closer Look at Conquering Clock Jitter with Infinisim
by Mike Gianfagna on 08-26-2024 at 6:00 am

A Closer Look at Conquering Clock Jitter with Infinisim

As voltages go down and frequencies increase, the challenges in chip design become increasingly complex and unforgiving. Issues that once seemed manageable now escalate, while new obstacles emerge, demanding our attention. Among these challenges, clock jitter stands out as a formidable threat. At its core, clock jitter is defined as the variation of a clock signal from its ideal position in time. Seemingly minor, these kinds of subtle variations in the clock can cause catastrophic failures in high-performance designs. Previously, Dan Nenni provided a great overview of the problem and what Infinisim is doing about it here. Recently, I had the opportunity to speak directly with a co-founder of Infinisim, where I gained profound insights into the enormity of the clock jitter problem and the monumental efforts required to address it. Read on for a closer look at conquering clock jitter with Infinisim.

Contributors to Clock Jitter

There are two main contributors to clock jitter – the PLL and the power delivery network (PDN). The PLL can deliver a noisy input signal to the clock circuit, creating jitter in the clock. In this case, the jitter is the same throughout the entire clock network since it comes from a single source. This localized effect isn’t the main focus for Infinisim’s tools. Instead, the company focuses on a much larger and more complex system design challenge: PDN-induced jitter.

PDN jitter arises from a noisy supply voltage. Unlike PLL-induced jitter, PDN noise can be influenced by multiple input pins and can span numerous power domains. Add to that the local effects at each gate and you begin to see a pervasive and difficult-to-track problem. This is the area where Infinisim concentrates its efforts. The figure below illustrates these challenges.

PDN Jitter Challenge

What it Takes to Fix Clock Jitter

Dr. Zakir Hussain Syed

I had a highly informative discussion with Dr. Zakir Hussain Syed. Zakir is a co-founder and CTO at Infinisim with over 25 years of experience in EDA. His deep understanding of the issues was evident throughout our discussion, and I gained a wealth of knowledge from our exchange.

Zakir began by explaining the components of PDN-induced clock jitter. In the case of the PDN, every gate in the clock network can see some level of noise-induced jitter. Each is an independent event, and the movement of clock edges is very small. Each event has the potential to change the timing and behavior of the circuit. Finding the best- and worst-case jitter in the circuit requires simulation of thousands of clock cycles – the errors can compound, and the only way to capture that is to simulate many cycles.

Furthermore, since the edge movement is very small, the simulation must be highly accurate. So, finding PDN-induced clock jitter requires SPICE-level accurate simulation over many cycles as quickly as possible. Remember, this is part of the verification loop, so speed is quite important.  Do you have a headache yet?  I began to at this point.
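
To see why so many cycles are needed, consider a toy statistical sketch. This is not Infinisim’s method, just an illustration under assumed numbers: if each gate along a clock path independently nudges the edge by a tiny random amount each cycle, the worst observed displacement keeps growing as more cycles are examined.

```python
import numpy as np

rng = np.random.default_rng(0)

N_GATES = 12        # assumed number of gates along one clock path
SIGMA_PS = 0.3      # assumed per-gate edge displacement sigma, picoseconds
N_CYCLES = 100_000  # total cycles observed

# Each cycle, every gate independently nudges the clock edge a little;
# the nudges accumulate along the path into a net edge displacement.
per_gate = rng.normal(0.0, SIGMA_PS, size=(N_CYCLES, N_GATES))
edge_shift_ps = per_gate.sum(axis=1)

# The worst-case displacement keeps growing with the number of cycles
# observed, which is why a handful of cycles cannot bound the jitter.
for n in (100, 1_000, 10_000, N_CYCLES):
    print(f"{n:>7} cycles: worst edge shift = "
          f"{np.abs(edge_shift_ps[:n]).max():.2f} ps")
```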

As Zakir continued, the problem got worse. Clock domains are becoming more complex thanks to multiple voltage domains. This creates more independent noise sources. Beyond that, power comes into the chip through many bump connections – potentially hundreds of bumps. Each bump will have its own noise signature which yet again increases the variety of issues that must be analyzed.

All this creates multiple types of clock jitter:

  • Absolute jitter:
    • The deviation of each actual clock transition from its ideal transition time
  • Period jitter:
    • The difference between the actual clock period and the ideal period at each cycle
  • Cycle-to-cycle jitter:
    • The difference in period between two adjacent clock cycles

The figure below summarizes these effects.

Types of Clock Jitter
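
For readers who want the definitions in executable form, here is a small sketch that computes all three metrics from a series of edge timestamps. The 3 GHz clock and the noise level are arbitrary assumptions for illustration.

```python
import numpy as np

F_IDEAL = 3e9            # assumed 3 GHz clock, for illustration only
T_IDEAL = 1.0 / F_IDEAL  # ideal period in seconds

# Synthesize noisy rising-edge timestamps standing in for measured edges
rng = np.random.default_rng(1)
n_edges = 1_000
edges = np.arange(n_edges) * T_IDEAL + rng.normal(0.0, 0.5e-12, size=n_edges)

absolute_jitter = edges - np.arange(n_edges) * T_IDEAL  # actual vs. ideal edge
periods = np.diff(edges)                                # actual periods
period_jitter = periods - T_IDEAL                       # actual vs. ideal period
c2c_jitter = np.diff(periods)                           # adjacent-period change

for name, series in [("absolute", absolute_jitter),
                     ("period", period_jitter),
                     ("cycle-to-cycle", c2c_jitter)]:
    print(f"{name:>14} jitter, peak-to-peak: {np.ptp(series) * 1e12:.2f} ps")
```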

Zakir then provided a bit of history for perspective. In the case of on-chip variation (OCV), a single worst-case number was initially used for guard banding. As designs got more complex, applying just one number created an overly pessimistic metric, and the result was very poor circuit performance. For many years now, OCV has been calculated across the chip at a very fine-grained, local level to provide more realistic guard bands. We are now at a point where the same strategy needs to be applied to clock jitter guard banding. A single number must be replaced by fine-grained analysis across the entire clock network of the chip.

That fine-grained analysis looks at the noise per gate, per path, and per noise profile for each cycle. Designers are looking for the best- and worst-case jitter at a local level to develop the guard bands to use. It turns out the worst-case jitter can happen anywhere in the path, not just at the input of the flops. Couple that fact with the per-noise-profile analysis and designers can not only develop much more accurate guard bands but also reduce the jitter in the circuit.

Zakir explained that the per-gate analysis can identify the weakest gate in the path from a jitter perspective. That gate can then be modified to be less susceptible to jitter. The per-noise-profile analysis can find the power bumps that generate the most noise, and those, too, can be modified to improve performance. All this helps improve overall circuit performance in meaningful ways.

So, how does Infinisim manage to analyze all those profiles, circuits, and scenarios over thousands of cycles with sub-picosecond resolution in a reasonable time frame? Zakir explained that relying on traditional SPICE simulation isn’t feasible – it would simply take far too long. Instead, he detailed Infinisim’s holistic approach to tackling this challenge.

First, the noise in the circuit is characterized, either with a commercial IR drop tool or with measurements if silicon is available. That data is then analyzed by Infinisim’s ClockEdge and JitterEdge tools holistically across full clock domains. Analyzing this data over many scenarios finds the positive and negative jitter at every gate in the clock network.

What is the Impact of Clock Jitter?

Just how big a problem is clock jitter? There are several potential impacts on chip performance and reliability. These include:

Slower chip performance: Clock jitter leads to timing uncertainties. This can cause data to arrive too early or too late, resulting in timing violations. To mitigate this, timing margins are increased which slows the clock frequency.

Lower yield: Clock jitter can cause a higher rate of timing failures, particularly in chips operating close to their performance limits. This can lead to a higher percentage of chips failing during testing and thus a lower manufacturing yield.

So, the question is, what’s the impact of the above effects? Here is one quick “back of the envelope” calculation. Assume a manufacturing cost per chip of $50 for a design with a projected volume of 1 million units. Further assume an expected yield without jitter issues of 95%. If we assume that jitter lowers yield by 5% (a 5% drop in yield due to jitter is a reasonable assumption for a high-volume production environment where even small timing issues can have significant impacts), the following will result:

  • Design without jitter issues:
    • Chips produced: 1,000,000
    • Yield: 95%
    • Good chips: 950,000
    • Cost per good chip: $52.63
  • With jitter issues (5% lower yield):
    • Chips produced: 1,000,000
    • Yield: 90%
    • Good chips: 900,000
    • Cost per good chip: $55.56
  • Increased cost per chip: $2.93
  • Total additional cost: $2.93 * 900,000 = $2,637,000

Jitter could easily cost you millions over the lifespan of a chip, and this calculation doesn’t even consider the potential loss of market share from a slower chip due to increased timing margins—a much greater concern in competitive markets where even minor performance deficits can lead to significant losses. The specifics may vary depending on your situation, but one thing is certain—clock jitter is a critical issue that cannot be overlooked.
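
For readers who want to adapt the numbers to their own situation, here is the same back-of-the-envelope calculation as a small Python sketch. Intermediate figures are rounded to cents, matching the article’s arithmetic.

```python
def cost_per_good_chip(unit_cost, volume, yield_pct):
    """Spread total manufacturing cost over the good chips only."""
    good_chips = int(volume * yield_pct / 100)
    return round(unit_cost * volume / good_chips, 2), good_chips

UNIT_COST, VOLUME = 50.00, 1_000_000
base_cost, _ = cost_per_good_chip(UNIT_COST, VOLUME, 95)        # no jitter issues
jitter_cost, good = cost_per_good_chip(UNIT_COST, VOLUME, 90)   # 5-point yield loss

delta = round(jitter_cost - base_cost, 2)                 # $2.93 per good chip
print(f"cost per good chip: ${base_cost} -> ${jitter_cost} (+${delta})")
print(f"total additional cost: ${delta * good:,.0f}")     # $2,637,000
```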

To Learn More

If you are designing high-performance chips, you’re likely lowering voltage and boosting frequency – both of which elevate clock jitter to a critical first-order issue. I strongly recommend exploring how Infinisim can assist with this challenge. You can learn more about Infinisim’s jitter analysis capabilities here. You can also get a broad overview of what Infinisim can do along with access to a webinar replay on clock analysis at 7nm and below here.  And that’s a closer look at conquering clock jitter with Infinisim.


Podcast EP243: What is Yield Management and Why it is Important for Success with Kevin Robinson

Podcast EP243: What is Yield Management and Why it is Important for Success with Kevin Robinson
by Daniel Nenni on 08-23-2024 at 10:00 am

Dan is joined by Kevin Robinson, yieldHUB’s Vice President of Operations, who was recently appointed Head of Sales for Europe, the Middle East & Africa. With over 23 years of experience as a test engineer in the semiconductor industry, Kevin brings a wealth of knowledge and dedication to his dual role. At yieldHUB, Kevin leads both the sales and operations teams, playing a crucial role in delivering top-notch experiences to UK and European customers.

Kevin explains the basics of yield management in this broad conversation. He outlines the reasons to implement yield management early, which include better market traction through customer trust and acceptance.

The buy vs. build aspects of a yield management system are also explored, along with the risks of not implementing a scalable yield management system early on.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: BRAM DE MUER of ICsense

CEO Interview: BRAM DE MUER of ICsense
by Daniel Nenni on 08-23-2024 at 6:00 am


Bram co-founded ICsense in 2004 as a spin-off of the University of Leuven. He has been CEO since 2004 and helped grow the company from 4 to over 100 people in 20 years while remaining profitable every year. He managed the acquisition by TDK in 2017. He is an experienced entrepreneur in the microelectronics field with a strong interest in efficiently managing design teams and delivering projects with high quality.

Bram is a board member of Flanders Semiconductor, a non-profit organization that represents the Belgian semiconductor ecosystem. He is also a member of the Crown Counsel of SOKwadraat, a non-profit organization that aims to boost the number of spin-offs in Belgium. He holds an MSc degree in microelectronics and a Ph.D. from the Katholieke Universiteit Leuven, Belgium. Bram has been a research and postdoctoral assistant with the ESAT-MICAS laboratories under Prof. M. Steyaert.

Tell us about your company?
At ICsense, we specialize in analog, mixed-signal and digital ASIC (Application-Specific Integrated Circuit) development. We handle the complete chain from architectural definition, design, and in-house test development up to mass production of the custom components. Today, we are one of the largest fabless European companies active in this domain.

I co-founded ICsense with 3 of my PhD colleagues back in 2004. Our focus has always been on analog, digital, mixed-signal, and high-voltage ICs, serving diverse industries including automotive, medical, industrial, and consumer electronics. ICsense is headquartered in Leuven, Belgium and has a design center in Ghent, Belgium. The semiconductor ecosystem in Belgium is quite lively, with imec as a renowned research center, world-class universities, and many industrial players in different parts of the semiconductor value chain represented by Flanders Semiconductors.

In 2017, we became part of the Japanese TDK Group (www.tdk.com), a leading supplier of electronic components. This enabled us to continue our strategy and serve customers worldwide as before. What many people don’t realize is that the majority of ICsense’s business today is outside the TDK Group!

Joining TDK has allowed us to grow faster and broaden our activities. We have invested in ATE (Automated Test Equipment, mass production testers and wafer probers) to do test program developments in-house. This makes ICsense unique in the market of ASIC suppliers, capable of building some of the highest-performance ASICs and bringing them into production for our customers.

What problems are you solving?
Many industries require specialized ICs tailored to specific applications that off-the-shelf solutions often cannot adequately serve. To meet this need, we design custom ASICs for the automotive, medical, industrial, and consumer electronics sectors, ensuring optimal performance and functionality.

Designing high-performance analog and mixed-signal ICs is inherently complex and requires specialized expertise. This expertise is the reason our customers knock on our door. Leveraging our extensive experience in analog, digital, mixed-signal, and high-voltage ICs, we deliver robust and reliable solutions. We develop advanced sensor interfaces, power management solutions, high-voltage actuation and sensing circuits, ultra-low-power circuitry and communication chips.

Every chip is uniquely built for a single customer and supplied only to that customer. The customer’s IP is fully protected to preserve their competitive edge in the market.

What application areas are your strongest?
In our 20 years of existence, we have built up a strong track record in complex ASIC developments in different technology nodes and for many different applications. We often push the boundaries to reach the highest performance or tweak the last µA out of a circuit. We are definitely not an “IP-gluer” (i.e., a company that simply combines existing IP blocks without modification). Our design work is mostly custom, to meet the challenging requirements our customers are faced with.

Over the past 10 years, we have seen strong growth in industries such as automotive and medical that require ICs meeting stringent quality and reliability standards. To address this, we employ rigorous design techniques. ICsense works according to the ISO 13485 (medical) and ISO 26262 (automotive) compliance standards. To give you one example, all the automotive ASICs we have developed in the last 5 years are at least ASIL-B(D) Functional Safety level.

What keeps your customers up at night?
It really depends on the specific customer. We don’t have a typical client profile; our customers range from startups to large multinationals, from semiconductor companies to OEMs, each with their own unique concerns and expectations. In the medical market, for example, we collaborate with industry leaders in implants, such as Cochlear, as well as with brand-new startups aiming to bring novel ideas to new markets. The common ground among all our clients is their need for a partner who can build innovative, state-of-the-art ASICs with low risk and who supports sustainable production. They appreciate that ICsense combines the flexibility and dynamic team of a startup with the rigour, stability and sustainability of a large company.

In recent years, another major concern for our customers has been de-risking their supply chains. Discussions now frequently revolve around second sourcing and geopolitical issues. In response, we have been exploring more technology and partner options across the supply chain. Today, we are one of the few companies worldwide that can offer IC design in over 50 technology flavors, with fabrication facilities in the US, Europe, and Taiwan. Our specific design methodology allows us to efficiently work across various technology nodes, ensuring we can select the best match for our customers.

What does the competitive landscape look like and how do you differentiate?
Lately, there has been a lot of consolidation in the semiconductor value chain in Europe. As a result, ICsense remains one of the few companies of its size and capabilities that can serve external customers. Thanks to our mother company TDK, we can provide ASICs to Fortune 500 companies and to smaller companies and startups at the same time. With a team of over 100 skilled designers and in-house ATE and product engineering, we have a unique position in ASIC design and supply to the medical, industrial, consumer and automotive markets.

What new features/technology are you working on?
All our ASIC developments are customer-specific. Some will hit the market as an ASSP sold by our customer, most as part of a single product. Therefore, all the technology and features we are developing are confidential. We see some trends in the market, such as a shift toward smaller technology nodes (although not deep submicron) and a shift toward more differentiation in the supply chain. Our technology-agnostic design approach is quite powerful for capturing these trends.

Another trend is the drive toward higher integration and more functionality in many applications, from medical implants to industrial devices, pushing the boundaries of the state of the art. Again, this is one of our core strengths.

How do customers normally engage with your company?
We work with customers in two models: the first is a pure design support model, where we act as a virtual team for our customer. We perform the full design and hand over the design files, so our customer can integrate it further or handle the manufacturing themselves. Our second and most popular model is the turnkey supply model or, as we call it, ASIC design and supply. We handle the complete development from study up to mass production for our customer and we supply the ASICs to them throughout the lifetime of their product.

An ASIC design can start with just a back-of-the-envelope idea or a full product requirement. Whatever the starting point, our first step is always to do a feasibility and architectural study in which we pin down all the details of the design to be made, define boundary conditions and prove with calculations and preliminary simulations that the requirements can be met.

We then proceed to the actual implementation, the design and layout work, which is the bulk of the work in the project. Through the design cycle, we continuously perform in-depth verification from transistor to chip top level to make sure all use cases are covered prior to the actual manufacturing of the wafers. In parallel to the manufacturing of the engineering silicon, we develop the ATE test hardware and software so that when the silicon returns from the fab, we can immediately start testing.

We have a good track record of first-time-functional designs, meaning that the ASIC is fully functional and can be used to build prototypes at the customer side. We typically only need a respin to fix small items and to optimise the yield. This is a result of our proprietary, systematic design flow based on commercially available EDA tools such as Cadence, Synopsys and Siemens.

The last stage is industrialisation, which includes qualification of the chips and additional statistical analysis to prove robustness over the lifetime of the product. Our product engineering team supports our customer with ramp-up, start of production, and yield monitoring during production. The supply model, direct or through partners, depends on the volume and the type of customer.

Also Read:

CEO Interview: Anders Storm of Sivers Semiconductors

CEO Interview: Zeev Collin of Semitech Semiconductor

CEO Interview: Yogish Kode of Glide Systems


Overcoming Verification Challenges of SPI NAND Flash Octal DDR

Overcoming Verification Challenges of SPI NAND Flash Octal DDR
by Kalar Rajendiran on 08-22-2024 at 10:00 am

Typical Octal Serial NAND Device

As the automotive industry continues to evolve, the demands for high-capacity, high-speed storage solutions are intensifying. Autonomous vehicles and V2X (Vehicle-to-Everything) communication systems generate and process massive amounts of data, necessitating advanced storage technologies capable of meeting these demands. NAND Flash memory, particularly in its Serial NAND form, has emerged as a critical component in this space, offering higher memory density compared to alternatives like NOR Flash. However, the adoption of new architectures, especially those involving SPI Octal DDR interfaces, presents unique challenges in the verification of these storage solutions.

Durlov Khan, a Product Engineering Lead at Cadence, gave a talk at the FMS 2024 Conference on how his company helped overcome these verification challenges.

Challenges in Verifying SPI NAND Flash Octal DDR

One of the significant hurdles in integrating SPI Octal DDR NAND Flash into automotive applications is the difficulty in accurately verifying these advanced storage devices. Traditional verification models for NOR Flash memory cannot adequately model the architecture and addressing schemes of Serial NAND Flash memory, especially when it comes to the Command-Address-Data (C-A-D) instruction sequences.

Existing models for x1, x2, or x4 SPI Quad NAND devices fall short in simulating Octal SPI NAND devices due to key differences in architecture. Octal SPI NAND uses an 8-bit wide data bus, requiring more complex Command-Address-Data (C-A-D) sequences and additional signal pins (SIO3-SIO7), which aren’t supported by Quad SPI models.

Additionally, Octal devices operate at higher frequencies with stricter timing parameters, including the use of a Data Strobe (DS) signal for data synchronization. These factors make existing Quad SPI models inadequate for accurately simulating the behavior of Octal SPI NAND devices.
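
As a rough way to see why the octal DDR interface matters, this back-of-the-envelope sketch compares raw data-phase throughput across SPI widths. The 166 MHz clock is an assumed figure for illustration, and command/address overhead is ignored.

```python
def raw_throughput_mb_s(clock_mhz: float, bus_width_bits: int, ddr: bool) -> float:
    """Data-phase throughput only, ignoring command/address overhead."""
    edges_per_cycle = 2 if ddr else 1  # DDR clocks data on both edges
    return clock_mhz * bus_width_bits * edges_per_cycle / 8  # MB/s

CLK_MHZ = 166  # assumed clock rate, for illustration only
for name, width, ddr in [("x1 SPI SDR", 1, False),
                         ("x4 Quad SDR", 4, False),
                         ("x8 Octal SDR", 8, False),
                         ("x8 Octal DDR", 8, True)]:
    print(f"{name:>13}: {raw_throughput_mb_s(CLK_MHZ, width, ddr):6.1f} MB/s")
```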

Attempting to replicate an Octal device by combining multiple SPI or SPI Quad NAND devices is not feasible due to signaling incompatibilities and significant discrepancies in AC/Timing parameters, leading to inaccurate verification results. This gap in verification capabilities poses a substantial risk, as it limits developers’ ability to ensure that their automotive storage solutions will perform reliably in real-world scenarios.

Collaborative Solution: SPI NAND Flash Memory Model Enhancement

To address these challenges, a collaborative effort was undertaken by Cadence, in partnership with Winbond, leading to the development of a robust solution for SPI Octal DDR verification. This solution centers around the enhancement of the Cadence SPI NAND Flash Memory Model, which now supports the new SPI Octal DDR capabilities.

This enhanced Memory Model can be activated through a configuration parameter and includes additional support for a Volatile Configuration Register. This register allows users to program the correct Octal transfer mode, enabling accurate simulation of the SPI Octal DDR interface. In this mode, legacy SI and SO pins are repurposed, and new SIO3-SIO7 pins are introduced, along with a Data Strobe (DS) output pin that works with read data to signal the host controller at maximum DDR frequencies.

The model is fully backward compatible and can operate in multiple modes, including 1-bit SPI Single Data Rate (SDR), 1-bit SPI Double Data Rate (DDR), 8-bit Octal SPI SDR, and 8-bit Octal SPI DDR, depending on user configuration. This flexibility ensures that developers can accurately simulate a wide range of operational scenarios, crucial for the varying demands of automotive applications.
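
Conceptually, the mode selection behaves like a small state machine keyed off a configuration register write. The sketch below is a hypothetical stand-in, not the Cadence model’s actual interface, intended only to illustrate how one register write can repurpose the pins and the data rate.

```python
from enum import Enum

class XferMode(Enum):
    # The four operating modes named in the article
    SPI_SDR_1BIT = "1-bit SPI SDR"
    SPI_DDR_1BIT = "1-bit SPI DDR"
    OCTAL_SDR_8BIT = "8-bit Octal SPI SDR"
    OCTAL_DDR_8BIT = "8-bit Octal SPI DDR"

class NandModelConfig:
    """Hypothetical stand-in for a model's volatile configuration register."""

    def __init__(self) -> None:
        self.mode = XferMode.SPI_SDR_1BIT  # backward-compatible default

    def write_volatile_cfg(self, mode: XferMode) -> None:
        # In the octal modes the legacy SI/SO pins are repurposed, the
        # SIO3-SIO7 pins become active, and the DS output toggles with
        # read data at DDR rates (per the article's description).
        self.mode = mode

cfg = NandModelConfig()
cfg.write_volatile_cfg(XferMode.OCTAL_DDR_8BIT)
print(cfg.mode.value)  # -> 8-bit Octal SPI DDR
```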

Real-World Application and Results at NXP

The integration of the Cadence VIP into NXP’s test environment demonstrated the effectiveness of this solution. The VIP seamlessly supported various density grades of SPI NAND Flash, with commands automatically adapting to the specific density grade in use. This adaptability and the ability to accurately model the SPI Octal DDR interface provided NXP with a reliable verification tool, ensuring that their storage solutions met the stringent performance and reliability standards required in the automotive sector.

Summary

The challenges in verifying SPI NAND Flash Octal DDR devices highlight the complexities of developing advanced storage solutions for the automotive industry. However, through collaborative efforts and innovative solutions like the enhanced SPI NAND Flash Memory Model from Cadence, developers can overcome these challenges. This advancement not only supports the current needs of automotive applications but also lays the groundwork for future innovations in storage technology, ensuring that the next generation of vehicles can handle the ever-increasing demands of data processing and storage with efficiency, reliability, and security.

For more details, visit Cadence’s SPI NAND solutions page.

Also Read:

The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation

The Future of Logic Equivalence Checking

Theorem Proving for Multipliers. Innovation in Verification