
Accellera and PSS 3.0 at #61DAC

by Daniel Payne on 09-03-2024 at 10:00 am


Accellera invited me to attend their #61DAC panel discussion about the new Portable Stimulus Standard (PSS) v3.0, and the formal press release was also just announced. The big idea with PSS is to enable seamless reuse of stimulus across simulation, emulation and post-silicon debug and prototyping.

Tom Fitzpatrick from Siemens EDA was the panel moderator, and he shared that a leading challenge is creating sufficient tests to verify the design; debug time is the bottleneck. UVM is good for building modular, reusable verification environments, but only a small group of people understand how to verify and bring up a chip. UVM isn’t great for creating test content: scoreboard checkers are manual, and UVM doesn’t scale well for concurrency, resources, and memory management. Combining PSS with UVM provides the required abstraction: PSS provides scheduling for a UVM environment, with UVM supplying the structural features and PSS supplying features tailored for test-scenario creation. PSS really complements UVM rather than replacing it.

Major new features

  • Added “behavioral coverage” clause / Added “Formal semantics of behavioral coverage” annex
    • Coverage performs a critical role in assessing the quality of tests. PSS has supported data coverage (think SystemVerilog covergroups) since its first release, but being able to collect coverage on key sequences of behavior is critical for system-level verification, where PSS focuses. You can think of PSS behavior coverage as “SystemVerilog assertions (SVA) for the system level”. It allows users to capture key sequences of behavior and key behaviors that must occur concurrently, and to collect coverage data proving that these key behaviors were executed.

Minor new feature or incremental changes to existing features

  • Added address space group
    • PSS models memory management as a first-class language feature that allows users to characterize different regions of memory (e.g., DDR, SRAM, flash, read-only, read-write) and specify test requirements for memory. When combining PSS models from different sources that access the same overall address space, it can happen that the different model creators characterized memory in different ways. Address-space groups improve reuse by allowing models that characterize memory differently to work together.
  • Added support for “sub-string operator” and string methods
    • PSS has had support for a string datatype since its initial release. The new operators and methods provide new ways for users to manipulate the values stored in strings.
  • Added support to allow collection of reference types
    • This enables more-complex PSS models to be created.
  • Added support for comments in template blocks
    • PSS supports a code-generation feature that is used for targets that require very specific control over the test structure (e.g., Perl scripts or specific assembly-language tests). This enhancement allows adding comments to the ‘template’ blocks used to generate the output code. This helps users as they develop PSS models by allowing them to temporarily disable portions of a template with less effort.
  • Added support for yielding control with cooperative multitasking
    • Test code running on a single core runs cooperatively. The yield statement allows the user to explicitly code in points at which other concurrently-running test code should be allowed to execute – for example, while polling a register to detect when an operation is complete.
  • Added PSS-SystemVerilog mapping for PSS lists
    • PSS defines interoperability with SystemVerilog, C, and C++. This enhancement allows PSS list variables to be passed to SystemVerilog, enabling more-complex data to be passed.

Clarifications, etc

  • Added support to allow platform qualifiers on function prototype declarations
  • Clarified static const semantics

Panel Discussion

Panel members were:

  • Dave Kelf – Breker
  • Sergey Khaikin – Cadence
  • Hillel Miller – Synopsys
  • Freddy Nunez – Agnisys
  • Santosh Kumar – Qualcomm

Q&A

Q: What is your favorite PSS feature?

Sergey: The resource allocation feature, because I can create a correct by construction test case for DMA channels as an example.

Hillel: State objects with schedule. Sequences are correct by constraint.

Santosh: Inferencing of actions.

Freddy: One standard for multiple platforms.

Dave: Reusability.

Q: What are the challenges to learn another language like PSS?

Santosh: Folks from SV already know about constraints, and newbies learn the declarative nature quickly, so the ramp-up time with PSS is not too high. Once you make the switch, there’s no going back.

Sergey: We’re teaching PSS in classes, and it’s a concise language. There is a mindset change from procedural to declarative, and then we let the tool do the magic work.

Hillel: Freshers can learn PSS in just a week or two.

Dave: Mindset is the real barrier. The path is to learn the methodology first and the language second. Expert users and services companies are the first to adopt PSS. There’s some early adoption from those who create VIP models. PSS will grow just as formal verification grew. VIP with PSS and C behind it is growing.

Q: Why isn’t everyone using PSS yet?

Dave: It’s a new language; system VIP is the killer app. More libraries are added each year.

Hillel: The PSS standard enables more validation and verification.

Q: UVM enabled reusable test environments, what about a PSS methodology library for VIP?

Dave: Absolutely. Getting new people to adopt it is about showing the ease of use, so it’s an education issue.

Santosh: We need a standard methodology around using the standard. Applying PSS to verification, vs the UVM learning curve.

Freddy: PSS can do so much, but many don’t know how to first adopt it efficiently.

Q: How about formal vs stimulus free approaches? What about gen AI to create more stimulus?

Dave: The declarative approach in PSS is similar to the formal approach for verification.

Hillel: PSS is better than Excel and Word documents for an executable spec.

Santosh: Scenario specification is like an executable spec, so use the PSS language on how to program IPs.

Dave: PSS is really an executable spec standard.

Q: PSS usage is too low, who is using it today?

Sergey: There’s already a wide range of PSS users, and most of the verification pain is coming from multi-core embedded designs, where teams have a UVM environment with more than a dozen agents and too many virtual sequences.

Dave: I see PSS being used for large multi-core SoCs, verifying coherency and power domain testing, really, across the board use, even DSP applications.

Hillel: Wherever verification managers have pain in their present methodology.

Q: Why join the PSS committee?

Dave: It’s a great learning experience about verification and you get to talk with so many different users.

Sergey: We’ve been using this for many years but are new to the committee, and I see both vendors and customers working on real challenges. New volunteers bring in new requirements.

Hillel: Constraint solvers are improving and need to be more scalable.

Freddy: We always need new eyes on the language standard for feedback.

Santosh: More people will just strengthen the standard features. Start out with questions, then build to requesting new features.

Q: When does PSS get into IEEE standards?

Tom: PSS 3.0 is coming out around August. A 3.1 release is likely required before going to the IEEE in a year or two.

Q: Will IP vendors provide PSS models and IP-XACT models?

Tom: Yes, that’s ideal. IP vendors should provide the models.

Freddy: PSS will complement IP-XACT, not compete with it.

Conclusion

The tone at this #61DAC panel was very upbeat and forward looking. Verification engineers should consider adopting PSS 3.0 in their methodologies along with UVM. The Accellera committee has been accepting new feature requests in the PSS specification and forging improvements along the way.

Read the press release for PSS 3.0 from August 29th.

Related Blogs


Nvidia Pulled out of the Black Well

by Claus Aasholm on 09-03-2024 at 6:00 am

Nvidia Pulled the Quarter out of the Well

Despite a severe setback, Nvidia pulled it off once again

There have been serious concerns about the ROI on AI and yield problems with Blackwell, but Nvidia pulled it off again and delivered a result significantly above guidance.

Nvidia beat its revenue guidance of $28B by $2B, landing just above $30B, representing 15% QoQ and 122% YoY growth. As usual, the increase was driven by the Data Centre business, which reached $26.3B, demonstrating that the H100 is not just filling the void before Blackwell takes over; H100 demand is still solid.
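For the record, the headline numbers are internally consistent. A quick check (guidance, revenue, and growth figures from the paragraph above; the prior-quarter revenue is implied by the arithmetic, not reported here):

```python
# Check that the quoted beat and QoQ growth agree with each other.
guidance_bn = 28.0
revenue_bn = 30.0                   # "just above 30B$"
qoq_growth = 0.15                   # 15% quarter-over-quarter

beat_bn = revenue_bn - guidance_bn              # the ~$2B beat
implied_prev_q_bn = revenue_bn / (1 + qoq_growth)

print(beat_bn)                       # 2.0
print(round(implied_prev_q_bn, 1))   # 26.1
```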

Despite the excellent result and a mixture of “Maintain” and “Outperform” ratings from the analyst communities, the investor community was less impressed, and the Nvidia stock responded negatively.

It looks like the worry of some of the larger financial institutions and economists about AI’s ROI has taken hold, and investors are starting to believe in it. What I know for sure is that I know as much about AI’s future return as anybody else: Nothing!

Mark Zuckerberg of Meta formulated it well when dividing the Meta AI investment into two buckets: a practical AI investment with very tangible returns and a more speculative long-term generative AI investment.

As I have lived through the dot-com crash of the early millennium, I know that a fairy tale is only a fairy tale when you choose the right moment to end the story. Many stocks that tanked and rightfully were seen as bubble stocks are with us today and incredibly valuable. I had shares in a small graphic company that tanked during that period – fortunately, I kept the shares or else I would not have been able to write this article. It is too early to tell how the AI revolution will end, but companies are still willing to invest (bet) in AI.

Not surprisingly, the analyst community was interested in Jensen Huang’s view of this, and he was very willing to attack the likely most significant growth inhibitor of the Nvidia stock.

While I will not comment on the stock price, I believe Jensen did an excellent job framing the company’s growth thesis. Opposed to how critics have presented it, it is not only a question of AI ROI—it should be seen in the much larger frame of Accelerated computing.

Without being too specific on the actual numbers and growth rates, Jensen presented his growth thesis based on the combined value of the current traditional data centre, at around $1T.

While we can be criticised for working without exact numbers, we believe that high-level analysis with approximate numbers can provide value when the impact is large enough that precision is not required for insight. Fortunately, this is the foundation of any Nvidia analysis at the moment.

It is possible to judge whether the $1T data centre value is reasonable. The Property, Plant and Equipment (PPE) value of the top 5 data centre owners is above $650B, and the same companies have a quarterly depreciation of $28B; the implied average write-off period is 5.8 years, suggesting the PPE is heavy on server equipment with 4-5 year write-off periods.

The 1T$ value is a reasonable approximation for the Nvidia growth thesis.

This is what we extracted from Nvidia’s investor call and would frame as Nvidia’s growth thesis:

Nvidia is at a tipping point between traditional CPU-based computing and GPU-based accelerated computing in the data center, and Blackwell represents a step function in this tipping point. In other words – you ain’t seen anything yet!

The fertile lands for Nvidia’s GPUs are not only the new fields of AI but also the existing and well-established data centres of today. They will also have to convert their workloads to accelerated computing for cost, power and efficiency reasons.

We calculate the current depreciation of the $1T data centre value at $43B/quarter; in other words, this is what is needed just to maintain the value of the existing data centres. This depreciation is likely to increase if Nvidia’s growth thesis is right and the data centres have to convert their existing capacity to accelerated computing.
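The write-off math above can be reproduced in a few lines (the $650B PPE, $28B quarterly depreciation, and ~$1T figures come from the article; the rest is simple arithmetic):

```python
# Reproduce the article's back-of-envelope depreciation math.
ppe_bn = 650.0          # PPE of the top-5 data centre owners, $B
depr_bn_qtr = 28.0      # their combined quarterly depreciation, $B

writeoff_years = ppe_bn / depr_bn_qtr / 4            # ~5.8 years
dc_value_bn = 1000.0                                 # the ~$1T traditional DC value
dc_depr_bn_qtr = dc_value_bn / (writeoff_years * 4)  # ~$43B/quarter

print(round(writeoff_years, 1))    # 5.8
print(round(dc_depr_bn_qtr, 1))    # 43.1
```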

The current results of Nvidia will pale in comparison with the post-Blackwell era.

A prediction is not a result, but Jensen did an excellent job of framing the opportunity into a very tangible $1T+ opportunity and a more speculative xxxB$ AI opportunity that shows that Nvidia is not opportunity-limited. There is plenty of room to grow into a very tangible market.

It is time to dive into the details.

The status of Nvidia’s business

Investigating the product perspective, the GPU business dominates but both the Software and Networking products also did well.

From a quarterly perspective, software outgrew the rest with 27% growth, while GPU took the prize from a YoY perspective with 130% growth, followed by 100% networking growth and 70% software growth. We already know that Nvidia has transformed from a component company to a systems company, but the next transformation, to services, could be in sight. This reveals that Nvidia’s moat is more than servers, and that it is expanding.

From a Data center platform perspective, this was expected to be the Blackwell quarter but no meaningful revenue was recorded.

The revenue is completely dominated by the H100 platform, while the A100 is close to being phased out. Chinese revenue kept growing at a strong rate despite having been set back by the restrictions imposed by the US government on GPU sales to China. We categorise all the architectures allowed in China (A800 and H800) as H20 (specially designed for China).

While Nvidia’s revenue by country can be hard to decipher as it is a mixture of direct and indirect business through integrators, the China business is purely based on what is billed in China.

As can be seen, the China revenue is showing strong growth. In the last quarter it grew by more than 47%, bringing revenue back close to pre-embargo levels. Nvidia highlighted that China is a highly competitive market (lower prices) for AI, but it is obvious that Nvidia competes well in China.

This is also a strong indication of Nvidia’s market position: even in a low-cost market bound by embargoes, Nvidia’s competitive power is incredibly strong.

The increased GPU revenue is not really showing in the CapEx of the known cloud owners in China. We will continue following this topic over time.

The systems perspective

With AMD’s acquisition of ZT Systems accelerating the company’s journey from a components to a systems company, it is worth analysing Nvidia with that lens.

Nvidia had already made this transition back in Q3-23, when the first H100 revenue became visible.

From then on, revenue is no longer concentrated around Nvidia silicon (GPU + networking) but also includes memory silicon, predominantly from SK Hynix, and an increasing “Other” category that represents the systems transformation.

The “Other” category also includes advanced packaging, including the still very costly build-up substrates necessary for the H100 and later for Blackwell.

This demonstrates that while ZT Systems makes AMD more competitive, the company is not overtaking Nvidia but catching up to a similar level of competitiveness from a systems perspective.

The Q3 result in more detail

As can be seen from the chart, there was significant growth in Nvidia’s revenue, gross margin and operating margin, but not to the same degree as in the last couple of quarters.

The growth rates are declining and this is a cause for concern in the analyst community and likely the reason the stock market response has been less than ecstatic.

Indeed, the quarterly revenue growth rate was down from 17.8% last quarter to 15% this quarter, and both gross profit margin and operating profit margin declined. In isolation this looks like the brakes are slightly impacting Nvidia’s hypergrowth. Numbers don’t lie, but they always exist in a context that seems to have eluded the analyst community.

Three months ago, Jensen Huang declared that there would be a lot of Blackwell revenue in both the quarter and the rest of the year, but shortly after, a design flaw allegedly impacting yields was found and a metal mask layer of Blackwell had to be reworked. In reality, a key growth component vaporised, which should have left the quarter in ruins. Nevertheless, Nvidia delivered a result just shy of the growth performance of the last few stellar quarters.

Knowing how complex the semiconductor supply chain is, this is a testament to Nvidia’s operational agility and ability to deliver. The company did not get sufficient credit for this.

A dive into the supply machine room can add to the story.

The supply machine room

Assuming that the Q2 cost of goods sold represents a balanced baseline (under continuous steep growth), Nvidia requires $215M worth of COGS to generate $1B in revenue. The Q3-24 increase in revenue represents an additional COGS of $860M, bringing the total COGS needed to $6.5B.

The COGS actually grew to $7.5B, while the inventory build also accelerated, from $600M/qtr to $800M/qtr.

In total, the COGS/inventory position grew by $1-1.2B above what Nvidia would need to deliver the result in H100 terms. This represents the impact of the unexpected problems with Blackwell.

In other terms, Nvidia was probably preparing for around $5B worth of Blackwell revenue that now had to be replaced with H100 revenue.
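The excess-COGS estimate in the last few paragraphs can be checked the same way (the $215M-per-$1B ratio, $7.5B actual COGS, and inventory figures come from the article; treating the Q2 ratio as a steady-state baseline is the article’s assumption):

```python
# Estimate how much COGS/inventory exceeded what the quarter's revenue required.
cogs_per_rev = 0.215        # $215M of COGS per $1B of revenue (Q2 baseline)
q3_revenue_bn = 30.0
needed_cogs_bn = cogs_per_rev * q3_revenue_bn   # ~ $6.45B
actual_cogs_bn = 7.5
inventory_step_bn = 0.8 - 0.6   # inventory build accelerated by ~$0.2B/qtr

excess_bn = (actual_cogs_bn - needed_cogs_bn) + inventory_step_bn
print(round(excess_bn, 2))      # 1.25, roughly matching the $1-1.2B cited above
```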

Simultaneously, TSMC’s HPC revenue jumped, which could be caused by other customers but undoubtedly also includes some extra work for Nvidia driven by the Blackwell issues.

As seen in the TSMC HPC revenue, it took a bump of $2.2B, which can easily contain the $1-1.2B of additional COGS/inventory that Nvidia is exhibiting.

No matter what, the Blackwell issue was significant, and Nvidia dealt with it without taking the credit, downplaying the issues on the investor call. From my experience working on semiconductor battleships, this was like a direct hit close to the ammunition stores, with everybody in a panic. On the outside, and on the investor call, it was treated like a glancing blow.

Demand and Competition

Rightfully, a good analysis should include the competitive situation and a view of the demand side of the equation. We recently covered those two areas in the article “OMG It’s alive”. The conclusion of that article is that Nvidia’s competitive advantage remains strong and that the CapEx of the large cloud providers is growing in line with their cloud businesses. The visibility of CapEx spend is also favourable to Nvidia and the AI companies in general.

Conclusion

As always, we do not try and predict any share price but concentrate on the underlying business results. Sometimes the two are connected, other times not.

This analysis shows that Nvidia pulled off a very good result while taking a direct hit to the hull. The impact of the Blackwell issue was significant but was handled with only some damage to revenue growth and profitability. This will likely recover soon.

It reconfirmed Nvidia’s journey towards becoming a systems company, with strong networking and software growth, and an increase in CapEx could signal that something interesting is brewing. While the ZT Systems acquisition is good for AMD, it does not represent a tangible threat to Nvidia.

The H100 platform is executing impressively and now accounts for $19B/qtr, or 73% of the DC business and more than 63% of total Nvidia revenue. Despite the Blackwell problems, the H100 supply chain pulled through and Nvidia blew through the $28B guidance.

China is becoming important again with strong growth of over 47% QoQ. The Chinese share of revenue is now back to more than 12% of total revenue. Nvidia has clearly struck a balance between cost and performance that does not hurt the profitability of the company. While it is not visible where the GPU goes, it is safe to assume the Chinese growth does not stop here.

For us the most interesting thing in the Nvidia call was the revelation of the Nvidia Growth Thesis (our term) as a response to the worries of ROI on AI spread by banks and economists based on short term returns. We think that Jensen Huang layed out an excellent growth thesis with plenty of opportunity to grow while at the same time addressing the ROI on AI.

A more pressing issue will be the ROI on the $1T of traditional CPU-based data centre value, which will depreciate by $43B (our analysis) per quarter. Jensen argues that this capacity will be unable to compete with accelerated computing, and very soon.

If Jensen is right here, there is no need to worry about the ROI on AI for some time. The cloud companies will have to invest just to protect their cloud business.

It looks like the growth thesis has escaped most of the analyst community, which is more interested in calculating the next quarter than in lifting its gaze to the horizon. The future looks bright for Nvidia.

While our ambition is to stay neutral, we allow ourselves to be impressed every once in a while and that is what we were in this investor call.


Also Read:

Robust Semiconductor Market in 2024

Semiconductor CapEx Down in 2024, Up Strongly in 2025

Automotive Semiconductor Market Slowing


Intel and Cadence Collaborate to Advance the All-Important UCIe Standard

by Mike Gianfagna on 09-02-2024 at 10:00 am


The Universal Chiplet Interconnect Express™ (UCIe™) 1.0 specification was announced in early 2022 and a UCIe 1.1 update was released on August 8, 2023. This open standard facilitates the heterogeneous integration of die-to-die link interconnects within the same package. This is a fancy way of saying the standard opens the door to true multi-die design, sourced from an open ecosystem that can be trusted and validated. This standard is very important to the future of semiconductor system design. It’s also quite complex and presents many technical hurdles to practical usage by many. Intel and Cadence recently published a white paper that details how the two companies are working together to get to the promised land of a chiplet ecosystem. If multi-die design is in your future, you will want to get your own copy. A link is coming, but let’s first examine some history and innovation as Intel and Cadence collaborate to advance the all-important UCIe standard.

Some History

It turns out Cadence and Intel have a history of collaborating to bring emerging standards into the mainstream. In 2021, the companies collaborated on simulation interoperability between an Intel host and Cadence IP for the Compute Express Link™ (CXL™) 2.0 specification. Like UCIe, this work aimed to have a substantial impact on chip and system design.

The CXL 2.0 specification, along with the latest PCI Express® (PCIe®) 5.0 specification, provided a path to high-bandwidth, cache-coherent, low-latency transport for many high-bandwidth applications such as artificial intelligence, machine learning, and hyperscale computing, with specific use cases in newer memory architectures such as disaggregated and persistent memories.

The ecosystem to support this standard was rapidly evolving. Design IP, verification IP, protocol analyzers, and test equipment were all advancing simultaneously. This situation could lead to design issues not being discovered until prototype chips became available for interoperability testing. Finding the problem this late in the process would delay product introduction for sure.

So, Intel and Cadence collaborated on interoperability testing through co-simulation as the first proof point to successfully run complex cache coherent flows. This “shift-left” approach demonstrated the ability to confidently build host and device IP, while also providing essential feedback to the CXL standards body.

You can read about this project here.

Addressing Present Day Challenges

In 2023 Cadence and Intel began collaborating again, this time to advance the UCIe standard and help achieve on-package integration of chiplets from different foundries and process nodes – the promise of an open chiplet ecosystem. UCIe is expected to enable power-efficient and low-latency chiplet solutions as heterogeneous disaggregation of SoCs becomes mainstream.  This work is critical to keep the exponential complexity growth of Moore’s Law alive and well. Monolithic strategies won’t be enough.

To achieve a chiplet ecosystem, design IP, verification IP, and testing practices for compliance will be needed, and that is the focus of the work summarized in this white paper. Here are the topics covered in the white paper – a link is coming so you can get the whole story.

UCIe Compliance Challenges. Topics include the electrical, mechanical, die-to-die adapter, protocol layer, physical layer, and integration of the golden die link to the vendor device under test. The PHY electrical and adapter compliances include the die-to-die high-speed interface as well as the RDI and FDI interface. The mechanical compliance of the channel is tightly coupled with the type of reference package used for integration. There are a lot of technical challenges and design-specific challenges discussed in this section.

The Role of Pre-Silicon Interoperability. There are many parts to each of the standards involved in multi-die design. The entire system is designed concurrently, resulting in all layers going through design and debug at the same time. Like the work done on CXL, “shift-left” strategies are explored here to allow testing and validation to be done before fabrication. The figure below illustrates the relation of the various specifications.

UCIe – A Multi Layered Subsystem

UCIe Verification Challenges. Some of the unique challenges to the verification environment are discussed here. Topics covered include:

  • D2C (data-to-clk) Point Testing
  • PLL Programming Time
  • Length of D2C Eye Sweep Test
  • Number of D2C Eye Sweep Tests

UCIe Simulation Logistics. For this project, the Cadence UCIe advanced package PHY model with x64 lanes was used for pre-silicon verification with Intel’s UCIe vectors. Topics covered include:

  • Initial Interoperability
  • Simulation – Interoperability over UCIe
  • Controller Simulation Interoperability

The piece concludes with UCIe Benefits to the Wider Community.

To Learn More

If multi-die design is in your future, you need to understand the UCIe standard. And more importantly, you need to know what strategies exist for early interoperability validation. The white paper from Cadence and Intel is a must read. You can get your copy here. And that’s how Intel and Cadence collaborate to advance the all-important UCIe standard.

Also Read:

Overcoming Verification Challenges of SPI NAND Flash Octal DDR

The Impact of UCIe on Chiplet Design: Lowering Barriers and Driving Innovation

The Future of Logic Equivalence Checking


WEBINAR: Workforce Armageddon: Onboarding New Hires in Semiconductors

by Daniel Nenni on 09-02-2024 at 6:00 am


The semiconductor industry is undergoing an unprecedented inflection—not in its technology, but in its very structure. This transformation is happening at a time of phenomenal growth, presenting both opportunity and crisis. The ingredient most critical to meeting the growth demands, but which also poses the greatest risk, is workforce. There will not be nearly enough skilled workers to fill all roles. The history of such industrial inflections suggests many companies will under-prepare, and then over-react. To their detriment.

This webinar addresses a key, often overlooked, and perhaps unexpected ingredient for weathering such crises: employee onboarding.

Join us at 13:00 EST on September 5th, 2024, hosted by Chipquest. Register here.

The Compounding Forces Behind the Workforce Crisis

This workforce crisis is not driven by one or two independent factors, but by several compounding forces that are reshaping the industry landscape:

Expanded Demand Across Multiple Fronts: The demand for semiconductors is skyrocketing across various sectors:

  • Consumer Electronics: More laptops, smartphones, and other devices are being produced than ever before.
  • Data Centers: The surge in digital transformation during the COVID-19 pandemic has increased the need for server farms to support cloud computing, e-commerce, and streaming platforms.
  • IoT and Automotive: The proliferation of IoT devices and the shift toward electric and autonomous vehicles are driving exponential increase in use cases.
  • Artificial Intelligence: AI and machine learning applications are generating a new wave of need for advanced, high-performance chips.

Supply Chain Redundancy and Geopolitical Tensions: Geopolitical tensions have led to a push for on-shoring, reshoring, or near-shoring semiconductor manufacturing:

  • Companies like TSMC and Amkor are expanding their manufacturing footprint to countries where they never had a presence before.
  • This duplication of infrastructure requires additional skilled workers, further stretching the already limited talent pool.

Technology Sovereignty as National Security: The global race for semiconductor supremacy has become a matter of national security:

  • Governments are investing heavily in domestic semiconductor capabilities. Newcomers like India and Vietnam are entering the semiconductor race, intensifying competition for talent.
  • The CHIPS and Science Act and similar initiatives in other nations aim to secure technology sovereignty, further escalating the need for skilled professionals.

Workforce Dynamics and a Changing Labor Landscape: The semiconductor workforce is already greatly reduced from its earlier peak, and the industry faces a significant workforce gap due to early retirements, layoffs, and competition from other tech sectors:

  • A net exodus of workers due to layoffs, early retirements and pilfering of key talent by adjacent industries.
  • Declining interest in manufacturing roles, particularly among younger demographics.

What’s Being Done—and What’s Missing

Public-private partnerships, government funding, and a renewed focus on education and apprenticeships are all steps in the right direction. While these initiatives create a more knowledgeable pool to draw from, they do not integrate new workers into the actual workplace, where the immensity of systems, procedures and policies readily overwhelms new hires.

A New Approach: Modernized Onboarding and Training

One critical aspect that continues to be overlooked is the effectiveness of onboarding and training within individual companies. Traditional methods—relying on static PDFs and uninspiring safety training—fail to engage new employees. This not only leads to costly mistakes but also impacts retention rates.

To address these challenges, the semiconductor industry needs innovative solutions that can modernize onboarding and training. Methods like gamification and microlearning offer a glimpse into how training can become more engaging and effective, better aligning with the expectations of today’s digital-native workforce.

Join Us to Learn More

The semiconductor industry is transforming, and companies must adapt their workforce strategies to stay competitive. Join Chipquest’s upcoming webinar, “Workforce Armageddon: Onboarding New Hires in Semiconductors,” to explore these critical challenges and the innovative solutions that can help your organization thrive.

Register now to secure your spot!

Also Read:

Elevate Your Analog Layout Design to New Heights

Introducing XSim: Achieving high-quality Photonic IC tape-outs

Synopsys IP Processor Summit 2024


Podcast EP244: A Review of the Coming Post-Quantum Cryptography Revolution with Sebastien Riou

Podcast EP244: A Review of the Coming Post-Quantum Cryptography Revolution with Sebastien Riou
by Daniel Nenni on 08-30-2024 at 10:00 am

Dan is joined by Sebastien Riou, Director of Product Security Architecture at PQShield. Sebastien has more than 15 years of experience in the semiconductor industry, focusing on achieving “banking grade security” on resource-constrained ICs such as smart cards and mobile secure elements. Formerly of Tiempo-Secure, he helped create the world’s first integrated secure element IP achieving CC EAL5+ certification.

Sebastien discusses post-quantum cryptography and why the US Government’s National Institute of Standards and Technology (NIST) is pushing for implementation of new, quantum-resistant security now. Sebastien explains how the new standards are evolving and what dynamics are at play to deploy those standards across a wide range of systems, both large and small. The special considerations for open source are also discussed.

Sebastien describes the broad hardware and software offerings of PQShield and the rigorous verification and extensive documentation that are available to develop systems that are ready for the coming quantum computing threat to traditional security measures.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Wendy Chen of MSquare Technology

CEO Interview: Wendy Chen of MSquare Technology
by Daniel Nenni on 08-30-2024 at 6:00 am

Wendy Chen, MSquare CEO

Wendy Chen, MBA from the University of Manchester, has been the Founder and CEO of MSquare Technology since 2021. With over 23 years in the semiconductor industry, Wendy’s career includes roles as Sales Director at Synopsys Technology, Vice President at TF-AMD, and Vice President at Alchip Asia Pacific. Her extensive experience and leadership have been key to MSquare Technology’s growth and innovation.

Tell us about your company?
Our company, MSquare Technology, was incorporated in 2021 and is a leading provider of integrated circuit IPs and Chiplets, dedicated to addressing the challenges of chip interconnectivity and vertical integration in the AI era. We currently operate offices in Taipei, Shanghai, Sydney, and San Jose, with a team of over 150 employees, 80% of whom are dedicated to research and development. We strive to foster an open ecosystem service platform for AI and Chiplets, providing comprehensive support for innovation and development within the IC and Chiplet industry. MSquare’s IP products have been validated and brought to mass production on notable foundries’ process nodes, spanning from 5nm to 180nm and covering over 400 different process nodes across 5 leading foundries. The R&D team has launched interconnect interface IPs including HBM, LPDDR, ONFI, UCIe, eDP, PCIe, USB, as well as Chiplet solutions represented by M2LINK.

What problems are you solving?
In the current climate of tight semiconductor supply chains and rising costs, we leverage our robust portfolio, substantial supply chain resources, and system integration capabilities to provide customers with cutting-edge technology, a shortened time to market, and reduced design cost. We offer clients validated IP products equipped with the latest technological advancements.

What application areas are your strongest?
We possess distinct strengths in high-speed interface IPs, Chiplets, foundation IPs and integrated services. These technologies find widespread application in sectors like AI, Data Centers, Automotive Electronics, the IoT, and Consumer Electronics. In scenarios that require extensive data processing and rapid data transmission, our solutions substantially improve efficiency and performance.

  • AI & Data Center: We specialize in providing advanced interface IP and Chiplet products for High Performance Computing and AI applications. Our product portfolio is designed to meet the demands of AI and Data Centers for high-speed, high-bandwidth memory and interconnect technologies, ensuring the efficiency and security of data processing.
  • Automotive: Our products have obtained ISO 26262 functional safety certification, ensuring the most advanced functionality and reliable, safe operation.
  • Internet of Things: Our high-performance, low-power IP solutions are designed to enhance the security and communication efficiency of IoT devices, facilitating safe and efficient data transmission across applications such as smart homes, industrial automation, and smart cities.
  • Consumer Electronics: Our high-performance, low-power IP solutions enable devices such as smartphones, tablets, and smartwatches to achieve faster processing speeds, richer multimedia capabilities, and extended battery life, providing robust momentum for creating the next generation of smart devices.

What keeps your customers up at night?
In the post-Moore era, customers across various industries—such as AI, data centers, automotive, and consumer electronics—face significant challenges related to memory bandwidth/density and system costs. As computational demands and model complexities increase, traditional memory solutions often fall short in several key areas:

  1. High Memory Costs: With the growing need for larger memory capacities to handle complex and memory-intensive applications, costs associated with high-bandwidth memory (HBM) can be prohibitive. Many customers struggle with the high cost per unit bandwidth and the limited availability of advanced memory solutions.
  2. System Integration and Scalability: Integrating large memory capacities into computing systems traditionally requires complex and costly silicon interposers, which increase system costs and complicate design.
  3. Performance Bottlenecks: The need for higher memory bandwidth to improve inference throughput is critical, yet existing solutions often face limitations in achieving the necessary performance levels.

To address these challenges, MSquare Technology offers our innovative HBM3 IO-Die solution. This approach provides several key benefits:

  • Cost Efficiency: By decoupling the HBM host IP from SoCs and utilizing a separate IO-Die that converts the HBM protocol to the UCIe protocol, we reduce the need for expensive silicon interposers. This integration allows for a more cost-effective solution with broader process coverage and improved availability.
  • Enhanced Performance: Our HBM3 IO-Die incorporates the latest 32Gbps UCIe IP, which significantly increases memory bandwidth and supports larger memory capacities within a single computing node. This reduces the need for synchronization across multiple nodes and enhances overall system performance.
  • Flexibility and Scalability: The UCIe-based approach enables customers to integrate various Chiplets and memory types more flexibly. This modularity not only lowers SoC development and packaging costs but also allows for greater customization to meet specific application requirements.
  • Advanced Technology: MSquare’s commitment to standardizing Chiplet interfaces and our early adoption of the UCIe standard ensure that our solutions are at the cutting edge of technology. Our HBM3 IO-Die, expected for mass production by the end of 2024, represents a significant advancement in addressing the memory and performance needs of modern computing systems.

By offering these advanced solutions, MSquare helps our customers overcome the limitations of traditional memory solutions, manage costs effectively, and achieve superior performance in a rapidly evolving technological landscape.

What does the competitive landscape look like and how do you differentiate?
The interface IP and Chiplet sectors are experiencing rapid growth, with fierce competition dominated by major global corporations. By 2030, the market size for IPs is projected to reach $10 billion, while the Chiplet market could expand to ten times that of the IP market. We believe Chiplets represent a revolutionary shift in the semiconductor industry, succeeding IDM and Fabless models, with high-speed interconnects being essential for fulfilling end-application requirements. We hold essential capabilities and resources for Chiplet production—including comprehensive, one-stop solutions and strong supply chain integration. These assets enable us to swiftly adapt to the fast-changing market landscape and provide tailored solutions to our clients.

What new features/technology are you working on?
Our latest technology, the M2LINK solution, decouples the HBM host IP from SoCs by developing a separate IO-Die that converts the HBM protocol to the UCIe protocol. This IO-Die is packaged with the HBM stack into a single module, allowing direct connectivity on a common substrate without using a silicon interposer. It is compatible with UCIe 1.1 Die-to-Die technology, which offers a high clock frequency of up to 16GHz, provides a data transfer rate of up to 32Gbps per lane, and delivers 1Tbps (512Gbps TX + 512Gbps RX) of bandwidth per module for standard packages. This capability significantly supports the efficient computation of complex AI models.
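As a quick sanity check, the headline figures above are internally consistent. A minimal sketch, assuming only the per-lane and per-direction rates quoted in the interview (the lane count is derived here, not stated by MSquare):

```python
# Back-of-the-envelope check of the M2LINK bandwidth figures quoted above.
# Only 32 Gbps/lane and 512 Gbps per direction come from the interview;
# the lane count is an inference from those two numbers.

LANE_RATE_GBPS = 32        # UCIe 1.1 data rate per lane (quoted)
PER_DIRECTION_GBPS = 512   # TX (and RX) bandwidth per module (quoted)

lanes_per_direction = PER_DIRECTION_GBPS // LANE_RATE_GBPS
total_module_gbps = 2 * PER_DIRECTION_GBPS  # TX + RX combined

print(lanes_per_direction)  # 16 lanes each way (inferred)
print(total_module_gbps)    # 1024 Gbps, i.e. the ~1 Tbps claim
```

The 512 Gbps per direction therefore implies 16 lanes each way, and TX plus RX together give the quoted ~1 Tbps per module.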

How do customers normally engage with your company?
Our customers typically engage with us in the following ways:

  • IP Licensing: Customers can license various high-speed interface IPs, such as HBM, UCIe, PCIe, LPDDR, ONFI, etc. We provide thorough technical support and enable customers to integrate these IPs into their chip designs.
  • Chiplet Design: We offer Chiplet design services, which follow a process from design specifications through to sampling and verification. This engagement is for customers who need customized solutions beyond standard IP licensing.
  • One-stop Chip Design Service: We provide a comprehensive service that covers the entire lifecycle of chip development, including chip design, fabrication, packaging, and finally, testing.

We can be reached through our sales, technical support, and marcom teams.

Our contact page: https://en.m2ipm2.com/AboutStd.html#s50019

Our website: https://en.m2ipm2.com/

Our LinkedIn: https://linkedin.com/company/m2ipm2

Also Read:

CEO Interview: BRAM DE MUER of ICsense

CEO Interview: Anders Storm of Sivers Semiconductors

CEO Interview: Zeev Collin of Semitech Semiconductor


Keysight EDA and Engineering Lifecycle Management at #61DAC

Keysight EDA and Engineering Lifecycle Management at #61DAC
by Daniel Payne on 08-29-2024 at 10:00 am

Keysight EDA at 61DAC min

Entering the exhibit area of DAC on the first floor I was immediately faced with the Keysight EDA booth, and it was even larger than either the Synopsys or Cadence booths. They had a complete schedule of partners presenting in their theatre that included: Microsoft Azure, Riscure, Fermi Labs, BAE Systems, Alphawave, Intel Foundry, Sarcina Technology, AWS, TSMC, Allegro Microsystems, Microsoft, UCIe. My visit was with Simon Rance, Director of Product Management and Strategy. The theme this year was Elevate Your Design Intelligence.

Engineering Lifecycle Management

What’s new for 2024 is the focus on Engineering Lifecycle Management (ELM), a new acronym for a new era. Keysight started with its Design Data Management (SOS) tool, aimed at just design data management, but the industry needed a way to fill the gap between design and Product Lifecycle Management (PLM) for back-end manufacturing, covering things like FuSa and ISO 26262 and adding traceability all the way throughout the lifecycle. So ELM is PLM-like for engineers to focus on their project requirements, and it includes project management, with Keysight Engineering Lifecycle Management (HUB) as the single source of truth.

ELM will be used by design engineers and managers as they perform project management tasks, do digital and AMS design work; even the legal department uses it to verify that all IP in a project conforms to export controls; plus the IT department can define the security required for the data in each IP. The ELM also connects to your requirements tools and bug tracking tools.

Key customers using ELM today are in aerospace, aeronautics and automotive, where safety and traceability are paramount requirements. Both external IP and internally developed IP require tracking through the lifecycle. If a company has an ARM license and is acquired by another company, that triggers an event to redo the ARM contract, so you really need to know where all of these IP blocks are being used. The ESD Alliance reports quarterly revenues of both EDA software and IP, where IP revenue is now larger than EDA software revenue.

SoC design teams can take months to locate all of the IP required for a new or derivative project, negotiate licenses, then start to use the IP. So, having a catalog of IP can help speed that process by enabling re-use across a corporation. ELM is a strategic approach being advocated top-down by management, then adopted by engineering teams.

Stephen Slater, EDA Product Management – Integrating Manager, talked about the needs of AI and ML for the simulation process as the tools generate so much data, creating a need to tag and store the data. With ELM there’s a central hub to store and organize this kind of simulation data. Even within HUB there’s a knowledge base, creating an incentive to share your project knowledge with others. Once data is stored in HUB then it can start to make correlations. With the growing number of industries mandating traceability, it makes using an ELM more feasible, and besides – adding meta-data is good for you.

Alphawave: UCIe Compliance

Letizia Giuliano, VP Product Marketing at Alphawave Semi shared how their engineering team validates its IP and chips for UCIe compliance using the Keysight Chiplet PHY Designer tool. They created their IBIS AMI model in collaboration with Keysight, validating their 3 nm UCIe IP, for both standard and advanced packages.

Source: Alphawave

YouTube video, 10:49 length.

Lawrence Berkeley National Laboratory

Carl Grace was part of a team that designed custom cryogenic ASICs for Neutrino science, and they used Keysight ELM, IP, and data management tools in their flow for data sharing, team collaboration and security. Their ADC needed to digitize 16 channels at 12-bit resolution and 2 MS/s sampling rate/channel, with low noise, while operating for 30 years at -184C.
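Those ADC specs imply a substantial aggregate data stream per chip. A minimal sketch using only the figures quoted above (16 channels, 12 bits, 2 MS/s per channel); the framing/overhead-free assumption is mine:

```python
# Rough aggregate-output estimate for the cryogenic ADC described above.
# All three inputs are quoted in the article; the result assumes raw
# samples with no framing or protocol overhead.

CHANNELS = 16
RESOLUTION_BITS = 12
SAMPLES_PER_SEC = 2_000_000  # 2 MS/s per channel

aggregate_bits_per_sec = CHANNELS * RESOLUTION_BITS * SAMPLES_PER_SEC
print(aggregate_bits_per_sec / 1e6)  # 384.0 Mbps of raw sample data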

Source: Lawrence Berkeley National Laboratory

YouTube video, 14:46 length.

Sarcina Technology, Advanced Packaging

Bump pitch transformers were presented by Larry Zhu, PhD of Sarcina, and they used Keysight ADS and Memory Designer for advanced packaging design, plus simulation for Fan-Out Chip-on-Substrate with Si Bridge (FOCoS-B).

Source: Sarcina

YouTube video, 11:38 length.

Keysight Tools on Azure

The Director of Customer Engagements – Silicon Collaboration, Joe Tostenrude, presented on how to scale design team collaboration by using Keysight ELM, IP and data management tools running on the Azure Modeling and Simulation Workbench platform as a service.

Source: Keysight

YouTube video, 13:14 length.

IP Security

Serge Leef of Microsoft spoke about how they are helping meet government requirements by creating secure design IP repositories while using the Keysight ELM, IP and data management capabilities.

Source: Microsoft

YouTube video, 8:13 length.

Security Validation during the Design and Development Cycle

From Riscure (acquired by Keysight), Erwin in’t Veld presented how security at the hardware level is a requirement for modern electronic systems. Hardware exploits like side channels and fault injection need to be verified pre-silicon.

Source: Keysight

YouTube video, 16:40 length.

Hybrid SaaS Cloud for EDA

Using a hybrid of on-premises and cloud infrastructure was presented by Ravi Poddar, Principal Semiconductor Industry Advisor, AWS. Detailed use cases for FPGA prototyping and emulation verification were shown. Nupur Bhonge, Sr. Solutions Engineer, Keysight talked about requirements for IP and data management in a hybrid cloud flow.

Source: AWS

YouTube video, 14:52 length.

DuPont, PCB Hybrid Boards

DuPont sent Kalyan Rapolu, Principal Engineer to DAC and he described the design, simulation and characterization of PCB hybrid boards. Their team did layout in Keysight ADS, EM simulation, insertion loss measurements, and channel simulations with Keysight ADS.

Source: Dupont

YouTube video, 12:48 length.

Intel, UCIe Consortium

The co-chair of UCIe’s marketing work group, Brian Rea, talked about the latest chiplet interconnect specification, UCIe 1.1, which is fully backward compatible with UCIe 1.0. The new specification adds automotive enhancements, streaming protocols on the full stack, and bump map optimization.

Source: UCIe

YouTube video, 8:53 length.

Keysight Labs

Alex Stameroff of Keysight Labs described how their group delivers solutions to their customers by using Keysight EDA tools (SystemVue, Genesys, ADS, EMPro, HeatWave) to validate and then manufacture a variety of products.

Source: Keysight

YouTube video, 13:32 length.

SOC/Chiplet Design

From the Solutions Engineering group of Keysight, Prathna Sekar talked about how to optimize the IP-driven approach using IP and data management tools.

Source: Keysight

YouTube video, 16:00 length.

Allegro Microsystems, ISO 26262

The EDA Design Methodology Manager, Ravia Shankar Gaddam from Allegro Microsystems, spoke about IP management and meeting ISO 26262 compliance by using Keysight ELM, IP and data management tools. Their company has both corporate IPs and community IPs.

Source: Allegro Microsystems

YouTube video, 10:06 length.

Summary

2024 was a big growth year at DAC for the Keysight EDA team and I was able to see the increased awareness from attendees at the many theatre presentations from partners. ELM is a new acronym to keep track of in our EDA lexicon, and it will continue to grow in usage by teams around the globe. I did attend the UCIe presentation in the Keysight EDA theatre presented by Microsoft. I cannot wait to see what Keysight EDA develops in the next 12 months.

View all 12 of the theatre presentations on YouTube from this playlist.

Related Blogs


The Chip 4: A Semiconductor Elite

The Chip 4: A Semiconductor Elite
by KADEN CHUANG on 08-29-2024 at 6:00 am

Semiconductor Value Chain Market Share II
Can a 4-member alliance reshape the semiconductor industry?
Photo by Harrison Mitchell on Unsplash

Semiconductors are ubiquitous in electronics and computing devices, making them essential to developments in AI, advanced military technology, and the world economy. As such, it is unquestionable that nations attain considerable geopolitical and economic leverage from controlling large portions of the global semiconductor value chain, granting them access to key technological and commercial resources while providing them the ability to restrict the same access from other nations. For this reason, competition between major powers such as the United States and China has largely manifested itself in efforts to attain and restrict access to semiconductor technology. For example, under the CHIPS and Science Act, the United States government offers subsidies to global manufacturers on the condition that the companies do not establish fabrication facilities in countries that pose a national security threat. The United States has also established export controls on advanced semiconductor equipment to China and reached a deal for the Netherlands and Japan to undertake similar measures. China, on the other hand, is a net importer of semiconductors and deems its reliance on competing nations for semiconductor access a weakness; to counter this, it aims to establish a fully independent value chain, investing billions in its “Made in China 2025” policy to do so.

Perhaps the most ambitious venture to establish greater control over the semiconductor value chain emerged in 2022 under the Biden administration. Prior to the enactment of the CHIPS and Science Act, Biden proposed the Chip 4 alliance, a semiconductor collective composed of the United States, Japan, South Korea, and Taiwan. The four member states are essential to the semiconductor value chain, with each member specializing in the components necessary to develop semiconductors as a collective. Under the Chip 4, the four member states would coordinate policies on supply chain security, research and development, and subsidy use. The alliance would hold considerable influence over the distribution of semiconductors and could be utilized to significantly limit the chip access of geopolitical rivals. Despite its potential influence, the Chip 4 has yet to be realized, and it is unclear whether the prospective members will make clear commitments towards the alliance. In this article, we will provide a closer examination of the Chip 4 coalition and assess how it may influence the semiconductor industry. We will also observe the numerous challenges that prevent the prospective member states from forming the alliance.

A Closer Look at the Global Semiconductor Value Chain

Figure #1, designed by The CHIPS Stack

The semiconductor value chain is composed of three central parts: design, fabrication, and assembly. In the design process, the chip architecture’s blueprint is mapped out to fulfill a particular need. The process is facilitated using design software known as electronic design automation (EDA), and intellectual property (Core IP), which serves as basic building blocks for chip design. The semiconductor development process follows in the fabrication stage, where the integrated circuits are manufactured for use. Since these integrated circuits are built within the nano scale, the fabrication process requires highly specialized inputs, both in the form of materials and manufacturing equipment. In the final step, the wafers are assembled, packaged, and tested to be usable within electronic devices. The silicon wafers are sliced into individual chips, placed within resin shells, and undergo testing before being delivered back to the manufacturers.

Figure #2, Data taken from the SIA and BCG

In recent decades, the semiconductor global value chain has become increasingly specialized, with much of the value chain contributions split between the United States and East Asia. The United States possesses arguably the most important position within the semiconductor industry, having strong footholds in the design, software, and equipment domains. Its position in the design sector is especially essential, hosting key businesses such as Intel, Nvidia, Qualcomm, and AMD, which account for roughly half of the design market. On the other hand, much of the fabrication market is concentrated in East Asia, where Taiwan and South Korea play major roles. Taiwan and South Korea account for much of the world’s leading-edge fabrication, with TSMC producing the world’s most advanced semiconductors and Samsung following closely behind. In addition, Taiwan holds a well-established ecosystem for semiconductor manufacturing, with numerous sites for materials, chemicals, and assembly. Japan, along with the United States and the Netherlands, accounts for most of the industry’s equipment manufacturing, providing an essential function to the fabrication process. Lastly, China occupies the largest share of the assembly and testing processes and is also a major supplier of gallium and germanium, two materials central to semiconductor manufacturing.

As seen by the distribution of the value chain, the semiconductor industry relies on an interdependent network– no state can source semiconductors without the contributions of other states. Yet, the positioning of nations along the different components of the value chain creates imbalances in the degree of influence a nation has within the semiconductor industry. This, in turn, creates power dynamics that can be leveraged by nations with higher degrees of influence.

Weaponizing the Global Value Chain

Since the global semiconductor value chain operates under an interdependent network of states, states with access to exclusive resources can create chokepoints for rivals, diminishing their semiconductor capacities by withholding essential elements for its production. Hence, export controls operate as central weapons within the realm of global technology development, enabling dominant states to decelerate the growth of rising states.

The United States’ semiconductor-related export control measures against China provide valuable insights on how this principle has affected developments within the industry. In 2019, for instance, the Trump administration enacted export controls against the Chinese telecommunications company Huawei, employing a twofold measure to do so. Firstly, it banned Huawei from purchasing American-made semiconductors for its devices. Secondly, it banned Huawei’s subsidiary semiconductor company, HiSilicon, from purchasing American-made software and manufacturing equipment. Initially, the measure proved ineffective in stunting Huawei’s business operations. Taiwan and South Korea held stronger positions within the semiconductor manufacturing space, and Huawei simply sought their services when American sources were unavailable. The American design firms, which provided blueprints for Huawei chips, outsourced their manufacturing to foreign shores. Here, the export measures damaged American chipmaking firms more than they did Huawei, depriving the domestic businesses of a lucrative client.

However, in an update of the export control policy, the Trump administration extended the export control efforts to third-party suppliers, potentially cutting their access to software, core IP, and manufacturing equipment should they continue to engage in business with Huawei. The United States, by controlling much of the software and core IP sources, could indirectly restrict Huawei’s access to chip design by denying essential inputs to third party design firms. Similarly, its dominant position within the manufacturing equipment industry gave it considerable leverage within the fabrication space, indirectly cutting Huawei’s access to semiconductor manufacturing. By threatening to cut off critical resources for design and fabrication, the United States effectively disincentivized third-party engagement with Huawei. Huawei soon lost crucial access to advanced semiconductors and trailed behind in the smartphone market in the subsequent years, with a report stating that the United States’ efforts cost the company roughly $30 billion a year.

The United States’ policy on semiconductor export control illustrates how having control over fundamental components of the global value chain enables an agent to produce rippling effects downstream. Specifically, the influence the United States was able to exert on China derived from its control over critical chokepoints; the earlier export control measures executed by the United States demonstrated that export controls enacted without sufficient leverage are largely ineffective.

Even so, there are inherent risks associated with a frequent tightening of chokepoints, especially if conducted unilaterally. Since the semiconductor industry is highly competitive and dynamic, companies are frequently producing new innovations within the market. Hence, while withholding critical technology and resources may be effective in the short run, a sustained use of export controls provides opportunities for competitors to produce reliable substitutes and fill the gap within the market. These risks are mitigated by multilateral export controls, where multiple producers along the same chokepoint collectively enact export controls, making it much more difficult for substitutes to be sourced or replaced. Indeed, the Biden administration has increasingly engaged in multilateral efforts in export controls– the Dutch-Japanese-American ban on equipment exports to China is a clear example. More importantly, the proposed Chip 4 alliance provides another critical avenue where multilateral action can be taken.

The World under Chip 4

The stated purpose of the Chip 4 alliance is to provide the four member states with a platform to coordinate policies relating to chip production, research and development, and supply chain management. The United States has outlined the arrangement as one that is fundamentally distinct from its export control policies against China, deeming it as a necessary multilateral coordination mechanism rather than an alliance driven by geopolitical competition. Yet, what would happen if the four member states were to operate under complete coordination and utilize their significant leverage? If the four member states act under a coordinated effort, the Chip 4 would possess unprecedented control over the semiconductor industry, creating an extremely powerful inner circle. In many ways, the formation of the Chip 4 can lead to an extensive weaponization of the global value chain.

As a collective, the United States, Japan, South Korea, and Taiwan would act as the most dominant force within the semiconductor industry, given the capability to exercise significant leverage across almost all areas within the global value chain. When combining their expertise, the Chip 4 would have a majority share in all aspects of the global value chain except for assembly and testing:

Figure #3, with data adapted from Figure #2

As seen above, the Chip 4 could engage in chipmaking processes with minimal engagement from outside sources. More critically, the coordination of resources provides the Chip 4 with a much stronger grasp on chokepoints than the members would have been able to acquire individually. In the design sector, for instance, the United States possesses a 49% share of the market. While significant, the Chip 4 would enhance this market dominance to 84% by combining the capabilities of Japan, South Korea, and Taiwan– a multilateral effort to restrict design exports would severely limit the number of reliable substitutes available for semiconductor production. The Chip 4 would also hold 63% of the market share within the fabrication industry. While a significant figure, it underestimates the actual strength of the Chip 4 within advanced manufacturing; TSMC, Samsung, and Intel have been able to produce logic chips within 10 nanometers, providing the alliance with near-exclusive access to leading logic technologies. Within the equipment industry, the United States and Japan can still provide essential resources for leading fabrication firms, given fabrication’s concentration in Taiwan, South Korea, and the United States, and the Chip 4’s ability to restrict tooling to outside states could also be used to enhance the position of existing fabrication firms. Imaginably, the Netherlands’ ASML would also be a close ally to the Chip 4, providing essential equipment with its EUV tooling. Hence, the Chip 4 would inevitably act as a dominant force within the design, fabrication, and equipment industries, greatly shifting the dynamics of the global semiconductor industry.
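The arithmetic behind those leverage claims can be laid out explicitly. A minimal sketch using only the percentages quoted in the article (49%, 84%, 63%); everything derived below is simple subtraction, and the per-country split of the non-US design share is deliberately not assumed:

```python
# Illustrative aggregation of the market-share figures quoted in the article.
# Inputs: US design share, combined Chip 4 design share, Chip 4 fab share.

us_design = 49      # % of global chip design held by the US (quoted)
chip4_design = 84   # % of design held by the Chip 4 combined (quoted)
chip4_fab = 63      # % of fabrication held by the Chip 4 (quoted)

allied_design = chip4_design - us_design  # Japan + South Korea + Taiwan together
outside_design = 100 - chip4_design       # design capacity outside the alliance
outside_fab = 100 - chip4_fab             # fabrication outside the alliance

print(allied_design)   # 35 — the extra leverage multilateral action adds
print(outside_design)  # 16 — the pool from which substitutes must come
print(outside_fab)     # 37 — fabrication beyond the Chip 4's direct reach
```

The point of the comparison is the gap between 49% and 84%: a unilateral US design embargo leaves half the market as potential substitutes, while a coordinated Chip 4 embargo leaves only 16%.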

Conceivably, then, the Chip 4 can be used as an instrument to advance the United States’ technological race against China. Since the Chip 4 holds expertise across almost all aspects of the global value chain, it can rearrange the supply chain in a way that heavily reduces Chinese involvement and access, preventing the country from establishing a strong foothold within the industry. So far, China has been reliant on the technological prowess of its Far Eastern neighbors for its own development– it houses both Taiwanese and Korean fabrication facilities, providing it with access to logic and memory-based manufacturing. Korean companies such as Samsung and Hynix have been especially involved within China’s semiconductor ecosystem, providing a critical access point to the nation’s technological development; here, China can utilize technological leakages from more advanced fabrication sites to conduct essential knowledge-based transfers. Yet, under American leadership, members of the Chip 4 alliance may opt to reduce further investments within Chinese borders, effectively stalling Chinese progress.

Given the advantages the members would attain by forming a coalition, the prospect of establishing the Chip 4 appears highly attractive. However, the current state of the alliance suggests that its formation remains a distant ideal. Although plans for a coalition have been in discussion since March of 2022, the prospective member states have been slow to lay the groundwork for a coordinated policy. So far, only two meetings have been held to discuss the nascent coalition. The first occurred in September of 2022 and was attended only by working-level officials. A more recent meeting occurred virtually in February of 2023 between senior officials, though more concrete plans for the coalition have yet to be laid out. Despite its salient benefits, the coalition presents significant risks and challenges for its member states to confront, prompting a more cautious approach to the alliance. These obstacles are the greatest source of inertia in the alliance's progression.

The Geopolitical Challenge

The principal obstacle to a formal declaration of Chip 4 membership stems from the geopolitical implications it carries. Since the Chip 4 can be leveraged to impede China's semiconductor development, a commitment to the alliance will undoubtedly be interpreted antagonistically by the Chinese government. For Asian members with complicated economic and geopolitical ties to China, this is a significant barrier to entry. Unsurprisingly, the Chinese government has voiced opposition to the coalition, with a spokesman specifically urging the South Korean government to reconsider its long-term interests before making formal commitments. Diplomatically, South Korea has maintained stronger relations with China than the other Chip 4 states and therefore has a weaker interest in slowing China's semiconductor progress. While Japan and Taiwan have demonstrated strong interest in following the United States' multilateral initiative even at the cost of worsening diplomatic ties with China, South Korea has been more reluctant to act: of the four member states, it was the last to commit to a preliminary meeting discussing the Chip 4. Within the semiconductor industry, South Korea's ties to the Chinese market are significant; Samsung and Hynix have built numerous fabrication facilities in China, and the Chinese market accounted for 48% of South Korea's memory chip exports in 2021. In addition, the Chinese government has demonstrated a willingness to engage in retaliatory action when its interests are threatened. In 2017, for instance, it restricted trade with Korea in response to Korea's adoption of the THAAD anti-missile system. More recently, it restricted the export of gallium and germanium following the Dutch-Japanese-American export ban on semiconductor equipment. As such, any steps taken to restrict Chinese access to technology will likely escalate trade restrictions, inflicting high economic costs on all involved parties. Attaining membership in the Chip 4 therefore carries a fundamental risk, and South Korea appears the most disinclined to act under such circumstances.

There are also geopolitical tensions among the prospective Chip 4 members that make a formal coalition difficult to establish. While Japan, South Korea, and Taiwan each have strong diplomatic ties with the United States, the relationships among the East Asian member states rest on more tentative ground. South Korea and Japan's foreign relations have not fully recovered from their wartime past, which remains a source of diplomatic friction; in 2018, South Korea's Supreme Court ruled that Japanese companies must compensate Koreans subjected to forced labor in their wartime factories, prompting the Japanese government to retaliate by restricting the export of essential semiconductor-related chemicals to Korea. Separately, a South Korean official raised questions about establishing a formal alliance with Taiwan, seeking assurance from the U.S. government that Taiwan's membership would not constitute a violation of the One China policy. These concerns indicate that diplomatic tensions concerning the Chip 4 are manifesting not only externally but internally as well. Clearly, the United States must play a role in ameliorating these tensions for a seamless establishment of the Chip 4. The trilateral summit between the United States, Japan, and South Korea in August of 2023 demonstrates the United States' willingness to forge stronger ties between the Asian states, but it remains to be seen whether its efforts will suffice for the formation of the chip alliance.

The Business Challenge

When discussing the Chip 4, some have likened the alliance to OPEC in the oil business, observing that centralized coordination of the semiconductor industry among the four member states could produce a cartel-like presence within the market. While there may be some similarities between the two coalitions, a key difference stands out: the Chip 4's coordination would be conducted in the interest of national security, at the expense of private firms deprived of essential markets. The conflict between national security and business interests thus serves as another point of friction for the Chip 4's establishment. Already, American firms have shown increasing resistance to the tightening of sanctions against Chinese firms. When the U.S. announced export bans in 2022, the U.S.-based equipment manufacturers Lam Research, Applied Materials, and KLA stated that they could lose up to $5 billion in revenue from China. Following the enforcement of the bans, Applied Materials came under criminal probe for supplying shipments to China, reportedly selling to Chinese fabrication firms through disguised third parties. The realization of the Chip 4 would likely signify an escalation of trade restrictions against China, meaning businesses that have typically relied on Chinese consumption for their revenue would have much to lose. A sustained exclusion of exports to China would thus be received negatively by semiconductor firms, which depend on that large market for their businesses.

One must also consider the possible effects the formation of the Chip 4 may have on competition and chipmaking innovation. Coordinated semiconductor manufacturing would be a source of concern for leading fabrication firms, which may balk at the prospect of sharing technologies with potential rivals. As noted by U.S. government officials, the South Korean leadership has expressed apprehension that companies such as TSMC and Samsung might be pushed into knowledge exchange. Similarly, there are worries that the United States may use the Chip 4 initiative to place its own chipmaking firms in a more favorable market position. Indeed, if the semiconductor firms were to engage in explicit coordination of manufacturing and distribution, some firms would undoubtedly benefit more than others; it would be a challenge for the Chip 4 to reach an agreement that accommodates the competing interests of all governments and private firms. More importantly, the introduction of governmental intervention could greatly reduce the competitiveness of the industry, stalling the pace of innovation in the process. An overextension of governmental control could reshape the semiconductor industry for the worse, depriving it of its most valuable innovations. To alleviate these business concerns, the Chip 4 must assure firms that it will pursue geopolitical objectives while maintaining the integrity of the industry's operations and practices. A failure to do so would be highly costly not only to the industry but to the many other industries that rely on semiconductor development.

The Future of Chip 4

Overall, it remains uncertain what will become of the Chip 4. The two preliminary meetings indicate a nascent interest in the coalition among the Asian states, but the inner mechanisms of the alliance have yet to be fully articulated. Additionally, the scarcity of official statements regarding the alliance indicates that the dialogue surrounding it remains highly tentative; these developments suggest that the Chip 4's formation will not be realized in the coming years and may take much longer to complete. In truth, if the Chip 4 were to reshape the semiconductor industry as outlined above, it would be wise for the member states to approach the opportunity with careful deliberation. While a potent concept, the prospective alliance remains held back by the geopolitical and business concerns that greatly damage its appeal. The threat of escalating trade conflicts, coupled with the challenges of business coordination, raises questions about the effectiveness of the coalition. The American leadership must ensure that the benefits of the alliance clearly outweigh the risks before other prospective members take any substantial steps.

Even if the Chip 4 fails to form, however, the very discussion of its concept signifies a decisive shift in the state of the industry: geopolitical concerns have leaked into the semiconductor world, fundamentally transforming business practices across regions. The United States will continue to tighten its semiconductor exports to China and prompt many of its allies to engage in similar efforts. China will continue to look for avenues of innovation that circumvent its rival’s technology restrictions. The remaining players within the field will find it increasingly difficult to engage with one global power without displeasing another. As technological advancements raise the stakes of attaining semiconductor access, the industry will likely split even in the absence of the Chip 4. With or without it, the globalized era of chipmaking is nearing its end, ushering in a fragmented landscape in its stead.



AI: Will It Take Your Job? Understanding the Fear and the Reality

by Ahmed Banafa on 08-28-2024 at 10:00 am


In recent years, artificial intelligence (AI) has emerged as a transformative force across industries, driving both optimism and anxiety. As AI continues to evolve, its potential to automate tasks and improve efficiency raises an inevitable question: Will AI take our jobs? This fear is compounded by frequent reports of layoffs, both in technology and other sectors, leading many to worry that AI might be accelerating job losses. But is this fear justified? In this essay, we will explore the impact of AI on the job market, the factors contributing to recent layoffs, and whether people should genuinely be afraid of AI’s growing presence in the workplace.

The Historical Context of Technological Disruption

To understand the current anxiety surrounding AI, it’s essential to place it within the broader context of technological disruption throughout history. Technological advancements have always had profound effects on employment. The Industrial Revolution, for example, dramatically changed the landscape of work, replacing manual labor with machines and shifting economies from agrarian to industrial. This period saw widespread fear and resistance, with movements like the Luddites destroying machinery they believed threatened their livelihoods.

However, history also shows that technological advancements can lead to the creation of new industries and jobs. The rise of automobiles, for instance, displaced jobs related to horse-drawn carriages but created new opportunities in car manufacturing, road construction, and automotive services. Similarly, the advent of computers and the internet revolutionized nearly every industry, leading to the rise of entirely new job categories like software development, IT support, and digital marketing.

AI represents the latest chapter in this ongoing story of technological disruption. But unlike previous technologies, AI has the potential to automate not just manual labor but also cognitive tasks, leading to concerns that it could replace a broader range of jobs, including those traditionally considered safe from automation.

Understanding AI and Its Capabilities

Artificial intelligence is a broad field encompassing various technologies designed to mimic human intelligence. These technologies include machine learning, natural language processing, computer vision, and robotics. AI systems can analyze data, recognize patterns, make decisions, and even learn from experience, allowing them to perform tasks that once required human intelligence.

Key Areas of AI Impact:
  1. Manufacturing and Production: AI-powered robots and automation systems have been integral to modern manufacturing. These machines can work tirelessly, performing repetitive tasks with precision and speed. In industries like automotive manufacturing, robots handle everything from welding to assembly, significantly reducing the need for human labor on production lines.
  2. Customer Service: AI has made significant inroads into customer service through chatbots and virtual assistants. These tools can handle a wide range of customer inquiries, from answering frequently asked questions to processing orders, reducing the need for large customer service teams.
  3. Healthcare: AI is revolutionizing healthcare by assisting in diagnosis, treatment planning, and even surgery. AI algorithms can analyze medical images, identify patterns, and suggest potential diagnoses, often with greater accuracy than human doctors. In surgical settings, AI-powered robots assist surgeons, improving precision and outcomes.
  4. Finance: In the financial industry, AI is used for algorithmic trading, fraud detection, and risk assessment. AI systems can analyze vast amounts of financial data in real-time, making decisions faster than any human could, which has transformed trading floors and back offices.
  5. Creative Industries: Even creative fields are not immune to AI’s reach. AI tools can generate music, write articles, design logos, and even create visual art. While these tools are often used to assist human creators rather than replace them, they raise questions about the future of creative jobs.
  6. Software Engineers and Developers: AI is increasingly automating parts of software development, such as code generation and bug detection, which could reduce the need for entry-level developers. However, fully replacing software engineers is unlikely, as the field requires critical thinking, creativity, and a deep understanding of complex problems that AI cannot yet replicate. Instead, AI is expected to enhance the work of engineers, allowing them to focus on higher-level tasks while improving overall efficiency.

The Reality of AI-Induced Layoffs

The fear of AI taking jobs is not unfounded, particularly as reports of layoffs in both tech and non-tech sectors dominate the news. However, it’s important to recognize that layoffs are rarely caused by a single factor. Economic conditions, shifts in consumer behavior, and organizational restructuring all play significant roles.

Economic Factors: The global economy has faced significant challenges in recent years, including the COVID-19 pandemic, inflation, and supply chain disruptions. These factors have led companies to reassess their operations, often resulting in cost-cutting measures such as layoffs. In such cases, AI may be seen as a way to maintain productivity with a reduced workforce, but it is not the sole cause of job losses.

Technological Disruption: As companies strive to remain competitive in an increasingly digital world, they are investing in AI and automation. This investment can lead to workforce reductions, particularly in roles that can be easily automated. For example, in retail, self-checkout systems and automated inventory management have reduced the need for cashiers and stock clerks. In finance, AI-driven trading algorithms and robo-advisors are displacing traditional roles in investment banking and financial advising.

Shifts in Business Models: The pandemic accelerated the shift toward digital and remote work, prompting companies to reevaluate their business models. Some jobs, particularly those tied to physical office spaces or traditional retail, have become redundant as companies adapt to new ways of working. AI has played a role in enabling this transition by providing tools for remote collaboration, customer service, and logistics.

However, it’s crucial to note that while AI contributes to job displacement in some areas, it also creates new opportunities. The demand for AI specialists, data scientists, and machine learning engineers is growing rapidly. These roles require skills in AI development, data analysis, and cybersecurity, offering new career paths for those willing to adapt and reskill.

The Fear of AI: Is It Justified?

The fear of AI taking jobs is often rooted in the perception that AI is an unstoppable force that will render human workers obsolete. While AI is undoubtedly powerful and capable of performing tasks that were once thought to require human intelligence, this fear may be overstated for several reasons.

Human Creativity and Emotional Intelligence: AI excels at tasks that involve data processing, pattern recognition, and decision-making based on predefined criteria. However, it struggles with tasks that require creativity, empathy, and nuanced understanding—areas where humans excel. Jobs that involve human interaction, emotional intelligence, and creative problem-solving are less likely to be fully automated. For example, while AI can assist in diagnosing diseases, the human touch is still essential in patient care, where empathy and communication are crucial.

New Job Creation: Just as previous technological revolutions created new industries and jobs, AI is expected to do the same. The rise of AI is leading to the creation of entirely new job categories, such as AI ethics specialists, data privacy officers, and AI trainers. These roles involve overseeing AI systems, ensuring they operate ethically and legally, and training AI models to perform specific tasks. Additionally, AI is likely to create demand for jobs in industries that do not yet exist, much like the internet gave rise to social media management and e-commerce.

Collaborative Work: Rather than replacing human workers, AI is increasingly seen as a tool that can augment human capabilities. In many fields, AI is being used to assist humans rather than replace them. For instance, in healthcare, AI can help doctors analyze medical images and suggest potential diagnoses, but the final decision is still made by a human doctor. In creative industries, AI tools can generate ideas or draft content, but the human touch is needed to refine and personalize the output.

Regulatory and Ethical Considerations: Governments and organizations are becoming increasingly aware of the ethical implications of AI. There is growing recognition of the need for regulations to ensure that AI is used responsibly and that its impact on the workforce is managed. Some countries are already implementing policies to protect workers from the negative effects of automation, such as retraining programs and social safety nets. These measures can help mitigate the impact of AI on employment and ensure that workers are not left behind in the AI-driven economy.

Preparing for the AI-Driven Future

While the fear of AI taking jobs is understandable, it is not inevitable. The key to navigating the AI-driven future lies in preparation and adaptability. Workers, companies, and governments all have roles to play in ensuring that the transition to an AI-driven economy is as smooth and inclusive as possible.

Reskilling and Upskilling: One of the most effective ways for workers to prepare for the AI-driven future is to invest in reskilling and upskilling. As AI continues to evolve, the demand for skills in AI development, data science, and cybersecurity is growing. Workers who acquire these skills will be well-positioned to take advantage of new job opportunities in the AI-driven economy. Additionally, workers should focus on developing skills that are difficult for AI to replicate, such as creativity, critical thinking, and emotional intelligence.

Lifelong Learning: In an AI-driven world, the concept of lifelong learning becomes increasingly important. Workers must be willing to continuously learn and adapt to new technologies and processes. This may involve taking online courses, attending workshops, or participating in on-the-job training programs. Companies can support lifelong learning by offering training and development opportunities to their employees, helping them stay competitive in a rapidly changing job market.

Adapting to Change: Workers should stay informed about technological advancements and be willing to adapt to new tools and processes that can enhance their work. For example, in industries like marketing, AI-driven tools are being used to analyze customer data, optimize ad campaigns, and personalize content. By embracing these tools, marketers can improve their effectiveness and remain valuable to their employers.

Focusing on Uniquely Human Skills: As AI continues to automate routine and repetitive tasks, workers should focus on developing skills that are uniquely human. These include creativity, emotional intelligence, problem-solving, and communication. Jobs that require these skills are less likely to be automated, as AI struggles to replicate the nuances of human interaction and creativity.

Government and Corporate Responsibility: Governments and companies also have a role to play in preparing for the AI-driven future. Policymakers should implement measures to protect workers from the negative effects of automation, such as retraining programs, social safety nets, and policies that encourage job creation in emerging industries. Companies, on the other hand, should invest in their employees by offering training and development opportunities and creating a culture of continuous learning.

Embracing the Future

The rise of AI is undeniably transforming the job market, leading to both challenges and opportunities. While it is natural to fear the unknown, the key to thriving in an AI-driven world lies in preparation, adaptability, and a willingness to embrace change. Rather than fearing AI, workers should focus on developing skills that are in demand, staying informed about technological advancements, and being open to new opportunities.

AI is not an unstoppable force that will render all human workers obsolete. Instead, it is a tool that, when used responsibly, can enhance human capabilities and create new opportunities. By focusing on uniquely human skills, investing in lifelong learning, and staying adaptable, workers can not only survive but thrive in the AI-driven future. The fear of AI may be understandable, but with the right approach, it can also be an opportunity for growth, innovation, and a brighter future for all.

Ahmed Banafa’s books

Covering: AI, IoT, Blockchain and Quantum Computing



Bug Hunting in NoCs. Innovation in Verification

by Bernard Murphy on 08-28-2024 at 6:00 am


Despite NoCs being finely tuned in legacy subsystems, when subsystems are connected in larger designs or even across multi-die structures, differing traffic policies and system-level delays between NoCs can introduce new opportunities for deadlocks, livelocks and other hazards. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is NoCFuzzer: Automating NoC Verification in UVM. 2024 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. The authors are from Peking University, Hong Kong University and Alibaba.

Functional bugs should be relatively uncommon in production-grade NoCs, but performance bugs are highly dependent on expected traffic and configuration choices. By their nature NoCs will almost unavoidably include cycles; the mesh and toroidal topologies common in many-core servers and AI accelerators are obvious examples. Traffic in such cases may be subject to deadlock or livelock problems under sufficient load. Equally, weaknesses in scheduling algorithms can lead to resource starvation. Such hazards need not block traffic in a formal sense (never clearing) to undermine product success. If they take sufficiently long to clear, they will still fail to meet the expected service level agreements (SLAs) for the system.

There are many traffic routing and scheduling solutions to mitigate such problems, and each works fine within one NoC designed by one system integration team. But what happens when you must combine multiple legacy/3rd-party subsystems, each with a NoC designed according to its own policy preferences and connected through a top-level NoC with its own policies? This issue takes on even more urgency in chiplet-based designs, which add interposer NoCs to connect between chiplets. Verification solutions become essential to tease out potential bugs between these interconnected networks.

Paul’s view

A modern server CPU can have 100+ cores all connected through a complex coherent mesh-based network-on-a-chip (NOC). Verifying this NOC for correctness and performance is a very hard problem and a hot topic with many of our top customers.

This month’s paper takes a concept called “fuzzing” from the software verification world and applies it to UVM-based verification of a 3×3 OpenPiton NOC. The results are impressive: line and branch coverage hit 95% in 120hrs with the UVM bench vs. 100% in 2.5hrs with fuzzing; functional covergroups reach 89-99% in 120hrs with the UVM bench vs. 100% across all covergroups in 11hrs with fuzzing. Also, the authors try injecting a corner-case starvation bug into the design. The baseline UVM bench was not able to hit the bug after 100M packets, whereas fuzzing hit it after only 2M packets.

To achieve these results the authors use a fuzzing tool called AFL – check out its Wikipedia page. A key innovation in the paper is the way the UVM bench is connected to AFL: the authors invent a simple 4-byte XYLF format to represent a packet on the NOC. XY is the destination location, L the length, F a “free” flag. The UVM bench reads a binary file with a sequence of 4-byte chunks and then injects each packet in the sequence to each node in the NOC round-robin style: first packet from cpu 00, then cpu 01, 02, 10, 11, and so on. If F is below some static threshold T then the UVM bench has that cpu put nothing into the NOC for the equivalent length of that packet. The authors set T for a 20% chance of a “free” packet.
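As a rough illustration, the XYLF decoding can be sketched in Python. The field order, the mesh wrap-around, the threshold value, and the function names below are all assumptions for illustration, not the paper's actual implementation:

```python
FREE_THRESHOLD = 51  # ~20% of the 0-255 byte range, mirroring the ~20% "free" chance

def decode_packets(blob: bytes):
    """Decode a binary file of 4-byte XYLF chunks into injection commands.

    Each chunk is (x, y, length, free_flag): XY names the destination node,
    L the packet length; if F falls below the threshold the source cpu
    stays idle for that length instead of injecting a packet.
    """
    cmds = []
    for i in range(0, len(blob) - len(blob) % 4, 4):
        x, y, length, free = blob[i:i + 4]
        if free < FREE_THRESHOLD:
            cmds.append(("idle", length))
        else:
            cmds.append(("send", (x % 3, y % 3), length))  # wrap into a 3x3 mesh
    return cmds

def assign_sources(cmds, mesh=3):
    """Hand commands to source cpus round-robin: cpu 00, 01, 02, 10, 11, ..."""
    nodes = [(r, c) for r in range(mesh) for c in range(mesh)]
    return [(nodes[i % len(nodes)], cmd) for i, cmd in enumerate(cmds)]
```

The appeal of such a flat byte format is that any 4-byte-aligned file decodes to a legal stimulus sequence, so AFL's blind byte-level mutations always produce valid test input.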

AFL is given an initial seed set of binary files taken from a non-fuzzed UVM bench run, applies them to the UVM bench, and is provided back with coverage data from the simulator – each line, branch, covergroup is just considered a coverpoint. AFL then starts applying mutations, randomly modifying bytes, splicing and re-stitching binary files, etc. A genetic algorithm is used to guide the mutation towards increasing coverage. It’s a wonderfully abstract, simple, and elegant utility that is completely blind to the goals for which it is aiming to improve coverage.
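The mutate-and-select loop can be sketched as follows. The `run_and_cover` callback is a stand-in for a full simulation run returning the set of coverpoints hit (in the paper, the UVM bench plus simulator coverage), and AFL's real mutation stack and genetic scheduling are far richer than this toy version:

```python
import random

def mutate(data: bytes) -> bytes:
    """One AFL-style mutation: overwrite a random byte, or splice/re-stitch regions."""
    buf = bytearray(data)
    if buf and random.random() < 0.5:
        buf[random.randrange(len(buf))] = random.randrange(256)
    elif len(buf) >= 8:
        a, b = sorted(random.sample(range(len(buf)), 2))
        buf = buf[:a] + buf[b:] + buf[a:b]  # move a slice to the end
    return bytes(buf)

def fuzz(seeds, run_and_cover, budget=1000):
    """Coverage-guided loop: keep any mutant that reaches new coverpoints."""
    corpus = list(seeds)
    seen = set()
    for s in corpus:
        seen |= run_and_cover(s)
    for _ in range(budget):
        candidate = mutate(random.choice(corpus))
        hit = run_and_cover(candidate)
        if hit - seen:               # new coverage -> keep as a future parent
            corpus.append(candidate)
            seen |= hit
    return corpus, seen
```

Even this sketch shows the essential property: inputs survive only when they reach coverpoints nothing in the corpus has reached, so the corpus drifts toward coverage-increasing behavior with no model of the design at all.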

Great paper. Lots of potential to take this further commercially!

Raúl’s view

Fuzzing is a technique for automated software testing where a program is fed malformed or partially malformed data. These test inputs are usually variations on valid samples, modified either by mutation or according to a defined grammar. This month’s paper uses AFL (named after a breed of rabbit), which employs mutation; its description offers a good understanding of fuzzing. Note that fuzzing differs from the random or constrained-random verification commonly applied in hardware.

The authors apply fuzzing techniques to hardware verification, specifically targeting Network-on-Chip (NoC) systems. The paper details the development of a UVM-based environment connected to the AFL fuzzer within a standard industrial verification process. They utilized Verilog, the Synopsys VCS simulator, and conventional coverage metrics, predominantly code coverage. To interface the AFL fuzzer to the UVM test environment, the fuzzer’s test output must be translated into a sequence of inputs for the NoC. Every NoC packet is represented as a 40-bit string containing the destination address, length, port (each node in the NoC has several ports) and a control flag that determines whether the packet is executed or the port remains idle. These strings are mutated by AFL, and a simple grammar converts them into inputs for the NoC. This is one of the main contributions of the paper: the fuzzing framework is adaptable to any NoC topology.

NoCs are the communication fabric of choice for digital systems containing hundreds of nodes and are hard to verify. The paper presents a case study of a compact 3×3 mesh NoC element from OpenPiton. The results are impressive: fuzz testing achieved 100% line coverage in 2.6 hours, while constrained random verification (CRV) only reached 97.3% in 120 hours. For branch coverage, fuzz testing achieved full coverage in 2.4 hours while CRV reached only 95.2% in 120 hours.

The paper is well written and offers impressive detail, with a practical focus that underscores its relevance in an industrial context. While occasionally somewhat verbose, it is certainly an excellent read.