
Turnaround in Semiconductor Market

by Bill Jewell on 08-14-2023 at 1:30 pm

Semiconductor Market Change Q3 2023

The global semiconductor market grew 4.2% in 2Q 2023 versus 1Q 2023, according to WSTS. The 2Q 2023 growth was the first positive quarter-to-quarter change since 4Q 2021, a year and a half ago. Versus a year ago, the market declined 17.3%, an improvement from a 21.3% year-to-year decline in 1Q 2023. Semiconductor market year-to-year change peaked at 30.1% in 2Q 2021 in the recovery from the 2020 pandemic slowdown.

Most major semiconductor companies experienced revenue growth in 2Q 2023 versus 1Q 2023. Of the 15 largest companies, 13 showed revenue gains. We at Semiconductor Intelligence only include companies which sell to the end user. Thus, we do not include foundries such as TSMC or companies which only use their semiconductor products internally such as Apple. Nvidia has not yet reported results for the latest quarter, but its guidance was for a 53% jump from the prior quarter. If this guidance holds, Nvidia will become the third largest semiconductor company in 2Q 2023, up from fifth in the prior quarter. Nvidia cited a steep increase in demand for AI processors as the driver for the strong growth. SK Hynix reported 2Q 2023 growth of 39%, bouncing back from three previous quarter-to-quarter declines of over 25%. The only companies with revenue declines were Qualcomm (down 10%) and Infineon Technologies (down 0.7%). The weighted average growth from 1Q 2023 to 2Q 2023 for these 15 companies was 8%. Excluding Nvidia, the growth was 3%.

Top Semiconductor Companies’ Revenue

Change versus prior quarter in local currency

     Company            2Q23 Rev   Reported   Guidance   Comments on 3Q23
                        (US$B)     2Q23       3Q23
  1  Intel              12.9        11%        3.5%      inventory issues
  2  Samsung SC         11.2        7.3%       n/a       demand recovery in 2H
  3  Nvidia             11.0        53%        n/a       2Q23 is guidance
  4  Broadcom            8.85       1.3%       n/a       2Q23 is guidance
  5  Qualcomm IC         7.17      -10%        0.4%      increase in handsets
  6  SK Hynix            5.55       39%        n/a       increased demand in 2H
  7  AMD                 5.36       0.1%       6.4%      client & data center up
  8  TI                  4.53       3.5%       0.4%      auto up, others weak
  9  Infineon            4.46      -0.7%      -2.2%      auto up, power down
 10  STMicro             4.33       1.9%       1.2%      auto up, digital down
 11  Micron              3.75       1.6%       3.9%      supply/demand improving
 12  NXP                 3.30       5.7%       3.1%      auto & industrial up
 13  Analog Devices      3.26       0.4%      -5.0%      auto & industrial down
 14  MediaTek            3.20       1.7%       4.8%      inventories down
 15  Renesas             2.68       2.5%       0.4%      inventory balanced
     Total of above                 8%
     Memory Cos. (US$)              9%         n/a       Samsung-Hynix-Micron
     Non-Memory Cos.                7%         2%

Most companies are guiding for continued growth in 3Q 2023 from 2Q 2023. Of the eleven companies providing guidance, nine call for revenue increases ranging from 0.4% (Qualcomm, Texas Instruments and Renesas Electronics) to 6.4% (AMD). Infineon expects a 2.2% decline and Analog Devices guided for a 5% decline. The memory companies (Samsung, SK Hynix, and Micron Technology) all stated they see improving demand in the second half of 2023. Intel cited continuing inventory issues while MediaTek and Renesas reported lower or balanced inventories. Automotive will continue to be a driver in 3Q 2023, as cited by TI, Infineon, STMicroelectronics, and NXP Semiconductors. The weighted average guidance of the eleven companies is 2% growth in 3Q 2023 from 2Q 2023. Companies providing a range of revenue guidance set the high end 3 to 7 percentage points above their midpoint guidance.
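The averages quoted here are revenue-weighted. A minimal sketch of that calculation in Python, using the 2Q23 revenues and 3Q23 guidance figures from the table for the eleven companies that provided guidance:

```python
# Revenue-weighted average of quarter-over-quarter growth guidance.
# Weights are 2Q23 reported revenue (US$B); growth figures are 3Q23
# guidance vs. 2Q23, both taken from the table above.
guidance = {
    "Intel":          (12.9,  3.5),
    "Qualcomm":       (7.17,  0.4),
    "AMD":            (5.36,  6.4),
    "TI":             (4.53,  0.4),
    "Infineon":       (4.46, -2.2),
    "STMicro":        (4.33,  1.2),
    "Micron":         (3.75,  3.9),
    "NXP":            (3.30,  3.1),
    "Analog Devices": (3.26, -5.0),
    "MediaTek":       (3.20,  4.8),
    "Renesas":        (2.68,  0.4),
}

def weighted_avg_growth(data):
    """Sum of (revenue x growth) divided by total revenue."""
    total_rev = sum(rev for rev, _ in data.values())
    return sum(rev * g for rev, g in data.values()) / total_rev

print(round(weighted_avg_growth(guidance), 1))  # prints 1.9
```

This yields roughly 1.9%, consistent with the ~2% weighted average cited above.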

Even with positive quarter-to-quarter growth in 2Q 2023, 3Q 2023, and possibly 4Q 2023, the semiconductor market will show a substantial decline for the year 2023. Estimates of the size of the decline range from 20% from Future Horizons to 10% from Tech Insights. Future Horizons’ Malcolm Penn stated he will raise his 2023 projection based on the 2Q 2023 WSTS data but has not yet cited a specific number. Our projection from Semiconductor Intelligence (SC-IQ) is a 13% drop in 2023. Looking at 2024, most projections are similar: Tech Insights at 10% growth, SC-IQ at 11% and WSTS at 11.8%. Gartner is the most bullish at 18.5%. The primary difference between the 2024 forecasts is the assumptions on the memory market. WSTS and Gartner are close on 2024 growth for non-memory products at 5.0% and 7.7% respectively. However, Gartner projects 70% growth for memory while WSTS forecasts 43%.

Our Semiconductor Intelligence July newsletter stated we likely reached the low point in electronics production in the second quarter of 2023. The semiconductor market finally showed quarter-to-quarter growth in 2Q 2023. Major semiconductor companies are projecting continued revenue growth into 3Q 2023. The semiconductor market has finally turned and is headed toward probable double-digit growth in 2024.

Also Read:

Has Electronics Bottomed?

Semiconductor CapEx down in 2023

Steep Decline in 1Q 2023


Next-Gen AI Engine for Intelligent Vision Applications

by Kalar Rajendiran on 08-14-2023 at 10:00 am

Synopsys ARC MetaWare NN SDK

Artificial Intelligence (AI) has witnessed explosive growth in applications across various industries, ranging from autonomous vehicles and natural language processing to computer vision and robotics. The AI embedded semiconductor market is projected to reach $800 billion by 2030, up from just $48 billion in 2020 [Source: May 2022 IBS Report]. Computer vision driven applications account for a significant part of this incredible growth projection. Real-time AI examples in this space include drones, automotive applications, mobile cameras and digital still cameras.

The advent of AlexNet more than a decade ago was a major advancement in the realm of object detection compared to earlier methods. Since then, convolutional neural network (CNN) models have been the dominant method of implementing object detection on digital signal processors (DSPs). While the CNN model has evolved to deliver a 90% accuracy level, it requires a lot of memory, and more memory means higher power consumption. In addition, advances in memory performance have not kept pace with advances in compute performance, impacting efficient data movement.

Over the last few years, transformer models, originally developed for natural language processing, have been adapted for object detection and also achieve 90% accuracy. But they demand more compute capacity than CNNs. A combination of CNNs and transformers is therefore a better solution for leveraging the best of both worlds, which in turn is pushing demand for increasingly complex AI models and for real-time processing on specialized hardware accelerators.

As network models evolve, the number of cameras per application, image size and resolution are also increasing dramatically. While accuracy is a critical requirement, performance, power, area, flexibility and implementation cost are key decision factors too. These factors drive the choice of AI accelerator architecture, and Neural Processing Units (NPUs) and DSPs are emerging as key components, each offering unique strengths to the world of AI.

Use DSP or NPU or Both for Implementing AI Accelerators

DSPs do provide more flexibility than NPUs. Using a vector DSP, AI can be implemented in software. DSPs can perform traditional signal processing as well as lower performance AI processing with no additional area penalty. And a vector DSP can be used to support functions that cannot be processed on an NPU.

On the other hand, NPUs can be implemented to accelerate all common AI network models such as CNN, RNN, transformers, and recommenders. For multiply-accumulate (MAC) dominated AI workloads, NPUs are more efficient in terms of power and area. In other words, for mid to high-performance AI needs, an NPU-approach is better than a DSP approach.

The bottom line is that except for low-end AI requirements, an NPU approach is the way to go. But as AI network models are rapidly evolving, for future proofing one’s application, a DSP-NPU combination approach is the prudent solution.

AI Accelerator Solutions from Synopsys

Synopsys’ broad processor portfolio includes both its ARC® VPX vector DSP family of cores as well as its ARC NPX neural network family of cores. The development platform comes with the ARC MetaWare MX development tools. The VPX cores are programmable using C or C++ while the NPX cores’ hardware is automatically generated by feeding the customer’s trained neural network into Synopsys development tools. The platform supports widely used frameworks for customers to train their neural network. The MetaWare NN SDK is the compiler for the trained NN. The development tools also offer the ability to perform virtual simulation for testing purposes.

ARC NPX6 is Synopsys’ sixth generation general purpose NPU that will support any CNN, RNN or transformer. Customers who bring their own AI engine can easily pair it with a VPX core. Customers could also design their own neural network using Synopsys’ ASIP Designer tool.

As applications’ demand for TOPS performance grows, the challenge of memory bandwidth grows with it. Some hardware and software features need to be added to minimize this bandwidth challenge. To address this scaling requirement, Synopsys uses L2 memory to help minimize data traffic over an external bus.

An ARC NPX6-based solution can be implemented to deliver up to 3,500 TOPS as needed, by scaling all the way to a 24-core NPU with 96K MACs and instantiating up to eight NPUs.
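As a rough sanity check on that headline figure, peak throughput for a MAC array is commonly estimated as 2 operations per MAC per cycle times clock frequency. The clock frequency below is purely an assumption, and vendor headline numbers may additionally count sparsity speedups:

```python
# Back-of-envelope peak-TOPS estimate for the maximal NPX6 configuration
# described above (24-core NPU with 96K MACs, up to eight NPUs).
# The clock frequency is an assumption, not a Synopsys specification.
macs_per_npu = 96_000   # 24-core NPU configuration
npus = 8                # maximum instantiated NPUs
ops_per_mac = 2         # a multiply-accumulate counts as 2 ops

def peak_tops(freq_ghz):
    """Dense peak TOPS: total MACs x 2 ops x clock (GHz) / 1000."""
    return macs_per_npu * npus * ops_per_mac * freq_ghz / 1000.0

print(peak_tops(1.3))  # roughly 2,000 dense TOPS at an assumed 1.3 GHz
```

At an assumed 1.3 GHz this lands near 2,000 dense TOPS, so reaching 3,500 TOPS would rely on a higher clock and/or sparsity acceleration, both of which are assumptions here.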

Summary

Combining vector DSP technology and neural processing technology creates a synergistic solution that is future-proof and can revolutionize AI acceleration. Synopsys offers a broad portfolio of IP in addition to the VPX and NPX families of IP cores, along with tools such as Platform Architect that help explore, analyze and visualize the data of AI applications. High-quality IP and a comprehensive, easy-to-use tool platform are needed to achieve fast time to market.

For more details, visit the product page.

Also Read:

VC Formal Enabled QED Proofs on a RISC-V Core

WEBINAR: Leap Ahead of the Competition with AI-Driven EDA Technology

Computational Imaging Craves System-Level Design and Simulation Tools to Leverage AI in Embedded Vision


Morris Chang’s Journey to Taiwan and TSMC

by Daniel Nenni on 08-14-2023 at 6:00 am

Morris Chang 2023

High volume manufacturing is a 24/7 business with 12-hour shifts. You don’t always get to pick the shifts you want and you must cover for others when they can’t. It’s a challenging career and not for the faint of heart, like myself.

In the 1980s and 1990s I spent time in Japan working with some of the top Japanese semiconductor manufacturers. It was an amazing experience but I walked away wondering how the US would be able to compete. The Japanese people I met worked very hard to honor their families and country. Back in the United States we worked 9-5 for a paycheck. Morris Chang knew this from his experience at TI and that is why he headed to Taiwan.

As I have written, Taiwan’s early start in semiconductors did not begin with Morris Chang, but he does figure prominently in Taiwan’s rise to dominance. Here is a brief biography of Morris (ChatGPT 4.0):

Morris Chang (张忠谋) is widely recognized as the father of Taiwan’s semiconductor industry. Born in 1931 in Ningbo, China, Chang moved to the U.S. in the late 1940s, where he pursued higher education in the field of physics and engineering.

Here are a few key points about Morris Chang:

  1. Education: Morris Chang holds degrees from several esteemed institutions. He received a B.S. and M.S. in Mechanical Engineering from the Massachusetts Institute of Technology (MIT) and a Ph.D. in Electrical Engineering from Stanford University.
  2. Texas Instruments: Before his endeavors in Taiwan, Chang worked at Texas Instruments (TI) in the United States for 25 years. During his tenure there, he held various senior positions including Group Vice President of the Worldwide Semiconductor Group.
  3. Taiwan Semiconductor Manufacturing Company (TSMC): In 1987, Chang founded the Taiwan Semiconductor Manufacturing Company (TSMC). TSMC is the world’s first dedicated semiconductor foundry, meaning it manufactures chips for other companies without designing its own products. This business model transformed the global semiconductor industry, enabling a myriad of fabless semiconductor companies to focus on chip design without having to invest in expensive manufacturing facilities.
  4. Economic Impact: Under Chang’s leadership, TSMC became a cornerstone of Taiwan’s IT industry, propelling the country into a major role in the global semiconductor market. Taiwan’s importance in chip manufacturing can’t be overstated, with TSMC at the forefront of cutting-edge semiconductor technology and production.
  5. Retirement: Chang retired from TSMC in 2018, but his influence in the semiconductor world and his legacy as a pioneer in the foundry business model will persist for years to come.
  6. Recognition: Chang has received numerous awards and honors over the years in recognition of his contributions to the semiconductor industry and his visionary leadership.

In summary, Morris Chang is a seminal figure in the semiconductor industry, especially in the foundry business model. His leadership and strategic vision not only transformed the industry but also elevated Taiwan’s standing in the global tech ecosystem.

From a semiconductor insider’s point of view, there is a lot more to this story. Morris started his education at Harvard but MIT turned out to be more to his liking both financially and technically. For engineers, MIT was the place to be and Morris was an engineer at heart. Morris chose mechanical engineering but he quickly became obsessed with the transistor during his first job right out of college.

After graduating from MIT (1955) Morris went to work for Sylvania, a company with a long history in lighting and electronics. After three years Morris wanted to go where the transistor innovation was, and that was Texas Instruments. His dream was to be the head of the central research labs at TI, but Morris did not have a PhD, or even a degree in electrical engineering. In fact, he twice failed a qualifying exam for a doctoral degree at MIT.

Morris first worked in the germanium transistor business, which would soon be surpassed by the silicon transistor. TI was IBM’s major supplier (20% of TI’s revenue) and Morris was in charge of the IBM program. Getting yields ramped up was the first big challenge for Morris. He burned the midnight oil, cracked the yield code and became a hero. Morris was promoted to head of the germanium transistor program and in 1963 he was sent to Stanford to get his PhD for further advancement. He finished the PhD program in record time (2.5 years) while still spending time at TI.

When Morris returned to TI full time, germanium was no longer leading-edge technology, so Morris took a leadership position with the TI IC group. Morris’s influence grew and in 1973 he became head of the semiconductor group and again became a hero. TI was the king of TTL (Transistor-Transistor Logic) with a 60% market share and more than $1B in revenue, but TTL was soon replaced by MOS and TI lost the MOS race.

SemiWiki: Texas Instruments and the TTL Wars

Morris’s downfall at TI was MOS memory and microprocessors. Other companies caught up with TI (Mostek) and in some cases surpassed them. Microprocessors became the next big thing and TI had the first microprocessor patent, not Intel or Motorola. When IBM chose the Intel 8088 microprocessor for their first personal computer over the TI TMS9900 and the Motorola 6800 (amongst others), Morris took this as a personal defeat.

In 1977 Morris’s departure from TI officially started when he was removed as Group VP of Semiconductors and became Group VP of Consumer Products, a somewhat troubled business at the time (calculators and toys). Morris was then moved to head of corporate quality and his fall from grace was complete. Morris wasn’t fired from TI but his departure was not unexpected.

Morris then spent a difficult two years (1984-86) at General Instrument under CEO Frank Hickey before calling it quits and heading to Taiwan. I was a field engineer for GI during the Hickey era (1979-82) and it was absolutely a tumultuous time for the company.

Bottom Line: The work ethic and experience Morris developed through his career with innovative electronics and semiconductor companies was the perfect foundation for the customer-centric pure-play foundry model that is TSMC. It should be noted that TI is today a semiconductor powerhouse, one of the longest-standing semiconductor companies in the world. TI is also a long-standing customer of TSMC.

To be continued…. How Philips saved TSMC!

Also Read:

How Taiwan Saved the Semiconductor Industry


Podcast EP176: Implementing End-to-End Security with Axiado’s New Breed of Security Processor

by Daniel Nenni on 08-11-2023 at 10:00 am

Dan is joined by Tareq Bustami, senior vice president of marketing & sales, Axiado. Tareq has more than 20 years of experience in the semiconductor and networking industries. Before joining Axiado, he led NXP’s embedded processors for the wired and wireless markets, and was in charge of growing multi-core processor solutions for enterprise, data center infrastructure and general embedded and industrial markets.

Tareq describes Axiado’s unique and comprehensive approach to security. He provides details about its trusted control/compute unit (TCU) AI-driven hardware security platform. A broad look at the challenges of implementing end-to-end security is presented, along with a discussion of how Axiado’s technology addresses these challenges.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Recovery has Started and it’s off to a Great Start!

by Malcolm Penn on 08-11-2023 at 6:00 am

Semiconductor Recovery 2023

August’s WSTS Blue Book showed Q2-2023 sales rebounding strongly, up 4.2 percent vs. Q1, heralding the end of the downturn and welcome news for the beleaguered chip industry.

The really good news, however, was that the downturn bottomed one quarter earlier than previously anticipated. This pull-forward added only a modest US$11 billion to Q2’s US$244 billion sales, but this was enough to swing Q2’s growth from minus 5.0 percent to plus 4.2 percent.

A small change in the numbers at the start of the year makes a huge difference to the quarterly growth rates and hence the final year-on-year number.
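That sensitivity can be sketched with purely hypothetical quarterly numbers (not the WSTS figures): shifting a few billion dollars of sales from one quarter into the next flips the sign of the quarter-on-quarter growth rate.

```python
# Illustration of how a revision that moves sales between adjacent
# quarters swings the quarter-on-quarter growth rate.
# All dollar figures are hypothetical.
def qoq_growth_pct(prev_q, this_q):
    """Quarter-on-quarter growth in percent."""
    return 100.0 * (this_q / prev_q - 1.0)

q1, q2 = 120.0, 117.0                  # US$B, as originally reported
q1_rev, q2_rev = q1 - 5.0, q2 + 5.0    # revision pulls $5B forward into Q2

print(round(qoq_growth_pct(q1, q2), 1))          # prints -2.5 (a decline)
print(round(qoq_growth_pct(q1_rev, q2_rev), 1))  # prints 6.1 (a rebound)
```

The same $5B, merely relocated between quarters, turns a 2.5 percent decline into 6.1 percent growth, which is why small early-year revisions ripple through the year-on-year number.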

Market Detail

The market turnaround was driven by a dramatic change in the Asia/Pac region, with 5.4 percent month-on-month growth, followed by the US (plus 3.5 percent), Japan (plus 2.1 percent) and Europe (plus 1.8 percent).

On an annualised basis, Q2-2023 was down 17.3 percent vs. Q2-2022, with Asia/Pac down 22.6 percent, the US down 17.9 percent and Japan down 3.5 percent; Europe, the only region showing year-on-year growth, was up 7.6 percent.

The near-term market outlook is starting to look a lot stronger, driven by the positive impact of the inventory burn, stronger than expected resilience in the global economy, especially in the USA, and a seemingly robust demand boost from the emerging AI market.

Forecast Summary

Looking ahead to the second half of the year, the overall industry consensus has now (mostly) acknowledged a likely double-digit decline for 2023, versus the ‘positive growth’ positions predicted this time last year.

Source: SEMI/Future Horizons

Future Horizons stood alone in the crowd when we first published our 2023 double-digit decline forecast 15 months ago in May 2022, and likewise when we stood by that number at our January 2023 Industry Update Webinar, when all others, bar one, were predicting a very mild downturn followed by a sharp V-shaped rebound in 2024.

The stronger than expected second-quarter results will now push our 2023 forecast beyond the bull end of our January 2023 forecast scenario, but our longer-term concerns, regarding the still-uncertain economic outlook and excess CapEx spending, show no signs yet of abating.

Over-capacity is the industry’s number one enemy, depressing ASPs and condemning the industry to low dollar value growth. An economic slowdown will nip any recovery in the bud.

Market Outlook

With two of the four industry fundamentals, unit sales and ASPs, now slowly but surely rebalancing, the havoc and inevitable consequences of the preceding supply-side shortage-induced market boom are now starting to recede. The stage is now set for a return to industry growth, but from a much-reduced base.

The size and shape of the recovery will depend on the potentially derailing impact of capacity (over-investment) and demand (the economy), the former of which is not looking healthy, and the latter still steeped in mixed signals and uncertainty.

That said, 2023 will undoubtedly transpire to have been a line in the sand; and 2024 will equally clearly be better. The recovery has got off to a great start, but its pace and form have yet to be determined.

We will be covering all this, together with an update to our outlook for 2023-24, at our forthcoming Industry Update Webinar on Tuesday 12 September at 3pm UK BST (GMT+1). Register now at:

https://us02web.zoom.us/webinar/register/7416911384194/WN_akISM9QxS8uNZS_oihzqFQ

Also Read:

The Billion Dollar Question – Single or Double Digit Semiconductor Decline

The Semiconductor Market Downturn Has Started

Semiconductor Crash Update


Japan’s Foundry Morgana Part II

by Karl Breidenbach on 08-10-2023 at 10:00 am

Japan's Foundry Morgana Part II

Japan’s Foundry Morgana: A Journey from Mirage to Reality?

Three years ago, I wrote an article about Japan’s semiconductor industry under the title “Japan’s Foundry Morgana.” Back in September 2020, I analyzed the decline of Japan’s once world-leading semiconductor sector and the ambitious plans for inviting TSMC to build an advanced process fab. Who could have imagined that by 2023, fueled by the experiences of the semiconductor crisis, the plans to revive the Japanese foundry footprint would have advanced with such pace and determination. It’s worth looking again at what has happened so far and whether Japan could be a blueprint for other regions on the quest for semiconductor supply resilience.

A Look Back: The State of the Japanese Semiconductor Industry

In the late 1980s, Japan emerged as a global leader in semiconductors, holding strong alongside the US. Home to over 30 large-scale industry players like Renesas, Hitachi, Denso, Fujitsu, and Mitsubishi Electronics, Japan boasted a robust semiconductor ecosystem. By 1990 the Japanese IDMs NEC, Toshiba and Hitachi had taken the top three positions in the worldwide semiconductor sales ranking, just ahead of Intel and Motorola.

Top 10 Worldwide Semiconductor Sales Leaders from 1985 to 2021, Source: IC Insights

However, since the year 2000, Japan’s share of international IC exports declined sharply, dropping from 14% to less than 5% by 2020. Despite losing ground, some Japanese IDMs continued to excel in specialized segments like power electronics and optical CMOS sensors. 

Japanese silicon foundries primarily arose as carve-outs from leading IDMs, but they struggled to keep up with rivals in Taiwan, China, and the US. Japan held only 2% of global foundry capacity as of 2020. Factors such as late market entry into the foundry business, a lack of cost-containment strategies, and a narrow market segment focus led to the decline of Japan’s foundry ecosystem.

The completion of Nuvoton’s acquisition of Panasonic’s semiconductor unit in September 2020 marked a symbolic end to the Japan-owned foundry landscape, leaving a limited number of small-scale foundries.

The New Landscape: Attracting TSMC and Rebuilding the Foundry Sector

Fast forward to today, and Japan’s semiconductor industry is writing a new chapter. The once ambitious plan to invite TSMC has materialized, resulting in TSMC’s agreement to build two fabs in Japan. Backed by extensive government funding, these fabs symbolize a fresh start and an alignment with the global semiconductor landscape.

Japan’s foundry landscape as of 2023, Source: Own research

The Japanese government’s engagement in this venture is unprecedented, pledging to shoulder a significant portion of the construction costs. Leaders of the ruling party’s lawmaker coalition on chips recognize this as a national strategy, part of Japan’s efforts to revive its domestic chipmaking industry, a sector that is viewed as crucial for growth and economic security.

The joint effort between Hitachi, Renesas, Toshiba, and the Japanese Ministry of Economy signifies a strategic shift. It’s not just about reviving the Japanese-owned foundry sector; it’s about embracing international collaboration, recognizing the importance of supply security, and focusing on processes that align with Japan’s core strengths.

The Rise of Rapidus: A Bold Leap Forward

Alongside the collaboration with TSMC, Japan’s ambitious project Rapidus is a critical piece of the puzzle. Aiming for 2nm production in 2027, Rapidus represents a daring and costly venture. Supported by a consortium that includes IBM and backed by the Japanese government and large conglomerates, Rapidus seeks to reshape Japan’s semiconductor landscape by leapfrogging several generations of nodes.

The endeavor is both extremely challenging and tremendously expensive. Modern fabrication technologies are expensive to develop in general. Rapidus itself projects that it will need approximately $35 billion to initiate pilot 2nm chip production in 2025, and then bring that to high-volume manufacturing in 2027.

Despite the high stakes, the vision is clear and backed by strong commitment. Rapidus aims to serve a limited but significant client base, including tech giants like Apple and Google, focusing on quality and innovation. The focus on a limited customer set is a strategic move to secure enough demand and revenue to recover the massive investment, rather than emulating TSMC’s extensive client base.

Rapidus’ success holds much significance for Japan’s advanced semiconductor supply chain: it is more than a money-making venture; it is a catalyst for revitalizing the Japanese industry. The Japanese government views it as a critical step towards creating more opportunities for local chip designers, even if immediate success is not guaranteed.

Conclusion: From Mirage to Reality, A Blueprint for Others?

The reference to “Foundry Morgana” or fata morgana in my initial article resonated with the elusive, almost mythical nature of Japan’s semiconductor revitalization efforts. However, today’s landscape shows a transformation from illusion to reality.

With TSMC’s strategic presence and the pursuit of Rapidus, Japan demonstrates a new level of commitment. It is embracing both its past strength and future potential, rebuilding its foundry landscape with international collaboration, and aligning with global advancements.

Japan’s Foundry Morgana is no longer just a distant reflection. It’s a (potential) reality ;-), emerging on the horizon as a renewal of semiconductors Made in Japan.

The dynamics between Rapidus and TSMC and the larger global context add more intrigue to the resurrection of Japan’s semiconductor industry. The potential impact of geopolitics, market cap, governmental subsidization, and the unknowns regarding yields and timetables further adds to the complexity of this journey.

Furthermore, Japan’s approach to revitalizing its semiconductor industry may serve as a blueprint for other regions seeking to enhance their own technological prowess. Europe, for example, with its ambitions to grow its semiconductor manufacturing and reduce dependence, could look to Japan’s strategy for inspiration.

Sources:

https://www.anandtech.com/show/18979/rapidus-wants-to-supply-2nm-chips-to-tech-giants-challenge-tsmc

https://www.taipeitimes.com/News/biz/archives/2023/08/04/2003804192

https://www.electronicsweekly.com/news/business/japan-asks-tsmc-build-fab-2020-07/

https://www.taiwannews.com.tw/en/news/3999523

https://www.semiconductors.org/wp-content/uploads/2018/06/SIA-Beyond-Borders-Report-FINAL-June-7.pdf

https://sst.semiconductor-digest.com/2016/07/whats-happening-to-japans-semiconductor-industry/

https://blog.semi.org/semi-news/japan-a-thriving-highly-versatile-chip-manufacturing-region

https://laylaec.com/2018/10/19/why-doesnt-japan-have-a-large-semiconductor-foundry-like-tsmc-samsung-or-intel-anymore/

Also Read:

How Taiwan Saved the Semiconductor Industry

Intel Enables the Multi-Die Revolution with Packaging Innovation

TSMC Redefines Foundry to Enable Next-Generation Products


VC Formal Enabled QED Proofs on a RISC-V Core

by Bernard Murphy on 08-10-2023 at 6:00 am

The Synopsys VC Formal group has a real talent for finding industry speakers to talk on illuminating outside-the-box topics in formal verification. Not too long ago I covered an Intel talk of this kind. A recent webinar highlighted the use of formal methods together with a cool technique I have covered elsewhere called Quick Error Detection (QED). This for me is a good example of what really makes formal so fascinating – not so much the engines behind the scenes as the intellectual freedom they enable in solving a problem. Frederik Möllerström Lauridsen, a verification engineer at SyoSil, shared his experience using this method with Synopsys VC Formal for proofs on a RISC-V core.

VC Formal Enabled QED Proofs

The verification objective

Considering only the base ISA plus possible custom extensions, Frederik wanted a generic setup for RISC-V cores, achieved in part through how the SVA assertions are defined. He doesn’t go into detail in his talk, but I believe this means assertions which reference only the start and end of the pipeline, not the internals or the number of cycles required to complete. His goal is to detect both single-instruction bugs and multi-instruction bugs. Single-instruction bugs are relatively easy to find, but multi-instruction bugs are harder to uncover because they depend on context, for example on correctly inserted stalls without which register read/write conflicts might occur.

Single-instruction bugs (e.g. does an ADD really add) are not context dependent, so they can be checked by running the instruction through an otherwise empty pipeline. But multi-instruction bugs are context specific. How can you verify against all legal contexts? To see how, first you need to understand a little about QED.

QED

Quick Error Detection (QED) is a method first invented for post-silicon validation. There you start with machine-level code and regularly duplicate instructions reading and writing through a parallel set of registers / memory locations. You then compare original values with duplicated values; a difference signals an error. Similar techniques are migrating to pre-silicon verification, for an interesting reason. The intent is to regularly compare consistency between parallel implementations, with the promise that root cause errors may be caught long before being flagged by some more functionally meaningful assertion we might think to write. (Incidentally, this technique is not limited to formal verification. It is just as valuable in dynamic verification.)
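The core QED idea, duplicating the computation through shadow resources and flagging any divergence, can be sketched in a few lines of Python. This is a software analogy of the technique, not actual QED tooling:

```python
# Software analogy of QED: every write to an original register is
# duplicated into a shadow register, and the two are compared after
# each instruction so a mismatch is flagged close to its root cause.
# Real QED operates on machine code with duplicated registers/memory.
def run_with_qed(program, regs):
    shadow = dict(regs)                # duplicated register file
    for op, dst, a, b in program:      # tiny 3-address ISA sketch
        if op == "add":
            regs[dst] = regs[a] + regs[b]
            shadow[dst] = shadow[a] + shadow[b]
        # QED check: original and duplicated state must agree
        assert regs[dst] == shadow[dst], f"QED mismatch at {op} {dst}"
    return regs

result = run_with_qed([("add", "r2", "r0", "r1")], {"r0": 1, "r1": 2, "r2": 0})
print(result["r2"])  # prints 3
```

In hardware, the original and duplicated paths exercise physically different resources, so a defect along one path shows up as a mismatch at the very instruction where it first corrupts state.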

Combining formal methods and QED

To apply QED you need a reference design and a design under test (DUT). Here the reference design is a single-instruction pipeline test, e.g. pushing an ADD instruction through an otherwise empty pipeline. In parallel the DUT will push through the same instruction, but how do you define context as an arbitrary selection of possible surrounding instructions? For this Frederik used a variant on QED called C-S2QED.

Without dropping too much into the technical weeds, S2 means “symbolic state”, which allows for arbitrary instructions going through the pipeline, constrained so that the first instruction entering the pipeline is the same as the instruction entering the reference pipeline. The “symbolic” part of this is key. It is not necessary to define what other instructions are going through. These are only constrained to be legal instructions. Since we are applying formal methods, all possibilities will be considered together in proofs. The other neat trick that Frederick applied was first to demonstrate that all instructions would pass through the pipeline within at most a fixed number of cycles, providing a limit for bounded proofs.
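A concrete (if much simplified) stand-in for this idea is sketched below. This is my own toy model, not the actual tool flow: the reference pushes one instruction through an empty machine; the DUT pushes the same instruction behind arbitrary legal context instructions. A formal tool treats the context symbolically and proves all cases at once; here we simply enumerate every short context up to a bound, which plays the role of the fixed cycle limit mentioned above.

```python
# Toy bounded check in the spirit of C-S2QED (invented model and names).
from itertools import product

OPS = ["add", "sub"]  # the "legal instruction" constraint

def execute(seq, regs):
    for op, d, a, b in seq:
        regs[d] = regs[a] + regs[b] if op == "add" else regs[a] - regs[b]
    return regs

def reference(instr, init):
    # Single instruction through an otherwise empty machine.
    return execute([instr], dict(init))[instr[1]]

def check_all_contexts(instr, init, depth=2):
    # Context instructions may write any register except the
    # instruction's sources (a stand-in for "legal but arbitrary").
    free = [r for r in init if r not in (instr[2], instr[3])]
    candidates = [(op, d, instr[2], instr[3]) for op in OPS for d in free]
    for ctx in product(candidates, repeat=depth):
        regs = execute(list(ctx), dict(init))
        if execute([instr], regs)[instr[1]] != reference(instr, init):
            return ctx   # counterexample: context changed the result
    return None          # "proof", up to the enumeration bound

init = {0: 5, 1: 7, 2: 0, 3: 0}
assert check_all_contexts(("add", 2, 0, 1), init) is None
```

In the real flow the enumeration is replaced by symbolic reasoning over the actual pipeline RTL, so the "proof" is genuine rather than bounded by a hand-picked context depth.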

Now using the QED methodology, comparing the reference design and DUT through formal methods provides proof that there are no multi-instruction bugs in the pipeline implementation, or it provides a counterexample. Pretty cool! Frederick did acknowledge that they had not extended their method to any of the standard RISC-V ISA extensions (M, A, F, etc) though you could use VC Formal DPV for the M extension and no doubt clever folks can come up with creative possibilities for other extensions.

Very cool stuff. You can register to watch the webinar HERE.

For enthusiasts of this line of thinking, check out a blog I wrote back in 2018 on the Wolper method to verify the correctness of data transport logic in network switches, on-chip interconnects, or memory subsystems. I love the way formal has been applied so creatively in QED and in Wolper. There must be more opportunities like this 😊


Elon Musk is Self-Aware

by Roger C. Lanctot on 08-09-2023 at 10:00 am

“I think we’ll be better than human by the end of the year.” – Elon Musk, CEO, Tesla

Parsing the impact of the latest Tesla earnings call featuring CEO Elon Musk has become an eerie out-of-body experience. The comments of the CEO are simultaneously assessed in real time and in retrospect as they are being spoken. It is automotive history in the making – a latter-day Henry Ford undoing much of what that pioneer created. Think: re-vision (not division) of labor.

There is, of course, the prosaic assessment of projected earnings hits or misses – and the “markets” chose a negative response to Musk’s otherwise euphoric take on the company’s prospects. What was hard to ignore was the company’s ongoing success in the face of multiple macroeconomic obstacles and Musk’s own musings on his own path.

At one point he described himself as “the boy who cried FSD” – referring to the controversial full-self-driving capability available to new Tesla buyers for $15,000. This is the same FSD that is still not quite living up to its name.

Musk’s level of self-awareness is hard to ignore or avoid. One can only imagine what it’s like to read about yourself on a daily, hourly basis. In real-time Musk must come to grips with who he is, who he thinks he is, and who everyone else thinks he is or what they think of him.

Maintaining one’s grip on reality in these circumstances is itself no small feat. For Musk it is made even more complex by the fact that there is the Tesla Musk, the SpaceX Musk, the Twitter Musk, the x.AI Musk etc. etc. Everyone has their own Musk.

The Tesla Musk is probably the most interesting and palatable. But the Tesla Musk is not without his skeptics and critics taking into account unfulfilled full-self-driving forecasts, ongoing investigations of fatal crashes, and price cut and vehicle delivery flip-flops.

The most disturbing aspect of the latest Tesla earnings call with Musk is his comprehensive grasp of the technical issues (software, AI, battery tech) facing his company and the industry and his willingness to discuss those challenges and the company’s plans to overcome them. Perhaps even more important is Musk’s discussion of how the company has already overcome them.

Musk wastes no time getting to two of what may be the biggest questions facing the automotive industry:

  • How to enhance cars in such a way to improve safety and reduce highway fatalities.
  • How to hire and retain talent to work on cars.

Musk says nothing of “vision zero” platitudes and plans. After all, talking about vision zero, these days, is like talking about climate change. We feeble little humans have set off global climate shifts that will require decades if not centuries to reverse. In the same way, roughly a million human beings die on the world’s roadways every year – a reality that will be equally difficult to correct.

As the pied piper of electric vehicles, Musk is taking on both these global challenges at once – and can already point to some success.

For Musk the answer lies in a unified theory of “autonomy.” It will take mountains of data to improve and achieve full-self-driving, which will require a limitless supply of processing power (much of it from Nvidia), to achieve the objective of superhuman driving capability – a 10x-100x improvement on human driving – which still won’t get “us” to zero fatalities.

Just as Musk acknowledged, on the earnings call, the expanding adoption of Tesla’s fast charging connector and network technology by car makers such as General Motors and Ford Motor Company, he hinted at the prospect of the first car maker licensee of Tesla FSD technology. No names yet.

No other car company is even close to the required level of data collection and processing that Musk has already put in place and is expanding daily. In the context of achieving this ultimate goal of safe self-driving, the $15,000 price tag for FSD will seem trivial, he says, but even so a subscription-based alternative could be made available.

In a world where we have routinely been “sold” by “legacy” auto makers on the wonders and attractions and liberation of human driving, Musk has made machine-assisted driving aspirational. It is for this and other reasons that analysts and shareholders hang on his every word.

Notably, Tesla is fundamentally rewiring the consumer mindset regarding cars and driving in such a way that it is now short-circuiting the value of mass market automobile advertising. Increasingly, television, radio, or Internet advertising targeted at traditional internal combustion vehicle value propositions is missing its mark. Tesla does little advertising of its own.

I may only be speaking for myself, but as an EV owner my experience of TV advertising for ICE vehicles has been permanently altered. These ads are only interesting to me, now, as historical artifacts.

As for hiring and retaining the personnel necessary to achieve Musk’s dreams and Tesla’s objectives, Musk talks about interviewing and recruiting candidates who essentially don’t want to work for Tesla. By expanding his endeavors with SpaceX and, most recently, x.AI Musk has been able to hire and retain top performers whose contributions to other efforts convey a collateral benefit to Tesla.

Musk is following in the footsteps of auto industry founders who also diverted the efforts of their engineers into non-automotive endeavors. Car companies today have strangely lost the luster of past non-automotive forays.

At the very beginning of the earnings call Musk noted record vehicle production (nearing 2M annualized) and revenue ($25B) and talked about anticipating “quasi-infinite” demand for a future dedicated “robotaxi.” If any organization could make robotaxis popular, it would be Tesla.

Musk has thrust Tesla to the forefront of autonomous vehicle and artificial intelligence development. While we worry about the machines becoming self-aware, a self-aware Elon Musk is oddly reassuring. He knows how he sounds. He knows what we’re thinking – even as he is altering the way we think. Don’t be frightened, but do be aware, like Elon.

Also Read:

Xcelium Safety Certification Rounds Out Cadence Safety Solution

Sondrel Extends ASIC Turnkey Design to Supply Services From Europe to US

Automotive IP Certification


Insights into DevOps Trends in Hardware Design

by Bernard Murphy on 08-09-2023 at 6:00 am

Periodically I like to check in on the unsung heroes behind the attention-grabbing world of design. I’m speaking of the people responsible for the development and deployment infrastructure on which we all depend – version control, testing, build, release – collectively known these days as DevOps (development operations). I met with Simon Butler, GM of the Methodics BU at Perforce to get his insights on directions in the industry. Version control proved to be just the tip of what would eventually become DevOps. I was interested to know how much the larger methodology has penetrated the design infrastructure (hardware and software) world.

Software and DevOps

DevOps grew up around the software development world, where it is evolving much faster than in hardware development. Early in-house Makefile scripts and open-source version control (RCS, SCCS) quickly progressed into more structured approaches, built around better open-source options combined with commercial tools. As big systems based on a mix of in-house and open/commercial development grew and schedules shrank, methods like CI/CD (continuous integration / continuous deployment) and agile became more common, spawning tools like Jenkins. Cloud-based CI/CD added further wrinkles with containers, Kubernetes and microservices. How far we have come from the early days of ad-hoc software development.

Why add all this complexity? Because it is scalable, far more so than the original way we developed software. Scalable to bigger and richer services, to larger and more distributed development teams, to simplified support and maintenance across a wide range of platforms. It is also more adaptable to emerging technologies such as machine learning, since the infrastructure for such technologies is packaged, managed, and maintained through transparent cloud/on-prem services.

What about hardware design?

Hardware design and design service teams have been slower to fully embrace DevOps, in some cases because not all capabilities for software make sense for hardware, in other cases because hardware teams are frankly more conservative, preferring to maintain and extend their own solutions rather than switch to external options. Still, cracks are starting to appear in that cautious approach.

Version control is one such area. Git and Subversion are well established freeware options but have scaling problems for large designs across geographically distributed development, verification, and implementation organizations. Addressing this challenge is where commercial platforms like Perforce Helix Core can differentiate.

In more extensive DevOps practices, some design teams are experimenting with CI/CD and agile. During development, a new version of a lower-level block is committed after passing quality checks. The commit automatically triggers ready-to-run workspaces which run subset regression tests on the new candidate, all managed by Jenkins.
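The selection step in that flow can be sketched very simply. The block names and test map below are invented for illustration; the idea is just that a commit to a lower-level block triggers only the regression subset that exercises it, rather than the full regression suite.

```python
# Hypothetical commit-triggered subset selection (invented names).

REGRESSION_MAP = {
    "alu":   ["alu_smoke", "alu_corner"],
    "fetch": ["fetch_smoke", "branch_pred"],
    "cache": ["cache_smoke", "coherence_lite"],
}

def select_subset(changed_blocks):
    """Return the regression tests a CI job (e.g. Jenkins) should launch."""
    tests = []
    for block in changed_blocks:
        tests.extend(REGRESSION_MAP.get(block, []))
    return sorted(set(tests))

# A commit touching the ALU and the cache triggers only their subsets.
print(select_subset(["alu", "cache"]))
```

In a real deployment the map would be derived from the design hierarchy and coverage data rather than hand-written, but the trigger-then-filter shape is the same.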

Product lifecycle management (PLM) has been common in large system development for decades. Cars, SoCs, and large software applications are built around many components, some legacy, some perhaps open source, some commercial. Each evolves through revisions, some with known problems discovered in design or in deployment, some adapted to special needs. Certain components may work well with other components but not with all. PLM can trace such information, providing critical input to system audits/signoffs.

In managing such functions in DevOps, design teams have two choices – fully develop their own automation or build around widely adopted tools. Some go for in-house for all the usual reasons, though management sentiment is increasingly leaning to proven flows in response to staffing limitations, risks in adding yet more in-house software, and growing demand for documented traceability between requirements, implementation, and testing. While management attitudes are still evolving, Simon believes organizations will inevitably move to proven flows to address these concerns.

Cloud

The state of DevOps adoption in hardware is somewhat intertwined with cloud constraints. For software there are real advantages to being in the cloud since that is often the ultimate deployment platform. The same case can’t be made for hardware. Simon tells me that based on multiple recent customer discussions there is still limited appetite for cloud-based flows, mostly based on cost. He says all agree with the general intent of the idea, but these plans are still largely aspirational.

This is true even for burst models. For hardware design and analytics, input and output data volumes are unavoidably high. Cloud costs for moving and storing such volumes are still challenging, undermining the frictionless path to elastic expansion we had hoped for. Perhaps at some point big AI applications only practical in the cloud (maybe generative methods) may tip the balance. Until then, heavy cloud usage by in-house design groups may struggle to move beyond the aspirational.

Interest in unifying hardware and software DevOps

Are there other ways in which software and hardware can unify in DevOps? One trend that excites Simon is customers looking for a unified software and hardware Bill of Materials.

The demand is clear visibility into dependencies between software and hardware: for example, does this driver work with this version of the IP? Product teams want to understand re-use dependencies between hardware and software components in the stack. They need insight into questions which PLM and traceability can answer. In traceability, one objective is to prove linkage between system requirements, implementation, and testing. Another is to trace between component usages and known problems in other designs using the same component. If I find a problem in the design I’m working on right now, what other designs, quite possibly already in production, should I worry about? Traceability must cross from software to hardware to be fully useful in such cases.
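The where-used query at the heart of that last question can be sketched as follows. Everything here (designs, components, revision numbers) is invented for illustration; the point is that a unified BOM spanning hardware and software lets one lookup answer "who else is exposed?"

```python
# Hypothetical unified hardware/software BOM and where-used query.

BOM = {
    "soc_a": {"hw": {("usb_phy", "2.1"), ("ddr_ctrl", "1.0")},
              "sw": {("usb_driver", "3.4")}},
    "soc_b": {"hw": {("usb_phy", "2.1")},
              "sw": {("usb_driver", "3.5")}},
    "soc_c": {"hw": {("ddr_ctrl", "1.1")},
              "sw": {("usb_driver", "3.4")}},
}

def where_used(component, revision):
    """Designs whose hardware or software BOM includes this revision."""
    return sorted(design for design, parts in BOM.items()
                  if (component, revision) in parts["hw"] | parts["sw"])

# A bug found in usb_phy rev 2.1 on soc_a also puts soc_b at risk.
print(where_used("usb_phy", "2.1"))
```

Because the query crosses the hardware/software boundary, the same lookup answers both "which chips carry this IP revision" and "which products ship this driver version."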

Interesting discussion and insights into the realities of DevOps in hardware design today. You can learn more about Perforce HERE.


Breakthrough Gains in RTL Productivity and Quality of Results with Cadence Joules RTL Design Studio

by Kalar Rajendiran on 08-08-2023 at 10:00 am

Joules RTL Design Studio Benefits

Register Transfer Level (RTL) is a crucial and valuable abstraction in digital hardware design. Over the years, it has played a fundamental role in enabling the design of complex digital chips. By abstracting away implementation details and technology-dependent aspects while providing a clear description of digital behavior, RTL offers a manageable, technology-agnostic representation of a design, and it has contributed significantly to the advancement and widespread adoption of digital design methodologies. RTL also provides a basis for design exploration and optimization: engineers can modify the RTL code to explore various design alternatives and identify the most efficient solutions.

While the chip design process benefits tremendously from the use of RTL, designs need to be synthesized and taken through the layout process before chips can be manufactured. Tools for synthesis and place and route rely on RTL as input to generate the physical layout of the chip. This transition comes with several challenges that designers must address to ensure a successful and optimal chip implementation. Physical design constraints such as area, power, and routability must be satisfied during the layout process while considering the characteristics and limitations of the target process technology. Power integrity, signal integrity, design for manufacturability (DFM), and many more requirements need to be addressed as well.

As designs grow in complexity, productivity and turnaround time become significant challenges during the RTL-to-layout transition. This transition often involves iterative processes where designers must go back to the RTL level to make modifications and then repeat the layout process. Efficient iteration management is crucial to avoid time-consuming and costly iterations. It is in this context that Cadence’s recent announcement highlighting the delivery of the Joules RTL Design Studio takes on significance. It promises to deliver up to 5X faster RTL convergence and up to 25% improved Quality of Results (QoR) when compared with traditional RTL design approaches.

Actionable Intelligence

The driving force behind the Joules RTL Design Studio lies in its ability to provide RTL designers with actionable intelligence and rapid insight into physical effects. This capability enables design teams to address potential issues early in the design process, leading to reduced iterations, thus speeding time to market. Front-end designers can now access digital design analysis and debugging capabilities from a single, unified cockpit, streamlining the design process and ensuring a fully optimized RTL design before implementation handoff. This provides the physical design tools a strong starting point.

Intelligent RTL Debugging Assistant System

Joules RTL Design Studio further distinguishes itself with an intelligent RTL debugging assistant system. It provides early power, performance, area and congestion (PPAC) metrics and actionable debugging information throughout the design cycle, including logical, physical, and production implementation stages. Engineers can thoroughly explore “what-if” scenarios and identify potential resolutions with ease. This not only saves valuable time but also improves the overall design outcomes, leading to more efficient chip designs.

Integrated AI Platform

A key highlight of this solution is its integration with Cadence Cerebrus, an AI-driven solution for design flow optimization, and the Cadence JedAI Platform, which facilitates big data analytics. By leveraging generative artificial intelligence (AI) for RTL design exploration and comprehensive analytics with Cadence’s leading AI portfolio, designers gain new insights into design space scenarios, floorplan optimization, and frequency versus voltage tradeoffs. This opens up new possibilities for creative exploration and significantly enhances design productivity.

The software’s capabilities are based on proven engines, shared with Cadence’s Innovus Implementation System, Genus Synthesis Solution, and Joules RTL Power Solution. This integration allows users to access all analysis and design exploration features from a single intuitive graphical user interface (GUI), ensuring an optimal QoR and a seamless design experience.

Incorporating lint checker integration, Joules RTL Design Studio empowers engineers to run lint checkers incrementally. This capability helps rule out data and setup issues upfront, effectively reducing errors and accelerating the design completion process. The unified cockpit experience offered by the software caters to the specific needs of RTL designers, providing physical design feedback, localization, and categorization of violations, bottleneck analysis, and cross-probing between RTL, schematic, and layout. This user-friendly interface streamlines the design workflow and fosters productivity.

Intelligent System Design

Joules RTL Design Studio plays a vital role in Cadence’s broader digital full flow. This integrated flow offers customers a faster path to design closure, ensuring efficient and successful chip design. The tool aligns well with Cadence’s Intelligent System Design strategy, empowering engineers to achieve excellence in system-on-chip (SoC) design.

Summary

The impact of this innovation extends to all aspects of physical design, from power and performance to area and congestion. By incorporating advanced technologies like machine learning, big data analytics, and generative artificial intelligence, Cadence has engineered a powerful solution that empowers designers to achieve optimized RTL designs faster with improved QoR.

Customers from various industries have endorsed its powerful capabilities and the benefits it brings to their design processes. For details, refer to the Joules RTL Design Studio press release.

For more information, visit the Joules RTL Design Studio product page.

Also Read:

Cadence and AI at #60DAC

Automated Code Review. Innovation in Verification

Xcelium Safety Certification Rounds Out Cadence Safety Solution