#60DAC Update from Arteris
by Daniel Payne on 08-16-2023 at 10:00 am

I met up with Andy Nightingale, VP of Product Marketing, and Michal Siwinski, Chief Marketing Officer of Arteris, at #60DAC for an update on their system IP company serving SoC and chiplet-based designs. SemiWiki has been blogging about Arteris since 2011, and in those 12 years the company has grown enough to have an IPO, see its IP used in 3 billion+ SoCs, attract 200+ customers and log 675+ SoC design starts. Their interconnect IP and interface IP are used to create a Network-on-Chip (NoC), and they also offer EDA software for SoC integration automation.

Andy Nightingale and Michal Siwinski at #60DAC

Michal mentioned that NoC IP use is growing to meet SoC complexity demands, especially as SoC designs employ more combinations of big and small cores and process nodes shrink. Every SoC company uses some on-chip interconnect approach, even if only a traditional bus, but NoC usage is growing the fastest. The average chip can now have 5-30 cores, and with multi-die designs that count goes even higher. Every sub-system requires on-chip communication, and designs today often contain multiple NoCs, so the big challenge becomes how to integrate all of that.

Arteris stays neutral by supporting all of the popular transaction protocols, like:

  • Arm AMBA
  • Ceva
  • Tensilica
  • OCP
  • PIF

The size of the company was about 200 people last year and has now grown to about 250, with another 30-50 reqs open. Their main R&D centers are in Silicon Valley, Austin and France.

The recent release of FlexNoC 5, their fifth generation of NoC IP, added physical awareness, with the benefit of up to 5X faster physical closure compared to manual iterations to converge. Physical effects encountered at 16nm and smaller nodes are causing respins, and these effects are so large that they need to be taken into account when placing and routing the NoC. The new approach takes floorplan information and feeds it into the NoC creation, so that physical effects are accounted for as early as possible in the NoC topology.

The physical implementation works with all popular EDA tools, like Synopsys (RTL Architect), Cadence (Genus) and Siemens. Engineers run logic synthesis, P&R and static timing tools to reach their PPA goals.

Old buses simply cannot meet today's complexity requirements, so a NoC approach must be adopted to meet latency, power and area goals. Automotive companies and OEMs are now doing their own SoC designs; even Mercedes presented at the SNUG event this year. Early in the pandemic there were many automotive chip shortages, so that industry wants more control over its supply chain by designing its own chips.

And with the ever-rising complexity of SoCs and chiplet-based designs, the NoCs integrate with the IP-XACT-based SoC integration tools that Arteris offers to address that aspect of design complexity. Using the SoC integration tools, developers can re-factor RTL when new power regions need to be inserted, for instance. Arteris’ SoC integration tools stem from the company’s past acquisitions of Magillem and Semifore.

In the ongoing AI market boom, there are notable users of Arteris IP, such as Tenstorrent for AI high-performance computing and datacenter with RISC-V chiplets, Axelera AI to accelerate computer vision at the edge, and ASICLAND for automotive, AI enterprise and AI edge SoCs.

The NoC has become a key component of SoC design, and while it may look like just a smart connector, you really have to get it right to enjoy the benefits. Arteris has the deep experience in this area to help your SoC team get the NoC done right.

On the chiplet front, Arteris is participating in the standards groups UCIe and CXL, so their NoC should work with any PHY choice from the popular vendors: Synopsys, Cadence, Rambus, etc.

Summary

Arteris has grown both organically and through the complementary acquisitions of Semifore and Magillem. Their NoC approach works with all of the interconnect standards, and their IP can be used with any EDA vendor tool flow. Their presence at DAC was well received, and I look forward to watching their continued growth as an SoC system IP vendor.

Related Blogs


AI and Machine Unlearning: Navigating the Forgotten Path
by Ahmed Banafa on 08-16-2023 at 6:00 am

In the rapidly evolving landscape of artificial intelligence (AI), the concept of machine unlearning has emerged as a fascinating and crucial area of research. While the traditional paradigm of AI focuses on training models to learn from data and improve their performance over time, the notion of unlearning takes a step further by allowing AI systems to intentionally forget or weaken previously acquired knowledge. This concept draws inspiration from human cognitive processes, where forgetting certain information is essential for adapting to new circumstances, making room for fresh insights, and maintaining a balanced and adaptable cognitive framework.

Machine Learning vs Machine Unlearning

Machine Learning and Machine Unlearning are two concepts related to the field of artificial intelligence and data analysis. Let’s break down what each term means:

  • Machine Learning: Machine Learning (ML) is a subset of artificial intelligence that involves the development of algorithms and models that enable computers to learn from and make predictions or decisions based on data. In other words, it’s the process of training a machine to recognize patterns and relationships within data in order to make accurate predictions or decisions in new, unseen situations.

Machine Learning typically involves the following steps, sketched in code after this list:

  • Data Collection: Gathering relevant and representative data for training and testing.
  • Data Preprocessing: Cleaning, transforming, and preparing the data for training.
  • Model Selection: Choosing an appropriate algorithm or model architecture for the task at hand.
  • Model Training: Feeding the data into the chosen model and adjusting its parameters to learn from the data.
  • Model Evaluation: Assessing the model’s performance on unseen data to ensure it’s making accurate predictions.
  • Deployment: Integrating the trained model into real-world applications for making predictions or decisions.
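
As a concrete and purely illustrative sketch of these steps, here is a minimal end-to-end example using scikit-learn; the dataset and model choices are assumptions made for brevity, not recommendations:

```python
# Minimal sketch of the ML workflow above (illustrative choices throughout).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                       # 1. data collection
X_train, X_test, y_train, y_test = train_test_split(    # hold out test data
    X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)                  # 2. preprocessing
model = LogisticRegression(max_iter=1000)               # 3. model selection
model.fit(scaler.transform(X_train), y_train)           # 4. training
print(model.score(scaler.transform(X_test), y_test))    # 5. evaluation
# 6. deployment: persist the trained model for use in an application,
#    e.g. joblib.dump(model, "model.joblib")
```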

Common types of Machine Learning include supervised learning, unsupervised learning, and reinforcement learning.

  • Machine Unlearning: Machine Unlearning is not a widely recognized term within the field of AI and Machine Learning. However, if we consider the concept metaphorically, it could refer to the process of removing or updating the knowledge acquired by a machine learning model. In a sense, this could be seen as “unlearning” or “forgetting” certain patterns or information that the model has learned over time.

In practice, there are a few scenarios where we might perform a form of “machine unlearning”:

  • Concept Drift: Over time, the underlying patterns in the data may change, rendering a trained model less accurate or even obsolete. To adapt to these changes, the model may need to be retrained with new data, effectively “unlearning” the outdated patterns.
  • Privacy and Data Retention: In situations where sensitive data is involved, there might be a need to “unlearn” certain information from the model to comply with privacy regulations or data retention policies.
  • Bias and Fairness: If a model has learned biased patterns from the data, efforts might be made to “unlearn” those biases by retraining the model on more diverse and representative data.

While “machine unlearning” is not a well-defined concept in the context of machine learning, it could refer to the processes of updating, adapting, or removing certain knowledge or patterns from a trained model to ensure its accuracy, fairness, and compliance with changing requirements.

The Importance of Adaptability in AI

Adaptability is a cornerstone of intelligence, both human and artificial. Just as humans learn to navigate new situations and respond to changing environments, AI systems strive to exhibit a similar capacity to adjust their behavior based on shifting circumstances. Machine unlearning plays a pivotal role in fostering this adaptability by allowing AI models to shed outdated or irrelevant information. This enables them to focus on current and relevant data, patterns, and insights, thereby improving their ability to generalize, make predictions, and respond effectively to novel scenarios.

One of the key advantages of adaptability through machine unlearning is the mitigation of a phenomenon known as “catastrophic forgetting.” When AI models are trained on new data, there is a risk that they may overwrite or lose valuable knowledge acquired from previous training. Machine unlearning addresses this challenge by selectively discarding less crucial information, preserving the integrity of previously learned knowledge while accommodating new updates.

Strategies for Implementing Machine Unlearning

Implementing machine unlearning techniques requires innovative approaches that strike a balance between retaining valuable knowledge and letting go of outdated or irrelevant data. Several strategies are being explored to achieve this delicate equilibrium:

1. Regularization Techniques:

Regularization methods, such as L1 and L2 regularization, have traditionally been employed to prevent overfitting in AI models. These techniques penalize large weights in neural networks, leading to the weakening or elimination of less important connections. By applying regularization strategically, AI models can be nudged towards unlearning specific patterns while retaining essential information.
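
As a rough sketch of this idea (the data, model and penalty values are invented for illustration), the following numpy example retrains a linear model with a strong L2 penalty on a single weight, shrinking that learned relationship toward zero while leaving the rest intact:

```python
import numpy as np

# Fit a 3-feature linear model, then "unlearn" feature 2 by penalizing
# only its weight during (re)training -- a targeted form of regularization.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

w = np.zeros(3)
penalty = np.array([0.0, 0.0, 10.0])   # strong L2 penalty on feature 2 only
lr = 0.01
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y) + penalty * w   # MSE grad + targeted L2
    w -= lr * grad

print(w.round(2))   # first two weights are preserved; the third is ~0.05
```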

2. Dynamic Memory Allocation:

Inspired by human memory processes, dynamic memory allocation involves allocating resources within an AI system based on the relevance and recency of information. This enables the model to prioritize recent and impactful experiences while gradually reducing the influence of older data.
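
A minimal sketch of one such scheme (the decay rate and buffer size are assumptions for illustration): training examples are drawn from a replay buffer with probabilities that decay exponentially with age, so older experiences gradually lose influence without being deleted outright.

```python
import numpy as np

# Sample a training batch from a replay buffer, favoring recent experiences.
decay = 0.9
ages = np.arange(10)                  # 0 = newest experience, 9 = oldest
weights = decay ** ages               # exponential recency weighting
probs = weights / weights.sum()       # normalize into a sampling distribution

rng = np.random.default_rng(0)
batch = rng.choice(len(ages), size=5, p=probs, replace=False)
print(batch)                          # indices skew toward recent experiences
```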

3. Memory Networks and Attention Mechanisms:

Memory-augmented neural networks and attention mechanisms offer avenues for machine unlearning. Memory networks can learn to read, write, and forget information from a memory matrix, emulating the process of intentional forgetting. Attention mechanisms, on the other hand, allow AI models to selectively focus on relevant data while gradually downplaying less pertinent information.
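
The numpy sketch below shows the kernel of this idea (the memory contents are random placeholders): scaled dot-product attention reads from a memory matrix with relevance weights, and masking a slot’s score emulates intentionally forgetting that memory.

```python
import numpy as np

def attention_read(q, K, V, forget_mask=None):
    """Read from memory (K, V) with attention; masked slots are 'forgotten'."""
    scores = K @ q / np.sqrt(q.size)                     # relevance per slot
    if forget_mask is not None:
        scores = np.where(forget_mask, -np.inf, scores)  # zero weight if masked
    w = np.exp(scores - scores.max())                    # stable softmax
    w /= w.sum()
    return w @ V                                         # weighted memory read

rng = np.random.default_rng(0)
K, V = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))  # 4 memory slots
q = K[0] + 0.05 * rng.normal(size=8)                     # query near slot 0
mask = np.array([True, False, False, False])             # forget slot 0
print(attention_read(q, K, V), attention_read(q, K, V, mask))
```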

4. Incremental Learning and Lifelong Adaptation:

Machine unlearning is closely intertwined with the concept of incremental learning, where AI models continuously update their knowledge with new data while also unlearning or adjusting their understanding of older data. This approach mimics the lifelong learning process in humans, enabling AI systems to accumulate and refine knowledge over time.
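
At its simplest, the learn-while-forgetting trade-off appears in an exponentially weighted running estimate: each update absorbs the newest observation while geometrically decaying the influence of older ones. A toy sketch (the numbers are illustrative):

```python
# An online estimate with a forgetting factor: when the data distribution
# shifts, the estimate unlearns the old regime and tracks the new one.
alpha = 0.2                                  # higher alpha = faster forgetting
estimate = 0.0
stream = [5.0] * 20 + [10.0] * 20            # distribution shifts mid-stream
for x in stream:
    estimate = (1 - alpha) * estimate + alpha * x
print(round(estimate, 2))                    # ~9.94: the old regime is forgotten
```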

Applications of Machine Unlearning

The concept of machine unlearning has far-reaching implications across various domains and applications of AI:

1. Copyright Compliance:

AI models are trained on a vast amount of data, including copyrighted materials. If there’s a push for removing copyrighted content from AI models, it could enhance compliance with copyright laws and regulations. This might be seen as a positive step by copyright holders and advocates for stronger intellectual property protection.

2. Personalized Recommendations and Content Delivery:

In the realm of content delivery and recommendation systems, machine unlearning can enhance personalization by allowing AI models to forget outdated user preferences. This ensures that recommendations remain relevant and reflective of users’ evolving tastes.

3. Healthcare and Medical Diagnosis:

Healthcare AI systems can benefit from machine unlearning by adapting to changing patient conditions and medical knowledge. By unlearning outdated medical data and prioritizing recent research findings, AI models can provide more accurate and up-to-date diagnostic insights.

4. Autonomous Vehicles and Robotics:

Machine unlearning can play a pivotal role in autonomous systems such as self-driving cars and drones. These systems can unlearn outdated sensor data and environmental features, enabling them to make real-time decisions based on current and relevant information.

5. Ethical Considerations and Bias Mitigation:

Machine unlearning holds the potential to address ethical concerns in AI, particularly related to bias and fairness. By unlearning biased patterns or associations present in training data, AI models can reduce the perpetuation of unfair decisions and outcomes.

Ethical Implications and Considerations

While machine unlearning offers numerous benefits, it also raises ethical questions and considerations:

1. Transparency and Accountability:

Machine unlearning could potentially complicate the transparency and interpretability of AI systems. If models are allowed to intentionally forget certain information, it might become challenging to trace the decision-making process and hold AI accountable for its actions.

2. Privacy and Data Retention:

The intentional forgetting of data aligns with privacy principles, as AI models can discard sensitive or personal information after its utility has expired. However, striking the right balance between unlearning for privacy and retaining data for accountability remains a challenge.

3. Unintended Consequences:

Machine unlearning, if not carefully managed, could lead to unintended consequences. AI systems might forget critical information, resulting in poor decisions or diminished performance in specific contexts.

4. Bias Amplification:

While machine unlearning can contribute to bias mitigation, it is essential to consider the potential for inadvertently amplifying biases. The process of unlearning might introduce new biases or distort the model’s understanding of certain data.

The Road Ahead: Challenges and Future Directions

The exploration of machine unlearning is still in its infancy, and numerous challenges lie ahead:

1. Developing Effective Algorithms:

Designing algorithms that enable AI models to unlearn effectively and intelligently is a complex task. Balancing the retention of valuable knowledge with the removal of outdated information requires innovative approaches.

2. Granularity and Context:

Determining the appropriate granularity and context for unlearning is essential. AI models must discern which specific data points, features, or relationships should be unlearned to optimize their performance.

3. Dynamic and Contextual Adaptability:

Machine unlearning should facilitate dynamic and contextual adaptability, allowing AI systems to forget information based on shifting priorities and emerging trends.

4. Ethical Frameworks:

As with any AI development, ethical considerations should guide the implementation of machine unlearning. Establishing clear ethical frameworks for unlearning processes is essential to ensure accountability, fairness, and transparency.

The Future

While the journey towards fully realizing machine unlearning is marked by challenges and ethical considerations, it holds the promise of unlocking new dimensions of AI’s potential. As researchers and practitioners continue to explore innovative strategies, algorithms, and applications, machine unlearning could pave the way for a more nuanced, contextually aware, and ethically conscious generation of AI systems. Ultimately, the integration of machine unlearning into the AI landscape could lead to systems that not only learn and remember but also adapt and forget, mirroring the intricate dance of human cognition.

Ahmed Banafa’s books

Covering: AI, IoT, Blockchain and Quantum Computing

Also Read:

The Era of Flying Cars is Coming Soon

AI and the Future of Work

Narrow AI vs. General AI vs. Super AI


WEBINAR: The Power of Formal Verification: From flops to billion-gate designs
by Daniel Nenni on 08-15-2023 at 5:00 pm

The semiconductor industry is going through an unprecedented technological revolution, with AI/ML, GPU, RISC-V, chiplets, automotive and 5G driving hardware design innovation. The race to deliver high performance while optimizing power and area (PPA) and ensuring safety and security is truly on. It has never been a more exciting time for hardware design and architecture.

REGISTER HERE FOR REPLAY

The story around validation & verification is, however, not as inspiring, with the industry struggling to show improvements in best-practice adoption. If Harry Foster’s Wilson Research Report is anything to go by, an ever-increasing number of simulation cycles and the astronomical growth of UVM have been unable to prevent ASIC/IC respins, which stand at a staggering 76%, while 66% of IC/ASIC projects continue to miss schedules.

62% of ASIC/IC bugs are logic related, causing respins in designs with over a billion gates – practically everything that powers the devices in our day-to-day lives. It would be interesting to analyze what proportion of these logic bugs could have been caught on day one of the DV flow.

While the industry continues to talk about shift-left, it is not walking the walk. The best way of ensuring shift-left is to leverage formal methods in your DV flow, and one way to adopt formal methods early is to understand their true potential. While the use of formal apps has certainly increased over the last decade, the application of formal is still very much at the extremities: almost everyone uses linters (based on formal technology) in the early stages of a project and apps such as connectivity checking towards the end of the project flow. However, the full continuum is missed. The real value-add of formal is in the middle – in functional verification as well as safety & security verification.

Modern-day designs are verifiable by formal when supported by great methodology. At Axiomise, we have formally verified designs as big as 1 billion gates (approx. 338 million flip-flops), though gate and flop counts are not the only criteria for determining proof complexity.

Formal methods are capable not only of hunting down corner-case bugs easily, but also of establishing exhaustive proofs of bug absence through mathematical analysis of the entire design space, employing program-reasoning techniques based on sound rules of mathematical logic. Formal verification constructs a mathematical proof to verify a design-under-test (DUT) against a requirement. Along the way of building a proof, formal tools can encounter bugs in the design, or in the specifications, or both. When no more bugs can be found, a formal proof establishes that the requirement holds on all the reachable states of the design, where reachability is determined purely through assumptions on the test environment, with no human effort needed to inject stimulus into the design – a formidable challenge in dynamic simulation!
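
To give a flavor of what proving, rather than simulating, looks like, here is a toy example of our own (not from the webinar, and far simpler than RTL-scale formal verification) using the Z3 SMT solver’s Python bindings; a single call checks a property against all 2^32 possible inputs at once, with no stimulus:

```python
# Prove that a bit-twiddling "implementation" matches its specification
# for every 32-bit value -- an exhaustive proof, no testbench required.
from z3 import BitVec, And, Or, prove

x = BitVec("x", 32)

spec = Or([x == (1 << i) for i in range(32)])   # spec: x is a power of two
impl = And(x != 0, x & (x - 1) == 0)            # impl: the classic bit trick

prove(spec == impl)   # prints "proved": valid on the entire input space
```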

REGISTER HERE FOR REPLAY

At Axiomise, we have been deploying production-grade formal methodology using all the commercial tools in the market with great success. To make formal normal, one must understand its true potential by mastering the methodology. This talk will discuss some of the key aspects of formal verification methodology based on insights from the practical deployment of formal, and show how a scalable formal methodology based on abstraction, bug-hunting and coverage can be used to accomplish functional verification for designs ranging from a few flip-flops to a billion gates.

Also Read:

A Five-Year Formal Celebration at Axiomise

Axiomise at #59DAC, Formal Update

CEO Interview: Dr. Ashish Darbari of Axiomise


A New Verification Conference Coming to Austin
by Bernard Murphy on 08-15-2023 at 6:00 am

Actually not so new, just new to us in the US. Verification Futures is already well established as a Tessolve event with a 10-year track record in the UK. This year they are bringing the conference to Austin on September 14th (REGISTER HERE).

While DVCon is an ever-popular event for sharing verification ideas, it isn’t always accessible to many hands-on engineers (travel costs, etc). Since the format for the Verification Futures conference leans heavily to hands-on topics presented by verification engineers, this looks like a great opportunity to listen to and network with experts (100+ and counting) in the field outside of the traditional verification conference sites. And where better to do that than Austin, a major center for verification? Or online if you really can’t get to this event.

The conference

This is a one-day event, hosted at the Austin Marriott South on September 14th, kicking off at 8:30am and wrapping up at 4:30pm. There are speakers from Arm, Ericsson, Cadence, Tenstorrent, Intel, Doulos, Renesas, Imperas, Breker, Broadcom, NXP, UVMGen, and SynthWorks. This is not a lightweight group!

I see topics on safety and security, designing IP for a long shelf life, RISC-V CPU verification, validating hybrid architectures, trends in UVM-AMS, and leveraging AMS and DMS verification. All very topical.

Mike Bartley of Tessolve hosts the event. Mike was previously CEO of Test and Verification Solutions (T&VS) until the organization was acquired by Tessolve in 2020. Mike is now a senior VP of VLSI design and is clearly still very involved in events of this type.

You can register for the conference (Austin or online) on September 14th HERE.

About Tessolve

Tessolve offers a unique combination of pre-silicon and post-silicon expertise to provide an efficient turnkey solution for silicon bring-up, from spec to product. With 3000+ employees worldwide, Tessolve provides a one-stop-shop solution with full-fledged hardware and software capabilities, including its advanced silicon and system testing labs.

Tessolve offers a turnkey ASIC solution, from design to packaged parts. Tessolve’s design services include solutions on advanced process nodes, with healthy ecosystem relationships with EDA, IP, and foundry partners. Front-end design strengths, integrated with knowledge from the back-end flow, allow Tessolve to catch design flaws earlier in the cycle, reducing expensive re-design costs and risks.

Tessolve actively invests in R&D center-of-excellence initiatives such as 5G, mmWave, silicon photonics, HSIO, HBM/HPI, system-level test, and others. It also offers end-to-end product design services in the embedded domain, from concept to manufacturing under an ODM model, with application expertise in the avionics, automotive, industrial and medical segments.

Tessolve’s Embedded Engineering services enable customers a faster time-to-market through deep domain expertise, innovative ideas, diverse embedded hardware & software services, and built-in infrastructure with world-class lab facilities. Tessolve’s clientele includes Tier 1 clients across multiple market segments, 7 of the top 10 semiconductor companies, start-ups, and government entities.

They have a global presence, with office locations in the United States, India, Singapore, Malaysia, Germany, the United Kingdom, China, Japan, Thailand and the Philippines, and test labs in India, Singapore, Malaysia, Austin and San Jose.


Turnaround in Semiconductor Market
by Bill Jewell on 08-14-2023 at 1:30 pm

The global semiconductor market grew 4.2% in 2Q 2023 versus 1Q 2023, according to WSTS. The 2Q 2023 growth was the first positive quarter-to-quarter change since 4Q 2021, a year and a half ago. Versus a year ago, the market declined 17.3%, an improvement from a 21.3% year-to-year decline in 1Q 2023. Semiconductor market year-to-year change peaked at 30.1% in 2Q 2021 in the recovery from the 2020 pandemic slowdown.

Most major semiconductor companies experienced revenue growth in 2Q 2023 versus 1Q 2023. Of the 15 largest companies, 13 showed revenue gains. We at Semiconductor Intelligence only include companies which sell to the end user. Thus, we do not include foundries such as TSMC or companies which only use their semiconductor products internally such as Apple. Nvidia has not yet reported results for the latest quarter, but its guidance was for a 53% jump from the prior quarter. If this guidance holds, Nvidia will become the third largest semiconductor company in 2Q 2023, up from fifth in the prior quarter. Nvidia cited a steep increase in demand for AI processors as the driver for the strong growth. SK Hynix reported 2Q 2023 growth of 39%, bouncing back from three previous quarter-to-quarter declines of over 25%. The only companies with revenue declines were Qualcomm (down 10%) and Infineon Technologies (down 0.7%). The weighted average growth from 1Q 2023 to 2Q 2023 for these 15 companies was 8%. Excluding Nvidia, the growth was 3%.

Top Semiconductor Companies’ Revenue

Change versus prior quarter in local currency
| Rank | Company | 2Q23 Revenue (US$B) | 2Q23 vs 1Q23 (reported) | 3Q23 Guidance | Comments on 3Q23 |
|------|---------|---------------------|-------------------------|---------------|------------------|
| 1 | Intel | 12.9 | 11% | 3.5% | inventory issues |
| 2 | Samsung SC | 11.2 | 7.3% | n/a | demand recovery in 2H |
| 3 | Nvidia | 11.0 | 53% | n/a | 2Q23 is guidance |
| 4 | Broadcom | 8.85 | 1.3% | n/a | 2Q23 is guidance |
| 5 | Qualcomm IC | 7.17 | -10% | 0.4% | increase in handsets |
| 6 | SK Hynix | 5.55 | 39% | n/a | increased demand in 2H |
| 7 | AMD | 5.36 | 0.1% | 6.4% | client & data center up |
| 8 | TI | 4.53 | 3.5% | 0.4% | auto up, others weak |
| 9 | Infineon | 4.46 | -0.7% | -2.2% | auto up, power down |
| 10 | STMicro | 4.33 | 1.9% | 1.2% | auto up, digital down |
| 11 | Micron | 3.75 | 1.6% | 3.9% | supply/demand improving |
| 12 | NXP | 3.30 | 5.7% | 3.1% | auto & industrial up |
| 13 | Analog Devices | 3.26 | 0.4% | -5.0% | auto & industrial down |
| 14 | MediaTek | 3.20 | 1.7% | 4.8% | inventories down |
| 15 | Renesas | 2.68 | 2.5% | 0.4% | inventory balanced |
| | Total of above | | 8% | | |
| | Memory Cos. (US$) | | 9% | n/a | Samsung, SK Hynix, Micron |
| | Non-Memory Cos. | | 7% | 2% | |
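
For readers who want to reproduce the "Total of above" figure from the table, here is a sketch of one plausible weighting (an assumption on our part; the exact method used above isn’t specified): aggregate growth computed as total 2Q23 revenue divided by total implied 1Q23 revenue.

```python
# 2Q23 revenue (US$B) and reported growth vs 1Q23, from the table above.
revenue = [12.9, 11.2, 11.0, 8.85, 7.17, 5.55, 5.36, 4.53,
           4.46, 4.33, 3.75, 3.30, 3.26, 3.20, 2.68]
growth = [0.11, 0.073, 0.53, 0.013, -0.10, 0.39, 0.001, 0.035,
          -0.007, 0.019, 0.016, 0.057, 0.004, 0.017, 0.025]

prior = [r / (1 + g) for r, g in zip(revenue, growth)]  # implied 1Q23 revenue
aggregate = sum(revenue) / sum(prior) - 1
print(f"{aggregate:.1%}")  # ~8.7%, in line with the ~8% 'Total of above' row
```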

Most companies are guiding for continued growth in 3Q 2023 from 2Q 2023. Of the eleven companies providing guidance, nine call for revenue increases ranging from 0.4% (Qualcomm, Texas Instruments and Renesas Electronics) to 6.4% (AMD). Infineon expects a 2.2% decline and Analog Devices guided for a 5% decline. The memory companies (Samsung, SK Hynix, and Micron Technology) all stated they see improving demand in the second half of 2023. Intel cited continuing inventory issues, while MediaTek and Renesas reported lower or balanced inventories. Automotive will continue to be a driver in 3Q 2023, as cited by TI, Infineon, STMicroelectronics, and NXP Semiconductors. The weighted average guidance of the eleven companies is 2% growth in 3Q 2023 from 2Q 2023. Companies providing a range of revenue guidance had high-end growth ranging from 3 to 7 percentage points higher than their midpoint guidance.

Even with positive quarter-to-quarter growth in 2Q 2023, 3Q 2023, and possibly 4Q 2023, the semiconductor market will show a substantial decline for the year 2023. Estimates of the size of the decline range from 20% (Future Horizons) to 10% (Tech Insights). Future Horizons’ Malcolm Penn stated he will raise his 2023 projection based on the 2Q 2023 WSTS data but has not yet cited a specific number. Our projection from Semiconductor Intelligence (SC-IQ) is a 13% drop in 2023. Looking at 2024, most projections are similar: Tech Insights at 10% growth, SC-IQ at 11% and WSTS at 11.8%. Gartner is the most bullish at 18.5%. The primary difference between the 2024 forecasts is the assumption on the memory market: WSTS and Gartner are close on 2024 growth for non-memory products, at 5.0% and 7.7% respectively, but Gartner projects 70% growth for memory while WSTS forecasts 43%.

Our Semiconductor Intelligence July newsletter stated we likely reached the low point in electronics production in the second quarter of 2023. The semiconductor market finally showed quarter-to-quarter growth in 2Q 2023. Major semiconductor companies are projecting continued revenue growth into 3Q 2023. The semiconductor market has finally turned and is headed toward probable double-digit growth in 2024.

Also Read:

Has Electronics Bottomed?

Semiconductor CapEx down in 2023

Steep Decline in 1Q 2023


Next-Gen AI Engine for Intelligent Vision Applications
by Kalar Rajendiran on 08-14-2023 at 10:00 am

Artificial Intelligence (AI) has witnessed explosive growth in applications across various industries, ranging from autonomous vehicles and natural language processing to computer vision and robotics. The AI embedded semiconductor market is projected to reach $800 billion by 2030, up from just $48 billion back in 2020. [Source: May 2022 IBS Report] Computer-vision-driven applications account for a significant part of this incredible growth projection. Real-time AI examples in this space include drones, automotive applications, mobile cameras and digital still cameras.

The advent of AlexNet more than a decade ago was a major advancement in the realm of object detection compared to erstwhile methods. Since then, convolutional neural network (CNN) models have been the dominant method of implementing object detection on digital signal processors (DSPs). While the CNN model has evolved to deliver a 90% accuracy level, it requires a lot of memory, and more memory means higher power consumption. In addition, advances in memory performance have not kept pace with advances in compute performance, impacting efficient data movement.

Over the last few years, transformer models which were originally developed for natural language processing have been adapted for object detection purposes with 90% accuracy. But they are more demanding on compute capacity compared to CNNs. So, a combination of CNNs and transformers is a better solution for leveraging the best of both worlds, which in turn is pushing the demand for increasingly complex AI models and the need for real-time processing using specialized hardware accelerators.

As network models evolve, the number of cameras per application, image sizes and resolutions are also increasing dramatically. While accuracy is a critical requirement, performance, power, area, flexibility and implementation costs are key decision factors too. These factors are driving decisions on AI accelerator architectures, and Neural Processing Units (NPUs) and DSPs are emerging as key components, each offering unique strengths to the world of AI.

Use DSP or NPU or Both for Implementing AI Accelerators

DSPs do provide more flexibility than NPUs. Using a vector DSP, AI can be implemented in software. DSPs can perform traditional signal processing as well as lower performance AI processing with no additional area penalty. And a vector DSP can be used to support functions that cannot be processed on an NPU.

On the other hand, NPUs can be implemented to accelerate all common AI network models such as CNN, RNN, transformers, and recommenders. For multiply-accumulate (MAC) dominated AI workloads, NPUs are more efficient in terms of power and area. In other words, for mid to high-performance AI needs, an NPU-approach is better than a DSP approach.

The bottom line is that except for low-end AI requirements, an NPU approach is the way to go. But as AI network models are rapidly evolving, a combined DSP-NPU approach is the prudent solution for future-proofing one’s application.

AI Accelerator Solutions from Synopsys

Synopsys’ broad processor portfolio includes both its ARC® VPX vector DSP family of cores and its ARC NPX neural network family of cores. The development platform comes with the ARC MetaWare MX development tools. The VPX cores are programmable in C or C++, while the NPX cores’ hardware is automatically generated by feeding the customer’s trained neural network into Synopsys development tools. The platform supports widely used frameworks for customers to train their neural networks. The MetaWare NN SDK is the compiler for the trained NN, and the development tools also offer virtual simulation for testing purposes.

ARC NPX6 is Synopsys’ sixth-generation general-purpose NPU and supports any CNN, RNN or transformer. Customers who bring their own AI engine can easily pair it with a VPX core, and customers can also design their own neural network engine using Synopsys’ ASIP Designer tool.

As applications’ demand for TOPS performance grows, the challenge of memory bandwidth grows with it. Some hardware and software features need to be added to minimize this bandwidth challenge. To address this scaling requirement, Synopsys uses L2 memory to help minimize data traffic over an external bus.

An ARC NPX6-based solution can deliver up to 3,500 TOPS by scaling all the way to a 24-core NPU with 96K MACs and instantiating up to eight NPUs.
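
A quick back-of-the-envelope check of that scaling claim (our arithmetic, using the usual convention of two operations per MAC per cycle; the effective clock rate is the unknown being solved for):

```python
# What effective clock does 3,500 TOPS imply for eight 96K-MAC NPU instances?
total_tops = 3500
npu_instances = 8
macs_per_npu = 96_000
ops_per_mac_per_cycle = 2        # one multiply + one accumulate

ops_per_second = total_tops * 1e12
clock_hz = ops_per_second / (npu_instances * macs_per_npu * ops_per_mac_per_cycle)
print(f"{clock_hz / 1e9:.2f} GHz")   # ~2.28 GHz; vendor figures may also
                                     # assume sparsity or other acceleration
```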

Summary

Combining vector DSP technology and neural processing technology creates a synergistic, future-proofed solution that can revolutionize AI acceleration. Synopsys offers a broad portfolio of IP in addition to the VPX and NPX families of IP cores, as well as other tools, such as the Platform Architect tool, that help explore, analyze and visualize the data of AI applications. High-quality IP and a comprehensive, easy-to-use tool platform are needed to achieve fast time to market.

For more details, visit the product page.

Also Read:

VC Formal Enabled QED Proofs on a RISC-V Core

WEBINAR: Leap Ahead of the Competition with AI-Driven EDA Technology

Computational Imaging Craves System-Level Design and Simulation Tools to Leverage AI in Embedded Vision


Morris Chang’s Journey to Taiwan and TSMC
by Daniel Nenni on 08-14-2023 at 6:00 am

High volume manufacturing is a 24/7 business with 12 hour shifts. You don’t always get to pick the shifts you want and you must cover for others when they can’t. It’s a challenging career and not for the faint of heart like myself.

In the 1980s and 1990s I spent time in Japan working with some of the top Japanese semiconductor manufacturers. It was an amazing experience but I walked away wondering how the US would be able to compete. The Japanese people I met worked very hard to honor their families and country. Back in the United States we worked 9-5 for a paycheck. Morris Chang knew this from his experience at TI and that is why he headed to Taiwan.

As I have written, Taiwan’s early start in semiconductors did not begin with Morris Chang, but he figures prominently in Taiwan’s rise to dominance. Here is a brief biography of Morris (ChatGPT 4.0):

Morris Chang (张忠谋) is widely recognized as the father of Taiwan’s semiconductor industry. Born in 1931 in Ningbo, China, Chang moved to the U.S. in the late 1940s, where he pursued higher education in the field of physics and engineering.

Here are a few key points about Morris Chang:

  1. Education: Morris Chang holds degrees from several esteemed institutions. He received a B.S. and M.S. in Mechanical Engineering from the Massachusetts Institute of Technology (MIT) and a Ph.D. in Electrical Engineering from Stanford University.
  2. Texas Instruments: Before his endeavors in Taiwan, Chang worked at Texas Instruments (TI) in the United States for 25 years. During his tenure there, he held various senior positions including Group Vice President of the Worldwide Semiconductor Group.
  3. Taiwan Semiconductor Manufacturing Company (TSMC): In 1987, Chang founded the Taiwan Semiconductor Manufacturing Company (TSMC). TSMC is the world’s first dedicated semiconductor foundry, meaning it manufactures chips for other companies without designing its own products. This business model transformed the global semiconductor industry, enabling a myriad of fabless semiconductor companies to focus on chip design without having to invest in expensive manufacturing facilities.
  4. Economic Impact: Under Chang’s leadership, TSMC became a cornerstone of Taiwan’s IT industry, propelling the country into a major role in the global semiconductor market. Taiwan’s importance in chip manufacturing can’t be overstated, with TSMC at the forefront of cutting-edge semiconductor technology and production.
  5. Retirement: Chang retired from TSMC in 2018, but his influence in the semiconductor world and his legacy as a pioneer in the foundry business model will persist for years to come.
  6. Recognition: Chang has received numerous awards and honors over the years in recognition of his contributions to the semiconductor industry and his visionary leadership.

In summary, Morris Chang is a seminal figure in the semiconductor industry, especially in the foundry business model. His leadership and strategic vision not only transformed the industry but also elevated Taiwan’s standing in the global tech ecosystem.

From a semiconductor insider’s point of view, there is a lot more to this story. Morris started his education at Harvard but MIT turned out to be more to his liking both financially and technically. For engineers, MIT was the place to be and Morris was an engineer at heart. Morris chose mechanical engineering but he quickly became obsessed with the transistor during his first job right out of college.

After graduating from MIT (1955), Morris went to work for Sylvania, a company with a long history in lighting and electronics. After 3 years, Morris wanted to go where the transistor innovation was, and that was Texas Instruments. His dream was to be the head of the central research labs at TI, but Morris did not have a PhD, or even a degree in electrical engineering. In fact, he twice failed a qualifying exam for a doctoral degree at MIT.

Morris first worked in the germanium transistor business, which would soon be surpassed by silicon transistors. TI was IBM’s major supplier (20% of TI’s revenue) and Morris was in charge of the IBM program. Getting yields ramped up was the first big challenge for Morris. He burned the midnight oil, cracked the yield code, and became a hero. Morris was promoted to head of the germanium transistor program, and in 1963 he was sent to Stanford to get his PhD for further advancement. He finished the PhD program in record time (2.5 years) while still spending time at TI.

When Morris returned to TI full time, germanium was no longer leading-edge technology, so Morris took a leadership position in the TI IC group. Morris’s influence grew, and in 1973 he became head of the semiconductors group and again became a hero. TI was the king of TTL (Transistor-Transistor Logic) with a 60% market share and more than $1B in revenue, but TTL was soon replaced by MOS, and TI lost the MOS race.

SemiWiki: Texas Instruments and the TTL Wars

Morris’s downfall at TI was MOS memory and microprocessors. Other companies caught up with TI (Mostek) and in some cases surpassed them. Microprocessors became the next big thing and TI had the first microprocessor patent, not Intel or Motorola. When IBM chose the Intel 8088 microprocessor for their first personal computer over the TI TMS9900 and the Motorola 6800 (amongst others), Morris took this as a personal defeat.

In 1977 Morris’s departure from TI officially started when he was removed as Group VP of Semiconductors and became Group VP of Consumer Products, a somewhat troubled business at the time (calculators and toys). Morris was then moved to head of corporate quality and his fall from grace was complete. Morris wasn’t fired from TI but his departure was not unexpected.

Morris then spent a difficult two years (1984-86) at General Instrument under CEO Frank Hickey before calling it quits and heading to Taiwan. I was a field engineer for GI during the Hickey era (1979-82), and it was absolutely a tumultuous time for the company.

Bottom Line: The work ethic and experience Morris developed through his career with innovative electronic and semiconductor companies was the perfect foundation for the customer centric pure-play foundry model that is TSMC. It should be noted that TI is today a semiconductor powerhouse, one of the longest standing semiconductor companies in the world. TI is also a long standing customer of TSMC.

To be continued…. How Philips saved TSMC!

Also Read:

How Taiwan Saved the Semiconductor Industry


Podcast EP176: Implementing End-to-End Security with Axiado’s New Breed of Security Processor
by Daniel Nenni on 08-11-2023 at 10:00 am

Dan is joined by Tareq Bustami, senior vice president of marketing & sales, Axiado. Tareq has more than 20 years of experience in the semiconductor and networking industries. Before joining Axiado, he led NXP’s embedded processors for the wired and wireless markets, and was in charge of growing multi-core processor solutions for enterprise, data center infrastructure and general embedded and industrial markets.

Tareq describes Axiado’s unique and comprehensive approach to security. He provides details about its trusted control/compute unit (TCU), an AI-driven hardware security platform. A broad look at the challenges of implementing end-to-end security is presented, along with a discussion of how Axiado’s technology addresses these challenges.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


The Recovery has Started and it’s off to a Great Start!
by Malcolm Penn on 08-11-2023 at 6:00 am

August’s WSTS Blue Book showed Q2-2023 sales rebounding strongly, up 4.2 percent vs. Q1, heralding the end of the downturn and welcome news for the beleaguered chip industry.

The really good news, however, was that the downturn bottomed one quarter earlier than previously anticipated. This pull-forward added only a modest US$11 billion to Q2’s US$244 billion in sales, but that was enough to swing Q2’s growth from minus 5.0 percent to plus 4.2 percent.

A small change in the numbers at the start of the year makes a huge difference to the quarterly growth rates and hence the final year-on-year number.
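
A tiny numerical sketch of that sensitivity (the Q1 values are illustrative, chosen only to bracket the reported swing):

```python
# The same Q2 sales figure yields very different growth rates depending on
# small revisions to the Q1 base -- minus 5.0% and plus 4.2% from the text.
q2 = 244.0
for q1 in (256.8, 244.0, 234.1):
    print(f"Q1 = {q1:.1f} -> Q2 growth = {(q2 / q1 - 1) * 100:+.1f}%")
```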

Market Detail

The market turnaround was driven by a dramatic change in the Asia/Pac region, with 5.4 percent quarter-on-quarter growth, followed by the US (plus 3.5 percent), Japan (plus 2.1 percent) and Europe (plus 1.8 percent).

On an annualised basis, Q2-2023 was down 17.3 percent vs. Q2-2022, with Asia/Pac down 22.6 percent, the US down 17.9 percent and Japan down 3.5 percent; Europe was the only region showing year-on-year growth, at plus 7.6 percent.

The near-term market outlook is starting to look a lot stronger, driven by the positive impact of the inventory burn, stronger than expected resilience in the global economy, especially in the USA, and a seemingly robust demand boost from the emerging AI market.

Forecast Summary

Looking ahead to the second half of the year, the overall industry consensus has now (mostly) acknowledged a likely double-digit decline for 2023, versus the ‘positive growth’ positions predicted this time last year.

Source: SEMI/Future Horizons

Future Horizons stood alone in the crowd when we first published our 2023 double-digit decline forecast 15 months ago in May 2022, and likewise when we stood by that number at our January 2023 Industry Update Webinar, when all others, bar one, were predicting a very mild downturn followed by a sharp V-shaped rebound in 2024.

The stronger-than-expected second-quarter results will now push our 2023 forecast beyond the bull end of our January 2023 forecast scenario, but our longer-term concerns, regarding the still-uncertain economic outlook and excess CapEx spending, show no signs yet of abating.

Over-capacity is the industry’s number one enemy, depressing ASPs and condemning the industry to low dollar value growth. An economic slowdown will nip any recovery in the bud.

Market Outlook

With two of the four industry fundamentals, unit sales and ASPs, now slowly but surely rebalancing, the havoc and inevitable consequences of the preceding supply-side shortage-induced market boom are now starting to recede. The stage is now set for a return to industry growth, but from a much-reduced base.

The size and shape of the recovery will depend on the potentially derailing impact of capacity (over-investment) and demand (the economy), the former of which is not looking healthy, and the latter still steeped in mixed signals and uncertainty.

That said, 2023 will undoubtedly transpire to have been a line in the sand; and 2024 will equally clearly be better. The recovery has got off to a great start, but its pace and form have yet to be determined.

We will be covering all this, together with an update to our outlook for 2023-24, at our forthcoming Industry Update Webinar on Tuesday 12 September at 3pm UK BST (GMT+1). Register now at:

https://us02web.zoom.us/webinar/register/7416911384194/WN_akISM9QxS8uNZS_oihzqFQ

Also Read:

The Billion Dollar Question – Single or Double Digit Semiconductor Decline

The Semiconductor Market Downturn Has Started

Semiconductor Crash Update


Japan’s Foundry Morgana Part II
by Karl Breidenbach on 08-10-2023 at 10:00 am

Japan’s Foundry Morgana: A Journey from Mirage to Reality?

Three years ago, I wrote an article about Japan’s semiconductor industry under the title “Japan’s Foundry Morgana.” Back in September 2020, I analyzed the decline of Japan’s once world-leading semiconductor sector and the ambitious plans to invite TSMC to build an advanced-process fab. Who could have imagined that by 2023, fueled by the experiences of the semiconductor crisis, the plans to revive the Japanese foundry footprint would have advanced with such pace and determination? It’s worth looking again at what has happened so far and whether Japan could be a blueprint for other regions on the quest for semiconductor supply resilience.

A Look Back: The State of the Japanese Semiconductor Industry

In the late 1980s, Japan emerged as a global leader in semiconductors, holding strong alongside the US. Home to over 30 large-scale industry players like Renesas, Hitachi, Denso, Fujitsu, and Mitsubishi Electronics, Japan boasted a robust semiconductor ecosystem. By 1990, the Japanese IDMs NEC, Toshiba, and Hitachi had taken the top three positions in the worldwide semiconductor sales ranking, just ahead of Intel and Motorola.

Top 10 Worldwide Semiconductor Sales Leaders from 1985 to 2021, Source: IC Insights

However, since the year 2000, Japan’s share of international IC exports declined sharply, dropping from 14% to less than 5% by 2020. Despite losing ground, some Japanese IDMs continued to excel in specialized segments like power electronics and optical CMOS sensors. 

Japanese silicon foundries primarily arose as carve-outs from leading IDMs, but they struggled to keep up with rivals in Taiwan, China, and the US. Japan held only 2% of global foundry capacity as of 2020. Factors such as late market entry into the foundry business, a lack of cost-containment strategies, and a narrow market segment focus led to the decline of Japan’s foundry ecosystem.

The completion of Nuvoton’s acquisition of Panasonic’s semiconductor unit in September 2020 marked a symbolic end to the Japan-owned foundry landscape, leaving a limited number of small-scale foundries.

The New Landscape: Attracting TSMC and Rebuilding the Foundry Sector

Fast forward to today, and Japan’s semiconductor industry is writing a new chapter. The once ambitious plan to invite TSMC has materialized, resulting in TSMC’s agreement to build two fabs in Japan. Backed by extensive government funding, these fabs symbolize a fresh start and an alignment with the global semiconductor landscape.

Japan’s foundry landscape as of 2023, Source: Own research

The Japanese government’s engagement in this venture is unprecedented, pledging to shoulder a significant portion of the construction costs. Leaders of the ruling party’s lawmaker coalition on chips recognize this as a national strategy, part of Japan’s efforts to revive its domestic chipmaking industry, a sector that is viewed as crucial for growth and economic security.

The joint effort between Hitachi, Renesas, Toshiba, and the Japanese Ministry of Economy signifies a strategic shift. It’s not just about reviving the Japanese-owned foundry sector; it’s about embracing international collaboration, recognizing the importance of supply security, and focusing on processes that align with Japan’s core strengths.

The Rise of Rapidus: A Bold Leap Forward

Alongside the collaboration with TSMC, Japan’s ambitious project Rapidus is a critical piece of the puzzle. Aiming for 2nm production in 2027, Rapidus represents a daring and costly venture. Supported by a consortium that includes IBM and backed by the Japanese government and large conglomerates, Rapidus seeks to reshape Japan’s semiconductor landscape by leapfrogging several generations of nodes.

The endeavor is both extremely challenging and tremendously expensive; modern fabrication technologies are costly to develop in general. Rapidus itself projects that it will need approximately $35 billion to initiate pilot 2nm chip production in 2025 and then bring that to high-volume manufacturing in 2027.

Despite the high stakes, the vision is clear and backed by strong commitment. Rapidus aims to serve a limited but significant client base, including tech giants like Apple and Google, focusing on quality and innovation. The focus on a limited set of customers is a strategic move to secure enough demand and revenue to recover the massive investment, while avoiding emulation of TSMC’s extensive client base.

Rapidus’ success holds much significance for Japan’s advanced semiconductor supply chain, symbolizing not just a money-making venture but a catalyst for revitalizing the Japanese industry. The Japanese government views it as a critical step towards creating more opportunities for local chip designers, even if immediate success may not be guaranteed.

Conclusion: From Mirage to Reality, A Blueprint for Others?

The reference to “Foundry Morgana” or fata morgana in my initial article resonated with the elusive, almost mythical nature of Japan’s semiconductor revitalization efforts. However, today’s landscape shows a transformation from illusion to reality.

With TSMC’s strategic presence and the pursuit of Rapidus, Japan demonstrates a new level of commitment. It is embracing both its past strength and future potential, rebuilding its foundry landscape with international collaboration, and aligning with global advancements.

Japan’s Foundry Morgana is no longer just a distant reflection. It’s a (potential) reality ;-), emerging on the horizon as a renewal of semiconductors Made in Japan.

The dynamics between Rapidus and TSMC and the larger global context add more intrigue to the resurrection of Japan’s semiconductor industry. The potential impact of geopolitics, market cap, governmental subsidization, and known unknowns regarding yields and timetables further adds to the complexity of this journey.

Furthermore, Japan’s approach to revitalizing its semiconductor industry may serve as a blueprint for other regions seeking to enhance their own technological prowess. Europe, for example, with its ambitions to grow its semiconductor manufacturing and reduce dependence, could look to Japan’s strategy for inspiration.

Sources:

https://www.anandtech.com/show/18979/rapidus-wants-to-supply-2nm-chips-to-tech-giants-challenge-tsmc

https://www.taipeitimes.com/News/biz/archives/2023/08/04/2003804192

https://www.electronicsweekly.com/news/business/japan-asks-tsmc-build-fab-2020-07/

https://www.taiwannews.com.tw/en/news/3999523

https://www.semiconductors.org/wp-content/uploads/2018/06/SIA-Beyond-Borders-Report-FINAL-June-7.pdf

https://sst.semiconductor-digest.com/2016/07/whats-happening-to-japans-semiconductor-industry/

https://blog.semi.org/semi-news/japan-a-thriving-highly-versatile-chip-manufacturing-region

https://laylaec.com/2018/10/19/why-doesnt-japan-have-a-large-semiconductor-foundry-like-tsmc-samsung-or-intel-anymore/

Also Read:

How Taiwan Saved the Semiconductor Industry

Intel Enables the Multi-Die Revolution with Packaging Innovation

TSMC Redefines Foundry to Enable Next-Generation Products