
Ansys Revving up for Automotive and 3D-IC Multiphysics Signoff at DAC 2023
by Daniel Nenni on 06-26-2023 at 10:00 am


Highlights:

  • Ansys CTO Prith Banerjee will be delivering the Visionary Speaker opening address on Tuesday, July 11th
  • There will be technical presentations every hour in the Ansys Booth Theater (#1539)
  • Get yourself a complimentary sit-down breakfast and a discussion on automotive electronics by registering for the Ansys DAC Breakfast Panel on Tuesday morning
  • Register for some of the very limited seating at Ansys’ Customer Workshops with 3 technical tracks
  • Ansys customers have contributed over 22 technical papers to the DAC conference Engineering Track

In just a couple of weeks, the 2023 Design Automation Conference and Exhibit will run July 9th – 13th in San Francisco, and Ansys will be attending in full force. Ansys’ chief technology officer, Prith Banerjee, has been honored with an invitation to deliver the Visionary Speaker address at the opening of the conference on Tuesday morning. Prith will be sharing his insights on “Driving Engineering Simulation and Design with AI/ML” and the lessons Ansys has learned incorporating artificial intelligence and machine learning capabilities into its products.

The Ansys 40×40 booth (#1539) is one of the larger ones in this year’s Exhibit, with its major theme being the latest multiphysics technology for 2.5D/3D-IC signoff. This includes Thermal Integrity, Electromagnetic Signal Integrity, and Structural Reliability (stress/warpage). The technology for multi-die, heterogeneous integration has been advancing by leaps and bounds as the semiconductor industry moves to this new design paradigm. Presentations on this and other topics are scheduled every hour in the Ansys DAC Booth Theater, where any DAC attendee can sit down and ask questions of the presenting experts. Two information stations will be available for self-guided browsing through the full range of Ansys technology offerings or for engaging with any of the Ansys support specialists standing by. Ansys is also sponsoring the Community Connection Zone (#1551) right next door to the Ansys booth, where people can sit down, take a break, and relax with a coffee or a bite to eat.

The second theme at this year’s Ansys booth responds to heightened customer interest in Automotive Electronics. The automotive sector is undergoing tectonic changes as manufacturers rush to adapt to three fundamental drivers of innovation: electrification of the power train, autonomous driving, and the over-the-air connected vehicle. Each of these forces is increasing the electronic and semiconductor content in future vehicles. This aligns with Ansys’ deep and broad set of Automotive solutions, from high-performance compute (HPC) chips for AI/ML algorithms to battery management, mechanical reliability, crash test simulation, lighting, and more.

This year’s edition of the traditional Ansys DAC Breakfast Panel will dive more deeply into the Automotive theme. This event serves a full complimentary breakfast to attendees who register for the panel discussion in the Marriott Marquis on Tuesday, July 11th from 7:00am to 8:30am (room Golden Gate B). The topic of this year’s panel discussion is “Driving Design Excellence: The Future of Automotive Electronics”. The discussion will be moderated by Ansys senior chief technologist for automotive Judy Curran, who has over 30 years of experience in the automotive industry, with a roster of panelists from Rivian, Intel, Synopsys, and more. Attendees must register for the Breakfast Event at the Ansys DAC webpage.

The impressively broad usage of Ansys products across the semiconductor industry has once again enabled Ansys customers to submit an equally impressive 22 technical papers that have been accepted by the DAC Conference and will be presented in the Engineering Track. In addition, Ansys product specialist Lang Lin will be joined by researchers from Kobe University and the University of Maryland to deliver a tutorial on “Side-Channel Analysis: from Concepts to Simulation and Silicon Validation” on Monday afternoon.

Ansys is organizing a series of technical Customer Workshops in the Ansys Booth conference room. The workshops are 2-hour sessions organized into three technical tracks in which Ansys customers present detailed technical summaries of their experiences and successes in applying Ansys technology to their production designs. Seating for these valuable workshops is extremely limited, so reserve your seat as early as possible.

Finally, Ansys is fully engaged with the many discussions and panels that make DAC the must-attend event of the year.

So please make sure to register to attend the conference and join Ansys at DAC. Register for one of our exclusive events or schedule a meeting as we reach out to our customers and partners in advancing the state-of-the-art in Electronic Design Automation.

Also Read:

Keynote Sneak Peek: Ansys CEO Ajei Gopal at Samsung SAFE Forum 2023

WEBINAR: Revolutionizing Chip Design with 2.5D/3D-IC design technology

Chiplet Q&A with John Lee of Ansys


Assessing EUV Wafer Output: 2019-2022
by Fred Chen on 06-26-2023 at 6:00 am


At the 2023 SPIE Advanced Lithography and Patterning conference, ASML presented an update on its EUV lithography systems in the field [1]. The EUV wafer exposure output was presented and is shown below in table form:

From this information, we can attempt to extract and assess the EUV wafer output per quarter. First, since there are quarters with no reported output, we will interpolate with a quartic polynomial fit; a quartic is the natural choice because five data points are available, and five points determine a degree-four polynomial exactly.

Cumulative EUV wafer exposures from 2019 to 2022. Quarter 0 corresponds to before Q1 2019.
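
For readers who want to reproduce the exercise, here is a minimal sketch of the interpolation in Python, using made-up cumulative values in place of ASML’s reported figures:

```python
import numpy as np

# Quarters with reported cumulative output (illustrative values only;
# ASML's actual figures are in the table from [1])
q_known = np.array([0, 4, 8, 12, 16])
cum_known = np.array([0.5e6, 2.0e6, 4.5e6, 9.0e6, 17.0e6])

# Five data points determine a degree-4 (quartic) polynomial exactly
fit = np.poly1d(np.polyfit(q_known, cum_known, deg=4))

# Estimate the cumulative output for the quarters with no reported data
all_q = np.arange(17)
cum_all = fit(all_q)
```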

For each quarter, we can calculate the average wafers per day per EUV tool, by taking the difference between wafers exposed in a pair of consecutive quarters, dividing by the average of the number of available systems of the two quarters, then dividing by 90 days. The resulting trend is shown below:

Average wafers per day per EUV system, for each quarter from 2019 to 2022.
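
The calculation itself is straightforward; a short sketch, again with illustrative numbers rather than the reported data:

```python
import numpy as np

cum_wafers = np.array([2.0e6, 2.6e6, 3.3e6, 4.1e6])  # cumulative exposures per quarter
systems = np.array([40, 44, 48, 52])                  # installed EUV systems per quarter

per_quarter = np.diff(cum_wafers)                # exposures within each quarter
avg_systems = (systems[:-1] + systems[1:]) / 2   # average installed base of the two quarters
wafers_per_day = per_quarter / avg_systems / 90  # average wafers/day per tool
```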

The average EUV exposures per tool broke through 1000 wafers per day in a couple of quarters, but most recently dropped to 904 wafers per day, or less than 40 wafers per hour. This looks like surprisingly low throughput compared to reported values of 120-180 wafers per hour [1]. What could this mean?

A first possibility is that the EUV tools are simply not used that often and are idle most of the time. A second possibility is that the tools are in maintenance most of the time. However, uptime of >90% has been reported [2]. A third possibility would be higher doses, possibly over 100 mJ/cm2, to address stochastic effects [3]. However, this seems to run counter to all the work done on achieving published throughput goals. Yet another possibility is that the graph does not count multiple layer exposures on a wafer separately. Hence, a wafer receiving 15 EUV layers at 120 wafers per hour per layer would look like 8 wafers per hour, for example. However, compared to the ~40 wafers per hour observed on average, this is an even lower output rate! Where is the discrepancy? Research and development (R&D) wafers have not been considered. If only 20% of all EUV wafers run were for production, then the numbers could work out more reasonably. A possible breakdown would be as below:

An example of EUV use breakdown for Q4 2022. In this case, uptime is allocated as 20% for production, 20% for R&D, 60% idle. The resulting monthly production volume is 933,333 wafers/month. Production assumed to run wafers at 120 WPH, R&D at 128 WPH.
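
A back-of-envelope check of this breakdown, where the tool count is an assumption chosen for illustration rather than a figure from the article:

```python
tools = 54                      # assumed installed EUV base in Q4 2022
hours_per_month = 24 * 30
prod_frac, rnd_frac, idle_frac = 0.20, 0.20, 0.60
prod_wph = 120                  # production wafers per hour, per the caption

production = tools * hours_per_month * prod_frac * prod_wph
print(f"{production:,.0f} wafers/month")   # ~933,000 with these assumptions
```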

For reference, TSMC monthly output is reported as up to 150,000 wafers/month [4]. If the monthly production volume is not over 900,000 wafers/month but actually ~250,000 wafers/month (so that TSMC’s portion is 60% of the global total), the fraction in production needs to be ~5.3%. With the same wafer run rates, the R&D and idle time fractions don’t change appreciably.

In this example, uptime is allocated as 5.3% for production, 28% for R&D, 67% idle. The resulting monthly production volume is 247,333 wafers/month. Production assumed to run wafers at 120 WPH, R&D at 128 WPH.

The noticeable difference is the number of layers per production wafer. On average, it has increased to 57. This must include multiple exposures of a given layer in many cases. For example, 14 layers with four exposures and 1 layer with a single exposure would give 15 EUV layers and 57 EUV exposures total.

In both of the above examples, yield loss is not considered. If we assume that the monthly production volume is actually 420,000 wafers, but that yield loss had brought it down to 250,000, the production use is 9%. The 33 EUV exposures could come from 6 layers with four exposures, and 9 layers with single exposure, to give 15 EUV layers total.

In this example, uptime is allocated as 9% for production, 26% for R&D, 65% idle. The resulting monthly production volume is 420,000 wafers/month. Production assumed to run wafers at 120 WPH, R&D at 128 WPH.

The picture that emerges from considering the above scenarios is that there is substantial (>60%) idle time, some yield loss, and a good deal of multiple exposures (multipatterning) for some EUV layers, if we assume the EUV systems are running at least 120 wafers per hour. Otherwise, if the tools are not idle or under maintenance or repair for that much time, the actual running throughput is often (on average) <40 wafers per hour. Very high doses to address stochastic effects naturally result in such low throughputs.

References

[1] C. Smeets et al., Proc. SPIE 12494, 1249406 (2023).

[2] https://semiengineering.com/euv-challenges-and-unknowns-at-3nm-and-below/

[3] https://semiengineering.com/finding-predicting-euv-stochastic-defects/

[4] https://www.digitimes.com/news/a20220323PD215.html

This article first appeared in LinkedIn Pulse: Assessing EUV Wafer Output: 2019-2022

Also Read:

Application-Specific Lithography: 28 nm Pitch Two-Dimensional Routing

A Primer on EUV Lithography

SPIE 2023 – imec Preparing for High-NA EUV

Curvilinear Mask Patterning for Maximizing Lithography Capability


Podcast EP166: How iDEAL Semiconductor is Revolutionizing Power Device Design & Manufacturing
by Daniel Nenni on 06-23-2023 at 10:00 am

Dan is joined by Ryan Manack, Vice President of Marketing for iDEAL Semiconductor. Prior to iDEAL, Ryan spent 15 years at Texas Instruments, which I consider one of the most influential companies in the history of semiconductors.

Ryan describes SuperQ, the unique core technology platform of iDEAL Semiconductor. Using the approach defined by SuperQ, advanced power devices can be designed and built with standard CMOS manufacturing technology, avoiding the need to utilize alternate technology platforms that are not as mature or reliable. The result is advanced capabilities with current technology.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Efabless Celebrates AI Design Challenge Winners!
by Daniel Nenni on 06-23-2023 at 6:00 am


The first AI Generated Open-Source Silicon Design Challenge invited participants to use generative AI to design an open-source silicon chip and tape it out in just three weeks. The contestants were required to create Verilog code from natural language prompts and then implement their designs using the chipIgnite platform and the OpenLane open-source design flow.

The challenge was a success, with participants from all over the world, some of whom had never designed a chip before, and virtually none of whom had previously used OpenLane. Six designs successfully met all the criteria, and in a very close call, three were declared winners by an outside panel of judges based on pre-determined criteria including design completeness, documentation, technical merit, and community interest.

The first-place winner of the contest was QTCore-C1 by Hammond Pearce at New York University. The design is a co-processor that can be used for many applications, such as predictable-time I/O state machines for PIO functions, as seen on some microcontrollers. It was developed using the Chip-Chat methodology that the NYU team has published.

The second-place winner of the contest was Cyberrio by Xinze Wang, Guohua Yin, and Yifei Zhu at Tsinghua-Berkeley Shenzhen Institute. This design is a RISC-V CPU, implemented with Verilog code produced via a series of prompts given to ChatGPT-4.

The third-place winner of the contest was Asma Mohsin at Rapid Silicon. The design is a Model Predictive Controller (MPC) used to predict future behavior and optimize control actions for a regulator control circuit; the controller was provided to ChatGPT-4 as MATLAB code, and ChatGPT-4 was then prompted to implement it in Verilog.

The designs were all very impressive, and all the participants successfully demonstrated how tools such as ChatGPT, Bard and others can revolutionize chip design by automating many of the tedious tasks involved in the development process, making it simpler, faster and more efficient.

Efabless will now fabricate the three winning designs on its chipIgnite shuttle. The winners will receive packaged parts and evaluation boards, valued at $9,750 each. In addition, all participants with qualifying designs will receive a free evaluation board and one of the winning AI-generated chips.

Efabless will shortly be featuring videos from the various winning designs and teams describing their experience and lessons learned.

Efabless will also soon release information about the second AI-Generated Design Challenge. The challenge will take place over the summer, with tapeouts expected in September. Stay tuned!

About Efabless

Efabless offers a platform applying open source and community models to enable a global community of chip experts and non-experts to collaboratively design, share, prototype and commercialize special purpose chips. Nearly 1,000 designs and 450 tapeouts have been executed on Efabless over the past two years. The company’s customers include startups, universities, and research institutions around the world.

Also Read:

Why Generative AI for Chip Design is a Game Changer

Join the AI Generated Open-Source Silicon Design Challenge!

A User View of Efabless Platform: Interview with Matt Venn


Optimism Prevailed at CEO Outlook, though Downturn Could Bring Unpredictable Challenges
by Nanette Collins on 06-22-2023 at 10:00 am

CEO Outlook participants, front row (l-r): Niels Faché, John Lee and Prakash Narain; back row (l-r): Scott Seiden, Director Strategic Marketing at Keysight EDA Portfolio, Dean Drako, John Kibarian, Ed Sperling, Bob Smith, Executive Director of the ESD Alliance, Simon Segars and Joe Sawicki. Source: Julie Rogers, Director of Marketing for SEMI Americas and the ESD Alliance

Chances are anyone who attended the CEO Outlook will say it was an engaging, entertaining and enlightening view of the chip design space, though CEO Outlook may be a misnomer as four of the seven panelists had C-Suite titles other than CEO.

Regardless, the collective view was optimistic, though caution prevailed as the economic downturn could bring unpredictable challenges. The discussion was kept on point by moderator Ed Sperling, editor in chief of Semiconductor Engineering. SEMI’s ESD Alliance sponsored the event and it was hosted by Keysight.

As expected, the conversation covered topical subjects like heterogeneous integration, chiplets, education and manufacturing, but continued to drift back to the role AI is playing in the changing industry dynamics. John Kibarian, President and CEO of PDF Solutions and a member of the ESD Alliance Governing Council, reinforced the role of AI, predicting the semiconductor industry will grow to $1 trillion by 2030, propelled by increasing AI compute needs.

AI’s impact on the design tool market and the industry cannot be overstated, agreed Dean Drako, President and CEO of IC Manage and former Governing Council member, who believes it will help accelerate productivity. It will also be a challenge, he warned, given the massive amount of data that AI will generate.

AI puts the industry in an amazing space to monetize what EDA is doing as well as being able to transform the world, allowed Joe Sawicki, Executive Vice President of Siemens EDA and a Governing Council member. AI comes with a host of chip design-related questions that he quickly ticked off –– What if generative AI comes into the design space and how would it be useful or innovative? How would it discover what’s been done? What’s being pulled together in compelling ways? He finished with the promise: “It’s going to be an amazing ride in terms of how we take advantage of these opportunities.”

John Lee, GM and VP at Ansys and a newly elected ESD Alliance Governing Council member, chose a different angle and said heterogeneous integration is both an opportunity and a challenge. Multiphysics around 3D ICs is a big challenge and an opportunity. So too are heterogeneous IC designs.

Kibarian took the manufacturing perspective and sees opportunities to improve production flows. He responded to Lee’s comments by adding that heterogeneous development systems will lead to manufacturing challenges. The system package makes manufacturing difficult because the value isn’t in the wafer fab, and assembly is now a challenging process, due in many ways to geopolitics. The test points are much more complex.

Chiplets and heterogeneous designs have physical challenges that could be electromagnetic, thermal or electrothermal, continued Niels Faché, VP and GM at Keysight EDA and a newly elected Governing Council member. While tools are available, it’s critical that they are applied to solve the problems they’re well suited for and integrated in an overall portfolio, he added. Technologies may be available but may need to be modified for chiplets and 3D ICs. They also need to be part of an integrated workflow, so that designs are not passed from one highly specialized group to another, causing data transfer problems. Faché’s advice to designers is to make sure they have the right tool for the right job and that those tools are integrated in an overall workflow.

At Faché’s mention of chiplets, Sperling turned to Simon Segars, a former member of the ESD Alliance Governing Council, for his insights on the emerging chiplets market. Segars acknowledged the chiplet momentum and the complexity around chip and physical IP, libraries and memories, and in-place blocks for chiplet design. He foresees that a shift will be required –– a practical way forward once designers are comfortable using chiplets.

Prakash Narain, President and CEO of Real Intent and a Governing Council member, is a verification expert and firmly believes opportunities exist to further automate “shift left,” moving verification much earlier in the design flow. Since it’s a design step, the designer must get involved in this process. Due to time pressures, a verification vendor has to create the best experience for success. The challenge is technology innovation, and the industry is responding, he affirmed, by investing in technology and innovation to improve the user experience while expanding the business space and engineering innovation.

As the discussion wound down, one attendee asked panelists what they would tell top U.S. policy makers, given the chance. Drako jumped in, describing his chance to talk recently to President Biden. “Basically, I made three points,” he said. “One was that we need education in the United States and that we need to invest so that we are top of the world in education because that’s how we’re going to compete in the long run. That’s how we competed over the last 500 years when we invented the first public education system. Second, AI is going to change video surveillance and we need to invest, reinvest as a country in manufacturing.”

Lee perhaps summed up the discussion best by noting that all the challenges the panelists talked about cannot be solved immediately. It takes a village to solve them or the idea of open extensible platforms as a form of a workable model. “SaaS-based systems talking to each other is the future. We have to embrace all this. Then we can solve more of the problems we face.”

The ESD Alliance Membership Drive

‘Tis the membership drive season for the ESD Alliance, an industry organization devoted to promoting the value of the electronic system and semiconductor design ecosystem as a vital component of the global electronics industry. It offers programs that address technical, marketing, economic and legislative issues affecting the entire industry. For more information, visit the ESD Alliance website. Or contact Bob Smith, Executive Director of the ESD Alliance, at bsmith@semi.org or Paul Cohen, ESD Alliance’s Senior Manager, at pcohen@semi.org.

Follow SEMI ESD Alliance

www.esd-alliance.org

ESD Alliance Bridging the Frontier blog

Twitter: @ESDAlliance

LinkedIn

Facebook

Also Read:

Nominations for Phil Kaufman Award, Phil Kaufman Hall of Fame Close June 30

SEMI ESD Alliance CEO Outlook Sponsored by Keysight Promises Industry Perspectives, Insights

Cadence Hosts ESD Alliance Seminar on New Export Regulations Affecting EDA and SIP March 28


Tensilica Processor Cores Enable Sensor Fusion For Robust Perception
by Kalar Rajendiran on 06-22-2023 at 6:00 am


While sensor-based control and activation systems have been around for several decades, the development and integration of sensors into control systems have significantly evolved over time. Early sensor-based control systems utilized basic sensing elements like switches, potentiometers and pressure sensors and were primarily used in industrial applications. With rapid advances in electronics, sensor technologies, microcontrollers, and wireless communications, sensor-based control and activation systems have become more advanced and widespread. Sensor networks, Internet of Things (IoT) platforms, and wireless sensor networks (WSNs) further expanded the capabilities of sensor-based control systems, enabling distributed sensing, remote monitoring, and complex control strategies.

Today, sensor-based control and activation systems are integral components in various fields, including industrial automation, automotive systems, robotics, smart buildings, healthcare, and consumer electronics. They play a vital role in enabling intelligent and automated systems that can adapt, respond, and interact with the environment based on real-time sensor feedback. With so many applications counting on sensor-based activation and control, how do we ensure robustness, accuracy, and precision in these systems? This was the context for Amol Borkar’s talk at the recent Embedded Vision Summit conference in Santa Clara, CA. Amol is a product marketing director at Cadence for the Tensilica family of processor cores.

Use of Heterogeneous Sensors

Heterogeneous sensors refer to a collection of sensors that are diverse in their sensing modalities, operating principles, and measurement capabilities and offer complementary information about the environment being monitored. Image sensors, event-based image sensors, radar, lidar, gyroscopes, magnetometers, accelerometers, and Global Navigation Satellite System (GNSS) receivers are a few examples. Heterogeneous sensors are commonly used for redundancy to enhance fault tolerance and system reliability.

Why Sensor Fusion

As different sensors capture different aspects of the environment being monitored, combining these data allows for a more comprehensive and accurate understanding of the surroundings. The result is an enhanced perception of the environment, thereby enabling more informed decision-making.

While more sensors mean more data and that is good, each sensor type has its limitations and measurement biases. Sensors also often provide ambiguous or incomplete information about the environment. Sensor fusion techniques help resolve these ambiguities by combining complementary information from different sensors. By leveraging the strengths of different sensors, fusion algorithms can fill gaps, resolve conflicts, and provide a more coherent and reliable representation of the data. By fusing data from multiple sensors in a coherent and synchronized manner, fusion algorithms enable systems to respond in real time to changing conditions or events.

In essence, sensor fusion plays a vital role in improving perception, enhancing reliability, reducing noise, increasing accuracy, handling uncertainty, enabling real-time decision-making, and optimizing resource utilization.

Fusion Types and Fusion Stages

Two important aspects of sensor fusion are (1) what types of sensor data are to be fused and (2) at what stage of processing the data are fused. The first aspect depends on the application, and the second depends on the types of data being fused. For example, if stereo sensors of the same type are being used, fusing is done at the point of data generation (early fusion). If an image sensor and radar are both used to identify an object, fusing is done at a late stage, after separate processing (late fusion). There are other use cases where mid-fusion is performed, for example, when extracting features based on both image-sensor and radar sensing.
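
As a rough illustration of the difference between these stages (a sketch, not Cadence code), early fusion merges raw, time-aligned samples before any processing, while late fusion merges independently computed results:

```python
import numpy as np

# Early fusion: combine raw, time-aligned samples before any processing
def early_fusion(cam_frame: np.ndarray, radar_frame: np.ndarray) -> np.ndarray:
    return np.concatenate([cam_frame.ravel(), radar_frame.ravel()])

# Late fusion: each sensor pipeline produces its own detection score,
# and the scores are merged, here by confidence-weighted averaging
def late_fusion(cam_score: float, radar_score: float,
                cam_conf: float = 0.6, radar_conf: float = 0.4) -> float:
    return (cam_conf * cam_score + radar_conf * radar_score) / (cam_conf + radar_conf)

fused = late_fusion(cam_score=0.9, radar_score=0.7)  # a single, more confident estimate
```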

Modern Day Solution Trend

While traditional digital signal processing (DSP) is still the foundation for heterogeneous sensor-based systems, it is not easy to scale and automate these real-time systems as they get more complex. In addition to advanced DSP and sensor-fusion capabilities, AI processing is needed to meet the scalability, robustness, effectiveness, and automation requirements of large, complex systems. AI-based sensor fusion combines information at the feature level, instead of fusing all data from different individual sensors.

Cadence Tensilica Solutions for Fusion-based Systems

Cadence’s Tensilica ConnX and Vision processor IP core families, along with its floating-point DSP cores, make it easy to develop sensor fusion applications. Cadence also provides a comprehensive development environment for programming and optimizing applications targeting the Tensilica ConnX and Vision processor IP cores. This environment includes software development tools, libraries, and compiler optimizations that assist developers in achieving high performance and efficient utilization of processor resources.

Tensilica ConnX is a family of specialized processor cores designed for high-performance signal processing, AI, and machine learning applications. The architecture of ConnX cores enables efficient parallel processing and accelerates tasks such as neural network inference, audio processing, image processing, and wireless communication. With their configurability and optimized architecture, these cores offer efficient processing capabilities, enabling developers to build power-efficient and high-performance systems for a range of applications.

The Tensilica Vision Processor is a specialized processor designed to accelerate vision and image processing tasks. With its configurability and architectural enhancements, it provides a flexible and efficient solution for developing high-performance vision processing systems across various industries, including surveillance, automotive, consumer electronics, and robotics.

Summary

Cadence offers a wide selection of DSPs ranging from compact and low power to high performance optimized for radar, lidar, and communications applications in ADAS, autonomous driving, V2X, 5G/LTE/4G, wireless communications, drones, and robotics. To learn more about these IP cores, visit the following pages.

Vision DSPs

ConnX DSPs

FloatingPoint DSPs

Also Read:

Deep Learning for Fault Localization. Innovation in Verification

US giant swoops for British chipmaker months after Chinese sale blocked on national security grounds

Opinions on Generative AI at CadenceLIVE


Intel Internal Foundry Model Webinar
by Scotten Jones on 06-21-2023 at 12:00 pm


Intel held a webinar today to discuss their IDM2.0 internal foundry model. On the call were Dave Zinsner Executive Vice President and Chief Financial Officer and Jason Grebe Corporate Vice President and General Manager of the Corporate Planning Group.

On a humorous note, the person moderating the attendee questions sounded a lot like George Takei, who played Lieutenant Sulu on the original Star Trek. Since Intel is trying to accelerate what they are doing, “Warp 10, Mister Sulu” seems appropriate.

Under Intel’s historical business model, manufacturing and technology development costs were allocated to the business units and to Intel Foundry Services (IFS). The business units and IFS sold their product and had reportable profit and loss statements (P&L). Under the new model, manufacturing, technology development, and IFS will be a reportable P&L and will sell wafers to the Intel business units at market prices, see figure 1.

Figure 1. Internal Foundry Model

This realignment will put a big focus on the new Internal Foundry Business Unit (or whatever they decide to call it). The business units will have increasing freedom to choose processes from internal and external sources and the internal unit will have to compete on price and performance. It will also make it easier to benchmark the internal manufacturing against external foundries because they will be competing on price and with a standalone P&L the relative financial performance will be clear.

The new structure will force the business units to pay a premium for hot lots the same way fabless companies do at foundries. Apparently, Intel runs a lot of hot lots, and it is impacting capacity by 8 to 10%. As a former fab manager, I can confirm that hot lots are very disruptive to fab operations; I hated them and strictly limited how many could be in the fab at one time.

Intel expects operating margins to be negative initially and to turn positive as they scale up and address their cost structure, see figure 2.

Figure 2. Manufacturing Operating Margins

Intel is delivering $3 billion in cost reductions this year, with $2 billion in operating expense savings and $1 billion in cost of sales savings. They are targeting $8 to $10 billion in savings exiting 2025. Figure 3 summarizes some identified savings opportunities.

Figure 3. Cost Savings Opportunities

In Q1 of next year, the internal foundry business unit will have approximately $20 billion of revenue, making it the second largest foundry in the world. Virtually all the revenue in Q1 will be from internal products, but this is the same path Samsung has taken, and they also have a lot of internal revenue.

I think this is an excellent realignment on Intel’s part to ensure their internal manufacturing is competitive on both a technology and cost basis.

One area in the presentation I have an issue with is the following slide, see figure 4.

Figure 4. IFS Customers Benefit from IDM Engineering

The idea behind figure 4 is that Intel will be designing and piloting four products on each new node before external customers get access to the node so Intel will have worked through the early issues and customers will get a more mature process. In my opinion this shows a lack of understanding of the foundry model on Intel’s part. Leading edge fabless companies are not going to accept being many months or even a year behind the leading edge. At TSMC leading fabless customers are involved in the definition and testing of new processes and are lead customers so they can be first to market. A company like Apple is involved and pays to be first to market, they are not going to wait for Intel to launch the processes on their own products first.

There was a discussion on process technology. The “five nodes in four years” message was repeated, and it was noted that two of the five nodes are now done and the other three are on track. I personally don’t count i7, as it is really a plus version of 10nm, but even four nodes in four years is impressive.

Figure 5. Five Nodes in Four Years

Historically Intel was a leader in process technology introducing a new process every two years. The way I have always thought about Intel processes is they will have a node n in high volume production, are ramping up a new node n+1, and ramping down a previous node n-1. They typically have 3 to at most 4 nodes running at any time and this means older nodes are being regularly retired.

Figure 6. IDM 1.0 Drove Decades of Success

Slide 7 illustrates how Intel views the changes in the market.

Slide 7. What Changed in the Environment

When I look at this slide, I agree with the left side graph: capital intensity is increasing. I don’t completely agree with the middle graph. Yes, disaggregation will likely increase node tails, but the reality is that node tails have historically been far shorter for Intel than for the foundries because Intel is focused on leading edge microprocessors. TSMC is still running their first 300mm fab on 130nm, which entered production in the early 2000s, while Intel’s 130nm production shut down by the late 2000s. Disaggregation will help Intel use more trailing edge technology, but external foundry customers will likely be an even larger driver of trailing edge if Intel succeeds in the foundry business.

David Zinsner specifically mentioned that running fabs past their depreciable lifetime generates cash and increases margins. This is a key part of the success of the foundry model. TSMC is still running 130nm, 90nm, 65nm, 40nm, 28nm, and 16nm fabs that are fully depreciated; these are the cash cows paying for new technologies and have the highest margins. It may seem counterintuitive, but the newest nodes with high depreciation pull down TSMC’s corporate margin by a couple of points for the first two years. When a fab becomes fully depreciated, the wafer cost drops by more than half, and the foundries pass on only some of that savings to customers, increasing gross margins.
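
To see why, consider an illustrative calculation with assumed numbers (not TSMC data):

```python
wafer_cost_new = 4000        # $/wafer while depreciation dominates (assumed)
depreciation_share = 0.55    # assume depreciation is just over half the cost
wafer_cost_mature = wafer_cost_new * (1 - depreciation_share)

price_new = 5000             # $/wafer selling price at ramp (assumed)
price_mature = 4200          # only part of the savings is passed on

margin_new = 1 - wafer_cost_new / price_new           # 20%
margin_mature = 1 - wafer_cost_mature / price_mature  # ~57%
print(f"{margin_new:.0%} -> {margin_mature:.0%}")
```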

The longer node life will be particularly critical for Intel going forward. With i4 ramping, i3 coming this year, and 20A and 18A next year, all EUV-based processes, under the old three-nodes-running policy Intel’s non-EUV processes would all be disappearing by 2025. As I have written about previously, Intel has a large portfolio of fabs that will never convert to EUV, and they will need a use for those fabs; external customers can provide that. That article is available here.

On the right side of slide 7, Intel lines up their processes and timing versus competitors. I agree that at 32nm and 22nm Intel was well ahead, at 14nm the lead narrowed, and at 10nm they fell behind. I am not sure exactly what the criteria are for the alignment of i4, i3, 20A and 18A versus the competitor processes. Certainly, from a timing perspective it is correct, but how did they decide which Intel processes match up to which foundry processes? On a density basis I would say even 20A and 18A likely won’t match TSMC 3nm, but on a performance basis they will likely exceed even TSMC 2nm.

During the call Dave Zinsner said Intel expects to announce their first 18A customer this year. During questioning he was asked what is holding up the announcement, and he said it is the maturity of the PDK. This matches up with what Daniel Nenni has been hearing: the Intel 18A models aren’t ready yet.

In conclusion, I believe that creating a P&L around Intel Internal Foundry is a positive step to help drive competitiveness. I don’t completely agree with all aspects of the message on this call but I do think that overall Intel is making good progress and moving in the right direction.

Some people have speculated this is a step toward Intel eventually splitting the company. I am not sure I see that happening, but this would likely make it easier if it ever happened.

Intel’s execution on process technology has gotten a lot better and is, in my opinion, the single biggest driver of their future success. The Tower acquisition wasn’t discussed but will, in my opinion, also be a key piece in finding external foundry business to fill all the Intel non-EUV fabs.

Also Read:

The Updated Legacy of Intel CEOs

VLSI Symposium – Intel PowerVia Technology

IEDM 2022 – Ann Kelleher of Intel – Plenary Talk


The Updated Legacy of Intel CEOs
by Daniel Nenni on 06-21-2023 at 10:00 am


(First published December 24, 2014)

A list of the best and worst CEOs in 2014 was recently published. The good news is that none of our semiconductor CEOs were on the worst list. The bad news is that none of our semiconductor CEOs were on the best list either. I will be writing about the CEOs that made our industry what it is today starting with the largest and most innovative semiconductor company in the world.

Intel was officially founded in 1968 by Robert Noyce and Gordon Moore (Moore’s Law) with Andy Grove joining as employee number three. These three gentlemen would also be the first three of only eight CEOs over an unprecedented forty-six-year history. During their thirty-year tenure at Intel, Noyce, Moore, and Grove became legends, transforming Intel and the entire semiconductor industry into a force of nature that changed the world, absolutely. I would also add Intel CEO number four to that list, since Craig Barrett is credited with the now famous Intel “copy exact” manufacturing process that has enabled “Moore’s Law” to continue to this day.

Here are brief bios of the first four Intel CEOs. As you can see there is a common thread amongst their education: PhDs from the top technology academic institutions across the United States.

Robert N. Noyce
Intel CEO, 1968-1975, Co-founder of Fairchild Semiconductor
Education: Ph.D in physics, Massachusetts Institute of Technology

Gordon E. Moore
Intel CEO, 1975-1987, Co-founder of Fairchild Semiconductor
Education: Ph.D in chemistry and physics, California Institute of Technology

Andrew S. Grove
Intel CEO, 1987-1998, previously worked at Fairchild Semiconductor
Education: Ph.D. in chemical engineering, University of California-Berkeley

Craig R. Barrett
Intel CEO, 1998-2005
Joined Intel in 1974, served as chief operating officer from 1993 to 1997, president from 1997 to 1998; chief executive from 1998 through 2005; and chairman from 2005 until 2009.
Education: Ph.D. in materials science, Stanford University

Without Intel where would we be today? We would certainly not have supercomputing power on our laps, nor would we be designing SoCs with FinFETs. As a computer geek since the 1970s and a Silicon Valley based semiconductor professional since the 1980s, I have a much better appreciation for Intel than most. I do, however, fear for their future, which is why I am writing this. The problems Intel faces today, in my opinion, started with an MBA. Who exactly thought that putting a finance guy in charge of the most innovative semiconductor company in the world was a good idea?

Paul S. Otellini
Intel CEO, 2005-2013
Joined the Intel finance department in 1974. From 1996 to 1998, Otellini served as executive vice president of sales and marketing, and from 1994 to 1996 as senior vice president and general manager of sales and marketing.
Education: MBA, University of California-Berkeley, 1974; B.A. in economics, University of San Francisco, 1972

Paul Otellini’s legacy includes two very defining events:

  • In 2006 he oversaw the largest round of layoffs in Intel history when 10,500 employees (10% of the workforce) were laid off in an effort to save $3 billion per year in costs.
  • Also in 2006 he passed on the opportunity to work with Apple on the iPhone.

“We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we’d done it. The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do. At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”

That was the day Intel “missed” mobile. Apple ended up partnering with TSMC which disrupted the foundry business with the half node process development methodology. This new yield learning centric strategy put TSMC solidly in the process technology lead ahead of both semiconductor industry titans Intel and Samsung.

I remember the rumors swirling around Silicon Valley after Otellini’s resignation: Would Intel hire an outsider or promote from within? The potential outsider names I heard were very impressive, but Intel chose Brian Krzanich, a career Intel employee with zero CEO experience. I was disappointed to say the least.

Brian M. Krzanich
Intel CEO 2013-2018
Began his career at Intel in 1982 in New Mexico as a process engineer and has progressed through a series of technical and leadership roles at Intel, most recently serving as the chief operating officer (COO) since January 2012. Prior to becoming COO, he was responsible for Fab/Sort Manufacturing from 2007-2011 and Assembly and Test from 2003 to 2007. From 2001 to 2003, he was responsible for the implementation of the 0.13-micron logic process technology across Intel’s global factory network. Krzanich also held plant and manufacturing manager roles at multiple Intel factories.
Education: BA in Chemistry from San Jose State University

In 2015-2016 Intel eliminated more than 15,000 jobs companywide, which is now the largest downsizing in the company’s history.

“14nm is here, is working, and will be shipping by the end of this year,” said Brian Krzanich in his IDF 2013 keynote. (Intel 14nm officially shipped in 2015.)

Intel 14nm was the beginning of the end of Intel’s process technology dominance, and at 10nm Intel hit rock bottom. Brian Krzanich was forced out as CEO of Intel for an improper relationship with a co-worker many years prior to becoming CEO. In reality, BK was fired for being the worst CEO in the history of Intel, in my opinion.

Robert Swan
Intel CEO January 2019-2021
Bob Swan was CEO of Intel Corporation from January 2019 until February 15, 2021. He joined Intel as CFO in October 2016 from General Atlantic; Bob was formerly CFO at eBay, Electronic Data Systems, and TRW. Following the resignation of Brian Krzanich, he was named interim CEO on June 21, 2018, and appointed full-time CEO on January 31, 2019. Bob was replaced by 30-year Intel veteran and former VMware CEO Pat Gelsinger.
Education: Bachelor’s degree in Business Administration from the University at Buffalo, MBA from Binghamton University.

Again, no CEO experience. During Bob’s short reign, Intel struck an outsourcing deal with TSMC to support a chiplet strategy, which is now in production (Meteor Lake). Intel can use internal fabs or TSMC, depending on which is best suited. In my opinion, Bob did this as a way to motivate Intel manufacturing with an innovate-or-die mandate. Some think this is why Bob was fired but, as it turns out, the Intel-TSMC relationship was in fact a pivotal point in the history of Intel.

Pat Gelsinger
Intel CEO 2021-Present
Pat Gelsinger rejoined Intel as CEO on February 15, 2021. He started his career at Intel in 1979, where he spent 30 years in various roles and eventually rose to become the company’s first Chief Technology Officer (CTO). During his tenure at Intel, Gelsinger played a crucial role in the development of several groundbreaking technologies and microprocessors, including the 80486 processor and the original Pentium processor. Before returning to his “dream job” at Intel, Pat was CEO of VMware (2012-2021), a software company specializing in virtualization and cloud computing.
Education: Bachelor’s degree in Electrical Engineering from Santa Clara University, Master’s degree in Electrical Engineering from Stanford University.

Pat brought Intel back to the forefront of the semiconductor industry with the much heralded Intel IDM (Integrated Device Manufacturing) 2.0 strategy.

Intel IDM 2.0 is a plan introduced by Gelsinger to transform its manufacturing capabilities. It also represents a significant shift in Intel’s approach to manufacturing and involves a combination of strategies aimed at expanding its product portfolio, enhancing competitiveness, and increasing supply chain resilience.

The key elements of Intel IDM 2.0 include:
  1. Foundry Services: Intel plans to leverage its advanced manufacturing facilities and offer its manufacturing capabilities to external customers through Intel Foundry Services. This initiative aims to become a major player in the foundry business and provide advanced semiconductor manufacturing solutions to a diverse range of industries.
  2. Internal Product Leadership: Intel continues to prioritize its internal product development and plans to deliver a cadence of leadership products. The focus is on enhancing process technology and driving advancements in chip design to maintain Intel’s position as a leading provider of high-performance and high-efficiency semiconductor solutions.
  3. Investments in Research and Development: Intel has committed significant investments in research and development to drive innovation and accelerate advancements in semiconductor technologies. This includes investments in next-generation process nodes, packaging technologies, and specialized designs for specific market segments.
  4. Global Supply Chain Resilience: Intel aims to enhance its supply chain capabilities by diversifying its manufacturing locations and increasing capacity. This strategy is intended to improve responsiveness to market demands, mitigate potential disruptions, and ensure a reliable supply of Intel products.
  5. Partnerships and Ecosystem Collaboration: Intel recognizes the importance of collaboration and partnerships to drive industry-wide advancements. The company is actively engaging with partners, customers, and governments to foster innovation, develop new technologies, and create a robust ecosystem that supports the growth of the semiconductor industry.

Intel is a changed company with IDM 2.0, which has been covered by SemiWiki in great detail. Pat Gelsinger is a no-nonsense, transformative leader with a very large challenge ahead of him. For the sake of Intel and semiconductor manufacturing, let’s hope he is successful, absolutely.

Also Read:

VLSI Symposium – Intel PowerVia Technology

IEDM 2022 – Ann Kelleher of Intel – Plenary Talk

Intel Foundry Services Forms Alliance to Enable National Security, Government Applications


Managing Service Level Risk in SoC Design
by Bernard Murphy on 06-21-2023 at 6:00 am


Discussion on design metrics tends to revolve around power, performance, safety, and security. All of these are important, but there is an additional performance objective a product must meet, defined by a minimum service level agreement (SLA). A printer display may work fine most of the time yet intermittently corrupt the image. Or the nav system in your car intermittently fails to signal an upcoming turn until after you pass the turn. These are traffic (data) related problems. Conventional performance metrics only ensure that the system will perform as expected under ideal conditions; SLA metrics set a minimum performance expectation within specified traffic bounds. OEMs ultimately care about SLAs, not STAs. Meeting and defining an SLA is governed by interconnect design and operation.

What separates SLA from ideal performance?

Ideally, each component could operate at peak performance, but they share a common interconnect, limiting simultaneous traffic. Each component in the design has a spec for throughput and latency – perhaps initially frames/second for computer vision, AI recognition, and a DDR interface, mapping through to gigabytes/second and clock cycles or milliseconds in a spreadsheet. An architect’s goal is to compose these into system bandwidths and latencies through the interconnect, given expected use cases and the target SLA.

Different functions generally don’t need to be running as fast as possible at the same time; between use cases and the SLA, an architect can determine how much she may need to throttle bandwidths and introduce delays to ensure smooth total throughput with limited stalling. That analysis triggers tradeoffs between interconnect architecture and SLA objectives. Adding more physical paths through the interconnect may allow for faster throughput in some cases while increasing device area. Ultimately the architect settles on a compromise defining a deliverable SLA – a baseline to support a minimum service level while staying within PPA goals. This step is a necessary precursor but not sufficient to define an SLA; that step still needs to factor in potential traffic.
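
A toy version of the budgeting exercise (assumed numbers) shows why throttling enters the picture:

```python
# Do the concurrent flows of a use case fit the provisioned link bandwidth?
flows_gbps = {"vision": 3.2, "ai_recognition": 6.4, "ddr_refill": 8.0}
link_capacity_gbps = 16.0

total = sum(flows_gbps.values())
print(f"aggregate {total} GB/s -> {total / link_capacity_gbps:.0%} of capacity")
# Over 100% means the architect must throttle flows or add physical paths
```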

Planning for unpredictable traffic

Why not run simulations with realistic use cases? You will certainly do that for other reasons, but ultimately, such simulations will barely scratch the surface of SLA testing across an infinite range of possibilities. More useful is to run SystemC simulations of the interconnect with synthetic initiators and targets. These don’t need to be realistic traffic models for the application, just good enough to mimic challenging loads. According to Andy Nightingale (VP of product marketing at Arteris), you then turn all the dials up to some agreed level and run. The goal is to understand and tune how the network performs when heavily loaded.

An SLA will define incoming and outgoing traffic through minimum and maximum rates, while also allowing for streams which may burst above maximum limits for short periods. The SLA will typically distinguish different classes of service, with different expectations for bandwidth-sensitive and latency-sensitive traffic. Between in-house experience with the capabilities of the endpoint IPs and these simulations, the architect should be able to converge on an optimum topology for the interconnect.

The next step is to support dynamic adaptation to traffic demands. In a NoC, like FlexNoC from Arteris, both the network interface units (NIUs) connecting endpoint IPs and the switches in the interconnect are programmable, allowing arbitration to dynamically adjust to serve varying demands. A higher-priority packet might be pushed ahead of a lower-priority packet or routed through a different path if the topology allows for that option, or a path might be reserved exclusively for a certain class of traffic. Other techniques are also possible, for example, adding pressure or sharing a link to selectively allow high-priority, low-latency packets to move through the system faster.
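
A minimal sketch of priority-aware arbitration (illustrative only, not FlexNoC internals):

```python
import heapq

class Switch:
    """Forwards the highest-priority packet first (lower number = higher priority)."""
    def __init__(self):
        self._q = []     # min-heap of (priority, arrival order, packet)
        self._seq = 0

    def push(self, packet, priority):
        heapq.heappush(self._q, (priority, self._seq, packet))
        self._seq += 1

    def arbitrate(self):
        return heapq.heappop(self._q)[2] if self._q else None

sw = Switch()
sw.push("bulk DMA burst", priority=2)
sw.push("latency-critical CPU read", priority=0)
assert sw.arbitrate() == "latency-critical CPU read"
```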

It is impossible to design for guaranteed high performance under excessive or bursty traffic, say, a relentless stream of video demands. To handle such cases, the architect can add regulators to gate demand, allowing other functions to continue to operate in parallel at some acceptable level (again, defined by the SLA).
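
One classic way to build such a regulator is a token bucket; the sketch below uses assumed parameters and is not Arteris-specific:

```python
class Regulator:
    """Admits traffic up to a provisioned rate, with a bounded burst allowance."""
    def __init__(self, rate_bytes_per_cycle: float, burst_bytes: int):
        self.rate = rate_bytes_per_cycle
        self.capacity = burst_bytes       # maximum short-term burst
        self.tokens = float(burst_bytes)

    def tick(self):
        # Called once per cycle: replenish up to the burst allowance
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def admit(self, nbytes: int) -> bool:
        # True if the packet may enter the NoC; False applies back-pressure
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

reg = Regulator(rate_bytes_per_cycle=4.0, burst_bytes=256)
```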

In summary, while timing closure for ideal performance is still important, OEMs care about SLAs. Meeting those expectations must be controlled through interconnect design and programming. Arteris and their customers have been refining the necessary Quality of Service (QoS) capabilities offered in their FlexNoC product line for many years. You can learn more HERE.


DDR5 Design Approach with Clocked Receivers
by Daniel Payne on 06-20-2023 at 10:00 am


At the DesignCon 2023 event this year, there was a presentation by Micron all about DDR5 design challenges, like the need for a Decision Feedback Equalizer (DFE) inside the DRAM. Siemens EDA and Micron teamed up to write a detailed 25-page white paper on the topic, and I was able to glean the top points for this much shorter blog. The DDR5 specification came out in 2020 with a data transfer rate of 3200 MT/s, requiring equalization (EQ) circuits to account for channel impairments.

DFE is designed to overcome the effects of Inter-Symbol Interference (ISI), and the designers at Micron had to consider the clocking, Rx eye evaluation, Bit Error Rate (BER) and jitter analysis in their DRAM DFE. IBIS-AMI models were used to model the DDR5 behavior along with an EDA tool statistical simulation flow.

Part of the DDR5 specification is the four-tap DFE inside the DRAM’s Rx. The DFE looks at past received bits to remove the ISI they contribute: it first applies a voltage offset to cancel the ISI, then the slicer quantizes the current bit as high or low.

Typical 4-tap DFE from DDR5 Specification
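
A behavioral sketch of what those four taps do (illustrative only, not Micron’s implementation):

```python
import numpy as np

def dfe_slice(samples, taps):
    """samples: received voltages; taps: four feedback coefficients."""
    history = [0.0] * len(taps)          # past decisions, most recent first
    decisions = []
    for v in samples:
        # Subtract the ISI contributed by previously decided bits
        v_eq = v - sum(t * h for t, h in zip(taps, history))
        bit = 1.0 if v_eq > 0 else -1.0  # slicer quantizes high/low
        decisions.append(bit)
        history = [bit] + history[:-1]
    return decisions

bits = dfe_slice(np.random.randn(16), taps=[0.12, 0.06, 0.03, 0.01])
```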

With DDR5 the clocking is a differential strobe signal (DQS_t, DQS_c), and it’s forwarded along the single-ended data signals (DQ) to the Rx. The DQS signal is buffered up and then fanned out to the clock input of up to eight DQ latches, causing a clock tree delay.

DQS Clock tree delay

The maximum Eye Height is 95mV and the max Eye Width is 0.25 Unit Interval (UI), or just 78.125ps. Using a statistical approach to measure a BER of 1e-16 is most practical.

IBIS models have been used for many generations of DDR systems, enabling end-to-end system simulation, yet with DDR5 adding EQ features and BER eye-mask requirements, a new simulation model and analysis approach are needed. IBIS-AMI modeling provides fast and accurate SI simulation that is portable across EDA tools while protecting the IP of the I/O details. IBIS-AMI supports statistical and bit-by-bit simulation modes, and the statistical flow is shown below.

Statistical Simulation Flow

The result of this flow is a statistical eye diagram that can be used to measure eye contours at different BER levels.
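
To give a feel for how a statistical flow extrapolates to such low BERs, here is a hedged sketch: assuming purely Gaussian random jitter with an arbitrary 2ps RMS, the bathtub curve and the eye width at a target BER follow directly from the Gaussian tail probability:

```python
import numpy as np
from scipy.stats import norm

UI_PS = 312.5                  # one unit interval at 3200 MT/s
RJ_PS = 2.0                    # assumed RMS random jitter (illustrative)

# Bathtub: BER vs. sampling offset from each eye edge (Q-function)
offsets = np.linspace(0.1, UI_PS / 2, 500)
ber = norm.sf(offsets / RJ_PS)

# Eye width at a target BER: pull in each edge by sigma * Q^-1(BER)
target = 1e-16
closure = RJ_PS * norm.isf(target)
print(f"eye width @ {target:.0e}: {UI_PS - 2 * closure:.1f} ps")
```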

DDR5 Example Simulation

A DDR5 simulation was modeled in the HyperLynx LineSim tool, with the DQ and DQS IBIS-AMI models provided by Micron, and here’s the system schematic.

DDR5 system schematic

The EDA tool captures the waveform at specified clock times, where timing uncertainties within clock times are transferred into the resulting output eye diagram, reconstructing the voltage and timing margins before quantization by the slicer and its clock.

Variable clock times

Both DQS and DQ timing uncertainty impact the eye diagram in a similar way to timing margin. Figure A shows jitter injected onto the DQ signal, and figure B has jitter injected onto the DQS signal. DQ (red) and DQS (green) jitter are shown together in figure C.

Timing bathtub curve

Sinusoidal jitter effects can even be modeled on the DQ signal and DQS signal in various combinations to see the BER and timing bathtub curve results. DDR5 has Rj, Dj, and Tj measurements instead of period and cycle-to-cycle jitter measurements. The impact of Rx Rj values on the BER plots can be simulated, along with the timing bathtub curves.

Rx Rj on data, versus data and clock combined

Going beyond Linear and Time-Invariant (LTI) modeling, the Multiple Edge Response (MER) technique uses a set of rising and falling edges. A custom advanced IBIS-AMI flow performs a statistical analysis on each MER edge, then superimposes the combined effect into an output eye diagram.

Bit-by-bit, advanced simulation results

Adding Tx Rj values of 2% to the model shows even more realistic, degraded BER plot results.

Summary

Signal Integrity effects dominate the design of a DDR5 system, so getting accurate results requires detailed modeling of all the new physical effects. The IBIS-AMI specification has been updated for Rx AMI models to use a forwarded clock. Micron showed how they used a clocked DDR5 simulation flow to model the new effects, including non-LTI effects, achieving simulations with BER of 1e-16 and below.

Request and read the complete 25 page white paper online here.

Related Blogs