
An Invited Talk at IEDM: Intel’s Mr. Transistor Presents The Incredible Shrinking Transistor – Shattering Perceived Barriers and Forging Ahead
by Mike Gianfagna on 12-16-2024 at 10:00 am


IEDM turned 70 last week. This was cause for much celebration in the form of special events. One such event was a special invited paper on Tuesday afternoon from Intel’s Tahir Ghani, or Mr. Transistor as he is known. Tahir has been driving innovation at Intel for a very long time. He is an eyewitness to the incredible impact of the Moore’s Law exponential and his work has made a measurable impact on the growth of that exponential.

Tahir treated the audience to a colorful perspective on how we’ve arrived at the current level of density and scaling. Pervasive AI will demand substantial improvement in energy efficiency going forward, and Tahir took the opportunity to call the industry to action to address these and other challenges as we move toward a trillion-transistor system in package. Here are some of the comments from this invited talk at IEDM, as Intel’s Mr. Transistor presented the incredible shrinking transistor – shattering perceived barriers and forging ahead.

About the Presenter

Dr. Tahir Ghani

Dr. Tahir Ghani is a senior fellow and director of process pathfinding in Intel’s Technology Research Group. Tahir has a 30-year career at Intel working on many innovations, including strained silicon, high-K metal gate devices, FinFETs, RibbonFETs, and backside power delivery (BSPD), among others. He has filed more than 1,000 patents over his career at Intel and was honored as Intel’s 2022 Inventor of the Year. He has the nickname of “Mr. Transistor” since he’s passionate about keeping Moore’s Law alive.

About the Talk

Besides IEDM turning 70 this year, Moore’s Law will turn 60 next year. Tahir used this milestone to discuss the innovation that has brought us here and what needs to be done going forward to maintain the Moore’s Law exponential.

Tahir began by discussing a remarkable milestone that lies ahead – one trillion transistors within a package by the end of this decade. He took a sweeping view of the multiple waves of innovation that drove transistor scaling over the last six decades. The graphic at the top of this post presents a view of the journey, from system-on-chip to systems-in-package scaling.  Tahir then presented the key innovations in this journey – past, present and future.

FIRST ERA: 1965 – 2005

The first four decades of Moore’s Law saw exponential growth in transistor count and enabled multiple eras of computing, starting with the mainframe and culminating in the PC. During this time, a second effect, known as Dennard scaling, became as important as Moore’s Law itself.

Robert H. Dennard co-authored a now-famous paper in the IEEE Journal of Solid-State Circuits in 1974. Dennard and his colleagues observed that as transistors are reduced in size, their power density stays constant. This meant that the total chip power for a given die area stayed the same from process generation to process generation. Given the exponential scaling of transistor density predicted by Moore’s Law, this additional observation provided great promise for faster, cheaper and lower power devices.
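As a quick back-of-the-envelope illustration (a standard textbook summary of constant-field scaling, not figures from the talk), scaling every dimension and the supply voltage by a factor k works out as follows:

```latex
% Constant-field (Dennard) scaling by a factor k > 1
L,\; W,\; t_{ox} \;\to\; \tfrac{1}{k}, \qquad V,\; I \;\to\; \tfrac{1}{k}
\quad\Rightarrow\quad
P_{\text{transistor}} = V I \;\to\; \tfrac{1}{k^{2}}, \qquad
\text{transistor density} \;\to\; k^{2}, \qquad
\text{power density} = P_{\text{transistor}} \times \text{density} \;\to\; 1
```

So each new node delivered more, faster transistors in the same silicon area at the same total power – the essence of the happy marriage Tahir describes next.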

Tahir explained that the happy marriage between Moore’s Law and Dennard scaling ushered in something he called the golden era of computing. The era was made possible by numerous innovations in materials and process engineering, the most important being the consistent scaling of gate dielectric thickness (Tox) and the development of progressively shallower source/drain (S/D) extensions, which enabled scaling of gate lengths from micron-scale to nanometer-scale while lowering transistor threshold voltage (Vt).

From my point of view, these were the days when semiconductor innovation came from the process teams. If you could get to the next node, you’d have a faster, smaller and lower power product that would sell. Tahir explained that by 2005, power density challenges and the breakdown of Dennard scaling meant it was time for a new approach, which brings us to the present day.

SECOND ERA: 2005 – PRESENT

Tahir explained that during the last 20 years, technologists have shattered multiple seemingly insurmountable barriers to transistor scaling, including perceived limits to dimensional scaling, limits to transistor performance, and limits to Vdd reduction. This era marked the emergence of mobile computing, which shifted the focus of transistor development from raw performance (frequency) to maximizing performance within a fixed power envelope (performance-per-watt).

Many of the innovations from this era in materials and architectures came from Intel. In fact, Tahir has been in the middle of this work for many years. This work expedited the progress of groundbreaking ideas from research to development to high-volume manufacturing. Tahir explained that these innovations ushered in an era of astonishing progress in transistor technology over two decades. He discussed three important innovations from this time.

SEMINAL TRANSISTOR INNOVATIONS

  • Mobility enhancement leading to uniaxial strained silicon. In 2004, a novel transistor structure introduced by Intel at the 90nm node incorporated compressive strain for PMOS mobility enhancement. Intel’s uniaxial strain approach was in stark contrast to the biaxial strain approach pursued by the research community and turned out to be superior for performance and manufacturability. Moreover, this architecture proved scalable and enabled progressively higher strain and performance over the years.
  • Tox limit leading to Hi-K dielectrics and metal gate electrodes. Intel explored multiple approaches to introduce Hi-K gate dielectrics coupled with metal gate electrodes, including “gate-first,” “replacement-gate,” and even fully-silicided gate electrodes. The replacement metal gate flow adopted by Intel at the 45nm node in 2007 continues to be used in all advanced node processes to this day.
  • Planar transistor limits leading to FinFETs. The scaling of the planar transistor finally ran out of steam after five decades, mandating a move to the 3D FinFET structure. Intel was the first to introduce FinFETs into production at the 22nm node in 2011. Nanometer-scale fin widths enabled superior short-channel effects and, thus, higher performance at lower Vdd. The figure to the right (fin profile improvements at Intel) illustrates the evolution of the fin profile over the last 15 years. The 3D structure of fins resulted in a sharp increase in effective transistor width (Zeff) within a given footprint, leading to vastly superior drive currents.

LOOKING AHEAD: THE NEXT DECADE

Tahir made the observation that the seventh decade of Moore’s Law coincides with the emergence of yet another computing era. He pointed out that AI will redefine computing and is already causing a tectonic shift in the enabling silicon platform from general-purpose processors (CPUs) to domain-specific accelerators (GPUs and ASICs).

Gate all around (GAA) transistor

He went on to say that this shift in computing platform also coincides with another inflection in transistor architecture. By completely wrapping the gate around the channel, the gate-all-around (GAA) transistor is poised to replace the FinFET. GAA transistors deliver enhanced drive current and/or lower capacitance within a given footprint, superior short-channel effects, and a higher packing density. The figure at the right shows what a GAA device looks like in silicon.

Looking ahead, he said the GAA architecture will likely be succeeded by a stacked GAA architecture with N/P transistors stacked upon each other to create more compact, monolithic 3D compute units. Looking further ahead, he explained that 2D transition metal dichalcogenide (TMD) films are being investigated as channel material for further Leff scaling, but many issues are still to be addressed.

CALL TO ACTION: NEW TRANSISTOR

Tahir concluded his talk with a sobering observation: worldwide energy demand for AI computing is increasing at an unsustainable pace. Transitioning to chiplet-based system-in-package (SiP) designs with 3D stacked chips and hundreds of billions of transistors per package will increase heat dissipation beyond the limits of current best-in-class materials and architectures. Breaking through this impending “Energy Wall” will require coordinated and focused research toward reducing transistor energy consumption and improving heat removal capability. A focused effort is necessary to develop a new transistor capable of operating at ultra-low Vdd (< 300mV) to improve energy efficiency.

He went on to point out that ultra-low Vdd operation can lead to significant performance loss and increased sensitivity to variability, requiring circuit and system solutions to be more resilient to variation and noise. This suggests the need for a strong collaboration between the device, circuit, and system communities to achieve this important goal. There are many ways to attack this problem.

Tahir reviewed a few, including Tunnel FET (TFET), Negative Capacitance FET (NC-FET), and Ferroelectric FET (FE-FET). All have significant obstacles to overcome. New materials and new structures will need to be explored.

Conclusion

Dr. Tahir Ghani covered a lot of ground in this exceptional review of past, present and future challenges for semiconductor scaling. The best way to end this discussion is with an inspirational quote from Tahir.

“At every significant inflection in the past, when challenges to continued transistor scaling seemed too daunting, technologists across industry and academia forged new paths to enable the arc of exponential progress to continue unabated. There is no reason to believe that this trend will not continue well into the future. There is still plenty of room at the bottom.”

Tahir recently did a Semiconductor Insider’s podcast on SemiWiki. You can hear some of his views in this compelling discussion here. And that’s how Intel’s Mr. Transistor presents the incredible shrinking transistor – shattering perceived barriers and forging ahead.

Also Read:

What is Wrong with Intel?

3D IC Design Ecosystem Panel at #61DAC

Intel’s Gary Patton Shows the Way to a Systems Foundry #61DAC


Certification for Post-Quantum Cryptography gaining momentum
by Don Dingee on 12-16-2024 at 6:00 am

NIST A6046 certificate for Secure-IC, the first security IP and software vendor to achieve certification for post-quantum cryptography

A crucial step in helping any new technology specification gain adoption is certification. NIST has been hard at work establishing more than post-quantum cryptography algorithms – they’ve also integrated the new algorithms into their process for third-party validation testing to ensure implementations are as advertised. Secure-IC is the first security IP and software vendor to achieve official worldwide NIST algorithm certification for post-quantum cryptography (PQC) software and secure element IP. Here’s a brief look at what NIST certification entails and what Secure-IC achieved.

An overview of NIST certification for crypto algorithms

NIST created its Cryptographic Algorithm Validation Program (CAVP) in 1995 to test FIPS-approved, NIST-recommended algorithms. Testing occurs on an Automated Cryptographic Validation Test System (ACVTS) with a NIST-controlled hardware environment. NIST offers a Demo ACVTS server as a sandbox environment and a Production ACVTS server accessible only by accredited third-party cryptographic and security testing (CTS) laboratories. Only tests by third-party CTS labs on the Production ACVTS server can advance as evidence for obtaining a CAVP certificate.

ACVTS spans capabilities for supported algorithms, including parameters such as message length, and automatically generates test cases and vectors for robust coverage. Vectors are suitable for feeding an implementation candidate, which can run its functions and provide outputs back to ACVTS. ACVTS then returns a correctness score for each algorithm in a test session. This approach keeps ACVTS testing black-box – implementations are never uploaded to the ACVTS server; only vectors are sent and outputs returned, so NIST never sees the implementation itself.
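To make the shape of that flow concrete, here is a rough sketch in Python. The function names, vector format, and the SHA-256 stand-in algorithm are illustrative placeholders of my own, not the real ACVP protocol messages or a NIST-defined API:

```python
# Sketch of the black-box shape of ACVTS testing: the server generates test
# vectors, the candidate implementation runs them locally, and only outputs
# go back for scoring. All names here are hypothetical placeholders.
import hashlib

def fetch_test_vectors():
    """Stand-in for vectors delivered by the ACVTS server (hypothetical)."""
    return [{"tcId": 1, "msg": b"abc"}, {"tcId": 2, "msg": b"semiwiki"}]

def candidate_sha256(msg: bytes) -> bytes:
    """The implementation under test; it is never uploaded to the server."""
    return hashlib.sha256(msg).digest()

def run_session():
    responses = []
    for case in fetch_test_vectors():
        digest = candidate_sha256(case["msg"])
        responses.append({"tcId": case["tcId"], "md": digest.hex()})
    return responses   # only these outputs travel back for scoring

print(run_session())
```

The point is simply that the candidate code runs locally, and only its outputs are returned for correctness scoring.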

NIST keeps the CAVP suite current, retiring outdated algorithms and incorporating new advancements as they become approved. CAVP online documentation contains a current list of algorithms and their specifications, validation testing requirements, validation lists, and test vectors.

Moving from PQC algorithms to crypto module certification

PQC algorithms are now part of the CAVP suite, and validation testing of PQC implementations can ensue. Since we last discussed PQC here, some of its algorithms have received more formal, technically precise names from NIST. CRYSTALS-Kyber is now known as ML-KEM (module-lattice-based key-encapsulation mechanism), and CRYSTALS-Dilithium now goes by ML-DSA (module-lattice-based digital signature algorithm).

Secure-IC worked with an in-country CTS-accredited lab, SERMA Safety and Security, to validate its Securyzr™ neo product for PQC. A summary of the algorithm tests appears in the NIST validation certificate, A6046, dated October 30, 2024. Secure-IC focuses on optimizing its implementations for fast throughput in SoC-optimized IP blocks ready for hardware design.

As with many specifications, compliance is ultimately a function of the complete system context for an implementation, which is why CAVP validation is crucial: it is a mandatory prerequisite for certifying a cryptographic module, a combination of hardware and software in an end product. NIST also shepherds the Cryptographic Module Validation Program (CMVP), which is transitioning from FIPS 140-2 compliance to FIPS 140-3, the revision reflecting the recommendations for PQC implementations. A full FIPS 140-2 sunset date of September 2026 incentivizes module designers to get moving with their CMVP validation. Any system requiring cryptographic protection must conform to FIPS 140-3 requirements – with PQC incorporated – by that date.

Secure-IC is committed to helping its customers navigate these requirements and quickly bringing PQC into the mainstream. Their PQC-enabled solutions are configurable and scalable to meet a range of cryptography needs, with an eye on performance and power efficiency. Their achievement of certification for post-quantum cryptography algorithms puts their customers ahead in the race for protecting platforms from advanced cybersecurity threats. More information is available in a press release from Secure-IC, which includes more details on the Securyzr neo-product certification, links to the official NIST certificate, and background on the cooperation with SERMA Safety and Security.

Secure-IC obtains the first worldwide CAVP Certification of Post-Quantum Cryptography algorithms, tested by SERMA Safety & Security

Also Read:

Facing challenges of implementing Post-Quantum Cryptography

Secure-IC Presents AI-Powered Cybersecurity

How Secure-IC is Making the Cyber World a Safer Place


Podcast EP266: An Overview of the Design & Verification EDA Businesses at Keysight with Nilesh Kamdar
by Daniel Nenni on 12-13-2024 at 10:00 am

Dan is joined by Nilesh Kamdar, the General Manager of the Design & Verification EDA businesses at Keysight. Nilesh has also held roles as Portfolio Manager and Director of the Software Business & Operations team at Keysight. Nilesh joined Hewlett-Packard in 1999 in the EEsof EDA division. Over his 25+ year career he has worked in various leadership roles, including leading the Learning Products team, the Circuit Simulation and Architecture team, and the Application Engineering and Customer Success team.

Dan explores the structure, focus and impact of the Keysight Design & Verification EDA business with Nilesh. This high-growth portion of the company focuses on high-frequency, high-speed designs. Nilesh explores the challenges design teams face. Multi-chip design is growing for this segment and there are many new challenges. He explains how Keysight takes a multi-physics approach to address chip/package/system requirements.

Keysight also builds high-performance chips for its test & measurement products in its own fab so the company has first-hand experience addressing the challenges its customers face. Nilesh also explores the impact AI will have on the design process in detail, as well as the benefits of the company’s engineering lifecycle management tools.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Caroline Guillaume of TrustInSoft
by Daniel Nenni on 12-13-2024 at 6:00 am

Caroline Guillaume (TrustInSoft CEO)


Caroline Guillaume is the Chief Executive Officer of TrustInSoft. She has an extensive background in the critical software industry, notably at Thales Digital Identity and Security, where over 14 years she held sales leadership roles including VP of Sales – Software Monetization Europe and VP of Banking and Telecom Solutions Sales, based in Singapore. She also previously worked as Director of Product Marketing at Gemplus. Caroline holds an engineering degree from Télécom SudParis.

Tell us about your company?

TrustInSoft is a leader in advanced software analysis tools and services, specializing in formal verification of C and C++ source code to ensure safety, security and reliability. Recognized by the US National Institute of Standards and Technology (NIST) for leveraging advanced formal methods, including abstract interpretation, TrustInSoft can mathematically guarantee that analyzed software is free of critical runtime errors and vulnerabilities. TrustInSoft serves a diverse range of industries, including automotive, aerospace, defense, consumer electronics, and IoT.

What problems are you solving?

The problem we’re tackling is one that’s plagued software development since its inception – bugs, vulnerabilities, and unexpected failures. These issues can have devastating consequences, leading to security breaches, system crashes, and even physical harm in critical industries. More than that, TrustInSoft’s tools and services help you find the very subtle and critical bugs that often go unseen and have costly consequences in the field.

For developers and testers, TrustInSoft Analyzer provides exhaustive static analysis with up to zero false positives and no false negatives. Our customers have seen up to a 40X reduction in time spent detecting bugs and a 4X decrease in verification time. This helps address problems like tight time-to-market constraints and software safety, security, and reliability requirements.

What application areas are your strongest?

Our technology can benefit many applications, from the most critical like defense, aeronautics, and EVs, to consumer devices and telecommunications. We work at the low-level software layer, ensuring safety, security, and reliability by securing the foundation of critical systems.

The tool combines static analysis with formal methods. Static analysis allows us to exhaustively analyze code without actually running it, identifying potential problems before they ever cause an issue. Formal methods take it a step further – they provide a mathematical guarantee that the code is free of certain types of errors, specifically undefined behaviors.
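To give a flavor of the abstract interpretation idea mentioned above, here is a toy interval-domain example in Python – a minimal sketch of the general technique, not TrustInSoft Analyzer’s implementation or API. Instead of running the code on concrete inputs, every value is replaced by a guaranteed range, so a single “abstract run” covers all possible executions:

```python
# A toy interval-domain "abstract interpreter": it computes guaranteed bounds
# for each value instead of executing the program on concrete inputs.
# Minimal illustration only; not TrustInSoft Analyzer's implementation.

class Interval:
    """Closed integer interval [lo, hi] over-approximating a set of values."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def fits_signed(value, bits=16):
    """Prove the value always fits in a signed `bits`-bit integer."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return lo <= value.lo and value.hi <= hi

# Abstractly "execute" y = a * b + 100 where a is in [0, 100] and b in [0, 50].
a = Interval(0, 100)
b = Interval(0, 50)
y = a * b + Interval(100, 100)

print(y)                        # [100, 5100]
print(fits_signed(y, bits=16))  # True: no 16-bit overflow is possible
```

Because the computed bounds are sound over-approximations, a pass here is a proof that no input can trigger the overflow – the kind of guarantee that testing by simulation alone cannot give.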

What keeps your customers up at night?

Notably in critical software and modern IoT devices, software developers and testers might lie awake wondering if they’ve done all they can to safeguard their code from hackers and corner cases. We help give them peace of mind by taking an incremental approach to secure their test suites, exhaustively fuzz the code, and even provide formal mathematical proof that the software will behave as specified. We combat undefined behaviors, which can lead to software failures, security breaches, and regulatory compliance problems, while helping teams meet tight time-to-market requirements without sacrificing quality and security.

What does the competitive landscape look like and how do you differentiate?

TrustInSoft Analyzer is not your average static analysis tool. Our use of formal methods gives a guarantee of safety, security, and reliability by identifying all undefined behaviors, which are among the top vulnerabilities on the CWE list. TrustInSoft Analyzer guarantees zero false negatives and up to zero false positives, saving precious developer and tester time and effort.

What new features/technology are you working on?

We are constantly improving TrustInSoft Analyzer with two major updates per year. This October’s release includes:

  • Streamlined and Intuitive User Experience: The enhanced TISA UI dramatically simplifies the analysis process, making complex software verification more accessible and reducing the learning curve for technical teams
  • TrustInSoft Assistant Capabilities: advanced assistant capabilities guide users in setting up and tuning their analysis, ensuring accurate results and reducing the risk of errors in complex environments.
  • Advanced Compliance and Performance Capabilities: new features, including CWE alarm mapping, ARINC support, and enhanced C++ analysis performance, enable technical teams to achieve higher compliance and efficiency in complex environments.

How do customers normally engage with your company?

Our customers typically either use our tool, TrustInSoft Analyzer, independently or combine their tool usage with our Formal Verification Services (FVS). We help secure everything from airplanes to consumer devices like smartphones and gaming systems at the embedded systems layer, ensuring safety and cybersecurity earlier in the development cycle. Our team of experts helps integrate the tool into existing verification and validation processes so developers can dig into the vulnerabilities in their code.

Also Read:

CEO Interview: Mikko Utriainen of Chipmetrics

CEO Interview: GP Singh from Ambient Scientific

CEO Interview: Ollie Jones of Sondrel


IEDM Opens with a Big Picture Keynote from TSMC’s Yuh-Jier Mii
by Mike Gianfagna on 12-12-2024 at 10:00 am


The main program for the 70th IEDM opened on Monday morning in San Francisco with an excellent keynote from Dr. Yuh-Jier Mii, Executive Vice President and Co-Chief Operating Officer at TSMC. Dr. Mii joined TSMC in 1994. Since then, he has contributed to the development and manufacturing of advanced CMOS technologies in both fab operations and R&D. In 2022, he received the IEEE Frederik Philips Award recognizing his outstanding accomplishments in the management of research and development. He holds 34 patents globally, including 25 US patents, and earned a B.S. in electrical engineering from National Taiwan University, as well as an M.S. and Ph.D. in Electrical Engineering from the University of California, Los Angeles (UCLA). He treated the audience to a broad view of technology innovation in his keynote. Let’s look at how IEDM opens with a big picture keynote from TSMC’s Yuh-Jier Mii.

About IEDM

To begin, that wasn’t a typo above. The 70th annual IEEE International Electron Devices Meeting (IEDM) just concluded. This incredibly long-lived conference began tracking technology innovation in the vacuum tube era. For seven decades the event has tracked semiconductor and electronic device technology, design, manufacturing, physics, and modeling. This year’s event had a record high number of submissions at 763 and a record number of accepted papers at 274. 

The figure below summarizes the growth of this premier conference over the years.

2024 IEDM paper statistics

About the Keynote

Dr. Yuh-Jier Mii

Dr. Mii began his keynote with a short but compelling video that provided an overview of some of the innovations that have occurred in the semiconductor industry in general, and some of the advances contributed by TSMC in particular. All of this is driving the development of a trillion-transistor system in the near future. These trends are summarized in the graphic at the top of this post.

Dr. Mii touched on five key areas in his talk. I will provide a summary of his remarks. He began with a semiconductor industry & market outlook (I). AI is poised to play a key role in the industry’s growth as we move toward one trillion dollars in revenue by 2030. He projected that high-performance computing will contribute 40% of this number, mobile 30%, automotive 15%, and IoT 10%. He discussed how ubiquitous AI technology is becoming across many products and markets. Generative AI and large language models are contributing to this growth, and the complexity of the models for these new applications, along with the associated training required, presents substantial new challenges.

He pointed out that these new applications will require gigawatts of power within a few years. Reducing power consumption will be critical to allow these applications to flourish and new device technology and architectural advances will be needed.

Next, Dr. Mii discussed advanced logic technologies (II). He described the industry’s move from planar devices to FinFETs and, most recently, nanosheet technology for gate-all-around devices at 2nm. Patterning also advanced from immersion lithography to EUV and multi-patterning EUV. Design technology co-optimization, or DTCO, has also helped to bring technology to new levels. For example, backside power delivery has helped to reduce power and increase density.

Regarding logic technology frontiers (III), Dr. Mii discussed the evolution from FinFET to nanosheet FET to vertically stacked complementary FET, or CFET, architectures. He explained that the CFET approach holds great promise to allow continued Moore’s Law scaling with its 1.5 – 2X density improvement when compared to nanosheet devices. He described the work going on at TSMC to make CFETs a reality. At this year’s IEDM, TSMC is presenting the first and smallest CFET inverter at a 48nm pitch.

Dr. Mii explained that beyond CFET, the ongoing quest for higher performance and more energy-efficient logic technologies necessitates an accelerated search for channel materials that go beyond those based on silicon. He explained that carbon nanotubes (CNTs) and transition metal dichalcogenides (TMDs) have garnered significant interest due to both their physical and electronic properties. In the area of interconnects, he discussed a new 2D material that is being explored as a superior alternative to copper. This material shows lower thin film resistivity than copper at reduced thicknesses, helping to mitigate line resistance increases in scaled geometries and enhance overall performance.

Dr. Mii then moved to a discussion of system integration technologies (IV). While pushing 2D technology scaling to enable better transistors and higher packing density in monolithically integrated SoCs is important, so are innovations beyond the chip level to extend integration into the heterogeneous domain.

He explained that advanced silicon stacking and packaging technologies, including SoIC, InFO, and CoWoS® continue to aggressively scale down the chip-to-chip interconnect pitch, offering the potential to improve 3D interconnect density by another six orders of magnitude. These trends are summarized in the figure below.

Advanced silicon stacking and packaging technologies

Dr. Mii discussed an emerging System-on-Wafer (SoW) technology, where all the chiplets and HBM memories for an entire system can be integrated directly on a 12-inch wafer. He explained that this approach can deliver an additional 40X compute improvement when compared to the most advanced data center AI accelerator today. Optical interconnect was also discussed, which can provide 20X more power efficiency than copper interconnect. Vertical stacking of logic and optical transceivers will help deliver these improvements. He explained that today the laser light source is outside the chip, but efforts are underway to integrate the laser on chip as well.

Dr. Mii concluded with a discussion of specialty technologies (V). Many of the items discussed here are high frequency or analog in nature to accommodate the interface between the digital and analog (real) world. He discussed innovations spanning N16 to N4 to accommodate the increased demands of new standards for WiFi.

Advances in embedded non-volatile RAM were also discussed in this part of the keynote. The benefits and challenges of both MRAM and RRAM were covered. CMOS image sensors were also discussed. This is a critical technology for automotive applications. As pixel size decreases, new approaches are needed to maintain sensitivity and dynamic range. Dr. Mii described work to separate the photo diode from the pixel device and re-integrate them using 3D wafer-to-wafer stacking.

Summary

Dr. Mii concluded by observing that semiconductor innovations, encompassing advances in device technology, system-level scaling, and customer-specific design ecosystems will remain pivotal in driving rapid technological progress in the era of AI. He pointed out that TSMC is actively exploring a new array of innovations for future generations of technology, system integration platforms, and design ecosystems. These efforts will be crucial in meeting the increasing societal demands for energy-efficient, data-intensive computing in the coming decades. He invited the audience to join in this important collaboration. And that’s how IEDM opens with a big picture keynote from TSMC’s Yuh-Jier Mii.

Also Read:

Analog Bits Builds a Road to the Future at TSMC OIP

Maximizing 3DIC Design Productivity with 3DBlox: A Look at TSMC’s Progress and Innovations in 2024

Synopsys and TSMC Pave the Path for Trillion-Transistor AI and Multi-Die Chip Design


Thanks for the Memories
by Bill Jewell on 12-12-2024 at 8:00 am

Semiconductor Market Average Change 2024

The December 2024 WSTS forecast called for strong 2024 semiconductor market growth of 19%. However, the strength is limited to a few product lines. Memory is projected to grow 81% in 2024. Logic is expected to grow 16.9%. The micro product line should show only 3.9% growth, while discretes, optoelectronics, sensors and analog are all projected to decline. If the memory product line is excluded, the WSTS forecast for the rest of the semiconductor market in 2024 is growth of only 5.8%.

The strength of memory in 2024 is also reflected in semiconductor company revenues. Revenues for the first three quarters of 2024 compared to a year earlier show gains of 109% for both Samsung Memory and SK Hynix. Micron Technology is up 78% and Kioxia is up 54%. The strongest growth among major semiconductor companies is from Nvidia, up 135%. Nvidia’s strength is due to its AI processors, and its revenues also include the memory it purchases, adding to the total.

The robust memory growth is largely driven by memories for AI applications. Prices for memory have increased in 2024, especially for DRAM. TrendForce estimated average DRAM prices will be up 53% in 2024. Thus, one application, AI, is accounting for most of the growth of the semiconductor market in 2024. The revenues for the first three quarters of 2024 compared to the same three quarters of 2023 show a 97% gain for memory companies and a 135% gain for Nvidia. The total semiconductor market was up 19.9% for this period. Excluding the memory companies, the remainder of the semiconductor market was up only 6.8%. If both the memory companies and Nvidia are excluded, the rest of the semiconductor market declined 10.5%.

Several major semiconductor companies experienced revenue declines in 1Q 2024 through 3Q 2024 versus a year earlier. STMicroelectronics and Analog Devices were each down 24%. Texas Instruments, Infineon Technologies, NXP Semiconductors, and Renesas Electronics also declined. These companies largely depend on the automotive and industrial sectors, which have been weak in 2024. Companies heavily dependent on the smartphone market showed revenue increases, with Qualcomm’s IC business up 10% and MediaTek up 25%. Among computer dependent companies, Intel was flat, and AMD was up 10%. Broadcom counts on AI for a significant portion of its revenues. Its calendar 3Q results have not yet been released, but it should be up about 47%.

Thus, except for AI and memory, the semiconductor market has been weak in 2024. Our Semiconductor Intelligence forecast of 6% growth in the semiconductor market in 2025 assumes some strengthening of core markets of PCs, smartphones, automotive and industrial. The rapid growth rates of memory and AI in 2024 should be significantly lower in 2025.

Memory has long exacerbated the cycles of the semiconductor industry. The chart below shows annual change in the semiconductor market based on WSTS data through 2023 and the WSTS forecast for 2024. Total semiconductor is compared with memory and semiconductor excluding memory. While the memory market has shown extremes of 102% growth and a 49% decline, the market excluding memory has been somewhat more stable, ranging from plus 42% to minus 26%. In the last ten years, the memory market change has ranged from plus 81% in 2024 to minus 33% in 2023 while the market excluding memory has ranged from plus 25% to minus 2%.

Over the last forty years, whenever the memory market has grown over 50%, it has seen a significant deceleration or a decline in the following year. In the six times this has occurred prior to 2024, the memory market has declined in the following year four times. In two cases the market has seen positive but significantly slower growth the following year and declines two years after the peak. These trends are driven by basic supply and demand for a commodity product. Memory prices and production rise when supply is below demand. When supply is above demand, production and prices fall. Thus, we should expect a significant downturn in the memory market either in 2025 or 2026.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Bill Jewell
Semiconductor Intelligence, LLC
billjewell@sc-iq.com

Also Read:

Semiconductors Slowing in 2025

AI Semiconductor Market

Asia Driving Electronics Growth


CEO Interview: Mikko Utriainen of Chipmetrics
by Daniel Nenni on 12-12-2024 at 6:00 am

Chipmetrics Founders

Mikko Utriainen – Founder – Chief Executive Officer, Feng Gao – Founder – Chief Technology Officer, Pasi Hyttinen – Founder – Chief Data Officer

Tell us about your company?
Chipmetrics is a Finnish company specializing in metrology solutions for high aspect ratio 3D chips, especially 3D NAND and 3D DRAM with an eye on GAAFET and CFET structures. We are a young but mature company founded in 2019 with over 40 customers worldwide, and we’re looking to further scale up our international business efforts in the coming years.

Thus far, in addition to our home market in northern Europe, we’ve found success in Japan, and we’re also represented in other key markets such as South Korea, Taiwan and the USA.

What problems are you solving?
We provide test chips that enable companies to develop next-generation 3D semiconductors, speeding up associated R&D and process control workflows with quick access to high-quality data.

With our various test chips, engineers can check the quality of their film deposits with an ellipsometer or other conventional surface analysis tools near-instantaneously, rather than sending them away for analysis, which can take weeks. This gives quick access to high-quality data, which in turn cuts down on the development time of 3D chips.

What application areas are your strongest?
We have a unique position in metrology solutions for film deposition, be it ALD or CVD. Considering that ALD was originally invented in Finland, it feels extra special to be able to offer Finnish solutions as ALD technology is becoming more and more relevant with 3D semiconductors.

What keeps your customers up at night?
Challenges with conformal deposition, gap-fill deposition and selective deposition, the latter especially in DRAM. High-quality films and quality control are also at the forefront of our clients’ minds. As ever, they also wish to speed up their time to market and lower the process temperatures.

What does the competitive landscape look like and how do you differentiate?
We believe we have a unique niche with our test chips and metrology solutions. That said, we do tie into the ALD landscape with batch systems, where our test structures are needed.

What we do with our PillarHall family of test chips is speed up process control screening without diminishing accuracy. It is, to our knowledge, the only practical way to measure film quality, properties, microstructure, and the elemental composition of the sidewall in a high-aspect-ratio cavity at the wafer level.

What new features/technology are you working on?
This summer, we launched the latest iteration of the PillarHall, LHAR5. It allows for metrology of high-aspect ratio chips with gap heights as low as 100 nanometers. At the same time, we launched the ASD-1 chip, for Area Selective Deposition, in our quest to offer a complete lineup of 3D metrology chips.

We also value collaboration with our clients, meaning we’re busy ideating and iterating with them and coming up with custom concepts and test structure wafers for them.

How do customers normally engage with your company?
We have well-defined products for sale. Interested parties can easily approach us with a short RFQ email through our website and we reply fast. Our clients typically place recurring orders, and we believe they come back in part due to our high service level. Our product development is also based on feedback and needs from repeat-order customers. We are also active in digital marketing, trade shows and scientific conferences. Those are important forums for startups in the semiconductor field, and for us especially the AVS and SEMI events have been useful. Lastly, the core Chipmetrics team consists of industry veterans with strong personal networks that we tap into, and on top of this we have local representatives in key markets.

Also Read:

CEO Interview: GP Singh from Ambient Scientific

CEO Interview: Ollie Jones of Sondrel

CEO Interview: Dr. Yunji Corcoran of SMC Diode Solutions


How I learned Formal Verification
by Daniel Nenni on 12-11-2024 at 10:00 am

Bing Xue

Bing Xue is a dedicated Formal Verification Engineer at Axiomise, with a strong academic and professional foundation in hardware verification. He completed his PhD at the University of Southampton, where he conducted cutting-edge research on Formal Verification, RISC-V, and the impact of Single Event Upsets. Bing is proficient in RISC-V, SystemVerilog, and Formal Verification tools such as Cadence JasperGold, and is skilled in Python and Linux, bringing a versatile and analytical approach to his work.

How I learned FV

I had no idea what Formal Verification (FV) was when I started my PhD. I spent six months exploring related papers, books, websites and open-source projects, as well as watching videos, to learn about FV and SystemVerilog Assertions (SVA). However, I faced several challenges during that time.

Some resources, despite being labelled as FV-focused, primarily discussed simulations. Others were too abstract, providing no practical details, while some were too theoretical, presenting modelling and proving algorithms without real-world applications. After six months of study, I had a basic overview of FV but still didn’t know how to apply it to my project.

It took me another three months of hands-on practice with simple RISC-V designs to make progress. During that time, I made many mistakes and had to invest significant effort to understand and fix them. Searching for quality FV learning resources was time-consuming, and extracting accurate information was even more challenging. I always thought that if I had access to well-structured FV courses, including theory, demonstrations, and practical labs with real-world designs, I could have completed my project faster and with better results.

Why Axiomise FV courses

I finished the Axiomise FV courses last month. I believe they are the best courses for freshers and verification engineers. I wish I had discovered them earlier, as they would have made a significant difference in my research journey.

FV is more than model checking

Most of the resources I found provided only a general overview, covering the definition and history of FV. These resources mainly focused on model checking, but FV is not just model checking!

The Axiomise FV courses cover not only model checking but also theorem proving and equivalence checking.  During my project, I mainly used model checking to evaluate fault tolerance and hardware reliability.  After completing the course, I was inspired to use equivalence checking to achieve improvement in my work.
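As a flavor of what equivalence checking means in practice, here is a minimal sketch using the Z3 SMT solver’s Python API – my own illustration, not material from the Axiomise courses. It proves that a strength-reduced implementation matches a reference multiplier for every possible 32-bit input, something no simulation testbench could do by enumeration:

```python
# Minimal combinational equivalence check with the Z3 SMT solver: prove that
# a "strength-reduced" implementation (shift and add) matches a reference
# multiplier for every 32-bit input value.
from z3 import BitVec, Solver, sat

x = BitVec("x", 32)

spec = x * 5                 # reference: multiply by 5
impl = (x << 2) + x          # implementation: (x * 4) + x

s = Solver()
s.add(spec != impl)          # search for any input where the two differ

if s.check() == sat:
    print("Counterexample:", s.model()[x])
else:
    print("Proven equivalent for all 2**32 input values")
```

The solver either finds a concrete counterexample or proves that none exists, which is exactly the exhaustiveness that makes equivalence checking so valuable.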

Theory

I learned FV theories from books and papers. These theories include transforming designs and specifications into mathematical models and formulas, and proving formal properties with various algorithms (such as BDD- and SAT-based ones). However, are these theories truly essential for all verification engineers?

Given that formal tools can handle much of the modelling and proving, it is clear that verification engineers should focus more on why, when and how to use FV. This is exactly what the Axiomise FV courses emphasize. These courses help verification engineers save valuable time by focusing on the most critical and applicable concepts, rather than overwhelming them with unnecessary details.

Formal Techniques

A ‘smart’ formal testbench, composed of high-quality formal properties, significantly contributes to better performance by reducing run time and overcoming the state-explosion problem. But how can we develop such high-quality formal properties?

The Axiomise FV courses answer this question clearly: by applying formal (problem reduction) techniques to develop ‘smart’ formal testbenches. These techniques, such as abstraction, invariants, assume-guarantee, decomposition, case splitting, scenario splitting, black-boxing, cut-pointing and mutation, are explained in detail within the course, accompanied by code and examples for a deeper understanding.

What sets the course apart is the inclusion of step-by-step demos and labs that help learners master these problem reduction techniques. All the other resources I found failed to explain formal techniques in such an easy-to-understand manner. In my previous project, I didn’t apply all these techniques, which led to some inconclusive results when verifying multipliers and dividers. Now I know how to apply these methods effectively to improve my project.

Demos and Labs

When learning to develop formal testbenches, I often wished for more high-quality demos and labs. Unfortunately, the resources I found typically offered either overly simplistic examples, like a basic request-and-acknowledge handshake protocol, or non-generalized designs, such as a specific meaningless hardware module.

I really enjoy the demos and labs in the FV courses. I could see the careful selection of designs used for demonstrations. For instance, the courses present a FIFO, a fundamental structure in electronics and computing, as a demonstration. Two brilliant abstraction-based methods are presented to exhaustively verify a FIFO: Two-Transaction and Smart Tracker. Another valuable example is using invariants for scalable proof and bug hunting.

All serial designs, such as processors and memory subsystems, which are challenging to verify, can be represented and verified as FIFOs. The FV courses also provide multiple demos and labs, such as a variable packet design and a micro-cache, to demonstrate this concept.
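To make the idea of exhaustive checking concrete, the sketch below compares a small circular-buffer FIFO against a reference queue over every push/pop sequence up to a bound. It is only a toy, explicit-state illustration in Python (assuming a depth-2 FIFO and a 6-operation bound), not the Two-Transaction or Smart Tracker abstractions taught in the courses:

```python
# Toy exhaustive (bounded) check of a circular-buffer FIFO against a
# reference queue: every push/pop sequence up to the bound is enumerated,
# so any ordering or pointer-wrap bug in this small design must be found.
from collections import deque
from itertools import product

DEPTH = 2

class CircularFifo:
    def __init__(self):
        self.mem = [None] * DEPTH
        self.rd = self.wr = self.count = 0

    def push(self, v):
        if self.count < DEPTH:          # drop pushes when full
            self.mem[self.wr] = v
            self.wr = (self.wr + 1) % DEPTH
            self.count += 1

    def pop(self):
        if self.count == 0:             # return None when empty
            return None
        v = self.mem[self.rd]
        self.rd = (self.rd + 1) % DEPTH
        self.count -= 1
        return v

def check(max_ops=6):
    for ops in product(["push", "pop"], repeat=max_ops):
        dut, ref, tag = CircularFifo(), deque(), 0
        for op in ops:
            if op == "push":
                if len(ref) < DEPTH:
                    ref.append(tag)
                dut.push(tag)
                tag += 1
            else:
                expect = ref.popleft() if ref else None
                if dut.pop() != expect:
                    return f"bug found for sequence {ops}"
    return "no mismatch in any bounded sequence"

print(check())
```

Formal tools achieve the same exhaustiveness symbolically, for unbounded sequences and realistic depths, which is exactly where the abstraction techniques described above become essential.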

From the FV courses, I strongly believe verification engineers can acquire all knowledge and skills required to formally verify complex designs.

The Complete FV flow: ADEPT

Most resources agree that FV can be used for exhaustive verification, but the question is: how? What is the overall process of FV? How can one verify the correctness of a formal testbench? When is it appropriate to sign off? These were questions I struggled with early on, as I couldn’t find any detailed standards or guidance. It took me considerable time to investigate, and I eventually realized that coverage was the key to answering these questions.

Axiomise addressed these challenges by developing ADEPT, the first industrial FV flow, which clearly defines the path to FV sign-off. The FV courses also introduce formal coverage. Coverage in FV is more comprehensive than coverage in simulation. These insights are invaluable for conducting efficient and confident FV workflows.

Benefits

Axiomise’s vision is to make formal normal and the FV courses effectively address three major misunderstandings about FV:

  1. FV is not a mystery. With the training from Axiomise, all engineers (whether they are design engineers or verification engineers) can (and should) use FV in all stages.
  2. FV is not a magic wand. A high-quality formal testbench is essential for effective bug hunting and exhaustive proof.  The FV courses provide all the necessary knowledge and skills to develop and evaluate such formal testbenches.
  3. Learning FV is not hard. Following the FV courses, even beginners can smoothly transition into formal verification engineers.

Summary

In summary, the Axiomise FV courses are an invaluable resource for anyone looking to master formal verification. I sincerely recommend the FV courses to all design and verification engineers.

Also Read:

The Convergence of Functional with Safety, Security and PPA Verification

An Enduring Growth Challenge for Formal Verification

RISC-V Summit Buzz – Axiomise Accelerates RISC-V Designs with Next Generation formalISA®


Accellera 2024 End of Year Update
by Bernard Murphy on 12-11-2024 at 6:00 am


From my viewpoint, standards organizations in semiconductor design always looked like they were “sharpening the saw”: further polishing/refining what we already have but not often pushing on frontiers. Very necessary of course to stabilize and get common agreement in standards, but equally always seeming to be behind the innovation curve. Given the recent trend toward prominent new technologies, particularly through system vendors getting into chip design, it is encouraging to realize that organizations like Accellera have already jumped (cautiously 😀) on opportunities to push on those frontiers. Standards are again acknowledging innovation in the industries they serve.

Progress in 2024

Here I’m just going to call out a few of the topics that particularly interest me, no slight intended to other standards under the Accellera umbrella.

Portable Test and Stimulus (PSS), defining a framework for system level verification, is one of these frontiers; the state space for defining system-level tests is simply too vast to be manageable with a bottom-up approach to functional verification. PSS provides a standard framework to define high-level system-centric tests, monitors, randomization, the kind of features we already know and love in UVM but here abstracted to system level relevance.

Coverage is such a feature, already provided in the standard but now with an important extension in the 3.0 update. RTL coverage metrics obviously don’t make sense at a system level. Randomization and coverage measurement should be determined against reasonable use-cases – sequences of actions and data conditions – otherwise coverage metrics may be misleading. PSS 3.0 introduces behavioral coverage to meet these needs.

You may remember one of my earlier blogs on work towards a Federated Simulation Standard (FSS). Quick summary: the objective is to be able to link together simulators in the EDA domain with simulators outside that domain, say for talking to edge sensors, drivetrain MCUs and other devices around the car, all communicating through CAN or automotive Ethernet. Similar needs arise in aircraft simulations.

This requires standards for linking to proprietary instruction set simulators and other abstracted models to enable an OEM/Tier1 to develop and test software against a wide range of scenarios. An obvious question is how this standard will fit with the Arm-sponsored SOAFEE standard. As far as I can see, SOAFEE seems to be mostly about interoperability and cloud-native support for the software layer of the stack, still leaving interoperability at the hardware and EDA level less defined. That’s where I suspect FSS will concentrate first. FSS is still at the working group and user group stage, with no defined release date yet, but Lu Dai (Accellera’s chair) says that pressure from the auto companies will force quick progress.

Expected in 2025

I have always been interested in progress on mixed signal standards. Analog and RF are becoming more entangled with digital cores in modern designs. For example, sensing demands periodic calibration to adjust for drift, DDR PHYs must align between senders and receivers, and RF PHYs now support analog beamforming guided by digital feedback. All of which must be managed through software/digital controlled interfaces into the analog functionality.

Software-digital-analog verification is a more demanding objective than allowed for by traditional co-simulation solutions, which increases the importance of real-number modelling (RNM) methods and UVM support. Lu tells me that the UVM-MS working group now has a standard ready for board approval, which he sees as likely to happen after the holidays.

There was a complication in achieving this goal insofar as it requires (in some areas) extensions to the SystemVerilog (SV) standard. SV is under the control of the IEEE rather than Accellera, and IEEE standards update only on a 5-year cycle. However, IEEE and Accellera work together closely, and Accellera is busy defining those extensions in a backward compatible way. This effort is expected to complete fairly soon, at which point it will be donated back to IEEE for consideration in their next update to the SV standard.

This all sounds complicated and still a long way off, but it seems that those Accellera recommendations are more or less guaranteed to be accepted into the next IEEE update. Tentatively (not an official statement), vendors and users might be able to proceed much sooner with more comprehensive UVM-MS development once tools, IPs, etc. are released to the interim standard.

Finally, Accellera is actively looking for new areas where it can contribute in support of the latest technologies. One area Lu mentioned is AI, though it seems discussion at this stage is still very tentative, not yet settled into any concrete ideas.

DVCon International Perspectives

DVCon, under the auspices of Accellera, is already well established in the US, Europe and India. Recently conferences launched in China, then Japan and then in Taiwan. Each of these offers a unique angle. Europe is big in system level verification and automotive given local interest in aerospace and the car industry. India is very strong in verification as many multinationals with Indian sites have developed teams with strengths in this area. (I can confirm this; I see quite a lot of verification papers coming out of India.)

Japan has a lot of interest in board-level design simulation, whereas Chinese interests cut across all domains. (I can also confirm this. Many research papers I review for the Innovation in Verification blog series come out of China.) DVCon activity in Taiwan is quite new and Accellera has chosen to collocate with related conferences like RISC-V. Good stuff. Wider participation and input can only strengthen standards.

Overall – good progress and I’m happy to see that Accellera is pushing on those frontiers!


Electrical Rule Checking in PCB Tools
by Daniel Payne on 12-10-2024 at 10:00 am


I’ve known about DRC (Design Rule Checking) for IC design, and the same approach can also be applied to PCB design. The continuous evolution of electronics has led to increasingly intricate PCB designs that require Electrical Rule Checking (ERC) to ensure that performance goals are met. This complexity poses several challenges in design verification, often resulting in errors, inefficiencies, and increased costs. This blog post examines these challenges and introduces HyperLynx DRC, an EDA tool from Siemens, to address them.

Modern electronic products demand enhanced functionality and performance, directly impacting the complexity of PCB design and verification. The use of complex components, high-speed interfaces, and advanced materials requires thorough PCB checks to guarantee optimal performance and reliability. This level of complexity often stretches the capabilities of traditional verification methods. 

Several factors contribute to the challenges in PCB design and verification:

  • Error-Prone Processes: The intricate nature of complex PCBs makes the design process susceptible to errors. Oversights and mistakes during layout, component placement, and routing can compromise product functionality and reliability. Undetected errors lead to revisions, rework, and possibly complete redesigns, impacting project timelines and budgets.
  • Infrequent Checks: The labor-intensive nature of PCB checking processes discourages frequent checks throughout the design cycle. Delays in verification lead to accumulated errors and inconsistencies, making fixes challenging and time-consuming.
  • Late-Stage Error Detection: Detecting design errors in later stages of development is inefficient, leading to more modifications, increased development time and costs, and delayed time-to-market. This is particularly critical in industries with rapid technological advancements.
  • Simulation Challenges: Traditional signal and power integrity simulations involve analyzing numerous objects, including nets, planes, and area-fills. Collecting simulation models and running simulations for each object is labor-intensive and time-consuming, often exceeding the benefits gained.

HyperLynx DRC

To address these challenges, Siemens developed HyperLynx DRC, a rule-based checker that identifies potential PCB design errors using geometrical calculations. The key features are:

  • Predefined Rules: The software comes with over 100 predefined rules addressing various aspects of PCB design, including signal integrity, power integrity, electromagnetic interference, electrostatic discharge, analog circuits, creepage, clearance, and IC package-specific checks.
  • Efficient Embedded Engines: HyperLynx DRC utilizes various embedded engines, such as the geometry engine, graph engine, field solver, and creepage engine, for efficiently checking diverse technical challenges.
  • Management of False Violations: The tool provides a feature for managing false violations, allowing users to create object lists, apply rules to specific objects, and eliminate unnecessary checks, significantly reducing checking time.
  • Enhanced Filtering Capability: HyperLynx DRC enables the creation of object lists manually or automatically, offering filtering capabilities to focus on relevant objects.

The extensive capabilities of HyperLynx DRC can lead to long rule-based geometrical run times for large and complex designs. To address this, HyperLynx DRC provides the area-crop function, allowing users to isolate and analyze specific areas of the design. 

The area-crop function streamlines the verification process through:

  • User-Friendly Interface: Users can quickly specify an area by selecting nets or components using a wizard.
  • Automated Cropping: The wizard automatically crops the design with predefined merging from the selected objects and creates a new project for checking.

This function enables users to concentrate on specific design areas, reducing complexity, enhancing accuracy and speeding up run times during verification.
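For readers new to rule-based geometric checking, the toy Python sketch below shows the general idea of a clearance rule combined with an area crop. It uses hypothetical example data and thresholds and is in no way HyperLynx DRC’s actual rule language, engines, or API:

```python
# Conceptual sketch: measure pad-to-pad clearance, but only for objects
# inside a user-selected region ("area crop"). Illustration only; not the
# HyperLynx DRC implementation.
from itertools import combinations
from math import hypot

# Each pad: (name, x_mil, y_mil, radius_mil) -- hypothetical example data.
pads = [("U1.1", 100, 100, 10), ("U1.2", 118, 100, 10),
        ("C3.1", 5000, 4200, 15), ("C3.2", 5024, 4200, 15)]

MIN_CLEARANCE = 6.0  # mil, a hypothetical rule threshold

def crop(objects, x0, y0, x1, y1):
    """Keep only objects whose center falls inside the selected area."""
    return [p for p in objects if x0 <= p[1] <= x1 and y0 <= p[2] <= y1]

def clearance_check(objects):
    """Flag every pad pair whose edge-to-edge spacing violates the rule."""
    violations = []
    for a, b in combinations(objects, 2):
        spacing = hypot(a[1] - b[1], a[2] - b[2]) - a[3] - b[3]
        if spacing < MIN_CLEARANCE:
            violations.append((a[0], b[0], round(spacing, 1)))
    return violations

area = crop(pads, 0, 0, 2000, 2000)    # top-left region only
print(clearance_check(area))           # [('U1.1', 'U1.2', -2.0)]
```

Restricting the check to the cropped region shrinks the number of object pairs that must be examined, which is where the large run-time savings on a real board come from.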

Case Study

MediaTek, a leading semiconductor company, used HyperLynx DRC’s area-crop function on a highly complex board. The board specifications were:

  • Layout file size: Over 300 MB
  • Layers: Over 40
  • Layout size: Over 22,000 mil × 16,000 mil
  • Components: Over 16,000
  • Nets: Over 11,000

The area-crop function was used as follows:

  • Segmentation of the Board: The board was divided into four sections using vertical and horizontal virtual cuts, creating top-left, top-right, bottom-left, and bottom-right areas. Two additional overlap zones were added at the intersecting regions to ensure thoroughness.
  • Accelerated Verification: Checking each section individually significantly reduced the overall run time, particularly for the complex GND signal Long Stub rule.
  • Reduced Complexity: Dividing the board into smaller sections simplified the intricate GND nets, enhancing performance and allowing for efficient error identification and resolution.

PCB layout with four areas selected

The implementation of the area-crop function yielded impressive results:

  • Time Reduction: Total checking time was reduced from 33 hours, 51 minutes, 53 seconds to just 44 minutes, a speedup of roughly 46X.
  • Enhanced Efficiency and Precision: Focusing on segmented areas allowed for more precise verification, ensuring design reliability and integrity without compromising the project timeline.
  • Optimized Resource Allocation: Large time savings and enhanced focus enabled optimized resource allocation, ensuring critical areas received proper scrutiny and facilitated a smoother design refinement process.

Run Times per area under Long Stub rule

Conclusion

HyperLynx DRC’s area-crop function is a powerful tool for PCB design verification. By enabling focused verification, reducing complexity, and significantly accelerating the checking process, HyperLynx DRC ensures project success and meets the challenges of modern PCB designs. This innovative solution ensures advancements in electronic products are characterized by reliability, precision, and efficiency.

Read the complete, 12-page white paper online.

Related Blogs