

Intel Presents the Final Frontier of Transistor Architecture at IEDM
by Mike Gianfagna on 12-23-2024 at 6:00 am

Intel Presents the Final Frontier of Transistor Architecture at IEDM

IEDM was buzzing with many presentations about the newest gate-all-around transistor. Both Intel and TSMC announced processes based on nanosheet technology. This significant process innovation allows the fabrication of silicon RibbonFET CMOS devices, which promise to open a new era of transistor scaling, keeping Moore’s Law alive. It seems fitting that Intel should be leading this charge, and the company’s innovation was on display at IEDM. The company presented “Silicon RibbonFET CMOS at 6nm Gate Length”. A summary of these results is shown in the graphic above. This technology was referred to as the last innovation for current transistor design. So, let’s examine how Intel presents the final frontier in transistor architecture at IEDM.

The Presenter

Dr. Ashish Agrawal

The work presented was a collaboration of many folks from Intel Foundry Technology Research and Intel Foundry Technology Development, both in Hillsboro, Oregon. The presentation was given by Dr. Ashish Agrawal, senior device engineer.

Ashish has been with Intel for over 10 years. His areas of focus include R&D in semiconductor front-end materials and physics, electrical characterization drawing on his background in device physics and materials science, and design-technology co-optimization (DTCO). The DTCO work includes analysis of novel materials, devices, and architectures for future scaled technology nodes.

Ashish presented a lot of results for this new transistor architecture. Let’s examine some of what was presented.

Some Results

To accurately characterize the true behavior of the RibbonFET at extreme gate length scaling, a novel single-nanoribbon (1NR) flow was developed in which the source/drain regions are disconnected from the subfin. This ensures accurate knowledge of transistor dimension data and precise probing of NR characteristics.

TEM micrograph

A TEM micrograph of the device is shown to the right. Through Si/SiGe epi stack innovation, the subfin is successfully disconnected from the S/D epi, in addition to a healthy and uniform inner spacer above and below the NR.

Ashish explained that gate length scaling below 10nm was achieved through innovation in gate lithography and a dummy polysilicon etch process. He went on to say that source/drain junctions and their doping profiles carry new meaning and new implications at the small gate lengths used in this work. Intel has done substantial work on the source/drain junctions to engineer short-channel effects and achieve the best performance possible from this highly scaled device.

Ashish also pointed out that, at a 6nm gate length, there is not enough room for a high-K dielectric, a dipole layer, and a work function metal to hit the Vt target. So, for this technology, the work function metal was optimized and engineered to achieve a low Vt close to the target. Process innovation was critical to achieving effective scaling below a 10nm gate length.

As the gate length is scaled below 10nm, the source/drain doping profile in the tip region needs to be carefully examined. Highly diffused junctions not only degrade short-channel effects but also leave remnant doping in the channel, which degrades performance due to poor mobility from ionized impurity scattering. In the figure below, (a) shows peak Gmlin for LG=18nm and LG=100nm, highlighting a 34% gain in Gmlin with the optimized junction Process B at short LG, while transconductance at long LG is matched. (b) shows drain-induced barrier lowering (DIBL) vs. process, indicating improved short-channel effects with the optimized Process B junction profile. (c) shows Rext vs. process, with matched Rext for both processes at a very low value, indicating no penalty from junction optimization.

Measurements across process
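
For readers less familiar with these figures of merit, DIBL measures how much the threshold voltage drops as the drain bias rises, a key signature of short-channel control. A common textbook extraction, stated here as general device physics background rather than Intel’s specific method, is:

    \mathrm{DIBL} = \frac{V_t(V_{DS,lin}) - V_t(V_{DS,sat})}{V_{DS,sat} - V_{DS,lin}} \quad [\mathrm{mV/V}]

A lower value means the gate, rather than the drain, controls the channel barrier, which is why the optimized Process B junction profile shows up directly as reduced DIBL.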

Ashish presented many more results that demonstrated effective scaling all the way down to a gate length of 6nm and nanoribbon thickness of 1.5nm. He concluded by saying that this work paves the path for continued gate length scaling, which is a cornerstone of Moore’s Law. And that’s how Intel presents the final frontier in transistor architecture at IEDM.

Also Read:

Intel – Everyone’s Favourite Second Source?

An Invited Talk at IEDM: Intel’s Mr. Transistor Presents The Incredible Shrinking Transistor – Shattering Perceived Barriers and Forging Ahead

What is Wrong with Intel?

 


Podcast EP267: The Broad Impact Weebit Nano’s ReRAM is having with Coby Hanoch

Podcast EP267: The Broad Impact Weebit Nano’s ReRAM is having with Coby Hanoch
by Daniel Nenni on 12-20-2024 at 10:00 am

Dan is joined by Coby Hanoch, who joined Weebit Nano as CEO in 2017. He has 15 years of experience in engineering and engineering management roles, and 28 years of experience in sales management and executive roles.

Coby describes the impact Weebit Nano’s ReRAM technology is having on embedded non-volatile memory across many markets and applications, thanks to its superior speed and endurance, lower power, less expensive manufacturing process, and ability to scale well with the rest of the on-chip design. Application areas include power management, automotive, AI and edge-based inference, and even aerospace.

Coby also describes some of the work underway at Weebit Nano to continue to move ReRAM technology into the mainstream.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


If you believe in Hobbits you can believe in Rapidus

If you believe in Hobbits you can believe in Rapidus
by Robert Maire on 12-20-2024 at 6:00 am

Rapidus Fab Japan

– Semicon Japan super crowded but outlook still uncertain/muted
– Slow analysts finally capitulate on weakening 2025 WFE outlook
– Article confirms our view on chip equip lobbying for China sales
– TSMC continues to dominate, China slows, Samsung/Intel weak

Semicon Japan crowded but muted

We attended Semicon Japan last week and it was super crowded; however, the overall tone was muted as most companies were concerned about overall spending weakening, especially at Samsung and Intel.

Semicon Japan is more like what Semicon West used to be. Actual tools and vendors on the show floor rather than ensconced in hotel rooms away from the public. Plenty of meaningful conversations about technology and tools.

Everyone wants to be linked to AI in some way, even though semiconductor equipment tools have zero to do with whether they are building an AI chip or a video game chip; it is all just microscopic manipulation of materials.

Seems like China continues to slow as they have enough equipment in warehouses to last for years. Whatever sanctions are on the way, they are too little and too late to be effective.

Rapidus de minimis

Rapidus, the Japanese government sponsored plan to bring Japan back into the semiconductor race, had a large booth with lots of conceptual drawings of the planned fab.

Rapidus looks like a giant-sized Hobbit House, with a rolling, grassy, environmentally friendly roof, obviously to try to disguise what is otherwise an ugly giant factory.

Much as the Hobbit World is a fantasy, we think Rapidus is quite the fantasy. IBM is one of the main partners and they haven’t been a commercial producer of chips in decades. Most of the other main players are close to retirement and haven’t been deep in the industry since Japan was a power player, also decades ago.

The bigger issue we see is that both Samsung and Intel, which are several generations behind TSMC, are struggling not to fall too far behind, yet Rapidus, which is many times further behind, will somehow miraculously leapfrog Samsung and Intel to catch up with TSMC? Good luck with that. Sounds a lot like the 5 nodes in 4 years we heard before.

But maybe if you believe in Hobbits you can believe in Rapidus.

Big, slow, investment house analysts finally capitulate on slowing 2025 WFE

Analysts from Citibank, Deutsche Bank and Bernstein finally got the memo that semiconductor spending in 2025 will be weak at best. They are all finally cutting 2025 outlooks after reality dawned on them.

This is something we have been saying for well over a year as the majority of the industry remains weak with a weakening China. We have pointed out time and again that only the bleeding edge, and most specifically AI is strong, both in logic and memory.

In our note a month ago we said “There had been an initial, more positive view by many analysts which now appear too optimistic and will have to be trimmed to be more conservative, to the view we have long held, of a slower recovery with more lumps and bumps along the way”

We remain very concerned about all the Chinese capacity coming on-line in the middle and trailing edge of the chip market flooding capacity and driving down pricing.

China has yet to have a significant impact on memory, but that’s coming too as they come up the technology curve very quickly.

Chip equip companies lobby congress to help China advance chips

We have mentioned in many of our notes that US semiconductor equipment companies spend tens of millions of dollars lobbying US lawmakers to keep their lifeline of China sales flowing.

The New York Times recently affirmed our view of the very strong lobbying on the part of the three major US semiconductor equipment makers: Applied Materials, Lam and KLA.

Link to NY Times article

We find it obviously hugely hypocritical to dilute or prevent sanctions on China based on the lobbying of equipment firms, yet support the CHIPS Act, whose aim is to prevent China’s dominance of the semiconductor industry.

It would be a very interesting study to see which members of the House and Senate took money from the equipment companies to oppose sanctions on China, yet voted for the CHIPS Act, which aims at exactly the opposite.

Obviously, politicians are almost always hypocritical and some of the equipment companies are also hypocritical by lobbying to sell more to China while holding out their hand for CHIPS Act funding. Not surprising behavior.

The Stocks

Obviously, the stocks have been a bit unstable and will likely remain so given the downgrades from the larger sell side analysts who are capitulating.

Micron is a focus that may determine some near-term sentiment. Nvidia has been somewhat stuck in the mud for a while now, waiting to break free, but the broader negative tone appears to be holding it back.

We may be going into a wait-and-see period regarding what the new administration does with tariffs and other potential restrictions.

In our view, it’s still very much unclear which way the new administration will go and if semiconductors will be a priority.

We also remain unclear about the incoming administration and the CHIPS Act. Even though the current administration has rushed to finalize deals before they leave town, we think that this doesn’t mean that chip companies will actually get paid. We must remember that Trump revels in stiffing contractors he owes money to and views debts as optional donations.

In addition, circumstances are changing enough that some companies may not make the milestones needed to get paid. In short, the CHIPS Act likely will not do all of what was intended, as it couldn’t all get done in the short four years of the administration that started it.

We are not motivated to buy any of the equipment stocks given the near-term negative news of WFE spend reductions now happening.

We are also not overly positive about the broader semiconductor industry, though we remain very enthusiastic about Nvidia and all things AI. Memory makers with an HBM product will also see benefit.

We may be in a holding pattern of uncertainty until the direction of the incoming administration takes shape and Nvidia starts to move up again.

With the holidays coming up, this uncertainty will likely persist for a while during what will be a slow news period.

About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies.

We have been covering this space longer, and been involved with more transactions, than any other financial professional in the industry.

We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors.

We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

AMAT has OK Qtr but Mixed Outlook Means Weaker 2025 – China & Delays & CHIPS Act?

More Headwinds – CHIPS Act Chop? – Chip Equip Re-Shore? Orders Canceled & Fab Delay

KLAC – OK Qtr/Guide – Slow Growth – 2025 Leading Edge Offset by China – Mask Mash

 



TSMC Unveils the World’s Most Advanced Logic Technology at IEDM
by Mike Gianfagna on 12-19-2024 at 10:00 am

TSMC Unveils the World’s Most Advanced Logic Technology at IEDM

There was a lot of discussion at IEDM about the coming shift to gate-all-around (GAA) transistor structures. This new device brings many benefits for continued device scaling, both at the monolithic device level and for multi-die design. The path to GAA is not simple; there are new material, process, and design considerations to tame. TSMC has devoted a substantial amount of effort here. Let’s look at some of the details disclosed when TSMC unveils the world’s most advanced logic technology at IEDM.

About the Presenter

Dr. Geoffrey Yeap

Dr. Geoffrey Yeap presented “2nm Platform Technology featuring Energy-efficient Nanosheet Transistors and Interconnects co-optimized with 3DIC for AI, HPC and Mobile SoC Applications” on Monday at IEDM. He is Vice President of R&D Advanced Technology at TSMC. Geoffrey has been at TSMC for almost nine years and previously led advanced work at Qualcomm, Motorola Mobility, AMD, and the University of Texas System Center for Supercomputing.

Geoffrey explained that the work he was presenting spanned four years and involved many staff members in TSMC’s Global R&D Center.

Presentation Overview

According to the IEDM press kit, this late-news paper presents the world’s most advanced logic technology. As the title says, the work is focused on a leading-edge 2nm CMOS platform technology (N2) developed and engineered for energy-efficient compute in AI, mobile and HPC applications. Geoffrey explained that since the generative AI breakthrough in Q1’23, AI together with 5G-advanced mobile and HPC has created a huge appetite in the semiconductor industry for best-in-class energy-efficient logic technology, and this work responds to that need.

Geoffrey described the state-of-the-art TSMC N2 technology and its successful transition into nanosheet (NS) platform technology, delivering a >140x acceleration in energy-efficient compute from 28nm to N2, as depicted in the graphic at the top of this post. The N2 logic technology features energy-efficient gate-all-around nanosheet transistors, middle-of-line and back-end-of-line interconnects, and the densest SRAM macro at ~38Mb/mm2. N2 delivers a full node benefit over the previous 3nm node, offering a 15% speed gain or a 30% power reduction with a >1.15x chip density increase.
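
A quick back-of-the-envelope check makes the >140x figure more tangible. The node sequence below is an assumption for illustration; TSMC’s actual per-node gains vary and are not broken out in the paper.

    # Implied average energy-efficiency gain per node from 28nm to N2.
    # The node list is assumed for illustration only.
    nodes = ["28nm", "20nm", "16nm", "10nm", "7nm", "5nm", "3nm", "2nm"]
    steps = len(nodes) - 1                 # 7 node transitions
    per_node = 140 ** (1 / steps)          # geometric mean gain per node
    print(f"~{per_node:.2f}x per node over {steps} nodes")  # ~2.03x

In other words, a roughly 2x energy-efficiency gain per node, compounded over seven transitions, is what a >140x cumulative improvement implies.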

The N2 platform technology is equipped with new copper scalable RDL interconnect, flat passivation, and TSVs. It is holistically co-optimized with TSMC’s 3DFabric™ technology, enabling system integration/scaling for the target AI/mobile/HPC product designs.

Geoffrey reported that N2 has successfully met wafer-level reliability requirements and passed 1,000 hours of HTOL qualification with high-yielding 256Mb HC/HD SRAM and a logic test chip (>3B gates) consisting of CPU/GPU/SoC blocks. N2 is currently in risk production. N2 platform technology is scheduled for mass production in the second half of 2025. N2P, a 5% speed-enhanced version of N2 with full GDS compatibility, is targeted to complete qualification in 2025 and go to mass production in 2026.

Some More Details

From a platform perspective, Geoffrey provided some details about the N2 NanoFlex™ technology architecture. System technology co-optimization (STCO) was utilized with smart scaling features rather than brute-force design rule scaling, which can drastically increase process cost and inadvertently cause critical yield issues. Extensive STCO coupled with smart scaling of major design rules (e.g., gate, nanosheet, MoL, Cu RDL, passivation, TSVs) was performed to optimize the technology for the target PPA.

He pointed out that co-optimization with 3DFabric SoIC 3D-stacking and advanced packaging technology (INFO/CoWoS variants) was done, thereby accelerating system integration/scaling for AI/mobile/HPC product designs. N2 NanoFlex standard cell innovation offers not only nanosheet width modulation but also a much-desired design flexibility of a multi-cell architecture.

This capability delivers N2 short cell libraries for area and power efficiency. He explained that selective use of tall cell library elements lifts the frequency to meet design targets. With six Vt offerings spanning 200mV, N2 provides unprecedented design flexibility to satisfy a wide spectrum of energy-efficient compute applications at the best logic density. The figure below illustrates some of the benefits of this approach for an Arm-based design.

N2 NanoFlex HD cell benefits

Geoffrey explained that N2 nanosheet technology exhibits substantially better performance/Watt than FinFET at the low Vdd range of 0.5V-0.6V. Emphasis is placed on low-Vdd performance/Watt uplift through continuous process and device improvements, resulting in a 20% speed gain and 75% lower stand-by power at 0.5V operation. N2 NanoFlex coupled with multi-Vt provides unprecedented design flexibility to satisfy a wide spectrum of energy-efficient compute applications at the most competitive logic density.

Geoffrey went into more details on the SRAM, logic test chip and qualification and reliability. This was an impressive presentation. The N2 technology platform brings a lot of new capability to the table for future innovation. And that’s some of the details about how TSMC unveils the world’s most advanced logic technology at IEDM.

Also Read:

IEDM Opens with a Big Picture Keynote from TSMC’s Yuh-Jier Mii

Analog Bits Builds a Road to the Future at TSMC OIP

Maximizing 3DIC Design Productivity with 3DBlox: A Look at TSMC’s Progress and Innovations in 2024



Ultra Ethernet and UALink IP solutions scale AI clusters
by Don Dingee on 12-19-2024 at 6:00 am

UALink and Ultra Ethernet roles in AI infrastructure clusters

AI infrastructure requirements are booming. Larger AI models carry hefty training loads and inference latency requirements, driving an urgent need to scale AI acceleration clusters in data centers. Advanced GPUs and NPUs offer solutions for the computational load. However, insufficient bandwidth or excessive latency between servers can limit AI performance, faster interconnects tend to chew up massive amounts of power, and scale magnifies these issues rapidly. Two new initiatives, Ultra Ethernet and UALink, target the scale-out and scale-up needs, respectively, of AI acceleration clusters. Synopsys brings proven Ethernet and PCIe IP, including its 224G Ethernet PHY, to its new Ultra Ethernet and UALink IP solutions to take on efficient, scalable data center interconnects.

Bringing Ultra Ethernet and UALink technology to data centers

“Moving all the data in and around an AI cluster running a large language model like Llama 3 and its successors poses interconnect challenges,” says Priyank Shukla, Principal Product Manager for Interface IPs at Synopsys. “By 2030, just the interconnects for training these models may consume 70% of data center power.” (See more AI data center 2030 insights from McKinsey.)

Advanced AI infrastructure cluster interconnects aren’t optional – LLM needs are already well beyond what a single GPU can accomplish. GPUs such as NVIDIA’s H100 are at the reticle limits, meaning the design consumes the largest fabricable die size, even in an advanced process, making adding more functionality on one chip difficult. Meta’s anecdotes on its Llama 3 training projects indicate 16,000 H100 nodes at work for 70 days. They also suggest that the model size doubles every four to six months, which will soon drive node counts to hundreds of thousands.
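
The growth claim is easy to sanity-check. The sketch below is purely illustrative arithmetic, not a Meta or NVIDIA roadmap: if accelerator counts track model size, a 16,000-GPU cluster crosses into the hundreds of thousands within about two years.

    # Project node counts if they double every 4 to 6 months.
    nodes = 16_000
    for months in (6, 12, 18, 24):
        low = nodes * 2 ** (months / 6)    # doubling every 6 months
        high = nodes * 2 ** (months / 4)   # doubling every 4 months
        print(f"{months:2d} months: {low:,.0f} to {high:,.0f} nodes")
    # 24 months: 256,000 to 1,024,000 nodes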

Two distinct aspects of the interconnect problem require different solutions. First is the classic bisectional bandwidth challenge of moving copious amounts of data between many nodes in a cluster, some within a rack and some several racks away, at low latency. Second is bringing potentially millions of endpoints in and out of the cluster, again at high speeds with low latency. Two recently formed consortiums, each with Synopsys as a member, worked quickly on new architectures to meet these exact challenges.

The Ultra Ethernet Consortium was formed with the backing of the Linux Foundation, seeking a supercomputing-ready scalable interconnect as an evolutionary path for Ethernet. Along with a faster PHY layer, Ultra Ethernet adds some twists to the technology to achieve scale out, including remote direct memory access (RDMA), packet spraying and out-of-order recovery, sender and receiver-based congestion control, link layer retry, switch offload, and more.

The UALink Consortium (short for Ultra Accelerator Link) provides an intelligent interconnect between accelerator nodes, including direct load, store, and atomic operations for software coherency. Based on the IEEE P802.3dj PHY layer, the initial release of the UALink specification defines a 200Gbps connection for up to 1024 accelerators. Scale up capability comes through a switched architecture, which connects nodes at low latency.

A simplified view of a few racks shows these concepts at work, with UALink between nodes and Ultra Ethernet as the broadside interface to the AI infrastructure cluster.

Proven Synopsys IP solutions speed implementation timeline

“Enabling scale up and scale out at once is a big story for our customers,” says Shukla. Keeping a foot in both consortia and aligning specification development with IP solution capability, Synopsys is already up and running with its Ultra Ethernet and UALink IP solutions. “Our efforts flowed from 25 years of IP solution development in Ethernet and PCIe technology and over 5,000 customer tapeouts.”

  • The Synopsys UALink IP solution comprises PHY, controller, and verification IP. The PHY is engineered for a 200Gbps per lane transfer rate. The controller implements memory-sharing capabilities in connecting up to 1024 nodes, and the verification suite provides protocol checking.
  • The Synopsys Ultra Ethernet IP solution starts with its proven 224G Ethernet PHY IP. It adds MAC and PCS controller layers to deliver 1.6Tbps SERDES links with minimal congestion, again with verification IP for advanced protocol features.

Both IP solutions optimize for power efficiency. “Think of it this way: if we save one picojoule of energy per bit, a data center may be able to save a gigawatt of interconnect power,” concludes Shukla. “We have low-risk IP solutions for Ultra Ethernet and UALink that are ready now for customer SoC designs.” With ecosystem interoperability established and engagements underway, he expects that Ultra Ethernet and UALink chipsets should emerge in the next 18 to 24 months.
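
The picojoule arithmetic rests on a simple relation: interconnect power equals energy per bit times aggregate bits moved per second, and each bit may traverse several links on its way through a cluster. The cluster numbers below are hypothetical, chosen only to show the shape of the calculation.

    # Power saved = (energy saved per bit) x (aggregate throughput).
    # All cluster parameters here are assumptions for illustration.
    links = 500_000                  # hypothetical number of links
    rate_bps = 1.6e12                # 1.6 Tbps per link
    utilization = 0.5                # assumed average utilization
    saving_j_per_bit = 1e-12         # 1 picojoule per bit
    watts = links * rate_bps * utilization * saving_j_per_bit
    print(f"~{watts/1e3:.0f} kW saved")  # ~400 kW for these assumptions

Scale the link count, per-link rate, and per-bit hop count up to the level of the world’s AI data centers and the savings grow accordingly.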

A three-year window from consortia formation through specification release to chipset products would be blazing fast. Still, Synopsys is confident because of its deep involvement in specification development and reuse of crucial IP elements. More details on the Ultra Ethernet and UALink IP solutions are available online from Synopsys.

News: Synopsys Announces Industry’s First Ultra Ethernet and UALink IP Solutions to Connect Massive AI Accelerator Clusters

Video: Industry First Ultra Ethernet and UALink IP

Blog post: Enabling Massive AI Clusters with the Industry’s First Ultra Ethernet and UALink IP Solutions

Also Read:

Synopsys Brings Multi-Die Integration Closer with its 3DIO IP Solution and 3DIC Tools

Enhancing System Reliability with Digital Twins and Silicon Lifecycle Management (SLM)

A Master Class with Ansys and Synopsys, The Latest Advances in Multi-Die Design



Reset Domain Crossing (RDC) Challenges
by Daniel Payne on 12-18-2024 at 10:00 am


In the early days, an IC had a single clock and a single reset signal, making it a simple matter to reset the chip into a known, stable state, so there was little need for detailed analysis. Modern designs can have dozens to hundreds of clocks, creating separate domains, along with some use of asynchronous resets, so the challenge of ensuring correct reset logic has become quite complex. If the reset tree has any logic errors, like metastability, glitches or functional failures, then a costly design spin ensues. I’ve read a white paper from Siemens on this topic and share my insights in this blog.

A Reset Domain Crossing (RDC) tool performs static verification to prevent unpredictable behavior, so it first analyzes RTL code to find all the reset logic, then classifies each reset:

  • Synchronous or asynchronous
  • Active high or low
  • Set or reset

Static analysis further identifies the type of reset tree and its control flow.

Origin of reset trees

Siemens proposes an advanced structural check for a thorough RDC verification.

RDC verification methodology

Consider the case where an asynchronous signal merges with a reset signal before reaching the register; signal FF1 may cause a glitch in the reset path.

An RDC path is flagged between registers Tx and Rx, caused by signal combo_rst. The design team then fixes the error by changing the reset domain to make it synchronous with the reset signal, or by specifying a constant or a stable signal on the asynchronous signal.
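
To make the structural check concrete, here is a toy sketch of the reset-domain bookkeeping described above. The data structures are invented for illustration and bear no relation to Questa RDC internals.

    # Toy RDC structural check: flag a data path whose source register can
    # be reset asynchronously by a domain that differs from the receiver's.
    registers = {
        "Tx": {"reset": "rst_a", "is_async": True},
        "Rx": {"reset": "rst_b", "is_async": True},
    }
    paths = [("Tx", "Rx")]  # register-to-register paths from the netlist

    for src, dst in paths:
        s, d = registers[src], registers[dst]
        if s["is_async"] and s["reset"] != d["reset"]:
            print(f"RDC violation: {src} ({s['reset']}) -> {dst} ({d['reset']})")

A real tool performs this classification on the elaborated RTL for every register pair, then applies the glitch and metastability checks described in the white paper.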

Non-resettable registers (NRR) on asynchronous reset paths may lead to an RDC issue in the reset path of Rx:

Questa RDC has a reset integrity check that identifies logic on NRRs, then a design engineer fixes the metastability issue.

If an asynchronous reset signal Rst is also used as data, then it will create an RDC error, so the designer will be alerted to fix their RTL code to avoid this type of logic. If you really want this logic, then add a waiver to Questa RDC.

Case Study

The Questa RDC tool was run on three designs of varying sizes and complexities to give you an idea of how the static analysis provides thorough feedback to RTL designers on basic and advanced reset tree issues.

Design Complexity Comparisons

Structural issues were found in each of the three designs caused by basic reset tree issues.

There was a wide range of advanced reset tree issues identified by RDC validation.

Summary

The Questa RDC verification approach is quite thorough, ensuring the integrity of the reset tree logic and avoiding subtle errors caused by metastability, glitches and functional bugs. Both basic and advanced RDC verification are required so that design teams can fix errors before tapeout, ensuring the most reliable and safe operation of a new chip. The earlier you find these RTL issues, the more solid your design becomes and the quicker your team learns best RDC practices.

Read the entire 13-page white paper, Effective identification of reset tree bugs to mitigate RDC issues.




CEO Interview: Slava Libman of FTD Solutions
by Daniel Nenni on 12-18-2024 at 6:00 am

Slava Libman preferred

Slava Libman, Ph.D. is the CEO of FTD solutions, the provider of facility management solutions that improve sustainability and enhance efficiency in industrial facilities. With more than 25 years of experience in water technology, Dr. Libman is a sought-after speaker and an active leader in the water and semiconductor industries. His contributions include more than 60 publications, conference organization, and presentations. He is actively involved in policy and climate technology related to ultra-pure water, water and wastewater treatment, desalination, filtration, reverse osmosis, biological wastewater treatment, and semiconductor water technology roadmap and standards.

Tell us about your company?

FTD solutions is the leading provider of digital twin technology supporting semiconductor industry sustainability. We deliver advanced, sustainable solutions for industries with complex water use demands. We optimize water efficiency, reuse, and recycling to help companies achieve significant resource savings and enhance their sustainability practices.

FTD solutions addresses environmental goals through digital transformation in industrial water management with a solution called the Water Management Application (WMA). With our Facility Management Application (FMA), we unlock sustainable processes that save money, energy, and more in industrial facilities where the environmental footprint is significant, where water and energy conservation are valuable, and where greenhouse emission control is mission critical. Recently, we closed our Series A funding round with an investment from Ecolab.

What problems are you solving?

Many industrial processes require high water utilization, generate wastewater, and consume energy – the supply and costs for which are not sustainable in the long term. At the same time, the complexity of semiconductor facilities makes it difficult to optimize their performance without advanced information management tools and top-level expertise. FTD solutions is solving these problems for industrial clients by saving water and energy while reducing carbon and greenhouse gasses – all with a strong return on investment. The FTD platform transforms water system data into actionable insights, enabling enterprises to standardize and enhance water conservation efforts, even across sites. The solution is supported by a patented algorithm and expert guidance. The WMA and FMA ensure compliance, optimize infrastructure, and significantly reduce water usage. These solutions deliver substantial cost savings.

What application areas are your strongest?

We have strong expertise in the semiconductor industry, but many other industries also benefit from FTD solutions. We serve the refinery, manufacturing, water, chemical, food and beverage, machining, and pharmaceutical industries as well.

Our strongest application areas include industrial water management, ultrapure water technology, closed-loop water systems, and advanced water recycling and reuse strategies. We are particularly adept in supporting manufacturing and industrial sectors that require precise water balancing and resource recovery, making water systems more resilient and cost-effective.

What keeps your customers up at night?

Our customers have persistent concerns regarding water: its availability, and its processing before, during, and after their industrial processes. They are awake at night worrying about sustainable design and operation of their facilities while supporting uninterrupted high-yield manufacturing, meeting water use regulations, and managing costs associated with water treatment and disposal. Many also worry about operational risks from water scarcity and want to ensure they are using resources as efficiently as possible. Our solutions provide peace of mind by helping them address these concerns head-on.

What does the competitive landscape look like, and how do you differentiate?

FTD has a patented solution that is unlike any other in the marketplace. While the competitive landscape includes traditional water consultants, technology providers, and sustainability firms, none of these firms can provide the same intelligence and clear opportunities for saving resources and money within customer processes. The uniqueness of FTD’s role is in the integration of its capabilities into customer processes, providing an extension to internal expertise and an enhancement to information management systems. FTD maintains an independent and objective view on defining the best solutions for site-specific customer needs, while FTD systems and processes are designed to be non-disruptive to customer operations. As such, FTD solutions is differentiated because it combines deep industry expertise with a focus on sustainable, data-driven solutions that align with each client’s unique water demands and regulatory challenges. Our customized approach ensures clients receive practical, scalable strategies tailored to their specific goals.

What new features/technology are you working on?

With the investment from Ecolab, FTD solutions is accelerating the development of its software and support systems. This includes the enhancement of our digital twin technology which will enable greater optimization of operations and the reduction of facilities’ environmental footprint. These are designed to empower industries to achieve their sustainability goals more effectively. FTD works to drive significant impact within the industrial sector, helping to foster environmentally responsible practices and enhance operational efficiency.

How do customers normally engage with your company?

Customers initially engage with us through consultations, water audits, and system evaluations. We dissect the customer’s needs and propose optimized solution strategies within business boundaries and priorities. We partner closely to understand customer processes, identify pain points, and deliver tailored recommendations that allow them to meet their goals. Our engagements often involve both hands-on support and continuous monitoring, ensuring customers can adapt to new challenges and achieve their long-term water sustainability objectives. Customers can also check out our website or follow us on LinkedIn!

Also Read:

CEO Interview: Caroline Guillaume of TrustInSoft

CEO Interview: Mikko Utriainen of Chipmetrics

CEO Interview: GP Singh from Ambient Scientific



Intel – Everyone’s Favourite Second Source?
by Peter Bennet on 12-17-2024 at 10:00 am


A response to Daniel Nenni’s “What’s Wrong with Intel?” article, which invited alternative views.

At the risk of calling down the forecast universal opprobrium, I’m going to disagree with Dan’s take on the centrality of Intel.

I don’t agree that Intel is too big/important to fail or that the US can’t succeed in semiconductors without it. Reading the comments on SemiWiki suggests I’m in a minority here, but far from alone. Perhaps it’s easier for me to say this, coming from the UK with our widespread tall poppy syndrome (anything big and successful is automatically suspect) and less emotional investment in Intel.

Nothing here is intended as any criticism of Intel’s people. It just feels now like Intel is fighting forces which can be delayed, but not ultimately resisted. Caught in a pincer between the success of the fabless design model and the relative decline of the x86 business, it arguably can’t sustain leading edge manufacturing without both massive external support and becoming a commercial foundry. But is that what Intel really wants and can succeed at? Broad line customer service was never in their DNA. Why not just split off the foundry side and keep the product group?

In some ways I hope I’m wrong here. A lot of people are working very hard to try to right the ship at Intel, facing possibly the hardest challenge ever seen in the semiconductor business. Much as some of us disagreed with Pat Gelsinger’s IDM2.0 plan and sometimes loose talk, you had to admire his sheer guts and determination. He felt like the last link back to the real men (who always had fabs) of the 70s and 80s – perhaps the last of the IDM true believers. But that world is finally slipping out of view in the rearview mirror.

Companies come and go – they have lifecycles just like their products. The average lifespan of a US company is only 15 years (having dropped from 67 years in the 1920s). At some point, even Intel will wither away and we’ll continue on regardless. Perhaps even CMOS will go the way of TTL, NMOS (as used on the 8086 and 80186) and all those other technologies we barely remember now.

If the US feels in a mess today with all its eggs in the TSMC/Taiwan basket, it is one entirely of its own choosing. It’s tempting to assume here that the choices of US companies and governments over the last three decades were consciously made and that the defense of Taiwan was factored into the cost-benefit analysis of those choices. But apparently not …

So now we are asking Intel to bail out the US by providing a domestic commercial foundry business. Every bit as much as many hope for the opposite – that the US bail out Intel.

Intel built itself – at least over the past 40 years – largely as a high performance microprocessor company. We’re now asking it to become something quite different. Even if that’s possible, I’m not convinced that’s what Intel really wants to – or should – do. It may fit the narrative which demands that Intel serve some vital national security role and start operating as a far more customer service oriented foundry business serving a much wider range of customers and designs. But can you really convert an America’s Cup foiling catamaran into a not quite so fast, but more versatile monohull racing yacht which doesn’t drop off its foils and come to a halt in lighter winds? And in the middle of the race?

What really matters is that technology continues to advance. And from a US perspective, that it retains a leadership position and effective strategic independence in semiconductors (note: I think that’s tolerable from a Western, non-US perspective, since the rest of us have lived with it for around 60 years already).

If we’re demanding that the US have its own foundry company, why not start from a blank sheet? Instead of committing the cardinal engineering error of writing a solution (Intel) into the spec instead of a requirement (we want our own foundry), create a new company. After all, isn’t that what the US does best? And give the US government a stake if it’s putting up funding – a real, financial stake and not one in micromanaging employment policies. Split the foundry completely off from Intel and get rid of the current conflict of interest with Intel’s product groups. Recall some lessons from the SIA about industry collaboration and pull in talent from other companies. What you end up with may be 80 or 90% from Intel, but it needs to be a fresh start.

It’s often argued that Intel’s product and design groups gain some unique advantage from having close collaboration with Intel’s fabs or that they wouldn’t be competitive without this link. That certainly held in the past, but is far from certain today. Some claim Intel’s design teams are world class and others that they aren’t. Looking from outside with no direct knowledge, it all seems rather confusing and contradictory. Yet we know that for over 40 years Intel have reliably designed and produced some of the most complex, fastest chips seen. We’ve also seen AMD survive and thrive moving from internal fabs to TSMC. So what are we worried about here?

If you think Intel foundry shouldn’t be split off, just remember this: the risk that Intel becomes a follower, second best in everything it does. Intel will be behind TSMC in foundry, nVidia in AI and arguably AMD in x86. Is that what we really want for Intel – to be everyone’s favourite second source?

By all means have a US national foundry champion. Just do it properly. And don’t call it Intel. Let the Intel product group focus and return to its historic excellence. Shoehorning today’s Intel into the IDM 2.0 model won’t help Intel survive. And it won’t ultimately help the US.

I may well be wrong. But however Intel’s future plays out, looking back in 10 or 20 years time, we’ll likely have forgotten today’s chaos and confusion and view the outcome as something that was never in doubt. As Kierkegaard observed, “Life can only be understood backwards; but it must be lived forwards”. Living forwards isn’t going to be easy for Intel for some time. But it can survive. Though perhaps only as separate product and foundry businesses, with only the first called Intel.

Also Read:

What is Wrong with Intel?

3D IC Design Ecosystem Panel at #61DAC

Intel’s Gary Patton Shows the Way to a Systems Foundry #61DAC



An Invited Talk at IEDM: Intel’s Mr. Transistor Presents The Incredible Shrinking Transistor – Shattering Perceived Barriers and Forging Ahead
by Mike Gianfagna on 12-16-2024 at 10:00 am

An Invited Talk at IEDM: Intel’s Mr. Transistor Presents The Incredible Shrinking Transistor – Shattering Perceived Barriers and Forging Ahead

IEDM turned 70 last week. This was cause for much celebration in the form of special events. One such event was a special invited paper on Tuesday afternoon from Intel’s Tahir Ghani, or Mr. Transistor as he is known. Tahir has been driving innovation at Intel for a very long time. He is an eyewitness to the incredible impact of the Moore’s Law exponential, and his work has made a measurable contribution to the growth of that exponential.

Tahir treated the audience to a colorful perspective on how we’ve arrived at the current level of density and scaling. Pervasive AI will demand substantial improvement in energy efficiency going forward, and Tahir took the opportunity to call the industry to action to address these and other challenges as we move toward a trillion-transistor system in package. Here are some of the comments from this invited talk at IEDM as Intel’s Mr. Transistor presents the incredible shrinking transistor – shattering perceived barriers and forging ahead.

About the Presenter

Dr. Tahir Ghani

Dr. Tahir Ghani is a senior fellow and director of process pathfinding in Intel’s Technology Research Group. Tahir has a 30-year career at Intel working on many innovations, including strained silicon, high-K metal gate devices, FinFETs, RibbonFETs, and backside power delivery (BSPD), among others. He has filed more than 1,000 patents over his career at Intel and was honored as Intel’s 2022 Inventor of the Year. He has the nickname of “Mr. Transistor” since he’s passionate about keeping Moore’s Law alive.

About the Talk

Besides IEDM turning 70 this year, Moore’s Law will turn 60 next year. Tahir used this milestone to discuss the innovation that has brought us here and what needs to be done going forward to maintain the Moore’s Law exponential.

Tahir began by discussing a remarkable milestone that lies ahead – one trillion transistors within a package by the end of this decade. He took a sweeping view of the multiple waves of innovation that drove transistor scaling over the last six decades. The graphic at the top of this post presents a view of the journey, from system-on-chip to systems-in-package scaling.  Tahir then presented the key innovations in this journey – past, present and future.

FIRST ERA: 1965 – 2005

The first four decades of Moore’s Law saw exponential growth in transistor count and enabled multiple eras of computing, starting with the mainframe and culminating in the PC. During this time, a second effect called Dennard scaling became important alongside Moore’s Law.

Robert H. Dennard co-authored a now-famous paper in the IEEE Journal of Solid-State Circuits in 1974. Dennard and his colleagues observed that as transistors are reduced in size, their power density stays constant. This meant that the total chip power for a given area stayed the same from process generation to process generation. Given the exponential scaling of transistor density predicted by Moore’s Law, this additional observation promised faster, cheaper, and lower power devices.
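
The arithmetic behind that observation is worth spelling out. With dynamic switching power P = CV²f and an ideal scaling factor κ, capacitance and voltage each shrink by 1/κ while frequency rises by κ:

    P = CV^2f \;\rightarrow\; \frac{C}{\kappa}\cdot\frac{V^2}{\kappa^2}\cdot\kappa f = \frac{P}{\kappa^2}

Since transistor density rises by κ², power per unit area stays constant, which is exactly the happy marriage with Moore’s Law described next.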

Tahir explained that the happy marriage between Moore’s Law and Dennard scaling ushered in something he called the golden era of computing. The era was made possible by numerous innovations in materials and process engineering, most important being the consistent scaling of gate dielectric thickness (Tox) and the development of progressively shallower source/drain (S/D) extensions, which enabled scaling of gate lengths from micron-scale to nanometer-scale while lowering transistor threshold voltage (Vt).

From my point of view, these were the days when semiconductor innovation came from the process teams. If you could get to the next node, you’d have a faster, smaller and lower power product that would sell. Tahir explained that by 2005, power density challenges and the breakdown of Dennard scaling meant it was time for a new approach, which brings us to the present day.

SECOND ERA: 2005 – PRESENT

Tahir explained that during the last 20 years, technologists have shattered multiple seemingly insurmountable barriers to transistor scaling, including perceived limits to dimensional scaling, limits to transistor performance, and limits to Vdd reduction. This era marked the emergence of mobile computing, which shifted the focus of transistor development from raw performance (frequency) to maximizing performance within a fixed power envelope (performance-per-watt).

Many of the innovations from this era in materials and architectures came from Intel. In fact, Tahir has been in the middle of this work for many years. This work expedited the progress of groundbreaking ideas from research to development to high-volume manufacturing. Tahir explained that these innovations ushered in an era of astonishing progress in transistor technology over two decades. He discussed three important innovations from this time.

SEMINAL TRANSISTOR INNOVATIONS

  • Mobility enhancement leading to uniaxial strained silicon. In 2004, a novel transistor structure introduced by Intel at the 90nm node incorporated compressive strain for PMOS mobility enhancement. Intel’s uniaxial strain approach was in stark contrast to the biaxial strain approach pursued by the research community and turned out to be superior for performance and manufacturability. Moreover, this architecture proved scalable and enabled progressively higher strain and performance over the years.
  • Tox limit leading to Hi-K dielectrics and metal gate electrodes. Intel explored multiple approaches to introduce Hi-K gate dielectrics coupled with metal gate electrodes, including “gate-first,” “replacement-gate,” and even fully-silicided gate electrodes. The replacement metal gate flow adopted by Intel at the 45nm node in 2007 continues to be used in all advanced node processes to this day.
  • Planar transistor limits lead to FinFETs. The scaling of the planar transistor finally ran out of steam after five decades, mandating a move to the 3D FinFET structure. Intel was the first to introduce FinFETs into production at the 22nm node in 2011. Nanometer-scale fin widths enabled superior short-channel effects and, thus, higher performance at lower Vdd. The figure to the right (fin profile improvements at Intel) illustrates the evolution of the fin profile over the last 15 years. The 3D structure of fins resulted in a sharp increase in effective transistor width (Zeff) within a given footprint, leading to vastly superior drive currents; see the expression below this list.
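
The drive current benefit follows from simple geometry: each fin conducts along both sidewalls plus the top, so for N fins of height H_fin and width W_fin the effective width is approximately

    Z_{\mathrm{eff}} \approx N\,(2H_{\mathrm{fin}} + W_{\mathrm{fin}})

Taller, straighter fins therefore translate directly into more effective width, and more drive current, in the same footprint.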

LOOKING AHEAD: THE NEXT DECADE

Tahir made the observation that the seventh decade of Moore’s Law coincides with the emergence of yet another computing era. He pointed out that AI will redefine computing and is already causing a tectonic shift in the enabling silicon platform from general-purpose processors (CPUs) to domain-specific accelerators (GPUs and ASICs).

Gate all around (GAA) transistor

He went on to say that this shift in computing platform also coincides with another inflection in transistor architecture. By completely wrapping the gate around the channel, the gate-all-around (GAA) transistor is poised to replace the FinFET. GAA transistors deliver enhanced drive current and/or lower capacitance within a given footprint, superior short-channel effects, and a higher packing density. The figure at the right shows what a GAA device looks like in silicon.

Looking ahead, he said the GAA architecture will likely be succeeded by a stacked GAA architecture with N/P transistors stacked upon each other to create more compact, monolithic 3D compute units. Looking further ahead, he explained that 2D transition metal chalcogenide (TMD) films are being investigated as channel material for further Leff scaling, but many issues are still to be addressed.

CALL TO ACTION: NEW TRANSISTOR

Tahir concluded his talk with a sobering observation: worldwide energy demand for AI computing is increasing at an unsustainable pace. Transitioning to chiplet-based system-in-package (SiP) designs with 3D stacked chips and hundreds of billions of transistors per package will increase heat dissipation beyond the limits of current best-in-class materials and architectures. Breaking through this impending “Energy Wall” will require coordinated and focused research toward reducing transistor energy consumption and improving heat removal capability. A focused effort is necessary to develop a new transistor capable of operating at ultra-low Vdd (< 300mV) to improve energy efficiency.
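
The case for ultra-low Vdd follows from the quadratic dependence of switching energy on supply voltage, E_dyn ∝ CVdd². Taking a 0.7V baseline purely for illustration:

    \frac{E_{0.3\,\mathrm{V}}}{E_{0.7\,\mathrm{V}}} = \left(\frac{0.3}{0.7}\right)^2 \approx 0.18

That is roughly a 5x reduction in switching energy per operation, which is why sub-300mV operation is the target despite the performance and variability penalties Tahir describes next.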

He went on to point out that ultra-low Vdd operation can lead to significant performance loss and increased sensitivity to variability, requiring circuit and system solutions to be more resilient to variation and noise. This suggests the need for a strong collaboration between the device, circuit, and system communities to achieve this important goal. There are many ways to attack this problem.

Tahir reviewed a few, including Tunnel FET (TFET), Negative Capacitance FET (NC-FET), and Ferroelectric FET (FE-FET). All have significant obstacles to overcome. New materials and new structures will need to be explored.

Conclusion

Dr. Tahir Ghani covered a lot of ground in this exceptional review of past, present and future challenges for semiconductor scaling. The best way to end this discussion is with an inspirational quote from Tahir.

“At every significant inflection in the past, when challenges to continued transistor scaling seemed too daunting, technologists across industry and academia forged new paths to enable the arc of exponential progress to continue unabated. There is no reason to believe that this trend will not continue well into the future. There is still plenty of room at the bottom.”

Tahir recently did a Semiconductor Insider’s podcast on SemiWiki. You can hear some of his views in this compelling discussion here. And that’s how Intel’s Mr. Transistor presents the incredible shrinking transistor – shattering perceived barriers and forging ahead.

Also Read:

What is Wrong with Intel?

3D IC Design Ecosystem Panel at #61DAC

Intel’s Gary Patton Shows the Way to a Systems Foundry #61DAC



Certification for Post-Quantum Cryptography gaining momentum
by Don Dingee on 12-16-2024 at 6:00 am

NIST A6046 certificate for Secure-IC, the first security IP and software vendor to achieve certification for post-quantum cryptography

A crucial step in helping any new technology specification gain adoption is certification. NIST has been hard at work establishing more than post-quantum cryptography algorithms – they’ve also integrated the new algorithms into their process for third-party validation testing to ensure implementations are as advertised. Secure-IC is the first security IP and software vendor to achieve official worldwide NIST algorithm certification for post-quantum cryptography (PQC) software and secure element IP. Here’s a brief look at what NIST certification entails and what Secure-IC achieved.

An overview of NIST certification for crypto algorithms

NIST created its Cryptographic Algorithm Validation Program (CAVP) in 1995 to test FIPS-approved, NIST-recommended algorithms. Testing occurs on an Automated Cryptographic Validation Test System (ACVTS) with a NIST-controlled hardware environment. NIST offers a Demo ACVTS server as a sandbox environment and a Production ACVTS server accessible only by accredited third-party cryptographic and security testing (CTS) laboratories. Only tests by third-party CTS labs on the Production ACVTS server can advance as evidence for obtaining a CAVP certificate.

ACVTS spans capabilities for supported algorithms, including parameters such as message length, and automatically generates test cases and vectors for robust coverage. The vectors feed an implementation candidate, which runs its functions and provides outputs back to ACVTS. A correctness score is returned for each algorithm in a test session. This approach keeps ACVTS testing black-box: NIST never sees implementations, since they are not uploaded to the ACVTS server; only vectors are sent and outputs returned.
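
As a mental model of a test session, consider the hypothetical harness below. The field names and file layout are invented for illustration, and SHA-256 stands in for an algorithm under test since no PQC implementation ships with Python; the point is only that vectors come in, responses go out, and the implementation itself never leaves the lab.

    # Hypothetical shape of a CAVP-style black-box test session.
    import hashlib
    import json

    vectors = json.loads('[{"tcId": 1, "msg": "616263"}]')  # from the server
    responses = []
    for tc in vectors:
        digest = hashlib.sha256(bytes.fromhex(tc["msg"])).hexdigest()
        responses.append({"tcId": tc["tcId"], "md": digest})
    print(json.dumps(responses))  # returned for correctness scoring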

NIST keeps the CAVP suite current, retiring outdated algorithms and incorporating new advancements as they become approved. CAVP online documentation contains a current list of algorithms and their specifications, validation testing requirements, validation lists, and test vectors.

Moving from PQC algorithms to crypto module certification

PQC algorithms are now part of the CAVP suite, and validation testing of PQC implementations can ensue. Since we last discussed PQC here, some of its algorithms received more formal, technically accurate names from NIST. CRYSTALS-Kyber is now known as ML-KEM (module-lattice-based key-encapsulation mechanism), and CRYSTALS-Dilithium now goes by ML-DSA (module-lattice-based digital signature algorithm).

Secure-IC conferred with an in-country CTS-accredited lab, SERMA Safety and Security, to validate its Securyzr™ neo product for PQC. A summary of the algorithm tests appears in the NIST validation certificate, A6046, dated October 30, 2024. Secure-IC focuses on optimizing its implementations for fast throughput in SoC-optimized IP blocks ready for hardware design.

CAVP validation is crucial because compliance is ultimately a function of the complete system context for an implementation, as with many specifications. CAVP is a mandatory prerequisite for certifying a cryptographic module, a combination of hardware and software in an end product. NIST also shepherds a Cryptographic Module Validation Program (CMVP), transitioning from FIPS 140-2 compliance to FIPS 140-3, reflecting the recommendation for PQC implementations. A full FIPS 140-2 sunset date of September 2026 incentivizes module designers to get moving with their CMVP validation. Any system requiring cryptographic protection must conform to FIPS 140-3 requirements – with PQC incorporated – by that date.

Secure-IC is committed to helping its customers navigate these requirements and quickly bringing PQC into the mainstream. Their PQC-enabled solutions are configurable and scalable to meet a range of cryptography needs, with an eye on performance and power efficiency. Their achievement of certification for post-quantum cryptography algorithms puts their customers ahead in the race to protect platforms from advanced cybersecurity threats. More information is available in a press release from Secure-IC, which includes more details on the Securyzr neo product certification, links to the official NIST certificate, and background on the cooperation with SERMA Safety and Security.

Secure-IC obtains the first worldwide CAVP Certification of Post-Quantum Cryptography algorithms, tested by SERMA Safety & Security

Also Read:

Facing challenges of implementing Post-Quantum Cryptography

Secure-IC Presents AI-Powered Cybersecurity

How Secure-IC is Making the Cyber World a Safer Place