Podcast EP261: An Overview of the Upcoming 70th IEDM with Dr. Gaudenzio Meneghesso
by Daniel Nenni on 11-15-2024 at 9:00 am

Dan is joined by Dr. Gaudenzio Meneghesso, IEDM 2024 Publicity Co-Chair, and Head of the Department of Information Engineering at the University of Padua in Italy.

Dan explores the program for the upcoming IEDM event with Gaudenzio. This conference covers a wide range of innovations that have significant impact on the semiconductor industry.

Gaudenzio discusses four “grand challenges” that will be explored at IEDM: device scaling, memory architectures and in-memory compute, chip packaging, and power efficiency. In the area of power efficiency, the impact of new devices based on compound semiconductor technology will be explored.

The demands of AI performance and the associated impact on semiconductors will also be presented. Other high-profile topics include nano-sheet transistors and high-density aligned carbon nanotubes, among others.

The 70th IEDM will be held December 7-11, 2024 in San Francisco. You can learn more about this important conference and register to attend here.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


More Headwinds – CHIPS Act Chop? – Chip Equip Re-Shore? Orders Canceled & Fab Delay
by Robert Maire on 11-15-2024 at 8:00 am

CHIPS Act Semiconductor USA

– CHIPS Act more likely to be maimed & cut than outright killed
– Will Legislators reverse flow of equipment to Reshore from Offshore?
– Recent order cuts, Fab Delay & SMIC comments are all negative
– News flow for semi equipment all bad in front of AMAT

CHIPS Act Chops likely to occur under new administration

In the days leading up to the election, Trump made it crystal clear he thought the CHIPS Act was a “bad deal”. Then Mike Johnson, following his lead, said he would probably want to repeal the CHIPS Act.

Even though some analysts and investors will say the incoming administration can’t do anything because deals are signed, there is certainly plenty that can be done to delay, prevent, modify, question and generally screw with even a done deal… especially if you are the new administration that writes the checks for said deal.

We don’t think that the deal with Intel will get torpedoed but the TSMC deal has some risk. Samsung Texas will likely get done. Micron in Idaho is probably safe but the Micron Clay, New York fab is likely toast.

We think the administration will also look to undo the chip design center in California just to spite a blue state and Governor Newsom. Likewise, Clay, New York was Schumer’s baby in an equally blue state.

No matter what, it’s going to be different than anticipated, as the incoming administration will influence it much as the outgoing administration did.

Take a look at the electoral map, then look at the CHIPS Act map, if you want an idea:

At the end of the day, reduced CHIPS Act spending most directly means less spending on semiconductor equipment, as roughly 90% of the cost of a new fab is in the equipment…

Cancellations & delays last week

We heard of some large cancellations coming out of a large US chip maker last week. Although likely anticipated, it’s always a negative when it actually happens.

We also heard that Micron’s new Idaho fab has been delayed. While this is not a near-term issue, it adds to the increasing headwinds. It also increases the chances that Micron’s Clay, NY fab will be further delayed, if not outright canned.

SMIC comments on China semi equipment indigestion

Adding to the cancellation/delay headwinds, we heard from SMIC, China’s largest foundry, that equipment orders coming out of China will be down as trailing-edge capacity is oversupplied.

Although we already knew about slowing China orders, this is confirmation that China is up to its eyeballs in equipment and already has way too much.

Oversupply situations like this can take years to fix: not only is there too much trailing-edge capacity today, but there is also a pipeline of equipment that hasn’t even been turned on yet.

The “Captain Obvious” award of the week goes to US legislators who finally figured out that chip equipment is still being offshored while the country tries to reshore chips.

US legislators sent letters to AMAT, LRCX, KLAC, Tokyo Electron & ASML asking what was up with their sales of chip equipment to China. But perhaps more importantly, the letters asked where the equipment is being made and about the supply chain of that equipment.

Select committee on CCP Chips

SCMP article

NY Times article

This topic is something we have been talking about longer and more vocally than anyone else.

It seems insanely stupid and short-sighted to “re-shore” semiconductors while you continue to “off-shore” the equipment used to produce them.

Wouldn’t it be just plain dumb to move chip manufacturing back to the US only to have equipment made by US companies in Asia imported back into the US where all the equipment used to be made?

Applied Materials has been the leader in moving production out of Texas to Singapore. Lam is not far behind in moving all its California and Oregon based manufacturing to Malaysia. Lam recently crowed about shipping its 5,000th chamber out of Malaysia.

Could the US finally get its act together and force chip equipment makers to reshore what they have offshored so quickly just over the past few years? It’s not like Taiwan or China stole the US equipment industry. The industry has been moving to Asia as fast as humanly possible, primarily for financial reasons.

It would be yet another problem/headwind for equipment makers: the huge cost of moving, only to face a large cost to move back, plus lower margins from the higher US costs that led them to leave in the first place.

The incoming administration could even put a tariff on imported chip equipment, much as it will likely put a tariff on imported chips to force manufacturers to move back to the US, as this is a core part of the platform Trump ran on.

It could get ugly.
The Stocks

The recent election results raised all the boats in the stock market to new highs.

We would point out that the actual impact on semiconductor and especially semiconductor equipment stocks is not quite so positive, especially over the longer run, given both recent and future headwinds.

The CHIPS Act will likely be negatively impacted; it’s only a question of how much. China issues and tariffs will only get worse and will likely impact chip production and equipment.

Near term headwinds continue to slow the overall market and most recent news is certainly negative.

It may not be a bad time to think about reducing exposure to some of the more impacted names in the space before everyone figures out the potential negative impacts.

Buckle up, things will change, a lot.
About Semiconductor Advisors LLC

Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

KLAC – OK Qtr/Guide – Slow Growth – 2025 Leading Edge Offset by China – Mask Mash

LRCX- Coulda been worse but wasn’t so relief rally- Flattish is better than down

ASML surprise not a surprise (to us)- Bifurcation- Stocks reset- China headfake


CEO Interview: Dr. Sakyasingha Dasgupta of EdgeCortix
by Daniel Nenni on 11-15-2024 at 6:00 am

Sakya Dasgupta 2024

Sakya is the founder and Chief Executive Officer of EdgeCortix. He is an artificial intelligence (AI) and machine learning technologist, entrepreneur, and engineer with over a decade of experience in taking cutting-edge AI research from the ideation stage to scalable products across different industry verticals. He has led teams at global companies like Microsoft and IBM Research / IBM Japan, along with national research labs like RIKEN Japan and the Max Planck Institute in Germany. Previously, he helped establish and lead the technology divisions at lean startups in Japan and Singapore in the semiconductor, robotics, and fintech sectors. Sakya is an inventor on over 20 patents and has published widely on machine learning and AI, with over 1,000 citations.

Tell us about your company?
We are a fabless semiconductor company focused on enabling energy-efficient and sustainable artificial intelligence processing that will scale from edge computing to servers. I founded the company in 2019 with our development headquarters in Tokyo, Japan and we have now expanded our operations into both the United States and India. We deliver a software-first approach to AI focused processors, with our patented “hardware and software co-exploration” system to bring to market a unified edge AI acceleration platform. This platform provides an end-to-end solution for our customers with our MERA software and latest SAKURA low-power AI inference accelerators.

Our customers span a wide array of industries, including smart cities, robotics, manufacturing, aviation, aerospace, security, and telecommunications. While these industries are distinct and serve unique purposes, they all share a common goal of deploying extremely low power, high performance AI solutions at the edge. The edge is where the vast majority of data is now being created and collected, and because critical business decisions are being made there continuously, these decisions must be made accurately and securely. The other commonality between these industries is that they demand a combination of real-time processing, tight power restrictions, and low latency. This is where EdgeCortix’s solutions live and excel, offering specialized hardware and software to meet these demanding criteria.

What problems are you solving?
EdgeCortix was founded with the principal goal of solving the AI performance and power inefficiency challenges ‘at the edge’. Our core mission is to democratize access to all types of AI solutions by enabling near cloud-level AI performance at the edge, with better energy efficiency and speed, drastically reducing customer operating costs. Today, it is truly incredible to see how the latest generative AI and multi-modal AI applications are expanding so rapidly in the marketplace. These AI applications, however, typically require massive computational and electrical power, which is tough at the edge, where delivering performance while maintaining energy efficiency is critical. EdgeCortix has developed an industry-leading, energy-efficient, ultra-low-latency software and hardware acceleration platform, powered by our latest SAKURA-II devices, that accelerates these multi-modal generative AI workloads and empowers our customers to solve their edge-based challenges.

What application areas are your strongest?
The four industries where we have been seeing the most prominent demand are smart cities, industrial applications, aerospace, and security. As municipalities implement more Smart City functionality, they face a variety of challenges in adding AI capabilities to analyze issues such as traffic congestion and security. Ultimately, smart surveillance can apply to any gathering place in a city, with networks of cameras providing high-resolution video from many angles and collecting volumes of data. Using AI inference to accurately recognize people and items has the potential to keep citizens safe in crowded spaces in case of an emergency. From an industrial perspective we find the most traction in smart manufacturing – an area with so much potential for improvement in production, cost savings, and safety. In factories, edge AI solutions can enable optimization of production lines, predict equipment failures, and enhance quality control.

Real-time analysis of sensor data helps improve efficiency and reduce downtime. In the aerospace industry, our SAKURA-II solutions can assist in aircraft maintenance, provide quality assurance in manufacturing, and, most importantly, act as a critical enabler for adding AI capabilities, ensuring safety and reliability while minimizing maintenance costs. Last but not least, we are very excited about the prospects of our AI processors being applied in the space industry, from low-earth orbit to deep-space environments. In this regard, the proven ability of our SAKURA devices to withstand space radiation significantly better than comparable commercial off-the-shelf processors, as recently tested by NASA, opens up a variety of applications.

What keeps your customers up at night?
What keeps our customers up at night fuels our relentless focus during the day. We must solve the edge AI performance and power inefficiency challenges. Our customers, no matter what industry they serve, are trying to do more with less. Less space, less cost, less power, and less heat are all critical considerations, and our ability to deliver high performance and high efficiency while meeting these constraints is highly valued by our customers. In addition to these factors, a critical consideration for all our customers has been software robustness. Every day we are considering how we can augment our software and solutions to help drive improved performance based on our customers’ unique needs. EdgeCortix operates on a global scale with teams spanning from Asia to North America. We are dedicated to fulfilling our customers’ needs around the clock.

What does the competitive landscape look like and how do you differentiate?
Our goal is to meet our customers where they are in their technology stack and to help future-proof their operations. I believe that we are in a truly unique market position. Many companies focus on either the hardware or the software, but the way in which we’ve developed our platform is unique. We give equal importance to software development and chip design; we started software-first and then built out a robust hardware ecosystem. In addition to our patented run-time reconfigurable processor, the flexibility of our software and our ability to easily integrate within existing heterogeneous hardware platforms is not something we’re seeing made available from the rest of the industry today.

What new features/technology are you working on?
The SAKURA-II Edge AI platform is a complete AI solution comprised of three elements, the SAKURA-II silicon device, the Dynamic Neural Accelerator® (DNA) runtime reconfigurable (IP) neural processing architecture, and our MERA heterogeneous compiler software platform. We implement these technologies on a selection of hardware from M.2 modules, PCIe cards and compute boxes for immediate AI system deployment by our customers.

SAKURA-II is optimized for applications requiring fast, real-time (Batch=1) AI inferencing with excellent performance in a small-footprint, low-power silicon device. SAKURA-II is designed to handle the most challenging multi-modal AI applications at the edge, enabling designers to create new content based on disparate inputs like images, text, and sounds, and it supports multi-billion parameter models like Llama 3, Stable Diffusion, DETR, Mistral, and ViT within a few watts of power.

Our Dynamic Neural Accelerator (DNA) is a flexible, modular dataflow architecture with our proprietary run-time reconfigurable data path connecting all major compute engines on chip, achieving exceptional parallelism and efficiency through dynamic grouping. Using a patented approach that combines sparsity handling, power management techniques, mixed precision support, vector and tensor processing, DNA achieves outstanding parallelism while reducing on-chip memory bandwidth, allowing faster, more efficient hardware execution.

MERA is a compiler and software framework providing a robust platform for deploying the latest neural network models in a machine learning framework agnostic manner. MERA enables optimized deep neural network graph compilation and inference, while providing the necessary tools, APIs, code-generator, and runtime libraries needed to deploy any pre-trained deep neural network from convolutions to the latest transformer models. MERA is designed to handle the most challenging AI applications at the edge with interfaces to open-source platforms like Hugging Face as well as a rapidly growing EdgeCortix Model Library, enabling designers to create new content or deploy from a wide variety of existing models. MERA’s built-in heterogeneous support for other leading general-purpose processors, including AMD, Intel, Arm, and RISC-V, allows quick integration into existing systems.

How do customers normally engage with your company?
Our customers typically engage with us in the following three ways:

  • Software: Customers who purchase a SAKURA solution will automatically access the EdgeCortix MERA Compiler software framework to deploy AI acceleration within their existing environments. In select cases we have also licensed our software to enable integration with other third-party Arm and X86 based hardware platforms, enhancing the overall ecosystem support.
  • AI Accelerator Devices: EdgeCortix offers the latest SAKURA-II devices for purchase: a 60 TOPS (INT8) / 30 TFLOPS yet small, low-power, mass-produced product suitable for edge computing.
  • AI Accelerator Cards & Modules: Customers can use our AI Accelerator hardware to directly integrate into their systems or solutions (orders available now). EdgeCortix currently offers SAKURA-II hardware in single and multi-chip low-profile PCIe Card and M.2 Module form factors.

We can be reached via the following:
Our contact page: https://www.edgecortix.com/en/contact
Our website: https://www.edgecortix.com/en/
Our LinkedIn: https://www.linkedin.com/company/edgecortix/

Also Read:

CEO Interview: Bijan Kiani of Mach42

CEO Interview: Dr. Adam Carter of OpenLight

CEO Interview: Sean Park of Point2 Technology


Analog IC Migration using AI
by Daniel Payne on 11-14-2024 at 10:00 am

Analog Migration with virtuoso studio

My first job out of college was migrating a DRAM chip from one process node to a newer node, and it was a 100% manual process that required many months of effort. That need to migrate semiconductor IP to newer nodes is still with us today, and much automation has been applied to digital circuits; however, migrating analog IP has proven much more challenging to automate. I spoke with Girish Vaidyanathan, Product Management Director at Cadence, to learn how their Virtuoso Studio tools enable AI-driven, custom design migration.

IC design companies want higher productivity, and faster turnaround time to enable the promises of IP design reuse as they move from one node to a newer node, staying within the foundry partner ecosystem or even moving to another foundry. Ideally, during an IP migration the design intentions should remain the same, for example, matching and shielding requirements. Any automation for migration needs to understand the design intents, while at the same time conforming to the new PDK being targeted.

Early migration approaches with older nodes used more of a Lambda scaling when process nodes were at 180nm or larger dimensions, but today each new process generation doesn’t scale with Lambda, so the non-uniform scaling of transistors, interconnects, contacts and vias requires a much smarter approach to migration. The IC layout approaches dramatically change when migrating from planar to FinFET, and FinFET to GAA.

What Cadence has put together in Virtuoso Studio is an AI-powered flow that accepts a schematic and layout as inputs, infers the design intentions, applies the mapping from the source foundry to the target, transforms the source schematic into a target schematic, optimizes the device parameters so the circuit meets the target specifications, and automates the layout migration process.

The schematic migration flow is shown in more detail in the customer presentation at Cadence Live:

Schematic Migration

A testbench from the Virtuoso ADE Suite is rerun to see if the specifications are met, and optimizations are run after updating parameters to meet the new specifications. The optimization helps meet the specifications across all PVT corners. ML techniques are used to infer optimizations. The design space for analog circuits is too large, so non-gradient-based techniques are used for the optimization. ML techniques are also applied during the creation of the new layout.
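
To make the idea of non-gradient-based optimization concrete, here is a minimal sketch of a derivative-free random search over device sizes against target specs. It is a generic illustration, not Cadence's algorithm: the simulate() function is a hypothetical stand-in for a Virtuoso ADE testbench run, and the parameter names and toy gain/bandwidth model are invented for the example.

```python
# Illustrative sketch only: a generic derivative-free (random-search) optimizer
# over device parameters. simulate() is a hypothetical stand-in for a SPICE /
# ADE testbench run; this is NOT Cadence's algorithm.
import random

def simulate(params):
    """Hypothetical placeholder for a circuit simulation returning measured specs."""
    w1, w2 = params["w1_um"], params["w2_um"]
    # Toy behavioral model: gain and bandwidth trade off with device widths.
    gain_db = 20 + 0.8 * w1 - 0.02 * w1 ** 2
    bw_mhz = 150 + 4.0 * w2 - 0.5 * w1
    return {"gain_db": gain_db, "bw_mhz": bw_mhz}

def spec_error(meas, specs):
    """Sum of spec violations (0 means every target is met or exceeded)."""
    return sum(max(0.0, target - meas[name]) for name, target in specs.items())

def random_search(specs, bounds, iters=2000, seed=0):
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(iters):
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        err = spec_error(simulate(cand), specs)
        if err < best_err:
            best_params, best_err = cand, err
    return best_params, best_err

if __name__ == "__main__":
    specs = {"gain_db": 26.0, "bw_mhz": 200.0}            # targets to meet or exceed
    bounds = {"w1_um": (1.0, 20.0), "w2_um": (1.0, 40.0)} # allowed device sizes
    params, err = random_search(specs, bounds)
    print(f"best params: {params}, remaining spec violation: {err:.3f}")
```

In a real flow the inner loop would be a circuit simulation across PVT corners, which is exactly why smarter, ML-guided sampling of the space matters so much.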

I learned that during circuit simulation in this optimization flow you can use Cadence or other SPICE circuit simulators. Your CAD team can even use a custom optimizer in the flow by coding in C++ or Python. There’s the Virtuoso ADE framework to manage circuit simulations, so you can use the Cadence optimizer or your own. Cadence developers have spent the last three years on this automated migration flow. You can even have Cadence do an IC migration for your project, as a service option.

The layout migration flow is shown below and has several steps where you guide the tool to get the best results that meet specifications:

Layout Migration

Girish showed me a sample analog layout that was migrated using Virtuoso Studio where automated place and route demonstrated a 2X productivity gain over manual methods:

Source and target layouts

For Cadence customers there are four interesting videos from CadenceLIVE events that highlight several use cases for analog IC migration:

  • Samsung – schematic migration
  • Global Foundries – schematic migration, AI/ML-driven optimization, layout migration
  • TSMC – RF migration
  • Intel – layout migration

Summary

Migrating custom and analog IP is a challenging engineering task that can be done either manually or with the help of automation. Cadence has created an automated migration flow that is producing some impressive results in saving time and reducing engineering effort by using ML-based optimization in the flow. Major customers have already been using the flow, so it’s safe to give it a look for your own migration projects.

Related Blogs


Next Generation of Systems Design at Siemens
by Daniel Payne on 11-14-2024 at 8:00 am

New, Unified GUI

Electronic systems design is filled with a wide range of tools used across IC packaging design, multi-board systems, design creation, physical implementation, electro-mechanical co-design, simulation & analysis, and new product introduction. Siemens has been offering tools in this flow for many years now, so I was able to meet by video with David Wiens, Product Marketing Manager, to get briefed on their next-generation release. The following three EDA tools have a new, unified GUI, along with cloud connectivity and AI smarts:

  • Xpedition – electronics system design for enterprise use
  • HyperLynx – high-speed system analysis and verification
  • PADS Professional – low-cost, integrated PCB design

The vision at Siemens is to enable integrated, model-based system engineering, so that teams of engineers working across the multiple domains of software, electrical, electronics and mechanical can collaborate throughout the design process. Industry trends reveal a workforce in transition, with a general shortage of engineers, mass electrification of industrial products, and volatility in the supply chain across the globe. We are now in a new era where AI techniques are being applied to the electronic design process, the cloud is used to connect the work of teams, and using EDA tools through intuitive GUIs improves productivity.

Next Generation

Across the new releases of the Xpedition, HyperLynx, and PADS Professional tools, you quickly notice the consistent GUI, which has a modern look using more icons arranged in groups based on function. Engineers will experience a short learning curve, making them more productive across the flow of these tools. Users can personalize how their icons are arranged, or even opt to go back to the classic look.

New, Unified GUI

As an engineer uses these tools, AI-infused predictive commands appear in the menu, based on usage patterns. Each customer will see their own predictive commands based on their tool usage, and they can have an expert train their own model and share it within the organization. Engineers can also use natural language to find new components for their system design. Simulations are optimized to use predictive selection, so a design can be optimized without resorting to brute-force simulations across a large number of permutations, allowing you to explore the design space in a reasonable amount of time. Doing SI/PI analysis on a large system can now be run overnight, instead of waiting hundreds of days.
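
To put the brute-force problem in perspective, here is a back-of-the-envelope sketch. The parameter counts, value counts, and per-run time are hypothetical, chosen only to show how quickly exhaustive permutation sweeps blow up compared with a guided, sampled exploration; it is not a description of Siemens' predictive-selection algorithm.

```python
# Back-of-the-envelope illustration with hypothetical numbers: exhaustive
# "simulate every permutation" sweeps vs. a small guided/sampled budget.
n_parameters = 8          # e.g., trace widths, spacings, termination values (assumed)
values_per_parameter = 5  # candidate settings per parameter (assumed)
full_factorial_runs = values_per_parameter ** n_parameters
sampled_runs = 200        # a typical budget for a guided exploration (assumed)

minutes_per_run = 3       # assumed per-simulation cost
print(f"full factorial: {full_factorial_runs:,} runs "
      f"(~{full_factorial_runs * minutes_per_run / 60 / 24:,.0f} days at {minutes_per_run} min/run)")
print(f"sampled/predictive: {sampled_runs} runs "
      f"(~{sampled_runs * minutes_per_run / 60:.0f} hours)")
```

Even with these modest assumptions, the full sweep lands in the hundreds-of-days range while a sampled exploration finishes overnight, which matches the kind of gains described above.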

Predictive Commands

This next generation of tools is also integrated with other Siemens products, like Teamcenter, NX, and Simcenter, to support multi-domain design. There is partner PLM integration too, with Dassault and PTC. Model-based engineering happens through requirements decomposition and verification management in Xpedition.

Teams of engineers collaborate in real time using a cloud-connected environment, enabling easier design reviews, insight into supply chain availability, component research and sourcing, and even manufacturability checks through DFM profile management. RoHS compliance can be met using supply chain insights from Supplyframe. IP integrity, accuracy, and reliability are secured through managed access control based on each user’s role, permissions, and geography.

Summary

Siemens has released a new version for Xpedition, HyperLynx and PADS Professional that sports a new, unified, modern GUI, making life more productive for PCB designers. AI features also benefit users, through anticipating their next menu item and optimizing the number of simulations required. Collaboration is improved through cloud connectivity, making communication between team members faster. The PCB tools integrate throughout the systems design flow with both Teamcenter and NX software, enabling multi-domain design and analysis.

Related Blogs


Samtec Paves the Way to Scalable Architectures at the AI Hardware & Edge AI Summit
by Mike Gianfagna on 11-14-2024 at 6:00 am

Samtec Paves the Way to Scalable Architectures at the AI Hardware & Edge AI Summit

AI is exploding everywhere. We’ve all seen the evidence. The same thing is happening with AI conferences. The conference I will discuss here began in 2018 as the AI Hardware Summit. The initial venue was the Computer History Museum in Mountain View, CA. Like most things AI, this conference has grown substantially in a relatively short period of time. As you will notice, its name has grown, too, to encompass a larger mission. At the recent event, there was significant focus on scalability in the deployment of AI systems. Samtec was there to address this challenge head-on. I’ll provide a summary here of the company’s presence at the show and how Samtec paves the way to scalable architectures at the AI Hardware & Edge AI Summit.

Samtec’s Presentation

At the show, Matt Burns, global director of technical marketing at Samtec, presented Optimizing Data Routing in Copper & Optical Interconnects in Scalable AI Hardware Architectures. A rather long title, but there is a lot to address here.

Let’s take a look at some of the topics Matt covered.

AI Agents

Over the past few years, Gen AI has been a driver in the adoption of AI agents. ChatGPT was just a pivot point. Applications such as text-to-chat/audio/image/video are redefining the customer experience in many industries. The next revolution in AI capabilities will be using AI agents to supplement the user’s experience. The new “co-pilots” we are seeing from companies like Microsoft are good examples of this. Other examples improve code generation in real time for developers, making it simpler and more efficient.

Enterprise AI

Similarly, Gen AI has been the driving force behind enterprise AI adoption. However, only a fraction of the Fortune 1,000 has really started implementing AI to improve processes internally.  As enterprises discover how to use AI foundation models or application-specific models with their own internal data, AI will then begin to impact the bottom line for innovative companies.  The hyperscalers are leading the charge, but other companies will eventually follow.

Increasing model sizes requires more compute, but…

AI models are growing in size and scale. ChatGPT uses GPT-3.5, which has 175 billion parameters. GPT-4 is rumored to approach 1 trillion parameters, and other models will soon approach 2 trillion. Model sizes are growing exponentially year over year. One GPU can’t handle all this.

Literally hundreds, if not thousands, of GPUs need to be linked to process the models in parallel. So, what’s the problem? AI compute performance is growing ~4.6x per year, but memory bus speeds are growing only ~1.3x per year and interconnect/fabric bus speeds only ~1.2x per year. Those are the bottlenecks. Routing high-speed protocols like HBMx, CXL, PCIe and others over optics is becoming the trend. Samtec demonstrated its CXL over optics solution at the show. The focus here is to position Samtec FireFly and Halo for some niche AI hardware applications.
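
A quick bit of arithmetic with the growth rates quoted above shows how fast the gap widens; this is a rough sketch that simply compounds the annual ratios.

```python
# Quick arithmetic using the growth rates quoted above: how fast the gap
# between AI compute and memory/fabric bandwidth widens. Illustrative only.
compute_growth = 4.6   # x per year
memory_growth = 1.3    # x per year
fabric_growth = 1.2    # x per year

for years in (1, 3, 5):
    mem_gap = (compute_growth / memory_growth) ** years
    fab_gap = (compute_growth / fabric_growth) ** years
    print(f"after {years} yr: compute outgrows memory {mem_gap:,.0f}x, fabric {fab_gap:,.0f}x")
```

After only a few years of compounding, compute outpaces the busses by orders of magnitude, which is why the industry is looking at optics for these links.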

Insatiable data center demand, but how are we going to power them?

More GPUs means more power. GPUs and other AI compute engines are approaching 2kW PER CHIP. That’s a lot of power.  System architects need to figure out how to get massive power into a rack and chassis efficiently and in small form factors at scale.

With these challenges as a backdrop, Matt presented the broad class of solutions for both copper and optical interconnect that Samtec offers. What is interesting about this show is that there are exhibits, but the footprint has always been limited to a table-top style of display. This keeps the focus on technology as opposed to fancy booth construction.

Samtec was at the show again this year, demonstrating its wide range of products for AI enablement.

Samtec booth at the show

To Learn More

If AI system scalability keeps you awake at night, Samtec can help. You can learn more about this unique company on SemiWiki here. And you can get an overview of Samtec’s AI capabilities here. You can even download a complete Artificial Intelligence/Machine Learning Solutions Guide here. As an aside, the conference is changing its name again. Next year’s event will be called AI Infra Summit. You can learn more about this change here.

And that’s how Samtec paves the way to scalable architectures at the AI Hardware & Edge AI Summit.


The Chips R&D Program Seeks to Accelerate Innovation
by Joseph Byrne on 11-13-2024 at 10:00 am

chips timeline

The CHIPS and Science Act has allocated $11 billion for semiconductor R&D, including for advanced packaging and AI-driven design. Companies should apply now.

In 2022, the United States enacted the $50 billion Chips and Science Act. Under the act, the National Institute of Standards and Technology (NIST), which is part of the US Department of Commerce, is administering $11 billion for research and development projects. Befitting its name, the Chips R&D effort seeks to foster innovation (research) and commercialization (development). A third goal is to nurture the workforce. Chips R&D targets four areas:

  1. The National Semiconductor Technology Center (NSTC), a public-private consortium to provide R&D facilities and equipment.
  2. The National Advanced Packaging Manufacturing Program (NAPMP).
  3. The Chips Manufacturing USA Institute to develop digital twin technologies for semiconductor manufacturing.
  4. Chips Metrology, which applies the science of measurement (a key part of NIST) to semiconductor materials, packaging, and production.

Funding Opportunities and Deadlines

NIST is doling out R&D awards through a series of notices of funding opportunity (NOFOs). NOFOs from earlier this year targeted package substrates and the establishment of the Chips Manufacturing USA Institute. Two NOFOs are currently open: one applying artificial intelligence (AI) and autonomous experimentation (AE) to manufacturing, and another targeting advanced packaging. In both cases, applicants’ first step is to submit a concept paper. Due dates are January 13, 2025, and December 20, 2024, for the AI/AE and packaging NOFOs, respectively, as Figure 1 shows. Local to NIST and having a writing background, I’m available to work with applicants on their submissions.

Figure 1. Timelines for Chips R&D packaging and AI/AE funding opportunities.

The Chips AI/AE for Rapid, Industry-Informed Sustainable Semiconductor Materials and Processes (Carissma) competition expects to disburse $100 million to organizations developing semiconductor materials. The new materials must outperform existing ones and be better for the environment. The timeline is short—only five years for an investment to produce something the industry can test. Carissma requires the projects to be university-led and to apply AI/AE techniques.

Pushing Packaging Boundaries

Part of the NAPMP, the packaging NOFO will provide multiple awards totaling $1.55 billion and spans five R&D areas (RDAs in government lingo):

  1. Equipment, tools, processes, and process integration
  2. Power delivery and thermal management
  3. Connector technology, including photonics and radio frequency (RF)
  4. Chiplets ecosystem
  5. Codesign/electronic design automation (EDA)

Area Four indicates the NOFO’s thrust: extending the multi-die (and multidimensional) packaging technology found in products such as the AMD MI300X, Intel Ponte Vecchio, and Nvidia Blackwell. Examining this area also reveals the program’s vision and assumptions: thousands of wires will connect chiplets, packages will be ultra-large and house a thousand chiplets, and chiplets will be functionally and physically heterogeneous. It’s an unusual vision considering systems today contain tens or possibly hundreds of chips per chassis—not thousands of chips. For a few more details on the chiplet RDA, see my post at https://xpu.pub/2024/10/24/chips-act-packaging/.

The other four areas proceed from this vision. The first has two categories that applicants can address: either a specific step in the packaging flow or an end-to-end process linking the individual steps. The second offers four objectives that applicants can address, including actual power-delivery and thermal-management solutions and models.

Area Three addresses interpackage (not intrapackage) interconnect and covers three scales: less than 25 mm, less than 1 m, and less than 1 km. For the shortest distance, the goal is 100 Gb/s per channel and a shoreline bandwidth density of 10 Tb/s per mm. The latter parameter is the challenging one; 224 Gbps serdes are already in production. For the sub-meter and sub-kilometer scales, the minimum bandwidth is 100 Tb/s. A further challenge for all three distances is to achieve a 0.1 picojoule per bit ceiling. As Figure 2 shows, the interconnect among packages can be wired, wireless (RF), or photonic.
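
Some quick arithmetic helps frame these targets; the sketch below simply combines the numbers quoted above (100 Gb/s per channel, 10 Tb/s per mm of shoreline, 100 Tb/s aggregate, and the 0.1 pJ/bit ceiling).

```python
# Worked numbers for the interconnect targets described above (sketch only,
# using the figures quoted in the text).
channel_rate_gbps = 100          # per-channel target for the <25 mm scale
shoreline_density_tbps_mm = 10   # shoreline bandwidth density target
energy_pj_per_bit = 0.1          # efficiency ceiling for all three scales
aggregate_tbps = 100             # minimum bandwidth for the sub-meter / sub-km scales

channels_per_mm = shoreline_density_tbps_mm * 1000 / channel_rate_gbps
shoreline_pitch_um = 1000 / channels_per_mm
power_w = energy_pj_per_bit * 1e-12 * aggregate_tbps * 1e12

print(f"{channels_per_mm:.0f} channels per mm of shoreline "
      f"(~{shoreline_pitch_um:.0f} um of edge per 100 Gb/s channel)")
print(f"{aggregate_tbps} Tb/s at {energy_pj_per_bit} pJ/bit -> {power_w:.0f} W of link power")
```

In other words, the density target implies packing a 100 Gb/s channel into roughly every 10 µm of package edge, and even at the 0.1 pJ/bit ceiling a 100 Tb/s link still burns about 10 W.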

Figure 2. The interconnect area envisions scaling out designs, such as by employing wires, radios, and photonics to connect four thousand-chiplet packages. (Source: NIST)

The final area is for software tools to aid design, security, resilience (e.g., fault tolerance), and integration, verification, and validation (IV&V). The EDA tools must handle designs employing any substrate, a thousand chiplets, 24-high chiplet stacks, a mixture of functionality (digital, analog, and optical), and various other components.

Write 10 Pages, Get $150 Million

It’s unusual for the United States to implement an industrial policy as directly interventionist as the Chips Act. Left alone, companies undoubtedly would develop technologies, but the Chips R&D effort is an opportunity for them to accelerate programs. Although applicants must be US-based and the act aims to bolster US manufacturing, operations and people can also be elsewhere. Recognizing this, attendees at the recent NAPMP NOFO conference came not only from American organizations but also from European and Asian companies with an affiliated US entity.

As noted above, the next step for those interested in the Carissma and NAPMP packaging NOFOs is to submit a concept paper. It must be no longer than 10 pages, making it possible to crank out in less than a month. It must broadly discuss the applicant’s project, including the technical area, execution plan, and budget. The Chips R&D review board will consider submissions on the basis of economic and national security, technical merit, project management, and ultimate impact. Applicants whose concepts are deemed meritorious must then submit a full application for final review. Grants will range up to $150 million, a significant sum for a large company and transformational for a smaller one. I urge US entities to apply. As noted above, I’m available to assist with concept papers and have the advantage of being local to NIST.

Joseph Byrne is an independent analyst and consultant. For more information, see xampata.com.

Also Read:

Synopsys-Ansys 2.5D/3D Multi-Die Design Update: Learning from the Early Adopters

Sarcina Democratizes 2.5D Package Design with Bump Pitch Transformers

Analog Bits Builds a Road to the Future at TSMC OIP

Navigating Resistance Extraction for the Unconventional Shapes of Modern IC Designs


Tier1 Eye on Expanding Role in Automotive AI
by Bernard Murphy on 11-13-2024 at 6:00 am

Car EE system

The unsettled realities of modern automotive markets (BEV/HEV, ADAS/AD, radical views on how to make money) don’t only affect automakers. These disruptions also ripple down the supply chain prompting a game of musical chairs, each supplier aiming to maximize their chances of still having a chair (and a bigger chair) when the music stops. One area where this is very apparent is in the tier immediately below the automakers (the Tier1s) who supply complete subsystems – electronics, mechanical and software – to be integrated directly into cars. They are making a play to offer more highly integrated services, as evidenced by a recent announcement from DENSO, the second largest of the Tier1s.

Zonal architecture (image courtesy of Jeff Miles)

More AI will drive more unified systems

There are plenty of opportunities for Tier1s around BEV/HEV power and related electronics (where DENSO also has a story), but here I want to focus on AI implications for automotive systems. AI systems are inevitably distributed around a car, but as capabilities advance, training and testing must comprehend the total system, which in piecemeal distributed systems becomes increasingly impractical and may push towards unified supplier platforms. (There is talk of higher speed communication shifting all AI to the center, but it’s not fast enough yet to meet that goal and I worry about power implications in shipping raw data from many sensors to the center.)

Take side mirrors as an example. Ten years ago the electronics for a side mirror were simple enough: just enough to control mirror orientation from a joystick in the driver’s armrest. But then we added cameras and AI to detect a motorbike or car approaching on the left or right, which at first simply flashed a light on the mirror housing to warn us not to change lanes. Now maybe we also want visual confirmation outlining the vehicle in the side mirror or the cabin console.

How much of that processing should be in (or near) the side mirror and how much in a zonal or even central controller? Questions like this don’t have pre-determined answers and depend very much on the total car system architecture, latency/safety requirements, communications speeds, and the software and AI models that are a part of that architecture. Is it possible to build a safe system when different suppliers are providing software, models, and hardware for the mirror, zonal controller and central controller? Yes in this limited context, but when this input is one of many on which ADAS or autonomous driving depends and the car crashes or hits a pedestrian, who is at fault?

OEMs already depend on Tier1s to deliver integrated and fully characterized subsystems, hardware and software combined. Perhaps now their scope should not be limited to modules. Distributed AI adds a new kind of complexity which ultimately must be proven in-system. Think about the millions or billions of miles which must be trained and tested in digital twins to provide high levels of confidence and safety. That’s difficult to commit to when AI backbone components for sensing, edge NPUs, fusion, and safety systems are under the control of multiple suppliers. This objective seems more tractable when the whole system is under the control of one supplier. At least that’s how I think the Tier1s would see it.

DENSO and Quadric

DENSO announced very recently that they will acquire an intellectual property (IP) core license for Quadric’s Chimera general purpose NPU (GPNPU) and that the two companies will co-develop IP for an in-vehicle semiconductor. This announcement is interesting for several reasons. First, it was initiated by DENSO, not by Quadric. Press releases from IP companies on license agreements are a dime a dozen, but DENSO had a larger goal: to signal that they are now getting into the semiconductor design game.

Second, DENSO has been an investor in Quadric for several years, tracking progress in NPU technologies along with a couple of other contenders. Now this upgrade from being simply an investor to being a licensee and co-developer is an important step forward for both companies.

The press release highlights DENSO’s expectation that the in-vehicle SoCs they build must be able to process large amounts of information at high speed. They are also attracted to the Chimera GPNPU’s ability to let DENSO add its own AI capabilities in the future, which requires support for a wide variety of general-purpose operations. DENSO sees this profile as essential for in-vehicle technologies and for keeping pace with future AI advances.

Feels like an important endorsement for Quadric. You can read the press release HERE.

Also Read:

A New Class of Accelerator Debuts

The Fallacy of Operator Fallback and the Future of Machine Learning Accelerators

2024 Outlook with Steve Roddy of Quadric


Signal Integrity Basics
by Daniel Payne on 11-12-2024 at 10:00 am

Digital and analog waveforms

PCB and package designers need to be concerned with Signal Integrity (SI) issues to deliver electronic systems that work reliably in the field. EDA vendors like Siemens have helped engineers with SI analysis using a simulator called HyperLynx, dating all the way back to 1992. Siemens even wrote a 56-page e-book recently, Signal Integrity Basics, so I’ll capture the essence of that in this blog.

Signal Integrity

A digital designer can start out by assuming that a signal has a perfectly shaped waveform, but when they measure that signal as it propagates along a PCB or package to some receiver, the signal has analog distortions, like overshoot, plus there is a time delay to transit the interconnect.

Digital and analog waveforms

Overshoot comes from impedance mismatches and is followed by some ringing. Another waveform issue is Inter Symbol Interference (ISI), where bits sent over a channel start to interfere with each other, causing the receiver to ponder what the correct data is. Here’s what ISI looks like in a serial bit stream.

ISI effects in a serial bit stream

The bits are changing value so rapidly that the high and low levels are not reaching their proper values. The eye diagram for this channel has grown quite small, as the orange hexagon indicates, meaning that bit errors will be high.

Small eye diagram

Increasing the length of the channel or increasing the frequency of the channel will close the opening of the eye diagram.

Interconnects used in PCB designs will always have delay, loss, and coupling, which then impact the signal integrity, so modeling this as a transmission line helps to understand and predict the behavior. The typical propagation velocity in a PCB is about 5.9 inches/ns. You can model a transmission line as a collection of resistors, inductors, and capacitors, in order to simulate and predict signal fidelity.
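
As a concrete sketch of that lumped-element view, the characteristic impedance and delay of a lossless line follow from the per-unit-length inductance and capacitance: Z0 = sqrt(L/C) and tpd = sqrt(LC). The L and C values below are typical-looking assumptions chosen to land near a 50-ohm trace, not numbers from the e-book.

```python
# Minimal sketch of the lossless transmission-line relationships implied above:
# characteristic impedance and delay from per-unit-length L and C.
# The example L and C values are assumed, not taken from the e-book.
import math

L_per_in = 8.5e-9    # henries per inch (assumed)
C_per_in = 3.4e-12   # farads per inch (assumed)

z0 = math.sqrt(L_per_in / C_per_in)          # characteristic impedance, ohms
tpd_per_in = math.sqrt(L_per_in * C_per_in)  # propagation delay, seconds per inch
velocity_in_per_ns = 1e-9 / tpd_per_in

print(f"Z0 ~= {z0:.0f} ohms")
print(f"delay ~= {tpd_per_in * 1e12:.0f} ps/inch "
      f"(~{velocity_in_per_ns:.1f} in/ns, close to the ~5.9 in/ns quoted above)")
```

With these assumed values the math returns about 50 ohms and roughly 170 ps per inch, consistent with the typical PCB propagation velocity mentioned above.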

Transmission line model

Two examples of transmission line types in PCB traces are microstrips and striplines. Delay and impedance along these transmission lines are impacted by the parameters of the particular trace. Delay along a microstrip is affected by the interconnect length, the dielectric constants, the height of the dielectric under the trace, and the width of the trace. Delay along a stripline is affected by the interconnect length and dielectric constant(s). The characteristic impedance, Z0, for both of these transmission line types is impacted by the dielectric constant(s), the height of the dielectric(s) around the trace, and the width of the trace.

Microstrip and stripline examples

These examples used uniform cross sections; however, changing the cross section of a trace can introduce an impedance discontinuity, which in turn causes reflections in a signal. The idea is to minimize and manage discontinuities by:

  • Using short interconnect, relative to rise/fall times
  • Keeping consistent impedance along a trace
  • Avoiding or minimizing vias

In the following example there’s a 3.3V CMOS driver and receiver connected by a 50 ohm transmission line with a 10ns delay from driver to receiver.

Reflections

The driver is shown in Red, and it rises to 3V, short of the full 3.3V as there’s output impedance on the driver. As the signal propagates along the transmission line it reaches the receiver, which has a high impedance and reflection coefficient of 1, making the Green signal reach 6V. A reflected 3V signal propagates in 10ns back to the driver with a reflection coefficient of -0.85, which bumps the Red driver signal. These reflections continue to bounce back and forth, changing the Red and Green voltages as ringing.
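
The bounce arithmetic can be reproduced with a few lines of code. The sketch below uses the article's rounded numbers (a 3 V launched step, a receiver reflection coefficient of +1, a driver-side reflection coefficient of -0.85, and a 10 ns one-way delay) and tallies the voltage at each end as successive reflections arrive.

```python
# Sketch of the reflection ("bounce") arithmetic described above, using the
# rounded numbers from the text. Each pass of the wave adds the incident plus
# reflected contribution at whichever end it reaches.
launched_v = 3.0      # initial step launched into the 50-ohm line
gamma_load = 1.0      # open / high-impedance receiver
gamma_source = -0.85  # driver-side reflection coefficient from the text
one_way_ns = 10       # one-way line delay

v_receiver, v_driver = 0.0, launched_v
wave = launched_v
for trip in range(1, 7):
    t = trip * one_way_ns
    if trip % 2 == 1:                      # wave arrives at the receiver
        reflected = wave * gamma_load
        v_receiver += wave + reflected
        print(f"t={t:3d} ns  receiver = {v_receiver:5.2f} V")
    else:                                   # wave arrives back at the driver
        reflected = wave * gamma_source
        v_driver += wave + reflected
        print(f"t={t:3d} ns  driver   = {v_driver:5.2f} V")
    wave = reflected
```

Running it reproduces the 6 V spike at the receiver at 10 ns, and the subsequent swings show the ringing described above as the voltages settle toward their final value.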

Adding a series terminating resistor close to the source driver can mitigate the overshoot and ringing as shown below:

Series termination

Parallel termination configurations can also reduce overshoot and ringing.

Parallel termination

Multiple PCB traces placed in close proximity exhibit crosstalk, caused by capacitive and inductive coupling. Notice how the victim trace bounces around at the near end and the far end as the aggressor trace is toggled.

Crosstalk example

Adding termination to the traces will mitigate the bouncing, as does moving the traces farther apart, true for both microstrip and stripline configurations.

Differential Pairs

Another type of signal is the differential pair, which carries two complementary signals, Vpos and Vneg:

Even mode is when both signals are the same, while odd mode has opposite values on the signals. Three termination examples are shown which produce a cleaner Vdiff signal.

Differential pair termination examples

Vias

A basic via is shown below; the signal via is shown in red, while the green vias are stitching vias that connect reference nets together between layers.

Via structure

Analysis of vias on a trace is done in the frequency domain using S-parameters and in the time domain using Time Domain Reflectometry (TDR). S21 is the ratio of the signal out of port 2 when injecting a signal into port 1, called insertion loss. S-parameters have both magnitude and phase components. S11 is the return loss.

Via performance

S21, the insertion loss, has a dip at 15 GHz from the via stub acting as a quarter-wave resonator.
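
A quarter-wave resonance frequency is just f = v / (4 × stub length), so the 15 GHz dip implies a stub of roughly a tenth of an inch. The sketch below uses the ~5.9 in/ns board-level velocity quoted earlier as a crude proxy for the via's effective velocity, so treat it as an order-of-magnitude estimate.

```python
# Rough check of the 15 GHz resonance mentioned above: a via stub acts like a
# quarter-wave resonator, so f_res = v / (4 * stub_length). The board-level
# propagation velocity is used here as a crude proxy for the via's velocity.
v_in_per_ns = 5.9     # propagation velocity quoted earlier in the article
f_res_ghz = 15.0      # observed insertion-loss dip

stub_len_in = v_in_per_ns / (4 * f_res_ghz)   # GHz and ns cancel, leaving inches
print(f"stub length ~= {stub_len_in:.3f} in (~{stub_len_in * 25.4:.1f} mm) "
      f"resonates near {f_res_ghz} GHz")
```

That works out to roughly 0.1 in (about 2.5 mm) of stub, which is why back-drilling or shortening via stubs is a common mitigation at these data rates.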

The TDR plot is flat on the left and right, corresponding to the two 50 ohm traces, and in the middle there’s a bump in the impedance caused by the via.

A number of modifications can be made to a layout that affect via performance. The impacts of a few of these are investigated in the eBook: the presence of non-functional pads, the size of antipads, and stub length.

Timing

PCB traces are characterized by timing parameters like edge rates and propagation delay from a driver to multiple loads, which then impact setup and hold times for digital circuits. Consider the time difference between the two signals of a differential pair, called skew, which is caused by mismatches in the trace layouts and changes the shape of Vdiff. As the skew increases, the edge rate of Vdiff slows down.
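
A small numerical experiment makes the effect visible: delay one leg of a differential pair and watch the 20%-80% transition time of Vdiff stretch. The edge shape and 100 ps rise time below are assumed values for illustration, not figures from the e-book.

```python
# Numerical sketch of the skew effect described above: delay one leg of a
# differential pair and measure how the Vdiff 20%-80% transition time grows.
import numpy as np

def ramp(t, t0, tr, rising=True):
    """Ideal linear edge starting at t0 with rise/fall time tr."""
    x = np.clip((t - t0) / tr, 0.0, 1.0)
    return x if rising else 1.0 - x

t = np.linspace(0, 1e-9, 20001)   # 1 ns observation window
tr = 100e-12                      # 100 ps single-ended rise/fall time (assumed)

for skew in (0.0, 25e-12, 50e-12, 100e-12):
    vpos = ramp(t, 200e-12, tr, rising=True)          # rising leg
    vneg = ramp(t, 200e-12 + skew, tr, rising=False)  # falling leg, delayed by skew
    vdiff = vpos - vneg                               # swings from -1 to +1
    t20 = t[np.searchsorted(vdiff, -0.6)]             # 20% point of the -1..+1 swing
    t80 = t[np.searchsorted(vdiff, +0.6)]             # 80% point of the swing
    print(f"skew = {skew*1e12:5.1f} ps -> Vdiff 20-80% edge = {(t80 - t20)*1e12:5.1f} ps")
```

With zero skew the Vdiff edge matches the single-ended edge; as the skew grows toward the rise time, the differential edge time roughly doubles, which is exactly the slowdown described above.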

Differential skew example

Increasing the skew also begins to close the eye diagram.

Differential skew eye diagram

Summary

The 56-page e-book from Siemens EDA does a thorough job of introducing signal integrity concepts, challenges, and mitigation approaches to PCB professionals. High-speed digital designs are challenging, but with understanding plus analysis they can deliver reliable signals.

Read the entire e-book of Signal Integrity Basics, written by John Golding, Sr. AE Consultant, Siemens EDA.

Related Blogs


My Conversation with Infinisim – Why Good Enough Isn’t Enough
by Mike Gianfagna on 11-12-2024 at 6:00 am

My Conversation with Infinisim – Why Good Enough Isn’t Enough

My recent post on a high-profile chip performance issue got me thinking. The root cause of the problem discussed there had to do with a clock tree circuit that was particularly vulnerable to reliability aging under elevated voltage and temperature. Chip aging effects have always gotten my attention. I’ve lived through a few of them in my career and they are, in a word, exciting. Perhaps frightening.

This kind of failure represents a ticking time bomb in the design. There are many such potential problems embedded in lots of chip designs. Most don’t ignite, but when one does, things can get heated quickly. I made a comment at the end of the last post about Infinisim and how the company’s technology may have avoided the issue being addressed. I decided to dig into that topic a bit further to better understand the dynamics at play with clock performance. So, I reached out to the company’s co-founder and CTO. What I got was a master class in good design practices and good company strategy. I want to share my conversation with Infinisim and why good enough isn’t enough.

Who Is Infinisim?

You can learn more about Infinisim on SemiWiki here. The company provides a range of solutions that focus on accurate, robust full-chip clock analysis.

Several tools are available to achieve this result. One is SoC Clock Analysis that helps to accurately verify timing, detect failures, and optimize performance of the clock in advanced designs. Another is Clock Jitter Analysis that helps to accurately compute power supply induced jitter of clock domains – a hard-to-trace phenomenon that can cause lots of problems. And finally Clock Aging Analysis that helps to accurately determine the operational lifetime of power-sensitive clocks. It is this last tool that I believe could have helped with the chip issue discussed in my prior blog.

The tools offered by Infinisim use highly accurate and very efficient analysis techniques. The approach goes much deeper than traditional static timing analysis.

My Conversation With the CTO

Dr. Zakir H. Syed

I was able to spend some time speaking with Dr. Zakir H. Syed, co-founder and chief technology officer at Infinisim. Zakir has almost 30 years of experience in EDA. He was at Simplex Solutions (acquired by Cadence) from its inception in 1995 through the end of 2000. He has published numerous papers on verification and simulation and has presented at many industry conferences. Zakir holds an MS in Mechanical Engineering and a PhD in Electrical Engineering, both from Duke University.

Here are the questions I posed to Zakir and his responses.

It seems like Infinisim’s capabilities can provide the margin of victory for many designs. How are you received when you brief potential customers?

 Their response really depends on past experiences. If they’ve previously encountered issues—like anomalous clock performance, timing challenges, or yield problems—they tend to quickly see the value Infinisim brings and are eager to learn more. In my experience, these folks are few and far between, however.

This is a bit surprising. Why do you think this is the case?

It’s an interesting point. The issue isn’t that better performance isn’t desirable; it’s that there’s a general trend to accept less-than-optimal performance as the norm. Over time, parameters like timing, aging, jitter, yield, and voltage have been treated as “known quantities” and design teams rely on established margins to work within these expectations.

I’m beginning to see the challenge. If design teams are meeting the generally accepted parameters, why rock the boat?

Exactly. If the design conforms to the required margins, all is well. Designers are rewarded for meeting schedules. CAD teams are recognized for delivering an effective flow. And this continues until there is some kind of catastrophic failure. When that “ticking time bomb” goes off, suddenly every assumption is questioned, and a deep analysis begins.

I get your point. I wrote a blog recently that looked at a high-profile issue that was traced back to clock aging.

Yes, that issue could likely have been discovered with our tools, before the chip was shipped to customers. In that case, aging effects came into play under certain operating conditions. Since N-channel and P-channel devices age differently, the result was a clock duty cycle that began to drift from the expected 50/50 duration. Once the asymmetry became large enough, circuit performance began to fail.
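
As an aside, a toy numerical model shows how a small asymmetry in edge degradation turns into a visible duty-cycle drift. The degradation rates below are made-up placeholders for illustration only, not Infinisim data or results from any real process.

```python
# Purely illustrative toy model of the duty-cycle drift described above: if aging
# slows a clock path's rising and falling edges at different rates, the high phase
# at the end of the path stretches or shrinks over time. All numbers are assumed.
period_ps = 500.0                 # 2 GHz clock
nominal_high_ps = period_ps / 2   # ideal 50/50 duty cycle

rise_degradation_ps_per_year = 1.0   # extra delay added to rising edges (assumed)
fall_degradation_ps_per_year = 2.5   # extra delay added to falling edges (assumed)

for years in (0, 2, 5, 10):
    extra_rise = rise_degradation_ps_per_year * years
    extra_fall = fall_degradation_ps_per_year * years
    # The high phase starts on a rising edge and ends on a falling edge, so any
    # difference between the two accumulated delays shifts the pulse width directly.
    high_ps = nominal_high_ps + (extra_fall - extra_rise)
    print(f"{years:2d} yr: duty cycle = {100 * high_ps / period_ps:.1f}%")
```

Even a picosecond-per-year mismatch between rising and falling edges pushes the duty cycle away from 50/50 over a product's lifetime, which is the kind of slow drift a dedicated aging analysis is meant to catch.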

So, what you don’t know can hurt you.

You’re right. But there’s also a bigger opportunity here. It’s not just about preventing catastrophic failures. Advanced nodes are costly, and we pay for that performance. By thoroughly examining circuit behavior across all process corners, we can leverage that investment to its fullest potential instead of leaving performance on the table with excessive margins. The same goes for yield, which directly impacts profitability. In today’s competitive chip design landscape, accepting less performance often means losing out on market share.

OK, the light bulb is going off. Now I see the bigger picture. Using tools like Infinisim’s doesn’t just prevent failures; it’s a strategic move toward maximizing profitability and competitiveness.

I think you’ve got it. When more people within a company—from engineers to executives—embrace this mindset, it leads to a stronger, more competitive organization. By challenging the status quo, companies can achieve more and realize their full potential.

To Learn More

You can learn more about the integrated flow offered by Infinisim here.  My conversation with Infinisim made it clear why good enough isn’t enough.