
Soft checks are needed during Electrical Rule Checking of IC layouts

by Daniel Payne on 02-28-2024 at 10:00 am


IC design flows include physical verification applications like Layout Versus Schematic (LVS) at the transistor level to ensure that layout and schematics are equivalent; in addition, there’s an Electrical Rules Check (ERC) for connections to well regions called a soft check. The connections to all the devices need to carry the most consistent voltage signals possible, so the path should run through the metal layers to reduce resistance and effects like IR drop. Detecting connections through other materials, like wells, is therefore mandatory, and soft checks are the method most commonly employed to detect this situation. The Calibre product line from Siemens is the most popular tool for DRC and LVS checking, so I read a technical paper from Terry Meeks to learn more about soft checks.
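To make the idea concrete, here is a minimal, hypothetical sketch (not Calibre’s actual algorithm) of what a soft check looks for: a tap that reaches its supply net only through a well layer rather than through low-resistance metal. All net and layer names below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical net-connectivity model: edges are labeled with the layer
# that forms the connection ("metal1", "via", "pwell", ...).
class LayoutGraph:
    def __init__(self):
        self.edges = defaultdict(list)

    def connect(self, a, b, layer):
        self.edges[a].append((b, layer))
        self.edges[b].append((a, layer))

    def reachable(self, start, goal, allowed):
        """Graph search that only follows edges on 'allowed' layers."""
        seen, frontier = {start}, [start]
        while frontier:
            node = frontier.pop()
            if node == goal:
                return True
            for nxt, layer in self.edges[node]:
                if layer in allowed and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

def soft_check(graph, tap, supply):
    metal = {"metal1", "metal2", "via"}
    all_layers = metal | {"nwell", "pwell"}
    # Connected only through a well -> high-resistance path -> soft error.
    if graph.reachable(tap, supply, metal):
        return "ok"
    if graph.reachable(tap, supply, all_layers):
        return "soft error: well-only path"
    return "open: no connection"

g = LayoutGraph()
g.connect("tap_left", "GND", "metal1")        # good tap: strapped in metal
g.connect("tap_right", "tap_left", "pwell")   # bad tap: reaches GND only via the well
print(soft_check(g, "tap_left", "GND"))   # ok
print(soft_check(g, "tap_right", "GND"))  # soft error: well-only path
```

The key point mirrors the paper’s definition: the right-hand tap *is* electrically connected, but only through the well, so it is flagged rather than passed.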

Connecting two metal layers in an IC layout requires precise alignment of both metal layers and the via layer. Here’s a comparison using both a side view and a top-down view: the first example is not connected, because Metal1 and Metal2 do not overlap, while the second example is connected properly.
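The alignment requirement can be sketched as a simple geometric test: a via only forms a connection if its rectangle overlaps both metal rectangles. This is an illustrative simplification; real DRC decks also enforce minimum enclosure and overlap rules.

```python
def overlaps(r1, r2):
    """Axis-aligned rectangles as (x0, y0, x1, y1); touching edges don't count."""
    ax0, ay0, ax1, ay1 = r1
    bx0, by0, bx1, by1 = r2
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def via_connects(metal1, via, metal2):
    # A via only forms a connection if it lands on both metal layers.
    return overlaps(metal1, via) and overlaps(via, metal2)

# Not connected: Metal2 is shifted off the via (like the first example above).
print(via_connects((0, 0, 4, 2), (1, 0, 2, 2), (5, 0, 9, 2)))  # False
# Connected: all three layers overlap at the via location.
print(via_connects((0, 0, 4, 2), (1, 0, 2, 2), (1, 0, 6, 2)))  # True
```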

Connecting two metal layers with a Via layer.

We want our ERC tool to identify well connectivity errors during soft checks so that they can be fixed. The following IC layout, shown from the side view, has a well connectivity error: the Metal1 signal labeled Gnd is connected to a diffusion region called a tap diffusion. On the right-hand side is another Metal1 layer with a tap diffusion, but this connection creates a high-resistance path through the Rwell to Gnd, and it is flagged as an error by the soft check.

Well connectivity error – side view

Another example of a soft connectivity error appears in the IC layout below, where only one net name may be applied per polygon. The digital power net VDD cannot coexist with the analog power net AVDD on the same shape, so we need to separate these into two shapes. Soft checks help flag these issues.

AVDD net to VDD net soft check error

An IC layout with both digital and analog power supplies can become rather complex to lay out properly, so it’s even more important to have soft checks.

Undetermined areas have question marks

Soft checks are included during your LVS runs, and with Calibre nmLVS there’s a report of soft check results, which can then be viewed using the Calibre RVE viewer.

Using Calibre RVE to review Soft Check errors

Clicking on RVE results tells you which cell has the soft check error, the net names, upper and lower names, and other properties. This info helps to pinpoint what to fix in the IC layout. Clicking on a lower layer like a PWell for a soft check error displays the geometry in yellow.

Soft check result, lower layer

For the same soft check error, clicking on the upper layer shows:

Soft check result, upper layer

During debug you can also show all the upper-layer shapes: green shapes belong to the selected net, while the yellow shape belongs to the rejected net.

All upper layer shapes

Debugging soft check errors with RVE involves clicking on the connectivity of selected and rejected nets. A Net Info window reveals details like which layers are involved and whether shapes are missing connectivity. Looking at which ports are connected to a net reveals whether there are missing VDD or GND errors. This example shows that net 18 is rejected, because it’s missing connectivity to Metal1.

Missing connectivity to Metal1

Summary

LVS checks are mandatory to ensure that an IC has an error-free layout, and soft checks are part of your LVS checks. There’s a proven debugging flow from Siemens in their Calibre nmLVS tool that uses RVE to help layout designers quickly identify soft check failures, so that designers can make fixes and re-verify until all checks pass. Siemens has written a technical paper, Detecting and debugging soft check connectivity errors, that is available to read online.

Related Blogs


CEO Interview: Michael Sanie of Endura Technologies

by Daniel Nenni on 02-28-2024 at 8:00 am

Michael Sanie

Michael Sanie is a veteran of the semiconductor and EDA industries. His career spans several executive roles in diverse businesses with multifunctional responsibilities. He is a passionate evangelist for disruptive technologies.

Most recently, he was the chief marketing executive and senior VP of Enterprise Marketing and Communications at Synopsys, where he also held leadership roles as VP of marketing and strategy for the Design Group and VP of product management for the Verification Group.

Michael previously held executive and senior marketing positions at Cadence, Calypto, Numerical, and Actel, as well as IC design and software engineering positions at VLSI Technology (now NXP Semiconductors).

He holds BSECE and MSEE degrees from Purdue University and an MBA from Santa Clara University.

Tell us about your company

Endura Technologies is developing an end-to-end SoC power delivery solution. In addition to our revolutionary, patented power delivery architecture, we have a diverse skill set spanning test silicon implementation, design IP, design services, passive design (the inductors and capacitors required as part of the power delivery solution), partnerships, and silicon manufacturing relationships. This allows us to create end-to-end SoC power delivery solutions.

Our unique architecture, combined with our fully integrated approach to power delivery at the system level is changing the game for challenging applications such as data centers, automotive, and many others.

What problems are you solving?

Energy consumption for advanced products has become a major care-about across many markets and applications. Battery life and heat dissipation for aggressive form factors drive part of this. The substantial operating costs for massive compute infrastructure are another driver.

A bit more specifically, servers/AI chips are driving much higher compute demands, requiring more power to be delivered. At the same time, these chips are built on smaller nodes, which run on lower Vdd’s. The only way this equation can work is to provide much higher currents over several power rails, and increasingly this is only achievable with 2.5D or 3D IC integration. These facts are fundamentally changing power delivery approaches.

On top of that, systems in automotive, audio, and switches typically rely on many sensory inputs ranging from MEMs devices to image sensors to radar. These devices require efficient power delivery across many load configurations and at increasing switching frequencies while maintaining ultra-low noise.

These fundamental disruptions are making people take power delivery a lot more seriously — in two ways:  Power delivery is no longer an afterthought; it needs to be designed/architected at the same time as the SoC AND it needs a much more holistic approach. Off-the-shelf PMICs are quickly running out of steam in how they meet these complex requirements.  To get the best power delivery each SoC needs its own ‘application-specific’ (or context-aware) power delivery solution.

Powering these systems at scale requires a new approach. One that takes a comprehensive view of power requirements for the chips and chiplets that implement the complete system. And one that optimizes performance, scalability, and efficiency over the broad spectrum of switching frequencies, current loads, voltage ranges, and silicon manufacturing processes.

This is the problem Endura is solving.

What application areas are your strongest?

Endura has applied its technology across a wide range of power-intensive or power-sensitive application areas – mostly data center and automotive. You can find more specific examples on our website that cover data centers, requirements for memories in data centers, a notebook design with a PCIe Gen5 solid state drive, optical modules and automotive.

What keeps your customers up at night?

Advanced system design presents a power delivery balancing act. The drivers for the requirement may differ, but all systems must operate efficiently with the lowest energy consumption possible.

These systems contain many parts, all operating at different frequencies, with varying power demands and obstacles. Solving the complete problem requires a holistic approach to power management and delivery.

But such an approach has been out of reach for most companies, requiring system designers to attempt integration of multiple tools and multiple sets of IP and software to solve the problem. This has been a very difficult problem to solve. Until now.

What does the competitive landscape look like and how do you differentiate?

The traditional approach to power delivery focuses on a component-level strategy. That is, acquire best-in-class power management solutions, typically from tier-1 suppliers and integrate these devices at the PCB level.

The substantial complexity and power demands of applications such as data centers require a new, fine-grained approach – one that integrates power delivery down to the chip level and one that co-optimizes the architecture for optimal system-level performance.

There are some design teams (typically in larger companies with a broad range of skills) that are making the investment to achieve these results across the supply chain. For everyone else, the complexity of integrating such approaches remains out of reach.  Endura is democratizing this new, system-level approach to power delivery, so it is available to every system design team.

What new features/technology are you working on?

Power management approaches range from traditional, discrete devices (sVR), to embedded chiplets for 2.5D/3D integration (eVR), down to on-chip integrated blocks for optimum point-of-load energy delivery (iVR).

While sVR approaches are well-understood, deployment of fully integrated eVR and iVR strategies is extremely complex and challenging. Endura has the technology and know-how to solve these problems, and this is our development focus.

How do customers normally engage with your company?

Endura Technologies has development facilities in California and Dublin, Ireland. If you would like to explore how we can help you develop a forward-looking power strategy you can reach out at info@enduratechnologies.com.

Also Read: 

CEO Interview: Vincent Bligny of Aniah

CEO Interview: Jay Dawani of Lemurian Labs

Luc Burgun: EDA CEO, Now French Startup Investor


Revolutionizing RFIC Design: Introducing RFIC-GPT

by Jason Liu on 02-28-2024 at 6:00 am


In the rapidly evolving world of Radio Frequency Integrated Circuits (RFIC), the challenge has always been to design efficient, high-performance components quickly and accurately. Traditional methods, while effective, come with high complexity and a lengthy iteration process. Today, we’re excited to unveil RFIC-GPT, a groundbreaking tool that transforms RFIC design through the power of generative AI.

RF chips are known as the crown jewel of analog chips, and RF circuits typically contain not only the active circuits, i.e., circuits composed mostly of active devices such as transistors, but also a large number of passive components such as inductors, transformers, and matching networks. Fig. 1 shows an example of a one-stage RF power amplifier (PA): the active part of the circuit is a differential common-source PA with cross-coupled varactors, and it is connected to an input matching network and an output matching network. The matching networks are usually a combination of passive devices such as inductors, capacitors, and transformers connected in an optimized configuration.
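As a toy illustration of what a matching network does, the sketch below sizes a textbook single-section L-match (series inductor on the load side, shunt capacitor on the input side) to transform a 10-ohm load up to 50 ohms, then verifies the match at the design frequency. The load value and frequency are assumptions chosen for illustration, not taken from the article.

```python
import math

# Textbook L-match sketch: transform a 10-ohm load up to 50 ohms at f0.
f0 = 2.4e9                      # assumed design frequency (Hz)
R_load, R_in = 10.0, 50.0
Q = math.sqrt(R_in / R_load - 1)        # loaded Q of the L-section
L = Q * R_load / (2 * math.pi * f0)     # series inductance at the load
C = Q / (R_in * 2 * math.pi * f0)       # shunt capacitance at the input

def z_in(f):
    w = 2 * math.pi * f
    z_series = R_load + 1j * w * L      # load plus series inductor
    z_shunt = 1 / (1j * w * C)          # shunt capacitor
    return z_series * z_shunt / (z_series + z_shunt)

s11 = (z_in(f0) - 50) / (z_in(f0) + 50)
print(abs(z_in(f0)))   # ~50 ohms at the design frequency
print(abs(s11))        # ~0, i.e. a perfect match at f0
```

In a real RFIC the same optimization runs over layout-level EM models of the inductors and transformers, which is exactly the tedious iteration loop the article describes.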

To design such an RF circuit, both the devices in the active circuit and the passive layout patterns in the matching networks need to be optimized. The conventional RFIC design flow is shown in the top half of Fig. 2. On one hand, active circuits must first be designed and simulated both in schematics and in layouts. On the other hand, the passive components and circuits are iterated repeatedly using tedious, physics-based electromagnetic (EM) simulation combined with their layouts, making them a key challenge in RF design.

Thereafter, the parameters of the entire layout are extracted and post-layout simulations are run to compare against the design specifications (Specs). Finally, the designs of both the active circuits and the layouts of the passive circuits are re-adjusted and re-simulated, and the results are compared again. This process is iterated numerous times until the design Specs are achieved. The main difficulties of designing RFICs can be attributed to:

(1) large design search space of both active and passive circuits;

(2) lengthy and tedious EM simulation required;

(3) interactions between active and passive circuits, and between the RFIC and its surroundings, which demand numerous iterations and optimizations.

Therefore, the traditional RFIC design flow takes a lot of human effort, and the design quality achievable in a constrained time largely depends on the experience of the particular IC designer.

Recently, generative AI has been researched and explored extensively for generating content including, but not limited to, dialogue, pictures, and programming code. Analogously, generative AI is now being considered for RFIC design automation. The bottom half of Fig. 2 shows an example RFIC design flow assisted by generative AI. Essentially, the behavior of small circuit components can be lumped into models, and lengthy simulations can be omitted.

Additionally, the solution-searching “experience” for RFIC design can be “learned”, and solutions, i.e., initial designs of RFIC schematics and layouts, can be quickly “generated”. Importantly, the simulated results of the AI-generated RFIC circuits can already be close to the design Specs, so IC design engineers only need to do some final optimization and verification simulations before the circuits can be applied to RFIC design blocks for tape-out. This methodology saves a large number of simulation iterations and drastically improves design efficiency. Furthermore, the results are more consistent run to run, since the task is performed by an “emotionless” computer.

As a pioneer of intelligent chip design solutions, we have launched RFIC-GPT, an AI-based RFIC design automation tool. Using RFIC-GPT, GDSII or schematic diagrams of RF devices and circuits meeting design specifications (such as the Q/L/k of a transformer; the matching degree S11 and insertion loss IL of a matching circuit; or the gain and OP1dB of a PA) can be generated directly by the AI algorithm engine. It reduces simulation iterations by over 50%, accelerating the journey from concept to production. This tool is not just about speed; it’s about precision. It generates optimized layouts and schematics that meet design specifications with up to 95% accuracy, ensuring high-quality results with fewer revisions.

What sets RFIC-GPT apart? Unlike traditional tools that rely heavily on manual input and trial-and-error, RFIC-GPT leverages AI to predict and optimize design outcomes, making the process faster and more reliable. This means designers can focus more on innovation and less on the repetitive tasks that often slow down development.

In conclusion, RFIC-GPT represents a significant leap forward in RFIC design technology. By harnessing the power of AI, it offers unprecedented efficiency, accuracy, and ease of use. We’re proud to introduce this innovative tool and are excited about the potential it holds for the future of RFIC design. Join us in this revolution, try RFIC-GPT today, and take the first step towards more efficient, accurate, and innovative RFIC designs. The author encourages designers to try RFIC-GPT online (www.RFIC-GPT.com) and give feedback. Using RFIC-GPT takes only three steps:

(1) Input your design Specs and requirements;

(2) Consider the design trade-offs and choose the appropriate GDSII or active design;

(3) Click download for your application.

Author:

Jason Liu is a senior researcher on design automation solutions for RFIC. Jason holds a Ph.D. in Electrical Engineering and has been in the EDA industry for more than 15 years.

Also Read:

CEO Interview: Vincent Bligny of Aniah

Outlook 2024 with Anna Fontanelli Founder & CEO MZ Technologies

2024 Outlook with Cristian Amitroaie, Founder and CEO of AMIQ EDA


2024 Signal & Power Integrity SIG Event Summary

by Daniel Nenni on 02-27-2024 at 10:00 am


It was a dark and stormy night here in Silicon Valley but we still had a full room of semiconductor professionals. I emceed the event. In addition to demos, customer and partner presentations, we did a Q&A which was really great. One thing I have to say is that Intel really showed up for both DesignCon and the Chiplet Summit. Quite a few Intel employees introduced themselves and a couple even took pictures with me, great networking.

The SIPI SIG 2024 event was hosted at the Santa Clara Hilton on Jan 31st on the margins of DesignCon and was over-subscribed with 100 attendees (despite inclement weather). There were 20+ customers and partners represented, including the likes of Apple, Samsung, AMD, TI, Micron, Qualcomm, Google, Meta, Amazon, Tesla, Cisco, Broadcom, Intel, Sony, Socionext, Realtek, Microchip, Winbond, Lattice Semi, Mathworks, Ansys, Keysight, and more:

Synopsys Demos & Cocktail Hour
Interposer Extraction from 3DIC Compiler & SIPI Analysis
TDECQ Measurement for High Speed PAM4 Data Links

Customer Presentations and Q&A:
Optimization of STATEYE Simulation Parameters for LPDDR5 Application
Youngsoo Lee, Senior Manager of AECG Package Development Team, AMD

IBIS and Touchstone: Assuring Quality and Preparing for the Future
Michael Mirmak, Signal Integrity Technical Lead, Intel

Signal and Power Integrity Simulation Approach for HBM3
Hisham Abed, Sr. Staff A&MS Circuit Design Engineer, Solutions Group, Synopsys

Signal Integrity at the Cutting Edge: Advanced Modeling and Verification for High-Speed Interconnects
Barry Katz, Director of Engineering, RF & AMS Products, MathWorks

All great presentations, the panelists had more than 100 years of combined experience, but I must say that Michael Mirmak from Intel was really really great. Here is a quick summary that Michael helped me with. Michael started his presentation with the standard corporate disclaimer:

“I must emphasize that my statements and appearance at the event was not intended and should not be construed as an endorsement by my employer, or by any organization of particular products or services.”

IBIS and Touchstone: Assuring Quality and Preparing for the Future
  • IBIS and Touchstone are the most common model formats for SI and PI applications today
  • Assessing model quality remains a constant concern for both model users and producers
  • The simulation output log file is often neglected but can provide very useful insights, as it includes model quality reporting and issue detection outside of outputs such as eye diagrams, before actual channel simulation begins
  • Even for high-speed IBIS AMI (Algorithmic Model Interface) simulations, problems can arise from simple analog IBIS data mismatches between impedance and transition characteristics; the simulation log can alert the user and model-maker to these early, before larger and potentially expensive batch runs
  • The simulation output log can also help find issues with the algorithmic portion of IBIS AMI models that may distort output in subtle ways that cannot (yet) be checked with the standard parsing tool
  • IBIS 7.0 and later supports standard modeling of modern, complex component package designs that tend to be represented using proprietary SPICE variants today; S-parameters under Touchstone are now included as well
  • S-parameters using the Touchstone format are frequently used for interconnect modeling, but can become unwieldy when used to describe high-speed links at the system level over manufacturing or environmental variations
  • Touchstone 3.0 is coming and is planned to include a pole-residue format that enables compression of S-parameter data
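To illustrate why a pole-residue format compresses S-parameter data, the sketch below evaluates a rational model S(s) = d + Σ rₖ/(s − pₖ): a 1000-point frequency sweep is reconstructed from just a handful of stored numbers. The pole and residue values here are invented for illustration and are not taken from the Touchstone 3.0 draft.

```python
import numpy as np

# Hypothetical pole-residue model: a conjugate pole pair plus a direct term
# replaces a long table of sampled S-parameter points with five numbers.
poles = np.array([-1e9 + 2j * np.pi * 5e9, -1e9 - 2j * np.pi * 5e9])
residues = np.array([5e8 - 1e8j, 5e8 + 1e8j])
d = 0.02                                        # direct (constant) term

def s_model(freqs_hz):
    s = 2j * np.pi * freqs_hz[:, None]          # s = j*omega for each frequency
    return d + np.sum(residues / (s - poles), axis=1)

freqs = np.linspace(1e9, 10e9, 1000)            # a 1000-point sweep...
samples = s_model(freqs)                        # ...reconstructed from 5 stored values
print(samples.shape)                            # (1000,)
print(bool(np.all(np.isfinite(samples))))       # True: poles are off the j-omega axis
```

The compression win is that the stored model is independent of how densely you sweep: new frequency points, corners, or variations are evaluated from the same few poles and residues.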

Congratulations to Synopsys and the semiconductor ecosystem, it was a great event, absolutely.

Also Read:

Synopsys Geared for Next Era’s Opportunity and Growth

Automated Constraints Promotion Methodology for IP to Complex SoC Designs

UCIe InterOp Testchip Unleashes Growth of Open Chiplet Ecosystem


BDD-Based Formal for Floating Point. Innovation in Verification

by Bernard Murphy on 02-27-2024 at 6:00 am


A different approach to formally verifying very challenging datapath functions. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome. We’re planning to add a wrinkle to our verification exploration this year. Details to follow!

The Innovation

This month’s pick is Polynomial Formal Verification of Floating-Point Adders. This article was published in the 2023 DATE Conference. The authors are from the University of Bremen, Germany.

Datapath element implementations must be proved absolutely correct (remember the infamous Pentium floating point bug), which demands formal proofs. Yet BDD state graphs for floating point elements rapidly explode, while SAT proofs are often bounded hence not truly complete.

The popular workaround today is to use equivalence checking with a C/C++ reference model, which works very well but of course depends on a trusted reference. However some brave souls are still trying to find a path with BDD. These authors suggest methods to use case-splitting to limit state graph explosion, dropping from exponential to polynomial bounded complexity. Let’s see what our reviewers think!

Paul’s view

A compact, easy-to-read paper to kick off 2024, on a classic problem in computer science: managing BDD size explosion in formal verification.

The key contribution of the paper is a new method for “case splitting” in formal verification of floating point adders. Traditionally, case splitting means to pick a boolean variable that causes a BDD to blow up in size, and just run two separate formal proofs, one for the “case” where that variable is true and one for the case where that variable is false. If both proofs pass, then it means that the overall proof for the full BDD including that variable must necessarily also pass. Of course, case splitting for n variables means 2^n cases so if you use it everywhere then you just trade one exponential blow up for another.

This paper observes that case splitting need not be based only on individual Boolean variables. Any exhaustive sub-division of problem is valid. For example, prior to normalizing the base-exponent, a case split on the number of leading zeros in the base can be performed – i.e. zero leading zeros in the base, one leading zero in the base, and so on. This particular choice of split combined with one other cunning split in the alignment shift step achieves a magical compromise such that the overall proof for a floating point add goes from being exponential to polynomial in complexity. A double precision floating point add circuit can now be formally proved correct in 10 seconds. Nice!
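To show the flavor of case splitting on something other than a single Boolean variable, here is a toy sketch (not the paper’s BDD machinery): an 8-bit normalization shifter is verified by running one exhaustive sub-proof per leading-zero count. Each sub-case is small, and together the cases cover the entire input space, which is exactly the soundness condition case splitting relies on.

```python
from itertools import product

WIDTH = 8

def leading_zeros(bits):
    for i, b in enumerate(bits):
        if b:
            return i
    return len(bits)

def normalize(bits):
    """Shift left until the MSB is 1 (all-zero input stays all-zero)."""
    lz = leading_zeros(bits)
    return bits[lz:] + (0,) * lz

# Case splitting: instead of one proof over all 2^WIDTH inputs at once,
# run one small proof per leading-zero count. The sub-cases partition the
# input space, so passing all of them proves the overall property.
for lz in range(WIDTH + 1):
    cases = [bits for bits in product((0, 1), repeat=WIDTH)
             if leading_zeros(bits) == lz]
    for bits in cases:
        out = normalize(bits)
        value = int("".join(map(str, bits)), 2)
        shifted = (value << lz) & ((1 << WIDTH) - 1)
        assert int("".join(map(str, out)), 2) == shifted
print("all", WIDTH + 1, "cases proved")  # prints: all 9 cases proved
```

In the paper the same partitioning idea keeps each per-case BDD polynomially sized instead of shrinking a brute-force enumeration, but the exhaustive-coverage argument is identical.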

Raúl’s view

This short paper introduces a novel approach to managing the size explosion problem in formal verification of floating point adders using BDDs, a classic issue in equivalence checking. Traditionally, this is addressed by case splitting, i.e., dividing the problem based on the values (0, 1) of individual Boolean variables, which also leads to exponential growth in complexity with the number of variables split. Based on observations of where the explosion in size happens when constructing the BDDs, the paper proposes three innovative case splitting methods. They are not based on individual Boolean variables and are specific to floating point adders (of course, they do not simplify general equivalence checking to P).

  1. Alignment Shift Case Splitting: The paper suggests splitting with regard to the shift amount or exponent difference, significantly reducing the number of cases needed for verification.
  2. Leading Zero Case Splitting: To address the explosion at the normalization shift, the paper proposes creating cases based on the number of leading zeros in the addition result.
  3. Subnormal Numbers and Rounding: Subnormal numbers are handled by adding a simplification in cases where they can occur; rounding does not trigger an explosion in BDD size.

By strategically choosing these case splits, the overall proof complexity for floating point addition can be reduced from exponential to polynomial. As a result, formal verification of double- and quadruple-precision floating point add circuits, which times out at two hours with classic symbolic simulation, can now be completed in 10-300 seconds!


New Emulation, Enterprise Prototyping and FPGA-based Prototyping Launched

by Daniel Payne on 02-26-2024 at 10:00 am


General-purpose CPUs have run most EDA tools quite well for many years, but if you really want to accelerate something like simulation, you start to look at specialized hardware accelerators. Emulators came onto the scene around 1986, and their processing power has greatly increased over the years, mostly in response to the demands of leading-edge companies designing CPUs, GPUs and, more recently, AI processors, plus hyperscalers that need to accelerate simulation to ensure that designs are bug-free and will actually boot up and run software properly before tape-out.

All modern CPU, GPU, hyperscaler, and AI processor teams use emulation to accelerate the design and debug of their SoCs, with transistor counts ranging from 25 billion to 167 billion, often using chiplets because the massive number of transistors no longer fits within the maximum reticle size. These systems are challenging to verify, and a general-purpose CPU running EDA simulations is no longer fast enough, so emulation must be used. Design teams on AI and hyperscale projects run software loads that demand quick analysis so that trade-offs can be made between power and performance.

Emulation is used early in the design flow, when lots of design changes are happening, so flexible debug and fast compile are critical for quick turnaround. When the RTL coding has become stable enough and less debugging is required, a faster approach using enterprise prototyping can begin, so that early firmware and software development can start. The third stage of accelerated simulation is traditional FPGA-based prototyping, where software developers are the main users and performance and flexibility are the prime needs.

With these three hardware-assisted acceleration techniques you could opt for three hardware systems from multiple vendors; however, I just learned about a new announcement from Siemens: they have launched a next-generation family of products that covers all three use cases, called Veloce CS.

For emulation, the Veloce Strato CS uses a domain-specific chip called CrystalX, which enables fast, predictable compile during design bring-up and speeds iterations. Designers are more productive with native debug capabilities, and the platform scales to fit the biggest designs. On the prototyping side, the FPGA-based Veloce Primo CS uses the latest AMD chip, the VP1902 Adaptive SoC, which has 2X higher logic density and 8X faster debug performance.

Previous generations of emulators often had unique hardware form factors, but with the new Veloce CS Siemens adopted a blade architecture, which fits into modern data centers more easily.

The previous generation of emulators from Siemens was called the Veloce Strato+, introduced in 2021; now with the new Veloce Strato CS you enjoy 4X gate capacity, 5X performance gain, and a 5X debug throughput boost. Scalability now goes up to 40+B gates, and the modular blade approach spans from 1 to 256 blades.

Veloce Strato CS configurations

For enterprise prototyping Siemens offered the Veloce Primo beginning in 2021; with the new Veloce Primo CS your team will benefit from 4X gate capacity, 5X in performance, and a whopping 50X in debug throughput. Once again, blades are used with Veloce Primo CS, providing a range of 500M gates, all the way up to 40+B gates.

The following diagram shows the common compiler, debug and runtime software shared between the emulator and enterprise prototyping systems, with the major difference being that the emulator uses the custom CrystalX chip and the enterprise prototype employs the AMD VP1902 chips.

Emulator and Enterprise Prototype systems

By using a blade architecture these systems require only air cooling, so no expensive water cooling is needed.

The third new product introduced is Veloce proFPGA CS, and it gives you 2X gate capacity, 2X performance, and a stunning 50X debug throughput advantage over the previous-generation proFPGA system. Scaling starts with a single FPGA clocking at 100MHz and grows up to 4B gates. The Uno and Quad configurations are well suited for desktop prototyping, while each blade system has 6 FPGAs.

Prototyping used to be limited by slow design bring-up, but now with Veloce proFPGA CS engineers will experience efficient compile without manual RTL edits, enjoy automated multi-FPGA partitioning, benefit from timing-driven performance optimization, and become more efficient with sophisticated at-speed debug through the VPS software.

Summary

Siemens designed, built and announced three new hardware-accelerated systems that have some immediate benefits, like:

  • Lower power to cool
  • ~10 kW per billion gates
  • Fits into data centers using blades and cold-aisle/hot-aisle air cooling
  • Multi-user support, enabling 24×7 use
  • Emulation, Enterprise Prototyping, FPGA-based prototyping

Early users of Veloce CS include tier-one names like AMD and ARM. The new Veloce CS family has impressive credentials spanning all three types of hardware platforms and is certainly worth a closer look. Your team can choose just the right size of each platform to meet your project’s capacity needs.

Related Blogs


Photonic Computing – Now or Science Fiction?

by Mike Gianfagna on 02-26-2024 at 6:00 am


Cadence recently held an event to dig into the emerging world of photonic computing. Called The Rise of Photonic Computing, it was a two-day event held in San Jose on February 7th and 8th. The first day of the event was also accessible virtually. I attended a panel discussion on the topic – more to come on that. The day delivered a rich set of presentations from industry and academic experts intended to help you tackle many of your design challenges. Some of this material will be available for replay in late February. Please check back here for the link. Now let’s look at a spirited panel discussion that asked the question, Photonic computing – now or science fiction?

The Panelists

There is a photo of the panel at the top of this post. Moving left to right:

Gilles Lamant, distinguished engineer at Cadence, moderated the panel. Gilles has worked at Cadence for almost 31 years. He is a Virtuoso platform architect and a design methodology consultant in San Jose, Moscow, Tokyo, and Burlington, Vermont. Gilles has a deep understanding of system design and kept the panel moving in some very interesting directions.

Dr. Daniel Perez-Lopez, CTO and co-founder of iPronics, a company that aims to expand photonics processing to all the layers of the industry with its SmartLight processors. The company is headquartered in Valencia, Spain.

Dr. Michael Förtsch, Founder and CEO of Q.ANT, a company that develops quantum sensors and photonic chips and processors for quantum computing based on its Quantum Photonic Framework. The company is headquartered in Stuttgart, Germany.

Dr. Bhavin Shastri, Assistant Professor, Engineering & Applied Physics, Centre for Nanophotonics, Queen’s University, located in Kingston, Ontario, Canada. Bhavin presented the keynote address right before the panel on Neuromorphic Photonic Computing, Classical to Quantum.

Dr. Patrick Bowen, CEO and co-founder of Neurophos, a company that is pioneering a revolutionary approach to AI computation, leveraging the vast potential of light. Neurophos leverages metamaterials in its work; the company is based in Austin, Texas.

That’s quite a lineup of intriguing worldwide startups and advanced researchers. The conversation covered a lot of topics, insights and predictions. Watch for the replay to hear the whole story. In the meantime, here are some takeaways…

The Commentary

Gilles observed that some of the companies on the panel look like traditional players in the sense that they use existing materials and fabs to build their products but others are innovating in the materials domain and therefore need to build the factory and the product. This observation highlights the fact that photonic computing is indeed a new field. The players that are building fabrication capabilities may become vertically integrated suppliers or they may become pure-play fab partners to others. It’s a dynamic worth watching.

Bhavin commented on this topic from an academic research perspective. His point of view was that, if you can get it done with mainstream silicon photonics, that’s what you do. However, new and exotic materials research is opening up possibilities that are not attainable with silicon, and so advanced work like that will be important to realize the broader potential of the technology.

Other discussions on this topic pointed out that the massive compute demands of advanced AI algorithms simply cannot fit the size or power envelope required using silicon. New materials will be the only way forward. In fact, some examples were given as to how challenging applications such as transformers can be re-modeled in a way that makes them more appropriate for the analog domain offered by photonic processing.

An interesting observation was made regarding newly minted PhD students. What if part of the dissertation were to develop a pitch about the invention and try it with a VC? This would bring a reality check to the invention process – how does the invention contribute to the real world? I thought that was an interesting idea.

Here is a good quote from the discussion: “Fifty years of Moore’s Law and we are still at the stage where we haven’t found an efficient computer to simulate nature.”  This is a problem that photonic computing has a chance to solve.

Gilles ended the panel with a question regarding when photonic computing would be fully mainstream. 10 years, 20 years? No one was willing to answer. We are at the beginning of a very exciting time.

To Learn More

Much of the first day of the event will be available for replay, including this panel. Check back here around the end of February. In the meantime, you can check out what Cadence has to offer for photonic design here.  The panel Photonic computing – now or science fiction? didn’t necessarily answer the question, but it did deliver a lot of detail and insights to ponder for the future.


Intel Direct Connect Event

Intel Direct Connect Event
by Scotten Jones on 02-23-2024 at 12:00 pm

Figure 1

On Wednesday, February 21st Intel held their first Foundry Direct Connect event. The event had both public and NDA sessions, and I was in both. In this article I will summarize what I learned (that is not covered by NDA) about Intel’s business, process, and wafer fab plans (my focus is process technology and wafer fabs).

Business

Key points in the keynote address from my perspective.

  • Intel is going to organize the company as Product Co (not sure Product Co is the official name) and Intel Foundry Services (IFS) with Product Co interacting with IFS like a regular foundry customer. All the key systems will be separated and firewalled to ensure that foundry customer data is secure and not accessible by Product Co.
  • Intel’s goal is for IFS to be the number two foundry in the world by 2030. There was a lot of discussion about IFS being the first system foundry: in addition to offering access to Intel’s wafer fab processes, IFS will offer Intel’s advanced packaging, IP, and system architecture expertise.
  • It was interesting to see Arm’s CEO Rene Haas on stage with Intel’s CEO Pat Gelsinger. Arm was described as Intel’s most important business partner, and it was noted that 80% of parts run at TSMC have Arm cores. In my view this shows how seriously Intel is taking foundry, in the past it was unthinkable for Intel to run Arm IP.
  • Approximately three months ago IFS disclosed they had orders with a lifetime value of $10 billion; today that has grown to $15 billion!
  • Intel plans to release restated financials going back three years breaking out Product Co and IFS.
  • Microsoft’s CEO Satya Nadella appeared remotely to announce that Microsoft is doing a design for Intel 18A.

Process Technology

  • In an NDA session Ann Kelleher presented Intel’s process technology.
  • Intel has been targeting five nodes in four years (as opposed to the roughly five years it took to complete 10nm). The planned nodes were i7; i4, Intel’s first EUV process; i3; 20A, with RibbonFET (gate-all-around) and PowerVia (backside power); and 18A.
  • i7 and i4 are in production with i4 being produced in Oregon and Ireland, and i3 is manufacturing ready. 20A and 18A are on track to be production ready this year, see figure 1.

 Figure 1. Five Nodes in Four Years.

I can quibble with whether this is really five nodes; in my view i7, i3, and 18A are half nodes following i10, i4, and 20A, but it is still very impressive performance and shows that Intel’s process development is back on track. Ann Kelleher deserves a lot of credit for that turnaround.

  • Intel is also filling out their foundry offering: i3 will now have i3-T (TSV), i3-E (enhanced), and i3-P (performance) versions.
  • I can’t discuss specifics, but Intel showed strong yield data for i7 down through 18A.
  • 20A and 18A are due for manufacturing readiness this year and will be Intel’s first processes with RibbonFET (gate-all-around stacked horizontal nanosheets) and PowerVia (backside power delivery). PowerVia will be the world’s first use of backside power delivery and, based on public announcements I have seen from Samsung and TSMC, will be roughly two years ahead of both companies. PowerVia leaves signal routing on the front side of the wafer and moves power delivery to the backside, allowing independent optimization of the two; it reduces power droop and improves routing and performance.
  • 18A appears to be generating a lot of interest and is progressing well: the 0.9 PDK has been released and several companies have taped out test devices. There will be an 18A-P performance version as well. It is my opinion that 18A will be the highest-performance process available when it is released, although TSMC will have higher-transistor-density processes.
  • After 18A Intel is going to a two-year node cadence with 14A, 10A and NEXT planned. Figure 2 illustrates Intel’s process roadmap.

Figure 2. Process Roadmap.

  • Further filling out Intel’s foundry offering they are developing a 12nm process with UMC and a 65nm process with Tower.
  • The first High NA EUV tool is in Oregon with proof points expected in 2025 and production on 14A expected in 2026.

Design Enablement

Gary Patton presented Intel’s design enablement in an NDA session. Gary is a longtime IBM development executive and was also CTO at Global Foundries before joining Intel. In the past Intel’s nonstandard design flows have been a significant barrier to accessing Intel processes. Key parts of Gary’s talk:

  • Intel is adopting industry standard design practices, PDK releases and nomenclature.
  • All the major design platforms will be supported: Synopsys, Siemens, Cadence, and Ansys. Representatives from all four presented in the sessions.
  • All the major foundational IP is available across Intel’s foundry offering.
  • In my view this is a huge step forward for Intel; in fact, they discussed how quickly it has now been possible to port various design elements into their processes.
  • The availability of IP and the ease of design for a foundry are critical to success and Intel appears to have checked off this critical box for the first time.

Packaging

Choon Lee presented packaging. He is another outsider brought into Intel; I believe he said he had only been there three months. Another analyst commented that it was refreshing to see Intel putting people brought in from outside into key positions, as opposed to all the key people being long-time Intel employees. Packaging isn’t really my focus, but a couple of notes I thought were key:

  • Intel is offering their advanced packaging to customers and referred to it as ASAT (Advanced System Assembly and Test) as opposed to OSAT (Outsourced Assembly and Test).
  • Intel will assemble multiple die products with die sourced from IFS and from other foundries.
  • Intel has a unique capability for testing singulated die that enables much faster and better temperature control.
  • Figure 3 summarizes Intel’s foundry and packaging capabilities.

Figure 3. Intel’s Foundry and Packaging.

Intel Manufacturing

Also under NDA Keyvan Esfarjani presented Intel’s manufacturing. Key disclosable points are:

  • Intel is the only geographically diverse foundry, with fabs in Oregon, Arizona, New Mexico, Ireland, and Israel and planned fabs in Ohio and Germany. Intel builds infrastructure around the fabs at each location.
  • The IFS foundry model will enable Intel to ramp up processes and keep them in production as opposed to ramping up processes and then ramping them down several years later the way they previously did as an IDM.
  • Intel fab locations:
    • Fab 28 in Israel is producing i10/i7 and fab 38 is planned for that location.
    • Fabs 22/32/42 in Arizona are running i10/i7, with fabs 52/62 planned for that site in mid-2025 to run 18A.
    • Fab 24 in Ireland is running 14nm with i16 foundry planned, Fab 34/44 also at that location are running i4 now and ramping i3. They will eventually run i3 foundry.
    • Fab 9/11x in New Mexico is running advanced packaging and will add 65nm with Tower in 2025.
  • Planned expansions in Ohio and Germany.
  • Oregon wasn’t discussed in any detail, presumably because it is a development site, although it does do early manufacturing. Oregon has fabs D1C, D1D, and three phases of D1X running, with rebuilds of D1A and an additional fourth phase of D1X being planned.

Conclusion

Overall, the event was very well executed, and the announcements were impressive. Intel has their process technology development back on track and they are taking foundry seriously and doing the right things to be successful. TSMC is secure as the number one foundry in the world for the foreseeable future, but given Samsung’s recurring yield issues I believe Intel is well positioned to challenge Samsung for the number two position.

Also Read:

ISS 2024 – Logic 2034 – Technology, Economics, and Sustainability

Intel should be the Free World’s Plan A Not Plan B, and we need the US Government to step in

How Disruptive will Chiplets be for Intel and TSMC?


Podcast EP209: Putting Soitec’s Innovative Substrates to Work in Mainstream Products with Dr. Christophe Maleville

Podcast EP209: Putting Soitec’s Innovative Substrates to Work in Mainstream Products with Dr. Christophe Maleville
by Daniel Nenni on 02-23-2024 at 10:00 am

Dan is joined by Dr. Christophe Maleville, chief technology officer of Soitec. He joined Soitec in 1993 and was a driving force behind the company’s joint research activities with CEA-Leti. For several years, he led new SOI process development, oversaw SOI technology transfer from R&D to production, and managed customer certifications.

He also served as vice president, SOI Products Platform at Soitec, working closely with key customers worldwide. Christophe has authored or co-authored more than 30 papers and also holds some 30 patents.

In this fascinating and informative discussion, Christophe details the innovations Soitec has achieved in engineered substrates, with a particular emphasis on silicon carbide. He explains how these unique substrates are manufactured, the qualification that has been achieved with partners, and how the manufacturing process is cost-optimized and environmentally friendly.

Christophe cites some impressive data showing the improvements the technology can deliver for EVs, along with a timeline for production deployment.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


A Candid Chat with Sean Redmond About ChipStart in the UK

A Candid Chat with Sean Redmond About ChipStart in the UK
by Daniel Nenni on 02-23-2024 at 6:00 am

Chip Start UK

When I first saw the Silicon Catalyst business plan 10 years ago I had very high hopes. Silicon Valley design starts were falling, and venture capital firms were distracted by software companies, even though without silicon there would be no software.

Silicon Catalyst is an organization focused on accelerating silicon-based startups. It provides a unique incubation ecosystem designed to help semiconductor-centric startups overcome the challenges they face in bringing their innovations to market. Silicon Catalyst offers access to a broad range of resources including mentors, industry partners, investors, and other support services critical for the success of startups in the semiconductor space. The organization aims to foster innovation and entrepreneurship within the semiconductor industry by providing startups with the guidance, resources, and networking opportunities they need to thrive.

We have been collaborating with Silicon Catalyst for four years with great success. SemiWiki is part of the Silicon Catalyst ecosystem. We not only offer the incubating companies coverage (CEO interviews and podcasts), we attend the Silicon Catalyst events and participate on many different levels. It has been an incredibly enriching partnership.

One of the advantages of being a semiconductor professional is that we get to work with the smartest and most driven people in the world. We also get to see new technologies developing that may change the world. I was on the ground floor of the smartphone revolution, which changed the world, and in my opinion it does not even compare to what AI will do. Bottom line: if you look at the Silicon Catalyst incubated companies you will see the future.

Two years ago Silicon Catalyst invaded the UK under the guidance of Sean Redmond. Sean and I started in semiconductors the same year and have run into each other quite a few times, twice during acquisitions. Sean is the Silicon Catalyst Managing Partner for the UK. With the overwhelming success of the first cohort, Sean is launching the second cohort of the ChipStart UK incubator. In the first cohort, eleven semiconductor startups are now halfway through the nine-month incubation with great success. They have full access to everything they need to deliver a full tape-out, plus experienced advisors to get them there safely.

I had a long conversation with Sean last week to get more details on semiconductors in the UK. AI seems to be driving the semiconductor community in the UK, and the rest of the world for that matter. Millions of dollars have already been raised by companies in the first ChipStart program, and Sean expects bigger things the second time around. The goal in the UK is to have a herd of semiconductor unicorns, and I have no doubt that will be the case since the UK already has the fourth-largest semiconductor R&D base.

Low-power AI is a big part of the semiconductor push in the UK, as you might suspect. Some of the applicants are spin-outs from universities and have first-time senior executives. As part of the program, classes are offered on IP strategy, legal protection, all parts of go-to-market plans, and of course fundraising. Exit strategies are also important, as semiconductor start-ups have an average ten-year life span, so it is a marathon not a sprint.

Here is the related press release

Sean also mentioned that the GSA will return to the UK with an event in London next month, in partnership with the UK Government’s Department for Science, Innovation & Technology (DSIT), to jointly explore the impact of semiconductor innovation in anticipation of a NetZero economy. You can get details here:

Semiconductor Innovation for NetZero

About Silicon Catalyst

Silicon Catalyst is the world’s only incubator focused exclusively on accelerating semiconductor solutions, built on a comprehensive coalition of in-kind and strategic partners to dramatically reduce the cost and complexity of development. More than 1000 startup companies worldwide have engaged with Silicon Catalyst and the company has admitted over 100 exciting companies. With a world-class network of mentors to advise startups, Silicon Catalyst is helping new semiconductor companies address the challenges in moving from idea to realization. The incubator/accelerator supplies startups with access to design tools, silicon devices, networking, and a path to funding, banking and marketing acumen to successfully launch and grow their companies’ novel technology solutions. Over the past eight years, the Silicon Catalyst model has proven to dramatically accelerate a startup’s trajectory while at the same time de-risking the equation for investors. Silicon Catalyst has been named the Semiconductor Review’s 2021 Top-10 Solutions Company award winner.

The Silicon Catalyst Angels was established in July 2019 as a separate organization to provide access to seed and Series A funding for Silicon Catalyst portfolio companies. SiliconCatalyst.UK, a subsidiary of Silicon Catalyst, was selected by the UK government to manage ChipStart UK, an early-stage semiconductor incubator funded by the UK government.

More information is available at www.siliconcatalyst.uk, www.siliconcatalyst.com and www.siliconcatalystangels.com.

Also Read:

Seven Silicon Catalyst Companies to Exhibit at CES, the Most Powerful Tech Event in the World

Silicon Catalyst Welcomes You to Our “AI Wonderland”

McKinsey & Company Shines a Light on Domain Specific Architectures