Revolutionizing RFIC Design: Introducing RFIC-GPT
by Jason Liu on 02-28-2024 at 6:00 am

Figure 1. Example of a one-stage RF power amplifier (PA).

In the rapidly evolving world of Radio Frequency Integrated Circuits (RFIC), the challenge has always been to design efficient, high-performance components quickly and accurately. Traditional methods, while effective, come with high complexity and lengthy iteration cycles. Today, we’re excited to unveil RFIC-GPT, a groundbreaking tool that transforms RFIC design through the power of generative AI.

RF chips are known as the crown jewel of analog chips, and RF circuits typically contain not only active circuits, i.e., circuits composed mostly of active devices such as transistors, but also a large number of passive components such as inductors, transformers and matching networks. Fig. 1 shows an example of a one-stage RF power amplifier (PA): the active part of the circuit is a differential common-source PA with cross-coupled varactors, connected to an input matching network and an output matching network. The matching networks are usually combinations of passive devices such as inductors, capacitors and transformers arranged in an optimized configuration.

To design such an RF circuit, both the devices in the active circuit and the passive layout patterns in the matching networks need to be optimized. The conventional RFIC design flow is shown in the top half of Fig. 2. On one hand, active circuits must first be designed and simulated, both as schematics and as layouts. On the other hand, the passive components and circuits are iterated repeatedly using physically detailed and tedious electromagnetic (EM) simulation of their layouts, making them a key challenge in RF design.

Thereafter, the parameters of the entire layout are extracted and post-layout simulations are run to compare against the design specifications (Specs). Finally, the active circuits and the layouts of the passive circuits are re-adjusted and re-simulated, and the results are compared again. This process is iterated numerous times until the design Specs are achieved. The main difficulties of designing RFICs can be attributed to:

(1) large design search space of both active and passive circuits;

(2) lengthy and tedious EM simulation required;

(3) interactions between active and passive circuits, and between the RFIC and its surroundings, demand numerous iterations and optimizations.

Therefore, the traditional RFIC design flow takes substantial human effort, and the design quality achievable in a constrained time depends largely on the experience of the individual IC designer.

Recently, generative AI has been researched and explored extensively for generating content, including but not limited to dialogue, images, and program code. Analogously, generative AI is now being considered for RFIC design automation. The bottom half of Fig. 2 shows an example RFIC design flow assisted by generative AI. Essentially, the behavior of small circuit components can be lumped into models, so lengthy simulations can be omitted.

Additionally, the solution-searching “experience” for RFIC design can be “learned”, and solutions, i.e., initial designs of RFIC schematics and layouts, can be quickly “generated”. Importantly, the simulated results of the AI-generated RFIC circuits can already be close to the design Specs, so IC design engineers only need to perform some final optimization and verification simulations before the circuits can be applied to RFIC design blocks for tape-out. This methodology eliminates a large share of the simulation iterations and drastically improves design efficiency. Furthermore, the results are more consistent run to run, since the task is performed by an “emotionless” computer.

As a pioneer of intelligent chip design solutions, we have launched the AI-based RFIC design automation tool RFIC-GPT. Using RFIC-GPT, GDSII or schematic diagrams of RF devices and circuits meeting design specifications (such as the Q/L/k of a transformer; the matching degree S11 and insertion loss IL of a matching circuit; or the gain and OP1dB of a PA) can be generated directly by the AI algorithm engine. It reduces simulation iterations by over 50%, accelerating the journey from concept to production. The tool is not just about speed; it is about precision. It generates optimized layouts and schematics that meet design specifications with up to 95% accuracy, ensuring high-quality results with fewer revisions.
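To make the input side concrete, here is a minimal sketch of what a spec-driven request might look like. This is purely illustrative: the spec keys and the commented-out call below are assumptions, not RFIC-GPT’s actual API, which is driven through the web interface mentioned later in this article.

```python
# Purely hypothetical sketch: these names (the spec keys and the commented-out
# rfic_gpt.generate_layout call) are assumptions for illustration, not
# RFIC-GPT's actual API. The real tool is driven through www.RFIC-GPT.com.

transformer_spec = {
    "component": "transformer",
    "frequency_GHz": 28.0,        # target operating frequency
    "L_primary_nH": 0.35,         # primary inductance target (L)
    "Q_min": 12.0,                # minimum quality factor (Q)
    "k_coupling": 0.75,           # coupling coefficient target (k)
}

matching_spec = {
    "component": "matching_network",
    "S11_dB_max": -15.0,          # input matching requirement (S11)
    "insertion_loss_dB_max": 1.0, # insertion loss (IL) budget
}

print("transformer spec:", transformer_spec)
print("matching network spec:", matching_spec)
# layout_gds = rfic_gpt.generate_layout(transformer_spec)  # hypothetical call
```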

What sets RFIC-GPT apart? Unlike traditional tools that rely heavily on manual input and trial-and-error, RFIC-GPT leverages AI to predict and optimize design outcomes, making the process faster and more reliable. This means designers can focus more on innovation and less on the repetitive tasks that often slow down development.

In conclusion, RFIC-GPT represents a significant leap forward in RFIC design technology. By harnessing the power of AI, it offers unprecedented efficiency, accuracy, and ease of use. We’re proud to introduce this innovative tool and are excited about the potential it holds for the future of RFIC design. Join us in this revolution, try RFIC-GPT today, and take the first step towards more efficient, accurate, and innovative RFIC designs. The author encourages designers to try RFIC-GPT online (www.RFIC-GPT.com) and give feedback. Using RFIC-GPT takes only three steps:

(1) Input your design Specs and requirements;

(2) Consider the design trade-offs and choose the appropriate GDSII or active design;

(3) Click download for your application.

Author:

Jason Liu is a senior researcher on design automation solutions for RFIC. He holds a Ph.D. in Electrical Engineering and has been in the EDA industry for more than 15 years.

Also Read:

CEO Interview: Vincent Bligny of Aniah

Outlook 2024 with Anna Fontanelli Founder & CEO MZ Technologies

2024 Outlook with Cristian Amitroaie, Founder and CEO of AMIQ EDA


2024 Signal & Power Integrity SIG Event Summary
by Daniel Nenni on 02-27-2024 at 10:00 am


It was a dark and stormy night here in Silicon Valley, but we still had a full room of semiconductor professionals. I emceed the event. In addition to demos and customer and partner presentations, we did a Q&A, which was really great. One thing I have to say is that Intel really showed up for both DesignCon and the Chiplet Summit. Quite a few Intel employees introduced themselves, and a couple even took pictures with me. Great networking.

The SIPI SIG 2024 event was hosted at the Santa Clara Hilton on January 31st, on the margins of DesignCon, and was over-subscribed with 100 attendees despite the inclement weather. More than 20 customers and partners were represented, including the likes of Apple, Samsung, AMD, TI, Micron, Qualcomm, Google, Meta, Amazon, Tesla, Cisco, Broadcom, Intel, Sony, Socionext, Realtek, Microchip, Winbond, Lattice Semi, MathWorks, Ansys, Keysight, and more:

Synopsys Demos & Cocktail Hour
Interposer Extraction from 3DIC Compiler & SIPI Analysis
TDECQ Measurement for High Speed PAM4 Data Links

Customer Presentations and Q&A:
Optimization of STATEYE Simulation Parameters for LPDDR5 Application
Youngsoo Lee, Senior Manager of AECG Package Development Team, AMD

IBIS and Touchstone: Assuring Quality and Preparing for the Future
Michael Mirmak, Signal Integrity Technical Lead, Intel

Signal and Power Integrity Simulation Approach for HBM3
Hisham Abed, Sr. Staff A&MS Circuit Design Engineer, Solutions Group, Synopsys

Signal Integrity at the Cutting Edge: Advanced Modeling and Verification for High-Speed Interconnects
Barry Katz, Director of Engineering, RF & AMS Products, MathWorks

All were great presentations, and the panelists had more than 100 years of combined experience, but I must say that Michael Mirmak from Intel was really, really great. Here is a quick summary that Michael helped me with. Michael started his presentation with the standard corporate disclaimer:

“I must emphasize that my statements and appearance at the event was not intended and should not be construed as an endorsement by my employer, or by any organization of particular products or services.”

IBIS and Touchstone: Assuring Quality and Preparing for the Future
  • IBIS and Touchstone are the most common model formats for SI and PI applications today
  • Assessing model quality remains a constant concern for both model users and producers
  • The simulation output log file is often neglected but can provide very useful insights, as it includes model quality reporting and issue detection outside of outputs such as eye diagrams, before actual channel simulation begins
  • Even for high-speed IBIS AMI (Algorithmic Model Interface) simulations, problems can arise from simple analog IBIS data mismatches between impedance and transition characteristics; the simulation log can alert the user and model-maker to these early, before larger and potentially expensive batch runs
  • The simulation output log can also help find issues with the algorithmic portion of IBIS AMI models that may distort output in subtle ways that cannot (yet) be checked with the standard parsing tool
  • IBIS 7.0 and later support standard modeling of modern, complex component package designs that tend to be represented using proprietary SPICE variants today; S-parameters under Touchstone are now included as well
  • S-parameters using the Touchstone format are frequently used for interconnect modeling, but can become unwieldy when used to describe high-speed links at the system level over manufacturing or environmental variations
  • Touchstone 3.0 is coming and is planned to include a pole-residue format that enables compression of S-parameter data (a minimal sketch of the pole-residue idea follows this list)
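For readers unfamiliar with pole-residue models, here is a hedged sketch of the underlying idea, assuming a simple rational approximation. It is not the Touchstone 3.0 file format, just an illustration of why storing a few poles and residues compresses tabulated S-parameter data:

```python
# Minimal sketch of the pole-residue idea behind the planned compression
# (not the actual Touchstone 3.0 format): instead of storing S(f) at every
# frequency point, store a handful of poles/residues and reconstruct on demand.
import numpy as np

def s_param_from_poles(freqs_hz, poles, residues, d=0.0):
    """Reconstruct one S-parameter from a rational (pole-residue) model:
       S(s) = d + sum_k r_k / (s - p_k), with s = j*2*pi*f."""
    s = 1j * 2 * np.pi * np.asarray(freqs_hz)
    S = np.full(s.shape, d, dtype=complex)
    for p, r in zip(poles, residues):
        S += r / (s - p)
    return S

# Example: a single resonance near 10 GHz stored as one complex pole pair
# instead of thousands of tabulated frequency points.
w0, q = 2 * np.pi * 10e9, 20.0
pole = -w0 / (2 * q) + 1j * w0
freqs = np.linspace(1e9, 20e9, 5)
print(s_param_from_poles(freqs, [pole, np.conj(pole)], [1e9, 1e9]))
```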

Congratulations to Synopsys and the semiconductor ecosystem; it was a great event, absolutely.

Also Read:

Synopsys Geared for Next Era’s Opportunity and Growth

Automated Constraints Promotion Methodology for IP to Complex SoC Designs

UCIe InterOp Testchip Unleashes Growth of Open Chiplet Ecosystem


BDD-Based Formal for Floating Point. Innovation in Verification
by Bernard Murphy on 02-27-2024 at 6:00 am


A different approach to formally verifying very challenging datapath functions. Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome. We’re planning to add a wrinkle to our verification exploration this year. Details to follow!

The Innovation

This month’s pick is Polynomial Formal Verification of Floating-Point Adders, presented at the 2023 DATE conference. The authors are from the University of Bremen, Germany.

Datapath element implementations must be proved absolutely correct (remember the infamous Pentium floating-point bug), which demands formal proofs. Yet BDD state graphs for floating-point elements rapidly explode in size, while SAT proofs are often bounded and hence not truly complete.

The popular workaround today is equivalence checking against a C/C++ reference model, which works very well but of course depends on a trusted reference. However, some brave souls are still trying to find a path with BDDs. These authors suggest case-splitting methods that limit state-graph explosion, dropping from exponential to polynomially bounded complexity. Let’s see what our reviewers think!

Paul’s view

A compact, easy-to-read paper to kick off 2024, on a classic problem in computer science: managing BDD size explosion in formal verification.

The key contribution of the paper is a new method for “case splitting” in formal verification of floating-point adders. Traditionally, case splitting means picking a Boolean variable that causes a BDD to blow up in size and running two separate formal proofs, one for the “case” where that variable is true and one for the case where it is false. If both proofs pass, then the overall proof for the full BDD including that variable must necessarily also pass. Of course, case splitting on n variables means 2^n cases, so if you use it everywhere you just trade one exponential blow-up for another.

This paper observes that case splitting need not be based only on individual Boolean variables: any exhaustive subdivision of the problem is valid. For example, prior to normalizing the base-exponent, a case split can be performed on the number of leading zeros in the base, i.e., zero leading zeros, one leading zero, and so on. This particular choice of split, combined with one other cunning split in the alignment shift step, achieves a magical compromise such that the overall proof for a floating-point add goes from exponential to polynomial complexity. A double-precision floating-point add circuit can now be formally proved correct in 10 seconds. Nice!

Raúl’s view

This short paper introduces a novel approach to managing the size-explosion problem in formal verification of floating-point adders using BDDs, a classic issue in equivalence checking. Traditionally, this is addressed by case splitting, i.e., dividing the problem based on the values (0, 1) of individual Boolean variables, which also leads to exponential growth in complexity with the number of variables split. Based on observations of where the explosion in size happens when constructing the BDDs, the paper proposes three innovative case-splitting methods. They are not based on individual Boolean variables and are specific to floating-point adders (of course, they do not simplify general equivalence checking to P).

  1. Alignment Shift Case Splitting: The paper suggests splitting with regard to the shift amount or exponent difference, significantly reducing the number of cases needed for verification.
  2. Leading Zero Case Splitting: To address the explosion at the normalization shift, the paper proposes creating cases based on the number of leading zeros in the addition result.
  3. Subnormal Numbers and Rounding: Subnormal numbers are handled by adding a simplification in cases where they can occur; rounding does not trigger an explosion in BDD size.

By strategically choosing these case splits, the overall proof complexity for floating-point addition can be reduced from exponential to polynomial. As a result, formal verification of double- and quadruple-precision floating-point add circuits, which times out at two hours under classic symbolic simulation, can now be completed in 10-300 seconds! A toy illustration of the case-splitting principle follows.
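The sketch below illustrates the case-splitting principle only, not the paper’s BDD machinery: a property is proved over all inputs by partitioning them into exhaustive, mutually exclusive cases, here keyed on the number of leading zeros of the result, echoing the paper’s normalization split. The adder and word width are deliberately trivial, and each enumerated case stands in for one small sub-proof.

```python
# Toy illustration of case splitting -- not the paper's BDD construction.
# The input space is partitioned into exhaustive, mutually exclusive cases
# (by leading-zero count of the result); each case is verified separately.

WIDTH = 4  # tiny word width so the exhaustive check below runs instantly

def leading_zeros(x, width=WIDTH):
    return width - x.bit_length()

def reference_add(a, b):          # the specification
    return (a + b) % (1 << WIDTH)

def adder_under_test(a, b):       # the "implementation" being verified
    return (a + b) & ((1 << WIDTH) - 1)

for lz in range(WIDTH + 1):       # leading-zero counts 0..WIDTH
    for a in range(1 << WIDTH):
        for b in range(1 << WIDTH):
            if leading_zeros(reference_add(a, b)) == lz:
                assert adder_under_test(a, b) == reference_add(a, b), (a, b, lz)

print("every case passed; the cases partition the input space exactly")
```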


New Emulation, Enterprise Prototyping and FPGA-based Prototyping Launched
by Daniel Payne on 02-26-2024 at 10:00 am


General-purpose CPUs have run most EDA tools quite well for many years now, but if you really want to accelerate something like simulation then you start to look at specialized hardware accelerators. Emulators came onto the scene around 1986, and their processing power has greatly increased over the years, mostly in response to the demands of leading-edge companies designing CPUs, GPUs and, more recently, AI processors, along with hyperscalers that need to accelerate simulation to ensure that designs are bug-free and will actually boot up and run software properly before tape-out.

All modern CPU, GPU, hyperscaler, and AI processor teams are using emulation to accelerate the design and debug of their SoCs, with transistor counts ranging from 25 billion to 167 billion, often using chiplets since the massive number of transistors no longer fits within the maximum reticle size. These systems are challenging to verify, and using a general-purpose CPU to run EDA simulations is no longer fast enough, so emulation must be used. Design teams on projects for AI and hyperscale applications are running software loads that demand quick analysis so that trade-offs can be made between power and performance.

Emulation is used early in the design flow, when lots of design changes are happening, so flexible debug and fast compile features are critical for quick turnaround. When the RTL coding has become stable enough and there is less debugging required, a faster simulation approach using enterprise prototyping can be started, so that early firmware and software development can begin. The third stage of accelerated simulation is traditional FPGA-based prototyping, where software developers are the main users and performance and flexibility are the prime needs.

For these three hardware-assisted acceleration techniques you could opt for three hardware systems from multiple vendors; however, I just learned about a new announcement from Siemens: they have launched a next-generation family of products that covers all three use cases, called Veloce CS.

 

For emulation, the Veloce Strato CS uses a domain-specific chip called CrystalX, which enables fast, predictable compile during design bring-up and speeds iterations. Designers are more productive using the native debug capabilities, and the platform has the scalability to fit the biggest designs. On the prototyping side, the FPGA-based Veloce Primo CS uses the latest AMD chip, the VP1902 Adaptive SoC, which has 2X higher logic density and 8X faster debug performance.

 

Previous generations of emulators often had unique hardware form factors, but with the new Veloce CS Siemens adopted a blade architecture, which fits into modern data centers more easily.

The previous generation of emulators from Siemens was called the Veloce Strato+, introduced in 2021; now with the new Veloce Strato CS you enjoy 4X gate capacity, 5X performance gain, and a 5X debug throughput boost. Scalability now goes up to 40+B gates, and the modular blade approach spans from 1 to 256 blades.

Veloce Strato CS configurations

For enterprise prototyping Siemens first offered the Veloce Primo in 2021; with the new Veloce Primo CS your team will benefit from 4X gate capacity, 5X performance, and a whopping 50X debug throughput. Once again, blades are used with Veloce Primo CS, covering a range from 500M gates all the way up to 40+B gates.

The following diagram shows the common compiler, debug and runtime software shared between the emulator and enterprise prototyping systems, with the major difference being that the emulator uses the custom CrystalX chip and the enterprise prototype employs the AMD VP1902 chips.

Emulator and Enterprise Prototype systems

By using a blade architecture these systems require only air cooling, so no expensive water cooling is needed.

The third new product introduced is Veloce proFPGA CS, which gives you 2X gate capacity, 2X performance, and a stunning 50X debug throughput advantage over the previous-generation proFPGA system. Scaling starts with a single FPGA clocking at 100MHz and grows up to 4B gates. The Uno and Quad configurations are well suited for desktop prototyping, while each blade system has 6 FPGAs.

Prototyping used to be limited by slow design bring-up, but now with Veloce proFPGA CS engineers will experience efficient compile without manual RTL edits, enjoy automated multi-FPGA partitioning, benefit from timing-driven performance optimization, and become more efficient with sophisticated at-speed debug enabled by the VPS software.

Summary

Siemens designed, built and announced three new hardware-accelerated systems that have some immediate benefits, like:

  • Lower power to cool
  • ~10kW per billion gates (see the quick calculation after this list)
  • Fits into data centers using blades and air cooling with cold-aisle/hot-aisle air flow
  • Multi-user support, enabling 24×7 use
  • Emulation, Enterprise Prototyping, FPGA-based prototyping
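As promised above, a quick back-of-envelope check of what that power figure implies; the design sizes below are illustrative picks, not numbers from the announcement:

```python
# Quick arithmetic on the ~10 kW per billion gates figure quoted above.
# The design sizes are illustrative, not from the Siemens announcement.
KW_PER_BILLION_GATES = 10

for gates_billions in (1, 8, 40):
    kw = gates_billions * KW_PER_BILLION_GATES
    print(f"{gates_billions}B-gate design -> ~{kw} kW to power and cool")
```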

Early users of Veloce CS include tier-one names like AMD and Arm. The new Veloce family has impressive credentials, certainly worth taking a closer look at, and it spans all three types of hardware platforms. Your team can choose just the right size of each platform to meet your project’s capacity needs.



Photonic Computing – Now or Science Fiction?
by Mike Gianfagna on 02-26-2024 at 6:00 am


Cadence recently held an event to dig into the emerging world of photonic computing. Called The Rise of Photonic Computing, it was a two-day event held in San Jose on February 7th and 8th. The first day of the event was also accessible virtually. I attended a panel discussion on the topic – more to come on that. The day delivered a rich set of presentations from industry and academic experts intended to help you tackle many of your design challenges. Some of this material will be available for replay in late February. Please check back here for the link. Now let’s look at a spirited panel discussion that asked the question, Photonic computing – now or science fiction?

The Panelists

There is a photo of the panel at the top of this post. Moving left to right:

Gilles Lamant, distinguished engineer at Cadence, moderated the panel. Gilles has worked at Cadence for almost 31 years. He is a Virtuoso platform architect and a design methodology consultant in San Jose, Moscow, Tokyo and Burlington, Vermont. Gilles has a deep understanding of system design and kept the panel moving in some very interesting directions.

Dr. Daniel Perez-Lopez, CTO and co-founder of iPronics, a company that aims to expand photonics processing to all the layers of the industry with its SmartLight processors. The company is headquartered in Valencia, Spain.

Dr. Michael Förtsch, Founder and CEO of Q.ANT, a company that develops quantum sensors and photonic chips and processors for quantum computing based on its Quantum Photonic Framework. The company is headquartered in Stuttgart, Germany.

Dr. Bhavin Shastri, Assistant Professor, Engineering & Applied Physics, Centre for Nanophotonics, Queen’s University, located in Kingston, Ontario, Canada. Bhavin presented the keynote address right before the panel on Neuromorphic Photonic Computing, Classical to Quantum.

Dr. Patrick Bowen, CEO and co-founder of Neurophos, a company that is pioneering a revolutionary approach to AI computation, leveraging the vast potential of light. Neurophos leverages metamaterials in its work; the company is based in Austin, Texas.

That’s quite a lineup of intriguing worldwide startups and advanced researchers. The conversation covered a lot of topics, insights and predictions. Watch for the replay to hear the whole story. In the meantime, here are some takeaways…

The Commentary

Gilles observed that some of the companies on the panel look like traditional players in the sense that they use existing materials and fabs to build their products but others are innovating in the materials domain and therefore need to build the factory and the product. This observation highlights the fact that photonic computing is indeed a new field. The players that are building fabrication capabilities may become vertically integrated suppliers or they may become pure-play fab partners to others. It’s a dynamic worth watching.

Bhavin commented on this topic from an academic research perspective. His point of view was that if you can get it done with mainstream silicon photonics, that’s what you do. However, new and exotic materials research is opening up possibilities that are not attainable with silicon, so advanced work like that will be important to realize the broader potential of the technology.

Other discussions on this topic pointed out that the massive compute demands of advanced AI algorithms simply cannot fit the size or power envelope required using silicon. New materials will be the only way forward. In fact, some examples were given as to how challenging applications such as transformers can be re-modeled in a way that makes them more appropriate for the analog domain offered by photonic processing.

An interesting observation was made regarding newly minted PhD students: what if part of the dissertation were to develop a pitch for the invention and try it with a VC? This would bring a reality check to the invention process: how does the invention contribute to the real world? I thought that was an interesting idea.

Here is a good quote from the discussion: “Fifty years of Moore’s Law and we are still at the stage where we haven’t found an efficient computer to simulate nature.”  This is a problem that photonic computing has a chance to solve.

Gilles ended the panel with a question regarding when photonic computing would be fully mainstream. 10 years, 20 years? No one was willing to answer. We are at the beginning of a very exciting time.

To Learn More

Much of the first day of the event will be available for replay, including this panel. Check back here around the end of February. In the meantime, you can check out what Cadence has to offer for photonic design here.  The panel Photonic computing – now or science fiction? didn’t necessarily answer the question, but it did deliver a lot of detail and insights to ponder for the future.


Intel Direct Connect Event
by Scotten Jones on 02-23-2024 at 12:00 pm


On Wednesday, February 21st Intel held their first Foundry Direct Connect event. The event had both public and NDA sessions, and I was in both. In this article I will summarize what I learned (that is not covered by NDA) about Intel’s business, process, and wafer fab plans (my focus is process technology and wafer fabs).

Business

Key points in the keynote address from my perspective.

  • Intel is going to organize the company as Product Co (not sure Product Co is the official name) and Intel Foundry Services (IFS) with Product Co interacting with IFS like a regular foundry customer. All the key systems will be separated and firewalled to ensure that foundry customer data is secure and not accessible by Product Co.
  • Intel’s goal is for IFS to be the number two foundry in the world by 2030. There was a lot of discussion about IFS being the first system foundry: in addition to offering access to Intel’s wafer fab processes, IFS will offer Intel’s advanced packaging, IP, and system architecture expertise.
  • It was interesting to see Arm’s CEO Rene Haas on stage with Intel’s CEO Pat Gelsinger. Arm was described as Intel’s most important business partner, and it was noted that 80% of parts run at TSMC have Arm cores. In my view this shows how seriously Intel is taking foundry, in the past it was unthinkable for Intel to run Arm IP.
  • Approximately 3 months ago IFS disclosed they had orders with a lifetime value of $10 billion; today that has grown to $15 billion!
  • Intel plans to release restated financials going back three years breaking out Product Co and IFS.
  • Microsoft’s CEO Satya Nadella appeared remotely to announce that Microsoft is doing a design for Intel 18A.
Process Technology
  • In an NDA session Ann Kelleher presented Intel’s process technology.
  • Intel has been targeting five nodes in four years (as opposed to the roughly 5 years it took to complete 10nm). The planned nodes were i7; i4, Intel’s first EUV process; i3; 20A, with RibbonFET (gate-all-around) and PowerVia (backside power); and 18A.
  • i7 and i4 are in production, with i4 being produced in Oregon and Ireland, and i3 is manufacturing ready. 20A and 18A are on track to be production ready this year; see Figure 1.

 Figure 1. Five Nodes in Four Years.

I can quibble with whether this is really five nodes; in my view i7, i3 and 18A are half nodes following i10, i4, and 20A. But it is still very impressive execution and shows that Intel is back on track for process development. Ann Kelleher deserves a lot of credit for getting it there.

  • Intel is also filling out their offering for foundry: i3 will now have i3-T (TSV), i3-E (enhanced), and i3-P (performance) versions.
  • I can’t discuss specifics, but Intel showed strong yield data for i7 down through 18A.
  • 20A and 18A are due for manufacturing readiness this year and will be Intel’s first processes with RibbonFET (gate-all-around stacked horizontal nanosheets) and PowerVia (backside power delivery). PowerVia will be the world’s first use of backside power delivery and, based on public announcements I have seen from Samsung and TSMC, will be roughly two years ahead of both companies. PowerVia leaves signal routing on the front side of the wafer and moves power delivery to the backside, allowing independent optimization of the two; it reduces power droop and improves routing and performance.
  • 18A appears to be generating a lot of interest and is progressing well, with the 0.9 PDK released and several companies having taped out test devices. There will be an 18A-P performance version as well. It is my opinion that 18A will be the highest-performance process available when it is released, although TSMC will have higher transistor-density processes.
  • After 18A Intel is going to a two-year node cadence with 14A, 10A and NEXT planned. Figure 2 illustrates Intel’s process roadmap.

Figure 2. Process Roadmap.

  • Further filling out Intel’s foundry offering, they are developing a 12nm process with UMC and a 65nm process with Tower.
  • The first High NA EUV tool is in Oregon with proof points expected in 2025 and production on 14A expected in 2026.
Design Enablement

Gary Patton presented Intel’s design enablement in an NDA session. Gary is a longtime IBM development executive and was also CTO at GlobalFoundries before joining Intel. In the past, Intel’s nonstandard design flows have been a significant barrier to accessing Intel processes. Key parts of Gary’s talk:

  • Intel is adopting industry standard design practices, PDK releases and nomenclature.
  • All the major design platforms will be supported: Synopsys, Siemens, Cadence, and Ansys, and representatives from all four presented in the sessions.
  • All the major foundational IP is available across Intel’s foundry offering.
  • In my view this is a huge step forward for Intel; in fact, they discussed how quickly it has been possible to port various design elements into their processes now.
  • The availability of IP and the ease of design for a foundry are critical to success and Intel appears to have checked off this critical box for the first time.
Packaging

Choon Lee presented packaging; he is another outsider brought into Intel, and I believe he said he had only been there three months. Another analyst commented that it was refreshing to see Intel putting people brought in from outside into key positions, as opposed to all the key people being long-time Intel employees. Packaging isn’t really my focus, but a couple of notes I thought were key:

  • Intel is offering their advanced packaging to customers and referred to it as ASAT (Advanced System Assembly and Test) as opposed to OSAT (Outsourced Assembly and Test).
  • Intel will assemble multiple die products with die sourced from IFS and from other foundries.
  • Intel has a unique capability for testing singulated die that enables much faster and better temperature control.
  • Figure 3 summarizes Intel’s foundry and packaging capabilities.

Figure 3. Intel’s Foundry and Packaging.

Intel Manufacturing

Also under NDA Keyvan Esfarjani presented Intel’s manufacturing. Key disclosable points are:

  • Intel is the only geographically diverse foundry, with fabs in Oregon, Arizona, New Mexico, Ireland and Israel and planned fabs in Ohio and Germany. Intel builds infrastructure around the fabs at each location.
  • The IFS foundry model will enable Intel to ramp up processes and keep them in production as opposed to ramping up processes and then ramping them down several years later the way they previously did as an IDM.
  • Intel fab locations:
    • Fab 28 in Israel is producing i10/i7 and fab 38 is planned for that location.
    • Fab 22/32/42 in Arizona are running i10/i7 with fabs 52/62 planned for that site in mid 2025 to run 18A.
    • Fab 24 in Ireland is running 14nm with i16 foundry planned; Fabs 34/44 at the same location are running i4 now and ramping i3. They will eventually run i3 foundry.
    • Fab 9/11x in New Mexico is running advanced packaging and will add 65nm with Tower in 2025.
  • Planned expansions in Ohio and Germany.
  • Oregon wasn’t discussed in any detail, presumably because it is a development site, although it does do early manufacturing. Oregon has Fabs D1C and D1D and three phases of D1X running, with rebuilds of D1A and an additional fourth phase of D1X being planned.
Conclusion

Overall, the event was very well executed, and the announcements were impressive. Intel has their process technology development back on track and they are taking foundry seriously and doing the right things to be successful. TSMC is secure as the number one foundry in the world for the foreseeable future, but given Samsung’s recurring yield issues I believe Intel is well positioned to challenge Samsung for the number two position.

Also Read:

ISS 2024 – Logic 2034 – Technology, Economics, and Sustainability

Intel should be the Free World’s Plan A Not Plan B, and we need the US Government to step in

How Disruptive will Chiplets be for Intel and TSMC?


Podcast EP209: Putting Soitec’s Innovative Substrates to Work in Mainstream Products with Dr. Christophe Maleville
by Daniel Nenni on 02-23-2024 at 10:00 am

Dan is joined by Dr. Christophe Maleville, chief technology officer of Soitec. He joined Soitec in 1993 and was a driving force behind the company’s joint research activities with CEA-Leti. For several years, he led new SOI process development, oversaw SOI technology transfer from R&D to production and managed customer certifications.

He also served as vice president, SOI Products Platform at Soitec, working closely with key customers worldwide. Christophe has authored or co-authored more than 30 papers and also holds some 30 patents.

In this fascinating and informative discussion, Christophe details the innovations Soitec has achieved in engineered substrates, with a particular emphasis on silicon carbide material. He explains how these unique substrates are manufactured, and also discusses the qualification achieved with partners and how the manufacturing process is cost-optimized and environmentally friendly.

Christophe cites some impressive data showing the improvements the technology can deliver for EVs, along with a timeline for production deployment.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


A Candid Chat with Sean Redmond About ChipStart in the UK
by Daniel Nenni on 02-23-2024 at 6:00 am


When I first saw the Silicon Catalyst business plan 10 years ago I had very high hopes. Silicon Valley design starts were falling, and venture capital firms were distracted by software companies, even though without silicon there would be no software.

Silicon Catalyst is an organization focused on accelerating silicon-based startups. It provides a unique incubation ecosystem designed to help semiconductor-centric startups overcome the challenges they face in bringing their innovations to market. Silicon Catalyst offers access to a broad range of resources including mentors, industry partners, investors, and other support services critical for the success of startups in the semiconductor space. The organization aims to foster innovation and entrepreneurship within the semiconductor industry by providing startups with the guidance, resources, and networking opportunities they need to thrive.

We have been collaborating with Silicon Catalyst for 4 years with great success. SemiWiki is part of the Silicon Catalyst ecosystem. We not only offer the incubating companies coverage (CEO interviews and podcasts), we attend the Silicon Catalyst events and participate on many different levels. It has been an incredibly enriching partnership, absolutely.

One of the advantages of being a semiconductor professional is that we get to work with the smartest and most driven people in the world. We also get to see new technologies developing that may change the world. I was on the ground floor of the smartphone revolution, which changed the world, and it does not even compare to what AI will do, in my opinion. Bottom line: if you look at the Silicon Catalyst incubated companies, you will see the future.

Two years ago Silicon Catalyst invaded the UK under the guidance of Sean Redmond. Sean and I started in semiconductors the same year and have run into each other quite a few times, twice during acquisitions. Sean is the Silicon Catalyst Managing Partner for the UK. With the overwhelming success of the first cohort, Sean is launching the 2nd cohort of the ChipStart UK incubator. In the first cohort, eleven semiconductor startups are now halfway through the nine-month incubation with great success. They have full access to everything they need to deliver a full tape-out, and experienced advisors to get them there safely.

I had a long conversation with Sean last week to get more details on semiconductors in the UK. AI seems to be driving the semiconductor community in the UK, and the rest of the world for that matter. Millions of dollars have already been raised by the first ChipStart cohort, and Sean expects bigger things the second time around. The goal in the UK is to have a herd of semiconductor unicorns, and I have no doubt that will be the case since the UK already has the 4th largest semiconductor R&D base.

Low-power AI is a big part of the semiconductor push in the UK, as you might suspect. Some of the applicants are spin-outs from universities and have first-time senior executives. As part of the program, classes are offered on IP strategy, legal protection, all parts of go-to-market planning, and of course fundraising. Exit strategies are also important, as semiconductor start-ups have an average ten-year life span, so it is a marathon, not a sprint.

Here is the related press release

Sean also mentioned that the GSA will return to the UK with an event in London next month, in partnership with the UK Government’s Department for Science, Innovation & Technology (DSIT), to jointly explore the impact of semiconductor innovation on the path to a NetZero economy. You can get details here:

Semiconductor Innovation for NetZero

About Silicon Catalyst

Silicon Catalyst is the world’s only incubator focused exclusively on accelerating semiconductor solutions, built on a comprehensive coalition of in-kind and strategic partners to dramatically reduce the cost and complexity of development. More than 1000 startup companies worldwide have engaged with Silicon Catalyst and the company has admitted over 100 exciting companies. With a world-class network of mentors to advise startups, Silicon Catalyst is helping new semiconductor companies address the challenges in moving from idea to realization. The incubator/accelerator supplies startups with access to design tools, silicon devices, networking, and a path to funding, banking and marketing acumen to successfully launch and grow their companies’ novel technology solutions. Over the past eight years, the Silicon Catalyst model has proven to dramatically accelerate a startup’s trajectory while at the same time de-risking the equation for investors. Silicon Catalyst has been named the Semiconductor Review’s 2021 Top-10 Solutions Company award winner.

The Silicon Catalyst Angels was established in July 2019 as a separate organization to provide access to seed and Series A funding for Silicon Catalyst portfolio companies. SiliconCatalyst.UK, a subsidiary of Silicon Catalyst, was selected by the UK government to manage ChipStart UK, an early-stage semiconductor incubator funded by the UK government.

More information is available at www.siliconcatalyst.uk, www.siliconcatalyst.com and www.siliconcatalystangels.com.

Also Read:

Seven Silicon Catalyst Companies to Exhibit at CES, the Most Powerful Tech Event in the World

Silicon Catalyst Welcomes You to Our “AI Wonderland”

McKinsey & Company Shines a Light on Domain Specific Architectures


Achieving Extreme Low Power with Synopsys Foundation IP Memory Compilers and Logic Libraries
by Mike Gianfagna on 02-22-2024 at 10:00 am


The relentless demand for lower-power SoCs is evident across many markets. Examples include cutting-edge mobile, IoT, and wearable devices, along with the high compute demands of AI and 5G/6G communications. Drivers for low power include battery life, thermal management and, for high-compute applications, the overall cost of operation. Several approaches are available to achieve low power. A common thread for many is the need for optimal Foundation IP, that is, embedded memories and logic libraries. This is an area of significant investment and market leadership for Synopsys. Two informative publications are now available to help you understand the options and benefits on offer. It turns out achieving extreme low power with Synopsys Foundation IP memory compilers and logic libraries is within reach.

Let’s look at the information that is available.

Technical Bulletin

I’ll start with Optimizing PPA for HPC & AI Applications with Synopsys Foundation IP, a technical bulletin that focuses on logic libraries. The piece provides details on Synopsys’ tool-aware Foundation IP solution. Topics such as optimized circuitry, broad operating voltage range support and the flexibility to add customer-specific optimizations are discussed. The article also offers a perspective on achieving either maximum possible performance or the best power-performance trade-off. The figure below summarizes the logic library circuits available in the HPC Design Kit.

Synopsys HPC Design Kit components

Details of how power improvements are achieved are provided across many applications and design strategies. Topics covered include dynamic voltage scaling across a wide operating voltage range, optimizing the PPA of AI and application-specific accelerator blocks, solutions for network-on-chip, and how the Synopsys HPC Design Kit is co-optimized with Synopsys EDA for efficient SoC implementation.

This technical bulletin provides a rich set of information and examples. You can access this information here.

White Paper

Also available is a comprehensive white paper entitled, How Low Can You Go? Pushing the Limits of Transistors. This piece digs into both embedded memories and logic libraries. It examines the details behind achieving extreme low power. Several application areas are discussed, including mobile, Bluetooth and IoT, high-performance computing, automotive, and crypto.

For embedded memories, several approaches are discussed, including assist techniques and splitting supply voltages. It is pointed out that careful co-optimization between technology and the design of memory assist circuits is required to deliver dense, low-power memory operation at low voltages. Several enhanced assist techniques are reviewed. Improvements in power range from 10% to 37%.

Reliability of memories is also discussed. The piece explains that as the voltage is reduced, the SRAM cell starts showing degradation. This degradation can cause multiple issues: reads are upset, the bitcell does not flip, the soft error rate (SER) becomes pronounced, sensing fails, control signals deviate, and the bitline (BL) signal weakens. Therefore, assist techniques are needed to support the extreme low voltages required by cutting-edge low-power applications.

The approaches Synopsys takes here make a significant difference. Strategies to improve reliability and methods to simulate aging are discussed. You should read the details for yourself – a link is coming. The data shows compelling results, with five to ten years of life added.

Logic libraries are also discussed, with strategies to enable deep low-voltage operation at 0.4V and below. Architectural optimization is also reviewed. Standard cell architectural techniques can be employed to reduce both dynamic and leakage power. For example, Synopsys uses stack-based versus stage-based architectural techniques to find the optimal topology for deep low-voltage operation. The strategy behind this approach is presented.

Characterization optimization is also covered. One important piece of characterization is modeling process variation across an SoC, referred to as on chip variation (OCV). Several advanced techniques are employed here, including machine learning to increase accuracy and optimize performance and power.

The white paper concludes with an overview of how to put everything together at the SoC level to achieve deep low voltage operation. Voltage reduction is discussed, along with dynamic voltage and frequency scaling (DVFS) techniques and various shut-down strategies such as light sleep, deep sleep, full shut down and POFF (Periphery OFF) modes.
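To see why DVFS is such a powerful lever, here is a back-of-envelope sketch using the first-order CMOS dynamic power relation; the capacitance, activity factor, and operating points below are illustrative placeholders, not figures from the white paper:

```python
# Back-of-envelope sketch of why deep low-voltage operation pays off:
# dynamic power scales roughly as alpha * C * V^2 * f. All values below
# are illustrative placeholders, not Synopsys data.

def dynamic_power(c_eff_farads, v_volts, f_hertz, activity=0.2):
    """First-order CMOS dynamic power: P = alpha * C * V^2 * f."""
    return activity * c_eff_farads * v_volts**2 * f_hertz

nominal = dynamic_power(1e-9, 0.75, 1.0e9)   # 0.75 V at 1 GHz
low_v   = dynamic_power(1e-9, 0.40, 0.5e9)   # 0.4 V, frequency halved (DVFS)

print(f"nominal: {nominal*1e3:.1f} mW, low-V: {low_v*1e3:.1f} mW, "
      f"savings: {100*(1 - low_v/nominal):.0f}%")
```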

This white paper covers a number of power optimization topics in excellent detail. I highly recommend it. You can get your copy here.  And that’s how achieving extreme low power with Synopsys Foundation IP memory compilers and logic libraries is within reach.


Navigating the 1.6Tbps Era: Electro-Optical Interconnects and 224G Links
by Kalar Rajendiran on 02-22-2024 at 6:00 am

Simulation and silicon ADC output scatter plot

In the relentless pursuit of ever-increasing data speeds, the 1.6 Terabits per second (Tbps) era looms on the horizon, promising unprecedented levels of connectivity and bandwidth within data centers. As data-intensive applications proliferate and the demand for real-time processing escalates, the need for robust and efficient communication infrastructure becomes paramount. At the heart of this infrastructure lie electro-optical interconnects, poised to revolutionize data transmission with their blend of high-speed, low-latency, and power-efficient capabilities. The adoption of 224G serial links emerges as a critical enabler for achieving end-to-end 1.6Tbps traffic capacity. These high-speed serial links serve as the backbone of data transmission, facilitating seamless communication between various components within the network. Their ability to handle ultra-high data rates and bandwidth demands makes them indispensable for the realization of next-generation communication systems.

As with every major technology advancement, there are inherent challenges to be overcome. Both the optical channel and the optical engine introduce nonlinear behavior. Traditional simulation-assisted design methods often model optical engines using electrical circuit languages and simulators, assuming linear channels, leading to overly optimistic assessments of interconnect performance.

At the recently held DesignCon 2024 conference, Synopsys presented the results of an electrical-optical co-simulation study using native electrical and optical signal representations. A highlight of the study is that the system design methodology it uses accounts for both linear and nonlinear impairments, agnostic of technology, data rate, and modulation format. The paper “System design methodology, simulation and silicon validation of a 224Gbps Serial Link” received the DesignCon 2024 Best Paper Award.

The following are some excerpts from Synopsys’ two paper submissions at DesignCon, namely “Performance assessment for high-speed 112G/224G SerDes with Direct-Drive Optical Engine” and “System Design Methodology, simulation and silicon validation of a 224Gbps serial link.”

Forward Error Correction in the 1.6T Era

Forward Error Correction (FEC) mechanisms play a pivotal role in enhancing the reliability of data transmission over high-speed links, particularly in the context of 1.6Tbps traffic. While FEC helps combat errors and ensures data integrity, its implementation introduces additional considerations such as power consumption and latency. Striking the right balance between Bit Error Rate (BER), power efficiency, and latency becomes imperative in designing efficient communication systems for the 1.6T era.
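As a hedged illustration of the BER trade-off described above, the sketch below computes the post-FEC codeword failure rate for a t-symbol-correcting block code. The RS(544,514)-style parameters are a common choice for PAM4 links but are used here purely as an example, not as a quote from the papers:

```python
# Illustrative FEC arithmetic: a t-symbol-correcting block code fails only
# when more than t of its n symbols are in error. Parameters are
# RS(544,514)-like (t = 15), used as an example rather than a spec quote.
from math import comb

def post_fec_fail_prob(n, t, p_sym):
    """P(codeword uncorrectable) = sum_{i>t} C(n,i) p^i (1-p)^(n-i)."""
    return sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i)
               for i in range(t + 1, n + 1))

n, t = 544, 15
for p in (1e-3, 1e-4):
    print(f"symbol error rate {p:.0e} -> "
          f"codeword failure rate {post_fec_fail_prob(n, t, p):.2e}")
```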

The Emergence of Electro-Optical Interfaces

To meet the evolving demands of the 1.6Tbps era, electro-optical interfaces are poised to play a transformative role. These interfaces leverage the advantages of optical technology to deliver high-speed, low-latency, and power-efficient communication solutions. Technologies such as Co-packaged Optics (CPO) and Die-to-Die (D2D) interconnects offer promising avenues for seamlessly integrating optical components into existing data center architectures, ushering in a new era of efficiency and performance.

Navigating Impairments in End-to-End Links

However, the deployment of end-to-end 224G links is not without its challenges. The conventional approach to simulating optical interconnects using electrical circuit languages and simulators, while effective in some cases, comes with several tradeoffs. Impairments such as noise, jitter, distortion, and crosstalk can significantly degrade signal quality and impact overall performance. To address these challenges, meticulous attention must be paid to modeling and mitigating impairments, ensuring the robustness and reliability of communication infrastructure in the face of non-linear effects inherent in optical and electro-optical interfaces.

The Role of Accurate System Modeling

Accurate system modeling is paramount in navigating the complexities of electro-optical interconnects and countering the non-linear effects inherent in optical transmission. By meticulously simulating various components and their interactions, designers can gain invaluable insights into system behavior and identify potential areas for optimization. Furthermore, correlation with silicon implementation ensures that simulation results closely align with real-world performance, enabling informed decision-making and efficient design iterations.

System Simulation to Silicon Correlation Comparison

In Synopsys’ electro-optical co-simulation study, the process of correlating system simulation with silicon involved a detailed lab setup for performance characterization. The setup encompassed various components including a BERT, cables, a test board daughter card, and the device under test residing in an Ironwood socket. The s-parameters considered in the system model included responses from the Wildriver, the Taconic FastRise 12-layer daughter card, and the testchip package. The comparison between silicon results and system simulation outputs showcased the correlation between the two. Overall, the findings from the study underscored the effectiveness of the system simulation model in capturing silicon behavior and provided valuable insights into system performance and optimization.

The four charts below indicate similarities in the PAM4 levels, eye opening, and BER performance when simulation and silicon were compared.

The impulse response comparison below shows a slight difference in the lock point between simulation and silicon, but overall correlation in shape.

The chart below shows the equalization capability of the receiver, with the DSP compensating for ISI and flattening the overall channel response.

The chart below captures the FFE and DFE coefficients from simulation and silicon readings, indicating some differences attributed to variations in the AFE transfer function and CDR lock point. A generic sketch of these two equalizer structures follows.
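For readers who have not met FFE and DFE before, here is a minimal, generic sketch of the two structures being compared. The tap values and the binary slicer are toy placeholders (a real 224G receiver uses a PAM4 slicer and adapted coefficients), not the readings from the paper:

```python
# Generic equalizer sketch: an FFE is a FIR filter over the sampled input,
# and a DFE subtracts ISI estimated from weighted past decisions.
# Coefficients and the binary slicer are toy placeholders only.
import numpy as np

def ffe(samples, taps):
    """Feed-forward equalizer: y[n] = sum_k taps[k] * x[n-k]."""
    return np.convolve(samples, taps, mode="same")

def dfe(samples, fb_taps, slicer=np.sign):
    """Decision-feedback equalizer: subtract ISI from past decisions."""
    decisions, out = [], []
    for x in samples:
        recent = list(reversed(decisions[-len(fb_taps):]))  # newest first
        isi = sum(c * d for c, d in zip(fb_taps, recent))
        y = x - isi
        out.append(y)
        decisions.append(slicer(y))
    return np.array(out)

rx = np.array([0.1, 0.9, 1.2, -0.8, -1.1, 0.9])   # toy received samples
print(ffe(rx, np.array([-0.1, 1.0, -0.2])))        # 3-tap FFE
print(dfe(rx, [0.15, 0.05]))                       # 2-tap DFE
```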

Summary

As data centers transition into the 1.6Tbps era, the integration of electro-optical interconnects holds the key to unlocking unprecedented levels of connectivity, bandwidth, and efficiency. Through meticulous system modeling, simulation, and correlation with silicon implementation, designers can harness the full potential of these technologies, ushering in a new era of innovation and performance in data center infrastructure. With the convergence of high-speed serial links, advanced FEC mechanisms, and emerging electro-optical interfaces, data centers are poised to meet the escalating demands of modern computing and networking applications, paving the way for a future of unprecedented connectivity and efficiency.

For more details and access to the full papers presented at DesignCon, please contact Synopsys.

For more information about Synopsys High Speed Ethernet solutions, visit www.synopsys.com/ethernet

Also Read:

Why Did Synopsys Really Acquire Ansys?

Synopsys Geared for Next Era’s Opportunity and Growth

Automated Constraints Promotion Methodology for IP to Complex SoC Designs