
Developing the Lowest Power IoT Devices with Russell Mohn

by Daniel Nenni on 03-24-2023 at 6:00 am

InPlay NanoBeacon Technology

Russell Mohn is the Co-Founder and Director of RF/AMS Design at InPlay Inc., and his team has been using WiCkeD from MunEDA for several years. We thought the rest of the world would like to learn about his experiences.

How did you get started in semiconductors and what brought you to InPlay?
I was initially drawn to analog and mixed-signal chip design because it seemed like a direct path to start using what I had learned in engineering school. I’ve stayed in the same field because there’s always something for me to learn and there are always interesting problems to solve, both of which I really enjoy. I like building things, and I’ve always been fascinated by all the fields that make the microelectronics industry possible: photolithography, material science, physics, robotics, chemistry, microscopy, not to mention all the algorithms, mathematics, and computer science that is pushing breakthroughs in the tools we use. It’s a field that keeps capturing my imagination in new ways. I like the idea of casting a design in a mask and having it produced nearly flawlessly millions of times over. I enjoy the pressure in trying to get it right the first time, and I take pride in the fact that there is a lot at stake. The feeling of getting a new part in the lab and seeing it work as designed is incredibly rewarding. And when there are problems, figuring them out is also rewarding.

I joined InPlay because our current CEO asked me to lead the RF and analog/mixed-signal design for InPlay’s chips at the end of 2016. I had worked with the other co-founders at my previous employer, which had gone through two acquisitions in the previous two years or so. I had a lot of respect for them and enjoyed working with them in the past. I always dreamed of starting my own company, so I thought it was a golden, albeit risky, opportunity. The team had a lot of complementary domain knowledge, and knowing the others were great in their fields gave me the confidence to join.

What does InPlay do?
InPlay is a fabless semiconductor company. We design and develop chips that enable wireless connectivity in applications that require low latency, many devices, and low power, all at the same time. We are also enabling a new generation of active RFID smart sensors and beacons with our NanoBeacon product line. It requires no firmware, the BOM is tiny, and power consumption is very low, so it can be powered by unique batteries and energy-harvesting technologies.

What type of circuits do you design?
We design and develop all the necessary circuits for a radio transceiver: low-noise amplifiers, mixers, programmable amplifiers, analog-to-digital converters, digital-to-analog converters, low-dropout regulators, phase-locked loops, and power amplifiers. We also design the power-management circuits for the chip, including DC-DC converters, very-low-power oscillators, references, and regulators.

Which MunEDA tools do you use?
We use WiCkeD and SPT.

How do you apply the MunEDA tools to your day-to-day job?
We’ve done some porting work over the past couple of years. It was necessary with the foundry wafer shortage, especially for startup companies like us. Using SPT to get the schematics all ported over has been really helpful.

We also use WiCkeD for both optimization and for design centering over process/voltage/temperature variation. If the circuit is small enough, an opamp for example, after choosing the right topology, the optimizer can do the work of a designer to get the needed performance, all while keeping the design centered over PVT.

We’ve also used it for intractable RF matching/filtering tasks and for worst case analysis on startup issues for metastable circuits.

What value do you see from the MunEDA tools?
I see the MunEDA tools as basically another designer on my team. This is huge since we’re a small team, so the impact has been significant.

How about the learning curve?
MunEDA’s support is really great; they care about their customers, no matter how small. The learning curve is not too bad after some built-in tutorials. I see value from the tools every time I use them, from the first time, until now.

What advice would you give a circuit designer considering the MunEDA tools?
I would advise that they keep an open mind and really look at the resulting data. I think many designers would be pleasantly surprised by the amount of time they can save and the insight they can gain into the trade-offs in their designs.

Also Read:

Webinar: Post-layout Circuit Sizing Optimization

Automating and Optimizing an ADC with Layout Generators

Webinar: Simulate Trimming for Circuit Quality of Smart IC Design

Webinar: AMS, RF and Digital Full Custom IC Designs need Circuit Sizing


Mercedes, VW Caught in TikTok Blok

by Roger C. Lanctot on 03-23-2023 at 10:00 am


Thirteen years ago, General Motors announced the introduction of a voice-enabled integration of Facebook in its cars. The announcement reflected the irresistible urge to please consumers and lead the market.

Today, multiple car makers are introducing games, streaming video, and social media apps, the most prominent of which is TikTok – with a billion users across 150 countries, including 200M+ downloads in the U.S. alone. Automotive integration looks like a no-brainer – it is, but not in a good way.

Volkswagen and Mercedes are at the forefront of the movement, Volkswagen with its announced plans for its Harman Ignite app store and Mercedes with its Faurecia Aptoide-sourced app store. Both car companies would do well to look back to the original social media integrations of GM, Mercedes, and others – which included Twitter. It all sounded like a great idea at the time – Facebook and Twitter in the dash! – but very soon, as the British say, there was no joy.

It didn’t take a rocket scientist to perceive that social media is ill-suited to automotive integration – with the possible exception of rear-seat use by passengers. Car companies tried creating automated links from navigation apps to Twitter – for posts indicating departures and arrivals – and emphasizing voice interaction, to no avail. It was soon clear that these apps simply didn’t belong.

The problem is that social media apps demand attention. Their entire business models are built on distraction. They simply don’t belong in cars.

TikTok has the added baggage of being seen as a threat to privacy and national security by many governments around the world. I’d argue connected cars are by definition a threat to privacy. Actually, based on the amount of CCTV deployed around the world, I’d say leaving your home is a threat to privacy.

TikTok appears to be a special case because of its ability to spread Chinese government propaganda and misinformation. In other words, it’s not enough that it is distracting and invading privacy, it may also invade and alter users’ political beliefs.

Car companies could not resist the Siren song of TikTok. They simply couldn’t ignore those billion users and included TikTok in their app stores. If ever there were a “red flag” moment in in-car app deployment, this is it.

With governments around the world either having already banned TikTok or planning to do so, perhaps auto makers will take a hint. The Washington Post details the breadth of the growing official rejection of TikTok.

  • India – initially banned in 2020, ban made permanent in January 2021
  • U.S. – government agencies have 30 days to delete TikTok from government-issued devices; dozens of state-level bans
  • Canada – banned on government-issued phones
  • Taiwan – banned on government devices since last December; considering a nationwide ban
  • European Union – banned on government/staff devices
  • Britain – banned on government devices
  • Australia – banned on government staff devices
  • Indonesia – temporary ban in 2018, later lifted
  • Pakistan – various temporary bans
  • Afghanistan – banned in 2021, though workarounds are possible

As auto makers such as Volkswagen and Mercedes reconsider the wisdom of TikTok integration in cars, maybe they’ll rethink some of the other crazy stuff – or at least confine it to the rear seat or limit access to times when vehicles are parked or charging. Angry Birds? Really, Mercedes?

It’s a good time to pause and rethink what we are putting into cars. Car makers have a history of wanting to integrate the latest and greatest tech in their cars, which explains the growing number of announcements regarding in-vehicle ChatGPT and Meta integrations. The good news is that these days, with over-the-air software update technology, apps can be removed as quickly as they can be deployed. Let’s hope so.

Within a year of launching Facebook in its dashboards, General Motors changed course and dropped the plan. I think we can expect a similar outcome in this case.

Also Read:

AAA Hypes Self-Driving Car Fears

IoT in Distress at MWC 2023

Modern Automotive Electronics System Design Challenges and Solutions


Webinar: Enhance Productivity with Machine Learning in the Analog Front-End Design Flow

by Daniel Payne on 03-23-2023 at 6:00 am


Analog IC designers can spend way too much time and effort re-using old, familiar, manual iteration methods for circuit design, just because that’s the way it’s always been done. Circuit optimization is an EDA approach that can automatically size all the transistors in a cell, by running SPICE simulations across PVT corners and process variations, to meet analog and mixed-signal design requirements. Sounds promising, right?

So which circuit optimizer should I consider using?

To answer that question, there’s a webinar coming up, hosted by MunEDA, an EDA company founded back in 2001, and it’s all about their circuit optimizer named WiCkeD. Inputs are a SPICE netlist along with design requirements such as gain, bandwidth, and power consumption. Output is a sized netlist that meets or exceeds the design requirements.

Analog Circuit Optimization

The secret sauce in WiCkeD is how it runs a Design of Experiments (DOE) to build up a Machine Learning (ML) model, calculate the worst-case PVT corner, find the transistor geometry sensitivities, and even calculate the On-Chip Variation (OCV) sensitivities. This approach creates and updates a non-linear, high-dimensional ML model from simulated data.

Having an ML model enables the tool to solve the optimization challenge, then do a final verification by running a SPICE simulation. Iterations are automated until all requirements are met, which sounds much faster than the old manual iteration methods. Training the ML model is all automatic, and quite efficient.
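As a rough illustration of that optimize-and-verify loop (and emphatically not WiCkeD's actual algorithm), here is a toy sensitivity-driven sizing sketch. `mock_spice`, its two width parameters, and the design targets are all invented stand-ins for a real simulator and netlist:

```python
def mock_spice(w):
    # Stand-in for a SPICE run on a two-parameter sizing problem (purely
    # illustrative): returns a scalar cost versus invented design targets.
    w1, w2 = w
    gain_err = (w1 * w2 - 4.0) ** 2   # pretend "gain" wants w1*w2 near 4
    power = 0.1 * (w1 + w2)           # pretend smaller devices burn less power
    return gain_err + power

def sensitivities(f, w, h=1e-4):
    # Finite-difference sensitivity of the cost to each width, loosely
    # analogous to the geometry sensitivities the tool derives from DOE runs.
    base = f(w)
    grads = []
    for i in range(len(w)):
        wp = list(w)
        wp[i] += h
        grads.append((f(wp) - base) / h)
    return grads

def size_circuit(f, w, step=0.05, iters=200):
    # Iterate: estimate sensitivities, update the widths, then re-"simulate"
    # to verify, repeating until the iteration budget is spent.
    for _ in range(iters):
        g = sensitivities(f, w)
        w = [max(0.1, wi - step * gi) for wi, gi in zip(w, g)]
    return w, f(w)

widths, cost = size_circuit(mock_spice, [1.0, 1.0])
```

Starting from `[1.0, 1.0]`, the loop settles near w1·w2 ≈ 4 while the power term keeps the widths from growing needlessly.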

Circuit designers will also learn:

  • Where to use circuit optimization
  • What types of circuits are good to optimize
  • How much value circuit optimization brings to the design flow

Engineers at STMicroelectronics have used the circuit optimization in WiCkeD, and MunEDA discusses their specific results in time savings and in meeting requirements. InPlay Technologies showed circuit optimization results at the DAC 2018 conference.

Webinar Details

View the webinar replay by registering online.

About MunEDA
MunEDA provides leading EDA technology for analysis and optimization of yield and performance of analog, mixed-signal and digital designs. MunEDA’s products and solutions enable customers to reduce the design times of their circuits and to maximize robustness and yield. MunEDA’s solutions are in industrial use by leading semiconductor companies in the areas of communication, computer, memories, automotive, and consumer electronics. www.muneda.com.



Narrow AI vs. General AI vs. Super AI

by Ahmed Banafa on 03-22-2023 at 10:00 am


Artificial intelligence (AI) is a term used to describe machines that can perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. AI is classified into three main types: Narrow AI, General AI, and Super AI. Each type of AI has its unique characteristics, capabilities, and limitations. In this article, we will explain the differences between these three types of AI.

Narrow AI  

Narrow AI, also known as weak AI, refers to AI that is designed to perform a specific task or a limited range of tasks. It is the most common type of AI and is widely used in various applications such as facial recognition, speech recognition, image recognition, natural language processing, and recommendation systems.

Narrow AI works by using machine learning algorithms, which are trained on a large amount of data to identify patterns and make predictions. These algorithms are designed to perform specific tasks, such as identifying objects in images or translating languages. Narrow AI is not capable of generalizing beyond the tasks for which it is programmed, meaning that it cannot perform tasks that it has not been specifically trained to do.

One of the key advantages of Narrow AI is its ability to perform tasks faster and more accurately than humans. For example, facial recognition systems can scan thousands of faces in seconds and accurately identify individuals. Similarly, speech recognition systems can transcribe spoken words with high accuracy, making it easier for people to interact with computers.

However, Narrow AI has some limitations. It is not capable of reasoning or understanding the context of the tasks it performs. For example, a language translation system can translate words and phrases accurately, but it cannot understand the meaning behind the words or the cultural nuances that may affect the translation. Similarly, image recognition systems can identify objects in images, but they cannot understand the context of the images or the emotions conveyed by the people in the images.

General AI  

 General AI, also known as strong AI, refers to AI that is designed to perform any intellectual task that a human can do. It is a theoretical form of AI that is not yet possible to achieve. General AI would be able to reason, learn, and understand complex concepts, just like humans.

The goal of General AI is to create a machine that can think and learn in the same way that humans do. It would be capable of understanding language, solving problems, making decisions, and even exhibiting emotions. General AI would be able to perform any intellectual task that a human can do, including tasks that it has not been specifically trained to do.

One of the key advantages of General AI is that it would be able to perform any task that a human can do, including tasks that require creativity, empathy, and intuition. This would open up new possibilities for AI applications in fields such as healthcare, education, and the arts.

However, General AI also raises some concerns. The development of General AI could have significant ethical implications, as it could potentially surpass human intelligence and become a threat to humanity. It could also lead to widespread unemployment, as machines would be able to perform tasks that were previously done by humans. The following well-known systems are sometimes brought up in discussions of General AI, although each of them remains narrow AI:

1.    AlphaGo: A computer program developed by Google’s DeepMind that is capable of playing the board game Go at a professional level.

2.    Siri: An AI-powered personal assistant developed by Apple that can answer questions, make recommendations, and perform tasks such as setting reminders and sending messages.

3.    ChatGPT: A natural language processing tool driven by AI technology that allows you to have human-like conversations with a chatbot. The language model can answer questions and can assist with tasks such as composing emails, essays, and code.

Super AI

Super AI refers to AI that is capable of surpassing human intelligence in all areas. It is a hypothetical form of AI that is not yet possible to achieve. Super AI would be capable of solving complex problems that are beyond human capabilities and would be able to learn and adapt at a rate that far exceeds human intelligence.

The development of Super AI is the ultimate goal of AI research. It would have the ability to perform any task that a human can do, and more. It could potentially solve some of the world’s most pressing problems, such as climate change, disease, and poverty.

Possible examples from movies: Skynet (The Terminator), VIKI (I, Robot), and Jarvis (Iron Man).

Challenges and Ethical Implications of General AI and Super AI

The development of General AI and Super AI poses significant challenges and ethical implications for society. Some of these challenges and implications are discussed below:

  1. Control and Safety: General AI and Super AI have the potential to become more intelligent than humans, and their actions could be difficult to predict or control. It is essential to ensure that these machines are safe and do not pose a threat to humans. There is a risk that these machines could malfunction or be hacked, leading to catastrophic consequences.
  2. Bias and Discrimination: AI systems are only as good as the data they are trained on. If the data is biased, the AI system will be biased as well. This could lead to discrimination against certain groups of people, such as women or minorities. There is a need to ensure that AI systems are trained on unbiased and diverse data.
  3. Unemployment: General AI and Super AI have the potential to replace humans in many jobs, leading to widespread unemployment. It is essential to ensure that new job opportunities are created to offset the job losses caused by these machines.
  4. Ethical Decision-making: AI systems are not capable of ethical decision-making. There is a need to ensure that these machines are programmed to make ethical decisions, and that they are held accountable for their actions.
  5. Privacy: AI systems require vast amounts of data to function effectively. This data may include personal information, such as health records and financial data. There is a need to ensure that this data is protected and that the privacy of individuals is respected.
  6. Singularity: Some experts have raised concerns that General AI or Super AI could become so intelligent that they surpass human intelligence, leading to a singularity event. This could result in machines taking over the world and creating a dystopian future.

Narrow AI, General AI, and Super AI are three different types of AI with unique characteristics, capabilities, and limitations. While Narrow AI is already in use in various applications, General AI and Super AI are still theoretical and pose significant challenges and ethical implications. It is essential to ensure that AI systems are developed ethically and that they are designed to benefit society as a whole.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

 

References

1.    Quantum Computing and Other Transformative Technologies, book by Ahmed Banafa https://www.amazon.com/Transformative-Technologies-Publishers-Information-Technology/dp/8770226849/ref=sr_1_1?

2.    https://www.bbvaopenmind.com/en/technology/artificial-intelligence/intellectual-abilities-of-artificial-intelligence/

3.    The Terminator movie

4.    Iron Man movie

5.    I, Robot movie

6.    https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-you-need-to-know/

Also Read:

Scaling AI as a Service Demands New Server Hardware

10 Impactful Technologies in 2023 and Beyond

Effective Writing and ChatGPT. The SEMI Test


Intel Keynote on Formal a Mind-Stretcher

by Bernard Murphy on 03-22-2023 at 6:00 am


Synopsys has posted on the SolvNet site a fascinating talk given by Dr. Theo Drane of Intel Graphics. The topic is datapath equivalency checking. Might sound like just another Synopsys VC Formal DPV endorsement but you should watch it anyway. This is a mind-expanding discussion on the uses of and considerations in formal which will take you beyond the routine user-guide kind of pitch into more fascinating territory.

Intellectual understanding versus sample testing

Test-driven simulation in all its forms is excellent and often irreplaceable in verifying the correctness of a design specification or implementation. It’s also easy to get started. Just write a test program and start simulating. But the flip side of that simplicity is that we don’t need to fully understand what we are testing to get started. We convince ourselves that we have read the spec carefully and understand all the corner cases, but it doesn’t take much compounded complexity to overwhelm our understanding.

Formal encourages you to understand the functionality at a deep level (at least if you want to deliver a valuable result). In the example above, a simple question – can z ever be all 1’s – fails to demonstrate an example in a billion cycles on a simulator. Not surprising, since this is an extreme corner case. A formal test provides a specific and very non-obvious example in 188 seconds and can prove this is the only such case in slightly less time.

OK formal did what dynamic testing couldn’t do, but more importantly you learned something the simulator might never have told you. That there was only one possible case in which that condition could happen. Formal helped you better understand the design at an intellectual level, not just as probabilistic summary across a finite set of test cases.
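That contrast between sampled and exhaustive reasoning can be sketched with a made-up example (the actual circuit from the talk isn't reproduced here). A toy 20-bit datapath is constructed so that exactly one input drives the output to all 1's; brute-force enumeration stands in for the formal proof:

```python
import random

MASK = (1 << 20) - 1   # an invented 20-bit datapath output

def z(a):
    # Hypothetical datapath: a truncated constant multiply. Because 77 is
    # odd, it is invertible mod 2**20, so exactly one input yields all 1's.
    return (a * 77) & MASK

# "Simulation": random stimulus almost never lands on the single corner case.
rng = random.Random(0)
hits = sum(1 for _ in range(1000) if z(rng.randrange(MASK + 1)) == MASK)

# "Formal" (brute force standing in for a proof): find every witness for
# z == all-ones and confirm there is exactly one.
witnesses = [a for a in range(MASK + 1) if z(a) == MASK]
```

A thousand random trials will almost certainly miss the lone witness among 2^20 inputs, while the enumeration both produces it and shows it is unique, which is the "intellectual understanding" point made above.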

Spec issues

Theo’s next example is based on a bug vending machine (so called because when you press a button you get a bug). This looks like a pretty straightforward C to RTL equivalence check problem, C model on the left, RTL model on the right. One surprise for Theo in his early days in formal was that right-shift behavior in the C-model is not completely defined in the C standard, even though gcc will behave reasonably. However, DPV will complain about a mismatch in a comparison with the RTL, as it should. Undefined behavior is a dangerous thing to rely on.

Spec comparison between C and RTL comes with other hazards, especially around bit widths. Truncation or loss of a carry bit in an intermediate signal (#3 above) are good examples. Are these spec issues? Maybe a gray area between spec and implementation choices.

Beyond equivalence checking

The primary purpose of DPV, it would seem, is to check equivalence between a C or RTL reference and an RTL implementation. But that need is relatively infrequent and there are other useful ways such a technology might be applied, if a little out of the box. First a classic in the implementation world – I made a change, fixed a bug – did I introduce any new bugs as a result? A bit like SEQ checking after you add clock gating. Reachability analysis in block outputs may be another useful application in some cases.

Theo gets even more creative, asking trainees to use counterexamples to better understand the design, solve Sudokus, or factorize integers. He acknowledges DPV may be an odd way to approach such problems but points out that his intent is to break the illusion that DPV is only for equivalence checking. Interesting idea and certainly brain-stretching to think through such challenges. (I confess I immediately started thinking about the Sudoku problem as soon as he mentioned it.)

Wrap up

Theo concludes with a discussion on methodologies important in production usage, around constraints, regressions and comparisons with legacy RTL models. Also the challenges in knowing whether what you are checking actually matches the top-level natural language specification.

Very energizing talk, well worth watching here on SolvNet!



eFPGA goes back to basics for low-power programmable logic

by Don Dingee on 03-21-2023 at 10:00 am

Renesas ForgeFPGA Evaluation Board features the Flex Logix EFLX 1K low-power programmable logic tile

When you think “FPGA,” what comes to mind? Massive, expensive parts capable of holding a lot of logic but also consuming a lot of power. Reconfigurable platforms that can swallow RTL for an SoC design in pre-silicon testing. Big splashy corporate acquisitions where investors made tons of money. Exotic 3D packaging and advanced interconnects. But probably not inexpensive, small package, low pin count, low standby power parts, right? Flex Logix’s eFPGA goes back to basics for low-power programmable logic that can take on lower cost, higher volume, and size-constrained devices.

Two programmable roads presented a choice

At the risk of dating myself, my first exposure to what was then called FPGA technology was back when Altera brought out their EPROM-based EP1200 family in a 40-pin DIP package with its 16 MHz clock, 400 mW active power and 15 mW standby power. It came with a schematic editor and a library of gate macros. Designers would draw their logic, “burn” their part, test it out, throw it under a UV lamp and erase it if it didn’t work, and try again.

Soon after, a board showed up in another of our labs with some of the first Xilinx FPGAs. These were RAM-based instead of EPROM-based – bigger, faster, and reprogramming without the UV lamp wait or removing the part from the board. The logic inside was also more complex, with the introduction of fast multipliers. These parts could not only sweep up logic but could also be used to explore custom digital signal processing capability with rapid redesign cycles.

That set off the programmable silicon arms race, and a bifurcation developed between the PLD – programmable logic device – and the FPGA. Manufacturers made choices, with Altera and Xilinx taking the high road of FPGA scalability and Actel, Lattice, and others taking the lower road of PLD flexibility for “glue logic” to reduce bill-of-materials costs.

eFPGA shifts the low-power programmable logic equation

All that sounds like a mature market, with a high barrier to entry on one end and a more commoditized offering on the other. But what if programmable logic was an IP block that could be designed into any chip in this fabless era – including a small, low-power FPGA? That would circumvent the barrier (at least in the low and mid-range offerings) and commoditization.

Flex Logix took on that challenge with the EFLX 1K eFPGA Tile. Each logic tile has 560 six-input look-up tables (LUTs) with RAM, clocking, and interconnect. Arraying EFLX tiles gives the ability to handle various logic and DSP roles. But its most prominent features may be its size and power management.

Fabbed in TSMC 40ULP, the EFLX 1K tile fits in 1.5 mm² and offers power gating for deep-sleep modes with state retention – much more aggressive than traditional PLDs. EFLX 1K also has production-ready features borrowed from FPGAs. It presents AXI or JTAG interfaces for bitstream configuration, readback circuitry enabling soft error checking, and a test mode with streamlined vectors improving coverage and lowering test times.

See the chip in the center of this next image? That’s a ForgeFPGA from Renesas in a QFN-24 package, based on EFLX 1K IP, which Renesas offers at sub-$1 price points in volume. Its standby target current checks in at less than 20 µA. Smaller size, lower cost, and less power open doors previously closed to FPGAs. The lineage of ForgeFPGA traces back to Silego Technology, then to Dialog Semiconductor, acquired by Renesas in 2021.


Renesas brings the Go Configure IDE environment, putting a graphical user interface on top of the Flex Logix EFLX compiler. It supports mapping ForgeFPGA pins, compiling Verilog, generating a bitstream, and has a lightweight logic analyzer.


Among the pre-built application blocks for the ForgeFPGA, Flex Logix’s Geoff Tate points out an interesting one: a UART. Creating a UART in logic isn’t all that difficult, but it turns out that everyone has gone about it differently, and it’s just enough logic to need more than a couple of discrete chips. A ForgeFPGA is a chunk of reconfigurable logic that can solve that problem, allowing one hardware implementation to adapt quickly to various configurations.
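For illustration, here is a minimal sketch of the standard 8N1-style framing a UART implements; the `data_bits` and `stop_bits` knobs (and the parity omitted here) are exactly the points where implementations diverge:

```python
def uart_frame(byte, data_bits=8, stop_bits=1):
    # One UART character as a bit sequence: a low start bit, data bits
    # sent LSB-first, then high stop bit(s). The line idles high.
    assert 0 <= byte < (1 << data_bits)
    return ([0]
            + [(byte >> i) & 1 for i in range(data_bits)]
            + [1] * stop_bits)

def uart_decode(bits, data_bits=8):
    # Recover the byte from a frame produced above: check the start bit
    # and first stop bit, then reassemble the LSB-first data bits.
    assert bits[0] == 0 and bits[1 + data_bits] == 1
    return sum(b << i for i, b in enumerate(bits[1:1 + data_bits]))
```

For example, `uart_frame(0x55)` returns `[0, 1, 0, 1, 0, 1, 0, 1, 0, 1]`: start bit, alternating data bits, stop bit.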


ForgeFPGA is just one example of what can be done with the Flex Logix EFLX 1K eFPGA Tile. Flex Logix can adapt the IP for various process nodes, and the mix-and-match tiling capability offers scalability. It achieves new lows for low-power programmable logic and allows chip makers to differentiate solutions in remarkable ways. For more info, please visit:

Flex Logix EFLX eFPGA family

Also Read:

eFPGAs handling crypto-agility for SoCs with PQC

Flex Logix: Industry’s First AI Integrated Mini-ITX based System

Flex Logix Partners With Intrinsic ID To Secure eFPGA Platform


Lithography Resolution Limits: The Point Spread Function

by Fred Chen on 03-21-2023 at 6:00 am


The point spread function is the basic metric defining the resolution of an optical system [1]. A focused spot will have a diameter defined by the Airy disk [2], which is itself a part of the diffraction pattern, based on a Bessel function of the 1st kind and 1st order J1(x), with x being a normalized coordinate defined by pi*radius/(0.5 wavelength/NA), with NA being the numerical aperture of the system. The intensity is proportional to the square of 2J1(x)/x. The intensity profile is the point spread function, since it is the smallest possible defined pattern that can be focused by a lens (or mirror). The full-width at half-maximum (FWHM) is closely estimated by 0.5 wavelength/NA. DUV patterns are often much smaller than this size (down to ~0.3 wavelength/NA) and are thus required to be dense arrays and use phase-shifting masks [3].

In the context of EUV lithography, there are 0.33 NA systems and 0.55 NA systems with 20% central obscuration. The latter requires a modification of the point spread function by subtracting the point spread function corresponding to the obscured portion. For a 20% central obscuration, this means subtracting 0.4 J1(0.2x)/x, i.e., the intensity is proportional to the square of [2J1(x)/x – 0.4 J1(0.2x)/x]. The point spread functions for 0.33 NA and 0.55 NA EUV systems are plotted below.

Point spread functions for 0.33 NA and 0.55 NA EUV systems

The 0.55 NA system has a narrower FWHM, ~12.5 nm vs. ~21 nm for 0.33 NA. However, the larger NA goes out of focus faster for a given defocus distance due to larger center-to-edge optical path differences [4]. Moreover, experimentally measured EUV point spread functions [5] showed much lower contrast than expected from a ~22 nm FWHM point spread function for a 13.5 nm wavelength, 0.3 NA system. This can be attributed partly to aberrations but also, significantly, to relatively long-range effects specific to the resist, attributable to photoelectrons and secondary electrons resulting from EUV absorption [6].
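The quoted FWHM values can be cross-checked numerically from the formulas above. A minimal pure-Python sketch, using the power series for J1 and bisection for the half-maximum radius:

```python
import math

def j1(x, terms=20):
    # Bessel function of the first kind, order 1, via its power series;
    # adequate for the modest arguments spanned by the central lobe.
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2) ** (2 * k + 1) for k in range(terms))

def psf(r, wavelength, na, obscuration=0.0):
    # Normalized PSF intensity at radius r (same units as wavelength),
    # with x = pi*r/(0.5*wavelength/NA). For obscuration ratio e, the
    # amplitude is 2*J1(x)/x - e^2 * 2*J1(e*x)/(e*x), which reduces to
    # the 0.4*J1(0.2x)/x subtraction quoted above when e = 0.2.
    if r == 0:
        return 1.0
    e = obscuration
    x = math.pi * r / (0.5 * wavelength / na)
    amp = 2 * j1(x) / x - 2 * e * j1(e * x) / x
    return (amp / (1 - e ** 2)) ** 2

def fwhm(wavelength, na, obscuration=0.0):
    # Bisect for the radius where the central lobe falls to half its peak.
    lo, hi = 0.0, wavelength / na
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if psf(mid, wavelength, na, obscuration) > 0.5:
            lo = mid
        else:
            hi = mid
    return 2 * lo
```

With a 13.5 nm wavelength, `fwhm(13.5, 0.33)` lands near the ~21 nm figure and `fwhm(13.5, 0.55, 0.2)` near the ~12.5 nm figure quoted above.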

As indicated earlier, spot sizes smaller than the point spread function are possible only for dense pitches, with a lower pitch limit of 0.7 wavelength/NA. For random logic arrangements on interconnects, however, pitches have to be much larger, and so line cuts, for example, are still limited by the point spread function. On current 0.33 NA EUV systems, for example, it can be seen that the point spread function already covers popularly targeted line pitches in the 28-36 nm range. So, in fact, the edge placement from overlay and CD targeting, compounded by the spread of the secondary electrons [6,7], looks prohibitive. No wonder, then, that SALELE (Self-Aligned Litho-Etch-Litho-Etch) has been the default technique, even for EUV [8-11].

References

[1] https://en.wikipedia.org/wiki/Point_spread_function

[2] https://en.wikipedia.org/wiki/Airy_disk

[3] Y-T. Chen et al., Proc. SPIE 5853 (2005).

[4] A Simple Model for Sharpness in Digital Cameras – Defocus, https://www.strollswithmydog.com/a-simple-model-for-sharpness-in-digital-cameras-defocus/

[5] J. P. Cain, P. Naulleau, and C. Spanos, Proc. SPIE 5751 (2005).

[6] Y. Kandel et al., Proc. SPIE 10143, 101430B (2017).

[7] F. Chen, Secondary Electron Blur Randomness as the Origin of EUV Stochastic Defects, https://www.linkedin.com/pulse/secondary-electron-blur-randomness-origin-euv-stochastic-chen

[8] F. Chen, SALELE Double Patterning for 7nm and 5nm Nodes, https://www.linkedin.com/pulse/salele-double-patterning-7nm-5nm-nodes-frederick-chen

[9] R. Venkatesan et al., Proc. SPIE 12292, 1229202 (2022).

[10] Q. Lin et al. Proc. SPIE 11327, 113270X (2020).

[11] Y. Drissi et al., “SALELE process from theory to fabrication,” Proc. SPIE 10962, 109620V (2019).

This article first appeared in LinkedIn Pulse: Lithography Resolution Limits: The Point Spread Function

Also Read:

Resolution vs. Die Size Tradeoff Due to EUV Pupil Rotation

Multiple Monopole Exposures: The Correct Way to Tame Aberrations in EUV Lithography?

Application-Specific Lithography: Sub-0.0013 um2 DRAM Storage Node Patterning


Checklist to Ensure Silicon Interposers Don’t Kill Your Design

Checklist to Ensure Silicon Interposers Don’t Kill Your Design
by Dr. Lang Lin on 03-20-2023 at 10:00 am


Traditional methods of chip design and packaging are running out of steam to fulfill growing demands for lower power, faster data rates, and higher integration density. Designers across many industries – like 5G, AI/ML, autonomous vehicles, and high-performance computing – are striving to adopt 3D semiconductor technologies that promise to be the solution. The tremendous growth in 2.5D and 3D IC packaging technology has been driven by high-profile early adopters delivering high-bandwidth, low-latency products.


Benefits of 2.5D and 3D Technology

This trending technology meets the demand to enclose all functionality in one sophisticated IC package, enabling engineers to meet aggressive high-speed and miniaturization goals. In 3D-IC packaging, dies are stacked vertically on top of each other (e.g., HBM), while 2.5D packaging places bare dies (chiplets) next to each other, connected through a silicon interposer and through-silicon vias (TSVs). This makes for a much smaller footprint and eliminates bulky interconnects and packaging that can significantly impede data-rate and latency performance. Heterogeneous integration is another benefit of silicon interposers, enabling engineers to place memory and logic built in different silicon technologies in the same package, reducing unnecessary delays and power consumption. Integrating different chips, each designed in its most appropriate technology node, provides better performance, lower cost, and improved time to market when compared to monolithic SoC designs on advanced technology nodes, which take longer to design and validate.

The implementation of silicon interposers allows for more configurable system architectures, but it also poses additional multiphysics challenges – like thermal expansion and electromagnetic interference – along with new design and production issues.

Challenges of 2.5D and 3D Design

The silicon interposer is a successful and booming advancement in IC packaging technology, and it is steadily displacing traditional methods of chip design. Combining different functional blocks and memory within the same package provides high speed and improved performance for advanced designs. But interposers impose unfamiliar challenges: designers must understand the power-integrity, thermal-integrity, and signal-integrity interactions between the chiplet dies, the interposer, and the package. System simulation becomes integral to achieving the expected performance of the IC package.

An interposer acts as a passive layer with a coefficient of thermal expansion that matches that of the chiplets, which explains the popularity of silicon for interposers. Nevertheless, this doesn't eliminate the possibility of thermal hot spots and Joule-heating problems within the design. The interposer is in turn mounted on an ordinary substrate with a different thermal expansion coefficient, which contributes to increased mechanical stress and interposer warpage. That's where the designer should worry about the reliability of the system, as this stress can easily crack some of the thousands of microbump connections.

Silicon interposers provide significantly denser I/O connectivity, allowing higher bandwidth and better use of die space. But as we know, nothing comes for free. Multiple IPs in the same package require multiple power sources, constituting a complex power distribution network (PDN) within the package itself. The PDN runs throughout the entire package and is always vulnerable to power noise, leading to power integrity problems. Analyzing the voltage distribution and current signature of every chip in an IC system with an interposer is important for ensuring power integrity. Routing considerable amounts of power through the vertical connections between elements – TSVs and C4 bumps, as well as tiny micro-bumps and hybrid-bonding connections – creates further power-integrity problems. Last but not least, many high-speed signals are routed among the chips and interposer, and these can easily fall victim to electromagnetic coupling and crosstalk. Electromagnetic signal integrity, including for high-speed digital signals, must be on your verification list when designing an IC package with an interposer. This technology is a cost-effective, high-density, and power-efficient technique, but it remains susceptible to EM interference and to thermal, signal, and power integrity issues.

Figure2: Block diagram of Multiphysics analysis of multi-die system

Power Integrity:  

Power is the most critical aspect of any IC package design. Everything around the package design is driven by the power consumed by the chips within it. Every chip has a different power requirement, which drives the requirements for the power delivery network. The PDN also has a critical role in maintaining the power integrity of the IC package by minimizing voltage drop (IR drop) and avoiding electromigration failures. The best way to achieve power integrity is to optimize the PDN by simulating the fluctuating current at each IC and the parasitics of the passive elements that make up the PDN. This becomes more complicated with an interposer, since the chips are connected through it: power and ground rails routed through the interposer impose new challenges for power-integrity analysis. And that is not the only issue – electromigration problems come hand in hand with power-integrity problems. The current density in each piece of geometry must be modeled and kept below the maximum limit supplied by the foundry. Joule heating of the microbumps and wires has a significant impact on the maximum allowable current density, which implies a degree of thermal simulation for maximum accuracy.
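As a toy illustration of the kind of bookkeeping a static IR-drop analysis performs (a drastically simplified one-dimensional PDN with invented values, not the Ansys flow), consider a supply rail feeding a row of current sinks:

```python
# Toy static IR-drop along a one-dimensional resistive PDN: a 1 V supply feeds
# N_TAPS current sinks through identical series segments. All values invented.
N_TAPS = 10
R_SEG = 0.005   # ohms per PDN segment (assumed)
I_TAP = 0.2     # amps drawn at each tap (assumed)
VDD = 1.0       # volts at the package bump

voltages = []
v = VDD
for k in range(N_TAPS):
    # The current in segment k is the sum of all downstream tap currents.
    i_seg = I_TAP * (N_TAPS - k)
    v -= i_seg * R_SEG
    voltages.append(v)

worst_drop_mv = (VDD - voltages[-1]) * 1e3
print(f"worst-case IR drop: {worst_drop_mv:.1f} mV")  # farthest tap sees the largest drop
```

Even this crude model shows why the farthest tap matters: segment currents accumulate toward the supply, so the drop grows faster than linearly with distance. Real tools solve the same kind of network with millions of extracted parasitic elements.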

Ansys RedHawk-SC and Totem can extract accurate chip power models to capture the power behavior of chips in a full-system context. If you don't yet have the chip layout at the prototyping stage, create an estimated chip power model (CPM) using the Ansys RedHawk tools to anticipate the physics early. Thermal and power analysis shouldn't be a signoff step but an ongoing process, because last-minute design changes might not be feasible.

Figure3: Power Integrity Analysis using Ansys Redhawk-SC Electrothermal

Thermal Integrity:  It is extremely important to understand the thermal distribution in an interposer design to maintain thermal integrity. Power and signal integrity alone might not save your design from thermal runaway or local thermal failure. With multiple chips close together in a 2.5D package, a hotter chiplet might heat up nearby chiplets and change their power profiles, possibly leading to yet more heating. Heat is dissipated from the chips to the interposer and further through TSVs to the substrate, which heats up the entire package. To avoid stress and warpage due to differential thermal expansion, designers should understand the thermal profile of every chip and the interposer in the design. These thermal maps give insight into the distribution across the IC package, allowing the designer to determine the thermal coupling among chips through the interposer.
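A minimal sketch of the thermal-coupling effect described above, using a two-node steady-state thermal resistance network (all values invented for illustration; real flows use detailed chip thermal models):

```python
import numpy as np

# Two chiplets on a shared interposer, modeled as a 2-node thermal network:
# each node has a resistance to ambient, plus a coupling resistance between
# the nodes through the interposer. All values are invented for illustration.
P = np.array([3.0, 1.0])        # W dissipated by chiplets A and B
R_AMB = np.array([10.0, 10.0])  # K/W from each chiplet to ambient
R_COUPLE = 20.0                 # K/W between chiplets through the interposer
T_AMB = 25.0                    # deg C

# Heat balance at each node: P_i = (T_i - T_AMB)/R_AMB_i + (T_i - T_j)/R_COUPLE
G = np.array([[1/R_AMB[0] + 1/R_COUPLE, -1/R_COUPLE],
              [-1/R_COUPLE, 1/R_AMB[1] + 1/R_COUPLE]])
T = np.linalg.solve(G, P + T_AMB / R_AMB)

# Without coupling, chiplet B would sit at 35 C; the hotter neighbor pulls it to 40 C.
print(T)
```

The point of the sketch: the cooler chiplet's temperature depends on its neighbor's power, which is exactly the cross-chip coupling a full-package thermal simulation has to capture.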

Power dissipation is, of course, driven by activity. Ansys PowerArtist is an RTL power analysis tool, integrated with RedHawk-SC Electrothermal, that generates accurate chip thermal models (CTMs) based on ultra-long, realistic activity vectors produced by hardware emulators. By assembling the entire 3D-IC system – chip CTMs, interposer, package, and heat sink – Ansys RedHawk-SC Electrothermal gives the designer an accurate thermal distribution and an understanding of the thermal coupling between chiplets and the interposer. Monitoring temperature gradients needs to start early in the IC package design – the sooner, the better. The complete front-to-back flow gives clear insight into the thermal distribution over time for the entire package, making your design more reliable.

Figure 4: Different parameter extractions for Silicon Interposer Design

Signal Integrity:  In the IC package, high-speed signals are transmitted from one die to another through the interposer at very high bit rates. The signals are closely spaced and relatively long (compared to on-chip routing), which makes them vulnerable to electromagnetic interference (EMI) and coupling. Even digital designers need to follow high-speed design guidelines to maintain signal integrity. The only way to control EMC/EMI is with fast, high-capacity electromagnetic solvers that extract a coupled electromagnetic model including the chiplets, the signal routing through the interposer, and system coupling effects. With Ansys RaptorH and HFSS, it is easy to analyze all these elements in a single, large model and meet the desired goal of a clean eye diagram. HFSS and Ansys Q3D can also be used to extract RLC parasitics, provide visualization of the electromagnetic fields, and scale up to system-level extraction beyond the interposer.
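As a back-of-the-envelope illustration of why closely spaced interposer routing invites crosstalk (not the Ansys solver flow; all per-unit-length values are invented), the classic weak-coupling estimate for the backward (near-end) crosstalk coefficient of a pair of coupled lines:

```python
# Backward (near-end) crosstalk coefficient for two weakly coupled lines:
# kb ~ (Cm/C + Lm/L) / 4, where Cm/Lm are the mutual capacitance/inductance
# per unit length. All per-unit-length values below are invented.
C = 100e-12   # F/m self-capacitance (assumed)
Cm = 5e-12    # F/m mutual capacitance (assumed)
L = 300e-9    # H/m self-inductance (assumed)
Lm = 30e-9    # H/m mutual inductance (assumed)

kb = 0.25 * (Cm / C + Lm / L)
print(f"backward crosstalk coefficient: {kb:.4f}")
```

Even a few percent of mutual coupling per unit length yields a noticeable fraction of the aggressor swing on the victim, which is why field-solver extraction of the full coupled model, rather than isolated-net analysis, is needed for interposer routing.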

Learn more about challenges and solutions for 3D-IC and interposers.

Semiconductor Design and Simulation Software | Ansys

Ansys RedHawk-SC Electrothermal Datasheet

Thermal Integrity Challenges and Solutions of Silicon Interposer Design | Ansys

Also Read:

HFSS Leads the Way with Exponential Innovation

DesignCon 2023 Panel Photonics future: the vision, the challenge, and the path to infinity & beyond!

Exponential Innovation: HFSS


Samtec Lights Up MemCon

Samtec Lights Up MemCon
by Mike Gianfagna on 03-20-2023 at 6:00 am

Samtec Lights Up MemCon

Every conference and trade show that Samtec attends is better for the experience. Samtec has a way of bringing exciting and innovative demos and technical presentations to any event they attend. I personally have fond memories of exhibiting next to Samtec at an early AI Hardware Summit at the Computer History Museum in Mountain View, CA. At the time I was at eSilicon, and we had developed an eye-popping long-reach communication demo with our SerDes and Samtec's cables. We ran that demo with a cable that connected our two booths – very long reach in action. I don't think I've ever seen a demo span more than one trade show booth since then. The subject of this post is Samtec's attendance at MemCon, which is also being held at the Computer History Museum. Samtec overall, and Matt Burns, technical marketing manager, in particular, will be working their magic on March 28 and 29 this year. Let's see how Samtec lights up MemCon.

MemCon, Then and Now

Thanks to Paul McLellan and his Breakfast Bytes blog, I was able to get some early history of MemCon. Those who have been at the semiconductor and EDA game for a while will remember Denali, an early IP company that focused on memory models. Denali decided to get some more visibility for the company and its offerings, so around 2001 they held the first MemCon at the Hyatt Hotel in the Bay Area. So, this was the birth of the show. The historians among us will also fondly remember the Denali Party, probably the best social event ever held at the Design Automation Conference.

Today, MemCon is managed by Kisaco Research. I have some personal experience with this organization. While at eSilicon, we were one of the early participants at the previously mentioned AI Hardware Summit. Under their leadership, Kisaco Research grew this event from humble beginnings to one of the premier AI events in the industry – all from a base in London. Their reach is substantial, and they are working their magic for MemCon as well.

Expected audience at MemCon

Memories have become a critical enabling technology for many forward-looking applications. Some of the areas of focus for MemCon include AI/ML, HPC, datacenter and genomics. The list is actually much longer. The expected audience at MemCon covers a lot of ground. This is clearly an important conference – registration information is coming.

Samtec at MemCon

At its core, Samtec provides high-performance interconnect solutions for customers and partners. Samtec's high-speed board-to-board connectors, high-speed cables, mid-board and panel optics, precision RF, flexible stacking, and micro/rugged components route data from a bare die to an interface 100 meters away, and to all interconnect points in between. For the memory and storage sector, niche applications require niche interconnect solutions, and that is Samtec's specialty.

You can learn more about what Samtec does on their SemiWiki page here.

If you’re headed to MemCon, definitely stop by the Samtec booth. You will find talented, engaging staff and impressive demonstrations. Samtec’s own Matt Burns will also be presenting an informative talk on Wednesday March 29 at MemCon:

2:10 PM – 2:35 PM

How Flexible, Scalable High-Performance Interconnect Extends the Reach of Next Generation Memory Architectures

So, this is how Samtec lights up MemCon. If you haven’t registered yet for the show, you can register here. Use SAMTECGOLD15 at check-out to save 15%.


Podcast EP148: The Synopsys View of High-Performance Communication and the Role of Chiplets

Podcast EP148: The Synopsys View of High-Performance Communication and the Role of Chiplets
by Daniel Nenni on 03-17-2023 at 10:00 am

Dan is joined by John Swanson, who is the HPC Controller & Datapath Product Line Manager in the Synopsys Solutions Group. John has worked in the development and deployment of verification, integration, and implementation tools, IP, standards, and methodologies used in IP-based design for over 25 years at Synopsys.

Dan explores the future of high-performance computing with John: what is required for success, and what challenges designers and applications face on the path to 1.6T Ethernet leveraging 224 GbE, including FEC, cabling, and standardization.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.