
EUV Resist Absorption Impact on Stochastic Defects

by Fred Chen on 04-03-2022 at 10:00 am


Stochastic defects continue to draw attention in the area of EUV lithography. It is now widely recognized that stochastic issues not only come from photon shot noise due to low (absorbed) EUV photon density, but also the resist material and process factors [1-4].

It stands to reason that resist absorption of EUV light, which is depth-dependent, will also have an important bearing on the occurrence of stochastic defects. EUV resists have so far been mainly chemically amplified, with a typical absorption coefficient of 5/um [5], meaning that only exp(-5) ~ 0.67% of the light is transmitted by a 1 um thick layer of the resist. A layer only 20 nm thick, on the other hand, transmits about 90% of the light, i.e., only 10% is absorbed. For a 40 nm thick resist layer, it is instructive to compare the absorption in the top half vs. the bottom half (Figure 1).
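As a quick sanity check on these numbers, the Beer-Lambert transmission exp(-alpha*t) can be evaluated directly. This is a minimal Python sketch using the cited 5/um coefficient for chemically amplified resist:

```python
import math

def transmitted_fraction(alpha_per_um, thickness_nm):
    """Beer-Lambert transmission exp(-alpha * t) through a resist layer."""
    return math.exp(-alpha_per_um * thickness_nm / 1000.0)

# Chemically amplified EUV resist, absorption coefficient ~5/um [5]
t_1um = transmitted_fraction(5, 1000)   # ~0.0067, i.e. ~0.67% transmitted by 1 um
t_20nm = transmitted_fraction(5, 20)    # ~0.90, i.e. ~10% absorbed by 20 nm

# For a 40 nm layer, split the absorption between top and bottom 20 nm halves
abs_top = 1 - t_20nm                    # ~9.5% of incident light absorbed on top
abs_bottom = t_20nm * (1 - t_20nm)      # ~8.6% absorbed at the bottom (less light arrives)
```

The bottom half absorbs slightly less simply because less light reaches it, which is the depth dependence discussed below.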

Figure 1. EUV photon absorption in the top half (left) vs. the bottom half (right) of a chemically amplified resist. The threshold to print here is taken to be 24 photons absorbed per 2 nm x 2 nm pixel. The assumed dose (averaged over the displayed 80 nm x 80 nm area) is 60 mJ/cm2. The oval outline is for reference to visually assist observing the stochastic absorption profile.

The bottom half receives less light than the top (about 90% of the incident intensity), since some light has already been absorbed above it, and it in turn absorbs a small fraction (about 10%) of what reaches it. Consequently, areas in the bottom half of the resist layer are more likely to fall below the nominal threshold photon absorption density to print. This leads to a higher probability of stochastic defects forming from underexposure in the lower half of the resist.
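This underexposure tendency can be illustrated with a toy Monte Carlo model. The sketch below is not the simulation behind Figure 1: it assumes a uniform (unpatterned) 60 mJ/cm2 exposure and pure Poisson photon statistics per 2 nm x 2 nm pixel, with an illustrative threshold near the mean count rather than the 24-photon print threshold of the figures.

```python
import numpy as np

rng = np.random.default_rng(0)

EV = 1.602e-19                    # J per eV
photon_J = 92 * EV                # ~13.5 nm EUV photon energy (~92 eV)
dose_J_nm2 = 60e-3 / 1e14         # 60 mJ/cm^2 converted to J/nm^2
pixel_nm2 = 2 * 2                 # 2 nm x 2 nm pixel

incident = dose_J_nm2 * pixel_nm2 / photon_J   # ~163 photons incident per pixel

alpha = 5e-3                      # 5/um absorption coefficient, in 1/nm
frac_top = 1 - np.exp(-alpha * 20)             # fraction absorbed in top 20 nm
frac_bot = np.exp(-alpha * 20) * frac_top      # fraction absorbed in bottom 20 nm

# Poisson-sampled absorbed photons over an 80 nm x 80 nm field (40 x 40 pixels)
top = rng.poisson(incident * frac_top, size=(40, 40))
bot = rng.poisson(incident * frac_bot, size=(40, 40))

threshold = 14                    # illustrative threshold near the mean counts
frac_below_top = (top < threshold).mean()
frac_below_bot = (bot < threshold).mean()      # larger: more underexposed pixels
```

Even with only ~10% difference in mean absorbed photons, the bottom half shows a noticeably larger fraction of sub-threshold pixels, which is the underexposure mechanism described above.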

Given that lower resist absorption aggravates stochastic effects, we would expect the higher absorption of metal oxide resists [5] to fare much better. Figure 2, however, shows something significantly different.

Figure 2. EUV photon absorption in the top half (left) vs. the bottom half (right) of a metal oxide resist. The threshold to print here is taken to be 24 photons absorbed per 2 nm x 2 nm pixel. The assumed dose (averaged over the displayed 80 nm x 80 nm area) is 60 mJ/cm2. The oval outline is for reference to visually assist observing the stochastic absorption profile.

The lower half of the resist layer receives significantly less light to begin with, so the absorbed photons ultimately define a smaller region at the bottom of the resist. Since the metal oxide resist is a negative-tone resist, the photon absorption determines where the resist remains after development. This means toppling of resist features (or missing resist features) from a narrower bottom than top (‘undercutting’) can be a stochastic defect specific to negative-tone resists, particularly metal oxide resists.

In common use, DUV photoresists never have this problem, even with a lower dose and an absorption coefficient on the order of 1/um (Figure 3), because the resist is thicker and the pixels are effectively larger.

Figure 3. ArF photon absorption in the top half (left) vs. the bottom half (right) of a chemically amplified resist. The threshold to print here is taken to be 1200 photons absorbed per 7 nm x 7 nm pixel. The assumed dose (averaged over the displayed 280 nm x 280 nm area) is 30 mJ/cm2. The oval outline is for reference to visually assist observing the absorption profile.

A fairer EUV vs ArF comparison would use realistically difficult scenarios. Figure 4 compares the photons per pixel absorbed in the bottom half of the resist for 40 nm square pitch EUV (0.5 nm x 0.5 nm pixel) with 40 nm metal oxide resist thickness vs. 80 nm square pitch ArF (1 nm x 1 nm pixel) with 100 nm chemically amplified resist thickness, using a negative tone hole pattern. The EUV image assumes an ideal binary mask with quadrupole illumination, while the ArF image assumes a 6% attenuated phase shift mask with cross dipole illumination.

Figure 4. EUV (left) vs. ArF (right) photon absorption in the bottom half of the resist. The EUV resist thickness is 40 nm, while the ArF resist thickness is 100 nm. The pixel size is 0.5 nm x 0.5 nm for the EUV case, and 1 nm x 1 nm for the ArF case. The threshold to print here is taken to be 1.6 photons/pixel for EUV and 3.3 photons/pixel for ArF, to target the half-pitch. The assumed dose (averaged over the pitch) is 60 mJ/cm2.
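The per-pixel thresholds in Figure 4 reflect the very different photon counting statistics at the two wavelengths: an ArF photon (193 nm, ~6.4 eV) carries roughly 14x less energy than an EUV photon (13.5 nm, ~92 eV), so at the same dose many more ArF photons arrive per unit area. A quick sketch of the mean incident counts (absorbed counts are lower still, depending on resist absorption and depth):

```python
def photons_per_pixel(dose_mJ_cm2, pixel_nm, wavelength_nm):
    """Mean photons incident on a square pixel at a given dose."""
    photon_J = (1239.84 / wavelength_nm) * 1.602e-19   # E = hc/lambda, eV -> J
    dose_J_nm2 = dose_mJ_cm2 * 1e-3 / 1e14             # mJ/cm^2 -> J/nm^2
    return dose_J_nm2 * pixel_nm**2 / photon_J

euv = photons_per_pixel(60, 0.5, 13.5)   # ~10 photons per 0.5 nm x 0.5 nm pixel
arf = photons_per_pixel(60, 1.0, 193)    # ~580 photons per 1 nm x 1 nm pixel
```

With tens of photons per pixel at most, the EUV image is far more vulnerable to Poisson noise than the ArF image, even before mask contrast is considered.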

At the finer pixel size, the roughness of the edge becomes more apparent at both wavelengths. However, the EUV case has many spots in the background where the exposure is sub-threshold, which can lead to resist removal during development, whereas the ArF case is free of such spots. The reason for this has to do with the higher contrast of phase-shift masks compared to binary masks: in the binary case, the bright background intensity is closer to that of the central dark spot (less contrast), giving the background noise variation more opportunity to reach levels comparable to the dark spot.

Acid generation (in chemically amplified resists) and electron release (following EUV exposure) lead to smoothing effects, which are simulated here using 4x Gaussian smoothing (sigma = 2 pixels). Since repeated Gaussian blurs compose in quadrature, this gives an effective resist blur (sigma) of 4 times the pixel size, i.e., 2 nm for the EUV case and 4 nm for the ArF case.
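The smoothing step can be sketched with scipy. This is illustrative only, using a Poisson-noise stand-in image rather than the actual aerial-image simulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
# Stand-in latent image: Poisson-distributed absorbed-photon counts per pixel
latent = rng.poisson(10.0, size=(128, 128)).astype(float)

smoothed = latent.copy()
for _ in range(4):                          # 4x Gaussian smoothing
    smoothed = gaussian_filter(smoothed, sigma=2.0)

# Repeated Gaussians compose in quadrature: effective sigma = sqrt(4 * 2**2)
# = 4 pixels, i.e. 2 nm at 0.5 nm/pixel (EUV) or 4 nm at 1 nm/pixel (ArF)
```

The mean count is preserved while the pixel-to-pixel variance drops sharply, which is why smoothing removes the isolated sub-threshold spots but leaves residual edge roughness.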

Figure 5. EUV (left) vs. ArF (right) latent image in the bottom half of the resist, after 4x Gaussian smoothing (sigma=2 pixels). The EUV resist thickness is 40 nm, while the ArF resist thickness is 100 nm. The pixel size is 0.5 nm x 0.5 nm for the EUV case, and 1 nm x 1 nm for the ArF case. The threshold to print here is taken to target the half-pitch. The assumed dose (averaged over the pitch) is 60 mJ/cm2.

With smoothing, the general spottiness of the image is removed, leaving residual edge roughness for both the ArF and EUV cases (Figure 5). However, since the EUV case had more random background counts, its edge looks relatively rougher, and there is a tendency for defects to occur at and near the edge. These can also impact the effective edge placement.

To conclude, a higher absorption coefficient does not by itself prevent the stochastic defects occurring near the bottom of the resist layer, so a higher dose to compensate for the reduced light reaching the bottom is still necessary.

References

[1] https://www.jstage.jst.go.jp/article/photopolymer/31/5/31_651/_pdf

[2] http://www.lithoguru.com/scientist/litho_papers/2019_Metrics%20for%20stochastic%20scaling%20in%20EUV.pdf

[3] https://www.spiedigitallibrary.org/journals/journal-of-micro-nanopatterning-materials-and-metrology/volume-20/issue-01/014603/Contribution-of-EUV-resist-counting-statistics-to-stochastic-printing-failures/10.1117/1.JMM.20.1.014603.full?SSO=1

[4] http://euvlsymposium.lbl.gov/pdf/2014/6b1e6ae745cd40aba5940af61c0c908e.pdf

[5] http://euvlsymposium.lbl.gov/pdf/2015/Posters/P-RE-06_Fallica.pdf

This article was first published in LinkedIn Pulse: EUV Resist Absorption Impact on Stochastic Defects

Also read:

Etch Pitch Doubling Requirement for Cut-Friendly Track Metal Layouts: Escaping Lithography Wavelength Dependence

Horizontal, Vertical, and Slanted Line Shadowing Across Slit in Low-NA and High-NA EUV Lithography Systems

Pattern Shifts Induced by Dipole-Illuminated EUV Masks


A Blanche DuBois Approach Won’t Resolve Traffic Trouble

by Roger C. Lanctot on 04-03-2022 at 6:00 am


Near the end of Tennessee Williams’ “A Streetcar Named Desire” the Blanche DuBois character, who has suffered a mental breakdown following an implied rape, tells the doctor and matron who have come to take her to the hospital: “Whoever you are – I have always depended on the kindness of strangers.” Sadly, this is the same mentality of municipalities turning to Waze to resolve traffic troubles.

More than 3,000 municipalities around the world – according to Waze – have turned to the popular navigation app to better understand their own traffic woes and resolve challenges. The irony here, of course, is that Waze often CREATES traffic problems by routing vehicles through neighborhoods abutting major highways – or even navigating users into dangerous areas such as favelas in Brazil, occupied territories in Israel, or forest fire zones in California.

Waze’s traffic insights are derived from crowd-sourced “probe” data from users of the app. The location data from users’ mobile phones traces the pace and direction of travel and users can report road hazards or the location of law enforcement vehicles for the benefit of other drivers.

The result – when calibrated to convert real-time data into predictive traffic models – is a compelling navigation tool that has put pressure on car makers that offer built-in navigation systems that often use outdated maps. Waze claims 140M users worldwide and that kind of ubiquity is hard to ignore.

It’s also hard to ignore the fact that Waze has yet to define a profit-generating business model. The sheer desperation of this effort is reflected in the increasingly distracting array of display advertising that shows up in the app while in use. Oddly, you can’t enter a destination into the app while driving, but the app can try to distract you with impossible-to-read advertisements while it is navigating.

The source of Waze’s strength, though, is also its greatest weakness. Waze has no reliable means to validate the user inputs regarding hazards and other observations – and on multiple occasions users have punked Waze into re-routing drivers away from particular neighborhoods or locations.

Waze has also been known to use its negative impact on local traffic patterns as a door opener with local municipalities. Waze creates the traffic problem – blamelessly! – and then begins collaborating with local traffic authorities to “hack” the app to overcome the routing snafu that is riling citizens.

In the end, Waze is still relying on the “kindness” of strangers, and the company runs its Waze for Cities program almost like a protection racket – mesmerizing cities into relying on Waze to fix their traffic problems by using it as a communication conduit. In one such case, officials in Sandy Springs, Georgia, joined forces with Waze to communicate confusing traffic patterns to drivers through the app. On its face this isn’t such a horrible idea, but failing to communicate the same information via 511 services, embedded navigation systems from Telenav, NNG, TomTom, and HERE, or even radio stations drives even more users to Waze.

The strangest thing about cities that have turned to Waze as a partner and a communications tool is that most cities have their own traffic information resources. In fact, cities have access to traffic cameras – thousands of them – mostly trained on predictable traffic hotspots.

In the U.S., the leading provider of nationwide traffic camera information is TrafficLand. TrafficLand is responsible for nearly half of all server-side traffic camera installations and is able to deliver still images or streaming video on demand. The company also performs digital analytics on the images it gathers.

With a proper front-end interface, built-in vehicle navigation systems could tap into TrafficLand content to allow drivers to make better-informed navigation decisions. There’d be no need to “trust” Waze, in this instance. Still images or video would confirm the reality of traffic conditions on the road ahead.

Better yet, what about auto makers as a source of front-facing camera data? Mercedes-Benz and the Netherlands’ Ministry of Infrastructure and Water Management have announced a two-year project whereby Mercedes-Benz will share anonymized sensor data from its vehicles with the agency for the identification of road safety hotspots and maintenance issues. The key difference here is the reliance on vehicle sensors – which include cameras – that generate verifiable and reliable inputs.

Between fixed cameras (TrafficLand) and vehicle-mounted cameras, local municipalities ought to be able to identify and resolve their traffic issues – without the assistance of Waze and its user-generated inputs. Cities can then share their data resources with any and all traffic information and navigation providers – not just Waze.

In fact, every city should have a data exchange program in place in which auto makers could participate. That could be a job for Telenav or NNG or HERE or TomTom. But please, not Waze. We shouldn’t rely entirely on strangers for our traffic insights when we can observe reality directly in real time.

Also read:

Auto Safety – A Dickensian Tale

No Traffic at the Crossroads

GM’s Super Duper Cruise


Cadence and DesignCon – Workflows and SI/PI Analysis

by Daniel Nenni on 04-02-2022 at 6:00 am


DesignCon 2022 is back to a live conference, from Tuesday, April 5th through Thursday, April 7th, at the Santa Clara Convention Center.

Introduction

DesignCon is a unique gathering in our industry.  Its roots incorporated a focus on complex design and analysis requirements of (long-reach) high-speed interfaces.  Technical presentations and vendor exhibits on the conference Expo floor spanned a wide variety of topics – e.g., SerDes Tx/Rx design methods;  PCB interconnect and dielectric materials;  cables and connectors for high-speed signals; EDA tools for PCB design, plus extraction and simulation of (strongly frequency-dependent) interconnect parameters.

World-renowned signal integrity (SI) and power integrity (PI) experts offered their insights into how to lay out PCB busses and place power decoupling and DC blocking capacitors for improved SI/PI fidelity.

And, the conference Expo highlighted the latest in technical equipment for high-speed interface analysis (on reference boards), from oscilloscopes and signal generators to bit-error rate testers to spectrum analyzers and vector network analyzers.

As the complexity of system designs has grown, and as high-speed interface design has expanded to encompass short, medium, and long-range topologies, so too has DesignCon expanded.

The emphasis on the materials and properties of boards, cables, and connectors is still strong, to be sure.  The introduction of 56Gbps and 112Gbps datarate standards (and pulse-amplitude modulated signal levels) has added to the focus on model extraction accuracy.  The transition from simple IBIS Tx/Rx electrical elements to more complex IBIS-AMI functional plus electrical models has enabled comprehensive “end-to-end” simulations.

In addition to the evolution of these more demanding tasks, there are three key focus areas evident in this year’s DesignCon program.

  • advanced 2.5D/3D packaging technology necessitates “full 3D” electromagnetic model extraction

Traditional rules-of-thumb used for early PCB layout definition of high-speed interface signals no longer apply to today’s system-in-package integration.  Insertion loss guidelines in “dB per inch per GHz” have no significance in a 2.5D package with heterogeneous functionality, spanning very wide high bandwidth memory (HBM) busses to source-synchronous short-reach interfaces between die.  The nature of this advanced packaging technology requires a full 3D extraction model for accurate analysis of insertion, reflection, and crosstalk behavior.

  • tight integration between design and analysis tools is absolutely required

Traditional PCB and backplane-centric high-speed interface design didn’t offer many degrees of freedom – e.g., board layer materials and thicknesses, via topologies and types (through vias or backdrilled vias, for impedance control).

The development of a 2.5D package requires focus on interface planning, as an integral part of the initial design flow.  The sheer number of interface connections and the disparity in their clocking and IL/RL/Xtalk requirements necessitates a tight design and analysis optimization loop.  A PCB may be able to accommodate the insertion of a high-speed repeater (re-driver or re-timer) component later in the design cycle to address a failing signal spec.  A 2.5D package design offers no such flexibility – it has to be “first time right”.  The design and analysis workflow must offer fast, accurate results. In addition, there is typically very limited SI/PI expertise available to design companies – these workflows need to be available to many designers, based on a familiar EDA platform environment.

  • IP offerings are critical to accelerating product introductions integrating advanced interface standards

The pace at which new interface definitions are being introduced is rapid.  DesignCon has expanded to provide system developers with information on (silicon-proven, qualified) IP available for SoC integration.

Cadence at DesignCon

I recently had the opportunity to chat briefly with Sherry Hess at Cadence, to learn what new technologies Cadence will be presenting at this year’s DesignCon.  Indeed, their focus is on new workflows, improved 2.5D/3D modeling accuracy, and advanced IP.

Workflows      

Here are some of the workflow-related presentation sessions.

  • NO Exit Ramps Needed – Cadence’s System Design Workflow Delivers Seamless In-Design Analysis, Reducing Turnaround Time and Minimizing Risk
  • Mainstream Signal Integrity Workflow for PCI 6.0 PAM4 Signaling
  • Amphenol: 112G Connector and Board Design/Analysis Workflow
  • Meta (Facebook): MIPI-C Board and Camera Interface Design/Analysis Workflow
  • Microsoft: Interconnect Optimization of Wearables with an In-Design Analysis Workflow

Note that these presentations include collaborations with various other firms using these workflows, from component providers to systems companies.

Clarity 3D Solver Enhancements

The parametric model extraction of a 3D structure involves a tradeoff in accuracy versus computational resources.  The complex nature of 3D geometries requires an intricate finite element mesh, whether for a system-in-package or multiple packages on a combination of rigid and/or flex substrates.  The electromagnetic solver for the mesh is computationally demanding.

At DesignCon, Cadence will be demonstrating significant enhancements to their Clarity 3D solver, including:

  • a new distributed meshing algorithm, with significant reduction in simulation runtimes
  • a new machine-learning based algorithm for optimizing a “sweep” of design parameters
  • workflow integration with Cadence Allegro (and Allegro Package Designer), Integrity 3D-IC, and Virtuoso RF platforms

IP Strategy

It wasn’t long ago that 28Gbps was the emerging standard interface.  Cadence will also be presenting their IP development for 224Gbps.

  • The Future of 224G Serial Links

Appended below are DesignCon links of interest – at a minimum, if you are involved in functional and/or electrical interface design, from system-in-package to long-reach signaling, you should definitely get a FREE Expo pass.  And, be sure to stop by the Cadence Expo booth for product demonstrations and more technical information.

Cadence sessions at DesignCon:

https://events.cadence.com/event/df7b2870-50d2-48bd-846b-bd8e3c2ea7b2/summary

Cadence Clarity 3D Solver:

https://www.cadence.com/en_US/home/tools/system-analysis/em-solver/clarity-3d-solver.html

DesignCon 2022 Registration:

https://www.designcon.com/en/conference/passes-pricing.html

Also read:

Symbolic Trojan Detection. Innovation in Verification

Leveraging Virtual Platforms to Shift-Left Software Development and System Verification

Using a GPU to Speed Up PCB Layout Editing


Podcast EP69: Ayar Labs and the Future of Optical I/O

by Daniel Nenni on 04-01-2022 at 10:00 am

Dan is joined by Hugo Saleh, senior VP of commercial operations and managing director of Ayar Labs, UK. Hugo discusses the technology and application of optical I/O, its use and impact now and in the future.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


DUV, EUV now PUV Next gen Litho and Materials Shortages worsen supply chain

by Robert Maire on 04-01-2022 at 8:00 am


-New PUV light source will push litho into Angstrom Era
-Rare earth elements shortages add to supply chain woes
-Could strategic wafer reserve releases lower memory pricing
-Can we cut off/turn off Russian access to chip equipment?

DUV, EUV and now “PUV” to become next generation lithography

Lithography is the locomotive that pulls the entire semiconductor industry along the Moore’s Law curve to ever smaller feature sizes by using ever smaller wavelengths of light to print smaller and smaller transistors. Light is the paintbrush, and using shorter wavelengths is akin to using finer and finer paintbrushes.

The industry has gone from using “G line” and “I line” visible light to DUV, ultraviolet light generated by KrF (248nm) and ArF (193nm) laser sources. We have now transitioned to the smaller “paintbrush” of EUV, or extreme ultraviolet, at 13.5nm.

The industry is already forced to use optical tricks to print 5nm and 3nm features from 13.5nm light, much as with prior nodes, but is running out of gas, so we have to move to an even shorter wavelength than EUV for future nodes.

PUV is the next generation of lithographic light at 6.5nm. This is still considered “soft X-ray”, so we can use current lens and reticle technology, as we are not into the “tender” or “hard” X-rays of sub-nanometer wavelengths.

One of the key advantages of PUV is that the light source is polarized (hence the name “P”UV) which allows for more accurate printing. In fact the proposed litho tools will have two light sources of opposing polarity (P & S) for higher contrast.

The other nickname for the new technology is “Plaid” which also indicates the cross hatching of crossed polarity printing.

Plaid UV (PUV) will be generated by a new methodology rather than the current laser shooting tin droplet LPP source. This new source will require short but very, very high spikes of power generating almost fusion/fission like conditions. This will require battery power technology as local power grids already strain under EUV power requirements.

It is rumored that Intel is working with Tesla as the supplier of the battery technology for Plaid UV as it has significant experience in this area.

Plaid could accelerate Intel past TSMC in race to leading edge

If it is Intel working on PUV and they are successful, it could be the secret weapon that accelerates them past TSMC in the race to the leading edge of technology. Not only does Plaid offer much smaller feature sizes and higher contrast, it also offers higher throughput (speed) thanks to its much higher-power light.

In addition, there is the potentially huge benefit that existing EUV tools could be retrofitted with Plaid sources, as the EUV lenses and reticle are compatible (in theory). All of this makes Plaid well beyond Ludicrous…

Demonstration of Plaid Technology

Rare materials shortages add to supply chain woes in chip industry

The conflict in Ukraine has had far-reaching impact around the globe as international trade has slowed and may grind to a halt between East and West. This is of key concern for rare earth elements and especially noble gases, of which Ukraine was a large exporter; production has essentially stopped.

Neon, which is used in lithography tools, seems to be in adequate supply for the time being but could run short over time. Xenon, potentially used in future EUV reticle inspection tools, is also hard to find.

Rare earth elements come primarily from China, with some from Russia and other countries with lower-cost labor and lax mining regulations. Yttrium (Y) is used not only in LEDs but also as a lining inside deposition and etch chambers to protect them. Neodymium (Nd) is used in magnets in electronics.

Phlebotnium (Ph) is a key nanotechnology ingredient mechanically applied to enhance wafers. Unobtainium (Uo) is high on the periodic table and is used in matter/anti-matter experiments due to its instability and isotope half life. It is currently one of the key raw materials in advanced, atomic layer, selective deposition and etch tools due to these unique, otherworldly, characteristics.
Alternate sources of these rare materials need to be developed quickly. The US used to have rare earth mines, but few if any are left. A new mined source of Unobtainium is from Pandora but will take time to develop, and may require help from Elon Musk as well, for transport.

It’s not just money and fabs that are needed for US self-sufficiency in chips; it’s elusive, hard-to-find, rare materials as well.

Strategic Wafer Reserves may be released to alleviate shortages

The US recently announced that it was releasing a million barrels of oil a day from the strategic reserves. It is not widely known but there is also a strategic reserve of semiconductor wafers. Obviously it would be difficult to store chips of every type and vintage but common devices, mainly memory and CPU types are stockpiled by the defense electronics agencies. This is especially important as many semiconductor devices used in defense applications have been discontinued long ago.

While not huge, a release of strategic wafer reserves could help in some areas in which chips are in short supply. Rationing of those released devices will obviously be on a prioritized basis for applications that are the most needy.

We could see memory pricing negatively impacted if stockpiles of memory chips are released too quickly into the market, but it would have the effect of lowering prices and inflation in semiconductors in general.

Is there a “kill switch” built in to semiconductor equipment?

Russia does not have very much in the way of a semiconductor industry. Unlike China, Russia has not figured out the critical importance that the semiconductor industry plays in today’s world. They are perhaps the only remaining manufacturer of vacuum tubes, because much of their electronics still uses them. There are a handful of ancient fabs in Russia producing 10- and 15-year-old technology semiconductors, which is probably right in line with their current state of the art for electronics as well.

Pretty much all of the semiconductor equipment in Russia comes from the West. We wonder if any of that equipment has remote “diagnostic” capabilities. Could it be disabled, turned off or maybe just stop shipping spares and consumables?

It’s no wonder that TSMC is so paranoid about the tools in its fab. USB sticks are not allowed in, and there are no live internet connections allowed in the fab for fear of remote hacking.

Maybe tool makers should install wireless backdoors into their equipment for non-paying customers…. or maybe they already have… you never know.

Happy April First!!!!

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read

AMAT – Supply Constraints continue & Backlog Builds- Almost sold out for 2022

Intel buys Tower – Best way to become foundry is to buy one

KLAC- Great quarter and year – March Q is turning point of supply chain problem


CEO Interview: Kelly Peng of Kura Technologies

by Daniel Nenni on 04-01-2022 at 6:00 am


This interview is with Kelly Peng, Co-founder and CEO of Kura Technologies. Kura Gallium, Kura’s first product, was named Best of CES 2022 and received a 2022 CES Innovation Award as well. Kelly is an inventor, engineer, and entrepreneur who leads a team of dedicated innovators that are redefining the term “Augmented Reality”. She was a recipient of the prestigious Forbes “30 Under 30” in 2019 for her work in developing the most advanced augmented reality glasses, which will start sampling in 2022. Based in Silicon Valley, Kura Technologies is eclipsing the competition in areas of field of view, resolution, brightness, transparency, depth of field, size, and other critical metrics.

Web3 and in particular Augmented Reality is a hot topic currently, help our readers understand what differentiates Kura from the many other early-stage companies in this market?

As you have indicated, the interest in Virtual Reality and Augmented Reality is currently expanding in all directions. The fact is that we are currently experiencing a rapid change in the way AR and VR will be deployed in the very near future. These changes will open up new markets that were inconceivable a few years ago and will allow us to interact with information in the world around us in practical and interesting ways. The emerging applications will enable new activities like medical diagnosis and treatments, training, 3D design visualization, industrial inspection, and face-to-face virtual communications, all through a pair of glasses.

To facilitate this, Kura has focused our technology development to create a visualization system that is as natural as wearing glasses, and allows the wearer to experience the enhanced content necessary to optimize the desired reality. The Kura Gallium is the first pair of AR glasses to offer a 150-degree full frame field of view, 95% transparency, 8K resolution, unlimited range of depth, and many other features that provide the seamless view of the natural and augmented surroundings often referred to as the “Metaverse”.

What is the status of the glasses that were demonstrated at CES in January?

We showed demos with the world’s biggest field of view in AR, high transparency, and high brightness; newly assembled eyepieces for our upcoming dev kits; and software applications, including 3D model viewers and telepresence and remote collaboration tools, running on our headset with 9-degree-of-freedom head-tracking and gesture input. The team is focused on a couple of major developments for the hardware and telepresence platform side of Gallium. One of the larger projects is the development of the ASIC that creates the backbone of the system electronics. Custom ASICs and silicon were taped out last fall at some of the world’s biggest production foundries, and we are pleased to announce that the silicon has already come back from the fab and has been packaged. We have been testing and running characterization recently, and the results look great. The tape-out is a big success!

Can you provide some more details about the ASIC?

Yes, the internal code name for the chip is “Mill’s Creek”. The chip incorporates the control and driver circuits for the micro-LED displays; it is the world’s fastest micro-LED display driver ASIC and the core enabler for our 8K resolution. This is one of the most critical components in the system because it provides more than 100x resolution expansion, on-the-fly pixel repair, high dynamic range, and full-color images.

The Augmented Reality user experience is dependent on high-quality display capabilities. The Gallium glasses use fully customized micro-LED displays to create the brightness and image sharpness that people expect. However, the micro-LED technology is not optimized unless the display driver and our optical architecture are optimized specifically for the application. The “Mill’s Creek” ASIC is fully customized specifically for the Kura Micro-LED display hardware with a completely unique architecture, which is unlike practically all of the other headsets that use off-the-shelf components for the display driver. This successful tape-out is a big milestone for us toward pushing Gallium to production.

Startup companies are all about the team of engineers and innovators that are driving the development. Can you give us a little more insight into the team?

Kura is currently made up of over 35 people who are all contributing to the product development. We have been very fortunate to pull together a great mix of talent. More than half of Kura’s founding and leadership team are from MIT, and 3 of our lead engineers together hold 400+ patents. Kura’s in-house ASIC design team is led by Mark Flowers, Kura’s Director of Technology, who was previously the founding CTO of Leapfrog (which IPOed and was valued at $1B+). He has over 30 years of leadership experience designing and delivering custom mixed-signal ASICs. In the past, he was responsible for the shipment of tens of millions of customized chips as well as integrated consumer and enterprise platforms and products. In an earlier startup, Mark was the co-inventor of DSL, with over 800 million installed lines; that company was acquired by Texas Instruments. He graduated from MIT with a Master’s and Bachelor’s in electrical engineering with a specialty in IC design and computer science.

We are also fortunate to have a strong operations organization. That team is led by Gregory Gallinat, our COO, and Chuck Alger, Director of Supply Chain and Manufacturing. A core focus of the Ops team is defining and facilitating the worldwide supply chain and business practices to support Kura’s product launch and growth trajectory. Chuck has more than 20 years of experience with Intel, Microsoft, and CP Display. This includes multiple manufacturing site launches for products like Hololens and Surface. He also has extensive expertise in semiconductor quality and reliability coming from his time at Intel. Chuck also worked as Director of Supply Chain at Compound Photonics (just acquired by Snap), a company building ASICs for driving micro-displays like micro-LED and LCoS.

What’s the demand and upcoming adoption of Kura Gallium?

We are a platform company poised to reshape the landscape of AR. The interest in Kura Gallium has been fantastic over the last year. The CES awards and the exposure we received through various other venues have opened up the path to adopting the platform in a broad range of applications. Invitations to speak at conferences such as SPIE have exposed our thought leadership to this market. Kura currently has orders from over 350 companies, 100% of which are in-bound. Among those are over 50 companies in the Fortune 500, with total order requests from paid Fortune 500 companies totaling more than 100K units. These companies recognize the superiority of our performance and plan to use our product and platform in areas such as remote collaboration, telepresence, virtual showrooms, training, entertainment, and tele-medicine.

Many of these clients have also become investors in Kura. We also have several active projects with government agencies that see Augmented Reality as a critical technology for training, visualization, and remote collaboration and assistance. As you can see, the need for AR in various enterprise applications is very high now, and we see many of these organizations adopting our product quickly, as clients repeatedly express to us that they want the headset deployed as soon as possible. The CEO of Tokens.com, a publicly traded company that invests in Web 3 assets, recently said in an interview on CNBC that “within the next 24 months all major companies will have a presence in the Metaverse like they have a website.”

Your initial focus seems to be on the enterprise and B2B2C side. When will consumers be buying Kura Gallium?

As with most emerging technologies, the early adopters start in the enterprise market. The enormous benefit of real-time information augmenting the user’s forward view can be realized quickly in many industries. We are launching our hardware + software platform (global holographic telepresence platform, computer vision/AI SDK, and AR data platform), and many of our clients are industry leaders or some of the biggest companies in the world in automotive, training, design, telecommunication, entertainment, etc.

AR is an industry whose demand has long been waiting for a product with acceptable visual quality and a good form factor. Kura’s product and platform serve the biggest demand in the industry and also greatly expand the range of use models. Not to mention, Kura’s performance combined with the comfort of our first product already out-competes all of the “consumer-targeted” AR glasses and solutions available today. The consumer demand is there and will grow rapidly following the enterprise adoption, supported by a rich set of applications like the App Store. We have already designed many of the core technologies for our future generations of products, which will be launched for both consumers and enterprises with improved performance and even more compact form factors enabled by a deeper level of silicon integration.

Also read:

CEO Interview: Aki Fujimura of D2S

CEO Interview: Frankwell Lin, Chairman and CEO of Andes Technology

CEO Interview: Tamas Olaszi of Jade Design Automation


AMIQ EDA Adds Support for Visual Studio Code to DVT IDE Family

by Kalar Rajendiran on 03-31-2022 at 10:00 am


“A picture is worth a thousand words” is a widely known adage across the world. Recognizing patterns and cycles becomes easier when data is presented pictorially. Naturally, data visualization technology has a long history, from the early days when people used paper and pencil to graph data to modern-day visualization platforms. While visualization products have gotten fancier, driven by the data age we are in, the semiconductor industry was among the early industries that needed them. Electronics is all about signals and waveforms. It is easier to comprehend and analyze that data graphically than in the form of a table of data points. While Microsoft Excel has always offered visualization through its graphing feature, visualization solutions received broad market attention after Tableau introduced its visualization platform in 2003.

Visual Studio (VS)

Over the last couple of decades, rapid advances in the field of software have led to the introduction of integrated development environment (IDE) platforms. While there are many development platforms available to software developers, Eclipse and Visual Studio are two well-known and widely used IDE platforms. Is an IDE platform a visualization platform per se? Well, the platform itself is an environment that enables visualization of all sorts through various specific tools that work under that environment. The platform makes this possible through the ongoing addition of extensions that support interfacing with various analysis and visualization tools.

So, why is Visual Studio called visual studio? Does it mean Eclipse IDE is not a visualization platform? The Visual Studio name has Visual Basic to thank for it. The developer GUI to Visual Basic earned it the name almost three decades ago. While the development environment has expanded since then, Microsoft has maintained the “visual” prefix for their modern-day IDE. Eclipse IDE is also a visualization platform, even though it does not have “visual” in its name.

Visual Studio (VS) Code

Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux, and macOS. Despite the shared branding, it is a standalone editor rather than part of the Visual Studio IDE, and it includes support for debugging, embedded Git control, and the IntelliSense feature. IntelliSense is a code-completion feature that is context-sensitive to the language being edited. VS Code is also customizable through the editor’s themes, keyboard shortcuts, and preferences. In November 2015, Microsoft released the VS Code source code under the MIT License and posted it on GitHub.

A large ecosystem of extensions and themes is available for VS Code, making it a very popular editor. The open-source nature of the source code also attracts a large section of the developer community. The editor’s speed is not shabby either.

Semiconductor Design and Verification

Design and verification of semiconductors involves code development too. While VHDL and Verilog are not standard software languages, they are programming languages nonetheless. Design and verification tasks can benefit from an IDE just as software coding and testing do. As such, there has been interest in and a push for IDE offerings to support the semiconductor community.

AMIQ EDA

AMIQ EDA provides software tools that enable hardware design and verification engineers to improve productivity and reduce time to market. Prior to its spinoff from AMIQ Consulting, the team had observed three recurring challenges semiconductor companies faced: developing new code to meet tight schedules, understanding legacy or poorly documented code, and getting new engineers up to speed quickly. In the software world, IDEs are commonly used to overcome such challenges. But in the early 2000s, no IDE was available for design and verification languages such as Verilog, VHDL, e and SystemVerilog. So, they developed an IDE for internal use.

In 2008, AMIQ EDA was spun off from AMIQ Consulting and Design and Verification Tools (DVT) Eclipse IDE was launched. They launched DVT Debugger in 2011, Verissimo SystemVerilog Testbench Linter in 2012, and Specador Documentation Generator in 2014. As a company that strongly believes in user-driven development and building solutions based on real-life experiences, they recently launched DVT IDE for VS Code. You can read their press announcement here.

AMIQ EDA’s products help customers accelerate code development, simplify legacy code maintenance, speed up language and methodology learning, and improve source code reliability.

DVT IDE for VS Code

DVT IDE for Visual Studio Code (VS Code) is an integrated development environment (IDE) for SystemVerilog, Verilog, Verilog-AMS, and VHDL. The DVT IDE consists of a parser, the VS Code (editor), an intuitive graphical user interface, and a comprehensive set of features that help with code writing, inspection, navigation, and debugging. It provides capabilities that are specific to the hardware design and verification domain, such as design diagrams, signal tracing, and verification methodology support. The VS Code platform’s extensible architecture allows the DVT IDE to integrate within a large extension ecosystem and work flawlessly with third-party extensions.

DVT IDE for VS Code shares all of its analysis engines with DVT Eclipse IDE, which has been field proven since 2008. The product enables engineers to inspect a project through diagrams. Designers can use HDL diagrams such as schematic, state machine, and flow diagrams. Verification engineers can use UML diagrams such as inheritance and collaboration diagrams. Diagrams are hyperlinked and synchronized with the source code and can be saved for documentation purposes. Users can easily search and filter diagrams as needed, for example, visualizing only the clock and reset signals in a schematic diagram. Both tools also have important non-visualization features such as hyperlinked navigation, auto-complete, code refactoring, and semantic searches for usages, readers, and writers of signals and variables.

For a couple of screenshots showing the DVT IDE for VS Code in action, refer to the Figures below.

 

DVT IDE for VS Code is available for download from the VS Code marketplace. For more details, refer to the product page.

Also read:

Automated Documentation of Space-Borne FPGA Designs

Continuous Integration of RISC-V Testbenches

Continuous Integration of UVM Testbenches


Shift left gets a modulated signal makeover

by Don Dingee on 03-31-2022 at 6:00 am


Everyone saw Shift Left, the EDA blockbuster. Digital logic design, with perfect 1s and 0s simulated through perfect switches, shifted into a higher gear. But the dark arts – RF systems, power supplies, and high-speed digital – didn’t shift smoothly. What do these practitioners need in EDA to see more benefits from shift left? Higher fidelity behavioral models. Authentic waveforms. Fast, accurate simulation schemes. And, looking forward, components with an “executable datasheet,” reproducing physical results when simulated in various contexts. Let’s go inside Keysight’s strategy in this series on how shift left gets a modulated signal makeover.

Unlocking more value across the ecosystem

The fundamental value of shift left – in fact, the whole purpose of EDA tools – is earlier visibility for design teams in virtual space. Waiting until problems show up in hardware drives up cost and risk and takes away flexibility. Earlier virtual validation reduces hardware re-spins with their trial-and-error. Teams can explore architectural options and avoid over-design trying to protect margins.

But shift left has the potential to unlock more value by pulling the ecosystem together. Project “visibility” improves when predictions show how close a design is to performance goals, and what remains to be done. Virtual sampling in the customer’s and the customer’s-customer contexts would help vendors demonstrate parts quickly and close sales faster.

Ultimately, shift left sets the stage for digital twins, enabling physical engineering activities to move to virtual space. For example, design teams and end customers would be able to gauge difficult physical experiences, like a satellite in orbit with interference, sun loading, jamming, and other dynamics. Digital twins need unprecedented levels of accuracy and trust in EDA tools.

Paving a two-way path for waveforms and data

When electromagnetic (EM) behaviors appear, these higher value wins only happen if modulated signals come to life in virtual space. Consider the case of a power amplifier (PA) designed using S-parameters and simulated with sine waves. Basic validation of a PA might fall apart when a team designs a hardware prototype around it and applies complex modulation like 5G or Wi-Fi 7.

Think about it this way: if testing with modulated signals is a given for hardware, why is it optional for simulating a design? The answer is most RF EDA workflows are one-way. They lack any path for bringing physical measurements back through simulation, either as a stimulus waveform or as improvements to behavioral modeling parameters. Bringing real-world effects back into a math-based model is at best applying a fudge factor with who knows what margin. When data and models don’t line up, system-level experiences get lost in translation.

Now consider Keysight’s vision for a two-way path with high-fidelity transportable models in a common modeling language. A device under test comes packaged with authentic waveforms and enhanced modeling data – an executable that goes beyond a datasheet. A single workflow connects design and simulation from detailed componentry to world-level scenario planning, enabling regression not possible with discontinuous point-based tools, waveforms, and data.

An example of why modulation is not an option

When shift left works for EM design, workflows must be two-way, and modulation is not an option. An example is error vector magnitude (EVM), a reliable figure of merit in wireless applications.
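For readers less familiar with the metric: EVM is the RMS error-vector magnitude between measured and ideal (reference) constellation points, normalized by the RMS reference magnitude. A toy Python sketch (my own illustration with made-up QPSK symbols and noise level, not data from the article):

```python
import math
import random

random.seed(0)

# Hypothetical QPSK reference symbols and noisy "measured" symbols
# (illustrative values only, not from the article).
ref = [complex(random.choice([-1, 1]), random.choice([-1, 1])) / math.sqrt(2)
       for _ in range(256)]
meas = [r + complex(random.gauss(0, 0.05), random.gauss(0, 0.05)) for r in ref]

# EVM: RMS error-vector magnitude normalized by RMS reference magnitude.
err_power = sum(abs(m - r) ** 2 for m, r in zip(meas, ref)) / len(ref)
ref_power = sum(abs(r) ** 2 for r in ref) / len(ref)
evm = math.sqrt(err_power / ref_power)
print(f"EVM = {100 * evm:.1f}%")   # roughly 7% for this noise level
```

In a real simulation flow, the distortion term would come from the modeled PA nonlinearity, memory effects, and mismatch rather than simple additive noise, which is exactly why combined-effect simulation matters.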

PA designers do an excellent job of focusing on power-dependent parameters, such as non-linearity and peak-to-average power ratio (PAPR). EVM outcomes rely on other important effects at work. Quickly varying effects determine performance in power, frequency, and time domains. Slower varying effects alter performance around temperature, load, and bias.

For example, a complex waveform can set off time-dependent memory effects that present as self-heating problems. Or wideband operation uncovers impedance mismatches at some points. Shouldn’t designers see those problems developing virtually, before getting surprised by measurements on a hardware prototype, or observations from a distant deployed platform?

Most EDA approaches have models that look at one effect at a time. Modern EM problems have more interrelated dimensions and demand simulation of combinations of effects. Keysight provides a unique approach, blending EDA and test and measurement expertise, to see deeper. Modulated signals are the key to bringing the customer into the design environment. That requires stronger simulation engines, higher fidelity models, multiple effects in play, and one workflow from schematic capture through validation.

How does this work? Here’s a new Keysight video on modulated signals in EVM performance, using test and measurement insights and simulation techniques to get faster results on complex waveforms with combined effects.

Beyond the “choose your own compliance adventure”

A final point on changes in the complex EM systems business. Systems now have carefully defined waveforms from industry specifications. Choosing your own compliance adventure with some random stimulus won’t get the job done. Teams using shift left with modulated signals in a Keysight EDA environment can demonstrate with confidence that a virtual design hits requirements before hardware materializes.

It’s true not only for RF communications systems, but also for adjacent markets. Switched-mode power supply design must hit EMI compliance profiles in many markets. High-speed digital design is now dictated by specifications like PCIe Gen 5 and DDR5. Shift left brings those specifications and their waveforms into view much sooner in the design cycle.

System complexity shouldn’t be left to the reader to solve alone. Keysight has invested in creating EDA tools and workflows for RF system design with world-class measurement science built in. Over the next installments in this series, we’ll see more detailed examples in Keysight’s EDA solutions for shift left with modulated signals. Next up: digital pre-distortion design.

Also read:

WEBINARS: Board-Level EM Simulation Reduces Late Respin Drama

 

 


Synopsys Announces FlexEDA for the Cloud!

by Daniel Nenni on 03-30-2022 at 10:30 am


There’s been a lot of discussion and hype regarding use of the cloud for chip design for quite a while, more than ten years I would say. I spoke with Synopsys to better understand their recent Synopsys Cloud announcement to determine if it is different. Briefly, it is different, and here is why:

If you’re trying to design a complex SoC, or more like a system-in-package today, you have many hurdles to negotiate regarding the design infrastructure required. Things like:

  1. Compute power: Has your IT department provisioned enough CPUs, memory and disk (with the required access speed) to support your next huge design project? What about peak load requirements? This is a tough one since it takes a long time to justify, procure and provision this kind of asset.
  2. EDA tools: This one can be quite tricky as well. Like IT infrastructure, EDA procurement cycles can be long and complicated. You may have a good idea of how many of each kind of license you need for that next huge design. But there will be surprises. You may need more of some tools to meet time to market. There may be a new tool that gets released during the design – a tool that is perfect for this project. If only you knew about it beforehand. Now that the procurement cycle is done, you’re faced with more difficulty to get what you need.
  3. Building the design flow: Has your CAD department hooked up the required tools in the right flow for each step in the huge project ahead of you? This isn’t the end of it. Besides the actual flow, the tools need to be matched to the right compute environment. Some design steps need a lot of memory. Some need a lot of compute and others need both. Everything seems to need more disk space than you’ll ever have.

And all of this just gets you to the starting line. You still must design that next huge project.

The widespread move to the cloud for chip design has really helped with the first item above. But the other two are still essentially an exercise for the design team (and CAD team) to address. Whether you’re on the cloud or on premises, configuring machines for the workload at hand and ensuring you had the foresight to buy enough of all the right tools are challenges. Negotiating peak load license capacity can help, but did you ask for enough? And what about new tools that aren’t in your contract?

What’s New?

The second and third items on the list above are what set Synopsys Cloud apart. Two business models are supported: bring your own cloud (BYOC) and a unique software-as-a-service (SaaS) approach to chip design on the cloud.

BYOC is similar to traditional models – you use the cloud vendor of your choice to flatten the compute requirement problem and the EDA vendor provides cloud-certified tools that you can purchase and run there. The SaaS model takes the user experience a step further by providing pre-packaged design flows for all the workflows you’ll need that are optimized and matched to the right compute resources. This is all provided by Synopsys, so you don’t need either an IT or CAD department.

BYOC is just that, pick your favorite cloud provider (Azure, AWS, Google Cloud Platform) and Synopsys will provide cloud-certified tools. The SaaS model is a joint development with Microsoft Azure.

There is more, however. Another innovation that is part of Synopsys Cloud is something called FlexEDA. This one is a game-changer. It is patent-pending metering technology that provides access to a growing catalog of EDA tools from Synopsys on a pay-per-use basis, by the minute or hour. No pre-determined licensing requirements. Decide what you need and provision it for as long as you need it. This makes EDA deployment just like cloud computing deployment: ask for whatever you need and pay for what you use. FlexEDA is available for both the BYOC and SaaS models, so there are lots of options. Synopsys is also working with foundries to simplify access to resources like PDKs. There is no EDA company closer to the foundries than Synopsys.

The FlexEDA model is what we cloud enthusiasts, myself included, have been patiently waiting for, and it could fundamentally change the EDA landscape, absolutely.

Companies can sign up for Synopsys Cloud immediately.

Also read:

Use Existing High Speed Interfaces for Silicon Test

Getting to Faster Closure through AI/ML, DVCon Keynote

Upcoming Webinar: 3DIC Design from Concept to Silicon


Symbolic Trojan Detection. Innovation in Verification

by Bernard Murphy on 03-30-2022 at 6:00 am


We normally test only for correctness of the functionality we expect. How can we find functionality (e.g. Trojans) that we don’t expect? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Trojan Localization Using Symbolic Algebra. The paper was published at the 2017 Asia and South Pacific Design Automation Conference (ASP-DAC). The authors are from the University of Florida and the paper has 30 citations.

Methods in this class operate by running an equivalence check between a specification in RTL, presumed Trojan-free, and a gate-level implementation potentially including Trojans. Through equivalence checking, the authors can not only detect the presence of a Trojan but also localize that logic. This approach extracts and compares polynomial representations from specification and gate-level views, unlike traditional BDD-based equivalence checking.

Implementation polynomials are easy to construct, e.g. NOT(a) → 1-a and OR(a,b) → a+b-a*b. One writes specification polynomials to resolve to 0, e.g. 2*Cout+Sum-(A+B+Cin), with one polynomial per output. In sequential logic, flop inputs act as outputs. Checking then proceeds by “reducing” specification polynomials, tracing backwards in a defined order from the outputs. At each step backwards, the method replaces output/intermediate terms with the implementation polynomials that create those values. On completion, each polynomial remainder should be zero if specification and implementation are equivalent. Any non-zero remainder flags suspect logic.
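To make the reduction concrete, here is a small self-contained Python sketch (my own illustration, not the authors’ implementation) that applies these gate polynomials to a full adder. Since the variables are Boolean (x*x == x), every polynomial can be kept multilinear: a dict mapping a set of variable names (a monomial) to an integer coefficient, with monomial products reducing to set unions.

```python
# Multilinear polynomials over Boolean variables: dict {frozenset(vars): coeff}.
def var(name):
    return {frozenset([name]): 1}

def add(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
        if r[m] == 0:
            del r[m]          # drop cancelled terms
    return r

def scale(p, k):
    return {m: k * c for m, c in p.items()} if k else {}

def mul(p, q):
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 | m2       # x*x == x: monomials merge by set union
            r[m] = r.get(m, 0) + c1 * c2
            if r[m] == 0:
                del r[m]
    return r

# Gate polynomials: NOT(a) -> 1-a, AND -> a*b, OR -> a+b-a*b, XOR -> a+b-2ab.
ONE = {frozenset(): 1}
def NOT(a):    return add(ONE, scale(a, -1))
def AND(a, b): return mul(a, b)
def OR(a, b):  return add(add(a, b), scale(mul(a, b), -1))
def XOR(a, b): return add(add(a, b), scale(mul(a, b), -2))

a, b, cin = var("a"), var("b"), var("cin")

# Gate-level implementation of a full adder.
s    = XOR(XOR(a, b), cin)                 # Sum
cout = OR(AND(a, b), AND(cin, XOR(a, b)))  # Carry out

# Specification polynomial 2*Cout + Sum - (A + B + Cin): substituting the
# implementation polynomials should leave a zero remainder iff equivalent.
spec = add(add(scale(cout, 2), s), scale(add(add(a, b), cin), -1))
print(spec)   # {} => equivalent; a non-empty remainder would flag suspect logic
```

Inserting a Trojan (say, replacing one gate so Sum is corrupted under a rare input pattern) leaves a non-zero remainder whose monomials point at the affected logic.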

Since the method localizes suspect areas quite closely, the authors can prune out all non-suspect logic, leaving a much smaller region of logic. They then run ATPG to generate vectors that trigger the Trojan.

Paul’s view

This is a fun paper and an easy read. The basic idea is to use logical equivalence checking (LEC) between specification and implementation (e.g. RTL vs. gates) to see if malicious trojan logic has been added to the design. The authors do the equivalence check by forming arithmetic expressions to represent logic rather than using the more traditional BDD- or SAT-based approaches. As with commercial LEC tools, their approach first matches sequential elements in the specification and implementation, which reduces the LEC problem to a series of Boolean equivalence checks on the logic cones driving each sequential element.

The key insight for me in this paper is the observation that if a non-equivalent “suspicious” logic cone overlaps with another, equivalent logic cone (i.e. they share some common logic), then this overlap logic cannot be suspicious – i.e. it can be removed from the set of potential trojan logic.
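The pruning rule reduces to simple set operations. A toy sketch (hypothetical gate names, not from the paper): model each cone as the set of gates it contains, then clear any gate that also feeds a cone that passed the equivalence check.

```python
# Cones that passed the equivalence check (gates they contain).
equivalent_cones = [
    {"g1", "g2", "g3"},   # cone driving flop A: equivalent
    {"g3", "g4"},         # cone driving flop B: equivalent
]
# Cone that failed the equivalence check.
suspicious_cone = {"g2", "g4", "g7", "g8"}

# Any gate shared with an equivalent cone cannot be part of the Trojan.
cleared = set().union(*equivalent_cones)
trojan_candidates = suspicious_cone - cleared
print(sorted(trojan_candidates))   # ['g7', 'g8']
```

Only the remaining candidates need to be fed to ATPG (or a SAT call) to find an activating input pattern.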

Having used this insight to identify a minimal set of suspicious gates, the authors then use an automatic test pattern generation (ATPG) tool on this set to identify a design state that activates the trojan logic.

To be honest I didn’t quite follow this part of the paper. Once the non-equivalent logic is identified, commercial LEC tools just perform a SAT operation on the non-equivalent logic itself. This generates a specific example of design state where the specification and implementation behave differently. It isn’t necessary to use an ATPG tool for this purpose.

As another fun side note, Cadence has a sister product to our Conformal LEC tool, called Conformal ECO. It uses this same overlapping logic cone principle (together with a bunch of other neat tricks) to identify a minimal set of non-equivalent gates from a LEC compare. A designer can use the tool to automatically map last-minute functional RTL patches onto an existing post-layout netlist late in the tapeout process. This is a big advantage when re-running the whole implementation flow is not feasible.

Raúl’s view

Detecting Trojans is difficult because the trigger conditions are deliberately designed to be satisfied only in very rare situations. Random or other test pattern generation will likely not activate the Trojan and the circuit will appear to be functioning correctly. If a specification is available, formal verification, i.e., equivalence checking, of this specification against an implementation will uncover that they are not equivalent.

This paper uses a method based on extracting polynomials from an implementation and comparing them to a specification polynomial. This works only for combinational circuits; retiming through movement of flip-flops would presumably break this assumption. The basics are explained in the introduction to this blog; it is an application of Gröbner basis theory.

The paper claims that the algorithms scale linearly, which would imply that equivalence checking is linear (unlikely at best). I am not clear what feature they call linear. As Paul points out, most of this is state of the art in commercial tools. However, the ability to narrowly identify the part of the circuit that contains the Trojan is a nice result.

My view

I understood that this was an LEC problem, approached in a different way. However it didn’t occur to me that it was still subject to the same bounds. Paul (an expert in this area) set me straight!

Also read:

Leveraging Virtual Platforms to Shift-Left Software Development and System Verification

Using a GPU to Speed Up PCB Layout Editing

Dynamic Coherence Verification. Innovation in Verification