
Security Requirements for IoT Devices
by Daniel Nenni on 04-05-2022 at 6:00 am

Figure: IoT product lifecycle

Designing for secure computation and communication has become a crucial requirement across all electronic products.  It is necessary to identify potential attack surfaces and integrate design features to thwart attempts to obtain critical data and/or to access key intellectual property.  Critical data spans a wide variety of assets, including financial, governmental, and personal privacy information.

The security of intellectual property within a design is needed to prevent product reverse engineering.  For example, an attacker may seek to capture firmware running on a microcontroller in an IoT application to provide competitive manufacturers with detailed product information.

Indeed, the need for a security micro-architecture integrated in the design applies not only to high-end transaction processing systems, but to IoT devices as well.

Vincent van der Leest, Director of Product Marketing at Intrinsic ID, recently directed me to a whitepaper entitled, “Preventing a $500 Attack Destroying Your IoT Devices”.  Sponsored by the Trusted IoT Ecosystem Security Group of the Global Semiconductor Alliance (GSA), with contributions from Intrinsic ID and other collaborators, the theme of this primer is that a security architecture is definitely a requirement for IoT devices.  The rest of this article covers some of the key observations in the paper – a link is provided at the bottom.

Types of Attacks

There are three main types of attacks that may yield information to attackers:

  • side-channel attacks: non-invasive listening attacks that monitor physical signals coming from the IoT device
  • fault injection attacks: attempts to disrupt the internal operation of the IoT device, such as attempting to modify program execution of an internal microcontroller to gain knowledge about internal confidential data or interrupt program flow
  • invasive attacks: physical reverse engineering of the IoT circuitry, usually too expensive a process for attackers

For the first two attack types above, current IoT systems offer an increasing number of (wired and wireless) interfaces providing potential access.  Designers must be focused on security measures across all these interfaces, from memory storage access (RAM and external non-volatile flash memory) to data interface ports to test access methods (JTAG).

The paper goes into more detail on the security architecture, covering cryptographic keys, digital signatures, and key generation.

Examples of Security Measures

The first step to prevent IoT attacks is taking certain design considerations into account. Here are some examples of design and program execution techniques to consider:

  • firmware storage

An obvious malicious attack method would be to inject invalid firmware code into the IoT platform.  A common countermeasure is to execute a small boot loader program upon start-up from internal ROM (not externally visible), then continue booting the O/S and application firmware from flash.  As the intent of using flash memory is to enable in-field product updates and upgrades, this externally-visible attack surface requires a security measure.  For example, O/S and application code could incorporate a digital signature evaluated for authentication by the security domain.
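
As an illustration only (not from the whitepaper), a boot loader's image check can be sketched in Python.  A real secure-boot flow would verify an asymmetric signature (e.g., ECDSA) inside the security domain; here a SHA-256 digest with a constant-time compare stands in for that step, and all names and data are hypothetical:

```python
import hashlib
import hmac

def verify_firmware(image: bytes, expected_digest: bytes) -> bool:
    """Boot-loader style check: recompute the image digest and compare
    in constant time before handing control to the O/S."""
    digest = hashlib.sha256(image).digest()
    return hmac.compare_digest(digest, expected_digest)

# Hypothetical firmware image and its reference digest (held in ROM)
firmware = b"\x7fELF...application image bytes"
good_digest = hashlib.sha256(firmware).digest()

assert verify_firmware(firmware, good_digest)                # authentic image boots
assert not verify_firmware(firmware + b"\x00", good_digest)  # tampered image rejected
```

The constant-time comparison matters: a naive byte-by-byte compare can leak timing information, which is exactly the kind of side channel the article describes.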

  • runtime memory security

Security measures may be taken to perform integrity checks on application memory storage during runtime operation.  These checks could be evaluated periodically, or triggered by a specific application event.  Note that these runtime checks really only apply to relatively static data – examples include:  application code, interrupt routines, interrupt address tables.
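
A runtime integrity check over static regions can be sketched as follows.  This is a simplified stand-in for a hardware-assisted check; the region names and contents are hypothetical:

```python
import hashlib

def snapshot(regions):
    """Record a reference SHA-256 digest for each static region
    (application code, interrupt routines, interrupt address tables)."""
    return {name: hashlib.sha256(data).digest() for name, data in regions.items()}

def integrity_check(regions, reference):
    """Re-hash each region and report any that no longer match the
    boot-time reference; could run periodically or on an app event."""
    return [name for name, data in regions.items()
            if hashlib.sha256(data).digest() != reference[name]]

regions = {"app_code": b"\x90" * 64, "irq_table": b"\x10\x20\x30\x40"}
reference = snapshot(regions)
assert integrity_check(regions, reference) == []    # clean pass
regions["irq_table"] = b"\x10\x20\x30\x41"          # simulated corruption
assert integrity_check(regions, reference) == ["irq_table"]
```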

  • application task programming

An early design decision involves defining which tasks within the overall application require security measures and distinct operating system privileges.  As a result, the application may need to be divided into smaller subtasks.  Limiting task complexity also simplifies the design of the security subsystem.

  • redundant hardware/software resources

For additional security, it may be prudent to execute critical application code twice, to detect an attack method that attempts to inject a glitch fault into the IoT system.
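
The redundant-execution idea can be sketched in a few lines.  In a real deployment the two runs might use separate hardware resources or time-shifted execution; this hypothetical sketch just compares two software runs:

```python
def run_redundant(critical_fn, *args):
    """Run a critical computation twice and compare; a mismatch
    suggests a glitch fault was injected during one of the runs."""
    first = critical_fn(*args)
    second = critical_fn(*args)
    if first != second:
        raise RuntimeError("fault detected: redundant results disagree")
    return first

# e.g., a key-schedule step or an access-control decision
assert run_redundant(lambda a, b: a + b, 2, 3) == 5
```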

  • attestation

An additional measure to integrate into the design is the means by which the security subsystem attests (through an external interface) to confirm the IoT product is initialized as expected.  If runtime security checks are employed, the design also needs to communicate that operational security is being maintained.

IoT Product Development and Lifecycle

The figure above provides a high-level view of the IoT product lifecycle.  During IC fabrication, initial configuration and boot firmware are typically written to the device.  Cryptographic keys would be included as part of the security subsystem.  During IoT system integration, additional security data may also be added to the SoC.

Summary

The pervasive and diverse nature of IoT applications, from industrial IoT deployments to consumer (and healthcare) products, necessitates a focus on security of critical economic and personal privacy assets.  The whitepaper quotes a survey indicating that security concerns continue to be a major inhibitor to consumer IoT device adoption.  A recommendation in the whitepaper is for the IoT product developer to pursue an independent, third-party test and certification of the security features.

Here is the whitepaper: Preventing a $500 Attack Destroying Your IoT Devices


Also read:

WEBINAR: How to add a NIST-Certified Random Number Generator to any IoT device?

Enlisting Entropy to Generate Secure SoC Root Keys

Using PUFs for Random Number Generation


Designing a FinFET Test Chip for Radiation Threats
by Tom Simon on 04-04-2022 at 10:00 am


Much of the technology that goes into aerospace applications is some of the most advanced technology that exists. However, these same systems must also offer the highest level of reliability in what is arguably an extremely difficult environment. For semiconductors, a major environmental risk in aerospace applications is the effect of charged particles in the upper atmosphere and space.  Charged particles can lead to soft errors, where an IC is not permanently damaged, but its operation is temporarily disrupted, leading to unpredictable results. Of course, ICs used in aerospace are hardened to limit the effects of charged particles, but their underlying process technology plays a big role in their susceptibility, especially in the case of FinFETs.

To lower the risk from charged particles, aerospace ICs have tended to use trailing process nodes. However, new nodes offer many attractive characteristics, such as reduced power and higher levels of integration. According to a Siemens EDA white paper, the European Space Agency (ESA) was interested in evaluating various FinFET nodes to determine which ones were the most promising candidates for use in their flight systems. The white paper titled “IROC tapes out aerospace SoC with Aprisa place-and-route” talks about ESA’s work with IROC Technologies to design test chips for evaluating the effects of charged particles on these advanced nodes.

IROC has broad capabilities for designing and testing ICs for quality and reliability. Yet, when ESA approached them to design self-contained test vehicles for FinFET processes, IROC needed to put in place a complete FinFET SoC design flow for 16nm. The test chip design calls for standard cells, memories, complex IP and more. Along with the structures being tested, the chip also had to include support circuitry to measure, record and analyze the performance of the test structures. The monitoring blocks had to be completely immune to radiation.

Like many companies looking for tools to set up their flow, IROC started looking at the tools qualified for the targeted TSMC N16 FinFET process. They were of course familiar with Calibre, but also saw that Siemens’ Aprisa place and route solution was included as well. IROC decided to proceed with using Aprisa for this project. Their initial decision, guided by Aprisa’s integration with other Siemens tools and tools from other vendors, was affirmed once they saw how intuitive and easy Aprisa was to set up and use. IROC used the reference flow provided by the Aprisa support team to go through their rapid learning process. One example of this smooth bring-up was the ease of setting up Aprisa to use the TSMC PDK for their process.

IROC started with the design partitioning step using a combination of custom and RTL based blocks. Aprisa let them generate the Liberty and layout views necessary for floor planning and chip assembly. Within a few days of setting up the flow they had completed first pass placement. The built-in timing analysis engine only required a few lines of Tcl to give them timing feedback that correlated well with their final sign off tools.

In the white paper IROC said they were pleased with the quick turnaround of the detail-route-centric design results provided by Aprisa’s multithreading capability. According to the white paper, IROC saw high quality results that reduced the need for iterations to achieve design closure. In only three months IROC went from tool installation all the way through to a successful tapeout. For a FinFET process this is a remarkable achievement. They were able to use the silicon to perform the targeted reliability analysis for ESA. The aggressive schedule for the project meant that IROC made the decision to proceed with Aprisa without a traditional evaluation. Their assessment that Aprisa would perform well for their project appears to have been well founded. No doubt the reputation that Siemens has earned for its EDA tools was a large factor in this decision. Aprisa is also certified down to TSMC N6 and is in the process of gaining certification for TSMC N5 and N4. The white paper is available for download on the Siemens EDA website.

Also read:

Path Based UPF Strategies Explained

Co-Developing IP and SoC Bring Up Firmware with PSS

Balancing Test Requirements with SOC Security


Bridging Analog and Digital worlds at high speed with the JESD204 serial interface
by Daniel Nenni on 04-04-2022 at 6:00 am


We are delighted to showcase a replay of our “Bridging Analog and Digital worlds at high speed with the JESD204 Serial Interface” webinar on April 20th, in case you missed the live event back in February 2022.

To meet the increased demand for converter speed and resolution, JEDEC proposed the JESD204 standard, describing a new, efficient serial interface for data converters.  Introduced in 2006, JESD204 supported multiple data converters over a single lane; standard revisions A, B, and C successively added features such as support for multiple lanes, deterministic latency, and error detection and correction, while steadily increasing lane rates.  The JESD204D revision, currently in the works, aims to once more increase the lane rate, to 112Gbps, with a change of lane encoding and a switch of the error correction scheme to Reed-Solomon.  Most of today’s high-speed converters make use of the JESD standard, and applications fall within, but are not limited to, Wireless, Telecom, Aerospace, Military, Imaging, and Medical; in essence, anywhere a high-speed converter can be used.

Watch the replay here

The JESD204 standard is dedicated to the transmission of converter samples over serial interfaces.  Its framing maps M converters of S samples each, with a resolution of N bits, onto L lanes using frames of F octets that, in succession, form larger Multiframes or Extended Multiblock structures described by the K or E parameters.  These frames allow for various placements of samples, in high-density (HD) or non-HD arrangements, and for each sample to be accompanied by CS control bits within a sample container of N’ bits, or by control bits at the end of a frame (CF).  These symbols, describing the sample data and frame formatting, paired with the mapping rules dictated by the standard, give both parties engaging in the transmission a shared understanding of how the transmitted data should be mapped and interpreted.
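
The key relationship among these link parameters can be checked numerically.  This is a sketch based on the standard F = M·S·N′/(8·L) relation; the parameter values in the example are illustrative, not taken from the webinar:

```python
def octets_per_frame(M, S, Np, L):
    """JESD204 link-parameter relation F = (M * S * N') / (8 * L):
    M converters, S samples per converter per frame, N' bits per
    sample container, L lanes.  F must come out as a whole number
    of octets for a legal configuration."""
    bits_per_lane_per_frame = M * S * Np / L
    F = bits_per_lane_per_frame / 8
    assert F.is_integer(), "illegal link parameter combination"
    return int(F)

# 4 converters, 1 sample each, 16-bit sample containers, 2 lanes
assert octets_per_frame(M=4, S=1, Np=16, L=2) == 4
```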

The 8b10b encoding scheme of JESD204, JESD204A, and JESD204B, paired with Decision Feedback Equalizers (DFEs), may not work efficiently above 12.5Gbps, as it may not offer adequate spectral richness.  For this reason, and for better relative power efficiency, 64b66b encoding was introduced in JESD204C, targeting applications up to 32 Gbps.  JESD204D follows in its footsteps with even higher line rates, planned up to 112Gbps using PAM4 PHYs, and demands a new encoding to efficiently encapsulate the 10-bit symbol-oriented mapping of Reed-Solomon Forward Error Correction (RS-FEC).
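
To see why the encoding choice matters at these speeds, the line-rate overhead of 8b10b versus 64b66b can be compared with a small calculation.  The frame clock and link parameters below are hypothetical, chosen only for illustration:

```python
def lane_rate_gbps(M, S, Np, L, frame_clock_mhz, enc_num, enc_den):
    """Serial lane rate including line-coding overhead: 8b10b sends
    10 line bits per 8 data bits (25% overhead), 64b66b sends 66 per
    64 (~3.1% overhead)."""
    data_bits_per_s = M * S * Np * frame_clock_mhz * 1e6 / L
    return data_bits_per_s * enc_num / enc_den / 1e9

# Same payload carried with the two encodings
r_8b10b = lane_rate_gbps(4, 1, 16, 2, 245.76, 10, 8)
r_64b66b = lane_rate_gbps(4, 1, 16, 2, 245.76, 66, 64)
assert r_8b10b > r_64b66b   # 8b10b consumes more line bandwidth
```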

Deterministic latency, introduced in JESD204B, allows the system to maintain constant latency throughout reset and power-up cycles, as well as re-initialization events.  This is accomplished in most cases by providing a system reference signal (SYSREF) that establishes a common timing reference between the Transmitter and Receiver and allows the system to compensate for any latency variability or uncertainty.

The main traps and pitfalls of system design around the JESD204 standard involve system clocking in Subclass 1, where deterministic latency is achieved with the use of SYSREF, as well as SYSREF generation and utilization under different system conditions.  Choosing the right frame format and SYSREF type to match system clock stability and link latency can also prove challenging.

Watch the replay here

Also read:

CEO Interview: John Mortensen of Comcores

WEBINAR: Unlock your Chips’ Full Data Transfer Potential with Interlaken

WEBINAR: O-RAN Fronthaul Transport Security using MACsec


EUV Resist Absorption Impact on Stochastic Defects
by Fred Chen on 04-03-2022 at 10:00 am


Stochastic defects continue to draw attention in the area of EUV lithography. It is now widely recognized that stochastic issues not only come from photon shot noise due to low (absorbed) EUV photon density, but also the resist material and process factors [1-4].

It stands to reason that resist absorption of EUV light, which is depth-dependent, will also have an important bearing on the occurrence of stochastic defects.  EUV resists have so far been mainly chemically amplified, with a typical absorption coefficient of 5/um [5], meaning that exp(-5) ~ 0.67% represents the fraction of light transmitted by a 1 um thick layer of the resist.  For a layer only 20 nm thick, on the other hand, 90% of the light is transmitted, meaning that only 10% of the light is absorbed.  For a 40 nm thick resist layer, it is interesting to compare the absorption in the top half vs. the bottom half (Figure 1).
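
These transmission and absorption figures follow directly from the Beer-Lambert law, and a short sketch reproduces them (using the typical 5/um coefficient cited above):

```python
import math

def absorbed_fraction(alpha_per_um, z0_nm, z1_nm):
    """Beer-Lambert law: fraction of incident light absorbed between
    depths z0 and z1 (in nm) for absorption coefficient alpha (1/um)."""
    transmitted = lambda z_nm: math.exp(-alpha_per_um * z_nm / 1000.0)
    return transmitted(z0_nm) - transmitted(z1_nm)

alpha = 5.0  # typical chemically amplified EUV resist, per [5]
top = absorbed_fraction(alpha, 0, 20)      # top half of a 40 nm layer
bottom = absorbed_fraction(alpha, 20, 40)  # bottom half
assert round(top, 3) == 0.095   # ~10% absorbed in the top 20 nm
assert top > bottom             # the bottom half absorbs less
```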

Figure 1. EUV photon absorption in the top half (left) vs. the bottom half (right) of a chemically amplified resist. The threshold to print here is taken to be 24 photons absorbed per 2 nm x 2 nm pixel. The assumed dose (averaged over the displayed 80 nm x 80 nm area) is 60 mJ/cm2. The oval outline is for reference to visually assist observing the stochastic absorption profile.

The bottom half receives less light (90% of the incident intensity) than the top, because some light has already been absorbed, and it in turn absorbs a small fraction (10%) itself.  Consequently, it is more likely for some areas in the bottom half of the resist layer to fall below the nominal threshold photon absorption density to print.  This leads to a higher probability of stochastic defects forming from underexposure in the lower half of the resist.
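
The link between a dimmer bottom half and more frequent sub-threshold pixels is a Poisson-statistics effect, which can be sketched as follows.  The mean photon counts here are hypothetical, chosen only to bracket the 24-photon print threshold used in the figures:

```python
import math

def p_below_threshold(mean_photons, threshold):
    """Poisson probability that a pixel absorbs fewer than `threshold`
    photons when `mean_photons` are expected on average."""
    return sum(math.exp(-mean_photons) * mean_photons ** k / math.factorial(k)
               for k in range(threshold))

# Hypothetical means bracketing the 24-photon print threshold:
p_top = p_below_threshold(30.0, 24)      # brighter top half
p_bottom = p_below_threshold(26.0, 24)   # dimmer bottom half
assert p_bottom > p_top   # dimmer regions miss the threshold more often
```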

Given that lower resist absorption aggravates stochastic effects, we would expect the higher absorption of metal oxide resists [5] to fare much better.  From Figure 2, however, we see something significantly different.

Figure 2. EUV photon absorption in the top half (left) vs. the bottom half (right) of a metal oxide resist. The threshold to print here is taken to be 24 photons absorbed per 2 nm x 2 nm pixel. The assumed dose (averaged over the displayed 80 nm x 80 nm area) is 60 mJ/cm2. The oval outline is for reference to visually assist observing the stochastic absorption profile.

The lower half of the resist layer receives significantly less light to begin with, so the absorbed photons ultimately define a smaller region at the bottom of the resist. Since the metal oxide resist is a negative-tone resist, the photon absorption determines where the resist remains after development. This means toppling of resist features (or missing resist features) from a narrower bottom than top (‘undercutting’) can be a stochastic defect specific to negative-tone resists, particularly metal oxide resists.

In common use, DUV photoresists would never have this problem, even with a lower dose and a lower absorption coefficient on the order of 1/um (Figure 3), because the resist is thicker and the pixels are effectively larger.

Figure 3. ArF photon absorption in the top half (left) vs. the bottom half (right) of a chemically amplified resist. The threshold to print here is taken to be 1200 photons absorbed per 7 nm x 7 nm pixel. The assumed dose (averaged over the displayed 280 nm x 280 nm area) is 30 mJ/cm2. The oval outline is for reference to visually assist observing the absorption profile.

A fairer EUV vs ArF comparison would use realistically difficult scenarios. Figure 4 compares the photons per pixel absorbed in the bottom half of the resist for 40 nm square pitch EUV (0.5 nm x 0.5 nm pixel) with 40 nm metal oxide resist thickness vs. 80 nm square pitch ArF (1 nm x 1 nm pixel) with 100 nm chemically amplified resist thickness, using a negative tone hole pattern. The EUV image assumes an ideal binary mask with quadrupole illumination, while the ArF image assumes a 6% attenuated phase shift mask with cross dipole illumination.

Figure 4. EUV (left) vs. ArF (right) photon absorption in the bottom half of the resist. The EUV resist thickness is 40 nm, while the ArF resist thickness is 100 nm. The pixel size is 0.5 nm x 0.5 nm for the EUV case, and 1 nm x 1 nm for the ArF case. The threshold to print here is taken to be 1.6 photons/pixel for EUV and 3.3 photons/pixel for ArF, to target the half-pitch. The assumed dose (averaged over the pitch) is 60 mJ/cm2.

At the finer pixel size, the roughness of the edge becomes more apparent for both wavelengths. However, the EUV case has lots of spots in the background where the exposure is sub-threshold, which will lead to potential resist removal during development, whereas the ArF case is free from such spots. The reason for this, in fact, has to do with the higher contrast of phase-shift masks compared to binary masks. The bright background has closer intensity to the central dark spot in the binary case, or less contrast, giving more opportunity for the background noise variation to reach levels comparable to the dark spot.

Acid generation (in chemically amplified resists) and electron release (following EUV exposure) lead to smoothing effects, which are simulated here using 4x Gaussian smoothing (sigma=2 pixels).  This leads to an effective resist blur of 4 times the pixel size, i.e., 2 nm for the EUV case and 4 nm for the ArF case.

Figure 5. EUV (left) vs. ArF (right) latent image in the bottom half of the resist, after 4x Gaussian smoothing (sigma=2 pixels). The EUV resist thickness is 40 nm, while the ArF resist thickness is 100 nm. The pixel size is 0.5 nm x 0.5 nm for the EUV case, and 1 nm x 1 nm for the ArF case. The threshold to print here is taken to target the half-pitch. The assumed dose (averaged over the pitch) is 60 mJ/cm2.

With smoothing, the general spottiness of the image is removed, leaving residual edge roughness, for both ArF and EUV cases (Figure 5). However, since the EUV case had more random background counts, the edge looks relatively rougher, and there is a tendency for defects occurring at and near the edge. These can also impact the effective edge placement.

To conclude, a higher absorption coefficient does not by itself avoid the stochastic defects occurring near the bottom of the resist layer, so a higher dose to compensate for the lower absorption there is still necessary.

References

[1] https://www.jstage.jst.go.jp/article/photopolymer/31/5/31_651/_pdf

[2] http://www.lithoguru.com/scientist/litho_papers/2019_Metrics%20for%20stochastic%20scaling%20in%20EUV.pdf

[3] https://www.spiedigitallibrary.org/journals/journal-of-micro-nanopatterning-materials-and-metrology/volume-20/issue-01/014603/Contribution-of-EUV-resist-counting-statistics-to-stochastic-printing-failures/10.1117/1.JMM.20.1.014603.full?SSO=1

[4] http://euvlsymposium.lbl.gov/pdf/2014/6b1e6ae745cd40aba5940af61c0c908e.pdf

[5] http://euvlsymposium.lbl.gov/pdf/2015/Posters/P-RE-06_Fallica.pdf

This article was first published in LinkedIn Pulse: EUV Resist Absorption Impact on Stochastic Defects

Also read:

Etch Pitch Doubling Requirement for Cut-Friendly Track Metal Layouts: Escaping Lithography Wavelength Dependence

Horizontal, Vertical, and Slanted Line Shadowing Across Slit in Low-NA and High-NA EUV Lithography Systems

Pattern Shifts Induced by Dipole-Illuminated EUV Masks


A Blanche DuBois Approach Won’t Resolve Traffic Trouble
by Roger C. Lanctot on 04-03-2022 at 6:00 am


Near the end of Tennessee Williams’ “A Streetcar Named Desire” the Blanche DuBois character, who has suffered a mental breakdown following an implied rape, tells the doctor and matron who have come to take her to the hospital: “Whoever you are – I have always depended on the kindness of strangers.” Sadly, this is the same mentality of municipalities turning to Waze to resolve traffic troubles.

More than 3,000 municipalities around the world – according to Waze – have turned to the popular navigation app to better understand their own traffic woes and resolve challenges. The irony here, of course, is that Waze often CREATES traffic problems by routing vehicles through neighborhoods abutting major highways – or even navigating users into dangerous areas such as favelas in Brazil, occupied territories in Israel, or forest fire zones in California.

Waze’s traffic insights are derived from crowd-sourced “probe” data from users of the app. The location data from users’ mobile phones traces the pace and direction of travel and users can report road hazards or the location of law enforcement vehicles for the benefit of other drivers.

The result – when calibrated to convert real-time data into predictive traffic models – is a compelling navigation tool that has put pressure on car makers that offer built-in navigation systems that often use outdated maps. Waze claims 140M users worldwide and that kind of ubiquity is hard to ignore.

It’s also hard to ignore the fact that Waze has yet to define a profit-generating business model.  The sheer desperation of this effort is reflected in the increasingly distracting array of display advertising that shows up in the app while in use.  Oddly, you can’t enter a destination into the app while driving, but the app can try to distract you with impossible-to-read advertisements while it is navigating.

The source of Waze’s strength, though, is also its greatest weakness. Waze has no reliable means to validate the user inputs regarding hazards and other observations – and on multiple occasions users have punked Waze into re-routing drivers away from particular neighborhoods or locations.

Waze has also been known to use its negative impact on local traffic patterns as a door opener with local municipalities. Waze creates the traffic problem – blamelessly! – and then begins collaborating with local traffic authorities to “hack” the app to overcome the routing snafu that is riling citizens.

In the end, Waze is still relying on the “kindness” of strangers, and the company runs its Waze for Cities program almost like a protection racket – mesmerizing cities into relying on Waze to fix their traffic problems by using it as a communication conduit.  In one such case, Sandy Springs, Georgia, officials joined forces with Waze to communicate confusing traffic patterns to drivers through the app.  On its face, this alone isn’t such a horrible idea, but failing to communicate the same information via 511 services, embedded navigation systems from Telenav, NNG, TomTom, and HERE, or even radio stations drives even more users to Waze.

The strangest thing about cities that have turned to Waze as a partner and a communications tool is that most cities have their own traffic information resources. In fact, cities have access to traffic cameras – thousands of them – mostly trained on predictable traffic hotspots.

In the U.S., the leading provider of nationwide traffic camera information is TrafficLand. TrafficLand is responsible for nearly half of all server-side traffic camera installations and is able to deliver still images or streaming video on demand. The company also performs digital analytics on the images it gathers.

With a proper front-end interface, built-in vehicle navigation systems could tap into TrafficLand content to allow drivers to make better-informed navigation decisions. There’d be no need to “trust” Waze, in this instance. Still images or video would confirm the reality of traffic conditions on the road ahead.

Better yet, what about auto makers as a source of front-facing camera data?  Mercedes-Benz and the Netherlands’ Ministry of Infrastructure and Water Management have announced a two-year project whereby Mercedes-Benz will share anonymized sensor data from its vehicles with the agency for the identification of road safety hotspots and maintenance issues.  The key difference here is the reliance on vehicle sensors – which include cameras – that generate verifiable and reliable inputs.

Between fixed cameras (TrafficLand) and vehicle-mounted cameras, local municipalities ought to be able to identify and resolve their traffic issues – without the assistance of Waze and its user-generated inputs. Cities can then share their data resources with any and all traffic information and navigation providers – not just Waze.

In fact, every city should have a data exchange program in place in which auto makers could participate. That could be a job for Telenav or NNG or HERE or TomTom. But please, not Waze. We shouldn’t rely entirely on strangers for our traffic insights when we can observe reality directly in real time.

Also read:

Auto Safety – A Dickensian Tale

No Traffic at the Crossroads

GM’s Super Duper Cruise


Cadence and DesignCon – Workflows and SI/PI Analysis
by Daniel Nenni on 04-02-2022 at 6:00 am


DesignCon 2022 is back to a live conference, from Tuesday, April 5th through Thursday, April 7th, at the Santa Clara Convention Center.

Introduction

DesignCon is a unique gathering in our industry.  Its roots incorporated a focus on complex design and analysis requirements of (long-reach) high-speed interfaces.  Technical presentations and vendor exhibits on the conference Expo floor spanned a wide variety of topics – e.g., SerDes Tx/Rx design methods;  PCB interconnect and dielectric materials;  cables and connectors for high-speed signals; EDA tools for PCB design, plus extraction and simulation of (strongly frequency-dependent) interconnect parameters.

World-renowned signal integrity (SI) and power integrity (PI) experts offered their insights into how to lay out PCB busses and place power decoupling and DC blocking capacitors for improved SI/PI fidelity.

And, the conference Expo highlighted the latest in technical equipment for high-speed interface analysis (on reference boards), from oscilloscopes and signal generators to bit-error rate testers to spectrum analyzers and vector network analyzers.

As the complexity of system designs has grown, and as high-speed interface design has expanded to encompass short, medium, and long-range topologies, so too has DesignCon expanded.

The emphasis on the materials and properties of boards, cables, and connectors is still strong, to be sure.  The introduction of 56Gbps and 112Gbps data rate standards (and pulse-amplitude modulated signal levels) has added to the focus on model extraction accuracy.  The transition from simple IBIS Tx/Rx electrical elements to more complex IBIS-AMI functional plus electrical models has enabled comprehensive “end-to-end” simulations.

In addition to the evolution of these more demanding tasks, there are three key focus areas evident in this year’s DesignCon program.

  • advanced 2.5D/3D packaging technology necessitates “full 3D” electromagnetic model extraction

Traditional rules-of-thumb used for early PCB layout definition of high-speed interface signals no longer apply to today’s system-in-package integration.  Insertion loss guidelines in “dB per inch per GHz” have no significance in a 2.5D package with heterogeneous functionality, spanning very wide high bandwidth memory (HBM) busses to source-synchronous short-reach interfaces between die.  The nature of this advanced packaging technology requires a full 3D extraction model for accurate analysis of insertion, reflection, and crosstalk behavior.
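
For context, the legacy rule of thumb mentioned above amounted to a simple linear loss budget.  A hypothetical example (the material loss figure and trace length are invented for illustration) shows how it was applied to a long PCB channel, and why a single per-inch number says nothing about a heterogeneous 2.5D package:

```python
def channel_loss_db(loss_db_per_inch_per_ghz, length_inches, nyquist_ghz):
    """Legacy PCB budget: total insertion loss scales linearly with
    trace length and Nyquist frequency (half the symbol rate)."""
    return loss_db_per_inch_per_ghz * length_inches * nyquist_ghz

# Hypothetical: 0.9 dB/inch/GHz laminate, 10-inch trace,
# 56Gbps PAM4 link (28Gbaud -> 14 GHz Nyquist)
assert round(channel_loss_db(0.9, 10, 14.0), 6) == 126.0
```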

  • tight integration between design and analysis tools is absolutely required

Traditional PCB and backplane-centric high-speed interface design didn’t offer many degrees of freedom – e.g., board layer materials and thicknesses, via topologies and types (through vias or backdrilled vias, for impedance control).

The development of a 2.5D package requires focus on interface planning, as an integral part of the initial design flow.  The sheer number of interface connections and the disparity in their clocking and IL/RL/Xtalk requirements necessitates a tight design and analysis optimization loop.  A PCB may be able to accommodate the insertion of a high-speed repeater (re-driver or re-timer) component later in the design cycle to address a failing signal spec.  A 2.5D package design offers no such flexibility – it has to be “first time right”.  The design and analysis workflow must offer fast, accurate results. In addition, there is typically very limited SI/PI expertise available to design companies – these workflows need to be available to many designers, based on a familiar EDA platform environment.

  • IP offerings are critical to accelerating product introductions integrating advanced interface standards

The pace at which new interface definitions are being introduced is rapid.  DesignCon has expanded to provide system developers with information on (silicon-proven, qualified) IP available for SoC integration.

Cadence at DesignCon

I recently had the opportunity to chat briefly with Sherry Hess at Cadence, to learn what new technologies Cadence will be presenting at this year’s DesignCon.  Indeed, their focus is on new workflows, improved 2.5D/3D modeling accuracy, and advanced IP.

Workflows      

Here are some of the workflow-related presentation sessions.

  • NO Exit Ramps Needed – Cadence’s System Design Workflow Delivers Seamless In-Design Analysis, Reducing Turnaround Time and Minimizing Risk
  • Mainstream Signal Integrity Workflow for PCI 6.0 PAM4 Signaling
  • Amphenol: 112G Connector and Board Design/Analysis Workflow
  • Meta (Facebook): MIPI-C Board and Camera Interface Design/Analysis Workflow
  • Microsoft: Interconnect Optimization of Wearables with an In-Design Analysis Workflow

Note that these presentations include collaborations with various other firms using these workflows, from component providers to systems companies.

Clarity 3D Solver Enhancements

The parametric model extraction of a 3D structure involves a tradeoff in accuracy versus computational resources.  The complex nature of 3D geometries requires an intricate finite element mesh, whether for a system-in-package or multiple packages on a combination of rigid and/or flex substrates.  The electromagnetic solver for the mesh is computationally demanding.

At DesignCon, Cadence will be demonstrating significant enhancements to their Clarity 3D solver, including:

  • a new distributed meshing algorithm, with significant reduction in simulation runtimes
  • a new machine-learning based algorithm for optimizing a “sweep” of design parameters
  • workflow integration with Cadence Allegro (and Allegro Package Designer), Integrity 3D-IC, and Virtuoso RF platforms
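To make the parameter-“sweep” optimization idea concrete, here is a deliberately simplified, hypothetical sketch – this is not Cadence’s algorithm, and the “solver” is a cheap analytic stand-in – showing how a surrogate model can rank candidate geometries so that only the most promising ones are passed to an expensive EM solve:

```python
import itertools

def mock_solver(trace_width_um, dielectric_h_um):
    """Stand-in for an expensive full-wave EM solve; returns a mock loss figure (dB)."""
    return 0.04 * dielectric_h_um + 1.8 / trace_width_um

def full_factorial(widths, heights):
    """Exhaustive sweep: one solver run per parameter combination."""
    return {(w, h): mock_solver(w, h) for w, h in itertools.product(widths, heights)}

def surrogate_guided(widths, heights, budget=4):
    """Toy surrogate-guided sweep: fit a cheap linear model to two corner
    samples, rank all candidates by predicted loss, then run the expensive
    solver only on the 'budget' most promising candidates."""
    lo, hi = (min(widths), min(heights)), (max(widths), max(heights))
    f_lo, f_hi = mock_solver(*lo), mock_solver(*hi)
    span = (hi[0] - lo[0]) + (hi[1] - lo[1])

    def predict(w, h):  # linear interpolation between the two corner responses
        t = ((w - lo[0]) + (h - lo[1])) / span
        return f_lo + t * (f_hi - f_lo)

    ranked = sorted(itertools.product(widths, heights), key=lambda p: predict(*p))
    return {p: mock_solver(*p) for p in ranked[:budget]}

full = full_factorial([2, 4, 8], [50, 100])    # 6 expensive solves
best = surrogate_guided([2, 4, 8], [50, 100])  # 2 corner solves + 4 ranked solves
```

In this toy run the surrogate’s top four candidates include the true optimum while cutting the number of expensive solves; a production flow would use a trained ML model, real solver calls, and far larger parameter spaces.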

IP Strategy

It wasn’t long ago that 28Gbps was the emerging standard interface.  Cadence will also be presenting their IP development for 224Gbps.

  • The Future of 224G Serial Links

Appended below are DesignCon links of interest – at a minimum, if you are involved in functional and/or electrical interface design, from system-in-package to long-reach signaling, you should definitely get a FREE Expo pass.  And, be sure to stop by the Cadence Expo booth for product demonstrations and more technical information.

Cadence sessions at DesignCon:

https://events.cadence.com/event/df7b2870-50d2-48bd-846b-bd8e3c2ea7b2/summary

Cadence Clarity 3D Solver:

https://www.cadence.com/en_US/home/tools/system-analysis/em-solver/clarity-3d-solver.html

DesignCon 2022 Registration:

https://www.designcon.com/en/conference/passes-pricing.html

Also read:

Symbolic Trojan Detection. Innovation in Verification

Leveraging Virtual Platforms to Shift-Left Software Development and System Verification

Using a GPU to Speed Up PCB Layout Editing


Podcast EP69: Ayar Labs and the Future of Optical I/O

Podcast EP69: Ayar Labs and the Future of Optical I/O
by Daniel Nenni on 04-01-2022 at 10:00 am

Dan is joined by Hugo Saleh, senior VP of commercial operations and managing director of Ayar Labs, UK. Hugo discusses the technology and application of optical I/O, its use and impact now and in the future.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


DUV, EUV now PUV Next gen Litho and Materials Shortages worsen supply chain

DUV, EUV now PUV Next gen Litho and Materials Shortages worsen supply chain
by Robert Maire on 04-01-2022 at 8:00 am

EUV DUV Lithography

-New PUV light source will push litho into Angstrom Era
-Rare earth elements shortages add to supply chain woes
-Could strategic wafer reserve releases lower memory pricing
-Can we cut off/turn off Russian access to chip equipment?

DUV, EUV and now “PUV” to become next generation lithography

Lithography is the locomotive that pulls the entire semiconductor industry along the Moore’s Law curve to ever smaller feature sizes, using ever shorter wavelengths of light to print smaller and smaller transistors. Light is the paintbrush, and shorter wavelengths are akin to finer and finer paintbrushes.

The industry has gone from using visible-light “g-line” and “i-line” sources to DUV – deep ultraviolet light generated by KrF (248nm) and ArF (193nm) laser sources. We have now transitioned to the smaller “paintbrush” of EUV, or extreme ultraviolet, at 13.5nm.

The industry is already forced to use optical tricks to print 5nm- and 3nm-node features from 13.5nm light, much as with prior nodes, but this approach is running out of gas – we have to move to an even shorter wavelength than EUV for future nodes.
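The wavelength scaling can be made concrete with the textbook Rayleigh criterion, R = k1 · λ / NA. The k1 and NA values below are assumed, representative numbers (not any specific tool’s specs), but they show why even EUV light cannot directly print the smallest advertised feature sizes without resolution-enhancement tricks:

```python
def rayleigh_half_pitch(wavelength_nm, numerical_aperture, k1=0.30):
    """Minimum printable half-pitch per the Rayleigh criterion: R = k1 * wavelength / NA."""
    return k1 * wavelength_nm / numerical_aperture

# Assumed, representative tool parameters:
duv = rayleigh_half_pitch(193.0, 1.35)  # ArF immersion DUV
euv = rayleigh_half_pitch(13.5, 0.33)   # a current-generation EUV scanner
print(f"DUV half-pitch ~{duv:.1f} nm, EUV half-pitch ~{euv:.1f} nm")
```

With these assumed numbers the single-exposure half-pitch comes out around 43nm for immersion DUV and roughly 12nm for EUV – which is why “5nm” and “3nm” node names owe more to multi-patterning and marketing than to raw optics.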

PUV is the next generation of lithographic light, at 6.5nm. This is still considered “soft X-ray”, so we can use current lens and reticle technology, as we are not yet into the “tender” or “hard” X-rays of sub-nanometer wavelengths.

One of the key advantages of PUV is that the light source is polarized (hence the “P” in PUV), which allows for more accurate printing. In fact, the proposed litho tools will have two light sources of opposing polarization (P & S) for higher contrast.

The technology’s other nickname is “Plaid”, which reflects the cross-hatching of crossed-polarization printing.

Plaid UV (PUV) will be generated by a new methodology, rather than the current laser-shot tin-droplet LPP source. This new source will require short but very high spikes of power, generating almost fusion/fission-like conditions. It will require battery power technology, as local power grids already strain under EUV power requirements.

It is rumored that Intel is working with Tesla as the supplier of the battery technology for Plaid UV as it has significant experience in this area.

Plaid could accelerate Intel past TSMC in race to leading edge

If it is Intel working on PUV, and they are successful, it could be the secret weapon that accelerates them past TSMC in the race to the leading edge of technology. Not only does Plaid offer much smaller feature sizes and higher contrast, it also offers higher throughput (speed), thanks to its much higher-power light.

In addition, there is the potentially huge benefit that existing EUV tools could be retrofitted with Plaid sources, as the EUV lenses and reticles are (in theory) compatible. All of this makes Plaid well beyond Ludicrous…

Demonstration of Plaid Technology

Rare materials shortages add to supply chain woes in chip industry

The conflict in Ukraine has had far-reaching impact around the globe, as international trade has slowed and may grind to a halt between East and West. This is of key concern for rare earth elements and especially noble gases, of which Ukraine was a large exporter – production there has essentially stopped.

Neon, which is used in lithography tools, seems to be in adequate supply for the time being but could run short over time. Xenon, potentially needed in future EUV reticle inspection tools, is also hard to find.

Rare earth elements come primarily from China, with some from Russia and other countries with lower-cost labor and lax mining regulations. Yttrium (Y) is used not only in LEDs but also as a lining inside deposition and etch chambers to protect them. Neodymium (Nd) is used in magnets in electronics.

Phlebotnium (Ph) is a key nanotechnology ingredient mechanically applied to enhance wafers. Unobtainium (Uo) is high on the periodic table and is used in matter/anti-matter experiments due to its instability and isotope half-life. It is currently one of the key raw materials in advanced, atomic-layer, selective deposition and etch tools due to these unique, otherworldly characteristics.

Alternate sources of these rare materials need to be developed quickly. The US used to have rare earth mines, but few if any are left. A newly mined source of Unobtainium is Pandora, but it will take time to develop – and may require help from Elon Musk as well, for transport.

It’s not just money and fabs that are needed for US self-sufficiency in chips – it’s elusive, hard-to-find rare materials as well.

Strategic Wafer Reserves may be released to alleviate shortages

The US recently announced that it was releasing a million barrels of oil a day from the strategic reserves. It is not widely known but there is also a strategic reserve of semiconductor wafers. Obviously it would be difficult to store chips of every type and vintage but common devices, mainly memory and CPU types are stockpiled by the defense electronics agencies. This is especially important as many semiconductor devices used in defense applications have been discontinued long ago.

While not huge, a release of strategic wafer reserves could help in some areas where chips are in short supply. Rationing of the released devices will obviously be prioritized toward the neediest applications.

We could see memory pricing negatively impacted if stockpiles of memory chips are released into the market too quickly, but that would have the beneficial effect of lowering prices and inflation in semiconductors generally.

Is there a “kill switch” built in to semiconductor equipment?

Russia does not have much of a semiconductor industry. Unlike China, Russia has not grasped the critical role that the semiconductor industry plays in today’s world. It is perhaps the only remaining manufacturer of vacuum tubes, because much of its electronics still uses them. There are a handful of ancient fabs in Russia producing 10- to 15-year-old technology semiconductors, which is probably right in line with the current state of the art of Russian electronics as well.

Pretty much all of the semiconductor equipment in Russia comes from the West. We wonder if any of that equipment has remote “diagnostic” capabilities. Could it be disabled or turned off – or could suppliers simply stop shipping spares and consumables?

It’s no wonder TSMC is so paranoid about the tools in its fabs. USB sticks are not allowed in, and no live internet connections are permitted in the fab, for fear of remote hacking.

Maybe tool makers should install wireless backdoors into their equipment for non-paying customers…. or maybe they already have… you never know.

Happy April First!!!!

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read

AMAT – Supply Constraints continue & Backlog Builds- Almost sold out for 2022

Intel buys Tower – Best way to become foundry is to buy one

KLAC- Great quarter and year – March Q is turning point of supply chain problem


CEO Interview: Kelly Peng of Kura Technologies

CEO Interview: Kelly Peng of Kura Technologies
by Daniel Nenni on 04-01-2022 at 6:00 am

Kelly Peng, Co-founder, and CEO of Kura Technologies

This interview is with Kelly Peng, Co-founder and CEO of Kura Technologies. Kura Gallium, Kura’s first product, was named Best of CES 2022 and received a 2022 CES Innovation Award as well. Kelly is an inventor, engineer, and entrepreneur who leads a team of dedicated innovators redefining the term “Augmented Reality”. She was a recipient of the prestigious Forbes “30 Under 30” in 2019 for her work in developing the most advanced augmented reality glasses, which will start sampling in 2022. Based in Silicon Valley, Kura Technologies is eclipsing the competition in field of view, resolution, brightness, transparency, depth of field, size, and other critical metrics.

Web3, and in particular Augmented Reality, is a hot topic at the moment. Can you help our readers understand what differentiates Kura from the many other early-stage companies in this market?

As you have indicated, interest in Virtual Reality and Augmented Reality is currently expanding in all directions. The fact is that we are experiencing a rapid change in the way AR and VR will be deployed in the very near future. These changes will open up new markets that were inconceivable a few years ago and will allow us to interact with information in the world around us in practical and interesting ways. The emerging applications will enable new activities – medical diagnosis and treatment, training, 3D design visualization, industrial inspection, and face-to-face virtual communication – all through a pair of glasses.

To facilitate this, Kura has focused our technology development to create a visualization system that is as natural as wearing glasses, and allows the wearer to experience the enhanced content necessary to optimize the desired reality. The Kura Gallium is the first pair of AR glasses to offer a 150-degree full frame field of view, 95% transparency, 8K resolution, unlimited range of depth, and many other features that provide the seamless view of the natural and augmented surroundings often referred to as the “Metaverse”.

What is the status of the glasses that were demonstrated at CES in January?

We showed demos with the world’s biggest field of view in AR, high transparency, and high brightness; newly assembled eyepieces for our upcoming dev kits; and software applications – including 3D model viewers and telepresence and remote-collaboration tools – running on our headset with 9-degrees-of-freedom head tracking and gesture input. The team is focused on a couple of major developments on the hardware and telepresence-platform side of Gallium. One of the larger projects is the development of the ASIC that forms the backbone of the system electronics. Custom ASICs and silicon were taped out last fall at some of the world’s biggest production foundries, and we are pleased to announce that the silicon has already come back from the fab and has been packaged. We have been testing and running characterization recently, and the results look great. The tape-out is a big success!

Can you provide some more details about the ASIC?

Yes, the internal code name for the chip is “Mill’s Creek”. The chip incorporates the control and driver circuits for the micro-LED displays; it is the world’s fastest micro-LED display driver ASIC and the core enabler of our 8K resolution. It is one of the most critical components in the system because it provides more than 100x resolution expansion, on-the-fly pixel repair, high dynamic range, and full-color images.

The Augmented Reality user experience is dependent on high-quality display capabilities. The Gallium glasses use fully customized micro-LED displays to create the brightness and image sharpness that people expect. However, the micro-LED technology is not optimized unless the display driver and our optical architecture are optimized specifically for the application. The “Mill’s Creek” ASIC is fully customized specifically for the Kura Micro-LED display hardware with a completely unique architecture, which is unlike practically all of the other headsets that use off-the-shelf components for the display driver. This successful tape-out is a big milestone for us toward pushing Gallium to production.

Startup companies are all about the team of engineers and innovators that are driving the development. Can you give us a little more insight into the team?

Kura is currently made up of over 35 people, all contributing to product development. We have been very fortunate to pull together a great mix of talent. More than half of Kura’s founding and leadership team are from MIT, and three of our lead engineers together hold 400+ patents. Kura’s in-house ASIC design team is led by Mark Flowers, Kura’s Director of Technology, who was previously the founding CTO of LeapFrog (which IPO’d and was valued at $1B+). He has over 30 years of leadership experience designing and delivering custom mixed-signal ASICs. In the past, he was responsible for the shipment of tens of millions of customized chips as well as integrated consumer and enterprise platforms and products. In an earlier startup, Mark was the co-inventor of DSL, with over 800 million installed lines; that company was acquired by Texas Instruments. He graduated from MIT with a Master’s and Bachelor’s in electrical engineering, with a specialty in IC design and computer science.

We are also fortunate to have a strong operations organization. That team is led by Gregory Gallinat, our COO, and Chuck Alger, Director of Supply Chain and Manufacturing. A core focus of the Ops team is defining and building the worldwide supply chain and business practices to support Kura’s product launch and growth trajectory. Chuck has more than 20 years of experience with Intel, Microsoft, and CP Display, including multiple manufacturing-site launches for products like HoloLens and Surface. He also has extensive expertise in semiconductor quality and reliability from his time at Intel. Chuck also worked as Director of Supply Chain at Compound Photonics (just acquired by Snap), a company building ASICs for driving micro-displays such as micro-LED and LCoS.

What’s the demand and upcoming adoption of Kura Gallium?

We are a platform company poised to reshape the landscape of AR. Interest in Kura Gallium has been fantastic over the last year. The CES awards, and the exposure we received through various other venues, have opened up the path to adopting the platform in a broad range of applications. Speaking invitations at conferences such as SPIE have also exposed our thought leadership to this market. Kura currently has orders from over 350 companies, 100% of which are in-bound. Among those are over 50 Fortune 500 companies, with paid order requests totaling more than 100K units. These companies recognize the superiority of our performance and plan to use our product and platform in areas such as remote collaboration, telepresence, virtual showrooms, training, entertainment, and tele-medicine.

Many of these clients have also become investors in Kura. We also have several active projects with government agencies that see Augmented Reality as a critical technology for training, visualization, and remote collaboration and assistance. As you can see, the need for AR in various enterprise applications is very high now, and we see many of these organizations adopting our product quickly – clients repeatedly tell us they want the headset deployed as soon as possible. The CEO of Tokens.com, a publicly traded company that invests in Web3 assets, recently said in an interview on CNBC that “within the next 24 months all major companies will have a presence in the Metaverse like they have a website.”

Your initial focus seems to be on the enterprise and B2B2C side. When will consumers be buying Kura Gallium?

As with most emerging technologies, the early adopters are in the enterprise market. The enormous benefit of having real-time information augment the user’s forward-looking view can be realized quickly in many industries. We are launching our hardware + software platform (global holographic telepresence platform, computer vision/AI SDK, and AR data platform), and many of our clients are industry leaders, or among the biggest companies in the world, in automotive, training, design, telecommunications, entertainment, etc.

AR is an industry in which demand has long been waiting for a product that offers acceptable vision quality in a good form factor. Kura’s product and platform serve the biggest demand in the industry and greatly expand the range of use models. Not to mention, Kura’s performance, combined with the comfort of our first product, already out-competes all of the “consumer-targeted” AR glasses and solutions available today. The consumer demand is there and will grow rapidly following enterprise adoption, supported by a rich set of applications like the App Store. We have already designed many of the core technologies for our future generations of products, which will launch for both consumers and enterprises with improved performance and an even more compact design, enabled by a deeper level of silicon integration.

Also read:

CEO Interview: Aki Fujimura of D2S

CEO Interview: Frankwell Lin, Chairman and CEO of Andes Technology

CEO Interview: Tamas Olaszi of Jade Design Automation


AMIQ EDA Adds Support for Visual Studio Code to DVT IDE Family

AMIQ EDA Adds Support for Visual Studio Code to DVT IDE Family
by Kalar Rajendiran on 03-31-2022 at 10:00 am

FSM of State DVT VSCodium

“A picture is worth a thousand words” is a widely known adage across the world. Recognizing patterns and cycles becomes easier when data is presented pictorially. Naturally, data visualization technology has a long history, from the early days when people used paper and pencil to graph data to modern-day visualization platforms. While visualization products have gotten fancier, driven by the data age we are in, the semiconductor industry was among the early industries that needed them. Electronics is all about signals and waveforms, and it is easier to comprehend and analyze that data graphically than as a table of data points. While Microsoft Excel has long offered visualization through its graphing features, visualization solutions received broad market attention after Tableau introduced its visualization platform in 2003.

Visual Studio (VS)

Over the last couple of decades, rapid advances in software have led to the introduction of integrated development environment (IDE) platforms. While there are many development platforms available to software developers, Eclipse and Visual Studio are two well-known and widely used IDEs. Is an IDE platform a visualization platform per se? Well, the platform itself is an environment that enables visualization of all sorts through the specific tools that work within it, made possible by the ongoing addition of extensions that interface to various analysis and visualization tools.

So, why is Visual Studio called “Visual” Studio? And does that mean the Eclipse IDE is not a visualization platform? The Visual Studio name has Visual Basic to thank: the developer GUI of Visual Basic earned it the name almost three decades ago. While the development environment has expanded since then, Microsoft has kept the “Visual” prefix for its modern-day IDE. The Eclipse IDE is also a visualization platform, even though it does not have “visual” in its name.

Visual Studio (VS) Code

Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux, and macOS. It is a standalone tool, lighter weight than the full Visual Studio IDE, and includes support for debugging, embedded Git control, and the IntelliSense feature. IntelliSense is a code-completion feature that is context-sensitive to the language being edited. VS Code is also customizable, from the editor’s theme to keyboard shortcuts and preferences. In November 2015, Microsoft released the VS Code source code under the MIT License and posted it on GitHub.

A rich ecosystem of extensions and themes is available for VS Code, making it a very popular editor. The open-source nature of the code base also attracts a large section of the developer community. The editor’s speed is not shabby either.

Semiconductor Design and Verification

Design and verification of semiconductors involves code development too. While VHDL and Verilog are hardware description languages rather than conventional software languages, they are programming languages nonetheless. Design and verification tasks can benefit from an IDE just as software coding and testing do. As such, there has been interest in, and a push for, IDE offerings that support the semiconductor community.

AMIQ EDA

AMIQ EDA provides software tools that enable hardware design and verification engineers to improve productivity and reduce time to market. Prior to its spinoff from AMIQ Consulting, the team had observed three recurring challenges semiconductor companies faced: developing new code to meet tight schedules, understanding legacy or poorly documented code, and getting new engineers up to speed quickly. In the software world, IDEs are commonly used to overcome such challenges. But in the early 2000s, no IDE was available for design and verification languages such as Verilog, VHDL, e and SystemVerilog. So, they developed an IDE for internal use.

In 2008, AMIQ EDA was spun off from AMIQ Consulting and Design and Verification Tools (DVT) Eclipse IDE was launched. They launched DVT Debugger in 2011, Verissimo SystemVerilog Testbench Linter in 2012, and Specador Documentation Generator in 2014. As a company that strongly believes in user-driven development and building solutions based on real-life experiences, they recently launched DVT IDE for VS Code. You can read their press announcement here.

AMIQ EDA’s products help customers accelerate code development, simplify legacy code maintenance, speed up language and methodology learning, and improve source code reliability.

DVT IDE for VS Code

DVT IDE for Visual Studio Code (VS Code) is an integrated development environment (IDE) for SystemVerilog, Verilog, Verilog-AMS, and VHDL. The DVT IDE consists of a parser, the VS Code (editor), an intuitive graphical user interface, and a comprehensive set of features that help with code writing, inspection, navigation, and debugging. It provides capabilities that are specific to the hardware design and verification domain, such as design diagrams, signal tracing, and verification methodology support. The VS Code platform’s extensible architecture allows the DVT IDE to integrate within a large extension ecosystem and work flawlessly with third-party extensions.

DVT IDE for VS Code shares all of its analysis engines with the DVT Eclipse IDE, which has been field-proven since 2008. The product enables engineers to inspect a project through diagrams. Designers can use HDL diagrams such as schematic, state-machine, and flow diagrams. Verification engineers can use UML diagrams such as inheritance and collaboration diagrams. Diagrams are hyperlinked, synchronized with the source code, and can be saved for documentation purposes. Users can easily search and filter diagrams as needed – for example, visualizing only the clock and reset signals in a schematic diagram. Both tools also offer important non-visualization features such as hyperlinked navigation, auto-complete, code refactoring, and semantic searches for usages, readers, and writers of signals and variables.

For a couple of screenshots showing the DVT IDE for VS Code in action, refer to the Figures below.


DVT IDE for VS Code is available for download from the VS Code marketplace. For more details, refer to the product page.

Also read:

Automated Documentation of Space-Borne FPGA Designs

Continuous Integration of RISC-V Testbenches

Continuous Integration of UVM Testbenches