
Microchips in Humans: Consumer-Friendly App, or New Frontier in Surveillance?

by Ahmed Banafa on 10-11-2022 at 10:00 am


In 2021, a British-Polish firm known as Walletmor announced that it had become the first company to sell implantable payment microchips to everyday consumers. While the first microchip was implanted into a human way back in 1998, according to BBC News—so long ago it might as well be the Dark Ages in the world of computing—it is only recently that the technology has become commercially available (Latham 2022). People are voluntarily having these chips—technically known as radio frequency identification (RFID) chips—injected under their skin, because these tiny silicon chips allow them to pay for purchases at a brick-and-mortar store just by hovering a hand over a scanner at a checkout counter, entirely skipping any kind of credit card, debit card, or cell phone app.

While many people may initially recoil from the idea of having a microchip inserted into their body, a 2021 survey of more than 4,000 people in Europe found that 51 percent of respondents said they would consider this latest form of contactless payment for everything from buying a subway Metro card to using it in place of a key fob to unlock a car door (Marqeta/Consult Hyperion 2021).

In some ways, the use of RFID chips in this manner is merely an extension of what has gone on before; the chips are already widely used among pet owners to identify a pet when it is lost. The chips come in many sizes and versions and are far more common than most consumers realize—they are sometimes sewn into articles of clothing so that retailers can monitor the buying habits of their customers long after a purchase has been made. And Apple has now come out with its button-sized trackers, which it dubs “AirTags”: Clip one onto your keys, and the AirTag will help you find where you accidentally dropped them—as well as making it simple to track anyone, said the Washington Post in “Apple’s AirTag trackers made it frighteningly easy to ‘stalk’ me in a test” (Fowler 2021). All for less than $30 per AirTag.

So, to some extent, human-machine products and the use of RFID chips are old hat; the underlying driver has always been the goal of expanding the abilities and powers of humans by making certain tasks easier and less time-consuming.

Consequently, such consumer technology can look like the next logical step—especially among those who already favor piercings and tattoos. But on second glance, the insertion of identifying microchips in humans would also seem to bear the seeds of a particularly intrusive form of surveillance, especially at a time when authorities in some parts of the world have been forcibly collecting DNA and other biological data—including blood samples, fingerprints, voice recordings, iris scans, and other unique identifiers—from all their citizens, in an extreme form of the surveillance state. Before deciding what to think of the tech, we ought to look under the hood, and find out more about some of the nuts and bolts of this hybrid human-machine technology.

Read the full article at: https://thebulletin.org/premium/2022-09/microchips-in-humans-consumer-friendly-app-or-new-frontier-in-surveillance/

Also Read:

Intellectual Abilities of Artificial Intelligence (AI)

The Metaverse: Myths and Facts

Quantum Computing Trends


Where Are EUV Doses Headed?

by Fred Chen on 10-11-2022 at 6:00 am


In spite of increasing usage of EUV lithography, stochastic defects have not gone away. What’s becoming clearer is that EUV doses must be managed to minimize the impact from such defects. The 2022 edition of the International Roadmap for Devices and Systems has updated its Lithography chapter [1], revealing an upward trend in dose with decreasing feature size (Figure 1).

Figure 1. Increasing EUV doses are projected by the IRDS 2022 Lithography chapter for decreasing feature diameter. The plotted doses give photon numbers of 4000-7000 within +/-5% CD of the edge; the photon number decreases with decreasing diameter.

The occurrence of stochastic defects actually defines an EUV dose window [2]. The consequences of going outside this window are shown in Figure 2.

Figure 2. 40 nm pitch contact holes have dose windows defined by the occurrence of stochastics. Too low a dose (left) results in insufficient photon absorption within the target circular area (example: encircled blue spots). Too high a dose (right) results in narrow gaps between features in which bridges (encircled adjacent pixels partly filled with orange) may form due to excessive photon absorption. The pixel size is 1 nm x 1 nm.

Too low a dose results in too few photons absorbed, which leads to underexposure-type defects such as missing, misshapen, or undersized contacts. On the other hand, too high a dose results in overexposure-type defects, where gaps between exposed areas are accidentally bridged. From a multitude of studies on this topic, it is understood that the occurrence of defects is minimized (if not completely eliminated) within some range between the two limits. We may expect that this dose window will shift toward higher values as feature sizes shrink.
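
To make the photon-counting argument concrete, here is a back-of-the-envelope sketch (my own illustration; the sampling area and dose values are assumptions, not IRDS numbers). At 13.5 nm each EUV photon carries about 92 eV, so a dose in mJ/cm2 converts directly into an incident photon density, and Poisson shot noise falls off only as one over the square root of the photon count:

```python
import math

# Back-of-the-envelope EUV photon statistics (illustrative sketch).
PHOTON_ENERGY_J = 6.626e-34 * 2.998e8 / 13.5e-9  # E = hc/lambda ~ 1.47e-17 J (~92 eV)

def photons_per_nm2(dose_mj_cm2: float) -> float:
    """Incident photons per nm^2 for a given dose in mJ/cm^2."""
    joules_per_nm2 = dose_mj_cm2 * 1e-3 / 1e14   # 1 cm^2 = 1e14 nm^2
    return joules_per_nm2 / PHOTON_ENERGY_J

for dose in (30, 60, 100):
    density = photons_per_nm2(dose)
    area_nm2 = 400               # assumed ~20 nm x 20 nm region of a contact hole
    n = density * area_nm2
    print(f"{dose} mJ/cm^2 -> {density:.1f} photons/nm^2, "
          f"{n:.0f} photons in {area_nm2} nm^2, shot noise ~{100 / math.sqrt(n):.1f}%")
```

Halving the dose roughly doubles the relative photon-count fluctuation in a fixed area, which is why the low-dose edge of the window produces missing or undersized contacts.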

The trend toward higher doses will obviously drive source power toward higher targets [3]. However, even at 500 W, doses above 100 mJ/cm2 will push throughput below 100 wafers per hour (Figure 3).

Figure 3. Throughput vs dose, as a function of source power. The calibration is based on Fig. 15 from Ref. 3.
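
The underlying trade-off can be captured with a rough scanner throughput model (a sketch with assumed overhead and optical-efficiency numbers, not the calibrated curve from Ref. 3): exposure time per wafer scales as dose divided by source power, on top of a fixed wafer-handling overhead.

```python
# Rough throughput model: wafers per hour vs. dose and source power (illustrative).
def wafers_per_hour(dose_mj_cm2: float, source_w: float,
                    exposed_cm2: float = 600.0,  # exposed area per 300 mm wafer (assumed)
                    efficiency: float = 0.004,   # source-to-wafer optical efficiency (assumed)
                    overhead_s: float = 10.0) -> float:
    energy_j = dose_mj_cm2 * 1e-3 * exposed_cm2      # energy that must reach the wafer
    expose_s = energy_j / (source_w * efficiency)    # exposure time at the wafer
    return 3600.0 / (expose_s + overhead_s)

for power_w in (250, 500):
    for dose in (30, 60, 100):
        print(f"{power_w} W, {dose} mJ/cm^2 -> {wafers_per_hour(dose, power_w):.0f} WPH")
```

With these assumed parameters the model lands just below 100 WPH at 500 W and 100 mJ/cm2, consistent with the trend described above; the point is the inverse scaling of throughput with dose, not the specific numbers.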

Increasing source power is also an issue for environmental impact; EUV machines already consume over a megawatt each [4]. In order to be able to pass more wafers per day through each machine, multipatterning may have to be considered [5]. Lower doses would be acceptable for larger exposed features, but these then need a post-litho shrink and have to be packed successively into the tighter pitches, as already practiced with DUV lithography.

References

[1] https://irds.ieee.org/editions/2022/irds%E2%84%A2-2022-lithography

[2] J. van Schoot et al., “High-NA EUVL exposure tool: key advantages and program status,” Proc. SPIE 11854, 1185403 (2021).

[3] H. Levinson, “High-NA EUV lithography: current status and outlook for the future,” Jpn. J. Appl. Phys. 61 SD0803 (2022).

[4] P. van Gerven, https://bits-chips.nl/artikel/hyper-na-after-high-na-asml-cto-van-den-brink-isnt-convinced/

[5] A. Raley et al., “Outlook for high-NA EUV patterning: a holistic patterning approach to address upcoming challenges,” Proc. SPIE 12056, 120560A (2022).

This article first appeared in LinkedIn Pulse: Where are EUV Doses Headed?

Also Read:

Application-Specific Lithography: 5nm Node Gate Patterning

Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists

EUV’s Pupil Fill and Resist Limitations at 3nm


The Increasing Gaps in PLM Systems with Handling Electronics

by Rahul Razdan on 10-10-2022 at 6:00 am


Product LifeCycle Management (PLM) systems have shown incredible value for integrating the enterprise with a single view of the product design, deployment, maintenance, and end-of-life processes.  PLM systems have traditionally grown from the mechanical design space, and this still forms their strength.

Meanwhile, due to the revolution in semiconductors, electronics have become increasingly integrated within system designs in nearly all industrial segments. To date, PLM systems have handled electronics largely as pseudo-mechanical components. However, with the rapid increase in electronic content (for example, over 40% of automotive cost), this treatment of electronics within PLM systems is eroding the fundamental value of PLM for its customers. This article outlines the increasing gaps created by electronics in PLM systems, and the nature of the required solutions.

What is the value of PLM?

Figure 1:  PLM

Most product development teams use PLM systems from companies such as PTC, Siemens, Dassault, Zuken, Aras and others to integrate major functions of the enterprise (Figure 1). Underlying technologies of data vaulting, structured workflows, collaboration, and analytics provide a coherent view of the state of a project, and the significant value delivered is a streamlined product development and lifecycle management capability. PLM infrastructure intersects with design through domain-specific design tools (mechanical, electronic, software, and more). The semantic understanding of any underlying data held by PLM is actually contained in these domain-specific design tools. All the significant parts of the enterprise (design, manufacturing, field, product definition) use these domain-specific tools to interact with the underlying PLM data.

System PCB Electronic Design:

Figure 2:  Electronics Design Process for System PCB customers in non-consumer markets

 

The economics of semiconductor design imply that custom semiconductors only make sense for high-volume markets. Today, this consists largely of the consumer (cell phone, laptop, tablet, cloud, etc.) marketplace. In the consumer marketplace, a co-design model for semiconductors and systems has evolved, and this model is well supported by the Electronic Design Automation (EDA) industry. Given the size of the markets involved, these projects are also typically very well resourced. However, for every other market, the electronics design flow follows the pattern shown in Figure 2.

In this non-consumer electronics flow, the electronic design steps consist of the following stages:

  1. System Design:  In this phase, a senior system designer is mapping their idea of function to key electronic components. In picking these key components, the system designer is often making these choices with the following considerations:
    1. Do these components conform to any certification requirements in my application?
    2. Is there a software (SW) ecosystem which provides so much value that I must pick hardware (HW) components in a specific software architecture?
    3. Are there AI/ML components which are critical to my application which imply choice of an optimal HW and SW stack most suited for my end application?
    4. Do these components fit in my operational domain of space, power, and performance at a feasibility level of analysis?
    5. Observation: This stage of design determines the vast majority of immediate and lifecycle cost. This stage is the critical selection point for semiconductor systems.
    6. Today, this stage of design is largely unstructured, relying on generic personal productivity tools such as Excel, Word, PDF (for reading 200+ page data sheets), and of course Google search. Within PLM, at best the raw data is stored as unstructured text.
  2. System Implementation:  In this phase, the key components from the system design phase must be refined into a physical PCB design. Typically driven by electrical engineers (vs. system engineers) within the organization or sourced from external design services companies, this stage of design has the following considerations:
    1. PCB Plumbing:  Combining the requirements from the key components with the external-facing aspects of the PCB is the key job at this stage of design. This often involves a physical layout of the PCB, defining the power and clock architecture, and any signal-level electrical work (high speed, EMR, and more). This phase also involves part selection, but typically of low-complexity (microcontroller) and analog parts.
    2. PCB Plumbing Support: Today, this stage of design is reasonably well supported by the physical design, signal integrity, and electrical simulation tools from traditional EDA vendors such as Cadence, Zuken and Mentor Graphics. Part selection is also reasonably well supported by web interfaces from companies such as Mouser and Digikey. Also, PLM systems do a decent job of capturing and tracking these components as part of the Bill of Materials (BOM). While the design intent is not necessarily captured, the range of analysis is limited (to plumbing) and can be recreated by another competent electrical engineer.
    3. Bootup Architecture:  As the physical design is being put together, a bootup architecture for the system is defined. This typically proceeds through a series of stages, starting with electrical stability (DC_OK) on power-up, then self-test processes for the component chips, microcontroller/FPGA programming from non-volatile memory sources, and finally the booting of a live operating system. Typically connected to this work is a large range of tools to help debug the PCB board (memory lookup, injection of bus instructions, etc.). The combination of all of these capabilities is referred to as the Board Support Package (BSP). BSPs must span all the abstraction levels of the system PCB, so today they are often “cobbled” together from a base of tools, with the information sitting on various disparate websites. Today, PLM systems may or may not capture the broad design chain implied by BSP systems. Also, BSP components move at the rate of SW, and must be managed within that operational domain. (A simplified sketch of such a staged boot sequence follows this list.)
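
To illustrate the staged bring-up a BSP has to orchestrate, here is a minimal sketch of the boot sequence described above; the stage names and checks are illustrative assumptions, not any vendor's actual BSP:

```python
# Minimal sketch of a staged PCB boot sequence (illustrative only).
from enum import Enum, auto

class BootStage(Enum):
    POWER_UP = auto()       # wait for electrical stability (DC_OK)
    SELF_TEST = auto()      # self-test processes for the component chips
    LOAD_FIRMWARE = auto()  # program microcontrollers/FPGAs from non-volatile memory
    BOOT_OS = auto()        # hand off to a live operating system
    FAILED = auto()

def boot(checks: dict) -> BootStage:
    """Walk the stages in order; halt at the first failing check."""
    for stage in (BootStage.POWER_UP, BootStage.SELF_TEST,
                  BootStage.LOAD_FIRMWARE, BootStage.BOOT_OS):
        if not checks.get(stage, False):
            print(f"boot halted at {stage.name}")  # where BSP debug tools step in
            return BootStage.FAILED
    return BootStage.BOOT_OS

# Example: a failing self-test stops bring-up before firmware load.
boot({BootStage.POWER_UP: True, BootStage.SELF_TEST: False})
```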

Overall, from a PLM point of view, the most critical stages of the current electronics design flow take place within an unstructured design process. This is a problem for all products, but especially for the class of non-consumer system designs with long lifecycle properties. Let’s discuss these now.

Electronics LLC Markets:

Long Lifecycle (LLC) products are typically defined as products with an expected “active” life in excess of 5 years. LLC markets include Aerospace and Defense (A&D), Energy, Industrial, Medical Devices, Transportation Infrastructure and more.  Even mid-volume markets such as networking and auto exhibit LLC properties. Table 1 below outlines the critical differences between short and long cycle products.

Short Lifecycle Products (SLC)                        | Long Lifecycle Products (LLC)
Useful life 1-2 years                                 | Useful life 5+ years
Short warranty model                                  | Significant maintenance commitment
Fast technology adoption/transition/disposal          | Slow technology adoption/transition/disposal
Focus on time-to-market, performance, features, price | Focus on lifetime revenue, reliability, supply chain
Maintenance = replacement                             | Maintenance = repair

Table 1:  Market Segmentation Comparison

From a design point of view, this manifests itself in three additional requirements:

  1. Obsolescence:  Consumer market activity/churn can often lead to a dramatic drop-off in demand for particular semiconductors. The result is semiconductor obsolescence events that negatively impact LLC product supply chains. In effect, an LLC product owner has to deal with managing a tsunami of activity from the SLC space to maintain product shipments.
  2. Reliability:   Semiconductors for the consumer market are optimized for consumer lifetimes. For LLC markets, the longer product life in non-traditional environmental situations often leads to product reliability and maintenance issues.
  3. Future Function:  LLC products often have the characteristic of being embedded in the environment. In this situation, upgrade costs are typically very high; a classic example is a satellite, where the upgrade cost is prohibitive. PLM and electronics design systems must account for this reality.

Interestingly, PLM is the perfect application to help manage these issues. However, gaps in functionality for handling electronics prevent it from being effective.

Differentiated Issues/Gaps in current PLM Systems for LLC Product Teams

Since PLM systems deal with electronics primarily at the mechanical level, the only structured information available within them consists of physical design abstractions. However, this representation misses key aspects of the whole electronic product. These include:

  1. System Design Data with associated intent
  2. Meta-Product information on the various abstractions above the pseudo-mechanical chip (AI or SW stacks)
  3. Associated design chains (compilers, debuggers, analysis tools)

The lack of the capture, management, and communication of this information handicaps PLM systems from helping solve the significant issues for LLC markets. Examples include:

  • Obsolescence: Supply chain teams struggle downstream to somehow manage part availability (through secondary and other channels) once the respective components are served with discontinuation notices.
    • Part replacement: Can I replace the obsolete part with an equivalent? What was the system designer’s intent? Is this a key semiconductor or one just needed for “plumbing”? Further, the design team is often no longer available at the time of this event.
    • Is there sufficient captured system design information to support re-spins with EOL parts appropriately replaced and/or new features added to meet competitive requirements?
  • Reliability: It is not unusual for system-specific environmental conditions to generate reliability profiles wildly divergent from the semiconductor datasheets.
    • How does the field organization “debug” reliability issues without a clear view of the system design intent for the parts?
    • How do the learnings of the field organization get back into the next system design process?
  • Future Function: Increasingly, field-embedded electronics require flexibility to manage derivative design function WITHOUT hardware updates. How does one design for this capability, and how does marketing understand the band of flexibility available when defining new products?

How does one fill the gaps in the current PLM systems?

Fig. 3 below delineates the critical features that would help LLC product designers deal with the above issues; none of the standard PLM products from the top five vendors (or others) support these:

  1. Capturing of design “intent” upstream during the design phase. Design intent could capture the operating conditions, the expected life cycle of the product being designed, the domain and application the product is intended for, and expectations on the software and AI stack for the end application (e.g., the ability to perform basic facial-expression recognition on the device, and the availability of existing models and support for OSes and frameworks like TensorFlow or PyTorch). A minimal sketch of such an intent record follows Fig. 3 below.
  2. Visibility/awareness of supply chain (distributors, vendors, pricing etc) upstream during the design phase in a strategic manner with a view towards lifecycle costs (vs immediate costs).
  3. The design intent captured above, combined with supply chain awareness, could allow PLM tools to provide “smart search” and iteratively converge on an optimal part selection, first during the design phase itself and again during the downstream process of respinning the design to replace an EOL part (if any).
  4. The field data captured in PLM systems can be an excellent source for building accurate reliability models for the key components. This is even more relevant because the reliability numbers (# hours) provided by semiconductor vendors (and available in Silicon Expert) are of limited accuracy, and actual reliability differs across operating conditions. These models can be made available upstream to allow more optimal part selection during the design phase itself, vis-a-vis the design intent.
  5. Marketing data on potential derivatives can inform the flexibility built within the hardware systems for “over-the-air” updates.

Fig. 3 Gaps in existing PLM Systems
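
As a concrete illustration of item 1 above, a captured design-intent record might look like the following sketch; the schema and field names are my own assumptions, not any PLM vendor's data model:

```python
# Hypothetical design-intent record for a key component (illustrative schema).
from dataclasses import dataclass, field

@dataclass
class DesignIntent:
    part_number: str
    role: str                     # "key component" vs. "plumbing"
    operating_temp_c: tuple       # (min, max) expected in the field
    expected_life_years: int      # LLC products: often 5+ years
    target_domain: str            # e.g., "industrial", "medical", "A&D"
    sw_stack: list = field(default_factory=list)  # OS/framework expectations
    rationale: str = ""           # why the designer chose this part

intent = DesignIntent(
    part_number="SOC-1234",       # hypothetical part
    role="key component",
    operating_temp_c=(-40, 105),
    expected_life_years=10,
    target_domain="industrial",
    sw_stack=["Linux", "TensorFlow Lite"],
    rationale="Only candidate meeting on-device inference throughput needs",
)
# A later obsolescence event could query this record to judge substitute parts.
```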

Conclusion:

PLM systems have shown incredible value for integrating the enterprise with a single view of the product design, deployment, maintenance, and end-of-life processes. However, the massive infusion of electronics embedded in nearly every system design is creating a situation where the core value statement of PLM systems is rupturing.

The solution? PLM systems must integrate with Smart System Design (SSD) electronics EDA platforms. These platforms would have abstractions of the system at all interesting levels, such as hardware components, software components, AI stacks, and more. With this representation, critical processes such as design intent capture, field feedback, derivative design, and predictive maintenance can all be integrated within PLM systems.

Acknowledgements: Special thanks to Anurag Seth for co-authoring this article.

Related Information:
Also Read

DFT Moves up to 2.5D and 3D IC

Siemens EDA Discuss Permanent and Transient Faults

Analyzing Clocks at 7nm and Smaller Nodes


WEBINAR: Flash Memory as a Root of Trust

by Bernard Murphy on 10-09-2022 at 4:00 pm


It should not come as a surprise that the vast majority of IoT devices are insecure. As an indication, one survey estimates that 98% of IoT traffic is unencrypted. It’s not hard to understand why: many such devices are cost-sensitive, designing security into a product is hard, buyers aren’t prepared to pay a premium for security, and there haven’t been any meaningful barriers to selling insecure products.

REGISTER HERE

Overcoming our human inability to understand low-percentage risks isn’t going to happen, so the burden falls on regulations, which are now starting to develop teeth. The EU will require security certificates for all connected devices by 2023. In the US, NIST is working on cybersecurity regulations which are expected to appear in a year or two and to carry penalties for non-compliance. Automotive markets will self-police security by expecting ISO 21434 documentation on processes and risk. Still, many product builders will try to dodge the problem unless solutions are easy. Winbond has an intriguing approach with their secure flash.

Roots of Trust (RoT)

This concept is familiar to anyone with a moderate understanding of security. A root of trust in a system is a core component the system can always trust for security purposes – authentication, cryptography and so on. The goal is to minimize the attack surface around essential security functions, rather than distributing these across the system. All other services must turn to the RoT when making a security-related request. A hardware-based RoT is essential in such implementations.
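
As a minimal illustration of that pattern (my own sketch, not modeled on any particular RoT product), the RoT exposes a narrow security API, keeps keys internal, and every other service defers to it:

```python
# Minimal root-of-trust interface sketch: a narrow, trusted API surface.
import hmac, hashlib, os

class RootOfTrust:
    """Keys never leave this object; services only receive sign/verify results."""
    def __init__(self):
        self._key = os.urandom(32)   # device-unique secret (illustrative)

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), tag)

rot = RootOfTrust()
tag = rot.sign(b"firmware-v2.bin digest")
assert rot.verify(b"firmware-v2.bin digest", tag)  # every service defers to the RoT
```

Keeping the key inside one small object is the software analogue of minimizing the hardware attack surface around the security functions.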

The standard approach to an RoT is processor-centric – Apple T2 and Google Titan chips are a couple of examples. Such chips use on-board flash memory for support, but of limited size. This is necessary to keep cost down, and because combining embedded flash and processor logic on a single chip limits memory size. That limitation is a problem for IoT applications, which need to support complex stacks for NB-IoT and other advanced communications protocols. There are workarounds outside the RoT, but those again increase the attack surface.

Winbond has an intriguing approach in which they make the flash memory a complement to an MCU root of trust, allowing for much more spacious storage. This is an active security component, not just a larger memory.

The Winbond W77Q secure flash memory

The W77Q is a smart flash memory with an emphasis on security. A single-use key must sign write and erase commands. The device verifies boot code integrity on reset and allows secure boot directly from flash via execute-in-place (XIP), without the need to first upload code to DRAM. It supports fallback, allowing boot from an alternative code space if an integrity problem is detected. It protects against rollback attacks, where a hacker attempts to install a correctly signed older version of code with known bugs.

The W77Q handles over-the-air updates directly, without need for MCU support. A remote trusted authority can force a clean boot using an authenticated watchdog timer. And it supports secure storage in separately protected partitions in the same device.
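
To make the rollback-protection idea concrete, here is a toy sketch of a signed, versioned update check. It is not Winbond's actual protocol, just the general technique of pairing a command signature with a monotonic version counter:

```python
# Toy rollback-protected update check (illustrative, not the W77Q protocol).
import hmac, hashlib

DEVICE_KEY = b"device-unique-secret"   # provisioned shared secret (assumed)

def sign_image(version: int, image: bytes) -> bytes:
    msg = version.to_bytes(4, "big") + image
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def accept_update(stored_version: int, version: int,
                  image: bytes, tag: bytes) -> bool:
    expected = sign_image(version, image)
    ok = hmac.compare_digest(expected, tag)
    # Reject correctly signed but *older* images: this is rollback protection.
    return ok and version > stored_version

v3 = sign_image(3, b"fw-v3")
assert accept_update(stored_version=2, version=3, image=b"fw-v3", tag=v3)
v1 = sign_image(1, b"fw-v1-with-known-bugs")
assert not accept_update(stored_version=2, version=1,
                         image=b"fw-v1-with-known-bugs", tag=v1)
```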

Pretty neat for a serial NOR flash pin-compatible with a conventional device, yet certified secure to a number of relevant standards. You can watch a detailed webinar HERE.

Also Read:

WEBINAR: Taking eFPGA Security to the Next Level

WEBINAR: How to Accelerate Ansys RedHawk-SC in the Cloud

Webinar: Semifore Offers Three Perspectives on System Design Challenges

 

 


Podcast EP111: How sureCore is Fueling the AI Revolution With Tony Stansfield

by Daniel Nenni on 10-07-2022 at 10:00 am

Dan is joined by Tony Stansfield, sureCore’s CTO. Tony has over 35 years of semiconductor industry experience in a variety of technical roles. He is cited as an inventor on 23 patents covering SRAM, CAM, low-power electronics, and programmable logic.

Improving the Efficiency of AI Applications Using In-Memory Computation

Tony explores the unique, low power capabilities of sureCore’s standard and custom memory products. The specific ways this technology is used to optimize AI applications are covered in some detail, including general and specific examples of approaches such as in-memory compute.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


A Memorable Samsung Event

by Daniel Nenni on 10-07-2022 at 6:00 am


Samsung hosted its first-ever Samsung Tech Week Oct 3-5 with some insightful keynotes and great food. The week led off with Samsung Foundry Forum and a keynote from Foundry president, Si-Young Choi. Attendees at Samsung Foundry’s SAFE Forum were welcomed by Ryan Lee, head of the business’ Design Enablement team. John In-Young Park, President of Samsung’s System LSI, welcomed guests to System LSI Tech Day. Finally, Memory President, Jung-Bae Lee, gave an industry keynote at the Memory Tech Day to round out the week. The most memorable (pun intended) presentation from Samsung this year, in my opinion, was from the Memory group.

“One trillion gigabytes is the total amount of memory Samsung has made since its beginning over 40 years ago. About half of that trillion was produced in the last three years alone, indicating just how fast digital transformation is progressing,” said Jung-bae Lee, President and Head of Memory Business at Samsung Electronics. “As advances in memory bandwidth, capacity and power efficiency enable new platforms and these, in turn, stimulate more semiconductor innovations, we will increasingly push for a higher level of integration on the journey toward digital coevolution.”

Jim Elliott has been in the memory business for 25 years, 20 of those with Samsung. This alone is an impressive feat in Silicon Valley. Jim was engaging and presented a nice landscape for the memory business moving forward.

Jim highlighted the industry drivers: the PC, phones, and now the data-driven (availability and reliability) era we are in today. According to reports, 90% of the world’s data was created in the last two years, and that hypergrowth will continue.

For growth trends, Jim mentioned the metaverse, automotive, and robotics with AI. I would argue that, for both memory and logic, AI will be the underlying growth driver for most semiconductor market segments and will require massive amounts of leading-edge memory and logic. To be clear, AI will touch an enormous number of chips that will never have enough logic or memory performance and density.

The seven-hundred-billion-dollar question is: “Can memory technology keep up with the data explosion demand?” The answer, of course, is yes, and Jim explained why.

According to Jim, the memory node transition time has increased from 7-9 quarters at 90nm to 26 quarters at 10nm. To address the coming challenges, Jim talked in more detail about Samsung memory.

Samsung unveiled its fifth-generation 10nm-class DRAM as well as eighth- and ninth-generation Vertical NAND (V-NAND). Today Samsung has 567 DRAM engagements and 617 NAND engagements. Let’s face it, Samsung is the #1 semiconductor company for a reason, and memory is the driver behind the Samsung semiconductor dynasty, so I don’t see that changing anytime soon.

Jim’s presentation covered partnerships, alternative business models, and the coming Open Innovation Samsung Memory Research Centers. He also presented the roadmaps for DRAM and NAND:

Jim concluded with application notes on mobile, server, cloud, and moving forward with automotive (server on wheels). Samsung currently has 400 automotive projects underway and is in mass production with 60+ automotive customers.

The other presentation that caught my interest was from Synopsys. Sassine Ghazi, the president and COO of Synopsys, kicked off Samsung SAFE with an engaging keynote on unlocking innovation potential. Sassine began his presentation with the statement that semiconductor chips and software have been the most uplifting phenomena in the history of humankind. Quite a bold statement. He went on to focus on the pivotal role hardware (chips) has played, and he observed three fundamental obstacles that must be overcome to reach the next level of innovation.

  • Balance complexity and energy
  • Scale for workload-optimized chips
  • Optimize talent and productivity

He expanded on each item and provided concrete examples of solutions developed through collaboration between Synopsys and Samsung. He concluded with some eye-opening information about the deployment of AI technology to design chips at Samsung. The impact appears to be quite significant. Sassine stated, AI is the only way forward. This is an area to watch. Absolutely.

Also Read:

Webinar: Semifore Offers Three Perspectives on System Design Challenges

WEBINAR: Taking eFPGA Security to the Next Level

WEBINAR: How to Accelerate Ansys RedHawk-SC in the Cloud


DFT Moves up to 2.5D and 3D IC

by Daniel Payne on 10-06-2022 at 10:00 am


The annual ITC event was held the last week of September, and I kept reading all of the news highlights from the EDA vendors, as time spent on the tester can be a major cost and catching defective chips before they reach production is so critical. Chiplets and 2.5D/3D IC design have caught the attention of the test world, so I learned what Siemens EDA just announced to address the new test demands with its DFT approach. Vidya Neerkundar is a Product Manager for the Tessent family of DFT products, and she presented an update.

DFT Challenges

For most of the history of IC designs we’ve had one die in one package, along with multi-chip modules (MCM). For 2.5D and 3D ICs with multiple dies, how do you take the individual die tests, then make them work for the final package?

What if the DFT architectures for each internal die are different from each other?

Is there an optimal way to schedule the die tests while in a package to reduce test times?

2.5D and 3D chiplets

Tessent Multi-die

The Siemens development team extended its technology to support 2.5D and 3D IC packaging with Tessent Multi-die. At SemiWiki we blogged last year about the Tessent Streaming Scan Network, which used 2D hierarchical scan test. This same approach now extends 2D hierarchical DFT into 2.5D and 3D ICs. Here’s what that looks like for three chiplets in a 2.5D device:

The IEEE created a standard for a test access architecture for 3D stacked ICs, known as IEEE 1838-2019. IEEE 1687 defines the access and control of instrumentation embedded inside an IC, building on another standard, IEEE 1149.1, with its test access ports. Tessent Multi-die supports all of these standards.

Each die in a chiplet design has a Boundary Scan Description Language (BSDL) file, and then Tessent Multi-die creates the package level BSDL for you.

IEEE 1838

This die-centric test standard was approved in November 2019 and allows testing of a die as part of a multi-die stack. A 3D stack of die is connected for test purposes using a Flexible Parallel Port (FPP), along with Die Wrapper Registers (DWR) and Test Access Ports (TAP):

3D Stack for Testing

IEEE 1687 – Internal JTAG

This 2014 standard helps to streamline the use of instruments that are embedded inside each die. There’s an Instrument Connectivity Language (ICL) and a Procedure Description Language (PDL) to define the instrumentation. The flow between an ATE system and internal JTAG is shown below:

IEEE 1687 flow

IEEE 1149.1 JTAG

The boundary scan standard with a Test Access Port goes back to 1990, and the Boundary Scan Description Language (BSDL) came along in 2001. This standard defines how instructions and test data flow inside a chip.

IEEE 1149.1 JTAG

Bringing all of these test standards together, we can see how Tessent Multi-die connects to each chiplet inside of a 3D stack. Test pattern delivery for cores within each die and test scheduling are accomplished with the Tessent Streaming Scan Network (SSN).

Tessent Streaming Scan Network

SSN packetizes test data delivery, which decouples core-level DFT from chip-level DFT, allowing concurrently tested cores to shift independently. Practical benefits are time savings for DFT planning, easier routing and timing closure, and up to a 4X test time and volume reduction.
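
As a toy model of the packetized idea (my own illustration of the general concept, not Siemens' implementation): a fixed-width stream is time-sliced into per-core payloads, so cores with different scan chain lengths consume bits concurrently and independently.

```python
# Toy model of packetized scan delivery: one bus, independent per-core shifting.
from collections import defaultdict

def deliver(bus_words, slots):
    """bus_words: stream of bit-strings; slots: {core: bits per word} allocation."""
    received = defaultdict(list)
    for word in bus_words:
        pos = 0
        for core, width in slots.items():  # each bus word carries a slice per core
            received[core].append(word[pos:pos + width])
            pos += width
    return {core: "".join(chunks) for core, chunks in received.items()}

# Core A gets 3 bits per word (long chains), core B gets 1 (short chains);
# both shift concurrently from the same 4-bit stream.
print(deliver(["1010", "0111", "1100"], {"A": 3, "B": 1}))
# -> {'A': '101011110', 'B': '010'}
```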

Tessent SSN

Summary

Close collaboration between foundries, design, test and the IEEE have created a vibrant 2.5D and 3D eco-system, with all of the technology in place to advance semiconductor innovation. Siemens EDA has extended their Tessent software to embrace the new test challenges, while using IEEE standards. Tessent Multi-die is integrated with all of the other Tessent products and platform, so you don’t have to cobble tools and flows together.

Related Blogs


U.S. Automakers Broadening Search for Talent and R&D As Electronics Take Over Vehicles

by Tony Hayes on 10-06-2022 at 6:00 am


The auto industry isn’t for the faint of heart in late 2022. As Deloitte recently explained, chip shortages, supply chain bottlenecks, unpredictable consumer demand and the industry overhaul mandated by the rise of EVs are all creating unprecedented turmoil in this key sector. One particularly pressing challenge is the ongoing dearth of technically oriented employees in this era when cars make decisions on their own.

The automotive sector is predicted to face a global shortage of 2.3 million skilled workers by 2025 and 4.3 million by 2030, according to a research project from global executive recruiting firm Ennis & Co, which specializes in the sector. The shortage reflects the rapidly growing amount of modern technology in vehicles today. According to Edward Jones, Professor in Electrical and Electronic Engineering at the University of Galway, “The mechanical components are just one part of the equation and there’s probably as much if not more emphasis now on the software, the electronics, the sensors and the user experience.”

He and his research colleagues have had a front-row seat to the quickly evolving auto industry because so many manufacturers have come to his region in search of new solutions to hiring, development, testing and other key aspects of building a modern, in-demand vehicle and the key electronics that make it run. Jaguar Land Rover (JLR), General Motors, Intel, Analog Devices, Valeo and many other players have heavily focused on the West of Ireland in recent years to tap into the resources available there. For example, JLR opened a technology development center in 2017 and keeps adding to its staff, while GM has increasingly grown and evolved its Irish operations to encompass key priority areas such as AI, data management and cyber.

As Jones describes it, Ireland offers companies “many of the high-value pieces of the technology stack, software, sensing, AI, machine learning, as well as the test infrastructure.” One company paying attention is Valeo, a top supplier of automotive cameras that sells products to vehicle manufacturers worldwide. University of Galway has several intriguing government-funded projects underway with Valeo that include examining the effect of inclement weather on autonomous vehicle sensors, multi-sensor fusion to improve the perception of a car’s vision system, and modeling the behavior of road users at junctions so that a vehicle’s intelligent systems can plan ahead rather than taking reactive actions.

According to Martin Glavin, an autonomous vehicle expert and Professor of Electronic and Computer Engineering at the University of Galway, “The auto belt in the west of Ireland conducts research on auto sensors and systems in ways that maybe aren’t possible in a lot of the United States. The weather in Ireland is known to be very variable and changeable.” Both Glavin and Jones are also researchers at Lero (the Science Foundation Ireland Research Center for Software), where they do extensive research and testing of vehicle sensors and perception systems.

“In Ireland, our road network length is equivalent to France,” reports Glavin. “We have everything from small country lanes all the way up to high-end motorways.  R&D in Ireland is ideal in that the country is compact and very dynamic in terms of the climate and conditions. Within 10 or 15 minutes, you can go from winter to summer or be on a small country road or a busy motorway.”

The University of Galway team and other research groups are also conducting testing of intelligent technology in the farm equipment arena. Projects are underway with McHale Engineering, part of a leading agricultural equipment dealer headquartered in Ireland.  One ag-tech project is focusing on data capture and algorithm development so that equipment operators and the devices they use are more efficient. Meanwhile, the team is also performing “analysis based on the behavior of the machine and how the machines are being used, ultimately with the aim of predicting failures on those machines and offering a more robust machine that can diagnose its own problems,” notes Glavin.

On the testing side, the Irish government funded Future Mobility Campus Ireland (FMCI), a facility that lets manufacturers put their technologies through their paces via heavily monitored test tracks. Partners include GM, JLR, Cisco, Analog Devices, Seagate and Red Hat. Noted FMCI CEO Russell Vickers: “There are two main reasons why companies come to Ireland. One is probably European localization; there’s also the areas of data management, data processing, AI and machine learning. That’s why Jaguar Land Rover set up in Ireland: they could get access to software developers that have those skills. You have to follow the people.”

Underpinning the talent search is the growing demand for EVs due to emission concerns and costs at the pump. Proof of this important direction was seen recently in America’s Inflation Reduction Act of 2022, which offers new or expanded tax incentives for buying EVs, as well as the recent mandate in California, America’s environmental pacesetter, that all new vehicles sold by 2035 must be EVs.  The Paris Accord, which includes 196 of the world’s nations, was the forerunner – aligning with a vision of zero-emission vehicles, fewer crashes and reduced congestion.

Pursuing new technologies reflects the public/private partnership that has long characterized Ireland, with government organizations funding and collaborating with universities, research operations and companies. For example, the Science Foundation Ireland (SFI) centers work with companies in the areas of lithium batteries for EVs as well as breakthrough non-metal batteries and vehicle parts.

Noted Lorraine Byrne, executive director of AMBER, the SFI-funded materials science center headquartered at Trinity College Dublin, “We offer companies multidisciplinary scientific expertise to address specific research questions associated with their technology roadmaps. We help to accelerate early-stage research that can reduce the time to market for our industry partners. The materials science work we do at AMBER has relevance in multiple sectors but for automotive, we focus on materials challenges associated with batteries, optical components and the increasing use of sustainable or recycled materials in molded polymer or fabrics.  The SFI centers have a cost‑share model that allows us to co‑fund projects, which is attractive for companies who want to invest in higher-risk early-stage research.”

For example, AMBER has worked with Merck Millipore in the membrane area, where AMBER and Merck have collaborated on molding of polymers and material selection, particularly in the area of new membranes for filtration, whether for air filters or oil filters.

However, moving research forward isn’t the only lure for companies in the auto sector coming to Ireland. In an era when talent is in short supply, the availability of trained technical staff coming out of the universities and research institutes is particularly attractive. Says Byrne: “At the moment, over 50% of our post-doctoral researchers are ending up in the industry as their first destination. A lot of companies are working with AMBER, not just for the research but also for access to the talent pipeline.”

Tony Hayes, VP Engineering, Industrial & Clean Technologies, IDA Ireland

Also Read:

Super Cruise Saves OnStar, Industry

Arm and Arteris Partner on Automotive

The Truly Terrifying Truth about Tesla


Podcast EP110: The Real Story Behind Cerebras Systems – What It Does and Why It Matters

by Daniel Nenni on 10-05-2022 at 10:00 am

Dan is joined by Rebecca Lewington, Technology Evangelist at Cerebras Systems. Before Cerebras she held similar roles at Micron Technology, Hewlett Packard Labs and Applied Materials. Rebecca has a master’s degree in mechanical and electrical engineering from the University of London and holds 15 patents.

Rebecca explains the one-of-a-kind architecture behind Cerebras technology and the unique approaches it facilitates. Details of customer applications are also discussed.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


How Deep Data Analytics Accelerates SoC Product Development

by Kalar Rajendiran on 10-05-2022 at 8:00 am


Ever since the birth of the semiconductor industry, advances have come at a fast pace. The complexity of SoCs has grown along the way, driven by the demanding computational and communication needs of various market applications. Over the last decade, the growth in complexity has accelerated at unforeseen rates, fueled by AI/ML processing, 5G communications and related applications. This, of course, has strained SoC product development cycles and time-to-market schedules.

Semico Research recently published a detailed report titled “Deep Data Analytics Accelerating SoC Product Development.” The report explains how deep data analytics can help accelerate all phases of SoC product development, including test and post-silicon management. proteanTecs’ deep data analytics technology and solution are spotlighted, and the resulting benefits are quantified and presented in a whitepaper. This post will cover some of the salient points from that whitepaper.

The proteanTecs Approach to Deep Data Analytics

The proteanTecs approach is to embed monitoring IP in SoC designs and leverage machine learning algorithms to analyze the collected data for actionable analytics. The monitoring IP, referred to as on-chip Agents, falls into four categories.

Classification and Profiling

These Agents collect information related to the chip’s basic transistors and standard cells. They are constructed to be sensitive to the different device parameters and can map a chip or a sub-chip to the closest matching process corner, PDK simulation point and RC model.

Performance and Performance Degradation Monitoring

These Agents are placed at the end of many timing paths and continuously track the remaining timing margin to the target capture clock frequency. They can be used to pinpoint critical path timing issues as well as track their degradation over time.

Interconnect and Performance Monitoring

These Agents are located inside a high bandwidth die-to-die interface and are capable of continuously detecting the signal integrity and performance of the critical chip interfaces.

Operational Sensing

These Agents turn the SoC into a system sensor by sensing the effects of the application, board or environment on the chip. They track changes in the DC voltage and temperature across the die as well as information related to the clock jitter, power supply noise, software and workload. The information gathered can be used to explain timing issues detected by the Performance and Degradation Agents. The collected information helps understand the system environment, for fast debug and root cause analysis.

The proteanTecs Deep Data Analytics Software Platform

The proteanTecs platform is a one-stop software platform that generates analytics from the data created by the on-chip Agents. It performs intelligent integration of the Agents and applies machine learning techniques to the Agent readouts to provide actionable analytics. The platform is centered on the principle of continuous monitoring and improvement, and implements a continuous feedback loop, as shown in the figure below.

The platform feeds relevant real-time analytics to the teams responsible for taking corrective action. Depending on the type of analytics feedback, the recipients would be the marketing group, the SoC hardware and software groups, the manufacturing team, or the field deployment and support team.

Benefits of Adopting the proteanTecs Approach

Design teams can use the data to understand how the different chip parameters  are affected by various applications and environmental conditions over time. With this type of insight from the current product, the next product can be better planned.

With the in-field monitoring, predictive maintenance can be performed and when something does fail unexpectedly, debugging becomes easier and quicker. The conditions leading to the failure can be easily recreated right in the field and the fix accomplished in a much shorter time.
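
As a simple illustration of what such predictive maintenance could look like on top of Agent readouts (a generic sketch with hypothetical numbers, not proteanTecs' actual algorithms): fit a trend to the timing-margin telemetry and estimate when the margin would be exhausted.

```python
# Generic sketch: predict when a degrading timing margin crosses zero.
import numpy as np

months = np.array([0.0, 3.0, 6.0, 9.0, 12.0])         # time since deployment
margin_ps = np.array([52.0, 50.0, 49.0, 46.0, 44.0])  # hypothetical margin readouts

slope, intercept = np.polyfit(months, margin_ps, 1)   # linear degradation model
t_exhausted = -intercept / slope                      # month where margin hits zero
print(f"degradation {slope:.2f} ps/month; margin exhausted at ~{t_exhausted:.0f} months")
# Maintenance can then be scheduled well before the predicted crossing.
```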

Analytics shared with the software team can be used to identify and fix bottlenecks between the silicon and the software during different operations.

A further benefit could be the monetization of the data stream between the system developer and the end customers. For example, auto manufacturers could provide data to their customers on how a vehicle is operating under different road conditions, so that performance could be optimized. Data centers could provide insights to their customers on how different loading factors impact response times and latencies.

There are multiple possibilities for monetization of the data streams established via the proteanTecs approach. This could open up an additional revenue stream to the owners of such a platform.

The Quantifiable Business Impact Results

In the report, Semico includes a head-to-head comparison of two companies designing a similar multicore data center accelerator SoC on a 5nm technology node. This assessment is used to understand the quantifiable benefits of the proteanTecs approach. The design profile and metrics of this sample SoC are presented in the table below.

The following Table shows the quantifiable benefits of using the proteanTecs approach as it pertains to market metrics and sales results.

Summary

The proteanTecs chip analytics platform helps drive the process of SoC design, manufacturing, testing, bring-up and deployment for a significant market advantage. It performs deep-dive analytics on data captured from silicon and systems to identify potential problems in all phases of an SoC’s lifecycle. The emergence of such deep data analytics solutions will benefit the electronics industry, as problems can now be avoided during the development stage and in-field issues corrected rapidly.

For more details about the proteanTecs platform, visit https://www.proteantecs.com/solutions.

You can download the Semico Research whitepaper from the proteanTecs website.

Also Read:

proteanTecs Technology Helps GUC Characterize Its GLink™ High-Speed Interface

Elevating Production Testing with proteanTecs and Advantest’s ACS Edge™ Platforms

CEO Interview: Shai Cohen of proteanTecs