Architectural Planning of 3D IC
by Daniel Payne on 11-15-2022 at 10:00 am

Before chiplets arrived, designing an electronic system seemed a bit simpler: the system on chip (SoC) methodology was well understood, each SoC was mounted in a package, and the packages were interconnected on a printed circuit board (PCB). The emerging trend of designing a 3D IC from chiplets has been demonstrated for central processing units (CPU), application processors (AP), graphics processing units (GPU), and even AI chips. A 3D IC approach promises higher system integration and performance improvements at a lower cost than a single SoC.

New challenges arise with a 2.5D or 3D IC design, like knowing how to divide the system features into chiplets, and then choosing an architecture that will meet requirements: power, performance, area, time to market, and cost. Siemens EDA has written a 14-page eBook, Launching the full potential of 3D IC with front-end architectural planning, and I’ll share the major points learned.

Source: NanoElec

Design-technology co-optimization (DTCO) has been a collaborative process between foundry engineers and IC design engineers to optimize IC metrics, although it is reaching diminishing returns as Moore’s Law slows on the process side. System-technology co-optimization (STCO) takes architectural and technology trade-offs into account earlier in the design cycle, and with predictive analysis more design scenarios can be explored.

Using STCO, hardware and software are partitioned, with the hardware divided into SoCs or a system in package (SiP) using 2.5D or 3D assembly. Heterogeneous integration is when multiple chiplets are combined. Collaboration for a chiplet-based approach requires early discussions between engineering groups:

  • System
  • RTL
  • Packaging
  • Silicon
  • Testing

Partitioning hardware into chiplets brings new technical challenges, like:

  • Signal Integrity (SI) between chiplets
  • Power Integrity (PI) inside the packaging
  • Electro-Migration (EM) of interconnect between chiplets
  • Thermo-Mechanical stress analysis
  • Substrate Verification
  • Assembly Verification

Siemens EDA has a design flow for systems using chiplets and STCO, where you can make early trade-offs and explore how a system is composed. Testbenches can be generated automatically, in only minutes, making your design verification team productive. Verification IP helps engineering teams validate compliance with many industry standards, like:

  • PCI Express Gen 6
  • Compute Express Link (CXL)
  • DDR5
  • HBM memory interface protocols
  • Flash
  • MIPI
  • USB
  • Ethernet
  • Serial

UCIe

In March 2022 the Universal Chiplet Interconnect Express (UCIe) consortium was announced to support chiplet standards; the 10 founding member companies are AMD, Arm, Advanced Semiconductor Engineering, Google Cloud, Intel, Meta, Microsoft, Qualcomm, Samsung and TSMC. UCIe maps the PCI Express (PCIe) and CXL protocols onto a standard method of die-to-die communication. Chiplet teams do not have to reinvent the wheel with their own one-of-a-kind interconnect; instead, they can adopt the UCIe standard. There are other interconnects to consider: XSR, USR, AIB, BOW.

Apple M1 Ultra

Apple is at the vanguard of using chiplets in its laptops and tablets, as the M1 Ultra processor includes two M1 Max chiplets, connecting 10,000 signals along a single edge. This level of in-package system integration supports 2.5 TB/s of bandwidth, along with eight memory chips, an application processor and a GPU.

Summary

Siemens EDA has long been in the systems engineering space, and it supports the 2.5D and 3D SiP trend as well, including architectural design, RTL verification, verification IP, and support for standards like UCIe. Heterogeneous integration is making the news, and design teams can catch the wave by choosing a vendor like Siemens EDA.

View the 14-page eBook online.


proteanTecs Technology Helps GUC Characterize Its GLink™ High-Speed Interface
by Kalar Rajendiran on 11-15-2022 at 6:00 am

proteanTecs D2D Monitoring Hardware Block Diagram

An earlier post on SemiWiki discussed how deep data analytics helps accelerate SoC product development. The post presented insights into proteanTecs’ technology and quantified the benefits that can be derived by leveraging the software platform for SoC product development. You can review that earlier blog here. The power of proteanTecs’ technology extends beyond the development phase and benefits semiconductor device testing as well. Another SemiWiki blog discussed how the economics of testing can be enhanced by leveraging proteanTecs’ platform. The blog showcased how defective parts can be weeded out earlier in the assembly process to minimize scrap cost.  You can review that blog here.

With heterogeneous chiplet-based SoC implementations picking up momentum, GUC has been offering its GLink™ high-speed interface IP for connecting the different chiplets of an SoC. Because 2.5D/3D packaging assembly costs are higher, it becomes even more important to weed out defective dies before they enter the assembly process. The proteanTecs technology not only makes this task easier but also makes in-field predictive maintenance possible, preventing catastrophic system failures.

GUC implemented the proteanTecs monitoring system in its 5nm GLink 2.0 test chip to assist in testing and characterizing the GLink Phy. proteanTecs recently published a whitepaper that goes into the details of this collaborative effort. This post will cover the salient points garnered from that whitepaper.

Chiplets Interconnect Challenge

Critical to the success of highly integrated System-in-Package (SiP) products is high-speed interface connectivity. High-speed parallel interfaces are increasingly favored over SerDes interfaces due to their simplicity and flexibility for continued scaling. As such, a number of implementations are in use, such as HBM, OpenHBI, AIB, BoW, UCIe and GUC’s GLink. While these implementations offer significant comparative advantages on metrics such as BER, power efficiency, area efficiency and ASIC die area, they do introduce challenges in assembly. The microbumps used over the silicon interposer may suffer defects such as voids or cracks. On organic substrates, resistive shorts can cause signal integrity issues and performance degradation. Once assembled, there is no practical way to test for and assure defect-free, fully functioning products, yet assuring a 100% defect-free product is imperative for system stability over its guaranteed lifetime.

The current approach to handling this challenge is to implement spare lanes that can replace defective ones. But how do you identify which lanes are candidates for replacement? BIST techniques detect gross failures such as opens and shorts but are often unable to detect small variations that may cause catastrophic system failures in the future. Probe points, X-ray or other imaging approaches are ineffective because the substrate covers the die interconnects.

proteanTecs Solution

The proteanTecs patent-protected solution is composed of low-footprint, digital-only sensors for monitoring the performance of the parallel interface. These sensors can be placed next to each pin inside the die-to-die (D2D) interconnect Phy to achieve 100% coverage without impacting signal behavior. The I/O sensors are connected to and managed by a hierarchy of controllers designed to measure, collect and edge-process the data. These embedded agents can be controlled directly from the automatic test equipment (ATE) or by firmware running on an embedded CPU when connected to an APB port.

The measurement data from the monitoring system is extracted and processed for actionable analytics using dedicated machine learning algorithms.

Solution Offers Unprecedented Visibility

During the characterization phase: allows per-pin eye diagram visualization and correlation of margins to process, voltage levels, driver strength, receiver reference voltage, physical location on the die, substrate routing topology and more.

In the mass production stage: identifies marginal pins and recommends candidates for spare lane swapping.

During field operation: makes predictive maintenance possible by alerting on pins that show signs of wear-out and suggesting them as candidates for lane swapping before the system fails.

All in all, system maintenance cost is reduced and system uptime is increased.
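
To make the lane-repair idea concrete, here is a hypothetical sketch of how per-pin margin data could drive spare lane swapping. The function name, thresholds and data are invented for illustration; this is not proteanTecs’ actual algorithm, which applies machine learning to the extracted measurement data.

# Hypothetical illustration of margin-based lane repair (not proteanTecs' algorithm).
# Given a per-lane margin measurement, flag marginal lanes and map them to spares.

def recommend_lane_swaps(margins_ps, threshold_ps, spare_lanes):
    # Lanes whose measured timing margin falls below the threshold are
    # candidates for replacement, worst margin first.
    marginal = sorted((m, lane) for lane, m in margins_ps.items() if m < threshold_ps)
    swaps = {}
    for (_, lane), spare in zip(marginal, spare_lanes):
        swaps[lane] = spare          # swap the worst lanes while spares remain
    return swaps

margins = {0: 22.0, 1: 4.5, 2: 18.3, 3: 2.1}   # picoseconds of eye margin
print(recommend_lane_swaps(margins, threshold_ps=10.0, spare_lanes=["S0", "S1"]))
# {3: 'S0', 1: 'S1'} -- the two weakest lanes get the spares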

GUC GLink Test Chip Analysis

The proteanTecs-GUC collaborative effort involved the implementation of a 5nm test chip consisting of a single GLink Phy instance.

  • Eight slices, each containing 42 lanes:
    • 32 full-duplex data lanes
    • Four DBI (Data Bus Inversion) lanes
    • One “frame” lane and one parity lane (for debug purposes only)
    • Two clock lanes (one differential pair)
    • Two spare lanes per slice (covering data, DBI and parity lane redundancy)
  • Up to 16Gbps per lane
  • Built-in test pattern generator and checker (BIST)
  • proteanTecs I/O Sensor per lane, 42 per slice
  • proteanTecs monitoring control system
  • APB to JTAG bridge for external control

Key Findings

Measuring the jitter of all the pins on all available samples enables full-coverage parametric characterization of every pin. The proteanTecs monitoring system made it easier to compare different transceiver circuitry implementations for margin differences. It also provided visibility into process corner effects on transceiver performance across all samples.

For full details, you can download the joint GUC-proteanTecs whitepaper from proteanTecs’ website.

For more details about the proteanTecs platform, visit https://www.proteantecs.com/solutions.

For more details about GUC’s GLink high-speed interface, visit GUC’s GLink product page.

Also Read:

Elevating Production Testing with proteanTecs and Advantest’s ACS Edge™ Platforms

How Deep Data Analytics Accelerates SoC Product Development

CEO Interview: Shai Cohen of proteanTecs


Selecting a flash controller for storage reliability
by Don Dingee on 11-14-2022 at 10:00 am

2.5″ SSD with Hyperstone X1

Flash memory cards and solid-state drives (SSDs) provide high-performance storage in many devices and systems today. While the flash chips inside cards and SSDs provide raw capacity and performance, they must be combined with an intelligent flash controller to achieve the reliability system designers and consumers need. A new product guide from Hyperstone provides an overview of their high-reliability flash memory controllers, their accompanying feature sets, and available software tools.

Critical features of advanced flash controllers

When people think of flash storage, the first thing that comes to mind is capacity. Innovative fabrication techniques continue driving flash chip density higher. Stringing together flash chips to reach the required amount of storage in a memory card or SSD is the easy part. Flash controllers add critical features to make flash storage more efficient, durable, and reliable under real-world conditions.

Here are five necessary flash memory controller features:

  • Wear leveling. Flash chips specify a minimum write cycle endurance – how many write cycles each cell can withstand. The entire storage card wears out prematurely if a subset of flash cells is used repeatedly. Wear-leveling techniques distribute the use of flash cells across all the chips, providing even wear and extending life for the entire card (see the sketch after this list).
  • Power fail protection. Interrupting a flash write before completion can corrupt data and leave cells temporarily unusable. Power fail protection helps write cycles complete using brief capacitive hold-up mechanisms.
  • Write management. Flash write cycles are a delicate dance of power supply changes and signal timing constraints to set a physical state in a cell. A flash controller offloads the host processor from managing the details, completing the cycle from a simple write command, and allowing the host to move on to other tasks.
  • Error correction coding. Various encoding schemes, enabled by extra bits stored alongside data words, can help detect and correct bit errors on the fly. BCH is a commonly used code, alongside algorithms such as LDPC.
  • Data refresh. Flash storage is regarded as non-volatile, where cells retain their programmed state with power off. However, cell programming tends to degrade if power is left off for extended periods. Data refresh cycles read and re-write cell states in the background, restoring their full retention strength.
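
To make the wear-leveling idea concrete, here is a minimal sketch in Python. It is illustrative only, not Hyperstone’s implementation: a real controller works on physical blocks through a flash translation layer and persists erase counts in flash metadata. All names here (erase_counts, pick_block) are hypothetical.

# Minimal dynamic wear-leveling sketch (illustrative, not a product implementation).
# Pick the least-worn free block for each new write so erase cycles spread
# evenly instead of concentrating on a few hot blocks.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks     # per-block erase cycle counters
        self.free_blocks = set(range(num_blocks))

    def pick_block(self):
        # Choose the free block with the fewest erase cycles.
        block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(block)
        return block

    def erase(self, block):
        self.erase_counts[block] += 1
        self.free_blocks.add(block)

wl = WearLeveler(num_blocks=1024)
b = wl.pick_block()   # write data to block b ...
wl.erase(b)           # ... later, erasing returns it to the free pool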

A range of flash controllers for different applications

Other variables come into play for NAND-based flash solutions, including cost, maintenance (considering onboard embedded flash versus a card that can be easily exchanged), and system-level redundancy. It’s essential for OEMs choosing a flash controller to be able to tune both the hardware and the firmware to their needs.

Hyperstone’s portfolio of NAND flash memory controllers targets a wide range of demanding solutions across various interfaces. Across Serial ATA (SATA) and Parallel ATA (PATA) solid-state disks (SSDs), Disk-on-Module (DoM), Disk-on-Board (DoB), embedded flash solutions, USB, and flash cards such as CompactFlash, SD, and microSD, Hyperstone is constantly developing and optimizing both its hardware and firmware. The hyMap® firmware comes with many standard features and is customized for each flash application. Additionally, an API is available alongside specific controllers, allowing customers to add extra security features to those controllers. Software tools to assist in life expectancy estimation, factory pre-configuration of flash, and in-use performance analysis complete the Hyperstone portfolio.

To learn more about Hyperstone flash controller products for reliable storage solutions, visit the NAND Flash Memory Controllers product portfolio page – the product guide is downloadable via a button on the page.

Also Read:

CEO Interview: Jan Peter Berns from Hyperstone


Why Intel may be the first casualty if Beijing retaliates over Biden’s export controls
by admin on 11-14-2022 at 6:00 am


After the Biden administration upped the ante in the tech war by restricting China’s access to advanced US semiconductor technology, the $64,000 question was “How might Beijing respond?”

Punishing American companies in China (like Apple and Tesla) was not considered likely given the employment they generate – Apple contractor Foxconn employs more than 1 million Chinese – not to mention the technology transfer benefits that Beijing craves from foreign companies.

However, a hint of how the Chinese Communist Party may strike back has emerged – and it’s not so much an “action” as a form of “inaction”. The first casualty may be America’s biggest chipmaker.

On February 15 this year, Intel announced an agreement to acquire Israel-based foundry Tower Semiconductor for $5.4 billion.

The deal was seen as key to the long-term success of Intel Foundry Services (IFS), as Tower’s strength in analog complemented Intel’s in digital.

“Tower’s specialty technology portfolio, geographic reach, deep customer relationships and services-first operations will help scale Intel’s foundry services and advance our goal of becoming a major provider of foundry capacity globally,” Intel CEO Pat Gelsinger said at the time.

However, that deal may be at the mercy of Beijing, according to some commentators.

“The United States is directly trying to stop China’s semiconductor independence and pulled out all the stops with its recent export controls. Meanwhile, Intel is trying to bolster domestic production and reduce the United State’s reliance on Taiwan. Why would China let Intel, and by extension, the United States government, do this? They will almost certainly block the deal,” wrote semiconductor analyst Doug O’Laughlin in a blog titled “China’s revenge: The Tower Semiconductor deal is in a tough place”.

How could Beijing scupper the deal? The same way it stopped Qualcomm from acquiring NXP Semiconductors in 2018.

Big global M&A deals require the approval of various regulatory agencies, such as the Federal Trade Commission in the US. In China, the antitrust body is the State Administration for Market Regulation (SAMR).

SAMR killed Qualcomm/NXP by not issuing regulatory approval for the deal, and Qualcomm – reliant on the Chinese market for major revenues – had to play along. Some predict that the same may happen with Intel/Tower.

“[In China] every regulatory agency is just an extension of the [communist] party’s will, so I think the clear way to hinder the United States and its companies is to block every deal in the approval process,” said O’Laughlin.

Last month, SAMR applied the same tactic to another US merger deal, though not related to the semiconductor industry. DuPont’s $5.2-billion deal to acquire Arizona-based specialty materials supplier Rogers Corp, which was announced over a year ago, was terminated on November 2 because SAMR failed to approve it.

Companies above a certain annual revenue threshold are subject to SAMR review, but if they don’t have any business in China, it’s a moot point. However, Intel derives a significant portion of its revenue from China – and operates a 300mm wafer fab in the country.

“It’s possible to merge without Chinese approval, but then China could restrict Intel’s right to sell products in China,” O’Laughlin said. “Tower Semi is a quick way to make the IFS dream a reality and has to be at the top of Intel’s strategic priorities. But this is how China can strike back.”

Ben Thompson, a tech analyst who pens the Stratechery newsletter, believes it would be “devastating” for Intel if SAMR blocked the Tower acquisition.

“While it is fair to be skeptical of Intel’s ability to catch up, that task will be far more difficult without the sort of transformation in culture around foundry services that Tower was acquired to provide,” Thompson said.

If China blocks the deal, Intel could decide to go ahead anyway. Worst case, Beijing may ban the company from selling in China, but given the Chinese government’s vociferous opposition to its lack of access to US chips, that would be a self-inflicted wound.

The national security implications of the case have also not been lost on commentators.

Thompson said that if Intel sacrificed the China market it would “at least be in line with [CEO Pat] Gelsinger’s rhetoric on the matter,” while O’Laughlin said “not getting Tower Semi to kickstart IFS feels like a national security travesty”.

Also Read:

Why China hates CHIPS

How TSMC Contributed to the Death of 450mm and Upset Intel in the Process

The Evolution of Taiwan’s Silicon Shield

US Supply Chain Data Request Elicits a Range of Responses, from Tight-Lipped to Uptight

Losing Lithography: How the US Invented, then lost, a Critical Chipmaking Process

Why Tech Tales are Wafer Thin in Hollywood


Requiem for a Self-Driving Prophet
by Roger C. Lanctot on 11-13-2022 at 4:00 pm


In a few short years, self-driving tech enfant terrible George Hotz managed to get a rebuff from Tesla CEO Elon Musk and a brush back from both the California Department of Motor Vehicles and the National Highway Traffic Safety Administration (NHTSA) while single-handedly inventing the aftermarket for autonomous vehicle technology. Today, an average consumer with a little bit of ingenuity can add SAE Level 2 autonomous driving capability to a wide range of vehicles from Toyota, Honda, Subaru and others. Anyone can do it.

Two weeks ago, Hotz published a blog indicating that he was taking a break from the slog of chasing investor cash and struggling with supply chain issues in order to pursue other interests. He simply said he’d had enough of the Comma.ai rat race. It’s a shame.

We pat ourselves on the back here in the U.S. for having a vibrant startup industry. Hotz’s experience is a testament to both the vibrance of that ecosystem and its limitations.

Hotz’s OpenPilot software implemented in the Comma devices – 1, 2, and 3 – has clearly proven its merit with the endorsement of Consumer Reports and the support of thousands of tinkerers who have bought the necessary hardware (Panda or Giraffe devices directly from Comma), downloaded the open source code and installed the system into their own personal cars. Hotz cleverly won over the CR editors with the combination of the system’s impressive performance along with the integrated driver monitoring technology.

The willingness of average consumers to take on the formidable task of more or less “hacking into” their own vehicle controls with an aftermarket device that will clearly void any manufacturer’s warranty is perhaps most amazing. More amazing still is the fact that the couple of thousand consumers who have gone to the trouble of installing devices using OpenPilot software have yet to report a single unhappy experience using the device. Also, thankfully, no ugly headlines regarding crashes or fatalities.

The performance of Hotz’s open sourced OpenPilot software (open sourced in order to avoid NHTSA sanction) has been sufficient to attract a host of companies seeking to build upon the technology with solutions of their own. These companies include Epilog AI, Kommu, BlueBox, and Merlin Mobility, which offers a solution to assist drivers with various disabilities or limitations.

Consumers can take the OpenPilot challenge with the help of a wide range of aftermarket kits available from Websites such as AliExpress.com: https://www.aliexpress.com/store/1101868933

Part of Hotz’s recent frustrations that led to his (temporary?) departure from the self-driving development circus was the inability to source a particular Qualcomm chipset. He mused about alternatives and alleged that Qualcomm was deliberately blocking his efforts.

It’s interesting to consider the implications of Qualcomm standing in Hotz’s path preventing further progress. Qualcomm, of course, has its own self-driving ambitions.

Hotz’s allegations are reminiscent of Mobileye walking away from Tesla following the famous fatal Florida crash in 2016. Musk blamed Mobileye for the failure of his forward-facing camera system to recognize a tractor trailer blocking the highway. Months later, Mobileye disclosed that it was Mobileye that had parted company with Musk.

This development was not unlike Nvidia’s decision, following Uber’s fatal Phoenix-area crash, to pause its own self-driving testing. Suppliers do not want to be associated with AV system failures, especially when they have AV ambitions of their own.

It’s interesting to ponder the prospect of a nascent self-driving aftermarket emerging – especially just two months before the opening of CES 2023. It will be interesting to see whether the seeds planted by Hotz bear fruit with or without his ongoing participation.

Also Read:

MIPI in the Car – Transport From Sensors to Compute

Musk: The Post-Truth Messiah

Flash Memory Market Ushered in Fierce Competition with the Digitalization of Electric Vehicles


Podcast EP121: Managing Design Flows and EDA Resources with Innova

Podcast EP121: Managing Design Flows and EDA Resources with Innova
by Daniel Nenni on 11-11-2022 at 10:00 am

Dan is joined by Chouki Aktouf, founder & CEO of Defacto Technologies and co-founder of Innova Advanced Technologies. Prior to founding Defacto in 2003, Dr. Aktouf was an associate professor of Computer Science at the University of Grenoble, France, and leader of a dependability research group. He holds a PhD in Electrical Engineering from Grenoble University.

Dan explores the offerings of Chouki’s new company. Innova provides a flexible and customizable capability to manage design flows and EDA tool resources. This disruptive solution serves as a single portal to help reduce the complexity of using tools and dedicated design environments.

Chouki also discusses an upcoming webinar on the new product that will occur on December 7, 2022 at 10AM Pacific time. You can register for this webinar here: Reduce design cost by better managing EDA tool licenses and servers

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Blockchain 4.0
by Ahmed Banafa on 11-10-2022 at 10:00 am


The simplest and best way to define Blockchain technology is to think of it as electricity: you only see its applications, but you understand how important it is and know there are many applications and products that can run on it. Like any other technology, it went through stages and evolved as it progressed and matured. We started with Blockchain 1.0 and now we are at Blockchain 4.0.

In the following article we will explain each version of Blockchain:

Blockchain 1.0 – Cryptocurrencies

The Blockchain’s first-ever application was Bitcoin. Blockchain has already established itself as the enabler of a ‘Decentralized Internet of Money’ by powering cryptocurrencies. By providing transparency, accountability, immutability and security, Blockchain very soon triggered the influx of more cryptocurrencies, and today we have more than 10,000 different cryptocurrencies in circulation.

Cryptocurrency Types

1. Central Bank Digital Coins

2. Stablecoins

3. Cryptocurrencies (Bitcoin, Ethereum, Solana …)

4. Meme Coins (Elon Musk!)

*The maximum number of Bitcoins is 21 million; about 19 million are in the market now.

Blockchain 2.0 – Smart Contracts

With Blockchain 2.0 came the era of smart contracts, which helped blockchain outgrow its original functionality of powering cryptocurrencies.

What is a smart contract?

• Smart contracts are essentially automated agreements between the contract creator and the recipient.

• Written in code, this agreement is baked into the blockchain, making it immutable as well as irreversible.

• They’re usually used to automate the execution of an agreement so that all parties can be sure of the conclusion right away, without the need for any intermediaries.

• They can also automate a workflow, starting when certain circumstances are satisfied.

One key benefit of a smart contract is the automation of tasks that traditionally require a third-party intermediary. For example, instead of needing a bank to approve a fund transfer from client to freelancer, the process can happen automatically, thanks to a smart contract. All that’s required is for two parties to agree on one concept.
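
As a rough illustration of the escrow example above, the sketch below models the logic a smart contract encodes. On a real chain this would be written in a contract language such as Solidity, the contract itself would hold the funds, and the network would enforce the rules; all names here are hypothetical.

# Toy model of the client-to-freelancer escrow logic a smart contract automates
# (illustrative only; a real contract runs on-chain with no intermediary).

class EscrowContract:
    def __init__(self, client, freelancer, amount):
        self.client, self.freelancer, self.amount = client, freelancer, amount
        self.funded = False

    def deposit(self, sender, value):
        # Funds are locked in the contract, not sent to the freelancer yet.
        if sender == self.client and value == self.amount:
            self.funded = True

    def confirm_delivery(self, sender):
        # Once the agreed condition is met, release happens automatically.
        if sender == self.client and self.funded:
            return (self.freelancer, self.amount)  # payout
        return None

contract = EscrowContract("alice", "bob", 100)
contract.deposit("alice", 100)
print(contract.confirm_delivery("alice"))  # ('bob', 100)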

Smart contracts have gained widespread appeal because they are tamperproof and lower the cost of verification, execution, arbitration, and fraud protection, in addition to permitting automated permission-less execution. Also, smart contracts allow transparent data recording, which is easily verifiable and provides the involved parties equal sovereignty over their deals.

The very popular Ethereum is a second-generation blockchain. By fueling the functionality of smart contracts, Ethereum has become the go-to Blockchain for enterprises across industries, especially supply chain, logistics, and cross-border payments.

Although a second-gen Blockchain, Ethereum has been continuously at the forefront, scaling up its offerings to expand blockchain functionalities across industries. Ethereum is leading the way in everything from smart contracts to dApps, asset tokenization to DAOs, DeFi to NFTs.

Blockchain 3.0 – DApps

Blockchain 3.0 has been all about Decentralized applications (Dapps).

Decentralized applications (Dapps) are applications that run on a P2P network of computers rather than a single computer. Dapps have existed since the advent of P2P networks. They are a type of software program designed to exist on the Internet in a way that is not controlled by any single entity.

With a frontend user interface calling backend smart contracts hosted on decentralized storage, DApps support various powerful blockchain use cases like DeFi platforms, crypto loan platforms, NFT marketplaces, P2P lending and others.

Powered by new consensus mechanisms like Proof of Stake, Proof of History and others, third-gen blockchain protocols focused on areas like speed, security, scalability, interoperability and environmental friendliness.

By offering benefits like transparency, scalability, flexibility and reliability, the global DApp market is expected to reach $368.25 billion by 2027. DApps have found applications across verticals like gaming, finance, social media, and crypto transactions.

Blockchain 4.0  

Blockchain 4.0 is focused on innovation. Speed, user experience and usability by the larger mass market will be the key focus areas for Blockchain 4.0. We can divide Blockchain 4.0 applications into two verticals:

•       Web 3.0  

•       Metaverse

Web 3.0

The Internet is constantly transforming, and we are on our way to the third generation of internet services, which will be fueled by technological advances such as IoT, Blockchain, and Artificial Intelligence. Web 3.0 is focused on having decentralization at its core; therefore, Blockchain plays a critical role in its development.

Web 2.0 has been revolutionary in terms of opening up new options for social engagement. But to take advantage of these opportunities, we as consumers have poured all of our data into centralized systems, giving up our privacy and exposing ourselves to cyber threats. Web 2.0 platforms are managed by centralized authorities that dictate transaction rules while also owning user data.

The 2008 global financial crisis exposed the cracks in centralized control, paving the way for decentralization. The world needs Web 3.0 – a user-sovereign platform. Because Web 3.0 aims to create an autonomous, open, and intelligent internet, it will rely on decentralized protocols, which Blockchain can provide.

There are already some third-generation Blockchains designed to support Web 3.0, but with the rise of Blockchain 4.0, we can expect the emergence of more Web 3.0-focused blockchains featuring cohesive interoperability, automation through smart contracts, seamless integration, and censorship-resistant storage of P2P data files.

Metaverse

The dream projects of tech giants like Facebook, Microsoft, Nvidia, and many more, Metaverses are the next big thing for us to experience in the coming few years. We are connected to virtual worlds across different touchpoints like social engagement, gaming, working, networking and many more. The Metaverse will make these experiences more vivid and natural.

Advanced AI, IoT, AR & VR, Cloud computing and Blockchain technologies will come into play to create the virtual-reality spaces of the Metaverse, where users will interact with a computer-generated environment and other users through realistic experiences.

A centralized Metaverse entails more intense user engagement, deeper use of internet services and more uncovering of users’ personal data. All of this almost certainly means higher cybercrime exposure. Giving power to centralized bodies to regulate, control and distribute users’ data is not a sustainable set-up for the future of the Metaverse. Therefore, much emphasis has been placed on developing decentralized Metaverse platforms that provide user autonomy. Decentraland, Axie Infinity, and Starl are all decentralized Metaverses powered by Blockchain.

Also, Blockchain 4.0’s advanced solutions can help Metaverse users regulate their security and trust needs. Take the Metaverse gaming platform, for example, where users may purchase, possess, and trade in-game items with potentially enormous value. Proof of ownership through something as immutable and scarce as NFTs will be required to prevent forgery of these assets.

Blockchain 4.0 solutions can aid in the following Metaverse development requirements:

•       Decentralization

•       Decentralized data management

•       Security

•       Digital Proof of ownership

•       Digital collectability of assets (such as NFTs)

•       Governance

•       Transfer of value through crypto

•       Interoperability

In the end, Blockchain 4.0 will enable businesses to move some or all of their current operations onto secure, self-recording applications based on decentralized, trustless, and encrypted ledgers, letting businesses and institutions easily enjoy the basic benefits of Blockchain.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

References

https://www.leewayhertz.com/blockchain-4-0/

https://www.coinspeaker.com/guides/evolution-of-blockchain-from-blockchain-1-0-to-blockchain-4-0/

“Blockchain Technology and Its Applications” course by Prof. Ahmed Banafa, Stanford University

Also Read:

NIST Standardizes PQShield Algorithms for International Post-Quantum Cryptography

WEBINAR: Flash Memory as a Root of Trust

WEBINAR: Taking eFPGA Security to the Next Level


Integrating High Speed IP at 5nm
by Pavan Patel on 11-10-2022 at 6:00 am


Introduction:

Advancements in deep submicron technology, the addition of multiple functionalities to reduce costs, and the scaling of existing operations mean that SoC designs become ever more complex. The biggest driving factors for going below the 16nm process node are the decrease in power and the increase in performance offered by the higher transistor densities of these advanced nodes. However, doing so creates challenges for physical implementation and timing/power closure. In particular, integrating high-speed IP such as SerDes, DDR, and PCIe in a large SoC needs careful floorplanning to reduce project time as well as achieve timing/power signoff. In this article, we will look at the new challenges introduced by 5nm technology as well as by new additional functionality in the SoC. We will show an approach to tackling the floorplanning and timing issues that reduces physical implementation iterations.

Methodology

The implementation of large, complex IP integration needs a methodology that efficiently closes the floorplan signoff criteria as well as prevents large timing violations at a later stage.

Figure 1: Custom Floorplan Methodology

Challenges of 5nm physical design

A holistic approach is needed to concurrently address the planning, editing and optimization environment for the project along the path from SoC to advanced packaging techniques (like InFO, Foveros or X-Cube), while also considering the impact of decisions backwards up the path. For example, iterating the placement of bumps, pads and macros early in the process enables the turnaround time to be reduced.

Another thing to be considered and planned for early in the process is thermal effects, long before place and route, in order to improve yield and reliability by designing out hot spots that can lead to failures. For example, standard cells packed at high density can create hot spots. This is because, at the 14/16nm nodes, three to four fins are used to provide structural stability to each gate but, below 7nm, two are used. These fins are taller to compensate for the reduced count and still give the reinforcement required. However, care should be taken with standard cell placement, as fins surrounded by dielectric (gate oxide) have poor thermal conductivity and therefore do not dissipate heat as well as expected, creating a hot spot. Doing an early stage of power analysis (dynamic/static) therefore helps to prevent hot spot surprises later at power signoff.
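
To illustrate the kind of early screening this implies, here is a toy sketch (in Python) that bins per-cell power estimates into a coarse grid and flags bins above a density limit. All names, numbers and thresholds are invented; a production flow would use a signoff power analysis tool rather than anything this simple.

# Toy early hot-spot screen (illustrative only). Bin per-cell power into a
# coarse grid and flag bins whose total power exceeds a chosen limit.

def hot_spot_bins(cells, die_w, die_h, nx, ny, limit_mw):
    grid = [[0.0] * nx for _ in range(ny)]
    for x, y, power_mw in cells:                 # cell placement + power estimate
        i = min(int(x / die_w * nx), nx - 1)
        j = min(int(y / die_h * ny), ny - 1)
        grid[j][i] += power_mw
    return [(i, j) for j in range(ny) for i in range(nx) if grid[j][i] > limit_mw]

cells = [(10.0, 10.0, 2.0), (10.5, 10.2, 2.5), (400.0, 300.0, 0.5)]
print(hot_spot_bins(cells, die_w=500.0, die_h=500.0, nx=10, ny=10, limit_mw=4.0))
# [(0, 0)] -- two dense cells land in the same bin and trip the limit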

Lastly, process and voltage variation intensity are higher at lower geometries. As a result, PT-ECO signoff typically needs more than twelve iterations for large, complex blocks due to noise and transition requirements.

Challenges of integrating high speed blocks

Partially hardened IP is in a higher state of flux because of continuous improvement of the hard IP by the analog team. This can be addressed by using models tailored for different design stages, with increasing levels of complexity and completeness as tapeout approaches. Having the IP collateral and the list of IP deliverables as soon as possible in the process is vital to a swift and successful integration.

Lastly, design complexity in floorplanning, DFT integration, custom clock trees and timing/power signoff all requires scripting knowledge and a basic understanding of the IP to tweak the implementation.

Floorplan Challenges:

On a recent design, the foundry provided a multi-height library and, initially, it was difficult to pass the grid checks. The problem was that we were implementing our design using a power-efficient library while the third-party IP was on a high-performance library. Hence, placement of both library types had to be on least common multiple (LCM) rows.
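
A quick way to see the constraint: row boundaries legal for both libraries repeat at the least common multiple of the two row heights, shown here in a small Python sketch for brevity. The heights are placeholders, not the actual library values.

# Placement rows shared by two row heights repeat every LCM of the heights.
# Working in integer nanometers avoids floating-point error.
from math import lcm

def common_row_pitch_nm(row_h1_nm, row_h2_nm):
    return lcm(row_h1_nm, row_h2_nm)

# Placeholder heights, not the actual library values:
print(common_row_pitch_nm(210, 280))  # 840 -> both libraries align every 840 nm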

Things to look out for are:

  • Avoid tap cell and boundary cell insertion in special areas where analog signals are routed.
  • Power Grid (PG) connectivity is important for multiple power domain designs, along with analog VDD and third-party IP VDD.
  • PG net/pin connections must follow the guidelines defined by the analog team and the third-party IP deliverables.
  • TCD/ESD IP must be connected according to the power clamp implementation rules defined by the foundry.
  • Legality checks are needed after spare cell insertion; the grid check takes care of IP placement legality.
  • Integration checks that honor the top-level floorplan DRC while integrating blocks at chip level.
  • Terminal/Port placement checks, an important check at initial design integration.
  • PG Design Rule Checking (DRC) such as PG via and power stripe routing, plus macro-to-macro and macro-to-boundary spacing rule checks to avoid DRC violations.

The Floorplanning Goal

The goal is a design that is clean for Design Rule Check (DRC) and Layout Versus Schematic (LVS), follows the design implementation guidelines, and has timing/congestion-aware macro placement.

These are the stages for achieving this by improving a floorplan. NB: Synopsys Fusion Compiler is used in this example.

I. Grid creation is required because there are multiple vendors of third-party IP, so generating a grid provides uniformity for interconnection:

create_grid -type block -x_step $cell_site_pitch -y_step $cell_row_pitch -orientations "R0 MX" Macro_wrapper

set_block_grid_references -grid [get_grids Macro_wrapper] -design Macro_wrapper

set_snap_setting -class macro_cell -snap block -user_grid Macro_wrapper

## Macro wrappers need to snap to 7.752um in X and 9.576um in Y, which are multiples of the cell site 0.051um and the cell row pitch 0.028um (1 track distance)

II. When you do a floorplan, you will need to route the high-speed signals manually. Keep automated placement and routing away from those areas by creating blockages over them using:

  • create_routing_blockage
  • create_placement_blockage

For example:

  • create_placement_blockage -name $blk_name -boundary [get_attr [get_attr $blk_poly poly_rects] point_list]
  • create_routing_blockage -name SNRG#${blk_name} -boundary [get_attr [get_attr $blk_poly poly_rects] point_list] -layers [get_layers -filter full_name!~*G*] -zero_spacing

Figure 2: Block and signal routing over a high-speed macro

III. Power Grid (PG) connectivity requires that the PG mesh routing follows the pre-connection commands defined by the implementation. Hence, we need to make the analog PG port and BUMP connections along with the digital power/ground connectivity.

Example:

connect_pg_net -net VDD [get_pins BUMPS_VDD_*/BUMP]

connect_pg_net -net VSS [get_pins BUMPS_VSS_*/BUMP]

# Special PG connections
foreach v {VDDA VSSA VDDM} {
    connect_pg_net -net $v [get_pins High_speed_IP_*/[string tolower $v]]
    connect_pg_net -net $v [get_pins Monitor*/[string tolower $v]]
}

IV. Extra signal and special connections defined by the analog team can be challenging to make at block level when the Library Exchange Format (LEF) data is insufficient or incomplete, but they must be made in order to check block-level and chip-level Layout Versus Schematic (LVS).

Example:

Figure 3: Manual routing connecting a special analog signal

V. ESD cell and TCD (Test-key Critical Dimension) checks are needed. The electrostatic discharge macro is required to protect high-speed analog macros. TCD cells are employed to monitor critical dimensions such as minimum line width; checking the critical dimensions of cells ensures layout uniformity during fabrication and improves yield.

Note: We place ESD cells near the hard IP and supply them with the same voltage required by the hard IP, for efficiency.

Figure 4: Schematic diagram

VI. Terminal/Port placement checks. These are common, simple floorplanning checks for spotting errors.

Example: violations to check.

Type of Violation   Count
Missing Pins            0
Pin Off Edge          276
Pins Off Track        145
Pin Short               0
Pin Size                3
Pin Spacing             3
Total Violations      427

VII. Power Grid Design Rule Checks. Write out PG DRC reports and compare the implementation tool’s PG DRC with the floorplan signoff (rule-deck-aware) DRC from ICV/Calibre. For DRC violations such as illegal overlaps, insufficient width, minimum metal width or area, illegal shapes, or minimum metal edges, debug the PG mesh scripts and the manual PG via insertion scripts.

Note: Shorts and opens on the PG should be clean before executing the next step of the tentative signoff iteration.

In conclusion, Sondrel has been working on advanced nodes for decades and already has several 5nm designs under its belt which were used to create this list of checks and suggestions that can help master the challenges of 5nm design. Further articles can be found at https://www.sondrel.com/solutions/white-papers

Pavan Patel is an enthusiastic ASIC physical design engineer with implementation and signoff experience across modem, camera, networking switch, mobile SoC, and router SoC designs. He is fascinated by VLSI history and the impact of SoCs on business and consumers.

Also Read:

NoC-Based SoC Design. A Sondrel Perspective

Closing the Communication Chasms in the SoC Design and Manufacturing Supply Chain

SoC Application Usecase Capture For System Architecture Exploration


Podcast EP120: How NXP is Revolutionizing Automotive Electronics Design
by Daniel Nenni on 11-09-2022 at 10:00 am

Dan is joined by Jim Bridgewater, director of product marketing for NXP’s automotive edge product line.

Jim provides an overview of the various wireless interfaces in current automotive design. He also discusses a new product from NXP called OrangeBox, a device that combines many of these interfaces into one domain controller. Jim explores the benefits of this approach, including stronger security implementation and enhanced quality of service.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


MIPI in the Car – Transport From Sensors to Compute
by Bernard Murphy on 11-09-2022 at 6:00 am

NXP camera subsystem

I’ve written on and off about sensors, ML inference of the output of those sensors and the application of both in modern cars. Neither ADAS nor autonomous/semi-autonomous driving would be possible without these. But until now I have never covered the transport between sensors and the compute that safely turns what they produce into clear images and accurate object detection. Mixel and Rambus recently gave a talk on that transport, MIPI, at MIPI DevCon. Useful, since I had previously assumed that the data somehow magicked its way from the sensor to the compute. The example focused particularly on imaging subsystems, in this talk featuring the camera-serial interface (MIPI CSI-2) from Rambus and the physical interface (MIPI C-PHY and MIPI D-PHY) from Mixel.

MIPI CSI-2 and PHY transmit and receive blocks

MIPI CSI-2 is the function which defines a serial interface between a camera on one end and an ISP on the other end. Pixels stream in one side and eventually stream out the other side, so the interface needs a transmit function and a receive function. Because these functions must be able to connect any camera (or more than one camera) to any ISP, they need a lot of flexibility. One example is bandwidth matching between the sensor and the ultimate consumer, allowing for a continuous streaming flow for example.

Between the CSI-2 transmit and receive functions, D-PHY (or C-PHY) handle the physical communication. D-PHY uses differential signaling while C-PHY uses a clever differential technique looking pairwise at a trio of signals, together with encoding. Complex stuff but apparently supports a higher data rate than D-PHY.
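
To make the pairwise idea concrete, here is a highly simplified sketch (in Python) of the C-PHY receiver front end: three differential receivers take the pairwise differences of wires A, B and C, and the resulting sign pattern identifies one of the six valid wire states. Real C-PHY encodes data in transitions between states, with further encoding on top, all omitted here.

# Simplified C-PHY receiver front end (illustrative): three differential
# receivers look pairwise at the trio of wires; the sign pattern identifies
# one of the six valid wire states (assuming the wires sit at distinct levels).

def wire_state(a, b, c):
    sign = lambda d: '+' if d > 0 else '-'
    return sign(a - b) + sign(b - c) + sign(c - a)

# Three example line levels (arbitrary units): high, mid, low
print(wire_state(2, 1, 0))  # '++-' -- one of the six valid sign patterns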

Safety in the PHY

Back in more familiar territory for me, these IPs are designed for automotive applications, making safety a critical objective. Both the PHY and the controller must meet the ISO 26262 FMEDA requirements for the appropriate ASIL level. In addition, safety-critical automotive applications require in-system testability for the MIPI PHY. I’m seeing similar in-system testability requirements becoming more common at ASIL-C/D levels for other PHYs, so this is not a surprise. The Mixel MIPI PHY supports full-speed and in-system loopback testing for the universal configuration (Tx+Rx), as well as for their own area-optimized transmit-only and receive-only configurations, called TX+ and RX+.

Mixel also noted additional testing required for automotive IP: stress testing, HTOL and reliability tests (e.g. aging). These, together with meeting the ISO 26262 DFMEA and FMEDA requirements, ensure the overall reliability of the IP, essential for car safety over a 15+ year service life.

Safety in the CSI-2 controller

To meet ASIL-B fault coverage requirements, Rambus offers its CSI-2 Controller Core with Built-In Self-Test (BIST). BIST mechanisms are used here together with familiar safety mitigation techniques: ECC, CRC and parity. It is interesting to note that the BIST here is at the IP level, not at the system level. I have seen the same principle for in-system testing in the NoC. In both cases, the argument is that function-level BIST is better than system-level for multiple reasons. It can go deeper and provide more confidence in safety coverage. It is also available even if system-level BIST is not provided, offering central feedback if the system becomes non-operational.

Among the safety mitigation techniques, the CSI-2 controller provides parity protection on pixels and pixel buffers, along with ECC for the protocol header and CRC for packet data. These add redundancy for data formatting, packing logic, critical state machines and other critical blocks. Packet ordering is checked, and order errors are flagged. One other interesting check I have seen coming up more often in safety-critical applications is a watchdog timer, used to detect frozen or excessively delayed operations. All of this emphasizes that at high ASIL levels, safety mitigation is no longer just about the basic methods. Designers are adding more active and complex tests and mitigations to rise to ASIL-C/D.
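
As a generic illustration of two of these techniques (not the Rambus implementation), the sketch below computes a parity bit over a pixel word and appends a CRC to packet payload bytes. It uses Python’s standard CRC-32 for brevity; CSI-2 itself defines its own 16-bit CRC for packet data.

# Generic parity + CRC illustration (not the Rambus IP or the exact CSI-2 CRC).
import zlib

def even_parity(word):
    # One parity bit over a pixel word: detects any single-bit flip.
    return bin(word).count('1') & 1

def protect_packet(payload: bytes):
    # Append a CRC so the receiver can detect payload corruption.
    return payload + zlib.crc32(payload).to_bytes(4, 'little')

def check_packet(packet: bytes):
    payload, crc = packet[:-4], packet[-4:]
    return zlib.crc32(payload).to_bytes(4, 'little') == crc

pkt = protect_packet(b'\x12\x34\x56')
assert check_packet(pkt)
bad = b'\xFF' + pkt[1:]       # corrupt the first payload byte
assert not check_packet(bad)  # a single-byte burst error is always caught by CRC-32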

This talk can be found HERE and is a good introduction to the topic.

If you would like to learn more about Mixel and their MIPI offering, visit their website here or learn about their MIPI D-PHY IP here.

Also Read:

A MIPI CSI-2/MIPI D-PHY Solution for AI Edge Devices

FD-SOI Offers Refreshing Performance and Flexibility for Mobile Applications

New Processor Helps Move Inference to the Edge