
Selecting a flash controller for storage reliability

by Don Dingee on 11-14-2022 at 10:00 am


Flash memory cards and solid-state drives (SSDs) provide high-performance storage in many devices and systems today. While the flash chips inside cards and SSDs provide raw capacity and performance, they must be combined with an intelligent flash controller to achieve the reliability system designers and consumers need. A new product guide from Hyperstone provides an overview of their high-reliability flash memory controllers, their accompanying feature sets, and available software tools.

Critical features of advanced flash controllers

When people think of flash storage, the first thing that comes to mind is capacity. Innovative fabrication techniques continue driving flash chip density higher. Stringing together flash chips to reach the required amount of storage in a memory card or SSD is the easy part. Flash controllers add critical features to make flash storage more efficient, durable, and reliable under real-world conditions.

Here are five necessary flash memory controller features:

  • Wear leveling. Flash chips specify a minimum write cycle endurance – how many write cycles each cell can withstand. The entire storage card wears out prematurely if a subset of flash cells is used repeatedly. Wear-leveling techniques distribute the use of flash cells across all the chips, providing even wear and extending life for the entire card.
  • Power fail protection. Interrupting a flash write before completion can corrupt data and leave cells temporarily unusable. Power fail protection helps write cycles complete using brief capacitive hold-up mechanisms.
  • Write management. Flash write cycles are a delicate dance of power supply changes and signal timing constraints to set a physical state in a cell. A flash controller offloads the host processor from managing the details, completing the cycle from a simple write command, and allowing the host to move on to other tasks.
  • Error correction coding. Various encoding schemes, enabled by extra bits stored alongside data words, can help detect and correct bit errors on the fly. BCH is a commonly used code, alongside algorithms such as LDPC.
  • Data refresh. Flash storage is regarded as non-volatile, where cells retain their programmed state with power off. However, cell programming tends to degrade if power is left off for extended periods. Data refresh cycles read and re-write cell states in the background, restoring their full retention strength.
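The wear-leveling idea above is easy to sketch in code. The following is an illustrative model only, not Hyperstone's implementation; the block count and the greedy least-worn selection policy are assumptions chosen for clarity.

```python
# Illustrative wear-leveling sketch: writes are steered to the least-worn
# erase block so no single block accumulates erase cycles faster than the rest.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks  # per-block erase-cycle counters

    def pick_block(self):
        # Choose the block with the fewest erase cycles so wear stays even.
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

leveler = WearLeveler(4)
for _ in range(8):
    leveler.pick_block()

# After 8 writes across 4 blocks, wear is perfectly even.
print(leveler.erase_counts)  # [2, 2, 2, 2]
```

Real controllers also distinguish static from dynamic wear leveling and keep the counters in flash metadata, but the principle is the same: steer new writes toward the least-worn blocks.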

A range of flash controllers for different applications

Other variables come into play for NAND-based flash solutions, including cost, maintenance (considering onboard embedded flash versus a card that can be easily exchanged), and system-level redundancy. It’s essential for OEMs choosing a flash controller to be able to tune both the hardware and the firmware to their needs.

Hyperstone’s portfolio of NAND flash memory controllers targets a wide range of demanding solutions across various interfaces: Serial ATA (SATA) and Parallel ATA (PATA) solid-state disks (SSDs), Disk-on-Module (DoM), Disk-on-Board (DoB), embedded flash solutions, USB, and flash cards such as CompactFlash, SD, and microSD. Hyperstone is constantly developing and optimizing both its hardware and firmware. The hyMap® firmware comes with many standard features and is customized for each flash application. Additionally, an API is available alongside specific controllers, allowing customers to add extra security features to those controllers. Software tools to assist in life expectancy estimation, factory pre-configuration of flash, and in-use performance analysis complete the Hyperstone portfolio.

To learn more about Hyperstone flash controller products for reliable storage solutions, visit the NAND Flash Memory Controllers product portfolio page, where the product guide can be downloaded.

Also Read:

CEO Interview: Jan Peter Berns from Hyperstone


Why Intel may be the first casualty if Beijing retaliates over Biden’s export controls

by admin on 11-14-2022 at 6:00 am


After the Biden administration upped the ante in the tech war by restricting China’s access to advanced US semiconductor technology, the $64,000 question was “How might Beijing respond?”

Punishing American companies in China (like Apple and Tesla) was not considered likely given the employment they generate – Apple contractor Foxconn employs more than 1 million Chinese – not to mention the technology transfer benefits that Beijing craves from foreign companies.

However, a hint of how the Chinese Communist Party may strike back has emerged – and it’s not so much an “action” as a form of “inaction”. The first casualty may be America’s biggest chipmaker.

On February 15 this year, Intel announced an agreement to acquire Israel-based foundry Tower Semiconductor for $5.4 billion.

The deal was seen as key to the long term success of Intel Foundry Services (IFS), as Tower’s strength in analog complemented Intel’s in digital.

“Tower’s specialty technology portfolio, geographic reach, deep customer relationships and services-first operations will help scale Intel’s foundry services and advance our goal of becoming a major provider of foundry capacity globally,” Intel CEO Pat Gelsinger said at the time.

However, that deal may be at the mercy of Beijing, according to some commentators.

“The United States is directly trying to stop China’s semiconductor independence and pulled out all the stops with its recent export controls. Meanwhile, Intel is trying to bolster domestic production and reduce the United States’ reliance on Taiwan. Why would China let Intel, and by extension, the United States government, do this? They will almost certainly block the deal,” wrote semiconductor analyst Doug O’Laughlin in a blog titled “China’s revenge: The Tower Semiconductor deal is in a tough place”.

How could Beijing scupper the deal? The same way it stopped Qualcomm from acquiring NXP Semiconductors in 2018.

Big global M&A deals require the approval of various regulatory agencies, such as the Federal Trade Commission in the US. In China, the antitrust body is the State Administration for Market Regulation (SAMR).

SAMR killed Qualcomm/NXP by not issuing regulatory approval for the deal, and Qualcomm – reliant on the Chinese market for major revenues – had to play along. Some predict that the same may happen with Intel/Tower.

“[In China] every regulatory agency is just an extension of the [communist] party’s will, so I think the clear way to hinder the United States and its companies is to block every deal in the approval process,” said O’Laughlin.

Last month, SAMR applied the same tactic to another US merger deal, though one not related to the semiconductor industry. DuPont’s $5.2-billion deal to acquire Arizona-based specialty materials supplier Rogers Corp, which was announced over a year ago, was terminated on November 2 because SAMR failed to approve it.

Companies above a certain annual revenue threshold are subject to SAMR review, but if they don’t have any business in China, it’s a moot point. However, Intel derives a significant portion of its revenue from China – and operates a 300mm wafer fab in the country.

“It’s possible to merge without Chinese approval, but then China could restrict Intel’s right to sell products in China,” O’Laughlin said. “Tower Semi is a quick way to make the IFS dream a reality and has to be at the top of Intel’s strategic priorities. But this is how China can strike back.”

Ben Thompson, a tech analyst who pens the Stratechery newsletter, believes it would be “devastating” for Intel if SAMR blocked the Tower acquisition.

“While it is fair to be skeptical of Intel’s ability to catch-up, that task will be far more difficult without the sort of transformation in culture around foundry services that Tower was acquired to provide,” Thompson said.

If China blocks the deal, Intel could decide to go ahead anyway. Worst case, Beijing may ban the company from selling in China, but given the Chinese government’s vociferous opposition to its lack of access to US chips, that would be a self-inflicted wound.

The national security implications of the case have also not been lost on commentators.

Thompson said that if Intel sacrificed the China market it would “at least be in line with [CEO Pat] Gelsinger’s rhetoric on the matter,” while O’Laughlin said “not getting Tower Semi to kickstart IFS feels like a national security travesty”.

Also Read:

Why China hates CHIPS

How TSMC Contributed to the Death of 450mm and Upset Intel in the Process

The Evolution of Taiwan’s Silicon Shield

US Supply Chain Data Request Elicits a Range of Responses, from Tight-Lipped to Uptight

Losing Lithography: How the US Invented, then lost, a Critical Chipmaking Process

Why Tech Tales are Wafer Thin in Hollywood


Requiem for a Self-Driving Prophet

by Roger C. Lanctot on 11-13-2022 at 4:00 pm


In a few short years, self-driving tech enfant terrible George Hotz managed to get a rebuff from Tesla CEO Elon Musk and a brush back from both the California Department of Motor Vehicles and the National Highway Traffic Safety Administration (NHTSA) while single-handedly inventing the aftermarket for autonomous vehicle technology. Today, an average consumer with a little bit of ingenuity can add SAE Level 2 autonomous driving capability to a wide range of vehicles from Toyota, Honda, Subaru and others. Anyone can do it.

Two weeks ago, Hotz published a blog indicating that he was taking a break from the slog of chasing investor cash and struggling with supply chain issues in order to pursue other interests. He simply said he’d had enough of the Comma.ai rat race. It’s a shame.

We pat ourselves on the back here in the U.S. for having a vibrant startup industry. Hotz’s experience is a testament to both the vibrance of that ecosystem and its limitations.

Hotz’s OpenPilot software implemented in the Comma devices – 1, 2, and 3 – has clearly proven its merit with the endorsement of Consumer Reports and the support of thousands of tinkerers who have bought the necessary hardware (Panda or Giraffe devices directly from Comma), downloaded the open source code and installed the system into their own personal cars. Hotz cleverly won over the CR editors with the combination of the system’s impressive performance along with the integrated driver monitoring technology.

The willingness of average consumers to take on the formidable task of more or less “hacking into” their own vehicle controls with an aftermarket device that will clearly void any manufacturer’s warranty is perhaps most amazing. More amazing still is that the couple of thousand consumers who have gone to the trouble of installing devices using OpenPilot software have yet to report a single unhappy experience with the device. Also, thankfully, there have been no ugly headlines regarding crashes or fatalities.

The performance of Hotz’s open sourced OpenPilot software (open sourced in order to avoid NHTSA sanction) has been sufficient to attract a host of companies seeking to build upon the technology with solutions of their own. These companies include Epilog AI, Kommu, BlueBox, and Merlin Mobility, which offers a solution to assist drivers with various disabilities or limitations.

Consumers can take the OpenPilot challenge with the help of a wide range of aftermarket kits available from Websites such as AliExpress.com: https://www.aliexpress.com/store/1101868933

Part of Hotz’s recent frustrations that led to his (temporary?) departure from the self-driving development circus was the inability to source a particular Qualcomm chipset. He mused about alternatives and alleged that Qualcomm was deliberately blocking his efforts.

It’s interesting to consider the implications of Qualcomm standing in Hotz’s path preventing further progress. Qualcomm, of course, has its own self-driving ambitions.

Hotz’s allegations are reminiscent of Mobileye walking away from Tesla following the famous fatal Florida crash in 2016. Musk blamed Mobileye for the failure of his forward-facing camera system to recognize a tractor trailer blocking the highway. Months later, Mobileye disclosed that it was Mobileye that had parted company with Musk.

This development was not unlike Nvidia’s decision, following Uber’s fatal Phoenix-area crash, to pause its own self-driving testing.  Suppliers do not want to be associated with AV system failures especially when they have AV ambitions of their own.

It’s interesting to ponder the prospect of a nascent self-driving aftermarket emerging – especially just two months before the opening of CES 2023. It will be interesting to see whether the seeds planted by Hotz bear fruit with or without his ongoing participation.

Also Read:

MIPI in the Car – Transport From Sensors to Compute

Musk: The Post-Truth Messiah

Flash Memory Market Ushered in Fierce Competition with the Digitalization of Electric Vehicles


Podcast EP121: Managing Design Flows and EDA Resources with Innova

by Daniel Nenni on 11-11-2022 at 10:00 am

Dan is joined by Chouki Aktouf, founder & CEO of Defacto Technologies and co-founder of Innova Advanced Technologies. Prior to founding Defacto in 2003, Dr. Aktouf was an associate professor of Computer Science at the University of Grenoble, France, and leader of a dependability research group. He holds a PhD in Electrical Engineering from Grenoble University.

Dan explores the offerings of Chouki’s new company. Innova provides a flexible and customizable capability to manage design flows and EDA tool resources. This disruptive solution serves as a single portal to help reduce the complexity of using tools and dedicated design environments.

Chouki also discusses an upcoming webinar on the new product that will occur on December 7, 2022 at 10AM Pacific time. You can register for this webinar here: Reduce design cost by better managing EDA tool licenses and servers

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Blockchain 4.0

by Ahmed Banafa on 11-10-2022 at 10:00 am


The simplest and best way to define Blockchain technology is to think of it as electricity: you only see its applications, but you understand how important it is and know that many applications and products can run on it. Like any other technology, it has gone through stages, evolving as it progressed and matured. We started with Blockchain 1.0, and now we are at Blockchain 4.0.

In the following article we will explain each version of Blockchain:

Blockchain 1.0 – Cryptocurrencies

The Blockchain’s first-ever application was Bitcoin. Blockchain established itself as the enabler of a ‘Decentralized Internet of Money’ by powering cryptocurrencies. By providing transparency, accountability, immutability and security, Blockchain soon triggered an influx of more cryptocurrencies, and today we have more than 10,000 different cryptocurrencies in circulation.
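The immutability and security mentioned here come from hash-chaining: each block commits to the contents of its predecessor. The sketch below is a toy illustration in Python, not real blockchain code; the transaction strings and two-field block layout are invented for demonstration.

```python
# Minimal sketch of why a blockchain is tamper-evident: each block stores the
# hash of its predecessor, so altering any earlier block breaks every later link.
import hashlib

def block_hash(prev_hash, data):
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

chain = [{"prev": "0" * 64, "data": "genesis"}]
for data in ["alice->bob 1 BTC", "bob->carol 0.5 BTC"]:
    prev = block_hash(chain[-1]["prev"], chain[-1]["data"])
    chain.append({"prev": prev, "data": data})

def is_valid(chain):
    # Every block's stored "prev" must match a fresh hash of its predecessor.
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1]["prev"], chain[i - 1]["data"])
        for i in range(1, len(chain))
    )

print(is_valid(chain))                   # True
chain[1]["data"] = "alice->bob 100 BTC"  # tamper with history
print(is_valid(chain))                   # False - the chain detects the change
```

A real network adds consensus, signatures, and Merkle trees on top, but this hash-linking is the core of the immutability property.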

Cryptocurrency Types

1. Central Bank Digital Coins

2. Stablecoins

3. Cryptocurrencies (Bitcoin, Ethereum, Solana …)

4. Meme Coins (Elon Musk!)

*Bitcoin’s supply is capped at 21 million coins; roughly 19 million are in circulation today

 Blockchain 2.0 – Smart Contracts

With Blockchain 2.0 came the era of smart contracts, which helped blockchain outgrow its original function of powering cryptocurrencies.

What is a smart contract?

• Smart contracts are essentially automated agreements between the contract creator and the recipient.

• Written in code, this agreement is baked into the blockchain, making it immutable as well as irreversible.

• They are usually used to automate the execution of an agreement so that all parties can be sure of the outcome right away, without the need for any intermediaries.

• They can also automate a workflow, starting when certain conditions are satisfied.

One key benefit of a smart contract is the automation of tasks that traditionally require a third-party intermediary. For example, instead of needing a bank to approve a fund transfer from client to freelancer, the process can happen automatically, thanks to a smart contract. All that’s required is for the two parties to agree on the terms.
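The client-to-freelancer example can be mimicked with a small, self-contained simulation. This is plain Python modeling the escrow logic a smart contract would encode on-chain, not actual smart contract code; the class name, parties, and amount are hypothetical.

```python
# Simplified simulation of escrow logic a smart contract encodes: funds release
# automatically once the agreed condition is met, with no bank in the middle.

class EscrowContract:
    def __init__(self, client, freelancer, amount):
        self.client, self.freelancer, self.amount = client, freelancer, amount
        self.funded = False
        self.released = False

    def deposit(self, payer):
        if payer == self.client:
            self.funded = True  # client locks the funds into the contract

    def confirm_delivery(self, party):
        # The release rule is baked in: once the client confirms delivery,
        # payment goes to the freelancer automatically and only once.
        if self.funded and party == self.client and not self.released:
            self.released = True
            return (self.freelancer, self.amount)
        return None

contract = EscrowContract("client", "freelancer", 500)
contract.deposit("client")
print(contract.confirm_delivery("client"))  # ('freelancer', 500)
print(contract.confirm_delivery("client"))  # None - already settled
```

On a real chain this logic would run on every validating node, which is what makes the outcome tamperproof rather than merely automated.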

Smart contracts have gained widespread appeal because they are tamperproof and lower the cost of verification, execution, arbitration, and fraud protection, in addition to permitting automated, permissionless execution. Smart contracts also allow transparent data recording, which is easily verifiable and gives the involved parties equal sovereignty over their deals.

The very popular Ethereum is a second-generation blockchain. Thanks to its support for smart contracts, Ethereum is the go-to blockchain for enterprises across industries, especially supply chain, logistics, and cross-border payments.

Although a second-gen Blockchain, Ethereum has been continuously at the forefront, scaling up its offerings to expand blockchain functionality across industries. Ethereum is leading the way in everything from smart contracts to dApps, asset tokenization to DAOs, DeFi to NFTs.

Blockchain 3.0 – DApps

Blockchain 3.0 has been all about decentralized applications (DApps).

Decentralized applications (DApps) are applications that run on a P2P network of computers rather than a single computer. DApps have existed since the advent of P2P networks; they are a type of software program designed to exist on the Internet in a way that is not controlled by any single entity.

With a frontend user interface calling its backend smart contracts hosted on decentralized storage, DApps support various powerful blockchain use cases such as DeFi platforms, crypto loan platforms, NFT marketplaces, P2P lending, and others.

Powered by new consensus mechanisms like Proof of Stake, Proof of History, and others, third-generation blockchain protocols focus on areas like speed, security, scalability, interoperability, and environmental friendliness.

By offering benefits like transparency, scalability, flexibility, and reliability, the global DApp market is expected to reach $368.25 billion by 2027. DApps have found applications across verticals like gaming, finance, social media, and crypto transactions.

Blockchain 4.0  

Blockchain 4.0 is focused on innovation: speed, user experience, and usability for the broader public will be its key focus areas. We can divide Blockchain 4.0 applications into two verticals:

•       Web 3.0  

•       Metaverse

Web 3.0

The Internet is constantly transforming, and we are on our way to the third generation of internet services, which will be fueled by technological advances such as IoT, Blockchain, and Artificial Intelligence. Web 3.0 is focused on having decentralization at its core; therefore, Blockchain plays a critical role in its development.

Web 2.0 has been revolutionary in terms of opening up new options for social engagement. But to take advantage of these opportunities, we as consumers have poured all of our data into centralized systems, giving up our privacy and exposing ourselves to cyber threats. Web 2.0 platforms are managed by centralized authorities that dictate transaction rules while also owning user data.

The 2008 global financial crisis exposed the cracks in centralized control, paving the way for decentralization. The world needs Web 3.0, a user-sovereign platform. Because Web 3.0 aims to create an autonomous, open, and intelligent internet, it will rely on decentralized protocols, which Blockchain can provide.

There are already some third-generation Blockchains designed to support Web 3.0, but with the rise of Blockchain 4.0, we can expect the emergence of more Web 3.0-focused blockchains featuring cohesive interoperability, automation through smart contracts, seamless integration, and censorship-resistant storage of P2P data files.

Metaverse

Metaverses, the dream projects of tech giants like Facebook, Microsoft, Nvidia, and many more, are the next big thing for us to experience in the coming years. We already connect to virtual worlds across different touchpoints such as social engagement, gaming, working, and networking. The Metaverse will make these experiences more vivid and natural.

Advanced AI, IoT, AR & VR, cloud computing, and Blockchain technologies will come into play to create the virtual-reality spaces of the Metaverse, where users will interact with a computer-generated environment and other users through realistic experiences.

A centralized Metaverse entails more intense user engagement, deeper use of internet services, and more exposure of users’ personal data. All of this almost certainly means higher cybercrime exposure. Giving centralized bodies the power to regulate, control, and distribute users’ data is not a sustainable setup for the future of the Metaverse. Therefore, much emphasis has been placed on developing decentralized Metaverse platforms that provide user autonomy. Decentraland, Axie Infinity, and Starl are all decentralized Metaverses powered by Blockchain.

Also, Blockchain 4.0’s advanced solutions can help Metaverse users regulate their security and trust needs. Take the Metaverse gaming platform, for example, where users may purchase, possess, and trade in-game items with potentially enormous value. Proof of ownership through something as immutable and scarce as NFTs will be required to prevent forgery of these assets.

Blockchain 4.0 solutions can aid in the following Metaverse development requirements:

•       Decentralization

•       Decentralized data management

•       Security

•       Digital Proof of ownership

•       Digital collectability of assets (such as NFTs)

•       Governance

•       Transfer of value through crypto

•       Interoperability

Ultimately, Blockchain 4.0 will enable businesses to move some or all of their current operations onto secure, self-recording applications built on decentralized, trustless, and encrypted ledgers, letting businesses and institutions easily enjoy the core benefits of Blockchain.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

References

 https://www.leewayhertz.com/blockchain-4-0/

https://www.coinspeaker.com/guides/evolution-of-blockchain-from-blockchain-1-0-to-blockchain-4-0/

“Blockchain Technology and Its Applications”, course by Prof. Ahmed Banafa, Stanford University

Also Read:

NIST Standardizes PQShield Algorithms for International Post-Quantum Cryptography

WEBINAR: Flash Memory as a Root of Trust

WEBINAR: Taking eFPGA Security to the Next Level


Integrating High Speed IP at 5nm

by Pavan Patel on 11-10-2022 at 6:00 am


Introduction:

Advancements in deep-submicron technology, combined with adding multiple functionalities to reduce costs and scaling existing operations, mean that SoC designs become ever more complex. The biggest factors driving designs below the 16nm process node are the decrease in power and the increase in performance enabled by the higher transistor densities of these advanced nodes. However, doing so creates challenges for physical implementation and timing/power closure. In particular, integrating high-speed IP such as SerDes, DDR, and PCIe in a large SoC needs careful floorplanning to reduce project time as well as achieve timing/power signoff. In this article, we will look at the new challenges introduced by 5nm technology, as well as by additional SoC functionality, and show an approach to tackling floorplanning and timing issues that reduces physical implementation iterations.

Methodology

The implementation of large, complex IP integration needs a methodology that efficiently closes the floorplan signoff criteria while preventing large timing violations at a later stage.

Figure 1: Custom Floorplan Methodology

Challenges of 5nm physical design

A holistic approach is needed to concurrently address the planning, editing, and optimization environment for the project along the path from SoC to advanced packaging techniques (like InFO, Foveros, and X-Cube), while also considering the backward impact of decisions along that path. For example, iterating the placement of bumps, pads, and macros early in the process reduces turnaround time.

Another thing to consider and plan for early in the process, long before place and route, is thermal effects, in order to improve yield and reliability by designing out hot spots that can lead to failures. For example, standard cells packed at high density can create hot spots. At 14/16nm nodes, three to four fins are used to provide structural stability to each gate, but below 7nm only two are used; these fins are taller to compensate for the reduced count and still give the reinforcement required. However, care should be taken with standard cell placement, as fins surrounded by dielectric (gate oxide) have poor thermal conductivity and therefore do not dissipate heat as well as expected, creating hot spots. Doing an early stage of power analysis (dynamic/static) helps prevent hot-spot surprises later at power signoff.

Lastly, process and voltage variation are more intense at smaller geometries. To combat this, PT-ECO signoff typically needs more than twelve iterations for large, complex blocks due to noise and transition requirements.

Challenges of integrating high speed blocks

Partially hardened IP is in a higher state of flux because the analog team continuously improves the hard IP. This can be addressed by using models tailored to different design stages, with increasing levels of complexity and completeness as tapeout approaches. Having the IP collateral and the list of IP deliverables as early as possible in the process is vital to a swift and successful integration.

Lastly, design complexity in floorplanning, DFT integration, custom clock trees, and timing/power signoff all require scripting knowledge and a basic understanding of the IP to tweak the implementation.

Floorplan Challenges:

On a recent design, the foundry provided a multi-height library and, initially, it was difficult to pass the grid checks. The problem was that we were implementing our design using a power-efficient library while the third-party IP used a high-performance library. Hence, both library types had to be placed on least common multiple (LCM) rows.
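The LCM-row requirement can be sanity-checked with simple arithmetic. The row heights below are hypothetical values chosen for illustration (expressed in nanometers so the math stays exact); they are not the heights of the actual libraries in this design.

```python
# Mixing two standard-cell libraries with different row heights: both can only
# be legally placed on rows whose pitch is the least common multiple (LCM)
# of the two heights.
from math import gcd

def lcm_nm(a_nm, b_nm):
    # LCM via GCD; inputs in integer nanometers to avoid float rounding.
    return a_nm * b_nm // gcd(a_nm, b_nm)

power_efficient_row = 210   # nm (hypothetical)
high_performance_row = 280  # nm (hypothetical)

print(lcm_nm(power_efficient_row, high_performance_row))  # 840
```

In this illustration, row boundaries of both libraries align only every 840 nm, so legal placement sites for mixed-library regions occur on that coarser grid.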

Things to look out for are:

  • Avoiding tap cell and boundary cell insertion in special areas where analog signals are routed.
  • Power Grid (PG) connectivity is important for multiple power domain design along with Analog VDD as well as third party IP VDD.
  • Connecting PG nets/pins must follow the guidelines defined by the analog team and the third-party IP deliverables.
  • TCD/ESD IP to be connected according to the power clamp implementation rule defined by the foundry.
  • Legality checks need to be taken care of after spare cell insertion; the grid check takes care of IP placement legality.
  • Integration checks that honor the top-level floorplan DRC while integrating blocks at chip level.
  • Terminal/Port placement checks as this is an important check at the initial design integration.
  • PG Design Rule Checking (DRC) such as PG Via and power stripe routing, plus macro to macro and macro to boundary spacing rule checks to avoid DRC.

The Floorplanning Goal

The goal is to have a clean Design Rule Check (DRC) and Layout Versus Schematic (LVS) design that follows the design implementation guidelines as well as timing/congestion-aware macro placement.

These are the stages showing how to achieve this by improving a floorplan. NB: Synopsys Fusion Compiler is used in this example.

I. Grid creation is required because there are multiple third-party IP vendors, so generating a grid provides uniformity for integration:

create_grid -type block -x_step $cell_site_pitch -y_step $cell_row_pitch -orientations "R0 MX" Macro_wrapper

set_block_grid_references -grid [get_grids Macro_wrapper] -design Macro_wrapper

set_snap_setting -class macro_Cell -snap block -user_grid Macro_wrapper

## Macro wrappers need to be snapped to 7.752um and 9.576um, which are multiples of the cell site pitch (0.051) and the cell row height (0.028, 1 track distance)

II. When you do a floorplan, you will need to route the high-speed signals manually. Prevent any other placement or routing over those areas by creating blockages using:

  • create_routing_blockage
  • create_placement_blockage

For example:

  • create_placement_blockage -name $blk_name -boundary [get_attr [get_attr $blk_poly poly_rects] point_list]
  • create_routing_blockage -name SNRG#${blk_name} -boundary [get_attr [get_attr $blk_poly poly_rects] point_list] -layers [get_layers -filter full_name!~*G*] -zero_spacing

Figure 2: Block and signal routing over high speed macro

III. Power Grid (PG) connectivity requires that the PG mesh routing follows the pre-connection commands defined by the implementation. Hence, we need to connect the analog PG ports and bump connections along with the digital power/ground connectivity.

Example: connect_pg_net -net VDD [get_pins BUMPS_VDD_*/BUMP]

connect_pg_net -net VSS [get_pins BUMPS_VSS_*/BUMP]

# Special PG connection

foreach v "VDDA VSSA VDDM" {

connect_pg_net -net $v [get_pins High_speed_IP_*/[string tolower $v]]

connect_pg_net -net $v [get_pins Monitor*/[string tolower $v]]

}

IV. Extra signal and special connections defined by the analog team to connect at block level can be challenging when you have an insufficient or incomplete Library Exchange Format (LEF) file, but they must be completed to check block-level and chip-level Layout Versus Schematic (LVS).

Example:

Figure 3: Manual routing which connects Special analog signal

V. ESD cell and TCD (Test-key Critical Dimension) checks are needed. The electrostatic discharge (ESD) macro is required to protect high-speed analog macros. TCD cells are employed to monitor critical dimensions, such as minimum line width, and to check the critical dimensions of cells to ensure layout uniformity during fabrication and improve yield.

Note: We place ESD cells near the hard IP and supply them with the same voltage required by the hard IP for efficiency.

Figure 4: Schematic diagram

VI. Terminal/Port placement checks. These are common and simple floorplanning checks to spot errors during initial design integration.

Example violations to check:

Type of Violation     Count
Missing Pins              0
Pin Off Edge            276
Pins Off Track          145
Pin Short                 0
Pin Size                  3
Pin Spacing               3
Total Violations        427

VII. Power Grid Design Rule Checks: Write out PG DRC reports and compare the implementation tool's PG DRC with the floorplan signoff (rule-deck-aware) DRC from ICV/Calibre. For DRC violations such as illegal overlaps, insufficient width, minimum metal width, minimum metal area, illegal shapes, or minimum metal edges, debug the PG mesh scripts and the manual PG via insertion scripts.

Note: Shorts and opens on the PG should be clean before executing the next step in the tentative signoff iterations.

In conclusion, Sondrel has been working on advanced nodes for decades and already has several 5nm designs under its belt, which were used to create this list of checks and suggestions to help master the challenges of 5nm design. Further articles can be found at https://www.sondrel.com/solutions/white-papers

Pavan Patel is an enthusiastic ASIC physical design engineer with implementation and signoff experience on modem, camera, networking switch, mobile SoC, and router SoC designs. He is fascinated by VLSI history and the impact of SoCs on business and consumers.

Also Read:

NoC-Based SoC Design. A Sondrel Perspective

Closing the Communication Chasms in the SoC Design and Manufacturing Supply Chain

SoC Application Usecase Capture For System Architecture Exploration


Podcast EP120: How NXP is Revolutionizing Automotive Electronics Design

by Daniel Nenni on 11-09-2022 at 10:00 am

Dan is joined by Jim Bridgewater, director of product marketing for NXP automotive edge product line.

Jim provides an overview of the various wireless interfaces in current automotive design. He also discusses a new product from NXP called OrangeBox, a device that combines many of these interfaces into one domain controller. Jim explores the benefits of this approach, including stronger security implementation and enhanced quality of service.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


MIPI in the Car – Transport From Sensors to Compute

by Bernard Murphy on 11-09-2022 at 6:00 am


I’ve written on and off about sensors, ML inference of the output of those sensors and the application of both in modern cars. Neither ADAS nor autonomous/semi-autonomous driving would be possible without these. But until now I have never covered the transport between sensors and the compute that safely turns what they produce into clear images and accurate object detection. Mixel and Rambus recently gave a talk on that transport, MIPI, at MIPI DevCon. Useful, since I had previously assumed that the data somehow magicked its way from the sensor to the compute. The example focused particularly on imaging subsystems, in this talk featuring the camera-serial interface (MIPI CSI-2) from Rambus and the physical interface (MIPI C-PHY and MIPI D-PHY) from Mixel.

MIPI CSI-2 and PHY transmit and receive blocks

MIPI CSI-2 is the function that defines a serial interface between a camera on one end and an ISP on the other. Pixels stream in one side and eventually stream out the other, so the interface needs a transmit function and a receive function. Because these functions must be able to connect any camera (or more than one camera) to any ISP, they need a lot of flexibility. One example is bandwidth matching between the sensor and the ultimate consumer, allowing for a continuous streaming flow.

Between the CSI-2 transmit and receive functions, D-PHY (or C-PHY) handles the physical communication. D-PHY uses differential signaling, while C-PHY uses a clever differential technique looking pairwise at a trio of signals, together with encoding. Complex stuff, but it apparently supports a higher data rate than D-PHY.
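For a rough sense of the rate difference: C-PHY's encoding maps 16 bits onto 7 symbols carried on a 3-wire trio, about 2.28 bits per symbol, while a 2-wire D-PHY lane carries 1 bit per symbol. A back-of-envelope sketch, where the example rates are illustrative rather than vendor figures:

```python
# Back-of-envelope C-PHY vs. D-PHY throughput. C-PHY's 16-bit-to-7-symbol
# mapping yields ~2.28 bits per symbol on a 3-wire trio; a 2-wire D-PHY lane
# carries 1 bit per symbol. The example rates are illustrative only.

BITS_PER_CPHY_SYMBOL = 16 / 7  # ~2.28

def cphy_trio_gbps(symbol_rate_gsps):
    """Payload rate of one C-PHY trio at the given symbol rate (Gsym/s)."""
    return symbol_rate_gsps * BITS_PER_CPHY_SYMBOL

def dphy_lane_gbps(bit_rate_gbps):
    """A D-PHY lane's payload rate equals its bit rate (1 bit per symbol)."""
    return bit_rate_gbps

# At 2.0 Gsym/s, one C-PHY trio moves ~4.57 Gbps over 3 wires, versus
# 2.0 Gbps for a D-PHY lane running at 2.0 Gbit/s over 2 wires.
```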

Safety in the PHY

Back in more familiar territory for me, these IPs are designed for automotive applications, making safety a critical objective. Both the PHY and controller must meet the ISO 26262 FMEDA requirements for the appropriate ASIL level. In addition, safety-critical automotive applications require in-system testability for the MIPI PHY. I’m seeing similar in-system testability requirements becoming more common at ASIL-C/D levels for other PHYs, so this is not a surprise. The Mixel MIPI PHY supports full-speed and in-system loopback testing for the universal configuration (Tx+Rx), as well as for their area-optimized transmit-only and receive-only configurations, called TX+ and RX+.

Mixel also noted the additional testing required for automotive IP: stress testing, HTOL, and reliability tests (e.g., aging). These, together with meeting the ISO 26262 DFMEA and FMEDA requirements, ensure the overall reliability of the IP, essential for car safety over a 15+ year service life.

Safety in the CSI-2 controller

To meet ASIL-B fault coverage requirements, Rambus offers its CSI-2 Controller Core with Built-In Self-Test (BIST). BIST mechanisms are used here together with familiar safety mitigation techniques: ECC, CRC, and parity. It is interesting to note that the BIST here is at the IP level, not at the system level. I have seen the same principle for in-system testing in the NoC. In both cases, the argument is that function-level BIST is better than system-level for multiple reasons. It can go deeper and provide more confidence in safety coverage. It is also available even if system-level BIST is not provided, offering feedback if the system becomes non-operational.

Among the safety mitigation techniques, the CSI-2 controller provides parity protection on pixels and pixel buffers, ECC for the protocol header, and CRC for packet data. These add redundancy for data formatting, packing logic, critical state machines, and other critical blocks. Packet ordering is checked, and order errors are flagged. One other interesting check I have seen coming up more in safety-critical applications is a watchdog timer, used to detect frozen or excessively delayed operations. All of this emphasizes that at high ASIL levels, safety mitigation is no longer just about the basic methods. Designers are adding more active and complex tests and mitigations to rise to ASIL-C/D.
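To make the parity/CRC layer concrete, here is a generic sketch of the two mechanisms: a per-byte even-parity bit and a bitwise CRC-16 using the common CCITT polynomial. This illustrates the general idea only; the CSI-2 specification defines its own exact bit ordering and CRC application, which this sketch does not claim to reproduce.

```python
# Generic illustrations of two error-detection mechanisms: a per-byte
# even-parity bit and a bitwise CRC-16 (CCITT polynomial 0x1021). These show
# the general idea only; CSI-2 defines its own exact bit ordering and usage.

def parity_bit(byte):
    """Even-parity bit for one byte: 1 if the byte has an odd number of 1s."""
    p = 0
    while byte:
        p ^= byte & 1
        byte >>= 1
    return p

def crc16_ccitt(data, seed=0xFFFF):
    """Bitwise CRC-16 over 'data' using polynomial 0x1021, MSB first."""
    crc = seed
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Any single flipped bit changes the CRC, so corrupted packet data is detected.
```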

This talk can be found HERE and is a good introduction to the topic.

If you would like to learn more information about Mixel and their MIPI offering, visit their website here or learn about their MIPI D-PHY IP here.

Also Read:

A MIPI CSI-2/MIPI D-PHY Solution for AI Edge Devices

FD-SOI Offers Refreshing Performance and Flexibility for Mobile Applications

New Processor Helps Move Inference to the Edge

 


SoC Design Closure Just Got Smarter

by Daniel Payne on 11-08-2022 at 10:00 am


Near the end of any large SoC design project, the RTL code is nearly finished, floorplanning has been done, place and route has a first pass, and static timing has started, but the timing and power goals aren’t met. So, iteration loops continue on blocks and full-chip for weeks or even months. It could take a design team 5-7 days per iteration, without really knowing how many iterations it will take to reach closure of their design goals. Clearly, not a fun process to be caught in.

Design Closure Challenges

The clever R&D engineers at Cadence have taken action to deliver some relief for reaching design closure by creating a full-chip closure flow that uses an automated, massively distributed approach, delivering both optimization and signoff. I talked with Brandon Bautz, senior group director, product management in the Digital & Signoff Group at Cadence, to learn about their newest EDA announcement. The new offering is called the Cadence Certus Closure Solution, and it shares an engine with the Cadence place-and-route tool, the Innovus Implementation System, and the Static Timing Analysis (STA) tool, the Tempus Timing Signoff Solution. Here’s how the Cadence Certus Closure Solution works with STA (Tempus), place & route (Innovus), fill (Pegasus Verification System), and extraction (Quantus Extraction Solution):

Cadence Certus Closure Solution

Reaching block-level closure is accomplished through using Tempus for STA along with Tempus ECO, controlled by the Cadence Certus Closure Solution. For full-chip and sub-system level closure, the Cadence Certus Closure Solution controls Tempus Signoff using either STA or distributed STA (DSTA). SemiWiki has written about how Cadence applied ML to chip optimization steps with the introduction of the Cadence Cerebrus Intelligent Chip Explorer last year, and this also works with the Cadence Certus Closure Solution.

As your SoC design size increases, you really want an EDA tool that scales, so with the Cadence Certus Closure Solution you get a design closure flow that is distributed and supports hierarchical optimization, ready to run in the cloud or your own data center to get results more quickly. When a change is made within a block, the incremental signoff only needs to restore and replace the changed area.

Designers of 3D ICs will also benefit from the Cadence Certus Closure Solution, as it works with the Integrity 3D-IC platform, closing the timing on inter-die paths.

Certus Results

Two customer designs were run through the Cadence Certus Closure Solution, showing some impressive timing optimization and closure times that were gained overnight – not taking weeks and months.

N6 design

  • 22M instances
  • Cadence Certus Closure Solution Client Manager – 8 CPUs, 150GB
  • Cadence Certus Closure Solution Clients – 4 CPUs, 50GB

Timing optimization and closure, overnight, resulting in 10X improved TAT.

N16 design

  • 140M instances
  • Cadence Certus Closure Solution Client Manager – 4 CPUs, 200GB
  • Cadence Certus Closure Solution Clients – 8 CPUs, 600GB
  • Timing optimization and closure – overnight results with 8X TAT improvement
  • 9.7% power improvement for interface and 1.3% at the full-chip level

Even Renesas was quoted as seeing, “6X faster chip-level signoff closure turnaround times.”

If you have some STA and P&R experience, then learning the Cadence Certus Closure Solution will be quick: you can read the user guide, run through the examples, and become proficient within just one day. The Cadence Certus Closure Solution works well on the largest SoC designs, and also on IP blocks with millions of cells. The approach in the Cadence Certus Closure Solution applies to all silicon technologies: planar, FinFET, and gate-all-around.

Summary

Grunt work like manually trying to iterate EDA tool flows in reaching design closure on timing and power is expensive in both engineering costs and lost time to market. Cadence now offers new automation and optimization in the Cadence Certus Closure Solution by using a massively distributed flow that handles unlimited capacity. Engineering teams can expect to get overnight, concurrent full-chip optimization and signoff results.

This new automation area for Cadence looks promising, as it has the potential to save so much time and manual engineering effort.

Related Blogs and Podcasts


Electron Blur Impact in EUV Resist Films from Interface Reflection

by Fred Chen on 11-08-2022 at 6:00 am


The resolution of EUV lithography is commonly expected to benefit from the shorter wavelengths (13.2-13.8 nm), but in actuality the printing process needs to include consideration of the lower energy electrons released by the absorption of EUV photons. The EUV photon energy itself has a nominal range of 90-94 eV, corresponding to 13.2-13.8 nm. Upon absorption, photoelectrons of ~80 eV are released through ionization. These very quickly scatter and lose energy, releasing more electrons (“secondary electrons”) in the process. The energy finally absorbed in the resist film is essentially deposited by these secondary electrons, even at energies as low as around 1 eV [1]. Therefore, the actual resolution of the EUV-printed image is ultimately determined by the spread of these secondary electrons.

The inelastic mean free path (IMFP) is a parameter commonly used to describe the distance traveled by electrons before scattering with energy loss, i.e., inelastic scattering. It is expected to be on the order of nanometers. Since scattering is itself a random or stochastic event, the IMFP itself would have a range of values. This leads to a range of possible values for the blur in the resist image. The IMFP will depend on the kinetic energy of the electron, characteristically taking a minimum value below around 100 eV [2, 3] (Figure 1).
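Because each free path between inelastic scattering events is random, the cumulative distance an electron travels varies from trajectory to trajectory. The following toy Monte Carlo sketch illustrates that stochastic spread; it is an illustration only, not a validated resist transport model, and the 1 nm IMFP and event count in the example are assumed values.

```python
import random

# Toy Monte Carlo: sample exponential free paths with a nanometer-scale IMFP
# to illustrate how stochastic scattering spreads electron travel distances.
# This is an illustration only, not a validated resist transport model.

def mean_path_nm(imfp_nm, n_scatters, n_trials=10000, seed=0):
    """Average total path length over n_scatters inelastic scattering events,
    with each free path drawn from an exponential of mean imfp_nm."""
    rng = random.Random(seed)
    totals = [sum(rng.expovariate(1.0 / imfp_nm) for _ in range(n_scatters))
              for _ in range(n_trials)]
    return sum(totals) / n_trials

# With a 1 nm IMFP and 5 scattering events, the mean total path is ~5 nm,
# but individual trajectories vary widely — one source of stochastic blur.
```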

Figure 1. IMFP vs electron kinetic energy, adapted from [2].

As its energy decreases, the electron continues traveling through scattering events and may encounter the interface of the resist with the vacuum above it. This top surface has an energy barrier that tends to prevent electrons from escaping. The barrier is equal to the sum of the Fermi energy and the work function, and is also the same as the ionization potential [4]. For one popular tin-based EUV resist material component, SnOH, the ionization potential is 6.6 eV [5]. That means electrons with energies less than this value will be prevented from escaping into the vacuum. They will be reflected back from the top surface. The physics behind this is the conservation of momentum parallel to the interface. The perpendicular component of the momentum is reduced due to the interface barrier [6]. The electron energy in the vacuum upon crossing the barrier is decreased from E to E-U, where U is the barrier energy. The corresponding total momentum magnitude goes from sqrt(2mE) to sqrt(2m(E-U)), where m is the electron mass. The component parallel to the interface is given by sqrt(2mE)sin(q) in the resist and sqrt(2m(E-U))sin(s) in the vacuum above the resist (Figure 2). These are equal, leading to an equation similar to Snell’s Law of refraction. If E<U, or sin(q)>sqrt([E-U]/E), the presence of the electron in the vacuum is forbidden, and the electron keeps its initial energy E and momentum magnitude by being reflected at the same energy and angle. This is analogous to the total internal reflection of light at waveguide walls. It gives the lowest energy electrons extra opportunity to scatter and spread laterally within the resist, thereby increasing the image blur.

Figure 2. Total internal reflection of electrons occurs at the top interface when the interface barrier energy U exceeds the initial electron kinetic energy E, or when the vacuum refraction angle s would have to exceed 90 degrees.
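The reflection condition can be turned into a critical angle, just as in optics. A minimal sketch using the 6.6 eV SnOH ionization potential cited above; the electron energies in the example are arbitrary illustrations:

```python
import math

# Critical angle for electron escape at the resist/vacuum interface.
# From parallel-momentum conservation, escape requires sin(q) <= sqrt((E-U)/E);
# beyond that angle the electron is totally internally reflected.
# U = 6.6 eV is the SnOH ionization potential cited in the text.

def critical_angle_deg(E_eV, U_eV):
    """Angle from the surface normal beyond which an electron of energy E_eV
    cannot cross a barrier U_eV; returns None if it can never escape (E <= U)."""
    if E_eV <= U_eV:
        return None
    return math.degrees(math.asin(math.sqrt((E_eV - U_eV) / E_eV)))

# A 5 eV electron (below the 6.6 eV barrier) never escapes; a 10 eV electron
# escapes only within ~35.7 degrees of the surface normal.
```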

The bottom interface with the resist underlayer plays a similarly key role: an energy barrier there would increase blur even further by adding another boundary for internal reflection. On the other hand, it may instead present a negative energy barrier, allowing electrons to escape the resist film into the underlayer. Hence, we expect the resist underlayer to significantly affect the low-energy electron spread in EUV resists. There is the uncomfortable realization that the EUV-deposited dose, rather than being a certain number of photons absorbed in a square nanometer, is actually an indeterminate number of electrons of unknown trajectories finally resting in that square nanometer.

References

[1] I. Bespalov et al., ACS Appl. Mater. Interfaces 12, 8, 9881 (2020).

[2] M. P. Seah and W. A. Dench, Surf. and Interf. Anal. 1, 2 (1979).

[3] D-N. Le and H. T. Nguyen-Truong, J. Phys. Chem. C 125, 34, 18946 (2021).

[4] A. Klein et al., Materials 3, 4892 (2010).

[5] Y. Zhang et al., Appl. Phys. Lett. 118, 171903 (2021).

[6] O. Yu et al., J. Elec. Spec. and Rel. Phenom. 241, 146824 (2020).

This article first appeared in LinkedIn Pulse:  Electron Blur Impact in EUV Resist Films from Interface Reflection

Also Read:

Where Are EUV Doses Headed?

Application-Specific Lithography: 5nm Node Gate Patterning

Spot Pairs for Measurement of Secondary Electron Blur in EUV and E-beam Resists