

9 Trends Will Dominate Blockchain Technology In 2023
by Ahmed Banafa on 12-20-2022 at 10:00 am


It’s clear that blockchain, if adopted, will revolutionize operations and processes in many industries and government agencies, but adoption requires time and effort. Blockchain technology will also push people to acquire new skills, and traditional businesses will have to completely reconsider their processes to harvest the maximum benefit from this promising technology. [2]

The following 9 trends will dominate blockchain technology in 2023:

1. Blockchain 4.0

Blockchain 4.0 is focused on innovation. Speed, user experience, and usability for a larger, mainstream audience will be the key focus areas for Blockchain 4.0. We can divide Blockchain 4.0 applications into two verticals:

•       Web 3.0 

•       Metaverse

Web 3.0

The 2008 global financial crisis exposed the cracks in centralized control, paving the way for decentralization. The world needs Web 3.0, a user-sovereign platform. Because Web 3.0 aims to create an autonomous, open, and intelligent internet, it will rely on decentralized protocols, which blockchain can provide.

There are already some third-generation blockchains that are designed to support web 3.0, but with the rise of Blockchain 4.0, we can expect the emergence of more web 3.0 focused blockchains that will feature cohesive interoperability, automation through smart contracts, seamless integration, and censorship-resistant storage of P2P data files.

Metaverse

Metaverses, the dream projects of tech giants like Facebook, Microsoft, Nvidia, and many more, are the next big thing we will experience in the coming few years. We already connect to virtual worlds across touchpoints such as social engagement, gaming, work, and networking. The metaverse will make these experiences more vivid and natural.

Advanced AI, IoT, AR and VR, cloud computing, and blockchain technologies will come into play to create the virtual-reality spaces of the metaverse, where users will interact with a computer-generated environment and other users through realistic experiences.

A centralized metaverse entails more intense user engagement, deeper use of internet services, and more exposure of users’ personal data, all of which almost certainly means higher cybercrime exposure. Giving centralized bodies the power to regulate, control, and distribute users’ data is not a sustainable setup for the future of the metaverse. Therefore, much emphasis has been placed on developing decentralized metaverse platforms that provide user autonomy. Decentraland, Axie Infinity, and Starl are all decentralized metaverses powered by blockchain.

Also, Blockchain 4.0’s advanced solutions can help metaverse users meet their security and trust needs. Take a metaverse gaming platform, for example, where users may purchase, possess, and trade in-game items of potentially enormous value. Proof of ownership through something as immutable and scarce as NFTs will be required to prevent forgery of these assets.

Ultimately, Blockchain 4.0 will enable businesses to move some or all of their current operations onto secure, self-recording applications built on decentralized, trustless, and encrypted ledgers, letting businesses and institutions easily enjoy the core benefits of blockchain.

2. Stablecoins Will Be More Visible

Cryptocurrencies such as Bitcoin are highly volatile by nature. To avoid that volatility, stablecoins entered the picture, each with a stable value associated with the coin. Stablecoins are still in their initial phase, and it is predicted that 2023 will be the year blockchain stablecoins achieve their all-time high. [1]

3. Social Networking Problems Meet Blockchain Solution

There were around 4.74 billion social media users worldwide in 2022.

Introducing blockchain into social media could solve problems related to notorious scandals, privacy violations, data control, and content relevance, making the blend of blockchain and social media another emerging technology trend in 2023.

With blockchain, published social media data can remain untraceable and cannot be duplicated, even after deletion. Users can also store data more securely and maintain ownership of it. Blockchain further ensures that control over content relevance lies with those who created the content rather than with the platform owners, which makes users feel more secure because they control what they see. One daunting task is convincing social media platforms to implement it, whether on a voluntary basis or as a result of privacy laws similar to GDPR. [1]

4. Interoperability and Blockchain Networks

Blockchain interoperability is the ability to share data and other information across multiple blockchain systems and networks. This capability makes it simple for the public to see and access data across different blockchain networks. For example, you can send data from an Ethereum blockchain to another blockchain network. Interoperability is a challenge, but the benefits are vast [5].

5. Economy and Finance Will Lead Blockchain Applications

Unlike other traditional businesses, the banking and finance industries don’t need to introduce radical transformation to their processes to adopt blockchain technology. After blockchain was successfully applied to cryptocurrency, financial institutions began seriously considering its adoption for traditional banking operations.

Blockchain technology will allow banks to reduce excessive bureaucracy, conduct faster transactions at lower cost, and improve confidentiality. One of the blockchain predictions made by Gartner is that the banking industry will derive billions of dollars of business value from the use of blockchain-based cryptocurrencies by 2023.

Moreover, blockchain can be used for launching new cryptocurrencies that will be regulated or influenced by monetary policy. In this way, banks want to reduce the competitive advantage of standalone cryptocurrencies and achieve greater control over their monetary policy. [2]

6. Blockchain Integration into Government Agencies

The idea of the distributed ledger is also very attractive to government authorities that have to administer very large quantities of data. Currently, each agency has its own separate database, so agencies constantly have to request information about residents from each other. Implementing blockchain technologies for effective data management will improve the functioning of such agencies.

According to Gartner, by 2023 more than a billion people will have some data about them stored on a blockchain, but they may not be aware of it. National cryptocurrencies will also appear; it is inevitable that governments will have to recognize the benefits of blockchain-derived currencies. Digital money is the future, and nothing will stop it. [3]

7. Blockchain Combines with IoT

The IoT tech market will see a renewed focus on security as complex safety challenges crop up. These complexities stem from the diverse and distributed nature of the technology. The number of Internet-connected devices has breached the 26 billion mark. Device and IoT network hacking will become commonplace in 2023. It is up to network operators to stop intruders from doing their business.

The current centralized architecture of IoT is one of the main reasons for the vulnerability of IoT networks. With billions of devices connected and more to be added, IoT is a big target for cyber-attacks, which makes security extremely important.

Blockchain offers new hope for IoT security for several reasons. First, blockchain is public: everyone participating in the network of nodes can see the blocks and the transactions stored in them and approve them, although users still control their own transactions with private keys. Second, blockchain is decentralized, so there is no single authority that approves transactions, eliminating the single-point-of-failure (SPOF) weakness. Third, and most importantly, it is secure: the database can only be extended, and previous records cannot be changed [7].
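
These properties follow from the data structure itself: each block embeds the hash of its predecessor, so the ledger can only grow and any tampering with an earlier record breaks every hash after it. The sketch below is a minimal, hypothetical C++ illustration of that chaining; it uses std::hash as a stand-in for a real cryptographic hash such as SHA-256, and the Block and Ledger names are purely illustrative.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Minimal append-only ledger: each block stores the hash of the previous
// block, so altering any earlier record changes every hash after it.
// std::hash is only a placeholder for a cryptographic hash such as SHA-256.
struct Block {
    std::string payload;   // e.g. an IoT sensor reading or a transaction
    std::size_t prevHash;  // hash of the preceding block
    std::size_t hash;      // hash over this block's payload + prevHash
};

class Ledger {
public:
    void append(const std::string& payload) {
        std::size_t prev = chain_.empty() ? 0 : chain_.back().hash;
        std::size_t h = std::hash<std::string>{}(payload + std::to_string(prev));
        chain_.push_back({payload, prev, h});
    }

    // Re-derive every hash; tampering with any stored payload breaks the chain.
    bool verify() const {
        std::size_t prev = 0;
        for (const Block& b : chain_) {
            std::size_t h = std::hash<std::string>{}(b.payload + std::to_string(prev));
            if (b.prevHash != prev || b.hash != h) return false;
            prev = b.hash;
        }
        return true;
    }

private:
    std::vector<Block> chain_;
};

int main() {
    Ledger ledger;
    ledger.append("device-42: temperature=21.5C");
    ledger.append("device-42: temperature=21.7C");
    std::cout << (ledger.verify() ? "chain intact" : "chain tampered") << "\n";
}
```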

8. Blockchain with AI 

Integrating AI (Artificial Intelligence) with blockchain technology will make for better development. This integration will bring a level of improvement to blockchain technology and support a substantial number of applications.

The International Data Corporation (IDC) suggests that global spending on AI will reach $57.6 billion by 2023 and 51% of businesses will be making the transition to AI with blockchain integration.

Additionally, blockchain can make AI more coherent and understandable: we can trace and determine why decisions are made in machine learning, because the ledger can record all of the data and variables that feed into a machine learning decision.

Moreover, AI can boost blockchain efficiency far better than humans, or even standard computing, can. A look at the way blockchains currently run on standard computers proves this: a lot of processing power is needed to perform even basic tasks.

Examples of applications of AI in Blockchain: Smart Computing Power, Creating Diverse Data Sets, Data Protection, Data Monetization, Trusting AI Decision Making. [6]

9. Demand for Blockchain Experts 

Blockchain is a new technology, and only a small percentage of individuals are skilled in it. As blockchain spreads rapidly, this creates an opportunity for many people to develop blockchain skills and experience.

Even though the number of blockchain experts is increasing, the implementation of this technology is growing so rapidly that demand for blockchain experts will remain high in 2023. [3]

It’s worth noting that universities and colleges are making genuine efforts to catch up with this need, including San Jose State University with several courses covering blockchain technology, but the rate at which students graduate with sufficient blockchain skills is not enough to fill the gap. Companies are also taking steps to build on their existing talent by adding training programs for developing and managing blockchain networks.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

References

[1] https://www.mobileappdaily.com/top-emerging-blockchain-trends

[2] https://www.aithority.com/guest-authors/blockchain-technology-in-the-future-7-predictions-for-2020/

[3] https://www.bitdeal.net/blockchain-technology-in-2020

[4] https://medium.com/altcoin-magazine/to-libra-or-not-to-libra-e2d5ddb5455b

[5] https://blockgeeks.com/guides/cosmos-blockchain-2/

[6] https://medium.com/altcoin-magazine/blockchain-and-ai-a-perfect-match-e9e9b7317455

[7] https://medium.com/@banafa/ten-trends-of-iot-in-2020-b2

[8]  https://www.linkedin.com/pulse/blockchain-40-ahmed-banafa/

Also Read:

The Role of Clock Gating

Ant Colony Optimization. Innovation in Verification

A Crash Course in the Future of Technology



Chiplets UCIe Require Verification
by Daniel Nenni on 12-20-2022 at 6:00 am


Chiplets have been trending on SemiWiki for the past two years and I think that will continue into the distant future. As a potential way to unclog Moore’s Law, chiplets are an area where you can bet the semiconductor ecosystem will once again prove to be a force of nature, driving semiconductor company roadmaps to smaller and better things.

To be clear, the chiplet concept is not new. We have been doing multi chip modules (MCMs) for years now. IP blocks are also not new and that is what a chiplet is, a hard IP block. What’s new is the chiplet ecosystem that is developing so all companies, big and small, can design with chiplets.

To enable chiplet connectivity we have the developing Universal Chiplet Interconnect Express (UCIe) standard. UCIe is an open specification for a die-to-die interconnect and serial bus between chiplets. It’s co-developed by our colleagues at AMD, Arm, ASE Group, Google Cloud, Intel, Meta, Microsoft, Qualcomm, Samsung, and TSMC.

One of the critical pieces of the chiplet UCIe design puzzle of course is verification, which brings us to a recent announcement:

Truechip Announces First Customer Shipment of UCIe Verification IP

Speaking at SemIsrael expo 2022 Nitin Kishore, CEO, Truechip, said, “UCIe is the need of the hour as it not only assists to increase yield for SoCs with larger die size but also allows to intermix components (or chiplets) from multiple vendors within a single package. SoC providers can reduce time to market and cost if they can re-use chiplets from previous or other chips (like a processor subsystem or a memory subsystem, etc.) versions or plug-in chiplets from third-party vendors. With the launch of the UCIe Verification IP, I believe that this protocol will enable design houses to configure, launch, analyze, manage sustainability targets, and accelerate them achieve their design goals.”

UCIe Verification IP Key Benefits

  • Available in native SystemVerilog (UVM/OVM/VMM) and Verilog
  • Unique development methodology to ensure highest levels of quality
  • Availability of various Regression Test Suites
  • 24X5 customer support
  • Unique and customizable licensing models
  • Exhaustive set of assertions and cover points with connectivity example for all the components
  • Consistency of interface, installation, operation and documentation across all our VIPs
  • Complete solution and easy integration in IP and SoC environments

Nitin concluded, “With high-speed support of 32GTps per lane and the fact that it can also enable the mapping of other protocols via the streaming mode, UCIe is not only a high-performance protocol but also an interconnect protocol that requires very low power. The advantages of UCIe makes it the most innovative technique to smoothen the way towards a truly open multi-die system ecosystem by ensuring interoperability.”

Intellectual Property is a critical part of the semiconductor ecosystem. In fact, without the commercial IP market the fabless semiconductor business would not be what it is today. IP is still the most read topic on SemiWiki and the fastest growing semiconductor design market segment and will continue to be so, with or without chiplets. With chiplets, however, the IP market could easily hit the $10B mark by the end of the decade, absolutely.

About Truechip

Truechip is a leading provider of Verification IPs, NoC Silicon IP, GUI-based automation products, and chip design services, which help accelerate IP/SoC design, thus lowering the cost and risks associated with the development of ASICs, FPGAs, and SoCs. Truechip provides Verification IP solutions for RISC-V-based chips, networking, automotive, microcontroller, mobile, storage, data center, and AI domains for all known protocols, along with custom VIP development. The company has a global footprint with coverage across North America, Europe, Israel, Taiwan, South Korea, Japan, Vietnam, and India. Truechip offers the industry’s first 24 x 7 technical support.

Also Read:

Truechip Introduces Automation Products – NoC Verification and NoC Performance

Truechip: Customer Shipment of CXL3 VIP and CXL Switch Model

Truechip’s Network-on-Chip (NoC) Silicon IP

Truechip’s DisplayPort 2.0 Verification IP (VIP) Solution



An Update on HLS and HLV
by Daniel Payne on 12-19-2022 at 10:00 am


I first heard about High Level Synthesis (HLS) while working in EDA at Viewlogic back in the 1990s, and have kept watch on the trends over the past decades. Earlier this year Siemens EDA hosted a two day event, having speakers from well-known companies share their experiences about using HLS and High Level Verification (HLV) in their semiconductor products. I’ll recap the top points from each speaker in this blog.

Stuart Clubb from Siemens EDA kicked off the two day event, and he explained how general purpose CPUs have struggled to meet compute demands, RTL design productivity is stalling, and that RTL verification costs are only growing. HLS helps by reducing both simulation and verification times, allowing for more architectural exploration, and enables new domain-specific processors and accelerators to handle new workloads more efficiently. The tool at Siemens EDA for HLS and HLV is called Catapult.

Catapult

With the Catapult tool, designers model at a higher level than RTL using C++, SystemC, or MatchLib, and the tool then produces RTL code for traditional logic synthesis tools. Your HLS source code can be targeted to an ASIC, an FPGA, or even an eFPGA. There’s also a power analysis flow supported with the PowerPro add-on.
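
To make this concrete, here is a hedged sketch of the kind of untimed C++ a tool in this class could consume: a plain 4-tap FIR filter written without timing or interface detail, which the HLS tool would schedule into cycles and turn into RTL. The function name and coding style are illustrative assumptions, not taken from Catapult’s examples.

```cpp
#include <array>
#include <cstdint>

// Untimed C++ description of a 4-tap FIR filter. An HLS tool schedules the
// loops into clock cycles and generates RTL; the same function can also be
// compiled natively and reused as the verification reference model.
constexpr int kTaps = 4;

int32_t fir4(int16_t sample,
             std::array<int16_t, kTaps>& delayLine,
             const std::array<int16_t, kTaps>& coeff) {
    // Shift the delay line and insert the new sample.
    for (int i = kTaps - 1; i > 0; --i) {
        delayLine[i] = delayLine[i - 1];
    }
    delayLine[0] = sample;

    // Multiply-accumulate across the taps.
    int32_t acc = 0;
    for (int i = 0; i < kTaps; ++i) {
        acc += static_cast<int32_t>(delayLine[i]) * coeff[i];
    }
    return acc;
}
```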

NXP

NXP has 31,000 people across 30 countries and produced revenues of $11.06 billion in 2021. Reinhold Schmidt talked about their secure car access group of 11 engineers. Their product includes an IEEE 802.15.4z-compliant IR-UWB transceiver, an Arm Cortex-M33, and a DSP; they started with a 40nm process, then migrated to a 28nm process, and the device operates on a coin cell battery.

NXP – NCJ29D5

Modeling was done in Matlab, C++, and SystemC. MatchLib, a SystemC/C++ library, was also used, and PowerPro was used for power optimization and estimation. Results on an IIR DC notch filter showed that HLS gave an area reduction of about 40% compared to handwritten RTL.
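
For readers unfamiliar with the block, a DC notch (DC blocker) fits in a few lines of fixed-point C++. The sketch below is the textbook first-order form y[n] = x[n] - x[n-1] + a*y[n-1], not NXP’s production design; the Q15 coefficient scaling and saturation behavior are assumptions made purely for illustration.

```cpp
#include <cstdint>

// First-order IIR DC notch (DC blocker): y[n] = x[n] - x[n-1] + a*y[n-1].
// Textbook structure for illustration only; the pole coefficient is held in
// Q15 fixed point and the output is saturated to 16 bits.
class DcNotch {
public:
    explicit DcNotch(int16_t alphaQ15) : alphaQ15_(alphaQ15) {}

    int16_t step(int16_t x) {
        int32_t y = static_cast<int32_t>(x) - prevX_
                  + ((static_cast<int32_t>(alphaQ15_) * prevY_) >> 15);
        prevX_ = x;
        if (y >  32767) y =  32767;   // saturate before storing feedback
        if (y < -32768) y = -32768;
        prevY_ = static_cast<int16_t>(y);
        return prevY_;
    }

private:
    int16_t alphaQ15_;    // pole location, e.g. 0.995 in Q15 (~32604)
    int32_t prevX_ = 0;   // x[n-1]
    int16_t prevY_ = 0;   // y[n-1]
};
```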

They plan to integrate HLS further into their infrastructure, and investigate using HLV. It’s a challenge to get their RTL-centric engineers to think in terms of algorithms, and for SW engineers to think about hardware descriptions.

Google, VCU

Video traffic accounts for up to 80% of Internet traffic, so Google has focused their HW development on a Video Coding Unit (VCU). Aki Kuusela presented the history of video compression: H.264, VP9, AV1, AV2. Video transcoding follows a process from creator to viewer:

Video Transcoding

Google developed their own chips for this video transcoding task to get a proper implementation of H.264 and VP9, optimized for datacenter workload, so an HLS design approach allowed them to do this quickly. With a Google VCU a creator can upload a 1080p 30 fps video at 20 Mbps, then a viewer can watch it at 1080p 30fps using only 4 Mbps.

The VCU ASIC block diagram shows how all the IP blocks are connected to a NOC internally.

VCU ASIC

The VCU ASIC goes onto a board and rack, then it’s built up into a cluster. Google engineers have been using HLS for about 10 years now, and the methodology allows SW/HW co-design, plus fast design iteration. Catapult converts their C++ to Verilog RTL, and an in-house tool called Taffel is used for block integration, verification and visualization.

HLS design style worked well for data-centric blocks, state machines and arbiters. With C++ there was a single source of truth, and there were bit-exact results between model and RTL, using 5 to 10X less code compared to RTL coding.

NVIDIA Research

Nate Pickney and Rangharajan Venkatesan started out with four research areas where HLS has been used in their group: RC18, an inference chip; Simba, deep-learning inference with a chiplet-based architecture; MAGNET, a modular accelerator generator for neural networks; and IPA, floorplan-aware SystemC interconnect performance modeling.

The motivation for IPA, the Interconnect Prototyping Assistant, was to abstract and automate interconnects within a SystemC codebase. You use IPA’s SystemC API for magic message passing, SystemC simulation for modeling, and HLS for RTL generation. IPA was originally developed by NVIDIA and is now maintained by Siemens EDA.

Interconnect Prototyping Assistant (IPA)

The SoC design flow between HLS and RTL, including exploration and implementation is shown below:

HLS Flow

Adding IPA into this flow shows how exploration times can be reduced to 10 minutes, while implementation times are just 6 hours.

IPA added to Flow

For the 4×4 MAGNET DL accelerator example the first step was to write a unified SystemC model, make an initial run of the IPA tool, update the interconnect, and then revise the microarchitecture. Experiments from this analysis compared directly-connected links, centralized crossbar, and uniform mesh (NOC). Each experiment using IPA took only minutes of design effort, instead of weeks required without IPA.

IPA info is open-source, learn more here.

NVIDIA, Video Codecs

Hai Lin described how their design and verification flow follows several steps:

  • HLS design electronic spec
  • HLS design lint check
  • Efficient Catapult synthesis
  • Design quality tracking
  • Power optimization, PowerPro
  • Block-level clock gating
  • HLS vs RTL coherence checking
  • Automatic testbench generation
  • Catapult Code Coverage (CCOV) coverpoint insertions

Their video codec group switched from an RTL flow to Catapult HLS and saw a reduction in coding effort, reduced number of bugs, and shortened simulation runtimes. Automation now handled pipelining, parallelization, interface logic, idle signal generation and more. RTL clock gating is automated with PowerPro. Finally, the HLS methodology integrates code and functional coverage at the C++ design source-code level.

STMicroelectronics

Engineers at ST have 10 years of HLS experience using Catapult on products like set-top boxes, imaging, and communication systems, and are now using HLS for products such as sensors, MEMS actuators (ASICs for MEMS mirror drivers), and analog products.

An Infrared Smart Sensor project used HLS, and a neural network was trained from a set of data coming from a sensor in real life situations.

Infrared Smart Sensor

With Catapult they were able to explore neural networks with various arithmetic formats, then compare the area, memory and accuracy of results. The time for HLS design in Catapult, data analysis, testbench and modeling was only 5 person-weeks.

A second HLS design project was a Qi-compliant ASK demodulator. They were able to explore the design space by comparing demodulators with various architectures, then measure the area and slack-time numbers for each (the rolled-versus-unrolled tradeoff is sketched after the list below):

  • Fully rolled
  • Partial rolled 8 cycles
  • Partial rolled 4 cycles
  • Partial rolled 2 cycles
  • Unrolled
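
Under assumed syntax, the sketch below shows what those alternatives mean at the source level: the loop is written once in C++, and the HLS unroll setting decides whether one multiplier is reused over eight cycles (fully rolled) or several multipliers work in parallel (partially rolled or unrolled), trading area against latency. The pragma spelling is an illustrative placeholder, not the exact Catapult directive.

```cpp
#include <array>
#include <cstdint>

// One 8-sample correlation, written once. An HLS tool can implement it fully
// rolled (one multiplier reused over 8 cycles) or unrolled by 2, 4, or 8
// (more multipliers, fewer cycles); only the unroll directive changes.
int32_t correlate8(const std::array<int16_t, 8>& x,
                   const std::array<int16_t, 8>& ref) {
    int32_t acc = 0;
    // #pragma unroll <1|2|4|8>  // rolled ... partially rolled ... unrolled
    for (int i = 0; i < 8; ++i) {
        acc += static_cast<int32_t>(x[i]) * ref[i];
    }
    return acc;
}
```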

The third example shared was for a contactless infrared sensor with embedded processing. Three HW blocks for temperature compensation formulas were modeled in HLS.

Contactless Infrared Sensor with Embedded Processing

The generated RTL from each block was run through logic synthesis  and the area numbers were compared for a variety of architectures. Latency times were also estimated in Catapult to help choose the best architecture.

NASA JPL

FPGA engineer Ashot Hambardzumyan from NASA JPL compared using C++ and SystemC for the Harris Corner Detector. The main algorithm computes the Dx and Dy derivatives of an image, then computes a Harris response score at each pixel, and finally applies non-maximum suppression to the scores.
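
For reference, the per-pixel Harris response itself is a small computation once the windowed gradient sums are available. The floating-point sketch below is the generic textbook form, not the JPL implementation; a hardware version would use fixed-point arithmetic and a streaming window buffer.

```cpp
// Harris response for one pixel, given the windowed sums of gradient
// products: Sxx = sum(Dx*Dx), Syy = sum(Dy*Dy), Sxy = sum(Dx*Dy).
// R = det(M) - k * trace(M)^2, with k typically around 0.04 to 0.06.
float harrisResponse(float sxx, float syy, float sxy, float k = 0.04f) {
    float det   = sxx * syy - sxy * sxy;
    float trace = sxx + syy;
    return det - k * trace * trace;
}
// Non-maximum suppression then keeps a pixel only if its response exceeds a
// threshold and is the largest value in its local neighborhood.
```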

Harris Corner Detector

They modeled this as a DSP process, and the HLS architecture was modeled as a Kahn Process. Comparisons were made between using C++, SystemC and SystemVerilog approaches:

Synthesis results

To verify each implementation, an image was used and the simulation time per frame was measured: the SystemVerilog implementation required 3 minutes per frame, SystemC took only 5 seconds, and C++ was the fastest at only 0.3 seconds per frame.

Design and verification times for HLS were shorter than with RTL methodology. Basic training for using C++ was shorter at 2 weeks, versus SystemC at 4 weeks.  The Harris Corner Detector algorithm took just 4 weeks using C++, compared to 6 weeks with SystemC.

Viosoft

The final presentation talked about 5G and the challenges of the physical layer (L1), where complex math is used in communication algorithms like channel estimation, modulation, demodulation, and forward error correction.

RAN Physical Layer

HLS was applied to L1 functions written in C++, then the runtime of a CRC was compared across three implementations (a generic CRC sketch follows the list):

  • x86 Xeon CPU, 2.3 GHz – 608,423 ns
  • RISC-V QEMU CPU, 2.3 GHz – 4,895,104 ns
  • Catapult – 300 ns
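
For context, the sketch below is a generic bit-serial CRC-32 in C++ (reflected form, polynomial 0xEDB88320). The benchmark in the talk may use a different, 5G NR-specific polynomial, so treat this only as an illustration of why a per-bit software loop is so much slower than a parallel hardware implementation.

```cpp
#include <cstddef>
#include <cstdint>

// Bit-serial CRC-32 (reflected, polynomial 0xEDB88320), shown as a generic
// software reference. Each input bit costs one loop iteration, which is why
// a hardware implementation that processes many bits per cycle is so much
// faster than a general-purpose CPU running this loop.
uint32_t crc32(const uint8_t* data, std::size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit) {
            // Conditionally XOR with the polynomial, one bit per iteration.
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
    }
    return ~crc;
}
```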

Another comparison was between an RTL flow and the Catapult flow for maximum clock frequency, and it showed that HLS results from Catapult achieved 2X higher clock frequency than RTL. Resource utilization on Intel FPGA devices showed that RTL generated by Catapult was comparable to manual RTL for logic utilization, total thermal power dissipation, and static thermal power dissipation.

Viosoft prefers the single-source implementation of HLS, as HW/SW can be partitioned more easily, design trade-offs can be explored, performance can be estimated, and time to market shortened.

Summary

HLS and HLV are growing trends and I expect to see continued adoption across a widening list of application areas. Higher levels of abstraction have benefits like fewer lines of source code, quicker times for simulation, faster verification, all leaving more time for architectural exploration. RTL coding isn’t disappearing, it’s just being made more efficient with HLS automation.

There’s even HLS open source IP at Github to help get you started quicker. The Catapult tool comes with reference examples across different applications to speed learning. You’ll even find YouTube tutorials on using HLS in Catapult. The HLS Bluebook is another way to learn this methodology.

View the two day event archive here, about 8 hours of video.

Related Blogs



MIPI bridging DSI-2 and CSI-2 Interfaces with an FPGA
by Don Dingee on 12-19-2022 at 6:00 am

MIPI specification chart, courtesy MIPI Alliance

We’re so spoiled by 4K and 8K video content at frame rates of 120 Hz or higher on high-performance devices that many of us now expect these resolutions and rates even on small devices. The necessary interfaces exist in MIPI Display Serial Interface 2 (DSI-2) and Camera Serial Interface 2 (CSI-2). The challenge is that these interfaces eat up a conventional SoC, either overwhelming low-end parts wholly or consuming too much power and battery life on a higher-end part. At MIPI DevCon 2022, Mahmoud Banna of Mixel co-presented with a customer, Hercules Microelectronics (HME), their solution: MIPI bridging DSI-2 and CSI-2 interfaces with an FPGA, providing acceleration at low power.

“Beyond mobile” applications for MIPI specifications

MIPI-powered displays have been ubiquitous in smartphones for some time. In most cases, smartphones feature high-end SoCs to power the complex cellular network interface and provide the multi-tasking response users expect. In exchange for access to high-performance content, users learned to take steps to save smartphone battery life, like dark mode, turning down the brightness, and more. It’s an acceptable trade most of the time.

A new generation of “beyond mobile” applications demands the same type of performance without the same resources or management. High-performance displays and camera-based sensors now appear in automotive, IoT, wearables, industrial devices, and more. Bandwidth requirements are high, power requirements are low, and EMI is a concern anytime signals switch rapidly. MIPI developed its primary display and camera interface specifications from the ground up for these applications. The evolution of these specifications continues; DSI-2 v 2.0 was released in July 2021, and CSI-2 v 3.0 was released in September 2019.

Splitting the difference with FPGA-based MIPI bridging solutions

Most of these new applications operate in consumer segments with short lifecycles or industrial segments with relatively low volumes. Add to those factors the ongoing enhancements of the MIPI specifications, and designing an ASIC becomes risky. It’s easy to miss a market window, a critical new feature, or cost targets, and seeing a device with an outdated specification raises questions among buyers.

An FPGA could solve many of those concerns. FPGAs enable faster prototyping and proof of concept, enabling companies to demonstrate devices for investors. FPGAs allow customization, using the same basic elements in more than one design, or quickly targeting a new use case. Risks of a hardware re-spin shrink, with the ability to reprogram logic. And FPGA-based reference designs allow OEMs to do their tweaking to unique requirements.

There’s still the challenge of power consumption. A mid-range FPGA on a newer process technology node can offer the right baseline of power use while still delivering the logic size and performance needed. “The traditional way to build a MIPI bridging system over an FPGA was to use the FPGA LVDS interface to emulate the MIPI interface,” says Banna. “But the growing trend here is to harden the MIPI subsystem, including the PHY and the controller, to achieve many benefits.” Hardened MIPI D-PHY and DSI-2 controller IP from Mixel hit higher data rates, are more stable in FPGA contexts, and consume less power.

Illustrating the concept, Mixel is teaming with Hercules Microelectronics and their H1D03 FPGA. Built on a 40nm LP process, the H1D03 pairs two hardened Mixel MIPI IP blocks with an 8051 microcontroller, two SRAM blocks, 2K LUTs running at up to 200 MHz, and LVDS and other I/O.

Wide range of possibilities for MIPI bridging

Configuring the MIPI D-PHY, MIPI DSI-2, and LVDS blocks for emulating CSI-2, this approach can hit many use cases. Yundong Cui of HME offers this diagram:

In the recorded video of the MIPI DevCon 2022 session, Cui steps through several applications for the H1D03. His examples include a low-end cellphone or tablet, an AR headset, an e-Paper display, a smart home control panel, and an industrial camera. In each case, he points out how offloading the SoC improves system performance while keeping power low. Banna takes on some questions in a short Q&A and highlights upcoming MIPI PHY IP on the Mixel roadmap.

Hardening the Mixel MIPI IP is an important step here. These latest MIPI specifications are complex, and hardened IP ensures consistent performance regardless of what happens in the FPGA. It also removes the burden for the OEM to try to debug a soft implementation so they can focus on functionality in their application.

To see how Mixel and Hercules Microelectronics work together in MIPI bridging DSI-2 and CSI-2 interfaces with an FPGA, view the entire MIPI DevCon 2022 session on YouTube at:

Leveraging MIPI DSI-2 & MIPI CSI-2 Low-Power Display and Camera Subsystems

Also Read:

MIPI in the Car – Transport From Sensors to Compute

A MIPI CSI-2/MIPI D-PHY Solution for AI Edge Devices

FD-SOI Offers Refreshing Performance and Flexibility for Mobile Applications



Podcast EP132: The Growing Footprint of Methodics IPLM with Simon Butler
by Daniel Nenni on 12-16-2022 at 10:00 am

Dan is joined by Simon Butler, the founder and CEO of Methodics Inc. Methodics was acquired by Perforce in 2020, and Simon is currently the general manager of the Methodics business unit at Perforce. Methodics created IPLM as a new business segment in the enterprise software space to service the needs of IP and component-based design. Simon has 30 years of IC design and EDA tool development experience and specializes in product strategy and design.

Dan discusses the growing need for IP lifecycle management across design and manufacturing. How the Chips Act impacts these activities is discussed, along with the requirements of legacy node design and emerging chiplet-based design approaches.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



CEO Interview: Ron Black of Codasip
by Daniel Nenni on 12-16-2022 at 6:00 am


Dr. Black has over 30 years of industry experience. Before joining Codasip, he was President and CEO at Imagination Technologies and previously CEO at Rambus, MobiWire (SAGEM Handsets), UPEK, and Wavecom. He holds a BS and MS in Engineering and a Ph.D. in Materials Science from Cornell University. A consistent thread of his career has been processors, including PowerPC at IBM, network processors at Freescale, security processors at Rambus, and GPUs at Imagination.

Tell us about Codasip
Codasip is unique. It was founded in 2014, and a year later we were offering the first commercial RISC-V core and co-founding RISC-V International. Since then, we have grown rapidly, particularly in the past two years. Today we have 179 employees in offices around the world in 17 locations. What I find so interesting is that we do ‘RISC-V with a twist’. We design RISC-V cores using Studio, our EDA tool, and then license both the cores and Studio to our customers so they can customize the processors for their unique applications. Think of Codasip as providing a very low-cost architectural license with a fantastic EDA tool to change the design so it is unique for you – ‘design for differentiation’.

Our customers all seem to have one common characteristic – they are ambitious innovators that want to make their products better than what you get from just a standard offering.

‘Codasip makes the promise of RISC-V openness a reality’, can you explain?
The RISC-V instruction set architecture, or ISA, is an open standard specifically designed for customers to be able to extend it to fit their specific need, whilst still having a base design that is common. You can add optional standard extensions and non-standard custom extensions whenever you want to ensure the processor you are designing truly runs your workload optimally.

Some people say that this creates fragmentation, but it really does not. Indeed, alternative proprietary architectures have segment specific versions that one could call fragmented because they are not interoperable. The key question is – do you want the processor supplier to control what you do, or do you want to decide for yourself? I think the answer is obvious. We see the industry moving to letting customers decide, not the supplier.

With our approach you can always use our standard processor offering to start with, and be assured that you can change it in the future if you want to. In fact, we like to think that describing the processor using CodAL source code plus the open RISC-V ISA reinvents the concept of architecture licenses to give customers the best of both worlds – a base design with a proven quality through unparalleled verification, plus an easy way to customize for any application.

You recently announced several partnerships with RISC-V players, can you tell us more about your role in the RISC-V ecosystem?
We strongly believe that to be successful RISC-V requires a community – nobody can or should walk alone. By partnering with other key players in the industry we all build the RISC-V ecosystem together.

Two areas we feel the community needs to focus on and excel at are processor verification and security. So we were proud to partner with Siemens EDA on verification, and CryptoQuantique on security. Each has industry-leading solutions and are great partners.

We also recently joined the Intel Pathfinder for RISC-V program, which is helping the industry scale. We made our award-winning L31 core available for evaluation on Intel’s widely accepted FPGA platform, targeted for both educational and commercial purposes.

Similarly, we were keen to help the ecosystem to increase the quality of RISC-V processor IP by being part of the Open HW Group, which has a strong belief in commercial grade verification.

You also recently announced the acquisition of a cybersecurity company, can you tell us more?
We fundamentally believe in both organic and inorganic growth because we are always looking for the absolutely best talent, and were lucky enough to find the Cerberus team, a UK-based cybersecurity company known for its strong hardware and software IP. The Cerberus team really embraced the Codasip approach and have already been instrumental in helping us to win new business in secure processors and secure processing. To expand the initiative, we are now in the process of combining our automotive safety initiative with our security initiative, which is something that we believe can be incredibly important for the industry. Stay tuned.

As a leading European RISC-V company, how do you influence the European industry and market?
We like to think of ourselves as a global company, engaging customers and partners across the world, but always operating locally and very proud of our European heritage. Europe is home to many great semiconductor and systems companies doing chip design, and has a fantastic STEM (Science, Technology, Engineering, and Mathematics) education system supplying a large number of talented graduates each year. Our university program launched this year is expanding rapidly and we look to be at 24 universities by the end of next year. Given the geopolitical situation today, we believe that it is incredibly important to have a strategy of balancing and being both local and global.

How do you see the future of RISC-V and the future of Codasip?
Definitely extremely bright! RISC-V is growing and getting serious attention for good reasons – customers are looking for open ISA alternatives with ecosystem support, and RISC-V is what they are all turning to. Everyone knows about RISC-V and Codasip is no longer a well-kept secret. The question is no longer if RISC-V is too risky to adopt, but whether it is too risky not to adopt?

Also Read:

Re-configuring RISC-V Post-Silicon

Scaling is Failing with Moore’s Law and Dennard

Optimizing AI/ML Operations at the Edge



Functional Safety for Automotive IP
by Daniel Payne on 12-15-2022 at 10:00 am


Automotive engineers are familiar with the ISO 26262 standard, as it defines a process for developing functional safety in electronic systems, ensuring that human safety is preserved when all of the electronic components operate correctly and reliably. Automotive electronics have now grown to cover dozens of applications, and George Wall of Cadence presented on this topic at the recent IP SoC event. I learned that ISO 26262 came from the parent standard IEC 61508, first released in 1998. Functional safety is defined as the “absence of unreasonable risk due to hazards caused by malfunctioning behavior of electrical/electronic systems.”

Automotive Electronics

A systematic failure would be a design bug in the hardware, causing something unintended in the system, while a random hardware fault would be a silicon defect in a chip causing a stuck bit or even an alpha particle causing a memory bit to flip. The goal for SoC designers is to make their design resilient to faults, ensuring safety.

The Automotive Safety Integrity Level (ASIL) scheme defines levels of protection against systematic and random faults; the highest level, ASIL-D, requires >99% coverage against single-point faults and >90% coverage against latent faults. Automotive electronics that control braking and airbags need ASIL-D compliance.

To reach the safety goals, any faults need to be blocked, avoided, designed out, or mitigated. Four commonly used hardware safety mechanisms found in processor-based SoCs include the following:

  • ECC protection of memories
  • Watchdog Timer
  • Software Self-Test
  • Dual-Core Lockstep

These safety mechanisms are specific to processors, and the list is not exhaustive.

It was 9 years ago, in 2013, that Cadence acquired Tensilica for their IP cores, and that investment has grown over time to supply IP for automotive in several categories.

The Tensilica processor IP has been certified to be ASIL-D compliant for systematic faults, where a single processor is ASIL-B compliant against random faults and two processors operating in lockstep are ASIL-D compliant against random faults. Even the Tensilica C/C++ compiler toolchain is certified to ASIL-D. The IP has both fault reporting and fault protection mechanisms built in.

Memory ECC

Error Correcting Code (ECC) is available for memory interfaces such as instruction local SRAM or cache, data local SRAM or cache, and the cache tag store. At the system level you can monitor the ECC error information. A 7-bit ECC syndrome is calculated on each 32-bit word; single-bit memory data errors are automatically corrected, while multi-bit errors signal an exception.
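
As an illustration of the principle, the sketch below implements a generic Hamming SECDED scheme for a 32-bit word with a 7-bit check field (6 Hamming bits plus overall parity). It is not Tensilica’s actual code construction; it only shows how the syndrome separates correctable single-bit errors from uncorrectable double-bit errors.

```cpp
#include <cstdint>

// Generic Hamming SECDED sketch: 32 data bits protected by 6 Hamming check
// bits plus one overall parity bit. Illustrative only.
enum class EccStatus { Ok, SingleCorrectable, DoubleUncorrectable };

// Spread the 32 data bits over codeword positions 1..38, skipping the
// power-of-two positions (1,2,4,8,16,32) reserved for Hamming check bits.
static uint64_t placeData(uint32_t data) {
    uint64_t code = 0;
    int d = 0;
    for (int pos = 1; pos <= 38 && d < 32; ++pos) {
        if ((pos & (pos - 1)) == 0) continue;           // skip check positions
        if ((data >> d) & 1u) code |= (1ull << pos);
        ++d;
    }
    return code;
}

// Encode: check bit c covers every codeword position whose index has bit c
// set; the 7th bit is overall parity over data and check bits (for DED).
static uint8_t eccEncode(uint32_t data) {
    uint64_t code = placeData(data);
    uint8_t chk = 0;
    for (int c = 0; c < 6; ++c) {
        int p = 0;
        for (int pos = 1; pos <= 38; ++pos)
            if ((pos >> c) & 1) p ^= static_cast<int>((code >> pos) & 1);
        chk |= static_cast<uint8_t>(p << c);
    }
    int overall = 0;
    for (int pos = 1; pos <= 38; ++pos) overall ^= static_cast<int>((code >> pos) & 1);
    for (int c = 0; c < 6; ++c) overall ^= (chk >> c) & 1;
    chk |= static_cast<uint8_t>((overall & 1) << 6);
    return chk;
}

// Check on read: fold the stored check bits into the parity sums. A non-zero
// syndrome with an overall-parity mismatch is a correctable single-bit error;
// a non-zero syndrome with matching overall parity is a double-bit error.
static EccStatus eccCheck(uint32_t data, uint8_t storedChk) {
    uint64_t code = placeData(data);
    uint8_t syndrome = 0;
    for (int c = 0; c < 6; ++c) {
        int p = (storedChk >> c) & 1;
        for (int pos = 1; pos <= 38; ++pos)
            if ((pos >> c) & 1) p ^= static_cast<int>((code >> pos) & 1);
        syndrome |= static_cast<uint8_t>(p << c);
    }
    int overall = 0;
    for (int pos = 1; pos <= 38; ++pos) overall ^= static_cast<int>((code >> pos) & 1);
    for (int c = 0; c < 7; ++c) overall ^= (storedChk >> c) & 1;
    if (syndrome == 0 && overall == 0) return EccStatus::Ok;
    if (overall) return EccStatus::SingleCorrectable;   // syndrome locates the bit
    return EccStatus::DoubleUncorrectable;
}
```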

Windowed Watchdog Timer (WWDT)

To ensure normal execution of software, a WWDT acts as a system supervisor: inside the normal operating window the software restarts the WWDT, but if the restart comes too early or the timer is allowed to expire, the WWDT issues a reset request to the SoC. The ISO 26262 standard defines Program Sequence Monitoring (PSM) as a way to ensure correct code execution, and the WWDT is the safety mechanism used.

WWDT
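
A behavioral model of a windowed watchdog is compact. The C++ sketch below is a hypothetical model, not the Tensilica programming interface: the window and timeout parameters, method names, and tick granularity are all assumptions chosen to show how both an early kick and an expired counter raise a reset request.

```cpp
#include <cstdint>

// Behavioral model of a windowed watchdog: software must restart ("kick")
// the timer only inside the open window. Kicking too early or letting the
// counter expire both raise a reset request.
class WindowedWatchdog {
public:
    WindowedWatchdog(uint32_t windowOpen, uint32_t timeout)
        : windowOpen_(windowOpen), timeout_(timeout) {}

    // Called once per clock tick by the system model.
    void tick() {
        if (++count_ >= timeout_) resetRequest_ = true;   // kicked too late
    }

    // Called by the supervised software to restart the watchdog.
    void kick() {
        if (count_ < windowOpen_) resetRequest_ = true;   // kicked too early
        count_ = 0;
    }

    bool resetRequested() const { return resetRequest_; }

private:
    uint32_t windowOpen_;   // ticks before which a kick is illegal
    uint32_t timeout_;      // ticks after which the watchdog expires
    uint32_t count_ = 0;
    bool resetRequest_ = false;
};
```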

Logic BIST, Software Self-test

Using logic Built-In Self-Test (BIST), the hardware tests a portion of the logic, detecting static faults, while adding about 5% silicon overhead and running in only milliseconds, typically producing 90% fault coverage. Logic BIST can be run at startup and then report any detected faults.

With software self-test there’s no hardware overhead, because it’s just software that tests the logic at different times, such as periodic runtime checking. ISO 26262 lists software self-test as a safety mechanism against random faults, with medium diagnostic coverage, and it’s included in the Tensilica qualitative FMEDA. The Xtensa Software Test Library (XT-STL) provides tests to confirm basic processor operation, non-intrusively. You would combine XT-STL functions with your own tests, for example during power-on testing or mission-mode tests.
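
To give a flavor of what a self-test routine looks like, here is a deliberately tiny, hypothetical ALU check in C++. Real libraries such as XT-STL use far more thorough, coverage-driven test sets; the patterns and expected values below are chosen purely for illustration.

```cpp
#include <cstdint>

// Minimal flavor of a software self-test: exercise a few ALU paths with
// known patterns and compare against precomputed results. A failing return
// value would be reported to the system's safety monitor.
bool aluSelfTest() {
    volatile uint32_t a = 0xA5A5A5A5u;
    volatile uint32_t b = 0x5A5A5A5Au;
    bool pass = true;
    pass &= ((a ^ b) == 0xFFFFFFFFu);    // XOR path
    pass &= ((a & b) == 0x00000000u);    // AND path
    pass &= ((a + b) == 0xFFFFFFFFu);    // adder carry chain
    pass &= ((a << 4) == 0x5A5A5A50u);   // shifter
    return pass;
}
```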

Hardware Redundancy

Higher fault tolerance can be achieved through redundancy, either time-based redundancy or hardware redundancy. ECC for memory is one example, and you can add triple-redundancy voting flip-flops, parity protection of Critical State Registers (CSRs), or, for processors, a Dual-Core Lockstep (DCLS).

Tensilica supports DCLS with a technology called FlexLock, where two cores run the same code in lockstep with each other, and a comparator finds any differences, supporting ASIL-D requirements.

DCLS
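
Conceptually, DCLS reduces to running the same step on two redundant cores and comparing the results every cycle. The C++ sketch below is an abstract model of that principle only, not the FlexLock interface; the class and method names are assumptions.

```cpp
#include <cstdint>
#include <functional>

// Conceptual model of dual-core lockstep: the same step function runs on two
// redundant "cores" and a comparator flags any divergence as a safety fault.
struct LockstepPair {
    using Step = std::function<uint32_t(uint32_t)>;

    explicit LockstepPair(Step core) : coreA_(core), coreB_(core) {}

    // Execute one step on both cores and compare the outputs.
    uint32_t execute(uint32_t input, bool& faultDetected) {
        uint32_t outA = coreA_(input);
        uint32_t outB = coreB_(input);
        faultDetected = (outA != outB);   // mismatch => raise a safety fault
        return outA;
    }

    Step coreA_;
    Step coreB_;
};
```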

There’s even a dual memory lockstep, adding redundancy on core logic and memories.

Dual Memory Lockstep

Security

There’s a cybersecurity standard for road vehicles, dubbed ISO 21434, adding a security lifecycle for automotive. Four commonly used threat protection mechanisms in SoCs include:

  • Hardware root of trust – secure boot, authentication of boot
  • Cryptography – protecting data
  • Hardware isolation – divide trusted and non-trusted regions in memory
  • Anomaly detection – alert suspicious activity

Tensilica has Xtensa LX processors that support hardware isolation using a secure mode for running authenticated code, then a non-secure mode for running untrusted code.

Anomaly detection can be implemented with WWDT, alerting about unexpected program execution. With the dual memory lockstep approach any divergent execution causes a safety fault.

Summary

A traditional car today has at least 40 kinds of chips, while the total number of chips in a car can reach 500, so designing for safety requires the discipline of following the ISO 26262 standard. Meeting safety goals means that processor IP used in cars must be ASIL-certified to the appropriate level. Cadence has a good track record in their Tensilica IP of using safety and security measures to meet automotive requirements.

Review the 33 slide presentation from IP SoC 2022 here.

Related Blogs




Cracking post-route Compliance Checking for High-Speed Serial Links with HyperLynx
by Peter Bennet on 12-15-2022 at 6:00 am


SemiWiki readers from a digital IC background might find it surprising that post-PCB route analysis for high speed serial links isn’t a routine and fully automated part of the board design process. For us, the difference between pre- and post-route verification is running a slightly more accurate extraction and adding SI modelling, while GHz signals aren’t microwaves – they’re just faster than MHz ones.

PCB design is not so forgiving. Traces at the board level are much longer and we need S-parameters and transmission line modeling for high speed signals. It’s a far more demanding design flow and EDA challenge, requiring greater user expertise, time and effort. Several of the intricate process steps are not fully automated, run time can be far too slow and the whole process not smoothly automated and reliably repeatable. In practice then it’s a flow step that’s not always fully verified, leaving projects at risk of tricky PCB debug and respin delays and costs.

Can’t we do better than this? Aren’t there too many designs with too many serial links these days? And too few signal integrity experts to do the work? Isn’t it time for EDA to catch up with such pockets of the design flow still resisting automation 58 years after the first DAC?

Enter HyperLynx

Todd Westerhoff’s white paper explains what Siemens EDA is doing to remove this critical flow bottleneck with their HyperLynx PCB signal integrity tool, taking a SerDes protocol compliance check as an example.

The goals of this HyperLynx flow are simple:

  • automate as much of the flow as possible so that design teams can target overnight post-route verification of all serial links on a design
  • deliver a flow that can be quickly and easily repeated
  • avoid reliance on slow, manual PCB layout inspection (often used today to cover the risk of skipping post-route analysis)
  • allow design teams to do all analysis work in house
  • ease the workload on scarce signal integrity experts
  • directly target protocol compliance (does the interface perform correctly) rather than proxy metrics

Let’s look first at how this post-route analysis of high speed serial links might be done today in a protocol compliance checking flow. An IBIS-AMI simulation flow would be slightly different, but with similar complexity.

We won’t try to explain all the details here – the paper does this very well. Just note how many steps there are, many requiring user effort, expertise and output checking. And the three main parts: preparing the design for analysis, running the analysis and figuring out what the results actually mean.

Let’s look at these in turn.

Channel Modeling

Getting to the analysis step where we’ll run full wave simulations takes a lot of care and effort. Full EM solving takes serious run time, so we only want to run it on the high speed links if we can. But we also want to run on all these nets as the layout of each is unique – we cannot reliably second guess which is likely the worst of a set and skip the rest to save time.

Perhaps the trickiest step in getting to the channel models needed for the simulations is isolating and modeling the physical path for a channel with sufficient precision that accuracy is not lost, a process known as cut and stitch. Each net can be cut into longer sections where transverse electromagnetic mode (TEM) propagation holds and regions around discontinuities like vias where more time costly non-TEM propagation must be modelled. It’s a typical run time vs accuracy tradeoff we make all the time in EDA, but here we have to decide exactly where to break the sections. Precision really matters here and this isn’t easy. Nor is stitching these back together for simulation. It takes experts and multiple iterations to do this reliably well. HyperLynx automates both the cut decisions (using its DRC engine) and the stitching where transmission line length adjustment is needed. That’s a key breakthrough which opens the door to creating interconnect models for hundreds of serial channels, automatically, overnight.

Analysis

There are two methods for post-layout analysis of the serial links: IBIS-AMI simulation and standards-based compliance analysis. Ideally, we’d use the first, but this often runs into practical issues with availability, completeness and accuracy of IBIS-AMI models and excessive run times. Not the ideal technique if you need to run it repeatedly.

Protocol standards-based compliance analysis is quicker to run. Driver (Tx) and receiver (Rx) models for the serial protocol can be used instead of vendor IBIS models, achieving run times below a minute per channel against 30 minutes or more for AMI simulations. But if you had to do all the work yourself to configure all the compliance models for the myriad of serial protocols, this would be of little practical help. This is where HyperLynx automation steps in with its Compliance Wizard allowing simple specification of protocols for each channel from a library of 210 protocols and configuring the checking parameters needed for each.

Results Processing

Conventional simulation analysis only gets us to signal waveforms. The critical question, “does this still work?”, is not directly answered. But now that we’re doing protocol-based analysis we need not stop there and rely on interpreting eye diagrams. We know the exact limits, and any design margins we wish to apply, for all the key parameters. So HyperLynx can directly report which high-speed signals passed and which failed, with complete, detailed reports.

HyperLynx Flow

We can see the greater simplicity and automation of the HyperLynx flow below.

It’s important to note here that Siemens is not arguing that IBIS-AMI simulations don’t have a role to play in post-route verification. Their point here is that a lot of what would have been done that way can now be done quicker and more easily with protocol-based analysis. The protocol compliance approach uses standard models which just meet the protocol spec – so if a design passes compliance testing, it may well show some margin in IBIS-AMI simulation when the actual board Tx and Rx models are used.

Summary

HyperLynx looks to be closing an important gap in pre-fab PCB verification here, helping designers avoid needless prototype PCB respins by enabling faster, more reliable verification of all serial links post-layout, putting overnight verification within reach. And also doing what good EDA tools should – automating the workflow – managing complexity and design partitioning for modeling and simulations and giving clear pass/fail results. And making good engineers better and more productive. Including those scarce SI experts.

This very clear and highly readable white paper covers this all in a lot more detail than we have space for here:

Automated compliance analysis of serial links reduces schedule risk

https://resources.sw.siemens.com/en-US/white-paper-automated-compliance-analysis-of-serial-links-reduces-schedule-risk

Also Read:

Calibre: Early Design LVS and ERC Checking gets Interesting

Architectural Planning of 3D IC

Pushing Acceleration to the Edge



Podcast EP131: Intrinsic ID – Implementing Security Across the Electronics Ecosystem
by Daniel Nenni on 12-14-2022 at 10:00 am

Dan is joined by Pim Tuyls, CEO of Intrinsic ID. Pim founded the company in 2008 as a spinout from Philips Research. With more than 20 years of experience in semiconductors and security, Pim is widely recognized for his work in the field of SRAM PUF and security for embedded applications. He speaks at technical conferences and has written significantly in the field of security. He co-wrote the book Security with Noisy Data, which examines new technologies in the field of security based on noisy data and describes applications in the fields of biometrics, secure key storage and anti-counterfeiting. Pim holds a Ph.D. in mathematical physics from Leuven University and has more than 50 patents.

Pim discusses the underlying technology, business strategy and ecosystem partnerships that all help Intrinsic ID to deliver security, flexibility and trust across a growing electronics ecosystem.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.



Synopsys Crosses $5 Billion Milestone!
by Daniel Nenni on 12-14-2022 at 6:00 am


“We intend to grow revenue 14% to 15%, continue to drive notable ops margin expansion and aim for approximately 16% non-GAAP earnings per share growth.”

Synopsys, Inc. (NASDAQ:SNPS) Q4 2022 Earnings Call Transcript

Synopsys is the EDA bellwether since they report early and are the #1 EDA and #1 IP company.  In addition to crossing the $5B mark, Aart de Geus shocked everyone with a 14-15% growth estimate for 2023. REALLY?!?!?! Yes, really, and don’t ever bet against Aart. SNPS is generally conservative with fiscal year growth numbers so if you are betting the over/under bet the over.

“Looking at the landscape around us, some of you have asked us why customers design activity remains solid throughout waves of the business cycle. Two reasons. First, the macro quest for Smart Everything devices and with its AI and big data infrastructure is unrelenting and expect it to drive a decade of strong semiconductor growth. Second, semiconductor and systems companies, be it traditional or new entrants, prioritize design engineering throughout economic cycle precisely to be ready to feel competitive new products when the market turns upward again. We’ve seen this dynamic consistently in past up and down markets and expect it to continue.”

As history has shown, semiconductor companies design their way out of challenging times with a “design or die” mantra. When semiconductor companies cut EDA budgets then you should be concerned. And yes, the fabless systems companies (Apple, Google, Amazon, Microsoft, etc…) are now leading the EDA budget charge. In the past few years fabless systems companies have taken over as the leading readers of SemiWiki and I expect that to continue for the foreseeable future.

“Synopsys is uniquely positioned to address these challenges as we provide the most advanced and complete design and verification solutions available today, the leading portfolio of highly valuable semiconductor IP blocks and the broader set of software security testing solutions. In the past few years, we have introduced some truly groundbreaking innovations that radically advance how design is done.”

If I had to rank these Synopsys market segments in terms of importance, I would put the IP business as #1. The other EDA companies just do not get this: IP is everything. This also puts Synopsys in the unique position to capitalize on the chiplet revolution that is coming, because chiplets are IP. And the key to chiplets, like IP, is the foundries’ (TSMC’s) stamp of silicon-proven approval. Synopsys already has the advantage of closer relationships with the foundries since Synopsys IP is always on the first test chips. This has HUGE value today and tomorrow!

“Today, we’re already tracking more than 100 multi-die designs for a range of applications, including high-performance compute, data centers, and automotive, seeing strong adoption of our broad solution. A notable example is achieving plan of record for multiple 3D stack designs at a very large, high-performance computing company as well as expanded deployment at a leading mobile customer.”

You will be hard pressed to find a tape-out that does NOT involve a Synopsys product so these numbers are legit. In fact, I would say multiple products from Synopsys is the tape-out norm.

It really has been an amazing career experience watching the EDA business grow from my first DAC in 1984 to now. Synopsys and Cadence did not even exist back then. It was Daisy, Mentor, and Valid Logic or what we called DMV, and now Synopsys hits $5 billion, simply amazing!

“In summary, Synopsys exceeded beginning of year targets and delivered a record fiscal 2022 across all metrics with the additional spark of passing the $5 billion milestone. We enter FY 2023 with excellent momentum and a resilient business model that provides stability and wherewithal to navigate market cycles. Notwithstanding, some economic uncertainty, our customers are continuing to prioritize their chip system and software development investments to be ready with differentiated products at the next upturn. On our side, many game changing innovations across our portfolio position as well to capitalize a decade of semiconductor importance and impact.”

I am much less concerned with the economic uncertainty of 2023. I have never seen a stronger demand and respect for semiconductors and it will only get stronger as AI touches the majority of the chips being designed today. If you have any doubts look at TSMC’s 2022 numbers, 40%+ growth?!?!?!

In the old days, as Joe Costello said, “We’re stuck in a fixed-pie model. Have you seen three big dogs hovering over one bowl of dog food? It’s not a pretty picture.” Today there are four dogs hovering over the EDA bowl (Synopsys, Cadence, Siemens EDA and Ansys) but now it is a VERY large bowl and I give Synopsys their due credit for innovating outside of the EDA box, absolutely.

Also Read:

Configurable Processors. The Why and How

New ECO Product – Synopsys PrimeClosure

UCIe Specification Streamlines Multi-Die System Design with Chiplets