
Avery Levels Up, Starting with CXL

by Bernard Murphy on 05-25-2021 at 6:00 am


Let me acknowledge up front that Avery isn’t the most visible EDA company around. If you know of them, you probably know their X-propagation simulator, widely respected and used, satisfying a specialized need. They have also quietly built up over the years a stable of VIPs and happy customers, with a special focus on VIPs for PCIe and standards building on PCIe such as NVMe, CXL and CCIX. All hot standards in datacenters. Avery claims, and I have no reason to doubt them, that they are the #1 provider of VIPs in this area.

OK, good for them, they’re now active in a bigger market with more product range. But what caught my attention is what they are offering around CXL. First, I need to explain why this is important.

The off-chip cache coherence war

For those of you who don’t know, CXL and CCIX are off-chip/off-die cache-coherent interfaces. In some applications, particularly in machine learning, designs have become so big that they must split across multiple die/chips. Accelerators, memory and administration spread across multiple die. Yet applications still require the system as a whole to have a common logical view of memory. Which, since those memory accesses are mediated by caches, means they must be cache coherent. This problem has been solved on-chip through, for example, the Arm CCI network and the Arteris IP Ncore NoC. But those only work on-chip. CXL and CCIX extend from these networks to interconnect between chips/die. Intel is behind CXL, while AMD, Arm and several others are behind CCIX.

A new standards war, but what is important here is that, as I mentioned earlier, these standards are exceptionally important in datacenters, particularly to the hyperscalers. And they’re still very new standards.

Avery CXL – more than a VIP

All of which means that compliance testing becomes very important. Against emerging/evolving standards. This takes a bit more than just VIPs, especially for cache coherence checking which must run through extensive testing. So Avery stepped up. They have built a virtual host co-simulation platform, around a CXL-aware QEMU emulator running Linux and connecting to a simulation (or emulation or prototyping platform) running the DUT. Avery’s CXL VIP sits inside the DUT testbench and connects to the QEMU host. Particularly notable here is that the VIP (and the QEMU host and Linux kernel with latest Intel patches for CXL) is ready to run type-3 designs, ahead of availability of processor silicon supporting that release.

This is arranged so the QEMU host looks like an Intel motherboard CXL host system, meaning a design team can validate against this setup with high confidence that what they build will work against real boards once those become available. In particular, they can run compliance tools and test suites, such as CXLCV. And they can run performance benchmarking applications such as FIO and PCMark8.

Avery is contributing to the Intel QEMU/SystemC branch with a number of extensions in support of this capability. You might expect to see such a solution in compliance labs, especially since Avery is an early CXL Consortium member. And you probably wouldn’t be wrong.

And it’s more than CXL

Unsurprisingly, Avery also supports this path for PCIe host communication. They’ve recently been working with the University of New Hampshire Interoperability Lab and an industry-leading NVMe SSD vendor on NVMe SSD validation using the UNH-IOL INTERACT™ test software, plus other performance benchmarking applications such as FIO, PCMark8, and CrystalDiskMark. Each of these comes with its own compliance tools you can run on the host side to model real traffic and testing against your DUT. The same QEMU co-sim idea also works for the embedded processor side and supports Arm targets and AMBA bus communication.

Avery is now enabling more comprehensive validation of standards important to the hyperscalers and the companies who serve those giants. Leveling up indeed. You can learn more HERE.

Also Read:

Data Processing Unit (DPU) uses Verification IP (VIP) for PCI Express

PCIe 6.0, LPDDR5, HBM2E and HBM3 Speed Adapters to FPGA Prototyping Solutions

Controlling the Automotive Network – CAN and TSN Update


Safety Architecture Verification, ISO 26262

by Daniel Payne on 05-24-2021 at 10:00 am


I love to read articles about autonomous vehicles and the eventual goal of reaching level 5, Full Automation, mostly because of the daunting engineering challenges in achieving this feat and all of the technology used in the process. The auto industry already has a defined safety requirements standard called ISO 26262, and one of the questions that must be answered as part of the safety architecture is, “Do random failures violate any safety requirement?”

IC designers and test engineers have been aware of random failures in their semiconductor chips since the beginning, so over the decades they have developed Design For Test (DFT) techniques like full scan design, adding tools like Automatic Test Pattern Generation (ATPG) to stimulate a fault and then propagate it to an observable output pin. For automotive safety verification the approach is to:

  • Inject a random fault
  • Propagate the fault
  • Check if safety mechanisms catch the fault
  • Classify the fault
  • Generate safety metrics
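The loop above can be sketched in a few lines. The following toy example is my own illustration, not Siemens tooling: it injects every stuck-at fault at the three sites of a two-input AND gate and checks whether an exhaustive stimulus, compared against a fault-free reference, detects each one.

```python
# Toy stuck-at fault injection campaign on a 2-input AND gate.
# The "safety mechanism" here is a simple golden-model comparator.
from itertools import product

def and_gate(a, b, fault=None):
    """Evaluate an AND gate; `fault` forces a node to a stuck-at value."""
    if fault is not None and fault[0] == "a":
        a = fault[1]
    if fault is not None and fault[0] == "b":
        b = fault[1]
    out = a & b
    if fault is not None and fault[0] == "out":
        out = fault[1]
    return out

def run_campaign():
    """Inject each stuck-at fault, propagate it, and classify it."""
    results = {}
    faults = [(node, value) for node in ("a", "b", "out") for value in (0, 1)]
    for fault in faults:
        detected = False
        for a, b in product((0, 1), repeat=2):  # exhaustive stimulus
            golden = and_gate(a, b)             # fault-free reference
            faulty = and_gate(a, b, fault)
            if golden != faulty:                # comparator flags a mismatch
                detected = True
        results[fault] = "Detected" if detected else "Undetected"
    return results

print(run_campaign())
```

With exhaustive stimulus every fault on this tiny gate is detectable; the point of the fault-state-space discussion below is that real SoCs make this brute-force approach intractable.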

Siemens EDA has been in the DFT business for decades now, and they have extended that experience into the ISO 26262 realm, so I read a white paper from Jacob Wiltgen for a deep dive into the concept of fault campaigns.

Huge Number of Fault States

A simple two-input logic gate can have three fault injection sites, but consider that a modern SoC likely has a million-plus gates, and that fault models include both stuck-at and transient faults, so that’s a large fault state space.

The Automotive Safety Integrity Level (ASIL) scheme defines levels A through D, where D is the highest, and fault injection is highly recommended to meet ISO 26262 requirements.

Functional Simulators

Every design engineer has easy access to a functional simulator, and for small blocks, sure, you could manually choose to inject faults, then verify the effects of that fault, but it would be too time consuming, so not viable.

Fault Injection Platform

Beyond functional simulators, there are now four helpful methods to inject faults and analyze the results:

  1. Formal Analysis
  2. Fault Simulator
  3. Fault HW Emulator
  4. Fault Prototype Board

With Formal Analysis you get the benefits of an exhaustive approach on smaller blocks where a fault classification proof is needed, while not requiring a test bench, but you need some formal experience.

For higher capacity than Formal Analysis, consider using a Fault Simulator; it does require a test bench, and its efficiency depends on how good your stimulus is.

The highest capacity is achieved with a Fault Emulation method, and you can run software test libraries too.

OK, we have these four distinct ways of doing fault injection, but how do I create the shortest fault campaign? It all starts with a written plan, detailing which approach will be applied to each block or module in a system, the total number of faults to be tested, etc. Consider using the following table to help you sort out the different Design Profiles of your system: Digital IC, Digital IP, Mixed Signal, Analog IP.

A second way to decide which tool to use for fault injection is by the Safety Feature: Digital HW, Software, LBIST/MBIST, Analog HW.

Fault Injection Methodology

Start with your fault list generation, then run the fault injection tool, and finally the work product is generated as a Failure Modes, Effects and Diagnostic Analysis (FMEDA) report.  A FMEDA will describe the failure modes, and safety metrics calculated using fault classifications spotted during the fault injection.

Fault List Generation

You determine if each fault in the fault list is safety critical or not. The flow for creating the fault list with the chosen safety architecture is:

Fault Injection

When a fault is injected and propagated, can we see the effects and does that infringe a safety goal or a safety requirement? Can the fault be detected by some safety mechanism?

A fault that infringes a safety goal or requirement is called Observed, while a fault that can be detected by some safety mechanism is called Detected, so we get four fault classifications:

Each of the four injection methods is used, then the results get merged into a single list of fault classifications.
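As a sketch, one plausible mapping from those two questions (Observed? Detected?) to four classes looks like the following; this is my own illustration, and the white paper's exact class names may differ.

```python
# Illustrative mapping of the two fault-injection outcomes to four classes.
def classify(observed, detected):
    """observed: fault effect reached a safety-critical output;
       detected: a safety mechanism flagged the fault."""
    if observed and detected:
        return "Dangerous Detected"    # reached an output, but flagged in time
    if observed and not detected:
        return "Dangerous Undetected"  # residual fault: violates a safety goal
    if not observed and detected:
        return "Safe Detected"
    return "Safe Undetected"

classes = {(o, d): classify(o, d) for o in (True, False) for d in (True, False)}
```

The Dangerous Undetected bucket is the one that directly hurts the safety metrics described next.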

Work Product Generation

In the ISO 26262 standard there are three safety metrics that need to be calculated for your safety architecture:

  • Single Point Fault Metric (SPFM)
  • Latent Fault Metric (LFM)
  • Probabilistic Metric for Hardware Failure (PMHF)

Equations for each:

The five FMEDA safety metrics are:

  • Failure In Time (FIT)
  • Single Point Fault Metric (SPFM)
  • Latent Fault Metric (LFM)
  • Probabilistic Metric for Hardware Failure (PMHF)
  • Diagnostic Coverage (DC)
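As a hedged sketch of how the first two of these metrics are computed from failure rates (following the ISO 26262 Part 5 definitions; the FIT numbers below are hypothetical, and the more involved PMHF calculation is omitted):

```python
# SPFM and LFM from FIT rates (failures per 10^9 device-hours).
def spfm(total_fit, spf_fit, rf_fit):
    """Single Point Fault Metric: fraction of the safety-related failure
    rate NOT due to single-point or residual faults."""
    return 1.0 - (spf_fit + rf_fit) / total_fit

def lfm(total_fit, spf_fit, rf_fit, latent_fit):
    """Latent Fault Metric: fraction of the remaining failure rate
    NOT due to latent multiple-point faults."""
    return 1.0 - latent_fit / (total_fit - spf_fit - rf_fit)

# Hypothetical FMEDA roll-up for one block:
total = 100.0   # total safety-related FIT
spf   = 1.0     # single-point fault FIT
rf    = 2.0     # residual fault FIT
lat   = 5.0     # latent multiple-point fault FIT

print(f"SPFM = {spfm(total, spf, rf):.1%}")       # ASIL D requires >= 99%
print(f"LFM  = {lfm(total, spf, rf, lat):.1%}")   # ASIL D requires >= 90%
```

With these illustrative numbers the block would meet the ASIL D latent-fault target but fall short on SPFM, so additional safety mechanisms would be needed.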

Summary

Preventing random failures from becoming safety violations, as the ISO 26262 safety architecture requirements demand, is a tough problem to solve, yet it can be done with the multi-pronged approach developed by Siemens EDA over the years. Yes, there were lots of acronyms introduced in this blog, and the complete 14-page White Paper has even more details to bring you up to speed on designing and verifying automotive ICs that conform to safety standards.

Related Blogs


WEBINAR: What Makes SoC Compiler The Shortest Path from SoC Design Specification to Logic Synthesis?

by Daniel Nenni on 05-24-2021 at 6:00 am


Defacto’s SoC Compiler, whose 9.0 release was announced recently, automates SoC design creation from the first project specifications. It covers register handling, IP and connectivity insertion at RTL, and UPF and SDC file generation, right through to logic synthesis. As part of the generation process for RTL and design collaterals, both basic and advanced editing and refactoring are automated, which is a major step forward for RTL design engineers and SoC architects. Indeed, the design structural changes automated by SoC Compiler have multi-domain awareness: physical, power, clocking and DFT.

To support domain awareness during the front-end SoC design process, a user has access to exploration, coherency checks, linting and view generation capabilities.

As a typical example, power awareness includes UPF linting, UPF & RTL design exploration and coherency checks, UPF file generation and UPF promotion or demotion capabilities for a top-level generation or a hierarchical UPF file extraction, respectively.
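To make the power-intent discussion concrete, here is a small illustrative IEEE 1801 (UPF) fragment of the kind such a flow generates and lint-checks; the domain, net and instance names are hypothetical.

```tcl
# Illustrative UPF fragment: a switchable CPU power domain
create_power_domain PD_TOP
create_power_domain PD_CPU -elements {u_cpu}

create_supply_port VDD
create_supply_net  VDD     -domain PD_TOP
create_supply_net  VDD_CPU -domain PD_CPU

# Clamp PD_CPU outputs when its supply is switched off
set_isolation iso_cpu -domain PD_CPU \
    -isolation_power_net VDD -clamp_value 0 \
    -applies_to outputs
```

Coherency checking in this context means verifying that fragments like this stay consistent with the RTL hierarchy and liberty views as the design evolves.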

SoC Compiler can be used at different steps from user specification to logic synthesis.

Step 1: Extraction, generation & update of power intent files

To manage power intent requirements, a user can start by generating UPF files either from scratch or by extracting the necessary files from previous projects’ databases. UPF updates are also automated by SoC Compiler whenever a change happens in an RTL or gate-level description.

Step 2: Exploration, linting & coherency checks

Any generated UPF is automatically checked through design exploration capabilities and coherency checks between RTL, liberty and UPF files.

Step 3: Integration/Promotion

During SoC design assembly, UPF files are automatically promoted in conjunction with RTL files and all related files are generated, ready for synthesis.

The above automated steps for power intent management are also applied by SoC Compiler to the other domains. The provided APIs (Python, Tcl or C++) make the solution particularly easy to use, open, and ready to be plugged into internal design flows.

SoC Compiler is adopted by major SoC chip companies and recommended by top IP core providers for IP integration.

Defacto experts are hosting a LIVE webinar on June 3rd 10-11am PDT (REGISTER HERE) in which typical cases such as System Integration, RTL Integration and Power Integration will be presented.

About Defacto
Defacto Technologies is an innovative chip design software company providing breakthrough RTL platforms to enhance integration, verification and signoff of IP cores and systems-on-chip. New market segments such as automotive, mobile, virtual reality and artificial intelligence require leading-edge SoCs with greater functionality, higher performance and much lower power consumption.

Meeting time-to-market requirements and lowering overall cost, including design steps, is becoming a critical factor of success. By adopting Defacto’s SoC Compiler design solutions, major semiconductor companies are continuously moving from traditional and painful SoC design tasks to Defacto’s joint “Build & Signoff” design methodology. The related ROI has been proven across hundreds of projects.

Also Read

Small EDA Company with Something New: SoC Compiler

CEO Interview: Dr. Chouki Aktouf of Defacto

Power in Test at RTL Defacto Shows the Way


Supply Issues Limit 2021 Semiconductor Growth

by Bill Jewell on 05-23-2021 at 10:00 am

Top Semiconductor Revenues 2021

Worldwide semiconductor shipments were $123.1 billion in 1Q 2021, up 3.6% from 4Q 2020 and up 17.8% from a year ago, according to WSTS. The 3.6% quarter-to-quarter growth was the highest for a first quarter since 1Q 2010, eleven years ago. The strong growth in 1Q21 implies strong growth in the following quarters and for the year 2021. However, supply constraints may limit semiconductor growth in 2021.

The table below shows the top 14 semiconductor companies’ revenues in 1Q21, change versus 4Q20, and guidance (where available) for revenue growth in 2Q21 versus 1Q21. Of the 12 companies which have reported for 1Q21, three had revenue declines from 4Q20 – Intel, Qualcomm, and STMicroelectronics. These three companies all expect declines in 2Q21 revenues of about 4% from 1Q21. Intel and Qualcomm stated they were supply constrained. STMicroelectronics attributed the decline to seasonal trends.

The rest of the companies all had revenue growth, ranging from 2.4% for NXP Semiconductors to 12.1% for MediaTek. These companies all expect 2Q21 revenues to increase from 1Q21, ranging from 0.1% for NXP to about 14% for Micron Technology and MediaTek. NXP cited supply constraints for its cautious outlook. Thus, of the nine companies which provided guidance for 2Q21, four stated they are supply constrained.

How long will the semiconductor industry be supply constrained? A recent article on zdnet.com asserted it could take two years to work out all the semiconductor shortages. CNBC quoted an analyst who said the shortage may not be resolved until 2023. The CNBC article also cited a Gartner report that the shortage will last another six months. As we reported in our last newsletter, the automotive industry has been hit especially hard by the shortage. In a recent interview on CBS’ 60 Minutes, TSMC chairman Mark Liu said his company can meet customer requirements for automotive semiconductors by the end of June, but supply chain issues could delay automotive production for several more months.

The global economy and key end equipment markets will drive increased semiconductor demand through at least 2021 and 2022. According to the International Monetary Fund (IMF), global GDP will bounce back from a 3.3% decline in 2020 due to the COVID-19 pandemic to a strong 6.0% growth in 2021. GDP is expected to grow 4.4% in 2022, above the long-term trend. IDC projects smartphone units will rebound from a 6.7% decline in 2020 to 5.5% growth in 2021, moderating to 3.7% in 2022. The PC market grew 13% in 2020 as home-based work and education drove demand. IDC expects 2021 to be even stronger, with 18% PC unit growth. A correction in the PC market is forecast in 2022 with a 5% decline. Wards Intelligence / Morningstar project shipments of light vehicles will grow a robust 11% in 2021 after a 15% decline in 2020. Light vehicle growth will moderate to 7% in 2022, above the long-term trend. However, automotive semiconductor shortages could limit 2021 growth.

Forecasting the 2021 semiconductor market is particularly difficult as the world recovers from the pandemic. Rebounding demand for electronics is offset by semiconductor shortages. Shortages will drive up some semiconductor prices, but others are set by long term contracts. Building a new semiconductor fab takes about two years, but in many cases, production can be increased at existing fabs in a relatively short time period.

Recent forecasts for the 2021 semiconductor market are in two camps. The December 2020 WSTS forecast was updated with final 4Q20 data, resulting in 10.9% growth in 2021. IDC’s May projection was 12.5% in 2021. IDC states robust growth in key markets for semiconductors will be offset by supply constraints. IC Insights believes the strong 1Q21 and moderate quarterly growth for the next three quarters will drive 19% semiconductor growth for the year.
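The gap between these forecasts largely comes down to the quarterly growth assumed for the rest of the year. A back-of-the-envelope sketch, using the $123.1B 1Q21 figure above and WSTS's roughly $440.4B 2020 total (the quarter-over-quarter rates are my own assumptions, not any forecaster's):

```python
# Implied full-year 2021 growth from the Q1 level plus an assumed
# constant quarter-over-quarter growth rate for Q2-Q4.
def implied_annual_growth(q1, qoq, prior_year_total):
    quarters = [q1 * (1 + qoq) ** n for n in range(4)]
    return sum(quarters) / prior_year_total - 1

Q1_2021 = 123.1      # $B, WSTS 1Q21 shipments
TOTAL_2020 = 440.4   # $B, approximate WSTS full-year 2020

for qoq in (0.02, 0.04, 0.06):
    g = implied_annual_growth(Q1_2021, qoq, TOTAL_2020)
    print(f"{qoq:.0%} per quarter -> {g:+.1%} for 2021")
```

Roughly 4% sequential growth per quarter already implies close to 19% for the year, which is the arithmetic behind the more bullish camp.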

Our latest forecast from Semiconductor Intelligence is similar to IC Insights, with a projection of 20% growth in 2021. We believe strong demand will drive high growth, even though shortages may limit the upside. Without supply constraints, potential growth could be in the 25% range. We expect semiconductor growth to moderate to 12% in 2022, still above the long-term trend growth of 6% to 7%.

Semiconductor Intelligence is a consulting firm providing market analysis, market insights and company analysis for anyone involved in the semiconductor industry – manufacturers, designers, foundries, suppliers, users or investors. Please contact me if you would like further information.

Also Read:

Automakers to Blame for Semiconductor Shortage

Electronics Back Strongly in 2021

Semiconductors up 6.5% in 2020, >10% in 2021?


AMAT Nice Beat Strong Growth for Both 2021 & 2022

by Robert Maire on 05-23-2021 at 6:00 am


-Strong beat & guide- WFE up in 2021 & 2022-$160B combined
-Taking share in conductor etch & CVD
-Traditional Moore Scaling – No More?
-Foundry Logic leads followed by DRAM with weak NAND

Nice beat & guide & raise
Applied reported revenues of $5.58B with a gross margin of 47.5%, resulting in non-GAAP EPS of $1.63. Street expectation was for $5.41B and EPS of $1.51.
Guidance for the current quarter was $5.92B ± $200M and an EPS range of $1.70-$1.82, versus street expectations of $5.53B and EPS of $1.56.

Financial results continue to improve nicely year over year, with system sales up 50% year on year and great strength in service and in packaging, which came in at $800M.

Second half 2021 to be up and 2022 also up
Applied went way beyond its normal conservative guidance to say that the second half of 2021 will be up over the first half and 2022 will be up over that.

WFE estimates increased for two years
Applied Materials upped the ante in WFE projections from the low $70s billion to the high $70s billion for 2021, with both years combined to be at least $160B, which implies continued growth into 2022 and which we take as a very bullish statement.

We think Applied management obviously has enough confidence in orders going forward to predict almost two years of growth. That kind of confidence in this industry is highly unusual so we think they are getting very strong signals over the long term from customers including some very large capex spending projections from the largest players.

China business improves
Applied’s China revenue was up from last quarter’s $1.138B to the reported quarter’s $1.844B, going from 29% of revenues to 33% and becoming the largest geographic segment of their business.

Obviously Applied is not getting hurt by any embargo on SMIC or others in China as China continues to ramp up equipment purchases more than any other place on the planet.

Share gains in conductor etch & CVD
Applied pointed out share gains in both Conductor Etch & CVD and further pointed to overall share gains in the semiconductor equipment market as compared to their peer group. We would assume that a fair amount of the gains came at the expense of Tokyo Electron.

Packaging, at $800M in business, looks like a segment of future strong growth, as packaging is one of the key “more than Moore” areas that will see increased spend on heterogeneous chiplet packaging.

Service business continues to grow very strongly and is emerging as a strong anti cyclical source of revenue.

Moore’s scaling, no more?
Applied suggested on the call that traditional geometric Moore’s Law scaling is on the decline (which we would agree with). Their view is that their offerings are favored by non-traditional scaling alternatives, which we would also tend to agree with.

How far, how fast and how much will be spent on non-traditional scaling remains to be seen, but we think EUV spend, which is traditional geometric scaling, will remain huge and get even bigger over time.

The stock
Applied has pulled back since its peak in the $140’s around the time of its analyst meeting. It closed yesterday at around $130 and the excellent report and very strong long term outlook could help it regain much of the value that came out of Applied and the rest of the semi equipment stocks. We would continue to be owners and might even get a bit more aggressive on some of the smaller cap or sub supplier names in the space.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor), specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

You know you have a problem when 60 Minutes covers it!

KLAC- Great QTR & Guide- Foundry/logic focus driver- Confirms $75B capex in 2021

Lam Research performing like a Lion – Chip equip on steroids


Podcast EP21: Leading Edge Analog Design

by Daniel Nenni on 05-21-2021 at 10:00 am

Dan is joined by Mark Williams, founder and CEO of Pulsic. The application of shape-based routing to automate analog design is explored. Pulsic’s revolutionary new automated analog layout system, Animate, is also discussed. With this system, multiple high-quality, fully routed layouts can be created in minutes from an OpenAccess schematic. The unique business model being deployed by Pulsic is also outlined. Mark concludes with a discussion of the future of analog design.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Toshio Nakama of S2C EDA

by Daniel Nenni on 05-21-2021 at 6:00 am


Toshio Nakama is the founder and CEO of S2C and also a strong advocate of FPGA-accelerated ASIC/SoC design methodology. Mr. Nakama devotes much of his time to promoting scalable prototyping/emulation hardware architecture and defining automated software specifications. He first started his career at Altera in 1997 and served in technical and sales management roles at Aptix Corporation from 1998 to 2003. He co-founded S2C in Silicon Valley in 2003 and established R&D and manufacturing teams in Shanghai, China in 2004. S2C was acquired by SMiT Group in 2018 and continues to be a leading global provider of FPGA prototyping solutions. Mr. Nakama holds a bachelor’s degree in Electrical Engineering from Cornell University and an EMBA degree from CEIBS.

What brought you to the semiconductor industry?

I was first introduced to FPGAs in my Digital Circuit Design course at Cornell and I was immediately drawn to the programmability and the vast application aspects of FPGAs. This led me to join Altera and later Aptix, who was known for FPIC (Field Programmable Interconnect Chips). FPICs were often used in conjunction with FPGAs as solutions for reconfigurable computing and IC design emulation. The immense value such as convenience, productivity and flexibility demonstrated by these programmable devices eventually became my mission – to push the limits of what FPGA prototyping can do and to make the IC verification process easier, faster, and more efficient for IC designers.

Can you tell us about the origin of S2C?

Two other ex-Aptix principals and I, together with another finance professional, pooled our money and founded S2C in 2003. At the time, most of the focus on IC design verification was with software simulation and hardware emulation. FPGA-based prototyping, on the other hand, was yet to be a mainstream verification methodology, and it was only accessible by large design houses with the budget and resources to materialize a prototyping architecture. We recognized the value of prototyping and the power to accelerate time-to-success for SoC design companies.

Also at the time, we saw a new wave of Asian firms, as well as Asian design centers for US/European companies taking shape. We expected that these new companies were to be more open to new ideas and new EDA tools from a new innovator. In 2004, we set up a R&D center in Shanghai to gain better access to talent pools for upcoming product developments and to better connect and service the Asian customers. The latter is particularly important as the methodology behind FPGA-based prototyping was fairly new and there would be a fair amount of customer handholding required.

From the start, S2C was not just another “FPGA board” vendor but we aimed to bring convenience, productivity, and flexibility to shorten the verification cycle. One key challenge at hand when we started was the development of tools and IPs specifically for FPGA-based prototyping. In particular, FPGA tools were different, even foreign to most ASIC designers. S2C’s first task was to develop a complete methodology – a set of tools and IP that would not only make FPGA-based prototyping more productive but would also smooth out the transition of a design from the prototyping stage back into an EDA flow bound for an SoC.

In February 2005, S2C filed its patent for a “Scalable reconfigurable prototyping system and method.” It described a system for automating validation tasks for SoCs, with a user workstation, data communication interface, and an emulation platform with multiple FPGAs plus interfaces to a real-world target system. In May 2005, S2C announced its first product the IP Porter system at DAC. Beta customers working with the product estimated their design time was cut by 3 to 6 months.

What markets and what are some of the customer challenges does S2C address today?

As mentioned, our key mission has always been to help customers shorten their time-to-success through hardware-accelerated verification solutions. We now see that many of today’s high-complexity ASIC designs come from markets such as AI, datacenter, multimedia, networking and automotive. These are often hyperscale multi-core SoC designs with time-consuming software development and testing requirements. FPGA-based prototyping is the optimal solution to provide a high-performance platform, not only for hardware validation but also as an early prototype that enables software teams to conduct hardware/software co-development and co-testing.

To target hyperscale designs, we launched Prodigy Logic Matrix in late 2020. Logic Matrix is a high-density FPGA prototyping platform designed for multi-system expansion to address the needs for both capacity and performance. Earlier this quarter, we also announced MDM Pro, the latest member of our multi-FPGA debugging solution. MDM Pro increases the concurrent deep-trace capability to 8 FPGAs and supports faster sampling rates and deeper trace capacity. We are also continuously refining our partitioning software and releasing more off-the-shelf daughter cards to simplify the setup of customers’ prototyping environments and to enable testing with real-world data.

What is the S2C competitive positioning?

A combination of things is raising S2C to new heights. With close to 20 years of know-how and a proven track record, solid products, close customer relationships and outstanding service, we are doing extremely well in China, where IC design activity has grown rapidly over the last few years. These customers not only help to provide economies of scale to lower costs, they also in turn provide feedback that enables us to continuously innovate and to roll out new products to match market demand, not only for China customers but for customers worldwide, at good value.

If we compare ourselves to the Big 3, while S2C may not have the same comprehensive EDA coverage as they do, we are more agile and more flexible. We aim to provide service and customization to help address customer demands. If we compare S2C to other tier-two vendors and BYO (Build Your Own), S2C’s products are proven, more robust and more comprehensive. Together with economies of scale, we deliver high value to our customers.

What does the next twelve months have in store for S2C EDA?

2021 will be an exciting year for S2C. On the hardware side, we are rolling out a higher-capacity Logic Matrix LX2 in Q3 and our first emulator platform in Q4. On the software side, we will be adding RTL partitioning and SerDes-based pinmux support in a few months to better serve hyperscale designs.

www.s2ceda.com

Also Read:

COO Interview: Michiel Ligthart of Verific

CEO Interview: Srinath Anantharaman of Cliosoft

CEO Interview: Rich Weber of Semifore, Inc.


Upping the Safety Game Plan for Automotive SoCs

by Rich Collins on 05-20-2021 at 10:00 am


Thanks to advanced hardware and software, smart vehicles are improving with every generation. Capabilities that once seemed far-off and futuristic—from automatic braking to self-driving at the very pinnacle—are now either standard or within reach. However, considering how vehicle architectures have continued to evolve, the way that safety and security are being addressed also must change.

Vehicles have typically been designed with dozens of discrete microcontrollers, each managing a separate function, from window operations and door locks to engine control. Now, we’re seeing increased centralization, with large systems-on-chip (SoCs) managing wider categories of functions. For example, one SoC might be dedicated for all vehicular communications, another for networking, and so on.

Considering the size and complexity of today’s automotive SoCs, a sound approach is to really understand the safety architecture and develop a safety plan first, before defining the vehicle’s architecture. The safety plan should be guided by automotive functional safety standards, namely ISO 26262. Developed by the International Organization for Standardization in conjunction with the International Electrotechnical Commission (IEC), ISO 26262 mandates a functional safety development process, from specification through production release, for automotive OEMs and suppliers to follow and document in order to have their devices qualified to run inside commercial vehicles. By following ISO 26262, automotive OEMs and suppliers provide assurance that their devices will perform as intended, when intended.

The standard outlines a risk classification system, based on Automotive Safety Integrity Levels (ASIL), with the aim of reducing possible hazards caused by malfunctions in electrical and electronic systems. There are four ASILs, each based on the probability and acceptability of harm. ASIL D, the highest degree, is most relevant to safety-critical applications like Advanced Driver Assistance Systems (ADAS). ASIL D will only continue to grow in importance as vehicles incorporate increased levels of autonomous driving capabilities.

Another framework with which automotive safety devices must comply comes from AUTOSAR, which was founded in 2003 to create an open and standardized automotive software architecture and has defined the use of C++14 for safety-critical environments. Also important in the early phases of safety planning is consideration of cybersecurity measures. The U.S. National Highway Traffic Safety Administration (NHTSA) published a 2020 draft update of its Cybersecurity Best Practices for the Safety of Modern Vehicles, which provides guidance for anyone manufacturing or selling vehicles in the U.S. The organization considers vehicles to be "cyber-physical systems and cybersecurity vulnerabilities could impact safety." Other automotive security standards, such as ISO/SAE 21434, are in the early stages but look to help drive best practices in developing security architectures for safety-critical SoCs.

Defining a Safety Plan for Automotive SoCs

A strong safety plan outlines and defines all of the safety mechanisms for a given component, including compliance with AUTOSAR standards and the target ASILs. It's also important to factor in cybersecurity at this stage. A key component of executing a safety plan is the implementation of a functional safety manager.

When designing with discrete microcontrollers, automotive engineers tend to use discrete safety managers from their chip vendors. With an SoC-centric approach, it's important from both safety and performance perspectives to have a dedicated safety manager integrated on the SoC to initiate, manage, and schedule boot-up and mission-mode tests. A large SoC tends to have multiple processor cores, and devoting one of them to serve as the safety manager prevents periodic safety checks and monitoring tasks from interfering with normal SoC operation, while also isolating safety code from non-safety application software. Other benefits include reduced power and area, lower system costs, and faster real-time response. The figure below illustrates the evolution from a multi-chip to a single-chip solution for an ADAS application.
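The division of labor just described can be sketched in software. The following is my own simplified Python illustration (the class, method names, and intervals are invented for this sketch, not part of any Synopsys product): a dedicated safety manager runs boot-time self-tests once, then schedules periodic mission-mode checks so they never compete with the application cores.

```python
import heapq
import itertools


class SafetyManager:
    """Toy model of a dedicated safety-manager core: run boot-time
    self-tests once, then dispatch periodic mission-mode checks."""

    def __init__(self):
        self._queue = []                # (next_due_ms, seq, interval_ms, test_fn)
        self._seq = itertools.count()   # tiebreaker so heapq never compares functions

    def add_periodic_test(self, interval_ms, test_fn):
        heapq.heappush(self._queue, (interval_ms, next(self._seq), interval_ms, test_fn))

    def run_boot_tests(self, boot_tests):
        # Executed once before entering mission mode; all tests must pass.
        return all(test() for test in boot_tests)

    def tick(self, now_ms):
        """Run every mission-mode test that has come due, then reschedule it."""
        results = []
        while self._queue and self._queue[0][0] <= now_ms:
            due, _, interval, test_fn = heapq.heappop(self._queue)
            results.append(test_fn())
            heapq.heappush(self._queue, (due + interval, next(self._seq), interval, test_fn))
        return results
```

On a real SoC, the plain callables here would be hardware self-tests and monitor reads executing on the dedicated safety core.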

Meeting Functional Safety Software Requirements

With hardware comes the need for software, which is where functional safety manager software comes into play. Having an integrated functional safety manager provides many benefits, including:

  • Independent and deterministic safety decision-making across various subsystems of a complex automotive SoC, with the option of having a dedicated safety routine per subsystem or IP module
  • Faster time-to-market through substantially reduced software overhead

Automotive software developers can choose to write their own functional safety software. But, clearly, this requires an investment in time and resources. Alternatively, they can turn to a proven, off-the-shelf software library. An effective safety management software library consists of:

  • A test manager that plans and schedules test execution, interacts with test providers for full SoC test coverage, works in boot and mission modes, and manages fault injection
  • A fault manager that collects and post-processes raw fault notifications from SoC components and converts them into safety alarms; maintains severity, hierarchy, and aggregation of safety alarms; generates software-visible safety alarms via callbacks or non-maskable interrupts; and asserts hardware fault notification or reset signals
  • A watchdog manager that handles internal watchdogs to control program execution flow, handles external watchdogs to guarantee system-level fault detection time intervals, and interacts with the test manager to provide the seed for test signature generation
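To make the fault-manager bullet concrete, here is a hypothetical sketch (the class, severity names, and callback signature are my own assumptions, not the library's API) of collecting raw fault notifications and converting those above a threshold into software-visible safety alarms:

```python
from enum import IntEnum


class Severity(IntEnum):
    INFO = 0
    CORRECTABLE = 1
    UNCORRECTABLE = 2
    FATAL = 3


class FaultManager:
    """Toy fault manager: collect raw fault notifications, map each to a
    severity, and fire a software-visible callback for anything at or
    above a configured alarm threshold."""

    def __init__(self, severity_map, alarm_threshold, on_alarm):
        self.severity_map = severity_map      # fault_id -> Severity
        self.alarm_threshold = alarm_threshold
        self.on_alarm = on_alarm              # callback(fault_id, severity)
        self.counts = {}                      # aggregation of raw notifications

    def notify(self, fault_id):
        # Unknown faults are treated as FATAL: fail safe, not silent.
        sev = self.severity_map.get(fault_id, Severity.FATAL)
        self.counts[fault_id] = self.counts.get(fault_id, 0) + 1
        if sev >= self.alarm_threshold:
            self.on_alarm(fault_id, sev)
        return sev
```

In a real system the callback would be a non-maskable interrupt handler or hardware fault-notification signal rather than a Python function.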

Introducing ASIL-Certified Software

Synopsys recently unveiled a set of ASIL-certified ARC® embedded functional safety software components for safety-critical applications:

  • A functional safety C runtime library provides building blocks for safety-critical applications
  • Software test libraries provide a mechanism to achieve ASIL certification where redundant hardware isn’t required
  • Fault, watchdog, and test management components enable a fully programmable SoC safety management solution
  • Example MCAL and complex drivers ease integration into an AUTOSAR environment

The functional safety software stack runs on ASIL D-compliant DesignWare® ARC functional safety processor IP to simplify safety-critical automotive SoC development and accelerate ISO 26262 qualification. To facilitate development, debugging, and optimization of embedded software for ARC processors, we offer the ASIL D-certified ARC MetaWare Development Toolkit for Safety. Together, the software stack and the processor IP can save several staff-years of development time.

The ARC software stack and processor IP are part of a larger portfolio of Synopsys solutions for automotive design. With a long history of automotive expertise, Synopsys provides many other resources to help hardware designers and software developers comply with automotive functional safety requirements.

Planning for safety early in the vehicle design process can pay big dividends for you and, ultimately, your customers. And by executing a safety plan with ASIL-compliant electronic design automation (EDA) tools and IP, along with robust software security testing solutions, you can save time and effort in the process of creating smarter, safer cars.

In Case You Missed It

Catch up on some other recent automotive-related blog posts:


Architecture Wrinkles in Automotive AI: Unique Needs

by Bernard Murphy on 05-20-2021 at 6:00 am


Arteris IP recently spoke at the Spring Linley Processor Conference on April 21, 2021 about automotive system-on-chip (SoC) architecture with artificial intelligence (AI)/machine learning (ML) and functional safety. Stefano Lorenzini presented a nice contrast between automotive AI SoCs and those designed for datacenters. Never mind cost or power: in a car we need near real-time performance for sensing, recognition and actuation. For IoT applications we assume AI on a serious budget, power-sipping, running for 10 years on a coin-cell battery. But that isn't the whole story. AI in the car is a sort of hybrid, with the added dimension of safety, which makes for unique architecture wrinkles in automotive AI.

I've mentioned before that Arteris IP is in a good position to see these trends because the network-on-chip (NoC) is at the heart of enabling architecture options for these designs. Arteris IP is currently in the fortunate position of being the NoC intellectual property (IP) of choice in a wide range of hyperscaler and transportation applications, particularly those requiring AI acceleration. For example, Baidu with their Kunlun chip for in-datacenter AI training, versus Mobileye with their EyeQ5 chip targeted at autonomy levels 4 and 5. Each is quite representative of its class in constraints and architecture choices, granting that AI architecture is a fast-moving domain.

Datacenter AI Hardware

All hardware is designed to optimize the job it must do. In a datacenter, that job can be a pretty diverse spectrum of pattern-recognition algorithms. Training/inference architectures therefore most often settle on arrays of homogeneous processing elements, with a uniform mesh interconnect between those elements (perhaps with the east/west or north/south edges also wrapped around).

These architectures must process huge amounts of data as fast as possible. Datacenter services and competitiveness are all about throughput. The accelerator core will often connect directly to high bandwidth memory (HBM) in the same package for working memory to maximize throughput. The design includes necessary controller and other SoC support but is dominated by the accelerator.

Performance is king, and power isn’t a big concern, as you can see in the table above for the Kunlun chip.

Automotive AI Hardware

Automotive AI is also designed to optimize the job it must do, but those tasks are much more tightly bounded: recognize a pedestrian, lane markings, or a car about to pass you. Such designs need to be more self-contained, handling sensors, computer vision, control, potentially multiple different accelerators, plus an interface to the car network. For this kind of heterogeneous design, a uniform mesh network won't help.

Even within the accelerators, arrays of processing elements with mesh networks are far from ideal. Architects are shooting for two things: the lowest possible power, and the lowest possible latency for safety. You can improve both by keeping as many memory accesses as possible on-chip, which means local caches and working memories must be distributed through the accelerator. Array/mesh structures also hurt latency: they force multi-hop transfers across the array, where an automotive application may want more direct transfers. An array of processing elements is often overkill, and a more targeted structure no longer looks like a neat array.
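The multi-hop penalty is easy to quantify. With dimension-ordered (XY) routing on a 2D mesh, a transfer between elements at grid coordinates (x1, y1) and (x2, y2) crosses |x1 - x2| + |y1 - y2| router links, while a dedicated point-to-point connection crosses one. A minimal sketch (the cycles-per-hop figure in the comment is an illustrative assumption):

```python
def mesh_hops(src, dst):
    """Router hops for XY (dimension-ordered) routing on a 2D mesh."""
    (x1, y1), (x2, y2) = src, dst
    return abs(x1 - x2) + abs(y1 - y2)


# Corner-to-corner transfer on an 8x8 array of processing elements:
hops = mesh_hops((0, 0), (7, 7))  # 14 hops
# At, say, 2 cycles per hop, that is ~28 cycles of network latency,
# versus ~2 cycles for a dedicated direct link between the two blocks.
```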

You can further reduce latency through broadcast capabilities. These fan out critical data across the network in one clock tick, becoming faster by departing yet further from that simple array/mesh structure.

By default, AI accelerators are power hogs: huge images flowing through big arrays of processors, all constantly active. Dedicated applications can be much more selective. Not all processors or memories have to be on all the time; they can be clock gated. You can also selectively clock gate the interconnect itself, an important consideration because these interconnects can contain a lot of long wires. Careful design manages dynamic power; augmenting it with intelligent prediction of which logic needs to be on, and when, manages it even better.
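A back-of-the-envelope model shows why selective gating pays off. The power figures below are purely illustrative assumptions, not measured data:

```python
def dynamic_power(active, total, p_active_mw=50.0, p_gated_mw=0.5):
    """Toy dynamic-power model: processing elements with work scheduled
    this frame stay clocked; the rest are clock-gated and draw only
    residual power. The per-PE figures are illustrative assumptions."""
    return active * p_active_mw + (total - active) * p_gated_mw


always_on = dynamic_power(64, 64)   # every PE clocked all the time
selective = dynamic_power(12, 64)   # only the PEs this workload needs
```

For a workload that keeps only 12 of 64 elements busy, the model shows roughly a 5x dynamic-power reduction, which is exactly the leverage a dedicated automotive application can exploit.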

Automotive AI and Safety

Safety isn't a big consideration in datacenter AI hardware, but it's very important in automotive applications. All that extra on-chip memory needs error-correcting codes (ECC) to mitigate the impact of transient bit flips, which will likely further complicate timing closure. Safety mitigation methods typically increase area and may negatively impact yield.
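ECC in this context typically means a SECDED (single-error-correct, double-error-detect) code over each memory word. The principle can be shown with a minimal Hamming(7,4) sketch, where the parity-check syndrome directly locates a single flipped bit (real automotive memories use wider codes over 32- or 64-bit words):

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword.
    Bit positions are 1..7; parity bits sit at positions 1, 2 and 4."""
    c = [0] * 8                      # index 0 unused for 1-based positions
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]


def hamming74_correct(code):
    """Recompute the parity checks; a non-zero syndrome is the 1-based
    position of the single flipped bit, which we correct in place."""
    c = [0] + list(code)
    s = (c[1] ^ c[3] ^ c[5] ^ c[7]) \
        | ((c[2] ^ c[3] ^ c[6] ^ c[7]) << 1) \
        | ((c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
    if s:
        c[s] ^= 1
    return c[1:]
```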

More generally, Kurt Shuler, vice president of marketing at Arteris IP, likes to say that an SoC (micro-)architect should pay close attention to any project-management topic in safety that might impact architecture. Safety-critical designs start with pre-agreed lists of Assumptions of Use (AoU) from IP suppliers. Teams that start checking these late in the design can get into a lot of trouble; you need to understand the AoUs up-front, as you develop the architecture, because they are things suppliers can't easily change. Save yourself and your suppliers the hassle: read the instructions up front!

You can access the Linley presentation HERE.

Also Read:

Arteris IP Contributes to Major MPSoC Text

SoC Integration – Predictable, Repeatable, Scalable

Arteris IP folds in Magillem. Perfect for SoC Integrators


Chip Design in the Cloud – Annapurna Labs and Altair

by Kalar Rajendiran on 05-19-2021 at 10:00 am


The above title refers to a webinar hosted by Altair on April 28th. Chip design in the cloud is not a new idea, so what is the big deal with the above title? Sometimes titles don't reveal the full story. Annapurna Labs happens to be an Amazon company: it was an independent semiconductor company that Amazon acquired in 2015. So why not say "Chip Design in the Cloud – Amazon and Altair" or "Chip Design in the Cloud – AWS and Altair"? The key phrases are "food for thought", "eagle eyes", and "optimized scaling." After reading this blog you will know why.

The webinar was delivered by Andrea Casotto, Chief Scientist at Altair; Zohar Levy, HPC Project Manager at Altair; and David Pellerin, Head of Worldwide Business Development for Infotech/Semiconductor at Amazon Web Services.

Straight off the bat, Andrea shocked the audience by stating that many companies are repatriating workloads from the cloud back to on-premises infrastructure, and he presented cost-overrun statistics to back up that claim. Of course, he quickly pointed out the reasons behind those overruns and introduced the solution as well: Rapid Scaling.

Rapid Scaling is Altair's patented approach to implementing cloud elasticity. It is a feature of their Accelerator software, developed while Altair was working with Annapurna Labs. The feature brings cloud service costs as close as possible to demand by never asking for more hardware than is needed to complete the workloads. It accomplishes this by:

  • Categorizing jobs with similar characteristics into workload buckets and calculating the speed at which each bucket can be scheduled
  • Monitoring EDA license dependencies and availability, and not asking for hardware until the required licenses become available
  • Enforcing customer-specified cost-schedule limits by not launching workloads and/or requesting more hardware resources once the cost tally gets close to preset limits
  • Executing workload scheduling policies, switching between on-demand and spot instances accordingly to optimize cost, and
  • Stopping compute-farm growth at the optimal point, knowing (based on its own estimate) that all jobs still in the queue will be dispatched to hardware within a customer-specified time window. Refer to Figure 1: in this example, compute-farm growth stops even with 100 jobs still in the queue (the vertical red line cutting through the graphs), because Accelerator estimates that all queued jobs can be dispatched to existing hardware within 10 minutes. The 10-minute window is a configurable parameter set by the customer.
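The stopping rule in the last bullet can be sketched as a simple decision function. This is my own simplified model with invented names and parameters, not Altair's implementation:

```python
def should_request_more_hardware(queued_jobs, dispatch_rate_per_min,
                                 max_wait_min=10.0):
    """Toy version of the Rapid Scaling stop decision: keep growing the
    compute farm only while the existing hardware cannot drain the
    queue within the customer-configured wait window."""
    if dispatch_rate_per_min <= 0:
        return queued_jobs > 0       # nothing can dispatch yet: must grow
    est_drain_min = queued_jobs / dispatch_rate_per_min
    return est_drain_min > max_wait_min


# 100 queued jobs with an existing farm dispatching 12 jobs/min:
# estimated drain time is about 8.3 min, under the 10-min window,
# so farm growth stops even though the queue is not empty.
grow = should_request_more_hardware(100, 12)
```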

Figure 1:

Andrea continued by discussing the different operating systems, processor architectures and instance types currently supported by Rapid Scaling, and then passed the baton to Zohar.

Zohar demonstrated Annapurna Labs' live production environment for semiconductor design, both with and without the Rapid Scaling feature enabled. Refer to Figure 2 for the Altair Accelerator architecture and operating environment. You will have to watch the live demo to see the benefits presented visually on an hourly, daily, weekly, or monthly time scale; suffice it to say, the demo clearly demonstrated cloud elasticity.

Figure 2:

David followed Zohar with a talk summarizing Amazon’s experience in designing chips in the cloud.

He discussed how and why Amazon got into designing custom silicon, how these initiatives help its AWS customers, and the expansion in the number and types of instances offered. Graviton/Graviton2, Inferentia, Trainium, and the Nitro System were listed as examples of custom silicon built at Annapurna Labs that power many of the purpose-built AWS instances. He shared case-study snapshots of customers such as MediaTek, Qualcomm, and Arm who have benefited from running EDA on the AWS Cloud for designing their chips and IP.

David also highlighted how Arm-based instances are fast becoming a good high-performance alternative to traditional x86-based instances for EDA in the cloud. He spotlighted the recently announced Arm-based X2gd instance as particularly suited to EDA workloads, since these instances offer a large amount of memory.

David also touched on Amazon’s own EDA journey to AWS Cloud as they migrated (refer to Figure 3) from Annapurna Labs’ on-prem EDA flow to everything on AWS Cloud, except for emulators.

Figure 3:

David closed his talk with a thought on how customers who have on-prem EDA flow could explore hybrid EDA orchestration. He pointed out that a tool such as Altair’s Accelerator knows when to tap into the Cloud for certain types of instances or for spot instances or for EDA licenses to optimize cost.

The webinar closed with a Q&A segment during which some excellent questions were fielded.

Now You Know

The Annapurna Labs team has a penchant for scaling obstacles. The word Annapurna refers to a mountain range in the Himalayas with a number of tall peaks. The Annapurna Labs logo showcases that. The etymology of the word Annapurna tells us that it stands for abundant food. True to its name, Annapurna Labs has certainly provided some food for thought with respect to efficiently scaling the peaks, valleys and plateaus of semiconductor design workloads utilizing AWS cloud services.

The name Altair derives from the Arabic for "the flying eagle." True to its name, Altair keeps an eagle eye on dependencies, resources, and costs through its scheduling software with its patented Rapid Scaling technology. The result is very cost-effective scaling for Annapurna Labs: one case study showed a 50% cost saving compared to not leveraging the Rapid Scaling feature.

Summary:

Altair's Accelerator, with its patented Rapid Scaling feature, is a cost-conscious job scheduler proven to meet the compute demands of semiconductor and EDA workloads in the cloud. It is capable of launching and managing millions of jobs a day.

Anyone designing semiconductor chips in the cloud can benefit from the Altair solution, which is currently supported on the AWS cloud. I recommend you listen to the entire webinar and then discuss with Altair ways to leverage their solution for your own designs.

Also Read

Webinar: Annapurna Labs and Altair Team up for Rapid Chip Design in the Cloud

Altair Expands Its Technology Footprint with I/O Profiling from Ellexus

Altair HPC Virtual Summit 2020 – The Latest in Enterprise Computing