
5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety – Part 2

by Shawn Carpenter on 03-15-2022 at 10:00 am


In our first blog installment, we outlined the interference concerns surrounding the coexistence of the new C-band 5G telecom service spanning the band from 3.7 to 3.98 GHz with aviation radar altimeters. Radar altimeters are essential components for safety during landing and takeoff, as they offer precise measurements from the aircraft to the ground. For background on the spectrum allocation involved, please refer to our earlier installment.

We will now consider the components required in a high-fidelity interference analysis aimed at determining the maximum interference potential between a 5G C-band transmitter and a radar altimeter receiver.

The Anatomy of an Interference Analysis

The traditional method for determining whether interference exists has been to simply turn on the radios involved and measure the spectrum. In the case of 5G C-band interference with radar altimeters, this would involve turning on a tower near an airport, pushing peak traffic levels through the radio system, flying an aircraft through the airspace with a particular radar altimeter system, and taking many data samples. Undertaking real measurements is costly for many reasons:

  • Testing can only validate one radar altimeter at a time per test aircraft, and depending on antenna interaction with the host airframe, may only apply to one aircraft type at a time
  • Other signals within the 5G and radar altimeter band would need to be “quieted” so that measurements are not biased by contributions from other signals in the area
  • The airspace would need to be cleared of other aircraft while testing is conducted
  • Testing would apply to one 5G base station location at a time, and one airport at a time

These are just some factors that lead to a very high cost of validation through measurement.

With sufficient fidelity, simulation offers a very cost-effective and repeatable way to test and validate combinations of radar altimeters, host aircraft, C-Band 5G base station combinations and parameters, and airport locations. Let’s examine a worst-case interference analysis via simulation. In our case, we will use the Ansys Electronics Desktop, featuring the Ansys HFSS simulator for modeling antennas and their interactions with their local environment, and the Ansys Electromagnetic Interference Toolkit (EMIT) for modeling wideband interference potential between radio systems.

Both in-channel and out-of-band effects are considered. Beyond transmitters and receivers, the antenna systems must also be considered, allowing for the orientation and position of the aircraft and for the beamforming and beam steering characteristics of the 5G antenna system.

Interference scenario modeling can be broken down into three parts, as illustrated in Figure 4.

Figure 4 – The major components of RF interference modeling and simulation

In this case, we are concerned with a single 5G transmitter and a radar altimeter receiver. For purposes of this analysis, we won’t concern ourselves with interference in the other direction (from radar altimeter transmitter to the 5G receiver) but with Ansys EMIT it could be considered.


Emissions Model for the 5G C-Band Transmitter

The 5G Base Station model requires knowledge of its wideband electromagnetic emissions — both within its 5G channels and its out-of-band emissions. Any transmitter that carries messages in the RF signal has out-of-band emissions because of signal modulation, and the FCC and the International Telecommunication Union (ITU) set regulatory limits on the levels of signal transmitted by any licensed (or unlicensed) transmitter. The transmitter is fixed — sitting on the ground or on a fixed tower — but the antenna may be able to concentrate its energy in certain directions using a process called beamforming.

In the process of looking for interference potential, we study worst-case effects. In modeling the transmitter, we start with a peak power spectral mask, which shows the maximum power that is used at any frequency at any time. We can also capture effects like harmonics, intermodulation products, broadband noise, and narrowband noise, but one of the best ways to start is with the industry regulatory requirements for maximum emissions. The ITU sets these standards to ensure the safety of people and systems exposed to RF energy. For our examination, we started with the specifications for a Wide Area Coverage C-Band base station with a 16-by-16 array, as set forth in the 3GPP Specifications. (If you’re interested in digging into the details, you can find it here.) I should mention that telecom equipment providers may (and do) provide equipment with broadband noise performance that exceeds the values we used; we start with the requirement because it represents a worst case for a compliant transmitter. In fact, in a supporting study to the FAA by the Radio Technical Commission for Aeronautics (RTCA), we found a number of helpful parameters for defining the 5G radio emissions mask.

Figure 5 shows the 5G transmitter emission models used in our simulations, and we considered the currently available band at 3.7-3.8 GHz, in addition to the proposed future bands at 3.8-3.9 GHz and 3.9-3.98 GHz.

Figure 5 – The wideband emissions mask specification for the 5G C-band transmitters. Current implementations involve only the 100 MHz band from 3.7-3.8 GHz, but future spectrum has been purchased by telecom providers for the 100 MHz band at 3.8-3.9 GHz and the 80 MHz band from 3.9-3.98 GHz.
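
To make the idea of a peak power spectral mask concrete, here is a minimal sketch in Python. The dBm/MHz levels and out-of-band break points below are illustrative placeholders invented for this sketch, not the 3GPP or ITU limits used in the actual study.

```python
# Sketch of a piecewise peak-power spectral mask for a C-band 5G
# transmitter. The power levels below are illustrative placeholders,
# NOT the 3GPP limits used in the study described in the text.

def emission_level_dbm_per_mhz(freq_ghz):
    """Assumed peak emission level (dBm/MHz) at a given frequency."""
    if 3.7 <= freq_ghz <= 3.8:
        return 33.0      # in-channel: full licensed power
    if 3.6 <= freq_ghz < 3.7 or 3.8 < freq_ghz <= 3.9:
        return -13.0     # near out-of-band region
    return -30.0         # far out-of-band / spurious emissions floor

# Sample the mask from 3.5 to 4.5 GHz in 50 MHz steps
mask = [(f / 100, emission_level_dbm_per_mhz(f / 100))
        for f in range(350, 451, 5)]
```

A tool like EMIT effectively evaluates a mask of this kind against the receiver's susceptibility curve across the whole band, rather than checking a single frequency.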

Receiver Susceptibility Model

The radar altimeter receiver also has a wideband performance characteristic. While it is designed to operate in the 4.2-4.4 GHz band, it can suffer degraded performance if other radios put sufficiently strong emissions into this band. In addition, it is potentially susceptible to radiation outside this band of operation. Radio system designers often characterize wideband receiver performance with a metric called susceptibility, which is generally a measure of how well a receiver can reject RF signals at any frequency. Within its band of operation, a receiver is intended to be very sensitive, so its susceptibility is very low. Outside its channel of operation, it is designed to be insensitive to incoming signals, so its susceptibility is very high at out-of-band frequencies.

A particular challenge in receiver design is balancing in-band or in-channel susceptibility with out-of-band susceptibility. A receiver might be very sensitive to signals within its band, but a consequence of this sensitivity may be that it can be overloaded by an out-of-band signal that is so strong that it defeats the receiver’s ability to reject it, resulting in a condition known as saturation.

Because saturation events can happen with strong transmission sources near our receiver, any good interference simulation needs to consider the receiver’s sensitivity and saturation characteristics for both the in-channel and the out-of-band signals.
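
The in-band/out-of-band sensitivity trade-off and the saturation condition can be sketched as a simple decision. The -90 dBm and -40 dBm thresholds below are invented for this sketch; only the -10 dBm saturation level echoes the value quoted later in the text from the RTCA study.

```python
# Sketch: deciding whether an interfering signal degrades a receiver.
# Susceptibility thresholds here are illustrative placeholders.

SATURATION_DBM = -10.0   # assumed front-end overload level (any frequency)

def susceptibility_dbm(freq_ghz):
    """Assumed interference threshold vs. frequency for the altimeter."""
    if 4.2 <= freq_ghz <= 4.4:   # in-band: very sensitive -> low threshold
        return -90.0
    return -40.0                  # out-of-band: much higher threshold

def interference_state(freq_ghz, received_dbm):
    if received_dbm >= SATURATION_DBM:
        return "saturation"       # front end overloaded regardless of band
    if received_dbm >= susceptibility_dbm(freq_ghz):
        return "degraded"
    return "ok"

print(interference_state(4.3, -60.0))   # in-band signal -> "degraded"
print(interference_state(3.75, -60.0))  # out-of-band, rejected -> "ok"
```

Note how the same -60 dBm signal degrades the receiver in-band but is rejected out-of-band, while any signal above the saturation level overwhelms the front end at any frequency.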

While researching radar altimeter performance models, we found that there are wide performance variations. Arguably the best altimeter systems are used for commercial passenger aircraft, and indeed this is reflected in the types of aircraft that have now been approved for landing at the designated airports under low-visibility conditions. In our effort to develop a model for this demonstration, we looked for a “middle of the road” system to represent the radar altimeter susceptibility.

To formulate our model, we found a useful resource in the RTCA study, choosing an altimeter with good wideband characteristics (to yield the best altitude measurement resolution), along with a “reasonably good” receiver saturation level of -10 dBm. This means that the radar should have reasonable performance to reject signals outside of its intended frequency of operation. Figure 6 shows the receiver susceptibility model that we are using for this interference study, based on parameters listed in the RTCA study.

Figure 6 – Receiver susceptibility of a candidate radar altimeter operating at center frequency of 4.3 GHz. Most high-resolution aviation altimeters use 170 MHz of spectrum for measuring range from aircraft to ground.

With models for transmitter emissions and receiver susceptibility, we have two of the three important components of any interference analysis. The third component will be the wireless channel, depicted in Figure 4. We’ll cover the wireless channel and consider an interference analysis for a worst-case scenario in our next blog installment.
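
As a preview of that third component, here is a minimal link-budget sketch using free-space path loss — a deliberately simplistic stand-in for the high-fidelity channel and installed-antenna models discussed in this series. All powers, gains, and distances below are illustrative.

```python
import math

# Sketch of the wireless-channel component of an interference analysis,
# using free-space path loss (FSPL) as a simple stand-in for the
# high-fidelity channel models described in the text.

def fspl_db(freq_hz, dist_m):
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8
    return 20.0 * math.log10(4.0 * math.pi * dist_m * freq_hz / c)

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, freq_hz, dist_m):
    """Simple link budget: Tx power + antenna gains - path loss."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(freq_hz, dist_m)

# Illustrative example: a 5G tower 1 km from the aircraft at 3.75 GHz
p_rx = received_power_dbm(tx_dbm=60.0, tx_gain_dbi=20.0,
                          rx_gain_dbi=0.0, freq_hz=3.75e9, dist_m=1000.0)
print(round(p_rx, 1))   # -> -23.9 dBm at the receiver input
```

The interference question then reduces to comparing this received power against the receiver's susceptibility and saturation thresholds at that frequency.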

Also read:

5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety

5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety – Part 3

5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety – Part 4


Leveraging Virtual Platforms to Shift-Left Software Development and System Verification

by Kalar Rajendiran on 03-15-2022 at 6:00 am


Ever since the cost of development started growing exponentially, engineering teams have been deploying a shift-left strategy to software development and system verification. While this has helped contain cost and accelerated product development schedules, a shift-left strategy is not without challenges. A virtual platform methodology is a common approach to implement a shift-left strategy. The platform is expected to fully represent the functionality of a target system-on-chip (SoC) or a board-based system.

As systems grow more and more complex, the cycle time to fully finalize a system grows as well, and a virtual platform cannot accurately represent the full system until the system itself is finalized. So developers pull together abstract models of the system’s various subsystems and components to produce a virtual platform, with the goal of making early progress on software development. In reality, however, many projects delay their virtual platform initiative until the hardware is reasonably implemented. With this as the bottleneck, having access to emulators and FPGA prototyping system resources doesn’t change the situation much.

Can something be done to leverage a virtual platform methodology earlier than is commonly practiced today? This was the focus of a talk delivered at DVCon 2022 by Ross Dickson and Pankaj Kakkar, both of Cadence, where Ross is a product management director and Pankaj is a solutions group director in the System & Verification Group. The following is a synopsis of the salient points from their talk, in which they present details of how to improve system verification by using virtual platforms in a software-driven methodology.

Hybrid Platform Approach

The idea is to extract the value of a virtual platform and the values of emulation and FPGA prototyping systems at the earliest possible opportunity. With most systems, the competitive differentiation lies within a portion of the entire design. Let’s say, for example, that a custom accelerator is a key differentiator in a system. Of course, the system will have some kind of CPU core, a modem for communication, some I/Os, etc. Even if the specific CPU core for the system is not finalized yet, knowing whether it’s Arm or RISC-V and whether it’s 64-bit or 32-bit would be very useful. Picking something that is about right is better than a completely abstract model. With access to an extensive library of reference designs, one can start off with a virtual platform that is not completely abstract and virtual.

With this approach, the team designing the custom accelerator can work with a hybrid platform that is more realistic for its purposes. And the team working on integrating with the CPU core has its hybrid platform that is more realistic for its purposes. Other teams may be working on I/Os and drivers. As the different teams refine and finalize their respective blocks/sub-systems, those are swapped into the virtual platform. Finally, all pieces are integrated to provide the complete system. While this doesn’t happen until late in the design flow, the various teams have been productive throughout.
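
The swap-in flow described above can be sketched abstractly: a platform holds named component models at different fidelity levels, and teams replace abstract models with refined ones as blocks are finalized. The class, method names, and fidelity labels below are invented for illustration; this is not the Cadence Helium API.

```python
# Hypothetical sketch of the hybrid-platform swap-in methodology.
# Names and API are invented for illustration only.

class VirtualPlatform:
    def __init__(self):
        self.components = {}

    def register(self, name, model, fidelity):
        # fidelity: "abstract", "reference", or "rtl" (e.g. on an emulator)
        self.components[name] = (model, fidelity)

    def swap_in(self, name, model, fidelity):
        """Replace a model with a more refined one as a team finalizes it."""
        self.components[name] = (model, fidelity)

    def fidelity_report(self):
        return {name: fid for name, (model, fid) in self.components.items()}

platform = VirtualPlatform()
platform.register("cpu", "arm64-reference-model", "reference")
platform.register("accelerator", "abstract-stub", "abstract")

# The accelerator team finalizes its RTL and moves it onto an emulator:
platform.swap_in("accelerator", "accel-rtl-on-emulator", "rtl")
print(platform.fidelity_report())
# -> {'cpu': 'reference', 'accelerator': 'rtl'}
```

The key property is that every other team keeps running against the same platform object while one block's fidelity improves underneath it.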

If detailed RTL already exists for a portion of a design, the user can put that portion into an emulator or FPGA prototyping system and integrate it with the virtual platform, creating a hybrid platform that leverages hardware verification solutions.

Cadence® Helium™ Virtual and Hybrid Studio

The Cadence® Helium™ Virtual and Hybrid Studio provides a unified embedded software experience with native integration to Cadence’s Xcelium™, Palladium® and Protium™ verification engines. It includes an integrated debugger and comes with an entire library of reference designs. Pankaj walked the audience through the process of building a new hybrid platform from the base reference platform. Users can select various models from the extensive library of models that come with Helium and configure the model parameters as needed. They can also remove unwanted models from the platform and then handle the port bindings, interconnections and memory mappings. For more details, visit the product page.

Pankaj then walked the audience through a typical hybrid platform creation flow, which enables software debug on Helium and runs Linux. The audience saw a live demo of the Helium hybrid platform in action and its various features.

Debugger

The Helium software debugger is a standard Eclipse-based software debugger that provides all the basic features that any debugger would provide. Refer to the figure below for the debugger window. Through this GUI, users can see all the critical information.

Summary

With Cadence’s Helium Virtual and Hybrid Studio, users can increase their verification throughput and produce a better system more easily than through a traditional development methodology. This enables early pre-silicon software bring-up and hardware/software co-verification.

The Dickson-Kakkar DVCon talk can be accessed here by registering at the DVCon website.

The Dickson-Kakkar DVCon presentation slides can be downloaded here.

Learn more about the Helium Virtual and Hybrid Studio.

Also Read:

Using a GPU to Speed Up PCB Layout Editing

Dynamic Coherence Verification. Innovation in Verification

How System Companies are Re-shaping the Requirements for EDA


5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety

by Shawn Carpenter on 03-14-2022 at 10:00 am


The new 5G C-band service is now up and running in the U.S., and subscribers are finally starting to see some of the promise of 5G. The new C-band services are primarily in spectrum allocations between 3 and 4 GHz, providing the wider channel allocation bandwidths necessary to deliver significantly higher data throughput. At the same time, signals at this frequency can travel significantly farther than with the mm-wave band. AT&T and Verizon customers are reporting download speeds ranging from 400 megabits per second (Mbps) to as much as 800 Mbps, a nearly 10x improvement over 4G LTE systems.

You can’t get this faster service at the airport though— at least not yet. The U.S. Federal Aviation Administration (FAA) has worked out a six-month agreement with the telecom providers to keep the C-band transmitters turned off near the affected airports due to ongoing concerns about the potential for interference with aircraft radar altimeters. During this time, the FAA continues to examine the radar altimeter systems used in commercial aircraft for certification, as well as to further study possible additional constraints needed for the nearby 5G C-band base stations.

Figure 1 – Verizon 4G/5G coverage map around the New York City airports, dated Feb. 24, 2022 (https://www.verizon.com/coverage-map/). The lighter colors around the airport indicate that lower-frequency 5G service is available within the region, but not the new Ultra Wideband (C-band) service.

This six-month hiatus currently affects about 500 towers nationwide at around 87 airports. As of Feb. 25, an estimated 90% of commercial airline aircraft have been approved for landing under the current agreement based on their radar altimeters’ ability to deal with interference from C-Band 5G base stations outside the airport zones on the 3.7-3.98 GHz bands. At some airports, the aircraft approved for landing, takeoff, and approach vary by runway, creating aircraft scheduling challenges between airlines and airport management. For example, at Chicago’s Midway International Airport, only one runway is cleared for 100% of the aircraft types served by the airport, whereas the other four runways are cleared for between 81% and 95% of aircraft types.

In July, Verizon and AT&T are expected to energize the C-band service towers for enhanced 5G service closer to, and perhaps including, the airport campuses. By then, the telecom service providers and the FAA will presumably have negotiated and settled on acceptable operating parameters for those new C-band 5G base stations. In addition, the FAA is expected to have completed testing of the radar altimeters currently in use throughout the aviation industry and their interactions with closer 5G towers.

How Simulation Can Help

It is somewhat surprising that this issue has come up when there are simulation tools such as Ansys EMIT, which can predict these interference effects and provide guidance for mitigation. For difficult interference problems, the Ansys EMIT toolkit, an integral component of the Ansys Electronics Desktop and part of the Ansys HFSS portfolio, is designed to consider wideband transmitter emissions and assess their impact on wideband receiver characteristics.


Figure 2 – Ansys EMIT is an integral component of the Ansys Electronics Desktop and provides wideband interference simulation and mitigation for multiple RF systems and emission sources within a localized environment. EMIT is capable of utilizing high-fidelity installed antenna coupling data simulated by Ansys HFSS to capture wideband installed antenna-to-antenna couplings.

When combined with accurate models for the radar altimeter antennas installed on host aircraft, and for the 5G base station antenna systems, we can form an accurate prediction of the maximum expected potential for interference. This interference prediction is useful for both in-channel conflicts as well as spectrum conflicts which might occur outside the radar altimeter band.

In this blog series, we will illustrate 5G C-band interference potential with a candidate radar altimeter system during an aircraft landing approach.

Examining the C-Band Spectrum Neighborhood

Before considering simulation, let’s first review the spectrum situation in the C-band part of the radio spectrum.

Figure 3 – C-Band spectrum allocation showing the C-band 5G service channels (3.7-3.98 GHz) in proximity with the Aircraft Safety and Radar Systems band (4.0-4.4 GHz)

The 5G service providers purchased the rights to country-wide spectrum from the FCC in December 2020 at a combined cost of $69 billion to gain access to up to 280 MHz of combined bandwidth. This allocation covers three separate 5G channels:

  • 3.7-3.8 GHz: Currently being phased in for C-band tower deployments. This band is the primary subject of concern because it is being made available now.
  • 3.8-3.9 GHz: A future 100 MHz of spectrum that will be added to further increase capacity.
  • 3.9-3.98 GHz: A future 80 MHz of spectrum that will likely be added after the first two 100 MHz bands have been fully deployed.
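
A quick calculation from these allocations shows the frequency separation (guard band) between each 5G channel's upper edge and the 4.2 GHz lower edge of the radar altimeter operating band, and why the future channels raise more concern:

```python
# Guard band between each C-band 5G channel's upper edge and the
# 4.2 GHz lower edge of the radar altimeter operating band.

ALTIMETER_LOW_GHZ = 4.2

channels = {
    "3.7-3.8 GHz (current)": 3.8,
    "3.8-3.9 GHz (future)": 3.9,
    "3.9-3.98 GHz (future)": 3.98,
}

for name, upper_edge_ghz in channels.items():
    guard_mhz = round((ALTIMETER_LOW_GHZ - upper_edge_ghz) * 1000)
    print(f"{name}: {guard_mhz} MHz of separation")
```

The separation drops from 400 MHz for the current channel to just 220 MHz once the highest future channel is deployed, leaving less spectral distance for out-of-band emissions to roll off.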

So far, only the lowest channel with the greatest channel spacing from the altimeter band is being considered, but the closer (future) 5G channels may create even higher potential for interference between the two systems.

In our next blog installment, we’ll examine the 5G radio wideband emissions model and a candidate radar altimeter receiver model with wideband performance—essential ingredients to examining the in-band or out-of-band interference potential.

Also read:

5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety – Part 2

5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety – Part 3

5G and Aircraft Safety: Simulation is Key to Ensuring Passenger Safety – Part 4


Use Existing High Speed Interfaces for Silicon Test

by Tom Simon on 03-14-2022 at 6:00 am


The growth of complexity for silicon test, in terms of test data volume and test times, is driven by multiple concurrent factors. One dimension is simply the increase in silicon complexity. However, other factors are playing a role as well. These include higher reliability requirements for new applications such as automotive, aerospace and defense. These requirements have not only increased test challenges at the point of manufacture, but are also extending test challenges to system assembly and to assembled products in the field. Approaches that worked before are reaching their practical limits, and new silicon lifecycle management test requirements are being tackled for the first time.

SiliconMAX High-Speed Access & Test IP + Synopsys TestMAX ALE Solution

To address these issues, Synopsys has developed IP that uses the high-speed functional interfaces already present on chips to access the test network, eliminating the need to set aside dedicated test pins. The benefit is especially large because functional interfaces typically operate at higher speeds than test pins. Benefits of this approach include reduced pin count, less need for specialized test equipment, higher data rates, and the ability to access test functionality at all phases of an SOC’s lifecycle.

Let’s dig into the details of this interesting shift in thinking. Many of us painfully remember when there were unique and specific interfaces for keyboards, hard drives, displays, pointers, printers, interface cards and the like. Once interfaces like USB and PCIe came along, it became obvious that consolidation made sense. Today’s SOCs all utilize interfaces such as USB and PCIe which can run at high speeds, so why not leverage them as test access ports too? Furthermore, the IEEE 1149.10 specification approved in 2017 creates a standard for packetizing test data so that it can be moved through existing high-speed interfaces.

Synopsys is working on supporting the 1149.10 protocol. TestMAX ALE test software from Synopsys pairs with SiliconMAX HSAT IP to provide a complete solution. Synopsys TestMAX ALE can run on testers, PCs or SLT platforms, performing ATPG pattern translation for data fed into the SOC and reverse mapping for data collected from the SOC. SiliconMAX HSAT IP can support a variety of interfaces, including PCIe, USB, SPI, MIPI, and 1149.10.

SiliconMAX HSAT IP supports full test functionality. It handles data format translation, packetizing and depacketizing. It also conveys the test data and commands to the SOC’s DFT logic. Chips that use the SiliconMAX HSAT IP for test no longer need dedicated test pins. Time required to move data is reduced due to higher data rates of the functional interface. Plus, increased flexibility allows for improved reliability by supporting test throughout the entire lifecycle of the chip.
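
The packetize/depacketize step can be sketched as follows: scan test data is chopped into framed packets that can ride over a functional interface, then reassembled on the far side. The frame layout here (a sequence number plus a length byte) is invented for illustration and is not the IEEE 1149.10 packet format.

```python
import struct

# Illustrative sketch of packetizing scan data for transport over a
# functional interface. The frame layout is invented for illustration
# and is NOT the IEEE 1149.10 packet format.

def packetize(scan_data: bytes, payload_size: int = 8):
    """Split scan data into frames: 2-byte sequence + 1-byte length + payload."""
    frames = []
    for seq, i in enumerate(range(0, len(scan_data), payload_size)):
        payload = scan_data[i:i + payload_size]
        header = struct.pack(">HB", seq, len(payload))
        frames.append(header + payload)
    return frames

def depacketize(frames):
    """Reassemble scan data, ordering frames by sequence number."""
    ordered = sorted(frames, key=lambda f: struct.unpack(">H", f[:2])[0])
    return b"".join(f[3:3 + f[2]] for f in ordered)

data = bytes(range(20))
frames = packetize(data)
assert depacketize(frames) == data   # round-trips even if frames reorder
print(len(frames))   # 20 bytes in 8-byte payloads -> 3 frames
```

In a real deployment the translation between ATPG patterns and packets is handled by the test software and the on-chip IP; the sketch only illustrates the framing idea.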

The SiliconMAX HSAT IP comes with the full set of collateral needed to integrate it into a design. It uses a configurable ARM AMBA AXI slave interface to connect to the functional interface. Also included is ARM AMBA AXI loopback testbench generation. It also comes with configurable scan chains (512 max) and a TAP. There is an optional EBC interface for USB to enable DMA functionality. An added benefit is that SiliconMAX HSAT IP can also provide access to the silicon monitoring network on SOCs.

Using existing high-speed interfaces for test solves several important issues for SOC designers. It reduces or eliminates the need for dedicated pins and offers higher speed access to on-chip scan and test resources. But perhaps most importantly, it opens the door to improved silicon lifecycle management, which is essential for many rapidly growing application areas, such as automotive. More information on SiliconMAX HSAT IP can be found here on the Synopsys website.

Also read:

Non Volatile Memory IP is Invaluable for PMICs

Why It’s Critical to Design in Security Early to Protect Automotive Systems from Hackers

Identity and Data Encryption for PCIe and CXL Security


Webinar: From Glass Break Models to Person Detection Systems, Deploying Low-Power Edge AI for Smart Home Security

by Daniel Nenni on 03-13-2022 at 10:00 am


Moving deep learning from the cloud to the edge is the holy grail when it comes to deploying highly accurate, low-power applications. Market demand for edge AI continues to grow globally as new hardware and software solutions are now more readily available, enabling any sized company to easily implement deep learning solutions at the edge of the network, free from Internet connectivity, ensuring privacy, reliability, responsiveness and battery life.

Advanced audio interfaces, cutting-edge image recognition, and multi-axis motion and passive infrared sensing technology are enabling a new generation of security solutions for the smart home or enterprise. As a leader in deep learning technology, AI chip company Syntiant is hosting an upcoming webinar that will focus on building low-power, multimodal edge applications to ensure safety and privacy. Whether for smart home surveillance, medical devices, autos or industrial IoT, a panel of deep learning experts and engineers will demonstrate how image, sound and sensor applications can be run simultaneously at significantly low power.

Webinar Background

Today’s machine learning approaches are enabling significantly higher accuracy for a litany of tasks where safety and privacy are paramount, like image and sound classification, object and person detection, condition-based monitoring, motion tracking and occupancy monitoring, natural language processing and medical data analytics. However, deployment of cloud-based deep neural networks often requires huge amounts of processing, memory and power consumption, which also are vulnerable to data breaches and higher latency.

This webinar will focus on how to successfully deploy edge AI neural networks using Syntiant® ultra-low-power Neural Decision Processors™ that sense, analyze and autonomously act to allow mission-critical and time-sensitive decisions to be made faster, more reliably, and with nominal power consumption and greater privacy at the edge of the network.

Learn how AI models for video doorbells, gunshot and glass break detection, occupancy monitoring, tamper detection, fire and smoke alerts, and many more use cases can be easily deployed at less cost with designated latency, memory size and power consumption. The result is highly accurate, cloud-free inference, while minimizing false detections across myriad consumer and industrial IoT applications, from smart home security and medical devices to automobiles and aviation, among other use cases.

Learn More

The live webinar will be broadcast Wednesday, March 23 at 9 a.m. PST. Register here to reserve a spot to learn more about edge AI deployment for safety and privacy applications, as well as find answers to probing questions, such as:

  • Are non-AI sensors creating a high rate of false detections?
  • Is excessive power consumption causing high budget costs?
  • Do space constraints limit design choices and implementations?

The market for edge AI is exploding and this exciting webinar will provide details for successful deployments. Research suggests that by 2028, 37 percent of the global infrastructure edge footprint will be for use cases associated with mobile and residential consumers, with the remaining 63 percent supporting applications in vertical markets such as healthcare, manufacturing, energy, logistics, smart cities, retail and transportation.

About Syntiant

Syntiant Corp. is a leader in delivering end-to-end deep learning solutions for always-on applications by combining purpose-built silicon with an edge-optimized data platform and training pipeline. The company’s advanced chip solutions merge deep learning with semiconductor design to produce ultra-low-power, high performance, deep neural network processors for edge AI applications across a wide range of consumer and industrial use cases, from earbuds to automobiles. Syntiant’s Neural Decision Processors™ typically offer more than 100x efficiency improvement, while providing a greater than 10x increase in throughput over current low-power MCU-based solutions, and subsequently, enabling larger networks at significantly lower power.

Read a SemiWiki CEO interview with Syntiant’s Kurt Busch here.


5G Network Activation Update

by Daniel Nenni on 03-13-2022 at 10:00 am


As a pilot and semiconductor professional I was a bit shocked to get an Airworthiness Directive due to the 5G rollout. Airworthiness Directives are legally enforceable regulations issued by the FAA to correct an unsafe condition in a product:

“Airworthiness Directive (AD) 2021-23-12 was issued for all fleets in December 2021 and was added to all fleet AOM/OM VOL I. Specific 5G airport/runway NOTAMs activated the operating provisions and restrictions contained in the AD. The FAA has issued 5G NOTAMs effective 12:00 AM EST on Wednesday, January 19, 2022. There are a significant number of 5G NOTAMs issued for airports throughout our system, to include our hub and gateway cities.”

We discussed this in detail in the SemiWiki Experts Forum: Airlines warn of ‘catastrophic’ crisis when new 5G service is deployed. I made some inquiries inside the semiconductor ecosystem about how this could possibly have happened and how it could have been prevented. The best response came from Ansys, which has been publishing blogs on the topic on SemiWiki and on their own website, with more to come:

Can you Simulate me now? Ansys and Keysight Prototype in 5G

The 5G Rollout Safety Controversy

5G and Aircraft Safety: How Simulation Can Help to Ensure Passenger Safety

5G and Aircraft Safety Part 2: Simulating Altimeter Antenna Interference

The latest post (above) included a video simulation (which is worth watching), the recent developments with the FAA, Verizon and AT&T, and how simulation could have avoided this Airworthiness Directive.

Caption – This computer simulation shows an aircraft landing through a C-Band 5G signal emitted from a base station near the airport. The orb below the aircraft represents the radar altimeter. Ansys, which created the simulation, is the leader in simulation software that is used by engineers to model these very types of scenarios so they can see and mitigate problems before physical products are made or deployed. This kind of modeling can also be used to set safety constraints on 5G transmitters at specific locations around airports or other locations of interest to public safety. It’s a lot less expensive to tweak a system in the computer than it is to make costly updates to hardware after construction. Shawn Carpenter, an expert in radio frequencies, antennas and 5G, is available to comment on the subject.

  1. The FAA reached an agreement with Verizon and AT&T to delay activation of C-band service for six months on base stations near 50 commercial airports with low-visibility approaches. According to an FAA statement released January 28, the service providers shared information on the exact locations of new 5G transmitters so that the FAA can study interference potential in more detail to shrink areas where the wireless carriers are allowed to deploy active transmitters. At this point, it appears that C-band towers within 2 miles of the designated airports are still inactive, and it’s not yet clear when AT&T and Verizon will plan to activate them. Verizon has indicated that this affects about 500 towers near airports, which is less than 10% of their total deployment of new C-band systems.
  2. The FAA has worked to approve radar altimeter systems as well as commercial aircraft on which they are installed, to allow low-visibility landings at airports where 5G C-band services have been deployed, subject to the agreement with the 5G service providers. By the end of January, the FAA estimated that it had approved about 90% of the U.S. commercial aircraft, including most large commercial jets that incorporate one of the 20 approved radar altimeter units. However, some smaller airports which can only be served by smaller aircraft are still experiencing flight cancellations because the aircraft servicing them have not yet been cleared.

As a pilot and frequent international flyer, I still have concerns, but I do appreciate the discussion and the efforts of Ansys to better understand this problem.

Also Read

The Clash Between 5G and Airline Safety

The Hitchhiker’s Guide to HFSS Meshing

The 5G Rollout Safety Controversy


GM’s Super Duper Cruise

by Roger C. Lanctot on 03-13-2022 at 6:00 am


It’s one thing to lead an industry. It’s another to anticipate and meet a new challenge well ahead of competitors in an industry. It’s another thing, still, to solve a long-standing problem and receive barely a hint of credit.

Such is the case with General Motors’ semi-autonomous hands-free Super Cruise feature. While many of us are by now familiar with the hands-free aspect of this novel solution, widely seen in Super Bowl commercials, fewer are aware of an entirely new capability that the system introduced to the market.

With the introduction of Super Cruise, GM has taken the lead in delivering what is described in the industry as a minimum risk maneuver (MRM) + stop in lane function. In other words, a driver using an active Super Cruise system will be saved from a crash by the system’s ability to detect inattentiveness and slow to a stop in its existing travel lane.

Interestingly, this Super Cruise side gig doesn’t appear to have a separate name and has yet to be called out or highlighted by GM. But a GM vehicle operating in Super Cruise mode will in fact slow down to a stop in its lane in the event of the driver’s incapacitation or willful inattention to the driving task.

The function is perfectly suited to the needs of drivers who may experience a medical emergency such as a seizure or heart attack. Car companies around the world have long sought solutions for detecting driver inattention or medical emergencies in the interest of automatically slowing and removing the car from the roadway, preventing any resulting death or injury from single- or multi-vehicle crashes.

This kind of safe, emergency functionality calls attention to two ongoing National Highway Traffic Safety Administration investigations of Tesla and Honda for unintended braking – attributable to either software glitches or sensor-related false positives. Teslas and Hondas are under investigation for stopping unexpectedly, sometimes at highway speeds, with a fully attentive driver at the wheel.

The irony of Tesla’s unintended braking – which has been widely noted for years prior to NHTSA stepping in – is the near impossibility of slowing or stopping a Tesla in the event of an unresponsive Tesla driver. On one or more occasions, police officers seeking to stop such a vehicle have had to position their cruisers in front of and around the offending Tesla in order to slow it to a stop by triggering its sensors.

GM’s attention to this particular application is reminiscent of the company’s Stolen Vehicle Slowdown function enabled by OnStar connectivity. OnStar, in collaboration with local law enforcement, can slow and stop a properly equipped stolen GM vehicle remotely as long as the vehicle is within eyesight of cooperating police officers.

The MRM + stop in lane functionality of Super Cruise is a harbinger of future semi-autonomous vehicle functionality tuned to the needs of an aging driving population. At least one company, Merlin Mobility, is seeking to bring such an aftermarket solution to the market while also extending its capabilities.

While the Super Cruise application is intended to slow and stop a vehicle in its existing travel lane, Merlin Mobility is seeking to deliver an aftermarket solution capable of pulling a car with an incapacitated driver out of the travel lane while also summoning emergency assistance.

Super Cruise, too, summons emergency assistance via its OnStar connection. According to Cadillac’s consumer-facing online messaging, the application works in the following manner:

FIRST ALERT

”If the system detects that you may not be paying attention to the road ahead, the steering-wheel light bar flashes green to prompt you to return your attention to the road.”

SECOND ALERT

“If the steering-wheel light bar flashes green for too long and the system determines continued lack of attention to the road ahead, the steering-wheel light bar flashes red to notify you to look at the road and steer the vehicle manually. Also, beeps will sound or the Safety Alert Seat will vibrate.”

THIRD ALERT

“If the steering-wheel light bar flashes red for too long, a voice prompt will be heard. You should take over steering immediately; otherwise, the vehicle will slow in your lane of travel and eventually brake to a stop, as well as prompt an Emergency-Certified OnStar Advisor to call into the vehicle to check on you. Super Cruise and Adaptive Cruise Control will disengage.”

The Merlin Mobility system operates in a similar fashion to GM’s Super Cruise except that Merlin Mobility claims that its system will call 911 operators (in the U.S.) directly and, as previously noted, will take over driving in the event of an emergency.

Merlin Mobility, which has yet to announce availability or pricing for its system, says it is based on a device installed on the inside of a vehicle’s windshield and is compatible with 100 vehicle models for aftermarket installation. Originally leveraging technology from Comma.ai, Merlin Mobility executives now hint that they have developed their own solution organically.

Both the GM and Merlin Mobility solutions point to potentially life-saving applications for semi-autonomous vehicle systems. Both demonstrate the creation of value from autonomous vehicle technology development that can be and is being realized in the market immediately. Now all GM has to do is give its application a name. Merlin Mobility’s system is called Copilot.

Also read:

Emergency Response Getting Sexy

Waymo Collides with Transparency

Apple and OnStar: Privacy vs. Emergency Response


Podcast EP66: The Data Center Revolution, the Speedata Point of View

by Daniel Nenni on 03-11-2022 at 10:00 am

Dan is joined by Jonathan Friedmann, a serial entrepreneur and the CEO and co-founder of Speedata. Jonathan discusses the current challenges of big data analytics and how Speedata fits into the landscape.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Frankwell Lin, Chairman and CEO of Andes Technology

by Daniel Nenni on 03-11-2022 at 6:00 am


Frankwell Lin, Chairman of Andes Technology, started his career as an application engineer at United Microelectronics Corporation (UMC) when UMC was still an IDM with its own chip products, and he went on to hold engineering, product planning, sales, and marketing roles across various UMC product lines. In 1995, after four years as business director of the CPU chip product line, he was transferred to UMC’s European branch office as its GM when UMC reshaped itself into a wafer foundry, and he led UMC-Europe’s migration from selling IDM products to selling wafer foundry services.

In 1998, after 14 years at UMC, Frankwell moved to Faraday Technology Corporation (Faraday), where he led ASIC business development and, at various times, ASIC implementation, chip backend services, IP business development, and industry relations (IR), while also serving as Faraday’s spokesperson. In 2004 he began leading the spin-off of Faraday’s CPU project.

Frankwell co-founded Andes Technology Corporation in 2005, formally became Andes’ President in 2006, and was promoted to Chairman and CEO in 2021.

Frankwell received a BSEE in Electrophysics from National Chiao-Tung University, Taiwan, and an MSEE in Electrical and Computer Engineering from Portland State University, Oregon, USA. Under his management, Andes has been recognized as one of the leading suppliers of embedded CPU IP in the semiconductor industry.

Andes has also earned a reputation as a leading technology company, with awards including the 2012 EE Times worldwide Silicon 60 Hot Startups to Watch and the 2015 Deloitte Technology Fast 500 Asia Pacific award. Frankwell received the Outstanding Technology Management Performance accolade in Taiwan in 2015 and the ERSO Award in 2020 for his contributions to the high-tech industry. He has also served as a Board Director of RISC-V International since 2020.

What is the Andes Backstory?
Andes was founded in 2005. We have nearly 17 years of experience in the CPU IP business. We began by developing our own reduced instruction set architecture. As we watched the development of the RISC-V ISA, we found a great deal of similarity between our ISA and the RISC-V instruction set. As a result, Andes was able to migrate from our proprietary RISC ISA to RISC-V seamlessly, without much pain.

What is the Andes history with RISC-V?
Andes began evaluating the open-source RISC-V ISA in 2014, a year before the founding of the RISC-V Foundation in 2015. We joined the foundation in 2016. Later, the foundation changed its name to RISC-V International Association.

Today, Andes is a board member. Within RISC-V International there are many task groups for various areas of ISA development. We serve on several of them, including the SIMD/DSP task group, which Andes chairs; we led the ISA development for the SIMD/DSP extension, part of the standard RISC-V ISA.

The rapid market acceptance of our product line resulted from our ability to introduce RISC-V CPU IP offerings ahead of the competition. The design and verification procedures we built to ensure quality with our original product line give us an edge over competitors, who use an open-source, non-human-readable hardware description language (HDL) to build their RTL. Andes-designed RTL is human-readable and developed using our well-established design and verification procedures. In addition, we have sales and support channels with 17 years of experience.

One example is the RISC-V vector processor. In 2020 we were the world’s first to make a commercial RISC-V vector processor available, at a time when the vector ISA was still in the draft stage. When the ratified version 1.0 of the vector instruction set was announced, we updated our IP to meet the spec. A second example is the RISC-V DSP (P) extension. We developed IP based on a draft version of the spec in early 2019 and will keep advancing the design when the P extension spec is ratified. Again, we were the first to deliver a commercial RISC-V DSP processor.

We’re experiencing rapid market adoption. Our default business model is IP licensing. Customers adopting our RISC-V IP to design their SoCs include the following. A couple of tier-one companies have designed Wi-Fi, Bluetooth, and IoT chips used in several different platforms. Picocom used our RISC-V core to design a 5G base station chip. Similarly, EdgeQ has designed a 5G AI chip. Kneron has developed a machine learning AI edge chip. HP Micro launched an MCU that can be used in AI edge and other microcontroller applications. In the HPC (high-performance computing) data center server market, we have several major wins at FAANG companies, but I cannot disclose their names. For security, one of our customers is Silex Insight. For microcontrollers, Renesas was a major win for us.

The second business model is custom computing, in which we enter into a signed agreement with customers to customize one of our off-the-shelf RISC-V CPUs to their special requirements. Customers may want advanced features or additional instructions to make their CPU uniquely their own.

In 2021, Andes achieved a growth rate of 41 percent over 2020. For the first three quarters of 2021, our preliminary EPS (earnings per share) showed an increase of more than 300 percent. Our growth and profitability are built on two revenue engines. First, our existing proprietary CPU business continues to produce license and royalty revenue. Second, our RISC-V product line is driving IP license and non-recurring engineering revenue and is quickly growing royalty income.

Can you explain how RISC-V is fundamentally different from Arm? What is the key characteristic?
The advantages the RISC-V ISA brings are that it is modular, it is open source, and it started from a clean slate. Additionally, designers can add their own custom instructions. RISC-V begins with a base set of instructions and then adds instruction modules: memory load/store, integer, floating point, vector processing, DSP, and so on. A minimum configuration for a very simple application may need only about 200 instructions; from 200 to several thousand, different combinations of instructions give RISC-V expansive flexibility. Because the RISC-V ISA is open source, anyone can design their own CPU based on it, or license one from commercial or even open-source sources.

Next, because RISC-V was designed from a clean slate, it carries no legacy burden; the ISA has the architectural elements to serve everything from the simplest IoT applications to the most complex AI and deep learning ones. Uniquely, with the RISC-V ISA anyone can add their own custom instructions to accelerate computation of specific tasks in their design. Andes provides EDA automation tools to help designers add these custom extensions without impacting their design schedules.

Where do you expect the most success in terms of design wins over the next couple to several years? Do you expect it to be broad based or do you expect more success in some applications over others?
Automotive is one application; others include AI, data centers, 5G mobile applications, and more. Flash memory storage, such as consumer and enterprise SSDs, is another area where we have had design wins. Recently, we have had wins in high-speed, high-performance computing (HPC), for example optical computing integrated with traditional RISC computing. Another application is display drivers, where the trend is to integrate the timing controller, the driver, and the touch panel controller into one chip.

RISC-V is also ideal in emerging markets, as long as there is no single dedicated market leader with dominant market share, as there is in PCs and mobile phones. There will be three major industry standards in the next decade: x86, Arm, and RISC-V.

From a regional perspective, is it fair to assume that China is the fastest growing market for you when you think about the business for the next three to five years?
Okay, let me give you an example. Our revenue contribution from China was 25% in 2020 and grew to more than 30% in 2021. North America contributed 26% in 2020 and 26% in 2021. The ratio from China is definitely growing, and we expect its contribution to settle in the 30% to 33% range.

China is drawn to RISC-V because it’s open source and they can control the architecture. The nature of open source is that no single party controls your development. I think that’s important for their strategic thinking.

I’m looking at the list on the risc-v.org website, and I see Western Digital listed. Is the SSD controller application the way Western Digital is using RISC-V?
Yes, but not only that. They have also applied RISC-V to hard disks and other types of storage, and beyond storage they have open sourced their CPU designs to the world.

One other question, so are there any other companies that that have a very similar business model, as you guys in terms of offering RISC-V IP or are you pretty much the sole provider?
In RISC-V International there are 300 enterprise members and 2,000 personal members. Let’s focus on the 300 enterprise members. I believe about 10 of them offer similar CPU IP products to the market. To name a few, there is SiFive in the US, Codasip in Germany, Imagination Technologies in the UK, CloudBEAR in Russia, and Alibaba in China. In addition, there are several fully open-source suppliers such as UC Berkeley, ETH Zurich, and the China Research Institute. Thus, customers can choose commercial CPU IP or open-source IP.

Does RISC-V have any IP that’s aimed at replicating GPU functionality, or is it pretty much just focused on CPU architectures?
Imagination Technologies is one of the candidates that will incorporate RISC-V in GPU applications, but Imagination’s product is a RISC-V CPU coupled with their own display controller or GPU. However, RISC-V with the vector extension is also available; you can leverage vector processing to do computation similar to what is found in GPU applications. Customers are combining our vector processor with a GPU to perform deep learning, display, and graphics computing.

How do the best RISC-V processors compare with Arm and X86 performance in terms of industry benchmarks?
Comparing Arm’s product line with Andes product segments, the advanced processors we offer span the same performance range from the low end up to the Cortex-A53 and A55, and we are developing out-of-order processors to achieve a competitive advantage. Another vendor, SiFive, has processors with performance comparable to the Cortex-A72 and claims it will have a Cortex-A76-level product offering later this year. The Russian supplier CloudBEAR, which I mentioned, also claims to have an A76-class product available this year.

Very informative, thank you Frank!

Also read:

CEO Interview: Tamas Olaszi of Jade Design Automation

CEO Interview: John Mortensen of Comcores

CEO Interviews: Kurt Busch, CEO of Syntiant


Getting to Faster Closure through AI/ML, DVCon Keynote

by Bernard Murphy on 03-10-2022 at 10:00 am


Manish Pandey, VP R&D and Fellow at Synopsys, gave the keynote this year. His thesis is that given the relentless growth of system complexity, now amplified by multi-chiplet systems, we must move the verification efficiency needle significantly. In this world we need more than incremental advances in performance. We need to become more aggressively intelligent through AI/ML in how we verify, in both tool and flow advances. To this end, he sets some impressive goals: speeding up verification by 100X, reducing total cost by 10X and improving the quality of result by 10 percentile points (e.g. 80% coverage would improve to 90% coverage). He sees AI/ML as key to getting to faster closure.

Background in AI/ML types

Manish does a nice job of providing a quick overview of the primary types of learning: supervised, unsupervised, and reinforcement, along with their relative merits. Synopsys verification tools and flows use all three techniques today. Supervised learning is the most familiar technique, commonly used for object recognition in images; this follows training on large banks of labeled images (this is a dog, this is a cat, etc.). He points out that in verification, supervised learning is a little different: datasets are much larger than bounded images and there are no standard reference sets of labeled data. Nevertheless, this technique has high value in some contexts.

In unsupervised learning there is no target attribute. Learning explores data looking for intrinsic structure, typically demonstrated in clustering. This method is well suited to many verification applications where no a priori structure is known. The third method is reinforcement learning, familiar from applications like Google’s AlphaGo. Here the technique learns its way to improvement across a succession of run datasets, such as it might see in repeated regressions.

Constrained random, CDC/RDC and formal

One important area where these methods can be applied is identifying coverage holes in constrained random (CR) analysis, then using AI/ML to find ways to break through them. Difficult branch conditions can cause holes, for example. Overcoming these barriers allows CR to expand coverage beyond that point. (I wrote about something similar recently.) Manish cited a real example where this technique both reduced time to target coverage by 1-2 orders of magnitude and increased coverage over the pure CR target.
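The guidance idea can be pictured in miniature: track how often each coverage bin has been hit, then bias stimulus toward the starved bins. The sketch below is a toy illustration, not the actual Synopsys method; the bin names are invented, and real ML guidance learns which constraint settings reach hard branches rather than simply reweighting a distribution.

```python
import random

def reweight(bin_hits, boost=4.0):
    # Weight each coverage bin inversely to how often it has been hit,
    # so rarely reached bins are sampled far more aggressively.
    return {b: boost / (1 + h) for b, h in bin_hits.items()}

def draw(weights):
    # Sample one bin according to the learned weights.
    bins, w = zip(*weights.items())
    return random.choices(bins, weights=w, k=1)[0]

# Hypothetical hit counts after an initial constrained-random run.
hits = {"burst_len=1": 900, "burst_len=8": 95, "burst_len=16": 5}
w = reweight(hits)
rarest = max(w, key=w.get)
print(rarest)  # the least-hit bin now carries the largest weight
```

Repeating the reweight-and-draw loop after each regression pass steers later stimulus toward the branches earlier runs kept missing, which is the effect the speedup numbers above describe.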

A second common application is CDC analysis. Static analyses are infamous for generating, say, ~100k raw violations, generally rooted in a very small number of real errors. Unsupervised learning is an excellent approach to analyzing these cases, looking for clustering among violations. Between clustering and automated root-cause analysis, they were able to reduce one massive dataset to just ~100 clusters, easily a 100X reduction in time to complete the analysis.
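The clustering step is easy to picture with a toy sketch: group raw violations by a structural signature so that the many crossings rooted in the same driver collapse into one bucket for review. The field names and grouping key below are hypothetical; production tools cluster on richer learned features, but the collapse from many violations to few clusters works the same way.

```python
from collections import defaultdict

def cluster_violations(violations):
    """Group raw CDC violations by a structural signature.
    Violations sharing a source clock, destination clock, and driving
    signal almost always share a single root cause."""
    clusters = defaultdict(list)
    for v in violations:
        signature = (v["src_clk"], v["dst_clk"], v["driver"])
        clusters[signature].append(v)
    return clusters

# Toy data: 6 raw violations, but only 2 distinct root causes.
raw = [
    {"src_clk": "clk_a", "dst_clk": "clk_b", "driver": "ctrl_reg", "sink": f"u{i}"}
    for i in range(4)
] + [
    {"src_clk": "clk_b", "dst_clk": "clk_c", "driver": "fifo_wr", "sink": f"w{i}"}
    for i in range(2)
]

clusters = cluster_violations(raw)
print(len(raw), "raw violations ->", len(clusters), "clusters")
```

A reviewer then dispositions each cluster once instead of each violation once, which is where the 100X reduction in analysis time comes from.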

In formal analysis over a set of properties, orchestration is a way to manage the distribution of proof tasks through selection of preferred proof engines over a finite set of servers. Reinforcement learning can greatly enhance this process by learning to better order properties and engine assignments from one regression pass to the next. They have seen this deliver a 10-100X speedup over default approaches to scheduling, which in turn allows more time for verifiers to push for higher proof coverage.
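A minimal sketch of the idea treats engine selection as a multi-armed bandit: remember how long each engine took on each property in past regressions, usually exploit the fastest known engine, and occasionally explore. This is an illustrative simplification, not the actual Synopsys orchestration algorithm, and the engine names are placeholders.

```python
import random

class EngineScheduler:
    """Epsilon-greedy bandit that learns which proof engine tends to
    close each property fastest across successive regression runs."""

    def __init__(self, engines, epsilon=0.1):
        self.engines = engines
        self.epsilon = epsilon
        self.stats = {}  # (property, engine) -> (avg solve time, run count)

    def pick(self, prop):
        known = {e: self.stats[(prop, e)][0]
                 for e in self.engines if (prop, e) in self.stats}
        if not known or random.random() < self.epsilon:
            return random.choice(self.engines)   # explore
        return min(known, key=known.get)         # exploit fastest so far

    def record(self, prop, engine, solve_time):
        # Update the running average solve time for this pairing.
        avg, n = self.stats.get((prop, engine), (0.0, 0))
        self.stats[(prop, engine)] = ((avg * n + solve_time) / (n + 1), n + 1)

sched = EngineScheduler(["bmc", "induction", "pdr"], epsilon=0.0)
sched.record("p7", "bmc", 12.0)
sched.record("p7", "pdr", 3.0)
print(sched.pick("p7"))  # prints "pdr", the fastest engine seen so far
```

Across regressions the learned ordering concentrates server time on the engines most likely to close each property quickly, which is the mechanism behind the scheduling speedup described above.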

Debug, regression performance, assertion generation

Debug can benefit from AI/ML through automated root cause analysis, a potentially huge benefit in compressing a very tedious task. By looking at past simulation results and debug action graphs through a combination of supervised and unsupervised learning, it is possible to identify the top potential root causes by probability. Manish doesn’t quantify how much this reduces debug time, probably because that is highly dependent on many factors, but he does say the reduction is substantial, which seems entirely believable.

A holy grail in optimizing verification throughput is finding a way to slim down testing in regression: reducing to only those tests you need to run given the changes that have been made in the design. This is not an easy problem to solve, which makes it an obvious candidate for AI/ML. Manish talks about mining historic simulation data, bug databases, and code feature data to determine test-set reductions that should have minimal impact on coverage for a given situation. He doesn’t mention which methods were applied in this case, but I could see a combination of all three being valuable. He again suggests a large potential savings in time through this technique.
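One way to picture the mined-data reduction is as a greedy covering problem: given each test’s historical coverage, pick the smallest set of tests that touches every changed design unit. The sketch below is a deliberately simplified stand-in with invented test names; real flows also weight by bug history, runtime, and coverage risk.

```python
def select_tests(test_coverage, changed):
    """Greedily select regression tests whose historical coverage
    touches every changed design unit."""
    remaining = set(changed)
    chosen = []
    while remaining:
        # Pick the test covering the most still-uncovered changed units.
        best = max(test_coverage, key=lambda t: len(test_coverage[t] & remaining))
        gain = test_coverage[best] & remaining
        if not gain:
            break  # some changed units have no covering test: flag for review
        chosen.append(best)
        remaining -= gain
    return chosen, remaining

# Hypothetical history: which design units each test has hit in the past.
history = {
    "smoke":   {"alu", "decode"},
    "dma_rnd": {"dma", "bus"},
    "full_cr": {"alu", "dma", "bus", "decode", "csr"},
}
tests, uncovered = select_tests(history, {"dma", "csr"})
print(tests, uncovered)
```

Here a two-unit change selects a single test rather than the full suite; anything left in `uncovered` signals a gap where no historical test touches the change, exactly the situation a real flow would escalate.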

One last interesting example: mapping from specification requirements to assertions is today a purely manual (and error-prone) task. Synopsys is now able to support this conversion automatically through natural language processing (NLP). Manish is careful to point out that success here depends on some level of user discipline in how specifications are written. They support a workflow to help users learn how to improve the recognition rate. Once both the users and the technology are trained 😃, conversion becomes an almost magical translation, again saving significant effort over the old manual approach.
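Why spec-writing discipline matters is easy to demonstrate: once requirements follow a recognizable template, even a trivial pattern matcher can emit an assertion, and sentences that don’t match get flagged for manual review. The sketch below uses a single regex as a crude stand-in for the trained NLP models; the requirement template, signal names, and SVA style are all assumptions.

```python
import re

# Toy template: "When <cond> is asserted, <sig> must be asserted within <n> cycles"
RULE = re.compile(
    r"When (?P<cond>\w+) is asserted, (?P<sig>\w+) must be asserted "
    r"within (?P<n>\d+) cycles", re.IGNORECASE)

def spec_to_sva(sentence):
    m = RULE.search(sentence)
    if not m:
        return None  # no template match: flag for manual review
    return (f"assert property (@(posedge clk) "
            f"{m['cond']} |-> ##[1:{m['n']}] {m['sig']});")

print(spec_to_sva("When req is asserted, ack must be asserted within 4 cycles"))
# -> assert property (@(posedge clk) req |-> ##[1:4] ack);
```

A real NLP flow generalizes far beyond one regex, but the failure mode is the same: freeform requirements fall outside the learned patterns, which is why the workflow coaches users toward disciplined phrasing.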

This is real today

Manish closes by pointing out that they are already able to demonstrate the goals he set out at the beginning of his talk – substantial speedup in net runtimes with corresponding reduction in human and machine cost and improvement in quality of results over conventional CR coverage targets. He also stressed that these advances were hard won. They have been working on refining these capabilities, jointly with customers, over several years. AI/ML are not quick fixes, but they can deliver substantial gains in verification with enough investment.

You can watch Manish’s keynote HERE.

Also Read:

Upcoming Webinar: 3DIC Design from Concept to Silicon

Heterogeneous Integration – A Cost Analysis

Delivering Systemic Innovation to Power the Era of SysMoore