Tesla’s License to Kill
by Roger C. Lanctot on 04-04-2021 at 8:00 am

Even as Honda Motors puts a so-called Level 3 semi-autonomous vehicle on the road in Japan – 100 of them to be exact – the outrage grows over semi-autonomous vehicle tech requiring driver vigilance. Tesla Motors and General Motors have taken this plunge, creating a driving scenario where drivers – under certain circumstances – can take their hands off the steering wheel as long as they are still paying attention.

For some, allowing the hands-off proposition requires the system designer to define a so-called operational design domain (ODD). This appropriately named protocol describes the acceptable circumstances for, and the functional capability of, semi-automated driving.

The ODD concept is raised in the UNECE ALKS (Automated Lane Keeping System) regulation that defines the enhanced cruise control functionality and the systems and circumstances that make it allowable – including a “driver availability recognition system.” Critics have begun speaking up. The latest is Mobileye CEO, Amnon Shashua.

In a blog post, Shashua asserts that all of these developments represent examples of “failure by design.” By allowing vehicles to drive under particular circumstances, the offending designers are setting the stage for catastrophic Black Swan events because the ODD does not provide for evasive maneuvers, only braking.

It’s worth noting that the main Black Swan scenario envisioned by Shashua is an autonomous car following another car: the leading car swerves to avoid an obstruction, and the following autonomous vehicle is incapable of reacting fast enough to avoid the obstruction. Ironically, one of Shashua’s proposed solutions to this problem is to ensure that “the immediate response (of the robotic driver) should handle crash avoidance maneuvers at least at a human level.”

The irony is that in the circumstance of a robotic driver following another car, the robotic driver may in fact be able to respond more rapidly than the human driver. So why should human driving be the standard in all cases? There are clearly circumstances where robotic driving will be superior – too numerous to list here.

Let’s review the UNECE ALKS ODD requirements for semi-autonomous driving – what the Society of Automotive Engineers describes as Level 3 automation:

The Regulation requires that on-board displays used by the driver for activities other than driving when the ALKS is activated shall be automatically suspended as soon as the system issues a transition (from robot driver to human) demand, for instance in advance of the end of an authorized road section. The Regulation also lays down requirements on how the driving task shall be safely handed back from the ALKS to the driver, including the capability for the vehicle to come to a stop in case the driver does not reply appropriately.

The Regulation defines safety requirements for:

  • Emergency maneuvers, in case of an imminent collision;
  • Transition demand, when the system asks the driver to take back control;
  • Minimum risk maneuvers – when the driver does not respond to a transition demand, in all situations the system shall minimize risks to safety of the vehicle occupants and other road users.

“The Regulation includes the obligation for car manufacturers to introduce Driver Availability Recognition Systems. These systems control both the driver’s presence (on the driver’s seats with seat belt fastened) and the driver’s availability to take back control (see details below).”

So the UNECE is very specific about when its ALKS ODD applies. In a recent SmartDrivingCar podcast, host Alain Kornhauser, Princeton’s Faculty Chair of Autonomous Vehicle Engineering, complains that the average driver is unlikely either to study or to comprehend the ODD or the related in-vehicle user experience – i.e. settings, displays, interfaces, etc.

For Kornhauser, the inability of the driving public to understand the assisted driving proposition of semi-autonomous vehicle operation renders the entire proposition unwise and dangerous. He also appears to assert that additional sensors are necessary to avoid the kind of crashes that have continued to bedevil Tesla Motors: i.e. Tesla vehicles on Autopilot driving under tractor trailers situated athwart driving lanes, and certain highway crashes with stationary vehicles.

What Shashua and Kornhauser fail to recognize is that Tesla has actually brought to market an entirely new driving experience. While Tesla has clearly identified appropriate driving circumstances for the use of Autopilot, the company has also introduced an entirely new collaborative driving experience.

A driver using Autopilot is indeed expected to remain engaged. If the driver fails to respond to the vehicle’s periodic and frequent requests for acknowledgement, Autopilot will be disengaged. Repeated failures of driver response will render Autopilot unavailable for the remainder of that day.

More significantly, while appropriately equipped Teslas are able to recognize traffic lights and, in some cases, the phase of the signal, a Tesla approaching a signalized intersection defaults to slowing down – even if the light is green – and requires the driver to acknowledge the request for assistance. The Tesla will only proceed through the green light after being advised to do so by the human driver.

This operation and engagement occurs once the driver has made the appropriate choices of vehicle settings, and it may not require that the driver understand the vehicle’s operational design domain. When properly activated, the traffic light recognition system introduces the assistance of a “robot driver” that is humble enough to request assistance.

Shashua’s concern is that a too-narrowly defined ODD, one that does not provide for evasive maneuvers, is a failure by design. But appropriately equipped Teslas are capable of evasive maneuvers. In fact, appropriately equipped Teslas in Autopilot mode are capable of passing other vehicles on highways without human prompting. It’s not clear how well these capabilities are understood by the average Tesla owner/driver.

The problem lies in the messaging of Tesla Motors’ CEO, Elon Musk. Musk has repeatedly claimed – for five years or more – to be on the cusp of fully automated driving. Musk insists that all of the vehicles the company manufactures today possess all the hardware necessary to enable full self-driving – ultimately setting the stage for what he sees as a global fleet of robotaxis.

The sad reality is that these claims of autonomy have displaced a deeper consumer understanding of what Tesla is actually delivering. Tesla is delivering a collaborative driving experience which provides driver assistance in the context of a vigilant and engaged driver. But Musk is SELLING assisted driving as something akin to fully automated driving.

This is where the Tesla story unravels. Current owners who understand this proposition and choose not to abuse it view Musk as a visionary, a genius who has empowered them with a new driving experience.

Competitors of Tesla, regulators, and non-Tesla owning consumers are angry, intrigued, or confused. Some owners may even be outraged at the delta between the promise of Autopilot – as seen and heard in multiple presentations and interviews with Musk – and the reality.

To add insult to injury, those drivers that have suffered catastrophic crashes in their Teslas – some of them fatal – have discovered Musk’s willingness and ability to turn on his own customers and blame them and their bad driving behavior for those crashes – some of which appear to be failures of Autopilot. This is the critical issue.

Musk is essentially using his own ODD definition to exempt Tesla from any responsibility for bad choices made by Tesla owners – or even for misunderstandings regarding the capability of Autopilot. As a result, Musk’s marketing has indeed given Tesla a license to kill by enabling ambiguous or outright misleading marketing information regarding Autopilot to proliferate and persist.

The collateral impact of this may well be insurance companies that refuse to pay claims based on drivers violating their vehicle’s end user licensing agreement – the fine print no one pays attention to. Musk is muddling the industry’s evolution toward assisted driving even as he is pioneering the proliferation of driver-and-car collaborative driving.

Can Tesla and Musk be stopped? Should they be stopped? How many more people will die in the gap that lies between what Autopilot is intended to do and what drivers think it is capable of? How many is too many?

The saddest part of all is that Musk is an excellent communicator, so there is no question that he knows precisely what he is doing. That somehow seems unforgivable.

SmartDrivingCar podcast

“On Black Swans, Failures-by-Design, and Safety of Automated Driving Systems” – Amnon Shashua, CEO, Mobileye


The Quest for Bugs: “Deep Cycles”
by Bryan Dickman on 04-04-2021 at 6:00 am

Verification is a resource-limited ‘quest’ to find as many bugs as possible before shipping. It’s a long, difficult search, constrained by cost, time and quality. For a multi-billion-gate ASIC,

The search space is practically infinite

In this article we talk about the quest for bugs at the system-level, which it turns out is even more difficult than finding another Earth-like planet humanity can survive on!

FPGA prototyping systems make deep cycles possible for deep RTL bug searches scaled-up to full chip or sub-system level by running real software payloads with realistic I/O stimulus. In fact, with scale-out, you can approximate the throughput of silicon devices.

Terms of reference

Are we on the same page?

Modern FPGA prototyping systems are effective and performant platforms for software development (in advance of available hardware reference boards), system-level validation (demonstrating that the hardware with the target firmware/software delivers the required capabilities), and system-level verification (bug searching/hunting from a system context). Hardware and software bugs can be found in all use-cases.  Remember…

Verification and validation don’t mean the same thing.

Sadly, bugs do remain undetected (even after rigorous searches such as simulation and formal)

Why do hardware bugs escape earlier verification levels such as simulation and formal, and what types of bugs can be found when using an FPGA prototyping platform as your verification environment? Hardware and software cannot be developed in isolation. How are you going to ‘validate’ the capabilities of hardware and software combined before committing to silicon? How are you going to validate the ‘target’ system in terms of real software and realistic hardware I/O?

What are the key capabilities that you will need from your FPGA prototyping environment to be successful? Deployed silicon will achieve exacycles (1e18) of usage over millions of devices. What are both the achievable and the aspirational targets for ‘deep cycles’ (at least a quadrillion cycles!) of pre-silicon verification, and how do you scale up to meet these goals?

Hitting the buffers with simulation

You’ve probably met all your coverage goals with simulation. Does this mean all of the bugs have been found? We know you wouldn’t bet your house on it! Consider this for a moment: if you are developing an ASIC or an IP core that will likely run with a target clock speed in the GHz range, think about how many cycles it will run in daily usage, and then try to figure out the total number of cycles that you have simulated. You might be surprised (or depressed!).

1e9 to 1e11 cumulative scaled-out simulation cycles are equivalent to only a few seconds of activity

And that’s if you have the servers and licences. You probably have not run the target firmware or software or been able to simulate realistic I/O traffic yet. Are you confident to go to silicon on that basis? Don’t be fooled by great coverage results. As previously mentioned in The Origin of Bugs,

Covered != Verified

How many silicon respins can you afford to address critical design bugs?

If one thinks about actual devices operating at 2-3GHz or greater, and being deployed in the millions, those devices in aggregate are running exacycles (1e18) and greater every day. Now, it’s not the job of verification to simulate the entire lifetime of all devices but,

You need a convincing road-test

Given that, what is the minimum cycle depth we would want to achieve if we want to sleep at night? Companies simply can’t afford to do this with simulation. Emulation goes a long way towards it, but FPGA prototyping takes performance and scalability up to another level. The challenge is how to validate designs to this depth, pre-silicon.

The only viable solution is scaled-out FPGA prototyping.

For a modest 1GHz device, consider how many simulation hours are needed to achieve just one hour of target speed equivalent cycles:

Total simulation time (@ 100 Hz) = 3600 × 1e9 / 100 = 36 × 1e9 seconds (10M hours)

(or 10K elapsed hours if you can run 1000 simulations in parallel)
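This arithmetic is easy to sanity-check. A minimal Python sketch, using the illustrative figures above (a 1 GHz target clock and a 100 Hz effective simulation speed):

```python
# Sanity check of the figures above: covering one hour of a 1 GHz
# device's cycles with a 100 Hz effective simulation speed.

TARGET_HZ = 1e9        # target silicon clock speed
SIM_HZ = 100           # effective simulation throughput (cycles/second)
TARGET_SECONDS = 3600  # one hour of silicon activity

cycles_needed = TARGET_HZ * TARGET_SECONDS   # 3.6e12 cycles
sim_seconds = cycles_needed / SIM_HZ         # 3.6e10 seconds
sim_hours = sim_seconds / 3600               # 1e7, i.e. 10M hours

parallel_sims = 1000
elapsed_hours = sim_hours / parallel_sims    # 1e4, i.e. 10K hours

print(f"{sim_hours:.0e} simulation hours; "
      f"{elapsed_hours:.0e} elapsed hours with {parallel_sims} parallel runs")
```

Ten thousand elapsed hours is still more than a year of wall-clock time for a single hour of target-speed behavior, which is the point.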

The latest FPGA prototyping technologies can achieve speeds from tens of MHz to hundreds of MHz depending on the design parameters. Even here, engineering will need to scale-out the FPGA prototyping capacity to achieve meaningful deep cycle targets.

Approximating to Silicon

As an example, for an FPGA clock speed of say 50MHz, 20 prototype instances will give an equivalent of 1GHz throughput capability.  In other words,…

20 prototype instances gets us a piece of silicon to play with!

Scale-out further, and it’s possible to approximate to the throughput of multiple pieces of silicon. So, it’s feasible to achieve deep cycles of pre-silicon verification in a practical timeframe. Of course, the level of scale-out needed depends on achievable FPGA clock speed, the target silicon clock speed, and the depth of testing being targeted.
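The scale-out sizing can be sketched the same way; a short example assuming the illustrative clock speeds above (`instances_needed` is a hypothetical helper for illustration, not a vendor API):

```python
import math

# Sketch of the scale-out arithmetic: how many FPGA prototype
# instances match the aggregate cycle throughput of n silicon devices?

def instances_needed(silicon_hz, fpga_hz, n_devices=1):
    """Prototype instances whose combined cycle rate equals n devices."""
    return math.ceil(n_devices * silicon_hz / fpga_hz)

# One 1 GHz device on 50 MHz FPGAs -> the 20 instances from the example.
print(instances_needed(1e9, 50e6))     # 20
# Approximating five pieces of silicon needs a five-times-larger farm.
print(instances_needed(1e9, 50e6, 5))  # 100
```

The same helper also makes the trade-off visible: doubling the achievable FPGA clock speed halves the farm size needed for a given throughput target.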

To find most of the bugs (at least all of the critical ones)

Deep cycles of verification testing alone is not a smart enough goal for our quest.

Hardware acceleration unlocks the ability to run real firmware and software on the RTL

Booting an operating system and then running some software, or rendering a few frames of video, demands long sequences of execution that are not practical with simulation – you can’t wait that long for the test to complete. Emulation takes a big step closer to this capability, and FPGA prototyping takes everything a stage further.

Finding bugs that escape simulation, formal and emulation

Both FPGA prototyping and emulation enable a software-driven approach to system-level verification. Just booting the OS alone is likely to take several billion cycles.

Booting Android takes around 30B cycles; that’s only a few minutes for our 50 MHz FPGA, versus several thousand hours for a fast 1 kHz simulation! Running the target software or other software payloads such as compliance suites, benchmarks or other test suites can and does find bugs that will have escaped earlier stages of verification. You might not find a huge volume of such bugs, but when you find them, you know you have…

Saved an escape that otherwise would have made it into silicon

So, the relative value of these bug finds is high. If you then multiply this testing load by the configuration space of both the hardware and the software, you can quickly escalate towards an extremely large volume of testing demand.
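The platform comparison above works out as follows; a quick Python check, assuming the illustrative figures in the text (a 30-billion-cycle boot, 1 GHz silicon, a 50 MHz FPGA and a 1 kHz simulation):

```python
# Rough boot-time comparison across platforms, using the text's
# illustrative figures for a 30-billion-cycle Android boot.

BOOT_CYCLES = 30e9

platforms = {
    "silicon @ 1 GHz": 1e9,
    "FPGA prototype @ 50 MHz": 50e6,
    "fast simulation @ 1 kHz": 1e3,
}

for name, hz in platforms.items():
    seconds = BOOT_CYCLES / hz
    if seconds < 60:
        print(f"{name}: {seconds:.0f} seconds")
    elif seconds < 3600:
        print(f"{name}: {seconds / 60:.0f} minutes")
    else:
        print(f"{name}: {seconds / 3600:.0f} hours")

# silicon @ 1 GHz: 30 seconds
# FPGA prototype @ 50 MHz: 10 minutes
# fast simulation @ 1 kHz: 8333 hours
```

Ten minutes on the FPGA versus thousands of hours in simulation is the whole case for software-driven deep cycles in one comparison.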

When bugs occur, can you detect them?

A really great way to improve failure detection is to leverage SVA assertions from your simulation and formal environments. When SVAs fire, they pinpoint the failure precisely, which is a huge advantage when you are running huge volumes of testing! You may be able to implement a home-grown flow for adding assertions to your FPGA prototyping environment, or better still, the FPGA prototyping vendor will have already provided a workflow to do this.

Recommendation: Leverage SVA assertions to enhance FPGA-prototyping checking capability

When bugs occur, can you debug them?

Debug consumes a lot of engineering time and this is also the case for an FPGA prototyping approach. There have been many studies[1] of where verification engineers spend their time and most of them show that over 40% of time is spent in debug. Debug is labor intensive, so,

Efficient debug is critical

FPGA prototyping systems necessitate a different approach to debug compared with emulation. You might start the debug process with software debug and trace tools, but from that point onwards you are going to need to debug the hardware, which requires visibility of internal state at the RTL code level. This means setting trigger points for events of interest and extracting signal waveforms for debug. It’s an iterative process as the designer zooms in on the suspicious timeframe and provides enough of the internal hardware state and logic signals to diagnose the bug.

FPGA prototyping systems provide multiple complementary debug strategies to accomplish this. A logic analyzer is required to set up the necessary trigger points to start/stop waveform traces. Debug can be performed at speed, with visibility of a limited subset of chosen signals (probe points), which are then stored to either on-FPGA memory, on-board memory or off-board memory when deeper trace windows are required. This works well, especially when the engineer has a good idea of where to search in the RTL.  The limited set of probe points may be more than sufficient to debug most problems.

More complex bugs require full-vision waveform analysis, demanding a different debug approach where full RTL visibility can be reconstructed.

Quest impossible?

No, scale-up and scale-out!

Nothing is impossible, but certain things are hard problems to solve. With many ASIC designs trending to multi-billion gates, prototyping systems need to get bigger, and offer platform architectures that

Scale-up to handle bigger systems and sub-systems

by plugging multiple boards and systems together while minimizing the impact on achievable DUT clock speeds.

You may have started with a lab or desk-based setup, but quickly conclude that you need to deploy an FPGA prototyping cluster (or farm). Deep cycles can be achieved if you

Scale-out to approximate to silicon levels of throughput

Use real-world traffic on a system realistic platform

Advanced FPGA prototyping solutions offer strong build and compile workflows, debug and visibility solutions, and high degrees of flexibility to configure the system to the design needs and cope with devices of any size. Plug-in hardware such as daughter cards allow users to quickly configure the environment to be system-realistic and connect the DUT to real I/O with real traffic profiles. This invariably involves some stepping down of native I/O speeds to the achieved FPGA clock speeds, and it means that the prototyping system needs to be able to handle many asynchronous clocks.

Good automation of the flow (e.g., auto-partitioning) will get you to running your first cycles within a few days. You can then optimize to get the fastest possible throughputs needed to drive towards your approximated-silicon testing goals.

How many cycles to run is itself a dilemma; a ‘deep cycles’ dilemma!

In practice this is determined by budget and the capacity available, but the fact is that the risk of getting it badly wrong is increasing rapidly. Billion-gate designs that integrate a growing array of IP components are required in our modern world to perform ever more complex tasks for applications such as 5G, AI, ML, Genomics, Big Data… the list goes on.

With scaled-out FPGA prototyping you can approximate to the throughput of the final silicon device.  But how effective is this software-driven verification approach?

Bug data will be the best metric to measure effectiveness and ultimately the ROI of the investment. Be sure to document and analyze bugs found from FPGA prototyping and assess what the total impact cost would have been had those bugs escaped to be discovered later in costly silicon.

Even if no bugs were found, which is a highly unlikely scenario, there is a value calculation for the assurance that this level of extreme deep testing brings.

Think of it as the “help me sleep at night” verification metric

A true-negative result is just as valuable as a true-positive when it comes to confidence levels and the true value of the final product.

Innovation in technology always creates bigger, better, faster ways of completing the verification quest successfully

Balance the investment between simulation, formal, emulation and FPGA prototyping in a way that reflects the complexity and size of the design, but critically the nature of risk if there were to be escapes. Justifying investment in the quest for bugs is predicated on many factors: the risk of reputational damage and huge rework/mitigation costs can build a strong ROI for investment in FPGA prototyping at scale.

Read the full whitepaper “System Bug Quest”

[1] Part 8: The 2020 Wilson Research Group Functional Verification Study


Podcast EP14: AI at the Edge
by Daniel Nenni on 04-02-2021 at 10:00 am

Dan and Mike are joined by Semir Haddad, senior director of product marketing at Eta Compute and Vineet Ganju, vice president and general manager, low power edge AI business at Synaptics. Semir and Vineet discuss the collaboration between Eta Compute and Synaptics to develop new and innovative solutions for AI applications at the edge.

The components of AI systems, both hardware and software, are discussed, along with strategies for power reduction. New applications, from building management to farming, are also explained.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Eta Compute

Synaptics


Semiconductor Startups – Are they back?
by Karthee Madasamy on 04-02-2021 at 6:00 am

Semiconductor startups used to rule the roost in Silicon Valley. The very name, Silicon Valley, comes from the birth of the semiconductor industry in the San Francisco Bay Area 60+ years ago. A large percentage of venture financing used to go to semiconductor startups, even as recently as 15 years ago. As a chip designer doing startups in the late ’90s and early 2000s in the San Francisco Bay Area, I felt as if there was a semi startup on every street corner.

Not so much in the last 10 years. Maturing industry, high capital requirements, and dwindling exits have caused a huge decline in funding for semiconductor startups. The narrative of Silicon Valley moved to consumer internet and software-led businesses. Remember, software is eating the world. But in the last few years, we have seen a slow but steady resurgence of semiconductor startups and witnessed blockbuster financing and acquisitions. So, are semiconductor startups on the comeback trail? Or is it a mirage?

First, it is important to separate semiconductor startups from the overall semiconductor industry. Globally, the market for semiconductor products has been growing for several decades, in recent years led by growth in computers, smartphones, consumer electronics, and automotive and industrial electronics. Computers have gotten more powerful, phones have faster internet speeds, consumer gadgets have gotten smaller – all because of technological advances in semiconductors.

Source: “2020 State of the US Semiconductor Industry” from SIA

In a large, growing market with improving technology, why is there no place for new startups? While the semiconductor industry is maturing, it is hardly a commoditized market with no place for innovation.

Well, structurally a few things happened:

  • Cleantech, one of the prominent semiconductor sectors, flopped badly, losing lots of capital for investors.
  • The technological advances made in internet infrastructure and mobile technologies led to a boom in the software application ecosystem across social, mobile, and cloud, moving entrepreneurial interest away from semiconductors.
  • China emerged as a large supplier of semiconductors, thus increasing competition while driving down premiums in the market.
  • Huge waves of consolidation happened in publicly traded semiconductor companies.

Most sectors go through these phenomena – they are nothing new. When this happens, newer innovations kickstart the next disruption cycle. Why didn’t it happen with semi startups?

Well, the success of a startup ecosystem rests on the number and variety of experiments that are attempted. The more experiments across a wider variety of areas, the better the chances for a breakout success. For semiconductor startups, two main issues slowed down these experiments:

  • It takes roughly $30M of financing to even get to a product and another $100M or more to get to volume production.
  • The buyer universe is limited because of the public market consolidation. The reduced list of buyers meant smaller acquisition premiums and smaller exits.

Huge capital costs combined with a small buyer universe and small exits don’t make for an attractive investment area. Combine this with harder macro trends, and the result was a vicious cycle of diminishing interest and funding in semiconductor startups.

Things have started turning around over the last few years. Since 2017, investments in semiconductor startups have increased significantly. So, what happened?

Source: Tracxn

One of the main reasons is the explosion of Artificial Intelligence (AI). Innovative semiconductor products were required to meet the computing demand of AI. Advances in AI and Computer Vision helped make huge strides in autonomous vehicles and autonomous platforms. That drove demand for specialized semiconductor sensors and processor architectures.

The cost of building semiconductor products has come down significantly, especially if you are not operating at the cutting edge of semiconductor manufacturing processes. Most semiconductor products do not require these advanced processes. Today, you can build a first-generation semiconductor chip with $10M or less. That is a lot less than $30M.

Expanding buyer universe – Apple, Google, Microsoft, Facebook, Amazon, and other large internet and software companies have started building semiconductors for their internal use and consumer products. They have become new acquirers of semiconductor startups.

US-China trade tensions have centered around semiconductors, and there has been an increased focus on self-sufficiency and nationalization. This has driven demand for US semiconductor suppliers. The chip shortages facing the automotive industry are driving home the point of self-sufficiency.

All these factors have driven huge investments and exits in Semiconductors recently. Just a few examples:

  • Qualcomm acquired a two-year-old semiconductor startup, Nuvia, for $1.4B
  • Automotive sensor semiconductor companies Luminar, Aeva, AEye, Ouster, and Innoviz all going public at valuations of $1B or more.
  • Investments flowing into semiconductor startups focused on AI, quantum computing, robotics, and others.

So, are semiconductor startups back for good? As new areas come up including Quantum computing, Space technologies, Computational biology …  the need for innovation and newer semiconductor products is on the rise. I remain bullish that the resurgence we have seen in semiconductor startups is here to stay. And once again, the name, Silicon Valley, will fit its description.

Deep (tech) thoughts with Karthee!


AUGER, the First User Group Meeting for Agnisys
by Daniel Nenni on 04-01-2021 at 10:00 am

As a long-time member of the EDA community, I really believe in user groups. EDA tools are complicated beasts, with many options and different ways to use them, and they are constantly evolving. Users interact with their local field applications engineers (FAEs) and sometimes corporate AEs (product specialists) as well on a regular basis. But there is a lot of knowledge on how best to use tools in the R&D teams that develop them. There’s also a great deal of experience spread among the user base, but it’s uncommon for users from different companies to talk directly.

User group meetings are a great way to get a critical mass of users, AEs, and R&D engineers together in one place. It’s best if they’re held in person so that all the participants can interact informally during breaks and meals in addition to the technical sessions. Of course, for now almost every type of meeting and conference is virtual. I was pleased to learn that Agnisys recently held its first-ever user group meeting, which they dubbed AUGER for some mysterious reason. I talked with CEO and founder Anupam Bakshi to find out the scoop.

What is AUGER and what does the acronym mean?

It stands for Agnisys User Group Educational Roundtable, and it is for the most part a traditional user group meeting. It was virtual, as you’d expect right now, but it was a really successful event. There’s also a bit of a pun involved since we wanted to drill down (auger) into technical details and not just present a bunch of fluffy sales/marketing slides.

What were your goals?

It seems to me that there are three key forms of communication that should occur in a user group meeting: vendor to user, user to vendor, and user to user. The host vendor should present updates on new tools and features, often directly from members of the R&D team, and provide guidance on best practices for using the tools, usually from the field and corporate AEs. The CEO should also offer a company vision and talk about future directions. Second, the vendor wants to hear from the users. It’s really nice to have some user presentations where they share their own experiences and best practices. There should also be a feedback session where the users suggest new tools, features, and support mechanisms to make their lives easier. Last but certainly not least, the users need to interact with each other. That’s harder to accomplish in a virtual format, but we included a roundtable slot where anyone could talk about anything related to Agnisys.

What sort of topics were covered in the technical sessions?

Our engineering team worked hard to develop brand-new slides with the latest and greatest information. Our engineering head summarized the most recent tools and features, many of which were suggested directly by our users. We had a second talk focusing deeply on the latest properties and customizations available for users to tailor our tools to meet their specific needs and fit into their design and verification environments. As you know, we started as a register automation company and this area remains a big part of our business. Accordingly, we held a session dedicated to the quality checks that we do on the register maps provided by users. The more accurate the input maps are, the better the results that we generate for RTL design, UVM testbench verification, embedded C/C++ code, system validation, and documentation. Finally, we had a presentation on how our tools can be used to ensure functional safety and security in chip designs, a hot topic in these days of increasingly autonomous vehicles.

Did the interaction with the users go well?

Honestly, it exceeded my expectations. I have to admit that I was a bit worried about the roundtable, wondering what we would do for 30 minutes if no one spoke up. Fortunately, that was not the case. We had a great facilitator in Tom Anderson, who ensured that we had a lively discussion. We had participation by users from multiple companies, and I was really pleased with that. The attendees were also active participants in the technical sessions, asking lots of good questions. A user from Intrinsix presented an excellent case study on how they benefit from our tools, and other users shared experiences during the roundtable.

Is there anything you might change for future events?

Well, we fervently hope that the pandemic subsides and that we will be able to meet in person next time. We plan on a hybrid event so that users unable or unwilling to travel can still participate. It might make sense to hold multiple events in different regions where we have concentrations of customers. We also hope to add a few more user talks; this first AUGER was developed on a rather tight schedule and not everyone had time to prepare slides. Overall, I expect that we will do many of the same things we did this year because they worked so well.

For those who missed the event, is it possible to access the talks?

Absolutely! We recorded everything, including the roundtable, and it is available on demand. To register, just go to https://www.agnisys.com/events/auger-2021/.

Also read:

Register Automation for a DDR PHY Design

Automatic Generation of SoC Verification Testbench and Tests

Embedded Systems Development Flow


Bouncing off the Walls – How Real-Time Radar is Accelerating the Development of Autonomous Vehicles

by Jeffrey Decker on 04-01-2021 at 6:00 am


In the race to get people out of the driver’s seat, the developers of autonomous vehicles (AV) and advanced driver assistance systems (ADAS) have gone off road and into the virtual world.

Using simulation to design, train and validate the brains behind self-driving cars — the neural networks of sensors and systems that perceive the world and then react to split-second changes in the environment — is essential to building the AV/ADAS platform.

Without simulation, developers are limited to naturally occurring events on public roads as their proving grounds. That means they’d spend far more time and money creating specific scenarios to test that sensors recognize, and algorithms respond appropriately and safely to, routine and hazardous conditions: red lights, pets and wildlife, oncoming traffic, or a child darting into the street.

Real-world driving and simulation work together to advance ADAS/AV technology. Real-world driving data is an important measure of road-worthiness and system intelligence, and it provides additional inputs to improve the underlying algorithms. Simulation complements on-road testing with its ability to run orders of magnitude more scenarios, including challenging events that are rare in real-world driving but essential to get right.

CNET writer Kyle Hyatt describes how simulation technology gives Alphabet’s Waymo engineers the capacity to “simulate a century’s worth of on-road testing virtually in just a single 24-hour period.” Another way of looking at it:  It took Waymo 10 years to log 20 million actual driving miles, and a single year to simulate 2.5 billion.
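As a quick back-of-the-envelope check, the quoted figures can be compared directly (simple arithmetic using only the numbers above):

```python
# Waymo's quoted mileage: 20 million real miles over 10 years versus
# 2.5 billion simulated miles in a single year.
real_miles_per_year = 20_000_000 / 10      # ~2 million real miles per year
sim_miles_per_year = 2_500_000_000
ratio = sim_miles_per_year / real_miles_per_year
print(f"Simulation covers {ratio:.0f}x the yearly real-world mileage")  # 1250x
```

At that pace, a single simulated day covers several years’ worth of Waymo’s historical real-world driving rate.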

As valuable as simulation is, sometimes it needs to pick up the pace, too.

That’s where the real-time radar (RTR) simulation engine takes the wheel. Ultra-fast and physics-based RTR can accomplish in minutes what used to take days.

Images Made Faster than Ever

Along with LIDAR, cameras and ultrasonic sensors, the typical AV also has multiple radar sensors for short-, medium-, and long-range sensing tasks. Long-range radars monitor traffic down the road for adaptive cruise control and collision avoidance. Shorter range sensors handle blind spots, cross traffic, and collision avoidance.

Traditionally, central processing units (CPUs) were used for automotive radar simulations. CPU architecture is fast, but not nearly fast enough to simulate complex radar systems at real-time frame rates.

Radars sample the world at up to 30 frames per second (fps). Automotive radars have multiple transmitters that broadcast hundreds or thousands of radar chirps and multiple receiving antennas that measure those signals at hundreds of frequencies for a single frame of data. Multiple-input multiple-output (MIMO) radars measure millions of data points per frame – hundreds of channels, chirps per channel, and frequencies per chirp. That’s all for one radar, and autonomous vehicles have multiple radars.
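To get a feel for the data volume, here is a rough, illustrative calculation; the channel, chirp, and sample counts below are assumed round numbers, not the specs of any particular radar:

```python
channels = 192           # assumed: e.g., 12 Tx x 16 Rx virtual MIMO channels
chirps_per_frame = 128   # assumed chirps per channel per frame
freq_samples = 256       # assumed frequency samples per chirp
points_per_frame = channels * chirps_per_frame * freq_samples
print(points_per_frame)        # 6291456 -- millions of samples per frame
print(points_per_frame * 30)   # 188743680 -- ~189M samples/s at 30 fps, for ONE radar
```

Multiply by the several radars on a vehicle and the simulation workload grows accordingly.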

Caption: Range-Doppler image of a busy street showing the radar mounted on the white car detecting the distance (range) and relative velocity (Doppler) of objects from a moving vehicle.

CPU-based simulation requires up to a minute to simulate one frame of data from one radar, even with new algorithms invented by Ansys. A revolutionary leap forward is needed.

Ansys’ Real-Time Radar (RTR) overcomes these limitations with graphics processing units (GPUs). The combined power of Ansys simulation and NVIDIA GPU acceleration not only generates data from single-channel, multi-channel, and MIMO radars, it also generates images faster than real time. Scenarios that took days, months, or years to simulate before RTR are possible in seconds or minutes. Single-channel radar simulations run over 5000x faster. MIMO radars that previously would have taken so long to simulate that no one tried now also run thousands of times faster.

Radar perception algorithms detect buildings, curbs, and other street-level objects of interest from RTR range-Doppler imagery.  The range-Doppler imagery is shown in the lower left quadrant with waterfall plots above and to the right.  Post-processed ISAR imagery is shown in the upper right.

Real-Time Simulation Enables New Applications

Ansys engineers are using RTR to create amazing new capabilities, and the difference is astonishing.

Setting up a scenario at Waymo (picture courtesy CNET): https://www.cnet.com/roadshow/pictures/waymo-castle/3/

For example, Ansys RTR took just 11 seconds to simulate a car with five radars at 250 fps traveling down a 1-kilometer (0.6-mile) busy street for a 20-second scenario using an NVIDIA RTX A6000 GPU. Because safe urban driving means contending with all kinds of hazards and distractions, we packed our scenario with 70 vehicles, 14 buildings, over 300 streetlights and a nightmarish 42 traffic signals.

Ansys RTR simulates a vehicle with five radars at over 250 fps in a busy urban environment.

Before RTR, that same simulation would have taken more than 25 hours. If that seems like a vast improvement, consider this: before Ansys developed new algorithms for Doppler processing, the simulation would have taken more than four years. RTR cuts simulation time to 11 seconds and maintains 57 fps for five radars, far faster than the 30 fps real-time metric. That is an 8000x speedup compared to 25 hours, and a nearly 3-million-times speedup compared to four years.
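The headline speedup is easy to sanity-check from the quoted times (simple arithmetic on the figures above):

```python
rtr_seconds = 11           # RTR time for the five-radar, 20-second scenario
cpu_seconds = 25 * 3600    # the quoted 25-hour pre-RTR simulation
print(f"{cpu_seconds / rtr_seconds:.0f}x")   # ~8182x, i.e. the ~8000x quoted
```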

“With real-time radar, high-fidelity simulation is no longer a barrier in the development of ADAS data pipelines. Radar sensor data can be generated at a rate never thought possible for physics-based simulations.”

  • Arien Sligar, senior principal application engineer, Ansys

RTR’s dramatic performance improvement has already paid off as an enabling technology in downstream analysis. Labeling images – identifying the locations of objects such as people, cars, and buses in a radar image – is a time-consuming effort when done by hand. RTR users produced more than 160,000 labeled images overnight, compared to 9,000 images in five days with slower simulation, or hand labeling at a cost of several dollars per image.

Ansys engineers also connected RTR to a machine learning algorithm to teach a car to drive through reinforcement learning. Ansys principal application engineer Dr. Kmeid Saad conducted a week-long webinar, Reinforcement Learning with Physics-based Real Time Radar for Longitudinal Vehicle Control, that trained a throttle-control algorithm using GPUs on Microsoft’s Azure cloud. RTR simulated radar returns at a faster-than-real-time 50-60 fps on one GPU while three other GPUs ran the driving simulator and machine learning.

Physics-Based Simulation Built on Established Methodology

The RTR simulation engine is based on the well-established shooting-and-bouncing-rays (SBR) technique for large, high-frequency scenes. RTR generates range-Doppler images that display the distance and relative velocity of objects in driving scenarios under various traffic conditions. SBR models radar reflection off objects, multi-bounce propagation through the scene, material properties, transmission through windows, and radar antenna patterns. RTR incorporates all of these real-world interactions to produce physics-based simulation results.

RTR simulates both range-Doppler images and “raw” radar chirp-versus-frequency data. Raw data is used for post-processing such as angle-of-arrival (AoA), inverse synthetic aperture radar (ISAR), object detection, perception, and object classification analysis.

RTR models radar waveforms, which influence the radar outputs. The frequency modulated continuous wave (FMCW) waveform is common in automotive radars. RTR users enter waveform details from the radar’s specification, and RTR outputs capture physics specific to the waveform, such as range-velocity coupling.
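The textbook FMCW relationships behind those outputs can be sketched in a few lines; the bandwidth, carrier, and chirp parameters below are assumed example values for a 77 GHz automotive radar, not RTR settings:

```python
c = 3.0e8                          # speed of light (m/s)
bandwidth = 4.0e9                  # assumed chirp bandwidth: 4 GHz
range_resolution = c / (2 * bandwidth)   # classic FMCW result: c / (2B)
print(f"range resolution: {range_resolution * 100:.2f} cm")   # 3.75 cm

wavelength = c / 77e9              # carrier wavelength at 77 GHz
n_chirps, chirp_time = 128, 50e-6  # assumed chirps per frame and chirp duration
velocity_resolution = wavelength / (2 * n_chirps * chirp_time)
print(f"velocity resolution: {velocity_resolution:.2f} m/s")  # ~0.30 m/s
```

Range-velocity coupling arises because a target’s motion during the chirp shifts the beat frequency, mixing the two measurements; that is the kind of waveform-specific physics the text says RTR captures.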

Being able to measure the target range and its velocity has considerable practical applications, not the least of which is keeping the AV from bumping into the car in front of it as it slows or if it suddenly stops. Given that studies indicate AVs are involved in rear-end collisions more than any other type of accident, training forward sensors to detect when there’s danger ahead is critically important.

Trained for Any Situation

Human error contributes to 94 percent of severe traffic accidents according to the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA). The ADAS/AV community is working to make ADAS/AV systems safer than humans by eliminating the preoccupied, tired, or careless person behind the wheel.

At the same time, ADAS/AV systems lack human intuition and experience. People can differentiate whether a hazard up ahead is harmless litter to ignore, a stick in the road to swerve around, or a more serious roadblock that requires jamming on the brakes. Most ADAS/AV systems cannot yet make that distinction.

But the day when they can might not be too far off. With ultra-fast tools like RTR, which can model radar bouncing off the walls and through the windows with Autobahn-like speed, automakers will be able to train driverless cars to handle almost any situation, including scenarios that were impossible to model before. And that will put consumer confidence and acceptance into high gear.

ANSYS will discuss the new RTR solver and show results of automotive scenarios in busy, complex environments at the NVIDIA GTC Conference April 12-16. Click to register.

For more presentations on autonomy, attend Ansys Simulation World, April 20-22. Click to register.

To learn more about Autonomous Vehicle Safety click here.

Also Read

The Electromagnetic Solution Buyer’s Guide

Electromagnetic and Circuit RLCK Extraction and Simulation for Advanced Silicon, Interposers and Package Designs

Need Electromagnetic Simulations for ICs?


VersionVault EDA Integration: A Differentiated Value Solution

by Kalar Rajendiran on 03-31-2021 at 10:00 am


HCL Technologies is a large, well-established multi-national company with annual revenue of around $10B and a worldwide employee count of well over 150K. They provide valuable solutions to about 20 different industries and related market segments. Over the years, I have had first-hand insight into their semiconductor design services solutions but had not heard of VersionVault. Over the course of the last 12 months or so, there have been many writeups about HCL’s VersionVault software. Following is a summary of what I gathered by researching VersionVault. The primary focus of this blog is to complement what was covered in a very recent blog by Manish Virmani, general manager at HCL Software Labs.

Before my research got underway, I figured VersionVault must be a product related to version control systems. That turned out to be just the tip of the iceberg.

VersionVault offers a safe, secure, and powerful configuration management system that provides controlled access to soft assets, including code, requirements, design documents, models, schematics, test plans, and test results, and it enables ease of hardware/software co-development. It allows for tracking and managing changes to all of a product’s assets throughout the entire lifecycle of the product.

As good a product as VersionVault is in terms of its built-in capabilities, its value to semiconductor and EDA customers is further differentiated through its integration with EDA tool suite platforms. It is currently integrated with the Cadence Virtuoso platform, and HCL is exploring integrations with more EDA tool suites, including Synopsys Custom Compiler.

Figure 1: VersionVault Virtuoso Integration

Source: HCL Software Labs

In addition to features (refer to Figure 1) such as interactive graphical schematic diff, a command-line interface, hierarchical design management through a GUI, and common tooling for SW and HW teams, VersionVault also offers the following less obvious but very important benefits, depending on your particular role in the organization, whether that is developer, engineering manager, project manager, QA manager, field support engineer, support manager, IT manager, CIO, or CTO.

Ease of Adoption and Consistent Use

For ease of adoption and consistent use in practice, anything new should fit into the regular workflow. Seamless integration with the EDA tool suite enables designers to take advantage of the core capabilities of VersionVault without leaving their familiar design environment.

Handling Multiple Versions of Product

Software products typically support multiple versions in the market at any given time. An engineer needs to be able to quickly switch from one development setup on version 1 to another on version 2. Developers should also be able to visualize the differences in versions across streams. VersionVault’s Unified Change Management feature makes that possible and enhances developer productivity.

Compliance with Procedures and Effective Management

Balancing the need for compliance with procedures against the desire for minimal overhead is a delicate act. VersionVault provides controlled access to soft assets, including code, requirements, design documents, models, schematics, test plans, and test results. User authentication and authoritative audit trail capabilities help meet compliance requirements with minimal administrative overhead.

Role-based Access and Control

As a company, you want a tool that can control access to IP based on one’s role on a per-project basis rather than just a per-user basis. VersionVault allows you to create role-based specifications of access control and reuse each specification across teams by assigning users to roles for each team. Access control can be modified at any level of the asset hierarchy, or inherited through the hierarchy if desired.
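The per-project, role-based idea can be pictured with a small conceptual sketch; this is a generic illustration, not VersionVault’s actual API or data model:

```python
# Rights attach to roles, and users are mapped to roles per team/project,
# rather than granting rights user by user.
roles = {
    "developer":  {"read", "checkout", "checkin"},
    "qa_manager": {"read", "label"},
    "auditor":    {"read"},
}
team_alpha = {"alice": "developer", "bob": "qa_manager"}  # hypothetical team

def allowed(team, user, action):
    """True if the user's role on this team grants the action."""
    return action in roles.get(team.get(user, ""), set())

print(allowed(team_alpha, "alice", "checkin"))  # True
print(allowed(team_alpha, "bob", "checkin"))    # False
```

Reusing the same `roles` table across several teams is what makes the specification project-level rather than user-level.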

VersionVault provides effective authoritative build auditing. It helps streamline the edit-build-debug cycle and accurately reproduces software versions. By detecting dependencies, reusing derived objects wherever possible, and producing detailed build audit trails, it helps ensure the reproducibility of software versions and improves build performance. This information is important for recreating a result, whether for debugging purposes or for analysis and review by a third party.

Auditing and Compliance Support

For projects within regulated industries that require every change to be captured and logged, VersionVault makes it effortless to comply. Every build of a “derived object” can automatically create a configuration record, with every tool version and every file version used in its creation recorded. The configuration record can then be used for comparison when a build goes bad, making it very easy to find which change caused the problem. Every configuration, which may consist of hundreds of thousands of files, can be recreated instantaneously, whether the configuration was created yesterday or a decade ago.
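The payoff of such configuration records is easy to picture: comparing a good build’s record against a bad one isolates the change. This is a generic sketch; the record format here is hypothetical, not VersionVault’s:

```python
# Each record maps an input (tool or file) to the version used in the build.
good_build = {"gcc": "9.3.0", "top.v": "r41", "ctrl.v": "r17"}
bad_build  = {"gcc": "9.3.0", "top.v": "r41", "ctrl.v": "r18"}

# Keep only entries whose versions differ between the two records.
diff = {k: (good_build.get(k), bad_build.get(k))
        for k in good_build.keys() | bad_build.keys()
        if good_build.get(k) != bad_build.get(k)}
print(diff)  # {'ctrl.v': ('r17', 'r18')} -- the change that broke the build
```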

Fitting Name

The product name VersionVault is a two-word mashup. When we come across the word “vault”, a common image that pops into our heads is a bank vault. Based on that, it is not unreasonable to think of VersionVault as just a safe and secure way to perform versioning. But as highlighted in this blog, VersionVault is much more than that. Out of curiosity, I looked up synonyms for the word vault and discovered that it has many: bound (as in leaps and bounds), leap, rise, safe, soar, and structure. This expanded definition seems more descriptive of the scope and extent of the product’s capabilities.

You may want to do your own evaluation of VersionVault for consideration as a solution for use at your organization.



Formal for Post-Silicon Bug Hunting? Makes perfect sense

by Bernard Murphy on 03-31-2021 at 6:00 am


You verified your product design against every scenario your team could imagine. Simulated, emulated, with constrained random to push coverage as high as possible. Maybe you even added virtualized testing against realistic external traffic. You tape out, wait with fingers crossed for first silicon to come back. Plug it into the test board and everything looks good – until an intermittent bug sneaks in. After much head scratching, you isolate a problem to read/write re-ordering misbehavior around the memory controller. Now you have to try to reproduce the problem in pre-silicon verification. Hunting for a bug you missed. But formal for post-silicon bug hunting? That’s not as strange as you might think.

Out of control

You know where this is going. There’s an interconnected set of state machines mediating interaction between the interface control unit, the buffer unit and the memory controller. In some overlooked and seemingly improbable interaction, old data can be read before a location has been updated. Not often. In the lab you only see a failure intermittently, once every 2 to 8 hours or so. Not surprising that you didn’t catch it in pre-silicon verification.  I’ve seen similar issues crop up around cache coherence managers and modem controllers.

This is where formal methods can shine, finding obscure failures in complex state machine interactions. But in this application, you’re not setting out to prove there are no possible failures – that’s pre-silicon verification. Here you want to hunt for a bug you know must exist. That takes a different approach, one that won’t present any great challenge to formal experts but can be a frustrating search for a needle in a haystack for most of us. Through much experience, Siemens EDA has developed a systematic approach they call a Spiral Refinement Methodology that should help you find that needle, without losing your mind.

Spiraling through a radar chart

They graph this refinement in a radar chart (the image in this blog is an example). The search progresses through multiple objectives at several levels. They start by reducing complexity to make formal analysis possible. Since the debug approach is formal, you first need to localize where, functionally, in the design the failure is happening. This insight typically emerges through bug triage in the lab. Then you can eliminate big chunks of the design that should not be important. And perhaps (carefully) add constraints. You will need access to formal experts, internal or external, to guide you away from pitfalls. Particularly as you start to abstract or divide up the problem to further manage complexity.

Assertions and initial state

Another key objective is to refine assertions toward the failing condition. One technique they mention here is “formal goal-posting”. This is a method to progress toward a condition through a sequence of proofs, which allows you to step out through the state space in digestible chunks rather than trying to do the whole thing in one impossible leap. Along similar lines, they stress the importance of finding a suitable initial state from which to start proof cycles. For bugs that may not crop up for several hours, you’ll need to start close in time, not just in space (function). Simulation can get you there, to set up that initial state.
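The goal-posting idea can be illustrated with a toy explicit-state sketch; real formal tools work symbolically, and the state machine, waypoints, and bounds here are purely hypothetical:

```python
# Toy "formal goal-posting": instead of one deep bounded-reachability query
# from reset to the failure, chain shorter proofs through waypoints.
def reachable(step, src, dst, bound):
    """Bounded explicit-state reachability: can dst be hit within bound steps?"""
    frontier = {src}
    for _ in range(bound):
        if dst in frontier:
            return True
        frontier = {s2 for s in frontier for s2 in step(s)}
    return dst in frontier

step = lambda s: {(s + 1) % 100}   # trivial 100-state machine
waypoints = [0, 33, 66, 99]        # reset -> intermediate goals -> failing state
# Each leg needs a depth of only ~33, instead of ~99 for the full distance.
legs_ok = all(reachable(step, a, b, 40)
              for a, b in zip(waypoints, waypoints[1:]))
print(legs_ok)  # True
```

Each leg is a shallow, cheap proof; chaining them reaches a state that a single bounded query of the same depth would miss.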

Then they refine each of these objectives. Further abstractions, further tuning assertions, finding more suitable initial states. Zeroing in on a sequence or set of sequences that can lead to that failure detected in the lab. They describe application to three example failures, including this one. In each case localizing the problem through a very systematic search.

Very nice paper. You can read it HERE.

Also Read:

Library Characterization: A Siemens Cloud Solution using AWS

Smarter Product Lifecycle Management for Semiconductors

Observation Scan Solves ISO 26262 In-System Test Issues


WEBINAR: Pulsic’s Animate Makes Automated Analog Layout a Reality

by Tom Simon on 03-30-2021 at 10:00 am


Many years ago, digital and analog design flows diverged, with digital design benefiting from increasing levels of automation and, more importantly, separation between the front-end design process and the back-end design process. While digital design still requires linkages between the front and back end, they are well defined, and the existing flows handle them in a straightforward manner. The same cannot be said for analog design. Despite the many advances in custom layout tools and improvements in the entire analog design flow, the dependencies between front end and back end have remained challenging, as have the intricacies of analog layout itself.

Pulsic has a long history of providing design tools that help improve the quality of custom digital designs and has recently turned its focus to solving the long-standing challenges of automating the analog design process. Their Animate Preview product can be used right from inside the Cadence schematic editor to begin creating and understanding the circuit layout. Because layout considerations are critical to design success, having insight into and control of the physical design helps speed up the process and improve final design quality at the same time.

Paul Clewes of Pulsic gave me a detailed look at Animate Preview and talked about their upcoming webinar on April 15th. Animate is integrated with Virtuoso and when launched adds a preview window in the lower left corner of the schematic editing view. Animate will automatically detect when an analog circuit is loaded and then identify common structures such as differential pairs, current mirrors, matched pairs, etc. Animate will generate a layout on the fly and display it in the preview window.

Quite a lot happens when this layout is generated. Users do not need to specify constraints; the current technology information is used to create DRC-correct results. The resulting layout is fully compatible with, and editable in, Virtuoso. Because Animate is aware of the structures mentioned above, it is smart when it comes to placing devices. The webinar will show several examples of how Animate intelligently places devices to ensure optimal circuit operation.

Analog circuit designers can get quick and accurate area estimates and can then go into the Animate user interface to easily and graphically control device placement, guard ring configuration, dummy device location, etc. It is easy to modify the guard rings and dummy devices as well as control relative positions for devices. Each change made in the user interface triggers an update to the layout inside of Animate.

Users can also select from a variety of aspect ratios and assign pins to the desired edge of the cell. Under the hood, Animate is creating a DRC-correct layout with proper spacing. From the user’s perspective it is a bit like using a drag-and-drop editor, but one intended for analog layout design. My first thought was of how WYSIWYG HTML editors hide the underlying HTML but let you move blocks easily to achieve the results you desire.

After talking with Paul, it was clear that Pulsic is onto something with Animate Preview. Because the layout of analog circuits is so important during circuit design, giving the circuit designer a tool to see and control the layout is going to help immensely. A lot of companies have taken a run at solving this problem, but there is a subtle combination of automation and direct control required to come up with a feasible solution. To make your own assessment of how useful this might be, feel free to watch the video here.

Also Read:

CEO Interview: Mark Williams of Pulsic

Analog IC Layout Automation Benefits

Webinar: Boosting Analog IC Layout Productivity


Webinar: Rapid Exploration of Advanced Materials (for Ferroelectric Memory)

by Tom Dillinger on 03-30-2021 at 6:00 am


There are many unsung heroes in our industry – companies that provide unique services and expertise that enable the rapid advances in fabrication process development that we’ve come to rely upon.  Some of these companies offer “back-end” services, assisting semiconductor fabs with yield diagnostic engineering and failure analysis.  Some are “front-end” companies that pursue advanced research into promising new materials and processing techniques, and then assist with technology transfer to production manufacturing.  We tend to focus on the large semiconductor foundries and their process roadmaps, yet the underlying support from these engineering firms is fundamental to the industry as a whole.

An exemplary front-end services company is Intermolecular, the Silicon Valley science hub of EMD Electronics.  They offer process development research and characterization services spanning a wide range of materials – e.g., metal alloys, oxides/nitrides, and thin films for specialized applications.

With each new process node, extensive investigations into new materials are pursued, to determine the optimum stoichiometry and electrical properties.  This is especially evident in the pursuit of alternative memory technologies.  A specific example is the introduction of new ferroelectric materials for very high density, non-volatile data storage.

Background

A ferroelectric material is a special dielectric, in that it exhibits two (stable) remanent polarization states.  The figure below illustrates the hysteresis curve of the crystalline polarization when an applied voltage is cycled across the dielectric – note the two intersections of the curve with the vertical axis when the applied voltage is zero, representing the stored “state” of the material.

The polarization is the contribution of the electric dipoles in the material to the electric flux between the terminals in the presence of an electric field.  The formula for the effective dielectric constant of the material is:  epsilon = epsilon_0 + P/E, where epsilon_0 is the permittivity of free space, P is the polarization in the material, and E is the applied field.  The curve for a conventional, non-ferroelectric material would be a straight line through the origin – i.e., no polarization when the applied voltage is removed.
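In standard notation, with D the electric flux density, the relation quoted above reads:

```latex
D = \varepsilon_0 E + P
\qquad\Longrightarrow\qquad
\varepsilon_{\text{eff}} = \frac{D}{E} = \varepsilon_0 + \frac{P}{E}
```

For a linear dielectric, P is proportional to E, so the effective permittivity is a constant; in a ferroelectric, P(E) follows the hysteresis loop, which is why two distinct states persist at E = 0.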

Note in the figure that the chemical bond orientation in the material differs slightly in the two states, resulting in the remanent polarization.

(The term ferroelectric is a bit misleading – there is no iron (Fe) constituent in the dielectric material.  The hysteresis curve of the dielectric polarization resembles the curve of a ferromagnetic material in the presence of an applied magnetic field.  After the external field is removed, the ferromagnetic material retains a magnetic polarization.  When the concept of remanent electrical polarization in a dielectric was first demonstrated, the term ferroelectricity was introduced, which has lasted.)
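A minimal two-branch saturation (tanh) model, of the kind often used to sketch ferroelectric hysteresis, shows the two remanent states directly; the Ps, Ec, and delta values below are assumed illustrative numbers, not measured HfO2 parameters:

```python
import math

# Assumed illustrative parameters: saturation polarization (C/m^2),
# coercive field (V/m), and a smearing constant (V/m).
Ps, Ec, delta = 0.30, 1.0e6, 4.0e5

def p_up(E):
    """Branch traversed while the field increases from negative saturation."""
    return Ps * math.tanh((E - Ec) / (2 * delta))

def p_down(E):
    """Branch traversed while the field decreases from positive saturation."""
    return -p_up(-E)

# At zero applied field the two branches give two distinct remanent states,
# the stored "0" and "1" of a ferroelectric bit:
print(p_up(0.0) < 0.0 < p_down(0.0))   # True
```

This is only a cartoon of the loop in the figure, but it captures the essential point: the state at E = 0 depends on the field history.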

The two polarization states of the ferroelectric material suggest that it may be used as part of a memory bit circuit implementation.  The figure below illustrates a couple of potential implementations:

One depicts the ferroelectric material as a replacement for the storage capacitor in a 1T1C bitcell.  Unlike a conventional DRAM, note that no refresh of storage charge due to leakage current is required – data storage is represented by a dielectric polarization state, not by the amount of charge stored on the bitcell capacitance.  The other implementation depicted above shows the ferroelectric material directly integrated in the dielectric gate stack of a field-effect transistor.  The two polarization states of the material will result in different threshold voltages for the device, representing the stored data value.

As you might imagine, the crystalline properties of the dielectric material are crucial to the magnitude of the polarization states and the opening of the hysteresis curve.

Intermolecular Webinar on Ferroelectric Materials Optimization

I had an opportunity to view an outstanding webinar from Intermolecular, describing their recent investigations into ferroelectric materials.  Their prototype fab capabilities provided atomic layer deposition (ALD) of a variety of hafnium oxide (HfO2) and zirconium oxide (ZrO2) films.

An image from the webinar is shown below, for the case of HfO2.  There are three crystalline phases for these oxides; however, only one demonstrates attractive ferroelectric behavior.

It is therefore necessary to ensure the process flow for depositing (and crystallizing) the film results in a very high percentage of material with the desired crystalline structure.

Another process experiment pursued by the Intermolecular team evaluated the ferroelectric properties of a stoichiometric mix of hafnium and zirconium oxides in a single thin film, as well as of stacked, separately ALD-deposited HfO2 and ZrO2 layers.

Even if you’re not directly involved in advanced process development, I would encourage you to view this webinar presentation from Vijay Narasimhan at Intermolecular.  It is extremely informative, starting with the basics of ferroelectricity, and offering insights into the R&D flow for materials evaluation – e.g., deposition/annealing, crystalline spectroscopy, and electrical characterization.

Here is the webinar replay link.

Here is a link to more information on the front-end services provided by Intermolecular.

Also Read:

Executive Interview: Casper van Oosten of Intermolecular, Inc.

Integrating Materials Solutions with Alex Yoon of Intermolecular

Ferroelectric Hafnia-based Materials for Neuromorphic ICs