Evolving Landscape of Self-Driving Safety Standards
by Bernard Murphy on 11-14-2019 at 5:00 am

I sat in a couple of panels at Arm TechCon this year: the first on how safety is evolving for platform-based architectures with a mix of safety-aware IP, the second on lessons learned in safety, particularly how the industry and standards are adapting to the larger challenges of self-driving, which obviously extend beyond the pure functional safety intent of ISO 26262. Here I want to get into some detail on this range of standards, because we're going to need to understand a lot more about them if we want to be serious about autonomous cars.


Let’s start with some evolving requirements for functional safety, the topic covered by ISO 26262. The standard itself doesn’t specify what safety mechanisms should be used, but it does require (in ISO 26262-2) that you deliver credible evidence that the safety mechanisms you provide are sufficient. This is a neat little twist: the burden is on you (and your customers and suppliers) to demonstrate functional safety no matter how complex your design becomes.

And they are becoming a lot more complex. We now have designs in safety-critical (ASIL-D) systems in which some IP components do not individually meet that expectation and cannot reasonably be expected to be brought up to that level. How can a system be at ASIL-D if parts of it are at lower ASIL levels, or even safety-indifferent (QM)? The answer lies in being able to isolate and test those components regularly and, if they fail to meet expectations, leave them isolated. This has also led to the concept of a fully ASIL-D safety island, which can initiate such testing and report problems back to command central in the car, to support fail-operational responses.
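
To make the pattern concrete, here is a minimal sketch in C of how such a safety island might supervise a lower-ASIL component: isolate it, run a built-in self-test, and either reintegrate it or leave it fenced off and report the failure upstream. Every name here (ip_isolate, ip_run_bist, report_fault and so on) is a hypothetical placeholder, not a real safety-island or Arteris API.

/* Hypothetical sketch of a safety-island supervision loop.
 * All names are illustrative placeholders, not a real platform API.
 */
#include <stdbool.h>
#include <stdint.h>

typedef enum { IP_ACTIVE, IP_ISOLATED } ip_state_t;

typedef struct {
    uint32_t   id;      /* identifier of the supervised IP block */
    ip_state_t state;   /* currently in traffic or fenced off    */
} supervised_ip_t;

/* Assumed hardware hooks, provided elsewhere by the platform. */
extern void ip_isolate(uint32_t id);    /* fence the IP off the interconnect */
extern void ip_reconnect(uint32_t id);  /* restore normal traffic            */
extern bool ip_run_bist(uint32_t id);   /* built-in self-test, true = pass   */
extern void report_fault(uint32_t id);  /* notify the vehicle-level supervisor */

/* Called periodically (e.g. from a safety-island timer) for each
 * lower-ASIL or QM component integrated into an ASIL-D system.
 */
void periodic_ip_check(supervised_ip_t *ip)
{
    ip_isolate(ip->id);                 /* take the IP out of live traffic */

    if (ip_run_bist(ip->id)) {
        ip_reconnect(ip->id);           /* healthy: put it back in service */
        ip->state = IP_ACTIVE;
    } else {
        ip->state = IP_ISOLATED;        /* failed: keep it fenced off      */
        report_fault(ip->id);           /* allow a fail-operational, degraded
                                           response at the vehicle level   */
    }
}

The specific calls don't matter; the pattern does: test in isolation, reintegrate only on a pass, and report failures so the rest of the vehicle can fall back to a degraded but still operational mode.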

Another mechanism in this diagnostic framework is detecting errors through timeouts between a request and its acknowledgement on the bus; it's a pretty reasonable way to look for misbehaving components. ISO 26262-2 defines a fault handling time interval within which a fault must be detected and handled, but of course it does not specify how this should be accomplished, just as it doesn’t specify the isolation and safety island mechanisms. These are design responses to the documented requirements.
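
The timeout mechanism can be sketched the same way. Below is a minimal, hypothetical transaction watchdog in C: each outstanding request records when it was issued, and any request still unacknowledged after a configured budget (chosen to fit inside the fault handling time interval) is flagged as coming from a misbehaving component. The names and the 500 microsecond figure are illustrative, not drawn from any standard or product.

/* Hypothetical sketch of request/response timeout detection.
 * Timestamps, limits and the fault hook are illustrative only.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define TIMEOUT_US 500u   /* example budget, chosen to sit well inside
                             the system's fault handling time interval */

typedef struct {
    uint32_t target_ip;   /* IP the request was sent to            */
    uint64_t issued_us;   /* timestamp when the request was issued */
    bool     completed;   /* set when the acknowledgement arrives  */
} transaction_t;

extern uint64_t now_us(void);              /* platform timer, assumed available */
extern void     flag_timeout(uint32_t ip); /* report the misbehaving component  */

/* Scan outstanding transactions; any request still unacknowledged after
 * TIMEOUT_US is treated as evidence of a misbehaving component.
 */
void check_timeouts(transaction_t *txns, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        if (!txns[i].completed &&
            (now_us() - txns[i].issued_us) > TIMEOUT_US) {
            flag_timeout(txns[i].target_ip);
        }
    }
}

In practice this bookkeeping would typically sit in hardware, per initiator or target, rather than in a software loop, but the logic is the same.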

The Arteris IP FlexNoC Resilience Package supports all of these capabilities. DreamChip talked in an earlier panel about using Arteris IP NoC technology to build a safety island, to provide the network between IPs naturally, and to manage IP isolation and independent testing on that network. They also program and test per-IP request/response timeouts through the NoC safety features.

So far this is purely functional safety. Safety Of The Intended Functionality (SOTIF), also known as ISO/PAS 21448, is a follow-on to 26262 with the goal of defining safety at the system level: think software and ML certainly, but also misuse or environmental factors. Safety concerns here are not necessarily triggered by system failures; they can arise from scenarios that weren’t considered in the design of the autonomous driving system. A simple example might be driving on an icy road; SOTIF requires that these kinds of conditions be included in threat modeling and risk mitigation.

SOTIF is certainly a start in the right direction, though Kurt Shuler’s feeling is that it is currently rather too philosophical to be actionable in engineering design and validation practice (Kurt is VP of Marketing at Arteris IP). We’ll see how his view evolves with follow-on releases.

Another very interesting standard, sponsored by Underwriters Laboratories (UL), is UL 4600. What Kurt likes about this one is that it defines a standard of care for the design of an autonomous vehicle. This must be presented as very methodical documentation of:

  • Why the developer thinks the vehicle is safe
  • Why we should believe their argument
  • A list of #DidYouThinkOfThat? cases, which allow lessons learned to be incorporated

This isn’t a metric and doesn’t set absolute standards for what should be considered safe or what kinds of tests should be run, but it does insist on a comprehensive list of safety cases with goals and claims that must be shown to be supported by evidence. A list of possible exceptions/cases not tested must also be included (and can evolve). This is, at minimum, very auditable. I certainly think it is an important step.
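
To show what that structure might look like in practice, here is a small illustrative data model for such a safety-case inventory. The field names are shorthand for the standard's ideas of goals, claims, evidence and open exceptions, not terminology UL 4600 itself mandates.

/* Illustrative data model for a UL 4600-style safety-case inventory.
 * The field names are illustrative, not mandated by the standard.
 */
#include <stddef.h>

typedef struct {
    const char  *goal;            /* what the vehicle must achieve safely     */
    const char  *claim;           /* the developer's argument that it does    */
    const char **evidence;        /* pointers to test reports, analyses, ...  */
    size_t       evidence_count;
    const char **open_exceptions; /* known untested or excluded cases,        */
    size_t       exception_count; /* expected to evolve as lessons are learned */
} safety_case_t;

The value is auditability: every claim points at concrete evidence, and the list of untested cases is explicit and can grow as lessons are learned.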

These are the main standards Kurt thinks are important today for autonomous driving. Progress is being made, though there’s still a lot of work to be done to determine more exactly how we should define safety in autonomous vehicles, much less how we should implement it. But we’re advancing.
