
Autonomous Driving Still Terra Incognita
by Bernard Murphy on 12-12-2019 at 6:00 am

I already posted on one automotive panel at this year’s Arm TechCon. A second panel I attended was a more open-ended discussion of where we really stand in autonomous driving. Most of you probably agree we’ve passed the peak of the hype curve and are now into the long slog of connecting hope to reality. There are a lot of challenges, not all of them technical; this panel did a good job (IMHO) of exposing some of the tough questions and acknowledging that answers are still in short supply. I left even more convinced that autonomous driving is a hard problem that needs a lot more investment and a lot more time to work through.

Whither self-driving?

Panelists included Andrew Hopkins (Director of Systems Technology, Arm), Kurt Shuler (VP Marketing, Arteris IP), Martin Duncan (GM of the ADAS, ASIC Division at ST) and Hideki Sugimoto (CTO, NSITEXE/DENSO). Mike Demler of The Linley Group moderated. There was some recap of what we do know about functional safety, with the sobering observation that this field (as understood today) started over a decade ago. Through five generations of improvements, we now feel we more or less understand what we’re doing for this quite narrow definition of functional safety. We should keep that in mind as we approach safety for autonomous driving, a much more challenging objective.

That led to the million-dollar question – how do you know what’s good enough? Even at the pure functional-safety level there is still anxiety. We’re now mixing IPs designed to meet ASIL levels with IPs designed for mobile phones, built with no concept of safety. Are there disciplined ways to approach this? I heard two viewpoints: certainly safety islands and isolation are important, and modularity and composability are important. However, if interactions between subsystems are complex, you still need some way to tame that complexity, to analyze and control it with high confidence. Safety islands and isolation are necessary but not sufficient.

In case you’re wondering why we don’t force everything to be designed to the highest safety standards, the answer is ROI. Makers of functions for phones have a very healthy market which doesn’t need safety assurance. They’re happy to have those functions also used in self-driving cars, but they’re not interested in doubling their development costs (a common expectation for meeting safety standards) to serve that currently tiny and very speculative market. And no one can afford to build this stuff from scratch to meet the new standards.

The hotter question is how safety plays with AI, which is inherently non-deterministic and dependent on training in ways that are still uncharacterized from a safety perspective. ISO 26262 is all about safety in digital and analog functionality; as in much of engineering, we know how to characterize components, subsystems and systems, and we can define metrics and methods to improve those metrics. We’re much less clear on any of this for AI. The “state of the art” in autonomy today seems to be proof by intimidation – we’ll test over billions of miles of self-driving and that will surely be good enough – won’t it? But how do you measure coverage? How do you know you’re not testing similar scenarios a billion times over rather than billions of different scenarios? And how do you know that billions of scenarios would be enough for acceptable coverage? Should you really test trillions, quadrillions, ...?
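
To make the coverage question concrete, here is a minimal Python sketch (my illustration, not anything the panel proposed): bucket each logged mile into a coarse scenario class and see how concentrated the log really is. The scenario features and the skewed weights are invented; real scenario taxonomies are far richer.

```python
# Sketch: raw miles driven are a poor coverage metric because logs tend to
# revisit the same easy scenario class over and over. All features/weights
# here are hypothetical.
import random
from collections import Counter

random.seed(1)

WEATHER = ["clear", "rain", "fog", "snow"]
ROAD    = ["highway", "urban", "rural"]
ACTOR   = ["none", "car", "pedestrian", "cyclist"]

# Simulate a long drive log, heavily skewed toward benign conditions.
counts = Counter()
for _ in range(1_000_000):
    w = random.choices(WEATHER, weights=[90, 6, 2, 2])[0]
    r = random.choices(ROAD, weights=[80, 15, 5])[0]
    a = random.choices(ACTOR, weights=[85, 10, 3, 2])[0]
    counts[(w, r, a)] += 1

total_classes = len(WEATHER) * len(ROAD) * len(ACTOR)
top_class, top_count = counts.most_common(1)[0]
print(f"{len(counts)} of {total_classes} scenario classes seen")
print(f"top class {top_class} covers {top_count / 1_000_000:.0%} of miles")
```

Even this toy log puts well over half its “miles” into a single clear-highway-no-actor class – more miles alone tell you little about coverage of the rare classes that matter.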

This led on to SOTIF (safety of the intended functionality), an ISO follow-on to 26262 intended to address safety at the system level. Kurt’s view is that this is more of a philosophical guide than a checklist, useful at some level but hardly an engineering benchmark. There’s also a new standard emerging from Underwriters Laboratories (UL), UL 4600, which as I understand it is primarily a very disciplined approach to documenting use-cases and the testing done per use-case. That seems like a worthwhile and largely complementary contribution.

Getting back to mechanisms, one very interesting discussion revolved around a growing suspicion that machine learning (ML) alone is not enough for self-driving AI. We already know of a number of problems: non-determinism, the coverage question, spoofing, security issues and difficulty of diagnosis. Should ML be complemented by other methods? A popular trend in a number of domains is to make more use of statistical techniques. This may sound odd; ML and statistics are very similar in some ways, but they have complementary strengths. For example, statistical methods are intrinsically diagnosable.
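
As a toy illustration of that diagnosability point (mine, not the panel’s): a simple logistic regression exposes a per-feature weight you can inspect directly, something a deep network does not offer out of the box. The feature names and data below are invented.

```python
# Sketch: a statistical model you can interrogate. The fitted coefficients
# say how each (hypothetical) feature pushes the brake/no-brake decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["obstacle_distance_m", "closing_speed_mps", "lateral_offset_m"]

# Synthetic data: "brake" (1) when distance is low and closing speed is high.
X = rng.normal(size=(1000, 3))
y = ((-X[:, 0] + X[:, 1]) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Diagnosis step: inspect per-feature influence on the decision.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>22}: {coef:+.2f}")
```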

Another mechanism, drawn from classical AI, is rule-based systems. Some of you may remember ELIZA, a very early natural-language system based on rules. Driving is to some extent a rule-based activity (following the highway code, for example), so rules could be a useful input. Of course simply following rules isn’t good enough. The highway code doesn’t specify what to do if a pedestrian runs in front of the car, or how to recognize a pedestrian in the first place. But it’s not a bad starting framework. On top of that, a practical system needs the flexibility to make decisions in situations it hasn’t seen before, and the ability to learn from mistakes. We should also recognize that complex rulesets may have internal inconsistencies; intelligent systems need to be able to work around these.
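
Here is a hypothetical sketch of what such a rule layer might look like: highway-code-style rules checked in priority order over a world state that ML perception would supply. All names and rules are invented for illustration; a real system would be vastly more sophisticated.

```python
# Sketch: a tiny rule-based driving layer. First matching rule wins;
# anything unmatched falls through to learned/default behavior.
from dataclasses import dataclass

@dataclass
class WorldState:
    pedestrian_ahead: bool
    light: str            # "red", "amber", "green"
    speed_limit_kph: int
    speed_kph: float

# (predicate, action) pairs, evaluated top to bottom by priority.
RULES = [
    (lambda s: s.pedestrian_ahead,              "emergency_brake"),
    (lambda s: s.light == "red",                "stop"),
    (lambda s: s.speed_kph > s.speed_limit_kph, "slow_down"),
]

def decide(state: WorldState) -> str:
    for predicate, action in RULES:
        if predicate(state):
            return action
    return "proceed"  # no rule fired; defer to the learned policy

print(decide(WorldState(False, "red", 50, 48.0)))  # -> stop
```

The ordering is itself a design decision: with a flat list like this, conflicting rules are resolved by priority, which is one simple way to “work around” the internal inconsistencies mentioned above.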

The panel closed with a discussion on the explosion in different AI systems and whether this is compounding the problem. The general view was that yes, there are a lot of solutions, but (a) that’s a natural part of evolution in this domain, (b) some difference is inevitable between, say, audio and vision solutions, and (c) some difference will likely be essential between high-end, high-complexity solutions (say vision) and lower-complexity solutions (say radar).

All in all, a refreshing and illuminating debate, chasing away some of the confusion spread by the popular pundits.

Circling back to our safety roots, if you’re looking for a clear understanding of ISO 26262 and what it means for chip design teams, a great place to start is the paper, “Fundamentals of Semiconductor ISO 26262 Certification: People, Process and Product.”
