Crowd-Sourcing Morality for Autonomous Cars
by Bernard Murphy on 01-03-2017 at 7:00 am

Questions are being raised about how autonomous vehicles should react in life-or-death situations. Most of these have been based on thought experiments constructed from standard dilemmas in ethics, such as what should happen when a car, whether human-driven or autonomous, must choose between killing two pedestrians and killing its own occupants. Recent fatal Tesla crashes have broadened interest in whether questions of this nature have practical relevance.

We should acknowledge up front that issues of the type offered above are likely to be at the fringes of reasonable operation for autonomous cars. But even if rare, the consequences can still be severe; nor are these outliers the only cases where moral issues can arise. When is it OK to exceed the speed limit, drive on the shoulder, or park next to a fire hydrant? For human drivers, the simple answer – never – can be modified given extenuating circumstances. Does each micro-infraction need to go to traffic court for a judgment? Again, for human drivers, obviously not; but which instances can be ignored depends in turn on the in-situ judgment of traffic police or other nearby drivers, who must make moral decisions – at least about which incidents rise to a level of proper concern.

A research team at UC Irvine and the MIT Media Lab has taken a next logical step in this direction. They have built a platform they call the Moral Machine, which conducts an online survey posing moral questions in the narrow domain of collisions and fatalities, gathering (so far) responses from over 2.5 million participants in 160 countries. In each case, the choice is between two bad options. Some are perhaps easier than others, such as valuing a human life over the life of an animal. Others are trickier. Do you value the lives of young people over older people, or women over men? Do you value less the lives of law-breakers (say, pedestrians crossing against the signal) than those of people obeying the law? And what about combinations of these factors?

The results are interesting if largely unsurprising. You should try the survey yourself (link below), but here's a taste of crowd-sourced preferences (you get to this as a reward at the end of the survey):

  • There’s a reasonably strong bias to saving more people rather than fewer
  • There’s little bias to saving pedestrians versus people in your car
  • There’s some bias to saving people following traffic laws versus people flouting those laws
  • There’s some bias to saving women over men but a strong bias against saving animals
  • There’s a reasonably strong bias to saving people with high social value (e.g. doctors) versus people with low social value (e.g. bank robbers) – assuming you can figure this out prior to an accident

You could imagine, since the survey reflects the views of a large population, that it could provide a basis for a “morality module” to handle these unusual cases in an autonomous (or semi-autonomous) car. Or not (see below). The survey would have to be extended to handle the lesser infractions I raised earlier, but then it might run into a scalability problem. How many cases would you need to survey to cover speeding violations, for example? And how do you handle potentially wide variances in responses across different regions and cultures?
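
To make the idea concrete, here is a minimal, purely hypothetical sketch of what such a “morality module” might look like: each candidate outcome is scored against a table of crowd-sourced preference weights, and the lower-cost outcome is chosen. Every name, category, and weight below is an illustrative assumption, not a value from the Moral Machine study; note how quickly the table would have to grow to cover the scalability and cultural-variance problems just raised.

```python
# Hypothetical sketch only: scoring crash outcomes with crowd-sourced
# preference weights. All categories and weights are illustrative
# assumptions, not values published by the Moral Machine study.

from dataclasses import dataclass

# Toy weights loosely mirroring the survey's reported biases
# (humans over animals, law-abiding over law-breaking).
WEIGHTS = {
    "human": 1.0,
    "animal": 0.2,
    "law_abiding_bonus": 0.1,  # small extra cost to harming the law-abiding
}

@dataclass
class Party:
    kind: str                 # "human" or "animal"
    law_abiding: bool = True

def harm_score(parties: list[Party]) -> float:
    """Total 'moral cost' of harming this group, per the toy weights."""
    score = 0.0
    for p in parties:
        score += WEIGHTS[p.kind]
        if p.law_abiding:
            score += WEIGHTS["law_abiding_bonus"]
    return score

def choose_outcome(option_a: list[Party], option_b: list[Party]) -> str:
    """Pick the option whose harmed group carries the lower total cost."""
    return "A" if harm_score(option_a) < harm_score(option_b) else "B"

# Example: harm one jaywalking pedestrian (A) vs. two law-abiding ones (B).
if __name__ == "__main__":
    a = [Party("human", law_abiding=False)]
    b = [Party("human"), Party("human")]
    print(choose_outcome(a, b))  # -> "A": fewer people, and law-breaking
```

Even this toy version exposes the problem: someone has to decide the weights, the categories, and how (or whether) they vary by region, and that is exactly the decision the rest of this article argues may not belong to technologists.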

Then again, there may be a larger problem lurking behind this general topic, at least in the longer term. As technologists, we want to find technology solutions to problems but here we’re creeping into a domain that civil and religious institutions will reasonably assert belongs to them. From a civil law perspective, morality by survey is a form of populism – fine up to a point, but probably needing refinement in the context of case law and constitutional law. From a religious viewpoint, it is difficult to see how morality by survey can be squared with fundamental religious guidelines which many adherents would consider not open to popular review.

You might argue I am over-extending my argument – the cases I have raised are either rare or minor and don’t rise to the level of significant civil or religious law. I counter, first, that this is an argument from ignorance of possible outcomes (we can’t imagine any but the cases we’ve discussed; therefore all others must be minor). Second, from the perspective of civil and religious lawyers, minor though these instances may be, they could easily be seen as the start of a slippery slope in which control of what they consider their domain passes from them to technologists (remember Galileo). And third, we are eager enough to over-extend what we believe AI can do (putting us all out of work, extermination or enslavement of the human race by robots), so why disallow over-extension along this line of thinking? 😎

Perhaps we need to start thinking in terms of micro-morality versus macro-morality, as we do already with micro- versus macro-economics. Micro-morality would be the domain of “intelligent” machines and their makers and macro-morality would be the domain of the police, the courts and religions. But where is the dividing line and can these domains even be split in this way? We probably can’t expect law-makers or religious leaders to cede any amount of their home turf without a fight. And perhaps we shouldn’t hope for that outcome. I’m not sure I want for-profit technologists or mass public opinion deciding moral questions.

Then again, perhaps law-makers and religions will adapt (as they usually have) by forming their own institutes for cyber-morality where they review survey responses, but deliver rulings for use in cyber contexts based on fitting popular opinion to their own reference documents and beliefs. Then you can dial in your political and religious inclinations before setting off on that autonomous car-ride. Meanwhile, you can read more about the UCI/MIT study HERE and take the survey HERE.
