I was led to this article at Wired because of the following snip: Looks like it may be more difficult to come to grips with the ethical side of autonomous car crash avoidance than it is to solve all the technological issues. Here's another snip. Fascinating article, here's the rest.
Who's to say the algorithm chosen isn't based on cost (though this would never be admitted, for obvious reasons)? The injured party who survives, with all the hospital costs and pain and suffering that go with it... versus wrapping it up with a quick death payout...
"and surely killing someone is one of the worst things auto manufacturers desperately want to avoid." Then it follows that Asimov's 'Three Laws of Robotics' should come into play.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
It's just that Isaac Asimov could never have foreseen the implementation of a mindless nanny at the wheel watching out for the mindless driver behind the wheel.
Now this is the proper way to program the algorithm. Then, once Shakespeare's wish is fulfilled ("kill all the lawyers"), we can do away with the nanny robot drivers without getting sued for "cruelty to the mechanically enabled" and start driving with individual responsibility again. (And perhaps buy a ladder not covered in stickers.)