Re: A glimpse of the future?
Posted: Sat Nov 21, 2020 9:55 pm
tulamide wrote:The biggest hurdle so far is not what people might think. It's ethics. Yep, the really tough questions of what to do in really dangerous situations. Your car becomes unstable. If you steer left you will hit a woman with her baby. The chance they will be hurt is 100%, the chance they'll die is 50%. If you steer right you will hit an old man. The chance he dies is 100%. What to do?
Truth is, we don't make decisions in such situations, where only the blink of an eye stands between you and a catastrophe. Instead we act instinctively, and that involves life experience. AI can't act instinctively. It has to make a decision. And that's the ethical problem. Is it ok to kill an old man? He won't live very long anyway, right? Or is it acceptable to hit the woman with her child, as there is a chance they'll survive? And what about the passengers in the car? Should the car decide based on their well-being?
The answer to your last question is almost certainly "yes"...because that's how a human driver instinctively behaves in the kind of situations you describe. Is that ethical? Perhaps not, judged with 20-20 hindsight, but the alternative is expecting the car's controller (or by extension, its programmer) to "decide" in ways that the human driver is completely incapable of under those conditions.
But I'm concerned that endlessly debating how the autonomous car will react in extreme edge-of-the-envelope no-win situations completely ignores the benefit (in terms of saved lives and reduced injuries) that the tech will produce in the 99.99999% of everyday driving it can handle better than the human driver. In other words, the accidents that simply never occur because the car doesn't get tired/drunk/distracted/enraged/whatever.