Wikipedia summarizes:
The Three Laws of Robotics are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
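Read as an algorithm, the Laws form a strict priority ordering: the First Law trumps the Second, which trumps the Third. Here's a minimal sketch of that ordering in Python -- the `Action` fields and the way I score obedience and self-preservation are my own hypothetical framing, not anything from Asimov or the article:

```python
# A hypothetical reading of the Three Laws as a strict priority ordering.
# The Action fields and scoring below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # would this action injure a human?
    allows_human_harm: bool  # would it let a human come to harm through inaction?
    obeys_order: bool        # does it follow a human order?
    preserves_robot: bool    # does the robot survive?

def permitted(action: Action) -> bool:
    # First Law: no harming humans, by action or by inaction.
    return not (action.harms_human or action.allows_human_harm)

def choose(actions: list[Action]) -> Action:
    # Among First-Law-permitted actions, prefer obedience (Second Law),
    # then self-preservation (Third Law).
    candidates = [a for a in actions if permitted(a)]
    if not candidates:
        raise ValueError("no action satisfies the First Law")
    return max(candidates, key=lambda a: (a.obeys_order, a.preserves_robot))
```

Note that in the scenario below, every available action harms someone, so this tidy hierarchy simply throws up its hands -- which is rather the point.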
It's 2025. You and your daughter are riding in a driverless car along Pacific Coast Highway. The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can't get traction. Your car does some calculations: If it continues braking, there's a 90% chance that it will kill at least three children. Should it save them by steering you and your daughter off the cliff?
This isn't an idle thought experiment. Driverless cars will be programmed to avoid collisions with pedestrians and other vehicles. They will also be programmed to protect the safety of their passengers. What happens in an emergency when these two aims come into conflict?
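To make the conflict concrete, here is one way to imagine a collision-avoidance module weighing the two aims. To be clear, the cliff-survival probability and the function names below are invented for illustration; nobody has published how the real systems decide:

```python
# Hypothetical expected-harm comparison between the two available maneuvers.
# The 90% figure and the three children come from the scenario; the 50%
# fatality guess for going over the cliff is invented purely for illustration.

def expected_fatalities(p_fatality: float, people_at_risk: int) -> float:
    return p_fatality * people_at_risk

harm_if_brake = expected_fatalities(p_fatality=0.9, people_at_risk=3)   # children in the crosswalk
harm_if_swerve = expected_fatalities(p_fatality=0.5, people_at_risk=2)  # you and your daughter

# A purely utilitarian controller picks the smaller expected harm --
# a moral judgment buried in two lines of arithmetic.
decision = "keep braking" if harm_if_brake < harm_if_swerve else "steer off the cliff"
print(decision, harm_if_brake, harm_if_swerve)
```

On these made-up numbers, the utilitarian answer is to send you and your daughter over the cliff -- which is exactly why it matters who writes, and who gets to inspect, that arithmetic.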
The author raises a real concern and discusses how such things should be regulated. He notes:
Google, which operates most of the driverless cars being street-tested in California, prefers that the DMV not insist on specific functional safety standards. Instead, Google proposes that manufacturers “self-certify” the safety of their vehicles, with substantial freedom to develop collision-avoidance algorithms as they see fit.
But he says that's not good enough:
That's far too much responsibility for private companies. Because determining how a car will steer in a risky situation is a moral decision, programming the collision-avoiding software of an autonomous vehicle is an act of applied ethics. We should bring the programming choices into the open, for passengers and the public to see and assess.
I wonder how the public would assess this issue. Let's take the same case today, with a person driving the car. How many people would say that they would go over a cliff to avoid killing pedestrians? It's actually a harder question than you think, and you might have a different answer in real time than in the abstract.
I'm guessing that, in real time, the instinctive action for most of us would be to swerve to avoid the children, not realizing fully that, in doing so, we'll go over the cliff.
In contrast, if we had a chance to calmly consider the scenario in advance, we might have mixed emotions.
For example, you might say, "Well, even if I go over a cliff, the car will protect me from harm; whereas if I hit the children they'll likely die. So, I'll take my chance with the cliff."
Or, you might say, "My obligation is to my own child first, and I'm not going to risk killing her by going over a cliff. I'm not violating the speed limit, and it's not my fault if there's sand on the road. I'll do my best to stop, but if I can't, so be it. These things happen."
Eric offers the following thought:
Some consumer freedom seems ethically desirable. To require that all vehicles at all times employ the same set of collision-avoidance procedures would needlessly deprive people of the opportunity to choose algorithms that reflect their values. Some people might wish to prioritize the safety of their children over themselves. Others might want to prioritize all passengers equally. Some people might wish to choose algorithms more self-sacrificial on behalf of strangers than the government could legitimately require of its citizens.
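In software terms, the "consumer freedom" Eric describes might be nothing more exotic than an exposed weighting parameter. The sketch below is, again, purely hypothetical -- no manufacturer offers such a setting -- but it shows how small the ethical dial could be:

```python
# Hypothetical owner-chosen "ethics setting": a single weight on harm to the
# car's passengers relative to harm to people outside the vehicle.
# The harm figures reuse the made-up expected-harm numbers from the sketch above.

def weighted_harm(passenger_harm: float, outsider_harm: float, passenger_weight: float) -> float:
    return passenger_weight * passenger_harm + outsider_harm

settings = {
    "my family first": 3.0,        # passengers count triple
    "everyone counts equally": 1.0,
    "self-sacrificial": 0.5,       # passengers count half
}

for label, w in settings.items():
    brake = weighted_harm(passenger_harm=0.0, outsider_harm=2.7, passenger_weight=w)
    swerve = weighted_harm(passenger_harm=1.0, outsider_harm=0.0, passenger_weight=w)
    print(f"{label}: {'keep braking' if brake < swerve else 'steer off the cliff'}")
```

Whether a regulator, a manufacturer, or the owner should get to turn that dial is precisely the question Eric is raising.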
Lest you think this provides too much freedom of choice, Eric reminds us that today's drivers also engage in implicit moral choices:
There is something romantic about the hand upon the wheel — about the responsibility it implies. But future generations might be amazed that we allowed music-blasting 16-year-olds to pilot vehicles unsupervised at 65 mph, with a flick of the steering wheel the difference between life and death.
He notes:
A well-designed machine will probably do better in the long run. That machine will never drive drunk, never look away from the road to change the radio station or yell at the kids in the back seat.
What would Isaac say?
Here's what I worry about, more than this ethical question. As we've seen in the medical world--e.g., with regard to robotic surgery, femtosecond lasers, and proton beam therapy--there is an inexorable push to adopt new technologies before we determine that they are safer and more efficacious than the incumbent modes of treatment. Corporations have a financial imperative to push technology into the marketplace, employing the "gee whiz, this is neat" segment of early adopters to carry out their marketing, leading to broader adoption. All this happens well before society engages in the kind of thoughtful deliberation suggested by Eric. Meanwhile those same corporations take advantage of the policy lacunae that emerge to argue for less government interference. Unnecessary harm is done, and then we say, "These things happen."
Let's remember what Ethel Merman's character said in It's a Mad, Mad, Mad, Mad World, when Milton Berle's character reported on a terrible traffic accident in just that manner: "We gotta have control of what happens to us."
Funny, my first thought was "How cool is it that we've lived long enough that this guy was part of our lives and yet we've lived long after his death." But, that's my view of a lot of things.
There is of course no one right answer to this. It seems this is a never-ending problem as humanity gains some measure of control over things: what do we do with the new power? Who gets to benefit and who doesn't? Where do you draw the line?
In medicine, for every new (expensive) life-saving treatment there's the question of who will get it.
I saw an article about this driverless car problem somewhere recently, and it wasn't just about deciding to kill your family or a mob. The idea in that article was *which* mob would you smash into?
In non-robotic reality we (implicitly?) know it's beyond human choice ("there was nothing she could do"). Programming a device in advance seems repugnant. And the article (which I don't recall clearly) mentioned the option of a robotic coin toss.
And I guess that's one big problem at the limits of what they prefer to call the "autonomous" car.
Very interesting and.... your timing is uncanny...
Monday's XKCD cartoon: http://xkcd.com/1613/