How will autonomous cars handle rock-and-a-hard-place situations?

One of the best parts of humanity, and one of the worst, is our ability to deal with difficult situations. When it comes down to it, when our backs are against the wall, we can do something amazing. Sometimes we do something horrible, but for the most part we like to think of humans as doing the right thing, and if not, we are at least accountable. Robots, however, don't operate in the same way, and as we see more automation in our society, how will they handle the difficult moral decisions they may have to make?

Cars are a prime example. As we edge towards fully automated driving, there will be cases where a computer-controlled car makes a mistake that ends up killing someone. How will we deal with that in a legal and moral sense?

QZ, however, takes that thought experiment a bit further. What will the computers do when faced with impossible decisions? For example, if a driverless car must choose between sending you, the passenger, over a cliff or colliding with a young child that has fallen into its path, what will it do? The simple answer is: whatever it was programmed to do, because ultimately, as smart as these automatons are, they're only as smart as the programming that went into them.

Even if we create computational systems that can learn and grow over time, they won't have the same attachment to other people, and to children in particular, that real humans do. We'll need to program in a desire to keep us from harm, but when harm can't be avoided, it's difficult to imagine what a machine would do. That makes it all the more important to think about how we might program it to react.

This question was (clumsily) asked in the film adaptation of Isaac Asimov's I, Robot

Some analysts have started to argue, though, that the last thing we want is product designers making these complex moral decisions for us. While the people behind Google's driverless cars might be geniuses in their own right, their moral compasses shouldn't become the ones that all autonomous vehicles adopt. Somehow, we need to get our own morality in there.

One way of looking at it would be to give the owner of the car a choice when they buy the vehicle or when they set off on a journey. If travelling alone, they might set their own value to something roughly average, meaning they would accept the AI sacrificing their safety if a larger group of people, or the theoretical child, were in danger. When they have their family in the car with them, though, they might set the value of the vehicle's occupants much higher, reasoning that no price would be too high to pay for keeping the family safe.
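One very rough way to picture that kind of setting, purely as a hypothetical sketch and not anything any manufacturer has actually built, is as a weighted harm calculation: the owner picks an "occupant weight" at the start of a journey, and the car chooses whichever manoeuvre minimises the weighted estimate of harm. Every name, number and scenario below is invented for illustration.

```python
# Hypothetical sketch only: an "occupant value" setting weighed against harm to others.
# All names, weights and scenarios here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    occupant_harm: float    # estimated chance of serious harm to the car's occupants (0..1)
    bystander_harm: float   # estimated chance of serious harm to people outside the car (0..1)

def choose_manoeuvre(options, occupant_weight=1.0):
    """Pick the option with the lowest weighted estimate of harm.

    occupant_weight is the owner's setting: 1.0 treats occupants and
    bystanders equally; higher values prioritise the people in the car.
    """
    def weighted_harm(m):
        return occupant_weight * m.occupant_harm + m.bystander_harm
    return min(options, key=weighted_harm)

if __name__ == "__main__":
    options = [
        Manoeuvre("swerve off the road", occupant_harm=0.4, bystander_harm=0.0),
        Manoeuvre("brake in lane", occupant_harm=0.05, bystander_harm=0.5),
    ]
    # Travelling alone: equal weighting, so the car accepts risk to its occupant.
    print(choose_manoeuvre(options, occupant_weight=1.0).name)   # swerve off the road
    # Family on board: occupants weighted much more heavily, so the car brakes instead.
    print(choose_manoeuvre(options, occupant_weight=5.0).name)   # brake in lane
```

With equal weighting, the sketch sacrifices the lone occupant by swerving; with the family weighting it brakes in lane instead, which is exactly the kind of trade-off such a setting would expose.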

Likewise, though, if we went with a system like that, we'd have to respect the selfish who would set their own personal value above all others, just as we might mournfully have to accept those who wished to throw themselves under the proverbial, and literal, bus to save anyone and everything.

What do you guys think would be a good way to handle this moral dilemma?


Jon Martindale

Jon Martindale is an English author and journalist who has written for a number of high-profile technology news outlets, covering everything from the latest hardware and software releases to hacking scandals and online activism.
