‘Machines of Loving Grace,’ by John Markoff

With so much breathless coverage of cutting-edge technology, it can be hard to remember the past. Not too long ago, autonomous vehicles (like Google’s driverless car), widely used digital servants (like Siri), and family robots (like Jibo) seemed like things we’d only encounter in science fiction. In “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots,” John Markoff goes behind the headlines and reconstructs the long and winding road that led here.

Markoff is a technology and science writer for the New York Times, so it’s not surprising that the Pulitzer Prize winner discusses the recent history of computer development (from the 1950s to today) with a reporter’s sensibilities. He fills pages with facts about who did what, why something got done in a particular way, and what significance all of this moving and shaking has for specialists and the rest of society.

Markoff did his homework and capably tackles interesting things: why artificial intelligence legend Marvin Minsky was motivated to quip that “if we’re lucky” machines will “keep us as their pets,” how Google was able to hide its driverless car program in plain sight, why Watson’s “Jeopardy” victory is less impressive than meets the eye, why the abysmal failure of Microsoft’s intelligent office assistant Clippy didn’t portend subsequent chapters of human-machine interaction, what led cybernetics trailblazer Norbert Wiener to stop trying to present his reservations about the “ultimate machine age” in the New York Times, and why debates about automation’s impact on work are surrounded by misleading assertions (ATMs killed off bank tellers) and flawed analogies (Instagram killed Kodak). Of course, there are plenty of other gems, too.

All writers — reporters included — have opinions, and two strong and deeply interconnected ones influence Markoff’s account. First and foremost, he rejects technological determinism: Technical pathways aren’t set in stone, hotly discussed hypothetical inventions aren’t destined to flood the market, and experts specializing in how information is processed, automated and used will make decisions that largely determine which technologies the future holds. If this thesis is correct, good decisions from the right people will lead to a desirable tomorrow and bad ones might bring dystopia.

But how can good decisions be distinguished from bad ones? Markoff doesn’t directly answer this question. Instead, his adamant and frequent refrain is that ethical judgment needs to play a prominent role when technology designers choose their research plans. To focus our attention on this point, he frames the entire book as a plea for two entrenched and divisive camps to communicate better with each other.

There are the folks developing artificial intelligence to replace humans with computers. And then there are the people creating computational tools to assist human decision-making and augment our limited human faculties. If these groups don’t mend their dichotomous ways, Markoff warns, nasty things lie ahead. Humans will become increasingly obsolete. And human judgment will be increasingly confined to assessing trivial matters.

While there’s much to praise Markoff for, a few issues will make readers uneasy.

The far-reaching scope of the inquiry is at odds with the readerly quality of the narrative. Without writing a colossal tome, you simply can’t do a deep dive into any single person or project while also covering lots of ground. Predictably, aiming for breadth and a controlled word count makes it feel as if Markoff jumps too quickly between short-form reflections, connecting dots between flat characters and mere summary descriptions. Some will be grateful for a big-picture education. Others will be disappointed and wish Markoff had followed a more circumscribed path.

Markoff could have spent more time defending his anti-determinist outlook and demonstrating that it remains viable within the constraints that shape design decisions. For example, when discussing automation in the workplace, he depicts capitalism as generating “inevitable” outcomes. If artificial intelligence gets sophisticated enough, he acknowledges, we’ll say goodbye to all the jobs it can replace at a lower cost, including those of “white collar and professional workers.”

Now, the market provides powerful incentives for entrepreneurial technologists. And Markoff himself says that “it will be truly remarkable” if Silicon Valley “rejects a profitable technology for ethical reasons.” This leaves the reader wondering why he would believe designers can make choices that will prevent job-destroying tools from being created, or else do something to counteract their results. Quickly mentioning that Toyota has a corporate vision for keeping humans in the loop doesn’t cut it here. Frankly, policy remedies might be needed, but Markoff curiously downplays them.

For all his lofty appeal to ethics, Markoff sells contemporary conversations short. Yes, he discusses recently voiced concerns about health care robots, military robots and cyber-servant dependency. But Markoff also uncharitably dismisses philosophical reflections on self-driving cars and distorts why philosophers emphasize variations of the Trolley Problem.

In its classic form, the Trolley Problem is a thought experiment that asks whether you would pull a lever to save several people from being hit by a runaway trolley — an action that would divert the vehicle onto another track and kill someone there. Among other things, the scenario helps us think about whether there’s a meaningful difference between actively killing and passively allowing people to die. The variation for self-driving cars ponders what the vehicles should be programmed to do if, say, they’re on a collision course with an “errant school bus carrying forty innocent children.”

For Markoff, the answer is obvious: choose the “lesser evil” and remember that far too many accidents come from human drivers making mistakes. But as Patrick Lin convincingly argues, it can be tricky to identify what the lesser evil actually is.

If you believe it’s clearly a matter of saving as many lives as possible and couldn’t possibly be anything else, you’re overlooking the conflicting ethical principles at stake in the decision-making. You’re also brushing off the ethical issues associated with machines making moral decisions on our behalf — potentially without us understanding how they’re programmed and without us consenting to their coded values.

Even coping with “incoming rear-end collisions” requires nuanced considerations. Like the rest, these considerations are sparked by a willingness to acknowledge that ethical depth lies beneath the obvious surface.

Evan Selinger is a philosophy professor at the Rochester Institute of Technology. His writing has appeared in Wired, the Atlantic, Slate and the Wall Street Journal. E-mail: books@sfchronicle.com

Machines of Loving Grace

The Quest for Common Ground Between Humans and Robots

By John Markoff

(Ecco; 378 pages; $26.99)
