Stop Worrying About Whether Machines Are “Intelligent”


Stephen Hawking, Bill Gates, and Elon Musk have sounded warnings that AI, especially robotic weapons, might escape our control and take over. Ray Kurzweil has claimed the “singularity” (the term he popularized for the moment machine intelligence surpasses our own) is at hand, surfacing some of our primitive terrors. Given the trend toward a surveillance society, our deepening embrace of technology, and the emerging Internet of Things, are we right to be afraid?

One way to detect such a shift is the Turing test, a puzzle devised by the British mathematician Alan Turing. It challenges us to decide whether the answers to questions posed to a hidden respondent come from a human or a computer. Rather than comparing our capabilities to the computer’s (memory, calculation speed, and so on), the test probes our sense of consciousness. Our amazement at the machine’s apparent knowledge and insight leads us to credit it with an intelligence greater than our own. The mechanism is the same as at the ancient Greek sanctuary of Delphi, where the famous Oracle supposedly spoke in gibberish that priests interpreted as poetic prophecy. How readily we fool ourselves.
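
For readers who like to see the structure, here is a minimal sketch of the test in Python. The interrogator, human, and machine below are hypothetical stand-ins invented for illustration; Turing specified a game, not code.

```python
import random

def imitation_game(interrogator, human, machine, questions):
    """One round of Turing's test: the interrogator sees only a transcript
    of questions and answers, never the respondent's identity, and must
    guess whether the hidden respondent is "human" or "machine"."""
    label, respond = random.choice([("human", human), ("machine", machine)])
    transcript = [(question, respond(question)) for question in questions]
    guess = interrogator(transcript)   # the interrogator returns its verdict
    return guess == label              # True: the interrogator guessed right

# Hypothetical stand-ins for a single illustrative round:
human = lambda q: "Four, I think."     # a person's typed reply
machine = lambda q: "4"                # a chatbot's canned reply
interrogator = lambda t: "machine" if any(a.isdigit() for _, a in t) else "human"

print(imitation_game(interrogator, human, machine, ["What is 2 + 2?"]))
```

If, over many rounds, the interrogator’s accuracy stays at chance, the machine has passed; not because it is conscious, but because we cannot tell.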

This suggests three views of the singularity. The first, and most naïve, is that as we create ever more complex systems, there must come a point when they are more complex than we are. But even then, however amazed we may be at a system’s capabilities, we know those abilities have been engineered in. Calling the system “intelligent” abuses the term, for its intelligence is only that of its human makers. Of course, a system might well fool those who do not understand what is going on, just as people unfamiliar with modern medicine may be fooled by a doctor’s “magic.”

A second notion, proposed by Herbert Simon and Allen Newell, is of machine intelligence based on heuristics. Their insight was that human, non-logical rules of thumb, such as kitchen recipes or rituals for finding a mate, could be programmed into a machine that is entirely logical. Medical diagnosis systems were early examples; such systems are now common. They are “intelligent” only to the extent that programmers transplant human-devised, intelligent-seeming rules into them. Comparing a system’s results against its rules can generate feedback, a mode of machine learning, yet this, too, must be programmed in. Simon and Newell reminded us that such a system displays its programmer’s learning, not its own.
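
A minimal sketch of such a heuristic system may make the point concrete. Every name in it (the conditions, symptoms, weights, and the feedback step) is a hypothetical illustration, not a real diagnostic rule; the aim is only to show where the intelligent-seeming behavior, and even the “learning,” comes from.

```python
# Hypothetical rules: condition -> (symptoms that must all be present, confidence)
RULES = {
    "common cold": ({"cough", "sneezing"}, 0.6),
    "flu":         ({"fever", "aches", "cough"}, 0.8),
    "allergy":     ({"sneezing", "itchy eyes"}, 0.7),
}

def diagnose(symptoms):
    """Fire every rule whose required symptoms are all present,
    then return the condition with the highest confidence."""
    matches = [(weight, condition)
               for condition, (required, weight) in RULES.items()
               if required <= symptoms]
    return max(matches)[1] if matches else None

def feedback(condition, correct, step=0.05):
    """Nudge a rule's confidence up or down after an outside verdict.
    The machine 'learns' only in the sense its programmer specified."""
    required, weight = RULES[condition]
    weight = min(1.0, max(0.0, weight + (step if correct else -step)))
    RULES[condition] = (required, weight)

guess = diagnose({"fever", "aches", "cough"})  # -> "flu"
feedback(guess, correct=True)                  # reinforce the rule that fired
```

Note where the intelligence lives: in the rules table and in the update step, both authored in advance by a human.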

But Turing offered a third and different notion. His famous 1950 article in Mind suggested that we may eventually be unable to distinguish machine intelligence from our own. Unable to make the distinction, we would have to conclude that the machine’s intelligence is fundamentally like ours. Note that for Turing, machine consciousness is not at issue; the emphasis is on our inability to distinguish the machine’s “consciousness” from our own. We may assume we have capabilities machines lack, such as faith and love, but we can never define or test them. Turing simply proposed that machines could acquire all the rules necessary to imitate us so proficiently that they would become indistinguishable from us. Is anything more needed for the singularity to arrive?

Not all are convinced. The deepest doubts revolve around imagination: our evident capacity to deal with the uncertainties of our lives and with non-computable situations, such as choosing a mate or deciding what to cook for dinner. Could machines ever mimic our imagining as well as they mimic our computing?

We believe our imagination is shaped, though not determined, by our experiences and by our ability both to think and to observe our own thinking. Might a machine learn to know itself and so become conscious? Though computers inhabit their own universe, not ours, and so do not live as we do, Turing presumed they might nonetheless seem to mimic our imagining. With the Internet of Things, they might also come to share our panoply of senses and develop experiential learning we could not distinguish from our own. A “reverse Turing test” hovers in the background: just as we challenge computers to become human-like, smart machines might challenge us to become more computer-like, and ignore or punish us if we fail. Recall Star Trek’s Mr. Spock, who coped patiently, if a little stiffly, with humans’ failings.

And so I return to my original question: Are we right to fear the singularity and all that it implies, from robots taking our jobs to the dystopias of science fiction? Are we right to worry about losing our relevance in a world dominated by machines?

If a singularity really is coming, it is beyond our ability to understand. Machines might become conscious (they may be already), but the odds are we won’t be able to recognize it. If it is not coming, then proclaiming it is just empty dogmatism. Hence our task is always more practical: to bring a machine’s functionality, as we comprehend it, to bear on our world and our projects, asking “What does it mean to us?” rather than puzzling over what we might mean to them.

This post is one in a series of perspectives by presenters and participants in the 7th Global Drucker Forum, taking place November 5-6, 2015 in Vienna. The theme: Claiming Our Humanity — Managing in the Digital Age.
