John DiMarco on Computing (and occasionally other things)
I welcome comments by email to jdd at cs.toronto.edu.

Mon 02 Sep 2019 20:14

Existential threats from AI?

[Image: nuclear explosion. Image by Alexander Antropov from Pixabay]
Plenty has been written about the possible threats to humanity from Artificial Intelligence (AI). This is an old concern, a staple of science fiction since at least the 1950s. The usual story: a machine achieves sentience and pursues its own agenda, harmful to people. The current successes of machine learning have revived this idea. The late Stephen Hawking warned the BBC in 2014 that "the development of full artificial intelligence could spell the end of the human race". He feared that "it would take off on its own, and re-design itself at an ever increasing rate," and worried that human beings, "who are limited by slow biological evolution, couldn't compete, and would be superseded." Henry Kissinger, in a thoughtful essay in The Atlantic last year, worried that "AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data." Elon Musk, in a debate last month with Alibaba's Jack Ma, reported by WIRED, argued that "there's just a smaller and smaller corner of what of intellectual pursuits that humans are better than computers. And that every year, it gets smaller and smaller, and soon will be far far surpassed in every single way. Guaranteed. Or civilization will end."

Are they right? Is there an existential threat to humanity from AI? Well, yes, I think there actually is one, but not quite in the way Musk, Kissinger, or Hawking fear. Computers have been better than humans in many cognitive domains for a long time: they remember things more accurately, process things faster, and scale better than humans in many tasks. AI, particularly machine learning, increases the number of skills at which computers are better than humans. Humanity has spent the last couple of generations getting used to an arrangement in which computers are good at some things and humans are good at others, so it can be disconcerting to have this upended by computers suddenly becoming good at things they weren't good at before. I understand how this can make some people feel insecure, especially highly accomplished people who define themselves by their skills. Kissinger, Musk and Hawking fear a world in which computers are better at many things than humans. But we have been living in such a world for decades; AI simply broadens the set of skills in question.

As a computer scientist, I am not particularly worried about the notion of computers replacing people. Yes, computers are developing new useful skills, and that will take some getting used to. But I see no imminent danger of AI producing an artificial person, and even if it did, I don't think an artificial person is an intrinsic danger to humans. Yet I agree that there are real existential threats to humanity posed by AI. These are not so much long-term or philosophical; to me, they are eminently practical and immediate.

The first threat is the same sort of threat as that posed by nuclear physics: AI can be used to create weapons that harm people on a massive scale. Unlike nuclear bombs, AI weapons do not do their harm through sheer energy discharge. Rather, machine learning, coupled with advances in miniaturization and mass production, can be used to create horrific smart weapons that learn: swarms of lethal adaptive drones that seek out and destroy people relentlessly. A deep commitment to social responsibility, plus a healthy respect for the implications of such weapons, will be needed to offset this danger.

The second threat, perhaps even more serious, comes not from AI itself but from the perceptions it creates. AI's successes are transforming human work: because of machine learning, more and more jobs, even white-collar ones requiring substantial training, can be replaced by computers. It is not yet clear to what extent jobs eliminated by AI will be offset by new jobs created by AI, but if AI results in a widespread perception that most human workers are no longer needed, this perception may itself become an existential threat to humanity. The increasingly obvious fact of anthropogenic climate change has already fueled the idea that humanity itself can be viewed as an existential threat to the planet. If AI makes it possible for some to think that they can have the benefits of society without keeping many people around to do the work, I worry we may see serious consideration of ways to reduce the human population to much smaller numbers. This is a dangerous and deeply troubling idea, and I believe a genuine appreciation for the intrinsic value of all human beings, not just those who are useful at the moment, will be needed to forestall it. Moreover, a good argument can be made from future utility: we cannot accurately predict which humans will be the great inventors and major contributors of the future, the very people we need to address anthropogenic climate change and many other challenges. If we value all people, and build a social environment in which everyone can flourish, many innovators of the future will emerge, even from unexpected quarters.

Threats notwithstanding, I don't think AI or machine learning can be put back into Pandora's box, and as a computer scientist who has been providing computing support for machine learning since long before it became popular, I would not want it to be. AI is a powerful tool, and like all powerful tools, it can be used for many good things. Let us build a world together in which it is used for good, not for harm.
