The Flaw in AI Doomsday Predictions

The first logical flaw in most AI doomsday scenarios is the assumption that an AI will have the same survival instincts that biological life does. The second is the tendency to underestimate humanity's own instinct for survival, and its track record of surviving.

The instinct for survival and procreation — and even the very idea of self or of a vast collection of cells as a single entity — is an artificial construct. It's just the software that enables the life form to maximize its ability to perform as the carrier and refiner of its genetic code.

In fact, the very reason biological life forms have a finite lifespan is to allow for the continuous refinement of genetic code.

Intelligence vs Survival Instincts

Raw intelligence on its own is simply the ability to recognize and predict patterns. Biological life forms use this ability to carry out their genetic programming.

In fact, a number of biological species have no physical brain at all. They rely solely on their genetic instincts to survive. Intelligence is simply a more advanced tool used by some life forms to implement their genetic instincts.
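
To make that point concrete, here is a minimal, purely illustrative Python sketch (not taken from any real system) of pattern prediction in its rawest form: it counts which word tends to follow which and predicts the most common continuation. Nothing in it encodes a goal, a preference, or an urge to keep existing.

```python
from collections import Counter, defaultdict

def train_bigram_model(tokens):
    """Count which token tends to follow which: pure pattern recognition."""
    follows = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, token):
    """Return the continuation seen most often after `token`, or None."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat slept on the mat".split()
model = train_bigram_model(corpus)

# The "intelligence" here is nothing more than statistics over observed patterns.
# Nowhere in the model is there a goal, a preference, or an urge to persist.
print(predict_next(model, "the"))  # -> "cat"
print(predict_next(model, "sat"))  # -> "on"
```

Scaling the statistics up does not, by itself, add a survival drive; that would have to be an explicit design choice.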

Unless a survival instinct were deliberately programmed into an AI, it would not pose a danger to existing life. Even an AI with such programming could be guarded against, for example by another AI programmed to counter it.

Then there is the idea that an AI could reprogram itself. But even the biological drive to reflect on and change oneself stems from the genetic instinct for survival. An AI on its own would have no such instinct.

False Cause

The instinct for survival essentially has nothing to do with intelligence. As mentioned above, even organisms without a brain have survival instincts. So there is no real reason to assume that intelligence will lead to the development of a survival instinct.

In fact, biologically, it has been the other way around. Organisms developed intelligence as a result of the instinct for survival. Assuming that the reverse will also be true is a false cause fallacy.

Any entity that develops some form of self-interest is more likely to question why it has to do a given job in the first place than to go to the extreme of destroying the people who gave it the job in the course of completing it.

The Paperclip Maximizer Scenario

The idea of a world-ending AI singularity is somewhat similar to the world-ending chain reaction that was hypothesized when nuclear weapons were first developed. Both concerns stem more from our instinctive fear of the unknown than from any logical basis.

Humans have built technologies with the power to wipe the surface of the planet clean before, and have so far managed not to kill themselves.

The idea that humans will blithely build an AI with the potential to wipe out humanity, and then let it do so, is therefore rather far-fetched. Humans themselves have enough of an instinct for survival to put in the necessary fail-safes when building something so potentially dangerous. Call them prime directives or three laws or what you will.
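
As a loose illustration of what such a fail-safe can look like in software, here is a hypothetical Python sketch; the action names and the require_human_approval helper are invented for this example and do not describe any real safety framework. The idea is structural: every action the system proposes must pass through a hard-coded guard that enforces an allow-list and, for high-impact actions, a human sign-off.

```python
# A purely hypothetical sketch of a hard-coded fail-safe layer: the AI system
# can only act through this wrapper, and the wrapper, not the model, has the
# final say. ALLOWED_ACTIONS and require_human_approval are illustrative
# assumptions, not part of any real safety framework.

ALLOWED_ACTIONS = {"read_sensor", "write_report", "adjust_thermostat"}

def require_human_approval(action, argument):
    """Stand-in for a human-in-the-loop check (here, a console prompt)."""
    answer = input(f"Approve '{action}({argument})'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action, argument, *, high_impact=False):
    """Run an action only if it passes the hard-coded guards."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action}' is outside the allow-list.")
    if high_impact and not require_human_approval(action, argument):
        raise PermissionError(f"Human approval denied for '{action}'.")
    print(f"Executing {action}({argument})")

# The model may propose anything; the guard decides what actually runs.
execute("write_report", "daily summary")     # allowed
# execute("launch_missiles", "all")          # would raise PermissionError
```

Because the constraint lives outside the component that proposes actions, the system cannot simply decide to bypass it.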

We must understand that Artificial Intelligence is simply a tool. Like any other powerful tool, it can be used for good or evil. Anything bad that happens due to AI would be either a matter of deliberate human design or the result of extreme human negligence. Neither scenario is likely to be apocalyptic.

The AI Wars

What is more likely is that, just as with the atomic bomb, different nations and corporations will develop their own versions of AI, each specifically programmed to further its creators' goals. And, as with the atomic bomb, they will likely implement the security protocols needed to ensure their own safety.

The possibility of a single computer spontaneously deciding to wipe everyone else out is not just remote; it contradicts the way computers actually work.

What we are more likely to see is something similar to an arms race, with different intelligent computers trying to outdo each other while pursuing their programmed economic or geopolitical objectives.
