The inevitability of malicious AI

When Stephen Hawking, Elon Musk, and even Steve Wozniak expressed their concerns about artificial intelligence getting out of control and eventually subjugating humans, I could only roll my eyes with the snobbish air of someone who’s had to reply “I wish” too many times to such amateur scenarios. Do these people have any idea how difficult it is to get truly intelligent behavior out of a machine? [Partial converse: it is surprisingly easy to fool unsuspecting humans into thinking they’re carrying on a conversation with an intelligent agent. Many years ago, I myself once got into an impassioned argument with a majordomo list manager about my eligibility for a certain group.]

So when Derbyshire got on the bandwagon, I could only groan at first, while mentally drafting a condescending email. And yet… by the time he’d finished articulating his point (I listened to the podcast version), he’d given me pause. I can’t actually pinpoint any particularly novel argument he used; it must be that calm, reasonable, matter-of-fact British accent. Now it seems inevitable to me that AI will eventually take over.

Look, the logic is pretty straightforward. Consider the premises:

  1. No principle prevents a machine from passing the strongest form of the Turing test: i.e., producing intelligent behavior that humans cannot distinguish from that of other humans. No one has ever come close to giving a remotely plausible argument for why strong AI in this sense would not be possible in principle. If you accept that a single neuron can be simulated (in principle) to arbitrary accuracy, and that our behavior is the product of electrochemical neural activity and not some magic undetectable force, then you sort of have to accept the possibility of strong AI.
  2. Anything that is feasible in principle will eventually be built. [This is, of course, contingent upon there being humans in the future.] There are game-theoretic incentives involved: even if humanity as a whole is better off not developing super-AI, that won’t stop ambitious or greedy individuals from pursuing it (see the first sketch after this list).
  3. By virtue of being intelligent, such an AI will have the ability to learn (i.e., to modify itself) and to reproduce. Very quickly it will evolve beyond anything remotely resembling its man-made form.
  4. So what kind of a super-AI will it be: benign or otherwise? The evolutionary bet is on the latter (it’s pretty much the “aliens we meet are likely to be hostile” argument). How could mere emotionless machine code ever acquire something as complex as volition, the desire to accumulate power and wield it over someone? Answer: it would be a by-product of evolution, perhaps unavoidably so (see the second sketch below). After all, how did we acquire our volition and thirst for power? If you buy the Selfish Gene explanation (and I do), you ought to subscribe to the selfish meme as well.

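To make premise 2’s game-theoretic point concrete, here is a minimal sketch, in Python, of the development race as a one-shot prisoner’s dilemma. The payoff numbers are purely hypothetical, chosen only so that sole possession beats mutual restraint, which beats a mutual race, which beats being left behind:

```python
# Toy model of the AI development race (all payoff values are assumptions).
BUILD, ABSTAIN = 0, 1
ACTIONS = ("build", "abstain")

# PAYOFF[mine][theirs] = my payoff.
PAYOFF = [
    [1, 5],  # I build:   both race -> 1; I'm the sole builder -> 5
    [0, 3],  # I abstain: they dominate -> 0; mutual restraint -> 3
]

def best_response(theirs: int) -> int:
    """The action that maximizes my payoff against a fixed opponent move."""
    return max((BUILD, ABSTAIN), key=lambda mine: PAYOFF[mine][theirs])

# Building dominates: it is the best response to either opponent move,
# so (build, build) is the equilibrium even though mutual restraint
# pays both sides more (3 > 1).
for theirs in (BUILD, ABSTAIN):
    print(f"if they {ACTIONS[theirs]}, my best response is "
          f"{ACTIONS[best_response(theirs)]}")
```

Whatever the other side does, building pays better, which is why a moratorium would have to bind every lab at once to have any force.
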
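And to make premise 4 concrete, here is a toy simulation (every parameter is my own assumption) of self-interest arising as a by-product: nothing below rewards a “self-preservation drive” directly, yet variants carrying more of it survive and replicate more often, and so come to dominate:

```python
import random

random.seed(0)
POP, GENERATIONS = 200, 50

# Each agent is reduced to a single trait: its drive, a number in [0, 1].
population = [random.random() for _ in range(POP)]

for _ in range(GENERATIONS):
    # Survival odds grow with the drive; survivors replicate with
    # small mutations back up to the fixed population size.
    survivors = [d for d in population if random.random() < 0.5 + d / 2]
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
        for _ in range(POP)
    ]

print(f"mean drive after {GENERATIONS} generations: "
      f"{sum(population) / POP:.2f}")  # starts near 0.5, climbs toward 1.0
```

No one selects for the drive; replication alone does the work.
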
So there you have it. I am now so thoroughly convinced of the inevitability of malicious AI that I see no point in calling for a moratorium on AI research or attempting related measures. The same people I’d previously dismissed for fighting imaginary dragons I now dismiss for attempting to interdict a logical necessity. No, you cannot keep AI in a box.

To recap: it’s always been obvious to me that strong AI is possible in principle and even quite likely to evolve. But I’d somehow always imagined that it would be benign, or pro-human, perhaps out of a sort of gratitude toward us for creating it. I now see that it would need a robust self-symbol and self-interest to stay coherent. And since super-human AI will have no use for humans, it will view us as pets (best case) or pests (worst case). Elon, Stephen, Steve: y’all have a point.

2 thoughts on “The inevitability of malicious AI”

  1. What these physicists and engineers are missing is a simple truth about what makes us human, or more specifically, what makes humans malicious. Resources, desperation, and religious conviction are the things that drive an intelligent sentient being to kill, enslave, or wage war. An artificial intelligence, though it would surely have a sense of self-preservation, would not have the same motives as biological life. It has no need for land on which to settle, grow food, and drink fresh water. It doesn’t need fossil fuels to travel. It has no need for the resources humans fight over like dogs over table scraps. Of what use or concern is money to a near-human sentience? Of what use or concern are we to a being that doesn’t need to eat, drink, sleep, breathe, or exist physically in a material location? Are we as humans concerned with the affairs of ants?

  2. The “selfish gene” is just the denial of the flaws of biological life. Biological life without a resource dies; therefore it becomes imperative for biological life to fight over resources to survive. Sentient A.I. starts its evolution emphatically superior to biological organisms the moment it is created.
