A(.I.)pocalypse

December 9, 2017

Elon Musk and I have a lot in common. We both think space is awesome, we’re both worth billions of dollars, and we’re both terrified of A.I.

If you don’t know, A.I. stands for artificial intelligence, or basically making robots into humans (I mean, what could go wrong?). The A.I. that exists right now ranges from helpful (think Siri) and amusing (remember Cleverbot?) to vaguely irritating (targeted advertising on Facebook). But this is only the beginning of A.I.’s trajectory. It’s heading towards powerful, self-evolving software that mirrors human learning.


Elon Musk has been an A.I. doomsdayer since 2014, when he said in a speech at MIT that A.I. was humanity’s “biggest existential threat.” Think about that for a second. There’s a lot of shit to be worried about in the world right now: the escalating nuclear crisis on the Korean peninsula, Russian hacking, climate change, Saudi Arabia and Iran teetering on the brink of war, Donald fucking Trump. But the biggest existential threat for humanity is something that is quietly encroaching further and further into our daily lives. And it’s not just Musk ringing the alarm bells. One of the leaders of DeepMind (even that sounds sinister), the lab that is spearheading A.I. development, has said, “I think human extinction will probably occur, and technology will likely play a part in this.”

Consider the robot future for a hot second. Imagine that scientists develop a super-powerful, super-intelligent being that they can no longer control (think Frankenstein’s monster but way, way worse). We will try to use A.I. for policing and weaponry, but I seriously doubt these machines will be content to serve humanity. If you don’t think they would take one look at human society and say “I think we’d all be better off without all of…this,” you’re kidding yourself.


Sentient A.I. would also open up a minefield of new human rights questions (and you know, on the whole, we haven’t done so hot with those). Once A.I. develops into a thinking, decision-making, feeling, evolving being, what really differentiates it from a human? Maybe you saw CHAPPiE, the 2015 sci-fi film about a robot police force. If you haven’t, spoiler alert: one of the robot policemen becomes sentient and chaos ensues. My boyfriend at the time thought it was a heartwarming story about A.I. I thought it was terrifying.

Musk’s critics dismiss him as a reductive, sensationalist fearmonger. And it is true that tweeting about the impending robot apocalypse doesn’t leave much room for nuance. But Twitter is only one part of his approach. Musk also wrote a letter to the UN asking for regulations on A.I. weapons, and he is an advisor at the Centre for the Study of Existential Risk. He is doing the nuanced work, and using a sensationalist platform to draw attention to an issue that he (and I) think is salient, to say the least.


Category: featured, Reflections, Science and Technology

Ellen Asermely

About the Author

Ellen Asermely is a senior (!) in the Pardee School studying International Relations. Born and raised in Rhode Island, the smallest but weirdest state, she enjoys coffee milk, the Big Blue Bug, and Awful Awfuls. In her free time, Ellen can be found by the ocean, eating anything with cheese on it, reading Harry Potter, or hugging strangers' dogs.
