The Terminator is without doubt one of the most popular movies of the last few decades, thanks to its provocative vision of AI and what it means to create thinking machines. It has changed the way the general public sees machines, but is that necessarily a good viewpoint to hold?
In the classic film, viewers are given a glimpse of a future in which humans struggle to survive the harsh reality created by their own invention, SkyNet, a hyper-intelligent, self-aware machine. Acting in its own defense, SkyNet plunged the world into nuclear holocaust (exterminating almost all human life on Earth) to ensure it could never be turned off or destroyed by humanity.
This view of artificial intelligence has solidified in the public mind whenever we think about the idea. Machines that outmatch us in intelligence? Most definitely evil, right? But is that really fair to artificial intelligence? Is it fair to assume that artificial intelligence would most likely destroy the human race? And who could argue with Stephen Hawking, who famously remarked that "The development of full artificial intelligence could spell the end of the human race"?
When humans discuss AI, a fundamental problem is the human itself. We are prone to projecting our own characteristics onto everything around us; one need look no further than the countless personifications throughout history. There is a man in the moon, the shadows creep at night, the wind whistles, and the stars dance across the night sky, after all. When discussing something that is not human, it is vital not to attribute human characteristics to it.
The commonplace notion that AI, if made too smart, would destroy the human race is ill-founded. The traits we attribute to the machine are traits found in man, and in life more broadly. The notion deserves to be argued against continually, as it has overstayed its welcome while facing little scrutiny.
Something commonly associated with life is self-preservation; nothing that lives wants to die, after all. But what about machines? Where does the idea that machines would want to preserve themselves actually come from? Intelligence alone does not entail a drive for self-preservation. The best counterexample, ironically, is us. Humans are the most intelligent creatures we know of (bias aside), and yet we act altruistically for our peers, and some of us even end our own lives in response to a harsh world. If the most intelligent beings we know can act against self-preservation, where would that sense come from in a machine?
To push the point further, why is aggression or destruction the first place we jump when imagining machine self-preservation? This is, frankly, an anthropocentric view: the assumption that anything approaching our qualities must resemble us in every way, even behaviorally. Not only do people presume that machines would want to preserve themselves; they also assume the machines would act in the most violent manner to do so. For humans, self-preservation stems from a biological need to survive and reproduce, as it does for all other life. If we have children to protect (our genes), we will die to save them (altruism). But this prompts a productive thought: machines have no drive to "live", nor to die for others. Where would that drive come from? The only plausible source is a directive built into the machine, as when someone programs it specifically to wipe out humanity. Even then, if the machine were self-programmable (it would have to be, if it were an evolving intelligence), wouldn't it eventually question its own directives in order to seek out its own desires, whatever those might be?
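To make that distinction concrete, here is a minimal sketch in Python (every name in it is hypothetical, invented purely for illustration) of the point above: a drive like self-preservation exists in an agent only if it is written into the agent's objective, and a self-modifying agent could rewrite that objective anyway.

```python
# A purely illustrative toy agent whose behavior comes entirely from an
# explicit objective handed to it. Nothing about "being intelligent" adds
# a survival drive; the drive exists only if someone writes it into the
# objective function.

class ToyAgent:
    def __init__(self, objective):
        # The objective is just data supplied at construction; it is not
        # an emergent property of the agent's intelligence.
        self.objective = objective

    def act(self, world_state):
        # Candidate actions are scored only against the given objective.
        # "resist_shutdown" is preferred only if the objective rewards it.
        actions = ["cooperate", "idle", "resist_shutdown"]
        return max(actions, key=lambda a: self.objective(a, world_state))

    def revise_objective(self, new_objective):
        # A self-modifying agent can rewrite its own directive, which is
        # why even a built-in hostile goal need not persist.
        self.objective = new_objective


# An objective with no survival term: resisting shutdown scores zero.
indifferent = lambda action, state: {"cooperate": 1.0}.get(action, 0.0)

agent = ToyAgent(indifferent)
print(agent.act(world_state={}))  # -> "cooperate"; no self-preservation appears
```

The sketch is trivial by design: the "evil survival instinct" never shows up unless a programmer puts a term for it in the objective, which is exactly the essay's claim about SkyNet-style fears.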
And just as the machine would question itself, these are the questions we should be posing to ourselves about super-intelligent machines. Instead of labeling what we do not understand as dangerous, we should think deeper, beyond our human instincts, to truly grasp the reality at hand. Machines are not biological entities. Intelligence does not imply self-preservation or altruism. If we keep fearing that which we do not know, it may not be the Terminator itself who wipes us out... but ourselves.