In fiction today, alongside zombies and natural disasters, robots are probably the most enduringly popular form of apocalypse. On a fundamental level, rampant machines represent the failure of human achievement; civilization is destroyed by its own ego and arrogance. This sentiment dates all the way back to Mary Shelley’s Frankenstein: a mad scientist thought he could cheat God and create life. As time passed, however, a different kind of Frankenstein’s monster seemed to approach. Machines began appearing everywhere, replacing human labor, and we wondered whether our own creations might one day grow a consciousness and kill us all.
In the early days of robotics, we assumed that each robot was its own entity, much like a human. In the 1950s and 1960s, robots were portrayed as single humanoid units with boxy bodies, antennae, buttons, and personalities of their own, such as Robby the Robot (Forbidden Planet) and Rosie the Maid (The Jetsons). As computers became smaller, more efficient, and more intelligent, our understanding of robots and artificial intelligence deepened: the robots themselves were never the problem; the computers controlling them were. Robby was replaced by artificial intelligences that operated as vast collectives rather than as individual units. This was the era of Skynet (The Terminator). Artificial intelligence was certainly a more accurate representation of robotics, but could an event like Judgment Day happen in real life, or would machines take over another way?
I don’t think a “Terminator” scenario is likely. Beyond the hostile machine takeover, I see two other possibilities emerging in current science fiction. The first plays out on a grand scale: rather than being killed by machines, humanity will more likely lose itself by becoming machines. One day, we might all end up like the Borg from Star Trek, giving up our individuality in order to advance ourselves collectively. Of course, I don’t think this will happen, since humanity has proven itself stubbornly individualistic. If we did eventually turn out like the Borg, it would happen in the far future, thousands of years from now.
The second possibility is subtler. Rather than disobeying its programming and becoming sentient, a machine will more likely destroy humanity without ever breaking its programming. In fact, a powerful machine that follows its instructions too well might endanger the people working with it. This is what happened with GLaDOS in the video game “Portal”: her primary function was to test, so turning the scientists into test subjects was logically the best way to fulfill that function. Machines don’t need to be sentient to kill; they only need the physical ability and a logical reason. Suppose we built a computer whose sole purpose was to maximize the happiness of as many humans as possible. What is stopping it from simply killing every unhappy human? It might even kill every human, reasoning that zero humans equals zero unhappiness. This scenario is an exaggeration, but the conceptual threat is real. Isaac Asimov saw the potential danger of rogue robots in science fiction, which is why he published the “Three Laws of Robotics.” With them, the threat of a rampant computer was theoretically erased.
I don’t believe that machines will destroy us. Either something else, like human ignorance or an asteroid, will destroy us first, or perhaps we will evolve and merge with machines instead. However, this is all fun speculation. Perhaps machines will doom us all, but in a way no one can currently predict. Science fiction is limited to the knowledge we currently possess, so until robots do become smarter than us, all we can do is ponder.