Sunday, June 11, 2017

Welcome to Monsters Part 4: The Monster in the Machine

Before I begin today's blog, I have an announcement. A few weeks ago, I worked up the courage to submit a short story I had written to a literary magazine. Today I got my first rejection email. Yeah, I know it's not good news. However, it did help me get over my fear of rejection. I'm going to keep trying until I get that sweet acceptance. Even if that means getting hundreds of rejections.

I am a glutton for punishment after all...

Monsters are very often associated with the unknown, and what is more unknown in today's society than where technology will take us next? There are dozens to hundreds of movies, books, and video games out there where robots or AIs have had enough of humanity's shenanigans and decide we must be terminated.

There's even a movie called Terminator, which has this very basic plot point.

Which one is out to get us, Mr. Smith?

Whenever AI (artificial intelligence) or robots are brought up in the media, there's always this underlying worry that one day they'll go on a murderous rampage against humanity. It's not a new fear. Frankenstein's monster could easily fall into this category if he were made of metal instead of flesh.

Now, do I believe that humans will be eradicated by machines?

No, not at all. I highly doubt the apocalypse will happen because of angry robots or an AI system gone rogue. Sure, they might replace us in the workforce, but it's highly unlikely they will suddenly get minds of their own and wipe out humanity. Someone, or a group of someones, will hack the machines first.

If you think about it, the Three Laws of Robotics have humans covered. That is, if we're certain these three laws are totally applicable. I believe they are and, without some human interference, unlikely to fail. If you're unfamiliar with Asimov's Three Laws, they are listed below:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Technology, for the most part, is a tool that helps humankind. Like all tools, it can cause accidents. I'm far more worried about a driverless car glitching and causing an accident than I am about a robot realizing that I am obsolete. If the robot apocalypse does happen, it will be because some stupid and/or vengeful human programmed it to.

That doesn't mean there aren't other monsters in the machine to worry about.

Technology is changing daily, and it really is only a matter of time before we begin to merge with our tools. Cyborgs already exist (no, not the DC character, though that would be totally awesome and I can't wait for the movie). I'm talking about people (or really any organic beings) who have biomechatronic body parts. How common will a mechanical arm be in the future? What about implants that allow us to access the Internet with our eyes?

My biggest fear is what hackers will be able to do with all this technology.

When Stephen King wrote the novel Christine, do you think he was envisioning a world where driverless cars would someday be the norm? Probably not, since Christine was a possessed car. But think about it. How likely is it that a hacker might be able to kill someone with a car that drives itself? What about killer drones?

Then there's what happens in Ghost in the Shell. For the record, I'm only familiar with the 1995 anime movie, and one of the creepiest parts I remember is about a guy who believes he has a wife and daughter. Except he doesn't. In fact, he doesn't have any family whatsoever. You see, the people in Ghost in the Shell have merged themselves with technology; very few people are 100% organic in this world. The villain of the story had "ghost-hacked" this man and fabricated his memories. No matter how much evidence was shown to him, he truly believed his wife and daughter were waiting for him at home.

Ghost in the Shell (at least the 1995 version) raises a lot of questions about how technology and humans will interact in the future: questions about identity, reproduction, and what it means to be alive. There's a sense that, at some point, humans might not be able to disconnect themselves from the technology they so heavily rely on.

Which is why I don't think robots are the monsters. It's what people do with the technology that I'm worried about.

So, why are there so many stories featuring rogue AIs and robots as monsters? I think it comes back to the age-old question of why humans are here. Why do we think? Why are we so curious, creative, and destructive? The philosopher René Descartes once offered "cogito ergo sum," or "I think, therefore I am," as proof of one's own existence. We can only prove our own existence (and, by extension, that of other humans) because we know we can think. What we don't know is what else can think or, at least, has some form of self-awareness. How will we be able to tell if a machine has gained some form of self-awareness?

Answer: we can't. And that's what makes technology such a threat.

So don't be scared of technology. Be afraid of who programmed it. Be afraid of who can hack it. Because humans are the ones behind it all.

If you enjoyed this post (or it really pissed you off), please like, share, and/or leave a comment. I love hearing from my readers and I hope you guys like hearing from me. I have at least two more blog posts planned for this particular series, so I hope you are enjoying my take on monsters.

Until next week!
