Intelligent machines, what will they do with us?

I’ve written a number of blogs on robots, androids and intelligent computers, and where it all might be heading.  We seem to see creating artificial intelligence as an inevitable goal.  I just wonder what we will do with it once we have it, or rather what it might do with us?  There is of course the doomsday outlook represented by works like ‘The Matrix’, where humans are kept alive purely for their ability to turn chemical / biological matter into energy, although I can’t help feeling there must be more efficient ways of doing that.  Personally I prefer the view of the late, great Iain M. Banks in his Culture novels, where super ‘Minds’ somehow see it as their role to look after the biological creatures that gave birth to them.

Let’s face it, when we have those intelligent machines (I don’t like the term “artificial intelligence”; to me something is either intelligent or not), or perhaps when we are close to it, they will take over designing other, even more intelligent, machines.  Then why would we let error-prone, forgetful, clumsy humans perform all those ‘thinking’ tasks?  Given recent events I would far rather have intelligent machines running the banking and finance systems.  After all, they are unlikely to be motivated by fast cars, big houses, large yachts and so on.  One question might be what will motivate them?  Will looking after us humans be enough?  Perhaps simply doing a good job will be it.  Or maybe they won’t need motivation at all?

Then there are all those tasks and jobs where ‘human error’ equals danger, such as flying, mining, driving, construction and hundreds of others.  The problem might come when the machines feel they can perform those tasks better than a human can.  Even with Asimov’s famous Three Laws of Robotics that might create areas of conflict, if those machines believe that by taking over those tasks they are protecting us from ourselves.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Intelligent machines may liberate the human race from much of what we would think of as work or the mundane (not that I’m suggesting all work is mundane), but then we have to replace it with something.  It may also involve us humans surrendering some control over our own lives, restricting our freedom, although many in the world today might debate how much real control over their lives they actually have.  However, some of this is just around the corner, e.g. cars that can sense if you are too close to the car in front, or if you wander too far from your lane.  It’s not far from there to cars that control their own speed in line with speed limits.  That is taking away control in the interest of our own safety, but how far are we prepared to go with it?

The bottom line comes down to how much we are prepared to let those machines, intelligent or otherwise, do for us in the name of safety and release from what we might consider drudgery.  If we are not careful we could be heading for a safe but very boring future.  Or perhaps those intelligent machines may open up worlds (literally and figuratively) of opportunity that are, at present, just the realm of science fiction.

As always, comments and views are welcome.

Ian Martyn
