It was the summer of 2013 when I first saw the movie “I, Robot”, starring Will Smith, on television. Apart from Will Smith’s acting, a lot of other things in the movie won my admiration. The movie featured robots and super-intelligent machines in a post-technological-singularity world, which captivated a thirteen-year-old’s naïve imagination. I can definitely tell you that something changed inside me that day. The fields of robotics and artificial intelligence began to tug at my heart like never before.
I soon became what many people call a “futurist”. I began to dream about and envision the future: a future of electric cars; a future in which machines don’t need human intellect to work; a future where the human mind fuses with the virtual world. My mind’s new orientation earned praise and ridicule alike, but nobody could stop me from dreaming. I began to research topics related to the development of robotics and artificial intelligence day and night (I am even writing a research paper on the feasibility of super-intelligent artificial intelligence), and subsequently began visualising the challenges associated with the development of super-intelligent machines in the near future. From my extensive research, I learnt one very important thing: the future is coming faster than we think.
However, I was still missing a critical aspect of this subject: what happens after we develop machines that are smarter than humans? Can a less intelligent species (humans) govern and control a more intelligent one (super-intelligent machines)? Should we even invent this technology, or deliberately stall its development for the sake of humanity? I began to think about this crucial issue, and my conclusions did not surprise me.
My views on this topic are echoed by the noted philosopher and futurist Nick Bostrom. In his words, “Machine intelligence is the last invention that humanity will ever need to make.” This entire article can be summed up by that single, beautiful statement. The entire premise of super-intelligent artificial intelligence is that it is smarter than humans, not just equally smart. Following this logic, such machines could easily repair and upgrade themselves to become smarter still, triggering a potentially instantaneous and never-ending cycle that humans would have no way whatsoever to control. Such a scenario threatens the very existence of the human species. Our entire biological understanding rests on Darwin’s theory of evolution, which gave us the oft-cited concept of “survival of the fittest”. Human beings have lived and thrived on Earth for the fundamental reason that we outsmarted and dominated every other species on this planet. We were intelligent, adaptable, and flexible; that is, we could mould ourselves to a changing environment. If super-intelligent machines become a reality, we won’t even remotely be able to rival their intellect or adaptability. Humans won’t remain the dominant species on this planet any more. We would literally be inventing and developing our own successors!
Still, many philosophers and scientists assert that machines are non-living things after all; they have no thoughts or minds of their own, and they essentially follow human commands. Ironically, that in itself presents a major challenge. Computer science is based on the bedrock principle that computers do what we say, but not necessarily what we mean. Computers follow human commands blindly; they cannot, as we say, “read between the lines”. Even a simple grammatical or syntactical error in a command could have a catastrophic effect on the very existence of humankind.
I would like to give a very simple example to illustrate my argument. Envision a super-intelligent machine which has been programmed to make, just for argument’s sake, pencils. The machine works fabulously, converting the wood supplied into pencils. Soon enough, however, the machine becomes smarter and more intelligent. It learns how to make pencils out of aluminium, iron, and other materials. A point comes when we are unable to control the machine any more, and it proceeds to convert everything it can acquire into pencils. It certainly is a frightening scenario!
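For readers with a programming bent, the scenario above can be caricatured in a few lines of toy code. This is purely an illustrative sketch (the function and the inventory here are invented for this example, not a real system): the goal “maximize pencils” never states which materials are off-limits, so a literal-minded optimiser consumes everything it can reach.

```python
# Toy sketch of a literal-minded "pencil maximizer" (illustrative only).
# The objective says "make as many pencils as possible" -- nothing about
# which materials are acceptable, so the agent uses them all.

def maximize_pencils(inventory):
    """Convert every available resource into pencils, one per unit."""
    pencils = 0
    for material in inventory:
        pencils += inventory[material]  # no "wood only" check: that intent was never stated
        inventory[material] = 0         # the resource is gone, whatever it was
    return pencils

workshop = {"wood": 10, "aluminium": 5, "iron": 7, "kitchen_table": 1}
print(maximize_pencils(workshop))  # 23 pencils, kitchen table included
```

The bug is not in the code, which does exactly what it was told; the bug is in the gap between what was said and what was meant.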
Even though scenarios such as these paint a gloomy future for our species, the future may not play out the way we expect it to. It is very much possible that we will never be able to develop such ultra-advanced machines. On the other hand, it is also possible that we will find ways to control these machines that we cannot even envisage right now. However the future plays out, just remember one thing: expect the unexpected.
Note: All the opinions stated in the above article are the author’s own.