Ideas about creating artificial intelligence (AI) came into prominence not long after the first computers were built in the 1950s and 1960s. Back then, when scientists and other philosophical futurists discussed AI, many of their ideas seemed whimsical and fantastical, hardly the stuff of reality. But the digital boom has since been underway for decades, and high-profile scientists (Stephen Hawking) and tech industry leaders (Elon Musk) have taken the idea very seriously.
Some writers, in fact, have been specific about the so-called AI "singularity," the historical moment when machines gain consciousness and become more intelligent than humans. Ray Kurzweil, who has authored many books on the subject, has boldly predicted when the singularity will occur: sometime around the year 2045. Kurzweil bases his prediction on the exponential growth of computer processing power (often summarized as "Moore's Law," the observation that transistor counts double roughly every two years), the rapid growth of digital RAM, and other such trends. Kurzweil remains known for his optimistic view of AI, as he believes it will initiate an irreversible change in human history.
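To see why such exponential trends invite bold forecasts, consider a minimal sketch in Python of Moore's Law-style doubling. The starting value, doubling period, and time horizons below are illustrative assumptions, not figures drawn from Kurzweil or from measured hardware data.

# A minimal sketch of Moore's Law-style growth. The starting value,
# doubling period, and horizons are illustrative assumptions only.

def projected_power(initial_power, years, doubling_period_years=2.0):
    """Project processing power assuming it doubles every fixed period."""
    doublings = years / doubling_period_years
    return initial_power * (2 ** doublings)

if __name__ == "__main__":
    base = 1.0  # normalized processing power today (assumption)
    for years in (10, 20, 40):
        print(f"After {years} years: ~{projected_power(base, years):,.0f}x")
    # Doubling every 2 years for 40 years yields 2**20, about a millionfold.

Doubling every two years compounds to roughly a millionfold increase over forty years; it is this kind of curve, rather than any single breakthrough, that Kurzweil extrapolates.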
Others are not so optimistic.
Musk, for instance, has publicly stated that he believes AI could become a major threat to the future of human civilization, while others, such as Nick Bostrom, have been neither optimistic nor fearful. In Superintelligence: Paths, Dangers, Strategies (2014), Bostrom, a philosopher and futurist, writes about the particulars of how scientists and engineers might develop AI and about the positive and negative outcomes such a development could produce.
AI is commonly divided into three kinds:
• Artificial narrow intelligence
• Artificial general intelligence
• Artificial superintelligence
Artificial narrow intelligence (or "narrow AI" for short) is already in place. It is the type of AI that uses big data and complex algorithms to produce useful outcomes. Siri on your iPhone and Google's search engine are forms of narrow AI. And as developed and impressive as these innovations may be, they still fall short of artificial general intelligence, or "strong AI." This is the type of AI that is referenced when people talk about human-level machine intelligence, and it is also the type that would usher in the so-called singularity. Strong AI would be indistinguishable from a human: it would be able to sense its environment, be conscious, reflect on itself, and make decisions.
It would not take long, however, for a machine with strong AI to surpass the intelligence of humans. Because of "recursive self-improvement," or the ability of the machine to make self-induced improvements very quickly without the need of a human programmer, some, such as Bostrom and Kurzweil, believe it would be only a matter of time, weeks or months, before a strong AI machine became superintelligent.
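A toy simulation can make that intuition concrete. It is only a sketch: the gain per improvement cycle, the speedup between cycles, and the "1000x human level" threshold are all made-up assumptions, not figures from Bostrom or Kurzweil.

# A toy model of recursive self-improvement. Every parameter here is an
# illustrative assumption; nothing below reflects any real AI system.

def days_to_superintelligence(gain_per_cycle=1.5,
                              speedup_per_cycle=1.25,
                              first_cycle_days=7.0,
                              target_capability=1000.0):
    """Count days until capability (human level = 1.0) exceeds the target."""
    capability, cycle_days, elapsed = 1.0, first_cycle_days, 0.0
    while capability < target_capability:
        elapsed += cycle_days
        capability *= gain_per_cycle       # each cycle multiplies capability
        cycle_days /= speedup_per_cycle    # a smarter system iterates faster
    return elapsed

if __name__ == "__main__":
    print(f"~{days_to_superintelligence():.0f} days to 1000x human level")

Under these invented rates, the shrinking cycle times form a geometric series that sums to at most 35 days, so any finite capability target would be reached within about five weeks; slower rates could stretch the same process into years. The point is the shape of the curve, not the specific numbers.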
Then what?
That is the question that has haunted AI researchers, philosophical futurists, and computer engineers for decades, ever since computers came on the scene. And the reality is, despite Kurzweil's optimistic predictions, no one actually knows when the singularity will take place . . . but the majority of AI researchers believe it will occur. Finally, although it seems like science fiction, the development of machine intelligence raises many unanswered questions about the nature of the human mind, consciousness, and intelligence.
References:
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin Books.