In recent years, there’s been a lot of talk about computing, machine learning, and artificial intelligence. And as computer technology grows ever more complex, people have begun to fall into two camps: those who welcome advancing A.I., and those who worry about it. In the first group are people like Ray Kurzweil, the inventor and futurist known for his books about a coming golden age of artificial intelligence. In the latter camp are well-known scientists and Silicon Valley figures like the late physicist Stephen Hawking and Elon Musk.
Certainly, there are intelligent arguments on both sides of the debate, so it’s worth asking: whose vision of the future of A.I. is likely to come true?
Although it’s impossible to answer that question for certain, looking back on the past 60 years of computing may give us some clues as to how the future might unfold.
The first place to look is the 1960s, when Gordon Moore, co-founder of the Intel Corporation, made the observation that would later be known as “Moore’s Law.” In short, Moore predicted that computing capacity (the number of transistors on a chip, and with it processing speed and memory) would double roughly every two years, a figure often quoted as 18 months. Not only was Moore broadly correct, but Moore’s Law has become almost legendary among tech enthusiasts for its accuracy and its place in computing history.
In short, you can thank Moore’s Law for the better performance of each new smartphone or smart device you unwrap at Christmas. The law holds because more and more transistors (a transistor is essentially a gate that regulates the flow of electrons) can be squeezed into smaller and smaller spaces, which in the case of computers means silicon chips.
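To get a feel for what exponential doubling actually means, here is a minimal Python sketch of the projection. The starting point (the Intel 4004’s roughly 2,300 transistors in 1971) and the two-year doubling period are standard textbook figures, not numbers from this article, so treat the output as an illustration rather than a measurement:

```python
def projected_transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count assuming a fixed doubling period (in years)."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Starting from ~2,300 transistors in 1971 (the Intel 4004):
for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{projected_transistors(2300, 1971, year):,.0f}")
```

After 50 years of doubling every two years, the projection lands in the tens of billions of transistors, which is roughly where today’s largest chips actually sit.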
But do you see a problem with this situation?
Some, like the physicist Michio Kaku, have pointed out that eventually (sometime between the 2020s and the 2040s) there will be a natural limit to how many transistors can fit on a silicon chip. Once transistors shrink to the width of a few atoms, electrons begin to leak across their barriers through quantum tunneling, and the gates can no longer switch reliably. In other words, the end of Moore’s Law could mean the end of exponential growth in processing power.
That is, unless computer engineers figure out a way around the problem. Some ideas are already in development, such as using photons (light) as a medium for transmitting information (yes, I know it sounds like science fiction) or building quantum computers. But these technologies have not yet reached the point of being affordable or practical.
In the meantime, although A.I. assistants such as Alexa and Siri are impressive, today’s computers are nowhere near the complexity of the human brain. Will this change?
We may need to wait another 50 to 100 years to find out whether computers will ever become more capable than the human brain.