Philosophers all around the world are wondering, “What will happen when we have machines which are more intelligent than we are?” This event has come to be known as the “technological singularity”. Wikipedia defines the event as follows:
Technological singularity refers to the hypothetical future emergence of greater-than-human intelligence. Since the capabilities of such an intelligence would be difficult for an unaided human mind to comprehend, the occurrence of technological singularity is seen as an intellectual event horizon, beyond which the future becomes difficult to understand or predict. Nevertheless, proponents of the singularity typically anticipate such an event to precede an “intelligence explosion”, wherein superintelligences design successive generations of increasingly powerful minds. The term was coined by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement or brain-computer interfaces could be possible causes for the singularity. The concept is popularized by futurists like Ray Kurzweil and widely expected by proponents to occur in the early to mid twenty-first century.
It’s certainly an interesting idea to ponder. My guess is that once we have nanobots which can be noninvasively injected into the human brain, giving detailed neuron-by-neuron readouts of organisms, including ourselves, interacting and living in real-world scenarios, we’ll be able to analyze the human brain and all of its functions in ways we never have before. This will lead to unprecedented progress in our understanding of intelligence and our minds, and ultimately I think it inevitably leads to machines which can do everything we can do, and do it better.
I don’t imagine that such an event will happen overnight, or even within a few decades, however. Scientists won’t build some artificial brain in a lab and then hit the “start” button. It will happen gradually. Computers and phones will get more and more advanced, with ever-increasing intelligence. Handheld gaming devices, such as the PlayStation Portable, will become ever more immersive, blending into the real world. We already have video games where you can have one-on-one combat with augmented-reality beings on your kitchen table. This technology is the fruition of decades of research into what’s called machine vision. Very soon we’ll have games where you can do a jump kick and knock your opponent into the teacup there on your table; the victim will fall in, there’ll be a huge splash, and the screen will say “K.O.” The handheld computer will use its camera to become aware of the environment it’s in and virtually manipulate it on screen.
Currently, machine vision is capable of reading in images from a camera to build a 3D environment, but analyzing the properties of the objects within the field of view is still rather rudimentary. For example, the computer can build you a 3D geometric model of the teacup and wrap a texture around that model, but it doesn’t know that the fluid within the cup is tea, that humans drink tea, that it’s a fluid which flows according to the laws of physics, that the teacup would shatter if you picked it up and threw it, that if you grabbed the teacup by the handle and turned it over the tea would spill out, and so on. We know these things from experience, and this huge database of knowledge is absent from our current machines. But machines are rapidly advancing, and as I said before, once nanobots are inside our brains, we’ll decode how our brains perform this task, and machines will be able to learn all these things too.
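To make that gap concrete, here’s a toy sketch (every name and value in it is made up for illustration) contrasting the geometric model a vision system can hand us with the experience-based knowledge it lacks:

```python
from dataclasses import dataclass, field

# What today's machine vision can recover: shape and surface appearance.
@dataclass
class GeometricModel:
    vertices: list   # 3D points reconstructed from camera images
    texture: str     # image patch wrapped around the mesh

# The commonsense layer that is still missing: what the object is for
# and how it behaves. Humans build this up from experience.
@dataclass
class CommonsenseEntry:
    name: str
    properties: set = field(default_factory=set)

teacup_mesh = GeometricModel(
    vertices=[(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.1, 0.0)],
    texture="porcelain_pattern.png",
)

teacup_knowledge = CommonsenseEntry(
    name="teacup",
    properties={"holds liquid", "breakable", "spills when inverted"},
)

# The reconstruction answers "where is it and what does it look like?"
# but only the knowledge entry answers "what can it do?"
print(len(teacup_mesh.vertices))                     # geometry: 3 points
print("breakable" in teacup_knowledge.properties)    # behavior: True
```

The point of the sketch is that the second structure has to be filled in somehow, and today’s vision pipelines only produce the first.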
This is the thing a lot of singularitarians fail to realize. Intelligence is not just about the AI algorithm and data-processing methods. Even if you designed an artificial brain inside a humanoid robot which could perfectly emulate human learning and thinking, and this machine could think far more quickly than we can, the robot would still have to go out into the world, experience things, watch how objects behave, and learn. The robots will get more intelligent by interacting with us and the world. I agree with the neuroscientist Jeff Hawkins and his opinion on the singularity:
“If you define the singularity as a point in time when intelligent machines are designing intelligent machines in such a way that machines get extremely intelligent in a short period of time–an exponential increase in intelligence–then it will never happen. Intelligence is largely defined by experience and training, not just by brain size or algorithms. It isn’t a matter of writing software. Intelligent machines, like humans, will need to be trained in particular domains of expertise. This takes time and deliberate attention to the kind of knowledge you want the machine to have.”
“Machines will understand the world using the same methods humans do; they will be creative. Some will be self-aware, they will communicate via language, and humans will recognize that machines have these qualities. Machines will not be like humans in all aspects, emotionally, physically. If you think dogs and other mammals are conscious, then you will probably think some machines are conscious. If you think consciousness is a purely human phenomenon, then you won’t think machines are conscious.”
“The term ‘singularity’ applied to intelligent machines refers to the idea that when intelligent machines can design intelligent machines smarter than themselves, it will cause an exponential growth in machine intelligence leading to a singularity of infinite (or at least extremely large) intelligence. Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run. We would end up in the same place; we’d just get there a bit faster. There would be no singularity.
“Exponential growth requires the exponential consumption of resources (matter, energy, and time), and there are always limits to this. Why should we think intelligent machines would be different? We will build machines that are more ‘intelligent’ than humans, and this might happen quickly, but there will be no singularity, no runaway growth in intelligence. There will be no single godlike intelligent machine. Like today’s computers, intelligent machines will come in many shapes and sizes and be applied to many different types of problems.
“Intelligent machines need not be anything like humans, emotionally and physically. An extremely intelligent machine need not have any of the emotions a human has, unless we go out of our way to make it so. No intelligent machine will ‘wake up’ one day and say ‘I think I will enslave my creators.’ Similar fears were expressed when the steam engine was invented. It won’t happen. The age of intelligent machines is starting. Like all previous technical revolutions, it will accelerate as more and more people work on it and as the technology improves. There will be no singularity or point in time where the technology itself runs away from us.”
– Jeff Hawkins, Neuroscientist
Machines will increase in intelligence, and I think we can expect the rate of technological change to speed up, but I don’t think machines will suddenly start recursively upgrading themselves, leading us to practically god-like intelligence levels within a few decades.
One of my heroes, Steven Pinker, professor of psychology at Harvard, has also shared his opinion on the singularity:
“There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles–all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.”
– Steven Pinker
But even so, amazing advances are coming very quickly. I’ve read news stories about machines already driving cars down the highway without human intervention. As this technology matures and processing capabilities improve, truck drivers and taxicab drivers will be replaced by AI.
You’ve seen checkout counters at the grocery store become automated. You’ll see this sort of thing happening everywhere, and if you try to walk out of the store without paying, cameras with AI will notify the police who will come and grab you.
I imagine that future computers will all be equipped with cameras which will notice you as you walk up to them, recognize your face, and load your settings. There will be all kinds of advances like this. On that note, Google’s new image search functionality shows some promise. You can upload an image and they’ll search their image database for images of similar things. The computer knows what the image is and finds you similar things. This is just the beginning.
Search engines will get more and more intelligent, and before too long we’ll just talk to our computers and they will know what we want them to do. They’ll eventually be building us custom information reports based on tasks we assign them. They won’t do keyword searches; they’ll go out and scour all human knowledge, then compile us an intelligent report on any topic we ask about.
I personally guess that computing will become more and more cloud-like. Computers will become more and more connected, and we won’t see them as distinct units, as we do now. Processors will be embedded underground and throughout the environment, and when we need a computationally intensive task done, the “job” will be delegated out to this cloud of parallel computers all working together. That won’t happen for a long time, but I’m just speculating.
The “singularity” won’t be an abrupt happening; rather, there will be a gradual increase in our technology until eventually the computers and their AI are so intelligent and so powerful that human intervention will no longer even be necessary. I don’t think we humans will even notice when it happens. It will just sort of happen. Hopefully by then we will have molded and directed the AI in the direction it needs to go. If a super-intelligent computer were sprung on us abruptly, I think it’d be our doom. But if it’s gradual, and the computers are all programmed and directed by all of us on Earth, and we have ample time to test and assess the technology and become aware of how it all works, then we have a lot less to fear.
Eventually, though, the computers will start writing their own software algorithms, and human computer programmers will not be needed. We’ll look at our desktop and say, “I don’t like how this interface works. Computer, make it work like this instead.” It will dynamically reprogram itself for you. There won’t be generic computer operating systems; it’ll be your own personal AI-designed system built around your life and the tasks you’re involved with.
It’s also a neat thought to picture computers designing their own hardware without our intervention. Once physicists discover a new breakthrough, the computers will redesign their circuitry to automatically harness the new knowledge. That’s pretty cool.
I imagine humanoid robots will become commonplace and we’ll all have a few of them to order around. They’ll help around the house, cook for us, mow the lawn, and so forth. Eventually human beings will integrate with the machines through neural prosthetics. Minds will be greatly enhanced, but how far people will take that, I don’t know. Nanotechnology will make it possible.
I wrote about virtual reality the other day, so there’s no need to rehash that. If you take the time to study the trends in this technology, you’ll see some major upcoming advances with really profound implications – nanotechnology, genetics, and robotics.
Nanotechnology is about building machines with super-tiny components. Eventually this will lead to us reorganizing the matter of this world into intelligent stuff which responds to our desires. Genetics is about applying our computers to engineer personal medical solutions through detailed processing of our DNA. We’ll also use tiny nanobots to go into our bodies and reprogram our DNA, administer custom-tailored medicines, eradicate diseases, stop aging, and enhance the human body. You won’t take generic pills; the medications will be made especially for you based on custom body scans. Robotics will be about building strong, durable machines to do all kinds of tasks for us.
We’ve already created life in the lab. We insert man-made, computer-designed DNA into a cell and grow the organism. Once we have even more advanced computers, and our knowledge of genetics and biology progresses, we won’t make love to have a child – we’ll design the child on a computer and grow it. Sex will still be around, but I can’t see women wanting to endure childbirth any longer. If you want a child with your DNA, just feed your DNA into the computer and use that. The child would come out the same.
I make no predictions as to how quickly all of this will happen, but it’s inevitable that it will, absent us eradicating ourselves. It won’t take that long, either, generally speaking. Within two hundred years this has to happen; I’d be really surprised if it didn’t. All of this and much more. There are some people on the fringe who seem to think all of this will happen within the next thirty years, citing exponential trends as evidence, but we’ll have to wait and see. I’ll most likely be around that long, so I’ll see for myself whether or not that’s true. I hope so. It’d be pretty amazing to see this stuff come to fruition. But even absent exponential growth, we’ll be seeing major changes, and the future is exciting.
This last video features a prominent philosopher of consciousness discussing his views on the singularity. His name is David Chalmers. He’s very well known and is famous as the original articulator of the “hard problem” of consciousness.