How long will it be before we can have fluent conversations with machines? Amit Singhal, the head of search at Google, thinks it'll be here in about twenty years.
In the past, computers had no representation of objects. They didn't know anything about your room, the shape of the objects in it, or their purpose. Machines have mostly been giant indexes of text strings, and with clever tricks those indexes have powered halfway intelligent search engines. That's all changing now.
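To make the "giant index of text strings" idea concrete, here's a minimal sketch of an inverted index, the core data structure behind classic text search. The documents and words are invented for illustration; real search engines add ranking, stemming, and far more.

```python
# Toy corpus: document ID -> text. These documents are made up.
docs = {
    1: "the cat sat on the mat",
    2: "the dog chased the cat",
    3: "a robot stocked the shelves",
}

# Inverted index: each word maps to the set of documents containing it.
index = {}
for doc_id, text in docs.items():
    for word in set(text.split()):
        index.setdefault(word, set()).add(doc_id)

def search(query):
    """Return IDs of documents containing every word in the query."""
    results = [index.get(word, set()) for word in query.split()]
    return set.intersection(*results) if results else set()

print(sorted(search("the cat")))  # → [1, 2]
```

Note that the index knows nothing about what a cat *is* — it only matches strings, which is exactly the limitation the paragraph above describes.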
Take the problem of understanding what's going on in your room. Computer vision is really processor intensive, but we're seeing the beginnings of computers that can drive vehicles and move about in an environment, because they can process the 3D spatial layout as well as identify the objects in it. It's still not perfect, and object identification in images needs work, but it's improving exponentially. As processor power and memory increase, we'll see robots walking around, controlling their bodies, and holding conversations with people.
I was watching a technology video on YouTube the other day, and an engineer claimed that within thirty years an inexpensive memory stick will be able to hold and process more information than all the human brains alive today combined. Something like that. A robot with a cheap chip in its head will have the memory and processing power of billions of humans. That's just one robot, stocking shelves in a grocery store. Think of what supercomputers will be capable of.
Currently Google is building what it calls the Knowledge Graph: a giant database of every type of object, its purpose, its behavior, its three-dimensional appearance, and how it relates to other objects. That's the next big step, and they expect to complete it in twenty to thirty years. It won't fall on us all at once; we'll keep seeing incremental changes as computers understand more and more about the world we live in.
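A knowledge graph of that kind is often stored as (subject, relation, object) triples. Here's a toy sketch of the idea; every entity and relation below is invented for illustration and has nothing to do with Google's actual implementation.

```python
# Toy knowledge graph: (subject, relation, object) triples.
# All entities and relations here are invented examples.
triples = [
    ("mug", "is_a", "container"),
    ("mug", "used_for", "drinking"),
    ("mug", "found_in", "kitchen"),
    ("kitchen", "part_of", "house"),
    ("container", "is_a", "object"),
]

def related(entity, relation):
    """Return all objects linked to `entity` by `relation`."""
    return {o for s, r, o in triples if s == entity and r == relation}

print(related("mug", "used_for"))  # → {'drinking'}
print(related("mug", "found_in"))  # → {'kitchen'}
```

Unlike a text index, this structure lets a machine answer "what is a mug for?" by following links rather than matching strings, which is the shift the paragraph above is describing.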
As a kid, Amit would watch old re-runs of Star Trek, which inspired him to work on AI and computer programming. He saw Captain Kirk talking to the computer and thought it was the neatest thing. Things like Google are birthed in a child's imagination; later in life, those children devote themselves to building the things they find interesting and fun. As I've said before, that is why scientists and engineers so often love science fiction. It gives us ideas.
Google estimates that computers like Samantha in ‘Her’ will be here in twenty to thirty years as well. That's their planned timeframe. These new machines will fully understand context and the true meaning of our words. Each word will be linked into this giant artificial brain relating each object to every other, so the computer will “get” what you're talking about.
I'll be in my late fifties or early sixties, nearing retirement, when these AI beings come into existence. I've been thinking that around the time of my death, humans will be surpassed by machines in intelligence. I'll be in my eighties, maybe early nineties if I live that long, and machines will be better than us at just about everything.
The exact year doesn't matter. We're witnessing the birth of a new form of life. What takes us a lifetime to learn, they'll master in minutes. They'll be able to read and understand every book in the Library of Congress in a few minutes' time. They'll know everything.
I’ve studied textbooks on AI, but this is all going beyond anything I understand. I can only foresee about twenty years into the future, and that window of time seems to get shorter and shorter. Past that, I have no idea.
I’m guessing that humans will build brain-computer interfaces which will allow them to tap directly into this super-brain, which will house all our knowledge. We’ll become more machine-like and machines will become more human and emotional. The distinctions will become blurred as time goes on.