
A Key To Understanding Abstract Thought

August 4, 2014

Back when I was a teenager, I remember first reading Plato’s works.  I was completely puzzled trying to figure out how abstract thinking worked.  Take our ability to think of general concepts.  We know that women and men are both human beings and that all human beings have arms, legs, mouths, and noses.  It’s effortless for us.  It happens automatically within a few milliseconds and feels instantaneous.

Or think about something else.  Say I was a mad scientist and took you down to my lab.  I had built a brain scanning machine and I was going to strap you down and steal your memories.  How would I do it?  How does your brain store your memories?  What format are they in?  How are they organized?  Believe it or not, I pretty much know the answer!  But back when I was a teenager, I had no idea how this worked.

I never knew how my brain could recognize, categorize, and label objects.  Take walking into a kitchen.  We always see particular tables, and each one differs from the others, but somehow our mind holds some abstract concept of a “table” and can look at objects in the world around us and say, “Yes, that’s a table.”  The same applies to animals.  We see beagles, bulldogs, and golden retrievers, but we know they’re all “dogs”.  What is a “dog”?  How does our brain do it?

Plato theorized that there was some ideal “dog” in another, invisible dimension, and that all particular dogs were copies of it in some inexplicable way.  That never made any sense to me, and I doubt it does to anyone else either.

But nowadays we’re making huge breakthroughs in artificial intelligence and we’ve cracked the code for how the brain does it.  Using biologically inspired algorithms, AI researchers can now write computer code which is able to do this same sort of abstract thinking.  Take Microsoft’s Project Adam for instance.  You can go out into your backyard with your smartphone and snap a picture of your dog.  Their software can then analyze the image and tell you exactly what type of dog it is, even more accurately than the best human dog experts.

Once I studied how these sorts of algorithms work, I immediately understood what abstract concepts are and how they’re stored in our brains.  I don’t know whether to begin with our brains or with the computer algorithms.  If you understand one, you immediately understand the other.  We’ll begin with our brains.

Basically, abstract thinking takes place within our neocortex, a thin sheet of neurons sitting on the outer surface of our brains.  This is simplifying things a bit, but it consists of layers of neurons wired together into vertical columns.  The columns themselves have cross-connections wiring them to other columns.

[Image: the neocortex]

It works like this.  Let’s just talk about identifying things we’re looking at with our eyes.  Light enters our eyes and stimulates the photoreceptors in our retinas.  This creates small electrical signals, which are then sent to the “bottom” layers of the neocortex.  That’s where a very simple pattern-recognition process starts.

Basically, the bottom-most neuronal layers identify patterns, then the next layer of neurons “above” those identifies patterns within the patterns.  The layer “above” that identifies patterns within the patterns’ patterns.  And so on.  It forms a hierarchy of patterns within patterns within patterns … within patterns.
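
To make that concrete, here’s a toy sketch in Python of what one layer of pattern detectors might look like.  To be clear, this is my own illustration, not literally how cortical neurons compute: the detect function and the little edge filters are made-up stand-ins for the idea of stacked detectors.

```python
import numpy as np

def detect(image, filters, threshold=0.5):
    """Slide each small filter over the input and mark where it fires.

    Each returned map is a grid of "this pattern is here" signals,
    playing the role of one layer of pattern detectors.
    """
    fh, fw = filters[0].shape
    maps = []
    for f in filters:
        out = np.zeros((image.shape[0] - fh + 1, image.shape[1] - fw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + fh, j:j + fw] * f)
        maps.append((out > threshold).astype(float))
    return maps

# Layer 1: tiny edge detectors working directly on pixels.
edge_filters = [
    np.array([[1.0, -1.0],
              [1.0, -1.0]]),    # vertical edge
    np.array([[1.0,  1.0],
              [-1.0, -1.0]]),   # horizontal edge
]

image = np.random.rand(8, 8)            # stand-in for retinal input
layer1 = detect(image, edge_filters)    # patterns in raw pixels

# Layer 2 runs the very same routine on layer 1's output, finding
# patterns *within* those patterns (corners, curves, ...), and so on
# up the hierarchy.
layer2 = detect(layer1[0], edge_filters)
```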

[Image: hierarchical temporal memory]

So let’s talk a little more about Microsoft’s Project Adam.  How does it tell the difference between a dog and a cat?  Well, all you have to do is show it a bunch of images of cats and tell the software that all those images contain cats.  Then you show it a bunch of images of dogs and tell the software that all those images contain dogs.  The software builds a hierarchy of image patterns within patterns within patterns.  The lowest levels begin with raw sensory data, such as colored pixels.  The next level up holds patterns of colors, such as a small splotch of black and brown next to one another, or white and brown next to one another, etc.  As you go higher up this hierarchy, you’ll come to “tails”, “mid-sections”, “noses”, “eyes”, “floppy ears”, etc.  Then at the even higher levels, you’ll find “dog”, “cat”, and “human”.
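
Here’s a tiny sketch of that labeling workflow in Python, using scikit-learn.  This is nothing like Project Adam’s actual system, which trained enormous networks across many machines; the fake_photo images are crude stand-ins I made up just to show the train-on-labeled-examples idea.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def fake_photo(kind):
    """A crude 8x8 stand-in for a photo: 'cats' are bright on top,
    'dogs' are bright on the bottom."""
    img = rng.random((8, 8)) * 0.3
    if kind == "cat":
        img[:4, :] += 0.7
    else:
        img[4:, :] += 0.7
    return img.ravel()

# Show it a bunch of labeled cats, then a bunch of labeled dogs.
X = [fake_photo("cat") for _ in range(50)] + [fake_photo("dog") for _ in range(50)]
y = ["cat"] * 50 + ["dog"] * 50

# Each hidden layer plays the role of one level of patterns-within-patterns.
model = MLPClassifier(hidden_layer_sizes=(32, 16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([fake_photo("dog")]))   # -> ['dog']
```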

We’ve been discussing patterns within images, but there are also patterns within sounds.  Besides finding patterns within colored pixels spread spatially across your retina, your brain uses the exact same system to identify temporal patterns: patterns spread out over time.

Say you find a cover song by a random person on YouTube, playing their own rendition of a popular Beatles song.  How do you know it’s the same song?  If it’s really bad, you may not.  But if there’s a good resemblance, your brain finds temporal patterns, within patterns, within patterns, of sound intensity.  The pressure waves vibrate your eardrums, which in turn creates similar sorts of electrical signals for your brain.
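
Here’s one way to see how a temporal pattern can survive a change in its raw ingredients.  This little Python sketch (toy note numbers, not real audio processing) reduces a melody to its pattern of pitch steps, so the same tune matches even when played in a different key:

```python
def intervals(notes):
    """Reduce a melody to its pattern of steps between successive notes."""
    return [b - a for a, b in zip(notes, notes[1:])]

original = [60, 62, 64, 60, 64, 62, 60]   # a tune, as MIDI note numbers
cover    = [67, 69, 71, 67, 71, 69, 67]   # same tune, transposed up a fifth

# Same temporal pattern, different raw notes: prints True
print(intervals(original) == intervals(cover))
```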

This is related to language as well.  Nouns are consistent high-level patterns, such as those discerned from images.  Verbs are their temporal patterns, how those patterns change over time.  You’re linking the sounds and symbols of words (themselves high-level patterns) to the high-level patterns drawn from sensory experience over time.

So what is the ideal table?  Does it exist in another dimension?  No.  The ideal “table” doesn’t look like anything. It’s just a type of information pattern.  It’s hard to describe what it is because it doesn’t look like anything, or sound like anything, or exist in any sort of space.  It’s just a type of encoded information.

If you’d like to hear about this process in even more detail, I’d recommend watching the neuroscientist Jeff Hawkins explain all of this in the video below.  In the tech world, this AI technique is called deep learning.  The topic begins at around 10 minutes.

I’ve simplified this discussion a bit, though.  Our brains are even cooler than this.  The information doesn’t just flow upward; it also flows downward.  When you study how the neurons are laid out, you find that as your brain is trying to figure out what it’s looking at, it’s also making predictions about what it should see next.  If there’s a match between what your brain thinks will come next and what actually comes next, you unconsciously say to yourself, “Ah, I know what this is.”  If your brain’s predictions do not match what you experience next, there’s a shift in your attention.

For example, if you’re looking at your pet dog and all of a sudden it stands upright on its back legs and starts talking to you like a human, you’re totally taken aback, in shock, and your attention is completely focused on your dog.  That’s because it doesn’t match the patterns stored in your brain, and we’re wired up to say, “I don’t understand this.  This is new.  I need to pay attention to it.”  It may also generate fear, etc.  So it’s more complicated in us humans, but abstract thought and object identification are built on this process.
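
Here’s a toy sketch of that predict-and-compare loop in Python.  It’s my own cartoon of the idea, not a model of real neurons: the brain guesses what comes next, and a big mismatch raises a “surprise” signal that grabs attention.

```python
def surprise(predicted, observed):
    """How badly reality differs from the brain's guess."""
    return abs(predicted - observed)

history = [1, 2, 3, 4]
# Predict the next value by extending the recent trend: expect 5.
predicted_next = history[-1] + (history[-1] - history[-2])

for observed in (5, 42):
    s = surprise(predicted_next, observed)
    if s < 1:
        print(f"saw {observed}: matches the prediction, carry on")
    else:
        print(f"saw {observed}: surprise = {s}, attention shifts!")
```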

If you showed Project Adam a fake video of a talking dog standing up on its legs, since it has no emotion, all it would do is say, “new pattern”, and create new nodes in this hierarchy of patterns.  But with us, our object identification process is directly tied into emotional centers which trigger a more complex response.

So as the mad scientist, if I wanted to read your mind, it’d be very difficult.  It’s not like the information is stored the same way in each person.  It’s totally different in each brain, based on what you’ve experienced.  The pattern hierarchies are all laid out differently.

In short, though, your memories are temporal sequences of hierarchical patterns.  It’s not like there’s a movie clip I could access and play back with perfect clarity.  The brain also discards most of the raw sensory information we experience.  So that’s going to be a problem.

For fun, I could ask you to think about your wedding, and once I found out where the high-level patterns were stored, I could trace down your tree and try to produce a movie of you walking down the aisle.  Maybe, I’m not sure.  If I had sophisticated enough equipment and you were willing to let me cut your skull open, I’d be more than willing to give it a try!  *giddy grin*  Then again, it may be better just to search your attic for the DVD or VHS tape!

One of my biggest interests in this subject is trying to understand what numbers are.  I feel quite certain they’re rooted in this same process.  The number one is some sort of high level pattern in our sensory hierarchy.  The same with two, three, four, five, and maybe six.  We intuitively understand small numbers because they actually exist within this hierarchy in our mind.  I’ve seen two glasses on a table, two books on my desk, and two buttons on a computer mouse.  There’s some common pattern within all of them which we’d label “2”.

Higher numbers are probably just symbols linked to logical rules for manipulating them.  We don’t have any intuitive sense of the difference between a billion and a trillion stars.  It’s too much for our minds to comprehend.  In other words, imagine the resolution an image would need to distinctly contain a billion separate objects.  There aren’t enough “pixels” on the back of our retina to hold such an image, no matter how tiny you made each object.  You could try to link the word symbol “a billion stars” to lots of tiny dots flying by in some temporal sequence, possibly.  Or maybe you could say something like, “Imagine dropping a thousand pennies in a jar each day.  Each penny will be a star.”  Then the person goes and does that for a single day, one thousand pennies.  “How long would we have to keep this up before we’d counted a billion stars?  It would take you over 2,700 years to count that many stars!”  Little tricks like that help us grasp large numbers, but you really have to think hard to come up with them.  It’s tricky to have any intuitive sense of large numbers.
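
The penny arithmetic is easy to check in a few lines of Python:

```python
stars = 1_000_000_000        # a billion stars
pennies_per_day = 1_000      # one penny per star, a thousand a day

days = stars / pennies_per_day     # 1,000,000 days
years = days / 365.25
print(f"{years:,.0f} years")       # ~2,738 years, i.e. "over 2,700"
```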

Bertrand Russell struggled to define numbers in his Principles of Mathematics.  I’m not really sure if this conception of numbers is correct, though.  Still, I’m pretty sure you could show Project Adam lots of different pictures with just a single object in them and then tell it, “1”.  Then do the same for images with two objects in them, etc.  I think you could get it to grasp the first few numbers.  There’s scientific evidence that animals understand the first few numbers.  For example, if an animal is trying to escape a group of predators, and the predators run into a cave and then exit one by one, the animal seems to know when they’re all gone.  If there were three predators but only two have left the cave, the animal knows to remain hidden.  It must have some basic conception of number.
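
If you wanted to try that experiment without access to Project Adam, a toy version is easy to set up: generate little images containing one, two, or three bright dots, label them “1”, “2”, “3”, and train an ordinary classifier.  This sketch uses scikit-learn and made-up dot images, so it’s only a cartoon of the idea:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def dots_image(n):
    """A flattened 8x8 'photo' containing exactly n bright dots."""
    img = np.zeros(64)
    img[rng.choice(64, size=n, replace=False)] = 1.0
    return img

# 200 labeled examples of each count, like showing pictures and saying "2".
X = [dots_image(n) for n in (1, 2, 3) for _ in range(200)]
y = [n for n in (1, 2, 3) for _ in range(200)]

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([dots_image(2)]))   # -> [2]
```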

Topics: Philosophy