Tuesday, April 27, 2010

The Brain is Not an Uber-Intelligent Mega-computer

A friend forwarded me a recent New Scientist article containing some freaky, grandiose anti-AI rhetoric...

Some interesting tidbits about clever things single cells can do are followed by this dramatic conclusion:

For me, the brain is not a supercomputer in which the neurons are transistors; rather it is as if each individual neuron is itself a computer, and the brain a vast community of microscopic computers. But even this model is probably too simplistic since the neuron processes data flexibly and on disparate levels, and is therefore far superior to any digital system. If I am right, the human brain may be a trillion times more capable than we imagine, and "artificial intelligence" a grandiose misnomer.

I think it is time to acknowledge fully that living cells make us what we are, and to abandon reductionist thinking in favour of the study of whole cells. Reductionism has us peering ever closer at the fibres in the paper of a musical score, and analysing the printer's ink. I want us to experience the symphony.


Actually, I'm a big fan of complex systems biology, as opposed to naive molecular biology reductionism.

But just because cells and organisms are complex systems doesn't mean they're non-simulably superintelligent!

What's funny is that all this grandiose rhetoric is being flung about without ANY evidence whatsoever of actual acts of human intelligence being carried out by this posited low-level intra-cellular computing!!

Still, this relates to the basic reason why I'm not trying to do AGI via brain simulation. There is too much unknown about the brain....

But even though there is much unknown about the brain, I totally don't buy the idea that every neuron is doing super-sophisticated computing, so that the brain is a network of 100 billion intelligent computers achieving some kind of emergent superintelligence....

I don't know how this kind of argument explains results like Poggio's model of feedforward processing in the visual cortex. By modeling at the neuronal-group level, he gets neural nets to behave very similarly to human brains classifying images and recognizing objects under strict time constraints. If the brain is doing all this molecular supercomputing, how come when it's given only half a second to recognize an object in a picture, it performs semi-stupidly, just like Poggio's feedforward NNs? How come digital computer programs can NOW outperform the brain at time-constrained (half-second) object recognition? Wouldn't it have been to our evolutionary advantage to recognize objects more accurately?
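
For readers who haven't seen it, Poggio-style models alternate template-matching ("S") layers with max-pooling ("C") layers in a single feedforward sweep, with no recurrent loops. Here's a minimal sketch of that alternation in plain numpy -- the crude edge filters, sizes, and single S/C pair are my own illustrative stand-ins, not the actual parameters of his model:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2D cross-correlation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max over non-overlapping size x size blocks (the 'C' operation)."""
    h2, w2 = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

# Crude oriented-edge filters, standing in for the model's Gabor filters.
filters = [
    np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float),  # vertical edges
    np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float),  # horizontal edges
]

rng = np.random.default_rng(0)
image = rng.random((16, 16))  # toy "retinal" input

# One feedforward sweep: template matching (S), then max pooling (C).
features = []
for f in filters:
    s1 = np.maximum(convolve2d(image, f), 0.0)  # rectified filter responses
    c1 = max_pool(s1)                           # local position invariance
    features.append(c1.ravel())

feature_vector = np.concatenate(features)  # would feed a simple classifier
print(feature_vector.shape)
```

The point of the half-second constraint is that it only leaves time for one such feedforward pass through the visual hierarchy -- and a model this dumb-at-the-bottom already matches human performance in that regime.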

How about motion detection neurons -- for each small region of the visual field there are tens of thousands of them, each with an average error of 80 degrees or so in the direction it reports. Averaging their outputs together gives a reasonably accurate read-out of the direction of motion in that region. If all this molecular supercomputing is going on, why all the error in motion detection? Why bother with all the mess of averaging together erroneous results?
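
To see why averaging a big noisy population works, here's a toy sketch. The 80-degree error figure is taken from the paragraph above; everything else (Gaussian noise, 10,000 neurons, a simple vector average) is my own simplification:

```python
import numpy as np

rng = np.random.default_rng(1)

true_direction = np.deg2rad(30.0)   # actual direction of motion
n_neurons = 10_000                  # "tens of thousands" per region
noise_sd = np.deg2rad(80.0)         # per-neuron error, as above

# Each neuron reports the direction with large, independent error.
readings = true_direction + rng.normal(0.0, noise_sd, n_neurons)

# Circular (vector) average of the whole population.
estimate = np.arctan2(np.sin(readings).mean(), np.cos(readings).mean())

print(f"true: {np.degrees(true_direction):.1f} deg")
print(f"population estimate: {np.degrees(estimate):.2f} deg")
```

Each individual neuron is wildly wrong, but the population estimate lands within a degree or so of the true direction -- exactly the cheap, messy, non-supercomputer strategy you'd expect from evolution, not from a trillion-fold intelligence.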

And why the heck do we make so many basic cognitive errors, as diagnosed in the heuristics and biases literature? Is it THAT hard to avoid them, so hard that a network of 100 billion sophisticated computers can't do it, when there would obviously be SOME evolutionary advantage in doing so?

Also, the comment about being "far superior to any digital system" is especially naive. What physics theory does this guy advocate? Any classical-physics-based system can be emulated by a digital computer to within arbitrary accuracy. Quantum computers can compute some functions faster than digital ones, on an average-case basis -- so is he saying cells are quantum computers? Stuart Hameroff has flogged that one pretty hard, and there is NO evidence for it yet.
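
The claim about classical systems is just standard numerical simulation: shrink the time step and the digital trajectory converges on the continuous one. A toy illustration with a pendulum (my own example -- the integrator and step sizes are arbitrary choices, not anything from the article):

```python
import numpy as np

def simulate(theta0, t_end, dt):
    """Semi-implicit Euler integration of a pendulum: theta'' = -sin(theta)."""
    theta, omega = theta0, 0.0
    for _ in range(round(t_end / dt)):
        omega -= np.sin(theta) * dt
        theta += omega * dt
    return theta

# Treat a very fine step as ground truth, then watch coarser digital
# approximations converge toward it as dt shrinks.
reference = simulate(0.5, 10.0, 1e-5)
for dt in (0.1, 0.01, 0.001):
    err = abs(simulate(0.5, 10.0, dt) - reference)
    print(f"dt={dt}: error ~ {err:.2e}")
```

The error falls with the step size, with no limit in principle other than compute time -- which is what "emulated to within arbitrary accuracy" means.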

And even so, quantum doesn't mean superior. Birds seem to use some sort of quantum nonlocality to sense the direction of the Earth's magnetic field, which is funky -- but we have electronic devices that do the same thing BETTER, without any quantum weirdness....

OK, we haven't yet proved that digital computer systems can be intelligent like humans. But this guy certainly is not providing anything even vaguely resembling evidence to the contrary...

Wishful thinking if you ask me -- wishful thinking about the grandiosity of human intelligence. We're clever apes who stumbled upon the invention of language and culture, not uber-intelligent mega-computers....

Just to be clear: I can believe that individual cells do a LOT of sophisticated stuff internally, but it's unclear to me how much of what they do is actually necessary for intelligence...

To repeat a time-worn analogy, the cells in a bird's wing probably do a LOT also, yet airplanes and spacecraft work well without mega-ubercomputers or quantum computers to emulate all that intra-bird-wing sub-cellular processing...


2 comments:

Joel said...

Those kinds of comments (from the New Scientist post author) frustrate me. They start arguing against reductionism -- but arguing about reductionism is a form of reductionism in itself!

Hardly anyone believes that neurons themselves hold the secret sauce of intelligence; it's all about the pattern and the interactions. Further, it's hard to see why a digital implementation of a mind would equate to reductionism.

Basically it seems like the author doesn't know anything about intelligence, neural networks or the brain and just has a beef with AI.

Tim Tyler said...

"why the heck do we make so many basic cognitive errors, as diagnosed in the heuristics and biases literature?"

Much of that is adaptive in the ancestral environment.

Things like believing you are right all the time, and underestimating project schedules actually serve important signalling roles in humans.

Some are heuristics: humans are resource limited. We can't encode the solution to every problem in our finite brains and genome.

...and some are plain screw-ups. Evolution doesn't do perfection, good enough is OK.