Jeff Hawkins (the theoretical neuroscientist/mobile computing pioneer) recently gave a lecture at the Beckman Institute on his work at Numenta on Hierarchical Temporal Memory (HTM). Actual title: "Advances in Modeling Neocortex and Its Impact on Machine Intelligence". A video of the lecture can be viewed here.
Basically, Jeff is proposing a new paradigm for thinking about brains and technology. With the advent of "soft" computing techniques (e.g. evolutionary algorithms, neural networks), bio-inspired software, and new techniques for peering into the brain (e.g. fMRI, EEG, and fNIR), we need a new way both to produce machine intelligence and to understand theoretically what is going on in the brain. The fact that he makes this link, and has been interested in it for most of his career, automatically makes me a fan.
Yet while I like Jeff Hawkins (I basically bought into the argument he laid out in "On Intelligence"), I do not agree with some of the details featured in this talk (although the work is technically impressive and correct). My main objection is the idea that the neocortex (the six-layered tissue responsible for much of mammalian higher cognition, more properly called isocortex) is computationally powerful because it has a repetitive structure.
I have encountered this idea in a number of computational neuroscience papers. My objection is to the notion that repetitive structures are limited to the neocortex, and that the neocortex alone defines intelligence. This is incorrect on two counts:
1) There are other structures (the cerebellum, parts of the medial temporal lobe) that also exhibit repetition, and these structures certainly produce intelligent behavior: the cerebellum is known for movement and other behavioral regulation, while the medial temporal lobe is involved in memory consolidation and spatial navigation. The problem is that Hawkins all too often equates repetition of structure with pattern recognition and predictive capacity. This may work when running HTM simulations, but is it biologically accurate and ultimately robust? While it is certainly true of visual cortex, it is not true of all neocortical regions. Other attributes, such as convergence and higher-order feedback, exploit this repetitive, hierarchical structure but neither require nor preclude pattern recognition.
2) Birds use pallium-derived structures to generate intelligent behavior. While one could argue that these structures are also hierarchical (they are certainly layered), they do not share many of the design principles found in mammalian neocortex. The neural substrate of insects, which can likewise generate complex behaviors, is also not equivalent to the mammalian neocortex. While hierarchical processing may exist in avian pallium and in insect neuropil/ganglia networks, it may or may not be consistent with Hawkins' HTM.
The other problem I have with current artificial intelligence research (and machine learning in general) is its focus on pattern recognition. While pattern recognition may be a necessary condition for intelligence, it is not the only hallmark of intelligence. To his credit, Hawkins argues that prediction is actually the hallmark of intelligent behavior. This is much more powerful than blind pattern recognition, which can produce many false positives (e.g. seeing an image of the Virgin Mary on the side of a barn). The ability to predict upcoming events in the environment may define intelligence not only in the brain (neuronal populations), but among cell and organismal populations as well.
Yet there may be ways to define intelligent behavior outside the realm of prediction. For several years now (since the early years of my PhD studies), I have been fascinated by sensory integration and signal convergence in the brain. For example, perceiving a coffee mug being lifted, brought to the mouth, and set down again involves visual, auditory, and tactile cues -- all of which need to be integrated in the course of producing the intelligent, conscious behavior we all take for granted. There are centers in the brain (e.g. the superior colliculus) in which single neurons integrate inputs of different sensory types and, depending on how those inputs are weighted, produce either an additive, suppressive, or superadditive response. The superadditive response is the outcome that has intrigued me the most, as, taken across cells, it could produce a very complex (and fascinating) emergent phenomenon. And, like it or not, this may produce intelligent behavior with no direct connection to either prediction or pattern recognition.
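The additive/suppressive/superadditive distinction can be illustrated with a toy model (this is my own minimal sketch, not the Bayesian model of Anastasio et al. or any published parameterization): pass the weighted sum of unimodal drives through a sigmoidal nonlinearity, and weak near-threshold cues combine superadditively while strong cues saturate and combine sub-additively -- the "inverse effectiveness" pattern reported for superior colliculus neurons. All parameter values here are arbitrary choices for illustration.

```python
import math

def sc_response(visual, auditory, threshold=0.5, gain=8.0):
    """Toy multisensory neuron: a sigmoid applied to the summed
    unimodal drives. Threshold and gain are illustrative, not fitted."""
    drive = visual + auditory
    return 1.0 / (1.0 + math.exp(-gain * (drive - threshold)))

def enhancement(visual, auditory):
    """Multisensory enhancement index: the combined response divided by
    the sum of the two unimodal responses (>1 means superadditive)."""
    combined = sc_response(visual, auditory)
    unimodal_sum = sc_response(visual, 0.0) + sc_response(0.0, auditory)
    return combined / unimodal_sum

# Two weak, individually near-ineffective cues combine superadditively...
weak = enhancement(0.25, 0.25)    # index > 1
# ...while two strong cues saturate and combine sub-additively.
strong = enhancement(0.6, 0.6)    # index < 1
```

The point of the sketch is only that a single fixed nonlinearity yields all three response regimes depending on input strength; no learning, prediction, or pattern recognition is involved.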
Anastasio, T.J., Patton, P.E., and Belkacem-Boussaid, K. (2000). Using Bayes' rule to model multisensory enhancement in the superior colliculus. Neural Computation, 12, 1165-1187.
Ernst, M.O. and Banks, M.S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429-433.
Floreano, D. and Mattiussi, C. (2008). Bio-inspired Artificial Intelligence. MIT Press, Cambridge, MA.
Hawkins, J. and Blakeslee, S. (2004). On Intelligence. Times Books, New York.
Jehee, J.F.M. and Murre, J.M.J. (2008). The scalable mammalian brain: emergent distributions of glia and neurons. Biological Cybernetics, 98(5), 439-445.
Jarvis, E.D. et al. (2005). Avian brains and a new understanding of vertebrate brain evolution. Nature Reviews Neuroscience, 6(2), 151-159.
Meredith, M.A. and Stein, B.E. (1983). Interactions among converging sensory inputs in the superior colliculus. Science, 221(4608), 389-391.
Richards, W. (1988). Natural Computation. MIT Press, Cambridge, MA.
Shadmehr, R. and Wise, S.P. (2005). Computational Neurobiology of Reaching and Pointing. MIT Press, Cambridge, MA.
Shasha, D.E. and Lazere, C. (2010). Natural computing: DNA, quantum bits, and the future of smart machines. W.W. Norton, New York.
Stein, B.E. and Meredith, M.A. (1993). The merging of the senses. MIT Press, Cambridge, MA.
Stein, B.E. (1998). Neuronal mechanisms for synthesizing sensory information and producing adaptive behaviors. Experimental Brain Research, 123, 124-125.
Strausfeld, N.J. et al. (1998). Evolution, discovery, and interpretations of arthropod mushroom bodies. Learning and Memory, 5, 11-37.