How can the brain, with its messy mix of neurons and glia spreading activation all over the place, give rise to the precise mathematical structures of symbolic reasoning?
This "symbolic / subsymbolic gap" has been a central puzzle of cognitive science for decades.
In a paper for the IJCNN conference in Beijing in July, I proposed a candidate solution that -- while still speculative -- I believe has real promise for closing this gap.
The paper is linked here, and the abstract is as follows:
Abstract: A novel category of theories is proposed, providing a potential explanation for the representation of complex knowledge in the human (and, more generally, mammalian) brain. Firstly, a "glocal" representation for concepts is suggested, involving localized representations in a sparse network of "concept neurons" in the Medial Temporal Lobe, coupled with a complex dynamical attractor representation in other parts of cortex. Secondly, it is hypothesized that a combinatory-logic-like representation is used to encode abstract relationships without explicit use of variable bindings, perhaps using systematic asynchronization among concept neurons to indicate an analogue of the combinatory-logic operation of function application. While unraveling the specifics of the brain's knowledge representation mechanisms will require data beyond what is currently available, the approach presented here provides a class of possibilities that is neurally plausible and bridges the gap between neurophysiological realities and mathematical and computer science concepts.
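To make the "without explicit use of variable bindings" point concrete, here is a minimal sketch of combinatory logic itself, using the classic S and K combinators. This illustrates only the general mathematical idea the abstract invokes -- that any function can be expressed purely by composing combinators, with no named variables -- not the hypothesized neural mechanism:

```python
# Combinatory logic in Python: the K and S combinators, curried.
def K(x):
    # K x y = x  (returns its first argument, discarding the second)
    return lambda y: x

def S(f):
    # S f g x = (f x) (g x)  (distributes an argument to two functions)
    return lambda g: lambda x: f(x)(g(x))

# The identity function, normally written with a bound variable as
# "lambda x: x", arises variable-free as S K K:
#   S K K x = (K x) (K x) = x
I = S(K)(K)
assert I(42) == 42

# Function composition B f g x = f(g(x)) likewise needs no variables:
#   B = S (K S) K
B = S(K(S))(K)
double = lambda n: 2 * n
inc = lambda n: n + 1
assert B(double)(inc)(3) == 8  # double(inc(3)) = 8
```

In a combinatory-logic encoding, all that the substrate must support is function application between such terms -- which is why the paper hypothesizes a neural analogue (systematic asynchronization among concept neurons) for application specifically, rather than for variable binding.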
Note that this is a hypothesis about brains, and potentially a design principle for closely brain-like AGI systems -- but not a statement about, for example, the OpenCog AGI system, which implements symbolic thought more directly.

However, there are certainly analogies with things that happen inside OpenCog. OpenCog has explicit symbolic representation (roughly analogous to concept neurons) alongside subsymbolic representation from which symbol-like representations may emerge, and the design intention is that these two kinds of representation work together. The specific mechanisms of this interaction in OpenCog are quite different from what I hypothesize takes place in the brain, but at the level of the cognitive processes emerging from the two systems, there may not be a large difference.