Sunday, September 11, 2016

Does Modern Evidence Refute Chomskyan Universal Grammar?

Scientific American says, in a recent article, “Evidence Rebuts Chomsky’s Theory of Language Learning” …

Does it?  Well, sort of.  Partly.  But not as definitively as the article says.

Michael Tomasello, whose work I love, argues in the article that Chomsky’s old idea of a universal grammar has now been rendered obsolete by a new “usage-based approach”:

In the new usage-based approach (which includes ideas from functional linguistics, cognitive linguistics and construction grammar), children are not born with a universal, dedicated tool for learning grammar. Instead they inherit the mental equivalent of a Swiss Army knife: a set of general-purpose tools—such as categorization, the reading of communicative intentions, and analogy making, with which children build grammatical categories and rules from the language they hear around them.

Here’s the thing, though.  Every collection of learning tools is going to be better at learning some things than others.  So, for any collection of learning tools that is set to the task of learning grammar, some grammars will be easier to learn than others.  That is, given a certain set of social and physical situations, any particular collection of learning tools will be biased to learn certain grammars for communication in those situations, as opposed to other grammars.

So if humans have a certain set of universal learning tools, it follows that humans have a certain “universal probability distribution over (situation, grammar) pairs.”
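This point can be made concrete with a toy Bayesian sketch (all numbers, and the grammar labels “recursive” and “flat”, are invented purely for illustration): a learner’s built-in toolkit acts like a prior over candidate grammars, so two learners exposed to exactly the same utterances can converge on different grammars depending on their biases.

```python
# Hypothetical sketch: a learning toolkit's bias modeled as a prior over
# candidate grammars.  Identical input data, filtered through different
# priors, yields different learned grammars.

def posterior(prior, likelihoods):
    """Bayesian update: P(grammar | data) is proportional to
    P(data | grammar) * P(grammar), normalized over all candidates."""
    unnorm = {g: prior[g] * likelihoods[g] for g in prior}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

# Two candidate grammars that both fit the child's input, but not equally well.
likelihoods = {"recursive": 0.4, "flat": 0.5}

# A purely general-purpose "Swiss Army knife" learner with no language bias...
flat_prior = {"recursive": 0.5, "flat": 0.5}
# ...versus a learner whose toolkit evolution has tuned toward recursion.
tuned_prior = {"recursive": 0.9, "flat": 0.1}

print(posterior(flat_prior, likelihoods))   # slightly favors "flat"
print(posterior(tuned_prior, likelihoods))  # strongly favors "recursive"
```

The prior here is a stand-in for the bias built into the learner’s toolkit; the claim in the text is that humans share such a prior, whatever its exact shape.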

This is not exactly the same as a universal grammar in the classic Chomskyan sense.  But just how far it is from what Chomsky was thinking remains to be understood.

For instance, more recent versions of Chomsky’s ideas view a sort of linguistic recursion as the core principle and tool of universal grammar.   Does our collection of human learning tools give us a strong bias to learn grammars involving certain sorts of linguistic recursion, in humanly common physical/social situations?   It may well.
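To make “linguistic recursion” concrete: the key property is that a grammatical category can contain an instance of itself, as when a noun phrase embeds a relative clause that itself contains a noun phrase.  A minimal sketch (the mini-grammar below is invented for illustration, not drawn from any real language):

```python
# Toy illustration of syntactic recursion: a noun phrase may embed a
# relative clause that itself contains a noun phrase, to arbitrary depth.
# This center-embedding is the hallmark of recursion in sentence-level syntax.

NOUNS = ["the dog", "the cat", "the child"]

def noun_phrase(depth):
    """Build a noun phrase with `depth` levels of center-embedded clauses."""
    np = NOUNS[depth % len(NOUNS)]
    if depth > 0:
        # Recursive step: the noun phrase contains another noun phrase.
        np += " that " + noun_phrase(depth - 1) + " saw"
    return np

def sentence(depth):
    return noun_phrase(depth) + " ran."

print(sentence(0))  # the dog ran.
print(sentence(2))  # the child that the cat that the dog saw saw ran.
```

The question in the text is whether human learning tools are biased toward acquiring grammars with this self-embedding property, not whether every attested language exploits it.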

Does the fact that an obscure language like Pirahã appears not to have much recursion in its sentence-level syntax refute such a possibility?  Not really.  It appears likely that the Pirahã have recursion in their linguistic repertoire, but carry it out more on the pragmatic and cross-sentential level, rather than within the syntax of individual sentences.  And that’s one obscure language — the fact that a certain linguistic form does not appear in EVERY human language does not refute the idea that there is a universal probabilistic bias toward this form in the human brain.

I’m not just splitting hairs here.   The question is to what extent has evolution honed the set of learning tools in the human mind for learning particular sorts of linguistic forms.   Tomasello’s intuition seems to be: not that much.  That is, he seems to think that our learning tools basically evolved for more general perceptual, motor and social learning, and then we just use these for language learning as well.   This is possible.  However, it’s also possible that our toolset has been substantially honed by evolution for the particularities of language learning — in which case there is a meaningful “universal human bias for learning certain types of grammars”, which can be thought about as a more modern incarnation of many of Chomsky’s ideas about universal grammar.

This issue is also relevant to AGI, because it has to do with how much attention AGI designers should spend on learning algorithms that are tuned and tweaked for language learning in particular, as opposed to expecting language learning to just pop out from application of general-purpose learning tools without any special language-oriented tuning.

Clearly Chomsky proposed a lot of strong ideas that just don’t hold up in the light of modern data regarding child language learning.  However, sometimes science (like many other human endeavors) can be a bit too much of a swinging pendulum, going from one extreme all the way to the other.  I wonder if the wholesale rejection of universal-grammar-related ideas in favor of usage-based ideas may be an example of this.  I wonder if we will find that the specific assemblage of learning tools in the human mind is, in fact, very well tuned by evolution to make learning of some specific grammatical forms especially easy in evolutionarily commonplace human situations. 


Mentifex said...

I suspect that some tiny increment in the human genome made it possible for Homo Sapiens to make use of language.

From my vantage point as an AI-coding amateur neurotheoretician (there's an oxymoron for you :-) I see a plausible path by which evolution arrived at thinking. The first neurons were only a two-way street -- sensory input neurons and motor output neurons. Then memory was added as nodal linkages between the sensory and motor neurons, so that specific sensations could cause specific motor outputs.

A major development in AI/NI evolution (i.e., Natural Intelligence) was when some sensory input fibers accidentally came loose from their connection to a sense organ and thus by default became abstract fibers -- not concrete neurons connected concretely to any of the senses, but neurons still lying in tandem with, and probably between, the sensory fibers on one side and the motor fibers on the other side. These abstract fibers became the bedrock and foundation of concepts. Your dog, for instance, knows you, because your dog has a concept of you, and of itself (for when you call your dog's name), and even of its duty as a dog -- to fetch an object, and not to chew the slippers or curtains or furniture. Your parrot that can speak English words like "Polly want a cracker" has similar nerve-fibers in control of the motor outputs of vocalizations. [Please see next Comment as continuation]

Mentifex said...

The leap from canine or chimp intelligence to human intelligence came in evolution when some of the abstract, conceptual fibers took abstraction one step beyond and became abstractions of abstractions, that is, a Chomskyan linguistic superstructure that could spiral through time, adding or deleting syntactic elements as nodes of control of parts of speech, typically subject-noun and predicate-verb and object-noun. What Ben Goertzel calls "linguistic recursion" up above is most likely a circular linkage of the syntactic control fibers such that word order in a child's mind can be established by trying out nodal sequences in communication with adults, and by not only adding nodes for any desired part of speech, but by deleting nodes of gambits that fail to satisfy adults by failing to convey intended meaning.

The leap in human evolution, maybe fifty thousand years ago when humans surpassed Neanderthals, could have been as simple as genetically coding for abstract nerve fibers that CANNOT associate to sensory fibers or motor fibers, but can ONLY associate to (that is, govern and control) abstract concept fibers, so that the Chomskyan transformational-grammar superstructure arises not physically but only logically within the otherwise seemingly flat MindGrid of concept fibers. Now recently in this current Anno Domini 2016 the Mentifex AI Minds in Perl and in Forth have radically simplified their conceptual implementation of the underlying Theory of Mind for AGI. It would be of utility, Dr. Ben, if you and your robotics team could somehow join the Mentifex mind-design with your robotics actuators, somewhat as follows. In MindForth Robot AGI the thinking module thinks about its motor options without immediately causing them to go into action. The FreeWill or Volition mind-module fires motor output signals only when the thinking module thinks _repeatedly_ about a given motor output (such as "RUN!") so that a kind of neural accumulator fills up and tips over into a motor output initiative -- a motor "GO" signal. In such a way, the Mentifex linguistic Mind should initiate motor actions just by thinking prolongedly about a proposed action. Interruptions of the chain of thought -- of the fixation upon a motor proposal -- cancel out the motor proposal. So this Comment is my reaching out from the Mentifex AGI project to OpenCog and any other receptive AGI project. Bye for now. -Arthur
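The accumulate-to-threshold volition scheme the comment describes can be sketched roughly as follows (a hypothetical reconstruction, not actual MindForth code; the class name, method name, and threshold value are all invented):

```python
# Hypothetical sketch of the described volition mechanism: repeated thoughts
# about the same motor action fill an accumulator; crossing a threshold fires
# the action (the "GO" signal); thinking about anything else resets it.

class Volition:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.current_action = None
        self.count = 0

    def think_about(self, action):
        """Register one thought; return the action if the accumulator tips over."""
        if action != self.current_action:
            # An interruption of the chain of thought cancels the pending proposal.
            self.current_action = action
            self.count = 0
        self.count += 1
        if self.count >= self.threshold:
            self.count = 0
            return action  # motor "GO" signal
        return None

v = Volition(threshold=3)
print(v.think_about("RUN"))  # None
print(v.think_about("RUN"))  # None
print(v.think_about("RUN"))  # RUN
```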

Hugo Arraes said...

It seems that language is just an extension of a more general forest of concepts used to represent physical elements of the world in concept trees, using elements like shape, color, form, etc.
In the same way, the tree of concepts for the muscular motor stimuli used to produce a sound with the mouth is built up and linked with the corresponding tree of concepts built from ear input.
It all starts with a "seed", a primary structure that serves as the ground for other concept trees.
It's like a baby saying "auau" while pointing at a dog, because the motor and sound conceptual tree is already mapped after hours of "baby talk" training.

johnmn3 said...

Like with intelligence, there is no "general" language. How efficiently processes or abstractions relate to situations is not some context independent affair.


Unknown said...

Language is social grooming first and foremost, immediate group survival warnings second, information transmission last and least.
