A team of UK-based researchers has published an interesting paper on language learning & reasoning using neural networks. There has also been a somewhat sensationalist media article describing the work.
I'm particularly familiar with one of the authors, Angelo Cangelosi, who gave a keynote at the AGI-12 conference at Oxford touching on some of his work with the iCub robot.
The news article (but not the research paper itself) says that the ANNABELL system reported here is the first time automated dialogue has been done with neural nets.... Actually, no. I recall a paper by Alexander Borzenko reporting similar results in the "Artificial Brains" special issue of Neurocomputing that Hugo DeGaris and I co-edited some years ago.... And I'm pretty sure there were earlier examples as well.
When I pointed the ANNABELL work out to Japanese AGI researcher Koichi Takahashi, he noted a few recent related works, such as:
- Yann LeCun's introduction to RNN-based question answering, in his slides (pp. 18-31)
- Peter Ford Dominey's work on an emergent approach to language learning, e.g. "Recurrent temporal networks and language acquisition - from corticostriatal neurophysiology to reservoir computing" (2013)
See also this nice survey on the emergent approach to language in robotics today.
So what distinguishes this new work by Cangelosi and colleagues from the other related work I've seen is mainly the sophistication of the underlying cognitive architecture. Quite possibly ANNABELL works better than prior neural nets trained for dialogue response, or maybe it doesn't; no careful comparison is given, which is understandable since there is no standard test corpus for this sort of thing, and prior researchers mostly didn't release their code. But the cognitive architecture described here is very carefully constructed in a psychologically realistic way; combined with the interesting practical results, this is pretty nifty...
The training method is interesting: incrementally feeding the system facts of increasing complexity, interacting with it along the way, and letting it build up its knowledge bit by bit. A couple of weeks ago I talked to a Russian company (whose name is unfortunately slipping my mind at the moment, though it began with a Z) that had a booth at RobotWorld in Seoul and has been training a Russian NLP dialogue system in a similar way (again with those Russians!!).... But the demo they were showing that day was only in Russian, so I couldn't really assess it.
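To make the general idea concrete, here is a minimal toy sketch of curriculum-style training of this sort. This is my own illustrative code, not ANNABELL's actual mechanism; the class and the example "facts" are hypothetical stand-ins for a real learner and a real training corpus.

```python
# Toy sketch of curriculum-style training: feed the system material in
# stages of increasing complexity, interacting with it after each stage
# so knowledge accumulates incrementally. All names here are hypothetical.

from collections import defaultdict

class ToyDialogueLearner:
    """Stores question -> answer associations; a stand-in for a real learning system."""
    def __init__(self):
        self.memory = defaultdict(str)

    def learn(self, question, answer):
        self.memory[question.lower()] = answer

    def respond(self, question):
        return self.memory.get(question.lower(), "I don't know")

# Stages ordered by increasing complexity, as in a curriculum.
curriculum = [
    [("what is your name?", "I am a toy system")],
    [("who is Mary's dad?", "Tom is Mary's dad"),
     ("who is Tom's daughter?", "Mary is Tom's daughter")],
    [("if Tom is Mary's dad, is Mary Tom's child?", "yes, Mary is Tom's child")],
]

learner = ToyDialogueLearner()
for stage in curriculum:
    for question, answer in stage:
        learner.learn(question, answer)
    # interact with the system between stages, rather than training in one batch
    print(learner.respond("who is Mary's dad?"))
```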
To my mind, the key limitation of the approach we see here is that the passage from question to response occurs very close to the word and word-sequence level. There is not much conceptualization going on. There is a bit of generalization, but it's generalization very close to the level of sentence forms. This is not an issue of symbolic versus connectionist; it's a matter of the kinds of patterns the system recognizes and represents.
For instance, with this method the system will respond to many questions involving the word "dad" without really knowing what a "dad" is (e.g. without knowing that a dad is a human, or is older than a child, etc.). That is just fine; people can do this too. But we should avoid assuming that, just because the system gives responses which, coming from a human, would reflect a certain sort of understanding, it is demonstrating that same sort of understanding. The system is building up question-response patterns from the data fed into it and then performing some (real, yet fairly shallow) generalization. The AI question is whether the kind of generalization it is performing is really the right kind to support generally intelligent cognition.
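A deliberately shallow toy example (again my own code, not the paper's method) shows what generalization at the level of sentence forms looks like: the "knowledge" is a surface pattern over word sequences, so the system answers "dad" questions exactly as well as questions about a nonsense relation it could not possibly have grounded.

```python
# Sentence-form-level generalization: one learned surface pattern,
# "X is Y's Z", supports answering "who is Y's Z?" with no concept
# of what any of the relations mean. Facts here are hypothetical.

import re

facts = ["Tom is Mary's dad", "Blork is Mary's wuggle"]

def answer(question):
    match = re.match(r"who is (\w+)'s (\w+)\?", question.lower())
    if not match:
        return "I don't know"
    owner, relation = match.groups()
    for fact in facts:
        parsed = re.match(r"(\w+) is (\w+)'s (\w+)", fact.lower())
        if parsed and parsed.group(2) == owner and parsed.group(3) == relation:
            return f"{parsed.group(1)} is {owner}'s {relation}"
    return "I don't know"

print(answer("who is Mary's dad?"))     # answered without knowing what a dad is
print(answer("who is Mary's wuggle?"))  # a meaningless relation works just as well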
My feeling is that the kind of processing their network is doing actually plays only a minor supporting role in human question-answering and dialogue behavior. They are using a somewhat realistic cognitive architecture for reactive processing, and a somewhat realistic neural learning mechanism -- but the way the learning mechanism is used within the architecture for processing language is not very much like the way the brain processes language. The consequence is that their system is not really forming the kinds of abstractions that a human mind (even a child's mind) automatically forms when processing this kind of linguistic information.... As a result, the kinds of question-answering, question-asking, concept formation etc. their system can do will not actually resemble those of a human child, even though its answer-generation process may, under certain restrictions, give results resembling those you get from a human child...
The observations I'm making here do not really contradict anything said in the paper, though they do of course contradict some of the more overheated phrasings in the media coverage.... What we have here is a cognitive architecture intended as a fragment of an overall architecture for human-level, human-like general intelligence. Normally, this fragmentary architecture would not do much of anything on its own -- certainly not anything significant regarding language. But in order to get it to do something, the authors have paired their currently fragmentary architecture with learning subsystems in a way that wires utterances to responses more directly than happens in a human mind, bypassing many important processes related to conceptualization, motivation and so forth.
It’s an interesting step, anyway.