In one of those pleasant synchronicities, a couple of days ago PJ Manney started a conversation with me about music and the scientific mind, at the same time as I received in the mail a book I had ordered a couple of weeks ago, "The Singing Neanderthals," about the cognitive origins of music.
So, here I'll start with some personal notes and musings in the musicaloidal direction, and finally wander around to tying them in with cognitive theory...
I had told PJ I was a spare-time semi-amateur musician (improvising and composing on the electronic keyboard -- yeah, one of these days I'll put some recordings online; I keep meaning to but other priorities intervene) and she was curious about whether this had had any effect on my AI and other scientific work.
I mentioned to her that I often remember how Nietzsche considered his music improvisation necessary to his work as a philosopher. He kept promising himself to stop spending so much time on it, and once said something like "From now on, I will pursue music only insofar as it is domestically necessary to me as a philosopher."
This is a sentiment I have expressed to myself many times (my music keyboard being a tempting 10 feet away from my work desk...). Like Nietzsche, I have found a certain degree of musicological obsession "domestically necessary" to myself as a creative thinker.... The reasons for this are interesting to explore, although one can't draw definite conclusions based on available evidence....
When I get "stuck" thinking about something really hard, I often improvise on the piano. That way one of two things happens: either
1) my mind "loosens up" and I solve the problem
2) I fail to solve the problem, but then instead of being frustrated about it, I abandon the attempt for a while and enjoy myself playing music ;-)
Improvising allows one's music to follow one's patterns of thought, so the music one plays can sorta reflect the structure of the intellectual problem one is struggling with....
I drew on my experiences composing/improvising music when theorizing about creativity and its role in intelligence, and cooking up the aspects of the Novamente AGI design that pertain to flexible creativity....
As well as composing and improvising, I also listen to music a lot -- basically every kind of music except pop-crap and country -- most prototypically, various species of rock while in the car, and instrumental jazz/jazz-fusion when at home working ... [I like music with lyrics, but I can't listen to it while working, it's too distracting... brings me back too much to the **human** world, away from the world of data structures and algorithms and numbers!! ... the nice thing with instrumental music is how it captures abstract patterns of flow and change and interaction, so that even if the composer was thinking about his girlfriend's titties when he wrote the song, the abstract structures (including abstract **emotional** structures) in the music may feel (and genuinely **be**) applicable to something in the abstract theory of cognition ;-) ] ... but more important than that is the almost continual unconsciously-improvised "soundtrack" inside my head. It's as though I'm thinking to music about 40% of the time, but the music is generated by my brain as some kind of interpretation of the thoughts going on.... But yet when I try to take this internal music and turn it into **real music** at the keyboard, the translation process is of course difficult, and I find that much of the internal music must exist in some kind of "abstract sound space" and could never be fully realized by any actual sounds.... (These perverted human brains we are stuck with!!!)
Now, on to Mithen's book "The Singing Neanderthals," which makes a fascinating argument for the centrality of music in the evolution of human cognition.... (His book "The Prehistory of Mind" is really good as well, and probably more of an important work overall, though not as pertinent to this discussion...)
In brief he understands music as an instantiation and complexification of an archaic system of communication that was based (not on words but) on patterns of vocal tonal variation.
(This is not hard to hear in Radiohead, but in Bach it's a bit more sublimated ;-)
This ties in with the hypothesis of Sue Savage-Rumbaugh (who works with the genius bonobo Kanzi) that language likely emerged originally from protolanguages composed of **systems of tonal variation**.
Linguist Alison Wray has made related hypotheses: that protolanguage utterances were holistic, and got partitioned into words only later on. What Savage-Rumbaugh adds is that before protolanguage was partitioned into words, it was probably possessed of a deep, complex semantics of tonal variation. She argues this is why we don't recognize most of the existing language of animals: it's not discrete-word language but continuous-tonal-variation language.
(Funny that both these famous theorists of language-as-tonal-variation are women! I have sometimes been frustrated by my mom or wife judging my statements not by their contents but by the "tone" of delivery ;-)
This suggests that a nonhuman AI without a very humanlike body is never going to experience language anywhere near the same way as a human. Even written language is full of games of implied tonal variation-patterns; and in linguistic terms, this is probably key to how we select among the many possible parses of a complex sentence.
[Side note to computational linguists and pragmatic AI people: I agree the parse selection problem can potentially be solved via statistics, like Dekang Lin does in MiniPar; or via pure semantic understanding, as we do when reading Kant in translation, or anything else highly intellectual and non-tonal in nature.... But it is interesting to note that humans probably solve parse selection in significant part thru tonal pattern recognition....]
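To make the statistical route concrete, here's a toy sketch of corpus-driven parse selection: score each candidate dependency parse by how frequently its (head, dependent) pairs occur in some parsed corpus, and pick the highest scorer. All the counts and parses below are invented for illustration; this is a caricature of the idea, not a description of how MiniPar actually works internally.

```python
import math
from collections import Counter

# Toy dependency counts, as if gathered from a parsed corpus
# (all numbers here are invented for illustration).
dep_counts = Counter({
    ("saw", "man"): 50,
    ("saw", "telescope"): 5,
    ("man", "telescope"): 20,
})
total = sum(dep_counts.values())

def parse_score(parse):
    """Score a parse (a list of (head, dependent) pairs) by the summed
    log-frequency of its dependencies, with add-one smoothing so that
    unseen pairs don't zero out the whole parse."""
    return sum(math.log((dep_counts[d] + 1) / (total + 1)) for d in parse)

# Two candidate parses of "I saw the man with the telescope":
parse_a = [("saw", "man"), ("saw", "telescope")]  # I used the telescope
parse_b = [("saw", "man"), ("man", "telescope")]  # the man had the telescope

best = max([parse_a, parse_b], key=parse_score)
```

With these made-up counts, "man with telescope" is the more frequent pairing, so the second reading wins -- the statistics stand in for the tonal and semantic cues a human would use.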
Regarding AI and language acquisition, this line of thinking is just a further justification for taking a somewhat nonhumanlike approach to protolanguage learning; since, if this sort of theory is right, the humanlike approach is currently waaay inaccessible to AIs, even ones embodied in real or simulated robots... It will be quite a while until robot bodies support deep cognitive/emotional/social experience of tonal variation patterns in the manner that we humans are capable of.... The approach to early language learning I propose for Novamente is a subtle combination of humanlike and nonhumanlike aspects.
More speculatively, there may be a cognitive flow-through from "tonal pattern recognition" to the way we partition up the overall stream of perceived/enacted data into events -- the latter is a hard cognitive/perceptual problem, which is guided by language, and may also on a lower level be guided by subtle tonal/musical communicative/introspective intuitions. (Again, from an AI perspective, this is justification in favor of a nonhumanlike route ... one of the subtler aspects of high-level AI design, I have found, is knowing how to combine human-neurocognition inspiration with computer-science inspiration... but that is a topic for another blog post some other day...)
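For the code-minded, the "tonal cues guiding event segmentation" intuition can be caricatured in a few lines: treat a stream as a pitch contour, and place an event boundary wherever the contour jumps sharply. This is a made-up toy (the function name, threshold, and data are all mine, and it's emphatically not a claim about how brains or Novamente segment experience), but it shows the shape of the idea.

```python
def segment_by_tonal_jumps(pitches, threshold=3.0):
    """Split a pitch contour (e.g., MIDI-style semitone values over time)
    into 'events' wherever the step between adjacent values exceeds
    threshold -- a crude stand-in for boundary detection driven by
    tonal variation."""
    segments, current = [], [pitches[0]]
    for prev, cur in zip(pitches, pitches[1:]):
        if abs(cur - prev) > threshold:  # a big tonal jump: close the event
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

# A contour with two sharp jumps yields three "events":
contour = [60, 61, 60, 62, 69, 70, 68, 55, 56]
print(segment_by_tonal_jumps(contour))
# → [[60, 61, 60, 62], [69, 70, 68], [55, 56]]
```

The point of the toy is only that a continuous tonal signal already contains natural "seams" -- the hard cognitive problem is doing this robustly, jointly with semantics, on messy real perception.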
I am also reminded of the phenomenon of the mantra -- which is a pattern of tonal variation that is found to have some particular psychospiritual effect on humans. I have never liked mantras much personally, being more driven to the spare purity of Zen meditation (in those rare moments these days when emptying the intellectual/emotional mind and seeking altered states of purer awareness seems the thing to do...); but in the context of these other ideas on music, tones and psychology, I can see that if we have built-in brain-wiring for responding to tonal variation patterns, mantras may lock into that wiring in an interesting way.
I won't try to describe for you the surreal flourish of brass-instrument sounds that I hear in my mind at this moment -- a celebratory "harmony of dissonance" tune/anti-tune apropos of the completion of this blog post, and the resumption of the software-code-debugging I was involved with before I decided to distract myself briefly via blogging...