Saturday, March 26, 2005
Smart Man with a Funny Beard
Aubrey de Grey's website
Not only is he an uncommonly cool-looking individual -- I think he even beats me at my coolest-looking (2000, I think that was -- back when I had numerous bogus paper millions, was still on my first wife, and none of my sons had a moustache, and I still sorta thought we could probably create an AI without an embodiment for simplicity's sake...) -- but he has some extremely interesting ideas on how to combat human aging.
I have my own as well, which intersect only partly with his -- but he's thought about it a lot more than me, so he's probably more likely to be right ;-)
Like the Novamente AGI project, nearly all of Aubrey's brilliant ideas are currently almost unfunded.
Well, it's not as though society doesn't spend money on research. And a lot of good research gets funded. But research funding seems to suffer from the same peculiar human-mob shortsightedness that causes the US to stick with the absurd, archaic English system of measurement year after year ... and that's causing English to emerge as the international language while Lojban remains the province of 350 geeks on an Internet mailing list...
More later! (For readers of my just-previous blog entry: Yes, I'm still procrastinating cleaning the turtle tank!)
Darkness at the Break of Noon, Goddamned Turtle Tank, etc.
Dylan usually gets it right....
Arrrghh.... I'm in an oddly dark mood this Sunday at 5PM, probably not a good frame of mind to be blogging, but it's a good way to delay cleaning out my son's turtle tank (my son doesn't live in it; his turtle Rick does) -- I don't really want to clean the tank but I know I have to do it, and if I start working on something intense or start playing the keyboard, the turtle will probably end up swimming in its own excrement for yet another day....
Hmmm ... the fact that I'm blogging about turtle shit probably indicates that I'm in a bad mood....
Not a bad day overall -- I got lots of interesting AI thinking & writing done, and took a long walk in the woods with the dogs. Sorta miss my kids as usual when they're at their mom's for the weekend. And, an interesting guest from New Zealand is arriving in a couple hours. Oops, better mop the dogs' mud off the floor, too.... (Sometimes I wish I lived in a country like Brazil where you don't need to be rich to have a maid! Is cleaning up really the best use of any fraction of my potentially far too limited lifespan? Well, if you saw the general state of my house you'd realize I don't think so!)
Maybe Buddha was right: all existence is suffering. Well of course he was right, but he left out the other half that Nietzsche said so well: "Have you ever said Yes to one joy? O my friends, then you have said Yes to all woe as well! All things are enchained, all things are entwined, all things are in love." Or something like that. In German, which I can't read, except for a few words and phrases. Everything is all mixed up, that's the nature of humanity. Almost every experience has some suffering in it -- only the most glorious peak of joy breaks this rule. And semi-symmetrically, almost every experience has some joy. Semi-symmetrically, because the mix of joy and pain seems rather differently biased for different people, based on variations in neurochemistry and situation. Most of the time I have an asymmetrically large amount of joy, I think -- as Dylan was well aware, it's not always easy to tell -- ....
Blah blah blah.
In moods like this I seriously consider giving up on the whole AI business and doing something easier and more amusing. I could become a professor again, write philosophy books and math papers, record CD's of weird music and write novels about alien civilizations living inside magic mushrooms.... I'm really more of a philosopher/artist type, software engineering isn't my thing ... nor is business. Not that I'm bad at these things -- but they don't really grab me, grip me, whatever metaphor you want today ....
Getting rich would be nice but I don't care too much about it -- I could live quite comfortably according to my standards without being rich, especially if I left accursed Washington DC for somewhere with cheaper land. Wow, I miss New Zealand, Western Australia and New Mexico ... great places I lived back in the day ... but it seems I'm stuck here in the DC metro for another 10 years due to a shared-child-custody situation.... Well, there are worse fates. And it's a good place for business....
Well, OK, time to clean the damn turtle tank! I could try to portray the stupid turtle (yeah, they really are stupid, though my son claims he can communicate with them psychically, and they're not as dumb as snakes) swimming in its own crap as a metaphor for something, but I don't feel perverted enough right now. Or at least, I don't feel perverted in the right sort of way.
For years I used to delude myself that I was just, say, 6 or 12 or 18 months away from having a completed thinking machine. That was a fun attitude, but it turned out I wasn't quite self-delusional enough to keep it up forever. I've now gained a lot more respect for how idiotic we humans are, and how much time it takes us to work through the details of turning even quite clear and correct abstract ideas into concrete realities. I've tried hard to become more of a realist, even though it makes me significantly less happy, I suppose because getting to the end goal is more important to me than being maximally happy.
I still think that if I managed to turn Biomind into a load of cash, or some rich philanthropist or government body decided to fund Novamente R&D, I could lead a small team of AI geniuses to the creation of an AI toddler within a few years. But realistically, unless a miraculous patron shows up or DARPA undergoes a random brain tremor and suddenly decides to fund one of my proposals, it's likely to take several years before I manage to drum up the needed funding to make a serious attack on the "Novamente AI toddler problem." (Yeah, I know, good things can sometimes pop up out of nowhere. I could get an email out of the blue tomorrow from the mystery investor. That would be great -- but I'm not counting on it.) Honestly, I just barely have it in me to keep doing software business for 3-5 more years. Not that it isn't fun sometimes, not that it isn't challenging, not that I don't learn a lot -- but it's an affront to my "soul" somehow (no, I don't believe in any religious crap...). And no, it's not that I'm a self-contradictory being who would feel that way about any situation -- there are lots of things I love doing unreservedly, software business just isn't one of them. The difficulty is that the things I love doing don't seem to have decent odds of putting me in a position to create a thinking machine. I love music but I'm not good enough to become a star; and I'm about maximally good at fiction writing IMO, but my style and taste are weird enough that they're not likely to ever make me rich..... Urrgghh!!
Y'know, if I didn't have kids and obscenely excessive alimony payments (which were determined at a time when my businesses were more successful, but now my income is a lot lower and the alimony payment remains the same!! ... but hey, they only go on another couple years ;-p), I might just retreat to an electrified hut in some Third World country and program for three years and see if I could make the Novamente toddler myself. No more business and management and writing -- just do it. Very appealing idea. But Zarathustra (oldest son) starts college in a couple years. The bottom line is I'm not singlemindedly devoted to creating AI even though I think it's the most important thing for me to do -- I'm wrapped up with human attachments -- family attachments, which mean an awful lot to me.
Funny, just this morning I was reflecting on how great it was to be alone for a change -- the kids are with their mom, my wife is overseas visiting her family, the dogs and cats and turtle and gerbil don't quite count (OK, the gerbil almost does...) -- how peaceful and empty it felt and how easy it was to think clearly and work uninterruptedly. But now I see the downside: if a dark mood hits me there's no one to lift me out of it by showing me a South Park rerun or giving me a hug.... Human, all-too-human indeed!
And now this most boring and silly of my blog entries comes to an end. Unlike the previous ones I don't think I'll publicize this one on any mailing lists! But I guess I will click "Publish Post" in spite of some momentary reservations. Maybe someone will be amused to observe that egomaniacal self-styled AI superheroes have the same erratic human emotions as everyone else....
How important is it for this kind of human chao-emotionality to survive the Singularity? I'm not saying it shouldn't -- but isn't there some way to extract the joyous essence of humanity without eliminating what it means to be human? Perhaps there is. After all, some humans are probably "very happy" 5-10 times more often than others. What percentage of happiness can you achieve before you lose your humanity? All human existence has some suffering wending through it, but how much can it be minimized without creating "Humanoids"-style euphoridic idiot-bliss? I don't know, but even though I'm a pretty happy person overall, I'm pretty sure my unhappiness level hasn't yet pushed up against the minimum euphoridiotic boundary ;-p
And in classically humanly-perverse style, I find that writing about a stupidly unpleasant mood has largely made it go away. Turtle tank, here I come! Suddenly it doesn't seem so bad to do software business for a few more years, or spend a year going around giving speeches about AI until some funding source appears. Why the hell not? (No, I haven't taken any drugs during the last 10 minutes while typing this!). There's plenty of joy in life -- I had a great time doing AI theory this morning, and next week I'll be canoeing in the Everglades with my wife and kids. Maybe we should bring the turtle and let it swim behind the canoe on a leash?
Ahh.... Turtle tank, turtle tank, turtle tank. (That thing has really gotten disgusting, the filter broke and I need to drain it entirely and install a new filter.) Yum.
Saturday, March 12, 2005
Lojbanic AI and the Chaotic Committee of Sub-Bens
Back in 1998, during the Webmind project, my colleague Mark Shoulson mentioned to me a language called Lojban, which he said was based on predicate logic. He observed to me in passing that it might be easier for us to make our AI system understand Lojban than English. I agreed that it might be, if Lojban was more logically structured, but I reckoned this wasn't very practical, since no one on the team except Mark spoke any Lojban. Also, we were interested in creating a real AI incrementally, along a path that involved spinning off commercial apps -- and the commercial applications of a Lojban-speaking AI system seemed rather few.
Well, six and a half years later, Mark's suggestion has started to seem like a pretty good one. In my new AI project Novamente, we have progressed moderately far along the path of computational language understanding. Our progress toward powerful general AI has been painfully slow due to the team's need to pay rent and the lack of any funding oriented toward the grand AI goal, but for 2004 and part of 2003 the Novamente team and I had some funding to build some English language processing software -- and while we didn't build anything profoundly real-AI-ish, we used the opportunity to explore the issues involved in AI language processing in some depth.
The language processing system that we built is called INLINK and is described here. It doesn't understand English that well by itself, but it interacts with a human user, presenting alternate interpretations of each sentence typed into it, until the human verifies it's found a correct interpretation. The interactive process is slow and sometimes irritating, but it ultimately works, allowing English sentences to be properly interpreted by the AI system. We have plans to create a version of the INLINK system called BioCurator, aimed at biological knowledge entry -- this should allow the construction of a novel biology database containing formal-logic expressions representing biological knowledge of a much subtler nature than exists in current online bio resources like the Gene Ontology.
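To make the interactive methodology concrete, here's a toy Python sketch of the kind of loop I mean. The function names and the canned "parser" are illustrative inventions of this sketch, not actual INLINK internals:

    # Toy sketch of an INLINK-style interactive disambiguation loop.
    # parse_candidates() fakes a parser that returns alternate readings;
    # a real system would generate these from syntax + semantic mapping.

    def parse_candidates(sentence):
        if sentence == "I saw the man with the telescope":
            return [
                "saw(I, man) & with(saw_event, telescope)",  # I used the telescope
                "saw(I, man) & with(man, telescope)",        # the man carried it
            ]
        return [sentence]

    def interactive_interpret(sentence, confirm):
        # Present each candidate reading until the human accepts one.
        for reading in parse_candidates(sentence):
            if confirm(reading):
                return reading          # this logical form goes into memory
        raise ValueError("no interpretation accepted; please rephrase")

    # Simulate a user who rejects the first reading and accepts the second:
    answers = iter([False, True])
    print(interactive_interpret("I saw the man with the telescope",
                                lambda reading: next(answers)))

Slow and sometimes irritating, as I said -- but the human closes the loop that the parser alone can't.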
I've had a lot of doubts about the value of computational linguistics research for "real AI" -- there's a moderately strong argument that it's better to focus on perception, action and embodiment, and let the AI learn language as it goes along interacting with humans using its (real or simulated) body. On the other hand, there's also an argument that a certain degree of "cheating" may be helpful -- that building in some linguistic knowledge and facility may be able to accelerate the experiential-language-learning process. I've outlined this argument in an article called Post-Embodied AI.
The work on INLINK has clarified for me exactly what's involved in having an AI system understand English (or any other natural language). Syntax processing is tricky but the problems with it can be circumvented using an interactive methodology as we've done in INLINK; and eventually the system can learn from its errors (based on repeated corrections by human users) and make fewer and fewer mistakes. The result of INLINK is that English sentences are translated into probabilistic logical expressions inside Novamente's memory, which may then be interpreted, reasoned on, data-mined, intercombined, and yadda yadda yadda. Very nice -- but nasty issues of computational efficiency arise.
Novamente's probabilistic-inference module currently exists only in prototype form, but the prototype has proven capable of carrying out commonsense reasoning pretty well on a number of simple test problems. But there's a catch: for the reasoning process to be computationally tractable, the knowledge has to be fed to the reasoning module in a reasonably simple format. For instance, the knowledge that Ben likes the Dead Kennedys has to be represented by a relationship something like
#likes(#Ben_Goertzel, #Dead_Kennedys)
where the notation #X refers to a node inside Novamente that is linked by a high-strength link to the WordNode/PhraseNode representing the string X. Unfortunately, if one types the sentence
"Ben likes the Dead Kennedys"
into INLINK, the Novamente nodes and links that come out are more complicated and numerous and less elegant. So a process called "semantic transformation" has to be carried out. This particular case is simple enough that this process is unproblematic for the current Novamente version. But for more complex sentences, the process is, well, more complex, and the business of building semantic transformations becomes highly annoying. One runs into severe issues with the fuzziness and multiplicity of preposition and verb-argument relationships, for example. As occurs so many times in linguistics and AI, one winds up generating a whole bunch of rules which don't quite cover every situation -- and one realizes that in order to get true completeness, so many complexly interlocking small rules are needed that explicitly encoding them is bound to fail, and an experiential learning approach is the only answer.
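To show what one such semantic-transformation rule amounts to, here's a toy Python rendering -- the tuple format below is a simplified stand-in for the real parser output, which is messier, and the rule itself is my invention for illustration:

    # One toy semantic-transformation rule: collapse a subject-verb-object
    # parse into the simple #verb(#subj, #obj) form the inference module wants.

    parse = [                      # rough parse of "Ben likes the Dead Kennedys"
        ("subj", "likes", "Ben"),
        ("obj",  "likes", "Dead_Kennedys"),
        ("det",  "Dead_Kennedys", "the"),
    ]

    def subj_verb_obj(relations):
        subj = {verb: noun for rel, verb, noun in relations if rel == "subj"}
        obj  = {verb: noun for rel, verb, noun in relations if rel == "obj"}
        for verb in subj.keys() & obj.keys():     # verbs with both slots filled
            return f"#{verb}(#{subj[verb]}, #{obj[verb]})"
        return None                # rule doesn't apply -- try another rule

    print(subj_verb_obj(parse))    # -> #likes(#Ben, #Dead_Kennedys)

The catch, as noted, is that this one rule handles only the easiest case; prepositions and fuzzy verb-argument relationships each demand more rules, and the pile never quite closes.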
And this is where -- as I just recently realized -- Lojban should come in! Mark Shoulson was right back in 1998, but I didn't want to see it (urrrgghh!! what useful things are smart people saying to me now that I'm not accepting simply because I'm wrapped up in my own approaches?? why is it so hard to truly keep an open mind?? why is my information processing capacity so small??!! wait a minute -- ok -- this is just the familiar complaint that the limitations of the human brain are what make it so damn hard to build a superior brain. And the familiar observation that cutting-edge research has a way of making the researcher feel REALLY REALLY STUPID. People tell me I'm super-smart but while working on AI every day I come to feel like quite a bloody moron. I only feel smart when I re-enter the everyday world and interact with other people ;-p)
What if instead of making INLINK for English, we made it for Lojban (LojLink!)? Of course this doesn't solve all the problems -- Lojban is a constructed language based on formal logic, but it's not equivalent to formal logic; it allows ambiguity where the speaker explicitly wants it, otherwise it would be un-usable in practice. Semantic transformation rules would still be necessary to make an AI system understand Lojban. But the human work required to encode such transformations -- and the AI learning required to learn such transformations -- would clearly be one or two orders of magnitude less for Lojban.
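A toy example of why: a simple Lojban bridi is just a predicate word (selbri) plus argument slots (sumti), so for my cartoon fragment of the grammar the parser is nearly trivial. The sample sentence is only approximate Lojban, and the code is a gross simplification of real Lojban parsing:

    # Cartoon parser for bridi of the form "la NAME selbri la NAME":
    # "la" marks a name argument (sumti); the bare word is the predicate (selbri).

    def parse_bridi(sentence):
        tokens = sentence.replace(".", "").split()
        sumti, selbri = [], None
        i = 0
        while i < len(tokens):
            if tokens[i] == "la":            # name marker: next token is a sumti
                sumti.append(tokens[i + 1])
                i += 2
            else:                            # otherwise it's the selbri
                selbri = tokens[i]
                i += 1
        return f"#{selbri}({', '.join('#' + s for s in sumti)})"

    # "nelci" = "x1 likes x2"; the name spellings are approximate:
    print(parse_bridi("la ben. nelci la dedkenedis."))   # -> #nelci(#ben, #dedkenedis)

Compare that with what it takes to squeeze the same predicate out of the English sentence, and the attraction is obvious.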
Lojban isn't perfect... in my study of Lojban over the last week I've run up against the expected large number of things I would have designed differently, if I were building the language. But I have decided to resist the urge to create my own Lojban-ish language for AI purposes, out of respect for the several decades of work that have gone into "tuning" Lojban to make it more usable than the original version was.
In some respects Lojban is based on similar design decisions to the knowledge representation inside my Novamente AI Engine. For instance, in both cases knowledge can be represented precisely and logically, or else it can be represented loosely and associatively, leaving precise interpretation reliant on contextual factors. In Lojban loose associations are represented by constructs called "tanru" whereas in Novamente they're represented by explicit constructs called AssociativeLinks, or by emergent associations between activity-patterns in the dynamic knowledge network.
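For example, the tanru "gerku zdani" (gerku = dog, zdani = house/nest) pairs two concepts without saying how they relate -- which is more or less what an AssociativeLink records. A sketch, where the AssociativeLink class below is just an illustrative stand-in for the Novamente construct of the same name, not its real interface:

    from dataclasses import dataclass

    @dataclass
    class AssociativeLink:
        modifier: str      # e.g. "gerku" (dog)
        head: str          # e.g. "zdani" (house/nest)
        strength: float    # how strongly the two concepts associate

    # The link records THAT dog and house are associated, not HOW:
    # a house for dogs? a house shaped like a dog? Context decides later.
    tanru = AssociativeLink("gerku", "zdani", strength=0.8)
    print(f"{tanru.modifier} ~ {tanru.head} (strength {tanru.strength})")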
Next, it's worth noting that Lojban was created and has been developed with a number of different goals in mind -- my own goal, easier interfacing between humans and early-stage AGI's, being just one of them.
Some Lojbanists are interested in having a "culturally neutral" language -- a goal which, while interesting, means fairly little to me.
In fact I don't really believe it's possible -- IMO Lojban is far from culturally neutral, it embodies its own culture, a nerdy and pedantic sort of culture which has plusses and minuses. There is a Lojban term "malglico" which translates roughly to "damn English" or "fucking English" -- it refers to the tendency to use Lojban in English-like ways. This is annoying to Lojban purists but really doesn't matter to me. What I care about is being able to communicate in a way that is fluid and simple and natural for me, and easy for an early-stage AI to comprehend. If the best way to achieve this is through a malglico dialect of Lojban, so be it. If malglico interferes with the comprehensibility of Lojban by AI software, however, then I'm opposed to it.
I've printed up a bunch of materials on Lojban and started studying it seriously -- if I keep up with it then in 6 months or so I'll be a decent Lojbanist. Generally I'm not much good at learning languages, but that's mostly because it bores me so much (I prefer learning things with more of a deep intrinsic structure -- languages always strike me as long lists of arbitrary decisions, and my mind wanders to more interesting things when "I" try to force it to study them...). But in this case I have a special motivation to help me overcome the boredom....
If you want to try to learn Lojban yourself, the most useful resources I've found are:
- This set of Lojban lectures
- This lengthy, complete exposition of Lojban grammar
- This mathematical formalization of Lojban grammar
- This Lojban-English dictionary (take the file dict.ps and convert it to .pdf)
- This English-Lojban dictionary
If it does happen that we teach Novamente to speak Lojban before English then in order to participate in its "AI preschool" you'll need to know Lojban! Of course once it gets beyond the preschool level it will be able to generalize from its initial language to any language. But the preschool level is my focus at the moment -- since as I'm intensely aware, we haven't gotten there yet!
I remain convinced that with 2-3 years of concentrated single-focused effort by myself and a handful of Novamente experts (which will probably only be possible if we get some pure-AI-focused funding, alas), we can create a Novamente system with the intelligence and creativity and self-understanding of a human preschooler. But I'm trying really hard to simplify every aspect of my plan in this regard, just to be sure that no unexpected time-sinks come along. One advantage of NOT having had pure-AI-focused funding for the last few years is that the AI design has been refined an awful lot during this frustrating period. The decision to take a "post-embodied" approach to linguistics -- incorporating both experiential learning and hard-wiring of linguistic knowledge -- is not a new one; that was the plan with Webmind, back in the day. But the idea of doing initial linguistic instruction and hard-wiring for Novamente in Lojban rather than English is a new one and currently strikes me as quite a good one.
Ah -- there's a bit of a catch, but not a big one. In order to do any serious "hard-wiring" of Lojban understanding into Novamente or any other AI system, the existing computational linguistics resources for Lojban need to be beefed up a bit. I describe exactly what needs to be done here. It seems to me there's maybe 3/4 of a man-year of work in making pure Lojbanic resources, and another year of work in making resources to aid in automated Lojban-English translation.
And another interesting related point. Back in 1998, when Mark Shoulson first pointed Lojban out to me, I thought there were no practical commercial applications for a Lojban-based AI system; I've now changed my mind. It seems to me that an AI system with a functional Lojban language comprehension module and a modest level of inferential ability would actually be quite valuable in the area of knowledge management. If a group of individuals were trained in Lojban, they could enter precise knowledge into a computer system very rapidly, and this knowledge could then be reasoned on using Novamente or other tools. This knowledge base could then be queried and summarized in English -- because processing simple English queries using a system like INLINK isn't very hard, and doing crude Lojban-English translation for results reporting isn't that hard either. In any application where some institution has a LOT of knowledge to encode and several years to do it, it may actually make sense to take a Lojbanic approach rather than a more standard approach. Here you'll find an overview of this approach to knowledge management, which I call LojLink.
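The shape of the pipeline, as a minimal Python sketch -- every piece here is a hypothetical stub of mine, not real Novamente or LojLink code:

    # LojLink-style knowledge management, cartoon version:
    # Lojban in, logic in the middle, (crude) English out.

    knowledge_base = []

    def encode(lojban_utterance, logical_form):
        # A trained human encoder enters knowledge in Lojban; the near-1:1
        # grammar makes the logical form cheap to extract (taken as given here).
        knowledge_base.append(logical_form)

    def query(english_question):
        # Crude keyword matching stands in for INLINK-style query processing.
        keyword = english_question.lower().split()[-1].rstrip("?")
        hits = [fact for fact in knowledge_base if keyword in fact.lower()]
        return hits or ["(nothing known)"]

    encode("la ben. nelci la dedkenedis.", "#nelci(#ben, #dedkenedis)")
    print(query("Who likes dedkenedis?"))   # -> ['#nelci(#ben, #dedkenedis)']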
One example where this sort of approach to knowledge encoding could make sense is bioscience -- I've long thought that it would be good to have every PubMed abstract entered into a huge database of bio knowledge, where it could then be reasoned on and connected with online experimental biology data. But AI language comprehension tools aren't really up to this task -- all they can do now is fairly simplistic "information extraction." We plan to use a bio-customized version of INLINK to get around this problem, but entering knowledge using INLINK's interactive interface is always going to be a bit of a pain. There's enough biology out there, and the rate of increase of bio knowledge is fast enough, that it makes sense to train a crew of bio knowledge encoders in Lojban, so that the store of bio knowledge can be gotten into computer-comprehensible form at maximum rate and minimum cost. Yes, I realize this sounds really weird and would be a hard idea to sell to venture capitalists or pharma company executives -- but that doesn't mean it doesn't make sense....
As another aside, there is some Lojban poetry on the Net but I haven't found much Lojban music. I like to sing & play the keyboard sometimes (in terms of vocal style, think Bob Dylan meets Radiohead); I'm considering doing some of my future lyrics in Lojban! True, few listeners would understand what I was talking about -- but I reckon that, in many cases, the verbal contents of lyrics aren't all that important -- what's important is the genuineness of feeling attached to them, which is achievable if the words have deep meaning to the singer, whether or not the listener can understand them. Of course, I have some lyrics that violate this rule and succeed at least a bit in communicating poetically (even a bit of transhumanist lyricism here and there -- e.g. "I've got to tell you something / your lonely story made me cry / I wish we all could breathe forever / God damn the Universal Mind"). But even so I think Lojbanic lyrics could really rock....
But -- wow -- how to fit learning a new language into my schedule? Urgggh!! Way too much to do. Fortunately I have a wife who says she's willing to learn this weird language along with me, which will make things much easier; it'd be trickier to learn a language with no one to speak to. But still ... every time something new like this comes up I'm confronted with the multiplicity of Bens in my head: each with different goals and priority rankings on their shared goals ... some of them saying "Yeah! You've got to do this!", others cautioning that it will siphon away the sometimes irritatingly small amount of time currently allocated to enjoying the non-intellectual aspects of human life in the Ben-iverse....
But "I" digress. Or do I?
Perhaps internal multiplicity and the falsehood of the unified "I" is a topic best saved for another blog entry. But yet, it does tie back into Lojban -- which I notice contains a single word for "I" just like ordinary languages. This is an area where I'm tempted to introduce new Lojbanic vocabulary.
I don't know what "I" am. I like the Walt Whitman quote "I contradict myself? Very well then, I contradict myself. I am large, I contain multitudes." Indeed, I do. In From Complexity to Creativity I explored the notion of subselves extensively. This notion should be explicitly embodied in language. You should be able to say "One of my subselves wants X" rather than "I want X" -- easily, via a brief linguistic expression, rather than a complicated multi-phrasal description. The distinction between "Some of my subselves want this very intensely" and "All of my subselves want this moderately strongly" should be compactly and immediately sayable. If these things were compactly and simply expressible in language, maybe we'd get out of the habit of thinking of ourselves as unities when we're really not. At least, I'm definitely not. (Just like I feel idiotic most of the time, then feel more clever when interacting with others; similarly, when I'm on my own I often feel like a population of sub-Bens with loosely affiliated goals and desires, and then I feel more unified when interacting with others, both because others view me as a whole, and because compared to other peoples' subselves, mine all cluster together fairly tightly in spite of their differences... (and then I'm most unified of all when I let all the goals drift away and dissolve, and exist as a single non-self, basking in the 1=0, at which point humanity and transhumanity and language and all that seem no more important than un ... but now I really digress!)). And in an appropriately designed language -- say, a subself-savvy extension of Lojban -- this paragraph would be a lot shorter and simpler and sound much less silly.
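Just to show how compact the distinction could be made computationally, here's a toy sketch of subself-quantified wanting -- illustrative only, not proposed Lojban vocabulary:

    # "Some of my subselves want X intensely" vs.
    # "All of my subselves want X moderately", as one-liners.

    subselves = {
        "theorist":   {"finish the AI": 0.9, "play piano": 0.3},
        "musician":   {"finish the AI": 0.4, "play piano": 0.9},
        "family_ben": {"finish the AI": 0.5, "play piano": 0.5},
    }

    def wants(goal, quantifier, threshold):
        degrees = [goals.get(goal, 0.0) for goals in subselves.values()]
        op = any if quantifier == "some" else all       # "some" / "all"
        return op(d >= threshold for d in degrees)

    print(wants("finish the AI", "some", 0.9))   # True: one wants it intensely
    print(wants("finish the AI", "all", 0.4))    # True: all want it moderately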
And this brings up a potentially very interesting aspect of the idea of teaching AI systems in odd constructed languages. My main motivation for thinking about using Lojban instead of English to teach Novamente is to simplify the semantic mapping process. But, it's also the case that English -- like all other natural languages -- embodies a lot of really irritating illusions ... the illusion of the unified self being one of them. Lojban now also happens to embody the illusion of the unified self, but this is a lot easier to fix in Lojban than in English, because of the simpler and more flexible structure of the Lojban language. I don't buy the strongest versions of the Sapir-Whorf hypothesis (though I think everyone should read Whorf's essay-collection Language, Thought and Reality), but clearly it's true that language guides cognition to a significant extent, and this can be expected to be true of AI's at least as much as of humans.
I can envision a series of extensions to Lojban being made, with the specific objective of encouraging AI systems to learn to think according to desired patterns. Avoidance of illusions regarding self is one issue among many. Two areas where Lojban definitely exceeds English are ethics and emotion. English tends to be very confused in these regards -- look at the unnecessary ambiguities of the words "happy" and "good", for example. The current Lojban vocabulary doesn't entirely overcome these problems, but it goes a significant way, and could be improved further with modest effort.
Well, as I type these words, my son Zeb is sitting next to me playing "Final Fantasy" (yes, my work-desk sits in the living room next to the TV, which is mostly used by the kids for videogames, except for their obsessive viewing of South Park... the new season just started, there are new episodes now, and did you know Mr. Garrison is now Mrs. Garrison??!!). As I look over at the manly-chested (scrawny, artistic little 11-year-old Zeb's favorite utterance these days: "Admire my manly chest or go down trying!"), womanly-faced heroes run through their somewhat bleak simulated landscape, and feel really intensely sick of the repetitive background music, I can't help but observe that, obsessed as he is with that game, I'm even more obsessed with my own "final fantasy." Or am I? One of my "I"'s is. Another one feels a lot more like playing the piano for an hour or so before bed, even though clearly working on AI for that time-interval would be more productive in terms of the long-term good of Ben and the cosmos. Or is playing music a while justified by the mental peace it brings, enabling clearer thinking about AI research later? How to ensure against self-delusion in judgments like that? Ah, by 38 years old I have devised an excellent set of mental tools for guarding against delusion of one subself by itself, or of one subself by others -- and these tools are frustratingly hard to describe in the English language! No worries -- the community of sub-Bens remains reasonably harmonious, though in the manner of strange attractors rather than fixed psychological arrangements. The chaos goes on.... (ranji kalsa ... ) ... the human chaos goes on, moving inevitably toward its own self-annihilation or self-transcendence ... and my committee of sub-Bens unanimously agrees that it's worth spending a lot of thought and effort to bias the odds toward the latter ...
Tuesday, March 08, 2005
Cognitive Neuroscience of Consciousness
So, in this rather technical and academic blog entry (no details on my sex life or the psychotropic characteristics of Armenian swine toes today, sorry...), I'm going to talk about some interesting research in this field that I've been reading about lately....
Specifically, I'm going to briefly comment on a paper I just read, by a guy named Ned Block, called "Paradox and Cross-Purposes in Recent Work on Consciousness." The paper is in a book called "The Cognitive Neuroscience of Consciousness," which is a special issue of the journal COGNITION. Many of the other papers in the book are good too. This is one of two really good books I've recently read on this subject, the other being "Neural Correlates of Consciousness" edited by Thomas Metzinger (whose over-long tome Being No One, on the way the brain-mind constructs phenomenal selves, I also recommend).
One point raised repeatedly in the book is that the brain can often respond to stimuli in an unconscious and yet useful way. Stimuli that are too weak to enter consciousness can nevertheless influence behavior, via priming and other methods. For instance, if a person is shown a picture (the Muller-Lyer illusion) that is known to cause the human mind to mis-estimate line lengths, and then asked to make a motor response based on the line lengths in the picture (say, pointing to the ends of the lines) VERY QUICKLY, they will respond based on the actual line lengths, unaffected by the illusion. But if they are given a little more time to respond, then they will respond erroneously, falling prey to the illusion. The illusion happens somewhere between perception and cognition -- but this pathway is slow, and there can be super-quick loops between perception and action, which bypass cognition with all its benefits and illusions.
Block, in his paper, raises the familiar point that the concept of "consciousness" is a bit of a mess, and he decomposes it into three subconcepts:
- phenomenality (which I've called "raw awareness")
- accessibility (that something is accessible throughout the brain/mind, not just in one localized region)
- reflectivity (that something can be used as content of another mental experience)
Block's rough picture, as I understand it, is that:
- everything has some phenomenality ("the mind is aware of everything inside it", which to me is just a teeeeeensy step from the attractive panpsychist proposition "everything is aware")
- but only things that undergo a particular kind of neural/mental processing become reflective, and
- with reflectivity comes accessibility
Accessibility has to do with Baars' old-but-good notion of the "global workspace" -- the idea that reflective consciousness consists of representing knowledge in some kind of "workspace" where it can be freely manipulated in a variety of ways. This workspace appears not to be localized in any particular part of the brain, but rather to be a kind of coordinated activity among many different brain regions ... perhaps, in dynamical systems terms, some kind of "attractor."
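For what it's worth, the basic loop of the global-workspace idea fits in a few lines of Python -- a cartoon of the theory, with all the made-up numbers that implies:

    import random

    class Specialist:
        # A specialist process: offers content at some activation level,
        # and receives whatever content wins the global broadcast.
        def __init__(self, name):
            self.name, self.inbox = name, []
        def bid(self):
            return (random.random(), f"content from {self.name}")
        def receive(self, content):
            self.inbox.append(content)

    specialists = [Specialist(n) for n in ("vision", "language", "planning")]

    for cycle in range(3):
        activation, content = max(s.bid() for s in specialists)
        for s in specialists:            # broadcast = global accessibility
            s.receive(content)
        print(f"cycle {cycle}: broadcast {content!r}")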
The experienced intensity of consciousness of something, Block proposes, has to do largely with the intensity of the phenomenality of the something, which may have to do with the amount of activation in the neural region where the "something" is taking place. But reflectivity requires something else besides just intensity (it requires the triggering of the global workspace attractor).
In terms of scientists' search for neural correlates of consciousness, Block reckons that what they're finding now are mainly neural correlates of intense phenomenality. For instance, when the ventral area of the brain is highly active, this seems to indicate some conscious perception is going on. But, if reflectivity is a separate and additional process to phenomenality, then finding neural correlates of phenomenality may not be any help in deducing the neural basis of reflectivity.
Block's ideas fit in pretty nicely with my hypothesis (see my essay Patterns of Awareness) that the phenomenality attached to a pattern has to do with the degree to which that pattern IS a pattern in the system that it's a pattern in. In this view, locally registered things can be patterns in the brain and ergo be phenomenal to an extent; but, expansion of something into the global workspace attractor is going to make it a lot more intense as a pattern, ergo more intensely phenomenal. Ergo in the human brain intense phenomenality and reflectivity seem to go along with each other -- since both are coupled to accessibility....
All this is still pretty far from a detailed understanding of how consciousness arises in human brains. But finally, it seems to me that neuroscientists are saying the right sorts of things and asking the right sorts of questions. The reason isn't that this generation of neuroscientists is wiser than the last, but rather that modern experimental tools (e.g. fMRI and others) have led to empirical data that make it impossible either to ignore the issue of consciousness, or to continue to hold to simplistic and traditional views.
No specific brain region or brain function or neurotransmitter or whatever will be found that causes raw awareness (Block's phenomenality). But the particular aspects associated with intense human awareness -- like global cognitive accessibility and reflectivity -- will in the next few years come to be clearly associated with particular brain structures and processes. As Block proposes, these will come to be viewed as ways of modulating and enhancing (rather than causing) basic phenomenal awareness. In AI terms, it will become clear how software systems can emulate these structures and processes -- which will help guide the AI community to creating reflective and highly intelligent AI systems, without directly addressing the philosophical issue of whether AI's can really experience phenomenality (which is bogus, in my view -- of course they can; every bloody particle does; but for me, as a panpsychist, the foundational philosophy of consciousness is a pretty boring and easy topic).
I don't find these ideas have much to add to the Novamente design -- I already took Baars' global workspace notions into account in the design of Webmind, Novamente's predecessor, way back in the dark ages when Java was slow and Dubya was just a nightmare and I still ate hamburgers. But they increase the plausibility of simple mappings between Novamente and the human mind/brain -- which is, as my uncle liked to say, significantly better than a kick in the ass.
Sunday, March 06, 2005
Terrified Medieval Christs and Weird Picasso Women
The essence of some human's being -- what does that mean? The core of some human's personality.... It's different for each one of us, but still there are common patterns -- a common essence of being human. Always some pleasure and some pain. Some resignation to fate, some resolution to struggle. In the interesting faces, some deep joy, some terrible suffering. We humans are bundles of contradictions -- that's part of what makes us human.
I thought about the Singularity, of course -- about transcending what is human, and about perfecting what is human to make something that's human yet better than human. And I found myself really intuitively doubting the latter possibility. Isn't the essence of being human all bound up with contradiction and confusion, with the twisting nonstationary nonlinear superposition of pleasure and pain, of clarity and illusion, of beauty and hideousness?
Some humans are perverse by nature -- for instance, priests who condemn child molestation in their sermons while conducting it in their apartments. But even without this nasty and overt sort of self-contradiction, still, every human personality is a summation of compromises. I myself am a big teeming compromise, with desires to plunge fully into the realm of the intellect, to spend all day every day playing music, to hang out and play with my wife and kids all the time, to live in the forest with the pygmies, to meditate and vanquish/vanish the self....
Potentially with future technology we can eliminate the need for this compromise by allowing Ben to multifurcate into dozens of Bens, one living in the forest with the pygmies, one meditating all day and achieving perfect Zen enlightenment, one continually playing children's games and laughing, one proving mathematical theorems until his brain is 90% mathematics, one finally finishing all those half-done novels, one learning every possible musical instrument, one programming AI's, etc. etc. Each of these specialized Bens could be put in telepathic coordination with the others, so they could all have the experience, to an extent, of doing all these different things. This would be a hell of a great way to live IMO -- I'd choose it over my current existence. But it'd be foolish to call this being human. Getting rid of the compromises means getting rid of humanity.
The beauty I see in the faces portrayed by great artists is largely the beauty of how individual human personalities make their own compromises, patch together personal realities from the beauty and the terror and the love and the hate and the endless press of limitations. Getting rid of the compromises is getting rid of humanity....
Trite thoughts, I suppose.... Just another page in my internal debate about the real value of preserving humanity past the Singularity. Of course, I am committed to an ethic of choice -- I believe each sentient being should be allowed to choose to continue to exist in its present form, unless doing so would be radically dangerous to other sentient beings. Humans shouldn't be forced to transcend into uberhumans. But if they all chose to do so, would this be a bad thing? Intuitively, it seems to me that 90% of people who chose to remain human rather than to transcend would probably be doing so out of some form of perversion. And the other 10%? Out of a personality-central attachment to the particular beauty of being human, the particular varieties of compromises and limitations that make humans human ... the looks on the faces of the twisted medieval Christs and weird Picasso women....
(Of course, in spite of my appreciation for the beauty of the human, I won't be one of those choosing to turn down transcension. Though I may allow a certain percentage of my future multi-Bens to remain human ... time will tell!)
Introductory Whining and Complaining About the Difficulty of Getting Funding to Build a Real AI
When people have suggested I start a blog, my answer has been that I don't have time -- and I really don't -- but I decided to give it a try anyway. The last time I tried blogging was in 2002; I kept going for a few months, then petered out. Maybe this time will have a better fate!
What's on my mind lately? Frustration, in large part. My personal life is going great -- last year my drawn-out divorce finally concluded; my kids are finally pretty much settled into their new routine and doing well again, and my new wife Izabela and I are having a great time together.
I'm enjoying doing bioinformatics research with Biomind, and recording whacky music using Sonar4 (the first time I've hooked up a sequencer to my keyboard for many years; I'd avoided it for a while due to its powerful addictive potential).
Life is good. But the problem is: the longer I think about it, the more I write about it and the more exploratory design and engineering work my Novamente colleagues and I do, the more convinced I am that I actually know how to make a thinking machine... an intelligent software program, with intelligence at the human level and beyond.
Yeah, I know, a lot of people have thought that before, and been wrong. But obviously, SOMEONE is going to be the first one to be right....
I don't pretend I have every last detail mapped out. There are plenty of little holes in my AI design, and they'll need to be filled in via an iterative, synergistic process of experimentation and theory-revision. But the overall conceptual and mathematical design is solid enough that I'm convinced the little holes can be filled in.
What's frustrating is that, though I can clearly see how to do it, I can also clearly see how much work it requires. Not a Manhattan Project scale effort. But more work than I could do in a couple years myself, even if I dropped everything else and just programmed (and even if I were a faster/better programmer like some of the young hacker-heroes on the Novamente team).
My guess is that 3 years of 100% dedicated effort by a team of 5-6 of the right people would be enough to create an AI with the intelligence of a human toddler. After that point, it's mostly a matter of teaching, along with incremental algorithm/hardware improvements that can be carefully guided based on observation of the AI mind as it learns.
And I have the right 5-6 people already, within the Novamente/Biomind orbit. But they're now spending their time on (interesting, useful) narrow-AI applications rather than on trying directly to build a thinking machine.
I thought for a while that we could create a thinking machine along the way, whilst focusing on narrow-AI applications. But it's not gonna work. Real AGI and narrow-AI may share software components, they may share learning algorithms and memory structures, but the basic work of building an AGI cognitive architecture out of these components, algorithms and structures has nothing to do with narrow AI.
As CEO of Biomind, a startup focused on analyzing biological data using some tools drawn from the Novamente AI Engine (our partially-complete, wannabe AGI system) and some other AI tools as well, I'm constantly making decisions to build Biomind software using methods that I know don't contribute much if at all toward AGI. This is because from a Biomind point of view, it's often better to have a pretty good method that runs reasonably fast and can be completed and tested relatively quickly -- rather than a better method that has more overlap with AGI technology, but takes more processor time, more RAM, and more development time.
Although our work on Biomind and other commercial apps has helped us to create a lot of tools that will be useful for building an AGI (and will continue to do so), the bottom line is that in order to create an AGI, dedicated effort will be needed. Based on the estimate I've given above -- 5-6 people for 3 years or so, i.e. roughly 16-18 person-years at modest startup salaries -- it would seem it could be done for a million US dollars or a little less.
Not a lot of money from a big-business perspective. But a lot more than I have lying around, alas.
Some have asked why I don't just build the thing using volunteers recruited over the Net. There are two reasons.
One, this kind of project doesn't just require programmers, it requires the right people -- with a combination of strong programming, software design, cognitive science, computer science and mathematical knowledge. This is rare enough that it's a hard combination to find even if you have money to pay for it. To find this combination among the pool of people who can afford to work a significant number of hours for free ... well the odds seem pretty low.... (Though if you have the above skills and want to work full or near-full-time on collaborating to build a thinking machine, for little or no pay, please send me an email and we'll talk!!)
Two, this is a VERY HARD project, even with a high-quality design and a great team, and I am not at all sure it can be successfully done if the team doesn't have total focus.
Well, I'm hoping the tides will turn in late 2005 or early 2006. Finally this year I'll release the long-awaited books on the Novamente design and the underlying ideas, and following that I'll attempt a serious publicity campaign to attract attention to the project. Maybe Kurzweil's release of his Singularity book in late 2005 will help, even though he's a skeptic about AGI approaches that don't involve detailed brain simulation. I'd much rather focus on actually building AGI than on doing publicity, but, y'know, "by any means necessary" etc. etc. ;-)
OK, that's enough venting for one blog entry! I promise that I won't repeat this theme over and over again, I'll give you some thematic variety.... But this theme is sure to come up again and again, as it does in my thoughts....
Very foolish of the human race to be SO CLOSE to something SO AMAZING, and yet not have the common sense to allocate resources to it instead of, for instance, the production of SpongeBob-flavored ice cream (not that I have anything against SpongeBob, he's a cute little guy...)...
P.S. Those with a taste for history may recall that in the late 1990's I did have a significant amount of funding for pure AI work, via the startup company Intelligenesis (aka Webmind), of which I was a cofounder. We tried for about 3 years and failed to create a real AI, alas. But this was not because our concepts were wrong. On the contrary, it was because we made some bad decisions regarding software engineering (too complex!), and because I was a bad manager, pursuing too many different directions at once instead of narrowly focusing efforts on the apparently best routes. The same concepts have now been shaped into a much simpler and cleaner mathematical and software design, and I've learned a lot about how to manage and focus projects. Success consists of failing over and over in appropriately different ways!