
Sunday, April 10, 2005

The Seven Pillars of Senescence

A few more thoughts on Aubrey de Grey's work, written down as notes while updating the
"anti-aging" chapter in "The Path to Posthumanity" as I prepare the final version of the manuscript...

Check out de Grey's site here, it's well worth a few hours reading even if biology isn't one of your main interests...

Of all the senescence researchers out there, no one has done more than Aubrey de Grey to improve our integrative understanding of the overall phenomenon of aging. I don’t always agree with his proposed solutions to particular sub-problems of the aging problem, but I find him invariably energetic, rational and insightful. Although he says he’s not a big booster of caloric restriction for humans, because he thinks its effect diminishes rapidly with the size of the organism, he’s also one of the skinniest humans I’ve ever seen, and he gives the appearance of being robustly healthy -- so I suspect he’s practicing some approximate variant of the caloric restriction diet.

de Grey’s buzzword is SENS, which stands for Strategies for Engineered Negligible Senescence – a very carefully constructed scientific phrasing for what I’ve loosely been calling here “anti-aging research.” The point of the term is that it’s not merely the slowing-down of aging that we’re after – it’s the reduction of senescence to a negligible level. And we’re not trying to achieve this goal via voodoo; we’re trying to achieve it via engineering – mostly biological engineering, though nano-engineering is also a possibility, as in Robert Bradbury’s “robobiotics” idea.

As part of his effort to energize the biology research community about SENS, de Grey has launched a contest called the “Methuselah Mouse Prize” – a prize that awards money to the researcher who produces the longest-lived mouse of the species Mus musculus. In fact there are two sub-prizes: one for sheer longevity, and a “rejuvenation” prize, given for the best life-extension therapy applicable to an already-partially-aged mouse. There is a complicated payout structure, wherein each researcher who produces the longest-lived mouse ever, or the best-ever mouse-lifespan rejuvenation therapy, receives a bit of money each week until his record is broken.
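Just to make the incentive mechanism concrete, here’s a toy sketch of the pay-until-dethroned scheme (the function, researcher names and dollar amounts are all invented for illustration; the actual prize rules are considerably more elaborate):

```python
# Toy sketch of the "paid each week until the record is broken" scheme.
# Everything here (names, amounts, function) is invented for illustration;
# the real Methuselah Mouse Prize rules are considerably more elaborate.

def weekly_payouts(record_events, weekly_amount, total_weeks):
    """record_events maps week number -> researcher who sets a new record
    that week. Returns total payout per researcher over total_weeks."""
    payouts = {}
    current_holder = None
    for week in range(total_weeks):
        if week in record_events:          # a new record displaces the old holder
            current_holder = record_events[week]
        if current_holder is not None:     # the holder collects until displaced
            payouts[current_holder] = payouts.get(current_holder, 0) + weekly_amount
    return payouts

# Researcher A sets a record in week 0; B breaks it in week 10.
print(weekly_payouts({0: "A", 10: "B"}, weekly_amount=100, total_weeks=20))
# → {'A': 1000, 'B': 1000}
```

The nice property of this structure is visible even in the toy version: the reward for setting a record scales with how long it stands, so incremental progress and dramatic breakthroughs are both incentivized.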

His idea is that, during the next decade or so, it should be possible to come pretty close to defeating senescence within mice – if the research community puts enough focus on the area. And then, porting the results from mouse to human shouldn’t take all that much longer. Of course, some techniques will port more easily than others, and unforeseen difficulties may arise. But even if we only manage to extend human lives by 30 or 40 years via partly solving the problem of aging, I’ll have 30 or 40 extra years in which to help the biologists solve the remaining problems….


Theory-wise, de Grey (correctly IMO) doesn’t believe there’s one grand root cause of senescence, but rather that it’s the result of a whole bunch of different things going wrong, because human biology didn’t evolve in a way that keeps them from going wrong. On his website, he gives a table of the seven causes of senescence, showing for each one the date when the connection between that phenomenon and senescence first became well known to biologists – and also showing, for each one, the biological mechanism that he believes will be helpful for eliminating that particular cause.

The seven causes are:

1. Cell loss, cell atrophy
Discovered: 1955
Potentially curable, according to de Grey, via: stem cells, growth factors, exercise

2. Nuclear [epi]mutations
Discovered: 1959/1982
Potentially curable via: WILT (Whole-body Interdiction of Lengthening of Telomeres)

3. Mutant mitochondria
Discovered: 1972
Potentially curable via: allotopic expression of 13 proteins

4. Cell senescence
Discovered: 1965
Potentially curable via: ablation of unwanted cells

5. Extracellular crosslinks
Discovered: 1981
Potentially curable via: AGE-breaking molecules/enzymes

6. Extracellular junk
Discovered: 1907
Potentially curable via: phagocytosis; beta-breakers

7. Intracellular junk
Discovered: 1959
Potentially curable via: transgenic microbial hydrolases


Seven basic causes – is that really all there is? Well, as de Grey puts it, “the fact that we have not discovered another major category of even potentially pathogenic damage accumulating with age in two decades, despite so tremendous an improvement in our analytical techniques over that period, strongly suggests that no more are to be found -- at least, none that would kill us in a presently normal lifetime.” Let’s hope he’s right….


One of these “Seven Pillars of Aging” should be familiar to those of you who read my essay on mitochondrial DNA and Parkinson’s disease (pointed to in a blog I posted yesterday or the day before): mutant mitochondria. Looking at this case a little more deeply is interesting for what it reveals about the strengths and potential weaknesses of de Grey’s “engineering”-based approach. The term “engineering” in the SENS acronym is not a coincidence -- de Grey came to biology from computer science, and he tends to take a different approach from conventional biologists, thinking more in terms of “mechanical” repair solutions. Whether his approach will prove the best remains to be seen; frankly I’m not biologist enough to have a strong general intuition on this point. The mainstream molecular biology community seems to think de Grey’s proposed solutions to his seven problems reveal a strange taste, but this doesn’t mean very much, as the mainstream’s scientific taste may well be mortally flawed.

Regarding mitochondrial DNA damage, de Grey’s current proposal is to fix it, not by explicitly repairing the DNA as in GENCIA’s protofection technique mentioned in my article on Parkinson's disease, but rather by replacing the flawed proteins produced by the flawed mitochondrial DNA. This could work because there is already an in-built biological mechanism that carries proteins into mitochondria: the TIM/TOM complex, which carries about 1000 different proteins produced from nuclear DNA into the mitochondria.

What de Grey proposes is to make copies of the 13 protein-coding genes in the mitochondrial genome, with a few simple modifications to make them amenable to the TIM/TOM mechanism, and then insert them into the nuclear chromosomes. Then they’ll accumulate damage much more slowly, because the nuclear chromosomes are a lot better protected from mutations than mitochondrial genes are.

Sensible enough, no? Whether this or protofection is the better approach I’m really not certain, although my bet is tentatively on protofection, which seems a bit simpler (since, as de Grey admits, fooling the TIM/TOM mechanism in an appropriate way could turn out to be difficult). Unfortunately, neither approach is being amply funded at the moment.

Similarly, each of de Grey’s other six categories of aging-related damage is amenable to a number of different approaches – and we just need to do the experiments and see which ones work best. A lot of work, and a lot of micro-level creativity required along the way – but straightforward scientific work of the kind that modern biologists are good at doing. It may well turn out that senescence is defeatable without any really huge breakthroughs occurring – just via the right combination of clever therapeutic tricks like protofection or mitochondrial protein replacement.

Depending on how well this work is funded and how many “hidden rocks” appear – and what happens with the rest of 21st-century science and technology -- the process of scientific advance may or may not be too slow to save us from dying. But it seems quite likely that for our grandchildren, or great-great-grandchildren, “old age” will be something they read about in the history books, along with the Black Plague and syphilis – an ailment of the past.

Saturday, April 09, 2005

Parkinson's, Alzheimer's and Mitochondrial DNA

I wrote a little journalistic article on some work I did last year regarding the biological roots of Parkinson's disease (which I believe has implications for Alzheimer's as well).

The article is here. Read it. For those of you inclined toward sensationalism, there's even a part about a bad batch of heroin.

This work was done in collaboration with Drs. Davis Parker and Rafal Smigrodzki of the U. of Virginia, plus a bunch of my Biomind colleagues. I think it's rather nice stuff.

I don't find this sort of thing as rewarding as AGI work (and in a big-picture sense, I really do think that me spending so much time on stuff besides AGI is a big waste of the human race's resources ;-p), but even so, it's REALLY nice to be able to use narrow-AI technology for a really good purpose -- helping biologists to figure out the many ways in which the human organism degenerates and dies ... and how, hopefully, to repair these problems....

I do think that, via systematic biological research, we humans can beat aging and make our pathetic human bodies live effectively forever. Maybe we can even do it before ancient, 38-year-old Ben dies. I'm strongly in favor of increasing public funding for life extension research by a factor of 20, including full funding for Aubrey de Grey's fascinating proposals.

I don't think we need AGI to beat aging -- but I do think AGI, if we can create it, will be able to vastly accelerate the pace of research in all areas of biology, including life extension. This was my main idea in founding Biomind, although Biomind's work to date has been limited to some fairly small corners of biology (due to funding limitations, and due to the naturally slow pace of most rigorous scientific research). Even in these little corners we've managed to do some good, as this Parkinson's work illustrates. (Though in fact the Parkinson's work was a bit of a deviation from Biomind's primary research and product development, which has been in the area of microarray data analysis.) And we're poised to expand the scope of Biomind's work later this year with the release of a new product, yadda yadda yadda....

Focus, Garry Kasparov, Isaac Newton, AGI, Business Management, Episodic Memory, Buddhists Who Don’t Spill Tea in Hats, etc.... and, oh yeah, Focus....

Well, this is an odd blog entry to post, because I wrote it (except this new introductory babbling) a couple days ago and it doesn't really reflect my mood or thoughts at this moment very well. Right now I'm in a quite perky mood, just about to go outside and play some tennis on a sunny Saturday, and then come back in afterwards and launch into the final revisions on "The Path to Posthumanity" (a book on the future that I wrote a couple years ago, and now need to finish in a hurry since the publisher decided to light a fire under my butt by listing the book on amazon.com and getting some sales of the as-yet not-quite-existent book...). Hopefully I can finish these revisions in the next week or so (in spare time, since most of my time is spent on software-biz stuff these days) and then plunge into the almost-done Novamente book that I really wish I were able to find more time for...

But anyway, here is the long blog entry that I wrote at the start of a business trip a few days ago, but didn't find time to post until now. A lot of rambling nonsense I'm afraid, but also some interesting nuggets here and there.

By the way, a couple people have emailed me to ask about Captain Zebulon's famous turtle tank. I not only changed it but I replaced the filter with a new, much better Penguin-brand filter that seems much more effective at filtering out the massive amounts of crap that water-turtles produce as compared to fish (thus hopefully reducing the need for frequent manual tank-cleaning). However, the new filter makes quite a loud noise -- you can hear it in the background of my son's latest musical recordings, of his soon-to-be-classic tune "The King of the Jews is Singing the Blues." (Unfortunately, his recording only exists within a videogame he's creating using RPGMaker 2003, so I can't post it here. Trust me, though, it's good. If I could sing as well as Zeb, I'd give up AI and become the next Michael Jackson. Er ... well ... something like that....)

OK OK, here is the biz-trip-blog...

I’m writing these words on my laptop at Gate B44 in the Washington Dulles airport – I missed my 6:10 AM flight due to stupidly forgetting to reset my alarm clock back to Standard Time from Daylight Saving Time (so it woke me up at 5AM rather than 4AM). Rescheduled for a 7:40AM flight, I’ve got a bit of extra time in the airport -- so I took 15 minutes and speed-read a business/management book in the airport bookstore. (It’s not a vacation flight, unfortunately – I had a great vacation w/ the wife and kids last week, swimming and snorkeling and canoeing in South Florida, but this is a one-day business trip to California, to meet with some folks potentially interested in funding Biomind … (my bioinformatics business – which, after a few years of work, might eventually yield me enough profit that I can pay myself and a small team enough money to build the fabled Thinking Machine whose design lies mostly neglected on my desktop….)). Business books are not my usual fare, though I’ve probably read a few dozen of them in my life, but this one was moderately interesting. (As always with such books, the core information could be summarized in about 5 pages, but there are lots of evocative anecdotes. This ties in with something I’ve often thought about in the context of the Novamente design: human episodic memory seems to be at least partially organized by “story.” The human brain seems to store episodes differently from procedures or declarative knowledge, and it seems to store them in units defined by some sort of conceptual coherence. In Novamente a “story” corresponds to a particular kind of “map,” meaning a set of nodes and links that are joined by HebbianLinks mutually reinforcing each other; a story differs from a generic map in that the nodes and links within it pertain to a set of events unfolding over time.)
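For the programmers in the audience, here’s a minimal sketch of the map-vs-story distinction I just described (the class and method names are invented for this blog entry; Novamente’s actual representation is, of course, far richer):

```python
# Hypothetical sketch of the map-vs-story distinction described above.
# Class and field names are invented for illustration, not Novamente's API.

from dataclasses import dataclass, field

@dataclass
class Map:
    """A set of nodes whose links (HebbianLink-style) mutually reinforce."""
    nodes: set = field(default_factory=set)
    links: dict = field(default_factory=dict)   # (a, b) -> reinforcement weight

    def reinforce(self, a, b, amount=0.1):
        self.nodes.update({a, b})
        self.links[(a, b)] = self.links.get((a, b), 0.0) + amount

@dataclass
class Story(Map):
    """A map whose nodes pertain to events unfolding over time."""
    timeline: list = field(default_factory=list)  # events in temporal order

    def add_event(self, event, prev=None):
        self.timeline.append(event)
        if prev is not None:                      # temporal succession reinforces
            self.reinforce(prev, event)

story = Story()
story.add_event("missed flight")
story.add_event("rebooked", prev="missed flight")
story.add_event("read business book", prev="rebooked")
print(story.timeline)   # the events, in temporal order
```

The point of the sketch is just the structural difference: a Story is still a Map (a cluster of mutually reinforcing nodes and links), but it carries an additional temporal ordering over its contents.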

But anyway … I digress (which is the main amusing thing about blogging – unlike in the “serious” writing I do, I allow myself to ramble and digress almost unlimitedly. I used to do that in writing fiction, but in the novel I’m writing now I’m orchestrating the digressions in a more careful way, which results in a better product but a less relaxed writing process. True, Jack Kerouac and Philip K. Dick wrote a lot of great stuff via pure “downhill skiing”, and Kerouac allowed a lot of digression in his writing-skiing process, but I don’t seem able to control my writing-skiing as well as those guys in real-time – my real-time verbal-conceptual improvisation is too wide-ranging and whacky, and it needs rational-critical post-processing to be made into something really artistic … unless the (err…) “artistic” nature sought is that of a blog entry, in which case this kind of digression is OK….)

The business book. The theme was one I’ve been thinking about a lot lately: focus. The exact opposite of this blog entry, in other words. Focus.

The basic idea of the book was: to succeed at X, find the one thing essential to X, and focus obsessively on that one thing, to the exclusion of all else. The key to success is not balance, but strategically and tactically appropriate imbalance.

Whoops!

I try really hard to focus. I really do. But there are just so many interesting things in the world. There are dozens of novels well-worked-out in my head, hundreds of musical compositions, hundreds of sketches of theorems (70% or so of which are probably correct), three substantially different AGI designs (Novamente, plus one based on Hebbian neural nets, plus one based on automated theorem-proving), even a few movie scripts… Not to mention that outline-for-a-unified-physics-theory…. Egads!

I could get myself to focus 90% of my time on creating an AGI, and push philosophy, music and fiction-writing (the other intellectual/creative pursuits that are necessary for my existence) into the other 10%. But I don’t seem to be able to get myself to focus quite that fully on bioinformatics, or natural language software, or any other business opportunity with the potential to yield the money needed to fund the implementation of my AGI design. I’ve been giving Biomind maybe 60-70% of my focus lately (which is a lot, because I work an awful lot of hours each week compared to what most people consider “full time” – I don’t sleep a lot) – because it needs it -- and pushing AGI into the background, which is extremely painful to me emotionally and intellectually.

(I have no issues with focus in the micro-scale: when I work I work with total mental concentration no matter how much noise and chaos are going on around me and no matter what mood I’m in or how tired I am, etc. (Except when Zadi’s watching South Park on the TV next to my desk, as that tends to be funny enough to distract me…. The episode I just watched almost convinced me that I should give up bioinformatics and fund Novamente via recording a Christian Rock CD…. If Cartman did it, so can I! I like his algorithm: just take a love song from the radio and replace all occurrences of the words “you”, “baby”, “darling” etc. with the words “Jesus” or “Lord.” Try it yourself, it works surprisingly well.) The level of focus that worries me is, rather, the choice of which things to direct my highly-focused micro-attention to. Which is mainly a problem because what I really want to focus on isn’t what the world currently wishes to pay me to focus on, and due to having a family to support I have this irritating ongoing need for money…. Leading to difficult temporal-assignment-of-credit problems, such as how much time to spend actively working toward AGI, versus working on things-I-like-but-don’t-love (bioinformatics, at the moment) that may yield money to pay for AGI research in a couple years, versus things that put me in a peaceful and creative state of mind (music! weird fiction!) so that my work on things-I-don’t-love is more effective, etc….)
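For the record, Cartman’s algorithm really is implementable in a few lines of Python (the sample lyric below is invented; try it on any love song from the radio):

```python
import re

# Cartman's Christian-rock algorithm: swap the love-object words for "Jesus".
# The sample lyric is invented; any love song from the radio will do.

def cartmanize(lyric, replacement="Jesus"):
    return re.sub(r"\b(you|baby|darling)\b", replacement, lyric,
                  flags=re.IGNORECASE)

print(cartmanize("I need you, baby, don't leave me, darling"))
# → I need Jesus, Jesus, don't leave me, Jesus
```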

If it’s true that sustained narrow-focus is the prerequisite of success, this would certainly explain why the most successful people aren’t generally the most interesting ones. Balance and breadth tend to make people interesting to interact with on a sustained basis. People narrowly obsessed with one thing tend to get tiring quickly – though they can be exciting and fascinating to talk to for brief periods of time. My close friends tend to be broad and balanced people, yet the people I admire most often have more of a narrow-focusing nature.

Now I’m sitting on the airplane – had to stop typing for a few minutes to board the plane, and then wait until the plane was aloft to bring out the laptop, because of the peculiar urban legend (embraced by the FAA) that laptops interfere with airplanes’ navigation equipment. While the plane was taking off I decided to continue the theme of my morning’s reading, and I read a couple articles in a free onboard copy of “Harvard Business Review.” (Also a delightful article on hats in “Ebony,” but I’ll spare you the details of that one….”Make no mistake, it takes a certain amount of bravado to wear a hat. .... It’s like the exclamation point to a fashion statement. … Hats hint at the essence of the wearer, giving a peek into the soul of the Brother underneath….” Ah, humanity! Gotta love it!) The current issue of HBR contains an interview with Garry Kasparov, the recently-retired world chess champion, on the relationship between chess and business.

Amusingly, Kasparov had something to say about focus, in the context of his chess battle with computer program Deep Blue in 1996-1997. He reckoned the contest had been an unfair one, since Deep Blue was trained on transcripts of his prior chess games, whereas all transcripts of Deep Blue’s play were kept secret from him. He also said he thought Deep Blue couldn’t beat him on his best day. But he said he thought one of the big advantages computers had over human chess players was their ability to focus exclusively and narrowly. “Human players have to cope with a lot of external pressures and distractions: you have a family, you write books, you give lectures, you get headaches, you have to earn money. There’s a lot of stuff filling up your brain while you’re playing. A machine, on the other hand, is completely without distractions. This shows the weakness, the shortcomings of the mortal mind, which is a daunting lesson for human beings. We just can’t play with the same consistency as a computer. So it’s all the more fortunate that we have our intuitions to help us play better.”

Kasparov obviously spent most of his life narrow-focusing on chess. Yet, he remains a bit jealous of a computer program’s ability to narrow-focus even more intensively.

And it’s interesting to observe that, for a chess master, Kasparov is an unusually breadth-oriented guy. His style is that of a strategic risk-taker, as opposed to that of his arch-enemy Karpov, who was always more conservative and analytical. Kasparov likes to think about business, literature, politics, and human nature in general – as he says, “I do not like details.” Of course, to become world chess champion he must have learned an awful lot of details – but what made him a master was not merely his mastery of details; it was his mastery of details combined with a truly rare and powerful intuition.

Kasparov’s style of chess could only be conducted by a mind with some breadth as well as narrow-focus, because it relies on general principles and intuitions regarding strategy – principles and intuitions going beyond chess and applicable to other domains as well. On the other hand Karpov’s style of chess was more suited to a purely narrow-focused approach.

AGI, I suspect, is really only susceptible to a Kasparov-style approach -- or really, to an approach that’s even more breadth-centric than Kasparov’s. This may be one of the reasons why AGI is so hard. If achieving anything substantial requires narrow-focus, then how is it possible for anyone to achieve something that by its nature can only be comprehended and mastered by someone with tremendous breadth? Tres dificil, nyet?

Physical sciences and mathematics don’t generally have this property – a very hard problem like creating a relativistic theory of gravity (solved by Einstein a long way back) or unifying gravitational and quantum physics (not solved yet) is nevertheless defined within a fairly delimited formal domain, and can plausibly be solved by a mind narrowly focused on that domain. To do what Newton did, on the other hand, clearly required breadth combined with focus. He had to focus to solve the hard technical problems, but he also had to have a lot of breadth to figure out which were the right questions to address, drawing from the incoherent mess of concepts and ideas that was pre-Newtonian physics. The analogy is neither perfect nor original, but I guess it’s an OK one: the task of creating AGI seems roughly comparable in magnitude to the task of creating Newtonian physics. Both have a conceptual and a technical aspect, though in Newton’s case the technical aspect was mainly mathematical, whereas in the AGI case it involves software design and engineering as much as mathematics.

Newton made his biggest breakthroughs during a three-year period when he was largely isolated in his house, at a time when England was mostly shut down due to the bubonic plague. (And, according to my university philosophy professor, his dog was named “Diamond.”) Maybe that’s what I need right now – a dog named Diamond, and an outbreak of plague to hit Washington, forcing me to sit in my house isolated for three years and do nothing but work on AGI by myself. Of course, the plague would have to hit the Internet too – isolation is harder to come by these days. Nah, that’s just a silly thought – software engineering, unlike mathematics, is better done by a small “extreme programming”-style team than by a single individual. Plus, I don’t quite trust myself to teach a baby AI alone; the baby needs a woman as well as a man as a teacher (Izabela, with some help from Zadi?) and it needs a strong dose of Cassio’s conservatism and good judgment. What I need is for the plague to strike when I’m stuck in a house with the 4 or 5 best members of the Novamente team. And preferably it’s a big house, so there’s room for my kids and dogs with their noise and chaos in a separate soundproofed wing! (Yeah, yeah, this is just a stupid jokey digression, please don’t quote me as if I seriously want a plague to come down on the world, I don’t…. (I started thinking that, since I happen to live in the Washington DC area, a plague in my local region might end up having some positive effects due to eliminating a lot of politicians. But, the body politic seems to have a self-regenerating characteristic similar to the limbs of certain lizards. And of course, a plague here in DC would probably be mistaken for a terrorist attack, which might cause Dubya to annihilate the continent of Africa by mistake or something … OK OK, enough!))

Wow, this is a long blog entry! I’d better call it to an end now. I’d intended to spend this flight finalizing the manuscript of “The Path to Posthumanity” – which, I recently noticed, the publisher has listed on amazon.com, even though I have not yet actually sent him the text of the book! Well, some things move fast these days. Unfortunately I’m not going to be able to make that book nearly as good as I’d like, due to lack of time rather than lack of ability. I need to get that one out of the way so I can get back to finishing “Foundations of Emergent Cognition” (the shiny new name for the “Novamente book”), which is pretty much done, but just needs a final going-over, addition of references, clarification of which aspects of the discussion pertain to Novamente in particular versus pertaining to “any sensible AGI design,” etc. etc. Still, maybe “Path” will get some sales riding on the coattails of Kurzweil’s “Singularity.” The books cover much of the same ground, but mine gives fewer exponential and hyperexponential charts and more scientific depth – and mine also gives a more transcensionist, less “kinder, gentler” view of the Singularity. (Kurzweil is brilliantly insightful, yet he often seems to downplay the dramatic nature of the Singularity even as he trumpets its inevitability. Sometimes it seems like he foresees a Singularity full of modified or uploaded humans with shiny new gadgets – rather than a fundamental overthrow of the current order of mental, physical and social being. Of course, we may well get BOTH of these, but it seems a bit disingenuous to focus primarily on the former, even though it’s easier to understand and goes down better on Main Street. But of course, these comments are based on not having seen his book, which hasn’t been released yet – they’re based on his prior books and his speeches and online writings – maybe his book will give fair time to the transcensionist aspects as well, we’ll see.)

Enough – enough rambling, Ben – enough. Focus! Focus! Focus! Finish “Path to Posthumanity” and send it to the damn publisher! Write those Biomind press releases! Test the new Biomind ArrayGenius release! Finish the Novamente book! Launch the damn Singularity already so you can give yourself a better temporal-assignment-of-credit algorithm, eat 7 cakes without gaining weight, and push your daughter on the swing and canoe past crocodiles while composing weird jazz fission and programming meta-Haskell and kiss your wife while proving theorems that are themselves hyperdimensional conscious beings… yadda yadda… Focus! Focus! Focus!

Ah – wait – one more afterthought about focus. I had a Buddhist friend once who, every time I made a mechanical mishap like spilling a cup of tea, would point out to me: “See, if you were an enlightened Buddhist master, you’d never do anything like that. You’d never spill your tea because you’d be totally focused on whatever you were doing, in that moment!” In fact, this guy was neither particularly enlightened nor particularly focused nor emotionally balanced himself, though he was highly adept at pointing out the unenlightenment of others -- but he did have a point there. But of course, my retort was always “Fine, but I don’t WANT to focus my total attention on something boring like holding a cup. I’ll accept a certain error rate with boring things in order to focus most of my attention on interesting things. It’s no wonder no Buddhist master has ever achieved anything fascinating in science or mathematics – these things require focus in themselves, which is hard to obtain if one is focusing all one’s attention on drinking tea or raking leaves or breathing!” I think the analogy between Buddhist mindfulness and narrow-focus-for-business-success is not totally spurious. (Yeah, this brief paragraph doesn’t come near doing justice to my thoughts on Buddhism, but that’ll be saved for a later blog; it’s a deep and complicated issue in spite of its perfect simplicity, yadda yadda.) One problem is that the human mind is so painfully limited that it’s hard for it to do even one thing well, and when it divides its attention, it’s bound to make mistakes. Another problem is that we probably evolved to focus on one intensive task at a time – like hunting, or escaping, or mating – and the modern emphasis on multitasking (on various time-scales) is an abuse of our evolutionary neural and physiological wiring.

Enough, OK, OK. Focus! Focus! Focus!

(I spent the flight from Salt Lake to San Fran sitting next to a very intelligent mining engineering executive who spoke very passionately about nutraceuticals and was bringing in a couple hundred thou a year selling them, via a variant of the classic “multilevel marketing” scheme. The nutraceutical line he was hawking actually seemed decent – founded on reasonable science – and I was almost convinced to give up the idea of making money for AGI through narrow-AI businesses and make the money through selling skin lotions and nutritional supplements instead. It might be a lot easier. I almost followed that plan when I was 13 and now I’m sorta wishing I had. OK, not really. But it’s an amusing thought…. I’m not such a bad salesman if I’m selling something I believe in; if I were selling e.g. life-extension-oriented supplements with some foundation in biology, I could probably give a convincing rap. My wife knows a lot of vain women in Brazil; maybe we could start the business in Brazil…. I’ve often thought that mixing up making money with AI is a mistake – it might be better just to keep my AI work pure and just accept that I need to spend a percentage of my time on some stupid business in order to pay the bills and hopefully eventually make enough money to pay the Novamente team to actually work on AGI engineering and teaching….. But, yah yah, the problem is that making any business work takes a lot of focus and attention, and it’s hard for me to see myself getting motivated to direct much of my focus and attention to something so boring as selling skin lotions…. The marketing ploy is slightly clever though: they suck women in by selling them skin lotion, and then upsell them to more expensive nutraceuticals, pointing out (correctly) that the key to beautiful skin is good health. Well, this sort of shit is what most humans seem to be interested in, right? Beautiful skin, big muscles, good hair, shiny teeth and symmetrical faces. If you can’t have them yourself, at least you can watch them on TV! (OK, OK, I’m not really going to quit the AI business to sell skin lotion. Although I’m not sure it would be a stupid idea in the medium term; in the short term I don’t have the stomach for it….) And anyway Biomind’s business prospects are actually looking pretty good right now (sales pitch ahead: anyone want to buy some of the world’s best microarray data analysis software?).)

Foooocuuuuusssssss…...

Saturday, March 26, 2005

Smart Man with a Funny Beard

By the way -- one of these days I'll write a proper blog entry on anti-aging technology, but for the moment let me just indicate that everyone should look at

Aubrey de Grey's website

Not only is he an uncommonly cool-looking individual -- I think he even beats me at my coolest-looking (2000, I think that was -- back when I had numerous bogus paper millions, was still on my first wife, and none of my sons had a moustache, and I still sorta thought we could probably create an AI without an embodiment for simplicity's sake...) -- but he has some extremely interesting ideas on how to combat human aging.

I have my own as well, which intersect only partly with his -- but he's thought about it a lot more than me, so he's probably more likely to be right ;-)

Like the Novamente AGI project, nearly all of Aubrey's brilliant ideas are currently almost unfunded.

Well, it's not as though society doesn't spend money on research. And a lot of good research gets funded. But research funding seems to suffer from the same peculiar human-mob shortsightedness that causes the US to stick with the absurd, archaic English system of measurement year after year ... and that's causing English to emerge as the international language while Lojban remains the province of 350 geeks on an Internet mailing list...

More later! (For readers of my just-previous blog entry: Yes, I'm still procrastinating cleaning the turtle tank!)

Darkness at the Break of Noon, Goddamned Turtle Tank, etc.

"Darkness at the break of noon / Shadows even the silver spoon / The handmade blade, the child's balloon / Eclipses both the sun and moon / To understand you know too soon / There is no sense in trying."

Dylan usually gets it right....

Arrrghh.... I'm in an oddly dark mood this Sunday at 5PM, probably not a good frame of mind to be blogging, but it's a good way to delay cleaning out my son's turtle tank (my son doesn't live in it; his turtle Rick does) -- I don't really want to clean the tank but I know I have to do it, and if I start working on something intense or start playing the keyboard, the turtle will probably end up swimming in its own excrement for yet another day....

Hmmm ... the fact that I'm blogging about turtle shit probably indicates that I'm in a bad mood....

Not a bad day overall -- I got lots of interesting AI thinking & writing done, and took a long walk in the woods with the dogs. Sorta miss my kids as usual when they're at their mom's for the weekend. And, an interesting guest from New Zealand is arriving in a couple hours. Oops, better mop the dogs' mud off the floor, too.... (Sometimes I wish I lived in a country like Brazil where you don't need to be rich to have a maid! Is cleaning up really the best use of any fraction of my potentially far too limited lifespan? Well, if you saw the general state of my house you'd realize I don't think so!)

Maybe Buddha was right: all existence is suffering. Well of course he was right, but he left out the other half that Nietzsche said so well: "Have you ever said Yes to one joy? O my friends, then you have said Yes to all woe as well! All things are enchained, all things are entwined, all things are in love." Or something like that. In German, which I can't read, except for a few words and phrases. Everything is all mixed up, that's the nature of humanity. Almost every experience has some suffering in it -- only the most glorious peak of joy breaks this rule. And semi-symmetrically, almost every experience has some joy. Semi-symmetrically, because the mix of joy and pain seems rather differently biased for different people, based on variations in neurochemistry and situation. Most of the time I have an asymmetrically large amount of joy, I think -- as Dylan was well aware, it's not always easy to tell -- ....

Blah blah blah.

In moods like this I seriously consider giving up on the whole AI business and doing something easier and more amusing. I could become a professor again, write philosophy books and math papers, record CD's of weird music and write novels about alien civilizations living inside magic mushrooms.... I'm really more of a philosopher/artist type, software engineering isn't my thing ... nor is business. Not that I'm bad at these things -- but they don't really grab me, grip me, whatever metaphor you want today ....

Getting rich would be nice but I don't care too much about it -- I could live quite comfortably according to my standards without being rich, especially if I left accursed Washington DC for somewhere with cheaper land. Wow, I miss New Zealand, Western Australia and New Mexico ... great places I lived back in the day ... but it seems I'm stuck here in the DC metro for another 10 years due to a shared-child-custody situation.... Well, there are worse fates. And it's a good place for business....

Well, OK, time to clean the damn turtle tank! I could try to portray the stupid turtle (yeah, they really are stupid, though my son claims he can communicate with them psychically, and they're not as dumb as snakes) swimming in its own crap as a metaphor for something, but I don't feel perverted enough right now. Or at least, I don't feel perverted in the right sort of way.

For years I used to delude myself that I was just, say, 6 or 12 or 18 months away from having a completed thinking machine. That was a fun attitude, but it turned out I wasn't quite self-delusional enough to keep it up forever. I've now gained a lot more respect for how idiotic we humans are, and how much time it takes us to work through the details of turning even quite clear and correct abstract ideas into concrete realities. I've tried hard to become more of a realist, even though it makes me significantly less happy, I suppose because getting to the end goal is more important to me than being maximally happy.

I still think that if I managed to turn Biomind into a load of cash, or some rich philanthropist or government body decided to fund Novamente R&D, I could lead a small team of AI geniuses to the creation of an AI toddler within a few years. But realistically, unless a miraculous patron shows up or DARPA undergoes a random brain tremor and suddenly decides to fund one of my proposals, it's likely to take several years before I manage to drum up the needed funding to make a serious attack on the "Novamente AI toddler problem." (Yeah, I know, good things can sometimes pop up out of nowhere. I could get an email out of the blue tomorrow from the mystery investor. That would be great -- but I'm not counting on it.) Honestly, I just barely have it in me to keep doing software business for 3-5 more years. Not that it isn't fun sometimes, not that it isn't challenging, not that I don't learn a lot -- but it's an affront to my "soul" somehow (no I don't believe in any religious crap...). And no, it's not that I'm a self-contradictory being who would feel that way about any situation -- there are lots of things I love doing unreservedly, software business just isn't one of them. The difficulty is that the things I love doing don't seem to have decent odds of putting me in a position to create a thinking machine. I love music but I'm not good enough to become a star; and I'm about maximally good at fiction writing IMO, but my style and taste is weird enough that it's not likely to ever make me rich..... Urrgghh!!

Y'know, if I didn't have kids and obscenely excessive alimony payments (which were determined at a time when my businesses were more successful, but now my income is a lot lower and the alimony payment remains the same!! ... but hey, they only go on another couple years ;-p), I might just retreat to an electrified hut in some Third World country and program for three years and see if I could make the Novamente toddler myself. No more business and management and writing -- just do it. Very appealing idea. But Zarathustra (oldest son) starts college in a couple years. The bottom line is I'm not singlemindedly devoted to creating AI even though I think it's the most important thing for me to do -- I'm wrapped up with human attachments -- family attachments, which mean an awful lot to me.

Funny, just this morning I was reflecting on how great it was to be alone for a change -- the kids are with their mom, my wife is overseas visiting her family, the dogs and cats and turtle and gerbil don't quite count (OK, the gerbil almost does...) -- how peaceful and empty it felt and how easy it was to think clearly and work uninterruptedly. But now I see the downside: if a dark mood hits me there's no one to lift me out of it by showing me a South Park rerun or giving me a hug.... Human, all-too-human indeed!

And now this most boring and silly of my blog entries comes to an end. Unlike the previous ones I don't think I'll publicize this one on any mailing lists! But I guess I will click "Publish Post" in spite of some momentary reservations. Maybe someone will be amused to observe that egomaniacal self-styled AI superheroes have the same erratic human emotions as everyone else....

How important is it for this kind of human chao-emotionality to survive the Singularity? I'm not saying it shouldn't -- but isn't there some way to extract the joyous essence of humanity without eliminating what it means to be human? Perhaps there is. After all, some humans are probably "very happy" 5-10 times more often than others. What percentage of happiness can you achieve before you lose your humanity? All human existence has some suffering wending through it, but how much can it be minimized without creating "Humanoids"-style euphoridic idiot-bliss? I don't know, but even though I'm a pretty happy person overall, I'm pretty sure my unhappiness level hasn't yet pushed up against the minimum euphoridiotic boundary ;-p

And in classically humanly-perverse style, I find that writing about a stupidly unpleasant mood has largely made it go away. Turtle tank, here I come! Suddenly it doesn't seem so bad to do software business for a few more years, or spend a year going around giving speeches about AI until some funding source appears. Why the hell not? (No, I haven't taken any drugs during the last 10 minutes while typing this!). There's plenty of joy in life -- I had a great time doing AI theory this morning, and next week I'll be canoeing in the Everglades with my wife and kids. Maybe we should bring the turtle and let it swim behind the canoe on a leash?

Ahh.... Turtle tank, turtle tank, turtle tank. (That thing has really gotten disgusting, the filter broke and I need to drain it entirely and install a new filter.) Yum.

Saturday, March 12, 2005

Lojbanic AI and the Chaotic Committee of Sub-Bens

In 1998 when my dear departed AI software company Intelligenesis Corp. was just getting started, we had a summer "intern" by the name of Mark Shoulson, who was (if memory serves) a grad student at Rutgers University. Mark worked for us for a few summer months then went back to grad school. Although he was extremely bright with broad interests in computing and cognitive science, his work that summer focused on some technical issues in the computational linguistics portions of our software; he didn't really get into the deeper aspects of the AI theory Intelligenesis was pursuing. Informally, many of us referred to Mark as "The Klingon" because of his mastery of the Klingon language. (For the non-nerds in the audience: Yeah, when they created the Klingons in Star Trek, they hired a linguist to design an actual language for them. Cool, huh?) Mark was involved in the translation of Hamlet into Klingon and didn't mind showing off his Klingon fluency to curious colleagues. Mark's Klingon was smooth and fluent but often seemed a bit odd because of his kind and soft-spoken nature -- personality-wise, at least on the surface, Mark was pretty far from a Klingon-ish guy. He also told us about a friend of his who raised his daughter bilingual English-Klingon: speaking to his daughter only in Klingon from birth, while his wife spoke to her in English.

Along the way Mark also mentioned to me a language called Lojban, which he said was based on predicate logic. He observed to me in passing that it might be easier for us to make our AI system understand Lojban than English. I agreed that it might be, if Lojban was more logically structured, but I reckoned this wasn't very practical, since no one on the team except Mark spoke any Lojban. Also, we were interested in creating a real AI incrementally, along a path that involved spinning off commercial apps -- and the commercial applications of a Lojban-speaking AI system seemed rather few.

Well, six and a half years later, Mark's suggestion has started to seem like a pretty good one. In my new AI project Novamente, we have progressed moderately far along the path of computational language understanding. Our progress toward powerful general AI has been painfully slow due to the team's need to pay rent and the lack of any funding oriented toward the grand AI goal, but for 2004 and part of 2003 the Novamente team and I had some funding to build some English language processing software -- and while we didn't build anything profoundly real-AI-ish, we used the opportunity to explore the issues involved in AI language processing in some depth.

The language processing system that we built is called INLINK and is described here. It doesn't understand English that well by itself, but it interacts with a human user, presenting alternate interpretations of each sentence typed into it, until the human verifies it's found a correct interpretation. The interactive process is slow and sometimes irritating but it ultimately works,
allowing English sentences to be properly interpreted by the AI system. We have plans to create a version of the INLINK system called BioCurator, aimed at biological knowledge entry -- this should allow the construction of a novel biology database containing formal-logic expressions representing biological knowledge of a much subtler nature than exists in current online bio resources like the Gene Ontology.
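The interactive loop can be pictured with a minimal sketch -- this is my illustration of the general idea only, not the actual INLINK API; the function names and the string form of the "readings" are invented for this example:

```python
# Hypothetical sketch of INLINK-style interactive disambiguation: the parser
# proposes ranked logical readings of a sentence, and the human accepts or
# rejects each in turn until one is verified as correct.

def disambiguate(sentence, candidate_readings, accept):
    """Offer each candidate logical reading until the user accepts one."""
    for reading in candidate_readings:
        if accept(sentence, reading):
            return reading  # this logical form goes into the AI's memory
    return None  # nothing accepted; the user must rephrase the sentence

readings = [
    "likes(Ben, album(Dead_Kennedys))",  # "likes their records"
    "likes(Ben, band(Dead_Kennedys))",   # "likes the band itself"
]
# A simulated user who rejects the first reading and accepts the second:
chosen = disambiguate("Ben likes the Dead Kennedys", readings,
                      lambda s, r: "band" in r)
print(chosen)  # → likes(Ben, band(Dead_Kennedys))
```

Slow and sometimes irritating, as noted -- but each confirmed reading is unambiguous, which is the whole point.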

I've had a lot of doubts about the value of computational linguistics research for "real AI" -- there's a moderately strong argument that it's better to focus on perception, action and embodiment, and let the AI learn language as it goes along interacting with humans using its (real or simulated) body. On the other hand, there's also an argument that a certain degree of "cheating" may be helpful -- that building in some linguistic knowledge and facility may be able to accelerate the experiential-language-learning process. I've outlined this argument in an article called Post-Embodied AI.

The work on INLINK has clarified for me exactly what's involved in having an AI system understand English (or any other natural language). Syntax processing is tricky but the problems with it can be circumvented using an interactive methodology as we've done in INLINK; and eventually the system can learn from its errors (based on repeated corrections by human users) and make fewer and fewer mistakes. The result of INLINK is that English sentences are translated into probabilistic logical expressions inside Novamente's memory, which may then be interpreted, reasoned on, data-mined, intercombined, and yadda yadda yadda. Very nice -- but nasty issues of computational efficiency arise.

Novamente's probabilistic-inference module currently exists only in prototype form, but the prototype has proven capable of carrying out commonsense reasoning pretty well on a number of simple test problems. But there's a catch: for the reasoning process to be computationally tractable, the knowledge has to be fed to the reasoning module in a reasonably simple format. For instance, the knowledge that Ben likes the Dead Kennedys has to be represented by a relationship something like

#likes( #Ben_Goertzel, #Dead_Kennedys)

where the notation #X refers to a node inside Novamente that is linked by a high-strength link to the WordNode/PhraseNode representing the string X. Unfortunately, if one types the sentence

"Ben likes the Dead Kennedys"

into INLINK, the Novamente nodes and links that come out are more complicated and numerous and less elegant. So a process called "semantic transformation" has to be carried out. This particular case is simple enough that this process is unproblematic for the current Novamente version. But for more complex sentences, the process is, well, more complex, and the business of building semantic transformations becomes highly annoying. One runs into severe issues with the fuzziness and multiplicity of preposition and verb-argument relationships, for example. As occurs so many times in linguistics and AI, one winds up generating a whole bunch of rules which don't quite cover every situation -- and one realizes that in order to get true completeness, so many complexly interlocking small rules are needed that explicitly encoding them is bound to fail, and an experiential learning approach is the only answer.
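To make the flavor of such a rule concrete, here is a toy sketch of a single semantic transformation: collapsing verbose parser output into the compact predicate form the reasoning module wants. The relation names ("subj", "obj") and the rule's shape are hypothetical stand-ins, not the real Novamente structures:

```python
# Toy semantic-transformation rule: rewrite subject/object parse relations
# for a verb into one compact predicate of the form #verb( #subj, #obj).

def transform(relations):
    """Collapse subj/obj parse relations into a single compact predicate."""
    verb = subj = obj = None
    for rel, head, arg in relations:
        if rel == "subj":
            verb, subj = head, arg
        elif rel == "obj":
            verb, obj = head, arg
    if verb and subj and obj:
        return f"#{verb}( #{subj}, #{obj})"
    return None  # rule doesn't apply; some other transformation must fire

parse_output = [("subj", "likes", "Ben_Goertzel"),
                ("obj", "likes", "Dead_Kennedys")]
print(transform(parse_output))  # → #likes( #Ben_Goertzel, #Dead_Kennedys)
```

The trouble described above is that this one rule is trivial, but real sentences need hundreds of interlocking rules like it, each with fuzzy edge cases.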

And this is where -- as I just recently realized -- Lojban should come in! Mark Shoulson was right back in 1998, but I didn't want to see it (urrrgghh!! what useful things are smart people saying to me now that I'm not accepting simply because I'm wrapped up in my own approaches?? why is it so hard to truly keep an open mind?? why is my information processing capacity so small??!! wait a minute -- ok -- this is just the familiar complaint that the limitations of the human brain are what make it so damn hard to build a superior brain. And the familiar observation that cutting-edge research has a way of making the researcher feel REALLY REALLY STUPID. People tell me I'm super-smart but while working on AI every day I come to feel like quite a bloody moron. I only feel smart when I re-enter the everyday world and interact with other people ;-p)

What if instead of making INLINK for English, we made it for Lojban (LojLink!)? Of course this doesn't solve all the problems -- Lojban is a constructed language based on formal logic, but it's not equivalent to formal logic; it allows ambiguity where the speaker explicitly wants it, otherwise it would be un-usable in practice. Semantic transformation rules would still be necessary to make an AI system understand Lojban. But the human work required to encode such transformations -- and the AI learning required to learn such transformations -- would clearly be one or two orders of magnitude less for Lojban.

Lojban isn't perfect... in my study of Lojban over the last week I've run up against the expected large number of things I would have designed differently, if I were building the language. But I have decided to resist the urge to create my own Lojban-ish language for AI purposes, out of respect for the several decades of work that have gone into "tuning" Lojban to make it more usable than the original version was.

In some respects Lojban is based on similar design decisions to the knowledge representation inside my Novamente AI Engine. For instance, in both cases knowledge can be represented precisely and logically, or else it can be represented loosely and associatively, leaving precise interpretation reliant on contextual factors. In Lojban loose associations are represented by constructs called "tanru" whereas in Novamente they're represented by explicit constructs called AssociativeLinks, or by emergent associations between activity-patterns in the dynamic knowledge network.
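A hypothetical illustration of that parallel -- the class names below mimic Novamente's but all details are invented for this sketch:

```python
# Contrast a precise logical link with a loose association whose
# interpretation is deferred to context (tanru in Lojban; AssociativeLinks
# in Novamente). Class names and fields are illustrative only.

class InheritanceLink:
    """Precise: 'a doghouse is a shelter for dogs' -- fixed interpretation."""
    def __init__(self, source, target):
        self.source, self.target = source, target

class AssociativeLink:
    """Loose: the two terms are related somehow -- context decides how."""
    def __init__(self, source, target, strength=0.5):
        self.source, self.target, self.strength = source, target, strength

# The classic Lojban tanru "gerku zdani" (dog + house) asserts only a
# contextual association between its components:
loose = AssociativeLink("gerku", "zdani", strength=0.8)
# whereas a fully specified claim pins the relation down:
precise = InheritanceLink("doghouse", "shelter_for_dogs")
```

In both the language and the AI system, the loose form is cheap to produce and the precise form is cheap to reason on -- the design question is when to pay the cost of converting one into the other.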

Next, it's worth noting that Lojban was created and has been developed with a number of different goals in mind -- my own goal, easier interfacing between humans and early-stage AGI's, being just one of them.

Some Lojbanists are interested in having a "culturally neutral" language -- a goal which, while interesting, means fairly little to me.

In fact I don't really believe it's possible -- IMO Lojban is far from culturally neutral, it embodies its own culture, a nerdy and pedantic sort of culture which has plusses and minuses. There is a Lojban term "malglico" which translates roughly to "damn English" or "fucking English" -- it refers to the tendency to use Lojban in English-like ways. This is annoying to Lojban purists but really doesn't matter to me. What I care about is being able to communicate in a way that is fluid and simple and natural for me, and easy for an early-stage AI to comprehend. If the best way to achieve this is through a malglico dialect of Lojban, so be it. If malglico interferes with the comprehensibility of Lojban by AI software, however, then I'm opposed to it.

I've printed up a bunch of materials on Lojban and started studying it seriously -- if I keep up with it then in 6 months or so I'll be a decent Lojbanist. Generally I'm not much good at learning languages, but that's mostly because it bores me so much (I prefer learning things with more of a deep intrinsic structure -- languages always strike me as long lists of arbitrary decisions, and my mind wanders to more interesting things when "I" try to force it to study them...). But in this case I have a special motivation to help me overcome the boredom....

If you want to try to learn Lojban yourself, the most useful resources I've found are:


If it does happen that we teach Novamente to speak Lojban before English then in order to participate in its "AI preschool" you'll need to know Lojban! Of course once it gets beyond the preschool level it will be able to generalize from its initial language to any language. But the preschool level is my focus at the moment -- since as I'm intensely aware, we haven't gotten there yet!

I remain convinced that with 2-3 years of concentrated single-focused effort by myself and a handful of Novamente experts (which will probably only be possible if we get some pure-AI-focused funding, alas), we can create a Novamente system with the intelligence and creativity and self-understanding of a human preschooler. But I'm trying really hard to simplify every aspect of my plan in this regard, just to be sure that no unexpected time-sinks come along. One advantage of NOT having had pure-AI-focused funding for the last few years is that the AI design has been refined an awful lot during this frustrating period. The decision to take a "post-embodied" approach to linguistics -- incorporating both experiential learning and hard-wiring of linguistic knowledge -- is not a new one; that was the plan with Webmind, back in the day. But the idea of doing initial linguistic instruction and hard-wiring for Novamente in Lojban rather than English is a new one and currently strikes me as quite a good one.

Ah -- there's a bit of a catch, but not a big one. In order to do any serious "hard-wiring" of Lojban understanding into Novamente or any other AI system, the existing computational linguistics resources for Lojban need to be beefed up a bit. I describe exactly what needs to be done here. It seems to me there's maybe 3/4 of a man-year of work in making pure Lojbanic resources, and another year of work in making resources to aid in automated Lojban-English translation.

And another interesting related point. While in 1998 when Mark Shoulson first pointed Lojban out to me, I thought there were no practical commercial applications for a Lojban-based AI system, I've now changed my mind. It seems to me that an AI system with a functional Lojban language comprehension module and modest level of inferential ability would actually be quite valuable in the area of knowledge management. If a group of individuals were trained in Lojban, they could enter precise knowledge into a computer system very rapidly, and this knowledge could then be reasoned on using Novamente or other tools. This knowledge base could then be queried and summarized in English -- because processing simple English queries using a system like INLINK isn't very hard, and doing crude Lojban-English translation for results reporting isn't that hard either. In any application where some institution has a LOT of knowledge to encode and several years to do it, it may actually make sense to take a Lojbanic approach rather than a more standard approach. Here you'll find an overview of this approach to knowledge management, which I call LojLink.
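A back-of-the-envelope sketch of that LojLink workflow -- Lojban-trained encoders enter near-logical statements, the store is reasoned over, and simple English-side queries read it back out. Every function and predicate name here is an invented placeholder, not part of any real system:

```python
# Minimal illustration of the LojLink knowledge-management idea:
# precise Lojban-side entry -> logical store -> English-facing queries.

knowledge_base = []

def enter_lojban(fact):
    """A trained encoder enters knowledge already in near-logical form."""
    knowledge_base.append(fact)

def query(predicate):
    """Answer a simple query (parsed from English elsewhere) by lookup."""
    return [f for f in knowledge_base if f[0] == predicate]

enter_lojban(("inhibits", "gene_A", "gene_B"))
enter_lojban(("expressed_in", "gene_A", "liver_cell"))
print(query("inhibits"))  # → [('inhibits', 'gene_A', 'gene_B')]
```

The economics of the idea rest on the entry step: a Lojban-fluent human produces the logical form directly, skipping the expensive and error-prone English-comprehension stage altogether.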

One example where this sort of approach to knowledge encoding could make sense is bioscience -- I've long thought that it would be good to have every PubMed abstract entered into a huge database of bio knowledge, where it could then be reasoned on and connected with online experimental biology data. But AI language comprehension tools aren't really up to this task -- all they can do now is fairly simplistic "information extraction." We plan to use a bio-customized version of INLINK to get around this problem, but entering knowledge using INLINK's interactive interface is always going to be a bit of a pain. There's enough biology out there, and the rate of increase of bio knowledge is fast enough, that it makes sense to train a crew of bio knowledge encoders in Lojban, so that the store of bio knowledge can be gotten into computer-comprehensible form at maximum rate and minimum cost. Yes, I realize this sounds really weird and would be a hard idea to sell to venture capitalists or pharma company executives -- but that doesn't mean it doesn't make sense....

As another aside, there is some Lojban poetry on the Net but I haven't found much Lojban music. I like to sing & play the keyboard sometimes (in terms of vocal style, think Bob Dylan meets Radiohead); I'm considering doing some of my future lyrics in Lojban! True, few listeners would understand what I was talking about -- but I reckon that, in many cases, the verbal contents of lyrics aren't all that important -- what's important is the genuineness of feeling attached to them, which is achievable if the words have deep meaning to the singer, whether or not the listener can understand them. Of course, I have some lyrics that violate this rule and succeed at least a bit in communicating poetically (even a bit of transhumanist lyricism here and there -- e.g. "I've got to tell you something / your lonely story made me cry / I wish we all could breathe forever / God damn the Universal Mind"). But even so I think Lojbanic lyrics could really rock....

But -- wow -- how to fit learning a new language into my schedule? Urgggh!! Way too much to do. Fortunately I have a wife who says she's willing to learn this weird language along with me, which will make things much easier; it'd be trickier to learn a language with no one to speak to. But still ... every time something new like this comes up I'm confronted with the multiplicity of Bens in my head: each with different goals and priority rankings on their shared goals ... some of them saying "Yeah! You've got to do this!", others cautioning that it will siphon away the sometimes irritatingly small amount of time currently allocated to enjoying the non-intellectual aspects of human life in the Ben-iverse....

But "I" digress. Or do I?

Perhaps internal multiplicity and the falsehood of the unified "I" is a topic best saved for another blog entry. But yet, it does tie back into Lojban -- which I notice contains a single word for "I" just like ordinary languages. This is an area where I'm tempted to introduce new Lojbanic vocabulary.

I don't know what "I" am. I like the Walt Whitman quote "I contradict myself? Very well then, I contradict myself. I am large, I contain multitudes." Indeed, I do. In From Complexity to Creativity I explored the notion of subselves extensively. This notion should be explicitly embodied in language. You should be able to say "One of my subselves wants X" rather than "I want X" -- easily, via a brief linguistic expression, rather than a complicated multi-phrasal description. The distinction between "Some of my subselves want this very intensely" and "All of my subselves want this moderately strongly" should be compactly and immediately sayable. If these things were compactly and simply expressible in language, maybe we'd get out of the habit of thinking of ourselves as unities when we're really not. At least, I'm definitely not. (Just like I feel idiotic most of the time, then feel more clever when interacting with others; similarly, when I'm on my own I often feel like a population of sub-Bens with loosely affiliated goals and desires, and then I feel more unified when interacting with others, both because others view me as a whole, and because compared to other peoples' subselves, mine all cluster together fairly tightly in spite of their differences... (and then I'm most unified of all when I let all the goals drift away and dissolve, and exist as a single non-self, basking in the 1=0, at which point humanity and transhumanity and language and all that seem no more important than un ... but now I really digress!)). And in an appropriately designed language -- say, a subself-savvy extension of Lojban -- this paragraph would be a lot shorter and simpler and sound much less silly.

And this brings up a potentially very interesting aspect of the idea of teaching AI systems in odd constructed languages. My main motivation for thinking about using Lojban instead of English to teach Novamente is to simplify the semantic mapping process. But, it's also the case that English -- like all other natural languages -- embodies a lot of really irritating illusions ... the illusion of the unified self being one of them. Lojban now also happens to embody the illusion of the unified self, but this is a lot easier to fix in Lojban than in English, because of the simpler and more flexible structure of the Lojban language. I don't buy the strongest versions of the Sapir-Whorf hypothesis (though I think everyone should read Whorf's essay-collection Language, Thought and Reality), but clearly it's true that language guides cognition to a significant extent, and this can be expected to be true of AI's at least as much as of humans.

I can envision a series of extensions to Lojban being made, with the specific objective of encouraging AI systems to learn to think according to desired patterns. Avoidance of illusions regarding self is one issue among many. Two areas where Lojban definitely exceeds English are ethics and emotion. English tends to be very confused in these regards -- look at the unnecessary ambiguities of the words "happy" and "good", for example. The current Lojban vocabulary doesn't entirely overcome these problems, but it goes a significant way toward doing so, and could be improved further with modest effort.

Well, as I type these words, my son Zeb is sitting next to me playing "Final Fantasy" (yes my work-desk sits in the livingroom next to the TV, which is mostly used by the kids for videogames, except for their obsessive viewing of South Park... the new season just started, there are new episodes now, and did you know Mr. Garrison is now Mrs. Garrison??!!). As I look over at the manly-chested (scrawny, artistic little 11-year-old Zeb's favorite utterance these days: "Admire my manly chest or go down trying!"), womanly-faced heroes run through their somewhat bleak simulated landscape, and feel really intensely sick of the repetitive background music, I can't help but observe that, obsessed as he is with that game, I'm even more obsessed with my own "final fantasy." Or am I? One of my "I"'s is. Another one feels a lot more like playing the piano for an hour or so before bed, even though clearly working on AI for that time-interval would be more productive in terms of the long-term good of Ben and the cosmos. Or is playing music a while justified by the mental peace it brings, enabling clearer thinking about AI research later? How to ensure against self-delusion in judgments like that? Ah, by 38 years old I have devised an excellent set of mental tools for guarding against delusion of one subself by itself, or of one subself by others -- and these tools are frustratingly hard to describe in the English language! No worries -- the community of sub-Bens remains reasonably harmonious, though in the manner of strange attractors rather than fixed psychological arrangements. The chaos goes on.... (ranji kalsa ... ) ... the human chaos goes on, moving inevitably toward its own self-annihilation or self-transcendence ... and my committee of sub-Bens unanimously agrees that it's worth spending a lot of thought and effort to bias the odds toward the latter ...