Friday, December 08, 2006
Polya's Inner Neanderthal
He surveyed a bunch of mathematicians, intending to find out how they think internally. Many of them, it was found, thought visually; some thought in terms of sounds, some purely abstractly.
But, George Polya was the only mathematician surveyed who claimed to think internally in terms of grunts and groans like "aaah", "urrghhh", "hmtphghhghggg"....
At the time I read this, I thought it was very odd.
However, now I have just read Mithen's book ("The Singing Neanderthals", discussed in another, recent blog of mine) claiming that the language of Neanderthals and early Cro-magnons was like that: no words, just lengthy, semi-musical grunts and groans with varying intonation patterns....
So maybe Polya was just old-fashioned.... ;-)
Anyone else out there think in terms of grunts and groans and so forth? If so please contact me....
Wednesday, December 06, 2006
Updating Kazantzakis
"What a strange machine man is! You fill him with bread, wine, fish, and radishes, and out comes sighs, laughter, and dreams."
-- Nikos Kazantzakis (1885-1957), Greek novelist.
To which my immediate mental response was:
OK, fine -- but it's what happens when you feed him with hallucinogenic mushrooms, amphetamines, ginger beer, Viagra and stir-fried snails that really fascinates me!!
Saturday, December 02, 2006
Zebulon's Favorite Place
My Favorite Place
Zebulon Goertzel
I work my way past all the furniture in my cramped room and sit down at my chair. I see a computer, a laptop. On its screen are pixels. Tiny, stabbing rays of color that drill into my eyes and let me enjoy my computer to no end despite its hideous flaws. The monitor is marked and scarred due to various past and unknown misuses. The dull keyboard, with its regular layout, is usable but without an S key. I look at the front disk drive and recall being told not to remove it.
Beside my laptop is my tablet. In its middle-left side is the pen, a gnawed-on, well-used device that is often lost and found in my pocket. The tablet cover is not without scratches, some deep, some light. Each scratch is from a scribble or drawing or line somebody drew. A bright wire links my tablet to the sloppy tangle of wires, connectors and cables which is usually behind my laptop.
My computer’s fan consistently buzz-whirs with high pitch. I am hypnotized as I slowly lean forward, as I grip my tablet pen with sore, almost numb fingers, as I click and click and click. My back is hunched and my neck is out. I work. My eyes ache, but I hardly notice. My stomach is empty, but I try to ignore it. I decide to be done. I get up, stretch, and go to care for myself. My favorite place is my computer, or my desk, because there are no limits to what a computer can do, and my computer fascinates me to no end.
The Cognitive Significance of Radiohead (aka, The Historical and Possibly Current Significance in the Human Mind of Patterns of Tonal Variation)
So, here I'll start with some personal notes and musings in the musicaloidal direction, and finally wander around to tying them in with cognitive theory...
I had told PJ I was a spare-time semi-amateur musician (improvising and composing on the electronic keyboard -- yeah, one of these days I'll put some recordings online; I keep meaning to but other priorities intervene) and she was curious about whether this had had any effect on my AI and other scientific work.
I mentioned to her that I often remember how Nietzsche considered his music improvisation necessary to his work as a philosopher. He kept promising himself to stop spending so much time on it, and once said something like "From now on, I will pursue music only insofar as it is domestically necessary to me as a philosopher."
This is a sentiment I have expressed to myself many times (my music keyboard being a tempting 10 feet away from my work desk...). Like Nietzsche, I have found a certain degree of musicological obsession "domestically necessary" to myself as a creative thinker.... The reasons for this are interesting to explore, although one can't draw definite conclusions based on available evidence....
When I get "stuck" thinking about something really hard, I often improvise on the piano. That way one of two things happens: either
1) my mind "loosens up" and I solve the problem
or
2) I fail to solve the problem, but then instead of being frustrated about it, I abandon the attempt for a while and enjoy myself playing music ;-)
Improvising allows one's music to follow one's patterns of thought, so the music one plays can sorta reflect the structure of the intellectual problem one is struggling with....
I drew on my experiences composing/improvising music when theorizing about creativity and its role in intelligence, and cooking up the aspects of the Novamente AGI design that pertain to flexible creativity....
As well as composing and improvising, I also listen to music a lot -- basically every kind of music except pop-crap and country -- most prototypically, various species of rock while in the car, and instrumental jazz/jazz-fusion when at home working ... [I like music with lyrics, but I can't listen to it while working, it's too distracting... brings me back too much to the **human** world, away from the world of data structures and algorithms and numbers!! ... the nice thing with instrumental music is how it captures abstract patterns of flow and change and interaction, so that even if the composer was thinking about his girlfriend's titties when he wrote the song, the abstract structures (including abstract **emotional** structures) in the music may feel (and genuinely **be**) applicable to something in the abstract theory of cognition ;-) ] ... but more important than that is the almost continual unconsciously-improvised "soundtrack" inside my head. It's as though I'm thinking to music about 40% of the time, but the music is generated by my brain as some kind of interpretation of the thoughts going on.... But yet when I try to take this internal music and turn it into **real music** at the keyboard, the translation process is of course difficult, and I find that much of the internal music must exist in some kind of "abstract sound space" and could never be fully realized by any actual sounds.... (These perverted human brains we are stuck with!!!)
Now, on to Mithen's book "The Singing Neanderthals," which makes a fascinating argument for the centrality of music in the evolution of human cognition.... (His book "The Prehistory of Mind" is really good as well, and probably more of an important work overall, though not as pertinent to this discussion...)
In brief he understands music as an instantiation and complexification of an archaic system of communication that was based (not on words but) on patterns of vocal tonal variation.
(This is not hard to hear in Radiohead, but in Bach it's a bit more sublimated ;=)
This ties in with the hypothesis of Sue Savage-Rumbaugh (who works with the genius bonobo Kanzi) that language likely emerged originally from protolanguages composed of **systems of tonal variation**.
Linguist Alison Wray has made related hypotheses: that protolanguage utterances were holistic, and got partitioned into words only later on. What Savage-Rumbaugh adds is that before protolanguage was partitioned into words, it was probably possessed of a deep, complex semantics of tonal variation. She argues this is why we don't recognize most of the existing language of animals: it's not discrete-word language but continuous-tonal-variation language.
(Funny that both these famous theorists of language-as-tonal-variation are women! I have sometimes been frustrated by my mom or wife judging my statements not by their contents but by the "tone" of delivery ;-)
This suggests that a nonhuman AI without a very humanlike body is never going to experience language anywhere near the same way as a human. Even written language is full of games of implied tonal variation-pattern; and in linguistic terms, this is probably key to how we select among the many possible parses of a complex sentence.
[Side note to computational linguists and pragmatic AI people: I agree the parse selection problem can potentially be solved via statistics, like Dekang Lin does in MiniPar; or via pure semantic understanding, as we do when reading Kant in translation, or anything else highly intellectual and non-tonal in nature.... But it is interesting to note that humans probably solve parse selection in significant part thru tonal pattern recognition....]
Regarding AI and language acquisition, this line of thinking is just a further justification of taking a somewhat nonhumanlike approach to protolanguage learning; as if this sort of theory is right, the humanlike approach is currently waaay inaccessible to AI's, even ones embodied in real or simulated robots... It will be quite a while until robot bodies support deep cognitive/emotional/social experience of tonal variation patterns in the manner that we humans are capable of.... The approach to early language learning I propose for Novamente is a subtle combination of humanlike and nonhumanlike aspects.
More speculatively, there may be a cognitive flow-through from "tonal pattern recognition" to the way we partition up the overall stream of perceived/enacted data into events -- the latter is a hard cognitive/perceptual problem, which is guided by language, and may also on a lower level be guided by subtle tonal/musical communicative/introspective intuitions. (Again, from an AI perspective, this is justification in favor of a nonhumanlike route ... one of the subtler aspects of high-level AI design, I have found, is knowing how to combine human-neurocognition inspiration with computer-science inspiration... but that is a topic for another blog post some other day...)
I am also reminded of the phenomenon of the mantra -- which is a pattern of tonal variation that is found to have some particular psychospiritual effect on humans. I have never liked mantras much personally, being more driven to the spare purity of Zen meditation (in those rare moments these days when emptying the intellectual/emotional mind and seeking altered states of purer awareness seems the thing to do...); but in the context of these other ideas on music, tones and psychology, I can see that if we have built-in brain-wiring for responding to tonal variation patterns, mantras may lock into that wiring in an interesting way.
I won't try to describe for you the surreal flourish of brass-instrument sounds that I hear in my mind at this moment -- a celebratory "harmony of dissonance" tune/anti-tune apropos of the completion of this blog post, and the resumption of the software-code-debugging I was involved with before I decided to distract myself briefly via blogging...
Friday, November 10, 2006
Virtual Brilliance, Virtual Idiocy
These days, more than 10K people are online in Second Life at any given moment, it seems. A million subscribers, half of them active. People are talking about the potential for using Second Life for business presentations, as a kind of super-pumped-up 3D avatar-infused WebEx. And of course the possibility for other cool apps not yet dreamed of.
Stirring stuff ... definitely, technology worth paying attention to.
And yet, Sibley's excellent presentation left me wondering the following: Do we really want to perpetuate all the most stupid and irritating features of human society in the metaverse ... such as obsession with fashion and hairstyles!!??
"Virtual MTV Laguna Beach", a non-Second-Life project that Electric Sheep Factory did, is technically impressive yet morally and aesthetically YUCK, from a Ben Goertzel perspective. Virtual So-Cal high school as a post-Singularity metaverse is a kind of transhumanist nightmare.
I remain unclear regarding whether there will really be any **interesting** "killer apps" for metaverse technology (and I don't find gaming or online dating all that interesting ;) before really powerful multisensory VR interfaces come about.
And even then, simulating humanity in virtuo fascinates me far less than going beyond the human body and its restrictions altogether.
But, I do note that we are currently using a 3D sim world to teach our Novamente baby AI system. Once it becomes smarter, perhaps we will release our AI in Second Life and let it learn from the humans there ... about important stuff like how to wear its hair right (grin!)
And I must admit to being excited about the potential of this sort of tech for scientific visualization. Flying your avatar through the folds of a virtual human brain, or a virtual cell full of virtual DNA, would be mighty educational. Not **fundamental** in the sense of strong AI or molecular assemblers or fully immersive VR, but a lot niftier than Virtual Laguna Beach....
-- Ben
Thursday, November 02, 2006
Music as a Force of Nature...
Thinking over the issues I wrote about in my recent "On Being a Force of Nature" post (below), I was reminded of a failed attempt I made many years ago to construct a more robust kind of music theory than the ones that currently exist....
(Fred Lerdahl and Ray Jackendoff's generative-grammar-based theory of music is a nice attempt in a similar direction to what I was trying to do, but ultimately I think they failed also....)
Existing music theory seems not to address the most important and interesting questions about music: Which melodies and rhythms are the most evocative to humans, in which ways, and why?
To put it crudely, we know how to distinguish (with fairly high accuracy) a horrible melody from an OK-or-better melody based on automated means. And we know how to distinguish (with fairly high accuracy) what sorts of emotions an OK-or-better melody is reasonably likely to evoke, by automated means.
But, we have NO handle whatsoever, scientifically or analytically, on what distinguishes a GREAT melody (or rhythm, though I've thought most about melodies) from a mediocre one.
I spent a fair bit of time looking for patterns of this nature, mostly eyeballing various representations of melodies but also using some automated software scripts. No luck ... and I long ago got too busy to keep thinking about the issue....
What was wrong with this pursuit was, roughly speaking, the same thing that's wrong with thinking about human minds as individual, separate, non-social/cultural entities....
A musical melody is a sequence of notes arranged in time, sure ... but basically it's better thought of as a kind of SOFTWARE PROGRAM intended to be executed within the human acoustic/cognitive/emotional brain.
So, analyzing melodies in terms of their note-sequences and time-delays is sort of like analyzing complex software programs in terms of their patterns of bits. (No, it's not an exact analogy by any means, but you may get the point.... The main weaknesses of the analogy are: notes and delays are higher-level than bits; and, musical melodies are control-sequences for a complex adaptive system, rather than a simpler, more deterministic system like a von Neumann computer.)
In principle one could find note/delay-level patterns to explain what distinguishes good from great music, but one would need a HUGE corpus of examples, and then the patterns would seem verrrry complex and tricky on that level.
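To make this "note/delay-level" idea concrete, here's a minimal toy sketch in Python -- the two example melodies, the interval/duration representation and the n-gram windowing are all just things I made up for illustration, not a serious music-analysis method.

from collections import Counter

# A melody as a list of (pitch, duration) pairs -- pitch as a MIDI note number,
# duration in beats.  These two example melodies are made up for illustration.
melody_a = [(60, 1.0), (62, 0.5), (64, 0.5), (65, 1.0), (67, 2.0)]
melody_b = [(67, 1.0), (65, 0.5), (64, 0.5), (62, 1.0), (60, 2.0)]
corpus = [melody_a, melody_b]

def interval_ngrams(melody, n=3):
    """Represent a melody by its pitch intervals and duration ratios,
    then return every length-n window ("n-gram") over that representation."""
    events = []
    for (p1, d1), (p2, d2) in zip(melody, melody[1:]):
        events.append((p2 - p1, round(d2 / d1, 2)))
    return [tuple(events[i:i + n]) for i in range(len(events) - n + 1)]

# Count how often each short interval/duration pattern occurs across the corpus.
pattern_counts = Counter()
for melody in corpus:
    pattern_counts.update(interval_ngrams(melody))

for pattern, count in pattern_counts.most_common(5):
    print(count, pattern)

Counts like these, computed over a corpus of thousands of melodies, are the kind of note/delay-level statistics I mean -- and exactly the level at which the interesting regularities seem hopelessly complex.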
A correct, useful music theory would need to combine the language of notes and delays and such with the language of emotional and cognitive responses. The kind of question involved is: in a given emotional/cognitive context, which specific note/delay patterns/combinations provide which kinds of shifts to the emotional/cognitive context.
However, we currently lack a good language for describing emotional/cognitive contexts.... Which makes the development of this kind of music theory pretty difficult.
So in what sense is music a force of nature? A piece of music comes out of the cultural/psychological/emotional transpersonal matrix, and has meaning and pattern mainly in combination with this matrix, as a sequence of control instructions for the human brains that form components of this matrix...
(I am reminded of Philip K. Dick's novel VALIS, in which a composer creates music that is specifically designed to act on human brains in a certain way, designed to bring them to certain spiritual realizations. Before ever reading Dick, in my late teens, I had a fantasy of composing a musical melody that was so wonderfully recursively revelatory -- in some kind of Escher-meets-Jimi-Hendrix-and-Bach sort of way -- that it would wake up the listener's mind to understand the true nature of the universe. Alas, I've been fiddling at the piano keyboard for years, and haven't come up with it yet....)
Anyway, this is far from the most important thing I could be thinking about! Compared to artificial general intelligence, music is not so deep and fascinating ... ultimately it's mostly a way of fiddling with the particularities of our human mental system, which is not so gripping as the possibility of going beyond these particularities in the right sort of way....
But yet, in spite of its relative cosmic unimportance, I can't really stay away from music for too long! The KORG keyboard sitting behind me tempts ... and many of my best ideas have come to me in the absence/presence that fills my mind while I'm improvising in those quasi-Middle-Eastern scales that I find so seductive (and my daughter, Scheherazade, says she's so sick of hearing, in spite of her Middle-Eastern name ;-)
OK... back to work! ...
Tuesday, October 31, 2006
On Being a Force of Nature...
Presence: An Exploration of Profound Change in People, Organizations, and Society
by Peter M. Senge, C. Otto Scharmer, Joseph Jaworski, and Betty Sue Flowers
led me inevitably to thoughts about the useful (but sometimes counterproductive) illusions of self and free will.
The authors argue that one path to achieving great things and great happiness is to let go of the illusion of autonomy and individual will, and in the words of George Bernard Shaw "be a force of nature," allowing oneself to serve as a tool of the universe, of larger forces that exist all around and within oneself, and ultimately are a critical part of one's own self-definition (whether one always realizes this or not).
The Shaw quote says:
"
This is the true joy in life, the being used for a purpose you consider a mighty one, the being a force of nature, rather than a feverish, selfish clod of ailments and grievances complaining that the world will not devote itself to making you happy.
"
A related quote from Martin Buber says of the "truly free" man, that he:
"
... intervenes no more, but at the same time, he does not let things merely happen. He listens to what is emerging from himself, to the course of being in the world; not in order to be supported by it, but in order to bring it to reality as it desires.
"
There is an interesting dilemma at the heart of this kind of wisdom, which is what I want to write about today.
A part of me rebels strongly against all this rhetoric about avoiding individual will and being a force of nature. After all, nature sucks in many ways -- nature "wants" me and my wife and kids and all the rest of you humans to die. What the natural and cultural world around me desires is in large measure repellent to me. I don't want to "get a haircut and get a real job" just because that's what the near-consensus of the world around me is ... and nor do I want to submit to death and disease. Nor do I want to listen to everything that nature has put inside me: anger, irrationality and the whole lot of it.... Nature has given me some great gifts and some nasty stuff as well.
Many of the things that are important to me are -- at least at first glance -- all about me exercising my individual will against what nature and society want me to do. Working to end the plague of involuntary death. Working to create superhuman minds. Composing music in scales few enjoy listening to; writing stories with narrative structures so peculiar only the really open-minded can appreciate them. Not devoting my life entirely or even primarily to the pursuits of money, TV-viewing, and propagating my genome.
On the other hand, it's worth reflecting on the extent to which the isolation and independence of the individual self is an illusion. We humans are not nearly so independent as modern Western -- and especially American -- culture (explicitly and implicitly) tells us. In fact the whole notion of a mind localized in a single body is not quite correct. As my dear friend Meg Heath incessantly points out, each human mind is an emergent system that involves an individual body, yes, but also a collection of tools beyond the body, and a collection of patterns of interaction and understanding within a whole bunch of minds. In practice, I am not just who I am inside my brain, I am also what I am inside the brains of those who habitually interact with me. I am not just what I do with my hands but also what I do with my computer. I wouldn't be me without my kids, nor without the corpus of mathematical and scientific knowledge and philosophical ideation that I have spent a large bulk of my life absorbing and contributing to.
So, bold and independent individual willfulness is, to an extent, an illusion. Even when we feel that we're acting independently, from the isolation of our own heart and mind, we are actually enacting distributed cultural and natural processes. A nice illustration of this is the frequency with which scientific discoveries -- even revolutionary ones -- are made simultaneously by multiple individuals. Charles Darwin and Alfred Russel Wallace were being willful, independent, deviant thinkers -- yet each of them was also serving as a nodal point for a constellation of forces existing outside himself ... a constellation of forces that was almost inevitably moving toward a certain conclusion, which had to be manifested through someone and happened to be manifested through those two men.
An analogy appears to exist with the representation of knowledge in the human brain. There is a peculiar harmony of localization and distribution in the way the brain represents knowledge. There are, in many cases, individual and highly localized brain regions corresponding to particular bits of knowledge. If you remove that little piece of the brain, the knowledge may go away (though in many but not all cases, it may later be regenerated somewhere else). But yet, that doesn't mean the knowledge is immanent only in that small region. Rather, when the knowledge is accessed or utilized or modified, a wide variety of brain regions may be activated. The localized region serves as a sort of "trigger" mechanism for unlocking a large-scale activation pattern across many parts of the brain. So, the knowledge is both localized and distributed: there are globally distributed patterns that are built so as to often be activated by specific local triggers.
We can look at humans as analogous to neurons, in the above picture. None of us contains that much in and of ourselves, but any one of us may be more or less critical in triggering large-scale activation patterns ... which in turn affect a variety of other individuals in a variety of ways....
So then, the trick in being a "force of nature" is to view yourself NOT as an individual entity with an individual mind contained in an individual body, making individual decisions ... but rather, as a potential trigger for global activity patterns; or, to put it slightly differently, as a node or nexus of a whole bunch of complex global activity patterns, with the capability to influence as well as be influenced.
When we act -- when we feel like "we" are "acting" -- it is just as fair to say that the larger (social, cultural, natural, etc.) matrix of patterns that defines us is acting thru the medium of us.
I feel analytically that what I said in the previous paragraph is true... but what is interesting is how rarely I actually feel that way, in practice, in the course of going about my daily business. Even in cases where it is very obviously the truth -- such as my work on artificial general intelligence. Yes, I have willfully chosen to do this, instead of something else easier or more profitable or more agreeable to others. On the other hand, clearly I am just serving as the tool of a larger constellation of forces -- the movement of science and technology toward AGI has been going on a long time, which is why I have at my disposal the tools to work on AGI; and a general cultural/scientific trend toward legitimization of AGI is beginning, which is why I have been able to recruit others to work on AGI with me, which has been an important ingredient for maintaining my own passion for AGI at such a high level.
How different would it be, I wonder, if in my individual daily (hourly, minutely, secondly) psychology, I much more frequently viewed myself as a node and a trigger rather than an individual. A highly specialized and directed node and trigger, of course -- not one that averages the inputs around me, but one that is highly selective and responds in a very particular way intended to cause particular classes of effects which (among other things) will come back and affect me in specific ways.
In short: Letting go of the illusion of individuality, while retaining the delights of nonconformity.
Easy enough to say and think about; and rather tricky to put into practice on a real-time basis.
Cultures seem to push you either to over-individualism or over-conformity, and finding the middle path as usual is difficult -- and as often, is not really a middle path, in the end, but some sort of "dialectical synthesis" leading beyond the opposition altogether and into a different way of being and becoming....
Sunday, September 10, 2006
Friendliness vs. Compassion, revisited (plus a bunch of babbling about what I've been up to this year)
It's been a busy year ... I've sent my oldest son Zarathustra off to college at age 16 (to Simon's Rock College, www.simons-rock.edu, the same place I, my sister and my ex-wife went way back in the day), which is a very odd feeling ... I finished a pretty decent draft of a novel, Echoes of the Great Farewell, which is a completely lunatic prose-poetic novel-thing told from the stream-of-consciousness point of view of a madman who believes that hallucinogenic mushrooms have told him how to create a superhuman AI (perhaps I'll actually try to get this one published, though it's not a terribly publisher-friendly beast) ... I came up with a substantial simplification of the Novamente AI design, which I'm pretty happy with due to its deep foundations in systems philosophy ... worked with my Novamente colleagues to take a few more incremental steps toward implementation of the Novamente AGI design (especial progress in the area of probabilistic reasoning, thanks to the excellent efforts of Ari Heljakka) ... did some really nice data mining work in the context of some commercial projects ... made some freaky instrumental music recordings that my wife at least enjoyed ... hiked the Na Pali Trail on Kauai and a whole bunch of trails near the Matterhorn in the Alps with my mountain-maniacal young wife Izabela ... co-organized a conference (the AGIRI workshop) ... published a philosophy book, The Hidden Pattern, which tied together a whole bunch of recent essays into a pretty coherent statement of the "world as pattern" perspective that has motivated much of my thinking ... developed a new approach to AGI developmental psychology (together with Stephan Vladimir Bugaj) ... starred in a few animations created by my son Zebulon (zebradillo.com), including one about rogue AI and another in which I mercilessly murder a lot of dogs ... helped discover what seems to be the first plausible genetic underpinnings for Chronic Fatigue Syndrome (together with colleagues at the CDC and Biomind LLC) ... and jeez, well this list is dragging on, but it's really not the half of it...
A pretty full year -- fun to live; too much going on to permit much blogging ... but frustrating in the big picture, given that it's been yet another year in which only modest incremental progress has been made toward my most important goal of creating AGI. My understanding of AGI and the universe has increased significantly this year so far, which is important. And the Novamente codebase has advanced too. Again, though, balancing the goal of achieving AGI with the goal of earning cash to support a family (send the kids to college, pay the alimony (which runs out in another 9 months -- yay!!), etc.) proves a tough nut to crack, and is just a dilemma I keep living with, without solving it satisfactorily so far.... I'll be spending much of the next 6 weeks trying to solve it again, by doing a bunch of meetings and social-networking events partially aimed at eventually putting me in touch with investors or other partners who may be interested in funding my AGI work more fully than is currently the case. (Don't get me wrong, we are moving toward AGI in the Novamente project right now, but we could be moving 10 times faster with some fairly modest investment ... the small amount of investment we've gotten so far, combined with the small surplus value my colleagues and I have managed to extract from our commercial narrow-AI contracts, is far from enough to move us along at maximum rate.)
BUT ANYWAY ... all this was not the point of this blog entry. Actually, the point was to give a link to an essay I wrote on a train from Genova to Zermatt, following a very interesting chat with Shane Legg and Izabela. Shane wrote a blog entry after our conversation, which can be found by going to his site
http://www.vetta.org/
and searching for the entry titled "Friendly AI is Bunk." I wrote an essay with a similar theme but a slightly different set of arguments. It is found at
http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf
The essay is informal in the sense of a blog entry, but is too long to be a blog entry. My argument is a bit more positive than Shane's in that, although I agree with him that guaranteeing "AI Friendliness" in a Yudkowskian sense is very unlikely, I think there may be more general and abstract properties ("compassion" (properly defined, and I'm not sure how), anyone?) that can be more successfully built into a self-modifying AI.... (Shane by the way is a deep AI thinker who is now a PhD student working with Marcus Hutter on the theory of infinitely powerful AI's, and who prior to that did a bunch of things including working with me on the Webmind AI system in the late 1990's, and working with Peter Voss on the A2I2 AGI architecture.)
While you're paying attention, you may be interested in another idea I've been working on lately, which is a variant of the Lojban language (tentatively called Lojban++) that I think may be very useful for communication between humans and early-stage AGI's. If you're curious you can read about it at
http://www.goertzel.org/papers/lojbanplusplus.pdf
With a view toward making Lojban++ into something really usable, I've been spending a bit of time studying Lojban lately, which is a slow but fascinating and rewarding process that I encourage others to undertake as well (see www.lojban.org).
Well, OK ... that's enough for now ... time for bed. (I often like late-night as a time for work due to the quiet and lack of interruptions, but tonight my daughter is having a friend sleep over and they're having an extremely raucous post-midnight mop-the-dirty-kitchen-floor/mock-ice-skating party which is more conducive to blogging than serious work ;-). I hope to blog a bit more often in the next months; for whatever obscure human-psychology reason it seems to gratify some aspects of my psyche. Hopefully the rest of 2006 will be just as fun and diverse as the part so far -- and even more productive for Novamente AGI...
Wednesday, January 25, 2006
Inconsistentism
I find that Marc's suggestion ties in interestingly with a prior subject I've dealt with in this blog: Subjective Reality.
I think it is probably not the best approach to think about the universe as a formal system. I find it more useful to consider formal systems as approximate and partial models of the universe.
So, in my view, the universe is neither consistent nor inconsistent, any more than a brick is either consistent or inconsistent. There may be mutually consistent or mutually inconsistent models of the universe, or of a brick.
The question Marc has raised, in this perspective, is whether the "best" (in some useful sense) way of understanding the universe involves constructing multiple mutually logically inconsistent models of the universe.
An alternative philosophical perspective is that, though the universe is not in itself a formal system, the "best" way of understanding it involves constructing more and more comprehensive and sophisticated consistent formal systems, each one capturing more aspects of the universe than the previous. This is fairly close to being a rephrasing of Charles S. Peirce's philosophy of science.
It seems nice to refer to these two perspectives as Inconsistentist versus Consistentist views of the universe. (Being clear however that the inconsistency and consistency refer to models of the universe rather than the universe itself.)
Potentially the Inconsistentist perspective ties in with a previous thread in this blog regarding the notion of Subjective Reality. It could be that, properly formalized, the two models
A) The universe is fundamentally subjective, and the apparently objective world is constructed out of a mind's experience
B) The universe is fundamentally objective and physical, and the apparently subjective world is constructed out of physical structures and dynamics
could be viewed as two models that are:
- individually logically consistent
- mutually logically inconsistent
- separately useful
Inconsistentism also seems to tie in with G. Spencer Brown's notion of modeling the universe using "imaginary logic", in which contradiction is treated as an extra truth value similar in status to true and false. Francisco Varela and Louis Kauffmann extended Brown's approach to include two different imaginary truth values I and J, basically corresponding to the series
I = True, False, True, False,...
J = False, True, False, True,...
which are two "solutions" to the paradox
X = Not(X)
obtained by introducing the notion of time and rewriting the paradox as
X[t+1] = Not (X[t])
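For concreteness, here's a tiny Python sketch showing that iterating the time-resolved equation really does generate the I and J series above (just an illustration, nothing deep):

def iterate_paradox(x0, steps=6):
    """Iterate X[t+1] = Not(X[t]) starting from x0, returning the series."""
    series = [x0]
    for _ in range(steps - 1):
        series.append(not series[-1])
    return series

print(iterate_paradox(True))   # I = True, False, True, False, ...
print(iterate_paradox(False))  # J = False, True, False, True, ...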
In Brownian philosophy, the universe may be viewed in two ways
- timeless and inconsistent
- time-ful and consistent
Getting back to subjective and objective reality, one might pose an analogous paradox:
creates(subjective reality, objective reality)
creates(objective reality, subjective reality)
creates(X,Y) --> ~ creates(Y,X)
and then a resolution such as
I = subjective, objective, subjective, objective,...
J = objective, subjective, objective, subjective,...
embodying the iteration
creates(subjective reality[t], objective reality[t+1])
creates(objective reality[t+1], subjective reality[t+2])
If this describes the universe then it would follow that the subjective/objective distinction only introduces contradiction if one ignores the existence of time.
Arguing in favor of this kind of iteration, however, is a very deep matter that I don't have time to undertake at the moment!
I have said above that it's better to think of formal systems as modeling the universe rather than as being the universe. On the other hand, taking the "patternist philosophy" I've proposed in my various cognitive science books, we may view the universe as a kind of formal system comprised of a set of propositions about patterns.
A formal system consists of a set of axioms.... OTOH, in my "pattern theory" a process F is a pattern in G if
- F produces G
- F is simpler than G
In this sense, any set of patterns may be considered as a formal system.
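Here is a minimal Python sketch of that definition, with raw string length standing in as a deliberately crude simplicity measure (any consistent simplicity-evaluation-measure could be plugged in instead):

def size(x):
    """A crude simplicity measure: the length of a string encoding of x."""
    return len(str(x))

def is_pattern(process_source, produce, data):
    """F is a pattern in G if F produces G and F is simpler than G."""
    return produce() == data and size(process_source) < size(data)

# Toy example: a short generating process for a long repetitive string.
data = "abc" * 200
print(is_pattern('"abc" * 200', lambda: "abc" * 200, data))   # True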
I would argue that, for any consistent simplicity-evaluation-measure, the universal pattern set is a consistent formal system; but of course inconsistent simplicity-evaluation-measures will lead to inconsistent formal systems.
Whether it is useful to think about the whole universe as a formal system in this sense, I have no idea...
Thursday, December 08, 2005
A General Theory of the Development of Forms (wouldn't it be nice to have one?)
Since the Novamente AGI high-level design and the "patternist philosophy of mind" are basically completed and stable for a while (though I'm still engaged with writing them up), I need a new conceptual obsession to absorb the extremely-abstract-thinking portion of my brain... ;-)
Thinking about the development of forms, I have in mind three main specific areas:
- developmental psychology (in humans and AI's)
- epigenesis in biological systems
- the growth of the early universe: the emergence of physical law from lawlessness, etc. (cf John Wheeler)
Each of these is a big area and I've decided to proceed through them in this order. Maybe I will never get to the physics part and will just try to abstract a general theory of development from the first two cases, we'll see.
I also have an intuition that it may be useful to use formal language theory of some sort as a conceptual tool for expressing developmental stages and patterns. Piaget tried to use abstract algebra in some of his writings, which was a nice idea, but didn't quite work. This ties in with Jerry Fodor's notion of a "language of thought", which I don't quite buy in all the senses he means it, but which may have some real meat to it. It may be that developing minds use different, increasingly powerful formal languages of thought at different stages. I don't know if anyone has taken this approach in the developmental psych literature.
For instance, it's arguable that quantifier binding is only added to the human language of thought at Piaget's formal stage, and that recursion is only added to the human language of thought at Piaget's concrete operational stage (which comes along with phrase structure syntax as opposed to simpler proto-language). What I mean by "X is added to the human language of thought at stage S" is something like "X can be used with reasonable generality and fluidity at stage S" -- of course many particular instances of recursion are used before the pre-operational phase, and many particular instances of quantifier binding are used before the formal phase. But the full "syntax" of these operations is not mastered prior to the stages I mentioned, I suggest. (Note that I am using Piaget's stage-labels only for convenience, I don't intend to use them in my own theory of forms; if I take a stage-based approach at all then I will define my own stages.)
I note that formal language theory is something that spans different domain areas in the sense that
- there's discussion of "language of thought" in a general sense
- natural language acquisition is a key aspect of developmental psych
- L-system theory shows that formal languages are useful for explaining and modeling plant growth
- "Symbolic dynamics" uses formal language theory to study the dynamics of chaotic dynamical systems in any domain, see also Crutchfield and Young
So it seems to be a potentially appropriate formal tool for such a project.
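As a tiny concrete example of the L-system bullet above, here is Lindenmayer's original "algae" system in a few lines of Python -- rewrite rules A -> AB and B -> A, applied in parallel at each generation:

# Lindenmayer's "algae" L-system: A -> AB, B -> A, rewritten in parallel.
rules = {"A": "AB", "B": "A"}

def grow(axiom, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

for g in range(6):
    print(g, grow("A", g))
# The string lengths follow the Fibonacci numbers: 1, 2, 3, 5, 8, 13, ...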
I was discussing this with my friend Stephan Bugaj recently and he and I may write a book on this theme if we can pull our thinking together into a sufficiently organized form....
Friday, December 02, 2005
More Venting about Scientific Narrowmindedness and Superintelligent Guinea Pigs
AAARRRGGHHH!!! Amazing how difficult it is to get even clever, motivated, knowledgeable biologists to understand math/CS methods. The techniques I presented to them (a bunch of Biomind stuff) would genuinely help with their research, and are already implemented in stable software -- there's nothing too fanciful here. But the "understanding" barrier is really hard to break through -- and I'm not that bad at explaining things; in fact I've often been told I'm really good at it....
We'll publish a bunch of bioinformatics papers during the next year and eventually, in a few more years, the techniques we're using (analyzing microarray and SNP and clinical data via learning ensembles of classification rules; then data mining these rule ensembles, and clustering genes together based on whether they tend to occur in the same high-accuracy classification rules, etc.) will become accepted by 1% or 5% of biomedical researchers, I suppose. And in 10 years probably it will all be considered commonplace: no one will imagine analyzing genetics data without using such techniques....
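For readers who prefer code to prose, here is a toy Python sketch of the general flavor of the approach -- emphatically not the actual Biomind algorithms, and with made-up data: learn a batch of simple classification rules, keep the reasonably accurate ones, and then see which genes keep showing up together inside them.

import itertools, random
from collections import Counter

random.seed(0)

# Toy data: expression levels for 6 genes in 20 Case and 20 Control samples.
# (Made-up numbers; in real work this would be microarray/SNP/clinical data.)
genes = ["g%d" % i for i in range(6)]

def sample(label):
    expr = {g: random.gauss(0, 1) for g in genes}
    if label == "case":          # genes g0 and g1 are jointly shifted in cases
        expr["g0"] += 1.5
        expr["g1"] += 1.5
    return expr, label

data = [sample("case") for _ in range(20)] + [sample("control") for _ in range(20)]

def rule_accuracy(pair, thresholds):
    """A "rule" here means: predict Case iff both genes exceed their thresholds."""
    correct = 0
    for expr, label in data:
        predicted = "case" if all(expr[g] > t for g, t in zip(pair, thresholds)) else "control"
        correct += (predicted == label)
    return correct / len(data)

# Learn an ensemble of simple two-gene rules and keep the higher-accuracy ones.
good_rules = []
for pair in itertools.combinations(genes, 2):
    for thresholds in [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]:
        if rule_accuracy(pair, thresholds) >= 0.75:
            good_rules.append(pair)

# "Cluster" genes by how often they co-occur in the same high-accuracy rule.
co_occurrence = Counter(frozenset(pair) for pair in good_rules)
for pair, count in co_occurrence.most_common():
    print(sorted(pair), count)

The real methods involve much richer rule languages, ensembles and statistics, but the basic move -- mining the learned rule ensemble itself, rather than the raw data, for gene relationships -- is the same.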
Whether Biomind will manage to get rich during this process is a whole other story -- it's well-known that the innovative companies at the early stage of a revolution often lose out financially to companies that enter the game later once all the important ideas have already been developed. But finances aside, I'm confident that eventually, little by little, the approach I'm taking to genetic data analysis will pervade and transform the field, even if the effect is subtle and broad enough that I don't get that much credit for it....
And yet, though this Biomind stuff is complex enough to baffle most bioinformaticists and to be really tough to sell, it's REALLY REALLY SIMPLE compared to the Novamente AI design, which is one or two orders of magnitude subtler. I don't think I'm being egomaniacal when I say that no one else has really appreciated most of the subtlety in the Novamente design -- not even the other members of the Novamente team, many of whom have understood a lot. Which is verrrry different from the situation with Biomind: while the Biomind methods are too deep for most biologists, or most academic journal referees who review our papers, to understand, everyone on the Biomind team fully "gets" the algorithms and ideas.
Whether the subtlety of the Novamente design ever gets to be manifested in reality remains to be determined -- getting funding to pay a small team to build the Novamente system according to the design remains problematic, and I am open to the possibility that it will never happen, dooming me (as I've joked before) to a sort of Babbagedom. What little funding there is for AGI-ish research tends to go to folks who are better at marketing than I am, and who are willing to tell investors the story that there's some kind of simple path to AGI. Well, I don't think there is a simple path. There's at least one complex path (Novamente) and probably many other complex paths as well; and eventually someone will follow one of them if we don't annihilate ourselves first. AGI is very possible with 3-8 years effort by a small, dedicated, brilliant software team following a good design (like Novamente), but if the world can't even understand relatively simple stuff like Biomind, getting any understanding for something like Novamente is obviously going to continue to be a real uphill battle!
Relatedly, a couple weeks ago I had some long conversations with some potential investors in Novamente. But the investors ended up not making any serious investment offer -- for a variety of reasons, but I think one of them was that the Novamente design was too complex for them to easily grok. If I'd been able to offer them some easily comprehensible apparent path to AGI, I bet they would have invested. Just like it would be easier to sell Biomind to biologists if they could grok the algorithms as well as the Biomind technical team. Urrrghh!
Urrrgghhh!! urrrgghh!! ... Well, I'll keep pushing. There are plenty of investors out there. And the insights keep coming: interestingly, in the last few days a lot of beautiful parallels have emerged between some of our commercial narrow-AI work in computational linguistics and our more fundamental work in AGI (relating to making Novamente learn simple things in the AGI-SIM simulation world). It turns out that there are nice mathematical and conceptual parallels between algorithms for learning semantic rules from corpuses of texts, and the process of learning the functions of physical objects in the world. These parallels tell us a lot about how language learning works -- specifically, about how structures for manipulating language may emerge developmentally from structures for manipulating images of physical objects. This is exactly the sort of thing I want to be thinking about right now: now that the Novamente design is solid (though many details remain to be worked out, these are best worked out in the course of implementation and testing), I need to be thinking about "AGI developmental psychology," about how the learning process can be optimally tuned and tailored. But instead, to pay the bills and send the kids to college yadda yadda yadda, I'm trying to sell vastly simpler algorithms to biologists who don't want to understand why it's not clever to hunt for biomarkers for a complex disease by running an experiment with only 4 Cases and 4 Controls. (Answer: because complex diseases have biomarkers that are combinations of genes or mutations rather than individual genes/mutations, and to learn combinational rules distinguishing one category from another, a larger body of data is needed.)
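Here is a quick back-of-the-envelope version of that answer, under a deliberately crude null model in which each candidate combinational rule fires at random, with probability 0.5, on each sample:

from math import comb

def chance_perfect_separators(n_cases, n_controls, n_candidate_rules):
    """Expected number of candidate rules that perfectly separate Cases from
    Controls purely by chance, under the crude coin-flip null model."""
    p_perfect = 0.5 ** (n_cases + n_controls)
    return n_candidate_rules * p_perfect

# Say we search over gene pairs drawn from 10,000 genes: ~5e7 candidate two-gene rules.
n_rules = comb(10_000, 2)
print(chance_perfect_separators(4, 4, n_rules))    # ~195,000 spurious "perfect" rules
print(chance_perfect_separators(30, 30, n_rules))  # ~4e-11 -- essentially none

With 4 Cases and 4 Controls you expect a couple hundred thousand perfect-looking combinational rules by pure chance; with 30 and 30 you expect essentially none -- which is why a larger body of data is needed.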
Ooops! I've been blogging too long, I promised Scheherazade I would go play with her guinea pigs with her. Well, in a way the guinea pigs are a relief after dealing with humans all day ... at least I don't expect them to understand anything. Guinea pigs are really nice. Maybe a superintelligent guinea pig would be the ultimate Friendly AI. I can't remember ever seeing a guinea pig do anything mean, though occasionally they can be a bit fearful and defensive....
Tuesday, November 29, 2005
Post-Interesting
I have launched a second blog, which is called Post-Interesting
www.post-interesting.com
and I have invited a number of my friends to join me in posting to it (we'll see if any of them actually get around to it!).
The idea is that this current blog ("Multiverse According to Ben") will contain more personal-experience and personal-opinion type entries, whereas Post-Interesting will be more magazine-like, containing reviews, interesting links, and compact summaries of highly crisp scientific or philosophical ideas.... (Of course, even my idea of "magazine-like" contains a lot of personal opinions!)
Not that I really have time to maintain one blog let alone two, but from time to time I seem to be overtaken by an irresistible desire to expunge massive amounts of verbiage ;-D
If people make a lot of interesting posts to Post-Interesting then one day it will be a multimedia magazine and put Wired and Cosmopolitan out of business! (For now I just put three moderately interesting initial posts there....)
-- Ben
Wednesday, November 16, 2005
Reality and Religion (a follow-up to earlier posts on Objective/Subjective Reality)
Bob McCue posted some interesting comments on my earlier blog entry about objective versus subjective reality; go to
http://www.goertzel.org/blog/2005/07/objective-versus-subjective-reality.html
to read his comments.
Bob is a former Mormon and has written extensively and elegantly about his reasons for leaving the faith:
http://mccue.cc/bob/spirituality.htm
He read my blog on objective/subjective reality and my essay on "social/computational/probabilist" philosophy of science
http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm
and then posed some questions regarding the probabilistic justification of religious beliefs.
Bob: The questions you raise are deep and fascinating ones and unfortunately I don't have time right now to write a reply that does them justice.
However, I can't resist saying a few things ;-)
I was never religious but my ex-wife was and, although this led to numerous unpleasant arguments between us, it also led me to gain some degree of appreciation (OK, not all that much!) for the religious perspective. For her (as a Zen Buddhist) it was never about objective truth at all, it was always about subjective experience -- her own and that of the others in her sangha (religious group). If probability theory was relevant, it was in the context of evaluations like
Probability ( my own spiritual/emotional state is good GIVEN THAT I carry out these religious practices)
>
Probability ( my own spiritual/emotional state is good GIVEN THAT I don't carry out these religious practices)
The evaluation criterion was internal/subjective not external/objective. The actual beliefs of the religion were only evaluated in regard to their subjective effects on the believer's internal well-being. This fits in with a Nietzschean perspective in which "An organism believes what it needs to believe in order to survive", if you replace "survive" with "maximize internal satisfaction" (which ultimately approximately reduces to Nietzsche's "survival" if one takes an evolutionary view in which we have evolved to, on average, be satisfied by things correlated with our genomes' survival).
I am not sure what this has to do with religions like Mormonism though. I think my ex got interested in Zen (in her mid-20's) partly because I had talked to her about it years before that, when as a teenager I had found Huang Po's Zen writings (on exiting the world of thought and ideas and entering the world of pure truth/nothingness) really radical and fascinating. Zen is not very typical of religions and it's questionable whether it really belongs in the "religion" category -- it's a borderline case. It specifically teaches that the external, "objective" world is illusory and urges you to fully, viscerally and spiritually understand this world's construction via the mind. Thus in a Zen perspective the empirical validation or refutation of hypotheses (so critical to science) is not central, because it takes place within a sphere that is a priori considered illusory and deceptive. Because of this Zen tends not to make statements that contradict scientific law; rather it brushes the whole domain of science aside as being descriptive of an illusory reality.
I guess that Mormonism is different in that it makes hypotheses that directly contradict scientific observation (e.g. do Mormons hold the Earth was created 6000 years ago?). But still, I suspect the basic psychological dynamics is not that different. People believe in a religion because this belief helps them fulfill their own goals of personal, social or spiritual satisfaction. Religious people may also (to varying extents) have a goal of recognizing valid patterns in the observed world; but people can have multiple goals, and apparently for religious people the goal of achieving personal/social/spiritual satisfaction thru religion overwhelms the goal of recognizing valid patterns in the observed world. I find nothing very mysterious in this.
Bob: You ask about belief in Kundalini Yoga (another obsession of my ex-wife, as it happens). I guess that the KY system helps people to improve their own internal states and in that case people may be wise to adopt it, in some cases... even though from a scientific view the beliefs it contains are a tricky mix of sense and nonsense.
However, it seems pretty clear to me that religious beliefs, though they may sometimes optimally serve the individual organism (via leading to various forms of satisfaction), are counterproductive on the species level.
As a scientific optimist and transhumanist I believe that the path to maximum satisfaction for humans as a whole DOES involve science -- both for things like medical care, air conditioning and books and music, and for things like creating AI's to help us and creating nanotech and gene therapy solutions for extending our lives indefinitely.
There's a reason that Buddhism teaches "all existence involves suffering." It's true, of course -- but it was even more true in ancient India than now. There was a lot more starvation and disease and general discomfort in life back then, which is why a suffering-focused religion like Buddhism was able to spread so widely. The "suffering is everywhere" line wouldn't sell so well in modern America or Western Europe, because although suffering still IS everywhere, it's not as extreme and not as major a component of most people's lives. Which is due, essentially, to science. (I am acutely aware that in many parts of the world suffering is a larger part of peoples' lives, but, this does not detract from the point I am making.)
Since religious belief systems detract from accurate observation of patterns in reality, they detract from science and thus from the path with the apparently maximal capacity to lead humanity toward overall satisfaction, even though they may in fact deliver maximal personal satisfaction to some people (depending on their personal psychology).
However, one may argue that some people will never be able to contribute to science anyway (due to low intelligence or other factors), so that if they hold religious beliefs and don't use them to influence the minds of science-and-technology-useful people, their beliefs are doing no harm to others but may be increasing their own satisfaction. Thus, for some people to be religious may be a good thing in terms of maximizing the average current and long term satisfaction of humanity.
There is also a risk issue here. Since religion detracts from science and technology, it maintains humans in a state where they are unlikely to annihilate the whole species, though they may kill each other in more modest numbers. Science gives us more power for positive transformation and also more power for terrible destruction. The maximum satisfaction achievable thru science is higher than thru religion (due to the potential of science to lead to various forms of massively positive transhumanism), but the odds of destruction are higher too. And we really have no way of knowing what the EXPECTED outcome of the sci-tech path is -- the probabilities of transcension versus destruction.
[As I wrote the prior paragraph I realized that no Zen practitioner would agree with me that science has the power to lead to greater satisfaction than religion. Semantics of "satisfaction" aside they would argue that "enlightenment" is the greatest quest and requires no technology anyway. But even if you buy this (which I don't, fully: I think Zen enlightenment is an interesting state of mind but with plusses and minuses compared to other ones, and I suspect that the transhuman future will contain other states of mind that are even more deep and fascinating), it seems to be the case that only a tiny fraction of humans have achieved or ever will achieve this exalted state. Transhumanist technology would seem to hold the possibility of letting any sentient being choose their own state of mind freely, subject only to constraints regarding minimizing harm to others. We can all be enlightened after the Singularity -- if we want to be! -- but we may well find more appealing ways to spend our eternity of time!! -- ]
OK, I drifted a fair way from Mormonism there, back to my usual obsessions these days. But hopefully it was a moderately interesting trajectory.
For a more interesting discussion of Mormonism, check out the South Park episode "All About Mormons." It was actually quite educational for me.
Saturday, October 22, 2005
Quantum Erasers, Psychokinesis and Time Travel
Some background reading on the quantum eraser experiments:
http://www.bottomlayer.com/bottom/kim-scully/kim-scully-web.htm
http://www.dhushara.com/book/quantcos/qnonloc/eraser.htm
Even though the quantum eraser experiments don’t allow true “backwards causation,” this doesn’t prove that such a thing is impossible. It just proves that there is no way to do it within the commonly accepted constraints of physical law. There is at least one concrete possibility for how currently known physical law may be breakable, in a way that would allow backward causation (and, as an effective consequence, time travel – since being able to cause events in the past would mean being able to create an exact replica of oneself in the past, including a brain-state possessing the feeling of having just been quantum-magically transported into the past).
This possibility is “quantum psychokinesis” – a notion which sounds bizarre, but is apparently supported by a variety of experiments done by respected scientists at various institutions including Princeton University; see
http://www.fourmilab.ch/rpkp/strange.html
The simplest of these experiments involve people trying to influence, by the power of concentration, random events such as the direction of an electron’s spin. A long list of experiments shows that, after some training, people have a weak but real ability to do this. Over tens of thousands of trials people can make electrons spin in the direction they want to 51% of the time or so, whereas chance would dictate merely 50%. This is a small difference, but over so many trials it is highly statistically significant.
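As a quick sanity check on that claim, here's the standard normal-approximation calculation, assuming (purely for illustration) 50,000 trials at a 51% hit rate:

from math import sqrt, erfc

def z_score(hits, trials, p_null=0.5):
    """Normal approximation to the binomial: how surprising is this hit count
    under the null hypothesis of pure chance?"""
    expected = trials * p_null
    stdev = sqrt(trials * p_null * (1 - p_null))
    return (hits - expected) / stdev

trials = 50_000                   # "tens of thousands of trials" (an assumed figure)
hits = int(0.51 * trials)         # 51% in the intended direction
z = z_score(hits, trials)
print(z, erfc(z / sqrt(2)))       # roughly z ~ 4.5, two-sided p ~ 1e-5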
Hooking this kind of PK experiment up to a quantum eraser apparatus, one would obtain a practical example of reverse causation. If this kind of PK actually works, then in the context of the above “paradox” situation, for example, it really would be possible for someone on Alpha Centauri to send messages faster than light to someone back home, via biasing the direction of spin of the coupled twin particle observed on Alpha Centauri. The rate of information transmission would be extremely low, since all that PK has ever been observed to do is give a slight statistical bias to events otherwise thought random. But with an appropriate code even a very slow rate of information transmission can be made to do a lot. And hypothetically, if this sort of PK phenomenon is actually real, one has to imagine that AI’s in the future will find ways to amplify it far beyond what the human brain can do.
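To put a rough number on "extremely low": if each PK-biased trial is modeled as a binary symmetric channel that delivers the intended bit 51% of the time, Shannon's formula gives the capacity directly:

from math import log2

def bsc_capacity(p_correct):
    """Shannon capacity (bits per channel use) of a binary symmetric channel
    that transmits each bit correctly with probability p_correct."""
    q = 1 - p_correct
    return 1 - (-(p_correct * log2(p_correct) + q * log2(q)))

c = bsc_capacity(0.51)
print(c)            # ~0.0003 bits per trial
print(8 / c)        # ~28,000 trials, at best, to send a single byte reliably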
Quantum Theory and Consciousness
I've been working on the last couple chapters of my long-due philosophy-of-mind book "The Hidden Pattern", and one of the chapters is on quantum reality, so I've been re-studying some of the trickier aspects of quantum theory and its interpretation.
In the course of this, I've come to what I think is a clearer understanding of the relation between quantum theory and consciousness, based on the "decoherence" approach to quantum measurement -- see
http://en.wikipedia.org/wiki/Quantum_decoherence
for a refresher on this topic.
This blog entry will make the most sense to readers who are at least a little familiar with quantum theory, if only at the popular-science level.
Contrary to what Eugene Wigner suggested back in the 1960’s, we can’t quite say “consciousness is the collapse of the wave function,” because in the decoherence approach the wave function does not collapse – there are merely some systems that are almost-classical in the sense that there is minimal interference between the different parts of their wave function.
Of course, we can always say “everything is conscious” but this doesn’t really solve anything – even if everything is conscious, some things are more conscious than others and the problem of consciousness then is pushed into defining what it means for one thing to have a higher degree of consciousness than another.
The analogue of “consciousness is the collapse of the wave function” in the decoherence approach would seem to be “consciousness is the process of decoherence.” I propose that this is actually correct in a fairly strong sense, although not for an entirely obvious reason.
Firstly, I suggest that we view consciousness as “the process of observing.” Now, “observation,” of course, is a psychological and subjective concept, but it also has a physical correlate. I suggest the following characterization of the physical substrate of observation: Subjective acts of observation physically correspond to events involving the registration of something in a memory from which that thing can later be retrieved.
It immediately follows from this that observation necessarily requires an effectively-classical system that involves decoherence: a stable, retrievable memory record is exactly the kind of robust, redundantly-registered information that only decohered, effectively-classical degrees of freedom can carry.
But what is not so obvious is that all decoherence involves an act of observation, in the above sense. This is because, as soon as a process decoheres, the record of this process becomes immanent in the perturbations of various particles all around it – so that, in principle, one could reconstruct the process from all this data, even though this may be totally impractical to do. Therefore every event of decoherence counts as an observation, since it counts as a registration of a memory that can (in principle) be retrieved.
Most events of decoherence correspond to registration in the memory of some fairly wide and not easily delineated subset of the universe. On the other hand, some events of decoherence are probabilistically concentrated in one small subset of the universe – for example, in the memory of some intelligent system. When a human brain observes a picture, the exact record of the picture cannot be reconstructed solely from the information in that brain – but a decent approximation can be. We may say that an event of registration is approximately localized in some system if the information required to reconstruct the event in an approximate way is contained in that system. In this sense we may say that many events of consciousness are approximately localized in particular systems (e.g. brains), though in an exact sense they are all spread more widely throughout the universe.
So, just as the Copenhagen-interpretation notion of “wave function collapse” turns out to be a crude approximation of reality, so does the notion of “wave function collapse as consciousness.” But just as decoherence conceptually approximates wave function collapse, so the notion of “decoherence as registration of events in memory, as consciousness” conceptually approximates “wave function collapse as consciousness.”
How is this insight reflected in the language of patterns (the theme of my philosophy book – “everything is pattern”)? If a system registers a memory of some event, then in many cases the memory within this system is a pattern in that event, because the system provides data that allows one to reconstruct that event. But the extent to which a pattern is present depends on a number of factors: how simple the representation within the system is, how difficult the retrieval process is, and how closely the retrieved entity approximates the original entity. What we can say is that, according to this definition, the recognition of a pattern is always an act of consciousness. From a physics point of view, though, not all acts of consciousness need to correspond to recognitions of patterns. On the other hand, if one takes a philosophical perspective in which pattern is primary (the universe consists of patterns) then it makes sense to define pattern-recognition as identical to consciousness (???)
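Just as a toy illustration of how those three factors might be scored numerically (a rough stand-in I'm improvising for this entry, not the formal definition from the book):

```python
def pattern_intensity(event_size, repr_size, retrieval_cost, accuracy):
    """Toy 'pattern intensity' score combining the three factors above:
    simplicity of the stored representation, difficulty of retrieval,
    and fidelity of the reconstruction.

    event_size, repr_size, retrieval_cost: e.g. bits or compute steps.
    accuracy: fraction of the event recovered, in [0, 1].
    Returns a number in (0, 1]; higher means 'more of a pattern'.
    Illustrative stand-in only, not the book's actual formula.
    """
    compression = event_size / (repr_size + retrieval_cost + 1e-9)
    return accuracy * min(1.0, compression / (1.0 + compression))

# A brain's memory of a picture: far smaller than the raw scene, fairly
# cheap to recall, and only approximately faithful.
print(pattern_intensity(event_size=1e9, repr_size=1e5,
                        retrieval_cost=1e4, accuracy=0.3))

# Scattered perturbed particles 'recording' the same scene: retrievable
# in principle, but the retrieval cost is enormous, so the intensity
# is negligible.
print(pattern_intensity(event_size=1e9, repr_size=1e9,
                        retrieval_cost=1e15, accuracy=0.99))
```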
Of course, none of this forms a solution to the "hard problem of consciousness," which may be phrased as something like "how does the feeling of conscious experience connect with physical structures and dynamics?" This is a philosophically subtler issue, and you'll have to wait for "The Hidden Pattern" to read my views on it these days (which are different from anything I've published before). But an understanding of the physical correlates of consciousness is a worthwhile thing in itself, as well as a prerequisite to an intelligent discussion of the “hard problem.”
What do you think?
Too many stupid professors and bureaucrats...
This is another one, with a slightly different slant.
At the end of the whining, however, I'll include an actual constructive suggestion for how to make some aspects of the academic world better. (Not that I expect my suggestion to have any actual impact!)
As I mentioned before, I've been making a push to submit papers and books for publication recently; something I haven't done much of since leaving academia in the late 90's. It's been quite an experience!
At first I thought I was doing something badly wrong. I have had some publications accepted but my rejection rate has been higher than I thought -- and not because what I'm submitting is bad (really!), mostly just (egads! can you believe it!) because it's unorthodox.
Of course, I'm revising and resubmitting and everything will be published in time. But the process has been educational as well as frustrating. And I've become aware that others whose work is even less radical than mine have been having an even more annoying time with this sort of thing.
I recently got two emails from friends reporting similar experiences to my own.
One is a biologist who recently left a major university for industry and has worked out a truly radical technique for repairing some types of DNA damage. This technique has now been demonstrated in live cells as well as in the test tube. Amazing stuff, with potential to cure some degenerative diseases as well as to slow human aging.
His paper? Rejected without review six times so far. WITHOUT REVIEW each time !!!
Another is an MD who has found particular patterns of DNA mutations that correspond to a couple very well known diseases. But -- oops -- these patterns are more complex than the ones biologists are used to looking at, and they occur in parts of the genome that biologists don't normally like to look at. So, no matter how statistically significant the results, he's got an uphill battle to fight. He's fighting against convention and presupposition. The result: right after he gets some breakthrough results, his government grant funding is cut off.
Compared to the late 80's and early 90's, it seems much more common now to have things rejected without review. At least, this seems to be happening to me moderately often lately (though not a majority of the time), whereas back then I don't remember it ever happening.
A draft of my book on the Novamente design for general intelligence (not fully polished -- that's still in progress) was rejected by a publisher recently -- the rejection didn't surprise me, but the nature of the rejection did. The book wasn't even sent to a reviewer -- instead the editor just sent back a letter saying that their book series was intended for "serious academic works."
I had a bit of an email conversation with the editor, which revealed that he had shown the book to a "very distinguished AI professor" who had commented that due to the broad scope of the book and its claims to address general intelligence, it couldn't be a very serious academic work. Heh. Well, my ideas might be WRONG, but they're certainly as serious as those in a lot of other published books. And while my book doesn't contain many mathematical proofs and only a handful of experimental results, it has more of both than Minsky's Society of Mind -- which also addresses general intelligence (or tries to) -- but wait, Minsky is old and famous, he's allowed to address big topics.... What we want to avoid is young people addressing big and interesting topics, right? But wait, why?
Please understand the nature of my complaint: I'm not pissed because this publisher rejected my book, I'm pissed because it was rejected without being read or even seriously skimmed over. And note that I've had six academic books published before, so it should be obvious to the publisher (who had my résumé) that I'm not a complete raving crackpot.
I had the same experience with a couple bioinformatics papers I recently submitted -- which were nowhere near as eccentric as my book on Novamente, but presented algorithms and approaches radically different from what's typical in the bioinformatics field. Not just rejected -- rejected WITHOUT REVIEW.
Of course, I also had some bioinformatics papers rejected after being reviewed, but by reviewers who plainly understood nothing in the paper. Of course, I could have tried to explain my methods more didactically -- but then the papers would have been rejected for being too long! Tricky, tricky....
Yes, I have had some papers accepted this year, and I have a couple books (a futurist manifesto of sorts, and an edited volume on AGI) coming out in an academic press later this year. So these are not the whinings of a complete academic failure ;-p
I've been through enough of this crap before to realize that, after enough resubmissions, eventually one's books or papers hit a publisher or journal who sends them to intelligent and open-minded reviewers who actually read the materials they're given and either understand them or admit they don't (so the editor can find someone else who does). Eventually. But it's a long and annoying search process.
The academic community does reward innovators -- sometimes, eventually.... But more often than not it places huge obstacles in the way of innovation, via a publication process that makes it much easier to publish variations on orthodox ideas than unusual approaches. One might argue that this kind of extremely strong bias is necessary to filter out all the crap in the world. But I don't believe it. Major changes to the reviewing process are in order.
Collaborative filtering technology would seem to provide a fairly easy answer. Suppose one assumes, as a basis, that individuals with PhD's (or MD's or other similar degrees) are, on the whole, reasonably valid raters of academic content. Then one can give each PhD a certain number of rating points to allocate each year, and let them use these points to rate each other's work. People can then post their work online in resources like arxiv.org, and the ratings can be used to guide individuals to the most important or interesting works.
Journals aren't really needed, since the Net and computer printers are so widespread; book publishers may still exist, but they'll be able to assume that if a book manuscript has received a reasonable number of rating points in its online version, then it's probably worth publishing.
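To make the bookkeeping concrete, here's a minimal sketch of how such a point-budget scheme might work -- the 100-point yearly budget and the rater and work names are purely illustrative:

```python
from collections import defaultdict

YEARLY_BUDGET = 100  # hypothetical: points each PhD may allocate per year

class RatingPool:
    """Toy collaborative-filtering pool: credentialed raters spend a
    fixed yearly budget of points on posted works (e.g. arxiv.org
    preprints); works are then ranked by total points received."""

    def __init__(self):
        self.spent = defaultdict(int)    # rater -> points used this year
        self.scores = defaultdict(int)   # work id -> total points

    def rate(self, rater, work_id, points):
        if points <= 0:
            raise ValueError("points must be positive")
        if self.spent[rater] + points > YEARLY_BUDGET:
            raise ValueError(f"{rater} has exhausted this year's budget")
        self.spent[rater] += points
        self.scores[work_id] += points

    def ranking(self):
        return sorted(self.scores.items(), key=lambda kv: -kv[1])

pool = RatingPool()
pool.rate("dr_a", "arxiv:preprint-1", 30)
pool.rate("dr_b", "arxiv:preprint-1", 10)
pool.rate("dr_a", "arxiv:preprint-2", 5)
print(pool.ranking())  # [('arxiv:preprint-1', 40), ('arxiv:preprint-2', 5)]
```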
You can argue that citations play a similar role -- but citations only play a role after a work is published, they don't help with the irritation of getting innovative ideas past conservative referees in the first place.
Anyway I don't have time to work toward implementing an idea like this, so I'll just keep working within the existing, annoying system, unless I manage to gather enough money for my research from business profits or private investments or donations that I don't need to worry about the often-absurd publication game.
Urrrghh!! I can easily see how, facing this kind of crap, young scientists and philosophers give up on trying to think wild and novel thoughts and follow along with everyone else.
Following along certainly would create a lot less hassle.
Or else giving up on the game of seeking reputation and simply wandering around in the woods like Zarathustra (Nietzsche's, not my son; my son Zar only wanders around these days in the simulated woods inside World of Warcraft!) and keeping one's thoughts to oneself (and then foolishly emerging to preach them to the world after a couple decades, only to find that no one understands what the HELL you're talking about...)
Humanity -- gotta love it...
Or -- hmm -- do you ???
Ah well...
Friday, October 07, 2005
Immortality and the Potential Obsolescence of the Self
I recently co-founded a group called the DC Future Salon that meets once a month in Bethesda, Maryland, to discuss futurist issues (if you live near DC and want to join, join the dcfuture group on yahoogroups). This week our salon meeting focused on the notion of immortality. After a nice lecture and movie showing by Immortality Institute founder (and DC Future Salon co-organizer) Bruce Klein, the discussion traveled through various topics, including the viability of cryonics and the politics of discussing immortality among nontranshumanists – and finally, moved on to more philosophical issues, such as the reasons why immortality is desirable. One of the key issues that came up here is the extent to which the individual self, the personal identity – the thing most transhumanists want most to preserve via immortality, much more so than our physical bodies – is actually a real thing worth preserving. Preserving the physical body is, like uploading, just one means to preserving the self. But what is this “self” that’s so valuable to persist throughout time?
There is a lot of neuropsychological research showing that the “self” is in a strong sense an illusion – much like its sister illusion, “free will.” Thomas Metzinger’s recent book Being No One makes this point in an excellently detailed way. The human mind’s image of itself – what Metzinger calls the “phenomenal self” – is in fact a construct that the human mind creates in order to better understand and control itself; it’s not a “real thing.” Various neuropsychological disorders may lead to bizarre dysfunctions in self-image and self-understanding. And there are valid reasons to speculate that a superhuman mind – be it an AI or a human with tremendously augmented intelligence – might not possess this same illusion. Rather than needing to construct for itself a story of a unified “self entity” controlling it, a more intelligent and introspective mind might simply perceive itself as the largely heterogeneous collection of patterns and subsystems that it is. In this sense, individuality might not survive the transcendence of minds beyond the human condition.
The key philosophical point here is: What is the goal of immortality? Or, to put it more precisely: What is the goal of avoiding involuntary death? Is it to keep human life as we know it around forever? That is a valid and non-idiotic goal. Or is it to keep the process of growth alive and flourishing beyond the scope painfully and arbitrarily imposed on it by the end of the human lifespan?
Human life as it exists now is not a constant, it's an ongoing growth process; and for those who want it to be, human life beyond the current maximum lifespan and beyond the traditional scope of humanity will still be a process of growth, change and learning. Fear of death will largely be replaced by more interesting issues like the merit of individuality in its various forms -- and other issues we can't come close to foreseeing yet.
It may be that, when we live long enough and become smart enough, what we find out is that maintaining individuality unto eternity isn't interesting, and it's better to merge into a larger posthuman intelligent dynamical-pattern-system. Or it may be that what we find out is that individuality still seems interesting forever, since there are so many resources available at the posthuman stage, and diversity still seems like an interesting value (plenty of room for both humans and transhuman intelligent dynamical pattern systems!).
The quest for radical life extension is largely about staying around to find out about things like this!
And there is, of course, a familiar and acute irony in observing that -- while these (along with the scientific puzzles of human biology, uploading and so forth) are the interesting issues regarding immortality -- the public discourse on immortality will be focusing on much less fascinating aspects for quite some time to come: aspects like whether living forever is a violation of the will of the divine superbeing who created us all 6000 years ago....
Friday, July 22, 2005
P.S. on objective/subjective reality and consciousness (and future virtual Elvises)
I've written up a set of notes on this, posted at
http://www.goertzel.org/new_essays/QualiaNotes.htm
But it's still rough and informal and speculative in the manner of a blog entry, rather than being a really polished essay.
Of course, I have plenty more to say on the topic than what I wrote down there, but -- well -- the usual dilemma ... too many thoughts, too little time to write them all down... I need to prioritize. Entertaining, speculative philosophy only gets a certain fraction of my time these days!
BTW, I wrote about 1/3 of those notes while watching "Jailhouse Rock" with the kids, but I don't know if Elvis's undulating pelvis had any effect on the style or contents of the essay or not. (Wow -- the Elvis phenomenon really makes piquant the whole transhumanist dilemma of "Is humanity really worth preserving past the Singularity or not?"!! ... A decent helping of art, beauty and humor exists there in Elvis-land, sure -- but along with such a whopping dose of pure and unrefined asininity --- whoa.... )
How many of you readers out there agree that the first superhuman AI should be programmed to speak to humans through a simulation of Elvis's face??? ;-D
Tuesday, July 19, 2005
Objective versus subjective reality: Which is primary?
One of my motivations for venturing into this topic is: I've realized that it's wisest to clearly discuss the issue of reality before entering into issues of consciousness and will. Very often, when I try to discuss my theory of consciousness with people, the discussion falls apart because the people I'm talking to want to assume that objective reality is primary, or else that subjective experiential reality is primary. Whereas, to me, a prerequisite for intelligently discussing consciousness is the recognition that neither of these two perspectives on being is primary -- each has their own validity, and each gives rise to the other in a certain sense.
OK, so ... without further ado... : There are two different ways to look at the world, both of which are to some degree sympathetic to me.
One way is to take the objective world, as described by science and society, as primary, and to look at the subjective worlds of individuals as approximations to objective reality, produced by individual physical systems embedded within physical reality.
Another way is to view the subjective, experiential world of the individual (mine, or yours) as primary, and look at "objective reality" as a cognitive crutch that the experiencing mind creates in order to make use of its own experience.
I think both of these views are valid and interesting ones -- they each serve valuable purposes. They don't contradict each other, because the universe supports "circular containment": it's fine to say "objective reality contains subjective reality, and subjective reality contains objective reality." The theory of non-well-founded sets shows that this kind of circularity is perfectly consistent in terms of logic and mathematics. (Barwise and Etchemendy's book "The Liar" gives a very nice exposition of this kind of set theory for the semi-technical reader. I also said a lot about this kind of mathematics in my 1994 book Chaotic Logic, see a messy rough draft version of the relevant chapter here ... (alas, I long ago lost the files containing the final versions of my books!!))
But it's also interesting to ask if either of the two types of world is properly viewed as primary. I'll present here an argument that it may make sense to view either subjective or objective reality as primary, depending on the level of detail with which one is trying to understand things.
My basic line of argument is as follows. Suppose we have two entities A and B, either of which can be derived from the other -- but it's a lot easier to derive B from A than to derive A from B. Then, using the principle of Occam's Razor, we may say that the derivation of B from A is preferable, is more fundamental. (For those not in the know, Occam's Razor -- the maxim of preferring the simplest explanation, from among the pool of reasonably correct ones -- is not just a pretty little heuristic, but is very close to the core of intelligent thought. For two very different, recent explorations of this theme, see Marcus Hutter's mathematical theory of general intelligence; and Eric Baum's book What is Thought (much of which I radically disagree with, but his discussion of the role of Occam's Razor in cognition is quite good, even though he for some reason doesn't cite Ray Solomonoff who conceived the Occam-cognition connection back in the 1960's)).
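To put the criterion a bit more formally, in the description-length language of the Solomonoff/Hutter tradition just mentioned (this is my paraphrase, not a formula from either of them):

```latex
% Prefer "A is primary" when specifying A outright and deriving B from it
% is cheaper, in description-length terms, than the reverse:
\[
  L(A) + L(B \mid A) \;<\; L(B) + L(A \mid B)
\]
% where L(X) is the length of the shortest description of X, and
% L(Y \mid X) is the length of the shortest derivation of Y given X.
```

In other words: whichever world is cheaper to specify outright, with the other derived from it, gets treated as the more fundamental one -- at whatever level of detail one is currently working.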
I will argue here that it's much easier to derive the existence of objective reality from the assumption of subjective reality, than vice versa. In this sense, I believe, it's sensible to say that the grounding of objective reality in subjective reality is primary, rather than vice versa.
On the other hand, it seems that it's probably easier to derive the details of subjective reality from the details of objective reality than vice versa. In this sense, when operating at a high level of precision, it may be sensible to say that the grounding of subjective reality in objective reality is primary, rather than vice versa.
Suppose one begins by assuming "subjective reality" exists -- the experienced world of oneself, the sensations and thoughts and images and so forth that appear in one's mind and one's perceived world. How can we derive from this subjective reality any notion of "objective reality"?
Philip K. Dick defined objective reality as "that which doesn't go away even when you stop believing in it." This is a nice definition but I don't think it quite gets to the bottom of the matter.
Consider the example of a mirage in the desert -- a lake of water that appears in the distance, but when you walk to its apparent location, all you find is sand. This is a good example of how "objective reality" arises within subjective reality.
There is a rule, learned through experience, that large bodies of water rarely just suddenly disappear. But then, putting the perceived image of a large body of water together with the fact that large bodies of water rarely disappear, and the fact that when this particular large body of water was approached it was no longer there -- something's gotta give.
There are at least two hypotheses one can make to explain away this contradiction:
1. one could decide that deserts are populated by a particular type of lake that disappears when you come near it, or
2. one can decide that what one sees from a distance need not agree with what one sees and otherwise senses from close up.
The latter conclusion turns out to be a much more useful one, because it explains a lot of phenomena besides mirage lakes. Occam's Razor pushes toward the second conclusion, because it gives a simple explanation of many different things, whereas explanations of form 1 are a lot less elegant: according to that explanatory style, each phenomenon where different sorts of perception disagree with each other requires positing a whole new class of peculiarly-behaving entity.
Note that nothing in the mirage lake or other similar experiences causes one to doubt the veracity of one's experiences.
Each experience is valid unto itself. However, the mind generalizes from experiences, and takes particular sensations and cognitions to be elements of more general categories. For instance, it takes a particular arrangement of colors to be a momentary image of a "lake", and it takes the momentary image of a lake to be a snapshot of a persistent object called a "lake." These generalizations/categorizations are largely learned via experience, because they're statistically valid and useful for achieving subjectively important goals.
From this kind of experience, one learns that, when having a subjective experience, it's intelligent to ask: "What will my future subjective experiences say about the general categories I'm building based on this particular experience -- if I experience the same categories (e.g. the lake) through different senses, or from different positions, etc.?" And as soon as one starts asking questions like that -- there's "objective reality."
That's really all one needs in order to derive objective reality from subjective reality. One doesn't need to invoke a society of minds comparing their subjective worlds, nor any kind of rigorous scientific world-view. One merely needs to posit generalization beyond individual experiences to patterns representing categories of experience, and an Occam's Razor heuristic.
In the mind of the human infant, this kind of reasoning is undertaken pretty early on -- within the first six months of life.
It leads to what developmental psychologists call "object permanence" -- the recognition that, when a hand passes behind a piece of furniture and then reappears on the other side, it still existed during the interim period when it was behind the furniture. "Existed" here means, roughly, "The most compact and accurate model of my experiences implies that if I were in a different position, I would be able to see or otherwise detect the hand while it was behind the furniture, even though in actual fact I can't see or detect it there from my current position." This is analogous to what it means to believe the mirage-lake doesn't exist: "The most compact and accurate model of my experiences implies that if I were standing right where that lake appears to be, I wouldn't be wet!"
Notice from these examples how counterfactuality is critical to the emergence of objective from subjective reality. If the mind just sticks to exactly what it experiences, it will never evolve the notion of objective reality. Instead, the mind needs to be able to think "What would I experience if...." This kind of basic counterfactuality leads fairly quickly to the notion of objective reality.
On the other hand, what does one need in order to derive subjective reality from objective reality? This is a lot trickier!
Given objective reality as described by modern science, one can build up a theory of particles, atoms, molecules, chemical compounds, cells, organs (like brains) and organisms -- and then one can talk about how brains embodied in bodies embedded in societies give rise to individual subjective realities. But this is a much longer and more complicated story than the emergence of objective reality from subjective reality.
Occam's-razor-wise, then, "objective reality emerges from subjective reality" is a much simpler story than the reverse.
But of course, this analysis only scratches the surface. The simple, developmental-psychology approach I've described above doesn't explain the details of objective reality -- it doesn't explain why there are the particular elementary particles and force constants there are, for example. It just explains why objective reality should exist at all.
And this point gives rise to an interesting asymmetry. While it's easier to explain the existence of objective reality based on subjective reality than vice versa, it seems like it's probably easier to explain the details of subjective reality based on objective reality than vice versa. Of course, this is largely speculative, since right now we don't know how to do either -- we can't explain particle physics based on subjectivist developmental psychology, but nor can we explain the nature of conscious experience based on brain function. However, my intuition is that the latter is an easier task, and will be achieved sooner.
So we then arrive at the conclusion that:
- At a coarse level of precision, "subjectivity spawns objectivity" is a simpler story than vice versa
- At a higher level of precision, "objectivity spawns subjectivity" is a simpler story than vice versa
So, which direction of creation is more fundamental depends on how much detail one is looking for!
This is not really such a deep point -- but it's a point that seems to elude most philosophers, who seem to be stuck either in an "objective reality is primary" or "subjective reality is primary" world-view. It seems to me that recognizing the mutual generation of these two sorts of reality is prerequisite for seriously discussing a whole host of issues, including consciousness and free will. In my prior writings on consciousness and will I have taken for granted this kind of mutual-generationist approach to subjectivity/objectivity, but I haven't laid it out explicitly enough.
All these issues will be dealt with in my philosophy-of-mind book "The Hidden Pattern", which I expect to complete mid-fall. I wish I had more time to work on it: this sort of thinking is really a lot of fun. And I think it's also scientifically valuable -- because, for example, I think one of the main reasons the field of AI has made so little progress is that the leading schools of thought in academic and industrial AI all fall prey to fairly basic errors in the philosophy of mind (such as misunderstanding the relation between objective and subjective reality). The correct philosophy of mind is fairly simple, in my view -- but the errors people have made have been quite complicated in some cases! But that's a topic for future blog entries, books, conversations, primal screams, whatever....
More later ... it's 2AM and a warm bed beckons ... with a warm wife in it ;-> ... (hmm -- why this sudden emphasis on warmth? I think someone must have jacked the air conditioning up way too high!!)
Monday, July 18, 2005
The massive suckage of writing academic research papers / the ontology of time / White Sands
This is a pretty boring blog entry, I'm afraid: just a long rant about how annoying academic research can be. But I got irritated enough to write this stuff down, so I guess I may as well post it....
I've been working on an academic paper together with my former Webmind colleague Pei Wang, on the topic of "why inference theories should represent truth values using two numbers rather than one." For instance, the inference component of my Novamente AI system represents the truth values of statements using a probability and a "weight of evidence" (which measures, roughly, the number of observations on which the probability is based). Pei's NARS reasoning system uses two-component truth values with a slightly different interpretation.
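For readers who haven't seen this sort of representation, here's a toy sketch of a two-component truth value; the revision rule is just the obvious evidence-pooling one, not the actual Novamente (PLN) or NARS formula, and the constant K is a made-up parameter:

```python
from dataclasses import dataclass

K = 10.0  # hypothetical constant: roughly, how many observations it takes
          # before a probability estimate starts to be taken seriously

@dataclass
class TruthValue:
    """Two-number truth value: a probability plus the amount of
    evidence behind it (roughly, a count of observations)."""
    probability: float
    count: float

    @property
    def weight_of_evidence(self):
        # Maps the raw count into [0, 1): more observations -> closer to 1.
        return self.count / (self.count + K)

    def revise(self, other):
        """Pool two independent bodies of evidence about the same
        statement: counts add, and probabilities combine in proportion
        to the evidence behind them (a toy rule only)."""
        total = self.count + other.count
        p = (self.probability * self.count +
             other.probability * other.count) / total
        return TruthValue(p, total)

a = TruthValue(probability=0.9, count=3)     # few observations
b = TruthValue(probability=0.6, count=300)   # many observations
print(a.revise(b))  # dominated by the better-supported estimate
```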
Now, this is a perfectly decent paper we've written (it was just today submitted for publication), but, what strikes me is how much pomp, circumstance and apparatus academia requires in order to frame even a very small and simple point. References to everything in the literature ever said on any vaguely related topic, detailed comparisons of your work to whatever it is the average journal referee is likely to find important -- blah, blah, blah, blah, blah.... A point that I would more naturally get across in five pages of clear and simple text winds up being a thirty page paper!
I'm writing some books describing the Novamente AI system -- one of them, 600 pages of text, was just submitted to a publisher. The other two, about 300 and 200 pages respectively, should be submitted later this year. Writing these books took a really long time but they are only semi-technical books, and they don't follow all the rules of academic writing -- for instance, the whole 600 page book has a reference list no longer than I've seen on many 50-page academic papers, which is because I only referenced the works I actually used in writing the book, rather than every relevant book or paper ever written. I estimate that to turn these books into academic papers would require me to write about 60 papers. To sculpt a paper out of text from the book would probably take me 2-7 days of writing work, depending on the particular case. So it would be at least a full year of work, probably two full years of work, to write publishable academic papers on the material in these books!
For another example, this week I've been reading a book called "The Ontology of Time" by L. Nathan Oaklander. It's a pretty interesting book, in terms of the contents, but the mode of discourse is that of academic philosophy, which is very frustrating to me. It's a far cry from Nietzsche or Schopenhauer style prose -- academic philosophy takes "pedantic" to new heights.... The book makes some good points: it discusses the debate between philosophers promoting the "A-theory of time" (which holds that time passes) and the "B-theory of time" (which holds that there are only discrete moments, and that the passage of time is an illusion). Oaklander advocates the B-theory of time, and spends a lot of space defending the B-theory against arguments by A-theorists that are based on linguistic usage: A-theorists point out that we use a lot of language that implies time passes, in fact this assumption is embedded in the tense system of most human languages. Oaklander argues that, although it's convenient to make the false assumption that time passes for communicative purposes, nevertheless if one is willing to spend a lot of time and effort, one can reduce any statement about time passing to a large set of statements about individual events at individual moments.
Now, clearly, Oaklander is right on this point, and in fact my Novamente AI design implicitly assumes the B-theory of time, by storing temporal information in terms of discrete moments and relations of simultaneity and precedence between them, and grounding linguistic statements about time in terms of relationships between events occurring at particular moments (which may be concrete moments or moments represented by quantified mathematical variables).
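A cartoon version of that kind of representation (illustrative only -- these are not Novamente's actual data structures): events are discrete nodes, and all temporal knowledge lives in precedence and simultaneity relations between them.

```python
class EventGraph:
    """Toy B-theory-style store: no 'flowing time', just discrete events
    plus 'before' and 'simultaneous' relations between them."""

    def __init__(self):
        self.before = set()        # (earlier, later) pairs
        self.simultaneous = set()  # unordered pairs, stored sorted

    def add_before(self, earlier, later):
        self.before.add((earlier, later))

    def add_simultaneous(self, a, b):
        self.simultaneous.add(tuple(sorted((a, b))))

    def precedes(self, a, b, seen=None):
        """Transitive check: is there a chain of 'before' links from a to b?"""
        seen = seen if seen is not None else set()
        if (a, b) in self.before:
            return True
        for (x, y) in self.before:
            if x == a and y not in seen:
                seen.add(y)
                if self.precedes(y, b, seen):
                    return True
        return False

# A tensed sentence like "the rain had stopped before we left" is grounded
# as a relation between two tenseless events:
g = EventGraph()
g.add_before("rain_stops", "we_leave")
g.add_before("we_leave", "we_arrive")
print(g.precedes("rain_stops", "we_arrive"))  # True
```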
There are also deep connections between the B-theory and Buddhist metaphysics, which holds that time is an illusion and only moments exist, woven together into apparent continua by the illusion-generating faculty of the mind. And of course there are connections with quantum physics: Julian Barbour in "The End of Time" has argued ably that in modern physics there is no room for the notion of time passing. All moments simply exist, possessing a reality that in a sense is truly timeless -- but we see only certain moments, and we feel time moving in a certain direction, because of the way we are physically and psychologically constructed.
But Oaklander doesn't get to the connections with Buddhism and quantum theory, because he spends all his time pedantically arguing for fairly simple conceptual points with amazing amounts of detail. The papers in the book go back 20 years, and recount ongoing petty arguments between him and his fellow B-theorists on the one hand, and the A-theorists on the other. Like I said, it's not that no progress has been made -- I think Oaklander's views on time are basically right. What irritates me is the painfully slow rate at which these very smart philosophers have proceeded. I attribute their slow rate of progress not to any cognitive deficits on their part, but to the culture and methodology of modern academia.
Obviously, Nietzsche would be an outcast in modern academia -- casting his books in the form of journal papers would really be a heck of a task!
And what if the scientists involved in the Manhattan Project had been forced to write up their incremental progress every step of the way, and fight with journal referees and comb the literature for references? There's no way they would have made the massively rapid progress they did....
And the problem is not restricted to philosophy, of course -- "hard" science has its own issues. In computer science most research results are published at least twice: once in a conference proceedings and once in a journal article. What a waste of the researcher's time, to write the same shit up twice ... but if you don't do it, your status will suffer and you'll lose your research grants, because others will have more publications than you!
Furthermore, if as a computer scientist you develop a new algorithm intended to solve real problems that you have identified as important for some purpose (say, AI), you will probably have trouble publishing this algorithm unless you spend time comparing it to other algorithms in terms of its performance on very easy "toy problems" that other researchers have used in their papers. Never mind if the performance of an algorithm on toy problems bears no resemblance to its performance on real problems. Solving a unique problem that no one has thought of before is much less impressive to academic referees than getting a 2% better solution to some standard "toy problem." As a result, the whole computer science literature (and the academic AI literature in particular) is full of algorithms that are entirely useless except for their good performance on the simple "toy" test problems that are popular with journal referees....
Research universities are supposed to be our society's way of devoting resources to advancing knowledge. But they are locked into a methodology that makes knowledge advance awfully damn slowly....
And so, those of us who want to advance knowledge rapidly are stuck in a bind. Either generate new knowledge quickly and don't bother to ram it through the publication mill ... or, generate new knowledge at the rate that's acceptable in academia, and spend half your time wording things politically and looking up references and doing comparative analyses rather than doing truly productive creative research. Obviously, the former approach is a lot more fun -- but it shuts you out from getting government research grants. The only way to get government research money is to move really slowly -- or else to start out with a lot of money so you can hire people to do all the paper-writing and testing-on-toy-problems for you....
Arrrgh! Anyway, I'm compromising, and wasting some of my time writing a small fragment of my research up for academic journal publication, just to be sure that Novamente AI is "taken seriously" (or as seriously as a grand AGI project can possibly be taken by the conservative-minded world we live in).... What a pain.
If society valued AGI as much as it valued nuclear weapons during World War II, we'd probably have superhuman AI already. I'm serious. Instead, those of us concerned with creating AGI have to waste our time carrying out meaningless acts like writing academic papers describing information already adequately described in semi-formal documents, just to be taken seriously enough to ask for research money and have a nonzero chance of getting it. Arrggh!
OK, I promise, the next blog entry won't be as boring as this, and won't be a complaint, either. I've actually been enjoying myself a lot lately -- Izabela and I had a great vacation to New Mexico, where we did a lot of hiking, including the very steep and very beautiful Chimney Canyon route down Mount Sandia, which I'd always wanted to do when I lived in New Mexico, but never gotten around to. Also, we camped out on the dunes in White Sands National Monument, which is perhaps the most beautiful physical location I know of. I can't think of anywhere more hallucinogenic -- psychedelic drugs would definitely enhance the experience, but even without them, the landscape is surprisingly trippy, giving the sensation of being in a completely different universe from the regular one, and blurring the distinction between inside and out....
Most of the time wandering around in White Sands was spent in conversation about the subtleties of the interrelationship between free will and consciousness -- interesting and perhaps valuable ideas that I haven't found time to write down yet, because all my writing-time these last couple weeks has been spent putting already-well-understood ideas into the form of academic papers ;-ppp White Sands is exactly the right place to mull over the structure of your mind, since the landscape itself projects you involuntarily into a kind of semi-meditative state....
Hmmm... maybe I'll write down those ideas about free will and consciousness in the next blog entry. It's tempting to write that stuff now -- but it's 1:25 AM, I think I'll go to sleep instead. Tomorrow, alas, is another day... (I tried to make all the days run into each other by taking Modafinil to eliminate my need for sleep -- but it just wound up upsetting my stomach too much, so I've had to go back to sleeping again: bummer!!)