Sunday, April 05, 2015

AI-Based Trading is More Likely to Decrease than Increase Problems in the Markets

Futurist Thomas Frey has written an article suggesting that AI-based financial trading is a threat and is likely to cause a series of horrible market crashes....

This is a topic I've thought about a bit, as co-founder and Chief Scientist of Aidyia Limited, an AI-based asset management firm that will be launching a series of funds, beginning with a US equity long-short fund in a couple of months.  Note that this is not a high-frequency approach -- our average holding period will be weeks to months, not microseconds.

So here are a few quasi-random musings on the topic....

Overall -- it seems clear to me that within a decade or two the financial markets will be entirely dominated by AIs of one form or another. Human minds are simply not well configured for the sorts of problems involved in asset price prediction in such a complexly interlinked world as we have today.

However, I see no reason why AI-based trading would lead to worse crashes. Generally, when one creates an AI-based trading system, one does so with a certain mandate in mind, including a certain risk/return profile. IMO a well-designed AI is more likely to operate within its intended risk/return profile than a human trader is.

Many of the trading disasters commonly attributed to quantitative methods are ultimately the result of plain old bad human judgment. For instance, the Long-Term Capital Management collapse in the late 1990s did involve the use of advanced quantitative models -- but ultimately the core of that problem was the use of leverage up to 100x, a choice made by the humans running the system, not by the equations themselves. Common sense would tell you that trading with 100x leverage is pretty risky no matter what equations you're using. Having AI inside a trading system is not a total protection against the stupidity -- or emotional pathology -- of the humans trading that system.

The 2010 flash crash was apparently due mainly to automated systems, but probably not AI-based systems. Most HFT systems have minimal AI in them -- they're based on reacting super-quickly, not super-smartly. The use of HFT shouldn't be conflated with the use of AI. HFT could be pretty much eliminated from any market by imposing a per-transaction tax like we have here in Hong Kong; but this wouldn't get rid of AI. Our AI predictors at Aidyia are currently being used to predict asset price movements 20 days in advance, not microseconds in advance.

But anyway...

As I've written previously in various places, I personally think the whole world financial and economic system is going to transform into something utterly different, once robots and AIs eliminate the need for (and relative value of) human effort in most domains of practical endeavor. So I view these issues with AIs and asset management as "transitional", in a sense. But that doesn't make them unimportant, obviously -- for the period between now and Singularity, they will be relevant.

I worry more about the ongoing increase of income and wealth inequality in nearly every nation, than about the impact of AI on the markets. Computers are already dominant on the markets, AIs will soon be dominant, but as long as the AIs are operating funds owned and controlled by humans, this doesn't really affect the nature of the financial system.  But part of this financial system is increasing wealth concentration -- and I worry that increasing inequality, combined with a situation where robots and AIs ultimately liberate people from their jobs, could eventually lead to a difficult situation.  I believe we ultimately will need some kind of guaranteed minimum income across the planet, the only alternatives being mass warfare or mass dying-off.  But I worry that the worse class divisions get, the harder this guaranteed-income solution will be to put in place, because the folks holding the remnants of human political and economic power will become more and more alienated from the average people.

So I do think there are lots of tricky worries in the medium-term future, regarding the relation between human society and (basically inexorably) advancing AI. But AI-based traders aren't really something to fuss about, IMO.  I think that getting messy human emotion out of the mechanics of trading is more likely to decrease the odds of catastrophic crashes than to increase them....  If you want to look out for dangers associated with the advent of AI-based trading, I'd suggest keeping an eye out for more LTCM-like situations, where humans make egregiously bad emotional judgments in managing their AI prediction systems.  The AIs themselves are not likely to be the source of irrationality and chaos in the markets.

Friday, April 03, 2015

Easy as 1-2-3: Obsoleting the Hard Problem of Consciousness via Brain-Computer Interfacing and Second-Person Science

NOTE ADDED A FEW DAYS AFTER INITIAL POSTING: The subtitle of this post used to be "Solving the Hard Problem of Consciousness via Brain-Computer Interfacing and Second-Person Science" -- but after reading the comments to the post I changed the first word of the subtitle to "Obsoleting" instead.  I made this change because I realized my initial hope that second-person-experience-enabling technology would "solve" the "hard problem of consciousness" was pretty idealistic.   It might solve the "hard problem" to me, but everyone has their own interpretation of the "hard problem", and in the end, philosophical problems never get solved to everybody's satisfaction.   On the other hand, this seems a great example of my concept of Obsoleting the Dilemma from A Cosmist Manifesto.  A philosophical puzzle like the "hard problem" can't necessarily be finally and wholly resolved -- but it can be made irrelevant and boring to everybody and basically obsolete.  That is what, at minimum, I think second-person-oriented tech will do for the "hard problem."

The so-called “hard problem of consciousness” (thus labeled by philosopher David Chalmers) can be rephrased as the problem of connecting FIRST-PERSON and THIRD-PERSON views of consciousness, where

  • the FIRST-PERSON view of consciousness = what it feels like to be conscious; what it feels like to have a specific form of consciousness
  • the THIRD-PERSON view of consciousness = the physical (e.g. neural or computational) correlates of consciousness … e.g. what is happening inside a person’s brain, or a computer system’s software and hardware, when that person or computer system makes a certain apparently sincere statement about its state of consciousness

The “hard problem”  is the difficulty of bridging the gap between these.  This gap is sufficient that some people, taking only a third-person view, go so far as to deny that the first-person view has any meaning — a perspective that seems very strange to those who are accustomed to the first-person view.  

(To me, from my first-person perspective, for someone to tell me I don’t have any qualia, any conscious experience -- as some mega-materialist philosophers would do -- is much like someone telling me I don’t exist.   In some senses I might not “exist” — say, this universe could be a simulation so that I don’t have concrete physical existence as some theories assert I do.  But even if the universe is a simulation I still exist in some other useful sense within that simulation.  Similarly, no matter what you tell me about my own conscious experience from a third-person perspective, it’s not going to convince me that my own conscious awareness is nonexistent — I know it exists and has certain qualities, even more surely than I am aware of the existence and qualities of the words some materialist philosopher is saying to me…) 

So far, science and philosophy have not made much progress toward filling in this gap between first- and third-person views of consciousness -- either before or after Chalmers explicitly identified it.

What I’m going to suggest here is a somewhat radical approach to bridging the gap: To bridge the gap between 1 and 3, the solution may be 2.  

I.e., I suggest we should be paying more attention to:
  • the SECOND-PERSON view of consciousness = the experience of somebody else’s consciousness

Brain-Machine Interfacing and the Second Person Science of Consciousness

There is a small literature on "second-person neuroscience" which contains some interesting ideas.  Basically it's about focusing on what people's brains are doing while they're socially interacting.

I also strongly recommend Evan Thompson’s edited book “Between Ourselves: Second-person issues in the study of consciousness”, which spans neurophysiology, phenomenology and neuropsychology and other fields.

What I mean though is something a little more radical than what these folks are describing.   I want to pull brain-computer (or at least brain-wire) interfacing into the picture!

Imagine, as a scientist, you have a brain monitor connected to your subject, Mr. X; and you are able to observe various neural correlates of Mr. X’s consciousness via the brain monitor’s read-out.  And imagine that you also have a wire (it may not actually be a physical wire, but let’s imagine this for concreteness) from your brain to Mr. X’s brain, allowing you to  experience what Mr. X experiences, but on the “fringe” of your own consciousness.  That is, you can feel Mr. X’s mind as something distinct from your own — but nevertheless you can subjectively feel it.  Mr. X’s experiences appear in your mind as a kind of second-person qualia.

Arguably we can have second-person qualia of this nature in ordinary life, without any need for wires connecting brains.  This is what Martin Buber referred to as an "I-Thou" rather than "I-It" relationship.   But we don't need to get into arguments about the genuineness of this kind of distinction or experience.  Though I do think I-Thou relationships in ordinary life have a kind of reality that isn't captured fully in third-person views, you don't have to agree with me on this to appreciate the particular second-person science ideas I'm putting forward here.  You just have to entertain the idea that direct wiring between two people's brains can induce a kind of I-Thou experience, in which one person can directly experience another's consciousness.

If one wired two people's brains together sufficiently closely -- setting aside a host of pesky little practical details -- one might end up with a single mind with a unified conscious experience.  But what I'm suggesting is to wire them together more loosely, so that each person's consciousness appears on the *fringe* of the other's consciousness.

The point, obviously, is that in this way, comparisons between first and third person aspects of consciousness can be made in a social rather than isolated, solipsistic way.

What is “Science”?

Science is a particular cultural institution that doesn’t necessarily fit into any specific formal definition.  But for sake of discussion here, I’m going to propose a fairly well-defined formal characterization of what science is.

The essence of science, in my view, is a community of people agreeing on
  • a set of observations as valid 
  • a certain set of languages for expressing hypotheses about observations
  • some intuitive measures of the simplicity of observation-sets and hypotheses

Given such a community, science can then proceed via the search for hypotheses that the community will agree are simple ways of explaining certain sets of agreed-on observations.   The validity of hypotheses can then be explored statistically by the community.  
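To make this characterization a bit more concrete, here's a toy sketch in code -- the "community," the polynomial hypothesis language, and the particular simplicity measure are all just illustrative assumptions of mine, not any real methodology.  Given agreed-on observations, an agreed hypothesis language, and an agreed simplicity measure, preferring hypotheses becomes a mechanical matter of trading off fit against simplicity:

```python
# Toy sketch of "science as community agreement" (all names illustrative).

# Agreed-on observations: (input, output) pairs every member accepts as valid.
observations = [(0, 1), (1, 3), (2, 5), (3, 7)]

# Agreed hypothesis language: polynomials, expressed as coefficient lists.
candidates = {
    "constant": [1],           # y = 1
    "linear":   [1, 2],        # y = 1 + 2x
    "cubic":    [1, 2, 0, 0],  # same fit as linear, but a longer description
}

def evaluate(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

def score(coeffs, obs):
    # MDL-flavored score: misfit plus a description-length penalty --
    # the agreed "simplicity measure" here is just the number of coefficients.
    misfit = sum((evaluate(coeffs, x) - y) ** 2 for x, y in obs)
    return misfit + len(coeffs)

best = min(candidates, key=lambda name: score(candidates[name], observations))
print(best)  # -> linear
```

The linear hypothesis wins: it explains the agreed observations exactly and has the shortest description.  A different community, with a different simplicity measure or a different hypothesis language, could of course land on a different "best" hypothesis -- which is exactly the point of defining science relative to a community.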

No Clear Route to “First Person Science”

The problem with first-person views of consciousness is that they can’t directly enter into science, because a first-person experience can’t be agreed-upon by a community as valid.  

Now, you might argue it's not entirely IMPOSSIBLE for first-person aspects of consciousness to enter into science.   It's possible because a certain community may decide, for example, to fully trust each other's verbal reports of their subjective experiences.   This situation is approximated within various small groups of individuals who work together in various wisdom traditions, aimed at collectively improving their states of consciousness according to certain metrics.   Consider a group of close friends meditating together, sharing their states of consciousness, discussing their experiences and trying to collectively find reliable ways to achieve certain states.   Arguably the mystical strains of various religions have at various times contained groups of people operating in this sort of way.

A counter-argument against this kind of first-person science might be that there are loads of fake gurus around, claiming to have certain “enlightened” states of consciousness that they seem not to really have.   But of course, fraud occurs in third-person science too…

A stronger counter-argument, I think, is that even a group of close friends meditating together is not really operating in terms of a shared group of first-person observations.  They are operating in terms of third-person verbal descriptions and physical observations of each other’s states of consciousness — and maybe in terms of second-person I-Thou sensations of each other’s states of consciousness.

But There Likely Will Soon Come Robust Second-Person Science

On the other hand, second-person observations clearly do lie within the realm of science as I've characterized it above.   As long as any sane observer within the scientific community who wires their brain into Mr. X's brain receives roughly the same impression of Mr. X's state of mind, we can say that the second-person observation of Mr. X is part of science within that community.

You might argue this isn't so: how do we know what Ms. Y perceived in Mr. X's brain, except by asking Ms. Y?  And if we're relying on Ms. Y's verbal reports, aren't we ultimately relying on third-person data?  But this objection doesn't really hold water -- because if we wanted to understand what Ms. Y was experiencing when second-person-experiencing Mr. X's brain, we could always stick a wire into her brain at the same time as she's wired into X, and experience her own experience of Mr. X vicariously.   Or we could stick a wire into her brain later, and directly experience her memory of what she co-experienced with Mr. X.  Etc.

Granted, if we follow a few levels of indirection things are going to get blurry -- but still, the point is that, in the scenario I'm describing, members of a scientific community can fairly consider second-person observations achieved via brain-computer interfacing as part of the "observation-set collectively verifiable by the community."   Note that scientific observations don't need to be easily validated by every member of a community -- it's a lot of work to wire into Ms. Y's brain, but it's also a lot of work to set up a particle accelerator and replicate someone's high-energy physics experiment, or to climb up a mountain and peer through a telescope.   What matters, in the context of science as I understand it, is that the observations involved can in principle, and in some practical even if difficult way, be validated by any member of the scientific community.

Solving (Or at least Obsoleting) the Hard Problem of Consciousness

Supposing this kind of set-up is created, how does it relate to first and third person views of consciousness?

I presume that what would happen in this kind of scenario is that, most of the time, what X reports his state of consciousness to be will closely resemble what Y perceives X's state of consciousness to be, when the two are neurally wired together.   Assuming this is the case, we then have a direct correlation between first-person observations about consciousness and second-person observations -- where the latter are scientifically verifiable, even though the former are not.  And of course we can connect the second-person observations to third-person observations as well.
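As a toy illustration of what "exploring the correlation statistically" might look like -- with entirely invented data, of course, since no such wiring technology exists yet -- one could simply tabulate X's first-person reports against each wired-in observer's second-person perceptions, trial by trial, and compute an agreement rate:

```python
# Hypothetical illustration of first-person vs. second-person agreement.
# All names and data are invented for the sketch.

# Mr. X's first-person reports of his state, over ten trials.
x_reports = ["calm", "anxious", "calm", "joyful", "calm",
             "anxious", "joyful", "calm", "calm", "anxious"]

# What each wired-in observer perceived of X's state on the same trials.
observer_perceptions = {
    "Y": ["calm", "anxious", "calm", "joyful", "calm",
          "anxious", "joyful", "calm", "anxious", "anxious"],
    "Z": ["calm", "anxious", "calm", "joyful", "calm",
          "calm", "joyful", "calm", "calm", "anxious"],
}

def agreement(reports, perceptions):
    # Fraction of trials on which the report matches the perception.
    matches = sum(r == p for r, p in zip(reports, perceptions))
    return matches / len(reports)

for name, perceived in observer_perceptions.items():
    print(name, agreement(x_reports, perceived))  # Y and Z each agree on 9/10
```

If every sane observer's agreement rate with X comes out consistently high like this, the community has its statistically explored correlation between 1 and 2 -- with much fancier inter-rater statistics available once the data is real.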

Thus it appears likely to me the hard problem of consciousness can be "solved" in a meaningful and scientific way, via interpolating 2 between 1 and 3.   At very least it can be obsoleted, and made as uninteresting as the problem of solipsism currently is (are other people really conscious like me?), or, say, the philosophical problem of whether time exists or not (we can't solve that one intellectually, but we don't spend much time arguing about it)....

Can Computers or Robots be Conscious in the Same Sense as Humans Are?

Of course, solving or obsoleting the hard problem of consciousness is not the only useful theoretical outcome that would ensue from this kind of BCI-enabled second-person science.

For instance, it's not hard to see how this sort of approach could be used to explore the question of whether digital computers, robots, quantum computers or whatever other artifact you like can be "genuinely conscious" in the same sense that people are.

Just wire your brain into the robot's brain, in a similar way to how you'd wire your brain into a human subject's brain.   What do you feel?  Anything?  Do you feel the robot's thoughts, on the fringe of your consciousness?   Or does it feel more similar to wiring your brain into a toaster?

Is Panpsychism Valid?

And what does it feel like, actually, to wire your brain into that toaster?   What is it like to be a toaster?  If you could wire some of your neurons into the toaster's sensors and actuators, could you get some sense of this?  Does it feel like nothing at all?  Or does it feel, on the fringe of your awareness, like some sort of simpler and less sophisticated consciousness?

When your friend hits the toaster with a sledgehammer, what is it you feel on the fringe of your awareness, where you (hypothetically) sense the toaster's being?   Do you just feel the toaster breaking?   Or do you feel some kind of painful sensation, at one remove?   Is the toaster crying out, even though (if not for your wiring your brain into it) nobody would normally hear...?

The second-person knowledge about the toaster's putative awareness would be verifiable across multiple observers, thus it would be valid scientific content.   Panpsychism, in a certain sense, could become a part of science....

Toward a Real Science of Consciousness

In sum -- to me the hard problem is about building a connection between the first person and third person accounts of consciousness, and I think the second person account can provide a rich connection.... 

 That is, I think a detailed theory of consciousness and its various states and aspects is going to come about much more nicely as a mix of first, second and third person perspectives, than if we just focus on first and third...

Tuesday, March 31, 2015

Kermit the Frog, Maximum Entropy Production, Etc.

While in Shanghai on a business trip recently, in a restaurant eating some terrifyingly spicy fish hot-pot with a couple of my Aidyia colleagues, I noticed the radio was playing a cover version of a song from the original Muppet Movie, “The Rainbow Connection”…. 

As often happens, this got me thinking….

This is not remotely an original observation, but it’s one of these cliches that has struck me over and over again during my life: There’s a certain beauty to the process of seeking, which often exceeds the satisfaction of finding what one is looking for.  This is related to why so many people enjoy the process of entrepreneurship — of starting new things and chasing success.  The feeling of increasing success is exciting, the ups and downs as one moves toward the goal and then toward and then away etc. in complex patterns are exciting….  Actually achieving the goal may give an oomph of satisfaction but then the oomph goes away and one craves the joy of the process of seeking again.  Of course there are always new things to seek though — one can seek to grow one’s company bigger and bigger, etc.  Many people enjoy seducing new men or women more than actually having an ongoing relationship with one whom they’ve captured, but I’ve never quite felt that way; I guess seeking a really good ongoing relationship is enough of a challenging quest for me, given my  peculiar and in some ways difficult personality…

This point struck me hard as a kid when I was watching the Muppet Movie and saw Kermit the Frog singing the “The Rainbow Connection”

Why are there so many songs about rainbows
And what's on the other side
Rainbows are visions
But only illusions
And rainbows have nothing to hide

So we've been told
And some choose to believe it
I know they're wrong, wait and see
Some day we'll find it
The rainbow connection
The lovers, the dreamers, and me
Who said that every wish
Would be heard and answered
When wished on the morning star
Somebody thought of that
And someone believed it
And look what it's done so far

What's so amazing
That keeps us stargazing
And what do we think we might see
Some day we'll find it
The rainbow connection
The lovers, the dreamers, and me

All of us under its spell, we know that it's probably magic

Have you been half asleep?
And have you heard voices?
I've heard them calling my name;
Is this the sweet sound
That called the young sailors?
The voice might be one and the same

I've heard it too many times to ignore it
It's something that I'm supposed to be
Some day we'll find it
The rainbow connection
The lovers, the dreamers, and me

Kermit’s plaintive froggy voice moved my emotions and I have to say it still does, way more than the typical ballad sung by a human pop star…   What occurred to me as a child as I watched him sing (maybe not the first time I saw the movie — we had it on video-tape when I was a kid and I heard the song more than once!), was that he had found his Rainbow Connection right there, inside the song — He was seeking something else, something beyond himself and his life, but actually inside the beauty of the song, and the feeling of singing the song, and the connection between him and the singer — and the songwriters and puppeteers behind the Kermit persona — and the various listeners of the song such as myself, and the people singing and humming the song around the world at various times and places … this whole melange of feeling and creation and expression and interaction obviously WAS the “Rainbow Connection” — a connection between different minds and voices, sounds waving through the air and colored pictures flashing on screens decoded from electromagnetic waves cast through the air via antennas … a diversity of colors beyond the humanly perceived rainbow and including all sorts of other frequencies ….  When I listened to the song I was basking in the reality of the Rainbow Connection and so was the imaginary and real Kermit.  Of course as a child I didn’t articulate it exactly this way but less-crystallized versions of these thoughts verged through my mind (as probably has happened with many other listeners to this same song, in another aspect of the Good Old Rainbow Connection).   
And I could only suspect that somewhere in the back of his good-natured though not that bright little froggy mind, Kermit realized that the beauty was really in the process of seeking and not in the goal — that the beauty and connection and joy he was after, were already there in the the song he was singing about the quest for these things, and in the life and love he expressed that constituted and animated this quest itself….

So, well, all hail Kermit !!! ... what else?

Similar ideas have occurred to me recently in an utterly different context…

A different twist on the aesthetic primacy of process over goal is provided by the Maximum Entropy Production Principle, which hypothesizes that, in many circumstances, complex systems evolve along paths of *maximum entropy production*.   The fine print is complex, but there's a lot of evidence -- mathematical, conceptual and physical -- in favor of this idea.

This is rather fascinating — it suggests we can think about the wonderful complexity of life, nature, humanity and so forth as, in some measure, resulting from a rush to achieve the goal of the Second Law of Thermodynamics — heat death — as rapidly as possible!!   Of course this isn’t really the total story of complexity and life and all that, but it seems to be an important ingredient — and it’s certainly a poignant one.   The goal in this case is a humanly repellent and disturbing one: the loss of complex form and the advent of stultifyingly uniform random movement in the universe.  The path followed in working toward this goal is a deep, rich, tremendously beautiful one.  

Whether you’re seeking the Rainbow Connection or Ultimate Heat Death, it seems that the process of optimization, in many cases, has a great deal of power to create beauty and structure and feeling.   The process of seeking a goal in the face of limitations and constraints forces a tradeoff between the degree of goal fulfillment and the constraints — and it’s this dance that leads to so much structure and beauty.  

In the case of a song like the Rainbow Connection, the constraints are about time (people get bored if a song is too long) and human comprehension (it’s hard to express a universal human feeling in a way that humans can universally appreciate, given the diversity of our mind-sets and cultures) and the physics of sound waves and the limitations of the human ear and so on.  In the case of Jimi Hendrix, whose music I prefer to even that of Kermit, it was about Hendrix’s musical creativity and the sounds he heard in his head interacting with the constraints of what could be done with the electric guitar and the amplification and production equipment at the time.

In the case of thermodynamics, the core constraints are the “laws” of mechanics and temporal continuity.   The end goal is Ultimate Heat Death, perhaps, but a physical system can only change so much at each point in time.   The physical system is trying to maximize entropy production, yeah, but it can only do so in a manner consistent with the laws of physics, which — among many other constraints — only allow a certain pace of change over time.  Figuring out how to maximize entropy production in the context of obeying the laws of physics and what they say about the relation between matter and spacetime — this is the interplay that helps yield the wonderful complexity we see all around us. 
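Here's a crude toy model of that pace-of-change constraint -- an Ehrenfest-style urn sketch of my own, not anything from the MEPP literature.  All the energy quanta start in one box, which is the minimum-entropy state; but the "laws" of this toy universe permit only one quantum to hop per step, so entropy can only climb toward its maximum gradually, tracing out a whole path of intermediate states:

```python
import math
import random

# Toy Ehrenfest-style model: N quanta split between two boxes A and B.
# Each step, one quantum (chosen uniformly at random) hops to the other box.
# Entropy rises toward its maximum, but only as fast as one-hop-per-step allows.

random.seed(0)
N = 100
n_a = N  # all quanta start in box A: a single microstate, so entropy zero

def entropy(n):
    # Boltzmann-style entropy: log of the number of microstates
    # with exactly n quanta in box A.
    return math.log(math.comb(N, n))

history = [entropy(n_a)]
for _ in range(2000):
    # The chosen quantum is in A with probability n_a / N; it hops out.
    if random.random() < n_a / N:
        n_a -= 1
    else:
        n_a += 1
    history.append(entropy(n_a))

print(history[0], max(history))  # starts at 0, climbs toward ln C(100, 50)
```

Each step changes the box count by exactly one, so the entropy curve is forced to pass through every intermediate value on the way up -- a cartoon of how the laws of motion constrain the pace at which a system can rush toward its maximum-entropy "goal."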

If the constraints were too relaxed, the goal might get approached too quickly and surely, and there would be no beauty on the path along the way.  If the goal and the constraints were both weak, things might just drift around quasi-randomly in less than interesting ways.  If the constraints were too strong, there might just be no interesting ways for the overall objective function to get pursued (be it heat death or writing a great song or whatever).   Constraints that are strong but not too strong, imposed on a suitable objective function, are what yield wonderful complexity.  Lots of analogies arise here, from raising kids to the evolution of species.

To view it in terms of optimization theory: Constraints take a simple objective function and turn it into a complex objective function with multiple extrema and subtle dependencies all across the fitness landscape.  These subtleties in the objective function lead to subtle, intricate partial solutions — and when there is a continuity constraint, so that newly posed solutions must constitute slight variations on previously posed solutions, the only way to dramatically maximize the core objective function is to pass through a variety of these partial solutions.
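A minimal sketch of the continuity-constraint point -- the objective function and step size here are arbitrary choices for illustration, not anything canonical.  A greedy search that may only make slight variations on its current solution necessarily traces out a whole path of intermediate partial solutions, rather than jumping straight to the optimum:

```python
# Illustrative sketch: a continuity constraint forces a search to pass
# through many intermediate solutions on the way to the optimum.

def objective(x):
    # A simple smooth objective with its maximum at x = 10.
    return -(x - 10.0) ** 2

def constrained_ascent(start, step=0.5, iters=100):
    """Greedy ascent where each move is at most `step` away from the
    current solution -- the continuity constraint. Returns the full
    path of solutions visited."""
    path = [start]
    x = start
    for _ in range(iters):
        best = max((x - step, x, x + step), key=objective)
        if best == x:  # no slight variation improves things: stop
            break
        x = best
        path.append(x)
    return path

path = constrained_ascent(0.0)
print(len(path), path[-1])  # 21 solutions visited, ending at the optimum 10.0
```

An unconstrained optimizer could just output 10.0 immediately; it's the requirement that each new solution be a slight variation on the last that generates the long, structured path -- the code-level analogue of the "dance" between goal and constraints described above.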

The ultimate bliss and glorious spectral togetherness Kermit was seeking — or that my childhood self thought he was seeking — or whatever — is an amazing, thrilling vision for sure.  But the process of gradually moving toward this ultimate cosmic vision, in a manner consistent with the constraints of human and froggy life, and the continuity constraint in moving through possible solutions, is what yields such subtle, interesting and moving forms as we see, hear and are in this world right now…

OK OK, that’s all pretty fast and loose, I know.  Hey, I’m just musing while listening to a song, not proving a bloody theorem.  My petty human mind, not yet having achieved ultimate superintelligence, has got to churn through stuff like this day by day to gradually muck toward a fuller understanding of the world.  It’s a process ;-) ….

As Captain Beefheart said, “a squid eating dough in a polyethylene bag is fast and bulbous — fast and bulbous, got me?”

Sunday, March 08, 2015

Paranormal Phenomena, Nonlocal Mind and Reincarnation Machines

How I Came to Accept the Paranormal

While I’m generally an extremely stubborn person,  my opinion has radically changed on some topics over the years.   I don't view this as a bad thing.   I don't aspire to be one of those people whose ideas are set in stone, impervious to growth or adaptation.

Some of my changes of opinion have been purely "changes of heart" -- e.g. in my early 20s I transitioned from a teenage solipsism to a more compassion-oriented attitude, due more to internal growth than any external data or stimuli.   

Other times, the cause of my change of opinion has been encountering some body of evidence that I simply hadn’t been aware of earlier.  

The change of mind I'm going to write about here has been of the latter kind -- data-driven.

What I’m going to write about here is a certain class of paranormal phenomena that seem related to religious notions of “survival after death.”   In my teens and 20s I was pretty confident these phenomena were obviously nothing more than wishful thinking.   People didn't want to admit they were doomed to die one day, so they made up all sorts of fanciful stories about heavens and hells after death, and reincarnation, and ghosts and what-not.  

I didn’t want to die either, but I was interested in achieving physical immortality by fixing the problems that make the human body age, or by uploading my mind into a robot or computer or whatever -- by methods that made good solid rational sense according to science, even if they seemed outlandish according to most people’s everyday world-views.

(I did, in my youth, acquire from somewhere some sort of intuitive spiritual sense that my mind would continue to exist after my death, fused with the rest of the universe somehow.  But I didn’t imagine I’d continue to have any individuality or will after my body died – I intuitively, non-rationally felt I’d continue to exist in some sort of inert form, always on the verge of having a thought or taking an action but never quite doing so….)

My current view of these "survival-ish" paranormal phenomena is quite different.   I definitely haven’t had any sort of religious conversion, and I don’t believe any of the traditional stories about an afterlife are anywhere near accurate.    But I now am fairly confident there is SOMETHING mysterious and paranormal going on, related to reincarnation, channeling and related phenomena.

My new perspective doesn’t fit that well into our standard contemporary verbiage, but a reasonable summary might be:
  • Individual human minds have an aspect that is "nonlocal", in the sense of not being restricted to the flow of our time-axis in the way that our bodies generally are.
  • Due to this non-localized aspect, it’s possible for human minds that are evidently grounded in bodies in a certain spacetime region, to manifest themselves in various ways outside this spacetime region – thus sometimes generating phenomena we now think of as “paranormal”
  • This non-localized aspect of human minds probably results from the same fundamental aspects of the universe that enable psi phenomena like ESP, precognition, and psychokinesis
  • The path from understanding which core aspects of physics enable these phenomena, to  understanding why we see the precise paranormal phenomena we do, may be a long one – just as the path from currently known physics to currently recognized biology and psychology is a long one

How did I come to this new view?

The first step was to accept, based on an extensive review of the available evidence, that psi phenomena are almost surely real.   My perspective on this is summarized in the introductory and concluding chapters of  Evidence for Psi, a book I co-edited with Damien Broderick last year.   See also the links on this page.   I don’t want to take space and time here summarizing the evidence for psi phenomena, which includes lots of carefully-analyzed laboratory data, alongside loads of striking, well-substantiated anecdotal evidence.   It was the laboratory data that first convinced me psi was very likely real.  After getting largely convinced by the laboratory data, I started reading through the literature on anecdotal psi phenomena, and it started to seem less and less feasible that it was all just fabricated.

I’ve also speculated a bit about how one might tweak currently understood physics to obtain a physics in which psi could be possible.   See my blog post on Surprising Multiverse Theory.  Basically, I think it might be enough to posit that the various histories summed over in quantum-mechanical sums-over-histories are weighted in a manner that depends on their information content, rather than just on their energy.   This relates closely to a proposal made by the famous physicist Lee Smolin in a totally different context, as well as to Sheldrake’s morphic field hypothesis.
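To make the shape of this proposal a little more concrete, here is one purely schematic way to write it down – the notation (sigma, beta) is my own illustration, not a formula from the blog post or from Smolin or Sheldrake.  In an ordinary Feynman sum over histories, each history h contributes with a phase determined by its action S[h]; the Surprising Multiverse tweak would multiply in an extra real weight depending on some surprisingness measure sigma(h), e.g. an information-theoretic measure of the patterns in h:

```latex
% Schematic only (illustrative notation): each history h in the path sum
% gets an extra real weight depending on its surprisingness \sigma(h).
% Setting \beta = 0 recovers the ordinary Feynman sum over histories.
\mathcal{A} \;=\; \sum_{h} e^{\, i S[h]/\hbar} \; e^{\, \beta\, \sigma(h)}
```

The only point of writing it this way is to show that the modification enters as a reweighting of histories, leaving the rest of the formalism untouched; whether any such weighting can be made mathematically consistent and empirically testable is exactly the open question.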

I recall reading (a few years ago) the excellent book Varieties of Anomalous Experience, with its run-down of various case studies of apparent reincarnation, and then digging into that literature a bit further afterwards.   I became almost-convinced there was SOMETHING paranormal going on there, though not terribly convinced that this something was really “reincarnation” as typically conceived.

Now I’ve just read the equally excellent book Immortal Remains by the philosopher Stephen Braude.   In the careful, rationalist manner of the analytical philosopher, he summarizes and dissects the evidence for various paranormal phenomena that others have taken as indicative of an afterlife for humans – reincarnation, mediumistic channeling, possession, out-of-body experiences, and so forth.   (But the book is a lot more fun to read than most academic philosophy works, with lots of entertaining tidbits alongside the meticulous deductions and comparisons – it’s highly recommended to anyone with a bit of patience who wants to better understand this confusing universe we live in!).

Survival versus SuperPsi versus ??

One of Braude’s themes in the book is the comparison of what he (following generally accepted terminology in this area) calls “survival” based versus “SuperPsi” based explanations of these phenomena. SuperPsi in this context means any combination of recognized psi phenomena like ESP, precognition, psychokinesis and so forth – even very powerful combinations of very powerful instances of these phenomena.

One thing that Braude points out in the book is that, for nearly all the phenomena he considers, there seems to be a thinkable SuperPsi-based explanation, as well as a thinkable survival-based explanation.   This is not surprising since neither the SuperPsi hypothesis nor the survival hypothesis can be very clearly formulated at this stage of our knowledge.  So, he considers the choice between the two classes of hypothesis to come down mainly to considerations of simplicity.  In his view, the SuperPsi explanations often tend to get way too complex and convoluted, leading him to the conclusion that there is most probably some survival-esque phenomenon going on along with probably lots of psi phenomena....  (For a discussion of why I agree with Braude that simplicity is key to a good scientific explanation, see this essay, which was reprinted with slight changes as part of my book The Hidden Pattern.)

The contrast of survival vs. SuperPsi makes a compelling theme for a book, but I suspect it may not be the best way to think about the issues.   

However far my attitudes have drifted, I still strongly doubt that “survival” in any traditional sense is the real situation.   I really doubt that, after people have died, they keep on living in some “other world” – whether a heaven or hell or just another planet or whatever.   I also really doubt that, after someone dies, their soul or essence enters another person so that this other person is “a new version of them” (the traditional reincarnation story in its most common form).    One thing Braude’s careful review makes clear is how scantily the evidence supports these traditional conclusions.  

The evidence DOES support the conclusion that the paranormal phenomena Braude considers actually happen in the world, and don’t have explanations in terms of science as we now know it.  But the evidence does NOT especially strongly support any of the classical religious analyses of these paranormal phenomena.  My own view is that these religious attempts at explanation have largely served to cloud the matter.   Personally, the main reason I previously rejected these sorts of phenomena entirely was my reaction to the various conceptual inconsistencies and obvious appeals to human emotion that I saw in these traditional religious explanations.

What we see in the data Braude considers is that:
  • After a human being dies, it is sometimes possible for “self and other mind patterns” associated with that human being’s mind to manifest themselves in the world at a later time. 
  • While a human being is alive, it is sometimes possible for  “self and other mind patterns” associated with that human being’s mind to manifest themselves in the world at some distant physical location, without any good conventional explanation for how this could happen
  • Sometimes these “self and other mind patterns” manifest themselves in a manner that is mixed up with other things, e.g. with someone else’s mind
  • Sometimes these “self and other mind patterns” provide evidence of their association with the mind of a spatially or temporally distant human, which is very difficult to “explain away” in terms of known science

Exactly what specific forms the above phenomena take is a long story, which Braude tells in his book and which I won’t take the time to summarize here.  Read the book!

Anyway, it should be pretty clear that the above does not imply “survival / afterlife” in any traditional sense.   Yet Braude makes a good case that hypothesizing these phenomena to be caused by some combination of ESP, psychokinesis, precognition and so forth becomes inordinately complicated.

From Carbon to Ecosystems

One thing that strikes me is what a long distance exists between potential “physics of psi” explanations like my Surprising Multiverse Theory, and the complex, messy particulars of phenomena like mediumistic channeling.   Channeling, for instance, apparently involves subtle intersections between individual and social psychology and culture, and appears to mix up instances of ESP and psychokinesis with other “nonlocal mind” phenomena that are more distinct from traditional psi.

An analogy that springs to mind, however, is the relation between the carbon atom and the complexities of the Earth’s ecosystem.   The carbon atom enables the development of life, and this can be seen, in a general sense, via looking at the atom at the micro level, and the nature of the bonds it permits.   On the other hand, predicting the specifics of macroscopic life based on the microscopic properties of the carbon atom is something we still can’t come close to doing.   We can’t, yet, even solve the protein folding problem (a very particular subcase of this more general problem).  

Similarly, it’s “easy” to see that hypotheses like the Surprising Multiverse Theory have some potential to explain how the universe could contain phenomena like mediumistic channeling, apparent reincarnation, and so forth.   But getting from something like a low-level information-theoretic tweak to quantum physics, up to specific predictions about paranormal phenomena among human beings, is bound to involve a lot of complexity, just like any explanation bridging multiple hierarchical levels of reality.  

Toward a Paranormal-Friendly (Patternist) Philosophy of the Cosmos

I don’t have anywhere near a scientific explanation of these paranormal phenomena I’m talking about, at present.  I would like to find one, perhaps by building up from Surprising Multiverse Theory or something similar, perhaps by some other means.  Of course, I don’t think it makes sense to reject evidence simply because we don’t have a good theory for it yet.

I do have a philosophical perspective on these phenomena, which helps me think about them in what I hope is a coherent way.   My basic philosophy of the universe is summarized in The Hidden Pattern (free pdf) and A Cosmist Manifesto (free pdf).  But thinking about paranormal phenomena leads me to enrich and extend that basic philosophy in certain ways.

As I’ve said in my previous writings, my preferred way of thinking about these things involves positing a Pattern Space, which exists outside our spacetime continuum.   The whole spacetime universe that defines our everyday being, is just one particular pattern of organization, which in some sense exists within a much larger space of patterns.   When a pattern like a human mind emerges within our spacetime continuum, it also exists in the broader pattern space.   

But what is meant by a pattern being “within our spacetime continuum"?  I haven’t thought about this deeply before.  Basically, I suggest, what it means is that this pattern is heavily interlinked with other patterns that are “within our spacetime continuum”, and not so heavily interlinked with patterns that are not.   That is: the spacetime continuum may be thought of as a special sort of cluster of interlinked patterns.

Since the spacetime continuum is just one powerful, but not omnipotent, pattern of organization, it’s not so bizarre that a pattern heavily interlinked with other patterns in the “spacetime continuum pattern cluster” could sometimes interlink with patterns outside this cluster.   Extra-cluster pattern interactions are then perceived, by patterns inside the cluster, as “paranormal.”

This way of thinking ties in with philosopher Charles Peirce’s “one law of mind” – which he formulated as “the tendency to take habits.”   Peirce observed that, in our universe (but NOT in a hypothetical random universe), once a pattern has been observed to exist, the probability of it being observed again is surprisingly high.  This is the basic idea underlying the Surprising Multiverse Theory.   This seems conceptually related to the statement that the patterns we observe mainly live inside a cluster in pattern space.   Inside a cluster, the odds of various entities being connected via a strong pattern should be atypically high – that’s closely related to what makes the cluster a cluster.

Mind Uploading via Reincarnation Machines?

If indeed the paranormal phenomena Braude surveys are real, and have some sort of scientific explanation that we just haven’t found yet, then this has fascinating potential implications for mind uploading.   It suggests that, when someone dies, their mind is still in some sense somewhere – and can potentially be brought back by appropriate dynamics in certain biophysical systems (e.g. the mind of a medium, or a child born as an apparent reincarnation, etc.).

This raises the possibility that, by engineering the right kind of physical system, it might be possible to specifically induce “paranormal” phenomena that cause a dead person’s mind to manifest itself in physical reality, long after that person’s death.

Of course, this is utterly wild speculation.   But what makes it fun is that it’s also a fairly logical extrapolation from empirical observations.   If the data about the paranormal is real, but ultimately has some scientific explanation rather than a religious one, then most likely the underlying phenomena can be tapped into and manipulated via engineered systems, like all other scientifically understood phenomena.

Of course, a scientific understanding of these phenomena will likely include an understanding of their limitations.  Maybe these limitations will prevent us from building reincarnation machines.   But maybe they won’t.

If we buy the “morphic field” type idea, then what would attract the reincarnation of a deceased person’s mind, would likely be a set of mind-patterns very similar to that person’s mind.  This would be in the spirit of the well-demonstrated phenomenon of ESP among identical twins.   

In this case, it would follow that one very good way to engineer reincarnation might be to create an intelligent robot (perhaps with a quantum-computer infrastructure?) with
  • Lots of the mind-patterns of the deceased person one wishes to reincarnate 
  •  Lots of degrees of freedom capable of being adjusted and adapted

This would be achieved, for instance, if one created a robot intended as a simulacrum of a deceased person based on information they had left behind – videos, emails and what-not.  There are existing organizations focused specifically on accumulating information about people so as to facilitate this kind of post-mortem simulation.

The strange and exciting hypothesis one is led to, is that such a simulacrum might actually attract some of the mind-patterns of the person simulated, seducing these patterns out of the overarching pattern space – and thus animating the simulacrum with the “feel” of the person being simulated, and perhaps specific memories and capabilities of that person, beyond what was programmed into the simulacrum.

Oh Really?

If you’re a typical tech geek who’s a fan of my AGI work, you may think I’ve gone totally nuts by this point.   That doesn’t bother me particularly though.  

I mean, AGI is almost trendy now, but when I started out with it 30 years ago everyone thought I was nuts to be thinking about it or trying to work on it.    Peer pressure doesn’t really work on me.

I don’t have any real interest in arguing these points with people who haven’t taken the time to inform themselves about the relevant data.   If you want to discuss the points I’ve raised here, do us all a favor and first read at least the books I’ve mentioned above – Evidence for Psi, Varieties of Anomalous Experience, and Immortal Remains.

If you’ve absorbed all this data and are still highly skeptical, then I’m quite willing to discuss and debate with you.   On the other hand, if you feel like you don’t want to take the time to read so many pages on this sort of topic, that’s understandable – but yet, IMO, by  making this choice you are forfeiting your right to debate these points with people who HAVE familiarized themselves with the data.

This is weird stuff, for sure.   But don’t blame the messenger.   It’s a weird world we live in.   We understand very little of it, at present.   If we want to increase our understanding as rapidly as we can, the best strategy is to keep an open mind – to look at what reality is showing us and really think about it, rather than shutting out troublesome data because it doesn’t match our preconceptions, and rather than accepting emotionally satisfying simplifications (be they scientific or religious in nature).

Immortality and Immortality

Does this line of thinking I’ve presented here reassure me that my possible forthcoming physical death (I’m 48 years old now, certainly old enough to be thinking about such things!) may not be so bad after all?    Hmmm – kind of, I guess.  But I’m not going to cancel my Alcor membership, nor stop devoting a portion of my time to longevity research.   I want to keep this body going, or port my mind to a different physical substrate in a direct way.   

The apparent fact that my mind exists outside of spacetime, and can potentially be brought back into spacetime – at least in some partial way – after my death, doesn’t really diminish my urge to keep on continually existing within THIS spacetime continuum, going forward from right now.   Why would it?  

The overarching pattern space is no doubt wonderful, but ongoing existence in this limiting time-axis is pretty cool too – and keeping on living here is very unlikely to stop my mind-patterns from flourishing and romping trans-temporally in the cosmic pattern space, sooo.... 

Tuesday, January 27, 2015

Why I Signed Tegmark's Open Letter on AI Safety

A bunch of folks have messaged me recently asking me why the heck I signed the Open Letter on AI safety recently proposed by Max Tegmark and his chums at the newly formed Future of Life Institute (in full, the letter is called Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter)....   It is true that Tegmark's new organization is at the moment mainly funded by the great inventor and entrepreneur Elon Musk, whose recent statements about AI have, er, not pleased me especially.

Reading through the brief text of the Open Letter, I did find one thing I don't agree with.   The letter contains the phrase "...our AI systems must do what we want them to do. "  Well, that is just not going to be possible, obviously....   It may happen for a while, but once we have AGI systems that are massively smarter than people, they are going to do what they want to do, or what other powerful forces in the universe want them to do, but not necessarily what we want them to do.   

Our best hope of making AI systems do what we want them to do, will be to become one with these AI systems via brain-computer interfacing, mind uploading and the like.   But in this case, the "we" who is guiding the AIs (i.e. AIs that are the future "us") will not be the same human "we" that we are now -- this future "we", descended from our current human selves, may have quite different goals, motivations, and ways of thinking.

But in the Open Letter, this phrase about controlling AIs, which doesn't make sense to me, is embedded in the paragraph

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations ...  constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. 

and I do approve the overall gist here.  I think we should do research aimed at maximizing the odds that increasingly capable AI systems are robust and beneficial.   This seems a no-brainer, right?

Looking at the associated document outlining suggested research directions, I found myself agreeing that these would all be worthwhile things to study.   For instance, in the computer science section of the document, they advocate study of

  1. Verification: how to prove that a system satisfies certain desired formal properties. ("Did I build the system right?")
  2. Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences. ("Did I build the right system?")
  3. Security: how to prevent intentional manipulation by unauthorized parties.
  4. Control: how to enable meaningful human control over an AI system after it begins to operate.

I think all these kinds of research are valuable and worth pursuing presently.

I definitely do NOT think that these kinds of research should take priority over research on how to make thinking machines be more and more generally intelligent.   But that is not what the open letter advocates.   It just suggests that the research areas suggested should get MORE attention than they currently do.

It may be that some of the signatories of the Open Letter actually ARE in favor of stopping or slowing AGI R&D while work on safety-oriented topics proceeds.   But, for example, Demis Hassabis of Google DeepMind signed the letter, and I know he is trying pretty hard to push toward creating AGI, with his team at Google in the UK.  Ditto for Itamar Arel, and plenty of other signatories.

One could argue about whether it makes sense to divert resources from building AGI toward research on AGI safety, at this stage.  But to my mind, this would be a pointless argument.   AGI occupies a minimal percentage of the world's resources, so if more funding is to be put into AGI safety, it doesn't have to come out of the pot of AGI R&D funding.   As an example, Musk put US$10M into Tegmark's Future of Life Institute -- but it's not as though, if he hadn't done that, he would have put the same money into AGI R&D.  

Do I think that $10M would have been better spent on OpenCog AGI R&D?   Well, absolutely.   But that wasn't the issue.    There is a huge amount of wealth in the world, and very little of it goes to AGI or other directly Singularity-oriented tech.   Rather than fighting over the scraps of resources that currently go to Singularity-oriented tech, it's more useful IMO to focus on expanding the percentage of the world's resources that go into Singularity-oriented development as a whole.

In short, I considered not signing the document because of the one phrase I disagreed with, as mentioned above.  But eventually I decided this is not a legal contract where every phrase has to be tuned to avoid having loopholes that could come back to bite me; rather, it's just an expression of common feeling and intent.   I signed the document because I wanted to signal to Max Tegmark and his colleagues that I am in favor of research aimed at figuring out how to maximize the odds that AGI systems are robust and beneficial.   This kind of research has the potential to be worthwhile in making the phase "from here to Singularity" go more smoothly -- even though in the end we're obviously not going to be able to "make AGI systems do what we want them to do" ... except potentially by morphing what "we" are so that there is no boundary between the AGIs and us anymore.

Tuesday, December 02, 2014

What is Life?

I was trolling the Google+ discussion forum related to the Developmental AI MOOC, which my son Zar recently completed, and noticed a discussion thread on the old question of “What is Life?” — prompted by the question “Are these simple developmental-AI agents we’ve built in this course actually ‘alive’ or not?”

I couldn’t help adding my own two cents to the forum discussion; this blog post basically summarizes what I said there.

First of all, I’m not terribly sure "alive or not" is the right question to be asking about digital life-forms.  Of course it's an interesting question from our point of view as evolved, biological organisms.  But -- I mean, isn't it a bit like asking if an artificial liver is really a liver or not?   What are the essential characteristics of being a liver?  Is it enough to carry out liver-like functions for keeping an organism's body going?  Or do the internal dynamics have to be liver-like?  And at what level of detail?

Having said that, though, I think one can make some mildly interesting headway on the "what is life?” question by starting with the concept of agency and proceeding from there...

I think Stan Franklin was onto something with his definition of "autonomous agent" in his classic “Agent or Program?” paper.  He was writing more from an AI view than an ALife view, but the ideas seem very much applicable here.  The core definition of the paper is:

An autonomous agent is a system situated within and part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to affect what it senses in the future.

The paper then goes on to characterize additional properties possessed by various different types of agents.   For instance, according to Franklin's approach,

  • every agent satisfies the properties: reactive, autonomous, goal-oriented and temporally continuous.
  • some agents have other interesting properties like: learning/adaptive, mobile, flexible, having-individual-character, etc.

Given this approach, one could characterize “life” by saying something like

A life-form is an autonomous agent that is adaptive and possesses metabolism and self-reproduction.

This seems fairly reasonable — but of course begs the question of how to define metabolism and self-reproduction.  If one defines them too narrowly based on biological life, one will basically be defining "traditional biological life."  If one defines them too broadly, they'll have no meaning.

A related approach that seems sensible to me is to define a kind of abstract "survival urge".  For instance, we could say that

An agent possesses survival-urge if its interactions with the environment, during the period of its existence, have a reasonably strong influence on whether it continues to exist or not ... and if its continued existence is one of its goals.


An agent with individual character possesses individual-character survival-urge if its interactions with the environment, during the period of its existence, have a reasonably strong influence on whether other agents with individual-character similar to it exist in future ... and if both its continued existence and the existence of other agents with similar individual-character to it, are among its goals.

Then we could abstractly conceive life as:
  • A life-form is an adaptive autonomous agent with survival urge
  • An individuated life-form is an adaptive autonomous agent with survival urge, individual character and individual-character survival-urge

These additions bring us closer to the biological definition of life.

(I note that Franklin, in his paper, doesn't define what a "goal" is.  But in the discussion in the paper, it's clear that he conceived it as what I've called an implicit goal rather than an explicit goal.   That is, he considers that a thermostat has a goal; but obviously, the thermostat does not contain its goal as deliberative, explicitly-represented cognitive content.  He seems to consider a system's goal as, roughly, "the function that a reasonable observer would consider the system as trying to optimize."   I think this is one sensible conception of what a goal is.)
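As a toy illustration of how these definitions fit together, here is a small Python sketch.  The boolean-checklist representation and all the names (AgentProfile, is_autonomous_agent, etc.) are my own invention for illustration, not anything from Franklin's paper: an agent "profile" of observable features, plus predicates implementing Franklin's autonomous-agent definition and the life-form / individuated-life-form definitions above.

```python
# Illustrative sketch only: the definitions above encoded as predicates
# over a toy "agent profile".  Names and structure are invented for
# illustration, not taken from Franklin's paper.
from dataclasses import dataclass

@dataclass
class AgentProfile:
    senses_environment: bool = False
    acts_on_environment: bool = False
    pursues_own_agenda: bool = False      # implicit goal, thermostat-style
    temporally_continuous: bool = False
    adaptive: bool = False
    # Survival-urge ingredients
    influences_own_survival: bool = False # behavior affects continued existence
    survival_is_goal: bool = False
    # Individual-character ingredients
    has_individual_character: bool = False
    promotes_similar_agents: bool = False
    similar_agents_are_goal: bool = False

def is_autonomous_agent(a: AgentProfile) -> bool:
    """Franklin: senses and acts on its environment, over time,
    in pursuit of its own agenda."""
    return (a.senses_environment and a.acts_on_environment
            and a.pursues_own_agenda and a.temporally_continuous)

def has_survival_urge(a: AgentProfile) -> bool:
    return a.influences_own_survival and a.survival_is_goal

def is_life_form(a: AgentProfile) -> bool:
    """Life-form: adaptive autonomous agent with survival urge."""
    return is_autonomous_agent(a) and a.adaptive and has_survival_urge(a)

def is_individuated_life_form(a: AgentProfile) -> bool:
    return (is_life_form(a) and a.has_individual_character
            and a.promotes_similar_agents and a.similar_agents_are_goal)

# A thermostat: an autonomous agent (with an implicit goal), but no
# survival urge -- so, by these definitions, not a life-form.
thermostat = AgentProfile(senses_environment=True, acts_on_environment=True,
                          pursues_own_agenda=True, temporally_continuous=True)
assert is_autonomous_agent(thermostat)
assert not is_life_form(thermostat)
```

The sketch makes vivid what the prose argues: "alive-ness" here is a conjunction of fairly simple properties, and a simple digital agent could satisfy all of them with modest modifications.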

Unfortunately I haven't studied the agents created in the Developmental AI MOOC well enough to have a definitive opinion on whether they are "alive" according to the definitions I've posited in this post.  My suspicion, though, based on a casual study, is that they are autonomous agents without much of a survival urge.   But I'd guess a survival urge, and even an individuated one, could be achieved via small modifications to the approach taken in the course exercises.

My overall conclusion, then, is that some fairly simple digital life-forms should logically be said to obey the criteria of “life”, if these are defined in a sensible way that isn’t closely tied to the specifics of the biological substrate. 

Now, some may find this unsatisfying, in that digital organisms like the ones involved in the Developmental AI MOOC are palpably  much simpler than known biological life-forms like amoebas, paramecia and so forth.    But  my reaction to that would be that complexity is best considered a separate issue from “alive-ness.”  The complexity of an agent’s interactions and perceptions and goal-oriented behaviors can be assessed, as can the complexity of its behaviors specifically directed toward survival or individual-character survival.   According to these criteria, existing digital life-forms are definitely simpler than amoebas or paramecia, let alone humans.  But I don’t think this makes it sensible to classify them as “non-alive.”   It’s just that the modern digital environment happens to allow simpler life-forms than Earthly chemistry gave rise to via evolution.

Saturday, November 22, 2014

Is the Technological Singularity a "Final Cause" of Human History?

Intuitively, it is tempting (to some people anyway!) to think of the potential future Technological Singularity as somehow "sucking us in" -- as a future force that reaches back in time and guides events so as to bring about its future existence.   Terence McKenna was one of the more famous and articulate advocates of this sort of perspective.

This way of thinking relates to Aristotle's notion of "Final Causation" -- the final cause of a process being its ultimate purpose or goal.   Modern science doesn't have much of a place for final causes in this sense; evolutionary theories often seem to be teleological in a "final causation" way on the surface, but then can generally be reformulated otherwise.   (We colloquially will say "evolution was trying to do X," but actually our detailed models of how evolution was working toward X, don't require any notion of "trying", but only notions of mutation, crossover and differential survival...)

It seems to me, though, that the Surprising Multiverse theory presented in one of my recent blog posts (toward the end), actually implies a different sort of final causation -- not quite the same as what Aristotle suggested, but vaguely similar.  And this different sort of final causation does, in a sense, suggest that the Singularity may be sucking us in....

The basic concept of the Surprising Multiverse theory is that, in the actual realized rather than merely potential world, patterns with high information-theoretic surprisingness are more likely to occur.   This implies that, among the many possible universes consistent with a given set of observations (e.g. a given history over a certain interval of time), those universes containing more surprisingness are more likely to occur.

Consider, then, a set of observations during a certain time interval -- a history as known to a certain observer, or a family of histories as known to a set of communicating observers -- and the question of what will happen AFTER that time interval is done.   For instance, consider human history up till 2014, and the question of the human race's future afterwards.

Suppose that, of the many possible futures, some contain more information-theoretic surprisingness.   Then, if the Surprising Multiverse hypothesis holds, these branches of the multiverse -- these possible universes -- will have boosted probabilities, relative to other options.   The surprisingness weighting may then be viewed intuitively as "pulling the probability distribution over universes, toward those with greater surprisingness."

The "final cause" of some pattern P according to observer O, may be viewed as the set of future surprising patterns Q that are probabilistically caused by P, from the perspective of observer O. (There are many ways to quantify the conceptual notion of probabilistic causation -- perhaps the most compelling is as "P having nonneutralized positive component effect on Q, based on the knowledge of O", as defined in the interesting paper A Probabilistic Analysis of Causation.)

So the idea is: final causation can be viewed as the probabilistic causation that has the added oomph of surprisingness (and then viewed in the backwards direction).   A final cause of P is something that is probabilistically caused by P, and that has enough surprisingness to be significantly overweighted in the Surprising Multiverse weighting function that balances P's various possible futures.

So what of the Singularity?  We may suppose that a Technological Singularity will display a high degree of probabilistic surprisingness, relative to other alternative futures for humanity and its surrounds. If so, branches of the multiverse involving a Singularity would be preferentially weighted higher, according to the Surprising Multiverse hypothesis.   The Singularity is thus a final cause of human history.   QED....
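The reweighting argument above can be captured in a few lines of code.  This is a toy model of my own construction, not anything from the Surprising Multiverse post itself: take a prior distribution over possible "future branches", boost each branch by an exponential factor in its surprisingness score, and renormalize.

```python
# Toy illustration (invented for this sketch): surprisingness-weighted
# reweighting of a prior over future branches of the multiverse.
import math

def reweight(priors, surprisingness, beta=1.0):
    """priors: branch -> prior probability; surprisingness: branch -> score.
    Each branch's prior is multiplied by exp(beta * surprisingness),
    then the whole distribution is renormalized.  beta = 0 leaves the
    prior unchanged, mirroring 'no surprisingness weighting'."""
    raw = {b: p * math.exp(beta * surprisingness[b])
           for b, p in priors.items()}
    z = sum(raw.values())
    return {b: w / z for b, w in raw.items()}

# Hypothetical numbers, purely for illustration
priors = {"business-as-usual": 0.7, "slow-decline": 0.2, "singularity": 0.1}
surprise = {"business-as-usual": 0.5, "slow-decline": 1.0, "singularity": 4.0}

posterior = reweight(priors, surprise, beta=1.0)
# The highly surprising branch gains probability relative to its prior
assert posterior["singularity"] > priors["singularity"]
```

With these made-up numbers, the Singularity branch jumps from a 10% prior to a clear majority of the posterior weight: the "pull" toward surprising futures is just this exponential overweighting, seen from the inside.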

A fair example of the kind of thing that passes through my head at 2:12 AM Sunday morning ;-) ...

Sunday, November 02, 2014

The SpaceshipTwo Crash and the Pluses and Minuses of Prize-Driven and Incremental Development

SpaceShipTwo, Virgin Galactic's ambitious new plane/spaceship, crashed Friday (two days ago), killing one pilot and seriously injuring another.

This is a human tragedy like every single death; and it's also the kind of thing one can expect from time to time in the course of development of any new technology.   I have no doubt that progress toward tourist spaceflight will continue apace: inevitable startup struggles notwithstanding, it's simply an idea whose time has come.   

Every tragedy is also an occasion for reflection on the lessons implicit in the tragic events.

(In the center picture, SpaceShipTwo is shown slung between the twin fuselages of the mothership that provides its initial lift.)

For me, watching the struggles of the Virgin Galactic approach to spaceflight has also been a bit of a lesson in the pluses and minuses of prize-driven technology development.   SpaceShipTwo is the successor to SpaceShipOne, which won the Ansari X-Prize for commercial spaceflight a decade ago.   At the time it seemed that the Ansari X-Prize would serve at least two purposes:
  1. Raise consciousness generally about the viability of commercial spaceflight, particularly of the pursuit of spaceflight by startups and other small organizations rather than governments and large government contractors
  2. Concretely help pave a way toward commercially viable spaceflight, via progressive development of the winning spaceflight technology into something fairly rapidly commercially successful
It seems clear that the first goal was met, and wonderfully well.  Massive kudos are due to the X-Prize Foundation and Ansari for this.   The press leading up to and following from the Ansari X-Prize made startup spaceflight into a well-recognized "thing" rather than a dream of a tiny starry-eyed minority.

Regarding the second goal, though, things are much less clear.    Just a little before the tragic SpaceShipTwo crash, a chillingly prescient article by Doug Messier was posted, discussing the weaknesses of the SpaceShipTwo design from a technical perspective.   If you haven't read it yet, I encourage you to click and read it through carefully -- the article you're reading now is basically a reflection on some of the points Messier raises, and a correlation of some of those points with my own experiences in the AI domain.

Messier's article traces SpaceShipTwo's development difficulties back to the SpaceShipOne design, on which it was based -- and points out that this design may well have been chosen (implicitly, if not deliberately) based on a criterion of winning the Ansari X-Prize quickly and at relatively low cost, rather than a criterion of serving as the best basis for medium-term development of commercial spaceflight technology.

As Messier put it,

It turns out that reaching a goal by a deadline isn’t enough; it matters how you get there. Fast and dirty doesn’t necessarily result in solid, sustainable programs. What works well in a sprint can be a liability in a marathon.

However, while I am fascinated by Messier's detailed analysis of the SpaceShipOne and SpaceShipTwo technologies, I'm not sure I fully agree with the general conclusion he draws -- or at least not with the way he words his conclusions.   His article is titled "Apollo, Ansari and the Hobbling Effects of Giant Leaps" -- he argues that a flaw in both the Ansari X-Prize approach and the Apollo moon program was an attempt to make a giant leap, by hook or by crook.  In both cases, he argues, the result was a technology that achieved an exciting goal using a methodology that didn't effectively serve as a platform for ongoing development.

Of course, the inspirational value of putting a man on the moon probably vastly exceeded the technical value of the accomplishment - and the inspirational value was the main point at the time.    But I think it's also important to make another point: the problem isn't that pushing for Giant Leaps is necessarily bad.   The problem is that pushing for a Giant Leap that is defined for non-technical, non-scientific reasons, with a tight time and/or financial budget, can lead to "fast and dirty" style short-cuts that render the achievement less valuable than initial appearances indicate.

That is: if the goal is defined as "Achieve Giant Leap Goal X as fast and cheaply as possible," then the additional goal of "Create a platform useful for leaping beyond X" is not that likely to be achieved as well, along the way.   And further -- as I will emphasize below -- I think the odds of the two goals being aligned are higher if Giant Leap Goal X emerges from scientific considerations, as opposed to from socially-oriented marketing or flashy-demonstration considerations.

It's interesting that Messier argues against Giant Leaps and in favor of incremental development.   And yet there is a sense in which SpaceShipOne/Two represents incremental development at its most incremental.    I'm thinking of the common assumption in the modern technology world, especially in Silicon Valley, that the best path to radical technological success is also generally going to be one that delivers the most awe-inspiring, visible and marketable results at each step of the way.   The following graphic is sometimes used to illustrate this concept:

On the surface, the SpaceShipTwo approach exemplifies this incremental development philosophy perfectly.   It's a spaceplane, an incremental transition between plane and spaceship; and the spaceship portion is lifted high into the air initially by a plane.   It's precisely because of taking this sort of incremental approach that SpaceShipOne was able to win the Ansari X-Prize with the speed and relatively modest cost that it did.

On the other hand, Messier favors a different sort of incremental spacecraft development -- not incremental steps from plane to plane/spacecraft to spacecraft, but rather ongoing incremental development of better and better materials and designs for making spacecraft, even if this process doesn't lead to commercial space tourism at the maximum speed.   In fact, scientific development is almost always incremental -- the occasional Eureka moment notwithstanding (and Eureka moments tend to rest on large amounts of related incremental development).

It seems important, in this context, to distinguish between incremental basic scientific/technological progress and incremental business/marketing/demonstration progress.   Seeking incremental scientific/technological progress makes sense (though other issues emerge here, in terms of pathologies resulting from trying too hard to quantitatively and objectively measure incremental scientific/technological progress -- I have discussed this in an AGI context before).   But the path of maximally successful incremental business/marketing/demonstration progress often does not follow the most sensible incremental scientific path -- rather, it sometimes involves "fast and dirty" technological choices that don't advance the underlying science much at all.

In my own work on AGI development, I have often struggled with exactly these trade-offs.    The incremental business/marketing/demo development approach has huge commercial advantages, as it has more potential to yield something money-making at each step of the way.   It also has advantages in the purely academic world, in terms of giving one better demos of incremental progress at each step of the way, which helps with keeping grant funding flowing in.   The advantages also hold up in the pure OSS software domain, because flashy, showy incremental results help with garnering volunteer activity that moves an OSS project forward.

However, when I get into the details of AGI development, I find this "incremental business/marketing/demo" approach often adds huge difficulty.  In the case of AGI the key problem is the phenomenon I call cognitive synergy, wherein the intelligence of a cognitive system largely comes from the emergent effects of putting many parts together.   So, it's more like the top picture in the above graphic (the one that's supposed to be bad) rather than the bottom picture.    Building an AGI system with many parts, one is always making more and more scientific and technological progress, step by step and incrementally.   But in terms of flashy demos and massive commercial value, one is not necessarily proceeding incrementally, because the big boost in useful functionality is unlikely to come before a lot of work has been done on refining individual parts and getting them to work together.

Google, IBM, and other big companies that have recently redoubled their efforts in the AI space are trying to follow the bottom-picture approach, working toward advanced AGI largely via incrementally improving their product and service functionalities using AI technology.  Given the amount of funding and manpower they have, they may be able to make this work.   But where AGI is concerned, it's pretty clear to me that this approach adds massive difficulty to an already difficult task.

One lesson the SpaceShipOne/Two story has, it seems to me, is that aggressive pursuit of the "maximize incremental business/marketing/demo results" path has not necessarily been optimal for commercial spaceflight either.   It has been fantastically successful marketing-wise, but perhaps less so technically.

I've been approached many times by people asking my thoughts on how to formulate a sort of X-Prize for AGI.   A couple times I put deep thought into the matter, but each time I came away frustrated -- it seemed like every idea I thought of was either

  • "Too hard", in the sense that winning the prize would require having a human-level AGI (in which case the prize becomes irrelevant, because the rewards for creating a human-level AGI will be much greater than any prize); OR
  • Susceptible to narrow-AI approaches -- i.e. likely to end up rewarding teams who pushed toward winning the prize quickly via taking various short-cuts, using approaches that probably wouldn't be that helpful toward eventually achieving human-level AGI

The recently-proposed AI TED-talk X-Prize seems to me likely to fall into the latter category.   I can envision a lot of approaches to building AIs that give effective TED talks but are basically "specialized TED talk giving machines," designed and intensively engineered for the purpose, without really having architectures suitable as platforms for long-term AGI development.   And if one had a certain fixed time and money budget for winning the AI TED-talk X-Prize, pursuing this kind of specialized approach might well be the most rational course.   I know that if I myself join a team aimed at winning the prize, there will be loads of planning discussions aimed at balancing "the right way to do AGI design/development" versus "the cleverest way to win the prize."

On the other hand, as a sci-tech geek I need to watch out for my tendency to focus overly on the technical aspects.  The AI TED-Talk X-Prize, even if it does have the shortcomings I've mentioned above, may well serve amazingly well from a marketing perspective, making the world more and more intensely aware of the great potential AI holds today, and the timeliness of putting time and energy and resources into AGI development.

I don't want to overgeneralize from the SpaceShipTwo crash -- this was a specific, tragic event; and any specific event has a huge amount of chance involved in it.    Most likely, in a large percentage of branches of the multiverse, the flight Friday went just fine.    I also don't want to say that prize-driven development is bad; it definitely has an exciting role to play, at very least in helping to raise public consciousness about technology possibilities.  And I think that sometimes the incremental business/marketing/demo progress path to development is exactly the right thing.   As well as being a human tragedy, though, I think the recent SpaceShipTwo accident does serve as a reminder of the limitations of prize-driven technology development, and a spur to reflect on the difficulties inherent in pursuing various sorts of "greedy" incremental development.