
Saturday, March 22, 2014

Lessons from Deep Mind & Vicarious


Recently we've seen a bunch of Silicon Valley money going into "deep learning" oriented AI startups -- an exciting trend for everyone in the AI field.  Even for those of us who don't particularly aspire to join the Silicon Valley crowd, the symbolic value of these developments is dramatic.   Clearly AI is getting some love once again.

The most recent news is a US$40M investment from Mark Zuckerberg, Elon Musk, Ashton Kutcher and others into Vicarious Systems, a "deep learning computer vision" firm led by Dileep George, who was previously Jeff Hawkins' lead researcher at Numenta.

A couple months ago, the big story was Google acquiring London deep reinforcement learning firm Deep Mind for something like US$500M.   Many have speculated that this was largely an "acqui-hire", but with 60 employees or so, that would set the price per employee at close to US$10M, way above the $1M-$2M value assigned to a Silicon Valley engineer in a typical acqui-hire transaction.   Clearly a tightly-knit team of machine learning theory and implementation experts is worth a lot to Google these days, dramatically more than a comparable team of application programmers.

Both of these are good companies led by great researchers, whom I've admired in the past.   I've met Deep Mind's primary founder, Demis Hassabis, at a few conferences, and found him to have an excellent vision of AGI, plus a deep knowledge of neuroscience and computing.   One of Deep Mind's other founders, Shane Legg, worked for me at Webmind Inc. during 1999-2001.   I know Dileep George less well; but we had some interesting conversations last summer, when at my invitation he came to speak at the AGI-13 conference in Beijing.

Vicarious's focus so far has been on visual object recognition -- identifying the objects in a picture.  As Dileep described his progress at AGI-13: once they crack object recognition, they will move on to recognizing events in videos.  Once they crack that, they will move on to other aspects of intelligence.   Dileep, like his mentor Jeff Hawkins, believes that perceptual data processing is the key to general intelligence... and that vision is the paradigm case of human perceptual data processing...

Zuckerberg's investment in Vicarious makes a lot of sense to me.  Given Facebook's large trove of pictures and the nature of their business, software that can effectively identify objects in pictures would clearly be of great value to them.

Note that Facebook just made a big announcement about the amazing success of their face recognition software, which they saddled with the probably suboptimal name "Deep Face" (a bit Linda Lovelace, no?).  If you dig into the research paper behind the press release, you'll see that DeepFace actually uses a standard, well known "textbook" AI algorithm (convolutional neural nets) -- but they deployed it across a huge amount of data, hence their unprecedented success...
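
For the curious, the core operation behind convolutional nets like DeepFace is easy to sketch.  The toy example below (plain numpy, my own illustrative construction -- no relation to Facebook's actual code or architecture) shows a single hand-made convolutional filter detecting a vertical edge, followed by the max-pooling step that summarizes the feature map.  Real systems simply learn thousands of such filters from data instead of hand-coding them:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A toy 6x6 "image" with a vertical edge down the middle
image = np.array([[1, 1, 1, 0, 0, 0]] * 6, dtype=float)

# A hand-made vertical-edge detector kernel
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

feature_map = relu(conv2d(image, kernel))  # strong response along the edge
pooled = max_pool(feature_map)             # 2x2 summary after max-pooling
print(pooled)
```

Stack many such convolution + pooling layers, learn the kernels by gradient descent, and scale the training set into the millions of images -- that, in essence, is the "textbook algorithm plus huge data" recipe behind DeepFace's numbers.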

Lessons to Learn?


So what can other AGI entrepreneurs learn from these recent big-$$ infusions to Deep Mind (via acquisition) and Vicarious (by investment)?

The main lesson I take from this is the obvious one: a great, genuinely working demo (not the quasi-faked-up demo one often sees) goes a long way...

Not long ago Vicarious beat CAPTCHA -- an accomplishment very easy for any Internet user to understand.

On the other hand, the Deep Mind demo that impressed Larry Page was the ability to beat simple video games via reinforcement learning.

Note that (analogously to IBM Watson), both of these demos involve making the AI meet a challenge that was not defined by the AI makers themselves, but was rather judiciously plucked from the space of challenges posed by the human world....


I.e.: doing something easily visually appreciable, that previously only humans could do...

Clearly Deep Mind and Vicarious did not excel particularly in terms of business model, as compared to many other firms out there...

Other, also fairly obvious points from these acquisitions are:
  1. For an acquihire-flavored acquisition at a high price, you want a team of engineers in a First World country, who look like the profile of people the acquiring company would want to hire.
  2. Having well-connected, appropriately respected early investors goes a long way.  Vicarious and Deep Mind both had Founders Fund investment.   Of course FF investment didn't save Halcyon Molecular, so it's no guarantee, but having the right early-stage investors is certainly valuable.

 

Bubble or Start of a Surge?


And so it goes.  These are interesting times for AI, indeed.    

A cynic could say it's the start of a new AI bubble -- that this wave of hype and money will be followed by disappointment in the meager results obtained by all the effort and expense, and then another "AI winter" will set in.

But I personally don't think so.   Whether or not the Vicarious and Deep Mind teams and technologies pay off big-time for their corporate investors (and I think they do have a decent chance to, given the brilliant people and effective organizations involved), I think the time is now ripe for AI technology to have a big impact on the world. 
DeepFace is going to be valuable for Facebook; just as machine learning and NLP are already delivering value for Google in their core search and ads businesses, and will doubtless deliver even more value with the infusion of the Deep Mind team, not to mention Ray Kurzweil's efforts as a Google Director of Engineering.

The love that Silicon Valley tech firms are giving AI is going to help spur many others all around the world to put energy into AI -- including, increasingly, AI projects verging on AGI -- and the results are going to be amazing.

 

Are Current Deep Learning Methods Enough for AGI?


Another lesson we have learned recently is that contemporary "deep learning" based machine learning algorithms, scaled up on current-day big data and big hardware, can solve a lot of hard problems.

Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.   Self-driving cars are not here yet -- but a self-driving car can, I suspect, be achieved via a narrow-AI integration of various components, without any general intelligence underlying it.   IBM Watson won at Jeopardy!, and a similar approach can likely succeed in other specialized domains like medical diagnosis (which was actually addressed fairly well by simpler expert systems decades ago, even without Watson's capability to extract information from large bodies of text).  Vicarious, or others, can probably solve the object recognition problem pretty well, even with a system that doesn't understand much about the objects it's recognizing -- "just" by recognizing patterns in massive image databases.

Machine translation is harder than the problems above, but if one is after translation of newspaper text or similar, I suppose it may ultimately be achievable via statistical ML methods.  That said, the rate of improvement of Google Translate has not been that amazing in recent years -- it may have hit a limit in terms of what can be done by these methods.  The MT community is looking more at hybrid methods these days.

It would be understandable to conclude from these recent achievements that statistical machine learning / deep learning algorithms have the AI problem basically solved, and that focusing on different sorts of Artificial General Intelligence architectures is unnecessary.

But such a conclusion would not be correct.   It's important to note that all the problems I've just mentioned are ones that have been focused on lately precisely because they can be addressed fairly effectively by narrow-AI statistical machine learning methods on today's big data/hardware...

If you picked other problems like 
  • being a bicycle messenger on a crowded New York Street
  • writing a newspaper article on a newly developing situation
  • learning a new language based on real-world experience
  • identifying the most meaningful human events, among all the interactions between people in a large crowded room
then you would find that today's statistical / ML methods aren't so useful...

In terms of my own work with OpenCog, my goal is not to outdo CNNs or statistical MT on the particular problems for which they were developed.  The goal is to address general intelligence...

The recent successes  of deep learning technology and other machine learning / statistical learning approaches are exciting, in some cases amazing.  Yet these technologies address only certain aspects of the broader AI problem.

One hopes that the enthusiasm and resource allocation that the successes of these algorithms are bringing, will cause more attention, excitement and funding to flow into the AI and AGI worlds as a whole, enabling more rapid progress on all the different aspects of the AGI problem.



Thursday, February 06, 2014

Why Humans Are So Screwy



Aha!!! ... Last night I had the amusing and satisfying feeling that I was finally grokking the crux of the reason why we humans are so screwy -- I never saw it quite so clearly before!

Here's the upshot: A big factor making human beings so innerly complicated is that in our psyches two different sources of screwiness are layered on top of each other:

  1. The conflict between the results of individual and group (evolutionary) selection, encoded in our genome
  2. The emergence of civilization, to which we are not adapted, which disrupted the delicate balance via which tribal human mind/society quasi-resolved the above-mentioned conflict

I.e.: the transition to civilized society disrupted the delicate balance between self-oriented and group-oriented motivations that existed in the tribal person's mind.   In place of the delicate balance we got a bunch of self vs. group conflict and chaos -- which makes us internally a bit twisted and tormented, but also stimulates our creativity and progress.

Screwiness Source 1: Individual versus Group Selection


The first key source of human screwiness was best articulated by E.O. Wilson; the second was best articulated by Freud.  Putting the two together, we get a reasonably good explanation for why and how we humans are so complexly self-contradictory and, well, "screwy."

E.O. Wilson, in his recent book The Social Conquest of Earth, argues that human nature derives its complex, conflicted nature from the competitive interplay of two kinds of evolution during our history: individual and group selection.  Put simply:


  • Our genome has been shaped by individual selection, which has tweaked our genes in such a way as to maximize our reproductive success as individuals
  • Our genome has also been shaped by group selection, which has tweaked our genes in such a way as to maximize the success of the tribes we belonged to


What makes a reproductively successful individual is, by and large, being selfish and looking out for one's own genes above those of others.  What makes a successful *tribe* is, by and large, individual tribe members who are willing to "take one for the team" and put the tribe first.

Purely individual selection will lead to animals like tigers that are solitary and selfish.  Purely group selection will lead to borg-like animals like ants, in which individuality takes a back seat to collective success.  The mix of individual and group selection will lead to animals with a complex balance between individual-oriented and group-oriented motivations.
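
Wilson's individual-vs-group dynamic is simple enough to caricature in code.  The toy simulation below is entirely my own illustrative construction (not a model from Wilson's book): within-group selection steadily erodes each group's cooperation level, while with some probability a more cooperative group replaces a less cooperative one.  Turning the group-selection knob up preserves cooperation that individual selection alone would destroy:

```python
import random

def simulate(group_weight, generations=300, n_groups=20, seed=0):
    """Each group has a cooperation level in [0, 1].  Individual
    selection erodes cooperation inside every group each generation;
    with probability group_weight, a randomly chosen pair of groups
    competes and the more cooperative one replaces the other."""
    rng = random.Random(seed)
    groups = [rng.random() for _ in range(n_groups)]
    for _ in range(generations):
        # Individual selection: defectors gain ground within each group
        groups = [max(0.0, p - 0.002) for p in groups]
        # Group selection: the more cooperative of a random pair
        # of groups sometimes replaces the less cooperative one
        a, b = rng.sample(range(n_groups), 2)
        if rng.random() < group_weight:
            winner, loser = (a, b) if groups[a] >= groups[b] else (b, a)
            groups[loser] = groups[winner]
    return sum(groups) / n_groups

selfish_world = simulate(group_weight=0.0)  # individual selection only
mixed_world = simulate(group_weight=0.9)    # strong group selection added
print(selfish_world, mixed_world)
```

With group_weight at zero the population drifts toward the all-selfish "tiger" regime; cranking it toward one pushes toward the ant-like collective; intermediate values give the conflicted mix described above.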

As Wilson points out, many of the traits we call Evil are honed by individual selection; and many of the traits we call Good are honed by group selection.

That's Screwy Human Nature, Part 1.


Good vs. Evil vs. Hierarchy-Induced Constraints 


These points of Wilson's tie in with general aspects of constraint in hierarchical systems.   This observation provides a different way of phrasing things than Wilson's language of Good vs. Evil.   As opposed to adopting traditional moral labels, I wonder if a better way to think about the situation might be in terms of the tension and interplay between
  • adapting to constraints

vs.

  • pushing against constraints and trying to get beyond them
In the context of social constraints, it seems that individual selection (in evolution) would lead us to push against social constraints to seek individual well-being; whereas group selection would lead us to adapt to the social constraints regardless of our individual goals...


Much great (and mediocre) art comes from pushing against the constraints of the times -- but it's critical to have constraints there to push against; that's where a lot of the creativity comes from. You could think about yoga and most sports similarly ... you're both adapting to the particularities of the human body, and trying to push the body beyond its normal everyday-life limits...

From the point of view of the tribe/society, those who push against the constraints too much can get branded as Evil and those who conform can get branded as Good..... But it all depends on what level you're looking at.... From the point of view of the human body, the cell that doesn't conform to the system will be branded as Evil (non-self) and eliminated by the immune system!!

In any hierarchical system, from the perspective of entities on level N, the entities on level N+1 impose constraints -- constraints that restrict the freedom of the level N entities in order to enable functionality on level N+1; but also have potential to guide the creativity of level N entities.  Stan Salthe's book Evolving Hierarchical Systems makes this point wonderfully.   In some cases, like the human body vs. its cells, the higher level is dominant and the creativity of the lower level entities is therefore quite limited.  In the case of human society vs. its members, the question of whether the upper or lower level dominates the dynamics is trickier, leaving more room for creativity on the part of the lower level entities (humans), but also making the lives of the lower level entities more diversely complex.

Screwiness Source 2: The Discontents of Civilization


Moving on -- Screwy Human Nature, Part 2 was described with beautiful clarity by Sigmund Freud in his classic book Civilization and its Discontents.

What Freud pointed out there is that neurosis -- internal mental stress, unhappiness, repression and worry -- is a result of the move from nomadic tribal society to sedentary civilized society.  In tribal societies, he pointed out, by and large people were allowed to express their desires fairly freely, and to get their feelings out of their systems relatively quickly and openly, rather than repressing them and developing complex psychological problems as a result.

A fascinating recent book recounting one modern linguist/missionary's contact with a modern Stone Age society in the Amazon, the Piraha, is Daniel Everett's Don't Sleep There Are Snakes.   A book I read in the 1980s, recounting an average guy from Jersey dropping his life and migrating to Africa to live with a modern Stone Age pygmy tribe in central Africa, is Song From the Forest.  (The photos below show Louis and some of his Bayaka friends.  Some recent news from Louis Sarno is here, including an intriguing recent video, a trailer for a forthcoming movie.) These accounts and others like them seem to validate Freud's analysis.  The tribal, Stone Age lifestyle tends not to lead to neurosis, because it matches the human emotional makeup in a basic way that civilization does not.




Wilson + Freud = Why We Are So Screwy


I full well realize the "noble savage" myth is just that -- obviously, the psychology of tribal humans was not as idyllic and conflict-free as some have imagined.   Tribal humans still have the basic conflict between individual and group selection embedded into their personalities.  BUT it seems to me that, in tribal human sociopsychology, evolution has worked out a subtle balance between these forces.  The opposing, conflicting forces of Self and Group are intricately intermeshed.

What civilization does is to throw this balance off -- and put the self-focused and group-focused aspects of human nature out of whack in complex ways.  In tribal society  Self and Group balance against each other elegantly and symmetrically -- there is conflict, but it's balanced like yin and yang.  In civilized society, Self and Group are perpetually at war, because the way our self-motivation and our group-motivation have evolved was right for making them just barely balance against each other in a tribal context; so it's natural that they're out of balance in complex ways in a civilization context.

For example, in a tribal situation, it is a much better approximation to say that: What's good for the individual is good for the group, and vice versa.   The individual and group depend a lot on each other. Making the group stronger helps the individual in very palpable ways (if a fellow hunter in the tribe is stronger for instance, he's more likely to kill game to share with you).  And if you become happier or stronger or whatever, it's likely to significantly benefit the rest of the group, who all directly interact with you and are materially influenced by you.   The harmony between individual interest and group interest is not perfect, but it's at least reasonably present ... the effects of individual and group selection have been tuned to work decently together.

On the other hand, in a larger civilized society the connection between individual and group benefit is far more erratic.   What's good for me, as a Hong Kong resident, is not particularly the same as what's good for Hong Kong.   Of course there's a correlation, but it's a relatively weak one.   It's reasonably likely that what's good for Hong Kong as a unit could actually make my life worse (e.g. raising taxes, as my income level is above average for HK).  Similarly, most things that are likely to improve my life in the near future are basically irrelevant to the good of Hong Kong; in fact, my AGI research work is arguably bad for all political units in the long term, as advanced AGI is likely to lead to the transcendence of nation-states.   There is definitely some correlation between my benefit and Hong Kong's benefit -- if I create a successful company here in HK, that benefits the HK economy.   But the link is fairly weak, meaning that my society is often going to push me to do stuff that goes against my personal interest; and vice versa.  This seems almost inevitable in a complex society containing people playing many different roles.

Another interesting case is lying.   Lying of course occurs in tribal societies just like in advanced civilizations -- humans are dishonest by nature, to some extent.   Yet, only in complex civilizations do we have a habit of systematically putting on "false fronts" before others.  This doesn't work so well if you're around the same 50 people all the time.   Yet it's second nature to all of us in modern civilization -- we learn in childhood to act one way at home, one way at school, one way around grandma, etc.

As we mature, the habit of putting on false fronts -- or as Nietzsche called them, "masks" -- becomes so integrated into our personalities that the fronts aren't even "false" anymore.   Rather, our personalities become melanges of subselves, with somewhat different tastes and interests and values, in a complex coopetition for control of our thoughts and memories.  This is complex and stressful, but stimulates  various sorts of creativity.

Sarno reports how the interaction of the Bayaka pygmies with civilization caused them to develop multiple subpersonalities.  A pygmy's personality while living the traditional nomadic lifestyle in the bush, may be very different from that same pygmy's personality while living in a village with Africans from other tribes, drinking alcohol and doing odd jobs for low wages.

Individually, we have a motive to lie and make others think we are different in various ways than we actually are.   Tribally, group-wise, there is a reason for group members to tell the truth -- a group with direct and honest communication and understanding is likely to do better on average, in many important contexts, because deception often brings with it lots of complexity and inefficiency.   The balance between truth and lying is wired into our physiology -- typical people can lie only a little bit without it showing in their faces.   But modern society has bypassed these physiological adaptations, which embody tribal society's subtle balance between self and group motivations, via the creation of new media like telephones, writing and the Internet, which bypass telltale facial expressions and open up amazing new vistas for systematic self-over-group dishonesty.   Then society, and the minds of individuals within it, must set up all sorts of defense mechanisms to cope with the rampant dishonesty.   The balance of self versus group is fractured, and complexity emerges in an attempt to cope, but never quite copes effectively, and thus keeps ramifying and developing.

In Freudian terms, civilization brought with it the split between the Ego and Super-ego -- between what we are (at a given point in time), and what we think we should be.  It also brought with it a much more complex and fragmented Ego than was present in tribal peoples.

What Wilson makes clear is: the pre-civilized human mind already had within it the split between the Self-motivation and Group-motivation.  Freud somewhat saw this as well, with his Id as a stylized version of the pure Self-motivation and his Ego going beyond this to balance Self versus Group.

The Freudian Ego and Super-ego are different ways of balancing Self versus Group.  The perversity and complexity of civilized society is that each of us is internally pushed to balance the conflict of Self vs. Group in one way (via our Ego, which is largely shaped for tribal society), while feeling we "should" be carrying out this balance in a different way (via our Super-Ego, which comes from civilized culture).  Of course these Freudian terms are not scientific or precisely defined, and shouldn't be taken too seriously.   But they do paint an evocative picture.

How much of this kind of inner conflict is a necessary aspect of being an intelligent individual mind living in a civilization?  Some, to be sure -- there is always going to be some degree of conflict between what's good for the individual and what's good for the group.  But having genomes optimized for tribal society, while living in civilized society, foists an additional layer of complexity on top of the intrinsic conflict.  The fact that our culture changes so much faster than our genomes, means that we are not free to seek the optimal balance between our current real-life Self and Group motivations, consistent with the actual society we are living in.  Instead we must live with methods of balancing these different motivations, that were honed in radically different circumstances than the ones we actually reside in and care about.

A Transhumanist Punchline


This is Benjamin Nathaniel Robot Goertzel's blog, so you knew there would be a transhumanist angle coming eventually, right? -- Once we achieve the ability to modify our brains and bodies according to our wishes, we will be able to adapt the way we balance Self versus Group in a much more finely-tuned and contextually appropriate way.

To the extent that layers of conflict within conflict are what characterize humanity, this will make us less human.  But it will also make us less perverse, less confused, and more fulfilled.

Our Screwiness Spurs Our Creativity and Progress


The punchier punchline, though, is that what is driving us toward the reality of amazing possibilities like flexible brain and body modification is -- precisely the screwiness I've analyzed above.

It's the creative tension between Self and Group that drove us to create sophisticated language in the first place.   One of the earliest uses of language, that helped it to grow into the powerful tool it now is, was surely gossip -- which is mainly about Self/Group tensions.

And our Self and Group aspects conspired to enable us to develop sophisticated tools.  Invention of new tools generally occurs via some wacky mind off in the corner fiddling with stuff and ignoring everybody else.  But, we do much better than other species at passing our ideas about new tools on from generation to generation, leveraging language and our rich social networking capability -- which is what allows our tool-sets to progressively improve over time.

The birth of civilization clearly grew from the same tension.   Tribal groups that set up farms and domesticated animals, in certain ecological situations, ended up with greater survival value -- and thus flourished in the group selection competition.  But individuals, seeking the best for themselves, then exploited this new situation in a variety of complex ways, leading to developments like markets, arts, schools and the whole gamut.  Not all of these new developments were actually best for the tribe -- some of the ways individuals grew to exploit the new, civilized group dynamics actually were bad for the group.  But then the group adapted, and got more complex to compensate.  Eventually this led to twisted sociodynamics like we have now ... with (post)modern societies that reject and psychologically torment their individualistic nonconformist rebels, yet openly rely on these same rebels for the ongoing innovation needed to compensate for the widespread dissatisfaction modernity fosters.

And the creativity spurred by burgeoning self/group tensions continues and blossoms multifariously.  Privacy issues with Facebook and the NSA ... the rise and growth and fluctuation of social networks in general ... the roles of anonymity and openness on the Net ... websites devoted to marital infidelity ... issues regarding sharing of scientific data on the Net or keeping it private in labs ... patents ... agile software development ... open source software licenses and processes ... Bill Gates spending the first part of his adult life making money and the second part giving it away.   The harmonization of individual and group motivations remains a huge theme of our world explicitly, and is even more important implicitly.

I imagine that, long after humans have transcended their legacy bodies and psychologies, the tension between Self and Group will remain in some form.  Even if we all turn into mindplexes, the basic tension that exists between different levels in any hierarchical system will still be there.   But at least, if it's screwy, it will be screwy in more diverse and fascinating ways!  Or beyond screwy and non-screwy, perhaps ;-)

Monday, January 27, 2014

Hawking's new thoughts on information & chaos & black hole physics...

Interesting new paper by Stephen Hawking, though I only half-understand it... (ok maybe 2/3 ...)

http://arxiv.org/pdf/1401.5761v1.pdf

Basically: He is discussing a certain case [stuff happening inside a black hole] where general relativity says something is not observable, but quantum theory says it is in principle observable....

Hawking's new solution is that the data escaping from inside the black hole is chaotically messed up, so that it's sort of in principle observable, but in practice too complicated to actually see...

This seems in line with my notion that quantum logic is for stuff that you cannot in principle measure -- where the YOU is highlighted ... i.e. this has to do with what you, as an information-processing system, have the specific capacity to measure without losing your you-ness...

hmmmm...

Tuesday, July 16, 2013

Robot Toddlers and Fake AI 4 Year Olds

Oh, the irony...

At the same time as my OpenCog project is running an Indiegogo crowdfunding campaign aimed at raising funds to create a robot toddler, by using OpenCog to control a Hanson Robokind robot...

... the University of Illinois's press gurus come out with a report about an AI system that is supposedly as smart as a 4 year old.

But what is this system?  It's a program that answers vocabulary and similarity questions as well as a human 4 year old, drawing on MIT's ConceptNet database.

Whoopie!   My calculator can answer arithmetic questions better than I can -- does that make it a superintelligence? ;-D ....

A toddler is far more than a question-answering program back-ended on a fixed database, obviously....

This Illinois/MIT program is basically like IBM Watson, but for a different set of knowledge...

ConceptNet is an intriguing resource, and one of the programmers in our Addis Ababa OpenCog lab is currently playing with importing it into OpenCog....

But obviously, this Illinois/MIT software lacks the ability to learn new skills, to play, to experiment, to build, to improvise, to discover, to generalize beyond its experience, etc.....   It has basically none of the capabilities of the mind of a 4 year old child.

BUT... one thing is clear ... these universities do have excellent PR departments!

The contrast between their system -- a question-answering system based on MIT's ConceptNet knowledge base -- and the system OpenCog, Hanson and I are building is both dramatic and instructive.

The Illinois/MIT program is, they report, as good as a human 4 year old at answering vocabulary and similarity questions.  

OK, I believe that.   But: Big deal!   A calculator is already way better than a human 4 year old at answering arithmetic questions!

What we are after with our project is not just a system that passes certain tests as well as a human toddler.  We are after a system that can understand and explore the world, and make sense of itself and its surroundings and its goals and desires and feelings and worries, in the rough manner of a human toddler.  This is a wholly different thing.

The kind of holistic toddler-like intelligence we're after, would naturally serve as a platform for building greater and greater levels of general intelligence -- moving toward adult-level AGI....

But a question-answering system based on ConceptNet doesn't particularly build toward anything -- it doesn't learn and grow.  It just replies based on the data in its database.

It is unfortunate, but not terribly surprising, that this kind of distinction still needs to be repeated over and over again.  General intelligence - the ability to achieve a variety of complex goals in a variety of complex environments, including goals and environments not foreseen in advance by the creators of the intelligent system -- is a whole different kettle of fish than engineering a specialized intelligent system for a specific purpose.


The longer I work on AGI, the more convinced I am that an embodied approach will be the best way to fully solve the common sense problem.   The AI needs to learn common sense by learning to control a robot that does commonsensical things....  Then the ability to draw analogies and understand words will emerge from the AI's ability to understand the world and relate different experiences it has had.   Whereas, a system that answers questions based on ConceptNet is just manipulating symbols without understanding their meaning, an approach that will never lead to real human-like general intelligence.

The good news is, my OpenCog colleagues and I know how to make a robot that will achieve first toddler-level commonsense knowledge, and then full-scale human-adult level AGI.   And then what?

The less exciting news is, it's going to take a lot of work -- though exactly how many years depends on how well funded our project is.

Next Big Future just ran an extensive interview with me on these topics, check it out if you're curious for more information...


Monday, June 10, 2013

Physicists Rediscover Sheldrake's Morphic Fields ... and my Morphic Pilot Wave ...

Today Damien Broderick pointed out to me an Edge interview with physicist Lee Smolin, which led me to a fascinating article by Smolin titled "Precedence and freedom in quantum physics."

Smolin's article is deep and thought-provoking -- and overlaps greatly with prior thinking by some other folks, such as Rupert Sheldrake and Charles Peirce and myself.


Smolin's Principle of Precedence


Smolin explores augmenting the standard axiomatic foundation of quantum physics with an additional axiom, namely the

Principle of precedence: When a quantum process terminating in a measurement has many precedents, which are instances where an identically prepared system was subject to the same measurement in the past, the outcome of the present measurement is determined by picking randomly from the ensemble of precedents of that measurement.

Or as he puts it in his Edge interview, "nature is developing habits as it goes along."

His goal is to explore the possibility that the laws of nature can be viewed as accumulating historically via the principle of precedence, rather than being fixed and immutable laws...
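Smolin's principle is almost algorithmic in flavor, so it can be caricatured in a few lines of toy code.  The sketch below is entirely my own illustrative construction (the class name, the precedent threshold of 20, and all other details are invented, not from Smolin's paper): a novel measurement is maximally random, while a measurement with many precedents draws its outcome from the ensemble of past outcomes.

```python
import random

class PrecedenceUniverse:
    """Toy caricature of Smolin's Principle of Precedence."""
    def __init__(self, seed=None):
        self.precedents = {}            # preparation -> list of past outcomes
        self.rng = random.Random(seed)

    def measure(self, preparation, possible_outcomes):
        past = self.precedents.setdefault(preparation, [])
        if len(past) >= 20:
            # Many precedents: pick randomly from the ensemble of past
            # outcomes, so the established statistics become a "habit".
            outcome = self.rng.choice(past)
        else:
            # Novel (or nearly novel) measurement: maximally random.
            # (Smolin leaves the few-precedents regime open; a flat
            # random choice is just my placeholder for it.)
            outcome = self.rng.choice(possible_outcomes)
        past.append(outcome)
        return outcome

u = PrecedenceUniverse(seed=0)
outcomes = [u.measure("spin-x on state S", ["up", "down"]) for _ in range(1000)]
print(outcomes.count("up") / len(outcomes))
```

Run this with different seeds and the long-run "up" frequency stabilizes near whatever statistics the early, lawless trials happened to establish -- the toy universe develops a habit for this preparation rather than obeying a pre-fixed law.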



Sheldrake's Morphic Fields


But this principle is awfully reminiscent of Rupert Sheldrake's (highly controversial) notion of morphic fields...

I propose that memory is inherent in nature. Most of the so-called laws of nature are more like habits. 

The idea is that there is a kind of memory in nature. Each kind of thing has a collective memory. So, take a squirrel living in New York now. That squirrel is being influenced by all past squirrels.

The habits of nature depend on non-local similarity reinforcement. Through morphic resonance, the patterns of activity in self-organizing systems are influenced by similar patterns in the past, giving each species and each kind of self-organizing system a collective memory.


Sheldrake's core idea regarding morphic fields is that, once a pattern occurs somewhere in the universe, it is more likely to occur elsewhere.

The parallel between Smolin's and Sheldrake's ideas is fairly obvious, and Bruce Sterling notes it in a comment on Smolin's Edge article: "If nature 'forms habits,' then that's very Rupert Sheldrake"....

Both Smolin and Sheldrake are positing that when something has occurred in the universe, this increases the probability of similar things occurring in the future -- in a nonlocal way, separate from ordinary processes of physical causation...

I have no idea whether Smolin will appreciate this parallel, though.   Sheldrake has developed his morphic field idea fairly extensively as an explanation for psi phenomena, which are widely viewed with skepticism within the scientific community, although they are broadly accepted by the general public, and in my own opinion the evidence for their reality in many cases is pretty strong (see my brief page on psi here).


Peirce's Tendency to Take Habits



The same core idea that Smolin and Sheldrake have articulated can be found considerably earlier in the philosophy of Charles S. Peirce, who wrote in the late nineteenth century of the "tendency to take habits" as a key aspect of the universe, and opined that...

Logical analysis applied to mental phenomena shows that there is but one law of mind, namely that ideas tend to spread continuously and to affect certain others which stand to them in a peculiar relation of affectibility. In this spreading they lose intensity, and especially the power of affecting others, but gain generality and become welded with other ideas.

...

Matter is but mind hide-bound with habit

...

The one intelligible theory of the universe is that of objective idealism, that matter is effete mind, inveterate habits becoming physical laws. But before this can be accepted it must show itself capable of explaining the tridimensionality of space, the laws of motion, and the general characteristics of the universe, with mathematical clearness and precision; for no less should be demanded of every Philosophy.

It seems that Smolin is now attempting to push in precisely the direction Peirce was suggesting in the final quote given above...



Goertzel's Morphic Pilot Wave


I also notice an interesting parallel between Smolin's paper and my own paper on Morphic Pilot Theory from a few years ago.   In that paper, I was trying to connect Sheldrake's morphic field idea with quantum theory, and I posited that one could look at the tendency to take habits as an additional property of the "pilot waves" that Bohmian theory posits to underlie quantum reality.   Specifically, I argued that if one viewed pilot waves as being directed by simplicity, as quantified e.g. by algorithmic information (in which something is simpler if it can be computed by a shorter program), then one could derive a variant of the morphic field hypothesis as a consequence.

Lo and behold, reading Smolin's paper carefully, what do I find?   Smolin notes that, according to his theory:

  • Novel occurrences are in a sense maximally random
  • Occurrences that have happened many times before, are shown via the principle of precedence to simply obey good old quantum mechanics
  • Occurrences that have happened only a few times before, are not explained explicitly by his principle of precedence -- but may perhaps be explained by an additional principle stating that the universe is biased toward outcomes that are simpler in the sense of algorithmic information theory (he attributes this suggestion to some of his colleagues)

Well, well, well....   Obviously Smolin did not read my speculative paper on quantum theory and psi, but he and his colleagues independently arrived at a similar conclusion to that paper.   So science often goes.

All this, of course, is still preliminary and speculative.  For one thing, I find the axiomatic foundation for quantum mechanics that Smolin chose a bit inelegant, though probably better than the Bohmian pilot waves I used in my own paper; and I would love to see how an algorithmic simplicity assumption can be integrated into the much prettier and more fundamental symmetry-based foundation for quantum mechanics recently articulated by Kevin Knuth and his colleagues.


But one does see, here, an interesting direction for bridge-building between quantum theory, morphic fields and psi phenomena.   The connection between psi and quantum mechanics has been discussed a lot, but I've never been convinced that quantum theory on its own can explain psi.  In my Morphic Pilot paper I suggested that augmenting quantum theory with an algorithmic information theory based morphic field type assumption might do the trick.   Without explicitly thinking about psi at all (so far as I can tell, anyway), Smolin has made an interesting move in the same direction.


Quantum Darwinism and State Broadcasting


After writing the above, another suggestion of Damien's led me indirectly to a paper by some different physicists (not collaborators of Smolin), which suggests that Quantum Darwinism (a recent addition to the pantheon of foundations for quantum physics) may be derivable from a phenomenon called

state broadcasting -- a process aimed at proliferating a given state through correlated copies

This appears to me much like a different way of looking at Smolin's Principle of Precedence...

Of course, to get morphic resonance out of this, one would still need some addition such as Smolin's & my suggestion of an Occam's (Aristotle's) Razor-like simplicity principle for the case where there are not that many correlated copies...

All this also makes me wonder about the findings of Aerts, Atmanspacher and others regarding the necessity, in some cases, of modeling classical systems using quantum mathematics and logic.   Could it be the case that, whenever a system internally displays a morphic resonance type dynamic, it is best to model it using some variant of quantum math?



Lots of yummy food for thought!


Tuesday, May 21, 2013

RIP Ray Manzarek


What a bummer to read that Ray Manzarek has died.   

I was born in 1966, and the psychedelic rock of the late 1960s and early 1970s was the music I grew up on.   Later I became more interested in jazz fusion, bebop, classical music and so forth -- but the psychedelic 60s/70s music (Hendrix, Doors, Floyd, Zeppelin) was where my love for music started.  This was the music that showed me the power of music to open up the mind to new realities and trans-realities, to bring the mind beyond itself into other worlds....

Hendrix was and probably always will be my greatest musical hero -- but Ray Manzarek was the first keyboardist who amazed me and showed me the power of wild and wacky keyboard improvisation.   I now spend probably 30-45 minutes a day improvising on the keyboard (and more on weekends!).  I don't have Ray's virtuosity but even so, keyboard improv keeps my mind free and agile and my emotions on the right side of the border between sanity and madness.  Each day I sit at my desk working, working, working -- and when too much tension builds up in my body or I get stuck on a difficult point, I shift over to the piano or the synth and jam a while.   My frame of mind re-sets, through re-alignment with the other cosmos the music puts my mind in touch with.

The Doors and Ray had a lot of great songs.  But no individual song is really the point to me.  The point is the way Ray's music opens up your mind -- the way, if you close your eyes and let it guide you, you follow it on multiple trans-temporal pathways into other realms, beyond the petty concerns of yourself and society ... and when you return your body feels different and you see your everyday world from a whole new view....

The Singularity, if it comes, will bring us beyond petty human concerns into other realms in a dramatic, definitive way.   Heartfelt, imaginative improvisation like Ray Manzarek's can do something similar, in its own smaller (yet in another sense infinite) way -- opening up a short interval of time into something somehow much broader.

As Ray once said:

“Well, to me, my God, for anybody who was there it means it was a fantastic time, we thought we could actually change the world — to make it a more Christian, Islamic, Judaic Buddhist, Hindu, loving world. We thought we could. The children of the ’50s post-war generation were actually in love with life and had opened the doors of perception. And we were in love with being alive and wanted to spread that love around the planet and make peace, love and harmony prevail upon earth, while getting stoned, dancing madly and having as much sex as you could possibly have.” 


w00t! ... those times are gone, and I was too young in the late 60s early 70s to take part in the "getting stoned and having as much sex as you could possibly have" aspect (that came later for me, including some deep early-80s acid trips to Doors music), but my child self picked up the vibe of that era nonetheless ... all the crazy, creative hippies I saw and watched carefully back then affected more than just my hairstyle....   Somewhat like Steve Jobs, I see the things I'm doing now as embodying much of the spirit of that era.   Ray Manzarek and his kin of that generation wanted to transcend boring, limited legacy society and culture and revolutionize everything and make it all more ecstatic and amazing -- and so do I....


I recall a Simpsons episode where Homer gets to heaven and encounters Jimi Hendrix and Thomas Jefferson playing air hockey.  Maybe my memory has muddled the details, but no matter.   I hope very much that, post-Singularity, one of my uploaded clones will spend a few eons jamming on the keyboard with the uploaded, digi-resurrected Ray Manzarek.

Until then: Rest In Peace, Ray....


Sunday, May 19, 2013

Musing about Mental vs. Physical Energy


Hmmm....

I was talking with my pal Gino Yu at his daughter Oneira's birthday party yesterday … and Gino was sharing some of his interesting ideas about mental energy and force…

Among many other notions that I won't try to summarize here, he pointed out that energy (in the sense he meant) is different from arousal as psychologists like to talk about it…  You can have a high-energy state without being particularly aroused -- i.e. you can be high-energy but still and quiescent.

This started me thinking about the relation between "mental energy" in the subjective sense Gino appeared to be intending, and "energy" in physics.

I have sometimes in the past been frustrated by people -- less precise in their thinking than Gino -- talking about "energy" in metaphorical or subjective ways, and equating their intuitive notion of "energy" with the physics notion of "energy."

Gino was being careful not to do this, and to distinguish his notion of mental energy from the separate notion of physical energy.   However, I couldn't help wondering about the connection.   I kept asking myself, during the conversation: Is there some general notion of energy which the physical and mental conceptions both instantiate?

Of course, this line of thinking is in some respects a familiar one, e.g. Freud is full of ideas about mental energy, mostly modeled on equilibrium thermodynamics (rather than far-from-equilibrium thermodynamics, which would be more appropriate as an analogical model for the brain/mind)…

Highly General Formulations of Force, Energy, Etc.

Anyway... here is my rough attempt to generalize energy and some other basic physics concepts beyond the domain of physics, while still capturing their essential meaning.

My central focus in this line of thinking is "energy", but I have found it necessary to begin with "force" ...

Force may, I propose, be generally conceived as that which causes some entity to deviate from its default pattern of behavior ...

Note that I've used the term "cause" here, which is a thorny one.   I think causation must be understood subjectively: a mind M perceives A as causing B if according to that mind's world-model,

  • A is before B
  • P(B|A) > P(B)
  • there is some meaningful (to M) avenue of influence between A and B, as evidenced e.g. by many shared patterns between A and B
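The first two criteria are mechanical enough to check against a log of experiences.  Here's a crude Python sketch -- the function name, data, and representation are all invented for illustration, and the third criterion (a meaningful avenue of influence between A and B) is much harder to operationalize, so it's omitted:

```python
def perceived_cause(log, a, b):
    """Does event a look like a cause of event b, per the first two
    criteria?  `log` is a list of trials, each an ordered list of the
    events that occurred in that trial."""
    n = len(log)
    p_b = sum(b in trial for trial in log) / n
    a_trials = [trial for trial in log if a in trial]
    if not a_trials:
        return False
    # Criterion 1: A comes before B whenever both occur
    precedes = all(trial.index(a) < trial.index(b)
                   for trial in a_trials if b in trial)
    # Criterion 2: P(B|A) > P(B)
    p_b_given_a = sum(b in trial for trial in a_trials) / len(a_trials)
    return precedes and p_b_given_a > p_b

log = [["rain", "wet"], ["rain", "wet"], ["dry"], ["wet"]]
print(perceived_cause(log, "rain", "wet"))   # True: rain precedes wet and raises its probability
print(perceived_cause(log, "wet", "rain"))   # False: wet never precedes rain
```

Note that the check is relative to the log, i.e. to the mind's world-model -- which is exactly the subjectivity the text insists on.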

So, moving on ... force quickly gives us energy…

Energy, I suggest (not too originally), may be broadly conceived as a quantity that

  • is conserved in an isolated system (or to say it differently: is added or subtracted from a system only via interactions with other systems)
  • measures (in some sense) the amount of work that a certain force gets done, or (potential energy) the amount of work that a certain force is capable of getting done

Now, in the case of Newtonian mechanics,

  • an entity's default pattern of behavior is to move in a straight line at a constant velocity (conservation of momentum), therefore force takes the form of deviations from constant momentum, i.e. it is proportional to acceleration
  • "mass" is basically an entity's resistance to force…
  • energy = force * distance
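As a quick sanity check that these Newtonian bullets instantiate the abstract definition of energy above, here's a tiny simulation (my own illustration, with arbitrary mass and force values) confirming that energy = force * distance matches the kinetic energy the force imparts:

```python
m, F, dt = 2.0, 3.0, 1e-4        # mass, constant force, time step (arbitrary values)
x, v = 0.0, 0.0
for _ in range(100_000):         # simulate 10 seconds of motion
    v += (F / m) * dt            # force = deviation from constant-velocity behavior
    x += v * dt
work = F * x                     # energy transferred: force * distance
kinetic = 0.5 * m * v**2         # energy now stored as motion
print(work, kinetic)             # the two agree to numerical precision
```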

However, the basic concepts of force and energy as described above are pertinent beyond the Newtonian context, e.g. to relativistic and quantum physics; and I suppose they may have meaning beyond the physics domain as well.

This leads me to thinking about a couple related concepts...

Entropy maximization: When a mind lacks knowledge about some aspect of the world, its generically best hypothesis is the one that maximizes entropy (this is the hypothesis that will lead to its being right the maximum percentage of the time).   This is Jaynes' MaxEnt principle of Bayesian inference.

Maximum entropy production: When a mind lacks knowledge about the path of development of some system, its generically best hypothesis is that the system will follow the path of maximal entropy production (MEP).   It happens that this path often involves a lot of temporary order production; as Swenson said, "The world, in short, is in the order production business because ordered flow produces entropy faster than disordered flow"

Note that while entropy maximization and MEP are commonly thought of in terms of physics, they can actually be conceived as general inferential principles relevant to any mind confronting a mostly-opaque world.
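A minimal numerical illustration of the MaxEnt point (my own toy example): if all a mind knows is that a die has six faces, the uniform distribution is its maximum-entropy hypothesis, and any more opinionated guess has strictly lower entropy:

```python
import math

def entropy(p):
    """Shannon entropy in nats."""
    return -sum(x * math.log(x) for x in p if x > 0)

uniform = [1/6] * 6
biased  = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]   # an "opinionated" hypothesis
print(entropy(uniform))   # ln 6 ~ 1.792, the maximum possible
print(entropy(biased))    # ~ 1.498, strictly lower
```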

Sooo... overall, what's the verdict?  Does it make sense to think about "mental energy", qualitatively, as something different from physical energy -- but still deserving the same word "energy"?   Is there a common abstract structure supervening on both uses of the "energy" concept?

I suppose that there may well be, if the non-physical use of the term "energy" follows basic principles like I've outlined here.

This is in line with the general idea that subjective experiences can be described using their own language, different from that of physical objects and events -- yet with the possibility of drawing various correlations between the subjective and physical domains.  (Since in the end the subjective and physical can be viewed as different perspectives on the same universe … and as co-creators of each other…)

In What Sense Is Mental Energy Conserved?

But ... hmmm ... I wonder if the notion of "mental energy" -- in folk psychology or in whatever new version we want to create -- really obeys the principles suggested above?

In particular, the notion of "conservation in isolated systems" is a bit hard to grab onto in a psychological context, since there aren't really any isolated systems ... minds are coupled with their environments, and with other minds, by nature.

On the other hand, it seems that whenever physicists run across a situation where energy may seem not to be conserved, they invent a new form of energy to rescue energy conservation!   Which leads to the idea that within the paradigm of modern physics, "being conserved" is essentially part of the definition of "energy."

Also, note that above I used the phrasing that energy "is conserved in an isolated system (or to say it differently: is added or subtracted from a system only via interactions with other systems)."   The alternate parenthetical phrasing may, perhaps, be particularly relevant to the mental-energy case.

(Note for mathematical physicists: Noether's Theorem shows that energy conservation ensues from temporal translation invariance, but it only applies to systems governed by Lagrangians, and I don't want to assume that about the mind, at least not without some rather good reason to....) 

Stepping away from physics a bit, I'm tempted to consider the notion of mental energy in the context of the Vedantic hierarchy, which I wrote about in The Hidden Pattern (here's an excerpt from Page 31 ...)


In a Vedantic context, one could perhaps view the Realm of Bliss as being a source of mental energy that is in effect infinite from the human perspective.   So when a human mind needs more energy, it can potentially open itself to the Bliss domain and fill itself with energy that way (thus perhaps somewhat losing its self, in a different sense!).   This highlights the idea that, in a subjective-mind context, the notion of an "isolated system" may not make much sense.

But one could perhaps instead posit a principle such as

Increases or decreases in a mind-system's fund of mental energy are causally tied to that mind-system's interactions with the universe outside itself.

This sort of formulation captures the notion of energy conservation without the need to introduce the concept of an "isolated system."    (Of course, we still have to deal with the subjectivity of causality here -- but there's no escaping that, except by ceasing to worry about causality altogether!)

But -- well, OK -- that's enough musing and rambling for one Sunday early afternoon; it's time to walk the dogs, eat a bit of lunch, and then launch into removing the many LaTeX errors remaining in the (otherwise complete) Building Better Minds manuscript....

And so it goes...

-- This post was written while listening to Love Machine's version of "One More Cup of Coffee" by Bob Dylan ... and DMT Experience's version of "Red House" by Jimi Hendrix.   I'm not sure why, but it seems a "cover version" sort of afternoon...

Wednesday, May 15, 2013

Quasi-Mathematical Speculations on Contraction Maps and Hypothetical Friendly Super-AIs


While eating ramen soup with Ruiting in the Tai Po MegaMall tonight, I found myself musing about the possible use of the contraction mapping theorem to understand the properties of AGI systems that create other AGI systems that create other AGI systems that … etc. ….

It's a totally speculative line of thinking that may be opaque to anyone without a certain degree of math background.

But if it pans out, it ultimately could provide an answer to the question: When can an AGI system, creating new AGI systems or modifying itself in pursuit of certain goals, be reasonably confident that its new creations are going to continue respecting the goals for which they were created?

This question is especially interesting when the goals in question are things like "Create amazing new things and don't harm anybody in the process."   If we create an AGI with laudable goals like this, and then it creates a new AGI with the same goals, etc. -- when can we feel reasonably sure the sequence of AGIs won't diverge dramatically from the original goals?

Anyway, here goes…

Suppose that one has two goals, G and H

Given a goal G, let us use the notation "agi(G, C)" to denote the goal of creating an AGI system, operating within resources C, that will adequately figure out how to achieve goal G

Let d(·,·) denote a distance measure on the space of goals.  One reasonable hypothesis is that, if

d(G,H) = D

then generally speaking,

d( agi(G,C), agi(H,C) ) < k D

for some k < 1 ….  That is: because AGI systems are general in capability and good at generalization, if you change the goal of an AGI system by a moderate amount, you have to change the AGI system itself by less than that amount…

If this is true, then we have an interesting consequence….   We have the consequence that

F(X) = agi(X,C)

is a contraction mapping on the space of goals.   This means that, if we are working with a goal space that is a complete metric space, we have a fixed point G* so that

F(G*) = G*

i.e. so that

G* = agi(G*,C)

The fixed point G* is the goal of the following form:

G* = the goal of finding an AGI that can adequately figure out how to achieve G*

Of course, to make goal space a complete metric space one probably needs to admit some uncomputable goals, i.e. goals only computable using infinitely long computer programs.   So a goal like G* can never quite be achieved using ordinary computers, but only approximated.

Anyway, G* probably doesn't seem like a very interesting goal… apart from a certain novelty value….

However, one can vary the above argument in a way that makes it possibly more useful.

Suppose we look at

agi(G,I,C)

-- i.e., the goal of creating an AGI that can adequately figure out how to achieve goals G and I within resources C.

Then it may also be the case that

d( agi(G,I,C), agi(H, I, C) ) < k d(G,H)

If so, then we can show the existence of a fixed point goal G* so that

G* = agi(G*, I, C)

or in words,

G* = the goal of finding an AGI that can adequately figure out how to achieve both goal G* and goal I

The contraction mapping theorem shows that if we start with a goal G close enough to G*, we can converge toward G* via an iteration such as

G, I
agi(G, I, C)
agi( agi(G,I,C), I, C)
agi( agi( agi(G,I,C), I, C) , I, C)

etc.

At each stage of the iteration, the AGI becomes more and more intelligent, as it's dealing with more and more abstract learning problems.  But according to the contraction mapping theorem, the AGI systems in the series are getting closer and closer to each other -- the process is converging.

So then we have the conclusion: If one starts with a system smart enough to solve the problem agi(G, I, C) reasonably well for the given I and C -- then ongoing goal-directed creation of new AGI systems will lead to new systems that respect the goals for which they were created.
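The convergence claim here is just the Banach fixed-point theorem, and it's easy to watch numerically in a toy "goal space."  In the sketch below, goals are 2-D points, the metric is Euclidean, and agi() is replaced by a stand-in contraction with k = 0.5 -- purely my own illustration of the iteration's shape, obviously not a model of any real AGI system:

```python
import math

def dist(g, h):
    """Euclidean metric on the toy goal space."""
    return math.hypot(g[0] - h[0], g[1] - h[1])

def agi(g):
    """Stand-in contraction (k = 0.5): pulls any goal halfway toward (2, 3)."""
    return (0.5 * g[0] + 1.0, 0.5 * g[1] + 1.5)

g = (40.0, -17.0)                  # an arbitrary starting goal
for _ in range(60):
    g = agi(g)                     # the agi(agi(...agi(G)...)) iteration

print(g)                           # converges to the fixed point (2.0, 3.0)
print(dist(g, agi(g)))             # ~0: the limit G* satisfies agi(G*) = G*
```

However the starting goal is chosen, the iterates contract toward the same unique fixed point -- which is the intuition behind hoping that a well-behaved agi() mapping keeps successive AGI generations from drifting.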

Which may seem a bit tautologous!   But the devil actually lies in the details -- which I have omitted here, because I haven't figured them out!   The devil lies in the little qualifiers "adequately" and "reasonably well" that I've used above.  Exactly how well does the problem agi(G,I,C) need to be solved for the contraction mapping property to hold?

And of course, it may be that the contraction mapping property doesn't actually hold in the simple form given above -- rather, some more complex property similar in spirit may hold, meaning that one has to use some generalization of the contraction mapping theorem, and everything becomes more of a mess, or at least subtler.

So, all this is not very rigorous -- at this stage, it's more like philosophy/poetry using the language of math, rather than real math.   But I think it points in an interesting direction.  It suggests to me that, if we want to create a useful mathematics of AGIs that try to achieve their goals by self-modifying or creating new AGIs, maybe we should be looking at the properties of mappings like agi() on the metric space of goals.   This is a different sort of direction than standard theoretical computer science -- it's an odd sort of discrete dynamical systems theory dealing with computational iterations that converge to infinite computer programs compactly describable as hypersets.

Anyway this line of thought will give me interesting dreams tonight ... I hope it does the same for you ;-) ...


Wednesday, May 01, 2013

The Dynamics of Attachment and Non-Attachment in Humans and AGIs



A great deal of human unhappiness and ineffectiveness is rooted in what Buddhists call "attachment"… roughly definable as an exaggerated desire not to be separated from someone, something, some idea, some feeling, etc.

Buddhists view attachment as ensuing largely from a lack of recognition of the oneness of all things.  If all things are one, then they can't really be separated anyway, so there's no reason to actively resist separation from some person or thing.

Zen teacher John Daido Loori put it as follows: "[A]ccording to the Buddhist point of view, nonattachment is exactly the opposite of separation. You need two things in order to have attachment: the thing you’re attaching to, and the person who’s attaching. In nonattachment, on the other hand, there’s unity. There’s unity because there’s nothing to attach to. If you have unified with the whole universe, there’s nothing outside of you, so the notion of attachment becomes absurd. Who will attach to what?"

That way of thinking makes plenty of sense to me (in a trans-sensible sort of way!).  However, I think one can also take a more prosaic and less cosmic, but quite compatible, approach to the attachment phenomenon...

In this blog post I will present a simple neural and cognitive model of attachment and its opposite.

I want to clarify that I'm not positing that the subjective experiences of attachment or non-attachment "reduce" to the neural/cognitive mechanisms I'll describe here -- not in a physics sense, nor in a basic ontological sense.  I prefer to think of the ideas presented here as pertaining to the "neural/cognitive correlates of the experiences of attachment and non-attachment."

After presenting my model of attachment and non-attachment, I will dig into AGI theory for a bit, and explain why I think advanced AGI systems would suffer from the attachment phenomenon far less than human beings.   Or in other words:
  • Enlightening human minds is, in practice, a chancy and difficult matter ...
  • Enlightening AGI minds may merely be a matter of reasonable cognitive architecture design...

Hebbian Learning

I will start with some quasi-biological speculation.  What might be the neural roots of attachment?

Let's begin with the concept of Hebbian learning, an idea from neural network theory.  Hebbian learning has to do with a network in which neurons are joined by weighted synapses.  The larger the positive weight on the synapse between neuron N1 and neuron N2, the more of N1's activity will spill over to N2.  The larger the negative weight on the synapse between neuron N1 and neuron N2, the more strongly N1's activity will inhibit activity in N2.

In basic Hebbian learning the following two rules obtain:
  1. If N1 and N2 are active at the same time, the link (synapse) between N1 and N2 has its weight increased
  2. If N1 is active but N2 is not, or N2 is active but N1 is not, the link between N1 and N2 has its weight decreased
The result is that, over time
  • pairs of neurons that are frequently simultaneously active will be joined by synapses with high positive weights (so when one of them becomes active, the other will tend to be)
  • pairs of neurons that are generally active at different times, will be joined by synapses with very negative weights (so when one of them becomes active, the other will tend not to be active)
This is a very basic form of pattern recognition, but it's been shown to be adequate to learn arbitrarily complex patterns.  In technical terms, Hebbian learning can learn to achieve any computable goal in any computable environment -- though it may be very slow at doing so, and may require a very large network of neurons.
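The two rules above are easy to write down concretely.  Here is a minimal sketch, where the learning rate is an invented illustrative constant rather than anything biological:

```python
def hebb_update(w, n1_active, n2_active, rate=0.1):
    """One Hebbian update of the weight w between neurons N1 and N2."""
    if n1_active and n2_active:
        return w + rate              # rule 1: co-activation strengthens the synapse
    if n1_active != n2_active:
        return w - rate              # rule 2: lone activation weakens it
    return w                         # both silent: no change

w = 0.0
for _ in range(20):                  # neurons that fire together...
    w = hebb_update(w, True, True)
print(round(w, 6))                   # 2.0  -- ...end up strongly linked
for _ in range(30):                  # uncorrelated firing...
    w = hebb_update(w, True, False)
print(round(w, 6))                   # -1.0 -- ...drives the weight back down
```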

One of the interesting consequences of Hebbian learning is the formation of "cell assemblies" -- groups of neurons that are richly interconnected via high-positive-weight synapses, and hence tend to become activated as a whole.   Donald Hebb, who came up with the idea of Hebbian learning in the late 1940s, suggested that ideas in the mind are represented by neuronal cell assemblies in the brain.  60-odd years later, this still seems a sensible idea, and there is significant evidence in its favor.  The emergence of nonlinear dynamics has deepened the theory somewhat; it now seems likely that the cell assemblies representing ideas, memories and feelings in the human mind are associated with complex dynamical phenomena like strange attractors and strange transients.

Hebbian learning is a conceptual and mathematical model, but the basic idea is reflected in the brain in the form of long-term potentiation of synapses.  It may be found to be reflected in the brain in other ways as well, e.g. as our understanding of the roles of glia in memory increases.

So what does all this have to do with attachment?

Let's explore this via a simple example....

Suppose that Bob's girlfriend has left him.  He misses her.

While his girlfriend was with him, he woke up every morning, found her in the bed next to him, and put his arm around her.  He liked that.  The association between "wake up" and "put arm around girlfriend" became strong.   In Hebbian learning terms, the neurons in the "wake up" cell assembly got strongly positively weighted synapses to the neurons in the "put arm around girlfriend" cell assembly.  A larger assembly of the form "wake up and put arm around girlfriend" formed, linking together the two smaller assemblies.

Now, after the girlfriend left, what happens in Bob's brain?

According to straightforward Hebbian learning, the association between "wake up" and "put arm around girlfriend" should gradually decrease, until eventually there is no longer a positive weight between the two cell assemblies.  The larger assembly should fragment, leaving the "wake up" and "put arm around girlfriend" assemblies separate; and at the same time the "put arm around girlfriend" assembly should start to dissipate, as it no longer gets reinforcement via experience.

But this may not actually be what happens.  Suppose, for example, that Bob spends a lot of time thinking about his girlfriend (now his ex-girlfriend).  Suppose he lies awake at night in bed and dwells on the fact that he's the only one there.  In that case, the "wake up" cell assembly and the "put arm around girlfriend" assembly will be activated simultaneously a lot, and will retain their positive association.

What's happening here is that Bob's emotions are causing a cell assembly to remain highly active -- in a case where the external world, in the absence of these emotions, would drive the assembly to dwindle.
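A toy simulation makes the contrast vivid.  All the rates here are invented for illustration; the point is only the qualitative difference between plain Hebbian decay and emotion-driven re-reinforcement.

```python
# Toy contrast between plain Hebbian decay and emotion-driven
# reinforcement.  All rates are made up for illustration.

def simulate(days, ruminating, w=0.9, eta=0.1, decay=0.05):
    """Weight between the 'wake up' and 'put arm around girlfriend' assemblies."""
    for _ in range(days):
        if ruminating:
            # Emotion keeps both assemblies co-active, so Hebbian
            # learning re-strengthens the link each day.
            w += eta * (1.0 - w)
        else:
            # No co-activation: the association simply decays.
            w -= decay * w
    return w

forgetting = simulate(100, ruminating=False)   # dwindles toward zero
attachment = simulate(100, ruminating=True)    # stays near its maximum
```

The same Hebbian rule produces forgetting in one regime and persistent attachment in the other; the only difference is whether emotion keeps the assemblies firing together.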

This, I suggest, is the key neural correlate of the psychological phenomenon of attachment.  Attachment occurs -- neurally speaking -- when there is a circuit binding a cell assembly to the brain's emotional center, in such a way that emotion keeps the circuit whole and flourishing even though otherwise it would dissipate.

Ideally, a mind with amazing powers of self-control would delete the association between "wake up" and "put arm around girlfriend" as soon as the relationship with the girlfriend ended.   However, a mind without emotional interference in its Hebbian network dynamics would do the next best thing: the association would gradually dwindle over time.   For a typical human mind, on the other hand, the coupling of the "wake up and put arm around girlfriend" network with the mind's emotional centers will cause this association to persist long after simple Hebbian dynamics would have caused it to dwindle.

The example of Bob and his girlfriend is somewhat simplistic of course, and I chose it largely because of its simplicity.  A more pernicious example is when a mind becomes attached to an aspect of its model of itself.  For example, someone who derives pleasure from being correct (say, because people praise them for being correct) may then become emotionally attached to the idea of themselves as someone who knows the right answer.   They may then have trouble letting go of this idea, even in contexts where they genuinely do not know the answer, and would be better off admitting this to themselves as well as to others.   Becoming attached to inaccurate models of oneself causes all sorts of problems, including the creation of compounding, increasingly inaccurate self-models, as well as self-defeating behaviors.

A Semantic Network Perspective

Now let's take a leap from modeling brain to modeling mind.  I've been talking here about neural networks and brains -- but the core idea presented above could actually be relevant to minds with very different biological underpinnings.  It could also be relevant if Hebbian learning turns out to be a terrible model of the brain.

Regardless of how the brain works, one can model the mind as a network of nodes, connected by weighted links.  The nodes represent concepts, actions, and perceptions in the mind; the links represent relationships between these, including associative relationships.  The "semantic networks" often used in AI are a simplistic version of this kind of model, but one can articulate much richer versions, capable of capturing all documented aspects of human cognition.
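A minimal sketch of this kind of weighted network follows.  The node names and weights are hypothetical, chosen to match the running example; a serious cognitive model would of course have far richer link types.

```python
# Minimal weighted semantic network: nodes are concepts/actions/percepts,
# links carry association strengths.  Names and weights are hypothetical.

links = {
    ("wake_up", "put_arm_around_girlfriend"): 0.8,
    ("put_arm_around_girlfriend", "happy"): 0.9,
    ("wake_up", "coffee"): 0.6,
}

def neighbors(node, min_weight=0.0):
    """Return the concepts associated with `node` above a strength threshold."""
    out = {}
    for (a, b), w in links.items():
        if w < min_weight:
            continue
        if a == node:
            out[b] = w
        elif b == node:
            out[a] = w
    return out
```

In this picture, attachment corresponds to a cluster of links whose weights are held high by emotional feedback rather than by ongoing experiential reinforcement.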

This sort of model of the mind has been instrumental in my own thinking about AI and cognitive science.  I have articulated a specific network model of minds called SMEPH, Self-Modifying Evolving Probabilistic Hypergraphs.   I won't go into the details of that here, though -- I mention it only to point out that the model of attachment and non-attachment here may be interpreted two ways: as a neural model, and as a cognitive model.   These interpretations are related but far from identical.

COEX Systems

The model of attachment presented here relates closely to Stanislav Grof's notion of a "COEX (Condensed Experience) system."  Roughly, a COEX is a set of related experiences organized around a powerful emotional center -- generally one or a few highly emotionally impactful experiences.  The various experiences in the COEX all reinforce each other, keeping each other energetic and relevant to the mind.

In a Hebbian perspective, a COEX system would be modeled as a system of cell assemblies, each representing a certain episodic memory, linked together via positive, reinforcing connections.  The memories in the COEX stimulate powerful emotions, and these emotions reinforce the memories -- thus maintaining a powerful, ongoing attachment to the memories.

But Why?

I have said that "Attachment occurs -- neurally speaking -- when there is a circuit binding a cell assembly to the brain's emotional center, in such a way that emotion keeps the circuit whole and flourishing even though otherwise it would dissipate."

But why would the human mind be that way?

Emotions, basically, are system-wide (body and mind inclusive) reactions to events regarding system goals/desires/aspirations.  We are happy when we are achieving goals better and better; especially happy when we're doing so better than expected.  We are sad when we're making progress worse than expected.   We're angry when someone or something stands in the way of our goal fulfillment.   We feel pity when we use our mind's power of analogy to feel someone ELSE's frustration at their inability to fulfill their goals….
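This appraisal view of emotion can be caricatured in a few lines of code.  The labels and thresholds are illustrative only, but they capture the mapping just described:

```python
# Sketch of emotion as appraisal of goal progress, per the paragraph above.
# The labels and the crisp thresholds are illustrative simplifications.

def appraise(expected_progress, actual_progress, blocked_by_other=False):
    """Map goal progress vs. expectation onto a crude emotional label."""
    if blocked_by_other:
        return "angry"      # someone or something stands in the way
    delta = actual_progress - expected_progress
    if delta > 0:
        return "happy"      # doing better than expected
    if delta < 0:
        return "sad"        # doing worse than expected
    return "neutral"
```

Pity, in this caricature, would be running the same appraisal over a simulated model of someone else's goals -- the "power of analogy" mentioned above.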

So it's only natural that the emotion-bearing cell assemblies and attractors wind up getting richly interlinked with other cell assemblies and attractors.

Let's say the "wake up", "put arm around girlfriend" and "happy emotion" assemblies all get richly interlinked.   Then there are multiple reverberating circuits joining all these assemblies.  So even when the girlfriend goes away, these circuits will keep on cycling.

This won't be such a problem for an animal like a dog -- because in a dog, the associational cortex is not such a big part of its neural processing; immediate perceptions and actions tend to hold sway.  But a larger and more complex associational cortex brings all sorts of new possibilities with it, including the possibility of more complex and persistent forms of attachment!

The Brains of the Enlightened

In recent years there has been an increasing amount of work studying the brains of experienced meditators, and of people capable of various "enlightened" states of consciousness.   One of the interesting findings here is that such individuals show unusual patterns of activity in a part of the brain called the posterior cingulate cortex (PCC).

The PCC does many different things, so the significance of this finding is not fully clear, and may be multidimensional.  However, it is noteworthy that ONE thing the PCC does is to regulate the interaction between memory and emotion.

The neural/cognitive theory presented above leads directly to the prediction that, if there's a key difference between the brains of attachment-prone versus non-attached people, it should indeed have to do with the interaction between memory and emotion.

I thus submit the hypothesis that ...  ONE of the significant factors in the neurodynamics of enlightened states is: a change in the function of the PCC, so that in relatively non-attached people, emotion plays a significantly lesser role in the maintenance and dissolution of cell assemblies and associated attractors representing memories.

Toward Enlightened Digital Minds

This line of thinking, if correct, suggests that it may be relatively straightforward to create digital minds without the persistent phenomenon of attachment that characterizes ordinary human minds.

First of all, a digital mind -- if its design is not slavishly tied to that of the human brain -- may be able to explicitly remove associations and other inferences that are no longer rationally judged as relevant.  In other words, when a well-designed robot's girlfriend leaves him, he will just be able to remove any newly irrelevant associations from his brain, so his post-breakup malaise will be brief or nonexistent.

Secondly, even if a digital mind lacks this level of deliberative, rational self-modification, there is no reason it needs to have the same level of coupling of emotion and memory as human beings have.  From an AI software design perspective, it is quite simple to make the coupling of memory and emotion optional, to a much greater degree than the human brain does…

The interaction between memory and emotion is valuable for many purposes.  There is intelligence in emotional response, sometimes.  But there is no need, from a cognitive architecture perspective, for the formation and dissolution of memory attractors to be so inextricably tied to emotion.

Attachment in OpenCog

To explore the notion of attachment in digital minds more concretely, let's take a specific AGI design and muse on it in detail.   This exercise will also help us better understand why human minds get so extremely wrapped up in attachment as they do.

What if Bob's mind were a mature, fully functional OpenCog AGI engine, instead of a human?

(NOTE: to understand this example more thoroughly, take an hour or two and read the overview of the CogPrime cognitive architecture being gradually implemented in the OpenCog open-source AI framework....  Or, if you don't have time for that, just skim through the following instead, and you'll probably grok something!)

Then there would be an explicit link in OpenCog's Atomspace knowledge store, such as

PredictiveImplicationLink
   AND
      PredicateNode: wake_up
      PredicateNode: put_arm_around_girlfriend
   Happy
 
(NOTE: the actual nodes in the OpenCog knowledge base probably wouldn't have such evocative names, as they would be learned via experience --  but the basic structure would be like this.)

There would also be a bunch of HebbianLinks, similar to synapses in a neural network with Hebbian learning, going between various nodes related to wake_up and put_arm_around_girlfriend, and various nodes related to Happy.

When the girlfriend left, human-like attachment dynamics would likely be present, related to the HebbianLinks involved.   But the probabilistic truth value on the PredictiveImplicationLink would decrease.  It would decrease gradually via experience; or might be decreased very rapidly via reasoning (i.e. the AI could rationally infer that since the girlfriend is gone, putting its arm around her is not likely to be associated with happiness anymore).
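Here is a hedged sketch of the gradual route.  OpenCog's real truth values carry both strength and confidence, and are revised by probabilistic logic (PLN); the sketch below collapses all that into a single running frequency, purely for illustration.

```python
# Hedged sketch of how the strength of the PredictiveImplicationLink might
# fall after the breakup.  A real OpenCog truth value is richer than this
# single running-frequency estimate.

def revise_strength(strength, observation, count, new_evidence=1):
    """Fold new observations into a frequency-style strength estimate."""
    total = count + new_evidence
    new_strength = (strength * count + observation * new_evidence) / total
    return new_strength, total

# Gradual route: morning after morning, waking up yields no happiness
# from putting an arm around the (absent) girlfriend.
s, n = 0.9, 100      # hypothetical prior strength and evidence count
for _ in range(200):
    s, n = revise_strength(s, 0.0, n)

# The rapid route would instead slash the strength in one inferential step,
# e.g. by reasoning that the premise "girlfriend present" no longer holds.
```

After 200 disconfirming mornings, the strength has fallen from 0.9 to 0.3 -- gradual, experience-driven unlearning, as opposed to the one-shot correction reasoning could provide.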

The question then is: How rapidly and thoroughly would this change in the OpenCog system's explicit knowledge (the PredictiveImplicationLink) cause a corresponding change in the system's implicit knowledge (the HebbianLinks between the assemblies or "maps" of nodes corresponding to "wake-up", "put_arm_around_girlfriend", and "Happy")?

Suppose the OpenCog system has a process that, whenever the truth value of a link changes dramatically, puts the link in the system's AttentionalFocus (the set of nodes and links in the system's memory that have the highest Short Term Importance (STI) values, and thus get the most attention from the system's cognitive processes).  Putting the link in the AttentionalFocus will cause STI to be spread to the nodes that the link connects, and to other nodes related to these.  This will then cause the HebbianLinks among these nodes to have their weights updated.  And this will gradually get rid of assemblies and attractors that are no longer relevant.

So this process, which triggers attention based on truth-value change, will serve directly to combat attachment.
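The whole mechanism might be caricatured as follows.  The STI values, spread fractions and node/link layout are invented; OpenCog's actual economic attention allocation (ECAN) dynamics are considerably richer.

```python
# Illustrative caricature of the attention-driven cleanup described above.
# STI values, spread fractions and the node/link layout are invented.

attentional_focus_threshold = 100.0

sti = {"wake_up": 5.0, "put_arm_around_girlfriend": 5.0, "happy": 5.0}
hebbian = {("wake_up", "put_arm_around_girlfriend"): 0.8}

def on_truth_value_change(link_nodes, boost=200.0, spread=0.5):
    """A big truth-value change pumps STI into the nodes the link connects."""
    for node in link_nodes:
        sti[node] += boost * spread

def update_hebbian(decay=0.1):
    """Attended links whose associations are no longer confirmed get weakened."""
    for pair in hebbian:
        a, b = pair
        if sti[a] > attentional_focus_threshold and sti[b] > attentional_focus_threshold:
            hebbian[pair] *= (1.0 - decay)

# The PredictiveImplicationLink's truth value drops sharply, pulling the
# linked nodes into the AttentionalFocus...
on_truth_value_change(("wake_up", "put_arm_around_girlfriend"))

# ...where repeated Hebbian updates erode the now-unsupported association.
for _ in range(20):
    update_hebbian()
```

The explicit, inferential change (the truth-value drop) thus propagates down to the implicit level (the HebbianLink weights) -- exactly the explicit-to-implicit coordination that, per the next section, the human brain handles so sloppily.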

Why Human Brains Get More Attached than a Smart OpenCog Would

In the human mind/brain, explicit knowledge is purely emergent from implicit knowledge -- different from the situation with OpenCog where the two kinds of knowledge exist in parallel, dynamically coupled together.  Obviously, given this, there must be neural mechanisms for changes in emergent explicit knowledge (derived via reasoning, for example) to cause changes in the corresponding underlying implicit knowledge.   But these mechanisms are apparently more complex and harder to control than the corresponding ones in OpenCog.

Evolutionarily, the reason for the difficulty the human brain has in coordinating explicit and implicit knowledge seems to be that the brain's mechanisms mostly evolved in the context of brains with a lot less associational cortex than the human brain has.  In the context of a dog or ape brain, a sloppy mechanism for coordinating explicit and implicit knowledge may not be so troublesome.   In the context of a human brain, this sloppy mechanism leads to various problems, such as excessive attachment to ideas, people, feelings, etc.   And these problems can be worked around, to a large extent, via difficult and time-consuming practices like meditation, psychotherapy, etc.  Perhaps future technologies like brain implants will enable the circumvention of excessive attachment and other problematic aspects of the human mind/brain architecture, without the need for as much effort and uncertainty as is involved in current mind-improving disciplines....

...

And
so
it
goes
.
.
.