
Monday, June 16, 2014

The Bullshit at the Heart of Humanity

I explained, in a recent blog post, Why Humans Are So Screwy.   But I didn't quite finish the story there.   Here I'll explain an additional aspect of Screwy Human Nature -- the nature of the self-delusion that lies at the heart of our selves.

My previous post identified two major culprits where human screwiness is concerned:

  • The conflict between the results of individual and group (evolutionary) selection, encoded in our genome (as described excellently by E.O. Wilson)
  • The emergence of civilization, to which we are not adapted, which disrupted the delicate balance via which tribal human mind/society quasi-resolved the above-mentioned conflict (as described excellently by Sigmund Freud)

What I want to point out here is the next chapter in the story -- the way these conflicts impact our “selves” – our “autobiographical self-models”, which play such a large role in our inner lives.   These conflicts play a major role in causing our selves to become self-damagingly inaccurate models of the thought, feeling and behavior patterns of which we are actually constituted.

Put relatively simply: in order to avoid the pain that we are conditioned to feel from violating individual or group needs, or violating civilized or tribal standards of individual/group balance, we habitually create false self-models embodying the delusion that such violations are occurring in our minds and actions much less often than they really are.  Emotional attachment to these sorts of inaccurate self-models is perhaps the most directly important cause of human mental and social suffering.

Our Problematic Selves


The primary culprit of human suffering has often been identified as the “self” – meaning the autobiographical, psychosocial self; the self-model that each of us uses to symbolize, define and model our own behavior.   One of the most commonly cited differences between normal human psychology and the psychology of “enlightened” spiritual gurus is that the latter are said to be unattached to their autobiographical selves – indeed they are sometimes said to have “no self at all.”

I think there is some deep truth to this perspective; but one needs to frame the issues with care.   Any mind concerned with controlling a body that persists through time has got to maintain some sort of model of that body and the behavior patterns that are associated with it.   Without such a model (which may be represented explicitly or implicitly), the mind could not control the body very intelligently.  Any such model can fairly be called a “self-model.”   In this sense any persistently embodied intelligence is going to have a self.

The problem with the human self, however, is that it tends to be a bad model – not a morally bad model, but an inaccurate one.  The self-models we carry around in our minds are generally not very accurate models of the actual behavior-patterns that our bodies display, nor of the thought-patterns that our minds contain.   And the inaccuracies involved are not just random errors; they are biased in very particular ways.

Our self-models are symbols for clusters of behavior-patterns that are observed to occur among our bodily behaviors and our internal cognitive behaviors.   This is not in itself bad – symbolic reasoning is critical for general intelligence.   However, we are very easily drawn to make incorrect conclusions regarding our symbolic self-models – and to become emotionally attached to these incorrect conclusions.

And this brings us straight back to the two conflicts that I highlighted in my earlier blog post: Self versus Group (Wilson), and Evolved Self/Group Balance versus Civilized Self/Group Balance (Freud).   These layered contradictions yank our self-models around willy-nilly.   Each modern human feels great pressure to be both self-focused and group-focused; and to balance self and group in a tribal way, and in a civilized way.

What’s the simplest way for a person to fulfill all these contradictory requirements?  -- or rather, to feel like they have at least done a halfway-decent job of fulfilling them?

That’s easy: To bullshit themselves! 

Human self-models are typically packed with lies -- lies to the effect that the person is fulfilling all these contradictory requirements much better than is actually the case.  Because when a person clearly sees just how badly they have been fulfilling these contradictory requirements, they will generally experience a lot of bad emotion – unless that person has somehow managed to let go of the expectations that evolution and society have packed into their brains and minds.

The above analysis of the conflicts in human nature lets us specifically identify four kinds of lies that are typically packed into human selves.  There are two kinds of Wilsonian lies:
  • Lies about how a person has acted against their own goals and desires
  • Lies about how a person has disappointed the others around them

And there are two kinds of Freudian lies:

  • Lies about how a person has repressed their true desires, in order to adhere to general social expectations
  • Lies about how a person has violated general social expectations, in an effort to act out their true desires

What if a person could avoid these four kinds of lies, and openly, transparently acknowledge all these kinds of violations to themselves, on an ongoing basis during life?  This would allow the person in question to form an accurate self-model -- not the usual self-delusional self-model biased by the Wilsonian and Freudian contradictions.   But this sort of internal self-honesty is far from the contemporary human norm. 

The problem is that evolution has wired us to become unhappy when we know we have acted against our own goals and desires; OR when we know we have disappointed someone else.   And civilized society has taught us to become unhappy when we violate social expectations; but evolution has taught us to become unhappy when we don’t balance self and group in the way that is “natural” in a tribal context.    So, inside the privacy of our minds, we are constantly tripping over various evolved or learned triggers for unhappiness.  The easiest way to avoid setting off these triggers is to fool ourselves that we haven’t really committed the “sins” required to activate them – i.e. to create a systematically partially-false self-model. 

The harder way to avoid setting off these triggers is to effectively rewire our mind-brains to NOT be reflexively caused unhappiness when we act against our goals/desires, disappoint others, violate social expectations, or balance self and group in tribally inappropriate ways.  Having done this, the need for an inaccurate, self-deluding self-model disappears.  But performing this kind of rewiring is very difficult for human beings, given the current state of technology.   The only reasonably reliable methods for achieving this kind of rewiring today involve years or decades of concentrated effort via meditation or other similar techniques.

And what would a human mind be like without a dishonesty-infused, systematically inaccurate self-model?  Some hints in this direction may be found in the mind-states of spiritually advanced individuals who have in some sense gone beyond the negative reinforcement triggers mentioned above, and also beyond the traditional feeling of self.   My friend Jeffery Martin's recent study of the psychology of the spiritually advanced  (soon to be published) suggests that, without a self in the traditional sense, a person’s mind feels more like an (ever-shifting) set of clusters of personality/behavior patterns.   One of the lies the self tells itself, it seems, is about its own coherence.  Actually human beings are not nearly as coherent and systematic and unified as their typical self-models claim.

Goals Beyond the Legacy Self


So much for the complexly conflicted present.  Let's think a bit about the possibly better future.

In the current state of human nature, self and goals are intimately wrapped up together.   

Substantially, we pursue our goals because we want our self-model to be a certain way – and we do this in a manner that is inextricably tangled up with the various lies the self-model embodies.

But consider, on the other hand, the case of a post-Singularity human or human-like mind that understands itself far better than contemporary humans, thus arriving at a far more accurate – and likely less unified and coherent – self-model than a typical pre-Singularity human mind.   What will the goals of such a mind be?  What will a mind without a coherent self -- without a self built around lies and confusions regarding self vs. group and repression and status -- actually want to do with itself?

Considering our primary current examples of minds that have discarded their traditional autobiographical selves -- spiritual gurus and the like – provides confusing guidance.   One notes that (with nontrivial exceptions) the majority of such people are mainly absorbed with enjoying the wonder of being, and sometimes with spreading this wonder to others, rather than with attempting to achieve ambitious real-world goals.   The prototypical spiritually advanced human is not generally concerned with pursuing pragmatic goals, because they are in a sense beyond the typical human motives that cause people to become attached to pursuit of such goals.   This makes one wonder if the legacy self – with all its associated self-deception -- is somehow required in order for humans to work hard toward the achievement of wildly ambitious goals, in the manner for instance of the scientists and entrepreneurs who are currently bringing the Singularity within reach.

But it’s not clear that the contemporary or historical spiritual guru is a good model for a post-Singularity, post-legacy-self human mind.   I suspect that in a community of post-delusory-self minds, avid pragmatic goal-pursuit may well emerge for different reasons, mostly unrelated to legacy human motives. 

Why would a community of post-delusory-self minds pursue goals, if not for the usual human reasons of status and ego?   Here we come to grips with deep philosophical issues.   I would argue that, once the conflicts that wrack human nature are mostly removed, other deep human motives will rise to the fore – for instance, the drive to discover new things, and create new things.   That is: the drives for pattern, creation and information.   

One can view the whole long story of the emergence of life and intelligence on Earth as the manifestation of these “drives”, as embedded in the laws of physics and the nature of complex systems dynamics.   From the point of view of the Cosmos rather than humanity in particular, the drives for pattern, creation and information are even deeper than the conflicts that wrack human nature. 

If spiritually advanced humans, having cast aside self and ego and status, tend not to pursue complex goals of discovery and creation, this may be because, given the constraints of the human brain architecture, merely maintaining a peaceful mindstate without self/ego/status-obsession requires a huge amount of the brain’s energy.   The simple, blissful conscious state of these individuals may be bought at the cost of a great deal of ongoing unconscious neural effort. 

On the other hand, once the legacy human brain architecture becomes flexibly mutable, most of the old constraints no longer apply.   It may become possible to maintain a peaceful, blissful conscious state – relatively free of Freudian repression and individual/group conflicts – while still avidly pursuing the deeper goals of gaining more and more information, and creating more and more structures and patterns in the universe.   Here we are far beyond the domain of the currently scientifically testable – but this is indeed my strong suspicion.

Current human nature got where it is largely via the advent of certain technologies – the technologies of agriculture and construction that enabled civilization, for example.   The folks who invented the plow and the brick weren’t thinking about the consequences their creations would have for the emergence and dynamics of the superego  -- but these consequences were real enough anyway.
Similarly, the next steps in human nature may well emerge as a consequence of technological advancements like brain-computer interfacing and mind uploading – even though the scientists and engineers building these technologies will mostly have other goals in mind, rather than explicitly focusing their work toward reducing conflict in the human psyche and bringing about an era where self is less critical and discovery and creation are the main motivations.

Growth, joy and creation beyond the constrictions of the self-delusory self -- I'm more than ready!

Conceptor Networks

I read today about a new variant of recurrent neural nets called Conceptor Networks, which look pretty interesting.

In fact this looks kinda like a better-realized variant of the idea of "glocal neural nets" that my colleagues and I experimented with a few years ago.

The basic idea, philosophically (abstracting away loads of important details) is to

  • create a recurrent NN
  • use PCA to classify the states of the NN
  • create explicit nodes or neurons corresponding to these state-categories, and then imprint these states directly on the network dynamics

So there is a loop of "recognizing patterns in the NN and then incorporating these patterns explicitly in the NN dynamics", which is a special case of the process of "a mind identifying patterns in itself and then embodying those patterns explicitly in itself", which I long ago conjectured to be critical to cognition in general (and which underlies the OpenCog design on a philosophical level...)
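To make that loop a bit more concrete, here is a minimal numpy sketch of the core conceptor computation as I understand it from Jaeger's report.  The network size, driving signal and aperture value are arbitrary choices for illustration, and the sketch skips the "loading" step used in the real experiments; it's meant only to show where the PCA-like step and the re-imprinting step live.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                            # reservoir size (arbitrary)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
W_in = rng.normal(0, 1, (N, 1))
b = rng.normal(0, 0.2, (N, 1))

# 1. Drive the recurrent net with a pattern and record its states
steps = 500
u = np.sin(np.arange(steps) / 4.0)                # arbitrary driving signal
x = np.zeros((N, 1))
states = []
for n in range(steps):
    x = np.tanh(W @ x + W_in * u[n] + b)
    states.append(x.copy())
X = np.hstack(states)                             # N x steps state matrix

# 2. Characterize the cloud of visited states by its correlation matrix
#    (the PCA-like step: R's eigenvectors are the principal directions
#    of the state cloud)
R = (X @ X.T) / steps

# 3. The conceptor: a soft projector onto the directions the driven
#    dynamics actually used.  alpha is Jaeger's "aperture" parameter.
alpha = 10.0
C = R @ np.linalg.inv(R + (alpha ** -2) * np.eye(N))

# 4. Imprint the recognized pattern back on the dynamics: run the net
#    autonomously, filtering every new state through C.
x = states[-1]
for n in range(100):
    x = C @ np.tanh(W @ x + b)                    # conceptor-constrained update
```

In Jaeger's actual experiments the driven patterns are also "loaded" into the reservoir weights so that many conceptors can share one network; I've left that out to keep the sketch short.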

There is some hacky Matlab code here implementing the idea; but as code, it's pretty specialized to the exact experiments described in the above technical report...

My intuition is that, for creating a powerful approach to machine perception, a Conceptor Network would fit very well inside a DeSTIN node, for a couple of reasons:
  1. It has a demonstrated ability to infer complex dynamical patterns in time series
  2. It explicitly creates "concept nodes" representing the patterns recognized, which could then be cleanly exported into a symbolic system like OpenCog

Of course, Conceptor Networks are still at the research stage, so getting them to really work inside DeSTIN nodes would require a significant amount of fiddling...

But anyhow it's cool stuff ;)

Monday, June 09, 2014

Review of "More Than Nature Needs" by Derek Bickerton

I've been a fan of Derek Bickerton's writing and thinking on linguistics since happening upon Language and Species in a Philadelphia bookstore, disturbingly many decades ago.   More Than Nature Needs, the latest addition to Bickerton's canon, is an intriguing and worthy one, and IMO is considerably deeper than its predecessor Adam's Tongue.

Adam's Tongue argues that the elements of human symbolic language likely emerged via scavenging behavior, as this was an early context in which humans would have needed to systematically refer to situations not within the common physical environment of the speaker and hearer.  This is an interesting speculation, showcasing Bickerton's inventiveness as a lateral thinker.   MTNN continues in this vein, exploring the ways in which language may have emerged from simplistic proto-language.  However, MTNN draws more extensively on Bickerton's expertise as a linguist, and hence ends up being more profoundly thought-provoking and incisive.

As I see it, the core point of MTNN -- rephrased into my own terminology somewhat -- is that the developmental trajectory from proto-language to fully grammatical, proper language should be viewed as a combination of natural-selection and cultural/psychological self-organization.   To simplify a bit: Natural selection gave humans the core of language, the abstract "universal grammar" (UG) which underlies all human languages and is in some way wired into the brain; whereas cultural/psychological self-organization took us the rest of the way from universal grammar to actual specific languages.

The early stages of the book spend a bunch of time arguing against a purely learning-oriented view of language organization, stressing the case that some sort of innate, evolved universal grammar capability does exist.   But the UG Bickerton favors is a long way from classic-Chomskian Principles and Parameters -- it is more of an abstract set of word-organization patterns, which requires lots of individual and cultural creativity to get turned into a language.

I suspect the view he presents is basically correct.   I am not sure it's quite as novel as the author proposes; a review in Biolinguistics cites some literature where others present similar perspectives.  In a broader sense, the mix of selection-based and self-organization-based ideas reminded me of the good old cognitive science book Rethinking Innateness (and lots of other stuff written in that same vein since).   However, Bickerton presents his ideas far more accessibly and entertainingly than the typical academic paper, and provides interesting stories and specifics going along with the abstractions.

He also bolsters his perspective via relating it to the study of creoles and pidgins, an area in which he has done extensive linguistics research over many decades.  He presents an intriguing argument that children can create a creole (a true language) in a single generation, building on the pidgins used by their parents and the other adults around them.   I can't assess this aspect of his argument carefully, as I'm not much of a creologist (creologian??), but it's fascinating to read.  There is ingenuity in the general approach of investigating creole language formation as a set of examples of recent-past language creation.

The specific linguistics examples in the book are given in a variant of Chomskian linguistics (i.e. generative grammar), in which a deep and surface structure are distinguished, and it's assumed that grammar involves "moving" of words from their positions in the deep structure to their new positions in the surface structure.  Here I tend to differ from Bickerton.  Ray Jackendoff and others have made heroic efforts to modernize generative grammar and connect it with cognitive science and neuroscience, but in the end, I'm still not convinced it's a great paradigm for linguistic analysis.  I much more favor Dick Hudson's Word Grammar approach to grammatical formalization (which will not be surprising to anyone familiar with my work, as Word Grammar's theory of cognitive linguistics is similar to aspects of the OpenCog AGI architecture that I am now helping develop; and Word Grammar is fairly similar to the link grammar that is currently used within OpenCog).

Word Grammar also has a deep vs. surface structure dichotomy - but the deep structure is a sort of semantic graph.  In a Word Grammar version of the core hypothesis of MTNN, the evolved UG would be a semantic graph framework for organizing words and concepts, plus a few basic constraints for linearizing graphs into series of words (e.g. landmark transitivity, for the 3 Word Grammar geeks reading this).   But the lexicon, along with various other particular linearization constraints dealing with odd cases, would emerge culturally and be learned by individuals.
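To make the "constraints for linearizing graphs into series of words" idea concrete, here is a toy sketch.  It checks the familiar no-crossing-links (projectivity) condition on a dependency graph, which is only my rough computational stand-in for Word Grammar's landmark transitivity, not Hudson's actual formalization; the sentence and links are invented.

```python
def crossing_links(links):
    """links: list of (head_position, dependent_position) pairs over word indices."""
    crossings = []
    for i, (h1, d1) in enumerate(links):
        a1, b1 = sorted((h1, d1))
        for h2, d2 in links[i + 1:]:
            a2, b2 = sorted((h2, d2))
            # two links cross if each has exactly one endpoint strictly inside
            # the span of the other
            if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:
                crossings.append(((h1, d1), (h2, d2)))
    return crossings

# "the  dog   chased  a    cat"
#  0    1     2       3    4
links_ok = [(1, 0), (2, 1), (2, 4), (4, 3)]    # head -> dependent
print(crossing_links(links_ok))                # []  -> this word order passes the check

links_bad = [(2, 0), (1, 3)]                   # artificially tangled links
print(crossing_links(links_bad))               # one crossing pair reported
```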

(If I were rich and had more free time, I'd organize some sort of linguistics pow-wow on one of my private islands, and invite Bickerton and Hudson to brainstorm together with me for a few weeks; as I really think Word Grammar would suit Bickerton's psycholinguistic perspective much better than the quasi-Chomskian approach he now favors.)

But anyhow, stepping back from deep-dive scientific quibbles: I think MTNN is very well worth reading for anyone interested in language and its evolution.   Some of the technical bits will be slow going for readers unfamiliar with technical linguistics -- but this is only a small percentage of the book, and most of it reads very smoothly and entertainingly in the classic Derek Bickerton style.   Soo ... highly recommended!

Saturday, March 22, 2014

Lessons from Deep Mind & Vicarious


Recently we've seen a bunch of Silicon Valley money going into "deep learning" oriented AI startups -- an exciting trend for everyone in the AI field.  Even for those of us who don't particularly aspire to join the Silicon Valley crowd, the symbolic value of these developments is dramatic.   Clearly AI is getting some love once again.

The most recent news is a US$40M investment from Mark Zuckerberg, Elon Musk, Ashton Kutcher and others into Vicarious Systems, a "deep learning computer vision" firm led by Dileep George, who was previously Jeff Hawkins' lead researcher at Numenta.

A couple months ago, the big story was Google acquiring London deep reinforcement learning firm Deep Mind for something like US$500M.   Many have rumored this was largely an "acqui-hire", but with 60 employees or so, that would set the price per employee at close to US$10M, way above the $1M-$2M value assigned to a Silicon Valley engineer in a typical acqui-hire transaction.   Clearly a tightly-knit team of machine learning theory and implementation experts is worth a lot to Google these days, dramatically more than a comparable team of application programmers.

Both of these are good companies led by great researchers, whom I've admired in the past.   I've met Deep Mind's primary founder, Demis Hassabis, at a few conferences, and found him to have an excellent vision of AGI, plus a deep knowledge of neuroscience and computing.   One of Deep Mind's other founders, Shane Legg, worked for me at Webmind Inc. during 1999-2001.   I know Dileep George less well; but we had some interesting conversations last summer, when at my invitation he came to speak at the AGI-13 conference in Beijing.

Vicarious's focus so far has been on visual object recognition -- identifying the objects in a picture.  As Dileep described his progress at AGI-13: Once they crack object recognition, they will move on to recognizing events in videos. Once they crack that, they will move on to other aspects of intelligence.   Dileep, like his mentor Jeff Hawkins, believes that perceptual data processing is the key to general intelligence... and that vision is the paradigm case of human perceptual data processing...

Zuckerberg's investment in Vicarious makes a lot of sense to me.  Given Facebook's large trove of pictures and the nature of their business, software that can effectively identify objects in pictures would clearly be of great value to them.

Note that Facebook just made a big announcement about the amazing success of their face recognition software, which they saddled with the probably suboptimal name "Deep Face" (a bit Linda Lovelace, no?).  If you dig into the research paper behind the press release, you'll see that DeepFace actually uses a standard, well known "textbook" AI algorithm (convolutional neural nets) -- but they deployed it across a huge amount of data, hence their unprecedented success...
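For readers who haven't dug into that textbook algorithm: the basic building block is just a learned filter slid across the image, followed by a simple nonlinearity.  The numpy sketch below shows a single convolution-plus-ReLU step with a hand-made edge filter; it illustrates the generic operation only, not Facebook's DeepFace architecture, whose filters are learned from millions of face images and stacked many layers deep.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as in most deep nets)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

image = np.random.rand(32, 32)               # a toy grayscale patch
edge_filter = np.array([[1.0, 0.0, -1.0],    # a hand-made vertical-edge detector;
                        [2.0, 0.0, -2.0],    # in a trained net, filters like this
                        [1.0, 0.0, -1.0]])   # are learned from data
feature_map = relu(conv2d(image, edge_filter))
print(feature_map.shape)                     # (30, 30)
```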

Lessons to Learn?


So what can other AGI entrepreneurs learn from these recent big-$$ infusions to Deep Mind (via acquisition) and Vicarious (by investment)?

The main lesson I take from this is the obvious one: a great, genuinely working demo (not a quasi-faked-up demo like one often sees) goes a long way...

Not long ago Vicarious beat CAPTCHA -- an accomplishment very easy for any Internet user to understand.

On the other hand, the Deep Mind demo that impressed Larry Page was the ability to beat simple video games via reinforcement learning.
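For the curious, the reinforcement-learning core of that kind of demo can be sketched very compactly.  The toy below is plain tabular Q-learning on a made-up one-dimensional "game" where the agent must learn to walk right; Deep Mind's actual system replaced the table with a deep convolutional network reading raw screen pixels, but the underlying update is the same flavor.

```python
import random

N_STATES, ACTIONS = 6, (-1, +1)            # positions 0..5; move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1     # learning rate, discount, exploration

for episode in range(2000):
    s = 0
    while s != N_STATES - 1:               # episode ends at the rightmost cell
        # epsilon-greedy action choice
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q toward reward plus discounted best future value
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

print(max(ACTIONS, key=lambda act: Q[(0, act)]))   # prints 1: the agent learned to walk right
```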

Note that (analogously to IBM Watson), both of these demos involve making the AI meet a challenge that was not defined by the AI makers themselves, but was rather judiciously plucked from the space of challenges posed by the human world....


I.e.: doing something easily visually appreciable, that previously only humans could do...

Clearly Deep Mind and Vicarious did not excel particularly in terms of business model, as compared to many other firms out there...

Other, also fairly obvious points from these acquisitions are:
  1. For an acquihire-flavored acquisition at a high price, you want a team of engineers in a First World country, who look like the profile of people the acquiring company would want to hire.
  2. Having well-connected, appropriately respected early investors goes a long way.  Vicarious and Deep Mind both had Founders Fund investment.   Of course FF investment didn't save Halcyon Molecular, so it's no guarantee, but having the right early-stage investors is certainly valuable.

 

Bubble or Start of a Surge?


And so it goes.  These are interesting times for AI, indeed.    

A cynic could say it's the start of a new AI bubble -- that this wave of hype and money will be followed by disappointment in the meager results obtained by all the effort and expense, and then another "AI winter" will set in.

But I personally don't think so.   Whether or not the Vicarious and Deep Mind teams and technologies pay off big-time for their corporate investors (and I think they do have a decent chance to, given the brilliant people and effective organizations involved), I think the time is now ripe for AI technology to have a big impact on the world.

DeepFace is going to be valuable for Facebook; just as machine learning and NLP are already proving valuable for Google in their core search and ads businesses, and will doubtless deliver even more value with the infusion of the Deep Mind team, not to mention Ray Kurzweil's efforts as a Google Director of Engineering.

The love that Silicon Valley tech firms are giving AI is going to help spur many others all around the world to put energy into AI -- including, increasingly, AI projects verging on AGI -- and the results are going to be amazing.

 

Are Current Deep Learning Methods Enough for AGI?


Another lesson we have learned recently is that contemporary "deep learning" based machine learning algorithms, scaled up on current-day big data and big hardware, can solve a lot of hard problems.

Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.   Self-driving cars are not here yet -- but a self-driving car can, I suspect, be achieved via a narrow-AI integration of various components, without any general intelligence underlying.   IBM Watson beat Jeopardy, and a similar approach can likely succeed in other specialized domains like medical diagnosis (which was actually addressed fairly well by simpler expert systems decades ago, even without Watson's capability to extract information from large bodies of text).  Vicarious, or others, can probably solve the object recognition problem pretty well, even with a system that doesn't understand much about the objects it's recognizing -- "just" by recognizing patterns in massive image databases.

Machine translation is harder than the areas mentioned above, but if one is after translation of newspaper text or similar, I suppose it may ultimately be achievable via statistical ML methods.  That said, the rate of improvement of Google Translate has not been that amazing in recent years -- it may have hit a limit in terms of what can be done by these methods.  The MT community is looking more at hybrid methods these days.

It would be understandable to conclude from these recent achievements that statistical machine learning / deep learning algorithms basically have the AI problem solved, and that focus on different sorts of Artificial General Intelligence architectures is unnecessary.

But such a conclusion would not be correct.   It's important to note that all these problems I've just mentioned are ones that have been focused on lately, precisely because they  can be addressed fairly effectively by narrow-AI statistical machine learning methods on today's big data/hardware...

If you picked other problems like 
  • being a bicycle messenger on a crowded New York Street
  • writing a newspaper article on a newly developing situation
  • learning a new language based on real-world experience
  • identifying the most meaningful human events, among all the interactions between people in a large crowded room
then you would find that today's statistical / ML methods aren't so useful...

In terms of my own work with OpenCog, my goal is not to outdo CNNs or statistical MT on the particular problems for which they were developed.  The goal is to address general intelligence...

The recent successes  of deep learning technology and other machine learning / statistical learning approaches are exciting, in some cases amazing.  Yet these technologies address only certain aspects of the broader AI problem.

One hopes that the enthusiasm and resource allocation that the successes of these algorithms are bringing, will cause more attention, excitement and funding to flow into the AI and AGI worlds as a whole, enabling more rapid progress on all the different aspects of the AGI problem.



Thursday, February 06, 2014

Why Humans Are So Screwy



Aha!!! ... Last night I had the amusing and satisfying feeling that I was finally grokking the crux of the reason why we humans are so screwy -- I never saw it quite so clearly before!

Here's the upshot: A big factor making human beings so innerly complicated is that in our psyches two different sources of screwiness are layered on top of each other:

  1. The conflict between the results of individual and group (evolutionary) selection, encoded in our genome
  2. The emergence of civilization, to which we are not adapted, which disrupted the delicate balance via which tribal human mind/society quasi-resolved the above-mentioned conflict

I.e.: the transition to civilized society disrupted the delicate balance between self-oriented and group-oriented motivations that existed in the tribal person's mind.   In place of the delicate balance we got a bunch of self vs. group conflict and chaos -- which makes us internally a bit twisted and tormented, but also stimulates our creativity and progress.

Screwiness Source 1: Individual versus Group Selection


The first key source of human screwiness was best articulated by E.O. Wilson; the second was best articulated by Freud.  Putting the two together, we get a reasonably good explanation for why and how we humans are so complexly self-contradictory and, well, "screwy."

E.O. Wilson, in his recent book The Social Conquest of Earth, argues that human nature derives its complex, conflicted nature from the competitive interplay of two kinds of evolution during our history: individual and group selection.  Put simply:


  • Our genome has been shaped by individual selection, which has tweaked our genes in such a way as to maximize our reproductive success as individuals
  • Our genome has also been shaped by group selection, which has tweaked our genes in such a way as to maximize the success of the tribes we belonged to


What makes a reproductively successful individual is, by and large, being selfish and looking out for one's own genes above those of others.  What makes a successful *tribe* is, by and large, individual tribe members who are willing to "take one for the team" and put the tribe first.

Purely individual selection will lead to animals like tigers that are solitary and selfish.  Purely group selection will lead to borg-like animals like ants, in which individuality takes a back seat to collective success.  The mix of individual and group selection will lead to animals with a complex balance between individual-oriented and group-oriented motivations.

As Wilson points out, many of the traits we call Evil are honed by individual selection; and many of the traits we call Good are honed by group selection.

That's Screwy Human Nature, Part 1.


Good vs. Evil vs. Hierarchy-Induced Constraints 


These points of Wilson's tie in with general aspects of constraint in hierarchical systems.   This observation provides a different way of phrasing things than Wilson's language of Good vs. Evil.   As opposed to adopting traditional moral labels, I wonder if a better way to think about the situation might be in terms of the tension and interplay between
  • adapting to constraints

vs.

  • pushing against constraints and trying to get beyond them

In the context of social constraints, it seems that individual selection (in evolution) would lead us to push against social constraints to seek individual well-being; whereas group selection would lead us to adapt to the social constraints regardless of our individual goals...


Much great (and mediocre) art comes from pushing against the constraints of the times -- but it's critical to have constraints there to push against; that's where a lot of the creativity comes from. You could think about yoga and most sports similarly ... you're both adapting to the particularities of the human body, and trying to push the body beyond its normal everyday-life limits...

From the point of view of the tribe/society, those who push against the constraints too much can get branded as Evil and those who conform can get branded as Good..... But it all depends on what level you're looking at.... From the point of view of the human body, the cell that doesn't conform to the system will be branded as Evil (non-self) and eliminated by the immune system!!

In any hierarchical system, from the perspective of entities on level N, the entities on level N+1 impose constraints -- constraints that restrict the freedom of the level N entities in order to enable functionality on level N+1, but that also have the potential to guide the creativity of level N entities.  Stan Salthe's book Evolving Hierarchical Systems makes this point wonderfully.   In some cases, like the human body vs. its cells, the higher level is dominant and the creativity of the lower level entities is therefore quite limited.  In the case of human society vs. its members, the question of whether the upper or lower level dominates the dynamics is trickier, leaving more room for creativity on the part of the lower level entities (humans), but also making the lives of the lower level entities more diversely complex.

Screwiness Source 2: The Discontents of Civilization


Moving on -- Screwy Human Nature, Part 2 was described with beautiful clarity by Sigmund Freud in his classic book Civilization and its Discontents.

What Freud pointed out there is that neurosis -- internal mental stress, unhappiness, repression and worry -- is a result of the move from nomadic tribal society to sedentary civilized society.  In tribal societies, he pointed out, by and large people were allowed to express their desires fairly freely, and get their feelings out of their system relatively quickly and openly, rather than repressing them and developing complex psychological problems as a result.

A fascinating recent book recounting one modern linguist/missionary's contact with a modern Stone Age society in the Amazon, the Piraha, is Daniel Everett's Don't Sleep There Are Snakes.   A book I read in the 1980s, recounting an average guy from Jersey dropping his life and migrating to Africa to live with a modern Stone Age pygmy tribe in central Africa, is Louis Sarno's Song From the Forest.  (The photos below show Louis and some of his Bayaka friends.  Some recent news from Louis Sarno is here, including an intriguing recent video, a trailer for a forthcoming movie.) These accounts and others like them seem to validate Freud's analysis.  The tribal, Stone Age lifestyle tends not to lead to neurosis, because it matches the human emotional makeup in a basic way that civilization does not.




Wilson + Freud = Why We Are So Screwy


I full well realize the "noble savage" myth is just that -- obviously, the psychology of tribal humans was not as idyllic and conflict-free as some have imagined.   Tribal humans still have the basic conflict between individual and group selection embedded into their personalities.  BUT it seems to me that, in tribal human sociopsychology, evolution has worked out a subtle balance between these forces.  The opposing, conflicting forces of Self and Group are intricately intermeshed.

What civilization does is to throw this balance off -- and put the self-focused and group-focused aspects of human nature out of whack in complex ways.  In tribal society  Self and Group balance against each other elegantly and symmetrically -- there is conflict, but it's balanced like yin and yang.  In civilized society, Self and Group are perpetually at war, because the way our self-motivation and our group-motivation have evolved was right for making them just barely balance against each other in a tribal context; so it's natural that they're out of balance in complex ways in a civilization context.

For example, in a tribal situation, it is a much better approximation to say that: What's good for the individual is good for the group, and vice versa.   The individual and group depend a lot on each other. Making the group stronger helps the individual in very palpable ways (if a fellow hunter in the tribe is stronger for instance, he's more likely to kill game to share with you).  And if you become happier or stronger or whatever, it's likely to significantly benefit the rest of the group, who all directly interact with you and are materially influenced by you.   The harmony between individual interest and group interest is not perfect, but it's at least reasonably present ... the effects of individual and group selection have been tuned to work decently together.

On the other hand, in a larger civilized society the connection between individual and group benefit is far more erratic.   What's good for me, as a Hong Kong resident, is not particularly the same as what's good for Hong Kong.   Of course there's a correlation, but it's a relatively weak one.   It's reasonably likely that what's good for Hong Kong as a unit could actually make my life worse (e.g. raising taxes, as my income level is above average for HK).  Similarly, most things that are likely to improve my life in the near future are basically irrelevant to the good of Hong Kong; in fact, my AGI research work is arguably bad for all political units in the long term, as advanced AGI is likely to lead to the transcendence of nation-states.   There is definitely some correlation between my benefit and Hong Kong's benefit -- if I create a successful company here in HK, that benefits the HK economy.   But the link is fairly weak, meaning that my society is often going to push me to do stuff that goes against my personal interest; and vice versa.  This seems almost inevitable in a complex society containing people playing many different roles.

Another interesting case is lying.   Lying of course occurs in tribal societies just like in advanced civilizations -- humans are dishonest by nature, to some extent.   Yet, only in complex civilizations do we have a habit of systematically putting on "false fronts" before others.  This doesn't work so well if you're around the same 50 people all the time.   Yet it's second nature to all of us in modern civilization -- we learn in childhood to act one way at home, one way at school, one way around grandma, etc.

As we mature, the habit of putting on false fronts -- or as Nietzsche called them, "masks" -- becomes so integrated into our personalities that the fronts aren't even "false" anymore.   Rather, our personalities become melanges of subselves, with somewhat different tastes and interests and values, in a complex coopetition for control of our thoughts and memories.  This is complex and stressful, but stimulates  various sorts of creativity.

Sarno reports how the interaction of the Bayaka pygmies with civilization caused them to develop multiple subpersonalities.  A pygmy's personality while living the traditional nomadic lifestyle in the bush, may be very different from that same pygmy's personality while living in a village with Africans from other tribes, drinking alcohol and doing odd jobs for low wages.

Individually, we have a motive to lie and make others think we are different in various ways than we actually are.   Tribally, group-wise, there is a reason for group members to tell the truth -- a group with direct and honest communication and understanding is likely to do better on average, in many important contexts, because deception often brings with it lots of complexity and inefficiency.   The balance between truth and lying is wired into our physiology -- typical people can lie only a little bit without it showing in their faces.   But modern society has bypassed these physiological adaptations, which embody tribal society's subtle balance between self and group motivations, via the creation of new media like telephones, writing and the Internet, which bypass telltale facial expressions and open up amazing new vistas for systematic self-over-group dishonesty.   Then society, and the minds of individuals within it, must set up all sorts of defense mechanisms to cope with the rampant dishonesty.   The balance of self versus group is fractured, and complexity emerges in an attempt to cope, but never quite copes effectively, and thus keeps ramifying and developing.

In Freudian terms, civilization brought with it the split between the Ego and Super-ego -- between what we are (at a given point in time), and what we think we should be.  It also brought with it a much more complex and fragmented Ego than was present in tribal peoples.

What Wilson makes clear is: the pre-civilized human mind already had within it the split between the Self-motivation and Group-motivation.  Freud somewhat saw this as well, with his Id as a stylized version of the pure Self-motivation and his Ego going beyond this to balance Self versus Group.

The Freudian Ego and Super-ego are different ways of balancing Self versus Group.  The perversity and complexity of civilized society is that each of us is internally pushed to balance the conflict of Self vs. Group in one way (via our Ego, which is largely shaped for tribal society), while feeling we "should" be carrying out this balance in a different way (via our Super-Ego, which comes from civilized culture).  Of course these Freudian terms are not scientific or precisely defined, and shouldn't be taken too seriously.   But they do paint an evocative picture.

How much of this kind of inner conflict is a necessary aspect of being an intelligent individual mind living in a civilization?  Some, to be sure -- there is always going to be some degree of conflict between what's good for the individual and what's good for the group.  But having genomes optimized for tribal society, while living in civilized society, foists an additional layer of complexity on top of the intrinsic conflict.  The fact that our culture changes so much faster than our genomes, means that we are not free to seek the optimal balance between our current real-life Self and Group motivations, consistent with the actual society we are living in.  Instead we must live with methods of balancing these different motivations, that were honed in radically different circumstances than the ones we actually reside in and care about.

A Transhumanist Punchline


This is Benjamin Nathaniel Robot Goertzel's blog, so you knew there would be a transhumanist angle coming eventually, right? -- Once we achieve the ability to modify our brains and bodies according to our wishes, we will be able to adapt the way we balance Self versus Group in a much more finely-tuned and contextually appropriate way.

To the extent that layers of conflict within conflict are what characterize humanity, this will make us less human.  But it will also make us less perverse, less confused, and more fulfilled.

Our Screwiness Spurs Our Creativity and Progress


The punchier punchline, though, is that what is driving us toward the reality of amazing possibilities like flexible brain and body modification is -- precisely the screwiness I've analyzed above.

It's the creative tension between Self and Group that drove us to create sophisticated language in the first place.   One of the earliest uses of language, that helped it to grow into the powerful tool it now is, was surely gossip -- which is mainly about Self/Group tensions.

And our Self and Group aspects conspired to enable us to develop sophisticated tools.  Invention of new tools generally occurs via some wacky mind off in the corner fiddling with stuff and ignoring everybody else.  But, we do much better than other species at passing our ideas about new tools on from generation to generation, leveraging language and our rich social networking capability -- which is what allows our tool-sets to progressively improve over time.

The birth of civilization clearly grew from the same tension.   Tribal groups that set up farms and domesticated animals, in certain ecological situations, ended up with greater survival value -- and thus flourished in the group selection competition.  But individuals, seeking the best for themselves, then exploited this new situation in a variety of complex ways, leading to developments like markets, arts, schools and the whole gamut.  Not all of these new developments were actually best for the tribe -- some of the ways individuals grew to exploit the new, civilized group dynamics actually were bad for the group.  But then the group adapted, and got more complex to compensate.  Eventually this led to twisted sociodynamics like we have now ... with (post)modern societies that reject and psychologically torment their individualistic nonconformist rebels, yet openly rely on these same rebels for the ongoing innovation needed to compensate for the widespread dissatisfaction modernity fosters.

And the creativity spurred by burgeoning self/group tensions continues and blossoms multifariously.  Privacy issues with Facebook and the NSA ... the rise and growth and fluctuation of social networks in general ... the roles of anonymity and openness on the Net ... websites devoted to marital infidelity ... issues regarding sharing of scientific data on the Net or keeping it private in labs ... patents ... agile software development ... open source software licenses and processes ... Bill Gates spending the first part of his adult life making money and the second part giving it away.   The harmonization of individual and group motivations remains a huge theme of our world explicitly, and is even more important implicitly.

I imagine that, long after humans have transcended their legacy bodies and psychologies, the tension between Self and Group will remain in some form.  Even if we all turn into mindplexes, the basic tension that exists between different levels in any hierarchical system will still be there.   But at least, if it's screwy, it will be screwy in more diverse and fascinating ways!  Or beyond screwy and non-screwy, perhaps ;-)

Monday, January 27, 2014

Hawking's new thoughts on information & chaos & black hole physics...

Interesting new paper by Stephen Hawking, though I only half-understand it... (ok maybe 2/3 ...)

http://arxiv.org/pdf/1401.5761v1.pdf

Basically: He is discussing a certain case [stuff happening inside a black hole] where general relativity says something is not observable, but quantum theory says it is in principle observable....

Hawking's new solution is that the data escaping from inside the black hole is chaotically messed up, so that it's sort of in principle observable, but in practice too complicated to actually see...

This seems in line with my notion that quantum logic is for stuff that you cannot in principle measure -- where the YOU is highlighted ... i.e. this has to do with what you, as an information-processing system, have the specific capacity to measure without losing your you-ness...

hmmmm...

Tuesday, July 16, 2013

Robot Toddlers and Fake AI 4 Year Olds

Oh, the irony...

At the same time as my OpenCog project is running an Indiegogo crowdfunding campaign aimed at raising funds to create a robot toddler, by using OpenCog to control a Hanson Robokind robot...

... the University of Illinois's press gurus come out with a report touting an AI system that is supposedly as smart as a human 4 year old.

But what is this system, that is supposedly as smart as a 4 year old?  It's a program that answers vocabulary and similarity questions as well as a human 4 year old, drawing on MIT's ConceptNet database.

Whoopie!   My calculator can answer arithmetic questions better than I can -- does that make it a superintelligence? ;-D ....

A toddler is far more than a question-answering program back-ended on a fixed database, obviously....

This Illinois/MIT program is basically like IBM Watson, but for a different set of knowledge...

ConceptNet is an intriguing resource, and one of the programmers in our Addis Ababa OpenCog lab is currently playing with importing it into OpenCog....

But obviously, this Illinois/MIT software lacks the ability to learn new skills, to play, to experiment, to build, to improvise, to discover, to generalize beyond its experience, etc.....   It has basically none of the capabilities of the mind of a 4 year old child.

BUT... one thing is clear ... these universities do have excellent PR departments!

The contrast between their system -- a question-answering system based on MIT's ConceptNet knowledge base -- and the system OpenCog, Hanson and I are building is both dramatic and instructive.

The Illinois/MIT program is, they report, as good as a human 4 year old at answering vocabulary and similarity questions.  

OK, I believe that.   But: Big deal!   A calculator is already way better than a human 4 year old at answering arithmetic questions!

What we are after with our project is not just a system that passes certain tests as well as a human toddler.  We are after a system that can understand and explore the world, and make sense of itself and its surroundings and its goals and desires and feelings and worries, in the rough manner of a human toddler.  This is a wholly different thing.

The kind of holistic toddler-like intelligence we're after, would naturally serve as a platform for building greater and greater levels of general intelligence -- moving toward adult-level AGI....

But a question-answering system based on ConceptNet doesn't particularly build toward anything -- it doesn't learn and grow.  It just replies based on the data in its database.

It is unfortunate, but not terribly surprising, that this kind of distinction still needs to be repeated over and over again.  General intelligence - the ability to achieve a variety of complex goals in a variety of complex environments, including goals and environments not foreseen in advance by the creators of the intelligent system -- is a whole different kettle of fish than engineering a specialized intelligent system for a specific purpose.


The longer I work on AGI, the more convinced I am that an embodied approach will be the best way to fully solve the common sense problem.   The AI needs to learn common sense by learning to control a robot that does commonsensical things....  Then the ability to draw analogies and understand words will emerge from the AI's ability to understand the world and relate different experiences it has had.   Whereas, a system that answers questions based on ConceptNet is just manipulating symbols without understanding their meaning, an approach that will never lead to real human-like general intelligence.

The good news is, my OpenCog colleagues and I know how to make a robot that will achieve first toddler-level commonsense knowledge, and then full-scale human-adult level AGI.   And then what?

The less exciting news is, it's going to take a lot of work -- though exactly how many years depends on how well funded our project is.

Next Big Future just ran an extensive interview with me on these topics, check it out if you're curious for more information...


Monday, June 10, 2013

Physicists Rediscover Sheldrake's Morphic Fields ... and my Morphic Pilot Wave ...

Today Damien Broderick pointed out to me an Edge interview with physicist Lee Smolin, which led me to a fascinating article by Smolin titled "Precedence and freedom in quantum physics."

Smolin's article is deep and thought-provoking -- and overlaps greatly with prior thinking by some other folks, such as Rupert Sheldrake and Charles Peirce and myself.


Smolin's Principle of Precedence


Smolin explores augmenting the standard axiomatic foundation of quantum physics with an additional axiom, namely the

Principle of precedence: When a quantum process terminating in a measurement has many precedents, which are instances where an identically prepared system was subject to the same measurement in the past, the outcome of the present measurement is determined by picking randomly from the ensemble of precedents of that measurement.

Or as he puts it in his Edge interview, "nature is developing habits as it goes along."

His goal is to explore the possibility that the laws of nature can be viewed as accumulating historically via the principle of precedence, rather than being fixed and immutable laws...
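Just to nail down what the principle says operationally, here is a literal-minded toy rendering: a measurement on an "identically prepared" system draws its outcome uniformly from the recorded outcomes of past identical (preparation, measurement) pairs, falling back on the usual Born-rule probabilities only when there is no precedent.  The preparation labels and probabilities are invented; whether dynamics like this can actually reproduce quantum statistics in the limit of many precedents is exactly the kind of question Smolin's paper analyzes.

```python
import random

precedents = {}   # (preparation, measurement) -> list of past outcomes

def measure(preparation, measurement, born_probs):
    """born_probs: dict outcome -> probability; used only when no precedent exists."""
    key = (preparation, measurement)
    history = precedents.setdefault(key, [])
    if history:
        outcome = random.choice(history)              # pick randomly from precedents
    else:
        outcomes, probs = zip(*born_probs.items())    # novel case: fall back on QM
        outcome = random.choices(outcomes, weights=probs, k=1)[0]
    history.append(outcome)                           # today's outcome is tomorrow's precedent
    return outcome

# A made-up two-outcome measurement with 50/50 Born probabilities
for n in range(10):
    print(measure("spin-x up", "measure spin-z", {"up": 0.5, "down": 0.5}))
```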



Sheldrake's Morphic Fields


But this principle is awfully reminiscent of Rupert Sheldrake's (highly controversial) notion of morphic fields...

I propose that memory is inherent in nature. Most of the so-called laws of nature are more like habits. 

The idea is that there is a kind of memory in nature. Each kind of thing has a collective memory. So, take a squirrel living in New York now. That squirrel is being influenced by all past squirrels.

The habits of nature depend on non-local similarity reinforcement. Through morphic resonance, the patterns of activity in self-organizing systems are influenced by similar patterns in the past, giving each species and each kind of self-organizing system a collective memory.


Sheldrake's core idea regarding morphic fields is that, once a pattern occurs somewhere in the universe, it is more likely to occur elsewhere.

The parallel between Smolin's and Sheldrake's ideas is fairly obvious, and Bruce Sterling notes it in a comment on Smolin's Edge article:  "If nature 'forms habits,' then that's very Rupert Sheldrake"....

Both Smolin and Sheldrake are positing that when something has occurred in the universe, this increases the probability of similar things occurring in the future -- in a nonlocal way, separate from ordinary processes of physical causation...

I have no idea whether Smolin will appreciate this parallel, though.   Sheldrake has developed his morphic field idea fairly extensively as an explanation for psi phenomena, which are widely viewed with skepticism within the scientific community, although they are broadly accepted by the general public, and in my own opinion the evidence for their reality in many cases is pretty strong (see my brief page on psi here).


Peirce's Tendency to Take Habits



The same core idea that Smolin and Sheldrake have articulated can be found considerably earlier in the philosophy of Charles S. Peirce, who wrote in the late 19th century of the "tendency to take habits" as a key aspect of the universe, and opined that...

Logical analysis applied to mental phenomenon shows that there is but one law of mind, namely that ideas tend to spread continuously and to affect certain others which stand to them in a peculiar relation of affectibility. In this spreading they lose intensity, and especially the power of affecting others, but gain generality and become welded with other ideas.

...

Matter is but mind hide-bound with habit

...

The one intelligible theory of the universe is that of objective idealism, that matter is effete mind, inveterate habits becoming physical laws. But before this can be accepted it must show itself capable of explaining the tridimensionality of space, the laws of motion, and the general characteristics of the universe, with mathematical clearness and precision ; for no less should be demanded of every Philosophy.

It seems that Smolin is now attempting to push in precisely the direction Peirce was suggesting in the final quote given above...



Goertzel's Morphic Pilot Wave


I also notice an interesting parallel between Smolin's paper and my own paper on Morphic Pilot Theory from a few years ago.   In that paper, I was trying to connect Sheldrake's morphic field idea with quantum theory, and I posited that one could look at the tendency to take habits as an additional property of the "pilot waves" that Bohmian theory posits to underlie quantum reality.   Specifically, I argued that if one viewed pilot waves as being directed by simplicity, as quantified e.g. by algorithmic information (in which something is simpler if it can be computed by a shorter program), then one could derive a variant of the morphic field hypothesis as a consequence.

Lo and behold, reading Smolin's paper carefully, what do I find?   Smolin notes that, according to his theory:

  • Novel occurrences are in a sense maximally random
  • Occurrences that have happened many times before are shown, via the principle of precedence, to simply obey good old quantum mechanics
  • Occurrences that have happened only a few times before are not explained explicitly by his principle of precedence -- but may perhaps be explained by an additional principle stating that the universe is biased toward outcomes that are simpler in the sense of algorithmic information theory (he attributes this suggestion to some of his colleagues; a toy sketch of these three regimes is given below)
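
Here is the toy sketch of those three regimes promised above (Python; the "few precedents" threshold and the zlib-based simplicity proxy are my own inventions for the example -- this is emphatically not code from Smolin's paper):

  import random
  import zlib

  def choose_outcome(candidates, precedent_counts, rng, few=5):
      # How many times have situations of this kind already been resolved?
      total_precedents = sum(precedent_counts.get(c, 0) for c in candidates)
      if total_precedents == 0:
          # Novel case: no precedents at all, so choose (maximally) at random.
          return rng.choice(candidates)
      if total_precedents >= few:
          # Many precedents: re-sample the historical statistics,
          # i.e. reproduce ordinary quantum-mechanical frequencies.
          weights = [precedent_counts.get(c, 0) for c in candidates]
          return rng.choices(candidates, weights=weights, k=1)[0]
      # Few precedents: bias toward algorithmically simpler outcomes,
      # using compressed length as a crude simplicity proxy (as in the earlier sketch).
      weights = [2.0 ** (-8 * len(zlib.compress(c.encode("utf-8")))) for c in candidates]
      return rng.choices(candidates, weights=weights, k=1)[0]

  rng = random.Random(0)
  print(choose_outcome(["up", "down"], {}, rng))                        # novel: random
  print(choose_outcome(["up", "down"], {"up": 2, "down": 1}, rng))      # few: simplicity-biased
  print(choose_outcome(["up", "down"], {"up": 700, "down": 300}, rng))  # many: follows precedent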

Well, well, well....   Obviously Smolin did not read my speculative paper on quantum theory and psi, but he and his colleagues independently arrived at a similar conclusion to that paper.   So science often goes.

All this, of course, is still preliminary and speculative.  For one thing, I find the axiomatic foundation for quantum mechanics that Smolin chose a bit inelegant, though probably better than the Bohmian pilot waves I used in my own paper; and I would love to see how an algorithmic simplicity assumption can be integrated into the much prettier and more fundamental symmetry-based foundation for quantum mechanics recently articulated by Kevin Knuth and his colleagues.


But one does see, here, an interesting direction for bridge-building between quantum theory, morphic fields and psi phenomena.   The connection between psi and quantum mechanics has been discussed a lot, but I've never been convinced that quantum theory on its own can explain psi.  In my Morphic Pilot paper I suggested that augmenting quantum theory with an algorithmic information theory based morphic field type assumption might do the trick.   Without explicitly thinking about psi at all (so far as I can tell, anyway), Smolin has made an interesting move in the same direction.


Quantum Darwinism and State Broadcasting


After writing the above, another suggestion of Damien's led me indirectly to a paper by some different physicists (not collaborators of Smolin), which suggests that Quantum Darwinism (a recent addition to the pantheon of foundations for quantum physics) may be derivable from a phenomenon called

state broadcasting—a process aimed at proliferating a given state through correlated copies

This appears to me much like a different way of looking at Smolin's Principle of Precedence...

Of course, to get morphic resonance out of this, one would still need some addition such as Smolin's & my suggestion of an Occam's (Aristotle's) Razor-like simplicity principle for the case where there are not that many correlated copies...

All this also makes me wonder about the findings of Aerts, Atmanspacher and others regarding the necessity, in some cases, of modeling classical systems using quantum mathematics and logic.   Could it be the case that, whenever a system internally displays a morphic resonance type dynamic, it is best to model it using some variant of quantum math?



Lots of yummy food for thought!


Tuesday, May 21, 2013

RIP Ray Manzarek


What a bummer to read that Ray Manzarek has died.   

I was born in 1966, and the psychedelic rock of the late 1960s and early 1970s was the music I grew up on.   Later I became more interested in jazz fusion, bebop, classical music and so forth -- but the psychedelic 60s/70s music (Hendrix, Doors, Floyd, Zeppelin) was where my love for music started.  This was the music that showed me the power of music to open up the mind to new realities and trans-realities, to bring the mind beyond itself into other worlds....

Hendrix was and probably always will be my greatest musical hero -- but Ray Manzarek was the first keyboardist who amazed me and showed me the power of wild and wacky keyboard improvisation.   I now spend probably 30-45 minutes a day improvising on the keyboard (and more on weekends!).  I don't have Ray's virtuosity, but even so, keyboard improv keeps my mind free and agile and my emotions on the right side of the border between sanity and madness.  Each day I sit at my desk working, working, working -- and when too much tension builds up in my body or I get stuck on a difficult point, I shift over to the piano or the synth and jam a while.   My frame of mind re-sets, through re-alignment with the other cosmos the music puts my mind in touch with.

The Doors and Ray had a lot of great songs.  But no individual song is really the point to me.  The point is the way Ray's music opens up your mind -- the way, if you close your eyes and let it guide you, you follow it on multiple trans-temporal pathways into other realms, beyond the petty concerns of yourself and society ... and when you return your body feels different and you see your everyday world from a whole new view....

The Singularity, if it comes, will bring us beyond petty human concerns into other realms in a dramatic, definitive way.   Heartfelt, imaginative improvisation like Ray Manzarek's can do something similar, in its own smaller (yet in another sense infinite) way -- opening up a short interval of time into something somehow much broader.

As Ray once said:

“Well, to me, my God, for anybody who was there it means it was a fantastic time, we thought we could actually change the world — to make it a more Christian, Islamic, Judaic Buddhist, Hindu, loving world. We thought we could. The children of the ’50s post-war generation were actually in love with life and had opened the doors of perception. And we were in love with being alive and wanted to spread that love around the planet and make peace, love and harmony prevail upon earth, while getting stoned, dancing madly and having as much sex as you could possibly have.” 


w00t! ... those times are gone, and I was too young in the late '60s and early '70s to take part in the "getting stoned and having as much sex as you could possibly have" aspect (that came later for me, including some deep early-80s acid trips to Doors music), but my child self picked up the vibe of that era nonetheless ... all the crazy, creative hippies I saw and watched carefully back then affected more than just my hairstyle....   Somewhat like Steve Jobs, I see the things I'm doing now as embodying much of the spirit of that era.   Ray Manzarek and his kin of that generation wanted to transcend boring, limited legacy society and culture and revolutionize everything and make it all more ecstatic and amazing -- and so do I....


I recall a Simpsons episode where Homer gets to heaven and encounters Jimi Hendrix and Thomas Jefferson playing air hockey.  Maybe my memory has muddled the details, but no matter.   I hope very much that, post-Singularity, one of my uploaded clones will spend a few eons jamming on the keyboard with the uploaded, digi-resurrected Ray Manzarek.

Until then: Rest In Peace, Ray....


Sunday, May 19, 2013

Musing about Mental vs. Physical Energy


Hmmm....

I was talking with my pal Gino Yu at his daughter Oneira's birthday party yesterday … and Gino was sharing some of his interesting ideas about mental energy and force…

Among many other notions that I won't try to summarize here, he pointed out that, e.g., energy (in the sense he meant) is different from arousal as psychologists like to talk about it…  You can have a high-energy state without being particularly aroused -- i.e. you can be high-energy but still and quiescent.

This started me thinking about the relation between "mental energy" in the subjective sense Gino appeared to be intending, and "energy" in physics.

I have sometimes in the past been frustrated by people -- less precise in their thinking than Gino -- talking about "energy" in metaphorical or subjective ways, and equating their intuitive notion of "energy" with the physics notion of "energy."

Gino was being careful not to do this, and to distinguish his notion of mental energy from the separate notion of physical energy.   However, I couldn't help wondering about the connection.   I kept asking myself, during the conversation: Is there some general notion of energy which the physical and mental conceptions both instantiate?

Of course, this line of thinking is in some respects a familiar one; e.g. Freud's writings are full of ideas about mental energy, mostly modeled on equilibrium thermodynamics (rather than far-from-equilibrium thermodynamics, which would be a more appropriate analogical model for the brain/mind)…

Highly General Formulations of Force, Energy, Etc.

Anyway... here is my rough attempt to generalize energy and some other basic physics concepts beyond the domain of physics, while still capturing their essential meaning.

My central focus in this line of thinking is "energy", but I have found it necessary to begin with "force" ...

Force may, I propose, be generally conceived as that which causes some entity to deviate from its default pattern of behavior ...

Note that I've used the term "cause" here, which is a thorny one.   I think causation must be understood subjectively: a mind M perceives A as causing B if, according to that mind's world-model (a toy version of this check is sketched below):

  • A is before B
  • P(B|A) > P(B)
  • there is some meaningful (to M) avenue of influence between A and B, as evidenced e.g. by many shared patterns between A and B
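
Here is the promised toy version of that check, with everything in it (the world-model structure, the event names, the numbers) invented purely for illustration:

  from dataclasses import dataclass

  @dataclass
  class ToyWorldModel:
      times: dict       # event -> subjective time stamp
      probs: dict       # event -> P(event)
      cond_probs: dict  # (event, given) -> P(event | given)
      patterns: dict    # event -> set of patterns the mind associates with the event

      def perceives_as_cause(self, A, B, min_shared=1):
          before = self.times[A] < self.times[B]                            # A is before B
          raises_prob = self.cond_probs[(B, A)] > self.probs[B]             # P(B|A) > P(B)
          shared = len(self.patterns[A] & self.patterns[B]) >= min_shared   # avenue of influence
          return before and raises_prob and shared

  m = ToyWorldModel(
      times={"rain": 1, "wet_grass": 2},
      probs={"wet_grass": 0.3},
      cond_probs={("wet_grass", "rain"): 0.9},
      patterns={"rain": {"water", "sky"}, "wet_grass": {"water", "lawn"}},
  )
  print(m.perceives_as_cause("rain", "wet_grass"))   # True, by all three criteria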

So, moving on ... force quickly gives us energy…

Energy, I suggest (not too originally), may be broadly conceived as a quantity that

  • is conserved in an isolated system (or to say it differently: is added or subtracted from a system only via interactions with other systems)
  • measures (in some sense) the amount of work that a certain force gets done, or (in the case of potential energy) the amount of work that a certain force is capable of getting done

Now, in the case of Newtonian mechanics,

  • an entity's default pattern of behavior is to move in a straight line at a constant velocity (conservation of momentum), therefore force takes the form of deviations from constant momentum, i.e. it is proportional to acceleration
  • "mass" is basically an entity's resistance to force…
  • energy = force * distance (a tiny numerical check of this is given below)
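
To make the Newtonian case concrete, here is the tiny numerical check mentioned above (Python; the particular numbers are arbitrary), verifying that the work done by a constant force -- force times distance -- equals the kinetic energy the object gains:

  mass = 2.0          # kg
  force = 6.0         # N, constant, applied to an object starting from rest
  duration = 4.0      # s

  acceleration = force / mass                        # F = m*a  =>  a = 3 m/s^2
  distance = 0.5 * acceleration * duration ** 2      # d = a*t^2/2 = 24 m
  final_velocity = acceleration * duration           # v = a*t = 12 m/s

  work = force * distance                            # "energy = force * distance" = 144 J
  kinetic_energy = 0.5 * mass * final_velocity ** 2  # m*v^2/2 = 144 J

  print(work, kinetic_energy)   # both 144.0 -- the work done shows up as kinetic energy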

However, the basic concepts of force and energy as described above are pertinent beyond the Newtonian context, e.g. to relativistic and quantum physics; and I suppose they may have meaning beyond the physics domain as well.

This leads me to thinking about a couple related concepts...

Entropy maximization: When a mind lacks knowledge about some aspect of the world, its generically best hypothesis is the one that maximizes entropy (this is the hypothesis that will lead to its being right the maximum percentage of the time).   This is Jaynes' MaxEnt principle of Bayesian inference.
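
A tiny illustration of the MaxEnt prescription (Python; the candidate hypotheses are made up for the example): among distributions consistent with what the mind actually knows -- here, nothing beyond normalization -- pick the one with the highest Shannon entropy, which turns out to be the uniform distribution.

  from math import log2

  def shannon_entropy(dist):
      return -sum(p * log2(p) for p in dist if p > 0)

  # Three hypotheses about an unknown six-sided die, all properly normalized:
  hypotheses = {
      "uniform":     [1/6] * 6,
      "mild bias":   [0.25, 0.25, 0.15, 0.15, 0.10, 0.10],
      "strong bias": [0.70, 0.10, 0.05, 0.05, 0.05, 0.05],
  }

  for name, dist in hypotheses.items():
      print(name, round(shannon_entropy(dist), 3))

  # Knowing nothing else, MaxEnt says to adopt the highest-entropy hypothesis:
  best = max(hypotheses, key=lambda name: shannon_entropy(hypotheses[name]))
  print("MaxEnt choice:", best)   # "uniform"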

Maximum entropy production: When a mind lacks knowledge about the path of development of some system, its generically best hypothesis is that the system will follow the path of maximal entropy production (MEP).   It happens that this path often involves a lot of temporary order production; as Swenson said, "The world, in short, is in the order production business because ordered flow produces entropy faster than disordered flow"

Note that while entropy maximization and MEP are commonly thought of in terms of physics, they can actually be conceived as general inferential principles relevant to any mind confronting a mostly-opaque world.

Sooo... overall, what's the verdict?  Does it make sense to think about "mental energy", qualitatively, as something different from physical energy -- but still deserving the same word "energy"?   Is there a common abstract structure that both uses of the "energy" concept instantiate?

I suppose that there may well be, if the non-physical use of the term "energy" follows basic principles like I've outlined here.

This is in line with the general idea that subjective experiences can be described using their own language, different from that of physical objects and events -- yet with the possibility of drawing various correlations between the subjective and physical domains.  (Since in the end the subjective and physical can be viewed as different perspectives on the same universe … and as co-creators of each other…)

In What Sense Is Mental Energy Conserved?

But ... hmmm ... I wonder whether the notion of "mental energy" -- in folk psychology or in whatever new version we want to create -- really obeys the principles suggested above.

In particular, the notion of "conservation in isolated systems" is a bit hard to grab onto in a psychological context, since there aren't really any isolated systems ... minds are coupled with their environments, and with other minds, by nature.

On the other hand, it seems that whenever physicists run across a situation where energy may seem not to be conserved, they invent a new form of energy to rescue energy conservation!   Which leads to the idea that within the paradigm of modern physics, "being conserved" is essentially part of the definition of "energy."

Also, note that above I used the phrasing that energy "is conserved in an isolated system (or to say it differently: is added or subtracted from a system only via interactions with other systems)."   The alternate parenthetical phrasing may, perhaps, be particularly relevant to the mental-energy case.

(Note for mathematical physicists: Noether's Theorem shows that energy conservation ensues from temporal translation invariance, but it only applies to systems governed by Lagrangians, and I don't want to assume that about the mind, at least not without some rather good reason to....) 

Stepping away from physics a bit, I'm tempted to consider the notion of mental energy in the context of the Vedantic hierarchy, which I wrote about in The Hidden Pattern (here's an excerpt from Page 31 ...)


In a Vedantic context, one could perhaps view the Realm of Bliss as being a source of mental energy that is in effect infinite from the human perspective.   So when a human mind needs more energy, it can potentially open itself to the Bliss domain and fill itself with energy that way (thus perhaps somewhat losing its self, in a different sense!).   This highlights the idea that, in a subjective-mind context, the notion of an "isolated system" may not make much sense.

But one could perhaps instead posit a principle such as

Increases or decreases in a mind-system's fund of mental energy are causally tied to that mind-system's interactions with the universe outside itself.

This sort of formulation captures the notion of energy conservation without the need to introduce the concept of an "isolated system."    (Of course, we still have to deal with the subjectivity of causality here -- but there's no escaping that, except by ceasing to worry about causality altogether!)

But -- well, OK -- that's enough musing and rambling for one Sunday early afternoon; it's time to walk the dogs, eat a bit of lunch, and then launch into removing the many LaTeX errors remaining in the (otherwise complete) Building Better Minds manuscript....

And so it goes...

-- This post was written while listening to Love Machine's version of "One More Cup of Coffee" by Bob Dylan ... and DMT Experience's version of "Red House" by Jimi Hendrix.   I'm not sure why, but it seems a "cover version" sort of afternoon...