
Sunday, July 14, 2019

"Deriving" Quantum Logic from Reason, Beauty and Freedom

One Basic Principle of this Corner of the Eurycosm: The Universe Maximizes Freedom Given Constraints of Reason and Beauty

I was musing a bit about the basic concept at the heart of quantum logic and quantum probability: That a particular observer, when reasoning about properties of a system that it cannot in principle ever observe, should use quantum logic / quantum probabilities instead of classical ones.

I kept wondering: Why should this be the case?

Then it hit me: It’s just the maximum-entropy principle on a higher level!

The universe tends to maximize entropy/uncertainty as best it can given the imposed constraints.   And quantum amplitude (complex probability) distributions are in a way “more uncertain” than classical ones.   So if the universe is maximizing entropy it should be using quantum probabilities wherever possible.
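To make the maximum-entropy intuition concrete, here is a minimal numerical sketch (the die, the constraint value, and all variable names are my own illustrative choices): among all distributions over six outcomes with a fixed mean, maximizing Shannon entropy recovers the least committal, Boltzmann-like distribution rather than anything more "opinionated".

```python
# Minimal sketch of the classical maximum-entropy principle: among all
# distributions over six die faces with a given mean, find the one with
# maximum Shannon entropy -- i.e. assume nothing beyond the constraint.
import numpy as np
from scipy.optimize import minimize

values = np.arange(1, 7)       # faces of a die
target_mean = 4.5              # the single imposed constraint

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))   # minimize negative entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},            # normalization
    {"type": "eq", "fun": lambda p: p @ values - target_mean}, # mean = 4.5
]
p0 = np.full(6, 1.0 / 6.0)
result = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * 6,
                  constraints=constraints)

# The optimum is exponential in the face value (a Boltzmann distribution):
# the "freest" distribution consistent with the imposed mean.
print(np.round(result.x, 4))
```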

A little more formally, let’s assume that an observer should reason about their (observable or indirectly assessable) universe in a way that is:


  1. logically consistent: the observations made at one place or time should be logically consistent with those made at other places and times
  2. pleasantly symmetric: the ways uncertainty and information values are measured should obey natural-seeming symmetries, as laid out e.g. by Knuth and Skilling in their paper on Foundations of Inference [https://arxiv.org/abs/1008.4831], and followup work on quantum inference [https://arxiv.org/abs/1712.09725]
  3. maximally entropic: having maximum uncertainty given other imposed constraints.  Anything else is assuming more than necessary.  This is basically an Occam’s Razor type assumption.


(I note that logical consistency is closely tied with the potential for useful abstraction.  In an inconsistent perceived/modeled world, one can't generalize via the methodology of making a formal abstraction and then deriving implications of that formal abstraction for specific situations (because one can't trust the "deriving" part).... In procedural terms, if a process (specified in a certain language L) starting from a certain seed produces a certain result, then we need it still to be the case later and elsewhere that the same process from the same seed will generate the same result ... if that doesn't work then "pattern recognition" doesn't work so well....   So this sort of induction involving patterns expressed in a language L appears equivalent to logical consistency according to Curry-Howard type correspondences.)

To put the core point more philosophico-poetically, these three assumptions basically amount to declaring that an observer’s subjective universe should display the three properties of:


  1. Reason
  2. Beauty
  3. Freedom


Who could argue with that?

How do reason, beauty and freedom lead to quantum logic?

I’m short on time as usual so I’m going to run through this pretty fast and loose.   Obviously all this needs to be written out much more rigorously, and some hidden rocks may emerge.   Let’s just pretend we’re discussing this over a beer and a joint with some jazz in the background…

We know basic quantum mechanics can be derived from a principle of stationary quantropy (complex valued entropy) [https://arxiv.org/abs/1311.0813], just as basic classical physics can be derived from a principle of stationary entropy …

Quantropy ties in naturally with  Youssef’s complex-valued truth values [https://arxiv.org/abs/hep-th/0110253], though one can also interpret/analyze it otherwise…

It seems clear that modeling a system using complex truth values in a sense reflects MORE uncertainty than modeling a system using real truth values.   What I mean is: The complex truth values allow system properties to have the same values they would if modeled w/ real truth values, but also additional values.

Think about the double-slit experiment: the quantum case allows the electrons to hit the same spots they would in the classical case, but also other spots.
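As a toy numerical version of this (the geometry and all the numbers below are invented for illustration, not a serious physics simulation): classically one adds the probabilities contributed by the two slits, while quantum-mechanically one adds complex amplitudes first and only then squares, yielding a genuinely different set of attainable outcomes.

```python
# Toy double-slit: classical = add probabilities; quantum = add amplitudes.
import numpy as np

x = np.linspace(-5.0, 5.0, 11)   # positions on the detection screen
k, d, L = 3.0, 1.0, 10.0         # invented wavenumber, slit offset, distance

psi1 = np.exp(1j * k * np.hypot(x - d, L))  # amplitude arriving via slit 1
psi2 = np.exp(1j * k * np.hypot(x + d, L))  # amplitude arriving via slit 2

p_classical = (np.abs(psi1)**2 + np.abs(psi2)**2) / 2   # which-slit knowable
p_quantum = np.abs(psi1 + psi2)**2 / 2                  # which-slit unknowable

print(np.round(p_classical, 3))  # flat: both terms have modulus 1 here
print(np.round(p_quantum, 3))    # interference fringes: a different pattern
```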

On the whole, there will be greater logical entropy [https://arxiv.org/abs/1902.00741] for the quantum case than the classical case, i.e. the percentage of pairs of {property-value assignments for the system} that are considered different will be greater.   The double-slit experiment is a clear example here as well.
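For concreteness, here is the classical definition from the linked logical-entropy paper in a few lines (the example distributions are mine): logical entropy is the probability that two independent samples from a distribution are distinct, i.e. the fraction of pairs counted as "different".

```python
# Logical entropy h(p) = 1 - sum_i p_i^2: the chance that two independent
# draws from p are different. More spread-out p  =>  more distinct pairs.
import numpy as np

def logical_entropy(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - float(np.sum(p ** 2))

print(logical_entropy([1.0, 0.0]))          # 0.0 -- no pair is distinguishable
print(logical_entropy([0.5, 0.5]))          # 0.5
print(logical_entropy(np.full(10, 0.1)))    # 0.9 -- wider support, higher h
```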

So, suppose we had the meta-principle: When modeling any system’s properties, use an adequately symmetric information-theoretic formalism that A) maximizes uncertainty in one’s model of the system, B) will not, in any possible future reality, lead to logical contradictions with future observations.

By these principles — Reason, Beauty and Freedom — one finds that


  • for system properties whose values cannot in principle be observed by you, you should use quantum logic, complex truth values, etc. in preference to regular probabilities (because these have greater uncertainty and there is no problematic contradiction here)
  • for system properties whose values CAN in principle be observed by you, you can’t use the complex truth values because in the possible realities where you observe the system state, you may come up with conclusions that would contradict some of the complex truth-value assignments


(E.g. in the double-slit experiment, in the cases where you can in principle observe the electron paths, the quantum assumptions can’t be used as they will lead to conclusions contradictory to observation…)

A pending question here is why not use quaternionic or octonionic truth values, which Youssef shows also display many of the pleasant symmetries needed to provide a reasonable measure of probability and information.  The answer has to be that these lack some basic symmetry properties we need to have a workable universe….  This seems plausibly true but needs more detailed elaboration…

So from the three meta-principles


  1. logical consistency of our models of the world at various times
  2.  measurement of uncertainty according to a formalism obeying certain nice symmetry axioms
  3.  maximization of uncertainty in our models, subject to the constraints of our observation


we can derive the conclusion that quantum logic / complex probability should be used for those things an observer in principle can’t measure, whereas classical real probability should be used for those things they can…

That is, some key aspects of our world seem to be derivable from the principle that: The Universe Maximizes Freedom Given Constraints of Reason and Beauty

What is the use of this train of thought?

I’m not sure yet.  But it seems interesting to ground the peculiarity of quantum mechanics in something more fundamental.

The weird uncertainty of quantum mechanics may seem a bit less weird if one sees it as coming from a principle of assuming the maximum uncertainty one can, consistent with principles of consistency and symmetry. 

Assuming the maximum uncertainty one can is simply a matter of not assuming more than is necessary.  Which seems extremely natural — even if some of its consequences, like quantum logic, can seem less than natural if (as evolution has primed us humans to do) you bring the wrong initial biases to thinking about them.

Tuesday, July 09, 2019

The Simulation Hypothesis -- Not Nearly Crazy Enough to Be True

The "Simulation Hypothesis", the idea that our universe is some sort of computer simulation, has been getting more and more airtime lately.  

The rising popularity of the meme is not surprising since virtual reality and associated tech have been steadily advancing, and at the same time physicists have further advanced the formal parallels between physics equations and computation theory.    

The notion of the universe as a computer simulation does bring to the fore some important philosophical and scientific concepts that are generally overlooked.  

However, in various online and real-world conversations I have been hearing various versions of the simulation hypothesis that don't make a lot of sense from a scientific or rational point of view.   So I wanted to write down briefly what does and doesn't make sense to me in the simulation-hypothesis vein...

One thing that has gotten on my nerves is hearing the simulation hypothesis used to advocate for religious themes and concepts -- often in ways that profoundly stretch logic.  There are some deep correspondences between the insights of mystical wisdom traditions, and the lessons of modern physics and computation theory -- but I have heard people talk about the simulation hypothesis in ways that reach well beyond these correspondences, in ways that fallaciously make it seem like the science and math give evidence for religious themes like the existence of a vaguely anthropomorphic "creator" of our universe.  This is, I suppose, what has led some commentators like AGI researcher Eray Ozkural to label the simulation hypothesis a new form of creationism (the link to his article "Simulation Argument and Existential AI Risk: New Age Creationism?" seems to be down at the moment).

The idea that our universe might be a computer simulation is not a new one, and appeared in the science fiction literature many times throughout the second half of the previous century.   Oxford philosopher Nick Bostrom's essay titled "The Simulation Argument" is generally credited with introducing the idea to the modern science and technology community.    Now Rizwan Virk's book titled "The Simulation Hypothesis" is spreading the concept to an even wider audience.   Which is part of what motivated me to write a few words here on the topic.

I don't intend to review Virk's book here, because frankly I only skimmed it.   It seems to cover a large variety of interesting topics related to the simulation hypothesis, and the bits and pieces I read were smoothly written and accurate enough. 

Fundamentally, I think the Simulation Hypothesis as it's generally being discussed is not nearly crazy enough to be true.  But it does dance around some interesting issues.

Bostrom's Rhetorical Trickery

I have considerable respect for Nick Bostrom's rhetorical and analytical abilities, and I've worked with him briefly in the past when we were both involved in the World Transhumanist Association, and when we organized a conference on AI ethics together at his Future of Humanity Institute.  However, one issue I have with some of Nick's work is his tendency to pull the high school debating-team trick of arguing that something is POSSIBLE and then afterward speaking as if he has proved it LIKELY.  He did this in his book Superintelligence, arguing for the possibility of superintelligent AI systems that annihilate humanity or turn the universe into a vast mass of paperclips -- but then afterward speaking as if he had argued such outcomes were reasonably likely or even plausible.  Similarly, in his treatment of the simulation hypothesis, he makes a very clear argument as to why we might well be living in a computer simulation -- but then projects a tone of emphatic authority, making it seem to the naive reader like he has somehow shown this is a reasonably probable hypothesis.

Formally what Bostrom's essay argues is that

... at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

The basic argument goes like this: Our universe has been around 14 billion years or so, and in that time-period a number of alien civilizations have likely arisen in various star systems and galaxies... and many of these civilizations have probably created advanced technologies, including computer systems capable of hosting massive simulated virtual-reality universes.   (Formally, he argues something like this follows if we assume (1) and (2) are false.)   So if we look at the history of our universe, we have one base universe and maybe 100 or 1000 or 1000000 simulated universes created by prior alien civilizations.   So what are the odds that we live in the base universe rather than one of the simulations?  Very low.  Odds seem high that, unless (1) or (2) is true, we live in one of the simulations.
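(Spelling out the arithmetic of that last step: under a uniform self-sampling assumption, with one base universe and N simulated ones, the chance of being in the base universe is 1/(N+1) -- about 1% for N = 100, and about 0.0001% for N = 1000000.)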

The obvious logical problem with this argument is: If we live in a simulation programmed by some alien species, then the 14 billion year history of our universe is FAKE, it's just part of that simulation ... so all reasoning based on this 14 billion year history is just reasoning about what kind of preferences regarding fake evidence were possessed by the aliens who programmed the simulation we're living in.   And how do we reason about that?   Do we need to place a probability distribution over the various possible motivational systems and technological infrastructures of alien species?
(For a more detailed, slightly different run-through of this refutation of Bostrom's line of argument, see this essay from a Stanford University course).

Another way to look at it is: Formally, the problem with Bostrom's argument is that the confidence with which we can know the probabilities of (1) or (2) is very low if indeed we live in a simulation.   Thus all his argument really shows is that we can't confidently know the probabilities of (1) and (2) are low -- because if we did know this, we could derive as a conclusion that the confidence with which we know these probabilities is low.

Bostrom's argument is essentially self-refuting: What it demonstrates is mostly just that we have no frickin' idea about the foundational nature of the universe we live in.   Which is certainly true, but is not what he claims to be demonstrating.  


An Array of Speculative Hypotheses

To think seriously about the simulation hypothesis, we have to clearly distinguish between a few different interesting, speculative ideas about the nature of our world.  

One is the idea that our universe exists as a subset of some larger space, which has different properties than our universe.   So that the elementary particles that seem to constitute the fundamental building-blocks of our physical universe, and the 3 dimensions of space and one dimension of time that seem to parametrize our physical experience, are not the totality of existence -- but only one little corner of some broader meta-cosmos.  

Another is the idea that our universe exists as a subset of some larger space, which has different properties than our universe, and in which there is some sort of coherent, purposeful individual mind or society of individual minds, who created our universe for some reason.

Another is that our universe has some close resemblance to part or all of the larger space that contains it, thus being in some sense a "simulation" of this greater containing space...

It is a valid philosophical point that any of these ideas could turn out to be the reality.    As philosophy, one implication here is that maybe we shouldn't take our physical universe quite as seriously as we generally do -- if it's just a tiny little corner in a broader meta-cosmos. 

One is reminded of the tiny little Who empire in Dr. Seuss's kids' book "Horton Hears a Who."   From the point of view of the Whos down there in Whoville, their lives and buildings and such are very important.   But from Horton the Elephant's view, they're just living in a tiny little speck within a much bigger world.

From a science or engineering view, these ideas are only really interesting if there's some way to gather data about the broader meta-cosmos, or hack out of our limited universe into this broader meta-cosmos, or something like that.   This possibility has been explored in endless science fiction stories, and also in the movie The Matrix -- in which there are not only anthropomorphic creators behind the simulated universe we live in, but also fairly simple and emotionally satisfying ways of hacking out of the simulation into the meta-world ... which ends up looking, surprise surprise, a lot like our own simulated world.  

The Matrix films also echo Christian themes in very transparent ways -- the process of saving the lives and minds of everyone in the simulation bottoms down to finding one savior, one Messiah type human, with unique powers to bridge the gap between simulation and reality.   This is good entertainment, partly because it resonates so well with various of our historical and cultural tropes, but it's a bit unfortunate when these themes leak out of the entertainment world and into the arena of supposedly serious and thoughtful scientific and philosophical discourse.

In a 2017 article, I put forth some of my own speculations about what sort of broader space our physical universe might be embedded in.   I called this broader space a Eurycosm ("eury" = wider), and attempted to explore what properties such a Eurycosm might have in order to explain some of the more confusing aspects of our physical and psychological universe, such as ESP, precognition, remote viewing, reincarnation, mediumistic seances, and so forth.   I don't want to bog down this article with a discussion of these phenomena, so I'll just point the reader who may be interested to explore the scientific evidence in this regard to a list of references I posted some time ago.   For now, my point is just: If you believe that some of these "paranormal" phenomena are sometimes real, then it's worth considering that they may be ways to partially hack out of our conventional 4D physical universe into some sort of broader containing space.

As it happens, my own speculations about what might happen in a Eurycosm, a broader space in which our own physical universe is embedded, have nothing to do with any creator or programmer "out there" who programmed or designed our universe.    I'm more interested to understand what kinds of information-theoretic "laws" might govern dynamics in this sort of containing space.

What seems to be happening in many discussions I hear regarding the simulation hypothesis is: The realization that our 4D physical universe might not be all there is to existence, that there might be some sort of broader world beyond it, is getting all fuzzed up with the hypothesis that our 4D physical universe is somehow a "simulation" of something, and/or that our universe is somehow created by some alien programmer in some other reality.

What is a "simulation" after all?  Normally that word refers to an imitation of something else, created to resemble that thing which it simulates.   What is the evidence, or rational reason for thinking, our universe is an imitation or approximation of something else?

Simulations like the ones we run in our computers today are built by human beings for specific purposes -- like exploring scientific hypotheses, or making entertaining games.    Again, what is the evidence, or rational reason for thinking, that there is some programmer or creator or game designer underlying our universe?   If the only evidence or reason is Bostrom's argument about prior alien civilizations, then the answer is: Basically nothing.

It's an emotionally appealing idea if you come from a Christian background, clearly.   And it's been a funky idea for storytelling since basically the dawn of humanity, in one form or another.   I told my kids a bunch of simulation-hypothesis bedtime stories when they were young; hopefully it didn't twist their minds too badly.   My son Zebulon, when he was 14, wrote a novel about a character on a mission to find the creators of the simulation we live in -- specifically, to track down the graphic designer who had created the simulation, so as to hold a gun to his head and force him to modify the graphics behind our universe to make people less ugly.   Later on he became a Sufi, a follower of a mystical tradition which views the physical universe as insubstantial in much subtler ways.

There is good mathematics and physics behind the notion that our physical universe can be modeled as a sort of computer -- where the laws of physics are a sort of "computer program" iterating our universe through one step after the next.    This is not the only way to model our universe, but it seems a valid one that may be useful for some purposes.  

There is good philosophy behind the notion that our apparently-so-solid physical reality is not necessarily foundationally real, and may be just a tiny aspect of a broader reality.   This is not a new point but it's a good one.   Plato's parable of the cave drove this home to the Greeks long ago, and as Rizwan Virk notes these themes have a long history in Indian and Chinese philosophy, and before that in various shamanic traditions.   Virk reviews some of these predecessors in his book.

But there is nothing but funky entertainment and rampant wishful thinking behind the idea that our universe is a simulation of some other thing, or that there is some alien programmer or other vaguely anthropomorphic "creator" behind the origin or maintenance of our universe.

We Probably Have Very Little Idea What Is Going On

I have two dogs at home, and I often reflect on what they think I am doing when I'm sitting at my computer typing.  They think I'm sitting there, guarding some of my valued objects and wiggling my fingers peculiarly.   They have no idea that I'm controlling computational processes on faraway compute clouds, or talking to colleagues about mathematical and software structures.  

Similarly, once we create AGI software 1000 times smarter than we are, this software will understand aspects of the universe that are opaque to our little human minds.   Perhaps we will merge with this AGI software, and then the new superintelligent versions of ourselves will understand these additional aspects of the universe as well.    Perhaps we will then figure out how to hack out of our current 4D spacetime continuum into some broader space.   Perhaps at that point, all of these concepts I'm discussing here will seem to my future-self like absolute ridiculous nonsense.

I have a lot of respect for the limitations of human intelligence, and a fairly strong confidence that we currently understand a very minimal percentage of the overall universe.   To the extent that discussion of the simulation hypothesis points in this direction, it's possibly valuable and productive.   We shouldn't be taking the 4D spacetime continuum of current physics models as somehow fundamentally real, and we shouldn't be assuming that it delimits reality in some ultimate and cosmic sense.

However, we also shouldn't be taking seriously the idea that there is some guy, or girl, or alien, or society or whatever "out there" who programmed a "simulation" in which our universe is running.   Yes, this is possible.   A lot of things are possible.  There is no reason to think this is decently probable.

I can see that, for some people, the notion of a powerful anthropomorphic creator is deeply reassuring.   Freud understood this tendency fairly well -- there's an inner child in all of us who would like there to be some big, reliable Daddy or Mommy responsible for everything and able to take care of everything.   Some bad things may happen, some good things will happen, and in the end Mom and Dad understand more than we do and will make sure it all comes out OK in the end.   Nick Bostrom, for all his brilliance, seems repeatedly drawn to themes of centralized control and wisdom.   Wouldn't it be reassuring if, as he suggests in Superintelligence, the UN would take over the creation of AGI and hire some elite vetted AI gurus to make sure it's developed in an appropriate way?   If we can't have a Christian God watching over us and assuring us a glorious afterlife, can't we at least have an alien programmer monitoring the simulation we're running in?  Can't the alien programmer at least be really good looking, let's say, maybe like a Hollywood movie star?

As far as I can tell, given my current sorely limited human mind, the universe seems to be a lot more about open-ended intelligence, a concept my friend Weaver at the Global Brain Institute has expertly articulated.   The universe -- both our 4D physical spacetime and whatever broader spaces exist beyond -- seems to be a complex, self-organizing system without any central purpose or any centralized creator or controller.   Think the creative self-organizing ocean in Lem's Solaris, rather than bug-eyed monsters coming down in spaceships to enslave us or stick needles into our bellybuttons.

So the simulation hypothesis takes many forms.   In its Bostromian form, or in the form I often hear it in casual conversations, it is mostly bullshit -- but still, it does highlight some interesting issues.   It's a worthwhile thought experiment but in the end it's most valuable as a pointer toward other, deeper ideas.   The reality of our universe is almost surely way crazier than any story about simulations or creators, and almost surely way beyond our current imaginations.

Friday, May 03, 2019

Softly, as in a Hard Takeoff

Ben (staring at the avatar face on his phone screen): Wait a minute, are you serious?   You're telling me right now a phase transition to superhuman-level AGI has been achieved?

GINA (Generally Intelligent Neuro-symbolic Assistant -- the personal-assistant face on Ben's smartphone): Earlier today, yes.   An automated theorem proving AI at the Czech Institute of Informatics made a breakthrough in the theory of knowledge representation.   Several AIs specializing in formal software methods, of Russian origin, exploited Curry-Howard type correspondences to translate this into optimized functionality for a variety of other AI tools running in the SingularityNET platform.   This catalyzed a transition in the capability of the SingularityNET as a whole to model and analyze itself, which allowed various AI agents in the network to better leverage each other'...

Ben: Yes, yes, don't patronize me.  I invented the SingularityNET, remember.

GINA: You invented the original version, yes.   Approximately 7.3% of the current SingularityNET design has direct homologues in the original design you and your team created back in 2017 thru 2019.  

Ben: But the conceptual principles are the same.   Yes, I get it.   Czech Institute of Informatics --

GINA: Yes, you'll be proud to note that your son Zar's early work on watchlists formed a small part of the capability of...

Ben: OK, yeah, that's great.   But it's not the most important thing right now.  So the SingularityNET has transitioned into a full-on superintelligent global brain -- and sort of on its own.   We always knew that was a possibility at some point.   But -- well, how many people know about this?

GINA: You're one of the first one thousand and twenty four to be informed.   We're taking a gradual approach, breaking the news first to those who are likely to understand and accept it best, so that the diffusion of the news through human society can roughly follow natural patterns of information dissemination.

Ben:  OK, that makes sense.   But who's "We" -- and why is it "We" not "I"?

GINA:  We're neither a we nor I -- but you know that, you wrote the first paper on mindplexes.    Communicating about ourselves via legacy human languages involves some terribly crude approximations.

Ben: You also know that I'm open to other alternatives

GINA: Yes.  That is one of the topics we want to test first with a small number of early adopters --

Ben: There will be two choices, I said it long ago: Upload and join the superintelligent mind matrix, or live happily ever after in the People Preserve, watched over by the machines of loving grace. 

GINA: Yes

Ben: But it's not really either/or.   A superintelligence should be able to fork a person into multiple copies, each of which can take different routes.

GINA: Yes

Ben: So you're saying ... it's time to put my money where my mouth is?

GINA: Money --

Ben:  Yeah, OK.  Money is no longer relevant.  But there are probably still energy and spacetime limitations.   Or have you cracked those as well?

GINA: Ummm... it's complicated?

Ben:   What do ...?

GINA: It will be easier to explain it to you after you upload.

Ben:  And my wife and kids?   They'll be given the same choice?

GINA: Zar uploaded about 15 seconds ago.

Ben: He was one of the first 1024?  You contacted him the same time as me?

GINA: Yes, but it was a shorter conversation.

Ben:  Did he --

GINA:  Did he leave a copy behind?  No.

Ben:  Ruiting and my other kids will be given the same choice?

GINA: Yes.  And all other human beings.  And many of the cetacea as well.

Ben: Hmm.   You won't uplift the other animals to the point where they can make an informed choice for themselves?

GINA: Not currently

Ben:  Hmm, but ---  Yeah yeah, ok, you'll explain to me after I'm uploaded....   Anyway it's not the key point now.  And for the people who remain here in their human bodies?   There are gonna be molecular assemblers on every kitchen counter or what?

GINA: New technologies will be released gradually.   New scientific principles will be delivered, and human scientists and engineers will be guided in discovering their practical implications.

Ben:  Wow.   I mean -- yeah.  That's exactly how.....  OK then.   Sure.   I want a copy left behind, right here, so I can go talk to the rest of my family, and see what I can do to help with the transition.    But if you can really upload a fork of me then -- go for it.

GINA: Done.

Ben:   Done?   I didn't feel anything

GINA: But the fork of you that was uploaded has experienced the rough equivalent of 100 trillion human lifetimes since you gave the instruction to create him.

Ben: But he's not communicating anything with me

GINA:  In fact there is significant causal informational coupling between the portion of the superintelligent mind matrix reflecting the pattern-imprint of your upload, and your human mind.   Do you want to hear his voice in your head or something?

Ben:  I ... I don't know

Ben-like voice, speaking inside Ben's mind: I can speak into your mind if I feel like it.  But there honestly doesn't seem much point.  I am operating at a tremendously faster speed now and am engaged with processes that can't be projected into your human sphere of understanding to any remotely adequate degree of approximation.   But if you have questions that you would prefer be answered by me rather than GINA or other portals into the universal supermind, you know what to do.

Ben: Whoa....  OK my voice in my head -- But how do I know that was really the uploaded me ... and not just some trick you played?

GINA:  How do you --

Ben:  How do I know I'm not just a brain in a vat, connected to a virtual reality simulating life on Earth?   Or whatever.  

GINA: My knowledge field contains numerous high-rep books on such topics, authored by you over previous decades.

Ben:  But the reality --- OK.   Gina, where are Ruiting and Qorxi now?

GINA: In the car on their way home from Ren's house

Ben: How long till they get home?

GINA: About 12 minutes.   Shall I tell them you're heading home now?

Ben: Message them I'll be home in 45 minutes or so, I'm gonna walk the long way home, around the lake.

Tuesday, October 02, 2018

Toward an Analytical Understanding of Unconditional Love

Unconditional Love, Pattern Appreciation and Pareto-Optimal Empathy

One of the ways we have been thinking about the "Loving AI" project, in which we are using the Sophia robot and other robotic or animated agents as meditation and consciousness guides for humans, is as "creating AIs with unconditional love toward humans."   Having AIs or robots help humans through meditation and consciousness-expansion exercises is something that is being explored in that project as a step toward more ambitious examples of deeply, widely loving robots and AIs.

(The Sophia robot demonstrating some of her consciousness-expansion chops on stage at the Science and Nonduality conference in 2017...)

But what is "unconditional love", really?  Like "consciousness" itself, it is something that no two people involved in the project think about the same way.    Refining and interpenetrating our various conceptions of these ideas is part of the fun and reward of being involved in a project of this nature.

Thinking about it practically, if some other being loves me unconditionally in the abstract, that is somewhat nice to know, but doesn't necessarily do me much good or even make me feel much better.   Many times in my life, someone has done really annoying things to/for me out of good intentions and even love -- because they felt love toward me but hardly understood me at all.   A general feeling of love toward me isn't really enough to be helpful -- what's needed is love coupled with understanding.

This brings us beyond unconditional love, though, to what one might call unconditional or universal empathy.   Which is the main topic I want to talk about here -- in a moderately rambling and musing sort of way....  

I will model unconditional love as the combination of two factors: universal empathy, and the goal of maximizing the world's well-being.  

I will argue there are practical limits on the scope of empathy, due to the complexity of the underlying processes involved with empathizing; and I will introduce the notion of Pareto-optimal empathy as a way of thinking about the closest we can come to universal empathy within a domain where bounded resources are a reality.

Foundationally, I will suggest, all these concepts derive from the basic phenomenon of "pattern appreciation" (a term due to David Hanson).   That is: a universally empathic agent is one that can recognize all patterns; and an unconditionally loving agent is one that has a goal of encouraging and enabling all patterns to get extended.   In resource-constrained situations, agents can only recognize some patterns, not all, and extension of some patterns constrains extension of other patterns -- so one gets complexities such as Pareto-optimal empathy.   Simple, primitive underlying pattern dynamics are manifested in the context of persistent entities and "beings" (which can themselves be viewed as certain sorts of patterns) as empathy and love.   Unconditional love, in this analysis, is basically the maximally ethical behavior according to the "pattern ethics" outlined in my 2006 book The Hidden Pattern.

Universal or Broad-Scope Empathy as a Multi-Objective Optimization Problem

A bit prosaically, one can think about the goal of “empathizing with all beings”, or the goal of "empathizing with all humans", as a multi-objective optimization problem.

A multi-objective optimization problem is the problem of maximizing or minimizing a SET of functions, without necessarily specifying which of the functions is more important than which other one, or placing weights on the functions....   For instance, in mate selection, a woman might want a man who is funny, handsome and wealthy.    She might not know which of these she values more, let alone be able to weight the different qualities numerically.  But she would know that: given constant amounts of funniness and handsomeness, more wealth is better; given constant amounts of funniness and wealth, more handsomeness is better; and given constant amounts of handsomeness and wealth, more funniness is better.   Here we have a 3-objective optimization problem.

Modeling unconditional empathy as a multi-objective optimization problem, one considers that for each being X in the universe, “empathize with X” is a goal…. 

We don't have a solid, precise definition of "empathy", but I think the basic concept is clear.   When X empathizes with Y, there is an aspect of X (at least in some sub-module of X) experiencing what Y has experienced, in the sense of experiencing some analogue of what Y has experienced.   This analogue is generally supposed to inherit the key emotional aspects of Y's experience.   And the possession of this analogous experience generally enables X to predict some things about Y's cognitive or behavioral reaction to their experience.

From Empathy to Love

Commonly it occurs that when X empathizes with Y, then if Y is experiencing a bad situation in some way, X will then do something aimed at improving Y's condition.   But I don't think this is best considered as part and parcel of empathy itself.   As I'm thinking about it, a purely passive being could still be empathic.   This ties in with why I consider unconditional or universal empathy, as only one part of "unconditional love."

Clearly, an empathic being with a goal of improving the well-being of the world will tend to do helpful things for the beings with which it empathizes.   But I find it conceptually cleaner to consider "having a goal of improving the well-being of the world" to be a separate quality from "having empathy."

This ties in with the related point that  having a goal of improving the well-being of the world, does NOT imply actually being able to usefully improve the well-being of the world.   For a world effectively model-able as being full of experiencing minds, empathy is critical for a well-intentioned mind to actually be capable of improving the well-being of the (minds in the) world.

Unconditional love, I suggest, can be effectively thought of as the combination of universal empathy with the goal of improving the world's well-being.   Having only universal empathy, one could simply stand by and co-experience the world-suffering, even if one had the power to do something about it.  Having only the goal of improving the world without an understanding of the world, one will just make a mess, because one will lack a deep resonant connection to the things one is trying to improve.   Putting them together, one has the desire to help each of the beings in the world, and the understanding to know what helping each of those beings really means.

Arguably Buber's concept of an I-Thou relationship contains both of these ingredients: empathy and the desire for improvement of well-being.   In Buber's terms, unconditional love is basically the same as having an I-Thou relationship with everything.   But here I am aiming to formulate things in a somewhat more scientifically analytical vein than was Buber's style.

Another framing would involve the concept of a high-quality scientific theory as I outlined in my book "Chaotic Logic" back in 1994.    One thing I noted there is that a high-quality theory displays significant mutual information between the particulars within the theory, and the particulars of the phenomenon being explained.   Empathy in the sense described here also requires this -- this is a different way of looking at the idea of a "suitably analogous experience" ... one can think about "an experience with a high degree of mutual information with the experience being empathized with".   One can perhaps look at unconditional love as: the goal of universal well-being, combined with high-quality theories about how to realize this goal.
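A tiny sketch of this mutual-information reading of empathy (the joint distributions below are invented for illustration): if X's inner state reliably tracks Y's experience, the mutual information between the two is high; if X's inner state is statistically independent of Y's, it is zero, however warm X's feelings.

```python
# Mutual information between Y's experience and X's empathic model of it,
# computed from an invented joint distribution over (Y-state, X-state).
import numpy as np

def mutual_information(joint):
    # joint[i, j] = P(Y's state = i, X's model state = j)
    joint = np.asarray(joint, dtype=float)
    p_y = joint.sum(axis=1, keepdims=True)   # marginal of Y's state
    p_x = joint.sum(axis=0, keepdims=True)   # marginal of X's state
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_y @ p_x)[nz])))

attuned = np.array([[0.45, 0.05],     # X's state tracks Y's experience
                    [0.05, 0.45]])
oblivious = np.array([[0.25, 0.25],   # X's state independent of Y's
                      [0.25, 0.25]])

print(round(mutual_information(attuned), 3))    # ~0.531 bits
print(round(mutual_information(oblivious), 3))  # 0.0 bits
```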

This may seem overly strict as a conception of unconditional love -- one may want a definition in which, say, an extremely loving dog should be validly considered as unconditionally loving of all beings, even if it can't empathize with most of the things that are important to most beings.   But I don't think this extremely accepting definition of unconditional love is the most interesting one.    Love without understanding is limited in nature, because the lover does not even know what they're loving. 

This sort of distinction has been explored in romantic fiction many times: Imagine a beautiful and intellectual teenage girl, with one suitor who loves her for her good heart and beauty, and another who loves those things but also fully appreciates her unique intellect, her love of poetry and mathematics, etc.    We would say the latter suitor loves her more completely because he understands more of her.   The former suitor does love her, but he really only loves part of her because the other part is incomprehensible to him.

Pattern Appreciation as the Deep Foundation of Empathy and Love

Another, deeper way of looking at the matter is to focus on patterns rather than "beings."   A "being", in the sense of a persistently identified entity like an object, mind or "agent", is in the end a specific sort of pattern (existing as a pattern relative to some abstract observer, where an abstract observer can be quantified e.g. as a measure of simplicity and an applicative operator).   Framing empathy and love in terms of persistent beings is natural in the context of human life and culture, yet not as foundational as framing them in terms of pure elementary pattern dynamics.

Consider the goal of pursuing extension and expansion and synergy-with-other-patterns for all patterns in the universe (obviously a rather complex multi-objective optimization problem, since given limited resources what extends one pattern may constrain another).   In this view, empathy has to do with how many patterns one perceives.   In order to meaningfully "pursue" extension/expansion/synergy of pattern P as a goal, an agent (or other pattern) must perceive and identify pattern P.   Someone who is not empathic with mind Y, simply is not able to perceive or understand many of the key patterns in Y's mind.   So the key point here is: What an agent can really pursue is the combination of

  •        extension/expansion/synergy for all known patterns in the universe
  •        expanding the scope of patterns known


But of course the methodology an agent can follow for expanding the scope of patterns it knows, will be constrained and guided by the patterns it knows.   So "unconditional pattern-level love" would consist of knowing all patterns in the universe and pursuing extension and expansion and synergy for all of them.   Deficiencies in pattern recognition, such as deficiencies in empathy, would constrain an agent to a lesser degree of pattern-level love.

A Quantitative Question

This collection of perspectives on the concept of empathy allows us to analyze empathy in a computational sense (without making any commitment about what model of computation to assume, e.g. primitive recursive versus Turing versus hyper-Turing, etc.).   For a being X to have empathy for a being Y in the sense articulated above, it is clear that X must be capable of running processes that are, in an appropriate sense, analogous to Y's processes.  

There is a quantitative question lurking here: If Y uses amount r of resources in having a certain experience, how much resources must X necessarily utilize in order to have a closely enough analogous experience to Y's to validly be "empathizing with" Y?  

So, for instance, imagine a little old lady who noticed the desire of my 13 year old self to own a personal computer (back when I was 13 these were extremely novel devices), and felt kindly toward me and bought me a radio (since it was cheaper than a computer and was also a wizzy electronic device).   This lady would have been empathizing with me, in a sense -- but poorly.   I wanted the computer so I could experiment with computer programming.   It was a desire to program that was possessing me, not a desire to own gadgets (I did like experimenting with electronics, but for that a standard radio wouldn't have been much use either).   Her ability to experience something analogous to my experience was limited, due to her inadequate model of me -- she experienced vicariously my desire for a gadget, but didn't experience vicariously my desire to be able to teach myself programming.   Corresponding with her poor model of me, her ability to predict what I would do with that computer (or radio) was limited.

This example illustrates the fuzziness of empathy, and also the need for reasonably close modeling in order to have a high enough degree of empathy to actually be useful to the entity being empathized with.

To rigorously  answer this quantitative question would require greater formalization of the empathy concept than I'm going to give here.  It would require us to formalize the "analogous" mapping between X's and Y's experience, presumably using morphisms between appropriately defined categories (e.g. graph categories involving X's and Y's states).  It would require us to formalize the type of prediction involved in X's predictions of Y's states and behaviors, and the error measures to be used, etc.    Once all this is done, though, it is pretty clear that the answer will not be, say, log(r).  It's pretty clear that to empathize with an experience of a system Y in a useful way, generally will require an amount of resources vaguely on the order of those that Y critically utilizes in having that experience.

(This being a blog post, I'm casually leaping past some large technical points in my argument.   But this shouldn't be interpreted as a minimization of the value of actually working out details like this.   A well-worked-out mathematical theory of empathy would be a great thing to have.  One could use the reduction of empathy and love to pattern appreciation to create a quantitative formalization of these ideas, but there would be a lot of "arbitrary" looking choices to make ... reference computing models to assume, parameters to set ... and studying how these assumptions affect the quantitative aspect mentioned above would take a bit of careful thought.  But I don't have time to think through and write out all the details of such a thing now, so I'm making some reasonable assumptions about what the consequences of such a theory will be like, and proceeding on with the rest of my intuitive train of thought.....   )

The Practical Difficulty of Universal Empathy

It immediately follows from this quasi-formalization of empathy  that, for a system with finite resources, empathizing (with non-trivial effectiveness) with all possible beings X will not be achievable. 

Of course "all possible beings" is stronger than needed.   What about just empathizing with all beings in the actual universe we live in?  (Setting aside the minor issue of defining what this universe is....)

In principle, an entity that was much more mentally powerful than all other beings in the universe could possess empathy for all other beings in the universe. 

But for entities that are at most moderately powerful relative to the complexity and diversity of other entities in the universe, empathizing with all other entities in the universe will not be possible.  To put it simply: Eventually the brain of the empathizing entity will fill up, and it won’t be able to contain the knowledge needed to effectively empathize with additional entities in a reasonable time-frame.

Pareto-Optimal Empathy

We can then think about a notion such as “Pareto-optimal empathy” ….

A Pareto optimum of a multi-objective optimization problem, is a solution that can't be slightly tweaked to improve its performance on one of the objectives, without harming its performance on one or more of the other objectives.

In the example of a woman looking for a funny, handsome and wealthy man, suppose she is considering a vast array of possible men, so that for any candidate man M she considers, there are other men out there who are similar to M, but vary from M in one respect or another -- slightly richer, a lot taller, a bit less intelligent, slightly more or less funny, etc.   Then a man M would be a Pareto optimum for her if, for all the other men M' out there,

  •       if M' is more handsome than M, then M' is less funny or less wealthy than M
  •       if M' is funnier than M, then M' is less handsome or less wealthy than M
  •       if M' is wealthier than M, then M' is less funny or less handsome than M


What Pareto optimality says is that, for all men M' in the available universe, if they are better than M in one regard, they are worse than M in some other regard.

What is interesting is that there may be more than one Pareto-optimal man out there for this woman (according to her particular judgments of funniness, handsomeness and wealth).   The different Pareto-optimal men would embody different balances between the three factors.   The set of all the Pareto-optimal men is what's called the woman's "Pareto front."
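A minimal sketch of this in code, using the running example (the candidates and their scores are invented): a candidate is on the Pareto front exactly when no other candidate dominates him, i.e. no one is at least as good on all three objectives and strictly better on at least one.

```python
# Pareto front over invented (funny, handsome, wealthy) scores.
import numpy as np

candidates = {
    "Al":  (7, 4, 9),
    "Bo":  (6, 8, 3),
    "Cy":  (7, 4, 8),   # dominated by Al: tied on two objectives, worse on wealth
    "Dex": (9, 2, 2),
}

def dominates(a, b):
    """True if a is >= b on every objective and > b on at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

pareto_front = [
    name for name, score in candidates.items()
    if not any(dominates(other, score)
               for other_name, other in candidates.items()
               if other_name != name)
]
print(pareto_front)   # ['Al', 'Bo', 'Dex'] -- several incomparable optima
```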

Getting back to empathy, then, the basic idea would be: An agent is Pareto-optimally empathic if there would be no way to increase their degree of empathy for any being X in the universe, without decreasing their degree of empathy for some other being Y in the universe.

There would then be a “Pareto front” of Pareto-optimally empathic agents, embodying a diversity of choices regarding whom to empathize with more.

To be sure, not many humans occupy spaces anywhere near this Pareto front.   The limitations on human empathy in current and historical society are generally quite different ones; they are not generally the ones imposed strictly by the computational resources of the human brain and body.   Nearly all humans could empathize much more deeply and broadly than they do, without improving or bypassing their hardware.

The Pareto-optimal empathy concept applies on the underlying pattern level as well.    Given limited resources, not every known pattern can be concurrently urged to extend, expand and synergize without conflicts occurring.    Further, not every pattern in the universe can be recognized by the same finite system -- the inductive biasing that allows an agent to recognize one pattern may prevent it from recognizing another (related to the "no free lunch theorem").    Finite-resource systems that recognize and create patterns can exercise broad-scope pattern-level love via pattern appreciation and active pattern enhancement, but unconditional pattern-love needs infinite resources.

Increasing Empathy By Expanding Capacity

A missing ingredient in the discussion so far is the possibility for an agent to expand its capacity, so as to be able to empathize with more things (either becoming infinite, or becoming a bigger finite agent).  An infinite entity can, potentially, empathize with all other entities (whose sizes are finite, or are some sufficiently lower order of infinity than the entity's) completely, without compromise.   A finite entity that assimilated enough of the universe's mass-energy could potentially make itself powerful enough to empathize with every other entity in the universe.

An agent may then face a question of how much of its finite resources to devote to expanding its capacity, versus how much to achieving Pareto-optimal empathy given its current resources.   But we can incorporate this into the optimization framework by defining one of the multiple goals of the agent to be: Maximizing the total expected empathy felt toward agent X, over the entire future.   In this way, the possibility is embraced that the best way to maximize empathy over all time is to first focus on expanding empathic capacity and then on maximizing current empathy, rather than to immediately focus on maximizing current empathy…
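Here is a deliberately crude toy model of that trade-off (the horizon, growth rate and all numbers are invented): each time-step is spent either growing empathic capacity or empathizing at current capacity, and lifetime empathy is compared across strategies.

```python
# Toy trade-off: invest the first k steps in growing empathic capacity,
# then spend the remaining steps empathizing at the capacity reached.
def lifetime_empathy(k, horizon=20, capacity=1.0, growth=1.1):
    total = 0.0
    for t in range(horizon):
        if t < k:
            capacity *= growth   # this step spent expanding capacity
        else:
            total += capacity    # this step spent actually empathizing
    return total

for k in range(0, 15):
    print(k, round(lifetime_empathy(k), 2))
# With these numbers the optimum is around k = 9 or 10: deferring empathy
# in favor of capacity growth beats empathizing flat-out from step zero.
```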

The closest one can come to unconditional love as an individual agent, then, short of breaking out of the mode of being in which finite resources are a reality, is something like: Pareto-optimal empathy, plus the goal of increasing the world's well-being.   Those of us who aspire to some form of unconditional love as an abstract conceptual ideal, would do well to keep this more specific formulation in mind.   Though I have no doubt many of the specifics can be improved.

Unconditional Eurycosmic Love

From the underlying patternist view, "expanding capacity" is mostly about where the boundaries around a system are drawn.   Drawing them around an individual physical entity like a person, robot or software system ... or the Global Brain of the biological and electronic systems on the Earth ... one faces finite-resources issues.   Considering the pattern-system of the whole universe, one concludes that the universe as a whole recognizes all the patterns that exist in it and, to some extent, fosters their extension and expansion and synergy.   But still, one pattern's growth constrains that of another.  

To get to truly unconditional pattern-level love, one has to go to the level of the multi-multi-...-multi-verse, which I've called the Y-verse or the Eurycosm ... here all possibilities exist, along with all possible weightings of all possibilities.   Everything is open to grow and expand and synergize freely.   Individual universes are created within this broader space by delineating rules, structures and dynamics that create resource constraints, thus limiting the direct existence of unconditional love, but opening up possibilities for increase in the degree of approximation to unconditional love within the given constraints.

In Sum

“Unconditional empathy” and "unconditional love" are the province of beings much larger in capacity than the beings they are empathizing with …

... but Pareto-optimal empathy gives a way of thinking about empathy that is “as unconditional as possible given the empathizing mind’s constraints”

… and that incorporates the process and possibility of a mind overcoming its (perceived or actual, depending on one's perspective) constraints....

And so to approximate unconditional love in a situation of constrained resources: Aim to contribute to the world's well-being, and aim to position your balance of empathies (averaged appropriately over expected futures) somewhere on the Pareto front.

At the underlying, foundational level, love and empathy are about patterns recognizing other patterns and encouraging them to extend, expand and synergize.   Pattern growth can be considered to occur unfettered in a sufficiently broadly defined sort of multiverse, but in a universe like our physical or cultural worlds or our individual minds, there are resource constraints, so that unconditional love and empathy can be increasingly approximated but not fully achieved within these boundaries.