Tuesday, June 16, 2020

Simplificational Causal Decision Theory

In a few spare moments lately, I found myself revisiting issues regarding the foundations of decision theory, and came up with a few ideas that are somewhat new and maybe useful.

TL;DR version is:


  • I define "simplificational causality", as a slight generalization of the definition of causality using algorithmic Markov conditions
  • I give a (semi-)formalization that tells a system what properties it could potentially possess that would be (simplificationally) causal for the situation of being embedded in desirable possible universes.


I think this is as close as one can come to a meaningful, reasonably-general-purpose decision theory without getting bogged down in delusions of free will and such.

Boring Historical Prelude


The last time I plunged into decision theory issues was a decade ago, when I wrote a draft called "Counterfactual Reprogramming Decision Theory" (CRDT), which I was somewhat but not entirely happy with.  The abstract of that paper read:

"A novel variant of decision theory is presented. The basic idea is that one should ask, at each point in time: What would I do if the reprogrammable parts of my brain were reprogrammed by a superintelligent Master Programmer with the goal of supplying me with a program that would maximize my utility averaged over possible worlds? Problems such as the Prisoner’s Dilemma, the value of voting, Newcomb’s Problem and the Psychopath Button are reviewed from this perspective and shown to be addressed in a satisfactory way."

(That plunge into decision theory was largely triggered by the legendary transhumanist John Oh, who, after the first AGI conference in 2006, hounded me relentlessly for a solution to voting paradoxes and other such decision-theory conundrums!)

At some point in the subsequent years I decided that CRDT was basically the same as UDT2, an improved version of Wei Dai's Updateless Decision Theory.   For background see


UDT and UDT2 are described in the above links as follows (just to give you a flavor if you're too lazy/busy to click on the links):

"More formally, [in UDT] you have an initial distribution of "weights" on possible universes (in the currently most general case it's the Solomonoff prior) that you never update at all. In each individual universe you have a utility function over what happens. When you're faced with a decision, you find all copies of you in the entire "multiverse" that are faced with the same decision ("information set"), and choose the decision that logically implies the maximum sum of resulting utilities weighted by universe-weight. 

"UDT1 receives an observation X and then looks for provable facts of the form "if all my instances receiving observation X choose to take a certain action, I'll get a certain utility".

"UDT1.1 also receives an observation X, but handles it differently. It looks for provable facts of the form "if all my instances receiving various observations choose to use a certain mapping from observations to actions, I'll get a certain utility". Then it looks up the action corresponding to X in the mapping."

... "[W]hile UDT1 optimizes over possible outputs to its input and UDT1.1 optimizes over possible input/output mappings it could implement, UDT2 simultaneously optimizes over possible programs to self-modify into and the amount of time (in computation steps) to spend before self-modification."


This is weird-ass convoluted stuff but you actually do have to go to these sorts of lengths to avoid various pathologies and paradoxes that emerge from more straightforward "textbook" approaches like evidential and causal decision theory.

As neither CRDT nor UDT2 was ever fully formalized or fleshed out, it's hard to say (without doing a lot more work) exactly how close to equivalent they are or what the key differences are.

Anyway, now -- with the benefit of a decade of reflection on the fundamental nature of the universe and how to formalize it in simple ways, and also with the benefit of the concept of algorithmic Markov conditions, due to Janzing and Schölkopf -- I have been taking a different and (I currently think) more fundamental direction regarding these topics.

(Relational) Simplificational Causality

So, getting to the point --

-- or, firstly, the setup needed to properly articulate the point --

Consider an ensemble of possible universes, and a certain predicate F which applies to systems within universes (a “system” being, at first approximation, simply a subset of a universe — though there may emerge some reason to restrict this).

The predicate F defines an “individual”, e.g. Ben Goertzel or the USA, in a way that spans the instances of this individual across multiple universes.  If F(S) is true then S is an instance of the individual.

Assume a meta-observer, which is another predicate G that applies to pairs of the form (system, property), and makes estimates G(S,f) of the degree to which system S displays property f.

Assume a value function v(F,U), which rates the quality of a universe U from the perspective of an individual F, so that v(F,U) can be inferred by the meta-observer from looking at the state of some instance of F.

Then the desirability of a universe U according to F should be measured as the value v(F,U) assessed for the instances of F in U.

A property of an individual is characterized by a predicate that assesses the degree to which an instance possesses that property.

The desirability of a property of instances of F should be measured as the average over universes of:

(desirability of the universe) * (degree to which the instances of F in the universe have the property) 
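
To make the bookkeeping concrete, here's a minimal Python sketch of the definitions so far. Everything in it (the toy Universe container, the function names) is my own hypothetical scaffolding for illustration, not an existing API:

```python
from dataclasses import dataclass, field

@dataclass
class Universe:
    """Hypothetical toy container: a universe is just a bag of systems."""
    systems: list = field(default_factory=list)

def instances_of(F, universe):
    """All systems S in this universe for which F(S) is true."""
    return [S for S in universe.systems if F(S)]

def desirability_of_property(F, v, G, prop, universes):
    """Average over universes of (desirability of the universe) *
    (degree to which F's instances in that universe have the property).
    v(F, U) rates the universe from F's perspective; G(S, prop) is the
    meta-observer's estimate of the degree to which S displays prop."""
    total = 0.0
    for U in universes:
        inst = instances_of(F, U)
        if not inst:
            continue
        degree = sum(G(S, prop) for S in inst) / len(inst)
        total += v(F, U) * degree
    return total / len(universes)
```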

Now let's assume a simplicity measure s(F,A,B) -- written s(A|B) where F is clear from context -- which measures the conditional simplicity of predicate A relative to predicate B, from the perspective of F (meaning that the meta-observer can identify the assessment of s(A|B) relative to an instance of F via inspecting that instance).

(In "Grounding Occam's Razor in a Formal Theory of SimplicityI gave one approach to defining a set of properties that a function should have to be considered a useful simplicity measure.)

We can define the mutual simplification of x and y conditional on z as

I(x : y | z) = s(x|z) + s(y|z) - s(x,y|z)
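
As a toy illustration of this formula, one can stand in for s with compressed description length -- which is really a complexity rather than a simplicity, and a crude proxy for conditional algorithmic information at that, so take this as illustrative only. Under that reading, the same formula computes an approximation of algorithmic mutual information:

```python
import zlib

def desc_len(x: bytes, z: bytes = b"") -> int:
    """Crude proxy for conditional description length of x given z:
    how much longer the compressed form of z gets when x is appended."""
    return len(zlib.compress(z + x)) - len(zlib.compress(z))

def mutual_simplification(x: bytes, y: bytes, z: bytes = b"") -> int:
    """The formula above, with compressed length standing in for s:
    I(x:y|z) = s(x|z) + s(y|z) - s(x,y|z)."""
    joint = len(zlib.compress(z + x + y)) - len(zlib.compress(z))
    return desc_len(x, z) + desc_len(y, z) - joint
```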

And then... ba-da-bing! ...  we can construct causal networks, e.g. between properties of individuals, using the postulate that if x and y have nonzero mutual simplification, they must have some common cause.

This is what I'd call “simplificational causality”.
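
In code, the postulate reduces to something like the following sketch (the threshold, which absorbs estimation noise, is my own addition):

```python
import itertools

def simplification_links(items, mutual_simp, threshold=0.0):
    """Link every pair whose mutual simplification exceeds the
    threshold; by the postulate above, each linked pair must share
    some common cause (possibly one member of the pair itself)."""
    return [(a, b) for a, b in itertools.combinations(items, 2)
            if mutual_simp(a, b) > threshold]
```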

(Note that the above is a minor generalization/abstraction of the algorithmic Markov condition approach to causality outlined in Causal inference using the algorithmic Markov condition... an outstanding paper that I recommend strongly...)

For a given degree d of desirability, we can then ask: What is the simplest property that instances of F may have, that will be simplificationally causal for these instances to have desirability of degree at least d?
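
Schematically -- and waving aside how one would actually enumerate candidate properties or verify simplificational causality (the causal_for predicate below is a hypothetical placeholder for that check) -- the question amounts to a search like:

```python
def simplest_causal_property(candidates, simplicity, causal_for, d):
    """Among candidate properties that are (simplificationally) causal
    for instances of F having desirability at least d, return the one
    of greatest simplicity; None if no candidate qualifies."""
    qualifying = [p for p in candidates if causal_for(p, d)]
    return max(qualifying, key=simplicity) if qualifying else None
```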

So -- what does all this cockamamie abstraction tell us about decision theory?

OK: This is not exactly telling an instance of F what decision to make in a given circumstance. 

But it IS telling an instance of F what properties it could potentially possess that would be (simplificationally) causal for the situation of being embedded in desirable possible universes.

Now I suspect that if you measure simplification using algorithmic information, then when you work out the math of algorithmic Markov conditions, you'll wind up with something in the close vicinity of UDT2 and CRDT.  However, even if so (as usual I don't have free time to work out the details), I think the formulation I've given here is more conceptually elegant and transparent.

Practical Approximations / Applications?

It's not why I started musing in this direction, but I think there may actually be some use for these ideas in the SingularityNET / Rejuve team's current work using OpenCog for causal network inference in a biological context (soon perhaps to be extended to other contexts such as robotics).

We are estimating relative simplification in OpenCog now using fuzzy pattern-sets constructed from nodes and links in the OpenCog Atomspace (this is part of the PLN logic system's intensional inference).  So we have some practical ways to estimate simplification, and in this context we could estimate the simplificational causality between two biological actors, e.g. two proteins playing roles in protein interaction networks, which can be useful e.g. in automated discovery of new biological pathways.

(Of course this practical work involves simplicity measures that are crude compared to e.g. conditional algorithmic information, but they have the advantage of being feasible to estimate...)
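
For flavor, here is a toy version of such a crude estimate, treating each actor's pattern-set as a plain dict from patterns to fuzzy membership degrees. This is a Dice-style overlap I'm using purely for illustration, not the actual PLN intensional-inference formula:

```python
def fuzzy_overlap(p, q):
    """Fuzzy intersection size of two pattern-sets
    (dicts mapping pattern -> membership degree in [0, 1])."""
    return sum(min(p[k], q[k]) for k in p.keys() & q.keys())

def pattern_simplification(p, q):
    """Crude proxy for the mutual simplification of two entities
    (e.g. two proteins) from their fuzzy pattern-sets: normalized
    overlap of the pattern-sets, in [0, 1]."""
    denom = sum(p.values()) + sum(q.values())
    return 2.0 * fuzzy_overlap(p, q) / denom if denom else 0.0
```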
