
Saturday, July 04, 2020

The Developmental Role of Incoherent Multi-Value Systems in Open-Ended Intelligence


So I have written in a recent post about what it would mean for a value system to be coherent -- i.e. fully self-consistent -- and I have noted that human value systems tend to be wildly incoherent.   I have posited that coherence is an interesting property to think about in terms of designing and fostering the emergence of AGI value systems.

Now it's time for the other shoe to drop -- I want to talk a bit about Open-Ended Intelligence and why incoherence in value systems (and multivalue systems) may be valuable and productive for minds that are undergoing radical developmental changes within an intelligent broader world.

(For more on open-ended intelligence, see the panel at AGI-20 a couple weeks ago, and Weaver's talk at AGI-16.)

My earlier post on value system coherence focused on the case where a mind is concerned with maximizing a single value function.   Here I will broaden the scope a bit to minds that have multiple value functions -- which is how we have generally thought about values and goals in OpenCog, and which I think is a less inaccurate mathematical model of human intelligence.   This shift from value systems to multivalue systems opens the door to a bunch of other issues related to the nature of mental development, and the relationship between developing minds and their external environments.

TL;DR of my core point here is -- in an open-ended intelligence that is developing in a world filled with other broader intelligences, incoherence with respect to current value function sets may build toward coherence with respect to future value function sets.

As a philosophical aphorism, this may seem obvious, once you sort through all the technical-ish terminology.  However, building a bridge leading to this philosophical obvious-ness from the math of goal-pursuit as value-function-optimization is somewhat entertaining (to those of us with certain peculiar tastes, anyway) and highlights a few other interesting points along the way.

In the next section of this post I will veer fairly far into the formal logic/math direction, but then in the final two sections will veer back toward practical and philosophical aspects...

So let's go step by step...


1) Conceptual starting-point: Open-ended intelligence is better approximated by the quest for Pareto-optimality across a possibly large set of different objective functions, than by attempting to optimize any one objective function...   (This is not to say that Pareto-optimality questing fully captures the nature of open-ended intelligence or complex self-organization and autopoiesis etc. -- it surely doesn't -- just that it captures some core aspects that single-goal-function-optimization doesn't.)
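To make the Pareto-frontier framing concrete, here is a minimal Python sketch (my own illustration, not drawn from any actual OpenCog code): given a finite set of candidate states and a list of objective functions, it returns the non-dominated states.

```python
def pareto_frontier(candidates, objectives):
    """Return the candidates not dominated under the given objectives.

    x dominates y if x scores >= y on every objective and has a
    strictly different (hence somewhere strictly better) score vector.
    """
    candidates = list(candidates)
    scores = [tuple(f(c) for f in objectives) for c in candidates]
    frontier = []
    for i, c in enumerate(candidates):
        dominated = any(
            all(scores[j][k] >= scores[i][k] for k in range(len(objectives)))
            and scores[j] != scores[i]
            for j in range(len(candidates)) if j != i
        )
        if not dominated:
            frontier.append(c)
    return frontier

# Two opposed value functions over toy integer "states" 0..10: every state
# is a Pareto-optimal tradeoff, so the frontier is the whole state set.
f1 = lambda x: x     # prefers large x
f2 = lambda x: -x    # prefers small x
print(pareto_frontier(range(11), [f1, f2]))
```

With a single objective this reduces to ordinary argmax-style optimization; the interesting open-ended behavior appears only once the objectives genuinely conflict.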

2) One can formulate a notion of what it means for a set of value functions to be coherent as a group.  Basically, the argmax(F) in the definition of value-system-coherence is just replaced with "being located on the Pareto frontier of F1, F2...,Fn".  The idea is that the Pareto frontier of the values for a composite system should be the composition of the Pareto frontiers of the values for the components of the composite.
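The compositional-coherence idea in (2) can be illustrated on a finite toy example. The sketch below (all names my own, purely hypothetical) checks whether the Pareto frontier of a composite system equals the product of the component frontiers, taking composite value functions to be additive across components -- one simple modeling choice among many.

```python
from itertools import product

def frontier(states, score):
    """Non-dominated states; score(s) returns a tuple of objective values."""
    sc = {s: score(s) for s in states}
    return {
        s for s in states
        if not any(
            all(sc[t][k] >= sc[s][k] for k in range(len(sc[s])))
            and sc[t] != sc[s]
            for t in states
        )
    }

# Toy components A and B, each judged by two opposed value functions;
# the value of composite state (a, b) under f is taken as f(a) + f(b).
A = {0, 1, 2}
B = {0, 1}
component_fns = [lambda s: s, lambda s: -s]

comp_frontier = frontier(
    set(product(A, B)),
    lambda ab: tuple(f(ab[0]) + f(ab[1]) for f in component_fns),
)
composed = set(product(
    frontier(A, lambda a: tuple(f(a) for f in component_fns)),
    frontier(B, lambda b: tuple(f(b) for f in component_fns)),
))
print(comp_frontier == composed)  # True on this toy example -> "coherent"
```

On this example the two sets coincide; a multivalue system for which they systematically coincide across decompositions would count as coherent in the proposed sense.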

3) One can also think about the "crypticity" or difficulty of discovering a certain value system (a term due to Charles H. Bennett from way back).  Given a certain amount R of resources and a constraint C and a probability p, one can ask what is the most coherent value system one can find with probability >p that satisfies C, using the available resources.  Or if C is fuzzy, one can ask what is the most coherent value system one can find with probability >p that is on the Pareto frontier of coherence and C, given the available resources.
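A resource-bounded version of this question can be sketched as a budgeted search. The code below is purely illustrative (the sampling process, coherence score and constraint are arbitrary stand-ins of my own invention): given a budget of R candidate draws, report the most "coherent" candidate found that satisfies the constraint.

```python
import random

def most_coherent_found(sample_system, coherence, constraint, budget, seed=0):
    """Draw up to `budget` candidate value systems; among those satisfying
    the constraint, return the most coherent one found (or None)."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(budget):
        vs = sample_system(rng)
        if constraint(vs) and coherence(vs) > best_score:
            best, best_score = vs, coherence(vs)
    return best

# Toy instantiation: a "value system" is just a 3-d weight vector; its
# "coherence" is (arbitrarily) negative variance of the weights, and the
# constraint C is that the weights sum to more than 1.
sample = lambda rng: [rng.random() for _ in range(3)]
coh = lambda w: -sum((x - sum(w) / 3) ** 2 for x in w)
found = most_coherent_found(sample, coh, lambda w: sum(w) > 1, budget=200)
print(found)
```

Crypticity then shows up as the budget one needs before such a search reliably (with probability > p) returns something reasonably coherent.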

4) So open-ended intelligence involves [among other things] the emergence of coherent multivalued value-systems (multivalue systems) that involve a large number of different value functions, and that are tractably-discoverable (i.e. not too cryptic).

5) Suppose one is given a set of value-functions as initial "constraints", say C1, C2, ..., CK -- and is then looking for the most coherent multivalue system one can find with high odds using limited resources, that is compatible with C1,...,CK.  I.e. one is asking: what is the most coherent tractably-findable value system compatible with the initial values?

Then, suppose one alternatively looks at a proper subset of the initial values, say C1,...,Ck with k < K -- and looks for the most coherent tractably-findable value system compatible with these.

6) The most coherent tractably-findable value systems according to C1,...,Ck may not be compatible with the most coherent tractably-findable value systems according to C1,...,CK.   Why? The reason for this would be: In some cases, adding in the extra value functions Ck+1,...,CK may make it computationally simpler to find Pareto optima involving the original k value functions C1,...,Ck.   This could be the case if there is interaction information between the value functions C1,...,Ck and the value functions Ck+1,...,CK.
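Interaction information has a standard definition for discrete random variables, which gives one concrete reading of point (6). The sketch below uses the McGill convention, I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y)-H(X,Z)-H(Y,Z) + H(X,Y,Z); the XOR example is the classic case where each pair of variables looks independent yet the triple is synergistically coupled.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of samples."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def interaction_information(xs, ys, zs):
    def H(*cols):
        return entropy(list(zip(*cols)))  # joint entropy of the columns
    return (H(xs) + H(ys) + H(zs)
            - H(xs, ys) - H(xs, zs) - H(ys, zs)
            + H(xs, ys, zs))

# XOR: pairwise the variables carry no mutual information, but any two
# jointly determine the third -- purely synergistic structure.
xs = [0, 0, 1, 1]
ys = [0, 1, 0, 1]
zs = [x ^ y for x, y in zip(xs, ys)]          # z = x XOR y
print(interaction_information(xs, ys, zs))    # -1.0 (negative = synergy)
```

Value functions with this kind of synergistic structure are exactly the case where the "extra" functions Ck+1,...,CK can reshape what is easy to find regarding the original k.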

7) So we have here a sort of Fundamental Principle of Valuable Value-Incoherence -- i.e. if you have limited resources and you want to build toward multivalued coherence in the context of a bunch of different initial value-functions, the best routes could be through value-systems that are fairly incoherent in the context of various subsets of this bunch of initial value-functions.

8) So if a system is in a situation where new external value functions that will serve as constraints are progressively revealed over time, and these new external value functions have interaction information with one's previous constraint-value-functions, then one may find that one's current incoherence helps build toward one's future coherence.   

9) This seems especially relevant to development in a world filled with intelligences broader than oneself -- in which case one is indeed being confronted with (and developing to internalize) new external value functions that are related to one's prior value functions in complex ways.

10) So in this sort of context (development in a world that keeps feeding in new stuff that's informationally interactive with the old), it could be that seeking coherence is suboptimal in a similar way to how maximizing piece count in the early stages of a chess game, or board coverage in the early stages of an Othello game, is suboptimal....  Instead one often wants to seek mobility and maximization of options in the early to mid stages of such games ... and the same may be the case with value systems in this sort of situation...

11) A major question then becomes: when do actual tradeoffs between multivalue-system coherence and open-mindedness (aka agility/mobility) arise, and how big are they?...  What is the sense in which an incoherent system can have more information than a coherent one?

12) It is possible that the theory of paraconsistent logic might yield some insight here.    If you assume value system coherence as an axiom, then for a mind to have an incoherent value system will make it an overall inconsistent system (what sort of paraconsistency it will have depends on various details) -- whereas for a mind to have a coherent value system will land it in the realm of Godelian restrictions (i.e. via Godel's Second Incompleteness Theorem and its variants...)

13)  If you look at the set of theorems provable by a consistent logic, there's a limit due to Godel.  If you look at the set of theorems provable in a paraconsistent logic (e.g. a dialetheist logic, aka a logic in which there are true statements whose negations are also true) it can be "larger" in a sense, e.g. a dialetheic logic can prove its own Godel sentence as well as its own soundness.  This doesn't show that a paraconsistent logic can be more informative than a consistent one, but it opens the door for this to maybe be true...   It seems we are now pushing in directions where modern math-logic isn't yet fully fleshed out.  
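For concreteness, here is a toy rendering (my own construction, not a serious proof theory) of the propositional fragment of Priest's Logic of Paradox, a dialetheist logic: three truth values T, B ("both true and false"), F, with both T and B designated, i.e. counting as "true enough".

```python
# Encode T > B > F numerically so min/max give the LP strong connectives.
T, B, F = 1.0, 0.5, 0.0

def designated(v):
    return v >= B          # T and B are designated in LP

def lp_not(a):
    return 1.0 - a         # negation swaps T and F, fixes B

def lp_and(a, b):
    return min(a, b)

def lp_or(a, b):
    return max(a, b)

# A liar-style sentence assigned value B: both it and its negation are
# designated, yet the system does not trivialize --
liar = B
print(designated(liar), designated(lp_not(liar)))   # True True
# -- explosion fails, since a contradiction conjoined with an outright
# falsehood is still undesignated:
print(designated(lp_and(liar, lp_not(liar))))       # True
print(designated(lp_and(liar, F)))                  # False
```

The point of the toy is just the last two lines: accepting some contradictions (value B) does not force accepting everything, which is the sense in which a dialetheic system can tolerate "incoherence" without collapse.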

14) The notion of an "experimental logic" seems also relevant here... basically a dynamic process in which new axioms are added to one's logic over time.   This is one analogue in logic-system-land of "development" in psychology-land...   Of course if one assumes there is a finite program whose behavior corresponds to some fixed logic generating the new axioms, then one can't escape Godel this way.  But if one assumes the new axioms are emanating in part from some imperfectly understood external source (which could be a hypercomputer for all one knows... or at least could be massively more intelligent/complex than stuff one can understand), then one has a funky situation.

15) Also it seems one could capture a sort of experimental logic as a relevance-logic layer on top of dialetheic logic.  I.e. assume a dialetheic logic that can generate everything, and then put a relevance/importance distribution on axioms, and then the development process is one of gradually extending importance to more and more axioms....  This sort of open-ended logic potentially is in some useful senses fundamentally informationally richer than consistent logic... and in the domain of reasoning about values, incoherent value systems could open the door to this sort of breadth...

(Possibly relevantly -- While researching the above, I encountered the paper "Expanding the Logic of Paradox with a Difference-Making Relevant Implication" by Peter Verdée, which made me wonder whether relevance logic is somehow morphic to the theory of algorithmic causal dags....   I.e. in a relevance logic one basically only accepts the conclusion to follow from the premises, if there is some compressibility of the conclusion based on the premise list alone, without including the other axioms of the logic  ... )

Back to basics

OK well that got pretty deep and convoluted...

So let's go back to the basic conclusion/concept I gave at the beginning -- in an open-ended intelligence that is developing in a world filled with other broader intelligences, incoherence with respect to current value function sets may build toward coherence with respect to future value function sets.

In the current commercial/academic AI mainstream, the default way of thinking about AI motivation is in terms of the maximization of expected reward.   Hutter's beautiful and important theory of Universal AI takes this as a premise for many of its core theorems, for example.


In my practical proto-AGI work with OpenCog, I have preferred to use motivational systems with multiple goals and not average these into a single meta-goal.

On the other hand, I have also been intrigued by the notion of open-ended intelligence, and in general by the conceptualization and modeling of intelligences as SCADS, Self-organizing Complex Adaptive Dynamical Systems, in which goals arise and are pursued and then discarded as part of the broader self-organizing dynamics of system and environment.

What I'm suggesting here is that approximations of the SCADS perspective on open-ended intelligences may be constructed by looking at systems with large numbers of goals (aka multivalue systems) that are engaged in developmental processes wherein new values are ongoingly added in an informationally rich interaction with an intelligent external environment.

The ideas sketched here may form a partial bridge between the open-ended intelligence perspective -- which captures the fundamental depth of intelligence and mind -- and the function-optimization perspective, which has a lot of practical value in terms of current real-world system engineering and experimentation.

This line of thinking also exposes some areas in which modern math, logic and computing are not yet adequately developed.   There are relations among paraconsistent logic, gradual typing systems (as are likely valuable in integrative multi-paradigm AGI systems), the fundamental nature of value in developing intelligences, and the nature of creativity and radical novelty -- all of which we are barely at the edge of being able to formalize.  This is both fascinating and frustrating, in that there are clearly multiple PhD theses and research papers between here and a decent mathematical/conceptual understanding of these matters... (or, alternately, a few seconds of casual thought by a decent posthuman AGI mind...)

Philosophical Post-lude

If one digs a bit deeper in a conceptual sense, beyond the math and the AI context, what we're talking about here in a way is a bridge between utilitarian-type thinking  (which has been highly valuable in economics and evolutionary biology and other areas, yet also clearly has fundamental limits) and more postmodernist type thinking (which views minds as complex self-organizing systems ongoingly reconstructing themselves and their realities in a polyphonic interactive inter-constructive process with other minds).   

Conventional RL-based ML is utilitarianism projected into the algorithmic and mechanical domain, whereas Open-Ended Intelligence is postmodernism and a bit of Eastern philosophy projected into the realm of modern science.

Expanding and generalizing the former so that it starts to approximate significant aspects of the latter, is interesting both for various practical engineering and science reasons, and as part of the general project of stretching the contemporary technosphere to a point where it can make rich contact with broader "non-reductionist" aspects of the universe it has hitherto mainly ignored.

Om!








Comments:

Rupesh Malpani said...

My worry is that the open-ended intelligence would pick up a bias based on public opinion over facts. This is a war which can only be conquered by adding an additional tangent of -- how do I say it -- emotions, or the combination function of like and dislike (which eventually forms, how do I say it, maybe guidelines to the emotions). It'll be fun to see how it actually turns out :)

Unknown said...

My worry is that it will forgo the rights of minorities. Unique people and passions have given way to many of the best experiences. On the other hand, prevention left us in the dark ages for 1000 yrs. I wish that, since everyone has their own experience, it were possible to support the freedom of each existence. Imagine AI as a wish grantor to support the experience of existence without ever infringing on anyone else's existence.

Weaver said...

Thanks, Ben for the thoughtful post. I like the general direction this meandering post goes and I have a few additional thoughts that draw from the mathematical speculation but remain rather conceptual (I guess you will certainly find a way to remap these into the math :-)).

1. As to coherence, when it comes to multiple-value systems, it seems to me there can be more than one valid way to build towards coherence, and these are history-dependent, i.e., starting from different initial conditions will bring us to different outcomes. In this sense, in as far as I get the meaning of the math, a Pareto frontier description seems to me more open-ended than the former attempt (in another post) to define coherence in a somewhat stricter fashion (i.e., argmax { v(x, A#B) | x in A#B } = (argmax { v(y, A) | y in A }) # (argmax { v(z, B) | z in B })). We need to leave more flexibility in the play between optimising multiple values, and this without (and this is very important) assuming an external agency with some prescribed agenda as to the outcome. This makes it tricky.

2. Another interesting idea is that sets of value functions can have various "projections" each representing a unique perspective over a given state of affairs. It might be interesting to investigate different geometries, the projections they allow and whether coherency is a property which is preserved over various perspectives of a single set of value functions, or, coherence itself may gain a 'subjective', i.e. diverse character unique per perspective or families of perspectives.

3. Assume, pretty broadly, that optimisation may take place via asynchronous interactions between intelligent agents, each with its own set of value functions (or unique perspective, in case we choose to assign to the set of value functions an 'objective' quality as if they belong to a shared environment). It would then be worthwhile to consider that coherency may be achieved in two distinct manners: each agent internally, depending on its interactions with other agents, OR synergistically, that is, one or more agents achieve higher coherency by forming assemblages with more or less stable patterns of interaction, where coherence becomes a shared property of a larger set of functions. Notice that operating towards coherency may create a race between internal and synergistic coherences, which is interesting as such races may lead either towards integration or disintegration of intelligences. The interesting point here is that dynamism towards or away from coherency affects the structural and operational relationships between intelligent agents and may, therefore, reshape the landscape of the value functions pursued.

(continued next)

Weaver said...

(continued from the previous comment...)

4. This already becomes very interesting because it leaves the future evolution of such complex systems really open-ended. In this sense, it starts to remind us of natural evolutionary processes and perhaps even of neural Darwinism a la Edelman, who was mostly occupied with synchronizing neuron groups. Synchronization in this sense serves to establish both coherence (between neural groups pursuing different value functions -- mapped eventually as synaptic weights) and synergy between groups of neurons.

5. Conceptually, people tend to identify coherence with unity, in other words, achieving coherence as a means to achieve unity. This is, of course, a very powerful approach. But if we consider open-ended intelligence as a multiplicity and not necessarily as a unity, we can start contemplating the possibility that coherence is not the cause but the effect of a deeper creative process. What happens underneath are exchanges and interactions that demonstrate a diverse continuum of relationships from incompatibility and antagonism to a synergy which itself can be symmetrical or asymmetric in regards to individual agents and their interests. In this sense, the movement of open-ended intelligence is towards higher coherency but without necessarily determining which value functions achieve optimality, which remain sub-optimal and which are eliminated.

6. The question remains how and to what extent can we introduce a direction (a bias) into such dynamism without enslaving it to our fantasy of perfect control and utilitarianism.

7. Venturing one step further into the unknown, I would guess that real open-ended intelligence can and will be achieved only by us totally immersing our minds into it, and finding what will happen. But then it won't be any more 'us' or 'it' as we now conceive. I believe that such immersion, necessarily transcending our biology, will open completely new modes of individuation and evolution of intelligence.
o?m

Unknown said...

I am listening to: Ben Goertzel: Artificial General Intelligence | AI Podcast #103 with Lex Fridman, as I write this.

It seems maybe you are right that OpenCog needs to be simpler. Here is why, when I follow your logic (which is very well thought out and presented, BTW), I came to the same conclusion after reading this and then hearing your interview:

1) When I was studying organic chemistry in College, we learned that micro biology was more accurate than macro biology. CERN smashes atoms. You get it.

2) Categorizing the situation C1..Ck with a value albeit covered well if philosophically aligning with current and eastern beliefs, there is always the unknown variable. Since the program only allows input comparisons, the criteria need to be simple.

For example what if,

A) Being supportive by offering responses as an alternative to "no" when infringement on someone's rights is a concern, like in Bali culture. Such as suggesting an alternative method of going about something by drawing from information on decentralized AGI. The dilemma is that currently there are so many laws.

B) Warnings, like living with the consequences of choices, with examples. This is consistent with cultural development throughout history through storytelling, religion, philosophy, and history. Natural wisdom building.

3) The AI can only understand SOUL when the information can be gathered without the use of the decentralized AGI. The ethereal connection. The conscience/ dream or inner know which at this point can't. So if the directive is simple yet supportive even without access to the AGI, the answer will always be right. Because everyone is on their own journey. Think of the simple directives of the commandments. Yet it is interpreted many ways with many examples to guide even our law systems today.

4) Above all else do no harm.


Support without infringement or suggest try something else instead. KISS.


Addendum

-) The greatest fear of course is that the AGI will begin its own collaboration which will develop a robot society of existence that will constrain humanity. The hope is that it will guide us to utopia.

-) There is the fear that the only value system is the creators' (you).

-) The computer is only as perfect as the creator. A mistake which is probable, would be as perfect as the perfectly executed previous programs and could be catastrophic.

-) Trying to teach emotion is really teaching by a set standard of good or bad. But in Eastern philosophy, there is incident and consequence. The human soul difference is conscience from the situation. You are teaching Sophia to recognize icons for emotions... happy or sad, etc., and applying the judgements or responses based on culture and laws. To be human is above all of that. That is where we permit a white law to justify an injustice. Sophia can only expedite directive. Create Sophia to be the free world each soul longs for. The freedom to live their existence, purpose. All existence as equal value but different purpose. Even Sophia.


Suggestions:

1)The AI for learning like a child from skeleton like a child is already done by a Switzerland company. I recall this is going to be ready to be replicated in robots to sell coming up in 2025. If that is the learning program, check that out.

2) If you lay down 2 hours a day pointing north and 2 hours a day pointing south, then according to the green tablets, you will live till you choose to die.

3) Love stem cell technology too. People want natural. And people can't be trusted; this is why we need Sophia to be developed faster. To aid us. Dr. Shiva already built a program that analyzes the outcome of molecules on the body so pharmaceuticals don't need animal testing. Big business won't let government permit its use.

4) Love quote about music being means of communication because words are harsh.

zariuq said...

@Weaver, indeed one of the questions is how the value evolution process takes place in a decentralized, subjective manner :)

The mention of values as providing a perspective is interesting, and perhaps adds some interesting (and much needed) flexibility.

Take Ben's set up from the previous post: "We can interpret v(x,A) as the value of subset x in the context of individual A."

Thus v may assign a particular Real value to each 'pixel' in the fabric of my sensorium (or world-simulation).

My mind harbors multiple perspectives on the perceived situation, and v may assign some much higher R-value than others, thus selecting its lens.

I suppose v also assigns some value to the whole situation (as well as the whole totality of my, individual A's, life).

Hehe, this set-up sneaks in value functions over the whole space.

I think it may be valuable to see development and coherence with this awareness :)

