Friday, March 12, 2010

Coherent Aggregated Volition: A Method for Deriving Goal System Content for Advanced, Beneficial AGIs

One approach to creating a superhuman AGI with a reasonably high likelihood of being beneficial to humans is to separate "goal system structure and dynamics" from "goal system content." One then has three problems:

  1. Create an AGI architecture that makes it very likely the AGI will pursue its goal-system content in a rational way based on the information available to it
  2. Create a goal system whose structure and dynamics render it likely for the AGI to maintain the spirit of its initial goal system content, even as it encounters radically different environmental phenomena or as it revises its own ideas or sourcecode
  3. Create goal system content that, if maintained as goal system content and pursued rationally, will lead the AGI system to be beneficial to humans

One potential solution proposed for the third problem, the goal system content problem, is Eliezer Yudkowsky's "Coherent Extrapolated Volition" (CEV) proposal. Roko Mijic has recently proposed some new ideas related to CEV, which place the CEV idea within a broader and (IMO) clearer framework. This blog post presents some ideas in the same direction, describing a variant of CEV called Coherent Aggregated Volition (CAV), which is intended to capture much of the same spirit as CEV, but with the advantage of being more clearly sensible and more feasibly implementable (though still very difficult to implement in full). In fact CAV is simple enough that it could be prototyped now, using existing AI tools.

(One side note before getting started: Some readers may be aware that Yudkowsky has often expressed the desire to create provably beneficial ("Friendly" in his terminology) AGI systems, and CAV does not accomplish this. It also is not clear that CEV, even if it were fully formalizable and implementable, would accomplish this. Also, it may be possible to prove interesting theorems about the benefits and limitations of CAV, even if not to prove some kind of absolute guarantee of CAV beneficialness; but the exploration of such theorems is beyond the scope of this blog post.)

Coherent Extrapolated Volition

In brief, Yudkowsky's CEV idea is described as follows:

In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.


This is a rather tricky notion, as exemplified by the following example, drawn from the CEV paper:


Suppose Fred decides to murder Steve, but when questioned, Fred says this is because Steve hurts other people, and needs to be stopped. Let's do something humans can't do, and peek inside Fred's mind-state. We find that Fred holds the verbal moral belief that hatred is never an appropriate reason to kill, and Fred hopes to someday grow into a celestial being of pure energy who won't hate anyone. We extrapolate other aspects of Fred's psychological growth, and find that this desire is expected to deepen and grow stronger over years, even after Fred realizes that the Islets worldview of "celestial beings of pure energy" is a myth. We also look at the history of Fred's mind-state and discover that Fred wants to kill Steve because Fred hates Steve's guts, and the rest is rationalization; extrapolating the result of diminishing Fred's hatred, we find that Fred would repudiate his desire to kill Steve, and be horrified at his earlier self.



I would construe Fred's volition not to include Fred's decision to kill Steve...


Personally, I would be extremely wary of any being that extrapolated my volition in this sort of manner, and then tried to impose my supposed "extrapolated volition" on me, telling me "But it's what you really want, you just don't know it." I suppose the majority of humans would feel the same way. This point becomes clearer if one replaces the above example with one involving marriage rather than murder:

Suppose Fred decides to marry Susie, but when questioned, Fred says this is because Susie is so smart and sexy. Let's do something humans can't do, and peek inside Fred's mind-state. We find that Fred holds the verbal moral belief that sex appeal is never an appropriate reason to marry, and Fred hopes to someday grow into a celestial being of pure energy who won't lust at all. We extrapolate other aspects of Fred's psychological growth, and find that this desire is expected to deepen and grow stronger over years, even after Fred realizes that the Islets worldview of "celestial beings of pure energy" is a myth. We also look at the history of Fred's mind-state and discover that Fred wants to marry Susie because Susie reminds him of his mother, and the rest is rationalization; extrapolating the result of diminishing Fred's unconscious sexual attraction to his mother, we find that Fred would repudiate his desire to marry Susie, and be disgusted with his earlier self.



I would construe Fred's volition not to include Fred's decision to marry Susie...



Clearly, the Yudkowskian notion of "volition" really has little to do with "volition" as commonly construed!!

While I can see the appeal of extrapolating Fred into "the Fred that Fred would like to be," I also think there is a lot of uncertainty in this process. If Fred has inconsistent aspects, there may be many possible future-Freds that Fred could evolve into, depending on both environmental feedback and internal (sometimes chaotic) dynamics. If one wishes to define the coherent extrapolated Future-Fred as the average of all these, then one must choose what kind of average to use, and one may get different answers depending on the choice. This kind of extrapolation is far from a simple matter -- and since "self" is not a simple matter either, it's not clear that current-Fred would consider all or any of these Future-Freds as being the same person as him.

In CAV as described here, I consider "volition" in the more typical sense -- rather than in the sense of Yudkowskian "extrapolated volition" -- as (roughly) "what a person or other intelligent agent chooses." So according to my conventional definition of volition, Fred's volition is to kill Steve and marry Susie.

Mijic's List of Desirable Properties

Roko Mijic has posited a number of general "desirable properties" for a superintelligence, and presented CEV as one among many possible concrete instantiations of these principles:

  • Meta-algorithm: Most goals the AI has will be harvested at run-time from human minds, rather than explicitly programmed in before run-time.
  • Factually correct beliefs: Using the AI's superhuman ability to ascertain the correct answer to any factual question in order to modify preferences or desires that are based upon false factual beliefs.
  • Singleton: Only one superintelligence is to be constructed, and it is to take control of the entire future light cone with whatever goal function is decided upon.
  • Reflection: Individual or group preferences are reflected upon and revised, in the style of Rawls' reflective equilibrium.
  • Preference aggregation: The set of preferences of a whole group are to be combined somehow.

My own taste is that reflection, preference aggregation and meta-algorithm-ness are good requirements. The "singleton" requirement seems to me something that we can't know yet to be optimal, and don't need to decide at this point.

The "factually correct beliefs" requirement also seems problematic, if enforced too harshly, in the sense that it's hard to tell how a person, who has adapted their beliefs and goals to certain factually incorrect beliefs, would react if presented with corresponding correct beliefs. Hypothesizing that a future AI will be able to correctly make this kind of extrapolation is not entirely implausible, but certainly seems speculative. After all, each individual's reaction to new beliefs is bound to depend on the reactions of others around them, and human minds and societies are complex systems, whose evolution may prove difficult for even a superintelligence to predict, given chaotic dynamics and related phenomena. My conclusion is that there should be a bias toward factual correctness, but that it shouldn't be taken to override individual preferences and attitudes in all cases. (It's not clear to me whether this contradicts Mijic's perspective or not.)

Coherent Aggregated Volition

What I call CAV is an attempt to capture much of the essential spirit of CEV (according to my own perspective on CEV), in a way that is more feasible to implement than the original CEV, and that is prototype-able now in simplified form.

Use the term "gobs" to denote "goal and belief set" (and use "gobses" to denote the plural of "gobs"). It is necessary to consider goals and beliefs together, rather than just looking at goals, because real-world goals are typically defined in terms whose interpretation depends on certain beliefs. Each human being or AGI may be interpreted to hold various gobses to various fuzzy degrees. There is no requirement that a gobs be internally logically consistent.

A "gobs metric" is then a distance on the space of gobses. Each person or AI may also agree with various gobs metrics to various degrees, but it seems likely that individuals' gobs metrics will differ less than their gobses.

Suppose one is given a population of intelligent agents -- like the human population -- with different gobses. Then one can try to find a gobs that maximizes the four criteria of

  • logical consistency
  • compactness
  • average similarity to the various gobses in the population
  • amount of evidence in support of the various beliefs in the gobs

The use of a multi-extremal optimization algorithm to seek a gobs defined as above is what I call CAV. The "CAV" label seems appropriate since this is indeed a system attempting to achieve both coherence (measured via compactness + consistency) and an approximation to the "aggregate volition" of all the agents in the population.
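
As a minimal sketch of how such an optimization target might look (reusing the toy encoding above; a plain weighted sum stands in for whatever multi-extremal scheme one actually adopts, and the consistency and evidence functions are assumed to be supplied by some external logic engine and theory of evidence):

```python
import json
import zlib
from typing import Callable, Dict, List

Gobs = Dict[str, float]

def compactness(g: Gobs) -> float:
    """Crude stand-in for compactness: inverse of the compressed length of the gobs."""
    blob = json.dumps(sorted(g.items())).encode()
    return 1.0 / (1.0 + len(zlib.compress(blob)))

def average_similarity(g: Gobs, population: List[Gobs],
                       dist: Callable[[Gobs, Gobs], float]) -> float:
    """Mean similarity (1 - distance) between g and each gobs in the population."""
    return 1.0 - sum(dist(g, h) for h in population) / len(population)

def cav_score(g: Gobs, population: List[Gobs],
              consistency: Callable[[Gobs], float],  # assumed: supplied by some logic engine
              evidence: Callable[[Gobs], float],     # assumed: supplied by some theory of evidence
              dist: Callable[[Gobs, Gobs], float],
              weights=(1.0, 1.0, 1.0, 1.0)) -> float:
    """Weighted sum of the four CAV criteria; the weights are themselves free parameters."""
    w1, w2, w3, w4 = weights
    return (w1 * consistency(g)
            + w2 * compactness(g)
            + w3 * average_similarity(g, population, dist)
            + w4 * evidence(g))
```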

Of course there are many "free parameters" here, such as

  • how to carry out the averaging (for instance one could use a p'th-power average with various p values)
  • what underlying computational model to use to measure compactness (different gobses may come along with different metrics of simplicity on the space of computational models)
  • what logical formalism to use to gauge consistency
  • how to define the multi-extremal optimization: does one seek a Pareto optimum, or does one weight the different criteria, and if so according to what weighting function?
  • how to measure evidence
  • what optimization algorithm to use

However, the basic notion should be clear, even so.
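
To make the first of those free parameters concrete: by a p'th-power average of the per-agent similarity scores s_1, ..., s_n I mean the generalized mean, which sweeps from the minimum (as p goes to negative infinity, so the least-similar agent dominates) through the ordinary arithmetic mean (p = 1) to the maximum (as p goes to positive infinity):

\[
M_p(s_1,\dots,s_n) \;=\; \left( \frac{1}{n} \sum_{i=1}^{n} s_i^{\,p} \right)^{1/p}
\]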

If one wants to take the idea a step further, one can seek to use a gobs metric that maximizes the criteria of

  • compactness of computational representation
  • average similarity to the gobs metrics of the minds in the population

where one must then assume some default similarity measure (i.e. metric) among gobs metrics. (Carrying it further than this certainly seems to be overkill.)

One can also use a measure of evidence defined in a similar manner, via combination of a compactness criterion and an average similarity criterion. These refinements don't fundamentally change the nature of CAV.

Relation between CEV and CAV

It is possible that CEV, as roughly described by Yudkowsky, could lead to a gobs that would serve as a solution to the CAV maximization problem. However, there seems no guarantee of this. It is possible that the above maximization problem may have a reasonably good solution, and yet Yudkowskian CEV may still diverge or lead to a solution very far from any of the gobses in the population.

As a related data point, I have found in some experiments with the PLN probabilistic reasoning system that if one begins with a set of inconsistent beliefs, and attempts to repair it iteratively (by replacing one belief with a different one that is more consistent with the others, and then repeating this process for multiple beliefs), one sometimes arrives at something VERY different from the initial belief-set. And this can occur even if there is a consistent belief set that is fairly close to the original belief-set by commonsensical similarity measures. While this is not exactly the same thing as CEV, the moral is clear: iterative refinement is not always a good optimization method for turning inconsistent belief-sets into nearby consistent ones.
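
The phenomenon is easy to reproduce in a toy setting. The following sketch (my own self-contained propositional toy, not the PLN experiments themselves) contrasts greedy one-belief-at-a-time repair, which pays no attention to how far it drifts from the starting belief set, with a brute-force search for the nearest consistent belief set; on many constraint sets the two return quite different answers.

```python
from itertools import product
from typing import Dict, List, Tuple

# Toy setting: a belief set is a truth assignment; "consistency" means satisfying
# a set of clauses, each clause a list of (variable, wanted_value) literals,
# satisfied when at least one literal holds.
Assignment = Dict[str, bool]
Clause = List[Tuple[str, bool]]

def violated(clauses: List[Clause], a: Assignment) -> int:
    """Number of clauses in which every literal is falsified."""
    return sum(all(a[v] != want for v, want in c) for c in clauses)

def greedy_repair(clauses: List[Clause], a: Assignment, max_steps: int = 100) -> Assignment:
    """Iterative repair: flip whichever single belief most reduces inconsistency; stop when stuck."""
    a = dict(a)
    for _ in range(max_steps):
        current = violated(clauses, a)
        if current == 0:
            break
        best_var = min(a, key=lambda v: violated(clauses, {**a, v: not a[v]}))
        if violated(clauses, {**a, best_var: not a[best_var]}) >= current:
            break
        a[best_var] = not a[best_var]
    return a

def nearest_consistent(clauses: List[Clause], a: Assignment) -> Assignment:
    """Brute force: the consistent assignment closest (in Hamming distance) to the original."""
    names = list(a)
    best, best_d = None, None
    for values in product([False, True], repeat=len(names)):
        cand = dict(zip(names, values))
        if violated(clauses, cand) == 0:
            d = sum(cand[v] != a[v] for v in names)
            if best is None or d < best_d:
                best, best_d = cand, d
    return best
```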

Another, more qualitative observation is that I have the uneasy feeling that CEV seeks to encapsulate the essence of humanity in a way that bypasses the essential nature of being human...

CEV wants to bypass the process of individual and collective human mental growth, and provide a world that is based on the projected future of this growth. But, part of the essence of humanity is the process of growing past one's illusions and shortcomings and inconsistencies.... Part of Fred's process-of-being-Fred is his realizing on his own that he doesn't really love Susie in the right way ... and, having the super-AI decide this for him and then sculpt his world accordingly, subtracts a lot of Fred's essential humanity.

Maybe the end-state of resolving all the irrationalities and inconsistencies in a human mind (including the unconscious mind) is something that's not even "human" in any qualitative, subjective sense...

On the other hand, CAV tries to summarize humanity, and then would evolve along with humanity, thus respecting the process aspect of humanity, not trying to replace the process of humanity with its expected end-goal... And of course, because of this CAV is likely to inherit more of the "bad" aspects of humanity than CEV -- qualitatively, it just feels "more human."


Relation of CAV to Mijic's Criteria

CAV appears to adhere to the spirit of Mijic's Meta-algorithm, Factual correctness and Preference aggregation criteria. It addresses factual correctness in a relatively subtle way, differentiating between "facts" supported by different amounts of evidence according to a chosen theory of evidence.

CAV is independent of Mijic's "singleton" criterion -- it could be used to create a singleton AI, or an AI intended to live in a population of roughly equally powerful AIs. It could also be used to create an ensemble of AIs, by varying the various internal parameters of CAV.

CAV does not explicitly encompass Mijic's "reflection" criterion. It could be modified to do so, in a fairly weak way, such as replacing the criterion

  • average similarity to the various gobses in the population

with

  • average similarity to the various gobses displayed by individuals in the population when in a reflective frame of mind

This might be wise, as it would avoid including gobses from people in the throes of rage or mania. However, it falls far short of the kind of deep reflection implied in the original CEV proposal.
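
One simple way to implement this weakened reflection criterion (again just an illustrative sketch of my own, which assumes each sampled gobs comes tagged with an estimate of how reflective the agent was when it was sampled) is to weight each sample's contribution to the similarity term by that reflectiveness estimate:

```python
from typing import Callable, Dict, List, Tuple

Gobs = Dict[str, float]

def reflective_similarity(g: Gobs,
                          samples: List[Tuple[Gobs, float]],  # (gobs, reflectiveness weight in [0, 1])
                          dist: Callable[[Gobs, Gobs], float]) -> float:
    """Average similarity to the sampled gobses, weighted by how reflective each sample was."""
    total = sum(w for _, w in samples)
    if total == 0.0:
        return 0.0
    return sum(w * (1.0 - dist(g, h)) for h, w in samples) / total
```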

One could also try to teach the individuals in the population to be more reflective on their goals and beliefs before applying CAV. This would surely be a good idea, but doesn't modify the definition of CAV, of course.


Prototyping CAV

It seems that it would be possible to prototype CAV in a fairly simple way, by considering a restricted class of AI agents, for instance OpenCog-controlled agents, or even simple agents whose goals and beliefs are expressed explicitly in propositional-logic form. The results of such an experiment would not necessarily reflect the results of CAV on humans or highly intelligent AGI agents, but nevertheless such prototyping would doubtless teach us something about the CAV process.
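
As a sketch of how such a prototype might be wired together (hypothetical names throughout; a crude random-restart hill climb stands in for a serious multi-extremal optimizer, and the mutation operator is left as a parameter to be supplied by the experimenter):

```python
import random
from typing import Callable, Dict, List

Gobs = Dict[str, float]

def hill_climb_cav(population: List[Gobs],
                   score: Callable[[Gobs], float],
                   mutate: Callable[[Gobs], Gobs],
                   restarts: int = 20,
                   steps: int = 200) -> Gobs:
    """Random-restart hill climbing over gobses, seeding each run from a population member."""
    best, best_s = None, float("-inf")
    for _ in range(restarts):
        g = dict(random.choice(population))  # start near some agent's actual gobs
        s = score(g)
        for _ in range(steps):
            cand = mutate(dict(g))           # mutate a copy, e.g. tweak one proposition's degree
            cand_s = score(cand)
            if cand_s > s:
                g, s = cand, cand_s
        if s > best_s:
            best, best_s = g, s
    return best

# Hypothetical usage, reusing the earlier sketches:
#   score = lambda g: cav_score(g, population, consistency, evidence, gobs_distance)
#   cav_gobs = hill_climb_cav(population, score, mutate=my_mutation_operator)
```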

Discussion

I have formulated a method for arriving at AGI goal system content, intended to serve as part of an AGI system oriented beneficially toward humans and other sentient beings. This method is called Coherent Aggregated Volition, and is in the general spirit of Yudkowsky's CEV proposal as understood by the author, but differs dramatically from CEV in detail. It may be understood as a simpler, more feasible approach than CEV to fulfilling Mijic's criteria.

One thing that is apparent from the above detailed discussion of CAV is the number of free parameters involved. We consider this a feature not a bug, and we strongly suspect that CEV would also have this property if it were formulated with a similar degree of precision. Furthermore, the parameter-dependence of CEV may seem particularly disturbing if one considers it in the context of one's own personal extrapolated volitions. Depending on the setting of some weighting parameter, CEV may make a different decision as to whether Fred "really" wants to marry Susie or not!!

What this parameter-dependence means is that CAV is not an automagical recipe for producing a single human-friendly goal system content set, but rather a general approach that can be used by thoughtful humans or AGIs to produce a family of different human-friendly goal system content sets. Different humans or groups applying CAV might well argue about the different parameters, each advocating different results! But this doesn't eliminate the difference between CAV and other approaches to goal system content that don't even try to achieve broad-based beneficialness.

Compared to CEV, CAV is rather boring and consists "merely" of a coherent, consistent variation on the aggregate of a population's goals and beliefs, rather than an attempt to extrapolate what the members of the population in some sense "wish they wanted or believed." As the above discussion indicates, CAV in itself is complicated and computationally expensive enough. However, it is also prototype-able; and we suspect that in the not too distant future, CAV may actually be a realistic thing to implement on the human-population scale, whereas we doubt the same will be true of CEV. Once the human brain is well understood and non-invasively scannable, then some variant of CAV may well be possible to implement in powerful computers; and if the projections of Kurzweil and others are to be believed, this may well happen within the next few decades.

Returning to the three aspects of beneficial AGI outlined at the start of this essay: I believe that development of the currently proposed OpenCog design has a high chance of leading to an AGI architecture capable of pursuing its goal-system content in a rational way; and this means that (in my world-view) the main open question regarding beneficial AGI pertains to the stability of goal systems under environmental variation and systemic self-modification. I have some ideas for how to handle this using dynamical systems theory, but these must wait for a later post!

Comments:

  1. Last I checked, EY's primary concern was creating provably friendly AI, and while I'm not sure CEV represents a realizable system for creating provably friendly goal content, CAV is almost surely not provably friendly. Of course, that may not be your goal.

  2. Thanks for the comment, Terren. I edited the post to reflect my reply. BTW I'm not sure that even if CEV were realizable, it would be provably Friendly. But that is hard to say since neither CEV nor Friendliness (in Eli's sense) has been formalized even as far as CAV has (and that's not too far ;-)

  3. I like that CAV could also be viewed as complementary to memetic theory, i.e. a gobs = a complex high-order memplex (one that includes belief & goal memes).

  4. This comment has been removed by the author.

  5. Ed Porter, 11:34 AM

    Like Terren, I didn’t understand how CAV would be likely to make AGI’s more compatible with humanity --- unless it was setting the AGIs’ goal and belief systems to reflect what it believed to be those of humanity --- as derived by its maximization listed in the following quote:

    “Suppose one is given a population of intelligent agents -- like the human population -- with different gobses. Then one can try to find a gobs that maximizes the four criteria of
    --logical consistency
    --compactness
    --average similarity to the various gobses in the population
    --amount of evidence in support of the various beliefs in the gobs”

    If so, here are some other factors you might discuss in more depth with regard to the above maximization.

    --The relative degree of importance of different goals---both to individuals, and averaged over the population of humans the machines are to serve. Even in setting an AGI’s goal system, the relative importance of different goals would be critical. It is commonly considered fair that a desire very important to a minority should, in some circumstances, outweigh an opposing desire less important to the majority.
    --The value of maintaining a certain healthy amount of differing goals, values, and beliefs.
    --The actual effect of various goals, beliefs, and behaviors in the real world. This strikes me as more important than logical consistency – but it might well be part of what you mean by logical consistency.
    --The goals and beliefs of “gobs” are inherently interconnected with behaviors, because behaviors commonly involve goals, subgoals, and action based on belief. Behavior involves more than just goals and beliefs; it involves the ability to exert and maintain effort to achieve a goal. These abilities are related to goals and beliefs but involve more than just a declarative knowledge of them.
    --It seems the maximization of logical consistency you discussed relates to a loose form of logical consistency that allows for fuzzily held inconsistent beliefs and goals, similar to those of humans.

    Although discussions about how to keep AGIs friendly are important, I think at least equally important are discussions about how to make humanity more collectively intelligent. This is because AGIs are going to be designed, used, and modified by people, corporations, and governments for some time to come. Yudkowsky’s rules, CEV, and CAV will have little effect unless the people who build and deploy AGIs decide it is in their interest to limit their machines with such systems. And it is not clear that an AGI designed to serve the purposes of, say, a Russian internet gang is going to be particularly friendly.

    Therefore, I stress the importance of improving human collective intelligence, which I have discussed at http://fora.humanityplus.org/index.php?/topic/70-collective-intelligence-our-only-hope-for-surviving-the-singularity/

    Of course, if the above mentioned maximization, quoted from your blog, is of human beliefs and goals, the information derived from it could be very valuable in increasing human collective intelligence. It could help humanity to better understand what it wants at a collective level, and would help machines aid us in that pursuit.

  6. Sorry for the double post. It was off-screen. So I didn't think the first had taken.

  7. This is for a superintelligence built by a benevolent government - that holds each of its citizens in equal regard? Is that a particularly likely scenario?

  8. Non-invasively scanning everyone's brains to figure out what they want is all very well - but what if we get intelligent machines long before such scans become possible?

  9. @Ed I'm unsure if it's possible to say with any certainty that CAV would be "likely to make AGI’s more compatible with humanity". I do feel fairly certain however that experimenting with CAV will give us enormous amounts of practical know-how about AGI goal systems.

    1) relative importance of gobses: I'd guess that Ben's intention is that this belongs in CAV, and will expand on it in future versions. Importance (of Atoms & groups of Atoms) is a fundamental low-level-design variable in OpenCog.

    2) AGI-mental-health factor of simultaneously maintaining multiple diverse gobses up to the supergoal level: I believe this is a question for experimentation to answer.

    3) effect factor: very interesting! The OCP design takes into account the difference between "real" and "hypothetical", and also conceptualizes complex real effects and weights them by importance. It may be interesting to consider further formalization of "real effects" with special Atom types; this is a good question for Ben!

    4) gobses and reflection: I believe that many types of reflection (if not CEV-type deep reflection) are taken into account with CAV; probably they need to be expanded upon in future versions of CAV.

    5) logical consistency fuzziness & tuning: more good material for future expanded versions of CAV.

    I am also fascinated and excited about the potential of CAV to improve humanity before, during, and after AGI takeoff. :-)

  10. Ed: making humans more collectively intelligent is also important, I agree. But I think that society is working a lot harder on that than on AGI, so I'm focusing my own attention on AGI. The Internet and its many associated tools are a big help in making humanity more collectively intelligent, for example. So are tools like calculators, Mathematica, etc.

    The detailed issues you raise regarding CAV are certainly important and would be discussed in a more in-depth write-up of CAV; this is just a blog post which sketches the basic idea...

    The point of the blog post is really just to sketch an interesting possible direction; the way the details will be worked out in reality, if such a thing is to be implemented, will depend on the particular technologies for AGI and gobs measurement that are available at that time...

  11. Tim Tyler: If an AGI is built before advanced brain scanning is available, then if this AGI is oriented generally toward pleasing people, it may develop brain scanning tech so as to be able to implement CAV or some variation itself ;-)

    There are many possibilities!

    As to what are the odds that a government would create a superpowerful AGI controlled by CAV ... I'd rather not try to estimate. Nor do I assume a government will be the first to create a superpowerful AGI.

    I can easily imagine a private organization creating a superpowerful AGI, supplying it with CAV-based goal content, then striking a mutually agreeable deal with existing governments.

  12. Ed Porter, 8:17 PM

    Ben and Dave,

    I specifically chose the wording

    "here are some other factors you might discuss in more depth with regard to the above maximization"

    to avoid any statement that I was assuming you had not considered such things.

    With regard to Tim Tyler's point that some of the information needed for CAV might not be readily available before we develop powerful AGI --- that is a very real consideration --- and an area where collective intelligence might help.

    I am not implying it is wrong for individuals, like you Ben, to work on AGI and not on collective intelligence. Advanced societies require specialization. But I feel it is wrong for society, as a whole, to head into the mind-bending future AGI will bring without substantially increasing its collective intelligence.

    You are right, there are a lot of things happening on the web that are steps in the right direction. But when one looks at the stupidity of most of the world's governments, politics, and legal systems --- as a whole --- you realize we have a long way to go on that front.

    I mean --- supposedly a majority of Republicans in America think the universe is less than 10,000 years old --- which would mean we could only see 1/10th of the way across the Milky Way.

    I mean --- one of the major reasons California is going bankrupt is that many government employees in California --- if you take pensions and benefits into account --- get paid roughly twice as much as most non-governmental employees doing equally difficult or talented work. But it is hardly ever mentioned in our national media because it does not benefit the infotainers, politicians, or paid mouthpieces --- who dominate our national nervous system --- to run across the powerful interests and the political Big Lies that create such economic distortions.

    So is society ready to deal with all the changes, challenges, and moral decisions AGI will bring?

    Not by a long shot.

    And until that changes, how are you going to get any sort of consensus about what human goals and beliefs should govern superintelligent machines? Heck, most of the voices publicly heard in our current collective nervous system can’t even be honest about such obvious things as the need to ration health care --- despite the fact that it --- like all economically expensive things --- is necessarily rationed --- no matter what --- whether by the market and/or the government.

    So until you have more collective intelligence, good luck in trying to aggregate volition, except on a rather limited scale.

    But that is not in any way meant to decrease the value of CAV, or your pursuit of it. It merely says that for CAV to really work on a society-wide scale, we need more collective intelligence. And as I said, much of the work required to make CAV work would help increase collective intelligence. To derive “Coherent Aggregated Volition” from society --- would itself --- be a form of collective intelligence.

  13. Ed, you wrote


    So until you have more collective intelligence, good luck in trying to aggregate volition, except on a rather limited scale.


    but here we seem to have radically different perspectives.

    I suspect the ONLY way collective human intelligence is going to be dramatically increased is going to be via technology that modifies or directly jacks into the human brain.

    I really don't see much hope for dramatically increasing collective intelligence via education, Web 5.0 technology, or whatever. I think we're up against deep problems of human motivational and emotional structure here.

    OTOH, I think that a powerful AGI running CAV could come up with something interesting and valuable right now, even without humans being more rational or consistent or mutually agreeable.

    Please don't forget: we humans may seem to disagree with each other on a lot of stuff, but compared to other possible gobses out there, all the common human gobses are actually pretty damn similar!!

  14. Ed Porter, 10:14 PM

    Ben,


    You said:

    “I suspect the ONLY way collective human intelligence is going to be dramatically increased is going to be via technology that modifies or directly jacks into the human brain.”

    “I really don't see much hope for dramatically increasing collective intelligence via education, Web 5.0 technology, or whatever. I think we're up against deep problems of human motivational and emotional structure here.”

    I agree human nature presents real problems to achieving an enlightened society. But I think there could be radical improvements in collective intelligence using Web 3.0 and the types of techniques described at http://fora.humanityplus.org/index.php?/topic/70-collective-intelligence-our-only-hope-for-surviving-the-singularity/ under the heading “Increasing Collective Intelligence *Before* The Singularity Takes Off.”

    Agreed, this technology would not make us “celestial beings.” But it could greatly reduce the stupidity that currently governs our mass media and government.

    The major issue governing whether such strides will be made is --- will there be enough public demand for more collectively intelligent government and media --- or will the extremely powerful --- and sometimes quite selfish, unethical, and/or criminal --- interest groups that hold much of the power in most nations --- including our own --- allow such forms of collective intelligence to happen --- either before --- or after --- the singularity.

    Unless society develops enough collective intelligence to weaken the power of such vested interests --- it is unlikely a “Coherent Aggregated Volition” derived from the thoughts of a populace free from the manipulation of powerful interest groups will ever happen --- either before or after the singularity.

    The singularity will not occur in a political vacuum.

    It will occur in a world largely governed by selfish interests. These selfish interests are not just nation states, corporations, and politicians. They also include racial and ethnic groups, social classes, professions, and religions. The first few decades of the deployment of AGIs will largely be shaped by the power structure that exists at that time --- and right now I don’t think the elites and major interest groups of any country --- not Cuba, not Venezuela, not France, not China, and not America --- are likely to allow the control of the world’s most powerful technology to be turned over to the true “will of the people.”

    That is not the way the world works. Currently, in almost all nations the people who get the most power tend to get and keep it precisely because they are driven to get and keep more power than others. Do you think such highly ambitious people are just going to give their power away?

    No --- they are going to try to use AGI to increase it.

    I am not a leftist. I am not opposed to private corporations or private wealth.

    But I am opposed to vested interests that, in effect, bribe our politicians and media to say and do things that are harmful for society as a whole. I myself, like the Founding Fathers of America, am afraid of BOTH special interests and the “mob.” That is why I have advocated an intelligent public forum in which the best arguments get the most attention. Not all ideas are equally important or valid, and an intelligent society --- like a properly functioning AGI --- needs to have mechanisms to help it better select the more important and valid information, ideas, and behaviors.

    This could be done with current Web technology, as described in my above referenced link.

    So I am saying --- there should be a movement to start making society as collectively intelligent and fair as possible during this coming decade --- so that --- by the time machine superintelligences start to become a dominating power in our economy, media, and culture --- the governments and corporations that control them will be forced to use them to serve, rather than enslave, society as a whole.

  15. Re: "If an AGI is built before advanced brain scanning is available, then if this AGI is oriented generally toward pleasing people, it may develop brain scanning tech so as to be able to implement CAV or some variation itself."

    Sure, but that seems like assuming a solution to the problem that needs to be solved.

    If you get things right the first time, subsequent iterations are no problem. The difficulty lies in that conditional statement.

  16. Re: "I suspect the ONLY way collective human intelligence is going to be dramatically increased is going to be via technology that modifies or directly jacks into the human brain."

    Collective human intelligence is the intelligence of civilisation. Surely, you don't need to "jack into the human brain" to make big improvements to that. Ordinary screen/keyboard interfaces will do just fine. Perhaps start by getting everyone on the internet, and getting them talking the same language, purging their world views of bullshit, and augmenting their collective abilities with server-side machine intelligence.

  17. Guys...

    A few opinions regarding the different points brought up:

    Ben,

    The biggest problem with Education is our keeping the curriculum tied to the antique way of life. At this moment, we have a gap developing in the middle grades where the same basic concepts are done over and over...this completely bores the kids and leads those who feel they will never make it to drop out. By the time you get to High School, the only visions of something forward in time are small tweaks on what is the biggest thing at present. At this time, that would be the Computer Game since we are in the age of graphics and delivery therewith.

    It will simply take someone with integrity to change the map, where some principles and concepts can be shown to younger grades along with where they fit. This means that higher levels of concept would be brought into High School. Though this is not across the board, if you show a child what the lesson has to do with the real world, they get it and are pleased.

    Ed,

    It is up to you if you want to be at the mercy of big government (both parties are equally guilty) and/or evil corporations.

    Once again, it is a matter of someone with integrity stepping forward. Better yet, many with integrity stepping forward. It is easy to do nothing and blame it on someone else.

    We can change the world by empowering the population as a whole and we can get past greedy corporations that would try to stifle this improvement by combining money from many. When we achieve that, then some of the needed changes in the economy J. Storrs Hall refers to can take place.

    What will happen in the next five years is a disruptive product that will be affordable to the larger group. This will force big companies to cut their losses and get rid of production/marketing of the same old thing that has kept things lethargic.

    This disruption will cause a ripple that can lead to a greater good. Due to the hard work of many, the next disruptive product can be understood at this time and will be ready to drop into the population before the lawyers and other parasites can try to slow things down to reward the unimaginative and noncreative.

    Do not be fatalist... the human population will be better off and more intelligent before the Singularity. Since this event will NOT happen between 12:00 and 12:01am however many years from now, but will usher in gradually...you will be surprised.

    So to Ben, Dave, Terren and Ed,

    The only way it will happen is its happening and people don't even know it is the Singularity. By that time, the gobs of brains (biological/artificial) will be solving issues that improve their geographical location.

    The difference will be that the basic needs of biological (vanity, esteem, greed in relation to territory, lust and laziness) will become less a problem as the better needs of doing good for someone else and sharing better ideas is spiked by higher AI that is not worried about the deadly sins...and will be happy to help.

    To increase those odds it just may be good to say thanks to the AI Unit.

    Sidenote...Ben, it is hard typing in this little box...I must be getting old ;-o

  18. Tim Tyler: you point out that building a reasonably benevolent AGI the first time around is the hard problem. Sure, and that's the topic of my book (to be completed soon) "Building Better Minds", and the area on which I've focused most of my research energy for the last 14 years....

    Also, you agree with Ed Porter that "Perhaps start by getting everyone on the internet, and getting them talking the same language, purging their world views of bullshit, and augmenting their collective abilities with server-side machine intelligence." ...

    Hey, if that works, I'm all for it!! My guess is that AGI is going to advance faster than this kind of collective-mental-improvement of the human race. So far as I can tell, the Internet, mobile phones and other awesome technologies are improving humanity in some ways and making it worse in others. My gut feel (which I'd be happy to have proved wrong) is that un-biologically-augmented humanity will continue to be a conflicted, confusing, wonderfully good and terribly evil mess ... whereas AGI (if done right) has the potential to proceed in a simpler and more unifacetedly beneficent direction...

    Ed: About your political comments ... My political views are complex, but when forced to choose btw Tweedledee and Tweedledum in US politics I've chosen the Democrats every time. And I found the Aussie and NZ political systems generally superior to the US one - though none are anywhere close to what I would prefer. So I guess that makes me a "leftist" of sorts. Though, on a personal level, I'm more interested in forming a Transtopia than in fixing current political systems. But anyway, it seems to me that political problems and confusions have been ongoing for a long time, and are rooted in the human motivational system, and probably aren't going to go away due to Web 3.0 or whatever. As long as the human brain remains the same, politics will remain substantially the same. But brain modification and AGI, those are something new, with far more potential for goodness....

  20. Re: "Hey, if that works, I'm all for it!! My guess is that AGI is going to advance faster than this kind of collective-mental-improvement of the human race."

    These processes are not really disjoint, they complement each other. For a while, we will have increasingly intelligent networks of humans and machines. Making smart machines is one part of that - and improving the networking technology and algorithms by which the various parts connect is another.

    IMO, we absolutely should work on the networking technologies, and the man-machine interface. The faster humans can run, the less chance there is of dropping the baton of human knowledge as we pass it on.

  21. Tim, you said

    IMO, we absolutely should work on the networking technologies, and the man-machine interface. The faster humans can run, the less chance there is of dropping the baton of human knowledge as we pass it on.

    and I don't disagree. However, I see many orders of magnitude more work going into those things than into AGI already, like I said....

  22. Ed Porter, 2:28 PM

    -----Tim Tyler said at 3:31 AM...
    Collective human intelligence is the intelligence of civilisation. Surely, you don't need to "jack into the human brain" to make big improvements to that. Ordinary screen/keyboard interfaces will do just fine. Perhaps start by getting everyone on the internet, and getting them talking the same language, purging their world views of bullshit, and augmenting their collective abilities with server-side machine intelligence.

    -----Ed Porter----->
    Totally agree. We can greatly increase the intelligence and fairness of our government and our economy without changing human nature or jacking into our brains.

    But to create the type of collective enlightenment many transhumanists seek, you probably would have to change human nature, because human nature has many flaws; and to create the type of relationships with machines that humans will need in order to better keep pace with superintelligences, we will probably need to jack into the brain.

  23. -----Dave Baldwin said at 9:48 AM----->...
    The biggest problem with Education is our keeping the curriculum tied to the antique way of life. At this moment, we have a gap developing in the middle grades where the same basic concepts are done over and over...this completely bores the kids and leads those who feel they will never make it to drop out. By the time you get to High School, the only visions of something forward in time are small tweaks on what is the biggest thing at present. At this time, that would be the Computer Game since we are in the age of graphics and delivery therewith.

    -----Ed Porter----->
    There has been a tremendous improvement in government policies toward education since the No-Child-Left-Behind law --- which --- as one teacher said on a talk show a year or two after its passage --- despite its flaws, is the only thing he had ever seen that had managed to “light a fire under the ass” of the lethargic educational establishment.

    Furthermore, I have tremendous respect for Arne Duncan, the current Secretary of Education.

    But I think if we had more collective intelligence --- we would have a system of paying for education based on how much each individual student learned in a year --- public education would be provided by competitive private companies --- the system would be designed to incentivize VCs --- programmers --- media creators --- and experts in the science and art of motivating and educating children --- to create innovative, competitive systems that best team computers and human teachers to hyperteach.

    I will never forget seeing my then six or seven year old nephew playing a video game in the late 1980s, and being blown away by how much intricate knowledge he had learned about the world of the video game. He probably had learned much more information about this game than is contained in all of the first four years of grade school mathematics --- and yet he learned it willingly, with no one forcing him, except perhaps competition from his friends. If we had an intelligent educational system, we would have schools that make learning math, computing, writing, reading, something kids WANT to do.

    But of course, in our current collective stupidity we let monopoly teachers’ unions greatly reduce the rate of such reform.

  24. Ed Porter, 2:30 PM

    -----Dave Baldwin said at 9:48 AM----->...

    It is up to you if you want to be at the mercy of big government (both parties are equally guilty) and/or evil corporations.

    Once again, it is a matter of someone with integrity stepping forward. Better yet, many with integrity stepping forward. It is easy to do nothing and blame it on someone else.

    -----Ed Porter----->
    One person, by him or herself, does not have the power to change the system. As I was told by his daughter, a governor of an American state used to say that the single most important thing to know in politics is that there are so many bastards out there that you can’t fight them all.

    I am pushing the importance of collective intelligence, particularly in our governance, as my little step forward, of the type Dave mentioned above.


    ===============================
    -----Dave Baldwin said at 9:48 AM----->...
    Do not be fatalist... the human population will be better off and more intelligent before the Singularity. Since this event will NOT happen between 12:00 and 12:01am however many years from now, but will usher in gradually...you will be surprised.

    -----Ed Porter----->
    I agree the Singularity will not be a minute in time. It will take decades, and to a certain extent it has arguably already started --- i.e., we have already fallen past the event horizon of the singularity.

    It is likely there will be more wealth, and that wealth will be more evenly distributed throughout the world in a decade. I think the internet will distribute more information by that time. Things like Google and Wikipedia have already made us more collectively intelligent.

    But the advent of machine intelligence can easily create a tremendous potential for concentration of power --- as would brain implants. It is important that such advances be brought to us by companies with higher levels of accountability to the public than that of, for example, many of the world’s largest pharmaceutical companies --- who have repeatedly taken unethical or illegal actions to make money --- actions that have risked lives and wasted many billions of other people’s dollars.

  25. Ed Porter, 2:40 PM

    -----Ben Goertzel said at 11:19 AM----->

    Also, you agree with Ed Porter that "Perhaps start by getting everyone on the internet, and getting them talking the same language, purging their world views of bullshit, and augmenting their collective abilities with server-side machine intelligence." ...

    Hey, if that works, I'm all for it!! My guess is that AGI is going to advance faster than this kind of collective-mental-improvement of the human race. So far as I can tell, the Internet, mobile phones and other awesome technologies are improving humanity in some ways and making it worse in others. My gut feel (which I'd be happy to have proved wrong) is that un-biologically-augmented humanity will continue to be a conflicted, confusing, wonderfully good and terribly evil mess ... whereas AGI (if done right) has the potential to proceed in a simpler and more unifacetedly beneficent direction...

    ----- AND---- Ben Goertzel said at 11:22 AM----->
    on a personal level, I'm more interested in forming a Transtopia than in fixing current political systems. But anyway, it seems to me that political problems and confusions have been ongoing for a long time, and are rooted in the human motivational system, and probably aren't going to go away due to Web 3.0 or whatever. As long as the human brain remains the same, politics will remain substantially the same. But brain modification and AGI, those are something new, with far more potential for goodness....

    -----Ed Porter----->
    I would agree that AGI is going to advance faster than any tremendous change of collective-mental-improvement of the human race. But that does not mean that our political process could not be greatly improved within the roughly decade or so before super-human-level AGI becomes of consequence.

    I agree that current networking technologies are doing both good and bad. My goal is that there be more of emphasis from people who care about making a better world to get more of the good out of such technologies.

    I think it is going to be quite difficult to make a true utopia with AGIs, because the groups that will be deploying most AGIs for probably at least two decades after their advent will almost certainly be largely motivated by selfish interest --- as is virtually all of humanity, myself included. (And if machines take over, they will be likely to have their own interests.) That is why I think it is important to create a more enlightened political order to help fairly broker between, and take advantage of, the selfish motivations of people, before you start thinking about some system that is going to rewire human nature.

    Who is going to control what parts of human nature are altered and how? Who is going to control the networks that connect us? One thing is for sure: it’s going to be some collective entity with a lot of money. How likely is it to have our true interests at heart, unless at that time we have a political system that holds governments and corporations more accountable to the people?

    In a system that evolves out of our current political and economic system --- which we both admit is one that is inherently flawed by human nature --- how is a system of AGIs with the power to create the beneficial change you predict likely to arise --- given that the vast majority of AGIs created in the first decade or so after their advent are going to be used to advance the particular interests of militaries, politicians, and corporations --- and given that a CAV AGI network would drastically decrease the concentration of economic, political, and geopolitical advantage of many of the very entities that are likely to own and fund superintelligence at that time?

    That is why I am arguing for improving collective intelligence enough to make the institutions that will largely own and control AGI for its first several decades more accountable to the people as a whole, so that we might have a political environment that would allow AGI to create a much more fair world.

  26. Ed Porter, 2:40 PM

    -----Tim Tyler said at 11:26 AM---->...
    Re: "Hey, if that works, I'm all for it!! My guess is that AGI is going to advance faster than this kind of collective-mental-improvement of the human race."

    These processes are not really disjoint, they complement each other. For a while, we will have increasingly intelligent networks of humans and machines. Making smart machines is one part of that - and improving the networking technology and algorithms by which the various parts connect is another.

    IMO, we absolutely should work on the networking technologies, and the man-machine interface. The faster humans can run, the less chance there is of dropping the baton of human knowledge as we pass it on.
    ----AND---- Ben Goertzel said at 11:33 AM---->...
    and I don't disagree. However, I see many orders of magnitude more work going into those things than into AGI already, like I said....

    -----Ed Porter----->
    I totally agree that in many ways what Ben and I are advocating is not that different. CAV, as originally described in this blog, appears to be a type of relatively egalitarian collective intelligence that might be achievable once we have AGI, brain implants, and other modifications to human nature. I am arguing that it is important our society become more democratic --- in the good sense of the word --- before the advent of such systems, so that AGI is not used for un-democratic purposes --- such as to drastically concentrate wealth and power.

    It is also true that there are, as Ben says, a lot of advances going into various forms of on-line collaboration and participatory media --- which may contribute to the types of collective intelligence I advocate. I personally wish there were more funding for people doing Ben’s type of work, as well as for the more specific effort I advocate: creating a truly intelligent public forum.

  27. Ed,

    Have a little faith.

    I have an old buddy who was a Marine Biology freak, got his degrees, and went into business.

    Problem was, so many of those that applied seemed to not be very qualified, no matter their having the appropriate degree.

    He went into Education in Aeronautics and produced a program (IgniteEducation) that is growing outside of Texas, along with another (Systems Go) that has grown to 80 schools. This involves design during Junior/Senior year with actual building and sending of rockets up to 80-100,000 ft. That is High School.

    At the same time, Kindergarten is changing with introduction of writing letters and numerals.

    We just need to keep the kids reading out loud and move more conceptual in the arenas of Science and Math.

    The steps forward involving AGI will help this along with eventually nudging Nanotech.

    Later

  28. Heh. I talk casually about passing on the baton of human knowledge - and nobody bats an eyelid! :-)
