
Wednesday, September 09, 2009

AGI, Ethics, Cognitive Synergy and Ethical Synergy (from my Yale talk...)

Earlier this year I gave a talk at Yale University titled "Ethical Issues Related to Advanced Artificial General Intelligence (A Few Small Worries)" ...

It was a talk focused on verbal discussion rather than on the PowerPoint, though I did show a few slides (and mostly ignored them during the talk); anyway, the brief, ugly slideshow is here for your amusement...

The most innovative point made during the talk was a connection between the multiple types of memory and multiple types of ethical knowledge and understanding.

I showed this diagram of different types of memory and the cognitive processes associated with them (click the picture to see a bigger, more legible version)



and then I showed this diagram


which associates different types of ethical intuition with different types of memory.

To wit:
  • Episodic memory corresponds to the process of ethically assessing a situation based on similar prior situations
  • Sensorimotor memory corresponds to "mirror neuron" type ethics, where you feel another person's feelings via mirroring their physiological emotional responses and actions
  • Declarative memory corresponds to rational ethical judgment
  • Procedural memory corresponds to "ethical habit" ... learning by imitation and reinforcement to do what is right, even when the reasons aren't well articulated or understood
  • Attentional memory corresponds to the existence of appropriate patterns guiding one to pay adequate attention to ethical considerations at appropriate times
I presented the concept that an ethically mature person should balance all these kinds of ethics.

This notion ties in with a paper that Stephan Bugaj and I delivered at AGI-08, called Stages of Ethical Development in Artificial General Intelligence Systems. In this paper we discussed, among other topics, Kohlberg's theory of logical ethical judgment and Gilligan's theory of empathic ethical judgment. In the present terms, I'd say Kohlberg's theory is declarative-memory focused whereas Gilligan's theory is focused on episodic and sensorimotor memory. We concluded there that to pass to the "mature" stage of ethical development, a deep and rich integration of the logical and empathic approaches to ethics is required.

The present ideas suggest a modification to this idea: to pass to the mature stage of ethical development, a deep and rich integration of the ethical approaches associated with the five main types of memory systems is required.

Tuesday, September 08, 2009

Why Bother to Vote?: A Novel Multiversal Answer

Any sensible person with the choice of going to the polls to vote has almost surely asked themselves “Why should I bother voting when it’s incredibly unlikely my vote will make any difference, given the large number of people voting?”

[Note for non-US readers: in the US, unlike some countries, there is no legal requirement to vote; it's an option.]

I've discussed this issue with dozens of people and have never really heard any sensible answers.

I say "If I stay home and work or play Parcheesi instead of voting, then the election will proceed exactly the same way as if I had voted. The odds of me affecting this election are incredibly tiny."

They say: "Yeah, but if EVERYBODY thought that way, then democracy couldn't work." ... as if this were a counterargument.

Or: "Yeah, but if everyone intelligent enough to have that train of thought followed it and avoided voting, then only stupid people would vote and we'd have a government elected by the retarded.... Oh, wait ... would that be any different than what we actually have now?"

I've thought about this a lot, off and on, over the years, and finally I think I've come up with an interesting, novel answer to the question. To have a handy label, I'll call it the "multiversal answer."

This is a somewhat philosophically complex answer, which requires a deviation from our ordinary ways of thinking about the relationship between ourselves and the universe.

I'll run through some details and probability calculations, and then get back to philosophy and free will and such at the end.


Rational Agents

The multiversal answer pertains to agents who make choices based on expected utility maximization. That is, it pertains to agents who, given a choice between two actions, will choose the one such that, after the choice is made, the agent’s expected utility will be highest. Or, to put it informally, it pertains to agents who follow the rule: “Choose the option that, in hindsight, you will be glad you chose.”
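As a toy illustration (my own sketch, not from the post; the umbrella scenario and all payoffs are invented), an expected-utility maximizer can be written in a few lines of Python:

```python
# Minimal sketch of an expected-utility-maximizing agent.
# All actions, probabilities, and utilities below are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Pick the action whose expected utility is highest.
    actions: dict mapping action name -> list of (probability, utility) pairs."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Toy decision under a 30% chance of rain.
actions = {
    "take_umbrella":  [(0.3, 10.0), (0.7, 6.0)],   # EU = 0.3*10 + 0.7*6  = 7.2
    "leave_umbrella": [(0.3, -5.0), (0.7, 8.0)],   # EU = 0.3*-5 + 0.7*8  = 4.1
}
print(choose(actions))  # take_umbrella
```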

Of course people don't always follow this sort of rule in determining their actions; people are complex dynamical systems and don't follow any simple rules. But, my point is to argue why voting might make sense for an agent following a simple rational decision-making procedure. I.e.: why voting might be a reasonable behavior even though, in the sense indicated above, the odds of your vote being decisive in an election are minimal.


Vote So That You'll Live in a Universe Where People Like You Vote

The conceptual basis of the multiversal answer is the principle that “you should vote because, if you vote, this means that after you’ve voted, you’ll know that you probably live in a universe where people similar to you vote.

On the other hand, if you don’t vote, this means you probably live in a universe where people similar to you don’t vote.”

Clearly, you would rather live in a universe where people similar to you vote.

(Yes, this could be formalized based on the degrees to which individuals with varying degrees of similarity to you vote. But we won’t worry about the math details for now.)

Your vote may not count much on its own, but it’s a bad thing if everyone similar to you (with the same preferences as you) doesn’t vote.

Note that it’s not good enough to intend to vote but then back out at the last minute. After all, if you do that, then probably everyone similar to you is going to do the same thing! So if you do that, it means you’re in a universe where people similar to you are likely to almost vote, rather than a universe where people similar to you are likely to vote.

Possible Worlds

Underlying this answer is a “possible worlds” philosophy, holding that there are many possible universes we could live in -- and we don’t know exactly which one we do live in, based on the limited data at our disposal.

So, given a predicate P like “the degree to which people similar to me vote,” we can estimate the truth value of P by a weighted average, summed over possible worlds W, of the product

(degree that P holds in possible world W) * (probability that world W is the one I live in)

(or some similar formula).
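For concreteness, here is a tiny sketch of that estimate in Python (the worlds, their probabilities, and the degrees are all made up for illustration):

```python
# Estimate the degree of a predicate P as a probability-weighted average
# over possible worlds. All labels and numbers here are hypothetical.
worlds = [
    # (world, probability I live in it, degree P holds there)
    ("W1", 0.5, 0.9),
    ("W2", 0.3, 0.4),
    ("W3", 0.2, 0.1),
]

estimate = sum(prob * degree for _, prob, degree in worlds)
print(estimate)  # 0.5*0.9 + 0.3*0.4 + 0.2*0.1 = 0.59
```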

Some Plausible Assumptions

So, based on the above, suppose we assume that


P(people like me vote | I vote) > P(people like me vote | I don’t vote)

and


P(good world | people like me vote) > P(good world | people like me don't vote)

(where “good world” is shorthand for “I live in a possible world I like.” Again, this can be more fully formalized, but I won’t bother with that for now.)

Note that these probabilities are calculated across possible worlds. For instance,

P(people like me vote | I vote)

means

P(people like me vote in possible world W | I vote in possible world W)

A Critical Question

So, given the above, one has the question: From the above inequalities, can we derive that


P(good world | I vote) > P(good world | I don't vote)

which would imply that voting increases the probability of living in a good world (whether or not one wins the particular election one is voting in)?

The answer is: not quite.

But, sort of.

What the mathematics tells us is that this conclusion holds if

max[ P(gw | Iv & plmv) - P(gw | plmv),  P(gw | Iv & ~plmv) - P(gw | ~plmv) ]

<

0.5 * [ P(gw | plmv) - P(gw) ] * [ P(plmv | Iv) - P(plmv) ]

where

gw = good world
Iv = I vote
plmv = people like me vote

Basically, if the left-hand side of this inequality is small, it means that the effect of my voting on the probability that I live in a good world is almost entirely contained in the effect of people like me voting on that probability. And this condition seems quite sensible.

So, if this condition holds, then voting increases the odds of being in a good world, and it makes some sense to vote on that basis.
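To make this concrete, here is a toy numerical check (my own construction, with invented probabilities). In this model, "I vote" influences "good world" only through "people like me vote", so the left-hand side of the condition above is exactly zero and the condition holds; and indeed P(gw | Iv) comes out larger than P(gw | ~Iv):

```python
from itertools import product

# Toy joint distribution over (Iv, plmv, gw). All numbers are hypothetical.
p_iv = 0.5                                  # prior P(I vote)
p_plmv_given_iv = {True: 0.9, False: 0.2}   # P(plmv | Iv)
p_gw_given_plmv = {True: 0.7, False: 0.4}   # P(gw | plmv); gw depends only on plmv

joint = {}
for iv, plmv, gw in product([True, False], repeat=3):
    p = p_iv if iv else 1 - p_iv
    p *= p_plmv_given_iv[iv] if plmv else 1 - p_plmv_given_iv[iv]
    p *= p_gw_given_plmv[plmv] if gw else 1 - p_gw_given_plmv[plmv]
    joint[(iv, plmv, gw)] = p

def cond(event, given):
    """P(event | given), where both are predicates on (iv, plmv, gw) triples."""
    num = sum(p for k, p in joint.items() if event(k) and given(k))
    den = sum(p for k, p in joint.items() if given(k))
    return num / den

p_gw_if_vote   = cond(lambda k: k[2], lambda k: k[0])      # P(gw | Iv)  = 0.67
p_gw_if_novote = cond(lambda k: k[2], lambda k: not k[0])  # P(gw | ~Iv) = 0.46
print(p_gw_if_vote, p_gw_if_novote)
```

So in this toy universe, voting raises the estimated probability of living in a good world from 0.46 to 0.67, even though no single vote decides anything.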

There’s still a quantitative calculation to make, though. Voting has some cost, so one needs to estimate whether the increase in the expected goodness of {the world one estimates oneself to live in}, induced by voting, outweighs the cost of voting. This devolves into a bunch of algebra that I don’t feel like doing right now. But note that it’s a totally different calculation than the calculation as to whether one’s individual vote makes any difference in a particular election.
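The shape of that quantitative check can be sketched as follows (my framing and numbers, not the algebra the post defers): vote iff the gain in expected goodness-of-world from voting exceeds the cost of voting.

```python
# Sketch of the cost-benefit check: is the evidential gain worth the cost?
# All parameter values here are invented for illustration.
def worth_voting(p_gw_if_vote, p_gw_if_novote, utility_of_good_world, cost_of_voting):
    gain = (p_gw_if_vote - p_gw_if_novote) * utility_of_good_world
    return gain > cost_of_voting

print(worth_voting(0.67, 0.46, 100.0, 5.0))   # True:  gain ~21 > cost 5
print(worth_voting(0.67, 0.46, 100.0, 25.0))  # False: gain ~21 < cost 25
```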

Free Will

Underlying the above perspective is an attitude toward "free will" which is different from the one conventional in the modern Western mindset.

In the conventional interpretation of "free will", a person can choose whether to vote or not, and this doesn't impact their estimate of what kind of universe they live in -- it's an independent, free choice.

In the interpretation used in the multiversal answer to the voting problem, a person can (in a sense) choose what to do, but then when they study their choices in hindsight, they can infer from the pattern of their choices something about the universe they live in.

Combining this with the "expected utility maximization" approach, which says you should make the choices that you'll be happiest with in hindsight (after the choice is made) ... one comes up with the principle that you should make the choice that, in hindsight, will yield the most desirable implications about what kind of universe you live in.

And it's according to this principle that, in the multiversal answer, voting may be a sensible choice regardless of the small chance that your particular vote impacts the election.

The point is that, after voting, the fact that you voted will give you evidence that you live in a nice universe, where people like you vote and therefore things tend to go in a favorable way for you.

On the other hand, if you don't vote, then afterwards the fact that you didn't vote will give you evidence that you live in a universe where people like you don't vote, and therefore things tend to go against you.

So, I think the decision not to vote because your vote is very unlikely to impact the election is based partly on a naive folk theory of "free will." In a more mature view of will and its relation to the universe, the decision to vote or not isn't exactly a "free and independent decision" ... but there is rationality in making the "not quite free or independent decision" to vote.

(Perhaps this is related to the intuition people have when they say things like "If everyone thought that way, then no one would vote." Statements like this may reflect some intuition about what it means to live in a good branch of the multiverse, which however conflicts with modern Western folk psychology intuition about free will.)