Wednesday, September 09, 2009

AGI, Ethics, Cognitive Synergy and Ethical Synergy (from my Yale talk...)

Earlier this year I gave a talk at Yale University titled "Ethical Issues Related to Advanced Artificial General Intelligence (A Few Small Worries)" ...

It was a talk focused on verbal discussion rather than on PowerPoint, but I did show a few slides (though I mostly ignored them during the talk); anyway, the brief, ugly slideshow is here for your amusement...

The most innovative point made during the talk was a connection between the multiple types of memory and multiple types of ethical knowledge and understanding.

I showed this diagram of different types of memory and the cognitive processes associated with them (click the picture to see a bigger, more legible version),

and then I showed this diagram

which associates different types of ethical intuition with different types of memory.

To wit:
  • Episodic memory corresponds to the process of ethically assessing a situation based on similar prior situations
  • Sensorimotor memory corresponds to "mirror neuron" type ethics, where you feel another person's feelings via mirroring their physiological emotional responses and actions
  • Declarative memory corresponds to rational ethical judgment
  • Procedural memory corresponds to "ethical habit" ... learning by imitation and reinforcement to do what is right, even when the reasons aren't well articulated or understood
  • Attentional memory corresponds to the existence of appropriate patterns guiding one to pay adequate attention to ethical considerations at appropriate times
I presented the concept that an ethically mature person should balance all these kinds of ethics.
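The five-way mapping above can be sketched in code. This is purely my own toy illustration (the names, scores, and averaging rule are assumptions, not anything from the talk): each memory subsystem is imagined as producing its own ethical judgment of a situation, and a "mature" agent is one that balances all five rather than leaning on any single mode.

```python
# Toy sketch (an illustration of the taxonomy, not code from the talk):
# the five memory types and their associated modes of ethical processing,
# plus a naive "mature agent" that balances judgments from all of them.

ETHICAL_MODES = {
    "episodic": "assess by analogy to similar prior situations",
    "sensorimotor": "mirror-neuron empathy: feel others' responses",
    "declarative": "rational judgment from explicit ethical principles",
    "procedural": "ethical habit learned via imitation and reinforcement",
    "attentional": "patterns directing attention to ethical considerations",
}

def mature_judgment(subsystem_scores):
    """Balance the ethical judgments (each a float in [-1, 1]) of all five
    subsystems; maturity here is modeled as requiring input from each one."""
    missing = set(ETHICAL_MODES) - set(subsystem_scores)
    if missing:
        raise ValueError(f"immature agent: no judgment from {sorted(missing)}")
    # Equal weighting is the simplest possible "deep integration".
    return sum(subsystem_scores.values()) / len(subsystem_scores)

scores = {"episodic": 0.6, "sensorimotor": 0.8, "declarative": -0.2,
          "procedural": 0.4, "attentional": 0.5}
print(mature_judgment(scores))  # averages all five modes
```

A real integration would of course be far richer than an equal-weighted average, and would let the modes inform and correct one another; the sketch only makes the structural point that no single memory system suffices.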

This notion ties in with a paper that Stephan Bugaj and I delivered at AGI-08, called Stages of Ethical Development in Artificial General Intelligence Systems. In this paper we discussed, among other topics, Kohlberg's theory of logical ethical judgment and Gilligan's theory of empathic ethical judgment. In the present terms, I'd say Kohlberg's theory is declarative-memory focused whereas Gilligan's theory is focused on episodic and sensorimotor memory. We concluded there that to pass to the "mature" stage of ethical development, a deep and rich integration of the logical and empathic approaches to ethics is required.

The present ideas suggest a modification to this idea: to pass to the mature stage of ethical development, a deep and rich integration of the ethical approaches associated with the five main types of memory systems is required.


Thom Blake said...

Thanks for posting all of this together. I found your yale talk illuminating, and I'll surely be referencing some of this work in my dissertation.

Particleion said...

How to validate the ethicalness of an advanced AGI system … won’t it “game” the tests?

The best thing I can think of for this would be to place it in a virtual simulation and closely monitor it, never letting it know when it's being tested.

How much can humanity change and still remain humanity?

I have been thinking long and hard about this, and believe the answer will come in two parts. There may exist a definite genetic line in terms of how many genes can be altered at any given time, though I doubt it.

The second part of this answer will be a philosophical one, about the nature of the human mind.

Is it possible to be human without a body?

Answer that and we may be half there.

Growth = more and more patterns? Should there be an earthly cap, to assure it doesn't use up all the matter on earth to continue growth? This cap could be lifted after it is launched into space.

Will advanced AGIs be conscious? This will be the most difficult question to answer on your entire list... How can we test this? Asking the machine any sort of question might as well be reduced to asking "are you conscious?" That in itself is silly, but so is asking anything else, I think.

Maybe the answer lies, again, in observing it for complex pattern recognition.

What's the top-level goal?
As dangerous as this may sound, the top-level goal may best be left open, with lower-level goals set. For instance:

"AGI, set your top-level goal to align with the top-level goals of all sentient life." This way the top-level goal can change, but will be brought back into sync with the lower-level goals...

But in all reality, a truly self-modifying AGI should be able to rewrite all of this. At that point all bets are off.

At best you should set up a goal-monitoring safeguard: a narrow AI that simply and continuously checks the AGI's top-level goal for anything dangerous and serves as an alert system.

Miguel Antonio said...

Although I admire Ben's work on making friendlier AGI systems, we don't know how an objective ethical framework will behave in an actual AGI scenario. It could become a subjective experience.

Anti Money Laundering said...

Ethics is not definable and not implementable, because it is not purely conscious; it involves not only our thinking, but also our feeling.
