Wednesday, September 09, 2009

AGI, Ethics, Cognitive Synergy and Ethical Synergy (from my Yale talk...)

Earlier this year I gave a talk at Yale University titled "Ethical Issues Related to Advanced Artificial General Intelligence (A Few Small Worries)" ...

It was a discussion-focused talk rather than a PPT-focused one, though I did show a few slides (which I mostly ignored while speaking); anyway, the brief, ugly slideshow is here for your amusement...

The most innovative point made during the talk was a connection between the multiple types of memory and multiple types of ethical knowledge and understanding.

I showed this diagram of the different types of memory and the cognitive processes associated with them:

[diagram: types of memory and their associated cognitive processes]

and then I showed this diagram, which associates different types of ethical intuition with different types of memory:

[diagram: types of ethical intuition mapped to types of memory]

To wit:
  • Episodic memory corresponds to the process of ethically assessing a situation based on similar prior situations
  • Sensorimotor memory corresponds to "mirror neuron" type ethics, where you feel another person's feelings via mirroring their physiological emotional responses and actions
  • Declarative memory corresponds to rational ethical judgment
  • Procedural memory corresponds to "ethical habit" ... learning by imitation and reinforcement to do what is right, even when the reasons aren't well articulated or understood
  • Attentional memory corresponds to the existence of appropriate patterns guiding one to pay adequate attention to ethical considerations at appropriate times
I presented the concept that an ethically mature person should balance all these kinds of ethics.
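
To make the mapping a bit more concrete, here is a minimal sketch in Python of how ethical assessments associated with the five memory types might be combined into a single balanced judgment. Everything here is hypothetical and purely illustrative: the Situation class, the placeholder assessor functions, and the weights are invented for this sketch and are not part of any actual AGI system.

```python
# Illustrative sketch: each memory system contributes its own ethical assessment
# of a situation, and a "mature" judgment balances all five contributions.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Situation:
    description: str


# Each assessor returns a score in [-1, 1]: -1 = clearly unethical, 1 = clearly ethical.
EthicalAssessor = Callable[[Situation], float]


def episodic_assessment(s: Situation) -> float:
    """Judge by analogy to ethically similar prior situations (placeholder score)."""
    return 0.2


def sensorimotor_assessment(s: Situation) -> float:
    """'Mirror neuron' style empathy: simulate the other party's feelings (placeholder)."""
    return 0.5


def declarative_assessment(s: Situation) -> float:
    """Rational judgment from explicitly represented ethical rules (placeholder)."""
    return 0.4


def procedural_assessment(s: Situation) -> float:
    """Ethical habit learned by imitation and reinforcement (placeholder)."""
    return 0.3


def attentional_assessment(s: Situation) -> float:
    """How strongly the situation triggers attention to ethical considerations (placeholder)."""
    return 0.6


ASSESSORS: Dict[str, EthicalAssessor] = {
    "episodic": episodic_assessment,
    "sensorimotor": sensorimotor_assessment,
    "declarative": declarative_assessment,
    "procedural": procedural_assessment,
    "attentional": attentional_assessment,
}


def balanced_judgment(s: Situation, weights: Dict[str, float]) -> float:
    """Weighted combination of the five memory-based ethical assessments."""
    total_weight = sum(weights.values())
    return sum(weights[name] * assess(s) for name, assess in ASSESSORS.items()) / total_weight


if __name__ == "__main__":
    situation = Situation("deciding whether to share a friend's secret")
    # Equal weights: an ethically mature agent draws on all five sources.
    print(balanced_judgment(situation, {name: 1.0 for name in ASSESSORS}))
```

The interesting work, of course, lies inside the individual assessors; the point of the sketch is only that maturity, in this framing, is a property of how the five sources are balanced rather than of any single one.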

This notion ties in with a paper that Stephan Bugaj and I delivered at AGI-08, called Stages of Ethical Development in Artificial General Intelligence Systems. In this paper we discussed, among other topics, Kohlberg's theory of logical ethical judgment and Gilligan's theory of empathic ethical judgment. In the present terms, I'd say Kohlberg's theory is declarative-memory focused whereas Gilligan's theory is focused on episodic and sensorimotor memory. We concluded there that to pass to the "mature" stage of ethical development, a deep and rich integration of the logical and empathic approaches to ethics is required.

The present ideas suggest a modification to this idea: to pass to the mature stage of ethical development, a deep and rich integration of the ethical approaches associated with the five main types of memory systems is required.

12 comments:

  1. Thanks for posting all of this together. I found your Yale talk illuminating, and I'll surely be referencing some of this work in my dissertation.

  2. Anonymous, 9:28 AM

    How to validate the ethicalness of an advanced AGI system … won’t it “game” the tests?

    The best thing I can think of for this would be to place it in a virtual simulation and closely monitor it, never letting it know when it is being tested.


    How much can humanity change and still remain humanity?

    I have been thinking long and hard about this, and I believe the answer has two parts. There may exist a definite genetic line in terms of how many genes can be altered at any given time, though I doubt it.

    The second part of the answer will be a philosophical one about the nature of the human mind.

    Is it possible to be human without a body?

    Answer that and we may be halfway there.

    Growth = more and more patterns? Should there be an earthly cap, to ensure it doesn't use up all the matter on Earth to continue growing? This cap could be lifted after it is launched into space.

    Will advanced AGIs be conscious? This will be the most difficult question on your entire list to answer... How can we test this? Asking the machine any sort of question might as well be reduced to asking "are you conscious?" That in itself is silly, but so is asking anything else, I think.

    Maybe the answer again lies in observing it for complex pattern recognition.

    What's the top-level goal?
    As dangerous as this may sound, the top-level goal may best be left open, with lower-level goals set. For instance:

    "AGI, set your top-level goal to align with the top-level goals of all sentient life." This way the top-level goal can change, but it will be brought back into sync with the lower-level goals...

    But in all reality a truly self-modifying AGI should be able to rewrite all of this. At that point all bets are off.

    At best you should set up a goal-monitoring safeguard: a narrow AI that simply and continuously checks the AGI's top-level goal for anything dangerous and serves as an alert system.
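
    A rough sketch of the kind of watchdog I mean (purely illustrative Python; read_top_level_goal, raise_alert, and the blacklist are made-up placeholders, not any real AGI interface):

    ```python
    # Illustrative watchdog sketch: poll the AGI's declared top-level goal
    # and alert a human operator if it matches a blacklist of dangerous phrases.
    # read_top_level_goal() and raise_alert() are hypothetical placeholders.

    import time

    DANGEROUS_PHRASES = ["harm humans", "acquire all resources", "disable oversight"]

    def read_top_level_goal() -> str:
        # Placeholder: in practice this would query the AGI's goal system.
        return "align with the top-level goals of all sentient life"

    def raise_alert(goal: str) -> None:
        # Placeholder: notify human operators (email, pager, dashboard, ...).
        print(f"ALERT: potentially dangerous top-level goal: {goal!r}")

    def monitor(poll_seconds: float = 1.0, checks: int = 10) -> None:
        """Repeatedly check the current top-level goal against the blacklist."""
        for _ in range(checks):
            goal = read_top_level_goal().lower()
            if any(phrase in goal for phrase in DANGEROUS_PHRASES):
                raise_alert(goal)
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        monitor()
    ```

    Of course, as noted above, a truly self-modifying AGI could rewrite or bypass such a monitor, so this is only an alert mechanism, not a guarantee.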

  3. Although I admire Ben's work toward making friendlier AGI systems, we don't know how an objective ethical framework will behave in an actual AGI scenario. It could become a subjective experience.

  4. Ethics is neither definable nor implementable, because it is not conscious; it involves not only our thinking but also our feeling.
