Tuesday, December 15, 2009

Dialoguing with the US Military on the Ethics of Battlebots

Today (as a consequence of my role in the IEET), I gave a brief invited talk at the National Defense University, in Washington DC, about the ethics of autonomous robot missiles and war vehicles and "battlebots" (my word, not theirs ;-) in general....

Part of me wanted to bring a guitar and serenade the crowd (consisting of perhaps 50% uniformed officers) with "Give Peace a Chance" by John Lennon and "Masters of War" by Bob Dylan ... but due to the wisdom of my 43 years of age I resisted the urge ;-p

Anyway the world seems very different than it did in the early 1970s when I accompanied my parents on numerous anti-Vietnam-war marches. I remain generally anti-violence and anti-war, but my main political focus now is on encouraging a smooth path toward a positive Singularity. To the extent that military force may be helpful toward achieving this end, it has to be considered a potentially positive thing....

My talk didn't cover any new ground (to me); after some basic transhumanist rhetoric I discussed my notion of different varieties of ethics as corresponding to different types of memory (declarative ethics, sensorimotor ethics, procedural ethics, episodic ethics, etc.), and the need for ethical synergy among different ethics types, in parallel with cognitive synergy among different memory/cognition types. For the low-down on this see a previous blog post on the topic.

But some of the other talks and lunchroom discussions were interesting to me, as the community of military officers is rather different from the circles I usually mix in...

One of the talks before mine was a prerecorded talk (robo-talk?) on whether it's OK to make robots that decide when/if to kill people, with the basic theme of "It's complicated, but yeah, sometimes it's OK."

(A conclusion I don't particularly disagree with: to my mind, if it's OK for people to kill people in extreme circumstances, it's also OK for people to build robots to kill people in extreme circumstances. The matter is complicated, because human life and society are complicated.)

(As the hero of the great film Kung Pow said, "Killing is bad. Killing is wrong. Killing is badong!" ... but, even Einstein had to recant his radical pacifism in the face of the extraordinary harshness of human reality. Harshness that I hope soon will massively decrease as technology drastically reduces material scarcity and gives us control over our own motivational and emotional systems.)

Another talk argued that "AIs making lethal decisions" should be outlawed by international military convention, much as chemical and biological weapons and eye-blinding lasers are now outlawed.... One of the arguments for this sort of ban was that, without it, one would see an AI-based military arms race.

As I pointed out in my talk, it seems that such a ban would be essentially unenforceable.

For one thing, missiles and tanks and so forth are going to be controlled by automatic systems of one sort or another, and where the "line in the sand" is drawn between lethal decisions and other decisions is not going to be terribly clear. If one bans a robot from making a lethal decision, but allows it to make a decision to go into a situation where making a lethal decision is the only rational choice, then what is one really accomplishing?

For another thing, even if one could figure out where to draw the "line in the sand," how would it possibly be enforced? Adversary nations are not going to open up their robot control hardware and software to each other, to allow checking of what kinds of decisions robots are making on their own without a "human in the loop." It's not an easy thing to check, unlike use of nukes or chemical or biological weapons.

I contended that just as machines will eventually be smarter than humans, if they're built correctly they'll eventually be more ethical than humans -- even according to human ethical standards. But this will require machines that approach ethics from the same multiple perspectives that humans do: not just based on rules and rational evaluation, but based on empathy, on the wisdom of anecdotal history, and so forth.

There was some understandable concern in the crowd that, if the US held back from developing intelligent battlebots, other players might pull ahead in that domain, with potentially dangerous consequences.... With this in mind, there was interest in my report on the enthusiasm, creativity and ample funding of the Chinese AI community these days. I didn't sense much military fear of China itself (China and the US are rather closely economically tied, making military conflict between them unlikely), but there seemed to be some fear of China distributing its advanced AI technology to other parties that might be hostile.

I had an interesting chat with a fighter pilot, who said that there are hundreds of "rules of engagement" to memorize before a flight, and they change frequently based on political changes. Since no one can really remember all those rules in real-time, there's a lot of intuition involved in making the right choices in practice.

This reminded me of a prior experience making a simulation for a military agency ... the simulated soldiers were supposed to follow numerous rules of military doctrine. But we found that when they did, they didn't act much like real soldiers -- because the real soldiers would deviate from doctrine in contextually appropriate ways.

The pilot drew the conclusion that AIs couldn't make the right judgments, because doing so depends on combining and interpreting (he didn't say bending, but I bet it happens too) the rules based on context. But I'm not so sure. For one thing, an AI could remember hundreds of rules and rapidly apply them in a particular situation -- that is, it could do a better job of declarative-memory-based battle ethics than any human. In this context, humans compensate for their poor declarative-memory-based ethics [and in some cases transcend declarative-memory-based ethics altogether] with superior episodic-memory-based ethics (contextually appropriate judgments based on their life experiences and associated intuitions). But, potentially, an AI could combine this kind of experiential judgment with superior declarative ethical capability, thus achieving better overall ethical functionality....

One thing that was clear is that the US military is taking the diverse issues associated with battle AI very seriously ... and soliciting a variety of opinions from those all across the political spectrum ... even including out-there transhumanists like me. This sort of openness to different perspectives is certainly a good sign.

Still, I don't have a great gut feeling about superintelligent battlebots. There are scenarios where they help bring about a peaceful Singularity and promote overall human good ... but there are a lot of other scenarios as well.

My strong hope is that we can create peaceful, benevolent, superhumanly intelligent AGI before smart battlebots become widespread.

My colleagues and I -- among others -- are working on it ;-)

Wednesday, December 02, 2009

100 neural net cycles to produce consciousness?

This interesting article presents data indicating that it takes around half a second for an unconscious visual percept to become conscious (in the human brain)...

This matches well with Libet's result that there is a half-second lag between unconsciously initiating an action and consciously knowing you're initiating an action...

(Of course, what is meant by "consciousness" here is "consciousness of the reflective, language-friendly portion of the human mind" -- but I don't want to digress onto the philosophy of consciousness just now; that's not the point of this post ... I've done that in N prior blog posts ;-)

My Chinese collaborator ChenShuo pointed out that, combined with information about the timing of neural firing, this lets us estimate how much neural processing is needed to produce conscious perception.

As I recall, the firing of a single neuron's action potential takes around 5 milliseconds ... It takes maybe another 10-20 milliseconds after that for the neuron to be able to fire again (that's the "refractory period") .... Those numbers are not exact but I'm pretty sure they're the right order of magnitude...

So, dividing the roughly 500 milliseconds by the roughly 5 milliseconds per firing, the very rough estimate is 100 cycles in the neural net before consciousness, it would seem ;)
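(Just to make the arithmetic explicit, here's the back-of-envelope calculation as a tiny Python sketch. The numbers are only the rough figures quoted above; if you count the refractory period as part of each cycle, the estimate drops to a few tens of cycles, but the order of magnitude is similar.)

```python
# Back-of-envelope estimate of neural-net cycles before conscious perception.
# All numbers are the rough values quoted above, not precise measurements.

lag_to_consciousness_ms = 500      # ~half a second for a percept to become conscious
spike_duration_ms = 5              # rough duration of one action potential
refractory_range_ms = (10, 20)     # rough refractory period after each firing

# Counting only the spike itself: 500 / 5 = 100 cycles
cycles_spike_only = lag_to_consciousness_ms / spike_duration_ms

# Counting spike + refractory period per cycle: roughly 20-33 cycles
cycles_with_refractory = [
    lag_to_consciousness_ms / (spike_duration_ms + r) for r in refractory_range_ms
]

print(f"~{cycles_spike_only:.0f} cycles, ignoring the refractory period")
print(f"~{min(cycles_with_refractory):.0f}-{max(cycles_with_refractory):.0f} cycles, including it")
```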

This fits with the view of consciousness in terms of strange attractors ... 100 cycles is often enough time for a recurrent net to converge into an attractor basin ...
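(As a toy illustration of "converging into an attractor basin" -- my own sketch, not anything from the article -- here's a minimal Hopfield-style network in Python: it stores one pattern, starts from a corrupted copy of it, and settles back into the stored attractor in far fewer than 100 synchronous update cycles.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one random +/-1 pattern in a small Hopfield-style network
n = 100
pattern = rng.choice([-1, 1], size=n)
W = np.outer(pattern, pattern) / n
np.fill_diagonal(W, 0)               # no self-connections

# Start from a corrupted copy of the pattern (30% of bits flipped)
state = pattern.copy()
flip = rng.choice(n, size=30, replace=False)
state[flip] *= -1

# Synchronous updates until the state stops changing (or 100 cycles pass)
for cycle in range(1, 101):
    new_state = np.sign(W @ state)
    new_state[new_state == 0] = 1    # break ties toward +1
    if np.array_equal(new_state, state):
        break
    state = new_state

print(f"settled after ~{cycle} cycles; recovered stored pattern: {np.array_equal(state, pattern)}")
```

(Of course, a toy net with one stored pattern converges almost immediately; the point is just that attractor convergence in a recurrent net fits comfortably inside a ~100-cycle budget.)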

But of course the dynamics during those ~100 cycles are the more interesting story, and they're still obscure....

Is it really an attractor we have here, or "just" a nicely patterned transient? A terminal attractor a la Mikhail Zak's work, perhaps? Etc.

Enquiring minds want to know! (TM)

Monday, November 16, 2009

Dream of the Multiversal Cylinder

(I usually reserve this blog for speculations on intellectual topics, but last night I had a dream that seemed sufficiently interesting to post here. So, here goes ;-) ....

In this dream, I moved to a strange foreign nation, and met a beautiful girl there whose ex-boyfriend was making her life very difficult, yet who she was still somehow attached to....

His martial arts expertise alarmed me, and so together with the mother of a friend who lived in this same strange place -- a very short, hunchbacked old lady who walked with a cane and wore a funny straw hat -- I went to a weird old-fashioned section of the city, where we did two things.

First, we paid some old white-bearded "witch doctor" to cast a magical spell on the ex-boyfriend, which caused him to forget having ever known the girl, haha.

Then, we went to a strange store full of ancient relics, and bought this cylindrical wooden container, which I was supposed to keep in my bedroom for good luck, but not to open.

The girl and I walked along the beach and the ex-boyfriend walked right past and showed no sign of recognizing her. This freaked her out a bit, and she asked me to have the spell undone on Dec. 21 2012.

Then I went back to my house, which I suddenly shared with the girl, and of course I had to open the wooden cylinder. She kept telling me not to, but I had to anyway. I opened one end of it, prying it open with a screwdriver, and inside the small cylinder was an infinite space -- a whole multiverse of possibilities.

She just kept staring inside it, looking intent but not saying anything. I asked if she wanted me to close it; but she shook her head no. There were millions of these little intelligent creatures in there, which could see our (and everything's) past and future.... Clearly she was absorbing a lot of knowledge from them ... and so was I ... but it was also clear that we were absorbing somewhat different things.

Then, we looked at each other and, without words, asked each other if we should dive into one of those universes or stay in this one. It was clear that in those universes we could still exist as individuals (and could still be with each other); but would exist in radically different form (some form not constrained by time, though there were other constraints not comprehensible in human terms).

Gradually, we collectively realized that we did not feel like entering that other multiverse at that particular time.

Then, she gave me a look that meant something like: "I will never be afraid of anything relating to human society anymore, nor be afraid of my own emotions, because I can see that this whole world of you and me and humanity and Earth is just a sort of artistic construction, which exists for aesthetic purposes. We have chosen to remain in this universe so as to remain part of this artwork ... "

... and then her unspoken thought faded out before it was done, because someone was in the house walking around and we got distracted by wondering who it was...

... and then I woke up because of the noise of my dad walking around downstairs in my house (he was visiting last night)

... and I tried to fall back asleep so as to re-enter the dream, but failed ...