Tuesday, December 15, 2009

Dialoguing with the US Military on the Ethics of Battlebots

Today (as a consequence of my role in the IEET), I gave a brief invited talk at the National Defense University, in Washington DC, about the ethics of autonomous robot missiles and war vehicles and "battlebots" (my word, not theirs ;-) in general....

Part of me wanted to bring a guitar and serenade the crowd (consisting of perhaps 50% uniformed officers) with "Give Peace a Chance" by John Lennon and "Masters of War" by Bob Dylan ... but due to the wisdom of my 43 years of age I resisted the urge ;-p

Anyway the world seems very different than it did in the early 1970s when I accompanied my parents on numerous anti-Vietnam-war marches. I remain generally anti-violence and anti-war, but my main political focus now is on encouraging a smooth path toward a positive Singularity. To the extent that military force may be helpful toward achieving this end it has to be considered as a potentially positive thing....

My talk didn't cover any new ground (to me); after some basic transhumanist rhetoric I discussed my notion of different varieties of ethics as corresponding to different types of memory (declarative ethics, sensorimotor ethics, procedural ethics, episodic ethics, etc.), and the need for ethical synergy among different ethics types, in parallel with cognitive synergy among different memory/cognition types. For the low-down on this see a previous blog post on the topic.
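To give a rough flavor of the parallel, here's a minimal toy sketch (purely illustrative -- not code from the talk or from any real system; every function name, field and weight below is a hypothetical placeholder) of how judgments rooted in different memory types might be blended, rather than any one of them being used alone:

```python
# Toy sketch: "ethical synergy" as a blend of judgments from evaluators
# tied to different memory types.  All names and numbers are hypothetical.

def declarative_ethics(situation):
    """Rule-based judgment (declarative memory): explicit codes and doctrines."""
    return 0.2 if situation.get("violates_explicit_rule") else 0.8

def episodic_ethics(situation, past_cases):
    """Judgment by analogy to remembered episodes (episodic memory)."""
    similar = [c for c in past_cases if c["context"] == situation.get("context")]
    if not similar:
        return 0.5  # no relevant experience -> stay neutral
    return sum(c["outcome_goodness"] for c in similar) / len(similar)

def procedural_ethics(situation):
    """Habitual, skill-like response (procedural memory), e.g. trained restraint."""
    return 0.9 if situation.get("trained_response_available") else 0.5

def ethical_synergy(situation, past_cases, weights=(0.4, 0.4, 0.2)):
    """Blend the channels instead of relying on any single ethics type."""
    scores = (
        declarative_ethics(situation),
        episodic_ethics(situation, past_cases),
        procedural_ethics(situation),
    )
    return sum(w * s for w, s in zip(weights, scores))
```

The point of the toy is just that the overall judgment emerges from the interaction of several channels -- not, of course, that ethics reduces to a weighted sum.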

But some of the other talks and lunchroom discussions were interesting to me, as the community of military officers is rather different from the circles I usually mix in...

One of the talks before mine was a prerecorded talk (robo-talk?) on whether it's OK to make robots that decide when/if to kill people, with the basic theme of "It's complicated, but yeah, sometimes it's OK."

(A conclusion I don't particularly disagree with: to my mind, if it's OK for people to kill people in extreme circumstances, it's also OK for people to build robots to kill people in extreme circumstances. The matter is complicated, because human life and society are complicated.)

(As the hero of the great film Kung Pow said, "Killing is bad. Killing is wrong. Killing is badong!" ... but, even Einstein had to recant his radical pacifism in the face of the extraordinary harshness of human reality. Harshness that I hope soon will massively decrease as technology drastically reduces material scarcity and gives us control over our own motivational and emotional systems.)

Another talk argued that "AIs making lethal decisions" should be outlawed by international military convention, much as chemical and biological weapons and eye-blinding lasers are now outlawed.... One of the arguments for this sort of ban was that, without it, one would see an AI-based military arms race.

As I pointed out in my talk, it seems that such a ban would be essentially unenforceable.

For one thing, missiles and tanks and so forth are going to be controlled by automatic systems of one sort or another, and where the "line in the sand" is drawn between lethal decisions and other decisions is not going to be terribly clear. If one bans a robot from making a lethal decision, but allows it to make a decision to go into a situation where making a lethal decision is the only rational choice, then what is one really accomplishing?

For another thing, even if one could figure out where to draw the "line in the sand," how would it possibly be enforced? Adversary nations are not going to open up their robot control hardware and software to each other, to allow checking of what kinds of decisions robots are making on their own without a "human in the loop." It's not an easy thing to check, unlike use of nukes or chemical or biological weapons.

I contended that just as machines will eventually be smarter than humans, if they're built correctly they'll eventually be more ethical than humans -- even according to human ethical standards. But this will require machines that approach ethics from the same multiple perspectives that humans do: not just based on rules and rational evaluation, but based on empathy, on the wisdom of anecdotal history, and so forth.

There was some understandable concern in the crowd that, if the US held back from developing intelligent battlebots, other players might pull ahead in that domain, with potentially dangerous consequences.... With this in mind, there was interest in my report on the enthusiasm, creativity and ample funding of the Chinese AI community these days. I didn't sense much military fear of China itself (China and the US are rather closely economically tied, making military conflict between them unlikely), but there seemed to be some fear of China distributing their advanced AI technology to other parties that might be hostile.

I had an interesting chat with a fighter pilot, who said that there are hundreds of "rules of engagement" to memorize before a flight, and that they change frequently in response to political developments. Since no one can really remember all those rules in real time, there's a lot of intuition involved in making the right choices in practice.

This reminded me of a prior experience making a simulation for a military agency ... the simulated soldiers were supposed to follow numerous rules of military doctrine. But we found that when they did, they didn't act much like real soldiers -- because the real soldiers would deviate from doctrine in contextually appropriate ways.

The pilot drew the conclusion that AIs couldn't make the right judgments because doing so depends on combining and interpreting (he didn't say bending, but I bet it happens too) the rules based on context. But I'm not so sure. For one thing, an AI could remember hundreds of rules and rapidly apply them in a particular situation -- that is, it could do a better job of declarative-memory-based battle ethics than any human. In this context, humans compensate for their poor declarative-memory-based ethics [and in some cases transcend declarative-memory-based ethics altogether] with superior episodic-memory-based ethics (contextually appropriate judgments based on their life experiences and associated intuitions). But, potentially, an AI could combine this kind of experiential judgment with superior declarative ethical capability, thus achieving a better overall ethical functionality....
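To make that argument a bit more concrete, here's a toy sketch of my own (not anything presented at the talk; the rules, fields and thresholds are invented placeholders) of a decision procedure that checks every declarative rule instantly and then tempers the result with episodic, case-based judgment:

```python
# Toy sketch: fast declarative rule-checking combined with episodic,
# case-based judgment.  All rules, fields and thresholds are invented.

RULES_OF_ENGAGEMENT = [
    ("positive_identification", lambda s: s["target_identified"]),
    ("proportionality",         lambda s: s["expected_collateral"] <= s["military_value"]),
    ("no_protected_sites",      lambda s: not s["near_protected_site"]),
    # ... in reality hundreds of rules, all checkable in milliseconds
]

def declarative_check(situation):
    """Apply every explicit rule; return the names of the ones that fail."""
    return [name for name, rule in RULES_OF_ENGAGEMENT if not rule(situation)]

def episodic_check(situation, past_episodes, k=3):
    """Judge by analogy: how did the k most similar remembered cases turn out?"""
    def similarity(episode):
        return -abs(episode["military_value"] - situation["military_value"])
    nearest = sorted(past_episodes, key=similarity, reverse=True)[:k]
    if not nearest:
        return 0.0
    return sum(ep["judged_acceptable"] for ep in nearest) / len(nearest)

def engage_decision(situation, past_episodes):
    """Recommend engagement only if both channels agree."""
    violated = declarative_check(situation)
    experience_score = episodic_check(situation, past_episodes)
    return not violated and experience_score > 0.7
```

Requiring both channels to agree is just one crude way of combining them; the real point is that nothing stops a machine from having both.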

One thing that was clear is that the US military is taking the diverse issues associated with battle AI very seriously ... and soliciting a variety of opinions from those all across the political spectrum ... even including out-there transhumanists like me. This sort of openness to different perspectives is certainly a good sign.

Still, I don't have a great gut feeling about superintelligent battlebots. There are scenarios where they help bring about a peaceful Singularity and promote overall human good ... but there are a lot of other scenarios as well.

My strong hope is that we can create peaceful, benevolent, superhumanly intelligent AGI before smart battlebots become widespread.

My colleagues and I -- among others -- are working on it ;-)

Comments:

slartibartfast said...

Thanks for this very interesting read, Ben. The dichotomy between our aspirations and our genetic heritage could not be more apparent than in the discussions you must have had.

Exocentrick said...

Ben, I really appreciate your blog. I was struck by your willingness to accept violence as a potential necessary step to reach the singularity. After thinking about this, I decided I would express my doubts with humor:

11 Thrilling Action Hero Quotes Delivered by our Robot Overlords

http://exocentrick.blogspot.com/2009/12/11-thrilling-action-hero-quotes.html

Ben Goertzel said...

Exocentrick: I really hope violence is NOT necessary to achieve a positive Singularity. I was raised a pacifist and still have that strong inclination. But, I think the possibility exists and may as well be recognized.

Matt Kruse said...

Really interesting indeed! It's great they were willing to listen to your point of view. BTW, loved the article in h+ magazine!

Unknown said...

"I contended that just as machines will eventually be smarter than humans, if they're built correctly they'll eventually be more ethical than humans -- even according to human ethical standards. But this will require machines that approach ethics from the same multiple perspectives that humans do: not just based on rules and rational evaluation, but based on empathy, on the wisdom of anecdotal history, and so forth."

This is the same line of reasoning I was using in the AGI e-mail list discussion about the box problem. Any AI will have to be more ecologically conscious than any human would be. Thanks for making that point; I think it is important and rarely (if ever) stated.

Unknown said...

Also, I think this goes pretty far toward making my point about the US Military (I said DARPA previously) likely being in the lead of AI development.

FWIW I am a military intelligence officer so there is some crossover.

Ben Goertzel said...

Andrew: At the moment I am not particularly optimistic that the US military will fund the innovations that fuel the AGI revolution. It seems more likely that once the breakthroughs are made with other funding, the US military will jump on board at that point and fund the approach that has already proven itself. The politics of DARPA and other US military AI funding is quite complex -- but the end result of the complexity is that the research funding tends NOT to go to a broad diversity of AGI research, being concentrated instead on a small subset of AGI approaches. So if the particular approaches that DARPA and other agencies habitually fund prove successful then yeah, they will be the ones to fund the AGI revolution. But if the secret sauce actually lies in some other sort of approach (as I suspect) then they probably will not.

BTW I have personally had more success getting US gov't funding for bioinformatics and for NLP work than for AGI work ... because my own AGI research lies outside the scope of the paradigms that DARPA and other AI funding agencies tend to favor.

It seems to me that a lot of people in a lot of government agencies feel the need for profound AGI innovation ... yet the AI research funding agencies are not really funding that innovation in a terribly effective way. As often occurs with the government [or any large organization], the left hand may not be entirely clear on what the right hand is doing ;-) ... even though there are a lot of intelligent and well-meaning people involved, and also a lot of resources...

So, it's complex...
