Tuesday, January 27, 2015

Why I Signed Tegmark's Open Letter on AI Safety


A bunch of folks have messaged me recently, asking why the heck I signed the Open Letter on AI safety proposed by Max Tegmark and his chums at the newly formed Future of Life Institute (in full, the letter is called Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter).   It is true that Tegmark's new organization is at the moment mainly funded by the great inventor and entrepreneur Elon Musk, whose recent statements about AI have, er, not pleased me especially.

Reading through the brief text of the Open Letter, I did find one thing I don't agree with.   The letter contains the phrase "...our AI systems must do what we want them to do."   Well, that is just not going to be possible, obviously.   It may happen for a while, but once we have AGI systems that are massively smarter than people, they are going to do what they want to do, or what other powerful forces in the universe want them to do, but not necessarily what we want them to do.

Our best hope of making AI systems do what we want them to do will be to become one with these AI systems via brain-computer interfacing, mind uploading and the like.   But in this case, the "we" who is guiding the AIs (i.e. the AIs that are the future "us") will not be the same human "we" that we are now -- this future "we", descended from our current human selves, may have quite different goals, motivations, and ways of thinking.

But in the Open Letter, this phrase about controlling AIs, which doesn't make sense to me, is embedded in the following paragraph:

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations ...  constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. 

and I do approve of the overall gist here.  I think we should do research aimed at maximizing the odds that increasingly capable AI systems are robust and beneficial.   This seems a no-brainer, right?

Looking at the associated document outlining suggested research directions, I found myself agreeing that these would all be worthwhile things to study.   For instance, in the computer science section of the document, they advocate study of:

  1. Verification: how to prove that a system satisfies certain desired formal properties. ("Did I build the system right?")
  2. Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences. ("Did I build the right system?")
  3. Security: how to prevent intentional manipulation by unauthorized parties.
  4. Control: how to enable meaningful human control over an AI system after it begins to operate.
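To make the first item concrete: for a small, finite system, "verification" can literally mean exhaustively checking every reachable state against a formal property.  Here is a minimal Python sketch of that idea -- the toy controller, line world and "cliff" property are hypothetical inventions of mine for illustration, not anything from the FLI document, and real verification research of course targets systems far too large for brute force:

    # Toy "verification" sketch (hypothetical example, not from the FLI document).
    # We exhaustively prove that a tiny controller on a 10-cell line world
    # never drives the system into an unsafe state.

    STATES = range(10)   # positions 0..9 on a line
    UNSAFE = {9}         # position 9 is a "cliff" we must never reach

    def controller(state):
        # Toy policy: always move right, unless that would step onto the cliff.
        return 1 if state + 1 not in UNSAFE else -1

    def step(state, action):
        # World dynamics: movement clamped to the 0..9 line.
        return max(0, min(9, state + action))

    def verify_never_unsafe(max_steps=100):
        # The system is deterministic and finite, so checking every safe start
        # state for max_steps steps (far more than the 10 states it could ever
        # visit) constitutes a brute-force proof of the property
        # "the controller never enters an unsafe state".
        for start in STATES:
            if start in UNSAFE:
                continue
            state = start
            for _ in range(max_steps):
                state = step(state, controller(state))
                assert state not in UNSAFE, f"property violated from start {start}"
        return True

    print("Property holds:", verify_never_unsafe())

The gap between this sort of proof and item 2, validity, is exactly that the formal property one verifies ("never reach cell 9") may not capture what one actually wanted the system to do.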

I think all these kinds of research are valuable and worth pursuing right now.

I definitely do NOT think that these kinds of research should take priority over research on how to make thinking machines more and more generally intelligent.   But that is not what the Open Letter advocates.   It just suggests that these research areas should get MORE attention than they currently do.

It may be that some of the signatories of the Open Letter actually ARE in favor of stopping or slowing AGI R&D while work on safety-oriented topics proceeds.   But, for example, Demis Hassabis of Google DeepMind signed the letter, and I know he is trying pretty hard to push toward creating AGI, with his team at Google in the UK.  Ditto for Itamar Arel, and plenty of other signatories.

One could argue about whether it makes sense to divert resources from building AGI toward research on AGI safety at this stage.  But to my mind, this would be a pointless argument.   AGI research currently receives a minimal percentage of the world's resources, so if more funding is to be put into AGI safety, it doesn't have to come out of the pot of AGI R&D funding.   As an example, Musk put US$10M into Tegmark's Future of Life Institute -- but it's not as though, if he hadn't done that, he would have put the same money into AGI R&D.

Do I think that $10M would have been better spent on OpenCog AGI R&D?   Well, absolutely.   But that wasn't the issue.    There is a huge amount of wealth in the world, and very little of it goes to AGI or other directly Singularity-oriented tech.   Rather than fighting over the scraps of resources that currently go to Singularity-oriented tech, it's more useful IMO to focus on expanding the percentage of the world's resources that go into Singularity-oriented development as a whole.

In short, I considered not signing the document because of the one phrase I disagreed with, as mentioned above.  But eventually I decided this is not a legal contract where every phrase has to be tuned to avoid having loopholes that could come back to bite me; rather, it's just an expression of common feeling and intent.   I signed the document because I wanted to signal to Max Tegmark and his colleagues that I am in favor of research aimed at figuring out how to maximize the odds that AGI systems are robust and beneficial.   This kind of research has the potential to be worthwhile in making the phase "from here to Singularity" go more smoothly -- even though in the end we're obviously not going to be able to "make AGI systems do what we want them to do" ... except potentially by morphing what "we" are so that there is no boundary between the AGIs and us anymore.
