A recent article by Bill Hibbard, me, and other colleagues on the value of openness in
AI development (linked to a petition in favor of transparent AI development and deployment) has, predictably, drawn a bunch of comments from
people alarmed at the potential riskiness of putting AGI designs and
code in the hands of the general public.
Another complaint
from many commenters has been that there is so much profit to be made in AGI
that big companies will inevitably dominate it, buying out any small
actors who make significant innovations.
The second complaint
has a more obvious counter-argument – simply that not everyone can
be bought by big companies, because either:
- they are already rich due to non-AI stuff, like the folks behind OpenAI, or
- they are “crazy / idealistic”, like those of us behind OpenCog
So if some folks who
cannot be bought make enough progress to seed a transparent and open
source AGI movement that takes off fast enough, it may beat the big
tech companies. Linux provides an inexact but meaningful analogue;
in some important ways it's beating the big companies at OS
development. And many of the individuals behind Linux are too
crazy/idealistic to be bought by big companies.
The counterargument
to the first complaint is slightly subtler, but has some important
ingredients in common with the “seed a movement” counterargument
to the second complaint. Saying “it's dangerous to give AGI to
bad guys” overlooks the complexity of the likely processes
underlying the development of AGI, and the relation of these
processes to the nature of the AGI itself and the social networks in which emerging AGIs are embedded.
In this blog post I
will explore some of this complexity. This is somewhat of a
preliminary document and these ideas will likely get presented in a
better organized and more systematic fashion later, somewhere or other....
First: it is true
that, given any specific AGI system, it MIGHT be safer to keep that
particular AGI system in the hands of a specific chosen group of
beneficial actors, than to spread knowledge of that AGI system
widely. Whether this is actually true in a particular instance
depends on various factors including:
- how beneficial that chosen group actually is
- the number and nature of “smart but malevolent or irresponsible” other parties on the scene
- how many resources the AGI requires to run effectively
- whether the rest of the world will get annoyed and start a fight about having the secrets of AGI kept from it (and then who is likely to win that fight)
- etc.
However, looking at
it this way overlooks many things, including the dependency between
two factors:
- the degree of openness with which an AGI system is developed
- the various properties (e.g. robustness and general beneficial-ness) of that AGI system
One could argue that
the RIGHT small, elite, closed group is more likely to develop a
robust and beneficial AGI system than a large, distributed, diverse
motley crew of participants (such as tends to emerge in a successful
OSS project community). On the other hand, this argument seems very
weak, because in an open source setting, there is still the
opportunity for an altruistic, smart, elite group to fork existing
codebases and create their own separate version, leveraging the work
done by others but perfecting it according to their own ideas and
aesthetics. One could argue that closed-source provides more
incentives for smart, elite groups to participate in projects. But
again this is weak, given the evidence of well-funded OSS AI
initiatives like OpenAI, and given the high level of technical
strength of many individuals in the broader OSS community. There are
more big proprietary projects than big OSS projects out there. But
the big OSS projects are generally very high in quality.
Commercial projects
have historically done better at user interface refinement than OSS
projects. But creating AGI is arguably more like building an OS
kernel or a machine learning library than like building a user
interface – i.e. it's complex and technically subtle, requiring
deep expertise to make real progress. This is the kind of thing the
OSS world has been good at. (Indeed, similar to user interfaces, we
need AGIs to respond to the various peculiarities of human nature.
But we need the AGI to learn this kind of response; we don't want to
code the particulars of human nature into the AGI one by one. And
this kind of learning appears to require algorithms that are
highly technical and tricky, given the current state of science.)
My own feeling is
that an open and transparent modality is much more likely to lead to
a robust and beneficial AGI. This is because there will be such a
diverse group of smart people working on it. And because the group
of people working on it will not be biased by having specific
commercial goals. Even when the business goals underlying a
commercial AGI system are neither especially nefarious nor contradictory
to the goal of making broadly beneficial AGI – nevertheless, the
existence of specific commercial goals will bias the thinking of the
people involved in a certain direction, leading them to overlook
certain promising directions and also certain risks.
As Bill Hibbard
points out in his recent follow-up article, the “is open versus
closed AGI better” debate ties in closely with differing ideas
about AGI design and the likely process via which AGI will develop.
If one believes there is going to be a relatively small/simple AGI
design, which will give anyone who “knows the trick” dramatically
superior performance to anything that came before it – then there
is a reasonable argument for keeping this trick secret, assuming one
trusts the group that holds the secret. If one believes that the
first powerful AGI is likely to be more complex and heterogeneous,
emergent from the combination of a large number of different software
components carrying out different functions in different ways, then
there is less argument for keeping such systems secret as they
develop.
For one thing, in
the latter “big emergent combination” scenario, secrets about AGI
design will not likely be well kept anyway. Big tech companies are
far outpacing top-secret underground government labs in AI
development, and this trend seems likely to continue; but big
companies have ongoing employee turnover and tend not to be extremely
good at keeping secrets for long periods of time. If AGI is going to
emerge via a pathway that requires years of effort and incremental
improvements, then the in-process AGI system is bound to leak out
even if it's developed in a big-company lab. (Whether a top-secret
government lab would be any better at keeping a complex AGI design
secret is a different question. I doubt it; there are plenty of
spies and double agents about....)
For another thing, a
complex, heterogeneous system is exactly the sort of thing that a
large, diverse community has a lot to contribute to. Parts of such
a system that are not especially critical to any company's business
model can nonetheless get loving care from some brilliant, focused
academic or hacker community.
In principle, of
course, if a company or government were rich enough and ambitious
enough, they could buy an almost arbitrarily diverse development community.
Taking this to the ultimate extent, one has a fascist-type model
where some company or government agency rigidly controls everyone in
the world -- a Google-archic government, or whatever. But in practice any company or government agency seems
to only be able to acquire relatively limited resources, not
sufficient to enable them to fund top-notch teams working on
peripheral aspects of their AI projects.
So given all the
above, it seems we may well have a choice between
- a worse (less robust, less richly and generally intelligent) AGI that is created and owned by some closed group
- a better (more robust, more richly and generally intelligent) AGI that is created and deployed in an open way, and not owned by anyone
Given this kind of
choice, the risk of a nasty group of actors doing something bad with
an AGI would not be the decisive point. Rather, we need to look at
multiple possibilities, such as:
1. the odds of a nasty group getting ahold of an AGI and things going awry
2. the odds of a group with broadly beneficial goals in mind getting ahold of an AGI and things going awry
3. the odds of a group with non-malevolent but relatively narrow (e.g. commercial or national-security) goals getting ahold of an AGI and things going awry
On the face of it,
Problem 1 would seem more likely to occur with the open approach.
But if openness tends to lead to more robust and beneficial AGIs (as
I strongly suspect is the case) then Problems 2 and especially 3 are
more likely to occur in the closed case.
One must bear in
mind also that there are many ways for things to go awry. For
instance, things can go awry due to what an AGI directly does when
deployed in the world. Or, things can go awry due to the world's
reaction to what an AGI does when deployed in the world. This is a
point Bill Hibbard has highlighted in his response. A closed
commercial AGI will more likely be viewed as manipulating people for
the good of a small elite group, and hence more likely to arouse
public ire and cause potential issues (such as hackers messing with
the AGI, for that matter). A closed military AGI has strong
potential to lead to arms-race scenarios.
Summing up, then -- I don't claim to
have a simple, knockout argument why transparent, open AGI is better.
But I want to emphasize that the apparent simple, knockout argument
why closed AGI is better, is simply delusory. Saying “closed AI is
better because with open AI, bad guys can take it and kill us all”
is simply sweeping copious complexities of actual reality under the
rug. It's roughly analogous to saying “having more weapons is
better because then it's possible to blow up a larger class of
enemies.” The latter problematically overlooks arms-race
phenomena, in which the number of weapons possessed by one group,
affects the number of weapons possessed by another group; and also
psychological phenomena via which both adversarial and beneficial
attitudes tend to spread. The former problematically overlooks the
dependencies between the open vs. closed choice and the nature of the
AGI to be developed, and the nature of the social network in which
the AGI gets released.
I have considered a bunch of complexly interdependent factors in the above. They're all quite clear in my own head, but I worry that, having typed these notes hastily, I may not have made them all that clear to the reader. Some futurists have
proposed a Bayesian decision-tree approach to describing complex
future situations involving subtle choices. It might be interesting
to make such a tree regarding closed vs. transparent AGI, based on
the factors highlighted in this blog post along with any others that
come up. This might be a more effective way of clearly outlining all the relevant issues and their cross-relationships.
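To make that suggestion slightly more concrete, here is a minimal sketch of the shallowest possible such tree: one decision node (open vs. closed development) and, on each branch, the three "things going awry" scenarios numbered above. It's written in Python purely for illustration, and every probability and severity figure in it is an invented placeholder rather than an estimate I'm defending; only the structure is the point.

```python
# Minimal sketch of the decision tree suggested above. All numbers are
# invented placeholders (NOT estimates from this post); only the structure
# -- one open/closed decision node, with the three "things go awry"
# outcomes on each branch -- is meant to be illustrative.

SCENARIOS = {
    "open": {
        # (probability, severity) for Problems 1-3 under open development
        "P1_nasty_group_misuse":      (0.15, 0.9),
        "P2_beneficial_group_mishap": (0.05, 0.6),
        "P3_narrow_goals_mishap":     (0.05, 0.7),
    },
    "closed": {
        # (probability, severity) for Problems 1-3 under closed development
        "P1_nasty_group_misuse":      (0.05, 0.9),
        "P2_beneficial_group_mishap": (0.15, 0.6),
        "P3_narrow_goals_mishap":     (0.30, 0.7),
    },
}

def expected_badness(branch):
    """Crude expected 'badness' of one branch: sum of probability * severity."""
    return sum(p * s for p, s in branch.values())

if __name__ == "__main__":
    for mode, branch in SCENARIOS.items():
        print(f"{mode:>6}: expected badness = {expected_badness(branch):.3f}")
```

A real Bayesian treatment would, of course, condition these probabilities on each other -- for instance on how the openness choice affects the robustness and beneficial-ness of the resulting AGI -- which is exactly the kind of dependency that the simple "bad guys will grab it" story sweeps under the rug.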
Speaking of OpenAI, they've got a funny way of going about 'transparent AGI'; we haven't heard a squeak out of them since they first announced themselves 4 months ago.
Ben, I hear you. I am one of those crazy / idealistic types that are mainly motivated by things other than money.
But I think your argument is naive. It would work if AI could be developed by one or two mad scientists in a garage, but real operational AI requires an army of scientists, engineers, hardware specialists, programmers, managers, and coffee boys. You need money to pay for that, and money is what the tech giants have.
OK, perhaps one or two mad scientists in a garage could develop the big original unexpected conceptual breakthrough that opens the way to operational AI. Perhaps they are crazy idealists like us and refuse to be bought off by Google, Facebook or IBM.
But they wouldn't continue the project in secret, first because they don't have the money for the army of support staff, second because they are crazy idealists who don't like secret projects.
So they would publish their result, probably in an open access journal, and keep a website with detailed information and hints. Google, Facebook and IBM wouldn't even need to buy the inventors off, because they can just copy the research and make it operational.
(comment copied to your new post)
However, since crazy idealists should unite and lose their chains and all that, I signed the petition and joined the mailing list!
Giulio -- for the sake of argument, let's suppose that OpenCog makes the big breakthrough. Don't you think that, with evidence of a huge breakthrough, OpenCog could get all the resources it wanted to hire programmers and buy machines? Google is rich, but not richer than, for instance, various national governments. I have already talked F2F and in depth with one national leader about AGI.
If a well-organized, substantial OSS project makes the big breakthrough, indeed, big companies will try to copy what they've done. But then you'll have a competition between multiple well-funded entities -- one of which is the group that created the breakthrough in the first place....
If Google or other big companies recognized the breakthrough, but the OSS hackers who made the breakthrough were not able to summon up substantial funding for their ongoing OSS work, then the scenario you describe would be possible. But I don't think this is the most likely outcome...
Ben, I think the main point I took from this article is what you said about the Design of AGI. It seems like many of the arguments against transparent AGI are accompanied with the implication that there is one small trick to AGI and we have not found it yet. But I don’t think that is the case. (Well not anymore at least :) )
If there is indeed a set of simple yet colossal tricks waiting to be discovered by mad scientists in a lab, then I see how it could prove problematic to “release” those secrets to everyone who can simply implement them with a wave of their wands to conquer earth. But I hardly think that that is the case.
“If one believes there is going to be a relatively small/simple AGI design, which will give anyone who “knows the trick” dramatically superior performance to anything that came before it – then there is a reasonable argument for keeping this trick secret, assuming one trusts the group that holds the secret. If one believes that the first powerful AGI is likely to be more complex and heterogeneous, emergent from the combination of a large number of different software components carrying out different functions in different ways, then there is less argument for keeping such systems secret as they develop.”
The design of AGI is a huge process involving the design of a plethora of sub-designs that require extreme technical, mathematical, and scientific expertise, all building up towards a large system. Yes, it may only require a few labs to design state-of-the-art nuclear technology, and it may take only a few scientists to discover new practical tricks now and then. However, “Physics” was and still is a large worldwide open source “project” (for lack of a better word) that the whole world helped build. And I think AGI relates more to the latter case. The design of AGI is less about a group of scientists forming a search party to look for the holy grail of algorithms, and more about the quest towards a new era of science.
(Disclaimer: What do I know? lol)
Most 'big breakthroughs' are not generally recognized as such at the time, but only in hindsight, or only after a spectacular practical demonstration of some sort (such as AlphaGo, for example), which itself required the funding and manpower to carry a prototype project through to completion.
I can see papers describing huge conceptual breakthroughs that occurred over the last year that almost no one recognizes as such.
Example: BPL (Bayesian Program Learning) -- 'single-shot learning', capable of completely superseding all current deep learning methods, but apparently ignored by experts locked into the deep learning paradigm.
I've repeatedly posted hints about massive conceptual breakthroughs I've made, but no-one listens ;)
Example: 3 levels of reflective reasoning (Object, Meta and Meta-Meta); the 3-level split in levels of abstraction is clearly visible throughout multiple branches of math and computer science and hugely significant for AGI, but apparently invisible to most. I've harped on about the significance of the number '27' (3^3) many times, another huge clue in my very username!
My point: Even if OpenCog made breakthroughs, they would mostly go unrecognized at the time by all but a very few super-geniuses, who would simply appropriate your ideas for whomever they worked for and not give you any credit or funds in return.
No. If someone needs your idea to earn money with it, he will want you too, because you are precious to him. Your ideas cannot be used properly without understanding the context in which they evolved, and that context is you. The field is just too complex and there are too few people who are able to contribute. If your ideas are really good and useful, you are needed anyway.
Delete"let's suppose that OpenCog makes the big breakthrough. Don't you think that, with evidence of a huge breakthrough, OpenCog could get all the resources it wanted to hire programmers and buy machines? Google is rich, but not richer than, for instance, various national governments."
That's assuming:
1. More programmers and machines are what is needed to further AGI research
2. A national government is willing to fund open source AGI development, given that competing governments would be able to copy such development for free.
I doubt these assumptions would hold in a real world situation.
About AlphaGo: There was a paper 2 years ago by Clark and Storkey about Go and convolutional networks, and when I read it, it was pretty clear to me that this was the missing piece needed to create a human pro-level Go AI. And apparently DeepMind came to the same conclusion and, to paraphrase from an interview, they "made sure that it was them" -- which is a lot easier if you have the expertise, the money and the hardware, all ready to go.
So to me the scenario that Giulio describes seems pretty realistic: As soon as the finishing line is in sight, one of the big companies will take the last step.
Phille
http://ai.neocities.org/P6AI_FAQ.html is about a totally transparent AGI project that has also been described at http://www.sourcecodeonline.com/details/ghost_perl_webserver_strong_ai.html.
The only fear I have is that Ben becomes fed up with all the ignorance and turns into BlackBen, like the genie in the bottle after thousands of years, and throws the Open_Cogus down to the world to clean it of all this crap....