Saturday, June 23, 2018

Google Deep Mind’s Bogus AI Patent Filings

I hadn't intended to write a second post in a row about politically weird stuff related to Google Deep Mind, but the news just keeps coming....

This one gets filed in the “don’t know whether to laugh or barf” department, I suppose....

So I saw today that Google Deep Mind has filed a bunch of patent applications for well-known AI techniques, all or nearly all of which certainly are not their original inventions.   

This specific batch of applications has just become public because patent applications are published 18 months after filing -- so anything Deep Mind has filed since December 2016 is yet to be disclosed. These patents are not yet granted, just filed, and my guess is they will not be granted -- they’re too ridiculously, obviously broad and unoriginal.   However, even if moderately weakened versions of these are somehow granted, it will still be absurd and potentially dangerous…

Check this one out, for instance: a patent filing for RECURRENT NEURAL NETWORKS, whose abstract reads:


“Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for environment simulation. In one aspect, a system comprises a recurrent neural network configured to, at each of a plurality of time steps, receive a preceding action for a preceding time step, update a preceding initial hidden state of the recurrent neural network from the preceding time step using the preceding action, update a preceding cell state of the recurrent neural network from the preceding time step using at least the initial hidden state for the time step, and determine a final hidden state for the time step using the cell state for the time step. The system further comprises a decoder neural network configured to receive the final hidden state for the time step and process the final hidden state to generate a predicted observation characterizing a predicted state of the environment at the time step.”
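To see how generic this abstract is, here is a minimal sketch of the recurrent step it describes -- essentially a standard LSTM-style update plus a decoder. All names, dimensions and parameter shapes below are my own inventions for illustration, not anything from the actual filing:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

H, A, OBS = 8, 3, 5  # hidden size, action size, observation size (arbitrary)

# Randomly initialized parameters, purely for illustration.
W_a = rng.normal(size=(H, A))       # mixes the preceding action into the hidden state
W_g = rng.normal(size=(4 * H, H))   # gate weights: input, forget, output, candidate
W_dec = rng.normal(size=(OBS, H))   # "decoder" mapping hidden state -> predicted observation

def step(prev_hidden, prev_cell, action):
    # 1. Update the preceding hidden state using the preceding action.
    initial_hidden = np.tanh(prev_hidden + W_a @ action)
    # 2. Update the preceding cell state using the initial hidden state.
    gates = W_g @ initial_hidden
    i, f, o = (sigmoid(gates[k * H:(k + 1) * H]) for k in range(3))
    candidate = np.tanh(gates[3 * H:])
    cell = f * prev_cell + i * candidate
    # 3. Determine the final hidden state from the cell state.
    final_hidden = o * np.tanh(cell)
    # 4. The decoder processes the final hidden state into a predicted observation.
    predicted_obs = W_dec @ final_hidden
    return final_hidden, cell, predicted_obs

h, c = np.zeros(H), np.zeros(H)
for t in range(4):                  # a few simulated time steps
    action = rng.normal(size=A)
    h, c, obs = step(h, c, action)
print(obs.shape)  # (5,)
```

Nothing in that loop would have looked unfamiliar to anyone working with recurrent networks decades ago; the "environment simulation" framing adds only the interpretation of the decoder output as a predicted observation.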


Many of us remember teaching and implementing this stuff before the Web, let alone Google, existed… The patent filing for NEURAL NETWORKS FOR SELECTING ACTIONS TO BE PERFORMED BY A ROBOTIC AGENT is equally ridiculous… and the list goes on and on… you get the picture…


Google Deep Mind is an awesome group of AI researchers and developers; I know two of the founders personally, one of them pretty well as he worked for me for a couple of years around 1999-2000, and I also know a bunch of their other research staff from our interactions in the AGI community.   Deep Mind has certainly had its share of genuine AI innovations. For instance, if they’d filed for a patent on Neural Turing Machines (which they may well have done, since December 2016), it would be less insane -- one could argue about its relation to various prior art, but at least a genuine new invention was involved….


The arguments against software patents in general are well known and I find them pretty compelling overall -- the classic essay Why Patents Are Bad for Software by Simson Garfinkel, Mitch Kapor and Richard Stallman lays out the argument fairly well, and this article gives some of Stallman’s updated comments.


Even those who argue in favor of software patents in the abstract have to admit that the software patent system is typically used to the advantage of big companies as opposed to small ones. E.g., Paul Heckel, in a 1992 article, “Debunking the Software Patent Myths,” observes that


“The data shows that it is commonplace for large companies to pirate the technology of small entities. No case was cited where a large company licensed a small entity's technology without first being sued, suggesting that the existing laws do not motivate large companies to resolve patent disputes with small companies quickly.”


He also notes “Heckel's Principle of Dealing with Big Companies: There is no such thing as a free lunch; unless you're the lunch.”


Tesla opened up a number of its patents a few years ago.   Their motives for doing so may have been complexly business-driven, but nevertheless, open is open.   If Deep Mind patents well-known AI algorithms and then makes the patents open and the (basically spuriously) patented technology free for all to use, it will arguably be doing a service to the world and the AI community, by blocking other less beneficent big companies from patenting these things and trying to enforce the patents.


On the other hand, obtaining (even watered down versions of) such patents and retaining them is just plain bad for innovation, bad for AI and bad for humanity.


Software patents are generally not all THAT impactful on the industry, occasional horror stories aside.   However, the holding of patents for well known technologies is part of the megacorporation’s strategy for achieving domination.   Such patents form tools that big companies can use in market battles against other big companies, and against small companies threatening their domination.   


Google has never been a patent troll and their goal in filing these bogus patent applications may well be purely defensive -- to protect themselves against Facebook or IBM or whoever doing the same first.    It still does stink, though. It is a symbol and reminder and example of why AI technology -- the most important thing happening on the planet now -- should not be trusted to megacorporations. Big companies claiming ownership rights over well-known techniques, succeeding a certain percentage of the time, and then judiciously exerting this bogus “ownership” to advance their economic advantage -- this is not the kind of dynamic we want, if our goal is beneficial AGI to uplift all sentient beings.


This is a reminder and example of why we need a mostly decentralized and open AI ecosystem, such as we’re building toward with SingularityNET -- and with our newly forming Decentralized AI Alliance bringing together decentralization oriented AI projects.   AI innovation and application will occur most naturally and beneficially in a self-organizing, decentralized, entrepreneurial way -- but there is a looming risk that the development of AI gets channeled toward narrower aims by phenomena like what we’re seeing just now from Google Deep Mind, big companies using their resources to claim ownership of ideas they did not invent.


It actually pains and annoys me to blog negative stuff about Google Deep Mind, because that is an awesome group of people doing amazing AI R&D. The message I’m getting, from my position outside that organization without knowledge of the internal politics, is that even a group of really brilliant, good-hearted and open-minded people, acting within a global mega-corporation, cannot avoid getting sucked into processes that guide AI advance in directions that contradict the common and overall good.


When founded, Deep Mind made a lot of noise about its AI ethics board … with comments by folks like Jaan Tallinn and Nick Bostrom regarding the importance of this ethics board for guiding Deep Mind’s work in case they make serious progress toward human-level Artificial General Intelligence.   But what we see lately are more immediate and practical instances of confused AI ethics at Deep Mind, from their recent questionable preferential access to public medical data, to this rash of bogus patent applications.


For sure, none of these recent ethical oddities are as serious as a rogue AGI taking over the planet and self-modifying and self-replicating so as to turn the universe into paper clips.    However, it may be that the first human-level AIs and superhuman AIs on the planet emerge in part from a combination of earlier-stage practical AI systems created by various companies addressing various markets.   If this is the case, then having current practical AI efforts obey practical everyday ethics, and develop in a democratic and participatory rather than centralized and megacorporation-share-price-driven way, may have implications for longer-term AGI ethics as well as for the near-term health of society and the AI ecosystem.

Who Owns Your Medical Data?

Questionable Relationship Between UK Government and Google Deep Mind Health Highlights the Need for Decentralized Medical Big Data and AI

While my main interest remains in the algorithmics of creating powerful AGI, issues regarding the social and political aspects of narrow AI in the contemporary world keep jumping to my attention....   And intriguingly, to the extent that AGI is going to emerge from a network like SingularityNET, it may be the case that these sociopolitical and algorithmic aspects intersect in complex ways to guide the nature of the first AGIs that emerge...

In the “about as surprising as a corrupt North Korean politician, or a windy day in the wind tunnel” category, an evaluation of Google Deep Mind Health’s deal with the UK government has identified some serious concerns with the arrangement.

This third-party evaluation was requested by Google Deep Mind Health itself, which is a positive indication that the organization does care about the ethics of their doings (a cynic might hypothesize their main concern here to be the public perception of their ethics, but such cynicism is not necessarily founded at this stage).   However, this doesn’t detract from the bite or validity of the findings.

Among the essential problems noted is the nature of the bargain between the UK government and Google Deep Mind Health.   The government gets free data organization and data services from Google Deep Mind Health, covering a wide scope of medical data regarding UK citizens.  In exchange, Google Deep Mind Health gets preferential access to this data.

In principle, the data in question is supposed to be made widely available to others besides Google Deep Mind Health, via accessible APIs.  However, in practice, this seems not to be happening in a really useful way due to various prohibitive clauses in the commercial contracts involved.   

Not to put too fine a point on it: What we see here is a centralized government medical system dancing to the tune of a big centralized AI/data company, passing along individuals' private medical data without their permission.   The 1.6 million people whose medical data was passed to Google Deep Mind Health were not asked permission!

This is in some ways a similar bargain to the one an individual makes when using Gmail or Google Search — a free service is obtained in exchange for provision of a large amount of private data, which can then be monetized in various ways.  The (fairly large) difference, however, is that the bargain is now being made by the government on behalf of millions of citizens, without their consent. A free service is being obtained from a commercial company by the government, in exchange for the provision to this same commercial company of the population’s medical data, which can then be monetized in various ways.

Quite possibly the UK government officials involved have entered into this bargain in a good-hearted and pure-minded way, with a view toward the health of the population, rather than for more nefarious motives such as, say, maximizing campaign contributions from entities associated with the commercial corporations benefiting from the arrangement.   But even if the motives of the government officials involved are squeaky clean, this seems much worse ethically than Gmail/Google-Search, whose bargain people enter into knowingly if haplessly; in this case the government is entering into the bargain on each individual's behalf without their knowledge or consent.

This sort of problem is a core part of the motivation for a host of recent projects operating at the intersection of medical data and blockchain.  By enabling individuals to store their medical data online in a secure manner, where they hold the encryption keys for their data, the nexus of control over medical data is shifted from monopoly or oligopoly to participatory democracy.   

In the democratic blockchain based approach, if a company (Google Deep Mind Health or anyone else) wants to use a person’s medical data for their research, or to provide information derived from this data as a service, or to provide an easier way for the person to access their medical data — then the company has to ASK the person.   Permission may be given or not based on the person’s choice. Permission may be given to use the person’s data only in certain aspects, which can be enabled technically via homomorphic encryption, multiparty computation and other techniques. The process of asking and granting permission will often involve transparency regarding how and why the data will be used.
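The ask-and-grant pattern itself is simple enough to sketch in a few lines. The class and method names below are my own inventions, not any real project's API; a real deployment would additionally encrypt each field with keys held by the patient, and use homomorphic encryption or multiparty computation so that granted computations never reveal the raw data:

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    """Toy consent-gated record: the owner grants per-field access explicitly."""
    owner: str
    _fields: dict                                 # e.g. {"bloodwork": ..., "genome": ...}
    _grants: dict = field(default_factory=dict)   # requester -> set of granted field names

    def grant(self, requester, field_names):
        # The patient, not the data holder, decides what each requester may see.
        self._grants.setdefault(requester, set()).update(field_names)

    def read(self, requester, field_name):
        # A requester can read a field only if the owner granted that exact field.
        if field_name not in self._grants.get(requester, set()):
            raise PermissionError(
                f"{self.owner} has not granted {requester} access to {field_name!r}")
        return self._fields[field_name]

record = PatientRecord("alice", {"bloodwork": [5.1, 4.7], "genome": "ACGT..."})
record.grant("research_co", {"bloodwork"})        # partial permission only
print(record.read("research_co", "bloodwork"))    # [5.1, 4.7]

try:
    record.read("research_co", "genome")          # never granted
except PermissionError as e:
    print("denied:", e)
```

The essential inversion is that access control lives with the individual record-owner rather than with whichever institution happens to hold the database.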

The point isn’t that Big Medical Data or medical AI is bad — directed properly, applying advanced AI to massive amounts of human biomedical data is the best route to effective personalized medicine, and ultimately to curing disease and prolonging human life.  The point is that the process of getting to personalized medicine and radical health improvement should be directed in a participatory way by the people whose data is being used to fuel these revolutions.

My own SingularityNET AI-meets-blockchain project is doing work in this direction, for instance its explorations into AI-guided regenerative medicine, and its partnership with the Shivom medical blockchain project.  But SingularityNET and Shivom aren’t going to democratize big medical data alone.  What is needed is a wholesale redirection of medical data access, storage and analytics away from private collaborations between governments and megacorporations, and toward decentralized networks that are controlled by the world’s individuals in a democratic and participatory way.

What is at stake here is, quite precisely: who owns the deep knowledge about human life and death, health and disease and body function, which AI is going to discover by analyzing the world’s biomedical data over the coming years.