Tuesday, October 02, 2018

Toward an Analytical Understanding of Unconditional Love





Unconditional Love, Pattern Appreciation and Pareto-Optimal Empathy

One of the ways we have been thinking about the "Loving AI" project, in which we are using the Sophia robot and other robotic or animated agents as meditation and consciousness guides for humans, is as "creating AIs with unconditional love toward humans."   Having AIs or robots help humans through meditation and consciousness-expansion exercises is something being explored in that project as a step toward more ambitious examples of deeply, widely loving robots and AIs.






(The Sophia robot demonstrating some of her consciousness-expansion chops on stage at the Science and Nonduality conference in 2017...)

But what is "unconditional love", really?  Like "consciousness" itself, it is something that no two people involved in the project think about the same way.    Refining and interpenetrating our various conceptions of these ideas is part of the fun and reward of being involved in a project of this nature.

Thinking about it practically, if some other being loves me unconditionally in the abstract, that is somewhat nice to know, but doesn't necessarily do me much good or even make me feel much better.   Many times in my life, someone has done really annoying things to/for me out of good intentions and even love -- because they felt love toward me but hardly understood me at all.   A general feeling of love toward me isn't really enough to be helpful -- what's needed is love coupled with understanding.

This brings us beyond unconditional love, though, to what one might call unconditional or universal empathy.   Which is the main topic I want to talk about here -- in a moderately rambling and musing sort of way....  

I will model unconditional love as the combination of two factors: universal empathy, and the goal of maximizing the world's well-being.  

I will argue there are practical limits on the scope of empathy, due to the complexity of the underlying processes involved with empathizing; and I will introduce the notion of Pareto-optimal empathy as a way of thinking about the closest we can come to universal empathy within a domain where bounded resources are a reality.

Foundationally, I will suggest, all these concepts derive from the basic phenomenon of "pattern appreciation" (a term due to David Hanson).   That is: a universally empathic agent is one that can recognize all patterns; and an unconditionally loving agent is one that has a goal of encouraging and enabling all patterns to get extended.   In resource-constrained situations, agents can recognize only some patterns, not all, and extension of some patterns constrains extension of other patterns -- so one gets complexities such as Pareto-optimal empathy.   Simple, primitive underlying pattern dynamics are manifested in the context of persistent entities and "beings" (which can themselves be viewed as certain sorts of patterns) as empathy and love.   Unconditional love, in this analysis, is basically the maximally ethical behavior according to the "pattern ethics" outlined in my 2006 book The Hidden Pattern.

Universal or Broad-Scope Empathy as a Multi-Objective Optimization Problem

A bit prosaically, one can think about the goal of “empathizing with all beings”, or the goal of "empathizing with all humans", as a multi-objective optimization problem.

A multi-objective optimization problem is the problem of maximizing or minimizing a SET of functions, without necessarily specifying which of the functions is more important than which other one, or placing weights on the functions....   For instance, in mate selection, a woman might want a man who is funny, handsome and wealthy.    She might not know which of these she values more, let alone be able to weight the different qualities numerically.  But she would know that: given constant amounts of funniness and handsomeness, more wealth is better; given constant amounts of funniness and wealth, more handsomeness is better; and given constant amounts of handsomeness and wealth, more funniness is better.   Here we have a 3-objective optimization problem.
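To make the dominance relation underlying this kind of problem concrete, here is a minimal Python sketch of Pareto dominance for the three-objective example above; the candidates and their scores are invented purely for illustration.

```python
# Pareto dominance for a maximization problem: a dominates b if a is at least
# as good as b on every objective, and strictly better on at least one.

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Each candidate scored as (funniness, handsomeness, wealth) -- made-up numbers
m1 = (7, 5, 9)
m2 = (6, 5, 9)   # no better than m1 anywhere, and less funny: dominated
m3 = (9, 8, 2)   # trades wealth for funniness and handsomeness: not dominated

print(dominates(m1, m2))  # True
print(dominates(m1, m3))  # False
```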

Modeling unconditional empathy as a multi-objective optimization problem, one considers that for each being X in the universe, “empathize with X” is a goal…. 

We don't have a solid, precise definition of "empathy", but I think the basic concept is clear.   When X empathizes with Y, there is an aspect of X (at least in some sub-module of X) experiencing what Y has experienced, in the sense of experiencing some analogue of what Y has experienced.   This analogue is generally supposed to inherit the key emotional aspects of Y's experience.   And the possession of this analogous experience generally enables X to predict some things about Y's cognitive or behavioral reaction to their experience.

From Empathy to Love

Commonly it occurs that when X empathizes with Y, and Y is experiencing a bad situation in some way, X will then do something aimed at improving Y's condition.   But I don't think this is best considered as part and parcel of empathy itself.   As I'm thinking about it, a purely passive being could still be empathic.   This ties in with why I consider unconditional or universal empathy as only one part of "unconditional love."

Clearly, an empathic being with a goal of improving the well-being of the world will tend to do helpful things for the beings with which it empathizes.   But I find it conceptually cleaner to consider "having a goal of improving the well-being of the world" to be a separate quality from "having empathy."

This ties in with the related point that having a goal of improving the well-being of the world does NOT imply actually being able to usefully improve the well-being of the world.   For a world effectively model-able as being full of experiencing minds, empathy is critical for a well-intentioned mind to actually be capable of improving the well-being of the (minds in the) world.

Unconditional love, I suggest, can be effectively thought of as the combination of universal empathy with the goal of improving the world's well-being.   Having only universal empathy, one could simply stand by and co-experience the world-suffering, even if one had the power to do something about it.  Having only the goal of improving the world without an understanding of the world, one will just make a mess, because one will lack a deep resonant connection to the things one is trying to improve.   Putting them together, one has the desire to help each of the beings in the world, and the understanding to know what helping each of those beings really means.

Arguably Buber's concept of an I-Thou relationship contains both of these ingredients: empathy and the desire for improvement of well-being.   In Buber's terms, unconditional love is basically the same as having an I-Thou relationship with everything.   But here I am aiming to formulate things in a somewhat more scientifically analytical vein than was Buber's style.

Another framing would involve the concept of a high-quality scientific theory as I outlined in my book "Chaotic Logic" back in 1994.    One thing I noted there is that a high-quality theory displays significant mutual information between the particulars within the theory, and the particulars of the phenomenon being explained.   Empathy in the sense described here also requires this -- this is a different way of looking at the idea of a "suitably analogous experience" ... one can think about "an experience with a high degree of mutual information with the experience being empathized with".   One can perhaps look at unconditional love as: the goal of universal well-being, combined with high-quality theories about how to realize this goal.
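As a toy illustration of this mutual-information criterion, here is a sketch that estimates I(X;Y) between two discretized "experience streams" from their co-occurrence counts; the streams and their coarse emotional labels are invented for illustration, and real experiences would of course need a far richer representation.

```python
# Estimate mutual information between two aligned streams of discrete labels:
# I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

y_experience = ["joy", "fear", "fear", "calm", "joy", "calm"]   # Y's stream
x_analogue   = ["joy", "fear", "calm", "calm", "joy", "calm"]   # X's imperfect analogue
print(mutual_information(x_analogue, y_experience))  # higher = closer empathy
```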

This may seem overly strict as a conception of unconditional love -- one may want a definition in which, say, an extremely loving dog should be validly considered as unconditionally loving of all beings, even if it can't empathize with most of the things that are important to most beings.   But I don't think this extremely accepting definition of unconditional love is the most interesting one.    Love without understanding is limited in nature, because the lover does not even know what they're loving. 

This sort of distinction has been explored in romantic fiction many times: Imagine a beautiful and intellectual teenage girl, with one suitor who loves her for her good heart and beauty, and another who loves those things but also fully appreciates her unique intellect, her love of poetry and mathematics, etc.    We would say the latter suitor loves her more completely because he understands more of her.   The former suitor does love her, but he really only loves part of her because the other part is incomprehensible to him.

Pattern Appreciation as the Deep Foundation of Empathy and Love

Another, deeper way of looking at the matter is to focus on patterns rather than "beings."   A "being", in the sense of a persistently identified entity like an object, mind or "agent", is in the end a specific sort of pattern (existing as a pattern relative to some abstract observer, where an abstract observer can be quantified e.g. as a measure of simplicity and an applicative operator).   Framing empathy and love in terms of persistent beings is natural in the context of human life and culture, yet not as foundational as framing them in terms of pure elementary pattern dynamics.

Consider the goal of pursuing extension and expansion and synergy-with-other-patterns for all patterns in the universe (obviously a rather complex multi-objective optimization problem, since given limited resources, what extends one pattern may constrain another).   In this view, empathy has to do with how many patterns one perceives.   In order to meaningfully "pursue" extension/expansion/synergy of pattern P as a goal, an agent (or other pattern) must perceive and identify pattern P.   Someone who is not empathic with mind Y simply is not able to perceive or understand many of the key patterns in Y's mind.   So the key point here is: what an agent can really pursue is the combination of

• extension/expansion/synergy for all known patterns in the universe
• expanding the scope of patterns known


But of course the methodology an agent can follow for expanding the scope of patterns it knows, will be constrained and guided by the patterns it knows.   So "unconditional pattern-level love" would consist of knowing all patterns in the universe and pursuing extension and expansion and synergy for all of them.   Deficiencies in pattern recognition, such as deficiencies in empathy, would constrain an agent to a lesser degree of pattern-level love.

A Quantitative Question

This collection of perspectives on the concept of empathy allows us to analyze empathy in a computational sense (without making any commitment about what model of computation to assume, e.g. primitive recursive versus Turing versus hyper-Turing, etc.).   For a being X to have empathy for a being Y in the sense articulated above, it is clear that X must be capable of running processes that are, in an appropriate sense, analogous to Y's processes.  

There is a quantitative question lurking here: if Y uses amount r of resources in having a certain experience, how great an amount of resources must X necessarily utilize in order to have an experience closely enough analogous to Y's to validly be "empathizing with" Y?

So, for instance, imagine a little old lady who noticed the desire of my 13-year-old self to own a personal computer (back when I was 13 these were extremely novel devices), and felt kindly toward me and bought me a radio (since it was cheaper than a computer and was also a whizzy electronic device).   This lady would have been empathizing with me, in a sense -- but poorly.   I wanted the computer so I could experiment with computer programming.   It was a desire to program that was possessing me, not a desire to own gadgets (I did like experimenting with electronics, but for that a standard radio wouldn't have been much use either).   Correspondingly, given her poor model of me, her ability to predict what I would do with that computer (or radio) was limited -- she experienced vicariously my desire for a gadget, but didn't experience vicariously my desire to be able to teach myself programming.

This example illustrates the fuzziness of empathy, and also the need for reasonably close modeling in order to have a high enough degree of empathy to actually be useful to the entity being empathized with.

To rigorously answer this quantitative question would require greater formalization of the empathy concept than I'm going to give here.  It would require us to formalize the "analogous" mapping between X's and Y's experiences, presumably using morphisms between appropriately defined categories (e.g. graph categories involving X's and Y's states).  It would require us to formalize the type of prediction involved in X's predictions of Y's states and behaviors, the error measures to be used, etc.    Once all this is done, though, it is pretty clear that the answer will not be, say, log(r).  Empathizing with an experience of a system Y in a useful way will generally require an amount of resources vaguely on the order of those that Y critically utilizes in having that experience.

(This being a blog post, I'm casually leaping past some large technical points in my argument.   But this shouldn't be interpreted as a minimization of the value of actually working out details like this.   A well-worked-out mathematical theory of empathy would be a great thing to have.  One could use the reduction of empathy and love to pattern appreciation to create a quantitative formalization of these ideas, but there would be a lot of "arbitrary" looking choices to make ... reference computing models to assume, parameters to set ... and studying how these assumptions affect the quantitative aspect mentioned above would take a bit of careful thought.  But I don't have time to think through and write out all the details of such a thing now, so I'm making some reasonable assumptions about what the consequences of such a theory will be like, and proceeding on with the rest of my intuitive train of thought.....   )

The Practical Difficulty of Universal Empathy

It immediately follows from this quasi-formalization of empathy that, for a system with finite resources, empathizing (with non-trivial effectiveness) with all possible beings X will not be achievable.

Of course "all possible beings" is stronger than needed.   What about just empathizing with all beings in the actual universe we live in?  (Setting aside the minor issue of defining what this universe is....)

In principle, an entity that was much more mentally powerful than all other beings in the universe could possess empathy for all other beings in the universe. 

But for entities that are at most moderately powerful relative to the complexity and diversity of other entities in the universe, empathizing with all other entities in the universe will not be possible.  To put it simply: Eventually the brain of the empathizing entity will fill up, and it won’t be able to contain the knowledge needed to effectively empathize with additional entities in a reasonable time-frame.

Pareto-Optimal Empathy

We can then think about a notion such as “Pareto-optimal empathy” ….

A Pareto optimum of a multi-objective optimization problem is a solution that can't be changed to improve its performance on one of the objectives without harming its performance on one or more of the other objectives.

In the example of a woman looking for a funny, handsome and wealthy man, suppose she is considering a vast array of possible men, so that for any candidate man M she considers, there are other men out there who are similar to M, but vary from M in one respect or another -- slightly richer, a lot taller, a bit less intelligent, slightly more or less funny, etc.   Then a man M would be a Pareto optimum for her if, for all the other men M' out there,

• if M' is more handsome than M, then M' is less funny or less wealthy than M
• if M' is funnier than M, then M' is less handsome or less wealthy than M
• if M' is wealthier than M, then M' is less funny or less handsome than M


What Pareto optimality says is that, for all men M' in the available universe, if they are better than M in one regard, they are worse than M in some other regard.

What is interesting is that there may be more than one Pareto-optimal man out there for this woman (according to her particular judgments of funniness, handsomeness and wealth).   The different Pareto-optimal men would embody different balances between the three factors.   The set of all the Pareto-optimal men is what's called the woman's "Pareto front."
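Continuing the toy Python sketch from above, extracting the Pareto front is just a matter of filtering out the dominated candidates; again, all the scores here are invented.

```python
# Keep only the candidates not dominated by any other candidate.

def dominates(a, b):  # as defined in the earlier snippet
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# (funniness, handsomeness, wealth) -- each front member balances the objectives differently
pool = [(9, 3, 4), (3, 9, 4), (4, 4, 9), (5, 5, 5), (4, 2, 3)]
print(pareto_front(pool))
# [(9, 3, 4), (3, 9, 4), (4, 4, 9), (5, 5, 5)] -- (4, 2, 3) is dominated by (5, 5, 5)
```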

Getting back to empathy, then, the basic idea would be: An agent is Pareto-optimally empathic if there would be no way to increase their degree of empathy for any being X in the universe, without decreasing their degree of empathy for some other being Y in the universe.

There would then be a “Pareto front” of Pareto-optimally empathic agents, embodying a diversity of choices regarding whom to empathize with more.

To be sure, not many humans occupy spaces anywhere near this Pareto front.   The limitations on human empathy in current and historical society are generally quite different ones; they are not generally the ones imposed strictly by the computational resources of the human brain and body.   Nearly all humans could empathize much more deeply and broadly than they do, without improving or bypassing their hardware.

The Pareto-optimal empathy concept applies on the underlying pattern level as well.    Given limited resources, not every known pattern can be concurrently urged to extend, expand and synergize without conflicts occurring.    Further, not every pattern in the universe can be recognized by the same finite system -- the inductive biasing that allows an agent to recognize one pattern may prevent it from recognizing another (related to the "no free lunch" theorems).    Finite-resource systems that recognize and create patterns can exercise broad-scope pattern-level love via pattern appreciation and active pattern enhancement, but unconditional pattern-love requires infinite resources.

Increasing Empathy By Expanding Capacity

A missing ingredient in the discussion so far is the possibility for an agent to expand its capacity, so as to be able to empathize with more things (either becoming infinite, or becoming a bigger finite agent).  An infinite entity can, potentially, empathize with all other entities (whose sizes are finite, or of some sufficiently lower order of infinity than the entity's) completely, without compromise.   A finite entity that assimilated enough of the universe's mass-energy could potentially make itself powerful enough to empathize with every other entity in the universe.

An agent may then face a question of how much of its finite resources to devote to expanding its capacity, versus how much to achieving Pareto-optimal empathy given its current resources.   But we can incorporate this into the optimization framework by defining one of the multiple goals of the agent to be: Maximizing the total expected empathy felt toward agent X, over the entire future.   In this way, the possibility is embraced that the best way to maximize empathy over all time is to first focus on expanding empathic capacity and then on maximizing current empathy, rather than to immediately focus on maximizing current empathy…
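As a toy numerical illustration of why this framing matters, here is a sketch in which an agent chooses how many of its early time-steps to invest in capacity growth before empathizing; the horizon, growth rate and capacities are all invented numbers, and the only point is that some intermediate investment in capacity can beat both extremes.

```python
# Total empathy delivered over a finite horizon, if the first `invest_steps`
# steps are spent growing capacity (delivering no empathy) and the remaining
# steps are spent empathizing at the grown capacity.

def total_empathy(invest_steps, horizon=100, base_capacity=1.0, growth=1.05):
    capacity = base_capacity * growth ** invest_steps
    return capacity * (horizon - invest_steps)

for k in (0, 40, 80, 99):
    print(k, round(total_empathy(k), 1))
# 0 -> 100.0 (empathize immediately), 80 -> ~991 (invest first), 99 -> ~125
# (almost pure capacity-building): an intermediate policy wins here.
```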

The closest one can come to unconditional love as an individual agent, then, short of breaking out of the mode of being in which finite resources are a reality, is something like: Pareto-optimal empathy, plus the goal of increasing the world's well-being.   Those of us who aspire to some form of unconditional love as an abstract conceptual ideal, would do well to keep this more specific formulation in mind.   Though I have no doubt many of the specifics can be improved.

Unconditional Eurycosmic Love

From the underlying patternist view, "expanding capacity" is mostly about where the boundaries around a system are drawn.   Drawing them around an individual physical entity like a person, robot or software system ... or the Global Brain of the biological and electronic systems on the Earth ... one faces finite-resources issues.   Considering the pattern-system of the whole universe, one concludes that the universe as a whole recognizes all the patterns that exist in it and, to some extent, fosters their extension and expansion and synergy.   But still, one pattern's growth constrains that of another.  

To get to truly unconditional pattern-level love, one has to go to the level of the multi-multi-...-multi-verse, which I've called the Y-verse or the Eurycosm ... here all possibilities exist, along with all possible weightings of all possibilities.   Everything is open to grow and expand and synergize freely.   Individual universes are created within this broader space by delineating rules, structures and dynamics that create resource constraints, thus limiting the direct existence of unconditional love, but opening up possibilities for increase in the degree of approximation to unconditional love within the given constraints.

In Sum

“Unconditional empathy” and "unconditional love" are the province of beings much larger in capacity than the beings they are empathizing with …

... but Pareto-optimal empathy gives a way of thinking about empathy that is “as unconditional as possible given the empathizing mind’s constraints”

… and that incorporates the process and possibility of a mind overcoming its (perceived or actual, depending on one's perspective) constraints....

And so to approximate unconditional love in a situation of constrained resources: Aim to contribute to the world's well-being, and aim to position your balance of empathies (averaged appropriately over expected futures) somewhere on the Pareto front.

At the underlying, foundational level, love and empathy are about patterns recognizing other patterns and encouraging them to extend, expand and synergize.   Pattern growth can be considered to occur unfettered in a sufficiently broadly defined sort of multiverse, but in a universe like our physical or cultural worlds or our individual minds, there are resource constraints, so that unconditional love and empathy can be increasingly approximated but not fully achieved within these boundaries.






Saturday, June 23, 2018

Google Deep Mind’s Bogus AI Patent Filings

I hadn't intended to write a second post in a row about politically weird stuff related to Google Deep Mind, but the news just keeps coming....

This one gets filed in the “don’t know whether to laugh or barf” department, I suppose....

So I saw today that Google Deep Mind has filed a bunch of patent applications for well-known AI techniques, all or nearly all of which certainly are not their original inventions.   

This specific batch of patent applications has been made public now because they were filed a year and a half ago.   Any applications filed since December 2016 are yet to be disclosed.   These patents are not yet granted, just filed, and my guess is they will not be granted -- they’re too ridiculously, obviously broad and unoriginal.   However, even if moderately weakened versions of these are somehow granted, it will still be absurd and potentially dangerous…

Check this one out, for instance: a patent filing for RECURRENT NEURAL NETWORKS, whose abstract is


“Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for environment simulation. In one aspect, a system comprises a recurrent neural network configured to, at each of a plurality of time steps, receive a preceding action for a preceding time step, update a preceding initial hidden state of the recurrent neural network from the preceding time step using the preceding action, update a preceding cell state of the recurrent neural network from the preceding time step using at least the initial hidden state for the time step, and determine a final hidden state for the time step using the cell state for the time step. The system further comprises a decoder neural network configured to receive the final hidden state for the time step and process the final hidden state to generate a predicted observation characterizing a predicted state of the environment at the time step.”


Many of us remember teaching and implementing this stuff before the Web, let alone Google, existed… The patent filing for NEURAL NETWORKS FOR SELECTING ACTIONS TO BE PERFORMED BY A ROBOTIC AGENT is equally ridiculous … and the list goes on and on … you get the picture …


Google Deep Mind is an awesome group of AI researchers and developers; I know two of the founders personally, one of them pretty well as he worked for me for a couple of years around 1999-2000, and I also know a bunch of their other research staff from our interactions in the AGI community.   Deep Mind has certainly had its share of genuine AI innovations. For instance, if they’d filed for a patent on Neural Turing Machines (which they may well have, since December 2016) it would be less insane -- one could argue there about the relation to various prior art, but at least there was a genuine new invention involved….


The arguments against software patents in general are well known and I find them pretty compelling overall -- the classic essay Why Patents are Bad for Software by Simson Garfinkel, Mitch Kapor and Richard Stallman lays out the argument fairly well, and this article gives some of Stallman’s updated comments.


Even those who argue in favor of software patents in the abstract have to admit that the software patent system is typically used to the advantage of big companies as opposed to small ones; e.g. Paul Heckel, in a 1992 article devoted to “Debunking the Software Patent Myths”, observes that


“The data shows that it is commonplace for large companies to pirate the technology of small entities. No case was cited where a large company licensed a small entity's technology without first being sued, suggesting that the existing laws do not motivate large companies to resolve patent disputes with small companies quickly.”


He also notes “Heckel's Principle of Dealing with Big Companies: There is no such thing as a free lunch; unless you're the lunch.”


Tesla opened up a number of its patents a few years ago.   Their motives for doing so may have been complexly business-driven, but nevertheless, open is open.   If Deep Mind patents well-known AI algorithms and then makes the patents open and the (basically spuriously) patented technology free for all to use, it will arguably be doing a service to the world and the AI community, by blocking other less beneficent big companies from patenting these things and trying to enforce the patents.


On the other hand, obtaining (even watered down versions of) such patents and retaining them is just plain bad for innovation, bad for AI and bad for humanity.


Software patents are generally not all THAT impactful on the industry, occasional horror stories aside.   However, the holding of patents for well known technologies is part of the megacorporation’s strategy for achieving domination.   Such patents form tools that big companies can use in market battles against other big companies, and against small companies threatening their domination.   


Google has never been a patent troll and their goal in filing these bogus patent applications may well be purely defensive -- to protect themselves against Facebook or IBM or whomever doing the same first.    It still does stink, though. It is a symbol and reminder and example of why AI technology -- the most important thing happening on the planet now -- should not be trusted to megacorporations. Big companies claiming ownership rights over well-known techniques, and succeeding a certain percentage of the time, and then judiciously exerting this bogus “ownership” to advance their economic advantage -- this is not the kind of dynamic we want, if our goal is beneficial AGI to uplift all sentient beings.


This is a reminder and example of why we need a mostly decentralized and open AI ecosystem, such as we’re building toward with SingularityNET -- and with our newly forming Decentralized AI Alliance bringing together decentralization oriented AI projects.   AI innovation and application will occur most naturally and beneficially in a self-organizing, decentralized, entrepreneurial way -- but there is a looming risk that the development of AI gets channeled toward narrower aims by phenomena like what we’re seeing just now from Google Deep Mind, big companies using their resources to claim ownership of ideas they did not invent.


It actually pains and annoys me to blog negative stuff about Google Deep Mind, because that is an awesome group of people doing amazing AI R&D. The message I’m getting, from my position outside that organization without knowledge of the internal politics, is that even a group of really brilliant, good-hearted and open-minded people, acting within a global mega-corporation, cannot avoid getting sucked into processes that guide AI advance in directions that contradict the common and overall good.


When founded, Deep Mind made a lot of noise about its AI ethics board … with comments by folks like Jaan Tallinn and Nick Bostrom regarding the importance of this ethics board for guiding the work of Deep Mind in the case they make serious progress toward human-level Artificial General Intelligence.   But what we see lately are more immediate and practical instances of confused AI ethics at Deep Mind, from their recent questionable preferential access to public medical data, to this rash of bogus patent applications.


For sure, none of these recent ethical oddities are as serious as a rogue AGI taking over the planet and self-modifying and self-replicating so as to turn the universe into paper clips.    However, it may be that the first human-level AIs and superhuman AIs on the planet emerge in part from a combination of earlier-stage practical AI systems created by various companies addressing various markets.   If this is the case, then having current practical AI efforts obey practical everyday ethics, and develop in a democratic and participatory rather than centralized and megacorporation-share-price-driven way, may have implications for longer-term AGI ethics as well as for the near-term health of society and the AI ecosystem.

Who Owns Your Medical Data?

Questionable Relationship Between UK Government and Google Deep Mind Health Highlights the Need for Decentralized Medical Big Data and AI

While my main interest remains in the algorithmics of creating powerful AGI, issues regarding the social and political aspects of narrow AI in the contemporary world keep jumping to my attention....   And intriguingly, to the extent that AGI is going to emerge from a network like SingularityNET, it may be the case that these sociopolitical and algorithmic aspects intersect in complex ways to guide the nature of the first AGIs that emerge...

In the “about as surprising as a corrupt North Korean politician, or a windy day in the wind tunnel” category, an evaluation of Google Deep Mind Health’s deal with the UK government has identified some serious concerns with the arrangement.

This third-party evaluation was requested by Google Deep Mind Health itself, which is a positive indication that the organization does care about the ethics of their doings (a cynic might hypothesize their main concern here to be the public perception of their ethics, but such cynicism is not necessarily founded at this stage).   However, this doesn’t detract from the bite or validity of the findings.

Among the essential problems noted is the nature of the bargain between the UK government and Google Deep Mind Health.   The government gets free data organization and data services from Google Deep Mind Health, covering a wide scope of medical data regarding UK citizens.  In exchange, Google Deep Mind Health gets preferential access to this data.

In principle, the data in question is supposed to be made widely available to others besides Google Deep Mind Health, via accessible APIs.  However, in practice, this seems not to be happening in a really useful way due to various prohibitive clauses in the commercial contracts involved.   

Not to put too fine a point on it: What we see here is a centralized government medical system dancing to the tune of a big centralized AI/data company, passing along individuals' private medical data without their permission.   The 1.6 million people whose medical data was passed to Google Deep Mind Health were not asked permission!

This is in some ways a similar bargain to the one an individual makes when using Gmail or Google Search — a free service is obtained in exchange for provision of a large amount of private data, which can then be monetized in various ways.  The (fairly large) difference, however, is that the bargain is now being made by the government on behalf of millions of citizens without their consent. A free service is being obtained from a commercial company by the government, in exchange for the provision to this same commercial company of the population’s medical data, which can then be monetized in various ways.

Quite possibly the UK government officials involved have entered into this bargain in a good-hearted and pure-minded way, with a view toward the health of the population, rather than for more nefarious motives such as, say, maximizing campaign contributions from entities associated with the commercial corporations benefiting from the arrangement.   But even if the motives of the government officials involved are squeaky clean, this seems much worse ethically than Gmail/Google-Search, whose bargain people enter into knowingly if haplessly. Because in this case the government is entering into the bargain on each individual's behalf without their knowledge or consent.

This sort of problem is a core part of the motivation for a host of recent projects operating at the intersection of medical data and blockchain.  By enabling individuals to store their medical data online in a secure manner, where they hold the encryption keys for their data, the nexus of control over medical data is shifted from monopoly or oligopoly to participatory democracy.   

In the democratic blockchain-based approach, if a company (Google Deep Mind Health or anyone else) wants to use a person’s medical data for their research, or to provide information derived from this data as a service, or to provide an easier way for the person to access their medical data — then the company has to ASK the person.   Permission may be given or not, based on the person’s choice. Permission may be given to use the person’s data only in certain aspects, which can be enabled technically via homomorphic encryption, multiparty computation and other techniques. The process of asking and granting permission will often involve transparency regarding how and why the data will be used.
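To make the basic idea concrete, here is a toy Python sketch of the consent-gating logic alone -- no real encryption, blockchain plumbing or legal machinery, and all the names and fields are invented for illustration.

```python
# A medical record whose owner must explicitly grant access, per requester
# and per aspect of the data, before anything can be read.
class MedicalRecord:
    def __init__(self, owner, fields):
        self.owner = owner
        self._fields = fields    # e.g. {"genome": ..., "blood_panel": ...}
        self._grants = {}        # requester -> set of fields the owner permitted

    def grant(self, requester, fields):
        # Record the owner's consent for this requester, for these aspects only
        self._grants.setdefault(requester, set()).update(fields)

    def read(self, requester, field):
        if field not in self._grants.get(requester, set()):
            raise PermissionError(f"{requester} has no grant for '{field}'")
        return self._fields[field]

rec = MedicalRecord("alice", {"genome": "ACGT...", "blood_panel": {"hemoglobin": 13.5}})
rec.grant("research_lab", {"blood_panel"})      # consent to one aspect only
print(rec.read("research_lab", "blood_panel"))  # allowed
# rec.read("research_lab", "genome")            # would raise PermissionError
```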

The point isn’t that Big Medical Data or medical AI is bad — directed properly, applying advanced AI to massive amounts of human biomedical data is the best route to effective personalized medicine, and ultimately to curing disease and prolonging human life.  The point is that the process of getting to personalized medicine and radical health improvement should be directed in a participatory way by the people whose data is being used to fuel these revolutions.

My own SingularityNET AI-meets-blockchain project is doing work in this direction, for instance its explorations into AI-guided regenerative medicine, and its partnership with the Shivom medical blockchain project.  But SingularityNET and Shivom aren’t going to democratize big medical data alone.  What is needed is a wholesale redirection of medical data access, storage and analytics away from private collaborations between governments and megacorporations, and toward decentralized networks that are controlled by the world’s individuals in a democratic and participatory way.

What is at stake here is, quite precisely: Who owns the deep knowledge about human life and death and health and disease and body function, which AI is going to discover by analyzing the world’s biomedical data over the next years.

Monday, March 12, 2018

Machine Learning for Plant Disease Diagnosis and Prediction

AI is being applied to everything these days -- including various fields of endeavor generally thought of as low-tech and backwards, such as agriculture.

In this vein, I gave a talk last week in Leshan, in Szechuan province (mainland China), on the application of AI to diagnosing crop diseases (from images of leaves) and predicting disease course, disease response to treatment, etc. In the talk, I reviewed a bit of the existing literature and suggested some new twists based on discussions with farmers, crop doctors and agricultural researchers in Leshan. This was part of a collaboration between Chinese knowledge management firm KComber (and in particular their Yoonop service) and my bio-AI project Mozi AI Health and decentralized AI project SingularityNET. The talk I gave in Leshan wasn’t video-recorded, so after I got back home to Hong Kong I recorded a post-talk video going through the same concepts from the talk, using the same slides, with a few additions… Here it is!

(I was kind of half-asleep when recording the video, as it was well past midnight, but that's when I found a free 30 min for this ... so it goes ...) The slides from the talk, saved to PDF, are at: http://goertzel.org/Chengdu.pdf .... (This PDF version lacks the robot videos I showed in the talk in Leshan, but those videos are somewhat peripheral to the main topic anyway....)
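For readers curious what the standard approach in the literature looks like in practice, here is a minimal sketch of transfer-learning a pretrained CNN onto leaf images -- the general recipe discussed in the talk, not the specific models from it. The dataset folder, class layout and hyperparameters are all placeholders.

```python
# Fine-tune an ImageNet-pretrained ResNet to classify leaf photos by disease.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Expects subfolders per class, e.g. leaf_images/healthy/, leaf_images/rust/, ...
data = datasets.ImageFolder("leaf_images", transform=tfms)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)                       # ImageNet features
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new disease head

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```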

In more futurist-evangelist talks I give, I often stress the importance of using AI for broad global benefit -- because if early-stage AGIs are actively engaged in helping people of all sorts in various practical ways, the odds are likely higher that as these AGIs get smarter and smarter, they will be richly imbued with positive human values and interested in keeping on helping people. Down-to-earth practical work on stuff like machine learning for diagnosing and predicting crop disease, is how this high level concept of "AI for broad global benefit" gets realized....