
Saturday, April 11, 2020

The Likely Nasty Social, Economic and Surveillance Aftereffects of COVID-19 -- and How to Combat Them

A lot of attention right now is going into the question of flattening the curve of global COVID-19 infection -- and this is exactly right.  I've been trying to do my own part here, by organizing the COVIDathon blockchain-AI-against-COVID-19 hackathon, and by working with my SingularityNET colleagues on using some of our AI code to simulate COVID-19 spread and analyze related biology.

It's also important, though, to think about the other side of the curve -- what happens once the virus starts to gradually recede into the background, and life resumes some variation of "normal."    How will things be different after COVID-19?  Which of the unusual things happening now in the midst of the pandemic are likely to continue to have impact in the post-pandemic world?

TL;DR it seems the answer is: Barring something unusual and countervailing happening, the impact of the pandemic will be the rich getting richer, the poor getting poorer, and Big Tech and Big Government getting more access to diverse personal data and more skill at mining it effectively.

Potentially these effects could be palliated by rolling out decentralized blockchain-based technologies for managing aspects of the pandemic and the pandemic-era economy.   But it appears likely that, even if we succeed in getting a few such technologies built and adopted rapidly via COVIDathon and other efforts, by and large it will be centralized technologies, centralized government agencies and companies and the traditionally financialized economy that will dominate COVID-19 response.

A more open question is whether, when the next pandemic or other global crisis rolls around, decentralized tech will be ready to play a major role.  Can COVID-19 and its impacts on society, economy and industry serve as a wake-up call regarding the risks global crises pose on multiple fronts, including data sovereignty and economic fairness?  Will this wake-up call be loud enough to rouse a large open-source development community into action regarding the creation of decentralized, secure and democratically controlled technologies for doing things like, say, managing uploaded personal medical data ... tracking and predicting the spread of epidemics ... carrying out precision-medicine analytics on clinical trials ... and assessing lifestyle choices in the light of current medical realities and practicalities like weather and transportation?

Let's run through the probable future in more detail.   Social distancing and travel restrictions are most likely to cause the virus's spread to slow as 2020 progresses; and then before too long effective antiviral compounds or cocktails will be available.   Sometime in 2021, most likely, COVID-19 vaccines will hit the market; and then this virus currently wreaking so much havoc will be relegated to a status much like that of the lowly flu.

In the meantime, though, lots of low-wage service workers are getting laid off ... and many will not get re-hired, as many businesses will choose to rebuild in other ways after the pandemic fades (automation, anyone?).  For instance, many of the people who are now ordering groceries for home delivery for the first time will continue doing this a lot after COVID-19 is gone -- resulting in fewer jobs for supermarket cashiers and other staff.  The same sort of shift will play out across many other service industries.

At the same time, savvy investment funds are right now buying up every valuable asset they can at bargain prices -- so that after the pandemic fades they will own an even larger percentage of the planet.

And the techlash is already fading into the dim recesses of history along with net neutrality -- as everyone grows increasingly attached to Amazon, Netflix, Google etc. while trapped in their homes using the Internet for everything.  

Big Tech has been underhandedly striving to gather as much medical data as possible for years now -- e.g. Google DeepMind's series of sweetheart deals with the British health system to gain access to people's medical records, or Project Nightingale, which saw Google quietly capture 50 million Americans' medical records.  Gathering medical data from a wide population with a view toward pandemic-related analysis and prediction is absolute gold for Big Tech.  This data, and the pipelines that bring it their way, will continue to yield value for these companies and their government partners long after COVID-19 has been reduced to the level of one more irritating seasonal infection.

As everyone becomes increasingly fearful for the lives of their elderly friends and relations, centralized monitoring of everybody's location, movements and physiological data is increasingly taken as a Good Thing.  Today, uploaded temperature readings from more than a million wireless digital thermometers are letting us track the spread of COVID-19 around the US.  Stanford researchers have also shown that, by using AI anomaly detection on data from heart-rate variability, body temperature and pulse oximetry, one can identify that a person is sick even before they show any symptoms.

But then what happens when it becomes standard for your smartwatch, smartphone and fitness tracker to upload your data to Big Tech and Big Government so they can track and analyze disease spread?   Do you really trust these corporate and governmental entities not to use this data for other purposes -- and not to find ways to quietly keep collecting and utilizing similar data?   Edward Snowden has recently gone on record that, no, he does not.  As you may have guessed,  I don't either.

Yet the UK is already going directly down this path, with a governmental software app that detects and tracks nearby COVID-19 sufferers.  Completely harmless, extremely helpful -- until the same tech and organizational setup is used to track other things of interest to the ruling politicos and their business and military allies.

Big Brother is watching your heart rate, your temperature and your blood oxygen level -- better be sure your heart doesn't pound too much when you walk past that political demonstration, or your credit rating's going way down!!

Global monitoring of human movement and human physiology can do wonders for optimizing global health, during a pandemic and otherwise -- but it should be done with decentralized, secure tools.   Otherwise one is placing great trust in the entities that are gathering and utilizing this data -- not only to do helpful things with it in the pandemic, but not to leverage this data and related data-gathering capabilities later in the interest of goals different from that of global human benefit.

At the moment most decentralized networks and associated software tools are still in fairly early stages of development -- so to combat COVID-19 fast, we are understandably relying on centralized methods.  But this will not be the last pandemic, nor the last acute, unprecedented global crisis that humanity faces.  It is important to do this work now, so that when the next such situation arises, decentralized frameworks will be fully prepared to play a leading role in helping humanity cope.

Otherwise, each successive crisis will serve to concentrate more and more wealth and power in the hands of a small elite -- which is not at all the best way to create a beneficial future for humanity and its technological children.

Friday, April 10, 2020

Can We "Discover" Semantic Primitives for Commonsense and Math via Semantic Relation Extraction from Corpora?

One more wild-ish train of thought completely unrelated to anything immediately practical … I was thinking about Chalmers’ idea from Constructing the World that the notion of universal semantic primitives underlying all human concepts might be rendered sensible by use of intensional logic … i.e. extensionally reducing all concepts to combinations of a few dozen primitives [plus raw perception/action primitives] is doomed to fail (as shown by eons of pedantic pickery in the analytical philosophy literature), but doing the reduction intensionally seems to basically work….

In his book he argues why this is the case and gives lots of examples, but doesn’t fully perform the reduction, as that’s too big a job (there are a lot of concepts to intensionally reduce…).

So it occurred to me that if we managed to do decent semantic-relation extraction from large NL corpora, then, if Chalmers is right, there would be a set of a few dozen concepts such that combining them via intensional-logic operations (plus perception/action primitives) would yield close approximations (small intensional difference) to any given concept.

In vector embedding space, it might mean that any concept can be expressed fairly closely via a combination of the embedding vectors from a few dozen concepts, using combinatory operators like vector sum and pointwise min …
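As a toy sanity check of the vector-space version of this idea (purely illustrative: random Gaussian vectors stand in for real concept embeddings, and the 50 "primitives" here are arbitrary), one can ask how well a target vector is approximated by a weighted sum of a few dozen primitive vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 "primitive" concept embeddings in a 300-d space.
primitives = rng.normal(size=(50, 300))

# A target concept vector we want to express via the primitives.
target = rng.normal(size=300)

# Least-squares combination: find weights w minimizing ||w @ primitives - target||.
w, residual, rank, _ = np.linalg.lstsq(primitives.T, target, rcond=None)

approx = w @ primitives
error = np.linalg.norm(approx - target) / np.linalg.norm(target)
print(f"relative reconstruction error: {error:.3f}")
```

With real embeddings one would hope the residual is much smaller for concepts genuinely generated by the primitives; nonlinear combinators like pointwise min would require an iterative optimizer rather than a single `lstsq` call.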

As I recall, the intensional-combination operators used in Chalmers’ philosophical arguments don’t involve much advanced quantifier-munging, so basic fuzzy-propositional-logic operators might do it…

Now if we cross-correlate this with Lakoff and Nunez’s thoughts in “Where Mathematics Comes From” — where they argue that math theorem-proving is done largely by unconscious analogy to reasoning about everyday physical situations — then we get the idea that morphisms from commonsense domains to abstract domains guide math theorem-proving, and that the generators of the algebra of commonsense concepts can be mapped into abstract math-patterns (e.g. math-domain-independent proof strategies/tactics) that serve as generators of proofs for human-friendly mathematics….

Which led me to wonder if one could form an interesting corpus from videos of math profs going thru proofs online at the whiteboard.  One would then capture the verbal explanations along with proofs, hopefully capturing some of the commonsense intuitions/analogies behind the proof steps… from such a corpus one might be able to mine some of the correspondences Lakoff and Nunez wrote about….

There won’t be a seq2seq model mapping mathematicians’ mutterings into full Mizar proofs, but there could be useful guidance for pruning theorem-prover activity in models of the conceptual flow of mathematicians’ proof-accompanying verbalizations....

Can we direct proofs from premises to conclusions, via drawing a vector V pointing from the embedding vector for the premise to the embedding vector for the conclusion, and using say the midpoint of V as a subgoal for getting from premise to the conclusion ... and where the basis for the vector space is the primitive mathematical concepts that are the Lakoff-and-Nunez-ian morphic image of primitive everyday-human-world concepts?
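A minimal sketch of the midpoint-subgoal idea, under the assumption that premises, conclusions and candidate lemmas all live in a common embedding space (random vectors below are hypothetical stand-ins for real proof-step embeddings):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical embeddings for a premise and a desired conclusion.
premise = rng.normal(size=128)
conclusion = rng.normal(size=128)

# Vector pointing from premise to conclusion, and its midpoint as a subgoal.
v = conclusion - premise
subgoal = premise + 0.5 * v   # equivalently (premise + conclusion) / 2

def closest_lemma(candidates, point):
    """Return the index of the candidate embedding nearest to `point`."""
    dists = np.linalg.norm(candidates - point, axis=1)
    return int(np.argmin(dists))

# A prover could search for an intermediate lemma whose embedding is
# closest to the subgoal, then recurse on the two halves of the path.
lemmas = rng.normal(size=(1000, 128))   # hypothetical lemma embeddings
idx = closest_lemma(lemmas, subgoal)
```

Recursing on (premise, lemma) and (lemma, conclusion) would give a bisection-style proof-planning heuristic; whether straight-line interpolation is meaningful in a given embedding space is of course an empirical question.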

Alas making this sort of thing work is 8 billion times harder than conceptualizing it.   But conceptualization is a start ;)

Logical Inference Control via Quantum Partial Search — Maybe


While running SingularityNET and thinking about next-generation OpenCog and helping Ruiting with our charming little maniac Qorxi are taking up most of my time, I can’t help thinking here and there about quantum AI …

Quantum computing is moving toward practical realization — it’s still got a long way to go, but clearly Schrödinger’s cat is out of the bag … the time when every server has a QPU alongside its GPU is now something quite concrete to foresee…

So I’m thinking a bit about how to use quantum partial search  (Grover's algorithm on a chunked database) to speed up backward-chaining logical inference dramatically. 

Suppose we are searching in some set S for those x in S that satisfy property P.   (The interesting case is where S is known implicitly rather than explicitly listed.)

Suppose we have some distribution f over S, which assigns a probability value f(x) to each element of S — interpretable as the prior probability that x will satisfy P.

Suppose we divide S into bins S1, S2, …, Sk, so that the expected number of x that satisfy P is the same for each Si (in which case the bins containing higher-probability x will have smaller cardinality) …

Then we can use quantum partial search to find a bin that contains x that satisfies P. 

If the size of S is N and the number of items per bin is a constant b, then the time required is (pi/4) sqrt(N/b).   Time required increases with the unevenness of the bins (which means non-uniformity of the distribution f, in this setup).
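The binning step itself is purely classical, and easy to sketch. Assuming a made-up exponential prior f over N items: sort items by descending prior and cut the cumulative mass into k bins of equal expected hits, so that high-prior bins end up with fewer items, as described above:

```python
import numpy as np

rng = np.random.default_rng(2)

N, k = 10_000, 16
f = rng.exponential(size=N)        # hypothetical unnormalized prior over S
f /= f.sum()                       # f(x): prior probability that x satisfies P

# Sort items by descending prior, then cut the cumulative probability mass
# into k bins, each carrying ~1/k of the expected hits.
order = np.argsort(-f)
cum = np.cumsum(f[order])
edges = np.searchsorted(cum, np.linspace(0, 1, k + 1)[1:-1])
bins = np.split(order, edges)

masses = [f[b].sum() for b in bins]   # ~1/k expected hits in every bin
sizes = [len(b) for b in bins]        # high-prior bins contain fewer items
print(sizes[0], sizes[-1])
```

The first bin (highest-prior items) comes out much smaller than the last, matching the observation that bins containing higher-probability x have smaller cardinality.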

In an inference context, for instance, suppose one has a desired conclusion C and n premises Pi.   One wants to know for what combinations Pi * Pj ==> C.  One then constructs an N = n^2 dimensional Hilbert space, which has a basis vector corresponding to each combination (i,j).  One call to the quantum oracle can tell us whether Pi * Pj ==> C for some particular (i,j) (note though that this call must be implementable as a unitary transformation on the Hilbert space — but following the standard math of quantum circuits it can be set up this way). 

Using straight Grover’s algorithm, one can then find which Pi * Pj ==> C in sqrt(N) time.
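The sqrt(N) behavior is easy to see in a tiny classical state-vector simulation of Grover's algorithm (toy sizes here, and the "oracle" is just a sign flip on a hard-coded marked index standing in for the pair with Pi * Pj ==> C):

```python
import numpy as np

N = 256          # e.g. n = 16 premises -> N = n^2 premise-pairs
marked = 123     # hypothetical index of the pair with Pi * Pj ==> C

# Start in the uniform superposition over all premise-pairs.
state = np.full(N, 1 / np.sqrt(N))

iters = int(np.floor((np.pi / 4) * np.sqrt(N)))   # ~ (pi/4) sqrt(N)
for _ in range(iters):
    state[marked] *= -1                 # oracle: flip the marked amplitude
    state = 2 * state.mean() - state    # diffusion: inversion about the mean

prob = state[marked] ** 2
print(f"{iters} iterations, P(marked) = {prob:.4f}")
```

After roughly (pi/4) sqrt(256) = 12 iterations the probability mass is almost entirely concentrated on the marked pair, versus the ~N/2 oracle calls a classical search would need on average.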

If one wants to leverage the prior distribution, one can find which bin the premise-pairs with Pi * Pj ==> C live in, in time (pi/4) sqrt(c*N/b), where c>1 is a correction for the non-uniformity of the prior and b is the average number of pairs per bin.

With a uniform prior, one is finding log(N/b) bits of information about what the premises are (and narrowing down to a search over b items).

With a non-uniform prior, one is still narrowing down *on average* to a search over b items, so is still finding log(N/b) bits on average about where the items are.

This could be useful e.g. in a hybrid classical-quantum context, where the quantum computer is used to narrow down a very large number of options to a more modest number, which are then searched through using classical methods.

It could also be useful as a heuristic layer on top of Grover’s algorithm.  I.e., one could do this prior-probability-guided search to narrow things down to a bin, and then do full-on Grover’s algorithm within the bin selected.

Constructing the bins in an artful way, so that e.g. bins tend to have similar entities in them, could potentially  make things work even faster.   Specifically, if the elements in each bin tend to be similar to each other, then the bin may effectively be a lower-dimensional subspace, which means the algorithm will work faster on that bin.   So there would be advantage to clustering the items being searched before constructing the bins.   If items that are clustered together tend to have similar prior probabilities, then the bins would tend to be lower-dimensional and things would tend to go faster.

Grover’s Algorithm and Natural Gradients

Now if we want to go even deeper down the rabbit hole — this funky paper shows that the quantum search problem reduces to finding optimal geodesic paths that minimize lengths on a manifold of pure density matrices with a metric structure defined by the Wigner-Yanase metric tensor …

Fisher metric geeks will simultaneously drop their jaws in amazement, and nod and grin in a self-satisfied way.

So what we see here is that Grover’s algorithm is actually just following the natural gradient ... well sort of…

Putting some pieces together … We have seen that partial quantum search (Grover’s algorithm over a chunked database) can be set up to provide rapid (on average) approximate location of an item in an implicit database, where the average is taken relative to a given probability distribution (and the distribution is used to guide the chunking of the database)….

Well then — this partial quantum search on a database chunked-according-to-a-certain-distribution, should presumably correspond to following the natural gradient on a manifold of pure density matrices with a metric structure conditioned by that same distribution…

Which — if it actually holds up — is not really all that deep, just connecting some (quantum) dots, but sorta points in a nice quantum AI direction…

Post-Script: Wow, This Stuff May Be Implementable?

I was amazed/amused to note some small-scale practical implementations of Grover’s Algorithm using Orbital Angular Momentum.

It’s all classical optics except preparation of the initial state (which is where the Oracle gets packed).

Could this be how our quantum-accelerated logical inference control is going to work?   Quantum optics plugins for the server … or the cortex?