
Sunday, November 25, 2012

Complex-Probability Random Walks and the Emergence of Continuous General-Relativistic Spacetime from Quantum Dynamics


(A post presenting some interesting, but still only half-baked, physics ideas....)

The issue of unifying quantum mechanics and general relativity is perennially bouncing around in the back of my mind.   I don't spend that much time thinking about it, because I decided years ago to focus most of my intellectual energy on AI and understanding the mind, but I can't help now and then revisiting the good old physics problem, and doing occasional relevant background reading....

Of course there are loads of approaches to unified physics out there these days, some of them extremely sophisticated.  Yet I can't help hoping for a conceptually simpler unification.   Here's what I'm thinking today....

I've been enjoying Frank Blume's 2006 paper A Nontemporal Probabilistic Approach to Special and General Relativity....   It consists of fairly elementary calculations done in pursuit of a philosophical point.  Blume wanted to show that the continuous spacetime assumed in special and general relativity can be approximated arbitrarily well by discrete random walks.   The subtle point is that these discrete random walks hop around randomly (according to a certain specified probability distribution) not only in space, but also in time.   So Blume's picture has particles hopping back and forth in time, which in his view is in accordance with Julian Barbour's perspective that "physical reality is essentially nontemporal and is best thought of as an ordered sequence of discrete static images" (see Barbour's book  The End of Time).  
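To give the flavor of Blume's picture, here is a toy walker that hops in time as well as space.  The hop distribution below is invented purely for illustration -- it is not the distribution Blume's paper actually specifies:

```python
import random

def spacetime_walk(steps, p_time_forward=0.8, seed=0):
    """Toy random walk that hops in both space and time.

    Each hop moves the particle +1 or -1 in t (usually forward) and
    +1 or -1 in x.  The hop probabilities here are made up for
    illustration, not taken from Blume's paper.
    """
    rng = random.Random(seed)
    t, x = 0, 0
    path = [(t, x)]
    for _ in range(steps):
        t += 1 if rng.random() < p_time_forward else -1
        x += 1 if rng.random() < 0.5 else -1
        path.append((t, x))
    return path

path = spacetime_walk(10000)
```

On large scales the backward hops average out and the walker traces an approximately classical worldline moving steadily forward in t -- which is the spirit of Blume's construction, where the continuum picture emerges statistically from the discrete hopping.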

I don't feel confident I know how physical reality is "best thought of" ... but I do agree with Barbour and Blume that the view of time as flowing forward from past to future is badly flawed.  This sense of unidirectional time-flow is part of  human psychology, and perhaps part of the dissipative nature of the human mind/body as a macroscopic, thermodynamic system ... but it's not fundamental in the way that people sometimes naively assume.   It's not there in microphysics, either -- at the quantum level the flowing of time from past to future is an alien concept.  If you think this sounds like nonsense, read Barbour's book!

But the philosophy of time is somewhat peripheral to the point I want to make here.   What I've been thinking about is the possibility of replacing Blume's random walk, which is defined in terms of ordinary real-number probabilities, with an analogous random walk defined in terms of complex-number probabilities.   

Saul Youssef, in a series of interesting papers (click here and scroll down to Youssef's name) has shown that if one replaces ordinary real-number probabilities with complex-number probabilities, and adds a few other commonsensical assumptions, then the equations of quantum theory basically pop out.        

This direction of research seems natural once one notes that, according to the basic math of probability theory, there are four options for creating probabilities that obey all the standard probability rules: real-number, complex-number, quaternionic and octonionic probabilities.  Classical physics uses the standard real-number option.  Quantum physics uses the complex-number option.

Ordinary quantum logic retains real-number probabilities, but adopts an unusual logic (lattice meet and join on the lattice of subspaces of a complex Hilbert space), which lacks some of the normal rules of Boolean logic, such as distributivity.    Youssef's exotic probability approach retains ordinary Boolean logic rules, but moves to complex-number probabilities.   
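The basic flavor of complex-valued probability fits in a few lines.  The numbers below are arbitrary illustrations, not drawn from any particular quantum system; the point is just that observable frequencies come from the squared magnitude of a *sum* of complex values, so alternatives can reinforce or cancel:

```python
# Two indistinguishable routes to the same outcome, each carrying a
# complex-valued "probability" (amplitude).  Arbitrary example numbers.
a1 = 0.5 + 0.5j
a2 = 0.5 + 0.5j

# Complex-probability rule: add the complex values, then square the
# magnitude of the sum -- the cross term produces interference.
p_constructive = abs(a1 + a2) ** 2

# If the second route carries the opposite phase, the routes cancel.
p_destructive = abs(a1 + (-a2)) ** 2

# Real-probability rule: no cross term, no interference possible.
p_classical = abs(a1) ** 2 + abs(a2) ** 2
```

Here the constructive case gives 2.0 (more than the sum of the parts), the destructive case gives 0.0, and the real-probability rule gives 1.0 regardless of phase -- these are unnormalized toy numbers, meant only to show the cross-term at work.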

What I began wondering is: What if you replace Blume's conventional random walk with a random walk in which each movement of a particle is quantified by a certain complex-number probability?

Then a particle may move in various spatiotemporal directions, and there is the possibility for constructive or destructive interference between the different directions.  

And it seems that, in the case where the interference between the different directions cancels out, one would get the same behavior as a real-probability random walk.  

So based on back-of-the-envelope calculations I did the other day, it looks like one can probably get General Relativity to emerge as a statistical approximation to the large-scale behavior of complex-number-probability (quantum) random walks, under conditions of minimal interference.
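The cancellation claim can be sanity-checked numerically.  A standard fact (and the heart of decoherence-style arguments) is that if each alternative picks up an independent random phase, the interference cross-terms average to zero, and the complex-probability rule reduces to the real-probability one.  This is a toy check of that fact, not a derivation of anything in Blume's or Youssef's formalism:

```python
import cmath
import random

def expected_prob(amplitudes, random_phases, trials=20000, seed=1):
    """Average |sum of amplitudes|^2, optionally scrambling each phase.

    With random_phases=True the cross (interference) terms average to
    zero over many trials, so the result approaches sum(|a|^2) -- the
    ordinary real-probability rule.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = 0 + 0j
        for a in amplitudes:
            if random_phases:
                a = a * cmath.exp(1j * rng.uniform(0, 2 * cmath.pi))
            s += a
        total += abs(s) ** 2
    return total / trials

amps = [0.5 + 0.0j, 0.0 + 0.5j, -0.5 + 0.0j]
coherent = expected_prob(amps, random_phases=False)  # interference survives
decohered = expected_prob(amps, random_phases=True)  # -> sum of |a|^2
```

The coherent value stays at |0.5j|^2 = 0.25, while the dephased average approaches 0.25 + 0.25 + 0.25 = 0.75, the classical value -- illustrating why minimal interference should recover real-probability random-walk behavior.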

How far does a perspective like this go, in terms of explaining the particulars of unified physics?  I don't know, and don't seem to have the time to do the rigorous calculations to find out, right now.  But it seems an interesting direction....   If you're a physicist interested in helping work out the details, drop me a line! ...


Monday, October 29, 2012

Avoiding the Tyranny of the Majority in Collaborative Filtering



One of the more annoying aspects of the modern Internet is crap comments.  Things have improved in recent years, but for a while the typical comments on YouTube music videos were among the most idiotic examples of human "thought" and behavior I've ever seen…

A common solution to the problem is to have readers rate comments.  Then comments that are highly-rated by readers get ranked near the top of the list, and comments that are panned by readers get ranked near the bottom of the list.  This mechanism is used to good effect on general-purpose sites like Reddit, and specialized-community sites like Less Wrong.

Obviously this mechanism is very similar to the one used on Slashdot and Digg and other such sites, for collaborative rating of news items, web pages, and so forth.

There are many refinements of the methodology.  For instance, if an individual tends to make highly-rated comments, one can have the rating algorithm give extra weight to their ratings of others' comments.
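A schematic version of that refinement might look like the following -- all the names and weights are invented for illustration, and a real system would iterate the process (re-deriving rater weights from comment scores until they stabilize):

```python
def weighted_comment_scores(ratings, rater_weight):
    """Score comments, weighting each rating by the rater's standing.

    ratings:      {comment_id: [(rater_id, score), ...]}
    rater_weight: {rater_id: weight}, e.g. derived from how well that
                  rater's own comments are rated.  Unknown raters
                  default to weight 1.0.  Schematic sketch only.
    """
    scores = {}
    for cid, rs in ratings.items():
        total_w = sum(rater_weight.get(r, 1.0) for r, _ in rs)
        if total_w == 0:
            scores[cid] = 0.0
            continue
        scores[cid] = sum(rater_weight.get(r, 1.0) * s for r, s in rs) / total_w
    return scores

scores = weighted_comment_scores(
    {"c1": [("alice", 1), ("bob", -1)]},
    {"alice": 3.0, "bob": 1.0},  # alice's ratings count triple bob's
)
```

With these toy numbers the split vote on "c1" resolves to a positive score (0.5), because the highly-rated commenter's vote carries more weight.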

Such algorithms are interesting and effective, but have some shortcomings as well, one of which is a tendency toward "dictatorship of the majority."  For instance, if you have content that's loved by a certain 20% of readers but hated by the other 80%, it will get badly down-voted.

I started wondering recently whether this problem could be interestingly solved via an appropriate application of basic graph theory and machine learning.

That is, suppose one is given a pool of texts (e.g. comments on some topic), a set of ratings for each text, and information on the ratings made by each rater across a variety of texts.

Then, one can analyze this data to discover *clusters of raters* and *networks of raters*.

A cluster of raters is a set of folks who tend to rate things roughly the same way.   Clusters might be defined in a context-specific way -- e.g. one could have a set of raters who form a cluster in the context of music video comments, determined via only looking at music video comments and ignoring all other texts.

A network of raters is a set of folks who tend to rate each others' texts highly, or who tend to write texts that are replies to each others' texts.

Given information on the clusters and networks of raters present in a community, one can then rank texts using this information.  One can rank a text highly if some reasonably definite cluster or network of raters tends to rank it highly.

This method would mitigate the "dictatorship of the majority" problem, and result in a text being highly rated if any "meaningful subgroup" of people liked it.  
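A minimal sketch of this pipeline follows, with a crude agreement-graph clustering standing in for any serious clustering algorithm -- the names, scores, and threshold are all invented for illustration:

```python
from itertools import combinations

def rater_clusters(ratings, min_agreement=0.75):
    """Group raters whose rating vectors mostly agree.

    ratings: {rater: {text_id: score}}.  Two raters are linked if they
    give the same score on at least min_agreement of the texts both
    rated; clusters are the connected components of that graph.  A
    deliberately crude stand-in for real clustering.
    """
    raters = list(ratings)
    adj = {r: set() for r in raters}
    for a, b in combinations(raters, 2):
        shared = set(ratings[a]) & set(ratings[b])
        if not shared:
            continue
        agree = sum(1 for t in shared if ratings[a][t] == ratings[b][t])
        if agree / len(shared) >= min_agreement:
            adj[a].add(b)
            adj[b].add(a)
    clusters, seen = [], set()
    for r in raters:
        if r in seen:
            continue
        comp, stack = set(), [r]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.add(cur)
            stack.extend(adj[cur] - seen)
        clusters.append(comp)
    return clusters

def subgroup_score(text, clusters, ratings):
    """Best average rating the text gets from any single cluster."""
    best = None
    for c in clusters:
        scores = [ratings[r][text] for r in c if text in ratings[r]]
        if scores:
            m = sum(scores) / len(scores)
            if best is None or m > best:
                best = m
    return best

ratings = {
    "m1": {"A": -1, "B": 1}, "m2": {"A": -1, "B": 1}, "m3": {"A": -1, "B": 1},
    "f1": {"A": 1, "B": 1},  "f2": {"A": 1, "B": 1},
}
clusters = rater_clusters(ratings)               # majority vs. minority
score_A = subgroup_score("A", clusters, ratings)
```

Here text "A" has a global average of -0.2 and would sink in a conventional ranking, but the minority cluster rates it 1.0, so the subgroup score surfaces it.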

Novel methods of browsing content also pop to mind here.  For instance: instead of just a ranked list of texts, one could show a set of tabs, each giving a ranked list of texts according to some meaningful subgroup.

Similar ideas could also be applied to the results of a search engine.  In this case, the role of "ratings of text X" would be played by links from other websites to site X.   The PageRank formula gives highest rank to sites that are linked to by other sites (with highest weight given to links from other sites with high PageRank, using a recursive algorithm).  Other graph centrality formulas work similarly.

As an alternative to this approach, one could give high rank to a site if there is some meaningful subgroup of other sites that links to it (where a meaningful subgroup is defined as a cluster of sites that link to similar pages, or a cluster of sites with similar content according to natural language analysis, or a network of richly inter-linking sites).   Instead of a single list of search results, one could give a set of tabs of results, each tab listing the results ranked according to a certain (automatically discovered) meaningful subgroup.
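For reference, the recursive PageRank formula mentioned above fits in a few lines of power iteration.  This is a minimal textbook sketch, not Google's production algorithm; a subgroup-aware variant would run something like it separately over each discovered cluster of sites:

```python
def pagerank(links, damping=0.85, iters=50):
    """Minimal power-iteration PageRank.  links: {page: [outlinked pages]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

rank = pagerank({"a": ["c"], "b": ["c"], "c": ["a", "b"]})
```

In this three-page toy graph, "c" collects links from both "a" and "b" and so ends up with the highest rank, while "a" and "b" tie by symmetry.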

There are many ways to tune and extend this kind of methodology.   After writing the above, a moment's Googling found a couple papers on related topics, such as:

http://iswc2004.semanticweb.org/demos/01/paper.pdf

http://www.citeulike.org/user/abellogin/article/2200728

But it doesn't seem that anyone has rolled out these sorts of ideas into the Web at large, which is unfortunate….

But the Web is famously fast-advancing, so there's reason to be optimistic about the future.  Some sort of technology like I've described here, deployed on a mass scale, is going to be important for the development of the Internet and its associated human community into an increasingly powerful "global brain" …