Tuesday, May 15, 2007

Technological versus Subjective Acceleration

This post is motivated by an ongoing argument with Phil Goetz, a local friend who believes that all this talk about "accelerating change" and approaching the Singularity is bullshit -- in part because he doesn't see things advancing all that amazingly exponentially rapidly around him.

There is plenty of room for debate about the statistics of accelerating change: clearly some things are advancing way faster than others. Computer chips and brain scanners are advancing more rapidly than forks or refrigerators. In this regard, I think, the key question is whether Singularity-enabling technologies are advancing exponentially (and I think enough of them are to make a critical difference). But that's not the point I want to get at here.

The point I want to make here is: I think it is important to distinguish technological acceleration from subjective acceleration.

This breaks down into a couple sub-points.

First: Already by this point in history, I suggest, advancement in technology has far outpaced the ability of the human brain to figure out new ways to make meaningful use of that technology.

Second: The human brain and body themselves pose limitations regarding how thoroughly we can make use of new technologies, in terms of transforming our subjective experience.

Because of these two points, a very high rate of technological acceleration may not lead to a comparably high rate of subjective acceleration. Which is, I think, the situation we are seeing at present.

Regarding the first point: Note that long ago in history, when a new technology was created, it lasted quite a while before becoming obsolete, so each new technology was exploited pretty damn thoroughly before its successor came along.

These days, though, we've just BARELY begun figuring out how to creatively exploit X, when something way better than X comes along.

The example of music may serve to illustrate both of these points.

The invention of the electronic synthesizer/sampler keyboard was a hell of a breakthrough. However, the music we humans actually make has not changed nearly as much as the underlying technology has. By and large we use all this advanced technology to make stuff that sounds harmonically, rhythmically and melodically not that profoundly different from pre-synthesizer music. Certainly, the degree of musical change has not kept up with the degree of technological change: Madonna is not as different from James Brown as a synthesizer keyboard is from an electric guitar.

Why is that?

Well, humans take a while to adapt. People are still learning how to make optimal use of synthesizer/sampling keyboards for making interesting music ... but while people are still relatively early on that learning curve, technology has advanced yet further, and computer music software gives us amazing new possibilities ... that we've barely begun to exploit...

Furthermore, our musical tastes are limited by our physiology. I could make fabulously complex music using a sequencer, with thousands of intersecting melody lines carefully calculated, but no human would be able to understand it (I tried ;-). Maybe superhuman minds will be able to use modern music tech to create music far subtler and more interesting than any human music, for their own consumption.

And, even when acoustic and cognitive physiology isn't relevant, the rate of growth and change in a person's music appreciation is limited by their personality psychology.

To take another example, let's look at bioinformatics. No doubt that technology for measuring biological systems has advanced exponentially. As has technology for analyzing biological data using AI (my part of that story).

But, AI-based methods are very slow to pervade the biology community due to cultural and educational issues ... most biologists can barely deal with stats, let alone AI tech....

And, the most advanced measurement machinery is often not used in the most interesting possible ways. For instance, microarray devices allow biologists to take a whole-genome approach to studying biological systems, but, most biologists use them in a very limited manner, guided by an "archaic" single-gene-focused mentality. So much of the power of the technology is wasted. This situation is improving -- but it's improving at a slower pace than the technology itself.

Human adoption of the affordances of technology has become the main bottleneck, not the technology itself.

So there is a dislocation between the rate of technological acceleration and the rate of subjective acceleration. Both are fast but the former is faster.

Regarding word processing and Internet technology: our capability to record and disseminate knowledge has increased TREMENDOUSLY ... and, our capability to create knowledge worth recording and disseminating has increased a lot too, but not as much...

I think this will continue to be the case until the legacy human cognitive architecture itself is replaced with something cleverer, such as an AI or a neuromodified human brain.

At that point, we'll have more flexible and adaptive minds, making better use of all the technologies we've invented plus the new ones they will invent, and embarking on a greater, deeper and richer variety of subjective experiences as well.

Viva la Singularity!


Anonymous said...

Ben, what you are saying here is really just another variation of a problem that has been going on for even longer. Lord Rayleigh was probably one of the last physicists who was competent in the entire field of physics. Since his time, physics/math (and all other disciplines of science) have ballooned into enormous edifices that no one brain can any longer hold. As a consequence, there must be a tremendous number of discoveries that the human race could be making right now but isn't, because there is no one brain looking at the total amount of knowledge available and synthesising it. Einstein had to be introduced to Riemannian geometry by a colleague in order to discover how to write down General Relativity. Today, similar things seem to be happening with string theory. Think of the vast amount of chemistry that sits in rows and rows of research papers all over the world. Just imagine the startling advances that a super intelligence could produce from that body of data if it could be assimilated all at once. The age of the AGI is certainly at hand out of necessity, as this exponential knowledge increase will soon be followed by an exponential ignorance of knowledge, as the fraction that any one human brain can soak up gets smaller and smaller.
Eric B. Ramsay

Ben said...

A reply to Eric Ramsay's comment: Yes, indeed!

We built a prototype system for the NIH Clinical Center that used NLP technology to read biomedical research abstracts, extract logical relationships therefrom, and then do reasoning to figure out new biological knowledge not contained in any of the individual abstracts.

The final production system was never completed due to issues w/in the NIH, but, the principle was demonstrated (and presented at the bioNLP workshop of the 2006 ACL conference). Well enough to make clear to me that even a pretty simple, subhuman narrow-AI system could make loads of scientific discoveries by putting together the pieces that are out there, right now, online.

-- Ben Goertzel

Bayle Shanks said...

Thanks for explaining this so clearly. I'm with Mr. Goetz; I think a singularity is not near, for this reason.

You say that technology is growing exponentially even though human usage of it is not. However, the whole reason that one might expect an exponential increase in technology is that one expects a positive feedback loop in which the rate of technological development is proportional to the amount of existing technology.

But if there is an inherent human limitation in how fast we can integrate new technology, then this will prevent us from rapidly utilizing new technology to produce more new technology -- so the feedback loop is broken.

Symbolically, if the rate of tech growth was proportional to the amount of tech we already have, we'd have the differential equation "tech' = k*tech" for some constant k, the solution of which is an exponential.

But a model with an inherent limit on speed of technological absorption yields only polynomial growth after that limit is reached:

tech'' = human_usage(tech)

human_usage(tech) = min(k*tech, inherent_limit)

In this model, after the limit is reached, acceleration becomes a constant, and we get "tech = O(time^2)" (which is not exponential but is admittedly still pretty fast).
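The contrast between the two models above is easy to see numerically. The following is a minimal sketch using forward-Euler integration; the constants (`k`, `inherent_limit`, the step size) are illustrative assumptions, not fitted values:

```python
# Compare the two growth models from the comment above, integrated
# with the forward Euler method. All constants are illustrative.

def simulate(steps=200, dt=0.1, k=0.5, inherent_limit=5.0):
    """Return final tech levels under both models after steps*dt time units."""
    # Model A: tech' = k * tech  (pure positive feedback -> exponential)
    tech_a = 1.0
    # Model B: tech'' = min(k * tech, inherent_limit)
    #          (absorption-capped acceleration -> eventually quadratic)
    tech_b, velocity_b = 1.0, 0.0
    for _ in range(steps):
        tech_a += dt * k * tech_a          # exponential growth step
        accel = min(k * tech_b, inherent_limit)  # capped acceleration
        velocity_b += dt * accel
        tech_b += dt * velocity_b
    return tech_a, tech_b

a, b = simulate()
print(f"exponential model: {a:.1f}, capped model: {b:.1f}")
```

Run with these parameters, the uncapped model ends up orders of magnitude ahead of the capped one, which settles into roughly quadratic growth once the absorption limit binds.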

Of course, once we can modify our minds or create other minds this "inherent human limit" might disappear. But I argue that that point is a long way off. If we're already being limited by the tech absorption bottleneck, then the velocity of tech increases only linearly from here on out. Because it seems that there is an enormous amount of research left to be done before we can make ourselves smarter or create A.I. smarter than us, I don't expect that point to be reached soon.

One mitigating possibility is that economic development could greatly multiply the number of professional researchers in the world, which might multiply the velocity of technology almost proportionally.

I apologize in advance if I've made any mistakes in the math or otherwise.

Joel said...

RE: music...

I think you should check out some dance or electronic music. These are whole genres spawned by the creation of synthesizers and software synthesis of sound.

In particular, psytrance, IDM and the current wave of electro-breaks are all mostly machine generated apart from the lyrics... and the lyrics are usually mashed/chopped and altered substantially too. I've got some stuff downloadable from my mixblog:

IDM is probably the only one that starts mucking around with erratic or fractal beats, though - and it won't be everyone's cup of tea!