Friday, July 10, 2015

Life Is Complexicated

I grew up, intellectually, drinking the Complexity Kool-Aid from a big fat self-organizing firehose.

Back in the 1980s, I ate up the rhetoric, and the fascinating research papers, emanating from the Santa Fe Institute and its ilk.  The core idea was extremely compelling — out of very simple rules, large-scale self-organizing dynamics can give rise to extraordinarily complex and subtle phenomena … stuff like stars, ecosystems, people, … the whole physical universe, maybe?

Fast forward a few decades and how does the “complexity” paradigm feel now?

(for simplicity, in the rest of this blog post I will use the word “complexity” to refer to Santa Fe Institute style, self-organizing-systems-ish “complexity”, rather than other meanings of the word)

Artificial life hasn’t panned out all that spectacularly — it has led to lots of cool insights and funky demos, but in the end attempts to get really richly-behaving life-forms or ecosystems to self-organize out of simple rules in the computer haven’t gone that well.

In AI, simplistic “complexity” oriented approaches — e.g. large, recurrent neural networks self-organizing via Hebbian learning or other local rules; or genetic programming systems — haven’t panned out insanely well either.  Again, research results have been obtained and a lot has been learned, but more impressive progress has been made via taking simple elements and connecting them together in highly structured ways to carry out specific kinds of learning tasks (e.g. the currently super-popular deep learning networks).
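For concreteness, the kind of purely local Hebbian rule alluded to above can be sketched in a few lines (a toy illustration only — the function name, network size, and learning rate here are all made up for this sketch, not drawn from any particular research system):

```python
import numpy as np

def hebbian_step(W, x, lr=0.01):
    """One purely local update: each weight W[i][j] changes based only on
    the activity of the two units it connects -- no global error signal."""
    y = np.tanh(W @ x)               # post-synaptic activations
    W_new = W + lr * np.outer(y, x)  # "units that fire together wire together"
    return W_new, y

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))
x = np.array([1.0, 0.0, 1.0, 0.0])   # only units 0 and 2 are active
W2, y = hebbian_step(W, x)
```

The point of the sketch is the locality: nothing in the update looks at the network as a whole — which is exactly why coaxing rich global behavior out of such rules alone has proven so hard.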

What about modeling economies, physical or chemical systems, ecosystems, etc.?   “Complex systems” style computer simulation models have provided insightful qualitative models of real systems here and there.  To some extent, the early message of the Santa Fe Institute and the other early complexity pioneers has simply diffused itself throughout science and become part of standard practice.   These days “everybody knows” that one very important way to understand complex real-world phenomena is to set up computer simulation models capturing the key interactions between the underlying elements, and run the simulations with various parameter values and look at the results.
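That "set up a simulation, sweep the parameters, look at the results" recipe can be illustrated with even the simplest textbook model — the logistic map, whose long-run behavior shifts from extinction to a fixed point to chaos as a single control parameter is varied (a toy sketch, not a claim about any specific SFI-style model):

```python
def logistic_trajectory(r, x0=0.2, steps=500):
    """Iterate the logistic map x -> r * x * (1 - x) and return the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Sweep the control parameter and inspect the long-run behavior:
#   r = 0.8 -> the population dies out to 0
#   r = 2.5 -> it settles to a stable fixed point (1 - 1/r = 0.6)
#   r = 3.9 -> chaotic; it never settles at all
results = {r: logistic_trajectory(r) for r in (0.8, 2.5, 3.9)}
```

Qualitatively different regimes from one equation and one knob — the whole "complex systems" simulation methodology in miniature.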

Universal Laws of Complexity?

But back in the 80s the dream of complexity science seemed to go well beyond basic pragmatic lessons about running computer simulations, and high level discussions of properties shared by various systems in various contexts.  Back in the 80s, there was lots of talk about “universal laws of complex systems” — about using simulations of large numbers of very simple elements to understand the rules of complexity and emergence, in ways that would give us concrete lessons about real-world systems.

The great Stephen Wolfram, in 1985, foresaw cellular automaton models as a route to understanding “universal laws of complexity”   … decades later his book “A New Kind of Science” pushed in the same direction, but ultimately not all that compellingly.

I myself, with my occasional (er) grandiose tendencies, was a huge fan of this vision of universal laws of complex systems.   I even tried to lay some out in detail, on pages 67-70 of my 2001 book “Creating Internet Intelligence”.

And one still sees mention of this idea here and there — e.g. this, from 2007:

“The properties of a complex system are multiply realisable since they satisfy universal laws—that is, they have universal properties that are independent of the microscopic details of the system.”

But, truth be told, the “laws” of complexity found so far are just not all that law-like.  The grand complexity-science vision has panned out a little, but more complicatedly than anticipated.   Certainly broad self-organization/emergence based phenomena common to a huge variety of real-world complex systems have been identified.  Phase transitions, small world networks, strange attractors, self-organized criticality and so forth are now, simply, part of the language of science.   But these are more like “common phenomena, existing alongside all the other known phenomena characterizing systems, and manifesting themselves in various systems in various subtle and particular ways” — not remotely as law-like as, say, the “laws” of physics or chemistry.
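One of these "common phenomena" is easy to see firsthand: the sudden emergence of a giant connected component in a random graph, a classic phase transition. The sketch below builds random graphs with a simple union-find (the graph size and degree values are arbitrary choices for illustration):

```python
import random

def largest_component_fraction(n, avg_degree, seed=0):
    """Build a random graph on n nodes with ~avg_degree*n/2 edges and
    return the fraction of nodes in its largest connected component."""
    rnd = random.Random(seed)
    parent = list(range(n))

    def find(a):                       # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for _ in range(int(avg_degree * n / 2)):
        a, b = find(rnd.randrange(n)), find(rnd.randrange(n))
        if a != b:
            parent[a] = b              # merge the two components

    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

# Below average degree 1, components stay tiny; just above it,
# a giant component abruptly appears -- a genuine phase transition.
below = largest_component_fraction(5000, 0.5)
above = largest_component_fraction(5000, 2.0)
```

The transition is real and robust — but notice how little it tells you about any *particular* network's specialized structure, which is the essay's point.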

(“Law” of course is a metaphor in all these cases, but the point is that the observational patterns referred to as physical or chemical “laws” are just a lot more solidly demonstrated and broadly applicable than any of the known properties of complex systems…)

Why So Complicated?

So why has the success of complexity science been so, well, complicated?

Some would say it’s because the core ideas of complexity, emergence, self-organization and so forth just aren’t the right ones to be looking at.

But I don’t think it’s that.  These are critical, important ideas.

Rather, I think the correct message is a subtler one: Real-world systems aren’t just complex, in the Santa Fe Institute sense of displaying emergent properties and behaviors that self-organize from the large-scale simple interactions of many simple elements.

Rather, real-world systems are what I’ll — a bit goofily, I acknowledge — call “complexicated”.

That is: They are complex (in the Santa Fe Institute sense) AND complicated (in the sense of just having lots of different parts that are architected or evolved to have specific structures and properties, which play specific roles in the whole system).

A modern car or factory is a complicated system — with many specialized parts, each carefully designed to play its own role.

Conway’s Game of Life (a popular, interesting cellular automaton model), or a giant Hebbian recurrent neural net, is a complex system in the SFI sense — it has emergent high-level properties that can ultimately be traced back to zillions of simple interactions between the simple parts.  But doing the tracing-back in detail would be insanely complicated and computationally resource-intensive.
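To make the "simple rules, hard-to-predict behavior" point concrete: the entire rule set of the Game of Life fits in a few lines, yet the patterns it produces (gliders, oscillators, even universal computers) are famously impossible to read off from the rule itself. A minimal sketch:

```python
from collections import Counter

def life_step(cells):
    """One step of Conway's Game of Life; cells is a set of live (x, y) pairs.
    A dead cell with exactly 3 live neighbors is born; a live cell with
    2 or 3 live neighbors survives; every other cell is dead next step."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in cells)}

# The "blinker": a 3-cell line that oscillates with period 2
blinker = {(0, 0), (1, 0), (2, 0)}
```

That's the whole system — everything else is emergence.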

On the other hand, a human body, or a modern economy, or the Internet, combines both of these aspects.   These are complicated systems, with many specialized parts, each carefully created to play its own role — yet key aspects of the roles these parts play involve their entrainment in complex emergent dynamics that can ultimately be traced back to zillions of simple interactions between simple parts (though doing the tracing-back in detail would be insanely complicated and computationally resource-intensive).

These kinds of “complexicated” systems lack the elegance of a well-designed car or factory, and they also lack the elegance of Conway’s Game of Life or a Hopfield formal neural network.  They are messy in a lot of different ways.  They have lots of specialized parts all working together, AND they have complex holistic dynamics that are hard to predict from looking at the parts, but that are critical to the practical operation of the parts.

Why does our world consist so much of this sort of perversely complexicated system, instead of nice elegant well-organized systems, or simplistic SFI-style “complex systems” models?   Because when dealing with severe resource constraints, evolutionary processes are going to make use of Any Means Necessary (well, any means they can find within the searching they have the resources to do).  Both self-organizing emergence and well-organized factory-style organization are effective ways of making big systems do difficult things.   If they can be gotten to work together in the same system, sometimes that’s even better.

The simple, uncomplicated self-organizing systems that SFI-style "complexity science" likes to study are not generally capable of giving rise to interesting phenomena given realistic amounts of resources.  That's a bit inelegant, but it's a cost of living in a universe that imposes such severe resource constraints on its residents.  To get interesting complex-self-organization-ish phenomena in reality, one generally needs to interweave some complicatedness with one's complexity.  Which means that one obtains systems whose behavior is a mixture of universal complex-systems properties, and highly specific properties resulting from complicated particulars.   Which is either ugly and messy, or weirdly beautiful, or completely plainly normal and real, depending on one's perspective!

Life and Mind are Complexicated

The all-knowing Urban Dictionary defines “complexicated” as

Something so complex, it's not enough to say it's complicated.

Girl 1: So how are things going with you and that new guy you're seeing?

Girl 2: I don't know, things are really complexicated with us. I'm not sure where things are going.

which isn’t exactly the meaning I’m using here, but I figure it’s reasonably in synch aesthetically.

Of course, followers of my AI work will have already anticipated my closing comment here.  The OpenCog AGI design I’ve co-created and am currently working on, combines SFI-style complexity with complicatedness in various subtle ways, some of which can be frustrating to work with at times.

I have spent a fair bit of time trying to figure out how to make a fundamentally simpler AGI design with the smell of “hey, this could work too” — but I haven’t succeeded at that, and instead have kept pressing ahead with OpenCog, along with some great colleagues.   If the line of thinking in this blog post is correct, then the search for a “fundamentally simpler” design may be misguided.   Getting rid of either the complexity or the complicatedness may not be possible.

Or in short, OpenCog is complexicated ... human minds and bodies are complexicated ...  the economy is complexicated ... the Global Brain is complexicated ... Life is complexicated.


-- And hey, even if the word doesn't have legs (or has complexicated legs -- yeesh, that sounds like an unfortunate disease!!), the underlying concept is important! ;-)

Some Interesting Comments

I posted a link to this post on the Global Brain and AGI email lists and got some interesting responses, e.g.

From Weaver (David Weinbaum):

Yes, I resonate with Ben's observations. It seems that real complexity defies universality as a principle.

Whenever we can describe a phenomenon so that its local particularities are easily decoupled from universal patterns — e.g. describing a classical mechanical system in terms of the Hamiltonian and its initial conditions — this is not a complex phenomenon. I would also add to the list of complexicated systems those systems where statistical descriptions do not contribute a lot to their understanding.
Things that seem to be characteristic to complexicated systems are:

  1. Heterogeneity of the elements at various scales.
  2. Diversity of properties and functions.
  3. Degeneracy - every property and function has multiple ways of realization.
  4. Very different structures realizing very similar functions, while very similar structures may realize radically different functions. (I call it transductive instability, a concept I am working on developing.) This seems to be a major key to the evolution of complex systems.
  5. Variable range correlations - Local interactions may have global effects and vice versa, global patterns may affect local interactions. In other words, it is often hard or entirely impossible to clearly delineate distinct scales within such systems. 
  6. Contingency - certain behaviors are contingent and unpredictable.    

At least some of these are examined in some depth in two papers written by me and Viktoras Veitas that use the theory of individuation to tackle complexity of the complexicated kind in the context of intelligence and cognition:

From Francis Heylighen:

Ben makes a number of correct observations here, about truly complex systems (which he calls "complexicated") being more than ordered patterns emerging out of simple, homogeneous agents and rules. In practice, evolution works by developing specialized modules, which are relatively complex systems highly adapted to a particular function or niche, and then fitting these modules together so as to combine them recursively into higher-order systems. This leads to a hierarchical "architecture of complexity", as envisaged by Herbert Simon, where you find complexity at all levels, not only at the top level.

The picture of Simon is still too simple, because the modules are in general not neatly separable, and because sometimes you have distributed patterns of feedback and coordination that exploit the local capabilities of the modules. But I agree with Ben that the old "Santa Fe" vision of deterministic "laws of complexity" that specify how simple rules produce emergent patterns is equally unrealistic. The combination of the two, as Ben seems to propose, is likely to be more fruitful.

My own preferred metaphor for a complex adaptive system is an ecosystem, which consists of an immense variety of complex organisms, from bacteria to bears and trees, and assemblies of such organisms, interacting non-linearly with each other and with the somewhat simpler physical and chemical processes of climate, resource flows, erosion, etc... The components of such a system have co-evolved to be both largely autonomous, and mutually dependent via intricate networks, producing a truly "complexicated" whole.

From Russell Wallace:

Good article! Or, put another way:

  1. The Santa Fe school implicitly optimizes for smallness of source code versus aesthetic interestingness of results.
  2. Biology optimises for ease of creation by evolution versus performance.
  3. Technology optimises for ease of creation by human engineers versus performance.

Looked at that way, it makes sense that 1 isn't a good model for 2 or 3.


Tim Tyler said...

Good one. I tried to think of alternative terminology. "Hierarchical complexity" is what came to mind. It doesn't mean exactly the same thing - but it means something useful and good. It is more long-winded - but also more self explanatory. At least, *I* think it is self-explanatory: many of the 20,000 Google hits seem to be using the term a little differently from its most obvious meaning.

Boris Kazachenko said...

Universal laws of complexity is what pure math is all about. Has always been, the only difference is that we now have computers to simulate it.
Reducing real-world complexity, that's empirical science, again nothing new here. Generating frivolous complexity, that's art, also computer-assisted now.
In engineering, including AGI, you have to define purpose or fitness values, however abstractly. Complexity here is a cost, not a benefit.
Over-complexicating the concept of complexity is grievous violation of Occam's razor :).

Danko Nikolic said...

A very important insight, and very nicely described. I think you are right. We have to make efforts towards building complexicated machines. But here is a question: Can human engineers understand something that is complexicated? Can the understanding be sufficiently deep to make actual progress towards AGI? Or is a complexicated system beyond the reach of a human mind?

pconroy said...

Yes, I think a complexicated system is beyond the reach of the "current" human mind.

My prediction from Jan 2015 is that it will be easier to enhance human intelligence, via neural implants, neural modem technology, and so forth, than to build AGI. In fact I think humans, artificially enhanced, will become superintelligent before machines.
Another idea I have is that maybe we need to gene edit (via CRISPR/Cas9) humans for SNPs linked to higher IQ, and turn out a batch of 300 IQ geniuses, maybe with networked minds, before we can achieve machine superintelligence.



BlaiseP said...

Hebbian learning didn't pan out because it was a poor model of the neuron itself. It completely ignored the glial cells — which, in fairness to Hebb, nobody understood at the time. He was correct as far as his understanding went — but the failure you described follows from the failure of the model.

Da Vinci: Although nature commences with reason and ends in experience it is necessary for us to do the opposite, that is to commence with experience and from this to proceed to investigate the reason.

The model is the problem. Intelligence is never artificial, it arises of utter necessity. The smart ones, the fast ones, the sneaky ones - they survive. The stupid and the slow and the obvious ones get eaten.

ZHD said...

At this time I don't have any intellectual content to contribute but I wanted to say that I think this is one of clearest and most important posts you've written in quite some time. There's a lot of great things in a short amount of space. Nicely done.

YYZ said...

Ben, what seems to be "complicating" the issue of complexity is the division between the agent and the environment. Whatever state transitions are taking place inside the evolving agent(s) in question must be fed back into the overall system. Where agents evolve out of discord with the overall system, they evolve into isolation from it, evolving away from self-sustainability. Implied here is that evolutionary self-configuration & self-sustainability are autologous functions of the overall system, this being an objective requirement where the self interpretive and self configurative properties of the system are universal. Unless of course it turns out that you can't make a cup of coffee on Mars, but that would be weird, esp if you couldn't taste it.

Grimeandreason said...

I like this idea.

Every paradigm evolves into extremes, from which new generations synthesize knowledge and concepts. I agree that complexity theory has its idealisms, but I recognise that complex systems work via feedback, not simple bottom-up emergence.

In this feedback model, top-down influences are essentially Complication enforced upon Complexity. It is a crucial dynamic to include in any model.

So perhaps the dichotomy in Complexity Theory, when it emerges, here and now in fact lol, will be this Idealism vs Pragmatism familiar to all dichotomies.

Christopher said...

pconroy is correct. "AI" will evolve by giving humans technological tools to enhance their intelligence, period. Now why is that? Intelligence is a function of life and cannot be simulated. Early on, critics of AI insisted on that insight. I think the idea comes from the profound pessimism that came out of the middle part of the last century, where it seemed mankind needed help from his collective foolishness and the hope was that aliens or computers would save us from ourselves. Systems Analysis worked very well initially, and complexity has helped us understand aspects, as was said in this piece, of the natural world. There is also the problem of the murder of metaphysics symbolically accomplished by Wittgenstein. Actually it was a faux execution but that's too much to go into here. My point is that unless we start to work out the meaning of life we aren't going to be able to plumb the depths of complexity, because C.P. Snow's two cultures theory still stands, though I come out with the opposite conclusion — it is the arts, the humanities, that must rise now and bring meaning, somehow, back to the technical and scientific worlds. Science ignores paradox and tries to find little hedges, sort of like Ptolemaic astronomy tried to handle new and paradoxical data as measuring instruments matured. Thing is, as the author suggests, you just have the wrong meta-theory and wrong metaphysics.

The truth lies in the deep study of paradox which lies in the realm of magic, of Fortean phenomena, of the millions of bits of data or anomalies that science deliberately neglects for ideological reasons and, of course, religion and mysticism. We live in something very much like a multiverse and that we know that we may not know or perceive, obviously, physical phenomena that cannot be measured or perceived. I suggest that readers turn to Abbott's strange little book *Flatland* to begin to perceive the problem we face in truly understanding or not-understanding the world. The age of logical positivism is long gone.