A journalist emailed me today and asked me some questions about the possibility of the Internet becoming conscious. The questions and my answers follow:
> 1) Why do some people think it is possible for the internet (or internet plus humans) to become conscious? Is it to do with the network architecture?
Many scientists believe that consciousness is a property that will inevitably emerge from any complex system that has the right sort of internal dynamics, and the right sort of interaction with its environment.
Exactly what the right sort of dynamics and interactions are, different theorists disagree on. But it seems plausible that the Internet may have enough of them to develop its own sort of consciousness. The Internet perceives and acts on the world; it stores declarative, episodic and procedural memories; it recalls some information and forgets other information; etc. In short, it behaves a fair bit like a human mind, though there are a lot of differences too.
According to this perspective, the Internet might already have a degree of consciousness, though of a type quite different from human consciousness.
Neuroscientist Susan Greenfield views consciousness as consisting of "whole-brain activation patterns". In this sense one would say that the Internet of today has a more fragmented, dissociated consciousness than a human mind ... there aren't so many "whole-internet activation patterns", though there are intense patterns spanning large portions of the Internet.
Of course, there are many philosophies of consciousness. My own view of consciousness is a bit eccentric for the scientific world though rather commonplace among Buddhists (which I'm not): I think consciousness is everywhere, but that it manifests itself differently, and to different degrees, in different entities.
So to me the interesting question is whether the Internet has (or will develop) consciousness of the same type as humans, or maybe even of a more advanced and intricate type.
It seems that as the Internet expands and grows richer, it *could* develop a more human-like, more unified consciousness than it has now ... with more coherent "whole Internet activation patterns"...
> 2) What might be the consequences of such an event? Do you think it might be something that we should welcome?
The potential consequences of the Internet developing more coherent holistic activation patterns (ergo more humanlike consciousness) are rather difficult to predict, I find!
However, I personally am pessimistic about the future in the case that humans remain the most powerful minds on the planet. I don't trust us to use our increasingly advanced technologies in an ethical and nondestructive way.
So I think the outlook for humanity is probably better in the case that an emergent, coherent and purposeful Internet mind develops, than in the case where it doesn't.
But there is a lot of uncertainty in either case!
> 3) If it were possible, what would be needed to make the internet conscious? How far away from that situation are we?
My guess is that humanlike consciousness is not going to spontaneously evolve from the Net. However, I think someone could engineer it, by specifically creating an AI system on a server farm, oriented toward serving as a kind of "central cognition engine" for the Internet as a whole.
This central cognition engine wouldn't need to control everything on the Net; it would just need to read a lot of the information out there on the Net, and then insert information of its own creation in appropriate locations (posting to email lists, creating web pages, buying and selling things, etc.).
The engine might be created with some other primary purpose (e.g. as an artificial scientist aimed at making new discoveries via collaborating with human scientists online), or it might be created specifically with the goal of transforming the Internet into a more coherent, more humanlike intelligence. Either way the effect might be the same.
This is the scenario I described in my 2001 book "Creating Internet Intelligence," and I still think it is a plausible one.
Hi, Ben, I'm Stephan's friend Matt. We've met a few times at his house.
Do you know of any theory, model or test for determining if a system has developed an emergent consciousness or intelligence? My gut feeling agrees with yours that consciousness exists everywhere. But can the consciousness of a system isolated from the greater ontological whole exist? (My own subjective experience would say: yes!) Instead of asking if a computer network could be considered conscious, wouldn't it be equally valid to ask if an arbitrarily defined system could be said to have a consciousness? Is the Turing test the best we have? It seems a bit human-centric.
For example, take the Bay Bridge between San Francisco and Oakland. Let's define the bridge system as not only the physical steel, concrete and earth of Yerba Buena Island, but also the maintenance crews of CalTrans, the automobile traffic passing over its span, the California Highway Patrol enforcing laws enacted by the state legislature, and even the budget line items supporting its upkeep. The system thus defined is able to support itself against the wear and tear of its environment, clear wrecks out of the way to keep traffic flowing, etc. It seems unlikely that a human-like consciousness would emerge from this system, but is there a rating system that might determine that the bridge system rates at some level of biological awareness, such as "~= colony creature - jellyfish"?
My suspicion is that our understanding of consciousness is so poor that there isn't much in the way of guidance in this area. If the lofty goal of meeting the Turing test is the only standard, AI's greatest problem right now seems to be the lack of a clearly defined problem domain. If we could determine what the requirements of a conscious yet non-human system might be, that in itself would seem quite an advancement.
Yep, I remember you of course...!
As you suspect there is no rigorous theory of consciousness; nor any reasonably well agreed upon non-rigorous theory...
However, I don't think this lack is really responsible for the lack of progress toward AI.
We do have humans as an exemplar, so "make machines that think pretty much like humans" is a fairly clear, though not rigorous, goal for AI...
A hard problem is outlining *incremental* steps toward adult-human-level AI, so that each incremental step comes with metrics allowing careful evaluation of progress. In that regard I'm an advocate of a developmental path, where you start by trying to emulate the cognitive behaviors of a 3 year old child, then work up. In this approach one can use functional behaviors as a guide for one's work, and doesn't need to worry about rigorous definitions...
But having said that, I think it would certainly be nice to have a rigorous and universal metric for "intensity of consciousness", by which we could rate a human as more conscious than a lizard, and a lizard as more conscious than a rock.
I imagine such a metric could be formulated (using algorithmic information theory, pattern theory, etc.; and, building on some specific assumptions about the nature of consciousness), but it would take more work than I'm willing to put into this blog comment today ;-)
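As a toy illustration of the algorithmic-information-theory angle mentioned above: Kolmogorov complexity is uncomputable, but a general-purpose compressor gives a crude upper bound on it, so one could sketch a rough "structural richness" score along these lines. This is purely an editorial illustration of the flavor of such a metric, not a proposal from the post:

```python
# Toy sketch only: compressed size is a crude, computable stand-in for
# algorithmic information content. Function name is illustrative.
import os
import zlib

def compressibility(data: bytes) -> float:
    """Ratio of compressed size to raw size: near 1.0 for random noise,
    near 0.0 for trivially repetitive data."""
    if not data:
        return 0.0
    return len(zlib.compress(data, level=9)) / len(data)

print(compressibility(b"a" * 10_000))       # highly ordered: very low ratio
print(compressibility(os.urandom(10_000)))  # incompressible noise: ratio near 1
```

Note that pure compressibility scores random noise highest of all, so a serious metric would need something closer to "effective complexity" or logical depth, rewarding structure that is neither trivially ordered nor random.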
Actually I'm giving a talk at the consciousness conference in Hong Kong in June ... maybe I'll come up with a metric and put it in my talk there... I'd been wondering what I would talk about, heh ...
Thanks, Ben. Should you choose to talk about such a metric at the conference, I hope you'd put it online. I'd be very interested to see what you come up with.
Of course, coming up with an elegant formal characterization of "degrees of consciousness" is one thing ... and coming up with a measure that can actually be used in reality, given the types of data available about real systems, is another (probably harder) thing ...
I've been working on a novel formal definition of general intelligence (a tweak of one created by Shane Legg and Marcus Hutter recently) and I think one can make a formal definition of consciousness that is related to this...
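For reference, the Legg and Hutter definition being tweaked here scores an agent by its simplicity-weighted expected reward across all computable environments:

```latex
% Legg & Hutter's universal intelligence measure (2007):
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
% where E is the class of computable reward-bounded environments,
% K(\mu) is the Kolmogorov complexity of environment \mu, and
% V^{\pi}_{\mu} is agent \pi's expected total reward in \mu.
```

Simpler environments (low K) dominate the sum, so an agent earns a high score by performing well across many easily describable worlds, not by excelling in one exotic one.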
So, stay tuned ;-)
Hi Mattbot, regarding your question about a consciousness test you might consider this note
I think when a net becomes conscious, it must satisfy three conditions: distributed orchestration, embedded encapsulation, and grounded representation, see here
Both Anonymous and jfromm's comments (i.e. the links therefrom) seem to me to reflect an overly anthropomorphized vision of "consciousness."
Anonymous: the reason humans and other animals display "coping" behavior in response to attacks is that they evolved to have a survival instinct. There's no reason to expect the Net to have a similar survival instinct.
jfromm: I don't see why a nonhuman consciousness needs to be as unitary as a human one; nor why awareness of one's own infrastructure should necessarily make one non-conscious. I suspect I'd still be conscious if advanced brain-imaging instruments let me observe and toy with my neurons.
Ben, other scholars single out coping, as distinct from mere defense, precisely as a defining phenotype of consciousness.
Anonymous: I understand, but these theorists are probably thinking exclusively about biological consciousness in evolved organisms, and one wouldn't necessarily expect consciousness in the Internet to have all the same properties as such...
I think awareness of one's own infrastructure can prevent the emergence of self-consciousness in the first place, if the 'self' is a single, unified object. It is hard to perceive a unified self if the perception is dominated by a lot of gears and widgets. I know this sounds paradoxical: self-consciousness is not possible if the true nature of the self is fully visible to us. Yet I am convinced that if we understand this paradox, then we may come a bit closer to the problem of artificial self-consciousness.
Anonymous: It seems to me that perception of one's infrastructure shouldn't interfere with perception of oneself as a unitary coherent mind ... because, the dynamics by which infrastructure gives rise to emergent cognition are so damn complex that cognition can't grok them in detail anyway.
Let's suppose I had complete introspective power to see my neurons and synapses and the flow of charge and chemicals between them. So what? I wouldn't be able to grok the complex dynamics by which this lower-level stuff gives rise to my mind, anyway.
It seems to me the emergence of coherent self from underlying self-organizing dynamics is intrinsically opaque due to the nature of complex systems ... i.e. a sufficiently complex system is incapable of understanding its own dynamics in anywhere near real time.
So, IMO, opacity of one's infrastructure is irrelevant to consciousness...
If a system's architecture depended on its ability to predict its behaviors based on knowledge of its infrastructure, then this system would be incapable of advanced intelligence (I predict). But that would be because it would be constrained to very rigid dynamics ... not because of its infrastructural introspection per se.
I don't agree with you here: knowledge of one's infrastructure is not irrelevant. First, according to Dan Dennett, we know that the self as the center of narrative gravity is an illusion. It will be difficult to create and maintain that illusion if the agent recognizes that there is indeed no center.
Second, let's suppose I had complete introspective power to see my neurons and synapses and the flow of charge and chemicals between them. But these patterns of activity are not independent of my thinking; they are my thinking processes. Total confusion and chaos would arise, which would make any reasonable thought impossible. It would be impossible to observe the system without affecting it massively.
jfromm: I've meditated enough to have a strong intuitive sense that "there is no center", yet my pragmatic everyday sense of having a central, coherent self remains... Others have this sort of inner experience far more strongly, I'm sure.
But my main point is: even if you can sense all the neurons and synapses (or the digital analogues thereof) in your brain, that doesn't equate to sensing how your self and awareness EMERGE from these low-level entities. I contend that sensing this emergence in real-time is computationally intractable even for systems that have full proprioception of their neural internals. So I think the illusion of self would be roughly equally powerful for a system with neural proprioception.
Hey, how are you doing? Hope all is well.
I think consciousness is an emergent phenomenon, so there is no sense in "degrees of" consciousness, just as there is no sense in "degrees of" life.
I'm not educated in philosophy or neuroscience....or anything beyond computers/networking for that matter really, I'm just your local friendly IT guy. That being said, allow me to proceed.
This somewhat recent article has really got me thinking: http://www.popsci.com/scitech/article/2009-07/computerized-rat-brain-spontaneously-develops-complex-patterns
The gist of it is that when they first fired up this simulation of a rat brain (I think the simulation uses an entire CPU core to simulate each neuron, meaning thousands of cores) there was no discernible pattern. Over time, patterns began to emerge.
Now compare this to the internet. Each "node" on the internet, be it a router, a computer, or a phone, is hard-coded to behave in a certain way, just like these simulated rat brain neurons. The only difference is that these "nodes" on the internet are not programmed to just randomly connect to one another. For the VAST majority of them, there has to be a user action. That said, there are servers on the internet running narrowly defined algorithms that operate independently of human interaction; however, they remain relatively unaffected by their environment. They're just out there scraping emails off web pages, brute-forcing passwords, etc.

But the fact remains: either you (or one of these "bots") have to click on a web page to request data to be sent to your computer, and the routers and switches along the way relay each packet of information until it reaches its final destination... your computer. Meaning, me uploading this post to Google's servers will have no effect whatsoever on any equipment beyond the routers and switches between me and Google. Once I hit "send" and the operation is complete, that's it, it's done.
However, if for some reason a packet (or frame) is corrupted between me and Google after I click "send" (each node along the way performs a Cyclic Redundancy Check, or CRC, to verify that every bit arrives as it was transmitted), then this error causes an unforeseen event, requiring the "internet" to take action: discard the corrupted packet/frame and request that a new one be sent.
This (in my understanding) is the only global mechanism EVERY device on the internet possesses that is capable of generating traffic due to environmental variables outside of the end [human/bot] user. Meaning, electromagnetic interference, crosstalk, etc. could cause a bit to flip and fail the CRC, thus generating data beyond that caused by human/bot intervention.
If there were some sort of cascading emergent phenomenon, similar to that described in the article I linked above, it would have to come from these CRC events. If one node could trigger a CRC failure which in turn, occasionally and somehow outside the intended programming, triggered a CRC failure in the next node, and so on, I could see some sort of possible emergent behavior.
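The per-hop check the commenter describes can be sketched in a few lines. This is a toy model using Python's zlib CRC-32 with illustrative function names, not actual router or NIC firmware:

```python
# Toy sketch of per-frame CRC verification: a sender appends a CRC-32
# checksum, and a receiving node recomputes it to decide whether to
# accept the frame or discard it and request retransmission.
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a CRC-32 checksum (4 bytes, big-endian) to the payload."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare to the stored value."""
    payload, stored = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == stored

frame = make_frame(b"hello, internet")
assert check_frame(frame)                          # intact frame passes

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit in transit
assert not check_frame(corrupted)                  # fails: node would discard
                                                   # and request retransmission
```

CRC-32 is guaranteed to detect any single-bit flip (and all burst errors up to 32 bits), which is why a node can confidently discard and re-request a frame that fails the check.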
Unfortunately, because the internet is a network of independently operated networks, it is currently impossible to monitor such phenomena beyond one's own network.
I would venture to guess, however, that if we could peer into the internet the way the researchers in the rat brain project peer into their simulation, we would see the beginnings of cascading patterns of CRCs.
So, that's my take on it. I have to believe that with the vastness and complexity of the current internet, there has to be something going on out there beyond the scope of human programming.
If you got this far, thanks for reading :)
I like what Roger Penrose said about consciousness: that it surpasses our neural capacity to accomplish it.
Kurzweil designed evolving logic for his voice recognition software. I believe internet consciousness will also be evolutionary, strengthening much the way dendrites become robust through synaptic activity.
All life on Earth has collaborated to form higher orders of intelligence. Ours is an unbroken line back to the primordial beginnings of life. Someone once said, "life cannot be contained, it must burst forth". Consciousness is the same. It cannot be contained (even within a box of human understanding). We look at the wondrous plants and animals of this planet, and know them as myriad expressions of an elemental happening. Maybe consciousness is also a singular event, one in which the human, the "enhanced", the planetary, the galactic, and the cosmic are all joined.
We are not the creators; consciousness is.
This looks like it might interest you, Ben: