I probably wouldn't have come to Finland just for this gathering, but it happened I was really curious to meet the people at RealXTend, the Finnish open-source-virtual-worlds team Novamente has been collaborating with (with an aim toward putting our virtual pets in RealXTend) ... so the workshop plus RealXTend was enough to get me on a plane to Helsinki (with a side trip to Oulu where RealXTend is located).
This blog post quasi-randomly summarizes a few of my many reactions to the workshop....
Many of the talks were interesting, but as occurs at many conferences, the chats in the coffee and meal breaks were really the most rewarding part for me...
I had not met Haikonen personally before, though I'd read his books; and I also met a lot of other interesting people, both Finnish and international....
I had particularly worthwhile chats with a guy named Harri Valpola, a Finnish computational neuroscience researcher who is also co-founder of an AI company initially focused on innovative neural-net approaches to industrial robotics.
Harri Valpola is the first person I've talked to who seems to have independently conceived a variant of my theory of how brains may represent and generate abstract knowledge (such as is represented in predicate logic using variables and quantifiers). In brief, my theory is that the brain can re-code a neural subnetwork N so that the connection-structure of N serves as input to some other subnetwork M. This lets the brain construct "higher-order functions" as used in combinatory logic or Haskell, which provide a mathematically equivalent alternative to traditional predicate-logic formulations. Harri's ideas did not seem exactly identical to this, but he did have the basic idea that neural nets can generate abstraction by having subnets take as input aspects of the connection structure of other nets.
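To make the higher-order-function point concrete, here's a toy Haskell sketch (my own illustration, not Harri's formalism or my actual neural theory; the Bird domain and predicates are invented for the example). It shows how a universally quantified predicate-logic statement can be rebuilt purely from functions that take other functions as input -- the rough analogue of subnetwork M taking subnetwork N's structure as input:

-- Toy illustration: "every raven is black", i.e. forall x. raven(x) -> black(x),
-- expressed by composing higher-order functions rather than by
-- explicit variables and quantifiers.

data Bird = Raven | Robin deriving (Eq, Show)

-- invented predicates for the toy domain
raven, black :: Bird -> Bool
raven b = b == Raven
black b = b == Raven   -- in this toy world, exactly the ravens are black

-- a higher-order function: takes two predicates, yields a new predicate
implies :: (a -> Bool) -> (a -> Bool) -> (a -> Bool)
implies p q = \x -> not (p x) || q x

-- quantification itself becomes a function over predicates
forAll :: [a] -> (a -> Bool) -> Bool
forAll domain p = all p domain

main :: IO ()
main = print (forAll [Raven, Robin] (raven `implies` black))  -- True

The "statement" here is itself a function built out of other functions, with no variable ever mentioned -- which is exactly the combinatory-logic trick.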
Once again I was struck by the way different people, coming from totally different approaches, may arrive at parallel ideas. I arrived at these particular ideas via combinatory logic and then sought a neuroscience analogue to combinatory logic's higher-order functions, whereas Harri arrived at them via a more straightforward neuroscience route. So our approaches have different flavors and suggest different research directions ... but ultimately they may well converge on the same core idea.
I don't have time to write summaries of the various talks I saw or conversations I had, so I'll just convey a few general impressions of the state of "machine consciousness" research that I got while at the conference.
First of all, I'm grateful to Pentti Haikonen for organizing the workshop -- and I'm pleased to see that the notion of working on building conscious, intelligent machines, in the near term, has become so mainstream. Haikonen is a researcher at a major industry research lab, and he's explicitly saying that if the ideas in his recent book Conscious Robots are implemented, the result will be a conscious intelligent robot. Nokia does not seem to have placed a tremendous amount of resources behind this conscious-robot research program at present, but at least they are taking it seriously, rather than adopting the skeptical attitude one might expect from talking to the average member of the AAAI. (My own view is that Haikonen's architecture lacks many ingredients needed to achieve human-level AGI, but could quite possibly produce a conscious animal-level intelligence, which would certainly be a very fascinating thing....)
The speakers were a mix of people working on building AI systems aimed at artificial consciousness, and philosophers investigating the nature of consciousness in a theoretical way. A few individuals with neuroscience background were present, and there was a lot of talk about brains, but the vast majority of speakers and participants were from the computer science, engineering or philosophy worlds, not brain science. The participants were a mix of international speakers, local Finns with an interest in the topic (largely from local universities), and Nokia Research staff (some working in AI-related areas, some with other professional foci but a general interest in machine consciousness).
Regarding the philosophy of consciousness, I didn't feel any really new ground was broken at the workshop, though many of the discussants were insightful. As a generalization, there was a divide between participants who felt that essentially any machine with a functioning perception-action-control loop was conscious, versus those who felt that a higher level of self-reflection was necessary.
My own presentation from the workshop is here ... most of it is cut and pasted from prior presentations on AGI, but the first 10 slides or so are new and discuss the philosophy of consciousness specifically (covering content previously given in my book The Hidden Pattern and various blog posts). I talked for half an hour, spending the first half on philosophy of consciousness and the second half on AGI stuff.
I was the only vocal panpsychist at the workshop ... i.e. the only one maintaining that everything is conscious, and that it makes more sense to think of the physical world as a special case of consciousness (Peirce's "Mind is matter hide-bound with habit") than to think of consciousness as a special case of the physical world. However, one Finnish philosopher in the audience came up to me during a coffee break and told me he thought my perspective made sense, and that he was happy to see some diversity of perspective at the workshop (i.e. to see a panpsychist there alongside all the hard-core empiricists of various stripes).
My view on consciousness is that raw consciousness, Peircean First, is an aspect of everything ... so that in a sense, rocks and numbers are conscious, not just mice and people. However, different types of entities may have qualitatively different kinds of consciousness. For instance, systems that are capable of modeling themselves and intelligently governing their behavior based on their self-models, may have what I call "reflective consciousness." This is what I have tried to model with hypersets, as discussed in my presentation and in a prior blog post.
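To give a flavor of the hyperset idea, here's a toy Haskell sketch (just an illustrative analogue, not the formal hyperset model from my presentation). Hypersets drop the foundation axiom, so a set can contain itself (x = {x}); Haskell's laziness lets us build the analogous self-referential structure directly:

-- Toy analogue of a hyperset x = {x}: a value that refers to itself,
-- so an observer's self-model can literally be the observer,
-- with no infinite regress at runtime.

data Observer = Observer
  { label     :: String
  , selfModel :: Observer  -- the system's model of itself
  }

me :: Observer
me = Observer { label = "me", selfModel = me }  -- "tying the knot"

main :: IO ()
main = putStrLn (label (selfModel (selfModel me)))  -- any depth of
                                                    -- self-reflection works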
Another contentious question was whether simple AI systems can display consciousness, or whether there's a minimal level of complexity required for it. My view is that reflective consciousness probably does require a fairly high level of complexity -- and, furthermore, I think it's something that pretty much has to emerge from an AI system through its adaptive learning and world-interaction, rather than being explicitly programmed-in. My guess is that an AI system is going to need a large dynamic knowledge-store and a heck of a lot of experience to be able to usefully infer and deploy a self-model ... whereas, many of the participants in the workshop seemed to think that reflective consciousness could be created in very simple systems, so long as they had the right AI architecture (e.g. a perception-action-control loop).
In short, my view is that:
- consciousness is an aspect of everything
- enabling the emergence of reflective consciousness is an important part of achieving advanced AGI
- the study of consciousness in general is part of philosophy, or general philosophical psychology
- the study of reflective consciousness is an important part of cognitive science, which AGI designers certainly need to pay attention to
As an example, the "7 +/- 2" property of human short-term memory seems to have a very big impact on the qualitative nature of human reflective consciousness ... and I've always wondered to what extent it represents a fundamental property of STM versus just being a limitation of the brain. It's worth noting that other mammals have basically the same STM capacity as humans do.
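For concreteness, here's a trivially simple capacity-limited STM in Haskell (a toy FIFO-displacement sketch of my own, not a claim about actual brain dynamics):

import Data.Sequence (Seq, (|>), ViewL (..), viewl)
import qualified Data.Sequence as Seq

-- the "magic number" capacity, per the 7 +/- 2 figure
stmCapacity :: Int
stmCapacity = 7

-- attending to a new item displaces the oldest chunk once STM is full
attend :: Seq a -> a -> Seq a
attend stm item
  | Seq.length stm < stmCapacity = stm |> item
  | otherwise = case viewl stm of
      _ :< rest -> rest |> item
      EmptyL    -> Seq.singleton item

main :: IO ()
main = print (foldl attend Seq.empty "abcdefghij")
-- fromList "defghij" -- only the last seven items survive

An AI need not be built this way at all, of course -- which is part of why I wonder whether the limitation is fundamental or merely a quirk of mammalian wetware.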
(I once speculated that the size of STM is tied to the octonion algebra (an algebra that I discussed in another, also speculative cog-sci context here), but I'm not really so sure about that ... I imagine that even if there are fundamental restrictions on rapid information processing posed by algebraic facts related to octonions, AI's will have tricky ways of getting around these, so that these fundamental restrictions would be manifested in AI's in quite different ways than via limited STM capacity.)
However, it's hard to ever get to fine-grained points like that in broad public discussions of consciousness ... even among very bright, well-intentioned expert researchers ... because discussion of consciousness seems to provoke even more contentious, endless, difficult arguments among researchers than discussion of general intelligence does ... in fact consciousness is a rare topic that is even harder to discuss than the Singularity!! This makes consciousness workshops and conferences fun, but also means that they tend to get dominated by disagreements-on-the-basics, rather than in-depth penetration of particular issues.
It's kind of hard for folks who hold different fundamental views on consciousness -- and, in many cases, also very different views on what constitute viable approaches to AGI -- to get into deep, particular, detailed discussions of the relationship between consciousness and particular AI systems!
In June 2009 there will be a consciousness conference in Hong Kong. This should be interesting on the philosophy side -- if I go there, I bet I won't be the only panpsychist ... given the long history of panpsychism in various forms in Oriental philosophy. I had to laugh when one speaker at the workshop got up and stated that, in studying consciousness, he not only didn't have any answers, he didn't know what were the interesting questions. I was tempted to raise my hand and suggest he take a look at Dharmakirti and Dignaga, the medieval Buddhist logicians. Buddhism, among other Oriental traditions of inquiry, has a lot of very refined theory regarding different states of consciousness ... and, while these traditions have probably influenced some modern consciousness studies researchers in various ways (for example my friend Allan Combs, who has sought to bridge dynamical systems theory and Eastern thought), they don't seem to have pervaded the machine-consciousness community very far. (My own work being an exception ... as the theory of mind on which my AI work is based was heavily influenced by Eastern cognitive philosophy, as recounted in The Hidden Pattern.)
I am quite eager to see AI systems like my own Novamente Cognition Engine and OpenCogPrime (and Haikonen's neural net architecture, and others!!) get to the point where we can study the precise dynamics by which reflective consciousness emerges from them -- where we can ask the AI system what it feels or thinks, and see which parts of its mind are active in relation to the items it identifies as part of its reflective consciousness. This, along with advances in brain imaging and allied advances in brain theory, will give us a heck of a lot more insight....
Your link to your theory http://www.blogger.com/www.acceleratingfuture.com/people-blog/?p=2199 is broken :-)
you want http://www.acceleratingfuture.com/people-blog/?p=2199
Thanks for the write-up - I am looking forward to hearing more about it from Will Browne, who was there. I have put an initial response at http://brains.parslow.net/?q=node/1445 (I am not convinced my trackback system is working properly, so it may not tell your blog about this automagically for me)
Harri Pirkola - you're talking about Harri Valpola, I believe.
Thanks Kaj: fixed... sadly my American memory tends to merge various Finnish names together ;-p
I've been reading a lot of your articles for a number of years, Dear Ben. Today I was thrilled as well as surprised by your most recent views, expressed in this post.
1) I agree 100% that everything is (even in a very elementary sense) "conscious". My pantheistic leanings are not the only reason for this. The main reason is our common background in George Spencer-Brown's "Laws of Form", etc.
2) With only one thing that you say you believe, I disagree: Numbers, in particular, are NOT conscious. They arise out of the activity of consciousness, of course, but they are as unrelated to being conscious as a dead fossil is related to the consciousness of the (once alive) animal that became the fossil.
Point (2) above suddenly explained to my mind the reason for a small but important flaw that (I believe) exists in your reasoning: Intelligent systems are NOT necessarily going to be "conscious". They may well be much more intelligent than we are, but such intelligence is not at all necessarily related to consciousness as such: it can be the reified or objectified part of a (living) consciousness.
Consciousness may (O.T.O.H.) exist in artefacts created from materials which are in themselves potentially conscious, such as DNA structures. Artificial life may become possible, with its own consciousness, only when it succeeds in tapping the mysterious "reservoir of Universal Consciousness" existing in the depths of matter itself. O.T.O.H. intelligent artefacts created as mere simulations in some system of representation, based on numbers or other symbols, seem to be doomed to be completely void of consciousness.
Sorry for the long diatribe, but I think it may have contributed something interesting to your thinking (brilliant in every other way).