I returned home 2 weeks ago from the First AGI Summer School, which was held in the Artificial Brain Lab at Xiamen University in Xiamen, China at the end of June and the beginning of July.
Ever since I got back I've been meaning to write a proper summary of the summer school -- how it went, what we learned, and so forth -- but I haven't found the time, and it doesn't look like I'm going to; so, this blog post will have to suffice, for the time being at any rate.
First of all, I need to express my gratitude to Hugo de Garis and Xiamen University for helping set up the summer school. Coming to Xiamen to do the Summer School was a great experience for me and the others involved -- so, thanks much!
Some photos I took in Xiamen are here (mixed up with a few that YKY took on the same trip). (Viewer beware: some of these are summer school photos, some are just "Ben's Xiamen tourism photos"....)
To get a sense of what was taught at the summer school -- and who was on the faculty -- you can go to the summer school website; I won't repeat that information here.
The first two weeks of the summer school were lecture-based, and the last week was a practical, hands-on workshop focused on the OpenCog AI system. Unfortunately I missed most of the hands-on segment, as I wound up spending much of that week meeting with various Chinese university officials about future possibilities for Chinese AGI funding (but I'll write another blog post about that), and demo-ing the Artificial Brain Lab robot to said officials.
See here for some videos of the above-mentioned robot, along with some "OpenCog virtual pet" demo videos that were shown at the summer school. (And, of course, we also got the OpenCog virtual pet up and running "live" in Xiamen....)
The number of students wasn't as large as we'd hoped -- but on the plus side, we did have a group of VERY GOOD students who learned a lot about AGI, which was after all the point.
(In fact, most conferences have found their attendance figures down this year, due to people wanting to save money on travel costs: an obvious consequence of the faltering world economy.) The majority of students were Chinese from Xiamen University and other universities in Fujian province, but there were also some overseas students from Europe, the US, Korea and Hong Kong (OK, well, Hong Kong isn't quite "overseas" ;-).
All the lectures were videotaped by Raj Dye (thanks Raj!!) and will be put online once Raj gets time to edit them. I think these will form an extremely valuable resource, and will reach a lot more people than the summer school itself did. (Long live the Internet!!) Raj's active camera work captured a bunch of the dialogues during and after the talks as well, and I think these will make quite interesting viewing. As you might expect, there was some pretty intense give-and-take (especially, for example, during Allan Combs' talks on cognition and the brain).
I'm definitely interested to help organize some future AGI summer schools ... though the next one will be in a different location, as we've already done a pretty good job of spreading the word about AGI to the AI geeks of Fujian Province! Maybe the next one will even be back here in the boring old US of A ....
Random Observations on Chinese-ness in the AGI Context
I learned a lot about China in the course of doing the summer school (though I'm still pathetically ignorant about the place of course ... there's a lot to know) ... I won't try to convey 1% of what I learned here, but will just write down a few hasty and random semi-relevant observations.
First, I learned to speak verrrry slowly and clearly since Chinese students are more accustomed to written than spoken English! ;-)
More interestingly, I learned that the Chinese educational system is more narrowly disciplinary than the US system, and also more focused on memorization of declarative knowledge than on practical "know-how." Compared to their US counterparts, computer science graduates in China know an AWFUL LOT of computer science, yet don't have much advanced knowledge of areas beyond computer science, nor all that much software engineering knowledge or hands-on coding experience. ("Software Engineering" is a separate department in the Chinese university system, and I didn't get to know the Software Engineering students, only the Computer Science ones.) So one role the summer school served was just to introduce a bunch of Chinese AI students to some allied disciplines -- neuroscience, cognitive psychology, philosophy of mind -- that they hadn't seen much during their formal education so far.
(Actually, separately from the Summer School, I did give a talk to some undergrads in the Software Engineering School, on AI and Gaming, which contained one funny bit (unfortunately that talk was not videotaped). I wasn't sure if the students understood what I was talking about, so as a test I showed them this picture as part of my PowerPoint:
Normally I use this lovely picture as an example of "conceptual blending" (a cognitive operation that OpenCog and other AGI systems must carry out), but this time I announced it differently; I said: "Furthermore, the Artificial Brain Lab here at XMU has an ambitious backup plan, in case our computer science approach to AGI fails. We've devised a machine that can remove the head from a graduate student, and attach it to the body of a Nao humanoid robot, and thus create a kind of synergetic cyborg intelligence." I was curious to see whether these Chinese undergrads understood my English well enough to realize I was joking -- but from their reaction, I couldn't tell. They laughed because the picture looked funny, but I still don't know if they understood what I was saying! Fortunately the Summer School students were less inscrutable, and more reactive and communicative! And overall the AI in Games lecture went well in spite of this perplexing cross-cultural joke experience....)
(As an aside within an aside, I also learned during various conversations that typical Chinese high school students spend from 7AM till 10PM or so at school, 6 days a week. Damn.)
Another thing that surprised me was the strength of knowledge the Chinese students had in neural nets, fuzzy logic, computer vision and other "soft computing" and robotics-related AI, as compared to logic-based AI. By and large, they had a very strong mathematics background, and a good knowledge of formal logic -- but fairly little exposure to the paradigm in which logic is applied to create AI systems. Quite different from the typical American AI education.
All in all the Chinese seemed to have a lot less skepticism about "strong AI" than Americans. It's not that they had a great faith in its immediacy -- more that they lacked the egomaniacal confidence in its extreme difficulty or implausibility, which one so often finds in Westerners. Chinese culturally seem much more comfortable with accepting situations of great unconfidence, in which the evidence just doesn't exist to make a confident estimate.
I came to the summer school from the Toward a Science of Consciousness conference in Hong Kong, where I led a Machine Consciousness workshop -- which I won't write about here, because I wrote a summary of it for H+ magazine, which will appear shortly. Issues of machine consciousness came up now and then at the summer school, but interestingly, they seem to hold a lot less fascination for Chinese than for Westerners. When I put forth my panpsychist perspective in China (that the universe as a whole is conscious in a useful sense, and different systems -- like human brains and digital computers -- manifest this consciousness in different ways ... and our "theater of reflective consciousness" is one of the ways universal consciousness can manifest itself in certain sorts of complex systems), no one really bats an eye (and not just because the Chinese lack a taste for eye-batting). Not that Chinese scientists consider this panpsychist perspective wholly obvious or necessarily correct; but nor do they consider it outrageous -- and, most critically, very FEW Chinese seem to feel like many Westerners do, that "reductionism" or "materialism" is obviously correct. Once you remove the tendency toward dogmatic materialism, the whole topic and dilemma of "machine consciousness" loses its bite....
China versus California (A Semi-Digression)
(This section contains some ramblings on Oriental versus California culture, and the Singularity -- which are only semi-relevant to the summer school, but I'll put them here anyway, because I find them amusing! Hey, this is a blog, anything goes ;-)
In mid-July I voyaged from the Xiamen AGI summer school to California where I gave the keynote speech at the IJCAI workshop on Neural-Symbolic computing (a really interesting gathering, which I'll discuss some other time), and then gave a lecture on AGI at the Singularity University (at NASA Ames Lab, in Silicon Valley).
The contrast between the SU students and the Chinese AGI Summer School students couldn't have been more acute.
For one thing, there was a huge contrast of ego ... to phrase things dramatically: The SU students emanated an attitude that seemed to say "We know more than anyone on the planet!! We already know almost everything we need to know to dominate the world as part of the techno-elite!"
The Chinese students were not actually more ignorant (though their knowledge bases had different strengths and weaknesses than those of the SU students), but they were dramatically more humble about their state of knowledge!
The SU students also seemed extremely eager to project everything I said about AGI into the world they knew best: Silicon Valley style Internet software. So, most of the questions during and after my talk centered around the theme: "Isn't it unnecessary to work on AGI explicitly ... won't AGI just emerge from the Internet after Silicon Valley startup firms create enough cool narrow-AI online widgets?" When I said I thought this was unlikely, then the questions turned to: "OK, but rather than writing an AGI that actually thinks on its own, shouldn't you just write a narrow-AI that figures out the best way to combine existing online widgets, and achieves general intelligence that way?" And so forth.
But I don't want to make it sound like the SU student body is "all of one mind" -- it's certainly a heterogeneous bunch. At the lunch following my talk at SU, one SU student surprised me with the following statement (paraphrased): "One reason I think AI systems may not achieve the same kind of ethical understandings or states of mind as humans, is that they lack one of the most important human characteristics: our humbleness. We humans have a lot of limitations in our bodies and minds, and these limitations have made us humble, and this humbleness is part of what makes us ethical and part of what makes us profoundly intelligent in a way that a mere calculating machine could never be."
I laughed out loud and immediately said to the student: "OK, I'm onto you. You're not American." (The student did look Asian ... but I was guessing he was not Asian-American.)
He admitted to being from Korea ... and I noted that few Americans -- and especially no Silicon Valley techno-geek -- would ever identify humbleness as a central characteristic of humans or a key to human intelligence!
Then I couldn't help thinking of the saying "Pride comes before a fall" ... and Vinge's (correct) characterization of the Singularity as a point after which HUMANS WILL HAVE NO IDEA WHAT'S GOING ON ... i.e. no real ability to predict what happens next, as superhuman nonhuman intelligences will be dominating the scene.
Philosopher Walter Kaufmann coined the dorky but evocative term "humbition" to denote the combination of humility and ambition. There's not much humbition in Silicon Valley ... nor for that matter in the public trumpetings of the Chinese government ... but there was a LOT of humbition in the Chinese students at the AGI summer school and the Artificial Brain Lab. Perhaps this quality will serve them well as the world advances, and our knowledge and intuitions prove decreasingly adequate to comprehend our situation...
If you believe that AGI will be created from piecing together narrow-AI internet widgets, then yeah, most likely AGI will be created by the Silicon Valley techno-elite. But if (as I suspect) it requires fundamentally different ideas from the ones now underlying the world's technological infrastructure ... maybe it will be created by people who are more open to fundamentally new and different ideas.
But this leads into the next blog post I'm going to write, exploring the question of whether Hugo de Garis is right that AGI is going to get created in China rather than the West!
Musings on the Concept of a Systematic AGI Curriculum, and Lessons for Future AGI Summer Schools
Next, what did I learn this summer about the notion of an AGI summer school, and about teaching AGI in general?
One big lesson that got reinforced in my mind is: Teaching AGI is very different than teaching Narrow AI!
There is basically no systematic AGI education in universities anywhere on the planet, and this fact certainly helps to perpetuate the current AGI research situation (in which there is very little AGI research going on). By and large, everywhere in the world, students graduate with PhD degrees in AI, without really knowing what "AGI research" means.
Another conclusion I came to is that a carefully crafted "AGI Summer School" curriculum could play a major role -- not only in providing AGI education, but in demonstrating how AGI material should be structured and taught.
However, creating a thorough, systematic AGI curriculum would be a lot of work ... and we didn't really attempt it for the First AGI Summer School. I think the lectures mostly went very well this time (well, you can judge when the videos come online!!), and the sequencing of the lectures made good didactic sense -- but, for the next AGI summer school, we'll put a little more thought into framing the curriculum in a systematic way. Now, having done the summer school once, I (and probably the other participants as well) have a much clearer idea of what an AGI curriculum should be like.
First of all, it's obvious that to make a systematic AGI curriculum, one would need some systematic background curriculum in areas like
- Neuroscience
- Linguistics
- Philosophy of Mind
- Psychology (of Cognition, Perception, Emotion, etc.)
In this vein, one thing that became clear to me at the Xiamen summer school is: The standard "cognitive science" curriculum would certainly fill this need for background, but it's not exactly right, because it's not specifically focused on AGI ... AGI students really only need to digest a certain subset of the cognitive science curriculum, selected specifically with AGI-relevance in mind. But judiciously making this selection would be a nontrivial task in itself.
Next, as part of a thorough AGI curriculum, one would need a systematic review of different conceptions of what "general intelligence" is -- we did such a review at the Xiamen summer school, but not all that systematically. Pei Wang gave a nice talk on this theme, and then Joscha Bach and I presented our own conceptions of GI, and I also briefly reviewed the Hutter/Schmidhuber "universal intelligence" perspective.
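(For those who haven't encountered it, the core of the Legg/Hutter "universal intelligence" idea can be stated in a single formula -- this is a rough sketch from memory, not their exact formulation:

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]

i.e., the intelligence of an agent \(\pi\) is its expected cumulative reward \(V^{\pi}_{\mu}\) summed over all computable environments \(\mu\) in some class \(E\), with each environment weighted by \(2^{-K(\mu)}\), where \(K(\mu)\) is the environment's Kolmogorov complexity -- so simpler environments count for more. Whatever you make of it as a characterization of real-world GI, it's a handy reference point when comparing the more pragmatic conceptions.)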
Then there's the matter of reviewing the various AGI architectures out there. I think the Xiamen summer school did a fairly good job of that, with in-depth treatments of OpenCog, Pei Wang's NARS architecture, and Joscha Bach's MicroPsi ... and a briefer discussion of Hugo de Garis's neural net based Artificial Brain approach ... and then very quick reviews of other AGI architectures like SOAR and LIDA. Of course there are many, many architectures one could discuss, but in a limited time-frame one has to pick just a few and focus on them. (It would be nice if there were some more systematic way to review the various AGI architectures out there than taking a "laundry list" approach, but this isn't an education problem, it's a fundamental theory problem -- no such systematization exists, even in the research literature.)
There were a lot of OpenCog-related lectures at the Xiamen summer school, and one thing I felt was that it was both too much and too little! Too much OpenCog for a generic AGI Summer School, but too little for a real in-depth OpenCog education. At future summer schools we may split OpenCog stuff off to a greater extent: give a briefer OpenCog treatment in the main summer school lectures, and then do a separate one-week OpenCog lecture series after that, for students who want to dig deep into OpenCog.
Another educational issue is that each AGI architecture involves different narrow-AI algorithms, so that to really follow the architecture lectures fully, students needed to know all about forward and backward inference, attractor, feedforward and recurrent neural nets, genetic algorithms and genetic programming, and so forth. (Most of them did have this knowledge, so it wasn't a problem; actually this might be more of a problem in the US than in China, as China's education system is very strong on comprehensively teaching factual knowledge.) That is: even though AGI is quite distinct from narrow AI, existing AGI architectures make ample use of narrow-AI tools, so students need a good grounding in narrow AI to grok current AGI systems. It would be good to make a systematic list of tutorials on the most AGI-relevant areas of narrow AI, for students whose narrow-AI background is spotty. Again, we did some of this for the Xiamen summer school, but probably not enough.
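To give a concrete flavor of the sort of bite-sized narrow-AI tutorial material I have in mind, here is a toy genetic algorithm in Python -- just an illustrative sketch written for this blog post, not code from OpenCog or any of the architectures discussed above. It evolves a population of bit-strings toward a fixed target string:

```python
# Toy genetic algorithm: evolve a bit-string to match a fixed target.
# Purely illustrative tutorial material -- not taken from any AGI system.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 100, 0.02

def fitness(genome):
    # Number of bits that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return generation, population[0]
        # Keep the fitter half as parents; rebuild the rest via crossover + mutation.
        parents = population[: POP_SIZE // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return GENERATIONS, population[0]

if __name__ == "__main__":
    generation, best = evolve()
    print("generation", generation, "best fitness", fitness(best), "genome", best)
```

A real tutorial would obviously go deeper (and cover inference and neural nets analogously), but a student who has worked through toys like this has a much easier time following the architecture lectures.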
Finally there's the terminology issue. There is no good "AGI glossary", and every researcher uses terms in slightly different ways. Updating and enlarging an online AGI glossary would be a great project for students at an AGI summer school to participate in!
Undramatic Non-Conclusion
So, the first AGI Summer School went pretty interestingly, and I'm really glad it happened. It was interesting to get to know China a little bit, and to get some experience teaching AGI in an intensive-course context. I learned a lot, and I guess the other faculty and the students did too.... I also made a number of excellent new friends, both among the Chinese and the foreign students. As with many complex real-world experiences, I don't really have any single dramatic summary or conclusion to draw ... but I'm looking forward both to future AGI summer schools, and to future experiences with "AGI in China"....
1 comment:
Hey Ben,
Very interesting and funny blog entry. I so wish I had been at that summer school, even though I would have understood the least of everyone since I'm just an undergraduate student. Just wanted to say that your blog and your overall work is a real source of inspiration to me. After learning about the concept of the singularity and the current technological trends, I've become motivated enough to get back to uni and try for a degree in Mathematics and Information Technology. I hope I'll get to contribute to some of the exciting stuff that you guys are creating before we all get assimilated by the future :P