My review of the Kurzweil-biopic/futurist-think-piece documentary Transcendent Man -- which features me mouthing off for 4-5 minutes in a zebra-striped cowboy hat -- is up at H+ Magazine, here:
http://www.hplusmagazine.com/articles/ai/transcendent-man-film-about-kurzweil
In that article, as well as reviewing the film, I recount some moderately interesting dialogue between me and Ray Kurzweil from the moderated discussion at the end of the film's premiere at the Tribeca Film Festival...
After the conversation with Ray that I discuss at the end of the article, the discussion-moderator asked me another question (one I didn't put in the review): he asked me what my goal was. What was I trying to achieve?
What I said was something like this: "I would like me, and any other human or nonhuman animal who wants to, to be able to increase our intelligence and wisdom gradually ... maybe, say, 37.2% per year ... so that we can transcend to higher planes of being gradually and continuously, and feel ourselves becoming gods ... appreciate the process as it unfolds."
That's what I'm after, folks. Hope you'll come along for the ride!
Why 37.2% each year, gradually? Why not 2363% all at once, every 10 years? If intelligence and time are infinite (admittedly an important 'if'), then the two will be effectively identical if you zoom out enough.
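[A quick check of the arithmetic behind these two figures, assuming simple compounding:

1.372^10 ≈ 23.6, i.e. roughly 2363% of the starting level

so growing 37.2% per year for a decade and jumping to ~2363% once per decade amount to about the same total gain -- the difference is only in how it is spread out.]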
Steve: Because I think that improving by 2363% would basically be equivalent to dying and getting replaced by some massively superior thing. Whereas if I improve by only 37.2% per year [or some other modest amount!] then I'll be able to feel myself broadening and improving, each step of the way.... "I" being a dynamical process anyway, of course...
Of course, it may be that after I've improved by a mere 86.2% or so, I'll realize that controlled ascent at a slow pace is a silly idea, and choose to up my intelligence by 10000% in a yoctosecond... ;-)
ben
You mention in an earlier blog entry:
"the question of 'how smart are cetaceans' is much less interesting than the question of 'how are they smart'"
This brings up an interesting question. Do you see intelligence as a scalar or a vector (or maybe something else)? If you view it as a scalar, then a comparison between two beings should depend only on the difference or ratio of their intelligences.
If you view intelligence as a vector, then what does it mean for a being to be 37.2% smarter than another being?
The only way I can think of to resolve this would be to define the intelligence of a being as a vector in mind-space, but also give intelligence an alternate definition as the norm of that vector.
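One way to make that concrete -- purely an illustrative sketch, with invented capability dimensions and numbers, nothing here being a real measure of intelligence:

import numpy as np

# Hypothetical coordinates in "mind-space": each entry scores one capability.
mind_a = np.array([3.0, 1.5, 4.0])   # e.g. spatial, social, linguistic
mind_b = 1.372 * mind_a              # a uniformly scaled-up copy of mind_a

def scalar_intelligence(mind):
    # One possible scalar summary: the Euclidean norm of the capability vector.
    return np.linalg.norm(mind)

print(scalar_intelligence(mind_b) / scalar_intelligence(mind_a))  # -> 1.372

# "37.2% smarter" then means only that the norms stand in the ratio 1.372;
# a mind growing along completely different dimensions could show the same
# ratio while ending up in a very different region of mind-space.

Under that reading, "37.2% smarter" pins down the ratio of the norms but says nothing about the direction of growth.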
Is this a new goal? Is it your top goal? How do you know it is your goal?
At first glance it appears rather inconsistent with various previous actions - namely getting married and having kids.
I would like to believe that all of the myth and fantasy human beings have been creating since the beginning would play an important role as we rise up into higher beings. Each myth, religion and fiction has some kind of truth, some facet of a lesson to learn about co-existence with gods -- which perhaps could mean ourselves.
We seem to have a tendency to forget these lessons, but perhaps a precondition for transcendence is sufficient awareness of certain benevolent meta-mythological themes, and reverence for truth, beauty, etc.
I have another question for Ben: Does he really expect the Singularity will happen within his biological lifetime? (Provided the intelligence enhancement he's describing is connected to the Singularity.)
Then the Singularity would have to happen sometime in the next, say, 40 to 50 years... Maybe Ben wasn't being facetious when he hoped for a Singularity within approximately 10 years if we really, really try.
You call that a goal? ;-D
Tim Tyler wrote:
"Is this a new goal? Is it your top goal? How do you know it is your goal? At first glance it appears rather inconsistent with various previous actions - namely getting married and having kids."
No... I'm not a mind with a rigid top-down goal system ... I don't have a "top goal" ...
I definitely have multiple goals on multiple time scales
Avoiding torture is one, for example.... Having more pleasure than pain ... doing good for my family and the rest of the world, etc.
I didn't mean to imply that controlled ascent is my exclusive goal, or the sole top goal from which all my other goals derive
Only that it's a major and important goal for me, and the one that's most exciting to me in the long term
Hi! Thanks for the clarification - it helps! However, I am left wondering what your actual goals are - what you seem to be saying is that "it's complicated".
My understanding is that there is a utility function that represents the goals for any specified intelligent agent - as compactly as any other representation can. I generally encourage agents to find that representation - and then to publish it.
Finding that representation often helps clarify the agent's goals to itself. Publishing it is a form of transparency.
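(For what it's worth, the standard formal version of that claim is the von Neumann-Morgenstern representation: if an agent's preferences over gambles satisfy a few consistency axioms - completeness, transitivity, continuity, independence - then there exists a utility function U such that

    the agent prefers A to B  <=>  E[U(A)] >= E[U(B)]

with U unique up to positive affine transformation. Real agents violate the axioms often enough that this is an idealisation rather than a description.)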
Personally, if an agent doesn't tell me what its goals are, I find it hard to know what it will do in the future - and thus it is difficult for me to know whether I can trust it. Since so many people are evasive about what their goals are - or profess goals which do not explain their actions and rather obviously seem to serve signalling purposes - I am often left wondering why they are hiding their motives.
Anyway, if you are interested in publishing your actual goals someday, go for it.
Tim ... hmmm ... I don't claim to know precisely what my total set of "actual goals" is ... nor do I consider it feasible to compactly express them, insofar as I understand them myself, in human language or mathematics...
Real humans have mixes of motives which shift complexly over time.
Explicitly stated goals (for humans) are just approximations useful for biasing the self-organizing process of human life, and for communicating, but we shouldn't make more of them than what they are...
Hi! Thanks again for your response. You seem to be going with "it's complicated" again - which might be true - but is not an answer which helps people to understand your motives.
You could probably still produce some kind of answer.
A relatively small number of facts go a long way towards explaining most human motives:
One is that we are a product of four billion years of evolution by natural selection - which consistently favoured organisms that acted so as to maximise their inclusive fitness. This simple idea neatly explains resource acquisition, growth and reproductive activities, pain, happiness, jealousy, love, anger, etc.
Another is that humans are in an unfamiliar modern environment, and many of them malfunction in various ways. That explains why people behave as they do around sperm banks, chocolate cake and strip clubs.
Another is that the human brain can be infected by deleterious ideas, which act against their host's own genetic fitness. That explains much of the behaviour of Catholic priests.
So, in this way, most people's motives are highly compressible to a few short theories, and an indication of the extent to which they apply:
Some people malfunction less than others in the modern world. Some people's brains are more prone to infectious ideas than others.
*If* you are susceptible to such infections, your motives might well be complex - as complex as a list of all the deleterious infections you harbour. Also, your motives might be liable to change over time - as more deleterious infections are acquired.
Otherwise, most of your motives may be highly compressible to a few short theories - plus the facts of human biology.
Well, yeah ... the motivations that are common to large numbers of people have been studied and discussed enough that they can now be compactly explained!
However, the motivations that are restricted to rare weird individuals are harder to explain because we lack a good language for them ...
It's easy for me to explain that I love my kids and care about their welfare, because that's so much a part of our culture and heritage
Explaining what motivates me regarding immortality, the preservation and/or transformation of my self and other mind-patterns etc., is just an awful lot harder, because we lack a good common language for discussing such things. Creating such a language is a major ongoing philosophical/scientific endeavor...
I do get the inkling that you may be trying to analyze humans according to an oversimplistic model that doesn't really fit.
Biological motivations and simple logic explain a lot about human behavior, but they leave out a lot of the most interesting stuff....
The stuff that's interesting and nonobvious about my motivational structure is the stuff that relates to places where I've overcome the basic biological and cultural motivations, right?
Also, I don't know what you really expect to get by asking people to list their motivations. Let's say Billy-Bob tells you he is motivated solely by helping others, but his subgoal is to get rich so that he can then help others more effectively. What does it really matter to you that he said that? How do you know whether to believe him? Even if he believes it himself, how do you know the extent to which it's true?
If Mary says she gives her $$ away out of pure compassion rather than ego-gratification, do you believe her? Does she really know? Does the distinction really mean anything?
yes, it's complicated
Ultimately, "motivation" is a theoretical abstraction that doesn't really directly apply to human beings.
We can APPROXIMATE humans, in some contexts and over certain periods of time, by theoretical goal-achieving, motivation-following systems. But this doesn't really capture the essence of what they are, it's just a way of modeling them....
Thanks again for your reply.
Re: oversimplistic model
Simple models that explain lots of facts do seem important to me.
Re: The stuff that's interesting and nonobvious about my motivational structure is the stuff that relates to places where I've overcome the basic biological and cultural motivations, right?
Perhaps. I wasn't particularly asking after the nonobvious stuff, though. For instance if you said you mostly wanted to have a family and kids who love you, and be respected by other members of society, that answer would be fine by me - even if most humans have much the same aims.
Re: My motivations for enquiring after the values of others.
I have several. I have an academic interest in goal-directed systems. I am interested to know what the goals of other agents are, because that is an important fact about them that assists me in understanding their behaviour. I am interested in the extent to which people's stated goals actually match their behaviour. I am interested in how and why people use their own stated goals as signalling tools, to deceive others about their intentions and thereby manipulate their behaviour - and I am interested in the defences against this sort of thing. It seems an important and under-appreciated area.
Re: How do you know whether to believe him? Even if he believes it himself, how do you know the extent to which it's true?
Well, actions speak louder than stated goals. If Billy says he is on the protest ship in order to help save the whales - but then meets his future wife there, and goes on to have several kids with her - then it becomes time to re-examine his stated motives. Does the hypothesis that he professed high motives in order to present himself as concerned with higher things, and thereby attract other high-minded individuals he could mate with, fit the facts better?
Re: We can APPROXIMATE humans, in some contexts and over certain periods of time, by theoretical goal-achieving, motivation-following systems. But this doesn't really capture the essence of what they are [...]
Right - more what they should be, perhaps. Sure, most agents have design problems, developmental problems, damage and other issues that cause them to fall short of this kind of ideal system - but it can still be a useful basis for a model of them. Similarly, a vacuum cleaner can still usefully be modelled as a cleaning device - even if its dirt-bag is half full and its brushes are a bit clogged.
Dear Tim
My advice is not to get trapped in little boxes. The concept of the Singularity is a much bigger box, filled with smaller ones.
We can stay in the first little box and debate until 2050 if you want to hold off the improvement of the world.
Placing the subject of 'motive' into the arena of money is slippery. One of the most important factors in accelerating returns is money: the more money produced, the more money goes into development.
I think Ben is of the type that is looking for a path where capital will be produced in order to fund the next step and the following steps.
If he were looking at only 'one' idea - make it work, then go sip a drink on the beach for the rest of his life - he would not have achieved what he has to this point.
Have a little faith and step outside of the box for a moment.
Money is the worst motivator.