Wednesday, November 26, 2008

The Increasing Value of Peculiar Intelligence

One more sociopolitical/futurist observation ...

If you're not aware of David Brin's Transparent Society concept, you should read the book, or at least start with the Web page.

His basic idea is that, as surveillance technology improves, there are two possibilities:

  • The government and allied powers watch everybody, asymmetrically
  • Everybody watches everyone else, symmetrically (including the government and allied powers being watched)

He calls the latter possibility sousveillance ... watching from below, everyone watching everyone ...

What occurs to me is that in a transparent society, there is massive economic value attached to peculiar intelligence

This is because, if everyone can see everything else, the best way to gain advantage is to have something that nobody can understand even if they see it

And it's quite possible that, even if they know that's your explicit strategy, others can't really do anything to thwart it...

Yes, a transparent society could decide to outlaw inscrutability. But this would have terrible consequences, because nearly all radical advances are initially inscrutable....

Inscrutability is dangerous. But it's also, almost by definition, the only path to radical growth.

I argued in a recent blog post that part of the cause of the recent financial crisis is the development of financial instruments so complex that they are inscrutable to nearly everyone -- so that even if banks play by the rules and operate transparently, they can still trick shareholders (and journalists) because these people can't understand what they see!

But it seems that this recent issue with banks is just a preliminary glimmering of what's to come....

Yes, maybe there is something a bit self-centered or self-serving in this theory, since I seem to personally have more differential in "peculiar intelligence" than in most other qualities ... but, call it what you will, my peculiar intelligence is definitely pushing me to the conclusion that peculiar intelligence is going to be a more and more precious commodity as the Age of Transparency unfolds...

Obviously, you can also see this phenomenon in financial trading even in a non-crisis time. It's not enough to be smart at predicting the markets to make a LOT of money ... because if you're just smart, but in non-peculiar way, then once you start trading a lot of money, others will observe your trades and pick up the pattern, and start being smart in the same way as you ... and then you'll lose your edge. The way to REALLY make a lot of money in the markets is to be very smart, and in a very peculiar way, so that even if others watch your trades carefully, they can't understand the pattern, and they can't imitate your methodology....

The best trader I know personally is Jaffray Woodriff, and he exemplifies this principle wonderfully: very intelligent, very peculiar (though in an extremely personable, enjoyable way), and very "peculiarly intelligent" ;-)

Democratic market socialism redux

All in all, the conclusion I'm coming to lately ... as reflected in my last two blog posts, as well as in some other thinking ... is that government is going to need to do some rather careful and specific things to guide society toward a positive Singularity.

Yes, if someone creates a hard-takeoff with a superhuman AGI, then the government and other human institutions may be largely irrelevant.

But if there is a soft takeoff lasting say 1-3 decades ... before the hard takeoff comes along ... then, my view is increasingly that the market system is going to screw things up, and lead to a situation where there are a lot of unhappy and disaffected people ... which increases the odds of some kind of nasty terrorist act intervening and preventing a positive Singularity from occurring.

It seems we may need to review the general line of thinking (though not many of the specific proposals) from old democratic-socialism-style texts like Economics and the Public Purpose ...

Perhaps one positive consequence of the current economic crisis is that it may cause the US public to understand the value of well-directed government spending....

And yes, I'm well aware that most of my colleagues in the futurist community tend to be libertarian politically. I think they're just wrong. I am all in favor of getting rid of victimless crimes ... legalizing drugs and prostitution and so forth ... but, given the realities of the next century and the risks of a negative Singularity, I don't think we can afford to leave things up to the unpredictable self-organizing dynamics of the market economy ... I think we as a society will need to reflect on what we want the path to Singularity to be, and take specific concerted governmental actions to push ourselves along that path...

This is not the political dialogue for 2008 ... different issues are occupying people's minds right now ... but it may be the political dialogue for 2012 or 2015 ... or at latest, I'd guess, 2020 ...

Why the average workweek isn't decreasing faster ... and what we can do about it

This is another post on political, economic and futurist themes ... starting out with a reflection on a bogus patent, and winding up with a radical social policy proposal that just might improve life in the near future and also help pave the way to a positive Singularity... (yeah, I know I know, lack of ambition has never been one of my numerous faults ;-)

Let's start with the simple, oft-asked question: Why isn't the average workweek decreasing faster ... given all the amazing technology recently developed?

One clue is found in some news I read today: IBM has patented the idea of a specialized electronic device that makes it handier to split your restaurant bill among several people at the table (US Patent 7,457,767).

The patent application is really quite hilarious reading ... and the number of references to prior, equally bullshitty patents for related inventions is funny too.

What strikes me most here is the amount of effort expended by lawyers, patent examiners and judges in dealing with this crap. Not to mention their paralegals, secretaries, and on and on and on.

Why does this part of the contemporary human workload exist?

One could argue that some wasted work is a necessary part of a complex economic system, and that it would be hard to eliminate the crap without throwing out a lot of useful stuff as well.

I'm sure this is part of the truth, but I don't think it's the whole story.

Another part of the answer, I think, is: This kind of BS work exists because people have time to do it.

If people didn't have time to do this shit, because they all had to be occupied gathering food or making shelter or defending themselves against attackers -- or conceiving or manufacturing truly original and interesting inventions -- then the legal system would rapidly get adjusted so as to make bullshit patents like this illegal.

But, because we have plenty of extra resources in our economy ... due to the bounty created by advances in technology ... we (collectively and without explicitly intending to) adjust various parameters of our economy (such as the patent system) in such a way as to create loads of bullshit work for ourselves.

But there's a little more to it than this. Granted that we have the time ... but why do we choose to spend our time on this kind of crap? Instead of, say, doing less useless work and having more free time?

One important, relevant fact to grasp is that people like working.

Yes, it's not just that we're ambitious or greedy, it's that we actually like working -- in spite of the fact that we also like complaining about working. (Don't blame me -- humans are just fucked-up creatures....).

There is loads of data supporting this idea; I'll cite just a smattering here.

Americans work more and are happier than Europeans.

Old folks who work longer are happier.

I can't remember the link, but studies show that people are on average happier at work than at home (if asked at random times during the day), but when asked directly will SAY they are happier at home than at work.... I think this study was done by Mihaly Csikszentmihalyi.

He is the pioneer of the study of "flow", the state of supreme happiness that many humans enter into when carrying out some highly absorbing, satisfying activity, like creating an artwork, participating in a sport, acting in a play or doing scientific or other professional work ... or gardening ... or chopping down a tree ... actually the activity can be almost anything, the real point is in the way the mind interacts with the activity: it has to be enough to fully occupy the mind without causing too much frustration, so that the self is continually lost and rediscovered within the ongoing dynamism of the interactive process....

Among many more interesting observations, he notes that:

Those at work generally report that they wish they were at home, but when they're home they often feel passive, depressed or bored. "They have in mind that free time at home will make them feel better, but often it doesn't."

But, the plot thickens ... because, although we like working ... this doesn't necessarily mean we like working long hours.

People who work part-time are happier than those working full-time (84 per cent are happy versus 79 per cent).

So where we seem to fail is in creating more part-time jobs...

This seems to be because, for an employer, it's always more cost efficient to hire one full time worker than two part-timers ;-(

So, instead of making use of the bounty provided by technology by creating part-time jobs and also creating more opportunities for creative, growth-producing, flow-inducing leisure ... we make use of it by creating more and more full-time jobs doing more and more bullshit work, like processing patents for "inventions" like the one I referenced above.

Given the way our brains work, we're better off working full-time than we would be not working at all.

But there is at least some evidence to suggest we'd be even better off working fewer hours....

But, given the way the market system works, there is almost never any incentive for any employer to allow part-time workers. It just isn't going to be rational from their perspective, as it will decrease their economic efficiency relative to competitors.

The market system, it seems, is going to push toward the endless creation of BS work, because no individual company cares about reducing the workweek ... whereas individual companies DO in many cases profit by creating BS work ... if they can bill other companies (or the government) for this bullshit work....

So, this leads to the idea that the government of France may have had the right idea at heart, in creating the 35 hour maximum workweek. Which they have since rescinded, because in a global context, it made them uncompetitive with other countries lacking such a restriction.

But anyway, outlawing long work hours is obviously not the answer. People should have the freedom to work long hours and get paid for them, if they want to.

So, an interesting question, policy-wise, is how the government can create incentives for reduced workweeks, without introducing punitive and efficiency-killing regulations.

One possibility would be, instead of just government projects aimed at paying people to build infrastructure, to launch government projects aimed at paying people to do interesting, creative, growth-inducing stuff in their spare time.

Basically, what I'm suggesting is: massively boosted government funding for art and science and other not-directly-economically-productive creative work ... but without making people jump through hoops of fire to get it (i.e. no long, tedious, nearly-sure-to-fail grant applications) ... and specifically designed NOT to discriminate against people who do not-directly-economically-productive creative work only part-time.

This would make it harder for companies to find full-time employees, because it wouldn't be all that much more lucrative to work full-time than to work part-time plus earn a creative-activity stipend on the side. But, unlike France's previous restrictive laws, it would enable companies that REALLY need full-time employees to hire them, so long as they were able to pay the required premium salaries ... or recruit workers who preferred the offered work to paid on-the-side creative-activity....

I suspect this would actually boost the economy, by shifting effort away from BS make-work like processing bogus patents, and toward creative work that would end up having more indirect, ancillary economic value in all sorts of ways.

This may seem a funny thing to think about in the current economic context, when the economy is in such trouble globally. But I consider it obvious that the current economic troubles are "just" a glitch (albeit an extremely annoying one) on the path to a post-scarcity economy. And even so, the government is still giving away money right now to people who are out of work. What if the payment were decreased for people who didn't engage in creative activities, but increased for people who did? People's lives would become richer, as more of them would be more involved in creative activities. And the human world as a whole would become richer because of all of these new creations.

And this sort of thing will become all the more interesting once robots and AI software eliminate more and more jobs. At that point the market system, unrestricted, would probably create an insane overgrowth of bullshit full-time jobs ... but a part-time creative-activity incentive as I've described could interfere with this dynamic, and nudge things toward a more agreeable situation where people divide their time between genuinely economically useful work, and non-directly-economically-useful but personally-and-collectively-rewarding creative activity.

Furthermore, this would create a culture of creative activity, that would serve us well once robots and AIs eliminate jobs altogether. It would be great if we could greet the Singularity with a culture of collective creativity rather than one characterized by a combination of long hours on useless bullshit jobs, combined with leisure time spent staring at the tube (whether the TV or the YouTube) or playing repetitive shoot-em-up video games ... etc. etc.

OK -- now ... time for me to get back to work ;-)

Tuesday, November 25, 2008

The Inevitable Increase of Irrationality (more musings on the recent financial crisis)

In a recent email dialogue with my dad, I found myself spouting off again about the possible causes of the recent financial crisis ... and found myself coming to a conclusion that may have some more general implications as we move forward toward Singularity.

Basically, I came to the conclusion that the financial crisis can be interpreted, in part, as a failure of the "rational actor" model in economics.

And furthermore, it's a failure of the rational actor model for an interesting reason: technology is advancing sufficiently that the world is too damn complex for most people to be rational about it, even if they want to be.

And this situation, I suggest, is likely to get worse and worse as technology advances.

Greenspan said something related in an interview shortly after the credit crunch hit: he said he was shocked that major banks would deviate so far from rational self-interest.

To the extent this is the case, the recent crisis could be viewed as a failure of the rational-actor model -- and a validation of the need to view socioeconomic systems as complex self-organizing systems, involving psychological, sociological and economic dynamics all tangled together ... a need that will only increase as technology advances and makes it harder and harder for even smart rational-minded people to approximate rational economic judgments.

As a semi-aside: In terms of traditional economists, I'd say Galbraith's perspective probably accounts for this crisis best. He was always skeptical of over-mathematical approaches, and stressed the need to look at economic mechanisms in their social context. And of course, Obama's proposed solution to the problem is strongly Galbraithian in nature (except that Galbraith, not being an elected official, had the guts to call it "socialism" ;-)

Now, there is (of course) a counterargument to the claim that the recent financial crisis indicates a failure of the rational actor model. But one can also make a counter-counterargument, which I find more compelling.

The counterargument is: the banks as institutions were perhaps being irrational, but the individual decision-makers within the banks were being rational, because their incentive structures were asymmetric ... they got big bonuses for winning big, and small penalties for losing big. For an individual banker, taking a huge gamble may be rational: you get a huge bonus upon winning, and upon losing you just need to go find another job (a relatively small penalty). So in that sense the individuals working in the banks may have been acting rationally ...
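This asymmetry is easy to make concrete with toy numbers (my own illustrative figures, not actual bank data): the very same gamble can have positive expected value for the banker and negative expected value for the bank.

```python
# Hypothetical payoffs for one risky trade, chosen only to illustrate
# the asymmetric-incentive argument -- not real figures.
p_win = 0.5
banker_bonus, banker_penalty = 1_000_000, 100_000    # big bonus vs. job-hunting cost
bank_gain, bank_loss = 10_000_000, 50_000_000        # upside vs. blowup for the bank

ev_banker = p_win * banker_bonus - (1 - p_win) * banker_penalty
ev_bank = p_win * bank_gain - (1 - p_win) * bank_loss

assert ev_banker > 0   # the employee rationally takes the gamble
assert ev_bank < 0     # the shareholders rationally should not want it taken
```

The point of the sketch is just that individual rationality and institutional rationality come apart whenever bonuses are convex and penalties are capped.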

Yet the corporations were not acting rationally: which means that the bank shareholders were the ones not acting rationally, by not constraining the bank managers to put more appropriate incentive structures in place for their employees...

But why were the shareholders acting irrationally?

Well, I suggest that the reason the bank shareholders did not act rationally was, largely, out of ignorance.

Because the shareholders were just too ignorant of the actual risks involved in these complex financial instruments, not to mention of the incentive structures in place within the banks, etc.

We have plenty of legal transparency requirements in place, so that shareholders can see what's going on inside the corporations they invest in. But this is of limited value if the shareholders can't understand what's going on.

So, getting back to rational actor theory: the novel problem that we have here (added on top of the usual human problems of dishonesty, corruption and so forth) may be that, in a world that is increasingly complex (with financial instruments defined by advanced mathematics, for example), being a rational economic actor is too difficult for almost anybody.

The rational-actor assumption fails for a lot of reasons, as many economists have documented in the last few decades ... but this analysis of the current financial crisis suggests that as technology advances, it is going to fail worse and worse.

You could argue that this effect would be counterbalanced by the emergence of an increasingly effective professional "explainer" class who will boil down complex phenomena so that individuals (like bank shareholders) can make judgments effectively.

However, there are multiple problems with this.

For one thing, with modern media, there is so much noise out there that even if the correct explanations are broadcast to the world on the Web, the average person has essentially no way to select them from among the incorrect explanations. OK, they can assume the explanations given in the New York Times are correct ... but of course there is not really any objective and independent press out there, and "believing what the authorities tell you" is a strategy with many known risks.

And, secondly, it's not clear that the journalists of the world can really keep up with developments well enough to give good, solid, objective advice and explanations, even if that is their goal!

So we can expect that as Singularity approaches, the rational-actor model will deviate worse and worse from reality, making economics increasingly difficult to predict according to traditional methods.

Some folks thought that increasing technology would somehow decrease "market friction" and make markets more "efficient" ... in the technical sense of "efficiency," meaning that everyone is paying the mathematically optimal price for the things they buy.

But in fact, increasing technology seems to be increasing "market incomprehensibility" and hence, in at least some important cases, making markets LESS efficient ...

But of course, making markets less "efficient" in the technical sense doesn't necessarily make the economy less efficient in a more general sense.

The economy is, in some senses, becoming fantastically more and more efficient (at producing more and more interesting configurations of matter and mind given less and less usage of human and material resources) ... but it's doing so via complex, self-organizing dynamics ... not via libertarian-style, rational-actor-based free-market dynamics.

Interesting times ahead....

Glocal memory, neural nets, AI, psi

The dichotomy between localized and globalized (distributed) memory in the brain and world is pretty much bullcrap.

For a long time I've had the idea of harmonizing the two, and the Novamente and OpenCog AGI designs incorporate this harmony, as do the speculative brain theories I proposed in my books in the 1990s.

But I never really articulated the concept of global/local ... i.e. glocal ... memory in a general way before, which I've now done in a semi-technical essay called "Glocal Memory"

I decided to write up the glocal memory stuff now because I'm writing a technical paper on glocal Hopfield neural nets and glocal attention allocation in OpenCog, and I wanted to have something to reference in the paper on glocal-memory-in-general, and there wasn't anything....

In case that's not odd enough for you, I also posted another essay using the glocal memory concept to give a possible model for how psi might work, building on others' ideas regarding filter and transmission models of mind.


Wednesday, November 19, 2008

A meta-theory of consciousness


Let C = consciousness

P = physical reality

Then, the various theories of consciousness may be placed into 11 categories:
  1. C and P are the same
  2. C is a part of P
  3. P is a part of C
  4. C and P are parts of something else
  5. C and P share a common part but are nonidentical
  6. C and P are parts of each other (hyperset style)
  7. C and P are separate -- somehow correlated but not via parthood
  8. C does not exist, it's an illusion
  9. P does not exist, it's an illusion
  10. C and P are parts of each other, and also parts of something else
  11. C and P are parts of each other, and also parts of something else, which is also a part of them
The word "part" should not be overinterpreted in the above, it's just used in a generic and informal sense.

I haven't yet broken this down mathematically, but on quick observation these seem to be the only logically possible relationships involving C, P and parthood (and allowing for circular parthood).

Each of these theory-categories has some adherents, and if I had more time and tolerance-for-boredom, I could divide existing theories of consciousness into these categories.

The question I want to ask here is: What could the relationship between C and P be, so as to explain why different individuals would propose each of the above 11 theory-categories?

My observation is that there is ONE category of theories that would explain all the 11 theory-categories as different views of a common reality, involving in some cases errors of omission but in no case errors of commission.

This is category 11: that C and P contain each other, and are also contained in something else that is also a part of them.

If we posit an underlying model of the general form

C = {C0, P, D}

P = {P0, C, D}

D = {D0, C, P}

then we can see how the other 10 theory-categories would emerge via sins of omission:
  1. C and P are the same results from ignoring C0 and P0
  2. C is a part of P results from ignoring C0
  3. P is a part of C results from ignoring P0
  4. C and P are parts of something else ignores that D is a part of C and P, and that C and P are parts of each other
  5. C and P share a common part but are nonidentical, is correct so long as one allows this common part to be a hyperset (which is C nested within P nested within C ... etc.)
  6. C and P are parts of each other (hyperset style) results from ignoring D
  7. C and P are separate -- somehow correlated but not via parthood -- results from projecting C into {C0}, P into {P0}
  8. C does not exist, it's an illusion results from ignoring C
  9. P does not exist, it's an illusion results from ignoring P
  10. C and P are parts of each other, and also parts of something else, results from ignoring that C and P contain D
It is thus tempting to conjecture that 11 is the underlying reality, and people who posit the other 10 theories are simply ignoring some of the aspects of reality -- i.e. to use the old metaphor, they are like blind people each feeling a different part of the elephant and concluding that's all there is.
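One way to make the category-11 structure concrete (my own formalization, not part of the post's argument) is to treat parthood as a directed graph and check the mutual containments directly, allowing cycles in the spirit of hyperset membership:

```python
# A sketch of category 11's mutual-containment structure as a directed
# parthood graph, mirroring C = {C0, P, D}, P = {P0, C, D}, D = {D0, C, P}.
parts = {
    "C": {"C0", "P", "D"},
    "P": {"P0", "C", "D"},
    "D": {"D0", "C", "P"},
}

def is_part_of(x, y, seen=None):
    """True if x occurs somewhere inside y, following parthood links.
    Cycles are permitted (non-well-founded, hyperset-style membership),
    so we track visited nodes to guarantee termination."""
    seen = set() if seen is None else seen
    if y in seen:
        return False
    seen.add(y)
    direct = parts.get(y, set())
    return x in direct or any(is_part_of(x, z, seen) for z in direct)

# C and P are parts of each other, and each both contains and is
# contained in D -- exactly the circular structure of category 11:
assert is_part_of("C", "P") and is_part_of("P", "C")
assert is_part_of("C", "D") and is_part_of("D", "C")
assert is_part_of("P", "D") and is_part_of("D", "P")
```

The other theory-categories then correspond to deleting nodes or edges from this graph: for example, removing D (and its edges) leaves only the mutual C/P containment of category 6.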

Question for reflection: what is D? ;-)

Friday, November 14, 2008

Ethics as an Attractor

While visiting my father a couple weeks ago, he talked a bit about what he sees as one of the core ideas of religion: the notion of some kind of "universal morality" or "universal moral force-or-field-or-whatever."

While he's not a big fan of the more superstitious aspects of historical or contemporary religions, he does believe there is a universal sense of right versus wrong. For instance, he feels that killing a random person on the street just because one is in a bad mood, is somehow REALLY wrong, not just "wrong according to some person or group's subjective belief system."

My main initial reaction to this idea was not understanding what it might mean. I'm enough of a postmodernist that absolutes don't make much sense to me.

Even "2+2=4" is only true relative to some specific formal system defining the terms involved. And " '2+2=4' is true relative to formal system F " is also only true relative to some metamathematical system -- and so on ... even in mathematics, you never escape the chain of indirections and get to something absolute.

But in the few days after we had that conversation, I thought a bit about what meaning could be assigned to his notion of universal morality within my postmodernist world-view. This is a topic I addressed in the final chapter of The Path to Posthumanity, but not from exactly this perspective: there I was more concerned with enunciating moral principles sufficiently abstract to guide the development of posthumans, rather than with debating the absoluteness or relativity of such principles.

An interesting perspective with pertinence to this issue is Mark Waser's argument that ethical, cooperative behavior (in some form) may be an "attractor" of social systems. Meaning roughly that:

  • social systems without something like this are unlikely to survive ... except by adopting some form of ethical, cooperative behavior pattern as a norm
  • as a new social system originates and gradually grows, it is likely to evolve its own form of ethical, cooperative behavior

I think this is an interesting perspective, and in the next paragraphs I'll point out some of its limits, and then connect it back to the conversation with my father.

To make the argument more concrete, I'll begin by defining "ethics" in my own special way -- probably not precisely the same as how Mark intends it; but then, in his paper Mark doesn't give a very precisely drawn definition. First of all I'll define a "code of social behavior" as a set of rules, habits or principles that some agent in a society adopts to guide its behavior toward other agents in that society. I'll then define an ethics as a code of social behavior such that

  • it occurs in an agent that has the internal flexibility to plausibly adopt the ethics, or not, without causing some sort of immediate disaster for the agent
  • it occurs in an agent that is capable of counterfactual reasoning regarding the social situations it regularly encounters (psychologists have various ways to test this, that have been used to study animals for example)
  • it involves the agent carrying out behaviors that it reasons (counterfactually) it would not carry out if it did not adhere to the ethics
  • it involves the agent carrying out behaviors that the agent believes will benefit other agents in the society

In short, I define an ethics as a code that agents can optionally adopt, and if they adopt it, they know they're taking actions to benefit others, that they reason they wouldn't take in the absence of the code of ethics.
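Schematically, the definition can be rendered as a simple predicate (a rough formalization of the four criteria above; the field names are hypothetical labels of my choosing, not terminology from the post):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    can_decline_code: bool           # criterion 1: adopting the code is genuinely optional
    counterfactual_reasoning: bool   # criterion 2: can reason about "what if I lacked it?"

@dataclass
class CodeOfSocialBehavior:
    changes_behavior: bool   # criterion 3: mandates acts the agent otherwise wouldn't do
    benefits_others: bool    # criterion 4: the agent believes those acts help other agents

def is_ethics(agent: Agent, code: CodeOfSocialBehavior) -> bool:
    """A code of social behavior counts as an ethics only if all four
    criteria from the definition hold jointly."""
    return (agent.can_decline_code
            and agent.counterfactual_reasoning
            and code.changes_behavior
            and code.benefits_others)
```

Note how the predicate makes the dependence explicit: "ethics" here is a property of an agent-code pair, not of the code alone.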

This reminds me of a story one of my great-uncles used to tell about my 5-year-old incarnation. He was watching me play with toys, and then my little sister Rebecca came up and asked for one of the toys I was most enjoying. After a moment of reflection, I gave it to her, and then I commented to him that "Sometimes it feels good to do something for someone else that you don't want to do."

Pertaining to Mark Waser's argument, we can then ask whether ethics in this sense is likely to be an attractor. There are two questions here, of course: will the existence of an ethics, any ethics, be an attractor; and will some specific ethics be an attractor. I'll deal mainly with the first question, because if "the existence of ethics" isn't an attractor, then obviously no specific ethics is going to be.

The main limit I see in Waser's argument (as ported to my definition of ethics) is that the argument doesn't seem to apply in cases where one or more of the members of the social system are vastly more intrinsically capable than the others (in relevant ways).

In societies of humans, it could be argued that unethical behavior is ultimately unstable, because the oppressed underclass (with asymmetrically little social power, but roughly equal intrinsic capabilities) will eventually revolt. But the possibility of revolt exists because outside of the scope of the social system, all healthy adult humans have roughly the same level of intrinsic capability.

One can imagine a scenario roughly like the one in H.G. Wells' "The Time Machine," where a subset of society that actually is strongly more capable in relevant senses (smarter, saner, stronger) takes control and oppresses the less capable majority. Perhaps any adequately capable individual among the underclass is either killed or taken into the overclass ... and any inadequately capable person among the overclass is either killed or tossed into the underclass. In this kind of scenario, after a certain number of generations, one would have a situation in which there would be pressure for ethics within each class, but not ethics between them.

Nothing like the above has ever happened in human history, of course, nor is it likely to. However, the case of future AI minds is somewhat different. All humans are built according to the same architecture and have roughly the same amount of intrinsic computational resources, but the same won't necessarily be the case for all AIs.

I see no reason to believe that existence-of-ethics will be attracting in societies involving members with strongly asymmetric capabilities. In fact, it seems it might be easier to frame an alternate argument: that in a society consisting of two groups of radically different degrees of intrinsic capability, the attractor will be

  • ethical behavior within the overclass
  • ethical behavior within the underclass
  • exploitative behavior from the overclass to the underclass

A related situation is human behavior toward "lower animals" -- but this is a different sort of matter because animals don't meet the criteria of ethical agents I laid out above. Adult treatment toward children also doesn't quite fit the mold of this situation, because the intrinsic difference in capability between parents and children reverses as the children grow older (leading to sayings like, "Don't spank your kid too hard; when he grows up he'll be the one choosing your nursing home!").

One thus arrives at the hypothesis that a restricted form of Waser's argument might hold: maybe existence-of-ethics is an attractor in societies composed of agents with roughly equal intrinsic capabilities (in relevant situations).

As to what specific ethical codes may be attractors, it seems to me that is going to depend on the specifics of the agents and societies. But the general phenomenon of choosing actions for others' benefit that one knows one would not take in the absence of the ethical code, seems something that could plausibly be argued to serve as an attractor in any society of sufficiently flexible, intelligent organisms with roughly equal intrinsic relevant-capabilities.

Note what the notion of an attractor means here: essentially it means that if you have a society with the right characteristics that *almost* has ethics, then the society will eventually evolve ethics.

Ethics being an attractor doesn't imply that it must be the only attractor; there could be other attractors with very different properties. A stronger claim -- which I'm not making here -- would be that any society with the right characteristics must necessarily evolve into a state supporting ethics.
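To make the attractor idea concrete, here is a toy simulation (my own illustrative sketch, with made-up payoff numbers, not anything from Waser's argument): replicator dynamics on a coordination game in which "ethical" behavior pays off mainly when others are also ethical. A society that *almost* has ethics gets pulled all the way in, while a society far from ethics falls into a different attractor:

```python
# Toy sketch with hypothetical payoffs: "ethical" agents earn 4 when meeting
# each other and 0 against exploiters; exploiters earn 3 against ethical
# agents and 2 against each other (a stag-hunt-style coordination game).
def evolve(x, steps=500):
    """x = fraction of 'ethical' agents; iterate discrete replicator dynamics."""
    for _ in range(steps):
        f_ethical = 4 * x                # expected payoff of an ethical agent
        f_exploit = 3 * x + 2 * (1 - x)  # expected payoff of an exploiter
        f_avg = x * f_ethical + (1 - x) * f_exploit
        x = x * f_ethical / f_avg        # strategies grow with relative fitness
    return x

print(evolve(0.8))  # a society that "almost has ethics" converges near 1.0
print(evolve(0.3))  # a society far from ethics converges near 0.0
```

Under these particular numbers the tipping point is two-thirds: above it, ethics is self-reinforcing; below it, the other attractor wins, which also illustrates why ethics being *an* attractor doesn't make it the *only* one.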

Another twist on this is obtained by thinking about the difference between the conscious and unconscious minds.

Let's say we're talking about a society consisting of agents with reflective, deliberative capability -- but with a lot of mental habits that aren't easily susceptible to deliberation. This is certainly the situation we humans are in: most of the unconscious behavior-patterns that govern us, are extremely difficult for us to pull into our theater of conscious reflection and reason about ... for a variety of reasons, including the limited capacity of our conscious theater, and the way much unconscious knowledge is represented in the brain, which is only laboriously translatable into the way the conscious theater wants to represent knowledge.

Then, it may be that ethics winds up getting largely packed into the unconscious part of the mind, which is hard to deliberatively reason about. This might happen, for instance, if ethics were largely taught by imitation and reinforcement, rather than by abstract instruction. And this does seem to be how early-childhood ethical instruction happens among humans. We correct a child for doing something bad and reward them for doing something good (reinforcement learning), and we indicate to them real-world everyday-life examples of ethical behavior (both via personal example and via fictional stories, movies and the like). Abstract ethical principles only make sense to them via grounding in this experiential store of ethical examples.

So, if ethics evolves in a society due to its attracting nature, and is memetically propagated largely through unconscious instruction, then in effect what is happening is that in many cases

  • the reflective, deliberative mind is thinking about the individual organism and its utility
  • the unconscious mind is thinking about the superorganism of the overall society, via the experientially inculcated ethical principles

The voice of the conscience is thus revealed as the voice of the existence-of-ethics attractor that superorganisms (I hypothesize, following Waser) inevitably settle into, assuming their member agents possess certain characteristics.

Where does this leave my dad's notion of a universal ethical force? It doesn't validate any such thing, in quite the sense that my dad seemed to mean it.

However, it does validate the notion that an unconscious sense of ethics may be universal in the sense of being an inevitable mathematical property of any society satisfying certain characteristics.

What does this mean for me, as a reasoning being with an intuitive, unconscious sense of ethics ... and also the deliberative capability to think about this ethical sense ... as well as some ability to modify this sense of ethics if I want to?

Among other things, it reminds me that the deliberative, ratiocinative aspect of the mind probably needs to be a little humbler than it sometimes gets to be, inside us hyperintellectual, nerdy types. The "I" of the deliberative mind is a symbol for the whole mind, rather than constituting the whole mind ... and there may be systematic patterns characterizing which of our mind-patterns get stored in easily-deliberatively-accessible form and which do not. So as frustrating as it can be to those of us in love with ratiocination, if one wants to be maximally rational, one must accept that sometimes "the unconscious knows best" ... even if one can't understand its reasons ... because through observation, imitation and reinforcement, it gained experiential knowledge that possesses a scale and (in some cases) a form that is not ratiocination-friendly (but may yet be rationally considered extremely useful for goal-achievement).

Unfortunately, the unconscious also makes a lot of mistakes and possesses a powerful capacity for tricking itself ... which, all in all, makes the business of being a finite-resources mind with an even-more-acutely-finite-resources deliberative/rational component, an inordinately tricky one.

I personally find these issues slightly less confusing if I view them from the perspective of pattern space. Suppose we consider the universe as a set of patterns, governed by a variety of pattern-dynamics rules, including the rule that patterns tend to extend themselves. Different patterns have different degrees of power -- that is, they have differing probabilities of success in extending themselves. The arguments I've given above suggest that ethics, as an abstract pattern for organizing behaviors, is a pattern of considerable power, especially among societies of intelligent entities of roughly comparable capability. In this view there is no "universal moral force" -- the universal force is the tendency of patterns to extend themselves, and ethics as a pattern seems to contain a great deal of this force.

On these issues there are two fairly extreme points of view, which may be compactly summarized (I hope not parodied) as:

  1. there is an absolutely correct ethics, an absolute right versus wrong, which may be imperfectly known to humans but which we can hope to better and better discern through various mental and social exercises
  2. ethics is purely relative in nature: people adopt ethical codes because they were taught to, or maybe because their genes tell them to (i.e. because their species was taught to, by evolution) ... but there is no fundamental meaning to these ethics beyond social, psychological or evolutionary happenstance

Unlike my father, I never had much attraction to the first perspective. The second perspective has bedeviled me from time to time, yet I've always been nagged by a suspicion there's something deeper going on (yes, of course, someone can attribute this internal nagging to my psychology, my upbringing or my evolutionary heritage!). I don't delude myself I've fully gotten to the bottom of the issue here, but I hope that (building on the ideas of others) I've made a teeny bit of progress.

In What Sense Is "Overcoming Bias" a Good Goal?

This blog post consists of some musings that occurred to me a few weeks ago while reading the multi-author blog Overcoming Bias. They were triggered not by the contents of the blog -- though those are sometimes quite interesting -- but merely by its name. My business in this post is to probe the extent to which "overcoming bias" is actually the right way to think about the problem of (as the Welcome page for the Overcoming Bias blog informally puts it) "How can we obtain beliefs closer to reality?"

I think it's clear that "overcoming bias" is important, but I also think it's important to explore and understand the limitations of "overcoming bias" as a methodology for obtaining beliefs that are more useful in achieving one's goals.

(Note that in my own thinking, I tend to more often think in terms of "obtaining beliefs that are more useful in achieving one's goals," rather than in terms of "obtaining beliefs that are closer to reality." In many contexts this just amounts to a nitpick, but it also reflects a significant philosophical distinction. I don't make the philosophical assumption of an objective reality, to which beliefs are to be compared. My philosophical view of beliefs could be loosely considered Nietzschean, though I doubt it agrees with Nietzsche's views in all respects.)

According to Wikipedia,

Bias is a term used to describe a tendency or preference towards a particular perspective, ideology or result, especially when the tendency interferes with the ability to be impartial, unprejudiced, or objective. The term biased is used to describe an action, judgment, or other outcome influenced by a prejudged perspective.

This definition is worth dissecting because it embodies two different aspects of the "bias" concept, which are often confused in ordinary discourse. I'll call these:

Bias_1: "a tendency or preference toward a particular perspective"

Bias_2: "an instance of Bias_1 that interferes with the ability to be impartial, unprejudiced or objective", which I'll replace with "... interferes with the ability to achieve one's goals effectively."

Bias_1 is, obviously, not necessarily a bad thing.

First of all, the universe we live in has particular characteristics (relative to a universe randomly selected from a sensibly-defined space of possible universes), and being biased in a way that reflects these characteristics may be a good thing.

Secondly, the particular environments and goals that an organism lives in, may have particular characteristics, and it may benefit the organism to be biased in a way that reflects these characteristics.

Now, ideally, an organism would be aware of its environment- or goal- specific biases, so that if its environment or goals change, it can change its biases accordingly. On the other hand, it may be that maintaining this awareness detracts from the organism's ability to achieve goals, if it consumes a lot of resources that could otherwise be spent doing other stuff (even if this other stuff is done in a way that's biased toward the particular environment and goals at hand).

When discussed in a political context, "bias" is assumed to be a bad thing, as in the "especially when" clause of the above Wikipedia definition. Gender bias and racial bias are politically incorrect and, according to most modern moral systems, immoral. The reason these biases are considered bad is rooted in the (correct, in most cases) assumption that they constitute Bias_2 with respect to the goals that most modern moral systems say we should have.

On the other hand, in cognitive science, bias is not always a bad thing. One may argue, as Eric Baum has done persuasively in What Is Thought?, that the human mind's ability to achieve its goals in the world is largely due to the inductive bias that it embodies, which is placed into it via evolutionary pressure on brain structure. In this context, bias is a good thing. The brain is a general-purpose intelligence, but it is biased to be able to more easily solve some kinds of problems (achieve some kinds of goals) than others. Without this biasing, there's no way a system with the limited computational capacity of the human brain would be able to learn and do all the things it does in the short lifespan of a human organism. The inductive bias that Baum speaks about is largely discussed as Bias_1, but also may in some cases function as Bias_2, because biases that are adaptive in some circumstances may be maladaptive in others.
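Baum's point about the usefulness of inductive bias can be illustrated with a toy sketch (hypothetical data of my own devising, not an example from his book): given only a few samples from a world that happens to be linear, a learner biased toward linear hypotheses generalizes far better than an "unbiased" learner that merely memorizes.

```python
# Toy illustration of inductive bias (hypothetical data): the world is y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]          # four training examples

# "Biased" learner: assumes the world is linear, fits by least squares.
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# "Unbiased" learner: memorizes, predicts via nearest neighbor.
def nearest(x):
    return ys[min(range(len(xs)), key=lambda i: abs(xs[i] - x))]

x_test, y_true = 10.0, 2 * 10.0 + 1
print(abs(slope * x_test + intercept - y_true))  # linear-biased learner: error 0.0
print(abs(nearest(x_test) - y_true))             # memorizer: error 14.0
```

Of course, the same linear bias would be a handicap (a Bias_2, in the above terminology) in a world that wasn't linear -- which is exactly the point that biases adaptive in some circumstances may be maladaptive in others.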

One might argue that, in the case of evolved human inductive bias, it's the evolutionary process itself that has been less biased, and has (in a relatively unbiased way) evolved brains that are biased to the particular conditions on Earth. However, this is not entirely clear. The evolutionary mechanisms existing on Earth have a lot of particularities that seem adapted to the specific chemical conditions on Earth, for example.

One may argue that, even though we humans are born with certain useful biases, it is to our advantage to become reflective and deliberative enough to overcome these biases in those cases where they're not productive. This is certainly true -- to an extent. However, as noted above, it's also true that reflection and deliberation consume a lot of resources. Any organism with limited resources has to choose between spending its resources overcoming its biases (which may ultimately help it to achieve its goals), and spending its resources achieving its goals in a more direct way.

Furthermore, it's an interesting possibility that resource-constrained minds may sometimes have biases that help them achieve their goals, yet that they are not able to effectively reflect and deliberate on. Why might this be? Because the class of habits that an organism can acquire via reinforcement learning, may not fully overlap with the class of habits that the organism can study via explicit reflective, deliberative inference. For any particular mind-architecture, there are likely to be some things that are more easily learnable as "experientially acquired know-how" than as explicit, logically-analyzable knowledge. (And, on the other hand, there are going to be other things that are more easily arrived at via explicit inference than via experiential know-how acquisition.)

If a certain habit of thought is far more amenable to experiential reinforcement based learning than reflective, logical deliberation, does this mean that one cannot assess its quality, with a view toward ridding it of unproductive biases? Not necessarily. But overcoming biases in these habits may be a different sort of science than overcoming biases in habits that are more easily susceptible to reason. For instance, the best way to overcome these sorts of biases may be to place oneself in a large variety of different situations, so as to achieve a wide variety of different reinforcement signaling patterns ... rather than to reflectively and deliberatively analyze one's biases.

Many of these reflections ultimately boil down to issues of the severely bounded computational capability of real organisms. This irritating little issue also arises when analyzing the relevance of probabilistic reasoning (Bayesian and otherwise) to rationality. If you buy Cox's or de Finetti's assumptions and arguments regarding the conceptual and mathematical foundations of probability theory (which I do), then it follows that a mind, given a huge amount of computational resources, should use probability theory (or do something closely equivalent) to figure out which actions it should take at which times in order to achieve its goals. But these nice theorems don't tell you anything about what a mind given a small or modest amount of computational resources should do. A real mind can't rigorously apply probability theory to all its judgments; it has to make some sort of heuristic assumptions ... and the optimal nature of these heuristic assumptions (and their dependencies on the actual amount of space and time resources available, and the specific types of goals and environments involved, etc.) is something we don't understand very well.

So, the specific strategy of overcoming Bias_2 by adhering more strictly to probability theory is interesting and often worthwhile, but not proven (nor convincingly argued) to always be the best thing to do for real systems in the real world.

In cases where the answer to some problem can be calculated using probability theory based on a relatively small number of available data items ... or a large number of data items that interrelate in a relatively simply characterizable way ... it's pretty obvious that the right thing for an intelligent person to do is to try to overcome some of their evolutionary biases, which may have evolved due to utility in some circumstances, but which clearly act as Bias_2 in many real-world circumstances. The "heuristics and biases" literature in cognitive psychology contains many compelling arguments in this regard. For instance, in many cases, it's obvious that the best way for us to achieve our goals is to learn to replace our evolved mechanisms for estimating degrees of probability, with calculations more closely reflecting the ones probability theory predicts. Professional gamblers figured this out a long time ago, but the lesson of the heuristics and biases literature has been how pervasive our cognitive errors (regarding probability and otherwise) are in ordinary life, as well as in gambling games.
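The classic illustration here is base-rate neglect in diagnostic reasoning. Using the conventional textbook numbers (illustrative here, not tied to any particular study cited in this post), a positive result on a fairly accurate test for a rare condition still leaves the probability of the condition under ten percent, though untutored intuition tends to say ninety:

```python
# Base-rate example with standard illustrative numbers: a condition with 1%
# prevalence, a test with 90% sensitivity and a 9.6% false-positive rate.
prevalence = 0.01
p_pos_given_disease = 0.90
p_pos_given_healthy = 0.096

# Bayes' rule: P(disease | positive test)
p_pos = (prevalence * p_pos_given_disease
         + (1 - prevalence) * p_pos_given_healthy)
posterior = prevalence * p_pos_given_disease / p_pos
print(round(posterior, 3))  # ~0.087 -- under 9%, not the intuitive ~90%
```

This is exactly the kind of problem -- few data items, simply interrelated -- where replacing the evolved heuristic with explicit calculation is clearly the right move.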

On the other hand, what about problems a mind confronts that involve masses of different data items, whose interrelationships are not clear, and about which much of the mind's knowledge was gained via tacit experience rather than careful inference or scientific analysis?

Many of these problems involve contextuality, which is a difficult (though certainly not impossible) thing to handle pragmatically within formal reasoning approaches, under severe computational resource constraints.

For these problems, there seem to be two viable strategies for improving one's effectiveness at adapting one's beliefs so as to be able to more adeptly achieve one's goals:

  1. Figure out a way to transform the problem into the kind that can be handled using explicit rational analysis
  2. Since so much of the knowledge involved was gained via experiential reinforcement-learning rather than inference ... seek to avoid Bias_2 via achieving a greater variety of relevant experiences

So what's my overall takeaway message?

  • We're small computational systems with big goals, so we have to be very biased, otherwise we wouldn't be able to achieve our goals
  • Distinguishing Bias_1 from Bias_2 is important theoretically, and also important *but not always possible* in practice
  • The right way to cure instances of Bias_2 depends to some extent on the nature of the mental habits involved in the bias
  • In some cases, diversity of experience may be a better way to remove Bias_2, than explicit adherence to formal laws of rationality
  • It is unclear in which circumstances (attempted approximate) adherence to probability theory or other formal laws of rationality is actually the right thing for a finite system to do, in order to optimally achieve its goals
  • Heuristically, it seems that adherence to formal laws of rationality generally makes most sense in cases where contextuality is not so critical, and the relevant judgments depend sensitively mainly on a relatively small number of data items (or a large number of relatively-simply-interrelated data items)

Saturday, November 08, 2008

In Search of the "Machines of Loving Grace": AI, Robotics and Empathy

Empathy: the ability to feel each others’ feelings. It lies at the core of what makes us human. But how important will it be to the artificial minds we one day create? What are AI researchers doing to imbue their creations with artificial empathy ... and should they be doing more? In short, what is the pathway to the “machines of loving grace” that poet Richard Brautigan foresaw?

The mainstream of AI research has traditionally focused on the more explicitly analytical, intellectual, nerdy aspects of human intelligence: planning, problem-solving, categorization, language understanding. Recent attempts to broaden this focus have centered mainly on creating software with perceptual and motor skills: computer vision systems, intelligent automated vehicles, and so forth. Almost entirely missing from the AI field are the more social and emotional aspects of human intelligence. Chatbots attempting the Turing test have confronted these aspects directly – going back to ELIZA, the landmark AI psychotherapist of the mid-1960s – but these bots are extremely simplistic and have little connection to the main body of work in the AI field.

I think this is a major omission, and my own view is that empathy may be one of the final frontiers of AI. My opinion as an AI researcher is that, if we can crack artificial empathy, the rest of the general-AI problem will soon follow, based on the decades of successes that have already been achieved in problem-solving, reasoning, perception, motorics, planning, cognitive architecture and other areas.

(In my own AI work, involving the Novamente Cognition Engine and the OpenCog Prime system, I’ve sought to explicitly ensure the capability for empathy via multiple coordinated design aspects – but in this blog post I’m not going to focus on that, restricting myself to more general issues.)

Why would empathy be so important for AI? After all, isn’t it just about human feelings, which are among the least intelligent, most primitively animal-like aspects of the human mind? Well, the human emotional system certainly has its quirks and dangers, and the wisdom of propagating these to powerful AI systems is questionable. But the basic concept of an emotion, as a high-level integrated systemic response to a situation, is critical to the functioning of any intelligent system. An AI system may not have the same specific emotions as a human being – particular emotions like love, anger and so forth are manifestations of humans’ evolutionary heritage, rather than intrinsic aspects of intelligence. But it seems unlikely that an AI without any kind of high-level integrated systemic response (aka emotion) would be able to cope with the realities of responding to a complex, dynamic world in real-time.

A closely related point is the social nature of intelligence. Human intelligence isn’t as individual as we modern Westerners often seem to think: a great percentage of our intelligence is collective and intersubjective. Cognitive psychologists have increasingly realized this in recent decades, and have started talking about “distributed cognition.” If the advocates of the “global brain” hypothesis are correct, then eventually artificial minds will synergize with human minds to form a kind of symbiotic emergent cyber-consciousness. But in order for distributed cognition to work, the minds in a society need to be able to recognize, interpret and respond to each others’ emotions. And this is where empathy comes in.

Mutual empathy binds together social networks: as we go through our lives we are perpetually embodying self-referential emotional equations like

X = I feel pain and that you feel X

Y = I feel that you feel both joy and Y

or mutually-referential ones like

A = I feel happy that you enjoy both this music and B

B = I feel surprised that you feel A

These sorts of equations bind us together: as they unfold through time they constitute much of the rhythm by which our collective intelligence experiences and creates.
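Read literally, these are fixed-point equations, and one can sketch numerically why they settle into stable mutual states rather than regressing forever (all coefficients below are hypothetical, chosen only so that the mutual coupling is a contraction):

```python
# Toy fixed-point reading of mutually-referential feelings. A is my feeling,
# which depends on my own joy plus your feeling B; B depends on your surprise
# plus my feeling A. All coefficients are hypothetical.
def settle(joy, surprise, coupling=0.4, steps=100):
    a = b = 0.0
    for _ in range(steps):
        a = 0.5 * joy + coupling * b        # "I feel happy that you ... and B"
        b = 0.5 * surprise + coupling * a   # "I feel surprised that you feel A"
    return a, b

a, b = settle(joy=1.0, surprise=0.5)
print(round(a, 4), round(b, 4))  # converges: the coupling < 1 makes a contraction
```

Because the coupling coefficient is below one, the infinite regress of "I feel that you feel that I feel ..." converges to a definite pair of intensities instead of blowing up, which is one toy way of seeing how mutually-referential feelings can nonetheless be stable.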

So empathy is important: but how does it work?

We don’t yet know for sure ... but the best current thinking is that there are two aspects to how the brain does empathy: inference and simulation. (I think there’s a lesson for AI here: in my own AI designs I deal with these two aspects separately and then address their interaction, which seems to me the right approach.)

Inference-wise, empathy has to do with understanding and modeling (sometimes consciously, sometimes unconsciously) what another person must be feeling, based on the cues we perceive and our background knowledge. Psychologists have mapped out transformational rules that help us do this modeling.

Simulative empathy is different: we directly feel what the other is feeling. A rough analogue is virtualization in computers: running Windows on a virtual machine within Linux, or emulating an old game console within Windows. Similarly, we use the same brain-systems that run ourselves to run a simulation of another person feeling what they seem to be feeling. And we do this unconsciously, at the body level: even though we don’t consciously notice that sad people have smaller pupils, our own pupils automatically shrink when we see a sad person -- a physiological response that synergizes with our cognitive and emotional response to their sadness (and underlies the technique the lead character uses in Blade Runner to track down androids who lack human feeling). A long list of examples has already been explored in the lab, and we’ve barely scratched the surface yet: people feel disgust when they see others smelling a bad odor, feel pain when they see others pierced by a needle or given an electric shock, sense touch when they see others being brushed, and so on.
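The virtualization analogue suggests a simple computational reading of simulative empathy (a hypothetical toy sketch, not a claim about the brain or about any existing AI system): there is no separate empathy module; the same appraisal machinery that generates one's own emotions is simply re-run on the other's perceived situation.

```python
# Hypothetical sketch of simulative empathy: my own appraisal function,
# "virtualized" onto what I perceive of your situation.
def appraise(situation):
    """My own (toy) emotional response to a situation."""
    responses = {"needle": "pain", "bad_odor": "disgust", "brushed": "touch"}
    return responses.get(situation, "neutral")

def empathize(observed_situation):
    # No separate empathy machinery: just my own appraisal, re-run on
    # the situation I observe you to be in.
    return appraise(observed_situation)

print(empathize("needle"))    # I see you pierced by a needle -> I feel pain
print(empathize("bad_odor"))  # I see you smelling a bad odor -> I feel disgust
```

The point of the sketch is architectural: inference-based empathy would require a separate model of the other person, whereas simulation reuses the self's own machinery, which is why the lab findings above (shared disgust, pain, touch) are taken as evidence for the simulative route.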

Biologists have just started to unravel the neural basis of simulative empathy, which seems to involve brain cells called mirror neurons ... which some have argued play a key role in other aspects of intelligence as well, including language learning and the emergence of the self (I wrote a speculative paper on this a couple years back).

(A mirror neuron is a neuron which fires both when an animal acts and when the animal observes the same action performed by another animal (especially one of the same species). Thus, the neuron "mirrors" the behavior of another animal, as though the observer were itself acting. These neurons have been directly observed in primates, and are believed to exist in humans and in some birds. In humans, brain activity consistent with mirror neurons has been found in the premotor cortex and the inferior parietal cortex.)

So: synergize inference and simulation, and you get the wonderful phenomenon of empathy that makes our lives so painful and joyful and rich, and to a large extent serves as the glue holding together the social superorganism.

The human capacity for empathy is, obviously, limited. This limitation is surely partly due to our limited capabilities of both inference and simulation; but, intriguingly, it might also be the case that evolution has adaptively limited the degree of our empathic-ness. Perhaps an excessive degree of empathy would have militated against our survival, in our ancestral environments?

The counterfactual world in which human empathy is dramatically more intense is difficult to accurately fathom. Perhaps, if our minds were too tightly coupled emotionally, progress would reach the stage of some ant-colony-like utopia and then halt, as further change would be too risky in terms of hurting someone else’s feelings. On the other hand, perhaps a richer and more universal empathy would cause a dramatic shift in our internal architectures, dissolving or morphing the illusion of “self” that now dominates our inner worlds, and leading to a richer way of individually/collectively existing.

One aspect of empathy that isn’t sufficiently appreciated is the way it reaches beyond the touchy-feely sides of human life: for instance it pervades the worlds of science and business as well, which is why there are still so many meetings in the world, email, Skype and WebEx notwithstanding. The main reason professionals fly across the world to hob-nob with their colleagues – in spite of the often exhausting and tedious nature of business travel (which I’ve come to know all too well myself in recent years) -- is because, right now, only face-to-face communication systematically gives enough of the right kind of information to trigger empathic response. In a face-to-face meeting, humans can link together into an empathically-joined collective mind-system, in a way that doesn’t yet happen nearly as reliably via electronically-mediated communications.

Careful study has been given to the difficulty we have empathizing with certain robots or animated characters. According to Mori’s theory of the “uncanny valley” – which has been backed up by brain imaging studies -- if a character looks very close to human, but not close enough, then people will find it disturbing rather than appealing. We can empathize more with the distorted faces of Disney cartoons or manga, than with semi-photo-realistic renditions of humans that look almost-right-but-eerily-off.

To grasp the uncanny valley viscerally, watch one of the online videos of researcher Hiroshi Ishiguro and the “geminoid” robot that is his near-physical-clone -- an extremely lifelike imitation of his own body and its contours, textures and movements. No AI is involved here: the geminoid is controlled by motion-capture apparatus that watches what Ishiguro does and transfers his movements to the robot. The imitation is amazing – until the bot starts moving. It looks close enough to human that its lack of subtle human expressiveness is disturbing. We look at it and we try to empathize, but we find we’re empathizing with a feelingless robot, and the experience is unsettling and feels “wrong.” Jamais Cascio has proposed that exactly this kind of reaction may occur to transhumans with body modifications – so from that point of view, among others, this phenomenon may be worth attending to.

It’s interesting to contrast the case of the geminoid, though, with the experience of interacting with ELIZA, the AI psychotherapist created by Joseph Weizenbaum in 1966. In spite of having essentially no intrinsic intelligence, ELIZA managed to carry out conversations that did involve genuine empathic sharing on the part of its conversation-partners. (I admit ELIZA didn’t do much for me even back then, but, I encountered it knowing exactly what it was and intrigued by it from a programming perspective, which surely colored the nature of my experience.)

Relatedly, some people today feel more empathy with their online friends than with their real-life friends. And yet, I can’t help feeling there’s something key lacking in such relationships.

One of the benefits of online social life is that one is freed from the many socio-psychological restrictions that come along with real-world interaction. Issues of body image and social status recede into the background – or become the subject of wild, free-ranging play, as in virtual worlds such as Second Life. Many people are far less shy online than in person – a phenomenon that’s particularly notable in cultures like Japan and Korea where social regulations on face-to-face communication are stricter.

And the benefits can go far beyond overcoming shyness: for example, a fifty-year-old overweight trucker from Arkansas may be able to relate to others more genuinely in the guise of a slender, big-busted Asian girl with blue hair, a microskirt and a spiky tail... and in Second Life he can do just that.

On the other hand, there’s a certain falsity and emotional distance that comes along with all this. The reason the trucker can impersonate the Asian ingenue so effectively is precisely that the avenues for precise emotional expression are so impoverished in today’s virtual environments. So, the other fifty-year-old trucker from Arkansas whose purple furry avatar is engaged in obscene virtual acts with the Asian babe, has to fill in the gaps left by the simplistic technology – to a large extent, the babe he’s interacting with is a construct of his own mind, which improvises on the cues provided by the signals given by the first trucker.

Of course, all social interaction is constructive in this way: the woman I see when I talk to my wife is largely a construct of my own mind, and may be a different woman than I would see if I were in a different mood (even if her appearance and actions were precisely the same). But text-chat or virtual-world interactions are even more intensely constructive, which is both a plus and a minus. We gain the ability for more complete wish-fulfillment (except for wishes that are intrinsically tied to the physical ... though some people do impressively well at substituting virtual satisfactions for physical ones), but we lose much of the potential for growing in new directions via empathically absorbing emotional experiences dramatically different from anything we would construct on our own based on scant, sketchy inputs.

It will be interesting to see how the emotional experience of virtual world use develops as the technology advances ... in time we will have the ability to toggle how much detail our avatars project, just as we can now choose whether to watch cartoons or live action films. In this way, we will be able to adjust the degree of constructive wish-fulfillment versus self-expanding experience-of-other ... and of course to fulfill different sorts of wishes than can be satisfied currently in physical or virtual realities.

As avatars become more realistic, they may encounter the uncanny valley themselves: it may be more rewarding to look at a crude, iconic representation of someone else’s face, than a representation that’s almost-there-but-not-quite ... just as with Ishiguro’s geminoids. But just as with the geminoids, the technology will get there in time.

The gaming industry wants to cross the uncanny valley by making better and better graphics. But will this suffice? Yes, a sufficiently perfected geminoid or game character will evoke as much empathy as a real human. But for a robot or game character controlled by AI software, the limitation will probably lie in subtleties of movement. Just like verbal language, the language of emotional gestures is one whose rules are hard to spell out exactly: we humans grok them from a combination of heredity and learning. One way to create AIs that people can empathize with will be to make the AIs themselves empathize, and reflect back to people the sorts of emotions that they perceive – much as babies imitate adult emotions. Envision a robot or game character that watches a video-feed of your face and tailors its responses to your emotions.
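
To make the mirroring loop concrete, here is a minimal sketch of such a character. Everything in it is illustrative: the emotion labels, the `perceive_emotion` stub (standing in for a real video-feed classifier), and the mirrored gestures are all assumptions, not any existing system's API.

```python
# Hypothetical mapping from perceived player emotion to the character's
# mirrored expression; labels and gestures are illustrative only.
MIRROR = {
    "joy": "smiles warmly",
    "sadness": "softens its expression and leans in",
    "anger": "lowers its voice and slows its gestures",
    "fear": "adopts a calm, reassuring posture",
}

def perceive_emotion(frame_scores):
    """Stub for a video-feed emotion classifier: returns the label with
    the highest score. A real system would run a trained model here."""
    return max(frame_scores, key=frame_scores.get)

def mirrored_response(frame_scores):
    """Reflect a softened version of the perceived emotion back to the
    player -- the simulative-empathy loop described above."""
    emotion = perceive_emotion(frame_scores)
    return MIRROR.get(emotion, "keeps a neutral face")

# The character reads mostly-sad facial features and mirrors accordingly.
print(mirrored_response({"joy": 0.1, "sadness": 0.7, "anger": 0.2}))
```

The point of the sketch is the architecture, not the rules: perception feeds a model of the other's emotional state, and the character's own expressive channel echoes that state back, closing the empathic loop.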

Arguably, creating AIs capable of empathy has importance far beyond the creation of more convincing game characters. One of the great unanswered questions as the Singularity looms is how to increase the odds that once our AIs get massively smarter than we are, they still value our existence and our happiness. Creating AIs that empathize with humans could be part of the answer.

Predictably, AI researchers so far have done more with the inferential than the simulative side of empathic response. Selmer Bringsjord’s team at RPI got a lot of press earlier this year for an AI that controls a bot in Second Life, in a way that demonstrates a limited amount of “theory of mind”: the bot watches other characters with a view toward figuring out what they’re aware of, and uses this to predict their behavior and guide its interactions.
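
The inferential style can be sketched in a few lines, in the spirit of the classic false-belief test: the agent tracks which events each character could witness, and predicts behavior from beliefs rather than from ground truth. This is a toy reconstruction of the idea, not Bringsjord's actual code; all names are illustrative.

```python
class BeliefTracker:
    """Toy theory-of-mind model: each observer's beliefs update only
    when they witness an event, so beliefs can diverge from reality."""

    def __init__(self):
        self.true_location = {}   # object -> actual place
        self.beliefs = {}         # observer -> {object: believed place}

    def move(self, obj, place, witnesses):
        """Move an object; only the witnesses update their belief."""
        self.true_location[obj] = place
        for w in witnesses:
            self.beliefs.setdefault(w, {})[obj] = place

    def predict_search(self, observer, obj):
        """Predict where the observer will look: their belief, not reality."""
        return self.beliefs.get(observer, {}).get(obj)

world = BeliefTracker()
world.move("ball", "basket", witnesses=["Sally", "Anne"])
world.move("ball", "box", witnesses=["Anne"])        # Sally is absent

print(world.predict_search("Sally", "ball"))  # Sally's stale belief: basket
print(world.predict_search("Anne", "ball"))   # Anne saw the move: box
```

Note what this agent does and does not do: it reasons *about* the other's knowledge well enough to predict their actions, but at no point does it feel anything on their behalf – which is exactly the gap the next paragraph points at.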

But Bringsjord’s bots don’t try to feel the feelings of the other bots or human-controlled avatars they interact with. The creation of AIs embodying simulative empathy seems to be getting very little attention. Rosalind Picard’s Affective Computing Lab at MIT has done some interesting work bringing emotion into AI decision processes, but has stopped short of modeling simulative empathy. I predict this is a subfield that will emerge within the next decade. In fact, it seems plausible that AIs will one day be far more empathic than humans are – not only with each other but also with human beings. Ultimately, an AI may be able to internally simulate you better than your best human friend can, and hence demonstrate a higher degree of empathy. Which will make our games more fun, our robots less eerie, and potentially help make the post-Singularity world a more human-friendly place.