I think it's clear that "overcoming bias" is important, but I also think it's important to explore and understand the limitations of "overcoming bias" as a methodology for obtaining beliefs that are more useful in achieving one's goals.
(Note that in my own thinking, I tend to think more often in terms of "obtaining beliefs that are more useful in achieving one's goals," rather than in terms of "obtaining beliefs that are closer to reality." In many contexts this just amounts to a nitpick, but it also reflects a significant philosophical distinction: I don't make the philosophical assumption of an objective reality to which beliefs are to be compared. My philosophical view of beliefs could be loosely considered Nietzschean, though I doubt it agrees with Nietzsche's views in all respects.)
According to Wikipedia:
"Bias is a term used to describe a tendency or preference towards a particular perspective, ideology or result, especially when the tendency interferes with the ability to be impartial, unprejudiced, or objective. The term biased is used to describe an action, judgment, or other outcome influenced by a prejudged perspective."
This definition is worth dissecting because it embodies two different aspects of the "bias" concept, which are often confused in ordinary discourse. I'll call these:
Bias_1: "a tendency or preference toward a particular perspective"
Bias_2: "an instance of Bias_1 that interferes with the ability to be impartial, unprejudiced, or objective", where I'll replace the last phrase with "... interferes with the ability to achieve one's goals effectively."
Bias_1 is, obviously, not necessarily a bad thing.
First of all, the universe we live in has particular characteristics (relative to a universe randomly selected from a sensibly-defined space of possible universes), and being biased in a way that reflects these characteristics may be a good thing.
Secondly, the particular environments an organism lives in, and the goals it pursues, may have particular characteristics, and it may benefit the organism to be biased in a way that reflects these characteristics.
Now, ideally, an organism would be aware of its environment- or goal-specific biases, so that if its environment or goals change, it can change its biases accordingly. On the other hand, maintaining this awareness may detract from the organism's ability to achieve goals, if it consumes a lot of resources that could otherwise be spent doing other stuff (even if this other stuff is done in a way that's biased toward the particular environment and goals at hand).
When discussed in a political context, "bias" is assumed to be a bad thing, as in the "especially when" clause of the above Wikipedia definition. Gender bias and racial bias are politically incorrect and, according to most modern moral systems, immoral. The reason these biases are considered bad is rooted in the (correct, in most cases) assumption that they constitute Bias_2 with respect to the goals that most modern moral systems say we should have.
On the other hand, in cognitive science, bias is not always a bad thing. One may argue, as Eric Baum has done persuasively in What Is Thought?, that the human mind's ability to achieve its goals in the world is largely due to the inductive bias that it embodies, which is placed into it via evolutionary pressure on brain structure. In this context, bias is a good thing. The brain is a general-purpose intelligence, but it is biased to be able to more easily solve some kinds of problems (achieve some kinds of goals) than others. Without this biasing, there's no way a system with the limited computational capacity of the human brain would be able to learn and do all the things it does in the short lifespan of a human organism. The inductive bias that Baum speaks about is largely discussed as Bias_1, but also may in some cases function as Bias_2, because biases that are adaptive in some circumstances may be maladaptive in others.
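To make the flavor of this concrete, here is a minimal toy sketch (mine, not Baum's; the data and the linear form are just illustrative assumptions) contrasting a learner biased toward linear hypotheses with an "unbiased" rote memorizer. Both see the same three data points; only the biased learner can say anything about a point it hasn't seen.

```python
# Toy illustration of inductive bias (in the machine-learning sense Baum draws on).
# Both "learners" see the same three training points, drawn from y = 2x + 1.
train = [(0, 1), (1, 3), (2, 5)]

# Learner 1: biased toward linear hypotheses -- fits slope and intercept by least squares.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Learner 2: "unbiased" -- no assumption about the form of the function, so it can
# only memorize the pairs it has actually seen.
memory = dict(train)

x_new = 10  # a point never seen in training
print("biased (linear) learner predicts:", slope * x_new + intercept)    # 21.0
print("unbiased (rote) learner predicts:", memory.get(x_new, "no idea")) # "no idea"
```

The point isn't that linearity is the right bias, but that some restriction of the hypothesis space is what makes generalization from sparse data possible at all -- which is roughly the role Baum assigns to the brain's evolved inductive bias.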
One might argue that, in the case of evolved human inductive bias, it's the evolutionary process itself that has been relatively unbiased, and has evolved brains that are biased to the particular conditions on Earth. However, this is not entirely clear: the evolutionary mechanisms existing on Earth have a lot of particularities that seem adapted to the specific chemical conditions on Earth, for example.
One may argue that, even though we humans are born with certain useful biases, it is to our advantage to become reflective and deliberative enough to overcome these biases in those cases where they're not productive. This is certainly true -- to an extent. However, as noted above, it's also true that reflection and deliberation consume a lot of resources. Any organism with limited resources has to choose between spending its resources overcoming its biases (which may ultimately help it to achieve its goals), and spending its resources achieving its goals in a more direct way.
Furthermore, it's an interesting possibility that resource-constrained minds may sometimes have biases that help them achieve their goals, yet which they are not able to effectively reflect and deliberate on. Why might this be? Because the class of habits that an organism can acquire via reinforcement learning may not fully overlap with the class of habits that the organism can study via explicit reflective, deliberative inference. For any particular mind-architecture, there are likely to be some things that are more easily learnable as "experientially acquired know-how" than as explicit, logically-analyzable knowledge. (And, on the other hand, there are going to be other things that are more easily arrived at via explicit inference than via experiential know-how acquisition.)
If a certain habit of thought is far more amenable to experiential, reinforcement-based learning than to reflective, logical deliberation, does this mean that one cannot assess its quality, with a view toward ridding it of unproductive biases? Not necessarily. But overcoming biases in these habits may be a different sort of science than overcoming biases in habits that are more susceptible to reason. For instance, the best way to overcome these sorts of biases may be to place oneself in a large variety of different situations, so as to experience a wide variety of different reinforcement signaling patterns ... rather than to reflectively and deliberatively analyze one's biases.
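As a toy illustration of this last point -- entirely made-up payoffs, not a model of any real organism -- here's a sketch of a simple reinforcement learner whose "habit" is acquired from experience rather than explicit reasoning. An agent whose experience comes almost entirely from one context usually carries its action preference, maladaptively, into the rarely-seen context; an agent with diverse experience usually doesn't.

```python
import random

# Toy contextual bandit with made-up payoffs: in context 0 action 'A' pays more,
# in context 1 action 'B' pays more.
def reward(context, action):
    table = {(0, 'A'): 1.0, (0, 'B'): 0.2, (1, 'A'): 0.2, (1, 'B'): 1.0}
    return table[(context, action)] + random.gauss(0.0, 0.1)

def train(context_weights, steps=4000, eps=0.1, alpha=0.1):
    """Epsilon-greedy value learning; returns the learned action-value table."""
    q = {(c, a): 0.0 for c in (0, 1) for a in ('A', 'B')}
    for _ in range(steps):
        c = random.choices((0, 1), weights=context_weights)[0]
        if random.random() < eps:
            a = random.choice(('A', 'B'))
        else:
            a = max(('A', 'B'), key=lambda x: q[(c, x)])
        q[(c, a)] += alpha * (reward(c, a) - q[(c, a)])
    return q

narrow = train([0.995, 0.005])  # nearly all experience comes from context 0
broad = train([0.5, 0.5])       # experience is spread evenly across both contexts

for name, q in (('narrow experience', narrow), ('broad experience', broad)):
    policy = {c: max(('A', 'B'), key=lambda a: q[(c, a)]) for c in (0, 1)}
    # The narrow agent usually keeps its context-0 habit even in context 1;
    # the broad agent usually learns the appropriate action for each context.
    print(name, policy)
```

The "cure" here is not better deliberation over the agent's value table, but simply more varied experience feeding into the same learning process.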
Many of these reflections ultimately boil down to issues of the severely bounded computational capability of real organisms. This irritating little issue also arises when analyzing the relevance of probabilistic reasoning (Bayesian and otherwise) to rationality. If you buy Cox's or de Finetti's assumptions and arguments regarding the conceptual and mathematical foundations of probability theory (which I do), then it follows that a mind, given a huge amount of computational resources, should use probability theory (or do something closely equivalent) to figure out which actions it should take at which times in order to achieve its goals. But, these nice theorems don't tell you anything about what a mind given a small or modest amount of computational resources should do. A real mind can't rigorously apply probability theory to all its judgments; it has to make some sort of heuristic assumptions ... and the optimal nature of these heuristic assumptions (and their dependence on the actual amount of space and time resources available, the specific types of goals and environments involved, etc.) is something we don't understand very well.
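One crude way to see the resource problem: fully general probabilistic reasoning requires, in the worst case, representing a joint distribution over everything the mind's judgments might depend on, and that representation grows exponentially with the number of variables. A back-of-the-envelope sketch (the variable counts here are arbitrary):

```python
# Exact probabilistic reasoning, in the worst case, means representing a joint
# distribution over n binary variables: 2**n numbers, with no independence assumptions.
for n in (10, 20, 50, 100, 300):
    entries = 2 ** n
    print(f"n={n:>3}: {entries:.2e} table entries (~{entries * 8:.2e} bytes at 8 bytes each)")
# Past a few dozen variables this is hopeless for any physical system, which is why a
# real mind must fall back on heuristics, independence assumptions, sampling, etc.
```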
So, the specific strategy of overcoming Bias_2 by adhering more strictly to probability theory is interesting and often worthwhile, but not proven (nor convincingly argued) to always be the best thing to do for real systems in the real world.
In cases where the answer to some problem can be calculated using probability theory based on a relatively small number of available data items ... or a large number of data items that interrelate in a relatively simply characterizable way ... it's pretty obvious that the right thing for an intelligent person to do is to try to overcome some of their evolutionary biases, which may have evolved due to utility in some circumstances, but which clearly act as Bias_2 in many real-world circumstances. The "heuristics and biases" literature in cognitive psychology contains many compelling arguments in this regard. For instance, in many cases, it's obvious that the best way for us to achieve our goals is to learn to replace our evolved mechanisms for estimating degrees of probability with calculations more closely reflecting the ones probability theory prescribes. Professional gamblers figured this out a long time ago, but the lesson of the heuristics and biases literature has been how pervasive our cognitive errors (regarding probability and otherwise) are in ordinary life, as well as in gambling games.
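For a concrete instance of the kind of correction involved, here is the standard base-rate example from that literature, with made-up numbers: intuition tends to read a positive result from a "99% accurate" test as near-certainty, while the posterior computed by Bayes' rule is closer to 2%.

```python
# A standard base-rate example from the heuristics-and-biases literature
# (the specific numbers are made up for illustration).
p_disease = 0.001             # prior: 1 in 1000 people has the condition
p_pos_given_disease = 0.99    # test sensitivity
p_pos_given_healthy = 0.05    # false-positive rate

# Intuition often says: "the test is ~99% accurate, so a positive result means ~99% sick."
# Bayes' rule gives the actual posterior:
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.019 -- about 2%, not 99%
```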
On the other hand, what about problems a mind confronts that involve masses of different data items, whose interrelationships are not clear, and about which much of the mind's knowledge was gained via tacit experience rather than careful inference or scientific analysis?
Many of these problems involve contextuality, which is a difficult (though certainly not impossible) thing to handle pragmatically within formal reasoning approaches, under severe computational resource constraints.
For these problems, there seem to be two viable strategies for improving one's effectiveness at adapting one's beliefs so as to be able to more adeptly achieve one's goals:
- Figure out a way to transform the problem into the kind that can be handled using explicit rational analysis
- Since so much of the knowledge involved was gained via experiential reinforcement-learning rather than inference ... seek to avoid Bias_2 by acquiring a greater variety of relevant experiences
So what's my overall takeaway message?
- We're small computational systems with big goals, so we have to be very biased; otherwise we wouldn't be able to achieve our goals
- Distinguishing Bias_1 from Bias_2 is important theoretically, and also important *but not always possible* in practice
- The right way to cure instances of Bias_2 depends to some extent on the nature of the mental habits involved in the bias
- In some cases, diversity of experience may be a better way to remove Bias_2 than explicit adherence to formal laws of rationality
- It is unclear in which circumstances (attempted approximate) adherence to probability theory or other formal laws of rationality is actually the right thing for a finite system to do, in order to optimally achieve its goals
- Heuristically, it seems that adherence to formal laws of rationality generally makes most sense in cases where contextuality is not so critical, and the relevant judgments depend mainly on a relatively small number of data items (or a large number of relatively simply interrelated data items)