We've been on the theme of political ignorance for nearly a month now, and so far we've concluded that people are generally pretty ignorant, but we haven't yet settled whether that's such a bad thing. Recall from last time that there were three arguments about whether political ignorance matters.
1) People don't need to know all the details as long as they know enough to figure out what's best for them.
2) People don't need to know very much as long as the voting public as a whole gets the answer right.
3) On the other hand, it's not about whether people can get by or whether groups get things right—it's about making sure everyone's voice is heard, and those with more knowledge have an easier time getting their voices heard.
We looked last time at the first argument and concluded there might be something to it, but maybe only when voters know what they want and who shares those wants. Then, they can infer what choices to make based on who endorses a policy proposal. This could work—in fact, did work in at least one case—in things like voter referenda, but might not work so well when choosing between political candidates.
Today, I'll look at the second argument. Then I will tear into it until it runs screaming and bloody back from whence it came.
Maybe groups get it right, but probably not, or: How applying basic statistics seems clever but can go horribly wrong
One hope is that even though people don't know much by themselves, as a group they do. You may have heard of this idea under the title "the wisdom of crowds," and in some cases it might work. In politics, the idea is that no individual voter has a very good idea which candidate is best, but by voting, people can contribute their (largely inaccurate) information to a greater whole, and that greater whole might be accurate. We're going to see how the argument can go wrong, but first let's go through it in more detail.
Underlying this version of the wisdom-of-crowds argument is something called the central limit theorem, one of the most important results in statistics. It's an idea best conveyed by an example. Suppose you're part of a Science Bowl team tasked with estimating the size of a watermelon. If any one member of your team estimates the size, that estimate will surely be off by a little bit. But let's say that everyone writes down an estimate, and then you, as team captain, read them and average them together. Everyone's estimate will be off, but some people will overestimate and others will underestimate. As long as there aren't any systematic biases in the way people estimate, these over- and underestimates balance each other out, so the average will likely be pretty accurate. In fact, if the estimates are independent and unbiased, the typical error of the average shrinks with the square root of the team size. Not only that, the team as a whole will be much more certain because everyone has contributed their estimates. Think of it this way. If one or two people say the watermelon is about six feet around, you might not be so confident, but if eight people say it's about six feet around, you'll start to be pretty confident of the watermelon's size.
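To see the cancellation at work, here's a minimal simulation (the watermelon's size and the noise level are invented for illustration): every guess is the true size plus independent, unbiased noise, and the typical error of the average falls roughly with the square root of the team size.

```python
import random

random.seed(42)

TRUE_SIZE = 72.0   # hypothetical circumference, in inches
NOISE = 12.0       # assumed spread of individual guesses

def guess():
    """One person's noisy but unbiased estimate."""
    return random.gauss(TRUE_SIZE, NOISE)

for team_size in (1, 4, 16, 64):
    # simulate many teams of this size and record the error of each team's average
    errors = []
    for _ in range(10_000):
        team_avg = sum(guess() for _ in range(team_size)) / team_size
        errors.append(abs(team_avg - TRUE_SIZE))
    typical = sum(errors) / len(errors)
    print(f"team of {team_size:2d}: typical error of the average ~ {typical:.1f} inches")
```

Quadrupling the team size roughly halves the error, which is the square-root behavior at work.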
In politics, the wisdom of crowds idea is that people might have what's called private information about candidates, that is, their own personal estimates of the relative quality of two candidates. By voting, they are saying, "I think candidate X is the better quality candidate," and by summing up everyone's votes, the group as a whole can get a good idea of who the best candidate actually is.
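In its voting form, this argument is essentially the Condorcet jury theorem: if each voter independently picks the better candidate with probability even a little above one half, a majority vote becomes nearly certain to pick the better candidate as the electorate grows. Here's a toy simulation (the 55% accuracy figure is invented for illustration):

```python
import random

random.seed(0)

P_CORRECT = 0.55   # assumed chance each voter's private signal is right

def majority_right(n_voters):
    """Does a majority of independent voters pick the better candidate?"""
    right = sum(random.random() < P_CORRECT for _ in range(n_voters))
    return right > n_voters / 2

# odd electorate sizes, to avoid ties
for n in (1, 11, 101, 1001):
    wins = sum(majority_right(n) for _ in range(5_000))
    print(f"{n:4d} voters: majority correct {wins / 5_000:.0%} of the time")
```

Individually mediocre voters, collectively almost infallible. But notice the word "independently" doing all the work in that sentence; we'll come back to it.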
Unfortunately, the central limit theorem/wisdom of crowds argument rests on two implicit assumptions that hold in the watermelon example (as I've described it) but are dubious at best in politics.
The first is the assumption that the voting public all want the same thing, that is, that they all want to find the highest quality candidate. In reality, not everyone wants the same things. Some people want more environmental regulation. Some people want to build big houses on little mountains overlooking Malibu. Everyone surely wants to figure out the best candidate, but by "best" they can mean different things.
Maybe we could get around this "politics is not watermelon-size estimation" problem, but even then we would face something more pernicious: our beliefs about watermelons and politicians alike can depend on other people's beliefs. If that's the case, the central limit theorem argument no longer applies, because the errors are no longer independent, and aggregating everyone's beliefs won't cancel out the mistakes that individuals make.
Here's an example, called an information cascade, of how interdependent beliefs mess things up. Let's say you're wandering around 9th and Irving in San Francisco patiently looking for somewhere good to eat. You notice Pasquale's Pizzeria, some sushi joint, and Crepe Vine, but then you notice the crowd outside Park Chow. Reasonably assuming that other people also want somewhere good to eat, you infer that Park Chow must have some tasty bites, so you hop in line and prepare to eat the best spaghetti and meatballs in the universe. (Okay, I'm biased, but they are good.)
Here's the fun fact: each person in the line might be there for the same reason as you. Why? Let's say everybody was just looking for a good place to eat, but nobody knew where to go. One guy picks Park Chow. Then, some other person comes along, figures that first guy knows something she doesn't, and heads to Park Chow (she'll take the Smiling Noodles, please). Pretty soon the whole of the Inner Sunset is standing on a strangely narrow sidewalk near Golden Gate Park, waiting to eat.
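Here's a crude simulation of that line forming. The decision rule is my simplification of the standard cascade model and the numbers are invented: each diner trusts their own hunch unless the crowd's lead reaches two people, at which point they follow the crowd. Even when individual hunches point to the better restaurant 60% of the time, the whole line regularly ends up at the worse one.

```python
import random

random.seed(7)

P_HUNCH = 0.6   # assumed chance a diner's private hunch points to the better spot

def line_of_diners(n):
    """Simplified cascade rule: follow the crowd once its lead reaches
    two people; otherwise trust your own (noisy) hunch. Returns how many
    diners pick the genuinely better restaurant."""
    lead = 0     # (# choosing the better spot) - (# choosing the worse one)
    better = 0
    for _ in range(n):
        if lead >= 2:
            pick_better = True         # cascade onto the better spot
        elif lead <= -2:
            pick_better = False        # cascade onto the worse spot
        else:
            pick_better = random.random() < P_HUNCH   # go with your own nose
        lead += 1 if pick_better else -1
        better += pick_better
    return better

runs = 10_000
wrong = sum(line_of_diners(50) < 25 for _ in range(runs))
print(f"most of the line ends up at the worse spot in {wrong / runs:.0%} of runs")
```

Once the first couple of diners happen to agree, everyone after them ignores their own hunch, so fifty people's worth of "information" is really just two people's worth.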
What's the point of all this? The point is, let's say your "friend" tells you that Obama is a socialist. You figure he's not dumb and knows something you don't, so you believe him. Pretty soon, the idea becomes sort of popular, and other people start thinking a million people can't be wrong. Guess what. They are. An information cascade is only one kind of interdependent-beliefs problem, but it's a nasty one.
A related issue is anchoring, where people's estimates can depend not just on other people's estimates, but also on completely random numbers. For example, some decades ago, psychologists showed they could shift people's estimates of the percentage of African countries in the United Nations just by spinning a roulette wheel in front of them first. Sometime I'll say more about this one.
To sum up, the wisdom of crowds idea is an appealing one, and it might work for some things, but it depends on a number of assumptions that probably don't hold up, at least in politics. First, people aren't all trying to find the same "best" candidate; they don't even agree on what "best" means. Second, their beliefs depend on each other's, so rather than actually aggregating a bunch of independent pieces of information, the crowd might just be going off what the first guy says. Third, their beliefs are easily swayed by random, irrelevant information. It all adds up to a strong argument against crowds being in any sense wise.
More next time. Stay tuned.