Wednesday, June 22, 2011

Nathan Answers Questions: What About Manipulation?

Over on Facebook, reader Jonathan Wang asks,

"…how about cases where politicians or interest groups deliberately misrepresent the facts?"

It's a good question with a number of facets. Three issues come to mind immediately: framing, media priming, and the many ways in which people either ignore or fail to use new information that could correct false beliefs.

Here's a quick overview, with some examples. Well, we'll see about quick, but there will be examples.

Framing
Back in the 70s, psychologists—notably Amos Tversky and Danny Kahneman—got interested in how people actually made decisions. (If this sounds like an obvious thing to do, let me assure you that economists of the day were more interested in how people should make decisions. Many still are.) I'm going to explain what framing is in terms of a classic psychology experiment, and then I'll explain what it has to do with politics.

Here's the experiment. There's a horrible disease that, if no one does anything, will kill 600 people, and there are two programs to combat the disease. Program A will save 200 people, while Program B might save everyone, but there's a two-thirds chance no one will be saved. Something like three-quarters of the people in the experiment who faced this choice went for Program A, the option with certainty. This much is easy to explain. The options are equivalent in the sense that if we repeated the situation many times, on average 200 people would be saved using either program, but people favor the certainty of Program A over the gamble of B. Economists call this risk aversion, and they'd thought about it well before the psychologists did.

But here's where things go wrong for the economists. Kahneman and Tversky presented another group with these choices: Program C, in which 400 will die, or Program D, in which there is a two-thirds chance everyone will die.

See what they did? Programs A and C are exactly the same, as are B and D, but they're described differently. When you describe them in terms of death—Programs C and D—about three-quarters of the participants in the experiment favor the gamble over certain deaths.
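
If you want to convince yourself that all four programs really are the same gamble in different clothes, here's a quick back-of-the-envelope check (a throwaway Python sketch; the numbers are just the ones from the experiment):

```python
# Expected number of people saved (out of 600) under each program.
total = 600

program_a = 200                # 200 saved with certainty
program_b = (1/3) * total      # 1/3 chance all 600 saved, 2/3 chance none
program_c = total - 400        # "400 will die" means 200 saved
program_d = (1/3) * total      # 2/3 chance all die, so 1/3 chance all saved

print(program_a, program_b, program_c, program_d)  # 200 200.0 200 200.0
```

Same expected outcome every time; the only thing that changes is the wording.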

The lesson for politics is that how you frame something—in this case, whether you talk about saving people or letting people die—affects the choices people make. Politicians know this. There's a reason, for example, you hear about pro-choice and pro-life: anti-life and anti-choice, or baby killer and woman hater for that matter, don't sound so good. A more recent (and vastly more politically relevant) example concerns natural resources: drilling for oil versus energy exploration.

At this point, you're probably thinking that this is kind of obvious. Maybe so, but no matter. It works, and it's likely working on you right now. Homework: read a newspaper—you read all of them, right?—and look to see how politicians talk about the issues. It turns out you can tell whether someone is a Republican or Democrat by the words they use. Look up my colleagues Daniel Diermeier or John Wilkerson if you'd like to know more about that.
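
If you want a feel for how that word-based party detection works, here's a toy version—just spotting a handful of frame phrases, nothing like the actual statistical models Diermeier and Wilkerson use, and the phrase lists are my own made-up examples:

```python
# Crude frame-spotting: which side's vocabulary does a statement lean on?
# The phrase-to-lean mapping below is a toy example, not a real lexicon.
frame_phrases = {
    "death tax": "Republican framing",
    "estate tax": "Democratic framing",
    "energy exploration": "Republican framing",
    "drilling for oil": "Democratic framing",
}

def spot_frames(statement):
    statement = statement.lower()
    return [(phrase, lean) for phrase, lean in frame_phrases.items() if phrase in statement]

print(spot_frames("We must repeal the death tax and open new areas to energy exploration."))
# [('death tax', 'Republican framing'), ('energy exploration', 'Republican framing')]
```

The real work uses far larger vocabularies and proper statistical classifiers, but the intuition is the same: the frame you choose gives you away.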

Media Priming
Also back in the 1970s, the prevailing wisdom among political psychologists and communications scholars was the "minimal effects" hypothesis: empirically, it appeared that media did not change what people believed about politicians very much.

Then, in 1982, along came Shanto Iyengar, Mark Peters, and Don Kinder, people trained in standard Michigan-school political psychology but with a new idea: media might not affect what people believe, but it might affect how they evaluate politicians by changing the basis of the evaluation. In one early experiment, they showed they could change how important people thought an issue was simply by showing them a news broadcast with more coverage of that issue. Not only that, it changed how people evaluated the president.

So, for example, when they presented a news broadcast with more coverage of defense issues, people's overall ratings of then-President Carter were very highly correlated with their ratings of him on defense. The correlation didn't vanish when the news coverage was on other issues, but it did drop substantially.
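
One way to think about what's going on here—this is my own gloss, not Iyengar, Peters, and Kinder's formal model—is that your overall rating of the president behaves roughly like a weighted average of your issue-by-issue ratings, and heavy coverage of an issue cranks up that issue's weight. A toy sketch, with invented numbers:

```python
# Toy illustration of priming as a shift in weights. The ratings and weights
# below are invented for illustration; they are not data from the experiment.
issue_ratings = {"defense": 3.0, "economy": 6.0, "energy": 5.0}  # 0-10 scale

def overall_rating(ratings, weights):
    # Weighted average of the issue ratings.
    return sum(ratings[i] * weights[i] for i in ratings) / sum(weights.values())

no_priming = {"defense": 1, "economy": 1, "energy": 1}
defense_primed = {"defense": 3, "economy": 1, "energy": 1}  # heavy defense coverage

print(overall_rating(issue_ratings, no_priming))      # ~4.7
print(overall_rating(issue_ratings, defense_primed))  # 4.0, pulled toward the defense rating
```

The more of your evaluation runs through the defense rating, the more tightly the two move together—which is exactly the correlation pattern they found.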

The Lasting Effects of Misleading Reports and Blatant Lies
So far, we've seen that politicians and media can have a profound effect on the preferences people express by reshaping what they think about (media priming) and how they think about it (framing). Now, what do people remember?

An unfortunate fact is that people don't always remember the truth even after they've been told it in no uncertain terms. Well after the invasion of Iraq in 2003 failed to uncover weapons of mass destruction, for example, many Americans still believed there were WMDs there.

There are at least two ways to account for this. First, people might simply not have been exposed to the new information. That is, maybe it was in no uncertain terms, but they were watching American Idol.

The second way—really it's a set of ways—is more distressing: for a variety of reasons, people tend to reject new information that's inconsistent with what they already believe. One of these reasons is called motivated reasoning, which says that when people are exposed to information that counters their beliefs, they have a negative, emotional reaction to it and tend to discredit it. The source also matters—partisans tend to distrust officials from the opposing party and hence discount anything they say. This is a case of what's called cognitive dissonance—we don't want to believe someone we dislike could have anything useful to say, so we just don't believe it. Finally, there is a cognitive tendency, called confirmation bias, to discredit anything inconsistent with what you've already observed. (I have long suspected these are all aspects of a single cognitive process, but to my knowledge no one has actually demonstrated this, either experimentally or theoretically.)

The political science literature is a bit thin on these issues when it comes to correcting demonstrably false beliefs such as a belief in the existence of WMDs in Iraq—most of the science concerns political messages rather than facts per se—but there is a lovely experiment by Ross, Lepper, and Hubbard from 1975 that illustrates the ideas. These three had a group of people take a test, which the experimenters then scored. After seeing the scores, the test takers reported whether they felt they were more or less skilled than the average person when it came to the material on the test. It turned out, however, that the scores were completely random—they had nothing to do with how people actually did on the test. Now here's the kicker: telling the test takers that their scores were random had no effect on their beliefs about their own skills and intelligence.

What does all this mean? Well, it means that not only are people fairly easy to manipulate, but those manipulations might have long-lasting effects. I have no pithy closing remark. This topic is a little too disheartening for that sort of thing.




- Posted using BlogPress from my iPad
