Still More About Polling

Like I said, there’s going to be a lot of this. Charlie Martin has quite an enlightening post about what goes into producing a poll, and what that means for its credibility:

The thing about this is that these people have been picked as closely as possible to be a random sample. Ideally, they use dice or something like them to pick which people’s numbers they call; practically, they can’t always get it perfectly random — what if you’re calling a Texas football town and the high school game is that night? — but they do try. 

Finally, though, they have their sample, and it includes enough people according to their rules. But: because they have been picked randomly, they almost certainly don’t exactly represent the real population. 

Think about it — I have 100 million people, and I can only pick 1,000 of them. There are lots and lots of ways I could pick that random set (lots and lots: the interested reader should go to Wolfram Alpha and type in “100 million choose 1000” to see how many), and almost none of them will really be a perfect sample. 
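
If you don’t feel like leaving the page, here’s a rough sketch of what Wolfram Alpha will report (my illustration, not Martin’s), done on a log scale because the exact number has thousands of digits:

```python
# Rough estimate of how many distinct 1,000-person samples can be drawn
# from a population of 100 million: the binomial coefficient C(n, k),
# computed via log-gamma because the exact integer is astronomically large.
import math

n, k = 100_000_000, 1_000

# lgamma(x + 1) = ln(x!), so this is log10 of n! / (k! * (n - k)!)
log10_samples = (
    math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
) / math.log(10)

print(f"C({n:,}, {k:,}) is roughly 10^{log10_samples:,.0f}")
# On the order of 10^5,432 possible samples -- and essentially none of them
# mirrors the full population exactly.
```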

… 

So with all that in mind, now let’s think about how to read a poll. 

First: when you read a poll, you’ll always see it stated as something like “53 percent Democratic, margin of error of plus or minus 3.5 percent.” Here’s how you should read that: “The polling company believes that if the election were taken today, there is 1 chance in 20 that the actual result will be more than 56.5 percent Democrat or less than 49.5 percent Democrat.” 

You see, that’s what the “margin of error” really means: by statistical methods, they believe that 19 out of 20 times — or 95 percent of the time — the real value would come out between 49.5 percent and 56.5 percent.
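
For what it’s worth, that plus-or-minus figure comes from a standard formula rather than anything mysterious. Here’s a minimal sketch, assuming a simple random sample and the usual normal approximation (assumptions the quoted post doesn’t spell out):

```python
# ~95% margin of error for a polled proportion under the normal approximation:
# MOE = z * sqrt(p * (1 - p) / n), with z ~= 1.96 for 95% confidence.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the ~95% confidence interval for a sample proportion."""
    return z * math.sqrt(p * (1.0 - p) / n)

for n in (500, 800, 1000, 1500):
    print(f"n = {n:5d} respondents -> +/- {margin_of_error(n) * 100:.1f} points")

# n =   500 respondents -> +/- 4.4 points
# n =   800 respondents -> +/- 3.5 points
# n =  1000 respondents -> +/- 3.1 points
# n =  1500 respondents -> +/- 2.5 points
```

A margin of plus or minus 3.5 points corresponds to roughly 800 respondents; the 1,000-person sample from the earlier example would come in closer to plus or minus 3.1 points.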

Second, and this is where the controversy is coming now: the quality of those results depends on the accuracy of that original model. 
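
Both halves of that argument (a random sample almost never matches the population exactly, yet it lands inside the margin of error about 19 times in 20) are easy to check with a toy simulation. Here is one, borrowing the 53 percent figure and the 1,000-person sample from the examples above:

```python
# Toy simulation: repeatedly "poll" 1,000 voters from a population that is
# truly 53% Democratic, and count how often the result stays within the
# ~95% margin of error for that sample size (about 3.1 points).
import random

TRUE_SHARE = 0.53   # assumed true population value
N = 1_000           # respondents per simulated poll
MOE = 0.031         # ~95% margin of error for n = 1,000 (see sketch above)
TRIALS = 2_000

random.seed(0)
exact = inside = 0
for _ in range(TRIALS):
    share = sum(random.random() < TRUE_SHARE for _ in range(N)) / N
    exact += (share == TRUE_SHARE)
    inside += (abs(share - TRUE_SHARE) <= MOE)

print(f"polls matching the population exactly: {exact / TRIALS:.1%}")
print(f"polls within the margin of error:      {inside / TRIALS:.1%}")
# Exact matches are rare (a few percent); roughly 95% of simulated polls land
# inside the margin of error, i.e. about 1 in 20 misses.
```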

 

And Zombie relates the five false assumptions that lead to poll-skewing, including:

Polling companies need to be accurate in order to gain a reputation for reliability, so they have no motivation to lie. 

This assumption, which the pollsters hope the public has, is partly true. A reputation for accuracy is one way for a polling company to attract clients. 

But polling companies have a second motivation often at odds with and usually trumping the desire for accuracy: To give their clients (in this instance, political campaigns) what they want. 

Want to see a poll that shows you’re winning (so you can use those false stats to sway the electorate)? You got it! 

Want us to weight the results so that opposing voters become too despondent to bother voting? You got it! 

Want evidence that you were in the lead so that when voter fraud propels you to otherwise undeserved victory, it looks believable? You got it! 

Campaigns will seek out any pollster who can provide them with the propaganda necessary to manipulate the election. Accuracy is only useful for secret internal polls; intentionally deceptive skewing is useful as a tool to trick voters.

Expect more where this came from. Romney supporters need to believe the polls aren’t accurate, and writers like Martin and Zombie make convincing cases. But we shall see.
