So I figured I'd better get this done before it was too late.
Polls are tough. As I'm sure you've noticed, they tend to disagree, sometimes wildly. How could two polls, both conducted using "scientific" standards, be so different? This question is not a simple one. Actually, it's complicated enough that I've dreaded writing this post because I'm afraid it will blow up into a 5000 word expository essay saying effectively nothing. However, I'll try to be brief and split this into two parts.
A political poll is a scientific sample of the electorate, designed to predict how the entire population will vote. A few quick rules to remember:
-No poll is perfect; even the most honest and perfectly conducted poll will have between a 3 and 6 point margin of error. This is mostly because the sample is only a representation of the population, not the population itself.
-Every poll uses different methodology, and methodology greatly affects the outcome of the survey. It is not hard to quite justifiably create a poll that reports a seemingly wildly inaccurate picture of the electorate.
-Mixing polls is not a great way to get a sense of what is right. Because methodology differs so much from poll to poll, something like the Realclearpolitics.com average of the polls is interesting but no more informative than any individual poll... and less informative than a single well-conducted poll.
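That 3 to 6 point range isn't arbitrary: it falls straight out of the standard margin-of-error formula for a random sample. A quick sketch (the sample sizes here are just illustrative, not from any particular poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    Uses p = 0.5 (a 50/50 race), which gives the worst-case margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # → 3.1 points

# A smaller sample of ~300, common for state-level polls, is noisier:
print(round(margin_of_error(300) * 100, 1))   # → 5.7 points
```

So a sample of about a thousand people buys you roughly a 3-point margin, and smaller samples drift toward the 6-point end, before any of the methodology questions below even come into play.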
With that I have to ask: Where to start?
Registered voters vs Likely voters:
The first thing to look at when you're confronted with a poll is whether it is sampling registered voters or voters the pollster considers likely to actually cast a ballot. In a registered voter survey the pollster usually asks a very simple question, often "If the election were held today, would you vote for Barack Obama or John McCain?" (they'll rotate whose name is read first). The likely voter poll uses the same question but applies dozens upon dozens of handicaps to attempt to discern who will actually vote, and to account for known deficiencies in polling procedure.
Common "adjustments"
So, when considering likely voters the first, most obvious thing to look for is voting history. Usually this is determined by how many times the person being polled has voted out of their last 4 or 6 opportunities. The Democratic party pollsters tend to favor 6, the Republicans favor 4... I've never been given an explanation as to why. Based on voter history, different respondents are given more or less weight in the poll. Whereas the average voter is worth 1 point in a poll, those who are more likely to vote are weighted higher and those less likely to vote are weighted lower. For example, a person who has voted in 4 of the last 4 elections is very likely to vote in this election, so her response will be weighted as worth 1.2 points, while someone who has voted 0 of the last 4 times will be weighted at 0.8. In this way the final tally will tend to favor the opinions of those folks who have voted in the past and hypothetically better model the election in reality.
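The weighting described above can be sketched in a few lines. The 1.2 and 0.8 endpoints come from the example in the text; the linear scale between them is my own illustration, not any actual pollster's formula:

```python
def history_weight(votes_cast, opportunities=4):
    """Weight a respondent by voting history: a 4-of-4 voter counts 1.2,
    a 0-of-4 voter counts 0.8, scaling linearly in between (illustrative)."""
    return 0.8 + 0.4 * (votes_cast / opportunities)

def weighted_tally(responses):
    """responses: list of (candidate, votes_cast_out_of_last_4) pairs."""
    totals = {}
    for candidate, votes_cast in responses:
        totals[candidate] = totals.get(candidate, 0.0) + history_weight(votes_cast)
    return totals

# Two reliable voters for A, two habitual non-voters for B: a raw 2-2 tie...
sample = [("A", 4), ("A", 4), ("B", 0), ("B", 0)]
print(weighted_tally(sample))  # ...but weighted, A leads 2.4 to 1.6
```

Even with identical raw head counts, the candidate whose supporters actually show up to vote comes out ahead, which is the whole point of the likely voter model.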
A sample of other adjustments
Pollsters have studied their own art extensively. Firms like Zogby and Rasmussen have invested millions of dollars trying to call these elections exactly, and they make their money by being accurate. Because a phone poll cannot achieve a perfect sample of the population, for a myriad of reasons, these pollsters make best-guess adjustments. Some examples:
-Men are less likely to answer the phone than women. Also, men are more likely to vote. So when a man is surveyed, his response is weighted higher than a woman's to model this phenomenon.
-The rough guess is that a third of people under the age of 25 do not have a landline and are beyond the reach of a pollster. Therefore, when a person under 25 is surveyed, his or her answer is given greater weight.
-Republicans are more than twice as likely as Democrats to refuse to participate in a survey. So when Republicans are surveyed, they're given more weight than Democrats.
-The Democrats have about a 10% registration edge nationally... however, Republicans are statistically significantly more likely to vote. So the pollster has to decide how to balance these two conflicting inputs when deciding how many registered Republicans and registered Democrats to include in the sample.
-Older voters are more likely to vote than younger voters regardless of history. As a rule, the older the respondent is, the more weight they have in the poll.
-Newly registered voters turn out very unpredictably. Historically, some years have seen excellent turnout and enthusiasm among people voting for the first time, and in other years it's been very flat. Unfortunately this is almost impossible to predict. In 2000 new voters turned out in massive numbers despite no great effort on either party's part to register them. In 2004 both Republicans and Democrats registered millions of new voters, and those voters largely failed to show up! This is a very ticklish element for pollsters to account for because there are just no good rules to follow. Most of the time the surveyor has to use their "gut" and weight first-time voters based on early voting, absentee ballot requests and dead reckoning.
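One way to picture how the adjustments above stack up: each trait a respondent has multiplies into a single weight for that person's answer. The numbers below are purely illustrative placeholders, not any pollster's real factors, and real firms use more sophisticated techniques (such as raking to known population totals) rather than a flat lookup table:

```python
# Hypothetical adjustment factors, loosely mirroring the bullets above
# (illustrative values only -- not taken from any actual poll):
ADJUSTMENTS = {
    "male": 1.1,        # men are harder to reach by phone but likelier to vote
    "under_25": 1.3,    # many young voters have no landline
    "republican": 1.15, # Republicans refuse surveys more often
    "over_65": 1.2,     # older voters turn out more reliably
}

def respondent_weight(traits):
    """Multiply together every adjustment that applies to this respondent;
    anyone with no listed traits counts as a plain 1.0."""
    weight = 1.0
    for trait in traits:
        weight *= ADJUSTMENTS.get(trait, 1.0)
    return weight

# A young man's answer ends up counting well above a baseline respondent's:
print(round(respondent_weight(["male", "under_25"]), 2))  # → 1.43
```

Stack three or four of these on one respondent and a single phone call can swing the tally noticeably, which is exactly why two honest pollsters with different adjustment tables can publish very different numbers.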
There are literally dozens upon dozens more little adjustments and tweaks that go into a likely voter poll. Even the registered voter polls will select their sample based on how they believe the Republican vs. Democratic registration numbers will affect their sample. As such, there is no such thing as a poll that doesn't bear the fingerprints of whoever conducted it.
Okay, now you've got the basics of how a poll is put together, and why polling is considered more an art than a science. Next I'll apply these lessons to the way that polls actually look in reality, how corruption factors in, and who I trust to give me an accurate picture.