Explainer: how do you read an election poll?
- Written by The Conversation
The first published opinion poll seems to have been in 1824, when the Harrisburg Pennsylvanian newspaper correctly predicted the result of the US presidential election. Things have moved on a long way since then, and opinion polls have become a permanent part of the election landscape in most democratic countries.
But do the polls tell us anything useful? Well, up to a point, yes, they do.
The first thing about polls that puzzles people is how they can produce anything like accurate information when they canvass the voting intentions of only about 1,000 people – a tiny proportion of the population.
I like to use the analogy of cooking a large pan of soup. If you want to know how it tastes, you don’t have to eat the whole lot – a spoonful will do, as long as you’ve stirred the pot up properly. (And indeed the same size spoonful will do, whether you’re tasting a small pan or a huge vat.)
If you can get a sample of electors that’s representative of the whole electorate, you can ask them how they will vote and that will tell you, to a pretty good approximation, how the whole electorate will vote.
But you do have to do the equivalent of stirring up the soup. If you just skim a spoonful off the top, you’ll get whatever floats, and that might not represent the whole of the pan.
The poll conducted by US magazine Literary Digest for the 1936 presidential election is a classic example of what happens if you don’t stir. The publication asked about 10m people whether they would vote for Alfred Landon or Franklin D Roosevelt, and about 2.4m replied – a sample far bigger than any typical opinion poll. Its results showed Landon well in the lead, but in fact Roosevelt won by a landslide. The problem was that the Literary Digest had asked (mainly) just its own readers, who were far from typical of the US electorate.
Since then, pollsters have been much more careful. These days, a typical opinion poll will involve asking maybe 1,000 or 2,000 carefully chosen people which party they intend to vote for. The people are chosen, as far as possible, to match the population of the area or country being polled, in terms of age, gender, some measure of social class, and probably other features such as work status (employed, unemployed, retired, and so on) and the region of the country where they live.
Most of the polling organisations speak to their sample of electors either by telephone interviews or through websites.
The margin of error
There are several kinds of opinion poll results for an election. The commonest kind in the UK gives the voting intentions for the whole of Great Britain. (Often Northern Ireland is left out of these “national” results, because politics works differently there.)
The pollster will publish the percentage of electors that say they will vote for each of the political parties, and will also give some idea of the margin of error attached to these percentages.
The margin of error is a way of reporting what’s called sampling error in the statistical trade. The point is that the pollsters didn’t ask everyone. They may have been unlucky with their sampling and just happened, by chance, to get rather fewer SNP supporters in their sample than are found in the whole electorate.
For a typical poll with 1,000 people, the margin of error would be about 3 percentage points. The exact meaning is a little technical – if a party is reported in a poll as having 34% of the national vote, with a margin of error of 3 percentage points, that means there is a high chance that the true percentage lies somewhere between 31% and 37%, though percentages outside that range are not entirely ruled out.
Increasing the number of people polled will obviously tend to reduce the margin of error, but not as strongly as you might think. If you polled 2,000 people instead of 1,000, for instance, then (other things being equal) the margin of error would go down only from about 3 points to about 2 points.
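For readers who like to see the arithmetic, here is a minimal sketch in Python of the usual textbook approximation (a simple random sample and a 95% confidence level – real pollsters’ calculations are more involved):

```python
import math

def margin_of_error(share_pct, n, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a
    reported vote share from a simple random sample of size n."""
    p = share_pct / 100.0
    return 100.0 * z * math.sqrt(p * (1.0 - p) / n)

# A party on 34% in a poll of 1,000 people:
print(round(margin_of_error(34, 1000), 1))  # about 2.9 points – usually quoted as "plus or minus 3"

# Doubling the sample to 2,000 shrinks it only to about 2 points:
print(round(margin_of_error(34, 2000), 1))  # about 2.1 points
```

Because the margin shrinks with the square root of the sample size, you would need to quadruple the sample to halve the margin of error – which is one reason pollsters rarely go far beyond 1,000 or 2,000 respondents.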
A mug’s game?
Much more of a problem is that even a perfect set of poll results wouldn’t tell you the election result. This is where the soup analogy breaks down: if you did eat the whole panful, you certainly would know exactly how the soup tastes, though the guests you’d cooked it for might be less than pleased.
If you could go out today and ask every elector in the UK how they are going to vote, that would give you some idea of the election result, and there would be no margin of error, but you still wouldn’t know the exact result. Some people will change their mind between now and the election. Some people will just not turn out to vote on election day. Some people won’t tell you the truth. Pollsters try to allow for these effects, but it’s not easy.
And that’s not even the biggest problem in countries like the UK, which has a first-past-the-post electoral system.
Even if you knew exactly what the national shares of the vote would be, that doesn’t tell you how many parliamentary seats each party will win. In the 2010 general election, for instance, Labour got 29% of the country’s votes, and almost 40% of the parliamentary seats, while the Liberal Democrats got 23% of the votes and only 9% of the seats. UKIP got 3% of the votes but no seats, while the SNP got 1.7% of the votes and six seats.
The process of translating opinion poll vote shares into forecasts of seats in parliament can therefore be very complicated, and different polling companies and political analysts do it in very different ways. Until recently the most common way to do it used something called “uniform national swing”, which takes each constituency’s result from the previous election and shifts every party’s share by the amount that party’s national vote share has changed.
That often worked pretty well in the past, but can’t take into account the specific characteristics of different constituencies, and it also can’t take into account data from opinion polls run in single constituencies, which are becoming more common. So a wide range of different methods is being used to produce seat forecasts for the 2015 election – and we won’t really know which is best until after all the votes are counted and we can compare the forecasts with reality.
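To make the idea concrete, here is a toy sketch of a uniform national swing projection. The party shares and constituencies below are invented purely for illustration; real seat forecasters use far more elaborate models:

```python
# A minimal sketch of a uniform national swing (UNS) projection.
# All figures are invented, purely for illustration.

previous_national = {"Lab": 29.0, "Con": 36.0, "LD": 23.0}  # last election, %
polled_national   = {"Lab": 34.0, "Con": 33.0, "LD": 9.0}   # current poll, %

# Swing = change in each party's national vote share since the last election.
swing = {p: polled_national[p] - previous_national[p] for p in previous_national}

# Hypothetical constituency results at the last election.
constituencies = {
    "Anytown":   {"Lab": 40.0, "Con": 35.0, "LD": 25.0},
    "Otherford": {"Lab": 25.0, "Con": 45.0, "LD": 30.0},
}

winners = {}
for name, shares in constituencies.items():
    # Apply the same national swing to every constituency.
    projected = {p: shares[p] + swing[p] for p in shares}
    winners[name] = max(projected, key=projected.get)  # first past the post

print(winners)  # {'Anytown': 'Lab', 'Otherford': 'Con'}
```

Applying the same swing everywhere is exactly what makes the method simple – and exactly why it struggles when individual constituencies behave differently, as described above.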
The exit poll
An exit poll is a special kind of opinion poll that, as the name might suggest, involves asking people who they voted for as they exit from a polling station. You might wonder why anyone bothers – after all, the true results will be known pretty soon. One reason becomes obvious when you look at who actually pays for exit polls – it’s media and news organisations. An exit poll can provide a prediction, and encourage people to watch the media channel that provides it. They also provide material to fill up those lengthy election night programmes.
Exit polls are often very accurate compared to polls taken before polling day, because they ask people who are known to have at least gone to the polling station and therefore probably voted, and because they aren’t affected by last-minute changes of voting intention.
What to look out for
So what should you look for when you’re reading poll results? Certainly, look at the size of the margin of error. It will be reported somewhere – if you can’t find it in the news report, it will be given on the polling organisation’s website. If the news story is making a big fuss about, say, a 2% change in one party’s fortunes, well, that’s probably less than the margin of error and may simply be down to chance variation from one sample of electors to another. The same goes if it’s making a big play about a difference of just one or two percentage points between two parties.
Something else to bear in mind is that there can be systematic differences in the results between different polling organisations. These so-called “house effects” might be due to differences in the way they choose their samples or do the weighting. Most of the polling organisations whose polls you see in the mainstream media are members of a trade organisation, the British Polling Council. This imposes rules of good conduct on its members, so they will not be doing something deliberately deceptive, and will report adequately what they did.
But perhaps the most important advice is never to read too much into the results of just one opinion poll. It may be a fluke – it may have had, entirely by chance or bad luck, a particularly unrepresentative sample. Many organisations produce summaries of most or all of the published opinion polls; such a poll average is likely to be more accurate than any single poll because it smooths out chance variations and house effects.
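As a toy illustration of why averaging helps, here is a sketch that simply takes the mean of three invented polls; serious poll-of-polls aggregators typically also weight by sample size, recency and known house effects:

```python
# All figures invented, purely to illustrate a simple poll average.
polls = [
    {"Lab": 33, "Con": 35},  # pollster A
    {"Lab": 36, "Con": 32},  # pollster B
    {"Lab": 34, "Con": 34},  # pollster C
]

# Plain mean of each party's share across the polls.
average = {party: round(sum(poll[party] for poll in polls) / len(polls), 1)
           for party in polls[0]}
print(average)  # {'Lab': 34.3, 'Con': 33.7}
```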
Other organisations combine poll results and (possibly) other information in a more sophisticated way, and may use them to produce forecasts of numbers of parliamentary seats.
But, for the 2015 UK general election, we won’t know who predicted best until the results are all out – and as usual, by that time we won’t need a forecast any more. Not until the next time, anyway.
Kevin McConway does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.
Read more http://theconversation.com/explainer-how-do-you-read-an-election-poll-41204