Posted on 14 January 2016 by Sir John Curtice, Senior Research Fellow
Since last May’s general election there has been plenty of speculation about why the polls collectively underestimated the Conservatives’ lead over Labour. Evidence on the reasons for their failure, however, has been in shorter supply. But today NatCen publishes a report that presents important new evidence on why the polls might have got it wrong.
The report presents the results obtained by the latest instalment of NatCen’s annual British Social Attitudes survey, which was conducted face to face between the beginning of July and the beginning of November last year. All 4,328 respondents to the survey were asked whether or not they voted in the general election the previous May and, if so, for which party.
Why are these results of interest? Well, BSA is conducted in a very different way from the polls. Not only does interviewing take place over an extended period of four months, but also during that time repeated efforts are made, as necessary, to make contact with those who have been selected for interview. At the same time, potential respondents are selected using random probability sampling. The addresses that interviewers are required to call upon are selected at random from the Postcode Address File (a comprehensive listing of all addresses in the UK), while at each address one person is selected for interview, again using a random process. This means that more or less anyone in Britain can be selected for interview, while their chances of being selected can also be calculated.
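The two-stage design described above can be sketched in a few lines of code. This is an illustrative sketch only: the address list, household sizes and sample size below are invented stand-ins, not details of the actual BSA design, but it shows why every selection probability is knowable under random probability sampling.

```python
import random

# Stand-in for the Postcode Address File: an invented list of addresses.
addresses = [f"address_{i}" for i in range(100_000)]

def draw_sample(addresses, n_addresses, rng):
    """Stage 1: sample addresses at random; stage 2: one adult per address."""
    sampled = rng.sample(addresses, n_addresses)
    interviews = []
    for addr in sampled:
        # Stand-in for listing the adults found living at the address.
        adults = [f"{addr}/person_{j}" for j in range(rng.randint(1, 4))]
        person = rng.choice(adults)  # random selection within the household
        # Selection probability = P(address chosen) * P(person | address),
        # which is exactly what makes the sample's properties calculable.
        prob = (n_addresses / len(addresses)) * (1 / len(adults))
        interviews.append((person, prob))
    return interviews

rng = random.Random(2015)
sample = draw_sample(addresses, 1000, rng)
```

The key property is that each respondent carries a known, non-zero selection probability, which is what a volunteer internet panel or an unanswered phone number cannot offer.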
Political opinion polls, in contrast, are typically conducted over just two or three days. That means they are more likely to interview those who are contacted more easily. While those polls that are conducted by phone ring numbers at random, the selection of the person to be interviewed is not usually made at random, while many a phone call goes unanswered or is greeted with the response, ‘No thanks’. At the same time, polls conducted over the internet are typically done by drawing people from a panel of persons who have either previously volunteered to take part in internet surveys or else have been successfully recruited into membership. They are certainly not drawn from the population at random.
Meanwhile, not only did the polls underestimate Conservative support and overestimate Labour’s before election day, but when they went back to those whom they had interviewed during the campaign they still obtained much the same result – that is, Conservative and Labour more or less neck and neck, when in reality the Conservatives were seven points ahead. In other words, the polls were still wrong even when the election was over. That means we cannot simply lay the blame for their difficulties on such possibilities as ‘late swing’ or a failure by those who said they would vote Labour to make it to the polling station. The polls may simply have been interviewing too many Labour voters in the first place.
It is on this issue that BSA helps shed some light. If, in contrast to the polls, BSA has managed more or less to replicate the election result, that would add considerably to the evidence that the polls were led astray because their samples were not fully representative of the British public. And if that is the case, perhaps BSA can also provide some clues as to why.
In fact, BSA 2015 has proved relatively successful at replicating the election result (see Table 1). This is especially the case so far as the Conservative lead over Labour is concerned. At 6.1 points (after the data have been weighted using BSA’s standard procedures), BSA’s estimate of the Conservative lead over Labour comes very close to the actual Conservative lead of 6.6 points. It seems that those Conservative voters that apparently proved so elusive are not necessarily so elusive after all.
Table 1 Reported Vote in the 2015 British Social Attitudes survey compared with the Actual Election Result
Party               Reported vote, weighted (%)   Election result, GB (%)
Conservative                   39.7                       37.8
Labour                         33.6                       31.2
UKIP                            9.0                       12.9
Liberal Democrat                7.3                        8.1
Green                           4.0                        3.8
Other                           6.4                        6.2
Con lead over Lab               6.1                        6.6
BSA is not alone in having found plenty more Conservative voters in the election than Labour ones. A face to face survey conducted by gfkNOP for the academic British Election Study that was also undertaken using random probability sampling put the Conservatives as much as eight points ahead of Labour. That two random probability samples have both succeeded where the polls largely failed strongly suggests that the problems that beset the polls did indeed lie in the character of the samples that they obtained.
Meanwhile, BSA’s data do provide some clues as to why those interviewed by the polls were not necessarily representative of Britain as a whole. First, those who participated in polls were much more interested in the election than voters in general. The polls pointed to a turnout of as much as 90% or so, far above the 66% of the electorate that actually voted. In contrast, just 70% of those who participated in BSA 2015 said that they made it to the polling station. More detailed analysis suggests that many a poll especially overestimated how many younger people would make it to the polls. And because younger voters were more Labour inclined than older ones, this created a risk that Labour’s strength amongst those who were actually going to vote was overestimated.
Second, those who are contacted most easily by polls and survey researchers appear to be more likely to have voted Labour than those who are more difficult to find. In the BSA survey, no less than 41% of those who gave an interview the first time an interviewer knocked on their door said that they voted Labour, while just 35% said that they voted Conservative. Only amongst those for whom a second or (especially) a third call had to be made were Conservative voters more plentiful than Labour ones. Labour’s lead amongst first-call interviewees cannot be accounted for by their demographic profile, which perhaps helps explain why the pollsters’ attempts to weight their data to match Britain’s known demographic profile failed to eliminate the pro-Labour bias in their samples.
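Why demographic weighting cannot rescue a sample with this kind of bias can be shown with a toy simulation. The vote shares and group sizes below are invented for illustration, not BSA figures: the point is simply that if easy-to-reach and hard-to-reach voters differ politically *within* the same demographic group, a poll that mostly reaches the easy group will be skewed, and no amount of demographic reweighting touches the problem.

```python
import random

rng = random.Random(7)

def voter(easy_to_reach):
    # Hypothetical, invented shares: easy-to-reach voters split
    # 45% Lab / 35% Con; hard-to-reach voters split 30% Lab / 50% Con.
    # Both groups are demographically identical by construction.
    r = rng.random()
    if easy_to_reach:
        return "Lab" if r < 0.45 else ("Con" if r < 0.80 else "Other")
    return "Lab" if r < 0.30 else ("Con" if r < 0.80 else "Other")

# An electorate of 10,000: half easy to reach, half hard to reach.
population = [voter(easy) for easy in [True] * 5000 + [False] * 5000]

# Extreme case of short fieldwork: the poll reaches only the easy half.
poll = population[:5000]

def con_lead(sample):
    """Conservative lead over Labour, in percentage points."""
    return (sample.count("Con") - sample.count("Lab")) / len(sample) * 100

print(f"True Con lead:   {con_lead(population):+.1f}")
print(f"Polled Con lead: {con_lead(poll):+.1f}")
```

With these invented numbers, the full electorate gives the Conservatives a clear lead while the quick poll puts Labour ahead, even though the two contactability groups are identical on every demographic variable a pollster could weight by.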
Of course nobody is ever going to suggest that a poll should be conducted over a period of four months, though maybe taking a little longer would prove to be in the pollsters’ own best interests even when their role is to generate tomorrow’s newspaper headline. But where the objective is to conduct serious, long-term and in depth research that is intended to enhance our understanding of the public mood in Britain, the lesson is clear. Time-consuming and expensive though it may be, random probability sampling is still the most robust way of measuring public opinion. Hopefully it is a lesson that will now be appreciated by those who fund opinion research.