Another year, another national vote, and once again the media and public have been captivated by the polls. Did Theresa May call the election only because of the apparent Conservative lead? Has there been a genuine surge in support for Labour in the last week? And, fundamentally, can we believe anything the pollsters say?
Following the 2015 General Election polling miss, an independent inquiry concluded that “the primary cause of the polling miss in 2015 was unrepresentative samples”. The trend this polling season seems to be for pollsters to apply adjustments more aggressively. In a recent article, two of the authors of this inquiry conclude that the representativeness of samples still remains a problem, but also that turnout weighting is having a much bigger effect on poll estimates now than it did in 2015.
In his blog post published alongside this, our Director of Survey Research Kirby Swales talks about sample quality; here I'll draw on evidence from NatCen on the impact turnout weights can have on political polling.
Looking back to the EU Referendum
Back in May 2016, NatCen conducted a survey looking at public opinion on the EU referendum question using our random probability panel – a unique approach that was recommended by the 2015 polling inquiry. We experimented with different adjustments to the data to predict how likely someone was to vote based on:
- Self-reported likelihood to vote (LTV): (On a scale of 0 to 10, how likely are you to vote in the EU referendum, with 0 meaning you definitely will not vote and 10 meaning you definitely will vote?) As a self-reported measure, this risks measurement error (e.g. people may mis-report or change their mind), and different groups may use the scale differently which could bias the estimates.
- Modelled likelihood to vote (based on turnout for the 2015 General Election). The effectiveness of this approach depends on the extent to which the variables used in the model explain voting behaviour, and whether the turnout patterns we saw in 2015 are repeated.
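To make the two adjustments above concrete, here is a minimal sketch in Python of how a turnout weight might be combined with a non-response weight to produce vote shares. The data, weights, and the simple "divide LTV by 10" scaling are all illustrative assumptions for this post, not NatCen's actual methodology.

```python
# Illustrative sketch only: toy data and simplified weighting, not NatCen's
# actual procedure. It shows the mechanics of the two turnout adjustments
# described above.

respondents = [
    # (vote intention, self-reported LTV on a 0-10 scale,
    #  modelled probability of voting, non-response weight)
    ("Remain", 10, 0.90, 1.1),
    ("Remain",  7, 0.60, 0.9),
    ("Leave",  10, 0.85, 1.0),
    ("Leave",   9, 0.80, 1.2),
    ("Remain",  4, 0.30, 1.0),
]

def weighted_shares(turnout_weight):
    """Vote shares after multiplying each non-response weight by a turnout weight."""
    totals = {}
    for intention, ltv, modelled_prob, base_w in respondents:
        w = base_w * turnout_weight(ltv, modelled_prob)
        totals[intention] = totals.get(intention, 0.0) + w
    grand_total = sum(totals.values())
    return {k: round(100 * v / grand_total, 1) for k, v in totals.items()}

# Approach 1: scale each respondent by self-reported likelihood to vote (LTV / 10)
ltv_shares = weighted_shares(lambda ltv, p: ltv / 10)

# Approach 2: scale each respondent by a modelled probability of voting
model_shares = weighted_shares(lambda ltv, p: p)

print(ltv_shares)
print(model_shares)
```

The point of the sketch is that the two approaches can rank the same respondents quite differently (a respondent reporting "10" may still have a low modelled probability), which is how the choice of adjustment moves the headline estimate.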
The table below shows that the way we adjusted for turnout had a substantial impact on our final estimate. Subsequent analysis (soon to be published), comparing these estimates with how people reported actually voting, found no clear evidence that the lead for Remain in our pre-referendum survey was created by sample bias or late swing (though these factors cannot be ruled out altogether). There was, however, evidence that the turnout adjustments we used were not effective at predicting turnout for either Leave or Remain supporters; in particular, because we did not predict that Leave voters were more likely to turn out, our estimate was affected.
Note: details of how these different approaches were applied can be found in the original report.
Back to the future
So what does this tell us about the 2017 General Election polling?
The table below shows the impact of applying two different turnout weights to data on how people said they would vote in the General Election – one based on self-reported likelihood to vote, the other on a model of how people voted in the 2015 General Election – compared with making adjustments for non-response only. (A caveat: the majority of interviews for this survey were completed in late April/early May and reflect the population at the time, so we are not presenting these figures as a prediction of the actual result on 8th June.)
The first thing to note is that both turnout models increase the Conservative lead over Labour by around 5 percentage points – similar to the effect seen in other analyses.
The second thing to note is that the two approaches actually produce fairly similar estimates, certainly relative to the difference they made to our pre-EU referendum estimate. We don’t know whether this finding would hold in the context of a non-probability sample, but this also backs up the suggestion that “turnout weighting would not appear to be the main cause of the volatility between the polls that has been evident in this campaign”.
Turnout weights are clearly having a larger impact on the polls in the 2017 General Election than they did in 2015. Last Thursday's article from YouGov discusses how the pollsters are experimenting with different weighting approaches, and we welcome this (and more!) conversation and transparency. However, this analysis tentatively supports the idea that differences in turnout weighting may not explain the variance we have seen between polls as much as they did at the EU referendum.
What we can be certain of is that the result looks uncertain, and no-one will be offering to eat their hats. I’ll personally be sleeping as the results come in, but for those of you keen to guess the result before the final figures, I’d suggest looking at the exit poll and the early marginal seats. Some polls will be right, some will be wrong, but both groups should examine their results closely and transparently to understand why – it’s possible to be right for the wrong reasons, and vice-versa.
For more information about the NatCen panel, its methodology, or research conducted with it, please do contact us at email@example.com
Follow me on Twitter: @CurtisJessop