Which are Right? Phone or Internet Polls? New Evidence from a NatCen Innovation

Posted on 20 June 2016 by John Curtice

One of the marked features of polling during the referendum campaign has, of course, been the divergent findings of internet and phone polls. Most of the time, polls done via the internet have put Remain and Leave neck and neck while those undertaken by ringing people up on the phone have put Remain ahead. Inevitably this has engendered a lively debate about which set of figures, if either, is correct, while in some cases polling companies have adapted their methodology in order to respond to some of the criticism of both kinds of polling that this debate has provoked.

Meanwhile this debate has come on the back of the difficulties that the polls had last year in estimating correctly the balance of Conservative and Labour support. The official Inquiry into what went wrong argued that the main problem lay with the unrepresentative character of the samples that both types of polling had acquired, a character that the companies’ various attempts at weighting and filtering their data had failed to correct. In contrast, two surveys conducted after the election with randomly selected members of the public (a very different approach to that used by both internet and phone polls), that is, the British Social Attitudes survey and the British Election Study, were both relatively successful at replicating the 2015 election result (unlike the polls, which still struggled to do so even after the event). The Inquiry suggested, inter alia, that this experience underlined the continued need for surveys conducted via random probability sampling, relatively expensive though that approach is, and said that it would welcome an attempt to undertake internet polling via that approach.

That last recommendation has been picked up by NatCen Social Research, who have established a panel of people who were originally interviewed as part of the (random probability) 2015 British Social Attitudes (BSA) survey, and who have agreed to participate in subsequent interviews via the internet, or, if necessary, via the phone. It is the first time that such a probability-based internet panel has been established in the UK. As many members of this panel as possible were surveyed about their views on the European Union between mid-May and mid-June, including how they intended to vote on Thursday. The interviewing was conducted over a lengthy four-week period in order to guard against the risk that the views about the EU of those who are more difficult to contact are different from those who are reached more easily – such ‘availability bias’ does after all appear to have been one of the things that helped undermine the accuracy of the polls in 2015.

All in all, 62% of those BSA respondents who agreed to join the panel participated in this survey on the EU, some 1,632 people in total. Interviews were initially collected via the internet, but those who did not respond via that route (together with those who did not have access to the internet) were followed up by phone. The data have been weighted so that the distribution of a range of demographic characteristics, together with reported interest in politics, among respondents to the EU survey matches that for all respondents to the original 2015 BSA survey.
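
To make the logic concrete, here is a minimal sketch of cell-based post-stratification weighting of the general kind described above. The data, the single weighting variable and the target shares are all invented for illustration; the actual survey weighted on a range of demographic characteristics plus political interest, using targets from the full 2015 BSA sample.

```python
# Minimal sketch of cell-based post-stratification (illustrative data only).
# Each respondent's weight scales their demographic cell so that the cell's
# share in the EU survey matches its share among all 2015 BSA respondents.
import pandas as pd

# Hypothetical respondent-level data: one weighting cell per person.
eu_survey = pd.DataFrame({
    "cell": ["graduate", "graduate", "non_graduate", "non_graduate", "non_graduate"],
    "vote": ["Remain", "Remain", "Leave", "Remain", "Leave"],
})

# Hypothetical target cell shares, as if taken from the full 2015 BSA sample.
bsa_targets = {"graduate": 0.30, "non_graduate": 0.70}

# Weight = target share / achieved share for the respondent's cell.
achieved = eu_survey["cell"].value_counts(normalize=True)
eu_survey["weight"] = eu_survey["cell"].map(lambda c: bsa_targets[c] / achieved[c])

# Weighted vote shares.
shares = eu_survey.groupby("vote")["weight"].sum() / eu_survey["weight"].sum()
print(shares.round(3))  # Remain 0.533, Leave 0.467 on these made-up numbers
```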

The headline finding from the report of this exercise is clear. Once Don’t Knows are left to one side, the survey estimates that 53% would have voted to Remain in the EU if the referendum had been held while the survey was being conducted, while 47% would have voted to Leave. This figure lies in between the estimates being produced by internet and phone polls at the time – internet polls were saying Remain 50%, Leave 50%, while phone polls were calling it Remain 55%, Leave 45%. There have, of course, been signs of some movement towards Leave since then.
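
Setting Don’t Knows aside is simply a renormalisation over those who express a preference. The raw figures below are hypothetical, chosen only to show how a 53:47 split of the kind reported here can arise:

```python
# Renormalising vote shares with Don't Knows excluded (hypothetical raw figures).
remain, leave, dont_know = 45.0, 40.0, 15.0  # raw percentages, invented
decided = remain + leave
print(f"Remain {100 * remain / decided:.0f}%, Leave {100 * leave / decided:.0f}%")
# -> Remain 53%, Leave 47%
```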

This estimate takes into account the possibility that the outcome might be affected by differential turnout. However, to do so, the survey relies not on respondents’ reports of their probability of voting (as most polls do) but rather on the propensity of those in different demographic groups to have voted in 2015 (as measured by the 2015 BSA, in which the overall level of turnout was only a little higher than the official figure). The effect of this procedure was to add one point to Remain and deduct one point from Leave, primarily because university graduates (who are more likely to vote for Remain) were more likely to turn out on that occasion (and indeed in elections and referendums in general) than were those with few, if any, educational qualifications. Thus at Remain 52%, Leave 48%, the estimate of support for the two sides without taking into account possible differences in turnout still lies in between the figures being produced by the two kinds of polling.
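
The sketch below illustrates that turnout adjustment: each demographic group’s vote shares are weighted by the group’s 2015 turnout rate, rather than by respondents’ self-reported likelihood of voting. All of the group shares, Remain shares and turnout rates are invented; the survey itself reports only the one-point net effect of the adjustment.

```python
# Turnout adjustment via group-level 2015 turnout rates (invented numbers).
groups = {
    # group: (share of sample, Remain share within group, 2015 turnout rate)
    "graduate":     (0.30, 0.65, 0.85),
    "non_graduate": (0.70, 0.48, 0.62),
}

# Unadjusted Remain share: average the group Remain shares by sample share.
unadjusted = sum(s * r for s, r, _ in groups.values())

# Adjusted share: weight each group additionally by its turnout rate.
adjusted = (sum(s * r * t for s, r, t in groups.values())
            / sum(s * t for s, _, t in groups.values()))

print(f"Unadjusted Remain: {unadjusted:.1%}")        # 53.1% on these numbers
print(f"Turnout-adjusted Remain: {adjusted:.1%}")    # 54.3% on these numbers
```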

There is one other point to note about this exercise. It has been suggested, not least on the basis of data collected by the British Election Study, that those who say they will vote to Remain are more difficult to contact than those who are inclined to vote to Leave, and that thus polls are at risk of suffering from an ‘availability bias’ in favour of Leave. This survey, in contrast, found the very opposite pattern. Those who were interviewed towards the end of the fieldwork period were more likely to say they would vote to Leave, not least because fewer of them were graduates (in part because those without access to the internet, who were therefore interviewed by phone, are less likely to be graduates). It thus may be unwise to assume that polls conducted over a much shorter fieldwork period are necessarily at risk of finding too few Remain voters.

On Thursday we will, of course, find out the truth about the relative accuracy of the internet and phone polls. But using a method that tries to overcome some of the deficiencies in the polls that were revealed on the occasion of last year’s general election, this survey suggests that the truth may lie in between whatever final figures the two methods eventually produce. If so, the case for bringing together the advantages of the internet in terms of speed and cost and the strengths of random probability sampling in terms of quality will have been strengthened significantly.

By John Curtice

John Curtice is Senior Research Fellow at NatCen and at 'UK in a Changing Europe', Professor of Politics at Strathclyde University, and Chief Commentator on the What UK Thinks: EU website.

12 thoughts on “Which are Right? Phone or Internet Polls? New Evidence from a NatCen Innovation”

  1. Could Prof Curtice comment on the relationship between opinion polls and the probabilities and odds offered by bookmakers? It seems that even when the opinion polls were predicting a majority vote for Leave, the bookies were quoting odds based on a 60% probability of the outcome being Remain. What are the bookies factoring in that is not revealed by opinion polls?

    1. I think that is partly to do with odds being determined by the volume of betting on either side, not by any “predictive” mechanism. It could simply be that Remain have more money to bet.

  2. It is likely the internet polls are closer to the facts. Are the pollsters taking into account the big differences between urban areas and provincial areas? Provincial areas are more conservative, both small C and big C, than London. Are pollsters asking respondents their religion? Religion, not social class, is the main determinant of how you vote. Roman Catholics are more internationally minded than Protestants, because they belong to a European-based international church and have more affinity with the EU via their religious similarities with countries like Ireland, Spain and Italy. Other religious minorities are also more internationally minded and therefore more likely to be Remain voters.

  3. This made me chuckle. Osborne hinted that he might close the stock market on Friday. He does not seem to have considered whether the markets could have been unsettled by the scare stories that he and Remain have been putting about.

    1. @jon livesey
      You don’t seem to have considered that the markets respond to facts, conditions, and forecasts.
      While the Treasury forecasts will undoubtedly be factored in, they would be quickly rejected if they weren’t based on sound assumptions and were not in line with every other analysis, as well, no doubt, as the private analysis commissioned by stakeholders in the market.

      1. Ah yes. Surely that explains why markets so completely anticipated the effect of the US housing crash, right?

        Honestly, anyone who can declare that markets respond to “facts, conditions, and forecasts” and not realize that this is what I am saying, just isn’t reading very well.

        What is a “forecast”? A scaremongering story is a forecast. It’s not a forecast based on anything very useful, but it is certainly a forecast. And what has Osborne been pumping out for the last few months? Forecasts that were scaremongering stories.

        Have we forgotten that £30bn “hole” he was going to find in the budget come Friday? The ending of trade with Europe? The “bill” for each family, down to the last pound, over ten years?

        If Osborne closes the markets on Friday, he will be confronting a problem of his own making and he will have made a complete mockery of any reputation for stability and calm that British Governments have traditionally enjoyed.

    2. The ‘scare stories’, I feel, are Cameron and Osborne setting out reasonable warnings, but exaggerated in their usual style of political claim, now that they are addressing cross-party voters who are not so ready to accept it.

      1. Really? Claiming that you will need an “emergency budget” to solve a £30bn “hole” in tax revenue that no one but Osborne can see is “reasonable”? I think some people actively want to be panicked. It’s their adrenalin rush.

  4. Thanks for your research Professor.

    One quick question: Was there any way to adjust for the shifting overall trend in voters over the four weeks of the survey? You write that “It has been suggested, not least on the basis of data collected by the British Election Study, that those who say they will vote to Remain are more difficult to contact than those who are inclined to vote to Leave, and that thus polls are at risk of suffering from an ‘availability bias’ in favour of Leave. This survey, in contrast, found the very opposite pattern”. Is it possible that the remain camp is indeed still harder to contact than the leave camp, only that shifting patterns in the wider electorate produced this result – i.e. there was a shift to leave over the duration of this survey that led to the appearance of leave being harder to contact, when this doesn’t reflect the underlying reality?

    Thanks again, Professor, your research has proved invaluable many times!

    1. My thoughts exactly. But great analysis, and it will be very useful for the market research industry (from a market research professional).
