Today sees the release of data from the National Travel Survey (NTS) 2016, which provides the latest information on how and why we travel. But this year’s annual publication highlights an important issue facing both producers and users of national statistics; one that is likely to have significant implications for the social science community in the coming years.
One of the great things about the NTS is that it has been run using a largely unchanged methodology for several decades. This means we can confidently draw conclusions about what has or has not changed over time, without worrying that any differences simply reflect a change in how the data are collected. As this year’s NTS shows, we make broadly the same number of trips and spend around the same time travelling, but we go much further than we used to.
In 2016, an important methodological improvement was introduced into the survey. Short walks are recorded on only one day of the week-long diary, and this day was moved from the last day to the first. This has resulted in an increase in the number of walks recorded and hence breaks the time series. The publication therefore has to explain this change and what it might mean for the data. The good news for the NTS is that we will shortly be able to adjust the past data (by weighting up the number of short walks in earlier years) so that time series comparisons can be made again.
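To make the idea of weighting up earlier years concrete, here is a minimal sketch of that kind of adjustment. The uplift factor and the counts are invented for illustration; the actual NTS adjustment would be derived from the survey data itself.

```python
# Hypothetical illustration of re-weighting a broken time series.
# All figures here are made up for demonstration purposes; the real
# uplift factor would be estimated from the survey data.

def reweight_short_walks(series, break_year, uplift):
    """Scale pre-break short-walk counts so they are comparable with
    counts recorded under the new (first-day) diary methodology."""
    return {
        year: count * uplift if year < break_year else count
        for year, count in series.items()
    }

# Illustrative counts of recorded short walks per person per year.
observed = {2014: 180, 2015: 182, 2016: 210}

# Suppose the diary-day change is estimated to raise recorded short
# walks by about 15% (an assumed figure, not from the publication).
adjusted = reweight_short_walks(observed, break_year=2016, uplift=1.15)
print(adjusted)
```

The key point is that only the years collected under the old methodology are scaled; data from the break year onwards are left untouched, restoring a comparable series.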
Surveys continue to evolve, of course, but one key change is the move towards online data collection alongside face-to-face interviews. We are also likely to see data from alternative sources, such as administrative records, used alongside survey data. These changes, which are likely to gather pace in the next few years, will bring challenges for the presentation and interpretation of time series data. While the issue of methodological change is hardly new, we could see it affecting many of our most well-known and high-profile national statistics series.
Recently, the Crime Survey for England and Wales published its latest data alongside the police crime figures. Not for the first time, the two data sources painted different pictures (one showing a rise, the other a fall). Although this wasn’t a methodological change as such, it did necessitate explanations of why the discrepancy might have occurred and what it might mean. Such speculation and caveats around the data, which are also required when methodology changes, often result in a complex technical discussion that becomes difficult for many to follow. It also provides an opportunity for those inclined to dismiss the findings.
Data users have ever less time to absorb the detail of the research and demand ‘bite-size’ or topline findings. At the same time, the proliferation of media channels allows the well- and ill-informed alike to comment, challenge, claim, or label as ‘fake news’. Although the most recent British Social Attitudes data suggest that public trust in official statistics remains high, we must ensure that the growing complexity of our data doesn’t undermine this trust or provide an opportunity for those who don’t like the findings to simply discredit them.