Around this time last year, I wrote a blog about the take-home messages from the International Survey Non-Response Workshop in Oslo. Once a year researchers, statisticians and academics from around the world get together for three days to discuss the issues surrounding participation in social surveys.
This year, I returned to the workshop, this time in the beautiful Dutch city of Utrecht, to see how the debate had moved on over the past 12 months. In this blog I reflect on the emergence of probability-based online panels and what we’ve learnt about how to maintain them.
A probability-based whatsit?
Panels are simply groups of people who take part periodically in research. Usually you will join a panel and be invited to take part in, say, a survey once every two weeks. Panels have many advantages for data collection, from speed, to the fact that you can track how people’s views change over time.
Most commercial research panels use quota approaches to sampling. In this case, people typically volunteer to join, and are invited to take part in each survey until specific allocations are filled, like certain numbers in each age and gender group.
For probability-based panels, on the other hand, a limited number of randomly selected individuals are invited to join. This means that the panel isn’t biased towards the views and characteristics of people who volunteer to join research panels – or at least not in ways that demographic quotas can’t iron out. This is the method we used to create the NatCen Panel that launched last summer.
This type of panel is a relatively new development in social research and the workshop covered some of the most important debates, with the key question being: do probability panels really offer a representative view of the population?
Representing over time
Researchers working on the LISS Panel presented an illuminating analysis of representativeness over time. LISS is a general population panel in the Netherlands that has now been running for a decade. The representativeness of a sample can be assessed using a statistic called an ‘R’ indicator – R for representativeness! It answers the question: how far does my sample represent the population, according to information I know about both?
But in this case, the researchers didn’t compare with the population – where relatively little is known. They compared participants in the first panel survey with those in later waves of data collection. So, their analysis looked at how much the representativeness of the panel has changed over time.
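To make the R-indicator idea concrete, here is a minimal sketch of one common way it is calculated: estimate each sample member’s response propensity (here, simply the observed response rate within hypothetical age-group cells – real analyses typically use a propensity model with more auxiliary variables) and take R = 1 − 2 × SD of those propensities, so that R = 1 means everyone is equally likely to respond. The data below are simulated purely for illustration.

```python
import numpy as np

def r_indicator(propensities):
    """R-indicator: R = 1 - 2 * standard deviation of response propensities.
    R = 1 means a perfectly representative response (everyone equally
    likely to respond); lower values indicate less representative samples."""
    return 1.0 - 2.0 * np.std(propensities, ddof=1)

# Simulated sample (hypothetical numbers): three age bands with
# different underlying likelihoods of taking part.
rng = np.random.default_rng(42)
age_group = rng.integers(0, 3, size=1000)
true_propensity = np.array([0.8, 0.6, 0.4])[age_group]
responded = rng.random(1000) < true_propensity

# Estimate each person's propensity as their cell's observed response rate.
est = np.array([responded[age_group == g].mean() for g in range(3)])[age_group]

print(f"Estimated R-indicator: {r_indicator(est):.3f}")
```

Comparing this statistic across waves, as the LISS researchers did, shows whether drop-out is making the remaining panel more or less skewed over time.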
Their data showed that although there were substantial reductions in representativeness during the first two years of the panel (as the most casual participants departed), after two years this became stable, even though panellists continued to drop out.
War of attrition
Although attrition appears not to be a critical issue for panels like LISS, which ask questions about multiple topics, it can be for others which only ask about one. A presentation from the US Federal Reserve Board provided a cautionary tale on panel representativeness. Their panel survey tracks the use of digital financial services across the US. The research team observed that over time, as the initial participants dropped out of their panel, older, more affluent, white males were becoming over-represented. With a single-topic panel, there is a particular danger that those engaged in the topic will continue to respond over time, while those who are not will drop out.
Practicalities vs. method
As well as limiting drop-out, panel managers must also ensure that their fieldwork practices encourage a balanced sample. One of the key advantages of panels is their ability to turn around data collection really quickly; typical fieldwork length is reduced from months to weeks. Researchers working on the GESIS Panel, a general population panel in Germany, shared their work on how they are balancing the desire for shorter fieldwork periods with the risk of bias.
Their analysis looked at how representativeness changed over the course of fieldwork, as more questionnaires were completed. Panel members can decide whether they would like to take part online or using a postal questionnaire – the split is about two thirds online and the remainder by post.
They showed that non-response bias stabilised after just a week of fieldwork for online participants, but took two to three weeks for those taking part by post. This is probably a simple function of postal questionnaires taking longer to return, but it shows the care needed when making trade-offs between practicalities – like timing and costs – and methodological considerations.
It was great to see that so many probability-based panels are now running successfully across Europe and North America, and that researchers are working together to make sure they continue to provide the highest quality samples. We’ll be considering how we can use these findings to fine-tune our approach on the NatCen Panel over the coming months.
Find out more about the NatCen Panel
Follow me on Twitter: @matthewcjonas