In 1995 the Black Women’s Health Study (BWHS) set out to build fair, equitable and sustainable research partnerships with racially and ethnically diverse populations. The prospective cohort has since collected data from over 59,000 participants. Between 2004 and 2007, the BWHS team mailed mouthwash collection kits and materials directly to all BWHS participants, sent reminder letters and then called non-responders to encourage participation. Despite these efforts, the overall response rate was just 51%: only 11% (n = 5,772) actively refused, but over one-third of BWHS participants did not respond (n = 16,424) or could not be reached (n = 3,442).
Reflecting on these results, Lacey and Savage (2016) pose the following question: should research with a 51% response rate proceed? In answering it, they highlight several of the challenges epidemiological research faces and discuss solutions for improving the long-term sustainability of this type of work.
They suggest that low participant response is increasingly the norm, and that it affects both research validity and cost. Because only about half of the mailed collection kits yielded a sample, the cost per DNA specimen actually obtained was roughly double what was budgeted. In the particular case of the BWHS, the original investigators noted few differences between responders and non-responders. But could those differences, even if minor, create bias within a study?
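The cost arithmetic can be sketched with a toy calculation; the per-kit cost here is hypothetical, and only the ~51% response rate comes from the study:

```python
def cost_per_sample(cost_per_kit_mailed: float, response_rate: float) -> float:
    """Effective cost of each specimen actually obtained: every mailed
    kit costs money, but only the responding fraction yields a sample."""
    if not 0 < response_rate <= 1:
        raise ValueError("response_rate must be in (0, 1]")
    return cost_per_kit_mailed / response_rate

# Hypothetical $10 per mailed kit at the BWHS's 51% response rate:
# each usable sample effectively costs about $19.61, nearly double.
print(round(cost_per_sample(10.0, 0.51), 2))
```

At a 100% response rate the two costs coincide; every percentage point of non-response inflates the effective price of the data that is collected.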
BWHS respondents were older, healthier and more engaged with the health care system than non-respondents. Respondents were 30% more likely to be 60 years or older; women who took a multivitamin at least three times per week were 30% more likely to respond, and women who reported recent cancer screenings or visits to their primary physician were 60% more likely. Because age, health status and socioeconomic status are associated with many of the cancer and other health outcomes that will be studied in the BWHS, the risk of biased results is high.
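To see how differential response of this kind can bias estimates, here is a minimal simulation. All rates are assumed for illustration only, not taken from the BWHS:

```python
import random

random.seed(0)

N = 200_000
# Hypothetical cohort: 30% of members are 60 or older, the older group
# has a higher true outcome rate, and (echoing the BWHS respondent
# profile) older members are more likely to respond.
true_cases = observed_cases = responders = 0
for _ in range(N):
    older = random.random() < 0.30
    outcome = random.random() < (0.20 if older else 0.10)   # assumed risks
    responds = random.random() < (0.70 if older else 0.40)  # assumed response
    true_cases += outcome
    if responds:
        responders += 1
        observed_cases += outcome

true_prev = true_cases / N
observed_prev = observed_cases / responders
# Responder-only prevalence overshoots whole-cohort prevalence because
# the higher-risk (older) group is over-represented among responders.
print(f"true: {true_prev:.3f}  observed among responders: {observed_prev:.3f}")
```

Even though every individual response decision is random, the systematic difference in response probability between groups is enough to shift the observed estimate away from the truth.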
The authors note that the BWHS demonstrates how self-selection in a cohort can increase over time and subsequently affect research outcomes. They ask, “Besides context, motivation, and non-modifiable personal characteristics, what leads participants to agree to join a new study, complete follow-up surveys, or donate a biospecimen?” They suggest that how researchers ask individuals to participate, who asks and what is asked are critical factors in epidemiologic research participation. The California Teachers Study (CTS), which runs a biobank of its own, has found these factors to be significant contributors to participation and has actively worked to improve response rates by being responsive itself: tracking its follow-up timing relative to the window in which participants typically respond, the way requests were phrased, which staff members made them, and how all of this related to the specific biological materials participants were asked to provide.
Lacey and Savage conclude that researchers need to determine why their response rates hover around 50% and proactively work to address the issue. They further suggest that studies should be designed around what participant partners want and need in return for the investments they make as volunteers.
Lacey, V.J. and Savage, K.E. (2016) “50% response rates: Half-empty, or half-full?” Cancer Causes and Control [Epub ahead of print].