BOBI Awards 2015 Finalist: Excellence in Data Collection & Fieldwork

What is it?

An industry award given to a project or research study that has demonstrated clear benefits in the way that the data and/or respondent information was collected. Entries are open to all types of market research/business intelligence, from qualitative and quantitative research through to secondary data and analytics. The focus of this award is:

  • Generation of insights through delivery of high quality data and/or service
  • Improving traditional techniques and/or the introduction of innovative methods
  • Clear evidence of tangible positive impact on the UK client business, patients and/or the NHS

Our entry

Give and Take: using visual and interactive data summaries for mutual client / respondent benefit in brand tracking (Liz Gilbert, Roche Products & John Aitchison, First Line Research)

Roche and First Line Research incorporated interactive visual summaries of respondents' data into two online oncology brand trackers. The aim was two-fold: first, to provide reassurance as to the accuracy and quality of data capture; second, to deliver something of additional and intrinsic value to participants that would make their research experience more rewarding. A solution was designed and programmed that allowed respondents to review their own data in diagrammatic format, make any corrections, and then print it out if they wished. The results were very encouraging, and have persuaded Roche to include such visual summaries as standard in future online brand trackers.

Declining participation rates threaten the quality and sustainability of online research amongst HCPs. Whilst online methods have enjoyed strong commercial and technical advances over the last 15 years or so, the boom is not without its fall-out: a chaotic market for respondent participation. All healthcare researchers know that the volume of online HCP surveys has increased, but those on the supply side can also testify that alongside the growth has come:

  • Increasingly stringent and lengthier screening;
  • Longer survey durations;
  • A decline in average standards for question quality and survey design;
  • Further compression of project and fieldwork schedules; and more …

Unsurprisingly, respondent satisfaction with the survey experience has waned. This is most clearly evidenced by an observable, steady decline in response rates in healthcare and other sectors (we are happy to evidence the statements above on request). Yet for many in business intelligence the problem is effectively invisible, as the relatively unglamorous task of achieving sample is outsourced to specialist third parties. In the USA, the problem of declining response rates became so acute that fieldwork companies were forced to act before they ran out of sample. We are approaching a similar moment in the UK, and face a choice: change our practice in respect of how we treat respondents, in order to preserve long-term interest in (online) market research participation, or continue to overlook the problem in favour of immediate commercial imperatives whilst we quietly run out of sample.

Roche has an established and successful oncology portfolio and has, since 2008, engaged First Line Research to undertake brand tracking and market understanding studies. First Line Research is close to the issues described, having specialised solely in online research since 2004. Together, we sought a way of returning something meaningful to participating oncologists that also had mutual benefit, in terms of improving data quality and client confidence in the results.

Poorly executed brand trackers are exactly the type of survey that can lead to respondent fatigue and dissatisfaction. They are numerous in our sector; tend to contain similar material wave to wave; and often ask the participant to complete detailed information about their prescribing habits and/or patient management and/or assessment of treatment attributes. In the Roche trackers respondents are of course remunerated at standard rates (typically with Amazon vouchers), but that would normally be the end of the story for them until the next survey invitation. There was arguably little of intrinsic value in their experience, and certainly nothing of non-monetary value to take away. It occurred to us that the information entered in the often detailed prescribing / patient flow sections might have some value as feedback on respondents' own practice. Once converted into visual and summary form, a record of their own stated activity might, we hypothesised, be interesting to them, even instructive, and certainly not something that would otherwise have been created.

On our side, given the effort required to complete these sections and their hierarchical, step-wise nature (inevitable because entries must be validated in real time against prior entries), Roche and First Line wanted to be confident that respondents were entering their information as accurately as possible. We devised and programmed what we believe to be a novel approach to allow this mutual benefit to be realised.
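The production code itself is not reproduced here, but the kind of real-time consistency rule involved can be sketched as follows; the names and structures are hypothetical, for illustration only.

```typescript
// Hypothetical sketch: each step of the patient flow is validated in real
// time against the totals entered at the previous (parent) step, so that
// mathematically impossible answers are rejected as they are entered.

interface FlowEntry {
  label: string;    // e.g. "1st line: Drug A"
  patients: number; // patient count entered for this option
}

// Returns an error message if the child entries are impossible given the
// parent total, or null if the step is internally consistent.
function validateStep(parentTotal: number, children: FlowEntry[]): string | null {
  if (children.some(c => c.patients < 0 || !Number.isInteger(c.patients))) {
    return "Patient counts must be whole, non-negative numbers.";
  }
  const sum = children.reduce((acc, c) => acc + c.patients, 0);
  if (sum > parentTotal) {
    return `These entries total ${sum} patients, but only ${parentTotal} were reported at the previous step.`;
  }
  return null; // consistent with prior entries
}
```

A rule of this shape is what makes the sections hierarchical and step-wise: each screen can only be completed once it is consistent with the totals entered before it.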

Prescribing / patient flow questions were asked as usual, but at the end of each clinical setting section (e.g. neoadjuvant, first line, second line etc.) we presented respondents with a screen on which they had the opportunity to review a visual summary of the numbers they had entered, as shown below.

[Screenshot: visual summary screen. NB: all numbers shown are fictional and do not represent actual data captured.]

On this screen respondents could (see the illustrative sketch after this list):

  • Click-to-correct:
    All numbers entered appeared underlined and highlighted, and clicking on one returned the respondent directly to the relevant screen so that any corrections could be made. Having made changes, the respondent was returned directly to this visual summary page.
  • Print out and keep:
    Once satisfied that the visual summary represented their prescribing accurately the respondent was able to print it out, for their own use.
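The mechanics behind these two facilities amount to remembering where each displayed figure came from, and where to return to afterwards. A minimal sketch with hypothetical names follows; our production implementation is not shown, and the print facility needs little more than the browser's native print dialogue.

```typescript
// Hypothetical sketch of click-to-correct: every figure on the summary is
// rendered as a highlighted, clickable element that knows which survey
// screen produced it. Clicking jumps back to that screen; saving the
// correction returns the respondent straight to the summary.

interface SummaryFigure {
  value: number;
  sourceScreenId: string; // the screen where this number was entered
}

class SurveyNavigator {
  private returnTo: string | null = null;

  constructor(private goToScreen: (id: string) => void) {}

  // Respondent clicks a highlighted number on the visual summary.
  clickToCorrect(figure: SummaryFigure, summaryScreenId: string): void {
    this.returnTo = summaryScreenId;        // remember the summary page
    this.goToScreen(figure.sourceScreenId); // jump to the relevant question
  }

  // Respondent has saved their correction on the entry screen.
  correctionComplete(): void {
    if (this.returnTo !== null) {
      this.goToScreen(this.returnTo); // return directly to the summary
      this.returnTo = null;
    }
  }
}

// "Print out and keep" can be as simple as the browser's print dialogue,
// with a print stylesheet scoped to the summary content.
function printSummary(): void {
  window.print();
}
```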

The technical design and programming challenges involved were appreciable, requiring specialist skills and a few weeks' work, but once created we knew that the basic template should be reasonably future-proof. Investing in this approach was in part a recognition that what was being asked of respondents was relatively demanding, and in part a means of obtaining reassurance around data quality. Additionally, both Roche and First Line agreed that introducing the approach was a way of testing whether a sustainable, non-monetary quid pro quo could work for this type of survey. This seemed to have relevance beyond issues of response rates, especially given the pharmaceutical industry's increasing vigilance around ABPI guidance on physician remuneration. The approach was applied to two Roche oncology brand trackers in 2014, with non-overlapping respondent samples.

Our preference was to assess the approach using behavioural measures rather than asking for respondents’ opinion of it at the end of the survey. Data on the actual use of the print-out and click-to-correct functions were of course automatically captured electronically as part of the survey data, and simple analysis of the popularity of each new facility would tell us what we needed to know. Asking respondents a “what did you think…?” type question seemed likely to be an unnecessary additional use of their time, and we didn’t want to risk undermining any good work that might have been done!
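A sketch of what that behavioural capture might look like is shown below; the event structure is hypothetical, not our platform's actual schema.

```typescript
// Hypothetical sketch: every use of the new facilities is appended to the
// respondent's survey record, so uptake can be analysed later without
// asking an extra question.

type FacilityEvent = {
  respondentId: string;
  facility: "print" | "clickToCorrect";
  summaryScreenId: string;
  timestamp: string; // ISO 8601
};

const eventLog: FacilityEvent[] = [];

function recordFacilityUse(
  respondentId: string,
  facility: FacilityEvent["facility"],
  summaryScreenId: string
): void {
  eventLog.push({
    respondentId,
    facility,
    summaryScreenId,
    timestamp: new Date().toISOString(),
  });
}
```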

In oncology tracker 1:

  • 3 visual summaries presented
  • 16 / 75 respondents used the print facility (21%), for a total of 24 print occasions
  • 4 / 75 respondents clicked-to-correct, for a total of 6 click-to-correct occasions

In oncology tracker 2:

  • 4 visual summaries presented
  • 8 / 60 respondents used the print facility (13%), for a total of 14 print occasions
  • 2 / 60 respondents clicked-to-correct, for a total of 2 click-to-correct occasions

We were very pleased with these results. They show that a significant minority of respondents found the summaries useful enough to print out for reference, and the low number of respondents clicking-to-correct gave reassurance with regard to the current method of data capture. From a quality control perspective we wouldn't necessarily expect respondents to make many mistakes when entering data about their own recent prescribing (especially given that real-time validation / error-checking is in place to prevent mathematically and/or logically impossible answers), so picking up even a small number of corrections represents an improvement in the accuracy of the data set as a whole. From a technical perspective the visual summaries worked as intended throughout, and First Line did not receive any error messages or respondent complaints relating to them. An analysis of completion times versus earlier waves of each tracker suggests that these visual summaries added about one minute to the total survey duration, or roughly 20 seconds each on average.
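For reference, the uptake rates quoted above can be reproduced from the captured figures in a few lines; the data structure is illustrative, and the numbers are as reported.

```typescript
// Minimal sketch of the uptake analysis behind the figures above.

interface TrackerStats {
  name: string;
  sampleSize: number;
  printUsers: number;
  correctUsers: number;
}

const trackers: TrackerStats[] = [
  { name: "Oncology tracker 1", sampleSize: 75, printUsers: 16, correctUsers: 4 },
  { name: "Oncology tracker 2", sampleSize: 60, printUsers: 8, correctUsers: 2 },
];

for (const t of trackers) {
  const printRate = Math.round((100 * t.printUsers) / t.sampleSize);     // 21%, 13%
  const correctRate = Math.round((100 * t.correctUsers) / t.sampleSize); // 5%, 3%
  console.log(`${t.name}: print ${printRate}%, click-to-correct ${correctRate}%`);
}
```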

Part of the difficulty when seeking to improve respondent experience is knowing which types of non-monetary incentives work best. Roche and First Line both concluded that the experience of using this approach delivered a mutual respondent and client benefit, independent of monetary remuneration. The implementation of these visual summaries cemented Roche's confidence in current data capture, and having in place a real-time facility for respondents to quickly review and/or correct sections of their data entry is reassuring, especially when rarely used! The summaries also worked to give back something of value to interested respondents. Roche and First Line agreed, on the strength of these results, to continue with the approach. There will be opportunities to refine things along the way, and possibly to add functionality. We believe several other advantages, more subtle and less quantifiable, were also gained:

  • The visual summary screens and opportunity to review / correct may, in themselves, demonstrate to respondents that the researcher actively values their responses, thus positively affecting engagement.
  • The visual summaries provide a natural short pause between sections, guarding against the danger that respondents conflate one similar-looking area of data collection with another, and providing some visual stimulus in what can be lengthy numeric entry sections.

This novel approach showed that surveys we might think of as relatively routine create unique respondent-generated content that has value to participants when visualised, summarised and made accessible. Furthermore, the creation of such visual summaries gives us an opportunity to put in place another useful, non-intrusive data quality measure that helps reassure us regarding data accuracy. We cannot possibly, on the basis of these examples alone, predict the longer-term impact of what we have done on the wider response rate problem, but it certainly feels to both parties like we have just conducted a successful trial!

