Missing the target? On the extent and consequences of poor participant profiling

This is a specialist blog, but hopefully of interest to colleagues in healthcare business intelligence who are keen to understand more about why healthcare professionals (HCPs) are increasingly disillusioned with the idea of participating in market research.

The disillusionment is multi-faceted, but we know that a central problem is the negative emotion generated by failure to qualify through the often lengthy ‘screener’ that prefaces a questionnaire. Online research is the main culprit here – it is the most common method of conducting quantitative research amongst HCPs and therefore by far the most likely interface between an HCP and market research. Unfortunately, it is often less interface and more in-their-face.

An industry report, and our own project meta-data, suggest that a big part of the problem is inadequate profiling of HCPs who are signed up to panel companies, which in turn produces inadequate targeting, resulting in lots of screen-outs. Whilst it makes sense that HCP profiles will have gaps, either because certain information changes over time or because HCPs do not want to spend ages filling out forms, there are some basic facts I’d expect to be there, and the first of those is main clinical specialty.

By main clinical specialty, I mean the type of HCP they are, for example: Neurologist; Immunologist; GP; Oncologist; Dermatologist; Primary Care Nurse; and so on. We are not talking about specific job titles or sub-specialisms here, but their over-arching category. If that is not known, then it is impossible to target the right people.

There are many other qualifying questions that might be asked during a screener, but we focused on main specialty because it is asked in every screener, and because most other questions are either of second-order importance, asked primarily for quota-control purposes, or cannot reasonably be expected to form part of a profile. A few examples: the grade of a secondary care doctor will change and should not make or break their relevance to the objectives if they otherwise qualify; it would be important to ensure a sample of GPs is gender balanced, but you would not screen out based on gender; and individual patient caseload is something we’d expect to discover, rather than know in advance.

We reviewed data from the screening sections of the 15 most recent online surveys First Line has run amongst UK HCPs (all in 2018), on behalf of various clients.

  • First, we removed respondents who only partially completed, or who were rejected because quotas were already full (another issue, for another blog)
  • We then removed respondents who screened out at questions other than main specialty
  • Leaving us with completes, plus those who screened out at the main specialty question
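
To make those three steps concrete, here is a minimal sketch in Python; the record structure and the status labels are hypothetical illustrations, not a description of any panel company’s actual data export.

```python
# Hypothetical respondent records from one survey's screening section.
# "status" and "screen_out_question" are illustrative field names only.
respondents = [
    {"status": "complete", "screen_out_question": None},
    {"status": "partial", "screen_out_question": None},
    {"status": "quota_full", "screen_out_question": None},
    {"status": "screened_out", "screen_out_question": "main_specialty"},
    {"status": "screened_out", "screen_out_question": "patient_caseload"},
]

# Step 1: remove partial completes and over-quota rejections
in_scope = [r for r in respondents if r["status"] not in ("partial", "quota_full")]

# Step 2: remove respondents screened out at questions other than main specialty
in_scope = [
    r
    for r in in_scope
    if r["status"] == "complete" or r["screen_out_question"] == "main_specialty"
]

# Step 3: what remains is completes plus main-specialty screen-outs
completes = sum(r["status"] == "complete" for r in in_scope)
specialty_screen_outs = len(in_scope) - completes
print(completes, specialty_screen_outs)  # 1 complete, 1 main-specialty screen-out
```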

Across the 15 projects, the overall screen-out rate for ‘failing’ the main specialty question was 23% (n=1104 completes vs. n=338 screening out at that question; 338 out of 1,442 who reached it). In other words, for roughly every three respondents who complete, one is screened out because their specialty is not represented. On its own I find that shocking, but the underlying picture is more nuanced.

In delivering those 15 surveys we partnered with six different fieldwork / panel companies – see below:

Panel company ID | # completes achieved | # screening out at main specialty | Screen-out rate for failing main specialty
Total            | 1104                 | 338                               | 23%
1                | 27                   | 7                                 | 21%
2                | 664                  | 40                                | 6%
3                | 325                  | 106                               | 25%
4                | 24                   | 2                                 | 8%
5                | 33                   | 178                               | 84%
6                | 31                   | 5                                 | 14%
Supplier #2, who handled most of the fieldwork, had a screen-out rate of only 6% at the main specialty question – certainly acceptable. Supplier #3, who ran enough fieldwork for us to be able to fairly judge their performance, did much worse – a screen-out rate of 25%. Suppliers #1, #4, and #6 all had base sizes too low for robust analysis, but Supplier #5 is a special case and shows what horrors can be unleashed by poor fieldwork management… On the one project they handled, they managed to dispatch almost entirely to the wrong type of clinician, producing n=178 screen-outs at the main specialty question on their way to achieving the n=33 completes we needed! In other words, n=178 clinicians who were prepared to help with market research were rejected because the specialty list at the very first question did not include their specialty. How many will be willing to try again?

Aside from errors in sample dispatch, why does this happen? The panel companies argue that profiling is difficult and necessarily inexact, because HCPs can go under a variety of titles that can vary according to individual training, preference, department, hospital, and region. They say that it is sometimes possible that a clinician is relevant to the study objectives even if they don’t recognise their main specialty at question one. Whilst I agree there is something in that, I do not think it happens to the extent that it explains the very high screen-out rates in our review. If I am researching biologic use in dermatology settings, then I know for sure that I want Dermatologists and it is hard to imagine that the right people might be classified under some other heading.

That said, I do know of marketers who are ‘certain’ they only want, for example, Haematologists, when in fact other descriptions of main specialty may also fit their objectives (in this case, perhaps types of Oncologist). In the 15 projects we looked at, there were no such instances.

In conclusion, I think there is enough here to say:

  • There is evidence that a lack of profiling on main specialty is an issue for some panel companies and/or for some types of HCP.
  • Poor sample / dispatch management sometimes happens, with very unhappy consequences.
  • Losing every fourth willing HCP because they’re not from the relevant medical specialty is unacceptable, and unsustainable.

Panel companies: is this a fair analysis? Can we share data / pool resources to address the problem?

