BOBI Awards 2014: FINALIST - Most Innovative Approach

What is it?

An industry award for the successful implementation of an innovative approach in any area of business intelligence. The focus of this award is:

  • Development of a new approach to meet the business need
  • Generation of added insights that wouldn’t have been revealed using a traditional approach
  • Clear evidence of tangible positive impact on the UK client business, patients and/or the NHS

Our entry

Quick click, no drag! A behavioural approach to the testing of promotional material

Takeda needed to test promotional materials prior to launching into a sensitive therapy area in which they had no heritage. With the potential for muted and misdirected feedback, First Line proposed a suite of interlocking non-introspective techniques, inspired by behavioural science and neuroscience. Passive and unguided data capture was preferred to deliberative questioning; monadic testing was preferred to comparative; and key objectives were answered by triangulating multiple data points. Takeda’s expectations were exceeded, with findings offering rich insight into prescribers’ emotional, non-conscious response to materials. Final executions were selected with confidence, supported by multi-dimensional, behaviour-based evidence.

Given this lack of heritage in the therapy area, Takeda’s marketing campaign needed to establish the company as a serious partner, prepared to invest. It is a therapy area inundated with brand messages and imagery, and known for generating awkwardness and embarrassment in both clinical consultation and research! Usually at Takeda there would be high-level Global support for such a launch, including provision of a thoroughly developed marketing campaign. However, this is a UK-only launch, and all branding and marketing responsibility sits with the UK team. Whilst this meant greater opportunity to innovate, budget and human resources were limited, making it imperative to get things right first time!

Campaign material was well developed at the outset of this project but had not been tested. Takeda’s experience of traditional promotional materials research suggested that it often involved multiple rounds, delivered conflicting feedback, and left a lack of clarity over which executions to choose and what to revise. For all these reasons Takeda were keen to go with a new approach, one that relied less on clinicians’ considered opinions and more on the capture of intuitive, non-introspective, observed, freely expressed responses. Behavioural science and neuroscience tell us that this type of feedback is likely to have greater authenticity and predictive power when judging response to promotional materials.

The objectives were typical of late-stage promotional research. The difference was in how they were met.

The materials on test were a Detail Aid; a comprehensive set of brand messages across four clinical categories; and a set of four alternative Ad executions.  All materials were at an advanced stage of development, and research objectives were as follows:

  • Capture essential contextual information (i.e. role, demographics, caseload, prescribing behaviours)
  • Capture current perceptions of subject and competitor brand positioning (in a clinical sense)*
  • Detail Aid: provide quantitative feedback on performance, with qualitative feedback on page detail
  • Key messages: refine from 20+ messages across four categories, enabling rejections – and tweaks
  • Ads: Assess the impact of each Ad, providing evidence for a decision on which execution to run with.
  • Re-check perceptions of subject and competitor brand positioning, to gain a sense of the overall credibility and persuasiveness of the materials as a whole.*

Takeda were impressed by the case for incorporating techniques inspired by behavioural science and neuroscience, but – as ever when introducing methods that differ somewhat from the norm – an appreciable amount of work needed to be done internally to persuade senior management that the approaches proposed would indeed ‘deliver the goods’!

NB – The final sample was n=200, comprising n=129 GPs, n=38 Nurses, n=33 Hospital specialists

A behavioural approach meant being bold with methodology.

  • Non-introspective / implicit measures led the way. As a guiding principle we wanted to avoid forced introspection. Studies from psychology and neuroscience teach us that we are often unreliable witnesses to our own motivations, unable to articulate ‘why’ we choose as we do. Conventional promotional materials testing techniques rely heavily on introspection and thereby muddy the waters, as respondents reflect on their experience and preferences. Behavioural thinking suggests that, in this context, we should minimise (or even eradicate entirely) the amount of thinking respondents are asked to do. When the subject area is sensitive we have all the more reason to bypass a ‘considered’ reply.
  • Triangulation: No research can simply ignore the problematic “why?” question, and in this project we decided that a fuller understanding of the “what”, “how”, and “when” aspects of respondent behaviour would be more useful than drilling down for perceived reasons. Indeed, there were zero “why do you say that?” style questions in our questionnaire! Takeda and First Line agreed instead that triangulation in design would give us a strong, multi-dimensional evidence base. By triangulation we mean the deliberate arrangement of several techniques or questions in pursuit of a single objective. For example, when understanding individual Detail Aid page performance we drew on three types of information: 1) passively collected browsing times, 2) unguided page area selection, and 3) non-structured qualitative commentary (see the data-combination sketch after this list). For other objectives the components of triangulation were different. We acknowledge that retrospective cross-analysis is always possible, but in this project we built triangulation into the design, i.e. in advance.
  • Monadic testing: Monadic testing was our preference because of its essential similarity to how we view promotional material in practice, i.e. once and on its own, rather than critiqued side by side with material for competing products. We also know that research results differ depending purely on whether an evaluation was conducted monadically or comparatively. Typically, comparative evaluation demands higher levels of cognitive effort and introspection which, in a setting like this where we know attention is fleeting, would likely take us further away from an understanding of performance. Whilst it may feel uncomfortable to minimise side-by-side comparisons with competitors, Takeda agreed that greater store should be set by the advantages of monadic testing, and invested in the sample size and methodology to enable it.
  • Naturalistic / passive / unguided tasks. Promotional materials research is inevitably restricted by its artificial nature – we can never wholly place ourselves in the moment of response and/or decision, and so we find other ways to make our research authentic. This project applied the following tenets, each of which is evidenced in this submission and supporting materials:
    • Simplicity: Make things as easy as possible for the respondent to understand and do
    • Freedom: Reduce structure and increase respondent independence / freedom of expression (especially relevant, we thought, given the nature of the therapy area).
  • True random allocation (rather than by quota). Very often in medical market research a quota-based approach to sample splitting is employed – perhaps as insurance against later criticism if ‘random’ allocation happens not to work out as hoped. The strong contention of all researchers on this project was that ‘random’ trumps quota-based allocation:
    • Safe: randomness takes human interference out of the equation!
    • Proven: randomness may produce uneven-sized allocations, but statisticians know it is the fairest way to achieve the best possible realism.
    • Flexible: randomness is not sensitive to the number of executions on test.
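To make the triangulation principle concrete, here is a minimal Python sketch (referenced in the Triangulation bullet above) of how the three per-page signals might be lined up in a single table before interpretation. All page numbers, dwell times, selection counts and comments are invented for illustration; this is a sketch of the idea, not the project’s actual analysis code.

```python
# Illustrative sketch only: combine three independently captured signals for
# each Detail Aid page so that one objective (page performance) is read from
# several data points at once, rather than from a single "why?" question.

# Hypothetical raw inputs; in the study these came from passive timing,
# the unguided click-and-drag exercise, and free-text annotation.
dwell_seconds = {1: 18.2, 2: 9.5, 3: 7.1}      # page -> mean seconds viewed
area_selections = {1: 14, 2: 31, 3: 6}         # page -> number of areas selected
comments = {
    1: ["clear opening message"],
    2: ["too dense", "chart hard to read"],
    3: [],
}                                              # page -> free-text comments

triangulated = {}
for page in sorted(set(dwell_seconds) | set(area_selections) | set(comments)):
    triangulated[page] = {
        "dwell_seconds": dwell_seconds.get(page),
        "n_selections": area_selections.get(page, 0),
        "n_comments": len(comments.get(page, [])),
    }

for page, signals in triangulated.items():
    print(page, signals)
```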

Detail Aid testing was unguided:
Firstly, respondents were asked to browse the Detail Aid (via an online “page-turning” journal) at a pace similar to what they would expect if reviewing it with a Representative. Time spent on each page was collected, passively, in the background. Respondents were then asked to review each page separately, freely selecting and annotating any areas on which they wanted to comment (using an intuitive ‘click and drag’ tool). They were given an entirely free hand in this; no direction was offered other than instruction on how to make selections / add comments. Respondents could make as many or as few selections / annotations as they wished. This approach was also used in the Ad testing.
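As a rough illustration of the passive timing element, the sketch below turns a stream of page-turn timestamps into per-page dwell times. The event structure and field names are assumptions made for the purpose of the example, not the journal platform’s actual output.

```python
# Minimal sketch: derive per-page dwell times from a stream of page-turn
# events logged in the background by an online "page-turning" journal.
# Field names and values are hypothetical.
page_turn_events = [
    {"respondent": "R001", "page": 1, "timestamp": 0.0},
    {"respondent": "R001", "page": 2, "timestamp": 21.4},
    {"respondent": "R001", "page": 3, "timestamp": 33.9},
    {"respondent": "R001", "page": None, "timestamp": 41.0},  # journal closed
]

dwell = {}
for current, nxt in zip(page_turn_events, page_turn_events[1:]):
    if current["page"] is not None:
        # time on a page = gap between arriving on it and turning to the next
        dwell[current["page"]] = nxt["timestamp"] - current["timestamp"]

print(dwell)  # e.g. page 1 was viewed for 21.4 seconds before the first turn
```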



Message testing used an adaptation of a classic scale:
There were 20+ messages to test, across four categories. Rather than swamp respondents, the sample was split so each respondent was randomly allocated half the messages in a category. Each message was tested for perceived credibility, persuasiveness and impact on an adapted “Juster” scale (to capture preference rather than probability).
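A minimal sketch of that splitting logic, assuming placeholder message labels and an 11-point preference scale: each simulated respondent is randomly allocated half of the messages in a category, which illustrates the mechanism described above rather than the actual message set.

```python
import random

# Hypothetical message pools; labels are placeholders, not real brand messages.
categories = {
    "category_A": [f"A{i}" for i in range(1, 7)],
    "category_B": [f"B{i}" for i in range(1, 7)],
}

# 11-point Juster-style scale, adapted here to express preference rather than
# purchase probability (anchor wording is illustrative only).
PREFERENCE_SCALE = range(0, 11)   # 0 = "no appeal at all" ... 10 = "extremely appealing"

def allocate_messages(categories, rng):
    """Randomly allocate half the messages in each category to one respondent."""
    return {name: rng.sample(messages, len(messages) // 2)
            for name, messages in categories.items()}

rng = random.Random(42)           # seeded only so the example is repeatable
print(allocate_messages(categories, rng))
```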

Ad testing was multi-faceted:
Respondents were allocated at random into one of four groups, with each group allocated one of the subject brand’s four Ad executions, also at random. In addition, respondents were shown four competitor Ads and a ‘control’ Ad (from a related therapy area) to make up a set of six.  Respondents browsed all six Ads in randomised order via an online ‘journal’.  This is a familiar approach, but was directed with a very light touch and, again, with time spent on each page collected passively in the background. Immediately following the journal, we showed respondents all six Ads side by side and asked, simply, which was their favourite. We wanted any comparative approach to be as effortless as possible, no critiques! Respondents were then able to freely select / annotate areas on the subject brand Ad in a self-directed way (exactly as described for the Detail Aid, above). Finally, respondents were shown two subject brand Ad executions that they had not yet seen. These appeared side by side on screen for two seconds only, with respondents asked for their preference. This has been coined the “blink test” and gauges strength of first impression.
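The randomisation steps in the Ad testing can be sketched as follows. The Ad labels, control name and helper function are hypothetical; the sketch only shows how one respondent’s tasks might be assembled under true random allocation, with a randomised journal order and a blink-test pair drawn from the executions not yet seen.

```python
import random

rng = random.Random(7)   # seeded only so the example is reproducible

subject_ads = ["subject_A", "subject_B", "subject_C", "subject_D"]   # placeholder labels
competitor_ads = ["comp_1", "comp_2", "comp_3", "comp_4"]
control_ad = "control_related_area"

def build_ad_tasks(respondent_id):
    """Assemble the journal set, browsing order and blink-test pair for one respondent."""
    shown_subject = rng.choice(subject_ads)          # true random allocation, no quotas
    journal_set = [shown_subject] + competitor_ads + [control_ad]
    rng.shuffle(journal_set)                         # randomised browsing order (set of six)
    unseen = [ad for ad in subject_ads if ad != shown_subject]
    blink_pair = rng.sample(unseen, 2)               # two executions not yet seen, shown
                                                     # side by side for ~2 seconds
    return {"respondent": respondent_id,
            "journal_order": journal_set,
            "blink_pair": blink_pair}

print(build_ad_tasks("R001"))
```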


 

Selected findings:
The Detail Aid started well – respondents spent longer per page and responded well to the opening messages. However, in a run of one particular category of messages that followed, time spent per page deteriorated along with Juster scale message scoring. Times and scoring improved towards the end of the piece, and from the unguided selections / annotations we understood what lay behind this localised, dragging, negative effect. Triangulated feedback allowed us to speculate with confidence on what needed changing, where, and why (without having to ask)!

Passively capturing time taken to view Ads in the journal revealed significant differences between competing brands, although not between different executions of the subject brand. Importantly, when results for ‘favourite’ were introduced we noted a lack of correlation between the two variables, suggesting that “favouritism” (comparative) had little to do with independent duration of viewing (monadic). The density and location of clusters of free selections/annotations provided a natural means of weighting strength and depth of feeling. We also perceived a certain extra freedom of expression, which we attribute at least in part to the unguided methodology. First impressions data from the “blink test” confirmed what was coming through from other sources, and allowed us to identify a clear ‘winner’.
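The ‘lack of correlation’ observation can be illustrated with a simple point-biserial check between viewing time (monadic) and the favourite flag (comparative). All figures below are simulated, so the output only demonstrates the calculation, not the study data.

```python
import random
from statistics import mean

random.seed(1)

# Simulated per-Ad, per-respondent records: passive viewing time (monadic)
# and whether the Ad was later picked as favourite in the side-by-side task.
records = [{"view_seconds": random.uniform(3, 15),
            "favourite": random.random() < 0.2} for _ in range(200)]

def point_biserial(records):
    """Point-biserial correlation between the binary favourite flag and viewing time."""
    times = [r["view_seconds"] for r in records]
    overall_mean = mean(times)
    sd = (sum((t - overall_mean) ** 2 for t in times) / len(times)) ** 0.5
    fav = [r["view_seconds"] for r in records if r["favourite"]]
    not_fav = [r["view_seconds"] for r in records if not r["favourite"]]
    p = len(fav) / len(records)
    return (mean(fav) - mean(not_fav)) / sd * (p * (1 - p)) ** 0.5

# Expected to be near zero here because the simulated variables are independent.
print(round(point_biserial(records), 3))
```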

 

We identified clear ‘winners’ in each category of promotional material on test.
First Line made clear, evidence-based recommendations on how the Detail Aid flowed as an entire piece, highlighting areas of engagement as well as drag, and identifying specific areas of concern and/or enthusiasm. The research identified the five top-performing messages, cross-referencing them with data from the Detail Aid testing to ensure that message performance was understood in context. Finally, we recommended one Ad execution that stood out, interpreting the evidence to make macro-level suggestions on look and feel as well as micro-level suggestions on the fine detail.

The brand’s Marketing Director commented…

“By incorporating multiple techniques and utilising innovative methodology, a comprehensive and holistic picture was obtained that gave me clear feedback on the impact, credibility and overall standard of the campaign.”

The research produced a coherent body of behavioural data that allowed Takeda to assess the performance of their promotional materials easily, clearly and with great confidence. Investment in a large sample, and a preference for monadic testing and random allocation, paid off, ensuring that results stood up under scrutiny and ultimately removing the prospect of further rounds of testing. In Takeda’s opinion the non-introspective techniques employed more than ‘delivered the goods’ – findings from these areas were what made this research special. We really did not miss the “why?” questions! Leaving respondents to their own devices in the unguided sections resulted in a surprisingly large amount of high-quality information which, being naturally self-weighting, helped researchers steer decision-makers away from obvious distractions (i.e. those attention-grabbing but isolated verbatim comments that Marketers love!).

This project – with its alternative, innovative, mixed methodology – gave Takeda an intense and complete reportage on how promotional materials were consumed by customers. Researchers from both companies feel that such valuable data would almost certainly not have broken the surface had we relied on conventional testing techniques alone, especially given the therapy area context.

