A great article here from FiveThirtyEight showing how the results from gold standard research polling in the US (by which they mean mixed-method, probability-sampled) are being copied and incorporated, with a heavy weighting, into non-gold standard research polling.

In other words, lesser pollsters are benefiting from the investment and practice of good pollsters. In today’s open source, big data world this is relatively easy for them to do – at least in published US political polling, which is what they’re talking about here.

In that particular part of our industry, what is in it for the gold-standard pollster? Why should they continue to invest?

The article itself is forensic in its attention to detail and it uncovers a statistical truth that would remain hidden to all but the most interested observer. And that is the real trouble I think – in many areas of research, investment in good practice is difficult to discern on the surface and can be easily ‘aped’ by others. I guess it was ever thus. But it remains a problem nevertheless.

Consider you are in the market for an online survey. You’ve decided it isn’t a DIY job: it is going to be 20 minutes plus and likely to contain some detailed programming instructions and perhaps a trade-off exercise. How can you tell at commissioning that the chosen supplier will be able to deliver all the sampling, programming and procedural things needed for an excellent project? Realistically, only through prior experience with them. A truism, but it is easy to talk about being good at technical things and difficult to actually be good at them.

But, as we know, the trend is towards DIY online surveying and off-the-shelf, pre-paid, river-sourced sample, and many research buyers today will take it on blind faith that they are receiving, or about to receive, a good research service. Without ever really knowing one way or the other.

Similarly with data quality and integrity. It takes a thorough QC process to ensure that data is clean and free from time wasters and serial respondents. But how many people could look at the data they just bought (in the format that they just bought it) and discern how good the QC had been, or even whether it had taken place at all? How many research buyers even get remotely close to their data, to scrutinise it? How many actually care?

I’ll stop there! The underlying point I wanted to make is that it is difficult for good researchers to differentiate their offering in a number of the most critical areas, and at the same time it is easy for poor researchers to talk a good game and get away with it. Now more than ever, with the proliferation of online surveying and the growth in number, size and type of data sources.

Like the gold standard pollsters of the US, might we be tempted to cease investing in good practice when we know that cheaper facsimiles are winning the business? Unsurprisingly, I can’t bring to mind an easy answer to this one, but I suspect good researchers will have to find ever more interesting and succinct ways to introduce clients to some process and management fundamentals, hitherto perceived as the more arid areas of market research!