In my role as a client-side researcher, I am blessed with an unbelievably large global customer database, one numbering in the tens of millions of people. Not email addresses, but real people whom I know exist in the real world by virtue of their purchase histories, credit card information, and behavioral metrics on our various online services. Because of this blessing, over 90% of the survey work I field is CRM-driven.
But there’s still that other 10%. And while I have a reasonable degree of confidence about what the sources of error and bias are in my CRM-based sampling efforts, my trust in panel recruiting erodes more and more every month. Consider, for example, a recent vendor selection experience:
- I decide to run a study in Country A, Country B, and Country C. I request bids from Company X, Company Y, and Company Z.
- I receive bids for $25 per complete, $20 per complete, and $6 per complete.
- I contact the local office for Company X in Country A and get an additional quote of $12 per complete.
- I tell Company Y that their $25 per complete bid has been beaten by a significant margin and get a “new” bid at $15 per complete.
Names obscured to protect the innocent. But nobody was “innocent” in this exchange, because from a distance it becomes obvious that the “value” of a completed survey is completely arbitrary and driven not by data quality or service quality but by a desire to win the bid.
Worse yet, I question whether that completed survey is even worth $6 to begin with. As an experiment, I joined one of the name-brand panels as a “panelist” under a pseudonym a little over a month ago, completing only as much of the registration process as was necessary to qualify for surveys. (Don’t worry, I haven’t polluted any of your actual work with fake responses. But I’ll come back to that in a moment.) Between August 3 and September 10, I received 25 survey invitations: roughly one every day and a half, or more than four per week. The panel’s own frequency-of-contact guidelines explicitly promise no more than one invitation every two days and no more than 12 per month.
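For anyone who wants to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The dates and invitation count are the ones from my experiment; the year is arbitrary (it only anchors the date math), and the limits are the ones the panel itself publishes.

```python
from datetime import date

# Compare my observed invitation volume against the panel's stated contact limits.
first_invite = date(2010, 8, 3)    # year is arbitrary; only the span matters
last_invite = date(2010, 9, 10)
invitations = 25

window_days = (last_invite - first_invite).days + 1    # 39-day window
per_week = invitations / (window_days / 7)              # ~4.5 invitations per week
days_between = window_days / invitations                # ~1.6 days between invitations
per_30_days = invitations / (window_days / 30)          # ~19 invitations per month

# The panel's own frequency-of-contact guidelines.
MAX_PER_MONTH = 12      # "no more than 12 per month"
MIN_DAYS_BETWEEN = 2    # "no more than one invitation every two days"

print(f"{per_week:.1f} invitations/week, one every {days_between:.1f} days")
print(f"Monthly pace of {per_30_days:.0f} vs. a stated cap of {MAX_PER_MONTH}")
print(f"Spacing of {days_between:.1f} days vs. a stated minimum of {MIN_DAYS_BETWEEN}")
```

However you slice it, both limits were comfortably exceeded over that window.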
“Oh, but it can’t be that bad! Most panelists are legitimate.” Let’s assume for a moment that this hypothesis is correct, and that panelists are recruited through completely legitimate efforts. For example, perhaps they were on Google and searched for “surveys” (see right).
Hmm. (And by the way: I’ve never been offered $20 to complete a panel survey. Which panel do I need to join?)
Creating a fake panelist account is fast and painless, and identity verification on the Internet is nearly impossible. But it’s not just respondents committing fraud in this process; the panels themselves are complicit. The lack of transparency in downstream processes invites opportunism, if not straight-up rule-breaking.
Consider: What’s the difference between a $6 respondent and a $25 respondent? Is the $25 respondent substantially better in terms of recruiting practices, data quality, and policy integrity? Or is the $25 respondent simply a $6 respondent purchased from another source and marked up? And how can you tell the difference?
Answer: You can only tell the difference in quality if you are told which panels are being used and how those panels manage their databases. I haven’t found full-service research agencies terribly eager to share that sort of information, because:
- it allows me (with a little bit of sleuthing) to determine the profit margin between the sample cost and the delivered work (a rough illustration follows this list), and
- the agency harbors fears of being disintermediated (there’s that word again!).
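To make the first point concrete, here is a hypothetical sketch of that sleuthing. The $6 and $15 per-complete figures echo the bid range quoted earlier in this post; the 500-complete study size is an assumption chosen purely for illustration, not a figure from any real invoice.

```python
# Hypothetical mark-up arithmetic: known panel price vs. the agency's billed price.
completes = 500                   # assumed study size (illustration only)
panel_cost_per_complete = 6.0     # what the downstream panel actually charges
billed_per_complete = 15.0        # what the agency bills the client for sample

panel_cost = completes * panel_cost_per_complete   # $3,000
billed = completes * billed_per_complete           # $7,500
markup = billed - panel_cost                       # $4,500 margin on sample alone
markup_pct = markup / panel_cost * 100             # 150% mark-up

print(f"Sample cost ${panel_cost:,.0f}, billed ${billed:,.0f}, "
      f"mark-up ${markup:,.0f} ({markup_pct:.0f}%)")
```

Knowing the source makes that margin visible; not knowing it leaves the client guessing.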
So what’s a client to do? I have three rules for myself:
- Always know the source of the sample.
- Never communicate “statistical margin of error” to internal clients on panel-based surveys.
- Stay close to the source. Don’t tolerate unreasonable mark-up on panel data, particularly when it’s known to be a pass-through cost from a downstream supplier.