Beware Thought Leadership Surveys that are Worse than Useless

“Never believe anything you read on the Internet.” – Abraham Lincoln

Honest Abe never spoke a truer word – even though he didn’t say anything of the sort, of course. And while we can immediately tell this familiar Facebook saying is merely a joke, there are many documents on the Internet that have such a veneer of respectability that we’re tempted to take them at their word – even though they may be just as full of nonsense as the presidential proclamation above.

Among the worst offenders are survey reports, especially when they are surveys of professionals about emerging technologies or approaches. Fortunately, it’s possible to see through the bluster, if you know the difference between a good survey and a bad one. Forewarned is forearmed, as the saying goes – even though Lincoln probably didn’t say that.

The Basics of a Good Survey

The core notion of a survey is that a reputable firm asks questions of a group of people who represent a larger population. If the surveyed group accurately represents the larger population, the answers are truthful, and the questions are good, then the results are likely to be relatively accurate (although statistical error is always a factor). Unfortunately, each of these criteria presents an opportunity for problems. Here are a few things to look for.
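To put a rough number on that statistical error: even a perfectly random sample carries a margin of error that depends on its size. Here is a minimal Python sketch (the figures are purely illustrative, not drawn from any survey discussed here) of the standard 95% margin-of-error calculation for a yes/no question:

import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """95% margin of error for a yes/no question, assuming a simple random sample."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# Even a perfectly random 100-person sample carries roughly a
# +/- 10 percentage point margin of error on any yes/no question.
print(f"{margin_of_error(100):.1%}")   # ~9.8%
print(f"{margin_of_error(1000):.1%}")  # ~3.1%

In other words, a small sample limits precision even before any of the problems below come into play.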

Does the sample group represent the larger population? The key here is that the sample group must be selected randomly from the population, and any deviation from randomness must be compensated for in the analysis. Ensuring randomness, however, is quite difficult, since respondents may or may not want to participate, or may or may not be easy to find or identify.

Here’s how reputable political pollsters handle deviations from randomness. First, they have existing demographic data about the population in question (say, voters in a county). Based on census data, they know what percentage is male and female, what percentage is registered Democrat or Republican, what the age distribution of the population is, etc. Then they select, say, 100 telephone numbers at random in the county and call each of them. Some calls go to voicemail or go unanswered, and many people who do answer refuse to participate. Those who do participate are asked demographic questions as well as the questions the survey is actually interested in. If the pollsters find, say, that 50% of voters in the county are female but 65% of respondents are female, they have to adjust the results accordingly. Making such adjustments for all factors – including who has phones, which numbers are mobile, etc. – is complex and error-prone, but it’s the best they can do to get the most accurate result possible.
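To make that adjustment concrete, here is a minimal, one-variable sketch of the kind of reweighting pollsters do, using the hypothetical figures above (real pollsters weight on many variables at once, which is where the complexity and error-proneness come in):

# The county is 50% female, but 65% of respondents were female.
# Weight each group by (population share / sample share), so over-represented
# groups count less and under-represented groups count more.
population_share = {"female": 0.50, "male": 0.50}
sample_share = {"female": 0.65, "male": 0.35}
weights = {g: population_share[g] / sample_share[g] for g in population_share}
# weights -> {"female": ~0.77, "male": ~1.43}

# Suppose 70% of female respondents and 40% of male respondents answered "yes".
raw_yes = 0.65 * 0.70 + 0.35 * 0.40                                       # 59.5%
weighted_yes = 0.65 * weights["female"] * 0.70 + 0.35 * weights["male"] * 0.40
print(f"unweighted: {raw_yes:.1%}, weighted: {weighted_yes:.1%}")          # 55.0%

The weighted figure is the pollster's best estimate of what the whole county would say, not just of what the people who happened to pick up the phone said.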

Compare that political polling selection process to how, say, Digital Transformation, Big Data, or Cloud Computing adoption surveys assemble their populations. Perhaps the survey company emails its mailing list and asks for volunteers. Maybe it’s a Web page or a document handed out at a conference. Or, worst of all, perhaps survey participants are hand-selected by the sponsor of the survey. None of these methods produces a sample that’s even close to random. The result? The survey cannot be expected to represent the opinions of any population other than the participants themselves.

Are the answers truthful? I’m willing to posit that people are generally honest folks, so the real question here is, what motivations would people have not to be completely honest on a survey? For emerging technologies and approaches the honesty question is especially important, because people like to think they’re adopting some new buzzword, even if they’re not. Furthermore, people like to think they understand a new buzzword, even if they don’t. People also tend to exaggerate their adoption: they may say they’re “advanced Cloud adopters” when they simply use online email, for example. Finally, executives may have different responses than people in the trenches. CIOs are more likely to say they’re doing DevOps than developers in the same organization, for example.

Are the questions good? This criterion is the most subtle, as the answer largely amounts to a matter of opinion. If the surveying company or the sponsor thinks the questions are good, then aren’t they? Perhaps, but the real question here is one of ulterior motives. Is the sponsor looking for the survey to achieve a particular result, and thus is influencing the questions accordingly? Were certain questions thrown out after responses were received, because those answers didn’t make the surveying company or sponsor happy? If scientific researchers were to exclude certain questions because they didn’t like the results, they’d get fired and blacklisted. Unfortunately, there are no such punishments in the world of business surveys.

So, How Do You Tell?

I always recommend taking surveys with a large grain of salt regardless, but the best way to get a sense of the quality of a survey is to look at the methodology section. The survey you’re wondering about doesn’t have a methodology section, you say? Well, it might be good for wrapping fish, but not much else, since every survey report should have one.

Even if it has one, take a look at it with a critical eye, not just for what it says, but for what it doesn’t say. Then, if some critical bit of information is missing, assume the worst. For example, here is the entire methodology section from a recent Forrester Research “Thought Leadership Paper” survey on Business Transformation commissioned by Tata Consultancy Services (TCS):

In this study, Forrester interviewed 10 business transformation leaders and conducted an online survey of 100 US and UK decision-makers with significant involvement in business transformation projects. Survey participants included Director+ decision-makers in IT and line of business. Questions provided to the participants asked about their goals, metrics, and best practices around business transformation projects. Respondents were offered an incentive as a thank you for time spent on the survey. The study began in February 2014 and was completed in May 2014.

How did Forrester ensure the randomness of their survey sample? They didn’t. Is there any reason to believe the survey sample accurately represents a larger population? Nope. How did they select the people they surveyed? It doesn’t say, except to point out that they have significant involvement in business transformation projects. So if we assume the worst, we should assume the respondents were hand-selected by the sponsor. Does the report provide an analysis of the answers to every question asked? It doesn’t say. The methodology statement does point out that respondents were offered an incentive for participating, however. That disclosure suggests Forrester is a reputable firm, to be sure, but it doesn’t say much for the accuracy or usefulness of the report’s results.

So, what should a business survey report methodology look like? Take a look at this one from the International Finance Corporation (IFC), a member of the World Bank Group. The difference is clear. Consider yourself forewarned!
