Are Online Surveys Biased?

The past few years have seen explosive growth in the use of online surveys. The reasons for this development are obvious. Online surveys cost less to conduct than in-person or phone surveys, response times are faster, and the results are easy to compile and analyze because they are already in a digital format. But no survey method is perfect, and online surveys have been criticized by some as being biased because they collect information only from people who have access to the Internet.

Is Sampling Bias Inevitable?
In fact, most surveys must deal with this type of bias. For example, telephone surveys collect information only from households that have landlines, a shrinking segment of the population. Paper surveys require that respondents have a certain level of literacy. Online surveys have the same requirement, as well as the obvious additional requirement that respondents have Internet access. As Internet access becomes increasingly widespread, this is becoming less of an issue. According to recent estimates, more than 74 percent of people in North America have access to the Internet, and the number is growing steadily. Still, there is no question that Internet users represent a more affluent, better-educated segment of the population.

Online surveys must also deal with the common sampling problem of non-response bias. In most surveys, a certain percentage of those solicited will not respond. Survey administrators must somehow determine if non-respondents skew the survey population in some way.

Survey administrators must also have some means of excluding responses from people outside the target population. Because the Internet is such a wide-open, boundary-less medium, responses to a survey may come from a broader population than the administrator intended.

Finally, survey administrators may have to deal with sampling bias because of the sites they use to solicit responses. For example, the population of Facebook users includes more women and young people than the population of Internet users in general, so it seems likely that these groups would be over-represented in a survey conducted through Facebook.

Removing Sampling Bias
Mineful's software offers a simple but effective way to deal with sampling bias. Post-stratification allows a survey administrator to correct for groups that are over-represented or under-represented in a survey population. Here's how it works.

Survey respondents are divided into homogeneous subgroups (strata). For example, respondents might be divided into the strata male and female. Responses are recorded separately for men and women, and then a sampling fraction is applied to give each group its correct weight in proportion to the target population.

For example, suppose that a survey administrator wanted an equal number of responses from men and women. As it turned out, sixty percent of respondents were men and forty percent were women. The sampling fraction would allow the administrator to take all responses into account, but would give proportionally more weight to the response from each woman. This would allow the survey results to be an accurate reflection of the target population: half men and half women.
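The arithmetic behind this example is simple: each stratum's weight is its target proportion divided by its observed proportion in the sample. Here is a minimal sketch in Python, using made-up satisfaction scores (the data and function name are illustrative, not part of any particular survey product):

```python
from collections import Counter

def poststratify(responses, target_shares):
    """Weighted mean of survey values after post-stratification.

    responses: list of (stratum, value) pairs.
    target_shares: dict mapping stratum -> desired proportion in the target population.
    """
    counts = Counter(stratum for stratum, _ in responses)
    n = len(responses)
    # Each stratum's weight = target proportion / observed proportion.
    weights = {s: target_shares[s] / (counts[s] / n) for s in counts}
    weighted_sum = sum(weights[s] * value for s, value in responses)
    return weighted_sum / n

# Hypothetical data: 60% of respondents are men (all answered 1.0,
# e.g. "satisfied"), 40% are women (all answered 0.0). Target is 50/50.
sample = [("M", 1.0)] * 6 + [("F", 0.0)] * 4
print(poststratify(sample, {"M": 0.5, "F": 0.5}))  # ~0.5, versus a raw mean of 0.6
```

Because women are under-represented (40% observed versus 50% desired), each woman's response receives a weight of 0.5 / 0.4 = 1.25, while each man's receives 0.5 / 0.6 ≈ 0.83, pulling the weighted result back to what a balanced sample would have produced.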

The same method can be applied to correct for imbalances in race, education, age, and other factors. The key to using post-stratification is to identify areas of potential sampling bias and then use survey questions to determine if respondents accurately represent the target population. In the example we used, the survey would ask about gender. Such questions would allow the survey administrator to use post-stratification to reduce the effect of sampling bias.

8 Ways to Test Advertisements

Before you commit thousands or millions of dollars to an ad campaign, you would like to have some reassurance that you will be getting a decent return on your investment. The best way to predict whether a campaign will be effective is to do some upfront advertising evaluation.

In the past, pre-testing ads was a cumbersome, time-consuming process, typically involving focus groups and in-person interviews. Today, online surveys offer a fast, cost-effective alternative to traditional testing methods.

Advertising Evaluation

What should an online survey measure? Most advertisers would agree on these eight parameters:

1. Recognition. When advertisers test recognition, they are just trying to determine whether respondents remember seeing an ad before. For example, a survey might show respondents several ads and ask which ones they remember.

2. Recall. Advertisers use the term recall to describe what a viewer gets out of an ad. For example, a survey might show respondents an ad with the brand name removed and ask what brand the ad is promoting.

3. Attitude and opinion. These questions are meant to determine how respondents feel about a product based on an ad.

4. Comprehension. Questions about comprehension test how well respondents understand an ad. These questions are particularly worthwhile for ad campaigns that rely on allusions or subtle messages.

5. Credibility. These questions are meant to determine what portion of respondents believe the claims made in an ad.

6. Persuasiveness. Questions in this category are meant to determine to what extent respondents are persuaded to adopt a viewpoint promoted in an ad.

7. Buying predisposition. These questions may take a variety of forms. For example: “How likely are you to buy this product in the next month?” “How do you think this product compares to specifically named competitors?” These questions are meant to determine how much an ad encourages participants to take the next step and make a purchase.

8. Ad rating. This is a subjective overall measure of what respondents think about an ad. Do they find it amusing, annoying, aesthetically pleasing?

Depending on the circumstances, advertisers may decide to weigh some of these parameters more heavily than others. For example, if an insurance company is trying to counteract stories about unfair treatment of policy holders, it may value credibility more than other parameters.

Making Sense of Survey Responses

Responses to questions in each of these categories can be useful in themselves, but they become more useful when viewed in relation to each other. For example, which parameter correlates most strongly with buying predisposition? How is comprehension related to recall? A statistical tool called conjoint analysis can help advertisers understand these relationships by showing how specific variables interact.

Conjoint analysis can also show how different features of an ad affect the eight parameters discussed above. For example, respondents might be shown different versions of an ad and asked questions intended to measure recall. The goal is to determine what combination of features in an ad has the strongest positive effect on a given parameter.
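As a first step toward this kind of analysis, a simple correlation check shows how strongly two parameters move together across respondents. The sketch below uses hypothetical 1–5 ratings and a hand-rolled Pearson correlation; it is a precursor to full conjoint analysis, not conjoint analysis itself:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-respondent scores (1-5) for two of the eight parameters.
recall = [2, 4, 3, 5, 1, 4]
buying = [1, 4, 3, 5, 2, 3]
print(round(pearson(recall, buying), 2))  # ~0.86: recall tracks buying predisposition
```

A coefficient near 1 suggests the two parameters rise and fall together in this sample; values near 0 would suggest little relationship.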

Mineful’s software makes advertising evaluation easy to perform, giving advertisers valuable guidance before they invest in major new campaigns.

Satisfaction Scorecards: A Powerful Tool to Track Customer Satisfaction

Customer satisfaction scorecards are becoming an increasingly popular way to make critical information available to a broad range of employees.

A scorecard or dashboard in a management information system serves the same purpose as the dashboard in a car. It displays complex operating data in a way that is easy to read and interpret. Dashboards require no special knowledge of statistics or information technology. They use widely understood presentation methods such as line graphs and bar charts. Their ability to summarize large amounts of data makes them a powerful tool to help managers track customer satisfaction.

A typical customer satisfaction survey asks people to express their opinions about such things as quality, price, and ease of purchase. To be useful, the data generated by such a survey needs to be summarized and interpreted in a way that managers will understand. Dashboards perform this important function. A typical scorecard might track three categories of data on customer satisfaction: Key Indicators, Overall Satisfaction, and Reasons for Dissatisfaction. Instead of trying to analyze responses to 15 or 20 survey questions, a manager can tell at a glance how the company is doing in keeping its customers satisfied.

Slicing and Dicing Data
Dashboards offer simple ways to sort data. For example, a dashboard used by a chain of craft stores might display “helpfulness of sales staff” as a key indicator of customer service. Marketing executives could use this indicator to determine which stores are doing a good job of helping customers and which stores need to provide their staff additional training. This key indicator might also be sorted by customer service representative or by type of product purchased. Marketing executives might discover that some customer service representatives are not doing a satisfactory job, or they might find that customers want more help when shopping for certain types of products.
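Under the hood, this kind of slicing is a group-by-and-average over survey rows. Here is a minimal sketch with invented store names and ratings (the helper and its data are illustrative, not a description of any particular dashboard product):

```python
from collections import defaultdict

def average_by(rows, key_index, value_index):
    """Average the value column of each group, keyed by another column."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key_index]].append(row[value_index])
    return {key: sum(values) / len(values) for key, values in groups.items()}

# Hypothetical rows: (store, 1-5 rating for "helpfulness of sales staff").
rows = [
    ("Store A", 4), ("Store A", 5),
    ("Store B", 2), ("Store B", 3),
    ("Store C", 4), ("Store C", 3),
]
print(average_by(rows, 0, 1))  # Store A: 4.5, Store B: 2.5, Store C: 3.5
```

Swapping the key column for a customer service representative or a product category gives the other slices described above without changing the logic.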

Customer satisfaction scorecards can also show how key indicators are related to overall customer satisfaction. For example, a dashboard might show that “knowledgeable staff” is more directly correlated with overall satisfaction than “ease of purchase.” This information might lead managers to devote more resources to training customer service representatives rather than adding cashiers.

Dashboards can also highlight trouble spots. For example, Mineful’s software can provide a robust analysis of “reasons for dissatisfaction” along with simple displays to identify areas that are most in need of improvement.

Tracking Trends
One of the most useful features of dashboards is their ability to illustrate trends. Businesses typically use dashboards to identify changes from month to month or from one quarter to the next. Is “overall satisfaction” trending up or down so far this year? Are the main reasons for customer dissatisfaction different from what they were a year ago? Which of our stores has made the greatest gains in customer satisfaction since we initiated our new training program? These are the kinds of questions that can be easily answered with dashboards.

Customer Satisfaction Scorecards from Mineful
Mineful’s dashboards enable clients to determine what types of information will be available to different types of employees. For example, a store manager might see data sorted by customer service representative, while a regional manager might see data sorted by store. The key to getting the most out of dashboards is to provide the right information to the right people in a format they can easily understand and use.