
Part 1: Survey Planning and Questionnaire Design

April 15, 2021

A three-part series on practical survey considerations with guidance from Gregg Schoenfeld, survey guru and founder of MNoet, a boutique research consultancy.

Surveys remain a primary way for organizations to quantify the world. Although use cases vary (e.g., understanding why people like to travel or determining whether current customers are happy), the approach is similar: agree on what you want to know, come up with questions that help reach that goal, put them into a survey questionnaire, and attempt to get people to complete it.

With so many people being asked for their opinions these days, surveys are increasingly seen as a burden. You likely receive - and perhaps ignore - multiple invitations each week. So when you decide to put a survey into the world, make sure not to waste the research opportunity.


Pre-survey work

It often helps to have informal conversations with people who fit the population you want to understand before building the actual survey questionnaire. This gives you a chance to test initial assumptions and uncover important perspectives from the group that you hadn’t previously considered. The collective themes can then be validated more widely through a formal survey.

It is also crucial to get input from your colleagues as you go through the development process, especially from people who have a vested interest in the final results. There is nothing worse than hearing someone ask, “Why didn’t we ask about this or that?” after results are shared. It is very difficult to go back after the survey closes to ask each respondent for one more piece of information.


Mini-Case: Data science hiring plans in Europe

Let’s say that you are interested in understanding how human resources (HR) professionals in Europe make hiring decisions for data science roles. You decide to reach out to HR professionals with recruiting responsibilities in the market.

Broadly speaking we can think of potential respondents as:

  • Universe: Global HR professionals with recruiting responsibilities.
  • Population: A subset of HR professionals who are based in Europe. We want to generalize results for this group.
  • Sample: HR professionals in Europe for whom we have contact information and to whom we will send the survey.
  • Respondents: The HR professionals from our sample who completed the survey and provided information that will be used to describe the wider population.

There are several sampling methodologies to consider in order to maximize our confidence in the survey findings. These include simple random sampling, stratified sampling, cluster sampling, and multistage sampling. You can read more on each technique in the online textbook Introduction to Modern Statistics.
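
To make one of these concrete, below is a minimal Python sketch of stratified sampling with pandas. The contacts DataFrame, its column names, and the 20% sampling fraction are hypothetical illustrations, not anything from an actual study design.

```python
import pandas as pd

# Hypothetical contact list of European HR professionals; the column
# names and sizes here are illustrative assumptions only.
contacts = pd.DataFrame({
    "email": [f"hr{i}@example.com" for i in range(1000)],
    "country": ["DE", "FR", "ES", "IT", "PL"] * 200,
})

# Stratified sample: draw 20% within each country so that every market
# is represented in proportion to its size.
stratified = (
    contacts.groupby("country", group_keys=False)
            .apply(lambda g: g.sample(frac=0.20, random_state=42))
)
```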


Avoiding bias

We want our survey results to reflect the actual hiring plans of HR professionals in the population. To do this, we need to take steps that minimize bias during survey creation and administration. The two most common types of survey bias are:

Sampling bias

When we hear from people who don’t represent our population.

This can arise, for example, when we take a sample of convenience (e.g., the first 100 HR professionals who come to our website) instead of a random sample (e.g., all HR professionals in Europe are equally likely to be invited and respond).
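
To see the difference in code, here is a minimal sketch with hypothetical names: a convenience sample simply takes whoever arrives first, while a random sample gives every contact the same chance of being invited.

```python
import pandas as pd

# Hypothetical sign-up log, ordered by arrival time (illustrative only).
visitors = pd.DataFrame({"email": [f"hr{i}@example.com" for i in range(1000)]})

# Convenience sample: the first 100 to show up; over-represents
# whoever happens to arrive early.
convenience = visitors.head(100)

# Random sample: every contact on the list is equally likely to be chosen.
random_sample = visitors.sample(n=100, random_state=7)
```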

Response bias

Anything that encourages misleading responses.

This can come from poorly worded survey questions such as, “With the economy in such poor shape this year, do you think it will really improve next year?” This is an example of a leading question. A better, more neutral variant is, “What do you think will happen with the economy next year?”

Response bias can also arise from anxiety over how a given response might be used or shared, especially if it relates to sensitive topics. Respondents in these situations may be less than truthful when completing the questionnaire. Clearly communicating to survey respondents that individual responses are confidential or entirely anonymous may help reduce this risk.

Finally, a related issue comes from asking double-barreled questions. Even if they avoid leading language, they are still difficult for the respondent to answer and for the survey administrator to analyze.

An example would be, “Why do you hire Business Analysts and Data Engineers?” The reasons a respondent hires a Business Analyst are likely different from those for a Data Engineer. When the roles are combined into the same question, it is impossible to disentangle the underlying drivers for each. Be cautious whenever you see the word “and” in a question. It is generally better to break such instances into multiple prompts.


Choosing question types

There are many ways to ask questions of interest. Your choice will impact the level of effort placed on respondents and your ability to report findings in specific ways. We’ll discuss analysis and implications for each question type in part 3 of this series, but for now here are the most common approaches.

Multiple choice

Select the one most appropriate answer.

How many new data hires do you envision making this coming year?

  • Fewer than last year
  • The same as last year
  • More than last year
  • I do not know

Multiple select

Select all answers that apply.

What data roles do you plan to hire for this year?

  • Business Analyst
  • Data Analyst
  • Data Scientist
  • Data Engineer
  • Data Architect
  • Other (please specify)

Rank order

Put a set of response options into a specific order.

Please rank the importance of the following strategic actions for your organization from most important to least important.

  • Enter New Markets
  • Improve Customer Engagement
  • Launch New Products
  • Reduce Costs

Text entry

Write an open-ended response.

What are your biggest hiring challenges for the coming year?


Making analysis easier

You also want to make sure that there is consistency in response options. Let’s look at the question, “Will you hire one or more data scientists next year?”.

Balance

It is generally best to have a balanced set of possible responses. Take these options.

Not balanced: Definitely not / Maybe not / Probably not / Probably yes / Yes

Balanced: Definitely not / Probably not / Probably yes / Definitely yes

The first example has three negative options compared with two affirmative ones. This itself is somewhat leading. It also makes reporting look awkward and could raise methodological questions. The second set of options is balanced and could be evenly collapsed into No and Yes for further analysis.
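
Here is a minimal pandas sketch of that collapse, assuming hypothetical responses recorded on the balanced four-point scale above:

```python
import pandas as pd

# Hypothetical responses on the balanced four-point scale.
responses = pd.Series(
    ["Definitely not", "Probably yes", "Definitely yes", "Probably not"]
)

# A balanced scale collapses evenly into No and Yes.
collapsed = responses.map({
    "Definitely not": "No", "Probably not": "No",
    "Probably yes": "Yes", "Definitely yes": "Yes",
})
print(collapsed.value_counts())
```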

Indifference

Although there is no right or wrong here, you often need to decide if respondents will be able to select an option of indifference.

The benefit of including indifference is that you can quantify a sense of uncertainty. The drawback is that indifference may be harder to take action on. By removing the indifference option, you force respondents to take a stand on either side of the scale.

Indifference: Definitely not / Probably not / Not sure / Probably yes / Definitely yes

No indifference: Definitely not / Probably not / Probably yes / Definitely yes


Testing the survey

The draft survey needs to be programmed into survey software. It is very important that a preview version of the survey on the platform is widely tested. This will ensure that (1) the questions and logic are operating as expected and (2) other stakeholders have a final chance to provide feedback.

Another question to address at this stage is survey length. Ask your colleagues to take the survey as if they were encountering the questions for the first time and report back how long it took to complete. The optimal length depends on the anticipated engagement of your audience as well as your expectations for the total number of responses.
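
A quick summary of those pilot timings is usually enough to judge length; a minimal sketch, assuming hypothetical reported times in minutes:

```python
import statistics

# Hypothetical completion times (in minutes) reported by colleagues
# during a pilot run of the survey.
pilot_minutes = [8.5, 11.0, 9.2, 14.3, 10.1]

# The median is less swayed by one distracted tester than the mean.
print(f"median: {statistics.median(pilot_minutes):.1f} minutes")
print(f"range: {min(pilot_minutes)}-{max(pilot_minutes)} minutes")
```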

The guiding philosophy here should be no longer than it needs to be. Keep it as short as possible without sacrificing key topics you hope to better understand from the results.

The question types you choose matter as they place varying levels of mental burden on respondents. A single-select multiple choice question on a non-controversial topic takes less effort than an open-ended text question asking respondents to justify their religious beliefs.

Open-ended questions, which are more taxing, are generally best placed at the end of the survey and made optional, as some people will drop off when they encounter them. By placing them at or near the end, you may be able to salvage the earlier responses from people who ultimately do not complete all questions.


Next steps

You’ve now built a survey that you’re proud of. It has been tested and others in your company have been given a chance to make final suggestions. Now what?

In part 2 we’ll turn to survey distribution logistics.

Interested in upping your survey game? Explore these courses.

Data Collection: Online, Telephone And Face-To-Face from Coursera

Quantitative Marketing Research from edX (week 3 material)

Analyzing Survey Data In R from DataCamp
