Writing successful surveys depends on having clear and unambiguous questions. Furthermore, mobile surveys are constrained by screen space, and it’s essential to ensure that every question appears above the fold on all screens.
Combining general rules with mobile-specific rules provides a checklist of Golden Rules that should be reviewed before any survey is fielded. When you submit a survey in ResearchDesk™, adherence to these Golden Rules is checked.
The purpose of these rules is two-fold: accuracy and engagement. You want respondents to answer the question you actually posed (not what they thought you asked), and you want them to remain engaged throughout the survey, giving the most thoughtful answers they can without dropping out.
This blog details the following Golden Rules:
General Survey Structure
- Don’t waste respondents’ time
- Clear language
- Phrase questions as ACTION – QUESTION – CONTEXT
- Highlight differences between similar sequential questions
- One question per page
- Eliminate jargon (or thoroughly explain it)
- Short sentences
- Eliminate cultural references
- Never go below the fold
- Require every question
- Remind the user what they said
- Allow going back to change answers
For specific question types
- Radio Button and Checkbox questions
- Exclusive answers
- Number of answer options
- Flipping and randomizing answer options
- Matrix questions
- Ranking questions
- Text questions
General Survey Structure
Don’t waste respondents’ time
If there is a set of questions that some respondents might not be interested in, then add a question before the group, and allow the respondent to skip the whole group.
For example: Don’t ask a respondent to rate 20 recent cinema releases in a series of 20 separate questions. Instead, first ask “Have you been to see a film at the cinema in the last 6 months?” (Yes/No), then “Which of the following films did you see?” with the list of 20, and then only ask them to rate the films they’ve seen.
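The branching above can be sketched as a simple function. This is an illustrative Python sketch under stated assumptions (the film list, answer values and function name are hypothetical, not any survey platform’s real branching API):

```python
# Illustrative skip logic: only rate films the respondent actually saw.
# FILMS is a hypothetical stand-in for the list of 20 releases.
FILMS = ["Film A", "Film B", "Film C"]

def films_to_rate(went_to_cinema, films_seen):
    """Return the films the respondent should be asked to rate."""
    if went_to_cinema != "Yes":
        return []  # skip the entire rating block
    return [film for film in films_seen if film in FILMS]
```

A respondent who answered “No” is never shown a rating question at all.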
Online respondents can quite easily scroll through long lists, especially when they have a wheel on their mouse. The equivalent is much more cumbersome on a small touchscreen (without missing bits), so it’s really important to minimize the amount of unnecessary words that a mobile respondent needs to absorb.
Clear language
Every question and answer choice should be in the clearest possible language. Many professional survey designers are at their peak when they have an eight-year-old child: eight-year-olds make the best proofreaders of survey questions. Design your survey for an eight-year-old.
The reason for this is quality. It’s bad when a respondent does not understand the question and gives you an ill-considered answer. It’s worse when you don’t know if the respondent has given you an ill-considered answer.
If your text is ambiguous, then you can’t know what the respondent understood by the question. The results are worse than useless: they could be completely misleading.
Phrase questions as ACTION – QUESTION – CONTEXT
When wording a question, try to get the actual action as close as possible to the beginning of the sentence. This helps the respondent read the question in light of what you want them to do about it.
For example: (bad example) “Considering all the times that you have been to see a movie in a movie theater in the last six months, select the three most important reasons why you buy popcorn.”
There are usually three things in a question:
Action: (what you want the respondent to do) — “select the three most important reasons”
Question: (what you want to know) — “reasons people buy popcorn”
Context: (when/where etc) — “at movie theaters in the last six months”.
The structure of a question should typically follow the pattern:
ACTION — QUESTION — CONTEXT
Putting the action first prepares the respondent for what they need to do, so they can be thinking about it while they read the rest of the question.
Having the question next is important because the question itself is what you most want the respondent thinking about.
Putting the context last is usually appropriate, because most often it is the same context as the previous questions. The context needs the least of the respondent’s attention at this point, because they are probably already aware of it.
For example: (rewritten) “Select the three main reasons you bought popcorn when you went to a movie theater in the last six months.”
When writing for a PC, you have more space on the screen. It’s very common to drop the action to a new line after the question, in a different font. This is fine on a PC, but is too hungry on real estate to be appropriate for a mobile survey.
There are times when you are fundamentally changing the context. When this happens, you sometimes see a PC-focused question that starts “Considering all the times that you’ve been to the cinema in the last six months…”. This is OK on a PC, but the question and action get lost in a plethora of text on a mobile. You sometimes see questions written like this for every question, and in this writer’s opinion, that is not even OK on a PC, and absolutely not OK on a mobile.
So, what should you do when you are transitioning the context, and the survey is a mobile survey?
Answer: you should recognize that changing the context mid-survey is like a section break in a book. You should create a page specifically to change the context, then carry on with the ACTION-QUESTION-CONTEXT approach, starting on a new page.
For example: you have been talking about buying popcorn in grocery stores, and now you have a bunch of question about buying popcorn in movie theaters.
Create a page that says something like “Thank you. Now we’re going to ask about popcorn in movie theaters”, on its own, with nothing but a Next/OK button to go to the first of the second set of questions.
Highlight differences between similar sequential questions
When you need to ask the same question about multiple things (which in the olden days would have been a matrix question), the respondent’s experience is seeing one question per page, with each page looking almost identical.
Therefore, you should draw the respondent’s attention to what has changed. Bolding the change is usually perfect for the job.
This is not just to be helpful. If the difference from the previous question is buried deep in the sentence, then a respondent might end up accidentally checking a response to a different question. This happens often when the original question was a matrix question. (See Matrix questions below).
This is particularly important with mobile surveys, because there’s often less white space, and the other questions are not likely to be on the same page.
You’re trying to help the respondent in the scenario where they have read the question, but been interrupted and have forgotten precisely what you were asking about. Help them to visually find it without having to read the whole question again.
One question per page
Ensure that there is one question per page. Every question should fit on the page, and the respondent should never need to scroll.
It is always better to have lots of pages rather than crowded pages.
In addition, try to find whatever option works to minimize the number of times the respondent has to find and press something. For example, SurveyMonkey has an option called “Question at a time”, which puts one question on each page. However, this actually presents an “OK” button followed by a “Next” button. Switching this option off and adding a page break between each question means that the respondent only has an “OK” button, which turns the page.
Eliminate jargon (or thoroughly explain it)
You’d have thought this would be obvious, but industry jargon is the biggest reason why consumer surveys fail. You should use language that all respondents will understand unambiguously.
However, it’s not always easy to avoid industry jargon when your survey is about something that only the industry knows about. When this happens, you need to explain what you mean.
You can even test the respondent. Why not, it’s fun! Respondents will just as merrily spend 20 minutes doing something fun as 3 minutes doing something mind-numbing.
For example (made-up):
Q1: “This survey is about Sprankots. A Sprankot is a photo-torpedic device embedded in your brain, which sends images from one location to another, which could be used to telepathically send images of what you are seeing to a friend who also has a sprankot implant”.
Q2: “Where would you find a sprankot?” [“on the Play store”, “in my head”, “on my television”, “in a laser guided weapon”, “none of these”].
Q2 would have validation so that, if they get the answer wrong, they are instructed to go back and re-read the description.
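If you build this check yourself, the validation amounts to comparing against the one correct answer. A minimal Python sketch (the correct answer and message wording are taken from the made-up example above; the function name is an assumption):

```python
# Hypothetical comprehension check for the Sprankot example.
CORRECT_ANSWER = "in my head"

def validate_comprehension(answer):
    """Accept the answer, or return a prompt to re-read the description."""
    if answer.strip().lower() == CORRECT_ANSWER:
        return "ok"
    return "Please go back and re-read the description before answering."
```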
The purpose of a survey question is to accurately extract information/opinion from the respondent into your dataset. Keep it as simple as possible.
Eliminate cultural references
Unless you explicitly know that the respondent will understand a colloquialism or a cultural reference, don’t use one. You don’t know the respondent, so you can’t say with certainty that they will understand.
For example, don’t refer to “scoring an own goal”, or “shopping on Broadway”.
Sports analogies very often end up in surveys that go international. These don’t work: few people around the world play or understand the sports played in the USA, and vice versa.
Never go below the fold
When planning for a mobile, long questions are the start of failure for a survey. Respondents often can’t read the whole question, and if they have to scroll up and down in order to understand what you’re asking, they will lose patience and give up, or they will get confused and be more likely to give an incorrect answer.
It is ALWAYS better on a mobile to break a long question into smaller questions whenever possible.
Three short questions take less time to answer than one long question.
Some experienced survey designers tend to favor long questions, because their experience comes from the online survey design world, where real estate was less of a problem, and because 10 years ago, sample houses (firms who provided access to respondents) charged by the question. They have therefore been brought up believing that one long question is necessarily better than three short questions, because it’s cheaper.
This is NOT the case on mobile.
Require every question
It’s much easier to ‘fat finger’ (accidentally press the wrong thing) on a mobile than it is on a PC. It’s good practice to require that every question has an answer, in case they click the “OK” button twice by accident.
Remind the user what they said
It’s often a good idea to start a question with “You said you shopped at Kmart”…
It not only provides context to the next question, but allows the respondent to spot if they have mis-clicked previously. This is especially good if you have a question that was not worded in the same way (because you were trying not to lead the respondent before). Here, you’re describing your interpretation of the previous answer, and giving them a chance to ensure they have not misled you.
It also gives the impression that you are listening and care about what they are saying.
This is especially useful in the first question after a branch, where the consequences of branching based on an incorrect answer are more serious.
(See also the ACTION-QUESTION-CONTEXT Golden Rule above)
Allow going back to change answers
Respondents can fat-finger, or simply realize that they’d misread a previous question. There’s (almost) never a good reason not to allow them to go back and give you a better answer.
The only exception to this rule is if you have complex screener requirements, and there’s some structure in your questions that would let a respondent work out that they were about to get screened out before they have actually been screened out.
For specific question types:
Radio Button and Checkbox questions
For radio button questions, there should be no imaginable scenario in which two answer choices could both apply. If multiple answers could genuinely apply, the question must ask the respondent to choose the one that most applies.
No survey question should have more than one concept. It is always better to break a question mixing multiple concepts into two (or more) questions. This applies equally to answer choices.
For example: “Why do you like Acme Inc?”, with the answer choice “Because they are competitively priced and I can save money”. This answer option has two concepts: that the company is competitively priced, and that the respondent thinks they can save money.
When concepts are mixed, the respondent might be answering about one concept, the other, either, or only when both apply. As a survey designer, you will have no idea which question the respondent actually answered; the question has no value and should be purged.
Exclusive answers
Exclusive answers apply to checkbox questions: these are “none of the above”-style answer choices. When a respondent clicks one of these, they should not be allowed to select any other option. Most survey platforms allow an answer choice to be marked ‘exclusive’, or let you write question/page validation to ensure this.
If you allow a respondent to check “None of the Above” as well as one of the other answer choices, you will need business rules to decide whether to keep that response or purge it. This is unnecessary. It is always better to make that situation impossible in the first place.
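If your platform doesn’t offer an ‘exclusive’ flag, the page validation amounts to one rule. A minimal sketch in Python (the option label and function name are placeholders, not any platform’s API):

```python
def validate_checkboxes(selected, exclusive={"None of the above"}):
    """Reject an exclusive choice combined with anything else,
    and require at least one answer."""
    if (selected & exclusive) and len(selected) > 1:
        return False  # e.g. "None of the above" plus a brand
    return len(selected) > 0
```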
Number of answer options
The number of answer options should not be more than about 5. This is because, on a mobile, some of the options will fall below the fold, and these are less likely to be clicked.
If you have more than five answer options, try to break it into more than one question, or pipe in a reduced number from a previous question.
If you still have more than 5 answer options, you should consider using a dropdown instead. In general, survey platforms do a good job of rendering dropdown questions, though you should check what it actually looks like on an Android phone. However, nothing beats restricting the answer options.
The exception to this rule is where the respondent will know the answer without reading the answer choices (e.g. “In what state do you currently reside?”). In these cases, dropdowns with many options are perfectly acceptable (see Flipping and randomizing answer options).
Flipping and randomizing answer options
Flipping or randomizing answer choices is often a good idea, because some respondents are more likely to check answers that are higher up the list than lower. ResearchDesk panelists are better than many, but there will always be respondents who don’t read an entire question before selecting answer choices.
Randomizing puts the answer choices in a random order. Your survey platform should allow you to ensure that any exclusive answer choices are not randomized and remain at the bottom of the list.
Flipping randomly turns the answer options upside down for half of the respondents.
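Both behaviours can be sketched in a few lines. This is an illustrative Python sketch, not any survey platform’s API; exclusive options are pinned to the bottom either way:

```python
import random

def present_options(options, exclusive=(), mode="randomize", seed=None):
    """Order answer options for display.

    mode="randomize": shuffle (for cardinal choices).
    mode="flip":      reverse (for ordinal choices).
    Exclusive options always stay at the bottom, in their original order.
    """
    body = [o for o in options if o not in exclusive]
    tail = [o for o in options if o in exclusive]
    if mode == "randomize":
        random.Random(seed).shuffle(body)
    elif mode == "flip":
        body.reverse()
    return body + tail
```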
The first thing you need to decide is the way that a respondent will think about answering your question. You need to distinguish between questions where the respondent knows the answer before reading the answer choices, versus one where the respondent is choosing from the answer choices.
If the respondent knows the answer in advance (e.g. “What state do you live in?”), then your objective is to help the respondent find the answer as easily as possible. In this case, many answer options are OK (though they should be in a dropdown), but they should be alphabetical.
For most questions, however, the respondent is selecting from the answer choices provided. The question to ask yourself is “are the answer choices cardinals or ordinals?”.
A set of cardinal answer choices is one where there is no natural order (e.g. apples, bananas and couscous).
A set of ordinal answer choices is one where there is a natural order (e.g. “love”, “like”, “indifferent”, “dislike”, “hate”).
Ordinal answer options should never be randomized.
Ordinal answer choices can be flipped, but it’s generally better to ensure that the flip is the same for each respondent. So, if a respondent has been randomly allocated to be a “flip” respondent, then they should see all ordinal questions flipped, or none flipped, but not some mix of the two.
Cardinal questions should be randomized, but if a series of answer options is going to be repeated regularly, then the randomization should be the same if possible, especially if the list of options is long.
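One simple way to make the flip consistent for a given respondent is to derive the allocation from the respondent’s ID rather than tossing a fresh coin per question. A sketch under stated assumptions (the ID format and function name are hypothetical):

```python
import hashlib

def is_flip_respondent(respondent_id):
    """Deterministically allocate roughly half of respondents to the
    'flip' group, so a given respondent sees every ordinal question
    flipped, or none of them."""
    digest = hashlib.sha256(respondent_id.encode("utf-8")).digest()
    return digest[0] % 2 == 0
```

Because the allocation is a pure function of the ID, the same respondent gets the same answer every time, across every ordinal question in the survey.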
Matrix questions
A matrix question in an online survey is one where you see a grid of answers. It’s very common to see questions that were originally designed for PC being rendered in a mobile survey.
Matrix questions rarely work well on mobiles, though some survey platforms will do the best they can. Many survey tools will convert a matrix question into a series of questions that are exactly the same as each row.
These are really unpleasant for the user. The respondent sees a sequence of very similar questions. In these cases, it is essential that the difference between the questions is highlighted to the respondent.
Nonetheless, repeating the same question over and over is bad practice on a mobile. The respondent does not get the benefit of being able to see a consistent structure as they would on a PC, and it feels repetitive. Expect high drop-outs and low quality answers.
If you have to use a matrix question, make sure that you have detailed branching to ensure that only highly relevant questions will appear for the user. If necessary, ask a question before the matrix question to find out what is relevant.
Ranking questions
Ranking questions are often great questions. They require the respondent to select some or all of the answer choices and put them into an order; this is sometimes called “Forced Ranking”. Ranking questions are only appropriate for cardinal answer choices.
Most survey platforms do a good job of rendering ranking questions on mobiles, but you should test this (on a mobile) to be sure.
If you have lots of answer options, then you should break the question into two questions. The first question asks them which are relevant, and the next question (which is branched assuming there’s more than one that is relevant) asks them to rank those that are relevant.
Ranking questions should be randomized, but when splitting a question with lots of answer choices into two, you should ensure that the answer options are in the same sequence for the two questions. Most survey platforms will naturally pipe answer options from a previous question in the order they were presented, but you should check this is the case with your survey platform.
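Preserving the sequence when piping amounts to filtering the originally presented order, rather than re-ordering by selection. A sketch (in Python, with hypothetical names) of the behaviour to check your platform for:

```python
def pipe_for_ranking(presented_order, selected):
    """Carry the selected options into the ranking question in the
    same sequence they were shown in the selection question."""
    chosen = set(selected)
    return [option for option in presented_order if option in chosen]
```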
Text questions
Mobile phones are not as easy to type on as PCs, so you should avoid text questions. Where you do need a text question, structure it so that a short answer is adequate, and if you have validation based on the length of the answer, it should not be too onerous.
Questions sometimes ask for brand names to be entered unprompted (e.g. “List any movie theater chains you can think of”). These questions are very hard to handle on a mobile: many respondents will answer, but in many cases what you get back is the spellchecked version of the brand. Because of this, you should consider other ways of getting unprompted information from the respondent without depending on them typing the names precisely.