
How to recruit high-quality survey participants

Jane Hillman | May 11, 2022

KEY TAKEAWAYS:

  • Survey participants expect to be compensated for their time, and researchers should think of compensation as an investment in quality, not simply a cost
  • Online surveys are often completed on a first-come, first-served basis, meaning prescreening participants can increase the percentage of high-quality respondents
  • Attention check questions (ACQs) are an easy, effective way to gauge whether participants are paying attention while they respond to your survey or questionnaire (which, we should note, are not the same thing)

Online research platforms, panels, and providers are steadily growing in popularity. As a result, it’s becoming increasingly easy to find survey participants for a given research project. However, research professionals understand that sample size does not equate to sample quality.

Therefore, it’s important to view the participant recruitment process itself as the first and, arguably, most important part of ensuring online survey results (i.e., data) will be of the highest possible quality. Here are three simple yet impactful ways to help do exactly that.

1. Pay Your Participants Fairly

Traditionally, the idea of paying people to complete a survey or questionnaire was controversial. The worry was that compensation could unwittingly influence how participants respond to questions. But the methodologies used to design research surveys and questionnaires today can, with care, account for any influence compensation might have. Good thing, too, because survey participants increasingly expect to be paid for their time. In fact, in contemporary research, it’s not paying participants that proves to have the larger consequence.

As Sabrina Trinquetel explains in a recent ResearchLive article, “... in reducing these individuals to a number [we] forget about their treatment in our ecosystem...with the downward creep of budgets and commoditization of online research, we’ve created a process that means people are not afforded the treatment they should be receiving.”

Remember, it takes time to participate in a survey. And this is time that, especially in a gig economy, could be spent earning money some other way. By paying less than a fair rate, you may effectively be asking survey participants to lose money by taking part in your research. That’s ethically questionable on its own. And when the goal is high-quality respondents, a revenue-negative arrangement will certainly work against you.

This isn’t to suggest researchers need to “overpay” when recruiting participants. Research shows that, in general, people care less about the specific amount they’re paid and far more about whether they feel fairly compensated. We shouldn’t expect survey participants to feel any different. Ideally, the research platform you use will help you determine equitable rates for your project. But, no matter what, you should factor in the following when setting participant pay (a quick worked example follows the list):

  • Years of experience, current working situation, or other employment specifics
  • Level of education
  • Specificity of the research being conducted, including length, complexity, niche, and turnaround time/needs
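To make this concrete, here’s a minimal sketch of the underlying arithmetic in Python. The 20-minute estimate, the 9.00-per-hour fairness floor, and the niche multiplier are hypothetical numbers for illustration, not recommendations; substitute your own pilot data and local norms.

```python
# Hypothetical figures: a 20-minute survey priced against a 9.00/hour fairness floor.
ESTIMATED_MINUTES = 20      # median completion time from a pilot run
TARGET_HOURLY_RATE = 9.00   # the minimum hourly rate you consider fair
NICHE_MULTIPLIER = 1.5      # premium for specialist or hard-to-reach samples

base_reward = ESTIMATED_MINUTES / 60 * TARGET_HOURLY_RATE
niche_reward = round(base_reward * NICHE_MULTIPLIER, 2)
print(f"General sample: {base_reward:.2f} per participant")      # -> 3.00
print(f"Specialist sample: {niche_reward:.2f} per participant")  # -> 4.50
```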

Done ethically and with care, fair pricing means nearly everyone eligible will want to participate in your survey. That’s a good problem to have, because not everyone who’s willing to be part of your research should be.

Curious about Prolific’s own audience of potential participants? Click here for details.

2. Be As Selective As Possible

As the academic and market research industry continues to grow, the number of potential participants grows too. This is a good thing. But as more people regularly take part in online research, more of them will naturally be less than a perfect fit for your needs. This means prescreening (i.e., preemptively filtering) survey participants is an increasingly crucial part of ensuring online surveys deliver quality responses.

Unlike the controls afforded by in-person research studies, participants sourced through online providers will likely engage on a first-come, first-served basis. With less (or no) control over the order in which survey responses arrive, it’s especially wise to ensure that anyone who reaches your survey first is a qualified respondent, not simply a quick one.

Online research platforms and providers should, at their most basic, allow any age bracket to be selected and applied to a potential pool of survey participants. When a representative sample is required, age is typically broken into ten-year brackets (e.g., 18-27, 28-37, etc.), as in the short sketch below.
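Generating those brackets programmatically is trivial. Here’s a quick sketch; the 18-67 working-age range and the ten-year width are assumptions for illustration, not platform requirements.

```python
# Split an assumed working-age range (18-67) into ten-year brackets.
def age_brackets(start=18, stop=67, width=10):
    return [(lo, lo + width - 1) for lo in range(start, stop + 1, width)]

print(age_brackets())
# -> [(18, 27), (28, 37), (38, 47), (48, 57), (58, 67)]
```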

Other basic, sortable demographics typically include sex, ethnicity, and nationality. And as online research platforms mature, a wider array of useful prescreening options is becoming available. By using Prolific, for instance, researchers can also filter potential survey participants on more specific factors, including the following (a hypothetical filtering sketch follows the list):

  • Country of birth
  • Time spent living abroad
  • Number of languages spoken fluently
  • Workplace setting (e.g., office, remote, or hybrid)
  • Types of neurodiversity
  • Experiences with COVID-19
  • Beliefs
  • Family & relationships
  • Lifestyles and interests
  • Tech and social media preferences
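If you manage panel data yourself, prescreening boils down to filtering participant profiles before anyone sees your survey. The sketch below is purely illustrative: the field names and the `is_eligible` helper are hypothetical, and this is not Prolific’s actual API.

```python
# Hypothetical prescreening spec: only remote/hybrid workers born in GB or IE
# who speak at least two languages fluently.
SCREENING = {
    "country_of_birth": {"GB", "IE"},
    "min_fluent_languages": 2,
    "workplace_setting": {"remote", "hybrid"},
}

def is_eligible(profile: dict) -> bool:
    """Return True if a participant profile passes every prescreening filter."""
    return (
        profile.get("country_of_birth") in SCREENING["country_of_birth"]
        and profile.get("fluent_languages", 0) >= SCREENING["min_fluent_languages"]
        and profile.get("workplace_setting") in SCREENING["workplace_setting"]
    )

panel = [
    {"id": "p1", "country_of_birth": "GB", "fluent_languages": 3, "workplace_setting": "remote"},
    {"id": "p2", "country_of_birth": "US", "fluent_languages": 1, "workplace_setting": "office"},
]
print([p["id"] for p in panel if is_eligible(p)])  # -> ['p1']
```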

In general, the more prescreening options you have at your disposal, the healthier your samples will be. More options also mean more granular control relative to the specific needs of your research project. After that, it’s just a matter of making sure potential survey participants, fairly paid and prescreened, actually pay attention to what they’re doing.

3. Test the Attention of Your Survey Participants

To be clear: in saying you should test the attention of your survey participants, we in no way advocate testing their patience. These are two very different concepts. (In fact, if you’re experiencing any confusion here, take a look at our Researcher Help Centre. Helping folks find research participants they can trust is kinda our thing 🙌 )

Attention plays an important role in online research: when a respondent is paying attention, their responses can be deemed more reliable. And when first-party data is collected anonymously online, any way to determine reliability becomes that much more important. A common way to gauge participant attention as they fill out a survey is the use of ACQs, or attention check questions.

At their most basic, ACQs are designed to gauge whether or not participants are paying close attention as they take a survey. A good ACQ should:

  • Demonstrate whether a participant has paid attention to the question itself (rather than just the instructions above it)
  • Clearly instruct the participant to complete a task in a certain way
  • Be easy to read (anyone, in theory, should be able to read and understand what the ACQ asks/instructs)
  • Not rely on memory recall
  • Be contextually relevant to the survey it appears in

Consider the following:

EXAMPLE SURVEY QUESTION

The following question is simple: when asked what the best in-person networking event is, you need to select ‘LinkedIn.’ This is an attention check.

Based on the text you read above, which of the below is correct?

  • Work Parties
  • LinkedIn
  • Job Fairs
  • Professional Networking
  • Informal Gatherings

The question above is clear and simple, and the correct response is clearly defined. However, there is another approach to attention checks that’s worth noting.

Nonsensical questions take a different tack: participants get no explicit direction on how to answer. Instead, the question poses a statement with only one objectively correct answer, and by placing all potential responses on a scale, the correct answer should be clear to everyone who’s paying attention.

Again, consider the following example:

[Image: an example nonsensical question asking participants to indicate how much they agree with the statement “I swim across the Atlantic Ocean in order to get to work each day”]

The statement above has an objectively correct answer: no one swims an ocean to get to work each day. Answering the question correctly requires no prior knowledge. And even though we’d hope attentive survey participants would choose “Strongly Disagree,” a choice of “Disagree” would still be an indicator of attention.
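Once responses are in hand, grading checks like these is straightforward. Here’s a minimal sketch using hypothetical field names for the two example checks above; note the lenient pass set for the scale item, which accepts “Disagree” as well as “Strongly Disagree.”

```python
# Hypothetical grading of the two example checks above.
ACQ_ANSWER = "LinkedIn"  # the instructed answer to the networking ACQ
SCALE_PASSES = {"Strongly Disagree", "Disagree"}  # lenient pass for the swimming item

def passes_attention_checks(response: dict) -> bool:
    """Count a respondent as attentive only if they pass both checks."""
    return (
        response.get("acq_networking") == ACQ_ANSWER
        and response.get("swim_to_work") in SCALE_PASSES
    )

responses = [
    {"id": "r1", "acq_networking": "LinkedIn", "swim_to_work": "Strongly Disagree"},
    {"id": "r2", "acq_networking": "Job Fairs", "swim_to_work": "Agree"},
]
print([r["id"] for r in responses if passes_attention_checks(r)])  # -> ['r1']
```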

Whether you use explicit ACQs or nonsensical questions, it’s beneficial to study good and bad examples side by side. Understanding the differences, and being able to put them into practice, ensures participant attention can be measured easily alongside more straightforward metrics, like response rates and survey completion percentages.

Recruiting Quality Begins with You

Fairness, filtering, and...attention check questions.

Dang. Okay. Well, we may not be able to call these the “Three Fs” of recruiting high-quality survey participants. That said, they do serve as a solid foundation to set your research up for success.

And this approach should be beneficial no matter which online research platform you choose to work with. But remember: different platforms use different criteria for admitting participants in the first place.

To build on the foundation detailed above, make sure the ways participants are vetted match your needs and the needs of your research.