How Prolific is building an AI research community

Prolific is entering a new chapter in its approach to AI research. Senior strategic researcher Lisa Laeber has been working alongside Dr. Andrew Gordon, staff researcher in behavioral science, to refine how participants' skills are matched with researchers' needs, starting with AI tasks.
Moving beyond basic filtering by demographics and performance track record, we’re building specialized pools of participants who've shown they can handle complex AI research tasks.
Here's how we're making it happen.
Building our AI Tasker pool
Lisa’s role at Prolific centers on the bigger picture: exploring where research is heading and what our customers will need months or years from now.
Recently, the team noticed a clear trend among our AI clients. They need participants who can tackle more cognitively demanding tasks, particularly in AI development. These aren't your typical survey questions; they're tasks that require careful evaluation, strong reasoning skills, and structured thinking.
So we decided to add something new to our existing filters: skill-based assessments. Think of it as building a pool of participants who've demonstrated specific abilities. For AI tasks, this means people who can evaluate model outputs, check factual accuracy, and provide clear, structured responses.
This is where Dr. Gordon's expertise has been invaluable. With his background in cognitive neuroscience and years of experience in behavioral science research, he's helped us identify exactly what skills we need to measure and how to measure them.
He leads our social and behavioral science projects, setting new standards for study design and data integrity: exactly what we needed to build a strong skills assessment process.
How we designed the assessments
We began by developing a two-stage approach. First, a general exam measures core abilities like verbal comprehension, pattern recognition, and complex problem-solving: skills we've identified as important for AI tasks. Only participants who pass this initial qualification are invited to the next stage.
The specialized AI assessment came next. Working with our biggest AI customers, we studied how they use participant pools. This helped us identify three key areas to evaluate:
- Reasoning
- Fact-checking
- Structured writing
For reasoning tasks, participants compare outputs from AI models and evaluate them using specific criteria. The fact-checking component asks them to identify inaccuracies in AI model outputs and provide reliable sources. Finally, they complete a structured writing task that tests their ability to write clear, detailed responses.
These assessment components mirror real research needs. Some customers require participants to evaluate model outputs or check factual accuracy. Others want well-structured responses for training data. Participants need to demonstrate strong performance across all components of the AI assessment to qualify as AI Taskers. By testing all three abilities, we make sure our AI Taskers can handle whatever tasks researchers throw at them.
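To make that gating logic concrete, here's a minimal sketch of how a two-stage qualification like this could be modeled. The class, thresholds, and component names are illustrative assumptions for this post, not Prolific's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical pass thresholds -- illustrative only, not Prolific's real cutoffs.
GENERAL_EXAM_PASS = 0.70
COMPONENT_PASS = 0.75
AI_COMPONENTS = ("reasoning", "fact_checking", "structured_writing")


@dataclass
class Participant:
    participant_id: str
    general_exam_score: float | None = None  # stage 1 result, 0.0-1.0
    ai_scores: dict[str, float] = field(default_factory=dict)  # stage 2 results

    def passed_general_exam(self) -> bool:
        """Stage 1: only participants above the bar are invited to stage 2."""
        return (self.general_exam_score or 0.0) >= GENERAL_EXAM_PASS

    def is_ai_tasker(self) -> bool:
        """Stage 2: strong performance is required on *all* components."""
        return self.passed_general_exam() and all(
            self.ai_scores.get(c, 0.0) >= COMPONENT_PASS for c in AI_COMPONENTS
        )


p = Participant(
    "p_001",
    general_exam_score=0.82,
    ai_scores={"reasoning": 0.90, "fact_checking": 0.80, "structured_writing": 0.78},
)
print(p.is_ai_tasker())  # True: passed stage 1 and every stage 2 component
```

The design point worth noting is the `all(...)` check: a high score on two components can't compensate for a weak third, which mirrors the across-the-board requirement described above.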
What it means in practice
For researchers, the assessment process means access to participants who can handle sophisticated AI tasks and provide carefully evaluated contributions that directly support AI development.
Our AI Taskers excel at tasks like:
- Comparing different model outputs and explaining their choices
- Identifying factual errors with evidence-based corrections
- Writing clear, structured content that follows detailed guidelines
A specialized pool helps researchers who need high-quality human feedback for model evaluation and fine-tuning. Whether they're assessing output quality, checking factual accuracy, or gathering structured responses, they can be confident in the data quality.
The future of skill-based filters
The skill assessment system is just the beginning. The Prolific Sciences team is creating more specialized qualifications for different types of AI research. Our customers' needs in AI are changing day to day, so we're building a flexible, modular approach. This will let researchers choose participants with exactly the right combination of skills for their specific projects.
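As a rough sketch of what "modular" could look like in practice, filters might compose along these lines. The filter model and names here are hypothetical, assumed for illustration rather than drawn from Prolific's API:

```python
from dataclasses import dataclass

# Hypothetical filter model -- an illustration, not Prolific's actual API.
@dataclass(frozen=True)
class SkillFilter:
    skill: str        # e.g. "fact_checking"
    min_score: float  # minimum assessment score required


def matches(participant_scores: dict[str, float], filters: list[SkillFilter]) -> bool:
    """A participant qualifies only if they clear every requested skill bar."""
    return all(participant_scores.get(f.skill, 0.0) >= f.min_score for f in filters)


# A researcher mixes and matches exactly the skills their project needs.
study_filters = [SkillFilter("reasoning", 0.8), SkillFilter("structured_writing", 0.7)]
print(matches(
    {"reasoning": 0.85, "structured_writing": 0.75, "fact_checking": 0.40},
    study_filters,
))  # True: fact-checking isn't required for this particular study
```

Because each filter is a self-contained requirement, new qualifications can be added without changing how studies combine them.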
We're already developing comprehensive training modules that will precede these assessments. This approach means participants can build their skills before demonstrating them, making AI research opportunities more accessible to our diverse participant pool.
A new chapter for AI research and our community
These are exciting times for Prolific. With the introduction of skill-based filters alongside our demographic ones, we're building a unique community where researchers can find exactly the right participants for their AI projects, and where participants can grow their abilities. We can't wait to see what our community achieves together.