Balancing participant wellbeing as AI research evolves
AI research is branching into new areas, from testing system weaknesses to evaluating sensitive content. With these changes, keeping track of participant wellbeing on Prolific matters more than ever. We spent 2024 examining how different types of research affect our participants' mental health, with heightened attention to those engaging with more challenging content.
With our next wellbeing report coming in early 2025, we wanted to share what we learned from our 2024 findings, particularly about participants working on more challenging research.
Why adversarial testing matters
Adversarial testing—where AI models are exposed to difficult or problematic scenarios—plays a central role in developing safe, ethical AI systems. Identifying potential risks and biases early helps prevent harmful AI behaviors before deployment.
Testing requires human evaluation of challenging content, which makes participant wellbeing particularly important. We recognize our unique position in facilitating this essential work while protecting those who make it possible.
Key findings from the 2024 Wellbeing report
Using the Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS), our Participant Wellness Report reveals encouraging stability in participant wellbeing. The general participant pool maintained consistent scores—from 23.13 in December 2023 to 23.64 in August 2024—matching closely with UK population benchmarks of 23.6 to 23.7.
Such stability over eight months suggests our protective measures are effective in maintaining participant wellbeing.
Sensitive content research: Additional safeguards
We've specifically tracked participants involved in sensitive content studies, as these studies demand even greater consideration of participant welfare. Their average score of 22.93 (SD 4.91) suggests that while overall wellbeing remains stable, certain aspects need enhanced attention.
The scores for 'thinking clearly' (ability to concentrate and make decisions) and 'feeling close to other people' (sense of connection and social wellbeing) were slightly lower. These findings inform our approach to enhancing Prolific’s participant support features.
Refining functionality and adding tools that support participant management and wellbeing lets us better serve researchers conducting sensitive content studies. Future improvements may include participant limits for sensitive content exposure and additional resources designed to bolster wellbeing for participants engaging in challenging studies.
As AI systems become more sophisticated, this type of research becomes increasingly important. Our challenge—and commitment—is to balance platform and product improvements with exceptional care for participant welfare, particularly in sensitive content studies. This means developing additional support systems and monitoring protocols specifically for participants engaging with challenging material.
The human impact of research participation
While statistical measures provide necessary oversight, participant voices reveal the deeper impact of their involvement. The three testimonials below capture how research participation affects wellbeing:
"The platform keeps me connected and lets me think about new topics I would not otherwise encounter. The studies help me be more open-minded about the world."
"It helped me save up for a life-changing and saving operation, so it's had a big impact on my wellbeing."
"The financial compensation allows me to have a little wiggle room in my budget. I am grateful to be able to make a little extra money. That is very impactful to my wellbeing."
Each story reinforces why participant wellbeing is so important—highlighting the real difference that research participation can make in people's lives.
Evolving participant demographics
Our sample for this wellbeing report highlights meaningful shifts within our participant pool. In this sample of 1,000 participants—500 in general studies and 500 in sensitive content studies—August 2024 data shows a more balanced gender distribution, with 277 male and 226 female participants, compared to our first data collection in December 2023 (283 male, 215 female).
Geographic representation also shifted significantly within this group, with UK participation increasing from 125 to 265 participants, while US participation decreased from 375 to 238. These sample-based insights help us understand and respond to trends, although they may not fully reflect the makeup of our entire participant pool of 200,000.
These changes provide a more accurate picture of our participant demographics, with balanced representation across key groups. Such diversity is key as AI research expands, making sure that varied perspectives and backgrounds contribute to high-quality AI evaluation and testing. This is especially true in sensitive content and adversarial testing scenarios.
Protecting participants in sensitive research
As we move to include more sensitive content studies, we're implementing enhanced protective measures:
- Comprehensive content warnings and explicit opt-in processes
- More frequent wellbeing monitoring for sensitive content participants
- Enhanced compensation guidelines for challenging tasks
- Specialized support systems for participants handling difficult material
- Continuous assessment and refinement of operational interventions
- Additional safeguards for adversarial testing participation
The consistency in wellbeing scores, even among participants engaging with sensitive content, suggests our protective measures are working. We're not complacent, however. The slightly lower scores in specific areas guide our focus for improvements, particularly as we expand into more challenging research territories.
What comes next
We'll continue enhancing our approach to participant welfare in 2025, with special attention to anyone engaging in sensitive content research. Our focus remains on maintaining equitable care across our participant pool, regardless of the type of research involved.
In collaboration with initiatives like Fairwork's Cloudwork Principles and the Partnership on AI's "AI, Labor, and the Economy" program, we're dedicated to developing new support systems for participants, particularly in adversarial testing and other challenging research areas. Our participation in the upcoming "Mapping Human Labour in the AI Pipeline" workshop and "The Human Factor in AI Red Teaming" will further inform our practices, as we engage in discussions on the human impact of AI research.
The message from our 2024 data is clear: with proper safeguards and attention to participant welfare, we can support advancing AI research while protecting those who make it possible. Their wellbeing is fundamental to quality research and ethical AI development. As we expand into new research territories for 2025, our commitment to participant welfare remains unwavering.
Want to learn more? Read our full 2024 Participant Wellbeing Report.