Jenny Tang, a Ph.D. student at Carnegie Mellon University, discusses the generalizability of privacy and security surveys run on platforms like Amazon MTurk and Prolific. The podcast explores the challenges of obtaining accurate survey responses and compares samples from these platforms to the US population. Key insights include differences in gender balance and behavior, along with recommendations for future surveys.
Podcast summary created with Snipd AI
Quick takeaways
The study found that online survey platforms like MTurk and Prolific are not as representative of the general population as the Pew sample, highlighting the importance of understanding limitations and demographic biases.
Text box attention checks are effective in filtering out non-human responses in online surveys, but researchers should be mindful of their impact on survey length and availability of future tasks for workers.
Deep dives
Comparison of Survey Sources
The podcast episode discusses a research study comparing responses from Mechanical Turk (MTurk) and Prolific, two online survey platforms, to a survey conducted by the Pew Research Center. The study aimed to determine whether responses from MTurk and Prolific were representative of the general population. The findings revealed that neither the MTurk nor the Prolific samples were as representative as the Pew sample, though Prolific's representative-sample option performed better than the other samples. Attention checks were used to filter out non-human responses, and text box attention checks proved more effective than other types. Overall, the study highlights the importance of understanding the limitations and demographic biases associated with online survey platforms.
Challenges in Generalizability and Filtering Out Bots
The podcast episode also explores the challenges of achieving generalizability in survey research. Online survey platforms tend to skew younger and more educated than the general population, which poses representation issues. The presence of bots also affects data quality, with anecdotal evidence suggesting an increase in bot responses on platforms like MTurk. Attention checks, specifically text box attention checks, can be effective in filtering out non-human responses. However, relying heavily on attention checks makes surveys longer and may affect the future tasks available to MTurk workers who fail them.
Recommendations for Online Survey Research
The podcast episode provides recommendations for conducting online survey research. It suggests using platforms like Prolific that offer representative samples for better generalizability. Attention checks should be used selectively, considering the specific research questions and type of survey platform. Text box attention checks are effective for identifying non-human responses. Researchers are advised to be mindful of the types of questions asked and the nuances in conclusions drawn from survey data. Additionally, the importance of addressing the limitations and biases of online survey platforms is emphasized.
Future Research and Bridging Policy-Technical Gaps
The podcast episode briefly mentions the future work of the guest speaker, Jenny Tang, in the areas of misinformation and bridging gaps between policy and technical communities. She discusses her project on examining the spread of academic misinformation and the loss of nuance in research papers. Another project focuses on racial equity and technology policy. Tang intends to explore ways to address racially motivated or gender-based harassment online and hold platforms accountable. Her work aims to combine her technical and policy backgrounds to tackle important societal issues.
Today, Jenny Tang, a Ph.D. student in societal computing at Carnegie Mellon University, discusses her work on the generalizability of privacy and security surveys run on platforms such as Amazon MTurk and Prolific. Jenny shares the drawbacks of using such online platforms, the discrepancies observed in the samples drawn, and key insights from her results.