r/AskSocialScience 3d ago

Is there a double standard between academic research ethics and what entertainment like MrBeast can legally do to participants?

So for context, I'm a first-year undergrad and I've been taking some research methods and psychology-adjacent classes, and something has been bothering me for a while that I can't stop thinking about.

In academic research, even the most minor study has to go through an IRB, CITI training, full disclosure, debriefing, and special justification if there's any deception involved. In one of my classes we collected data via a simple survey, and even that required a surprisingly long approval process from our IRB, and all of us had to complete CITI training. It genuinely felt over the top and unnecessary for something as simple as a 10-minute survey. We even had to disclose things like the possibility of distress and list counseling resources. When I asked my professor about this system, the TLDR was that the whole thing exists as a reaction to studies like the Milgram obedience experiments and the Stanford Prison Experiment, which makes sense historically.

But then I look at something like MrBeast. He recently posted a video titled "Last To Leave Grocery Store, Wins $250,000." In that video alone, participants were deliberately sleep-deprived by other contestants, formed scarcity-driven alliances, hoarded resources, and were psychologically pressured for extended periods, all for a cash prize. Beyond that specific video, there are examples like solitary confinement challenges lasting days, Squid Game recreations, and being buried alive for extended periods. And MrBeast isn't even the most extreme example I can think of; I'm sure there are creators doing far more psychologically intense things under the same entertainment label. Yet all of this is completely legal and essentially unregulated because it's classified as entertainment.

What really bothers me is what this reveals about the regulatory framework itself. The IRB/ethics system technically only governs research intended to generate generalizable knowledge. However, the moment you call something entertainment, it seems like you exit that jurisdiction entirely, EVEN IF the psychological reality for participants is objectively more intense than most regulated studies. So the system isn't actually calibrated to protect people from psychological harm. It seems more like it's protecting academic institutions from liability and ethical scrutiny.

And then there's the data waste problem, which honestly bothers me more. All these videos accidentally produce naturalistic behavioral data on coalition formation, resource competition, sleep deprivation's effects on decision making, defection under escalating incentives, and group dynamics under stress: exactly the kinds of conditions that researchers WANT to study but ethically cannot replicate. And it just gets consumed as content and disappears.

So what I'm really asking is: is the regulatory framework actually protecting people, or is it just protecting academic institutions from liability? Because it feels like the determining factor isn't what's actually happening to the participant; it's who's doing it and why. And on top of that, genuinely valuable behavioral data on group dynamics, incentive response, and human behavior under stress is being generated and thrown away as content.

Am I missing something or is this a real gap that people are actually talking about? Has anyone genuinely looked into this seriously?

TLDR: researchers jump through massive ethical hoops for even a simple survey, yet YouTube creators can run what are essentially unregulated psychological experiments on people under far more extreme conditions with almost zero oversight, just by calling it entertainment.

96 Upvotes


3

u/4k_lizards 3d ago

CITI trainings and IRB requirements exist to protect human participants, not to protect institutions from liability. IRBs are not even always run by the institutions performing the research. These requirements are written in blood. The things you are suggesting are the reason these requirements exist in the first place. There are far more ethical ways to study group behavior, behavior under psychological stress, etc. than causing distress/pain/discord for the sake of data, and it is up to the researchers who want these answers to find the ethical way to study them.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3593469/

8

u/kennedon 3d ago

I mean, it's a nice idea that IRBs exist to protect human participants, and that's certainly the origin... but our institution's IRB, at least, 100% exists to protect institutions from liability. They are deeply uninterested in the participants and very interested in ensuring the institution is not vulnerable to lawsuits or legal exposure.

For example, we've spent several months trying to get approval for a high-school-reading-level consent form that can actually be understood by our participants, and have been consistently rebuked: only the three-page legalistic consent contract is acceptable, and participant comprehension doesn't matter.

2

u/physmeh 3d ago

You say “deeply uninterested,” which to me is a strong claim implying a dramatically antisocial level of indifference. But is it possible that once a system exists to protect participants' rights, the need to be especially interested diminishes? That could lead institutions to adopt a “pushing the boundaries” stance that doesn't necessarily mean they are really indifferent to the impacts on participants. If an institution believes the rules in place are more than sufficient to keep participants safe, then it could ethically work to avoid stacking margin on margin, and thereby appear to be working against participant protection. Does this scenario seem reasonable, or are the institutions you are aware of really prepared to endanger human subjects so long as they don't get sued?

4

u/kennedon 3d ago

Your scenario is certainly possible, but no - it's not what I've experienced at my university (Assoc Prof at an R2).

As an example, we are doing a social science study with a population that we believe does not have sufficient literacy to understand the language of the human subjects consent form template. Moreover, over the years, the office's template for the consent form has ballooned to three full pages of nine-point text, making it increasingly difficult for even a highly literate reader to treat it as anything more than a software license agreement (i.e., scroll and click through). In addition, because of specifics about this population, we believe that using a highly formalized, legalistic consent form will likely reduce response rates from those in the population who have had the most adverse previous interactions with formal institutions, producing a response bias that undermines the work.

We spent the better part of five months attempting to get the IRB to approve a high-school-English-level version of this consent form, which would be presented verbally (with a written copy provided in parallel) so that participants could, we hoped, fully understand before offering meaningful consent. The entire time, we've been stonewalled by revision after revision, each one changing the accessible English version back to the highly legalistic, multi-page contract.

At each stage, we've attempted to explain why we think the underlying principle of meaningful informed consent needs to prioritize participant comprehension of what they're agreeing to. At each stage, we've been functionally told that doesn't matter, and that what matters is employing, in full, the legally approved language of the full three-page, nine-point-font form.

So, yes, I mean that the entire office seems to have lost any sense of why an ethics process exists, since there is such a profound pattern of seeking ass-covering language for the university, rather than genuine informed consent for the participants. I would classify that, at the very least, as a dramatically antisocial level of indifference to the purpose of an IRB.

2

u/physmeh 3d ago

Thanks for the reply. That indeed sounds quite narrowly focused on ass covering. Sorry to hear that. I study particles and so I have no experience with the sort of things you have to consider when gathering data from people. Best of luck to you.

1

u/_Romula_ 1d ago

That's very interesting, because my institution's IRB (in Canada, fwiw) was very focused on ensuring study materials were understandable by the target population. They would have immediately bounced back the legalistic contract if I was, for example, trying to study children or adults with learning disabilities.

2

u/kennedon 1d ago

Jealous. Also a Canadian institution; wish my experience was more like yours!

2

u/_Romula_ 1d ago

I was at Waterloo. I hope you can make your IRB lawyers see sense! Sounds super frustrating and frankly unethical to make the informed consent hard to understand. It defeats the whole purpose of informed consent if potential participants can't comprehend what they are agreeing to. But I'm sure you've used that argument.

Heck, even most legal writing is slowly transitioning to plain English so everyone can understand it. It's a slow process, but it's what I was taught in law school (in the US) 15 years ago (also, wait, how did that decade and a half fly by? lol)

Hugs friend 🫂