Radicalization via YouTube, as widely understood, is when someone watches a few partisan videos and unwittingly starts a feedback loop in which the algorithm gradually recommends more and more extreme content and the viewer starts to believe more and more of it.
The key is that the user’s beliefs, preferences, and behavior shift over time, and the algorithm both learns and encourages this, nudging the user gradually. But this study didn’t analyze real users. So the crucial question becomes: what model of user behavior did they use?
The answer: they didn’t! They reached their sweeping conclusions by analyzing YouTube *without logging in*, based on sidebar recommendations for a sample of channels (not even the user’s home page because, again, there’s no user). Whatever they measured, it’s not radicalization.
Sidenote: the first author has been on a diatribe against the media, even in the thread introducing the paper. That doesn’t undermine the paper by itself, but given that the paper disingenuously excludes how radicalization might actually work, it… raises questions. https://twitter.com/mark_ledwich/status/1210743217982803970
Others have pointed out many more limitations of the paper, including the fact that it claims to refute years of allegations of radicalization using late-2019 measurements. Sure, but that’s a bit like pointing out typos in the article that announced "Dewey Defeats Truman".
Incidentally, I spent about a year studying YouTube radicalization with several students. We dismissed simplistic research designs (like the one in the paper) by about week 2, and realized that the phenomenon results from users/the algorithm/video creators adapting to each other.
Let’s not forget: the peddlers of extreme content adversarially navigate YouTube’s algorithm, optimizing the clickbaitiness of their video thumbnails and titles, while reputable sources attempt to maintain some semblance of impartiality. (None of this is modeled in the paper.)
After tussling with these complexities, my students and I ended up with nothing publishable because we realized that there’s no good way for external researchers to quantitatively study radicalization. I think YouTube can study it internally, but only in a very limited way.
If you’re wondering how such a widely discussed problem has attracted so little scientific study before this paper, that’s exactly why. Many have tried, but chose to say nothing rather than publish meaningless results, leaving the field open for authors with lower standards.
In our data-driven world, the claim that we don’t have a good way to study something quantitatively may sound shocking. The reality is even worse — in many cases we don’t even have the vocabulary to ask meaningful quantitative *questions* about complex socio-technical systems.
Consider the paper’s definition of radicalization: "YouTube’s algorithm [exposes users] to more extreme content than they would otherwise." Savvy readers are probably screaming: There is no "otherwise"! There is no YouTube without the algorithm! There is no neutral!
That’s the note on which I’d like to end: a plea to consider that the available quantitative methods can’t answer everything. And I want to thank the journalists who’ve been doing the next best thing — telling the stories of people led down a rabbit hole by YouTube’s algorithm.