So this @pewresearch study of a bunch of random walks via the YouTube recommender is interesting, but flawed: http://www.pewinternet.org/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons/
They took seed videos and followed paths of recommendations from them to see what tendencies the YouTube recommender might encourage, and they found that these walks tended to lead to ever-more-popular videos.
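The setup can be sketched roughly like this (my reconstruction for illustration, not Pew's actual code; `get_recommendations` is a hypothetical stand-in for whatever call returns a video's recommendations):

```python
import random

def random_walk(seed_video, get_recommendations, steps=5):
    """Follow a chain of recommendations from a seed video,
    picking one recommended video at random at each step."""
    path = [seed_video]
    current = seed_video
    for _ in range(steps):
        recs = get_recommendations(current)
        if not recs:
            break  # dead end: no recommendations returned
        current = random.choice(recs)
        path.append(current)
    return path
```

Run many such walks from many seeds, record the view counts along each path, and you get the study's core measurement.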
The implicit takeaway here is that, counter to arguments that folks like @zeynep have made, the recommender is biased toward the popular, not the fringe. But this research doesn't show that!
What it shows is that, for a brand new user, who makes five interactions with the service, the recommended videos become progressively more popular (and, for some reason, longer). What assumptions are in here?
Primarily, that the YouTube API recommendations are the same as the up-next auto-play recommendations. This is a bad assumption, as the authors note, because YouTube recommendations are *personalized*—the recommendations aren't just contingent on the video, but on your history!
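To make that distinction concrete, here's a toy contrast (entirely illustrative, with made-up fields; nothing here is YouTube's actual logic): a video-only recommender returns the same list for everyone, while a personalized one re-ranks those same candidates by the viewer's history.

```python
def related(video, catalog):
    """Video-only recommendation: same output for every viewer."""
    return [v for v in catalog
            if v is not video and set(v["tags"]) & set(video["tags"])]

def up_next(video, history, catalog):
    """Personalized recommendation: re-rank the same candidates
    by tag overlap with what the viewer has already watched."""
    seen_tags = {t for w in history for t in w["tags"]}
    return sorted(related(video, catalog),
                  key=lambda v: len(seen_tags & set(v["tags"])),
                  reverse=True)
```

Two viewers watching the same video get the same `related` list but potentially very different `up_next` orderings.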
They try to get out of this issue by suggesting that the API recommendations represent a "baseline viewer," like if you always used incognito mode. The implication is that personalization would just skew you away from that baseline a bit—that the baseline is a sort of average.
But that's not how recommender systems work! If you're a brand new user, you pose the "cold start problem"—without data, data-driven recommendation doesn't work. So, if you're a recommender system, you cope by trying to find stuff this mystery person is likely to like.
What is a random person likely to like? Popular shit. (That's what it means to be popular!) So, a tendency to popularity is *exactly* what you'd expect a recommender to have, given a mystery user with no history. But if you are logged in or have an IP address, this ain't you.
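That coping strategy can be sketched as a simple fallback (a toy illustration with made-up fields, not any real system's code): with no history to condition on, recommend by popularity; with history, personalize.

```python
def recommend(history, catalog, k=2):
    """Cold start: with no history to condition on, the safest
    guess is whatever is popular. With history, personalize."""
    if not history:
        return sorted(catalog, key=lambda v: v["views"], reverse=True)[:k]
    seen_tags = {t for w in history for t in w["tags"]}
    return sorted(catalog,
                  key=lambda v: len(seen_tags & set(v["tags"])),
                  reverse=True)[:k]
```

A bot with an empty history only ever exercises the first branch—which is exactly why its walks drift toward the popular.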
(I think the survey work is more reliable, and the bias toward *length* is a weird and interesting finding, probably the result of length correlating with something else. But experimental studies of algorithms often have exactly this kind of problem. Your bots aren't real users!)
For more rain on your parade of interface experiments on recommender systems, check out this old piece of mine: https://nick-seaver.squarespace.com/s/seaverMiT8.pdf (and, uh, #cscw2018, I guess?)
Replying to @npseaver
It's a hard space to research without access to the data that only the companies have and are keeping opaque! What you say is true, but also "popular stuff" and "more extreme/hateful" aren't in contradiction. As they note, recommended stuff gets more popular over time.
Replying to @zeynep
Yeah, but the main issue here is that the "baseline viewer" they construct is not normal, but in fact unusual in a very specific way that usually invokes atypical responses from recommender systems.
Replying to @npseaver
Yes, you cannot start from zero (or stay close to it) to understand how it behaves in the wild. On the other hand, it is tough, and I'm glad there are these efforts to try to get at it.