There seems to be a consensus now that exposure to ideas, arguments and memes on YouTube, Facebook, Google Search and Twitter must be carefully manipulated to make sure people don't elect the wrong politician or have the wrong idea about gay marriage. But why stop there?
If the individual user has no agency over forming the correct beliefs when exposed to uncurated media sources, could we perhaps use machine learning to identify the delta between what a user thinks and what they should be thinking, and play them the right feed to correct their beliefs?
Twitter is already modeling whom you follow and who reads/likes your posts to detect if you are likely to hold unwelcome opinions, and curbs your influence by reducing how often your posts will show up in the feeds of your followers. Why not directly regulate belief formation?
Also, what's the metric of apparent consensus and what's the minimum value to legitimately draw conclusions from such premises?