There seems to be a consensus now that exposure to ideas, arguments, and memes on YouTube, Facebook, Google Search, and Twitter must be carefully manipulated to make sure people don't elect the wrong politician or have the wrong idea about gay marriage. But why stop there?
-
Also, what is the metric for measuring the ethical benefits of manipulating the mental states of social media users away from self-directed exploration? If it is about quality of life/reduction of suffering, should we not be shadowbanning/delisting any praise of unhealthy food?
-
If the individual user has no agency over forming correct beliefs when exposed to uncurated media sources, could we perhaps use machine learning to identify the delta between what a user thinks and what they should be thinking, and play them the right feed to correct their beliefs?
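Taken literally, the "belief delta" proposal is just a recommender objective. Here is a deliberately toy Python sketch of that loop; everything in it is an invented assumption (the target-belief table, the stance scores, and names like TARGET_BELIEFS and corrective_feed), not any platform's real model:

```python
from dataclasses import dataclass

# What users "should" think, per topic, in [-1, 1]. Entirely invented.
TARGET_BELIEFS = {"topic_a": 1.0, "topic_b": -1.0}

@dataclass
class FeedItem:
    text: str
    stance: dict  # hypothetical per-topic stance scores in [-1, 1]

def belief_delta(user_beliefs: dict) -> dict:
    """Gap between the user's estimated beliefs and the target beliefs."""
    return {t: TARGET_BELIEFS[t] - user_beliefs.get(t, 0.0) for t in TARGET_BELIEFS}

def corrective_feed(user_beliefs: dict, items: list, k: int = 2) -> list:
    """Rank items by how strongly they push the user toward the target."""
    delta = belief_delta(user_beliefs)
    def push(item):
        # An item scores high when its stance points the same way as the
        # remaining delta on each topic.
        return sum(item.stance.get(t, 0.0) * d for t, d in delta.items())
    return sorted(items, key=push, reverse=True)[:k]

if __name__ == "__main__":
    user = {"topic_a": -0.5, "topic_b": 0.4}  # "wrong" on both topics
    items = [
        FeedItem("pro-topic_a explainer", {"topic_a": 0.9}),
        FeedItem("anti-topic_b takedown", {"topic_b": -0.8}),
        FeedItem("cat video", {}),
    ]
    for item in corrective_feed(user, items):
        print(item.text)  # the "corrective" items outrank the cat video
```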
-
Twitter is already modeling whom you follow and who reads/likes your posts to detect whether you are likely to hold unwelcome opinions, and it curbs your influence by reducing how often your posts show up in your followers' feeds. Why not regulate belief formation directly?
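A minimal hypothetical sketch of the mechanism this tweet alleges: infer an "unwelcome opinion" score from a user's follow/liker graph, then scale down how often their posts surface. The ACCOUNT_RISK table, the neighbor averaging, and the visibility formula are all assumptions for illustration; this is not Twitter's actual ranking code.

```python
# Hypothetical per-account risk scores in [0, 1], e.g. from some classifier.
ACCOUNT_RISK = {"alice": 0.1, "bob": 0.8, "carol": 0.6}

def inferred_risk(follows: list, likers: list) -> float:
    """Average the risk of the accounts a user follows and is liked by."""
    neighbors = follows + likers
    if not neighbors:
        return 0.0
    return sum(ACCOUNT_RISK.get(name, 0.0) for name in neighbors) / len(neighbors)

def feed_visibility(base_rate: float, follows: list, likers: list) -> float:
    """Fraction of followers' feeds the user's posts reach after curbing."""
    return base_rate * (1.0 - inferred_risk(follows, likers))

if __name__ == "__main__":
    # A user followed/liked mostly by high-risk accounts gets quietly curbed.
    print(feed_visibility(0.9, follows=["bob", "carol"], likers=["bob"]))  # ~0.24
```

Note the design point the tweet is driving at: no individual post is ever judged; guilt is inferred by association with the social graph, so the user never sees anything to appeal.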
-
[Tweet unavailable]
-
It sometimes seems to me that the people most attracted to regulating ethical questions are those least intellectually equipped to do so, even within Google. It is related to the problem that university and funding administrators (= meta-scientists) are usually too stupid to be scientists.