I asked perhaps an obvious question: Why doesn't WhatsApp just remove misinformation? https://www.newstatesman.com/science-tech/coronavirus/2020/04/whatsapp-harder-forward-messages-why-not-remove-misinformation
Yeah, I mean, this is definitely part of it, and I understand that. But I would argue that when you're becoming one of the biggest sources of misinformation, the moral argument would be to look at those messages and see whether they're literally conspiracy theories.
I totally get the business elements, but I'm floating my blue-sky, morally driven opinion.
It might be, if you take a very British-centric view of the app. It's also used by liberation groups organising in places where, for example, it is illegal or ill-advised to be out as LGBT+. Or for business reasons, as you mentioned in another tweet.
New conversation
I don't agree that you should break open all end-to-end encryption services, which is where that train of thought leads. This is a closed service, not an open social platform. I do wonder if something could be done to blacklist links to certain sites that are known to be untrusted.
Yeah, I get that argument. I just think the way this app is used makes it *like* a social media platform for boomers; that's what it's become. And when it's such a hotspot for misinformation, I would argue there's a moral obligation to break that encryption to remove conspiracy theories.
End of conversation
New conversation
So you're quite happy for a private company to a) look at your messages, and b) decide whether or not you can send them? The question might be obvious, but the answer is blindingly obvious too. It's not on Facebook to police what is and isn't misinformation, nor should it be.
Yes, that is what I'm arguing! And I acknowledge the free-speech backlash in the piece.