I like how Twitter couldn't even manage to be honest. Like it did not say, "click here to see replies from users that we decided are bad people". It made a claim as to the potentially offensive nature of the _content_, even though there was nothing offensive about it.
Replying to @cmuratori @notegone
It may just be a first-pass implementation. Frankly, with all the toxicity on Twitter and the (well deserved) criticism they are receiving for allowing it, I can see how they'd rather have a false positive than miss offensive content.
Twitter could easily have provided me with a checkbox I could check to opt out of their literally-100%-erroneous-in-my-experience "content warnings". I have had to click that "show me" box hundreds of times and never once did I end up offended.
Replying to @cmuratori @notegone
I bet Twitter has reasons (the analytics? backlash in the media?) that warrant erring on the safe side in the general case. I'd rather assume your experience is uncommon because your tweets are mostly technical, rather than accuse the company of acting stupid ;)
Who accused them of "acting stupid"? I accused them of dishonesty. You can believe the dishonesty is intentional, or due to incompetence, but you can't believe "this was a competent company acting honestly", because that behavior by definition can't produce the observed results.
They could have behaved honestly (label the tweet as "this comes from a user we have flagged"), or they could have behaved competently (actually review tweets and label them when they are offensive, or give me the option of opting out of their "algorithm").
Replying to @cmuratori @notegone
Incompetence or dishonesty? How about priorities? I'd think that making the tweets of people who are only occasionally offensive more visible isn't the biggest fish to fry for them.
Just so I understand you correctly: by "priorities", you are saying that _neither_ changing the message text to be accurate nor adding an opt-out checkbox to the preferences was a high enough "priority" to fit inside an annual R&D budget of around eight hundred million dollars?
Replying to @cmuratori @notegone
I can't quantify it in dollars. I'm just saying that Twitter is known for its toxicity, botting, and overall enabling of bad things. Their priorities are perhaps to address that, while your concern runs in the opposite direction. Why allow an opt-out that undermines their own shadowban?
You didn't answer my question. It seems you do believe Twitter is acting dishonestly, but that they have good reason to do so. If that's true, then you agree with me, so great :) If you don't agree with me, please be clearer about how they are being "honest" here.
As you can see from the thread if you reread it, at no time did I suggest Twitter was acting against their own best interests. I simply said they were either dishonest or incompetent. As far as I can tell, you're not actually arguing with that.
I mean, to simplify the discussion, can you come up with a plausible explanation for how the concept of a "shadowban" can ever be honest in the first place? It is inherently an attempt to lie to your user base about what is happening, that's why it's a shadowban instead of a ban.
Replying to @cmuratori @notegone
It is similar to isolating detected cheaters so they only get matched against other cheaters. Informing them would only let them know that they need to create another throwaway account. Judge for yourself whether this is "dishonest"; IMO they exempted themselves from honest treatment.