Last year, I felt it was suboptimal for tech platforms to outsource their governance issues, because giving permit/deny rights on a material portion of all human utterances to parties which rhyme with governance seemed unwise. This year, it feels *extremely* suboptimal.
-
While I think your model is generally correct, it very much ignores the actions of agents who coordinate and attempt to exploit this system - often by getting jobs at SFBA companies on "trust and safety" teams.
-
The dissenting voice who says "the WHO seems very wrong frequently, so let's reconsider this policy" didn't get hired by trust and safety. They read his Twitter before hiring to weed him out, and then recommended their friend from $organized_group_of_agents instead.