The more I follow academic debates, the more I think over-politeness harms our ability to converge on the truth. A lot of papers are weak enough they constitute ~zero evidence for their claim. But critics just politely refer to "questions" or "debate" around the papers... 1/n
... rather than "Uh, guys, we should not be updating on those papers, like at all." So we keep citing them, and treating them as evidence, bc we're socially obligated to. And in practice we end up using "# of papers claiming X" as a proxy for "strength of evidence for X" (2/n)
CLARIFICATIONS:
- I'm complaining about over-politeness to *ideas*, not to people
- Criticizing papers does cause some adjustment of belief away from their claims. But not as much as it would if the critics were allowed to say "c'mon, this paper is zero evidence for X" (3/n)
I'm not sure this is about ppl being too nice. I think it's more about ppl feeling obligated to follow certain social rules about evidence. Like, if a paper is widely cited, you have to treat it as if it provides at least some evidence for X, even if you don't think it does (4/n)
... bc it would be intellectually arrogant of you to claim otherwise, or something? So ppl will say "(Smith 1990) showed X. However, others have noted [flaws in the method]; this remains an open question" ...and the flaws are CLEARLY FATAL, but they stop short of saying so (5/n)
Replying to @juliagalef
Much of this depends on an insider/outsider dichotomy, at least in the fields I know well. Insiders will know what work to ignore (& may occasionally tell others), without needing to waste time & create enemies by explicitly saying so.
Replying to @michael_nielsen @juliagalef
Internal to the field this seems to me like a pretty good (albeit imperfect) solution. But it's not great for outsiders, who often can't evaluate easily, and it may lead to the public having a very inaccurate view.
Replying to @michael_nielsen @juliagalef
It also relies on those fields having pretty good ways of determining whether something is correct. Eg, I ignored a lot of quantum information papers that had obviously wrong (or not even wrong) mathematics in them. In some fields it's much harder to say what "progress" is at all
Replying to @michael_nielsen @juliagalef
This is one of the things that seems so worrisome about social psychology, where it seems almost the entire field is reasonably called into question by the replication crisis. Errors don't really hurt a field, _provided_ there are reliable ways of id'ing errors...
Replying to @michael_nielsen @juliagalef
Yup, exactly. Eg in the academic distributed systems community there is a private discussion, never aired in public, about which things are wrong and not worth updating on. People who aren't in the insider community can get the wrong impression. It's sad, but why risk the blowback?
As to "why risk the blowback" I think there's a reasonable question: how much does this hurt or help the field collectively? My guess is that in quantum info (& maybe distributed systems) there is essentially no mechanism for public criticism that wouldn't slow the field.
But in fields where it's less easy to tell how reliable a result is, there might be more value in norms supporting public criticism.
Replying to @michael_nielsen @juliagalef
This is an excellent point. At worst we lose some engineer years and investment dollars. They lose entire generations. I agree with you here.