Meta discussion impairs legibility: it makes it hard to work out what's being said, and to check the truth of claims. That's because meta isn't about the object-level claims. It moves focus to, e.g., people, their motives, and their effects on other people, whose motives in turn mix in.
-
-
Lulie Retweeted Julia Galef
Example of meta discussion impairing legibility: Here, it's complicated to figure out what actually happened — whether it was a misunderstanding, a difference in worldview, an error, stubbornness, etc. — and 'who was right' (if anyone). https://twitter.com/juliagalef/status/1091410380499210240?s=21
Lulie added,
Julia Galef @juliagalef: 1) I'm frustrated Steven Pinker won't admit an error in Enlightenment Now. Summary: - Pinker names Stuart Russell as an expert who's skeptical of AI risk - Someone points out that's exactly backwards; Russell is one of the main experts warning about AI risk - Pinker doubles down
-
And trying to find out whether someone is a reliable source has limited use — there are no reliable sources. The closest thing is: if you have a model of the kind of errors someone makes, you can use that to figure out what must've happened when they report something.
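The "model of someone's errors" move is just Bayesian inversion of a report. A minimal sketch with made-up numbers and a hypothetical reporter (nothing here comes from the thread itself):

```python
# Hedged sketch: invert a report through a model of the reporter's errors.
# All probabilities below are invented for illustration.

def posterior_event_given_report(prior, p_report_given_event, p_report_given_no_event):
    """P(event | reporter said it happened), via Bayes' rule and an error model."""
    p_report = prior * p_report_given_event + (1 - prior) * p_report_given_no_event
    return prior * p_report_given_event / p_report

# Suppose this reporter exaggerates: says "it happened" 90% of the time
# when it did, but also 30% of the time when it didn't.
p = posterior_event_given_report(prior=0.5,
                                 p_report_given_event=0.9,
                                 p_report_given_no_event=0.3)
print(round(p, 3))  # 0.75
```

Knowing the *pattern* of errors lets you extract information even from an unreliable source, which is the point being made above.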
-
Replying to @reasonisfun
David Manheim Retweeted Lulie
Not such limited use - it's exactly how Aumann agreement works in practice! In @abramdemski's Aumann game - https://docs.google.com/document/d/1gCKURs0Xdnb8PQS54rckS4CJUp8kCklKs2KKi7xDZdA/edit?usp=sharing - estimated probabilities sum to 1 after agreement IFF everyone converges on their estimate of everyone's calibrations. cc: @juliagalef https://twitter.com/reasonisfun/status/1101079007100108800
David Manheim added,
Lulie @reasonisfun: And trying to find out whether someone is a reliable source has limited use — there are no reliable sources. The closest thing is: if you have a model of the kind of errors someone makes, you can use that to figure out what must've happened when they report something.
-
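One way to make the calibration point concrete (a toy sketch, not the actual rules of the linked Aumann game doc): treat "converging on everyone's calibration" as agreeing on reliability weights, then pool stated probabilities by weighting their log-odds. Agents who assign the same weights land on the same pooled number; agents with different weights don't, so they fail to converge:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def pooled_estimate(estimates, weights):
    """Pool stated probabilities, weighting each person's log-odds by
    how well-calibrated you judge them to be (hypothetical scheme)."""
    total = sum(w * logit(p) for p, w in zip(estimates, weights))
    return sigmoid(total / sum(weights))

estimates = [0.8, 0.4]  # two people's stated probabilities (made up)

# Shared view of everyone's calibration -> everyone pools to the same number.
print(round(pooled_estimate(estimates, weights=[1.0, 1.0]), 3))  # ≈ 0.620

# Different views of who is reliable -> a different pooled number,
# i.e. the group "agrees to disagree".
print(round(pooled_estimate(estimates, weights=[2.0, 0.5]), 3))  # ≈ 0.737
```

On this toy model, disagreement about *calibration* (the weights), not about the raw estimates, is what blocks convergence, which matches the IFF claim above.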
Replying to @davidmanheim @reasonisfun and
The game was fantastic at building intuition around how and why our estimates do or do not converge. If extrapolated to large groups, it makes sense that you'd find cliques with locally consistent models, not converging globally because of failure to converge re: miscalibration.
-
Replying to @davidmanheim @reasonisfun and
David Manheim Retweeted Julia Galef
Specifically, it's exactly this class of failure we should expect - discounting parts of arguments where you think there's bias. IIRC Pinker himself described his strong priors about what classes of prediction failure to expect from "AI-risk alarmists" due to cognitive failures. https://twitter.com/juliagalef/status/1091411673758367744
David Manheim added,
Julia Galef @juliagalef: 4) But Russell says in that post that AI could wipe out humanity & we need ppl working on AI safety. His reasons for optimism at the end don't negate what he believes is a serious risk. I don't see how you could read this & call him a skeptic of AI risk. https://www.edge.org/response-detail/26157 pic.twitter.com/3od2wUYro0
-
Replying to @davidmanheim @reasonisfun and
That's why Pinker says Russell's arguments amount to agreeing with risk-minimalists. In this case, Aumann agreement is based on calibration about future events - there's no new evidence to update models of failure modes until it's too late to matter.
-
Replying to @davidmanheim @reasonisfun and
Anyways, I'm arguing that it's plausible that Pinker is evaluating Russell's arguments rationally, and - fully rationally - doesn't share your/our model of which failures and miscalibrations are in play, so doesn't converge on object-level estimates.
-
I don't think it's plausible that someone can honestly refer to a man who has built his (current) career around warning about AI risk as "skeptical of AI risk."
-
It is if the context was, e.g., comparing stronger to less strong views. "Risk" could mean 'there's some chance AI will do a lot of harm, as humans do', or 'massive harm is imminent & unstoppable', etc. I haven't read enough of the context to tell! Which was my point re: legibility.
-
yes, meta discussion is bad, but pinker made it necessary by transgressing against the laws of discourse, and is therefore to blame