Identity communities often form based on outsider understandings of fields on the boundary of science and pseudoscience. When I asserted recently that fMRI stuff is largely nonsense, I got many outraged responses, apparently mostly from psychiatrists. https://twitter.com/EikoFried/status/1141015324474712071
-
The inferential path between the fMRI instrument and anything meaningful is extraordinarily long, complex, and tenuous. Just at the front end it involves several stages of statistically torturing the distorted and noisy data to get some stable signal out of it.
-
I would guess few psychiatrists can follow the details of fMRI data-processing methods, so their faith in it has another basis. Confronted with the extreme nebulosity of human mental dysfunction, having SOME authoritative knowledge source must be reassuring?
-
This tweet thread was prompted by the analogy with the upset responses I get whenever I say “deep learning stuff is mostly nonsense.” Those seem to come mostly not from actual AI researchers, but from AI fans, whose personal and non-professional community identities depend on belief in AI progress.
-
In my work, I treat deep learning as a legitimate engineering tool, divorced from any claims about "intelligence" or "AI." It may be nothing but a powerful curve-fitter, but it turns out you can solve many hard engineering problems with high-dimensional curve-fitting.
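To make the “high-dimensional curve-fitting” framing concrete, here is a minimal sketch (my own illustrative example, not anything from the thread): a small multilayer perceptron fit to noisy samples of an arbitrary nonlinear function of a 10-dimensional input, using scikit-learn. The target function, dimensions, and hyperparameters are assumptions chosen only to show the idea.

```python
# A small multilayer perceptron used purely as a curve-fitter:
# learn y = f(x) from noisy samples of a nonlinear function.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "engineering" data: 10-dimensional inputs, scalar target.
X = rng.uniform(-1, 1, size=(2000, 10))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=2000)

# Two hidden layers of 64 units -- a tiny "deep" network, no claims
# about intelligence, just function approximation.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])

# How well the fitted curve generalizes to held-out samples.
print("held-out R^2:", model.score(X[1500:], y[1500:]))
```

Nothing here depends on the network “understanding” anything; it is an expressive function family plus an optimizer, which is often exactly what an engineering problem needs.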