This thread by @fchollet, applied to Bayesianism, is the basic counterargument to the last ten years of “AGI as existential threat”. https://twitter.com/fchollet/status/1010988618993655808
The question whether something is science fiction is entirely orthogonal to whether it is possible.
-
The confusion of "possible to imagine" with "possible" seems to be at the heart of the claims you're making. No such algorithm or mechanism is remotely capable of the things you're suggesting (e.g., modeling the complexity and functionality of a mind).
-
I think that this is a nontrivial claim to make, even though most people would subscribe to it (and I share the intuition). But if you are concerned about AGI dangers, I don’t think it is good enough to make you feel safe.