Call me crazy, but I’m much more scared of AI than I am of climate change 
I think the concept of Superintelligence is logically flawed. All the uniquely human attributes we call "intelligence" reduce to a single functionality: creating new explanatory knowledge. AGI will have it, too. But we will be able to understand its explanations. And it will be fallible, just as we are.
E.g., @elonmusk and @samharris can talk about "the control problem" while also worrying about the AI's "open-ended utility function". But isn't this a simple contradiction? An AGI's values will depend on its moral knowledge, which it will create as we do. It'll be a person.
How likely do you think it is that the model of intelligence and minds this scenario requires is actually correct?
AGI isn't even a coherent concept as of now, much less a practical reality. Moreover, intelligence isn't what's fearsome; motives are. And AI of any form doesn't have motives.
Human understanding relies on inductive leaps. Computers are purely deductive machines. How do you get inductive leaps from deduction?
Induction doesn't exist. It's a myth. Knowledge is conjecture. See Popper, Conjectures and Refutations.