Imitating the human brain is not more likely to result in AGI than mathematical approaches, because human brains are not generally intelligent after all — Ben Goertzel, #AGI2019
I don't think Ben is naive. He sometimes throws out a very tentative idea on impulse; he isn't very invested in these novel thoughts, so he lightly revises them later, and perhaps he doesn't realize that others take them more seriously than he does.
-
I don't know Ben; that's why I'm just discussing the ideas. When does "a variety of narrow AI" become enough to generate AGI? How many more games do we need these systems to "learn" to play? When will the tasks be general enough? These are old questions, and I can't see any new answers.
-
My take-away lesson was that the new answers take the form: "for every question to be answered and every challenge posed, we can design a narrow AGI."
-
New conversation
-
On the same note, there's no need to take my opinion seriously; I'm no AI expert ;) I'd truly enjoy being optimistic, but based on the vision presented in that post (I don't know his work in detail), my honest opinion is simply skepticism. I wish him luck, though!
