It’s about human understanding, to be sure, but I take it to be showing that rule-following (AI) is insufficient to produce it (i.e., understanding such as humans have, or AGI).
Replying to @Plato4Now @NegarestaniReza
You could say it’s an argument against GOFAI, and in favor of machine learning
Replying to @SimonDeDeo @NegarestaniReza
I’m almost with you. I’d say it gives an argument in favor of machine “learning,” while showing that such learning could not be understanding.
PS the learning that they do these days is super underdetermined—there’s no teacher.
Replying to @SimonDeDeo @NegarestaniReza
Right. I don’t think understanding requires a teacher. If it did, that would preclude (scientific) advancement. The problem with machine “learning” is not that there’s no teacher but that there’s no understanding.
I think machine “learning” is like the way a virus “learns” to infect a host. When it succeeds, and accomplishes the task, it does so through a complex mechanism, but not by understanding.
Replying to @Plato4Now @NegarestaniReza
What naturalized epistemology is there except one based in evolution?
Replying to @SimonDeDeo @Plato4Now
You can even go further without risking pancomputationalism and replace the term evolution with computation, as Samson Abramsky does.
Replying to @NegarestaniReza @Plato4Now
So we’re all functionalists in the end, I guess? And just have a scientific problem (and maybe an identification one).
I think so, but we should ask exactly what kind of functionalists we are.