Conversation

Hmm. Vaguely interesting, but the whole thread rests on human brains being an existence proof of general intelligences (GIs), and hence on the well-posedness of the AGI question. I think I’ve gone hardline on the idea that humans are not in fact GIs, and that the very idea of a GI is bogus.
Quote Tweet
Why doesn’t a good negative history of attempts to get to Artificial General Intelligence (AGI) exist? What is it about AGI that makes such a treatment not only hard to find, but hard to create? In this thread, I’ll propose reasons for 1) hard to find, and 2) hard to create
Replying to
Sure, I am inclined to agree with this. The human claim to "I" in AGI is uncertain, and the human claim to "G" is even more so. If so, though, just mentally replace "AGI" in the thread with "Artificial systems that are roughly as generally intelligent as people are"
Replying to
Been chasing down related thoughts recently. I think the AGI frame should basically be ignored from here on out rather than given an overwrought 8-fold postmortem. It now does more harm than good. Need to move on.
Quote Tweet
I hadn’t seen this critique of superintelligence before. Interesting. It lands roughly where I did but via a different route (his term is much cleverer, “AI cosplay”). Ht @Aelkus idlewords.com/talks/superint
Replying to
100% agree. If you study neuroscience you very quickly learn that brains (like everything in biology) are optimized solely for the organism’s survival (and not especially efficiently or elegantly). Any “generality” achieved is only due to the diversity of the organism’s environment.