When will humanity create convincing AGI (artificial general intelligence)?
Replying to @bob_burrough
Bob, great question. Never, to be exact. It is chasing something that is not a problem, and it is why folks get lost with Turing tests and AGI in general. We don't fully understand human intelligence. AGI supposes that we do, and that a human can answer any question; they can't. Thus, a dead end.
Replying to @BrianRoemmele @bob_burrough
There was a nice discussion about that a month ago, and the creator of the term AGI stated that the original definition was roughly "intelligence similar to humans." From a purely computational perspective, current estimates give ~10 years; knowing that we will find more nuances, maybe 16-25 years?
We can't define higher-level intelligence, but we can define the neural models that produce basal cognitive function; once you generate that, higher-level intelligence is built automatically. AGI can only be reached if you can define it correctly.
Folks, great insights. The problem with the concept of AGI is that it is used as a blunt instrument to convince folks that fully useful conversations with computers are "decades away." AGI is not needed. What is needed is HyperContext human protocols for work-to-be-done.
Thus we are aiming for the wrong endpoint. The #VoiceFirst systems I build are, from the ground up, dialogue and conversation based, not Q&A. I use ~14,000 human protocols to anticipate conversational interaction. Not designed to beat a Turing test, but better than thumbs.
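The thread never shows what a "human protocol" looks like in practice. As a purely hypothetical sketch of the idea (anticipating a work-to-be-done interaction and driving a multi-turn dialogue, rather than answering one question), it might look something like this; the `Protocol` class, intents, cue phrases, and slots below are all my own illustration, not Brian's actual system:

```python
# Hypothetical sketch: protocol-driven dialogue, not Q&A.
# Each "protocol" anticipates one work-to-be-done intent and drives
# a multi-turn exchange, gathering the context it still needs.
from dataclasses import dataclass, field

@dataclass
class Protocol:
    intent: str            # the work to be done, e.g. "book_table"
    triggers: list[str]    # cue phrases that activate this protocol
    slots: list[str]       # context the dialogue must gather
    filled: dict = field(default_factory=dict)

    def matches(self, utterance: str) -> bool:
        text = utterance.lower()
        return any(cue in text for cue in self.triggers)

    def next_prompt(self):
        # Ask for the first still-missing slot; None means the protocol is done.
        for slot in self.slots:
            if slot not in self.filled:
                return f"What {slot} would you like?"
        return None

protocols = [
    Protocol("book_table", ["table", "reservation"], ["time", "party size"]),
    Protocol("order_coffee", ["coffee", "latte"], ["size", "milk"]),
]

def route(utterance: str):
    # Pick the first protocol whose cue phrases appear in the utterance.
    return next((p for p in protocols if p.matches(utterance)), None)

p = route("Can I get a reservation tonight?")
print(p.intent)         # book_table
print(p.next_prompt())  # What time would you like?
```

The point of the design is that the system leads the conversation toward completing the task, which is what makes it dialogue-based rather than a single-shot question answerer.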
World model comes first for me. Language adoption is almost trivial once you build the world model.