Usually it's task specific, because we're not quite at baby AGI level yet. But this is about as close a stab at general intelligence as any: https://youtu.be/Ih8EfvOzBOY
Replying to @higherOrderNet @Junk_lzn and
I believe we'll see real progress when someone figures out how to use reinforcement learning (what was shown in the video) with attention-based language models like GPT-2.
Replying to @higherOrderNet @Junk_lzn and
Btw this is the same flavor of deep learning that Lee Sedol lost to.
Replying to @higherOrderNet @EmojiPan and
what is your belief in seeing real progress based off of?
Deep learning wasn't viable before ImageNet 2012; now it surpasses human-level performance at various tasks, including medical imagery and the analysis of legal contracts.
Replying to @higherOrderNet @EmojiPan and
I was trying to ask what makes attention-based reinforcement learning so special
Ah, sorry. This is a pet theory of mine. Reinforcement learning schemes are characterized by the model being embodied in an agent which interacts with an environment. This has led to some of the most AGI-style models, able to act decisively and strategically.
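(The agent-environment loop described above is the standard skeleton of reinforcement learning. A minimal self-contained sketch, not from the thread; the `GridWorld` environment and `run_episode` helper are illustrative inventions:)

```python
class GridWorld:
    """Toy 1-D environment: the agent starts at position 0 and must reach 4."""
    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action is -1 (left) or +1 (right); reward 1.0 only on reaching the goal
        self.pos = max(0, min(4, self.pos + action))
        done = self.pos == 4
        return self.pos, (1.0 if done else 0.0), done


def run_episode(env, policy, max_steps=50):
    """The agent-environment interaction loop at the heart of RL:
    observe state -> choose action -> receive reward and next state."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward


# A trivial policy that always moves right reaches the goal.
print(run_episode(GridWorld(), lambda s: 1))  # prints 1.0
```

Real RL replaces the hard-coded policy with one learned from the reward signal, but the embodiment (the loop itself) is what distinguishes the paradigm.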
Replying to @higherOrderNet @Junk_lzn and
On the other hand, language models like GPT-2 or BERT have demonstrated some generalizing tendencies. Ultimately, though, language models exist in a world made entirely of language, with no concept of anything outside strings of text.
Replying to @higherOrderNet @Junk_lzn and
Emm, what does that mean? So we have natural languages, then extant formal languages and then languages which are based on the generalization of pragmatic/computational interaction paradigm? How do these coalesce?
Replying to @NegarestaniReza @Junk_lzn and
Oh so I'm talking specifically about transformers, a new class of deep learning model for natural language processing.
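(The "attention" that transformers are built on reduces to one small operation. A minimal NumPy sketch, not from the thread, using toy 2-dimensional token vectors:)

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """The core transformer operation: softmax(Q K^T / sqrt(d)) V.
    Each output row is a weighted average of the value rows, with
    weights given by query-key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V, weights

# Three toy token vectors standing in for embedded words.
Q = K = V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out, w = scaled_dot_product_attention(Q, K, V)
print(w.shape)  # prints (3, 3) -- every token attends to every token
```

Because every token attends to every other token in one step, transformers capture long-range dependencies in text without recurrence, which is what made GPT-2 and BERT practical.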
Natural language processing is the worst. Check Oxford computer science dept. What is important is to first capture the complexity of natural language and then formally generalize it.