Aristotelian physics may be flawed, but it has been unfairly discarded in favor of Newtonian physics, despite the latter not solving all of the open problems in the field. Clearly the way forward is a hybrid system, wherein aether obeys the conservation of momentum.
#AIDebate
-
Conference-decision-anticipation at an all-time high.
@iclr_conf I need that sweet sweet email notification!
-
@SchmidhuberAI talk at #NeurIPS2019. I managed to ask a question afterwards:
Q: Did you ever implement, or even attempt to implement, the "special case" where adversarial training is used to mimic a training dataset?
A: No.
Hopefully this helps clear up some misconceptions ;)
-
@OpenAI finally released a paper on their DOTA AI. I've given them a lot of crap over the years for not sharing details in a timely manner (and I still stand by most of that), but this sort of openness is still greatly appreciated. https://cdn.openai.com/dota-2.pdf
-
Is it too late to rebrand entropy regularization as "the free-will prior"?
#NeurIPS2019
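For anyone outside RL wondering what's being rebranded: entropy regularization, in its standard soft-RL form, just adds a policy-entropy bonus to the expected return (a textbook formulation, not anything specific to a paper here):

J(\pi) = \mathbb{E}_{\pi}\!\left[\sum_{t}\gamma^{t}\Big(r(s_t,a_t) + \alpha\,\mathcal{H}\big(\pi(\cdot\mid s_t)\big)\Big)\right],
\qquad
\mathcal{H}\big(\pi(\cdot\mid s)\big) = -\sum_{a}\pi(a\mid s)\log\pi(a\mid s)

-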
"now let's talk about consciousness" Can we not?pic.twitter.com/PZXlHcZbZA
-
I like London, but it'll be so nice to just forget about Brexit and think about nothing but AI for a week.
#NeurIPS2019 Day 1
-
Great tutorial at @NeurIPSConf. Turns out that "dataset shift" is really hard.
-
What's that? An excuse for posting about my beloved baby bear? Don't mind if I do! Bear on chair: https://twitter.com/tkasasagi/status/1199990912903864320
-
How is there not a paper on the @OpenAI DOTA AI yet? I remember asking @ilyasut about this two years ago (link to exact timestamp), and he claimed there would be at least an arXiv paper after 5v5 results (which they had in 2018 and concluded this April). https://vimeo.com/250399465#t=16m40s
-
Oh wow, there truly is an XKCD for everything! I had this in mind: https://youtu.be/Lrlro3YJ15o
-
@iclr_conf reviews are out! Good luck everyone! May your rebuttals fall on receptive ears, and your additional experiments be non-existent.
-
Feverishly working on preparing the tasks for an external release just in time for @NeurIPSConf. We hope these tasks represent an interesting challenge for the deep RL community. Excited to see what y'all can do with them! http://sites.google.com/corp/view/memory-tasks-suite/ n/n back to work time
-
Memory Recall Agent! A new agent that combines 1) an external memory, 2) a contrastive auxiliary loss, and 3) jumpy backpropagation for credit assignment. Importantly, all of these pieces were validated through over 10 ablations! 5/n
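The tweet doesn't spell out the exact loss, so purely as a sketch: a generic InfoNCE-style contrastive objective of the kind commonly used as an auxiliary loss, where each query embedding's positive is the matching key and the rest of the batch serves as negatives (the function name and shapes are illustrative, not from the paper).

import numpy as np

def info_nce_loss(queries, keys, temperature=0.1):
    # L2-normalize embeddings so the dot product is cosine similarity.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = (q @ k.T) / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()             # positives sit on the diagonal

-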
Tasks! In addition to a standard train/test split based on partitioning some variable (e.g. color), we also pick a scalar variable (e.g. size of room). We can thus train on some values and test on unseen values inside the range (interp) or outside of the range (extrap). 4/n
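As a concrete illustration of that split (the variable and the held-out values here are hypothetical, not the paper's): hold out scalar values strictly inside the training range for the interpolation test, and values outside it for the extrapolation test.

import numpy as np

rng = np.random.default_rng(0)
room_sizes = np.arange(4, 21)            # hypothetical scalar task variable

extrap_test = np.array([4, 5, 19, 20])   # values outside the training range
inner = np.setdiff1d(room_sizes, extrap_test)
interp_test = rng.choice(inner[1:-1], size=3, replace=False)  # unseen values inside the range
train = np.setdiff1d(inner, interp_test)

print("train:", train)
print("interp test:", np.sort(interp_test))
print("extrap test:", extrap_test)

-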
Results! 1) Some of these tasks are hard! Underfitting is still an issue in RL. 2) Extrapolation isn't impossible for deep RL agents, but it requires the right inductive biases and is far from solved. 3) Adding a contrastive loss to an external memory is a good thing to do. 3/n
-
Excited to announce our work on memory generalization in deep RL is out now! We created a suite of 13 tasks with variants to test interpolation and extrapolation. Our new MRA agent outperforms baselines, but these tasks remain an open challenge. https://arxiv.org/abs/1910.13406 1/n