Tweets
- Tweets (current page)
- Tweets & replies
- Media
Stanford NeuroAI Lab Retweeted
We introduce Dreamer, an RL agent that solves long-horizon tasks from images purely by latent imagination inside a world model. Dreamer improves over existing methods across 20 tasks. paper https://arxiv.org/pdf/1912.01603.pdf … code https://github.com/google-research/dreamer … Thread
pic.twitter.com/K5DnooVIUH
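Dreamer's central mechanism, rolling out imagined trajectories inside a learned latent dynamics model and scoring them without ever stepping the real environment, can be sketched in miniature. The 1-D latent state, linear dynamics, quadratic reward, and both policies below are invented stand-ins for illustration, not Dreamer's actual recurrent state-space model:

```python
# Toy sketch of "latent imagination": evaluate a policy by rolling out a
# learned dynamics model in latent space instead of the real environment.
# The linear dynamics and reward are illustrative stand-ins only.

def imagine_return(z0, policy, dynamics, reward, horizon=15, discount=0.99):
    """Accumulate discounted reward along an imagined latent rollout."""
    z, total = z0, 0.0
    for t in range(horizon):
        a = policy(z)                 # action chosen from the latent state
        total += (discount ** t) * reward(z, a)
        z = dynamics(z, a)            # next latent state, no env step needed
    return total

# Hypothetical 1-D example: the latent state decays toward 0 and reward is
# -z^2, so the policy that pushes z toward the origin imagines a higher return.
dynamics = lambda z, a: 0.9 * z + 0.1 * a
reward = lambda z, a: -z * z
push_home = lambda z: -z          # drive the state toward 0
do_nothing = lambda z: 0.0

assert imagine_return(5.0, push_home, dynamics, reward) > \
       imagine_return(5.0, do_nothing, dynamics, reward)
```

The real agent learns `dynamics` and `reward` from pixels and backpropagates value gradients through the imagined rollout; the point here is only that planning happens entirely in latent space.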
Stanford NeuroAI Lab Retweeted
.@SuryaGanguli & I are co-organizing the next @StanfordHAI Conference Apr 1, 2020—no joke!—on the topic Triangulating Intelligence: Melding Neuroscience, Psychology, and AI. Botvinick—@YejinChoinka—@chelseabfinn—@AudeOliva—Tenenbaum—@dyamins—save the date! https://hai.stanford.edu/events/triangulating-intelligence-melding-neuroscience-psychology-and-ai …
Stanford NeuroAI Lab Retweeted
Best Paper Award #MarrPrize @ICCV19: SinGAN: Learning a Generative Model from a Single Natural Image. Congratulations to @TamarRottShaham (@TechnionLive), @talidekel (@GoogleAI), Tomer Michaeli (@TechnionLive)! First 2 authors are women! @WiCVworkshop @WomeninSTEM @women_in_ai pic.twitter.com/nBstStRBp6
Stanford NeuroAI Lab Retweeted
Thank you #SfN19 for inviting me to give the talk, a return to where I began my science career after 18 years! Tomorrow’s #AI technology should be inspired and developed in collaboration with brain and cognitive sciences. https://twitter.com/Neurosci2019/status/1185599515811831809 …
Stanford NeuroAI Lab Retweeted
Looking forward to speaking at the #SfN19 Minisymposium "Artificial Intelligence and Neuroscience"! I'll be presenting our work on task-driven recurrent convolutional models of higher visual cortex dynamics. Monday Oct. 21, 9:55-10:15 am, Session 261, Room S406A. pic.twitter.com/DAueCxSIAP
Stanford NeuroAI Lab Retweeted
We've trained an AI system to solve the Rubik's Cube with a human-like robot hand. This is an unprecedented level of dexterity for a robot, and is hard even for humans to do. The system trains in an imperfect simulation and quickly adapts to reality: https://openai.com/blog/solving-rubiks-cube/ … pic.twitter.com/8lGhU2pPck
Stanford NeuroAI Lab Retweeted
Heterogeneous graph support is finally here! Many new models: GCMC, RGCN (for hetero), HAN, Metapath2vec. New DGL-KE package supports efficient training of TransE, ComplEx, DistMult. Look forward to new research ideas using the right tool! V0.4 release: https://github.com/dmlc/dgl/releases/tag/v0.4.0 … pic.twitter.com/tdzzfa8RgX
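A heterogeneous graph is simply a graph with typed nodes and typed edges, and metapath2vec's random walks follow a fixed sequence of edge types. A dependency-free sketch of that idea follows; the tiny author/paper graph and the metapath are made-up examples, and DGL's real heterograph API differs:

```python
import random

# Minimal heterogeneous graph: edges grouped by (src_type, relation, dst_type)
# triples, the same structure DGL builds heterographs from. The author/paper
# graph below is an invented example.
edges = {
    ("author", "writes", "paper"): [("a1", "p1"), ("a2", "p1"), ("a2", "p2")],
    ("paper", "written_by", "author"): [("p1", "a1"), ("p1", "a2"), ("p2", "a2")],
}

def metapath_walk(start, metapath, rng):
    """Random walk constrained to a fixed sequence of typed relations,
    as in metapath2vec. Returns the list of visited nodes."""
    walk, node = [start], start
    for rel in metapath:
        nbrs = [d for (s, d) in edges[rel] if s == node]
        if not nbrs:              # dead end: no edge of this type from here
            break
        node = rng.choice(nbrs)
        walk.append(node)
    return walk

rng = random.Random(0)
# Author -> paper -> author ("APA") metapath.
walk = metapath_walk("a1", [("author", "writes", "paper"),
                            ("paper", "written_by", "author")], rng)
assert walk[0] == "a1" and walk[1] == "p1" and len(walk) == 3
```

In metapath2vec the walks generated this way are fed to a skip-gram model to learn node embeddings; the constraint to a metapath is what keeps the walk semantically meaningful across node types.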
Stanford NeuroAI Lab Retweeted
A new work on structuring diverse semantics in 3D space that yielded the 3D Scene Graph! It’s showcased on the Gibson database by annotating the models with diverse semantics using a semi-automated method. https://twitter.com/ir0armeni/status/1181568361844445185 …
Stanford NeuroAI Lab Retweeted
The list of accepted papers at the @NeurIPSConf Graph Representation Learning Workshop 2019 is online! https://grlearning.github.io/papers/ (Camera-ready versions will follow later this month). Submission statistics / acceptance rates below.
Stanford NeuroAI Lab Retweeted
RPN is inspired by classical symbolic planning and modern model-based planning and aims to combine the best of both worlds. We show that RPN can achieve strong zero-shot generalization to multi-step tasks (~50 subgoals) with pixel inputs. (2/3) pic.twitter.com/aCpcxrBi0Z
Stanford NeuroAI Lab Retweeted
Really enjoyed the (non OpenAI) ICLR submission https://openreview.net/pdf?id=S1eZYeHFDS … that trained a transformer on symbolic math. The surprise: it beat Mathematica on symbolic integration and diff eq solving by a _very_ big margin!
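A sequence model can only "do math" on expressions serialized as token sequences, and prefix (Polish) notation is a standard way to flatten an expression tree without needing parentheses. A minimal sketch of that serialization step, with a made-up mini-grammar of operator names:

```python
# Serialize a nested (op, arg, arg) expression tree into prefix tokens,
# the kind of flat sequence a transformer over symbolic math consumes.
# The operator names ("+", "*", "pow") are an invented mini-grammar.

def to_prefix(expr):
    """Flatten a nested (op, left, right, ...) tuple into prefix tokens."""
    if isinstance(expr, tuple):
        op, *args = expr
        tokens = [op]
        for a in args:
            tokens += to_prefix(a)
        return tokens
    return [str(expr)]

# x^2 + 3*x  ->  ["+", "pow", "x", "2", "*", "3", "x"]
expr = ("+", ("pow", "x", "2"), ("*", "3", "x"))
assert to_prefix(expr) == ["+", "pow", "x", "2", "*", "3", "x"]
```

Because every operator has a fixed arity, the prefix sequence decodes back to a unique tree, which is what lets a plain seq2seq model map an integrand's token sequence to an antiderivative's token sequence.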
Stanford NeuroAI Lab Retweeted
Thank you Machine Learning community for getting #ICLR2020 off to such a great start. Congratulations to everyone who submitted and for pushing our science to new heights.
Read the papers: 2594 now online https://openreview.net/group?id=ICLR.cc/2020/Conference#all-submissions …
Next phase: Bidding, instructions on Friday. pic.twitter.com/jge0G27GJW
Stanford NeuroAI Lab Retweeted
We see more significant improvements from training data distribution search (data splits + oversampling factor ratios) than neural architecture search. The latter is so overrated :)
Stanford NeuroAI Lab Retweeted
What does deep learning bring to neuroscience? What is the role of theory in the age of deep learning? Answers (and more questions) in our workshop "Brain Against the Machine", Berlin, Sep 17&18 at #BernsteinConference. Full schedule: https://www.bernstein-network.de/en/bernstein-conference/2019/satellite-workshops/brain-against-the-machine-266b-and-now-you-do-what-they-told-ya-266b … @NNCN_Germany pic.twitter.com/pXTJ7I7PAB
Stanford NeuroAI Lab Retweeted
It's hard to scale meta-learning to long inner optimizations. We introduce iMAML, which meta-learns *without* differentiating through the inner optimization path using implicit differentiation. https://arxiv.org/abs/1909.04630 to appear @NeurIPSConf w/ @aravindr93 @ShamKakade6 @svlevine pic.twitter.com/fBznTaubgr
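The implicit-differentiation idea can be checked in one dimension: with an L2-regularized inner problem, the meta-gradient depends only on the inner solution and the inner loss's curvature there, not on the optimization path taken to reach it. The quadratic train/test losses below are toy stand-ins chosen so the inner solution has a closed form:

```python
# iMAML's trick in 1-D: differentiate through the *solution* of the
# regularized inner problem via the implicit function theorem, instead of
# unrolling inner gradient steps. Quadratic losses are illustrative only.

def inner_solution(theta, a, b, lam):
    """argmin_phi (a/2)(phi-b)^2 + (lam/2)(phi-theta)^2, in closed form."""
    return (a * b + lam * theta) / (a + lam)

def imaml_meta_grad(theta, a, b, c, d, lam):
    """Implicit meta-gradient of L_test(phi*) = (c/2)(phi*-d)^2 w.r.t. theta:
    (1 + H/lam)^-1 * dL_test/dphi, where H = a is the inner-loss curvature."""
    phi = inner_solution(theta, a, b, lam)
    g_test = c * (phi - d)          # gradient of the test loss at phi*
    return g_test / (1.0 + a / lam)

# Sanity check against a central finite difference of the true objective
# theta -> L_test(phi*(theta)):
theta, a, b, c, d, lam = 2.0, 3.0, 1.0, 2.0, 0.0, 5.0
f = lambda t: 0.5 * c * (inner_solution(t, a, b, lam) - d) ** 2
eps = 1e-5
fd = (f(theta + eps) - f(theta - eps)) / (2 * eps)
assert abs(imaml_meta_grad(theta, a, b, c, d, lam) - fd) < 1e-6
```

Nothing here records how `phi` was computed, which is exactly why the method scales to long inner optimizations: the memory cost of the meta-gradient is independent of the number of inner steps.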
Stanford NeuroAI Lab Retweeted
One of the coolest aspects of SPIRAL is to use computer programs (as opposed to games) as environments for RL agents. Another is to use a discriminator that provides rewards to the agent. https://twitter.com/TensorFlow/status/1162454023636684800 … pic.twitter.com/mEYC8RGPyu
Stanford NeuroAI Lab Retweeted
What sizes do these come in? I don’t want to overfit https://twitter.com/hardmaru/status/1161993563061673984 …
Stanford NeuroAI Lab Retweeted
Facebook AI researchers have released PHYRE, a new open benchmark for assessing an #AI system’s capacity for reasoning about the physical laws that govern real-world environments. https://ai.facebook.com/blog/phyre-a-new-ai-benchmark-for-physical-reasoning/ … pic.twitter.com/JGavwRcv5q
Stanford NeuroAI Lab Retweeted
Geoff Hinton: "One big challenge the community faces is that if you want to get a paper published in ML now it's got to have a table in it, with all these different data sets across the top, and all these diff methods along the side, and your method has to look like the best one"