Tweets
Roman Rumin retweeted
Outstanding talk by @blaiseaguera on the future of #MachineLearning. Most definitely unlocking secrets to how our brains work. https://slideslive.com/38921748
"Transformer-XH: Multi-hop question answering with eXtra Hop attention" is an interesting work.
#iclr2020 https://openreview.net/forum?id=r1eIiCNYwS
Roman Rumin retweeted
How does deep learning perform DEEP learning? Microsoft and CMU researchers establish a principle called "backward feature correction" and explain how very deep neural networks can actually perform DEEP hierarchical learning efficiently: https://aka.ms/AA70ptc
@ZeyuanAllenZhu
Roman Rumin retweeted
A simpler, flatter neural network, closer to actual brain architecture, can produce robust performance compared to deeper, more complex networks. https://twitter.com/mcgovernmit/status/1217891816848154624
Also, NNs can dramatically reduce the cost of producing chips, on the assumption that a NN that generalizes well can also adapt to manufacturing-process imperfections. Good neural networks would likewise be able to solve synchronization issues themselves. Etc. 3/3
Also, we should not worry about some noise. The NN itself should be general enough to manage noise; the noise will force generalization. There is also no need for RAM at all: it is cheaper to use several chips without RAM than one chip with energy-hungry RAM. 2/2
Excellent! I am also a fan of the analog approach. There is no need for analog-to-digital conversions if we make 1-bit precision chips, and use huge gated buses instead of addressing. If the chip is cold enough, then several or more layers of connectivity can be made. 1/2 https://twitter.com/SongHan_MIT/status/1102732127383310336
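To make the 1-bit point in this thread concrete, here is a minimal sketch (my own illustration, not from the thread): with weights quantized to {-1, +1}, a layer's multiply-accumulate collapses into signed additions — the kind of operation an analog front end can feed without a precise analog-to-digital conversion stage.

```python
def binarize(ws):
    """Map each real-valued weight to -1.0 or +1.0 by sign (1-bit precision)."""
    return [1.0 if w >= 0 else -1.0 for w in ws]

def binary_dot(xs, ws):
    """Dot product against 1-bit weights: only additions and sign flips,
    no general multiplications needed."""
    return sum(x if w >= 0 else -x for x, w in zip(xs, ws))

weights = [0.7, -0.2, 0.05, -1.3]
inputs = [1.0, 2.0, 3.0, 4.0]
print(binarize(weights))            # [1.0, -1.0, 1.0, -1.0]
print(binary_dot(inputs, weights))  # 1 - 2 + 3 - 4 = -2.0
```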
Roman Rumin retweeted
ProxylessNAS is available on PyTorch Hub. It takes only two lines of code to use: https://pytorch.org/hub/pytorch_vision_proxylessnas/ The search code is also open-sourced on GitHub.
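The "two lines" look roughly like this (a sketch based on the linked PyTorch Hub page; the entry-point name `proxyless_cpu` is my assumption — check the hub page for the exact variant names, and note the call downloads pretrained weights over the network):

```python
import torch

# Load a pretrained ProxylessNAS model from PyTorch Hub (downloads weights).
# 'proxyless_cpu' is an assumed entry point; the hub page lists the variants.
model = torch.hub.load('mit-han-lab/ProxylessNAS', 'proxyless_cpu', pretrained=True)
```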
Roman Rumin retweeted
Our work on the Visual Wake Words Challenge is highlighted by Google. The technique we used is #ProxylessNAS https://arxiv.org/pdf/1812.00332.pdf #AIoT #EdgeAI https://twitter.com/TensorFlow/status/1189972556692021248
Roman Rumin retweeted
“The MIT-IBM researchers designed a temporal shift module, which gives the model a sense of time passing w/out explicitly representing it. In tests, the method was able to train the deep-learning, video recognition AI 3x faster than existing methods.”
@MITIBMLab @IBMResearch #AI https://twitter.com/engadget/status/1181827225273995264
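The temporal-shift idea can be illustrated with a toy version (my own simplification; the real module operates on 4D activation tensors inside a 2D CNN): a small fraction of channels is shifted one frame into the past and another fraction one frame into the future, so each frame's features mix information across time at zero extra multiply cost.

```python
def temporal_shift(x, shift_frac=0.25):
    """x: list over time of channel vectors (T x C).
    Shift the first group of channels toward earlier frames and the
    second group toward later frames; remaining channels are untouched.
    Vacated slots are zero-filled."""
    T, C = len(x), len(x[0])
    k = max(1, int(C * shift_frac // 2))  # channels shifted in each direction
    out = [row[:] for row in x]
    for t in range(T):
        for c in range(k):            # pull features from the next frame
            out[t][c] = x[t + 1][c] if t + 1 < T else 0.0
        for c in range(k, 2 * k):     # pull features from the previous frame
            out[t][c] = x[t - 1][c] if t - 1 >= 0 else 0.0
    return out

frames = [[1.0, 10.0, 100.0], [2.0, 20.0, 200.0], [3.0, 30.0, 300.0]]
# channel 0 pulls from the next frame, channel 1 from the previous frame,
# channel 2 stays in place
print(temporal_shift(frames))
```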
Roman Rumin retweeted
Our approach incrementally learns a mixture latent space, incorporating dynamic expansion to capture new concepts, and mixture generative replay to avoid forgetting previous ones. Work by
@drao64 @FrancescoVisin @andreialexrusu @yeewhye @rpascanu @RaiaHadsell
Roman Rumin retweeted
Humans perform “mental time travel” across memories for goal-directed decisions. Our new algorithm, also based on episodic memory retrieval, enables AI agents to perform long-term credit assignment. Paper: https://www.nature.com/articles/s41467-019-13073-w Code: https://github.com/deepmind/tvt pic.twitter.com/1E73Fe0x6Z
I like the way Yoshua Bengio (and others) do the data processing: they take a few steps with RIMs instead of one simple pass. They are still exploring this idea, and the follow-up work will be very interesting to watch. https://twitter.com/MILAMontreal/status/1204891142191534081
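The RIM idea referred to here can be caricatured in a few lines (my own toy sketch, not MILA's code): the state is split into independent modules, and on each processing step only the top-k modules most relevant to the current input are allowed to update, while the rest stay fixed.

```python
def rim_step(states, inp, relevance, k=2):
    """states: dict module-name -> scalar state.
    relevance: module-name -> score of this input for that module.
    Only the k highest-scoring modules update; the others keep their state."""
    active = sorted(relevance, key=relevance.get, reverse=True)[:k]
    return {name: (s + inp if name in active else s)
            for name, s in states.items()}

states = {"a": 0.0, "b": 0.0, "c": 0.0}
# two processing steps: different modules activate for different inputs
for inp, rel in [(1.0, {"a": 0.9, "b": 0.5, "c": 0.1}),
                 (1.0, {"a": 0.2, "b": 0.8, "c": 0.7})]:
    states = rim_step(states, inp, rel)
print(states)  # {'a': 1.0, 'b': 2.0, 'c': 1.0}
```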
Roman Rumin retweeted
Our paper HoloGAN was accepted to #ICCV2019! We show that HoloGAN automatically learns a disentangled 3D representation from natural images. NO pose labels, NO 3D shapes, NO multiple views, ONLY 2D images! https://www.monkeyoverflow.com/#/hologan-unsupervised-learning-of-3d-representations-from-natural-images/ Video: https://youtu.be/z2DnFOQNECM pic.twitter.com/xtrKqy77x6
Roman Rumin retweeted
Meta Reinforcement Learning is good at adapting to very similar environments. But can we meta-learn general RL algorithms? Our new approach MetaGenRL is able to. With @vansteenkiste_s and @SchmidhuberAI. Paper: https://arxiv.org/abs/1910.04098 Blog: http://louiskirsch.com/metagenrl
Roman Rumin retweeted
We introduce Dreamer, an RL agent that solves long-horizon tasks from images purely by latent imagination inside a world model. Dreamer improves over existing methods across 20 tasks. Paper: https://arxiv.org/pdf/1912.01603.pdf Code: https://github.com/google-research/dreamer pic.twitter.com/K5DnooVIUH
Roman Rumin retweeted
Dota 2 with Large Scale Deep Reinforcement Learning https://cdn.openai.com/dota-2.pdf via @OpenAI #Dota2 pic.twitter.com/yT0GLiTv56
Roman Rumin retweeted
Really proud to see #AlphaStar published in @nature. Playing the full game of StarCraft II with a pro-approved interface, the system ranked higher than 99.8% of all players – a fantastic achievement! Read our paper here: https://rdcu.be/bVI7G https://twitter.com/DeepMind/status/1189617587916689408
Roman Rumin retweeted
Sometimes we were unaware that our robot was partially broken, because the neural network could compensate for it. The model worked just fine with broken fingers or defective sensors.
Roman Rumin retweeted
Cool to see the discussion of our multiagent work at the top of /r/programming: https://twitter.com/reddit_progr/status/1186836737148014593?s=20