Tweets
Thanh Nguyen Retweeted
Check out our extensive review paper on normalizing flows! This paper is the product of years of thinking about flows: it contains everything we know about them, and many new insights. With
@eric_nalisnick, @DeepSpiker, @shakir_za, @balajiln. http://arxiv.org/abs/1912.02762 Thread
https://twitter.com/DeepSpiker/status/1202868429780336640
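For context on what the review formalizes: a normalizing flow builds a complex density by pushing a simple base density p_Z through an invertible map f, and everything rests on the change-of-variables identity below (standard notation, mine rather than the paper's):

```latex
\log p_X(x) = \log p_Z\!\big(f^{-1}(x)\big) + \log \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right|
```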
Thanh Nguyen Retweeted
A surprising deep learning mystery: contrary to conventional wisdom, the performance of unregularized CNNs, ResNets, and transformers is non-monotonic: it improves, then gets worse, then improves again with increasing model size, data size, or training time. https://openai.com/blog/deep-double-descent/ pic.twitter.com/Zdox9dbIBv
Thanh Nguyen Retweeted
Go over the topic of over-parameterization. Gain an explanation for the landscape connectivity of low-cost solutions for multilayer nets, see a proof that explores the fundamental reason behind it, and learn about the memorization capacity of ReLU networks: https://aka.ms/AA6jrhh
Thanh Nguyen Retweeted
New blog post: "A Recipe for Training Neural Networks" https://karpathy.github.io/2019/04/25/recipe/ - a collection of attempted advice for training neural nets, with a focus on how to structure that process over time
Thanh Nguyen Retweeted
I wrote my first blog about my recent paper with John Hopfield. https://twitter.com/IBMResearch/status/1113113666298281989
Thanh Nguyen Retweeted
I've decided to share my slide deck detailing my ... concerns about the F measure. tl;dr: just don't. pic.twitter.com/AsKB93sx8K
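For readers who want the definition being argued against: the F-measure combines precision P and recall R as a (weighted) harmonic mean; the textbook form is below, not taken from the slides.

```latex
F_\beta = (1 + \beta^2)\,\frac{P \cdot R}{\beta^2 P + R}, \qquad F_1 = \frac{2 P R}{P + R}
```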
Thanh Nguyen Retweeted
Good news! TensorBoard now works in Jupyter Notebooks, via "%" magic commands that match the command line. Example: https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/r2/tensorboard_quickstart.ipynb pic.twitter.com/SceivzfrTJ
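The linked quickstart boils down to two notebook magics. A minimal sketch, assuming training code has already written summaries to a local `logs` directory (the directory name is a placeholder):

```python
# In a Jupyter/Colab cell: load the TensorBoard notebook extension once per session.
%load_ext tensorboard

# After event files have been written to ./logs
# (e.g. via a tf.keras TensorBoard callback), render the dashboard inline:
%tensorboard --logdir logs
```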
Thanh Nguyen Retweeted
Really proud to share "What is torch.nn, really?", which starts from a neural net written from scratch and refactors it step by step using all the key concepts in `torch.nn`. If you want to really understand how neural nets work in @PyTorch, start here! https://pytorch.org/tutorials/beginner/nn_tutorial.html
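The end state of that tutorial is a model expressed with `nn.Module` plus a functional loss. A minimal sketch in that style (layer sizes are placeholders for flattened MNIST images, not copied from the tutorial):

```python
import torch
import torch.nn.functional as F
from torch import nn

class MnistLogistic(nn.Module):
    """Hand-rolled weights refactored into an nn.Module with nn.Linear."""
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(784, 10)      # parameters registered automatically

    def forward(self, xb):
        return self.lin(xb)                # logits for 10 classes

model = MnistLogistic()
xb = torch.randn(64, 784)                  # fake batch of flattened images
yb = torch.randint(0, 10, (64,))           # fake labels
loss = F.cross_entropy(model(xb), yb)      # functional loss instead of a hand-written one
loss.backward()                            # gradients land on model.parameters()
```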
Thanh Nguyen Retweeted
Neural Ordinary Differential Equations .... blog explains: https://blog.acolyer.org/2019/01/09/neural-ordinary-differential-equations/
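The core idea of the paper the post walks through: replace a discrete stack of layers with a hidden state defined by an ODE whose dynamics are a learned network, so the forward pass becomes a call to an ODE solver:

```latex
\frac{d h(t)}{d t} = f\big(h(t), t, \theta\big), \qquad
h(t_1) = h(t_0) + \int_{t_0}^{t_1} f\big(h(t), t, \theta\big)\, dt
```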
Thanh Nguyen Retweeted
Could you also consider taking a look at "fastprogress", our recent replacement for tqdm, which has some nice extra features (see the readme) and avoids some of tqdm's bugs: https://github.com/fastai/fastprogress
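A rough usage sketch of the nested-bar API the readme describes (epoch and batch counts are placeholders; check the repo for the current interface):

```python
from time import sleep
from fastprogress.fastprogress import master_bar, progress_bar

mb = master_bar(range(3))                               # outer bar over epochs
for epoch in mb:
    for batch in progress_bar(range(100), parent=mb):   # inner bar over batches
        sleep(0.01)                                      # stand-in for a training step
    mb.write(f"finished epoch {epoch}")                  # persistent line printed under the bars
```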
Thanh Nguyen Retweeted
Our paper on the role of over-parametrization in the generalization of neural nets has been accepted to
#ICLR2019: https://openreview.net/forum?id=BygfghAcYX We have also released our code: https://github.com/bneyshabur/over-parametrization This is joint work with Zhiyuan Li, Srinadh Bhojanapalli, @ylecun and Nati Srebro. pic.twitter.com/etznQfmxi1
Attending #NeurIPS18 from home via the Facebook livestream: https://m.facebook.com/nipsfoundation/

Thanh Nguyen Retweeted
“So you want to be a Research Scientist” by Vincent Vanhoucke @GoogleAI:
• You will spend a career working on things that don’t work
• Your work will be obsolete the minute you publish it
• Your entire career will largely be measured by 1 number (H-Index)
https://medium.com/@vanhoucke/so-you-want-to-be-a-research-scientist-363c075d3d4c
Thanh Nguyen Retweeted
Great tips for beginning grad students - Twenty things I wish I’d known when I started my PhD https://www.nature.com/articles/d41586-018-07332-x @LucyATaylor
Thanh Nguyen Retweeted
Finally learned about using Einstein notation to flexibly multiply and sum across axes of high-dimensional arrays in
#numpy using np.einsum() - very powerful! I found this tutorial very helpful: http://ajcr.net/Basic-guide-to-einsum/
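A few of the patterns that tutorial covers, to show what the subscript notation buys you (examples are mine):

```python
import numpy as np

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
batch = np.random.rand(8, 3, 4)

C = np.einsum('ij,jk->ik', A, B)         # matrix product, equivalent to A @ B
row_sums = np.einsum('ij->i', A)         # sum over columns for each row
swapped = np.einsum('bij->bji', batch)   # transpose the last two axes of every batch item
frob = np.einsum('ij,ij->', A, A)        # sum of elementwise products (squared Frobenius norm)
```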
Thanh Nguyen Retweeted
Gradient Descent Provably Optimizes Over-parameterized (single hidden layer ReLU) Neural Networks (trained with l2 loss, assuming random init and non-degenerate data): https://arxiv.org/abs/1810.02054
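For readers skimming the feed, the setting in this line of work is roughly a width-m two-layer ReLU network trained on n points with squared loss; this is a paraphrase in my own notation, so see the paper for the exact assumptions:

```latex
f(W, a, x) = \frac{1}{\sqrt{m}} \sum_{r=1}^{m} a_r\, \sigma\!\left(w_r^{\top} x\right), \qquad
L(W) = \frac{1}{2} \sum_{i=1}^{n} \big(f(W, a, x_i) - y_i\big)^2, \quad \sigma(u) = \max(u, 0)
```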
Thanh Nguyen Retweeted
Neural Processes in
@PyTorch: Short blog post explaining “Neural Processes” (https://arxiv.org/abs/1807.01622), the connection to VAEs, and shortcomings of the approach (with suggestions to make it work better). https://chrisorm.github.io/VAE-pyt.html pic.twitter.com/at95vlyg7j
Thanh Nguyen Retweeted
In order to incentivize and measure progress towards the goal of zero confident classification errors in
#MachineLearning models, we're announcing the Unrestricted Adversarial Examples Challenge. Learn how to participate in the blog post below! http://goo.gl/uNzZPo
Thanh Nguyen Retweeted
Here's my attempt at connecting the dots for some
#GAN and #VAE papers presented at #icml2018: https://medium.com/peltarion/generative-adversarial-nets-and-variational-autoencoders-at-icml-2018-6878416ebf22
Thanh Nguyen Retweeted
“Variational Inference: A Review for Statisticians” https://arxiv.org/pdf/1601.00670.pdf is an excellent introduction to variational inference from
@blei_lab — highly recommended for those looking to learn!
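The central quantity in that review is the evidence lower bound (ELBO), maximized over a family of approximate posteriors q(z); its standard form is:

```latex
\mathrm{ELBO}(q) = \mathbb{E}_{q(z)}\big[\log p(x, z)\big] - \mathbb{E}_{q(z)}\big[\log q(z)\big]
                 = \log p(x) - \mathrm{KL}\big(q(z)\,\|\,p(z \mid x)\big)
```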